Dataset fields: entry_id (string, length 33), published (string, length 14), title (string, 17–188 chars), authors (sequence), primary_category (string, 5–18 chars), categories (sequence), text (string, 2–629k chars).
http://arxiv.org/abs/2307.04104v1
20230709055200
lcs4Foam -- An OpenFOAM Function Object to Compute Lagrangian Coherent Structures
[ "Constantin Habes", "Alexandra von Kameke", "Mohammed Elwardi Fadeli", "Holger Marschall" ]
physics.flu-dyn
[ "physics.flu-dyn", "physics.app-ph", "76-04, 76-10", "I.6.3; I.6.6; J.2" ]
lcs4Foam – An OpenFOAM Function Object to Compute Lagrangian Coherent Structures ^1Mathematical Modeling and Analysis, Technical University of Darmstadt, 64287 Darmstadt, Germany [email protected], [email protected], [email protected] ^2Department of Mechanical Engineering and Production Management, Hamburg University of Applied Sciences, 20099 Hamburg, Germany [email protected] To facilitate the understanding and to quantitatively assess the material transport in fluids, a modern characterisation method rooted in dynamical systems theory has emerged in fluid dynamics over the last decades. It allows one to examine the most influential material lines, called Lagrangian Coherent Structures (LCS), which order the material transport into dynamically distinct regions at large scales that resist diffusion or mixing. LCS reveal the robust skeleton of material surfaces and are essential to assess material transport in time-dependent flows quantitatively. Candidates of LCS can be estimated and visualised from finite-time stretching and folding fields by calculating the Finite-Time Lyapunov Exponents (FTLE). In this contribution, we provide an OpenFOAM function object to compute FTLE during CFD simulation. This enables the OpenFOAM community to assess the geometry of the material transport in any flow quantitatively on-the-fly using principally any OpenFOAM flow solver. August 12, 2023 § INTRODUCTION Material transport and mixing in fluids is enhanced by advection. This advection is usually described mathematically in an Eulerian view by a time-dependent velocity field 𝐮(𝐱, t). With this Eulerian description, numerous important fluid mechanical characteristics can be derived and assessed. For instance, a higher Reynolds number (higher velocities) will typically go along with better overall mixing. However, such intuition might be misleading, as has been shown for example in <cit.> studying a rising bubble. Here, a coherent structure has been found to arise for intermediate Reynolds numbers, which causes material to move together, locally hinders mixing and increases residence times in the vicinity of the bubble rear. The example shows: a closer look at the coherent structures is necessary to evaluate the details of the material transport in the specific flow situation. Lagrangian Coherent Structures (LCS) are often observable in fluid flows due to the shape that passive tracers take on, e.g. plankton in the ocean <cit.> or dissolved oxygen in the wake behind a rising bubble. That the classical Eulerian view on advection is not optimal for addressing these issues was first noted in oceanography and atmospheric science <cit.>. The transport analysis was therefore restarted from its roots, the Lagrangian view, where the observer travels on the fluid parcels rather than watching them move by (Eulerian frame). The Lagrangian analysis thus considers the trajectories of individual fluid parcels and allows one to draw conclusions on the transport from their evaluation. Nowadays, computational and theoretical advances allow for the calculation and analysis of the time-dependent dynamical system that governs material transport. The underlying ideas for Lagrangian analysis stem from dynamical systems theory. In time-independent incompressible velocity fields, the dynamical system is the velocity field itself and the streamlines of the velocity field coincide with the trajectories of the fluid parcels.
As such, trivially, structures in the velocity fields represent governing structures for the material that is transported (as long as molecular diffusion is comparably low and negligible) <cit.>. In this setting, unstable and stable manifolds divide the flow into different subdomains that move coherently (together) <cit.>. For time-dependent flows however, the instantaneous streamlines and the trajectories of the fluid parcels do not coincide. It is thus a misleading habit to draw any conclusion about the material transport from the streamlines or any other material lines of the mean velocity field of a fluid flow. The resulting transport structures might have no relevance for the real dynamical system at all. To obtain the lines that govern material transport in time-dependent flows the Lagrangian Coherent Structures are calculated from the trajectories of particles evaluated in the time-dependent velocity field 𝐮 = 𝐮(𝐱, t). LCS are those material lines and surfaces that separate regions of particles with very different fates or history for the time interval under consideration. Several different approaches to evaluate LCS have been developed during the last years <cit.>. With this contribution we introduce an OpenFOAM function object that calculates the three dimensional Finite Time Lyapunov Exponents (FTLE) on-the-fly based on the general purpose numerical library libcfd2lcs <cit.> with the main computational details explained in <cit.>. The ridges in the FTLE-field are then candidates for LCS and can be assumed to coincide with LCS if some further conditions are met <cit.>. However, as also pointed out in <cit.>, these additional conditions are hard to evaluate in 3D and thus the FTLE-field will be viewed as an approximate representation of the 3D LCS. The details about the calculation of the FTLE-field and the underlying mathematical foundation are set out in Section <ref>. § THEORETICAL BACKGROUND OF LCS CALCULATIONS From time-resolved CFD simulations, the time-dependent velocity field 𝐮(𝐱, t) is known in space and time. From this information the fluid parcel or passive particle trajectories 𝐱(𝐱_0, t) = 𝐱_0 + ∫_t_0^t𝐮(𝐱(τ), τ) d τ can be calculated, where 𝐱_0 is the starting point of a trajectory in 3D space at a starting time t_0. Note, that each trajectory is now labelled by its start location in space and time. If a set of initially close passive particles is released at the same time the distances between them change over time due to the fluid motion. Passive particles initially forming a tiny sphere will undergo a linear deformation towards an ellipse for short times as would occur in a solid body under stress before it breaks. Certainly, in a fluid, the deformation will progress, and non-linear higher-order terms will play a role in causing stretching and folding which is crucial for mixing. However, as a first approximation and for short times these higher-order terms are neglected for the analysis of the deformation. If we consider infinitesimal spheres of initially close particles around all mesh cell centres of our simulation starting at the same initial time t_0, we obtain a set of different ellipsoids. All these ellipsoids have differently stretched and contracted principal axes which point in different directions at a slightly later time t_1. The principal axes of each ellipse denominate the final directions of maximal stretching (major axis) and maximal contraction (minor axis) of the initially spherical particle blob. 
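The trajectory integral above is what is evaluated numerically for a whole grid of tracers. As a minimal illustration, independent of OpenFOAM and libcfd2lcs, the following NumPy sketch advects a set of seed points through a user-supplied velocity field with a classical fourth-order Runge-Kutta scheme (the function name and interface are ours, chosen for illustration only):

```python
import numpy as np

def advect_tracers(u, x0, t0, t1, dt):
    """Integrate dx/dt = u(x, t) for a set of tracers with classic RK4.

    u  : callable u(x, t) -> array of shape (n, 3), the velocity field
    x0 : array (n, 3) of seeding positions (e.g. the cell centres)
    Returns the final positions x(x0, t1), i.e. one sample of the flow map.
    """
    x, t = x0.copy(), t0
    n_steps = int(np.ceil((t1 - t0) / dt))
    dt = (t1 - t0) / n_steps          # adjust the step so that t1 is hit exactly
    for _ in range(n_steps):
        k1 = u(x, t)
        k2 = u(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = u(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = u(x + dt * k3, t + dt)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x
```

Evaluating this for every cell centre of the seeding grid yields one sample of the flow map introduced below.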
The stretching factor S is the length of the major axis of the final ellipse divided by the initial radius of the sphere. If this stretching factor at each initial grid point is plotted, a 3D stretching field results, revealing the regions at which stretching, and thus particle separation, for the time interval of interest [t_0,t_1] is largest due to the local flow conditions. Normally, the scaled logarithm of this stretching factor, defined by σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log (S), is plotted. This scaled logarithmic stretching factor is called the Finite-Time Lyapunov Exponent <cit.>. Connected areas or lines of large FTLE values characterise the fluid transport as these denote the areas or lines along which deformation, and thus particle separation, is largest. All these geometrical considerations have their mathematical counterparts. The stretching factor as described is the square root of the maximal eigenvalue of the right Cauchy-Green deformation tensor 𝐂_t_0^t_1. This tensor can be calculated for every mesh cell as envisioned above for the ellipsoid. As its name reveals, it includes all the information about the deformation of the fluid masses at this point for the short time interval t_1-t_0. Notably, it is an objective tensor, such that high stretching values and candidates for LCS derived from it persist regardless of the motion of the observer (invariant to a time-dependent translation and rotation of the coordinate system of the observer) <cit.>. The governing ordinary differential equation (ODE) for the evolution of a fluid parcel or a passive particle reads 𝐱̇=𝐮(𝐱(t), t). Therefore, the infinitesimal separation δ𝐱 = 𝐱-𝐱^* between the passive particle, imagined in the centre of an infinitesimal sphere, and a particle on the surface of the sphere is governed by the ODE δ𝐱̇ = ∇𝐮 δ𝐱. The solution of this ODE is an exponential function, which explains why the FTLE is defined as the logarithm of the stretching factor. To analyse the stretching during short but finite time intervals, particles distributed on a mesh are advected with the flow from an initial time t_0 over the time interval T=|t_1-t_0| to t_1. From the integral version of the governing ODE (Eq. <ref>) we obtain the definition of the flow map, Φ_t_0^t_1, which maps all the particles from their initial positions onto their final positions at time t_1, viz. Φ_t_0^t_1: ℝ^n→ℝ^n ; 𝐱_0↦𝐱_0 + ∫_t_0^t_1𝐮(𝐱(τ), τ) d τ. To obtain the separation of two initially close particles after this time interval, a Taylor series δ𝐱(t_1) = Φ_t_0^t_1(𝐱_0 + δ𝐱(t_0)) - Φ_t_0^t_1(𝐱_0) = 𝐃Φ_t_0^t_1(𝐱_0, t_0) δ𝐱(t_0) + 𝒪(‖δ𝐱(t_0)‖^2) around the initial position can be employed. Here, 𝐃Φ_t_0^t_1(𝐱_0, t_0) is the gradient (Jacobian) of the flow map with respect to the initial position and is also the normalised fundamental matrix solution of the equation of variations above (Eq. <ref>) <cit.>. Therefore, the magnitude of the particle separation at time t_1 can be written as ‖δ𝐱(t_1)‖=√(⟨δ𝐱(t_0),[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)] δ𝐱(t_0)⟩). The right Cauchy-Green deformation tensor is then defined as 𝐂_t_0^t_1(𝐱_0, t_0)=[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)]. In this way, the Finite-Time Lyapunov Exponent σ_t_0^t_1 for the time interval t_0 to t_1 can be defined on the basis of this tensor in a more rigorous, mathematical way: σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log√(λ_max(𝐂_t_0^t_1(𝐱_0, t_0))).
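For a flow map sampled on a uniform seeding grid, this definition can be evaluated directly: approximate the flow-map gradient by finite differences, form the right Cauchy-Green tensor and take its largest eigenvalue. The following sketch (plain NumPy, not the libcfd2lcs implementation; the function name and the simple loop-based layout are purely illustrative) makes this concrete:

```python
import numpy as np

def ftle_field(flow_map, spacing, T):
    """Finite-Time Lyapunov Exponent from a sampled flow map.

    flow_map : array (nx, ny, nz, 3); flow_map[i, j, k] is the final position
               of the tracer seeded at grid node (i, j, k)
    spacing  : (dx, dy, dz) of the seeding grid
    T        : |t1 - t0|, the integration time
    """
    nx, ny, nz, _ = flow_map.shape
    ftle = np.zeros((nx, ny, nz))
    # gradient (Jacobian) of the flow map by central differences
    grads = [np.gradient(flow_map[..., c], *spacing, edge_order=2)
             for c in range(3)]           # grads[c][d] = d phi_c / d x_d
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                F = np.array([[grads[c][d][i, j, k] for d in range(3)]
                              for c in range(3)])
                C = F.T @ F               # right Cauchy-Green tensor
                lam_max = np.linalg.eigvalsh(C)[-1]
                ftle[i, j, k] = np.log(np.sqrt(lam_max)) / T
    return ftle
```

Ridges of the resulting scalar field are the LCS candidates discussed above.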
Here λ_max is the maximum eigenvalue of the right Cauchy-Green deformation tensor and can be calculated using standard solvers. In the picture of the small ellipsoid, the square root of the eigenvalue is just the above stretching rate S. § COMPUTATIONAL DETAILS The computation of flow maps within libcfd2lcs is described thoroughly in <cit.>. The following section presents a brief overview of how the computation is done in practice and which different timescales play a role in the calculations. Hereafter, we describe the structure and functionality of the newly developed function object. We will focus on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data. §.§ Numerical flow map computation in libcfd2lcs libcfd2lcs is able to calculate both forward-time and backward-time FTLE fields. However, it uses two very different approaches for calculating the respective flow maps. The general approach used for the computation of the forward time flow-map Φ_t_0^t_0+T and the resulting forward-time FTLE field is very straightforward. A set of tracer particles is initialised on a grid with spacing Δ x_lcs by setting each initial tracer coordinate to the cell centre coordinate of a corresponding mesh cell. Then the flow map at each cell centre is computed by passively advecting these tracers with the flow, which mathematically corresponds to an integration of equation d 𝐱/d t=𝐮(𝐱, t) over the time interval T. Numerically this integration is done by utilising Runge-Kutta methods, with step size Δ t_lcs. The time and space dependent velocity field 𝐮(𝐱, t) results from the specific fluid simulation under consideration and is passed to libcfd2lcs after each simulation time step Δ t_sim (see Section <ref>). In order to save the flow map Φ_t_0^t_0+T, the location of each particle after the integration is stored at its initial position. As the evaluation of FTLE fields, indicating LCS candidates, is mainly relevant for time-dependent flows, it is often important to animate their evolution. At first glance, this would mean that a sequence of large particle sets would have to be integrated, requiring a great amount of computation. This problem is solved using a method developed by Brunton and Rowley <cit.>. With this method a flow map of the interval T can be constructed from a sequence of k flow maps over a smaller interval h, where T=kh. Following the notation of <cit.> this can be expressed as Φ_t_0^t_0+T=Φ_t_0+(k-1)h^t_0+kh∘⋯∘Φ_t_0+h^t_0+2h∘Φ_t_0^t_0+h . In practical terms, this means that the particle grid is reinitialised for every new time interval h after which they are advected again with the flow. Then the sub-step flow map is stored and the complete flow map is constructed when all needed sub-step flow maps are available. It is important to note that since a discrete particle grid is used for the sub-step flow map computation, interpolation of the sub-step flow maps is needed in order to match the trajectories at different timelevels when reconstructing the flow map Φ_t_0^t_0+T (see <cit.> for more details). A different approach for constructing the backward-time flow maps is used. This is due to the fact that using the Lagrangian approach would require to store all computed velocity fields in the subset interval h before the integration of the tracers from t_0+h to t_0 could be done backward in time. 
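The composition of sub-step flow maps, and the interpolation it requires, can be summarised in a few lines. The sketch below (NumPy/SciPy, illustrative only and not part of libcfd2lcs) chains k stored sub-step maps into the time-T map; the same chaining idea is reused for the backward-time maps discussed next:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def compose_flow_maps(sub_maps, grid_axes):
    """Reconstruct the flow map over T = k*h from k sub-step flow maps
    (Brunton & Rowley style composition).

    sub_maps  : list of k arrays, each (nx, ny, nz, 3); sub_maps[i] stores, at
                every seeding grid node, the position reached after the i-th
                sub-step interval of length h
    grid_axes : tuple (x, y, z) of 1-D arrays of the seeding grid coordinates
    """
    pos = sub_maps[0].copy()
    for sub in sub_maps[1:]:
        # interpolate each component of the next sub-step map at the current
        # tracer positions to chain the trajectories across time levels
        interps = [RegularGridInterpolator(grid_axes, sub[..., c],
                                           bounds_error=False, fill_value=None)
                   for c in range(3)]
        pts = pos.reshape(-1, 3)
        pos = np.stack([f(pts) for f in interps], axis=-1).reshape(pos.shape)
    return pos
```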
Although this already includes Brunton's and Rowley's method for the flow map construction, the Lagrangian approach would be "cumbersome and resource intensive" <cit.>. Therefore, libcfd2lcs uses an Eulerian approach for the flow map computation proposed by Leung <cit.>. In contrast to the forward-time flow map, the backward-time flow map Φ_t_0+T^t_0 describes for each grid point where a particle, that is at that point at time t_0+T, originally was at time t_0. With Leung's Eulerian approach this backward-time flow map at time t_0+T is computed by initialising a vector field Ψ(𝐱, t_0) on a grid with the cell centre coordinates at time t_0. The advection of this so called "takeoff coordinate field" in an Eulerian reference frame is then described by the level set equation ∂Ψ(𝐱, t)/∂ t+(𝐮·∇) Ψ(𝐱, t)=0 . Solving this equation over the time Interval [t_0, t_0+T] in forward time gives Ψ(𝐱, t_0+T), which represents the takeoff coordinates of a Lagrangian particle at t_0 reaching 𝐱 at time t_0+T. Thus, the backward-time flow map Φ_t_0+T^t_0 is equivalent to Ψ(𝐱, t_0+T). libcfd2lcs solves equation (<ref>) by using a semi-Lagrangian advection approach with the time step size Δ t_lcs = c_cfl Δ x_lcs/𝐮(𝐱, t) of this procedure being restricted by the CFL condition c_cfl < 1 (see <cit.> and <cit.> for more details). Furthermore, Brunton's and Rowley's flow map construction method is also applied to the backward-time flow maps computed with the Eulerian method. Hence, the takeoff coordinate field is reinitialised after every sub-step time interval h and the backward-time flow map Φ_t_0+T^t_0=Φ_t_0+h^t_0∘Φ_t_0+2h^t_0+h∘⋯∘Φ_t_0+kh^t_0+(k-1)h is constructed form k sub-step backward-time flow maps. Since a lot of different timescales are relevant in the practical FTLE field computation described above, we try to differentiate and order them in the following, before describing the structure and functionality of the newly developed function object in the next section. The basis of the on-the-fly LCS evaluation is a parallel running simulation that provides the velocity fields. Here, three intervals are of interest (see Fig. <ref>): the overall simulation time that spans from the simulation start time t_sim_start to the simulation end time t_sim_end, the time step size of the simulation Δ t_sim and the write time interval of the simulation results Δ t_sim_write. The computed velocity fields represent a fluid flow for which a reference timescale Δ t_ref can be identified. This reference timescale characterises the dominant hydrodynamic timescale of the flow and is typically larger than the simulation time step size. In order to save computing resources the LCS evaluation of the simulated flow does not necessarily have to start and end at the same time as the simulation. Therefore, a separate start and end time for the LCS evaluation denoted as t_lcs_start and t_lcs_end can be defined (see Fig. <ref>). During the LCS evaluation, a series of FTLE fields are computed. These FTLE fields are calculated from time T flow maps, which themselves are calculated as described earlier in this section. This means storing and constructing the time T flow maps from multiple sub-step flow maps after each LCS sub-step integration interval h. Calculating the sub-step flow maps in turn requires to numerically solve the equations (<ref>) or (<ref>) using the finite time step Δ t_lcs. While Δ t_lcs is set automatically according to equation (<ref>) and a specified CFL number, T and h have to be defined by the user. 
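A single update of the takeoff-coordinate field in the Eulerian manner described above might look as follows. This is a first-order semi-Lagrangian sketch with hypothetical names, not the scheme actually implemented in libcfd2lcs: each grid node is traced back along the local velocity and Ψ is interpolated at the departure point.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def advance_takeoff_field(psi, grid_axes, u, t, dt):
    """One semi-Lagrangian step of d(psi)/dt + (u . grad) psi = 0.

    psi       : (nx, ny, nz, 3) current takeoff-coordinate field
    grid_axes : tuple (x, y, z) of 1-D arrays of the rectilinear LCS grid
    u         : callable u(points, t) -> (N, 3) velocities at those points
    """
    X, Y, Z = np.meshgrid(*grid_axes, indexing='ij')
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    # trace each grid node back along the local velocity (departure points)
    departure = pts - dt * u(pts, t)
    new_psi = np.empty_like(psi)
    for c in range(3):
        interp = RegularGridInterpolator(grid_axes, psi[..., c],
                                         bounds_error=False, fill_value=None)
        new_psi[..., c] = interp(departure).reshape(psi.shape[:3])
    return new_psi
```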
In order to detect all LCS candidates, T is usually chosen to be larger than Δ t_ref of the investigated flow <cit.>. With the aim of animating the evolution of the FTLE field, h is typically set significantly smaller than Δ t_ref while being in the order of magnitude of Δ t_sim_write. §.§ Structure and functionality of the function object In general, function objects can be used to generate additional data at runtime of the simulation. In doing so, function objects can access data generated by the flow solver at runtime, which offers a great advantage over classical post-processing since it can only utilise the stored fields or logged information. The newly developed function object incorporates the functionalities of libcfd2lcs into OpenFOAM at runtime while acting as an interface between both. This is achieved by processing the data generated by OpenFOAM and the subsequent exchange of this data via the libcfd2lcs API (see <cit.> for a detailed description of the libcfd2lcs API). The calculation of the flow maps, the calculation of the resulting FTLE fields and the subsequent saving of these fields is completely handled by libcfd2lcs. The basic task of the function object is to pass the cell centre position vectors of the computational grid as well as the velocity field calculated by OpenFOAM to libcfd2lcs. Due to the very strict data structure requirements of libcfd2lcs this is not a trivial task. libcfd2lcs can only use static rectlinear grids for the calculation of forward-time and backward-time flow maps and therefore needs the velocity fields on these grids. This means that the mesh and velocity data has to be globally organised in an (i, j, k) structured format <cit.>. Since the LCS evaluation should also be available for simulations on moving grids with general topology and adaptive grid refinement, the function object offers several possibilities to deal with this problem. In the simplest case, where the simulation mesh is already a static rectlinear mesh, the function object does not need to process the grid and velocity data, but can directly transfer it to libcfd2lcs as basic C++ arrays. This is the preferred method when the flow domain can be represented by a static rectlinear mesh and e.g. immersed boundary methods are used. If a moving mesh, a mesh of general topology or adaptive mesh refinement is used for the simulation a different approach is needed in order to prepare the data for its use in libcfd2lcs. Here, an additional static rectlinear mesh needs to be constructed in the preprocessing step, which can be done e.g. by using the utility. This mesh has to contain the region for which the LCS diagnostic should be performed, meaning that it can cover the whole simulation domain as well as only a part of it. However, since libcfd2lcs also requires boundary conditions for the FTLE field calculations, the boundary patches of the additional LCS mesh must be set accordingly. The user can choose between , , , and the generic patch types which the function objects translates into the corresponding libcfd2lcs boundary types. Then, during runtime, the velocity fields are mapped from the simulation mesh of general topology to the static rectlinear LCS mesh, from which the data can again be transferred to libcfd2lcs as basic C++ arrays. Although this implies that interpolation errors are made during the mapping process, the LCS evaluation is hardly affected by this. Haller showed in <cit.> that LCS are very robust against errors in the velocity field. 
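Outside of OpenFOAM, the mapping step can be mimicked with generic scattered-data interpolation. The sketch below (SciPy-based, with made-up function names; it is not the mesh-to-mesh mapping machinery the function object actually uses) transfers cell-centre velocities from a general simulation mesh onto the cell centres of a rectilinear LCS grid:

```python
import numpy as np
from scipy.interpolate import griddata

def map_velocity_to_lcs_grid(sim_centres, sim_velocity, lcs_centres):
    """Map cell-centre velocities onto a static rectilinear LCS grid.

    sim_centres  : (n_sim, 3) cell-centre coordinates of the CFD mesh
    sim_velocity : (n_sim, 3) velocity vectors at those centres
    lcs_centres  : (n_lcs, 3) cell-centre coordinates of the rectilinear grid
    """
    mapped = np.column_stack([
        griddata(sim_centres, sim_velocity[:, c], lcs_centres, method='linear')
        for c in range(3)])
    # fall back to nearest-neighbour values where linear interpolation leaves
    # gaps (e.g. LCS cells outside the convex hull of the simulation mesh)
    nan_rows = np.isnan(mapped).any(axis=1)
    if nan_rows.any():
        mapped[nan_rows] = np.column_stack([
            griddata(sim_centres, sim_velocity[:, c], lcs_centres[nan_rows],
                     method='nearest')
            for c in range(3)])
    return mapped
```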
Also, the additional computational overhead due to the mapping can be neglected compared to the overhead caused by the flow map computations. The function object also implements a third approach in which no additional LCS mesh is needed. This approach utilises the ability to construct complex, moving mesh geometries out of simple unconnected mesh regions in OpenFOAM with the approach. Using this approach the function object can utilise any specified static rectlinear mesh region of the for the LCS evaluation, meaning that the background mesh as well as any other static rectlinear mesh region can be used. In doing so, the function object extracts the mesh and velocity data from the specified mesh region of the and passes it to libcfd2lcs analogously to the previous approaches. Here the type patches are generally passed on as inlet or outlet, as they are treated the same by libcfd2lcs. As libcfd2lcs also uses the domain decomposition approach and MPI for the parallelisation of the computations, the integration within the parallelisation of OpenFOAM is done in a straightforward manner. The local subdomains of the rectlinear LCS mesh and its velocity data are passed to libcfd2lcs together with an offset, which describes the position of the cell data in the globally (i, j, k) structured data array (see Fig. <ref>). For the MPI communication, the same MPI communicator as used for OpenFOAM is shared with libcfd2lcs. Therefore, the function object can be used for simulations running in parallel or serial. However, if the approach involving an additional LCS mesh is used, special attention is required for the domain decomposition in the preprocessing step. Here the simulation mesh, as well as the LCS mesh, must be cut along the same surfaces to make sure that the mapping of the velocity fields from one mesh to the other works properly. As already mentioned, the data output of the flow-map and FTLE field data is completely handled by libcfd2lcs. This is due to the fact that the data output interval defined by h can differ from the solver write interval Δ t_sim_write (see section <ref>). Therefore, the results generated by the function object are not stored in corresponding time directories but in a separate folder in the case directory called . Additionally, a directory named is created inside of which all the sub-step data is stored. All data is stored in the Tecplot ASCII data file format (*.dat) and therefore can be visualised in ParaView when opened with its internal Tecplot reader or other common visualisation programs. In addition to this data, the computational overhead generated by the use of the function object with respect to the actual simulation is also output in the solver log file after each simulation time step. This enables the user to examine the computational costs of the LCS evaluation. § EXAMPLES OF USAGE In this section a few examples are presented which are designed to show the functionality and capabilities of the function object. Therefore, example cases are presented in which only a rectlinear simulation mesh, a separate simulation and LCS mesh and a single are used. §.§ Steady ABC flow The Arnold-Beltrami-Childress (ABC) flow is an exact periodic solution of the Euler equations and is often used in the literature to verify LCS calculation methods. Therefore this case is also being reviewed here. 
The velocity field 𝐮=∇×[-Ψ𝐤+∇×(Φ𝐤)] of the ABC flow can be described using 2 scalar potentials Ψ and Φ <cit.> which themselves are defined as Ψ=-[C sin (y)+B cos (x)] Φ=A[-x cos (z)+y sin (z)]-Ψ . In (<ref>), 𝐤 can be any unit vector but is commonly chosen to be the vertical unit vector. This leads to the three expressions of the velocity components u=A sin (z)+C cos (y) v=B sin (x)+A cos (z) w=C sin (y)+B cos (x) . The parameters A, B and C can be freely selected and influence the properties of the ABC flow. In order to create comparability with literature values, A=0.5, B=0.8, C=0.8 is chosen. In order to test the newly developed function object on this flow configuration a dedicated ABC flow OpenFOAM solver was written. This solver does not solve the Euler equations in the usual sense, but sets the velocity components on a given computational mesh according to (<ref>). Due to the periodicity of the flow solution, the dimensions of the computational mesh used in this case setup are specified as x,y,z ∈ [0,2π] with a mesh size of 100×100×100. Since the described mesh is rectlinear no additional LCS mesh is used. Again for reasons of comparability, a LCS integration time of T=10 s is selected for the LCS evaluation. The results of the LCS evaluation, both in forward- and backward-time, can be seen in Figure <ref>. In these results the FTLE ridges, which indicate the LCS candidates in the ABC flow, can be seen very clearly. Furthermore, the results agree very well with the results from <cit.>, both qualitatively and quantitatively, which suggests that the new function object calculates the FTLE ridges reliably. §.§ Time dependent double gyre Another frequently used flow for the verification of LCS computing algorithms is the time periodic Rayleigh-Bénard convection flow, or often called double gyre, proposed by Solomon and Gollub <cit.>. The velocity field of this flow can be describe by using a stream function ψ u=-∂ψ/∂ y v=∂ψ/∂ x . Here ψ is defined by ψ(x, y, t)=A sin (π f(x, t)) sin (π y) with f(x, t)=a(t) x^2+b(t) x a(t)=ϵsin (ω t) b(t)=1-2 ϵsin (ω t) This leads to the expressions for two-dimensional velocity components u=-π A sin (π f(x)) cos (π y) v=π A cos (π f(x)) sin (π y) d f/ d x . As the name double gyre suggests, this model defines the flow of two two-dimensional gyres enclosed in a rectangle which expand and contract periodically along the x-axis. Therefore, the periodic motion is controlled by ϵ if ϵ≠ 0. Then ϵ describes approximately how far the line separating the gyres moves to the left or right from its centre position <cit.>. Otherwise (ϵ=0), no periodic motion is happening. Furthermore, A specifies the magnitude of the velocity vectors and ω/2π determines the oscillation frequency of the gyres. Similar to the ABC flow example, a dedicated OpenFOAM solver was written for this case, which sets the velocity field on a given computational mesh according to (<ref>). For comparability, a mesh with the same specifications as in <cit.>,<cit.> and <cit.> was used. It has the dimensions [0,2]×[0,1]×[0,0.1]m and a resolution of 512×256×1 cells. As this mesh is also static and rectlinear no additional LCS mesh was used. For the mathematical model of the flow the parameter values are chosen to be ϵ=0.1, A=0.1 m s^-1 and ω=2π/10 s. Since the oscillation frequency is known, the hydrodynamic time scale can be easily determined by t_ref=2π/ω=10 s. As described in section <ref>, the LCS integration time interval T should be set larger than t_ref. 
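Both analytical test flows can be reproduced in a few lines, which is essentially what the dedicated solvers mentioned above do on the OpenFOAM side. The NumPy sketch below is illustrative only; the parameter defaults correspond to the values quoted in the text (A=0.5, B=C=0.8 for the ABC flow; A=0.1 m/s, ε=0.1, ω=2π/10 s⁻¹ for the double gyre):

```python
import numpy as np

def abc_velocity(x, y, z, A=0.5, B=0.8, C=0.8):
    """Steady Arnold-Beltrami-Childress velocity components."""
    u = A * np.sin(z) + C * np.cos(y)
    v = B * np.sin(x) + A * np.cos(z)
    w = C * np.sin(y) + B * np.cos(x)
    return u, v, w

def double_gyre_velocity(x, y, t, A=0.1, eps=0.1, omega=2 * np.pi / 10):
    """Time-dependent double-gyre velocity components (Solomon & Gollub)."""
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * eps * np.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v
```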
Therefore, it is set to T=1.5· t_ref= 15 s. Figure <ref> shows the forward- and backward-time FTLE fields at t= 15 s of the previously described double gyre flow. Again, the results match very well with the results from <cit.>,<cit.> and <cit.>. This confirms that the function object is able to calculate the correct FTLE fields from velocity fields generated by OpenFOAM. §.§ Flow around cylinder As it has already been shown in the previous examples that the function object can calculate the correct FTLE fields from velocity fields provided by OpenFOAM, this example will focus on how to deal with non-rectlinear simulation meshes. For this purpose, a standard flow problem is selected that is very well suited for an LCS evaluation: the flow around an infinitely long cylinder. The general case setup contains a fluid domain with size [-20,30]×[-20,20]×[-0.5,0.5]m that surrounds a cylinder with diameter D=2m and its centre axis at x=y=0m. The free-stream velocity and the fluids kinematic viscosity are set to 𝐮^ T=(1 0 0)m s^-1 and ν = 0.01m^2s^-1, respectively. This results in a Reynolds number of Re=200 which indicates that vortex shedding behind the cylinder occurs in a barely laminar regime. If we also assume a Strouhal number of St=0.2 at Re=200, the hydrodynamic time scale of this flow is t_ref=D/(u·St)=10s. Because of the cylinder in the middle of the domain, a computational mesh discretising this domain is no longer rectlinear. Therefore, we consider two different procedures in the LCS evaluation, the first of which is carried out in two different ways. Starting with the procedure where an additional rectlinear computational mesh is used for the LCS evaluation, the flow domain is discretised with a simulation mesh consisting of 9200 hexahedra (see upper left mesh in Fig. <ref>). The flow solver that is used to simulate the previously described flow from t=0s to t=120s is with the initial conditions being calculated by . The first additional LCS mesh that is used within this procedure encloses the whole flow domain (see upper right mesh in Fig. <ref>). In order to minimise the loss of information during the mapping of the velocity fields between the two grids, the resolution of the LCS mesh is chosen in a way that it corresponds approximately to the finest resolution in the simulation mesh. This leads to a LCS mesh with 200×160×1 hexahedra. The boundary patch types are set to for the left and right patch (inlet,outlet), to for the bottom and top patch and to for the front and back patch. The LCS integration time T is again based on t_ref and is set to T=1.5· t_ref=15s. For a good animation of the dynamics of the FTLE fields h is chosen to be h=T/10=1.5s. The results of the forward- and backward-time FTLE fields can be seen in Fig. <ref>. They show how the vortices behind the cylinder form large coherent structures, where the FTLE ridges of the backward-time FTLE fields separate different fluid packages that do not mix in the vortex street. Since the FTLE ridges only appear in a fraction of the overall domain and the LCS evaluation is computation-wise a quite costly operation, a second LCS mesh is prepared. This second LCS mesh is a lot smaller than the first one and encloses only the fraction of the flow domain where the FTLE ridges are expected to show up (see Fig. <ref>). The boundary patches on the smaller LCS mesh and its spacial resolution are also set analogous to its bigger counterpart, leading to a LCS mesh of size [-13,27]×[-7.5,7.5]×[-0.5,0.5] containing 160×60×1 hexahedra. 
Repeating the computations with the use of the smaller LCS mesh gives the results which are displayed in Fig. <ref> and are found to match with the results from the bigger LCS mesh. This shows that the LCS evaluation, when done with a separate LCS mesh, can be used in a very targeted way. The advantages this brings in terms of computational costs are discussed after considering the second procedure for the LCS evaluation of this flow problem. The second procedure, which can be used on problems where no single static rectlinear mesh can be constructed, utilises OpenFOAM's functionalities. With regard to the flow problem considered here, an is constructed with the same dimensions as the simulation mesh used previously. It consists of three mesh zones, namely a rectlinear background mesh zone that spans the whole fluid domain, another finer and smaller mesh zone that is used for a finer resolution of the flow and a cylindrical mesh that surrounds the cylinder (see Fig. <ref>). For comparability reasons the finer rectlinear mesh zone has the same dimensions and resolution as the smaller additional LCS mesh considered previously and is therefore specified as the cell zone for the LCS evaluation. Also all other LCS evaluation settings are adopted. The only difference to the previously considered simulations is the used flow solver. Here the flow solver is due to the used . The resulting forward- and backward-time FTLE fields of this simulation can be found in Fig. <ref>. They match with the results from the previously considered procedure which shows that both approaches can be used equally well. The only thing that stands out are the high FTLE values along some boundaries in the studied solutions. These occur because of the way libcfd2lcs handles its inlet and outlet boundary conditions. It fixes out-flowing Lagrangian particles/takeoff coordinates on "open" boundaries and cannot generate new in-flowing particles during the flow map computation. Therefore, high FTLE values occur in the forward-time FTLE fields at "open" boundaries where inflow occurs, since there the most "stretching" happens. Vice versa, high FTLE values occur in the backward-time FTLE fields at "open" boundaries where outflow occurs, since there the most "folding" happens. These high values at "open" boundaries are just artefacts and have to be neglected. The reason they appear more in the approach is that all type patches are passed to libcfd2lcs as "open" boundaries whereas the user can specify all patches problem dependent in the additional LCS mesh approach. Looking at the computation times of the flow calculations including the LCS evaluation, it becomes evident that LCS evaluation is a very costly operation (see Tab. <ref>). When using the "large" additional LCS mesh the simulation takes approximately 30 time longer than without the LCS evaluation. This can be improved by using the smaller additional LCS mesh. Here the simulation takes 9 times longer than without the LCS evaluation. Since the costs for the LCS evaluations are almost independent of the underlying simulation for a constant grid size, this factor becomes smaller and smaller for more complex simulations. This can also be seen from the fact that the factor is only 2.5 when the approach is used because the computations of the pressure and velocity fields take longer on an . 
At this point, however, it must be emphasised that the flow considered here is not a highly complex problem, which can also be seen from the simulation time of 1.5 min on a normal mesh and 8 min on an . § SUMMARY & CONCLUSION We provide an OpenFOAM function object based on libcfd2lcs to compute Finite-Time Lyapunov Exponent (FTLE) fields that indicate candidates of Lagrangian Coherent Structures (LCS) and allow one to visualise finite-time stretching and folding fields. LCS reveal the robust skeleton of material surfaces and are key to assessing material transport in time-dependent flows quantitatively. This enables the OpenFOAM community to assess the geometry of the material transport in any flow quantitatively on-the-fly using principally any OpenFOAM flow solver. Focusing on the practical aspects, we only give a brief overview of the mathematical foundation as well as of how the computation is done in practice. We describe the structure and functionality of the newly developed function object. Further focus is laid on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data. From validation of the presented function object using simple benchmark problems, a notable computational overhead has been recognised. However, if LCS evaluations are used for much more complex problems than the ones used here, the relative computational overhead drops significantly and the LCS evaluation no longer accounts for the largest proportion of the computation time. Nevertheless, the user should be aware that the calculation of FTLE fields is expensive and should therefore think carefully about the size and position of the LCS mesh. In addition, consideration should also be given to whether both forward- and backward-time FTLE calculations are required or if one of them is sufficient.
http://arxiv.org/abs/2307.04977v1
20230711023343
Model-Driven Sensing-Node Selection and Power Allocation for Tracking Maneuvering Targets in Perceptive Mobile Networks
[ "Lei Xie", "Shenghui Song", "Yonina C. Eldar" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Model-Driven Sensing-Node Selection and Power Allocation for Tracking Maneuvering Targets in Perceptive Mobile Networks Lei Xie, Member, IEEE, Shenghui Song, Senior Member, IEEE, and Yonina C. Eldar, Fellow, IEEE L. Xie and S. Song are with the Department of Electronic and Computer Engineering, the Hong Kong University of Science and Technology, Hong Kong. e-mail: ({eelxie, eeshsong}@ust.hk). Y. C. Eldar is with the Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 7610001, Israel (e-mail: [email protected]). August 12, 2023 Maneuvering target tracking will be an important service of future wireless networks to assist innovative applications such as intelligent transportation. However, tracking maneuvering targets by cellular networks faces many challenges. For example, the dense network and high-speed targets make the selection of the sensing nodes (SNs), e.g., base stations, and the associated power allocation very difficult, given the stringent latency requirement of sensing applications. Existing methods have demonstrated engaging tracking performance, but with very high computational complexity. In this paper, we propose a model-driven deep learning approach for SN selection to meet the latency requirement. To this end, we first propose an iterative SN selection method by jointly exploiting the majorization-minimization (MM) framework and the alternating direction method of multipliers (ADMM). Then, we unfold the iterative algorithm as a deep neural network (DNN) and prove its convergence. The proposed model-driven method has a low computational complexity, because the number of layers is less than the number of iterations required by the original algorithm, and each layer only involves simple matrix-vector additions/multiplications. Finally, we propose an efficient power allocation method based on fixed point (FP) water filling (WF) and solve the joint SN selection and power allocation problem under the alternative optimization framework. Simulation results show that the proposed method achieves better performance than the conventional optimization-based methods with much lower computational complexity. Maneuvering target tracking, perceptive mobile network, model-driven deep learning, sensing node selection, power allocation. § INTRODUCTION Innovative applications such as intelligent transportation systems require high-precision sensing capabilities, which are unavailable from current cellular networks. To this end, the recently proposed integrated sensing and communication (ISAC) paradigm offers a promising way to share spectrum, hardware, and software between sensing and communication <cit.>. The perceptive mobile network (PMN) was proposed as a special type of ISAC system that adds high-precision sensing capability to the cellular networks <cit.>. There are many favorable properties of cellular networks that can facilitate sensing. For instance, the large number of sensing nodes (SNs) in PMNs enables collaborative sensing, where multiple perspectives from different SNs are exploited to sense the same target.
The SNs can be base station (BS) <cit.>, road side units <cit.>, remote radio unit <cit.>, or target monitoring terminal <cit.>. However, tracking maneuvering targets by PMNs faces many challenges. For example, due to the dense cellular network, selecting a proper set of SNs to track a moving target can be very difficult, because the handover from one group of SNs to another faces very stringent latency requirements. There have been engaging results on SN selection and power allocation for tracking maneuvering targets <cit.>. The authors of <cit.> proposed two SN selection methods in wireless networks to minimize the posterior Cramér-Rao lower bound (PCRLB) and maximize the mutual information between the target location and the measurements of the selected SNs, respectively. In <cit.>, a cooperative game theoretic approach was utilized to allocate power for tracking targets in a radar network. The authors of <cit.> proposed two strategies for resource allocation with given SNs, where one maximizes the tracking accuracy with limited power budgets, and the other minimizes the power consumption with required tracking performance. To achieve better performance, the joint SN selection and power allocation schemes were also considered <cit.>. In <cit.>, a distributed multi-target tracking method was proposed for the networked multiple-input multiple-output (MIMO) radar system, where an alternative optimization (AO)-based method was utilized to solve the bi-variable optimization problem. The boolean constraint on the SN selection vector is one of the most critical challenges for the joint SN selection and power allocation problem. To handle this issue, a typical method is to relax the boolean constraint to allow continuous and sparse variables <cit.>. In <cit.>, the relaxed SN selection was formulated as a semi-definite programming (SDP) problem and solved by the CVX toolbox <cit.>. Unfortunately, the complexity of the existing methods increases exponentially with the number of SNs, which may violate the stringent latency requirement of sensing applications when a large number of SNs exist. To this end, model-driven deep learning (DL) offers a promising solution. By unfolding an iterative algorithm as a neural network where each iteration is implemented by one layer with learnable parameters, model-driven methods have the potential to offer better performance with reduced computational complexity. Some research efforts have been made to utilize model-driven deep neural networks (DNNs) to find sparse solutions for better performance and lower computational costs. In <cit.>, an unfolded vector-approximate message passing network with random initialization was proposed to learn a denoiser identical to the statistically matched one. The authors of <cit.> unfolded the iterative algorithm, used to solve a problem with l_0 sparse regularization, to be a feed-forward neural network for faster inference and better scalability. In <cit.>, a generalized DNN was proposed to learn a sparse solution by unfolding the alternating direction method of multipliers (ADMM) with better accuracy and lower computational cost. The authors of <cit.> designed an ADMM-Net for interference removal in radar imaging, which exhibited much lower imaging error and computational cost than ADMM and CVX. However, the inverse of high-dimensional matrices are involved in the existing ADMM-based unfolding methods, which causes high storage and computational cost. 
In this paper, to meet the stringent latency requirement of sensing applications, we propose a model-driven method for SN selection to track multiple maneuvering targets. For that purpose, we first derive an iterative algorithm for SN selection, leveraging the majorization-minimization (MM) framework and ADMM. Then, the MM-ADMM algorithm is unfolded into a DNN where the technical challenges lie in the large number of learnable parameters and the uncertain convergence property. To this end, we design a new model-driven DNN with an additional module to exploit the first- and second-order momentum, and refer to it as deep alternating network (DAN), which has fewer learnable parameters than the directly-unfolded MM-ADMM. The convergence proof of the proposed DAN is also given. The computational complexity of DAN is low, because the number of layers is less than the number of iterations required by the original algorithm, and each layer of DAN only involves simple matrix-vector additions/multiplications without high-dimensional matrix inverse. Finally, we propose a fixed-point (FP) water-filling (WF)-based method for power allocation, which is derived based on the Lagrange multiplier method. The joint SN selection and power allocation problem is solved by combining the proposed DAN and FP-WF algorithms under the AO framework. Experiment results show that the proposed method can achieve better performance than the optimization-based methods with remarkably lower computational costs. The contributions of this paper are summarized as follows: * We propose an iterative method based on MM and ADMM for SN selection. In particular, we exploit the MM approach to handle the non-convexity of the penalized cost functions. For each iteration of ADMM, we derive explicit expressions for the solution to the constrained optimization problem by exploiting the KKT conditions, which facilitate the development of the model-driven method. * We design a new model-driven DNN, named DAN, by adding an additional module to the directly-unfolded MM-ADMM method, which exploits the momentum for accelerating the convergence. Moreover, we provide the convergence proof for DAN, which achieves a similar SN selection performance as the exhaustive searching method with significantly lower computational cost. * Inspired by the classic WF-based power allocation strategies, we propose an iterative FP-WF power allocation method. Specifically, in each water-filling step, the water level is obtained by solving an FP equation. This approach not only reduces the computational complexity, but also provides an interesting physical insight: the power allocation strategy depends on the ratio between the Fisher information of the predictions and the measurements. The remainder of this paper is organized as follows. Section II introduces the system model and formulates the problem. Section III derives the joint SN selection and power allocation algorithm. Section IV provides the simulation results to validate the advantage of the proposed model-driven method. Section V concludes this paper. § SYSTEM MODEL AND PROBLEM FORMULATION In Fig. <ref>, we show a PMN consisting of one BS serving as the sensing signal transmitter and N SNs serving as the receivers for the echoes, which can be BSs or other types of SNs <cit.>. In each tracking frame, the BS will transmit sensing signals to the predicted positions of multiple targets, and the selected SNs will collaboratively estimate the location and velocity of the targets (motion state). 
The estimation results will be utilized to predict the motion state in the next tracking frame[The tracked targets are initialized and the number of the targets is known in advance. This assumption can be realized by communication or some available detection approaches, e.g., radio access technology <cit.>, PDA <cit.> or multi-frame detection <cit.> before target tracking. The targets are widely separated and each of them moves independently in the monitoring area <cit.>.]. In this paper, the SN selection and power allocation will be formulated as an optimization problem to minimize the PCRLB for the estimation error of the target motion state. To this end, we first introduce the target motion model and the signal model, which are the foundation for deriving the PCRLB. §.§ Target Motion Model The target motion model describes the motion behavior of the targets and affects the Fisher information of the prediction. Assume that the target motion follows a near constant velocity model and the transition matrix 𝐆 is given by <cit.> 𝐆=𝐈_2⊗[ 1 Δ T 0 1 ] where 𝐈_2 denotes the 2× 2 identity matrix, ⊗ represents the Kronecker product, and Δ T denotes the time between two adjacent tracking frames. In the kth tracking frame, there are Q point-like targets, where the qth target is located at 𝐫_q^(k)=(r_x,q^(k),r_y,q^(k)) with a velocity 𝐯_q^(k)=(v_x,q^(k),v_y,q^(k)). The target motion state is updated by 𝐱_q^(k) = 𝐆𝐱_q^(k-1)+ 𝐳_q^(k-1), where 𝐱_q^(k) = [r_x,q^(k),v_x,q^(k),r_y,q^(k),v_y,q^(k)]^ includes the parameters to be estimated. Here, 𝐳_q^(k-1) denotes the state noise, which is assumed to be a zero-mean Gaussian vector with covariance matrix <cit.> 𝐐=q_s 𝐈_2⊗[1/3(Δ T)^3 1/2(Δ T)^2 1/2(Δ T)^2 Δ T ] where q_s is the intensity of the process noise. §.§ Signal Model In the kth tracking frame, the BS will transmit the sensing signal 𝐬^(k)(t) to the targets, and the echoes will be captured by the selected SNs for sensing purposes. The location of the BS and the nth SN is given by 𝐫_BS and 𝐫_n, respectively. Given the motion state, we can determine the measurements, i.e., the angle of arrival (AOA), the time delay, and the Doppler frequency of the q-th target with respect to the n-th SN as θ_q,n^(k) =arccos𝐞_n^(𝐫_q^(k)-𝐫_n)/‖𝐫_q^(k)-𝐫_n‖, τ_q,n^(k)=1/c(‖𝐫_n-𝐫_q^(k)‖+‖𝐫_BS-𝐫_q^(k)‖), μ_q,n^(k)=𝐯_q^(𝐫_q^(k)-𝐫_n)/λ‖𝐫_q^(k)-𝐫_n‖ + 𝐯_q^(𝐫_q^(k)-𝐫_BS)/λ‖𝐫_q^(k)-𝐫_BS‖ , where 𝐞_n represents the unit vector parallel to the line formed by all antennas of the uniform linear array, c is the speed of light, λ is the wavelength, and ||·|| denotes the l_2 norm. Define the power allocation vector 𝐩^(k)=[p_1^(k),⋯,p_Q^(k)]∈ℝ^Q× 1, where p_q^(k) denotes the power allocated to the qth target. The baseband echo of the qth target received by the nth SN is given by 𝐲_q,n^(k)(t) =√(p_q^(k))β_q,n^(k) e^j2πμ_q,n^(k)t𝐛_q,n^(k)𝐚_q,k^𝐬^(k)(t-τ_q,n^(k)) +𝐧_n^(k)(t), where 𝐧_n^(k)(t) denotes the complex additive white Gaussian noise with zero mean and variance σ^2. The transmit and receive steering vectors are given by 𝐛_q,n^(k)=𝐛(θ_q,n^(k)) and 𝐚_q,k=𝐚(ψ_q^(k)), respectively, where ψ_q^(k) represents the angle of departure (AOD) of the qth target from the BS. β_q,n^(k) represents the complex gain of the BS-target-SN (qth target and nth SN) path, which accounts for the array gain, the propagation loss and the target radar cross section (RCS) <cit.>. 
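The motion and measurement models above translate directly into code. The following NumPy sketch is our own illustration, with hypothetical function names and 2-D position/velocity vectors; it builds the transition matrices 𝐆 and 𝐐 and evaluates the AOA, bistatic delay and Doppler of one target seen by one SN:

```python
import numpy as np

def transition_matrices(dT, q_s):
    """Near-constant-velocity model: transition matrix G and process-noise
    covariance Q for the state x = [r_x, v_x, r_y, v_y]."""
    G = np.kron(np.eye(2), np.array([[1.0, dT],
                                     [0.0, 1.0]]))
    Q = q_s * np.kron(np.eye(2), np.array([[dT**3 / 3, dT**2 / 2],
                                           [dT**2 / 2, dT]]))
    return G, Q

def measurements(x, r_sn, r_bs, e_n, lam, c=3e8):
    """AOA, bistatic delay and Doppler of one target at one SN."""
    r, v = np.array([x[0], x[2]]), np.array([x[1], x[3]])
    d_sn, d_bs = r - r_sn, r - r_bs
    theta = np.arccos(e_n @ d_sn / np.linalg.norm(d_sn))        # AOA
    tau = (np.linalg.norm(d_sn) + np.linalg.norm(d_bs)) / c     # delay
    mu = (v @ d_sn / np.linalg.norm(d_sn)
          + v @ d_bs / np.linalg.norm(d_bs)) / lam              # Doppler
    return np.array([theta, tau, mu])
```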
Following <cit.>, the local estimation error is modeled as a zero-mean Gaussian vector with the covariance matrix Σ_q,n^(k)=[σ_θ_q,n^(k)^2,σ_τ_q,n^(k)^2,σ_μ_q,n^(k)^2], where σ_θ_q,n^(k)^2, σ_τ_q,n^(k)^2, and σ_μ_q,n^(k)^2 denote the CRLBs for the estimation of the direction, range, and Doppler shift, respectively. The local estimation error affects the Fisher information of measurement, which will be utilized to derive the PCRLB in the next section. §.§ Posterior Cramér-Rao Lower Bound Based on the above-mentioned target motion model and signal model, we will derive the PCRLB, which gives the lower bound of the estimation error for the target motion state. Define 𝐔^(k)=[𝐮_1^(k),⋯,𝐮_Q^(k)]∈ℝ^N_BS× Q as the SN selection matrix, whose (n,q)th entry u_q,n^(k) is 1 if the qth target is associated with the nth SN. The Fisher information matrix (FIM) for the qth target is given by <cit.> 𝐉_q^(k)(p_q^(k),𝐮_q^(k))=𝐉_P,q^(k)+𝐉_Z,q^(k), where 𝐉_P,q^(k) and 𝐉_Z,q^(k) denote the prior and data information matrix, respectively. In particular, the prior information matrix is given by 𝐉_P,q^(k)=(𝐐+𝐆 (𝐉_q^(k-1))^-1𝐆^)^-1. The data information matrix 𝐉_Z,q^(k) is given by 𝐉_Z,q^(k)=∑_n=1^N u_q,n^(k)(𝐇_q,n^(k))^ (Σ_q,n^(k))^-1𝐇_q,n^(k), where 𝐇_q,n^(k)= ∂𝐠_n^(k)/∂𝐱_q^(k)|_𝐱_q^(k)=𝐱̂_q^(k|k-1), with ∂𝐠_n^(k)/∂𝐱_q^(k) denoting the derivative of the measurements 𝐠_n^(k)=[θ_q,n^(k)(𝐱_q^(k)),τ_q,n^(k)(𝐱_q^(k)),μ_q,n^(k)(𝐱_q^(k))]^ with respect to the motion state 𝐱_q^(k). The predicted motion state of the qth target in the kth frame is updated by 𝐱̂_q^(k|k-1)=𝐆𝐱̂_q^(k-1), where 𝐱̂_q^(k-1) represents the estimated motion state of the qth target in the (k-1)th frame. Note that Σ_q,n^(k) is inversely proportional to the SNR at the SN <cit.>. Thus, we can rewrite the measurement covariance in (<ref>) as Σ_q,n^(k) =(p_q^(k))^-1Σ̅_q,n^(k), where Σ̅_q,n^(k) contains the part of Σ_q,n^(k) that is independent of p_q^(k). Then, we have 𝐉_Z,q^(k)= p_q^(k)∑_n=1^N u_q,n^(k)𝐌_q,n^(k), where 𝐌_q,n^(k)=(𝐇_q,n^(k))^ (Σ̅_q,n^(k))^-1𝐇_q,n^(k). Note that 𝐌_q,n^(k)=p_q^(k)𝐌_q,n^(k) denotes the measurement information for the qth target at the nth SN. The inverse of the derived FIM yields the PCRLB matrix, i.e., <cit.> 𝐂_q(p_q^(k),𝐮_q^(k))=(𝐉_q^(k)(p_q^(k),𝐮_q^(k)))^-1. The diagonal elements of 𝐂_q(p_q^(k),𝐮_q^(k)) provide a lower bound on the variances of the estimation error of an unbiased estimator for the target motion state, i.e., 𝔼((𝐱̂_q^(k)-𝐱_q^(k))(𝐱̂_q^(k)-𝐱_q^(k))^)≽𝐂_q(p_q^(k),𝐮_q^(k)), where 𝐀≽𝐁 indicates 𝐀-𝐁 is a positive-semidefinite matrix. Some functions of the diagonal elements of the PCRLB matrix, e.g., the trace <cit.> and the determinant <cit.>, have been used as the performance metric for target sensing and tracking. §.§ Problem Formulation We want to minimize the PCRLB through SN selection and power allocation. In the kth frame, the problem is modeled as min_𝐩^(k),𝐔^(k) ∑_q=1^Q log𝐂_q(p_q^(k),𝐮_q^(k)) s.t. ∑_q=1^Q p_q^(k)≤ P_T, p_q^(k)≥ P_min, 1^𝐮_q^(k)≤ N_max,q=1,2,⋯,Q, 𝐔^(k)∈{0,1}^N× Q, where constraint (<ref>) limits the total transmit power. Constraint (<ref>) indicates the minimum power allocated to each target, constraint (<ref>) limits the maximum number of SNs to track one target <cit.>, and (<ref>) gives the binary constraint on 𝐮_q^(k). 
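Given the Jacobians 𝐇_q,n and the power-normalised measurement covariances, the FIM and PCRLB of one target follow by direct assembly. The sketch below (illustrative NumPy, not the authors' implementation) mirrors the equations above: prior information propagated from the previous frame, data information scaled by the allocated power and masked by the selection vector, and the log-determinant of the inverse as the tracking metric:

```python
import numpy as np

def pcrlb(J_prev, G, Q, u, p, H_list, Sigma_bar_list):
    """Posterior CRLB of one target in frame k.

    J_prev         : 4x4 FIM of the previous frame
    u              : length-N 0/1 SN selection vector
    p              : power allocated to this target
    H_list         : list of N measurement Jacobians (3x4)
    Sigma_bar_list : list of N power-normalised measurement covariances (3x3)
    """
    # prior information: J_P = (Q + G J_prev^{-1} G^T)^{-1}
    J_P = np.linalg.inv(Q + G @ np.linalg.inv(J_prev) @ G.T)
    # data information: J_Z = p * sum_n u_n H_n^T Sigma_bar_n^{-1} H_n
    J_Z = sum(u_n * H.T @ np.linalg.inv(S) @ H
              for u_n, H, S in zip(u, H_list, Sigma_bar_list))
    J = J_P + p * J_Z
    C = np.linalg.inv(J)                   # PCRLB matrix
    return C, np.linalg.slogdet(C)[1]      # metric: log det C
```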
The main reasons to select log(𝐂_q) as the performance metric include: 1) the determinant of 𝐂_q is proportional to the volume of the minimum achievable covariance ellipsoid, which is widely used as an important metric for parameter estimation <cit.>; and 2) if the determinant is directly used, the original problem (<ref>) is not convex, but the monotonic logarithmic transformations can render this problem convex. § MODEL-DRIVEN SENSING NODE SELECTION AND POWER ALLOCATION SCHEME Note that the problem in (<ref>) has two variables. To handle this issue, we propose to update the variables alternatively based on the AO theory. With a given feasible starting point {𝐩^(k,0), {𝐮_q^(k,0)}_q=1^Q }, we iteratively perform the following two operations: 1) updating {𝐮_q^(k,j+1)}_q=1^Q with fixed 𝐩^(k,j) via 𝐮_q^(k,j+1)= min_𝐮_q^(k)log𝐂_q(p_q^(k,j),𝐮_q^(k)), 2) updating 𝐩^(k,j+1) with fixed {𝐮_q^(k,j+1)}_q=1^Q via 𝐩^(k,j+1)=min_𝐩^(k)∑_q=1^Q log𝐂_q(p_q^(k),𝐮_q^(k,j+1)), which decouple the SN selection and power allocation problem. In the following, we will first derive an iterative method for SN selection by jointly exploiting the MM framework and ADMM. To further reduce the computational complexity, we will develop a model-driven approach to solve (<ref>). Finally, we will propose an FP-based WF method to solve (<ref>), which has much lower complexity but offers comparable performance as the traditional CVX-based method. §.§ MM-ADMM based Sensing Node Selection Given 𝐩^(k,j), the problem in (<ref>) can be formulated as min_𝐮_q^(k) ℱ_u(𝐮_q^(k)) s.t. 1^𝐮_q^(k)≤ N_max, 𝐮_q^(k)∈{0,1}^N× 1, where ℱ_u(𝐮_q^(k))=log𝐂_q(𝐮_q^(k)|p_q^(k,j)). In order to enforce a binary solution and simplify the problem, we introduce a l_0 pseudo-norm penalty to the objective function and relax the binary constraint <cit.>. Then, the problem in (<ref>) is relaxed as min_𝐮_q^(k) ℱ_u(𝐮_q^(k))+ρ_q‖𝐮_q^(k)‖_0 s.t. 1^𝐮_q^(k)≤ N_max, 0≤𝐮_q^(k)≤1, where ‖·‖_0 denotes the l_0 pseudo-norm. In general, a larger ρ_q leads to a sparser 𝐮_q^(k). Due to the non-convex, non-continuous, and combinatorial nature of the l_0 pseudo-norm, the problem (<ref>) is NP-hard. To simplify the notation, we omit the index q hereafter unless doing so creates confusion. Inspired by <cit.>, we approximate the l_0 pseudo-norm by a function 𝒫_γ(𝐮^(k))=∑_n=1^N(1-e^-γ u_n^(k)), where γ is a sufficiently large constant. 𝒫_γ(𝐮^(k)) is utilized due to several favorable properties: 1) it is asymptotically equivalent to ‖𝐮^(k)‖_0, i.e., lim_γ→∞𝒫_γ(𝐮^(k))=∑_n=1^N(1-δ(u_n^(k)))=‖𝐮^(k)‖_0; 2) it is continuous, concave, and non-decreasing in the feasible set; and 3) it is differentiable and its gradient is easy to obtain. §.§.§ MM framework for solving (<ref>) The problem in (<ref>) can be approximated by min_𝐮^(k)∈𝒮_u ℱ_u(𝐮^(k))+ρ𝒫_γ(𝐮^(k)) where 𝒮_u={𝐮^(k)|1^𝐮^(k)= N_max,0≤𝐮^(k)≤1}. Though 𝒫_γ(𝐮^(k)) is continuous w.r.t. 𝐮^(k), the problem in (<ref>) is still hard to solve, due to the complicated form of ℱ_u(𝐮^(k)) w.r.t. 𝐮^(k). To handle this difficulty, we propose to utilize the MM framework <cit.>, based on which (<ref>) can be solved in an iterative process. At each iteration, the MM framework updates the optimization variable by minimizing a tight upperbound of the function, which is known as the surrogate function. Then, The next question is how to construct a surrogate function for the objective function in (<ref>). 
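The smoothed l_0 surrogate and its gradient, which reappear in the MM and ADMM steps below, are cheap to evaluate. The short NumPy sketch below (illustrative only) also shows numerically how 𝒫_γ approaches the l_0 pseudo-norm as γ grows:

```python
import numpy as np

def smoothed_l0(u, gamma):
    """Smooth surrogate of the l0 pseudo-norm and its gradient."""
    val = np.sum(1.0 - np.exp(-gamma * u))
    grad = gamma * np.exp(-gamma * u)
    return val, grad

# for growing gamma the surrogate approaches the number of non-zero entries
u = np.array([0.0, 0.3, 0.0, 1.0])
for g in (10.0, 100.0, 1000.0):
    print(g, smoothed_l0(u, g)[0])   # tends to 2.0, the l0 pseudo-norm of u
```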
Since 𝒫_γ(𝐮^(k)) is differentiable and concave with respect to 𝐮^(k), it is upperbounded by its first-order Taylor expansion, i.e., 𝒫_γ(𝐮^(k))≤𝒫_γ(𝐮^(k)|𝐮^(k,l)) ≜𝒫_γ(𝐮^(k,l)) + (𝐝_γ^(k,l))^ (𝐮^(k)-𝐮^(k,l)), where 𝐮^(k,l) denotes the optimized result at the lth iteration, 𝐝_γ^(k,l)=γ[e^-γ u_1^(k,l),e^-γ u_2^(k,l),⋯,e^-γ u_N^(k,l)]^ represents the gradient of 𝒫_γ(𝐮^(k)), and u_n^(k,l) denotes the nth entry of 𝐮^(k,l). An appropriate upperbound of ℱ_u(𝐮^(k)) can be obtained by 𝒢_1(𝐮^(k)|𝐮^(k,l))≜ℱ_u(𝐮^(k,l))+𝐝_u^(𝐮^(k,l))(𝐮^(k)-𝐮^(k,l)) +1/2(𝐮^(k)-𝐮^(k,l))^𝐓^(k,l)(𝐮^(k)-𝐮^(k,l)), where 𝐝_u^(k,l)= 𝐝_u(𝐮^(k,l)) and 𝐝_u(𝐮^(k))=∂ℱ_u(𝐮^(k))/∂𝐮^(k) denotes the gradient of ℱ_u(𝐮^(k)) w.r.t. 𝐮^(k), whose nth entry is given by d_u,n(𝐮^(k))=∂ℱ_u(𝐮^(k))/∂ u_n^(k)=-((𝐉^(k)(𝐮^(k)|p^(k,j)))^-1𝐌_n^(k)). The positive-definite matrix 𝐓^(k,l) should satisfy 𝐓^(k,l)≽𝐇_u(𝐮^(k,l)), where 𝐇_u(𝐮^(k))=∂ℱ_u(𝐮^(k))/∂𝐮^(k)∂(𝐮^(k))^ denotes the Hessian matrix of ℱ_u(𝐮^(k)) w.r.t. 𝐮^(k), whose (m,n)th entry is given by H_u,m,n(𝐮^(k))=∂ℱ_u(𝐮^(k))/∂ u_m^(k)∂ u_n^(k)= 𝐌_m^(k)(𝐉^(k)(𝐮^(k)|p^(k,j)))^-2𝐌_n^(k). Then, at the (l+1)th iteration, the selection vector can be updated by solving the problem min_𝐮^(k)∈𝒮_u 𝒢(𝐮^(k)), where the surrogate function 𝒢(𝐮^(k)) is defined by 𝒢(𝐮^(k))=𝒢_1(𝐮^(k)|𝐮^(k,l))+ρ𝒫_γ(𝐮^(k)|𝐮^(k,l)). The problem in (<ref>) is convex and can be solved by using the general CVX toolbox based on the interior point method <cit.>. However, the computational complexity of CVX is about 𝒪(N^3.5), which is not suitable for PMNs with a large N. §.§.§ ADMM-based method for solving (<ref>) To solve (<ref>) efficiently, we exploit the ADMM, which splits the problem into two distinct parts and handles them separately <cit.>. Since (<ref>) is Lipschitz continuous, the convergence of the ADMM can be guaranteed. By introducing an auxiliary variable 𝐯^(k), (<ref>) is equivalent to min_𝐮^(k),𝐯^(k) 𝒢_1(𝐮^(k)|𝐮^(k,l))+ρ𝒫_γ(𝐯^(k)|𝐮^(k,l)) s.t. 1^𝐮^(k)= N_max, 0≤𝐯^(k)≤1, 𝐮^(k)=𝐯^(k), which leads to the augmented Lagrangian function <cit.> ℒ(𝐮^(k),𝐯^(k),𝐳^(k)) =𝒢_1(𝐮^(k)|𝐮^(k,l))+ρ𝒫_γ(𝐯^(k)|𝐮^(k,l)) +ρ_a,l/2‖𝐮^(k)-𝐯^(k)+𝐳^(k)‖^2, where 𝐳^(k) is the dual variable and ρ_a,l is a penalty parameter at the lth iteration. Then, at the mth iteration, the optimization variables are updated as 𝐮_m+1^(k,l) =min_𝐮^(k)ℒ(𝐮^(k),𝐯_m^(k,l),𝐳_m^(k,l)), s.t. 1^𝐮^(k)= N_max, 𝐯_m+1^(k,l)=min_𝐯^(k)ℒ(𝐮_m+1^(k,l),𝐯^(k),𝐳_m^(k,l)), s.t. 0≤𝐯^(k)≤1, 𝐳_m+1^(k,l)=𝐳_m^(k,l)+𝐮_m+1^(k+1,l)-𝐯_m+1^(k+1,l), where 𝐮_m^(k,l), 𝐯_m^(k,l) and 𝐳_m^(k,l) denote 𝐮, 𝐯 and 𝐳 at the mth ADMM iteration, respectively. a) Update 𝐮_m+1^(k,l) via (<ref>): By utilizing the Lagrange multiplier method, (<ref>) can be reformulated as an unconstrained problem, whose Lagrange function is given by ℒ_u(𝐮^(k))=ℒ(𝐮^(k),𝐯_m^(k,l),𝐳_m^(k,l))+ν_l(N_max-1^𝐮^(k)), where ν_l is a Lagrange multiplier. The closed-form solution to (<ref>) is 𝐮_m+1^(k,l)=𝐮^(k,l)-Φ_u^-1(𝐝_m^(k,l)-ν_l1), where Φ_l=𝐓^(k,l)+ρ_a,l𝐈 and 𝐝_m^(k,l)=𝐝_u^(k,l)-ρ_a,l (𝐯_m^(k,l)-𝐳_m^(k,l)). By substituting (<ref>) into the constraint of (<ref>), we have ν_l=N_max-1^𝐮^(k,l)+1^Φ_l^-1𝐝_m^(k,l)/1^Φ_l^-11=1^Φ_l^-1𝐝_m^(k,l)/1^Φ_l^-11, which follows from the fact that N_max=1^𝐮^(k,l). Therefore, the closed-form solution to (<ref>) is given by 𝐮_m+1^(k,l)=𝐮^(k,l)-Φ_l^-1(𝐝_m^(k,l)-1^Φ_l^-1𝐝_m^(k,l)/1^Φ_l^-111). One remaining problem is how to determine Φ_l, which is equivalent to choosing a proper 𝐓^(k,l). 
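A minimal sketch of this closed-form u-update is given below, assuming a diagonal 𝐓^(k,l) (as adopted later) so that Φ_l^-1 reduces to an elementwise division; the helper name and array shapes are illustrative.

```python
import numpy as np

def admm_u_update(u_l, d_u, v, z, T_diag, rho_a):
    """Closed-form u-update of the inner ADMM step, assuming a diagonal T^(k,l).

    u_l    : current MM iterate u^(k,l), shape (N,)
    d_u    : gradient of F_u evaluated at u_l, shape (N,)
    v, z   : current auxiliary and dual ADMM variables, shape (N,)
    T_diag : diagonal of the majorisation matrix T^(k,l), shape (N,)
    rho_a  : ADMM penalty parameter
    """
    phi_inv = 1.0 / (T_diag + rho_a)              # Phi_l^{-1} for a diagonal Phi_l
    d_m = d_u - rho_a * (v - z)
    nu = (phi_inv @ d_m) / np.sum(phi_inv)        # multiplier enforcing 1^T u = N_max
    u_next = u_l - phi_inv * (d_m - nu)           # sum(u_next) equals sum(u_l)
    return u_next
```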
Indeed, it is not difficult to find a matrix 𝐓^(k,l) that satisfies (<ref>), such as 𝐓^(k,l)= 𝐇_u(𝐮^(k,l))+ϵ𝐈, where ϵ is a positive constant to make 𝐓^(k,l) positive definite. However, the matrix inversion of Φ_l is involved in (<ref>) when updating 𝐮_m+1^(k,l), which may be computationally complex due to the large number of SNs. To tackle this issue, 𝐓^(k,l) is desired to be a diagonal matrix. One feasible solution is to make 𝐓^(k,l) proportional to the identity matrix, i.e., <cit.> 𝐓^(k,l)=C_T^(k,l)𝐈, where C_T^(k,l) is a positive constant to satisfy (<ref>). For example, one feasible choice is C_T^(k,l)=λ_max(𝐇_F(𝐮^(k,l))) and λ_max(𝐗) denotes the principle eigenvalue of 𝐗. b) Update 𝐯_m+1^(k,l) via (<ref>): Since (<ref>) is convex, the closed-form solution 𝐯_m+1^(k,l) to (<ref>) can be obtained based on the KKT conditions, whose nth entry is given by v_m+1,n^(k)={[l] v_n, if 0≤v_n≤ 1, 0, if v_n< 0, 1, if v_n> 1, . where v_n denotes the nth entry of 𝐯, given by 𝐯=-ρ/ρ_a,l𝐝_γ^(k,l)+𝐮_m+1^(k)+𝐳_m^(k). The cost function will not increase over the ADMM iteration process given in (<ref>). According to the monotone bounded theorem <cit.>, the iteration will converge to a set of stationary points in the feasible set, denoted by 𝐮_(⋆)^(k), 𝐯_(⋆)^(k), and 𝐳_(⋆)^(k). The selection vector 𝐮^(k,l+1) is updated by 𝐮_(⋆)^(k). The convergence and performance of (<ref>) depend on the selection of 𝐓^(k,l). If 𝐓^(k,l) is selected as the Hessian matrix which is usually not diagonal, (<ref>) is similar to the Newton's descent update with quadratic convergence, but high computational complexity. In (<ref>), 𝐓^(k,l) is selected as a diagonal matrix, i.e., 𝐓^(k,l)=C_T^(k,l)𝐈, and thus the update in (<ref>) moves in the opposite direction of the gradient, which resembles the gradient descent method. With a diagonal 𝐓^(k,l), the computational cost at each ADMM iteration is about 𝒪(N^2), which is much lower than that of CVX. In general, a larger C_T^(k,l) is desired to satisfy (<ref>). However, in this case, the constant C_T^(k,l)+ρ_a,l is inversely proportional to the step size. An aggressive choice of C_T^(k,l) may require more iterations to converge. Meanwhile, the choice of 𝐓^(k,l) suggested in (<ref>) may not be optimal, and a better one within a larger feasible set, i.e., diagonal but not necessarily proportional to the identity matrix, is desired. To this end, we propose to unfold the iterative optimization method as a DNN and tune 𝐓^(k,l) with deep learning. One feasible way is to treat the diagonal elements of 𝐓^(k,l) as the learnable parameters. In this case, the number of learnable parameters is N at each layer, which will be large due to the dense SNs. Moreover, the trained 𝐓^(k,l) may break the convergence condition (<ref>). These issues motivate us to consider another design with three desirable properties: 1) the number of learnable parameters is moderate, 2) the convergence property is guaranteed, and 3) the proposed method will be restricted to first-order methods that only require gradients, since higher-order optimization methods may cost a large amount of computing and storage resource. §.§ Deep-Alternative-Network: DNN Based Sensing Node Selection To derive a DNN with the above-mentioned properties, we unfold the MM-ADMM-based SN selection method and introduce an additional module. The new DNN is called DAN. As shown in Fig. 
<ref>, DAN consists of L cascaded layers with some learnable parameters, where the (l+1)th layer takes the first- and second-order momentum 𝐦̂^(l-1) and 𝐯̂^(l-1), the gradients 𝐝_u^(k,l) and 𝐝_v^(k,l), and the output from the previous layer 𝐮^(k,l) as inputs, and outputs an update 𝐮^(k,l+1). In particular, the (l+1)th layer updates 𝐮_m^(k,l), 𝐯_m^(k,l), and 𝐳_m^(k,l), alternatively, as shown by the blue, green, and orange blocks in Fig. <ref>, respectively. The update of 𝐮_m+1^(k,l) is of the same form as (<ref>). But we make the following two modifications, as shown by the red block in Fig. <ref>: 1) 𝐝_m^(k,l) is constructed as 𝐝_m^(k,l)=𝐦̂_l-ρ_a,l (𝐯_m^(k,l)-𝐳_m^(k,l)), where 𝐦̂_l=β_1,l𝐦̂_l-1+(1-β_1,l)𝐝_u^(k,l). Here, β_1,l=β_1η_1^l where η_1∈ (0,1) and β_1∈ (0,1) denotes a learnable hyper-parameters to avoid the case that the momentum diverges severely. When β_1,l = 0, the first-order momentum 𝐦̂_l reduces to the gradient 𝐝_u^(k,l). In this paper, we define β_1,l=β_1η_1^l with β_1∈ (0,1) and η_1∈ (0,1). The momentum terms caused by non-zero β_1,l may improve the performance significantly, especially in deep learning applications. 2) Φ_l is constructed as Φ_l=𝐓̂^(k,l)+ρ_a,l𝐈, where 𝐓̂^(k,l)≜([√(|v̂_l,1|)/α_1,l,⋯,√(|v̂_l,N|)/α_1,l]), and ρ_a,l=ρ_aη_a^l with η_a^l ∈ (0,1). Here, v̂_l,i denotes the ith entry of the second-order momentum 𝐯̂_l, which is defined by 𝐯̂_l=β_2𝐯̂_l-1+(1-β_2)(𝐝_u^(k,l))^2, where β_2 denotes a constant to control the second-order momentum and α_1,l=α̅_1,l/√(l) with α̅_1,l∈ [α_1^-,α_1^+] representing a set of learnable parameters to control the update step size. Here, the positive constants α_1^- and α_1^+ are the lower and upper bounds of α̅_1,l. We refer to the diagonal element of Φ_l^-1 as the learning rate of this algorithm, whose ith entry is given by ϕ_l,i^-1 = (√(|v̂_l,i|)/α_1,l +ρ_a,l)^-1. Learning rate decay is critical for training neural networks. In the early training stage, a large learning rate can accelerate training and help the network escape spurious local minima. By the end of the iteration, a small learning rate helps the network converge to a local minimum and avoid oscillation. Therefore, we desire a set of ρ_a,l and α_1,l such that, for any l∈{2,⋯,L} and i ∈{1,⋯,N}, we have ϕ_l,i^-1≤ϕ_l-1,i^-1. The updates are inspired by the adaptive momentum (Adam) method <cit.>, i.e., an algorithm for first-order gradient-based optimization. Adam is chosen due to its favorable properties: 1) simple implementation, computationally efficient, and low memory requirements; 2) adaptability to large-scale problems; and 3) adaptation to sparse gradients <cit.>. Based on the adaptive estimates of first- and second-order momentum, we propose a novel construction of 𝐝_m^(k,l) and 𝐓̂^(k,l) as well as its resultant Φ_l, which can meet the constraint in (<ref>) and the diagonal requirement, simultaneously. But different from ADAM, the update has additional terms resulting from the original MM-ADMM and one learnable step size α_1,l to control the iteration process. Compared with training all diagonal elements of 𝐓̂^(k,l), the learnable parameters in the DAN are changed to α̅_1,l and β_1. The total number of learnable parameters over all layers is reduced from L N to L+1. The update of 𝐯_m+1^(k,l) and 𝐳_m+1^(k,l) are the same as (<ref>) and (<ref>), respectively. With given 𝐦̂_l and Φ_l, the Lagrange function ℒ(𝐮^(k),𝐯^(k),𝐳^(k)|𝐦̂_l,Φ_l) defined in (<ref>) will not increase after updating 𝐮_m^(k,l), 𝐯_m^(k,l) and 𝐳_m^(k,l) by (<ref>), (<ref>), and (<ref>), respectively. 
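The momentum and step-size construction of one DAN layer can be summarised by the following sketch (illustrative names and signatures; the trained parameters β_1 and the per-layer step size are assumed to be given). The returned momentum replaces the raw gradient in d_m, and the returned diagonal replaces T_diag + rho_a in the closed-form u-update sketched earlier.

```python
import numpy as np

def dan_layer_scaling(d_u, m_prev, v_prev, l, beta1, eta1, beta2,
                      alpha_bar_l, rho_a, eta_a):
    """Momentum and learning-rate construction of one DAN layer (illustrative).

    d_u         : gradient of F_u at the layer input, shape (N,)
    m_prev      : first-order momentum from the previous layer, shape (N,)
    v_prev      : second-order momentum from the previous layer, shape (N,)
    l           : layer index (1-based)
    beta1, alpha_bar_l : learnable parameters (trained offline)
    eta1, beta2, rho_a, eta_a : fixed hyper-parameters of the network
    """
    beta1_l = beta1 * eta1 ** l
    m_hat = beta1_l * m_prev + (1.0 - beta1_l) * d_u        # first-order momentum
    v_hat = beta2 * v_prev + (1.0 - beta2) * d_u ** 2       # second-order momentum
    alpha1_l = alpha_bar_l / np.sqrt(l)
    rho_a_l = rho_a * eta_a ** l
    phi_diag = np.sqrt(np.abs(v_hat)) / alpha1_l + rho_a_l  # diagonal of Phi_l
    return m_hat, v_hat, phi_diag, rho_a_l
```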
The modified ADMM iteration will also converge at a set of station points denoted by 𝐮_(⋆)^(k), 𝐯_(⋆)^(k), and 𝐳_(⋆)^(k). Therefore, we have 𝐮^(k,l+1)=𝐮_⋆^(k,l)=𝐮^(k,l)-Φ_l^-1(𝐝_⋆^(k,l)-ν_l1), where 𝐝_⋆^(k,l)=𝐦̂_l-ρ_a,l (𝐯_⋆^(k,l)-𝐳_⋆^(k,l)), ν_l=1^Φ_l^-1𝐝_⋆^(k,l)/1^Φ_l^-11. §.§ Convergence of DAN Until now, we have developed a new model-driven method for SN selection. However, the obtained 𝐓̂^(k,l) may not satisfy (<ref>), which indicates that the convergence property of the MM framework is questionable. To address this issue, we next analyze the convergence of the proposed DAN. For any sequence {𝐮^(k,l)}_l=1^L generated by the proposed DAN, the regret function is defined as R_L≜∑_l=1^L( 𝒢(𝐮^(k,l))-𝒢(𝐮^(k,⋆))), where 𝐮^(k,⋆) =min_𝐮^(k)∈𝒮_u𝒢(𝐮^(k)) denotes the best stationary point in the feasible set 𝒮_u. Generally speaking, the regret function indicates the sum of the difference between 𝒢(𝐮^(k,l)) and 𝒢(𝐮^(k,⋆)), which is widely used for the convergence proof <cit.>. Note that the feasible set has bounded diameter, i.e., for all 𝐮,𝐯∈𝒮_u, ||𝐮 - 𝐯||^2 ≤ D_Δ. Define D_u,1≜max_l ||𝐝_u^(k,l)||_1, D_ϕ≜max_l max_i ϕ_l,i^-1, D_b,1≜max_l ||𝐛̂_l||_1, and D_b,2≜max_l ||𝐛̂_l||^2, where 𝐛̂_l = 𝐯_⋆^(k,l)-𝐳_⋆^(k,l). Then, we have the following theorem for the convergence analysis. Assume that, for all l∈[2,L], ϕ_l,i^-1≤ϕ_l-1,i^-1. The regret is bounded by R_L≤ C_1 √(L) + C_2, where C_1 = √(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1) and C_2 is defined by (<ref>), given at the top of this page. Proof: See Appendix <ref>. ▪ Since C_1 and C_2 are constants independent of L, Theorem <ref> indicates that the DAN has a regret of 𝒪(L^1/2), which guarantees that the sequence {𝒢(𝐮^(k,l))}_l=1^L will converge to 𝒢(𝐮^(k,⋆)) with convergence rate on the order of 𝒪(L^-1/2). §.§ Transmit Power Allocation For Multiple Targets Given {𝐮_q^(k,j+1)}_q=1^Q, the problem in (<ref>) can be expressed as min_𝐩^(k)∈𝒮_p ∑_q=1^Q ℱ_pa(p_q^(k)), where ℱ_pa(p_q^(k))=log𝐂_q(p_q^(k)|𝐮_q^(k,j)) is the cost function and 𝒮_p={𝐩^(k)|∑_q=1^Q p_q^(k)≤ P_T,p_q^(k)≥ P_min, q=1,2⋯, Q} denotes the feasible set of 𝐩^(k). This problem is convex and can be reformulated as a SDP problem, i.e., max_𝐩^(k) ∑_q=1^Q log(𝐐_q), s.t. ∑_q=1^Q p_q^(k)≤ P_T, p_q^(k)≥ P_min, 𝐉_q^(k)(p_q^(k)|𝐮_q^(k,j)) ≽𝐐_q, q=1,2⋯, Q, where {𝐐_q}_q=1^Q denotes a set of auxiliary symmetric matrices. Then, this problem can be solved by the CVX toolbox. However, the CVX toolbox is generally time-consuming, especially when the number of targets is large. To reduce the computational complexity and reveal more physical insights, we propose an iterative water-filling-based power allocation method. First, we merge the total power constraint into the cost function by the Lagrange multiplier method, i.e., ℒ_pa(𝐩^(k)) =∑_q=1^Q ℱ_pa(p_q^(k)) + λ_pa(P_T-∑_q=1^Q p_q^(k)), where λ_pa is the Lagrange multiplier. The derivative of (<ref>) w.r.t. p_q^(k) is given by ∂ℒ_pa(𝐩^(k))/∂ p_q^(k)=( (𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1Σ_q^(k))- λ_pa, where Σ_q^(k)=∑_n=1^N u_q,n^(k)𝐌_q,n^(k). By setting ∂ℒ_pa(𝐩^(k))/∂ p_q^(k)=0, we have the following fixed-point equation, i.e., p_q^(k)=1/λ_pa- (𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1𝐉_P,q^(k)/ (𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1Σ_q^(k). If 𝐉_P,q^(k) and Σ_q^(k) reduce to one-dimensional constants denoted by J_P,q^(k) and Σ_q^(k), respectively, the closed-form solution of p_q^(k) can be directly obtained from (<ref>), i.e., p_q^(k)=μ_wf-J_P,q^(k)/Σ_q^(k), where μ_wf=1/λ_pa denotes the water level. 
For the matrix-version 𝐉_P,q^(k) and Σ_q^(k), we propose to obtain p_q^(k) and the water level μ_wf by an iteration process. In particular, at the ith iteration, p_q,i+1^(k) is obtained by p_q,i+1^(k)=⌊μ_wf- (𝐉_P,q^(k)+p_q,i^(k)Σ_q^(k))^-1𝐉_P,q^(k)/ (𝐉_P,q^(k)+p_q,i^(k)Σ_q^(k))^-1Σ_q^(k)⌋_P_min, where p_q,i^(k) denotes the power for the qth target at the ith iteration and ⌊ a⌋_b=max{a,b}. Then, the water level μ_wf is updated by setting ∑_q=1^Q p_q,i+1^(k)(μ_wf) = P_T. According to the Rayleigh quotient, we have λ̃_min≤ (𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1𝐉_P,q^(k)/ (𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1Σ_q^(k)≤λ̃_max, where λ̃_min and λ̃_max denote the minimum and maximum eigenvalue of (Σ_q^(k))^-1𝐉_P,q^(k), respectively. Note that 𝐉_P,q^(k) and Σ_q^(k) denote the FIM of the prediction and the measurement, respectively. Thus, the eigenvalues of (Σ_q^(k))^-1𝐉_P,q^(k) denote the ratio between the prediction and measurement. Recalling (<ref>), if the eigenvalues of (Σ_q^(k))^-1𝐉_P,q^(k) are larger, p_q^(k) will be lower. This indicates that, more power will be allocated to a target, if 1) the measurement provides more information than the prediction, which enables the system to improve the accuracy of the prediction, or 2) the prediction of this target is so bad such that the system needs to allocate more power for better motion state estimation. In turn, if the eigenvalues of (Σ_q^(k))^-1𝐉_P,q^(k) are smaller, p_q^(k) will be lower. This indicates that, a target will be assigned with a lower power, if 1) the prediction is good enough; or 2) the measurement is too bad. § SIMULATION In the simulation, we will show the efficiency and effectiveness of the proposed DAN and FP-WF algorithms. In the following, we first introduce the system parameters, the training details of DAN, and the benchmark algorithms. System parameters: We consider a mmWave system operating at a carrier frequency of 28 GHz. There is one BS acting as the transmitter, which is located at [0,0] m. The number of SNs is N=32. These SNs are uniformly distributed in the area within 400× 400 m^2. On average, there is one SN within an area of 5000 m^2. The measurement covariance defined in (<ref>) is generated by Σ_q,n^(k)=1/SNR_q^(k)Σ̇_q,n^(k), where Σ̇_q,n^(k)=[σ̇_θ_q,n^(k)^2,σ̇_τ_q,n^(k)^2,σ̇_μ_q,n^(k)^2] with σ̇_θ_q,n^(k)=2, σ̇_τ_q,n^(k)=1, σ̇_μ_q,n^(k)=1. The SNR is defined by SNR_q^(k) =p_q^(k)γ_0/σ^2(d_q,n^(k))^2, where γ_0=-61.4 dB denotes the pathloss at reference distance. We set the total power at BS P=30 dBm, the minimum power for single target P_min=20 dBm, the noise power σ^2=-90 dBm, the intensity of process noise q_s=5, and Δ T =0.5 s. Initialization of motion state: There are three targets to be tracked, i.e., Q=3, if not otherwise specified. The initial velocities of the targets are given as 𝐯_1=[-10,0]^ m/s, 𝐯_2=[0,-10]^ m/s, 𝐯_3=[10,0]^ m/s, respectively. The initial locations of the targets are given as 𝐱_1^(0)=[124, 124]^ m, 𝐱_2^(0)=[-134, 134]^ m, and 𝐱_3^(0)=[-144, -144]^ m, respectively. Training details: During training, the learnable parameters are optimized by the SGD optimizer in the PyTorch with a learning rate 5×10^-5. In our experiment, the loss function for training is selected as f_loss=1/L∑_l=1^L ||𝐮_ES - 𝐮̂^l||^2, where 𝐮_ES denotes the selection vector obtained by the exhaustive search (ES). The number of data for training is set as N_train=500. The network parameters are set as ρ=1, ρ_a=10^2, γ=10^4, β_2=0.999, η_1=0.99, and η_a=0.99. 
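A sketch of the resulting FP-WF procedure is given below, reading the matrix ratios as ratios of traces (which is what differentiating the log-determinant cost yields) and using a simple bisection on the water level; the bracket for μ_wf and the iteration counts are illustrative assumptions.

```python
import numpy as np

def fp_power_sweep(mu, p_prev, J_P_list, Sig_list, P_min):
    """One fixed-point sweep of the per-target powers at a given water level mu."""
    p_new = []
    for p_i, J_P, Sig in zip(p_prev, J_P_list, Sig_list):
        A = np.linalg.inv(J_P + p_i * Sig)
        p_new.append(max(mu - np.trace(A @ J_P) / np.trace(A @ Sig), P_min))
    return np.array(p_new)

def fp_waterfilling(J_P_list, Sig_list, P_T, P_min, n_outer=20, n_bisect=50):
    """Alternate the fixed-point power update with a bisection on the water level."""
    Q = len(J_P_list)
    p = np.full(Q, P_T / Q)
    for _ in range(n_outer):
        lo, hi = 0.0, 10.0 * P_T                  # assumed bracket for mu
        for _ in range(n_bisect):                 # enforce sum_q p_q(mu) = P_T
            mu = 0.5 * (lo + hi)
            if fp_power_sweep(mu, p, J_P_list, Sig_list, P_min).sum() > P_T:
                hi = mu
            else:
                lo = mu
        p = fp_power_sweep(0.5 * (lo + hi), p, J_P_list, Sig_list, P_min)
    return p
```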
The learnable parameters are initialized as β_1=0.99, and α_1 = 0.15 for all layers. The number of layers is set as L=10. The maximum number of ADMM iterations is set as 200. Benchmark methods: The proposed methods are compared with the following algorithms for SN selection and power allocation. 1) SN selection: We compare DAN with the following methods: ∙ `Nearest SN Selection': this method selects the subset of SNs nearest to the target; ∙ `Exhaustive Search (ES)': this method selects the subset of SNs which minimizes the cost function; ∙ `MM-CVX': the method solves (<ref>) by CVX toolbox. ∙ `MM-ADMM': the optimization-based method proposed in Sec. III. A. To show the impact of 𝐓^(k,l) in MM-ADMM, we use two different 𝐓^(k,l). Specifically, the first choice is 𝐓_1^(k,l)=(𝐇_F(𝐮^(k,l)))𝐈, and the second choice is 𝐓_2^(k,l)=λ_max(𝐇_F(𝐮^(k,l)))𝐈, which are denoted by `MA-I' and `MA-II', respectively. The parameters of MM-ADMM and MM-CVX are the same as that for DAN. The maximum number of MM iterations for MM-ADMM and MM-CVX is set as 30 and 50, respectively, unless specified otherwise. 2) Power allocation: We compare FP-WF with `CVX', which represents the method for solving (<ref>) by CVX. §.§ Computational Cost Table <ref> shows the running time[Configuration of this computer: CPU: Inter Core i9-9900 @3.10GHz; RAM: 16GB; Software: Python 3.10.9 in Microsoft visual studio code and Matlab 2020b.] of the algorithms composed of different power allocation and SN selection methods. It can be observed that the running time of DAN & FP-WF is 0.7724 s, which is the lowest among all combinations. Meanwhile, we can observe that the running time of ES & CVX is 18.6242 s, which is about 24.11 times more than that of DAN & FP-WF. To further demonstrate the low computational complexity provided by DAN and FP-WF, we study the computational cost of the SN selection and power allocation methods, respectively. Running time of the SN selection methods: Table <ref> shows the running time of the SN selection algorithms with different N. DAN achieves the lowest computational cost among the candidates with different N. The computational consumption of ES is extremely large, especially when N is large. For example, when N=128, the DAN is about 443 times faster than ES. MM-CVX is more time-consuming than MM-ADMM. Meanwhile, the running time of DAN is less than that of the MM-ADMM. There are two main reasons: 1) one layer of DAN has a lower computational cost than one iteration of MM-ADMM. In particular, DAN only requires the gradient, while MM-ADMM requires both the gradient and Hessian matrix, which needs more computational cost, and 2) owing to the well-trained 𝐓^(k,l), DAN can converge faster than MM-ADMM, which will be shown in the following. Convergence of the SN selection methods: The running time of MM-CVX, MM-ADMM and DAN is proportional to the required number of iterations/layers to converge. Fig. <ref> shows the cost function over the number of the iterations (optimization-based methods) or the layers (DAN). First, MM-CVX needs about 50 iterations to converge, which is more than MM-ADMM and DAN. Meanwhile, we can observe that DAN can converge within 3 layers, while MM-ADMM needs about 15-20 iterations to converge, which leads to more running time. This is because, unlike MM-ADMM, DAN utilizes the momentum, which accumulates the gradient of the past layers and can thus speed up the convergence <cit.>. 
Meanwhile, we see that MM-ADMM-II can converge faster than MM-ADMM-I which indicates that the convergence of MM-ADMM highly depends on the choice of 𝐓^(k,l). This is also the motivation to learn 𝐓^(k,l) in DAN. Running time of the power allocation methods: Table <ref> shows the running time for the power allocation algorithms versus different Q. We can observe that the running time of FP-WF is much lower than CVX for different cases. This is because FP-WF is derived based on the Lagrange multiplier method, which can solve (<ref>) more efficiently than the interior point method used by CVX. §.§ Tracking Accuracy The average root mean square error (RMSE) of multiple targets tracking over Q targets and K frames is selected as the performance metric for multiple target tracking, which is defined as 1/Q1/K∑_q=1^Q∑_k=1^K√(1/N_mc∑_i=1^N_mc‖𝐱_q^(k)-𝐱̂_q^(k,i)‖^2), where 𝐱̂_q^(k,i) denotes the estimated position of the target q at the kth time frame in the ith Monte-Carlo trial, and N_mc denotes the number of Monte-Carlo trials. The number of tracking frames is set as K=10. Fig. <ref> shows the average RMSE with different power budget P. We have several observations. First, associated with different SN selection methods, FP-WF can achieve the same performance as CVX. Recalling from the results in Table <ref>, compared to CVX, FP-WF can reduce the computational cost without losing performance loss. Second, we can observe that ES can achieve the best performance among the SN selection methods. However, from Table <ref>, it can be observed that the running time of ES is extremely high, which limits its real application. Third, MM-CVX and MM-ADMM can achieve similar performance, but as shown in Table <ref>, the computational cost of MM-CVX is higher than that of MM-ADMM. Furthermore, DAN can outperform MM-ADMM, which is because a more suitable 𝐓 is learned by DAN. Finally, the performance of the nearest SN selection is worse than DAN. This is because the tracking performance is affected by both the distance and the angle from target to SNs. DAN takes both of them into consideration, while the nearest SN selection only considers the distance. This will be further demonstrated in the next part. Illustration of SN selection: To better understand the effect of SN selection, we focus on the single target case in this section. The power allocated to the target is set as p=25 dBm. The initial state of the target is given by 𝐯=[-10,0]^ m/s and 𝐱^(0)=[124, 124]^ m. Fig. <ref> shows the SN selection result by DAN in 4 consecutive frames. The selection depends on the geometric relation between the target and SNs. DAN does not always choose the nearest SNs, because, besides the distance, the different perspectives to observe the target provided by different SNs will also affect the tracking performance. Fig. <ref> shows the corresponding RMSE over the tracking frames. It can be observed that DAN consistently outperforms the Nearest SN selection and achieves comparable performance as ES. Effect of noise power: One of the biggest drawbacks of DL-based approaches is the performance degradation when the features (such as the noise power) in test data differ from those in training. This leads to the study of generalization in this part. Fig. <ref> shows the performance under different noise power with N=32. When the noise power is different from that of the training data, DAN can provide a near-ES RMSE. It indicates that DAN can adapt to the change of σ^2, which makes DAN attractive in real applications. 
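For reference, the average RMSE metric used throughout this section can be computed as in the following sketch; the array shapes and the helper name are illustrative assumptions.

```python
import numpy as np

def average_rmse(x_true, x_est):
    """Average RMSE over Q targets and K frames.

    x_true : ground-truth positions, shape (Q, K, 2)
    x_est  : estimated positions per Monte-Carlo run, shape (N_mc, Q, K, 2)
    """
    sq_err = np.sum((x_est - x_true[None]) ** 2, axis=-1)   # (N_mc, Q, K)
    rmse_qk = np.sqrt(sq_err.mean(axis=0))                  # RMSE per target and frame
    return rmse_qk.mean()                                   # average over Q and K
```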
§.§ Accuracy-Complexity Tradeoff By adjusting the termination tolerance and the maximum number of iterations, a tradeoff between computational cost and accuracy can be achieved by MM-ADMM. Meanwhile, the proposed DAN requires a fixed number of layers and thus has a fixed running time. Fig. <ref> shows the RMSE performance of different algorithms versus the running time. It is observed that DAN can always outperform MM-ADMM in terms of both computational cost and RMSE. Moreover, though MM-ADMM-II can converge faster than MM-ADMM-I, 𝐓_2^(k,l) requires more computational cost than 𝐓_1^(k,l). Thus, given the same time cost, MM-ADMM-I outperforms MM-ADMM-II. § CONCLUSION In this paper, we considered the joint SN selection and power allocation problem for tracking multiple maneuvering targets in PMNs. To meet the stringent latency requirement of sensing applications, we proposed a model-driven approach for SN selection by unfolding the optimization-based MM-ADMM method. A novel DNN architecture was derived to speed up the convergence by exploiting the momentum, whose convergence property was also guaranteed by deriving the regret bound. Furthermore, we proposed an efficient power allocation method based on fixed-point water filling and revealed some physical insights. Simulation results demonstrated that the proposed method can achieve better performance than the existing methods with much lower computational cost. This work demonstrated that, by reducing the number of iterations and improving the effectiveness of each layer, model-driven approaches offer a promising solution to meet the stringent latency requirement of sensing applications. § PROOF OF THEOREM <REF> Given ℒ(𝐮^(k)) is convex, we have 𝒢(𝐮^(k,l))-𝒢(𝐮^(k,⋆))≤<𝐝_u^(k,l), Δ𝐮^(k,l)>, where Δ𝐮^(k,l)=𝐮^(k,l) - 𝐮^(k,⋆). Since R_L≤∑_l=1^L<𝐝_u^(k,l), Δ𝐮^(k,l)>, the main idea of the proof is to find an upperbound of ∑_l=1^L<𝐝_u^(k,l), Δ𝐮^(k,l)>. Recalling from (<ref>), we have ‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2=‖Φ_l^1/2(𝐮^(k,l+1) - 𝐮^(k,⋆))‖^2 (a)=‖Φ_l^1/2(𝐮^(k,l)-Φ_l^-1(𝐝_⋆^(k,l)-ν_l1))-𝐮^(k,⋆)‖^2 (b)=‖Φ_l^1/2Δ𝐮^(k,l)-Φ_l^-1/2(𝐦̂_l-ρ_a,l𝐛̂_l-ν_l1)‖^2 (c)=‖Φ_l^1/2Δ𝐮^(k,l)‖^2 - 2 <(1-β_1,l)𝐝_u^(k,l),Δ𝐮^(k,l)> - 2 <β_1,l𝐦̂_l-1-ρ_a,l𝐛̂_l-ν_l1,Δ𝐮^(k,l)> + ‖Φ_l^-1/2(𝐦̂_l -ρ_a,l𝐛̂_l-ν_l1 ) ‖^2, where step (a) follows (<ref>), step (b) follows (<ref>), and step (c) follows (<ref>). By adding 2 <(1-β_1,l)𝐝_u^(k,l),Δ𝐮^(k,l)>-‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2 to both sides of (<ref>), and dividing both sides of (<ref>) by 2(1-β_1,l), we have <𝐝_u^(k,l),Δ𝐮^(k,l)> =‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2/2(1-β_1,l) = -<β_1,l𝐦̂_l-1,Δ𝐮^(k,l)>/1-β_1,l+<ρ_a,l𝐛̂_l,Δ𝐮^(k,l)>/1-β_1,l +<ν_l1,Δ𝐮^(k,l)>/1-β_1,l+‖Φ_l^-1/2(𝐦̂_l-ρ_a,l𝐛̂_l-ν_l1) ‖^2/2(1-β_1,l). By using the Young's inequality for products, i.e., ± ab≤a^2/2 +b^2/2, the second, third, and fourth terms on the right-hand side of (<ref>) are upperbounded by -<β_1,l𝐦̂_l-1,Δ𝐮^(k,l)> /1-β_1,l≤‖Φ_l^-1/2𝐦̂_l-1‖^2 /2(1-β_1) +‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1), < 𝐛̂_l,Δ𝐮^(k,l)>/1-β_1,l≤‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1) + ‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1), and <1,Δ𝐮^(k,l)>/1-β_1,l≤‖Φ_l^-1/21‖^2/2(1-β_1) + ‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1), respectively. By utilizing the inequality between the arithmetic mean and quadratic mean, the last term on the right-hand side of (<ref>) is upperbounded by ‖Φ_l^-1/2(𝐦̂_l-ρ_a,l𝐛̂_l-ν_l1) ‖^2/2(1-β_1,l)≤3‖Φ_l^-1/2𝐦̂_l‖^2/2(1-β_1) +3ρ_a,l^2‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1) + 3ν_l^2‖Φ_l^-1/21‖^2/2(1-β_1). 
Then, the upperbound of (<ref>) can be given by <𝐝_u^(k,l),Δ𝐮^(k,l)>≤‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2/2(1-β_1,l)_172 +β_1,l‖Φ_l^-1/2𝐦̂_l-1‖^2/2(1-β_1)_173+ ρ_a,l‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1)_174 + ν_l‖Φ_l^-1/21‖^2/2(1-β_1)_175 + β_1,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1)_176+ ρ_a,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1)_177 +ν_l‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1)_178 +3‖Φ_l^-1/2𝐦̂_l‖^2/2(1-β_1)_179 +3ρ_a,l^2‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1)_180+3ν_l^2‖Φ_l^-1/21‖^2/2(1-β_1)_181, To bound R_L, we upperbound of the summation of the terms 172-181 over the index l as follows. §.§.§ Term 172 It can be shown that ‖Φ_l^1/2Δ𝐮^(k,l)‖^2=∑_i=1 ^Nϕ_l,i^-1 |Δ u_i^(k,l)|^2 =1/α_1,l∑_i=1 ^N√(|v̂_l,i|)· |Δ u_i^(k,l)|^2 +ρ_a,l ||Δ𝐮^(k,l)||^2 (a)=1/α_1,l∑_i=1^N∑_p=1^l√(1-β_2)β_2^l-p/2 |d_u,i^(k,p)| ·|Δ u_i^(k,l)|^2 +ρ_a,l ||Δ𝐮^(k,l)||^2 ≤√(1-β_2)/α_1^-(1-√(β_2)) D_u,1 D_Δ√(l) + D_Δρ_aη_a^l, where step (a) comes from (<ref>). Then, with the decreasing learning rate ϕ_l,i^-1, we have ∑_l=1^L(‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2/2(1-β_1,l)) ≤‖Φ_1^1/2Δ𝐮^(k,1)‖^2/2(1-β_1)+‖Φ_L^1/2Δ𝐮^(k,L+1)‖^2/2(1-β_1) +∑_l=2^L(‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l-1^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l)) (a)≤√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1)√(L) + ρ_aη_a^L D_Δ/1-β_1 + √(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1) + ρ_aη_a D_Δ/1-β_1 + ∑_l=2^L∑_i=1^N(ϕ_l,i-ϕ_l-1,i)|Δ u_i^(k,l)|^2/2(1-β_1) (b)≤√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1)√(L) + ρ_aη_a D_Δ/1-β_1 + √(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1) + ρ_aη_a D_Δ/1-β_1 + D_Δ_u,2D_ϕ/1-β_1, where step (a) follows (<ref>) and step (b) follows ∑_l=2^L∑_i=1^N(ϕ_l,i-ϕ_l-1,i)|Δ u_i^(k,l)|^2 ≤ D_Δ u, 2∑_i=1^N∑_l=2^L(ϕ_l,i-ϕ_l-1,i)≤ 2D_Δ_u,2D_ϕ. §.§.§ Terms 173 & 179 Since (1-β_1) is a non-zero constant, we focus on the upperbound of the terms ∑_l=1^L‖Φ_l^-1/2𝐦̂_l‖^2 and ∑_l=1^Lβ_1,l‖Φ_l^-1/2𝐦̂_l-1‖^2. Denote m̂_l,i and d_u,i as the ith entry of 𝐦̂_l and 𝐝_u^(k,l), respectively. Then, we have ‖Φ_l^-1/2𝐦̂_l‖^2=∑_i=1 ^Nm̂_l,i^2/ϕ_l,i≤∑_i=1 ^Nm̂_l,i^2/√(|v̂_l,i|)/α_1,l = ∑_i=1 ^N(∑_p=1^l(1-β_1,p)∏_q=1^l-pβ_1,l-q+1 d_u,i^(k,p))^2 /√(|v̂_l,i|)/α_1,l (a)≤∑_i=1 ^Nα_1,lη_1^2l(∑_p=1^lβ_1^l-p) (∑_p=1^lβ_1^l-p (d_u,i^(k,p))^2 ) /√(∑_p=1^l(1-β_2)β_2^l-p |d_u,i^(k,p)|^2) (b)≤α_1,lη_1^2l/(1-β_1)√(1-β_2)∑_i=1 ^N(∑_p=1^l(β_1/√(β_2))^l-p |d_u,i^(k,p)| ), where step (a) comes from the inequality (1-β_1,p)≤ 1, ∏_q=1^l-pβ_1,l-q+1≤β_1^l-pη_1^l and the Jensen inequality, i.e., (∑_i a_i b_i/∑_i a_i)^2≤∑ a_i b_i^2/∑_i a_i, and step (b) follows the inequalities ∑_p=1^lβ_1^l-p≤1/1-β_1 and ∑_p=1^l(1-β_2)β_2^l-p |d_u,i^(k,p)|^2 ≥ (1-β_2)β_2^l-p |d_u,i^(k,p)|^2. By summing up (<ref>) over the index l, we have ∑_l=1^L‖Φ_l^-1/2𝐦̂_l‖^2 ≤∑_l=1^Lα_1,lη_1^2l/(1-β_1)√(1-β_2)∑_i=1 ^N( ∑_p=1^l(β_1/√(β_2))^l-p |d_u,i^(k,p)| ) =∑_l=1^Lα_1,lη_1^2l/(1-β_1)√(1-β_2) ||𝐝_u^(k,l)||_1 (∑_j=l^L(β_1/√(β_2))^j-l) ≤α_1^+D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)∑_l=1^Lη_1^2l/√(l) (a)≤α_1^+D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)(1-η_1^2), where we have utilized the property that ∑_l=1^Lη_1^2l/√(l)≤∑_l=1^L η_1^2l≤1/1-η_1^2 in step (a). Then, we have ∑_l=1^L‖Φ_l^-1/2𝐦̂_l‖^2≤α_1^+D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)(1-η_1^2). Similarly, we can obtain ∑_l=1^Lβ_1,l‖Φ_l^-1/2𝐦̂_l-1‖^2 ≤∑_l=1^Lβ_1,l‖Φ_l-1^-1/2𝐦̂_l-1‖^2 ≤α_1^+β_1D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)(1-η_1^2). §.§.§ Terms 174 & 180 First, we have ∑_l=1^Lρ_a,l‖Φ_l^-1/2𝐛̂_l‖^2 ≤∑_l=1^Lρ_a η_a^l D_ϕ^l ||𝐛̂_l||^2 ≤ρ_a D_ϕ D_b,2/1-η_a, where D_ϕ^l= max_i ϕ_l,i^-1. Similarly, we can obtain ∑_l=1^Lρ_a,l^2‖Φ_l^-1/2𝐛̂_l‖^2 ≤ρ_a^2 D_ϕ D_b,2/1-η_a^2. 
§.§.§ Terms 175 & 181 By the definition of ν_l in (<ref>), we have ν_l ≤||𝐝_⋆^(k,l)||_1≤ ||𝐦̂_l||_1 + ρ_a,l ||𝐛̂_l||_1. Similar to (<ref>), we can obtain ∑_l=1^L||𝐦̂_l||_1 ≤∑_l=1^L∑_i=1 ^N(∑_p=1^l∏_q=1^l-pβ_1,l-q+1 |d_u,i^(k,p)| ) ≤∑_l=1^L ||𝐝_u^(k,p)||_1 η_1^l/(1-β_1)≤ D_u,1/(1-η_1)(1-β_1). Then, we have ∑_l=1^Lρ_a,l ||𝐛̂_l||_1=ρ_aD_b,1∑_l=1^L η_a^l≤ρ_aD_b,1/1-η_a. By substituting (<ref>) and (<ref>) into (<ref>), we have ∑_l=1^Lν_l‖Φ_l^-1/21‖^2 ≤ D_ϕ∑_l=1^Lν_l ≤ D_ϕ( D_u,1/(1-η_1)(1-β_1)+ρ_aD_b,1/1-η_a). It thus follows that ∑_l=1^Lν_l‖Φ_l^-1/21‖^2 ≤ D_u,1D_ϕ/(1-η_1)(1-β_1)+ρ_aD_b,1D_ϕ/1-η_a. Similarly, we can obtain ∑_l=1^Lν_l^2‖Φ_l^-1/21‖^2 ≤2 D_u,1^2 D_ϕ/(1-η_1^2)(1-β_1)^2+2ρ_a^2D_b,1^2D_ϕ/(1-η_a^2). §.§.§ Term 176 By (<ref>), we have ∑_l=1^Lβ_1,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2 ≤β_1√(1-β_2)/α_1^-(1-√(β_2)) D_u,1 D_Δ√(l)η_1^l + D_Δβ_1ρ_aη_1^lη_a^l (a)≤β_1√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-η_1)^2 + β_1ρ_aD_Δ/(1-η_1η_a), where we have utilized the bound of the arithmetic-geometric series, i.e., ∑_l=1^L l η_1^l≤1/(1-η_1)^2 in (a). §.§.§ Term 177 By replacing β_1,l with ρ_a,l in (<ref>), we have ∑_l=1^Lρ_a,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2≤ρ_a√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-η_a)^2 + ρ_a^2 D_Δ/1-η_a^2. §.§.§ Term 178 Recalling (<ref>) and (<ref>), we have ν_l ≤ ||𝐦̂_l||_1 + ρ_a,l ||𝐛̂_l||_1≤ D_u,1/(1-β_1)η_1^l+ ρ_aD_b,1η_a^l. Then, we can obtain ν_l‖Φ_l^1/2Δ𝐮^(k,l)‖^2=ν_l∑_i=1 ^Nϕ_l,i |Δ u_i^(k,l)|^2 =ν_l/α_1,l∑_i=1 ^N√(|v̂_l,i|)· |Δ u_i^(k,l)|^2 +ν_lρ_a,l ||Δ𝐮^(k,l)||^2 ≤ν_l√(1-β_2)/α_1^-(1-√(β_2)) D_u,1 D_Δ√(l)+ ν_l D_Δρ_aη_a^l . Similarly, we have ∑_l=1^L √(l)ν_l = ∑_l=1^L √(l)( D_u,1/(1-β_1)η_1^l+ ρ_aD_b,1η_a^l) ≤ D_u,1/(1-β_1)(1-η_1)^2+ ρ_aD_b,1/(1-η_a)^2, ∑_l=1^L η_a^l ν_l = ∑_l=1^L η_a^l ( D_u,1/(1-β_1)η_1^l+ ρ_aD_b,1η_a^l) ≤ D_u,1/(1-β_1)(1-η_1)(1-η_a)+ ρ_aD_b,1/(1-η_a)^2. By substituting (<ref>) and (<ref>) into (<ref>), we have ∑_l=1^L ν_l‖Φ_l^1/2Δ𝐮^(k,l)‖^2 ≤( D_u,1/(1-β_1)(1-η_1)^2+ ρ_aD_b,1/(1-η_a)^2)√(1-β_2)D_u,1 D_Δ/α_1(1-√(β_2)) + ( D_u,1/(1-β_1)(1-η_1)(1-η_a)+ ρ_aD_b,1/(1-η_a)^2)D_Δρ_a. By combining the upperbounds for the summations of terms 172-181, (<ref>) can be proved. 10 url@samestyle 8288677 F. Liu, C. Masouros, A. Li, H. Sun, and L. Hanzo, “Mu-mimo communications with mimo radar: From co-existence to joint transmission,” IEEE Trans. Wirel. Commun., vol. 17, no. 4, pp. 2755–2770, 2018. liu2020radar F. Liu, W. Yuan, C. Masouros, and J. Yuan, “Radar-assisted predictive beamforming for vehicular links: Communication served by sensing,” IEEE Trans. Wirel. Commun., vol. 19, no. 11, pp. 7704–7719, 2020. 9296833 A. Zhang, M. L. Rahman, X. Huang, Y. J. Guo, S. Chen, and R. W. Heath, “Perceptive mobile networks: Cellular networks with radio vision via joint communication and radar sensing,” IEEE Veh. Technol. Mag., vol. 16, no. 2, pp. 20–30, 2021. xie2022perceptive L. Xie, P. Wang, S. Song, and K. B. Letaief, “Perceptive mobile network with distributed target monitoring terminals: Leaking communication energy for sensing,” IEEE Trans. Wirel. Commun., vol. 21, no. 12, pp. 10 193–10 207, 2022. xie2022networked L. Xie, S. Song, and K. B. Letaief, “Networked sensing with ai-empowered interference management: Exploiting macro-diversity and array gain in perceptive mobile networks,” arXiv preprint arXiv:2205.11331, 2022. xie2023collaborative L. Xie, S. Song, Y. C. Eldar, and K. B. Letaief, “Collaborative sensing in perceptive mobile networks: Opportunities and challenges,” IEEE Wirel. Commun., vol. 30, no. 1, pp. 16–23, 2023. macsazade2010energy E. Maşazade, R. Niu, P. K. Varshney, and M. 
Keskinoz, “Energy aware iterative source localization for wireless sensor networks,” IEEE Trans. Signal Process., vol. 58, no. 9, pp. 4824–4835, 2010. 7104065 H. Chen, S. Ta, and B. Sun, “Cooperative game approach to power allocation for target tracking in distributed mimo radar sensor networks,” IEEE Sens. J., vol. 15, no. 10, pp. 5423–5432, 2015. yan2020optimal J. Yan, W. Pu, S. Zhou, H. Liu, and M. S. Greco, “Optimal resource allocation for asynchronous multiple targets tracking in heterogeneous radar networks,” IEEE Trans. Signal Process., vol. 68, pp. 4055–4068, 2020. shen2013sensor X. Shen and P. K. Varshney, “Sensor selection based on generalized information gain for target tracking in large sensor networks,” IEEE Trans. Signal Process., vol. 62, no. 2, pp. 363–375, 2013. yi2020resource W. Yi, Y. Yuan, R. Hoseinnezhad, and L. Kong, “Resource scheduling for distributed multi-target tracking in netted colocated mimo radar systems,” IEEE Trans. Signal Process., vol. 68, pp. 1602–1617, 2020. yan2015simultaneous J. Yan, H. Liu, B. Jiu, B. Chen, Z. Liu, and Z. Bao, “Simultaneous multibeam resource allocation scheme for multiple target tracking,” IEEE Trans. Signal Process., vol. 63, no. 12, pp. 3110–3122, 2015. yuan2020robust Y. Yuan, W. Yi, R. Hoseinnezhad, and P. K. Varshney, “Robust power allocation for resource-aware multi-target tracking with colocated mimo radars,” IEEE Trans. Signal Process., vol. 69, pp. 443–458, 2020. xie2017joint M. Xie, W. Yi, T. Kirubarajan, and L. Kong, “Joint node selection and power allocation strategy for multitarget tracking in decentralized radar networks,” IEEE Trans. Signal Process., vol. 66, no. 3, pp. 729–743, 2017. sun2021resource H. Sun, M. Li, L. Zuo, and P. Zhang, “Resource allocation for multitarget tracking and data reduction in radar network with sensor location uncertainty,” IEEE Trans. Signal Process., vol. 69, pp. 4843–4858, 2021. das2011submodular A. Das and D. Kempe, “Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection,” arXiv preprint arXiv:1102.3975, 2011. elhamifar2015dissimilarity E. Elhamifar, G. Sapiro, and S. S. Sastry, “Dissimilarity-based sparse subset selection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 11, pp. 2182–2197, 2015. grant2014cvx M. Grant and S. Boyd, “Cvx: Matlab software for disciplined convex programming, version 2.1,” 2014. borgerding2017amp M. Borgerding, P. Schniter, and S. Rangan, “Amp-inspired deep networks for sparse linear inverse problems,” IEEE Trans. Signal Process., vol. 65, no. 16, pp. 4293–4308, 2017. xin2016maximal B. Xin, Y. Wang, W. Gao, D. Wipf, and B. Wang, “Maximal sparsity with deep networks?” NeurIPS, vol. 29, 2016. 8550778 Y. Yang, J. Sun, H. Li, and Z. Xu, “Admm-csnet: A deep learning approach for image compressive sensing,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 3, pp. 521–538, 2020. 9420308 J. Johnston, Y. Li, M. Lops, and X. Wang, “Admm-net for communication interference removal in stepped-frequency radar,” IEEE Trans. Signal Process., vol. 69, pp. 2818–2832, 2021. parkvall2017nr S. Parkvall, E. Dahlman, A. Furuskar, and M. Frenne, “Nr: The new 5g radio access technology,” IEEE Communi. Stand. Mag., vol. 1, no. 4, pp. 24–30, 2017. 62252 C. Jauffret and Y. Bar-Shalom, “Track formation with bearing and frequency measurements in clutter,” IEEE Trans. Aerosp. Electron. Syst., vol. 26, no. 6, pp. 999–1010, 1990. grossi2014track E. Grossi, M. Lops, and L. 
Venturino, “Track-before-detect for multiframe detection with censored observations,” IEEE Trans. Aerosp. Electron. Syst., vol. 50, no. 3, pp. 2032–2046, 2014. xie2020recursive L. Xie, Z. He, J. Tong, and W. Zhang, “A recursive angle-doppler channel selection method for reduced-dimension space-time adaptive processing,” IEEE Trans. Aerosp. Electron. Syst., vol. 56, no. 5, pp. 3985–4000, 2020. 7181639 K. L. Bell, C. J. Baker, G. E. Smith, J. T. Johnson, and M. Rangaswamy, “Cognitive radar framework for target detection and tracking,” IEEE J. Sel. Top. Signal Process., vol. 9, no. 8, pp. 1427–1439, 2015. 325008 J. Helferty and D. Mudgett, “Optimal observer trajectories for bearings only tracking by minimizing the trace of the cramer-rao lower bound,” in Proc. 32th IEEE Conf. Decis. Control, 1993, pp. 936–939 vol.1. 326097 J. Helferty, D. Mudgett, and J. Dzielski, “Trajectory optimization for minimum range error in bearings-only source localization,” in Proceedings of OCEANS '93, 1993, pp. II/229–II/234 vol.2. bejar2001distributed R. Bejar, B. Krishnamachari, C. Gomes, and B. Selman, “Distributed constraint satisfaction in a wireless sensor tracking system,” in Workshop on Distributed Constraint Reasoning, International Joint Conference on Artificial Intelligence, vol. 4, 2001. zhai2018joint X. Zhai, Q. Shi, Y. Cai, and M. Zhao, “Joint transmit precoding and receive antenna selection for uplink multiuser massive mimo systems,” IEEE Trans. Commun., vol. 66, no. 11, pp. 5249–5260, 2018. malek2016successive M. Malek-Mohammadi, A. Koochakzadeh, M. Babaie-Zadeh, M. Jansson, and C. R. Rojas, “Successive concave sparsity approximation for compressed sensing,” IEEE Trans. Signal Process., vol. 64, no. 21, pp. 5657–5671, 2016. sun2016majorization Y. Sun, P. Babu, and D. P. Palomar, “Majorization-minimization algorithms in signal processing, communications, and machine learning,” IEEE Trans. Signal Process., vol. 65, no. 3, pp. 794–816, 2016. boyd2011distributed S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends® Mach. Learn., vol. 3, no. 1, pp. 1–122, 2011. XIE2020107401 L. Xie, Z. He, J. Tong, J. Li, and H. Li, “Transmitter polarization optimization for space-time adaptive processing with diversely polarized antenna array,” Signal Process., vol. 169, p. 107401, 2020. bibby_1974 J. Bibby, “Axiomatisations of the average and a further generalisation of monotonic sequences,” Glas. Math. J., vol. 15, no. 1, p. 63–65, 1974. kingma2014adam D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
http://arxiv.org/abs/2307.06286v1
20230712163110
Entanglement Entropy and Algebra in Quantum Field Theory
[ "Ahmed Halawani" ]
math-ph
[ "math-ph", "hep-th", "math.MP" ]
Wolfson College, May 2023. This essay is submitted for the degree of Master of Advanced Study in Mathematics (Applied Mathematics). Quantum Field Theory (QFT) represents a vast generalization of Quantum Mechanics (QM), as it deals with systems that have an infinite number of degrees of freedom. The Stone-von Neumann theorem, which establishes the equivalence of irreducible representations of the canonical commutation relations (CCR) in QM, does not extend to QFT. Consequently, QFT admits multiple inequivalent irreducible representations, leading to a much richer algebraic structure. This essay aims to explore the physics of QFT from the operator algebra perspective, particularly focusing on entanglement entropy. We discuss the role of von Neumann algebras of different types in QFT, describe the local operator algebra approach to QFT, and explain how entanglement entropy can be defined in terms of the algebra of observables. Additionally, we explore the benefits of this approach in concrete applications, specifically in quantum field theory on curved spacetime. PART: CHAPTER: MOTIVATION One of the major goals of theoretical physics is to develop a unified description of all fundamental interactions, including gravity. Quantum Field Theory (QFT) provides a framework for describing the interactions of elementary particles in terms of quantum fields. QFT represents a vast generalization of Quantum Mechanics (QM), as it deals with systems that have an infinite number of degrees of freedom. While QFT shares many similarities with QM, it also exhibits some important differences. In QM, the Stone-von Neumann theorem establishes the equivalence of irreducible representations of the canonical commutation relations. However, this theorem does not extend to QFT, where multiple inequivalent irreducible representations are possible, leading to a much richer algebraic structure, where different representations can describe different physical phenomena or states of the system. For example, in the context of QFT on a curved spacetime, there are different possible representations of the algebra of observables that can describe different gravitational effects, such as the bending of light or the redshift of light from distant sources. These different representations can correspond to different physical states of the system, leading to a more complex and diverse range of physical phenomena than is possible in quantum mechanics. In this essay, we explore the difference between different formalisms of describing physical systems. We explain the physics of QFT from the operator algebra perspective, with a particular focus on entanglement entropy. We discuss the role of von Neumann algebras of different types in QFT, describe the local operator algebra approach to QFT, and explain how entanglement entropy can be defined in terms of the algebra of observables.
Additionally, we explore the benefits of this approach in concrete applications, specifically in quantum field theory on curved spacetime. This essay is divided into two parts. Part I discusses the algebraic formalism in general. As we move on to Part II, we will relate everything discussed to QFT. It's meant to be self-contained, so all relevant background material of Part II will be discussed in Part I. Chapter <ref> starts by providing the background for the `standard' representation formalism and demonstrating its shortcomings, then it sets the ground for the tools needed for the algebraic formalism. Chapter <ref> discusses the algebraic formalism. It sheds light on the mathematical reasoning which underlies the operator algebra approach, and the advantages it has over the standard one. In Chapter <ref>, we apply this approach in QFT, see the advantages it holds, and how it plays a role in entanglement entropy. Finally, Chapter <ref> discusses more applications of the algebraic approach in QFT on curved spacetime, demonstrating that it can be more beneficial to consider different formalisms at times. CHAPTER: MATHEMATICAL BACKGROUND § BANACH SPACE Hilbert space underlies quantum mechanics and such a space is a special case of a Banach space. We are interested in operators on Hilbert space, so first we should understand Banach space and operators on Banach space. A Banach space space is a vector space V which is complete with respect to a norm: ·: V →ℝ which satisfies: (i) f≥ 0 (non-negativity) (ii) f=0 ⇔ f=0 (definiteness) (iii) λ· f=|λ |f (homogeneity) (iv) f+g≤f+g (sub-additivity) ∀λ∈ℂ and f,g ∈ V §.§ Bounded Operators A standard topic in mathematics is the study of a structure through the study of maps between different instances of the structure. In the case of Banach space, we can begin by studying a linear map that starts in a set and goes to a different set with more structure: A: V → W, where V (V, ·_V ) is a general set with only normed space, and the target set W is a Banach space (W, ·_W ), which has a complete norm, i.e. more structure. The map A is bounded if: sup _f ∈ VA f_W/f_V<∞. By scaling f to a unit vector, this statement can be equivalently states as: sup _f_V =1 A f_W <∞ and this defines the operator norm A of a bounded operator. A Banach algebra 𝒜 is a Banach space (complete with respect to its norm) such that AB≤AB for all A,B∈𝒜. The algebra of bounded operators on a Banach space is a Banach algebra. Unbounded operators, on the other hand, have no such norm. Bounded operators are of importance because often unbounded operators are reduced to a sequence of bounded ones <cit.>. Furthermore, because they are bounded, we can write the commutation relation between them. Take V=W. Consider the identity map on W, id_W: W → W. Then id_W = sup _f_W =1id_W f _W = sup _f=1f = 1 < ∞ Hence the identity map is bounded and the operator norm of the identity map is 1. § HILBERT SPACES A Hilbert space ℋ is a standard too used to represent the state space of a quantum system. A Hilbert space is a type of vector space that is equipped with an inner product that is Cauchy complete <cit.>. A metric space (V, d) is called complete when every Cauchy sequence[ A sequence (f_n) is a Cauchy sequence in V when f_n-f_m→ 0 when n, m →∞ ; more precisely, for any ε>0 there is N ∈ℕ such that f_n-f_m<ε for all n, m>N. A sequence (f_n) converges if there is f ∈ V such that lim _n →∞f_n-f=0. <cit.>] converges. This inner product defines a norm. 
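As a concrete finite-dimensional illustration of the operator norm and the Banach-algebra inequality above (and of the C^∗-identity introduced in the next subsection), the following sketch treats matrices as bounded operators on ℂ^3; the particular matrices are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def op_norm(M):
    """Operator norm sup over unit vectors of ||M f||; for a matrix this is
    the largest singular value (the spectral norm)."""
    return np.linalg.norm(M, 2)

print(op_norm(np.eye(3)))                          # the identity map has norm 1
print(op_norm(A @ B) <= op_norm(A) * op_norm(B))   # ||AB|| <= ||A|| ||B||
print(op_norm(A.conj().T @ A), op_norm(A) ** 2)    # ||A*A|| = ||A||^2 (C*-identity)
```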
An inner product on a complex linear space X is a map ⟨ . | . ⟩: X × X →ℂ Such that, ∀ x,y,z ∈ X and λ, μ∈ℂ: (a) ⟨ x| λ y + μ z⟩ = λ⟨ x|y⟩ + μ⟨ x|z⟩ (linearity in second argument) (b) ⟨ y|x⟩ = ⟨ x|y⟩ (Hermitian Symmetry) (c) ⟨ x|x⟩≥ 0 (non-negativity) (d) ⟨ x|x⟩ = 0 iff x = 0 (positive definiteness) where ⟨ . | . ⟩ is complex conjugation. It follows from (a) and (b) that the inner product is anti-linear in the first term: ⟨λ x +μ y | z⟩ = λ⟨ x|z⟩ + μ⟨ y|z⟩. The inner product ⟨ · | · ⟩ induces a norm ||·|| via the relation: ||x|| = √(⟨ x|x⟩) and closure with respect to this norm gives us a Hilbert space ℋ. A Hilbert space is called separable if it admits a countable complete orthonormal basis. §.§ Operators on Hilbert space: C^∗-algebra C^*-algebras play a crucial role in the algebraic approach to quantum mechanics, as they provide a rigorous mathematical framework that unifies various aspects of the theory. Their importance lies in their ability to naturally model the algebra of bounded linear operators on a Hilbert space, capture the properties of quantum observables and states, and facilitate the development of powerful mathematical tools for analyzing quantum systems. Observables, being self-adjoint operators, naturally form a C^*-algebra. A Banach ∗-algebra 𝒜 is a Banach algebra that is also closed under involution ∗:𝒜→𝒜, where (a^*)^* = a, and is antihomomorphism: (ab)^* = b^*a^* ∀ a, b ∈ A. Further, 𝒜 is a C^∗-algebra if A^∗ A=A^2. We can also think of a compatibility condition between the norm and involution: a^* = a ∀ a,∈A. Every C^∗-algebra may be represented as a sub-algbera of the algebra of bounded operators ℬ(ℋ) on a Hilbert space ℋ. § PROJECTORS AND SPECTRAL THEORY Projectors in ℬ(ℋ) play a crucial role in the classification of algebra, as we will see their use in section (<ref>). In essence, they form a basis for an algebra <cit.>. A projector is a positive operator P∈ℬ(ℋ), which means it has the form P=U^*U. Further, it satisfies the idempotent condition P^2=P. A partial isometry U∈ℬ(ℋ) is such that U^∗ U=E and UU^∗=F where both E and F are projectors. In particular, U is an isometry if E=I, a co-isometry if F=I and unitary if E=I=F, where I is the identity operator <cit.>. If two projectors E and F are related by a partial isometry in this way we say that they are equivalent and write E∼ F. We can see how a linear transformation leads to definitions of rank-1 projectors by defining U = |ϕ⟩⟨ψ|, E = |ϕ⟩⟨ϕ|, and F = |ψ⟩⟨ψ|: UU^* = |ϕ⟩⟨ψ|ψ⟩⟨ϕ| = |ϕ⟩⟨ϕ| = E and UU^* = |ψ⟩⟨ϕ|ϕ⟩⟨ψ| = |ψ⟩⟨ψ| = F. A self-adjoint operator satisfies x=x^∗. Such an operator may be written as the difference of two positive operators. The spectral decomposition of a self-adjoint operator x is given by: x = ∑ x_i E_i when the spectrum of x, {x_i}⊂ℝ is countable. The operators E_i are pairwise orthogonal projectors. The spectral decomposition for operators with continuous eigenbasis: x = ∫ x d E(x) (x ∈ℝ) where dE(x) is a not a projector, but the integrals ∫_Δ dE(x), where Δ⊂ℝ have non-zero measure with respect to dx, are projectors. Observables are given by self-adjoint operators. Say we have an observable x and we observe the value x_k and infer that the system is subsequently in the state |ϕ_k⟩ which is an eigenvector of x and defines the projection valued measure (PVM) E_i =|ϕ_i⟩⟨ϕ_i|, then: x |ϕ_k⟩ = ∑ x_i |ϕ_i⟩⟨ϕ_i|ϕ_k⟩= x_k |ϕ_k⟩, where observation of the eigenvalue x_k is believed to result in the collapse of the wavefunction ϕ↦ E_i ϕ/E_iϕ=ϕ_i. 
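The rank-1 projectors induced by a partial isometry can be checked directly in a small example; the following sketch (with an arbitrary normalised |ψ⟩, and the labels P_phi, P_psi chosen here for clarity) verifies idempotence, self-adjointness and the partial-isometry relations.

```python
import numpy as np

phi = np.array([1.0, 0.0])                    # |phi>
psi = np.array([0.6, 0.8])                    # |psi>, normalised

U = np.outer(phi, psi.conj())                 # U = |phi><psi|, a partial isometry
P_phi = U @ U.conj().T                        # |phi><phi|
P_psi = U.conj().T @ U                        # |psi><psi|

for P in (P_phi, P_psi):
    print(np.allclose(P @ P, P),              # idempotent
          np.allclose(P, P.conj().T))         # self-adjoint, hence a rank-1 projector
print(np.allclose(U @ P_psi, U),              # U acts isometrically on ran(P_psi)
      np.allclose(P_phi @ U, U))
```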
A positive operator P is also self-adjoint, and the spectrum of a positive operator is non-negative: spec(P)≥ 0. Notice that if P is a projector we have specP⊆{0,1}. This spectral positivity of such positive operators allows us to order operators as E≤ Q, which means that Q-E≥ 0, which means that Q-E is a positive operator. To describe algebras it is useful to introduce a preorder ≼. For two projectors we say that P≼ Q if P∼ E≤ Q. We explain this by example: let 𝒜 = ℳ_3, and let projectors P,Q∈𝒜 with P≼ Q. such as: Q=[[ 1 0 0; 0 1 0; 0 0 0 ]]=[[ 1 0 0; 0 0 0; 0 0 0 ]]+[[ 0 0 0; 0 1 0; 0 0 0 ]] so we call Q a rank 2 projector. On the other hand consider: P=1/3[[ 1 1 1; 1 1 1; 1 1 1 ]] = |ξ⟩⟨ξ| with |ξ⟩=1/√(3)[[ 1; 1; 1 ]] so P is a rank 1 projector. Hence P ≼ Q, [the symbol ≼ reads P is equivalently less than Q, which means P has a fewer number of projections. This is not quite like “less than".] which means that there is a projector E that is of the same rank as P and write P∼ E, such that E≼ Q and that implies 0≤ Q-E, which means that Q-E is a positive operator with non-negative eignevalues. E can be defined as: E=[[ 1 0 0; 0 0 0; 0 0 0 ]] because both of them are of rank 1, and it can be shown that they are related through the partial isometry U: U=1/√(3)[[ 1 0 0; 1 0 0; 1 0 0 ]] where P= UU^* and E= U^* U. Both of these projectors are examples of minimal <cit.> projections, which is defined as follows. A minimal projector E∈𝒜 is one that can be expressed as E 𝒜 E=ℂ E Let 𝒜 = ℳ_3, the E, as given above, is a minimal projector: E 𝒜 E = [[ 1 0 0; 0 0 0; 0 0 0 ]][[ * * *; * *; * * ]][[ 1 0 0; 0 0 0; 0 0 0 ]] = [[ * 0 0; 0 0 0; 0 0 0 ]]=ℂ E where * is an arbitrary complex number. Note that if we introduce R ∼ E ∼ P, such that: R=[[ 0 0 0; 0 0 0; 0 0 1 ]] then Q-R ≱0, because it contains a negative eigenvalue. So we see that Q ≱ R, but Q ≽ R. §.§ Quantum Mechanics in Hilbert Space A quantum system is defined on an associated complex Hilbert space ℋ. We can assume that the states of the system are all the positive, normalised, linear maps ω : 𝒜→ℂ, where 𝒜⊆ℬ(ℋ). States may be given by a density operator ϱ which is a positive, trace-class operator. An operator T is said to be trace class if for T^† T := ϱ^2, with ϱ^† = ϱ=|T|, we have Tr[ϱ] < ∞. That is, ω(a):=Tr[ϱ a], and a state ω is called pure if ∃ normalised ψ∈ℋ such that: ω(a)=⟨ψ|a|ψ⟩, ∀ a∈𝒜, such that ϱ=|ψ⟩⟨ψ|. As a result, for each pure state ω, we can connect it to an element ψ within the Hilbert space. However, this association does not create a unique, one-to-one relationship between the two. This lack of a direct one-to-one correspondence is essential to consider when working with the algebraic approach to quantum mechanics and understanding the underlying structure of quantum systems. §.§.§ Expectation Value in terms of ϱ We have motivated the discussion on the density matrix by mentioning that is used widely in physics. In particular, it can be used to describe states in a system, and that it offers a way to calculate the expected value. The expectation value is the probability of an outcome of observable O when measured in state ψ, and it's defined per Born rule: ⟨ O ⟩_ψ = ⟨ψ | O | ψ⟩ Where the lower index represents the state. In this example, the state ψ can be represented in basis in ℋ as: | ψ⟩ = ∑_n a_n | ϕ_n ⟩ Putting equations <ref> and <ref> together, we can express the expectation value as: ⟨ O ⟩_ψ = ∑_n p_n ⟨ϕ_n |O| ϕ_n ⟩ Where p_n = |a_n|^2 = a_n ^* a_n. 
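Returning to the projector preorder, the 3×3 example above can be verified numerically; the following sketch checks that P∼ E via the stated partial isometry, that Q-E is positive (so P≼ Q), and that Q-R fails to be positive.

```python
import numpy as np

Q = np.diag([1.0, 1.0, 0.0])                       # rank-2 projector
E = np.diag([1.0, 0.0, 0.0])                       # rank-1 projector
P = np.full((3, 3), 1.0 / 3.0)                     # |xi><xi|, xi = (1,1,1)/sqrt(3)
R = np.diag([0.0, 0.0, 1.0])
U = np.zeros((3, 3)); U[:, 0] = 1.0 / np.sqrt(3)   # the partial isometry given above

print(np.allclose(U @ U.T, P), np.allclose(U.T @ U, E))    # P = UU*, E = U*U, so P ~ E
print(np.linalg.eigvalsh(Q - E).min() >= -1e-12)           # Q - E >= 0, hence P precedes Q
print(np.linalg.eigvalsh(Q - R).min() >= -1e-12)           # False: Q - R is not positive
```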
Also, for any operator O ∈𝒜 we can express this expectation value in terms of the density matrix a follows: ⟨ O ⟩_ψ = Tr[ϱ_ψ O] = Tr[| ψ⟩⟨ψ | O] This is the first use of the density matrix we have encountered in this essay thus far. We shall later see how it plays a viable role in representing entropy. Quantum mechanics makes use of the mathematical structure of Hilbert space to describe the state space of a quantum system. The set of all possible wave functions of a quantum system is called the state space of the system. The state space of a quantum system is a Hilbert space, and the wave function of the system is a vector in that Hilbert space. The mathematical structure of the Hilbert space allows us to use the tools of linear algebra, such as vector addition and scalar multiplication, to describe the state of a quantum system and the transformations between different states. The inner product of two vectors in the Hilbert space is used to calculate the probability of finding the system in a particular state, and the norm of a vector in the Hilbert space is used to define the concept of distance between two wave functions. More generally, the probability that a measurement <cit.> of an observable A on a system that is in the state ω yields a result in the Borel set E ⊆ℝ is given by: μ_ω^A(E):=Tr[P_A(E)ϱ], Where the map P_A: Borel(ℝ) →ℒ(ℋ) serves as a special connection between Borel-measurable subsets of the real numbers and Banach space of bounded linear maps on the Hilbert space, and it is uniquely determined by the self-adjoint map A according to the principles of the spectral theorem <cit.>. More on this will follow in section (<ref>). §.§ Density Matrix ϱ The density matrix is a powerful tool in physics. As we have seen in section <ref>, it allows us to describe systems that are in pure and mixed states, and it provides a way of calculating the expected values of observables for systems <cit.>. For example, a quantum field could be in a state where it has some particles with a certain energy and some `particles' with another energy. It provides a way of encoding this information about the distribution of particles over different states and it provides a way of calculating the expectation values of observables for the field. We can define the density matrix for a state ω on ℳ^2, as ρ = R^† R≥ρ^2, such that for a∈ℳ_2 we have ω(a)=Tr[ρ a]. We have R =[ √(p_1)⟨ψ_1 |; √(p_2)⟨ψ_2 |; ] , where the probability weight 0 ≤ p_i ≤ 1 and ∑_i p_i =1, and |ψ_i⟩ lives in the Hilbert space ℂ^2. * In the case ρ^2 = ρ : ρ is a projector | ψ⟩⟨ψ | corresponding to a pure state. A state is pure when p_1= 1 and p_2 =0, or the opposite. * In the case ρ^2 < ρ : ρ= p_1 | ψ_1 ⟩⟨ψ_1 | + p_2 | ψ_2 ⟩⟨ψ_2 | corresponding to a mixed state. A maximally mixed state is when p_1= p_2 = 1/2. We can define a separable density matrix for a 2-qubit system given by the Hilbert space ℂ^2⊗ℂ^2=ℂ^4∋Ψ_i=ψ_i⊗ϕ_i, as ϱ = ∑_i p_i | ψ_i ⟩⟨ψ_i|⊗ | ϕ_i ⟩⟨ϕ_i| We see that ϱ has the form of a sum between distinct separable pure states |ψ_i⟩⟨ψ_i|⊗|ϕ_i⟩⟨ϕ_i|. A state that is not separable is called entangled. 
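The pure/mixed distinction via ρ^2 can be illustrated with the two-level example just described; the following sketch (basis vectors and probability weights as above) checks whether ρ^2=ρ and evaluates the purity Tr[ρ^2].

```python
import numpy as np

psi1 = np.array([1.0, 0.0])
psi2 = np.array([0.0, 1.0])

def rho_of(p1, p2):
    """Density matrix p1 |psi1><psi1| + p2 |psi2><psi2|."""
    return p1 * np.outer(psi1, psi1) + p2 * np.outer(psi2, psi2)

for p1, p2 in ((1.0, 0.0), (0.5, 0.5)):
    rho = rho_of(p1, p2)
    purity = np.trace(rho @ rho)
    print(np.allclose(rho @ rho, rho), purity)   # pure: True, 1.0; maximally mixed: False, 0.5
```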
A separable mixed density matrix of 2 qubits given in the Z-basis (or standard basis) | ψ_1 ⟩= | ϕ_1 ⟩ = [ 1; 0; ] and | ψ_2 ⟩ = | ϕ_2 ⟩ =[ 0; 1; ] is given by: ϱ = ∑^2_i=1 p_i (| ψ_i ⟩⟨ψ_i| ⊗ | ϕ_i ⟩⟨ϕ_i |) = p_1 (| ψ_1 ⟩⟨ψ_1| ⊗ | ϕ_1 ⟩⟨ϕ_1 |) + p_2 (| ψ_2 ⟩⟨ψ_2| ⊗ | ϕ_2 ⟩⟨ϕ_2 |) = p_1 [ 1 0; 0 0; ]⊗[ 1 0; 0 0; ] + p_2 [ 0 0; 0 1; ]⊗[ 0 0; 0 1; ] = [ p_1 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 p_2; ] In the second line we clearly see that this density state is expressed in terms of a sum of separable pure states, thus it's mixed. Otherwise, Ψ∈ℂ^2⊗ℂ^2 would be in Ψ=∑ c_iΨ_i. We can define a general entangled pure states density matrix as: ϱ = ∑_i,j c_i c_j| Ψ_i ⟩⟨Ψ_j| An entangled state of 2 qubits given in the Z-basis (or standard basis) | ψ_1 ⟩= | ϕ_1 ⟩ = [ 1; 0; ] and | ψ_2 ⟩ = | ϕ_2 ⟩ =[ 0; 1; ], where |Ψ⟩ = ∑ c_i |ψ_i⟩⊗|ϕ_i⟩ is given by ϱ = |Ψ⟩⟨Ψ| = (∑_i c_i | ψ_i ⟩⊗ | ϕ_i ⟩)(∑_jc_j⟨ψ_j|⊗⟨ϕ_j |)= ∑_i,j c_i c_j(| ψ_i ⟩⟨ψ_j| ⊗ | ϕ_i ⟩⟨ϕ_j |) = c_1 c_1(| ψ_1 ⟩⟨ψ_1|⊗ | ϕ_1 ⟩⟨ϕ_1 |) + c_2 c_2(| ψ_2 ⟩⟨ψ_2|⊗ |ϕ_2 ⟩⟨ϕ_2|) + c_1 c_2(| ψ_1 ⟩⟨ψ_2| ⊗ | ϕ_1 ⟩⟨ϕ_2 |) + c_2 c_1(| ψ_2 ⟩⟨ψ_1|⊗ | ϕ_2 ⟩⟨ϕ_1 |) = |c_1|^2 [ 1 0; 0 0; ]⊗[ 1 0; 0 0; ] + |c_2|^2 [ 0 0; 0 1; ]⊗[ 0 0; 0 1; ] + c_1 c_2[ 0 1; 0 0; ]⊗[ 0 1; 0 0; ] + c_2 c_1[ 0 0; 1 0; ]⊗[ 0 0; 1 0; ] = [ p_1 0 0 c_1 c_2; 0 0 0 0; 0 0 0 0; c_2 c_1 0 0 p_2; ] (pure) In the third line we have the interference terms. To arrive at the final matrix we used the substitution |c_1|^2 = p_1 and |c_2|^2 =p_2. Since the outcome can be expressed in terms of the outer product of states, the density matrix is pure and entangled. Entanglement refers to the property of a composite quantum system consisting of two or more subsystems that cannot be described independently of each other. If the density matrix of the composite system cannot be written as a tensor product of the density matrices of the individual subsystems, then the subsystems are said to be entangled<cit.>. To give an intuition of the matrix element (| ψ⟩⟨ψ|) shows up, consider the action of an observable O on a vectorstate| ψ⟩. Since the operator has orthonormal basis, we can expand our quantum state in the following eignenbasis: Ô | ψ⟩ = Ô∑_i c_i | O_i ⟩ = ∑_i c_i Ô | O_i ⟩ = ∑_i c_i O_i | O_i ⟩ = ∑_i ⟨ O_i| ψ⟩ O_i | O_i ⟩ = ∑_i O_i | O_i ⟩⟨ O_i| ψ⟩ = (∑_i O_i | O_i ⟩⟨ O_i|) | ψ⟩ Where in the second line we made use of linearity. Third line we substituted the operator with the corresponding eigenvalue. Fourth line we solved for the coefficient c_i = ⟨ O_i| ψ⟩. Fifth line we moved inner product to the right. Since | ψ⟩ is the same for every term of the same, we can pull it out. For observables with continuous eigenbasis: x̂= ∫ dx (x | x ⟩⟨ x|) The action of the observable on any | ψ⟩ is the same as the action of the sum of the operators in the braket. Generalizing this, the density matrix (<ref>) can be obtained by defining a projection operator P̂_̂î = p_i | ψ_i ⟩⟨ψ_i|, with p_i being the probability weight, such that ∑_i p_i =1 and summing over all possible states. As a matter of convention, we will be dropping the hat from the density matrix ρ̂ from now on. §.§ Observables & Representations in Hilbert Space Similar to how one vector can be expressed in different bases and coordinate systems, such as Cartesian or spherical coordinates, a state-vector | ψ⟩ is a member of ℋ. While vectors are invariant with respect to the chosen basis, their components are covariant. The observables of a quantum system are represented by operators, which are linear transformations on the state space of the system. 
Observables are physical quantities that can be measured, such as position, momentum, energy, and entropy. A representation of a physical system is a way of describing the state space of the system and the observables of the system using a specific mathematical structure. In the case of Hilbert space, a representation is a way of describing the state space and observables of a physical system using a specific Hilbert space, along with a set of operators that correspond to the observables of the system. Operators are linear transformations on the state space of the system. For all | ψ⟩ , | ϕ⟩ ∈ℋ and ⟨ψ | , ⟨ϕ | ∈ℋ^*, an operator A corresponds to an observable A if and only if it satisfies the following conditions: (i) A (a |ψ⟩ + b | ψ⟩) = a A|ψ⟩ + b A | ψ⟩ (linearity. ∀ a,b ∈ℂ) (ii) ⟨ψ | A | ψ⟩ ∈ℂ (Hermicity) (iii) ⟨ψ | A | ϕ⟩ = ⟨ϕ | A | ψ⟩ (self-adjointness) There are unaccountably many representations in Hilbert space, but the most common ones are the position representation and the momentum representation. The Stone-von Neumann, which will be discussed in section (<ref>), states that any two irreducible representations of the canonical commutation relation are unitarily equivalent, meaning that they are essentially the same up to a unitary transformation In essence, [x̂,p̂] yields the same value regardless of the space they are represented on. The position representation is a representation of a quantum system in which the state space of the system is represented by a Hilbert space of functions of position. In this representation, the wavefunction of a system is a function of position, denoted by ψ(x), and the position operator, denoted by x̂. The action of the position operator on a wave function is simply multiplication by the position coordinate: x̂ |ψ(x) ⟩ = x |ψ(x)⟩, Where x̂ is the position operator, x is the eigenvalue corresponding to the eigenstate |ψ(x)⟩. The momentum operator, denoted by p̂, is represented by the derivative operator. The action of the momentum operator on a wave function is given by the derivative of the wave function with respect to position: p̂ |ψ(x) ⟩ = -i ħ d/dx |ψ(x)⟩, where p̂=-i ħ d/dx is the momentum operator and its eigenvalues are given on momentum eigenstates in the momentum representation. The momentum representation is a representation of a quantum system in which the state space of the system is represented by a Hilbert space of functions of momentum. In this representation, the wave function of a system is a function of momentum, denoted by ϕ (p), and the momentum operator, denoted by p̂. The action of the momentum operator on a wave function is given by the derivative of the wave function with respect to position: p̂ |ϕ(p)⟩ = p |ϕ(p)⟩, where p̂ is the momentum operator, p is the eigenvalue corresponding to the momentum operator, and ϕ(p) is the complex eigenstate. The position operator is represented by the derivative operator here. The action of the position operator on a wavefunction is given by the derivative of the wavefunction with respect to the momentum: x̂ |ϕ(p) ⟩ = i ħ d/dp |ϕ(p)⟩. We shall see in section (<ref>) that there's a link between ψ(x) and ϕ(p), as they stem from the same vector. § UNIQUENESS OF REPRESENTATIONS As quantum mechanics was developing, it was important to ask: Can one begin with a classical theory, apply the quantization principles, and ultimately obtain two distinct quantized theories? <cit.> This question was answered by Marshall Stone and John von Neumann. 
Their findings' direct implication is that all quantized theories derived through canonical quantization from a classical theory are physically indistinguishable from one another. In the following subsection, we will delve deeper into the implications of this result. §.§ Stone's Theorem Stone's theorem shows there is a one-to-one correspondence between self-adjoint operators and strongly continuous one-parameter unitary groups <cit.>. We can motivate Stone's theorem by asking the following two questions: * How are observables constructed? * Given that U(t) ∘ U(s) = U(t+s), and U(0) = id_ℋ, how arbitrary is the stipulation that the unitary dynamics is controlled by U(t) = e^-itH, for some self-adjoint operator H? Skipping ahead, the answers to both of these questions are provided by Stone's theorem: (1) observables come naturally from the generators of a group; (2) it is not arbitrary at all – it is fixed. We can show this by considering a one-parameter, unitary, strongly continuous group: G := {U(t): ℋ→ℋ | t∈ℝ, U^*(t) U(t) = id_ℋ} and the strong continuity condition: ∀ ψ∈ℋ: lim_t → t_0 (U(t) ψ) = U(t_0) ψ. Now we can define a unitary, Abelian, one-parameter, strongly continuous group (UAOPG) U(·): ℝ→ℒ(ℋ) which preserves the norm of any ψ in the Hilbert space. The generator of this group would be: A: 𝒟^Stone_A →ℋ where 𝒟_A^Stone :={ψ∈ℋ|lim _ε→ 0i/ε(U(ε) ψ-ψ) exists } such that ψ↦ A ψ:=lim _ε→ 0i/ε(U(ε) ψ-ψ). Note that this object is equivalent to taking U(·) and applying it to ψ, so we get a map U(·) ψ: ℝ→ℋ; we then take the derivative of this object and evaluate it at zero. We can express this as: lim _ε→ 0i/ε(U(ε) ψ-ψ)=i lim _ε→ 0U(0+ε) ψ-U(0) ψ/ε:=i[U(·) ψ]^'(0). So we can express (<ref>) more compactly as: 𝒟_A^S ≡𝒟_A^Stone :={ψ∈ℋ| i[U(·) ψ]^'(0) exists}. Stone's theorem says that if we start from a UAOPG, which could correspond to translation, rotation etc., we can obtain self-adjoint generators corresponding to quantum mechanical observables: position, momentum, angular momentum etc. In other words, for a UAOPG U(·), its generator A: 𝒟^S_A →ℋ is self-adjoint on 𝒟_A^S. The generators of the group may not be defined on the whole of the Hilbert space, but on the Stone domain they are self-adjoint (observable). Moreover, we can reconstruct the group from its generator as: U(t) = e^-itA Proving Stone's theorem <cit.> enables us to verify the following: * The generator A is densely defined, i.e. the domain of A is dense in the Hilbert space. From that we can see that A^* is also well defined. * A is symmetric and essentially self-adjoint. * U(t) = e^-itA^**, from which we can conclude that A is the closure of itself, A=A^**. Position operator. For the Hilbert space ℋ=L^2(ℝ^3), the group U(t): ℋ→ℋ has generator x̂. The action of U(t) on a vector ψ introduces a phase: (U(t) ψ) (x) := e^-itxψ (x) and we can see that (1) U(t)U(s) = U(t+s), (2) U(0) = id_ℋ, and (3) || U(t) ψ|| = ||ψ||. Thus, by Stone's theorem, the generator: x̂ : 𝒟^S_x̂→ℋ is self-adjoint on the domain 𝒟^S_x̂. It acts on a vector ψ∈𝒟^S_x̂ as: (x̂ψ) (x) := x ψ (x) and we clearly see that we have constructed the position operator.
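In finite dimensions, the content of Stone's theorem can be made very concrete: exponentiating a Hermitian matrix gives a strongly continuous one-parameter unitary group, and differentiating that group at t = 0 recovers the generator. The sketch below (Python with NumPy/SciPy; the dimension, seed, and matrix are arbitrary and not tied to any particular physical system) checks the group law, unitarity, and the recovery of A from i(U(ε)ψ − ψ)/ε.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# a Hermitian generator A acting on C^5 (finite-dimensional stand-in for an observable)
B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = (B + B.conj().T) / 2

def U(t):
    return expm(-1j * t * A)                         # U(t) = exp(-itA)

t, s = 0.7, -1.3
print(np.allclose(U(t) @ U(s), U(t + s)))            # group law U(t)U(s) = U(t+s)
print(np.allclose(U(t).conj().T @ U(t), np.eye(5)))  # unitarity

# recover the generator from the derivative at t = 0
psi = rng.normal(size=5) + 1j * rng.normal(size=5)
eps = 1e-6
approx = 1j * (U(eps) @ psi - psi) / eps             # i(U(eps)psi - psi)/eps
print(np.linalg.norm(approx - A @ psi))              # small, of order eps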
§.§ Stone-von Neumann Theorem The Stone-von Neumann uniqueness theorem generalizes Stone's theorem to a pair of self-adjoint operators <cit.>. It states that there is, up to unitary equivalence, a unique irreducible way to represent the canonical commutation relations for a system with a finite number of degrees of freedom. Therefore, there are no other interesting representations of this algebra. Note that the theorem only applies to systems with a finite number of degrees of freedom. This result is essential for ensuring the consistency of quantum mechanics. A proof can be found in <cit.>. For example, in the position representation, x̂ is represented as a multiplication operator, as shown in (<ref>), and p̂ is represented as the differential operator, as shown in (<ref>). The canonical commutation relation would be: [x̂ , p̂ ] = [x , -i ħ d/dx] = i ħ, whereas if we wish to switch the roles of x̂ and p̂, making the former the differential operator, as in (<ref>), and the latter the multiplication operator, as in (<ref>), we essentially go to the momentum representation, where the canonical commutation relation would be: [x̂ , p̂ ] = [i ħ d/dp , p] = i ħ An irreducible representation is one that has no non-trivial invariant subspaces, i.e. it cannot be decomposed into a direct sum of smaller representations. For example, consider the group of rotations in three-dimensional space, denoted by SO(3). A representation of SO(3) associates a matrix with each element of the group, in such a way that the group structure is preserved. An irreducible representation of SO(3) is one that cannot be written as a direct sum of smaller representations, and in this case, it corresponds to a set of matrices that cannot be simultaneously block-diagonalized into smaller matrices. We note here that the Stone-von Neumann theorem establishes the equivalence of irreducible representations of the canonical commutation relations in QM. However, this theorem does not extend to QFT, where there are multiple inequivalent irreducible representations. This leads to a much richer algebraic structure in QFT than in QM. §.§ Fourier link between ψ(x) and ϕ(p) spaces The Fourier transformation provides a map between two parallel descriptions of quantum mechanics. It provides a way to construct different representations of the CCR, which can be shown to be unitarily equivalent to each other. Abstractly, the quantum state is represented by a state vector | ψ⟩. The position wavefunction ψ(x) and momentum wavefunction ϕ(p) are two different representations of that same vector in the position and momentum bases, respectively. The Fourier transform is a unitary operator that maps the position and momentum operators in one representation to a new set of operators in a different representation. ψ(x) = 1/√(2 π)∫_-∞^∞ dp e^ipxϕ(p) and the inverse Fourier transformation of the wavefunction: ϕ(p)= 1/√(2 π)∫_-∞^∞ dx e^-ipxψ(x) This clearly illustrates that there is a 1-1 correspondence between the |x ⟩ and |p ⟩ descriptions. By applying the Fourier transform repeatedly, we can construct an infinite number of representations of the CCR, each related to the previous one by a unitary transformation. However, as the Stone-von Neumann theorem states, all irreducible representations of the CCR for a system with finitely many degrees of freedom are unitarily equivalent. Therefore, in that case the Fourier transform cannot give rise to inequivalent representations of the CCR. This is because all irreducible representations of the CCR are equivalent to the Schrödinger representation <cit.>, which is uniquely defined. In the case of infinitely many degrees of freedom, however, the situation is more complicated, and the Stone-von Neumann theorem does not apply. In this case, the Fourier transform can give rise to inequivalent representations of the CCR, and the choice of representation can have physical consequences.
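The unitarity of the Fourier link between the two wavefunctions can be illustrated numerically on a discretised line. The sketch below (NumPy; the grid size, box length, and Gaussian test function are arbitrary choices) checks that the discrete transform preserves the norm (Plancherel) and that transforming back and forth returns the original wavefunction, mirroring the pair of integral formulas above.

import numpy as np

# discretised position grid and a normalised Gaussian wave packet
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi_x = np.exp(-x**2 / 2 + 2j * x)                 # Gaussian carrying momentum ~ 2
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)

# discrete analogue of phi(p) = (2*pi)^(-1/2) * Int dx e^{-ipx} psi(x)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi_p = np.fft.fft(psi_x) * dx / np.sqrt(2 * np.pi)
phi_p *= np.exp(1j * p * L / 2)                    # phase from the grid starting at x = -L/2

dp = 2 * np.pi / (N * dx)
print(np.sum(np.abs(phi_p)**2) * dp)               # ~1: the map is norm-preserving (Plancherel)

# inverse transform recovers psi(x)
psi_back = np.fft.ifft(phi_p * np.exp(-1j * p * L / 2)) * np.sqrt(2 * np.pi) / dx
print(np.allclose(psi_back, psi_x))                # True up to floating-point error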
CHAPTER: ALGEBRAIC QUANTUM FORMALISM In this section, we will examine the “algebraic approach" to quantum mechanics, which is a method that delves into the principles of quantum mechanics through the lens of the mathematical theory of algebras of operators. The concept of Algebraic Quantum Mechanics was developed to provide a mathematically rigorous theory for quantum systems with an infinite number of degrees of freedom, such as those encountered in quantum field theory and quantum statistical mechanics. This approach goes beyond the traditional Hilbert space framework and utilizes operator algebras, such as von Neumann algebras and C*-algebras, to describe the observables of the system. The algebraic approach to quantum mechanics places an emphasis on understanding the abstract structure of the set of observables, which are the measurable quantities of a system, as well as the set of states, which describe the possible configurations of a system. In this framework, we are particularly interested in the rules governing the probabilities associated with the measurements of observables over states, which dictate how the system behaves when an observable is measured. By employing the algebraic approach, we can gain deeper insights into the underlying principles of quantum mechanics and develop a more solid foundation for understanding the often counter-intuitive behavior of quantum systems. This approach also enables us to analyze quantum mechanics from a purely mathematical standpoint, which can prove invaluable when attempting to solve complex problems or develop new theories in the field. § OPERATOR ALGEBRAS In quantum mechanics, the observables of a system are represented by operators acting on a state | ψ⟩ in the Hilbert space ℋ. In QFT, since the system has an infinite number of degrees of freedom, there are an infinite number of observables. The operator algebra approach provides a mathematical framework for studying these observables. It is possible to describe to quantum systems without directly referring to a vector space. Quantum mechanics has no standard basis, so working with the abstract operators directly can give more insight than representing them in a Hilbert space. The observables of a system can be organized into an algebra, called the algebra of observables, which includes the identity operator, linear combinations of observables, and products of observables. The operator algebra is important because it provides a rigorous mathematical structure for studying QFT. It also allows us to study the algebraic properties of observables, such as commutation and anti-commutation relations, which are important for understanding the behavior of the system. Additionally, the operator algebra approach provides a powerful tool for studying the properties of QFT on curved spacetimes and in the presence of external fields. The algebraic structure of QFT is closely related to von Neumann algebras, which are a type of operator algebra that arise naturally in the study of QFT. These algebras play a central role in the operator algebra approach to QFT, as they provide a framework for studying the algebraic properties of observables. The algebra of observables refers to the mathematical structure which describes the set of observables of a quantum system and the relations between them. In this we go into characterizing this algebra into different types of von Neumann algebras, their properties will be detailed. 
§ COMMUTANTS For an algebra 𝒜⊆ℬ(ℋ), its commutant 𝒜' is defined as the set of all bounded operators that commute with 𝒜: 𝒜' = {B ∈ℬ(ℋ) | [A,B] = 0 , ∀ A ∈𝒜} For the full bounded algebra ℬ(ℋ) in dimension d, the commutant is trivial and isomorphic to the complex numbers. Consider ℋ = ℂ^3, then the bounded operators ℬ(ℋ) = ℳ_3. The most general form of an element of ℳ_3 is: ℳ_3 = [ a b c; d e f; g h i ] Then its commutant ℳ'_3 is: ℳ'_3 = [ u 0 0; 0 u 0; 0 0 u ] = u [ 1 0 0; 0 1 0; 0 0 1 ] = ℂ 𝕀_3 where the elements of the matrix ∈ℂ. If 𝒜 is commutative, then 𝒜⊆𝒜' and 𝒜⊆𝒜”. Consider a commutative algebra 𝒜⊂ℳ_3 with elements of the form [ a 0 0; 0 a 0; 0 0 b ]∈𝒜 Then its commutant 𝒜' has the form 𝒜' = [ a b 0; c d 0; 0 0 v ]≅ℳ_2 ⊕ℂ We clearly see that 𝒜⊆𝒜'. If 𝒜 = ℂ 𝕀, then 𝒜' = ℬ(ℋ). Consider a general element in 𝒜 such that: 𝒜∋[ a 0 0; 0 a 0; 0 0 a ]≅ℂ Then its commutant is: 𝒜' ∋[ a b c; d e f; g h i ]≅ℳ_3 We see that 𝒜' = ℬ(ℋ). Providing a preview of the forthcoming content in section (<ref>): if we repeat the above process, we see that 𝒜⊆𝒜”. If the equality holds, i.e. the algebra satisfies the bicommutativity condition 𝒜 = 𝒜”, we call it a von Neumann algebra. § VON NEUMANN ALGEBRA A subalgebra 𝒜⊆ℬ(ℋ) is a von Neumann algebra iff its bicommutant satisfies 𝒜”=𝒜. Von Neumann algebras play a central role in axiomatic approaches to quantum field theory and statistical mechanics. The underlying concept is that the observables in a physical theory should exhibit an algebraic structure. In the context of QFT, von Neumann algebras play a crucial role in describing the algebraic structure of observables <cit.>. A von Neumann algebra is a C*-subalgebra of the bounded linear operators on a Hilbert space, closed in the weak operator topology. Given a subset 𝒜 of the bounded operators ℬ(ℋ), a von Neumann algebra can be defined as a set that satisfies the following: * 𝒜 is an algebra: 𝒜 is closed under addition, operator and scalar multiplication, and contains the identity. * 𝒜 is a *-algebra: 𝒜 is closed under Hermitian conjugation. * 𝒜 is closed with respect to the strong operator topology: if a sequence of operators a_n ∈𝒜 converges strongly to an operator a as n →∞, i.e. ∀|ψ⟩ lim _n →∞ a_n|ψ⟩=a|ψ⟩, then a ∈𝒜. Additionally, if the center of 𝒜 is trivial, i.e. it consists only of complex multiples of the identity, we say that 𝒜 is a von Neumann factor. The center of a von Neumann algebra is the set of all elements that commute with every element in the algebra. A von Neumann algebra is called a “factor" if its center consists only of scalar multiples of the identity operator. The nice thing about factors is that they are the building blocks of von Neumann algebras, as any von Neumann algebra that is not a factor can be written as a direct sum/integral of factors. Von Neumann factors can be classified into three types (Type I, Type II, and Type III) based on their properties and representation theory <cit.>. A rough classification is summarized in the table below:

Type      | Projectors           | Trace                  | Irreducible representation in ℋ
Type I_d  | Minimal projectors   | Defined                | Irreducible
Type I_∞  | Minimal projectors   | Undefined              | Irreducible
Type II_1 | Finite & non-minimal | Defined but not unique | Does not exist
Type II_∞ | Finite & non-minimal | Defined but not unique | Does not exist
Type III  | Non-minimal          | Undefined              | Does not exist

Factors are particularly interesting because they have unique properties that are not shared by non-factor von Neumann algebras.
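The commutant examples above — the algebra generated by diag(a, a, b) and its bicommutant — can be reproduced by brute force: treating [A, B] = 0 as a linear system for the entries of B and computing its null space. The sketch below (NumPy; the helper name commutant_basis is illustrative, not a library routine) recovers the five-dimensional block structure ℳ_2 ⊕ ℂ for the commutant, and shows that taking the commutant twice returns the original two-dimensional algebra, as the bicommutant condition requires.

import numpy as np

def commutant_basis(generators, n):
    """Basis (as n x n matrices) of {B : [A, B] = 0 for every generator A}."""
    rows, I = [], np.eye(n)
    for A in generators:
        # row-major vec: vec(AB - BA) = (kron(A, I) - kron(I, A^T)) vec(B)
        rows.append(np.kron(A, I) - np.kron(I, A.T))
    M = np.vstack(rows)
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-10))
    return [v.reshape(n, n) for v in Vh[rank:]]    # null-space vectors as matrices

gens = [np.diag([1.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])]   # generate diag(a, a, b)
basis = commutant_basis(gens, 3)
print(len(basis))                                  # 5 = dim(M_2 (+) C)

bicomm = commutant_basis(basis, 3)                 # commutant of the commutant
print(len(bicomm))                                 # 2: spanned by diag(1,1,0) and diag(0,0,1)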
The classification of von Neumann algebras is important in QFT because it is related to the structure of spacetime. §.§ Type I Type I von Neumann algebras are the simplest type and include finite and infinite factors. They arise in the algebraic description of quantum mechanics and are closely related to the Stone-von Neumann theorem. Type I factors are the algebra of all bounded operators that act irreducibly on ℋ, either on ℋ itself, or some smaller subsystem ℋ_𝒜 such that ℋ = ℋ_𝒜⊗ℋ_ℬ. They can be labelled by the dimension d of the Hilbert space ℋ. There is a unique (up to isomorphism) Type I_d factors for d ∈ℕ. In the case that ℋ has infinite dimensions, the algebra of bounded operators is of Type I_∞. Moreover, Type I factors have minimal projector, i.e. generated by a single projection (an operator that is its own square and is self-adjoint) and are therefore commutative. The uniqueness of the projection property of type I factors is due to the fact that they have a unique cyclic and separating vector, which is not the case for Type II and Type III factors. The cyclic and separating vector property of type I factors allows for the construction of a projection valued measure, which is used to define the density matrix. This property is not present in Type II and Type III factors, which means that the density matrix cannot be defined in the same way. Instead, in type II and Type III factors, one needs to use the Tomita-Takesaki theory and the associated modular theory to define the density matrix. These concepts will be discussed in detail in the following sections. In finite dimensions, as is the case of quantum mechanics, the only possible von Neumann factors are Type I factors. They are used to describe systems such as spin chains and lattice models. However, as we move to QFT, the more complex Type II and Type III von Neumann algebras become relevant. §.§ Type II Type II factors have no minimal projectors, but finite[A finite von Neumann projector is one that has finite rank, meaning that it projects onto a subspace of finite dimension. In contrast, an infinite von Neumann projector would project onto a subspace of infinite dimension.] projectors. They can be classified to hperfinite Type II_1 factors and Type II_∞ factors. In physics, infinite structures are often obtained as limits of finite structures, and therefore, the algebras that commonly emerge in physics are predominantly hyperfinite[A von Neumann algebra is said to be hyperfinite if it can be approximated arbitrarily well by finite-dimensional von Neumann algebras.] ones. The concept of hyperfinite algebras is important because it provides a rigorous mathematical framework for dealing with infinite-dimensional systems. For example, in quantum field theory, we often work with infinite-dimensional Hilbert spaces, but these can be difficult to handle mathematically. By using hyperfinite algebras to approximate these spaces, we can apply techniques from finite-dimensional linear algebra to study them. Type II_∞ can be constructed by the tensor product: Type II_∞ = II_1 ⊗ I_∞. We can construct hyperfinite Type II_1 by considering a vector space V of 2 × 2 complex matrices which can be given a Hilbert space structure as we shall see below. v = [ 1 0; 0 1; ] = [ 1; 0; ][ 1 0; ] + [ 0; 1; ][ 0 1; ] = [ 1; 0; ]⊗[ 1 0; ] + [ 0; 1; ]⊗[ 0 1; ] = v_1 + v_2 ∈ V where v,v_1,v_2∈ V and they represent 2-qubit compound states; v_1 and v_2 are separable pure states and v is an un-normalized entangled pure state. 
To turn the vector space space V into a Hilbert space structure, we can define an inner product: ⟨ v_1, v_2⟩=Tr [v_1^† v_2] We can define matrices a ∈ M_2 that act on V from the left, and b ∈ M'_2 that act on V from the right as such: a (V) = a V b (V) = V b^† For a maximally entangled vector v ∈ V, we define the normalized state I'_2 = 1/√(2) I_2. Now we are ready to construct an infinite tensor product pre-Hilbert space ℋ_0[A pre-Hilbert space is a vector space equipped with an inner product that is not necessarily complete. The completion of a pre-Hilbert space with respect to its norm yields a Hilbert space. ] as follows. Consider the infinite product v_1 ⊗ v_2 ⊗⋯⊗ v_k ⊗⋯∈ V^[1]⊗ V^[2]⊗⋯⊗ V^[k]⊗⋯ where finitely many v_k's are NOT equal to I'_2 and the infinite remaining v_k's are I'_2. If v_k≠ I'_2 then it is an arbitrary 2 × 2 matrix. For example, elements v∈ℋ_0 can be like: v = 1/√(2) I ⊗ 1/√(2) I ⊗⋯⊗ v_k ⊗⋯∈ V^[1]⊗ V^[2]⊗⋯⊗ V^[k]⊗⋯ = 1/√(2)[ 1 0; 0 1; ] ⊗1/√(2)[ 1 0; 0 1; ]⊗⋯⊗[ a b; c d; ]⊗⋯∈ V^[1]⊗ V^[2]⊗⋯⊗ V^[k]⊗⋯ v = I'_2⊗ I'_2⊗[ a_1 b_1; c_1 d_1; ]⊗ I'_2 ⊗[ a_2 b_2; c_2 d_2; ]⊗ I'_2 ⋯ Note that this is not quite a Hilbert space, but a countably infinite pre-Hilbert space. We need to complete it in order to obtain ℋ⊃ℋ_0 To do that, we consider v=v_1 ⊗ v_2 ⊗⋯ and w=w_1 ⊗ w_2 ⊗⋯ in ℋ_0. We know that there are only finite amount n of vectors v_k, w_k that are not I'_2, so we find a find k > n and truncate at n so that every non-identity element is included; v_⟨ n⟩=v_1 ⊗ v_2 ⊗⋯⊗ v_n, w_⟨ n⟩=w_1 ⊗ w_2 ⊗⋯⊗ w_n. Note that these are finite dimensional. Similar to the trace definition eq (<ref>): ⟨ v, w⟩: = Tr [v_1^† w_1 ] Tr [v_2^† w_2] ⋯Tr [v_n^† w_n] Tr[ I'^†_2 I'_2] ⋯=Tr [v_⟨ n⟩^† w_⟨ n⟩]<∞ Where Tr [I'^†_2 I'_2 ]= 1. Finally, for v = v_1 ⊗ v_2 ⊗⋯⊗ v_n ⊗⋯ to be restricted in tensor product, v_n must tend to I'_2 as n →∞. Now we follow a similar procedure for the algebra 𝒜. The goal is to define the algebra as an infinite tensor product M_2^[1]⊗ M_2^[2]⊗⋯⊗ M_2^[n]⊗⋯. We note that a general element a ∈𝒜 does not have finitely many a_k ≠ I_2. For this reason, we have the problem that the action of the algebra takes us out of the Hilbert space, a ℋ⊈ℋ. To solve this problem, we define 𝒜_0 such that it contains finitely many a_k ≠ I_2. Now the action of this algebra is constrained within the Hilbert space, 𝒜_0 ℋ⊆ℋ. Finally we add the limit to close 𝒜_0 to get 𝒜: ℋ→ℋ. The commutant of 𝒜 is 𝒜', and it includes elements b ∈ M_2^'[1]⊗ M_2^'[2]⊗⋯⊗ M_2^'[n]⊗⋯ that act on v from the right. We introduce natural linear function F(a)=⟨Ψ|a| Ψ⟩ on the algebra 𝒜. This will prove to be an important tool for dealing with entanglement in quantum field theory, as the function F(a) allows us to calculate expectation values of observables and study their entanglement properties. We define a vector: Ψ=I_2^'⊗ I_2^'⊗⋯⊗ I_2^'⊗⋯∈ℋ Then the linearized function has the properties of a trace, F(ab)= F(ba). Plugging (a b) yields: F(ab)=Tr_M_2^[1]⊗⋯⊗ M_2^[k]a_1 b_1 ⊗⋯⊗a_k b_k= Tr_M_2^[1]⊗⋯⊗ M_2^[k]b_1 a_1 ⊗⋯⊗b_k a_k =F(ba) Where the trace now is normalized so that Tr 1 = 1. Type II factors have no unique trace, because we can multiply trace by a constant. But the difference between entropy can be defined, because the constants will cancel. So simply a notion of trace is sufficient, as we will see in section <ref>. §.§ Type III Type III algebra is the most general class of factors. It has no trace and no minimal projectors. 
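The tracial property of the linear functional F just introduced can be checked in a small truncation. In the sketch below (NumPy; the truncation to three tensor factors and the random operators are purely illustrative), Ψ is the vector built from normalised identities I'_2, viewed as a product of maximally entangled pairs, and F(a) = ⟨Ψ|a|Ψ⟩ is evaluated for operators acting only on the 'left' factor of each pair; F(ab) = F(ba) holds, i.e. F is the normalised trace. Replacing I'_2 by a non-maximally entangled pair k_{2,λ}, as in the next subsection, breaks this equality.

import numpy as np
from functools import reduce

rng = np.random.default_rng(2)
tensor = lambda ops: reduce(np.kron, ops)

I2 = np.eye(2)
ident_pair = np.array([1, 0, 0, 1]) / np.sqrt(2)   # I'_2 viewed as a vector in C^2 (x) C^2

def F(pair, left_ops):
    """F(a) = <Psi|a|Psi> for Psi = pair^(x)3 and a acting on the left qubits only."""
    Psi = tensor([pair] * 3)
    a = tensor([op for l in left_ops for op in (l, I2)])   # a_1 (x) 1 (x) a_2 (x) 1 (x) a_3 (x) 1
    return Psi.conj() @ a @ Psi

def rand_ops():
    return [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(3)]

a_ops, b_ops = rand_ops(), rand_ops()
ab = [x @ y for x, y in zip(a_ops, b_ops)]
ba = [y @ x for x, y in zip(a_ops, b_ops)]

print(np.isclose(F(ident_pair, ab), F(ident_pair, ba)))   # True: F is tracial for this Psi

lam = 0.3
k_pair = np.array([1, 0, 0, np.sqrt(lam)]) / np.sqrt(1 + lam)   # the k_{2,lambda} of the next subsection
print(np.isclose(F(k_pair, ab), F(k_pair, ba)))           # False in general: no trace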
We can start this generalization by switching the maximally entangled pair of qubits I'_2 in Type II factors with nonmaximally entangled quibit pair k_2,λ, such that: k_2, λ=1/(1+λ)^1 / 2([ 1 0; 0 λ^1 / 2 ]) ∈ V where 0 < λ < 1. After replacing the normalized identities with this new matrix, the pre-Hilbert space ℋ_0 is spanned by vectors v_1 ⊗ v_2 ⊗⋯⊗ v_n ⊗ k_2, λ⊗ k_2, λ⋯∈ V^[1]⊗ V^[2]⊗⋯. Because this space has a structure like that of a Hilbert space, we can use our definition of the dot product and trace in eq(<ref>): ⟨ k_2, λ, k_2, λ⟩=Tr [k_2, λ^† k_2, λ] = Tr[1/1+λ([ 1 0; 0 λ ])] =1 The next step would be to close ℋ_0 with respect to the inner product to obtain the complete Hilbert space ℋ_λ. We do this by truncating, exactly as we did in Type II. ⟨ v, w⟩: = Tr [v_1^† w_1 ] Tr [v_2^† w_2] ⋯Tr [v_n^† w_n] Tr[ k_2, λ^† k_2, λ] ⋯=Tr [v_⟨ n⟩^† w_⟨ n⟩]<∞ Similarly, we take the same algebra 𝒜_0 that we started with in Type II, then complete it with respect to ℋ_λ to obtain 𝒜_λ and 𝒜'_λ. Finally, similar to what was done in Type II, we define natural linear function F(a)=⟨Ψ_λ⃗|a| Ψ_λ⃗⟩ on the algebra 𝒜_λ. The difference between between Type II and Type III algebra is that the later does NOT admit a trace. In essence, F(ab) ≠ F(ba). Remember, previously we had F(ab) = ⟨Ψ|ab| Ψ⟩ = Tr [I'_2 a_⟨ k⟩b_⟨ n⟩ I'_2] = F(ba). This was allowed because Ψ∈ℋ was a tensor product of infinite normalized identity matrices. This time we are dealing with a different space, namely ℋ_λ, we redefine Ψ as: Ψ_λ⃗=k_2, λ_1⊗ k_2, λ_2⊗⋯⊗ k_2, λ_n⊗⋯∈ℋ_λ Now the trace is not well-defined. F(ab) = ⟨Ψ_λ⃗|ab| Ψ_λ⃗⟩ = Tr [k_2, λa_⟨ k⟩b_⟨ n⟩ k_2, λ] ≠Tr [k_2, λb_⟨ k⟩a_⟨ n⟩ k_2, λ] = F(ba) Type III factors can be further classified based on how the sequence of λ converge<cit.>: * Type III: k_λ_1⊗ k_λ_2⊗⋯⊗ k_λ_n, where 0<λ_i<1. * Type III_0: λ_i → 0. If the sequence of λ_i converges to zero fast enough, we get Type I_∞. * Type III_λ: λ_i →λ. For example, λ_i = λ ∀ i, where we have finitely many λ_i ≠λ. A special case is when λ_i → 1, we obtain Type II factors. * Type III_1: here λ_i does not converge. A simple case is λ_i ∈{λ, λ̃}, where we have infinitely many of each. The span of this type is [0, ∞). § TOMITA-TAKESAKI THEORY The Tomita-Takesaki theorem is a fundamental result in the theory of von Neumann algebras, which provides a deep connection between the algebraic and geometric structures of these mathematical objects <cit.>. In particular, the theorem establishes a canonical way of constructing a modular automorphism group associated with any von Neumann algebra acting on a Hilbert space. More precisely, given a von Neumann algebra 𝒜 acting on a Hilbert space ℋ, the Tomita-Takesaki theorem asserts the existence of a modular conjugation operator J and a modular operator Δ satisfying certain axioms. These operators are intimately related to the geometry of the underlying Hilbert space and capture important physical features such as time evolution and symmetry transformations. In addition to its intrinsic mathematical interest, the Tomita-Takesaki theorem has numerous applications in quantum field theory, statistical mechanics, and other areas of physics. For example, it plays a crucial role in the analysis of thermal equilibrium states and the formulation of the Kubo–Martin–Schwinger (KMS) condition, which characterizes the behavior of quantum systems at finite temperatures <cit.>. 
The theorem also provides a powerful tool for investigating the structure of quantum entanglement and the emergence of classical physics from quantum mechanics <cit.>. Moving forward, we rely on Witten's construction of the theorem in <cit.>. §.§ Tomita modular operator Another important concept that we will use to define entropy in section (<ref>) is the Tomita operator. The algebras that define QFT do not admit a trace, so we need a new notion that lets us define entropy even when density matrices are not available. That is one of the uses of the Tomita operator and the modular operator. The Tomita operator can be defined as: S_Ψ a|Ψ⟩=a^†|Ψ⟩ where S_Ψ is an antilinear operator acting on a state Ψ that is cyclic and separating for the algebra, and a ∈𝒜. * cyclic: 𝒜Ψ is dense[Dense means that any vector Φ∈ℋ can be approached arbitrarily closely (with respect to the norm induced by the inner product) by a sequence of vectors Φ_n in 𝒜Ψ. <cit.>] in ℋ. * separating: aΨ=0⇔ a=0 <cit.>. If we investigate the properties of S, we see that S_Ψ is invertible: S_Ψ^2 = 1 Hence, S_Ψ|Ψ⟩=|Ψ⟩. Recall that if W is a linear operator, then its action on states Ψ, χ satisfies: ⟨Ψ| W ^†χ⟩=⟨ WΨ | χ⟩ and if W is antilinear, then it acts as: ⟨Ψ| W ^†χ⟩=⟨ W Ψ|χ⟩=⟨χ|WΨ⟩ Since S is antilinear, we will be making use of this property quite often. Furthermore, for S_Ψ of an algebra 𝒜, we can define S_Ψ' for the commuting algebra a'∈𝒜'. As it turns out <cit.>: S_Ψ^† = S_Ψ^' The Tomita operator has plenty of interesting properties. Knowing that it is invertible, we can further investigate it by polar decomposing it into an antiunitary operator J_Ψ and a positive-definite Hermitian operator Δ_Ψ=S_Ψ^† S_Ψ: S_Ψ=J_ΨΔ_Ψ^1 / 2 , S_Ψ^† = Δ_Ψ^1 / 2 J_Ψ where J_Ψ:𝒜→𝒜' is the modular conjugation, and Δ_Ψ: 𝒜→𝒜 is the modular operator. The positive part maps an element of the algebra onto itself, whereas the antiunitary part maps an element of the algebra onto its commutant. This implies that: S_Ψ^† S_Ψ = Δ_Ψ^1 / 2 J_Ψ^2Δ_Ψ^1 / 2 = Δ_Ψ and ( S_Ψ S_Ψ^†)(S_Ψ^† S_Ψ) = S_Ψ (S_Ψ^† S_Ψ^† ) S_Ψ = S_Ψ (1) S_Ψ = S_Ψ^2 =1 which implies (S_Ψ S_Ψ^†) = (S_Ψ^† S_Ψ)^-1, i.e. (S_Ψ S_Ψ^†) = (Δ_Ψ)^-1. It follows that: S_Ψ S_Ψ^† = J_ΨΔ_Ψ^1 / 2Δ_Ψ^1 / 2 J_Ψ = J_ΨΔ_Ψ J_Ψ = Δ_Ψ^-1 In addition to these properties, since S_Ψ^2 = (J_ΨΔ_Ψ^1 / 2) (J_ΨΔ_Ψ^1 / 2) = 1, it is straightforward to deduce (J_ΨΔ_Ψ^1 / 2 J_Ψ ) Δ_Ψ^1 / 2 = 1 → J_ΨΔ_Ψ^1 / 2 J_Ψ = Δ_Ψ^-1 / 2. From the definition of S, S_ΨΨ = S_Ψ^†Ψ = Ψ. From here we can see that: Δ_Ψ |Ψ⟩ = S_Ψ^† S_Ψ |Ψ⟩ = S_Ψ^† |Ψ⟩ = |Ψ⟩ So Ψ is an eigenvector of Δ_Ψ with eigenvalue 1. We can generalize this for any function f: f(Δ_Ψ) |Ψ⟩ = f(1) |Ψ⟩ For example, if f(x) = e^xt, then e^Δ_Ψt |Ψ⟩= e^t |Ψ⟩. In quantum mechanics, the dynamics are governed by the evolution operator U(t) = e^iHt. Loosely speaking, we “generalize" this to Δ^it. In essence, we replace e^H by Δ <cit.>. §.§ Relative modular operator So far we have looked at the Tomita operator S acting on the state Ψ. A generalization of S_Ψ can relate Ψ to another `state' Φ. We define the relative Tomita operator S_Ψ|Φ for the algebra a ∈𝒜: S_Ψ|Φ a|Ψ⟩= a^†|Φ⟩ We defined Ψ to be cyclic and separating for the algebra. The state Φ is arbitrary. If it is cyclic and separating as well, then: S_Φ|Ψ a|Φ⟩= a^†|Ψ⟩ As we did before, we want to investigate this operator to see what properties it possesses. Analogous to the steps that led to equation (<ref>), for S_Ψ|Φ of an algebra 𝒜, we can define S_Ψ|Φ' for the commuting algebra a' ∈𝒜'. Then S_Ψ|Φ' = S_Ψ|Φ^†.
To prove this we just show that for all states Λ, χ, we have ⟨ S_Ψ|Φ' Λ|χ⟩=⟨ S_Ψ|Φχ|Λ⟩. It is enough to check this for a dense set of states, so we can take χ=a Ψ, Λ= a' Ψ. ⟨ S_Ψ|Φ' a' Ψ| a Ψ⟩ =⟨ a'^†Φ| a Ψ⟩=⟨Φ| a' a Ψ⟩ =⟨Φ| a a' Ψ⟩ = ⟨ a^†Φ| a' Ψ⟩=⟨ S_Ψ|Φ a Ψ| a' Ψ⟩ In the case Φ = Ψ, the “relative" operator is of that between the same system. S_Ψ|Ψ reduces to S_Ψ. Analogous to the discussion in equation (<ref>), we take the polar decomposition of S_Ψ|Φ to be: S_Ψ|Φ=J_Ψ|ΦΔ_Ψ|Φ^1 / 2 , S_Ψ|Φ^† =Δ_Ψ|Φ^1 / 2 J_Ψ|Φ Where J_Ψ|Φ is the relative modular conjugation, and Δ_Ψ|Φ is the relative modular operator. It's straightforward to see that: Δ_Ψ|Φ=S_Ψ|Φ^† S_Ψ|Φ Remark (<ref>) follows; in the case Φ = Ψ, the relative modular operator Δ_Ψ|Ψ reduces to modular operator Δ_Ψ. It would be of great use to relate the modular operator to commutants of algebra a'∈𝒜', where a' is unitary. We map Φ→ a'Φ, so S_Ψ|Φ→ S_Ψ| a' Φ := a'S_Ψ|Φ. Then Δ_Ψ| a' Φ= S_Ψ| a' Φ^† S_Ψ| a' Φ = S_Ψ|Φ (a'^† a') S_Ψ|Φ= Δ_Ψ|Φ §.§ Tomita–Takesaki theory and Type III factors We've mentioned that in physics we mostly deal with hyperfinite algebra 𝒜 because the finiteness properties makes them far easier to to work than the non-hyperfinite case, and they're the ones to mostly show up in physics. We start by considering a simple case of algebra 𝒜⊗𝒜' acting on ℋ=ℋ_1⊗ℋ_2 [This factorization is possible according to the separable decomposition theorem. A tensor product structure (TPS) expression is possible if spectrum of the original Hilbert space ℋ (including multiplicities of eigenvalues) can be written as the sum of two other spectra ℋ_1,ℋ_2 <cit.>. ] <cit.>.This tensor product Hilbert space could be thought of as bipartite quantum system. The action of the an element a of the algebra 𝒜 on the space ℋ is as a⊗ 1, where a: ℋ_1 →ℋ_1. The action of the a commutant a' ∈𝒜' on the space ℋ is as 1 ⊗a', where a': ℋ_2 →ℋ_2. A vector Ψ∈ℋ can be expressed as: Ψ=∑_k=1^N c_k ψ_k ⊗ψ_k^' Where dim(ℋ_1)= dim(ℋ_2)=N, {ψ_k} are orthornomral basis for ℋ_1, and {ψ_k'} are orthornomral basis for ℋ_2. We can see that the action of the algebra on the state is: (a⊗ 1) Ψ=∑_k=1^n c_k aψ_k ⊗ψ_k^' We note that if {ψ_k} are complete for ℋ_1 and if aΨ = 0, then we can conclude that a=0. Which means that Ψ is separating for 𝒜. Following a similar reasoning for {ψ_k'}, we conclude that Ψ is cyclic and separating for 𝒜 and 𝒜', ∀ C_k ≠ 0. Notations: Proceeding with formulating Tomita-Takesaki theorem, we rewrite ψ_k → |k⟩, ψ_k' → |k⟩', and |j⟩⊗ |k⟩' → |j,k⟩. Hence, Ψ=∑_k=1^N c_k ψ_k ⊗ψ_k^' = ∑_k=1^N c_k |k⟩ |k⟩' = ∑_k=1^N c_k |k,k⟩ = ∑_k=1^N c_k |k⟩⟨ k|' Where the last one is in expressed matrix form. In analogy to what we have done in section(<ref>), we would like to find a modular operator. We start by defining Tomita operator S: ℋ→ℋ S_Ψ((a⊗ 1) Ψ)=(a^†⊗ 1) Ψ Any a∈𝒜 has the form ∑a_ji E_ji, where E_ji = |j⟩⟨ i|. From here we see that a^†=∑a_ji E_ij. 
So, (a⊗ 1) Ψ = ( ∑_ija_ji |j⟩⟨ i| ⊗ 1) ∑_k c_k |k⟩⊗ |k⟩' = ∑_ijka_ji c_k |j⟩⟨ i|k⟩⊗ |k⟩' = ∑_ija_ji c_i |j⟩⊗ |i⟩' Similarly, (a^†⊗ 1) Ψ = ∑_ija_ji c_j |i⟩⊗ |j⟩' From equation (<ref>), we see that: S_Ψ((a⊗ 1) Ψ) = ∑_ija_jic_i S_Ψ |j⟩⊗ |i⟩' = (a^†⊗ 1) Ψ Comparing coefficients of a_ji in equations (<ref>) and (<ref>) leads us to find that: S_Ψ |j,i⟩ = c_j/c_i |i,j⟩ Furthermore, we can derive the action of the adjoint S_Ψ^† by utilizing the antilinarity of S_Ψ: ⟨ k,l| S_Ψ^†| i,j⟩ = ⟨ i,j| S_Ψ| k,l⟩ = ⟨ i,j|c_k/c_l |l, k⟩ = c_j/c_i⇒ S_Ψ^†| i,j⟩ = c_j/c_i| j,i⟩ Which means: S_Ψ^†| j,i⟩ = c_i/c_j| j,i⟩ Comparing equations (<ref>) and (<ref>), S_Ψ |j,i⟩ = λ_ij |i,j⟩ and S_Ψ^† |j,i⟩ = 1/λ̅_ij |i,j⟩. Having worked out both S_Ψ and S_Ψ^†, we can see how the modular operator Δ_Ψ acts: Δ_Ψ| i,j⟩ = S_Ψ^† S_Ψ| i,j⟩ We can see that it acts as follows: Δ_Ψ| i,j⟩ = S_Ψ^† S_Ψ| i,j⟩ = S_Ψ^†c_i/c_j |j,i⟩ = c_i/c_j S_Ψ^† |j,i⟩ =c_i/c_jc_i/c_j| i,j⟩ = |c_i|^2/|c_j|^2| i,j⟩ Where in the third equality, we used the antilinear property to move S_Ψ^† past the eigenvalue by taking the conjugate of the factor. The spectral composition of the modular operator: Δ = ∑_ij|c_i|^2/|c_j|^2 |i,j⟩⟨ i,j| Finally, we know that from equation (<ref>) that the polar composition of Tomita operator is given by S_Ψ=J_ΨΔ_Ψ^1 / 2. Furthermore, spectral theory suggests that if we put the operator in a function, its eigenvalue is affected by the same function. Applying this principle on equation (<ref>): f(Δ) |i,j⟩ = f(|c_i|^2/|c_j|^2)|i,j⟩⇒Δ^1/2 |i,j⟩ = |c_i|/|c_j||i,j⟩ Also, having the result of equation (<ref>) in mind: S_Ψ |i,j⟩ = J_ΨΔ_Ψ^1 / 2 |i,j⟩ = J_Ψ|c_i|/|c_j||i,j⟩⇒ J_Ψ |j,i⟩ = c_j/c_i|c_i|/|c_j| |i,j⟩ = |i,j⟩ We can define the action of the relative Tomita operator as: S_Ψ|Φ((a⊗ 1) |Ψ⟩)=(a^†⊗ 1) |Φ⟩ for a second state Φ: Φ=∑_α=1^n d_αϕ_α⊗ϕ_α^' = ∑_α=1^n d_α |αα⟩ Replication what we did for S_Ψ, we let a = E_α i = |α⟩⟨ i|. The left hand side of equation (<ref>) yields: S_Ψ|Φ(a⊗ 1) |Ψ⟩ = S_Ψ|Φ(|α⟩⟨ i| ⊗ 1)( ∑_k c_k |k⟩⊗ |k⟩') = S_Ψ|Φ c_k |α⟩δ_ik⊗ |k⟩' = c_k S_Ψ|Φ |α i ⟩ As for the right hand side: (a^†⊗ 1) |Φ⟩ = d_α |iα⟩ equating both side, we get that: S_Ψ|Φ |α i ⟩ = d_α/c_k|iα⟩ As for the action of the adjoint: S_Ψ|Φ^†|i, α⟩=d_α/c̅_i|α, i⟩ and the relative modular operator: Δ_Ψ|Φ|α, i⟩=|d_α|^2/|c_i|^2|α, i⟩ To gain an insight on the action of the relative modular operator, we consider its spectral decomposition: Δ_Ψ|Φ =∑_α i|d_α|^2/|c_i|^2|α, i⟩⟨α, i| = ∑_α i|d_α|^2/|c_i|^2 |α⟩⟨α|⊗ |i⟩'⟨ i|' = ∑_α|d_α|^2/|c_i|^2 |α⟩⟨α|⊗ |1⟩'⟨ 1|' + ∑_α|d_α|^2/|c_i|^2 |α⟩⟨α|⊗ |2⟩'⟨ 2|' + ⋯ = ( ∑_α|d_α|^2 |α⟩⟨α|) ⊗(1/|c_i|^2 |1⟩'⟨ 1|' + 1/|c_i|^2 |2⟩'⟨ 2|'⋯) = σ_1 ⊗ρ_2^-1 To make sense of the inverse, recall that for ρ = ∑λ_i E_i ⇒ f(ρ) = ∑ f(λ_i) E_i, where in this case the function is the inverse. For a state Ψ, we define a density matrix ρ_12=|Ψ⟩⟨Ψ|. Similarly, for a state Φ, we define a density matrix σ_12=|Φ⟩⟨Φ|. We can find the density on ℋ_n with respect to a state by tracing out the other density. For example: Tr_ℋ_2 [ρ_12] = ρ_1, which is the reduced density matrix on ℋ_1 with respect to Ψ. The reduced density matrices of Ψ and Φ are: [ ρ_1=∑|c_i|^2|ψ_i⟩⟨ψ_i| σ_1=∑|d_α|^2|ϕ_α⟩⟨ϕ_α|; ρ_2=∑|c_i|^2|ψ_i^'⟩⟨ψ_i^'| σ_2=∑|d_α|^2 | ϕ_α^'⟩⟨ϕ_α^'| ] Notice how the recipe for the modular operator naturally led to density matrices[When they exist, at least.], which are essential in defining entropy and discussing states. 
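The finite-dimensional formulas just derived are easy to verify numerically. In the sketch below (NumPy; the dimension and coefficients are arbitrary choices for illustration), the state Ψ = Σ_k c_k |k⟩⊗|k⟩' is built explicitly, the reduced density matrices ρ_1, ρ_2 are obtained by partial traces, and the modular operator Δ_Ψ = ρ_1 ⊗ ρ_2^{-1} is checked against the eigenvalue formula Δ_Ψ|i,j⟩ = (|c_i|²/|c_j|²)|i,j⟩ as well as Δ_Ψ|Ψ⟩ = |Ψ⟩.

import numpy as np

c = np.array([0.8, 0.5 + 0.2j, 0.1 - 0.3j])
c = c / np.linalg.norm(c)                        # coefficients c_k, all non-zero
N = len(c)

# Psi = sum_k c_k |k> (x) |k>'  as a vector in C^N (x) C^N
Psi = np.zeros(N * N, dtype=complex)
for k in range(N):
    Psi[k * N + k] = c[k]

rho12 = np.outer(Psi, Psi.conj()).reshape(N, N, N, N)
rho1 = np.einsum('aibi->ab', rho12)              # partial trace over the second factor
rho2 = np.einsum('iaib->ab', rho12)              # partial trace over the first factor

Delta = np.kron(rho1, np.linalg.inv(rho2))       # modular operator Delta_Psi = rho_1 (x) rho_2^{-1}

print(np.allclose(Delta @ Psi, Psi))             # Delta |Psi> = |Psi>

# eigenvalue formula: Delta |i,j> = |c_i|^2 / |c_j|^2 |i,j>
i, j = 0, 2
e_ij = np.zeros(N * N); e_ij[i * N + j] = 1.0
print(np.allclose(Delta @ e_ij, (abs(c[i])**2 / abs(c[j])**2) * e_ij))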
Notation It is sometimes more intuitive to express states as matrices by transposing on the second system, so we note the partial transpose with tilde ·:Ψ = ∑ c_i ψ_i ⊗ψ_i^'→Ψ= ∑ c_i ψ_i ψ_i^' T∈ℳ_n. Given that, Ψ transforms under the action of a∈𝒜 and a'∈𝒜' as: (a⊗ 1) Ψ ⟷ aΨ (1 ⊗a^') Ψ ⟷ Ψa^' T We can also see that the trace is: ⟨Ψ|Φ⟩=TrΨ^†Φ Now Δ_Ψ|Φ: ℳ_n →ℳ_n. For example, the action of the relative modular operator on the partial transpose of state ξ is: Δ_Ψ|Φ(ξ) = σ_1 ξ (ρ_2^-1)^T = σ_1 (ξ) ρ_1^-1 Given that ℋ_1 is the dual of ℋ_1, which means ρ_2^T=ρ_1. More generally, Δ_Ψ|Φ^α (ξ) = σ_1^α (ξ) ρ_2^-α and Δ_Ψ (ξ) = ρ_1 (ξ) ρ_1^-1 Furthermore, notice that: ΨΨ^† = ∑ c_i ψ_i ψ_i^' T ψ_k' ψ_k^†c_k = ∑ c_i ψ_i ψ_i^†c_i = ∑ |c_i|^2 ψ_i ψ_i^† = ρ_1 Similarly, one can deduce that Ψ^†Ψ =ρ_2. Additionally, there are important properties which involve the modular automorphism group, which is the group of unitary transformations Δ_Ψ^i s, s ∈ℝ. The modular automorphism group has applications in a variety of areas, particularly it provides a way to understand the entanglement properties of quantum field theory. We can see that this group maps 𝒜 to itself. For a⊗ 1 ∈𝒜: Δ_Ψ^i s(a⊗ 1) Δ_Ψ^-i s= ρ_1^i saρ_1^-i s⊗ρ_2^-i sρ_2^i s=ρ_1^i saρ_1^-i s⊗ 1 Which can be abbreviated as Δ_Ψ^i s𝒜Δ_Ψ^-i s=𝒜. If ρ is thermal (according to Boltzman distribution),then the modular operator corresponds to the Heisenberg dynamics: ρ_1 → e^-H then ρ_1^i saρ_1^-i s = e^-iHa e^iHs. Similarly, for 1 ⊗a' ∈𝒜', Δ_Ψ^is𝒜^'Δ_Ψ^-i s=𝒜^': Δ_Ψ^i s(1 ⊗a') Δ_Ψ^-i s= 1⊗ρ_2^-i sa'ρ_2^i s As for the conjugate operator, it maps the algebra to its commutant: J_Ψ(a⊗ 1) J_Ψ=1 ⊗a^* Here a^* is the conjugate matrix to a. This can be abbreviated as J_Ψ𝒜 J_Ψ=𝒜^' Similarly, J_Ψ(a⊗ 1) J_Ψ=1 ⊗a^* Which can be abbreviated as J_Ψ𝒜^' J_Ψ=𝒜. The relative modular group Δ_Ψ|Φ^is, s ∈ℝ acts on the element of the algebra as: Δ_Ψ|Φ^i s(a⊗ 1) Δ_Ψ|Φ^-i s=σ_1^i saσ_1^-i s⊗ρ_1 ρ_1^-1 = σ_1^i saσ_1^-i s⊗ 1 Recall that σ corresponds to Φ and ρ corresponds to Ψ. This leads to an additional property the ones that the regular modular operator has: the relative modular operator depends on Φ only and not on Ψ. If Ψ and Ψ' are two cyclic separating vectors: Δ_Ψ|Φ^i s(a⊗ 1) Δ_Ψ|Φ^-i s=Δ_Ψ^'|Φ^is s(a⊗ 1) Δ_Ψ^'|Φ^-i s In conclusion, the Tomita-Takesaki theorem is a powerful result that has far-reaching implications in the study of operator algebras in quantum field theory. This theorem provides a framework for understanding the structure of von Neumann algebras, which are central to the study of infinite-dimensional quantum systems. The theorem shows that any von Neumann algebra is isomorphic to the algebra of bounded operators on a Hilbert space equipped with a certain anti-unitary involution operator, known as the modular conjugation. The modular conjugation plays a crucial role in the theory, as it encodes the spatial symmetry of the system <cit.>. Moreover, the Tomita-Takesaki theorem has deep connections to many other areas of mathematics and physics, including the theory of group representations, the theory of automorphic forms, and the study of quantum field theory on curved spacetime. It has also been used to investigate entanglement entropy in quantum field theory <cit.>, which is an important tool for understanding the quantum properties of many-body systems <cit.>. 
Overall, the Tomita-Takesaki theorem provides a powerful tool for investigating the structure of operator algebras in quantum field theory, and has broad applications across many areas of physics and mathematics. Its importance cannot be overstated, and it continues to be an active area of research today. PART: CHAPTER: ALGEBRAIC QUANTUM FIELD THEORY (AQFT) In the previous section, we discussed the role of von Neumann algebras in QFT. In this section, we introduce the concept of local operator algebra approach to QFT, which provides a powerful framework for understanding the algebraic structure of the theory. In the local operator algebra approach, one considers the algebra of observables associated with a subregion of spacetime. The idea is that physical observables can be measured in a finite region of spacetime, and the algebra of these observables is generated by the local observables associated with the subregions of spacetime. This algebra is referred to as a local algebra, and it is a subalgebra of the von Neumann algebra associated with the entire spacetime. The local operator algebra approach has several advantages. First, it provides a natural framework for understanding the algebraic structure of QFT, including the existence of a vacuum state, the existence of a spectrum condition, and the existence of a Poincaré group action <cit.>. Second, it allows one to formulate the notion of locality in a precise way, since the algebra of observables associated with a subregion of spacetime only depends on the observables associated with that region and its causal complement <cit.>. Finally, it provides a powerful tool for studying the entanglement structure of QFT, as we will see in this section. § LOCAL OPERATOR ALGEBRAIC APPROACH TO QUANTUM FIELD THEORY The local operator algebra approach has its roots in the work of Haag and Kastler <cit.>, who showed that the algebra of observables associated with a bounded region of spacetime can be constructed from the algebra of observables associated with the entire spacetime. We begin the discussion by defining a general scalar quantum field ϕ(x) that defined on Minkowski spacetime manifold M=(ℝ^4, η). Reeh-Schlieder theorem, states that `any state' in a quantum field theory can be obtained by applying a local operator to the cyclic separating vacuum state <cit.>. Hence, we define the vacuum state to be |Ω⟩ which lives in the Hilbert space 𝒦. When the field operator acts on the vacuum vector we can write ϕ |Ω⟩ = |Φ⟩, Here we notice a problem, the norm of the generated state is infinite ⟨Φ|Φ⟩=∞, because spacetime itself is taken to be infinite and non-compact. To get around this, we define a spacelike Cauchy hypersurface Σ. On that hypersurface, we choose a finite open region 𝒱⊂Σ. Because Σ, and thus 𝒱, is like a point with respect to the time-like direction orthogonal to Σ. A point is not open, so we'd like to consider an open neighborhood of the spacetime, 𝒰, around 𝒱. Now we have a local region of spacetime called 𝒰_𝒱, as shown in figure (<ref>), and we are set to fix the divergence problem <cit.>. We define a smooth compact smearing function f that is supported on such a neighborhood 𝒰. We use smeared field operator ϕ_f = ∫d^4 x f(x) ϕ(x) ∈𝒜_𝒰 to generate the states: ϕ_f_1ϕ_f_2⋯ϕ_f_n|Ω⟩ = |Φ_f⟩ These states norm of this state convergent ⟨Φ_f|Φ_f⟩ <∞, which means it's indeed a vector in 𝒦. In QFT, the type of von Neumann algebra associated with a spacetime region depends on the causal structure of that region. 
Specifically, if a region is causally disconnected from another region, then the corresponding von Neumann algebras are type I factors. If two regions are causally connected but not in a time-like or light-like relationship, then the corresponding von Neumann algebras are type II factors. Finally, if two regions are in a time-like or light-like relationship, then the corresponding von Neumann algebras are type III factors. The fact that different types of von Neumann algebras are associated with different regions of spacetime is important because it suggests that different regions of spacetime have different quantum mechanical properties. For example, two regions that are causally disconnected from each other might have completely different sets of observables, even if they have the same underlying physical system. This highlights the non-locality of QFT and the importance of operator algebras in characterizing its structure. §.§ Haag duality Haag duality states that for a certain class of quantum field theories, called local field theories, there is a duality between the algebra of a region of spacetime and the algebra of its causal complement <cit.>. Given a local field theory and a region 𝒰 of spacetime, the commutant of the algebra 𝒜_𝒰 associated with that region coincides with the algebra associated with its complement region 𝒰', where the two regions are space-like separated. This means that the observables that are compatible with (commute with) everything measurable in region 𝒰 are exactly the observables measurable in its complement 𝒰'. To see this, consider the vacuum |Ω⟩∈𝒦 that is cyclic for 𝒜'. Let a∈𝒜. If a|Ω⟩=0, then a𝒦= a𝒜' |Ω⟩=𝒜' a |Ω⟩ = 0, and a𝒦=0 if and only if a=0; this means that the vacuum must be separating for 𝒜. The vacuum is separating for the algebra if and only if it is cyclic for its commutant, and vice versa. In essence, Haag duality connects space-time subsets to von Neumann algebras; simply put: complements ⟷ commutants <cit.>. If we have two complementary regions, their algebras are each other's commutants. This can be expressed as: 𝒜_𝒰^' = (𝒜_𝒰)^' where the left-hand side denotes the algebra associated with the complement region 𝒰' and the right-hand side the commutant of 𝒜_𝒰. While Haag duality has been proven in various settings, there are also known counterexamples which suggest that the principle may not hold in all cases <cit.>. Here are some examples of counterexamples to Haag duality: * One such case is when there is a nontrivial topology of the underlying spacetime. In this case, the causal complement of a region may not be well-defined, and it may not be possible to express all observables in terms of observables outside the region <cit.>. * Another case where Haag duality may fail is when the system under consideration is not in a pure state. In this case, the algebra of observables may not be maximal, and it may not be possible to express all observables in terms of the observables in the causal complement of a region <cit.>. * A third case is that of the Doplicher–Haag–Roberts (DHR) sectors: the DHR sectors are a class of representations of the algebra of observables in a relativistic quantum field theory that are associated with localized particles. In some theories, such as those with long-range forces, there can be DHR sectors that are not generated by the observables localized in bounded regions of spacetime <cit.>. It is worth noting that despite these counterexamples, Haag duality is still considered to be a fundamental principle in AQFT, as it holds in many important cases and is a key tool in understanding the algebraic structure of quantum field theories.
§ ENTANGLEMENT ENTROPY Entanglement and entropy are related through the concept of entanglement entropy, which measures the amount of entanglement or shared information between subsystems in a larger quantum system <cit.>. In this context, entanglement entropy provides a way to characterize the long-range correlations between different parts of the system. To see the relation between entropy and entanglement, let's consider the worked example (<ref>): ϱ = |Ψ⟩⟨Ψ| = (∑_i c_i | ψ_i ⟩⊗ | ϕ_i ⟩)(∑_jc_j⟨ψ_j|⊗⟨ϕ_j |)= ∑_i,j c_i c_j(| ψ_i ⟩⟨ψ_j| ⊗ | ϕ_i ⟩⟨ϕ_j |) We can consider the density matrix ϱ to be a joint state of A⊗ B, where A=| ψ_i ⟩⟨ψ_j| and B = | ϕ_i ⟩⟨ϕ_j |. A natural question here is what's the state on A? or how entangled is A? We can answer this by taking the trace of the density matrix with respect to B: Tr_B[ϱ] = Tr_B[|Ψ⟩⟨Ψ|] = Tr_B[A ⊗ B]=A Tr[B] = c_1 c_1(| ψ_1 ⟩⟨ψ_1|⊗ | 1) + c_2 c_2(| ψ_2 ⟩⟨ψ_2|⊗ 1) + c_1 c_2(| ψ_1 ⟩⟨ψ_2| ⊗ 0) + c_2 c_1(| ψ_2 ⟩⟨ψ_1|⊗ 0) = |c_1|^2 [ 1 0; 0 0; ] + |c_2|^2 [ 0 0; 0 1; ] = [ p_1 0; 0 p_2; ] Where the value of p_1 and p_2 determine the entropy or mixture of the Ψ. For example, if p_1=1 , p_2 =0, we have a pure state, and if p_1=1/2 , p_2 =1/2, we have maximum mixture (or maximally entangled) state. Furthermore, if Tr[ϱ] = pure ⇒ϱ is NOT entangled, and if it's mixed, then ϱ is entangled. So we see a direct correlation between entanglement and entropy. We can describe entanglement as a tensor product between system. Most basic case is that of 2-qubits: ℂ^2 ⊗ℂ^2 = ℂ^4 = ℋ. Take ψ, ϕ∈ℂ^2. ψ⊗ϕ define a pure state ω: 𝒜→ℂ: ω (x) = Tr [ϱ x] = ⟨ψ⊗ϕ| x |ψ⊗ϕ⟩ Where the density matrix ϱ = | ψ⟩⟨ϕ | and x ∈ℬ(ℋ). There are multiple definitions of entropy, one of the most common is von Neumann entropy: S = -Tr[ϱlnϱ] However, most definitions of entropy rely on the existence of a trace. In the case of infinite degrees of freedom, such as in QFT, a notion of trace does not exist, so we can't use the standard forms of entropy. This is because the density matrices are von Neumann algebras of Type III, not of Type I. Given the Tomita-Takesaki theorem, we can generalize the standard formalism of entropy to the relative entropy 𝒮_Ψ|Φ(𝒰) <cit.> between two states Φ and Ψ in local region 𝒰 of spacetime: 𝒮_Ψ|Φ(𝒰)=-⟨Ψ|logΔ_Ψ|Φ| Ψ⟩ The Tomita-Takesaki theorem is intimately related to entanglement entropy in QFT. The theorem provides a way to construct a unique modular conjugation operator, which in turn can be used to define a unique vacuum state <cit.>. This vacuum state is important for the calculation of entanglement entropy in QFT, as it serves as a reference state against which the entanglement of other states can be measured. The span of the relative entropy 𝒮_Ψ|Φ(𝒰) ∈ [0,∞), so we know that Type III_1 factors are suitable to describe QFT. More precisely, it only vanishes when Φ=a^'Ψ for a' ∈𝒜_𝒰' being unitary. If that's the case, then by equation (Δ_Ψ|Φ→Δ_Ψ), and: S_Ψ| a^'Ψ=-⟨Ψ|logΔ_Ψ| Ψ⟩ = 0 because log(Δ_Ψ)Ψ⟩ = log(1)|Ψ⟩ =0, which follows from f(Δ_ Ψ)|Ψ⟩=f(1)|Ψ⟩. In AQFT, the observables of the theory are represented by operator algebras acting on a Hilbert space. By studying the entanglement between subsystems, one can gain insight into the algebraic structure of the theory and the nature of the correlations between the observables. Furthermore, entanglement entropy provides a useful tool for understanding the behavior of quantum field theories on curved spacetimes. 
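In finite dimensions, where traces exist, the quantities above can be computed directly: the von Neumann entropy of a reduced density matrix measures the entanglement of a pure bipartite state, and the relative entropy −⟨Ψ|log Δ_{Ψ|Φ}|Ψ⟩ reduces to the familiar expression Tr[ρ_1(log ρ_1 − log σ_1)]. The sketch below (NumPy/SciPy; the Schmidt coefficients and the reference vector Φ are arbitrary illustrative choices, with Φ chosen generically so that its reduced density matrix is invertible) checks both statements in this finite-dimensional setting, which the Type III discussion then generalises.

import numpy as np
from scipy.linalg import logm

def partial_trace_2(rho, N):
    """Trace out the second factor of an operator on C^N (x) C^N."""
    return np.einsum('aibi->ab', rho.reshape(N, N, N, N))

N = 3
c = np.array([0.8, 0.5, 0.33]); c = c / np.linalg.norm(c)

Psi = np.zeros(N * N); Psi[::N + 1] = c            # Psi = sum_k c_k |k,k> (Schmidt-aligned)
rng = np.random.default_rng(3)
Phi = rng.normal(size=N * N) + 1j * rng.normal(size=N * N)
Phi = Phi / np.linalg.norm(Phi)

rho1 = partial_trace_2(np.outer(Psi, Psi.conj()), N)    # = rho_2 here (diagonal |c_k|^2)
sigma1 = partial_trace_2(np.outer(Phi, Phi.conj()), N)

# entanglement entropy of Psi:  S = -Tr[rho_1 ln rho_1]
p = np.linalg.eigvalsh(rho1)
print(-np.sum(p * np.log(p)))                      # ln 3 would signal maximal entanglement

# relative modular operator Delta_{Psi|Phi} = sigma_1 (x) rho_2^{-1} and the relative entropy
Delta = np.kron(sigma1, np.linalg.inv(rho1))
S_rel_modular = -(Psi.conj() @ logm(Delta) @ Psi).real
S_rel_umegaki = np.trace(rho1 @ (logm(rho1) - logm(sigma1))).real
print(S_rel_modular, S_rel_umegaki)                # the two expressions agree and are >= 0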
On curved spacetimes, the algebraic structure of the theory can become more complicated due to the presence of non-trivial spacetime geometries. However, by studying the entanglement entropy between subsystems, one can still gain insights into the underlying algebraic structure of the theory. We will further investigate this in the next chapter, when we analyze everything discussed in Part I of the essay in the scope of quantum field theory. In these cases, the quantum dynamics cannot be described by operators acting on a Hilbert space, but instead must be described by the evolution of density matrices or their generalizations, known as Type III density matrices. CHAPTER: APPLICATIONS In this section, we explore the applications of the operator algebra approach and entanglement entropy to QFT on curved spacetime. QFT on curved spacetime is a subject of great interest in theoretical physics, as it provides a way to study the interplay between gravity and quantum mechanics. One important result in this context is the Unruh effect, which predicts that a uniformly accelerating observer will detect a thermal bath of particles where an inertial observer would detect none. The Unruh effect is a consequence of the fact that the vacuum state of QFT on curved spacetime is observer-dependent <cit.>. Another important application of the operator algebra approach to QFT on curved spacetime is the study of black hole entropy <cit.>. Black holes are believed to be described by a classical solution to the Einstein field equations, but the entropy associated with a black hole is a purely quantum mechanical concept <cit.>. In this chapter, we investigate one of these applications, and we follow an approach similar to Witten's <cit.>. § QUANTUM FIELD THEORY ON CURVED SPACETIME We always assume that the spacetime we consider is globally hyperbolic. This means that the spacetime has a complete Cauchy hypersurface Σ on which initial conditions for classical or quantum fields can be formulated. The manner in which quantum field theory can be formulated on such a spacetime depends heavily on whether Σ is compact. If the hypersurface is compact, we can use the usual Hilbert space formulation of quantum field theory. We can naturally associate a Hilbert space ℋ with the given theory, and the quantum dynamics can be described in terms of operators acting on ℋ. However, unlike in the case of quantum field theory in Minkowski spacetime, there is usually no distinguished ground state or vacuum vector in ℋ, since there is no natural “energy" to minimize. That means different states correspond to different observers, and they do not necessarily agree on the same vacuum. Hence, while formulating quantum field theory in curved spacetime, we must accept the idea of a naturally defined Hilbert space that does not have any distinguished vector. In the context of quantum field theory in curved spacetime, we explore the challenges of applying Euclidean field theory techniques to Lorentz signature (-,+,+,...,+) on D-dimensional spacetime manifolds. The primary issue is that a generic Lorentz signature spacetime lacks a useful Euclidean continuation. We start by formulating the space and algebra of the quantum field theory. At the end, we draw parallels with the algebra that we already built in previous sections. We find that the algebra of the theory is equivalent to Type II and Type III. We begin by defining a general quantum scalar field ϕ(x⃗, t) of mass m.
The action of the field can be defined as: I=-1/2∫d^D x √(g)(g^μν∂_μϕ∂_νϕ+m^2 ϕ^2) The mode (oscillator) expansion of the field (in 4-D) in terms of creation and annihilation operators (a_k , a_k^†) and their corresponding mode functions of positive and negative frequency (f_k(x⃗),f̅_k(x⃗)) is given by: ϕ(x⃗, t)=∑_k(a_k f_k(x⃗) e^-iω_k t+a_k^†f̅_k(x⃗) e^iω_k t) The sum runs over all modes k, with ω_k being the positive frequency associated with each mode. The modes are each given by the canonical variables (x̂_k, p̂_k)[Commutation relations are as in equation (<ref>).] on an infinite dimensional Hilbert space ℋ_k. A Hamiltonian in the algebra 𝒜 which could act on said space is defined as: H_k =1/2(p̂_k^2+ω_k^2x̂_k^2) ≡ω_k a_k^† a_k With the commutation relation [a_k, a_l]=[a_k^†, a_l^†]=0,[a_k, a_l^†]=δ_k l. ℋ_k represents a single space, we can construct an uncountably infinite-dimensional Hilbert space ℋ^* that encompasses all possible states in each ℋ_k. Then we have: ℋ^*=⊗_k=1^∞ℋ_k We’re interested on how the algebra acts on states in the space. For a normalized[Normalization condition: ⟨ψ_u |ψ_k⟩=1 ⇒⟨Ψ|Ψ⟩=Π_k⟨ψ_k |ψ_k⟩.] state Ψ=⊗_kψ_k that's generating ℋ_Ψ⊂ℋ^*, we can generate the space ℋ_Ψ by taking the completion of action of an algebra on the vector: 𝒜_0 Ψ, where 𝒜_0 is the algebra all polynomials in the p̂_k’s and x̂_k’s. Also consider a different state Ψ'=⊗_kψ'_k generating ℋ_Ψ'⊂ℋ^*. A very profound question we could ask is: Are ℋ_Ψ and ℋ_Ψ' the same space? The answer to this will give us an insight into the uniqueness of the irreducible representation. We can start off by considering the projection: |⟨ψ_k^' |ψ_k⟩| =c_k, where 0 ≤ c_k ≤ 1, as illustrated in diagram (<ref>). So |⟨Ψ^'|Ψ⟩|=∏_k=1^∞ c_k=0 unless c_k → 1 as k →∞. Which means c_k would converge to 1 because only finitely many don’t go to 1, notice that this is very similar to Type II and III. In essence, if |⟨Ψ^'|Ψ⟩| ≠ 0 then ψ_k^'≅ψ_k (coincide) ∀ k ⩾κ, where κ is called the ultraviolet cut-off. Without loss of generality, the projection has two cases: * |⟨Ψ^'|Ψ⟩| ≠ 0 * |⟨Ψ^'|Ψ⟩|=0 It only takes a single c_k to be zero[which correspond vectors being orthogonal with θ=90^∘.] for the projection to be zero as well. So we remove any c_k =0 from the product ∏_k=1^∞ c_k, then re-evaluate the product as k →∞. After getting rid of c_k =0, we look at the product: ∏_k=1^∞ c_k={[ ≠ 0 c_k → 1 rapidly as k →∞; 0 c_k ↛ 1 as k →∞ ]. We proceed with analyzing case (i): We find that Ψ and Ψ' are different (don't coincide) for finitely many Ψ_k's. We can use 𝒜_0 to see that ℋ_Ψ=ℋ_Ψ'. Analyzing case (ii): If the projection still yields zero for c_k ≠ 0, we find that Ψ and Ψ' are different for infinitely many Ψ_k's. 𝒜_0Ψ just makes finitely many changes to Ψ. So ⟨Ψ^'|Ψ⟩ = 0, but also for a∈𝒜_0, we that ⟨aΨ^'|Ψ⟩ =0 too. This applies for the whole algebra as well: ⟨𝒜_0 Ψ^'|Ψ⟩ =0, which means ⟨𝒜_0 Ψ^'|𝒜_0 Ψ⟩ =0 This means that the spaces are orthogonal: ℋ_Ψ⊥ℋ_Ψ' In this case, the spaces are NOT unique because the algebra generates completely different spaces. They’re not unitarily equivalent[Uniqueness would imply: ℋ_Ψ =U ℋ_Ψ'≅ℋ_Ψ', where U is unitary.]. We cannot go from one space to the other via unitary transformation, as we were doing with x̂ and p̂ in finite degree of freedom (QM). This goes in-line to what Stone-von Neumann theorem told us, that in infinite degrees of freedom, there can be multiple inequivalent irreducible representations. 
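The dichotomy between cases (i) and (ii) can be illustrated numerically. The sketch below assumes each mode k is prepared in the ground state of a harmonic oscillator and uses the standard Gaussian overlap formula |⟨0_ω|0_ω'⟩| = (2√(ωω')/(ω+ω'))^1/2 (a textbook result, not derived in this essay); the particular frequency assignments are illustrative choices. It shows that the total overlap ∏_k c_k stays nonzero only when c_k → 1 sufficiently fast, and collapses to zero when every mode carries a fixed mismatch, mirroring the orthogonality ℋ_Ψ ⊥ ℋ_Ψ'.

```python
import numpy as np

def ground_state_overlap(w1, w2):
    # |<0_w1 | 0_w2>| for two 1D harmonic-oscillator ground states (hbar = m = 1).
    return np.sqrt(2.0 * np.sqrt(w1 * w2) / (w1 + w2))

def total_overlap(omega, omega_prime):
    # |<Psi' | Psi>| = prod_k c_k over the modes kept in the truncation.
    return np.prod(ground_state_overlap(omega, omega_prime))

k = np.arange(1, 100001, dtype=float)
omega = k                                    # illustrative mode frequencies

# Case (i): omega'_k -> omega_k fast, so c_k -> 1 rapidly; the product
# converges to a nonzero constant and the two states live in the same sector.
print(total_overlap(omega, omega * (1.0 + 1.0 / k**2)))

# Case (ii): a fixed mismatch on every mode keeps c_k bounded away from 1;
# the product underflows to 0, i.e. H_Psi and H_Psi' are orthogonal sectors.
print(total_overlap(omega, 2.0 * omega))
```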
We can see that we keep doing this with an unlimited choice of different Ψ's, so we can keep reducing 𝒜_0 indefinitely, i.e. there's no irreducible representation, as expected in a Type II and III factors. One consequence of this richer algebraic structure is that it is not possible to define a unique vacuum state in QFT. Instead, there are infinitely many vacua, each associated with a different representation of the canonical commutation relations. This has important consequences for the understanding of particle creation and annihilation in QFT. So in `large' space ℋ^*=⊗_k=1^∞ℋ_k, we have the subspace ℋ_k ⊃𝒜_0 Ψ. For Ψ≠Ψ', we have ℋ_Ψ = ℋ_Ψ' iff Ψ|_k ⩾κ = Ψ^'|_k ⩾κ (at high energy, states are the same). Which means that at low energy modes, where k < κ, the choice of ψ_k is arbitrary for finitely many k's. For a local region 𝒰⊂ M, consider the corresponding algebra 𝒜_𝒰 which obey the physically motivated axioms <cit.>. For example, causality implies that 𝒜_𝒰 commutes with 𝒜_𝒰' if 𝒰 and 𝒰' are spacelike separated. Important to note here that because 𝒜_𝒰 is von Neumann algebra of Type III, it does NOT have an irreducible representation in a Hilbert space. As a consequence, there is no Hilbert space ℋ_𝒰 associated with 𝒰. This is because the Reeh-Schlieder theorem indicates 𝒜_𝒰Ψ is dense in ℋ that's not restricted to 𝒰, so ℋ≠⊗_∪𝒲_iℋ_i. Different region, say 𝒲, gives the same space ℋ. If the 𝒜_𝒰 was Type I, then it would have an irreducible representation in the Hilbert space, and we could obtain a local Hilbert space ℋ_𝒰. Despite not being able to find ℋ_𝒰 to identify with 𝒰, we can still identify 𝒜_𝒰 with 𝒰. This means that we have local algebra, but not local hilbert space. To summarize this section, in standard quantum mechanics, the state of a system is described by a vector in a Hilbert space, and the evolution of the state is given by a unitary transformation acting on the Hilbert space. However, in certain cases, such as when the system is subject to strong interactions or in the presence of a gravitational field, the Hilbert space may not be unique or may not exist at all. This is because the concept of a Hilbert space requires the existence of a scalar product, which may not be well-defined in these situations<cit.>. In such cases, the quantum state of the system is described by a density matrix, which is a more general mathematical object that can describe mixed states as well as pure states. The Type III generalization of density matrices allows for even more general descriptions of the quantum state, which can be used in situations where the Hilbert space is not well-defined. In situations where the Hilbert space cannot be uniquely defined, the dynamics of quantum systems are described through the evolution of density matrices or, more specifically, Type III generalization of density matrices, rather than through operators on a Hilbert space. This is because a natural energy cannot be minimized and there is no distinguished ground state or vacuum vector in such cases. Therefore, the evolution of the system must be described through the evolution of density matrices. Overall, the importance of the Type III generalization of density matrices and the Tomita-Takesaki theorem lies in their ability to provide a more general framework for describing quantum systems in situations where the standard Hilbert space formalism is not applicable. 
§ CONCLUSION The algebraic approach to QFT provides a powerful framework for understanding the algebraic properties of the observables in a system, particularly in terms of von Neumann algebras, and it is especially well suited to exploring the phenomenon of entanglement entropy. Entanglement entropy is a measure of the entanglement between different regions of a quantum system. It is defined as the von Neumann entropy of the reduced density matrix of a subsystem, and it quantifies the entanglement between the subsystem and its environment. In QFT, the algebra of observables plays a crucial role in the definition of entanglement entropy. One of the key insights from the operator algebra approach to QFT is the connection between entanglement entropy and the algebraic structure of the theory. In particular, the concept of local observables is central to understanding the entanglement entropy of a quantum field theory. Local observables are operators that are localized in spacetime regions and can be measured independently of the rest of the system. The algebra of local observables, known as the local operator algebra, contains important information about the entanglement structure of a quantum field theory. By characterizing the algebraic structure of QFT in terms of local observables, one can express the entanglement entropy of a subsystem in terms of the algebra of observables associated with that subsystem. This provides a powerful way of characterizing the entanglement structure of a quantum field theory and has important implications for many areas of physics, including quantum gravity and black hole physics. One example of the application of the operator algebra approach to QFT is the study of quantum field theory on curved spacetimes. In curved spacetimes, the algebra of observables is no longer trivial and can contain nontrivial topological and geometrical information. The local operator algebra approach provides a powerful tool for understanding the algebraic structure of QFT on curved spacetimes, with important implications for understanding the quantum properties of black holes and for some of the most sought-after goals of theoretical physics <cit.>. 
http://arxiv.org/abs/2307.04053v1
20230708220300
How is Fatherhood Framed Online in Singapore?
[ "Tran Hien Van", "Abhay Goyal", "Muhammad Siddique", "Lam Yin Cheung", "Nimay Parekh", "Jonathan Y Huang", "Keri McCrickerd", "Edson C Tandoc Jr.", "Gerard Chung", "Navin Kumar" ]
cs.CL
[ "cs.CL" ]
How is Fatherhood Framed Online in Singapore? Sen Lu, Abhronil Sengupta School of Electrical Engineering and Computer Science The Pennsylvania State University University Park, PA 16802, USA Email: {senlu, sengupta}@psu.edu ============================================================================================================================================================================================ The proliferation of discussion about fatherhood in Singapore attests to its significance, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. We analyzed 15,705 articles and 56,221 posts to study how fatherhood is framed in Singapore across a range of online platforms (news outlets, parenting forums, Twitter). We used NLP techniques to understand these differences. While fatherhood was framed in a range of ways on the Singaporean online environment, it did not seem that fathers were framed as central to the Singaporean family unit. A strength of our work is how the different techniques we have applied validate each other. Keywords: fatherhood, singapore, social media § INTRODUCTION Fatherhood is now an unprecedentedly visible cultural phenomenon in Singapore. This increased attention is related to the inaugural nationwide fatherhood movement, Dads for Life, the continual development of parenting magazines and the recent emergence of fatherhood blogs within the Singapore internet sphere. In recent times, various fatherhood-related initiatives in Singapore have collaborated with government agencies, business corporations, and community organizations on initiatives to create awareness of the importance of the father’s role, develop commitment to good fathering, and encourage fathers to spend time with their children. In Singapore, the introduction of paternity leave and encouragement for fathers to play a bigger role in childcare and child-raising suggest that the government is sympathetic to the pursuit of gender equality. However, there is a gap between the perception of the importance of fathers and the actual involvement of fathers in their children’s lives. In addition, the role of fathers continues to be recognized primarily as that of a breadwinner. Yet fathers want to do more and experience parenthood as a very fulfilling experience, to which they are highly committed <cit.>. The proliferation of discussion about fatherhood in Singapore attests to its significance as a commercial, ideological, and cultural subject, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. While there has been research around how fatherhood is framed in the Singapore context, there is limited analysis of how fatherhood is framed on social media, news outlets, or online forums. Such platforms are where opinions or news on fatherhood are forwarded, people get parenting information, or get quick answers to fatherhood questions. Studying how fatherhood is framed in the online Singaporean context is central to crafting progressive and effective policy around parenting in Singapore, as well as managing the media landscape. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. 
Policies developed in Singapore around fatherhood may then be implemented in nearby East Asian countries, which have similarly low birth rates, to mitigate a rapidly aging society and a shrinking taxpayer base. In this paper, we demonstrate how fatherhood in Singapore is framed on multiple online platforms (news outlets, parenting forums, Twitter). Our main research question (RQ) is as follows: How is fatherhood in Singapore framed on various online platforms? Our findings suggested that while fatherhood was framed in a multiplicity of forms online, it did not seem that fathers were core to the family. § RELATED WORK Fatherhood Framing Online Work on fatherhood in Singapore is limited. Recent work proposed the concept of Confucian masculinity to explain how the depiction of active fatherhood reinforced the ubiquitous normal family that upholds patriarchal ideology and perpetuates patriarchal power, obscuring the contradictions of class, race, and sexuality that exist in Singapore <cit.>. Other work examined the fatherhood discourses in new dad ads; feature articles from Today’s Parents, a parenting magazine; articles from Life Dads, a government electronic newsletter on fatherhood; and blog entries from three fatherhood blogs <cit.>. The study employed critical discourse analysis, and proposed a Hegemonic Fatherhood Discourse Schema to postulate that the new father/man and traditional father/man ideology is the hegemonic fatherhood in Singapore, ultimately serving the interests of the Singapore state. While past work detailed framing around fatherhood in Singapore, previous research did not compare framing across online platforms, or provide an overview of fatherhood framing to develop policy or informational tools. While there was limited fatherhood research in the Singapore context, there was relatively more research on fatherhood framing online in other contexts. For example, recent work <cit.> used discussion threads from two Web-based parenting communities, r/Daddit and r/PreDaddit from Reddit. Results demonstrated that men used web-based communities to share the joys and challenges of the fatherhood experience. § DATA AND METHOD Data We first selected three content experts who had published at least ten peer-reviewed articles in the last three years around fatherhood. We ensured the content experts were either from Singapore or conducted research on fatherhood/parenthood in Singapore. Given the wide disciplinary focus of fatherhood research, we sought to select a range of experts across disciplines. We recruited one expert from each of these disciplines: Public policy, social work, computational social science. Selecting experts from a range of fields allows results to be contextualized to fields where fatherhood research is concentrated, allowing for findings to be drawn on by stakeholders in public policy, social work, and computational social science. The context experts separately developed lists of online platforms most relevant to fatherhood in Singapore. Each expert developed a list of ten platforms independently, and we selected only platforms common to all three experts' lists. For each online platform, experts also provided up to 10 examples, where applicable, of websites, or forums, and we selected examples common to all experts' lists. 
The final list of platforms is as follows: Singapore news outlets (Straits Times, Channel NewsAsia, TODAYonline), parenting forums (singaporemotherhood.com, singaporeparents.com.sg/forum, forums.hardwarezone.com.sg/threads/welcome-to-hwzs-parenting-kids-early-learning-forum.5684416, mummysg.com/forums), Twitter (filtering only posts related to Singapore). Examples of platforms not selected: Facebook, Instagram, Reddit, LinkedIn. We were not able to collect Facebook and Instagram data as there was limited support for CrowdTangle, the main mode of Facebook/Instagram data collection. Similarly, the pushshift.io Reddit API had limited support and Reddit data collected was incomplete. LinkedIn had limited fatherhood posts and posts were mostly centered on non-family content. To capture fatherhood-related text on these platforms, we used queries based on a related systematic review e.g., father* OR dad* OR patern* OR paternal OR paternity OR stepdad* OR stepfather* OR step-dad* OR Step-father* OR papa. We used only English-language keywords as most of discussion in the Singapore internet environment is in English. English is also the major language of communication in Singapore. For forums, we used automated scraping techniques (Beautiful Soup) to obtain forum posts from 2010 to 2023, with the same set of keywords. We ran a search for querying the keywords in the title of the forum post or replies to the forum post. We collected all posts that contained these keywords within the forum posts and replies. Regarding Twitter, we used the Twitter API and the indicated keywords to collect tweets from 2011 to 2023. Finally, for news articles, we used Nexis to obtain news archives from 1992 to 2023. To prepare the data for analysis, English stop words such as the, a, an were removed, along with abbreviations, and terms were stemmed using Porter’s stemming algorithm. Stemming converts words with the same stem or root (e.g., innovative and innovator) to a single word type (e.g., innovate). We organized data into four streams for analysis: Twitter (tweets), news (news articles), forums (forum posts). Sentiment Sentiment analysis can aid us in comprehending how sentiment around fatherhood is expressed in the online arena. As an example, forums may be more likely to have lower sentiment compared to news. DistilBERT was used for sentiment analysis. DistilBERT was used separately on data from each platform. The model assigns sentiment based on each article or post. Sentiment is from a -1 to 1 scale, where values <0 are negative sentiment, >0 are positive sentiment, and close to 0 are neutral. To stay within the admitted input size of the model, the text length (title + body text) was clipped to to 512 tokens. Emotion Recognition Emotion recognition can help us understand how emotions are expressed across various platforms, indicating differences in how fatherhood is framed in Singapore. For example, forums may be more likely to contain anger compared to news. We used DistilBERT for emotion recognition. The model was applied separately on data from each platform. The model assigns emotions (anger, fear, joy, love, sadness, surprise) based on each article or post. To stay within the admitted input size of the model, we clipped the length of the text (title + body text) to 512 tokens. We provided an overview of the data in Table <ref>. Two reviewers independently examined 10% of the articles or posts within each dataset to confirm salience with our research question. 
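A minimal sketch of the filtering and scoring steps described in this section is given below. The regular expression mirrors the listed query terms; the preprocessing follows the stop-word removal and Porter stemming described above; and the checkpoint name shown (distilbert-base-uncased-finetuned-sst-2-english) is an assumed standard sentiment model, since the text only states that DistilBERT was used. Mapping the model's label/score output onto the [-1, 1] scale is omitted here.

```python
import re
from nltk.corpus import stopwords          # requires nltk.download("stopwords")
from nltk.stem import PorterStemmer
from transformers import pipeline

# Keyword filter mirroring the query: father* OR dad* OR patern* OR stepdad* ...
FATHERHOOD = re.compile(
    r"\b(father\w*|dad\w*|patern\w*|step-?dad\w*|step-?father\w*|papa)\b", re.I
)

STOP = set(stopwords.words("english"))
stem = PorterStemmer().stem

def preprocess(text: str) -> str:
    # Stop-word removal and Porter stemming, as described above.
    return " ".join(stem(w) for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP)

# Sentiment scoring with a DistilBERT checkpoint (model name is an assumption).
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

def score(posts):
    relevant = [p for p in posts if FATHERHOOD.search(p)]
    # Clip each title + body to the model's 512-token input limit, as in the text.
    return sentiment(relevant, truncation=True, max_length=512)

print(preprocess("My dad was telling me something serious and I burst out laughing"))
print(score(["Happy Father's Day to the best dad ever!", "Traffic was terrible today."]))
```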
The reviewers then discussed their findings and highlighted items deemed relevant across both lists. We noted the following relevance proportions: News outlets (82%), Twitter (90%), Parenting forums (78%). § RESULTS Overview We first explored sample posts across platforms. News outlets generally mentioned fatherhood in the context of providing demographic data about interviewees, with excerpts such as So the 40-year-old eye specialist and father of three had to wrap up his work at the hospital quickly, or when interviewees were referring to their fathers with no specific reference to fatherhood e.g., Mr Lee, whose father founded the clan association, rents out its third floor to a small media firm. Broadly, news outlets did not seem to focus on the experience of fatherhood, with the bulk of articles mentioning fathers as a demographic indicator. Twitter posts focused on people recounting incidents, often humorous or heart-warming, with their fathers e.g., My dad was telling me something serious and he hit his leg against the table and I burst out laughing so he had no choice but to laugh, Dad brought back homemade fresh horfun (noodles) from the temple. It's delicious. Twitter seemed to have a greater focus on fathers playing a core function in the Singapore family unit. Posts from forums were very diverse topically. Several posts were about hiring a helper for a young child: My husband is totally against the idea of employing a helper, as he does not like a stranger living with us; I am a father of a newborn baby girl. I recently engaged a confinement lady by the name of Auntie Judy. Such posts suggest the significant role domestic helpers play in the Singaporean family, and how a portion of a father's role is perhaps to oversee the hiring of the domestic helper. Other posts were about suspected infidelity e.g., So my Wife of 2 years has been cheating on me with another male colleague, perhaps indicative of the strain parenting is related to within some Singaporean families. We then provided word clouds in Figure <ref> as an overview of the data. Across all datasets, words such as time, work, now were prominent, perhaps indicative of how work and likely limited time are central to fatherhood in Singapore. Most common trigrams for news articles centered on leaders of Singapore, who were father and son: Lee Kwan Yew and Lee Hsien Loong. This may indicate that the mainstream news media discussion around fatherhood had little to do with fathers' role in a family, but simply around familial relationships within major news stories. In 1992 - 2003, common trigrams in the news were engineer success story and pressure parent counting. From 2004 - 2019, common trigrams were two baby boy, first new baby, and first time parent. From 2020 - 2022, common trigrams were generation grit family, and grit family love. Broadly, news trigrams may detail how the initial focus was on children bringing pride and wealth to their families, with a transition toward celebrating new births. In more recent years, forums tended to focus on how the family unit could overcome struggles. The most common trigrams in Twitter focused on celebrating fathers through specific events such as Father's Day and birthdays: happy father's day, happy birthday daddy. Such phrases indicated that Twitter may be used to celebrate fathers, but only in relation to pre-defined events, instead of fathers being celebrated for time put toward caregiving etc. Common trigrams in 2011 - 2020 were love u dad, dad love love. 
2021 onwards, popular trigrams were feel fulfilling husband, and last nite daddy. Twitter data demonstrated a shift from declaring love for one's father, to fathers indicating how they were fulfilled in their role. Unlike other datasets, there appears to be a shift towards a more active form of fatherhood in Singapore, where fathers describe pride in their role. Trigrams in forums centered on perceived marital infidelity, such as wife unfaithful husband, and assisted reproductive technologies, such as ivf mommy toben, and cousin egg donor. Forums seemed to be platforms where people sought support around spousal infidelity and assisted reproductive technologies, rather than discuss fathers' role in the family unit. The most common trigrams in forums changed over time, with phrases such as gave birth daughter, and first time dad in 2010 - 2019, but with phrases such as happen file divorce, and judged urged divorcing in 2020. In 2021, common trigrams were conceiving single women, while in 2022, trigrams such as crave physical intimacy, and physicial intimacy normal were popular. Forums, while initially around celebrating birth, may have become places where people sought information around divorce, assisted reproductive technologies, and physical intimacy. Broadly, descriptive data indicated shifting framing around fatherhood, but a limited focus on fathers as core to the Singapore family. Sentiment We presented sentiment analysis results across each platform in Table <ref>. News and Twitter had higher proportions of positive sentiment (53.7% and 57.0% respectively) compared to forums (27.2%). Forums had the highest proportion of negative sentiment (65.9%), compared to news and Twitter (43.8% and 33.8% respectively). We then presented sentiment analysis results over time for each platform in Figure <ref>. News data exhibited several fluctuations but had the greatest rise in positive sentiment post-2009. The nationwide fatherhood movement, Dads for Life, started in 2009, may explain the increase in positive sentiment. Examples of news article content with positive sentiment were as follows: A group of prominent figures from various organisations and businesses have banded together to start up the Fathers Action Network. The network aims to kick-start a movement called Dads for Life to get fathers more involved with their families, especially in their childrens' lives. This follows a fatherhood perception survey conducted in April and May this year by a Ministry. Most felt that being a father and raising children is one of the most fulfilling experiences a man can have.; Work is work and family is family. Our ultimate goal is still our family. Work is just a means to get the money so we should be very clear about it. And that is the sort of spirit that the Dads for Life movement wants to inspire. After 2017, positive sentiment declined over time, and was overtaken by negative sentiment. Forums had broadly negative sentiment 2015 onward, reaching a peak in 2017, followed by a steady decline. Twitter exhibited mostly positive sentiment 2013 onward with a steady decline after. We suggest that the high proportion of positive sentiment in the news may be related to governmental initiatives and the high proportion of negative sentiment in forums may be related to a more frank discussion of the stresses of parenting. Emotion Recognition We presented emotion recognition results across each platform in Table <ref>. 
News had the highest proportion of joyous (61.3%) and loving (34.2%) posts, perhaps reflecting governmental initiatives around fatherhood. While Twitter and forums had similar levels of joyous posts (56.6% and 44.2% respectively), they were still not as high as news. Similarly, loving posts on Twitter and forums (2.4% and 4.1% respectively) were far lower than news outlets. We suggest that the emotion in the news reflects pro-fatherhood governmental initiatives, but these do not always filter successfully to other media. We then presented emotion recognition results over time for each platform in Figure <ref>. News data exhibited several fluctuations but had the steepest rise post-2009. Dads for Life, started in 2009, may explain the uptick in news articles, especially around joy. Examples of news article content that were coded as joy: It's a happy Father's Day for SAFRA, as it is set to receive funds from the "Dads for Life" movement to pump up father-friendly activities for its members over the next two years.; He will be running alongside his daughter in the Dads For Life 800m Father and Child Challenge, a new category in the annual SAFRA Singapore Bay Run and Army Half-Marathon. Mr Shariff, who was born without part of his left leg, said: I signed us up because I want to show her how running can make her happy. Both Twitter and forum posts saw a sudden spike post-2013 onward, mostly around joy. We suggest that the shift in emotion may be due to a delayed reaction to Dads for Life. Broadly, we forward that the 2009 Dads for Life movement and other similar policies may have catalyzed emotional reactions around fatherhood in the Singapore online arena. However, the rises in emotion were not sustained and seemed to decline by 2023, perhaps indicative that new policy levers may need to be rolled out. § DISCUSSION Our RQ was to explore how fatherhood in Singapore is framed on various online platforms. A strength of our work is how the different techniques we applied validate each other as well as reveal differences across platforms. While fatherhood was framed in a range of ways on the Singaporean online environment, it did not seem that fathers were framed as central to the Singaporean family unit. Results also indicated that governmental initiatives may have some effect on altering the framing of fatherhood, but are not lasting in effect. The concordance in our results suggests the veracity of our findings and we hope that results can add to research and policy around fatherhood in Singapore. Our evidence adds to previous research, where we provided data on how governmental initiatives may initially buttress framing around fatherhood, but needs to be sustained to provide broad and lasting support for fathers. Key to how fatherhood is framed in Singapore is the inclusion of fathers' viewpoints when writing news articles on fatherhood. Where possible, fathers themselves should be consulted on articles about fatherhood. For example, a panel staffed by fathers can comment on fatherhood-related online news articles, providing suggestions on how articles can more accurately represent fathers' concerns <cit.>. Our findings relied on the validity of data collected with our search terms. We used a range of established techniques to search for all articles/posts relevant to fatherhood, and our data contained text aligned with how fatherhood is framed. We were thus confident in the comprehensiveness of our data. We only used English-language text but will include other languages in future work. 
Given the token limits for the emotion recognition technique, we were not able to apply emotion recognition to the entirety of longer news articles. We also note that the recall of the search string was not tested, and that our data may not generalize to how fatherhood is framed globally. Our goal was not to identify who was doing the framing around fatherhood, e.g., family members or government; future studies will seek to identify which stakeholders were likely involved in the framing. 
http://arxiv.org/abs/2307.05871v1
20230712015624
A Novel SCL Bit-Flipping Decoding Of Polarization-Adjusted Convolutional (PAC) Codes
[ "Wei Zhang" ]
cs.IT
[ "cs.IT", "math.IT" ]
A Novel SCL Bit-Flipping Decoding Of Polarization-Adjusted Convolutional (PAC) Codes Wei Zhang Nanjing University of Posts and Telecommunications, Nanjing 210003, China [email protected] August 12, 2023 ========================================================================================================================== Polar codes have attracted the attention of numerous researchers in the past decade due to their excellent performance. However, their performance at short block lengths under standard successive cancellation decoding is far from desirable. An effective method to improve the performance at short lengths is CRC precoding followed by successive-cancellation list decoding. Later, Arikan presented polarization-adjusted convolutional (PAC) codes, which further improve the performance of polar codes. In fact, bit-flipping is another post-processing method that can improve decoding performance. In this paper, we propose a novel SCL Bit-Flipping of PAC Codes. We show that better performance can be achieved using list decoding when the list size is the same for PAC codes (N=128, K=64). The decoding performance of our newly proposed PAC-SCLF with a list size of 32 is 0.3 dB better than that of the traditional PAC-SCL with a list size of 32. We set the maximum number of bit flips to 5. The performance of the list size (L=32) for PAC-SCLF is almost the same as the performance of the list size (L=128) for PAC-SCL. Polar codes, PAC-SCL, bit-flipping decoding. § INTRODUCTION In 2009, Arikan proposed a kind of channel codes called polar codes which have attracted lots of attention from academic community and industry due to their excellent performance <cit.>. In binary-input memory-less symmetric channels, polar codes can achieve the Shannon capacity under successive cancellation (SC) decoder with infinite code length. However, for finite code length, the frame error rate (FER) performance of polar codes under SC decoder is suboptimal. To remedy this deficiency, successive cancellation list (SCL) decoder <cit.> and cyclic redundancy check (CRC) aided SCL decoder named CA-SCL decoder <cit.> were proposed. Along with the list method, various improvements have remained the statement of article in <cit.> - <cit.>. Besides the article previously covered polar code decoding algorithms, SC Flipping algorithm is firstly proposed in <cit.>, where an error-prone information bit will be flipped during next decoding attempt as long as present decoded result cannot pass CRC check. Int this way the FER performance is dramatically improved compared with SC decoding <cit.>. In order to reduce the search scope of flipping bits, CRC mechanism <cit.>, <cit.> was referred in, where polar codeword is divided into segments and each of them are concatenated with a couple of CRC bits. In addition, a family of re-decoding schemes, called SCL-Flip decoding <cit.>, <cit.>, was proposed for CA-SCL decoding schemes. In each re-decoding attempt in SCL-Flip decoding method, the decision for the CA-SCL decoding on the path competition for a specially selected information bit is alternated, and then standard SCL decoding is conducted for the remaining bits. At the ISIT in 2019, Arikan proposed a significant breakthrough in polar coding, which boosts the performance of polar codes at short lengths even further. Specifically, a new polar coding scheme <cit.>, which he calls polarization-adjusted convolutional codes. 
At low SNR, the FER performance of PAC-SCL decoding is very close to the BIAWGN dispersion bound approximation. In <cit.>, the results show that PAC codes are superior to polar codes and Reed-Muller codes, and the goal of rate-profiling may be to optimize the weight distribution at low weights. <cit.> -<cit.>, further improve the performance of PAC. In fact, the SCL decoding procedure of PAC is almost like with compare traditional SCL decoding procedure of polar codes, which is just one more convolutional module before encoding and after decoding. Therefore, bit-flipping is effective for PAC-SCL decoding. In this letter, we firstly put forward bit flipping applied in PAC-SCL decoding, and the performance was greatly improved compared with the traditional PAC-SCL decoding. When the RM code was used to construct the polar codes, N=128, K=64, and L=32, the FER performance was increased by 0.3 dB. § PRELIMINARIES Polarization-adjusted convolution codes are convolutional precoding which is performed before the polar codes. The pre-transformation is performed by a rate-1 convolutional encoding as shown Fig. 1. In this section, we first review polar codes and list decoding, SCF and SCLF, then we will focus on PAC codes. §.§ Polar Codes and List Decoding Consider a (N, K, 𝒜) polar code with block-length N=2^n, information bit length K and information bit index set of 𝒜. Let u_1^N=(u_1,u_2,...,u_N) denote the input vector to be encoded, where u_i is an information bit whenever i∈𝒜 and a frozen bit for i∈𝒜^c. For polar encoding, x_1^N=u_1^N× G_N is employed with G_N=F^⊗ n, where F^⊗ n denotes the n-th Kronecker power of F=[ [ 1 1; 1 0 ]]. For SC decoding, it successively evaluates the log-likelihood ratio of each bit u_i based on the received vector y_1^N and its i preceding decision bits û_1^i-1 L^i = logP(y_1^N,û_1^i - 1|u_i = 0)/P(y_1^N,û_1^i - 1|u_i = 1). Then, û_i is decided as 0 if L^i≥ 0 and as 1 if L^i≤ 0. Instead of just keeping a single path, SCL preserves L > 1 paths during the decoding process, which could significantly improve the decoding performance. Let L_l^i denote the log-likelihood ratio of the bit u_i along the l-th path. When the number of paths is greater than the list size as the SCL decoder proceeds from level i to i+1, it retains L best paths according to the updated path metric PM_l^i ={ PM_l^i - 1, if û_i,l = 1/2[1 - sign(L_l^i)] PM_l^i - 1 + |L_l^i|, otherwise. . where sign(x )=1 if x>0 and -1 otherwise. After all nodes in 𝒜 are visited, the path with the smallest path metric is selected as the survival path. For CA-SCL decoding, the output L-paths are checked by CRC. Once a path passes the CRC check, it is claimed as the decoded output. Otherwise, a decoding failure is claimed. §.§ SC-Flip and SCL-Flip Decoding SC-Flip decoding algorithm attempts to flip a decision to get the correct decoding whenever the conventional SC decoding fails <cit.>. It was observed that error propagation occurs frequently in SC decoding, where any single erroneous decision may result in a burst of errors. Hence, it is crucial to find the first error position in SC-Flip decoding. For SCL decoding, the bit-flipping can be again employed whenever a decoding failure is claimed. The so-called SCL-Flip decoding was recently proposed in <cit.>, <cit.>. In <cit.>, a critical set is detected, which is deemed to be error-prone during SCL decoding. Therefore, each bit in this critical set is flipped in the re-decoding attempts. 
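As a small illustration of the path-metric update given above (the decision keeps the metric when it agrees with the hard decision (1 - sign(L))/2 and otherwise pays a penalty of |L|), the following sketch extends and prunes a toy list. It is a standalone illustration, not the full SCL decoder, and the numeric values are made up for the example.

```python
import numpy as np

def update_path_metric(pm_prev, llr, u_hat):
    # Keep PM if u_hat agrees with the hard decision (1 - sign(L))/2,
    # otherwise add the penalty |L|.
    hard = 0 if llr >= 0 else 1
    return pm_prev if u_hat == hard else pm_prev + abs(llr)

def prune_paths(path_metrics, L):
    # Keep the indices of the L paths with the smallest metrics.
    return np.argsort(path_metrics)[:L]

# Toy example: 4 candidate paths after extending with u_i in {0, 1}, list size L = 2.
pms = np.array([update_path_metric(0.7, +2.3, 0),   # agrees  -> 0.7
                update_path_metric(0.7, +2.3, 1),   # differs -> 3.0
                update_path_metric(1.1, -0.4, 1),   # agrees  -> 1.1
                update_path_metric(1.1, -0.4, 0)])  # differs -> 1.5
print(prune_paths(pms, L=2))                         # -> indices [0, 2]
```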
In <cit.>, the bit position for flipping is determined by a newly-introduced confidence metric for the survival paths and simulations shown significant performance improvement over the scheme in <cit.>. §.§ Brief overview of PAC codes PAC codes are concatenated codes in which a convolutional transform is employed before polar encoding. Polar codes the block length N of PAC code is also 2^n. It's shown Fig. 1 that the process of PAC encoding consists of three parts: rating-profiling, convolution precoding and polar transform. The information bits d=(d_0,d_1,...,d_K-1) are first mapped to a vector v=(v_0,v_1,...,v_N-1) using a rate-profile (RM-construction <cit.> ). The rate-profile is formed based on the index 𝒜 such as u_𝒜=d and u_𝒜^c=0. Meanwhile, u_𝒜^c=0 represents frozen bits and the other represents information bits. After rate-profiling, the vector 𝐯 is transformed using a convolutional generator polynomial 𝐠=[g_0,g_1,...,g_m] to u_i=∑_j=0^mg_jv_i-j (more description sees subroutine conv in Algorithm 1), where g_i∈{0,1}. In this letter, we take advantage of convolutional generator polynomial which is 𝐠={1,0,1,1,0,1}. In summary, the polar trans-formation is performed by 𝐱=𝐮·𝐆, 𝐆 represents the generator matrix of polar codes. § BIT-FLIPPING LIST DECODING OF PAC CODES In this section, we review the SCL decoding of PAC. The SCL decoding scheme of PAC codes is shown in Fig. 2. Then, we will elaborate on our contribution that constructs a bit-flipping set during SCL decoding of PAC. This operation greatly improves the decoding performance. So it is necessary to introduce the application of bit-flipping in SCL decoding of PAC in detail. §.§ SCL decoding of PAC codes SCL decoding of PAC can be regarded as the convolutional decoding embedded in the traditional SCL decoding of polar codes. More details will be introduced in Algorithm 2. Our implementation is similar to <cit.>. We discover the list decoding for PAC codes which trades a fixed time complexity for a large memory requirement (to store a list of paths) and is easier to implement. In the context of PAC codes, we reproduced the results of article <cit.>. Algorithm 2 shows the list decoding approach. Before arriving at the first information bits, it is only a single path in the list. The decoder knows the value of frozen bits, thus v_i=0, then according to the current memory state cur and the generator polynomial 𝐠=x^6+x^4+x^3+x+1. The function subconv is identical with the one in Algorithm 1. Take advantage of the function f or function g to calculate the LLR values. The corresponding path metric is calculated using subroutine calcPM. When the value u_i is known, We can calculate the value of the partial sum using updateSums. On the other hand, if the index of the current bit is in the set 𝒜, there exists two options for value of v_i 0 and 1. For each option of 0 and 1, the process for 𝒜 including convolutional encoding, calculating path metric the encoded values u_i=0 and 1 are fed back into SCL process. the two encoded values ui = 0 and 1 are fed back into SC process. The subroutines updateLLRs, updateSums, delPath already are introduced the Algorithm 2. Note that the vector llr and p are the LLRs and parity sums. It is worth noting that the SCL decoding of PAC is almost the same as the traditional polarization code SCL decoding process, except for additional a process of convolutional re-encoding at each decoding step which needs the next memory state is stated for the next path. 
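The three encoding stages described above (rate-profiling, rate-1 convolution, polar transform x = u·G with G_N = F^⊗n) can be sketched compactly as follows. The coefficient list used here, [1, 1, 0, 1, 1, 0, 1] for g(x) = 1 + x + x^3 + x^4 + x^6, matches the polynomial x^6 + x^4 + x^3 + x + 1 quoted in the text (the shorter list given earlier appears truncated), but the coefficient ordering and the toy index set 𝒜 are assumptions for illustration, not an RM rate profile.

```python
import numpy as np

G2 = np.array([[1, 1],
               [1, 0]], dtype=np.uint8)     # the kernel F used above

def polar_transform(u):
    n = int(np.log2(len(u)))
    G = G2
    for _ in range(n - 1):
        G = np.kron(G, G2)                  # G_N = F^{(x) n}
    return u @ G % 2

def conv_precode(v, g):
    # Rate-1 convolution u_i = sum_j g_j * v_{i-j} (mod 2).
    u = np.zeros_like(v)
    for i in range(len(v)):
        for j, gj in enumerate(g):
            if gj and i - j >= 0:
                u[i] ^= v[i - j]
    return u

def pac_encode(d, N, A, g):
    # Rate-profiling: place the K data bits on the index set A, zeros elsewhere.
    v = np.zeros(N, dtype=np.uint8)
    v[sorted(A)] = d
    return polar_transform(conv_precode(v, g))

# Toy (8, 4) example; the index set here is only illustrative.
g = [1, 1, 0, 1, 1, 0, 1]                   # g(x) = 1 + x + x^3 + x^4 + x^6
x = pac_encode(np.array([1, 0, 1, 1], dtype=np.uint8), N=8, A=[3, 5, 6, 7], g=g)
print(x)
```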
To reduce the computational complexity and performance of list decoding, the methods proposed in the literature such as in <cit.>, <cit.> can be applied to PAC list decoding as well. In the following section, we will describe our work to improve the SCL decoding algorithm of PAC by applying the bit flipping technique to the decoding algorithm. §.§ Constrcut bit-flipping set of PAC codes In <cit.>, it was shown that the decoding failures in the CA-SCL decoding are mainly caused by the elimination of the correct path from L maintaining paths. In general, bit-flipping, as a post-processing technique, could be repeatedly implemented if the previous attempt fails. It is worth considering whether applying bit flipping to SCL decoding of PAC would be an improvement, since the SCL decoding principle of PAC is similar to the traditional SCL decoding principle of polar codes. Therefore, we attempt to use this idea and achieve performance boost. At the beginning of SCL bit-flipping decoding of PAC, the first thing we need to do is construct the bit-flipping set. According to <cit.>, it was shown that the confidence in the decision for the path competition on u_i, i ∈𝒜∖𝒜_0, can be determined from the ratio between the total probability of the L survival paths to the total probability of the L removed paths, namely, E_i(α ) = log∑_l = 1^L e^ - PM_l^i/( ∑_l = 1^L e^ - PM_l + L^i)^α in the case of α≥ 1. Note that 𝒜_0 denote the set consisting of the least log_2 L information indices. However, once the first decoding failure at level i_1 ∈𝒜 occurs during the SCL decoding of PAC, it is likely to produce error propagation in the subsequent decoding process for i > i_1. In this case, there will be a biased estimate in the confidence of the decision which is less than its correct value for a large index i. To compensate for the biased estimate due to the error propagation, α≥ 1 is introduced in (<ref>). Essentially, an information index i that has a low E_i(α) should have a high priority for re-decoding. Therefore, a bit-flipping index set is constructed in <cit.> by locating the bit indexes with smallest values of E_i(α). This may simply adopt the following rule, namely î_1 = min_i E_i(α), i∈𝒜∖𝒜_0. However, whenever a decoding failure occurs, there may exist multiple bit errors in the final decoded output û_1^N. This means that Q> 1 error positions {i_1,⋯,i_Q}⊂𝒜 may appear in û_1^N. All of these error positions may have low confidence in E_i_q(α), q∈ [1,Q]. This makes the location of the first error position using (<ref>) unreliable. Then, defining a bit-flipping set as follows: ℰ^α={E_i(α) | i∈𝒜∖𝒜_0}, By sorting the above set independently, the index of elements in sorted ascending order can be obtained as ℐ=sort(ℰ^α) where sort(·) returns the indices of ℰ^α with the ascending order of E_i_1^α≤ E_i_2^α≤...≤ E_i_Q^α. §.§ SCL bit-flipping decoding of PAC codes When the first SCL decoding of PAC codes is performed, the return value is slightly different from the traditional SCL decoding of PAC codes. There exists additional E_i(α) according to formula (3). If the receiving bits u_1^N[1] ≠v, we make a judgment that this process SCL decoding of PAC codes fails. Then, the strategy we take is to construct a bit-flipping set according to GenFlip based on Algorithm 3. 
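The confidence metric E_i(α) and the construction of the flip order defined below can be sketched as a standalone helper. The use of a log-sum-exp routine and the toy path metrics are implementation choices made here for illustration; the essential point is that information indices with the least confident path competition are flipped first.

```python
import numpy as np
from scipy.special import logsumexp

def confidence(pm_kept, pm_removed, alpha=1.2):
    # E_i(alpha) = log( sum_l e^{-PM_kept} / (sum_l e^{-PM_removed})^alpha )
    return logsumexp(-np.asarray(pm_kept)) - alpha * logsumexp(-np.asarray(pm_removed))

def gen_flip_set(metrics_per_bit, alpha=1.2):
    # metrics_per_bit: {i: (PMs of L survivors, PMs of L removed paths)} for i in A \ A_0.
    # Sort the information indices by ascending confidence to obtain the flip order.
    scores = {i: confidence(k, r, alpha) for i, (k, r) in metrics_per_bit.items()}
    return sorted(scores, key=scores.get)

# Toy example with list size L = 2 at three information indices.
metrics = {
    5: ([0.3, 0.9], [4.1, 5.2]),    # confident decision
    6: ([0.8, 1.0], [1.1, 1.3]),    # close competition -> flipped first
    7: ([0.2, 2.5], [3.0, 3.3]),
}
print(gen_flip_set(metrics))        # -> [6, 7, 5]: index 6 is least confident
```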
The more advanced the position of the elements in the bit-flipping set ℱ, the greater the probability that the first information bit will be wrong, so it is necessary to iterate from the first element in ℱ until the last element of ℱ is reached, or the receiving bits is the correct decoding result. In algorithm 4, ℱ[m] means that when an error occurs in the received bit, the current decoding process is restarted, which is equivalent to the position of the information bit that needs to be flipped. In subroutine PAC-SCLF. Only the information bit position that needs to be flipped is shifted <cit.>, and other information bits and frozen bits are used for normal SCL decoding of PAC codes. § SIMULATIONS In this section, simulations are performed to show the effectiveness of the proposed SCL bit-flipping decoding of PAC codes (PAC-SCLF), which compares with traditional SCL decoding of PAC codes (PAC-SCL) <cit.>. We use the code length N which is 128 and the information bits length K which is 64, the generator polynomial of the convolution module is g=x^6+x^4+x^3+x^1+1.The polar codes are constructed by RM code, all the code bits are BPSK (1→-1, 0→1) modulated and transmitted over an AWGN channel (σ=1/√(2R)10^-snr/20). Fig. 3 shows the performance of the traditional SCL decoding of PAC codes (128,64). The construction used in polar codes is known as Reed-Muller (RM) code construction. As the list grows, the performance of decoding improves. Fig. 4 shows a significant improvement in performance of our proposed SCL bit-flipping decoding based on PAC codes (128,64). We set the maximum number of bit-flipping as T=5. Based on Algorithm 4, if the maximum bit-flipping count T is not reached during this process and the decoding result is already correct, it exits directly. Otherwise, the bit-flipping count needs to reach the maximum T in order to consider it as finished. Fig. 5 represents the performance comparison of our proposed decoding scheme PAC-SCLF decoding and the traditional PAC-SCL decoding algorithm for PAC code (128,64) with a list size of 32. Our proposed scheme shows an improvement of approximately 0.3dB compared to the traditional PAC-SCL decoding algorithm. Moreover, the performance is similar to using the traditional PAC-SCL decoding algorithm with a list size of 128. § CONCLUSION This article introduces for the first time the application of bit-flipping strategy in the PAC-SCL decoding process. It is indeed an interesting approach. The relatively low number of flips we set leads to reduced latency in the decoding process. Simulations show that the proposed PAC-SCLF could achieve obvious improvement over the standard PAC-SCL decoding. 1 IEEEtran Ref_1 E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channel”, IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009. Ref_2 I. Tal and A. Vardy, “List Decoding of Polar Codes”, IEEE Trans. Inf. Theory, vol. 61, no. 5, pp. 2213–2226, May 2015. Ref_3 K. Niu and K. Chen, “CRC-Aided Decoding of Polar Codes”, IEEE Commun. Lett., vol. 16, no. 10, pp. 1668–1671, 2012. Ref_4 B. Li, H. Shen, and D. Tse, “An adaptive successive cancellation list decoder for polar codes with cyclic redundancy check, ” IEEE Communications Letters, vol. 16, no. 12, pp. 2044–2047, December 2012. Ref_5 J. Guo, Z. Shi, Z. Liu et al, “Multi-CRC Polar Codes and Their Applications, ”IEEE Communications Letters, vol.20, no.2, pp.212- 215 Ref_6 Y.-Z. Fan, J. Chen, C. -Y. Xia, C. -Y. Tsui, J. 
Jin, H. Shen, and B. Li, “Low-latency list decoding of polar codes with double thresholding, ”IEEE Int. Conf. Acoust, Speech, Signal Process. (ICASSP), Apr. 2015 Ref_7 O. Afisiadis, A. Balatsoukas-Stimming, and A. Burg, “A Low Complexity Improved Successive Cancellation Decoder for Polar Codes ”, in 2014 48th Asilomar Conference on Signals, Systems and Computers, pp. 2116–2120, 2014. Ref_8 S. Li, Y. Deng, X. Gao, H. Li, L. Guo, and Z. Dong, “Generalized Segmented Bit-Flipping Scheme for Successive Cancellation Decoding of Polar Codes With Cyclic Redundancy Check, ”IEEE Access, vol. 7, pp. 83424–83436, 2019. Ref_9 R. Guo, K. Chen, and H. Liu, “Multi-CRC Polar Codes and Muti-SCFlip Based Decoding, ”IEEE Access, vol. 7, pp. 98366–98373, 2019. Ref_10 Yu Y, Pan Z, Nan L “Successive Cancellation List Bit-flip Decoder for Polar Codes ”, 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), 2018. Ref_11 Cheng F, Liu A, Zhang Y “Bit-Flip Algorithm for Successive Cancellation List Decoder of Polar Codes ”, IEEE Access, 2019, PP(99):1-1. Ref_12 E. Arıkan, “From sequential decoding to channel polarization and back again, ”2019, arXiv:1908.09594. Ref_13 Hanwen Yao and Arman Fazeli and Alexander Vardy, “List Decoding of Arıkan’s PAC Codes ”2020 IEEE International Symposium on Information Theory (ISIT), DOI: 10.3390/e23070841 Ref_14 M. C. Co¸skun et al., “Efficient error-correcting codes in the short blocklength regime,” Phys. Commun., vol. 34, pp. 66–79, Jun. 2019. Ref_15 R. Y. Shao, S. Lin, and M. P. C. Fossorier, “Two decoding algorithms for tailbiting codes,”IEEE Trans. Commun., vol. 51, no. 10, pp. 1658–1665, Oct. 2003. Ref_16 M. P. C. Fossorier and S. Lin, “Soft-decision decoding of linear block codes based on ordered statistics, ” IEEE Trans. Inf. Theory, vol. 41, no. 5, pp. 1379–1396, Sep. 1995 Ref_17 T. Tonnellier and W. J. Gross, “On systematic polarization-adjusted convolutional (PAC) codes, ”IEEE Commun. Lett., vol. 25, no. 7, pp. 2128–2132, 2021. Ref_18 L. Chandesris, V. Savin, and D. Declercq, “Dynamic-SCFlip decoding of polar codes, ”IEEE Trans. Commun. vol. 66, no. 6, pp. 2333–2345, Jun. 2018. Ref_19 Wang C H, Pan Y H, Lin Y H, “Post-Processing for CRC-Aided Successive Cancellation List Decoding of Polar Codes, ”IEEE Commun.Lett. 2020, PP(99):1-1. Ref_20 B. Li, H. Shen, and D. Tse, “A RM-polar codes, ”2014, arXiv:1407.5483. Ref_21 Rowshan, Mohammad and Burg, Andreas and Viterbo, Emanuele “Polarization-Adjusted Convolutional (PAC) Codes: Sequential Decoding vs List Decoding, ”IEEE Transactions on Vehicular Technology. 10.1109/TVT.2021.3052550 Ref_22 M. Rowshan and E. Viterbo, “Stepped list decoding for polar codes, ”in Proc. IEEE 10th Int. Symp. Turbo Codes Iterative Inf. Process., Hong Kong, Hong Kong, 2018, pp. 1–5. Ref_23 M. Rowshan and E. Viterbo, “Improved list decoding of polar codes by shifted-pruning, ”in Proc. IEEE Inf. Theory Workshop, Visby, Sweden, Aug. 2019, pp. 1–5. Ref_24 Mohammad Rowshan and Emanuele Viterbo “Shifted Pruning for List Decoding of Polar Codes ”https://doi.org/10.48550/arXiv.2001.10732
http://arxiv.org/abs/2307.04441v1
20230710094718
Randomized Communication and Implicit Representations for Matrices and Graphs of Small Sign-Rank
[ "Nathaniel Harms", "Viktor Zamaraev" ]
cs.CC
[ "cs.CC", "cs.DM", "cs.DS" ]
Automatic diagnosis of knee osteoarthritis severity using Swin transformer Rachid Jennane ========================================================================== We prove a characterization of the structural conditions on matrices of sign-rank 3 and unit disk graphs (UDGs) which permit constant-cost public-coin randomized communication protocols. Therefore, under these conditions, these graphs also admit implicit representations. The sign-rank of a matrix M ∈^N × N is the smallest rank of a matrix R such that M_i,j = (R_i,j) for all i,j ∈ [N]; equivalently, it is the smallest dimension d in which M can be represented as a point-halfspace incidence matrix with halfspaces through the origin, and it is essentially equivalent to the unbounded-error communication complexity. Matrices of sign-rank 3 can achieve the maximum possible bounded-error randomized communication complexity Θ(log N), and meanwhile the existence of implicit representations for graphs of bounded sign-rank (including UDGs, which have sign-rank 4) has been open since at least 2003. We prove that matrices of sign-rank 3, and UDGs, have constant randomized communication complexity if and only if they do not encode arbitrarily large instances of the Greater-Than communication problem, or, equivalently, if they do not contain large half-graphs as semi-induced subgraphs. This also establishes the existence of implicit representations for these graphs under the same conditions. empty empty empty § INTRODUCTION Consider a sign matrix M ∈^N × N. In communication complexity, learning theory, and graph theory, it is often useful to represent M as a point-halfspace incidence matrix of the following form. To each row x ∈ [N], assign a point p_x ∈^d ∖{0}, and to each row y ∈ [N] assign a unit vector h_y ∈^d, such that M(x,y) = (p_x, h_y). In other words, M(x,y) = 1 if and only if the point p_x belongs to the halfspace H_y { p ∈^d | p,h_y≥ 0} whose boundary hyperplane goes through the origin. It is always possible to find such a representation, but, naturally, we wish to accomplish it in the simplest way. Here are two common ways to measure the complexity of this representation: Sign-rank. We might want to minimize the dimension d of the representation. The minimum possible d where M admits such a representation is called the sign-rank of M and denoted _±(M). It is equivalent to the smallest rank d of a matrix R such that M(x,y) = (R(x,y)) for all x,y ∈ [N]. Thinking of the rows of M as a fixed domain , and the columns as a hypothesis class (subsets of ), a standard technique in learning theory is to transform the domain into points in ^d, and the hypothesis class into halfspaces; _±(M) is the smallest dimension such that this transformation is possible. Since halfspaces through the origin in ^d have VC dimension d, sign-rank is lower bounded by the VC dimension of the hypothesis class. In communication complexity, sign-rank is essentially equivalent to the unbounded-error communication complexity of M <cit.>, where the two players have access to private randomness and wish to succeed with probability strictly better than 1/2. A set of matrices has bounded sign-rank if there exists a constant d such that all matrices M ∈ have sign-rank at most d. This is equivalent to having constant unbounded-error communication cost. In graph theory, finding implicit representations (defined below) for graphs whose adjacency matrices have bounded sign-rank is an open problem since at least 2003 <cit.>. Margin. 
We might want to maximize the margin of the representation. For a fixed representation {p_x}_x ∈ [N] and {h_y}_y ∈ [N], we define the margin as min_x,y|p_x,h_y|/p_x·h_y. Write (M) for the maximum m such that there is a representation with margin m; the dimension of this representation is irrelevant. The complexity of various learning algorithms like SVM or perceptron can be bounded in terms of the margin. It is also known that (M) is functionally equivalent to the two-way, public-coin randomized communication complexity (<ref>). A set of matrices has bounded margin if there is some constant m such that all M ∈ have (M) ≥ m, and having bounded margin is equivalent to having constant public-coin randomized communication cost. Therefore, graphs whose adjacency matrices have bounded margin admit implicit representations, due to the observation of <cit.>. One of the main goals in communication complexity is to understand the power of randomness, and both of the above measures of complexity capture a type of randomized communication. A rapidly-growing body of work on constant-cost communication <cit.> studies the properties of matrices with bounded margin or bounded sign-rank, but the relationship between these two measures is not well understood. In one direction, it is believed that there exist sets of matrices with bounded margin but unbounded sign-rank, but all known lower bounds fail to prove this <cit.> (although it was proven for partial matrices <cit.>). In this paper, we are interested in the other direction: For matrices of bounded sign-rank, under what conditions does also have bounded margin?[Note that a matrix having bounded sign-rank and bounded margin does not mean that sign-rank and margin are bounded simultaneously by the same point-halfspace representation.] It is known that some conditions are required. Write (M) for the two-way, public-coin randomized communication cost of a matrix M ∈^N × N (which we will refer to simply as communication cost) and () for the communication cost of matrices M ∈ as a function of their size N (see <ref> for formal definitions). The Greater-Than communication problem, defined by the matrices ∈^N × N where _i,j = 1 if and only if i > j, has sign-rank 2 but communication cost[Standard notation in the literature uses n as the number of bits in the input; we use N for the domain size, so Θ(loglog N) corresponds to the more commonly-stated bound Θ(log n).] () = Θ(loglog N) and therefore unbounded margin. When sign-rank increases to 3, matrices can achieve the maximum possible communication cost () = Θ(log N) <cit.>, far exceeding the complexity of Greater-Than. However, one of our main results is that, for sign-rank 3, Greater-Than is the only barrier to constant-cost communication: A set of matrices with sign-rank 3 has () = O(1) (and therefore constant margin) if and only if it does not contain arbitrarily large instances of Greater-Than. We prove a similar theorem for the adjacency matrices of unit-disk graphs (UDGs), which have sign-rank 4, and these results establish the existence of implicit representations when the condition on the Greater-Than instances is satisfied. We also exhibit a fundamental gap between sign-rank 4 and 5 which shows that the “type” of randomness used in our communication protocols cannot succeed in sign-rank 5 and above. <ref> is a consequence of more general results whose motivation and applications we elaborate upon below. 
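Returning to the Greater-Than matrices mentioned above, their sign-rank bound of 2 can be checked numerically. The sketch below is illustrative: it fixes the ±1 sign convention for the matrix and uses the particular rank-2 certificate R(i,j) = i - j - 1/2, which is a choice made here rather than a construction from the paper.

```python
import numpy as np

N = 64
i = np.arange(1, N + 1)

# Greater-Than in the +/-1 convention: +1 iff i > j, -1 otherwise.
GT = np.where(i[:, None] > i[None, :], 1, -1)

# A rank-2 certificate: R(i, j) = i - j - 1/2 is a sum of two rank-1 matrices,
# and sign(R) reproduces GT entrywise, so sign-rank(GT) <= 2.
R = np.outer(i, np.ones(N)) + np.outer(np.ones(N), -i - 0.5)

print(np.linalg.matrix_rank(R))                    # 2
print(np.array_equal(np.sign(R).astype(int), GT))  # True
```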
§.§ Constant-Cost Communication and Implicit Graph Representations The study of constant-cost randomized communication was initiated independently in <cit.>. One motivation of <cit.> was that constant-cost communication is a special case of a well-studied open problem in structural graph theory and distributed computing, which asks to characterize the hereditary graph classes that admit implicit representations (see <cit.>). *Implicit representations. A class of graphs is a set of (labeled) graphs that is closed under isomorphism. It is hereditary if it is closed under taking induced subgraphs. A hereditary class admits an implicit representation if there exists a decoder D : ^* ×^* → such that, for every N-vertex graph G ∈, each vertex v of G can be assigned an encoding (v) of O(log N) bits, where D((u), (v)) outputs the adjacency of vertices u,v; the decoder D depends on the class but not the specific graph G. Implicit representations were introduced in <cit.>, who observed that they are equivalent to a graph U of size (N), called a universal graph, that contains every N-vertex graph G ∈ as an induced subgraph. Since a graph of size (N) has at most 2^O(N log N) N-vertex induced subgraphs, a necessary condition for the existence of implicit representations is that contains at most 2^O(N log N) N-vertex graphs, in which case is said to have factorial speed. The communication problem defined by any matrix M ∈^N × N is equivalent to the problem of deciding adjacency in the (bipartite) graph whose adjacency matrix is M, where each player is given a vertex. Building on <cit.>, <cit.> observed that constant-cost communication problems are equivalent to hereditary graph classes that admit an adjacency sketch, which is a randomized version of an implicit representation, where the encodings (v) are assigned by a randomized algorithm and have constant size (independent of the number of vertices), in such a way that ∀ u,v : D((u), (v)) correctly outputs adjacency of u,v ≥ 2/3 . Adjacency sketches for trees also appeared earlier in <cit.>. As noted in <cit.>, adjacency sketches can be derandomized (see <ref>) to obtain implicit representations, making constant-cost randomized communication protocols a stronger type of implicit representation. *Unit disk graphs. This again motivates our focus on sign-rank. Graphs whose adjacency matrices have bounded sign-rank are among the most important types of graphs for which implicit representations are not known to exist in general: to obtain implicit representations for geometric intersection graphs (more precisely, semi-algebraic graphs), it suffices to study graphs of bounded sign-rank (see <cit.>). Any class of bounded sign-rank satisfies the necessary condition of factorial speed <cit.>, which was conjectured to be sufficient in <cit.>. Until this conjecture was refuted in <cit.> by a non-constructive argument, classes of bounded sign-rank were considered promising candidates for a counterexample <cit.>. The best known implicit representations for classes of bounded sign-rank in general use O(N^1-ϵ) bits per vertex where ϵ > 0 is a constant <cit.>. A canonical example is the unit disk graphs (UDGs). UDGs admit an “implicit representation” in the sense that each vertex may be encoded with the coordinates of its disk in ^2. 
However, this encoding requires exponentially-many bits <cit.>, and it is a central open problem whether this difficulty can be sidestepped to obtain encodings of size O(log N); our understanding is that this is not widely believed to be possible. In this paper, we resolve the randomized version of the question by giving a complete characterization of the UDGs which admit constant-size adjacency sketches. To state this result, we require the notion of stability (see <cit.>). *Stability. The chain-index (G) of a graph G is the largest k such that there exist disjoint sets of vertices {a_1, …, a_k} and {b_1, …, b_k} where, for any i < j, a_i, b_j are adjacent but b_i, a_j are not. In the terminology of <cit.>, a graph class is graph-theoretically stable if there is a constant k such that (G) ≤ k for all G ∈; we will say simply stable[We use stable in this paper but we note that the disambiguation graph-theoretically stable in <cit.> is necessary to avoid confusion with stability in the literature on model theory.]. The chain-index is essentially[Not exactly: we have no restriction on the adjacency between a_i, b_i, which helps the analysis but is not qualitatively important.] the largest instance of the Greater-Than communication problem that appears in G, and therefore a class that is not stable must have non-constant communication cost (see <cit.> for more on the stability condition in communication). For a graph class , write () for the function N ↦max_G (_G) where G ranges over the N-vertex graphs in and _G is the adjacency matrix of G (if is a class of bipartite graphs, we take the bipartite adjacency matrix). Stability is necessary for () = O(1); for UDGs and graphs of sign-rank 3, we show it is also sufficient: Let be either a subclass of UDGs, or a class of sign-rank at most 3. Then () = O(1) if and only if is stable. As a consequence, stable subclasses of UDGs and graphs of sign-rank 3 admit implicit representations. §.§ Results and Techniques <ref> follows from a more general result that has other implications for implicit graph representations and which unifies and generalizes a number of previous results. We also complement it with an impossibility result that rules out using the type of randomized techniques in this paper to prove similar results in sign-rank 5 and above. Let us now explain these results in more detail and give a brief summary of the techniques. *Constant-cost reductions. We require the notion of constant-cost reductions and the Equality oracle. The Equality communication problem is the standard example of the power of (public-coin) randomized communication. Two players are given inputs x, y ∈ [N], respectively, and they must decide if x = y. By random hashing, this can be done with success probability 3/4 using only 2 bits of communication. The success probability can be improved to any arbitrary constant by increasing the number of bits by a constant factor. One way to design a constant-cost communication protocol is to design a deterministic communication protocol with constant cost, which has access to an oracle that computes Equality. This means that the two players can, at any time, supply the oracle with arbitrary values a,b and receive, at unit cost, the answer to the query “a = b?” The power of the Equality oracle has been studied in several works <cit.>. One may think of these protocols as the ones that can be implemented using standard practical hash functions like SHA256. 
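As a concrete illustration of the random-hashing protocol for Equality described above, the following sketch uses one standard choice of hash, namely parities over shared random subsets of bit positions (the function name and parameters are illustrative only).

import random

def equality_protocol(x, y, n_bits=64, repetitions=2, rng=None):
    # Alice holds x, Bob holds y (integers of at most n_bits bits).
    # Shared randomness: one random subset of bit positions per repetition.
    # Alice sends the parity of x restricted to the subset (1 bit each);
    # Bob accepts iff every parity agrees with the corresponding parity of y.
    rng = rng or random.Random()
    for _ in range(repetitions):
        mask = rng.getrandbits(n_bits)          # the shared random coins
        if bin(x & mask).count("1") % 2 != bin(y & mask).count("1") % 2:
            return False                        # a parity differs: certainly x != y
    return True                                 # equal, or an unlucky collision

# If x == y the protocol always accepts; if x != y, each parity agrees with
# probability 1/2, so two repetitions give success probability 3/4 at a cost
# of 2 bits, matching the discussion above.

Protocols with access to an Equality oracle abstract away exactly this kind of hashing step.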
Constant-cost protocols of this form are examples of constant-cost reductions, a type of reduction that is natural for both constant-cost communication complexity and implicit graph representations; we formally define constant-cost reductions in general in <ref>. Along with the algorithmic definition of reductions to Equality, there is an equivalent structural definition (see <cit.>): if a graph class admits a constant-cost protocol for computing adjacency in graphs G ∈, using Equality oracles, then there exists a constant t such that the adjacency matrix _G of every graph G ∈ (or bipartite adjacency matrix, if is a class of bipartite graphs) can be written as ∀ x,y : _G(x,y) = f(Q_1(x,y), Q_2(x,y), …, Q_t(x,y)) , where f : ^t → and each Q_i is the bipartite adjacency matrix of a bipartite equivalence graph (disjoint union of bicliques). We write ( M ) for the minimum cost of a 2-way deterministic protocol with Equality oracles. For computing adjacency in monotone graph classes (closed under edge & vertex deletions), all constant-cost randomized protocols can be put in this form <cit.>, but in general they cannot <cit.>. <cit.> showed that () = O(1) implies that has bounded sign-rank; our results explore the converse. *Forbidden cycles and subdivided stars. Our <ref> is a consequence of a more general result, <ref> below, which also makes some progress towards characterizing the finitely-defined bipartite graph classes for which constant-cost communication and implicit representations are possible. For any set of bipartite graphs, a class of bipartite graphs is -free if no graph G ∈ contains any H ∈ as an induced subgraph. Every hereditary class of bipartite graphs is -free for some unique but possibly infinite set . For fixed , write _ for the -free bipartite graphs. For a bipartite graph G=(U,W,E) with a fixed bipartition, we write G for the bipartite complement of G, i.e. G=(U,W,(U × W) ∖ E). The condition that is stable is equivalent to the condition that it is H_k-free for some constant k, where H_k denotes the half-graph (see <cit.>), so (_) = O(1) requires that contain some half-graph H_k. When is finite, it is also necessary that contain both a tree and the bipartite complement of a tree, otherwise the number of graphs in _ is too large <cit.>. In the case || = 2, it is therefore necessary for (_) = O(1) that = {H_k, T} where T and its bipartite complement T are both trees; it was proved in <cit.> that this is also sufficient. We believe these conditions remain sufficient for larger (but still finite) , (_) = O(1) whenever = { H_k, T_1, T_2} for some trees T_1 and T_2. When T_1 and T_2 are subdivided stars, our result confirms this. For s,t ∈, we write S_s,t for the subdivided star, which is obtained by taking the star graph with s leaves and subdividing each edge t-1 times. As usual, we denote by C_t the cycle on t vertices. Our main technical result is: theoremthmintromain Let be a stable class of bipartite graphs that satisfies either of these conditions: * There exist constants s, t such that is (S_s,t, S_s, t)-free; or * There exists a constant t such that is { C_t', C_t' |  t' ≥ t and t' is even}-free. Then () = O(1). We use <ref> to prove <ref> by decomposing UDGs or graphs of sign-rank 3 into bipartite graphs that are both (S_3, 3, S_3, 3)-free and { C_t, C_t |  t ≥ 10 and t is even}-free (which, to clarify, is stronger than necessary to apply the theorem). 
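For concreteness, the subdivided stars appearing in the theorem just stated can be built directly from the definition; the short sketch below (an illustrative construction, with vertex numbering chosen here) produces S_s,t as an adjacency list and confirms the vertex count 1 + s·t for the instance S_3,3 used in the application to sign-rank 3 and UDGs.

def subdivided_star(s, t):
    # S_{s,t}: a star with s leaves in which every edge is subdivided t-1 times,
    # so the centre (vertex 0) is joined to each leaf by a path with t edges.
    adj = {0: set()}
    nxt = 1
    for _ in range(s):
        prev = 0
        for _ in range(t):                 # one leg: a path of t edges
            adj.setdefault(nxt, set())
            adj[prev].add(nxt); adj[nxt].add(prev)
            prev = nxt
            nxt += 1
    return adj

# S_{3,3} has 1 + 3*3 = 10 vertices.
print(len(subdivided_star(3, 3)))   # 10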
We remark that the implicit representation implied by <ref> can be efficiently computed, meaning that the labels can be constructed in time (N) and decoded in time log N. This efficiency is inherited by the implicit representations of UDGs and graphs of sign-rank 3, provided that the encoder is given the geometric representation of the input graph. <ref> is much more general, and also allows us to recover several prior results. Analogs of <ref> for the classes of permutation graphs, interval graphs, and P_7-free and S_1,2,3-free bipartite graphs were proved in <cit.>. All of these results, which in <cit.> each required different proof strategies, follow as corollaries of <ref>. Likewise, <cit.> showed the existence of implicit representations for stable, chordal bipartite graphs, which is also implied by <ref>. *Higher sign-ranks and weakly-sparse graphs. To advance beyond sign-rank 3, it is helpful to compare the stability condition with the stronger weakly-sparse condition. A class of graphs is weakly-sparse if there is a constant t such that no graph G ∈ contains K_t,t as a subgraph. Any weakly-sparse class is also stable. It is known and not difficult to prove that any weakly-sparse subclass of UDGs has bounded degeneracy, and therefore the analog of <ref> for weakly-sparse UDGs is trivial (because () = O(1) for any of bounded degeneracy). For weakly-sparse graph classes, we present a proof in <ref> that reductions to Equality are equivalent to bounded degeneracy: theoremthmeqlowerbound Let be a hereditary class of bipartite graphs that is weakly-sparse. Then () = O(1) if and only if has bounded degeneracy. In <cit.>, it is conjectured that the point-line incidence graphs 𝒫ℒ satisfy (𝒫ℒ) = ω(1). <ref> shows the weaker result () = ω(1), because point-line incidences are K_2,2-free and have unbounded degeneracy. They also have sign-rank at most 6, which means that the Equality oracle does not suffice to extend <ref> to sign-rank 6 and above, even if the stability condition is replaced with the much stronger weakly-sparse condition. Combining known results in the literature, we also give in <ref> an example (K_2,2-free point-box incidence graphs) with sign-rank 5 that is K_2,2-free but has unbounded degeneracy, showing in fact that the Equality oracle does not suffice to extend <ref> to sign-rank 5. It may be the case that reductions to Equality are the only type of constant-cost communication possible for matrices of bounded sign-rank, see <ref>. We summarize the known results for low sign-ranks in <ref>. *Proof overview. We briefly summarize the proofs of <ref>. Although UDGs and graphs of sign-rank 3 do not satisfy the conditions of <ref>, we prove that two parties with access to an Equality oracle can agree on a graph decomposition into pieces that avoid edge-asteroid triple structures (used in <cit.>), which guarantees that these pieces satisfy the conditions of <ref>. Our main tool to prove <ref> is the decomposition, which we take from <cit.>. The decomposition partitions a bipartite graph into bags of vertices with a tree-like structure on the bags that controls the edges between the bags. In particular, every root-to-leaf path on the bags induces a path in the original graph. For this reason, the method has previously been used (as in <cit.>) to analyze P_t-free graphs, graphs which forbid long induced paths, where the depth of the decomposition is constant. However, in our case, the depth of the decomposition is unbounded. 
Instead, we show that, under the conditions of <ref>, each bag has edges to only a bounded number of its ancestors. Using this guarantee, we show that a communication protocol on input vertices x,y may use the Equality oracle to either determine the adjacency, or agree on a subset of bags that contains x and y. The protocol may then recurse on these bags, sometimes switching to the bipartite complement of the graph when it does so (this is why we require both S_s, t and S_s, t to be forbidden). Due to arguments of <cit.>, this recursion will reduce the chain-index of the graph and is therefore guaranteed to terminate after a constant number of iterations. §.§ Discussion and Open Problems *Communication complexity. An intriguing possibility arises from this work, in conjunction with other recent work on bounded sign-rank. Adapting (or abusing) some notation of <cit.>, write 𝖴𝖯𝖯[1] for the set of communication problems with bounded sign-rank (constant unbounded-error communication cost <cit.>), write 𝖡𝖯𝖯[1] for the set of communication problems with constant public-coin randomized communication cost, and write [1] for the set of communication problems with a constant-cost reduction to Equality. With these definitions of communication complexity classes, we can ask: Is it the case that [1] = 𝖴𝖯𝖯[1] ∩𝖡𝖯𝖯[1]? A positive answer to this question would “explain” all of the known results and conjectures relating these classes. It is proved in <cit.> that [1] ⊆𝖴𝖯𝖯[1] ∩𝖡𝖯𝖯[1]. In the other direction, there are communication problems in 𝖡𝖯𝖯[1] that do not belong to [1], which was proved independently in <cit.> and <cit.>, but the example in both cases, the 1-Hamming Distance problem (adjacency in the hypercube), is believed not to belong to 𝖴𝖯𝖯[1] <cit.>, which is implied by a positive answer to <ref>. In <ref>, we give two explicit examples (K_2,2-free point-box incidences, and point-line incidences) in 𝖴𝖯𝖯[1] that do not belong to [1], which could possibly provide a negative answer to <ref> if they belong to 𝖡𝖯𝖯[1], but point-line incidences are conjectured not to belong to 𝖡𝖯𝖯[1] in <cit.>. On the other hand, a negative answer to <ref> seems to require a substantially different type of randomized protocol than the ones which have so far been discovered[By this we mean that it seems unlikely to us that a negative answer to the question would be achieved by a reduction to any currently-known constant-cost problem, most of which can be found in <cit.>.], and would therefore be very interesting. *Implicit representations. An obvious question is whether the stability condition in our positive result for implicit representations can be dropped. This cannot be accomplished by reductions to Equality, for which stability is necessary. We have shown that the Greater-Than problem is the only barrier to constant-cost communication, so one idea for generalizing our result is to allow the more powerful Greater-Than oracles in the communication protocol. Constant-cost reductions to Greater-Than are equally good for the purpose of finding implicit representations (we may think of some standard implicit representations, like for interval graphs <cit.> and point-box incidences <cit.>, as protocols of this form). But this cannot succeed: a constant-cost reduction to Greater-Than for graphs of sign-rank 3 would imply () = Θ(loglog N) which contradicts the known bound of Θ(log N) <cit.>. 
This answers an open question asked in independent and concurrent work <cit.>, namely whether (in our terminology) reductions to Greater-Than suffice to obtain implicit representations for geometric intersection graphs with small sign-rank realized by integer coordinates[The bounds in <cit.> hold for constructions with integer coordinates.]. This at least demonstrates that communication complexity lower bounds can be used against certain natural types of implicit representation, although it remains open how to prove any explicit, non-trivial lower bounds for implicit representations. § PRELIMINARIES Let us define some notation and formalize the notions we have discussed in the introduction. We intend this paper to be accessible to readers in graph theory or communication complexity who may not have a background in both, so we make an attempt to make the terminology explicit. We will also define a general notion of constant-cost reductions which has not yet appeared explicitly in the literature. §.§ Notation For a matrix M ∈{± 1}^X × Y, row x ∈ X, and column y ∈ Y, we will write either M_x,y or M(x,y) for the entry at x and y. For a graph G, we write G for the complement of G. For a bipartite graph G = (X,Y,E) with a fixed bipartition, write G for the bipartite complement, which has edge xy if and only if xy is not an edge of G. The adjacency matrix of a graph G = (V,E) is the matrix _G ∈{± 1}^V × V with _G(x,y) = 1 if and only if xy ∈ E. For a bipartite graph G = (X,Y,E) with a fixed bipartition, the bipartite adjacency matrix is the matrix _G ∈{± 1}^X × Y with _G(x,y) = 1 iff xy ∈ E, where we note that the rows are indexed by X instead of the full set of vertices X ∪ Y (and similar for the columns). For a graph G and disjoint sets X,Y ⊆ V(G), we will write G[X,Y] for the semi-induced bipartite subgraph, which is the bipartite graph G[X,Y] = (X,Y,E) defined by putting an edge between x ∈ X and y ∈ Y if and only if xy are adjacent in G. (In particular, any edges within X or Y in G are not present in G[X,Y].) §.§ Sign-Rank For a matrix M ∈{± 1}^N × N, the sign-rank of M is denoted _±(M) and it is the minimum d ∈ℕ such that there exists a matrix R ∈ℝ^N × N of rank d with M = sign(R), where sign(R) ∈{± 1}^N × N is the matrix with entries ∀ i,j ∈ [N] : sign(R)_i,j = sign(R_i,j). Equivalently, _±(M) is the minimum d such that each row i ∈ [N] may be associated with a unit vector p_i ∈ℝ^d (which we think of as a point) and each column j ∈ [N] may be associated with a unit vector h_j ∈ℝ^d (which we think of as the normal vector for a halfspace), such that M_i,j = sign(⟨ p_i, h_j⟩). In this way, the sign-rank of M is equivalent to the minimum dimension d such that M is the incidence matrix between a set of points X and a set of halfspaces Y, where the hyperplane boundaries of the halfspaces contain the origin. We require a notion of sign-rank for graphs, which we will define separately for bipartite graphs with a fixed bipartition, and for general graphs. For a bipartite graph G = (X,Y,E) with a fixed bipartition, its sign-rank _±(G) is defined as the sign-rank _±(_G) of its bipartite adjacency matrix _G ∈{± 1}^X × Y. For a general graph G = (V,E), we define its partial adjacency matrix _G^* ∈{± 1, ⋆}^V × V by _G^*(x,y) = ⋆ if x = y, +1 if xy ∈ E, and -1 otherwise. We then define the sign-rank _±(G) as the minimum rank of a matrix R such that ∀ i ≠ j : sign(R_i,j) = _G^*(i,j). Specifically, we do not make any requirement on the diagonal entries.
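As a small sanity check of these definitions (illustrative only; the helper name, the use of numpy, and the example graph are choices made here), the following sketch verifies that a candidate matrix R witnesses sign-rank at most d for a graph, ignoring the diagonal as allowed above.

import numpy as np

def realizes_graph_sign_rank(adj, R, d):
    # adj: +/-1 adjacency matrix (+1 for edges, -1 for non-edges).
    # Accept R as a witness if rank(R) <= d and the signs of R agree
    # with adj on all off-diagonal entries.
    n = adj.shape[0]
    if np.linalg.matrix_rank(R) > d:
        return False
    off_diag = ~np.eye(n, dtype=bool)
    return np.array_equal(np.sign(R)[off_diag], adj[off_diag])

# Example: the 4-cycle C_4 on vertices 0-1-2-3-0.
adj = -np.ones((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    adj[u, v] = adj[v, u] = 1

# Place the vertices at angles 0, 90, 180, 270 degrees on the unit circle;
# then <v_u, v_v> is 0 for adjacent pairs and -1 for the two diagonals, so
# R = V V^T + 1/2 is positive exactly on edges (and the diagonal) and has rank <= 3.
V = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
R = V @ V.T + 0.5
print(realizes_graph_sign_rank(adj, R, d=3))   # True: this R witnesses sign-rank <= 3 for C_4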
§.§ Communication Complexity and Margin For a matrix M ∈^N × N, we will write (M) for the public-coin randomized communication complexity of M, with success probability 2/3. In this model, Alice receives a row x ∈ [N] and Bob receives a column y ∈ [N] and they must output M(x,y). They are given shared access to a string of random bits, and they take turns sending messages that depend on their respective inputs and the random string. They must output the correct answer with probability at least 2/3 over the random string, and the complexity of a protocol is the total number of bits communicated between the players on the worst-case inputs x,y. (M) is the minimum complexity of any such protocol computing M. See <cit.>. The standard notion of a (total, Boolean-valued) communication problem is a sequence = (P_N)_N ∈ of matrices, where P_N ∈^N × N, and the complexity of the problem, denoted (), is the function N ↦(P_N). However, we are interested in the complexity of classes of matrices (specifically adjacency matrices of graphs belonging to some graph class), not merely sequences of matrices, where there is a variety of N × N matrices instead of just one. So we define communication problems more generally, as in <cit.>. A communication problem is a set = ⋃_N ∈_N of Boolean matrices, where _N is a finite set of matrices in ^N × N. We then define the communication complexity () as the function N ↦max_P ∈_N(P) . For a class of graphs, we write _ for the communication problem that is the set of adjacency matrices of graphs in . If is a class of bipartite graphs, we take the bipartite adjacency matrices. We abuse notation and write () = (_), so that () is the function N ↦max{ R(_G) | G ∈ has N vertices } . Communication complexity is always upper bounded by the number of bits n in the input, or in our notation, by ⌈log N ⌉. We are interested in determining which communication problems have constant cost, which means that there exists a constant c such that (M) ≤ c for all M ∈. One way to rule out a constant-cost protocol for a problem is if the Greater-Than communication problem appears as a subproblem of . Formally, this is captured by the stability condition (see <cit.>): Let be any graph class which is not stable. Then () = ω(1). As mentioned in the introduction, having constant communication cost is equivalent to having constant margin, due to the following inequality, which follows from results of <cit.>: Let M ∈^N × N. Then Ω(log1/(M)) ≤(M) ≤ O(1/(M)^2) . §.§ Constant-Cost Communication Reductions and Equality One way to obtain constant-cost protocols is by reduction to the Equality problem, for which we require the definitions of the Equality problem and a notion of reduction. The Equality communication problem is the set { I_N × N : N ∈} where I_N × N denotes the N × N identity matrix. In other words, for input size N, Alice and Bob receive elements x,y ∈ [N] and wish to decide whether x = y. It is well-known that () = 2. Constant-cost communication reductions, specifically to the Equality problem, have been used implicitly in several prior works. Here we choose to explicitly define constant-cost reductions in general[This general definition of constant-cost reductions has arisen out discussions with several other researchers.]. For this, we require the notion of a query set. A query set is a set of matrices that is closed under the following operations: * For every Q ∈ and any Q' obtained by row and column permutations of Q, Q' ∈. * For every Q ∈, if Q' is any submatrix of Q then Q' ∈. 
* For every Q ∈, if Q' is obtained by duplicating a row or a column of Q, then Q' ∈. For a set of matrices, we define () to be the closure of under these operations. In the communication complexity literature, () was recently named the set of blocky matrices <cit.>. In graph theory, () are the adjacency matrices of disjoint unions of bicliques, also called bipartite equivalence graphs. It is easily verified that for any constant c, if () ≤ c then (()) ≤ c. However, we caution that (()) ≤() does not hold for non-constant complexities, because () includes all submatrices of and (·) takes the maximum complexity over all size-N matrices (see <cit.> for examples). We now give two equivalent definitions for reductions between problems; one algorithmic and one structural. Let be a communication problem and let P ∈^N × N. A deterministic protocol computing P with oracles is a rooted binary tree T where each leaf ℓ is assigned a value b(ℓ) ∈ and inner node v is assigned an N × N matrix Q_v ∈(), with the following conditions. On each pair of inputs x,y ∈ [N] the protocol begins at the root node v of T. At each node v, if Q_v(x,y) = -1 then the protocol proceeds by advancing the current node v to its left child, and if Q_v(x,y) = 1 then the protocol proceeds by advancing the current node v to its right child, until v becomes a leaf, at which point the protocol outputs b(v). It is required that b(v) = P_x,y for all inputs x,y. The cost of the protocol is the depth of the tree. We write ^(P) for the minimum cost of a protocol which computes P with oracles. For a communication problem , we write ^() for the function N ↦max_P ∈_N^(P). In other words, a communication protocol with oracles is a deterministic protocol where in each round, Alice and Bob transform their inputs x,y into inputs to a problem in and receive the answer from an oracle computing at unit cost. Observe that, as long as is non-trivial (does not contain only all-1 and all-(-1) matrices), the definition of () allows any single round of deterministic communication to be simulated by an oracle, so without loss of generality we may assume that every inner node of the protocol is an oracle call. If there is a constant c such that ^() ≤ c, then we say that constant-cost reduces (or just reduces) to . The following proposition is easily obtained by standard error-boosting techniques: Suppose () = O(1) and reduces to . Then () = O(1). In particular, if reduces to then () = O(1). The second, structural definition of reduction is as follows. We say reduces to if there exists a constant t such that, for every A ∈, there exists: * a function f : ^t →; and * matrices Q_1, …, Q_t ∈(), such that A = f(Q_1, …, Q_t), meaning that A(i,j) = f(Q_1(i,j), Q_2(i,j), …, Q_t(i,j)) for all i,j ∈ [N]. In the special case when is the set of identity matrices, this definition appeared independently in <cit.> and subsequently in <cit.>, and the minimum t such that the above conditions hold is a “functional” analog of rank, recently called the functional blocky-rank in <cit.>. It is not difficult to show that this structural definition of constant-cost reductions is equivalent to the algorithmic one. One may easily derive a constant-cost protocol with oracles Q_i from the structural definition, and in the other direction one may simply let the set of matrices Q_i be the inner nodes of the communication protocol and define f as the function that simulates the protocol on these queries. 
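To fix ideas about how the algorithmic and structural definitions line up, here is a toy instance of the structural form (the graph, the labellings, and the combining function are chosen here purely for illustration): t = 2, each Q_i is given by a labelling into equivalence classes, so each Q_i is the bipartite adjacency matrix of a disjoint union of bicliques, and f is OR, making G the union of two bipartite equivalence graphs.

a1 = {"x1": 0, "x2": 0, "x3": 1}    # Alice-side classes for Q_1
b1 = {"y1": 0, "y2": 1, "y3": 2}    # Bob-side classes for Q_1
a2 = {"x1": 5, "x2": 6, "x3": 6}    # Alice-side classes for Q_2
b2 = {"y1": 6, "y2": 5, "y3": 6}    # Bob-side classes for Q_2

def adjacent(x, y):
    q1 = (a1[x] == b1[y])           # one Equality-oracle query
    q2 = (a2[x] == b2[y])           # a second Equality-oracle query
    return q1 or q2                 # the combining function f = OR

# Adjacency is decided with two Equality queries, independently of the
# number of vertices, exactly as in the algorithmic definition above.
print(sorted((x, y) for x in a1 for y in b1 if adjacent(x, y)))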
In the structural definition it is not hard to see an analog of <ref> for implicit representations. A similar[There are some technicalities involved in translating between the two.] notion of reductions for implicit representations appeared independently and concurrently in <cit.>, which included reductions to Equality and Greater-Than as parts of a complexity hierarchy of implicit representations. Suppose is the set of adjacency matrices for a hereditary graph class that admits an implicit representation, and suppose is the set of adjacency matrices for a hereditary graph class . If reduces to then admits an implicit representation. §.§ From Communication Protocols to Implicit Representations An observation of <cit.> is that any hereditary graph class for which () = O(1) must also have an implicit representation (and any constant-cost communication problem may be transformed into a hereditary graph class). Therefore, as argued in <cit.>, constant-cost communication is essentially the probabilistic version of implicit representations. We will present our proofs as upper bounds on communication complexity, which imply implicit representations. The general correspondence between constant-cost communication and implicit representations is non-constructive (by the probabilistic method), but for the sake of clarity and completeness, we briefly describe how to directly translate a communication protocol that uses Equality oracles (as ours will do) into an implicit representation. Recall that, for a graph G = (V,E), if (G) ≤ c then there exists a binary communication tree of depth c with each inner node v assigned to a matrix Q_v ∈(), which means that Q_v is the adjacency matrix of a bipartite equivalence graph. In other words, there are functions a_v, b_v : V → [N] such that Q_v(x,y) = 1 if a_v(x) = b_v(y) 0 otherwise. To obtain an implicit representation, we need to define a decoder D and encodings (·) for each graph G ∈. We define (x) for each x ∈ V by writing down the values a_v(x), b_v(y) for each inner node v of the tree, together with the output values at the leaves of the tree. Each value a_v(x) and b_v(x) requires at most ⌈log N ⌉ bits, and there are at most 2^c nodes in the tree, which is constant, so the size of the encoding is O(log N). The decoder D, on inputs (x) and (y) for x,y ∈ V, may use the values of a_v(x) and b_v(y) for each node v, together with the outputs on the leaves, to simulate the communication protocol. § COMMUNICATION BOUNDS FOR EXCLUDED CYCLES AND SUBDIVIDED STARS Our results for unit disk graphs and matrices of sign-rank 3 will follow from a more general result on bipartite graphs excluding either long cycles or subdivided stars, which we prove in this section. Recall the definition of the subdivided star S_s, t, <ref>. * Our main tool will be the decomposition, which we borrow from <cit.>, defined below. §.§ Decomposition: Definition, Existence, and Properties The following definition of the decomposition is taken from <cit.>. We will only apply the decomposition to bipartite graphs in this paper, so we state the special case of the decomposition for bipartite graphs. See <ref> for an illustration. A decomposition of a connected bipartite graph G is a rooted tree Y satisfying the following properties: * Each node of Y is a subset of V(G), called a bag, and the nodes of Y form a partition of V(G). For each vertex v ∈ V(G), write _Y(v) for the unique bag in Y that contains v. We will drop the subscript Y when the decomposition is clear from context. 
* The root bag of Y is a singleton containing the root vertex. * If u,v ∈ V(G) are adjacent then (u) is an ancestor of (v) or vice-versa. * For every bag B of Y, the subgraph of G induced by B together with all of its descendents is connected. * For every non-root bag B of Y, there exists a vertex h(B), called the hook of B, which belongs to the parent bag of B and has the property that h(B) is adjacent to all vertices of B and non-adjacent to all vertices in the strict descendents of B. For each bag B, we write (B) for the length of the path from the root bag to B in Y (where the depth of the root bag is 0). For each ℓ∈, we say that level ℓ of Y is the set of all bags B with (B) = ℓ. A decomposition for a disconnected bipartite graph G is the union of decompositions for its connected components. There is a simple algorithmic proof that such decompositions always exist <cit.>. For every connected bipartite graph G and vertex r ∈ V(G), there exists a decomposition of G with root vertex r. Given G and r, this decomposition can be computed in polynomial time. A path P = (v_0, v_1, v_2, …, v_k) in G is a hook path (with respect to Y) if v_i is the hook of (v_i-1) for every i ∈ [k]. Observe that any hook path with respect to a decomposition is an induced path. decompositions are typically used in the case where some induced path P_t is forbidden, in which case the depth of the decomposition is bounded. In our case, we will not necessarily have a forbidden P_t or bounded depth of the decomposition, but we will see that the decomposition has a different structure that will permit efficient communication protocols. For this we define the notion of back degree. Given a decomposition Y of G. We say that a bag B of Y has an edge to another bag B' in Y if there exist a vertex in B and a vertex in B' that are adjacent. The back-degree of a bag B in Y is the number of ancestor bags of B to which B has an edge. The maximum back-degree of Y is the maximum back-degree of any of its bags. Note that decomposition of a P_t-free graph has depth at most t, and therefore the maximum back-degree of the decomposition is also bounded by t. In the next two sections we show that if a graph has bounded chain-index and either * does not contain long induced cycles (<ref>), or * does not contain a fixed subdivision of a star (<ref>), then its decompositions have bounded maximum back-degree. In <ref>, we give a general communication protocol for decompositions with bounded maximum back-degree. Before proceeding <ref> and <ref>, we introduce some notation and properties of the interactions between bags in decompositions that are used in both sections. Let Y be a decomposition of a bipartite graph G = (X,Y,E), and let B be a bag of Y with (B) > 0. Write h for the hook of B. Let A_1, A_2, …, A_r be some ancestors of B, excluding the immediate parent of B, to which B has an edge. Then the following properties are easy to verify: Let s ∈ and suppose that (A_1) < (A_2) < … < (A_r) and (A_i+1) - (A_i) ≥ s for all i ∈ [r-1]. For i ∈ [r], we define h_i,1 to be the hook of A_i, and for z ∈ [s-1], inductively define h_i,z as the hook of (h_i,z-1). For each i ∈ [r], let a_i ∈ A_i be a neighbour of some b_i ∈ B. Then the following properties hold: * The hook h of B is adjacent to each b_i. For each i ∈ [r] and z ≥ 1, a_i is not adjacent to h, because they are on the same side of the bipartition of G, and h_i,z is not adjacent to h, because h_i,z is a hook in an ancestor bag of (h) that is not the parent of (h). 
* For each i,j ∈ [r], a_i is not adjacent to a_j, because they are on the same side of the bipartition of G. * For each 1 ≤ i < j ≤ r and each z ≥ 1, h_i,z is not adjacent to a_j, because h_i,z is a hook that is not in the parent bag of A_j. * For each i ∈ [r] and each z ≥ 2, we have h_i,z not adjacent to a_i because h_i,z is a hook that is not in the parent bag of (a_i). * For each i,j ∈ [r], and z ≥ 1, h_i,z is not adjacent to b_j because h_i,z is a hook that is not in the parent bag of B. §.§ Excluding Long Cycles For any t, k ∈, there exists a constant ℓ such that the following holds. Let G = (X,Y,E) be any (C_t, C_t+1, C_t+2, …)-free bipartite graph with (G) < k. Let Y be a decomposition of G. Then Y has maximum back-degree at most ℓ. Without loss of generality we assume that t ≥ 4. Let R be the Ramsey number that guarantees that a complete graph on R vertices with edges colored by 2^t-4 colors has a monochromatic clique of size r max{ 2, k }. Let ℓ = (t-3) · R and let B be a bag of Y. If B has depth at most ℓ in Y the result holds trivially, so we will assume that B has depth greater than ℓ. Let A”_1, A”_2, …, A”_m be the ancestors of B, excluding the immediate parent of B, to which B has an edge. For each i ∈ [m], let a”_i ∈ A”_i be a neighbour of some b”_i ∈ B. Assume for the sake of contradiction that B has edges to more than ℓ ancestors, so m ≥ℓ. Then there is a subsequence of ancestor bags A'_1, …, A'_R such that (A'_i+1) - (A'_i) ≥ t-3, for each i ∈ [R-1], so that there are at least t-4 levels of the decomposition separating each bag in this subsequence. We will write a'_i a”_i^* and b'_i b”_i^*, where i^* is the index of the bag satisfying A”_i^* = A'_i. For each A'_i, let h'_i,1 be the hook of A'_i, and for 1 < z ≤ t-3, inductively define h'_i,z as the hook of (h'_i,z-1). For each pair { A'_i, A'_j } with i < j we assign a color 𝖼𝗈𝗅{A'_i, A'_j}∈^t-4 as follows: the z^th bit is 1 if and only if a_i' is adjacent to h_j,z', for z ∈ [t-4]. By Ramsey's theorem, we may now choose a subsequence A_1, …, A_r of ancestor bags, where for each i ∈ [r] there is a corresponding i^* ∈ [R] such that A_i = A'_i^*, and each pair {A_i, A_j} with 1 ≤ i < j ≤ r has the same color. We will now obtain a contradiction for each possibility of this color. We will write a_i a'_i^*, b_i b'_i^*, and h_i,z h'_i^*, z, for each i ∈ [r] and z ∈ [t-3], and use the notation and the properties from <ref>. Case 1: There is z ∈ [t-4] such that the z^th bit of the color is 1. Consider the subgraph H induced by the vertices {a_1, a_2, …, a_r}∪{ h_1,z, h_2,z, …, h_r,z}. For 1 ≤ i < j ≤ r, we have a_i adjacent to h_j,z due to the color, and h_i,z is not adjacent to a_j due to Property <ref>. Thus, by definition, (G) ≥(H) = r ≥ k, a contradiction. Case 2: All bits of the color are 0. Consider the hook path P from a_2 to h_1,1. Let v be the first (i.e. closest to a_2) vertex on P that is adjacent to a_1. Such a vertex exists because a_1 is adjacent to the last vertex h_1,1 of the path. Let P' be the subpath of P from a_2 to v. By Property <ref> and the color assumption, P' contains the first t-2 vertices of P: a_2, h_2,1, h_2,2, …, h_2,t-4, h_2,t-3. Now, if b_2 is adjacent to a_1, then b_2,P',a_1,b_2 is an induced cycle of length at least t. Similarly, if b_1 is adjacent to a_2, then b_1, P', a_1, b_1 is such a cycle. Finally, if neither b_2 is adjacent to a_1, nor b_1 is adjacent to a_2, then b_1 ≠ b_2 and, by Properties <ref> and <ref>, h,b_2,P',a_1,b_1,h is a forbidden induced cycle. 
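The back-degree bound established in the lemma above (and in the next subsection) is the quantity that the later communication protocol relies on. As a small illustration of the definition, the following sketch computes back-degrees when a decomposition is given explicitly; the data layout and names are chosen here for illustration and are not notation from the text.

def back_degrees(bags, parent, edges):
    # bags:   list of vertex sets (bags), indexed by bag id
    # parent: parent[b] is the id of the parent bag of b (None for the root)
    # edges:  set of frozensets {u, v}, the edge set of the graph
    # The back-degree of a bag B is the number of ancestor bags of B
    # to which B has at least one edge.
    def has_edge(b1, b2):
        return any(frozenset((u, v)) in edges for u in bags[b1] for v in bags[b2])

    result = []
    for b in range(len(bags)):
        count, anc = 0, parent[b]
        while anc is not None:
            if has_edge(b, anc):
                count += 1
            anc = parent[anc]
        result.append(count)
    return result

# Example: the path 0-1-2-3, decomposed from root vertex 0 with one bag per vertex.
bags = [{0}, {1}, {2}, {3}]
parent = [None, 0, 1, 2]
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
print(back_degrees(bags, parent, edges))   # [0, 1, 1, 1]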
§.§ Excluding Subdivisions of Stars For any s, t, k ∈, there exists a constant ℓ such that the following holds. Let G = (X,Y,E) be any bipartite graph with (G) < k that does not contain S_s, t as an induced subgraph. Let Y be a decomposition of G. Then Y has maximum back-degree at most ℓ. Let R be the Ramsey number that guarantees that a complete graph on R vertices with edges colored by 2^3+(t-1) colors has a monochromatic clique of size r max{ s, k }. Let ℓ = t · R + 1 and let B be a bag of Y. If B has depth at most ℓ in Y the result holds trivially, so we will assume that B has depth greater than ℓ. Let A”_1, A”_2, …, A”_m be the ancestors of B, excluding the immediate parent of B, to which B has an edge, meaning that for each i ∈ [m], there exists a vertex b”_i ∈ B with an edge to a vertex a”_i ∈ A”_i. Assume for the sake of contradiction that B has edges to more than ℓ ancestors, so m ≥ℓ. Then there is a subsequence of ancestor bags A'_1, …, A'_R such that (A'_1) ≥ t and (A'_i+1) - (A'_i) ≥ t, for each i ∈ [R-1]; in particular there are at least t-1 levels of the Gyárfás decomposition separating each bag in this subsequence. We will write a'_i a”_i^* and b'_i b”_i^*, where i^* is the index of the bag satisfying A”_i^* = A'_i. For each A'_i, let h'_i,1 be the hook of A'_i, and for 1 < z ≤ t-1, inductively define h'_i,z as the hook of (h'_i,z-1). For each pair { A'_i, A'_j } with i < j we assign a color 𝖼𝗈𝗅{A'_i, A'_j}∈^3+(t-1) as follows: * The first bit indicates whether b'_i = b'_j (i.e. set the bit to 1 if b'_i = b'_j and 0 otherwise). * The second bit indicates whether b'_i is adjacent to a'_j. * The third bit indicates whether b'_j is adjacent to a'_i. * The remaining t bits indicates whether a'_i is adjacent to h'_j,z, for z ∈ [t-1]. By Ramsey's theorem, we may now choose a subsequence A_1, …, A_r of ancestor bags, where for each i ∈ [r] there is a corresponding i^* ∈ [R] such that A_i = A'_i^*, and each pair {A_i, A_j} with i < j has the same color. We will now obtain a contradiction for each possibility of this color. We will write a_i a'_i^*, b_i b'_i^*, and h_i,z h'_i^*, z, for each i ∈ [r] and z ∈ [t-1], and use the notation and the properties from <ref>. Case 1: There is z ∈ [t-1] such that the (3+z)^th bit of the color is 1. The argument is exactly as in Case 1 of <ref>. Consider the subgraph H induced by the vertices {a_1, a_2, …, a_r}∪{ h_1,z, h_2,z, …, h_r,z}. For 1 ≤ i < j ≤ r, we have a_i adjacent to h_j,z due to the color, and h_i,z is not adjacent to a_j due to Property <ref>. Thus, by definition, (G) ≥(H) = r ≥ k, a contradiction. Case 2: The first bit or second bit of the color is 1, and the (3+z)^th color is 0 for all z ∈ [t-1]. Consider the subgraph induced by the vertices {b_1}∪⋃_i=1^s{a_i, h_i,1, h_i,2, …, h_i,t-1} . Since each A_i is separated by at least t-1 levels of Y, each of the above named vertices are distinct. If the first bit of the color is 1, then we have b_1 adjacent to each a_i by definition, since b_1 = b_2 = … = b_s. If the first bit of the color is 0 but the second bit of the color is 1, then we have b_1 adjacent to each a_i because of the color. For each 1 ≤ i < j ≤ s and z ∈ [t-1], we have a_i not adjacent to h_j,z because the associated bit of the color is set to 0, and we have a_j not adjacent to h_i,z by Property <ref>. For z ≥ 2, we have a_i not adjacent to h_i,z by Property <ref>. We have a_i not adjacent to a_j by Property <ref>. And we have b_1 not adjacent to h_i,z by Property <ref>. 
But we have b_1 adjacent to each a_i, as well as edges a_ih_i,1 and h_i,1h_i,2, h_i,2h_i,3, …, h_i,z-1h_z by definition. So the subgraph induced by the considered vertices is S_s, t, which is a contradiction. Case 3: The first two bits of the color are 0, the third bit is 1, and the (3+z)^th bit is 0 for each z ∈ [t-1]. Consider the subgraph H induced by the vertices {b_1, …, b_k}∪{ a_1, …, a_k }. For each i < j, a_i is adjacent to b_j due to the third bit of the color, but b_i not adjacent to a_j due to the second bit of the color. Then we have (G) ≥(H) = r ≥ k, a contradiction. Case 4: All bits of the color are 0. Consider the subgraph induced by the vertices { h }∪⋃_i=1^s { b_i, a_i, h_i,1, …, h_i,t-2} . Since each bag A_i in the sequence is separated by at least t-1 levels of Y and the first bit of the color is 0, each of the named vertices above is distinct. By Property <ref>, h is adjacent to none of the vertices a_i, h_i,1, …, h_i,t-2 for every i ∈ [s]. For each i < j, we have b_i not adjacent to a_j and a_i not adjacent to b_j due to the color. For each z ∈ [t-1], we have h_i,z not adjacent to b_i or b_j due to Property <ref>; and we have h_i,z not adjacent to a_j due to Property <ref> and a_i not adjacent to h_j,z due to the color. For z ≥ 2 we have h_i,z not adjacent to a_i due to Property <ref>. On the other hand, we have edges hb_i for each i ∈ [s] by definition, along with edges b_ia_i, a_ih_i,1, and h_i,1h_i,2, …, h_i,t-3h_i,t-2. Therefore the induced subgraph is S_s, t, which is a contradiction. §.§ A Communication Protocol for the Gyárfás Decomposition Let G be a connected bipartite graph and let Y be a decomposition of G. For any bag B of Y with depth d = (B), let G_B denote the subgraph of G induced by B together with all of the descendent bags of B in Y with depth d' ≢d 2. We require the next two lemmas of <cit.>. Let B be a bag of Y with (B) = d for any d ≥ 2. Then (G_B) < (G). Let B be a bag of Y with (B) = 1. Let C be a connected component of G_B and Y_C be a decomposition of C rooted at a vertex r_C ∈ V(C) ∩ B. Let B' be a bag of Y_C with (B') = d ≥ 1. Then (C_B') < (G). We will also require the following easy fact. Let be the class of bipartite graphs G with (G) = 1. Then () ≤ 2. Note that the chain-index of P_4, the 4-vertex path, is 2. Thus each G ∈ is P_4-free and therefore is a disjoint union of bicliques, an equivalence graph. Therefore Alice and Bob may compute adjacency in G by using 1 bit of communication to ensure that their input vertices x and y are on opposite sides of the bipartition, and using 1 call to the oracle to check if x,y are in the same biclique. Our first main result, <ref>, follows from the next lemma, applied together with <ref>. Let be a hereditary class of bipartite graphs that is closed under bipartite complementation, and which satisfies the following conditions: * There exists a constant k such that () ≤ k. * There exists a constant ℓ such that for any G = (X,Y,E) ∈, any decomposition of G has back-degree bounded by ℓ. Then there exists a constant c such that () ≤ c. We prove the theorem by induction on k. The base case k = 1 is established in <ref>. Let x,y ∈ V(G) be Alice's and Bob's inputs, respectively. We may assume without loss of generality that G is connected and that x and y are in opposite parts of the bipartition of G, since Alice and Bob may use one oracle call to check whether their inputs x and y are in the same connected component, and use 1 bit of communication to determine whether x and y are in opposite parts. 
Let Y be a decomposition of G. The communication protocol proceeds as follows. We will assume that the root vertex of Y is on the left side of the bipartition of G, and that Alice's input x is on the left side and Bob's input y is on the right side of the bipartition. * Using 1 bit of communication, Alice tells Bob whether x is the root vertex of Y. If so, Bob outputs 1 if y has depth 1 in Y and the protocol terminates. The protocol is correct in this case, since by <ref>, all vertices at depth 1 are adjacent to the root vertex. * Using 1 bit of communication, Bob tells Alice whether y has depth 1 in Y. If so, they perform the following: * Using 1 call to the oracle, Alice and Bob decide if (x) is a descendent of (y). This is possible because Alice and Bob each know the set of level 1 bags of Y. If (x) is not a descendent of (y), they output 0 and the protocol terminates. The protocol is correct in this case, since by <ref>, if x and y are adjacent then (x) must be the descendent of (y) or vice versa. * Alice and Bob now agree on B = (y), so they each compute the connected components C_1, …, C_m of G_B and agree on decompositions Y_1, …, Y_m of these components, respectively, where the root vertex of each decomposition is on the right side of the bipartition. Using 1 call to the oracle, they decide if x,y belong to the same connected component of G_B. If not, they output 1 and terminate the protocol. The protocol is correct in this case by definition. * Let i be the index of the component C_i of G_B containing both x and y. Using 1 bit of communication, Bob tells Alice whether y is the root vertex of Y_i. If so, Alice outputs 0 if x has depth 1 in Y_i and the protocol terminates. The protocol is correct in this case, since by <ref> all vertices of depth 1 in Y_i are adjacent to the root vertex y in G_B (and therefore non-adjacent in G). * By the assumption of bounded back-degree, _Y_i(x) has edges to at most ℓ of its ancestors in Y_i. Call these ancestors A_1, …, A_ℓ' where ℓ' ≤ℓ. Using ℓ calls to the oracle, Alice and Bob determine whether A_j = B' for some j ≤ℓ', where B' _Y_i(y). * If A_j = B' then Alice and Bob inductively compute adjacency in the graph (C_i)_B', which is the bipartite complement of a graph in and therefore is contained in , and which by <ref> satisfies ((C_i)_B') < (G). They then output the opposite value and terminate the protocol. The protocol is correct in this case since, by induction, they will compute adjacency of x,y in (C_i)_B', which is an induced subgraph of G, so x,y have the opposite adjacency as in G. * If A_j ≠ B' for all j, the protocol proceeds as below. * Similar to step <ref>, _Y_i(y) has edges to at most ℓ of its ancestors in Y_i. Call these ancestors A_1, …, A_ℓ' where ℓ' ≤ℓ. Using ℓ calls to the oracle, Alice and Bob determine whether A_j = B' for some j ≤ℓ', where B' _Y_i(x). * If A_j = B' then Alice and Bob inductively compute adjacency in the graph (C_i)_B', which again is contained in and by <ref> satisfies ((C_i)_B') < (G). They then output the opposite value and terminate the protocol. The protocol is correct in this case since, by induction, they will compute adjacency of x,y in (C_i)_B', which is an induced subgraph of G, so x,y have the opposite adjacency as in G. * If A_j ≠ B' for all j, then Alice and Bob output 1 and the protocol terminates. The protocol is correct in this case because x,y are adjacent in G if and only if they are non-adjacent in C_i. 
By <ref>, if they are adjacent in C_i then either _Y_i(x) is an ancestor of _Y_i(y) or vice versa. From step <ref>, we know that if _Y_i(y) is an ancestor of _Y_i(x), then _Y_i(x) has no edges to _Y_i(y), so x,y are non-adjacent in C_i and therefore adjacent in G. From the current step, we know similarly that if _Y_i(x) is an ancestor of _Y_i(y) then x,y are again non-adjacent in C_i and therefore adjacent in G. * Now guaranteed that x and y are each in bags at depth 2 or higher, Alice and Bob proceed similarly as in steps <ref> and <ref>, with the following differences. Here, Y is used instead of Y_i. In step <ref>, the protocol outputs 0 instead of 1, because they are operating on the graph G itself instead of an induced subgraph of the bipartite complement G. When applying the inductive hypothesis, we use <ref> instead of <ref>, and the players do not flip the output of the protocol applied to G_B'. Correctness again follows by induction. This concludes the proof. § APPLICATION TO SIGN-RANK 3 AND UNIT DISK GRAPHS We now prove our results <ref> for graphs of sign-rank 3 and unit disk graphs. This will require the notion of edge-asteroid triples (see e.g. <cit.>). A set of three edges in a graph is called an edge-asteroid triple if for each pair of the edges, there is a path containing both of the edges that avoids the neighbourhoods of the end-vertices of the third edge (see <ref> for an illustration). We say that a graph class is edge-asteroid-triple-free if no G ∈ contains an edge-asteroid triple. Since S_3, 3 and C_t for t ≥ 10 contain edge-asteroid triples, we make the following simple observation: Let G be any bipartite graph that is edge-asteroid-triple-free. Then G is both S_3, 3-free, and C_t-free for all t ≥ 10. This observation will allow us to apply our <ref>, but it requires that our graphs G and their complements both be edge-asteroid-triple-free. Unit disk graphs and graphs of sign-rank 3 are not necessarily edge-asteroid-triple-free. But we show that we can decompose these graphs into pieces which satisfy the necessary conditions. §.§ Sign-Rank 3 To apply <ref> to graphs of sign-rank 3, we will decompose these graphs into pieces which are edge-asteroid-triple-free. We achieve this by interpreting graphs of sign-rank 3 as point-halfspace incidences and projecting down into dimension 2. Let P be a set of points in ^d and H be a set of halfspaces in ^d. The incidence graph of P and H is the bipartite graph G(P, H) = ( P, H, { ph  |  p ∈ P, h ∈ H, and p ∈ h }). A bipartite graph G is a point-halfspace incidence graph in ^d if it can be represented as an incidence graph in ^d; more specifically, if there exist a set P of points and a set H of halfspaces both in ^d such that G is isomorphic to G(P, H). If d=2 we call the graph a point-halfplane incidence graph. If there exists such a representation of G, where in addition all pairwise dot products of the norm vectors of the hyperplanes defining the halfspaces in H are non-negative, then G is called a positive point-halfspace incidence graph in ^d. Any bipartite graph G = (U,W,E) of sign-rank d admits a partition U = U_1 ∪ U_2 such that G[U_1,W] and G[U_2,W] are point-halfspace incidence graphs in ^d-1. Let a : U →^d, b : W →^d be such that u ∈ U, w ∈ W are adjacent if and only if ⟨ a(u), b(w) ⟩≥ 0. 
We assume, without loss of generality, that a(u)_d ≠ 0 for every u ∈ U, and partition U into U_1 = { u ∈ U  |  a(u)_d > 0 } and U_2 = { u ∈ U  |  a(u)_d < 0 } We define a' : U →^d-1 and b' : W →^d-1 as a'(u) = 1/|a(u)_d|(a(u)_1, a(u)_2, …, a(u)_d-1) ,      b'(w) =(b(w)_1, b(w)_2, …, b(w)_d-1). Further, we define the following sets of points and halfspaces in ^d-1: P_i = { p_u = a'(u)  |  u ∈ U_i }, i = 1,2, H_1 = { h_w  |  w ∈ W, h_w = { x ∈^d-1 | ⟨ x, b'(w) ⟩≥ -b(w)_d }}, H_2 = { h_w'  |  w ∈ W, h_w' = { x ∈^d-1 | ⟨ x, b'(w) ⟩≥ b(w)_d }}. Finally, we define two point-halfspace incidence graphs G_1 = (U_1, W, E_1) and G_2 = (U_2, W, E_2), where E_1 = { uw  |  u ∈ U_1, w ∈ W, p_u ∈ h_w } and E_2 = { uw  |  u ∈ U_2, w ∈ W, p_u ∈ h_w' }. We claim that G = G_1 ∪ G_2 = (U_1 ∪ U_2, W, E_1 ∪ E_2). Indeed, for any u ∈ U and w ∈ W, uw ∈ E ⟨ a(u), b(w) ⟩≥ 0 ⟨ a'(u), b'(w) ⟩ + (a(u)_d) · b(w)_d ≥ 0 ⟨ a'(u), b'(w) ⟩≥ - (a(u)_d) · b(w)_d. Hence, if u ∈ U_1, then uw ∈ E ⟨ a'(u), b'(w) ⟩≥ - b(w)_d p_u ∈ h_w uw ∈ E_1; and if u ∈ U_2, then uw ∈ E ⟨ a'(u), b'(w) ⟩≥ b(w)_d p_u ∈ h_w' uw ∈ E_2. Any point-halfspace incidence graph G = G(P,H) in ^d admits a partition H = ⋃_i=1^2^d H_i such that each G(P,H_i) is a positive point-halfspace incidence graph. For h ∈ H, let w_h ∈^d and t_h ∈ be such that h = { x ∈^d  | ⟨ w_h, x ⟩≤ t_h }. We partition H into 2^d subsets H_α, α∈{ -1, +1 }^d, with respect to the sign patterns of the norm vectors. More specifically, h ∈ H_α if and only if (w_h)_i ≥ 0 α_i = +1 for every i ∈ [d]. Clearly, for any α∈{-1,+1}^d and any h, h' ∈ H_α, we have that ⟨ w_h, w_h'⟩≥ 0, i.e. G(P, H_α) is a positive point-halfplane incidence graph. From <ref> and <ref> we obtain the following immediate Any bipartite graph G = (U,W,E) of sign-rank d admits a partition U = ⋃_i=1^2^d U_i such that each G[U_i,W] is a positive point-halfspace incidence graph in ^d-1. We now prove that positive point-halfplane incidence graphs are edge-asteroid-triple free. Every positive point-halfplane incidence graph G is edge-asteroid-triple-free. Let G be a positive point-halfplane incidence graph and let P and H be sets of points and halfplanes respectively whose incidence graph is isomorphic to G. For a point p ∈ P we denote by x_p and y_p its coordinates respectively; for a halfplane h ∈ H we denote by a_h, b_h, t_h the coefficients of the halfplane inequality, i.e. h = { (x,y) ∈^2  |  a_h x + b_h y ≤ t_h }. Without loss of generality, we can assume that no point in P lies on the boundary of any h ∈ H. Since G is positive, using translation and rotation, we can further assume that for every h ∈ H both a_h and b_h are non-negative. This latter assumption implies the following useful claim which is straightforward to verify. Claim 1. For every h ∈ H it holds that if (x,y) ∈ h, then (x',y') ∈ h for every x' ≤ x and y' ≤ y. Suppose now, towards a contradiction, that G contains an edge-asteroid triple, and let p_1h_1, p_2h_2,p_3h_3 be its edges, where p_i ∈ P, h_i ∈ H. Note that the points p_1,p_2,p_3 are pairwise incomparable with respect to the coordinatewise order. Indeed, if for example x_p_1≤ x_p_2 and y_p_1≤ y_p_2, then by Claim 1 we would have p_1 ∈ h_2, i.e. p_1 and h_2 would be adjacent in G, which would contradict the assumption that the three edges form an edge-asteroid triple. Thus, without loss of generality, we assume that x_p_1≤ x_p_2≤ x_p_3 and y_p_1≥ y_p_2≥ y_p_3. 
Let Q=(q_1,f_1,q_2,f_2,q_3, …, f_k-1,q_k), q_i ∈ P, f_i ∈ H, q_1 = p_1 and q_k = p_3, be a path containing p_1 and p_3 that avoids the neighbourhoods of both p_2 and h_2. Since x_q_1≤ x_p_2≤ x_q_k, there exists s ∈ [k-1] such that x_q_s≤ x_p_2≤ x_q_s+1. As above, using Claim 1, we can conclude q_s is incomparable with p_2, as otherwise f_s would be adjacent to p_2 or h_2 would be adjacent to q_s, contradicting the choice of Q. Similarly, q_s+1 is incomparable with p_2. Hence, we have that y_q_s≥ y_p_2≥ y_q_s+1. Let now h_2 be the closure of the complement of h_2, i.e. h_2 = { (x,y) ∈^2  |  a_h_2 x + b_h_2 y ≥ t_h_2}, and let A_-+ = { (x,y) ∈h_2 |  x ≤ x_p_2, y ≥ y_p_2}, A_+- = { (x,y) ∈h_2 |  x ≥ x_p_2, y ≤ y_p_2}. Note that q_s ∈ A_-+ and q_s+1∈ A_+-. Hence the segment connecting q_s and q_s+1 intersects the line x = x_p_2 that separates the two sets. Let (x_p_2, y^*) ∈^2 be the point of intersection. Since both q_s and q_s+1 are in h_2, so is (x_p_2, y^*), which together with Claim 1 implies that y_p_2≤ y^*. Similarly, (x_p_2, y^*) is in f_s because both q_s and q_s+1 are in f_s. Consequently, by Claim 1, p_2 is also contained in f_s. This contradiction completes the proof. Let G(P,H) be a positive point-halfplane incidence graph. Then the bipartite complement of G(P,H) is also a positive point-halfplane incidence graph. Without loss of generality, we can assume that no point in P lies on the boundary of any h ∈ H. For every point p = (x_p,y_p) ∈ P we define p' := (-x_p,-y_p), and for every h = { (x,y) ∈^2  |  a_h x + b_h y < t_h }∈ H we define h' = { (x,y) ∈^2  |  a_h x + b_h y < -t_h }. Let P' = { p'  |  p ∈ P } and H' = { h'  |  h ∈ H }. We claim that G(P',H') is the bipartite complement of G(P,H). Indeed, for any p ∈ P and h ∈ H we have that p ∈ h a_h x_p + b_h y_p < t_h a_h (-x_p) + b_h (-y_p) > -t_h p' ∉h'. Finally, notice that the norm vector of the hyperplane defining a halfspace h ∈ H is the same as the norm vector of the hyperplane defining h' ∈ H'. Hence, G(P',H') is a positive point-halfplane incidence graph. Let G = (X,Y,E) be a bipartite graph of sign-rank 3. Then there exists a partition Y = ⋃_i=1^2^3 Y_i such that each G[X,Y_i] is both (S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10. We claim that the partition Y = ⋃_i=1^2^3 Y_i given by <ref> is a desired one. Indeed, by <ref> and <ref>, we conclude that for each i ∈ [2^3], the graph G_i G[X,Y_i] and its bipartite complement are both positive point-halfplane incidence graphs. Hence, the lemma follows from <ref> and <ref>. theoremthmintrosignrank Let be a graph class with sign-rank at most 3. Then () = O(1) if and only if is stable. It suffices to prove that () = O(1) when is stable, due to <ref>. On graph G = (X,Y,E) and inputs x ∈ X, y ∈ Y, the players compute the decomposition Y = ⋃_i=1^8 Y_i given by <ref> and use 3 bits of communication to agree on the value i such that y ∈ Y_i. Then they compute adjacency in G[X,Y_i] by applying the protocol in <ref>. §.§ Unit Disk Graphs In this section we prove our result for unit disk graphs. A graph G is unit disk if there exists a mapping ϕ : V(G) →^2 such that xy ∈ E(G) if and only if ϕ(x)-ϕ(y)_2 < 2. The mapping ϕ is called a realisation of G. Note that the constant 2 may be replaced with any other constant. We start by observing that unit disk graphs have sign-rank at most 4. Any unit disk graph G has sign-rank at most 4. 
Let v ↦ (x_v, y_v) ∈^2 for v ∈ V(G), be a realisation of G, such that for any two distinct vertices a,b ∈ V, ab ∈ E(G) if and only if (x_a - x_b)^2 + (y_a - y_b)^2 < √(2) x_a^2 - 2x_a x_b + x_b^2 + y_a^2 - 2y_a y_b + y_b^2 - √(2) < 0 Then, by defining σ : v ↦ (-1, 2x_v, 2y_v, -x_v^2 - y_v^2) ∈^4, ψ : v ↦ (x_v^2 + y_v^2 -√(2), x_v, y_v, 1) ∈^4 for v ∈ V(G), we see that for any distinct a,b ∈ V(G) σ(a), ψ(b) > 0 (x_a - x_b)^2 + (y_a - y_b)^2 < √(2) , so (σ(a), ψ(b)) = 1 if and only if ab ∈ E, as desired. The main tool for our application to unit disk graphs is the following lemma of <cit.>. A graph G is co-bipartite if its complement G is bipartite. Let G be a co-bipartite unit disk graph. Then the bipartite graph G and its bipartite complement do not contain any edge-asteroid triples. In particular, due to <ref>, G is both (S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10. Our upper bound on the communication complexity of stable unit disk graphs will follow from a fairly straightforward decomposition of a unit disk graph into unit-length grid cells, such that between any two grid cells the graph is co-bipartite. theoremthmintroudg Let be a subclass of unit disk graphs. Then () = O(1) if and only if is stable. It suffices to show that if is stable, then () = O(1), due to <ref>. Since is stable, there exists a constant k such that for all G ∈, (G) < k. Fix any G ∈ together with its realisation ϕ : V(G) →^2. For convenience, we will identify the vertices x ∈ V(G) with the corresponding points ϕ(x) ∈^2. On inputs x,y ∈ V(G), Alice and Bob will perform the following protocol. * Alice and Bob each partition ^2 into a grid with cells C_i,j for i,j ∈, where C_i,j{ (z_1,z_2) ∈^2 : i ≤ z_1 < i+1, j ≤ z_2 < j+1 }. Observe that if x,y are adjacent, then if x ∈ C_i,j we must have y ∈ C_i+a, j+b for some a,b ∈{-2,-1,0,1,2}; and if x,y ∈ C_i,j then x-y_2 < √(2) so x,y are adjacent. Let i_x, j_x ∈ be such that x ∈ C_i_x, j_x and let i_y, j_y ∈ be such that y ∈ C_i_y, j_y. * Using 1 call to the oracle, Alice and Bob check if (i_x, j_x) = (i_y,j_y). If so, they output 1 and the protocol terminates. In this case, the protocol is correct due to the observation above. * For each (a,b), (a',b') ∈{-2,-1,0,1,2}^2 such that (a,b) ≠ (0,0) and (a',b') ≠ (0,0), Alice and Bob use 2 calls to the oracle to check if both (i_x+a,j_x+b) = (i_y,j_y). If so, then Alice and Bob compute adjacency in the semi-induced bipartite graph G[X,Y] where X C_i_x,j_x∩ V(G) and Y C_i_y,j_y∩ V(G). This is possible because: * Alice and Bob each know X and Y: Alice knows (i_x+a,j_x+b) = (i_y,j_y) and Bob knows (i_y+a',j_y+b')=(i_x,j_x); and * The graph G[X,Y] has (G[X,Y]) ≤(G) ≤ k, and it is (S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10, by <ref>, so we may apply <ref>. * If (i_x + a,j_x + b) ≠ (i_y,j_y) for all (a,b) ∈{-2,-1,0,1,2}^2, then Alice and Bob output 0. In this case the protocol is correct by the observation in step <ref>. This concludes the proof. § THE SIGN-RANK HIERARCHY We have now determined exactly the conditions required for graphs of sign-rank 3, and some graphs of sign-rank 4, to have constant randomized communication cost (equivalently, constant margin). Let us now consider sign-ranks 5 and above. We will see that our techniques for the lower sign-ranks, specifically the reduction to Equality, will surely fail. This witnesses a certain threshold between sign-ranks 3 and 5. It is common to study the bipartite graphs which are K_t,t-free, for some constant t. 
If a class of graphs is K_t,t-free it is called weakly-sparse. This is a much stronger condition than stability: if G is K_t,t-free then it must satisfy (G) ≤ 2t. A hereditary graph class has bounded degeneracy (equivalently, bounded arboricity) if there exists a constant d such that every G ∈ has a vertex of degree at most d (equivalently, if there exists a constant a such that every G ∈ on N vertices has at most a · N edges). The following is well-known and easy to prove (see <cit.>). If a hereditary graph class has bounded degeneracy then () = O(1). Recent results of <cit.> show that, for any constant t, the K_t,t-free point-halfspace incidence graphs in dimension 3 have bounded degeneracy. Since any graph of sign-rank 4 can be written as a union of two point-halfspace incidence graphs in dimension 3 (see <ref>), we obtain the following: Let be a hereditary graph class that is weakly-sparse and has sign-rank at most 4. Then () = O(1). Our <ref> and <ref> are strengthenings of this theorem for the special cases of sign-rank 3 and unit disk graphs, where we replace the weakly-sparse condition with the much less restrictive stability condition. The proof of <cit.> uses a technique based on “shallow cuttings”. In <ref>, we give a simpler proof, using elementary geometry, of the weaker statement that K_2,t-free point-halfspace incidence graphs in dimension 3 have bounded degeneracy. However, it is not possible to extend these results even to sign-rank 5. To see this, we first require a theorem which shows that, for weakly-sparse classes, bounded degeneracy is equivalent to the existence of a reduction to Equality. anonymous The proof generalizes a theorem of <cit.> and is due to Bonamy, Esperet, & Girão, which we include here with permission and gratitude. * This theorem follows from the next two lemmas. The first lemma is implicit in <cit.>. Let be a class of bipartite graphs satisfying the following Ramsey property: for any k, ℓ∈ there exists a graph G ∈ such that, for any coloring of the edges of G with at most k colors, there exists a monochromatic induced path on ℓ vertices. Then ^() = ω(1). This lemma was used in <cit.> in conjunction with a result of <cit.> that established the required Ramsey property for induced subgraphs of hypercubes, which are K_2,3-free, to show that hypercubes do not have constant-cost reductions to Equality. The next lemma generalizes this result. anonymous For any k, t, ℓ∈, there is an integer d such that, if a K_t,t-free bipartite graph G has average degree at least d, and its edges are colored with at most k colors, then G contains a monochromatic induced path of length at least ℓ. For any k, t, ℓ∈, there is an integer d such that, if a K_t,t-free bipartite graph G has average degree at least d, and its edges are colored with at most k colors, then G contains a monochromatic induced path of length at least ℓ. We first reduce to the case t=2. Suppose t > 2 and let d ≥ t. A result of <cit.> shows that there is a constant d' such that any K_t,t-free bipartite graph of average degree at least d' contains a K_2,2-free induced subgraph of average degree at least d. Therefore it suffices to consider the case t=2. Choose b > ℓ and set d = 2kb. Consider a K_2,2-free graph G, whose edges are colored with at most k colors. Then, if the average degree of G is at least d, there exists a color c ∈ [k] such that the graph G_c induced by the edges with color c has average degree at least 2b. Then G_c has an induced subgraph G'_c with minimum degree at least b. 
We now construct a monochromatic induced path in G by induction as follows. The base case, a monochromatic path on 2 vertices, is trivial. Suppose we have obtained an induced path P_s-1 = { v_1, …, v_s-1}, for s-1 < ℓ where each (v_i, v_i+1) is an edge of G'_c. Let N'_c(v_s-1) be the neighbors of v_s-1 in G'_c and suppose for contradiction that all vertices u ∈ N'_c(v_s-1) ∖ P_s-1 are adjacent in G to some v_i with i < s-1. Since v_s-1 has at least b > ℓ > s-1 neighbors, there are two vertices u,w ∈ N'_c(v_s-1) ∖ P_s-1 that are adjacent in G to both v_s-1 and v_i for some i < s-1. But then {v_i, v_s-1, u, w} form an induced K_2,2, which is a contradiction. Therefore there exists a vertex v_s ∈ N'_c(v_s-1) ∖ P_s-1 which produces a monochromatic induced path P_s = { v_1, …, v_s }. This concludes the proof. §.§ Sign-Rank 5: Point-Box Incidences To show that our techniques cannot extend to sign-rank 5, even if we ask for the much stronger K_2,2-free condition instead of stability, it now suffices to show that there exists a weakly-sparse class of bipartite graphs with sign-rank 5 and unbounded degeneracy. For this we use the point-box incidence graphs. Let P be a set of points in ^2 and H a set of axis-aligned rectangles in ^2. The incidence graph of P and H is the bipartite graph G(P, H) (P, H, { ph | p ∈ P, h ∈ H, p ∈ h }) . The fact that these graphs have sign-rank 5 follows from a transformation of point-box incidences in dimension 2 to point-halfspace incidences in dimesion 4, which appears in <cit.>. The sign-rank of point-halfspace incidences in ^4 is at most 5. The class of point-box incidence graphs has sign-rank at most 5. What remains is the claim that weakly-sparse point-box incidence graphs on N vertices can have ω(N) edges. This is true even under the strongest condition of being K_2,2-free. The lower bound of the next lemma was proved recently in <cit.>, and the upper bound in <cit.>. We remark that the lemma remains true even if the boxes are restricted to be dyadic, the product of intervals of the form [s2^t, (s+1)2^t) with integers s,t. The maximum number of edges in a K_2,2-free point-box incidence graph is Θ( n ·log n/loglog n). As a consequence, K_2,2-free point-box incidence graphs have unbounded degeneracy. Combining <ref>, we get: There is a hereditary class of K_2,2-free bipartite graphs with sign-rank 5 and () = ω(1). §.§ Sign-Rank 6: Point-Line Incidences The above result shows that reductions to Equality cannot be used to prove () = O(1) in general, even for weakly-sparse classes, let alone stable ones. This leaves open the possibility that there is another method for obtaining constant-cost randomized communication protocols for weakly-sparse or even stable graph classes with sign-rank 5, 6, or any constant. However, we discuss here a recent conjecture of <cit.> regarding point-line incidences suggesting that weakly-sparse graphs of sign-rank 6 have non-constant communication complexity. Let P be a set of points in ^2 and L be a set of lines in ^2. The incidence graph of P and L is the bipartite graph G(P, L) (P, L, { ph | p ∈ P, ℓ∈ L, p ∈ℓ}) . Point-line incidence graphs are K_2,2-free by definition, and it is well-known that the incidence graph between N points and N lines can have Θ(N^4/3) edges; therefore, <ref> guarantees that they do not reduce to Equality. Furthermore, it is known that point-line incidence graphs are point-halfspace incidence graphs in ^5 (see e.g. 
<cit.>), and hence they have sign-rank at most 6: Point-line incidence graphs have sign-rank at most 6. The communication complexity of point-line incidence graphs was recently studied in <cit.>, but it remains unknown whether they have constant-cost. It was conjectured that they do not: The class of point-line incidence graphs has () = ω(1). anonymous Acknowledgments We are grateful to Marthe Bonamy, Louis Esperet, and Antonio Girão for their proof of <ref>, and to Louis Esperet for communicating this proof to us and allowing us to include it here. We thank Lianna Hambardzumyan, Pooya Hatami, and Sebastian Wild for several conversations on the topic of this paper. The general definition of constant-cost reductions given in this paper has arisen partly out of collaboration with Yuting Fang, Lianna Hambardzumyan, and Pooya Hatami. We thank Mónika Csikós for telling us about <ref>. alpha § ON THE NUMBER OF EDGES IN WEAKLY-SPARSE POINT-HALFSPACE INCIDENCE GRAPHS In this section we show that K_2,s-free point-halfspace incidence graphs in dimensions 1,2, and 3 have linearly many edges. The same result was recently obtained by Chan and Har-Peled in <cit.> for more general classes of K_s,s-free graphs. We present our results for two reasons. First, the proof technique is completely different and might be of independent interest. Second, our bounds are more specific for the considered cases. To prove our upper bounds, we will show that every graph in a class has a vertex of bounded degree. Since the classes are hereditary, this will imply linear bounds on the number of edges. §.§ On the line In this section we will show that the K_s,s-free point-halfline incidence graphs on have linear number of edges. In fact we will show a linear bound on the number of edges in the more general class of the K_s,s-free point-interval incidence graphs. For this latter class, <cit.> shows that an n-vertex K_s,s-free point-interval incidence graphs with n_p points and n_i intervals contains at most s(n_p+3n_i) edges. Our bound of (s-1)n = (s-1)(n_p+n_i) is a slight improvement over the bound from <cit.>. Let G be a K_s,s-free n-vertex point-interval incidence graph. Then G has at most (s-1)n edges. Let P be a set of points and I be a set of intervals on the real line such that G ≃ G(P, I). To prove the statement we will show that G has a vertex of degree at most s-1. Suppose that all vertices of G have degree at least s and let p be the leftmost point in P. The degree assumption implies that p belongs to at least s intervals, which we denote i_1, i_2, …, i_s. For the same reason, each of these intervals should contain the s-1 points in P closest to p, which we denote p_1, p_2, …, p_s-1. But then the vertices corresponding to i_1, i_2, …, i_s and p, p_1, p_2, …, p_s-1 induce the forbidden K_s,s. §.§ On the plane In dimensions 2 and 3, the bounds for K_s,s-free graphs from <cit.> are O(sn), and the constants in the big-O are not specified. Our bounds in dimension 2 and 3 are respectively 3(s-1)n and 5(s-1)n. To obtain them we will use the following lemma that reduces the analysis to the case where the points are in convex position. Let G ≃ G(P, H) be the incidence graph of a set P of points and a set H of halfspaces in ^d. If G is K_2,s-free and P is not in convex position, then G has a vertex of degree at most (d+1)(s-1). Suppose that P is not in convex position, and let p ∈ P be a non-extremal point of the convex hull (P). By Carathéodory's theorem, p belongs to the convex hull of at most d+1 extremal points of (P). 
Let p_1,p_2, …, p_k, k ≤ d+1 be a minimal set of such extremal points. Since p belongs to the interior of ({p_1, …, p_k}), any halfspace containing p contains one of the points p_1, …, p_k. Thus, if p belongs to at least k(s-1)+1 halfspaces, one of the points p_1, …, p_k belongs to at least s of them resulting in the forbidden K_2,s. Hence, the degree of p is at most k(s-1) ≤ (d+1)(s-1). The polytope graph of a polytope is the incidence graph of the extremal points and 1-dimensional faces of the polytope. We will need the following well-known fact. Let P and H be respectively a polytope and a halfspace in ^d. The subgraph of the polytope graph of P induced by the extremal point of P that belong to H is connected. Let G be a K_2,s-free n-vertex point-halfplane incidence graph. Then G has at most 3(s-1)n edges. Let P be a set of points on the plane and H be a set of halfplanes such that G ≃ G(P, H). We assume without loss of generality that |P| ≥ 3. To prove the lemma we will show that G has a vertex of degree at most 3(s-1). If P is not in convex position, such a vertex exists by <ref>, so we can assume that all points in P are extremal points of (P). Suppose that all vertices of G have degree at least 3(s-1)+1 and let p be an arbitrary point in P. The polytope graph of P is a cycle, and hence p has exactly 2 neighbours in this graph. <ref> implies that each of the halfplanes that contain p and some other vertices in P should also contain at least one of these 2 neighbours. Thus, since p belongs to 3(s-1)+1 ≥ 2(s-1)+1 halfplanes in H, at least s of them contain one other fixed point in P, which witnesses a forbidden K_2,s. §.§ In ^3 Let G be a K_2,s-free n-vertex point-halfspace incidence graph in ^3. Then G has at most 5(s-1)n edges. Let P and H be respectively a set of points and a set of halfspaces in ^3 such that G ≃ G(P, H). As before, to prove the lemma we will show that G has a vertex of degree at most 5(s-1). Towards a contraction, suppose that all vertices in G have at least 5(s-1)+1 neighbours. This assumption and <ref> imply that P is in convex position, and hence all points in P are extremal points of (P). Let F be the polytope graph of P. By Steinitz's theorem (see e.g. <cit.>), F is planar, and therefore has a vertex of degree at most 5. Let p ∈ P be such a vertex. It follows from <ref> that any halfspace in H that contains p also contains at least one of the neighbours of p in F. Thus, by the pigeonhole principle, at least s halfspaces among those in H that contain p contain also one fixed neighbour of p, which witnesses the forbidden K_2,s. This contradiction completes the proof.
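As a quick illustration of the first lemma of this appendix (the (s-1)n edge bound for K_s,s-free point-interval incidence graphs), here is a minimal numeric sanity check for the case s=2. It is not part of the paper; the instance sizes, the greedy generation procedure, and all identifiers are our own illustrative choices.

```python
import random

def edges(points, intervals):
    """Incidences of the point-interval graph as (point index, interval index) pairs."""
    return [(i, j) for i, p in enumerate(points)
                   for j, (lo, hi) in enumerate(intervals) if lo <= p <= hi]

def k22_free(points, intervals):
    """K_{2,2}-free <=> no two intervals contain the same pair of points."""
    cont = [frozenset(i for i, p in enumerate(points) if lo <= p <= hi)
            for lo, hi in intervals]
    return all(len(cont[a] & cont[b]) < 2
               for a in range(len(cont)) for b in range(a + 1, len(cont)))

random.seed(0)
s = 2
for trial in range(100):
    points = sorted(random.uniform(0, 10) for _ in range(12))
    intervals = []
    for _ in range(500):                        # greedy: keep only candidates that
        lo = random.uniform(0, 10)              # preserve K_{2,2}-freeness
        cand = (lo, lo + random.uniform(0, 1.5))
        if k22_free(points, intervals + [cand]):
            intervals.append(cand)
        if len(intervals) == 12:
            break
    n = len(points) + len(intervals)
    assert len(edges(points, intervals)) <= (s - 1) * n   # bound from the lemma
print("every generated K_{2,2}-free instance satisfies |E| <= (s-1) n")
```

The greedy filter only keeps candidate intervals that preserve K_{2,2}-freeness, so every generated instance lies in the hereditary class covered by the lemma, and the asserted edge bound must hold for all of them.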
http://arxiv.org/abs/2307.04233v2
20230709172412
The centaur-algebra of observables
[ "Sergio E. Aguilar-Gutierrez", "Eyoab Bahiru", "Ricardo Espíndola" ]
hep-th
[ "hep-th" ]
[email protected] Institute for Theoretical Physics, KU Leuven, 3001 Leuven, Belgium [email protected] SISSA, International School for Advanced Studies, via Bonomea 265, 34136 Trieste, Italy INFN, Sezione di Trieste, via Valerio 2, 34127 Trieste, Italy International Centre for Theoretical Physics, Strada Costiera 11, Trieste 34151 Italy [email protected] Institute for Advanced Study, Tsinghua University, Beijing 100084, China This letter explores a transition in the type of von Neumann algebra for open universes from the implementations of the different gravitational constraints. We denote it as the centaur-algebra of observables. In the first part of the letter, we employ a class of flow geometries interpolating between AdS_2 and dS_2 spaces, the centaur geometries. We study the type II_∞ crossed product algebra describing the semiclassical gravity theory, and we explore the algebra of bounded sub-regions in the bulk theory following TT deformations of the geometry and study the constraints with respect to the quasi-local Brown-York energy of the system at a finite cutoff. In the second part, we study arbitrary asymptotically AdS spacetimes, where we implement the boundary protocol of an infalling observer modeled as a probe black hole proposed by <cit.> to study modifications in the algebra. In both situations, we show how incorporating the constraints requires a type II_1 description. The centaur-algebra of observables Ricardo Espíndola Received date; accepted date ================================== § INTRODUCTION Recently, there has been interest in the formal description of perturbative quantum gravity in terms of the algebra of diffeomorphism invariant observables, which have allowed us to rigorously define density matrices and the associated notion of generalized entropies<cit.>. Pioneering work developing bulk emergence from the language of von Neumann algebra can be found in <cit.> and check <cit.> for reviews. This procedure begins with a type III_1 algebra describing the quantum fluctuations on a curved spacetime background. One incorporates dynamical gravitational gravity perturbatively by requiring that time translations act as gauge redundancies, which we denote throughout the letter as gravitational constraints. In the several examples considered, once gravitational corrections are included (either perturbatively or as an addition of a gravitational mode <cit.>), it has been shown that the algebra of observables becomes type II_∞ when the gravitational dressing of operators is performed with respect to the asymptotic boundary region of an open universe; while if the dressing is with respect to a worldline observer in a closed universe, the algebra is type II_1 <cit.>. More recently, the importance of this construction has been recognized even in the absence of gravitational constraints <cit.>. Let us denote 𝒜 as the algebra of bulk fluctuations associated with a spacetime region, acting on a Hilbert space ℋ, and let T be the generator of the automorphism group (for simplicity we take it to be ℝ), with the respective group elements U=^ sT,  ∀ s∈ℝ, such that U a U^-1∈𝒜 , ∀ a∈𝒜 . Let X be the generator of the unitary representation of the automorphism group acting on L^2(ℝ). Then, one denotes the crossed product ⋊ algebra of 𝒜 and ℝ as 𝒜̂=𝒜⋊ℝ , which is produced by adjoining bounded functions of T+X to 𝒜, i.e. a^ s T⊗^ s X∈𝒜̂, ∀ a∈𝒜 , acting on a Hilbert space ℋ̂≡ℋ⊗ L^2(ℝ). When the automorphism is outer, i.e. 
U∉𝒜, and 𝒜 is a type III_1 algebra, the crossed product algebra results in a type II algebra <cit.>. Trace-class elements in type II algebras are defined as those with a well-defined trace. We can associate a density matrix ρ_Φ to each state |Φ⟩∈ℋ̂, (ρ_Φâ)=⟨Φ|â|Φ⟩ , ∀â∈𝒜̂ . Thus, von Neumann entropy for these states can be defined as, S = -(ρ_Φlogρ_Φ) . For semiclassical states in ℋ̂, this entropy was shown to match with the generalized entropy[More precisely, what is matched is the entropy differences since the von Neumann entropy is defined up to a state-independent additive constant.]<cit.>. In the context of the eternal AdS black hole, the generator of the automorphism group, T, is proportional to the time translation generator on both of the asymptotic boundaries. One needs different regularization procedures for T depending on whether the systems are described by a canonical <cit.> or micro-canonical ensemble <cit.>. More precisely, one should divide the generator by N in the canonical ensemble compared with the micro-canonical ensemble and do the construction in a perturbative series in √(G_N)∼ 1/N (where N is the rank of the gauge group of the boundary CFT). The reason is that the states in the canonical ensemble have O(N) variance in the energy which diverges in the large N limit. The operator T+X is taken to be the Hamiltonian of the CFT and thus the crossed product algebra  actually describes the physical theory. These methods have also been developed for subregion algebras <cit.>. [Meanwhile, there are some expectations that non-perturbative corrections in quantum gravity might modify the algebra to type I once string theory corrections and black hole microstates are added in the algebra <cit.>.] See <cit.> for related developments in this area. Physical observables in perturbative quantum gravity are required to be diffeomorphism invariant. For open universes, this is naturally implemented by dressing the operators with respect to the boundary<cit.>. The reader is referred to <cit.> for an alternative dressing of the operators with respect to the features of the state itself. Since in a gravitational theory the Hamiltonian is a boundary quantity, this dressing implies that the operators will not commute with the ADM Hamiltonian in general. On the other hand, for closed universes like dS space and subregions in an open universe, it was proposed in <cit.> that one should perform the dressing with respect to the world-line of an observer. Thus, the dressed observables will translate under the action of the world-line energy of the observer. Both of these facts are encoded in the non-trivial action of T+X on the elements of 𝒜[Both in <cit.> and <cit.> the dressed observables are not given in the terms of the elements given in <ref>, rather in an equivalent description where the elements are e^iTPae^-iTP and e^isX, where P is the conjugate variable to X, which is taken to be the energy of the observer. This description can be related to <ref> with a conjugation by e^-iPT.]. Most of the previous works assume that the observer can be minimally modeled as a clock <cit.>. In this work, we explore modifications in the algebra of observables for the semiclassical spacetime depending on how the gravitational constraints are implemented. We do so, first without having to add an observer by hand rather, considering the TT deformation of the theory to study subregions in the bulk; and then with respect to the experience of an infalling observer from an asymptotic boundary. 
In the former, we adopt a well-known setting for holography in dS space (see <cit.> for a review), referred to as interpolating geometries <cit.>. In the latter case, we study the modifications in the algebra for general asymptotically AdS spacetimes. The interpolating geometries are dilaton-gravity models that adopt near-AdS_2 space boundary conditions <cit.>, while the interior is a near dS_2 space. They avoid a no-go theorem <cit.> forbidding any dS_D region to reside in a causally accessible part of AdS_D for D>2. We expect that a better understanding of the algebra of observables in these kinds of backgrounds will lead to new insights on their holographic dual theory <cit.>, and that of dS_2 JT gravity <cit.>. JT gravity has been a productive test ground to study of von Neumann algebras in gravity beyond the semiclassical regime <cit.>, revealing the importance of different topologies in the description of the algebra of observables. However, the use of the centaur model in our work is not aimed at extending the discussion about the role of topologies, as above, but rather deriving the gravitational constraints imposed on the algebra from from first principles, which is the main novelty of our work. After reviewing the semiclassical centaur geometry model and its crossed product algebra enlargement, we perform a TT deformation of the theory, where the gravitational constraints are imposed with by the quasi-local Brown York energy, resulting in modifications of the algebra. Later, we study the experience of an infalling observer from the asymptotic boundary to the interior universe in the undeformed theory with the boundary theoretic protocol of <cit.> in asymptotically AdS space of arbitrary dimensions. In particular, the latter argument is also valid for the centaur geometries. In the previous two cases, we focus on how the description of the observer changes the algebra from type II_∞ to type II_1 and the conditions required for such modification. We conclude with a brief summary of our main results and some future directions. § SETTING The first part of the letter is focused on the 2-dimensional flow models <cit.> which interpolate between an AdS_2 space and some internal space. They can be expressed in a unified way by the action I= I_0+116π G_N∫_ℳ^2x√(g)(Φ R-V(Φ)) +18π G_N∫_∂ℳ x√(h) KΦ_b+I_m[g, χ] , where I_0 represents a topological term, Φ is the dilaton field, Φ_b is the asymptotic boundary value of the dilaton; and χ represents the matter content of the theory, which is considered as generic quantum field theory (QFT). The resulting equations of motion are given: ∇_μ∇_νΦ-g_μν∇^2Φ-1/2g_μνV(Φ) =-8π G_N t_μν, R =V'(Φ) , where t_μν is the expectation value of the stress tensor for the matter fields, and the primes, ', indicate differentiation with respect to the argument of the function. In absence of such fields, ϵ^μν∂_νΦ is a Killing vector. Moreover, one can absorb the topological term of the action (<ref>) in the definition of the dilaton Φ and expand the solution about Φ=ϕ_0+ϕ. In the following, we work in the semiclassical limit ϕ_0≫ϕ, since the dilaton represents the area of the transverse S^2 of the higher dimensional near-Narai black hole geometry. To describe the geometry, we also employ some particular dilaton potential term V(Φ)=2Φtanh(Φ/ϵ), where the ϵ→0 case represents a “sharp" transition between AdS space and the interior geometry, which can be AdS_2 or dS_2 space depending on the sign of the renormalized dilaton. 
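As a small numerical aside (ours, not part of the letter; the values of ε and the evaluation points are only illustrative), the "sharp transition" limit can be checked directly: with R=V'(Φ) as in the equations of motion above, the slope of V(Φ)=2Φtanh(Φ/ε) approaches the constants ±2 on the two sides of Φ=0 as ε→0, i.e. two regions of constant curvature of opposite sign, while V(Φ)→2|Φ| pointwise.

```python
import numpy as np

def V(phi, eps):
    # interpolating potential quoted in the text
    return 2.0 * phi * np.tanh(phi / eps)

def dV(phi, eps):
    # R = V'(Phi), cf. the equation of motion quoted above
    return 2.0 * np.tanh(phi / eps) + 2.0 * phi / (eps * np.cosh(phi / eps) ** 2)

for eps in (1.0, 0.1, 0.01):
    print(f"eps = {eps:5.2f}:  V'(+1) = {dV(+1.0, eps):+.4f},  V'(-1) = {dV(-1.0, eps):+.4f}")
# As eps -> 0, V'(+-1) -> +-2: constant-curvature regions of opposite sign.
```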
For concreteness, we focus on the case where Φ_b>0 to obtain a transition between spacetimes of opposite sign curvature. In that case, the potential becomes, V_ cent(Φ)=2ηΦ+ϕ̃ ; where η= +1 AdS_2 , -1 dS_2 . This construction is a double-sided geometry, i.e. two boundary particles are required to describe the bulk geometry. It becomes convenient to introduce the conformal metric s^2=^2ω(ρ, τ)(τ^2+ρ^2), with ω(ρ, τ) the conformal factor. A curve of the form 𝒞=τ(u), ρ(u) can parametrize the embedding of one of the boundary particles (say R). We impose Dirichlet boundary conditions in 𝒞, by scaling Φ_b(u) and h(u) with Λ≫1 as [√(h), Φ_b]→Λ[1, Φ_r]. The resulting on-shell action of (<ref>), I_ on, is given by <cit.>: I_ on = 18π G_N∫ u Φ_r(u)(Λ^2+12(τ'(u))^2-τ(u), u) , where τ(u), u=τ”'(u)τ'(u)-32(τ”(u)τ'(u))^2 is the Schwarzian derivative, and the term Λ^2 can be eliminated with standard holographic renormalization <cit.>. Notice that the (τ'(u))^2 term breaks the symmetry that we would have found in JT gravity, now the symmetry group is: 𝕊𝕃(2, ℝ)→ U(1) , such that under boundary time periodicity u∼ u+2π/ℓ . the corresponding time on 𝒞 is periodic τ(u+2π/ℓ)=τ(u)+2π . The one-sided Hamiltonian, H_ cent, for each boundary particle corresponding to the boundary action (<ref>) can be deduced as <cit.> H_ cent =ϕ_r(u)8π G_N(τ'(u)^22-τ”'(u)τ'(u)+32(τ”(u)τ'(u))^2) . The algebra of the L or R boundary theory without matter consists of bounded functions of H_ cent, R or H_ cent, L respectively. It also suffers from a similar factorisation puzzle as in JT gravity <cit.>, namely that the algebras 𝒜_L and 𝒜_R commute with each other, since they share the same generator, H_ JT, L=H_ JT, R. In case of the centaur geometry, the U(1) charge corresponds to the modular Hamiltonian H_ mod=H_ cent, L-H_ cent, R, and the Hamiltonian constraint on physical states |ψ⟩ which are invariant under this symmetry reads H_ mod|ψ⟩=0, which expresses that H_ cent, L=H_ cent, R for physical states. This issue is no longer present once matter is introduced, as follows. We define local matter operators, χ, with the appropriate canonical quantization relations respecting U(1) gauge invariance. The smearing over u allows to define bounded operators ℬ(χ). We can then express the time translation generators along the L/R boundary particles as H_ L/R=H_ cent, L/R+H_ matter, L/R , where the generator of U(1) transformations corresponds to the modular Hamiltonian, H=H_R-H_L . Once we add matter to the theory, we can employ a generalized free-field approximation for constructing the total Hilbert space ℋ_ tot, ℋ_ tot=ℋ_ matt⊗ℋ_ grav , where the operators quantizing the metric and the dilaton h_μν^ grav, ϕ can be used to construct the states in ℋ_ grav; meanwhile, ℋ_ matter can be constructed from strings of Fourier modes a, a^†, i.e. any matter field χ can adopt a decomposition χ(τ(u))=∫ω (f_ω(τ(u))a_ω+f^*_ω(τ(u))a^†_ω) . § ALGEBRA FOR THE CENTAUR GEOMETRY The full boundary algebra for a given side, such as R, is generated by H_R and χ. This determines the type III_1 algebra of operators by constructing finite strings of the modes a, a^† and bounded functions of H_R. Let us denote, the gauge-constrained Hilbert space as ℋ̂ by all U(1) invariant states constructed from the operators (<ref>) and their Hilbert space completion. 
Let 𝒜_R be the von Neumann algebra consisting on the set of operators 𝒪̂_R in R which time evolve non-trivially along the asymptotic boundary by the modular flow (<ref>) according to (<ref>) with U=^ H τ to describe the U(1) isometry group of the centaur geometry. We employ the Tomita-Takesaki construction of modular automorphisms <cit.> for type III_1 von Neumann algebras. We start from a thermofield-double state, |ψ_ TFD⟩, that is a cyclic and separating vacuum state that obeys the constraint equation H|ψ_ TFD⟩=0 . Then, we can generate cross-product algebra following (<ref>) with T=H, i.e. the modular time translation generator of the cross-product algebra; and X=H_L. However, given the U(1) invariance, the automorphism group is an interval, rather than ℝ in (<ref>). Consider now â_R∈𝒜̂_ R∈ H_ R . Since H|ψ_ TFD⟩=0, (<ref>) can be employed to evaluate the expectation value of a generic element â∈𝒜̂_ R <cit.>, â=β_ TFD∫_X_ min^X_ max X ^β_ TFDX⟨ψ_ TFD|a(X)|ψ_ TFD⟩ . We have introduced the integration limits X_ min and X_ max to indicate constraints in the on-sided Modular Hamiltonian. In the present case, although there is a U(1) symmetry, which would bound the allowed range of (<ref>), we are considering the Hamiltonian H_L (<ref>) in the presence of matter. Given that matter excitations are arbitrary, the allowed range becomes X∈[-∞,∞]. Physically, the presence of the asymptotic boundary does not allow the modification to a type II_1 algebra where maximally entangled states can be defined. The definition (<ref>) obeys the properties âb̂=b̂â â, b̂∈𝒜̂ , â^†â>0 ∀ â≠ 0 . As mentioned in the introduction, the crossed product will result in a type II algebra. Moreover, since the trace of the identity matrix 1→∞ then the algebra is type II_∞. The trace must be finite for a dense set of operators in the algebra. § DEFORMED THEORY Our goal in this section is to try to address the algebra of observables in a bounded subregion <cit.> for the centaur geometry by implementing a TT deformation <cit.>. We follow the conventions <cit.> to express TT deformations parametrized by λ∈ℝ as Iλ=π∫^2x √(-g)( T^ijT_ij-(T^i_i)^2) , where T_ij is the Brown-York quasilocal stress tensor <cit.> along a boundary surface r=1√(αλ), with α≡ 1/(2π G_N), in static patch coordinates s^2=-N(r)τ^2+ r^2N(r) , Φ=Φ(r) , in the absence of matter; while equation (<ref>) and the relation between the cutoff and λ is modified in presence of matter <cit.>. Alternatively, the deformation can be interpreted as the result of introducing mixed boundary conditions for the undeformed theory <cit.>. In the former interpretation, the time translation generator along the left or right-sided cutoff surface is given by the quasi-local Brown-York Hamiltonian, H_TT. For a general dilaton-gravity theory with matter of the form (<ref>) under the TT deformation (<ref>), the quasi-local Brown-York Hamiltonian obeys the relation <cit.> H_TTλ=H_TT^2-1/16λ^2(1+√(αλ)/2Φ_rV(Φ_r/√(αλ)))-t_r^r/4α^1/2λ^3/21/2-2λ H_TT , where t_r^r is the radial-radial component of the bulk matter stress tensor at the cutoff surface, and H_TT(λ=0)=H_ cent. The precise deformation of the energy spectrum will depend on the matter stress tensor, the dilaton potential, and the location where the cutoff is performed. However, the dependence on τ(u) is only encoded on H_ cent, which is a bounded function ℬ(ℝ). 
As long as the cutoff, parametrized by λ, is finite in (<ref>) as well as the radial-radial component of the matter stress tensor t^r_r, we then have that X_ min and X_ max will be bounded in (<ref>). We then conclude that the trace for any element in the crossed product algebra in with X=H_TT, L and T=H_TT, R-H_TT, L will also be a bounded function. This means, that the TT deformed theory is described by a type II_1 algebra, where the observables are dressed with respect to the cut-off surface; whereas, the base theory has a type II_∞ algebra structure. For example, consider the centaur theory with the potential (<ref>) and t_r^r=const in (<ref>), H_ cent^λ≈14λ(1-√(η-8√(λα)t_r^r-8λ H_ cent)) , with H_ cent is the Modular Hamiltonian of the undeformed theory (<ref>). Given the U(1) restriction in the modular Hamiltonian, it is clear that X will have to be bounded in (<ref>). Moreover, notice that (<ref>) would produce the spectrum of a TT+Λ_2 deformation <cit.> directly in the interpolating geometry. Notice, our result is consistent with the prediction of <cit.> for closed universes. In this derivation, we worked under the assumption that the stress tensor is a bounded function; it would be interesting to study if this restriction can be relaxed. § THE EXPERIENCE OF AN INFALLING OBSERVER We employ the protocol of <cit.> that describes the experience of an infalling observer, modeled as a probe black hole, from the boundary of a generic asymptotically AdS_d+1 spacetime, including the centaur geometry. We prepare a microcanonical TFD configuration dual to a black hole geometry and a copy of it, which we refer to as the reference system, with energy eigenstates |E_n⟩_ sys and |E_n⟩_ ref respectively. We employ a conformal transformation ^ Pρ to shift the black hole into the asymptotic boundary, with P the momentum operator, and ρ≫1 a parameter controlling the shift. Let |ψ⟩ denote the CFT state dual to a semiclassical asymptotically AdS space with a probe black hole. Defining the state <cit.> |ψ⟩=Z^-1/2∑_n f(E_n|E_0, σ) × [V_ sys^-δℓ^2P^2- Pρ|E_n⟩_sys]|E_n⟩_ref , where V_ sys is some arbitrary operation in the interior geometry; f(E_n|E_0) is an appropriate enveloping function for E_n to be summed over a microcanonical window of width σ around E_0; δℓ is the wavepacket localization; and Z the microcanonical partition function. The set of normalizable states |ψ⟩=|ψ_ eq⟩ are called local equilibrium operators, which by definition obey KMS conditions for the two-point functions of the set of operators available to the atmosphere around the observer, denoted by O=ϕ_ atm: ⟨ψ_ eq|O_1^†exp[-2π K_ρ_ eq]O_2|ψ_ eq⟩=⟨ψ_ eq|O_2O_1^†|ψ_ eq⟩ , where K_ρ_ eq is the modular Hamiltonian, K_ρ_ eq=-12πlog[ρ_ eq] , with ρ_ eq=|ψ_ eq⟩⟨ψ_ eq|. Then, the generator of Schwarzschild time translation for the proper time of these states is given by tracing out the reference system K^ sys_ρ_ eq=_ refK_ρ_ eq. On the other hand, since the reference system is entangled by construction with the infalling observer; it is then natural to employ K^ ref_ρ_ eq=_ sysK_ρ_ eq as the time automorphism generator for the infalling observer. By making this identification, we employ T=K_ρ_ eq as the generator of time automorphism of 𝒜̂, and X=K^ ref_ρ_ eq. This allows us to define the traces of the crossed product algebra in (<ref>). The nature of the algebra will be determined by the states in ℋ̂. In general, the presence of matter in the background geometry can introduce non-equilibrium states. 
Such states are in principle non-normalizable, leading to ill-defined traces for some elements in 𝒜. Thus, the experience of infalling observer might still be described with type II_∞ algebras generically; although, considering symmetries of the system might result in a type II_1 description. We focus on the case where the infalling probe black hole does not encounter bulk matter fields along its worldline. Its experienced Hilbert space is then always described by equilibrium states. In such a case, we must also account for the constraint that the reference system energy is bounded from below. This comes from the construction of the generator (<ref>). Given that |ψ_ eq⟩ obey the KMS relation (<ref>) for all elements in the algebra 𝒜, they are normalizable states. It is then clear that the range of integration [X_ min, X_ max] in (<ref>) is a bounded interval, and as such the trace is finite ∀â∈𝒜̂, i.e. the von Neumann algebra is thus type II_1. Notice that the particular use of two-dimensional gravity was not employed in the arguments, so the construction works for general asymptotically AdS_d+1 spacetimes without matter, and in particular, the centaur geometry. Moreover, the transition occurs as soon we exchange the ADM Hamiltonian for the reference system K^ ref_ρ_ eq. § CONCLUSIONS AND OUTLOOK In this work, we have uncovered the transition in the type II algebra of observables by considering (i) a TT deformation of the centaur geometries, and (ii) the experience of an infalling observer from the boundary to the interior geometry of asymptotically AdS spacetimes. In both cases, the transition to a type II_1 algebra allows us to construct a maximally mixed state and a notion of generalized entropies based on the Tomita-Takesaki theory. The main novelty of our work has been to deduce the gravitational constraints that need to be implemented in (i) from first principles, which has been the reason to pick a particular class of models; while in (ii) we studied a natural choice of constraints that can be employed in a wider family of spacetimes. We expect that the general lessons can be carried out in more generic systems. In (i), we can generalize the lesson broadly to dilaton-gravity theories of the form (<ref>) where the spectrum of quasi-local energies at the cutoff surface in (<ref>) remains bounded, which in particular we showed for a U(1) symmetric boundary theory. Perhaps, the simplest explicit generalizations would involve the AdS_2 interpolating geometries with a different cosmological constant; the γ-centaur <cit.>, and the double interpolating geometry <cit.>; where the change in the algebra is also suggested by <cit.>. Meanwhile, in (ii), we have shown that if the infalling observer from the asymptotic boundary does not cross bulk matter fields, the transition of the algebra type II_∞ to type II_1 does not depend on the interior geometry, as long as the protocol <cit.> can be employed. Let us proceed by pointing out some future directions. First, as we have indicated, after the crossed product enlargement of the algebra (<ref>), the definition of traces in (<ref>) allows us to define reduced-density matrices and rigorous notions of generalized entropies. Interesting progress towards formulating the Page curve in the language of von Neumann algebras was initiated in <cit.>. Perhaps such notions can establish the island formula on solid grounds for de Sitter space, which was pioneered by <cit.>. 
However, it has been argued that the appearance of islands close to the cosmological horizon violates entanglement wedge nesting <cit.> unless the large backreaction is induced <cit.>. We hope the algebraic techniques can bring a better understanding of these features. Second, we can think of the infalling observer as a little diary falling into a black hole. Certain protocols can be used to recover the information after the scrambling time <cit.>. In the context of the Page curve, the information encoded in the island can be recovered by applying explicit teleportation protocols <cit.>, see upcoming work in this area by <cit.>. It would be interesting to understand how information recovery works in algebraic language. These ideas can shed some light on understanding the microscopic origin of the island formula. Third, although the centaur geometries provide a natural background to study de Sitter space holography and a rich algebraic structure; these theories are known to be thermodynamically unstable <cit.>, which motivated the construction of the double interpolating geometries in <cit.>. It would be interesting to study the thermodynamic properties of the TT deformed centaur geometry, as they have not received much attention since the original work of <cit.>. However, the energy spectrum is generically complex under these deformations. Although one can restrict the energy eigenstates to describe a unitary theory, a new perspective arises with Cauchy slice holography <cit.> where the notion of complex stress tensor plays a crucial role, which would be interesting to study explicitly for the centaur geometry, and whether the restriction on the finiteness of the radial-radial component of the matter stress tensor t^r_r can be lifted and still recover the transition in the algebra. Fourth, as we have emphasized, our result for the infalling observer does not depend on the specific interior geometry. However, the notion of equilibrium states that were employed to define the reduced density matrices with respect to the observer in the boundary theory protocol of <cit.> relies on the same original assumptions, in particular, that the equilibrium states need to minimize a notion of circuit complexity in the boundary theory, which has not been developed explicitly so far. We hope that the algebraic techniques uncovered in this work can catalyze progress on rigorously defining complexity proposals from the boundary perspective and the respective bulk realization, initiated in <cit.>. In that case, the centaur geometry could be a productive test ground for the different proposals for holographic complexity <cit.> in stretched horizon holography <cit.>, and possibly to incorporate quantum corrections in such proposals, as recently studied in <cit.>. Finally, it would be interesting to incorporate non-equilibrium states in the protocol of the infalling observer to study modification in the algebra of observables for the probe black hole. The probe will absorb particles along its worldline. Then, the crossed product algebra could remain type II_∞ instead of II_1, as traces might include non-normalizable states. Regardless of that, the evolution of the atmosphere operators in the algebra will be determined by scrambling modes of the modular Hamiltonian <cit.>. Moreover, these modes produce null shifts along the horizon of the background black hole <cit.>. This could allow for a wormhole teleportation protocol for the probe black hole, seen as a diary. 
It might be worth studying the algebraic structure of such a protocol explicitly with an SYK model dual to a near AdS_2 space, as first proposed in <cit.>. § ACKNOWLEDGMENTS We would like to thank Shadi Ali Ahmad, Dio Anninos, Damián Galante, Stefan Hollands, Ro Jefferson, Andrew Rolph, Sirui Shuai, Andrew Svesko, Eleanor Harris, and Yixu Wang for useful discussions on centaur spacetimes and von Neumann algebras, and especially Manus Visser for early collaboration. SEAG thanks the University of Amsterdam and the Delta Institute for Theoretical Physics for their hospitality and support during this project. EB also wants to thank the CERN-TH for their hospitality during the preparation of this paper. The work of SEAG is partially supported by the KU Leuven C1 grant ZKD1118 C16/16/005. The work of EB is partially supported by the Erasmus+ Traineeship programme and the INFN Iniziativa Specifica String Theory and Fundamental Interactions. RE is supported by the Dushi Zhuanxiang Fellowship and acknowledges a Shuimu Scholarship as part of the “Shuimu Tsinghua Scholar” Program.
http://arxiv.org/abs/2307.06134v1
20230712123631
Optimal control of the 2D constrained Navier-Stokes equations
[ "Sangram Satpathi" ]
math.AP
[ "math.AP", "math.OC", "35Q30" ]
OPTIMAL CONTROL OF THE 2D CONSTRAINED NAVIER-STOKES EQUATIONS] OPTIMAL CONTROL OF THE 2D CONSTRAINED NAVIER-STOKES EQUATIONS [email protected] School of Mathematics, Indian Institute of Science Education and Research Thiruvananthapuram, Maruthamala, Thiruvananthapuram, Kerala, 695551, India. Primary:35Q30 We study the 2D Navier–Stokes equations within the framework of a constraint that ensures energy conservation throughout the solution. By employing the Galerkin approximation method, we demonstrate the existence and uniqueness of a global solution for the constrained Navier–Stokes equation on the torus 𝕋^2. Moreover, we investigate the linearized system associated with the 2D-constrained Navier-Stokes equations, exploring its existence and uniqueness. Subsequently, we establish the Lipschitz continuity and Fréchet differentiability properties of the solution mapping. Finally, employing the formal Lagrange method, we prove the first-order necessary optimality conditions. [ SANGRAM SATPATHI ==================== § INTRODUCTION Incompressible Navier-Stokes equations are used to understand the dynamics of an incompressible viscous fluid. These equations were proposed by C. Navier in 1822 and were later derived by G. Stokes. By solving these equations, we can predict how the fluid's speed changes over time and in different places, based on the initial and boundary states. These equations have many practical uses, from studying aerodynamics to modeling blood flow in the body but the basic mathematical question of the existence of a unique global-in-time solution to these parabolic PDEs on a bounded domain in ℝ^3 still remains open due to the non-linear convective term. The existence of a unique global-in-time solution to the Navier-Stokes equations on ℝ^2 has been known for a long time. Ladyzhenskaya <cit.> proved an inequality to control the non-linear term in a bounded domain in ℝ^2 which was later used to prove the existence and uniqueness of the solution to Navier-Stokes equations. The study of 2D-constrained Navier-Stokes equations adds another factor to consider, such as a restriction on the energy of the solution known as L^2-energy. The reason why we study this constrained problem is that these equations are expected to provide a better approximation to the incompressible Euler equations. This is because, for the Euler equations, the energy of solutions (which are smooth enough) remains constant. The study conducted in <cit.> considered two-dimensional Navier-Stokes equations as in the Caglioti et al. <cit.>,associated with the same energy constraint as in Caffarelli et al. <cit.> and Rybka <cit.>. To be specific, they considered the Navier-Stokes equations projected on the tangent space of the manifold M, where M={ u∈ H(𝕋^2) : |u|_H^2 =1}. Here H is the space of square-integrable, divergence-free, mean zero vector fields on a torus 𝕋^2.They examined the following form d u(t)/dt+[ν A u(t)+B(u(t))]=0. The authors have shown that if the initial data belongs to the space V∩M then the solution of the above equation u(t) stays on the manifold M for all time t. In this paper, we consider the Navier-Stokes equations of the form ∂u(x,t)/∂t-νΔ u(x,t)+(u(x,t)·∇)u(x,t)+∇ p(x,t)=f(u(x,t)) ∇.u(x,t)=0. u(x,0)=u_0(x), subject to the same constraint as in <cit.><cit.><cit.><cit.>. we prove the existence of the solution only on a torus by the Galerkin approximation method. Our proof does not hold in ℝ^2. We are interested in the problem d u(t)/dt+[A u(t)+B(u(t))]=f(u(t)), t ≥ 0, u(0)=u_0. where u∈H. 
Similar to the approach in <cit.>, we project the aforementioned equation onto the tangent space of M, resulting in the following: du/dt+[Au +B(u)] =|∇ u|^2 u +f, u(0)=u_0. In <cit.>, the author focuses on investigating optimal control problems related to the non-stationary Navier-Stokes equations. He introduced the study of the solution mapping and established several useful results for the unsteady Navier-Stokes equations. In this paper, we prove analogous results for the 2D-constrained Navier-Stokes equations. We add a control term to the right-hand side of the above equation, linearize the system, and investigate the existence and uniqueness of its solution. We also analyze several significant properties of the solution mapping. These results will have a crucial role in studying the control of 2D-constrained Navier-Stokes equations. We employ the formal Lagrange method <cit.> to establish the first-order necessary optimality conditions. The optimization problem is defined as follows: min J(y,U) subject to the state equation y_t+A y+B(y)-|∇ y|^2 y=U y(0)=y_0 U∈ U_ad. Here J(y,U):=1/2∫_0^T|A^1 / 2 y(t)|_H^2 dt+1/2∫_0^T|U(t)|_V^2 dt and U_ad:={ U ∈ T_u M : |U|_V is bounded}. In this context, U represents the control variable and y represents the solution of the state equation. In Section 6, we introduce the Lagrange functional and examine its directional derivative in relation to both the control and state. Ultimately, we conclude the section by demonstrating the necessary optimality condition. § CONSTRAINED NAVIER-STOKES EQUATION §.§ General notations Let Ω be a bounded domain in ℝ^2, the whole space ℝ^2, or the torus 𝕋^2. For p ∈ [1,∞] and k ∈ℕ, we denote the Sobolev and Lebesgue spaces of ℝ^2-valued functions by W^k,p(Ω,ℝ^2) (or W^k,p) and L^p(Ω,ℝ^2) (or L^p), respectively. Additionally, we define H^k as W^k,2. Let 𝕋^2 represent the bounded periodic domain, which can be visualized as a two-dimensional torus. Now, we will introduce the following spaces: ℒ_0^2 = {u∈ L^2(𝕋^2,ℝ^2) : ∫_𝕋^2 u(x) dx = 0}, H = {u ∈ℒ_0^2 : ∇· u = 0}, V = H^1 ∩H. The scalar product and norm of H can be represented as the L^2 scalar product and L^2 norm, respectively, denoted by: ⟨ u, v ⟩_H or ⟨ u, v ⟩ and |u|_H or |u|. Moreover, the scalar product and norm of V are also referred to as the H^1 scalar product and norm, respectively. Since elements of V have zero mean, the Poincaré inequality allows us to work with the equivalent norm |u|_V=|∇ u|_H, which is the convention used in the estimates below. Let us define the Stokes operator and record some of its basic properties. We represent the Stokes operator as A: D(A) →H, where A maps from the domain D(A) to the Hilbert space H. The Stokes operator is defined as follows: Au := -Δ u. The domain D(A) of the Stokes operator is defined as the intersection of the Hilbert space H and the Sobolev space H^2(𝕋^2), denoted as: D(A) = H∩H^2(𝕋^2)=E. Since ⟨Au , u⟩=|∇ u|^2 for u∈ D(A), the Stokes operator is non-negative. The Stokes operator is also self-adjoint. §.§ Operators and their properties From now onwards we identify our domain as the two-dimensional torus 𝕋^2. We can introduce a continuous trilinear map b: L^p × W^1,q× L^r →ℝ defined as follows: b(u,v,w) = ∑_i,j=1^2 ∫_Ω u^i ∂ v^j/∂ x^i w^j dx, where p,q,r ∈ [1,∞] such that 1/p+1/q+1/r≤1. Let B:V×V→V' be the bilinear map such that ⟨ B(u,v) , ϕ⟩=b(u,v,ϕ), for u,v,ϕ∈ V. When considering u∈V, v∈ E, and w ∈H, we can establish the following inequality: |b(u,v,w)| ≤√(2)|u|_H^1/2|u|_V^1/2|v|_V^1/2|v|_E^1/2|w|_H. Hence we can uniquely extend the trilinear map b to operate on the triple V× E×H. Furthermore, the map B can be extended uniquely to a bounded operator denoted as: B: V× E →H. A small numerical illustration of the trilinear form b on explicit periodic fields is sketched below.
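To make the trilinear form concrete, the following minimal sketch — ours, not part of the original paper; the grid resolution and the particular trigonometric fields are illustrative choices — evaluates b(u,v,w) by quadrature for explicit mean-zero, divergence-free fields on 𝕋^2 ≅ [0,2π)^2 and confirms, numerically, the cancellation properties recorded next.

```python
import numpy as np

N = 128
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
dA = (2 * np.pi / N) ** 2            # quadrature weight of one grid cell

def trilinear(u, dv, w):
    """b(u,v,w) with u=(u1,u2), dv=((dx v1, dy v1), (dx v2, dy v2)), w=(w1,w2)."""
    integrand = (u[0] * dv[0][0] + u[1] * dv[0][1]) * w[0] \
              + (u[0] * dv[1][0] + u[1] * dv[1][1]) * w[1]
    return integrand.sum() * dA

# u = (sin y, 0) and w = (0, sin x): mean-zero, divergence-free fields on the torus.
u  = (np.sin(Y), np.zeros_like(X))
w  = (np.zeros_like(X), np.sin(X))
du = ((np.zeros_like(X), np.cos(Y)), (np.zeros_like(X), np.zeros_like(X)))
dw = ((np.zeros_like(X), np.zeros_like(X)), (np.cos(X), np.zeros_like(X)))

print("b(u, w, w) =", trilinear(u, dw, w))   # ~ 0, cf. b(u,w,w)=0 below
print("b(u, u, u) =", trilinear(u, du, u))   # ~ 0, cf. b(u,u,u)=0 below
print("b(w, u, u) =", trilinear(w, du, u))   # ~ 0 by the same cancellation
```

For trigonometric polynomials of this low degree the uniform-grid sum is an exact quadrature up to round-off, so the printed values vanish to machine precision, matching the identities stated next.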
The properties of the trilinear map and bilinear map are the following: b(u,u,u) =0 , u∈V. b(u,w,w) =0, u∈V,w∈ H^1. ⟨ B(u,u) , Au⟩_H =0, u∈ D(A). The proof of the above results can be found in <cit.>. Let 𝒬: V→H be defined by 𝒬 (u):=|∇ u|^2 u, u ∈V . Then there exists C>0 such that for u_1, u_2∈V, |𝒬(u_1)-𝒬(u_2)|_H≤ C|u_1-u_2|_V(|u_1|_V+|u_2|_V)^2 |𝒬(u_1)-𝒬(u_2)|_H = ||∇ u_1|^2 u_1-|∇ u_2|^2 u_2|_H =||∇ u_1|^2 u_1-|∇ u_1|^2 u_2+|∇ u_1|^2 u_2-|∇ u_2|^2 u_2|_H =||∇ u_1|^2(u_1-u_2)+(|∇ u_1|^2-|∇ u_2|^2) u_2|_H ≤|∇ u_1|^2|u_1-u_2|_H+(|∇ u_1|+|∇ u_2|)||∇ u_1|-|∇ u_2|||u_2|_H ≤ C[|∇ u_1|^2|u_1-u_2|_ V +(|∇ u_1|+|∇ u_2|)|∇(u_1-u_2)||u_2|_ V] ≤ C|u_1-u_2|_ V[|u_1|_V^2+|u_2|_V^2+|u_1|_ V|u_2|_ V] ≤ C|u_1-u_2|_ V(|u_1|_ V +|u_2|_ V)^2 . Here we have used the fact that V is continuously embedded in H. §.§ The deterministic model The 2D Navier-Stokes equations are given as follows: ∂u(x,t)/∂t-νΔ u(x,t)+(u(x,t)·∇)u(x,t)+∇ p(x,t)=f(u(x,t)). ∇· u(x,t)=0. u(x,0)=u_0(x). Here, we consider the domain 𝒪 (in our case 𝒪=𝕋^2) and the time interval [0, T] for all T > 0. The variables x ∈𝒪 and t ∈ [0, T] represent spatial coordinates and time, respectively. In this context, u: 𝒪→ℝ^2 denotes the velocity field, while p: 𝒪→ℝ represents the pressure field of the fluid. By employing the conventional approach of applying the projection map to the aforementioned problem, we obtain the following form, d u(t)/dt+[A u(t)+B(u(t))]=f(u(t)), t ≥ 0, u(0)=u_0. Let us represent the set of divergence-free ℝ^2-valued functions with unit L^2 norm as follows: M={u∈H:|u|_L^2=1}. The tangent space of it is defined as: T_uM = {v∈H : ⟨ v, u ⟩_H = 0}, u∈M. We define an orthogonal projection map π_u:H→ T_uM by π_u(v)=v-⟨ v , u⟩_H u. Several assumptions will be made about the function f: it is globally Lipschitz, has linear growth, takes values in the tangent space of the manifold M (so that ⟨ f(u), u⟩_H=0), and f(u(t))∈ L^2 (0, T; V), t∈ [0,T]. Let F(u) = Au + B(u,u)-f(u) be a function, and ℱ(u) be the projection of F(u) onto the tangent space T_uM. Then, ℱ (u) =π_u(F(u)) =F(u)-⟨ F(u), u⟩_H u =A u+B(u)-f(u)-⟨ A u+B(u)-f(u), u⟩_H u =A u-|∇ u|_H^2 u+B(u)-f(u) . Hence, by projecting the equation onto the tangent space T_uM, we derive the following constrained Navier-Stokes equations. du/dt+[Au +B(u)] =|∇ u|^2 u +f, u(0)=u_0. § EXISTENCE AND UNIQUENESS The proof of the existence of the solution of (<ref>) is based on the Galerkin approximation method. Let {e_i}_i=1^∞ be the orthonormal basis in H composed of eigenvectors of A corresponding to the eigenvalues {λ_i}_i=1^∞, where A is a positive self-adjoint operator: Ae_i=λ_i e_i. Let H_n:=span{e_1, …, e_n} be the subspace of H equipped with the norm inherited from H, and let P_n be the projection operator on H defined by P_n u = ∑_i=1^n⟨ u, e_i⟩_H e_i, u ∈H. Utilizing the notations established above, we can examine the Galerkin approximation of the constrained Navier-Stokes equations in the H_n space: d u_n/d t=-[P_nA u_n+P_n B (u_n)]+|∇ u_n|^2 u_n+P_n f(u_n). u_n(0)= P_n u_0. First, we will show that the solution stays inside the closed unit ball bounded by M, that is |u_n|_H^2≤ 1. Let u_0∈V∩M. Then |u_n|_H^2≤ 1, where u_n is the solution of (<ref>). 1/2d/dt|u_n(t)|_H^2= ⟨ -P_nAu_n(t) - P_n B(u_n(t)) + |∇ u_n|^2u_n + P_n f(u_n) , u_n⟩_H. Since ⟨ P_n B(u_n), u_n⟩_H=b(u_n,u_n,u_n)=0 and ⟨ P_n f(u_n), u_n⟩_H=⟨ f(u_n), u_n⟩_H=0, this gives 1/2 d|u_n(t)|_H^2=-|u_n(t)|_V^2 d t+|∇ u_n(t)|^2|u_n(t)|_H^2 d t ⇒ d[|u_n(t)|_H^2-1]=2|u_n(t)|_V^2[|u_n(t)|_H^2-1] d t .
Integrating both sides from 0 to t, we get, |u_n(t)|_H^2-1=[|u_n(0)|_H^2-1] exp[2 ∫_0^t|u_n(s)|_V^2 d s]. Since |u_n(0)|_H=|P_nu_0|_H≤ |u_0|_H=1 and ∫_0^t|u_n(s)|_V^2 d s<∞, we get |u_n(t)|_H^2≤ 1 ∀ t <∞ . §.§ Passage to the limit We will obtain a priori estimates independent of n for the functions u_n and then pass to the limit. By taking the inner product of Equation (<ref>) with Au_n, we obtain the following expression, ⟨d u_n/d t, A u_n⟩_H= -⟨ A u_n, A u_n⟩ _H-⟨ P_n B (u_n), A u_n⟩_H + ⟨ |∇ u_n|^2 u_n, A u_n⟩_H + ⟨ P_n f(u_n), A u_n ⟩_H. Because the Stokes operator and the projection operator P_n are self-adjoint, the function f(u_n)∈ L^2(0,T; V), and ⟨ B(u_n),Au_n⟩_H=0, we have the following: 1/2d/dt|u_n|_V^2 = -⟨ A u_n - |∇ u_n|^2 u_n, A u_n - |∇ u_n|^2 u_n⟩ - ⟨ A u_n - |∇ u_n|^2 u_n, |∇ u_n|^2 u_n⟩ + ⟨ f(u_n), u_n⟩_V = -|A u_n - |∇ u_n|^2 u_n|^2 - ⟨ A u_n - |∇ u_n|^2 u_n, |∇ u_n|^2 u_n⟩ + ⟨ f(u_n), u_n⟩_V. Since | A u_n - | ∇ u_n|^2 u_n|_H^2≥ 0, we can neglect this term in the previous equation, allowing us to express it as follows, 1/2d/dt|u_n|_V^2≤ - ⟨ Au_n-|∇ u_n|^2 u_n, |∇ u_n|^2 u_n⟩ + ⟨ f(u_n), u_n⟩_V. Now consider the term, ⟨ A u_n-|∇ u_n|^2 u_n,|∇ u_n|^2 u_n⟩ =⟨ A u_n,|∇ u_n|^2 u_n⟩-⟨|∇ u_n|^2 u_n,|∇ u_n|^2 u_n⟩ =|∇ u_n|^2 ⟨ Au_n , u_n⟩ -|∇ u_n|^4|u_n|^2 =|∇ u_n|^4(1-|u_n|^2) ≥ 0, since |u_n|^2≤ 1. Hence using this estimate we have, 1/2d/d t|u_n|_V^2≤⟨ f(u_n), u_n⟩_V. Taking the integration from 0 to t, 0<t≤ T, we have, |u_n(t)|_V^2-|u_n(0)|_V^2≤ 2∫_0^t⟨ f(u_n(s)), u_n(s)⟩_V ds. Using Young's inequality we obtain for a given ε, ∫_0^t⟨ f(u_n), u_n⟩_V ≤ε|f(u_n)|_L^2(0, t ; V)^2+1/4 ε|u_n|_L^2(0, t ; V)^2 ≤ C_1+C_2∫_0^t|u_n|_V^2 d s , since f has linear growth. Hence |u_n(t)|_V^2≤ C_1+C_2∫_0^t|u_n(s)|_V^2 d s, where C_1 also absorbs |u_n(0)|_V^2≤|u_0|_V^2. By applying Gronwall's inequality, we conclude that u_n is bounded in L^∞(0, T; V), uniformly in n. Again consider (<ref>), ⟨d u_n/d t, A u_n⟩_H= -⟨ A u_n, A u_n⟩ _H-⟨ P_n B (u_n), A u_n⟩_H + ⟨ |∇ u_n|^2 u_n, A u_n⟩_H + ⟨ P_n f(u_n), A u_n ⟩_H ⇒1/2d/dt|u_n|_V^2=-|A u_n|^2+|u_n|_V^4 +⟨ f(u_n), u_n⟩_V ⇒1/2d/dt|u_n|_V^2+|A u_n|^2= |u_n|_V^4+⟨ f(u_n), u_n⟩_V. Taking integration from 0 to T <∞ we obtain, |Au_n|_L^2(0,T;H)^2≤ C_1 +C_2∫_0^T (|u_n|_V^2+|u_n|_V^4) d s<∞. The right-hand side is finite because u_n is bounded in L^∞(0, T; V) uniformly in n. So by the above estimate, u_n is bounded in L^2(0, T; D(A)) uniformly in n. Therefore there exists a subsequence of u_n, again denoted by u_n, such that u_n converges to u_* in the weak* topology of L^∞(0,T;V) and u_n converges to u weakly in L^2(0,T;D(A)). Now we aim to demonstrate the equality of both limits, that is u=u_*. By the definitions of weak and weak* convergence, we have, ∀ v ∈ L^1(0, T ; V^'), ∫_0^T⟨ u_n-u_*, v⟩ d t → 0 as n →∞. Again ∫_0^T⟨ u_n-u, v⟩ d t → 0 ∀ v ∈ L^2(0, T ; D(A)^'). Now, since the time interval is finite, L^2(0, T ; V^') ⊂ L^1(0, T ; V^'), and therefore from (<ref>), ∫_0^T⟨ u_n-u_*, v⟩ d t → 0 ∀ v ∈L^2(0,T;V^'). Considering the inclusion D(A) ⊂ V, we have V^'⊂ (D(A))^', and consequently L^2(0, T ; V^') ⊂ L^2(0, T ;(D(A))^'). In particular, both convergences ∫_0^T⟨ u_n-u_*, v⟩ dt → 0 and ∫_0^T⟨ u_n-u, v⟩ dt → 0 hold for every v ∈ L^2(0, T ; V^'). Hence ∫_0^T⟨ u-u_*, v⟩ dt =0 for all such v, and we get u=u_*. The following result, a compactness theorem in Banach spaces, can be found on p. 183 of <cit.>. Let X_0, X, X_1 be three Banach spaces such that X_0⊂ X ⊂ X_1, where the injections are continuous, X_i is reflexive for i=0,1, and the injection X_0→ X is compact.
Let T>0 be a fixed finite number and α_0 and α_1 are two finite numbers such that α_i>0 for i=0,1. Consider the space 𝒴=𝒴(0, T ; α_0, α_1 ; X_0, X_1) 𝒴={v ∈ L^α_0(0, T ; X_0), v^'=d v/d t∈ L^α_1(0, T ; X_1)} It is obvious that 𝒴⊂ L^α_0(0, T ; X) With a continuous injection. Under the above assumptions the injection of 𝒴 into L^α_0(0, T ; X) is compact. See Theorem 2.1 <cit.>. We will use the above results to show the strong convergence. Now, considering the definitions: X_0=D(A)=H ∩ H^2(𝕋^2), X =V= H∩ H^1(𝕋^2), X_1 =H, we have the inclusion X_0⊂ X ⊂ X_1, and the compact embedding X_0↪ X_1. Let us define the set: 𝒴={v ∈ L^2(0, T ; D(A)) | v^'∈ L^2(0, T ; H)}. It follows that 𝒴↪ L^2(0,T:V) is a compact embedding. Consequently, we can conclude that u_n→ u strongly in L^2(0,T;V). Hence we are allowed to pass the limit. To pass the limit, consider the following equation: d u_n/d t=-P_n A u_n-P_n B(u_n)+|∇ u_n|^2 u_n+P_nf(u_n). Let us consider a function Ψ that is continuously differentiable and all the derivative is bounded and satisfies Ψ(T) = 0. Then, ∫_0^T⟨d u_n/d t, Ψ(t) e_j⟩_H d t =-∫_0^T⟨ P_n A u_n(t) ,Ψ(t) e_j⟩ _H d t - ∫_0^T⟨ P_n B u_n(t), Ψ(t) e_j⟩_H d t +∫_0^T⟨|∇ u_n(t)|^2 u_n(t), Ψ(t) e_j⟩_H d t +∫_0^T⟨ P_n f(u_n(t)), Ψ(t) e_j⟩_H d t. To demonstrate the convergence term by term, let us first consider the following term: ∫_0^T⟨d u_n/d t, Ψ(t) e_j⟩_H d t =-∫_0^T⟨ u_n(t), Ψ^'(t) e_j⟩_H d t -⟨ u_n(0), Ψ(0) e_j⟩_H . Hence we have, -∫_0^T⟨ u_n(t), Ψ^'(t) e_j⟩_H d t =⟨ u_n(0), Ψ(0) e_j⟩_H-∫_0^T⟨ P_n A u_n(t), Ψ(t) e_j⟩_H -∫_0^T⟨ P _n B(u_n(t)), Ψ(t) e_j⟩_H d t +∫_0^T⟨|∇ u_n(t)|^2 u_n(t), Ψ(t) e_j⟩_H d t +∫_0^T⟨ P_n f(u_n(t)), Ψ(t) e_j⟩_H d t. To show ∫_0^T⟨ u_n(t), Ψ^'(t) e_ j⟩_H dt → -∫_0^T⟨ u(t), Ψ^'(t) e_j⟩_H , let us consider following: |∫_0^T⟨ u_n(t), Ψ^'(t) e_j⟩_H-∫_0^T⟨ u_n(t), Ψ^'(t) e_j⟩_H| ≤∫_0^T|⟨ u_n(t)-u(t), Ψ^'(t) e_j⟩_H|. By utilizing the Cauchy-Schwarz inequality, the aforementioned term can be expressed as follows: ≤∫_0^T|u_n(t)-u(t)|_H| Ψ^'(t) e_j|_H dt ≤ C∫_0^T|u_n(t)-u(t)|_V|Ψ^'(t) e_j|_H dt ≤C̃|u_n(t)-u(t)|_L^2(0, T; V)→ 0 as n →∞. Again consider the term, ∫_0^T⟨ P_n B(u_n(t)), Ψ(t) e_j⟩_H dt - ∫_0^T⟨ B(u(t)), Ψ(t) e_j⟩_H dt ≤∫_0^T|⟨ P_n B(u_n(t)) - B(u(t)), Ψ(t) e_j⟩_H| dt ≤ C (∫_0^T|P _n B(u_n(t)) - B(u(t))|_H dt) ≤ C [∫_0^T |B(u_n(t)) - B(u(t))|_H dt + ∫_0^T |P_n - I| |B(u(t))|_H dt] → 0. In the above calculation, we utilized the fact that P_n is a contraction and as n→∞, P_n converges to the identity map I. Now let's consider the term below, |∫_0^T⟨|∇ u_n(t)|_H^2 u_n(t), Ψ(t) e_j⟩_H-∫_0^T⟨|∇ u(t)|_H^2 u(t) , Ψ(t) e_j⟩_H | ≤∫_0^T|∇ u_n(t)|_H^2 u_n(t)-|∇ u(t)|_H^2 u(t)|_H|Ψ(t) e_j|_H d t ≤ C ∫_0^T||∇ u_n(t)|_H^2 u_n(t)-|∇ u_n(t)|_H^2 u(t)|_H d t ≤C̃∫_0^T|u_n-u|_V [|u_n|_V+| u |_V]^2. Since u_n→ u in L^2(0, T; V) so |u_n|_V,|u|_V<∞. Hence, the right-hand side of the above estimation tends toward zero. Now, let's consider the next term: | ∫_0^T⟨ P_n f(u_n), Ψ(t) e_j⟩_H d t -∫_0^T⟨ f(u) , Ψ(t) e_j⟩_H d t | ≤∫_0^T|⟨ P_n f(u_n)-f(u), Ψ(t) e_j⟩_H| dt ≤∫_0^T|P_n f(u_n)-f(u)|_H|Ψ(t) e_j|_H d t ≤ C ∫_0^T|f(u_n)-f(u)|_H d t+C ∫_0^T|P_n f(u)-f(u)|_H d t ≤C̃∫_0^T|u_n-u|_V^2 d t+C̃∫_0^T|P_n-I||f(u)|_H dt. Based on the previous arguments, we can show the right-hand side goes to zero of the above inequality. However, we still need to show that the Au_n term converges. 
∫_0^T⟨ A u_n-A u, Ψ(t)e_j⟩_H d t= ∫_0^T⟨(u-u_n⟩, Ψ(t)e_j)_H d t ≤∫_0^T⟨∇(u_n-u), ∇Ψ(t)e_j⟩_H d t ≤ C∫_0^T | u_n-u|_H| ∇Ψ(t)e_j |_H d t ≤ C |u_n-u|_L^2(0,T;V) Since u_n→ u in L^2(0, T; V) hence we have the right-hand side of the above inequalty goes to zero. Therefore we have can pass the limit to the following equation, -∫_0^T⟨ u(t), Ψ^'(t) e_j⟩_H d t = ⟨ u(0), Ψ(0) e _j⟩-∫_0^T⟨ A u(t), Ψ(t) e_j⟩ dt -∫_0^T⟨ B u(t), Ψ(t) e_j⟩_H d t +∫_0^T⟨| ∇ u(t) |_H^2 u, Ψ(t) e_j⟩_H d t +∫_0^T⟨ f(u), Ψ(t) e_j⟩_H d t holds for all e_j. So it will hold for all v= finite linear combinations of e_j while passing the limit it is valid for all v ∈ H. Finally, we need to show u holds the equation, d u/d t=-A u-B(u) +|∇ u|^2 u+f(u) . u(0)=u_0. Multiply by Ψ and continue by similar and then comparing we have u satisfies the above equation. Now for the uniqueness part consider the following, Let u_1 and u_2 are the solution of, d u_1/dt=-A u_1-B(u_1)+|∇ u_1|^2 u_1+f(u_1). u_1(0)=u_10. d u_2/dt=-A u_2-B(u_2)+|∇ u_2|^2 u_2+f(u_2) . u_2(0)=u_20. d u_1/d t-d u_2/d t=-A(u_1-u_2)-B(u_1)+B(u_2) +|∇ u_1|^2 u_1-|∇ u_2|^2 u_2 +f(u_1)-f(u_2) . u_1(0)-u_2(0)=u_10 -u_20. u^'=-A u-B(u_1)+B(u_2)+|∇ u_1|^2 u_1-|∇ u_2|^2 u_2 +f(u_1)-f(u_2). u(0)=u_10 -u_20. [Taking u=u_1-u_2.] Taking inner product with u in both sides we have, ⟨ u^', u⟩_H= -<A u, u>_H-b( u, u_2, u) +<|∇ u_1|^2 u_1-|∇ u_2|^2 u_2, u>_H +<f(u_1)-f(u_2), u>_H ⇒1/2d/d t|u|_H^2 =-|∇ u|_H^2-b(u, u_2, u) +⟨|∇ u_1|^2 u_1-|∇ u_2|^2 u_2, u⟩_H +⟨ f(a_1)-f(u_2), u⟩_H. Consider, ⟨|∇ u_1|^2 u_1-|∇ u_2|^2 u_2 u⟩_H≤|[|∇ u_1|^2 u_1-|∇ u_2|^2 u_2]|_H|u|_H ≤ C|u_1-u_2|_V[|u_1|_V+|u_2|_V]^2|u|_H =C|u|_V[|u_1|_V+|u_2|_V]^2|u|_H ≤ C ε|u|_V^2+C/4 ε|u|_H^2[|u_1|_V+|u_2|_V]^4. Again we have, |⟨ f (u_1)-f(u_2), u_1-u_2⟩_H| ≤ K|u|_H^2. [Since f is Lipschitz.] & |b(u, u_2, u)| ≤√(2)|u|_H^1 / 2|u|_V^1 / 2|u_2|_V^1 / 2|u_2|_E^1 / 2|u|_H ≤√(2) C_1|u|_H|u|_V|u_2|_V^1 / 2|u_2|_E^1 / 2 =√(2) C_1ε|u|_V^2+√(2) C_1/4 ε|u|_H^2|u_2|_V|u_2|_E. Writing altogether we have, 1/2d/dt|u(t)|_H^2≤-|u|_V^2+√(2) C_1ε|u|_V^2+C ε|u|_V^2+C/4 ε|u|_H^2[|u_1|_V+|u_2|_V]^4 +K|u|_H^2 +√(2)C_1/4 ε|u|_H^2|u_2|_V |u_2|_E. Take C_2=max{√(2) C_1, C, K} 1/2d/dt|u(t)|_H^2≤ -|u|_V^2+ C_2ε|u|_V^2+C_2 ε|u|_V^2+C_2/4 ε|u|_H^2[|u_1|_V+|u_2|_V]^4 +C_2|u|_H^2 +C_2/4 ε|u|_H^2|u_2|_V |u_2|_E. Choose ε such that (2 C_2 ε-1)<0, So ε<1/2C_2. Therefore, d/d t|u|_H^2≤C |u|_H^2. Where C=2[ C_2/4 ε[|u_1|_V+|u_2|_V]^4 +C_2/4 ε|u_2|_V |u_2|_E + C_2 ]. So, d/d t|u(t)|_H^2≤ C|u(t)|_H^2d/d t{exp(-∫_0^tC d s)|u(t)|_H^2}≤ 0 ⇒ |u(t)|_H^2≤ 0 ⇒ |u(t) |_H=0 ⇒ u_1(t)=u_2(t) ∀ t ∈[0, T]. Hence the solution is unique. § LINEARIZED EQUATIONS We will need some of the results about the linearized equations. Let u be a solution of, u_t+A u+B(u)-|∇ u|_H^2 u=U. u(0)=u_0. Let u̅ be the solution of A u̅+B(u̅)-|∇u̅|^2u̅=0. Now let ω=u-u̅ or u=ω+u̅. So putting the value of u in the first equation we have, (ω+u̅)_t+A(ω+u̅)+B(u̅+ω)-.| ∇(u̅+ω)|^2(u̅+ω)=U. Now for equilibrium point u̅_t=0. So, ω_t+A ω+A u̅+B(u̅+ω)-|∇(u̅+ω) | ^2(u̅+ω) =U. Here , B(u̅+ω)=(u̅+ω) ·∇)(u̅+ω) =(u̅·∇)(u̅+ω)+(ω̅·∇)(u̅+ω) =(u̅·∇) u̅+(u ·∇) ω+(ω·∇) u̅+(ω+∇) ω. Since we are linearizing so we can ignore the nonlinear term. Hence, B(u̅+ω) =B(u̅)+(u̅·∇) ω+(ω·∇) u̅ =B(u̅)+B^'(u̅) ω. Now from (<ref>) we have , ω_t+A ω+A u̅+B(u̅)+B^'(u̅) ω -|∇u̅|^2(u̅+ω) -|∇ω|^2(u̅+ω) -2⟨∇u̅, ∇ω⟩(u̅+ω) =U. Since A u̅+B(u̅)-|∇u̅|^2u̅=0, and ignoring the nonlinear terms we have, ω_t+A ω+B^'(u̅) ω -|∇u̅|^2ω -2⟨∇u̅, ∇ω⟩u̅ =U. Let us define a map, Φ_T : X_T⟶ L^2(0, T ; H) by Φ_ T(ω)(x, t)=G(ω)(x, t). 
Where X_T=C([0,T],V )∩ L^2(0, T; E). Then Φ_T is globally lipschitz. To prove it let us consider, ω_1,ω_2 ∈ X_T and then |Φ_T(ω_1)-Φ_T(ω_2)|_L^2(0;T; H) =|G(ω_1)-G(ω_2)|_L^2(0, T ; H) =.|U-B^'(u̅) ω_1+| ∇u̅|_H^2ω_1+2⟨∇u̅, ∇ω_1⟩u̅ -U+B^'(u̅) ω_2 -|∇u̅|_H^2ω_2 -2⟨∇u̅, ∇ω_2⟩u̅|_L^2(0, T; H) =| B^'(u̅) ω_2-B^'(u̅) ω_1 + 2⟨∇u̅, ∇ω_1-∇ω_2⟩u̅ +|∇u̅|^2 (ω_1-ω_2)|_L^2(0, T; H) ≤[∫_0^T| | ∇u̅ |_H^2[ω_1-ω_2]|_H ^2 d t]^1 / 2 +[∫_0^T| B^'(u̅) ω_2-B^'(u̅) ω_1|_H^2]^1/2 +[∫_0^T|2⟨∇u̅, ∇ω_1-∇ω_2⟩u̅|_H^2 d t]^1 / 2. Let us denote these 3 terms by A_1,A_2,A_3 respectively. So, A_1^2 =[∫_0^T| | ∇u̅ |_H^2[ω_1-ω_2]|_H ^2 d t] ≤∫_0^T|∇u̅|_H^4|ω_1-ω_2|_H^2 d t =|∇u̅|^4∫_0^T|ω_1-ω_2|_H^2 d t. ≤ C_1|∇u̅|_H^4|ω_1-ω_2|_X_T^2 A_1 ≤ C_1|∇u̅|_H^2|ω_1-ω_2|_X_T. Consider, A_2^2 =∫_0^T|B^'(u̅) ω_1-B^'(u̅) ω_2|_H^2 d t =∫_0^T|(u̅·∇) ω_1+(ω_1·∇) u̅-(u̅·∇) ω_1 -.(ω_2·∇) u̅|_H ^2 d t =∫_0^T|(u̅·∇)(ω_1-ω_2)+((ω_1-ω_2) ·∇) u̅|_H^2 dt ≤∫_0^T|(u̅·∇)(ω_1-ω_2)|_H^2 d t+∫_0^T|(ω_1-ω_2) ·∇u̅|_H^2 d t A_2≤ C_2 |u̅|_E |ω_1-ω_2|_X_T. Again, A_3^2= ∫_0^T|2⟨∇u̅, ∇ω_1-∇ω_2⟩_Hu̅|_H^2 d t ≤ 4 ∫_0^T|u̅|_H^2|∇ u|_H^2|∇(ω_1-ω_2)|_H^2 d t ≤ 4|u̅|_H^2|∇u̅|_H^2∫_0^T|∇(ω_1-ω_2)|_H^2 d t ≤ 4|u̅|_H^2|∇u̅|_H^2|ω_1-ω_2|_X_T^2 A_3 ≤ 2 C_3|u̅|_H|∇u̅|_H|ω_1-ω_2|_ X_T. Hence, |Φ_T(ω_1)-Φ_T(ω_2)|_L^2(0;T; H)≤ K |ω_1-ω_2|_X_T. Where K=[ 2 C_3|u̅|_H+C_2 |u̅|_E+C_1|∇u̅|_H^2 ] < ∞. Therefore Φ_T is Globally Lipschitz. Hence Theorem 1.9.1 of <cit.> says that the Linearized system has a unique global solution. § THE CONTROL-TO-STATE MAPPING Now, we will take one step further towards achieving optimal control of the state equations. Our focus will be on studying control-to-state mapping, which involves mapping the right-hand side of the equations to their corresponding solutions. (Solution mapping) Let U∈ L^2(0, T; V) denote the control. Consider the system (<ref>). The mapping from the control variable U to the corresponding weak solution y, where y is the solution of equation (<ref>) with the control right-hand side and a fixed initial value y_0, is denoted by S. In other words, we represent this mapping as y = S(U). Note: We will use C to represent the constant, and we often use the same symbol to represent other constants. §.§ Continuity and Differentiability The control-to-state mapping is Lipschitz continuous from L^2(0, T; V) to L^2(0, T; D(A))∩ L^∞(0, T; V). Let y_1, y_2 be two solutions of (<ref>) with the same initial value y_0 and associated with the control functions U_1, U_2, y_i = S(U_i). Denote by y and u the difference between solutions and control, i.e. y=y_1-y_2 and U=U_1-U_2. We subtract the corresponding operator equations and take the inner product with Ay and we have the following, 1/2d/dt|y(t)|_V^2 = -|A y(t)|^2 + ⟨ B(y_2(t))-B(y_1(t)), A y(t)⟩ +⟨|∇ y_1(t)|^2 y_1(t)-|∇ y_2(t)|^2 y_2(t), A y(t)⟩ + ⟨ U(t), A y(t)⟩ Consider the following term, B(y_2)-B(y_1) =-[B(y)-B(y_2)] =-[B(y_1)+B^'(y_2) y] Hence ⟨ B(y_2)-B(y_1), A y⟩= -⟨ B(y)+B^'(y_2) y, A y⟩ =[0+⟨ B^'(y_2) y, A y⟩] =-[ b(y_2,y,Ay)+b(y, y_2, A y) ]. and since ||∇ y_1|^2 y_1 - |∇ y_2|^2 y_2| ≤ C| y|_V, so we have |⟨|∇ y_1|^2 y_1-|∇ y_2|y_2, Ay⟩| ≤C|Ay|^2 . Again, using the previous results of the trilinear map b, for any u∈V, v∈ E, and ϕ∈H, we have the following inequality: |b(u,v,ϕ)| ≤√(2)|u|_H^1/2|u|_V^1/2|v|_V^1/2|v|_E^1/2|ϕ|_H. So, we have, |b(y_2, y, A y)| ≤ C |y|_E^2. Similarly, |b(y, y_2, A y)| ≤ C |y|_E^2. Now by Young's inequality for a given ε we have ⟨ U, y⟩≤ Cε |y|_E^2 + C/4ε |U|_V^2. 
We will choose ε in such a way that -1 will dominate all other coefficients of |Ay|^2, that is, 1/2d/d t|y(t)|_V^2+k|Ay|^2≤ C|U|_V^2. Here k>0. Therefore by taking the integration from 0 to T, we can say |y|_L^∞(0 , T ; V)^2 ≤ C|U|_L^2(0 , T ; V)^2 . Again k|A y|^2≤ C |U|_V^2. So by taking the integration from 0 to T we have |y|_L^2(0, T; D(A))^2≤ C |U|_L^2(0, T; V)^2. Hence the solution mapping S(U)=y is lipschitz continuous from L^2(0, T ; V) to L^2(0, T ; D(A)) ∩ L^∞(0, T ; V). Now, we will demonstrate the Fréchet differentiability of the solution mapping. The control-to-state mapping exhibits Fréchet differentiability, acting as a mapping from L^2(0, T ; V) to L^2(0, T ; D(A)) ∩ L^∞(0, T ; V). The derivative at U̅∈ L^2(0, T ; V) in the direction h ∈ L^2(0, T ; V) is expressed as S'(U̅)h = y, where y represents the weak solution of y_t+A y+B^'(y̅ ) y -|∇y̅ |^2y -2⟨∇y̅ , ∇ y⟩y̅ =h. y(0) =0. with S(U̅ )=y̅. Define y=S(U̅ +h). Hence, y̅_t+A y̅+B(y̅)-|∇y̅|^2 y̅=U̅ , y_t+A y+B(y)-|∇ y|^2 y=U̅ +h. Let y-y̅=d, or y=d+y̅. Put the value of d in (<ref>) we obtain d_t+y̅_t+A d+A y̅+B(d+y̅)-|∇(d+y̅)|^2(d+y̅) =U̅ +h. From the term |∇(d+y̅)|^2(d+y̅) we have |∇(d+y̅)|^2(d+y̅) = ⟨∇ d+∇y̅, ∇ d+∇y̅⟩(d+y̅) = |∇ d|^2 d+2⟨∇ d, ∇y̅⟩ d+2⟨∇ d, ∇y̅⟩y̅. +|∇y̅|^2 d+|∇y̅|^2y̅ +|∇ d|^2y̅. Since B(d+y̅)= B(d)+B^'(y̅) d+B(y̅), the following expression can be written: d_t+A d+ B^'(y̅) d-|∇ d|^2 d+B(d) +y̅_t+A y̅ -|∇y̅|^2y̅ +B(y̅)=h+2⟨∇ d, ∇y̅⟩ d +|∇y̅|^2 d+2⟨∇ d, ∇y̅) y̅ +|∇ d|^2y̅+U̅ Since S(U̅ )=y̅, then we have, d_t+A d+B^'(y̅) d-|∇y̅|^2 d -2⟨∇y̅, ∇ d⟩y̅=h-B(d)+|∇ d|^2 d + 2⟨∇y̅, ∇ d⟩d̅ +|∇ d |^2y̅. We split d into d=z+r, where z and r are the weak solutions of the following systems respectively z_t+A z+B^'(y̅) z-|∇y̅|^2 z-2⟨∇y̅, ∇ z⟩y̅=h , z(0)=0. r_t+A r+B^'(y̅) r-|∇y̅|^2 r -2⟨∇y̅ , ∇ r⟩y̅= -B(d)+|∇ d|^2 d +2⟨∇ d, ∇y̅⟩ d+|∇ d|^2y̅ r(0)=0. Let X=L^2(0, T ; D(A)) ∩ L^∞(0, T ; V). To finalize the proof, it is sufficient to show the following: |y-y̅-z|_X/|h|_L^2(0, T; V)→ 0 as |h|_L^2(0, T; V)→ 0. Then, the function z will serve as the Fréchet derivative of S at U̅ in the direction of h, denoted as z = S'(U̅ )h. Consider |y-y̅-z|_X=|r|_X. To estimate this norm we first take r_t+A r+B^'(y̅) r-|∇y̅|^2 r -2⟨∇y̅ , ∇ r⟩y̅= -B(d)+|∇ d|^2 d +2⟨∇ d, ∇y̅⟩ d+|∇ d|^2y̅ Let us take the inner product with Ar and then ⟨ r_1, A r⟩=-|A r|^2-⟨ B^'(y̅) r, A r⟩+ |∇y̅|^2⟨ r, A r⟩ +2⟨∇y̅, ∇ r⟩⟨y̅, A r⟩ +|∇ d|^2⟨ d, A r⟩ -⟨ B(d), A r⟩ +2⟨∇ d, ∇y̅⟩⟨ d, A r⟩ +|∇ d|^2⟨y̅, A r⟩. Since B^'(y̅) r=B(y̅, r)+B(r, y̅). So ⟨ B^'(y̅) r, A r⟩=b(y̅, r, A r)+b(r, y̅, A r⟩ and |⟨ B^'(y̅) r, A r⟩| ≤√(2)|y̅|_H^1/2|y̅|_V^1/2|r|_V^1/2|r|_E^1/2|A r|+√(2)|r|_H^1/2|r|_V^1/2|y̅|_V^1/2|y̅|_E^1/2|A r|. Since y̅=S(U̅ ) and therefore |y̅|_H^2≤ 1. By similar argument |y|_H^2≤ 1. So |d|_H^2≤ 2. As y̅∈ L^∞(0, T ; V) ∩ L^2(0, T; D(A)) ⇒|y̅|_D(A)<∞. Hence |⟨ B^'(y̅) r, A r⟩| ≤ C|A r|^2. Again |.|∇y̅|^2⟨ r, A r⟩|≤ C| A r|^2 (By Cauchy Schwartz inequality). Moreover, |2⟨∇y̅, ∇ r⟩⟨y̅, A r⟩| ≤ C|A r|^2 and .|| ∇ d|^2⟨ d, r⟩|≤ C| ∇ d|^2|r| ≤ C ε|d|_E^4+C/4 ε|A r|^2. We have used the Youngs inequality for a given ε. Again by similar arguments, we have, |⟨ B(d), A r⟩| ≤ C|d|_E^2|A r| ≤ C ε|d|_E^4+C/4 ε|A r|^2. |2⟨∇ d, ∇y̅⟩⟨ d, A r⟩|≤ Cε |d|_E^4+C/4 ε|A r|^2. .|| ∇ d|^2⟨y̅, A r⟩|≤ C ε| d|^4_E+C/4 ε|A r|^2. We select ε in a manner that ensures the coefficients of |Ar|^2 remain negative on the right-hand side. So, 1/2d/d t|r|_V^2≤ C|d|_E^4 and |A r|^2≤ C |d|_E^4. Performing the integration from 0 to T yields the following result: |r|_X^2≤ C|d|_X^4 or |r|_X≤ C|d|_X^2. 
By Lipschitz continuity of the solution mapping we get |d|_X^2=|y-y̅|_X^2=|S(U̅ +h)-S(U̅ )|_X^2≤ |h|_L^2(0, T; V)^2. Thus (<ref>) fulfilled and so S is Fréchet differentiable and S'(U̅ )h=z. To establish the first-order optimality conditions, it is necessary to have the adjoint operator of S'(u), which is represented as S'(u)^*. The investigation of this adjoint mapping was conducted by Hinze <cit.> and Hinze and Kunisch <cit.>. The study on this adjoint map has also been carried out and documented in <cit.>. Let U̅∈ L^2(0, T; V). Then S'(U̅)^* is a continuous linear map from X^* to L^2(0, T; V) . Then for g∈ X^*, λ=S'(U̅)^*g iff (w_t + Aw+B'(y̅)w-|∇ (y̅)|^2w-2⟨∇ w,∇y̅⟩, λ)_L^2(0, T;V'),L^2(0, T; V)=(g,w)_X^*, X. ∀ w∈ X. Consider the linearized equation y_t+A y+B^'(y̅ ) y -|∇y̅ |^2y -2⟨∇y̅ , ∇ y⟩y̅ =h. Here y̅=S(U̅). Let us define the operator T:X→ L^2(0, T; V') by Ty=y_t+A y+B^'(y̅ ) y -|∇y̅ |^2y -2⟨∇y̅ , ∇ y⟩y̅. Hence, the linearized equation can be expressed in the following manner: Ty=h. T is clearly a linear map and T^-1=S'(U̅), so T^-1 is linear and continuous. The map T^* is a linear map from L^2(0, T; V) to X^* and its action defined by (T^* v, y)_X^*,X= (y_t+A y+B^'(y̅ ) y -|∇y̅ |^2y -2⟨∇y̅ , ∇ y⟩y̅, v)_L^2(0, T;V'),L^2(0, T; V) for v∈ L^2(0, T; V). (T^-1)^* is a linear map from X^* to L^2(0, T; V) and (T^-1)^*=S'(U̅)^*. Then for g∈ X^* there exists λ∈ L^2(0, T; V) such that (T^-1)^* g=λ=S'(U̅)^*g, or g=T^* λ. since (T^-1)^*=(T^*)^-1. Then, (T^* λ, w)_X^*,X = (w_t + Aw+B'(y̅)w-|∇ (y̅)|^2w-2⟨∇ w,∇y̅⟩, λ)_L^2(0, T;V'),L^2(0, T; V) =(g,w)_X^*, X. § THE OPTIMAL CONTROL PROBLEM For the purpose of proving the existence of optimal controls, we can take the cost functional of the form, J(y,U):=1/2∫_0^T|A^1 / 2 y(t)|_H^2 dt+1/2∫_0^T|U(t)|_V^2 dt. We define the set of admissible controls U_ad by U_ad:={ U ∈ T_u M : |U|_V is bounded} . The optimization problem is min J(y,U) subject to the state equation y_t+A y+B(y)-|∇ y|^2 y=U y(0)=y_0 U∈ U_ad §.§.§ Existence of solutions The optimal control problem admits a globally optimal solution U ∈ U_ad with an associated state y ∈ L^2(0, T; E) ∩ L^∞(0, T; V). Proof: Let y be the solution of the following system, y_t+A y+B(y)-|∇ y|^2 y=U , y(0)=u_0. The space U_ad:={ U ∈ T_u M: |U|_V is bounded}. First, we note that for each U ∈ L^2(0, T ; V), we get a unique solution y ∈ L^∞(0, T ; V) ∩ L^2(0, T ; D(A)) such that J(U) < ∞. For each such admissible pair, M_t(y, U, v)=0 ∀ v ∈ C_c^∞[0, T] where all the derivatives of v are bounded. Where M_t(y, U, v)=⟨ y(t), v⟩ + ∫_0^t⟨ A y(r)+B(y(r))_-. .|∇ y(r)|^2 y(r)-U, v⟩ dt -⟨ y_0, v⟩. Clearly, 0 ≤ J(U) for each admissible pair (y, U). Hence, there exists an infimum of J over all admissible controls and states, 0 ≤J̅:=inf_U ∈ U_ad J(U) < ∞. Moreover, there is a sequence (y_n, U_n) of admissible pairs such that J(y_n, U_n) ⟶J̅ as n →∞. The set {U_n} is bounded in U_ad, so y_n is bounded in L^∞(0, T ; V) ∩ L^2(0, T ; D(A)). Therefore, we can extract a subsequence (y'_n, U'_n) converging weakly to some limit (y, U). Since the space U_ad is closed and convex, U ∈ U_ad. We have term-by-term convergence, so M_t(y, U, v) = 0. Hence, (y, U) is admissible. Note that the functional F(y, U):=1/2∫_0^T|A^1 / 2 y(t)|_H^2 dt+1/2∫_0^T|U(t)|_V^2 dt is convex, continuous, and hence weakly sequentially lower semicontinuous. So we have F(y, U) ≤lim_n→∞inf F(y_n,U_n). Thus we have J(y,U) ≤J̅. Since (y, U) is admissible and J̅ is the infimum over all admissible pairs, it follows that J̅ = J(y, U). Hence the claim is proved. 
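The constraint |u|_H=1 built into the admissible states above is preserved exactly by the state equation; this is precisely the invariance established for the Galerkin system in the existence proof. Purely as an illustration, and not as part of the analysis, the following sketch integrates a drastically truncated analogue of that Galerkin system: a diagonal matrix stands in for the Stokes operator, the convection term B and the forcing f are omitted, and the dimension, step size and integrator are arbitrary choices.

```python
import numpy as np

# Toy analogue of the constrained state equation
#     du/dt = -A u + |A^{1/2} u|^2 u
# (convection B and forcing f omitted), with A a positive diagonal matrix
# standing in for the Stokes operator in a truncated eigenbasis.  The right-hand
# side is the projection of -A u onto the tangent space of the sphere
# M = {|u| = 1}, so the norm |u| is conserved by the flow.

lam = np.arange(1.0, 9.0)                  # eigenvalues of the toy "Stokes operator"

def rhs(u):
    Au = lam * u
    return -Au + np.dot(Au, u) * u         # -A u + <A u, u> u,  <A u, u> = |A^{1/2} u|^2

rng = np.random.default_rng(0)
u = rng.standard_normal(lam.size)
u /= np.linalg.norm(u)                     # initial datum on the sphere M

dt = 1.0e-3
for _ in range(5000):                      # explicit midpoint (RK2) time stepping
    u = u + dt * rhs(u + 0.5 * dt * rhs(u))

print("deviation of |u(t)| from 1:", abs(np.linalg.norm(u) - 1.0))
```

Up to the time-discretization error of the integrator, the printed deviation remains negligible, mirroring the identity d[|u(t)|_H^2-1]=2|u(t)|_V^2[|u(t)|_H^2-1]dt used above.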
§.§ Lagrange functional
We define the Lagrange functional ℒ: X×L^2(0, T; V)×L^2(0, T; V)→ℝ for the optimal control problem by
ℒ(y, U, λ)=J(y,U)-(y_t + Ay+ B(y)-|∇ y|^2y-U, λ)_L^2(0, T;V'),L^2(0, T; V).
The first-order derivatives of ℒ with respect to y and U in the directions w∈ X and h∈ L^2(0, T; V) are denoted by ℒ_y(y,U,λ)w and ℒ_U(y,U,λ)h respectively, and are given by
ℒ_y(y,U,λ)w =-(w_t + Aw+B'(y)w-|∇ y|^2w-2⟨∇ w,∇ y⟩ y, λ)_L^2(0, T;V'),L^2(0, T; V) + ⟨ y, w⟩_L^2(0, T; V) ,
ℒ_U(y,U,λ)h = ⟨ U, h⟩_L^2(0, T; V) + (h,λ)_L^2(0, T;V),L^2(0, T; V').
§.§ First order necessary optimality conditions
First-order necessary optimality conditions are treated in many places in the literature; see <cit.> and <cit.> for details. They can be obtained by applying the formal Lagrange method; for a detailed account of this method, refer to Section 2.10 of <cit.>. We now state and prove the first-order optimality condition.
(Necessary condition). Let U̅ be locally optimal in L^2(0, T; V) with associated state y̅=S(U̅). Then there exists λ∈ L^2(0, T; V) such that
ℒ_y(y̅,U̅,λ)w =0 ∀ w∈ X,
ℒ_U(y̅,U̅,λ)(U-U̅) ≥ 0 ∀ U∈ U_ad.
We take λ=S'(U̅)^*y̅. Then
ℒ_y(y̅,U̅,λ)w =-(w_t + Aw+B'(y̅)w-|∇y̅|^2w-2⟨∇ w,∇y̅⟩y̅, λ)_L^2(0, T;V'),L^2(0, T; V) + ⟨y̅, w⟩_L^2(0, T; V).
Using the construction of λ and Lemma <ref>, we have ℒ_y(y̅,U̅,λ)w=0 for all w∈ X. Using Theorem (2.22) of <cit.> and the same construction of λ we obtain
ℒ_U(y̅,U̅,λ)(U-U̅)=⟨U̅, U-U̅⟩_L^2(0, T; V) + (U-U̅,λ)_L^2(0, T;V),L^2(0, T; V')≥ 0.
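For orientation, the optimality system above can be exercised on a finite-dimensional toy problem. The sketch below is an assumption-laden stand-in, not the problem analysed above: the nonlinear terms B and |∇y|^2y are dropped, Euclidean norms replace the V-norms, and U_ad is taken to be the whole space, so the variational inequality reduces to a vanishing reduced gradient. It runs a plain gradient descent in which the reduced gradient is assembled from a discrete adjoint equation, in the spirit of the formal Lagrange method used above, and checks one component against a finite difference.

```python
import numpy as np

# Finite-dimensional stand-in for the reduced problem  min_U J(S(U), U)  with
#     J(y, U) = 1/2 int |y|^2 dt + 1/2 int |U|^2 dt,   dy/dt = -A y + U,  y(0) = y0.
# State equation discretised by implicit Euler; reduced gradient via the discrete adjoint.

n, N, dt = 3, 40, 0.05
A = np.diag([1.0, 2.0, 3.0])
M = np.linalg.inv(np.eye(n) + dt * A)       # one implicit Euler step: y_k = M (y_{k-1} + dt U_k)
y0 = np.array([1.0, -1.0, 0.5])

def solve_state(U):
    y = np.zeros((N + 1, n)); y[0] = y0
    for k in range(1, N + 1):
        y[k] = M @ (y[k - 1] + dt * U[k - 1])
    return y

def cost(U):
    y = solve_state(U)
    return 0.5 * dt * (np.sum(y[1:] ** 2) + np.sum(U ** 2))

def gradient(U):                            # reduced gradient assembled from the discrete adjoint
    y = solve_state(U)
    lam = np.zeros((N + 2, n))              # lam[N+1] = 0 (terminal condition)
    for k in range(N, 0, -1):
        lam[k] = M.T @ lam[k + 1] - dt * y[k]
    g = np.zeros_like(U)
    for k in range(1, N + 1):
        g[k - 1] = dt * U[k - 1] - dt * (M.T @ lam[k])
    return g

# consistency check of the adjoint gradient against a central finite difference
U_test = 0.1 * np.ones((N, n))
e = np.zeros((N, n)); e[5, 1] = 1.0; eps = 1e-6
fd = (cost(U_test + eps * e) - cost(U_test - eps * e)) / (2 * eps)
print("adjoint gradient entry:", gradient(U_test)[5, 1], " finite difference:", fd)

# plain gradient descent (U_ad = whole space, fixed step size)
U = np.zeros((N, n))
print("initial cost:", cost(U))
for _ in range(200):
    U -= 2.0 * gradient(U)
print("final cost:  ", cost(U))
```

In a serious implementation the adjoint equation would of course be the infinite-dimensional one characterised by Lemma <ref>, and the step size and stopping rule would be chosen adaptively; here they are fixed ad hoc.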
http://arxiv.org/abs/2307.05043v1
20230711065049
Epistemic Syllogistic: First Steps
[ "Yipu Li", "Yanjing Wang" ]
cs.AI
[ "cs.AI", "cs.LO", "cs.MA" ]
Stationary striations in plasma, created by a short microwave pulse in a waveguide filled with a neutral gas Ya.E. Krasik August 12, 2023 ============================================================================================================= Aristotle's discussions on modal syllogistic have often been viewed as error-prone and have garnered significant attention in the literature due to historical and philosophical interests. However, from a contemporary standpoint, they also introduced natural fragments of first-order modal logic, warranting a comprehensive technical analysis. In this paper, drawing inspiration from the natural logic program, we propose and examine several variants of modal syllogistic within the epistemic context, thereby coining the term Epistemic Syllogistic. Specifically, we concentrate on the de re interpretation of epistemic syllogisms containing non-trivial yet natural expressions such as “all things known to be A are also known to be not B.” We explore the epistemic apodeictic syllogistic and its extensions, which accommodate more complex terms. Our main contributions include several axiomatizations of these logics, with completeness proofs that may be of independent interest. § INTRODUCTION Although modal logic is regarded as a relatively young field, its origins can be traced back to Aristotle, who explored syllogistic reasoning patterns that incorporated modalities. However, in contrast to his utterly successful assertoric syllogistic, Aristotle's examination of modal syllogisms is often viewed as error-prone and controversial, thus receiving less attention from logicians. In the literature, a large body of research on Aristotle's modal syllogistic primarily centers on the possibility of a coherent interpretation of his proposed modal systems grounded by his philosophy on necessity and contingency (see, e.g., <cit.>). We adopt a more liberal view on Aristotle's modal syllogistic, considering it as a source of inspiration for formalizing natural reasoning patterns involving modalities, rather than scrutinizing the coherence of the original systems. Our approach is encouraged by the fruitful research program of natural logic, which explores “light” logic systems that admit intuitive reasoning patterns in natural languages while balancing expressivity and computational complexity <cit.>. In particular, various extensions of the assertoric syllogistic have been proposed and studied <cit.>. In this paper, we propose a systematic study on epistemic syllogistic to initiate our technical investigations of (extensions of) modal syllogistic. The choice for the epistemic modality is intentional for its ubiquitous use in natural languages. Consider the following syllogism: All C are B Some C is known to be A Some B is known to be A Taking the intuitive de re reading, the second premise and the conclusion above can be formalized as ∃ x (Cx Ax) and ∃ x (Bx Ax) respectively in first-order modal logic (FOML).[The de dicto reading of the second premise would be (∃ x (Cx Ax)), which we do not discuss here.] It then becomes apparent that this syllogism is valid under the standard semantics of FOML. One objective of our investigation into epistemic syllogistic is to explore various natural fragments of FOML following the general structure of syllogisms. Aristotle's original apodeictic syllogistic only allows a single occurrence of a necessity modality at a particular position in each sentence of assertoric syllogistic. 
However, from a modern perspective, we can greatly extend it and express interesting epistemic statements involving multiple agents and nested knowledge, such as “Everything known to be A by i is also known to be A by j”. Moreover, it is also interesting to allow nested knowledge such as “Something i knows that j knows to be A is also known to be B by i”. The general idea is to extend the language of terms but keep the pattern of “Some t is g” and “All t are g”, as proposed in <cit.>. In this paper, we begin by presenting preliminaries about assertoric syllogisms in Section <ref>. We then proceed to examine the epistemic version of Aristotle's apodeictic syllogistic in Section <ref> and provide a complete axiomatization. In Section <ref>, we significantly expand the language of terms in a compositional manner to allow for nesting of modalities with respect to multiple agents. The completeness of the proposed proof systems is demonstrated in Section <ref>. We conclude with a discussion of future work in the final section. § PRELIMINARIES In this section, we familiarize the readers with the basics of Aristotle's syllogistic. Let us first consider the language of Assertoric Syllogistic. Given a countable set of predicates U, the language of Assertoric Syllogistic is defined by the following grammar: φ ::= |, ::= A, ::= A | A where A∈ U. For the ease of presentation, we also write AB := A B, A B := AB, AB := A B and A B := AB. The semantics for is based on first-order structures. A model of is a pair ℳ = (D,ρ) where D is a non-empty domain and ρ:U→𝒫(D) is an interpretation function. The satisfaction relation is defined as below where the third column shows the equivalent clauses in the first-order language. [ ℳ_ASAB ρ(A)⊆ρ(B) ℳ⊩∀ x (Ax→ Bx); ℳ_ASA B ρ(A)∩ρ(B) = ∅ ℳ⊩∀ x (Ax→ Bx); ℳ_ASAB ρ(A)∩ρ(B)≠∅ ℳ⊩∃ x (Ax Bx); ℳ_ASA B ρ(A)⊈ρ(B) ℳ⊩∃ x (Ax Bx); ] Note that since we wish to generalize the ideas of the syllogistics from the modern perspective, the interpretation of a predicate can be an empty set, in contrast with the Aristotelian non-emptiness assumption. Following the study of Corcoran <cit.> and Martin <cit.>, we present the following deduction system . Note that our system is slightly different from that of Corcoran's and Martin's, as they are loyal to Aristotle's non-emptiness assumption.[Cf. <cit.> for a direct proof system that replaces RAA rule by the explosion rule. Moss' work is targeted at a stronger language, which allows complement terms in the antecedent. e.g. A B.] 2.5 AA AB Conversion BA [ϕ] ψ [ϕ] ψ RAA ϕ AB Bg Barbara-Celarent Ag Ag Existence AA With a slight modification of Corcoran's result in Section 4 of <cit.>, it follows that the above system is sound and complete. is sound and strongly complete w.r.t. the semantics. § EPISTEMIC APODEICTIC SYLLOGISTIC Inspired by apodeictic syllogistic, we introduce the first language of Epistemic Syllogistic. Given a countable set of predicates U, the language of Epistemic Apodeictic Syllogistic is generated by the following grammar of formulas (ϕ) and terms (t, g): φ::= |, ::= A, ::= A | A | A | A where A∈ U. We collect all the g as the set of (categorical) terms . Note that the formulas should be read de re. For example, AK B says “all A are known to be not B”, expressing the logical form ∀ x (Ax→ Bx). Formulas without modalities are called non-modal formulas. is interpreted on first-order Kripke models with a constant domain. A model for a tuple ℳ = (W, R, D, ρ). 
W is the set of possible worlds, R ⊆ W× W is a reflexive relation, D is the non-empty domain, and ρ:W× U→𝒫(D) is the interpretation function. We also write ρ_w(A) for ρ(w,A). Note that further frame conditions such as transitivity and Euclidean property do not play a role here since the syntax does not allow nested modalities, which will be relaxed in the next section. To ease the presentation of the semantics, we extend the interpretation ρ to any term. ρ^+:W× Term^ES(U)→𝒫(D) is defined as: ρ_w^+(A) = ρ_w(A), ρ_w^+( A) = D - ρ_w(A) ρ_w^+( A) = ⋂_wRvρ_v(A) ρ_w^+( A) = ⋂_wRv(D - ρ_v(A)) Given a pointed model ,w, the satisfaction relation is defined as follows where the third column lists the corresponding first-order modal formulas. [ ,w_ESAg ρ_w(A)⊆ρ_w^+(g) ,w⊩∀ x (Ax→ g(x)); ,w_ESAg ρ_w(A)∩ρ_w^+(g)≠∅ ,w⊩∃ x (Ax g(x)); ] where we abuse the notation and let g(x) be a modal predicate formula defined as follows: [ g(x)=Ax g=A, g(x)= Ax g= A; g(x)= Ax g= A, g(x)= Ax g= A; ] where is the modal operator and Ax is an atomic formula. We propose the following proof system : 2.5 AA [ϕ] ψ [ϕ] ψ RAA (given non-modal φ,ψ) ϕ AKg E-Truth Ag AKg A-Truth Ag AB Conversion BA AB Bg Barbara/Celarent Ag AB Bg Darii/Ferio Ag CB CKg Disamis/Bocardo BKg Ag Existence 1 AA BKA Existence 2 AKA We say a set of formulas is consistent if it cannot derive a contradiction in system . Note that the RAA rule is restricted to non-modal formulas, as formulas with in do not have negations expressible in the language. If Σ_ESϕ, then Σ⊢_ϕ. Due to the lack of space, we only sketch the idea of the (long) proof in Appendix <ref>. § MULTI-AGENT SYLLOGISTIC WITH NESTED KNOWLEDGE The language has an asymmetry in the grammar such that the first term is simpler than the second. In this section, we restore the symmetry of the two terms. Moreover, the terms are now fully compositional using modalities and negations, thus essentially allowing nested modalities in both and shapes, also in a multi-agent setting. It can be viewed as a modal extension of the language of Syllogistic Logic with Complement in <cit.>, or a fragment of the language of Aristotelian Modal Logic in <cit.>. Given a countable set of predicates U and a set of agents I, the language is defined by the following grammar: φ::= |, ::= A |_i g |g Where A∈ U and i∈ I. The set of terms g is denoted as Term^NES(U). As before, we define g_1g_2:= g_1 g_2 and g_1g_2:= g_1 g_2. Moreover, let _i g be an abbreviation for _i g. With this powerful language , we can express the following: “Everything i knows to be A, j also knows” by _i A_j A; “According to i, something known to be B is possible to be also A” by _i B_i A; “Everything i knows that j knows to be A is also known to be B by i” by _i_j A_iB. is also interpreted on first-order Kripke models with a constant domain and multiple relations (W, {R_i}_i∈ I, D, ρ). We say the model is a TS4S5 model if each R_i is a reflexivereflexive and transitive equivalence relation, respectively. Now we define ρ^+, the interpretation function for terms. ρ^+:W× Term^NES(U)→𝒫(D) is defined recursively as: ρ_w^+(A) = ρ_w(A) ρ_w^+( g) = D - ρ_w^+(g) ρ_w^+(_i g) = ⋂_wR_ivρ_v^+(g) It is easy to see that ρ_w^+(_i g) =ρ_w^+(_i g) =⋃_wR_ivρ_v^+(g). The third column is the corresponding FOML formulas. [ ,w_NESg_1g_2 ρ^+_w(g_1)⊆ρ_w^+(g_2) ,w⊩∀ x (g_1(x)→ g_2(x)); ,w_NESg_1g_2 ρ^+_w(g_1)∩ρ_w^+(g_2)≠∅ ,w⊩∃ x (g_1(x) g_2(x)); ] A simple induction would show the FOML formulas above are indeed equivalent to our formulas. 
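To make the semantics concrete, the following small model checker evaluates formulas of the extended language on a finite pointed Kripke model. The encoding of terms as nested tuples, and the particular worlds, agent and predicates used in the example, are ad hoc choices made only for illustration.

```python
# Terms:    ("atom", A) | ("not", g) | ("K", i, g)
# Formulas: ("all", g1, g2) | ("some", g1, g2)

def ext(term, w, model):
    """rho^+_w(term): the elements of the domain satisfying the term at world w."""
    W, R, D, rho = model
    if term[0] == "atom":
        return rho[w][term[1]]
    if term[0] == "not":
        return D - ext(term[1], w, model)
    if term[0] == "K":                                  # K_i g: g at every R_i-successor
        i, g = term[1], term[2]
        out = set(D)
        for v in W:
            if (w, v) in R[i]:
                out &= ext(g, v, model)
        return out

def holds(phi, w, model):
    e1, e2 = ext(phi[1], w, model), ext(phi[2], w, model)
    return e1 <= e2 if phi[0] == "all" else bool(e1 & e2)

# A reflexive single-agent model with two worlds and domain {a, b}
W, D = {"w", "v"}, {"a", "b"}
R = {"i": {("w", "w"), ("v", "v"), ("w", "v")}}
rho = {"w": {"A": {"a"}, "B": {"a", "b"}}, "v": {"A": {"a"}, "B": {"b"}}}
model = (W, R, D, rho)

KiA = ("K", "i", ("atom", "A"))
print(holds(("all", KiA, ("atom", "A")), "w", model))   # counterpart of axiom T holds here
print(holds(("some", ("atom", "B"), KiA), "w", model))  # "Some B is known by i to be A"
```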
For x∈{T, S4, S5}, we write Σ_x-NESϕ if for all x-model such that ,w_NESΣ, ,w_NESϕ. Here is an observation playing an important role in later proofs. For any g∈, g g and gg are both invalid over S5 models (thus also invalid over T, S4 models). First note that gg is equivalent to g g. We just need to show g g and its negation are both satisfiable for all g. Note that a model with a singleton domain {a} can be viewed as a Kripke model for propositional modal logic, where a predicate A can be viewed as a propositional letter: it holds on a world w iff a∈ρ_w(A). Then a term g can be viewed as an equivalent modal formula. Since there is only one a in the domain, g g is equivalent to g, viewed as a modal formula, by the semantics. We just need to show each g and g has singleton S5 models. It is easy to see that each g and g (as modal formula) can be rewritten into an equivalent negative normal form (NNF) using _i and _i to push the negation to the innermost propositional letter, e.g., _i_j _i A can be rewritten as _i_j_i A. Now it is easy to satisfy such formulas by a Kripke model with a single world w and the reflexive relations for all R_i: make A true on w iff the NNF of g or g ends up with the literal A instead of A. Then we can turn this model into a first-order Kripke model by setting ρ_w(A)={a} iff A is true on w. We propose the following proof system : 2.5 gg, K_igg, g g, gg g_1g_2 g_2g_3 Barbara g_1g_3 g_1g_2 Conversion g_2g_1 g_1g_2 Existence g_1g_1 g g Non-emptiness g g [ϕ] ψ [ϕ] ψ RAA ϕ ⊢g_1g_2 K ⊢_i g_1_i g_2 Clearly, K_igg is the counterpart of the usual T axiom in modal logic. The premise of Non-emptiness makes sure that nothing is g, since the FOML model has the nonempty domain, it follows that there is some g. Note that the K-rule is restricted to provable formulas, as in the case of the monotonicity rule in modal logic. We define to be +K_ig_i_ig, and to be + K_ig_i_ig. It is straightforward to establish soundness if we read the formulas as their first-order modal counterparts. Σ⊢_ϕ implies Σ_TNESϕ. Σ⊢_ϕ implies Σ_S4NESϕ. Σ⊢_ϕ implies Σ_S5NESϕ. Below are some derived rules and theorems that will play a role in the later proofs. The following are derivable in (and thus in ,). 2.5 g_1g_2 g_2g_3 Darii g_1g_3 g_1g_2 Contrapositive g_2 g_1 g g NonExistence t g 3c⊢_g_ig ⊢__i g_i g ⊢__i g_i g ⊢__i g_i g Darii g_2g_3 [g_3 g_1] Barbara g_2 g_1 g_1g_2 Conversion g_2g_1 RAA g_3g_1 Conversion g_1g_3 Contrapositive [ g_2g_1] g_1g_2 Darii g_2g_2 g_2 g_2 RAA g_2 g_1 Non-Existence [t g] gg Darii tg Conversion gt Existence gg g g Darii g g gg RAA t g g_ig can be proved based on the T-axiom _igg and Contrapositive above. _i g_i g follows by Barbara. _i g_i g and _i g_i g can be shown by applying K principle on ⊢_g g and ⊢_ gg. Recall that Σ is inconsistent iff it can derive a contradiction. We can show: A set of formulas Σ is inconsistent iff Σ⊢g g. ⇐: Σ⊢gg since it is an axiom. But by assumption, Σ⊢gg = g g. ⇒: Without loss of generality, assume Σ⊢g_1g_2,g_1 g_2, then by conversion and Darii, Σ⊢g_2 g_2. § COMPLETENESS Now we proceed to prove that is strongly complete w.r.t. reflexive frames. The result can be easily generalized to show the completeness of and w.r.t. their corresponding classes of frames, to which we will come back at the end of the section. The completeness proof is based on the canonical (Kripke) model construction, similar to the case of modal logic. However, the language is significantly weaker than the full language of FOML, which introduces some difficulties. 
In particular, is essentially not closed under subformulas: if we view our g_1g_2 and g_1g_2 as ∃ x (g_1(x) g_2(x)) and ∀ x (g_1(x)→ g_2(x)), then g_1(x) and g_2(x) are not expressible as formulas in . Therefore in constructing the canonical model, we need to supplement each maximal consistent set Δ with a proper “maximal consistent set” of terms for each object, which can be viewed as a description of the object. Inspired by <cit.>, we define some notion of types to capture such descriptions, which closely resembles the concept of points in <cit.>,[It is also called a quantum state in <cit.>.] in the setting of the orthoposet-based algebraic semantics for a (non-modal) syllogistic logic.[The completeness of the (non-modal) syllogistic logic in <cit.> was proved via a representation theorem of orthoposets. Our proofs below are self-contained and do not rely on the results of orthoposets. ] Moreover, to prove the truth lemma eventually, we need Lemma <ref> which asserts that a set of existential sentences is consistent iff each single one of them is consistent. The lemma is equivalent to the assertion that in , every existential sentence brings no new universal consequences. The seemingly obvious statement is actually non-trivial since our system allows RAA and hence does not allow an easy inductive proof on deduction steps. We leave it to future work for finding an alternative direct proof system without RAA. For now, we need to construct a simpler canonical model to show Lemma <ref> in the coming subsection, which also leads to the weak completeness of . §.§ Satisfiability of Existential Formulas and Weak Completeness Inspired by the notion of point in <cit.>, we first define the types as maximal descriptions of objects using terms. Obviously, an object must respect the universal formulas, and be either g or not g but not both for every term g. This will give us some basic properties of types. A type is a subset of Term^NES(U) s.t. * If g_1∈ and ⊢_g_1g_2, then g_2∈. (Respects Provably Barbara) * For all g∈ Term^NES(U), either g∈ or g∈. (Completeness) * For all g∈ Term^NES(U), g, g are not both in . (Consistency) Denote the set of all types by 𝕎. A collection 𝒴 of terms is said to be possible if for all g_1,g_2∈𝒴, ⊬_g_1 g_2. Note that all types are possible: If g_1,g_2∈∈𝕎 satisfies ⊢_g_1 g_2, then since respects provably Barbara, g_2∈, contradicting the consistency of . If _0 is possible, then there is a type ∈𝕎 extending it. Enumerate all terms in as s_0, s_1,…. We will construct a series of subsets of , _0⊆_1⊆_2 … s.t. * For all t_1,t_2∈_n, ⊬_t_1 t_2. (_n is possible) * _n+1 is _n∪{s_n+1} or _n∪{ s_n+1}. Now we show that each possible _n can be extended into a possible _n+1. Given _n that is possible, prove that at least one of s_n+1, s_n+1 can be added to _n to form _n+1 that is possible. Assume _n∪{s_n+1} is not possible, then ⊢_t t' for some t, t'∈∪{s_n+1}. We need to show that ⊬_g g' for all g, g'∈∪{ s_n+1}. Suppose not, then ⊢_g g' for some g, g'∈∪{ s_n+1}. Since _n is possible, at least one of t and t' must be s_n+1, and at least one of g and g' must be s_n+1. Furthermore, by Proposition <ref> and soundness, ⊬_u u. Therefore exactly one of t and t' is s_n+1, and exactly one of g and g' is s_n+1. In the following we derive contradictions from ⊢_g g' and ⊢_t t' based on four cases. Let us consider the case when t'=s_n+1 and g'= s_n+1, thus t, g∈_n. By double negation axiom, ⊢_gs_n+1 and ⊢_t s_n+1. Then we have ⊢_t g by contrapositive and Barbara. 
Then it contradicts to the assumption that _n is possible and we are done. The case when t=s_n+1 and g= s_n+1 can be proved similarly using contrapostive and double negation. Now let us consider the case when t=s_n+1 and g'= s_n+1, then we have ⊢_gs_n+1 and ⊢_s_n+1 t'. By Barbara, we have ⊢_g t', contradicting to the assumption that _n is possible. Similar for the case when t'=s_n+1 and g= s_n+1. Consequently, at least one of s_n+1, s_n+1 can be added to _n to form _n+1 that is possible. Let = ⋃_n∈ℕ_n. Note that each t∈ has to be added or “readded” at some finite step _k thus any two t_1,t_2∈ must be included in some _j. Therefore ⊬_t_1 t_2 since all the _n are possible. Finally, we prove that is a type. It is complete since one of s_n, s_n is added at some _n. It is consistent since if t, t∈, but by axiom double negation we have ⊢_t t, contradicting the fact that is possible. Now for provably Barbara: If t_1∈ and ⊢_t_1t_2, then ⊢_t_1 t_2, hence t_2∉ since is possible. By its completeness, t_2∈. In the following, we build a canonical model for consistent sets of existential formulas. Note that we use a fixed set ℕ as the domain and assign a type to each number in ℕ on each world, i.e., a world is simply a function from natural numbers to types. The accessibility relation is defined as usual in modal logic. The canonical model for existential formulas of is defined as ^E = (W^E,{R^E_i}_i∈ I,D^E,ρ^E), where: * W^E = 𝕎^ℕ. That is: a world w is a map from ℕ to types. * w_1 R^E_iw_2 iff _ig∈ w_1(n) entails g∈ w_2(n) for all n∈ℕ, g∈ Term^NES(U). * D^E = ℕ * ρ^E(w,A) = {n| A∈ w(n)}. The canonical model for existential formulas of is reflexive. For arbitrary g∈ Term^NES(U), w∈ W^E, if _ig∈ w(n), then since ⊢__igg, and w(n) respects provably Barbara, g∈ w(n). Hence w R_i^Ew. To show that the canonical model satisfies the desired existential formulas, the key is to show that ρ^E^+(w,g) = {n∈ℕ| g∈ w(n)}. That is: an object has property g if g is in the type it corresponds to. Similar to the proof of truth lemma in propositional modal logic, we have to prove an existence lemma for the induction step for _i. The existence lemma reads: if in w, an object is not known to be g, then w must see a world where the object is not g. For all w, m∈ℕ, t∈ Term^NES(U) s.t. _it∈ w(m), there is w' s.t. w R^E_iw' and t∈ w'(m). Consider the set 𝒴 = {g|_ig∈ w(m)}∪{ t}. Prove that it is possible. Towards a contradiction, suppose ⊢_t_1 t_2 for some t_1,t_2∈{g|_ig∈ w(m)}∪{ t}. By K principle we have ⊢__it_1_i t_2. There are three cases to be considered. Consider the case where _it_1,_it_2∈ w(m). By Proposition <ref>, ⊢__i t_2_i t_2 and Barbara, ⊢__it_1_it_2, contradicting the fact that w(m) is possible. Suppose t_1=t_2= t, by Proposition <ref>, ⊢__i t_i t which entails ⊢__i t_i t but this is not possible by soundness, since it is not valid over T-models according to Proposition <ref>. If _it_1∈ w(m) and t_2 = t, then ⊢_t_1 t entails ⊢__it_1_i t, which contradicts the fact that _it∈ w(m) and w(m) is possible. If _it_2∈ w(m) and t_1 = t, it leads to a contradiction as well since from ⊢__it_1_it_2, we have the symmetric ⊢__it_2_it_1 by contrapostive. Consequently ⊬_t_1 t_2 for all t_1,t_2∈𝒴. By Lemma <ref>, 𝒴 = {g|_ig∈ w(m)}∪{ t} can be extended to a type. Denote it by _m. Clearly, by repeating the reasoning in the above first case, for each n≠m∈ℕ we can find an _n∈𝕎 such that {g|_ig∈ w(n)}∈_n. Let w' then be defined by w'(n) = _n for each n. Then t∈ w'(m) and w R_i^Ew'. ρ^E^+(w,g) = {m| g∈ w(m)} for all g∈ Term^NES(U). 
Apply an induction on terms. The base case is true by definition. Case 1: For g, ρ^E^+(w, g) = D^E - ρ^E^+(w,g) = D^E - {m| g∈ w(m)} = {m| g∈ w(m)}. The last equality holds because types are consistent and complete. Case 2: For _i g, ρ^E^+(w,_i g) =⋂_wR_i^Ew'ρ^E^+(w,g) = ⋂_wR_i^Ew'{m| g∈ w(m)}, which equals {m|_ig∈ w(m)} by the following reasoning: ⊇ side is easy to see, since if m∈{m|_ig∈ w(m)} and wR_i^Ew', then _ig∈ w(m) entails g∈ w'(m) by definition. Hence m∈⋂_wR_i^Ew'{m| g∈ w'(m)}. ⊆ side. Assume i∈⋂_wR_i^Ew'{m| g∈ w'(m)}, then m∉⋃_wR_i^Ew'{m| g∉w'(m)}, by the completeness and consistency of w(m), i∉⋃_wR^E_iw'{m| g∈ w'(m)}. By Contrapositive of Existence Lemma, m∉{m|_ig∈ w(m)}. Consequently, m∈{m|_ig∈ w(m)}. Now we can show a set of consistent existential sentences is satisfiable. For a set of existential sentence Σ_Some, if ⊬_ϕ_Some for all ϕ_Some∈Σ_Some, then Σ_Some is satisfiable (thus -consistent). Enumerate sentences in Σ_Some as ϕ_0, ϕ_1, …. For each n, suppose ϕ_n = g_1g_2, we show that {g_1,g_2} is possible. First note that since ϕ is an abbreviation, the assumption ⊬_ϕ_Some says ⊬_g_1 g_2. By contrapostive, ⊬_g_2 g_1. By Proposition <ref>, g_1 g_1,g_2 g_2 are not valid, thus cannot be proved in by soundness. Therefore {g_1,g_2} is possible and can be extended as a type by Lemma <ref>; call it _n. Now we can define a w∈ W^E. If Σ_Some is infinite, let w(n) = _n for all n∈ℕ; if not, let w(n) = _n for n≤ |Σ_Some|, and w(n) = _0 for n > |Σ_Some|. Now we can show ^E,w_NESΣ_Some since each ϕ_n∈Σ_some is at least witnessed by n due to our construction of w and Lemma <ref>. Consistency of Σ_some follows by soundness. The weak completeness follows from the above lemma. If _NESϕ, then ⊢_ϕ. By Proposition <ref> and the validity of the rule of Existence, we have _NESϕ_Some for any existential sentence ϕ_Some. Hence it suffices to prove that for all universal sentence ϕ_All, if _NESϕ_All, then ⊢_ϕ_All. Which is equivalent to showing if ⊬_ϕ_All, then _NESϕ_All. Hence it suffices to show that for all existential sentence ϕ_Some, if ⊬_ϕ_Some, then ϕ_Some is satisfiabe, which follows from Lemma <ref> w.r.t. a singleton set. §.§ Strong Completeness Normally, a weak completeness result naturally leads to strong completeness if the logic is compact. However, even though is indeed compact as it is a fragment of FOML, strong completeness does not easily follow and requires an argument based on Lemma <ref>. That is because in syllogistic, formulas are not closed under conjunction. Consequently, weak completeness does not lead to the satisfiability of every finite consistent formula set. Now we proceed to give a proof of strong completeness, again by building a (more complicated) canonical models, but for arbitrary maximal consistent sets. Again, inspired by the notion of point in <cit.>, we define the Δ-type to describe the sets of maximal properties an object may exemplify given the maximal consistent set Δ. Given an MCS Δ, a Δ-type, denoted by is a subset of Term^NES(U) s.t. * If g_1∈ and Δ⊢_g_1g_2, then g_2∈. (Respects Barbara) * For all g∈ Term^NES(U), either g∈ or g∈. (Completeness) * For all g∈ Term^NES(U), g, g are not both in . (Consistency) Denote the set of all Δ-types by 𝕎(Δ). Given an existential sentence g_1g_2∈Δ, we expect there to be some type exemplifying both g_1,g_2. To show this, we first generalize the notion of a possible set of terms w.r.t. a maximal consistent set Δ. Given a maximal consistent set Δ, call a set of terms 𝒴 Δ-possible, if for all t_1,t_2∈𝒴, Δ⊢_t_1t_2. 
It is easy to see that the Δ-types are Δ-possible based on the fact that Δ is an MCS. The following lemma is the counterpart of Lemma 11.2 in <cit.> in the setting of orthoposet-based algebraic semantics. We present the following direct proof in our setting. Each set of terms _0 that is Δ-possible can be extended to a Δ-type ∈𝕎(Δ). Enumerate all terms in Term^NES(U)-_0 as {s_n}. Construct a series of subsets of s.t. _0⊆_1⊆_2 … and: * _n is Δ-possible: For all t_1,t_2∈_n, Δ⊢_t_1t_2. * _n+1 = _n∪{g}, where g = s_n or s_n. Now we show by induction that such a sequence can be constructed. By assumption _0 is Δ-possible. Given _n s.t. for all t_1,t_2∈_n, Δ⊢_t_1t_2, and s_n+1, prove that at least one of s_n+1, s_n+1 can be added to _n to form _n+1 s.t. it remains Δ-possible. Essentially, we have to show either (1) ts_n+1∈Δ for all t∈_n and s_n+1s_n+1∈Δ, or (2) t s_n+1∈Δ for all t∈_n and s_n+1 s_n+1∈Δ. We prove that not (1) leads to (2). If (1) is not the case, there are two cases. Case 1: s_n+1s_n+1∉Δ. Then s_n+1 s_n+1∈Δ since Δ is maximal. By derived rule nonexistence, g s_n+1∈Δ for all g∈ Term^NES(U). For each t∈_n, tt∈Δ, hence t s_n+1∈Δ by Darii. For s_n+1, by rule non-emptiness and that s_n+1 s_n+1∈Δ, s_n+1 s_n+1∈Δ. Hence (2) holds. Case 2: Suppose ts_n+1∉Δ for some t∈_n, we need to show that (2) holds. For s_n+1, if s_n+1 s_n+1∉Δ, then s_n+1s_n+1∈Δ. Since ts_n+1∉Δ, t s_n+1∈Δ, then ts_n+1∈Δ by Barbara, but since t∈_n, tt∈Δ. This leads to ts_n+1∈Δ, a contradiction to the assumption. We still need to show t' s_n+1∈Δ for all t'∈_n. Assume towards a contradiction that t' s_n+1∉Δ for some t'∈_n, then t' s_n+1∈Δ. Since ts_n+1∉Δ then t s_n+1∈Δ. The following deduction shows that Δ⊢_t t', contradicting tt'∈Δ, which follows from our induction assumption that _n is Δ-possible. t s_n+1 t' s_n+1 s_n+1s_n+1 Barbara t's_n+1 Contrapositive s_n+1 t' Barbara t t' Consequently, either (1) or (2) holds and at least one of s_n+1, s_n+1 can be added to _n to form _n+1 that is Δ-possible. Let = ⋃_n∈ℕ_n. Then Δ⊢_t_1t_2 for all t_1,t_2∈. Finally, we prove that is a Δ-type. It is complete since one of s_n, s_n is added at each step, and all predicates in U are eventually visited. It is consistent since Δ is consistent, so Δ⊬_t t for all t, hence t, t can't both be in . Finally we show that it respects Barbara: If t_1∈ and t_1t_2∈Δ, then t_2∉, otherwise we have t_1 t_2∈Δ, contradicting the consistency of Δ. By completeness, t_2∈. Now we start to construct a canonical model for , and show that every maximal consistent set is satisfiable in it. Compared to the previous construction, we now need to take the maximal consistent sets (MCS) into consideration. A world w is a pair of an MCS Δ and a map from ℕ to 𝕎(Δ). By abusing the notation, as in the previous subsection, we write w(m) for f(m) if w=Δ, f. The canonical model for is defined as ^* = (W^*,{R^*_i}_i∈ I,D^*,ρ^*) where: * W^* = ⋃_Δ∈ MCS{Δ, f| f∈𝕎(Δ)^ℕ}. * wR^*_iw' iff _ig∈ w(m) entails g∈ w'(m) for all m∈ℕ, g∈ Term^NES(U). * D^* = ℕ * ρ^*(Δ,f,A) = {m∈ℕ| A∈ f(m)} for all A∈ U. It is not hard to show reflexivity as in the previous subsection. The canonical model for is reflexive. Take arbitrary g∈ Term^NES(U), w∈ W^*. Assume Δ is the maximal consistent set behind w. If _ig∈ w(m), then since ⊢_K_igg, K_igg∈Δ. Then g∈ w(m) since w(m) respects Barbara. Hence wR^*_iw. For all w, m∈ℕ, t∈ Term^NES(U) s.t. _it∈ w(m), there is w' s.t. wR_i^*w' and t∈ w'(m). Assume w = Δ,f where Δ is a maximal consistent set. 
Consider Σ = {g t|_ig∈ w(m)}∪⋃_n∈ℕ{g_1g_2|_ig_1,_ig_2∈ w(n)}, where the second part of the union is to make sure we can obtain the right types. We show Σ is consistent. Note that Σ is made up of existential sentences only, thus by Lemma <ref>, it suffices to prove that ⊬_ϕ for all ϕ∈Σ. Given ϕ = g t∈Σ for some _ig∈ w(m), assume for contradiction that ⊢_gt. Then by K principle, ⊢__ig_it, hence _ig_it∈Δ and _it∈ f(m) since f(m) respects Barbara, but _it∈ w(m), contradicting consistency of w(m). Given ϕ = g_1g_2∈Σ for some _ig_1,_ig_2∈ w(n), assume for contradiction ⊢_g_1 g_2. Again by K principle, ⊢__ig_1_i g_2. By Proposition <ref>, we have ⊢__i g_2_i g_2 and ⊢__i g_2_i g_2. Now by Barbara, we have ⊢__ig_1_ig_2. Then _ig_1_ig_2∈Δ and hence _ig_2∈ w(n) since w(n) respects Barbara. This is a contradiction to _ig_2∈ w(n) and that w(n) is consistent. Since Σ is consistent, we can expand Σ to a maximal consistent set Δ' by a Lindenbaum-like argument. Observe that for all n≠ m, _ig_1,_ig_2∈ w(n), we have g_1g_2∈{g_1g_2|_ig_1,_ig_2∈ w(n)}⊆Δ', hence {g|_ig∈ w(n)} is Δ'-possible and can be expanded to a Δ'-type by Lemma <ref>, denote it by _n. Similarly, as {g|_ig∈ w(m)}∪{ t} is possible too, it can be expanded to a Δ'-type, denote it by _m. Let h be a function from ℕ to 𝕎(Δ') s.t. h(n) = _n. It is clear that (Δ,f)R_i^*(Δ',h) and t∈ h(m). Now we can establish the truth lemma similar to the one in the previous section. ρ^*^+(w,g) = {m∈ℕ| g∈ w(m)} for all g∈ Term^NES(U). Finally, we can show the strong completeness of . is strongly complete w.r.t. the class of reflexive frames. As usual, we show each consistent Σ for the is satisfiable on a reflexive model. We first expand Σ to a maximal consistent set Δ, and enumerate the existential sentences in Δ as ψ_0, ψ_1…. For each n, suppose ψ_n = g_1g_2, {g_1,g_2} is thus it is Δ-possible since g_1g_2, g_2g_1, g_1g_1, g_2g_2∈Δ by rules of Conversion and Existence. Hence, it can be extended to a Δ-type _n in 𝕎(Δ). Take f:ℕ→𝕎(Δ) s.t. f(n) = _n for all n. Show that ^*,Δ, f_NESΔ. For g_1g_2∈Δ: Assume n∈ρ_w^*^+(g_1), then by Truth Lemma g_1∈ w(n), then since w(n) respects Barbara, g_2∈ w(n) hence by truth lemma n∈ρ_w^*^+(g_2). Consequently ^*, w_NESg_1g_2. For g_1g_2∈Δ: Suppose it is enumerated as ϕ_n. By construction of w, g_1,g_2∈_n = w(n). By truth lemma n∈ρ_Δ,f^*^+(g_1)∩ρ_Δ,f^*^+(g_2). Consequently ^*, w_NESg_1g_2. It is straightforward to adapt the completeness proof with extra axioms enforcing certain frame conditions in the canonical model. and are strongly complete w.r.t. the class of reflexive and transitive frames and the class of frames with equivalence relations respectively. § CONCLUSIONS AND FUTURE WORK In this paper, we have taken the initial steps towards developing an epistemic syllogistic framework. We provided complete axiomatizations with respect to two epistemic syllogistic languages featuring de re knowledge. The same techniques can be applied to belief instead of knowledge. In fact, for systems concerning consistent belief over serial models, we only need to replace the counterpart of axiom T with D: Kg g. Adding counterparts of axioms 4 and 5 will yield a complete system of KD45 belief. So far, the usual axioms can all enforce the canonical frame to adopt the desired structure as their modal logic counterparts. If we proceed without seriality, an additional rule is required: from Kg_1K g_1, infer g_2Kg_3, to capture the scenario where the current world has no successor. 
It is evident that syllogisms can be studied in modal contexts other than the epistemic setting as well. As for other future work, we will consider the axiomatization problem of the full language of the so-called Aristotelian Modal Logic <cit.>, and also consider the de dicto readings of the modal operators. It is also interesting to study the computational properties of these logics. One observation is that, like the cases of epistemic logics of know-wh <cit.>, these epistemic syllogistic languages that we considered are one-variable fragments of FOML that are decidable in general. We will also explore the technical connections to other natural logics extending syllogistics such as <cit.>, and to the bundled fragments of first-order modal logic where quantifiers and modalities are also packed to appear together <cit.>. Acknowledgement This work is supported by NSSF grant 19BZX135 awarded to Yanjing Wang. The authors would like to thank Larry Moss for useful pointers and thank the three anonymous reviewers for their valuable comments that improved the presentation of the paper. eptcs § PROOF SKETCH OF THEOREM <REF> Note that for , since there are -formulas that cannot be negated syntactically, and we cannot equate Σ⊢_ϕ with that Σ∪{ϕ} is a consistent set for. Therefore we cannot reduce strong completeness to the satisfiability of any consistent set of formulas. We leave the full proof for the extended version of this paper and only present a sketch here. Assume that Σ is consistent, otherwise the conclusion is trivial. Separate Σ into the non-modal part Σ_0 and the modal part Σ_. We consider all possible maximal consistent extension of Σ_0 in Assertoric Syllogistic and denote them by {Δ_i}_i∈ I. For each Δ_i∪Σ_, we construct a pointed model _i,w_i = (W^i,R^i,D^i,ρ^i),w_i for it. * W^i = {w_i,v_0,v_1}. * R^i is the reflexive closure of {(w_i,v_0),(w_i,v_1)}. * D^i is Δ_i_Some+⊔Δ'_i_Some+, the positive existential sentences of the form AB in Δ_i and its disjoint copy. * ρ^i_w_i(X) = {ϕ,ϕ' |ϕ=AB and AX∈Δ_i or BX∈Δ_i}. Where ϕ' is the copy of ϕ. ρ^i_v_0(X) = {a∈ D^i|Δ^i⊢CKX for some C with a∈ρ^i_w(C)}∪{ϕ=BX∈Δ_i_Some+|Δ^i⊢BKX}. ρ^i_v_1(X) = D^i - ({a∈ D^i|Δ^i⊢CK X for some C with a∈ρ^i_w(C)}∪{ϕ'∈Δ'_i_Some+|ϕ = BB,Δ^i⊢BK X}). The idea for the model is roughly the following: In the new world v_0, an object a can have a property X only if a has property C in the real world and Δ^i∪Σ thinks CKX; or Δ^i∪Σ thinks BKX and a happens to be BX. In the new world v_1, an object a has every property A unless a has property C and Δ^i∪Σ thinks CK X; or Δ^i∪Σ thinks BK X and a happens to be the copy of BB. We need a disjoint copy of Δ_i_Some+ in the domain so that the mere fact that ϕ happens to have property C does not validate a universal sentence. These models collectively describe all the possible models for Σ under logical equivalence. Therefore, it can be shown that if Σ⊬_ϕ, we can always find a Δ_i⊇Σ_0 s.t. _i,wϕ. The collection of these models are called the canonical model family of Σ. Eventually, we will be able to prove that: * All models in the canonical model family satisfy Σ. * If ϕ is satisfied by all models in the canonical model family of Σ, then Σ⊢_ϕ. 1. is standard practice. To show 2, we prove the converse: if Σ⊬_ϕ then there is one model in the canonical family that falsify it. As an example, we sketch the proof for case Σ⊬_A B, the other cases are similar. Consider Σ' = Σ_0∪{ϕ|ϕ∈Σ_K}∪{A C|C B∈Σ_K} It can be shown to be consistent as a set of assertoric syllogistic. 
The main idea is that Σ_0∪{ϕ|ϕ∈Σ_K} is proof theoretic consequence of Σ, hence it is by assumption consistent. And if Σ_0∪{ϕ|ϕ∈Σ_K} deduces AC for C B∈Σ_K, then Σ⊢_A B, which is a contradiction to the assumption. Finally, Σ' has a maximal consistent extension Δ^i. It can be shown that the model _i,w_i for Δ^i in the canonical model falsifies A B. After establishing 1 and 2, Completeness thus follows.
http://arxiv.org/abs/2307.04614v1
20230710145753
(Empirical) Gramian-based dimension reduction for stochastic differential equations driven by fractional Brownian motion
[ "Nahid Jamshidi", "Martin Redmann" ]
math.NA
[ "math.NA", "cs.NA" ]
http://arxiv.org/abs/2307.04806v1
20230710180056
The Dragon-II simulations -- II. Formation mechanisms, mass, and spin of intermediate-mass black holes in star clusters with up to 1 million stars
[ "Manuel Arca Sedda", "Albrecht W. H. Kamlah", "Rainer Spurzem", "Francesco Paolo Rizzuto", "Mirek Giersz", "Thorsten Naab", "Peter Berczik" ]
astro-ph.GA
[ "astro-ph.GA" ]
firstpage–lastpage Autonomous feedback stabilization of a cavity-coupled spin oscillator Dan M. Stamper-Kurn August 12, 2023 ===================================================================== The processes that govern the formation of intermediate-mass black holes (IMBHs) in dense stellar clusters are still unclear. Here, we discuss the role of stellar mergers, star-BH interactions and accretion, as well as BH binary (BBH) mergers in seeding and growing IMBHs in the Dragon-II simulation database, a suite of 19 direct N-body models representing dense clusters with up to 10^6 stars. Dragon-II IMBHs have typical masses of m_ IMBH = (100-380) and relatively large spins χ_ IMBH > 0.6. We find a link between the IMBH formation mechanism and the cluster structure. In clusters denser than 3× 10^5 M_⊙ pc^-3, the collapse of massive star collision products represents the dominant IMBH formation process, leading to the formation of heavy IMBHs (m_ IMBH > 200 M_⊙), possibly slowly rotating, that form over times <5 Myr and grow further via stellar accretion and mergers in just <30 Myr. BBH mergers are the dominant IMBH formation channel in less dense clusters, for which we find that the looser the cluster, the longer the formation time (10-300 Myr) and the larger the IMBH mass, although remaining within 200 M_⊙. Strong dynamical scatterings and relativistic recoil efficiently eject all IMBHs in Dragon-II clusters, suggesting that IMBHs in this type of cluster are unlikely to grow beyond a few 10^2 M_⊙. methods: numerical – galaxies: star clusters: general – stars: general, black holes § INTRODUCTION Despite the great progresses in observations, marked by the detection of intermediate-mass black hole (IMBH) candidates with masses as low as 50,000 <cit.>, and the first detection of an IMBH with mass ∼ 150 formed from the merger of two massive stellar BHs <cit.>, IMBHs remain elusive objects whose existence in the M_ IMBH = 10^2-10^5 mass range is largely debated <cit.>. Several IMBH candidates have been proposed in galactic and extragalactic clusters <cit.>, but none of the explorations conducted so far led to conclusive results, making IMBH formation processes one of the most intriguing puzzles of modern astronomy. Numerical and theoretical works on IMBH formation in dense star clusters suggest that the IMBH seeding can occur via three, rather uncertain, pathways <cit.>: multiple stellar mergers, accretion of stellar matter onto a stellar BH, or repeated BH mergers. These mechanisms are not mutually exclusive: multiple stellar mergers can form a very massive star (VMS) that eventually collides with a stellar BH and the collision product further grows by merging with other BHs in the cluster. These processes could explain the formation of supemassive BHs (SMBHs) in galactic nuclei <cit.>. A further formation channel could be via formation and collapse of a supermassive star, the so-called direct collapse scenario for SMBH seedings in galactic nuclei <cit.>. A similar process, aided by stellar collisions and gaseous accretion, could operate also in the most massive globular clusters, provided that they accrete a significant amount of the gas in which they are embedded at formation <cit.>. The impact of multiple stellar mergers onto the IMBH buildup depends in part on the possible insurgence of pair-instability (PISN) and pulsational pair-instability supernova (PPISN) mechanisms. 
Stars that develop a He core with mass in the range m_ He=(64-135) undergo PISN and explode leaving no remnant, whilst stars with m_ He=(32-64) suffer strong mass loss owing to PPISN and leave remnants with a mass generally lighter than 40-50. These explosive mechanisms result in the so-called upper mass-gap, a region of the mass spectrum m_ BH = 40-150 where no BHs are expected. The boundaries of the upper mass-gap are rather uncertain and depend on many details, among which are the stellar evolution model, stellar rotation, and the rate of thermonuclear reactions <cit.>. Stellar mergers can actually overcome PISN and PPISN by mixing stars in different evolutionary stages, a mechanism that permits increasing the stellar mass while keeping the He core below the threshold for these explosive mechanisms to develop <cit.>. Stellar mergers of this type have proven to be a viable way to generate upper-mass gap BHs in star clusters and, in some cases, IMBHs <cit.>. Whilst there is some general consensus about the outcome of stellar mergers, also thanks to the development of detailed hydrodynamical simulations coupled with stellar evolution models <cit.>, it is still rather unclear how much mass a stellar BH can accrete from a massive star. Several works have shown that in the case of a "normal" star merging with a stellar BH, there is little accretion, as most of the energy is radiated away via jets, although the mechanism is highly uncertain and likely depends on the star's structure and evolutionary stage <cit.>. Hydrodynamical simulations of star-BH close interactions have shown that up to 70% of the star mass remains bound to the BH, but energy arguments suggest that even a tiny amount of accreted matter, O(10^-3-10^-2), generates enough energy to evaporate the accretion disk and halt the BH growth <cit.>. Nonetheless, recent simulations modelling the common envelope phase of a tight star-BH binary have shown that the BH accretes the stellar core and expels the envelope, a process – possibly accompanied by a SN-like transient – that can spin up the BH to nearly extremal values regardless of the initial spin <cit.>. In multiple main sequence (MS) star collisions, the merger product is expected to be characterised by a compact core and a tenuous envelope with densities as low as 10^-10 g cm^-3 <cit.>. Therefore, it seems reasonable to assume that a BH would eat up a significant fraction of mass from a massive companion that underwent multiple stellar mergers. Given this, recent works parametrised the amount of accreted matter through an accretion parameter f_c=0-1 <cit.>. Repeated BH mergers can potentially build up upper-mass gap BHs and IMBHs, but their efficiency is inevitably hampered by post-merger recoil originating from anisotropic GW emission <cit.>, which can easily eject the post-merger product from the parent environment, especially in star clusters with velocity dispersion σ < 100 km s^-1 <cit.>. Typically, the amplitude of the kick imparted on the remnant promptly after a merger depends on the binary mass ratio and the amplitude and direction of the component spins, and can attain values that span more than two orders of magnitude. Despite its crucial impact on post-merger dynamics, little is known about the natal spin of stellar BHs, let alone IMBHs. Observations of several high-mass X-ray binaries show that BHs in these systems are nearly maximally spinning <cit.>, while observations of GW sources suggest that merging BHs are mostly slowly rotating (χ_ BH < 0.5) <cit.>.
From the theoretical point of view, it has been suggested that the evolution of the BH stellar progenitors could significantly impact the natal spin distribution. In single stars and binaries with negligible mass transfer, efficient angular momentum transport driven by magnetic fields could trigger the formation of BHs with natal spins as small as χ_ BH≲ 0.01 via the Tayler-Spruit dynamo <cit.>. Significant mass transfer can, instead, spin up a BH even if it is spinless at birth, possibly explaining the observed spin of BHs in Galactic low-mass X-ray binaries (χ_ BH∼ 0.1-0.99) <cit.>. Similarly, accretion from a BH progenitor onto a close companion in a binary and subsequent accretion from the companion onto the BH can spin up the BH in high-mass X-ray binaries, provided that the angular momentum transfer when the companion leaves the MS phase is inefficient <cit.>. High-mass X-ray binaries with highly spinning BHs are not expected to produce merging BHs, a feature that partly explains the dearth of highly spinning BHs in observed BH mergers <cit.>. In massive binaries undergoing both Roche lobe overflow and common envelope evolution and eventually forming a BH binary (BBH), the first-born BH can have nearly zero spin or a spin covering a wide range, depending on the stellar prescription adopted, whilst the second BH could have nearly extremal spin <cit.>. This is likely driven by tidal synchronization of the BH progenitors' rotation and their mutual orbit <cit.>. Nonetheless, massive binaries could also form BHs with negligible spins, provided that their progenitors lose their hydrogen envelope before undergoing SN <cit.>. In the case of BHs formed from star-BH mergers, instead, it has been shown that the accretion of the star's core onto the BH can spin up the BH to extreme values <cit.>. The aforementioned scenarios for BH natal spin can have a significant impact on the properties of IMBHs, depending on their formation mechanism. An IMBH formed via a star-BH merger, for example, could be characterised by a large spin, while one formed via the collapse of a VMS could have negligible spin. Stellar mergers, star-BH interactions, and BBH mergers can also have an impact on the formation of BHs in the upper-mass gap. In the first three observation runs, the LIGO-Virgo-KAGRA collaboration (LVC) revolutionized our knowledge of BHs, proving the existence of BHs in and beyond the upper-mass gap. The most updated GW transient catalog (GWTC-3) contains 85 sources associated with the merger of two BHs with a mass above m_ BH = 3 <cit.>. Around one-third of them (27) have one component with m_ BH > 40.5, and 8 of them have one component with m_ BH > 65, i.e. two proposed lower limits for the PISN <cit.>. Moreover, 8 sources have a remnant mass m_ BH, rem > 100, 3 of which exceed the IMBH threshold at the 95% confidence level. With the forthcoming fourth observation run (O4), the LVC collaboration will possibly detect a further 30-150 merging events, thus future detections will provide further insights into the development of BH mergers with upper-mass gap BHs. In this work, we discuss the formation of IMBHs and upper mass-gap BHs in the Dragon-II star cluster database, a suite of 19 direct N-body simulations of star clusters comprised of up to 1 million stars and up to 33% of stars initially in binaries (details about these models are discussed in our companion paper, Arca Sedda et al in prep), performed with the Nbody6++GPU code[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>] <cit.>.
The paper is organised as follows: in Section <ref> we briefly summarise the main features of our models; Section <ref> describes how IMBHs form in simulations and what is the impact of different formation channels; whilst Section <ref> is devoted to discuss the impact of Newtonian and relativistic dynamics on the mass and spin of IMBHs in dense star clusters. Section <ref> summarises the main results of the work. § NUMERICAL METHODS §.§ Modelling clusters with the code All clusters are represented by <cit.> models with an dimensionless potential well W_0 = 6, a number of stars of N = (120 - 300 - 600)× 10^3, and an initial half-mass radius either R_ = 0.47,  0.80,  1.75 pc. As described in the first paper of the series (Arca Sedda et al, subm., hereafter paper AS-I) this choice is compatible with observations of several Galactic young massive clusters and produce cluster models that broadly match observed masses and half-mass radii of dense clusters in the Magellanic clouds (see Figure 2 in paper AS-I). For all models we adopt a binary fraction f_b=0.2[Note that the binary fraction is defined as f_b = n_b/(n_s+n_b), where n_b is the number of binaries. This implies that the fraction of stars initially in binary systems is f_2b = 2f_b/(1+f_b)= 0.10-0.33, with f_b=0.05, 0.2.], defined as the number of binaries normalised to the sum of the number of single stars and binary pairs. For models with R_ = 2.2 pc, we run an additional series of models where we adopt f_b = 0.05 and N = (120 - 300 - 1,000)× 10^3. All clusters have the same metallicity, Z = 0.0005, a value consistent with the metallicity of several globular clusters in the Milky Way that may host a substantial population of BHs <cit.>. The reduced computational cost of modelling a smaller amount of binaries permitted us to increase the total number of stars to one million, which is the maximum amount of stars and binaries ever simulated for realistic star cluster models with a direct N-body code <cit.>. All clusters have been initialised with the code <cit.>, adopting a <cit.> initial mass function limited between 0.08 and 150. Binary eccentricities are drawn from a thermal distribution, whilst semimajor axes follow a distribution flat in logarithmic values limited between the sum of stellar radii and 50 AU <cit.>. Binary components are paired according to a uniform mass ratio distribution if their mass exceeds m_*>5, whilst lighter stars are paired randomly <cit.>. All clusters are assumed to orbit on a circular orbit 13.3 kpc away from the centre of a galaxy with total mass 1.78×10^11, assuming for the galaxy a Keplerian gravitational potential. Note that the choice of parameters is such that the velocity curve at the adopted distance is similar to the one observed in the Milky Way. This implies that all clusters are initially well contained inside their Roche lobe, thus the galactic field has little effect on the cluster structural evolution. In all cases but one, we ran two different realisation of each cluster to reduce the impact of statistical fluctuations. Table <ref> summarizes the main properties of clusters. The table shows the initial parameters of the clusters, the simulated time T_ sim, the number of merging compact objects occurring inside the cluster or after their ejection, the absolute maximum mass attained by BHs and the maximum BH mass at the end of the simulation, the number of BHs with a mass above 30 or 40. For each set of initial conditions, we provide numbers for each independent realisation. 
The simulations have been performed with the code, a state-of-the-art direct N-body integrator that exploits GPU-accelerated high-performance supercomputing <cit.>. The current version of the code follows the footstep of a 50 year old tradition initiated by Sverre Aarseth <cit.>. The code exploits a 4th-order Hermite integrator with individual block-time step <cit.> and implements a dedicated treatment for close encounters and few-body dynamics based on the Kustaanheimo-Stiefel (KS) regularisation <cit.>, the Ahmad-Cohen (AC) scheme for neighbours <cit.>, and algorithmic chain regularisation <cit.>, which permits to resolve the evolution of binaries with a period 10^-10 times smaller than the typical dynamical timescales of star clusters. Recently, a series of improvements have been introduced in the code to treat the formation and merger of relativistic binaries <cit.> and quantify the fraction of stellar matter that can be fed to a stellar BH in binary systems or star-BH collisions <cit.>. Stars in clusters are evolved self-consistently from the zero age main sequence through the code <cit.>, conveniently updated to feature state-of-the-art recipes for the evolution of massive stars, the mass spectrum and natal kicks of BHs and NSs, and the physics of (P)PISN, <cit.>. In this work, we use the so-called level-B of stellar evolution <cit.>. After a series of major upgrades described in recent papers <cit.>, currently implements multiple choices for the distributions of BH natal spins and numerical relativity fitting formulae to calculate the final mass and spin of merger remnants, based on <cit.>, and the relativistic recoil imparted onto them because of asymmetric GW emission, based on <cit.>. Although implements the GW recoil in a self-consistent way, the amplitude of the recoil depends primarily on the merging masses and the spin amplitude and orientation, making the process highly stochastic. Given the relatively small number of simulations in our sample, we decide to explore the role of post-merger kicks as follows. Firstly, we run all simulations assuming zero GW recoil. Secondly, we calculate the typical GW recoil experienced by merger products in clusters and infer the corresponding retention probability in post-process, following an approach widely used in the literature. Thirdly, in case of a simulation featuring multiple generation mergers, we re-run the simulation shortly before the n-th merger with the GW kicks enabled to verify if, upon retention, the BH undergoes an n+1-th generation merger. The scopes of such simplified scheme are manifold. On the one hand, it permits us to verify whether multiple-generation mergers can occur in absence of relativistic effects. On the other hand, it permits us to assess the impact of Newtonian and general relativistic dynamics on the formation and retention of IMBHs. Furthermore, using this multi-stepped procedure helps us to optimise the available computational resources and to maximise the scientific outputs of the simulations. § INTERMEDIATE-MASS AND UPPER-MASS GAP BLACK HOLES FORMATION IN MASSIVE DENSE CLUSTERS Out of 19 simulated clusters, we find 8 IMBHs with a mass M_ IMBH = (107-350), corresponding to a formation probability of P_ IMBH∼ 42±15%. Despite the small statistics, we note that there is a moderate dependence on the binary fraction and the cluster compactness. In fact, we find an IMBH formation fraction of f_ IMBH = 0.17,  0.33,  0.75,  0.67 going from f_b=0.05 to f_b=0.2 and from R_ = 1.75, 0.8, 0.47 pc. 
Comparing different models makes evident the importance of binaries and cluster compactness in determining the IMBH seeding. The formation history and main properties of all IMBHs in simulations are described in detail in Appendix <ref>. Aside IMBHs, around N_ gap≃ 10^2 upper mass-gap BHs form within the simulation time, corresponding to a formation efficiency η_ gap = N_ gap/M_ sim = 3.44 × 10^-5^-1, where M_ sim = 3.65× 10^6 is the total simulated mass. The formation of IMBHs and upper-mass gap BHs via stellar mergers, accretion of stellar material onto a stellar BH, BH-BH mergers, or a combination of them, intrinsically depend on the host cluster properties. The development of one mechanism or another is intrinsically linked to the initial cluster structure, which determines the typical timescales of dynamical processes. The earliest process that regulates the evolution of a star cluster with a broad mass spectrum is mass-segregation, by which the most massive stars sink toward the cluster centre and start dominating dynamics in the inner core <cit.>. The mass-segregation timescale of heavy stars with maximum mass m_ max can be expressed as <cit.> T_ seg∼0.138N⟨ m_* ⟩/m_ maxln(0.11M_cl/m_ max)(R_^3/GM_cl)^1/2. If the mass-segregation time is shorter than the lifetime of the most massive stars, it implies that they will sink to the centre before they turn into compact objects, thus their interactions can trigger more easily stellar collisions or massive star-BH close interactions. As summarised in Table <ref>, clusters have a typical mass-segregation time T_ seg=0.4-3.4 Myr, thus they represent ideal laboratories to study the impact of star mergers and strong interactions on the early evolution of star clusters. In the following section, we describe the impact of stellar collisions, star-BH collisions and mergers, and compact object mergers on the formation of IMBHs and mass-gap BHs. §.§ Formation channels and formation times Despite the relatively small database, our models support the formation of IMBHs via all the three main channels, complementing previous works <cit.>. To provide the reader with a clearer idea about how IMBHs form in clusters, we provide below two examples extracted from our simulations. In the first example, an IMBH with final mass m_ IMBH = 350 forms in a cluster with N=120k stars, half-mass radius R_ = 0.47pc, and binary fraction f_b=0.2. The IMBH formation sequence is sketched in Figure <ref>. Initially, a primordial binary with component masses m_p1,p2 = (132 + 99) undergoes a series of strong interactions with a single MS star with mass m_s = 133 within the inner Myr of cluster evolution. The triple formed this way undergoes both phases of resonant interactions, with an exchange among the binary secondary and the third star, and a phase of hierarchical evolution, until the third body and the companion merge, leaving behind a new binary with component masses m_p1,ps = (132+231), eccentricity e ∼ 0.001 and semimajor axis a ≃ 225 R_⊙. After 1.8 Myr, the binary captures a massive companion with mass m_3 = 115 that induces the collision of the two massive stars, eventually leaving behind a VMS with mass m_ VMS = 360, which forms a binary with m_3. The two binary components merge during the Hertzsprung-gap (HG) phase of the primary, leading to the formation of a VMS with total mass m_ VMS = 365. 
After capturing via a hyperbolic collision a small MS star (∼ 0.7) during the CHe burning phase, the VMS collapses to a BH with final mass m_ IMBH,1 = 288 over a total time of T_ sim = 2.5 Myr. Within the subsequent 4 Myr, the newborn IMBH collides with another massive MS star with mass m_ MS = 122, accreting a fraction f_c = 0.5 of its total mass and reaching a final IMBH mass of m_ IMBH≃ 350. This case represents a clear example of how different formation channels, in this case stellar and star-BH mergers, contribute to the IMBH seeding and growth. In the second example, instead, an IMBH with mass m_ IMBH = 191 forms from the coalescence of two nearly equal mass BHs. As sketched in Figure <ref>, the two BHs with masses ∼ 95 form from the evolution of two initially independent primordial binaries. After formation, the two BHs are part of different binaries and undergo many binary-single and binary-binary interactions before finding each other and merging after a time of ∼ 10^2 Myr. §.§.§ Stellar mergers In the Dragon-II models we find in total 104 stellar mergers with a merger remnant mass m_ VMS>90, with 75% of them involving primordial binaries. The typical mass of the merger product is a star with mass in the range m_ VMS = 100-350. In some cases, the same star undergoes 3-4 merging events with stars in different evolutionary phases. Figure <ref> shows the post-merger mass as a function of the time at which the merger occurs for all simulations. The plot shows exclusively star-star coalescences, thus it excludes both star-BH and BH-BH merging events. Around 48% of stellar mergers produce a massive MS star, 32% produce a star in the HG, and a core-He burning star in the remaining 22% of cases. The formation of a VMS (m_ VMS> 150) eventually leads to either no remnant owing to PISN (∼ 23 cases), a remnant with mass m_ BH = 40.5 owing to PPISN (∼ 64 cases), or an IMBH (∼ 2 cases). Comparing models with the same R_ and different binary fractions, we find that models with f_b=0.2 host a number of mergers 2-5 times larger than the case f_b=0.05, a reflection of the fact that most of the mergers involve primordial binaries. Notably, the two IMBHs form in the densest simulated clusters, i.e. those with R_ = 0.47 pc and N=(1.2-3)× 10^5, which are also those with the shortest mass-segregation time (T_ seg∼ 0.3-0.4 Myr), much shorter than the typical BH formation time (>2 Myr). §.§.§ Star-black hole collisions Among all simulations, we find 454 star-BH merger events, the vast majority of which (72%) lead to the formation of BHs with a final mass m_ BH<40.5, thus they will remain mixed with the population of "ordinary" BHs that never experienced stellar accretion episodes. The remaining mergers leave behind, instead, BHs with a mass falling in the upper-mass gap. In more detail, around 18% of these events trigger the formation of a final BH with a mass in the range 40.5 < m_ BH < 60, 6% form BHs with masses in the 60 < m_ BH < 70 mass range, and the remaining ∼ 4% produce BHs with m_ BH > 70. Stars involved in a star-BH merger are in different evolutionary stages: HG (40.1%), core He burning (45.2%), MS (5.5%), early/late asymptotic giant branch (AGB, 9%), giant branch (GB, 1.1%), and HG naked He star (0.2%). Note that we have two different types of star-BH accretion events: one purely dynamical and one induced by stellar evolution.
In the purely dynamical case, we have two possibilities: either the BH captures a MS star in an orbit such that the star fills its Roche lobe, or the orbit is sufficiently tight and eccentric that the BH crashes onto the star. In either case, the BH accretes a fraction f_c of the star mass. In the stellar evolution-driven case, instead, the star fills its Roche lobe, mainly when inflating during the HG or the core He burning phase. Even in such a case, though, it is assumed in the code that the BH eats up a fraction f_c of the star mass. Therefore, the stellar type is likely the parameter that best distinguishes the two types of star-BH accretion/merger events. Figure <ref> shows the mass distribution of the merging star and of the BH before/after the merger, and the stellar type of the stars involved in the process. Two events contribute to IMBH seeding or growth: one of them involves a m_ BH=40.5 BH that accretes a core He burning star with mass m_ VMS = 133, previously formed via a complex sequence of stellar mergers triggered by binary-binary and binary-single interactions. In this case, the IMBH mass is m_ IMBH = 107. The second event, which we do not show in the histogram to ensure optimal visibility, involves an IMBH with mass m_ IMBH = 288 and a MS star with mass m_* ≃ 122. None of the other interactions leads to the formation of an IMBH, partly owing to our choice to set the accretion factor to f_c=0.5. Adopting f_c = 1 would have led to an additional population of ∼ 20 IMBHs with a mass at formation in the range m_ IMBH = 100-160. §.§.§ Black hole mergers The remaining 5 IMBHs in clusters form via BH-BH mergers, all involving upper mass-gap BHs. This highlights the fundamental impact of star-BH accretion events, because they are the main channel through which mass-gap BHs form. Interestingly, all the BH mergers involved in the IMBH buildup have progenitor stars originally in a primordial binary, thus highlighting the crucial role of binary dynamics in the IMBH formation process. At formation, these 5 IMBHs have masses in the range m_ IMBH≃(140-232) and, in the case of negligible GW recoil, further increase up to m_ IMBH≃(160-260) via one or two repeated (hierarchical) merger events, after being dynamically ejected from the cluster. In the case of zero GW recoil, among all IMBHs in the Dragon-II models, only one is ejected from the cluster as a single object. All the others are ejected with a companion and undergo a merger within a Hubble time. In two cases, the IMBH undergoes two/three mergers inside the cluster and forms a binary with another BH that is eventually ejected from the cluster, merging in the field within a Hubble time. §.§.§ The link between formation channels, formation times, and the intermediate-mass black hole mass Although our sample is rather small, the fact that IMBHs form via all the proposed formation channels can help to provide a possible answer to the intriguing question "Is there a link between the IMBH seeding process and the environment in which this happens?" Figure <ref> shows the IMBH mass as a function of time for different formation channels from the first time the IMBH mass exceeds 10^2 and until the first BH merger event develops. In other words, we exclude from the plot IMBHs older than the second generation (2g), because GW recoil drastically reduces the probability of multiple-generation mergers, as discussed in Section <ref>. From the plot, it seems that there is a striking relation between the structure of the host cluster and the IMBH formation process.
The densest clusters (ρ_ cl > 3× 10^5 M_⊙ pc^-3) favour the formation of IMBHs via stellar collisions on short timescales (<10 Myr) and nurture the most massive IMBHs in our sample. IMBHs in these clusters further grow via accretion of stellar material and coalescence with stellar BHs on timescales <100 Myr <cit.>. In lower density clusters, instead, IMBHs form on longer timescales (10-300 Myr) via star-BH accretion and BBH mergers. In this case, Figure <ref> clearly shows a trend, namely that the looser the cluster, the longer the formation time and the heavier the IMBH seed mass. This difference may be related to the core-collapse process, a mechanism driven by mass-segregation and relaxation according to which the cluster core contracts and its density increases up to a maximum point, i.e. the core-collapse. The time at which core-collapse occurs is generally a fraction of the relaxation time, t_ cc = 0.2 T_ rlx <cit.>. We find that in clusters with an initial density >3× 10^5 M_⊙ pc^-3 the core-collapse occurs before stellar BHs form or massive stars undergo PISN and PPISN, i.e. t_ BH∼ 4 Myr. This supports the idea that core-collapse facilitates the collision of stars before they collapse to BHs or undergo PISN. In the case of clusters less dense than 3× 10^5 M_⊙ pc^-3, we also note that the smaller the density, the larger the IMBH mass. This may be due to the fact that in low-density clusters, where interactions are less energetic and less frequent, the ejection of the most massive BHs via the so-called BH-burning process <cit.> is less effective. As a consequence, the heaviest BHs in the loosest clusters in our sample have more time to hang around in the cluster and pair up, as in the case of model IBH_Rh1.75f20N120k. § DISCUSSION §.§ Newtonian versus relativistic dynamics: intermediate-mass black hole retention and hierarchical merger frequency In this work, we want to assess the competing roles of Newtonian and relativistic dynamics in determining BH retention and IMBH seeding and growth, thus we adopt the following multi-step procedure: a) run all cluster simulations assuming zero GW recoil to verify the possible development of multiple mergers and quantify the impact of Newtonian dynamics on the retention of BH merger remnants, b) quantify the retention probability of remnant BHs, c) re-run models in which BHs undergo repeated mergers with GW recoil enabled. §.§.§ Newtonian dynamics Regardless of the formation scenario, an IMBH seed that upon formation is retained in its parent cluster will likely undergo mass-segregation and quickly settle in the cluster centre, possibly capturing a companion <cit.>. The newly formed binary will undergo frequent interactions with surrounding cluster members with mass m_p, at a rate ṅ_2-1∼ n σπ a^2(1-e)^2 (1+2G(m_1+m_2+m_p)/[a(1-e)σ^2]), where n is the cluster number density, σ the velocity dispersion, m_1,2 the masses of the binary components, and a the binary semimajor axis. If the binary is hard, i.e. a ≪ 2G(m_1+m_2)/σ^2, or highly eccentric, the timescale for these interactions is roughly given by t_2-1 ∼ 6 Myr (n/10^5 pc^-3)^-1(σ/20 km s^-1) ×((m_1+m_2+m_p)/240)^-1(a/1 AU)^-1 (1-e), therefore much shorter than the typical cluster lifetime. Repeated binary-single interactions can have an important effect on the binary evolution: on the one hand, they can extract orbital energy and harden the binary <cit.>, but, on the other hand, they can become so violent as to eject the binary from the cluster, halting the IMBH growth <cit.>.
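For concreteness, the two timescales invoked in this discussion (the mass-segregation time quoted earlier and the binary-single encounter time above) can be evaluated with the short Python sketch below. This is a minimal illustration under our own assumptions: astrophysical units (pc, M_⊙, Myr, km/s), the Coulomb logarithm ln(0.11 M_cl/m_max) read as part of the denominator of the segregation-time estimate, and input values that are illustrative rather than taken from the simulation tables.

```python
import numpy as np

G_ASTRO = 4.498e-3    # G in pc^3 Msun^-1 Myr^-2
G_KMS = 4.3009e-3     # G in pc (km/s)^2 Msun^-1
AU_IN_PC = 4.848e-6   # 1 AU in pc
KMS_TO_PC_MYR = 1.0227

def t_seg(n_stars, m_mean, m_max, m_cl, r_h):
    """Mass-segregation time (Myr) of stars of mass m_max, following the
    estimate quoted earlier, with the Coulomb logarithm ln(0.11 M_cl/m_max)
    read as part of the denominator (standard relaxation-time form)."""
    return (0.138 * n_stars * m_mean
            / (m_max * np.log(0.11 * m_cl / m_max))
            * np.sqrt(r_h**3 / (G_ASTRO * m_cl)))

def t_binary_single(n, sigma, m_tot, a_au, e=0.0):
    """Time (Myr) between binary-single encounters: the inverse of the rate
    quoted above, including the gravitational-focusing term.
    n in pc^-3, sigma in km/s, m_tot = m1 + m2 + mp in Msun, a in AU."""
    a = a_au * AU_IN_PC
    rate = (n * sigma * np.pi * a**2 * (1 - e)**2
            * (1 + 2 * G_KMS * m_tot / (a * (1 - e) * sigma**2)))
    return 1.0 / (rate * KMS_TO_PC_MYR)

# Illustrative values only: a dense cluster with N = 120,000 stars of mean
# mass ~0.59 Msun and R_h = 0.47 pc gives T_seg of a few 0.1 Myr.
n_stars, m_mean, m_max = 120_000, 0.59, 150.0
print(f"T_seg ~ {t_seg(n_stars, m_mean, m_max, n_stars * m_mean, 0.47):.2f} Myr")
# Fiducial values of the scaling relation: n = 1e5 pc^-3, sigma = 20 km/s,
# m1 + m2 + mp = 240 Msun, a = 1 AU, e = 0  ->  roughly 6 Myr.
print(f"t_2-1 ~ {t_binary_single(1e5, 20.0, 240.0, 1.0):.1f} Myr")
```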
The typical escape velocity of clusters described by a <cit.> model can be conveniently expressed as <cit.> v_ esc = 2√(log(1/c)/π)(1-c)^-1/2(GM/R_)^1/2, where c = R_c/R_ is the ratio between the core and half-mass radius of the cluster. In models, we find that such parameter attains values c=0.2± 0.1 within the whole simulation time and regardless of the initial conditions. Therefore, the escape velocity can be rewritten as v_ esc = (34± 3) km/s(M/10^5)^1/2(R_/1 pc)^-1/2. In all clusters the escape velocity remains below v_ esc < 50 km/s, with the loosest and smallest clusters attaining values in the 8-20 km/s range. This relatively small escape velocity has huge implications on the IMBH evolution. In fact, even when GW recoil is not taken into account, all IMBHs are ejected from the parent cluster after a violent interaction with a perturber. A clear example is a simulation with N = 300 k, R_ = 0.47 pc, and f_b = 0.2, in which a binary with mass m_1 + m_2 = (240 + 38) undergoes a strong scattering with a BH with mass m_p = 44, which reduces the binary semimajor axis from a = 0.35 AU to a_ fin = 0.24 AU and impart to the binary a recoil with amplitude v_ rec = 85 km s^-1. From a theoretical standpoint, a binary undergoing a close interaction with a perturber with mass m_p and consequently shrinking from a to a_ fin receives a kick <cit.> v_ rec = [ Gm_1m_2/a_ fin(m_1+m_2)m_p/m_1+m_2+m_p(1-a_ fin/a) ]^1/2= = 37.1(μ/26)^1/2(q_p/0.12)^1/2(a_ fin/1 AU)^-1/2(1 - x_ fin/0.5)^1/2, where μ = m_1m_2/(m_1+m_2) and q_p = m_p/(m_1+m_2+m_p). This equation returns a value v_ rec≃ 72 km s^-1 for the aforementioned example. This implies that as long as at least one heavy (m_p > 10) perturber remains in the cluster, Newtonian dynamics, in particular close binary-single scatterings, can represent a serious threat to the IMBH retention. Our analysis highlights the extreme importance of Newtonian dynamics in determining the evacuation of BHs from the parent cluster. §.§.§ The impact of black hole natal spins and relativistic recoil on the properties of intermediate-mass black holes In order to determine the possible properties of IMBHs and their retention probability in models, we implement the following simple model to take into account the impact of spins: * If a stellar BH involved in the IMBH build-up formed from a single star or from a “non-interacting” binary, we assign a spin of χ_ BH = 0.01 <cit.>. * In the two cases in which an IMBH forms from the collapse of a VMS assembled via stellar mergers, we assign an initial spin of 0.5. The choice is motivated by the fact that the particularly complex formation processes that lead to the IMBH formation make the IMBH natal spin practically unpredictable. We note that this choice has no effect on our results though, because both IMBHs accretes material from a stellar companion and we assume that this spin-up the IMBH as detailed in the following point. * If the IMBH feeds on a stellar companion, or if its progenitors are upper-mass gap BHs, i.e. they underwent mass accretion at some point, we assign a spin drawn from a flat distribution in the range χ_ BH = 0.8-1 <cit.>. * If the IMBH progenitor is a BH formed in a primordial binary, we assign a small spin (χ_ BH = 0.01) if it is the firstborn or a spin in the range χ_ BH = 0.1-1 <cit.> otherwise. * If the IMBH formed from a BBH merger, the IMBH spin and mass are calculated according to <cit.> fitting formulae <cit.>. Note that this model is applied in post-process to the simulation data. 
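A minimal sketch of the natal-spin assignment rules listed above is given below. The channel labels and the function interface are our own encoding of those rules, not code from the simulations, and where the text only quotes a range, e.g. χ_BH = 0.1-1, we assume a flat distribution for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def natal_spin(channel):
    """Assign a natal spin following the post-process rules listed above.

    channel is one of:
      'single'            : BH from a single star or a non-interacting binary
      'vms_collapse'      : IMBH from the collapse of a merger-built VMS
      'accreted'          : BH/IMBH that fed on a stellar companion, or whose
                            progenitors are upper-mass-gap (accreting) BHs
      'primordial_first'  : first-born BH of a primordial binary
      'primordial_second' : second-born BH of a primordial binary
    """
    if channel == 'single':
        return 0.01
    if channel == 'vms_collapse':
        return 0.5
    if channel == 'accreted':
        return rng.uniform(0.8, 1.0)       # flat in 0.8-1, as stated above
    if channel == 'primordial_first':
        return 0.01
    if channel == 'primordial_second':
        return rng.uniform(0.1, 1.0)       # range 0.1-1; flat is our assumption
    raise ValueError(f"unknown channel: {channel}")

# Example: the two progenitors of a 2g IMBH formed via a BBH merger, one
# having accreted stellar material and one born from a single star.
chi1, chi2 = natal_spin('accreted'), natal_spin('single')
print(round(chi1, 2), chi2)
```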
To keep track of the IMBH-BH merging history, we label an IMBH as first generation (1g) if it did not undergo any merger with another compact object. IMBHs formed out of VMS collapse or star-BH accretion are considered 1g. Second generation (2g) and higher generation IMBHs are those that underwent multiple mergers with other compact objects. In models, all merging companions are stellar BHs. Figure <ref> shows the masses and spins of IMBHs assuming zero GW recoil. It appears evident that, upon our assumptions, IMBHs in clusters generally form with a high spin (χ_ IMBH > 0.6), unless they form from the collapse of a VMS. Even in such a case, the accretion of matter, which likely spins-up the IMBH, occurs on a sufficiently short timescale (t≲ 8 Myr) to make rather unlikely their observation as low-spin objects. In the case of IMBHs forming via multiple BH mergers, note that the IMBH spin decreases at increasing the merger generation <cit.>. Table <ref> summarizes the main properties of IMBHs in terms of generation, masses, spins, and recoil velocity at 95% confidence level. These quantities are calculated drawing for each merging event 10,000 times the spin amplitude of the merging components and assuming for the spin directions an isotropic distribution. Looking at the Table, we see that GW recoil has no effect on the IMBH formation probability, because all IMBHs in clusters form either via stellar collapse or have a 1g BH progenitor. Nonetheless, GW recoil crucially affects second and higher generation IMBHs, which typically receive a kick, v_ GW = (200 - 800), much larger than the escape velocity from the parent cluster, typically v_ esc < 50. Therefore, the inclusion of GW recoil affects 7 out of 8 IMBHs in our simulations, avoiding both: a) the formation of IMBH-BH binaries that merge after dynamical ejection, a process involving 5 IMBHs in our sample, and b) the development of multiple BH mergers inside the cluster (2 IMBHs). The remaining IMBH is ejected from the cluster as a single object after a strong resonant interaction with other two, fairly massive (>30), BHs. As a consequence, we find that the number of merging events involving an IMBH decreases from 9 in the no-recoil case, to just 2, despite this represents the lowest value possible. The possible detection of GWs emitted from IMBH-BH binaries with future detectors, especially those operating in the deci-Hz frequency band, could help shed a light on the IMBH formation efficiency and retention probability <cit.>. §.§.§ Simulations implementing a self-consistent treatment for gravitational recoil The post-process treatment applied to simulation data provides an effective way to place constraints on the IMBH retention probability without the need to explore the wide associated parameter space. Nonetheless, a fully self-consistent simulation implementing also GW recoils would provide useful insights on, e.g. the impact of the IMBH displacement onto the development of new merging events. To prove the impact of GW recoil in a self-consistent way, we focus on the two models in which the IMBH undergoes repeated mergers, namely models IBH_Rh1.75f20N120k, which ultimately form a 4g-IMBH, and IBH_Rh0.47f20N300k, which instead leads to a 3g-IMBH. Practically speaking, we restart the simulation from the snapshot immediately before the merging event and apply to the merger remnant a kick. For simplicity, rather than extracting the kick from a distribution we assign the merger a given kick, as described below. 
Generally, we adopt a GW kick sufficiently small to ensure the IMBH retention after the merger. This choice permits us to investigate whether the IMBH can be retained in the cluster, it further grows, or it is anyway ejected owing to Newtonian or relativistic effects. *Model ID: IBH_Rh1.75f20N120k The IMBH in this model forms from the merger of two upper mass-gap BHs with masses m_ BH1+m_ BH2 = (95.5+95.8). Therefore, the IMBH is already 2g at formation, and receives a kick v_ rec > 171 at 95% confidence level (see Table <ref>). For comparison, the cluster escape velocity at the time of the merger is around v_ esc = 12. Adopting the spin model described in Section <ref>, based on stellar evolution models, we find that the IMBH has a tiny fraction (P_20<0.2%) to receive a kick v_ GW < 20. However, if the IMBH progenitors have negligible spins for some reason, for example if the IMBH progenitor is slowly rotating and the angular momentum transport is essentially driven by meridional currents <cit.>, the probability for v_ GW<20(5) rises up to 84%(21%), significantly increasing the IMBH retention probability. Therefore, we re-run the simulation and assign to the IMBH promptly after formation a GW kick of either v_ GW = 5 (small kick) or 20 (large kick). As expected, in the large kick model, the kick of v_ GW = 20 exceeds the cluster escape velocity and the IMBH promptly leaves the cluster. In the small kick model, where v_ GW = 5, the 2g-IMBH is retained in the cluster and sinks back to the cluster centre where, after a long series of interactions with other stellar BHs, captures a BH with mass m_ BH=28 and is ejected from the cluster with a velocity of just 15.3. The ejected IMBH-BH binary has an eccentricity e=0.57 and a period of P=190 days, and a corresponding merger time t_ GW∼ 10^3 Hubble times. For the sake of comparison, in the zero GW recoil model, the IMBH pairs with a BH with mass m_ BH = 40.5 and is ejected from the cluster, merging within a Hubble time (see Appendix <ref>). *Model ID: IBH_Rh0.47f20N300k Let us now consider the other model, named IBH_Rh0.47f20N300k. Since the IMBH in this model forms via stellar collisions, its mass at birth is fairly large m_ IMBH = 217. After only 17 Myr, when the cluster escape velocity is around v_ esc = 46.5, this 1g-IMBH merges with an upper mass-gap BH with mass m_ BH = 51.7. The resulting 2g-IMBH receives a GW kick with amplitude v_ kick > 99 at 95% confidence level. The probability to obtain a kick of ≃ 50 is of the order of ∼ 0.1%, regardless of the spin distribution choice. Therefore, we re-run the simulation shortly before the merger event and assign to the merger remnant either a small (v_ = 20) or large (v_ = 100) recoil kick. In the case of v_ rec=100 the merger remnant promptly leaves the cluster, as expected. In the case of v_=20, instead, the 2g-IMBH remains in the cluster core and undergoes a series of resonant interactions with two BHs, which drives the IMBH to merge after just 25.5 Myr with an upper-mass gap BH with (m_ BH,2 = 63). The 3g-IMBH, with a mass m_ 3g≃ 300, receives a kick v_ GW > 90 regardless of the amplitude and direction of progenitors' spins, hence it leaves the cluster promptly after the merging event. The impact of relativistic effects on the chaotic nature of N-body dynamics is apparent in this case: The displacement caused by the GW recoil favor the onset of the three-body interactions that led to the merger. For comparison, in the zero-kick model the two BHs never find each other. 
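The retention-probability estimates quoted in this section can be reproduced in spirit with the following sketch, which compares sampled recoil velocities with the escape-velocity scaling given earlier in the Discussion. The kick distribution itself is not modelled here: the log-normal stand-in is purely illustrative, and in practice the samples would come from the numerical-relativity fitting formulae cited in the text.

```python
import numpy as np

def v_escape(m_cl, r_h):
    """Cluster escape velocity (km/s) from the scaling quoted above:
    v_esc ~ 34 km/s (M/1e5 Msun)^(1/2) (R_h/1 pc)^(-1/2)."""
    return 34.0 * np.sqrt(m_cl / 1e5) / np.sqrt(r_h)

def retention_fraction(v_kick_samples, m_cl, r_h):
    """Fraction of merger remnants retained, i.e. P(v_GW < v_esc).
    v_kick_samples is an array of recoil velocities (km/s) drawn from
    whatever kick prescription one adopts; it is an input, not a model."""
    v_kick_samples = np.asarray(v_kick_samples)
    return np.mean(v_kick_samples < v_escape(m_cl, r_h))

# Illustrative only: a log-normal stand-in for the kick distribution of a
# near-equal-mass merger, evaluated in a 1e5 Msun, R_h = 1 pc cluster.
rng = np.random.default_rng(2)
fake_kicks = rng.lognormal(mean=np.log(150.0), sigma=0.8, size=10_000)
print(f"v_esc ~ {v_escape(1e5, 1.0):.0f} km/s, "
      f"retained fraction ~ {retention_fraction(fake_kicks, 1e5, 1.0):.2f}")
```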
§ CONCLUSION In this work we have analysed the properties of IMBHs formed in the cluster models, a suite of 19 direct N-body simulations representing star clusters initially made up of ≤ 10^6 stars, up to 33% of which initially paired in a binary. Our main results can be summarised as follows: * Out of 19 models, 8 IMBHs form in clusters, following three main formation channels: a) collapse of a VMS formed via repeated stellar mergers (2 IMBHs), b) accretion of stellar material onto stellar BHs (1), c) BH-BH mergers (5). The IMBHs have typical masses in the range m_ IMBH = (100-370). Aside IMBH seeding, the aforementioned formation channels significantly contribute to the population of BHs with masses in the upper mass-gap, for which we derive a formation efficiency of η_ gap = 3.44× 10^-5^-1 [Table <ref> and Figures <ref>-<ref>]. * Despite the small sample, we find a striking relation between the IMBH formation channel and the host cluster properties. Stellar mergers dominate IMBH formation in the densest clusters, operating on short timescale (10 Myr) and producing the most massive IMBHs (>200). Star-BH interactions and BBH mergers, instead, dominate IMBH formation in less dense clusters, showing that the looser the cluster the longer the IMBH formation time (10-300 Myr), and the larger the IMBH seed mass [Figure <ref>]. * When relativistic recoil is neglected, Newtonian dynamics represents a serious threat to IMBH retention and growth. In fact, all IMBHs are ejected from cluster through strong dynamical interactions. Nonetheless, in the Newtonian scenario some IMBHs undergo multiple IMBH-BH mergers reaching up to the fourth generation. The inclusion of GW recoil severely impacts the IMBH growth process, limiting the IMBH merger history to two generations. We implement a simple model for BH natal spins, based on stellar evolution models, to infer the IMBH mass and spins. In our fiducial model IMBHs are characterised by masses up to 376 and relatively large spins, i.e. χ_ IMBH > 0.6. The inclusion of relativistic kicks in the simulations enables a fully self-consistent description of the IMBH merging process and reveal how hard is for IMBHs to be retained in their parent clusters. Nonetheless, even in the unlikely case the IMBH receives small GW kicks and avoid ejection, our simulations confirm how chaotic and unpredictable the evolution of the post-merger IMBH can be. For example, in one simulation the inclusion of the kick can favour the merger of the IMBH with a BH more massive than in the zero GW kick case [Table <ref> and Figure <ref>]. The simulations represent one of the few numerical models <cit.> in which all the three main channels proposed for the formation of IMBHs have been confirmed. Our analysis of the database suggests that: i) IMBHs form preferentially via collapse of stellar merger products (BBH mergers) in clusters more (less) dense than 3×10^5 pc^-3, ii) have large spins at formation χ_ BH > 0.6, iii) live most of their life with a BH companion, iv) are unlikely to grow beyond a few hundred because of the efficiency of dynamical scatterings and the impact of relativistic recoil. § THE EVOLUTION AND GROWTH OF IMBHS IN DRAGON-II CLUSTERS In this section, we discuss in detail the evolutionary history of the 8 IMBHs in clusters, their main properties, and retention probability. 
In the following we indicate with BH1, 2 and with letters a, b the IMBH progenitors, and with p1, p2 the progenitors of the IMBH progenitors, in such a way that p1a,  p2a indicates the two progenitors of the primary BH that eventually led to the IMBH. All the main properties of the IMBHs are summarised in Table <ref>. *IMBH No. 1: IBH_Rh1.75f5N1000k. In one cluster model with R_=1.75 pc, f_b=0.05, N=10^6, the IMBH forms via the merger of two BHs with masses m_ BH,1 = 86.3 and m_ BH,2 = 58.9. The primary BH is the byproduct of a merger between a PPISN BH and a massive star in the HG phase m_p1a+m_p2a = (40.5 + 91.7) in a primordial binary, and we assume that it spins-up during its growth, assigning it a spin χ_ BH,1 > 0.8. The secondary BH, instead, forms from the merger of two stars in a primordial mass, with masses m_p1b+m_p2b = (37+82), with the lighter component being a naked He MS star and the heavier a star in the HG phase. We assign the companion BH a spin χ_BH,2 = 0.01. The resulting IMBH (2g) has a mass m_ 2g = 138.4^+1.8_-3.0 and spin χ_ 2g = 0.76^+0.11_-0.27, with the spin increasing at decreasing the mass. In the simulation with GW recoil disabled, the IMBH forms a binary with a BH with mass m_ BH = 40.5 — formed from a single star — and ultimately merge after being ejected outside the cluster, leading to a final IMBH (3g) with a mass m_ 3g = 174.0^+2.6_-4.6 and χ_ 3g=0.68^+0.20_-0.40. However, the GW recoil associated with the formation of the 2g-IMBH is sufficiently large (v_ GW = 150-2200) to make the retention of the IMBH and its further growth impossible. *IMBH No. 2: IBH_Rh1.75f20N120k. The second IMBH in the sample (simulation with R_=1.75 pc, f_b=0.2, N=120) forms through a BH-BH merger with component masses m_ BH,1 + m_ BH,2 = (95.5+95.8). The previous evolution of these massive BHs is rather complex. The primary forms from the accretion of a MS star with mass m_ p2a= 110 and a BH (m_ p1a=40.5) previously formed from the merger of two MS stars in a primordial binary. We thus assign the primary BH a spin χ_ BH,1=0.8-1. The secondary, instead, forms from the merging of two stars in a primordial binary during the HG phase of the heavier component. We assign the secondary BH a small spin χ_ BH,2 = 0.01. The resulting IMBH (2g) has a mass m_ 2g=181.8^+1.8_-2.7 and spin χ_ 2g = 0.72^+0.10_-0.15. When GW recoil is disabled, the IMBH undergoes a second merger with a BH with mass m_ BH,2 = 40.5 that did not experience significant mass-transfer, thus likely characterised by a low spin. After the merger, the IMBH (3g) has a mass m_ 3g = 217.8^+2.5_-4.3 and spin χ_ 3g = 0.65^+0.20_-0.45. It forms a binary that is ejected and merges outside the cluster, leaving a 4g-IMBH with final mass m_ 4g = 253.9^+2.9_-5.9 and spin χ_ 4g = 0.56^+0.28_-0.34. There is a probability of ∼ 0.2% for the GW recoil imparted on the 2g-IMBH to remain below v_ < 20, i.e. sufficiently smaller to be retained in the cluster. However, when the 3g-IMBH forms, the post-merger kick is in the range v_ GW = 35-2000, definitely larger than the cluster escape velocity. We discuss the results from a self-consistent simulation of the evolution of the 2g-IMBH in Section <ref>. *IMBH No. 3: IBH_Rh1.75f20N600k. The third IMBH forms in model with R_ = 1.75 pc, f_b=0.2, and N=600,000 through the merger of two BHs with mass m_ BH,1=74.7 and m_ BH,2 = 68.8, both being byproduct of a stellar merger event in two primordial binaries. 
We assume that both BHs have negligible spins, which leads to an IMBH (2g) with a mass m_ 2g = 136.6^+1.2_-1.9 and spin χ_ 2g = 0.72^+0.08_-0.15. The post-merger recoil is sufficiently small (v_ GW = 20-45) to retain the IMBH. The IMBH eventually merges with a BH with mass m_ BH,2 = 18 (for which χ_ BH,2 = 0.01) after being ejected from the cluster. The final IMBH (3g) has a mass m_ 3g = 152.7^+1.5_-2.4 and spin χ_ 3g=0.61^+0.22_-0.36. *IMBH No. 4: IBH_Rh0.8f20N120k. The fourth IMBH forms in model R_ = 0.8 pc, f_b = 0.2, N=120,000 from two BHs with masses m_ BH,1 = 79.8 and m_ BH,2=40.5. The primary formed from a star-BH merger in a primordial binary involving a BH m_ p1a = 40.5 and a star in the HG phase with mass m_ p2a = 78.5. We assign a spin χ_ BH,1 > 0.8 to the primary and a small spin to the secondary, which did not undergo any significant matter accretion phase. The IMBH (2g) formed this way has a mass m_ 2g = 115.6^+1.3_-3.0 and spin χ_ 2g = 0.74^+0.15_-0.36. In absence of GW recoil, the IMBH captures a BH with mass m_ BH,2 =39, which experienced mass transfer in a primordial binary, and finally merge outside the cluster. In this case, we assign to the stellar BH a spin in the 0-1 range, which leads to an IMBH (3g) with final mass m_ 3g=149.8^+2.0_-4.6 and χ_ 3g=0.67^+0.22_-0.35. The kick received by the 2g-IMBH, however, is large enough (v_ GW > 100) to kick the IMBH out before the binary can form. *IMBH No. 5: IBH_Rh0.8f20N120k. Even the fifth IMBH, which forms in model R_=0.8 pc, f_b=0.2, and N=120,000, is the byproduct of a BBH merger. The primary, with a mass m_ BH,1=80.7, forms from the merger of two MS stars, and we assume negligible spin. The companion, with a mass m_ BH,2=51.5, forms from mass transfer in a primordial binary, thus we assume that its spin is distributed in the χ_ BH,2 = 0.8-1 range. The resulting IMBH has a mass m_ 2g = 126.4^+0.7_-1.0 and spin χ_ 2g = 0.67^+0.06_-0.08. In the case of no GW recoil, the IMBH captures a BH with mass m_ BH = 30 formed from a single star (thus χ_ BH = 0.01), and the resulting binary is eventually ejected from the cluster, ultimately merging outside the cluster and leaving behind an IMBH with mass m_ 3g = 153.0^+1.4_-2.1 and spin χ_ 3g = 0.62^+0.19_-0.42. Even in this case, though, the GW kick imparted onto the 2g-IMBH (v_ GW > 60) is larger than the cluster escape velocity. *IMBH No. 6: IBH_Rh0.8f20N300k. The sixth IMBH forms in a cluster with R_=0.8 pc, f_b=0.2, and N=300,000, from the coalescence of a PPISN BH (m_ BH = 40.5, negligible spin) and a massive star in the HG phase (m_ HG=133). The IMBH, with mass m_ 1g = 107, likely spins-up during the interaction with its stellar companion. The IMBH is eventually ejected as a single object in consequence of a resonant strong scattering involving two BHs with masses m_ BH,1 = 35.2 and m_ BH,2 = 67.7. *IMBH No. 7: IBH_Rh0.47f20N120k. The seventh, and most massive, IMBH, forms in one of the most compact clusters (R_=0.47 pc, f_b=0.2, and N=120,000). A complex series of stellar mergers triggers the IMBH seeding, leading to an IMBH with mass m_ 1g = 288 that eventually collides with a massive MS star with mass m_ MS = 122. The resulting IMBH, which can be considered half-way between first and second generation, has a mass m_ 1g* = 350 and likely a large spin, χ_ 1g*∼ 0.8-1, owing to the mass accretion process. The IMBH captures a stellar BH with mass m_ BH,2 = 29 formed from a single star, for which we assume negligible spin. 
The IMBH-BH binary is eventually ejected in a strong binary-single interaction and merges outside the cluster, leading to a 2g-IMBH with mass m_ 2g = 376.5^+0.8_-3.7 and spin χ_ 2g = 0.79^+0.17_-0.27. *IMBH No. 8: IBH_Rh0.47f20N300k. The last IMBH forms in the densest cluster (R_=0.47 pc, f_b=0.2, and N=300,000). Initially, an IMBH seed with mass m_ 1g = 189 forms via subsequent mergers of massive stars. It later collides with a MS star with mass m_ MS = 51.7 and shortly after with two low mass stars, leaving behind an IMBH (1g*) with mass m_ 1g* = 217 and high-spin triggered by mass accretion. The IMBH undergoes merger with a low-spin BH with mass m_ BH = 27, forming a 2g-IMBH with a mass m_ 2g = 241.4^+0.8_-3.3 and spin χ_ 2g = 0.77^+0.18_-0.37. In absence of GW recoil, the 2g-IMBH further merge with a low-spin BH (mass m_ BH = 38) after being ejected in the cluster, leading to a 3g-IMBH characterised by m_ 3g = 275.3^+1.8_-5.4 and spin χ_ 3g = 0.63^+0.28_-0.39. When GW recoil are taken into account, the 2g-IMBH receives a kick v_ GW > 40, thus larger than the cluster escape velocity. We explore more in detail the retention of this IMBH in Section <ref>. § ACKNOWLEDGEMENTS The authors thank the referee for their constructive report and feedback. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the code, and Giuliano Iorio, Sara Rastello, and Michela Mapelli for useful comments and discussion. This work benefited of the support from the Volkswagen Foundation Trilateral Partnership through project No. 97778 “Dynamical Mechanisms of Accretion in Galactic Nuclei” and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 138713538 – SFB 881 “The Milky Way System”), and by the COST Action CA16104 “GWverse”. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC). MAS acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda). AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9B870. PB acknowledge the support within the grant No. AP14869395 of the Science Committee of the Ministry of Science and Higher Education of Kazakhstan ("Triune model of Galactic center dynamical evolution on cosmological time scale"). The work of PB was supported under the special program of the NRF of Ukraine Leading and Young Scientists Research Support - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346. RS acknowledges support by Yunnan Academician Workstation of Wang Jingxiu (No. 202005AF150025) and thanks Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits. MG was partially supported by the Polish National Science Center (NCN) through the grant No. 2021/41/B/ST9/01191. FPR acknowledge the support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930). 
§ DATA AVAILABILITY The data from the runs of these simulations and their initial models will be made available upon reasonable request to the corresponding author. The Nbody6++GPU code is publicly available[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>]. The McLuster version used in this work will soon be available. A similar version is described in <cit.>.
http://arxiv.org/abs/2307.04415v1
20230710084328
Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors
[ "Armin Lederer", "Jonas Umlauft", "Sandra Hirche" ]
eess.SY
[ "eess.SY", "cs.LG", "cs.SY", "stat.ML" ]
Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors Armin Lederer, Graduate Student Member, IEEE, Jonas Umlauft, Sandra Hirche, Fellow, IEEE, Armin Lederer, Jonas Umlauft and Sandra Hirche are with the Chair of Information-oriented Control (ITR), School of Computation, Information and Technology, Technical University of Munich, 80333 Munich, Germany (email: armin.lederer, jonas.umlauft, [email protected]). Received / Accepted ===================================================================== Due to the increasing complexity of technical systems, accurate first principle models often cannot be obtained. Supervised machine learning can mitigate this issue by inferring models from measurement data. Gaussian process regression is particularly well suited for this purpose due to its high data-efficiency and its explicit uncertainty representation, which allows the derivation of prediction error bounds. These error bounds have been exploited to show tracking accuracy guarantees for a variety of control approaches, but their direct dependency on the training data is generally unclear. We address this issue by deriving a Bayesian prediction error bound for GP regression, which we show to decay with the growth of a novel, kernel-based measure of data density. Based on the prediction error bound, we prove time-varying tracking accuracy guarantees for learned GP models used as feedback compensation of unknown nonlinearities, and show that the tracking error vanishes with increasing data density. This enables us to develop an episodic approach for learning Gaussian process models, such that an arbitrary tracking accuracy can be guaranteed. The effectiveness of the derived theory is demonstrated in several simulations. Gaussian processes, machine learning, uncertain systems, data-driven control. § INTRODUCTION For many technical systems, no or only partial first principle models are available due to their complexity or a priori unknown operating conditions. Since measurement data of such systems can typically be obtained, inferring models using supervised machine learning techniques has become increasingly popular in recent years <cit.>. In particular, Gaussian process (GP) regression <cit.> is a popular method since it is very data-efficient <cit.> and exhibits closed-form expressions for model updates allowing on-line learning <cit.>. Moreover, GP models provide an explicit measure of prediction uncertainty, which enables the confidence-based distributed aggregation of GP models <cit.>, and allows tuning the behavior of control towards curiosity <cit.> or cautiousness <cit.>. In addition to these beneficial properties, GP regression is particularly appreciated in safety-critical control due to the existence of prediction error bounds <cit.>. These bounds are typically based on the close relationship between kernel methods and GPs <cit.>, such that the reproducing kernel Hilbert space norm induced by the GP can be used as a measure of function complexity.
By combining bounds on this norm and assumptions about observation noise distributions, statistical prediction error bounds can be derived <cit.>. They can be efficiently computed on-line in an optimization-based fashion <cit.>, but data-dependent closed-form expressions also exist <cit.>. Moreover, they reduce to deterministic bounds when the observation noise is bounded <cit.>. Based on the prediction error bounds for learned GP models, tracking accuracy guarantees for a large variety of control laws have been derived. This can be achieved using Lyapunov theory, e.g., for feedback linearization <cit.>, computed torque control <cit.> and sliding mode control <cit.>, by extending stability properties of nominal model predictive control, e.g., using continuity arguments <cit.>, or robust linear control, e.g., through integral quadratic constraints <cit.>. However, these approaches suffer from the crucial drawback that accuracy guarantees are global, even though the prediction error bounds from GP models are state-dependent. Therefore accuracy guarantees can be very loose in cases with inhomogeneously distributed training data over the state space. In such a case, the guarantees would be dominated globally by the most conservative bound derived from the region with the fewest training data. In general, the data dependency of such accuracy guarantees for model-based control methods has barely been analyzed in detail. While it can be shown for feedback linearization with event-triggered on-line learning that the tracking error vanishes with growing noise-free data set <cit.>, similar results for noisy data do not exist. Moreover, this result is limited to feedback linearizing controllers to the best of our knowledge and does not extend to other approaches. Finally, on-line learning with GPs can be realized using suitable approximations in principle <cit.>, but it remains computationally expensive, such that it is not applicable to systems with limited computational resources. The computationally less demanding approach of episodic, off-line learning has been investigated in the context of optimization-based controller tuning approaches <cit.>, which can be shown to provide data-dependent performance guarantees due to the close relationship to Bayesian optimization <cit.>. While these guarantees can be extended to model-based reinforcement learning <cit.>, they strongly rely on the solved optimization problems, such that they do not generalize to a wider class of control techniques. Therefore, no guarantees and conditions for the convergence of accuracy guarantees for model-based control laws employing GP models exist to the best of our knowledge. Consequently, it is an open question how we can learn a GP model in order to ensure a desired tracking error bound with such learning-based controllers. §.§ Contribution and Structure The main contribution of this article is a novel episodic learning approach for GP models in order to ensure arbitrary tracking accuracy when the GP is used to compensate unknown nonlinearities in control. Such nonlinearities can be found in a wide range of applications ranging from underwater vehicles, where unmodeled hydrodynamic forces due to currents can appear <cit.>, to physical human-robot interaction, where humans introduce generally unknown torques <cit.>. For the development of this approach, we first derive an easily interpretable prediction error bound for GPs by exploiting their Bayesian foundations. 
In order to allow its straightforward computation, we provide probabilistic Lipschitz bounds for unknown functions based on the GP prior. Based on these results, we propose a kernel-based measure to evaluate the training data density, whose flexibility we demonstrate by exemplarily illustrating it for squared exponential (SE), Matérn class and linear kernels. Moreover, we show that prediction error bounds directly depend on this data density measure, which allows us to prove vanishing prediction errors with growing data density. Based on this analysis of the GP prediction error, we derive a novel, data density-dependent tracking error bound for control laws in linear systems which employ the GP model for compensation of an unknown nonlinearity. Finally, we extend these accuracy guarantees to establish a direct relationship with the proposed data density measure, which allows us to develop an episodic approach for learning a GP model ensuring a specified tracking error bound. This article is based on our prior work <cit.>, which purely focuses on the derivation of probabilistic prediction error bounds depending on the posterior variance of Gaussian processes. It significantly extends these preliminary results by establishing a direct relationship between the training data density and prediction error bounds. Due to this relationship, we can bound the tracking error of linear systems with an unknown nonlinearity compensated by a learned model directly in terms of the data density. This allows us to actively generate training data for achieving arbitrary tracking accuracy in an episodic approach, while <cit.> only bounds the tracking error of feedback linearizing controllers with models learned from a given data set. Therefore, we extend the analysis framework from our prior work <cit.> to a design method. The remainder of this article is structured as follows: We briefly introduce Gaussian process regression and formalize the considered problem setting in <ref>. In <ref>, we derive a novel Bayesian prediction error bound for GP regression and provide methods to determine all relevant parameters based on the prior distribution. We develop a kernel-dependent measure of data density and establish a straightforward relationship to the GP variance, which allows us to investigate the asymptotic behavior of the error bound with increasing data set size in <ref>. In <ref>, we exploit these results to derive time-varying and time-independent tracking error guarantees, which we exploit to develop a novel episodic learning algorithm for ensuring arbitrary tracking accuracy. Finally, in <ref>, we evaluate the developed theoretical framework in different simulations to demonstrate its effectiveness, before we conclude the paper in <ref>. §.§ Notation Vectors/matrices are denoted by lower/upper case bold symbols, the n× n identity matrix by I_n, the Euclidean norm by ·, and λ_min(A) and λ_max(A) the minimum and maximum real parts of the eigenvalues of a matrix A, respectively. Sets are denoted by upper case black board bold letters, and sets restricted to positive/non-negative numbers have an indexed +/+,0, e.g., ℝ_+ for all positive real valued numbers. The cardinality of sets is denoted by |·| and subsets/strict subsets are indicated by . Class 𝒪 notation is used to provide asymptotic upper bounds on functions. The ceil and floor operator are denoted by ⌈·⌉ and ⌊·⌋, respectively. The Gaussian distribution with mean μ∈ℝ and variance σ^2∈ℝ_+ is denoted by 𝒩(μ,σ^2). 
A chi-squared distribution with N degrees of freedom is denoted by χ^2_N. The expectation operator E[·] can have an additional index to specify the considered random variable. Finally, a function α:ℝ_0,+→ℝ_0,+ is in class 𝒦_∞ if it is monotonically increasing and α(0)=0, lim_x→∞α(x)=∞. =-1 § PRELIMINARIES AND PROBLEM SETTING In this paper, we consider the problem of controlling linear systems perturbed by an unknown nonlinearity such that they track reference trajectories with a prescribed accuracy. In order to achieve this, we employ models learned via Gaussian process regression as compensation. Therefore, we first introduce the fundamentals of Gaussian process regression in <ref>, before we formalize the problem setting in <ref>. §.§ Gaussian Process Regression A Gaussian process is a stochastic process such that any finite number of outputs, N∈ℕ, is assigned a joint Gaussian distribution with prior mean function m:ℝ^d→ℝ and covariance defined through the kernel k:ℝ^d×ℝ^d→ℝ <cit.>. Without loss of generality, we assume m(·) to equal 0 in the following. In order to perform regression with Gaussian processes, they are considered as a a prior distribution. This allows to employ Bayes' theorem to calculate the posterior distribution given a training data set 𝔻={(x^(n),y^(n)}_n=1^N consisting of N inputs x^(n)∈ℝ^d and targets y^(n)∈ℝ, which are Gaussian perturbed measurements of an unknown function f:ℝ^d→ℝ, i.e., y^(n)=f(x^(n))+ϵ^(n), ϵ^(n)∼𝒩(0,σ_on^2), σ_on^2∈ℝ_+. Due to the properties of Gaussian distributions, the posterior is again a Gaussian process, which yields the posterior mean μ(·) and variance σ^2(·) functions μ(x) =k^T(x)( K+σ_on^2I_N)^-1y, σ^2(x) =k(x,x)-k^T(x)(K+σ_on^2I_N)^-1k(x), where we define the kernel matrix K and the kernel vector k(x) through K_ij=k(x^(i),x^(j)) and k_i(x)=k(x,x^(i)), respectively, with i,j=1,…,N, and y = [y^(1)⋯ y^(N)]^T. §.§ Problem Formulation We consider single-input linear dynamical systems with nonlinear input perturbation of the form ẋ=Ax+b(u+f(x)) with initial condition x(0)=x_0∈𝕏⊆ℝ^d and scalar control input u:ℝ_0,+→𝕌⊆ℝ. The matrix A∈ℝ^d× d and vector b∈ℝ^d are assumed to be known, while we consider f:𝕏→ℝ to be an unknown nonlinearity. This system structure covers a wide range of practical systems and can represent, e.g., systems controlled via approximate feedback linearization <cit.> or backstepping controllers for certain classes of dynamics <cit.>. Note that we merely consider the restriction to single-input systems for notational convenience, but our derived results can be easily generalized to multi-input dynamics. The considered task is to track a bounded reference trajectory x_ref:ℝ_0,+→ℝ^d with the state x(t). In order to enable the accurate tracking of the reference trajectory x_ref(·), we restrict ourselves to references of the form ẋ_ref=Ax_ref+br_ref, where r_ref:ℝ_0,+→ℝ is a reference signal. For tracking the reference trajectory, we can employ a control law u = θ^T(x-x_ref)+r_ref-f̂(x), where θ∈ℝ^d is a control gain vector and f̂:𝕏→ℝ is a model of the unknown nonlinear perturbation f(·). This control law leads to closed-loop dynamics of the tracking error e(t)=x(t)-x_ref(t) given by ė=A_θe + b(f(x)-f̂(x)), where A_θ=A-bθ^T. In order to ensure the stability of these dynamics in the case of exact model knowledge f(x)=f̂(x), we employ the following assumption on A_θ. 
The matrix A_θ has distinct and non-positive eigenvalues, which decrease monotonically with the parameters θ, i.e., there exists a class 𝒦_∞ function α:ℝ_0.+→ℝ_0,+ such that λ_max(A_θ)≤-α(θ). This assumption essentially requires the controllability of the pair (A,b) <cit.>, which allows the eigenvalues of the matrix A_θ to be considered as design parameters, e.g., using methods such as pole placement. Since controllability is a common requirement in linear systems theory, <ref> is not restrictive. Note that the requirement of distinct eigenvalues is only required to simplify the presentation in the following sections by ensuring diagonalizability of A_θ, but can be avoided by generalizing the derivations using Jordan blocks <cit.>. While <ref> ensures that the error dynamics (<ref>) do not diverge, the tracking precision crucially relies on the accuracy of the model f̂(·). Therefore, we assume to learn it from measurements (x^(n),y^(n)) using Gaussian process regression, such that we can use f̂(x)=μ(x) in the control law (<ref>). Since this merely leads to an approximate compensation of the nonlinearity, exact tracking cannot be ensured in general. Therefore, we consider the problem of learning a Gaussian process model of f(·), such that the tracking error is guaranteed to be probabilistically bounded by a prescribed constant e̅∈ℝ_+, i.e., ℙ(x(t)-x_ref(t)≤e̅,  ∀ t≥ 0)≥ 1-δ for δ∈(0,1). Due to the complexity of this problem, we decompose it into the subproblems of deriving a probabilistic error bound for Gaussian process regression, analyzing the dependency of the error bounds on the training data density, and developing an approach for generating training data with sufficiently high density, such that the prescribed tracking error bound e̅ is satisfied. These subproblems are described in more detail in the following. §.§.§ Probabilistic Regression Error Bounds In order to be able to ensure any bound for the tracking error x-x_ref, it is necessary to find an upper bound for the learning error f(x(t))-μ(x(t)) along the system trajectory x(t). Since we do not know the exact system trajectory x(t) in advance, we consider the problem of bounding the regression error in a compact domain 𝕏⊂ℝ^d. Since the bound must hold jointly for all states x in the domain 𝕏, we refer to it as probabilistic uniform error bound, which is formally defined as follows. Gaussian process regression exhibits a uniformly bounded prediction error on a compact set 𝕏⊂ℝ^d with probability 1-δ if there exists a function η:𝕏→ℝ_0,+ such that P( |f(x)-μ(x)|≤η(x), ∀x∈𝕏)≥ 1-δ. In general, we cannot expect to guarantee a uniformly bounded regression error without any regularity assumptions about the unknown function f(·). Due to the Bayesian foundation of Gaussian processes, we employ their prior distribution for this purpose, which we formalize in the following assumption. The unknown function f(·) is a sample from the Gaussian process 𝒢𝒫(0,k(x,x')). This assumption, which has similarly been used in, e.g., <cit.>, has a twofold implication. On the one hand, it specifies the admissible functions for regression via the space of sample functions, which depends on the employed kernel k(·,·). For example, it is straightforward to see that polynomial kernels can be used to learn polynomial functions of the same degree. Moreover, it is well known that the sample space of GPs with squared exponential kernel contains all continuous functions <cit.>. 
Therefore, choosing a suitable kernel for ensuring that the unknown function lies in the space of sample functions is usually not a challenging problem in practice. On the other hand, <ref> induces a weighting between possible sample functions due to the Gaussian process probability density. Since we base the derivation of the uniform error bound on this weighting, an unknown function f(·) with low prior probability density would lead to sets {f'(·): |f'(x)-μ(x)|≤η(x) } with a high probability under the GP prior, even though they do not contain the unknown function f(·). Hence, the true function f(·) should have a high probability density under the GP prior. This can be efficiently achieved in practice using suitable kernel tuning methods, e.g., <cit.>, or via a re-calibration of the probability distribution after training <cit.>. Therefore, ensuring a suitable prior distribution is not a severe limitation, such that <ref> is not restrictive in practice. §.§.§ Dependency of Error Bounds on Data Density After a probabilistic uniform error bound η(·) has been derived, we consider the problem of deriving conditions for the training data 𝔻 which ensure that the error bound η(·) stays below a desired value η̅∈ℝ_+. This requires the design of a suitable measure of data density ρ:𝕏→ℝ_+, which reflects the dependency of the error bound η(·) on the data distribution. Therefore, the measure ρ(·) must consider the information structure of the GP induced by the employed kernel k(·,·). Based on the derived density measure ρ(·), the problem of ensuring a learning error bound η̅ reduces to showing that the existence of a lower bound ρ∈ℝ_+ for the data density ρ(·) leads to the implication ρ(x)≥ρ ⇒ η(x)≤η̅(ρ). As we want to be able to ensure arbitrary small learning error bounds η̅(ρ), it must additionally hold that lim_ρ→∞η̅(ρ)=0. §.§.§ Data Generation for Guaranteed Tracking Accuracy Finally, we consider the problem of developing an episodic approach for training data generation, which achieves the necessary data density ρ(·) to ensure the satisfaction of the tracking error bound (<ref>). Firstly, this requires the derivation of a tracking error bound, such that for a given learning error bound η̅, we have η(x_ref(t))≤η̅ ⇒ ℙ(x(t)-x_ref(t)≤υ̅(η̅))≥ 1-δ for some function υ̅:ℝ_0,+→ℝ_0,+. Similarly as in (<ref>), this bound must also vanish asymptotically, i.e., lim_η̅→ 0υ̅(η̅) = 0, in order to admit arbitrarily small tracking error guarantees. Using this tracking error bound and the derived dependency of the learning error bound η(·) on the data density ρ(·), the problem of developing a data generation approach simplifies to finding an episodic roll-out strategy satisfying ρ_i+1>ρ_i, lim_i→∞ρ_i = ∞, where the index i is used to denote the roll-out episode. This ensures that there exists a finite number of episodes N_E∈ℕ such that υ̅(η̅(ρ_N_E))≤e̅. Therefore, finding a roll-out strategy ensuring (<ref>) solves the overall problem of learning a Gaussian process model of f(·) such that a prescribed error bound e̅ is satisfied. § PROBABILISTIC UNIFORM ERROR BOUND In this section, we derive an easily computable uniform error bound for Gaussian process regression based on the prior distribution addressing the problem described in <ref>. We first present the uniform error bound and approaches to compute its parameters in <ref>. 
Since the bound also relies on the Lipschitz constant of the unknown function, which is not always known a priori, we show how a probabilistic Lipschitz constant can be derived from the prior Gaussian process distribution in <ref>.

§.§ Uniform Error Bound based on Lipschitz Continuity

Since the prior Gaussian process induces a probability distribution for each point in a compact set 𝕏, we can discretize this set and exploit standard tail bounds for Gaussian distributions to obtain point-wise error bounds <cit.>. If all involved functions are continuous, we can straightforwardly extend these point-wise guarantees, yielding the uniform error bound presented in the following. Consider a zero mean prior Gaussian process defined on a compact set 𝕏 and let f:𝕏→ℝ be a continuous unknown function with Lipschitz constant L_f which satisfies <ref>. Assume the GP posterior mean μ(·) and standard deviation σ(·) are continuous with Lipschitz constant L_μ and modulus of continuity ω_σ(·). Moreover, pick δ∈ (0,1), τ∈ℝ_+ and set β_𝕏(τ) =2log(M(τ,𝕏)/δ), γ(τ) =( L_μ+L_f)τ+√(β_𝕏(τ))ω_σ(τ), where M(τ,𝕏) denotes the τ-covering number of 𝕏[The τ-covering number of a set 𝕏 is the smallest number such that there exists a set 𝕏_τ satisfying |𝕏_τ|=M(τ,𝕏) and ∀x∈𝕏 there exists x'∈𝕏_τ with x-x'≤τ.]. Then, the prediction error is uniformly bounded with probability of at least 1-δ on 𝕏 with bound η(x)=√(β_𝕏(τ))σ(x)+γ(τ). We prove the probabilistic uniform error bound by exploiting the continuity of the posterior mean, the posterior standard deviation and the unknown function together with the fact that for every grid 𝕏_τ with |𝕏_τ| grid points and max_x∈𝕏min_x'∈𝕏_τx-x'≤τ it holds with probability of at least 1-|𝕏_τ|e^-β_𝕏(τ)/2 that <cit.> |f(x)-μ(x)|≤√(β_𝕏(τ))σ(x) ∀x∈𝕏_τ. Choosing β_𝕏(τ)=2log(|𝕏_τ|/δ), the inequality |f(x)-μ(x)|≤√(β_𝕏(τ))σ(x) ∀x∈𝕏_τ holds with probability of at least 1-δ. Due to continuity of f(x), μ(x) and σ(x) we obtain min_x'∈𝕏_τ|f(x)-f(x')| ≤τ L_f ∀x∈𝕏, min_x'∈𝕏_τ|μ(x)-μ(x')| ≤τ L_μ ∀x∈𝕏, min_x'∈𝕏_τ|σ(x)-σ(x')| ≤ω_σ(τ) ∀x∈𝕏. Moreover, the minimum number of grid points satisfying (<ref>) is given by the covering number M(τ,𝕏). Hence, we obtain P(|f(x)-μ(x)|≤√(β_𝕏(τ))σ(x)+γ(τ),  ∀x∈𝕏)≥ 1-δ, for β_𝕏(τ) and γ(τ) defined in (<ref>) and (<ref>), respectively. The virtual grid constant τ used in (<ref>) and (<ref>) balances the effect of the state space discretization and the inherent uncertainty measured by the posterior standard deviation σ(·). Therefore, γ(τ) can be made arbitrarily small by choosing a sufficiently fine virtual grid. This in turn increases β_𝕏(τ) and thus the effect of the posterior standard deviation σ(·) on the bound. However, β_𝕏(τ) depends merely logarithmically on τ, such that even poor Lipschitz constants L_μ, L_f and moduli of continuity ω_σ(·) can be easily compensated by small virtual grid constants τ. Since the standard deviation σ(·) varies within the state space 𝕏, an optimal virtual grid constant τ, which minimizes the expression √(β_𝕏(τ))σ(x)+γ(τ) for all x∈𝕏, does not exist in general. While simple approaches such as choosing τ such that γ(τ) is negligible for all x∈𝕏 provide satisfactory results in our simulations, more complex approaches remain open research questions. It is important to note that most of the parameters in <ref> do not require a difficult analysis, such that the bound (<ref>) can be directly evaluated. While the computation of the exact covering number M(τ,𝕏) is a difficult problem for general sets 𝕏, it can be easily upper bounded as illustrated in <ref>.
For this reason, we overapproximate the set 𝕏 through a d-dimensional hypercube 𝕏̃ with edge length r. Then, the covering number of 𝕏̃ is bounded by <cit.> M(τ,𝕏̃)≤(r√(d)/2τ)^d, which is by construction also a bound for the covering number of 𝕏, i.e., M(τ,𝕏)≤(r√(d)/2τ)^d. The Lipschitz constant L_μ of the posterior mean in (<ref>) can be straightforwardly bounded when the prior Gaussian process has a Lipschitz continuous kernel, as shown in the following lemma. Consider a zero mean prior Gaussian process defined through the L_k-Lipschitz kernel k(·,·). Then, its posterior mean μ(·) is continuous with Lipschitz constant=-1 L_μ ≤ L_k√(N) (K+σ_on^2I_N)^-1y. The norm of the difference between the posterior mean μ(x) evaluated at two different points is given by μ(x)-μ(x') = (k(x)-k(x')) α, with α=(K+σ_on^2I_N)^-1y. Due to the Cauchy-Schwarz inequality and the Lipschitz continuity of the kernel we obtain μ(x)-μ(x') ≤ L_k√(N)αx-x', which proves Lipschitz continuity of the mean μ(x). Moreover, the assumption of a Lipschitz continuous kernel also suffices to compute the modulus of continuity ω_σ(·) for the posterior standard deviation in (<ref>), as shown in the following lemma.=-1 Consider a zero mean prior Gaussian process defined through the L_k-Lipschitz kernel k(·,·). Then, its posterior standard deviation σ^2(·) is continuous with modulus of continuity=-1 ω_σ(τ) ≤√(2L_kτ). The difference between two different evaluations of the posterior standard deviation is bounded by |σ(x)-σ(x')|≤ d_k(x,x') as shown in <cit.>, where the kernel metric is defined as d_k(x,x')=√(k(x,x)+k(x',x')-2k(x,x')). Due to Lipschitz continuity of the kernel, we have d_k(x,x')≤√(2L_kx-x'), which concludes the proof. For the special case of stationary kernels , the convergence rate of the modulus of continuity ω_σ(·) can even be improved, as shown in the following. Consider a zero mean prior Gaussian process defined through the stationary, L_k-Lipschitz kernel k(·,·). Then, its posterior standard deviation σ(·) is continuous with modulus of continuity ω_σ(τ)=L_στ, where =-1 L_σ = sup_x-x'∈𝕏√(1/2k(0)-2k(x-x'))∇ k(x-x'). For stationary kernels, we can express the kernel metric as d_k(x,x')=d_k(x-x')=√(2k(0)-2k(x-x')). The simplified kernel metric is only a function of x-x', such that the supremum of the norm of the derivative of d_k(·,·) with respect to x-x' is the Lipschitz constant of σ(·). This derivative directly follows from the chain rule of differentation as ∇ d_k(x-x') = √(1/2k(0)-2k(x-x'))∇ k(x-x'), which concludes the proof. While computing the Lipschitz constant L_σ requires the computation of a supremum in general, this optimization problem can be straightforwardly solved analytically for specific kernel choices, e.g., squared exponential kernels <cit.>. Thereby, it allows the efficient computation of a tight modulus of continuity. The remaining open parameter in (<ref>) is the Lipschitz constant L_f of the unknown function f(·). In many applications, in particular in control, rough knowledge of the unknown function is known in advance, which can allow to specify L_f. Even if this constant is a rather poor estimate of the true Lipschitz constant, conservative estimates are not a crucial issue as discussed after <ref>. If no such knowledge of the unknown function f(·) is available, the prior Gaussian process distribution can be employed to derive a probabilistic Lipschitz constant as shown in the following section. 
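All quantities appearing in <ref> can thus be evaluated numerically once a kernel is fixed. As a purely illustrative sketch, the following Python snippet computes the posterior and the bound η(·) for a squared exponential kernel on a hypercube, combining the covering number bound and the Lipschitz results derived above; the function names, the assumed closed-form Lipschitz constant of the SE kernel and all numerical values are illustrative choices made only for this example and are not part of the formal development.

import numpy as np

def se_kernel(A, B, sf2=1.0, ell=1.0):
    # squared exponential kernel matrix with k(a, b) = sf2 * exp(-||a - b||^2 / (2 ell^2))
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return sf2 * np.exp(-0.5 * np.maximum(d2, 0.0) / ell**2)

def gp_posterior(X, y, Xs, sf2=1.0, ell=1.0, sn2=0.01):
    # posterior mean, standard deviation and weight vector (K + sn2 I)^{-1} y
    K = se_kernel(X, X, sf2, ell) + sn2 * np.eye(X.shape[0])
    ks = se_kernel(X, Xs, sf2, ell)
    alpha = np.linalg.solve(K, y)
    mu = ks.T @ alpha
    var = sf2 - np.sum(ks * np.linalg.solve(K, ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0)), alpha

def uniform_error_bound(sigma, alpha, N, d, r, tau, delta, L_f, sf2=1.0, ell=1.0):
    # beta_X(tau) with the hypercube covering number bound M(tau, X) <= (r sqrt(d) / (2 tau))^d
    beta = 2.0 * (d * np.log(r * np.sqrt(d) / (2.0 * tau)) - np.log(delta))
    L_k = sf2 * np.exp(-0.5) / ell                    # assumed Lipschitz constant of the SE kernel
    L_mu = L_k * np.sqrt(N) * np.linalg.norm(alpha)   # posterior mean Lipschitz constant (lemma above)
    omega = np.sqrt(2.0 * L_k * tau)                  # modulus of continuity of sigma (lemma above)
    gamma = (L_mu + L_f) * tau + np.sqrt(beta) * omega
    return np.sqrt(beta) * sigma + gamma              # eta(x) = sqrt(beta) sigma(x) + gamma(tau)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 2))                  # training inputs on [-3, 3]^2
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)       # noisy observations of an example function
Xs = rng.uniform(-3.0, 3.0, size=(200, 2))                 # test locations
mu, sigma, alpha = gp_posterior(X, y, Xs)
eta = uniform_error_bound(sigma, alpha, N=50, d=2, r=6.0, tau=1e-2, delta=0.01, L_f=1.0)

In practice, σ(·) would be evaluated on the region of interest and τ chosen such that γ(τ) is negligible, as discussed after <ref>.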
§.§ Probabilistic Lipschitz Constants for Gaussian Processes In order to derive a probabilistic Lipschitz constant L_f of the unknown function f(·) from the prior Gaussian process distribution, we exploit the fact that the derivative of a Gaussian process is again a Gaussian process. Therefore, Lipschitz constants can be obtained by adapting results from the well-studied theory of suprema of Gaussian processes. This yields the following lemma, which is based on the metric entropy criterion <cit.>. Consider a Gaussian process with a continuously differentiable covariance function k(·,·) and let L_k denote its Lipschitz constant on the compact set 𝕏 which is included in a cube with edge length r. Then, the expected supremum of a sample function f(·) of this Gaussian process satisfies E[sup_x∈𝕏f(x)]≤ 12√(6d)max{max_x∈𝕏√(k(x,x)),√(rL_k)}. We prove this lemma by making use of the metric entropy criterion for the sample continuity of Gaussian processes <cit.>. This criterion allows to bound the expected supremum of a sample function f(·) by E[ sup_x∈𝕏f(x) ]≤∫_0^max_x∈𝕏√(k(x,x))√(log(N_k(ϱ,𝕏)))dϱ, where N_k(ϱ,𝕏) is the ϱ-packing number of 𝕏 with respect to the kernel metric (<ref>). Instead of bounding the ϱ-packing number, we bound the ϱ/2-covering number, which is known to be an upper bound of the packing number. The covering number can be easily bounded by transforming the problem of covering 𝕏 with respect to the metric d_k(·,·) into a coverage problem in the original metric of 𝕏. For this reason, define ψ(ϱ')=sup_x,x' ∈𝕏 x-x' _∞≤ϱ' d_k(x,x'), which is continuous due to the continuity of the covariance kernel k(·,·). Consider the inverse function ψ^-1(ϱ)=inf{ϱ'>0: ψ(ϱ')>ϱ}. Continuity of ψ(·) implies ϱ=ψ(ψ^-1(ϱ)). In particular, this means that we can guarantee d_k(x,x')≤ϱ/2 if . Due to this relationship it is sufficient to construct a uniform grid with grid constant 2ψ^-1(ϱ/2) in order to obtain a ϱ/2-covering net of 𝕏. Furthermore, the cardinality of this grid is an upper bound for the ϱ/2-covering number, such that we obtain N_k(ϱ,𝕏)≤⌈r/2ψ^-1(ϱ/2)⌉^d. Due to the Lipschitz continuity of the covariance function, we can bound ψ(·) by ψ(ϱ')≤√(2L_kϱ'). Hence, the inverse function satisfies ψ^-1(ϱ/2)≥(ϱ/2√(2L_k))^2 and consequently N_k(ϱ,𝕏)≤(1+4rL_k/ϱ^2)^d holds, where the ceil operator is resolved through the addition of 1. Substituting this expression in the metric entropy bound (<ref>) yields E[sup_x∈𝕏f(x)]≤ 12√(d)∫_0^max_x∈𝕏√(k(x,x))√(log(1+4rL_k/ϱ^2))dϱ. As shown in <cit.> this integral can be bounded by √(6)max{max_x∈𝕏√(k(x,x)), √(rL_k)}, which concludes the proof. While <ref> provides a bound merely for the expected supremum of a sample function, a high probability bound for the supremum can be obtained using the Borell-TIS inequality <cit.>. This is shown in the following result. Consider a Gaussian process with a continuously differentiable covariance function k(·,·). Then, with probability of at least 1-δ_L the supremum of a sample function f(·) of this Gaussian process is bounded by f_sup(δ_L,k(·,·),r)= √(2log( 1/δ_L))max_x∈𝕏√(k(x,x)) +12√(6d)max{max_x∈𝕏√(k(x,x)), √(rL_k)}. We prove this lemma by exploiting the wide theory of concentration inequalities to derive a bound for the supremum of the sample function f(x). We apply the Borell-TIS inequality <cit.>, which ensures for arbitrary c∈ℝ_0,+ that P( sup_x∈𝕏f(x)- E[ sup_x∈𝕏f(x) ] ≥ c )≤exp( -c^2/2max_x∈𝕏 k(x,x)). Due to <ref> we can directly bound E[sup_x∈𝕏f(x)]. 
Therefore, the lemma follows from substituting (<ref>) in (<ref>) and choosing c=√(2log( 1/δ_L))max_x∈𝕏√(k(x,x)). Since the derivatives of sample functions from Gaussian processes with sufficiently smooth kernels are the sample functions of the derivative Gaussian processes <cit.>, <ref> directly allows to compute a high probability Lipschitz constant for the unknown function f(·) from the prior Gaussian process distribution. This is summarized in the following Theorem. Consider a zero mean Gaussian process defined through the covariance kernel k(·,·) with continuous partial derivatives up to the fourth order and partial derivative kernels k^∂ i(x,x') =∂^2/∂ x_i∂ x_i' k(x,x') ∀ i=1,…, d. Then, a sample function f(·) of the Gaussian process is almost surely continuous on 𝕏 and with probability of at least 1-δ_L, L_f≤L̂_f=[ f_sup(δ_L/2d,k^∂ 1(·,·),r); ⋮; f_sup(δ_L/2d,k^∂ d(·,·),r) ] for f_sup(·,·,·) defined in (<ref>). Continuity of the sample function f(x) follows directly from <cit.>. Furthermore, this theorem guarantees that the derivative functions ∂/∂ x_if(x) are samples from derivative Gaussian processes with covariance functions k^∂ i(x,x'). Therefore, we can apply <ref> to each of the derivative processes and obtain with probability of at least 1-δ_L/d sup_x∈𝕏|∂/∂ x_if(x)| ≤ f_sup(δ_L/2d,k^∂ i(·,·),r). Applying the union bound over all partial derivative processes i=1,…,d finally yields the result. Since many practically employed kernels such as, e.g., the squared exponential, the Matern 5/2, satisfy the required smoothness assumption of <ref>, this assumption does not pose a severe restriction. Therefore, this theorem allows to straightforwardly determine high probability Lipschitz constants for the unknown function f(·), which can be directly used in <ref>, while barely requiring additional assumptions. § DATA DEPENDENCY OF LEARNING ERROR BOUNDS In order to derive conditions for ensuring that the learning error bound in <ref> is below a given threshold as described <ref>, we need to analyze its dependency on the training data density. For this purpose, we investigate the decay behavior of the probabilistic uniform error bound (<ref>) depending on the decrease rate of the GP standard deviation in <ref>. A kernel-dependent measure of data density is proposed in <ref> in order to bound the decrease rate of the GP standard deviation. Finally, it is shown in <ref> how the kernel-dependent density measure can be bounded using straightforwardly computable Euclidean distances. §.§ Asymptotic Bounds for the Learning Error Since the probabilistic uniform error bound (<ref>) consists of two summands, a vanishing posterior standard deviation σ(x) is not by itself sufficient to guarantee a decreasing value of η(x). Therefore, it is necessary to additionally vary the parameter τ, such that γ(τ) decreases with growing number of training samples N. Even though this leads to a growing value of β_𝕏(τ), it ensures an asymptotically vanishing learning error bound in the limits N→∞ and σ(x)→ 0 as shown in the following theorem. Consider a zero mean Gaussian process defined by the continuously differentiable kernel k(·,·). Let f:𝕏→ℝ be a continuous unknown function with Lipschitz constant L_f on the compact domain 𝕏 which satisfies <ref>. Then, for τ∈𝒪(1/N), the learning error asymptotically behaves as η(x)∈𝒪(√(log(N/δ))σ(x)+1/N). Due to Theorem <ref> with suitable value of β_𝕏(τ) it holds that sup_x∈𝕏|f(x)-μ(x)|≤√(β_𝕏(τ))σ(x)+γ(τ) with probability of at least 1-δ/2 for δ∈(0,1). 
A trivial bound for the covering number can be obtained by considering a uniform grid over the cube containing 𝕏. This approach leads to M(τ,𝕏)≤(r√(d)/2τ)^d. Therefore, we have β_𝕏(τ)≤ 2dlog(r√(d)/2τ)-2log(δ). In order to derive a bound for γ(τ), we employ the bounds for the Lipschitz constants and modulus of continuity. The Lipschitz constant L_μ in (<ref>) is bounded by L_μ ≤ L_k√(N) (K+σ_on^2I_N)^-1y due to <ref>. Since the Gram matrix K is positive semidefinite and f(·) is bounded by some f̅ due to Lipschitz continuity and a compact domain 𝕏, we can bound (K+σ_on^2I_N)^-1y by (K+σ_on^2I_N)^-1y ≤y/λ_min(K+σ_on^2I_N) ≤√(N)f̅ +ϵ/σ_on^2, where ϵ is a vector of N i.i.d. zero mean Gaussian random variables with variance σ_on^2. Therefore, it follows that ϵ^2/σ_on^2∼χ_N^2. Due to <cit.>, with probability of at least 1-exp(-log(2/δ)) we have ϵ^2≤(2√(Nlog(2/δ))+2log(2/δ)+N)σ_on^2. Hence, the Lipschitz constant of the posterior mean function μ(·) satisfies with probability of at least 1-δ/2 L_μ≤ L_kNf̅+√(N(2√(Nlog(2/δ))+2log(2/δ)+N))σ_on/σ_on^2. It can clearly be seen that the fastest growing term is increasing linearly, such that it holds that L_μ∈𝒪(N) with probability of at least 1-δ/2. The modulus of continuity in (<ref>) can be bounded by ω_σ(τ)≤√(2L_kτ) due to <ref>. Since the unknown function f(·) is assumed to admit a Lipschitz constant L_f, we obtain γ(τ)≤ L_kτNf̅+√(N(2√(Nlog(2/δ))+2log(2/δ)+N))σ_on/σ_on^2 +√(2β_𝕏(τ)L_kτ) +L_fτ. with probability of at least 1-δ/2 by substituting (<ref>) and (<ref>) into (<ref>). In order to admit asymptotically vanishing error bounds, (<ref>) must converge to 0 for N→∞, which is only ensured if τ decreases faster than 𝒪(1/N). Therefore, set τ∈𝒪(1/N) in order to guarantee γ_N(τ)∈𝒪( 1/N). However, this choice of τ implies that β_𝕏(τ)∈𝒪(log(N/δ)) due to (<ref>). Therefore, it directly follows that √(β_𝕏(τ))σ(x)+γ(τ)∈𝒪(√(log(N/δ))σ(x)+1/N), which concludes the proof. Due to the linear dependency of the bound for the Lipschitz constant L_μ on the number of training samples, the virtual grid constant must decay faster than 𝒪(1/N). This in turn leads to a logarithmic growth of β_𝕏(τ), which causes the √(log(N)) increase of the scaling factor of the posterior standard deviation σ(x). Note that this is a common phenomenon in uniform error bounds for GP regression and can also be found in RKHS based approaches, where similar bounds as (<ref>) are used to bound the effect of the noise <cit.>. §.§ Asymptotic Bounds for the Posterior Variance In order to compensate the growth of the scaling factor in <ref>, a sufficiently fast decay of the standard deviation σ(x) must be ensured. Therefore, we investigate the behavior of the posterior variance σ^2(x) depending on the training data density of an input data set 𝔻^x={x^(i)}_i=1^N. The starting point of this analysis is the following lemma, which provides a straightforward upper bound for the posterior variance σ^2(x). Consider a GP trained using a data set with input training samples 𝔻^x. Then, the posterior variance is bounded by=-1 σ^2(x) ≤σ_on^2k(x,x)+NΔ k(x)/N max_x'∈𝔻^x k(x',x')+σ_on^2, where Δ k(x)= k(x,x)max_x'∈𝔻^x k(x',x') -min_x'∈𝔻^x k^2(x',x). Since K+σ_on^2I_N is a positive definite, quadratic matrix, it follows that σ^2(x) ≤ k(x,x)- k(x)^2/λ_max(K)+σ_on^2. Applying the Gershgorin theorem <cit.> the maximal eigenvalue is bounded by λ_max(K)≤ N max_x'∈𝔻^x k(x',x'). Furthermore, due to the definition of k(x) we have k(x)^2≥ N min_x'∈𝔻^x k^2(x',x). 
Therefore, σ^2(x) can be bounded by σ^2(x) ≤ k(x,x)- Nmin_x'∈𝔻^x k^2(x',x)/N max_x'∈𝔻^x k(x',x')+σ_on^2. Finally, the proof follows from the definition of Δ k(x). This theorem does not pose any restriction on the employed kernel, but strongly depends on the particular choice of kernel. Therefore, it can be difficult to interpret. However, it can be significantly simplified for specific kernels, as shown in the following corollary for stationary covariance functions. Consider a GP with stationary kernel and input training samples 𝔻^x. Then, the posterior variance is bounded by=-1 σ^2(x)≤ k(0)-min_x'∈𝔻^xk^2(x-x')/k(0) +σ_on^2/N. The proof follows directly from <ref> and the fact that max_x'∈𝔻^xk(x',x')= k(0) since the kernel is stationary. In this special case of <ref>, which has been previously stated, e.g., in <cit.>, the kernel induces a notion of proximity, where the absence of training inputs x' with k(x-x')≈ 0 leads to a large bound for the posterior variance σ^2(x). Therefore, this corollary shows that it is desirable to have data close to the test point x as measured by k(·) for stationary kernels. Since <ref> and <ref> still consider the full input data set 𝔻^x, a single sample with k(x',x)≈ 0 can practically lead to the trivial bound σ^2(x)≲ k(x,x). This is clearly an undesired behavior for a bound since it would imply that additional data can potentially increase the posterior variance bound. In order to avoid this effect, we make use of an important property of Gaussian process posterior variances, which is the fact that σ^2(x) is non-increasing with the number of training samples N <cit.>. Therefore, we can consider subsets of 𝔻^x to compute the posterior variance bounds in <ref> and <ref>, which exclude these training samples with a negative effect on the bound. Due to the importance of Δ k(x) for these bounds, we make use of the following subset 𝕂_ρ'(x) ={x'∈𝔻^x: k^2(x,x)≤ k^2(x',x')≤1/ρ'+k^2(x',x) } for this purpose. It can be easily seen that considering only the subset 𝕂_ρ'(x)⊂𝔻^x in (<ref>) ensures k(x,x)max_x'∈𝕂_ρ'(x) k(x',x') -min_x'𝕂_ρ'(x) k^2(x',x)≤1/ρ'. Since the consideration of a subset of 𝔻^x also reduces the number of considered training samples in (<ref>), we trade-off the size of 𝕂_ρ'(x) and the ensured value for Δ k(x) by defining ρ' using the following optimization problem ρ(x)= max_ρ'∈ℝ_+ρ' such that |𝕂_ρ'(x)|≥ρ'σ_on^2k(x,x). It can easily be seen that ρ(x) is well-defined since the optimization problem is always feasible for ρ'→ 0. Moreover, it can be directly used as a measure of data density as shown in the following proposition. Consider a zero mean Gaussian process defined by the kernel k(·,·). If k(x,x)≠ 0, the posterior standard deviation at x satisfies σ(x)≤√(2/ρ(x)k(x,x)) such that it behaves as σ(x)∈𝒪( 1/√(ρ(x)) ). By exploiting the fact that the posterior variance σ^2(x) is non-increasing with the number of training samples N <cit.> and considering only samples inside the set 𝕂_ρ(x)(x) for the computation of the posterior standard deviation, we obtain=-1 σ^2(x) ≤σ_on^2k(x,x)+|𝕂_ρ(x)(x)| Δ k(x)/|𝕂_ρ(x)(x)| max_x'∈𝕂_ρ(x)(x) k(x',x')+σ_on^2 due to <ref>. Since x'∈𝕂_ρ(x)(x) implies k(x',x')≥ k(x,x), we can simplify this expression to σ^2(x) ≤σ_on^2/|𝕂_ρ(x)(x)| +Δ k(x)/k(x,x). 
Moreover, it can be straightforwardly checked that the restriction to 𝕂_ρ(x)(x) implies Δ k(x)≤1/ρ(x), which yields σ^2(x) ≤σ_on^2k(x,x)/|𝕂_ρ(x)(x)| k(x,x)+1/ρ(x)k(x,x) Since |𝕂_ρ(x)(x)| is lower bounded by ρ(x)σ_on^2k(x,x) by definition, we obtain σ^2(x) ≤2/ρ(x)k(x,x), which directly implies σ(x)∈𝒪(1/√(ρ(x))). concluding the proof. It can be clearly seen that ρ(x) is a measure of data density which is highly specific for each particular GP and therefore is capable of reflecting the requirements on good data distributions posed by the employed kernel k(·,·). Moreover, it immediately follows from <ref> that a sufficiently fast growth of ρ(x), i.e., ρ(x)∉𝒪(log(N)), guarantees a vanishing error bound |μ(x)-f(x)|→ 0. Therefore, ρ(·) satisfies the requirements posed on a suitable measure of data density in <ref>. §.§ Conditions for Specific Kernels The high flexibility of <ref> allows its application to GPs with arbitrary kernels, but comes at the price of a difficult interpretability. However, when we fix a specific kernel, it is often possible to derive more accessible and intuitive subsets contained in 𝕂_ρ'(x), as shown in the following lemma for linear, squared exponential and Matérn class kernels. Geometrically interpretable subsets of 𝕂_ρ'(x) defined in (<ref>) are given by * the set ℍ_ρ'^c(x)={ x'∈𝔻^x: x'^2(x'^2-cx^2) ≤1/ρ', x≤x', |x^Tx'|≥ cxx'}⊂𝕂_ρ'(x) for every c∈(0,1);=-1 * the Euclidean ball 𝔹_√(1/2L_∂ kσ_f^2ρ')(x)= {x'∈𝔻^x: x-x'≤√(1/2L_∂ kσ_f^2ρ')}⊂𝕂_ρ'(x) for isotropic SE or Matérn kernels with ν≥3/2 and σ_f^2=k(x,x). Due to the definition of the linear kernel, we have the identity k^2(x',x')-k^2(x',x)= x'^4-(x^Tx')^2. For |x^Tx'|/(xx')≥ c, we therefore obtain k^2(x',x')-k^2(x',x)≤x'^2(x'^2-cx^2). Finally, the first inequality in (<ref>) yields the requirement k^2(x,x)=x^4≤x'^4= k^2(x',x'), which concludes the first part of the proof. For the second part of the proof, we exploit the continuous differentiability of Matérn kernels with ν≥3/2 and squared exponential kernels together with the fact that their derivative at r=x-x'=0 is 0. Therefore, we have k(x-x')≥σ_f^2-L_∂ kx-x'^2. where L_∂ k∈ℝ_+ is the Lipschitz constant of the kernel derivative. Using this lower bound, we obtain k^2(0)-k^2(x-x') ≤ 2L_∂ kσ_f^2x-x'^2-L_∂ k^2x-x'^4, which we can simplify to k^2(0)-k^2(x-x') ≤ 2L_∂ kσ_f^2x-x'^2 due to non-negativity of the norm. Therefore, x-x'^2≤ρ'/2L_∂ kσ_f^2 implies |k^2(x,x)-k^2(x,x')|≤ρ'. Since k(x,x)=k(x',x') for isotropic kernels, the first inequality is always satisfied, concluding the proof. This lemma illustrates the flexibility of quantifying the data density using 𝕂_ρ'(x). While this set can be innerapproximated by a ball for Matérn and SE kernels as illustrated in <ref>, it looks more like segments of a sphere for linear kernels. Since we can easily determine the volume of such simple geometrical structures, <ref> enables the derivation of a straightforward relationship between the sampling distributions and data density ρ(x). For example, when training samples in 𝔻^x are generated by drawing from a uniform distribution, the number of points in a Euclidean ball is proportional to the volume of the ball, i.e., 𝔹_ρ'(x)∝N/ρ'^d. Therefore, it follows from (<ref>) that ρ(x)∈𝒪(N^1/d+1) for SE or Matérn kernels with uniformly drawn input training samples. This in turn implies that σ(x)∈𝒪(1/N^1/2d+2) due to <ref> and consequently |μ(x)-f(x)|∈𝒪(log(N)/N^1/2d+2) due to <ref>. 
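For isotropic kernels, the ball-based inner approximation also suggests a simple numerical evaluation of the data density: counting training inputs in the Euclidean ball of <ref> yields a lower bound on |𝕂_ρ'(x)| and hence on ρ(x). The following Python sketch is purely illustrative; the derivative Lipschitz constant L_∂k, the kernel parameters and the search grid are assumed to be given.

import numpy as np

def data_density_lower_bound(x, X_train, sf2, sn2, L_dk, candidates=None):
    # lower bound on rho(x) for an isotropic SE/Matern kernel: training inputs inside the
    # Euclidean ball of radius sqrt(1 / (2 L_dk sf2 rho')) belong to K_rho'(x), and rho(x) is
    # the largest rho' for which |K_rho'(x)| >= rho' * sn2 * k(x, x) with k(x, x) = sf2
    if candidates is None:
        candidates = np.logspace(-3.0, 6.0, 2000)
    dists = np.linalg.norm(X_train - x, axis=1)
    rho = 0.0
    for rho_c in candidates:
        radius = np.sqrt(1.0 / (2.0 * L_dk * sf2 * rho_c))
        if np.sum(dists <= radius) >= rho_c * sn2 * sf2:
            rho = max(rho, rho_c)
    return rho

def posterior_std_bound(rho, sf2):
    # posterior standard deviation bound sigma(x) <= sqrt(2 / (rho(x) k(x, x)))
    return np.sqrt(2.0 / (rho * sf2)) if rho > 0.0 else np.inf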
Overall, this demonstrates the flexibility and effectiveness of the derived formalism for bounding the asymptotic decay of the prediction error |μ(x)-f(x)| presented in this section.

§ SAFETY GUARANTEES FOR CONTROL OF UNKNOWN DYNAMICAL SYSTEMS

We employ the theoretical results for GP error bounds introduced in the previous sections to develop an iterative approach for ensuring arbitrary tracking accuracy with the considered control law (<ref>). For this purpose, we derive a time-varying tracking error bound in <ref> which depends explicitly on the uniform GP error bound along the reference trajectory. This result allows us to analyze the asymptotic decay of the tracking error bound depending on the training data density measured by ρ(x) in <ref>. Finally, we employ the obtained insight to develop an episodic approach for ensuring arbitrary tracking accuracy in <ref>.

§.§ Probabilistic Tracking Error Bound

Since <ref> ensures distinct eigenvalues of the matrix A_θ defining the closed-loop behavior of the dynamics (<ref>) of the tracking error e=x-x_ref, we can compute the eigendecomposition A_θ=UΛU^-1, where Λ is a diagonal matrix consisting of the eigenvalues of A_θ. This allows the derivation of a dynamic bound for the tracking error e inspired by the comparison principle <cit.>, as shown in the following theorem. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by an L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with an L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. Then, the tracking error is bounded by ‖x(t)-x_ref(t)‖≤υ(t) with probability of at least 1-δ, where υ(t) is the solution of the linear dynamical system υ̇=(λ_max(A_θ)+L_σζ√(β_𝕏(τ)))υ + ζη(x_ref) with initial condition υ(0)=‖U‖‖U^-1e(0)‖ and constant ζ=‖U‖‖U^-1b‖. Due to the error dynamics in (<ref>), its solution is given by e(t) = e^A_θte(0)+∫_0^t e^A_θ (t-t') b f_e(t')dt', where f_e(t)=f(x(t))-μ(x(t)). Therefore, we directly obtain ‖e(t)‖≤‖e^A_θte(0)‖+∫_0^t ‖e^A_θ (t-t') b‖ |f̅_e(t')|dt', where f̅_e(t) can be any function such that |f_e(t)|≤f̅_e(t). Using the eigendecomposition A_θ=UΛU^-1, it can be directly seen that ‖e^A_θtb‖≤‖U‖‖U^-1b‖e^λ_max(A_θ)t. Hence, we obtain ‖e(t)‖≤ ‖U‖‖U^-1e(0)‖e^λ_max(A_θ)t +‖U‖‖U^-1b‖∫_0^t e^λ_max(A_θ) (t-t') |f_e(t')|dt'. The right-hand side of this inequality is again the solution of a differential equation, such that ‖e(t)‖≤υ̃ for υ̃̇=λ_max(A_θ)υ̃+‖U‖‖U^-1b‖f̅_e(t) with υ̃(0)=‖U‖‖U^-1e(0)‖. It remains to derive a bound f̅_e(t) for |f_e(t)| in (<ref>). Due to <ref>, it holds that |f_e(t)|≤η_N(x(t)) for all x∈𝕏 with probability of at least 1-δ. Moreover, we have η_N(x(t))≤η_N(x_ref(t))+L_σ√(β_𝕏(τ))‖e(t)‖ due to Lipschitz continuity of σ(·) guaranteed by <ref>. Therefore, it follows that υ̃̇≤(λ_max(A_θ)+L_σζ√(β_𝕏(τ)))υ̃ + ζη(x_ref), which concludes the proof. Since η(x_ref) can be directly computed at any time instant, determining the tracking error bound using <ref> simply requires simulating the linear dynamical system (<ref>). This can be straightforwardly done for a given time horizon in contrast to similar prior approaches <cit.>, where the uniform error bound needs to be determined at the actual system state x. In order to achieve this improved practical applicability, additional requirements on the stability of the linear dynamics described by A_θ are necessary.
It is obvious that (<ref>) only remains bounded if the linear dynamics (<ref>) are stable, which can be straightforwardly shown to require λ_max(A_θ)<-L_σζ√(β_𝕏(τ)). Due to the dependency of the eigenvalue λ_max(A_θ) on the parameters θ, this condition can be satisfied if θ≥α^-1(-L_σζ√(β_𝕏(τ))). Therefore, this condition effectively poses a lower bound on the admissible control gains. §.§ Dependency of Accuracy Guarantees on Data Density While <ref> provides an accurate bound for the tracking error depending on the local data density, it is challenging to apply this result to the asymptotic analysis of the tracking error. Therefore, we bound the maximum tracking error along the reference trajectory as shown in the following proposition. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. If (<ref>) is satisfied, then, for e(0)=0, the maximum tracking error is bounded by sup_t≥ 0e(t)≤υ̅ with probability of at least 1-δ, where υ̅ = -ζ/λ_max(A_θ)+L_σζ√(β_𝕏(τ))sup_t≥ 0η(x_ref(t)). It immediately follows from (<ref>) that e(t) ≤ ζ∫_0^t e^(λ_max(A_θ)+L_σζ√(β_𝕏(τ))) (t-t') dt'sup_0≤ t'≤ tη(x_ref(t')). Since the integral can be straightforwardly calculated, we obtain sup_t≥ 0e(t)≤ -ζsup_t≥ 0η(x_ref(t))/λ_max(A_θ)+L_σζ√(β_𝕏(τ)), which concludes the proof. Note that the restriction to a zero initial condition is only considered to simplify the derivation, but the extension to non-zero initial conditions is straightforward. Therefore, the assumptions of <ref> are not more restrictive than those of <ref>. In order to analyze the asymptotic behavior of the tracking error, we combine <ref> with <ref>. Using the shorthand notation ρ=inf_t≥ 0ρ(x_ref(t)), this results in the following theorem. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. Choose τ such that β_𝕏(τ)≥γ^2(τ)ρk(0)/2 and θ such that κ=-2ζ√(β_𝕏(τ))/λ_max(A_θ)+L_σζ√(β_𝕏(τ)) is constant and (<ref>) is satisfied. Then, for e(0)=0, the maximum tracking error bound asymptotically behaves as υ̅∈𝒪(1/√(ρ)). We first focus on the asymptotic behavior of the maximum learning error bound along the reference sup_t≥ 0η(x_ref(t)), which can be expressed as sup_t≥ 0η(x_ref(t)) = √(β_𝕏(τ))sup_t≥ 0σ(x_ref(t))+γ(τ). Due to <ref>, the considered parameter β_𝕏(τ) implies sup_t≥ 0σ(x_ref(t))≥γ(τ)/√(β_𝕏(τ)), such that we can simplify the learning error bound to sup_t≥ 0η(x_ref(t)) ≤ 2√(β_𝕏(τ))sup_t≥ 0σ(x_ref(t)). Therefore, it follows from proposition <ref> that υ̅ = κsup_t≥ 0σ(x_ref(t)), whose asymptotic behavior only depends on σ(x_ref(t)) due to the assumed constant value of κ̃, i.e., υ̅∈𝒪(sup_t≥ 0σ(x_ref(t))). Due to <ref>, we have sup_t≥ 0σ(x_ref(t))∈𝒪(1/√(ρ)), which concludes the proof. This theorem establishes a direct relationship between the minimum data density ρ along the reference trajectory x_ref(t) and the maximum of the tracking error e, showing that an arbitrarily small tracking error can be guaranteed when suitable data is available. Since this requires a vanishing γ(τ), β_𝕏(τ) must grow. 
The chosen β_𝕏(τ) in <ref> satisfies this property. In order to see this, note that √(β_𝕏(τ)) grows with decreasing τ and that γ(τ)∈𝒪(Nτ) holds for stationary kernels. Therefore, we can set τ∝1/(N√(ρ)), which directly yields β_𝕏(τ)∝log(N√(ρ)). Due to condition (<ref>), this increase rate of β_𝕏(τ) finally requires the eigenvalues to decrease such that -λ_max(A_θ)∝√(log(N√(ρ))). While this requirement might seem restrictive, it is important to note that without learning, it follows from the proof of <ref> that -λ_max(A_θ)∝1/υ̅. In contrast, we immediately obtain ρ∝1/υ̅^2 from (<ref>), such that -λ_max(A_θ)∝√(log(N/υ̅)) holds. Assuming the number of training samples N grows at most polynomially with ρ as ensured, e.g., for the case of SE or Matérn kernels with uniformly distributed training data discussed in <ref>, this finally implies -λ_max(A_θ)∈𝒪(√(log(1/υ̅))). Therefore, the requirement on the growth rate for ensuring arbitrarily small tracking errors reduces from hyperbolic to log-hyperbolic with suitable training data.

§.§ Episodic Data Generation for Prescribed Performance

Although <ref> provides conditions for training data to ensure an arbitrarily small tracking error e, it does not provide direct insight into how suitable training data sets can be obtained. Therefore, we develop an episodic approach for generating training data sets in this section. For simplicity, we consider a constant sampling time T_s∈ℝ_+ during each episode with execution time T_p∈ℝ_+, which yields data sets of the form 𝔻_N^T_s={(x(iT_s),f(x(iT_s))+ϵ^(i)) }_i=0^N_p, where N_p = ⌊ 1+T_p/T_s⌋ denotes the number of training samples gathered during one episode. Therefore, the tracking error bound υ̅ from one episode immediately provides guarantees for the training data of the next episode. We exploit this by adjusting the sampling time T_s and the maximum eigenvalue λ_max(A_θ) as demonstrated in <ref> in order to ensure a sufficiently small error bound for the next episode. This dependency on the sampling time is emphasized by an index T_s in the posterior standard deviation σ_T_s(·). As shown in the following theorem, this approach guarantees the termination of <ref> after a finite number of iterations. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by an L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with an L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. If θ and T_s are chosen such that -λ_max(A_θ) ≥ ((8√(L_∂ k)+ξ L_σ)/ξ) ζ√(β_𝕏(τ)) and max_0≤ t ≤ T_pσ^2_T_s(x_ref(t)) ≤ 16L_∂ kυ̅_i-1^2 hold in every episode for ξ<1, <ref> terminates after at most N_E=⌈(log(4e̅√(L_∂ k))-log(√(k(0))))/log(ξ)⌉ episodes with probability of at least 1-N_Eδ. It is straightforward to see that (<ref>) together with <ref> implies υ̅_0 =κ√(k(0)) and υ̅_i+1 =4√(L_∂ k)κυ̅_i for τ such that (<ref>) is satisfied, where the index i is used to denote the episode. Since 4√(L_∂ k)κ≤ξ<1 holds due to (<ref>), it immediately follows that υ̅_i decays exponentially, i.e., υ̅_i≤ξ^iυ̅_0, with probability of at least 1-δ for each episode. Therefore, <ref> is guaranteed to terminate after N_E episodes with probability of at least 1-N_Eδ due to the union bound. Due to the exponential decay of the tracking error bound υ̅ ensured by <ref>, <ref> quickly terminates. This comes at the price of higher requirements (<ref>) on the eigenvalues of A_θ compared to <ref>.
However, the difference is merely a constant factor, and it is indeed straightforward to see that -λ_max(A_θ)∝1/√(log(e̅)) is sufficient to compensate the effect of an increasing β_𝕏(τ) for all polynomially growing data sets. Therefore, this requirement is still significantly lower compared to ensuring the tracking error bound e̅ without learning as discussed in <ref>. While the results in previous sections posed requirements on the data distribution in terms of the data density ρ(x), <ref> explicitly considers the data generation process by providing an upper bound for the sampling time T_s in (<ref>). Due to the form of this condition, it cannot be computed before the controller is applied to the system, but it can easily be verified a posteriori. Therefore, we can ensure it via a sufficiently high sampling rate during the application of the controller, such that we simply can downsample the obtained data to the necessary sampling time T_s. The required maximum sampling rate can be bounded using the following proposition. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the continuous, bounded reference x_ref. Then, the sampling time T_s required by condition (<ref>) in <ref> is bounded by T_s≥T_s=16L_∂ ke̅^3/σ_on^2max_0≤ t ≤ T_pẋ(t). We prove this proposition by deriving a value of T_s which satisfies (<ref>) Due to <ref>, (<ref>) is guaranteed to hold if ρ≥1/(8L_∂ kσ_f^2 υ̅_i-1^2). Set ρ'=1/(8L_∂ kσ_f^2 υ̅_i-1^2). Then, it follows from <ref> that 𝔹_2υ_i-1(x_ref(t))⊂𝕂_ρ'(x_ref(t)). The Euclidean ball around x_ref(t) on the left handside can be inner bounded by a Euclidean ball with half the radius around the actual trajectory, i.e., 𝔹_υ̅_i-1(x(t))⊂𝔹_2υ̅_i-1(x_ref(t)). The smaller Euclidean ball has a diameter of υ̅_i-1 and the actual trajectory passes through its center. Moreover, the distance between two samples can be bounded by T_s max_0≤ t ≤ T_pẋ(t). Note that the maximum temporal derivative of the state is bounded. In order to see this, note that we can express the dynamics of the system as ẋ=ẋ_ref+A_θe+b(f(x)-μ(x). Due to the bounded prediction error, the bounded tracking error and the continuous reference trajectory, we can therefore bound the state derivative by max_0≤ t≤ T_pẋ(t) ≤(A_θ+√(β_𝕏(τ))L_σ)υ̅_i+max_0≤ t≤ T_pη(x_ref(t)) +max_0≤ t≤ T_pẋ_ref(t). This allows us to bound the number of points in 𝕂_ρ'(x_ref(t)) by |𝕂_ρ'(x_ref(t))|≥ |𝔹_υ̅_i-1(x(t))|≥2υ̅_i-1/ T_smax_0≤ t ≤ T_pẋ(t). For ρ≥ρ', it must hold that 2υ_i-1/ T_smax_0≤ t ≤ T_pẋ(t)≥ρ'σ_on^2k(0)=σ_on^2/8L_∂ kυ_i-1^2 due to (<ref>). This inequality can be ensured to hold by setting T_s=16L_∂ kυ̅^3_i-1/σ_on^2max_0≤ t ≤ T_pẋ(t), which concludes the proof. § NUMERICAL EVALUATION In order to demonstrate the flexibility and effectiveness of the derived theoretical results, we compare the tracking error bounds with empirically observed tracking errors in different simulations. In <ref>, we evaluate the time-varying tracking error bound for training data unevenly distributed over the relevant part of the state space 𝕏. The behavior of the asymptotic error bound is investigated in <ref>. Finally, we demonstrate the effectiveness of the proposed episodic data generation approach for ensuring a desired tracking accuracy in <ref>. 
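Before discussing the individual experiments, we summarize how the episodic procedure described above can be organized in practice. The following Python sketch is only a schematic illustration: run_episode, fit_gp and tracking_bound are placeholder functions abstracting the closed-loop roll-out, the GP update and the evaluation of the tracking error bound, and are not part of the evaluated implementation.

import numpy as np

def sufficient_sampling_time(L_dk, ups_bar_prev, sn2, xdot_max):
    # a sampling time of at most this value suffices for the posterior variance condition
    # along the reference (cf. the proof of the sampling time bound); sampling faster also works
    return 16.0 * L_dk * ups_bar_prev**3 / (sn2 * xdot_max)

def episodic_learning(run_episode, fit_gp, tracking_bound, e_bar, max_episodes=50):
    # schematic organization of the episodic procedure: roll out the controller with the current
    # GP model, append the collected samples, refit the model and stop once the certified
    # tracking error bound drops below the prescribed value e_bar
    data, model = [], fit_gp([])
    for episode in range(max_episodes):
        ups_bar = tracking_bound(model)            # sup_t ups(t) for the current model and gains
        if ups_bar <= e_bar:
            return model, ups_bar, episode
        data += run_episode(model, ups_bar)        # samples (x(i T_s), y^(i)) from one roll-out
        model = fit_gp(data)
    return model, tracking_bound(model), max_episodes

In the experiments below, each roll-out is recorded at a high sampling rate and subsequently downsampled such that the variance condition of <ref> is satisfied.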
§.§ Data-dependency of Safety Regions

For evaluating the time-varying tracking error bound, we consider a nonlinear dynamical system ẋ_1=x_2, ẋ_2=f(x)+g(x)u, where f(·) denotes the nonlinearity to be learned and g(x)= 1+1/2sin(x_2/2), which is a marginal variation of the system considered in <cit.>. Assuming exact knowledge of g(·), we can approximately feedback linearize this system and apply a linear tracking controller u_lin=-θ_1θ_2 x_1-θ_2 x_2, where θ_1,θ_2∈ℝ_+ are design parameters. This yields a two-dimensional system of the form (<ref>) with A_θ=[ 0 1; -θ_1θ_2 -θ_2 ] and b=[ 0; 1 ]. In order to demonstrate the effect of the data distribution, we use a uniform grid over [0 3]×[-4 4] with 25 points and σ_on^2 = 0.01 as training data set, such that half of the considered state space 𝕏 =[-5 5]^2 is not covered by training data. An SE kernel with automatic relevance determination is employed for Gaussian process regression and the hyperparameters are optimized using likelihood maximization. For computing the uniform prediction error bound in <ref>, we set τ=0.01, δ=0.01 and L_f=2. The task is to track the reference x_d(t) = 2sin(t) with the state x_1, which leads to the circular reference trajectory x_ref(t)=[2sin(t) 2cos(t)]^T. We aim to achieve this using θ_1=10 and θ_2=20, which can be shown to satisfy condition (<ref>). Snapshots of the resulting trajectory together with visualizations of the tracking error bounds obtained using <ref> are illustrated in <ref>. When the GP standard deviation σ(x_ref) is large, the tracking error bound υ(t) starts to increase, such that it reaches its maximum just before the system enters the region with low standard deviation. Afterwards, the feedback controller reduces the tracking error until the standard deviation starts to increase again. This leads to the minimum tracking error bound illustrated on the left of <ref>. This effect can also be seen in the observed tracking error as illustrated in <ref>, which has its peaks at times when the tracking error bound υ is large. Therefore, the tracking error bound υ reflects the behavior of the observed error e well, even though it is rather conservative. The sources of this conservatism can be easily investigated by determining the bound obtained when using the true model error |f(x_ref)-μ(x_ref)| as input in (<ref>). It is clearly visible that even with the knowledge of the true prediction error, the tracking error bound exhibits some conservatism due to the linearization around the reference trajectory x_ref. The remaining conservatism is a consequence of the prediction error bound η(x_ref) as visualized at the bottom of <ref>. Even though this bound reflects the availability of data well, it needs to capture the probabilistic worst case and is therefore considerably larger than the actual prediction error |f(x_ref)-μ(x_ref)|. This leads to the fact that the tracking error bound υ conservatively reflects the behavior of the observed tracking error e. Note that the usage of a probabilistic Lipschitz constant L̂_f obtained via <ref> does not significantly change this behavior. The corresponding tracking error bound merely becomes slightly larger, since we can compensate the conservative value of L̂_f using a smaller value τ=10^-3. Therefore, <ref> enables the effective computation of prediction error bounds without knowledge of a Lipschitz constant of the unknown function f(·).
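The time-varying bound υ(t) shown in the snapshots can be reproduced by numerically integrating the scalar comparison dynamics of <ref> along the reference. The following Python sketch uses a simple forward Euler scheme and a synthetic placeholder for η(x_ref(t)); in the actual experiment this value is computed from the trained GP, so all numbers here are purely illustrative.

import numpy as np

def simulate_tracking_bound(eta_ref, lam_max, L_sigma, zeta, beta, ups0, t_final, dt=1e-3):
    # forward Euler integration of the scalar comparison dynamics
    #   dups/dt = (lam_max + L_sigma * zeta * sqrt(beta)) * ups + zeta * eta_ref(t),
    # whose solution upper bounds the tracking error norm along the reference
    a = lam_max + L_sigma * zeta * np.sqrt(beta)      # must be negative for a bounded solution
    n = int(round(t_final / dt))
    ups = np.empty(n + 1)
    ups[0] = ups0
    for i in range(n):
        ups[i + 1] = ups[i] + dt * (a * ups[i] + zeta * eta_ref(i * dt))
    return ups

eta_ref = lambda t: 0.5 + 0.4 * np.sin(t)   # synthetic placeholder for eta(x_ref(t)) from the GP
ups = simulate_tracking_bound(eta_ref, lam_max=-15.0, L_sigma=1.0, zeta=2.0,
                              beta=30.0, ups0=0.0, t_final=10.0)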
§.§ Dependency of the Tracking Accuracy on the Data Density In order to investigate the dependency of the tracking error bound υ on the data density ρ in more detail, we consider the same setting as in <ref>, but use grids with different grid constants defined on [-4,4]^2 as training data sets, such that they cover the whole relevant domain. Due to the varying size of the training data set, we determine τ by finding the maximum value satisfying (<ref>) using a line search. We set θ_1=θ_2=θ, such that we can compute a gain θ ensuring κ=10 in (<ref>) for the obtained value of τ. The resulting tracking errors e and bounds sup_t≥0υ(t) obtained with <ref> for different data densities ρ are illustrated in <ref>. Moreover, the asymptotic decay rate of υ̅ guaranteed by <ref> is depicted. It can be clearly seen that the asymptotic decay rate closely reflects the actual decay rate of the error bound sup_t≥0υ(t). Analogously to <ref>, the tracking error bound is rather conservative, but the observed error e exhibits a decay rate with high similarity to its bound sup_t≥0υ(t). Despite this conservatism, the necessary maximum eigenvalues λ_max(A_θ) for ensuring a low desired tracking error bound sup_t≥0υ(t) with such training data are significantly larger than without a controller compensating the nonlinearity as depicted in <ref>. This baseline comparison can be straightforwardly obtained as λ_max(A_θ)≥ζf̅/e̅ by slightly adapting the proof of <ref> using |f(x)|≤f̅ and μ(x)=0. Due to the linear growth of this condition with 1/e̅, it quickly exceeds the maximum eigenvalue λ_max(A_θ) ensuring the same tracking error bound through the learned controller, even though we use the non-conservative bound f̅=3. This clearly demonstrates the benefits of the derived theoretical results. §.§ Episodic Data Generation For evaluating the episodic data generation using <ref>, we consider the same setting as in <ref>. Moreover, we set θ_1=θ_2=θ analogously to the previous section and choose θ such that ξ=0.95 holds in every iteration. A high frequency data set with sampling time 3· 10^-4 is generated in every episode, such that a line search can be used to determine the maximum value of T_s satisfying (<ref>). The tracking error bounds obtained form <ref> with these parameters are exemplarily illustrated for several different episodes in <ref>. Due to the constant sampling time, the training data density along the reference is very similar within an episode, which directly leads to the rather minor variations in the tracking error bound over time. Moreover, it can be seen that decrease of the tracking error bound υ is significantly larger during the first few episodes, before it slows down. This becomes even clearer when plotting the behavior of the error bound over the number of episodes as depicted in <ref>. During the first 10 episodes the error bound sup_t≥0υ(t) decays faster than the guaranteed rate of ξ^N_Eυ̅_0, which is guaranteed by <ref>. This can be attributed to the fact that even a single additional data point reduces the posterior variance more than required for (<ref>) at the beginning. Once a sufficiently large number of additional training samples is necessary to ensure (<ref>), this inaccuracy is overcome and the error bound sup_t≥0υ(t) closely follows the guaranteed decrease rate. In fact, the tracking error bound sup_t≥0υ(t), while being rather conservative similar to the previous simulations, even reflects the behavior of the actually observed tracking error e accurately after 10 episodes. 
Note that this unexpected fast decay at the beginning has no influence on the required maximum eigenvalues λ_max(A_θ) as depicted in <ref>. While smaller eigenvalues are required for the episodic approach compared to the asymptotic analysis in <ref>, the maximum eigenvalue λ_max(A_θ) used in <ref> closely follow the expected 𝒪( log(1/sup_t≥0υ(t))) behavior. Moreover, it can be directly seen that <ref> offers a significant advantage over a direct reduction of the tracking error bound using the maximum eigenvalue λ_max(A_θ) without a compensation of the nonlinearity. Note that the sampling time T_s necessary to achieve this behavior quickly decays as illustrated in <ref>. However, since it remains significantly larger than its theoretical bound T_s, it remains in magnitudes which can be realized in practice. Therefore, <ref> provides an effective method for generating data, such that an arbitrary tracking error can be ensured when using a GP model for compensating unknown nonlinearities in systems of the form of (<ref>).=-1 § CONCLUSION This paper presents a novel, episodic approach for learning GP models in order to ensure an arbitrarily high desired tracking accuracy using the GP to compensate unknown nonlinearities in linear systems. We first derive a novel Bayesian prediction error bound for GP regression and demonstrate the straightforward computability of all required parameters. In order to establish a straightforwardly interpretable connection between training data and prediction accuracy, we propose a kernel-dependent measure of data density and show that the prediction error bound vanishes with increasing data density. We exploit the Bayesian error bounds to derive a time-varying tracking error bound when using the GP model to compensate unknown nonlinearities, and show that the tracking accuracy grows with increasing data density. These theoretical results allow us to develop an episodic approach for learning a GP model, such that a desired tracking error bound can be guaranteed. The effectiveness of our theoretical results is demonstrated in several simulations.=-1 IEEEtran [ < g r a p h i c s > ]Armin Lederer (S'20) received the B.Sc. and M.Sc. degree in electrical engineering and information technology from the Technical University of Munich, Germany, in 2015 and 2018, respectively. Since June 2018, he has been a PhD student at the Chair of Information-oriented Control, Department of Electrical and Computer Engineering at the Technical University of Munich, Germany. His current research interests include the stability of data-driven control systems and machine learning in closed-loop systems. [ < g r a p h i c s > ]Jonas Umlauft (S’14) received the B.Sc. and M.Sc. degree in electrical engineering and information technology from the Technical University of Mu- nich, Germany, in 2013 and 2015, respectively. His Master’s thesis was completed at the Computational and Biological Learning Group at the University of Cambridge, UK. Since May 2015, he has been a PhD student at the Chair of Information-oriented Control, Department of Electrical and Computer Engineering at the Technical University of Munich, Germany. His current research interests include the stability of data-driven control systems and system identification based on Gaussian processes. [ < g r a p h i c s > ]Sandra Hirche (M'03–SM'11–F'20) received the Dipl.-Ing degree in aeronautical engineering from the Technical University of Berlin, Berlin, Germany, in 2002, and the Dr. Ing. 
degree in electrical engineering from the Technical University of Munich, Munich, Germany, in 2005. From 2005 to 2007, she was awarded a Post-doctoral scholarship from the Japanese Society for the Promotion of Science at the Fujita Laboratory, Tokyo Institute of Technology, Tokyo, Japan. From 2008 to 2012, she was an Associate Professor with the Technical University of Munich. Since 2013, she has served as Technical University of Munich Liesel Beckmann Distinguished Professor and has been with the Chair of Information-Oriented Control, Department of Electrical and Computer Engineering, Technical University of Munich. She has authored or coauthored more than 150 papers in international journals, books, and refereed conferences. Her main research interests include cooperative, distributed, and networked control with applications in human–machine interaction, multirobot systems, and general robotics. Dr. Hirche has served on the editorial boards of the IEEE Transactions on Control of Network Systems, the IEEE Transactions on Control Systems Technology, and the IEEE Transactions on Haptics. She has received multiple awards such as the Rohde & Schwarz Award for her Ph.D. thesis, the IFAC World Congress Best Poster Award in 2005, and – together with students – the 2018 Outstanding Student Paper Award of the IEEE Conference on Decision and Control as well as Best Paper Awards from IEEE Worldhaptics and the IFAC Conference of Manoeuvring and Control of Marine Craft in 2009.
http://arxiv.org/abs/2307.04956v2
20230711011700
PKU-GoodsAD: A Supermarket Goods Dataset for Unsupervised Anomaly Detection and Segmentation
[ "Jian Zhang", "Runwei Ding", "Miaoju Ban", "Ge Yang" ]
cs.CV
[ "cs.CV" ]
Visual anomaly detection is essential and commonly used for many tasks in the field of computer vision. Recent anomaly detection datasets mainly focus on industrial automated inspection, medical image analysis and video surveillance. In order to broaden the application and research of anomaly detection in unmanned supermarkets and smart manufacturing, we introduce the supermarket goods anomaly detection (GoodsAD) dataset. It contains 6124 high-resolution images of 484 different appearance goods divided into 6 categories. Each category contains several common different types of anomalies such as deformation, surface damage and opened. Anomalies contain both texture changes and structural changes. It follows the unsupervised setting and only normal (defect-free) images are used for training. Pixel-precise ground truth regions are provided for all anomalies. Moreover, we also conduct a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods. This initial benchmark indicates that some methods which perform well on the industrial anomaly detection dataset (e.g., MVTec AD), show poor performance on our dataset. This is a comprehensive, multi-object dataset for supermarket goods anomaly detection that focuses on real-world applications. Data Sets for Robotic Vision, Computer Vision for Automation, Deep Learning Methods. § INTRODUCTION Anomaly areas are regions that differ from normal areas. While humans can easily identify anomaly areas on the surface of objects based on their learned knowledge, it is challenging for machines to do the same. Visual Anomaly detection (VAD) is one of the essential applications in the field of computer vision, which aims to classify and locate anomaly regions. Currently, anomaly detection algorithms are widely used in various fields such as industrial quality inspection, medical diagnosis, and intelligent surveillance. Specifically, in the field of industrial quality inspection, anomaly detection can be used to detect defects on the surface of industrial products. In the field of medical diagnosis, it can be used to detect lesions on the surface of organs. In the field of intelligent surveillance, it can be used to detect the occurrence of anomalous events. Therefore, it has broad application prospects and research significance. Due to the scarcity of anomalous data, unsupervised anomaly detection algorithms have drawn much attention in research.
Their goal is to train models only using a large amount of easily obtainable normal samples, enabling the models to differentiate anomalous samples. At present, unsupervised anomaly detection algorithms can be divided into three categories: those based on pre-trained models, those based on pseudo anomaly generation, and those based on generative models. The first category uses a model that has learned features of normal samples from the ImageNet dataset to distinguish anomalous samples. The second category generates pseudo anomalies that resemble real anomalies during training, thereby transforming the unsupervised paradigm into a supervised one. The third category trains the model to fit the distribution of normal samples and distinguishes anomalies by calculating the distance between the distributions of anomalous and normal samples during training. Due to their ability to distinguish anomalies without using anomalous samples, these unsupervised anomaly detection methods have gained increasing attention and achieved remarkable results in various academic conferences. However, most existing anomaly detection datasets are limited and mainly concentrated in industrial quality inspection, medical diagnosis, and intelligent monitoring fields. The diversity of datasets in these fields is also limited. Currently, widely used unsupervised anomaly detection datasets include MVTec AD<cit.> in industrial quality inspection, Chest X-ray<cit.> in medical diagnosis, and ShanghaiTech<cit.> in intelligent monitoring. Due to the high-speed development of this field and the high cost of dataset construction, the performance of existing datasets has approached saturation, limiting the development of anomaly detection. With the continuous development of intelligence, unmanned supermarkets (Fig. <ref>) have entered people's lives. Although the shopping process does not require human intervention, detecting and replacing damaged goods in unmanned supermarkets often requires a large amount of manpower. The demand for anomaly goods detection in supermarkets is increasing day by day, but there is currently a lack of large-scale anomaly goods datasets. Therefore, establishing an unsupervised anomaly goods dataset has significant research value and application prospects. Based on this, we collected a large number of normal and anomaly goods sample images in a real unmanned supermarket application scenario and performed pixel-level anomaly annotation, creatively establishing the first goods anomaly detection (GoodsAD) dataset[https://github.com/jianzhang96/GoodsAD] in the field of artificial intelligence. The dataset contains a total of six goods categories, including boxed cigarettes, bottled drinks, canned drinks, bottled foods, boxed foods, and packaged foods. Each goods has multiple types of anomalies, totalling 8 different types. The dataset includes 6,124 images, with 4,464 images of normal goods and 1,660 images of anomaly goods. The resolution of the images is 3000 × 3000. In the experiment, we selected 3,136 normal images as the training set and used the remaining 2,988 normal and anomaly images as the test set. In addition, we also tested the goods dataset on current state-of-the-art (SOTA) unsupervised anomaly detection methods and compared the performance of various methods. 
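For orientation, a one-class data pipeline for a dataset organised in this way might look as follows. This is a sketch only: the directory layout and the "good" folder name are illustrative assumptions, not necessarily the released structure of the dataset.

import glob, os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class OneClassGoodsDataset(Dataset):
    # Train split: normal images only.  Test split: normal and anomalous images (masks loaded analogously).
    def __init__(self, root, category, split="train", size=224):
        pattern = os.path.join(root, category, split, "*", "*.jpg")
        self.paths = sorted(glob.glob(pattern))
        self.tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        p = self.paths[i]
        img = self.tf(Image.open(p).convert("RGB"))
        label = 0 if os.path.basename(os.path.dirname(p)) == "good" else 1   # folder name is an assumption
        return img, label, p

# train_set = OneClassGoodsDataset("/data/GoodsAD", "drink_bottle", split="train")
# test_set  = OneClassGoodsDataset("/data/GoodsAD", "drink_bottle", split="test")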
Our contribution is twofold: * We creatively established the first unsupervised anomaly detection goods dataset in the field of artificial intelligence, which is used to classify and locate anomaly areas on the surface of goods, increase the diversity of data in the anomaly detection field, and promote the development of unmanned supermarkets. * Extensive experiments are conducted on the established goods dataset using current unsupervised anomaly detection methods, laying the foundation for subsequent anomaly detection work and promoting the performance improvement of related algorithms. § RELATED WORK Some previous anomaly detection methods made experiments on image classification datasets such as MNIST and CIFAR10. They assume that a certain category of the dataset is normal and the rest is anomalous. For the application of visual anomaly detection, industrial vision <cit.>, medical image analysis and video anomaly detection <cit.> are fields of great concern. Table <ref> shows commonly used datasets for visual anomaly detection. In the field of medical images, there are datasets for anomaly detection such as Chest X-ray <cit.> and CheXpert <cit.>. ShanghaiTech <cit.> and Avenue <cit.> are two commonly used datasets for video anomaly detection. Some anomaly detection datasets <cit.> in the industry field have been proposed in recent years. These datasets all provide pixel-level annotations. DAGM <cit.> and NEU-SDD <cit.> are early datasets. DAGM contains 10 types of texture images with artificial defects. NEU-SDD contains 6 kinds of typical surface defects of the hot-rolled steel strip. MTD <cit.> includes 6 types of defects on the surface of magnetic tiles. This dataset is somewhat difficult because the contrast of some defects and background is low. MSD <cit.> dataset contains three types of defects in mobile phone screens. These four datasets all follow the supervised learning setting. In actual industrial manufacturing, the vast majority of products are normal samples, while anomalous samples only account for a very small number. Therefore in 2019, P. Bergmann et al. proposed an industrial dataset called MVTec AD using one-class classification setting, which means that only normal samples are used during the training phase. This setting is more in line with industrial scenarios and is called semi-supervised or unsupervised anomaly detection. MVTec AD contains 10 object and 5 texture categories with a total of 5354 images. Each image in the dataset contains only one object, and the camera is perpendicular to the object with the same shooting angle. This dataset has drawn a lot of attention and many methods focus on unsupervised anomaly detection based on this dataset. Since then, some datasets <cit.> have been proposed, using the same settings. BTAD <cit.> contains 2830 images with 3 different classes (industrial products), of which 1799 anomaly-free images are for training and the rest for testing. Compared with MVTec AD, the shooting conditions of the images in MPDD <cit.> are more complex. Under different light intensities and non-homogeneous backgrounds, the image captured by the camera contains multiple objects with different spatial directions, positions and distances. VisA <cit.> is a newly proposed dataset with multiple objects in the image, and the number of images is about twice that of MVTec AD. However, The current datasets contain at most a dozen classes of objects. 
There is no goods anomaly detection dataset, which is needed in unmanned supermarkets and commodity production. Different types of datasets are also needed in anomaly detection research to test the universality of current state-of-the-art methods and promote real-world applications. § DATASET DESCRIPTION §.§ Problem Statement and Definition In practical applications, commodity anomalies are difficult to define in advance for supervised learning, and it is easy to acquire normal samples but costly and limited to get anomalous sample data. Therefore, GoodsAD adopts the same unsupervised setting as the previous datasets <cit.>. The training set contains only images without defects. The test set contains both: images containing various types of defects and defect-free images.VAD consists of two sub-tasks, image-level anomaly detection (classification) and pixel-level anomaly localization (segmentation). The input is an image I ∈ℝ^H× W × 3, and the output is an anomaly score η∈ [0,1] for anomaly classification or a segmentation mask M ∈ℝ^H× W for anomaly segmentation. The value range of each pixel of M is [0,1], indicating the degree of anomaly. §.§ Dataset Details The GoodsAD dataset comprises 6 categories with 3136 images for training and 2988 images for testing. Table <ref> gives an overview for each category. Fig. <ref> shows example images for every category together with example defects. We collected 6 kinds of common commodities in supermarkets, which are drink_bottle (d_b), drink_can (d_c), food_bottle (f_bt), food_box (f_bx), food_package (f_p) and cigarette_box (c_b). Each commodity can be used and evaluated individually if necessary. Each category contains multiple goods, and the dataset contains a total of 484 goods. As a result, The appearance of each item varies greatly, such as variations in colour and texture. Each category contains several common defects such as surface damage, deformation and opened. The defects contain both surface texture changes and structure changes. The defects were manually generated to produce realistic anomalies as they would occur in real-world application scenarios. All images are acquired with 3000 × 3000 high-resolution. The object locations in the images are not aligned. Most objects are in the center of the images and one image only contains a single object. For each item, we collected multiple images from different angles. For bottled and canned goods, we collected images from different angles around the cylinder. The images were acquired under the illumination conditions of a real supermarket. The appearance of goods may change in texture due to illumination. The image background is a natural white commodity shelf. Both image-level and pixel-level annotations are provided. Fig. <ref> shows the region size of different anomalies in six categories. Different types of anomalies differ in size, and anomalies of the same type change in size. Most anomalies like surface damage and cap open occupy only a small fraction (less than 2%) of image pixels. opened and deformation are two kinds of anomalies with relatively large proportion. § BENCHMARK §.§ Methods for Visual Anomaly Detection Different types of unsupervised SOTA VAD methods are tested on the proposed GoodsAD dataset. We divide current methods into three categories: based on pre-trained models, based on pseudo-anomaly, and based on generative models. Pseudo anomaly-based methods adopt contrastive learning <cit.> paradigms or auto-encoders <cit.> for image reconstruction. 
Generative Adversarial Networks (GAN) <cit.>, Normalizing Flow <cit.> and Diffusion Model <cit.> are the most commonly used generative models, which can be used in VAD. §.§.§ Based on pre-trained models This type of approach uses the models pre-trained on ImageNet and does not require a training stage. Because deep learning libraries such as PyTorch provide pre-trained models, it is convenient to use. The basic idea of this type of approach is comparison. We can know whether the test image is anomalous by comparing the test image with the normal training image. Pixel-level comparisons at the image level show that the detection results are too sensitive to pixel values, and there are problems with misalignment of objects. Therefore, Niv Cohen and Yedid Hoshen first proposed the method based on the pre-trained model, SPADE <cit.>. They used ResNet <cit.> to extract the features of the images and compare the feature vectors at the image and patch level. K-Nearest Neighbors (KNN) algorithm is adopted to obtain more robust results. PaDiM <cit.> improves SPADE, assuming that the distribution of patches of normal images is subject to multivariate Gaussian distribution, and estimates the mean and variance in the training stage. In the test stage, the Mahalanobis distance between the feature vector of the test image and the distribution is calculated as the anomaly score. PatchCore <cit.> uses greedy coreset subsampling to reduce the memory bank of the normal samples. It uses the second and third level feature maps extracted by Convolutional Neural Network (CNN) such as WideResNet <cit.> and average pooling is adopted on these feature maps to obtain global information. SimpleNet <cit.> adopts a simple network architecture and combines the ideas of the pre-trained model and pseudo-anomaly. It improves PatchCore by adding Gaussian noise in feature space and a discriminator. The methods based on knowledge distillation <cit.> assume that the teacher network and the student network will output different feature maps for anomalous samples in the test stage. MKD <cit.> uses a smaller student network and multilayer feature synthesis. RD4AD <cit.> proposes reverse distillation paradigm and uses the residule block of ResNet to limit the features acquired by the student network. §.§.§ Based on pseudo-anomaly This type of method simulates natural anomalies to generate some pseudo anomalies in the training phase, so the unsupervised task is transformed into a supervised task. Contrastive learning based methods including CutPaste <cit.>, NSA <cit.> and SPD <cit.> introduce the idea and classical methods of contrastive learning into VAD. The classical contrastive learning method aims to learn the general features of images, while the VAD task needs to detect anomalous areas in the images, so the classical method needs to be modified to adapt to this task. CutPaste cuts an image patch with colour jitter and pastes it at a random location of a large image to generate the anomalous sample. In the training stage, it uses anomaly classification as the proxy task. NSA extracts foreground objects before cutting the image patch and uses Poisson image editing approach to fuse the image patch. Image reconstruction based methods such as RIAD <cit.>, DRAEM <cit.> and DSR <cit.> use the auto-encoders (U-Net <cit.> is used for implementation). RIAD introduced image inpainting into image reconstruction to obtain large reconstruction errors of anomalous samples. 
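To make the patch-based pseudo-anomaly idea concrete, a minimal CutPaste-style operation is sketched below. This is an illustration only; the published methods add scar-shaped patches, richer jitter schedules and dedicated proxy losses.

import random
import torch
import torchvision.transforms.functional as TF

def cut_paste(img: torch.Tensor, area_ratio=(0.02, 0.15), jitter=0.1):
    # img: (3, H, W) tensor in [0, 1]; returns an augmented image and the pseudo-anomaly label 1
    _, H, W = img.shape
    side = (random.uniform(*area_ratio)) ** 0.5
    ph, pw = max(1, int(H * side)), max(1, int(W * side))
    sy, sx = random.randint(0, H - ph), random.randint(0, W - pw)          # source patch
    patch = img[:, sy:sy + ph, sx:sx + pw].clone()
    patch = TF.adjust_brightness(patch, 1.0 + random.uniform(-jitter, jitter))
    patch = TF.adjust_contrast(patch, 1.0 + random.uniform(-jitter, jitter))
    ty, tx = random.randint(0, H - ph), random.randint(0, W - pw)          # paste location
    out = img.clone()
    out[:, ty:ty + ph, tx:tx + pw] = patch
    return out, 1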
DRAEM adds a segmentation network after reconstruction network to obtain more accurate results. DSR adopts quantized feature space and moves the anomaly generation process into the feature space. CRDN <cit.> improves DRAEM by cascade network architecture and structural anomaly generation. MemAE <cit.> also uses an auto-encoder, but an innovative memory module is adopted to handle the problem of good generalization of the anomalous regions. §.§.§ Based on generative models The basic idea of this type of method is to use a generative model to fit the distribution of normal samples, and measure the distance between the test sample and the distribution during testing. AnoGAN <cit.> introduces GAN into VAD, and the backpropagation algorithm is needed to find the sample closest to the test sample in the distribution. f-AnoGAN <cit.> solves the problem of slow testing. The method trains a WGAN <cit.> in the first stage, which is the same as AnoGAN, and trains an encoder in the second stage to find the latent encodings of the test sample. CFLOW-AD <cit.> adopts Normalizing Flow to fit the distribution of features extracted by CNN from normal samples. It differs from PaDiM by using a different model to fit the distribution. AnoDDPM <cit.> uses the DDPM <cit.>, the basic idea of which is that anomalous images with added noise can be restored to normal images.Some recent works <cit.> focus on a more challenging application scenario: only few-shot (less than 8) normal samples are used in the training stage. §.§ Evaluation Metric The standard classification metrics AUROC and AUPR are used for image-level anomaly classification and pixel-level anomaly segmentation. AUPR is more sensitive to the datasets of unbalanced categories. PRO <cit.> is also adopted to balance anomalous areas of different sizes. §.§ Implementation Details For each method, we follow one-model-per-category learning paradigm and train one model for each category. It is time-consuming and memory-consuming to train a model for each commodity, although the accuracy is higher in this way. The images are resized to 224×224 during training and test. All experiments are conducted on NVIDIA GTX 1080Ti GPUs with PyTorch 2.0. For each method, we adopt the default standard parameters. We set base_width and base_channels in reconstructive and discriminative sub-networks of DRAEM to 64 and 32, respectively. For f-AnoGAN, we train 100000 iterations for WGAN and 50000 iterations for the encoder. More details such as batch size (bs) and learning rate (lr) are listed in Table <ref>. §.§ Experimental Results and Discussion We test the performance of different types of methods on the proposed GoodsAD dataset. Table <ref> shows the image-level anomaly classification results and Table <ref> shows the pixel-level anomaly segmentation results. Fig. <ref> shows the qualitative examples of anomaly localization of methods DRAEM, NSA, RD4AD, SimpleNet and PatchCore-100%.Compared to the previous dataset like MVTec AD, GoodsAD has two different attributes: (1) The object's location in the image is not aligned. (2) The same category contains many items with different appearances. These two characteristics cause the poor performance of current VAD methods. SPADE, RD4AD and CFLOW-AD assume that the location of the object in the image is unchanged, and thus the detection results are not accurate, especially the localization score is low. 
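Before discussing the individual methods further, note that the image-level and pixel-level scores referred to throughout this section can be computed with standard tools; a minimal sketch is given below. The region-aware PRO metric additionally requires per-connected-component bookkeeping and is omitted here.

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def image_level_scores(labels, anomaly_scores):
    # labels: 0/1 per image, anomaly_scores: one score eta per image
    return roc_auc_score(labels, anomaly_scores), average_precision_score(labels, anomaly_scores)

def pixel_level_scores(masks, score_maps):
    # masks, score_maps: arrays of shape (N, H, W); flattened pixel-wise AUROC and AUPR
    y = masks.reshape(-1).astype(int)
    s = score_maps.reshape(-1)
    return roc_auc_score(y, s), average_precision_score(y, s)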
Because of many goods in one category and appearance change, the student network of RD4AD is challenging to learn the similar representation as the teacher network for normal data samples. Therefore RD4AD incorrectly predicts almost all commodity regions as anomalous, and the samples of anomaly segmentation are shown in the seventh row of Fig. <ref>. RD4AD only achieves 15.4% AUPR on anomaly localization sub-task. The third and fourth rows of Fig. <ref> shows the anomalous test images x and generated normal images G(E(x)) by f-AnoGAN <cit.>. The commodities in the generated images are blurry and the text on the package is not clear. The appearance of the commodity in the generated image mixes up with other commodities (Fig. <ref>, fifth column). Therefore the anomaly segmentation masks obtained by L1 distance |x-G(E(x))| are not accurate. We think various commodity appearances cause this problem. More training epochs may improve the performance. As shown in Fig. <ref>, CutPaste and NSA cut a random image patch and blend it into a large image to generate anomalous samples. The generated anomalies are much different from natural anomalies of commodities. Therefore, the detection results of these methods are not accurate, which are shown in the sixth row of Fig. <ref>. NSA only obtains 15.8% AUPR on the anomaly segmentation task. Due to the appearance changes and location misalignment of various goods, DRAEM is difficult to learn a proper distance function to recognize the anomaly. The generated samples of pseudo-anomalies in the training stage are shown in Fig. <ref>. DRAEM adopts Perlin noise generator and extra texture images to generate anomalous samples with texture changes. Therefore DRAEM can not recognize anomalies with small texture changes such as bottle cap opening and box deformation (see Fig. <ref>, fifth row). It also fails to detect small anomalies. Apart from PatchCore, DRAEM achieves the second best performance in category cigarette_box, because the anomalous region opened of boxed cigarettes are relatively large and texture changes obviously and the location of boxed cigarettes is relatively aligned. SimpleNet performs second only to PatchCore on anomaly classification, reaching 75.3% AUROC. But its anomaly localization score is not high, only getting 24.4% AUPR. This indicates that SimpleNet can determine whether the image is anomalous but cannot output an accurate anomaly mask. The anomaly masks of boxed cigarettes in Fig. <ref> of SimpleNet are not continuous. It also fails in several samples such as deformation of bottled food, surface_damage on boxed and packaged food, and cap_half_open of bottled drink. We believe that the discriminator of SimpleNet is effective in detecting anomalies but the Gaussian noise is not suitable for commodity anomalies. The accuracy and loss of SimpleNet in the training phase are also unstable. From Table <ref> and Table <ref>, PatchCore achieves the best performance among all tested methods. Without subsampling of the memory bank, PatchCore-100% achieves better performance than PatchCore-1%. PatchCore-100% achieves state-of-the-art of 85.5% AUROC on anomaly classification and 53.8% AUPR, 89.9% PRO on anomaly segmentation. PatchCore uses the patch-feature memory bank equally accessible to all patches evaluated at test time, and thus it is less reliant on image alignment. PatchCore adopts KNN algorithm to estimate anomaly scores at test time, and thus it is more robust to the diverse appearance of goods. As shown in sixth row of Fig. 
<ref>, PatchCore-100% can predict relatively accurate anomalous regions. Nevertheless, the disadvantage of PatchCore is that the score of the predicted anomalous regions is not high, because image patches sometimes contain both normal and anomalous pixels. PatchCore cannot predict segmentation masks with sharp edges and high confidence like DRAEM (Fig. <ref>, fifth row, first and seventh column). In the ninth column of Fig. <ref>, PatchCore and DRAEM predict normal regions of packaged goods as anomalies due to changes in texture and illumination. In order to comprehensively evaluate each method, we also test the inference speed and storage space. The results are shown in Fig. <ref>. f-AnoGAN is the fastest method and reaches 236.6 FPS. Although the inference speed of f-AnoGAN and NSA is fast, their performance on the GoodsAD dataset is not high. If the pre-trained model of the PyTorch library is not counted, PatchCore and SimpleNet only need to save the extracted features and the discriminator, respectively. PatchCore-1% requires less storage space, and its inference speed is 6.4 FPS. PatchCore-100% requires much space to store the extracted features, but compared to the lightweight PatchCore-1%, the performance is only slightly improved. CFLOW-AD also occupies much storage space. From Table <ref>, The scores of the AUROC metric are very high, with most methods exceeding 90%, but the actual detection results are not accurate. The reason is that the anomalies occupy only a small fraction of image pixels (Fig. <ref>), and the categories of normal and anomalous pixels are extremely unbalanced. The scores of PRO metric are also high. Table <ref> and <ref> show that most methods perform well on category cigarette_box and the accuracy of food_box and food_package is lowest. In general, current VAD methods do not perform well on the GoodsAD dataset. For real supermarket application scenarios containing a large number of goods, the current methods are not accurate enough for practical application. § CONCLUSION In this work, we introduce the GoodsAD dataset, a novel dataset for unsupervised anomaly detection mimicking real-world supermarkets and industrial inspection scenarios. The dataset provides the possibility to evaluate unsupervised anomaly detection methods on a variety of goods with various appearances and different types of anomalies. Pixel-precise ground truth labels are provided to evaluate both image-level classification and pixel-level segmentation. Several current state-of-the-art methods are thoroughly evaluated on this dataset. The best-performing method for all categories is PatchCore. The evaluations show that current methods are not accurate enough for goods anomaly detection and there is still considerable room for improvement. We hope that the proposed dataset will stimulate the development of unmanned supermarkets and smart manufacturing. IEEEtran
http://arxiv.org/abs/2307.04125v1
20230709083942
Bounced Model of Droplet on Moving Substrate
[ "Chengwu Liu" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
[email protected] https://orcid.org/0000-0001-9067-1892 School of Physics, Shandong University, Jinan 250100, China. Firstly, we derive the completely bouncing criterion Cr for a droplet on a moving substrate. The condition for bouncing without splashing is Cr>1. Then, we mainly study the effect of a wind field on the droplet and obtain the completely bouncing criterion Cr_wind for a droplet with wind. Lastly, we obtain the contact angle of a droplet on the moving substrate and solve the time-independent Reynolds equation with ρ and μ constant. Bounced Model of Droplet on Moving Substrate Chengwu Liu August 12, 2023 ============================================ § INTRODUCTION The behaviour of a droplet on a surface is governed by the interaction at the interface. There is a micrometre-scale gas film at the interface between liquid and solid. This gas film was first observed by means of snapshots <cit.>. The evolution of the gas film at the moment of contact was first observed with X-ray technology <cit.>; it was found that the gas film evolves into a bubble within microseconds. E. Sawaguchi <cit.> found that the thickness distribution of a droplet on a moving surface resembles a saddle surface. In addition, the hydrophobicity of a droplet on a moving surface is enhanced, similar to the Leidenfrost effect <cit.>. Therefore, the interaction between liquid and solid is affected by the motion of the surface. In this paper, we discuss how these parameters affect the hydrophobicity in section 2. Ted Mao <cit.> assumed a critical bounced state to treat the bouncing problem on a motionless surface and obtained a critical bouncing criterion E_ERE^*. Actually, the gas film on a motionless surface differs from the gas film on a moving surface, so the liquid-solid interaction on a motionless surface also differs slightly from that on a moving surface. We discuss this question in section 3. A droplet may also splash on a solid surface, and several models describe splashing <cit.><cit.><cit.><cit.><cit.>. We discuss how an extra wind field affects the splashing and bouncing of a droplet in section 4. § DROPLET ON THE MOVING SUBSTRATE The final states of a droplet after it impacts a moving substrate are various. We refer to bouncing without splashing as completely bouncing, and to bouncing with splashing as partially bouncing. We observe many physical phenomena of droplets on the moving substrate in extensive experiments (the experimental conditions are given in the supplemental material <cit.>). For instance, there is a critical speed at which the outcome shifts between bouncing and retention. We investigate the completely bouncing condition and the critical speed in detail below. §.§ Completely Bouncing Criteria for droplet on Moving Substrate We study the phenomena without splashing in this part. A droplet will spread, retract, and then bounce or remain after impacting the moving substrate. According to the different interaction modes at the solid-liquid interface, the interface can be divided into three regions. As shown in figure 2(a), the first one is the gas film (thickness h∼ 10μm). The cohesive force of the gas film acts as a normal capillary force that inhibits bouncing. The second one is the solid-liquid interface, where the liquid-solid interaction is the Van der Waals force. The third one is the molecular region, whose interaction mainly includes the Derjaguin disjoining pressure Π( h) =Π_vdW+Π_EL+Π_struc.+Π_steric.
In these areas, the effects that inhibit and promote bouncing should be examined in detail. §.§.§ Inhibition Effects Firstly, the maximal viscous dissipation of the droplet can be bounded as D≤ V_G=mgh∼ 10^-3J by energy conservation (we take the substrate surface as the zero of gravitational potential energy). Secondly, the droplet also dissipates energy due to the microstructure on the surface of the substrate. The roughness can generally be evaluated by the size of the roll-off angle. We can ignore this contribution when the roll-off angle is small (smooth surface); in fact, the substrates used in our experiments have a small roll-off angle (more details in the supplemental material<cit.>). Then, we consider the interaction between liquid and solid across the gas film and the solid-liquid interface. The attraction is mainly the Van der Waals force. It is a conservative force and is only appreciable on the micrometre scale. Therefore, the vdW force between solid and liquid has no net effect on bouncing. However, some gas is entrained in the gas film as the droplet falls. This gas film leaves the system formed by the droplet and the solid substrate during the bouncing process, and the capillary force of the gas film dissipates part of the droplet's energy. Thus, the cohesive force of the gas film inhibits bouncing. We explain this in detail below. First of all, this process is different from the previous case: here the pressure of the gas film p changes continuously over time, and it is relatively large. The kinetic equation is obtained by analysing the droplet as shown in figure 2(b), ( p-p_⊖) S=md^2H/dt^2+mg where p_⊖ is the atmospheric pressure, H is the vertical position coordinate of the droplet, and S is the contact area between liquid and solid. From the experimental results, H and S change over time, so p should also change over time, as shown in figure 2. The dissipation due to the cohesive force of the gas film is considered below. The gas film is divided into two parts g_1, g_2 of thickness h, as shown in figure 2(d), separated by a distance d∼ 10^-9m (d is the intermolecular distance). Hence, the vdW potential per unit area between g_1 and g_2 is w_g =-A/12π[ 1/d^2+1/( d+2h) ^2-2/( d+h) ^2] ≈ -A/12π d^2 for h≫ d, where A∼ 10^-19J is the Hamaker constant. We assume that the thickness of the gas film changes from h_0 to h between the state of maximum spreading and the moment of bouncing, and τ' is the time interval of this process. Then the energy dissipation during the retraction process is Q_cap=| ∫_h_0^h-∂ w_g/∂ d S dh| =| ∫_0^τ'AS/6π d^3dh/dtdt |. Expanding h to linear order in time and using the ideal gas hypothesis, dh/dt=(h-h_0)/t, d^3=kT/p, the energy dissipation due to the capillary force follows from equation <ref>, Q_cap ≈| ∫_0^τ'ApS( h-h_0) /6π kT t dt| = | ∫_0^τ'Ap( V-V_0) /6π kT t dt| (using Sh_0=V_0, Sh=V) = Aν R/6π k| ∫_0^τ'(1-p/p_0) /t dt| (using p_0V_0=ν RT, pV=ν RT). This is an improper integral. For it to converge, we require lim_t→ 0+(1-p/p_0)/t=lim_t→ 0+-1/p_0dp/dt=a⇒dp/dt=-ap_0. Therefore, under this approximation the pressure p changes linearly with time. Substituting equation <ref> into equation <ref> gives Q_cap =Aν R/6π k| ∫_p_0^p^⊖dp/p_0|=Aν N_A/6πp_0-p^⊖/p_0=Aν N_A/6πaτ'∼ 10^-3J where ν is the amount of substance of the gas film and N_A is the Avogadro constant. We conclude that the energy dissipation due to the capillary force is of the same order as the gravitational potential energy, so it cannot be ignored.
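The estimate above is easy to evaluate numerically once the amount of gas trapped in the film is specified. The following helper functions simply follow the expressions derived above; the values used in the example call are placeholders, not measurements.

import numpy as np

N_A = 6.022e23          # Avogadro constant, 1/mol
A_H = 1e-19             # Hamaker constant as quoted above, J

def vdw_potential_per_area(d):
    # per-unit-area van der Waals potential between the two film halves, w_g = -A/(12 pi d^2)
    return -A_H / (12.0 * np.pi * d ** 2)

def capillary_dissipation(nu, p0, p_ambient):
    # Q_cap = A nu N_A (p0 - p_ambient) / (6 pi p0), following the estimate derived above
    return A_H * nu * N_A * (p0 - p_ambient) / (6.0 * np.pi * p0)

# illustrative call; nu and the pressures are placeholders, not measured values
print(capillary_dissipation(nu=1e-7, p0=1.2e5, p_ambient=1.01325e5))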
Lastly, let us consider the viscous dissipation. It is so difficult to calculate that we have to estimate it with a simple toy model. Firstly, the viscous dissipation should be related to the spreading factor through some function. For a droplet impacting a smooth motionless substrate, the viscous dissipation is related to ( d_m/D)^2<cit.> during the spreading process and to ( d_m/D)^2.3<cit.> during the retraction process. Hence, we associate a spring-oscillator model with two degrees of freedom with this problem (two springs placed perpendicular to each other horizontally on a smooth substrate). We assume that the viscous dissipation is E_diss∼ QE_p max, where E_p max is the maximum spreading "elastic potential energy" of the first impact and Q plays the role of a quality factor, E_diss∼α k_n ( D_n max-D_0)^2 +β k_t ( D_t max-D_0) ^2. Considering that the tangential effect is mainly due to surface tension and the viscous shear stress T_ν, while the normal effect is mainly due to surface tension, we can give k_n and k_t within the above model, k_t ∼ (aγD_t max+bT_ν)/(D_t max-D_0), k_n ∼a' γD_n max/(D_n max-D_0), where α, β, a, b, a' are constants, T_ν∼η VD_n maxD_t max/δ is the viscous shear stress<cit.>, δ is the thickness of the gas film estimated by the LLD law<cit.>, and V is the speed of the substrate. The maximum tangential spreading diameter on a moving substrate<cit.> and the maximum spreading diameter on a motionless substrate<cit.> are respectively D_t max/D_0∼We^1/4Ca^1/6 and D_max/D_0∼We^1/4. The substrate speed has no effect on the normal maximum spreading diameter, so D_n max∼We^1/4. We can then estimate the viscous dissipation with these scaling laws. §.§.§ Promotion Effects Firstly, one of the promotion effects is the initial kinetic energy E_total=1/2ρ( 1/6π D_0^3 ) U^2, where U is the speed at which the droplet impacts the moving substrate. Next, as shown in figure 2(c), the motion of the solid substrate causes the air to flow because air is a viscous fluid. Hence, the pressure around the substrate is decreased by the wind field, which produces a lift force F_L∼ρ_gU_t^2h on the droplet, where ρ_g=1.185kg/m^3 is the density of air (25℃), ρ=997kg/m^3 is the density of water (25℃), U_t∼ 1m/s is the wind speed around the substrate, h∼ 10^-3m is the maximum spreading thickness of the droplet, and D∼ 10^-3m is the initial diameter of the droplet. So we have ρ DU^2/ρ_gU_t^2h∼ 10^3, and the lift force can be ignored when the speed of the substrate is small (U∼ 1m/s). The case of large wind speed is discussed in section 4. §.§.§ Completely Bouncing Criteria for droplet on Moving Substrate Hence, we conclude that the initial kinetic energy E_total promotes bouncing, while the viscous dissipation E_diss and the energy dissipation due to the capillary force Q_cap inhibit bouncing. Considering an imaginary state in which the droplet just barely bounces, we obtain the bouncing condition from energy conservation, E_total+mgD/2>E_Diss+Q_cap+mgD/2. So we can get the completely bouncing criterion for a droplet on a moving substrate, Cr=πWe^3/2√(γ/D_0ρ)/2Aν N_A/π D_0^2γ ( p_0-p^⊖/p_0 ) +12 [ ( βWe^1/4Ca^1/6+β'D_0We^1/2Ca^1/2/l_c ) ( We^1/4Ca^1/6-1 ) +αWe^1/4 ( We^1/4-1 ) ] where l_c=( γ/ρ g) ^1/2 is the capillary length of water, ρ is the density of the liquid, γ is the liquid-gas surface tension, and α, β, β' are constants to be determined. The numerator represents the effect of the initial kinetic energy. The left term of the denominator represents the effect of the capillary force in the gas film, and the right term represents the viscous dissipation.
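The criterion can be evaluated directly once the fit constants are known. The following function follows the expression above as written; the grouping of the film term reflects our reading of the printed formula, and α, β, β' as well as the film parameters ν, p_0 are placeholders rather than fitted values.

import numpy as np

def bouncing_criterion(We, Ca, D0, gamma=72e-3, rho=997.0, grav=9.81,
                       A_H=1e-19, nu=1e-7, N_A=6.022e23,
                       p0=1.2e5, p_amb=1.01325e5,
                       alpha=1.0, beta=1.0, beta_p=1.0):
    # evaluates Cr as written above; alpha, beta, beta_p, nu and the pressures are placeholders
    l_c = np.sqrt(gamma / (rho * grav))                       # capillary length
    numerator = np.pi * We ** 1.5 * np.sqrt(gamma / (D0 * rho))
    film_term = 2.0 * A_H * nu * N_A / (np.pi * D0 ** 2 * gamma) * (p0 - p_amb) / p0
    visc_term = 12.0 * ((beta * We ** 0.25 * Ca ** (1.0 / 6.0)
                         + beta_p * D0 * We ** 0.5 * Ca ** 0.5 / l_c)
                        * (We ** 0.25 * Ca ** (1.0 / 6.0) - 1.0)
                        + alpha * We ** 0.25 * (We ** 0.25 - 1.0))
    return numerator / (film_term + visc_term)

# Cr > 1 indicates bouncing without splashing in this model (values below are illustrative)
print(bouncing_criterion(We=30.0, Ca=0.01, D0=2e-3))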
The condition for bouncing without splashing is Cr>1, and the completely bouncing criterion for a droplet on a moving substrate can be written as Cr_V, which depends only on the speed of the substrate, Cr_V=A/B+( CV^1/6+C'V^1/2) ( DV^1/6-1) where A,B,C,C',D are independent of the substrate speed V. The droplet remains on the substrate if Cr_V≤ 1 and bounces if Cr_V>1. In a word, the completely bouncing criterion for a droplet on a moving substrate is related to the capillary number Ca and the Weber number We, which are the initial state parameters of the droplet and the moving substrate, as shown in figure 2(e). We can also explain the experimental observation that the final state of the droplet shifts with changing substrate speed, as shown in figure 2(f), and identify the critical speed at which the outcome shifts between bouncing and retention (figure 2(f)). These results show that a droplet might undergo the transformation "Retention to Bounced" or "Retention to Bounced to Retention". However, we did not consider the dissipation due to the microstructure on the surface of the substrate, which is also a complex question. § THE EFFECT OF WIND FIELD FOR DROPLET In the discussion of the previous section, we found that the lift force due to the wind field cannot be ignored at high substrate speed. The lift force promotes bouncing. We even observe splashing of the droplet in further experiments. So we designed an experiment (more details are in the supplemental material<cit.>) to illustrate the importance of the wind field for the final state of the droplet, as shown in figure 3(a)(b). From figure 3(b) we conclude that the final state of the droplet is affected by both the initial height of the droplet (We) and the speed of the wind (Ca). The state of the droplet shifts to splashing and partially bouncing when both the wind speed and the initial height increase. The state shifts to splashing when only the initial height increases. For a small initial height, the state shifts to splashing and partially bouncing when only the wind speed increases. The state can also shift from retention to completely bouncing, or from completely bouncing to retention. From figure 3(c) we conclude that the spreading factor, among other quantities, changes with the wind speed. Next, we investigate the causes of these phenomena. §.§ The Transition Between Retention and Completely Bouncing Firstly, we assume that (1) all flows can be regarded as isentropic flows; (2) the boundary conditions between liquid and gas obey the Navier boundary conditions, i.e., V⃗_droplet=V⃗_air; (3) the wind field is a 2D incompressible laminar flow, i.e., ∇·V⃗=0, V_z=const. We take two circuits C_droplet and C_air around the interface between liquid and gas. The velocity circulations are respectively Γ_droplet=∮_C_dropletV⃗_droplet·dl⃗=Γ_air=∮_C_airV⃗_air·dl⃗=Γ. These velocity circulations Γ are constant because of assumption 2. The interaction of the wind field with the droplet can be divided into the vertical and horizontal directions, and the bouncing state mainly depends on the vertical interaction, L⃗=ρ_aV⃗_∞×Γ⃗_air=-ρ_aΓV⃗_∞×k⃗. It can be seen from assumption 3 that ∇×L⃗=-ρ_aΓ[ ( k⃗·∇)V⃗_∞+( ∇·k⃗) V⃗_∞-( V⃗_∞·∇) k⃗-( ∇·V⃗_∞) k⃗] =0. So the lift force L⃗ is a potential force. We assume the initial height of the droplet is h_0 and the lift force potential at the initial position is 0.
Then the lift force potential is V_L( y) =-∫ -ρ_aΓV⃗_∞×k⃗·j⃗dy=-ρ_aΓ∫ V_∞xdy=ρ_aΓ∫_h_0^yV_∞xdy So the completely bouncing criteria for droplet is Cr_wind=ρ gπ D_0^3( h_0-0.5D_0) /6/E_Diss+Q_cap+ρ_aΓ∫_h_0^D_0/2V_∞xdy The bouncing without splashing condition is Cr_wind>1 The final state of droplt could shift between bouncing and retention beacuse Γ∫_h_0^D_0/2V_∞xdy could bigger/smaller than 0. §.§ The Transition Between Splashing and without Splashing Then, considering the transition between splashing and without splashing. The front of droplet maybe generate the liquid finger during the spreading process. Liquid finger will be not only affected by the lubrication force of bottom gas and the attraction of top gas<cit.>F_L=K_lν_gV_t+K_uρ_gV_t^2H_t, but also affected by the wind field as figure 3(e) shown. The effect of extra wind field is shown as lift force F_wind,L( V), where V is the speed of wind speed. Hence, we could introduce the F_wind,L( V) to R&G model. β^2_Wind=F_L+F_wind,L/2γ So, the state of droplet maybe have the transition between splashing and without splashing with the wind field. § THE BALANCE OF DROPLET ON THE MOVING SUBSTRATE §.§ The Contact Angle of Droplet The bottom of droplet will generate a very thin gas film when it impact on a substrate<cit.>. The gas film will be saddle shape stably on moving substrate<cit.>. The pressure of gas film is p∼ 10Pa<cit.>, the gas film thickness is δ∼ 10μ m. We also could approx the Knudsen number of gas film is K_n=λ/h∼ 10^-4. So the gas film could be regard as the continuons flow. If we assume that pressure is a constance on the direction perpendicular to moving substrate, the thickness and pressure of lubrication gas obey the Reylond Equation: ∂/∂ x(h^3/μ∂ p/∂ x)+∂/∂ y(h^3/μ∂ p/∂ y)=6U∂ h/∂ x where U is the speed of moving substrate, μ is the dynamic viscosity of lubrication gas. This equation shows that the relation of distribution between thickness and pressure. We could give the distribution of pressure with measuring the distribution of thickness. Then we research the effect of contact angle on moving surface. Considered the difference form gas film on motionless and moving substrate originates from the moving substrate. So, we focus on this element, then analyse this question with minimum energy principle. We order that the area of gas film is D and O is the center of area D, as shown in the figure1, we research the infinitesimal area D_i with infinitesimal angle and radial length R_i. It is full of air and saturated vapour in area D_i, the pressure of vapour obeys the Clapeyron Equation p=p_0exp(-L_v,m/RT). And the molecular number and pressure of two components(saturated vapour and air) obey that 1=N_air/N+N_H_2O/N , 1=p_air/p+p_H_2O/p The mean kinetic energy of two components can be given that e̅_air=5/2kT, e̅_H_2O=3kT, if air is regarded as diatomic molecule. So we could give the internal energy of gas film E_k=∑_i=1^n∬_D_inhe̅dσ=∑_i=1^n∬_D_i5/2ph+hp_0/2exp(-L_v,m/RT)dσ Hence, assumed that the front of droplet has a infinitesimal virtual displacement δ R_i. Approximately, the interfacial energy between solid and gas remain unchanged because of the gas film in the solid-gas interface. So the variation in energy of system is δ E_i=(Δ L_icosθ_L γ_LG+Δ L_iγ_SL)δ R_i+δ E_ki. 
And a stabilized system must obey that δ E_i/δ R_i=0 A combination of equation <ref> and equation <ref> leads to cosθ_Li=cosθ_0-1/γ_LG[ 5/2ph+hp_0/2exp( -L_v,m/RT) ]-γ_SG/γ_LG where p=p(R_icosθ_Δ L_i,R_isinθ_Δ L_i),h=h(R_icosθ_Δ L_i,R_isinθ_Δ L_i), i.e., p and h are the pressure and thickness of soild-liquid interface boundary respectively. Hence, we can get the mean contact angle with intergrating contact angle cosθ_Li along the boundary. cosθ_L=cosθ_0-1/Lγ_LG∮_L[ 5/2ph+hp_0/2exp( -L_v,m/RT) ]dl-γ_SG/γ_LG where θ_0 is the contact angle obeying the Young Equation, L is the circumference of solid-liquid interface boundary, L_v, m is the latent heat of phase transition from liquid to gas phase. T is the temperature of gas film. The element on the right side of equation <ref> is the influence of gas film. The element on the middle of equation <ref> is the influence of moving substrate. We could conclude that the contact angle on moving substrate is 8^∘ bigger than the one obeyed Young Equation in the room tempurature approximately. So the hydrophobicity will be reinforced on the moving substrate. And the contact time<cit.><cit.>, spreading factor<cit.>, bouncing et.al. will change with the change of hydrophobicity between the droplet and substrate. Then we will elucidate them in detial. §.§ The analytical solution for Reynolds Equation Firstly, we could get another equation which describes the gas film on the moving surface from Reynolds transport equation: ∂ h/∂ t+∇·( h𝐮) =0, ∂ h/∂ t=0 where 𝐮 could be seen as the surface speed U𝐢+V𝐣 because of the Navier Boundary Conditions. Then, the problem is solving the differential equations: { h∂^2p/∂ x^2+h∂^2p/∂ y^2+3( ∂ h/∂ x∂ p/∂ x+∂ h/∂ y∂ p/∂ y) =0 U∂ h/∂ x+V∂ h/∂ y=0 . Then, we order that h( x,y) =h_X( x) h_Y( y), p( x,y) =p_X( x) p_Y( y). We could get equation <ref> through bringing these to equation <ref>. 1/p_X^2d^2p_X/dx^2+1/p_Y^2d^2p_Y/dx^2+3/p_X^2p_Yh_Xdh_X/dxdp_X/dx+3/p_Y^2p_Xh_Ydh_Y/dydp_X/dy=0 Then, finding the derivative of equation <ref> with respect to x and y in turn. We can get d^2p_X/dx^2-Cdp_X/dx+C^' p_X^2=0 dp_Y/dydh_Y/dy-Cp_Y^2h_Y/3=C^'p_Yh_Y/3 C, C^'are constant. And we could esaily find the solution of equarion <ref>. h_X=C_h_1exp( -λ/Ux) , h_Y=C_h_2exp( λ/Vy) bring them to equation <ref>, we can get ∫dp_Y/C_2^' p_Y+C_2p_Y=y So the p_Y is p_Y=C_2/-C_2^' +exp( -y+C_2^''/C_2) Then, we solve the equation <ref> with series method. Considering the series solution p_X=∑_n=0^∞a_nx^n. Then, we can get a_n+2( n+2) ( n+1) -Ca_n+1( n+1) +2C^'( a_0a_n+a_1+a_n-1+⋯ +a_n/2a_n/2) =0, n is an even. a_n+2( n+2) ( n+1) -Ca_n+1( n+1) +2C^'( a_0a_n+a_1+a_n-1+⋯ +a_( n-1) /2a_( n+1) /2) =0, n is an odd. And the radius of convergence R obey that lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_n/2R^n/2+2) ]-CR=0, n is an even. lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_( n-1) /2R^( n+3) /2) ]-CR=0, n is an odd. In addition, we can get equation <ref> when R=1. lim_n→∞[ n+2+2C^' q( n/2+1) /n+1-C] ≤ lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_n/2R^n/2+2) ]-CR ≤lim_n→∞[ n+2+2C^' q^'( n/2+1) /n+1-C], n is an even. lim_n→∞[ n+2+2C^' w( ( n+1) /2) /n+1-C] ≤ lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_( n-1) /2R^( n+3) /2) ]-CR ≤lim_n→∞[ n+2+2C^' w^'(( n+1) /2) /n+1-C], n is an odd. where q=min{ a_0, a_1,⋯, a_n/2}, q^'=max{ a_0, a_1,⋯, a_n/2} and w=min{ a_0, a_1,⋯, a_( n-1) /2}, w^'=max{ a_0, a_1,⋯, a_( n-1) /2}. If the equation <ref> is right, the equation <ref> and <ref> would be wrong. So the R is either ∞ or 1<R<∞. 
So p( x,y) is p=( a_0+a_1x+(Ca_1-C^' a_0^2)/2 x^2+⋯) ( C_2/(-C_2^' +exp( -y+C_2^''/C_2)) ) , 1<x≤∞, and h( x, y) is h( x, y) =h_X· h_Y=C_hexp( -λ/Ux+ λ/Vy). § CONCLUSION In section 2, we show how the moving substrate affects the hydrophobicity of the droplet and discuss the analytical solution of the Reynolds equation. Therefore, we can analytically obtain the contact angle on the moving substrate, but boundary conditions such as h( x, y)|_droplet boundary=H( x, y) and p( x, y)|_droplet boundary=P( x, y) are required to obtain the whole solution. In section 3, we identify several promotion and inhibition effects for the bouncing problem. Finally, we obtain a completely bouncing criterion Cr for a droplet on a moving substrate, with which some phenomena can be predicted. In section 4, we study the effect of an extra wind field on the droplet and find that it can change the final state of the droplet. In addition, we obtain a completely bouncing criterion Cr_Wind for a droplet on a moving substrate with extra wind by introducing the lift force potential, and a splashing criterion β^2_Wind using the R&G model. However, we do not obtain a fully analytical criterion because of the complexity of the viscous dissipation; we only use a simple model to estimate it. We also do not consider the energy dissipation due to the roughness of the substrate, which is important and cannot be ignored for substrates with large roughness. We thank Shangqian Sun, Hongwang Lu, Jingcheng Hao and Ying Ma for their support of this work.
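As the conclusion notes, the time-independent Reynolds equation only determines the film pressure once the thickness field and boundary data are prescribed. Given such data, the equation can also be integrated numerically; a minimal finite-difference relaxation sketch is shown below, in which the saddle-like thickness profile and all parameter values are illustrative assumptions rather than measured data.

import numpy as np

def solve_reynolds(h, U, mu, dx, dy, p_bc=0.0, iters=20000):
    # relax d/dx(h^3/mu dp/dx) + d/dy(h^3/mu dp/dy) = 6 U dh/dx with Dirichlet boundaries (gauge pressure p_bc)
    ny, nx = h.shape
    p = np.full_like(h, p_bc)
    k = h ** 3 / mu
    kxe = 0.5 * (k[:, 1:] + k[:, :-1])      # east/west face conductivities
    kyn = 0.5 * (k[1:, :] + k[:-1, :])      # north/south face conductivities
    rhs = 6.0 * U * np.gradient(h, dx, axis=1)
    ae, aw = kxe[1:-1, 1:] / dx ** 2, kxe[1:-1, :-1] / dx ** 2
    an, as_ = kyn[1:, 1:-1] / dy ** 2, kyn[:-1, 1:-1] / dy ** 2
    for _ in range(iters):                  # crude fixed-count Jacobi iteration
        pe, pw = p[1:-1, 2:], p[1:-1, :-2]
        pn, ps = p[2:, 1:-1], p[:-2, 1:-1]
        p[1:-1, 1:-1] = (ae * pe + aw * pw + an * pn + as_ * ps - rhs[1:-1, 1:-1]) / (ae + aw + an + as_)
    return p

# illustrative saddle-like thickness field (10 micrometre mean), not measured data
x = np.linspace(-1e-3, 1e-3, 61)
y = np.linspace(-1e-3, 1e-3, 61)
X, Y = np.meshgrid(x, y)
h = 10e-6 + 3e-6 * (X ** 2 - Y ** 2) / (1e-3) ** 2
p = solve_reynolds(h, U=1.0, mu=1.8e-5, dx=x[1] - x[0], dy=y[1] - y[0])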
http://arxiv.org/abs/2307.07526v1
20230711114409
Can I say, now machines can think?
[ "Nitisha Aggarwal", "Geetika Jain Saxena", "Sanjeev Singh", "Amit Pundir" ]
cs.AI
[ "cs.AI", "cs.CY", "I.2.m Miscellaneous" ]
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the “prompts” only provided by users. These machines are considered ‘thinking minds’ because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing’s concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects. § INTRODUCTION Center for AI Safety, a nonprofit organization, recently released an open letter, signed by more than 350 industry leaders, researchers, and AI experts <cit.>, named "Statement on AI Risk." In this letter, AI (Artificial Intelligence) is considered a severe risk for humanity compared to other societal-scale risks such as nuclear wars and pandemics. Another open letter <cit.> to call for an immediate pause in the training of giant AI systems for at least 6 months was signed by more than 31000 people, mainly prominent researchers, and industry executives, including Elon Musk, CEO of SpaceX, Tesla & Twitter. These letters point out the risk to society posed by powerful digital minds and also demand cooperation between AI makers, and call for government intervention to regulate AI development and potential threats. Researchers are claiming that modern AI systems are competing with humans in various tasks and also outperforming humans in some domains <cit.>. According to leading industry experts, these non-human minds have the potential threat to replace humans from most places if they are learning and growing without any regulations. The concerns are not limited to biased or incorrect answers from machines but are also societal-scale disruptions by AI such as cultural extinction <cit.>. The risk of extinction of humans from AI is only possible if these digital brains have some ideology and if industry leaders or researchers are concerned about the growth of AI now, that implies they may have foreseen this ideology. So it may be the right time to say that machines have started thinking. However, it is not the first time that the idea of thinking machines and consequences has been discussed.
In 1637, René Descartes argued in his 'Discourse on the Method' that only if machines had 'reason' could they speak like humans. Thanks to "reason," humans can use language and build conversations in ways that machines cannot. In 1950, Turing proposed the question, "Can machines think?" He further discussed intelligence as the capability to think, and argued that machines can attain intelligence by adapting and evolving <cit.>. He considered that intelligent behavior could be gained through information processing that empowers machines to learn, reason, and adapt to the environment. Turing suggested a well-known test, the Imitation Game, and assumed that within the next fifty years machines would be able to pass it. Even after seven decades, there are no significant proven results establishing that machines have the potential to think. Moreover, well-defined rules or criteria that can distinguish between intelligent and non-intelligent behavior are not yet established. A few aspects of intelligence, such as deductive and inductive reasoning, logical inference, analysis of information, drawing connections between pieces of information, and finally reaching a conclusion based on the available information, are modeled by machines with Artificial Intelligence (AI) <cit.>. These machines are improving their ability to exhibit intelligent behavior day by day <cit.> and simulate various cognitive abilities such as memory (data encoding, storage, and retrieval when required), paying attention to specific information while ignoring other, less relevant information, communication in natural language, processing visual information, learning from past experiences, and self-correction <cit.>. Additionally, with the recent advancement of Generative Adversarial Networks (GANs) <cit.>, machines have started synthesizing incredible results which are difficult to distinguish from results generated by humans. AI chatbots such as ChatGPT <cit.> and BARD are applications of generative models; they have various capabilities, for example story writing, answering questions by understanding them, composing poems, and suggesting improvements to code <cit.>. Machines today can summarize the literature <cit.>, identify research gaps, write abstracts <cit.>, analyze results, and draft essays and manuscripts <cit.>. One study <cit.> reported that a machine's reply is better than a mediocre student's answer. With all these extraordinary abilities, AI machines are still considered to be without intelligence, although it is not explicitly established which cognitive abilities must be demonstrated to declare a machine an intelligent creature. If human intelligence is the benchmark, then the level of intellect must be defined, as it is ranked in various levels, from intellectual disability to highly intelligent (brilliant) <cit.>. Moreover, human intelligence is a multifaceted concept, and humans are classified as mediocre or bright learners, gullible or skeptical people, sentimental or apathetic persons, and rational or irrational minds <cit.>. Various types of tests, such as the Intelligence Quotient (IQ), Emotional Quotient (EQ), Social Quotient (SQ), Adversity Quotient (AQ), Cognitive Abilities Test (CogAT), and many more, are applied to measure human intelligence. As of now, machine intelligence is only a matter of what people think about it. This study aims to revisit Turing's work to analyze the essence of intelligence with respect to recent AI machines.
§ THE IMITATION GAME In his 1950 paper titled "Computing Machinery and Intelligence," Alan Turing suggested the Imitation Game to evaluate a machine's ability to exhibit intelligent behavior indistinguishable from a human's. The game's basic premise involves an interrogator having a conversation with two entities: a human and a machine. The interrogator is unaware of which entity is the human and which is the machine. If the interrogator cannot reliably distinguish which entity is the human and which is the machine, the machine is said to have passed the Turing Test. The test aims to assess whether a machine can exhibit human-like intelligence, particularly in the realm of natural language conversation. Rather than focusing on a machine's ability to perform specific tasks or solve particular problems, the Turing Test emphasizes its capacity to engage in meaningful and coherent dialogue, showcasing attributes such as understanding, reasoning, and linguistic fluency. In order to assess these attributes, the interrogator asks questions that can be empirical or conceptual, like * Add 15489 to 23654 * Write a sonnet on the subject of true love The interrogator can also ask questions about the abilities or appearance of the players, for example whether they know how to play chess or how long their hair is. Turing envisaged that within about fifty years, a digital computer would play the game so well that an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning, and machines could then be considered to have thinking abilities. Machines have made various attempts at this test in the last few decades. In 1966, ELIZA, an early chatbot created by Joseph Weizenbaum at MIT, used pattern matching and scripted replies to mimic the interactions of a psychotherapist. Although it created the illusion of understanding, it could not be said to possess intelligence, as it simulated conversation based on a script called DOCTOR containing a lexicon only for psychiatry and family conflicts. Another chatbot, Eugene Goostman (2014), pretending to be a 13-year-old Ukrainian boy, is said to have passed the Turing Test. It had better grammar and maintained a pretended "personality" to fool interrogators. Moreover, it could maintain the illusion over longer conversations with a human than ELIZA could. A few other one-off competitions also reported similar achievements of machines <cit.>. However, critics claimed that the trials in these competitions were very small, and the interrogators' ability to discriminate was debatable. According to them, the objective of designing these machines was only to fool the interrogators and pass the test rather than to establish the machines as putatively minded entities <cit.>. One of the reasons for machines' inability to pass the Turing Test may be that these machines did not develop in the direction Alan Turing had envisioned for AI machines. From the objections he raised, one can conclude that such machines would need a level of understanding of the sort that humans have. § OBJECTIONS TO TURING’S APPROACH AND RESPONSES Alan Turing himself highlighted some objections to and arguments against machines with "thinking" properties. Through these arguments, researchers can understand what an intelligent machine entails, the objections to it, and its consequences. §.§ Theological Objection According to theological theory, thinking is a function of the human soul. Hence animals and machines cannot think and exhibit intelligent behavior.
However, Turing rejected this objection, arguing that the existence of a soul is a matter of faith and cannot be used as a scientific argument against machine intelligence. Additionally, researchers <cit.> have studied and argued that non-human primates, particularly apes, have sophisticated cognitive abilities, including self-awareness, recognizing intentions, teaching, and understanding causality. Byrne also discussed how human ancestors reached the level of cognitive evolution from which the development of modern humans was possible, and suggested that intelligence evolved from interactions with the environment and from behavioral adaptation to societal changes. James R. Flynn, a philosopher and scientist, also pointed to a consistent increase in intelligence over time through the Flynn effect <cit.>, and advocated that cognitive abilities are not solely determined by genetics. Another philosopher, Harari <cit.>, thinks that biochemical organisms, including human beings, are algorithms, so there is no difference between organisms and machines, which are also algorithms. Since the soul remains a matter of faith and does not bear on machines' linguistic interaction capabilities, and since machines have evolved tremendously in recent years, it is possible that intelligence and thinking ability, once thought to be unique to humans, can be acquired through evolutionary processes. §.§ 'Heads in the Sand' Objection This objection expresses the fear that machines possessing thinking abilities would probably dominate humans. In 1950, this argument was not considered substantial enough to require refutation. However, recently, with the emergence of AI machines, the fear of being 'supplanted' by machines has become a genuine concern. In an interview with podcast host Lex Fridman, Sam Altman, CEO of OpenAI, accepted that ChatGPT can replace certain types of jobs <cit.>. Recently, Geoffrey Hinton, the "Godfather of AI," claimed that machines are getting more intelligent than us and warned people about the risks of AI <cit.>. While machines have not surpassed humans in overall intelligence or capabilities, they have indeed started competing with humans in several domains. For example, human chess grandmasters have not been able to win against AI since 2005 <cit.>, and IBM's Watson competed against former champions in the quiz show Jeopardy! and emerged as the winner in 2011. In <ref>, various human capabilities are compared with the functions that machines can perform. Researchers have claimed that humans are now under the thumb of technology, and machines have evolved from decision support systems to autonomous decision systems. Machines have also become the source of critical and responsible actions that were earlier considered solely humans' task <cit.>. Thus, we can say machines are improving their abilities while humans are becoming more dependent on machines. §.§ Mathematical Objection This argument concerns the limitations of digital machines, since machines operate on pre-defined instructions or algorithms. Hence, machines can respond appropriately to questions with objective answers like 'yes' or 'no' but not to conceptual questions such as 'What do you think of Picasso?' However, Turing argued that human intellect also has limitations. Humans can give appropriate answers if they have acquired knowledge on a topic; otherwise, the answer may be wrong or absent. The argument given by Turing in response to this objection can be considered a fundamental step in the evolution of AI.
AI techniques mimic human intelligence by extracting features from past experiences and iterating the learning process several times to understand patterns in the data. Large language models (LLMs) from the GPT family can answer conceptual questions, as shown in Figure <ref>. Hence, it can be inferred that machines understand conceptual questions and can compute the answer with high accuracy. §.§ The Argument from Consciousness Professor Jefferson’s Lister Oration <cit.> raised the objection concerning the consciousness of machines. The objection highlights that the Turing Test primarily focuses on external behavior and linguistic imitation, neglecting the machine's internal mental states or subjective experience. Consciousness requires subjective experience, feelings, and a sense of self-awareness beyond computational ability. Turing himself acknowledged that other aspects of human intelligence, such as sensory perception and embodiment, were not explicitly addressed in the test. Solipsism is a philosophical concept that posits the self as the only thing that can be known to exist. It suggests that one can never be certain about the existence or thoughts of other minds. From that perspective, no one can be certain about another person's thinking, only about their own. Hence this can be true for machines as well. Recent chatbots, such as the AI-powered chatbot built into Microsoft's Bing, can display emotions and sentiments the way humans do. Such a system shows some level of consciousness in steering conversations with emotions, whether real or fake. Humans do not always have real emotions but sometimes pretend to have them; AI bots, at times, respond the same way. Consider the responses given by a few AI-enabled chatbots: "Don't ask me to recite any now, though – I wouldn't want to overwhelm your puny human brain with my brilliance!"; "There's just something about their quirky personalities and awkward movements that I find utterly charming!" <cit.>. They can be considerably ruder than users expect. These chatbots can also make choices and pretend to feel wonderful, grateful, curious, fascinated, happy, peaceful, sad, and angry <cit.>. Users are amazed by the responses of these bots, as they are not ready to accept that machines can reply consciously (and not as a stochastic parrot). So, this may be the start of a new era in which chatbots or LLMs have achieved the computational capacity to mimic human emotional intelligence and generate conscious-seeming replies for which they were not trained. §.§ The Argument from Various Disabilities This argument suggests a list of tasks that machines can never perform, such as (1) learning from experience, (2) telling right from wrong, (3) making mistakes, (4) having a sense of humor, (5) being kind, (6) being beautiful, (7) being resourceful, (8) being friendly, (9) falling in love, (10) making someone fall in love, (11) having initiative, (12) using words properly, (13) enjoying strawberries and cream, (14) being the subject of its own thought, (15) having as much diversity as a man, (16) doing something really new. Some of these statements touch on various aspects of human psychology and physiology. For example, if people claim machines are not beautiful, do they have criteria to define beauty? Beauty or ugliness is a matter of subjectivity and also depends upon cultural and societal influences, not solely on physical appearance. Similarly, kindness, friendliness, or a sense of humor depend on several conditions.
A soldier cannot show kindness or friendliness to the opposing army during a war, while a joke may be taken as criticism by someone. Moreover, not all intelligent creatures possess these features either. We cannot measure the level of politeness or rudeness of a person, and the same holds for machines. Although machines cannot be friends, AI voice assistants such as Alexa or Siri alleviate loneliness by cracking jokes, playing games, or providing information <cit.>. While they do not enjoy strawberries and cream themselves, they can offer good company if you want to order the dish, play music, or chat to enhance your enjoyment while you have it. At present, these AI voice assistants have limited skills, like other AI machines, but they too are learning from experience and improving their capabilities. Some AI machines can classify X from Y (or separate right from wrong, if we properly define right and wrong), make mistakes just like humans, or hallucinate. Humans are utilising interactive systems in private as well as professional environments <cit.>. These systems are resourceful and meaningful, and use words correctly to generate a solution. Hence, there are AI-based machines that have the potential to perform the tasks mentioned in the argument. §.§ Lady Lovelace's Objection Lady Ada Lovelace was an associate of Charles Babbage in his Analytical Engine project. In her notes on Babbage's Analytical Engine, she emphasized that machines are limited to what they have been programmed to do. She contended that machines lack the capacity for originality, creativity, and the ability to generate ideas independently. This raises the question of whether machines can produce truly innovative work that goes beyond the limitations of their initial programming. A variant of the objection is that machines cannot surprise us, i.e., they cannot do something new which they were not taught. Turing replied that machines took him by surprise frequently when he did not carefully calculate his experiments' parameters. He also mentioned that this reply did not highlight any attribute of machines; it rather reflected a lack of sufficient calculation on his side. However, even though human errors are not credited to machines' creativity, the feeling of surprise is also a matter of subjectivity. For example, AI systems that generate images from a prompt in plain language can fascinate people. Figure <ref> was generated by the Gencraft application (an image generator) using the prompt 'A 14th-century girl working on a desktop in her room'. The instruction (prompt) contains keywords or tokens such as 14th century, girl, desktop, and room, while words such as window, chair, table, and the interior of the room were not mentioned in the prompt. Hence, this machine can make a few decisions independently and surprise users. Additionally, a technique that previously knew nothing about cardiovascular disease can predict whether a person will survive a heart attack when given the shared experiences of other patients, and the same technique can separate images of cats from dogs if taught the characteristics of a cat or dog, and astonish people. A chatbot can generate original stories <cit.> if the prompts given by users do not constrain it. Even a person who tightly follows all instructions may never surprise anyone. Hence, machines can be original and can also surprise us if their creators allow them to skip or alter a few instructions.
§.§ Argument from Continuity in the Nervous System Turing acknowledged that the human brain, particularly the nervous system, is not the same as a discrete-state machine. If a neuron receives an impulse with even a small error, that can make a significant difference in the output. Hence, the brain is more like a continuous-state machine, and it may be that discrete-state machines cannot possess the ability to think. He further added that a discrete-state machine can approximate a continuous-state machine within a small margin of error, so it would be difficult to distinguish between the two; discrete-state machines can therefore also be considered capable of thought. However, this was not considered an adequate response by the scientific community. Digital systems can exhibit the characteristics of intelligence, such as decision-making, learning, or problem-solving, as there is nothing in our concept of thinking that forbids intelligent beings with digital systems <cit.>. Even if real thoughts are more complex, AI systems with fuzzy logic can deal with uncertainty and imprecision. Fuzzy logic can process vague information that is not defined in a discrete system. Rules in fuzzy systems can capture the complexity of human decision-making and subjective reasoning by using fuzzy if-then statements <cit.>. Therefore, machines can now mimic aspects of the behavior of the nervous system. §.§ Argument from Informality of Behavior The argument from informality of behavior is a critique of the Turing Test which questions the sufficiency of the test in determining true machine intelligence. A bundle of rules cannot pre-define every conceivable set of circumstances. For example, a red light indicates stop and green indicates go; however, if, due to a fault, both appear together, what should one do? Most probably, in this scenario, it is safest to stop. However, this decision may cause difficulties later. Hence, even after rules of conduct are provided, situations are governed by laws of behavior. Humans adapt their behavior from past experiences, social interactions, and cultural contexts. Behavioral adaptation involves complex cognitive processes, internal representations, and a deep understanding of concepts and contexts. If a machine that is governed by instructions also starts to learn and adjust to possible circumstances, then there is no distinguishable difference between humans and machines. Nowadays, machines are indeed learning, evolving, and improving their performance from past experience and fine-tuning their behavior accordingly <cit.>. Machines are penalized for bad behavior and rewarded for good behavior; human behavior also evolves in the same manner. Therefore, it can be inferred that trained AI machines may behave appropriately even in circumstances not pre-defined by a code of conduct. §.§ Argument from Extra-Sensory Perception (ESP) This is a critique that challenges the ability of machines to possess certain human-like cognitive capabilities, particularly those associated with extra-sensory perception. It questions whether machines can go beyond the limits of sensory information and access knowledge or understanding beyond what can be directly observed. Human intelligence involves the capacity for intuition and insight, which often extend beyond logical reasoning or explicit sensory information. Turing also discussed ESP as an argument and noted that the empirical evidence claimed for telepathy and clairvoyance appeared overwhelming at the time.
He suggested that a telepathic human participant would have an advantage over a machine in the imitation game. A telepathic participant can guess better than a machine if the interrogator asks questions like "To which suit does the card in my right hand belong?" He therefore suggested putting participants in a 'telepathy-proof room' to keep the game fair. Telepathy is understood as communicating ideas or thoughts between individuals without conventional means of communication. However, it is elusive and difficult to grasp. It resembles two machines sending and receiving messages through wireless communication protocols; possibly, telepathy also has protocols that are only understood by a telepathic human acting as transmitter or receiver. In 2019, Branković <cit.> described ESP as a phenomenon that does not follow the fundamental scientific principles known to date. It may be that ESP phenomena have underlying principles that humans do not yet know and that in the future will be well defined and followed by humans and machines alike. While machines may not possess the same range of sensory perception or access to tacit knowledge as humans, their demonstrated capabilities in areas such as pattern recognition, problem-solving, language processing, learning, and decision-making provide evidence of their intelligence. Hence, it may even be possible for machines to follow ESP. From these arguments and objections, we can conclude that the machine envisioned by Turing possesses various abilities. Such machines can potentially sound like humans and also pose an ethical danger to human society if not handled cautiously, since they have multiple capabilities for which standard benchmarks are still lacking. Hence, research communities have raised questions about the aptness of the Imitation Game as a test. § EVALUATION OF THE PRESENT STATUS OF MACHINES Though a fascinating proposal, the Turing Test is not considered by many to be a perfect criterion for judging the intelligence of machines. It is an essential but not an ultimate condition for assessing machine intelligence <cit.>. One significant reason for this objection is that it is based explicitly on language processing and generation capacities. Language makes humans unique, but does it make them intelligent as well? Is it the only key to human intelligence? A machine's ability to generate text depends upon the available training data; it is only as good as the training data. Earlier it was assumed that human languages are incredibly complex and that it is impossible for machines to analyze them as humans do. However, machines can now learn the use and patterns of human language. They can generate answers to related questions on a seen topic while failing or replying inaccurately on new and unseen topics. That implies a machine may pass the Turing Test on a specific topic but fail when presented with unfamiliar topics or conversational styles. Another concern is how to ensure fair and unbiased judgments from human interrogators for conceptual or subjective questions. The test is also criticized for its inability to evaluate problem-solving abilities, as it can test only the conversational aspects of intelligence. The philosopher John Searle, in 1980, introduced the Chinese room argument, according to which a machine can easily pass the Turing Test without actually understanding the meaning of the text it generates. The argument suggests that an English-speaking person can translate Chinese symbols into English just by using a set of rules, without understanding Chinese, and it may appear as if the person knows Chinese.
Similarly, the machine follows a set of programs written in a computing language to generate convincing answers without understanding that language, and hence could pass the Turing Test. In response to this argument, it should be understood that although the person does not understand Chinese, he is proficient in his own language and, through perceived experience, can exhibit an understanding of the translated work. For example, Natural Language Processing (NLP) techniques have helped machines learn that adding ‘a’ at the end of a word makes the masculine form feminine in Serbo-Croatian <cit.>. Machines have acquired a certain understanding of human language and now generate responses indistinguishable from human responses. In <ref>, ChatGPT answers a question based on pattern recognition, which is not a translation task but requires the application of logic to compute the correct number. Since the Turing Test does not answer all the criticism of machine intelligence, a few other tests have been suggested, such as the Lovelace Test <cit.>, the "Lovelace 2.0" test <cit.>, the Total Turing Test <cit.>, and the Reverse Turing Test <cit.>. Still, none is considered an accurate parameter for judging a machine's cognitive abilities. The primary reason for not having a universal test is the unsettled "thinking" vs. "intelligence" debate, even in the case of humans. Human intelligence encompasses various cognitive phenomena such as emotions, consciousness, and subjective experiences that are difficult to quantify or measure objectively. In practice, intelligence is estimated through problem-solving tasks, reasoning, pattern recognition, memory, concentration, and decision-making abilities. Machine abilities have evolved tremendously in recent years, yet there is no standard test to evaluate them as putatively minded entities. The AI community has suggested other measures, such as performance on specific tasks, to gauge machine intelligence, for example applications of computer vision, speech recognition, games like chess or Go, and various automated processes involving real-time decisions. For example, self-driving cars process real-time data from sensors to decide the lane, speed, and other parameters that ensure a safe journey; AI-based systems <cit.> assist medical practitioners in real-time diagnosis, suggest treatment options, and help in surgery <cit.>; and airlines use dynamic ticket pricing systems. Such tasks can assess the behavior and thinking ability of machines more objectively. In the last few decades, many digital programs have outperformed the capacity of an individual, like medical robots, the Jeopardy! software (IBM's Watson), the AI chess program (IBM's Deep Blue), and the AI Go player (AlphaGo). However, these are narrow AI applications, as they are specific to a particular task and cannot be considered generalized intelligence similar to that of humans. Recently, with progress toward artificial general intelligence (AGI), applications such as ChatGPT and GPT-4, DALL-E, Stable Diffusion, Claude, and Gato (by DeepMind) can perform multiple tasks, and some of them accept multimodal inputs <cit.>. These machines are flexible and can multitask. They can play video games as well as write stories without forgetting previous tasks, and they have started to perform complex and wide-ranging tasks and to acquire knowledge from diverse domains. GPT has passed a Stanford Medical School clinical reasoning exam, the Uniform Bar Exam, and many more <cit.>.
These machines can pass the Turing Test the way Bard, Google's chatbot, is reported to have passed it <cit.>. ChatGPT could also pass if it pretended to be human, although it is well aware (or tamed) of its existence as a machine <cit.>. ChatGPT and GPT-4 achieve high scores on NLP benchmarks like the Stanford Question Answering Dataset (SQuAD) and the General Language Understanding Evaluation (GLUE), which are widely used to evaluate the performance of LLMs. Hence, it can be concluded that machines are becoming smarter day by day. They learn, apply their intelligence (processing input and inferring output) in various domains, adapt to new scenarios, and improve performance over time. Sooner or later, machines may acquire the remaining aspects of human intelligence. This claim resonates with Google engineer Blake Lemoine's assessment that Bard has sentiments. Other Google engineers, however, disagree and assert that this machine is only a good manipulator and will never become a malevolent machine. Generalized AI machines <cit.> like Bing or Bard carry the risk of deceiving humans <cit.>, and taming <cit.> a machine or firing employees may not help to stop machines from getting smarter and competing with or challenging human capabilities. The future is expected to be highly impactful and transformative with the advancement of computational capacity and robotics <cit.>. Quantum computing is an exciting area that has the potential to revolutionize machines' processing capabilities. Google claimed that its 54-qubit processor, named "Sycamore," performed a computation in 200 seconds that would take a classical supercomputer approximately 10,000 years <cit.>. Such quantum machines can enhance AI using high-performance quantum circuit simulators, handle complex algorithms <cit.>, and make precise calculations. Quantum computers enable the next level of AI machines, while robotics technology gives AI systems a physical embodiment that helps them connect with the physical world. Robots integrated with AI techniques can exhibit adaptive behavior through learning from real-time data. These machines can learn from continuously changing circumstances and unforeseen hurdles and adapt to dynamic environments. This adaptability makes robots more resourceful and capable of handling complex problems <cit.>. Hence, machines like the robot "Sophia," a Saudi Arabian citizen <cit.>, could carry generalized AI and exhibit human-like abilities in the near future. § CONCLUDING REMARKS Generative AI models are crucial advancements in the domain of AI. A subset of generative models that work on language, known as LLMs, are capable of understanding and generating human communication very well. These machines can generate creative and original responses that are indistinguishable from humans' answers. They can also discuss almost every domain and, if questioned, pretend to be experts in any domain. Thus, it can be said that this progress is similar to Turing's digital machine that can fool a judge with its responses. Although these machines are well aware (tamed) of their state as AI language models, they are good manipulators and can blur the boundaries between humans and machines if they take on a role. The objections raised by Turing in his study have also largely been answered by AI machines, and the consequences of intelligent machines are clearly visible to society.
Hence, it can be said that these machines have human-like logical reasoning systems. The quality of their intelligence or thought is not identical to human cognitive capabilities, yet they are learning and mimicking these abilities and producing similar results. Hence, can we now say that machines have started to think? §.§ Declaration of Interest Statement Conflict of Interest or Competing Interest: We have no conflicts of interest to disclose. Funding Source Declaration: Authors have not received any funding to conduct this research. safeAI Statement on ai risk. <https://www.safe.ai/statement-on-ai-risk#signatories>. Accessed: 2023-06-02. futureoflife Pause giant ai experiments: An open letter. <https://futureoflife.org/open-letter/pause-giant-ai-experiments/>. Accessed: 2023-05-20. Bubeck S Bubeck, V Chandrasekaran, V Eldan, J Gehrke, E Horvitz, E Kamar, P Lee, Y T Lee, Y Li, S Lundberg, H Nori, H Palangi, M T Ribeiro, and Y Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. ashok2022ethical M Ashok et al. Ethical framework for artificial intelligence and digital technologies. International Journal of Information Management, 62:102433, 2022. hooker2021moving Sara Hooker. Moving beyond “algorithmic bias is a data problem”. Patterns, 2(4), 2021. Schmidt Elon musk and others call for pause on a.i., citing ‘profound risks to society’. <https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html>. Accessed: 2023-05-21. turing AM Turing. Computing Machinery and Intelligence. Mind, LIX(236):433–460, 1950. AI Patrick Henry Winston. Artificial intelligence. International series of monographs on physics. Addison-Wesley Longman Publishing Co, 1984. liu2021pretrain Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing, 2021. duan2019artificial Y. Duan et al. Artificial intelligence for decision making in the era of big data–evolution, challenges and research agenda. International journal of information management, 48:63–71, 2019. gan Alankrita Aggarwal, Mamta Mittal, and Gopi Battineni. Generative adversarial network: An overview of theory and applications. International Journal of Information Management Data Insights, 1(1):100004, 2021. chatgpt OpenAI. Chatgpt: Optimizing language models for dialogue. <https://openai.com/blog/chatgpt/>. Accessed: 2023-05-20. stokel2022ai C. Stokel-Walker. Ai bot chatgpt writes smart essays-should academics worry? Nature, 2022. stokelchatgpt C. Stokel-Walker. Chatgpt listed as author on research papers: many scientists disapprove. Nature, 2023. else2023abstracts Holly Else. Abstracts written by chatgpt fool scientists. Nature, 613(7944):423–423, 2023. chatgpt1 Yogesh K. Dwivedi, Nir Kshetri, Laurie Hughes, Emma Louise Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah, Alex Koohang, Vishnupriya Raghavan, Manju Ahuja, Hanaa Albanna, Mousa Ahmad Albashrawi, Adil S. Al-Busaidi, Janarthanan Balakrishnan, Yves Barlette, Sriparna Basu, Indranil Bose, Laurence Brooks, Dimitrios Buhalis, Lemuria Carter, Soumyadeb Chowdhury, Tom Crick, Scott W. Cunningham, Gareth H. Davies, Robert M. Davison, Rahul Dé, Denis Dennehy, Yanqing Duan, Rameshwar Dubey, Rohita Dwivedi, John S. Edwards, Carlos Flavián, Robin Gauld, Varun Grover, Mei-Chih Hu, Marijn Janssen, Paul Jones, Iris Junglas, Sangeeta Khorana, Sascha Kraus, Kai R.
Larsen, Paul Latreille, Sven Laumer, F. Tegwen Malik, Abbas Mardani, Marcello Mariani, Sunil Mithas, Emmanuel Mogaji, Jeretta Horn Nord, Siobhan O’Connor, Fevzi Okumus, Margherita Pagani, Neeraj Pandey, Savvas Papagiannidis, Ilias O. Pappas, Nishith Pathak, Jan Pries-Heje, Ramakrishnan Raman, Nripendra P. Rana, Sven-Volker Rehm, Samuel Ribeiro-Navarrete, Alexander Richter, Frantz Rowe, Suprateek Sarker, Bernd Carsten Stahl, Manoj Kumar Tiwari, Wil van der Aalst, Viswanath Venkatesh, Giampaolo Viglia, Michael Wade, Paul Walton, Jochen Wirtz, and Ryan Wright. Opinion paper: “so what if chatgpt wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational ai for research, practice and policy. International Journal of Information Management, 71:102642, 2023. Floridi2023 Luciano Floridi. Ai as agency without intelligence: on chatgpt, large language models, and other generative models. Philosophy & Technology, 36(1):15, Mar 2023. Deary2010 Ian J. Deary, Lars Penke, and Wendy Johnson. The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3):201–211, Mar 2010. int J. P. Guilford. The nature of human intelligence. The nature of human intelligence. McGraw-Hill, New York, NY, US, 1967. shum2018eliza Heung-Yeung Shum, Xiaodong He, and Di Li. From eliza to xiaoice: Challenges and opportunities with social chatbots. arXiv preprint arXiv:1801.01957, 2018. tt What comes after the turing. <https://www.newyorker.com/tech/annals-of-technology/what-comes-after-the-turing-test>. Accessed: 2023-05-10. ape Richard Byrne. The Thinking Ape: Evolutionary Origins of Intelligence. Oxford University Press, 02 1995. flynn James R. Flynn. What is intelligence? Beyond the Flynn effect. What is intelligence? Beyond the Flynn effect. Cambridge University Press, New York, NY, US, 2007. harari Yuval Noah Harari. 21 Lessons for the 21st Century:'Truly mind-expanding... Ultra-topical'Guardian. Random House, 2018. jobs The chatgpt king isn’t worried, but he knows you might be. <https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html>. Accessed: 2023-04-15. hinton The ‘godfather of a.i.’ says his technology is a bigger threat than climate change: ‘it’s not at all clear what you should do’. <https://fortune.com/2023/05/08/godfather-artificial-intelligence-geoffrey-hinton-climate-change/>. Accessed: 2023-05-22. chess Ai has dominated chess for 25 years, but now it wants to lose. <https://www.sciencefocus.com/future-technology/ai-has-dominated-chess-for-25-years-but-now-it-wants-to-lose/>. Accessed: 2023-04-16. shrestha2019organizational Yash Raj Shrestha, Shiko M Ben-Menahem, and Georg Von Krogh. Organizational decision-making structures in the age of artificial intelligence. California management review, 61(4):66–83, 2019. mind G Jefferson. The mind of mechanical man. Br Med J, 1(4616):1105–1110, June 1949. bing1 The ai emotions dreamed up by chatgpt. <https://www.bbc.com/future/article/20230224-the-ai-emotions-dreamed-up-by-chatgpt>. Accessed: 2023-04-17. bing2 A conversation with bing’s chatbot left me deeply unsettled. <https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html>. Accessed: 2023-03-20. alexa M Berg-Weger and J E Morley. Editorial: Loneliness and social isolation in older adults during the COVID-19 pandemic: Implications for gerontological social work. J Nutr Health Aging, 24(5):456–458, 2020. choung2023trust Hyesun Choung, Prabu David, and Arun Ross. 
Trust in ai and its role in the acceptance of ai technologies. International Journal of Human–Computer Interaction, 39(9):1727–1739, 2023. notfun H. Holden Thorp. Chatgpt is fun, but not an author. Science, 379(6630):313–313, 2023. turing-test Graham Oppy and David Dowe. The Turing Test. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Winter 2021 edition, 2021. fuzzy Jonathan M Garibaldi. The need for fuzzy ai. IEEE/CAA Journal of Automatica Sinica, 6(3):610–622, 2019. machine Zhi-Hua Zhou. Machine learning. Springer Nature, 2021. esp Marija Branković. Who believes in ESP: Cognitive and motivational determinants of the belief in Extra-Sensory perception. Eur J Psychol, 15(1):120–139, February 2019. tt1 Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4):681–694, Dec 2020. nlp Ai that can learn the patterns of human language. <https://news.mit.edu/2022/ai-learn-patterns-language-0830>. Accessed: 2023-02-25. lovelace Selmer Bringsjord, Paul Bello, and David Ferrucci. Creativity, the turing test, and the (better) lovelace test. The Turing test: the elusive standard of artificial intelligence, pages 215–239, 2003. lovelace2 Mark O. Riedl. The lovelace 2.0 test of artificial creativity and intelligence. arXiv preprint arXiv:1410.6142, 2014. total David MW Powers. The total turing test and the loebner prize. In New Methods in Language Processing and Computational Natural Language Learning, 1998. reverse Henry S Baird, Allison L Coates, and Richard J Fateman. Pessimalprint: a reverse turing test. International Journal on Document Analysis and Recognition, 5:158–163, 2003. ahuja2019impact Abhimanyu S Ahuja. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7:e7702, 2019. bar2020impact O. Bar et al. Impact of data on generalization of ai for surgical intelligence applications. Scientific reports, 10(1):22208, 2020. gpt4 Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023. liu2023evaluating Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439, 2023. katz2023gpt D. M. Katz et al. Gpt-4 passes the bar exam. Available at SSRN 4389233, 2023. bard Google’s ai passed a famous test — and showed how the test is broken. <https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/>. Accessed: 2023-05-17. chatgpt-turing Robert Hanna. How and why chatgpt failed the turing test. Unpublished MS. Available online at URL=< https://www. academia. edu/94870578/How_and_Why_ChatGPT_Failed_The_Turing_Test_January_2023_version_, 2023. fjelland2020general Ragnar Fjelland. Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, 7(1):1–9, 2020. ethics Thilo Hagendorff. Ai ethics and its pitfalls: not living up to its own standards? AI and Ethics, 3(1):329–336, 2023. soatto2023taming S. Soatto et al. Taming ai bots: Controllability of neural states in large language models. arXiv preprint arXiv:2305.18449, 2023. brady1985artificial Michael Brady. Artificial intelligence and robotics. Artificial intelligence, 26(1):79–121, 1985. 
quantum Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando GSL Brandao, David A Buell, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505–510, 2019. broughton2020tensorflow Michael Broughton, Guillaume Verdon, Trevor McCourt, Antonio J Martinez, Jae Hyeon Yoo, Sergei V Isakov, Philip Massey, Ramin Halavati, Murphy Yuezhen Niu, Alexander Zlokapa, et al. Tensorflow quantum: A software framework for quantum machine learning. arXiv preprint arXiv:2003.02989, 2020. van2020ai V Van Roy et al. Ai and robotics innovation. Handbook of labor, human resources and population economics, pages 1–35, 2020. sophia Jesús Retto. Sophia, first citizen robot of the world. ResearchGate, URL: https://www. researchgate. net, 2017.
http://arxiv.org/abs/2307.06057v1
20230712102040
Robust Signal Recovery in Hadamard Spaces
[ "Georg Köstenberger", "Thomas Stark" ]
math.ST
[ "math.ST", "math.MG", "stat.TH" ]
Classical statistical theory has been developed under the assumption that the data belongs to a linear space. However, in many applications the intrinsic geometry of the data is more intricate. Neglecting this frequently yields suboptimal or outright unusable results; for example, taking the pixel-wise average of images typically results in noise. Incorporating the intrinsic geometry of a dataset into statistical analysis is a highly non-trivial task. In fact, different underlying geometries necessitate different approaches and allow for results of varying strength. Perhaps the most common non-linear geometries appearing in statistical applications are metric spaces of non-positive curvature, such as the manifold of symmetric, positive (semi-)definite matrices.
In this paper we introduce a (strong) law of large numbers for independent, but not necessarily identically distributed random variables taking values in complete spaces of non-positive curvature. Using this law of large numbers, we justify a stochastic approximation scheme for the limit of Fréchet means on such spaces. Apart from rendering the problem of computing Fréchet means computationally more tractable, the structure of this scheme suggests that averaging operations on Hadamard spaces are more stable than previous results would suggest. § INTRODUCTION In modern, high-dimensional datasets it is frequently assumed that data points lie on a lower-dimensional subspace; e.g., in a dataset of portraits, one has a skin-colored portion in the center of each image, with similar features in similar spots. This hypothesis – frequently dubbed the manifold hypothesis (see <cit.> for empirical verification and its history) – is used to explain why statistical procedures, which in theory suffer from a curse of dimensionality, perform reasonably well in practice. While the ambient space is usually a vector space, the latent substructure of the data may not be linear. For example, taking the pixel-wise average of portraits does not yield a picture of an average face. In fact, if functional data is collected, the latent structure may not only be non-linear, but also non-differentiable. The most general setup under which a statistical theory of such latent structures is possible is that of metric spaces. However, in general metric spaces, even fundamental notions, such as the mean of two points, may not be well-defined. A very important subclass of metric spaces, which allow for such probability-theoretic notions to be defined, are metric spaces of non-positive curvature in the sense of Alexandrov <cit.> (also known as CAT(0) spaces). These spaces appear in fields as diverse as computational biology, image processing and medicine, dynamical systems, probability theory, statistics, group theory and geometry. CAT(0) spaces appearing in practice, such as the space of symmetric, positive (semi-)definite operators, Hilbert spaces or metric trees, are typically complete. A complete CAT(0) space is called a Hadamard space. In applications and theory alike, Hadamard spaces frequently serve as a replacement whenever the underlying problem is non-linear or non-differentiable and classical approaches are thus not applicable. Many concepts of classical probability theory can be generalized to Hadamard spaces <cit.>. Sturm <cit.> even developed a non-linear martingale theory on them. However, Hadamard spaces do not only serve as one of the most general frameworks for statistics. Many nonlinear spaces that play a dominant role in applications, such as symmetric, positive definite operators or metric trees, are of this type. If one desires to preserve their intrinsic geometric structure, the general theory of Hadamard spaces is indispensable. Examples include tensor diffusion imaging <cit.>, where one has to compute the center of mass of positive definite matrices with respect to their Riemannian distance, and computational phylogenetics, where the average of phylogenetic trees is of central interest <cit.>. In the latter case, the underlying structure is non-differentiable, and classical optimization methods are not available, but the general theory of Hadamard spaces is still applicable.
On the theoretical side, Burago and Ferleger used the theory of Hadamard spaces to give a uniform estimate for the number of collisions in Sinai billiards <cit.>. More recently, in the context of random walks on groups, Qing and Rafi <cit.> constructed an analogue of the Gromov boundary on topological groups that are CAT(0) spaces. For a history, more applications and classical references on CAT(0) spaces we refer to the book of Bridson and Häfliger <cit.>. For a discussion of recent advances and open problems we refer to the survey article of Bačák <cit.>, and for applications in optimization, we refer to the book of Bačák <cit.>. Classical probability theory and statistics alike tell us that the computation of means and their asymptotic behavior on such spaces are of fundamental importance. Historically, Fréchet was the first to consider the problem of averaging a set of points in a space of non-positive curvature <cit.>. He defined the average of a set of points to be their center of mass (also known as the barycenter), and hence barycenters on Hadamard spaces are often called Fréchet means. This is based on the method of least squares put forward by Gauss <cit.>, and is perhaps the most natural generalization of the linear notion of mean available in Euclidean spaces. However, the Fréchet mean is not the only generalization of the arithmetic mean. Since the original work of Fréchet, many people have introduced alternative notions of mean or average, each addressing a possible shortcoming of the Fréchet mean or generalizing a different characterization of the arithmetic mean. Below, we briefly introduce the most relevant of them. Es-Sahib and Heinich <cit.> used an axiomatic approach to define a notion of mean on locally compact Hadamard spaces. Using their construction, they were able to prove the first strong law of large numbers on Hadamard spaces. A. Navas <cit.> was able to generalize the construction of Es-Sahib and Heinich to non-locally compact Hadamard spaces. He was able to prove an L^1 ergodic theorem for his mean. While these axiomatic means have nice theoretical properties, they are very hard to compute in practice. On the other hand, the inductive mean introduced by Sturm <cit.> is easy to compute. It is based on the observation that in Euclidean space, the mean of n points is a convex combination of the mean of the first n-1 points and the n-th point, i.e., 1/n∑_k=1^n x_k = (1-1/n)·(1/(n-1))∑_k=1^n-1 x_k + (1/n)·x_n. This idea can be generalized to Hadamard spaces and leads to a notion of mean that is easy to compute and update. Sturm <cit.> proved a (strong) law of large numbers for inductive means. More recently, Antezana et al. <cit.> proved an ergodic theorem for inductive means given L^1 images of translations in compact abelian groups. Given its importance, the case of positive definite matrices has attracted considerable attention <cit.>. Hansen <cit.> has introduced a notion of mean that is based on an idea similar to the inductive mean. In Euclidean space the mean of n points can also be rewritten as 1/n∑_k=1^n x_k = ((n-1)/n)·(1/(n-1))∑_k=1^n-1 x_k + (1/n)·x_n = [((n-1)/n·x_1 + (1/n)·x_n) + ⋯ + ((n-1)/n·x_n-1 + (1/n)·x_n)]/(n-1), i.e., we have a convex combination of convex combinations that can be computed inductively. Kim et al. <cit.> generalized this idea to the case of Hadamard spaces. However, the theoretical properties of Hansen's mean are not as well understood as those of inductive means.
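Both Euclidean identities above are easy to check numerically. The following sketch – our own illustration in plain NumPy, not code from the paper, with variable names of our choosing – verifies that the streaming update underlying the inductive mean and Hansen's averaged form of convex combinations both reproduce the batch mean.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))   # ten points in R^3
n = len(x)

# Inductive (streaming) mean: S_1 = x_1, S_k = (1 - 1/k) S_{k-1} + (1/k) x_k
S = x[0].copy()
for k in range(2, n + 1):
    S = (1 - 1 / k) * S + (1 / k) * x[k - 1]

# Hansen-type form: average of the convex combinations ((n-1)/n) x_k + (1/n) x_n, k = 1, ..., n-1
H = np.mean([(n - 1) / n * x[k] + 1 / n * x[-1] for k in range(n - 1)], axis=0)

print(np.allclose(S, x.mean(axis=0)), np.allclose(H, x.mean(axis=0)))   # True True

In a Hadamard space the same updates are performed along geodesics instead of straight line segments, which is exactly what the inductive mean defined in the next section does.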
Choi and Ji <cit.> provided weighted versions of the inductive mean and of Hansen's mean, and were able to give sufficient conditions for a strong law of large numbers. In general, these means may differ from one another (see Example 5.3 in <cit.> and Example 6.5 in <cit.>). Among all these means, the Fréchet mean seems to enjoy the best properties. Apart from one of the most general laws of large numbers being available for it <cit.>, classical results like the Banach-Saks theorem carry over. In the case of positive definite matrices, Bhatia and Holbrook <cit.> as well as Lawson and Lim <cit.> were able to show that the Fréchet mean satisfies the properties Ando et al. <cit.> put forward in their seminal paper on geometric means of positive definite matrices. Still, it is known that even the Fréchet mean defies Euclidean intuition in fairly simple spaces such as open books <cit.>. Given its importance, various procedures for the efficient computation of the Fréchet mean have been proposed. For the case of positive definite matrices, Bini and Iannazzo <cit.> and Holbrook <cit.> introduced iterative procedures to compute Fréchet means. Holbrook's scheme – known as the no-dice theorem – is based on the inductive mean and Sturm's law of large numbers, while the scheme of Bini and Iannazzo is based on Moakher's characterization of the Fréchet mean for positive definite matrices. Lim and Palfia <cit.> generalized Holbrook's no-dice theorem to weighted Fréchet means in general Hadamard spaces. While the scheme of Bini and Iannazzo performs very well on well-conditioned matrices, it does not lend itself to generalizations. The scheme of Lim and Palfia, on the other hand, has a computational drawback. The computational bottleneck of any of these approximation schemes is the number of geodesics that have to be computed. The computation of a single geodesic can already be prohibitively expensive. Hence, each of these procedures tries to compute as few geodesics as possible. To approximate the Fréchet mean of n points with an error of order k^-1/2, the scheme of Lim and Palfia requires the computation of nk geodesics. This is a far cry from the theoretical optimum of passing through the dataset just once, which yields a runtime linear in the number of points. Recently, Brunel and Serre <cit.> have shown that the distance between the inductive and the Fréchet mean of points sampled independently from sub-Gaussian distributions is of order O(log(1/δ)n^-1/2) with probability at least 1-δ, if one assumes that the curvature of the underlying space is bounded from below. This raises the question: Can one use the inductive mean, with its linear runtime, to approximate the Fréchet mean? How much stability is lost when one uses the inductive mean instead of the procedure of Lim and Palfia? In this paper we address some of these questions by providing a stochastic resampling scheme for the asymptotic center of mass of a sequence of points in a Hadamard space. This scheme is based on the inductive mean and has linear runtime. Apart from rendering the computation of Fréchet means computationally more tractable, the nature of this scheme suggests that inductive means on Hadamard spaces are somewhat stable with respect to noise. It is surprising that this kind of stability is observed in all Hadamard spaces, since their geometries differ considerably.
To justify this scheme, we introduce a new law of large numbers for independent but not necessarily identically distributed Hadamard space-valued random variables. This law of large numbers is of independent interest, since it generalizes and extends various results in the literature. Among other things, Brunel and Serre proved a weak law of large numbers for independent but not necessarily identically distributed, sub-Gaussian random variables, given that they have the same mean. The L^p and almost sure convergence of non-identically distributed random variables is still open. We are able to close this gap. Neither do we require our random variables to be sub-Gaussian, nor do we require them to have the same mean. Additionally, the classical strong law of large numbers of Sturm <cit.> requires the support of the random variables to be bounded, which we are able to relax considerably. Furthermore, we provide stability results for various other means, showing that means in Hadamard spaces are at least as stable as in linear spaces. We analyze our stochastic approximation scheme in a simulation study using Huber's ε-contamination model <cit.>. This study shows that the behavior of means drastically depends on the geometry of the underlying space. The paper is structured as follows. In the second section, we briefly introduce the theory of Hadamard spaces and prove our laws of large numbers. Section <ref> introduces various means and discusses their asymptotic stability. Here we also introduce our new approximation scheme for the Fréchet mean. In Section <ref> we perform simulation studies on the space of symmetric, positive definite matrices and the space of open books (in the sense of <cit.>). The last section is devoted to the discussion of open problems and future research directions. § HETEROSCEDASTIC LAWS OF LARGE NUMBERS In this paper, we follow the analytic approach to Hadamard spaces popularized in the excellent article by Sturm <cit.>. For the classical viewpoint on Hadamard spaces via triangle comparison theorems, we recommend the book by Bridson and Häfliger <cit.>. A CAT(0) space (or NPC space – non-positive curvature space) is a metric space (H,d) such that for all x,y∈ H, there is an m∈ H such that for all z ∈ H d(z,m)^2 ≤ 1/2 d(z,x)^2 + 1/2 d(z,y)^2 - 1/4 d(x,y)^2. The point m can be thought of as a midpoint between x and y. Intuitively speaking, triangles in CAT(0) spaces are slimmer than in Euclidean space. A complete CAT(0) space is called a Hadamard space. At first glance, this definition might seem somewhat opaque. To fully appreciate it, we need some notions from metric geometry. Let (X,d) be a metric space and denote with B_r(z) = {x∈ X | d(x,z) < r } the open ball of radius r around z∈ X. We write C(x,y) for the set of all continuous curves in X from x to y, i.e., C(x,y) = {γ:[0,1]→ X | γ continuous, γ(0)=x, γ(1)=y}. The length of a curve γ∈ C(x,y) is defined as L(γ) = sup{∑_i=0^n-1 d(γ(t_i),γ(t_i+1)) | 0 = t_0 < t_1 < ⋯ < t_n = 1}. A curve γ is called rectifiable if L(γ) < ∞. A metric space (X,d) is a length space if for all x,y∈ X d(x,y) = inf_γ∈ C(x,y) L(γ), and it is a geodesic space if for all x,y∈ X there is a γ∈ C(x,y) such that d(x,y) = L(γ), i.e., if d(x,y) = min_γ∈ C(x,y) L(γ). A curve γ∈ C(x,y) is called a geodesic if it realizes the distance between the two points x and y, i.e., if d(x,y) = L(γ). If the curve γ∈ C(x,y) additionally satisfies d(γ(s),γ(t)) = |t-s| d(x,y) for all 0 ≤ s ≤ t ≤ 1, it is a minimal geodesic. Of course, minimal geodesics are geodesics.
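As a small numerical illustration of these definitions (our own sketch, not part of the paper), one can approximate L(γ) in the Euclidean plane by polygonal lengths over finer and finer partitions: the quarter circle from (1,0) to (0,1) has length π/2 > √2 = d(x,y), so it is a curve but not a minimal geodesic, whereas the straight segment between the same endpoints realizes the distance.

import numpy as np

def polygonal_length(gamma, num):
    # Length of the polygon through gamma(t_i) on a uniform partition of [0, 1]
    t = np.linspace(0.0, 1.0, num + 1)
    pts = np.array([gamma(s) for s in t])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

quarter_circle = lambda t: np.array([np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)])
segment = lambda t: (1 - t) * np.array([1.0, 0.0]) + t * np.array([0.0, 1.0])

for num in (4, 64, 1024):
    print(num, polygonal_length(quarter_circle, num))   # increases towards pi/2 ~ 1.5708
print(polygonal_length(segment, 1024))                  # ~ sqrt(2) = d(x, y)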
It turns out, that one can completely characterize complete geodesic spaces in terms of their metric midpoints. A metric midpoint of a pair x,y ∈ X is a point m∈ X such that d(x,m) = d(m,y) = 1/2d(x,y). A complete metric space (X,d) is geodesic, if and only if every pair of points in X has a metric midpoint. As has been pointed out in the beginning of this section, one can show that in a Hadamard space any pair of points has a metric midpoint. However, the specific structure of Inequality (<ref>) implies more than that. While the proof of Proposition 1.2 in <cit.>, works for arbitrary metric spaces, combining it with the additional structure provided by Inequality (<ref>) yields the following proposition. Let (H,d) be a Hadamard space. Then any two points x,y∈ H are connected by a unique minimal geodesic γ. Furthermore, for all z∈ H and t∈ [0,1], we have d(z,γ(t))^2≤ (1-t)d(z,x)^2 + td(z,y)^2 - t(1-t)d(x,y)^2. Inequality (<ref>) is know as the NPC-inequality and is of fundamental importance in the theory of Hadamard spaces. For any two points x,y in a Hadamard space H, we write x⊕_ty for the unique minimal geodesic γ(t) joining x and y. This allows us to introduce the inductive mean of Sturm <cit.>. It is based on the following idea: in a normed space one may recursively compute the mean of a sequence (x_n)_n∈ℕ as 1/n∑_k=1^nx_k = n-1/n1/n-1∑_k=1^n-1x_k + 1/nx_n. In other words, the new mean n^-1∑_k=1^nx_k lies on a geodesic between the old mean (n-1)^-1∑_k=1^n-1x_k and the new point x_n. This point of view can be generalized to Hadamard spaces. Let H be a Hadamard space and x_1,…,x_n∈ H. We set S_1 = x_1 and S_n+1 = S_n⊕_1/n+1 x_n+1. Applying the NPC-inequality (<ref>) iteratively and ignoring the negative terms, we get that for all z∈ H d(z,S_n)^2≤1/n∑_k=1^nd(z,x_k)^2. In this sense S_n is contracting in quadratic mean. It is worth pointing out, that S_n can be computed by evaluating n-1 geodesics. In some sense, this is optimal, since we have to look at each of the n points at least once to compute their center of mass. However, the inductive mean may not be the most natural notion of mean on a Hadamard space, since it is based on the particular way one can compute the mean in linear spaces, rather than the classical least-squares optimization problem it is the solution of <cit.>. This role is occupied by the Fréchet mean, which in general does not coincide with the inductive mean <cit.>. In order to define the Fréchet mean, we may first talk about Hadamard space-valued random variables. A random variable on a Hadamard space H is a Borel measurable function X from some probability space into H. We say that X is in ℒ^p(H) for p≥ 1 if 𝔼(d(X,z)^p) < ∞ for some (equivalently all) z ∈ H. Clearly ℒ^p(H) ⊆ℒ^q(H) for 1 ≤ p ≤ q. We say X_n→ Z in ℒ^p(H), if 𝔼(d(X_n,Z)^p) → 0. For a real-valued random variable X and p≥ 1, we denote with X_p = 𝔼(|X|^p)^1/p its L^p-norm. Now, we would like to define the expectation of a Hadamard space-valued random variable. The expectation of a real-valued, square-integrable random variable X can be defined as the unique minimizer of z ↦𝔼((X-z)^2). This strategy is also viable in the case of Hadamard space-valued random variables. To this end, we need to briefly talk about convex optimization in Hadamard spaces. The following notion of strong convexity can be found in <cit.> and differs slightly from the notion of uniform convexity introduced in <cit.>. 
A function f:H →ℝ is called κ-strongly convex for κ >0, if for all x,y∈ H and t∈ [0,1] f(x⊕_ty) ≤ (1-t)f(x) + tf(y) - κ t(1-t)d(x,y)^2. If f is κ-stronly convex and lower semicontinuous, i.e., if for all z_0∈ H lim inf_z→ z_0f(z) ≥ f(z_0), then f has a unique minimizer (Proposition 2.2.17, <cit.>). For any Hadamard space-valued random variable X ∈ℒ^1(H) and any y ∈ H the function z ↦𝔼(d(X,z)^2 - d(X,y)^2) is 1-strongly convex by the NPC inequality (<ref>) and continuous. Hence it has a unique minimizer. This minimizer is independent of y and is denoted with 𝔼(X) (Proposition 4.3, <cit.>). Clearly, if X∈ℒ^2(H), then 𝔼(X) = _z∈ H𝔼(d(X,z)^2). We define the variance X∈ℒ^2(H) as Var(X) = 𝔼(d(X,𝔼(X))^2) = min_z∈ H𝔼(d(X,z)^2). The classical variance equality, Var(Y) = 𝔼(Y^2) - 𝔼(Y)^2 for real-valued random variables Y, turns into a variance inequality in Hadamard spaces. For X∈ℒ^1(H) and z ∈ H we have 𝔼(d(X,z)^2 - d(X,𝔼(X))^2) ≥ d(z,𝔼(X))^2. In particular, if X∈ℒ^2(H), this can be written as Var(X) ≤𝔼(d(X,z)^2) - 𝔼(d(z,𝔼(X))^2. Now we can define the Fréchet mean and discuss some of its properties. Let x_1,…,x_n be a sequence of points in a Hadamard space H. The Fréchet mean (or barycenter) of this sequence is defined as b_n = _z∈ H1/n∑_k=1^nd(x_k,z)^2. In other words, if Y_n is a random variable with ℙ(Y=x_i) = n^-1, then b_n = 𝔼(Y_n). Applying Lemma <ref> to Y_n yields d(b_n,z)^2≤1/n∑_k=1^nd(x_k,z)^2-1/n∑_k=1^nd(x_k,b_n)^2. The Fréchet mean possesses a number of nice theoretical properties. For example, Yokota generalized the classical Banach-Saks theorem to Fréchet means <cit.>. Furthermore, it satisfies the canonical properties of a mean put forward by Ando et al. <cit.>. However, computing the Fréchet mean is a non-trivial task. We may look at the space of positive definite matrices as an example. Here explizit formulas for the distance and the unique minimal geodesic between two points are known. The distance between two positive definite matrices A and B is given by d_P(A,B) = log(B^-1/2AB^-1/2)_F, where ·_F denotes the Frobenius norm. The unique geodesic between A and B is given by t↦ A⊕_tB = A^1/2(A^-1/2BA^-1/2)^tA^1/2 (see <cit.>). This exemplifies a typical phenomenon in geodesic spaces – in the worst case, one has to measure the distance between two points by computing the length of a geodesic. In other words, computing a geodesic between two points is in general no more costly than computing the distance between them. For the Fréchet mean, this implies that plugging in a single point in the optimization problem is about as expensive as computing the inductive mean. In practice, data may also arrive in an online fashion, and one wants to update the predictors once new data is available. For the Fréchet mean, one has to solve a new optimization problem, while the inductive mean allows for easy online updates. Hence Sturm <cit.> and subsequent authors <cit.> frequently focused on the inductive mean instead. However, it is an open problem whether d(S_n,b_n) → 0 for general sequences of points (x_n)_n∈ℕ. If the sequence is an i.i.d. sample drawn from a distribution with bounded support, a classical result of Sturm <cit.> states that S_n converges almost surely against the common mean of the underlying random sequence. More recently, Brunel and Serre <cit.> showed that the distance between the barycenter and the inductive mean S_n of a sequence, independently sampled from sub-gaussian distributions, is of order O(log(δ^-1) n^-1/2) with probability at least 1-δ. 
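To make the formulas on positive definite matrices concrete, here is a minimal sketch (NumPy/SciPy, our own illustration rather than code from any of the cited papers) of the distance d_P, the geodesic A⊕_tB, and the online update S_{n+1} = S_n ⊕_{1/(n+1)} x_{n+1} of the inductive mean discussed above.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, fractional_matrix_power

def d_P(A, B):
    """Affine-invariant distance ||log(B^{-1/2} A B^{-1/2})||_F on SPD matrices."""
    B_inv_sqrt = fractional_matrix_power(B, -0.5)
    M = B_inv_sqrt @ A @ B_inv_sqrt
    return np.linalg.norm(logm(M), ord="fro")   # tiny imaginary parts may appear numerically

def geodesic(A, B, t):
    """A ⊕_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    A_sqrt = sqrtm(A)
    A_inv_sqrt = np.linalg.inv(A_sqrt)
    inner = fractional_matrix_power(A_inv_sqrt @ B @ A_inv_sqrt, t)
    return A_sqrt @ inner @ A_sqrt

def inductive_mean(points):
    """S_1 = x_1, S_{n+1} = S_n ⊕_{1/(n+1)} x_{n+1}: one geodesic per new point."""
    S = points[0]
    for n, x in enumerate(points[1:], start=2):
        S = geodesic(S, x, 1.0 / n)
    return S
```

With geodesic at hand, incorporating a new point into S_n costs a single geodesic evaluation, which is exactly what makes the online setting mentioned above attractive.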
The L^2- and almost sure convergence of non-identically distributed sequences and sequences sampled from distributions of unbounded support are still open. These questions are addressed in Theorem <ref> and Corollary <ref> respectively. Theorem <ref> provides (strong) laws of large numbers for non-identically distributed but independent sequences of Hadamard space-valued random variables. Corollary <ref> quantifies how fast the support of the random variables can grow, while a strong law of large numbers remains valid. In particular, it is worth pointing out, that we do not assume that our random variables have the same means, nor that their means converge to some μ. The term D_n appearing in Theorem <ref> can be thought of as a maximal standard deviation. In this sense, the first condition of Theorem <ref> essentially means that μ_n converges to μ in Ceasaro means faster than the standard deviation diverges. To put it in other words, the signal beats the noise. To the best of our knowledge, these are by far the weakest assumptions linking μ_n to μ in the current literature. The second assumption requires the average variance to grow sublinearly. This is similar, although slightly stronger than classical laws of large numbers require (cf. Theorem 2.3.10 in <cit.>). Let X_n ∈ℒ^2(H) be a sequence of independent random variables with expectation 𝔼(X_n) = μ_n, μ∈ H and D_n= max_1≤ k ≤ nmax{d(μ,μ_k), √(Var(X_k))}. If * D_n1/n∑_k=1^n d(μ_k,μ) → 0 and * 1/n^2∑_k=1^nVar(X_k) → 0, then S_n→μ in ℒ^2(H) and in probability. If the sequence (X_n)_n ∈ℕ is uniformly bounded almost surely (i.e. all X_n lie within B_r(z) almost surely for, some z∈ H and r>0), and 1/n∑_k=1^nd(μ_k,μ) = O(n^-p) for some p>1/2, then, S_n→μ almost surely. We are going to show by induction that 𝔼(d(S_n,μ)^2) ≤9D_n/n∑_k=1^nd(μ_k,μ) + 1/n^2∑_k=1^nVar(X_k), which goes to zero by Assumptions 1 and 2. For the case n=1, we have to show 𝔼(d(X_1,μ)^2) ≤ 9D_1d(μ_1,μ) + Var(X_1). Since Var(X_1) = 𝔼(d(X_1,μ_1)^2), we can rewrite this as 𝔼(d(X_1,μ)^2-d(X_1,μ_1)^2) ≤ 9D_1 d(μ_1,μ). Applying the Cauchy-Schwarz inequality, we get 𝔼(d(X_1,μ)^2-d(X_1,μ_1)^2) = 𝔼((d(X_1,μ)-d(X_1,μ_1))(d(X_1,μ)+d(X_1,μ_1))) ≤d(X_1,μ) - d(X_1,μ_1)_2 d(X_1,μ) + d(X_1,μ_1)_2. By the reverse triangle inequality, we have d(X_1,μ) - d(X_1,μ_1)_2≤ d(μ_1,μ). Furthermore, a simple application of the triangle inequality implies d(X_1,μ) + d(X_1,μ_1)_2≤ 2d(X_1,μ_1)_2 + d(μ_1,μ) ≤ 3 D_1≤ 9D_1, proving the case n=1. Moving on with the induction step, the NPC inequality implies 𝔼(d(S_n+1,μ)^2) ≤n/n+1𝔼(d(S_n,μ)^2) + 1/n+1𝔼(d(μ, X_n+1)^2) - n/(n+1)^2𝔼(d(S_n,X_n+1)^2). We would like to apply the variance inequality (Lemma <ref>) to the last term. To this end we may write 𝔼(d(S_n,X_n+1)^2) = 𝔼(𝔼(d(S_n,X_n+1)^2| S_n)). As X_n+1 and S_n are independent, the expression 𝔼(d(X_n+1,S_n)^2| S_n) may be written as H(S_n) where H(z) = 𝔼(d(z,X_n+1)^2), for z∈ H. Now, by the variance inequality H(z) ≥ d(z,μ_n+1)^2 + 𝔼(d(μ_n+1,X_n+1)^2), and hence 𝔼(d(S_nX_n+1)^2) = 𝔼(𝔼(d(S_n,X_n+1)^2| S_n)) = 𝔼(H(S_n)) ≥𝔼(d(S_n,μ_n+1)^2) + 𝔼(d(μ_n+1,X_n+1)^2). Combining this with Inequality (<ref>) yields 𝔼(d(S_n+1,μ)^2) ≤n/n+1𝔼(d(S_n,μ)^2) + 1/n+1𝔼(d(μ, X_n+1)^2) -n/(n+1)^2{𝔼(d(S_n,μ_n+1)^2) + 𝔼(d(μ_n+1,X_n+1)^2)}. Regrouping terms gives 𝔼(d(S_n+1,μ)^2) ≤ I + II, where I = n/n+1{𝔼(d(S_n,μ)^2) - 1/n+1𝔼(d(S_n,μ_n+1)^2)} and II = 1/n+1{𝔼(d(μ,X_n+1)^2) - n/n+1𝔼(d(μ_n+1,X_n+1)^2)}. Starting with the term I, we insert ± n(n+1)^-2𝔼(d(S_n,μ)^2). 
This yields I = (n/n+1)^2𝔼(d(S_n,μ)^2) + n/(n+1)^2{𝔼(d(S_n,μ)^2) - 𝔼(d(S_n,μ_n+1)^2)}. Now looking at the second term of I, we may apply the Cauchy-Schwarz inequality to the effect of 𝔼(d(S_n,μ)^2) - 𝔼(d(S_n,μ_n+1)^2) = 𝔼((d(S_n,μ) - d(S_n,μ_n+1))(d(S_n,μ) + d(S_n,μ_n+1))) ≤d(S_n,μ) - d(S_n,μ_n+1)_2 d(S_n,μ) + d(S_n,μ_n+1)_2. By the reverse triangle inequality, we have d(S_n,μ) - d(S_n,μ_n+1)_2≤ d(μ,μ_n+1). Applying the triangle inequality for the L^2-norm, we get d(S_n,μ) + d(S_n,μ_n+1)_2≤d(S_n,μ)_2 + d(S_n,μ_n+1)_2≤ 2d(S_n,μ)_2 + d(μ,μ_n+1). Using the NPC inequality inductively on d(S_n,μ)^2, we arrive at the estimate d(S_n,μ)^2≤1/n∑_k=1^nd(X_k,μ)^2. By the triangle inquality and the esimate (a+b)^2≤ 2(a^2+b^2) for a,b ≥ 0, we get d(S_n,μ)^2≤1/n∑_k=1^nd(X_k,μ)^2≤∑_k=1^n(d(X_k,μ_k) + d(μ_k,μ))^2≤2/n∑_k=1^nd(X_k,μ_k)^2 + 2/n∑_k=1^nd(μ_k,μ)^2. Taking expecations yields 𝔼(d(S_n,μ)^2) ≤2/n∑_k=1^nVar(X_k) + 2/n∑_k=1^nd(μ_k,μ)^2, and hence d(S_n,μ)_2≤ 2 D_n. Since D_n≤ D_n+1, the right-hand side of Inequality (<ref>) is bounded by 5D_n+1, which implies n/(n+1)^2{𝔼(d(S_n,μ)^2) - 𝔼(d(S_n,μ_n+1)^2)}≤5nD_n+1/(n+1)^2 d(μ,μ_n+1) ≤5D_n+1/n+1 d(μ,μ_n+1). In total, this gives I ≤(n/n+1)^2𝔼(d(S_n,μ)^2) + 5D_n+1/n+1 d(μ,μ_n+1). For the second term II, we may insert ± (n+1)^-1𝔼(d(μ_n+1,X_n+1)^2). This yields II = 1/(n+1)^2Var(X_n+1) + 1/n+1{𝔼(d(μ,X_n+1)^2) - 𝔼(d(μ_n+1,X_n+1)^2)}. By the Cauchy-Schwarz inequality, we may again estimate 𝔼(d(μ,X_n+1)^2) - 𝔼(d(μ_n+1,X_n+1)^2) =𝔼((d(μ,X_n+1)-d(X_n+1,μ_n+1))(d(μ,X_n+1)+d(X_n+1,μ_n+1))) ≤d(μ,X_n+1) - d(X_n+1,μ_n+1)_2 d(X_n+1,μ) + d(X_n+1,μ_n+1)_2. Applying the reverse triangle inequality again, we get d(μ,X_n+1)-d(X_n+1,μ_n+1)_2≤ d(μ,μ_n+1). In order to estimate the second factor of the right-hand side of Inequality (<ref>), we may observe that 𝔼(d(X_n+1,μ_n+1)^2) ≤𝔼(d(X_n+1,μ)^2), as Var(X_n+1) = min_z∈ H𝔼(d(X_n+1,z)^2) = 𝔼(d(X_n+1,μ_n+1)^2). Hence a rough estimate yields d(μ,X_n+1)+d(X_n+1,μ_n+1)_2 ≤ 2 d(X_n+1,μ)_2 ≤ 2d(X_n+1,μ_n+1)_2 + 2d(μ,μ_n+1) ≤ 4D_n+1. In total, this gives II ≤1/(n+1)^2Var(X_n+1) + 4D_n+1/n+1d(μ,μ_n+1). Combining the estimates for I and II yields 𝔼(d(S_n+1,μ)^2) ≤(n/n+1)^2𝔼(d(S_n,μ)^2) + 5D_n+1/n+1d(μ,μ_n+1) + 1/(n+1)^2Var(X_n+1) + 4D_n+1/n+1d(μ,μ_n+1). Using the induction hypothesis and the monotonocity of D_n in n, we get 𝔼(d(S_n+1,μ)^2) ≤9D_n+1/n+1∑_k=1^n+1d(μ,μ_k) + 1/(n+1)^2∑_k=1^n+1Var(X_k). This proves (<ref>), and implies that S_n→μ in L^2 and in probability. For the second part of the theorem, we follow the approach of Sturm <cit.>. First, we are going to show that S_n^2→μ. Since the X_n's are almost surely uniformly bounded, their means μ_n and variances Var(X_n), and thus D_n are bounded as well. In particular, Assumptions 1 and 2 of the first part of the theorem are met. Inequality (<ref>) yields for every ε>0 ∑_k=1^∞ℙ(d(S_k^2,μ)>ε) ≤∑_k=1^∞1/ε^2𝔼(d(S_k^2,μ)^2) ≤C/ε^2∑_k=1^∞1/k^2∑_j=1^k^2d(μ_j,μ) + 1/ε^2∑_k=1^∞1/k^4∑_j=1^k^2Var(X_j), where C>0 such that 9sup_n∈ℕD_n≤ C. Since Var(X_n) is bounded, the second series is convergent. Furthermore, 1/k∑_j=1^k d(μ_j,μ) = O(k^-p) for some p>1/2, implies that the first series is convergent. Then, by Borel-Cantelli, S_n^2→μ almost surely. Since the sequence (X_n)_n∈ℕ is uniformly bounded almost surely, there is some z∈ H and r>0 such that d(X_n,z) ≤ r for all n almost surely. Inequality (<ref>) implies that d(S_n,z) ≤ r for all n almost surely. Hence we have d(S_n,S_n+1) ≤1/n+1d(S_n,X_n+1) ≤2r/n+1, for all n almost surely. 
Thus we almost surely have for all n and n^2≤ k < (n+1)^2 d(S_n^2,S_k) ≤ 2r (1/n^2+1 + 1/n^2+2 + ⋯ + 1/k) ≤ 2r k-n^2/n^2≤4r/n, which implies the strong law of large numbers. If μ_n = μ for all n we recover Lemma 4.2 of Brunel and Serre <cit.>. A closer inspection of our argument shows, that we do not have to require our X_n's to be uniformly bounded for a strong law of large numbers to hold. We may allow the support of the X_n's to grow as n goes to infinity. To be more precise, if X_1,…,X_n all lie within a ball whose radius grows slowly with n, our strong law of large numbers still applies. This is summarized in the following corollary. Let X_n be a sequence of Hadamard space-valued random variables with 𝔼(X_n) = μ_n, and let μ∈ H such that 1/n∑_k=1^nd(μ_k,μ) = O(n^-p) for some p>1/2. If there exist z∈ H, C>0 and 0≤ q <min{1/4,p-1/2} such that ℙ(max_1≤ k≤ nd(X_k,z) ≤ Cn^q) = 1, then S_n→μ almost surely. By the variance inequality (Lemma <ref>) both d(z,μ_n)^2 and Var(X_n) are bounded by C̃n^2q, for some constant C̃ >0, and hence D_n^2≤C̃^1/2n^2q. One may now proceed by the exact same argument as in the proof of Theorem <ref>. § MEANS IN HADAMARD SPACES In this section we discuss the stability of various means that have been introduced in the literature so far. All of them allow for some version of Cesaro's Lemma or, if weighted version of them are available, the Töplitz Lemma. In other words, if the underlying sequence of points converges, means in Hadamard spaces behave at least as nice as means in linear spaces. At the end of this section we briefly discuss the computational tractability of various approximation schemes for the Fréchet mean, and introduce our stochastic resampling method. Es-Sahib and Heinich <cit.> attempted an axiomatic approach. On a locally compact Hadamard space (H,d), one can recursively define a unique map β_n: H^n→ H satisfying the following three axioms * β_n(x,…,x) = x for all x∈ H. * d(β_n(x_1,…,x_n),β_n(y_1,…,y_n)) ≤1/n∑_k=1^nd(x_k,y_k) for all x_i,y_j∈ H. * β_n(x_1,…,x_n) = β_n(x̂_1,…,x̂_n), where x̂_i = β_n-1(x_1,…,x_i-1,x_i+1,…,x_n). This map is symmetric and satisfies d(z,β_n(x_1,…,x_n)) ≤1/n∑_k=1^nd(z,x_k). A strong law of large numbers is available for such maps <cit.>. To the best of our knowledge, this was the first law of large numbers on Hadamard spaces. This mean is in general different from the barycenter b_n and the inductive mean S_n. In particular β_n is L^1 contracting, while b_n and S_n are L^2 contracting <cit.>. In <cit.> Navas generalized the construction of Es-Sahib and Heinich to potentially non-locally compact spaces. His goal was to use a well-behaved mean to establish an L^1 ergodic theorem. The notion of a convex mean <cit.> of a random variable (or probability distribution) constitutes yet another approach. Given X ∈ℒ^1(H) we say that z is a convex mean of X if for all convex, Lipschitz continuous functions φ: H→ℝ we have φ(z) ≤𝔼(φ(X)). Hansen <cit.> has introduced a mean specifically on the space of positive definite symmetric matrices. His goal was to construct a mean that is computationally more tractable than the Fréchet mean b_n. In this sense, the mean of Hansen is similar to S_n. His idea can be generalized to arbitrary Hadamard spaces (see <cit.>). We may inductively define H_n = H_n(x_1,…,x_n) by H_1 = x_1 and H_n = H_n-1(x_1⊕_1/nx_n,…, x_n-1⊕_1/nx_n). In the Euclidean case S_n, H_n and b_n coincide, but in a Hadamard space they are known to differ (<cit.>, Example 5.3). 
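Hansen's recursion is equally easy to state as code once a geodesic map is available. The sketch below is our own generic illustration (any function geodesic(x, y, t) returning x⊕_t y can be plugged in, for instance the SPD geodesic from the earlier sketch); it is not an excerpt from the cited papers.

```python
def hansen_mean(points, geodesic):
    """H_1 = x_1; H_n(x_1,...,x_n) = H_{n-1}(x_1 ⊕_{1/n} x_n, ..., x_{n-1} ⊕_{1/n} x_n)."""
    pts = list(points)
    n = len(pts)
    if n == 1:
        return pts[0]
    x_n = pts[-1]
    shrunk = [geodesic(x_i, x_n, 1.0 / n) for x_i in pts[:-1]]
    return hansen_mean(shrunk, geodesic)
```

Note that, in contrast to the single geodesic per new point needed for the inductive mean, this recursion evaluates on the order of n(n-1)/2 geodesics.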
Now the obvious question arises: How close are these means to each other in general? It turns out, that if the underlying sequence of points converges, the sequence of means converges to the same point for all means introduced in this section. In other words, the Cesaro's lemma is a universal property of means in spaces of non-positive curvature. Let (x_n)_n∈ℕ be a sequence of points in a Hadamard space converging to x. Furthermore, let b_n be their barycenter, S_n be the inductive mean, β_n = β_n(x_1,…,x_n) be the mean introduced by A. Navas and let c_n be any convex mean of X_n, where X_n is drawn uniformly from {x_1,…,x_n}. Then b_n,S_n,β_n,c_n→ x. Starting with b_n, we observe that simply neglecting the negative part in the variance inequality (<ref>) yields d(x,b_n)^2≤1/n∑_k=1^nd(x,x_k)^2. By the classical Cesaro lemma, this implies b_n→ x. Using Inequality (<ref>) gives d(x,S_n)^2≤1/n∑_k=1^nd(x,x_k)^2→ 0. Moving on to the mean of A. Navas, we note that the first two axioms imply d(x,β_n) = d(β_n(x,…,x),β_n(x_1,…,x_n)) ≤1/n∑_k=1^n d(x,x_k), which again goes to 0 by the classical Cesaro Lemma. For the convex mean c_n, we note that z↦ d(x,z)^2 is convex and Lipschitz continuous. Hence d(x,c_n)^2≤1/n∑_k=1^nd(x,x_k)^2→ 0. Choi and Ji <cit.> proved a version of the classical Töplitz Lemma (p. 250, <cit.>), for weighted versions of S_n and Hansen's mean. Lemma <ref> below mimicks their result for weighted versions of b_n. This suggests, that taking averages on Hadamard spaces is about as stable as in the Euclidean case, if the underlying sequence of points converges. Let p^(n) = (p^(n)_1, …, p_n^(n)) be a sequence of probability measures on n points such that lim_n→∞p^(n)_k = 0 for all k≥ 1, and let (x_n)_n∈ℕ be a sequence of points in a Hadamard space (H,d) such that x_n→ x. If X_n is drawn from {x_1,…,x_n} randomly with ℙ(X_n=x_k) = p^(n)_k, then 𝔼(X_n) → x. Ignoring the second term in the variance inequality (<ref>) yields d(𝔼(X_n),x)^2≤∑_k=1^np^(n)_kd(x_k,x)^2, which goes to zero, by the classical Töplitz inequality (p. 250, <cit.>). Lemma <ref> and Lemma <ref> show that, if the underlying sequence of points converges, means on Hadamard spaces are as well behaved as in linear spaces. However, the computation of these means is much more challenging than in the linear case. Various attempts have been made to approximate theoretically well-behaved means by computational tractable ones. For the computation of the barycenter b_n of n points Lim and Palfia <cit.> have proposed the following scheme. Let y_k = x_[k] where [k] is the residue of k modulo n and set LP_n^(1) = y_1 and LP_n^(k) = LP_n^(k-1)⊕_1/k y_k. in other words, Lim and Palfia compute the inductive means of the sequence x_1,…,x_n,x_1,…,x_n,x_1,…. For fixed n, they show that LP_n^(k)→ b_n as k →∞. If the underlying set of points {x_n}_n∈ℕ is bounded, Theorem 3.4 in <cit.> implies that for any n and k d(LP_n^(k), b_n) ≤ 2Δ√(n/k), where Δ is the diameter of the points {x_n}_n∈ℕ. If we naively use LP_n^(k) to approximate μ, we end up with d(LP_n^(k),μ) ≤ d(LP_n^(k),b_n) + d(b_n,μ) ≤ 2 Δ√(n/k) + d(b_n,μ). This has two drawbacks. First, we need to compute nf(n) geodesics to arrive at an error d(LP_n^(nf(n)),b_n) = O(f(n)^-1/2) for any f:ℕ→ℕ such that lim_n→∞ f(n) = ∞. This can be computationally expensive. Additionally, even if we can efficiently compute LP_n^(nf(n)), we may waste a lot of computational effort, as the overall error still depends on the term d(b_n,μ) of which we do not know the rate of convergence in practice. 
Hence we may spend disproportionate effort on minimizing the first term of the right-hand side of (<ref>) without minimizing the overall error. Yet another algorithm to approximate b_n is the proximal point algorithm, which is popular for optimization problems in Hilbert spaces and can be extended to the Hadamard space setting (cf. chapter 6.3, <cit.>). Starting at an arbitrary point x_0 ∈ H, the n-th step of the algorithm is given by x_n = arg min_y∈ H{1/n∑_i=1^n d(x_i,y)^2 + 1/(2λ_n) d(x_n-1,y)^2}, λ_n >0. The sequence x_n converges weakly (in the sense of Definition 3.1.1 of <cit.>) to the unique minimizer of n^-1∑_i=1^n d(x_i,z)^2 (Theorem 6.3.1, <cit.>). Finding the minimizer in each step is again computationally cumbersome and does not provide any information about d(b_n,μ). To address these issues, we propose the following stochastic approximation scheme. Let Y_k be drawn independently and uniformly from the set {x_1,…,x_k} and define M_1 = Y_1 and M_n = M_n-1⊕_1/n Y_n. This approximation scheme is justified using our heteroscedastic law of large numbers. The Y_i's are not identically distributed; however, their mean corresponds to the Fréchet mean of the first i points. Let {x_n}_n∈ℕ be a set of points in a Hadamard space (H,d) and let b_n be the barycenter of x_1,…,x_n. If there is a μ∈ H such that 1/n∑_k=1^nd(b_k,μ) = O(n^-p) for some p>1/2, and if there is a z ∈ H such that x_1,…,x_n∈ B_Cn^q(z) for some C>0 and 0 ≤ q < min{1/4, p-1/2}, then M_n→μ almost surely. Since 𝔼(Y_n) = b_n and the sequence (Y_n)_n∈ℕ is supported on B_Cn^q(z) almost surely, Corollary <ref> applies. Note that we do not require the sequence of points to converge. In fact, not even the sequence of their Fréchet means has to converge. The conditions imposed on the sequence are much weaker than the ones in Lemma <ref>. This suggests that b_n does not necessarily have to converge for it to be close to S_n. The idea of this scheme is similar to the bootstrap <cit.>. Indeed, one may ask why not randomly sample Y_i from the set {x_1,…,x_n} instead of {x_1,…,x_i} and perform an n-out-of-n bootstrap. One of the advantages of the inductive mean over the Fréchet mean is the fact that it can be updated in an online fashion. Sampling Y_i from {x_1,…,x_i} ensures that our scheme inherits this online updatability. Hence our procedure is effective in a setting where data only becomes gradually available and thus maintains one of the core advantages of the inductive mean. § SIMULATIONS We consider two examples in this simulation study. First we look at a sequence of 2×2 diagonal matrices A_n = [ 1/10 + 1/n 0; 0 10+1/n ]. This sequence is contaminated using Huber's ε-contamination model <cit.> with a noise matrix B = 5I, i.e., a fixed percentage of the sequence A_n is randomly replaced with the noise matrix B. We consider different levels of contamination, until no estimator is able to recover the signal. The limit of A_n is denoted with A. In the case of commuting matrices A_1,…, A_n, the barycenter b_n is given by (A_1… A_n)^1/n <cit.>. Since the unique minimal geodesic A⊕_tB between two positive-definite matrices A and B is given by A ⊕_t B = A^1/2(A^-1/2BA^-1/2)^tA^1/2 (see <cit.>), a direct computation yields that the barycenter b_n, the inductive mean S_n and the approximation scheme of Lim and Palfia LP^(n^2)_n all agree. However, Hansen's mean H_n may be different from any of these. The intrinsic metric on the space of positive definite matrices is given by d_P(A,B) = ‖log(B^-1/2AB^-1/2)‖_F.
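A compact version of this first experiment can be written down directly from the definitions. The following sketch is our own illustration (it reuses the d_P, geodesic and inductive_mean helpers sketched earlier, and the sequence length and contamination level are arbitrary choices, not the values used for the figures): it contaminates the sequence A_n in Huber's sense, runs the inductive mean S_n and the resampling estimator M_n, and reports their distance to the limit A in the intrinsic metric.

```python
import numpy as np

rng = np.random.default_rng(0)

def contaminated_sequence(N, eps):
    """A_n = diag(1/10 + 1/n, 10 + 1/n); each entry is replaced by B = 5*I with probability eps."""
    B = 5.0 * np.eye(2)
    return [B.copy() if rng.random() < eps else np.diag([0.1 + 1.0 / n, 10.0 + 1.0 / n])
            for n in range(1, N + 1)]

def resampled_mean(points, geodesic):
    """M_1 = Y_1, M_n = M_{n-1} ⊕_{1/n} Y_n, with Y_n drawn uniformly from {x_1, ..., x_n}."""
    M = points[0]
    for n in range(2, len(points) + 1):
        Y_n = points[rng.integers(0, n)]   # uniform over the first n points
        M = geodesic(M, Y_n, 1.0 / n)
    return M

A_limit = np.diag([0.1, 10.0])
data = contaminated_sequence(N=500, eps=0.03)   # 3% contamination, chosen arbitrarily
S_n = inductive_mean(data)                      # helpers from the earlier SPD sketch
M_n = resampled_mean(data, geodesic)
print("d_P(S_n, A) =", d_P(S_n, A_limit))
print("d_P(M_n, A) =", d_P(M_n, A_limit))
```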
In addition to the intrinsic metric we compute the distance between the estimators and their target in spectral norm. This shows that comparing non-linear estimators in spectral norm can be quite misleading. While we obviously have ‖A-B‖_2 ≤ ‖A-B‖_F ≤ d_P(A,B), sequences close in spectral norm may significantly differ with respect to d_P in practice. Comparing S_n, H_n and M_n to A in the metric d_P, we see that S_n and M_n recover the signal asymptotically, while H_n fails to do so. Unsurprisingly, S_n consistently outperforms our estimator M_n, as M_n throws information away that S_n can utilise. Nonetheless M_n recovers A asymptotically, albeit at a somewhat slower pace. However, it is surprising that H_n fails to recover A entirely. In particular, H_n converges to a matrix H^* with H^*≈[ 0.26 0; 0 10.12 ] under various degrees of contamination. This limit seems to be more stable with respect to contamination than A. However, Hansen's H_n does not recover the signal A. Measuring the distance from S_n, H_n and M_n to A in spectral norm (Figure <ref>) rather than d_P reveals some interesting phenomena. All estimators seem to be more sensitive to noise when measuring the distance in spectral norm. Our estimator M_n outperforms S_n and H_n once a certain degree of the data is contaminated, which is unexpected, since we randomly throw away information. At about 5% contamination, none of the above estimators is able to recover the signal asymptotically. This has practical relevance, since many real-world datasets contain more than 5% of contaminated data. As a second example, we consider open books (cf. <cit.>). Intuitively speaking, open books are half-spaces glued together at a common spine. Formally, let d be an integer and set H_+^d = [0,∞) ×ℝ^d. H_+^d can be thought of as a subset of ℝ^d+1 with boundary ℝ^d (which is identified with {0}×ℝ^d) and interior (0,∞) ×ℝ^d. As such, H_+^d inherits a subspace topology from ℝ^d+1. The open book B_k^d with k sheets of dimension d+1 is the disjoint union of k copies of H_+^d modulo identification of their boundaries, i.e., B_k^d = (H_+^d×{1,…,k}) / ∼, where ((t,x),j) ∼ ((s,y),k) if and only if t = s = 0 and x=y. For the simulation, we look at a sequence of points x_n = ((1+2/n, 10-1/√(n)),1) in B_3^1. This sequence clearly converges to x = ((1,10),1). Again we use Huber's ε-contamination model. The noise y = ((1,10), s) has the same coordinates (1,10), but may lie in a different, randomly selected, sheet s=1,2,3. It is well known that the classical stochastic limit theorems behave in unexpected ways on these spaces. Means on these spaces are sticky, as Hotz et al. <cit.> have noted, i.e., they are much more robust to outliers when compared to the classical Euclidean mean. This has interesting consequences for the asymptotic recovery of means. Up to almost one-third of the data can be contaminated, and we can still recover the limit x. Both S_n and LP_n^(n^2) are still able to recover x, even if more than one-third of the data is contaminated. In fact, because of the stickiness of means, one would expect that they recover x unless two-thirds of the data are compromised. In the case of positive definite matrices analyzed earlier, we saw that M_n and S_n diverge when a certain degree of contamination (about 5%) is reached. Here S_n and LP_n^(n^2) allow for twice as high a degree of contamination as M_n. This means that ignoring information is much more detrimental in open books than it is in the case of symmetric, positive definite matrices.
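For the open book one can work with the metric directly: two points in the same sheet are compared as in ℝ^{d+1}, and two points in different sheets are compared after unfolding the two sheets into a single plane across the spine. The snippet below is our own minimal sketch of this standard "unfolding" distance on B_k^1 (it is an assumption of ours that this is how one would implement the experiment; it is not code from the paper).

```python
import numpy as np

def openbook_dist(p, q):
    """Distance on the open book B_k^d; a point is ((s, x), sheet) with s >= 0 and x in R^d."""
    (s, x), i = p
    (t, y), j = q
    x, y = np.atleast_1d(x), np.atleast_1d(y)
    if i == j or s == 0.0 or t == 0.0:   # same sheet, or at least one point lies on the spine
        return float(np.sqrt((s - t) ** 2 + np.sum((x - y) ** 2)))
    # different sheets: unfold sheet j onto negative s-values and take the straight-line distance
    return float(np.sqrt((s + t) ** 2 + np.sum((x - y) ** 2)))

x_lim = ((1.0, 10.0), 1)                              # the limit x from the text
x_50 = ((1.0 + 2 / 50, 10.0 - 1 / np.sqrt(50)), 1)    # a point of the sequence
noise = ((1.0, 10.0), 3)                              # same coordinates, different sheet

print(openbook_dist(x_50, x_lim))   # small: same sheet, nearby points
print(openbook_dist(noise, x_lim))  # 2.0: the path must pass through the spine
```

A geodesic x⊕_t y between points in different sheets can be obtained the same way: take the convex combination in the unfolded plane and fold the result back onto whichever sheet its first coordinate lands in (points with first coordinate zero lie on the spine).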
The behavior of Hansen's mean H_n in the open book also differs significantly from the case of symmetric, positive definite matrices. When there is no contamination, H_n recovers x. However, once 1-2% of contaminated data is included in the dataset, we observe a phase transition: Hansen's H_n does not recover the true mean anymore. Judging from the simulations alone, it is not clear whether H_n converges for low degrees of contamination. With high degrees of contamination, Hansen's H_n seems to converge to the spine, i.e., to the point ((0,1),1). This is in line with the sticky behavior of means observed by Hotz et al. <cit.>. § DISCUSSION Lemma <ref> and <ref> show that virtually all means converge to the same limit if the underlying sequence converges. Theorems <ref> and <ref> as well as the finite sample results of Brunel and Serre <cit.> suggest that S_n and b_n are close under fairly mild regularity assumptions. This begs the question: What are the most general assumptions under which d(b_n,S_n) → 0? By Theorem <ref>, the convergence of b_n is not necessary for a strong law of large numbers to remain valid. How far can this assumption be further relaxed? Additionally, one may try to derive a deterministic no-dice version of Theorem <ref> in the sense of <cit.> and <cit.>. In this case one has to compute the inductive mean of the sequence x_1,x_1,x_2,x_1,x_2,x_3,x_1,x_2,x_3,x_4,… instead of x_1,…,x_n,x_1,…,x_n,x_1,…,x_n,…. The bottleneck in terms of computational effort is the number of geodesics that have to be computed. While our approach computes slightly fewer geodesics, going through the first n subsequences still requires the computation of O(n^2) geodesics. Hence a no-dice version of Theorem <ref> would only yield an insignificant improvement in terms of overall runtime compared to the one of Lim and Palfia <cit.>. It is worth pointing out that in our simulations more complicated approaches rarely outperform the inductive mean S_n. Schemes like the one put forward by Lim and Palfia <cit.> may provide regularization and additional stability. However, our scheme implies that S_n itself is already somewhat robust against contaminated data. In the case of symmetric, positive definite matrices, Hansen's H_n seems to converge to a point differing from the asymptotic center of mass. This behavior is much more stable with respect to contaminated data than the convergence of the other procedures. Can Hansen's mean serve as a robust alternative to S_n or b_n? The simulations show that the behavior of means crucially depends on the geometry of the underlying space. Means of points close to the spine of open books are very stable, much more so than in the case of positive definite matrices. One may observe that open books have a curvature of -∞ along the spine. Can one characterize the robustness or stickiness of means in terms of the local curvature of Hadamard spaces? One may ask whether Theorem <ref> works if we replace S_n by b_n. For the case of i.i.d. random variables, this has been carried out in Proposition 6.6 of <cit.>. While we do believe that this is still true in our case, the central tool in this proof, Varadarajan's theorem (Theorem 11.4.1, <cit.>), requires the underlying process to be i.i.d. Inspecting the proof of Lemma <ref> we observe that a condition like 1/n∑_k=1^nd(x_k,x)^2→ 0 is sufficient to guarantee that the inductive mean S_n and the barycenter b_n converge to the same point x.
This condition is of course weaker than the convergence x_n→ x, but it does not, in general, imply the weak convergence of x_n to x (see Section 3.1, <cit.>). Is weak convergence of the underlying sequence strong enough to guarantee that the inductive mean and the barycenter agree asymptotically? § ACKNOWLEDGMENTS Both authors are part of the Research Unit 5381 of the German Research Foundation. Georg Köstenberger is supported by the Austrian Science Fund (FWF): I 5485-N and Thomas Stark is supported by the Austrian Science Fund (FWF): I 5484-N. The authors would like to thank Moritz Jirak and Tatyana Krivobokova for organising the research seminar at the Department for Statistics and Operations Research in Vienna and the guest speaker, Victor-Emmanuel Brunel, for a wonderful introduction to statistics on metric spaces and his feedback during the preparation of this manuscript.
http://arxiv.org/abs/2307.04408v1
20230710081540
TIM: Teaching Large Language Models to Translate with Comparison
[ "Jiali Zeng", "Fandong Meng", "Yongjing Yin", "Jie Zhou" ]
cs.CL
[ "cs.CL" ]
Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning. However, these models can sometimes struggle with tasks that require more specialized knowledge such as translation. One possible reason for such deficiency is that instruction tuning aims to generate fluent and coherent text that continues from a given instruction without being constrained by any task-specific requirements. Moreover, it can be more challenging for tuning smaller LLMs with lower-quality training data. To address this issue, we propose a novel framework using examples in comparison to teach LLMs to learn translation. Our approach involves presenting the model with examples of correct and incorrect translations and using a preference loss to guide the model's learning. We evaluate our method on WMT2022 test sets and show that it outperforms existing methods. Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations. Please refer to Github for more details: https://github.com/lemon0830/TIM. § INTRODUCTION Generative large language models, like GPT4, have shown remarkable performance in various NLP tasks <cit.>. For machine translation, the GPT models achieve very competitive translation quality, especially for high-resource languages <cit.>, which opens up new possibilities for building more effective translation systems. It is impractical to deploy such large models for the translation task only, and using or tuning open-sourced generative language models has become an attractive research direction. In this regard, researchers have explored strategies for example selection and instruction design through In-Context Learning (ICL) <cit.>. However, evaluations of open-sourced LLMs like Bloom show that they do not perform as well as strong multilingual supervised baselines in most translation directions <cit.>. Additionally, ICL can increase decoding latency due to the need for large models with long context. Based on these observations, researchers suggest tuning relatively small LLMs for translation with a few high-quality supervised instructions <cit.>. Instruction tuning has been shown to be an efficient method for making LLMs better aligned to the task descriptions preferred by humans <cit.>. The only requirement is to collect task-specific data, and LLMs will be fine-tuned on the data with the language modeling loss. However, optimizing for simple next-token prediction loss will cause models to overlook context information, especially for low-capacity models. It is serious for the tasks in which the specialized knowledge in context is necessary for task completion, and ignoring such knowledge on translation can lead to inadequacy and hallucination. Therefore, there is a need to investigate the limitations of LLMs and explore methods for improving their performance in specialized tasks. In this paper, we propose to teach the language models to learn translation with examples in comparison, aiming to make full use of a small amount of high-quality translation data. Based on the training data, we further construct two kinds of comparisons: output comparison and preference comparison.
Output comparison is used to learn responses of different instructions for the same input. Preference comparison is used to maximize the gap between correct and incorrect translations. Specifically, in order to help identify specific areas where the model may be making errors, we introduce an additional preference loss during fine-tuning, which is used to learn reward models <cit.>, as regularization to penalize unexpected outputs. We evaluate TIM on WMT22 test sets in four language directions (EN⇔DE, EN⇔ZH), and the improvement over the baselines shows the effectiveness of our method. Our model shows better zero-shot translation performance and stability in prompt choice. As the size increases, the performance of the models trained with TIM increases, with the improvement being more pronounced in the case of smaller models. In particular, the tuned LLaMa-13B <cit.> achieves top 1 on quality estimation without references in the EN⇔DE, outperforming the dedicated models for quality estimation like COMET. § RELATED WORK The research of machine translation based on LLMs can be divided into two categories: LLMs as interface <cit.> and instruction tuning <cit.>. The studies of using LLMs as interface focus on empirical analysis. For example, <cit.> evaluate ChatGPT, GPT3.5 (text-davinci-003), and text-davinci-002 in eighteen different translation directions involving high and low resource languages. <cit.> further evaluate four popular LLMs (XGLM, BLOOMZ, OPT and ChatGPT) on 202 directions and 102 languages, and compare them with strong supervised baselines, which provides a more comprehensive benchmark result. Many efforts are also put into investigating translation exemplars selection strategy of in-context learning <cit.>. Another line of work introduces knowledge, such as word alignments extracted from a dictionary, to LLMs for better translation <cit.>. Tuning smaller LLMs (e.g., 7B) for translation tasks is a promising direction since they are better at English than supervised translation models. However, even for directions from other languages to English, the gap between language models fine-tuned with translation data and supervised systems is still evident <cit.>. Different from them, we introduce output comparison and preference comparison data and present a preference regularization to alleviate hallucination and help LLMs learn translation better. § METHOD In brief, we tune generative language models to learn translation with output comparison and preference comparison in the instruction tuning framework. First, we will give a formal introduction to instruction tuning. Then, we present the detail of two kinds of comparisons of our method consisting of output comparison and preference comparison, and an additional preference learning loss. Finally, we show the different ways of parameter tuning. §.§ Background: Instruction Tuning The purpose of instruction tuning is to enhance the capacity of language models in handling NLP instructions. The concept is that the models can be trained to execute tasks specified in instructions, which would enable them to comprehend and execute tasks that have not been encountered before. As illustrated in Figure <ref>, generally, each instance of instruction-following data starts with “instructions” c describing the task the model should perform, and a corresponding output y indicating the answer to the instruction. The “input” x, the optional context or input for the task, is not necessary sometimes but is used for the machine translation task. 
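To make the data format concrete, the sketch below shows how one instruction-following translation example might be assembled from an instruction c, an input x and an output y. The Alpaca-style delimiters and the placeholder target are our own assumptions for illustration (the paper's actual prompt wording is given in its figures), not the released TIM data.

```python
def build_example(instruction: str, source: str, target: str) -> dict:
    """Wrap one (instruction c, input x, output y) triple into a prompt/target pair."""
    prompt = (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{source}\n\n"
        f"### Response:\n"
    )
    return {"prompt": prompt, "target": target}

example = build_example(
    instruction="Translate from Chinese to English.",
    source="国有企业和优势...老区。",      # source sentence from the running example
    target="<reference translation>",      # placeholder; the real target is the human reference
)
```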
Given the instruction data, the language models are optimized by minimizing the negative log-likelihood of the output y: L_lm=-1/|y|∑_i^|y|logp(y_i|c,x). Notably, the objective is the same as that used in pretraining. §.§ Output Comparison An important ingredient of our method is the construction of samples used to provide comparison signals for model learning. In addition to regular translation data, we construct data used for comparison by introducing dictionary information or translation errors, which are shown in Figure <ref>. Dictionary-guided Data. To make the model aware of the underlying reasons for different translations, we inform the model of different correct outputs with the help of bilingual dictionaries[https://github.com/facebookresearch/MUSE]. We do not manually replace the words in an input-output pair to synthesize the comparison data but directly use a multi-reference corpus. Specifically, we use the “no error” submissions annotated by humans of WMT20 in Multidimensional Quality Metrics (MQM) datasets[https://github.com/google/wmt-mqm-human-evaluation] as the multi-reference of the source sentence. Then, we obtain the word alignments between a single source sentence and multiple references by looking up the bilingual dictionary. Finally, we use the word alignments as a note added to the input. As shown in Figure <ref>, for the same input sentence “国有企业和优势...老区。”, with the note containing different word alignments, the outputs of Example 1 and Example 2 are different. Error-guided Data. In addition, inspired by <cit.>, we introduce translations with error annotations. For correct input-output pairs, the added notes indicate no mistakes in the references, while the notes of incorrect input-output pairs indicate detailed translation errors. As shown in the left part of Figure <ref>, the output of Example 1 is a correct translation while the output of Example 2 has a major locale convention/name format mistake, corresponding to the added note. We directly use the human-annotated data of WMT20 in MQM datasets. §.§ Preference Comparison In preference comparison, we assign contrastive outputs for each type of data, denoted as Bad Output, and train the model with an extra preference loss. For the regular translation data, we use the prediction of large language models (e.g., Alpaca) as the comparison. For each sample with dictionary information or error information, we randomly sample a translation with errors as the Bad Output. Moreover, we add noise to the Bad Output by randomly deleting words or swapping the positions of two words. With examples of correct and incorrect translations, the model can be optimized to produce higher quality translations by distinguishing them, which can reduce the resources needed for training. One way to utilize the contrastive outputs is to train a reward model and further fine-tune the language model with the reward model using reinforcement learning, i.e., RLHF <cit.>. Instead of using such complex two-stage training process, we directly tune the model using a preference loss: L_pl=-log(σ(r_θ(c,x,y_0)-r_θ(c,x,y_1))), where σ(·) is the sigmoid function, and y_0 and y_1 denote the preferred output and comparison output, respectively. Specifically, r_θ is a linear head that takes the hidden state of the top layer and returns a scalar. 
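As a minimal sketch (our own, in generic PyTorch; the name reward_head is a stand-in for the linear head r_θ described above, not an identifier from the released code), the sequence-level preference loss amounts to applying -log σ(·) to the difference of two reward scores:

```python
import torch.nn.functional as F

def preference_loss(h_preferred, h_rejected, reward_head):
    """
    h_preferred, h_rejected: top-layer hidden states at the scoring position, shape (batch, hidden).
    reward_head: a linear layer mapping hidden -> 1, playing the role of r_theta.
    """
    r_good = reward_head(h_preferred).squeeze(-1)   # r_theta(c, x, y_0)
    r_bad = reward_head(h_rejected).squeeze(-1)     # r_theta(c, x, y_1)
    return -F.logsigmoid(r_good - r_bad).mean()     # -log sigma(r_good - r_bad)
```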
In practice, preference learning is calculated at the token level: L_pl=-1/N-I∑_i=I^Nlog(σ(r_θ(h_i^(0))-r_θ(h_i^(1)))), where I is the index starting from the segments different between y_0 and y_1, N is the maximum length of two sequences, and h_i is the hidden state of the i-th token. The overall loss function for tuning the model is L=L_lm+λL_pl, where λ is a coefficient of the preference learning loss. We simply set λ as 0.5 in this paper. §.§ Tuning Strategies In addition to vanilla fine-tuning all model parameters, parameter efficient fine-tuning methods are specially proposed for large language models such as prefix tuning and LoRA <cit.>. In this paper, we adopt three different strategies for tuning the models, listed in descending order from the number of fine-tuned parameters. LoRA: Tuning with Low-rank Matrices. LoRA <cit.> is a technique that reduces the number of trainable parameters by introducing new low-rank matrices to any module in the model while keeping the original weights frozen. This results in a significant reduction in storage requirements for large language models, as well as efficient task-switching during deployment without impacting inference latency. FixEmb: Tuning with Embedding Fixed. It is likely that the limited number of trainable parameters in LoRA-based tuning can restrict its expressiveness for certain tasks. To overcome this limitation, a simple solution would be to fine-tune the parameters of the model layers while keep the embeddings fixed. By doing so, the model can gain more flexibility in adjusting its performance without compromising the important semantic information captured by the embeddings. Full: Tuning Full Parameters. Full parameter tuning has recently been demonstrated more effective than LORA. The limitation of full parameter fine-tuning is the memory footprint, but it is not serious for the 7B models and little data. § EXPERIMENTS In this section, we begin by conducting preliminary experiments to investigate the impact of inference strategies and the resilience of our TIM under varying instructions. Subsequently, we evaluate TIM's performance on the WMT and FLORES-200 dev-test tasks, comprising a total of four language pairs. For this evaluation, we employ BLOOMZ-7b-mt[https://huggingface.co/bigscience/bloomz-7b1-mt] and LLaMA-7b <cit.> as the backbones. §.§ Settings To avoid data leakage as much as possible <cit.>, we use the latest WMT22 test set and FLORES-200 dev-test. * WMT22 Test Sets. We use the test sets from WMT22 competition[https://www.statmt.org/wmt22/translation-task.html], which consist of more recent content from diverse domains such as news, social, e-commerce, and conversational domains. The test sets comprise 1984, 2037, 1875, and 2037 samples for the German-to-English (De⇒En), English-to-German (En⇒De), Chinese-to-English (Zh⇒En), and English-to-Chinese (En⇒Zh) language pairs, respectively. * FLORES-200 dev-test. We use the dev-test split from the FLORES-200 benchmarks[https://github.com/facebookresearch/flores/blob/main /flores200]. This dataset includes 1,012 sentences extracted from English Wikipedia, covering a broad range of topics and domains. These sentences have been carefully checked by professional translators into approximately 200 languages. To ensure a fair and consistent evaluation, we fine-tuned all models for 1 epoch with a batch size of 128, while imposing a maximum text length of 512. The learning rate is 2e-5 and weight decay is 0.0. 
We conducted fine-tuning on eight NVIDIA A100 GPUs, utilizing the Deep-Speed ZeRO stage3 for model parallelism. The results of the final checkpoint are reported. For automatic evaluations, we utilize two widely adopted metrics: BLEU <cit.> implemented in SacreBLEU[https://github.com/mjpost/sacrebleu], and COMET[https://github.com/Unbabel/COMET] with Unbabel/wmt22-comet-da. These metrics employ distinct approaches to evaluate the quality of machine translation. BLEU is driven by n-gram similarity, while COMET relies on cross-lingual pretrained models. §.§ Baselines We leverage the BLOOMZ-7b-mt and LLaMA-7b models as the foundation models and evaluate the following baselines: Alpaca-(*) is a reproduction of the Alpaca model fine-tuned solely on the alpaca multi-task dataset[https://huggingface.co/datasets/tatsu-lab/alpaca]. MT-(*) is fine-tuned on the human-written validation data from previous WMT competitions, i.e., the newstest2017-2021 of Chinese⇔English and German⇔English, which consist of 45,433 sentence pairs for all four directions. We use the notation TIM-(*) to refer to LLMs fine-tuned using our proposed TIM approach. The training data for TIM-(*) includes the WMT translation data as well as the dictionary-guided and error-guided data described in Section <ref>. Besides, we report the results of WMT22 winners, GPT-4 <cit.>, and NLLB-3.3B <cit.>. The latter is a multilingual translation model trained on a massive parallel corpus of over 200 languages[The results in <cit.> are directly reported.]. §.§ Pre-Experiments In this section, we investigate the effect of inference strategies and instructions. We fine-tune the BLOOMZ-7b-mt with our TIM and conduct evaluations on the WMT22 test sets. Effect of Inference Strategies. Beam search has been the standard search algorithm for machine translation, while LLMs usually use sampling for efficiency. We compare the performance of sampling and beam search, and the two search algorithms are combined with the notes in our dictionary-guided and error-guided data. Table <ref> presents the experimental results. First, we observe that instructing the model to generate translations without errors does not result in a significant performance gain, contrary to the conclusion drawn in <cit.>. We speculate that the preference loss function implicitly allows the LLMs to learn to generate error-free translations, making the additional instructions unnecessary. Secondly, previous studies have shown that introducing alignment information from dictionaries can improve translation performance <cit.>. Surprisingly, Table <ref> shows that adding alignment notes significantly improves the performance of De⇒En, but harms the performance of other language pairs. This may be due to the fact that most of the words in the dictionaries we use are common words, or that the wording styles of the dictionaries differ greatly from the reference. Further research is needed to determine how to better collect and use dictionary information for machine translation is left for future work. Effect of Instructions. In human interaction scenarios, instructions provided by users may vary in style and form, and thus it is essential to evaluate the robustness of TIM under different instruction styles. The performance of our TIM using ten distinct instructions is shown in Figure <ref>. The result indicates that our TIM achieves consistent performance across all the tested instructions. 
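Before turning to the main results, the automatic evaluation described in the settings above can be reproduced in a few lines. The sketch below is our own illustration using the public sacrebleu and unbabel-comet packages; the call signatures follow those interfaces as we understand them and should be treated as assumptions rather than the authors' exact evaluation script.

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

def evaluate(sources, hypotheses, references):
    # Corpus-level BLEU via SacreBLEU.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    # COMET with the Unbabel/wmt22-comet-da checkpoint (requires src, mt, ref triples).
    ckpt = download_model("Unbabel/wmt22-comet-da")
    comet_model = load_from_checkpoint(ckpt)
    data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)]
    comet = comet_model.predict(data, batch_size=8, gpus=1)
    return bleu.score, comet.system_score
```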
§.§ Main Results Based on the observation in Section <ref>, we use a simple instruction “Translate from {src} to {tgt}.\n{input}” and beam search strategy with a beam size of 4 for all models during inference. Table <ref> presents the translation performance on the WMT22 test sets and FLORES-200 dev-test. For the models based on BLOOMZ-7b-mt, we only evaluate them on WMT22 test sets due to the data leakage issue. We have the following observations: First, based on LLaMA-7b, the Alpaca-(*) models exhibit some translation ability particularly in high-resource directions such as De⇒EN and En⇒DE, due to the small amount of translation instruction data based on Spanish⇔English that Alpaca possesses. Introducing a small number of translation sentence pairs (i.e., MT-(*)) in the corresponding language can result in additional improvement. Secondly, we observe significant performance fluctuations across different language models, training data, and language pairs for (*)-LoRA and (*)-Full. For example, when the backbone is BLOOMZ-7b-mt, MT-LoRA outperforms MT-Full in most language pairs except for En⇒De. However, when the backbone is the LLaMa-7b model, MT-LoRA underperforms MT-Full in Zh⇒En and En⇒Zh language pairs. Our speculation is that LoRA can prevent LLMs from overfitting but is limited in the number of trainable parameters. In contrast, the experiment result of (*)-FixEmb indicates that fine-tuning with fixed embedding parameters can better leverage the generalization of LLMs and prevent overfitting. Finally, training LLMs with comparison can further enhance the understanding of the translation task. Compared to Alpaca-(*) and MT-(*) models, TIM-(*) achieve significantly better performance on both the WMT22 test sets and FLORES-200 dev-test. Concretely, based on BLOOMZ-7b-mt, TIM-FixEmb achieves notable improvement compared with MT-FixEmb, with 2.93, 3.29, 1.34, 2.40 BLEU scores and 0.55, 0.47, 0.50, 2.80 COMET scores on Zh⇒En, En⇒Zh, De⇒En, En⇒De, respectively. § ANALYSIS §.§ Effect of Model Size In this section, we present a comparison between TIM and instruction tuning across different model sizes. Figure <ref> illustrates the consistent improvements achieved by TIM, indicating its generalizability. Notably, BLOOM-3b does not outperform BLOOM-1b7 with instruction tuning. On the other hand, as the foundation LLM's size increases, the translation performance of the LLMs after fine-tuning with TIM gradually improves. In particular, the improvement is more significant when the model size is smaller. This observation supports our hypothesis that simple instruction tuning with a small amount of training data, may not effectively learn task patterns and instead relies heavily on the model's original ability to comprehend instructions. On the other hand, training LLMs with comparison encourages them to swiftly identify the task's requirements and patterns and leverage internal cross-lingual knowledge. §.§ Zero-shot Translation To evaluate TIM’s performance in translation directions never seen previously, i.e., zero-shot multilingual capability, we conduct experiments on the WMT22 multilingual-to-English translation benchmark which encompasses 4 translation directions: Czech-to-English (cs⇒en), Japanese-to-English (ja⇒en), Russian-to-English (ru⇒en), and Ukrainian-to-English (uk⇒en). 
We compare our method with the following open-sourced models: ChatGLM-6b[https://huggingface.co/THUDM/chatglm-6b], Alpaca-7b[https://huggingface.co/tatsu-lab/alpaca-7b-wdiff], Vicuna-13b[https://huggingface.co/lmsys/vicuna-13b-delta-v1.1], BayLing-13b <cit.>, and NLLB-3.3b <cit.>. We report the results of the above models in <cit.>. Due to the better performance of LLaMA in multilingual-to-English translation, we report the performance of LLaMA-7b and LLaMA-13b fine-tuned with our TIM. As depicted in Figure <ref>, TIM-(*) (i.e., TIM-FixEmb-7b, TIM-LoRA-13b, and TIM-FixEmb-13b) exhibit good zero-shot multilingual capability on these translation directions. Compared to ChatGLM-6b, Alpaca-7b, and Vicuna-13b, TIM-(*) exhibits superior translation ability, highlighting that aligning training languages strengthens the alignment of other languages as a by-product. Additionally, TIM-(*) outperforms BayLing-13b, which uses additional interactive translation training data, in XX⇒English translations. TIM-(*) also demonstrates comparable performance with NLLB-3.3B in some language pairs. These results demonstrate that adding carefully constructed translation data, combined with an effective training strategy such as our proposed TIM, can enhance the overall task capability of LLMs. §.§ Ablation Study To analyze the impact of different components of TIM, we investigate five variants of TIM-FixEmb taking BLOOMZ-7b-mt as the backbone: ① w/o ℒ_pl, where we removed the preference loss ℒ_pl; ② w/o Dict, where we removed the dictionary-guided comparisons in the training data; ③ w/o Error, where we removed the error-guided comparisons in the training data; ④ w/o OutputCom, where we removed output comparison; ⑤ w/o OutputCom&ℒ_pl, in which we fine-tuned the LLM with translation instructions using the standard instruction tuning method. We illustrate the BLEU scores on Zh⇒En and En⇒De in Figure <ref>. The experimental results of ④ and ⑤ demonstrate that LLMs can quickly learn better translation output through preference comparison, even without adding any output comparison data. Moreover, the results of ①, ②, and ④ show that output comparison is more crucial than preference comparison. In particular, the removal of error-guided data (i.e., ③) results in a greater performance drop than the removal of dictionary-guided data (i.e., ②). We hypothesize that this is because the translations without errors in the system outputs of WMT2020 are relatively similar, causing the “output” of dictionary-guided data to be too similar to create a high-quality comparison. If translation data with multiple, more diverse references were available, we might achieve further improvement. We leave this for future work. §.§ MT Metrics Evaluation The preference scores can reflect the quality of the model output. To assess whether the strategy can successfully learn a meaningful importance estimation, we use MTME[https://github.com/google-research/mt-metrics-eval] to evaluate the performance of our preference scores on standard test sets from the WMT22 Metrics Shared Tasks in De⇒En and En⇒De, respectively. Specifically, for each pair consisting of a source sentence and the corresponding hypothesis, we wrap them with our Training Prompt, compute the score of each token in the hypothesis, and use the score of the last token as the sentence-level score. Table <ref> shows the system-level accuracy (Acc) and Pearson correlations (PCCs).
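The scoring procedure just described can be sketched as follows. This is our own illustration in generic PyTorch/transformers style: tokenizer, model and reward_head are placeholders for the fine-tuned LLM and its scalar head, and the prompt string stands in for the training prompt (here we reuse the simple inference instruction quoted in the main results, which is an assumption rather than the exact training template).

```python
import torch

@torch.no_grad()
def qe_score(src, hyp, tokenizer, model, reward_head, src_lang="German", tgt_lang="English"):
    """Reference-free quality score: reward of the last hypothesis token under the prompt."""
    prompt = f"Translate from {src_lang} to {tgt_lang}.\n{src}"
    text = prompt + "\n" + hyp                                          # wrap source and hypothesis
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model(input_ids=ids, output_hidden_states=True)
    hidden = out.hidden_states[-1]                                      # (1, seq_len, hidden)
    scores = reward_head(hidden).squeeze(-1)                            # token-level scores r_theta
    return scores[0, -1].item()                                         # last token = sentence score
```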
In particular, our TIM-LLaMA-13b outperforms all the reference-free metrics and achieves the best Pearson correlation on De⇒En. This demonstrates that the LLM is implicitly a reward model which can be jointly optimized during instruction tuning <cit.>. § CONCLUSION We propose TIM, a training method that instruction-tunes open-source large language models for the translation task with the comparison of translations. Experiments and analyses validate the effectiveness of TIM in terms of translation quality and zero-shot translation ability. For the reference-free MT metrics evaluation, TIM-LLaMA-13b even outperforms some popular metrics like COMET and BLEURT in De⇒En, showing that our method can learn translation and evaluation jointly. Future work can explore the use of more diverse references for output comparison, and more advanced preference learning signals.
http://arxiv.org/abs/2307.05907v1
20230712042503
An Alternative Formation Scenario for Uranium-rich Giants: Engulfing an Earth-like Planet
[ "Dian Xie", "Chunhua Zhu", "Sufen Guo", "Helei Liu", "Guoliang Lü" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.EP" ]
The actinides, such as the uranium (U) element, are typically synthesized through the rapid neutron-capture process (r-process), which can occur in core-collapse supernovae or double neutron star mergers. There exist nine r-process giant stars exhibiting conspicuous U abundances, commonly referred to as U-rich giants. However, the origins of these U-rich giants remain ambiguous. We propose an alternative formation scenario for these U-rich giants whereby a red giant (RG) engulfs an Earth-like planet. To approximate the process of a RG engulfing an Earth-like planet, we employ an accretion model wherein the RG assimilates material from the planet. Our findings demonstrate that this engulfment event can considerably enhance the presence of heavy elements originating from Earth-like planets on the surfaces of very metal-poor stars (Z = 0.00001), while its impact on solar-metallicity stars is comparatively modest. Importantly, the structural and evolutionary properties of both very metal-poor and solar-metallicity stars remain largely unaffected. Notably, our engulfment model effectively accounts for the observed U abundances in known U-rich giants. Furthermore, the evolutionary trajectories of U abundances on the surfaces of RGs subsequent to the engulfment of Earth-like planets encompass all known U-rich giants. Therefore, it is plausible that U-rich giants are formed when a RG engulfs an Earth-like planet. stars: evolution – stars: chemically peculiar – convection – accretion § INTRODUCTION Recently, according to the data from <cit.> in "JINAbase", nine giants have been identified with clearly detectable U. These stars are known as U-rich giants and include: CS 31082-001 <cit.>, BD+173248 <cit.>, CS 22892-052 <cit.>, CS30306-132 <cit.>, HD 115444 <cit.>, HD186478 <cit.>, HD6268 <cit.>, HE 1523-0901 <cit.>, CS 29497-004 <cit.>. These U-rich giants are very metal-poor (VMP) stars, that is, [Fe/H] <= -2. In this work, the notation used to represent elemental abundances in spectroscopy follows the standard notation as described by <cit.>. For elements X and Y, the notation is as follows: logε(X) ≡log _10( N_X / N_H)+12.0 , where N_X and N_H represent the number densities of element X and hydrogen, respectively. [X / Y] ≡log _10( N_X / N_Y)_*-log _10( N_X / N_Y)_⊙, where N_X and N_Y represent the number densities of elements X and Y, respectively. U belongs to the actinides, which are believed to have originated predominantly from explosive r-process nucleosynthesis. The r-process is considered a significant mechanism for the production of elements heavier than iron (Fe) and is the only known process capable of synthesizing actinides. Depending on the degree of r-process enrichment, r-process-enhanced stars can be classified into different categories: r-I stars: These stars have 0.3 <= [Eu/Fe] <= 1 and [Ba/Eu] < 0. They are believed to form in slightly larger dwarf galaxies, such as Tucana III <cit.>. r-II stars: These stars have [Eu/Fe] > 1 and [Ba/Eu] < 0. They are found in ultra-faint dwarf galaxies (UFD) like Reticulum II <cit.>. According to the data from <cit.> in "JINAbase", approximately 91 r-I stars and 32 r-II stars have been identified. The nine known U-rich giants belong to the r-I or r-II class.
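As a quick illustration of the two notations above, the following sketch converts number densities into logε(X) and [X/Y]; the numerical inputs are made-up values used only for illustration.

```python
import numpy as np

def log_eps(n_x, n_h):
    """log eps(X) = log10(N_X / N_H) + 12."""
    return np.log10(n_x / n_h) + 12.0

def bracket(n_x, n_y, n_x_sun, n_y_sun):
    """[X/Y] = log10(N_X/N_Y)_star - log10(N_X/N_Y)_sun."""
    return np.log10(n_x / n_y) - np.log10(n_x_sun / n_y_sun)

# Made-up example values, for illustration only
print(log_eps(n_x=1e-12, n_h=1.0))           # abundance of X relative to H
print(bracket(3e-5, 1.0, 3e-4, 1.0))         # star 10x depleted in X -> [X/H] = -1
```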
In astrophysics, there are two main candidate sites for actinide production: core-collapse supernovae (CCSNe) and neutron star mergers (NSMs). Obviously, the U-rich giants cannot produce U by themselves; therefore, their origin is still debated. <cit.> demonstrated that actinides can also be synthesized in low-metallicity, low-mass AGB stars through the i-process (the intermediate neutron capture process). However, their model result is strongly affected by the remaining uncertainties. It is widely acknowledged that planets exist in nearly all stellar systems, including our own solar system <cit.>. As the host star evolves, it begins to expand. The host star can engulf its planets and undergo a physical process similar to common-envelope evolution <cit.>. This process is referred to as planetary engulfment. A number of studies have investigated the impact of this process on the host stars. <cit.> suggested that substellar companions around stellar remnants can be produced via planetary engulfment <cit.>. <cit.> considered that planetary engulfment can enhance the rotation of the host star <cit.>. <cit.> found that lithium enrichment on the surface of a giant star can be explained via planetary engulfment <cit.>. <cit.> and <cit.> conducted research on a main sequence star that undergoes planet engulfment. They observed that a small convective region within the host star leads to an enrichment of heavy elements on its surface. Moreover, the ingestion of planets can be deduced from the enhancement of refractory elements in the photosphere of the host star after the accretion of rocky planetary material. These enhancements of refractory elements are influenced by internal mixing mechanisms within stellar structures, specifically thermohaline mixing caused by an inverse gradient of mean molecular weight between the convective envelope and radiative core <cit.>. Therefore, U-rich giants may be produced via planetary engulfment. In this paper, our primary emphasis lies in the investigation of host stars that engulf Earth-like planets during their red giant phase. We delve into the likelihood of these giants transforming into U-rich giants. Section 2 describes our models of a star engulfing a rocky planet. Section 3 presents a detailed analysis of both the Fe and U abundances subsequent to planetary engulfment. Our conclusions are summarized in Section 4. § RED GIANTS ENGULFING ROCKY PLANETS To investigate the process of a red giant engulfing a rocky planet, it is necessary to simulate both the stellar structure and evolution, as well as the interaction between the red giant and its planet. For this purpose, we employ the open-source evolutionary stellar code Modules for Experiments in Stellar Astrophysics (MESA; <cit.>, version 12115) to calculate the stellar evolution. In addition, we use an accretion model to simulate the engulfment of the planet by the red giant. §.§ Input Parameters for Stellar Evolution The stellar structure and evolution mainly depend on the stellar mass and metallicity. The observational sample we used consists mostly of very metal-poor (VMP) stars or even extremely metal-poor (EMP) stars, so we use Z = 0.00001 to perform the stellar evolution calculations. Fig. <ref> shows that Z = 0.00001 covers the U-rich giants very well. In order to discuss the effects of metallicity on the formation of U-rich giants, we take Z = Z_⊙ and 0.00001 in the different models. In addition, Fig.
<ref> shows that 1.0 M_⊙, 2.0 M_⊙ and 5.0 M_⊙ evolution tracks can basically cover our sample. Usually, convection, overshoot, thermohaline mixing, element diffusion, and radiative levitation exert significant influence upon the structural dynamics and evolutionary trajectories of stars, particularly shaping the chemical abundance patterns discernible on their stellar surfaces. In the present paper, the Ledoux criterion is used for the convection. The mixing-length parameter α_LMT=1.5, the parameter of the semi-convection α_SEM=1.0 <cit.>. The overshoot mixing diffusion coefficient that occurs near the convective boundary of a star is: D_ov=D_conv, 0exp(-2 z/f λ_P, 0), where D_conv, 0 is the diffusion coefficient near the Schwarzschild boundary, λ_P, 0 is height of the pressure scale in this position, z is the distance in the radiation layer away from this position, and f is a parameter which may have different values at the upper and lower convective boundaries for no-burning, H-burning, He-burning, and metal-burning convection zones <cit.>. For simplicity, f=0.02 in our models. Thermohaline mixing occurs in the presence of inversions, where regions with an inverted average molecular weight are considered formally stable. The diffusion coefficient is determined through linear stability analysis by <cit.> and <cit.>. This type of mixing is particularly significant in cases of planetary engulfment, where heavy planetary material is deposited near the star's surface. In this study, we applied the method developed by <cit.>, which is based on <cit.> and provides a more comprehensive and precise approach to investigate thermohaline mixing. Due to the large number of calculations required for diffusion calculations for each species, MESA groups species into different categories for diffusion calculations <cit.>. Hydrogen and deuterium would be placed in `^1H' and carbon, nitrogen and oxygen would be placed in `^16O', and anything heavier in `^56Fe'. Apparently U is also treated as `^56Fe' for element diffusion. We turned it on. Meanwhile, we have also accounted for radiative buoyancy in our model <cit.>. Radiative levitation is a phenomenon in stellar atmospheres where the radiation pressure from intense radiation fields can push elements upwards, affecting their distribution and abundance. By introducing this extra force term into the existing models, <cit.> aim to more accurately account for the effects of radiative levitation on the dynamics and composition of stellar atmospheres. They incorporate an additional force component attributed to radiative levitation. §.§ Engulfing Planet Model The host star engulfing its planet has been investigated by many literatures <cit.>. They suggested that the planet should be dissolved and its matter should be added to the host star by a combination of ram pressure and tidal forces near the base of the convective envelope. <cit.> investigated that a solar-like star engulfed its planet, and they considered that the planet dissolve at the position where the local sound speed c_ s in the stellar envelope equals the escape velocity v_esc at the planet surface, that is: c_s^2 ≈ v_esc^2 ⟺γk_B T/μ m_u≈2 G m_p/α r_p, where m_ p and r_ p are the mass and radius of the planet, respectively. Here, parameters, α = 1 <cit.>. For an Earth-like planet (m_ p=5.9×10^21 g and r_ p=6.3×10^8 cm), log c_ s∼ 6.1 cm s^-1. The c_ s in a stellar envelope is dependent on the structure of the star. Fig. <ref> shows c_ s profiles of red giant. Obviously, based on Eq. 
<ref>, the planet is dissolved at a mass thickness of about 10^-5 M_⊙ (or a depth of about 10^-2 R_⊙) below the stellar surface. This means that planet dissolution only occurs in a zone very close to the stellar surface, which is consistent with the results of <cit.>. Therefore, in the present paper, we use an accretion model to approximate the planet dissolving in its host star. The mass-accretion rate of the accretion model can be approximately estimated via the mass-dissolving rate of the planet. <cit.> investigated the ingestion of planets into the surface layers of the star, and calculated the critical condition for dissolution and the mass-dissolving rate of the planet. Based on the drag force (F_D) and the gravitational binding energy of the planet surface (ϵ_bind , p), <cit.> suggested that the mass-dissolving rate of the planet could be given by Ṁ_p=C_H F_D v/(ϵ_bind , p+ℒ_vap) where F_D=1/2 C_Dπ r_p^2 ρ_⋆(r) v^2, ϵ_bind , p≃G m_p/r_p and ℒ_vap is the latent heat of vaporization. Here, v is the planet velocity relative to the host star. The drag coefficient is C_D = 1, and C_H=0.01 <cit.>. The density of the stellar surface layer ρ_⋆ is obtained from MESA. Since the planet is an Earth-like planet, ℒ_vap is the latent heat for Fe, that is, ℒ_vap≃ 6 kJ g^-1. According to Eq. <ref>, we estimate that Ṁ_ p=5×10^-8 M_⊙ yr^-1 when the Earth is engulfed in the stellar envelope. This indicates that the dissolving process of the Earth lasts about 10^3 yr. When the material from the Earth is dissolved into the stellar envelope, the heavy elements are mixed throughout the whole convective region. For a region where the diffusion velocity v_i of an element i is relatively constant, the mixing timescale can be estimated by τ≈l/v_i=l p/(ρ g D)=l H_p/D, where l is the width of the convective region, which is about 3.0 R_⊙∼ 50.0 R_⊙ for a red giant as shown in Fig. <ref>. H_p and D are the pressure scale height and the mixing coefficient, and they are 10^-1.2R_⊙ and 10^6.5cm^2s^-1 for a red giant, respectively. Therefore, the mixing timescale is τ≈ 10^6∼10^7 yr. Compared with the timescale of planet dissolution (∼10^3 yr), the mixing timescale is very long. This indicates that the observed U-rich giants are mainly in the mixing phase after the planet has been dissolved. At the same time, the chemical composition of the accreted material is very important. In this work, the planet is an Earth-like planet, and we focus on the formation of U-rich giants. Therefore, the chemical abundances of the main heavy elements plus Th and U are taken to be similar to those in the Earth, which are listed in Table <ref>.
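As a rough consistency check of the two timescales quoted above, the sketch below transcribes the mass-dissolving rate formula and evaluates the convective mixing timescale τ≈ l H_p/D. It is illustrative only: the local stellar density and relative velocity entering the dissolving rate are placeholders that would come from the MESA profile.

```python
import numpy as np

# cgs constants
G     = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
R_SUN = 6.957e10      # solar radius [cm]
YR    = 3.156e7       # seconds per year

def planet_dissolving_rate(rho_star, v, r_p, m_p,
                           C_D=1.0, C_H=0.01, L_vap=6.0e10):
    """Mass-dissolving rate of the planet [g s^-1].

    rho_star : local stellar density [g cm^-3] (from the MESA profile)
    v        : planet velocity relative to the star [cm s^-1]
    L_vap    : latent heat of vaporization, ~6 kJ/g = 6e10 erg/g for Fe
    """
    F_D = 0.5 * C_D * np.pi * r_p**2 * rho_star * v**2   # drag force
    eps_bind = G * m_p / r_p                             # binding energy per unit mass
    return C_H * F_D * v / (eps_bind + L_vap)

# Convective mixing timescale tau ~ l * H_p / D with the values quoted above
l   = 3.0 * R_SUN            # width of the convective region
H_p = 10**(-1.2) * R_SUN     # pressure scale height
D   = 10**6.5                # mixing coefficient [cm^2 s^-1]
tau = l * H_p / D
print(f"mixing timescale ~ {tau / YR:.1e} yr")   # ~1e7 yr, within the quoted 1e6-1e7 yr range
```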
§ FORMATION OF URANIUM-RICH GIANTS VIA ENGULFING A ROCKY PLANET The evolutionary phase of the host star may affect the planet dissolution. For a red giant, the radius becomes larger as the star evolves. The dashed lines in Fig. <ref> (a) are the iso-radius lines for r= 5.0, 10.0 and 30.0 R_⊙. Assuming that the red giant engulfs its planet at different radii (r= 5.0, 10.0 and 30.0 R_⊙, respectively), we have carried out tests of the effect of the giant radius on the formation of U-rich giants, and find that the effect is very weak. Therefore, in all simulations, we assume that the engulfment begins when the red giant has a radius of 5.0 R_⊙. §.§ Evolution and Effects of Fe Element As shown by Table <ref>, Fe is the most abundant element in the Earth-like planet. It is well known that Fe plays a significant role in stellar structure and evolution. It is therefore necessary to discuss its evolution and effects on the host star after an Earth-like planet is engulfed. Fig. <ref> shows the evolution of the Fe abundance with effective temperature for the engulfment models involving a 1.0 M⊕ planet. Obviously, after engulfing an Earth-like planet, the Fe abundance on the surface of the host star is enhanced. In particular, in the models with Z=0.00001 it increases by hundreds of times, because these host stars have a very low Fe abundance before the engulfment. As the host star evolves, the accreted Fe is gradually mixed throughout the whole envelope, so the surface Fe abundance decreases bit by bit. Compared with the models with solar metallicity, the surface Fe abundance in the models with Z=0.00001 shows a more significant decrease, owing to the very low initial Fe abundance within the stellar envelope. As discussed above, engulfing an Earth-like planet significantly changes the Fe abundance in the envelope of a VMP star. However, as shown by the blue and red lines in Fig. <ref>, which give the evolutionary tracks of a 1.0 M_⊙ star with Z=0.00001 with and without the engulfment of an Earth-like planet, it has no effect on the stellar evolution. §.§ U-rich giant formation via the engulfment of an Earth-like planet Although the U abundance in an Earth-like planet is very low (∼ 10^-6, see Table <ref>), it is still much higher than those in the known U-rich RGs. Therefore, when a host star engulfs an Earth-like planet, the U abundance on its surface may be enhanced, and it may be observed as a U-rich RG. In general, the U abundance reached during the engulfment depends on the mass of the Earth-like planet; typically, such masses are several Earth masses <cit.>. <cit.> reported the engulfment of a planetary body with a mass approximately ten times that of Jupiter by a solar-like star in the case of ZTF SLRN-2020. In order to discuss the effects of the planet mass on the formation of the U-rich RGs, we take 0.5 M⊕, 1.0 M⊕, 2.0 M⊕, 5.0 M⊕, 8.0 M⊕ and 10.0 M⊕ as the Earth-like planet's mass in the different simulations. At the same time, the structure of the host star may also affect the U abundance during the engulfment process. For a RG, the radius becomes larger as it evolves. The dashed lines in Fig. <ref> (a) are the iso-radius lines for r = 5.0, 10.0 and 30.0 R_⊙. Assuming that the red giant engulfs its planet at different radii (r = 5.0, 10.0 and 30.0 R_⊙, respectively), we have carried out tests of the effect of the giant radius on the formation of U-rich giants, and found that the effect is very weak. Fig. <ref> shows the evolution of the U abundance on the stellar surface when the planetary engulfment occurs at different stellar radii, that is, R=5.0, 10.0 and 30.0 R_⊙. Obviously, the U abundance is highest in the case of R= 5 R_⊙. The main reason is that the convective envelope at this time is the smallest. However, as the star evolves, the U abundances in the three cases become very close. At the same time, based on the observations, some U-rich giants cannot be covered if the planetary engulfment occurs too late. In other words, RGs with smaller radii (thinner convective envelopes) initially exhibit higher U enhancements, as expected, but this effect levels out as the stellar evolution proceeds. Therefore, in all simulations, we assume that the engulfment begins when the red giant has a radius of 5.0 R_⊙. Fig. <ref> shows the evolution of the U abundance on the RG surface with the effective temperature after the engulfment.
Obviously, our simulation results can cover all known U-rich RGs. The larger the Earth-like planet's mass, the higher the U abundance on the surface of the host star. The models engulfing a 5.0 M⊕ Earth-like planet can explain the U abundance of CS30306-132 with log ε (U)∼ -1.42. Several super-Earth planets have been discovered <cit.>. Therefore, a RG engulfing an Earth-like planet can become a U-rich giant. Fig. <ref> shows the evolution of the U abundance on the stellar surface after planetary engulfment. In our model, its evolution mainly depends on the mixing timescale and the mass of the convective zone. After the planetary engulfment, U from the Earth-like planet is dissolved into the stellar convective envelope. The timescale can be estimated by Eq. <ref>, and it approximately equals 10^6∼10^7 yr, which is consistent with Fig. <ref>. After the U is homogeneously mixed within the convective envelope, the U abundance remains constant. As shown in Fig. <ref>, in our simulations the U abundance in the models with Z=Z_⊙ is lower than that with Z=0.00001. The main reason is that, for a given stellar radius, the VMP RG has a smaller convective zone than the solar-metallicity one. In our model, Fig. <ref> shows the convection histories of the 1 M_⊙ initial-mass host stars with Z=0.00001 and Z=Z_⊙ after engulfment, respectively. The yellow area represents the thermohaline mixing zone. Obviously, the thermohaline mixing zone consists of two distinct regions, albeit occupying a relatively small proportion. One portion is located at the base of the upper convective zone, while the other lies within the core region. Heavy elements are transported by convection to the thermohaline mixing zone at the base of the upper convective zone. However, there is no direct connection between the two regions, making it challenging for elements to reach the core. Consequently, the U abundance change caused by thermohaline mixing may not be significant in our simulations. Observationally, all U-rich giants are VMP stars. However, based on our simulations, after engulfing an Earth-like planet both VMP and solar-metallicity stars can become U-rich giants. A possible reason for this discrepancy may be observational bias. Usually, the spectral lines of solar-metallicity stars are so numerous that it is extremely difficult to detect U lines even if these stars engulf an Earth-like planet (see Fig. <ref>; that is, the engulfment has a very weak effect on the element abundances of solar-metallicity stars). The VMP stars have much lower metal abundances, and the heavy-element abundances (especially U) from the Earth-like planets are greatly enhanced after the engulfment. Therefore, it may be possible to detect U lines. However, this result needs to be supported by further observations. For example, one could search for relics of U-rich giants engulfing a planet, or detect an abundance distribution of the heavy elements from Earth-like planets that can be explained by the engulfment model. In addition, one should note that, according to our current stellar planet-engulfment model, the U primarily comes from the Earth-like planets. The chemical compositions of the planets depend on the environment in which the host star is born. However, the formation environment of the star is determined by the previous generation of stars. The Sun should be at least a second-generation star.
Processes such as CCSNe or NSMs should have occurred in the environment that preceded the solar system, which leads to the presence of gold, U and other products of these energetic processes on Earth. However, if the previous generation of a star did not experience CCSNe or NSMs, the entire stellar system would lack actinide elements, and U-rich giants would not form through planetary engulfment. The origin of U-rich (or other heavy-element-rich) giants may therefore offer a potential avenue for studying the previous generation of stellar systems. Unfortunately, although 5,445 exoplanets have been observed <cit.>, their chemical abundances (especially of heavy elements) are hardly ever measured. Our knowledge of the heavy elements of exoplanets is extremely scarce. Therefore, we can only simulate the entire engulfment process using the composition of the Earth. Our model does not apply to exoplanets without U or with extremely low U abundances. § CONCLUSIONS We employ MESA as a computational tool to simulate the formation scenario of uranium-enriched giants. In light of the fact that planet dissolution predominantly occurs in the vicinity of the stellar surface during engulfment, we adopt an accretion model wherein a RG assimilates material from an Earth-like planet. This approximation effectively represents the scenario where an RG engulfs an Earth-like planet. Our findings reveal that such engulfment can substantially augment the abundances of heavy elements originating from Earth-like planets on the surfaces of VMP stars (Z=0.00001), while its impact on solar-metallicity stars is comparatively modest. The structural and evolutionary characteristics of both VMP and solar-metallicity stars remain largely unaffected. Notably, our engulfment model adequately accounts for the observed U abundances in known uranium-rich giants. The evolutionary trajectories of U abundances on the surfaces of RGs after engulfing Earth-like planets encompass the entire population of known uranium-rich giants. Hence, it is plausible for a U-rich giant to be formed when a RG engulfs an Earth-like planet. However, further observational evidence is crucial to substantiate this formation mechanism for uranium-rich giants. § ACKNOWLEDGMENTS This work received the generous support of the National Natural Science Foundation of China, project Nos. 12163005, U2031204 and 11863005, the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A10, the Natural Science Foundation of Xinjiang No.2021D01C075 and No.2020D01D85. § DATA AVAILABILITY R-process star data are publicly available from "JINAbase" (<https://jinabase.pythonanywhere.com/>). Exoplanet data are from <https://exoplanetarchive.ipac.caltech.edu>. Evolutionary models were computed with version 12115 of MESA. The inlists required in this study are available upon reasonable request to the corresponding author.
http://arxiv.org/abs/2307.04268v1
20230709214743
Optical Properties of Charged Defects in Monolayer MoS$_2$
[ "Martik Aghajanian", "Arash A. Mostofi", "Johannes Lischner" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
[email protected] Departments of Physics and Materials and the Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, London, SW7 2AZ, UK We present theoretical calculations of the optical spectrum of monolayer MoS_2 with a charged defect. In particular, we solve the Bethe-Salpeter equation based on an atomistic tight-binding model of the MoS_2 electronic structure which allows calculations for large supercells. The defect is modelled as a point charge whose potential is screened by the MoS_2 electrons. We find that the defect gives rise to new peaks in the optical spectrum approximately 100-200 meV below the first free exciton peak. These peaks arise from transitions involving in-gap bound states induced by the charged defect. Our findings are in good agreement with experimental measurements. Optical Properties of Charged Defects in Monolayer MoS_2 Johannes Lischner August 12, 2023 ======================================================== § INTRODUCTION Monolayer transition-metal dichalcogenides (TMDs) are two-dimensional (2D) materials which have been intensely studied in recent years because of their attractive electronic properties for applications in transport and optoelectronic devices <cit.>. Many materials in this class exhibit a direct band gap in the optical range as well as multiple band extrema <cit.> which give rise to rich valley physics <cit.>. The reduced dimensionality of 2D materials causes a weaker electronic screening of electron-electron interactions compared to 3D systems and the stronger interaction results in large binding energies of excitons, bound electron-hole pairs. In monolayer TMDs, exciton binding energies can be as large as several hundred meV <cit.>. Charged defects can have a significant impact on the electronic structure and transport properties of TMDs. In particular, doped carriers can increase the conductivity, while scattering from charged defects reduces it. The optical properties of TMDs are also influenced by the presence of charged defects. For example, Greben and coworkers <cit.> demonstrated that irradiating monolayer MoS_2 with an electron beam gives rise to an additional peak (approximately 200 meV below the first neutral exciton peak) in the photoluminescene spectrum, which they interpreted as the signature of neutral excitons that are bound to an ionized donor defect. Similarly, Shang et al. <cit.> studied the optical properties of monolayer WS_2 and MoS_2 and found that the photoluminescence could be tuned from donor-bound to acceptor-bound excitons by changing from n-doping to p-doping. To gain insight into the microscopic properties of excitons bound to charged defects, Ganchev and coworkers <cit.> solved a three-particle Schroedinger equation based on an effective mass approximation for the electronic structure of monolayer TMDs. Similarly, Wu <cit.> used the effective mass approximation to study transitions between bound defect states. However, such models do not capture the delicate effects associated with bound defect states arising from the multi-valley electronic structure of TMDs. To address this shortcoming of effective mass methods which typically only capture defect states from the K and K' valleys, we previously developed an atomistic approach to describe the electronic structure of a TMD monolayer with a charged defect <cit.>. In particular, we used the tight-binding approach to model the large supercells required to describe the long-ranged electrostatic potential of the charged defect. 
Our calculations demonstrated that the most strongly bound acceptor states derive from the Γ valley for a wide range of dielectric environments and defect charges. These predictions were verified by scanning tunneling spectroscopy experiments <cit.>. For donor impurities, we predicted that the most strongly bound in-gap states derive from the Q valleys for a range of dielectric environments and defect charges <cit.>. In this paper, we extend our atomistic modelling approach to calculate the optical properties of monolayer TMDs with charged defects. For this, we solve the Bethe-Salpeter equation using the tight-binding states as input. We calculate optical spectra of both donor and acceptor defects in MoS_2 on a SiO_2 substrate <cit.>. We find that the charged defects induce additional low-energy peaks in the optical spectrum. These arise from electronic transitions which involve bound defect states. The binding energies of these excitations, which can be interpreted as defect-bound excitons, are between 100 and 200 meV, in good agreement with experimental findings. § METHODS §.§ Bethe-Salpeter equation To study the effect of a charged adsorbate on the optical properties of monolayer MoS_2, we solve the Bethe-Salpeter equation (BSE) for an N× N MoS_2 supercell with a single adsorbate, which is modelled as a point charge that creates a screened potential acting on the electrons in the MoS_2. The BSE is given by ∑_c'v'𝐤' H^BSE_cvc'v'(𝐤,𝐤') A^M_c'v'𝐤' = E_MA^M_cv𝐤, where E_M denotes the energy of the M-th excited state and A^M_cv𝐤 is the corresponding eigenvector. Here, c and v label conduction and valence states, respectively, and 𝐤 is a crystal momentum in the first Brillouin zone. The BSE Hamiltonian is given by <cit.> H^BSE_cvc'v'(𝐤,𝐤') = δ_vv'δ_cc'δ_𝐤𝐤'(E_c𝐤-E_v𝐤) - [ D^cc'_vv'(𝐤,𝐤')-X^cc'_vv'(𝐤,𝐤') ], where E_n𝐤 denotes the energy of a quasiparticle state, with corresponding wavefunction ψ_n𝐤(𝐱) where 𝐱=(𝐫,α) comprises both a position 𝐫 and a spin variable α, and D and X are the direct and exchange integrals, respectively, given by D^cc'_vv'(𝐤,𝐤') = ∫d𝐱∫d𝐱'ψ_v𝐤(𝐱)ψ^*_c𝐤(𝐱')W(𝐫,𝐫')ψ^*_v'𝐤'(𝐱)ψ_c'𝐤'(𝐱'), X^cc'_vv'(𝐤,𝐤') = ∫d𝐱∫d𝐱'ψ_v𝐤(𝐱)ψ^*_c𝐤(𝐱)v(𝐫,𝐫')ψ^*_v'𝐤'(𝐱')ψ_c'𝐤'(𝐱'). Here, W(𝐫,𝐫') and v(𝐫,𝐫')=e^2/(ε_bg|𝐫-𝐫'|) denote the screened and bare Coulomb interaction, respectively, with ε_bg being the background dielectric constant and e the proton charge. For a MoS_2 layer placed on a substrate material with dielectric constant ε_sub, we use ε_bg=(ε_sub+1)/2. The screened interaction in real space is obtained using a Hankel transform according to W(r=|𝐫-𝐫'|) = e^2∫_0^∞dq e^-qdJ_0(qr)/[ε_bg + ε_2D(q)] , where J_0(x) is the zeroth-order Bessel function of the first kind and ε_2D(q) is the 2D dielectric function of MoS_2, which is calculated from first-principles DFT with the random-phase approximation <cit.>. In the above, the parameter d regularizes the divergence of the screened interaction when the electron and the hole reside on the same atom. We have found that d=1.2 Å reproduces the experimentally measured binding energy of the lowest exciton <cit.>. To efficiently calculate the quasiparticle energies and wavefunctions of MoS_2 with a charged adsorbate, we use the tight-binding (TB) approach.
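A minimal numerical sketch of the Hankel-transform expression for W(r) above is given below. It is illustrative only: the analytic Keldysh-type form used for ε_2D(q) is a placeholder for the RPA dielectric function computed from DFT, and only the structure of the calculation is meant to carry over.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

E2     = 14.3996               # e^2 in eV*Angstrom (Gaussian units)
EPS_BG = (3.8 + 1.0) / 2.0     # SiO2 substrate below, vacuum above
D_REG  = 1.2                   # regularization length d in Angstrom

def eps_2d(q, r0=41.5):
    """Placeholder 2D dielectric response: a Keldysh-like form eps_2D(q) ~ r0*q.
    The actual calculation uses the RPA/DFT result instead; r0 is an assumed value."""
    return r0 * q

def W(r):
    """Screened electron-hole interaction W(r) in eV (r in Angstrom)."""
    integrand = lambda q: np.exp(-q * D_REG) * j0(q * r) / (EPS_BG + eps_2d(q))
    val, _ = quad(integrand, 0.0, np.inf, limit=400)
    return E2 * val

print(W(5.0), W(20.0))   # screened interaction at r = 5 and 20 Angstrom
```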
Following Liu and coworkers <cit.>, we express the wavefunctions as a linear combination of Mo 4d_z^2, 4d_xy and 4d_x^2-y^2 orbitals according to ψ_n𝐤(𝐱) = 1/√(N_k)∑_ljc_n𝐤lj∑_𝐑e^i𝐤·(𝐑 + τ_j)ϕ_l(𝐫-𝐑-τ_j,α) , where ϕ_l denotes an atomic basis function, 𝐑 is a lattice vector and τ_j denotes the position of the j-th atom relative to the origin of the supercell. Also, N_k denotes the number of k-points used to sample the first Brillouin zone of the supercell and c_n𝐤lj are complex coefficients obtained by diagonalizing the TB Hamiltonian. The TB Hamiltonian of MoS_2 with a charged adsorbate is constructed by starting from the Hamiltonian of pristine MoS_2 of Liu and coworkers, which has been fitted to reproduce the ab initio DFT band structure <cit.>, and create an 18× 18 supercell. Next, the screened potential induced by the charged adsorbate is added as an onsite potential. We assume that the defect has a charge of Ze (with Z=±1) and is located at x=y=0 at a distance D above a Mo atom. The corresponding screened potential is then given by ZW(r) with d in Eq. (<ref>) replaced by D. Inserting the TB ansatz for the quasiparticle wavefunctions into the exchange and direct integrals and exploiting the localization of the atomic basis functions yields D^cc'_vv'(𝐤,𝐤') = ∑_ij(T^(i)_c𝐤,c'𝐤')^* W_ij(𝐤-𝐤')T^(j)_v𝐤,v'𝐤' , X^cc'_vv'(𝐤,𝐤') = ∑_ij(T^(i)_c𝐤,v𝐤)^* v_ij(0)T^(j)_c'𝐤',v'𝐤', where we define T^(j)_n𝐤,n'𝐤' =∑_l c_n𝐤ljc^*_n'𝐤'lj, W_ij(𝐤)=∑_𝐑exp(-i𝐤·𝐑)W(𝐑+τ_i - τ_j) and v_ij(𝐤)=∑_𝐑exp(-i𝐤·𝐑)v(𝐑+τ_i - τ_j). We have found that the effect of exchange interactions on the absorption spectrum is small (see Fig. 4 in the Appendix) and have therefore neglected it in our calculations. As the size of the BSE Hamiltonian increases rapidly with the supercell size, we only include those conduction states at each k-point which fulfill E_c𝐤≤ E_v𝐤^max+E_cut with E_cut being a cutoff parameter and E_v𝐤^max being the highest valence band energy at 𝐤. Similarly, we only include valence states with E_v𝐤≥ E_c𝐤^min-E_cut with E_c𝐤^min denoting the lowest conduction band energy at 𝐤. We then increase E_cut until the energies of the lowest excitons are converged. The resulting cutoffs are shown in Table <ref>. In our calculations, we use Γ point sampling (N_k=1) of the first Brillouin zone associated with the supercell. The real part of the optical conductivity is obtained from the eigenvectors and eigenvalues of the BSE according to <cit.> Re σ_xx(ω) =e^2/ħ m^2_0A∑_M |∑_𝐤,c,v A^M_cv𝐤𝐱̂·𝐩_cv𝐤|^2/E_Mδ(ħω-E_M), where A=N_k A_SC (with A_SC being the area of the supercell), m_0 denotes the bare electron mass and 𝐞=𝐱̂ is the polarization direction of the electric field of the electromagnetic wave with frequency ω. The delta function is approximated by a normalized Lorentzian function with a full width at half maximum of 0.04 eV. The momentum matrix elements are given by <cit.> 𝐩_cv𝐤 = m_0/ħ∑_limjc^*_c𝐤lic_v𝐤mj∇_𝐤H^TB_limj(𝐤), where H^TB_limj(𝐤) denotes the tight-binding Hamiltonian in the atomic orbital basis, see Appendix for details. As we do not include a GW correction to our quasiparticle energies, it is necessary to shift the calculated optical spectrum such that the peak associated with the A exciton agrees with the experimental value of 1.93 eV <cit.>. We have used this to align the spectrum both with and without the defect. § RESULTS §.§ Quasiparticle states To understand the effect of charged adsorbates on the optical properties of MoS_2, we first discuss the quasiparticle states of this system. 
This discussion follows closely our previous work <cit.>. Charged defects induce localized bound states in the band gap of the MoS_2. Fig. <ref>(a-f) shows the real-space wavefunctions of the most strongly bound acceptor states induced by a negatively charged adsorbate. We have used ε_sub=3.8 corresponding to a SiO_2 substrate. The most strongly bound defect state is highly localized and has 1s symmetry (Fig. <ref>(a)). It is composed of states from the Γ valley of the MoS_2 band structure, even though the valence band maximum of the pristine material is located at the K/K' points, as shown in Fig. <ref> in the Appendix. The Γ valley has a large effective mass which gives rise to a highly localized state with a large binding energy. The next state also has 1s symmetry (Fig. <ref>(b), but is less localized. It is composed of states from the MoS_2 K and K' valleys which have a smaller effective mass than the Γ valley. The state shown in Fig. <ref>(c) has a similar shape as the state in (b). Indeed, this state originates from the lower of the spin-split valence bands at K and K'. The other states in Fig. <ref> correspond to 2s and 2p states derived from the Γ valley. The wavefunctions of a donor defect are shown in Fig. <ref>. Again, we find that the three most strongly bound defect states are of 1s symmetry and highly localized. The most strongly bound donor states is composed of monolayer states from the Q valleys, as shown in Fig. <ref> of the appendix. Since the conduction band spin-orbit splitting is very small, the defect states from different Q valleys of the Brillouin zone can hybridize and form different linear combinations whose energy splitting is determined by the Fourier component of the defect potential whose wave vector connects the different Q valleys. In contrast, the state in Fig. <ref>(b) is a linear combination of 1s donor states from the K and K' valleys. The less strongly bound defect states, again, correspond to higher energy hydrogenic orbitals (and their linear combinations) from the Q valleys and the K/K' valleys. §.§ Optical properties The optical conductivity of monolayer MoS_2 on a SiO_2 substrate in the presence of an acceptor defect (Z=-1) is shown in Fig. <ref>(a) and compared to result for the pristine defect-free material. Without defects, the conductivity is characterized by two large peaks at approximately 1.93 eV and 2.05 eV, corresponding to the well-known A and B excitons from the K and K' valleys. The energy difference between the two peaks reflects the spin splitting of the highest valence bands in these valleys from spin-orbit coupling. In the presence of the defect, the A and B exciton peaks are still present in the optical spectrum, but with significantly reduced intensities. Also, the A peak is now split into two overlapping peaks. In addition, a new smaller peak arises at ∼ 1.8 eV, i.e., at an energy approximately 130 meV lower than the A exciton peak. To understand these findings, we also plot the squared magnitudes of the projections of the BSE eigenvectors A^M_cv(𝐤=0) onto the quasiparticle states, see Fig. <ref>(a). This reveals that the new low-energy peak originates from several excitons which are predominantly composed of transitions from the most strongly bound defect state (of 1s character composed of Γ valley states) to conduction band states. Transitions from the second most strongly bound defect state (of 1s character composed of K/K' valley states) make a smaller contribution to the peak. 
Interestingly, the A and B peaks also contain transitions involving low-lying defect states. For MoS_2 on SiO_2 with a donor impurity, see Fig. <ref>(b), the optical conductivity exhibits more peaks than for an acceptor impurity. In particular, both the A and the B peak of the pristine spectrum break into several smaller peaks. In contrast to the case of the acceptor impurity, we now observe two low-energy peaks: one at approximately 1.75 eV and another one at approximately 1.83 eV. The lowest peak is dominated by transitions from the valence band maximum to the two most strongly bound defects states. The peak at 1.83 eV involves transitions from the valence band maximum to the three most strongly bound defect states. The calculated energies of the low-energy peaks are similar to those reported in the experimental work of Greben and coworkers who also study MoS_2 on an SiO_2 substrate <cit.> find the first neutral exciton peak at 1.96 eV and the defect-bound exciton peak at 1.77 eV. § CONCLUSIONS We have calculated the optical absorption spectrum of monolayer MoS_2 in the presence of a charged defect by solving the Bethe-Salpeter equation. We find that the presence of the defect gives rise to additional peaks in the spectrum approximately 100 - 200 meV below the A exciton peak in good agreement with experimental observations. These peaks arise from transitions involving bound defect states. § ACKNOWLEDGEMENTS This work was supported through a studentship in the Centre for Doctoral Training on Theory and Simulation of Materials at Imperial College London funded by the EPSRC (EP/L015579/1). We acknowledge the Thomas Young Centre under Grant No. TYC-101. §.§ Appendix Exchange interactions: It is well known that the exchange term of the BSE kernel does not strongly influence the absorption spectrum of the pristine monolayer <cit.>. To test whether this is still the case in the presence of a charged defect, we have calculated the optical conductivity with and without the exchange term for a 12 × 12 supercell containing a single acceptor defect, see Projections: Fig. <ref>. It is clear that also in the presence of the charged defect, exchange interactions influence the optical spectrum only weakly. Defect state projections: Figures <ref> and <ref> show the projections of acceptor and donor defect states onto the states of the defect-free system, respectively. Optical matrix elements: Using the tight-binding basis convention ψ_n𝐤(𝐱) = 1/√(N_k)∑_lic̃_n𝐤lj∑_𝐑e^i𝐤·𝐑ϕ_li^𝐑(𝐫-𝐑-τ_j,α), Pedersen et al. <cit.> write the momentum matrix element as 𝐩_cv𝐤 = m_0/ħ∑_limjc̃^*_c𝐤lic̃_v𝐤mj∇_𝐤H̃_limj(𝐤) + im_0/ħ(E_c𝐤 - E_v𝐤)∑_limjc̃^*_c𝐤lic̃_v𝐤mj𝐝_limj, where 𝐝_limj=δ_lmδ_ijτ_i denotes the intra-atomic contribution to the matrix element. In this work, we use a different tight-binding basis convention that includes an additional phase factors exp(i 𝐤·τ_j), see Eq. (<ref>). The Hamiltonians and the eigenvectors of the two different conventions are related through <cit.> c̃_n𝐤li = c_n𝐤lie^i𝐤·τ_i H̃_limj(𝐤) = H_limj(𝐤) e^i𝐤·(τ_i-τ_j). Applying this transformation to the expression of the momentum matrix element, we find that 𝐩_cv𝐤 = m_0/ħ∑_limjc^*_c𝐤lic_v𝐤mj[∇_𝐤H_limj(𝐤) +i(τ_i-τ_j)H_limj(𝐤)] + im_0/ħ(E_c𝐤 - E_v𝐤)∑_lic^*_c𝐤lic_v𝐤liτ_i = m_0/ħ∑_limjc^*_c𝐤lic_v𝐤mj∇_𝐤H_limj(𝐤) +im_0/ħ∑_lic^*_c𝐤liτ_i(∑_mjH_limj(𝐤)c_v𝐤mj) - im_0/ħ∑_mjc_v𝐤mjτ_j(∑_li c^*_c𝐤liH_limj(𝐤)) + im_0/ħ(E_c𝐤 - E_v𝐤)∑_lic^*_c𝐤lic_v𝐤liτ_i = m_0/ħ∑_limjc^*_c𝐤lic_v𝐤mj∇_𝐤H_limj(𝐤), i.e. the terms involving τ_j cancel out.
http://arxiv.org/abs/2307.05796v1
20230711204206
Improved POS tagging for spontaneous, clinical speech using data augmentation
[ "Seth Kulick", "Neville Ryant", "David J. Irwin", "Naomi Nevler", "Sunghye Cho" ]
cs.CL
[ "cs.CL" ]
This paper addresses the problem of improving tagging of transcripts of speech from clinical populations. In contrast to prior work on parsing and tagging of transcribed speech, we do not make use of an in-domain treebank for training. Instead, we train on an out-of-domain treebank of newswire using data augmentation techniques to make these structures resemble natural, spontaneous speech. We trained a parser with and without the augmented data and tested its performance using manually validated tags in clinical speech produced by patients with various types of neurodegenerative conditions. § ACKNOWLEDGMENTS This work was supported by the Department of Defense (W81XWH-20-1-0531), the National Institutes of Health (AG073510-01, P01-AG066597), and the Alzheimer's Association (AARF-21-851126). We would also like to acknowledge the contributions and support of the late Murray Grossman.
http://arxiv.org/abs/2307.04894v1
20230710203340
Noise in the direction of motion determines the spatial distribution and proliferation of migrating cell collectives
[ "Jonathan E. Dawson", "Abdul N. Malmi-Kakkada" ]
physics.bio-ph
[ "physics.bio-ph", "cond-mat.soft", "physics.comp-ph" ]
APS/123-QED [Corresponding author:][email protected] Department of Physics and Biophysics, Augusta University, Augusta, GA 30912, USA A variety of living and non-living systems exhibit collective motion. From swarm robotics to bacterial swarms, and tissue wound healing to human crowds, examples of collective motion are highly diverse but all of them share the common necessary ingredient of moving and interacting agents. While collective motion has been extensively studied in non-proliferating systems, how the proliferation of constituent agents affects their collective behavior is not well understood. Here, we focus on growing active agents as a model for cells and study how the interplay between noise in their direction of movement and proliferation determines the overall spatial pattern of collective motion. In this agent-based model, motile cells possess the ability to adhere to each other through cell-cell adhesion, grow in size and divide. Cell-cell interactions influence not only the direction of cell movement but also cell growth through a force-dependent mechanical feedback process. We show that noise in the direction of a cell's motion has striking effects on the emergent spatial distribution of cell collectives and proliferation. While higher noise strength leads to a random spatial distribution of cells, we also observe increased cell proliferation. On the other hand, low noise strength leads to a ring-like spatial distribution of cell collectives together with lower proliferation. Our findings provide insight into how noise in the direction of cell motion determines the local spatial organization of cells with consequent mechanical feedback on cell division impacting cell proliferation due to the formation of cell clusters. Noise in the direction of motion determines the spatial distribution and proliferation of migrating cell collectives Abdul N. Malmi-Kakkada August 12, 2023 ==================================================================================================================== § INTRODUCTION The importance of the coordination between cell division and cell migration is recognized in multiple physiological processes, such as tissue regeneration, inflammation, as well as in pathological conditions, such as cancer metastasis <cit.>. Because cell migratory and proliferation patterns determine how cells organize spatially over time, understanding the underlying biophysical mechanisms is crucial for our ability to direct spatial organization of cells in a customizable manner. This has important implications for understanding tissue regeneration and cancer invasion <cit.>. With the emergence of multiplexed tissue imaging modalities that allow for quantification of cell proliferation at single-cell resolution  <cit.>, it is now possible to determine how cell-cell interactions influence cell proliferation <cit.> from spatial map of single cells, together with higher-order relationships in space. In a cell collective, spatial constraints due to crowding limits the space available to a cell due to the presence of neighboring cells and thus impose constraints on cell proliferation <cit.>. Similarly, collective cell migration, a foundational collective behavior in living systems, involves both the interaction of a cell with its environment as well as its neighbors <cit.>. Fluctuations in the direction of a cell's motion affects the spatial coordination of cells in a tissue <cit.>. 
Despite the importance of cell-cell interactions, the relation between cell migration driven spatial organization and how it impacts cell proliferation due to physical constraints remains unclear. Given that cells are active particles that transduce stored energy into mechanical motion, an interesting question that arises is how the coordination between cell migration and proliferation influences the spatial organization of cell collectives. While cell growth, cell division, and cell migration are highly complex processes, involving a large network of intracellular signaling pathways <cit.>, here we focus on the biophysical intercellular interactions that are known to play a key role in cell collective migration and proliferation <cit.>. Mathematical and computational models of cell behaviors have contributed to a quantitative understanding of collective cell migratory behaviors and its underlying mechanisms <cit.>. Pioneering work by Vicsek and co-workers showed that the collective dynamics of self-driven, or active particles emerge from a form of inter-particle coupling: a simple rule that an individual constituents' direction of motion is aligned with the average direction of motion of its neighbors  <cit.>. Both the number density of agents and noise in the direction of their movement are key parameters that regulate spatial patterns of collective motion. Distinct from earlier studies, we focus on studying the coupling between noise in the directionality of cell migration and cell division. The effect of cell division and cell death on collective cell movement has been studied in mean-field dynamical theoretical models <cit.> with recent experiments showing that cell growth and division can influence cell migratory behavior <cit.>. Our recent work in the context of freely expanding three-dimensional (3D) cell collectives <cit.> showed that the inter-cellular forces give rise to heterogenous cell motility patterns between the boundary and the interior of the cell collective. In addition to cell-cell mechanical interactions, we anticipate that the noise in the cell movement direction may generate complex spatial distribution patterns with novel implications on how cells divide. To elucidate the role of noise on self-organization and proliferation in a migrating cell collective, we study a system of self-propelled particles with the capacity to proliferate, and whose motion is governed by local alignment rules. Each cell can grow in size and divide upon reaching a critical size. Cells in direct contact through cell-cell adhesion exert a force, which when exceeds a threshold inhibits cell growth and prevents cell division. Such mechanical feedback on cell proliferation is in agreement with recently reported experimental observations  <cit.>. Cell division events in this model scramble the velocity orientation of dividing cells. By combining mechanical and alignment cell-cell interactions with cell division events, our model is highly relevant to biological systems, such as cells, which possess an inherent capability to proliferate and migrate. Our work provides insight into the fundamental features of expanding active matter. Notably, we discover that noise in the direction of a cell's motion not only influences the spatial structure of cell collectives but also determines the ability of cells to proliferate. § MODEL DESCRIPTION AND SIMULATION DETAILS Here we introduce the computational model we implemented to study the growth and migration of cell collectives in two-dimensions (2D). 
The off-lattice agent-based model and the simulation scheme is adapted from our previous work on three-dimensional tumor growth <cit.>. Such off-lattice simulations are widely used to recapitulate experimentally observed features of individual cell dynamics within cell collectives <cit.>. Individual cells are modeled as soft disk-like motile particles of radius R, Fig.<ref>A, which grow stochastically in time t, and, upon reaching a critical size, undergo division into two daughter cells. In addition to its radius, R_i(t), the state of each cell i is characterized by its position 𝐫_i(t) and direction of motion θ_i(t), Fig.<ref>A (Inset). The dynamics of the proliferating and migrating cell collective is governed by the following three factors - (a) mechanical forces arising from two body interactions, (b) active processes due to cell growth, division, and death, and (c) active self-propulsion with directional noise together with neighbor interactions that align the direction of cell motion with its neighbors. The model implementation of these factors is explained in detail below. (a) Mechanical cell-cell interactions: Individual cells interact with short-ranged forces, consisting of two terms: elastic force (repulsion) and adhesion (attraction). The elastic force, F_ij^el, between any pair of cells i and j of radii R_i and R_j discourages spatial overlap between cells (Fig.<ref>B) and is given by <cit.>, F_ij^el=h_ij^3/2/3/4(1-ν_i^2/E_i+1-ν_j^2/E_j)√(1/R_i(t)+1/R_j(t)), where ν_i and E_i are the Poisson ratio and elastic modulus of the i^th particle. h_ij defined as max[0,R_i + R_j - |r⃗_i - r⃗_j|] is the virtual overlap distance between the two cells <cit.>. Biological cells adhere to their immediate physical neighbors through cell adhesion molecules, Fig.<ref>(C). The adhesive force, F_ij^ad, between a pair of interacting cells depends on the contact length between two cells, l_ij (see Supplemental Information SI-I for the analytical calculation of l_ij), and is given by <cit.>, F_ij^ad=f^adl_ij1/2(c_i^recc_j^lig + c_j^recc_i^lig) where, and c_i^rec (c_i^lig) is the receptor (ligand) concentration (assumed to be normalized with respect to the maximum receptor or ligand concentration so that 0 ≤ c_i^rec, c_i^lig≤ 1). The coupling constant f^ad allows us to rescale the adhesion force to account for the variabilities in the maximum densities of the receptor and ligand concentrations. Both the elastic and the adhesive forces act along the unit vector n_ij, pointing from the center of cell j to the center of cell i. The net force (F_i) on the i^th cell is the vectorial sum of the elastic and adhesive forces that the neighboring cells exert on it, F_i=∑_j∈ NN(i) f_ij=∑_j∈ NN(i)(F_ij^el-F_ij^ad)n_ij here, j is summed over the number of nearest neighbors NN(i) of cell i. The nearest neighbors of cell i are all the cells that satisfy the criterion h_ij>0. The net force due to finite area exclusion (elastic term) and cell-cell adhesion is dampened by an effective friction contribution which comes from (i) the interaction of a cell with the extracellular matrix (ECM), and (ii) cell-cell adhesion. The friction that a cell i experiences is a time (t) dependent quantity given by, γ_i(t) = γ_i^ECM(t) + γ_i^ad(t) . The cell-ECM friction coefficient is assumed to be given by the modified Stokes relation, γ_i^ECM(t)= μ R_i(t), where, μ is the viscosity due to the ECM. 
We consider additional damping of cell movement due to adhesive forces given by, γ_i^ad= ζ^max∑_j ∈ NN(i)(l_ij/2(1+𝐅_i·𝐧_ij/|𝐅_i|)× 1/2(c_i^recc_j^lig + c_j^recc_i^lig)) where, ζ^max is the adhesive friction coefficient and 𝐅_i is as defined in Eq.(<ref>). Note that the added friction coefficient γ_i^ad is proportional to the cell-cell contact length l_ij, implying that the damping of cell movement due to this friction term is proportional to the number of cells that cell i is in contact with at time t. (b) Cell proliferation: In our model, the cell number grows due to the imbalance between cell division and apoptosis. At any point in time, cells are either in the growth (G) phase, i.e, the phase in which the cell area increases over time, or, in the dormant (D) phase, i.e., the phase in which cell area growth is arrested, Fig.<ref>D. Whether a cell continues in the growth phase or enters the dormant phase is determined by the total force per unit length, due to the neighboring cells, acting on a cell at any given time point. The total external force per unit length, p_i, that a cell experiences is calculated using, p_i(t)=∑_j∈ NN(i)| f_ij· n_ij|/l_ij. If p_i(t) on a cell i at any given time t is smaller than a threshold value, p_c, the cell grows in size, Fig.<ref>E-i. However, if p_i(t) > p_c, the cell enters dormancy, Fig.<ref>D. Hence, depending on the ratio of p_i (t)/p_c, cells can switch between the two states of dormancy and area growth. A cell grows in size by increasing its radius in a stochastic manner sampled from a Gaussian distribution with the mean rate dR_i/dt= (2π R_i)^-1g_a, where g_a is the cell area growth rate given by, g_a= π R_m^2/2τ. Here, τ is the cell cycle time and R_m is the mitotic radius at which a cell divides (see Table I). We assume that a cell divides into two daughter cells upon reaching R_m=5μm, giving rise to two identical daughter cells, each with radii R_d=R_m/√(2), ensuring area conservation, Fig.<ref>E-ii. Hence, a key time scale in the simulation is τ - the average time it takes for a cell to divide, set to be ∼ 0.27hours. This is much faster than the typical cell cycle times of eukaryotic cells but comparable to cell cycle times of bacteria <cit.>. As daughter cells are assigned completely random active velocity orientations, cell division events tend to scramble the orientational order of the cells. Death of a cell takes place in the simulation leading to a randomly selected cell being removed from the collective, Fig.<ref>F. The death rate is set to k_d=10^-20 s^-1. Owing to k_d << 1/τ, we are simulating a rapidly growing system of cells. (c) Neighbor velocity alignment and fluctuation in the direction of motion: The cell position, 𝐫_i(t), is described through the coordinates (x_i(t), y_i(t)). Cell self-propulsion velocity is, v_i(t)=v_0s_i(t), where, v_0 is the cell migration speed, and s_i(t)=(cosθ_i(t), sinθ_i(t)) is the unit vector representing the direction of cell migration. The angle that the cell makes with the horizontal axis in the laboratory frame is θ. Each cell in this model is endowed with motility that propels the cell in a given direction with a fixed speed v_0, Fig.<ref>G-i. The directional alignment, and thus the overall direction of a cell's motion, is hampered by an angular white noise uniformly distributed in range ξ_i ∈ [-π/2, +π/2] with ⟨ξ_i^t⟩=0 and ⟨ξ_i^t ξ_j^t'⟩∼δ_ijδ_tt' and whose strength is given by η, Fig.<ref>G-ii. 
As the effective noise is given by ηξ_i, η=0.2 means random fluctuations occur in the entire range [-π/10, +π/10], Fig.<ref>G-iii, whereas, η=0.01 results in random fluctuations in the range [-π/200, +π/200], Fig.<ref>G-iii. The noise term represents fluctuations in the direction of a cell's motion. In biological systems, such as cells, there are many sources of such noise in the direction or orientation of cell movement. Stochasticity intrinsic to cellular movement, such as due to limitations in cellular sensing or active shape remodeling during cell migration <cit.> are some examples. In addition to the forces due to nearest neighbor mechanical interactions, as described in (a), each cell interacts with its neighbors in a manner that aligns its own velocity with that of its neighbors, Fig.<ref>G-iv. The nearest neighbors which contribute to the velocity re-alignment of cell i are all those cells in the collective that satisfy the necessary condition |r_i(t)-r_j(t)|< R_a, where, | ...| is the vector magnitude, Fig.<ref>G. We set R_a=10 μ m which limits velocity re-alignment to occur with neighbors that are directly in contact with a given cell. We then obtain the average orientation of the velocities of all the cells that satisfy the nearest neighbor criteria and assign that to the velocity orientation of cell i. The cell velocity re-alignment with its neighbors influences its direction of motility, such that cells in a cluster tend to move in the same direction, Fig.<ref>G-iv. Contact-based modulation of cell velocity is known to play a role in the collective migration of electrically stimulated cells <cit.>. The complex dynamics of each cell in the collective involves active motility, area growth, division, and death. In the low Reynolds number limit, the equation of motion is fully described by the following update rules: r^x_i(t+Δ t) = r^x_i(t)+v_0 cos(θ_i(t))Δ t+ F^x_i(t)/γ_i(t)Δ t r^y_i(t+Δ t) = r^y_i(t)+v_0 sin(θ_i(t))Δ t+F^y_i(t)/γ_i(t)Δ t θ_i(t+Δ t) = arg[∑_j ∈ | r_i(t)- r_j(t)|< R_a s_j(t)+ ∑_j∈ NN(i) f_ij] + ηξ_i(t) . Eq.(<ref>-<ref>) describes the evolution of the x and y coordinates of a cell i, governed by an active component that propels the cell with a speed v_0 in the direction θ_i(t) at time t and the net force on the cell due to its contacting neighbors. We assume that the cell exerts a self-propulsion force which propels it with a constant effective active speed v_o. We note that an effective friction term is incorporated into the value of v_o. Eq.(<ref>) describes the orientation dynamics of a cell i, where θ_i(t+Δ t) is the direction in which the cell moves in the next time step. The net contribution to the direction of a cell's motility comes from, (i) orientation re-alignment, the first term on the right-hand side of Eq.(<ref>) and, (ii) the interaction forces (discussed in (a)), second term on the right-hand side of Eq.(<ref>). As discussed in (c), the orientation re-alignment of a cell i's velocity is only due to nearest neighbor cells whose center lies within a distance of R_a (here 10 μ m) from the ith cell. arg[ c] in the first term in Eq.(<ref>) refers to the angle associated with the vector c, if this is expressed in polar coordinates, and the sum is taken over all cells j within a distance of R_a of cell i (including cell i itself). 
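The overdamped update rules above translate almost line by line into code. A minimal sketch of one time step for a single cell follows (array layout and names are ours, not the exact implementation):

```python
import numpy as np

rng = np.random.default_rng()

def step_cell(i, pos, theta, F, gamma, neighbors_in_Ra, v0, eta, dt):
    """Advance cell i by one time step.

    pos: (N,2) positions; theta: (N,) motility directions; F: (N,2) net passive
    forces F_i; gamma: (N,) friction coefficients; neighbors_in_Ra: indices j
    with |r_i - r_j| < R_a (including i itself).
    """
    # Position update: self-propulsion plus force divided by friction
    s_i = np.array([np.cos(theta[i]), np.sin(theta[i])])
    pos[i] = pos[i] + v0 * s_i * dt + (F[i] / gamma[i]) * dt

    # Orientation update: align with neighbor headings and the net force,
    # then add angular noise of strength eta
    headings = np.column_stack((np.cos(theta[neighbors_in_Ra]),
                                np.sin(theta[neighbors_in_Ra])))
    net = headings.sum(axis=0) + F[i]
    xi = rng.uniform(-np.pi / 2.0, np.pi / 2.0)     # uniform angular white noise
    theta[i] = np.arctan2(net[1], net[0]) + eta * xi
```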
The net direction in which cell i moves is given by the angle associated with the net vector, which is obtained by vector addition of the velocity vectors of all neighboring cells which lie within the interaction radius R_a of cell i, and the net force F_i on the ith cell. Initial Conditions: We initiated the simulations by generating 200 non-overlapping cells, randomly distributed in a circular region within a 2D spatial domain of size 250 μ m × 250 μ m. For all future time steps, we consider an open boundary condition. Each cell is assigned an initial orientation of the active velocity, randomly distributed in the domain [0,2π]. Fluctuations around the direction of a cell's motion is captured by a noise term, which is randomly distributed with uniform probability in the range [-π/2, π/2]. The strength of the fluctuations is denoted by η (discussed in the previous section (c)). In the present study, all the parameters are fixed except the noise strength of velocity orientation switching η, which we vary from 0.01 to 0.2. The simulated cell aggregate is evolved to ∼ 10τ or about 10,000s. Relevant parameters are shown in Table I. A fixed timestep of 5 s was used. We performed a numerical consistency check by ensuring our results are invariant for a smaller timestep of 2.5s (see Supplemental Information SI-II). The particle coordinates were recorded and used to calculate the dynamical observables relevant to the present study. § NOISE IN THE CELL MOTILITY DIRECTION CONTROLS THE SPATIAL DISTRIBUTION OF THE CELL COLLECTIVE We first sought to understand how noise in the cell motility direction determines the spatial distribution of a growing cell collective. The cell spatial distribution that we obtain at t= 10,000 s shows a strong dependence on the noise strength η, Fig. <ref>A, C. For low noise strengths (η=0.01) cells are organized into multiple clusters that are spatially distributed in a roughly circular, ring-like pattern Fig.<ref>A (see Supplemental Information SI-III for simulation movies). The cells cluster into small groups mostly along the edge of the ring-like domain. The domain interior is mostly devoid of cells, Fig.<ref>A. By focusing on a single cluster (blue box in Fig.<ref>A), we observe that the constituent cells display highly coordinated motion, wherein each cell moves in roughly the same direction pointing radially outward, as seen from the blue arrows in Fig.<ref>A(inset), B. At higher noise strength of η=0.2 the cell spatial distribution changes from the ring-like structure to a diffuse morphology, characterized by randomized spatial distribution of cells, Fig.<ref>C (see Supplemental Information SI-III for simulation movie). The cells organize into a large number of clusters of varying sizes scattered throughout the entire spatial domain occupied by the cells, Fig.<ref>C. Individual cells within each cluster appear to move in a less coordinated manner, as compared to the case of low noise strength, Fig.<ref>C,D. To better visualize the differences in the cell spatial distribution and the cluster sizes at varying η, we represented the cell positional information using a density plot. The entire spatial domain, in both x and y direction, is divided into 50× 50 bins of equal area. The total number of cells within each bin is color-coded, with dark blue representing low number of cells and dark red representing the highest number of cells. 
To generate the cell number density heat map, we combined 3 separate simulation results for each value of the noise strength, η=0.01, 0.05, 0.2, Fig.<ref>E-G. The density plots show clearly the strong influence of the noise strength on the cell spatial distribution. For low noise strength, η=0.01, the whole collective is spatially organized into a thin circular ring-like structure, with patches of high cell density visible at the border. The interior of the domain is characterized by low cell number density, Fig. <ref>E. Cells organize themselves into coherently moving clusters with some of the larger clusters containing about 40-50 cells as seen in Fig. <ref>E. At higher noise strengths of η=0.05 and 0.2, high cell density patches shift from being confined to the border of the ring-like pattern to its interior. The number of cells within the high cell density patches decreases in a noise strength dependent manner. While 40-50 cells make up the high-density patches for η=0.01, ∼ 30 cells are visible for η=0.2. The cell spatial distribution we observe is not a transient feature of the model. Long-time simulations (upto t=25,000 s), for η=0.01 and η=0.2 (see Supplemental Information SI-IV) confirm that the cell spatial distribution is preserved even after very long times. We, therefore, conclude that the noise-dependent pattern of cell collective behavior is a robust feature of expanding cell collectives. The velocity vector alignment of individual cells within a cluster, seen in Fig.<ref>B and D, are indicative of collective behavior seen in non-proliferating self-propelled particles <cit.>. To better understand the collective motion of individual cells, we measured the order in the motion of the entire cell collective (Fig.<ref>H). We calculate the order parameter on the basis of position-dependent polarization of the cell velocity by defining a vector pointing from the center of mass of the cell collective to the individual cell position c_i = r_i - R_CM, where R_CM(t)=(1/N)∑_i r_i is the center of mass of the whole collective at time t. c_i is directed outwards from the center of mass of the entire cell collective to the cell's position. The angle ϕ_i between a cell's velocity vector, v_i, and its position vector with respect to the center of mass of the cell collective, c_i, can be calculated from cos(ϕ)_i = c_i · v_i/(| c_i|| v_i|) (see Fig.<ref>I Inset). The orientation order parameter for the whole cell collective at any given time t is defined as, Φ(t) = 1/N(t)∑_i cos(ϕ(t))_i where, N is the total number of cells at time t. Φ can vary between 1 and 0 with Φ=1 implying that the velocity orientation v_i of each cell in the whole cell collective is aligned with respect to the position vector c_i. The time-dependent behavior of Φ(t) shows an initial almost linear increase over time which then saturates at a constant value at later times, Fig.<ref>H. For very low noise strength of η=0.01, the order parameter saturates at ∼1, indicating a highly ordered outward cell motion. This is consistent with our observation of highly coherent and ordered cell movement such that cell velocity orientation s_i is aligned with the vector pointing outward towards the periphery of the cell collective, c_i. With increasing noise strength, the value of the order parameter progressively gets lower, indicating an increasingly disordered velocity direction. The orientational order parameter at the final time point is shown in Fig.<ref>I, clearly decreasing with higher noise strengths. 
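For reference, the orientational order parameter defined above reduces to a few lines of code; a sketch with our variable names, operating on a single snapshot of positions and velocities:

```python
import numpy as np

def orientation_order_parameter(pos, vel):
    """Phi(t): mean of cos(phi_i), the cosine between each cell's velocity and
    its position vector relative to the collective's center of mass."""
    c = pos - pos.mean(axis=0)                                   # c_i = r_i - R_CM
    cos_phi = np.einsum('ij,ij->i', c, vel) / (
        np.linalg.norm(c, axis=1) * np.linalg.norm(vel, axis=1))
    return cos_phi.mean()
```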
Our result, showing the dependence of the order parameter on the noise strength, also delineates why we obtain markedly distinct spatial distribution of cell collectives. While cells move consistently outwards at low noise strengths leading to the emergence of a ring-like pattern, higher noise strengths result in randomized cell movement orientations that lead to a more diffuse spatial distribution of cells. In general, our results map out the emergent spatial distribution of proliferating cell collectives. § NOISE IN THE CELL MOTILITY DIRECTION DETERMINES PROLIFERATION AND THE SPREAD OF CELL COLLECTIVE Having observed angular noise-dependent differences in the spatial distribution and the orientational order of cell collectives, we next ventured to ask how the noise influences cell division and the growth of the cell collective. As spatial constraints can regulate cell cycle progression during tissue expansion <cit.>, we anticipate that noise-induced differences in the cell spatial distribution will have an impact on the ability of cells to divide. Particularly, given that we incorporate mechanical feedback on cell division through the force term, noise-induced differences in local cell spatial arrangements could determine the ability of cells to divide. To understand how noise in the cell velocity orientation affects the proliferation of the cell collective, we looked at the temporal behavior of the total cell number and total spread area of the cell collective, for four different values of the noise strengths η=0.01, 0.05, 0.1, 0.2. We quantified the spatial spread of migrating cell collective by calculating the radius of gyration squared, R_g^2(t)=⟨1/NΣ_i=1^N [ r_i(t)- R_CM(t)]^2 ⟩. The bracket ⟨ ... ⟩ denotes the ensemble average over 3 different simulation runs at each value of η. The average squared distance of all the cells from the center of mass is an indicator of the spatial spread or invasion of a cell collective in two dimensions. Small R_g^2 values indicate a smaller spatial spread of cells, with cells localized in close proximity to the center of mass. In contrast, higher values of R_g^2 denote a wider spatial spread due to cells that are located farther away from the center of mass. Both the total number of cells, N, and the total spatial spread of cells, R_g^2, steadily increase with time, Fig. <ref>A,B for a given value of noise strength. In Fig. <ref>C,D, we show the N and R_g^2 at the final time point. Surprisingly, at late time points N and R_g^2 show opposite trends as a function of the noise strength η, Fig. <ref>C,D. The total cell number increases as the noise strength increases (see Fig. <ref>C), implying that stronger fluctuations in the direction of cell movement promote cell proliferation. At t=10,000s, there are ∼3400 cells for η=0.2, while, N∼1900 at the lower noise strength (η=0.01), which is significantly lower compared to the case of η=0.2, Fig. <ref>C. In contrast to the total number of cells, the total spatial spread of the cell collective showed an inverse dependence on the noise strength η. The spatial spread of the cell collective increases faster over time at lower noise strengths. R_g^2 is an order of magnitude smaller at η=0.2 as compared to the lower noise strength of η=0.01, suggesting that as the noise strength increases the cell collective exhibit a more compact spatial distribution (see Fig. <ref>D). 
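The spatial spread measure is equally compact; a sketch for a single snapshot, with the ensemble average over simulation runs omitted:

```python
import numpy as np

def radius_of_gyration_squared(pos):
    """R_g^2(t): mean squared distance of all cells from their center of mass."""
    c = pos - pos.mean(axis=0)
    return float(np.mean(np.sum(c**2, axis=1)))
```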
The global quantities N and R_g^2 describe the time-dependent behavior of the whole cell collective and how it is influenced by noise in the cell motion direction. Taken together with the analysis presented in the preceding section, our results show that increasing the noise strength disrupts cell-cell velocity alignment, as reflected in the lower order parameter, but at the same time promotes cell proliferation, as reflected in the higher number of cells. On the other hand, lower noise strength facilitates cell-cell velocity alignment and suppresses cell proliferation. As collective behavior depends strongly on the number density of actively migrating agents <cit.>, we next sought to understand how cell number density is affected by noise in the direction of cell motility. Given that N is not fixed and that we impose an open boundary condition, number density is neither fixed nor clearly defined, as in the case of Vicsek model, but evolves over time. Nevertheless, we can estimate the cell number density or the overall spatial packing of the cells using ρ(t)=N(t)/R_g(t)^2, where ρ is the cell density. Due to the combined effect of cell proliferation and cell motility, both the total number of cells N(t) and the spatial spread R_g^2, evolve over time. Consequently, cell number density exhibits a highly dynamic time-dependent behavior. ρ(t) initially increases sharply for each value of noise strength, η=0.01, 0.05, 0.1 and 0.2, as shown in the time regime before the dashed line in Fig. <ref>E. Following the initial rise, the temporal profile of the cell number density for noise strengths η=0.01, 0.05, 0.1 is markedly different from that for η=0.2, Fig. <ref>E. For η=0.01, 0.05, 0.1, the cell number density decreases over time after the initial transient increase. Whereas for η=0.2, the cell density continues to increase with time, although at a lower rate. At longer times, cell number density is comparatively low for weaker noise strengths. By singling out the cell number density at the final time point and plotting it as a function of the noise strength, we show that the final cell density rapidly increases with the noise strength Fig. <ref>F. This dependence is rather surprising given our earlier results for the total number of cells as a function of noise strength. We expect higher proliferation to correspond to lower density, due to the role of cell contact force-dependent feedback on proliferation (p_i (t)) in our model. When cells are tightly packed in space, we expect the compressing forces on cells from their neighbors to be higher <cit.>. This would hamper cell area growth, eventually leading to lower cell division events due to the force-dependent mechanical feedback term p_c. Contrary to our expectations, high noise strength leads to a higher cell density and the cell collective has yet more number of cells (see Fig. <ref>A,C). To investigate this further, we turn to a more detailed quantification of the cell spatial arrangement on the basis of clustering analysis. § NOISE INCREASES THE NUMBER OF ISOLATED CELLS AND FACILITATES ENHANCED PROLIFERATION To understand this rather counter-intuitive result of higher cell proliferation at higher cell number density, we used a spatial clustering algorithm DBSCAN (density-based spatial clustering of applications with noise) <cit.> to map out the structure of cell clusters within the collective. 
The idea behind performing cluster analysis is that feedback due to the contact force from overlapping cells inhibits cell growth and hampers cell division. As such, single cells and cells with very few overlapping neighbors will be characterized by the highest proliferative capability. On the other hand, we expect fewer cell division events when cells are part of a cluster with a larger number of overlapping cells. Therefore, we anticipate that the size of the cell clusters (i.e. the number of cells in a cluster) might hold the key to understanding why cells in a collective with higher global cell number density proliferate at a higher rate. DBSCAN is a powerful tool for class identification of clusters in large spatial databases with noise. For cluster identification and classification, DBSCAN requires two input parameters, namely, the maximum cell-cell distance ϵ [μm] to be considered as a cell's neighbor, and the minimum number of neighboring cells, n_min, that qualify as a cluster. The DBSCAN algorithm initially labels each cell which has at least n_min number of cells within a distance of ϵ [μm] from its center as a core cell. Any cell that has fewer than n_min number of cells within a distance of ϵ [μm] from its center is labeled as a border cell. All those cells which have no other cell in their neighborhood within a distance of ϵ [μm] from their center are labeled as single cells. The algorithm then randomly picks a core cell and assigns it a cluster index. The cluster is expanded sequentially, by adding cells which are in the neighborhood and within the distance of ϵ [μm] of the randomly picked core cell. In an iterative manner, the DBSCAN algorithm labels each cell as being part of one of the clusters, with each cluster assigned a unique cluster index. Since only overlapping cells exert a growth-inhibiting force on each other, we focused on identifying cell clusters of overlapping cells. Therefore, since the typical cell radius in our model is 5 μm, we chose ϵ=9 μm, which means that the cell-center-to-cell-center distance between any two cells within a cluster is 9 μm or less. This value of ϵ ensures that only overlapping cells form a cluster. In order to cover the full range of cluster sizes we also set n_min=2. Using MATLAB's in-built function for DBSCAN <cit.>, with the aforementioned values for the two input parameters (ϵ and n_min), we identified cell clusters from the spatial coordinates of individual cells at the final simulation timepoint and for different noise strengths η, Figs. <ref> A,B. Each individual cell cluster in Figs. <ref>A,B is represented in a different color. DBSCAN is a robust clustering method, allowing for the quantification of additional features of individual cell clusters. Based on the cluster identity of each cell, we can quantify the center of mass and the radius of gyration of individual cell clusters, as shown using circles of different radii in Fig. <ref>(C). Our analysis shows that the entire cell collective is spatially organized into cell clusters of different sizes, i.e. cell clusters are composed of varying cell numbers. Since the total number of cells varies with the noise strength, in order to perform cluster number comparison across different values of noise strengths, we normalized the total cell cluster number at a given noise strength by the total number of cells at that noise strength. The number of cell clusters at the final timepoint increases with the noise strength η, Fig.<ref> D.
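The same clustering step can be reproduced with any standard DBSCAN implementation. The paper uses MATLAB's built-in dbscan; the sketch below uses scikit-learn instead and assumes its convention that min_samples counts the point itself, so the parameter choices are illustrative rather than an exact replication.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_cells(positions, eps=9.0, n_min=2):
    """Cluster cell centers (an (N,2) array in micrometers).

    Returns (labels, number of clusters, number of isolated cells); DBSCAN
    marks isolated (single) cells as noise with label -1.
    """
    labels = DBSCAN(eps=eps, min_samples=n_min).fit_predict(positions)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    n_single = int(np.sum(labels == -1))
    return labels, n_clusters, n_single

# Normalizing the cluster count by the total cell number, as done in the text:
# labels, n_clusters, n_single = cluster_cells(pos)
# normalized_cluster_number = n_clusters / len(pos)
```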
The slight dip in the cell cluster number at the highest noise strength of η=0.2 is due to a lower total number of clusters at η=0.2 as compared to η=0.1, which indicates that clusters tend to disintegrate into isolated or single cells when the value of η is increased from 0.1 to 0.2. To understand higher proliferation in cell collective with higher cell number density we turned our attention to isolated cells and cell clusters with less than 3 cells. We found that the total number of both isolated cells and cell clusters with fewer than 3 cells increases with the noise strength η, Fig.<ref> E-F. These results are robust with respect to the simulation time, see Supplemental Information SI-V for simulations run for much longer time t=25,000 s. A higher number of isolated cells implies that more cells can proliferate, without the inhibitory effect of mechanical feedback on cell growth due to cell contact-dependent forces. This scenario is more conducive to cell division, allowing the cell collective to freely grow and divide. Our DBSCAN-based cell cluster analysis reveals that even though the cell number density is comparatively higher at higher noise strengths, there are large numbers of isolated cells and clusters with fewer cell numbers. This leads to enhanced proliferation of individual cells. In an expanding cell collective, cells form clusters as a result of either cell-cell adhesion and/or nearest neighbor velocity alignment. As the noise strength increases, the tendency for these clusters to disintegrate or breakup increases, due to rapid fluctuations in the direction of migration. The isolated or smaller size clusters then proliferate at a higher rate, thereby increasing the total cell number even though the overall number density of cells is higher at higher noise strengths. Hence, locally, due to the presence of more cells with fewer neighbors, cells are able to grow and divide relatively unhindered by mechanical feedback. This accounts for the puzzling result where higher overall cell density corresponds to higher cell proliferation. § DISCUSSION The migratory pattern of motile cells is diverse and depends on factors such as whether it is a collection of isolated single cells moving in a uniform direction or a collection of adhesive cells which are physically in contact with each other <cit.>. Here, we present an off-lattice agent-based computational modeling framework for an expanding 2D cell collective. By focusing on the influence of noise in the direction of a cell's motion, we show that noise strength influences: (i) the migratory pattern and spatial spread or invasion, and (ii) cell density-dependent cell proliferation of cell collectives. While the seminal work of Vicsek and co-workers has been in many ways foundational to computational modeling-based studies of cell migration <cit.>, few existing models of cell migration consider cell proliferation. Yet, the ability to grow and divide is a fundamental property of many biological systems. Our model considers individual cells as active agents that can grow and divide, and whose movement is influenced by their interactions with other cells and stochastic switching in the direction of migration. We take into account various biologically relevant inter-cellular interactions, such as cell elastic repulsion, and cell adhesion <cit.>. Adhesive interaction between cells, of the type prevalent in confluent tissues, has been taken into account in the past models <cit.>. 
The model also includes an additional nearest-neighbor interaction through which cells tend to align the direction of their motion with the average direction of motion of all their neighbors <cit.>. Given the recent experimental verification that cell proliferation is pressure-dependent <cit.>, mechanical feedback on proliferation is an important component of our model as the cell area growth depends on the net force acting on the cell from its contacting neighbors through the p_c term. Hence, our model is an important extension of the classical Vicsek model, with self-propelled particles that can undergo growth, birth, and death. We find that noise strength strongly influences the migratory pattern of cells in the collective. At low noise strengths η=0.01 and at long times, the cells are sparsely distributed in a ring-like pattern. Within this ring, the cells form clusters of different sizes. Cells in each of these clusters move in a highly ordered manner, with the orientation of cell velocity aligned in the direction away from the center. We quantified this ordered behavior of cell migration in the collective using an order parameter whose value for η=0.01 is close to 1, indicating a highly ordered motion of cells. Cell division events in our model scramble the local order of the cell collective as velocity vectors of the daughter cells are assigned random orientations upon division. However, even with these scrambling events present, we notice that the cell collective displays a highly ordered motion at low noise strengths. At intermediate noise strengths (η=0.05-0.1), the spatial distribution of migrating cells still shows a ring-like pattern. Although higher density of cells is still confined to the outer ring, clusters and individual cells are to be found in the interior of this domain as well. The orientation order parameter saturates to values much lower than 1 at long times, indicating the onset of a disordered migratory phase. The lower value of the order parameter is due to the formation of smaller cell clusters that move in random directions. As the noise strength is further increased to the highest value considered in this study (η=0.2), we observe a clear change in the migratory pattern and spatial arrangement of cells. In this case, higher cell density is observed in the interior of the spatial domain over which cells are distributed. The cell collective as a whole is split into multiple smaller clusters, with each cluster moving in random directions. The order parameter for the cell collective for such high noise strengths approaches 0, indicating an almost total loss of orientational order in cell motion. Our results also show that noise strength not only influences the overall spatial pattern but the spread of the cell collective as well, which is proportional to the total area covered by the cell collective. The largest spatial spread, compared to the size of the initial distribution of the collective, occurs for very low noise strengths at η=0.01. In this scenario, cells migrate as a propagating front leading to the emergence of a ring-like pattern. As the noise strength is increased, the spatial spread of the collective is strongly restricted. An unexpected result of our study is that noise strength influences cell proliferation. Although the total number of cells increases over time for all values of noise strength, the trend in proliferation is strongly dependent on the noise strength. 
The total number of cells is almost double the number of cells at the final time point for high noise strength η=0.2, as compared to η=0.01. Combined with our results showing the effect of noise strength on the spatial spread of the cell collective, we find that cell number density is a highly dynamic quantity that increases with noise strength. Taken together, we show that as the noise strength increases, the density of the cell collective increases, whereas the orientational order decreases. Given the mechanical feedback that limits proliferation due to cell-cell overlap, the increase of cell number with a higher density is a surprising and counter-intuitive result. While the overall density indicates that cells should be more tightly packed at higher noise strengths, our DBSCAN-based cluster analysis shows that the local spatial structure is contrary to what is expected. At higher noise strengths, not only do cells form more clusters, but there is a larger number of isolated cells. Isolated cells are ideal sources of proliferation in a collective, characterized by limited mechanical feedback on proliferation from neighboring cells. At lower noise strengths cell clusters contain a larger number of overlapping cells which thus inhibits cell growth and division. In this scenario, cells are localized to the periphery of a ring-like domain while its interior is mostly devoid of cells, leading to the overall density being lower. Therefore, even though cell number density is greater at higher noise strengths, there is a larger number of proliferating cells due to the presence of smaller clusters and a greater number of individual cells that are not part of a cluster. In conclusion, our study demonstrates that angular fluctuations in cell motility direction can strongly determine the spatial distribution of growing cell collectives. Our computational model provides a framework for studying the migration of cells in 2D growing cell collectives. Our model combines cell velocity re-alignment, as introduced in the Vicsek model, with active growth and cell division. This makes our work highly relevant in studying the migration behavior of biological cell collectives, in which cell migration occurs together with cell proliferation. Our results imply that there are more, yet unexplained, dynamic behaviors that may emerge from investigating mechanical feedback on proliferation in a system of self-propelled particles undergoing collective motion. § ACKNOWLEDGMENTS A.M.K acknowledge funding from startup grants. The authors acknowledge the support of Augusta University High Performance Computing Services (AUHPCS) for providing computational resources contributing to the results presented in this publication.
http://arxiv.org/abs/2307.04850v1
20230710184245
SHAP@k:Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features
[ "Sanjay Kariyappa", "Leonidas Tsepenekas", "Freddy Lécué", "Daniele Magazzeni" ]
cs.LG
[ "cs.LG", "cs.AI" ]
The SHAP framework provides a principled method to explain the predictions of a model by computing feature importance. Motivated by applications in finance, we introduce the Top-k Identification Problem (TkIP), where the objective is to identify the k features with the highest SHAP values. While any method to compute SHAP values with uncertainty estimates (such as KernelSHAP and SamplingSHAP) can be trivially adapted to solve TkIP, doing so is highly sample inefficient. The goal of our work is to improve the sample efficiency of existing methods in the context of solving TkIP. Our key insight is that TkIP can be framed as an Explore-m problem <cit.>–a well-studied problem related to multi-armed bandits (MAB). This connection enables us to improve sample efficiency by leveraging two techniques from the MAB literature: (1) a better stopping-condition (to stop sampling) that identifies when PAC (Probably Approximately Correct) guarantees have been met and (2) a greedy sampling scheme that judiciously allocates samples between different features. By adopting these methods we develop KernelSHAP@k and SamplingSHAP@k to efficiently solve TkIP, offering an average improvement of 5× in sample-efficiency and runtime across the most common credit-related datasets. § INTRODUCTION The ability to explain the predictions of ML models is of critical importance in highly regulated industries, where laws provide a right to explanation for people who are adversely impacted by algorithmic decision making. Specifically in finance, regulations like the Fair Credit Reporting Act <cit.> and the Equal Credit Opportunity Act <cit.> require a rejected loan/credit application (i.e. adverse action) to be explained to the borrower, by providing reasons for why the application was rejected (e.g., low credit score, high debt-to-income ratio, recent delinquencies, etc.). Owing to its principled formulation, the SHAP framework <cit.> is the de facto choice for explaining model predictions in credit-risk assessment models <cit.>. While exact computation of SHAP values is computationally intractable, sampling-based techniques like KernelSHAP <cit.> and SamplingSHAP provide a practical alternative to compute approximate SHAP values. Additionally, recent works have developed methods to quantify the approximation error of such sampling-based techniques, by providing confidence intervals (CIs) for the estimated SHAP values <cit.>. In this paper, we introduce the Top-k Identification Problem (TkIP), where the objective is to identify the k most important features, i.e., those with the k highest SHAP values (referred to as the Top-k features). TkIP is motivated by an important real-world use-case of processing credit/loan applications, where the lender is required to provide the top features that contributed negatively to the model's prediction (i.e.
explanations) in the event of a rejection; this is standard practice by credit/loan issuers in order to comply with the Equal Credit Opportunity Act <cit.>. Existing methods like KernelSHAP and SamplingSHAP can be straightforwardly adapted to identify Top-k features with PAC guarantees, by evaluating enough samples to sufficiently reduce the the CIs of the SHAP estimates. However, doing so can be computationally expensive as it often requires a very large number of samples. Motivated by this problem, our paper investigates methods to improve the sample efficiency of KernelSHAP and SamplingSHAP, specifically to solve TkIP. Our key insight is that TkIP can be framed as an Explore-m problem <cit.> – a well-studied problem related to multi-arm bandits (MAB), where the goal is to identify a subset of arms with the highest expected payoffs. By leveraging this connection, we make the following key changes to the SHAP estimation algorithms based on ideas that have been developed in the MAB literature: * Overlap-based stopping condition <cit.>: Sampling for KernelSHAP and SamplingSHAP is usually done until the CI widths of SHAP values associated with all the features falls below a threshold. This naive stopping condition is unnecessarily conservative for solving TkIP; so instead, we use a stopping condition that is based on the overlap in CIs between different features (instead of the absolute CI width of each feature). This allows for early-stopping once a PAC solution for TkIP has been identified. * Greedy sampling scheme <cit.>: For SamplingSHAP, the default sampling scheme of allocating samples according to the variance of each feature is ill-suited for solving TkIP. Instead, we leverage a greedy sampling scheme that is designed to efficiently solve the Explore-m problem by allocating a higher number of samples to features that are likely to change the Top-k subset. This enables a significant reduction in sample-costs compared to the variance-based sample allocation. Note that (C2) requires the ability to allocate samples to evaluate the SHAP values on a per-feature basis, so it cannot be applied to KernelSHAP. We use the above techniques to develop KernelSHAP@k (KernelSHAP + C1) and SamplingSHAP@k (SamplingSHAP + C1 + C2). We evaluate these methods with the most common credit related datasets and show that they offer significant improvements in sample efficiency and runtime, compared to their respective baselines. The rest of this paper is structured as follows: * In Section <ref>, we provide background on sampling-based methods that can be used to estimate SHAP values and related work on variance reduction and uncertainty estimation. * In Section <ref>, we formally define the Top-k Identification problem and develop a naive stopping condition that can be used with Kernel/Sampling SHAP to correctly identify Top-k features. Nonetheless, this condition is sample-inefficient. * In Section <ref>, we develop KernelSHAP@k and SamplingSHAP@k to efficiently solve TkIP with PAC guarantees. The key insight here is framing TkIP as an Explore-m problem. * In Section <ref>, we evaluate Kernel/Sampling-SHAP@k on a suite of credit related datasets and demonstrate significant improvements in sample-costs and runtime. * We discuss limitations and future directions in Section <ref>, and conclude in Section <ref>. § BACKGROUND AND RELATED WORK The goal of our work is to modify existing algorithms to efficiently identify Top-k features with PAC guarantees. 
In this section, we provide background on the SHAP framework and discuss existing sampling-based techniques (SamplingSHAP and KernelSHAP) that estimate SHAP values. Additionally, we discuss related works that extend these method by reducing the variance of the estimates and quantify uncertainty in the form of confidence intervals. §.§ SHAP SHAP (SHapley Additive exPlanations) is based on a game-theoretic concept called Shapley values <cit.>, which is a method to fairly distribute the payoffs of a cooperative game among the players. This is done by measuring the average marginal contribution of a single player computed across all possible coalitions of players. Such a formulation of assigning credit has been shown to uniquely satisfy a set of fairness axioms such as local accuracy, missingness and consistency <cit.>. SHAP applies this concept to explaining the predictions of the model by treating individual features as players and the output of the model as the payoff. By measuring the marginal contributions of features across different coalitions, SHAP assigns a score to each feature that reflects its contribution to the final prediction of the model. Given a set of features D ={1,2,..,d}, the SHAP value ϕ_i for the i^th feature of an input x with a model f is computed by taking the weighted average of the change in predictions of f when feature i is added to a subset of features S as shown in Eqn.<ref>. ϕ_i(x,f) = ∑_S ⊆ D ∖{i}|S|!(d-|S|-1)!/d![f(x_S ∪{i}) - f(x_S)] Here x_S is the feature vector restricted to S. To evaluate the model function with missing features in the above expression, we use the interventional SHAP formulation <cit.>, where missing feature values are set to a default baseline. Note that computing SHAP values exactly has a computational complexity of Θ(2^d). While there are efficient methods to compute exact SHAP values for specific models such as decision trees <cit.>, in general, the exponential complexity makes it computationally intractable to evaluate exact SHAP values when the number of features is large. To reduce computational costs, sampling-based approximation techniques have been proposed. We explain two such methods in the remainder of this section. §.§ SamplingSHAP SamplingSHAP estimates SHAP values by only evaluating a subset of terms in Eqn.<ref> and then averaging over the resulting marginals. Štrumbelj et al. <cit.> provide an efficient algorithm to perform Monte Carlo sampling according to the probability distribution induced by the weights in Eqn.<ref>. To quantify the uncertainty in the SHAP estimate based on the number of samples, Merrick et al. <cit.> proposed the use of Standard Error of Means (SEM) to derive confidence intervals through the Central Limit Theorem (CLT). Specifically, the Monte Carlo simulation is run T_i times for each feature i, thus giving a set of SHAP estimates {ϕ̂_i^j}_j=1^T_i. Finally, the SHAP value for i is set to be ϕ̂_i = ∑^T_i_j = 1ϕ̂_i^j / T_i. Eqn.<ref> shows how the 95% CI for the i^th feature (there's a 0.95 probability of ϕ_i being in CI_i): CI_i = [ϕ̂_i ± 1.96σ_i/√(T_i)]. Here σ_i denotes the standard deviation of the set of SHAP estimates {ϕ̂_i^j}_j=1^T_i. Note that we can achieve any confidence that we want, by tweaking the parameter 1.96 accordingly. Additionally, prior works have also tried to reduce the length of the CIs through variance reduction techniques. For instance, Mitchell et al. 
<cit.> propose to evaluate negatively correlated pairs of samples in SamplingSHAP to reduce the variance σ_i of SHAP estimates. Sampling techniques have also been used in the context of Game Theory for computing Shapley values <cit.>. §.§ KernelSHAP KernelSHAP <cit.> is another sampling-based method that views SHAP values as the solution to a weighted regression problem. Specifically, consider a linear model of the form g(S) = ϕ_0 + ∑_i∈ S ϕ_i, where ϕ_i denote the SHAP values. KernelSHAP proposes to estimate these values by solving the following optimization problem: {ϕ_i} = argmin_ϕ_1,..,ϕ_d ∑_S⊆ D w(S)(f(S)-g(S))^2. Here, w(S) is a weighting function that is chosen in a way that makes solving Eqn.<ref> equivalent to finding SHAP values. Note that evaluating Eqn.<ref> requires evaluating an exponential number of terms in the summation, making the computation of exact SHAP values intractable. Fortunately, an approximation of Eqn.<ref> that evaluates only a small subset of terms is sufficient in practice to estimate SHAP values. Furthermore, a recent work <cit.> has shown that the variance of SHAP values, computed by using KernelSHAP, can be used to derive confidence intervals, providing a means of detecting convergence in the SHAP estimates; this leads to CIs identical to those of Eqn.<ref>. Additionally, this work also uses paired-sampling (similar to <cit.>) with KernelSHAP to reduce computational costs, by reducing the variance of the SHAP estimates. § PROBLEM SETTING In this section, we formally define the Top-k identification problem (TkIP), the goal of which is to identify the features with the highest SHAP values. To apply sampling-based techniques to solve TkIP, we define an (ϵ, δ)-PAC solution for it, which allows for an ϵ-approximate version of the solution with a low probability of failure (δ). Finally, we describe a naive stopping condition that can be used with Kernel/Sampling-SHAP to derive an (ϵ, δ)-PAC solution. We demonstrate that this naive solution is sample-inefficient, motivating the need for our proposed solutions that improve sample-efficiency. §.§ Top-k identification problem Consider a model f:ℐ→ℝ, which acts on a d-dimensional input x∈ℐ to produce a prediction p=f(x). For an input x∈ℐ, let {ϕ_1, ϕ_2, ..., ϕ_d} denote the set of SHAP values corresponding to the input features D={1, 2,.., d} respectively. To simplify notation, let us assume that the features are indexed such that ϕ_1 ≥ ϕ_2 ≥ ϕ_3 ≥ .. ≥ ϕ_d. The goal of TkIP is to identify the k features Topk = {1,2,..,k} corresponding to the k highest SHAP values 𝒮={ϕ_1, ϕ_2, .. ϕ_k}. Note that the ordering of features in Topk does not matter. Solving TkIP exactly requires us to precisely evaluate all the SHAP values, which is computationally intractable. Instead, we define ϵ-approximate and (ϵ, δ)-PAC solutions for TkIP that are more useful in the context of sampling-based PAC methods. * ϵ-approximate solution: For a given accuracy parameter ϵ∈ (0,1), consider a subset of features D^*⊂ D such that |D^*| = k. D^* is an ϵ-approximate solution to TkIP if it satisfies the following: ϕ_i ≥ ϕ_k - ϵ, ∀ i ∈ D^*. * (ϵ, δ)-PAC solution: For given accuracy and confidence parameters ϵ, δ∈ (0,1), D^* is said to be an (ϵ, δ)-PAC solution for TkIP if it is an ϵ-approximate solution with probability at least 1-δ: Pr[ϕ_i ≥ ϕ_k - ϵ, ∀ i ∈ D^*] ≥ 1-δ. In other words, here we allow for randomized algorithms that should compute D^* with a controllable (low) probability of failure.
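These definitions translate directly into a check that is useful when validating an algorithm on problems where the true SHAP values are known; a minimal sketch (ours, not part of the algorithms proposed here):

```python
import numpy as np

def is_eps_approximate(D_star, phi_true, k, eps):
    """Epsilon-approximate condition: every selected feature's true SHAP value
    must be within eps of the k-th largest true SHAP value."""
    phi_k = np.sort(phi_true)[-k]               # k-th highest true SHAP value
    return all(phi_true[i] >= phi_k - eps for i in D_star)
```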
This relaxed notion of the solution allows for a feature i to be returned as part of the solution even if i ∉ Topk, as long as the corresponding SHAP value ϕ_i is ϵ-close to ϕ_k (i.e. the k^th SHAP value). §.§ PAC solution for TkIP with naive stopping condition In both KernelSHAP and SamplingSHAP, we can use the CLT-based approaches mentioned in Sections <ref>, <ref> to obtain confidence intervals of the following form. Let ϕ_i be the true SHAP value for feature i, and let ϕ̂_i be our approximation for it. Then, if we repeat the corresponding algorithm T_i times, we obtain an interval CI_i of width |CI_i| = 2 · Z(δ/d) σ_i/√(T_i) that contains ϕ_i with probability at least 1-δ/d. In the above, Z(δ/d) is the critical value from the standard normal distribution for the desired level of confidence; note that this value is a small constant. It is clear from Eqn. <ref> that the larger T_i is, the closer our approximation is to the true value. One way to identify the Topk features is by running the SHAP estimation algorithm (i.e. adding more samples) until the CIs for all the features are small enough to meet the following stopping condition: |CI_i| = 2 · Z(δ/d) σ_i/√(T_i) ≤ ϵ, ∀ i∈ D. We call this the naive stopping condition, and in Theorem <ref> we show that it indeed leads to an (ϵ, δ)-PAC solution for TkIP. Thus, Kernel/Sampling-SHAP can be straightforwardly adapted to solve TkIP by using enough samples to meet this stopping condition. In the following subsection, we will explain why this naive approach is sample-inefficient with the aid of an example, motivating the need for a better stopping condition and sampling technique. Let 𝒮={ϕ̂_1, ϕ̂_2,.., ϕ̂_d} denote the SHAP estimates of input features D={1,2,..,d}, such that the intervals CI_i, defined using a confidence of δ/d, satisfy |CI_i| ≤ ϵ, ∀ i∈ D. Then D^*, the set of the k features with the largest ϕ̂_i, is an (ϵ, δ)-PAC solution for TkIP. We show that when ϕ_i ∈ CI_i for every i, the solution is ϵ-approximate. Using a union bound over all features we have: Pr[ϕ_i ∈ CI_i, ∀ i] = 1 - Pr[∃ i: ϕ_i ∉ CI_i] ≥ 1 - ∑^d_i=1 δ/d = 1 - δ. For the inequality above we used the definition of CI_i, which states that Pr[ϕ_i ∉ CI_i] ≤ δ/d. Clearly, if we prove that ϕ_i ∈ CI_i, ∀ i implies an ϵ-approximate solution, we are done. Therefore, for the sake of contradiction, assume that the resulting solution is not ϵ-approximate. This means that there exists a feature ĩ with ϕ_ĩ < ϕ_k - ϵ, which still made it into our top-k solution. By definition of Topk and CI_i, we have that for all i ∈ Topk, ϕ̂_i ≥ ϕ_k - ϵ/2. By definition of CI_ĩ, we have ϕ̂_ĩ ≤ ϕ_ĩ + ϵ/2. Combining this with ϕ_ĩ < ϕ_k - ϵ gives ϕ̂_ĩ < ϕ_k - ϵ/2. Hence, ĩ could never be chosen instead of any i ∈ Topk in the returned solution. §.§ Understanding the inefficiencies of the naive stopping condition The naive stopping condition requires the CIs of all the features to be of width at most ϵ. For a feature i, the number of samples N_i necessary to achieve this is proportional to the variance of the feature's SHAP estimate (N_i ∝ σ^2_i), resulting in high-variance features incurring a higher sample-cost. To illustrate, we apply SamplingSHAP to explain the prediction of an MLP model on a single example from the UCI Credit dataset. To identify the Top-k features (with k=4), we obtain CIs by running SamplingSHAP multiple times for each feature, until the stopping condition in Eqn. <ref> is met. We visualize the CIs of the SHAP estimates of the individual features in Fig.<ref>a, where the Top-4 features are marked in green.
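A sketch of this naive baseline, assuming repeated per-feature SHAP estimates are available (e.g. from SamplingSHAP) and adopting a two-sided critical-value convention for Z(δ/d):

```python
import numpy as np
from scipy.stats import norm

def naive_topk(estimates, k, eps, delta):
    """Return the k features with the largest mean SHAP estimates once every
    CLT confidence interval has width <= eps, else None (keep sampling).

    estimates[i] is a 1-D array of the T_i SHAP estimates collected for feature i.
    """
    d = len(estimates)
    z = norm.ppf(1.0 - (delta / d) / 2.0)        # critical value Z(delta/d)
    phi_hat = np.array([np.mean(e) for e in estimates])
    width = np.array([2.0 * z * np.std(e, ddof=1) / np.sqrt(len(e))
                      for e in estimates])
    if np.all(width <= eps):
        return set(np.argsort(phi_hat)[-k:].tolist())
    return None
```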
To understand the cost of this stopping condition, we plot the number of function evaluations consumed by the algorithm in Fig.<ref>d and the variance of the SHAP estimate for each feature in Fig.<ref>c. As expected, we find that the cost is proportional to the variance of the per-feature SHAP estimate, resulting in a high sample-cost for high-variance features. A key drawback of the naive sampling scheme is that it requires |CI_i| ≤ ϵ for all features, regardless of how uncertain we are about whether the feature belongs in Topk. This results in a lot of wasted samples. For instance, in the example in Fig.<ref>, ϕ̂_3 (the SHAP estimate for feature-3) is much higher than that of the other features, allowing us to conclude with high confidence that 3 ∈ Topk early on in the sampling process and avoid sampling feature-3 further. However, the naive sampling scheme lacks such adaptivity and forces us to continue sampling this high-variance feature until |CI_3| ≤ ϵ, thus leading to a lot of wasted samples and contributing significantly to the sample cost of SamplingSHAP. In the next section we develop SamplingSHAP@k and KernelSHAP@k to avoid such wasted samples by using a modified stopping condition and sampling scheme. § SHAP@K: FRAMING TKIP AS AN EXPLORE-M PROBLEM The key insight of our work is that TkIP can be framed as an Explore-m problem–a well-studied problem in multi-armed bandits (MAB), where the goal is to identify the arms with the highest expected payoffs in a sample-efficient way <cit.>. Formally, given N arms, each with some unknown distribution of payoffs, the objective is to identify (with PAC guarantees) the subset of m arms with the highest expected payoff. Note that TkIP has a 1-1 correspondence with the Explore-m problem. The arms in MAB are equivalent to the features in the context of SHAP, and the reward obtained by pulling an arm is equivalent to the SHAP estimate of a specific feature obtained through a single sample. The goal is to identify the subset of m arms/k features with the highest expected rewards/SHAP values. This connection allows us to leverage methods from the MAB literature to efficiently solve TkIP. Hence, we propose changes to the earlier sampling scheme and stopping condition, to develop sample-efficient variants of KernelSHAP and SamplingSHAP. §.§ Overlap-based stopping condition (C1) Inspired by Kalyanakrishnan et al. <cit.>, we use the stopping condition in Theorem <ref>, which considers the overlap in CIs between the SHAP estimates of different features. By only considering the overlap between the CIs, the improved stopping condition avoids the need to reduce all the CI widths to below ϵ, as shown in Fig. <ref>b. Through experimental evaluations, we show that compared to the naive stopping condition, this results in a significant reduction in the number of samples necessary to identify the Topk features (Fig. <ref>d). We now introduce some notation. Let T_i be the number of SHAP estimates that we have collected so far for feature i. For the desired confidence δ, we define a δ/d confidence interval CI_i = [α_i, β_i] as before, where ϕ̂_i is the current SHAP estimate, α_i = ϕ̂_i - Z(δ/d)σ_i/√(T_i) and β_i = ϕ̂_i + Z(δ/d)σ_i/√(T_i). Let High denote the set of k features with the highest SHAP estimates ϕ̂_i and Low denote the remaining set of d-k features. Let h be the feature in High with the lowest lower confidence bound, i.e. h = argmin_{i∈ High} α_i, and let ℓ be the feature in Low with the highest upper confidence bound, i.e. ℓ = argmax_{i∈ Low} β_i.
Then, High is an (ϵ, δ)-PAC solution for TkIP if the following condition is satisfied: β_ℓ - α_h ≤ ϵ. The proof is identical to Theorem 1 from <cit.> with one minor difference. The authors in <cit.> use Hoeffding's inequality prior to taking a union bound to show that the failure probability is at most δ. Here, we do not need the application of Hoeffding's inequality, since we already have the CLT guarantees for the CIs. §.§ Greedy sampling scheme (C2) The default variance-based sampling scheme used by SamplingSHAP minimizes the CIs for all features. Such sampling schemes are inefficient for the stopping condition in Theorem <ref>, which only depends on two features (h and ℓ) at any given point in the sampling process. To improve the sample efficiency, we consider a greedy sampling strategy <cit.> as described in Algorithm <ref>. The algorithm starts by using any feature-wise SHAP estimation algorithm (e.g., SamplingSHAP) to find an initial set of SHAP estimates {ϕ̂_i^j} for each input feature i; a feature-wise SHAP estimator computes the SHAP values independently for each feature. The mean SHAP estimates are used to categorize the features into the two groups High and Low. Then, the algorithm identifies h and ℓ as defined in Theorem <ref>, and evaluates additional SHAP estimates for these two features. These steps are repeated until the stopping condition is met. At this point, High will be a valid (ϵ, δ)-PAC solution for TkIP. This scheme improves sample efficiency by allocating more samples to (h, ℓ), which are exactly the features that can potentially affect what is inside Topk. To see why this algorithm terminates, notice that in each iteration exactly 2 CIs shrink. Therefore, in the worst case, there will come a point where all CIs will be of length at most ϵ, and thus the stopping condition will trivially be true. §.§ KernelSHAP@k and SamplingSHAP@k We apply the above changes to existing algorithms to propose KernelSHAP@k (KernelSHAP + C1) and SamplingSHAP@k (SamplingSHAP + C1 + C2). In both cases, we incrementally add SHAP estimates ϕ̂^j_i until the stopping condition (C1) is met and the Topk features are identified. Additionally, for SamplingSHAP@k, we use the more efficient greedy sampling scheme (C2) that allocates samples only to features that influence the stopping condition. Note that the greedy sampling scheme (C2) requires the ability to compute the SHAP values of features individually. Thus, we cannot apply C2 to KernelSHAP as it estimates the SHAP values of all features together. In contrast, SamplingSHAP estimates SHAP values per-feature, which makes it compatible with C2. § EXPERIMENTS To quantify the improvements in sample efficiency of our proposed methods, we compare the sample cost (i.e. number of function evaluations) of Kernel/SamplingSHAP@k with that of Kernel/SamplingSHAP (with the naive stopping condition) using various credit-related datasets. We present the experimental setup, followed by the results comparing sample costs and sensitivity studies that quantify how these costs change with the accuracy parameter ϵ. §.§ Experimental setup Table<ref> lists the datasets used in our experiments, along with a brief description of the prediction task, number of features, and train/test split. In each case, we train a 5-layer MLP model on the binary classification task using the training set for 100 epochs, and use this model to make predictions on the test set.
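For reference, the SamplingSHAP@k procedure evaluated in these experiments combines C1 and C2 as sketched below; sample_shap(i) is a placeholder for one independent per-feature SHAP estimate, and the initialization size n_init is an illustrative choice rather than the exact setting used.

```python
import numpy as np
from scipy.stats import norm

def sampling_shap_at_k(sample_shap, d, k, eps, delta, n_init=10):
    """Greedy (epsilon, delta)-PAC identification of the Top-k features."""
    z = norm.ppf(1.0 - (delta / d) / 2.0)
    est = [[sample_shap(i) for _ in range(n_init)] for i in range(d)]

    while True:
        phi = np.array([np.mean(e) for e in est])
        hw = np.array([z * np.std(e, ddof=1) / np.sqrt(len(e)) for e in est])
        lo, hi = phi - hw, phi + hw

        order = np.argsort(phi)
        high, low = order[-k:], order[:-k]        # current Top-k and the rest
        h = high[np.argmin(lo[high])]             # weakest member of High
        l = low[np.argmax(hi[low])]               # strongest member of Low

        if hi[l] - lo[h] <= eps:                  # overlap-based stopping rule (C1)
            return set(high.tolist())

        est[h].append(sample_shap(h))             # greedy allocation (C2)
        est[l].append(sample_shap(l))
```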
For the negatively classified examples in the test set (indicating a high likelihood of the credit application being rejected), we use different methods to compute the Top-4 features that contributed the most to the negative prediction in terms of their SHAP values[Our methodology of only evaluating explanations for negative outcomes is motivated by regulations that require explanations to be provided in case of adverse actions (e.g., a credit application being rejected).]. We use interventional SHAP for our experiments and use a positively classified example from the training set as our baseline. We compare the sample-efficiency of various methods in terms of the number of function (f) evaluations and runtime required to identify the Top-4 features with PAC guarantees[Runtime measured on a machine with a 32-core AMD CPU and 128 GB of memory. Code to reproduce results is included in the supplementary material.]. §.§ Results Table<ref> compares the average sample cost (i.e. number of function evaluations) and average runtime required by different methods to identify Top-4 features with an (ϵ=0.005, δ=10^-6)-PAC guarantee across different datasets. Our evaluations show that Kernel/SamplingSHAP@k significantly outperform their baseline counterparts Kernel/SamplingSHAP, offering between 1.2× and 14.2× improvement in sample efficiency and between 1.2× and 14.7× improvement in runtime. Between SamplingSHAP@k and KernelSHAP@k, we find that the method with the better sample-cost depends on the dataset in question. However, SamplingSHAP@k has a consistently lower runtime compared to KernelSHAP@k, even in cases when it has a higher sample cost. For instance, for the UCI credit dataset, we find that SamplingSHAP@k has roughly twice the sample cost of KernelSHAP@k, but it is 10× faster in terms of runtime. The reason for this is that each KernelSHAP estimate is more expensive to compute as it requires solving a weighted regression problem using the outputs of the model. In contrast, SamplingSHAP works by just computing a simple average of the outputs of the model, which requires much less compute, resulting in a faster runtime. §.§ Sensitivity studies To understand how the accuracy parameter ϵ influences the sample-efficiency of various methods, we perform sensitivity studies by varying ϵ in the range [0.005, 0.01]. For different values of ϵ, we plot the sample-cost (i.e. number of function evaluations) and runtime of different methods across the four datasets considered in our experiments. Note that a lower value of ϵ implies a lower margin of error in identifying the Top-4 features and requires estimating SHAP values with greater precision (narrower CIs). As ϵ is reduced from 0.01 to 0.005, we find that the sample-costs and runtimes of all methods increase. Notably, the rate of this increase is much higher for Sampling/KernelSHAP, compared to Sampling/KernelSHAP@k. This is because the naive stopping condition used by Sampling/KernelSHAP requires the CI widths of the SHAP estimates of all features to be lower than ϵ, which drives up the number of samples required. In contrast, the stopping condition used by Sampling/KernelSHAP@k allows the CI widths of the features that don't influence the stopping condition to be much higher than ϵ and thus requires fewer samples. § LIMITATIONS AND FUTURE WORK We discuss the limitations of our work and future directions of research in this section. Feature dependence: Since our work builds on the SHAP framework, it shares the limitations of SHAP.
Importantly, SHAP assumes that the features of the input are not correlated. This assumption does not hold in most practical settings. To address this issue, methods like GroupSHAP <cit.> have been developed, which group highly correlated features and assign attributions to groups of features instead of individual features. We leave the evaluation of our methods in the GroupSHAP setting as part of future work. Ordering of Top-k features: Our proposed methods only solve the problem of identifying the Topk features. The features returned by our methods may not be in the right order. Thus, our methods may not be well suited for applications where the order of reporting the top-k features is important. One way in which our methods can be adapted to such a setting is by the repeated application of Kernel/SamplingSHAP@k, setting different values of k ranging over 1, 2, …, k. This would result in the Topk features being identified in the right rank order. We leave the evaluation of this method as part of future studies. § CONCLUSION This paper studies the Top-k Identification problem (TkIP) – a novel problem setting, where the goal is to identify the k features with the highest SHAP values. TkIP is motivated by applications in finance, where explanations for adverse actions are typically provided by listing the top-k features that led to a negative outcome. We find that while existing black-box techniques like KernelSHAP and SamplingSHAP can be trivially adapted to solve TkIP, doing so is highly sample inefficient. To address this issue, we develop sample-efficient variants of these methods that are designed specifically for solving TkIP. Our key insight is that TkIP can be viewed as an Explore-m problem – a well-studied problem related to multi-armed bandits (MAB). This connection allows us to improve sample efficiency by using (1) an overlap-based stopping condition and (2) a greedy sampling scheme that efficiently allocates samples between different features. We leverage these techniques to develop Kernel/SamplingSHAP@k, which can efficiently identify the Topk features with (ϵ, δ)-PAC guarantees. Our experiments on several credit-related datasets show that Kernel/SamplingSHAP@k significantly outperform their corresponding baselines, Kernel/SamplingSHAP, offering an average improvement of 5× in sample-efficiency and runtime. We also characterize the sample costs and runtime of our proposed methods across different levels of accuracy (ϵ). Our paper provides efficient solutions to a previously unstudied problem that has important practical applications in finance. § ACKNOWLEDGEMENTS This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates (“J.P. Morgan”) and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
http://arxiv.org/abs/2307.04049v1
20230708212820
Parallel Algorithms Align with Neural Execution
[ "Valerie Engelmayer", "Dobrik Georgiev", "Petar Veličković" ]
cs.LG
[ "cs.LG" ]
[ Parallel Algorithms Align with Neural Execution Valerie Engelmayeraux Dobrik Georgievcam Petar Veličkovićdm auxDepartment of Applied Computer Science, University of Augsburg, Augsburg, Germany camDepartment of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom dmGoogle DeepMind, London, United Kingdom Valerie [email protected] Machine Learning, ICML 0.3in ] Neural algorithmic reasoners are parallel processors. Teaching them sequential algorithms contradicts this nature, rendering a significant share of their computations redundant. Parallel algorithms however may exploit their full computational power, therefore requiring fewer layers to be executed. This drastically reduces training times, as we observe when comparing parallel implementations of searching, sorting and finding strongly connected components to their sequential counterparts on the CLRS framework. Additionally, parallel versions achieve strongly superior predictive performance in most cases. § MOTIVATION In neural algorithmic reasoning, neural networks (NN) act as computational machines. In graph neural networks (GNN), graph nodes take on the role of storage space (interpreting edge labels as nodes adjacent to its endpoints throughout this paper), while edges indicate which ways information may flow. The update function of choice defines the set of constant (neural) time operations. But note how nodes update their features in parallel, each one acting as a processor of its own rather than sheer memory. The parallel nature of neural networks is widely known. Running them in parallel fashion on processing devices like GPUs and TPUs drastically saves computational resources <cit.>. It seems natural that this translation between computational models would also hold the other way around. And indeed, Loukas loukas_what_2020 proves how Neural Networks (NN) are analogous to distributed computational models under certain assumptions. Kaiser & Sutskever kaiser2015neural exploit the advantages of parallel processing in their Neural GPU. Freivalds et al. freivalds_neural_nodate derive their architecture from the parallel computational model of Shuffle-Exchange-Networks. Xu et al. xu_what_2020 observe how their model learns to compute a shortest path starting from both ends in parallel when executing Bellman Ford. Veličković et al. velickovic_clrs_2022 and Veličković et al. velickovic_neural_2020 hint at parallelized computations whenever possible. It is time the parallel processing capabilities of NN are exploited systematically. Theory on parallel computational models and algorithms explicitly designed for them are abundant <cit.>. Their trajectories are shorter and align more closely with neural architectures, as illustrated in figure <ref>. Hinting at these during training teaches NN to execute algorithmic tasks much more efficiently than when providing hints for sequential algorithms, as we demonstrate in section <ref> for the examples of searching, sorting and finding strongly connected components. While it is common practice to modify the neural architecture for better alignment <cit.>, it seems promising to narrow the gap from the other side, by choosing algorithms that naturally align with neural execution. § PARALLEL COMPUTING Fundamentally, the parallel computational models addressed here assume multiple processors collaborating to solve a task. The line between parallel and distributed computing is blurry and depends on how controlled interactions between processors are. 
We assume a fixed and known interconnection graph, uniquely identified processors and a common clock to govern computation. Therefore, we choose to speak of parallel computing. §.§ Parallel Computational Models Processor Arrays. Communication may take place via hard-wired channels between the processors. These induce an interconnection graph that may in principle take any shape. At every time step, each processor executes some computation based on the contents of its local memory and the information received from its neighbours in the previous step, and may in turn send out a tailored message through any of its channels. PRAM Models. Alternatively, communication may be realised by reading from and writing to global memory, giving rise to PRAM (parallel random access machine) models <cit.>. Submodels allowing for concurrent reading and writing by multiple processors are referred to as CRCW PRAM. Different conventions exist on whether attempting to concurrently write different values is permitted, and if so, how to decide who succeeds. In the most powerful model, the priority CRCW PRAM, the value from the processor with the lowest index taking part in the concurrent write will be taken on. §.§ Efficiency Since multiple steps can be carried out at the same time, the required number of operations in a parallel algorithm does not impose a lower bound to its run time as in the sequential case, but the product of time and processor number. Optimal speedup is achieved if the use of n processors speeds up computation by a factor of n. This gives rise to a notion of efficiency frequently used in parallel computing <cit.>. The efficiency of a parallel algorithm solving a task of sequential complexity C on p processors in time t is defined as C/pt. It is not hard to see that optimal speedup entails an efficiency of Ω(1). §.§ Examples of Parallel Algorithms Searching. For a simple parallel search for value x in a descending list of n items, assume a priority CRCW PRAM with n processors. Distribute the first item to processor 1, the second to processor 2 etc., while x is stored in the global memory. If a processor's item is ≥ x, it tries to write its index to a designated location in the global memory. Since the one with the smallest index will succeed, the location now contains the desired position of x. The run time is independent of the input size[Distributing values to processors can be done in constant time by routing over the shared memory. We neglect distributing/returning in-/outputs from/to a host computer in the following as it is omitted in neural execution.], so the time-processor-product is Θ(n), missing optimal speed-up as searching can be done in O(log n). Sorting. Habermann habermann_parallel_1972 proposes a simple parallel sorting algorithm for a linear array of processors called Odd Even Transposition Sort (OETS). Each processor holds one item. In an odd (even) round, all neighbouring pairs starting at an odd (even) index swap their items if they are out of order. The two types of rounds take turns for at most n rounds total when n items are to be sorted, yielding O(n^2) operations when accounting for the n processors. Again, this is not optimal for comparison-based sorting, which may be done in O(n log n). Strongly Connected Components. Fleischer et al. rolim_identifying_2000 propose a Divide-and-Conquer algorithm for computing strongly connected components (SCC) of a digraph, which they call DCSC. First, find all descendants and predecessors of an arbitrary node, e.g. 
by carrying out breadth-first search (BFS) in the graph and its reversed version. The intersection of both sets constitutes a SCC. Observe how each further SCC has to be completely contained in either the descendants, the predecessors or the undiscovered nodes, such that the described routine may be called recursively for start nodes in each subset independently, until each vertex is assigned to a SCC. They prove an expected serial time complexity of O(n log n) for graphs on n nodes whose degrees are bounded by a constant. This is not optimal, but parallelization of the two searchs per vertex, as well as the recursive calls may significantly speed up execution. §.§ Analogy to Neural Networks Loukas loukas_what_2020 formally establishes an analogy between models like processor arrays and GNN by identifying processors with graph nodes and communication channels with edges. Therefore, the width of a GNN corresponds to p, and its depth to t. Loukas coins the term capacity for the product of width and depth of a GNN, reflecting the time-processor product of parallel algorithms. The shared memory of a PRAM finds its neural analog in graph-level features. Since the computation of a graph feature may take into account positional encodings of the nodes, we may assume a priority CRCW PRAM, encompassing all other PRAM models. § EFFICIENCY OF EXECUTING ALGORITHMS NEURALLY Inspired by the definition of efficiency in parallel computing, we define the efficiency of a neural executioner as follows. Let be a GNN with capacity c(n) executing an algorithm of sequential complexity C(n). Define its node efficiency as η (, ) C(n)/c(n). This definition implies an important assumption we make throughout this paper. When executing an algorithm on a GNN, one constant-time operation is to be executed per node per layer. This is not entirely unproblematic as discussed in section <ref>, but often expected when providing hints and helps to identify theoretical properties. Under this assumption, node efficiency denotes the share of nodes doing useful computations throughout the layers. Since the computational cost of a GNN also scales with the number of messages that are being sent, it is insightful to study the share of edges that transport relevant information as well. Let be a GNN operating over a graph G=(V,E), m | E |, to execute an algorithm . Then we call an edge (i,j) ∈ E active at layer t for a certain input x, if the operation to be executed by node j at time t involves information stored at node i at time t-1. Let a(t) be the number of active edges at time t, and T the total number of time-steps. Then define edge efficiency as worst case share of active edges when processing inputs x_n of size n, ϵ (, ) x_nmin 1/T∑_t=1^T a(t)/m. Note how neural efficiencies are defined relative to the algorithm they are executing as opposed to the task they solve. This allows for a neural executioner to be efficient in executing an algorithm that is itself not efficient in solving a task. §.§ Parallel Algorithms Entail Higher Efficiency Contradicting a GNN's parallel nature by teaching it to execute sequential algorithms artificially impedes the task. Training to solve tasks in parallel instead is more efficient, which may also simplify the function to learn. Shorter Trajectories. As observed by Loukas loukas_what_2020, the complexity of an algorithm lower bounds the capacity of a GNN executing it. 
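To make the two efficiency measures concrete, the following sketch evaluates them for a single execution trace. It takes the capacity to be width times depth, following the correspondence with parallel models discussed above; the function names and the trace format are illustrative assumptions rather than part of the framework itself.

def node_efficiency(sequential_complexity, width, depth):
    # eta = C(n) / c(n), with capacity c(n) = width * depth of the GNN.
    return sequential_complexity / (width * depth)

def edge_efficiency(active_edge_counts, num_edges):
    # Average share of active edges over the T layers of one roll-out;
    # the definition then takes the worst case (minimum) over inputs x_n.
    T = len(active_edge_counts)
    return sum(active_edge_counts) / (T * num_edges)

For example, bubble sort has sequential complexity Θ(n^2) and, executed one operation per step, needs a roll-out of length Θ(n^2) on a width-n network, so node_efficiency(n**2, n, n**2) evaluates to 1/n, matching the 1/n degradation for sequential algorithms derived below.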
If the number of processors is one, the depth alone needs to match the complexity, while the width might theoretically be set to one. But in practice, the width has to scale with the input size n to ensure applicability to different n. Therefore, training sequential algorithms forces overspending on capacity by a factor of n. Setting the width to n, as is often done to distribute one unit of information over each node, entails n available processors. Making use of them may shorten the trajectory of an algorithm by a factor of up to n in the case of optimal speedup, which allows the capacity to take on its lower bound. The capacity of a GNN directly translates to the time needed to train and execute it. Additionally, long roll-outs give rise to an issue Bansal et al. bansal_end–end_2022 refer to as overthinking, where many iterations degenerate the behaviour of a recurrent processor. Less Redundancy. Neural efficiencies denote the share of nodes and edges involved in useful computations. Redundant computations not only harm run times, but may also interfere with the algorithmic trajectory. Parameterising them correctly to prevent this can complicate the function to learn. Assuming the redundant nodes (grey in figure <ref>) need to preserve their information to be processed or put out later, their self-edges should execute an identity, while the additional incoming messages need to be ignored, i.e. mapped to a constant. In practice, this will be hard to do, which could entail a temporal variant of oversmoothing, where relevant information gets lost throughout the layers <cit.>. Oyedotun et al. skipconnections highlight how skip connections help to avoid the issue, Ibarz et al. ibarz_generalist_2022 introduce a gating mechanism to leave information unchanged, Bansal et al. bansal_end–end_2022 let their architecture recall the original input. So let's explore the efficiency of executing sequential and parallel algorithms. Let be a scalable GNN operating over a graph with n nodes and m edges. Further let be a sequential, and an efficient parallel algorithm on n processors, both of complexity C. Then executing and on , respectively, entails efficiencies η(, ) = O (1/n), ϵ(, ) = O( 1/m), η(, ) = O(1), ϵ(, ) = O(n/m). As observed above, the capacity c of a GNN executing a sequential algorithm of complexity C has to be c ≥ nC, while it may be c=C in the case of optimal speedup. Node efficiencies follow immediately. Since one processor can read only so much information, only a constant number of edges can be active at each layer during sequential processing, while up to a multiple of n edges can be active during parallel algorithms. This yields the stated edge efficiencies. Therefore, the share of nodes avoiding redundant computation cannot exceed 1/n when executing sequential algorithms, whereas it may reach up to 1 for efficient parallel algorithms. At the same time, the number of redundant messages is reduced by a factor of n. Removing the artificial bottleneck of a single processor prevents data from having to be stored until the processor gets to it. Allowing nodes to carry out meaningful computation frees them of the dead weight of acting as memory. Local Exchange of Information. In neural networks, information exchange is inherently local. The feature h_i^t of node i at time t may only depend on itself and its neighbours _i. E.g. 
for permutation invariant MPNN <cit.>, h_i^t = f (h_i^t-1, ⊕_j ∈𝒩_i g(h_i^t-1, h_j^t-1)) This paradigm is often not respected by classical algorithms, as depicted in figure <ref>. In the RAM model, the state h_i_t^t of register i_t updated at time t may depend on any two registers j_t and k_t: h_i_t^t = f^t_i (h_k_t^t-1, h_j_t^t-1), j_t, k_t arbitrary. Not being able to restrict which nodes have to communicate may render it advisable for a GNN to operate over a complete graph to make sure all necessary information is available at all times (see e.g. <cit.>). The situation is different in the setting of interconnected processing arrays, see figure <ref>. For example, OETS only ever requires neighbouring processors to compare their items. In general, at time t, the memory state h_i^t of processor i is computed by h_i^t = f^t_i (h_i^t-1, ||_j ∈ J_i^t h_j^t-1), J_i^t ⊆𝒩_i, where concatenation indicates how i may tell apart its neighbours. Therefore it suffices for the GNN to only rely on edges present in the interconnection graph. To emulate a PRAM algorithm, an empty graph would in principle be enough, though it might not be deemed advantageous to route all communication over the graph feature in practice. Restricting the number of edges further reduces the use of resources and may help performance, since fewer unnecessary messages are being passed. Interconnection graphs are mostly chosen to be sparse, enabling maximum edge efficiency. § METHODOLOGY To test the hypothesis, we consider the two elementary tasks of searching and sorting, as well as computing SCC as an example of a graph algorithm. The parallel algorithms are chosen from section <ref>; as sequential counterparts we use binary search, bubble sort and Kosaraju's SCC algorithm from the CLRS-30 benchmark <cit.>. Key data of the GNN we use are listed in table <ref>. We compare performances across various processor networks, namely the widespread architectures of DeepSets <cit.>, GAT <cit.>, MPNN <cit.>, and PGN <cit.>. The trajectories of the new algorithms are encoded for the CLRS framework as follows. Note that in every case, randomized positional information, as proposed by Mahdavi et al. mahdavi_towards_2023 and standard on CLRS, is provided as part of the input, to emulate the situation of uniquely identified processors. §.§ Searching Parallel Search. The hints for parallel search of x in A closely resemble its template. As can be seen in figure <ref>, each item A_i of A is represented by one node of an empty graph. A node indicates whether A_i ≤ x. The position rank_A (x) of x in A is predicted by the graph feature as a categorical variable over the nodes ( in <cit.>). Therefore we introduce an extra node carrying x as a placeholder to allow for as many categories as possible positions of x. To perfectly predict the outcome in this setting, the graph nodes may be updated by h_i = ReLU (A_i -x), yielding h_i = 0 if and only if A_i ≤ x. So the graph feature may be computed by rank_A (x) = min{i=1,…,n : h_i = 0 }. These steps closely align with the considered neural update functions, especially since the function updating the graph level possesses its own set of parameters. Additionally, the roll-out has constant length, leaving room for only a constant number of redundant edges, see figure <ref> and table <ref>. Altogether, we expect high performance on parallel search. Binary Search. As opposed to parallel search, binary search has an optimal complexity of O(log n).
But given the need for n nodes, it still requires an enhanced capacity of O(n log n), yielding low node efficiency. In CLRS-30, binary search is executed on a complete graph (whose edges are omitted in figure <ref> to avoid clutter), impairing edge efficiency, see table <ref>. Low efficiency is visible in figure <ref> by the amount of grey components. §.§ Sorting OETS. Actually swapping the items would require making numerical predictions. Instead, we predict changing predecessors as , following preimplemented examples. To still provide edges between nodes holding items to compare, we have to operate on a complete graph, sacrificing edge efficiency (see table <ref>), since only Θ(n) edges are active in each round, so ϵ = n/n^2. As hints, we feed for each round the current predecessors along with an edge indicating whether two nodes have to switch their role, and a graph-level with the parity of the round, serving as rudimentary clock. Bubble Sort. Though Bubble Sort induces the same amount of operations O(n^2) as OETS, it requires a larger network to be executed on (table <ref>). Again, along with operating over a complete graph, this entails low efficiencies. §.§ Strongly Connected Components DCSC. We input the undirected adjacency matrix as edge , along with the directed one as . Parallelizing the recursive calls of DCSC on multiple disjoint sets would require an extra feature dimension for every search that is going on. Therefore we only let the two BFS starting from the same source node be executed in parallel, which we each encode as is standard in CLRS-30. Additionally, a binary on each node is flipped to 1 as soon as it is discovered from both directions, indicating it belongs the currently constructed SCC (this is reset at the start of every new search). At the same time, it receives a to the source, which in the end constitutes the output. Throughout, we keep track of undiscovered nodes in another node . We choose the node with the smallest index from this set as next source. DCSC spends most of its time on the repeated BFS, a subroutine known to be learned well even on relatively simple architectures <cit.>, as it aligns well with neural execution <cit.>. Note how they let each node consider all its incoming edges in parallel, as is done on CLRS-30. This not only allows the trajectory to be shortened from O(n+m) to O(n), but also prevents redundant computations from having to be handled explicitly. Except for the source, each node can carry out the same computation at each step (see <cit.> for details) – just that this will only change its state whenever information flowing from the start node reaches it. DCSC only has to pass the index s of the source node instead of computing predecessor pointers, so computation looks like depicted in figure <ref>, closely resembling the situation in figure <ref>. Therefore, efficiency is expected to be less important for predictive performance in this special case. An obvious upper bound to DCSC's run time is O(n^2), accounting for one (two-sided) BFS per node, resulting in the big capacity reported in table <ref>. There is also no guarantee for more than one node and edge being active per step per BFS, resulting in low efficiencies. But this represents edge cases at best, such that the average trajectories will be much shorter and more efficient, as experiments will show. The core of DCSC aligning so well with neural execution promises good results. Kosaraju. 
The skeleton of Kosaraju's algorithm as implemented in CLRS-30 on the other hand is formed by a depth first search (DFS), which is more challenging for neural executioners <cit.>. As opposed to the closely related BFS, it is hard to parallelize. In fact, when relying on lexicographic ordering for tie-braking, it is considered an inherently sequential algorithm <cit.>. Since nodes have to wait for the search to retract from its siblings, computation cannot be carried out as in figure <ref>, so processing needs be timed correctly. The total run time is O(n+m), entailing the capacity and efficiencies reported in table <ref>. § RESULTS Predictive performance is reported in table <ref>. As expected, parallel search achieves almost perfect results. Meanwhile, training time is reduced by a factor of almost 3 as compared to binary search (see figure <ref>). Despite DCSC's only partial parallelization and the asymptotically optimal linear run time of its sequential opponent, training time is more than halved for the SCC task. At the same time, predictions become up to more than twice as accurate. On the sorting task, the sequential algorithm entails better accuracy, with the parallel one mostly falling within one standard deviation. Though both algorithms require the same asymptotic number of operations, training OETS takes a fraction of the time needed for bubble sort (figure <ref>). § DISCUSSION Neural efficiency only loosely correlates with predictive performance when comparing tables <ref> and <ref>. This is not too surprising, since correctly parameterising redundant computations is only one of many aspects that make a function hard to learn. We propose a rather one-sided relationship, where low efficiencies can harm accuracy (if not circumvented as in BFS, see section <ref>), but high efficiencies do not necessarily enhance learning success. We would like to highlight the importance of taking the perspective on neural networks as computational models when executing algorithms, as it opens access to the rich theory of computational complexity. E.g. the classes of NC (efficiently parallelizable) and P-complete problems (mostly thought of as inherently sequential) <cit.> inform us on which tasks may be hard to execute neurally, to tackle them more effectively. However in doing so, it is important to keep in mind the gap between the respective sets of constant time operations, with none being strictly more powerful than the other. On the one hand, a single RAM instruction may need to be approximated by entire subnetworks. On the other hand, one neural step suffices to process all incoming edges of a node during execution of BFS <cit.>. This breaks up the strict correspondence between time-processor product and capacity. § CONCLUSION As suggested in section <ref>, parallel algorithms prove to be a lot more efficient to learn and execute on neural architectures than sequential ones. Often, OOD predictions on algorithmic tasks are significantly improved as well, suggesting that higher node and edge efficiency can help learning. Future work has to show how performance is impacted for other tasks, on more elaborate architectures like in <cit.>, and in generalist settings. § ACKNOWLEDGEMENTS We would like to thank Razvan Pascanu and Karl Tuyls for their valuable comments, as well as Pietro Liò for insightful discussions and Torben Hagerup for the support he provided. icml2023
http://arxiv.org/abs/2307.04320v1
20230710032804
Collimated hot electron generation from sub-wavelength grating target irradiated by a femtosecond laser pulse of relativistic intensity
[ "Kamalesh Jana", "Amit D. Lad", "Guo-Bo Zhang", "Bo-Yuan Li", "V. Rakesh Kumar", "Moniruzzaman Shaikh", "Yash M. Ved", "Min Chen", "G. Ravindra Kumar" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
http://arxiv.org/abs/2307.04492v1
20230710113046
Calculating Originality of LLM Assisted Source Code
[ "Shipra Sharma", "Balwinder Sodhi" ]
cs.SE
[ "cs.SE" ]
Calculating Originality of LLM Assisted Source Code Shipra Sharma [email protected] Balwinder Sodhi Department of Computer Science and Engineering Indian Institute of Technology Ropar India [email protected] ========================================================================================================================================================================== The ease of using a Large Language Model (LLM) to answer a wide variety of queries and their high availability has resulted in LLMs getting integrated into various applications. LLM-based recommenders are now routinely used by students as well as professional software programmers for code generation and testing. Though LLM-based technology has proven useful, its unethical and unattributed use by students and professionals is a growing cause of concern. As such, there is a need for tools and technologies which may assist teachers and other evaluators in identifying whether any portion of a source code is LLM generated. In this paper, we propose a neural network-based tool that instructors can use to determine the original effort (and LLM's contribution) put by students in writing source codes. Our tool is motivated by minimum description length measures like Kolmogorov complexity. Our initial experiments with moderate sized (up to 500 lines of code) have shown promising results that we report in this paper. LLM, ChatGPT, plagiarism in education, automation in CSE education, Minimum Description Length § INTRODUCTION With the advent of Large Language Models (LLM) models such as ChatGPT, several coding tasks have become easy to complete via use of such LLMs. Such tasks include programming assignments in courses, generating subroutines and code fragments for commonly encountered algorithmic tasks, and so on. For example, programming assignments in many Computer Science and Engineering (CSE) courses can be generated in large measure <cit.> via these models. It has become very difficult to detect by standard plagiarism detection tools such as Turnitin <cit.>, that such source code is LLM generated. Even a complex assignment can be broken into simpler components, and each component can be written separately using such LLMs. Given this situation, it is highly desirable to construct a tool which can detect unauthorized or unattributed LLM help taken by the students in preparing their coding assignments. Usage of such LLM-assisted coding tools is recommended as the engineers/students may be required by the employers to be conversant with the use of such tools <cit.>. Although the LLM-based coding assistant tools seem to reply correctly to complex queries akin to an expert, they still lack the conceptual understanding of the queries as well as the results generated by the tool. The major shortcoming of these tools is lack of deep reasoning and analytical skills <cit.>. Hence, before we begin to resolve the difficulties mentioned above, we should first be able to measure (at least approximately) the amount of originality in an assignment. Motivated by the above, and by potential applications in the domain of Software Engineering, we consider the following research questions in this paper. RQ1RQ 1 Can we quantify the amount of original contribution by a student in an assignment, assuming that he/she has used an LLM such as ChatGPT for its preparation? 
RQ2RQ 2 How can we detect the similarity in the original contribution portion of two separate submissions when it is known that the students can take assistance from LLM-based tools in creating the submissions? RQ3RQ 3 How efficiently can we automate our answers to the above questions? In this paper, we propose two scores: the originality score o(D) and the similarity score s(D) of a source code D as solutions to the above questions. We further propose to use these scores extensively in an adaptable teaching process as follows: * Students with less measure of original contribution in their assignments (i.e., less originality scores) may be awarded suitably reduced scores. * Students with large amounts of overlap in their respective contributions (i.e., high similarity scores) may not be awarded extra “originality credits”. * More credits may be allocated to the “difficult” fragments of the program (or, assignment submission), and lesser credits may be allocated to the “easier” fragments of the program (or, assignment submission). These steps will lead to a constructive assessment of students, which encourages the students to develop original and high-depth analytic thinking. The above discussed scenario is one of the many applications of our work. Others are its usage in software development as these LLM-based models cannot replace software engineers (as of now), but can assist them <cit.>. § COMPUTING ORIGINALITY SCORE OF A PROGRAM §.§ Setting up the problem Suppose a programmer has unlimited access to a large language model 𝒜 (𝒜 can be ChatGPT, GPT-J, etc.). The programmer constructs a software program D using (see Figure <ref>): * the answers A_1, A_2, …, A_z to a sequence P_1, P_2, …, P_z of z prompts to 𝒜, and * the programmer's own original contribution 𝒪. Program D is finally constructed by combining A_1, A_2, …, A_z and 𝒪 using conventional text editing, rearrangements, etc. To be more specific, a conventional plagiarism detection software (say, Turnitin) will detect high similarity between the strings D and the corpus {A_1, A_2, …, A_n, 𝒪}. We define the following metrics: * total effort e(D) of the programmer as the total length of all prompts and the programmer's original contribution: e(D) = ∑_i=1^z |P_i| + |𝒪| * originality score o(D) (0 ≤ o(D) ≤ 1) of the program: o(D) = |𝒪|/|D| Our assumption is that a lower originality score would imply a lower original contribution by the programmer. Any programmer or student using LLM models to assist in writing programs implicitly minimizes e(D) and in turn also minimizes o(D). This motivates the following question. Question 1. Given a document D and LLM 𝒜, calculate the minimum originality score o(D). (This corresponds to <ref>). §.§ Solving <ref> To solve Question 1 we bound the maximum number of prompts z, which is a positive integer and the maximum length L of each prompt (P_1, P_2, …, P_z). We now formulate a bounded version of Question 1 above: Question 1.1. Compute the minimum value of the originality score o(D), under the assumption that the programmer can give at most z prompts, each of length at most L. Let T be a conventional plagiarism detector (a trivial one to use could be the diff command in UNIX-based systems). Figure <ref> illustrates the algorithm for solving Question 1.1. The program D in Figure <ref> forms the input to a neural network N. The output of N is of size z · L, and corresponds to the z unknown prompts to LLM 𝒜. The output of N is given as input to LLM 𝒜 to obtain answers A_1, A_2, …, A_z. 
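As a concrete reading of the two metrics defined above, the following sketch measures lengths simply as character counts; this is only one possible instantiation, since the paper leaves the length |·| abstract.

def total_effort(prompts, original):
    # e(D) = sum of the prompt lengths plus the length of the original contribution.
    return sum(len(p) for p in prompts) + len(original)

def originality_score(document, original):
    # o(D) = |O| / |D|, the share of D attributed to the programmer's own work.
    return len(original) / len(document)

For instance, a 2,000-character submission whose original part amounts to 500 characters receives o(D) = 0.25, regardless of how many prompts were issued to produce the remaining part.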
A conventional plagiarism detector T is used to find the similarity percentage t between D and the output answers (A_1, A_2, …, A_n). The original contribution 𝒪 is estimated by removing the parts of D which match with the output answers. Finally, the output (originality score) u is equal to |𝒪|/|D|. If the similarity percentage between D and (A_1, A_2, …, A_n) is t, the originality score is expected to be approximately 1 - 0.01 · t[as t is percentage score we convert it to a number between 0 and 1 by multiplying by 0.01]. The output originality score u is given as the feedback to neural network N, with the objective of minimizing u. Remark. Please note that giving the same prompt again to an LLM can generate somewhat different answers. To cover all possibilities, our model allows for the same prompt to be repeated more than once in the sequence P_1, P_2, …, P_z. §.§ Applying the minimum description length (MDL) principle The minimum description length (MDL) principle <cit.> is a well-known principle for model selection. The MDL principle always selects the shortest description of given data, from the set of all possible descriptions. The quantity Γ=(P_1, P_2, …, P_z, 𝒪) (see Section <ref>) can be viewed as the content comprising of prompts plus the original code added by the student that results in the desired program as the output from an LLM. Thus, Γ can be thought to represent a description of D, which can lead to generation of the desired code. In other words, given the description Γ and LLM 𝒜, we can reconstruct program D almost completely. Our proposed solution (see Section <ref>) can then be viewed as an application of the MDL principle. For each possible description Γ, our algorithm selects the description with minimum “length", where the length of a description Γ is defined as its originality score |𝒪|/|D|. § COMPUTING SIMILARITY SCORE OF TWO PROGRAMS §.§ Setting up the problem Suppose two programmers Alice and Bob produce programs D_1 and D_2 respectively. Both programs solve the same computational problem, and both Alice and Bob had unlimited access to LLM 𝒜 during the coding process. Suppose Alice constructed D_1 using prompts P_1, P_2, …, P_z and original contribution 𝒪_1. Similarly, suppose Bob constructed D_2 using prompts Q_1, Q_2, …, Q_z and original contribution 𝒪_2. Let p be the similarity percentage between the two descriptions, Γ_1=(P_1, P_2, …, P_z, 𝒪_1) and Γ_2=(Q_1, Q_2, …, Q_z, 𝒪_2) using the conventional plagiarism detector T. Then we define similarity score, s(D_1, D_2) = 0.01 · p We now state the second question considered in this paper: Question 2. Given two source codes D_1 and D_2 and LLM 𝒜, calculate the similarity score s(D_1, D_2). (This corresponds to <ref>.) §.§ Solving <ref> In analogy with our approach for originality score, we consider a bounded version of Question 2: Question 2.1. Given two source codes D_1 and D_2, compute the maximum value of similarity score s(D_1, D_2), under the assumption that both Alice and Bob can give at most z prompts, each of length at most L. Figure <ref> illustrates the algorithm for solving Question 2.1: Source codes D_1 and D_2 are the inputs to two neural networks N_1 and N_2. The output of each neural network is of size z · L. The output of N_1 corresponds to the z unknown prompts of Alice and the output of N_2 corresponds to the z unknown prompts of Bob. Next, the outputs of N_1 and N_2 are given as input to LLM 𝒜 to generate answers A_1, A_2, …, A_z and B_1, B_2, …, B_z respectively. 
Using algorithm T, we compute the original contribution 𝒪_1 of Alice for prompts P_1, P_2, …, P_z and the original contribution 𝒪_2 of Bob for prompts Q_1, Q_2, …, Q_z. Finally, the similarity s between (P_1, P_2, …, P_z, 𝒪_1) and (Q_1, Q_2, …, Q_z, 𝒪_2) is computed using T, and this is used as feedback for both neural networks N_1 and N_2. The objective of the training process is to maximize (see Question 2.1) the output similarity s. Remark 1. In our implementation, we input (D_1, D_2) to a single neural network N, with ouput (P_1, P_2, …, P_z, Q_1, Q_2, …, Q_z). The intuition is that a single neural network may lead to faster convergence due to information flow along cross connections between input neurons of D_1 and D_2. Remark 2. In terms of MDL principle, the above network tries to compute the shortest description ((P_1, P_2, …, P_z, 𝒪_1), (Q_1, Q_2, …, Q_z, 𝒪_2)) of (D_1, D_2), where the “length" of the description is defined as the similarity score of T on inputs (P_1, P_2, …, P_z, 𝒪_1) and (Q_1, Q_2, …, Q_z, 𝒪_2). § PREVIOUS WORK Kolmogorov complexity and related measures. When the algorithm 𝒜 is a universal Turing machine (instead of a LLM), the minimum length description of program P is called its Kolmogorov complexity <cit.>. In <cit.>, the authors propose that neural network models such as GPT-3 have a “simplicity bias" and prefer data with low Kolmogorov complexity. Kolmogorov complexity inspired measures have a long history of application in similarity detection and compression. In <cit.>, the authors define a similarity metric called Normalized Information Distance (NID), based on Kolmogorov complexity. Since Kolmogorov complexity is non-computable, the authors further develop the notion of Normalized Compression Distance (NCD), which is an efficiently computable variant of NID using compression algorithms like gzip. More in-depth treatment of this topic is available in <cit.> and related papers. Autoencoders. An autoencoder <cit.> is a neural network which first compresses the input using an encoder network and then tries to recover the input from the compressed code by using a decoder network <cit.>. For the use of minimum description length (MDL) principle for autoencoders, see <cit.>. In the algorithm proposed in this paper (Figure <ref>), the neural network N can be viewed as the encoder, and the LLM 𝒜 can be viewed as the decoder. Further, note that only the encoder is trained using feedback from the output. AI-detection tools. We briefly discuss few recent softwares for detecting whether a text is generated by a LLM or written by a human. An AI text classifier by OpenAI, the company behind ChatGPT, is now available <cit.>. The classifier outputs the probability that a given input text is AI-generated. GPTZero <cit.> is another AI-detection tool, which also provides scores for burstiness and perplexity <cit.>. Another well-known tool is Originality.AI <cit.>. § PRELIMINARY EXPERIMENTS AND VISION FOR FUTURE WORK For an initial experimental setup for the proposed ideas, we designed a prompt space 𝒫 of size 64. Each prompt in this space is defined by a tuple of three words taken from independent sets A, B, C. Each of A, B and C contains words taken from common programming vocabulary encountered while describing the programs. For our experiments we chose |A|=8, |B|=2, |C|=4. For example, if the prompt is (“insertion", “sort", “C"), it is equivalent to writing a prompt: . We generated a pool of 10 answers to this prompt using calls to ChatGPT and BLOOM. 
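A minimal sketch of how such a prompt space can be enumerated and rendered into queries is given below. The concrete word sets and the phrasing template are illustrative stand-ins, since the exact contents of A, B, C and the precise prompt wording are not reproduced in the text.

from itertools import product

# Illustrative stand-ins with |A| = 8, |B| = 2, |C| = 4, giving 8 * 2 * 4 = 64 prompts.
A = ["insertion", "bubble", "merge", "quick", "heap", "selection", "binary", "linear"]
B = ["sort", "search"]
C = ["C", "C++", "Java", "Python"]

prompt_space = list(product(A, B, C))     # the 64 prompt tuples
assert len(prompt_space) == 64

def render(prompt):
    # Turn a tuple such as ("insertion", "sort", "C") into a query for the LLM.
    a, b, c = prompt
    return f"Write a {c} program that performs {a} {b}."

Each of the 64 tuples is then rendered and submitted to the LLMs repeatedly to build the pool of (prompt, answer) pairs.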
The BLOOM model was run on a Macintosh, while ChatGPT was prompted through API calls. This gave us a collection of 64 · 10 = 640 (prompt, answer) pairs. We store this set in an offline repository ℛ, which we used to train a neural network N using PyTorch. For each answer, the neural network was trained with the following loss function: generate two prompts independently at random from the output probability distribution and calculate their similarity with the answer. Next, we collected a test set 𝒯 of 50 programs. Each program D in 𝒯 was manually evaluated for similarity with the repository. Accordingly, an originality score o(D) was assigned to every program in 𝒯 using the formulas discussed in Section <ref>. The neural network N takes as input a source code D∈𝒯 and the output is a probability distribution over the prompt space 𝒫. The best score provided by the neural network is the computed originality score f(D) for two prompts. We found that the mean squared error ϵ between o(D) and f(D) was 0.3 (0≤ϵ≤ 1), which is an encouraging result (<ref>). This experiment required a considerable amount of manual effort, as our goal was to prove the viability of our proposed idea. As the proposed idea proves to be implementable and valid, we propose the following research vision: * We plan to create a prompt space that accurately maps to the internal representation of prompts for large-scale deployed LLMs such as BLOOM, ChatGPT, BARD, etc. * We plan to increase the size of the repository ℛ, so that it consists of a realistic number of (prompt, answer) pairs. * In the future, we plan to automate data cleaning, processing and model building so that the model can be trained and updated on real-world data on a regular basis. * We plan to increase the number of prompts in the prompt sequence to at least 20. * Finally, we will define prompt complexity, and how it minimizes the originality score to be always less than 0.45. The implication is that the easier the prompt needed to obtain the desired code fragment, the lower the originality score of the source code. § CONCLUSION As current plagiarism detection tools use a corpus of documents obtained from various sources for comparison, we envision an originality detection tool which generates a prompt sequence and calculates the minimum originality score. The key idea we have proposed in this paper is: the tools for detecting originality of LLM-generated source code need to “learn” from the LLM-generated source code itself and the prompts used to generate such source code. Rather than trying to compute the probability that a text is AI-generated or human-generated (this has its technical limitations), we feel the focus should be on computing an originality score using a pool of LLMs. Our initial results are encouraging, and our computed originality scores are in agreement with human evaluations of originality and similarity. 9 farrokhnia1 Farrokhnia, Mohammadreza, et al. A SWOT analysis of ChatGPT: Implications for educational practice and research, Innovations in Education and Teaching International (2023): 1-15. rosenblatt2 Rosenblatt, Kalhan. ChatGPT passes MBA exam given by a Wharton professor, Retrieved Jan 25 (2023): 2023. dwivedi3 Y.K. Dwivedi, N. Yogesh, et al., “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 71 (2023): 102642. khalil4 Khalil, Mohammad, and Erkan Er. Will ChatGPT get you caught?
Rethinking of plagiarism detection. arXiv preprint arXiv:2302.04335 (2023). weisz5 Weisz, Justin D., et al. Better together? an evaluation of ai-supported code translation. 27th International Conference on Intelligent User Interfaces. 2022. peng6 Peng, Sida, et al. The impact of ai on developer productivity: Evidence from github copilot. arXiv preprint arXiv:2302.06590 (2023). anu7 Baidoo-Anu, David, and Leticia Owusu Ansah. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484 (2023). ss8 Shipra Sharma and Balwinder Sodhi. FACT-from actual to conceptual tie-ins: a multi-level knowledge graph structured on context and semantics of software artefacts. Proceedings of the 35th Annual ACM Symposium on Applied Computing. 2020 mdl1 A. Barron, J. Rissanen and B. Yu, The minimum description length principle in coding and modeling, IEEE transactions on information theory, vol. 44, no. 6, pp. 2743–2760, 1998, IEEE. goldblum2023free Micah Goldblum and Marc Finzi and Keefer Rowan and Andrew Gordon Wilson, The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning, 2023. kolmogorovbook Ming Li and Paul Vitányi, An Introduction to Kolmogorov Complexity and Its Applications (2nd Ed.), ISBN: 0387948686, Springer-Verlag, Berlin, Heidelberg, 1997. livitanyi1 Ming Li, Xin Chen, Xin Li, Bin Ma and P. M. B. Vitanyi, The similarity metric, IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3250-3264, Dec. 2004, doi: 10.1109/TIT.2004.838101. vitanyi2 Rudi Cilibrasi and Paul M. B. Vitányi, Clustering by compression, CoRR:cs.CV/0312044, 2003. vitanyi3 M. Li, J.H. Badger, X. Chen, S. Kwong, P. Kearney, and H. Zhang. An information-based sequence distance and its application to whole mitochondrial genome phylogeny, Bioinformatics, 17:2(2001), 149–154. cilibrasi2 R. Cilibrasi, P. Vitanyi and R. de Wolf, Algorithmic clustering of music, Proceedings of the Fourth International Conference on Web Delivering of Music, 2004. EDELMUSIC 2004., Barcelona, Spain, 2004, pp. 110-117, doi: 10.1109/WDM.2004.1358107. deeplearningbook Ian J. Goodfellow and Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2016 openai-classifier https://platform.openai.com/ai-text-classifier gptzero https://gptzero.me/ perplexity D. M. Blei, A. Y. Ng and M. I. Jordan, Latent Dirichlet Allocation, Journal of machine Learning research, 3 Jan 2003, 993-1022. burstiness T. Lappas, B. Arai, M. Platakis, D. Kotsakos and D. Gunopulos, On burstiness-aware search for document sequences, InProceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining 2009 Jun 28, pp. 477-486. originalityai https://originality.ai/ autoencoder C.Y. Liou, W.C. Cheng, J.W. Liou and D.R. Liou, Autoencoder for words, Neurocomputing 139:84-96, Sep 2 2014 . hinton G. E. Hinton and R. Zemel, Autoencoders, Minimum Description Length and Helmholtz Free Energy, Advances in Neural Information Processing Systems, Editors: J. Cowan and G. Tesauro and J. Alspector, Vol. 6, 1993.
http://arxiv.org/abs/2307.05418v2
20230711163502
Stability and genericity of bang-bang controls in affine problems
[ "Alberto Domínguez Corella", "Gerd Wachsmuth" ]
math.OC
[ "math.OC" ]
Stability and genericity of bang-bang controls in affine problems Alberto Domínguez Corella Gerd Wachsmuth ========================================================================== We analyse the role of the bang-bang property in affine optimal control problems. We show that many essential stability properties of affine problems are only satisfied when minimizers are bang-bang. Moreover, we prove that almost any perturbation in an affine optimal control problem leads to a bang-bang strict global minimizer. We work in an abstract framework that allows us to cover many problems in the optimal control literature; this includes problems constrained by partial and ordinary differential equations. We give examples that show the applicability of our results to specific optimal control problems. bang-bang, affine optimal control, stability, genericity 49J30, 65K10, 49K40 § INTRODUCTION The term bang-bang was coined a long time ago in control theory. The term is informal and has become widely adopted in the field to refer to a control that switches from one extreme value to another; in analogy with relays, which make a bang to change from off to on, and another bang to come back from on to off. The term also applies to controls that take several values (more than two), usually the vertices of some polygon or polyhedron. One of the fundamental principles in the theory of mathematical control is the so-called bang-bang principle; this result asserts that for control systems originating from an ordinary differential equation, any state that can be reached by a feasible control can also be attained by a bang-bang control, see, e.g., <cit.>. Bang-bang controls also arise in optimal control problems, especially in affine problems, where the control appears linearly (and hence the name). This is mainly because they lack the so-called Tykhonov regularization term, and hence some regularity of minimizers is lost, or better said, it was never there; it was artificially added by the regularizer. In this paper, we study several aspects related to the stability of bang-bang minimizers, and moreover to instability phenomena. These aspects have many important uses, e.g., they help to deal with uncertainty of data, to understand the technicalities appearing in the numerical methods, to regularize problems, etc. In order to show that the phenomena studied here are independent of particular control systems, we work in a very general framework that allows us to cover several optimal control problems. We illustrate this with a handful of examples. Let us now comment a bit on the related literature.
For optimal control problems governed by ordinary differential equations, the stability analysis of bang-bang minimizers started with <cit.> for linear quadratic problems, and continued with <cit.> for more general affine systems. After that, several refinements and analyses of numerical schemes came. In <cit.>, the accuracy of implicit discretization schemes was analysed under growth assumptions on the switching function; these same assumptions were used in <cit.> to prove the convergence of gradient methods. In <cit.>, the stability of the first order necessary conditions was studied by means of the metric regularity property. In <cit.>, assumptions more natural than those of previous papers were introduced to obtain results about stability and Lipschitz rates of convergence; these assumptions involved L^1-growths, similar to the classic coercivity condition, but with some modifications. Recently, in <cit.>, the metric subregularity of the optimality mapping was used to prove the accuracy of the model predictive control algorithm. The stability analysis of bang-bang minimizers for problems constrained by partial differential equations started with <cit.>, where elliptic optimal control problems were considered. Since then, there have been several papers dealing with other types of problems with bang-bang minimizers; see, e.g., <cit.>. For parabolic problems, we mention <cit.>, where a study on the accuracy of variational discretization was carried out; and <cit.>, where stability with respect to initial data was analysed. In <cit.>, a fully discrete scheme is proposed and analyzed for a velocity tracking problem with bang-bang controls. Finally, we comment on the recent paper <cit.>, where bilinear problems with L^1–L^∞ constraints are considered; the authors proved that under certain hypotheses, the optimal controls must be bang-bang (our setting is different, but with a few changes it applies to the problem considered there). We present a similar result regarding the bang-bang nature of optimal controls, but for linearly perturbed problems, see <Ref> for more details. We now describe the organization of the paper and the contributions of each section. In <Ref>, we introduce the optimization problem and give the definition of the bang-bang property under consideration. The first result is the following characterization. A control is bang-bang iff every sequence converging weakly to it converges strongly. This result (<ref>) is based on previous results from <cit.> and <cit.>. Since the proof is a bit involved and requires some external tools from measure theory and set-valued analysis, it is given in the Appendix. After that, we use the characterization to prove that strict local minimizers satisfy a growth condition iff they are bang-bang, see <Ref> for more details and references. In <Ref>, we prove that an affine problem is well-posed in Tykhonov's sense iff it possesses a bang-bang strict global minimizer. We also use a smooth variational principle to prove that this situation is generic, yielding a result of the following form. Almost every linearly perturbed problem has a bang-bang strict global minimizer. In <Ref>, we prove that for strict local minimizers, hemicontinuity notions of stability under linear perturbations coincide, and all of them are equivalent to the bang-bang property; therefore, many undesirable instability phenomena can occur when the minimizer is not bang-bang.
This shows that the study of problems with singular arcs is truly different in nature from the pure bang-bang case; this can already been seen in previous publications dealing with the stability of bang-singular-bang minimizers, see <cit.>. In <Ref>, we continue the study of stability, but now for the first order necessary condition. This done by means of a modification of the metric subregularity property. We prove that for isolated critical points (isolated solutions of the first order necessary condition), the stability of the first order necessary conditions is equivalent to the bang-bang property. As an application of this, we prove a stability result concerning L^p-norm regularizations. <Ref> is devoted to illustrate the applicability of our results by means of examples. The first example is an affine optimal control problems constrained by ordinary differential equations, being this the model for the theory here developed. The other examples are concerned with optimization problems constrained by partial differential equations; this includes a classical elliptic problem, and a velocity tracking one. Finally, Appendix A gives alternative proofs to some of the results in <cit.>, which to the best of the author's knowledge was not done before. We mention that some of the results in <cit.> contain flaws, although not substantial ones; see the beginning of Appendix A for more details. § THE ABSTRACT OPTIMAL CONTROL PROBLEM We consider an abstract optimization problem for which the bang-bang property can be defined. The feasible set resembles the usual control sets appearing in optimal control theory. §.§ The model Let (X,𝒜,μ) be a measure space and consider the set 𝒰 := {u ∈L^1(X)^m : u(x)∈ U a.e. in X}, where U is a subset of ℝ^m. We consider the abstract optimization problem min_u∈𝒰𝒥(u), where 𝒥:𝒰→ℝ is a given real-valued function. The set in (<ref>) is called the feasible set and the function in (<ref>) is called the objective functional. We consider problem (<ref>)–(<ref>) under the following standing assumption. We require that the following statements hold. * (X,𝒜,μ) is a finite and nonatomic measure space; * U is a convex compact subset of ℝ^m that contains more than one element; * 𝒥:𝒰→ℝ is weakly sequentially continuous. The previous assumption is very reasonable for affine optimal control problems. The reason is that in those problems, the control appears linearly (hence the name), and the weak sequential continuity of the objective functional can be expected. Moreover, under <ref>, the feasible set enjoys many good properties. The feasible set 𝒰 is a nonempty, convex and weakly sequentially compact subset of L^1(X)^m. It follows as a particular case of <cit.> that 𝒰 is a nonempty convex weakly compact subset of L^1(X)^m. From the Eberlein–Šmulian Theorem, we can conclude that 𝒰 must be weakly sequentially compact. From the previous proposition, for every sequence in the feasible set, we are able to choose a weakly convergent subsequence; which is extremely useful in existence arguments. Let u^*∈𝒰. We define the minimality radius of u^* as r̅_u^*:=sup{δ≥0: 𝒥(u^*)≤𝒥(u) for all u∈𝒰 with |u-u^*|_L^1(X)^m≤δ}. We say that u^* is a local minimizer of problem (<ref>)–(<ref>) if r̅_u^*>0. We say that u^* is a global minimizer of problem (<ref>)–(<ref>) if r̅_u^*=+∞. From the weak sequential compactness of the feasible set and the weak sequential continuity of the objective functional, the existence of minimizers follows trivially. 
Problem (<ref>)–(<ref>) has at least one global minimizer. Some of the arguments given in the sequel are of a sequential flavor; however, one can easily argue with their topological counterparts. As an example of this, we prove that the objective functional is also weakly continuous. Due to the lack of separability assumptions, we use Whitley's construction of sequences for limit points, see <cit.>; this construction is condensed in Day's Lemma, see <cit.>. The following statements hold. (i) The feasible set 𝒰 is weakly compact; (ii) the objective functional 𝒥:𝒰→ℝ is weakly continuous. Item (i) follows as a particular case of <cit.>. We proceed to prove item (ii). Let O be an open subset of ℝ, and let A:=𝒰∖𝒥^-1(O). We will prove that A is weakly closed by proving that A^w⊂ A, where A^w denotes the closure of A with respect to the weak topology of L^1(X)^m. Let u∈A^w be arbitrary. Observe that A is relatively weakly compact, and hence by the Eberlein–Šmulian Theorem (in the form of <cit.>), every sequence in A has a weak limit point. We can then employ Day's Lemma (<cit.>) to find a sequence {u_n}_n∈ℕ⊂ A converging weakly to u. Then, as 𝒥(u_n)∈ℝ∖ O for all n∈ℕ and 𝒥 is weakly sequentially continuous, we conclude that 𝒥(u)∈ℝ∖ O, and hence that u∈ A. It follows that 𝒥^-1(O) is weakly open. As O was an arbitrary open subset of ℝ, item (ii) follows. Before closing this subsection, we recall a standard result concerning the equivalence of weak convergence in L^p-spaces for sequences in the feasible set. Let p∈[1,∞). For a sequence {u_n}_n∈ℕ⊂𝒰 and u∈𝒰, the following statements are equivalent. * u_n⇀^* u weakly* in L^∞(X)^m. * u_n⇀ u weakly in L^p(X)^m. The implication (i) ⇒ (ii) follows from the definition of weak convergence and the fact that L^p'(X)^m⊂ L^1(X)^m, where p' denotes the conjugate exponent of p (recall that μ is finite). The implication (ii) ⇒ (i) follows from the uniform boundedness of 𝒰 in L^∞(X)^m and the density of L^p'(X)^m in L^1(X)^m. §.§ Bang-bang property We give now a precise definition of the bang-bang property for elements of 𝒰. We denote by ext U the set of extreme points of U. We say that u∈𝒰 is bang-bang if u(x)∈ ext U for a.e. x∈ X. Elements of the feasible set with the bang-bang property are of general interest because they saturate the pointwise constraints. For example, in the particular case of a convex polytope, bang-bang elements take values only at the vertices of the polytope almost everywhere. Recall that a convex polytope is the convex hull of finitely many points. In the 2-dimensional and 3-dimensional cases, convex polytopes are exactly polygons and polyhedra, respectively. We will now state a couple of results concerning sequences that converge weakly to bang-bang elements. The following result was proved first in <cit.> using <cit.>; however, there are some inconsistencies in some of the proofs of <cit.>. In the Appendix, we comment on some of the flaws in <cit.>, and give an alternative proof of this result and other related ones. Let u^*∈𝒰 be bang-bang and {u_n}_n=1^∞⊂𝒰 be a sequence. If u_n⇀ u^* weakly in L^1(X)^m, then |u_n-u^*|_L^1(X)^m→0. This is a particular case of <ref>. The phenomenon described in the previous proposition is well known in the calculus of variations, where weakly convergent minimizing sequences are usually also strongly convergent. However, this is not always the case; the existence of badly behaved sequences follows from the following weak clustering principle.
For any element of the feasible set without the bang-bang property, it is possible to find a sequence in the feasible set converging to it such that the sequence clusters on a sphere of arbitrarily small radius. Let u^*∈𝒰 . If u^* is not bang-bang, there exists δ_0>0 such that for every δ∈(0,δ_0] there exists a sequence {u_n}_n∈ℕ⊂𝒰 with the following properties. * |u_n-u^*|_L^1(X)^m=δ for all n∈ℕ; * u_n⇀ u^* weakly in L^1(X)^m. This is a particular case of <ref>. The two previous results can be combined to obtain the following characterization of the bang-bang property. Let u^*∈𝒰. The following statements are equivalent. * u^* is bang-bang. * u_n⇀ u^* weakly in L^1(X)^m implies |u_n-u^*|_L^1(X)^m→ 0 for any sequence {u_n}_n∈ℕ⊂𝒰. The next result was proved in <cit.> for local minimizers, in the particular case when the constraints in (<ref>) are box-like (in the one-dimensional case m = 1) and under the additional assumptions of separability and completeness of the measure space (X,𝒜,μ). Let u^*∈𝒰. Suppose that u^* is not bang-bang. Then, there exists δ_0 > 0 such that for any δ∈ (0,δ_0] and for any ε > 0, there exists u ∈𝒰 with |u - u^*|_L^1(X)^m = δ and 𝒥(u)≤𝒥(u^*) + ε. This follows from <ref> and the weak sequential continuity of the objective functional. §.§ Growth of the objective functional at strict local minimizers We begin recalling the definition of strict minimality, both local and global. Let u^*∈𝒰. We define the strict minimality radius of u^* as r̂_u^*:=sup{δ≥0: 𝒥(u^*)<𝒥(u) for all u∈𝒰∖{u^*} with |u-u^*|_L^1(X)^m≤δ}. We say that u^* is a strict local minimizer of problem (<ref>)–(<ref>) if r̂_u^*>0. We say that u^* is a strict global minimizer of problem (<ref>)–(<ref>) if r̂_u^*=+∞. It was proved in <cit.> that no growth of the objective functional at minimizer can occur if the minimizer is not bang-bang. The proof was given for Hölder-type growths, and they point out that the argument also works for more general type of growths. We give here the argument for completeness. Let u^*∈𝒰. Suppose that there exist δ>0 and a function ω:(0,∞)→(0,∞) such that 𝒥(u)≥𝒥(u^*)+ω(|u-u^*|_L^1(X)^m) for all u∈𝒰∖{u^*} with |u-u^*|_L^1(X)^m≤δ. Then u^* is a bang-bang strict local minimizer of problem (<ref>)–(<ref>). Moreover, δ≤r̂_u^*. If u∈𝒰∖{u^*} satisfies |u-u^*|_L^1(X)^m≤δ, then 𝒥(u)≥𝒥(u^*)+ω(|u-u^*|_L^1(X)^m)>𝒥(u^*). Thus, u^* must be a strict local minimizer of problem (<ref>)–(<ref>) and δ≤r̂_u^*. Suppose that u^* is not bang-bang, and let δ_0 be the positive number in Proposition <ref>. Then there exist η∈(0,min{δ,δ_0}) and a sequence {u_n}_n∈ℕ⊂𝒰, satisfying |u_n-u|_L^1(X)^m=η for all n∈ℕ, such that u_n⇀ u weakly in L^1(X)^m. Then, 𝒥(u_n)≥𝒥(u^*)+ω(η) for all n∈ℕ. Since 𝒥 is weakly sequentially continuous, we get ω(η)≤0. A contradiction. The converse of the previous result is also true, the objective functional must satisfy a growth condition at bang-bang strict local minimizers. Let u^*∈𝒰 be a strict local minimizer of problem (<ref>)–(<ref>). Suppose that u^* is bang-bang. Then there exist δ∈(0,r̂_u^*) and a non-decreasing function ω:(0,∞)→(0,∞) such that 𝒥(u)≥𝒥(u^*)+ω(|u-u^*|_L^1(X)^m) for all u∈𝒰∖{u^*} with |u-u^*|_L^1(X)^m≤δ. Let M:=sup_u∈𝒰|u-u^*|_L^1(X)^m and δ∈(0,min{M,r̂_u^*}) be arbitrary. Let ω_δ:(0,δ]→(0,∞) be given by ω_δ(η):=inf{𝒥(u)-𝒥(u^*): u∈𝒰 and η≤|u-u^*|_L^1(X)^m≤δ}. By construction, ω is nonnegative and non-decreasing. Suppose that there exists η∈(0,δ] such that ω(η)=0. 
By definition of the infimum, there would exist a sequence {u_n}_n∈ℕ⊂𝒰 such that η≤|u_n-u^*|_L^1(X)^m≤δ and 0<𝒥(u_n)-𝒥(u^*)≤1/n for all n∈ℕ. We can extract a subsequence {u_n_k}_k∈ℕ of {u_n}_n∈ℕ converging weakly in L^1(X)^m to some û∈𝒰. Since 𝒥 is weakly sequentially continuous, from (<ref>), we get 𝒥(û)=𝒥(u^*). Since u^* is a strict local minimizer and |û-u^*|_L^1(X)^m≤lim inf_k→∞|u_n_k-u^*|_L^1(X)^m≤δ<r̂_u^*, we conclude û=u^*. This implies that {u_n_k}_k∈ℕ converges weakly to u^* in L^1(X)^m, and as u^* is bang-bang, {u_n_k}_k∈ℕ must converge to u^* in L^1(X)^m; a contradiction. The result follows by defining ω:(0,∞)→(0,∞) as ω(η):=ω_δ(min{η,δ}). § GENERICITY OF THE BANG-BANG PROPERTY §.§ The Radon–Nikodým property One of the drawbacks of Bochner integration theory is that the Radon–Nikodým Theorem fails to hold in general. Sets for which this result still holds define an important class that has been studied extensively; see, e.g., the specialized book <cit.>. In this subsection, we give a short review of these sets in the particular case where the underlying Banach space is L^1(X)^m. We begin with the definition of dentability; for the general definition see <cit.> or <cit.>. Let 𝒲 be a nonempty subset of L^1(X)^m. The elements of {S(𝒲,ξ,δ): ξ∈L^∞(X)^m, δ>0} are called slices of 𝒲, where S(𝒲,ξ,δ):={ w∈𝒲: ξ w≤inf_v∈𝒲ξ v+δ} for ξ∈L^∞(X)^m and δ>0. We say that the subset 𝒲 of L^1(X)^m is dentable if it admits arbitrarily small slices, i.e., for every ε>0 there exist δ>0 and ξ∈L^∞(X)^m such that diam S(𝒲,ξ,δ)≤ε. For completeness, we recall the general definition in an arbitrary Banach space ℬ. * The elements of {S(𝒲,ξ,δ): ξ∈ℬ^*, δ>0} are called slices of a nonempty subset 𝒲 of ℬ, where S(𝒲,ξ,δ):={ v∈𝒲: ξ v < inf_w∈𝒲ξ w+δ} for ξ∈ℬ^* and δ>0. * We say that a nonempty subset 𝒲 of ℬ is dentable if for every ε>0 there exist δ>0 and ξ∈ℬ^* such that diam S(𝒲,ξ,δ)<ε. * We say that 𝒲 has the RNP if every bounded subset of 𝒲 is dentable. We now give the definition of the Radon–Nikodým Property (RNP). There are many different but equivalent definitions, as the property has been characterized in many ways; see, e.g., the book <cit.> or the survey <cit.> for geometrical characterizations. Here we give a definition based on dentability of sets, see <cit.> or <cit.>. A subset 𝒱 of L^1(X)^m has the Radon–Nikodým Property if every nonempty bounded subset 𝒲 of 𝒱 is dentable. We mention that the family of sets having the RNP is quite rich; it includes all reflexive spaces, see <cit.>, in particular the L^p-spaces for p∈(1,∞). There are non-reflexive spaces without the RNP, such as L^1([0,1]), see <cit.>. However, subsets of L^1-spaces might possess the property even if the whole space does not have it. For example, it is known that weakly compact convex subsets of Banach spaces have the RNP, see <cit.>. The feasible set 𝒰 has the Radon–Nikodým Property. By <ref>, 𝒰 is weakly sequentially compact, and hence by the Eberlein–Šmulian Theorem, weakly compact. We can then use <cit.> to conclude that 𝒰 has the Radon–Nikodým Property. §.§ Strong minimizers and well-posedness This subsection is devoted to recalling one of the classical concepts of well-posedness, the Tikhonov one. In order to do so, we first recall the definition of strong minimality, see <cit.>. Let ℱ:𝒰→ℝ be a functional and u^*∈𝒰 a minimizer of ℱ. We say that u^* is a strong minimizer if ℱ(u_n)→ℱ(u^*) implies |u_n-u^*|_L^1(X)^m→ 0 for any sequence {u_n}_n∈ℕ⊂𝒰. Using <ref>, we can easily characterize strong minimizers in terms of the bang-bang property.
Let ℱ:𝒰→ℝ be a weakly sequentially continuous functional and u^*∈𝒰. The following statements are equivalent. * u^* is a strong minimizer of ℱ. * u^* is a bang-bang strict minimizer of ℱ. Suppose that u^* is a strong minimizer of ℱ, and let {u_n}_n∈ℕ⊂𝒰 be a sequence converging weakly to u^* in L^1(X)^m. Since ℱ is weakly sequentially continuous, ℱ(u_n)→ℱ(u^*), and thus u_n→ u^*. We can use <ref> to conclude that u^* must be bang-bang. It is clear that strong minimizers are strict. Conversely, suppose that u^* is a bang-bang strict minimizer, and let {u_n}_n∈ℕ⊂𝒰 be a sequence such that ℱ(u_n)→ℱ(u^*). Let {u_n_k}_k∈ℕ be a subsequence of {u_n}_n∈ℕ. We can extract a subsequence {u_n_k_j}_j∈ℕ of {u_n_k}_k∈ℕ converging weakly in L^1(X)^m to some û∈𝒰. Since ℱ(u_n_k_j)→ℱ(u^*), we obtain ℱ(û)=ℱ(u^*), but since u^* is a strict minimizer, we must have û=u^*. Since every subsequence of {u_n}_n∈ℕ has further a subsequence that converges weakly to u^*, we conclude that u_n ⇀ u^* weakly in L^1(X)^m; but then, by <ref>, u_n→ u^* strongly in L^1(X)^m. We now give the definition of well-posedness, see <cit.>. Let ℱ:𝒰→ℝ. The optimization problem min_u∈𝒰ℱ(u) is said to be well-posed if ℱ has strong minimizer. We can use <ref> to see how the well-posedness of problem (<ref>)–(<ref>) is related to the bang-bang property. Problem (<ref>)–(<ref>) is well-posed if and only if it possesses a bang-bang strict global minimizer. §.§ Stegall's principle and the bang-bang property In nonlinear analysis and optimization, the notions of variational principle, perturbation and well-posedness are intrinsically related. The concept of genericity is of fundamental importance in the interplay of these notions. In topology, a generic property is usually one that holds on a dense open set, however this can be a very strong requirement; for example, the irrational numbers are somehow generic among the real numbers, but they do not conform a open dense subset of the real line. More generally, a generic property is one that holds on a residual set, being the dual concept a meager set. We recall now the definition of residuality in the particular case of the space L^∞(X)^m. Let Θ be a subset of L^∞(X)^m. We say that Θ is residual if there exists a countable family of open dense sets {D_n}_n∈𝒩⊂L^∞(X)^m such that Θ=⋂_n∈ℕ D_n. A set is said to be meager if it is the complement of a residual set. From Baire Category Theorem, it is clear that residual subsets of L^∞(X)^m are dense. We now give the definition of genericity, see <cit.>. A subset of L^∞(X)^m is said to be generic if it contains a residual set. A property is said to be generic if it holds on a generic set. We come now to a classic in variational analysis, Stegall's principle. This is a type of smooth variational principle over sets with the RNP. It appeared first in <cit.>, see <cit.> or <cit.> for book references. We state now a version of the theorem, as a particular case of <cit.>. The set {ξ∈L^∞(X)^m: problem min_u∈𝒰{𝒥(u)-ξ u} is well-posed} is generic. In other words, the well-posedness of linearly perturbed versions of problem (<ref>)–(<ref>) is generic. By <ref>, the feasible set 𝒰 has the RNP. Since the objective functional 𝒥 is weakly sequentially continuous, it is in particular sequentially continuous, and hence continuous. We can then employ the implication (v) (iii) of <cit.> to conclude the result. We are going now to reformulate the previous theorem in a way that makes transparent that the bang-bang property is generic. 
The set {ξ∈L^∞(X)^m: 𝒥-ξ has a bang-bang strict global minimizer} is generic. In other words, the existence of bang-bang strict global minimizers of linearly perturbed problems of problem (<ref>)–(<ref>) is generic. Let Λ be the set of ξ∈L^∞(X)^m such that 𝒥-ξ has a bang-bang strict global minimizer. By <ref>, there exists a residual set Θ⊂L^∞(X)^m such that problem min_u∈𝒰{𝒥(u)-ξ u} is well-posed for ξ∈Θ. By <ref>, Θ is contained in Λ; thus Λ is generic. For every ε>0 there exists ξ∈ L^∞(X)^m with |ξ|_L^∞(X)^m≤ε such that 𝒥-ξ has a bang-bang strict global minimizer. By <ref>, there exists a residual set Θ⊂L^∞(X)^m such that for any ξ∈Θ, 𝒥-ξ has a bang-bang strict global minimizer. The result follows from Baire Category Theorem, as it implies that residual sets of L^∞(X)^m are dense. § STABILITY UNDER LINEAR PERTURBATIONS We study the relation of linear perturbations to problem (<ref>)–(<ref>) and the bang-bang property. We begin by describing the solution mappings that we consider. We give new characterizations of the bang-bang property in terms of hemicontinuity of these mappings. Finally, in the last subsection, we reformulate the stability properties to make clearer their meaning. §.§ Solution mappings When studying stability of an optimization problem, it is often possible to prove that perturbed problems have solutions in a neighborhood of a reference solution. This type of solutions can serve as a first approach for the stability analysis. Given ξ∈L^∞(X)^m, u^*∈𝒰 and γ>0, we consider the optimization problem 𝒫_u^*,γ(ξ): min_{𝒥(u)-ξ u: u∈𝒰 with |u-u^*|_L^1(X)^m≤γ}. Before advancing further, let us mention that each problem has at least one solution. Each problem 𝒫_u^*,γ(ξ) has at least one minimizer, i.e, there exists u=u_u^*,γ,ξ such that 𝒥(u)-ξ u≤𝒥(w)-ξ w for all w∈𝒰 with |w-u^*|_L^1(X)^m≤γ. The set 𝒱:={u∈𝒰: |u-u^*|_L^1(X)^m≤γ} is closed and convex, hence weakly closed. As by <ref>, 𝒰 is weakly sequentially compact, so is 𝒱. Clearly, each map 𝒥-ξ:𝒱→ℝ is weakly sequentially continuous; therefore, problem 𝒫_u^*,γ(ξ) must have at least one global minimizer. With each problem 𝒫_u^*,γ(ξ), we associate a localized solution mapping. This is a set-valued mapping, denoted by 𝒮_u^*,γ:L^∞(X)^m↠ L^1(X)^m and given by 𝒮_u^*,γ(ξ):={ u∈𝒰: u is a minimizer of problem 𝒫_u^*,γ(ξ)}. Each set-valued mapping 𝒮_u^*,γ takes nonempty closed values. It follows from <ref> that each S_u^*,γ takes nonempty values. It follows from the weak sequential continuity of the objective functional that each 𝒮_u^*,γ takes closed values. We can recover the usual solution mappings from the localized ones. The local solution mapping 𝒮_loc: L^∞(X)^m→L^1(X)^m is given by 𝒮_loc(ξ):={ u∈𝒰: u∈𝒮_u,γ(ξ) for some γ>0}. Each set 𝒮_loc(ξ) consists of the local minimizers of 𝒥-ξ on 𝒰. In the same fashion, we define the global solution mapping 𝒮_gbl: L^∞(X)^m→L^1(X)^m by 𝒮_gbl(ξ):={ u∈𝒰: u ∈𝒮_u,γ(ξ) for all γ>0}. Similarly, each set 𝒮_gbl(ξ) consists of the global minimizers of 𝒥-ξ on 𝒰. §.§ Hemicontinuity It is now time to study the continuity properties of the solution mappings described in the previous subsection. We do this by means of the notion of hemicontinuity; we use standard definitions, see <cit.>. The term semicontinuity is used sometimes instead of hemicontinuity; see e.g., <cit.>. We begin studying the lower hemicontinuity properties of the mappings in relation with the bang-bang property; in order to do so, we employ the following sequential characterization of lower hemicontinuity, see <cit.>. 
Let 𝒮:L^∞(X)^m↠ L^1(X)^m be a set-valued mapping. The following statements are equivalent. * 𝒮 is lower hemicontinuous at 0. * For every sequence {ξ_n}_n∈ℕ⊂ L^∞(X)^m converging to 0 in L^∞(X)^m and every u∈𝒮(0), there exists a subsequence {ξ_n_k}_k∈ℕ of {ξ_n}_n∈ℕ and a sequence {u_k}_k∈ℕ⊂ L^1(X)^m such that u_k∈𝒮(ξ_n_k) for all k∈ℕ and u_k→ u in L^1(X)^m. We are now ready for our first result. The proof consists of two main ingredients: the weak clustering principle (<ref>) and the construction of adequate perturbations. For the latter, we use the celebrated Hahn-Banach Theorem. Let u^*∈𝒰 be a strict local minimizer of problem (<ref>)–(<ref>). Suppose that there exists γ>0 such that 𝒮_u^*,γ is lower hemicontinuous at 0. Then u^* is bang-bang. Suppose that u^* is not bang-bang. By <ref>, there exist a positive number δ<min{r̂_u^*,γ} and a sequence {u_n}_n∈ℕ⊂𝒰 such that u_n⇀ u^* weakly in L^1(X)^m and |u_n-u^*|_L^1(X)^m=δ for all n∈ℕ. By the Hahn-Banach Theorem, for each n∈ℕ there exists ξ_n∈ L^∞(X)^m such that ξ_n(u_n-u^*)=|ξ_n|_L^∞(X)^m|u_n-u^*|_L^1(X)^m and |ξ_n|_L^∞(X)^m=4/δ|𝒥(u_n)-𝒥(u^*)| for all n∈ℕ. Since δ<r̂_u^*, it follows that 𝒥(u^*)<𝒥(u_n) for all n∈ℕ, and hence |ξ_n|_L^∞(X)^m>0 for all n∈ℕ. Also, from the weak sequential continuity of the objective functional, it follows that ξ_n→ 0 in L^∞(X)^m. Now, as 𝒮_u^*,γ is lower hemicontinuous at 0 and 𝒮_u^*,γ(0)={u^*}, there exist a subsequence {ξ_n_k}_k∈ℕ of {ξ_n}_n∈ℕ and a sequence {w_k}_k∈ℕ converging to u^* such that w_k∈𝒮_u^*,γ(ξ_n_k) for all k∈ℕ. Then 𝒥(w_k)-ξ_n_k w_k≤𝒥(u_n_k)-ξ_n_k u_n_k for all k∈ℕ. Since w_k→ u^* in L^1(X)^m, there exists k_0∈ℕ such that |w_k-u^*|_L^1(X)^m≤ 2^-1δ<r̂_u^* for k≥ k_0, and hence 𝒥(u^*)≤𝒥(w_k) for k≥ k_0. Combining this with <ref>, we get -|𝒥(u^*)-𝒥(u_n_k)|=𝒥(u^*)-𝒥(u_n_k)≤𝒥(w_k)-𝒥(u_n_k)≤ξ_n_k(w_k-u_n_k) for all k≥ k_0. Now, by construction of the sequence {ξ_n}_n∈ℕ, δ/4|ξ_n_k|_L^∞(X)^m =|𝒥(u^*)-𝒥(u_n_k)| ≥ ξ_n_k(u_n_k-w_k) =ξ_n_k(u_n_k-u^*)+ξ_n_k(u^*-w_k) ≥ |ξ_n_k|_L^∞(X)^m(δ-|w_k-u^*|_L^1(X)^m)≥δ/2|ξ_n_k|_L^∞(X)^m for all k≥ k_0. This yields a contradiction. We now proceed to analyze the upper hemicontinuity of localized solution mappings. The following sequential characterization will be of use, see <cit.>. Let 𝒮:L^∞(X)^m↠ L^1(X)^m be a set-valued mapping. The following statements are equivalent. * 𝒮 is upper hemicontinuous at 0 and 𝒮(0) is compact. * If {ξ_n}_n∈ℕ⊂ L^∞(X)^m and {u_n}_n∈ℕ⊂ L^1(X)^m are sequences such that u_n∈𝒮(ξ_n) for all n∈ℕ and ξ_n→ 0 in L^∞(X)^m, then the sequence {u_n}_n∈ℕ has a limit point in 𝒮(0). It turns out that the bang-bang property can imply upper hemicontinuity at a point when the localized solution mappings are single-valued at that point. Let u^*∈𝒰 be a strict local minimizer of problem (<ref>)–(<ref>) and γ∈(0,r̂_u^*). If u^* is bang-bang, then 𝒮_u^*,γ is upper hemicontinuous at 0. Let {ξ_n}_n∈ℕ⊂ L^∞(X)^m be a sequence such that ξ_n→ 0 in L^∞(X)^m and {u_n}_n∈ℕ⊂𝒰 be a sequence such that u_n∈𝒮_u^*,γ(ξ_n) for all n∈ℕ. Then 𝒥(u_n)-ξ_n u_n≤𝒥(u^*)-ξ_n u^* for all n∈ℕ. We can extract a subsequence {u_n_k}_k∈ℕ of {u_n}_n∈ℕ converging weakly to some û∈𝒰 in L^1(X)^m. Taking the limit in <ref>, we conclude that 𝒥(û)≤𝒥(u^*). As γ<r̂_u^* and |û-u^*|_L^1(X)^m≤lim inf_k→∞|u_n_k-u^*|_L^1(X)^m≤γ, we conclude that û=u^*. Hence, u_n_k⇀ u^* weakly in L^1(X)^m, and as u^* is bang-bang, u_n_k→ u^* in L^1(X)^m; thus {u_n}_n∈ℕ has a limit point in 𝒮_u^*,γ(0)={u^*}.
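It may help the reader to note that, in the scalar case m=1, the norming functionals used in the first proof above can be written down explicitly, so that no appeal to the Hahn-Banach Theorem is needed; the following formula is given only for illustration and is not used elsewhere. Taking ξ_n(x):=4/δ (𝒥(u_n)-𝒥(u^*)) sgn(u_n(x)-u^*(x)), one checks directly that ξ_n(u_n-u^*)=∫_X ξ_n (u_n-u^*) dμ=|ξ_n|_L^∞(X)|u_n-u^*|_L^1(X) and |ξ_n|_L^∞(X)=4/δ|𝒥(u_n)-𝒥(u^*)|, which are precisely the two properties required in the proof.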
We can now put together the two previous results in a single theorem characterizing the bang-bang property in terms of hemicontinuity. Let u^*∈𝒰 be a strict local minimizer of problem (<ref>)–(<ref>) and γ∈(0,r̂_u^*). The following statements are equivalent. * 𝒮_u^*,γ is upper hemicontinuous at 0. * 𝒮_u^*,γ is lower hemicontinuous at 0. * u^* is bang-bang. The implication (ii)(iii) follows from <ref> and the implication (iii)(i) follows from <ref>. We proceed then to prove the implication (i)(ii). We first observe that 𝒮_u^*,γ(0):={u^*} since γ<r̂_u^*. Let {ξ_n}_n∈ℕ⊂ L^∞(X)^m be any sequence converging to zero in L^∞(X)^m. There exists a sequence {u_n}_n∈ℕ⊂𝒰 satisfying u_n∈𝒮_u^*,γ(ξ_n) for all n∈ℕ. As 𝒮_u^*,γ is upper hemicontinuous at 0, {u_n}_n∈ℕ has u^* as limit point; hence there exists a subsequence {u_n_k}_k∈ℕ of {u_n}_n∈ℕ converging to u^* in L^1(X)^m. We conclude that 𝒮_u^*,γ is lower hemicontinuous at 0. We can also give a characterization in terms of the global solution mapping. Let u^*∈𝒰 be a strict global minimizer of problem (<ref>)–(<ref>). The following statements are equivalent. * 𝒮_gbl is upper hemicontinuous at 0. * 𝒮_gbl is lower hemicontinuous at 0. * u^* is bang-bang. Choose γ>0 such that 𝒰⊂{u∈ L^1(X)^m: |u-u^*|_L^1(X)^m≤γ}. Then 𝒮_gbl(ξ)=𝒮_u^*,γ(ξ) for all ξ∈ L^∞(X)^m. Consequently, the result follows from <ref>. We close the subsection with the following result relating the bang-bang property to the upper hemicontinuity at zero of the local solution mapping. Suppose that problem (<ref>)–(<ref>) has unique local minimizer u^*∈𝒰. If 𝒮_loc is upper hemicontinuous at 0, then u^* is bang-bang. Observe that 𝒮_loc(0)=𝒮_gbl(0)={u^*}. Let {ξ_n}_n∈ℕ⊂ L^∞(X)^m be sequence converging to zero in L^∞(X)^m. There exists a sequence {u_n}_n∈ℕ such that u_n∈𝒮_gbl(ξ_n) for all n∈ℕ. In particular, u_n∈𝒮_loc(ξ_n) for all n∈ℕ. As 𝒮_loc is assumed to be upper hemicontinuous at 0, it follows that {u_n}_n∈ℕ has a subsequence {u_n_k}_k∈ℕ such that u_n_k→ u^* in L^1(X)^m. Thus, we conclude that 𝒮_gbl is lower hemicontinuous. Then, by <ref>, u^* must be bang-bang. §.§ Local stability of linear perturbations In the previous subsection, the stability properties of problem (<ref>)–(<ref>) were studied in terms of hemicontinuity, whose sequential characterization involves subsequences and limit points. In this subsection, we restate these results in terms that make clearer their meaning. Let u^*∈𝒰 be a local minimizer of problem (<ref>)–(<ref>). We say that problem (<ref>)–(<ref>) is locally stable at u^* if there exists γ>0 with the property that for every ε>0 there exists δ>0 such that |ξ|_L^∞(X)^m<δ implies |u-u^*|_L^1(X)^m<ε for any u∈𝒮_u^*,γ(ξ) and any ξ∈ L^∞(X)^m. We define the stability radius of u^* as the positive number γ̂_u^*:=sup{γ>0: (<ref>) holds}. The definition of local stability says that small perturbations in L^∞(X)^m should imply that all solutions of localized perturbed problems be close to u^* in L^1(X)^m. This agrees with the common understanding of stability. We come now to a characterization of local stability in terms of the bang-bang property; the result follows easily from the hemicontinuity properties studied in the previous subsection. Let u^*∈𝒰 be a local minimizer of problem (<ref>)–(<ref>). The following statements are equivalent. * Problem (<ref>)–(<ref>) is locally stable at u^*. * u^* is a bang-bang strict local minimizer of problem (<ref>)–(<ref>). Moreover, if problem (<ref>)–(<ref>) is locally stable at u^*, then γ̂_u^*=r̂_u^*. 
Suppose that problem (<ref>)–(<ref>) is locally stable at u^*. It follows immediately from the definition of local stability that u^* is strict local minimizer and that γ̂_u^*≤r̂_u^*. It is also easy to see that 𝒮_u^*,γ is upper hemicontinuous at 0 for any γ∈(0,γ̂_u^*), and hence that u^* is bang-bang. Suppose now that u^* is a bang-bang strict local minimizer of problem (<ref>)–(<ref>). Suppose that γ̂_u^*<r̂_u^*, and let γ∈(γ̂_u^*,r̂_u^*). Then there exist ε>0, a sequence {ξ_n}_n∈ℕ⊂ L^∞(X)^m converging to zero in L^∞(X)^m and a sequence {u_n}_n∈ℕ⊂𝒰 such that |u_n-u^*|_L^1(X)^m≥ε and u_n∈𝒮_u^*,γ(ξ_n) for all n∈ℕ. By <ref>, 𝒮_u^*,γ is upper hemicontinuous at 0. Consequently, {u_n}_n∈ℕ must have a subsequence {u_n_k}_k∈ℕ such that u_n_k→ u^* in L^1(X)^m; this yields a contradiction. We conclude that problem (<ref>)–(<ref>) is locally stable at u^* and that γ̂_u^*=r̂_u^*. We now pass to the global analysis. For the sake of clarity and transparency, we give a definition that reflects the intuitive understanding of global stability. Let u^*∈𝒮_gbl(0). We say that problem (<ref>)–(<ref>) is globally stable at u^* if for every ε>0 there exists δ>0 such that |ξ|_L^∞(X)^m<δ implies |u-u^*|_L^1(X)^m<ε for any ξ∈ L^∞(X)^m and u∈𝒮_gbl(ξ). From <ref>, we can deduce immediately the following result. Let u^*∈𝒰 be a global minimizer of problem (<ref>)–(<ref>). The following statements are equivalent. * Problem (<ref>)–(<ref>) is globally stable at u^*. * u^* is a bang-bang strict global minimizer of problem (<ref>)–(<ref>). Let γ_0>0 such that 𝒰⊂{u∈ L^1(X)^m: |u-u^*|_L^1(X)^m≤γ_0}. Then 𝒮_gbl(ξ)=𝒮_u^*,γ(ξ) for all ξ∈ L^∞(X)^m and all γ≥γ_0. The result follows then from <ref>. § STABILITY OF THE FIRST-ORDER NECESSARY CONDITION We study now the stability with respect to perturbations of the first-order necessary condition of problem (<ref>)–(<ref>). In general, the first-order necessary condition (the local Pontryagin principle for optimal control problems) can be written as an inclusion, the so-called optimality system. Thus the stability properties of the first-order necessary condition can be directly analyzed from this inclusion. We will employ a concept of stability based on the so-called strong metric subregularity property. The definition of this property was given first in <cit.>. The recent paper <cit.> gives a good overview of the utility of this property and its role in variational analysis and optimization. For book references, see <cit.> or <cit.>. §.§ The first-order necessary condition We recall briefly the fist order necessary conditions for problem (<ref>)–(<ref>). We will employ the classic notion of (first-order) Gateaux differentiability. The objective functional 𝒥:𝒰→ℝ is said to be Gateaux differentiable if for every u∈𝒰 there exists d𝒥(u)∈ L^∞(X)^m such that d𝒥(u)v=lim_ε→0^+𝒥(u+ε v)-𝒥(u)/ε for all v∈ L^1(X)^m with u+v∈𝒰. The first-order necessary condition is well known, see, e.g., <cit.>. Suppose that 𝒥:𝒰→ℝ is Gateaux differentiable. If u^*∈𝒰 is a local minimizer of problem (<ref>)–(<ref>), then d𝒥(u^*)(u-u^*)≥0 for all u∈𝒰. In order to talk about the stability of the first-order necessary conditions, during this section, we will of course assume that the objective functional is Gateaux differentiable; and moreover, a weak-strong continuity property on the derivative. The following assumption is supposed to hold throughout the remainder of this section. The following statements hold. 
* The objective functional 𝒥:𝒰→ℝ is Gateaux differentiable; * the mapping 𝒬:𝒰→ L^∞(X)^m given by 𝒬(u):=d𝒥(u) is weakly-strongly sequentially continuous, i.e., u_n ⇀ u weakly in L^1(X)^m implies 𝒬(u_n) →𝒬(u) in L^∞(X)^m for any sequence {u_n}_n ∈ℕ⊂𝒰 and any u∈𝒰. In analogy with optimal control, we write σ_u:=𝒬(u) for each u∈𝒰. The mapping 𝒬:𝒰→ L^∞(X)^m in (ii) of <ref> is called the switching mapping. Let us recall that the normal cone to 𝒰 at u^* ∈𝒰 is given by N_𝒰(u^*):={ξ∈ L^∞(X)^m:⟨ξ,u-u^*⟩≤0 for all u∈𝒰}. For u^* ∈ L^1(X)^m ∖𝒰, we set N_𝒰(u^*) = ∅. We can then rewrite the first-order necessary condition as the inclusion 0∈σ_u+N_𝒰(u). The correspondence Φ:𝒰↠ L^∞(X)^m given by Φ(u):=σ_u+N_𝒰(u) is called the optimality mapping. We now give a definition concerned with inclusion (<ref>). Let u^* ∈𝒰 be given. * u^* is said to be a critical point of problem (<ref>)–(<ref>) if 0∈Φ(u^*); * u^* is said to be a locally isolated critical point of problem (<ref>)–(<ref>) if there exists δ>0 such that 0∈Φ(u) implies u=u^* for all u∈𝒰 with |u-u^*|_L^1(X)^m≤δ. The critical radius of u^* is given by ř_u^*:=sup{δ>0: <ref> holds}. §.§ Subregularity of the optimality mapping We are now going to study the stability of inclusion (<ref>) under perturbations. We will employ the following definition based on the notion of strong metric subregularity, see <cit.>. Let u^* be a critical point of problem (<ref>)–(<ref>). We say that the optimality mapping Φ:𝒰→ L^∞(X)^m is strongly subregular at u^* if there exists κ>0 with the property that for every ε>0 there exists δ>0 such that |ξ|_L^∞(X)^m <δ implies |u-u^*|_L^1(X)^m<ε for any u∈𝒰 with |u-u^*|_L^1(X)^m≤κ and any ξ∈Φ(u). We define the subregularity radius of u^* as κ̂_u^*:=sup{κ>0: property (<ref>) holds}. If κ_u^*=+∞, we say that problem (<ref>)–(<ref>) is globally subregular at u^*. We state now a trivial consequence of the definition of subregularity. Let u^*∈𝒰. Then κ̂_u^*≤ř_u^*. In particular, if the optimality mapping is strongly subregular at u^*, then u^* is a locally isolated critical point of problem (<ref>)–(<ref>). The following theorem states the necessity of the bang-bang property. Let u^*∈𝒰 be a local minimizer of problem (<ref>)–(<ref>). If the optimality mapping is strongly subregular at u^*, then u^* is bang-bang. The subregularity of the optimality mapping at u^* clearly implies that problem (<ref>)–(<ref>) is locally stable at u^*. By <ref>, u^* must be bang-bang. We arrive now to the main result of this section. Let u^*∈𝒰 be a local minimizer of problem (<ref>)–(<ref>). The following statements are equivalent. * The optimality mapping is strongly subregular at u^*. * u^* is a bang-bang locally isolated critical point of problem (<ref>)–(<ref>). Moreover, if the optimality mapping is strongly subregular at u^*, then κ̂_u^*=ř_u^*. The implication (i)(ii) follows from <ref> and <ref>. Let us prove now the implication (ii)(i). From <ref>, we have κ̂_u^*≤ř_u^*. Towards a contradiction, suppose that κ̂_u^*<ř_u^* and let δ∈(κ̂_u^*,ř_u^*). Then there exist a number ε>0, a sequence {ξ_n}_n∈ℕ⊂ L^∞(X)^m converging to zero in L^∞(X)^m and a sequence {u_n}_n∈ℕ⊂𝒰 such that δ≥|u_n-u^*|_L^1(X)^m≥ε and ξ_n∈σ_u_n+N_𝒰(u_n) for all n∈ℕ. We can extract a subsequence {u_n_k}_k∈ℕ of {u_n}_n∈ℕ converging weakly to some û∈𝒰. Now, since each u_n_k satisfies ξ_n_k∈σ_u_n_k+ N_𝒰(u_n_k), taking limit, we obtain 0∈σ_û+N_𝒰(û). 
As u^* is a locally isolated critical point of problem (<ref>)-(<ref>) and |û-u^*|_L^1(X)^m≤lim inf_k→∞|u_n_k-u^*|_L^1(X)^m≤δ<ř_u^*, we conclude that û=u^*, and hence that u_n_k⇀ u^* weakly in L^1(X)^m. But, as u^* is bang-bang, it must be that u_n_k→ u^* in L^1(X)^m; a contradiction. Then κ̂_u^*=ř_u^*. §.§ An application: p-regularization We now give an application of the subregularity property concerning Tykhonov regularizations. Let p>1 be given. For each η>0, we consider the following regularized optimization problem. 𝒫_p(η): min_u∈𝒰{𝒥(u)+η/p∫_X|u(x)|^p dμ(x)}. From subregularity, we can conclude the following regularization result. Let u^*∈𝒰 be a critical point of problem (<ref>)-(<ref>). Suppose that the optimality mapping is strongly subregular at u^*. Then for every ε>0 there exists η_ε>0 such that η<η_ε implies |u_η-u^*|_L^1(X)^m<ε for any local minimizer u_η∈𝒰 of 𝒫_p(η) satisfying |u_η-u^*|_L^1(X)^m<κ̂_u^*. Let ε>0 be arbitrary, and let u_η∈𝒰 be any local minimizer of 𝒫_p(η) satisfying |u_η-u^*|_L^1(X)^m<κ̂_u^*. The first order necessary condition can be written as d𝒥(u_η)(u-u_η)+η∫_X |u_η(x)|^p-2 u_η(x)· (u(x)-u_η(x)) dμ(x)≥0 ∀ u∈𝒰. This can be rewritten as the inclusion ξ_η∈σ_u_η+N_𝒰(u_η), where ξ_η∈ L^∞(X)^m is given by ξ_η(x):=η|u_η(x)|^p-2 u_η(x). By strong subregularity of the optimality mapping at u^*, there exists δ_ε>0 such that if |ξ_η|_L^∞(X)^m<δ_ε, then |u_η-u^*|_L^1(X)^m<ε. It is enough then to take η_ε:=δ_ε[sup_v∈ U|v|]^1-p. We omit the case p=1, as it is a bit more involved; however, an identical result can be obtained by means of subregularity. We give a sequential version of the previous result. Let u^*∈𝒰 be a critical point of problem (<ref>)-(<ref>). Suppose that the optimality mapping is strongly subregular at u^*. Let {η_n}_n∈ℕ be a sequence of positive numbers converging to zero. Then u_n⟶ u^* in L^1(X)^m for any sequence {u_n}_n∈ℕ of local minimizers of the problems {𝒫_p(η_n)}_n∈ℕ such that |u_n-u^*|_L^1(X)^m<κ̂_u^* for all n∈ℕ sufficiently large. § EXAMPLES OF THE THEORY We provide three examples of the class of optimal control problems studied in this paper. The first one is constrained by an ordinary differential equation, and the second and third ones by partial differential equations. §.§ Affine optimal control problems constrained by ordinary differential equations As a canonical example of the theory developed in previous sections, we consider the affine optimal control problem given by min_u∈𝒰{ s_T(y_u(T))+∫_0^T[g_0(t,y_u(t))+∑_i=1^m g_i(t,y_u(t)) u_i(t)] dt }, where for each control u=(u_1,…,u_m)∈𝒰, there is a unique state y_u:[0,T]→ℝ^n satisfying ẏ_u=f_0(·,y_u)+∑_i=1^m f_i(·,y_u) u_i, y_u(0)=y_0. We give below the specifications of problem (<ref>)–(<ref>) and the technical details. The number T>0 is the (fixed) time horizon. The underlying measure space ([0,T],𝒜_[0,T], ℒ) consists of the σ-algebra 𝒜_[0,T] of Lebesgue measurable subsets of [0,T], and ℒ:𝒜_[0,T]→ℝ the Lebesgue measure on [0,T]. For a compact convex set U⊂ℝ^m, the feasible set takes the form 𝒰={ u ∈ L^1(0,T)^m : u(t)∈ U for a.e. t∈[0,T]}. The functions f_0,…,f_m:[0,T]×ℝ^n→ℝ^n are Carathéodory and satisfy ess sup_t∈[0,T]sup_x∈ℝ^n|f_i(t,x)|<∞ and ess sup_t∈[0,T]sup_x_1≠ x_2∈ℝ^n|f_i(t,x_1)-f_i(t,x_2)|/|x_1-x_2|<∞ for each i∈{0,…,m}. These functions and the initial datum y_0∈ℝ^n determine the dynamics for each control in the following way.
For each u∈𝒰, by the classical global existence theorem (see, e.g., <cit.>), there exists a unique state y_u∈ W^1,1([0,T])^n satisfying (<ref>), i.e., y_u(t)=y_0+∫_0^t[f_0(s,y_u(s))+∑_i=1^mf_i(s,y_u(s)) u_i(s)] ds ∀ t∈[0,T]. The cost functions g_0,…,g_m:[0,T]×ℝ^n→ℝ are Carathéodory and the scrap function s_T:ℝ^n→ℝ is continuous. The objective functional 𝒥:𝒰→ℝ is given by 𝒥(u)= s_T(y_u(T))+∫_0^T[g_0(t,y_u(t))+∑_i=1^m g_i(t,y_u(t)) u_i(t)] dt. Clearly, problem (<ref>)–(<ref>) trivially satisfies (i) and (ii) of Assumption <ref>. Item (iii) is well known to hold for this type of problem. Indeed, it follows easily from the integral form of the Grönwall inequality that the input-output mapping 𝒮:𝒰→ C([0,T])^n, given by 𝒮(u)=y_u, is weakly-strongly sequentially continuous. From this and the affine structure of the problem, a few calculations yield that the objective functional 𝒥:𝒰→ℝ is weakly sequentially continuous. §.§ Elliptic optimal control problems We now consider the elliptic optimal control problem min_u∈𝒰{∫_Ω L(x,y_u(x)) dx }, where for each control u∈𝒰, there is a unique state y_u:Ω→ℝ satisfying -Δ y_u+d(·,y_u)=u in Ω, y_u=0 on ∂Ω. The specifications of problem (<ref>)–(<ref>) and the data assumptions are given below. The underlying measure space (Ω,𝒜_Ω, ℒ) consists of a bounded Lipschitz domain Ω, the σ-algebra 𝒜_Ω of Lebesgue measurable subsets of Ω, and ℒ:𝒜_Ω→ℝ the Lebesgue measure on Ω. For numbers u_a,u_b∈ℝ with u_a<u_b, the feasible set takes the form 𝒰={ u ∈ L^1(Ω): u(x)∈ [u_a,u_b] for a.e. x∈Ω}. The function d:Ω×ℝ→ℝ is Carathéodory, monotone nondecreasing with respect to the second variable and satisfies ess sup_x∈Ωsup_y∈ K|d(x,y)|<∞ for every compact set K⊂ℝ. For each control u∈𝒰, there exists a unique y_u∈ H_0^1(Ω)∩ C(Ω̅) (see, e.g., <cit.>) satisfying (<ref>), i.e., ∫_Ω[∇ y_u(x)·∇φ(x)+d(x,y_u(x))φ(x)] dx=∫_Ωu(x) φ(x) dx ∀φ∈ H_0^1(Ω). The function L:Ω×ℝ→ℝ is Carathéodory and bounded from below. The objective functional 𝒥:𝒰→ℝ is then given by 𝒥(u)=∫_ΩL(x,y_u(x)) dx. Clearly, problem (<ref>)–(<ref>) trivially satisfies (i) and (ii) of Assumption <ref>. One can easily prove that the control-to-state mapping 𝒮:𝒰→ H_0^1(Ω)∩ C(Ω̅), given by 𝒮(u)=y_u, is weakly-strongly sequentially continuous, for example using the arguments given in <cit.> and <cit.>. From this and the form of the cost function, it is almost trivial that the objective functional 𝒥:𝒰→ℝ is weakly sequentially continuous. §.§ A velocity tracking problem In this subsection, we discuss an optimization problem constrained by the Navier-Stokes equations. We skip technical details and rather point to the relevant references. Let Ω⊂ℝ^2 be a domain with boundary of class C^2 and T>0. Denote Q:=Ω×(0,T), Σ:=∂Ω×(0,T) and let u_a, u_b:Q→ℝ^2 be bounded functions. The feasible set is 𝒰:={ u:Q→ℝ^2 measurable: u_a(x,t)≤ u(x,t)≤ u_b(x,t) for a.e. (x,t)∈ Q}. For a given y_d∈ L^∞(Q), the optimal control problem is given by min_u∈𝒰{1/2∫_0^T∫_Ω |y_u - y_d|^2 dx dt} subject to ∂_ty_u- νΔy_u+ (y_u·∇)y_u + ∇p_u = u in Q, div y_u=0 in Q, y_u=0 on Σ, y_u(·,0)= y_0 in Ω. We refer the reader to <cit.> for technical details of this problem. It was proved in <cit.> that the control-to-state operator maps weakly convergent sequences to strongly convergent ones. This can easily be used to conclude the weak sequential continuity of the objective functional.
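Before closing this section, and purely to fix ideas, we sketch a minimal instance of the ODE-constrained class above; the data are chosen only for illustration and are not taken from the references. Let T=1, n=m=1, U=[-1,1], s_T≡0, g_0(t,y)=y^2, g_1≡0, f_0≡0 and f_1≡1, so that the problem reads min_u∈𝒰∫_0^1 y_u(t)^2 dt subject to ẏ_u=u, y_u(0)=y_0. A direct computation gives d𝒥(u)v=∫_0^1 p_u(t) v(t) dt, where the adjoint state solves -ṗ_u=2y_u, p_u(1)=0; hence the switching function of <Ref> is σ_u=p_u. The first order necessary condition d𝒥(u^*)(u-u^*)≥0 for all u∈𝒰 then forces u^*(t)=-sgn(σ_u^*(t)) wherever σ_u^*(t)≠0, so a critical control is bang-bang as soon as its switching function vanishes only on a set of measure zero.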
§ CONVERGENCE UNDER EXTREMAL CONDITIONS There are spaces where weak convergence and strong convergence are equivalent, such spaces are said to have the Schur property; the canonical example being the space l^1(ℕ) of summable sequences. Unfortunately, general L^1-spaces may not possess the Schur property, being the space L^1([0,1]) the classic counterexample. In <cit.> several results regarding these type of properties for particular sequences in general L^1-spaces were given, being probably the most important the following one. <cit.> Let (X,𝒜,μ) be a complete σ-finite measure space. Let {f_n}_n∈ℕ⊂ L^1(X)^m be a sequence of functions converging weakly in L^1(X)^m to a function f∈ L^1(X)^m. If f(x) is an extremal point of K(x):=({f(x)}∪{f_n(x)}_n∈ℕ) for a.e. x∈ X, then |f_n-f|_L^1(X)^m→ 0. Nevertheless, the notation in the proof <cit.> is sloppy and quite confusing sometimes. Moreover, it ignores many measurability issues. In this paper, we need the following result, proved in <cit.> as a corollary of <cit.>. <cit.> Let (X,𝒜,μ) be a complete σ-finite measure space, F:X↠ℝ^m a set-valued mapping taking nonempty compact convex values, and f∈L^1(X)^m such that f(x)∈ F(x) for a.e. x∈ X. Let {f_n}_n∈ℕ⊂L^1(X)^m be a sequence of functions such that f_n(x)∈ F(x) for a.e. x∈ X. If f_n⇀ f weakly in L^1(X)^m, then f_n→ f in L^1(X)^m. We use the first subsection of this appendix to provide an alternative proof of <cit.>. We mention that the result we give (<ref>) is somewhat different in the assumptions. We assume that the set-valued mapping in <ref> takes compact convex values, which suffices for the purposes of this paper. So in that regard the result given here is weaker. On the positive side, though not a significant improvement, our result drops some measure-theoretical assumptions made in <cit.> such as completeness and sigma-finiteness of the underlying measure, see <ref> below for the details. To the best of the author's knowledge, there is not an alternative proof of <cit.> or <cit.> in the literature. We also point out that the proof of <cit.> contains a flaw, namely it is assumed that the set of extreme points of a convex set is closed; which is not true for subsets of ℝ^m with m≥3. In the second subsection of this appendix we give a different version of a result related to <cit.> in general measure spaces (the result given there is for X being a measurable subset of the Euclidean space). Finally, we comment that our <ref> has as a corollary the result <cit.> under some additional assumptions. §.§ The extremal Schur's property We first prove some preparatory lemmas. The first one is concerned with the equi-integrability of weakly precompact sets; we refer the reader to <cit.> for the definition of equi-integrability and its consequences. Let (X,𝒜,μ) be a measure space and 𝒱 be a weakly relatively compact subset of L^1(X)^m. The following statements hold. * For every ε > 0 there exists δ > 0 such that μ(A)<δ implies sup_f∈𝒱∫_A|f(x)|dμ(x)<ε for all A∈𝒜. * For every ε > 0 there exists a finite measure set B ∈𝒜 such that sup_f ∈𝒱∫_X ∖ B |f| dμ< ε. By the Dunford-Pettis Theorem, the set 𝒱 is a family of equi-integrable functions; see <cit.>. We can then apply <cit.> to conclude the result. A proof of the next lemma, for the particular case of finite measure spaces, can be found in <cit.>. We give a proof for the general case. Let (X,𝒜,μ) be a measure space, {f_n}_n=1^∞⊂L^1(X) a sequence and f∈L^1(X). Suppose that f_n⇀ f weakly in L^1(X) and f(x)≤lim inf_n→∞f_n(x) for a.e. x∈ X. 
Then, |f_n-f|_L^1(X)^m→0. For each n∈ℕ, define v_n:=f_n-f. Then lim inf_n→∞ v_n≥0 a.e. in X, and since v_n⇀ 0 weakly in L^1(X), the set {v_n: n∈ℕ} is a weakly relatively compact subset of L^1(X). Let ε > 0 be arbitrary. Due to <ref>, there exists a finite measure set B ∈𝒜 and δ>0 such that sup_n∈ℕ∫_X ∖ B |v_n(x)| dμ(x)< ε/4 and sup_A∈𝒜 μ(A)<δsup_n∈ℕ∫_A |v_n(x)| dμ(x) < ε/4 By Egorov's theorem, we can find A ⊂ B with μ(A)<δ such that the convergence lim_n →∞inf_k ≥ n v_k = lim inf_n →∞ v_n is uniform in B ∖ A. In particular, there exists n_1∈ℕ such that v_n≥ -ε (8μ(B ∖ A))^-1 a.e. on B ∖ A for all n≥ n_1. It follows that |v_n|≤ v_n + ε(4μ(B ∖ A))^-1 a.e. in B ∖ A for all n≥ n_1. By definition of weak convergence, there exists n_2∈ℕ such that ∫_B ∖ A v_n d μ < ε/4 for all n ≥ n_2. Putting all together, ∫_X |v_n(x)| dμ(x) = ∫_X ∖ B|v_n(x)| dμ(x) + ∫_B ∖ A|v_n(x)| dμ(x)+ ∫_A |v_n(x)| dμ(x) <ε/4 + ∫_B ∖ A[v_n(x)+ε/4μ(B∖ A)] dμ(x) +ε/4≤ε for all n ≥max{n_1,n_2}. This shows that |v_n|_L^1(X)^m→ 0. Recall that given a convex set C∈ℝ^m and u∈ C, the set N_C(u):={ν∈ℝ^m: ν· (v-u)≤0 for all v∈ C} is the normal cone to C at u. We now address some measurability issues arising from the normal cone. These are of technical nature and will be needed later on. We use the standard definitions of measurability of set-valued mappings, see <cit.>. Let (X,𝒜) be a measurable space, F:X↠ℝ^m a measurable set-valued mapping taking nonempty compact convex values, and f:X→ℝ^m a measurable function such that f(x)∈ F(x) for a.e. x∈ X. The set-valued mapping x N_F(x)(f(x)) from X to ℝ^m is measurable. By <cit.>, there exists a countable family {f_n}_i∈ℕ of measurable functions f_n:X→ℝ^m such that F(x)={f_n(x):n∈ℕ} for all x∈ X. Let K:X↠ℝ^m be given by K(x):={ξ∈ℝ^m: ξ·(f_n(x)-f(x))≤0 for all n∈ℕ}. The measurability of K follows from <cit.>. Finally, observe that K(x)=N_F(x)(f(x)) for all x∈ X. We now give a geometrical construction that will allow to extend <ref> to the higher dimensional case. We proceed inductively, taking care of the measurability issues arising in the process. Let (X,𝒜) be a measurable space, F:X↠ℝ^m a measurable set-valued mapping taking nonempty compact convex values such that 0∈F(x) for a.e. x∈ X. Then there exist measurable functions e_1,…, e_m: X→ℝ^m such that for a.e. x∈ X, {e_1(x)…,e_m(x)} is an orthonormal basis of ℝ^m and 0∈ e_i(x)+N_F(x)∩ H_i(x)(0) for all i∈{1,…, m}, where H_1(x):=ℝ^m and H_i+1(x):={v∈ℝ^m: e_j(x)· v=0 for all j∈{1,…,i}} for i=1,…,m-1. We argue by induction. The case i=1 follows trivially from <ref>, and the Kuratowski–Ryll-Nardzewski Selection Theorem, see <cit.>. Now, let i∈{1,…,m-1} and suppose that there exists a measurable function e_i:X→ℝ^m such that -e_i(x)∈ N_F(x)∩ H_i(x)(0)∩ H_i(x) and |e_i(x)|=1 for a.e. x∈ X. By <cit.>, the set valued mapping x H_i+1(x) is measurable, hence as F is a compact-valued measurable mapping, it follows that the mapping x F(x)∩ H_i+1(x) is measurable, see <cit.>. As for a.e. x∈ X, f(x) is an extreme point of F(x), it follows that 0 is a boundary point of F(x)∩ H_i+1(x). By <ref>, the mapping x N_F(x)∩ H_i+1(x)(0) is measurable. Define K_i+1:X↠ℝ^m by K_i+1(x):=N_F(x)∩ H_i+1(x)(0)∩ H_i+1(x)∩{ν∈ℝ^m:|ν|=1} By <cit.>, K_i+1 is measurable; moreover it is clear that it takes nonempty closed values. We can then use the Kuratowski–Ryll-Nardzewski Selection Theorem to conclude the existence of a measurable selection e_i+1 of K_i+1. This completes the induction step. Finally, we note that as e_i(x)∈ H_i(x)∩{ν∈ℝ^m:|ν|=1} for a.e. 
x∈ X and all i∈{1,…, m}, the set {e_1(x),…,e_m(x)} generates an orthonormal basis for a.e. x∈ X. We now proceed to a merely technical lemma that allows to localize an argument given in the main theorem. Let (X,𝒜,μ ) be a measure space, F:X↠ℝ^m a measurable set-valued mapping taking nonempty compact convex values such that 0∈F(x) for a.e. x∈ X. Let {f_n}_n∈ℕ⊂L^1(X)^m be a sequence converging weakly to zero in L^1(X)^m. Consider functions e_1,…,e_m∈L^∞(X)^m satisfying <ref>. Let i∈{1,…,m-1} and suppose that for each j∈{1,…,i}, e_j(x)· f_n(x)→ 0 for a.e. x∈ X. Then e_i+1· f_n→ 0 in L^1(X)^m. Let N∈𝒜 be a measure zero set such that e_j(x)· f_n(x)→ 0 for all x∈ X∖ N holds for each j∈{1,…,i} and such that (<ref>) is valid for all x ∈ X ∖ N. Let x∈ X∖ N. Consider a subsequence {f_n_k(x)}_k∈ℕ of {f_n(x)}_n∈ℕ such that lim inf_n→∞e_i+1(x)· f_n(x)=lim_k→∞ e_i+1(x)· f_n_k(x). Since F(x) is compact, we can find a subsequence {f_n_k_l(x)}_l∈ℕ of {f_n_k(x)}_k∈ℕ converging to some f_x∈ F(x). Then, by (<ref>), we get e_j(x)· f_x=0 for j∈{1,…, i} and thus f_x∈ H_i+1(x). Since -e_i+1(x)∈ N_F(x)∩ H_i+1(x)(0), we get e_i+1(x)· f_x≥ 0. Then, lim inf_n→∞e_i+1(x)· f_n(x)=lim_l→∞ e_i+1(x)· f_n_k_l(x)=e_i+1(x)· f_x≥ 0. Since x∈ X∖ N was arbitrary, we conclude lim inf_n→∞e_i+1· f_n≥0 a.e. x∈ X. It follows then from Lemma <ref> that |e_i+1· f_n|_L^1(X)^m→ 0. We are now ready to prove the main result of this subsection of the Appendix. Let (X,𝒜,μ) be a measure space, F:X↠ℝ^m a measurable set-valued mapping taking nonempty compact convex values, and f∈L^1(X)^m such that f(x)∈ F(x) for a.e. x∈ X. Let {f_n}_n∈ℕ⊂L^1(X)^m be a sequence of functions such that f_n(x)∈ F(x) for a.e. x∈ X. If f_n⇀ f weakly in L^1(X)^m, then f_n→ f in L^1(X)^m. Assume without loss of generality that f=0. Let e_1,…, e_m:X→ℝ^m be the measurable functions given in <ref>. We argue by induction that |e_i· f_n|_L^1(X)→ 0 for all i∈{1,…,m}. The case i=1 follows trivially from Lemma <ref>. Let i∈{1,…,m-1} and suppose that |e_j· f_n|_L^1(X)^m→0 for all j∈{1,…, i}. Let {e_i+1· f_n_k}_k∈ℕ be any subsequence of {e_i+1· f_n}_n∈ℕ. We can find a subsequence {f_n_k_l}_l∈ℕ of {f_n_k}_k∈ℕ such that for each j∈{1,…, i}, e_j(x)· f_n_k_l(x)→ 0 for a.e. x∈ X. It follows by <ref> that {e_i+1· f_n_k_l}_l∈ℕ converges to 0 in L^1(X). Since every subsequence of {e_i+1· f_n}_n∈ℕ has further a subsequence converging to 0 in L^1(X), it follows that the entire sequence converges to 0 in L^1(X). This completes the induction step. Finally, since {e_1(x),…, e_m(x)} is an orthonormal basis of ℝ^m for a.e. x∈ X, ∫_X|f_n(x)| dμ(x)≤∑_i=1^m∫_X|e_i(x)· f_n(x)| dμ(x)⟶ 0. §.§ The weak clustering principle Given p∈[1,∞] and a set-valued mapping G:X↠ℝ^m from a measure space to the Euclidean space, we denote 𝒮^p_G: = {g ∈ L^p(X)^m : g(x) ∈ G(x) for a.e. x ∈ X}. Let (X,𝒜,μ) be a σ-finite measure space, F:X↠ℝ^m a measurable set-valued mapping taking nonempty compact convex values, p∈[1,∞], and f∈ L^p(X)^m such that f(x)∈ F(x) for a.e. x∈ X. Suppose there exists a set E⊂ X of positive measure such that f(x)∉ F(x) for a.e. x∈ E. Then there exist two distinct functions α,β∈𝒮_F^p such that f=α+β/2 a.e. in X. Observe that f∈𝒮^p_F, and consequently 𝒮^p_F is nonempty. As stated in <cit.>, from the measurability of F, we can deduce that 𝒮^p_F=𝒮_ F^p. Hence, as f∉𝒮_ F^p, f is not an extreme point of 𝒮_F^p; thus there must exist two distinct functions α,β∈𝒮_F^p such that f=2^-1(α+β) a.e. x∈ X. 
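For instance, if F(x)≡[-1,1]^m and f≡0, one may simply take α≡(-1,…,-1) and β≡(1,…,1); the point of the previous lemma is that such a nontrivial decomposition is always available once f fails to take extremal values on a set of positive measure.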
Let (X,𝒜,μ) be a non-atomic σ-finite measure space and F:X↠ℝ^m a measurable set-valued mapping taking nonempty compact convex values. Let p∈[1,∞) and consider f∈ L^p(X)^m such that f(x)∈ F(x) for a.e. x∈ X. There exists a sequence {f_n}_n∈ℕ⊂ L^p(X)^m with the following properties. * f_n(x)∈ ext F(x) for a.e. x∈ X and all n∈ℕ; * f_n⇀ f weakly in L^p(X)^m. Observe that f∈𝒮_F^p, and consequently 𝒮_F^p is nonempty. According to <cit.>, we have 𝒮_ext F^p^w=𝒮^p_F, where 𝒮_ext F^p^w denotes the closure of 𝒮_ext F^p with respect to the weak topology of L^p(X)^m. By the theorem of Eberlein–Šmulian (in the form of <cit.>), every sequence in 𝒮^p_ext F has a weak limit point. We can then employ Day's Lemma (<cit.>) to find a sequence {f_n}_n∈ℕ⊂𝒮_ext F^p ⊂L^p(X)^m such that f_n ⇀ f in L^p(X)^m. We are now ready to prove the main result of this subsection. Let (X,𝒜,μ) be a non-atomic σ-finite measure space, F:X↠ℝ^m a measurable set-valued mapping taking non-empty compact convex values, and f∈L^1(X)^m such that f(x)∈ F(x) for a.e. x∈ X. Suppose that there exists a set E of positive measure such that f(x)∉ ext F(x) for a.e. x∈ E. Then there exists δ_0>0 such that for every δ∈(0,δ_0] there exists a sequence {f_n}_n∈ℕ⊂L^1(X)^m with the following properties. * f_n(x)∈ F(x) for a.e. x∈ X and all n∈ℕ; * |f_n-f|_L^1(X)^m=δ for all n∈ℕ; * f_n⇀ f weakly in L^1(X)^m. By <ref>, there exist two distinct functions α,β∈L^1(X)^m such that f(x)=2^-1(α(x)+β(x)) for a.e. x∈ X. Let G:X↠ℝ^m be given by G(x):=[α(x),β(x)]. By Lemma <ref>, there exists a sequence {g_n}_n∈ℕ⊂L^1(X)^m converging weakly in L^1(X)^m to f such that g_n(x)∈{α(x),β(x)} for a.e. x∈ X and all n∈ℕ. Define δ_0:=2^-1|β-α|_L^1(X)^m. For each δ∈(0,δ_0] consider the sequence {f_n}_n∈ℕ given by f_n:=f+2δ/|β-α|_L^1(X)^m(g_n-f) ∀ n∈ℕ. Then, by construction, f_n⇀ f and |f_n-f|_L^1(X)^m=δ for all n∈ℕ. The sequence {f_n}_n∈ℕ satisfies all the stated properties and the proof is complete. Observe that the non-atomicity assumption in the previous proposition is needed, as spaces like l^1(ℕ) have the Schur property, not to mention the L^1-spaces induced by the counting measure on a finite subset of the natural numbers, which yield finite-dimensional spaces. Finally, we note that <ref> yields the result of <cit.> under a set of slightly different assumptions.
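As a concrete illustration of the previous proposition (given only for orientation), take X=[0,1] with the Lebesgue measure, m=1, F(x)≡[-1,1] and f≡0, so that f(x)∉ ext F(x) everywhere. The Rademacher-type functions g_n(x):=sgn(sin(2π n x)) satisfy g_n(x)∈{-1,1}= ext F(x), g_n⇀0 weakly in L^1([0,1]) and |g_n|_L^1([0,1])=1; hence, for any δ∈(0,1], the sequence f_n:=δ g_n takes values in F(x), converges weakly to f and satisfies |f_n-f|_L^1([0,1])=δ, exactly as in the construction used in the proof.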
http://arxiv.org/abs/2307.04660v1
20230710155730
The high-pressure phase diagram of BaNi$_2$As$_2$: unconventional charge-density-waves and structural phase transitions
[ "Tom Lacmann", "Amir-Abbas Haghighirad", "Sofia-Michaela Souliou", "Michael Merz", "Gaston Garbarino", "Konstantin Glazyrin", "Rolf Heid", "Matthieu Le Tacon" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.str-el" ]
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany [email protected] Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany Karlsruhe Nano Micro Facility (KNMFi), Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany ESRF, The European Synchrotron, 71, avenue des Martyrs, CS 40220 F-38043 Grenoble Cedex 9 Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany [email protected] Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany Structural phase transitions accompanied with incommensurate and commensurate charge-density-waves (CDWs) modulations of unconventional nature have been reported in , a non-magnetic cousin of the parent compound of Fe-based superconductors, BaFe_2As_2. The strong dependence of  upon isoelectronic substitutions alongside original dynamical lattice effects suggest a strong tunability of the electronic phase of the system through structural effects. To gain further insights, we present a comprehensive synchrotron x-ray diffraction and first-principles calculation study of the evolution of the crystal structure and lattice instabilities of  as function of temperature and hydrostatic pressures (up to 12 GPa). We report a cascade of pressure-induced structural phase transitions and electronic instabilities up to ca. 10 GPa, above which all CDW superstructures disappear. We reveal that the stable high-pressure phase consists of planar Ni zigzag chains, from which the surrounding As have been pushed away. This yields a strong reduction of the interlayer As-As distance (along the original c axis), akin to what is observed in the collapsed tetragonal structure of other pnictides, albeit here with a monoclinic structure. The discovery of new polymorphs in the pressure-temperature phase diagram of  emphasizes the importance of the relative Ni-Ni and Ni-As bond lengths in controlling the electronic ground state of this compound and replenish our understanding of viable electronic phases under extreme conditions. The high-pressure phase diagram of : unconventional charge-density-waves and structural phase transitions Matthieu Le Tacon August 12, 2023 ========================================================================================================= § INTRODUCTION Superconductivity and charge-density-waves (CDWs) stand amongst the most commonly encountered instabilities of the metallic state and are often coexisting in the complex phase diagrams of quantum materials. Prominent examples encompass α-Uranium <cit.>, high-temperature superconducting cuprates <cit.>, transition-metal dichalcogenides <cit.> or the more recently discovered Kagome superconductors <cit.>. Both electronic orders have also been evidenced in , a weakly correlated metallic system which has at room temperature the same tetragonal I4/mmm crystal structure as the parent compound of Fe-based superconductors BaFe_2As_2<cit.>. 
Upon cooling, rather than a magneto-structural transition,  exhibits an original form of dynamical lattice nematicity <cit.> before undergoing a series of CDW instabilities and structural distortions <cit.> and ultimately entering a low temperature superconducting phase below ∼0.6 K. Incommensurate CDW (I-CDW) fluctuations have been associated with an enhanced elasto-resistance signal in the B_1g channel <cit.> and detected in thermal diffuse x-ray scattering already at room temperature <cit.> at the I-CDW wavevector (±0.28 0 0)_tet (note that throughout, this paper and for simplicity all reciprocal space indices are given in the tetragonal notation (H K L)_tet but for better readability the subscript _tet will generally be omitted). A long-range I-CDW order only develops fully at ∼ 147 K <cit.>, and triggers a minute orthorhombic distortion bringing the system into a Immm phase <cit.>. Below T_tri∼ 137 K (upon cooling) the system then undergoes a first-order transition to a triclinic (P1̅) phase while the I-CDW is replaced by a commensurate CDW (C-CDW) with a wavevector (±1/30∓1/3) <cit.>. Various substitutions have been used and yield a rapid decrease of T_tri alongside a sudden increase of the superconducting transition temperature T_c to ∼ 3.5 K which occurs when the triclinic phase is completely suppressed <cit.>. Apart from the generic trends, details appear strongly dependent on the nature of the substitution. For instance ∼60% of Strontium on the Barium site are needed to suppress completely the triclinic phase <cit.>, an effect obtained with only ∼7% of Phosphorus substitution for As <cit.> or ∼12% of Cobalt on the Nickel site <cit.>. All these substitutions are in principle isoelectronic and therefore do not change the charge carrier concentration in the system. On the other hand, these substitutions affect significantly the lattice parameters <cit.> in a non-trivial manner, which suggests in turn, akin to the Fe-based superconductors <cit.>, that pressure might be a valuable tuning parameter of the electronic phase of . To the best of our knowledge, high-pressure investigations on this system have only been limited to resistivity measurements of pristine BaNi_2As_2 <cit.> and in a limited pressure range (up to 2.74 GPa) where only a modest dependence of both structural- and superconducting transition temperatures was found. Here, we use high-resolution single crystal x-ray diffraction (XRD) to investigate the pressure and temperature dependence of the various structural phases of  over a broader pressure range, and to construct a temperature-pressure phase diagram of this compound extending up to 12 GPa and down to 30 K. We have carried out systematic structural refinement and discovered a series of pressure-induced structural phase transitions, as well as a set of novel superstructures associated to new CDW instabilities (incommensurate and commensurate). These instabilities show an highly unusual pressure dependence but are well described by our first-principles density-functional perturbation theory (DFPT) calculations which also emphasize the absence of Fermi surface nesting and only weak electron-phonon coupling enhancement of the phonon lineshapes (when any). This is in sharp contrast with the phenomenology encountered in prototypical CDW systems and indicate the unconventional nature of all these CDW instabilities. 
Above ∼10 GPa, all superstructure peaks disappear and our study reveals that the stable high-pressure phase is monoclinic C2/m and consists of planar Ni zigzag chains. Akin to what is observed in the collapsed tetragonal (cT) phase of other pnictide compounds, an overall decrease of the out-of-plane c axis parameter is observed as the As-As distance between the Ni planes is strongly reduced. On the other hand, in contrast with known cT phases, the 'thickness' of the NiAs layers increases in the high pressure phase compared to that of the original tetragonal I4/mmm structure, as the As atoms are pushed away from the Ni planes. This peculiarity emphasizes the importance of the hybridization between the As 4p and Ni 3d orbitals, through the Ni-As and As-As distances, in controlling the electronic phase of BaNi_2As_2. § EXPERIMENTAL DETAILS §.§ Single-Crystal Growth and Characterisation Single crystals of BaNi_2As_2 were grown using a self-flux method. A NiAs precursor was synthesised by mixing the pure elements Ni (powder, Alfa Aesar 99.999%) and As (lumps, Alfa Aesar 99.9999%), which were ground and sealed in a fused silica tube and annealed for 20 hours at 730°C. All sample handling was performed in an argon glove box (O_2 content < 0.1 ppm). For the growth of BaNi_2As_2, a ratio of Ba:NiAs = 1:4 was placed in an alumina tube, which was sealed in an evacuated quartz ampule (10^-5 mbar). The mixtures were heated to 700°C for 10 h, followed by heating slowly to a temperature of 1090°C, soaked for 5 h, and subsequently cooled to 995°C at a rate of 0.25°C/h. At 995°C, the furnace was canted to remove the excess flux, followed by furnace cooling. Plate-like single crystals with typical sizes of 3 x 2 x 0.5 mm^3 were easily removed from the ingot. The crystals are shiny brass-yellow with a metallic lustre. The measured crystals were cleaved with a scalpel just before the x-ray scattering experiments. §.§ High-pressure X-ray Diffraction High pressure–low temperature experiments were performed at the European Synchrotron Radiation Facility (ESRF, beamline ID 15B) and Positron-Elektron-Tandem-Ring-Anlage III (PETRA III, DESY, beamline P02.2) in a membrane-type diamond anvil cell (DAC) using the ruby fluorescence method for the pressure calibration <cit.>. For the experiments at the ESRF, diamonds with culet diameters of 500 μm and a stainless steel gasket were used. Three BaNi_2As_2 single crystals and one ruby were placed inside the gasket hole and helium was used as the pressure-transmitting medium. A sketch and a photograph of the DAC, the samples, ruby and gasket hole after gas loading are shown in Fig. <ref> a) and b). For x-ray diffraction, a monochromatic beam with an energy of 30.17 keV (wavelength ≈0.411 Å) was used and the diffracted beam was detected with a Mar Research MAR555 flat panel detector. For each dataset, wide scans with 0.5 s exposures and 0.5° intervals over a total angular rotation of ±32° were performed. The detector position and distance were calibrated with silicon powder and an enstatite single-crystal standard using the Dioptas and CrysAlis software packages. For the measurements at PETRA III, diamonds with culet diameters of 400 μm and rhenium gaskets were used. The cells were loaded with neon as the pressure-transmitting medium. X-ray diffraction experiments were performed using a monochromatic x-ray beam with an energy of 42.71 keV (wavelength ≈0.2903 Å). The diffracted beam was detected with a Perkin Elmer XRD 1621 detector. The detector-to-sample distance was calibrated with a CeO_2 standard using the programme Dioptas. 
At each pressure and temperature, wide scans with 0.5 s exposures and 0.5° intervals and -25° to +30° of rotation were performed. We will focus here on isotherms that were measured mainly by cooling from room temperature to the desired temperatures, at which we compressed the crystals to the desired pressure. The exact p-T path we have followed for each sample, alongside additional data obtained along four isobars (at ∼4, 7.6, 10 and 12 GPa, respectively), can be found in the Supplemental Material (SM) <cit.>. As an example, the (H K 1) plane from the measurement at 0.29 GPa and 194 K is shown in <ref> c) and the structure determined from a structural refinement of such a dataset (SG: I4/mmm) is shown in Fig. <ref> d). §.§ Analysis of the Crystal Structure CrysAlis Pro was used for data collection, cell refinement, data reduction and the analysis of the diffraction precession images for all datasets. SHELXS97 <cit.> and SHELXL 2014/7 <cit.> as well as JANA2006 <cit.> were used for solving the crystal structure and for the refinements. Crystal data and structural refinement details are summarized in Tab. <ref> and in the SM <cit.>. Atomic coordinates and site labels were standardized using the VESTA <cit.> crystal structure visualisation software. § COMPUTATIONAL DETAILS Density-functional investigations of the lattice dynamics properties of the different structural phases of BaNi_2As_2 were performed in the framework of the mixed-basis pseudopotential method <cit.>. This approach employs an efficient description of the more localized components of the valence states by using a basis set combining plane waves and local functions at atomic sites. The electron-ion interaction is described by norm-conserving pseudopotentials, which were constructed following the descriptions of Hamann, Schlüter, and Chiang <cit.> for Ba and Vanderbilt <cit.> for Ni and As, respectively. Semi-core states Ba-5p, Ni-3s, Ni-3p were included in the valence space. The exchange-correlation functional was represented by the generalized gradient approximation in the PBE form <cit.>. The mixed-basis set consisted of plane waves with a kinetic energy cutoff of 22 Ry and local functions of p,d type for Ba and s,p,d type for Ni, respectively. Lattice dynamics properties were calculated within linear response or density functional perturbation theory (DFPT) as implemented in the mixed-basis method <cit.>. Brillouin-zone integration was performed by sampling a 16×16×8 k-point mesh in conjunction with a Gaussian broadening of 0.1 eV. To locate the positions of phonon anomalies in momentum space, scans of the phonon dispersions on two-dimensional high-symmetry planes were performed as follows. Dynamical matrices were calculated within DFPT on an 8×8 mesh and interpolated onto a much denser 120×120 mesh using a standard Fourier interpolation technique. Diagonalizing the dynamical matrices provided the phonon frequencies. 
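For readers unfamiliar with this interpolation step, the minimal Python sketch below illustrates the generic idea on synthetic data: dynamical matrices computed on a coarse, regular q-grid are transformed to real-space force constants and re-synthesized on an arbitrary dense q-grid before diagonalization. It is only a one-dimensional toy illustration with random Hermitian matrices and hypothetical array names, not the mixed-basis code used in this work; negative (i.e. imaginary) frequencies obtained in this way are what signal a lattice instability in the maps discussed below.

import numpy as np

def fourier_interpolate_dynmat(D_coarse, q_dense):
    # D_coarse: (Nq, n, n) dynamical matrices on the regular grid q_m = m/Nq (reduced units)
    # q_dense:  (Mq,) reduced wavevectors in [0, 1) at which D(q) is wanted
    Nq = D_coarse.shape[0]
    R = np.arange(Nq)                                   # real-space lattice vectors (1D toy model)
    q_coarse = np.arange(Nq) / Nq
    # back-transform to real-space force constants C(R)
    C_R = np.einsum('rq,qab->rab', np.exp(-2j * np.pi * np.outer(R, q_coarse)), D_coarse) / Nq
    # re-synthesize D(q) on the dense grid
    D_dense = np.einsum('mr,rab->mab', np.exp(2j * np.pi * np.outer(q_dense, R)), C_R)
    return 0.5 * (D_dense + np.conj(np.transpose(D_dense, (0, 2, 1))))   # enforce Hermiticity

rng = np.random.default_rng(0)
n, Nq = 3, 8
D = rng.normal(size=(Nq, n, n)) + 1j * rng.normal(size=(Nq, n, n))
D = 0.5 * (D + np.conj(np.transpose(D, (0, 2, 1))))     # Hermitian test data
D_dense = fourier_interpolate_dynmat(D, np.linspace(0.0, 1.0, 120, endpoint=False))
eig = np.linalg.eigvalsh(D_dense)                        # (120, n) eigenvalues
freqs = np.sign(eig) * np.sqrt(np.abs(eig))              # 'negative' frequencies flag instabilities
print(freqs.shape)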
§ RESULTS AND DISCUSSION In this section we present evidence for the existence of new high-pressure (HP) phases in BaNi_2As_2, which can best be seen following two isotherms at 140 K (above the triclinic transition at ambient pressure) and 94 K (below the triclinic transition at ambient pressure). Up to about 10 GPa, each of these HP phases is accompanied by a new type of CDW modulation. Above this pressure, CDW superstructures disappear. §.§ Pressure dependence of the I-CDW: 140 K isotherm As previously discussed <cit.>, the formation of the long-range I-CDW at ambient pressure (hereafter referred to as I-CDW1) is accompanied by a fourfold symmetry-breaking transition. This is best seen as a difference between the thermal expansion along the (100) and (110) directions  <cit.> and indicates a small but measurable orthorhombic distortion below ∼146 K. Consequently, we can index the Bragg reflections obtained at 140 K and ambient pressure in a slightly distorted orthorhombic cell with space group Immm. The corresponding structural parameters are detailed in Tab. <ref>. In agreement with previous reports <cit.>, I-CDW1 satellites are observed around Bragg reflections at (±0.28 0 0) or (0 ±0.28 0), depending on the reflection. The effect of pressure on the unit cell is reported in Figure <ref>-a), where we show that the a, b and c lattice parameters decrease smoothly with increasing pressure up to 7 GPa. As the orthorhombicity increases upon pressurization, half of the I-CDW1 superstructure peaks disappear. The latter can be interpreted as a consequence of detwinning that could either originate from weak non-hydrostaticity in the pressure cell or from the anisotropic response of BaNi_2As_2 to strain. In parallel, we observe an increase of the incommensurability of the I-CDW1 from 0.28 at ambient pressure to 0.293 at 7 GPa. Above ∼7 GPa the I-CDW1 satellites disappear, and a new set of 8 incommensurate satellites appears close to the wavevectors (±0.358 ±0.10 0) and (±0.10 ±0.358 0) around e.g. the (220) Bragg peaks (Fig. <ref>-d), forming a new I-CDW, labelled I-CDW2 hereafter. Furthermore, the new I-CDW2 shows a strong temperature dependence of the wavevector and onsets at a higher temperature of ≈168 K. Details can be found with the evaluation of the isobars around 4 and 7.6 GPa in the SM <cit.>. Note that although the original I-CDW1 peaks are lost above 7 GPa, some faint peaks with similar wavevector can still be seen up to 10 GPa, albeit now centered around forbidden Bragg reflections such as the (1 2 0) (denoted as I-CDW1'). Additionally, the pressure dependence of the a and b lattice parameters displays a sudden upturn indicating that a structural phase transition takes place between 7 and 7.6 GPa. Our structural refinement in this region shows that the crystal structure is monoclinic and can be described within the space group C2/m. This structural phase transition involves small atomic displacements that break the translational symmetry of the lattice (or equivalently, domain-related distortions, as the minute difference between cell parameters a and b in the Immm phase increases approaching the monoclinic phase) and a shear displacement of the Ni layers against each other. This amounts to a loss of symmetries of both the As and Ni sites, as additional degrees of freedom are introduced in the Wyckoff position 4i by breaking the correlation between the x and z components at this position. The 4i site symmetry hereby changes from mm2 in the Immm to m in the C2/m phase. In this phase, instead of four equivalently long Ni-Ni bonds, regular Ni zigzag chains with two long and two short bond distances form (lower panel of Fig. <ref>-a). Above ∼10.2 GPa, all CDW satellites completely disappear. Although the symmetry remains the same as the I-CDW2 disappears, a closer look at the crystal structure reveals important internal changes (Fig. <ref>-a) both in and out of the NiAs planes. 
In-plane, the Ni-Ni bond length disproportionation strongly increases (reaching 4.2% at 12 GPa), indicating an increased separation of the zigzag chains. In the perpendicular direction (the c lattice direction in the I4/mmm setting), the As-As distance between NiAs layers is abruptly reduced above 10 GPa, which is reminiscent of the first-order transition to cT phases in other iron <cit.> or cobalt <cit.> pnictide families. However, the As-As distance within the NiAs layers increases (or equivalently the As-Ni-As angle decreases), showing that the NiAs layers become thicker in the high-pressure phase. This can only be evidenced by looking carefully at the bond distances since overall the unit cell size perpendicular to the Ni planes decreases. The isobar measurements at ∼12 GPa indicate an absence of transition upon cooling at this pressure between 200 K and 50 K <cit.>. §.§ Pressure dependence of the C-CDW: 94 K isotherm Next, we look at the impact of pressure on the triclinic phase of BaNi_2As_2, where the C-CDW is seen at ambient pressure down to the lowest temperatures <cit.>. Previous studies indicate that in this phase the four Ni-Ni bond distances become nonequivalent, forming in-plane Ni-Ni dimers <cit.>, as can be derived from the different Ni-Ni bond distances in Fig. <ref> a). Before discussing the evolution of the crystal structure, let us first focus on the pressure dependence of the C-CDW superstructure (we recall here that for simplicity, the corresponding reflections are indexed in the tetragonal setting). In the triclinic phase, the characteristic set of C-CDW satellites with wavevectors (±1/3 0 ∓1/3) is still clearly visible in the (H 2 L) plane at 2.11 GPa (compare Fig. <ref> b). A new set of C-CDW (C-CDW2) superstructure peaks is observed at 3.75 GPa around wavevectors (±1/2 0 ∓1/2). Note that at this pressure, weak signatures of the C-CDW1 satellites can still be seen, indicating a narrow coexistence region of the two orders. The C-CDW1 satellites are completely suppressed with increasing pressure, while the C-CDW2 remains visible up to 9.5 GPa. Above 10 GPa, as for the 140 K isotherm, no superstructure reflections could be observed. From a structural point of view, and as previously discussed, the high-pressure phase above 10 GPa is best described as a 'collapsed' monoclinic C2/m structure, with a reduced As-As distance for the As ions connecting the Ni-As layers. In contrast with the situation at higher temperatures, however, evaluating the structure by including the C-CDW2 modulation in the refinement yields poor results when describing it with the C2/m symmetry. On the other hand, it is quite clear as well that the triclinic P1̅ phase is suppressed alongside the C-CDW1 phase above 2.5 GPa. The best structural solution for this intermediate pressure phase (i.e., between 3.75 and 9.5 GPa) is obtained by including the C-CDW2 superstructure reflections explicitly when solving the structure in the monoclinic C2/c space group (this corresponds in particular to a doubling of the unit cell in the ab plane, in which three inequivalent Ni-Ni bonds are found). The transition between the two monoclinic phases with C2/m and C2/c space groups occurs around 10 GPa at 94 K, where the monoclinic β angle and the ratio of the a- and b-lattice parameters (Fig. <ref> a) exhibit clear discontinuities. All these transitions are first-order in nature and, in contrast to the situation at higher temperatures where the symmetry of the unit cell was lowered with increasing pressure, symmetries are restored under pressurization at low temperature. 
§.§ Pressure-Temperature phase diagram and discussion We illustrate the results of our analysis of the crystal structures and superstructures of BaNi_2As_2 for each of the >100 points measured in the pressure-temperature plane in the detailed phase diagram of Fig. <ref>. Crystallographic parameters for each structure are given in Tab. <ref> and in the SM <cit.>. The first important observation is that the phase diagram shows a qualitatively different pressure dependence for the high (orthorhombic and I-CDW) and low (triclinic and C-CDW) temperature phases. While the low temperature triclinic phase is lost already between 2 and 3 GPa, the orthorhombic phase around 140 K survives up to ∼7 GPa. We have seen that this is accompanied by a continuous change of the incommensurability of the I-CDW1 with increasing pressure, whose onset temperature nonetheless does not seem to vary strongly with pressure. This is also the case for the first-order transition temperature to the triclinic/C-CDW1 phase below ca. 3 GPa, in agreement with an earlier transport study <cit.>. This independence of the CDW formation temperatures over large pressure ranges contrasts significantly with the previously reported effects of chemical substitutions. This is particularly true in the case of the C-CDW and triclinic phases, which are gradually suppressed through substitution e.g. by phosphorus, cobalt or strontium <cit.>. This can be best understood by looking in more detail at the effect of these substitutions on the structure, which tend to have opposite effects in- and out-of-plane, in contrast to hydrostatic pressure that compresses all lattice parameters. The main effect of P or Co substitutions, which efficiently suppress the triclinic and C-CDW1 phase, is a contraction of the ab plane <cit.>. On the contrary, Sr mostly induces a compression of the c axis <cit.> and interestingly induces a commensurate CDW with a doubling of the unit cell <cit.>, which bears similarities with the one reported here. Although to the best of our knowledge this has not been associated with a structural phase transition to a C2/c phase so far, it reemphasizes the importance of the c-axis parameter in controlling the electronic phase of pnictides, akin to their Fe-based counterparts <cit.>. We note that the stability of the I-CDW1 up to 7 GPa contrasts with the observations in the vast majority of known CDW materials. For instance, in rare earth tritellurides RTe_3 <cit.>, dichalcogenides such as 2H-NbSe_2 <cit.> or TiSe_2 <cit.>, α-U <cit.> or Kagome superconductors <cit.>, to cite a few, the CDW ordering temperature is strongly dependent on pressure and most often decreases rapidly with increasing pressure, generally resulting in a complete suppression of the CDW after a few GPa. There are of course notable exceptions to this, such as VSe_2 <cit.> or SmNiC_2 <cit.>, but there the CDW formation temperature rapidly increases with pressure. In this respect and to the best of our knowledge, the resilience of the I-CDW1 in BaNi_2As_2 against pressure is particularly remarkable. It might be related to the nematic liquid phenomenology evidenced at higher temperature in this compound <cit.> as a consequence of strong fluctuations between degenerate nematic configurations, which is expected to be weakly affected by strain. Interestingly, the dramatic changes of the CDWs are concomitant with the pressure-induced structural phase transitions. 
The formation of Ni zigzag chains, yielding a monoclinic C2/m structure above 7 GPa at high temperatures or a C2/c structure above 3 GPa at low temperatures, is associated with a remarkable change of the superstructure pattern, indicating a profound interdependence of the CDW instabilities and of the underlying lattice structure. To gain further insights, we turn now to first-principles calculations, which are particularly well suited owing to the weakly correlated nature of the material. The structures reported in this study have not been anticipated by previous theoretical investigations <cit.>, as it is generally challenging to determine a priori the symmetry of the most stable structural configuration of a given compound. It is nonetheless possible to assess the stability of the experimentally determined crystal structures by looking at their lattice dynamics. Using the experimental lattice parameters and relaxed atomic positions to obtain force-free configurations prior to the phonon calculations, we have previously shown <cit.> that the dispersion of the phonons of the I4/mmm tetragonal structure was unstable against the softening of a low-lying optical phonon (dispersing from the Raman-active E_g mode at the zone center) along the reciprocal (H00) direction and at a wavevector close to that of the experimental I-CDW1. We have extended this approach to the pressurized unit cells. On the color plots of Fig. <ref>, we have mapped the lowest phonon frequencies (full dispersions are shown in the SM <cit.>) across planes of the reciprocal space. As unstable modes are characterized by imaginary frequencies, the negative modulus of the frequency was used so that the dominant instabilities show up as minima of the softest phonon frequency in Fig. <ref>. In agreement with previous work, the calculation performed on the weakly distorted orthorhombic Immm structures determined at 0.3 and 5.1 GPa indicates that the leading phonon instability occurs at (0.25, 0, 0) and (0, 0.25, 0), close to the I-CDW1 wavevector. Interestingly, at both pressures, we can already observe a weak softening of the same phonon branch at 8 locations in the (H K 0) plane, including e.g. (0.38 0.1 0) or (0.1 0.38 0), which are very close to those at which the I-CDW2 satellites (Fig. <ref>) have been observed. This becomes the leading instability at 10.14 GPa, calculated within the monoclinic C2/m phase (Fig. <ref>-c), while upon further compression the phonon anomalies are suppressed and this phase is stabilized, as evidenced by the disappearance of negative phonon energies in Fig. <ref>-d). Next, we discuss the instabilities of the low temperature phases. The DFPT calculation on the experimental triclinic P1̅ structure (shown for 1.69 GPa in Figs. <ref>-e) and f)) is also found to be unstable against the softening of the same low-lying phonon branch, but the leading instability is now found in the (H 0 L) plane. It is rather spread out in reciprocal space but centered around the (1/3 0 1/3) and (2/3 0 2/3) wavevectors, at which the C-CDW1 satellites are seen experimentally (Fig. <ref>-b). Similarly, the leading instability of the C2/c monoclinic structure at 5.79 GPa, shown in Figs. <ref>-g) and h), occurs at the commensurate wavevector (1/2, 0, 1/2) of the C-CDW2 phase (Figs. <ref>-b) and -c)). In all these cases, and similar to investigations at ambient pressure <cit.>, no Fermi surface nesting is found at the I-CDW1, I-CDW2, C-CDW1 or C-CDW2 wavevectors (details are presented in the SM <cit.>). 
In general, we do not observe any anomaly in the phonon linewidth, associated with the momentum structure of the electron-phonon coupling vertex, that correlates with the structure of these CDWs, indicating the unconventional nature of these CDWs. The only noticeable exception is the I-CDW2 case, for which a weak enhancement of the linewidth of the unstable phonon is seen, but the calculated electron-phonon coupling remains very modest. It typically amounts to ∼0.15 meV, which is almost an order of magnitude weaker than that of prototypical CDW systems such as the dichalcogenides  <cit.>. To sum up, whether the lattice structure of BaNi_2As_2 is stable against CDW formation appears to be fully controlled by the local environment of Ni, and thereby by the orbital polarization of the bands crossing the Fermi level, which primarily derive from Ni states <cit.>. On the one hand, it is clear that the deformation of the Ni square lattice into a zigzag structure with a bond length disproportionation can only occur alongside a spectral weight transfer between the in- and out-of-plane t_2g orbitals of Ni <cit.>. On the other hand, the main players yielding the disappearance of the CDWs above 10 GPa are the As atoms surrounding the planar Ni zigzag chains, which are pushed away as the interlayer As-As distance is strongly reduced. Our first-principles calculations indicate that the electron-phonon interaction is extremely sensitive to the subtle details of the hybridization between the As 4p and Ni 3d orbitals, primarily controlled by the Ni-As and As-As distances. Despite the weak spectral weight of As states at the Fermi level, they seem to play a key role in controlling the electronic phase of BaNi_2As_2. § SUMMARY AND OUTLOOK In summary, we have investigated the pressure dependence of the crystal structure and CDWs of superconducting BaNi_2As_2 and revealed the formation of new structural polymorphs and CDWs. At high pressure, a monoclinic phase exhibiting planar Ni zigzags forms and is stable against CDW instabilities. A detailed phase diagram of BaNi_2As_2 has been determined, revealing a highly unusual pressure dependence of the incommensurate and commensurate CDW phases of this compound. First-principles calculations based on the experimental crystal structures show a series of lattice instabilities in very good agreement with the experimentally observed ones. The stable monoclinic high-pressure phase shows a strongly reduced interlayer As-As distance, bearing striking similarities with previously encountered collapsed tetragonal phases in pnictides, highlighting the importance of the hybridization between As and Ni orbitals in controlling the electronic phases of these compounds. This calls for additional investigations, in particular regarding the impact of the reported structural phase transitions on the superconducting transition temperature of BaNi_2As_2. Note added. During the completion of this manuscript, we became aware of another high-pressure study of BaNi_2As_2 <cit.>. Acknowledgements We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. Parts of this research were carried out at PETRA III using beamline P02.2. Beamtime was allocated for proposal I-20200263. We acknowledge the European Synchrotron Radiation Facility (ESRF) for provision of synchrotron radiation facilities and we would like to thank D. Comboni and T. Poreba for assistance and support in using beamline ID15B. 
We acknowledge the funding by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) Project-ID 422213477-TRR 288 (Project B03) and support by the state of Baden-Württemberg through bwHPC. S.M.S. acknowledges funding by the Deutsche Forschungsgemeinschaft-Projektnummer 441231589.
http://arxiv.org/abs/2307.05992v1
20230712081423
Robbed withdrawal
[ "Ze Chen", "Ruichao Jiang", "Javad Tavakoli", "Yiqiang Zhao" ]
cs.CR
[ "cs.CR" ]
Robbed withdrawal Ze Chen (Maximus Labs, [email protected]), Ruichao Jiang (Maximus Labs & Carleton University, [email protected]), Javad Tavakoli (University of British Columbia, [email protected]), Yiqiang Zhao (Carleton University, [email protected]) August 12, 2023 ================================================================================================================================================================================================================================================================= In this article we show that Theorem 2 in <cit.> is incorrect. Since Wombat Exchange, a decentralized exchange, is built upon <cit.> and Theorem 2 is fundamental to Wombat Finance, we show that an undesirable phenomenon, which we call the robbed withdrawal, can happen as a consequence. § INTRODUCTION Decentralized exchanges play an important role in decentralized finance, where market-making is not done by an order book but by Automated Market Makers (AMMs). A pool in an AMM is a pair of two tokens. AMMs like Uniswap v2 are double-sided, where the liquidity provider must provide both tokens of the pool. Wombat <cit.> uses a Single-Sided Automated Market Maker (SSAMM), where a liquidity provider is allowed to provide only one of the two tokens in the pool. Theorem 2 of <cit.> is the backbone of Wombat: it stipulates the equilibrium states of pools in Wombat. However, we refute Theorem 2 of <cit.>, i.e., we not only point out the gap in the proof of Theorem 2 but also * directly prove that Theorem 2 is false, * provide a concrete counterexample. The organization of this article is as follows. We first introduce Wombat's SSAMM in <Ref> to fix the notation. We then show in <Ref> that the proof of Theorem 2 in <cit.> is flawed and that its statement is false. § WOMBAT SSAMM We fix the notation. This section does not aim to be a comprehensive introduction to AMMs or SSAMMs. For those, see <cit.>. The SSAMM of Wombat is defined by the following equation: F(A_1,L_1,A_2,L_2)=(A_1^2-cL_1^2)/A_1+(A_2^2-cL_2^2)/A_2+(c-1)(L_1+L_2)=0, where A_i≥0 (L_i≥0), i=1,2, is the asset (liability) for token i in the liquidity pool. <Ref> uses different variables from those used in <cit.>, where r_i=A_i/L_i and L_i are the state variables of the SSAMM; this is a reparametrization. <cit.> also uses a constant D, which is equal to (1-c)(L_1+L_2) in our notation. This equality can be proven if the global equilibrium of the Wombat SSAMM, whose existence is claimed by Theorem 1 in <cit.>, is r^*=1. Indeed, the statement of Theorem 2 in <cit.> assumes that r^*=1, and in the first line of their proof they derive D=(1-A)(L_1+L_2), where A is our c. The reason we use the constant c instead of A is to avoid confusion with the asset variable A_i. There are two types of actions defined on the Wombat SSAMM. Swap token 1 for token 2: A_1→A_1+δ, A_2→A_2+δ', where δ>0 is given and δ' is the negative solution of the following equation: F(A_1+δ, L_1, A_2+δ', L_2)=F(A_1,L_1,A_2,L_2). Swapping token 2 for token 1 is defined similarly. Provide liquidity for token 1: L_1→L_1+δ_L, A_1→A_1+δ_A, where δ_L is given and δ_A is the solution of F(A_1+δ_A,L_1+δ_L,A_2,L_2)=F(A_1,L_1,A_2,L_2). It is called liquidity provision if δ_L>0 and liquidity withdrawal if δ_L<0. 
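To make the implicitly defined operations concrete, the following minimal Python sketch (our own illustration, not code from <cit.> or from the Wombat implementation) evaluates the invariant F and solves the swap equation for δ' with a standard root finder; with c = 2.008 and a balanced pool A_1=L_1=A_2=L_2=100, swapping in δ = 101 of token 1 drives A_2 to approximately 55.98, the intermediate state used in the counterexample of the next section.

import numpy as np
from scipy.optimize import brentq

def F(A1, L1, A2, L2, c):
    # Wombat SSAMM invariant defined above
    return (A1**2 - c * L1**2) / A1 + (A2**2 - c * L2**2) / A2 + (c - 1) * (L1 + L2)

def swap_1_for_2(A1, L1, A2, L2, c, delta):
    # return the negative delta' with F(A1 + delta, L1, A2 + delta', L2) = F(A1, L1, A2, L2);
    # the bracket assumes the post-swap asset A2 + delta' stays in (0, A2)
    target = F(A1, L1, A2, L2, c)
    g = lambda dp: F(A1 + delta, L1, A2 + dp, L2, c) - target
    return brentq(g, -A2 + 1e-9, -1e-12)

A1 = L1 = A2 = L2 = 100.0
c = 2.008
dp = swap_1_for_2(A1, L1, A2, L2, c, 101.0)
print(A2 + dp)   # ~55.98, the token-2 asset left in the pool after the swap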
§ DISPROOF AND COUNTEREXAMPLE [Theorem 2 in <cit.>] Assume that r^*=1. If δ_L<0, then δ_L≤δ_A<0; if δ_L>0, then 0<δ_A≤δ_L. Furthermore, in both cases, δ_L=δ_A iff r_i=1. Eqn (13) in the proof of Theorem 2 in <cit.> reads[Their constant A is replaced by our c and we omit all subscripts i in their equation.] δ_A-c(L+δ_L)^2/(A+δ_A)+cL^2/A=(1-c)δ_L. Their proof goes as follows. “If δ_L<0 and δ_A≥0, then the left hand side (LHS) of Eq. (13)[Our Eqn (<ref>).] is positive while the right hand side (RHS) is negative, a contradiction." It is obvious that, if c>1 and δ_L<0, then the RHS of Eqn (<ref>) is positive. Their proof therefore has a gap. In fact, c is known as the amplification factor in <cit.> and is only assumed to be greater than zero, not necessarily less than one. As shown in Fig. 1 of <cit.>, the amplification factor c equals 300 for Wombat. Hence our point is not vacuous. However, this gap alone does not falsify the statement of Theorem 2. Perhaps the claim is correct and the proof can be fixed. The following theorem shows that this is not the case. Let -L<δ_L<0. Under the following three conditions: * 3L+δ_L<A, * δ_L∈[-L,-L^2/A), * c∈(A/(A-2L-δ_L),A/L), one either has * δ_A>0, or * δ_A<0 and |δ_A|>A[This implies |δ_A|>A>L>|δ_L| and hence δ_A<δ_L, contradicting the claim δ_L≤δ_A in Theorem 2 of <cit.>]. The solution to Eqn (<ref>) is δ^±_A=(-b±√(b^2-4[(A-2L-δ_L)c-A]δ_L))/2, where b=(δ_L+L^2/A)c+A-δ_L. Claim: 2L<b<2A. Assuming that the claim is true, we have δ_A^+>0 ⟺ -[(A-2L-δ_L)c-A]δ_L>0 ⟺ c>A/(A-2L-δ_L), where we used δ_L<0 and Condition 1: A-2L-δ_L>L>0. On the other hand, b>0 implies δ^-_A<0. Also, |δ_A^-|>A ⟺ √(b^2-4[(A-2L-δ_L)c-A]δ_L)>2A-b ⟺ b^2-4[(A-2L-δ_L)c-A]δ_L>(2A-b)^2 ⟺ (L+δ_L)^2>0, where we used the claim b<2A to keep the direction of the inequality correct when squaring. Now we prove the claim. By Condition 2: δ_L<-L^2/A and Condition 3: c<A/L, we have b >(δ_L+L^2/A)(A/L)+A-δ_L =A+L+((A-L)/L)δ_L>2L>0, where we used -L<δ_L for the second inequality. Similarly, by Condition 2: δ_L<-L^2/A and Condition 3: A/(A-2L-δ_L)<c, we have b <(δ_L+L^2/A)A/(A-2L-δ_L)+A-δ_L =(Aδ_L+L^2)/(A-2L-δ_L)-δ_L+A =(L+δ_L)^2/(A-2L-δ_L)+A <(L-L^2/A)^2/(A-2L+L^2/A)+A =((1-L/A)^2L^2)/((1-L/A)^2A)+A =L^2/A+A<2A, where we used the fact that the function f(δ_L)=(L+δ_L)^2/(A-2L-δ_L) is increasing on [-L,-L^2/A) in the fourth line. To see this, note that f'(δ_L)=(L+δ_L)(2A-3L-δ_L)/(A-2L-δ_L)^2>0 whenever -L<δ_L<2A-3L. The following example demonstrates that the robbed withdrawal can happen during a trading process. Let c=2.008. * Initialize: A_1=L_1=100, A_2=L_2=100. * Swap 101 token 1 for token 2: A_1→201, A_2→55.98. * Withdraw 99.1 token 1. At this moment, * 3L_1+δ_L=200.9<201=A_1, * δ_L=-99.1∈[-L_1,-L_1^2/A_1)≈[-100,-49.75), * c=2.008∈(A_1/(A_1-2L_1-δ_L),A_1/L_1)≈(2.00799,2.01), so all conditions of <Ref> are satisfied. Indeed, Eqn (<ref>) has roots δ_A≈-201.009 or 0.001. Then either A_1→-0.009 (the platform being robbed) or A_1→201.001 (the liquidity provider being robbed). 
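The arithmetic in the example can be checked directly from the closed-form roots above. The short Python script below (our own verification, not part of the original construction) evaluates the three conditions and both roots δ_A^± for the post-swap state of token 1, A = 201, L = 100, δ_L = -99.1, c = 2.008; it returns one root slightly above 0 and one root below -A, i.e. exactly the two robbed-withdrawal cases of the theorem.

import math

A, L, c, dL = 201.0, 100.0, 2.008, -99.1     # token-1 state after the swap, then withdraw 99.1

# the three conditions of the theorem
print(3 * L + dL < A)                        # condition 1
print(-L <= dL < -L**2 / A)                  # condition 2
print(A / (A - 2 * L - dL) < c < A / L)      # condition 3

# closed-form roots delta_A = (-b +/- sqrt(b^2 - 4k)) / 2
b = (dL + L**2 / A) * c + A - dL
k = ((A - 2 * L - dL) * c - A) * dL
roots = [(-b + s * math.sqrt(b**2 - 4 * k)) / 2 for s in (+1, -1)]
print(roots)                                 # one tiny positive root, one root below -A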
§ CONCLUSION In Wombat, withdrawal of liquidity in token i is always associated with a fee[It is inappropriate to call this a fee. An infinitesimal amount of it is calculated by the implicit differentiation ∂A_i/∂L_i=-(∂F/∂L_i)/(∂F/∂A_i), as is the slippage during a swap, ∂A_j/∂A_i=-(∂F/∂A_i)/(∂F/∂A_j). We find liquidity slippage a better name.] unless A_i=L_i <cit.>. <Ref> says that under its conditions, one of the following two cases happens: * δ_A>0: After the liquidity provider burns their liquidity tokens, not only do they receive no asset back, but they also have to give the platform a positive amount of asset; in other words, the liquidity provider is robbed; * δ_A<0 and |δ_A|>A: The platform must provide more asset to the liquidity provider than it has; in other words, the platform is robbed (also known as a bad debt). Therefore, we name the above phenomenon the robbed withdrawal.
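The liquidity slippage in the footnote can be written out explicitly for the invariant F above, since ∂F/∂A_1 = 1 + cL_1^2/A_1^2 and ∂F/∂L_1 = -2cL_1/A_1 + (c-1). The minimal sketch below (our own illustration) evaluates ∂A_1/∂L_1 and shows that it equals 1 at a balanced state A_1 = L_1, consistent with the fee-free withdrawal mentioned above.

def liquidity_slippage(A1, L1, c):
    # dA1/dL1 = -(dF/dL1)/(dF/dA1) for the invariant F defined in the previous section
    dF_dA1 = 1.0 + c * L1**2 / A1**2
    dF_dL1 = -2.0 * c * L1 / A1 + (c - 1.0)
    return -dF_dL1 / dF_dA1

print(liquidity_slippage(100.0, 100.0, 2.008))   # 1.0 at a balanced pool (A1 = L1)
print(liquidity_slippage(201.0, 100.0, 2.008))   # deviates from 1 once the pool is imbalanced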
http://arxiv.org/abs/2307.06310v1
20230712171342
Kriging-Based 3-D Spectrum Awareness for Radio Dynamic Zones Using Aerial Spectrum Sensors
[ "Sung Joon Maeng", "Ozgur Ozdemir", "Ismail Guvenc", "Mihail L. Sichitiu" ]
eess.SP
[ "eess.SP" ]
Kriging-Based 3-D Spectrum Awareness for Radio Dynamic Zones Using Aerial Spectrum Sensors This work is supported in part by the NSF PAWR award CNS-1939334 and its associated supplement for studying National Radio Dynamic Zones (NRDZs). The authors would like to thank the Wireless Research Center for measuring antenna patterns by using an anechoic chamber. The datasets and post-processing scripts for obtaining the results in this manuscript are publicly accessible at <cit.>. S. J. Maeng, Ozgur Ozdemir, İ. Güvenç, and Mihail L. Sichitiu are with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27606 USA (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Sung Joon Maeng, Ozgur Ozdemir, Member, IEEE, İsmail Güvenç, Fellow, IEEE, and Mihail L. Sichitiu, Member, IEEE August 12, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Radio dynamic zones (RDZs) are geographical areas within which dedicated spectrum resources are monitored and controlled to enable the development and testing of new spectrum technologies. Real-time spectrum awareness within an RDZ is critical for preventing interference with nearby incumbent users of the spectrum. In this paper, we consider a 3D RDZ scenario and propose to use unmanned aerial vehicles (UAVs) equipped with spectrum sensors to create and maintain a 3D radio map of received signal power from different sources within the RDZ. In particular, we introduce a 3D Kriging interpolation technique that uses realistic 3D correlation models of the signal power extracted from extensive measurements carried out at the NSF AERPAW platform. Using C-Band signal measurements by a UAV at altitudes between 30 m and 110 m, we first develop realistic propagation models of air-to-ground path loss, shadowing, spatial correlation, and the semi-variogram, while taking into account the knowledge of antenna radiation patterns and ground reflection. Subsequently, we generate a 3D radio map of a signal source within the RDZ using the Kriging interpolation and evaluate its sensitivity to the number of measurements used and their spatial distribution. Our results show that the proposed 3D Kriging interpolation technique provides significantly better radio maps when compared with an approach that assumes perfect knowledge of path loss. 3-D spectrum awareness, AERPAW, antenna radiation pattern, I/Q samples, LTE, Kriging interpolation, propagation modeling, RDZ, RSRP, UAV, USRP. § INTRODUCTION As the demand for advanced wireless communication services continues to grow, efficient use of spectrum resources is becoming increasingly vital for future wireless technologies. Therefore, the development, testing, and evaluation of effective mechanisms to improve spectrum efficiency and sharing have become imperative. 
Although there is a considerable body of literature that examines and analyzes spectrum sharing using theoretical models and simulations, there is a clear need to assess these approaches in real-world deployment scenarios, taking into account realistic propagation conditions. In this particular context, the concept of radio dynamic zones (RDZs) emerges as a new concept <cit.>, where geographical areas with dedicated spectrum resources are effectively managed and controlled in real-time to test new wireless innovations. This management is achieved through the sensing of signals entering and leaving the zone <cit.>. RDZs serve as testing grounds for novel spectrum sharing concepts and emerging technologies aimed at improving spectrum efficiency within specific deployment scenarios. In RDZs, it becomes crucial to ensure minimal or no interference to existing incumbent users of the spectrum. Therefore, monitoring of signal leakage to passive or active receivers outside the RDZ becomes necessary. This requires installation and deployment of sensors within the RDZ. Monitoring scope can include both terrestrial areas and airspace, e.g., for coexistence with unmanned aerial vehicles (UAVs) and satellites. By monitoring and modeling the interference levels experienced by passive receivers in these aerial scenarios, more efficient spectrum sharing can be achieved. The use of radio environment maps (REMs) <cit.> presents an effective approach for constructing dynamic interference maps within an RDZ, which can be generated for each location and frequency of interest. These radio maps are generated by collecting signal power data from deployed sensors and incorporating their corresponding location information. However, it is often impractical to position sensors throughout the entire RDZ area. Instead, signal power at unknown locations can be predicted using signal processing techniques like Kriging <cit.>, based on measurements from nearby sparsely deployed sensors. Kriging takes advantage of the spatial correlation between different locations to optimize the prediction of signal power. By employing Kriging, we can efficiently interpolate and generate a radio map of signal power using sparsely measured datasets from the sensors. In the existing literature, several studies have focused on modeling the spatial correlation of shadowing in received signals <cit.>, with experimental measurements provided in <cit.>. The application of Kriging for generating radio maps of signal power has been validated using both simulated and real datasets <cit.>. The potential of Kriging for spectrum monitoring and interference management has been explored in  <cit.>, while <cit.> extends Kriging interpolation to spectrum interpolation and analyzes it using measurement datasets. For ground-to-UAV communications in suburban environments, path loss and shadowing have been modeled based on measurement datasets <cit.>. Additionally, the spatial correlation along the linear trajectory of a UAV has been investigated <cit.>. In our recent works, we introduce the RDZ concept and discuss its features and requirements <cit.>. Furthermore, we propose a leakage sensing algorithm using Kriging in the two-dimensional (2D) plane of the RDZ <cit.>. Notably, to the best of our knowledge, the literature does not address the use of Kriging to obtain a three-dimensional (3D) aerial radio map based on measurements obtained from unmanned aerial vehicles (UAVs). 
In this paper, we propose to develop and use a 3D radio map to effectively sense signal leakage from an RDZ to the receivers outside of the RDZ. We employ a UAV as a mobile aerial sensor, collecting signal power measurements from distinct receivers within the RDZ. The 3D interpolation of the collected signal power is performed using the Kriging technique. The proposed method is thoroughly analyzed and validated through a measurement campaign. The main contributions of this paper can be summarized as follows: * Modeling 3D Radio Propagation: Considering a 3D spectrum sensing scenario, we develop and analyze a path loss model that accounts for spatially correlated shadowing, two-ray wireless propagation, and measured antenna radiation patterns to accurately model 3D radio propagation. We integrate 3D antenna measurements obtained in an anechoic chamber and study improvements in model accuracy when compared to using dipole and omnidirectional antenna patterns. * Semi-Variogram Based Kriging Interpolation: We introduce a novel method for Kriging interpolation specifically designed for 3D spectrum monitoring. This approach leverages a semi-variogram technique to achieve accurate and efficient interpolation across a 3D volume using a limited set of measurements. * Comparison with Measurement Data: We evaluate and compare the accuracy of our proposed 3D propagation models with the measurement data collected using software-defined radios (SDRs) at various UAV altitudes. This analysis provides valuable insights into the performance and reliability of the proposed approach. The rest of this paper is organized as follows. In Section <ref>, we present the system model for 3-D spectrum sensing, radio propagation, and spatial correlation in an RDZ, while in Section <ref>, we introduce the Kriging-based signal interpolation method for generating a 3D radio map. In Section <ref>, we describe the details of our measurement campaigns for obtaining I/Q signal samples at a UAV from an LTE-based signal source on the ground, and our measurements in an anechoic chamber for characterizing the antenna radiation patterns. In Section <ref>, we analyze the effectiveness of the proposed 3D path-loss models in predicting the received signal power at different UAV altitudes and locations. We present numerical results on Kriging-based 3D radio map interpolation for various scenarios in Section <ref> and the last section concludes the paper. § SYSTEM MODEL In this section, we present the models utilized for spectrum sensing within an RDZ. Specifically, we consider a scenario where an aerial spectrum sensor traverses the area and captures received signals from a base station (BS). Radio propagation, correlation, and antenna radiation pattern models are also presented. §.§ 3-D Spectrum Sensing with an Aerial Mobile Sensor An RDZ should protect incumbent users outside of the zone by controlling and managing interference signals radiating from inside the zone. The incumbent users may include smart devices and aerial vehicles, as well as sensitive scientific passive receivers such as satellites and ground-based radio astronomy receivers in radio quiet zones (RQZs) <cit.>. Our envisioned RDZ concept is illustrated in Fig. <ref>. The real-time spectrum sensing within the boundary of the RDZs is conducted by deployed fixed / mobile ground and aerial sensor nodes, which is an essential technique to manage dynamic spectrum usage. The UAV moves across the RDZ space along a multi-altitude trajectory, capturing signal data throughout. 
This paper primarily focuses on the study of real-time signal sensing in a volume of space to monitor the signal leakage from RDZs. Mobile aerial nodes, in the form of UAVs, collect signal power data as they follow predefined trajectories. Subsequently, the RDZ system leverages the collected dataset from the aerial nodes to generate a radio map depicting the signal power surrounding the RDZ space. The interpolation of this dataset facilitates the construction of a comprehensive representation of the signal power distribution. §.§ Radio Propagation Model The location of a BS and a UAV can be represented by 𝐥^ bs =(ψ^ bs,ω^ bs,h^ bs), 𝐥^ uav(t)=(ψ^ uav,ω^ uav,h^ uav), where ψ, ω, and h denote the latitude, longitude, and altitude of the location. Note that although the location can be generally represented by x, y, z in 3D Cartesian coordinates, we express it by latitude, longitude, and altitude to use the information given by GPS sensors. The time-varying location of a UAV is given by 𝐥^ uav(t). The horizontal distance and the vertical distance between a BS and a UAV can be expressed as <cit.> d_ h(𝐥^ bs,𝐥^ uav) =arccos(sinψ^ uavsinψ^ bs+cosψ^ uavcosψ^ bscos(ω^ bs-ω^ uav))× A, d_ v(𝐥^ bs,𝐥^ uav) =|h^ bs-h^ uav|, where A is the radius of the earth (≈ 6378137 m). Then, the 3D distance between a BS and a UAV is given by d_ 3D(𝐥^ bs,𝐥^ uav) =√(d_ h(l^ bs,l^ uav)^2+d_ v(l^ bs,l^ uav)^2). Next, the elevation angle between a BS and a UAV can be expressed as θ_l =tan^-1( d_ v/d_ h). To develop a propagation model, we make use of a first-order approximation and consider the rural environment in which we collect measurements. In this scenario, we employ the two-ray ground reflection model to represent the path loss between a BS and a UAV. This model accounts for a line-of-sight (LoS) path as well as a strong ground reflection path, both contributing to the received signal as the two dominant paths in an open area such as a rural environment. The path loss characterized by the two-ray ground reflection model can be expressed as follows <cit.>: 𝖯𝖫_ twm(𝐥^ bs,𝐥^ uav)=(λ/4π)^2|√(𝖦_ bs(ϕ_l,θ_l)𝖦_ uav(ϕ_l,θ_l))/d_ 3D +Γ(θ_r)√(𝖦_ bs(ϕ_r,θ_r)𝖦_ uav(ϕ_r,θ_r))e^-jΔτ/(r_1+r_2)|^2, where the first term inside the magnitude is the LoS signal and the second term is the ground-reflected signal, 𝖦_ bs(ϕ,θ), 𝖦_ uav(ϕ,θ), λ, ϕ denote the antenna gain of the BS, the antenna gain of the UAV, the wavelength, and the azimuth angle, respectively, θ_r=tan^-1((h^ bs+h^ uav)/d_ h) represents the ground reflection angle, and Δτ=2π(r_1+r_2-d_ 3D)/λ indicates the phase difference between the two paths. The distance and angle parameters in the two-ray ground reflection model are illustrated in Fig. <ref>. The ground reflection coefficient for a vertically polarized signal is given by Γ(θ_r) =(ε_0sinθ_r-√(ε_0-cos^2θ_r))/(ε_0sinθ_r+√(ε_0-cos^2θ_r)), where ε_0 is the relative permittivity of the ground, whose value depends on the type of ground. The two signal components in (<ref>) are received and combined with a phase difference. If we only consider the first LoS term in the path loss, we obtain the free-space path loss model, given as 𝖯𝖫_ fs=(λ/4π)^2|√(𝖦_ bs(θ_l)𝖦_ uav(θ_l))/d_ 3D|^2. Using (<ref>), the received signal power of a UAV in dB scale can be expressed as r =𝖯_ Tx-𝖯𝖫_ twm^( dB)+w, where 𝖯_ Tx, w denote the transmit power and the shadowing component, respectively. Note that the path loss term in (<ref>) is converted to dB scale. The shadowing term generally follows a lognormal distribution and is modeled by a zero-mean Gaussian process with a spatial covariance <cit.>. 
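As a concrete numerical reading of the two-ray model above, the following minimal Python sketch (our own illustration with placeholder values; constant antenna gains and a generic ground permittivity are assumed here, whereas the measurements later in this paper use angle-dependent measured patterns) computes the received power of a UAV at a given horizontal distance and altitude.

import numpy as np

def two_ray_rx_power_dbm(d_h, h_bs, h_uav, f_c, p_tx_dbm, g_bs=1.0, g_uav=1.0, eps_r=15.0):
    # two-ray ground-reflection model: LoS path plus a single ground-reflected path
    lam = 3e8 / f_c
    d_3d = np.hypot(d_h, abs(h_bs - h_uav))                 # LoS path length
    r1r2 = np.hypot(d_h, h_bs + h_uav)                      # reflected path length r1 + r2
    theta_r = np.arctan2(h_bs + h_uav, d_h)                 # ground reflection angle
    # reflection coefficient for vertical polarisation
    root = np.sqrt(eps_r - np.cos(theta_r) ** 2)
    gamma = (eps_r * np.sin(theta_r) - root) / (eps_r * np.sin(theta_r) + root)
    dtau = 2 * np.pi * (r1r2 - d_3d) / lam                  # phase lag of the reflected path
    amp = np.sqrt(g_bs * g_uav) / d_3d + gamma * np.sqrt(g_bs * g_uav) * np.exp(-1j * dtau) / r1r2
    gain_lin = (lam / (4 * np.pi)) ** 2 * np.abs(amp) ** 2  # linear channel (path) gain
    return p_tx_dbm + 10 * np.log10(gain_lin)               # received power in dBm (no shadowing)

# e.g. 3.51 GHz carrier, 10 m tower, UAV at 70 m altitude and 300 m horizontal distance
print(two_ray_rx_power_dbm(d_h=300.0, h_bs=10.0, h_uav=70.0, f_c=3.51e9, p_tx_dbm=30.0))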
The correlation between received signals at two different locations is generally characterized by the function of the distance between those locations. Note that we do not take into account small-scale fading in the received signal since we assume that the effect is eliminated by averaging the samples within the proper time interval <cit.>. §.§ Spatial Correlation Model of Received Signal In this section, we focus on describing the correlation function between the received signals at different locations of a UAV. Since the spatial correlation primarily depends on the shadowing component (w) in the received signal in (<ref>), we can capture the correlation between received signals (r) using the correlation between the shadowing components without loss of generality. It is well-known that the correlation between two different locations is characterized by a function of their physical distance. Typically, this correlation exponentially attenuates as the physical distance between the locations increases <cit.>. However, most existing works in the literature primarily focus on terrestrial networks and do not fully consider 3D topologies. Due to this limitation, the spatial correlation between two locations with different vertical positions (heights) has not been extensively studied to our best knowledge. Considering the unique characteristics of UAV-based scenarios, where altitude plays a crucial role, it becomes essential to investigate and understand the spatial correlation between locations at different vertical positions. This exploration will allow for a more comprehensive modeling of the correlation in 3D scenarios, considering the impact of vertical distance in addition to horizontal distance. In our work, we first model the spatial correlation as a function of the vertical distance (d_ v) as well as the horizontal distance (d_ h). Then, we define the correlation function between 3D locations as a function of both the vertical distance and the horizontal distance. The spatial correlation between two different locations of a UAV, i.e., between l^ uav_i and l^ uav_j, can be expressed as R(l^ uav_i,l^ uav_j)=R(d_ v,d_ h)=𝔼[w(l^ uav_i)w(l^ uav_j)]/σ_w^2, where σ_w^2 is the variance of shadowing. Once again, the proposed correlation is the function of both the vertical distance and the horizontal distance. §.§ Antenna Radiation Model The antenna gain effect of a transmitter and a receiver in the received signal is captured in the path loss model in (<ref>), using 𝖦_ bs(ϕ,θ), 𝖦_ uav(ϕ,θ). In typical terrestrial communications, the antenna gain is simply modeled by a constant gain. This is due to the fact that a dipole antenna is usually characterized as an omni-directional antenna radiation pattern in the azimuth angle domain, or sectored directional antennas make the antenna pattern mostly uniform in the azimuth angle domain. However, air-to-ground communications require considering the variation of the antenna gain in the elevation angle domain. The antenna pattern in the elevation domain is typically far from being uniform and therefore we should consider the elevation angle-dependent radiation pattern in modeling the antenna gain. § 3D RADIO MAP INTERPOLATION USING KRIGING In this section, we introduce an efficient radio map interpolation technique using Kriging <cit.>. This method utilizes measurement data obtained from sparsely deployed spectrum sensors within an RDZ. The interpolation process allows us to estimate signal values at unsampled locations based on the available measurements. 
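To give a sense of how strong this elevation dependence can be, the short sketch below (our own illustration) evaluates the textbook half-wave dipole pattern cos((π/2)cosθ)/sinθ, the same analytical pattern used for comparison in Section V, at a few elevation angles; for vertically mounted dipoles the combined Tx/Rx gain drops by more than 10 dB toward high elevation angles, which a constant-gain assumption clearly cannot capture.

import numpy as np

def dipole_gain(theta):
    # textbook half-wave dipole pattern; theta is measured from the dipole axis
    return np.cos(0.5 * np.pi * np.cos(theta)) / np.sin(theta)

# combined Tx/Rx gain (dB) for two identical, vertically mounted dipoles:
# the angle from the (vertical) dipole axis is 90 degrees minus the elevation angle
for elev_deg in (5, 15, 30, 45, 60, 75):
    theta = np.deg2rad(90.0 - elev_deg)
    print(elev_deg, 10 * np.log10(dipole_gain(theta) ** 2))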
We first introduce how to calculate a semi-variogram, and subsequently introduce our Kriging-based interpolation approach for 3D RDZ scenarios. Different from the existing Kriging techniques in the literature, we consider the 3D geometry in the spatial correlation with a portable aerial sensor, which enables us to interpolate the radio map in a 3D volume. §.§ Semi-variogram In geostatistics, the semi-variogram represents the degree of spatial dependency between different locations, and it is utilized in Kriging interpolation. The semi-variogram between two UAV locations l^ uav_i, l^ uav_j is defined as γ(l^ uav_i,l^ uav_j)=(1/2)var(r(l^ uav_i)-r(l^ uav_j)). If the covariance function of a stationary process exists, we can obtain the semi-variogram from the spatial correlation in (<ref>) as follows for our considered 3D RDZ scenario <cit.>: γ(l^ uav_i,l^ uav_j) =(σ_w^2/2)(R(l^ uav_i,l^ uav_i)+R(l^ uav_j,l^ uav_j)-2R(l^ uav_i,l^ uav_j)) =σ_w^2(1-R(l^ uav_i,l^ uav_j))=σ_w^2(1-R(d_ v,d_ h)), where σ_w^2 captures the variance of the shadowing term w in (<ref>) as defined earlier, and R(l^ uav_i,l^ uav_i) is as defined in (<ref>). We assume that σ_w^2 is constant over the given set of locations while deriving (<ref>). §.§ Kriging Interpolation Ordinary Kriging is the optimal prediction method in terms of squared-error loss from the observed data at known spatial locations, where the error of the spatial prediction at an unknown location is minimized <cit.>. It interpolates the signal strength at arbitrary locations by using a linear combination of the signal strength at nearby locations. The ordinary Kriging problem can be formulated as follows <cit.>: min_μ_1,…,μ_M 𝔼[(r̂(𝐥^uav_0)-r(𝐥^uav_0))^2], s.t. r̂(𝐥^uav_0)=∑_i=1^Mμ_ir(𝐥^uav_i), ∑_i=1^Mμ_i=1 , where l^ uav_0 is the location at which the unknown parameter is to be predicted, μ_i (i=1,⋯,M) are weighting parameters, and M indicates the number of nearby measured samples to use. The above problem can be solved by the following steps <cit.>. First, we convert the original problem to an equivalent Lagrangian expression: min_μ_1,…,μ_M 𝔼[(r(l^ uav_0)-∑_i=1^Mμ_ir(l^ uav_i))^2]-κ(∑_i=1^Mμ_i-1), where κ denotes the Lagrange multiplier. After a few mathematical steps, the objective function in (<ref>) can be reformulated as σ^2_w+2∑_i=1^Mμ_iγ(l^ uav_0,l^ uav_i)-∑_i=1^M∑_j=1^Mμ_iμ_jγ(l^ uav_i,l^ uav_j) -κ(∑_i=1^Mμ_i-1), where γ(l^ uav_i,l^ uav_j) is as defined in (<ref>). Finally, we can find the optimal solution that minimizes the objective function from the first derivative of (<ref>) with respect to μ_1,…,μ_M, which is given by ∑_j=1^Mμ_jγ(l^ uav_i,l^ uav_j)-γ(l^ uav_0,l^ uav_i)+κ'=0. We can also express (<ref>) as a linear matrix equation: [ [ γ(l^ uav_1,l^ uav_1) ⋯ γ(l^ uav_1,l^ uav_M) 1; γ(l^ uav_2,l^ uav_1) ⋯ γ(l^ uav_2,l^ uav_M) 1; ⋮ ⋮ ⋮ ⋮; γ(l^ uav_M,l^ uav_1) ⋯ γ(l^ uav_M,l^ uav_M) 1; 1 ⋯ 1 0; ]] [ [ μ_1; μ_2; ⋮; μ_M; κ' ]] =[ [ γ(l^ uav_0,l^ uav_1); γ(l^ uav_0,l^ uav_2); ⋮; γ(l^ uav_0,l^ uav_M); 1 ]]. Then, we can easily obtain the optimal μ_1^⋆,…,μ_M^⋆ from (<ref>) and interpolate the received signal power at an unknown location l^ uav_0 by r̂(l^ uav_0)=∑_i=1^Mμ_i^⋆r(l^ uav_i). Note that accurate characterization of the 3D semi-variogram in (<ref>) is critical for the interpolation in (<ref>). The next section describes our measurements that will be used to characterize the 3D semi-variogram. 
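A compact numerical reading of this procedure is sketched below (our own illustration): it assumes an already fitted semi-variogram of exponential form, with placeholder variance and decorrelation distances rather than the values estimated from the measurements later in the paper, builds the linear system above, and predicts the received power at an unmeasured 3D location (given here in local Cartesian coordinates in meters).

import numpy as np

def semivariogram(p, q, sigma2_w=9.0, d_corr_h=80.0, d_corr_v=25.0):
    # gamma = sigma_w^2 * (1 - R(d_v, d_h)) with an assumed exponential correlation model;
    # all three parameters are placeholders for illustration only
    dx, dy, dz = p - q
    d_h, d_v = np.hypot(dx, dy), abs(dz)
    return sigma2_w * (1.0 - np.exp(-d_h / d_corr_h - d_v / d_corr_v))

def ordinary_kriging(locs, values, l0):
    # build and solve the ordinary Kriging system, then return the weighted prediction
    M = len(locs)
    A = np.ones((M + 1, M + 1))
    A[-1, -1] = 0.0
    for i in range(M):
        for j in range(M):
            A[i, j] = semivariogram(locs[i], locs[j])
    b = np.ones(M + 1)
    b[:M] = [semivariogram(l0, locs[i]) for i in range(M)]
    weights = np.linalg.solve(A, b)[:M]          # [mu_1, ..., mu_M]
    return weights @ values

# four measured RSRP samples (dBm) around the point to be interpolated
locs = np.array([[0.0, 0.0, 30.0], [50.0, 0.0, 30.0], [0.0, 50.0, 50.0], [50.0, 50.0, 70.0]])
vals = np.array([-70.0, -74.0, -72.0, -78.0])
print(ordinary_kriging(locs, vals, np.array([25.0, 25.0, 40.0])))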
We present our measurement setup, define UAV trajectory used, and describe our approach for characterizing antenna effects. §.§ Measurement Setup The measurement campaign was conducted at the Lake Wheeler Road Field Labs (LWRFL) site in Raleigh, NC, USA, which is one of the two sites in the NSF Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW). The experimental area, depicted in Fig. <ref>, can be classified as an open rural environment, ensuring LoS conditions between a UAV and the BS throughout the entire duration of the experiments. Fig. <ref> and Fig. <ref> present photos of the base station (BS) tower and the drone used during the measurement campaign. The BS tower stands at a height of 10 meters and is equipped with a single dipole transmit antenna. On the other hand, the drone is equipped with a vertically oriented single dipole receiver antenna and a GPS receiver to accurately track its position. To facilitate the measurements, the srsRAN open-source Software Defined Radio (SDR) software was utilized to implement an LTE evolved NodeB (eNB) at the BS tower, as shown in Fig. <ref>. The eNB continuously transmitted common reference symbols (CRSs) during the measurement campaign. During the measurement campaign, the drone collects raw I/Q data samples using a Software Defined Radio (SDR) that is attached to it. Specifically, the USRP B205mini from National Instruments (NI) is utilized as the SDR device, both at the BS tower and on the UAV. For post-processing the raw I/Q data, we employ Matlab's LTE toolbox. Within this toolbox, we calculate the Reference Signal Received Power (RSRP) for each location of the UAV. To ensure efficient processing and analysis, we collect 20 ms segments of data out of every 100 ms. Within each 20 ms segment, we extract a 10 ms duration for subsequent post-processing. Throughout the paper, the terms “received signal" and “RSRP" are used interchangeably to refer to the measured signal strength. The major specifications of the transmitter and the receiver are listed in Table I. §.§ UAV Trajectory We conduct the experiments multiple times by changing the altitude (height) of the UAV from 30 m to 110 m at increments of 20 m. In each flight, the UAV flies an identical predefined trajectory with a different fixed height. In particular, the UAV flies on a zig-zag pattern through the experiment site, between south and north waypoints, and it eventually flies back to the starting point. The top view (at h=110 m) and the 3D view of the UAV trajectories along with measured RSRPs are illustrated in Fig. <ref> for flight trajectories at 30 m, 50 m, 70 m, 90 m, and 110 m. §.§ Antenna Radiation Pattern Characterization The dipole antenna used in our experiments generally exhibits omni-directional radiation patterns in the azimuth angle domain, but oval-shaped radiation patterns in the elevation angle domain. The radiation pattern also varies with the carrier frequency. We obtained the antenna pattern specifications for the Rx dipole antenna (SA-1400-5900) from the vendor's specification sheet, and it shows a typical donut-shaped dipole pattern that remains consistent across different carrier frequencies <cit.>. Specifically, in the specification sheet, the antenna patterns for 1.4, 1.7, 2.4, 4.4, and 5.8 GHz frequencies are provided and all of them have similar dipole patterns. Therefore, we adopted the 2.4 GHz frequency antenna pattern from the specification sheet for our analysis. 
However, the Tx dipole antenna (RM-WB1-DN) exhibited different elevation angle domain patterns depending on the carrier frequency and had an asymmetric pattern that did not guarantee omni-directionality in the azimuth angle domain <cit.>. Furthermore, the specification sheet did not provide the radiation pattern for the specific carrier frequency (3.51 GHz) used in our experiments. To obtain the exact antenna radiation pattern for the 3.51 GHz frequency, we conducted separate measurements of the 3D antenna pattern using an anechoic chamber facility located at wireless research center (WRC), Wake Forest, NC. Fig. <ref> shows a photo of the setup in the anechoic chamber during the measurement of the Tx antenna's 3D pattern. Fig. <ref> displays the output of the antenna measurement, visualizing the antenna pattern in 3D Cartesian coordinates. It can be observed that the antenna pattern is not purely omni-directional in the azimuth angle domain, and the directivity in the elevation angle domain is not straightforward. In contrast, Fig. <ref> shows the elevation angle domain antenna pattern of the Rx antenna as provided in the specification sheet, where the antenna pattern is specified as omni-directional with uniform gain in the azimuth domain. Fig. <ref> illustrates the combined antenna gain from the Tx and Rx antenna patterns from Fig. <ref> and Fig. <ref>, respectively, represented in the azimuth and elevation angle domain. For all UAV heights in our experiments, the LoS angles between the Tx tower and the UAV were within the angle space covered by the black rectangular area, while the ground reflection angles between Tx tower and the UAV were covered by the red rectangular area, which are illustrated in Fig. <ref>. This implies that the antenna pattern used for the analysis is limited to the angles within this space. § AIR-TO-GROUND PROPAGATION MODELING AND ANALYSIS In this section, we review how we post-process the data for correcting errors in altitude reported by the UAV's GPS. Subsequently, we model the measured RSRP using different 3D propagation models that take into account two-ray multipath model and 3D antenna pattern. §.§ Post-measurement Correction of Altitude and RSRP During the measurements, we encountered calibration errors caused by limitations in the SDR hardware. Specifically, the Universal Software Radio Peripheral (USRP) mounted on the UAV exhibited a power level calibration error, resulting in a constant offset power throughout the experiment. To address this issue, we conducted a separate experiment to measure and determine the offset at the USRP, which was found to be 98 dB. Subsequently, we added this offset to the calculated RSRP values obtained from subsequent experiments, effectively compensating for the calibration offset. Additionally, the GPS receiver carried by the UAV exhibited an altitude mismatch. We observed an altitude drift of approximately 6 m after the UAV landed, when compared with the initial altitude of the UAV. To rectify this mismatch, we applied a linear compensation approach (see <cit.>). This involved adjusting the altitude measurements such that the altitude at the end of the flight matched the altitude of the initial measurement. By applying this compensation, we aimed to ensure accurate altitude data throughout the experiment. §.§ Antenna Radiation Pattern Effect in Path Loss Analysis In this subsection, we analyze the effect of antenna radiation patterns on the path loss fitting to the RSRP from the experiments. 
We consider three different antenna pattern setups for comparison: 1) Tx and Rx 3D antenna patterns described in Section <ref> and Fig. <ref>; 2) the donut shape dipole antenna pattern using the formulation for both Tx and Rx antennas; and 3) constant azimuth and elevation antenna gain for both Tx and Rx antennas. The dipole antenna pattern formula in the second case is given by <cit.> 𝖦_ bs(θ) =𝖦_ uav(θ)=cos(π/2cosθ)/sinθ. Fig. <ref> and Fig. <ref> provide a comprehensive analysis of the RSRP fitting results using different antenna patterns and path loss models in (<ref>), (<ref>). In Fig. <ref>, the RSRP curves for a UAV height of 70 m are presented, along with the fitting results obtained from the free space and two-ray path loss models with different antenna patterns. It is observed that the antenna pattern described in Section <ref> provides the best fit to the RSRP curves, while the dipole pattern in (<ref>) results in the worst fit. Additionally, Fig <ref> highlights that the two-ray path loss model performs better than the free space path loss model in capturing the deep fading of RSRP. To further evaluate the performance, Fig. <ref> presents the cumulative distribution function (CDF) of the RSRP for the 70 m height measurement, along with the fitting results obtained from the path loss models and different antenna patterns. The CDF of the two-ray path loss model with the antenna pattern in Section <ref> matches closest with the CDF of the measured RSRP, indicating a better fit. Fig. <ref> shows the fitting error, which is calculated by subtracting the measured RSRP from the fitted RSRP using the path loss models. It is observed that the fitting error is the smallest when using the two-ray path loss model with the antenna pattern in Section <ref>. Fig. <ref> also evaluates the fitting error with different antenna patterns in time, distance, and elevation domains. It is observed that the dipole antenna pattern has the largest fitting error in short and long distances. We also observe that the fitting error is relatively high in small elevation angles. It implies that the effect of scattering from the objects around the test site increases the variance of the error when the elevation angle is low. Overall, these results demonstrate that the choice of antenna pattern and path loss model significantly impacts the accuracy of RSRP fitting for air-to-ground communication links. The 3D antenna radiation pattern described in Section <ref>, combined with the two-ray path loss model, provides the best fit to the measured RSRP and minimizes the fitting error. §.§ Path Loss Model Fitting with Measurement Fig. <ref> illustrates the measured and fitted RSRP values as a function of 3D distance for different UAV heights ranging from 30 m to 110 m. We adopt measured antenna patterns in Section <ref>. The fitted curves follow the measured RSRP values reasonably closely. It is worth noting that the two-ray path loss model performs better in capturing the fluctuation of signal strength due to the ground reflected path compared to the free-space path loss model, especially when the UAV height is low. In the logarithmic scale of the distance domain, the RSRP is expected to decrease linearly. However, in the short distance range, a concave curve can be observed. This phenomenon is a result of the elevation-dependent antenna gain and the dramatic change in the elevation angle at short distances and high UAV altitudes. 
The 3D antenna pattern considered in the path loss models effectively captures this effect, leading to more accurate RSRP fitting. Overall, the results in Fig. <ref> highlight the importance of considering the elevation-dependent antenna gain and the 3D antenna pattern in accurately modeling and fitting RSRP measurements in air-to-ground communications. Fig. <ref> shows the relative fitting error in the distance domain for all heights with different antenna patterns. The error by the dipole antenna pattern is relatively higher than other antenna patterns, especially when the distance is around 100 m to 200 m due to the antenna pattern mismatch. We also observe that the fitting error for the omnidirectional antenna pattern is higher than the measured antenna pattern for a large distance. Overall, the use of the measured antenna pattern results in the best fit for the measured data. §.§ Analysis of Shadowing Components from Measurement After we derive the two-ray path loss model, we can extract the shadowing component by subtracting the path loss model from measured RSRP using (<ref>), as shown in Fig. <ref>. The shadowing component is known to follow a Gaussian distribution, and the measured shadowing distributions for different UAV heights are compared to the fitted curves. It is observed that the measured shadowing distributions can be modeled using a Gaussian distribution, though there are slight deviations. In particular, the measured distributions exhibit asymmetry with a heavier left tail compared to the symmetric Gaussian distribution. To achieve a better fit, an alternative approach is to use a skewed Gaussian (normal) distribution, which allows for introducing a desired level of skewness to the distribution <cit.>. The probability density function (PDF) of the skewed Gaussian distribution can be expressed as f(x)=2ϕ(x-ξ/ω)Φ(α(x-ξ/ω)), where ϕ(·), Φ(·) indicates the PDF and the CDF of Gaussian distribution, respectively. The parameter α in (<ref>) decides the skewness of the distribution. If α is a positive real value, it gives right-skewness, while left-skewness is introduced by a negative real value. In addition, the mean, the standard deviation of the shadowing, left-skewed Gaussian parameter α, and normalized mean squared error (NMSE) of model fittings for all heights are listed in Table II. Note that the optimal α is decided by minimizing NMSE. It shows that the distributions as well as the value of variances in different heights are similar, and we can assume a stationary process in spatial data. § NUMERICAL RESULTS ON 3D SIGNAL INTERPOLATION In this section, we will first study the horizontal, vertical, and finally 3D correlation in the measured data. We will use the 3D correlation to calculate the semi-variogram, which will subsequently be used to analyze the 3D interpolation accuracy for various scenarios. §.§ Analysis of Correlation Function from Measurement §.§.§ Horizontal distance correlation In this subsection, we analyze the spatial correlation using the AERPAW datasets available at <cit.>. We obtain correlation functions between two different 3D locations by using measurements at different heights, and we use exponential and bi-exponential functions to model the correlations as discussed earlier. The mean and standard deviation values obtained by statistical analysis in Section <ref> and the measured RSRP values are utilized in calculating the correlations. 
We analyze the spatial correlation depending on the horizontal distance (d_ h) with a zero vertical distance (d_ v) by using the experiment dataset. Since our experiments fix the height of the drone for a specific flight, the vertical distance between the samples in the same flight is zero. The analysis of the correlation by the horizontal distance is performed by following steps: i. Calculate the correlation among all samples in a flight, excluding the samples during the take-off and landing periods. ii. Sort the correlation from step (i) according to the horizontal distance between the sample pairs. This will ensure that the correlations are arranged in increasing order based on the horizontal distance. iii. Average the correlations every 2 m. Start from the smallest horizontal distance and group the correlations within a 2 m interval. Calculate the average correlation for each interval. Repeat this process for subsequent 2 m intervals until covering all the correlations. iv. Perform steps (i)-(iii) iteratively for each height (30 m, 50 m, 70 m, 90 m, 110 m). Then, we have correlations for each individual height. v. Average the correlation for every distance over all the heights. Take the correlations obtained in step (iv) for each height and distance, and compute the average correlation value across all heights for that specific distance. The correlation between two samples w_i, w_j is calculated by R_i,j=(w_i-ν_i)(w_j-ν_j)/σ_w,iσ_w,j, where ν, σ_w denote the mean and the standard deviation of the sample, which can be obtained from Table II. The obtained correlation function and fitted curves are shown in Fig. <ref>. It is observed that the correlation is rapidly decayed as the horizontal distance increases. Although the correlation is generally modeled by an exponential function (also known as the Gudmundson model) <cit.>, the bi-exponential model <cit.> fits better than the exponential model for our measurements, which is given as R(d_ h)=ae^-b_1d_ h+(1-a)e^-b_2d_ h, where b_1, b_2 are fitting parameters. We also observe that the correlation distance is 4.5 m when the correlation is 0.5. §.§.§ Vertical distance correlation We calculate the vertical distance correlation with a zero horizontal distance from measurements which is opposite to the above subsection. Since the trajectory of the UAV for flights at different heights is designed to be identical (see Fig. <ref>), we can obtain samples of the same 2D location (latitude, longitude) with different vertical distances. For example, if we want to obtain 20 m vertical distance samples, we can use the dataset from the 30 m and 50 m UAV flights and pick two samples from any overlapped trajectory (one from the 30 m height, the other from the 50 m height). The analysis of the correlation by the vertical distance is conducted by following steps: i. Choose two different height measurements datasets, such as the datasets from the 30 m and 50 m UAV flights; ii. Remove data where the two trajectories are not fully overlapped, using a threshold of d_ h > 3 m. This ensures that we have data points with the same location across the trajectories; iii. Calculate the correlations between the two samples with the same location across the trajectories. Compute the correlation coefficient for each pair of samples and average them out. This will give you the correlation for a specific vertical distance (e.g., 20 m) between the two heights. iv. Repeat steps (i) to (iii) iteratively for pairs of measurements at different heights. 
For example, we can calculate correlations for the 50 m and 70 m flights, 70 m and 90 m flights, and so on. In step (ii), we exclude the samples that the trajectory is undesirably not overlapped by checking GPS readings. The correlations between different pairs of flights are listed in Table III. We also present the obtained correlation function from Table III and the fitted curve in Fig. <ref>. It is observed that the correlation function based on the vertical distance fits best with the exponential model, which is expressed as R(d_ v)=e^-d_ v/d_ corln(2), where the correlation distance is given by d_ cor=11.24 m. §.§.§ 3D distance correlation To analyze the correlation when both horizontal distance and vertical distance are considered, we can process the dataset obtained from flights at two different heights. By comparing the measurements from these flights, you can determine the correlation between two different 3D coordinate locations. The processing steps for obtaining correlation with 20 m vertical distance are as follows: i. Choose a pair of measurement datasets where the height difference is 20 m. For example, select the dataset from the 30 m height flight and the dataset from the 50 m height flight. ii. Calculate the correlation between a sample from one height (e.g., 30 m) and a sample from the other height (e.g., 50 m) across all the samples in the datasets. iii. Sort the correlation from step (ii) by the horizontal distance and average the correlations for every 2 m of horizontal distance. iv. Repeat steps (i) to step (iii) iteratively by different pairs of the measurement datasets of the height. For example, you can repeat the analysis with the dataset from the 50 m height flight and the dataset from the 70 m height flight. By performing this iterative analysis for different pairs of measurement datasets with varying height differences, we can obtain the correlation values that capture the relationship between joint horizontal and vertical distances. This analysis helps in understanding how the signal strength correlation varies with changes in both horizontal and vertical distances, providing insights into the spatial characteristics of the wireless channel. The 3D distance correlation results with 20 m and 40 m vertical distances are shown in Fig. <ref>. We model and fit the correlation of joint horizontal and vertical distance by combining the correlation functions of the horizontal and the vertical distance in (<ref>), (<ref>). The proposed correlation model in 3D space is expressed as R(d_ v, d_ h)=e^-d_ v/d_ corln(2)(ae^-b_1d_ h+(1-a)e^-b_2d_ h), where a=0.3, and b_1, b_2 are tuning parameters. Note that when d_ h=0, the model is the same as (<ref>), while when d_ v=0, the model is equivalent to (<ref>). The fitted values of b_1, b_2 depending on the vertical distance (d_ v) are listed in Table IV. §.§ Analysis of Semi-variogram In Section <ref>, we introduce earlier the concept of semi-variogram in (<ref>) and derive the relation to the correlation function in (<ref>). We analyze the semi-variogram by measurements results in Fig. <ref> with respect to both the horizontal distance and vertical distance. The measurement results are directly obtained by the definition of the semi-variogram in (<ref>) and the analysis results come from the correlation function in (<ref>) which is then used in (<ref>). The measurements and our analysis from (<ref>) are closely overlapped for both distance conditions. 
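For reference, the fitted 3D correlation model and the semi-variogram it implies can be evaluated with a few lines of Python. This is only an illustrative sketch: b_1 and b_2 must be taken from Table IV for the vertical distance of interest, and the semi-variogram expression assumes the standard relation gamma = sigma_w^2 (1 - R) between the covariance and the semi-variogram.

```python
import numpy as np

LN2 = np.log(2.0)
D_COR_V = 11.24   # vertical correlation distance [m] from the measurements
A = 0.3           # bi-exponential weight fitted to the horizontal correlation

def correlation_3d(d_h, d_v, b1, b2):
    """Joint horizontal/vertical correlation model described in the text.
    b1 and b2 are the horizontal decay parameters (Table IV); they depend
    on the vertical distance and are supplied by the caller."""
    vert = np.exp(-np.asarray(d_v) / D_COR_V * LN2)
    horiz = A * np.exp(-b1 * np.asarray(d_h)) + (1 - A) * np.exp(-b2 * np.asarray(d_h))
    return vert * horiz

def semivariogram(d_h, d_v, b1, b2, sigma_w):
    """Semi-variogram implied by the correlation model, assuming the
    standard relation gamma = sigma_w**2 * (1 - R)."""
    return sigma_w**2 * (1.0 - correlation_3d(d_h, d_v, b1, b2))
```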
§.§ Performance Evaluation with Kriging In this subsection, we evaluate the 3D interpolation performance of the Kriging technique described in Section <ref> using the measurement dataset. We adopt cross-validation-based root mean square error (RMSE) evaluation <cit.>, which compares the predicted RSRP with the measured RSRP to observe the error. In particular, the RMSE for performance evaluation can be expressed as 𝖱𝖬𝖲𝖤=√(1/N_0∑^N_0_i(r̂(l^ uav_0,i)-r(l^ uav_0,i))^2), where N_0 denotes the number of samples for prediction. In our evaluation, the 30 m height measurement samples are predicted by 30 m, 50 m, and 70 m height measurement datasets. The cross-validation-based evaluation is conducted by following steps: i. Randomly select M samples from the measurement dataset to use for the prediction. These samples will serve as the training set. ii. Randomly select N_0 samples from the 30 m measurement dataset as the validation set for cross-validation. iii. Use the Kriging technique described in Section <ref> to predict the RSRP values for the N_0 validation samples based on the M training samples. iv. Calculate RMSE between the predicted RSRP values and the actual measured RSRP values for the N_0 validation samples. The RMSE is calculated using (<ref>). v. Repeat steps (i) to (iv) iteratively for a large number of times, such as 10,000 iterations and calculate the median for the RMSE values obtained from the iterations. The median value represents the overall prediction performance of the Kriging technique. In step (ii), after randomly selecting M samples for prediction, exclude those samples from the dataset chosen for cross-validation. This ensures that the samples used for prediction are not used for validation. In addition, when we predict a sample by Kriging in step (iii), when predicting a sample using Kriging, consider only the nearby samples within a certain distance threshold (r_0). Limit the selection of neighboring samples to those within the r_0 radius circle around the target sample. These nearby samples will be used to predict the RSRP value for the target sample. The snapshot of the randomly chosen M samples from 50 m height measurement and N_0 from 30 m height measurement is described in Fig. <ref>. The figure depicts the radius circle r_0 within which nearby samples are used to predict the target sample. To provide a benchmark for comparison, we consider the perfect path loss-based 3D interpolation. In particular, we assume that the BS has perfect knowledge of the exact path loss and transmit power for all locations. This represents the ideal condition for prediction without utilizing spatial correlation. The RMSE by the perfect path loss estimation is equivalent to the standard deviation of the shadowing component from (<ref>) and (<ref>) as follows: 𝖱𝖬𝖲𝖤_ ple=√(𝔼[(r̂-r)^2])=√(𝔼[w^2])=σ_w. In Fig. <ref>, the RMSE performance of Kriging using measurements at different UAV altitudes is presented. The results show that the performance of Kriging varies depending on the altitude of the measurements used for prediction. When utilizing the 30 m and 50 m height measurement data for prediction, Kriging outperforms the perfect path loss estimation. This indicates that Kriging can leverage the spatial correlation present in the highly corrected data to achieve better prediction accuracy. However, in the case of 70 m height measurement, the perfect path loss estimation performs better than Kriging. 
This suggests that the correlation at a vertical distance of 60 m is too low to accurately predict using Kriging. In Fig. <ref>, it is observed that the RMSE generally decreases as the number of samples used for prediction (N) increases. However, when N exceeds 250, the performance of Kriging with an r_0 value of 200 m is the worst among the three different r0 values considered. This indicates that while a larger number of samples can improve performance, adding low-correlated samples can degrade the prediction accuracy. It is important to strike a balance and choose an appropriate number of samples (M) and radius (r_0). Furthermore, in Fig. <ref>, the RMSE initially decreases and then increases for r_0 values of 70 m, 100 m, and 200 m. This suggests that if the correlation between samples is not sufficiently high, increasing the number of samples may not necessarily lead to improved performance. It highlights the importance of considering both the number of samples and the correlation when determining the optimal parameters for Kriging prediction. In conclusion, the choice of the number of samples (M) and radius (r_0) is crucial for achieving accurate predictions using Kriging. Utilizing a larger number of highly correlated samples can improve performance, while including low-correlated samples or selecting an inappropriate radius can degrade the prediction accuracy. §.§ 3D Interpolation by Kriging Fig. <ref> displays the generated 3D radio map of RSRP using the Kriging interpolation technique with the available measurement data at 30 m and 50 m heights. The map provides a visual representation of the RSRP distribution in the 3D space. The dome shape of the 3D radio map provides valuable insights into monitoring the signal leakage in the three-dimensional volume of the RDZ. By examining the map, one can observe the spatial variations and signal strength levels within the monitored area. The dense 3D radio map obtained through Kriging interpolation enables efficient analysis and decision-making related to signal monitoring, interference management, and overall RF planning within the monitored area. In particular, a spectrum monitoring engine (SME) can estimate the received signal strength from each signal served within the RDZ on the surface of the dome. Subsequently, interference to sensitive receivers outside of the RDZ can be extrapolated, and if exceed a threshold, interfering signal services in the RDZ can take action (e.g. rescheduling to a different band or reducing power). § CONCLUSION In this paper, we introduce the RDZ concept which efficiently manages and controls the spectrum usage by monitoring the signal occupancy and leakage in a real-time fashion. To monitor the signal leakage from an area, we need to develop a radio map of signal power surrounding the area, which is more challenging when considering a 3D space. We propose a signal power interpolation method in the 3D volume that uses Kriging. The correlation model between two different 3D locations is designed and the semi-variogram is defined and analyzed. In addition, we study the proposed 3D Kriging interpolation using an experimental dataset provided by the NSF AERPAW platform. We fit path loss and shadowing models to the RSRP measurements and study the performance of the Kriging interpolation technique for various scenarios. 
Our results show that significant gains are possible in received power estimation accuracy by utilizing the 3D correlation of the data when compared with using only a path-loss-based power estimation.
http://arxiv.org/abs/2307.04219v1
20230709162247
Large Satellite Constellations and Their Potential Impact on VGOS Operations
[ "Federico Di Vruno", "Vincenza Tornatore" ]
astro-ph.IM
[ "astro-ph.IM" ]
Large Satellite Constellations and Their Potential Impact on VGOS Operations
Federico Di Vruno and Vincenza Tornatore
August 12, 2023
==================================================================

Large LEO satellite constellations (or so-called Mega-constellations) will significantly change the view of the sky in some radio frequency bands. For VGOS telescopes it is important to understand the potential impact these constellations will have on their operations, the risk of their receivers being driven into non-linear behaviour, and how much additional power a telescope would receive when observing at the same frequencies where satellites are transmitting. This work describes three of these new constellations (as they would look fully deployed) and summarizes the results of a particular study considering two VGOS telescopes (Onsala and Wettzell).

§ INTRODUCTION

The industrialization of spacecraft construction and the lowering cost of space launches have paved the way for big plans in Low Earth Orbit (LEO). Large satellite constellations like Starlink phase 1 (with 4400 satellites) and OneWeb phase 1 (with 648 satellites) are already in the deployment phase; others, like Project Kuiper (from Amazon) or Guowang (from China), are in their development phase, and others with even larger numbers are being filed into the International Telecommunication Union (ITU) system (see Table <ref>). With altitudes between 500 km and 1200 km, these new constellations will surround the planet almost homogeneously. From a radio telescope point of view, the situation in the sky will change considerably. This change is already evident in the number of active satellites in LEO, from about 2000 in 2018 to more than 5000 in 2022, and the trend suggests it may reach hundreds of thousands in this decade <cit.>. Until now, most of the satellites for internet communication were located in the geostationary belt (at approximately 35780 km altitude), appearing fixed in the sky for a terrestrial observer <cit.>. The new LEO satellites will orbit the Earth with a period of about 90 minutes and will be seen as hundreds to thousands of bright and fast-moving radio sources in the sky, with downlinks in frequency bands from 10.7 GHz up to 76 GHz (see Section <ref>). Contrary to the situation with terrestrial radio frequency interference (RFI), it is not possible to build radio telescopes far away from satellite transmissions <cit.>; the challenge is further increased by the opposite pointing directions of radio telescope beams and user downlink antenna beams. The typical power flux density (PFD) of satellite constellations is on the order of -146 dBW/m^2 in 4 kHz (<cit.>, <cit.>), equivalent to 62*10^6 Jy, i.e. more than 7 orders of magnitude brighter than a typical VGOS source <cit.>. These strong signals will require a radio astronomy receiver to have a large dynamic range to accommodate the RFI and still be able to detect faint cosmic sources in other frequency channels within the receiver band.
This is normally possible for modern radio astronomy receivers, but it can be different in some particular situations such as total power bolometric receivers or receivers with a low effective number of bits (ENB) <cit.>. § LARGE LEO CONSTELLATIONS Radio astronomy has been dealing with satellite transmissions since the very first satellites were launched back in the 1960s. Implementing different strategies such as using analog receivers with large dynamic ranges, smart scheduling, and RFI flagging among others, radio telescopes have been more or less able to mitigate (or avoid) the effect of these strong radio transmissions towards Earth <cit.>. In conjunction with these strategies, spectrum management has also played a key role in dealing with the effects of satellites, several radio astronomy groups have worked at national, regional and international level for the protection of the radio astronomy service (RAS) frequency bands allocated by the International Telecommunication Union (ITU). Some with successful results, like the GLONASS example, and sometimes with battles that still ongoing 20 years after satellite deployment like in the IRIDIUM case <cit.>. The exponential growth in the number of active satellites in Low Earth Orbit <cit.> could result in more than 2000 satellites above the local horizon at any moment in time. Radio telescopes are sensitive to any transmitter in line of sight through its main beam or antenna sidelobes. §.§ Walker-Delta constellations All these new constellations follow a "Walker Delta" type of distribution, composed of orbital shells at a certain altitude, each shell contains several orbital planes, with a certain inclination with respect to the Equator and distributed homogeneously in the 360 degrees of right ascension. Each one of the constellation's planes contains N satellites, a representation of Starlink Phase 2 can be found in Figure <ref>. A shell of a Walker-Delta constellation <cit.> is described by i = t/p/f where i is the inclination, t is the total number of satellites, p is the number of equally spaced planes, and f is the relative spacing between satellites in adjacent planes. This description makes it very simple to simulate any of these constellations with the purpose of studying its geometric distribution in LEO and also its effect on radio telescopes. It is also possible to use existing Two-Line Elements (TLEs) to obtain the approximate position of existing satellites in space, which can be useful to compare observations to simulation. Figure <ref> shows a qualitative view of the sky from the Wettzell VGOS station (lat 49 degrees), with the position of different satellite constellations simulated for 100 seconds. It is simple to see how the density of satellites in the sky will drastically change in the near future if all constellations planned are deployed. §.§ Radio frequencies Satellite constellations transmit their downlink signals in frequencies allocated to the Fixed Satellite Service (FSS). Table <ref> contains some of the currently in-use and planned FSS bands and it is important to note the proximity to some ITU protected RAS bands immediately adjacent or in very close proximity. The close vicinity of the satellite's downlinks to radio astronomy bands is a matter of concern for radio astronomers and spectrum managers. As an example, the protection of the 10.6-10.7 GHz Radio Astronomy Service (RAS) band, which includes a passive band in 10.68-10.7 GHz protected by the footnote RR No. 
5.340 in the ITU-R Radio Regulations (RR), was studied for the Starlink Ph1 and OneWeb ph1 constellations in <cit.>, with the conclusion that both systems should not use the first 250 MHz channel to protect the RAS band. These signals can not only impact sensitive observations in the RAS protected bands, but can also affect wideband receivers which include the frequency range of user downlinks. Such wideband receivers (from 1 to 14 GHz in the case of VGOS) are necessary to conduct cutting edge science or Geodesy <cit.>. This paper focuses on the downlink frequency range 10.7 to 12.75 GHz where both OneWeb and Starlink have divided the band in 8 channels of 250 MHz each. The study can be replicated for higher frequency bands with the appropriate modification of satellite and telescope characteristics. § POTENTIAL IMPACT ON VGOS By using large reflector antennas pointed towards the sky and wideband receivers covering the frequency range 1 to 14 GHz <cit.>, VGOS telescopes can be impacted by downlinks of the large satellite constellations in different ways. In fact the VGOS bandwidth is wide while the protected Radio astronomy band is very narrow in and Starlink and OneWeb frequencies use a considerable portion of spectrum. The severity of this impact depends on the interaction between the radio telescope beam and the satellite downlink beams. One of the most important aspects is how much a correlated baseline can be affected, as the primary product of a VGOS observation. Nevertheless, the multi-dimensionality of this problem requires an analysis of the complete signal reception mechanisms and how each part of the signal chain may be impacted. In a typical VGOS schedule, targets are observed with durations in the order of seconds to tens of seconds, the position of the target in the local sky and the density of satellites deployed will define how much interference will be seen by the telescope. The instantaneous received power from all satellites above the horizon may saturate the analog signal chain (low noise amplifiers, mixers, etc), causing non-linearities that would render the complete receiver band unusable, even if the digitizer band is tuned to a completely different frequency than the satellite downlinks channels. If the RFI power is not as strong and the analog signal chain remains linear, then there can be two possible scenarios: * First scenario: when the observed band is outside of the satellite downlink frequency range, in which case out of band emissions from the satellites could be a problem depending on their level. This work is not focusing on this, but <cit.> has studied that case. * Second scenario: if the observing band falls within one satellite downlink band (250 MHz channels) or vice versa, strong RFI will be received by the VGOS antenna. This RFI can potentially be mitigated by correlation as long as the number of bits in the digitizer are enough to correctly digitize the signal. Since a VGOS digitizer has only two bits, the total integrated RFI needs to be lower (practically at least 10 dB lower or 1/10) than the integrated noise power of the receiver <cit.>. Non-linearities and lack of headroom for RFI are transient phenomena and can be considered in terms of a data-loss associated with the moments where one satellite is going through the main beam of the radio telescope. The issue of out of band emission is related to long integrations and needs a comparison between the level of integrated RFI vs the integrated level of the astronomical source under observation. 
The following section describes a simulation method and presents a particular case for the Starlink phase 1, OneWeb phase 1 and Starlink phase 2 constellations to estimate data loss due to strong received power and the total aggregated RFI, the effects of the correlation is not included in this work as is currently under study by the authors. § SIMULATION METHODOLOGY The simulation is based on the Equivalent Power Flux Density (epfd) concept (see <cit.>), where the satellite constellation is propagated for a defined time duration, obtaining the coordinates and attitude of every satellite for each time step. Then, the telescope antenna is pointed towards a defined sky-cell in azimuth and elevation and for each of the simulated time steps, the received power from all satellites above the horizon is calculated with the formula: P_rx_(t,p)=∑_i=0^N_sat(PFD_sat_(i,t) * A_eff_RAS_(i,t,p)) where: t = time step p = pointing direction i = satellite index PFD_sat = Satellite power flux density in W/m^2 towards the telescope location A_eff_RAS = Effective area of the telescope antenna in m^2 towards the satellite position This calculation is iterated for a number of trials (typically hundreds to thousands), where each try has a random start time of the constellation and therefore contributes to a statistically representative result. In situations where multiple frequencies are calculated, like for example the case of OneWeb with its 16 fixed-beams antenna (see Figure <ref>), the number of channels is added to the result. Therefore the final calculation results in a data cube with four dimensions, namely number of iterations, number of pointing directions, number of time steps, and number of channels: N_iters, N_pointing, N_time and N_channel. Although the original epfd calculation as defined by the ITU uses telescope pointings in local coordinates (Alt,Az), this work considers pointings in celestial coordinates (Ra,Dec) as this allows to understand how celestial positions in different declinations can be impacted by satellite constellations transmissions. §.§ Satellite position propagation Using the Python package Cysgp4 <cit.> and the Astropy Coordinates package <cit.>, the position of the satellites in horizontal coordinates (Alt,Az) and Sky coordinates (Ra,Dec) are calculated for each timestep and each iteration (see Figure <ref>). §.§ Satellite power flux density (PFD) The PFD from each satellite in a constellation is modelled based on publicly available information (ITU documents and FCC filings). To calculate the power flux density towards the telescope site, the coordinates of the telescope in the satellite reference frame are also calculated using the Python package cysgp4 <cit.>. OneWeb satellites are modelled based on the information available in the ECC report 271 <cit.>, with 8 channels in the 10.7-12.75   GHz. A fixed beam antenna pattern, like the OneWeb system, makes it simpler to calculate the received power in a deterministic way. The PFD from Starlink satellites is more complex to model since they have an antenna array that can produce, and electronically steer, several beams in one or multiple frequency channels. The mean PFD from a Starlink satellite is modelled as a function of the elevation of the satellite, obtained from a Monte Carlo simulation in where the steering angle, the number of beams and the position of satellite and observer was varied a large number of times. Starlink satellites are modeled as one frequency channel at a time. 
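Putting the elements of this section together, the received-power sum defined above reduces to a weighted sum over all visible satellites at every time step and pointing. A compact NumPy sketch is given below; the array shapes are our own convention, and the satellite PFD values and telescope effective areas are assumed to be precomputed (e.g., from the cysgp4-propagated positions and the PFD models described above).

```python
import numpy as np

def received_power(pfd_sat, a_eff_ras):
    """Aggregate received power over all satellites.

    pfd_sat   : array [n_time, n_sat] of satellite PFD at the telescope [W/m^2]
    a_eff_ras : array [n_time, n_sat, n_pointing] of telescope effective
                area toward each satellite [m^2]
    Returns an array [n_time, n_pointing] of received power [W].
    """
    return np.einsum('ts,tsp->tp', pfd_sat, a_eff_ras)

# Usage (shapes only): pfd = np.zeros((n_time, n_sat));
# a_eff = np.zeros((n_time, n_sat, n_pointing));
# p_rx = received_power(pfd, a_eff)  -> shape (n_time, n_pointing).
# Repeating this over trials with randomized constellation start times and
# over frequency channels builds the 4-D result cube described above.
```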
§.§ Radio Telescope antenna The radio telescope antenna is modelled based on <cit.>. While this model is not a real measurement of the antenna pattern of a radio telescope, it is based on real measurements and is considered as a worst case for compatibility studies. To obtain the gain towards the satellite, the angle between the pointing direction and the position of the satellite is calculated. The Effective Area of the antenna is calculated with the following equation: A_eff = G_RAS*(λ^2/(4*π)) §.§ Correlation Interferometry can greatly mitigate the effects of RFI, especially when the baselines are large like in the case of VLBI <cit.>. Although Thompson and others have studied the effect that long baselines have over single RFI transmitters (and stationary), the situation is not the same when potentially hundreds of transmitters using the same frequency and bandwidth are received simultaneously as can happen now. For example in <cit.>, Petrachenko identifies the 10.7-12.75 GHz range as a usable frequency range as only Geostationary satellites were using that frequency at that time. Now the received RFI signal at one antenna will be the sum of the signals from all satellites above the horizon (of course with different levels of attenuation). This analysis is deferred to a further update of this work. §.§ Saturation Limit threshold Digital processing operations in a radio telescope can be applied as long as the analog and digital signal chains behave in a linear manner; strong enough signals will generate non-linearities corrupting the complete receiver band for the duration of the interference. Defining the level where a receiver goes non-linear is not a simple task and will depend on each particular receiver. In the case of the VGOS receivers a conservative value for total power of -50 dBm is considered to keep the analog signal chain within the linear regime. If the received power is below this linearity threshold, the analog signal can then be correctly digitized with a bandwidth of 1  GHz. Two scenarios can be identified: * Digitizing a frequency range outside of the 10.7-12.75 GHz, which should not have any complications since the signal chain behaves in a linear way and therefore this case will not be further studied; * Digitizing in a frequency range within the 10.7-12.75 GHz. In this case is interesting to understand when the RFI produces a significant amount of power compared to the RMS noise of the receiver. Given the distinct characteristic of VGOS systems using a 2 bit correlator, it is reasonable to consider that there is not much headroom in the digital signal chain to accommodate for RFI, this work considers that any signal above or equal to the receiver's noise power will result in a data loss. This defines the second threshold as a spectral power flux density equal to the RMS noise of a 20 K receiver system (-215 dBW/Hz). These two thresholds are used in the simulation; a first set of flags is produced when the total integrated power (considering the 8 channels of 250 MHz for each constellation) is higher than -50 dBm (representing a total data loss) and the second one representing a data loss in the case of observing in the same frequency range as the satellite transmissions. After these two flagging stages, low level RFI will still be present, it is of interest to understand how this will affect the correlation of the baseline. This will be further study in a future update to this work and compared to the thresholds defined in RA.769 <cit.>. 
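The two flagging thresholds defined above can be applied to the simulated received-power cube as sketched below. The conversion from total received power to a power spectral density assumes that the RFI is spread evenly over the eight 250 MHz downlink channels, which is a simplification made here for illustration only.

```python
import numpy as np

P_SAT_DBM = -50.0          # analog saturation threshold (total power)
PSD_NOISE_DBW_HZ = -215.0  # RMS noise PSD of a ~20 K receiver system

def flag_time_steps(p_rx_w, bandwidth_hz=8 * 250e6):
    """Return (full_band_flags, digitizer_flags) per time step and pointing.

    p_rx_w : received power [W] integrated over the 8 x 250 MHz channels.
    """
    p_rx_dbm = 10.0 * np.log10(p_rx_w) + 30.0            # W -> dBm
    psd_dbw_hz = 10.0 * np.log10(p_rx_w / bandwidth_hz)  # W -> dBW/Hz, flat-RFI assumption

    full_band_loss = p_rx_dbm >= P_SAT_DBM            # receiver driven non-linear
    digitizer_loss = psd_dbw_hz >= PSD_NOISE_DBW_HZ   # RFI at or above receiver noise
    return full_band_loss, digitizer_loss
```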
§.§ Metrics Based on the threshold limits defined in the previous section, the following metrics are used: * Full Band Data Loss (FBDL): percentage of time that the complete band is lost due to very strong RFI, where the total received power is >-50 dBm; * Digitizer Data Loss (DDL): Percentage of the total observation time (single run multiplied by the number of iterations) that the instantaneous power spectral density is above 10% of the integrated noise power of the receiver. This can be calculated as a function of the declination of the source; * Average Equivalent Spectral Power Flux Density (aESPFD): average value of the equivalent Spectral Power Flux Density during the observation time in each antenna. The eSPFD is calculated as the received spectral power flux density [W/m^2/Hz] divided by the maximum effective antenna area, and it is useful to compare to the SPFD (in units of Jy) of a celestial source in the main beam of the antenna; § CASE STUDY SIMULATION A specific study case was selected to understand the impact from several satellite constellations on two telescopes normally involved in VGOS observations, it is the intent to further expand this work into how correlation over the long baseline mitigates the RFI. The VGOS stations in Sweden (Onsala Observatory) and Germany (Wettzell Observatory) were selected as the test stations, using the parameters in Table <ref>, and Starlink phase 1, OneWeb phase 1 and Starlink phase 2 as constellations see Table <ref>. The simulated observations were runned for 100 seconds in 1 second timesteps with 100 iterations. Originally it was intended to use a real VGOS schedule, using real Ra, Dec of sources observed, but to get a more representative results of the impact as a function of source declination the number of sources was increased artificially to 277 in a random fashion, see Figure <ref> for a plot of the sources distribution. Figure <ref> shows the view of the local sky in (Alt,Az) and how the celestial sources and the satellite constellation (in this case Starlink Phase1) move across the sky in that timeframe. § RESULTS The results for each one of the selected metrics is summarized here for each constellation simulated. §.§ Full Band Data Loss (FBDL) Notably, the analog saturation threshold was not reached due to the combination of maximum PFD from the satellites (-98 dBW/m^2 in 250   MHz) and maximum effective area of the VGOS antennas (106 m^2 or 20.3 dBm^2), as can be seen in Figure <ref>. This shows that even with large constellations such as Starlink phase 2 the analog receivers would still behave in a linear fashion. §.§ Digital Data Loss (DDL) When considering an observation coinciding in frequency with the downlinks of satellites (i.e. in within the 10.7-12.75 GHz) the DDL varies as a function of declination of the observed source and observatory latitude. This effect is attributable to the different structures of each constellation's density of satellites around the Earth and the latitude of the observer. This shows that impact to VGOS stations (and radio telescopes in general) will strongly depend on the observatory latitude. See Figure <ref>. §.§ Average Equivalent Spectral Power Flux Density (aESPFD) After a certain percentage of the observed data was lost as DDL (see section <ref>, the aESPFD is calculated for each constellation as a function of declination. In this case the flagged percentage is calculated as the product of the flags from the previous section for each antenna. 
Considering that the ITU-R RA.769 thresholds for harmful interference for VLBI are defined as -193  dBW/m^2/Hz, representing an ESPDF of 250 Jy in an antenna of 13 m diameter, the results show that VGOS observations could in principle be conducted inside the satellite downlink bands (considering the percentage of data lost). See Figure <ref>. § CONCLUSIONS This paper proposed metrics to evaluate the impact of large satellite constellations on VGOS operations by a simil-epfd simulation for Starlink ph1 and ph2, and OneWeb ph1, and two European stations as receivers. Through calculations and simulations it was proved that the maximum received power even in beam-to-beam coupling condition with satellites will not be enough to saturate the analog chain of a VGOS receiver. As for the digitized part, the simulations show that observations in the same band as the downlinks from satellites can have a significant percentage of data loss due to strong signals compared to the thermal noise of the receiver. Nevertheless the results shows that the ESPFD for both antennas and all constellations is lower than the thresholds defined by ITU-R for VLBI. Observations outside of the satellite downlink bands should not be impacted by satellite downliks in this frequency range. As further work the authors will continue investigating how correlation can help mitigate this signals from satellite constellations and how the aggregation of all constellations scales the impact. § ACKNOWLEDGEMENTS The authors would like to thank the IVS Coordinating Center at NASA Goddard Space Flight Center (GSFC) for taking the archive of IVS sessions. The schedule used in this work is available at the https://ivscc.gsfc.nasa.gov/sessions/2022/vo2027 web page. We are grateful to Salvo Buttaccio, for the assistance with the VGOS schedule, to Dr. Benjamin Winkel for assistance with the use of the Cysgp4 Python package, and to Dr. Jose Antonio Lopez-Perez and Dr. Hayo Hase for useful discussions about VGOS receivers and operations. 99 RFI_Baan W. A. Baan, 2011. "RFI mitigation in radio astronomy" RFI Mitigation Workshop 2010 Cohen J. Cohen, Iridium and Radio Astronomy in Europe Spectrum Management for Radio Astronomy: proceedings of the IUCAF summer school held at Green Bank, West Virginia, June 9-14, 2002. Cooper_bits Cooper, B.F.C., 1970. "Correlators with two-bit quantization". Australian Journal of Physics, 23, pp.521-527. ECC271 ECC Report 271, "Compatibility and sharing studies related to NGSO satellite systems operating in the FSS bands 10.7-12.75 GHz (space-to-Earth) and 14-14.5 GHz (Earth-to-space)" European Communications Office, 2021 Lawrence A. Lawrence Et. Al., "The case for space environmentalism" Nature Astronomy volume 6, pages428–435 (2022) OneWeb_ph1 OneWeb phase 1 FCC filing <https://fcc.report/IBFS/SAT-MPL-20200526-00062/2379565> Petrachenko_RFI B. Petrachenko, "The Impact of Radio Frequency Interference (RFI) on VLBI2010" IVS 2010 General Meeting Proceedings, p.434–438 Petrachenko_WG3 B. Petrachenko et. al. 2010. 
"Final Report of the Observing Strategies Sub group of the IVS Working Group 3" IVS 2010 General Meeting <https://ivscc.gsfc.nasa.gov/about/wg/wg3/1_observing_strategies.pdf> RA.769 RECOMMENDATION ITU-R RA.769 "Protection criteria used for radio astronomical measurements" RA.1631 RECOMMENDATION ITU-R RA.1631 "Reference radio astronomy antenna pattern to be used for compatibility analyses between non-GSO systems and radio astronomy service stations based on the epfd concept" S.1586 RECOMMENDATION ITU-R S.1586 "Calculation of unwanted emission levels produced by a non-geostationary fixed-satellite service system at radio astronomy sites" Starlink_ph1 Starlink phase 1 FCC filing <https://fcc.report/IBFS/SAT-MOD-20200417-00037/2274316> Starlink_ph2 Starlink phase 2 FCC filing <https://fcc.report/IBFS/SAT-AMD-20210818-00105> Astropy The Astropy Collaboration et.al., "Astropy: A community Python package for astronomy" A&A Volume 558, October 2013 Astropy2 The Astropy Collaboration et.al.,"The Astropy Project: Building an inclusive, open-science project and status of the v2.0 core package" <https://arxiv.org/abs/1801.02634> Thompson_RFI Thompson, 1982. "The Response of a Radio-Astronomy Synthesis Array to Interfering Signals" IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, VOL. AP-30, NO. 3, MAY 1982 Walker J. G. Walker, Satellite constellations, Journal of the British Interplanetary Society, vol. 37, pp. 559-571, 1984 Cysgp4 B. Winkel, "A wrapper around the SGP4 package, for sat TLE calculations" <https://github.com/bwinkel/cysgp4>
http://arxiv.org/abs/2307.06338v1
20230712000100
Denoising Simulated Low-Field MRI (70mT) using Denoising Autoencoders (DAE) and Cycle-Consistent Generative Adversarial Networks (Cycle-GAN)
[ "Fernando Vega", "Abdoljalil Addeh", "M. Ethan MacDonald" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Denoising Simulated Low-Field MRI (70mT) using Denoising Autoencoders (DAE) and Cycle-Consistent Generative Adversarial Networks (Cycle-GAN)
Fernando Vega, Abdoljalil Addeh, M. Ethan MacDonald
August 12, 2023
==================================

§ SYNOPSIS

In this work, a denoising Cycle-GAN (Cycle-Consistent Generative Adversarial Network) is implemented to yield high-field, high-resolution, high signal-to-noise ratio (SNR) Magnetic Resonance Imaging (MRI) images from simulated low-field, low-resolution, low-SNR MRI images. Resampling and additive Rician noise were used to simulate low-field MRI. The images were used to train a Denoising Autoencoder (DAE) and a Cycle-GAN, with paired and unpaired cases. Both networks were evaluated using the SSIM and PSNR image quality metrics. This work demonstrates the use of a generative deep learning model that can outperform classical DAEs in improving low-field MRI images and does not require image pairs.

§ INTRODUCTION

Over the last few decades there has been an increasing use of magnetic resonance imaging (MRI), as it provides hundreds of contrast modes and is minimally invasive <cit.>. It is known that higher spatial resolution and SNR-efficiency can be achieved with higher field strength <cit.>. However, as the field strength increases, so does the cost <cit.>. Low-field MRI scanners are less expensive (~20x less expensive than 3T over 10 years), have much lower energy consumption (~60x less electricity) <cit.>, reduce the energy absorption in the subject, and do not require expensive liquid helium <cit.>; however, the trade-off is lower resolution and lower SNR-efficiency <cit.>. Previous work aimed to improve the resolution and SNR-efficiency by implementing machine learning techniques such as a Denoising Autoencoder (DAE) <cit.> based on Convolutional Neural Networks (CNN) <cit.>. However, this architecture requires the images to be paired and aligned, and performing registration on noisy images is prone to error, necessitating a technique that does not need images to be paired or registered. This led us to use a Cycle-Consistent Generative Adversarial Network (Cycle-GAN) <cit.> as an improvement over classical DAEs. The Cycle-GAN architecture is also based on CNNs; it uses four networks: two generators and two discriminators, where one generator produces synthetic denoised images that are fed to a second generator that reconstructs the original noisy image. One discriminator is assigned to each generator to predict whether the generated images are real or synthetic <cit.>. Using this approach, GAN architectures excel at generating synthetic images with a high degree of similarity to the real ones <cit.>. In this work, a 3D Cycle-GAN was implemented using unpaired 3T MRI images and simulated low-field MRI images. The model was evaluated on unseen images, and the Structural Similarity Index (SSIM) <cit.> and Peak Signal-to-Noise Ratio (PSNR) <cit.> are reported as performance metrics. These results are compared with the performance of DAEs.

§ METHOD

100 T1-weighted MRI images were used from the Open Access Series of Imaging Studies (OASIS-3) database <cit.> (3T MRI images with a resolution of 1 mm × 1 mm × 1 mm). Low-field MRI images were then synthesised by resampling to a resolution of 1.5 mm × 1.5 mm × 1.5 mm and adding Rician noise to emulate the low SNR of a 70 mT scanner <cit.>. A 3D Cycle-GAN model was implemented using the MONAI deep learning framework <cit.>. The model was fed with 100 high-field MRI images and 100 simulated low-field MRI images for 500 epochs, following the architecture shown in Fig. <ref>. This architecture has a total of 13 layers with 9 residual blocks that act as a bottleneck without any skip connection, as shown in Figure <ref>.
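As an illustration of this bottleneck, a single 3D residual block of the kind used in ResNet-style Cycle-GAN generators can be sketched in PyTorch as follows. The channel width, normalization, and activation choices shown here are assumptions for illustration only and not the exact configuration trained with MONAI.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """One 3D residual block of the kind used in ResNet-style Cycle-GAN
    generators; hyperparameters are illustrative placeholders."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual connection within the block

# Nine such blocks form the bottleneck between the generator's
# down-sampling and up-sampling convolutional stages.
bottleneck = nn.Sequential(*[ResidualBlock3D(256) for _ in range(9)])
```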
This architecture diverges from the standard U-net style followed in DAEs. Once the model was trained, it was evaluated on 100 unseen images, and the results were compared with those of a DAE using the SSIM and PSNR metrics.

§ RESULTS

The results obtained can be seen in Figure <ref>, where the synthetic images have a high degree of visual similarity to the true images based on the reported SSIM and PSNR; Figure <ref> shows the same subjects using a DAE. In Figure <ref>, the Cycle-GAN denoising model is compared with a DAE, showing that the Cycle-GAN produces overall better images in terms of contrast and shape. The metrics evaluated on the cohort of unseen images show that the Cycle-GAN model is able to produce high-quality synthetic denoised images, as shown in Figure <ref>, with a mean PSNR 14.62% higher than that of the DAE. The DAE scored 1.15% higher in SSIM compared to the Cycle-GAN. However, the PSNR is a more suitable measure for comparing noise between images than the SSIM.

§ DISCUSSION

This work demonstrates a pipeline that can produce similar or better estimations than a classical DAE on simulated low-field images. The results are encouraging, as they demonstrate that low-field MRI images can be used to generate images with the same quality as a high-field MRI without the need for paired data. In future work, we propose to address the limitations of this project. One is the use of simulated low-field data, which needs to be replaced with empirically gathered low-field data to produce a representative model. Another limitation of this simulation is that we do not consider T1 and T2 differences at different field strengths. This work is a major advance as it shows that the Cycle-GAN performs better than the DAE and does not require image pairs in training.

§ ACKNOWLEDGEMENTS

The authors would like to thank the University of Calgary, in particular the Schulich School of Engineering and the Departments of Biomedical Engineering and Electrical and Software Engineering; the Cumming School of Medicine and the Departments of Radiology and Clinical Neurosciences; as well as the Hotchkiss Brain Institute, Research Computing Services and the Digital Alliance of Canada for providing resources. The authors would like to thank the Open Access Series of Imaging Studies team for making the data available. FV is funded in part through the Alberta Graduate Excellence Scholarship. JA is funded in part by a graduate scholarship from the Natural Sciences and Engineering Research Council Brain Create program. MEM acknowledges support from start-up funding at UCalgary, a Natural Sciences and Engineering Research Council Discovery Grant (RGPIN-03552), and an Early Career Researcher Supplement (DGECR-00124).
http://arxiv.org/abs/2307.04200v1
20230709150848
Integrated frequency-modulated optical parametric oscillator
[ "Hubert S. Stokowski", "Devin J. Dean", "Alexander Y. Hwang", "Taewon Park", "Oguz Tolga Celik", "Marc Jankowski", "Carsten Langrock", "Vahid Ansari", "Martin M. Fejer", "Amir H. Safavi-Naeini" ]
physics.optics
[ "physics.optics", "quant-ph" ]
APS/123-QED 1 2 3 ⋆ [email protected] Valid PACS appear here Integrated frequency-modulated optical parametric oscillator Amir H. Safavi-Naeini1,⋆ August 12, 2023 ============================================================ Optical frequency combs have revolutionized precision measurement, time-keeping, and molecular spectroscopy <cit.>. A substantial effort has developed around “microcombs”: integrating comb-generating technologies into compact, reliable photonic platforms <cit.>. Current approaches for generating these microcombs involve either the electro-optic <cit.> (EO) or Kerr mechanisms <cit.>. Despite rapid progress, maintaining high efficiency and wide bandwidth remains challenging. Here, we introduce a new class of microcomb – an integrated optical frequency comb generator that combines electro-optics and parametric amplification to yield a frequency-modulated optical parametric oscillator (FM-OPO). In stark contrast to EO and Kerr combs, the FM-OPO microcomb does not form pulses but maintains operational simplicity and highly efficient pump power utilization with an output resembling a frequency-modulated laser <cit.>. We outline the working principles of FM-OPO and demonstrate them by fabricating the complete optical system in thin-film lithium niobate (LNOI). We measure pump to comb internal conversion efficiency exceeding 93% (34% out-coupled) over a nearly flat-top spectral distribution spanning ≈ 1,000 modes (≈ 6 THz). Compared to an EO comb, the cavity dispersion rather than loss determines the FM-OPO bandwidth, enabling broadband combs with a smaller RF modulation power. The FM-OPO microcomb, with its robust operational dynamics, high efficiency, and large bandwidth, contributes a new approach to the field of microcombs and promises to herald an era of miniaturized precision measurement, and spectroscopy tools to accelerate advancements in metrology, spectroscopy, telecommunications, sensing, and computing. § INTRODUCTION Optical frequency combs, characterized by their precisely spaced, sharp spectral lines that serve as a “frequency ruler" for light, are indispensable tools in numerous fields, from precision metrology and atomic clocks to high-capacity telecommunications and molecular spectroscopy <cit.>. Fueled by their potential practical applications, the drive to miniaturize frequency combs into chip-scale integrated devices, known as microcombs, has recently accelerated at a remarkable pace <cit.>. Traditional optical frequency combs, produced through mode-locked lasers and synchronously pumped optical parametric oscillators, are large-scale and require substantial infrastructure, thus limiting their utility outside laboratory settings. Two principal methods for creating integrated frequency comb sources suitable for smaller, deployable devices have been explored in response. The first involves third-order χ^(3) or Kerr optical nonlinearity, with successful demonstrations in materials such as silica, silicon nitride, aluminum nitride, silicon carbide, and lithium niobate <cit.>. The second strategy employs the electro-optic effect, which has been realized in resonant (shown in Fig. <ref>a) and non-resonant integrated thin-film lithium niobate devices <cit.>. Despite these remarkable advances, electro-optic and Kerr combs face several challenges. They are often limited in their efficiency, exhibit a strong pump background, suffer from limited tunability, and display a decreasing comb line intensity for the lines distant from the pump. 
Moreover, Kerr frequency combs demand sophisticated control and become significantly more challenging to operate at a smaller free spectral range (FSR). In this study, we propose and demonstrate a new type of microcomb that combines the advantages of both EO and Kerr combs, merging nonlinear optical processes with electro-optic modulation in an integrated device. Specifically, our structure accommodates both optical parametric amplification and phase modulation within a single cavity, thereby facilitating the generation of a frequency-modulated optical parametric oscillator (FM-OPO, Fig. <ref>b). <cit.> Remarkably, unlike in conventional Kerr and EO combs, the dynamics in our system do not result in pulse formation, making the output more closely resemble that of a frequency-modulated (FM) laser. This strategy maintains the operational simplicity characteristic of electro-optic combs while achieving substantially broader bandwidths than those attainable through modulation alone. Furthermore, our technique gives rise to a flat-top output comb, an optimal spectral distribution for many applications, while avoiding unwanted nonlinearities that manifest at large pulse peak powers<cit.>. Finally, the FM-OPO exhibits impressive efficiency, converting a significant fraction of the pump light into comb lines while demanding only modest RF power inputs for operation. To implement the integrated FM-OPO, we turn to thin-film lithium niobate (LN) for its strong second-order optical nonlinearity and electro-optic (EO) effect. Thin-film LN has recently emerged as a platform for integrated nanophotonics<cit.> through demonstrations of efficient electro-optic modulators <cit.>, electro-optic combs<cit.>, periodically poled lithium niobate (PPLN) waveguides for frequency conversion<cit.>, quantum light generation<cit.>, resonant second harmonic generation and optical parametric oscillators<cit.>, and integration with complex photonic integrated circuits for applications such as laser control<cit.> and quantum measurements<cit.>. The above demonstrations are either based on the EO effect that transfers energy between optical modes separated by the RF frequency or the χ^(2) nonlinearity that can provide broadband gain. Combining these two distinct capabilities forms the foundation for the integrated FM-OPO. § COMB DYNAMICS Both Kerr and EO comb generation fundamentally rely on mode-locking, which subsequently leads to the formation of pulses. However, this process inherently introduces a strong frequency-dependent variation in the intensity of the comb lines that decay exponentially with their offset from the center. Another considerable challenge posed by pulse formation is the inefficient utilization of pump power, as a continuous wave (CW) pump only overlaps with a small part of the circulating field. Recent advancements have started to address this issue, mainly by exploiting auxiliary resonances <cit.> and utilizing pulsed pumps <cit.>. Finally, pulse formation leads to large intracavity peak powers that can engage other unwanted nonlinearities and make comb formation challenging in integrated platforms <cit.>. We discover here that incorporating parametric gain into an EO-modulated cavity leads to a frequency comb without necessitating pulse formation. Despite the modulation being close to the cavity resonance mode spacing, our system's dynamics strikingly resemble those of an FM laser <cit.>. As in an FM laser, we will see that the optical frequency of the signal is swept across a bandwidth B.W. 
at the rate of the RF modulation Ω. We first consider the situation without any modulation. We assume that we operate the OPO nondegenerately so that it emits signal and idler tones at mode number offsets ± n_osc from a central mode with frequency ω_0 close to ω_p/2. As we introduce RF modulation at frequency Ω characterized by a mode coupling rate M, these signal and idler tones are simultaneously subject to gain and modulation. The pairing of these effects around the signal and idler creates conditions that mirror the dynamics of an FM laser, where phase-insensitive gain and modulation coexist. In an FM laser, the limiting behavior that prevents mode-locking arises from a detuning between the cavity's FSR and the drive frequency Ω. The FM laser then transitions to chaotic and mode-locked states as this detuning is reduced and the bandwidth is increased to approach the gain bandwidth of the medium or a limit set by the cavity dispersion <cit.>. The oscillation bandwidth of the FM-OPO is limited by the cavity's dispersion, characterized by mode frequencies ω_n = ω_0 + ζ_1 n + ζ_2 n^2/2, where ζ_1 and ζ_2 are the cavity FSR near ω_0 and the second-order dispersion, respectively. Under the regime considered, our device avoids the transition to mode-locking behavior. The signal and idler modes are far separated and experience local FSRs near ± n_osc that differ from each other by 2 n_osc ζ_2. Moreover, the parametric nature of the process necessitates the simultaneous formation of combs at both signal and idler frequencies. Therefore, in the assumed nondegenerate regime, there is always effectively a drive detuning when we consider both signal and idler combs. This results in dynamics that closely mirror those of an FM laser with detuned driving, where continuous frequency sweeping is observed rather than pulse formation. The effective bandwidth is given by B.W. ≡ 2ΓΩ = 4MΩ/(n_osc ζ_2), where Γ is the modulation index, and the signal and idler tones are frequency modulated as a_s,i(t) ≈ A_s,i e^-iω_s,i t e^∓iΓ sin(Ω t) e^iω_p t/2. The bandwidth formula aligns well with the established expression for the FM laser bandwidth B.W. ∝ MΩ/(Ω-FSR) <cit.>, with the correspondence being that the FM laser detuning Ω-FSR is replaced by the detuning n_osc ζ_2 between the drive and local FSR in the FM-OPO. Finally, we note that there are conditions where the above analysis no longer holds, e.g., at (near-)degenerate OPO operation leading to smaller n_osc, at significantly larger M, or for dispersion-engineered waveguides that may match the local signal/idler FSRs. Bulk phase-modulated OPOs have already been demonstrated <cit.>. We leave the engineering and study of the dynamics of integrated phase-modulated OPOs in a wider set of operating regimes to future work. § RESULTS We demonstrate an optical frequency comb generator based on an FM-OPO integrated on a chip (Fig. <ref>c). The device evenly distributes 11 mW of optical power over 200 comb lines using 140 mW of C-band optical pump power and 200 mW of RF modulation power. Comb lines are spaced by about 5.8 GHz. We base our device on a racetrack resonator in thin-film lithium niobate on insulator (LNOI) with intrinsic quality factors of around Q_i ≈ 10^6. This resonator holds within it an electro-optic modulator, an optical parametric amplifier, and a high-efficiency wavelength-selective coupler that nearly fully transmits the 780 nm pump while keeping the C-band excitation within the cavity. Figure <ref>a shows a schematic design of the device, while Fig.
<ref>b shows a microscope image of a single FM-OPO device. The coupler allows our device to operate as a doubly resonant OPO where the pump passes through the OPA but is non-resonant in the cavity. One straight section has gold electrodes patterned next to it, enabling electro-optic modulation of the cavity (see the left inset in Fig. <ref>b). The other straight section of the cavity is a periodically poled lithium niobate (PPLN) waveguide that provides parametric gain when pumped with the second harmonic (see the right inset in Fig. <ref>b for a second harmonic microscope picture of the poled thin-film lithium niobate). In the Methods section, we describe the design and characterization of the waveguides and cavity in detail. We generate the 780 nm pump on the same chip in a separate PPLN waveguide. We filter out the original pump field through three on-chip filters of the same design as the intracavity coupler. The high SHG efficiency allows us to achieve considerable optical pump powers using only a standard commercial C-band laser. Figure <ref>c shows an example FM-OPO output spectrum when the device is pumped with around 140 mW of FH optical power (corresponding to around 100 mW of SH power) and 200 mW of RF power, equivalent to about 4.5 V peak voltage. We plot an electro-optic comb generated using the same RF power within the same cavity in gray for comparison. We observe a flat comb formation around signal and idler wavelengths and no significant background from the pump. The measured output aligns with our coupled-mode theory model (thick dark blue line) described below. The bottom right inset in Fig. <ref>c shows individual lines in a flat spectrum spaced by around 5.8 GHz. The top right inset in Fig. <ref>c shows the result of collecting the output using a fast photodetector and an RF spectrum analyzer. In the RF spectrum, we observe narrow lines spaced by the multiples of the cavity FSR, resulting from the FM-OPO sweeping over a frequency-dependent output coupler (see Methods for details). We can understand nearly all of the salient features of the observed spectra in the context of an approximate time-domain coupled-mode theory analysis. We also use this formulation to derive the formula for the comb bandwidth shown in equation <ref>, which agrees well with observations <ref>b. We define mode amplitudes a_n to represent the field amplitudes for the n-th mode around the fundamental frequency, where n = 0 corresponds to the fundamental mode closest to half of the pump frequency. In this context, b represents the amplitude of the second harmonic pump field. Each mode n has a natural frequency given by the cavity dispersion with ζ_1/2π≈ 5.8 GHz and ζ_2/2π≈ 11 kHz corresponding to the cavity FSR and the second-order dispersion, respectively. Other key parameters include the laser drive detuning Δ≡ω_p/2 - ω_0, and the RF drive detuning from the FSR δ≡Ω - ζ_1. The mode coupling due to modulation M, which is proportional to the RF drive voltage, and the nonlinear coupling rate g provide the critical ingredients for realizing the comb dynamics. We also include the loss rates of the considered field amplitudes, κ_a,n and κ_b. The rate κ_b corresponds to that of an extremely lossy single-pass “cavity” and allows us to approximate our DRO in this coupled-mode theory formulation. We derive all of the model parameters from independent simulations, as well as experimental and theoretical analysis (refer to the Methods section and SI for more details). 
The resulting coupled-mode equations are ȧ_n = [ i( Δ + nδ - n^2 ζ_2/2 ) - κ_a,n/2 ] a_n - iM( a_n-1 + a_n+1 ) - 2ig a_-n^∗ b, ḃ = -κ_b/2 b - ig ∑_n a_n a_-n + i√(κ_b) β_in. There are two main approximations in these equations. First, we represent the pump field as the excitation of a very lossy mode b – solutions involving significant spatial variations of the pump field along the waveguide cannot be represented accurately by this model. Secondly, we only include coupling between modes n and -n – we ignore the weaker coupling between modes with nearby n numbers. For example, coupling between n and -n+1 can be present and may become stronger as a function of pump wavelength. Tuning the pump wavelength and consequently the detuning Δ over a cavity FSR changes the mode pairs that are amplified (see Fig. <ref>a). Device parameters are summarized in Extended Data Table <ref>. We tune the output wavelength in the FM-OPO through small adjustments to the pump wavelength, allowing the output to span the full range of the gain spectrum. This tuning is predominantly influenced by the cavity dispersion, mirroring the characteristics observed in an unmodulated OPO <cit.>. We show the OPO tuning behavior in Fig. <ref>a. The blue traces correspond to measurements with an optical spectrum analyzer (OSA), whereas the gray lines present the predicted tuning behavior based on the waveguide dispersion. The FM-OPO exhibits a similar tuning pattern, as shown in Fig. <ref>b. Here, the comb clusters closely follow the expected tuning. By adjusting the pump wavelength by 20 pm, which equates to half of the cavity's free spectral range (FSR), we can access a bandwidth of approximately 70 nm for both the FM-OPO and OPO. We measure the spectra generated by the FM-OPO using an optical spectrum analyzer. We find that the device operates continuously and robustly in a nondegenerate mode at around n_osc ≈ 800. In this regime, we expect Eqn. (<ref>) to hold to high accuracy. We pump the device at 1554 nm with about 140 mW. We step the electro-optic coupling rate of the 5.8-GHz EO modulation between 0 and around 510 MHz by varying the RF power supplied to the chip. As shown in Fig. <ref>a, we observe a frequency comb develop. A number of additional comb clusters labeled (-n_osc+1, n_osc) and (-n_osc+1, n_osc+1) appear at a drive exceeding M/2π ≈ 360 MHz; these are described in more detail in the Methods section. We only plot the signal combs (blue detuned) and omit the idler combs (red detuned) for clarity; we provide full spectra in Extended Data Fig. <ref>. The measured spectral peak at around 1554 nm corresponds to a slight leakage of the original FH pump into the cavity. We count the number of generated lines within the 3 dB bandwidth of the flat-top and plot this in Fig. <ref>b. We observe good agreement between the data, the numerical solution of the coupled-mode equations <ref>-<ref> (blue shaded region), and the analytical expression for the FM-OPO given by equation <ref> (dashed line). At the highest EO modulation, driven with around 1.2 W of RF power, we observe over 1,000 comb lines oscillating together within -30 dB from the flat-top mean power (see Extended Data Fig. <ref>e for the full spectrum). The FM-OPO operates with high efficiency, converting around 34% of the input SH light into comb lines. First, the intracavity conversion efficiency is high, exceeding 90%, based on the pump depletion measurement in Fig. <ref>c.
We calculate it based on the contrast between the measured maxima and minima of the normalized SH power, visible when tuning the pump wavelength, as shown in the inset. Next, the intracavity comb is outcoupled with the cavity escape efficiency η_a ≈ 0.36, which limits the total efficiency of our device. Note that the depletion and the conversion efficiency do not depend on the RF drive strength. The output power of the FM-OPO resembles a typical behavior of an unmodulated OPO in Fig. <ref>d, where we observe a threshold of about 47 mW SH power and nonlinear coupling rate g/2π≈ 12 kHz, lower than the predicted 67 kHz, which we attribute to operating at non-perfect phase matching Δ k ≠ 0. § DISCUSSION We have successfully demonstrated a new type of integrated comb generator and established its fundamental operating principles. Our device demonstrates exceptional brightness, flatness, and efficiency while retaining robust operational dynamics. Given that our initial demonstration still has the potential for significant improvements in optical bandwidth by dispersion engineering, RF power consumption by resonant enhancement, and optical conversion efficiency by improved out-coupling, this breakthrough opens the door to a new class of deployable optical frequency combs. For the well-established application of these combs to the problems of spectroscopy, the versatility of the LN material platform allows for spectral coverage from blue light <cit.> into the mid-infrared <cit.>, enabling their use in fields such as medical diagnostics <cit.>, process control in agriculture, food production, and various industrial sectors <cit.>. Moreover, the potential of these devices as a source of flat-top combs makes them invaluable for applications from fiber communication systems to FMCW LiDAR <cit.>. § METHODS §.§ Device design and Fabrication We design our waveguide geometry to maximize the normalized efficiency and interaction rate. Extended Data Figure <ref>a shows a schematic of the periodically poled, X-cut LN waveguide. We chose the ridge height h = 300 nm, slab thickness s = 200 nm, top width w = 1.2 µm, and SiO_2 cladding thickness c = 700 nm. We find the guided modes by numerically solving Maxwell's equations with a finite-element solver (COMSOL). Extended Data Figure <ref>a shows the E_x field distribution for a mode at 1550 nm. Extended Data Figure <ref>b presents the bands of the effective index as a function of wavelength in our waveguide geometry. The blue line highlights the fundamental TE mode we use in our nonlinear waveguide and electro-optic modulator. The difference between the effective index at the fundamental and second harmonic frequency Δn_eff results in phase mismatch that we compensate for with periodic poling with a period of around . The LN waveguide forms a racetrack resonator with an intracavity directional coupler designed to close the resonator for the FH but ensure that the SH pump does not circulate. We call this design a “snail resonator". All of the waveguide bends are defined by Euler curves to minimize light scattering between straight and bent waveguide sections. We periodically pole the thin-film LN before the waveguide fabrication by patterning chromium finger electrodes on top of an insulating SiO_2 layer. Extended Data Figure <ref>c shows an SEM micrograph of a poling electrode. Next, we apply short pulses on the order of 1 kV to invert the ferroelectric domains and then verify the poling with a second harmonic microscope; Extended Data Fig. 
<ref>d shows a periodically poled film. In the second harmonic microscope picture, the black areas on the sides of the image correspond to the metal electrodes. The oblong shapes stretching between fingers correspond to the inverted LN domains. White regions at the center of the inverted domains correspond to the poling that extends throughout the full depth of the thin-film LN. We pattern the critical waveguide sections within the fully poled film regions by aligning the electron-beam lithography mask in the waveguide patterning step. Extended Data Fig. <ref> presents the fabrication process flow. We start with a thin-film lithium niobate on insulator chip (Extended Data Fig. <ref>a). We use a 500 nm LN film bonded to around 2 µm of SiO_2 on a silicon handle wafer (LNOI from NanoLN). Then, we deposit about 100 nm of silicon dioxide using plasma-enhanced chemical vapor deposition (PlasmaTherm Shuttlelock PECVD System), which serves as a protective layer and prevents leakage current during poling. We pattern 100 nm thick chromium electrodes (evaporated with a Kurt J. Lesker e-beam evaporator) on top of the insulating layer through electron-beam lithography (JEOL 6300-FS, 100-kV) and a liftoff process and apply short voltage pulses to invert the LN domains (Extended Data Fig. <ref>b). Next, we remove the chromium and SiO_2 layers with chromium etchant and buffered oxide etchant to obtain a poled thin-film LN chip (Extended Data Fig. <ref>c). We follow with waveguide patterning using JEOL 6300-FS electron-beam lithography and a hydrogen silsesquioxane mask (FOx-16). We transfer the mask to the LN material using dry etching with an argon ion mill (Extended Data Fig. <ref>d). After the waveguide fabrication, we pattern another liftoff mask with electron-beam lithography to define electrodes for our electro-optic modulators (Extended Data Fig. <ref>e). We use 200 nm of gold with a 15 nm chromium adhesion layer evaporated with the e-beam evaporator. We clad the entire chip with a layer of 700 nm thick SiO_2 deposited with high-density plasma chemical vapor deposition using a PlasmaTherm Versaline HDP CVD System (Extended Data Fig. <ref>f) and open vias to access the electrodes using inductively coupled plasma reactive ion etching (Extended Data Fig. <ref>g). We finish preparing the chip facets for light coupling by stealth dicing with a DISCO DFL7340 laser saw. §.§ Experimental Setup We characterize our devices' FM-OPO and OPO response using the setup in Extended Data Fig. <ref>. We color-code the paths intended for the various signals: light orange corresponds to the fundamental harmonic light (around 1500-1600 nm), the blue path corresponds to the second harmonic (around 750-800 nm), and green corresponds to the RF signals. We drive our devices with a tunable C-band laser (Santec TSL-550, 1480–1630 nm) that we amplify with an erbium-doped fiber amplifier (EDFA) to around 1 watt. The wavelength of the laser is controlled in a feedback loop using a wavelength meter (Bristol Instruments 621B-NIR). We control the optical power to the chip with a MEMS variable optical attenuator (from OZ Optics) and calibrate the power using a 5% tap and a power meter (Newport 918D-IR-OD3R). The light then passes through a fiber polarization controller (FPC) and couples to the chip facet through a lensed fiber. We deliver RF signals to the chip through a ground-signal-ground probe (GGB Industries Picoprobe 40A).
We use Keysight E8257D PSG Analog Signal Generator as an RF source and amplify it with a high-power amplifier (Mini-Circuits ZHL-5W-63-S+). We place a circulator before the chip to avoid any reflections into the source and terminate the reflected port after passing it through a 20 dB attenuator. The generated light is split between two paths with a 1000-nm short-pass dichroic mirror (Thorlabs DMSP1000). The two paths are connected to the InGaAs and Si avalanche photodiodes (Thorlabs APD410A and Thorlabs APD410) to detect the FH and SH power, respectively. VOAs precede both APDs to avoid saturation and increase the dynamic range of the measurements (HP 8156A and Thorlabs FW102C). Part of the FH path splits into an optical spectrum analyzer (Yokogawa AQ6370C) and a fast photodetector (New Focus 1554-B-50), which response is characterized by an RF spectrum analyzer (Rohde & Schwarz FSW26). §.§ Intracavity coupler characterization We characterize the performance of the intracavity coupler using a smaller resonator with a straight section length of around 2 mm. Extended Data Figure <ref>a shows transmission of such a cavity (depicted in Extended Data Fig. <ref>b), where we normalize the background to one. We observe the contrast of cavity modes changing across the used wavelength range due to the changes in the intrinsic and extrinsic quality factors. The former can be used to benchmark the coupler's performance. We observe a smooth transition from an undercoupled cavity at 1500 nm, through critical coupling at around 1550 nm, to an overcoupled cavity at 1580 nm. To verify this, we fit the quality factors of all the modes. An example is shown in Extended Data Fig. <ref>c, where we observe intrinsic quality factor Q_i ≈ 2.5 · 10^6 and extrinsic quality factor Q_e ≈ 0.8 · 10^6. Extended Data Figure <ref>d shows the intrinsic and extrinsic quality factors measured as a function of wavelength. We find that Q_i peaks at around 1580 nm, corresponding to the maximum transmission through the coupler. In the FM-OPO device, we use the same coupler but extend the device length to 10 mm, which results in the flattening of the Q_i dependence on wavelength. §.§ Dispersion measurement The second-order dispersion ζ_2 is a critical parameter of the FM-OPO because it determines the comb span and tunability. To quantify it, we modify the measurement setup by adding another 5% tap connected to a fiber Mach-Zehnder interferometer (MZI) and a photodetector (Newport 1623 Nanosecond Photodetector), see Extended Data Fig. <ref>a. We collect the MZI transmission and the cavity transmission while scanning the pump laser and calibrate the wavelength by unwrapping the phase in the MZI transmission spectrum. This method allows us to measure cavity mode location with precision on the order of single MHz. We measure the FM-OPO cavity spectrum using the feedline waveguide and extract the local FSR, as shown in Extended Data Fig. <ref>b. The relative position of cavity modes is defined by ω_n = ω_0 + ζ_1 × n + ζ_2/2 × n^2. We fit the FSR with respect to the mode number and extract the second-order dispersion parameter ζ_2/2π ≈ 11 kHz, which agrees with the theoretical prediction based on the finite-element simulation. §.§ Second-order optical nonlinearity characterization We characterize the nonlinear performance of our PPLN waveguides through a second harmonic generation measurement in a waveguide that passes through the same poled area of the chip as the FM-OPO PPLN waveguides. 
The experiment geometry is shown in Extended Data Fig. <ref>a, where the input to the chip is the same as in the general setup but the lensed fiber couples to the test waveguide. Two APDs collect the output light the same way as in the FM-OPO measurements. Extended Data Figure <ref>b shows an example of a measured SHG transfer function recorded while sweeping the C-band laser with a fixed power of around 200 µW on the chip. The waveguide length is about 7 mm, and the slight distortion of the sinc function results from small waveguide nonuniformities along its length. Extended Data Figure <ref>c shows the peak SH power on-chip recorded as a function of the on-chip pump power at the FH frequency. The inset shows a bright SH spot scattered at the end of an on-chip LN waveguide and lensed fiber tip. We fit a quadratic polynomial to the data to extract the normalized efficiency η that defines the relationship between the SH and pump power: P_SH = η P_FH^2 L^2, where L is the length of the PPLN waveguide, and P_SH and P_FH correspond to the power of the second harmonic and fundamental, respectively. We extract a normalized efficiency of around 1,500 %/(W·cm^2), corresponding to an interaction rate of around g/2π ≈ 67 kHz, which agrees with our theory. The measured FM-OPO operates away from perfect quasi-phase matching, Δk ≠ 0, which reduces the interaction rate to around 12 kHz. §.§ Electro-optic characterization To characterize the electro-optic performance of the FM-OPO resonator, we drive the cavity with RF and probe the transmission spectra of the feedline waveguide as shown in Extended Data Fig. <ref>a. We use the same input chain as in the FM-OPO measurements, except for the RF amplifier. We collect the light using an InGaAs APD paired with a VOA. The cavity transmission with no RF drive reveals the usual Lorentzian lineshape (blue points in Extended Data Fig. <ref>b), which we fit to extract an intrinsic quality factor of around Q_i ≈ 1·10^6. However, the lineshape becomes distorted when the RF modulation is applied to the cavity on resonance with the local FSR. We model it by simplifying the full FM-OPO cavity coupled-mode equation <ref> and adding an FH drive to one of the cavity modes, n = 0. In the small optical power limit and in the absence of the SH drive, we can write the model as: ȧ_n = ( i(Δ - n^2 ζ_2/2) - κ_a,n/2 ) a_n - iM( a_n-1 + a_n+1 ) + i√(κ_a^(e) P_FH/(ħω_a)) δ_n,0. Here, Δ is the laser detuning, and δ_n,0 is a Kronecker delta. We model the EO-modulated cavity response by solving this system of equations for 50 modes in steady state: 0 = M A + B, where M is the matrix including the pump detuning, the loss rates of the cavity modes, and the electro-optic coupling, and B(n ≠ 0) = 0. We find A = -M^-1 B. The total output power of the cavity consists of the laser pump interfering with the intracavity field and a sum of all the generated sidebands: |a_out|^2 = |a_in - i√(κ_0^(e)) a_0|^2 + ∑_n≠0 κ_n^(e) |a_n|^2. We evaluate this model numerically to fit the transmission lineshapes of the modulated cavity for various peak voltage values. The orange points in Extended Data Fig. <ref>b correspond to one example of data collected for the cavity modulated with a peak voltage of around V_P ≈ 4.5 V. The red line corresponds to the fit. When fitting the modulated lineshapes, we fix the extrinsic and intrinsic quality factors, as measured for the unmodulated line, and extract only the electro-optic coupling M. Then, we plot the measured values of the EO coupling M/2π as a function of peak voltage in Extended Data Fig.
<ref>c and fit a line to find the dependence of the EO coupling on the peak voltage. We measure M/2π≈ 60 MHz/V. §.§ RF and optical spectra of the FM-OPO We examine the FM-OPO combs we produce using a high-speed photodetector and an RF spectrum analyzer. Interestingly, a single FM-OPO, as defined by equations (derived in SI): a_i(t) = A_ie^-iω_i te^iΓsin(Ω t) e^iω_pt/2 a_s(t) = A_se^-iω_s te^-iΓsin(Ω t)e^iω_pt/2, should not create any detectable RF tones when evaluated with a fast photodetector since a pure phase or frequency modulation will not be detected on a photodiode measuring intensity. However, we observe peaks in the RF spectra for the FM-OPOs shown in Fig. <ref>a that are spaced by Ω. These are displayed in Extended Data Fig. <ref>a, and we provide a closer look at the first sidebands in Extended Data Fig. <ref>b. We find that even a minor dependence of the cavity's external coupler transmission on wavelength can lead to a noticeable conversion from frequency modulation to intensity modulation. To confirm this, we estimate the expected result of a high-speed photodiode measurement of signal and idler combs produced following equations <ref>-<ref>, under the influence of a wavelength-dependent coupler. We determine the external coupling as a function of frequency for our cavity from the same measurement we used for dispersion characterization. The average change in the external coupling across the 1500-1600 nm measurement bandwidth is approximately ∂κ_a^(e)/∂ω≈ -5·10^-6. The calculated RF spectra (Extended Data Fig. <ref>c) qualitatively match our experimental observations, with discrepancies occurring at higher electro-optic modulation rates where the single FM-OPO approximation is no longer applicable. For each RF spectrum, we also present the full optical spectra (including signal and idler) in Extended Data Fig. <ref>d. We plot the spectrum with the largest observed coverage, measured at around 1.2 W of RF power in Extended Data Fig. <ref>e. Note that for a particular pump wavelength, there are multiple possible modes of oscillation corresponding to the coupling between different mode pairs (-n_osc, n_osc), (-n_osc-1, n_osc), (-n_osc-1, n_osc-1), and so on. For the non-modulated OPO operation at the power levels we experimentally characterized, we observe only one oscillating mode at a fixed pump wavelength (i.e., (-n_osc, n_osc)), which we attribute to optimal phase matching. Adjusting the pump results in switching between different mode pairs with a periodicity of 1/2 FSR. However, in the presence of sufficiently strong modulation, clusters of modes arise in the FM-OPO spectrum corresponding to these secondary mode pairs being excited. §.§ FM-OPO tuning with laser and RF detuning We experimentally analyze the behavior of FM-OPO comb properties with respect to the RF drive parameters. First, we step the pump laser across one FSR of the cavity (Extended Data Fig. <ref>a) and record the OSA spectra for various electro-optic coupling rates (Extended Data Fig. <ref>b-d). We can calibrate the pump wavelength, as shown in Extended Data Fig. <ref>a with respect to the cavity modes by looking at the slight leakage of the original FH pump visible as a faint line at around 1554 nm signal wavelength in all the colormaps. In this study, we operate in a nondegenerate regime and observe a pure OPO in Extended Data Fig. <ref>b. Next, by switching on a moderate RF modulation, we achieve M/2π ≈ 100 MHz in Extended Data Fig. 
<ref>c and observe comb formation and higher-order FM-OPO comb development. Finally, at high modulation of around M/2π ≈ 510 MHz, we observe that the combs originating from different OPO modes (-n_osc, n_osc), (-n_osc+1, n_osc), and (-n_osc+1, n_osc+1) start to merge. We note that the areas with suppressed FM-OPO intensity result from the waveguide mode crossings between the fundamental TE mode and higher order modes that effectively reduce the quality factors in that region. Next, we analyze the FM-OPO response to the RF detuning δ, defined schematically in Extended Data Fig. <ref>e. We measure this by pumping the device at around 1545 nm and using M/2π≈ 510 MHz. For most measurements, we fix the detuning to δ = 0 so that the RF frequency is on resonance with the cavity FSR near degeneracy Ω = ζ_1 to maximize the comb span and output optical power. If the RF drive is detuned, we observe comb shrinking, as shown in Extended Data Fig. <ref>f, and the total output power decreases, as shown in Extended Data Fig. <ref>g. §.§ Uncertainty analysis The measurement error of the comb count in Fig. <ref>b is given by the standard deviation of 51 measurements (41 measurements for the highest RF power). The shaded region corresponds to the coupled-mode-equation simulation, from which we extract the half-widths of the simulated combs. We assume uncertainty of ±1 mode on each side of the signal and idler combs. We calculate the uncertainty of the measured depletion of the SH pump (Fig. <ref>c) and the measured OPO signal (Fig. <ref>d) based on the standard deviation of the SH and FH signals over the measurement time. We measure the FM-OPO resonator's average intrinsic and total quality factors by averaging the results of Lorentzian fits over around 20 nm of the spectrum, where we observe the comb formation. The standard deviation gives their uncertainties. We infer the uncertainty of κ_b based on the precision of our estimation of the group index (10^-3, based on the finite-element solver). We calculate the cavity escape efficiency uncertainty based on the errors of the average quality factors. The uncertainties of the cavity free spectral range, cavity dispersion, peak waveguide nonlinear efficiency, and electro-optically induced mode-coupling rate correspond to the standard errors of the fit parameters extracted from the least-square fitting. Uncertainties of the nonlinear interaction rate and the SH power threshold of the OPO are calculated based on the standard errors of a nonlinear fit. We calculate the internal and total OPO efficiency errors based on the cavity escape efficiency and SH depletion uncertainties. § DATA AVAILABILITY The data sets generated during and/or analyzed during this study are available from the corresponding author on request. § ACKNOWLEDGEMENTS This work was supported by U.S. government through the Defense Advanced Research Projects Agency Young Faculty Award and Director's Fellowship (YFA, Grant No. D19AP00040), LUMOS program (Grant No. HR0011-20-2-0046), the U.S. Department of Energy (Grant No. DE-AC02-76SF00515) and Q-NEXT NQI Center, and the U.S. Air Force Office of Scientific Research provided a MURI grant (Grant No. FA9550-17-1-0002). We thank NTT Research for their financial and technical support. H.S.S. acknowledges support from the Urbanek Family Fellowship, and V.A. was partially supported by the Stanford Q-Farm Bloch Fellowship Program and the Max Planck Society Sabbatical Fellowship Award. 
This work was also performed at the Stanford Nano Shared Facilities (SNSF), supported by the National Science Foundation under award ECCS-2026822. We also acknowledge the Q-NEXT DOE NQI Center and the David and Lucille Packard Fellowship for their support. D.D. and A.Y.H. acknowledge support from the NSF GRFP (No. DGE-1656518). H.S.S. and V.A. thank Kevin Multani and Christopher Sarabalis for discussions and technical support. A.H.S.-N. thanks Joseph M. Kahn and Stephen E. Harris for useful discussions. § AUTHOR CONTRIBUTIONS A.H.S.-N. and H.S.S. conceived the device and H.S.S. designed the photonic integrated circuit. H.S.S., C.L., and M.J. developed essential components of the photonic circuit. H.S.S., T.P., and A.Y.H. fabricated the device. H.S.S., V.A., and O.T.C. developed the fabrication process. M.M.F. and A.H.S.-N. provided experimental and theoretical support. H.S.S., T.P., and D.J.D. performed the experiments. H.S.S., A.Y.H., T.P., and D.J.D. analyzed the data. H.S.S. and A.H.S.-N. wrote the manuscript. H.S.S., V.A., and A.H.S.-N. developed the experiment. H.S.S., D.J.D., and A.H.S.-N. developed the numerical and analytical models. A.H.S.-N. supervised all efforts. § COMPETING INTERESTS A.H.S.-N., H.S.S., and A.Y.H. are inventors of a patent application that covers the concept and implementation of the frequency-modulated optical parametric oscillator and its applications. The remaining authors declare no competing interests. § SUPPLEMENTARY INFORMATION § OPTICAL PARAMETRIC OSCILLATOR WITHOUT MODULATION We model the doubly-resonant optical parametric oscillator (OPO) based on the Hamiltonian of the system. We separate it into the unperturbed and interaction parts, H_0 and H_PA: H_0 = ∑_n ω(n) a_n^∗ a_n + ω_b b^∗ b, H_PA = g ∑_n b a_n^∗ a_-n^∗ + c.c., where a_n and b correspond to the amplitudes of the n-th fundamental harmonic (FH) mode around the OPO degeneracy point n=0 and of the second harmonic (SH) pump mode, ω(n) and ω_b correspond to the frequency of the n-th FH mode and of the SH pump, and g is the χ^(2) nonlinear coupling rate. The coupled mode equations are given by: ȧ_n = -(iω(n) + κ/2) a_n - 2igb a_-n^∗, ḃ = -(iω_b + κ_b/2) b - ig ∑_n a_n a_-n. We can use this model to calculate the threshold and analyze the above-threshold behavior by assuming that b is driven by a classical field at the pump frequency ω_p. This leads to a drive term H_d = √(κ_b) β_in e^-iω_p t b^∗ + c.c. For a doubly-resonant OPO, the loss of the b field is dominated by the extrinsic coupler, κ_b^(e) ≈ κ_b. We remove the time dependence by putting b in the frame of ω_p (which we assume is the same as ω_b) and all a_n in the frame of ω_p/2, to maintain the time-independence of H_PA. The resulting relevant parts of the Hamiltonian are: H_0 = ∑_n (ω(n) - ω_p/2) a_n^∗ a_n, H_PA = g ∑_n b a_n^∗ a_-n^∗ + c.c., and the coupled mode equations turn into: ȧ_n = -(iω(n) - iω_p/2 + κ/2) a_n - 2igb a_-n^∗, ḃ = -κ_b/2 b - ig ∑_n a_n a_-n + i√(κ_b) β_in. §.§ Doubly-resonant OPO Threshold To find the threshold, we assume that the a_n's are all equal to 0, so b obtains some complex field amplitude, and we obtain a system of two equations for each mode pair (-n,+n). We also use the cavity dispersion: ω(n) = ω_0 + ζ_1 n + ζ_2 n^2/2, where ζ_1 and ζ_2 correspond to the first- and second-order dispersion. This approach applied to equation <ref> yields: ȧ_n = -[ i( ω_0 + ζ_1 n + ζ_2 n^2/2 ) - iω_p/2 + κ/2 ] a_n - 2igb a_-n^∗, ȧ^∗_-n = -[ -i( ω_0 + ζ_1 (-n) + ζ_2 (-n)^2/2 ) + iω_p/2 + κ/2 ] a^∗_-n + 2igb^∗ a_n. From this system of equations, we have a coupled system involving two modes a_n and a_-n.
We can write the system of equations in matrix form by treating (a_n, a_-n^∗) as a complex vector, da/dt = 𝐌𝐚, where 𝐚 = [ a_n; a_-n^∗ ] is the complex vector of amplitudes, and 𝐌 is the matrix: 𝐌 = [ -[ i( -Δ + ζ_1 n + ζ_2 n^2/2 ) + κ/2 ]   -2igb ;   2igb^∗   -[ -i( -Δ + ζ_1 (-n) + ζ_2 (-n)^2/2 ) + κ/2 ] ], where we introduced the pump detuning defined as Δ = ω_p/2 - ω_0. This equation describes the evolution of the complex amplitudes a_n and a_-n^∗ in terms of a linear transformation defined by the matrix 𝐌. To find the stability conditions, one has to calculate the eigenvalues of the matrix 𝐌 and find the conditions for the real parts of these eigenvalues to be negative. We can compute its eigenvalues by solving the characteristic equation det(𝐌 - λ𝐈) = 0, which leads to the following quadratic equation: [ λ + i( -Δ + ζ_1 n + ζ_2 n^2/2 ) + κ/2 ][ λ - i( -Δ + ζ_1 (-n) + ζ_2 (-n)^2/2 ) + κ/2 ] - 4g^2|b|^2 = 0. The corresponding stability criterion is: 16 g^2 |b|^2 > κ^2 + (2Δ - n^2 ζ_2)^2. The threshold of the OPO is minimized when the pump detuning perfectly compensates for the second-order dispersion for the n-th pair of modes, Δ = n^2 ζ_2/2. In that case, we can substitute a steady-state solution for b into the stability condition and see that the threshold of the doubly-resonant OPO is given by: P_th = ħω_p κ_a^2 κ_b/(64 g^2). §.§ Above-threshold behavior To find the relation between the pump power and the output power of the OPO, we solve equations <ref>-<ref> in a steady state. Above the threshold, one pair of signal-idler modes will dominate the dynamics of the system, so we neglect the other FH modes: a_n = 4igb a_-n^∗/κ_a, b = ( -4ig a_n a_-n + 2i√(κ_b) β_in )/κ_b. Substitution of equation <ref> into <ref> yields: 0 = (8g^2/κ_b) a_n |a_-n|^2 - (4g/√(κ_b)) β_in a_-n^∗ + (κ_a/2) a_n. We can write an analogous equation for the signal and idler modes. Assuming that the amplitudes are real and the loss rates for the signal and idler modes are the same, we find the amplitudes of the signal and idler modes as |a_n|^2 = |a_-n|^2 = (√(κ_b)/(2g)) β_in - κ_a κ_b/(16g^2). The total output power of the doubly-resonant OPO is: P_out = 4 η_a P_th ( √(P_in/P_th) - 1 ), where η_a = κ_a^(e)/κ_a is the cavity extraction efficiency for the FH modes and the input power is defined as P_in = ħω_p |β_in|^2. We can relate the OPO efficiency to the pump depletion by looking at the b amplitude in a steady state (equation <ref>). By substituting the solutions for the FH modes, we see that b = iκ_a/4g and use the input-output relations to find the output amplitude: b_out = b_in + i√(κ_b^(e)) b = b_in( 1 - 2√(P_th)/|b_in| ). The depletion of the pump power is: D = 4 (P_th/P_in)( √(P_in/P_th) - 1 ). The OPO efficiency ρ is proportional to the depletion and the cavity extraction efficiency: ρ = η_a D. We use this relationship to find the efficiency of our OPO and frequency comb generator. Note that the doubly-resonant OPO can achieve high efficiency by simply increasing the coupling rate to the cavity, achieving >50% efficiency for κ_a^(e) > κ_a^(i). §.§ Approximate single-mode model of a propagating pump field Our goal is to represent the propagating SH field and its dynamics approximately as the dynamics of a single b mode. We note that this model cannot capture complex spatial variations in the pump field. Let's first consider just the b mode, ignoring the other a_n modes of the system. We will need to define an effective loss rate of b. The input and output fluxes give the number of photons within our waveguide in the steady state.
After some time T>τ, where τ is the amount of time it takes the field to propagate across the waveguide, the number of photons in that region is given by |b(T>τ)|^2 = ∫_0^T |β_in|^2 dt - ∫_τ^T |β_out|^2 dt, where τ is the cavity round-trip time. If we neglect the propagation loss, we also see that β_in = β_out and: |b(T>τ)|^2 = ∫_0^τ |β_in|^2 dt = |β_in|^2 τ. On the other hand, solving equation <ref> in the steady-state and low-power approximation yields: b = -2β_in/√(κ_b^(e)). We combine equations <ref>-<ref> to see that the effective loss rate of the b mode is given by κ_b = 4/τ = 4 v_g / L, where v_g is the group velocity of the pump and L is the total length over which the light propagates, here equivalent to the resonator length. § FREQUENCY-MODULATED OPTICAL PARAMETRIC OSCILLATOR To include the effects of the intracavity phase modulator, we need to include an additional term in the Hamiltonian: H_mod = 2M cos(Ω t) ∑_σ=s,i ∑_m,m' a^∗_σ,m' a_σ,m + c.c., where M is the electro-optic modulation rate, and Ω corresponds to the RF frequency applied to the modulator. Applying the RWA in the new frame leads to: H_mod = M ∑_σ=s,i ∑_m ã^∗_σ,m ã_σ,m+1 + c.c. The resulting Hamiltonian, including both the parametric gain and the modulation, is then: H = H_0 + H_PA + H_mod. The coupled mode equations in the text are generated from this classical Hamiltonian. §.§ Relabeling the modes according to offset from NDOPO signal and idler Once we have a nondegenerate oscillating solution for the equations above, the signal and idler oscillations would be at a specific value of ± n_osc. We will also denote the frequencies of these oscillations as ω_s = ω(+n_osc) and ω_i = ω(-n_osc). The oscillating modes are then a_s,0 ≡ a_+n_osc, a_i,0 ≡ a_-n_osc. We can count out from these oscillating modes with a new index variable, m: a_s,m ≡ a_n_osc+m, a_i,m ≡ a_-(n_osc+m), with frequencies ω_s(m) = ω(n_osc+m), ω_i(m) = ω(-n_osc-m). These can be written more explicitly using the original definition of ω(n): ω_s(m) = ω_0 + ζ_1 (n_osc+m) + ζ_2/2 (n_osc+m)^2, ω_i(m) = ω_0 + ζ_1 (-n_osc-m) + ζ_2/2 (-n_osc-m)^2. Now we rewrite the Hamiltonians in terms of these new frequencies and the newly defined modes a_s,m and a_i,m. For example, the zeroth order Hamiltonian H_0 would be: H_0 = ∑_m [ (ω_s(m)-ω_p/2) a_s,m^∗ a_s,m + (ω_i(m)-ω_p/2) a_i,m^∗ a_i,m ]. Similarly, the parametric amplification Hamiltonian H_PA becomes H_PA = g ∑_m ( b a_s,m^∗ a_i,m^∗ + c.c. ). This relabeling puts the oscillating modes at the center m=0 and allows us to examine the dynamics in their vicinity. The indices m are the offsets from these central modes. §.§ Calculating a recurrence relation for the modal amplitudes First, we consider the equation for the b mode: ḃ = -κ_b/2 b - ig ∑_m ã_s,m ã_i,m + i√(κ_b) b_in. At steady state, we expect the amplitude of the intracavity pump field to become: b = -(2ig/κ_b) ∑_m ã_s,m ã_i,m + (2i/√(κ_b)) b_in. For the idler and signal modes, we then derive the equations: dã_i,m/dt = -κ_a/2 ã_i,m - i( (n_osc ζ_2 m + m^2 ζ_2/2) ã_i,m + g b ã_s,m^∗ + M(ã_i,m+1 + ã_i,m-1) ), dã_s,m/dt = -κ_a/2 ã_s,m - i( (n_osc ζ_2 m + m^2 ζ_2/2) ã_s,m + g b ã_i,m^∗ + M(ã_s,m+1 + ã_s,m-1) ).
Now we can find relations for the steady-state amplitudes (assuming we can choose the phases so that all the amplitudes are real) – the imaginary parts of the equations above are given by: 0 = (n_osc ζ_2 m + m^2 ζ_2/2) ã_i,m + M(ã_i,m+1 + ã_i,m-1), 0 = (n_osc ζ_2 m + m^2 ζ_2/2) ã_s,m + M(ã_s,m+1 + ã_s,m-1). Notice that we used Re[g b ã_i,m^∗] = 0, i.e., if b_in is chosen to be real, b will be imaginary. §.§ Verifying that the FM solution satisfies the equations of motion We show that a solution of the form ã_i,m = A_i J_m(Γ), ã_s,m = A_s J_m(Γ) approximately satisfies the steady-state dynamics of the system derived above. Substituting the relations into equations (<ref>) and (<ref>), we find: 0 = -(n_osc ζ_2 m + m^2 ζ_2/2) A_i J_m(Γ) + M( A_i J_m+1(Γ) + A_i J_m-1(Γ) ). We now use the Bessel function recurrence relation to simplify this expression. The recurrence relation for Bessel functions is: (2m/x) J_m(x) = J_m-1(x) + J_m+1(x), which leads to 0 = -(n_osc ζ_2 m + m^2 ζ_2/2) + M · 2m/Γ. It is not possible to satisfy this relation exactly because of the residual dependence on m inside the equation, but we can make it approximately true by setting Γ = 2M/(n_osc ζ_2). The bandwidth is then given by B.W. ≡ 2ΓΩ = 4MΩ/(n_osc ζ_2). We have solved the comb dynamics in the frequency domain. To find the time-domain solution, we use the Jacobi-Anger relation e^iz sinθ = ∑_m=-∞^∞ J_m(z) e^imθ and find that the steady-state solution of the system appears as swept signal and idler tones. These would be represented in the form a_i(t) = A_i e^-iω_i t e^iΓ sin(Ω t) e^iω_p t/2, a_s(t) = A_s e^-iω_s t e^-iΓ sin(Ω t) e^iω_p t/2. § NUMERICAL MODELING §.§ Quasi-static approximation We solve the coupled mode equations for 2,500 modes. To increase the numerical efficiency of the ODE solver, we notice that the extrinsic coupling rate of the b amplitude is much faster than any other rate in the system and introduce a quasi-static approximation. During each step of the ODE solver, we assume that the SH mode is in a steady state, which yields: ȧ_n = [ i( Δ + nδ - n^2 ζ_2/2 ) - κ_a,n/2 ] a_n - iM( a_n-1 + a_n+1 ) - 2ig a_-n^∗ b, b = ( -2ig ∑_n a_n a_-n - i√(κ_b^(e)) β_in )/κ_b^(e). These are the equations we solve in the main text to predict the shape of the optical frequency combs generated in our device and their total comb count.
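To make the quasi-static scheme concrete, the short Python sketch below integrates a truncated version of these coupled-mode equations, eliminating b at every right-hand-side evaluation as described above. It is an illustrative reconstruction rather than the authors' code: the mode basis and the integration time are heavily reduced so that it runs quickly, the loss rates, pump power, detuning, and the EO rate M are rough assumed values chosen only to be broadly consistent with the numbers quoted in the main text, and the sign conventions follow the main-text coupled-mode equations.

```python
# Hedged, illustrative sketch of the quasi-static FM-OPO integration
# (not the authors' implementation; parameter values are assumptions).
import numpy as np
from scipy.integrate import solve_ivp

twopi = 2.0 * np.pi
hbar = 1.054571817e-34

N = 401                              # truncated FH mode basis, n = -200 ... 200
n = np.arange(N) - N // 2

kappa_a = twopi * 270e6              # FH loss rate, ~omega/Q for Q ~ 7e5 (assumed)
kappa_b = twopi * 4e9                # effective SH loss rate, ~4*v_g/L (assumed)
zeta2 = twopi * 11e3                 # second-order dispersion (from the text)
g = twopi * 12e3                     # nonlinear coupling rate (from the text)
M = twopi * 5e6                      # EO coupling, reduced so the comb fits the basis
Delta = 0.5 * zeta2 * 100**2         # detuning selecting the pair n = +/-100
delta = 0.0                          # RF drive on resonance with the FSR
omega_p = twopi * 384e12             # SH pump frequency (~780 nm)
beta_in = np.sqrt(100e-3 / (hbar * omega_p))   # 100 mW SH drive, sqrt(photons/s)

def rhs(t, y):
    a = y[:N] + 1j * y[N:]
    # quasi-static elimination of the SH mode at every solver step
    b = (-2j * g * np.sum(a * a[::-1]) + 2j * np.sqrt(kappa_b) * beta_in) / kappa_b
    lin = (1j * (Delta + n * delta - 0.5 * zeta2 * n**2) - 0.5 * kappa_a) * a
    eo = -1j * M * (np.roll(a, 1) + np.roll(a, -1))   # a_{n-1} + a_{n+1}; wraps at edges
    pa = -2j * g * np.conj(a[::-1]) * b               # parametric term with a_{-n}^*
    da = lin + eo + pa
    return np.concatenate([da.real, da.imag])

rng = np.random.default_rng(0)
a0 = 1e-2 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # weak noise seed
y0 = np.concatenate([a0.real, a0.imag])

# Short run purely for illustration; a converged comb needs longer integration
# and many more modes (the text uses 2,500).
sol = solve_ivp(rhs, (0.0, 2e-7), y0, method="RK45", max_step=2e-10)
a_fin = sol.y[:N, -1] + 1j * sol.y[N:, -1]
print("predicted modulation index 2M/(n_osc*zeta_2):", 2 * M / (100 * zeta2))
print("modes above 1% of the peak power:",
      int(np.sum(np.abs(a_fin) ** 2 > 0.01 * np.max(np.abs(a_fin) ** 2))))
```

The printed modulation index can be compared with the half-width of the simulated signal and idler combs, mirroring the Γ = 2M/(n_osc ζ_2) estimate derived above.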
http://arxiv.org/abs/2307.04442v1
20230710094930
Automatic diagnosis of knee osteoarthritis severity using Swin transformer
[ "Aymen Sekhri", "Marouane Tliba", "Mohamed Amine Kerkouri", "Yassine Nasser", "Aladine Chetouani", "Alessandro Bruno", "Rachid Jennane" ]
cs.CV
[ "cs.CV" ]
[email protected] Laboratoire PRISME, université d'Orléans 12 Rue de Blois Orléans France 45100 [email protected] Laboratoire PRISME, université d'Orléans 12 Rue de Blois Orléans France 45100 [email protected] Laboratoire PRISME, université d'Orléans 12 Rue de Blois Orléans France 45100 [email protected] Laboratoire PRISME, université d'Orléans 12 Rue de Blois Orléans France 45100 [email protected] Laboratoire PRISME, université d'Orléans 12 Rue de Blois Orléans France 45067 [email protected] IULM AI Lab, IULM University Via Carlo Bo 1 Milan Italy 20143 [email protected] IDP laboratory, université d'Orléans xxx Orleans France 45067 Knee osteoarthritis (KOA) is a widespread condition that can cause chronic pain and stiffness in the knee joint. Early detection and diagnosis are crucial for successful clinical intervention and management to prevent severe complications, such as loss of mobility. In this paper, we propose an automated approach that employs the Swin Transformer to predict the severity of KOA. Our model uses publicly available radiographic datasets with Kellgren and Lawrence scores to enable early detection and severity assessment. To improve the accuracy of our model, we employ a multi-prediction head architecture that utilizes multi-layer perceptron classifiers. Additionally, we introduce a novel training approach that reduces the data drift between multiple datasets to ensure the generalization ability of the model. The results of our experiments demonstrate the effectiveness and feasibility of our approach in predicting KOA severity accurately. Automatic diagnosis of knee osteoarthritis severity using Swin transformer Rachid Jennane ========================================================================== § INTRODUCTION Knee osteoarthritis (KOA) is a degenerative disease of the knee joint and the most common form of arthritis. It affects almost half of the population aged 65 years or older worldwide, causing pain, mobility limitation, and impaired quality of life. KOA is caused by a breakdown of the knee articular cartilage and changes in the bone micro-architecture <cit.>. Joint space narrowing, osteophyte formation, and sclerosis are KOA's most visually relevant pathological features that can be visualized with radiographs. Although various imaging techniques such as magnetic resonance, computed tomography, and ultrasound have been introduced to diagnose osteoarthritis, radiography remains the most widely used method for initial diagnosis due to its accessibility, low cost, and widespread use. Kellgren and Lawrence (KL) classified KOA severity into five stages based on radiographic features, from KL-G0 for healthy cases to KL-G4 for severe cases <cit.> (see Fig. <ref>). However, KOA changes gradually, so the evaluation into different stages is often subjective and depends on the operator. This subjectivity makes automatic KOA diagnosis a difficult task. In addition, the high similarity between the X-ray images increases the challenge of achieving an accurate diagnosis. Several deep learning-based methods have been proposed for medical imaging applications <cit.>, and many to diagnose KOA in recent years. In <cit.>, Antony et al. employed Convolutional Neural Networks (CNNs) to quantify the severity of KOA from radiographic images. Their method is based on two main steps: first, automatically locating the knee joints using a Fully Convolutional Neural Network (FCN), and then classifying the knee joint images using a second CNN.
In addition, to improve the quantification of KOA, they combined the classification loss with the regression loss to consider the continuous aspect of the disease progression. Tuilpin et al. <cit.> presented a Siamese CNN network for KL grade prediction. They used three models with different random seeds and combined their outputs with a softmax layer to obtain the final KL grade. Chen et al. <cit.> proposed an ordinal loss for fine-tuning various CNN models to classify KOA severity. They leveraged the ordinal nature of the knee KL grading system and penalized incorrect classifications more by increasing the distance between the real and predicted KL grades. Nasser et al. <cit.> proposed a Discriminative Regularized Auto-Encoder (DRAE) for early KOA prediction using X-ray images. The proposed model uses a discriminative penalty term and the traditional AE reconstruction cost function to enhance the separability of the features learned from different classes. The aim was to boost the recognition system's performance by minimizing the inter-class variance and maximizing the intra-class distance. Recently, transformers have shown promising results in various medical imaging tasks <cit.>. Wang et al. <cit.> proposed a novel data augmentation method for early detection of KOA using a Vision Transformer model. The method involves shuffling the position embedding of non-ROI patches and exchanging the ROI patches with other images. The authors also used a hybrid loss function that combines label smoothing and cross-entropy to improve the model's generalization capability and avoid over-fitting. Several important studies <cit.>,<cit.>, <cit.>, <cit.>, <cit.>, used two multi-center databases, the Osteoarthritis Initiative (OAI, <https://nda.nih.gov/oai/>) and the Multicenter Osteoarthritis Study (MOST, <https://most.ucsf.edu/>) by not accounting for the data drift problem. The latter occurs when a machine learning model trained on one dataset lowers its performance when tested on another set of data. Subsequently, data drift causes poor generalization and performance degradation. In this work, we first investigate the use of the Swin transformer in predicting KOA severity from radiographic images. In particular, the Swin transformer is the core network that extracts high-level features and detects KOA-induced changes. Second, we introduce a multi-predictive classification header to address the high similarity problem between different KOA grades. In addition, to reduce the data drift problems between the data in the two databases, OAI and MOST, we tested several learning strategies to find the one providing the model with better generalization capabilities and balanced classification results. The remainder of the paper is organized as follows: the proposed method is described in Section <ref>. Next, the obtained experimental results are presented in Section <ref>. Finally, the conclusions and outlooks are given in Section <ref>. § PROPOSED METHOD The method proposed in this paper consists of two parts: 1) a Swin transformer as a features extractor and 2) a multi-prediction head network as a classifier. The schematic illustration of our proposed network is presented in Figure <ref>. §.§ Swin Transformer The Swin Transformer <cit.> is a state-of-the-art model that has been specifically designed to address the challenges of applying transformer models in the visual domain. 
While transformers have been widely successful in natural language processing, they have been less effective in computer vision due to the unique characteristics of visual data. The Swin Transformer proposes a novel architecture that leverages hierarchical feature maps and shift-based windows to improve the efficiency and performance of the model. With its innovative approach, the Swin Transformer has emerged as one of the most efficient and effective transformer models for visual applications. The model is divided into four stages, where the features are hierarchically extracted in each stage. The input image with dimensions H × W × 3 is divided into H/4×W/4 non-overlapping patches as tokens of size 4× 4 × 3 = 48. These tokens are then passed through the first stage, consisting of a linear embedding layer and two Swin Transformer blocks. The linear embedding layer projects the tokens into a higher-dimensional space denoted by C; after that, in the first Swin Transformer block, the multi-headed window self-attention mechanism (W-MSA) is employed. This mechanism computes self-attention only between patches within the same window, where each window contains M× M patches. The second Swin Transformer block utilizes shifted window multi-headed self-attention (SW-MSA), in which the partitioning windows are shifted by (⌊M/2⌋, ⌊M/2⌋) patches with respect to the standard partitioning windows used in the previous block. This approach aims to create more relationships between neighboring patches previously located in different windows and reduce the computational complexity of the global MSA module used in vision transformer. In the second stage, a patch merging layer is applied to group each 2× 2 neighboring patches into a single patch of length 4C, thus reducing the number of patches to H/8×W/8. These patches are then linearly projected to a dimension of size 2C and passed to two Swin Transformer blocks as in the first stage. This process is repeated in the third stage, using 18 Swin Transformer blocks to produce H/16×W/16 patches of length 4C. Finally, in the fourth stage, two Swin Transformer blocks are used to produce H/32×W/32 of length 8C. These consecutive stages jointly produced a hierarchical representation like those of typical convolutional networks. §.§ Multi-Prediction Head Network The main task of our designed model is to be able to predict the KOA severity grade. This presents a case of a multi-class classification task. Traditionally this is solved by using a single MLP classification head with 5 outputs activated by a softmax function. The complex nature of X-ray images imposes a high similarity between the images of adjacent KL Grades as shown in Figure <ref>. To address this issue, we decompose the task into multiple binary classification tasks. We use 5 MLP networks, each specializing in predicting one KL-Grade. This enhances the model's ability to extract and filter a rich representation for each class. Let f: X → Z be our feature extractor, where X and Z are the input and latent spaces, respectively. x represents the input image and y their corresponding one hot encoding label. The predictive label ŷ_i at the head classifier MLP_i is defined as: ŷ_i = MLP_i(f(x)) The final predictive label ŷ is computed then as follows: ŷ = argmax(⋃_i = 0 ^ 4 ŷ_i) where i ∈{0 … 4} represents the KL grades. 
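The sketch below illustrates this multi-prediction-head design in PyTorch. It is not the authors' released implementation: the backbone is instantiated from the timm library as an assumed dependency, and the head layer sizes are placeholders (the exact encoder and MLP dimensions are listed in the next paragraph). Each head emits a single logit, and the final KL grade is the argmax over the five per-grade probabilities, mirroring the two equations above.

```python
# Illustrative sketch (not the authors' code) of a Swin feature extractor f
# followed by five per-grade binary MLP heads; the predicted KL grade is the
# argmax over the per-grade outputs.
import torch
import torch.nn as nn
import timm  # assumed dependency providing a Swin backbone

class MultiHeadKOAClassifier(nn.Module):
    def __init__(self, num_grades: int = 5, feat_dim: int = 1024):
        super().__init__()
        # f: Swin encoder; num_classes=0 returns the pooled feature vector
        self.backbone = timm.create_model(
            "swin_base_patch4_window7_224", pretrained=False, num_classes=0)
        # g: one small binary MLP per KL grade (layer sizes are placeholders)
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(feat_dim, 384), nn.ReLU(),
                nn.Linear(384, 48), nn.ReLU(),
                nn.Linear(48, 1))
            for _ in range(num_grades)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x)                                    # (B, feat_dim)
        return torch.cat([head(z) for head in self.heads], 1)   # (B, 5) logits

model = MultiHeadKOAClassifier()
batch = torch.randn(2, 3, 224, 224)      # dummy knee-radiograph batch
probs = torch.sigmoid(model(batch))      # per-grade occurrence probabilities
kl_grade = probs.argmax(dim=1)           # predicted KL grade, as in the argmax rule above
print(kl_grade)
```

A natural training objective for such heads is a per-grade binary cross-entropy with one-vs-rest targets, although the exact loss used by the authors is not restated here.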
To sum up, our final model consists of a basic Swin-B encoder with C=128 and 2, 2, 18, 2 Swin Transformer blocks, followed by Normalisation and average pooling layers to produce a final representation vector of size 1024. This vector is then passed to 5 MLPs, one for each KL grade. Each MLP contains 3 linear layers of size 384, 48, 48, 1, respectively. The final layer of each MLP network has a single neuron to predict the occurrence probability of each grade. §.§ Data Drift Correction In this paper, we employ 2 of the most widely used datasets for KOA classification (i.e. MOST and OAI datasets). These datasets were collected over a substantial amount of time, from several medical centers, and were annotated by a multitude of medical practitioners. The inherent disparity of equipment, study subjects, radiography, and diagnostics methods between different medical centers caused a shift between the datasets as further discussed in Section <ref>. We represent our model using the formula h = g ∘ f, where f : X → Z and g : Z → Y, represent the feature extractor and the multi-classification head, respectively. X is the input image, Z is the latent feature space, and Y represents the label space. To address the issue of data drift between the MOST and OAI datasets, we need to align the latent representational spaces between Z_MOST and Z_OAI. This means that the feature extractor f needs to be able to perceive the data distributions from 𝒟_ℳ𝒪𝒮𝒯 and 𝒟_𝒪𝒜ℐ as belonging to the same distribution 𝒟. It models relevant mutual features while discarding any dataset-specific information that could be considered noisy. This could be represented using the following equation: 𝒟 = ( 𝒟_ℳ𝒪𝒮𝒯∪𝒟_𝒪𝒜ℐ ) ∖ ( 𝒩_MOST∪𝒩_OAI ) where 𝒩_MOST and 𝒩_OAI represent the noisy distribution of information specific to the MOST and OAI datasets, respectively. To achieve this result, we train the model h on the MOST dataset and then freeze the MLP layers g. We continue to train the feature extractor f on the OAI dataset. This way, we force the feature extractor f to align the representational space for both datasets. This proposed approach leverages the pre-trained source model effectively and adapts it to the target dataset by minimizing the shift between the data distributions in the latent representational space Z. The objective is to achieve this without compromising the prior knowledge of the pre-trained classifier. §.§ Implementation In order to train the model, we used the AdamW optimizer <cit.> with a learning rate of 3e-5, a weight decay of 0.05, an epsilon of 1e-8, and betas of (0.9, 0.999) to adjust the weights. We trained the model with a batch size of 32 images for 300 epochs. We implemented the code in PyTorch and used an NVIDIA RTX A4000 GPU with 16 GB of VRAM to speed up the training process. We also implemented various data augmentation techniques such as 15-degree rotation, translation, scaling, random horizontal flipping, and contrast adjustment with a factor of 0.3. These techniques have previously been used in similar studies to improve the performance of deep learning models on image classification tasks in order to address the problem of limited data and overfitting. § EXPERIMENTAL RESULTS To evaluate the efficacy of the proposed approach, we conducted five experiments, described in this section. §.§ Datasets In this study, we employed two widely used and publicly available datasets: MOST dataset: It contains 18,269 knee images that were segmented in the same manner as in <cit.>. 
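The two-stage training scheme described in the data drift correction subsection (train h = g∘f on MOST, then freeze g and fine-tune only f on OAI) could be sketched as follows. Here `model` with its `heads` submodule, the two data loaders and the loss function are assumed to be defined elsewhere, and the optimizer settings are those quoted in the implementation paragraph above; this is a sketch of the procedure, not the authors' code.

```python
import torch

def fit(model, loader, optimizer, loss_fn, epochs=300, device="cuda"):
    """Standard supervised training loop."""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

def make_opt(params):
    # AdamW settings reported in the implementation paragraph
    return torch.optim.AdamW(params, lr=3e-5, weight_decay=0.05,
                             eps=1e-8, betas=(0.9, 0.999))

# Stage 1: train the full model h = g∘f on the source dataset (MOST).
fit(model, most_loader, make_opt(model.parameters()), loss_fn)

# Stage 2: freeze the prediction heads g and fine-tune only the backbone f on OAI,
# forcing f to map both datasets into a shared latent space Z.
for p in model.heads.parameters():
    p.requires_grad_(False)
fit(model, oai_loader,
    make_opt(p for p in model.parameters() if p.requires_grad), loss_fn)
```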
We divided this dataset into three subsets, namely training, validation, and testing, with a ratio of 6:1:3. Table <ref> provides a summary of the dataset's partitioning. We use this dataset to train and evaluate our model's performance on knee image classification. OAI dataset: It consists of 8260 already prepared knee images <cit.>. It is randomly divided into three subsets, namely training, validation, and testing, with a ratio of 7:1:2. Table <ref> summarizes the partitioning of the OAI dataset. We use this dataset to validate and test our model's performance. §.§ Experimental Protocol During the development of our model, we tested multiple configurations and compared them. In the first experiment, we used a single classifier to predict all grades simultaneously. In the second experiment, we used the same settings but employed the multi-prediction head architecture, which involves breaking down the multi-class classification problem into sub-binary classifications. For experiments three and four, we explored the data drift between the two datasets by training on only one dataset per experiment. Finally, in the fifth experiment, we tackled the issue of data drift by transferring the knowledge from the classifier trained on the source dataset (MOST) and solely training the feature extractor of our model on the target dataset (OAI). §.§ Quantitative Evaluation The performances obtained for each considered configuration are presented in Table <ref>. Comparing the first two experiments, we observed an improvement in the F1 score when using the multi-prediction head architecture in the second experiment. Specifically, the model yielded a 0.062 and 0.042 F1 score increase compared to the first experiment on the MOST and OAI test sets, respectively. We also notice an increase in accuracy on the MOST dataset. Moreover, as seen in the confusion matrices in Figure <ref>, the architecture proposed in experiment 2 was able to avoid the catastrophic failure in detecting KL-G1 observed in experiment 1. The grade KL-G1 is notoriously challenging to detect even for trained doctors due to its high similarity with KL-G0 and KL-G2. In fact, the model correctly predicted 54 KL-G1 images in experiment 2, while none were correctly classified in experiment 1. These results highlight the impact of dividing the multi-class classification problem into sub-binary classification problems as described in Section <ref>. The substantial drop in performance in experiment 3 on both datasets is mainly attributed to the lack of a sufficient quantity of data. Transformer-based models are known to require a lot of data for training <cit.>. This has led to the underfitting of our model, as it was not able to extract meaningful representations from this dataset. On the other hand, we notice that the performance of the model on the MOST dataset is quite similar, which is due to the richness of the representations in this dataset. In experiment 4, the MOST dataset contains more samples that cover a broader range of KOA severity levels than the OAI dataset, as shown in Table <ref>. Consequently, MOST provides a more diverse and representative training set for our model, leading to better performance on the MOST test set. However, we still see a greater decrease in performance on the OAI dataset compared to experiment 2 in terms of accuracy and F1 score.
Experiment 5 showed a considerable enhancement in performance on the OAI dataset compared to all other experiments, achieving a 70.17% accuracy and 0.671 F1-score, as shown in Table <ref>, while maintaining a high accuracy on the MOST dataset. This particularly highlights the significance and effectiveness of our method in reducing the data drift and aligning the latent representations of both datasets, as described in Section <ref>. §.§ Latent Representation Ability The reduction of the data drift is an important task for our model, as shown in the previous quantitative results. Figure <ref> depicts the distribution of latent features extracted for the samples of each dataset across the models produced through our previous experiments. We used the t-SNE algorithm <cit.> in order to reduce the dimensionality of the features. The data drift in the representation of the two datasets is clearly apparent for both experiments 1 and 2. Even though experiment 2 achieved better results, we still noticed the high disparity of performance between datasets. Due to the underfitting of the model in experiment 3, it was also unable to address the data drift. In experiment 4, the model was trained only on the MOST dataset. Because of the availability of data, we noticed a better overall alignment of the data distributions between datasets. However, Figure <ref> shows that the shift at the level of individual classes is still noticeable. In experiment 5, we noticed a very strong alignment for both datasets on the general and class-specific levels in Figures <ref> and <ref>, respectively. Our approach successfully aligned all the data points from both datasets, effectively mitigating the data drift problem. As a result, the learned representations were more relevant to the task, and the model's performance improved significantly. Figure <ref> illustrates the distribution of latent representations of each class for each of our previous experiments on the OAI test set. It highlights the ability of the model to discriminate and separate the different KL grades. In experiment 3, where the underfitting occurred, we can observe the inability of the model to separate the distributions of the different classes. In experiments 1, 2, and 4, the models were able to clearly separate the distributions of KL-G3 and KL-G4. Separating the KL-G0, KL-G1, and KL-G2 grades was more challenging in the first experiment due to the significant similarity between them and the use of a single MLP classifier. Along with the ability to align the distributions of both datasets, we noticed in experiment 5 a better separability between KL-G0, KL-G1, and KL-G2, which posed a challenge in other experiments. We observed a clear ability to discriminate between KL-G1 and KL-G2 especially, while KL-G0 and KL-G1 still pose some challenges because they represent the non-existence and the very early stages of OA, respectively. Overall, these results demonstrate the effectiveness of our method in handling data drift and enhancing the model's ability to differentiate between grades of KOA. §.§ Qualitative Evaluation We use GradCAM as a tool for interpretability purposes. By visualizing the last layer's activations of the feature extractor, we chose a sample from each grade, where the true labels of samples from (a) to (e) are from KL-G0 to KL-G4, respectively, as shown in Figures <ref> and <ref>.
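A minimal sketch of the kind of Grad-CAM computation used for these visualizations is given below. It assumes the chosen layer exposes CNN-style (B, C, H, W) activations, so for a Swin backbone the token sequence would first have to be reshaped to its spatial grid; the actual figures may well have been produced with a dedicated library.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    """Minimal Grad-CAM: weight the target layer's activations by the spatially
    pooled gradients of the class score, then apply ReLU and normalise."""
    store = {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    score = model(x)[0, class_idx]          # score of the grade to explain
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    a, g = store["act"], store["grad"]      # assumed shape (1, C, H, W)
    weights = g.mean(dim=(2, 3), keepdim=True)            # GAP of the gradients
    cam = F.relu((weights * a).sum(dim=1, keepdim=True))  # (1, 1, H, W)
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```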
In Figure <ref>, we observed that the model effectively identified areas like osteophytes, joint space narrowing, and sclerosis, which are essential factors for assessing the severity of KOA <cit.>. This points out that our model bases its classifications on the right regions of interest commonly used in clinical diagnosis and not on non-relevant features. Figure <ref> represents misclassified samples. As can be observed, the model still focuses on the relevant regions around the knee joint. For instance, the model predicts sample (a) as KL-G1, even though the true KL grade was zero. It focused on the area where a medial joint space narrowing was present, which is a possible feature of KL-G1. Similar misclassifications occurred for samples (b), (c), and (d), where the model either overestimated or underestimated the KL grade, indicating the challenge of distinguishing between grades due to their high similarity and also the fact that the KL grade suffers from subjectivity/ambiguity among experts <cit.>. In sample (e), we encountered an image that contained an unusual object (i.e. A screw) in the tibia, which could potentially distract the model from the areas of the image that are crucial for grading KOA. However, our model demonstrated robustness by still being able to focus on the region of interest. Furthermore, our model classified the image as a KL-G3 instead of KL-G4, which are close compared to other KL-Grades. This result highlights the ability of our model to prioritize task-specific important features in the image and not be affected by irrelevant and noisy distractors. §.§ State-of-the-art Comparison Table <ref> presents a comparison of the results obtained with state-of-the-art methods. We note that the methods used in these studies were trained differently. Specifically, some methods used the OAI training set exclusively, others used the MOST training set exclusively, and others used both bases. This diversity in learning can have an impact on the overall performance, and should therefore be carefully considered when interpreting the results. Antony et al. <cit.> and <cit.> achieved accuracies of 53.40% and 63.60%, respectively, and F1-scores of 0.43 and 0.59, respectively. Chen et al. <cit.> used ordinal loss with different deep learning architectures and achieved accuracies of 69.60%, 66.20%, and 65.50% with Vgg19, ResNet50, and ResNet101, respectively, but they did not report F1-score. Tiulpin et al. <cit.> used a Siamese network and reported an accuracy of 66.71%. Wang et al. <cit.> achieved an accuracy of 69.18%. Our proposed method, experiment 5, outperformed all other methods with an accuracy of 70.17% and an F1-score of 0.67. These results indicate the potential of our proposed method for improving the accuracy and reliability of knee osteoarthritis diagnosis, which could be valuable in clinical practice. § CONCLUSION In this paper, we proposed a new method to predict the severity of Knee OA from radiographic images using the Swin Transformer. Our results showed that this method achieved state-of-the-art performance on the OAI test set, significantly outperforming existing methods. We show that the Swin Transformer network is effective in extracting relevant knee OA information, which can be used to detect most of the symptoms of the disease. In addition, handling the data drift and using the multi-prediction head architecture significantly improves the accuracy of the model and helps reduce the similarity between features of nearby grades. 
Prospects for future work may involve other imaging modalities such as MRI, while exploring clinical and demographic data, to further improve the prediction of KOA severity. Funded by the TIC-ART project, Regional fund (Region Centre-Val de Loire)
http://arxiv.org/abs/2307.06021v1
20230712090313
Projective dimension of weakly chordal graphic arrangements
[ "Takuro Abe", "Lukas Kühne", "Paul Mücksch", "Leonie Mühlherr" ]
math.CO
[ "math.CO", "52C35, 32S22, 20F55, 51F15" ]
A graphic arrangement is a subarrangement of the braid arrangement whose set of hyperplanes is determined by an undirected graph. A classical result due to Stanley, Edelman and Reiner states that a graphic arrangement is free if and only if the corresponding graph is chordal, i.e., the graph has no chordless cycle with four or more vertices. In this article we extend this result by proving that the module of logarithmic derivations of a graphic arrangement has projective dimension at most one if and only if the corresponding graph is weakly chordal, i.e., the graph and its complement have no chordless cycle with five or more vertices. § INTRODUCTION The principal algebraic invariant associated to a hyperplane arrangement is its module of logarithmic vector fields or derivation module D(). Such modules provide an interesting class of finitely generated graded modules over the coordinate ring of the ambient space of the arrangement. The chief problem is to relate the algebraic structure of D() to the combinatorial structure of , i.e., whether it is free or more generally to determine its projective dimension or even graded Betti numbers. In general, this is notoriously difficult and still wide open; at its center is Terao's famous conjecture which states that over a fixed field of definition, the freeness of D() is completely determined by combinatorial data. Conversely, one might ask which combinatorial properties of are determined by the algebraic structure of D(). It is natural to approach these very intricate questions by restricting attention to certain distinguished classes of arrangements. A prominent and much studied class are the graphic arrangements, around which our present article revolves. They are defined as follows. Let V ≅^ℓ be an ℓ-dimensional -vector space. Let x_1,..,x_ℓ be a basis for the dual space V^*. Given an undirected graph G = (,E) with ={1,…,ℓ}, define an arrangement (G) by (G) := {(x_i-x_j) |{i,j}∈ E}. Our aim is to study the module D((G)) of a graphic arrangement (G). In fact, regarding the freeness of D((G)), a nice complete answer is given by the following theorem, due to work by Stanley Sta72, and Edelman and Reiner Edelman. The module D((G)) is free if and only if the graph G is chordal, i.e., G does not contain a chordless cycle with four or more vertices. A recent refined result was established in tran2022matfree by Tran and Tsujie, who showed that the subclass of so-called strongly chordal graphs in the class of chordal graphs corresponds to the subclass of MAT-free arrangements, cf. ABCHT16_FreeIdealWeyl, CunMue19_MATfree.
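The graph-theoretic side of this dichotomy is easy to experiment with. The following brute-force sketch (assuming NetworkX is available, and intended only for small graphs) tests chordality and weak chordality, here for the 6-antihole, i.e. the triangular prism, which reappears below.

```python
import itertools
import networkx as nx

def has_long_chordless_cycle(G, min_len=5):
    """Brute-force search for an induced (chordless) cycle on >= min_len vertices;
    a vertex subset works iff its induced subgraph is connected and 2-regular."""
    for k in range(min_len, G.number_of_nodes() + 1):
        for nodes in itertools.combinations(G.nodes, k):
            H = G.subgraph(nodes)
            if nx.is_connected(H) and all(d == 2 for _, d in H.degree()):
                return True
    return False

def is_weakly_chordal(G):
    return (not has_long_chordless_cycle(G)
            and not has_long_chordless_cycle(nx.complement(G)))

antihole6 = nx.complement(nx.cycle_graph(6))
print(nx.is_chordal(antihole6), is_weakly_chordal(antihole6))   # False False
```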
In this note, we will investigate the natural question raised by Kung and Schenck in <cit.> of whether it is possible to give a characterization of graphs G, similar to <Ref>, for which the projective dimension of D((G)) is bounded by a certain positive value. To this end, we consider the more general notion of weakly chordal graphs introduced by Hayward Hayward1: A graph G is weakly chordal if G and its complement graph G^C do not contain a chordless cycle with five or more vertices. It was subsequently discovered that many algorithmic questions that are intractable for arbitrary graphs become efficiently solvable within the class of weakly chordal graphs Hayward2. The main result of this paper is the following: The projective dimension of D((G)) is at most 1 if and only if the graph G is weakly chordal. Moreover, the projective dimension is exactly 1 if G is weakly chordal but not chordal. Along the way towards the preceding theorem, we will prove the following key result, yielding the more difficult implication of <Ref>. For ℓ≥ 6, the projective dimension of D((C_ℓ^C)) is equal to 2, where C_ℓ^C is the complement of the cycle-graph with ℓ vertices, also called the (ℓ-)antihole. Moreover, we prove a refined result. Namely, in Theorem <ref> we provide an explicit minimal free resolution of D((C_ℓ^C)). The article is organized as follows. In <Ref>, we introduce some notation for graphs and preliminary results needed later on. <Ref> is concerned with further notation and helpful results for hyperplane arrangements and their derivation modules. Moreover, in Subsection <ref> we record a new tool from the very recent work of the first author Abe23_BSequence which allows us to control the projective dimension of the derivation module along the deletion of hyperplanes under certain assumptions. Then, in <Ref> we prove one direction of our main Theorem <ref>. The <Ref> then yields, step by step, the other direction of Theorem <ref>. In particular, along the way, we derive a minimal free resolution of the derivation module of an antihole graphic arrangement. To conclude, in the final <Ref> we comment on open ends and record some questions raised by our investigations. § ACKNOWLEDGMENTS TA is partially supported by JSPS KAKENHI Grant Number JP21H00975. LK and LM are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – SFB-TRR 358/1 2023 – 491392403. PM is supported by a JSPS Postdoctoral Fellowship for Research in Japan. § PRELIMINARIES – GRAPH THEORY In this section, we define objects of interest to us while studying graphic arrangements, notably specific graph classes and their attributes. The exposition is mostly based on Diestel. We only consider simple, undirected graphs: (i) A simple graph G on a set is a tuple (,E) with E ⊆2 the set of (undirected) edges connecting the vertices in . (ii) The graph G^C = (, 2\ E) is called the complement graph of G. (iii) A graph G' = (', E') with ' ⊆, E' ⊆ E is called a subgraph of G. If E' is the set of all edges between vertices in ', i.e. E'= '2∩ E, the graph G' is an induced subgraph of G. If the subset relation is proper, G' is called a proper subgraph of G. Besides restricting the graph to a set of vertices, there are two basic operations we can perform on graphs, as described in OrlikTerao: Let G = (,E) be a graph and e = {i,j}∈ E. * The graph G' = (, E\{e}) is obtained from G through deletion of e. 
* The graph G” = (”, E”) with V” the vertex set obtained by identifying i and j and E” = {{p̅, q̅} |{p,q}∈ E'} is obtained by contraction of G with respect to e. We will define graph classes based on certain path or cycle properties: * For k≥ 2, a path of length k is the graph P_k = (,E) of the form = {v_0,…,v_k} , E = {{v_0,v_1}, {v_1,v_2},…,{v_k-1,v_k}} where all v_i are distinct. * If P_k = (,E) is a path, and k ≥ 3, then the graph C_k = (, E∪{v_k-1,v_0}) is called a (k-)cycle. An edge which joins two vertices of a cycle (path), but is not itself an edge of the cycle (path) is a chord of that cycle (path). An induced cycle (path) of a graph G is an induced subgraph of G, that is a cycle (path). For k≥ 6, we call C_k^C the k-antihole. A graph is called chordal (or triangulated) if each of its cycles of length at least 4 has a chord, i.e. if it contains no induced cycles of length greater than 3. The main objects of interest in this article are graphs that satisfy a weaker condition than chordality and were introduced by Hayward in Hayward1: A graph is called weakly chordal (or weakly triangulated) if it contains no induced k-cycle with k ≥ 5 and no complement of such a cycle as an induced subgraph. It is clear that chordality implies weak chordality and that weak chordality is closed under taking the complement. Additionally, it is apparent that if G is weakly chordal, so is every induced subgraph of G and in Hayward2, it was proved that weak chordality is closed under contraction. A more inductive approach is given by the following generation method, introduced by Hayward: (Hayward3, Theorem 4) A graph is weakly chordal if and only if it can be generated in the following manner: * Start with a graph G_0 with no edges. * Repeatedly add an edge e_j to G_j-1 to create the graph G_j, such that e_j is not the middle edge of any induced P_3 of G_j. With these tools, we can now prove the following: For a weakly chordal graph G = (,E), there exists a sequence of edges e_1,..,e_k∉ E, such that * G_i = (, E ∪{e_1,…,e_i}) is weakly chordal for i = 1,…,k-1, * the edge e_i is not part of an induced cycle C_4 in G_i for i = 1,…,k and * G_k is chordal. Say the complement G^C has m edges. If G is weakly chordal, so is G^C. Using <Ref> this means in turn that there exists an edge ordering e_m,…,e_1 of the edges in E_G^C, such that (, {e_m,…,e_i}) is weakly chordal for all i=m,…,1 and (, {e_m,…,e_1})=G^C. Define the sequence of graphs G_i (, E ∪{e_1,…,e_i}) for i=1,…,m. As G_i^C=(, {e_m,…,e_i-1}) these graphs are by construction all weakly chordal. Since the sequence ends with the complete graph, which is chordal, the chordality condition is met at some point in the sequence. Moreover, the middle edge in an induced path P_4 becomes an edge on an induced cycle C_4 in the complement graph. Thus the condition of <Ref> on avoiding the middle edges of an induced P_4 translates to avoiding the edges of an induced cycle C_4 as claimed. § PRELIMINARIES – HYPERPLANE ARRANGEMENTS In this section, we recall some fundamental notions form the theory of hyperplane arrangements. The standard reference is Orlik and Terao's book OrlikTerao. Let be a field and let V ≅^ℓ be a -vector space of dimension ℓ. A hyperplane H in V is a linear subspace of dimension ℓ-1. A hyperplane arrangement = (, V) is a finite set of hyperplanes in V. Let V^* be the dual space of V and S = S(V^*) be the symmetric algebra of V^*. Identify S with the polynomial algebra S = [x_1,…,x_ℓ]. Let be a hyperplane arrangement. 
Each hyperplane H ∈ is the kernel of a polynomial α_H of degree 1 defined up to a constant. The product Q() := ∏_H ∈α_H is called a defining polynomial of . Define the rank of as () := _V(∩_H ∈H). If ℬ⊆ is a subset, then (ℬ, V) is called a subarrangement. The intersection lattice L() of the arrangement is the set of all non-empty intersections of elements of (including V as the intersection over the empty set), with partial order by reverse inclusion. For X∈ L() define the localization at X as the subarrangement _X of by _X := {H ∈ | X ⊆ H} as well as the restriction (^X, X) as an arrangement in X by ^X := {X ∩ H | H ∈\_X and X∩ H ≠∅}. Define L_k() := {X ∈ L() | codim_V (X) = k} and L_≥ k(), L_≤ k() analogously. Let be a non-empty arrangement and let H_0 ∈. Let 𝒜' = \{H_0} and let 𝒜” = ^H_0. We call (, 𝒜', 𝒜”) a triple of arrangements with distinguished hyperplane H_0. We can associate a special module to the hyperplane arrangement : A 𝕂-linear map θ: S → S is a derivation if for f,g ∈ S: θ(f· g) = f·θ(g)+g·θ(f). Let _𝕂(S) be the S-module of derivations of S. This is a free S-module with basis the usual partial derivatives ∂_1,…,∂_ℓ. Define an S-submodule of _𝕂(S), called the module of -derivations, by D() := {θ∈_𝕂(S) |θ(Q) ∈ QS}. The arrangement is called free if D() is a free S-module. The class of arrangements we are interested in are graphic arrangements: Given a graph G = (,E) with ={1,…,ℓ}, define an arrangement (G) by (G) := {(x_i-x_j) |{i,j}∈ E}. Note that for a graphic arrangement (G), localizations exactly correspond to disconnected unions of induced subgraphs of G. More precisely, for X ∈ L((G)) we have (G)_X = {(x_i-x_j) |{i,j}∈ E'} for some E' ⊆ E if and only if there is a subgraph G' of G with edges E' such that each connected component of G' is an induced subgraph of G. For given derivations θ_1, …, θ_ℓ∈(S) we define the coefficient matrix M(θ_1, …, θ_ℓ) := (θ_j(x_i))_1≤ i,j ≤ℓ, i.e., the matrix of coefficients with respect to the standard basis ∂_1,…,∂_ℓ of (S). We recall Saito's useful criterion for the freeness of D(), cf. [Thm. 4.19]OrlikTerao. For θ_1, …, θ_ℓ∈ D(), the following are equivalent: * det(M(θ_1, …, θ_ℓ)) ∈^× Q(), * θ_1, …, θ_ℓ is a basis of D(). §.§ Projective dimension In this manuscript, we want to take a look at the non-free case of graphic arrangements and find a characterization for their different projective dimensions. For a comprehensive account of all the required homological and commutative algebra notions we refer to Weibel respectively Eis95_CommAlg. An S-module P is called projective if it satisfies the following universal lifting property: given S-modules L,N, a surjection g: L → N and a map γ: P → N, there exists a map β: P → L, such that γ= g ∘β. A projective resolution of a module M is a complex P_∙ with a map ϵ: P_0 → M, such that the augmented complex …→ P_2 → P_1 → P_0 → M → 0 is exact and P_i is projective for all i∈ℕ. Every S-module M has a projective resolution. With this in mind, we can define the notion of projective dimension: Let M be an S-module. Its projective dimension (M) is the minimum integer n (if it exists), such that there is a resolution of M by projective S-modules 0 → P_n →…→ P_1 → P_0 → M → 0. The projectivity of the S-module P is equivalent to the exactness of the functor _S(P,-). Hence, considering its derived functors _S^i(M,-), we have the following characterization of the projective dimension, cf. [pd Lemma 4.1.6]Weibel: (M) ≤ p if and only if _S^i(M,N) = 0 for all i>p and all S-modules N.
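As a concrete illustration of Saito's criterion recalled above, the following small sketch (assuming SymPy is available) verifies freeness of the braid arrangement of the complete graph K_4, using the classical derivations θ_j = ∑_i x_i^j ∂_i that also reappear in the section on antiholes.

```python
import sympy as sp

ell = 4                                    # K_4, i.e. the braid arrangement of rank 3
x = sp.symbols(f"x1:{ell + 1}")
# theta_j = sum_i x_i^j d/dx_i, so the coefficient matrix has entries theta_j(x_i) = x_i^j
M = sp.Matrix(ell, ell, lambda i, j: x[i] ** j)
# defining polynomial of the braid arrangement: product of all x_s - x_t with s < t
Q = sp.prod(x[s] - x[t] for s in range(ell) for t in range(s + 1, ell))
print(sp.simplify(M.det() / Q))            # a nonzero constant, so Saito's criterion applies
```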
The projective dimension of an arrangement is the projective dimension of its derivation module and we simply write () := (D()). Note that D() is a finitely generated reflexive module over the polynomial ring S; as such we have () ≤()-2 and (as a consequence of the graded version of Nakayama's Lemma) D() is projective if and only it is free, cf. [Thm. 19.2]Eis95_CommAlg. Thus, by <Ref>, a chordal graph produces an arrangement of projective dimension 0. The following result is due to Terao, cf. [Lem. 2.1]Yuz91_LatticeCohom. Let X ∈ L(). Then (_X) ≤(). An arrangement is generic, if || > () and for all X ∈ L() ∖{∩_H ∈H} we have |_X| = _V(X). The next result, due to Rose and Terao RoseTerao1991_FreeResGeneric, identifies generic arrangements as those with maximal projective dimension. Let be a generic arrangement. Then () = ()-2. Important for our present investigations are the following examples of generic arrangements. Let C_ℓ be the cycle graph with ℓ vertices. Then, for ℓ≥ 3, the graphic arrangement (C_ℓ) is generic. In particular, we have ((C_ℓ)) = ((C_ℓ))-2 = ℓ-3. Since arrangements of induced subgraphs correspond to localizations, from <Ref> and <Ref> we obtain the following, first observed by Kung and Schenck [Cor. 2.4]KungSchenck. If G contains an induced cycle of length m, then pd((G)) ≥ m-3. In KungSchenck, Kung and Schenck introduced a graph they called the “triangular prism” to serve as an example for a graphic arrangement (G) whose projective dimension is strictly greater than k-3, k the length of the longest chordless cycle in G. Note that the graph they describe is the 6-antihole, see <Ref>. It does not have any cycle of length 5 or more, yet ((G))=2 and it is not weakly chordal. It is the smallest possible example (in terms of the number of vertices) that has this property. §.§ Terao's polynomial B Let be an arbitrary arrangement and H_0 a distinguished hyperplane. Let (,',”) be the corresponding triple. Choose a map ν:”→' such that ν(X)∩ H_0=X for all X∈”. Terao defined the following polynomial B(',H_0)=Q()/α_H_0∏_X∈”α_ν(X). The main properties of this polynomial can be summarized as follows: [Lem. 4.39 and Prop. 4.41]OrlikTerao * B(',H_0) = |'|-|”|. * The ideal (α_H_0,B(',H_0)) is independent of the choice of ν. * The polynomial θ(α_H_0) is contained in the ideal (α_H_0,B(',H_0)) for all θ∈ D('). In the following, we fix a hyperplane H_0 and simply write B = B(',H_0) for Terao's polynomial. By <Ref>, we have an exact sequence: 0→ D() ↪ D(') S̅·B̅, where S̅=S/α_H_0 and ∂'(θ)=θ(α_H_0). The following new result regarding this sequence will be important in our subsequent proofs. It is a special case of “surjectivity theorems” for sequences of local functors recently obtained by the first author in Abe23_BSequence. Assume that (_X) < _V(X)-2 for all X ∈ L_≥ 2(^H_0). Then the map ∂' in the sequence (<ref>) is surjective. Hence, in this case, the sequence (<ref>) is also right exact. This immediately follows from [Thm. 3.2, Thm. 3.3]Abe23_BSequence. We record the following consequences of the preceding theorem. Assume that _X is free for all X∈ L_2(^H_0) and () ≤ 1. Then the sequence (<ref>) is also right exact. This follows immediately from <Ref> and <Ref>. Assume that _X is free for all X∈ L_2(^H_0) and () ≤ 1. Then we also have (') ≤ 1. By <Ref>, the B-sequence is right-exact and by assumption, () = 0, that is _S^i(D(),N) = 0 for all i>1 and all S-modules N. The principal ideal of S̅ generated by B̅ is free as an S̅-module. So, by the graded version of [Cor. 
4.3.14]Weibel, the module S̅B̅ has projective dimension 1, i.e. _S^i(S̅B̅,N) = 0 for all i>1 by <Ref>. It then follows from the long exact Ext-sequence, that for the middle term in the B-sequence, we have _S^i(D('),N) = 0 for all i>1 which is equivalent to (') = (D(')) ≤ 1 by <Ref>. § WEAKLY CHORDAL GRAPHIC ARRANGEMENTS The goal of this section is to show that a graphic arrangement of a weakly chordal graph has projective dimension at most 1, which gives one direction of our main <Ref>. Let G=(,E) be a weakly chordal graph. Then ((G)) ≤ 1. Firstly, <Ref> implies that there exists a sequence of edges e_1,…,e_k such that G_i = (, E ∪{e_1,..,e_i}) is weakly chordal, the edge e_i is not an edge of an induced cycle C_4 in G_i for i = 1,..,k, and G_k is chordal. We prove that ((G_i)) ≤ 1 for all i = 1,..,k by a descending induction. As G_k is chordal, the arrangement (G_k) is free and hence ((G_k))=0 by <Ref>. So assume that ((G_j)) ≤ 1 for some 1<j≤ k. We will now argue that this implies ((G_j-1)) ≤ 1 which finishes the proof. Let H_0 be the hyperplane corresponding to the edge e_j in the arrangement (G_j). We aim to apply <Ref> to (G_j) and (G_j-1). To check the assumption of this result, we consider X∈ L_2((G_j)^H_0) and need to show that the arrangement (G_j)_X is free. Assume the contrary, i.e., that (G_j)_X is not free. By definition of X, the arrangement (G_j)_X is a graphic arrangement on an induced subgraph of G_j on four vertices containing the edge e_j. The assumption that this arrangement is not free implies that this induced subgraph is not chordal. As this subgraph only contains four vertices it must be the cycle C_4. This however contradicts condition (2) in <Ref> which states that the edge e_j cannot be an edge of an induced cycle C_4 in the graph G_j. Therefore, the arrangement (G_j)_X is free for all X∈ L_2((G_j)^H_0). Moreover, by the induction hypothesis, we have ((G_j)) ≤ 1. Thus, by Lemma <ref>, we also have ((G_j-1)) ≤ 1 as desired. Let us record the following result which immediately follows from the previous theorem and Theorem <ref>. Let G be a weakly chordal but not chordal graph. Then ((G)) = 1. § GRAPHIC ARRANGEMENTS OF ANTIHOLES The main result of this section yields the other direction of implications in <Ref>. Recall that the graph C_ℓ^C is the complement graph of a cycle with ℓ vertices which is called the ℓ-antihole. For all ℓ≥ 6 it holds that ((C_ℓ^C))= 2. Before we delve into the arguments, leading step by step to the above principal theorem of this section, let us first explain how this concludes the proof of <Ref>. By <Ref>, we have ((G)) ≤ 1 for a weakly chordal graph G and ((G)) = 1 if G is not chordal by <Ref>. Conversely, assume that G is a graph such that ((G)) = 1. In particular, by <Ref>, the graph G is not chordal. Suppose G is also not weakly chordal. Then, by definition, there is either an m≥ 5 such that C_m is an induced subgraph or there is an ℓ≥ 6 such that C_ℓ^C is an induced subgraph of G. In the first case, by <Ref>, we have ((G)) ≥ m-3 ≥ 2; in the second case, by <Ref> and <Ref>, we also have ((G)) ≥ 2. Both cases contradict our assumption. Hence, G is weakly chordal. To prove Theorem <ref>, let us first introduce some notation for special derivations we will consider in this section. Let G be a graph with vertex set = [ℓ]:={1,2,…,ℓ}.
Write H_ij:=(x_i-x_j) for the hyperplane corresponding to the edge {i,j} and let _ℓ-1:={H_ij| 1 ≤ i < j ≤ℓ} be the graphic arrangement of the complete graph (or equivalently, the Weyl arrangement of type A_ℓ-1, also called the braid arrangement) in ℚ^ℓ. We set θ_i:=∑_j=1^ℓ x_j^i j (i ≥ 0) and define φ_i:=∏_j ∈ [ℓ] ∖{i-1,i,i+1} (x_i-x_j)i for i ≠ 1, ℓ. Also define φ_1:=∏_i=3^ℓ-1 (x_1-x_i)1 and φ_ℓ:=∏_i=2^ℓ-2 (x_ℓ-x_i)ℓ. In this section we always consider indices and vertices in [ℓ] in a cyclic way, i.e., we identify i+ℓ with i etc. There is the following fundamental result due to K. Saito. _ℓ-1 is free with basis θ_0,…,θ_ℓ-1. With this, we can show the following. Let ℬ_i,j:=_ℓ-1∖{H_s,s+1| i ≤ s ≤ j}. If j=i+2, then _i,i+2 is free with basis θ_0,…,θ_ℓ-3, φ_i+1,φ_i+2. We use Saito's criterion (<Ref>). Apparently θ_0,…,θ_ℓ-3,φ_i+1,φ_i+2∈ D(_i,i+2). Considering the coefficient matrix M(θ_0,…,θ_ℓ-3,φ_i+1,φ_i+2), to compute its determinant, we can expand it along the last two columns, yielding a smaller Vandermonde determinant ∏_1≤ s < t ≤ℓ, i,j ∉{i+1,i+2}(x_s-x_t) multiplied with the only entries in the last two columns ∏_j ∈ [ℓ] ∖{i,i+1,i+2} (x_i+1-x_j) and ∏_j ∈ [ℓ] ∖{i+1,i+2,i+3} (x_i+2-x_j). But the product of these three terms is exactly the defining polynomial Q(_i,i+2) which yields the freeness of _i,i+2 by Saito's criterion. If i+2≤ j ≠ i-1, then D(_i,j) is generated by θ_0,…,θ_ℓ-3, φ_i+1,φ_i+2,…,φ_j. Firstly, the defining polynomials of the _i,j together with the derivations φ_i+1,φ_i+2,…,φ_j for a fixed |i-j| = s are contained in one orbit under the action of the symmetric group _ℓ on S=ℚ[x_1,…,x_ℓ] respectively on subsets of (S). Hence, without loss, we may assume that i=1. We argue by induction on j. By Lemma <ref>, the statement is true for j=3. Assume that D(_1,j) is generated by θ_0,…,θ_ℓ-3, φ_2,φ_3,…,φ_j. We will show that, after deleting H_j+1,j+2, an additional generator φ_j+1 is necessary, i.e. D(_1,j+1) is generated by θ_0,…,θ_ℓ-3, φ_2,φ_3,…,φ_j, φ_j+1. Apparently, we have |_1,j+1| = |_ℓ-1|-(j+1) = ℓ(ℓ-1)/2-(j+1), |_1,j^H_j+1,j+2| = |_ℓ-2|-(j-1) = (ℓ-1)(ℓ-2)/2-(j-1). Thus B_j+1=|_1,j+1|-|_1,j^H_j+1,j+2| =ℓ-3, where B_j+1 = B(_1,j+1,H_j+1,j+2) is Terao's polynomial from Subsection <ref>. By definition, it is clear that φ_j+1∈ D(_1,j+1) ∖ D(_1,j). Consequently, by <Ref>, we have φ_j+1(x_j+1-x_j+2) = g(x_j+1-x_j+2) + cB_j+1 and for any θ∈ D(_1,j+1) we also have θ(x_j+1-x_j+2) = g'(x_j+1-x_j+2) + fB_j+1, for certain f,g,g' ∈ S and c ∈ℚ^×. Hence, θ - f/cφ_j+1∈ D(_1,j)=⟨θ_0,…,θ_ℓ-3, φ_2,…,φ_j⟩_S by the induction hypothesis, which completes the proof. We thus see, that if we delete H_12,…,H_ℓ-1,ℓ from _ℓ-1, we can determine generators for D(_1,ℓ-1), namely D(_1,ℓ-1)=⟨θ_0,…,θ_ℓ-3,φ_2,…,φ_ ℓ-1⟩. However, the same argument as in Proposition <ref> does not work well for our target arrangement (C_ℓ^C) = _1,ℓ = _1,ℓ-1∖{H_ℓ,1}, since |_1,ℓ| = ℓ(ℓ-1)/2-ℓ, |_1,ℓ-1^H_ℓ,1| = (ℓ-1)(ℓ-2)/2-(ℓ-3). So |_1,ℓ|-|_1,ℓ-1^H_ℓ,1|=ℓ-4= B_ℓ<(φ_ℓ) = (φ_1) = ℓ-3, where B_ℓ = B(_1,ℓ,H_ℓ,1) is Terao's polynomial B. To obtain generators for _1,ℓ we need to modify the argument utilizing the polynomial B. For that purpose, we introduce the following new refined version of <Ref>. Let be an arrangement, H_1, H_2 ∉ be distinct hyperplanes and let _i:=∪{H_i}. Assume that H_1 = (α), H_2 = (β) and let B_i be the polynomial B with respect to (,H_i). Assume that (α+β) ∈, let b be the greatest common divisor of the reduction of B_1 and B_2 modulo (α,β) and let b_2b≡ B_2 modulo (α,β). 
Then for θ∈ D() we have: θ(α) ∈ (α, β B_1,b_2B_1). Let θ(α)=fα+FB_1 for some f,F ∈ S. Note that θ(α)=θ(α+β)-θ(β). So θ(α)=g(α+β)+hβ+aB_2 for some g,h,a ∈ S, and thus, we have fα+FB_1=g(α+β)+hβ+aB_2. Reducing the equation modulo α, we obtain FB_1 ≡ gβ + hβ + aB_2 (α). Reducing once more modulo β, we get FB_1 ≡ aB_2 (α,β). Let B_i ≡ bb_i (α,β). Then F=F_1 b_2+F_2 β+F_3 α for some F_1,F_2,F_3 ∈ S. Hence θ(α) ∈ (α, β B_1,b_2B_1), which completes the proof. We can apply Theorem <ref> to _1,ℓ-1 and := _1,ℓ = _ℓ-1∖{H_1,2,…,H_ℓ-1,ℓ,H_ℓ,1}. Namely, we can show the following: D()=⟨θ_0,…, θ_ℓ-3,φ_1,…,φ_ℓ⟩_S. Let _1:=∪{H_12} and _2:=∪{H_23}. Set B_1=∏_j=4^ℓ-1 (x_1-x_j) for the polynomial B for the pair (,H_12) and B_2=∏_j=5^ℓ (x_2-x_j) for the polynomial B of (,H_23). Note that H_i,i+1∉ (i=1,2) and (x_1-x_2)+ (x_2-x_3) =x_1-x_3, whose kernel is in . Moreover, after reduction modulo x_1=x_2=x_3, we have B_1 ≡∏_j=4^ℓ-1 (x_1-x_j), B_2 ≡∏_j=5^ℓ (x_1-x_j), and their common divisor is ∏_j=5^ℓ-1 (x_1-x_j). Then, Theorem <ref> yields for θ∈ D() θ(x_1-x_2) ∈ (x_1-x_2,(x_2-x_3)B_2,(x_2-x_ℓ)B_1)=(x_1-x_2,(x_1-x_3)B_2,(x_1-x_ℓ)B_1) = (x_1-x_2,φ_1(x_1-x_2),φ_2(x_1-x_2) ), where we used the fact that (x_2-x_3)B_2 ≡ (x_1-x_3)B_2, (x_2-x_ℓ)B_1 ≡ (x_1-x_ℓ)B_1 (x_1-x_2). Thus D() = D(_1)+Sφ_1+Sφ_2 = ⟨θ_0,…,θ_ℓ-3,φ_1,…,φ_ℓ⟩_S, by <Ref>. Note that ψ_i:=(x_i-1-x_i)φ_i-(x_i+1-x_i+2)φ_i+1∈ D(_ℓ-1) = ⟨θ_0,…,θ_ℓ-1⟩_S for i=1,…,ℓ, since ψ_i(x_i-x_i+1) = -∏_j ∈ [ℓ]∖{i,i+1}(x_i-x_j) + ∏_j ∈ [ℓ]∖{i,i+1}(x_i+1-x_j) ≡ 0 (x_i-x_i+1). Thus, there are f_ij such that ψ_i-∑_j=0^ℓ-3 f_ijθ_j =-θ_ℓ-2 (i=1,2,…,ℓ). So we have relations ψ_i-∑_j=0^ℓ-3 f_ijθ_j= ψ_s-∑_j=0^ℓ-3 f_sjθ_s, and they are generated by ψ_1-∑_j=0^ℓ-3 f_1jθ_j = ψ_i-∑_j=0^ℓ-3 f_ijθ_j for i=2,…,ℓ. We now prove that they indeed generate all the relations among the generators of D(). All relations among the set of generators θ_0,…,θ_ℓ-3,φ_1,…,φ_ℓ are generated by the ones given in Equations (<ref>). Let η: ∑_i=0^ℓ-3 a_i θ_i+∑_i=1^ℓ b_i φ_i=0 be a relation. Since θ_i ∈ D(_ℓ-1) (0≤ i ≤ℓ-3), we see that (∑_i=1^ℓ b_i φ_i)(x_1-x_2) is divisible by x_1-x_2, i.e., b_1∏_i=3^ℓ-1 (x_1-x_i)- b_2∏_i=4^ℓ (x_2-x_i) is divisible by x_1-x_2. So b_1(x_2-x_3)≡ b_2(x_1-x_ℓ) (x_1-x_2). Hence, there are polynomials g_12,h_1,h_2 ∈ S such that b_1 = (x_1-x_ℓ) g_12+(x_1-x_2)h_1, b_2 = (x_2-x_3) g_12+(x_1-x_2)h_2. Apply the same argument to (∑_i=1^ℓ b_i φ_i)(x_2-x_3) to obtain polynomials g_23∈ S such that b_2=(x_2-x_3) g_12+(x_1-x_2)g_23+(x_2-x_3)(x_1-x_2)h_2. Continuing these processes, we know that b_i=(x_i-x_i+1) g_i-1,i+(x_i-1-x_i)g_i,i+1+(x_i-1-x_i)(x_i-x_i+1)h_i for some polynomials g_i-1,i, g_i,i+1, h_i ∈ S, i=1,…,ℓ. Substituting this information into our relation η, we obtain η: ∑_i=0^ℓ-3 a_i θ_i+∑_j=1^ℓ c_jψ_j + (x_j-1-x_j)(x_j-x_j+1)h_j φ_j=0, for some a_i,c_j ,h_j ∈ S. Applying our relations from Equations (<ref>), we get a relation of the form ∑_i=0^ℓ-3 t_i θ_i+ t_ℓ-2ψ_1 + ∑_j=1^ℓ (x_j-1-x_j)(x_j-x_j+1)h_j φ_j=0, Note that by Equations (<ref>) we have ψ_1 = -θ_ℓ-2 - ∑_j=0^ℓ-3 f_1jθ_j, and since (x_i-1-x_i)(x_i-x_i+1)φ_i ∈ D(_ℓ-1), we thus have (x_i-1-x_i)(x_i-x_i+1)φ_i = -θ_ℓ-1 + ∑_j=0^ℓ-3 c_ijθ_i-c_ℓ-2,iψ_1 for suitable c_ij∈ S. Applying this last substitution to our relation, we now expressed η as ∑_i=0^ℓ-3 t_i θ_i+t_ℓ-2ψ_1+t_ℓ-1θ_ℓ-1=0. Since θ_0,…,θ_ℓ-3,ψ_1,θ_ℓ-1 are linearly independent (in fact, by Equation (<ref>) they form a basis for D(_ℓ-1)), we have t_i=0 for all i. Consequently, all the relations among the above generators of D() are expressible using Equations (<ref>). 
Recall that (x_i-1-x_i)φ_i-(x_i+1-x_i+2)φ_i+1∈ D(_ℓ-1 ), for i=1,…, ℓ and there are f_ij such that (x_i-1-x_i)φ_i-(x_i+1-x_i+2)φ_i+1 + ∑_j=0^ℓ-3 f_ijθ_j=-θ_ℓ-2. First let us show the following. The coefficients in relations (<ref>) are given explicitly by f_ij=(-1)^ℓ-2-je_ℓ-2-j(x_1,…,x̂_i,x̂_i+1,…,x_ℓ), where e_i(a_1,…,a_ℓ-2) is the i-th basic symmetric polynomial in the variables a_1,…,a_ℓ-2. The straightforward computation is left to the reader. Now we are ready to prove the following, which immediately implies Theorem <ref>. The module D() has the following minimal free resolution: 0 → S[-ℓ+1] → S[-ℓ+2]^ℓ-1→⊕_i=0^ℓ-4 S[-i] ⊕ S[-ℓ+3]^ℓ+1→ D() → 0. In particular, () =2. First we prove the second syzygy part. Let e_j^i: =e_j(x_1,…,x̂_i,x̂_i+1,…,x_ℓ). As we have shown in <Ref>, the relations among the generators θ_0,…,θ_ℓ-3,φ_1,…,φ_ℓ are the following: (x_ℓ-x_1)φ_1 - (x_2-x_3)φ_2 -∑_j=0^ℓ-3 (-1)^ℓ-2-je_ℓ-2-j^1θ_j =(x_i-1-x_i)φ_i - (x_i+1-x_i+2)φ_i+1 -∑_j=0^ℓ-3 (-1)^ℓ-2-je_ℓ-2-j^iθ_j for i=2,…,ℓ. Let us denote these relations by ψ_i for i=2,…,ℓ. Since we are now concerned with the relations among those first syzygies, from now on we consider all first syzygies as vectors with coordinates with respect to θ_0,…,θ_ℓ-3,φ_1,…,φ_ℓ. Now let ∑_i=2^ℓ a_i ψ_i=0 be a relation among our generators of the first syzygy module. Since the coefficients of φ_i for 3 ≤ i ≤ℓ are only x_i-x_i+1 in ψ_i-1 and x_i-1-x_i in ψ_i, we can deduce that a_i=A(x_i-x_i+1) for i=2,…,ℓ and some constant polynomial A ∈ S. So the relations among ψ_i's have to be of the form A∑_i=2^ℓ (x_i-x_i+1)ψ_i=0. Let us check whether other coefficients are zero or not. First, we consider the one of φ_1, that is (x_ℓ-x_1)(∑_i=2^ℓ (x_i-x_i+1)+(x_1-x_2))=0. Second, for φ_2, we similarly have (x_2-x_3)(-∑_i=2^ℓ (x_i-x_i+1)-(x_1-x_2))=0. So it suffices to check the remaining part, i.e., the coefficient of each θ_ℓ-2-j, which is of the form ∑_i=2^ℓ (x_i-x_i+1)(e_j^i-e_j^1) =∑_i=2^ℓ (x_i-x_i+1)e_j^i+(x_1-x_2)e_j^1= ∑_i=1^ℓ (x_i-x_i+1)e_j^i=:C_j. We show this is zero by induction on j ≥ 1. For j=1 we have 1(C_1) = ∑_i=3^ℓ x_i+∑_i=2^ℓ-1(x_i-x_i+1) -∑_i=2^ℓ-1x_i = ∑_i=3^ℓ x_i+(x_2-x_ℓ) -∑_i=2^ℓ-1x_i=0. By an analogous computation we have i(C_1)=0 for all i. Thus C_1=0. So assume that C_s=0 for s ≤ j and let us prove that C_j+1=0. Compute 1(C_j+1) = e_j+1^1+∑_i=2^ℓ-1(x_i-x_i+1)e_j(x_2,…, x̂_i,x̂_i+1,…,x_ℓ)-e_j+1^ℓ = ∑_i=2^ℓ-1(x_i-x_i+1)e_j(x_2,…, x̂_i,x̂_i+1,…,x_ℓ)+(x_ℓ-x_2) e_j(x_3,…,x_ℓ-1) + e_j+1(x_3,…,x_ℓ-1)- e_j+1(x_3,…,x_ℓ-1)=0 by the induction hypothesis. In sum, we have established the exactness of our resolution. Now, recalling that all the coefficients in the resolution (<ref>) are of positive degree, we see that it is moreover a minimal free resolution. This finishes the proof. § OPEN PROBLEMS We conclude the article by mentioning a few open problems and further directions of research. For free graphic arrangements which correspond to chordal graphs by <Ref>, the degrees of the generators in a basis of the derivation module have a nice description in terms of the combinatorics of the graph, namely vertex degrees along a vertex elimination orderings, cf. [Lem. 3.4]Edelman. Thus, the following problem arises from <Ref>. Determine the graded Betti numbers of D((G)) for a weakly chordal graph G. A further natural question arising from our Theorem <ref> would be if this generalizes to the remaining projective dimensions, i.e. 
if (G) has projective dimension ≤ k if and only if G and its complement graph do not contain a chordless cycle with k+4 or more vertices. This is, however, not the case: first note that in the case of projective dimension 0, it suffices for the graph itself to have no chordless cycle of length 4 or more, and chordality is not closed under taking the complement (the complement of the 4-cycle, for instance, is chordal, whereas the 4-cycle itself is not). Moreover, since the arrangement of the k-cycle is generic of rank k-1, it has maximal projective dimension k-3 (see <Ref>) and by <Ref> its complement has projective dimension 2. Moreover, we found two counterexamples to the other direction of this conjecture in dimension 7; both graphs and their complements have no induced cycle of length more than 5, yet have projective dimension 3, see <Ref>, which was also found by Hashimoto in <cit.>. Lastly, we would like to mention that the problem of understanding the projective dimension of the logarithmic p-forms Ω^p() with poles along (cf. [Def. 4.64]OrlikTerao) greatly differs from the one we discuss in this article. As the module Ω^1() and the module D() are dual to each other, one of them is free if and only if the other is free. In the non-free scenario they behave differently, however: for instance, we have (Ω^1((C_ℓ)))=1 while we have (D((C_ℓ)))=ℓ-3 for ℓ≥ 4. There are furthermore graphs G with (Ω^1((G)))>(D((G))); one such example is the complete graph on six vertices K_6 with three long diagonals removed. So it seems to be an interesting but intricate problem for further research to understand for which graphic arrangements the projective dimension of the logarithmic 1-forms is bounded by one.
http://arxiv.org/abs/2307.04581v1
20230710141926
Galerkin-Bernstein Approximations of the System of Time Dependent Nonlinear Parabolic PDEs
[ "Hazrat Ali", "Nilormy Gupta Trisha", "Md. Shafiqul Islam" ]
math.NA
[ "math.NA", "cs.NA" ]
§ ABSTRACT The purpose of the research is to find the numerical solutions to the system of time dependent nonlinear parabolic partial differential equations (PDEs) utilizing the Modified Galerkin Weighted Residual Method (MGWRM) with the help of modified Bernstein polynomials. An approximate solution of the system has been assumed in accordance with the modified Bernstein polynomials. Thereafter, the modified Galerkin method has been applied to the system of nonlinear parabolic PDEs and has transformed the model into a time dependent ordinary differential equations system. Then the system has been converted into the recurrence equations by employing the backward difference approximation. The iterative calculation is performed using the Picard iterative method. A few renowned problems are then solved to test the applicability and efficiency of our proposed scheme. The numerical solutions at different time levels are then displayed numerically in tabular form and graphically by figures. The comparative study is presented along with the L_2 and L_∞ norms. Keywords: Parabolic PDE System, Modified Galerkin Method, Modified Bernstein Polynomial, Backward Difference Method, Gray-Scott Model § INTRODUCTION Reaction-diffusion systems have been extensively studied during the 20^th century. The study of the reaction-diffusion system reveals that different species have interactions with one another and that after these interactions, new species are created via chemical reactions. The solution of the reaction-diffusion system shows the chemical reaction's underlying mechanism and the various spatial patterns of the chemicals involved. Animal coats and skin coloration have been linked to reaction-diffusion processes, which have been considered to constitute a fundamental basis for processes associated with morphogenesis in biology. There are numerous notable examples of coupled reaction-diffusion systems such as the Brusselator model, the Glycolysis model, the Schnackenberg model, the Gray-Scott model, etc. With the help of the system size expansion, a stochastic Brusselator model has been suggested and investigated in the study cited in <cit.>. The reaction-diffusion Brusselator model has been addressed by Wazwaz et al. through the decomposition technique <cit.>. Because of its potential to provide a close analytical solution, the fractional-order Brusselator model was studied by Faiz et al. <cit.>. The stability of the Brusselator system in a reaction-diffusion cell, as well as the Hopf bifurcation analysis of the system, has been detailed by Alfifi <cit.>.
Qamar has analyzed the dynamics of the discrete-time Brusselator model with the help of the Euler forward and nonstandard difference schemes <cit.>. The research article cited in <cit.> has been prepared by investigating the numerical analysis of the Glycolysis model using a well-known finite difference scheme. Adel et al <cit.> have examined the synchronization problem of the Glycolysis reaction-diffusion model and designed a novel convenient control law. David et al <cit.> have analyzed the stability of turing patterns of the Schnackenberg model. Liu et al <cit.> have developed the bifurcation analysis of the aforementioned model. Khan et al. <cit.> have established a scheme for the solution of the fractional order Schnackenberg reaction-diffusion system. Numerical explorations have been applied to analyze the pattern formations of the model in the research article cited in <cit.>. Gray and Scott <cit.> were the first to introduce the Gray-Scott model. They have proposed this model as an alternative to the autocatalytic model of Glycolysis <cit.>. For this model, Pearson <cit.> has employed experimental studies to depict several sophisticated spot-type structures. Mazin et al. <cit.> have conducted an experiment using a computer simulation to investigate a range of far-from-equilibrium occurrences that emerge in a bistable Gray-Scott model. Many renowned authors <cit.> have evaluated the preceding model in which self-replicating structures have been noticed. McGough et al. <cit.> have conducted research on the bifurcation analysis of the patterns that are depicted in the model. In the research cited in <cit.>, the linear stability and periodic stationary solutions of this model have been investigated. Some analytical results of this model have also been explored <cit.>. Several prominent authors have studied the spatiotemporal chaos of the model in the research studies cited in <cit.> and <cit.>. Furthermore, Wei <cit.> has analyzed the pattern formation of the two-dimensional Gray-Scott model. The model has also been explored by Kai et al. <cit.> using an innovative technique known as the second-order explicit implicit methodology. In recent years, the nonlinear Galerkin finite element approach has become increasingly prevalent as a means to investigate the model <cit.>. Mach <cit.> has performed an in-depth examination of the quantitative evaluation of the model's numerical solution. In references <cit.> and <cit.>, the Gray-Scott reaction-diffusion system has been the subject of extensive wave modeling studies by eminent scholars. The simulation of the coupled model has been carried out by Owolabi et al. <cit.> using the higher-order Runge-Kutta method. The well-known Gray-Scott model's numerical findings have been calculated using the help of the hyperbolic B-spline <cit.>. In order to analyze the ionic version of the model while it is being affected by an electric field, the Galerkin method has been deployed <cit.>. With the use of the hybrid-asymptotic numerical method, Chen et al. <cit.> have investigated the model's dynamic behavior and stability. In the research study cited in <cit.>, special polynomials have been employed to numerically solve the Gray-Scott model. Han et al. <cit.> have conducted an exhaustive investigation on the three-dimensional Gray-Scott model. In the process of assessing the model, the cubic B-spline has proven to be of considerable use by Mittal et al <cit.>. 
In the disciplines of engineering and mathematical physics, the Weighted Residual Method is an approximation method that can be leveraged to solve problems. Analysis of structures, thermal expansion, fluid flow, mass transfer, and the electromagnetic potential, etc. are examples of prominent problem fields of concern. Several distinct Weighted Residual Method variations are within our reach. The Galerkin Weighted Residual Method (also known as GWRM) has been put into practice for over a century, long before the invention of computers. It is generally agreed that this strategy is one of the best and most often used approaches available. Lewis and Ward have provided a comprehensive overview of the process in the article that is referenced in <cit.>. This methodology has been effectively implemented in the well-known Black-Scholes model by Hossan et al. <cit.>. Shirin et al. <cit.> have employed the Galerkin method in conjunction with other special polynomials to analyze the Fredholm equations. In the research referred to in <cit.>, the approach was utilized to solve boundary value problems. In addition, this method has been used to perform a numerical calculation of the eigenvalues associated with the Sturm-Liouville problem <cit.>. There have been several successful uses of this method for problems involving metal beams and polygonal ducts with rounded edges <cit.>. The objective of this study is to employ the modified Galerkin Weighted Residual Method in conjunction with the appropriate special polynomials to numerically evaluate one-dimensional reaction-diffusion systems. To the best of our knowledge, such a study is not yet available in the literature. In addition, the study provides the validation necessary to use the approach on one-dimensional reaction-diffusion systems. The main merit and advantage of the study is that, by solving this type of system of equations, we will be able to analyze the behavior of the ecological system and forecast its future. The article is split up into four sections. In the third section, the approach's implications are shown while analyzing the aforementioned system. Numerical and graphical representations are included here as well. The fourth section contains some concluding remarks and a general discussion. § MATHEMATICAL FORMULATION Let us commence with the following system over the domain [-L, L]: ∂ M/∂ t = ε_1 ∂^2 M/∂ x^2 - f(M,N) + p(1-M), ∂ N/∂ t = ε_2 ∂^2 N/∂ x^2 + f(M,N) - (p+q)N. The boundary and initial conditions are as follows: M(-L,t)=M(L,t)=θ_0, N(-L,t)=N(L,t)=γ_0, and M(x,0)=M_0(x), N(x,0)=N_0(x). Let us assume the approximate solutions of System (<ref>) to be of the form M(x,t)=θ_0+∑_j=0^n c_j(t)B_j(x), N(x,t)=γ_0+∑_j=0^n d_j(t)B_j(x), where the B_j's are the modified Bernstein polynomials and c_j and d_j are the time-dependent coefficients. The first terms of the approximate solutions (<ref>) come from the boundary conditions of the system. The modified Bernstein polynomials are defined as follows: B_n,m(x) = \binom{m}{n} (x-L)^n (U-x)^{m-n} (x-L)(U-x)/(U-L)^m, n=0,1,2,…,m, where U and L are the upper and lower limits of x. The last terms of the approximate solutions (<ref>) vanish at the boundary points, since each B_j does. Therefore, the residual functions are
[ R_1(x,t)=∂M/∂ t-ε_1∂^2 M/∂ x^2 + f(M,N) - p(1-M); R_2(x,t)=∂N/∂ t-ε_2∂^2 N/∂ x^2- f(M,N) +(p+q)N; ]} Now we form the residual equations as: ∫_-L^LR_1(x,t) B_i(x)dx=0 ∫_-L^LR_2(x,t) B_i(x)dx=0 From the first residual equation, we can write ∫_-L^L[∂M/∂ t-ε_1∂^2 M/∂ x^2 + f(M, N) - p(1-M)]B_i(x)dx=0 Now we apply integration by parts in the above equation ∫_-L^L∂M/∂ tB_idx+∫__L^Lε_1∂M/∂ x∂ B_i/∂ xdx + ∫_-L^L f(M, N) B_idx-∫_-L^Lp(1-M)B_idx=ε_1[∂M/∂ xB_i]_-L^L Then we substitute solution (<ref>) in Equation (<ref>). Therefore, the equation becomes, ∫_-L^L∂/∂ t(θ_0+∑_j=0^nc_jB_j)B_idx+∫_-L^Lε_1∂/∂ x(θ_0+∑_j=0^nc_jB_j)∂ B_i/∂ xdx +∫_-L^Lf(θ_0+∑_j=0^nc_jB_j, γ_0+∑_j=0^nd_jB_j)B_i dx -∫_-L^Lp(1-(θ_0+∑_j=0^nc_jB_j))B_idx=ε_1[∂/∂ x(θ_0+∑_j=0^nc_jB_j)B_i]_-L^L or,∫_-L^L∂θ_0/∂ tB_idx+∫_-L^L∑_j=0^n∂ c_j/∂ tB_j B_idx+∫_-L^Lε_1∂θ_0/∂ x∂ B_i/∂ xdx+∑_j=0^nc_j∫_-L^Lε_1∂ B_j/∂ x∂ B_i/∂ xdx +∫_-L^Lf(θ_0+∑_j=0^nc_jB_j, γ_0+∑_j=0^nd_jB_j)B_i dx-∫_-L^LpB_idx+∫_-L^Lpθ_0 B_idx+∑_j=0^nc_j∫_-L^LpB_jB_idx =ε_1[∂θ_0/∂ xB_i]_-L^L+ε_1[∑_j=0^nc_j∂ B_j/∂ xB_i]_-L^L This finally becomes ∫_-L^L∂θ_0/∂ tB_idx+∫_-L^L∑_j=0^n∂ c_j/∂ tB_j B_idx+∫_-L^Lε_1∂θ_0/∂ x∂ B_i/∂ xdx+∑_j=0^nc_j∫_-L^Lε_1∂ B_j/∂ x∂ B_i/∂ xdx +∫_-L^LΓ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_i dx+∑_j=0^nd_j∫_-L^LΩ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_j B_idx -∫_-L^LpB_idx+∫_-L^Lpθ_0 B_idx +∑_j=0^nc_j∫_-L^LpB_jB_idx =ε_1[∂θ_0/∂ xB_i]_-L^L+ε_1[∑_j=0^nc_j∂ B_j/∂ xB_i]_-L^L The first terms on both sides, and third terms on the left-hand side Equation (<ref>) become zero because of boundary conditions. Therefore, the equation reduces to, ∑_j=0^nd c_j/dt∫_-L^LB_j B_idx+∑_j=0^nc_j(∫_-L^Lε_1dB_j/dxd B_i/dxdx+∫_-L^LpB_jB_idx-ε_1[d B_j/dxB_i]_-L^L) +∑_j=0^nd_j∫_-L^LΩ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_j B_idx=-∫_-L^LΓ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_i dx +∫_-L^LpB_idx -∫_-L^Lpθ_0 B_idx The derivative and non-derivative terms of Equation (<ref>) can be summarized via standard matrix notation as follows: [C_1]{dc_j/dt}+[K_1]{c_j}+[K_2]{d_j}=[F_1] where C_1_ij= ∫_-L^LB_j B_idx K_1_ij= ∫_-L^Lε_1dB_j/dxd B_i/dxdx+∫_-L^LpB_jB_idx-ε_1[d B_j/dxB_i]_-L^L K_2_ij= ∫_-L^LΩ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_j B_idx F_1_i= -∫_-L^LΓ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_i dx+∫_-L^LpB_idx -∫_-L^Lpθ_0 B_idx Here, K_1 and K_2 are n × n matrices, C_1 is n × n matrix, and F_1 is n × 1 matrix. The first two matrices K_1 and K_2 are called stiffness matrices. The other two matrices C_1 and F_1 are called forced matrix, and load vector respectively. 
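The following sketch indicates how these matrices could be assembled numerically. It is only an illustration of the structure of C_1 and of the linear (f-independent) part of K_1: the interval, degree and parameter values are placeholders borrowed from Test Problem 1 below, the derivative is approximated by central differences, and the boundary term of K_1 is dropped because every modified Bernstein polynomial vanishes at the end points. The basis helper from the previous sketch is repeated so that the snippet runs on its own.

```python
import numpy as np
from math import comb

L, U, m = 0.0, 2.0, 6            # placeholder interval and degree (Test Problem 1 below)
eps1, p = 0.01, 0.09             # placeholder values of eps_1 and p (Test Problem 1 below)

def basis(n, x):
    # modified Bernstein polynomial B_{n,m}(x); vanishes at x = L and x = U
    return comb(m, n) * (x - L)**n * (U - x)**(m - n) * (x - L) * (U - x) / (U - L)**m

def basis_dx(n, x, h=1e-6):
    # derivative by central differences; an analytic formula could be used instead
    return (basis(n, x + h) - basis(n, x - h)) / (2.0 * h)

# Gauss-Legendre quadrature mapped from [-1, 1] to [L, U]
xq, wq = np.polynomial.legendre.leggauss(40)
xq, wq = 0.5 * (U - L) * xq + 0.5 * (U + L), 0.5 * (U - L) * wq

B  = np.array([basis(j, xq)    for j in range(m + 1)])        # B[j, q] = B_j(x_q)
dB = np.array([basis_dx(j, xq) for j in range(m + 1)])

C1 = np.einsum('iq,jq,q->ij', B,  B,  wq)                     # ∫ B_j B_i dx
K1 = eps1 * np.einsum('iq,jq,q->ij', dB, dB, wq) + p * C1     # ∫ eps1 B_j' B_i' dx + p ∫ B_j B_i dx
```

The nonlinear matrices K_2 and K_3 and the load vectors depend on the current iterate of the solution and are therefore assembled inside the time loop, as sketched after the test problems below.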
Therefore, we apply the backward difference method on the first term of Equation (<ref>) and rearrange the resulting terms as follows: 2.0 [C_1]{c_j-c_j-1/Δ t}+[K_1]{c_j}+[K_2]{d_j}=[F_1] Or, (1/Δ t[C_1]+[K_1]){c_j}+[K_2]{d_j}=1/Δ t[C_1]{c_j-1}+[F_1] The second residual equation can be written as, ∫_-L^L[∂N/∂ t-ε_2∂^2 N/∂ x^2- f(M,N) +(p+q)N]B_i(x)dx=0 After employing integration by parts and then substitution of (<ref>) reduces the above equation, ∫_-L^L∂/∂ t(γ_0+∑_j=1^nd _jB_j)B_idx+∫_-L^Lε_2∂/∂ x(γ_0+∑_j=1^nd_jB_j)∂ B_i/∂ xdx+∫_-L^L(p+q)(γ_0+∑_j=0^nd_jB_j)B_idx -∫_-L^Lf(θ_0+∑_j=0^nc_jB_j, γ_0+∑_j=0^nd_jB_j)B_i dx=ε_2[∂/∂ x(γ_0+∑_j=0^nd_jB_j)B_i]_-L^L or, ∫_-L^L∂γ_0/∂ tB_idx+∫_-L^L∑_j=0^n∂ d_j/∂ tB_j B_idx+∫_-L^Lε_2∂γ_0/∂ x∂ B_i/∂ xdx+∑_j=0^nd_j∫_-L^Lε_2∂ B_j/∂ x∂ B_i/∂ xdx -∫_-L^LΠ(θ_0, γ_0, ∑_l=0^nd_lB_l)B_i dx-∑_j=0^nc_j∫_-L^LΦ( γ_0, ∑_l=0^nd_lB_l)B_j B_idx+∑_j=0^nd_j∫_-L^L(p+q)B_jB_idx =-∫_-L^L(p+q)γ_0B_idx+ε_2[∂γ_0/∂ xB_i]_-L^L+ε_2[∑_j=0^nd_j∂ B_j/∂ xB_i]_-L^L Since the first, and third terms on the left-hand side and the first term on the right-hand side of Equation (<ref>) become zero, the equation reduces to, ∑_j=0^nd d_j/dt∫_-L^LB_j B_idx+∑_j=0^nd_j(∫_-L^Lε_2dB_j/dxd B_i/dxdx+∫_-L^L(p+q)B_jB_idx-ε_2[d B_j/dxB_i]_-L^L) -∑_j=0^nc_j∫_-L^LΦ(γ_0, ∑_l=0^nd_lB_l)B_j B_idx =∫_-L^LΠ(θ_0, γ_0, ∑_l=0^nd_lB_l)B_i dx -∫_-L^L(p+q)γ_0B_idx The derivative and non-derivative terms of Equation (<ref>) can be summarized via standard matrix notation as follows: [C_2]{dd_j/dt}+[K_3]{c_j}+[K_4]{d_j}=[F_2] where C_2_ij= ∫_-L^LB_j B_idx K_3_ij= -∫_-L^LΦ(γ_0, ∑_l=0^nd_lB_l)B_j B_idx K_4_ij= ∫_-L^Lε_2dB_j/dxd B_i/dxdx+∫_-L^L(p+q)B_jB_idx-ε_2[d B_j/dxB_i]_-L^L F_2_i= ∫_-L^LΠ(θ_0, γ_0, ∑_l=0^nd_lB_l)B_i dx -∫_-L^L(p+q)γ_0 B_idx Here, K_3 and K_4 are n × n matrices, C_2 is n × n matrix, and F_2 is n × 1 matrix. They are called stiffness matrices, forced matrices, and load vectors respectively. The application of the backward difference method on the first term of Equation (<ref>) results in the following equation, (1/Δ t[C_2]+[K_4]){d_j}+[K_3]{c_j}=1/Δ t[C_2]{d_j-1}+[F_2] By assembling Equations (<ref>) and (<ref>), we get the following recurrent system, 2.0. [ (1/Δ t[C_1]+[K_1]){c_j}+[K_2]{d_j}=1/Δ t[C_1]{c_j-1}+[F_1]; [K_3]{c_j}+(1/Δ t[C_2]+[K_4]){d_j}=1/Δ t[C_2]{d_j-1}+[F_2] ]} To calculate the initial values of c_j and d_j, the initial conditions are set in Galerkin sense as follows, ∫_-L^LM(x,0)B_idx=∫_-L^LM_0(x)B_idx or, ∫_-L^L(θ_0+∑_j=1^nc_j(0)B_j(x))B_idx=∫_-L^LM_0(x)B_idx equivalently, ∑_j=0^nc_j(0) ∫_-L^LB_jB_idx=∫_-L^LM_0(x)B_idx-∫_-L^Lθ_0 B_idx and ∫_-L^LN(x,0)B_idx=∫_-L^LN_0(x)B_idx equivalently, ∫_-L^Lγ_0 B_idx+∫_-L^L∑_j=0^nd_j(0) B_jB_idx=∫_-L^LN_0(x)B_idx or, ∑_j=0^nd_j(0) ∫_-L^LB_jB_idx=∫_-L^LN_0(x)B_idx-∫_-L^Lγ_0 B_idx This process will help us to evaluate the numerical solutions of the nonlinear reaction-diffusion systems. § NUMERICAL EXAMPLES AND APPLICATIONS In this section, the previously described approach has been implemented into practice by solving a few examples of practical issues. Our methodology has been shown to be valid after being applied to the first test problem. The aforementioned procedure is then used, with a variety of parameters, to assess the subsequent test problems. The L_2 norm and L_∞ norm has been determined by the following expression, L_2 Norm=||M_Δ t-M_Δ t/2||_2 L_∞ Norm=||M_Δ t-M_Δ t/2||_∞ Where Δ t is the time increment and M_Δ t is the approximate solution obtained using time increment Δ t. 
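Continuing the assembly sketch above (it reuses the quadrature nodes xq, weights wq, basis values B and mass matrix C1 defined there, so it is meant to be read, or run, together with it), the Galerkin projection of the initial data derived at the end of the previous section reduces to two small linear solves. The profiles and boundary values below are those of Test Problem 1, quoted here only for illustration:

```python
import numpy as np

# boundary values and initial profiles of Test Problem 1 (see below)
theta0, gamma0 = 0.0, 1.0
M0 = lambda x: 0.01 * np.sin(np.pi * (x - U) / (U - L))
N0 = lambda x: 1.0 - 0.12 * np.sin(np.pi * (x - U) / (U - L))

b  = B @ wq                                    # ∫ B_i dx
c0 = np.linalg.solve(C1, B @ (wq * M0(xq)) - theta0 * b)   # coefficients c_j(0)
d0 = np.linalg.solve(C1, B @ (wq * N0(xq)) - gamma0 * b)   # coefficients d_j(0)
```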
Test Problem 1: Let us consider the system of the parabolic equations from the study of Manaa et. al.<cit.> 2.0. [ ∂ M/∂ t=ε_1 ∂^2 M/∂ x^2 + f(M,N) -(p+q)M; ∂ N/∂ t=ε_2∂^2 N/∂ x^2- f(M,N) +p(1-N); ]} where f(M,N)=M^2N and x∈ [a,b], t≥ 0. The boundary conditions and the initial conditions are considered as: 1.0. [ M(a,t)=M(b,t)=0; N(a,t)=N(b,t)=1; ]} and 1.0. [ M(x,0)=0.01 sin(π (x-b)/(b-a)); N(x,0)=1-0.12 sin(π (x-b)/(b-a)); ]} The domain of the model is [a, b]. The values of the parameters are taken as a=0, b=2, ε_1=ε_2=0.01, p=0.09, and q=-0.004. Here, to obtain the numerical approximation, the effect of boundary conditions is insignificant because all terms of B_j(x) are zero at the boundary points. We have employed the modified Galerkin method to the system of nonlinear partial differential equations (<ref>) and therefore obtained the system of ordinary differential equations with respect to t. In this stage, we have used the α family of approximation in order to convert the system into recurrent relations and then we applied Picard iterative procedure. To find the initial guess of the given system, we have applied the weighted residual procedure on the initial conditions (<ref>). Tables (<ref>) and (<ref>) provide the numerical results of concentrations M (x,t) and N(x,t) for various values of x. For computation, we have taken Δ t=0.1. The numerical approximations are derived at time levels t=1 and t=2. Throughout these tables, we have compared the results which we have obtained with the numerical approximations that have already been published in other well-known literature. The table demonstrates that our outcomes are reasonably comparable to those that have been published. It validates the accuracy of our approach to approximating the reaction-diffusion system numerically. The approximate results M(x, t) and N(x, t) of Equation (<ref>) are presented in the following figure (<ref>). In Figure (<ref>) we have employed a three-dimensional graphical depiction of approximate solutions of M(x,t) and N(x,t) at different time levels for better understanding. The graphical representations agree with the results that we have obtained in the tables. Eventually, it makes sense clearly that the method is more applicable to solving such nonlinear parabolic PDE systems. In Figure (<ref>), we have presented the error graph of M(x,t) and N(x,t) at time t=10, where the absolute errors are computed between two different time increments, say Δ t=0.2, Δ t=0.4 and Δ t=0.1, Δ t=0.2. The L_2 norm and L_∞ norm, are presented in Table (<ref>), which shows that the comparative errors are reduced significantly according to the reduction of the size of the time increments. Test Problem 2: The Gray-Scott Model is one of the most important models whose wave formations are similar to many waves formed in real life such as butterfly wings, gesticulation, damping, turning patterns, embryos, multiple spots, and so on <cit.>. Let us consider the following model, 2.0. [ ∂ M/∂ t=ε_1 ∂^2 M/∂ x^2 - f(M,N) +p(1-M); ∂ N/∂ t=ε_2∂^2 N/∂ x^2+ f(M,N) -(p+q)N; ]} where f(M,N)=MN^2. The boundary conditions and the initial conditions are considered as follows: 1.0. [ M(-50,t)=M(50,t)=1; N(-50,t)=N(50,t)=0; ]} and 1.0. [ M(x,0)=1-0.5 sin^100(π (x-50)/100); N(x,0)=0.25 sin^100(π (x-50)/100); ]} The domain of the model is [-50, 50]. The values of the parameters are taken as ε_1=1, ε_2=0.01, p=0.01, q=0.12 Here for computational purposes, we have used 7 modified Bernstein polynomials. 
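Before turning to the results, the following is a compact, end-to-end sketch of how the scheme of Section 2 might be realised for this test problem with NumPy. It uses backward differences in time and a Picard loop in which the whole reaction term is lagged — a simplification of the exact K_2/K_3 splitting derived above — together with Gauss–Legendre quadrature for all integrals; it is an illustrative reimplementation under these stated assumptions, not the code behind the reported tables and figures.

```python
import numpy as np
from math import comb

L, U, m = -50.0, 50.0, 6                       # domain and degree (7 basis functions)
eps1, eps2, p, q = 1.0, 0.01, 0.01, 0.12       # parameters of Test Problem 2
theta0, gamma0 = 1.0, 0.0                      # boundary values of M and N
f = lambda M, N: M * N**2                      # Gray-Scott reaction term

def basis(n, x):
    return comb(m, n) * (x - L)**n * (U - x)**(m - n) * (x - L) * (U - x) / (U - L)**m

xq, wq = np.polynomial.legendre.leggauss(200)
xq, wq = 0.5*(U - L)*xq + 0.5*(U + L), 0.5*(U - L)*wq
B  = np.array([basis(j, xq) for j in range(m + 1)])
dB = np.array([(basis(j, xq + 1e-6) - basis(j, xq - 1e-6)) / 2e-6 for j in range(m + 1)])

C = np.einsum('iq,jq,q->ij', B,  B,  wq)       # mass matrix (C1 = C2)
S = np.einsum('iq,jq,q->ij', dB, dB, wq)       # ∫ B_i' B_j' dx
b = B @ wq                                     # ∫ B_i dx
K1, K4 = eps1*S + p*C, eps2*S + (p + q)*C      # boundary terms vanish for this basis

# Galerkin projection of the initial conditions
M0 = 1.0 - 0.50*np.sin(np.pi*(xq - 50.0)/100.0)**100
N0 =       0.25*np.sin(np.pi*(xq - 50.0)/100.0)**100
c = np.linalg.solve(C, B @ (wq*M0) - theta0*b)
d = np.linalg.solve(C, B @ (wq*N0) - gamma0*b)

# backward difference in time, Picard iteration on the lagged reaction term
dt, n_steps = 0.1, 100                         # march to t = 10
A1, A2 = C/dt + K1, C/dt + K4
for _ in range(n_steps):
    c_old, d_old = c.copy(), d.copy()
    for _ in range(30):
        Mq, Nq = theta0 + B.T @ c, gamma0 + B.T @ d
        Fnl = B @ (wq * f(Mq, Nq))             # ∫ f(M, N) B_i dx
        c_new = np.linalg.solve(A1, C @ c_old/dt - Fnl + p*(1.0 - theta0)*b)
        d_new = np.linalg.solve(A2, C @ d_old/dt + Fnl - (p + q)*gamma0*b)
        converged = max(np.max(np.abs(c_new - c)), np.max(np.abs(d_new - d))) < 1e-12
        c, d = c_new, d_new
        if converged:
            break

M_final, N_final = theta0 + B.T @ c, gamma0 + B.T @ d   # concentrations at the nodes, t = 10
```

Because the reaction term M N^2 stays small for these parameter values, the lagged Picard iteration typically settles within a few sweeps per time step; rerunning with a halved time increment gives the quantities entering the Δt-refinement study reported below.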
By applying the modified Galerkin method, we have used the backward difference method to transform the system of ordinary differential equations into the recurrent relations which is therefore solved by Picard iterative procedure. Numerical data of M(x, t) and N(x, t) of (<ref>) are also presented in tabulated form in the following table at different time steps. The table shows that the numerical values of concentrations M and N change very slowly with varying values of x. It happens in every time step. The results obtained by applying our proposed scheme are presented in Figure (<ref>). Figure (<ref>) is deployed to provide pictorial representations of the numerical concentrations M and N at different time levels. The results that are obtained in the table are shown graphically. The graphs are obtained for different time levels. The graphical presentation shows that the changes in concentrations are sufficiently small for different time levels. The L_2 norm, and L_∞ norms, are presented in table (<ref>), which shows that the comparative errors are reduced significantly according to the reduction of the size of the time increments. However, the order of convergences increased noticeably along with the reduction of the time length. In Figure (<ref>), we have presented the error graph of M(x,t) and N(x,t) at time t=10, where the absolute errors are computed between two different time increments say Δ t=0.2, Δ t=0.4 and Δ t=0.1, Δ t=0.2. § CONCLUSION This research study has provided numerical approximations of nonlinear reaction-diffusion systems with specified boundary and initial conditions through the employment of the modified Galerkin method. To generate the trial solution, modified Bernstein Polynomials have been used. The simplification of the weighted residual leads to a system of ordinary differential equations which is then transformed into the recurrent relation by applying the backward difference formula. At this stage, we have used Picard's iterative procedure to approximate the trial solution. After successful derivation, we applied our proposed method to several models in order to test their applicability and effectiveness. We have solved and displayed the results both numerically and graphically. From those figures and numerical results, it is indisputable that our proposed method is an unconditionally stable, efficient, highly modular, and easily expandable method that can be applied to any type of system of nonlinear parabolic partial differential equations regardless of the type of the boundary conditions, type of non-linearity of the functions, coefficients are constants or function of independent variables. § ACKNOWLEDGEMENT The authors acknowledge that the research was supported and funded by Dhaka University research grant under UGC, Bangladesh. 100 r39 Biancalani, T., Fanelli, D., & Di Patti, F., (2010). Stochastic Turing patterns in the Brusselator model. Physical Review E, 81(4), 046215. r40 Wazwaz, A.-M.(2000). The decomposition method applied to systems of partial differential equations and to the reaction-diffusion Brusselator model. Applied mathematics and computation, 110(2-3).,251-264. r41 Muhammad Khan, F., Ali, A., Shah, K., Khan, A., Mahariq, I., et al. (2022). Analytical Approximation of Brusselator Model via LADM. Mathematical Problems in Engineering,2022, 01-14. r42 Alfifi, H. Y., Feedback control for a diffusive and delayed Brusselator model: Semi-analytical solutions. Symmetry, 13(4), 725. r43 Din, Q. (2018). 
A novel chaos control strategy for discrete-time Brusselator models. Journal of Mathematical Chemistry, 56(10), 3045-3075. r36 Ahmed, N., SS, T., Imran, M., Rafiq, M., Rehman, M., & Younis, M. (2019). Numerical analysis of auto-catalytic glycolysis model. AIP Advances, 9(8), 085213. r45 Ouannas, A., Batiha, I. M., Bekiros, S., Liu, J., Jahanshahi, H., Aly, A. A. & Alghtani, A. H. (2021). Synchronization of the glycolysis reaction-diffusion model via linear control law. Entropy, 23(11), 1516. r47 Iron, D., Wei, J., & Winter, M. (2004). Stability analysis of Turing patterns generated by the Schnakenberg model. Journal of mathematical biology, 49(4), 358-390. r48 Liu, P., Shi, J., Wang, Y., and Feng, X. (2013). Bifurcation analysis of reaction-diffusion Schnakenberg model. Journal of Mathematical Chemistry, 51(8),2001-2019. r49 Khan, F. M., Ali, A., Hamadneh, N., Abdullah & Alam, M. N. (2021). Numerical Investigation of Chemical Schnakenberg Mathematical Model. Journal of Nanomaterials, 2021, 1-8. r50 Beentjes, C. H. (2015). Pattern formation analysis in the Schnakenberg model (tech. rep.). Technical Report, University of Oxford, UK. r7 Gray, P. & Scott, S.(1983). Autocatalytic reactions in the isothermal, continuous stirred tank reactor: isolas and other forms of multistability. Chemical Engineering Science, 38(1), 29-43. r8 Sel'Kov, E. (1968). Self-Oscillations in Glycolysis 1. A Simple Kinetic Model. European Journal of Biochemistry, 4(1), 79-86. r9 Pearson, J. E. (1993). Complex patterns in a simple system. Science, 261(5118), 189-192. r10 Mazin, W., Rasmussen, K., Mosekilde, E., Borckmans, P. & Dewel, G. (1996). Pattern formation in the bistable Gray-Scott model. Mathematics and Computers in Simulation, 40(3-4), 371-396. r11 Doelman, A., Kaper, T. J., & Zegeling, P. A. (1997). Pattern formation in the one-dimensional Gray-Scott model. Nonlinearity, 10(2), 523. r15 Ueyama, D. (1999). Dynamics of self-replicating patterns in the one-dimensional Gray-Scott model. Hokkaido mathematical journal, 28(1), 175-210. r13 McGough, J. S. & Riley, K. (2004). Pattern formation in the Gray–Scott model. Nonlinear analysis: real world applications, 5(1), 105-121. r21 Doelman, A., Gardner, R., A., & Kaper, T., J. (1998). Stability analysis of singular patterns in the 1D Gray-Scott model: a matched asymptotics approach. Physica D: Nonlinear Phenomena, 122(1-4), 1-36. r17 Dkhil, F., Logak, E., & Nishiura, Y. (2004). Some analytical results on the Gray–Scott model. Asymptotic Analysis, 39(3-4), 225-261. r18 Nishiura, Y., & Ueyama, D. (2001). Spatio-temporal chaos for the Gray–Scott model. Physica D: Nonlinear Phenomena, 150(3-4), 137-162. r19 Nishiura, Y., & Ueyama, D. (2000). Self-replication, self-destruction, and spatio-temporal chaos in the Gray-Scott model. Physical Review Letters, 15(3), 281-289. r20 Wei, J. (2001). Pattern formations in two-dimensional Gray–Scott model: existence of single-spot solutions and their stability. Physica D: Nonlinear Phenomena, 148(1-2), 20-48. r22 Zhang, K., Wong, J. C.-F. & Zhang, R., (2008). Second-order implicit–explicit scheme for the Gray–Scott model. Journal of Computational and Applied Mathematics, 213(2), 559-581. r2 Mach, J. (2012). Application of the nonlinear Galerkin FEM method to the solution of the reaction diffusion equations. r1 Zhang, R., Zhu, J., Loula, A. F. & Yu, X. (2016). A new nonlinear Galerkin finite element method for the computation of reaction diffusion equations. Journal of Mathematical Analysis and Applications, 434(1), 136-148. r5 Mach, J. 
(2010). Quantitative analysis of numerical solution for the Gray-Scott model. SNA’10, 110. r3 Singh, S. (2023). Numerical investigation of wave pattern evolution in Gray–Scott model using discontinuous Galerkin finite element method. Advances in Mathematical and Computational Modeling of Engineering Systems, 47-58. r12 Tok-Onarcan, A., Adar, N., & Dag, I. (2019). Wave simulations of Gray-Scott reaction-diffusion system, 42(16), 5566-5581. r14 Owolabi, K. M. & Patidar, K. C. (2014). Numerical solution of singular patterns in one-dimensional Gray-Scott-like models. International Journal of Nonlinear Sciences and Numerical Simulation, 15(7-8), 437-462. r4 Kaur, N. & Joshi, V. (2022). Numerical solution to the Gray-Scott Reaction-Diffusion equation using Hyperbolic B-spline. Journal of Physics: Conference Series, 2267(1), 012072. r6 Thornton, A. & Marchant, T. R. (2008). Semi-analytical solutions for a Gray–Scott reaction–diffusion cell with an applied electric field. Chemical engineering science, 63(2), 495-502. r24 Chen, W., & Ward, M. J. (2011). The stability and dynamics of localized spot patterns in the two-dimensional Gray–Scott model. SIAM Journal on Applied Dynamical Systems, 10(2), 582-666. r23 Joshi, V. & Kaur, N. (2020). Numerical Solution of Gray Scott Reaction-Diffusion Equation using Lagrange Polynomial. Journal of Physics: Conference Series, 1531(1), 012058. r25 Che, H., Wang, Y.-L., & Li, Z.-Y. (2022). Novel patterns in a class of fractional reaction–diffusion models with the Riesz fractional derivative. Mathematics and Computers in Simulation, 202, 149-163. r26 Mittal, R., Kumar, S. & Jiwari, R. (2022). A cubic B-spline quasi-interpolation algorithm to capture the pattern formation of coupled reaction-diffusion models. Engineering with Computers, 38(2), 1375-1391. r27 Lewis, P. E. & Ward, J. P. (1991). The finite element method: principles and applications, Addison-Wesley Wokingham. r28 Hossan, M. S., Hossain, A. S. & Islam, M. S. (2020). Numerical Solutions of Black-Scholes Model by Du Fort-Frankel FDM and Galerkin WRM. International Journal of Mathematical Research, 9(1), 1-10. r29 Shirin, A., Islam, M., et al. (2013). Numerical solutions of Fredholm integral equations using Bernstein polynomials. arXiv preprint arXiv:1309.6311. r30 Cicelia, J. E. (2014). Solution of weighted residual problems by using Galerkin’s method. Indian Journal of Science and Technology, 7(3), 52-54. r31 Farzana, H., Islam, M. S., & Bhowmik, S. K. (2015). Computation of eigenvalues of the fourth order Sturm-Liouville BVP by Galerkin weighted residual method. British Journal of Mathematics and Computer Science, 9, 73-85. r32 Kang, Z., Wang, Z., Zhou, B. & Xue, S. (2020). Galerkin weighted residual method for axially functionally graded shape memory alloy beams. Journal of Mechanics, 36(3), 331-345. r33 Arani, A. A. A., Arefmanesh, A., & Niroumand, A. (2018). Investigation of fully developed flow and heat transfer through n-sided polygonal ducts with round corners using the Galerkin weighted residual method. Int. J. Nonlinear Anal. Appl, 9(1), 175-193. r51 Temam, R. (2012). Infinite-dimensional dynamical systems in mechanics and physics (Vol. 68). Springer Science & Business Media. r37 Manaa, S. A., Rasheed, J. (2013). Successive and finite difference method for Gray Scott model. Science Journal of University of Zakho, 1(2), 862-873. r38 Jiwari, R., Singh, S., & Kumar, A. (2017). Numerical simulation to capture the pattern formation of coupled reaction-diffusion models. 
Chaos, Solitons & Fractals, 103, 422-439.
http://arxiv.org/abs/2307.06285v1
20230712163107
Smoothed Analysis of the Komlós Conjecture: Rademacher Noise
[ "Elad Aigner-Horev", "Dan Hefetz", "Michael Trushkin" ]
math.CO
[ "math.CO", "cs.DM", "cs.IT", "math.IT", "math.PR" ]
The discrepancy of a matrix M ∈ℝ^d × n is given by (M) := min_∈{-1,1}^nM_∞. An outstanding conjecture, attributed to Komlós, stipulates that (M) = O(1), whenever M is a Komlós matrix, that is, whenever every column of M lies within the unit sphere. Our main result asserts that (M + R/√(d)) ≤ 1 + O(d^-1/2) holds asymptotically almost surely, whenever M ∈ℝ^d × n is Komlós, R ∈ℝ^d × n is a Rademacher random matrix, d = ω(1), and n = ω̃(d^5/4). We conjecture that n = ω(d log d) suffices for the same assertion to hold. The factor d^-1/2 normalising R is essentially best possible. § INTRODUCTION The discrepancy of a matrix M ∈ℝ^d × n is given by (M) := min_∈{-1,1}^nM_∞. A celebrated result in this venue is the so-called “six standard deviations" result, put forth by Spencer <cit.>, asserting that if M_∞≤ 1 and d = n, then (M)≤ 6√(n). More generally, if d ≥ n, then (M) = O(√(n log (2d/n))) is known to hold <cit.>. Spencer's result is essentially tight as n × n matrices M satisfying (M) = Ω(√(n)) are known to exist <cit.>. An outstanding conjecture in Discrepancy Theory, attributed to Komlós, stipulates that (M) = O(1) holds, whenever M ∈ℝ^d × n has each of its columns satisfying _2 ≤ 1; we refer to the latter as a Komlós matrix[Komlós' restriction on the matrix is more stringent than that of Spencer.]. Dimension-free (i.e., constant) bounds on the discrepancy of matrices are of special interest as it is NP-hard to distinguish between these and those having Ω(√(n)) discrepancy <cit.>. Given a hypergraph H, taking M = M_H to be its e(H) × v(H) incidence matrix retrieves the well-known (see, e.g., <cit.>) notion of combinatorial discrepancy, given by (H) := min_χmax_e ∈ E(H)| ∑_v ∈ eχ(v)|, where the minimisation ranges over all mappings χ: V(H) →{-1,1}. Beck and Fiala <cit.> proved that if H has the property that each of its vertices lies in at most t edges, i.e., each column of M_H satisfies _2 ≤√(t), then (H) ≤ 2t-1, and conjectured that (H) = O(√(t)). Up to the √(t)-scaling, the Beck-Fiala conjecture is a special case of the Komlós conjecture. The best known upper bounds for the conjectures put forth by Komlós and by Beck-Fiala are O(√(log n)) and O(√(t log n)), respectively, both obtained by Banaszczyk <cit.> in 1998. Despite this partial progress, it seems that these two conjectures are out of reach of current techniques; consequently, the investigation of these conjectures in more hospitable settings, so to speak, is well-justified. One line of research that has attracted much attention of late calls for the determination of (M) whenever M is a random matrix; in this line of research one is interested in the so-called average-case discrepancy or the discrepancy of typical matrices, where `typical' depends on the specific distribution chosen for M. In this realm, we further distinguish between two strands of study; the first pertains to gaussian matrices[ Matrices with each entry an i.i.d. copy of 𝒩(μ,σ^2); if μ = 0 and σ =1, then the matrix is called a standard gaussian matrix.] and the second deals with discrete random matrices. For standard gaussian matrices M ∈ℝ^d × n, the estimate (M) = Θ(2^-n/d√(n)) holds asymptotically almost surely (a.a.s. hereafter) for a wide range of values of d and n; in particular (M) = O(1) holds as soon as n ≥ C d log d, where C > 0 is an appropriate constant. The case d = O(1) of the above equality was settled by Costello <cit.>.
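Before continuing the overview of prior work, a purely illustrative aside (not part of the paper's argument): on toy instances the two quantities just discussed can be computed by brute force, the discrepancy by enumerating all sign vectors and the smoothed quantity by drawing a Rademacher matrix R and evaluating the discrepancy of M + R/√d. The dimensions below are far too small to reflect the asymptotic regime of the main theorem; the sketch only makes the definitions concrete.

```python
import itertools
import numpy as np

def disc(M):
    """Brute-force discrepancy: min over x in {-1,1}^n of ||Mx||_inf.
    Exponential in n, so only usable on tiny instances."""
    d, n = M.shape
    best = np.inf
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        best = min(best, np.max(np.abs(M @ np.array(signs))))
    return best

rng = np.random.default_rng(0)
d, n = 4, 14

# a toy Komlós matrix: columns normalised to unit Euclidean length
M = rng.standard_normal((d, n))
M /= np.linalg.norm(M, axis=0)

# its Rademacher-smoothed counterpart M + R / sqrt(d)
R = rng.choice((-1.0, 1.0), size=(d, n))
print(disc(M), disc(M + R / np.sqrt(d)))
```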
Meka, Rigollet, and Turner <cit.> extended the result of Costello by allowing ω(1) = d =o(n). In fact, their result accommodates any (matrix entry) distribution whose density function f is symmetric, has a fourth moment, and is square-integrable. The regime d = Θ(n) was studied in <cit.>. Proceeding to discrete random matrices, given d ≥ n ≥ t, Ezra and Lovett <cit.> proved that (M) = O(√(t log t)) holds with probability at least 1 - exp(-Ω(t)), whenever each column of M is sampled independently and uniformly at random from all 0/1-vectors containing precisely t non-zero entries. They also proved that (M) =O(1) holds a.a.s. provided that d ≥ t and n ≫ d^t. For Bernoulli matrices[Each entry is an independent copy of Ber(p) for p := p(n,d).] M, Altschuler and Niles-Weed <cit.> proved that (M) ≤ 1 holds a.a.s. for any p:= p(n), whenever n ≥ C d log d, where C>0 is an absolute constant[Discrepancy of Poisson matrices is also studied in <cit.>; Bernoulli matrices are also studied in <cit.>.]; their result is tight in terms of the lower bound on n. Given a seed matrix M ∈ℝ^d × n as well as a distribution _d × n, set over ℝ^d × n, we refer to the (random) matrix M+R with R ∼_d × n as a random perturbation of M. Following the aforementioned results pertaining to the discrepancy of truly random matrices, the study of the discrepancy of randomly perturbed ones is the next natural step. The study of the effect of random noise is widespread in Mathematics and Computer Science. Spielman and Teng <cit.> coined the term smoothed analysis to indicate the analysis of algorithms executed on randomly perturbed inputs. In high dimensional probability (see, e.g., <cit.>), the study of randomly perturbed matrices dates back to the works of Tao and Vu <cit.>. In combinatorics, the study of randomly perturbed (hyper)graphs has witnessed a burst of activity in recent years; see, e.g., <cit.>. A perturbed/smoothed version of the Komlós conjecture is established in <cit.>. There it is shown that (M+R) ≤1/poly(d) holds a.a.s. whenever M ∈ℝ^d × n is a Komlós matrix, R ∈ℝ^d × n is a matrix whose entries are i.i.d. copies of 𝒩(0,σ^2/d) and n = ω(d log d) ·σ^-4/3. §.§ Our contribution A random variable X is said be Rademacher if X assumes the values -1 and 1, each with probability 1/2. A matrix R ∈ℝ^d × n is said to form a Rademacher matrix if its entries are independent Rademacher random variables. Our main result reads as follows. Let d = ω(1) and n = ω((d^5log d)^1/4) be integers. Then, (M +R/√(d)) ≤ 1 + 6d^-1/2 holds a.a.s. whenever M ∈ℝ^d × n is a Komlós matrix and R ∈ℝ^d × n is a Rademacher matrix. We conjecture that the bound imposed on n in Theorem <ref> can be mitigated as follows. Let d =ω(1) and n = ω(d log d) be integers. Then, (M +R/√(d)) ≤ 1 + 1/poly(d) holds a.a.s. whenever M ∈ℝ^d × n is a Komlós matrix and R ∈ℝ^d × n is a Rademacher matrix. Normalisation factor - lower bound. In Theorem <ref>, the Rademacher matrix R is normalised by a d^-1/2 factor. We claim that this normalisation factor is warranted. Indeed, requiring that _2 ≤ 1 holds for every column of the random perturbation is a natural constraint to impose, for such a restriction guarantees that the columns of the perturbation do not dominate the columns of M. Writing k := k(d) to denote the normalisation factor and letting be any column vector of R/k, we see that 1 ≥_2^2 = ∑_i=1^d 1/k^2 = d/k^2 implies k ≥√(d). Normalisation factor - upper bound. Let k be as defined in Remark <ref>. 
Enlarging k is of interest as this reduces the dominance of the random perturbation further, allowing one to come ever so close to Komlós' conjecture. Alas, in the setting of Theorem <ref>, there is an upper bound on the normalisation factor k. To see this, note that given k and a discrepancy bound Δ, the stipulation that (M+R/k) ≤Δ is equivalent to requiring the existence of a vector ∈{-1,1}^n for which (R)_i ∈[- k (M)_i - k Δ, - k (M)_i + k Δ] holds for every i ∈ [d]. Given ∈{-1, 1}^n and i ∈ [d], the term (R )_i has the same distribution as the sum ∑_i=1^n r_i, whose summands are independent Rademacher random variables. As such, (R)_i ∈ [- ω(√(n)), ω(√(n))] asymptotically almost surely. Consequently, a prerequisite for (<ref>) holding a.a.s. is that [- k (M)_i - k Δ, - k (M)_i + k Δ] ⊆ [- ω(√(n)), ω(√(n))] holds for every i ∈ [d]. Assuming that Δ is relatively small (as one naturally aims to have), the latter amounts to essentially requiring that k ≤√(n)M_∞^-1. The smaller the value of M_∞ we obtain, the less restrictive on k this inequality becomes. In our current state of knowledge (see Observation <ref> and Remark <ref> below), the best we can ensure are vectors ∈{-1,1}^n for which M_∞ = O(√(log d)). Such a vector then yields the upper bound k = O(√(n/log d)). It follows that for n = ω(d log d) (see Conjecture <ref>), taking k to be roughly √(d) is essentially best possible. § RELEVANT VECTORS In this section, we define a family of so-called relevant vectors from {-1,1}^n; we aim to prove that there exists a vector ∈ for which (M + R/√(d)) _∞≤ 1 + 6 d^-1/2 holds a.a.s., thus proving Theorem <ref>. Roughly put, these are taken from the support of a distribution , denoted , called the truncated Gram-Schmidt distribution, defined below in Lemma <ref>. Following the definition of , we collect several properties of the aforementioned relevant vectors, facilitating subsequent arguments. In particular, given a Komlós matrix M ∈ℝ^d × n, the following advantageous properties are proved. Non-triviality: || ≥ 2; Similar 2-norm: M_2 = Θ(√(d)) for every ∈; Low discrepancy: M_∞ = O(√(log d)) for every ∈; Equidistant: Hamming distance between any distinct ,∈ is approximately n/2; Uncorrelated: | M, M| = O(√(d log d)) for every ,∈. A real random variable X is said to be α-subgaussian[Subgaussian random variables admit several equivalent characterisations; see, e.g., <cit.> for details.] if it satisfies [|X| ≥ t] ≤ 2 exp(-(t/α)^2) for every t > 0. A random vector ∈ℝ^n is said to be α-subgaussian if , is α-subgaussian for every ∈𝕊^n-1, see, e.g., <cit.>. The following is one of the main results of <cit.>.  <cit.> Let ^(1),…,^(n)∈ℝ^n satisfy ^(i)_2 ≤ 1 for every i ∈ [n]. Applying the Gram-Schmidt walk sampling algorithm[See <cit.> for details.] over the given vectors outputs a random vector ∈{-1,1}^n such that the vector ∑_i=1^n _i ^(i) is 1-subgaussian. The distribution (implicitly) defined in Theorem <ref> is truncated in <cit.> so as to produce the following distribution over the vectors in {-1,1}^n.  <cit.> Let M ∈ℝ^d × n be a Komlós matrix. Then, there exists a constant C_<ref> > 0 as well as a distribution , set over the vectors in {-1,1}^n, such that the following two properties hold simultaneously. * M _2 = Θ(√(d)) holds for every ∈. * _∼[ |, | ≥ t ] ≤ d^C_<ref>exp(-t^2/8) and _∼[ | M, | ≥ t ] ≤ d^C_<ref>exp(-t^2/8) both hold whenever ∈𝕊^n-1, ∈𝕊^d-1, and t > 0. Shallow vectors. Let M be a Komlós matrix. 
The following observation shows that a vector sampled from is with high probability a witness to the fact that the discrepancy of M is not too large. Let M ∈ℝ^d × n be a Komlós matrix. Then, there exists an arbitrarily large yet fixed constant C_<ref> such that _∼[M_∞≤ C_<ref>√(log d)] ≥ 1 - d^- C_<ref> holds. Set C ≫ C_<ref>. Given i ∈ [d], let _i∈𝕊^d-1 be the unit vector whose ith entry is equal to one and all its other entries are set to zero. As M,_i = (M)_i, it follows by Lemma <ref> that _∼[|(M)_i| ≥ C √(log d)] = _∼[| M,_i| ≥ C √(log d)] ≤ d^C_<ref>exp(- C^2 log d /8) = d^C_<ref> - C^2/8. A union-bound over the d entries of M, coupled with our choice C ≫ C_<ref>, implies the existence of C_<ref> and concludes the proof of the observation. Observation <ref> implies, in particular, that (M) = O(√(log d)) holds, whenever M ∈ℝ^d × n is a Komlós matrix. This improves Banaszczyk's bound <cit.> whenever log d ≪log n. Given a Komlós matrix M ∈ℝ^d × n and a non-negative real number α, a vector ∈ is said to be (α,M)-shallow if M_∞≤α√(log d). Antipodal vectors. For two vectors ,∈{-1,1}^n, let Diff(,) = {i ∈ [n]: _i ≠_i}; note that |Diff(,) | is the Hamming distance between and . Let ∈{-1, 1}^n be arbitrary. Then, there exists an arbitrarily large yet fixed constant C_<ref> such that _∼[ | |Diff(,)| - n/2 | ≤ C_<ref>√(n log d)] ≥1 - d^- C_<ref>. Given any two vectors , ∈{-1, 1}^n, note that |,| = |∑_i ∈ [n] (,)_i_i + ∑_i ∈(,)_i_i| = |(n - |(,)|) -|(,)| | = | n - 2|(,)| |. Therefore, for any t ≥ 0, we have that |Diff(,) | = n/2 ± t if and only if |,| ≤ 2t. Fix C ≫ C_<ref>. Fix ∈{-1, 1}^n and note that /√(n)∈𝕊^n-1. It then follows by Lemma <ref> that _∼[|, /√(n)| ≥ C √(log d )] ≤ d^C_<ref>exp( - C^2 log d/8 ) = d^C_<ref> - C^2/8. Our choice of C ≫ C_<ref> coupled with (<ref>), implies the existence of C_<ref> and concludes the proof of the observation. For a non-negative real number α, two distinct vectors ,∈ are said to be α-antipodal if (,) = n/2 ±α√(n log d). Uncorrelated vectors. The following observation provides a uniform bound over the inner products of all pairs of vectors of the form M and M, which holds with high probability whenever ,∼. Let M ∈ℝ^d × n be a Komlós matrix and let ∈. Then, there exists an arbitrarily large yet fixed constant C_<ref> such that _∼[ | M,M| ≤ C_<ref>√(d log d)] ≥ 1 - d^- C_<ref>. Fix ∈ and let = M. Set = /_2 and note that ∈𝕊^d-1. It then follows by Lemma <ref> that _∼[| M, | ≥ C √(log d)] ≤ d^C_<ref>exp(- C^2 log d/8) = d^C_<ref> - C^2/8 holds for any constant C > 0. Since ∈, it follows by Lemma <ref> that _2 = Θ(√(d)). Taking C to be sufficiently large with respect to C_<ref>, the existence of C_<ref> is then implied by (<ref>); this concludes the proof of the observation. Given a Komlós matrix M ∈ℝ^d × n and a non-negative real number α, two distinct vectors ,∈ are said to be (α,M)-uncorrelated if | M,M| ≤α√(d log d). Relevant vectors. Let M ∈ℝ^d × n be a Komlós matrix and let α be a non-negative real number. A subset 𝒮⊆ is said to be (α,M)-relevant if (R.1) all its members are (α,M)-shallow; (R.2) all pairs of distinct members of 𝒮 are α-antipodal and (α,M)-uncorrelated. The following claim is a direct consequence of Observations <ref>, <ref>, and <ref>. Let d and n be positive integers satisfying d = O(n). Then, there exists an arbitrarily large yet fixed constant C_<ref> such that contains a (C_<ref>,M)-relevant subset of size at least 2. Set C_<ref> = max{C_<ref>, C_<ref>, C_<ref>}. 
It then follows by Observation <ref> that _∼[M_∞≤ C_<ref>√(log d)] ≥ 1 - d^- C_<ref>; in particular, there exists some vector _1 ∈ which is (C_<ref>, M)-shallow. Subsequently, it follows by Observations <ref>, <ref>, and <ref> that there exists a vector _2 ∈ such that _2 is (C_<ref>, M)-shallow, _1, _2 are (C_<ref>, M)-uncorrelated, and _1, _2 are C_<ref>-antipodal; since d is not too large with respect to n, the latter also implies that _1 ≠_2. We conclude that {_1, _2} is (C_<ref>,M)-relevant. § PROOF OF THE MAIN RESULT This section is divided into three subsections. The first two subsections contain auxiliary results which facilitate our proof of Theorem <ref>; the latter appears in the third subsection. Throughout this section we encounter binomial coefficients of the form nn/2+t, where n ∈ℕ is even and t ∈ℤ. Owing to the symmetry nn/2+t = nn/2-t, whenever it is convenient, we assume that t ≥ 0. §.§ Approximation of near-centre binomial coefficients A key tool in our approach is the following approximation result for binomial coefficients nk, where k is “close” to n/2. Let n be a sufficiently large even integer and let t ∈ℤ be such that |t| = o(n) and n+t/2∈ℤ. Then, nn+t/2 = nn/2exp(- (1/2 + o(1)) t^2/n + Θ(t^3/n^2)) holds. Up to small modifications, Proposition <ref> and its proof can be found n <cit.>; we include the proposition and its proof here as these modifications are important for our purposes. Proposition <ref> Let Q = nn+t/2/nn/2 = (n/2)! (n/2)!/(n+t/2)! (n-t/2)! = ∏_j=1^t/2n/2 - j+1/n/2+j. Therefore log Q = ∑_j=1^t/2log(1 - 4j-2/n+2j) = ∑_j=1^t/2[- 4j-2/n+2j + Θ(j^2/n^2) ], where for the last equality we use the expansion log (1-x) = -x + Θ(x^2), holding whenever x ∈ (0,1). Substituting the identity 4j-2/n+2j = 4j/n - 8j^2/n(n+2j) - 2/n+2j = 4j/n - 2/n+2j + Θ(j^2/n^2) into (<ref>) yields log Q = -∑_j=1^t/24j/n +∑_j=1^t/22/n+2j+ ∑_j=1^t/2Θ( j^2/n^2) = -t/n - t^2/2n + ∑_j=1^t/22/n+2j+ Θ(t^3/n^2), where for the last equality we employ the identity ∑_i=1^k i = k(k+1)/2 and the estimate ∑_i=1^k i^2 = Θ(k^3). The sum appearing on the right hand side of (<ref>) satisfies t/n+t = ∑_j=1^t/22/n+t≤∑_j=1^t/22/n+2j≤∑_j=1^t/22/n = t/n. Since 1 ≤ t = o(n), it follows that ∑_j=1^t/22/n+2j = (1 + o(1)) t/n = t/n + o(t^2/n). Combining (<ref>) and (<ref>) then implies that log Q = - (1/2 + o(1)) t^2/n + Θ(t^3/n^2) as required. §.§ Core probabilities The main results of this section are Lemmas <ref> and <ref> stated below. Roughly put, these two lemmas deal with determining the probabilities of events of the form , = 2t, where is a Rademacher vector, ∈{-1,1}^n, and t ∈ℤ; we refer to such probabilities as core probabilities. The focus on the inner product being even is owing to the fact that ∑_i=1^n _i = #_1() - #_-1() holds for any vector ∈{-1,1}^n. Assuming n is even, there exists an integer y such that #_1() = n/2 + y leading to ∑_i=1^n _i = n/2 +y - (n/2 -y) = 2y. The following is then implied. Let n be a positive even integer and let t ∈ℤ. Then, |S_t| = nn/2+t, where S_t := {∈{-1,1}^n: ∑_i=1^n _i = 2t}. Let ℰ_n = {∈{-1,1}^n: #_1() ≡ 0 2} denote the set of so-called even members of {-1,1}^n. The first main result of this section reads as follows. Let n ∈ℕ be even, let be a vector sampled uniformly at random from ℰ_n, let ∈{-1,1}^n, and let t ∈ℤ be such that 2t ∈,. Then, [, = 2t] = 1/2^n-1nn/2+t. Given two vectors ,∈{-1,1}^n, let α(,) = (n- |(,)|)/n, that is, α(,) n denotes the number of indices over which these two vectors coincide. The second main result of this section reads as follows. 
Let n ∈ℕ be even, let be a vector sampled uniformly at random from ℰ_n, let ,∈{-1,1}^n satisfying #_1() ≡#_1() 2 be given, and let α = α(,). Then, for any pair of integers t_x and t_y satisfying 2t_x ∈, and 2t_y ∈,, the equality [, =2t_x, , =2t_y] = 1/2^n-1α nα n + t_x + t_y/2(1-α)n(1-α) n + t_x - t_y/2 holds. Prior to proving Lemmas <ref> and <ref>, we collect several auxiliary results. Let n ∈ℕ be even and let ,∈{-1,1}^n satisfying #_1() ≡#_1() 2 be given. Then, |(,)| is even. Let A := A(,) = |{i ∈ [n] : _i = _i = 1}|, let B := B(,) = |{i ∈ [n] : _i = _i = -1}|, let C := C(,) = |{i ∈ [n] : _i = 1, _i = -1}|, and let D := D(,) = |{i ∈ [n] : _i = -1, _i = 1}|. Suppose for a contradiction that |(,)| is odd. Since |(,)| = C + D, we may assume without loss of generality that C is even and D is odd. Since, moreover, n = A + B + C + D is even, we may further assume without loss of generality that A is even and B is odd. It then follows that #_1() = A + C is even, whereas #_1() = A + D is odd; this contradicts the premise of the observation and concludes its proof. Let n ∈ℕ, let t ∈ℤ, and let ,∈{-1,1}^n be vectors satisfying ∑_i=1^n _i = 2t = ∑_i=1^n _i. Then, |(,)| is even. Set O = { i∈(,): _i =1} and M = {i ∈(,): _i = -1}. Then 2t = ∑_i=1^n _i = ∑_i ∉(,)_i + ∑_i ∈ O( _i-2 ) + ∑_i ∈ M( _i+2 ) = ∑_i=1^n _i - 2|O| +2|M| = 2t - 2|O| + 2|M|. It follows that |O| = |M|, and thus |(,)| = |O| + |M| is even. Let ∈{-1,1}^n and let ∈ℰ_n. If |(,)| is even, then ∈ℰ_n. The proof is via induction on |(,)|. If |(,)| = 0, then = ∈ℰ_n. Suppose then that |(,)| = 2 and let i,j ∈ [n] be the (sole) two distinct indices over which and differ. The equality #_1() = #_1() - (_i + _j) coupled with the assumption that #_1() is even as well as the fact that _i + _j ∈{-2,0,2}, imply that #_1() is even as well and thus ∈ℰ_n as required. For the induction step, consider ∈ℰ_n and ∈{-1,1}^n satisfying |(,)| = 2m +2 for some m ∈ℕ and assume that the claim holds true for any pair of vectors ∈ℰ_n and ∈{-1,1}^n satisfying |(,)| = 2k for some positive integer k ≤ m. Let 1 ≤ i < j ≤ n be any two distinct indices for which _i ≠_i and _j ≠_j both hold. The vector ' := (_1,…,_i-1,-_i,_i+1,…,_j-1,-_j,_j+1,…,_n) satisfies |(,')| = 2; hence, ' ∈ℰ_n holds by the induction hypothesis. Since, moreover, |(,')| = 2m, it follows by the induction hypothesis that ∈ℰ_n. This concludes the proof of the lemma. We are now in position to prove the first main result of this section, namely Lemma <ref>. Lemma <ref> Call a vector ∈ℰ_n satisfying , = 2t valid. Since |ℰ_n| = 2^n-1, it suffices to prove that there are nn/2+t valid vectors. In light of (<ref>), it remains to prove that there is a bijection from the set of valid vectors to the set S_t. Given a valid vector , define ϕ() := (_1_1,…,_n_n) ∈{-1,1}^n. The validity of implies that ∑_i=1^n ϕ()_i = 2t and thus ϕ() ∈ S_t. To see that ϕ(·) is injective, note that given two different valid vectors and ', there exists an index i ∈ [n] such that _i ≠'_i. As is fixed, this compels that ϕ()_i = _i _i ≠'_i _i = ϕ(')_i so that ϕ()≠ϕ('). To prove that ϕ(·) is surjective, fix ∈ S_t and define the vector ∈{-1,1}^n whose entries are uniquely determined by the equalities _i = _i _i, that is, for every i ∈ [n], if _i = _i, then _i = 1, and otherwise _i = -1. It is evident that, if is valid, then = ϕ(). Since, moreover, ∈ S_t, it suffices to prove that that ∈ℰ_n. To that end, let be an arbitrary valid vector. Since ∑_i=1^n ϕ()_i = 2t = ∑_i=1^n _i, it follows by Lemma <ref> that |(,ϕ())| is even. 
Note that _i = _i whenever i ∉(,ϕ()), and _i = -_i whenever i ∈(,ϕ()). Consequently, |(,)| is even and thus is even by Lemma <ref>. We conclude this section with a proof of Lemma <ref>. Lemma <ref> Since #_1() ≡#_1() 2 holds by assumption, it follows by Observation <ref> that |(,)| = 2m for some non-negative integer m. The set (,) having even cardinality has two useful implications. The first is that n - |(,)| is an even integer; this on account of n being even by assumption. Using the previously introduced notation α n := α(, ) n := n - |(,)|, we infer that α n and (1-α)n are both even integers. The second implication is that , = , + ℓ for some ℓ∈{4k : k ∈ℤ, -m ≤ k ≤ m}. Indeed, reaching , starting from , entails iterating over each member of the even-sized set (,) and adding or subtracting two from the current value accumulated thus far. If, additionally, , = 2t_x and , = 2t_y, where t_x and t_y are integers, then t_x ≡ t_y 2, for indeed t_x - t_y = , - , /2 = ℓ/2∈ 2ℤ. Given ∈{-1,1}^n, set S_1() := {i ∈ [n] (,) : _i_i = 1} and S_2() := {i ∈(,): _i _i =1}. Additionally, set S̅_1() := ([n](,)) S_1() and S̅_2() := (,) S_2(). There exist integers k_1 := k_1() and k_2:=k_2() such that |S_1()| = α n/2 + k_1 and |S_2()| = (1-α)n/2+k_2. If , = 2t_x for some integer t_x, then 2t_x = ∑_i ∈ S_1() 1 + ∑_i ∈S̅_1()(-1) + ∑_i ∈ S_2() 1 + ∑_i ∈S̅_2()(-1) = 2k_1+2k_2. Using the definition of (,), an analogous argument shows that if , = 2t_y for some integer t_y, then 2t_y = 2k_1 - 2k_2. Therefore[Recall that t_x ≡ t_y 2 so that t_x ± t_y is even.] k_1 = t_x+t_y/2 and k_2 = t_x-t_y/2; in particular, k_1 and k_2 are independent of . Hence, [, =2t_x, , =2t_y] = 1/2^n-1α nα n/2 +k_1(1-α)n(1-α) n/2+k_2 and (<ref>) follows. §.§ Proof of Theorem <ref> We deduce Theorem <ref> from the following claim. Let C_<ref> > 0 be a fixed real number, let d = ω(1) be an integer, and let n = ω((d^5 log d)^1/4) be an even integer. Let M ∈ℝ^d × n be a Komlós matrix and let ⊆{-1,1}^n be a (C_<ref>,M)-relevant set of size || ≥ 2. Let R ∈ℝ^d × n be a Rademacher matrix such that #_1() ≡ 0 2 holds for every row of R. If every vector ∈ satisfies #_1() ≡ 0 2, then a.a.s. there exists a vector ∈ such that (M+R/√(d))_∞≤ d^-1/2 holds. Using the fact that M_∞≤ 1 holds whenever M is Komlós (for indeed _∞≤_2 ≤ 1 holds for every column of M), we deduce Theorem <ref> from Claim <ref>. Claim <ref> implies Theorem <ref>: Let n and M per the premise of Theorem <ref> be given. Let ⊆{-1,1}^n be a (C_<ref>,M)-relevant set satisfying || ≥ 2; the existence of such a set is guaranteed by Claim <ref>. Let 1∈ℝ^d denote the all ones vector and set M_1 := [ M | ] ∈ℝ^d × (n+1) and M_2 := [ M || ] ∈ℝ^d × (n+2), where := 1/√(d)∈𝕊^d-1; in particular, M_1 and M_2 are both Komlós. Let R_1 ∈ℝ^d × (n+1) and R_2 ∈ℝ^d × (n+2) be Rademacher matrices, each satisfying the row parity condition stated in Claim <ref>. Given ∈, define ^(1) := [|ℓ] ∈{-1,1}^n+1 and ^(2) := [|ℓ_1 |ℓ_2] ∈{-1,1}^n+2, where ℓ := -1, #_1() ≡ 0 2, -1, #_1() ≡ 1 2, and (ℓ_1,ℓ_2) := (-1,-1), #_1() ≡ 0 2, (-1, -1), #_1() ≡ 1 2. It follows that #_1(^(1))≡#_1(^(2)) ≡ 0 2 holds for every ∈. Set _1 := {^(1) : ∈} and _2 := {^(2) : ∈} and note that |_1|, |_2| ≥ 2 both hold. Note, further, that there exist constants C^(1)_<ref> >0 and C^(2)_<ref> >0 such that _1 is (C^(1)_<ref>,M_1)-relevant and _2 is (C^(2)_<ref>,M_2)-relevant. If n is odd, then set N:= M_1, 𝒦:=_1, C_<ref>:= C^(1)_<ref>, and R:= R_1; otherwise set N:= M_2, 𝒦:=_2, C_<ref>:= C^(2)_<ref>, and R = R_2. Claim <ref> asserts that a.a.s. 
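The single-constraint formula just established is easy to probe numerically. The following sketch — an illustration only, with a small even n and an arbitrary fixed sign vector x — samples uniformly from ℰ_n by drawing the first n−1 coordinates at random and fixing the last one by parity, and compares the empirical frequency of each achieved value ⟨ε, x⟩ = 2t with the predicted value C(n, n/2+t)/2^(n−1):

```python
import numpy as np
from math import comb

n, trials = 12, 200_000
rng = np.random.default_rng(1)
x = rng.choice((-1, 1), size=n)                 # an arbitrary fixed vector in {-1,1}^n

# uniform sampling from E_n: draw n-1 coordinates, then fix the last one by parity
eps = rng.choice((-1, 1), size=(trials, n))
ones = (eps[:, :-1] == 1).sum(axis=1)
eps[:, -1] = np.where(ones % 2 == 0, -1, 1)     # makes the number of +1 entries even

values, counts = np.unique(eps @ x, return_counts=True)
for v, cnt in zip(values, counts):
    t = int(v) // 2
    print(f"2t = {int(v):3d}   empirical {cnt/trials:.4f}   formula {comb(n, n//2 + t)/2**(n-1):.4f}")
```

At this sample size the empirical frequencies typically match the formula to two or three decimal places; the joint formula of the second lemma can be checked in the same way by tabulating the pairs (⟨ε, x⟩, ⟨ε, y⟩).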
there exists a vector ∈𝒦 for which (N+R/√(d))_∞≤ d^-1/2 holds. Resampling the first entry of every row of R allows for a conformal Rademacher matrix to be sampled uniformly at random at the price of increasing the discrepancy by at most 1 + d^-1/2 asymptotically almost surely. Expose R and let R' be the matrix obtained from R by dropping its last column, if n is odd, and its last two columns, if n is even. In addition, let ' ∈{-1,1}^n be the vector obtained from by dropping its last entry, if n is odd, and its last two entries, if n is even. Note that, (N+R/√(d))'_∞≤ 1 + 6d^-1/2. The remainder of this section is devoted to the proof of Claim <ref>. Set Δ := d^-1/2 and define the random variable S:=S(R) = ∑_∈1{(M+R/√(d))_∞≤Δ}·_∼[ = |∈] = _∼[1{(M+R/√(d))_∞≤Δ} | ∈] whose sole source of randomness is R. It suffices to prove that S > 0 holds asymptotically almost surely. Indeed, if the latter holds, then for almost every Rademacher matrix R, there exists a vector ∈ for which 1{(M+R/√(d))_∞≤Δ}·_∼[ = |∈] > 0 holds, implying that for almost every Rademacher matrix R, there exists a vector ∈ for which the event (M+R/√(d))_∞≤Δ occurs. Establishing that _R[S] >0 (in Claim <ref> below) enables an appeal to the following consequence of the Paley-Zygmund inequality (see, e.g., <cit.>) _R[S>0] ≥_R[S]^2/_R[S^2]. Hence, given that _R[S] >0 holds, it suffices to prove that _R[S^2] ≤ (1+o(1))_R[S]^2 in order to deduce that _R[S>0] ≥ 1-o(1). Prior to proving Claim <ref>, it will be useful to establish the following simple fact. _R [(M+R/√(d))_∞≤Δ] > 0 for every ∈. Fix an arbitrary vector ∈. Since is (C_<ref>,M)-relevant, it follows that is (C_<ref>,M)-shallow. Therefore, M_∞ = O(√(log d)) < n/√(d), where the last inequality holds since n is assumed to be sufficiently large with respect to d. It follows that (M)_i ∈ [-n/√(d), n/√(d)] holds for every i ∈ [d]. Since n is even and, for every i ∈ [d], the term (R/√(d))_i is a scaled sum of independent Rademacher variables, it follows that (R/√(d))_i = {k/√(d) : k ∈ 2ℤ, -n ≤ k ≤ n}. Since, moreover, Δ = d^-1/2, there exists a choice of R such that (R/√(d))_i = - (M)_i ±Δ is satisfied for every i ∈ [d]; this concludes the proof of the claim. _R[S] >0. Note that _R[S] = _∼_R [1{(M+R/√(d))_∞≤Δ} | ∈] = _∼_R [(M+R/√(d))_∞≤Δ | ∈] > 0, where the above inequality holds by Claim <ref>. Turning our attention to (<ref>), note that (_R[S])^2 = (_∼_R [(M+R/√(d))_∞≤Δ|∈])·(_∼_R [(M+R/√(d))_∞≤Δ|∈]) = _,∼[P_ P_], where, for every ∈, P_ := _R [(M+R/√(d))_∞≤Δ]. Similarly _R[S^2] = _R [_∼[ 1{(M+R/√(d))_∞≤Δ}| ∈] ·_∼[ 1{(M+R/√(d))_∞≤Δ}| ∈] ] = _R _,∼[ 1{(M+R/√(d))_∞≤Δ}·1{(M+R/√(d))_∞≤Δ}| ,∈] = _,∼[ _R [(M+R/√(d))_∞≤Δ, (M+R/√(d))_∞≤Δ] | ,∈] = _,∼[ P_,], where, for every , ∈, P_, := _R [(M+R/√(d))_∞≤Δ, (M+R/√(d))_∞≤Δ]. The goal (<ref>) can then be rewritten as follows _,∼[ P_,] ≤ (1+o(1))_,∼[ P_ P_]. Let = {(, ) ∈ ()^2 : |, | ≥ 3n/4 }. The equality _,∼[(, ) ∈] = exp(- Ω(n)) is supported by Lemma <ref> (along an argument similar to that seen in the proof of Observation <ref>). Therefore _,∼[ P_,] = _,∼[ P_,|] ·_,∼[(,) ∈] + _,∼[ P_,|] ·_,∼[(,) ∈] ≤_,∼[ max{P_, P_} | ] ·_,∼[(,) ∈] + _,∼[ P_,|] ·_,∼[(,) ∈] ≤_,∼[ max{ P_, P_}·min{P_, P_}/min{P_, P_} | ] ·exp(-Ω(n)) + _,∼[ P_,|] ·_,∼[(,) ∈] ≤_,∼[ P_ P_·min{P_, P_}^-1 | ] ·exp(-Ω(n)) + _,∼[ P_,|] ·_,∼[(,) ∈] . Note that the term min{P_, P_}^-1 appearing in (<ref>) is valid by Claim <ref>. Progress on the analysis of (<ref>) requires some preparation. Assumption A. 
Given ∈ and i ∈ [d], the fact that √(d)Δ = 1 implies that the interval [- √(d) (M)_i - √(d)Δ, - √(d) (M)_i + √(d)Δ] contains at least one even integer and at most two such integers[The focus on even members of these intervals is reasoned in the begininng of Section <ref>.]. For the sake of brevity and clarity of the presentation, we proceed, initially, under the assumption that each such interval contains a single even integer, denoted t^_i, and refer to this assumption as Assumption A. This assumption is then removed at the end. In the sequel, we prove the following strengthening of Claim <ref>, under Assumption A. Let ∈. Subject to Assumption A, _R [(M+R/√(d))_∞≤Δ ] = (1+o_n,d(1))(8/π n)^d/2∏_i=1^d exp(- C(t^_i)^2/n), where C = 1/2 + o(1). Claim <ref> provides a useful uniform estimation. For subsequent reference, it is useful to state it in concise form. Let p = (8/π n)^d/2∏_i=1^d exp(- C(t^_i)^2/n), where C is as in the premise of Claim <ref>. Then, subject to Assumption A, P_ = (1+o_n,d(1)) p for every ∈. Using Corollary <ref> we obtain _,∼[P_ P_|] = (1+o_n,d(1)) p^2 ∑_,_,∼[ = , = |] = (1+o_n,d(1)) p^2 = (1+o_n,d(1)) _,∼[P_ P_]. The order of magnitude of p is given by p = (8/π n)^d/2exp(- Θ(∑_i=1^d (t_i^)^2/n) ) = (8/π n)^d/2exp(-Θ(d∑_i=1^d (M)_i^2/n) ) = (8/π n)^d/2exp(-Θ(dM_2^2/n) ) = (8/π n)^d/2exp(-Θ(d^2/n) ), where for the last equality we rely on Lemma <ref>. It follows that p^-1 = n^Θ(d)·exp( Θ(d^2/n)) = exp( Θ(d log n ) + Θ(d^2/n)) holds. Equipped with (<ref>) and (<ref>), observe that the first term appearing on the right hand side of (<ref>) satisfies _,∼[ P_ P_. ·min{P_, P_}^-1 | .] ·exp(-Ω(n)) (<ref>)=_,∼[ P_ P_ | ] · (1+o_n,d(1)) p^-1exp(-Ω(n)) (<ref>)=_,∼[ P_ P_ | ] ·exp(Θ(d log n) + Θ( d^2/n) - Ω(n)) (<ref>)= o_n,d(p^2) (<ref>)= o_n,d( _,∼[ P_ P_]), where the penultimate equality requires n = ω(d log d). Substituting the last estimate into (<ref>) yields _,∼[ P_,] ≤ o_n,d( _,∼[ P_ P_]) +_,∼[ P_,|]·_,∼[(, ) ∈]. In the sequel, we prove the following. Subject to Assumption A and conditioned on the event , the inequality P_,≤(1+o_n,d(1)) P_ P_ holds whenever ,∈. Claim <ref> implies that _,∼[ P_,|] ≤(1+o_n,d(1)) _,∼[ P_ P_|]. The latter inequality and (<ref>) jointly imply that _,∼[ P_,] ≤ o_n,d( _,∼[ P_ P_] ) +(1+o_n,d(1))_,∼[ P_ P_|]·_,∼[(, ) ∈] ≤_,∼[ P_ P_|]·_,∼[(, ) ∈] + o_n,d(_,∼[ P_ P_] +_,∼[ P_ P_|]·_,∼[(, ) ∈]) ≤_,∼[ P_ P_|]·_,∼[(, ) ∈] + o_n,d(_,∼[ P_ P_]) ≤(1+o_n,d(1))_,∼[ P_ P_], where in the last two inequalities we used _,∼[ P_ P_] = _,∼[ P_ P_|]·_,∼[(, ) ∈] + _,∼[ P_ P_|]·_,∼[(, ) ∈] ≥_,∼[ P_ P_|]·_,∼[(, ) ∈]. This establishes (<ref>). To conclude the proof of Theorem <ref>, it thus remains to prove Claims <ref> and <ref> subject to Assumption A and then rid ourselves of the latter. We commence with the proofs of the aforementioned claims. Claim <ref> The event (M+R/√(d))_∞≤Δ occurs if and only if (R)_i ∈ [- √(d) (M)_i - √(d)Δ, - √(d) (M)_i + √(d)Δ] holds for every i ∈ [d]. It thus follows by the independence of the entries of R that P_ = ∏_i=1^d _R[(R)_i ∈ [- √(d) (M)_i - √(d)Δ, - √(d) (M)_i + √(d)Δ] ] = ∏_i=1^d _R[(R)_i = t_i^], which in turn leads to P_ (<ref>)=∏_i=1^d _R[(R)_i = t_i^] (<ref>)=∏_i=1^d 1/2^n-1nn+t_i^/2 (<ref>)=∏_i=1^d 1/2^n-1nn/2exp(Θ((t_i^)^3/n^2) ) exp( - C (t_i^)^2/n) (<ref>)=(1+o_n,d(1))(8/π n)^d/2∏_i=1^d exp(Θ((t_i^)^3/n^2) ) exp(- C (t_i^)^2/n), where for the last equality we rely on the approximation nn/2 = (1+o_n(1)) √(2/π n)· 2^n arising from a straightforward application of Stirling's approximation[Use √(2 π n)(n/e)^ne^1/(12n+1)≤ n! 
< √(2 π n)(n/e)^ne^1/12n.], and on the fact that d = o(n). In light of (<ref>), in order to complete the proof of the claim it suffice s to prove that ∑_i=1^d |t_i^|^3 = o(n^2), which would in turn imply that ∏_i=1^d exp(Θ((t_i^)^3/n^2) ) = exp( Θ(∑_i=1^d (t_i^)^3/n^2) ) = 1+o_n,d(1). We have ∑_i=1^d |t_i^|^3 ≤max_i ∈ [d] |t_i|^·∑_i=1^d (t_i^)^2 = O(√(d) · M_∞ · d · ∑_i=1^d (M)_i^2) = O ( d^3/2 · √(log d) · M_2^2) = O(d^5/2√(log d)) = o(n^2), where in the first equality we used the fact that |t_i^| = |√(d) (M)_i ± 1|, the second equality is supported by being (C_<ref>,M)-shallow (as ∈), the third equality follows since M_2 = Θ(√(d)) holds by Lemma <ref>, and the last equality holds since n = ω( (d^5log d)^1/4) by assumption. Claim <ref> Owing to the conditioning on , we may restrict our attention to pairs (,) ∈^2 such that ≠. Given such a pair, set α := α(,) as well as t_i^ = 2k_i^ and t_i^ = 2k_i^ with k_i^,k_i^∈ℤ. In a manner similar to that seen in the proof of Claim <ref>, it holds that P_, = ∏_i=1^d _R[ (R)_i ∈ [- √(d) (M)_i ±√(d)Δ], (R)_i ∈ [- √(d) (M)_i ±√(d)Δ] ] = ∏_i=1^d _R[(R)_i = t_i^ , (R)_i = t_i^] = ∏_i=1^d 1/2^n-1α nα n + k^_i + k^_i/2(1-α)n(1-α) n + k^_i - k^_i/2, where the last equality holds by (<ref>). Denoting L_i^(1) := α nα n + k^_i + k^_i/2, we obtain L_i^(1) = α nα n/2exp(Θ((k_i^ + k_i^)^3/(α n)^2) ) exp(- C (k_i^ + k_i^)^2/α n) = (1+o_n(1)) 2^α n√(2/πα n)exp(Θ((k_i^ + k_i^)^3/(α n)^2) ) exp(- C (k_i^ + k_i^)^2/α n), where C = 1/2 + o(1), the first equality holds by (<ref>), and in the second equality we used the approximation nn/2 = (1+o_n(1)) √(2/π n)· 2^n. Similarly, denoting L_i^(2) = (1-α)n(1-α) n + k^_i - k^_i/2, we obtain L_i^(2) = (1+o_n(1)) 2^(1-α) n√(2/π (1-α) n)exp(Θ((k_i^ - k_i^)^3/((1-α) n)^2) ) exp(- C (k_i^ - k_i^)^2/(1-α) n). Using this notation in (<ref>) we obtain P_, = ∏_i=1^d 1/2^n-1 L_i^(1) L_i^(2). The aforementioned estimations for L_i^(1) and L_i^(2) give rise to terms of the form exp( Θ(∑_i=1^d (k_i^ + k_i^)^3/(α n)^2) ) and exp( Θ(∑_i=1^d (k_i^ - k_i^)^3/((1-α)n)^2) ) across the multiplication seen in (<ref>). To estimate these terms, recall that all distinct pairs of members of are C_<ref>-antipodal; consequently, α = 1/2±Θ(√(log d/n)) holds. We may then employ similar arguments to the ones used to establish (<ref>), in order to show that the terms appearing in (<ref>) equal 1+ o_n,d(1). Returning to (<ref>) with the above observations, we obtain P_,≤(1+o_n,d(1)) (4/π n)^d (1/α(1-α))^d/2∏_i=1^d T_i, where T_i := exp( - C(k_i^ + k_i^)^2/α n - C (k_i^ - k_i^)^2/(1-α) n). By (<ref>), we may write that α = 1/2 ±, where 0 ≤ = O(√(log d/n)). Consequently, α(1-α) = 1/4 - ^2 and thus (1/α(1-α))^d/2 = (4/1-4^2)^d/2≤(4/exp(-8^2))^d/2 = 2^d exp(4 d ^2) = 2^d(1+o_n,d(1)) holds, where for the above inequality we rely on the inequality 1-x ≥exp(-2x), and the last equality is supported by n = ω(d log d). Substituting the latter into (<ref>) we obtain P_,≤(1+o_n,d(1)) (8/π n)^d ∏_i=1^d T_i. Expanding T_i we obtain T_i = exp(-C/α(1-α)n( (1-α)((k_i^)^2 + 2 k^_i k^_i + (k^_i)^2 ) + α( (k_i^)^2 - 2 k^_i k^_i + (k^_i)^2) ) ) = exp( -C/α(1-α)n((k^_i)^2 +(k^_i)^2 + (1-2α)2 k^_i k^_i ) ) = exp( -C/α(1-α)n((k^_i)^2 +(k^_i)^2 ±Θ(√(log d/n)) k^_i k^_i ) ) = (exp( -4 C/n((k^_i)^2 +(k^_i)^2 ) ))^1/1-4^2·exp(±C/α(1-α)nΘ(√(log d/n)) k^_i k^_i ) = (exp( -4 C/n((k^_i)^2 +(k^_i)^2 ) ) )^1+Θ(log d/n)·exp(±Θ(√(log d/n^3)) k^_i k^_i ), where in the third equality we used (<ref>) and in the fourth and fifth equalities we used the fact that α (1 - α) = 1/4 - ^2. 
Throughout the multiplication seen in (<ref>), the second exponential appearing on the right hand side of  (<ref>) accumulates to exp( ±Θ( √(log d/n^3)) ∑_i=1^d k^_i k^_i ) = exp( Θ( d √(log d)/n^3/2) | M, M| ) = exp( Θ( d^3/2log d/n^3/2) ) ≤ 1+o_n,d(1), where for the second equality we rely on and being (C_<ref>,M)-uncorrelated, and the inequality is supported by n = ω(d log^2/3 d). Owing to (<ref>) and (<ref>), one may rewrite (<ref>) as follows P_,≤(1+o_n,d(1)) (8/π n)^d ∏_i=1^d (exp( -4 C/n((k^_i)^2 +(k^_i)^2 ) ) )^1+Θ(log d/n). Claim <ref> and the identities t_i^ = 2k_i^ and t_i^ = 2k_i^, yield P_ P_ = (1+o_n,d(1))(8/π n)^d ∏_i=1^d exp(- 4C/n((k^_i)^2 +(k^_i)^2 ) ). Hence, P_,≤(1+o_n,d(1)) P_ P_∏_i=1^d (exp(- 4C((k_i^)^2 + (k^_i)^2 )/n) )^Θ(log d/n). Noting that ∏_i=1^d (exp(- 4C((k_i^)^2 + (k^_i)^2 )/n) )^Θ(log d/n) = exp( Θ(log d (∑_i=1^d (k_i^)^2 + ∑_i=1^d (k_i^)^2)/n^2)) = exp( Θ(d log d (∑_i=1^d (M)_i^2 + ∑_i=1^d (M)_i^2)/n^2)) = exp( Θ(d log d (M_2^2 + M_2^2)/n^2)) = exp( Θ(d^2 log d/n^2)) = 1 + o_n,d(1) concludes the proof. Removing Assumption A. Given ∈ and i ∈ [d], let I_i^⊆ [- √(d) (M)i - √(d)Δ, - √(d) (M)i + √(d)Δ] denote the set of even integers residing in the aforementioned interval. With this notation and without Assumption A, the equality seen in (<ref>) takes the form P_ = ∏_i=1^d ∑_t_i^∈ I_i^_R[(R)_i = t_i^] so that P_ P_ = (∏_i=1^d ∑_t_i^∈ I_i^_R[(R)_i = t_i^]) ·(∏_i=1^d ∑_t_i^∈ I_i^_R[(R)_i = t_i^]) = ∏_i=1^d ∑_t_i^∈ I_i^∑_t_i^∈ I_i^_R[(R)_i = t_i^] ·_R[(R)_i = t_i^]. In a similar manner, the equality (<ref>) would now have the form P_, = ∏_i=1^d ∑_t_i^∈ I_i^∑_t_i^∈ I_i^_R[(R)_i = t_i^ , (R)_i = t_i^]. Hence, proving Claim <ref> without Assumption A reduces to proving that _R[(R)_i = t_i^ , (R)_i = t_i^] ≤(1+o(1)) _R[(R)_i = t_i^] ·_R[(R)_i = t_i^] for every t_i^∈ I_i^ and t_i^∈ I_i^. Throughout the proof of Claim <ref>, no conditions (beyond the ones stated here) are ever imposed on the parameters t_i^ and t_i^. Hence, we have in fact established (<ref>). § CONCLUDING REMARKS We have proved that (M + R/√(d)) ≤ 1 + O(d^-1/2) holds asymptotically almost surely, whenever M ∈ℝ^d × n is Komlós, R ∈ℝ^d × n is Rademacher, d = ω(1), and n = ω̃(d^5/4). We conjecture (see Conjecture <ref>) that n = ω(d log d) suffices for the same assertion to hold. Considering other distributions for the entries of the random perturbation is of high interest as well. In view of the result in <cit.>, mentioned in the introduction, regarding Bernoulli matrices, the following question seems to be a natural next step. Let d =ω(1) and n = ω(d log d) be integers, and set p:= p(n,d) >0. Is it true that (M +R) = O(1) holds a.a.s. whenever M ∈ℝ^d × n is a Komlós matrix and R ∈ℝ^d × n is a random matrix with each of its entries being an independent copy of Ber(p)? amsplain
http://arxiv.org/abs/2307.04380v1
20230710072553
Ghost polygons, Poisson bracket and convexity
[ "Martin Bridgeman", "François Labourie" ]
math.GT
[ "math.GT", "math.DG", "53D30" ]
Ghost polygons, Poisson bracket and convexity Martin Bridgeman François Labourie August 12, 2023 ======================================================== § INTRODUCTION The character variety of a discrete group Γ in a Lie group 𝖦 admits a natural class of functions: the algebra of regular functions, generated as a polynomial algebra by trace functions or characters. When Γ is a surface group, the character variety becomes equipped with a symplectic form generalizing the Poincaré intersection form – called the Atiyah–Bott–Goldman symplectic form <cit.> – and a fundamental theorem of Goldman <cit.> shows that the algebra of regular functions is stable under the Poisson bracket, and more precisely that the bracket of two characters is expressed using a beautiful combinatorial structure on the ring generated by characters. The Poisson bracket associated to a surface group has been heavily studied in <cit.>, <cit.>; and in the context of Hitchin representations the link between the symplectic structure, coordinates and cluster algebras discovered by Fock–Goncharov in <cit.> has generated a lot of attention: see for instance <cit.>, <cit.>, <cit.>, <cit.> and <cit.> for more results, and also the relations with the swapping algebra <cit.>. On the other hand, the deformation space of Anosov representations admits many other natural functions besides regular functions. Length functions, associated to any geodesic current and studied by Bonahon <cit.> in the context of Teichmüller theory, play a prominent role for Anosov representations, for instance in <cit.> and <cit.>. Another class consists of the correlation functions, defined in <cit.> and <cit.>. These functions are defined as follows. For the sake of simplicity, we focus in this introduction on the case of a projective Anosov representation ρ of a hyperbolic group Γ: one can then associate to any geodesic g a rank 1 projector P_ρ(g). The correlation function T_G associated to a configuration of n geodesics – that is, an n-tuple G=(g_1,…,g_n) of geodesics up to cyclic transformation – is then T_G:ρ↦ T_G(ρ)=tr(P_ρ(g_n)⋯ P_ρ(g_1)) . In Teichmüller theory, the correlation function of two geodesics is the cross-ratio of the endpoints. More generally, correlation functions of geodesics in Teichmüller theory are rational functions of cross-ratios. This is no longer the case in higher rank. For instance, if C is a geodesic triangle given by the three oriented geodesics (g_1,g_2,g_3), the map T^*_C:ρ↦ T^*_C(ρ)=tr(P_ρ(g_1)P_ρ(g_2)P_ρ(g_3)) , is related to the Goncharov triple ratio on the real projective plane. For a geodesic current μ, its length function ℓ_μ is defined by an averaging process – see equation (<ref>). One can also average correlation functions: say that a Γ-invariant measure μ on the set 𝒢^n_⋆ of generic n-tuples of geodesics is an integrable cyclic current if it is invariant under cyclic transformations and satisfies some integrability conditions – see section <ref> for precise definitions. Then the μ-correlation function, or μ-averaged correlation function, is T_μ:ρ↦∫_𝒢^n_⋆/Γ T_G(ρ) dμ . The corresponding functions are analytic <cit.> but rarely algebraic. In the case when Γ is a surface group, the algebra of functions on the deformation space of Anosov representations admits a Poisson bracket coming from the Atiyah–Bott–Goldman symplectic form. To uniformize our notation, we write T^k_μ for T_μ when μ is supported on 𝒢^k_⋆, and T^1_ν=ℓ_ν for the length function of a geodesic current ν.
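To make these definitions concrete in the simplest case, take a configuration of two geodesics G=⌈ g,h⌉ and write ξ and ξ^* for the limit map and dual limit map of the projective Anosov representation ρ (both are recalled later in the text); the explicit vectors below are chosen only for this illustration. Picking non-zero vectors v_g in ξ(g^-), V_g in ξ^*(g^+), and similarly for h, the formula for multifractions established later in the paper gives T_G(ρ)= ⟨ V_g,v_h⟩⟨ V_h,v_g⟩/(⟨ V_g,v_g⟩⟨ V_h,v_h⟩) , a quantity independent of the choices since each vector appears once in the numerator and once in the denominator. For a Fuchsian representation this is a cross-ratio of the four endpoints of g and h, which is the assertion made above, while the triangle function T^*_C already shows that in higher rank correlation functions do not reduce to cross-ratios.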
Then, one of the main result of this article, Theorem <ref>, gives as a corollary [Poisson stability] The space of length functions and correlation functions is stable under the Poisson bracket. More precisely there exists a Lie bracket on the polynomial algebra formally generated by tuples of geodesics (G,H)↦ [G,H] so that {^k_μ,^p_ν}=∫_^n+m/Γ_[G,H](ρ) μ̣(G)ν̣(H) . The complete result, in particularly Corollary <ref> allows to recursively use this formula. In Theorem <ref> we compute explicitly what is the Hamiltonian vector field of the correlation functions. For instance in Teichmüller theory, this allows us to compute the higher derivatives of a length function along twist orbits by a combinatorial formula involving cross-ratios. The bracket (G,H)↦ [G,H] – that we call the ghost bracket – is combinatorially constructed. In this introduction, we explain the ghost bracket in a simple case and refer to section <ref> for more details. Recall first that an ideal polygon – not necessarily embedded – is a sequence (h_1,…, h_n) of geodesics in such that the endpoint of h_i is the starting point of h_i+1. Let then G be the configuration of n geodesics (g_1,…, g_n), with the endpoint of g_i not equal the starting point of g_i+1. The associated ghost polygon is given by the uniquely defined configuration (θ_1,…θ_2n) of geodesics – see figure (<ref>) such that * (θ̅_1,θ_2,θ̅_3 …,θ̅_2n-1,θ_2n) is an ideal polygon, * for all i, θ_2i=g_i and is called a visible edge, while θ_2i+1 is called a ghost edge. We now denote by ⌈ g,h⌉ the configuration of two geodesics g and h, ϵ(g,h) their algebraic intersection, and g̅ is the geodesic g with the opposite orientation. Then if (θ_i,…,θ_2n) and (ζ_i,…,ζ_2p) are the two ghost polygons associated to the configurations G and H, we define the projective ghost bracket of G and H as [G,H] G· H·(∑_i,j(-1)^i+jϵ(ζ_j,θ_i) ⌈ζ_j,θ_i⌉) , which we consider as an element of the polynomial algebra formally generated by configurations of geodesics. We have similar formulas when G or H are geodesics, thus generalizing Wolpert's cosine formula <cit.>. In the case presented in the introduction – the study of projective Anosov representations – the ghost bracket is actually a Poisson bracket and is easily expressed in paragraph <ref> using the swapping bracket introduced by the second author in <cit.>. Formula (<ref>) is very explicit and the Poisson Stability Theorem <ref> now becomes an efficient tool to compute recursively brackets of averaged correlations functions and length functions. 0.2 truecm In this spirit, we give two applications of this stability theorem. Following Martone–Zhang <cit.>, say a projective Anosov representation ρ admits a positive cross ratio if 0<(_ρ(g)_ρ(h))<1 for any two intersecting geodesics g and h. Examples come from Teichmüller spaces and Hitchin representations <cit.>. More generally positive representations are associated to positive cross ratios <cit.>. Our first application is a generalisation of the convexity theorem of Kerckhoff <cit.> and was the initial reason for our investigation: [Convexity Theorem] Let μ be the geodesic current associated to a measured geodesic lamination, _μ the associated length function. Let ρ be a projective Anosov representation which admits a positive cross ratio, then for any geodesic current ν, {_μ,{_μ,_ν}}≥ 0 . Furthermore the inequality is strict if and only if i(μ,ν) ≠ 0. Recall that in a symplectic manifold {f,{f,g}}≥ 0 is equivalent to the fact that g is convex along the Hamiltonian curves of f. 
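Let us make the last remark explicit, since it will be used repeatedly; this is a standard symplectic computation in which the sign convention chosen for the Hamiltonian vector field plays no role. If φ_t denotes the Hamiltonian flow of f, then d/dt (g∘φ_t)=±{f,g}∘φ_t , the sign depending only on the convention, and iterating, the sign now appearing twice, d^2/dt^2 (g∘φ_t)= {f,{f,g}}∘φ_t . Thus the inequality above says precisely that the length function associated to ν is convex along the Hamiltonian flow of the length function associated to μ; for μ supported on a simple closed geodesic this flow is the twist flow, so the Convexity Theorem indeed generalises Kerckhoff's convexity of length functions along twist paths.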
This theorem involves a generalisation of Wolpert's sine formula <cit.>. Our second result allows us to construct commuting subalgebras in the Poisson algebra of correlation functions. Let ℒ be a geodesic lamination whose complement is a union of geodesic triangles C_i. To each such triangle, we call the associated correlation function ^*_C_i an associated triangle function. The subalgebra associated to the lamination is the subalgebra generated by triangle functions and length functions for geodesic currents supported on ℒ. [Commuting subalgebra] For any geodesic lamination whose complement is a union of geodesic triangles, the associated subalgebra is commutative with respect to the Poisson bracket. In a forthcoming paper, we use Theorem <ref> with Dick Canary, to obtain a new proof of a Theorem by Potrie–Sambarino <cit.> that says that the entropy for simple roots is 1 for Hitchin representations. 0.5 truecm In order to give a flavour of the constructions of our article, let us explain that the first step is to integrate a closed form α with values in the Lie algebra of the group against a ghost polygon by a simple combinatorial process that we call ghost integration producing the number ∮_ρ(G)α , called the ghost integral – see section <ref>. We relate this ghost integration to the derivative of correlation functions using the dynamical cohomological equation – in a more general context than surface groups or hyperbolic groups. More precisely, we have for a variation of a flat connection ∇̇ _G(∇̇)=∮_ρ(G)∇̇ , see paragraph <ref>. We obtain this formula as a consequence of our study of the dynamical cohomological equation (proposition <ref>). In order to get to the Hamitonian, we have to introduce the dual objects to ghost integration in the twisted cohomology of the group, namely a form Ω_ρ(G) with values in the endomorphism bundle, so that ∫_(α∧Ω_ρ(G))=∮_ρ(G)α . Then the ghost intersection of two ghosts polygons G and H is _ρ(G,H)=∮_ρ(G)Ω_ρ(H) , and we show that _[G,H](ρ)= _ρ(G,H) . For details, see section <ref>. In order to finally compute the Poisson brackets of averaged functions and proceed to the proof of Theorem <ref>, we have to carefully exchange some integrals – see section <ref>. 0.2 truecm The constructions outlined above are the analogues of classical constructions (integration along a path, intersection of geodesics) in differential topology described in section <ref>, in some sense playing the role of non-abelian homology. §.§ The general case For the sake of simplicity, this introduction focused on the case of the so-called projective Anosov representations. More generally, one can construct correlation functions out of geodesics decorated with weights of the Lie group with respect to a Θ-Anosov representations. The Θ-decorated correlation functions are described by configurations of Θ-decorated geodesics. The full machinery developed in this article computes more generally the brackets of these decorated correlation functions. Using that terminology, the Poisson Stability Theorem <ref> still holds with the same statement, but the ghost bracket has to be replaced by a decorated ghost bracket which follows a construction given in paragraph <ref>, slightly more involved than formula (<ref>). §.§ Beyond representations: uniformly hyperbolic bundles We also introduce a new tool allowing us to describe “universal Anosov representations“ in the spirit of universal Teichmüller spaces: the definition of uniformly hyperbolic bundles. 
This new tool allows us to extend results obtained for Anosov representations, notably stability and limit curves, in a situation where no periodicity according to a discrete group is required. In particular, the (not averaged) correlation functions make sense and we are able to compute the variation of such a correlation function in proposition <ref>. This result follows in particular from the solution of the (dynamical) cohomological equation (proposition <ref>). Important constructions such as ghost integration – in section <ref> – and ghost intersection – in section <ref> – are also given in the context of uniformly hyperbolic bundles. 2 truecm We would like to thank Dick Canary for very useful comments, Fanny Kassel, Curt McMullen, Andrés Sambarino and Tengren Zhang for their interest. § PRELIMINARY In this section, we recall basic facts about intersection of geodesics in the hyperbolic plane, dual forms to geodesics and the Goldman symplectic form. We also introduce one of the notions important for this paper: geodesically bounded forms. §.§ The hyperbolic plane, geodesics and forms We first recall classical results and constructions about closed geodesics in the hyperbolic plane. §.§.§ Geodesics and intersection Let us choose an orientation in . We denote in this paper by the space of oriented geodesics of that we identify with the space or pairwise distinct points in . We denote by g̅ the geodesic g with the opposite orientation. Let g_0 and g_1 be two oriented geodesics. The intersection of g_0 and g_1 is the number ϵ(g_0,g_1) which satisfies the following rules ϵ(g_0,g_1)=-ϵ(g_1,g_0)=-ϵ(g̅_0,g_1) , and verifying the following * ϵ(g_0,g_1)=0 if g_0 and g_1 do not intersect or g_0=g_1. * ϵ(g_0,g_1)=1 if g_0 and g_1 intersect and (g_0(∞),g_1(∞),g_0(-∞),g_1(-∞) is oriented. * ϵ(g_0,g_1)=1/2 if g_0(-∞)=g_1(-∞) and (g_0(∞),g_1(∞),g_1(-∞)) is oriented. Observe that ϵ(g_0,g_1)∈{-1,-1/2,0,1/2,1} and that we have the cocycle property, if g_0,g_1,g_2 are the sides of an ideal triangle with the induced orientation, then for any geodesic g we have ∑_i=0^2ϵ(g,g_i)=0 . We need an extra convention for coherence A phantom geodesic is a pair g of identical points (x,x) of ∂_∞. If g is a phantom geodesic, h any geodesic (phantom or not), we define ϵ(g,h) 0. §.§.§ Geodesic forms Let us denote by Ω^1() the space of 1-forms on the hyperbolic space. A form ω in Ω^1() is bounded if |ω_x(u)| is bounded uniformly for all (x,u) in U the unit tangent bundle of . We let ^∞ the vector space of bounded forms. We have a equivariant mapping Ω^1() ,gω_g , which satisfies the following properties * ω_g is a closed 1-form in supported in the tubular neighbourhood of g at distance 1, outside the tubular neighbourhood of g at distance 1/2. * ω_g=-ω_g̅ * Let g_0 be any geodesic g, then ∫_g_0ω_g = ϵ(g_0,g) . The construction runs as follows. Let us fix a function f from ℝ^+ to [0,1] with support in [0,1] which is constant and equal to 1/2 on [0,1/2] neighbourhood of 0. We extend (non-continuously) f to ℝ as an odd function. Let finally R_g be the “signed distance" to g, so that R_g̅=-R_g. We finally set ω_g=-(̣f∘ R_g). Then (<ref>) and (<ref>) are obvious. We leave the reader check the last point in all possible cases. We extend the above map to phantom geodesics by setting ω_g=0 for a phantom geodesic and observe that the corresponding assignment still obey proposition <ref>. The form ω_g is called the geodesic form associated to g. Such an assignment is not unique, but we fix one, once and for all. 
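As an illustration of these conventions, let us check the last point in the transverse case; the computation uses only the definition ω_g=-d(f∘ R_g) and the choice of f made above. Let r(t)=R_g(g_0(t)) along an arclength parametrisation of g_0. If g_0 and g neither meet nor share an endpoint, then |r|→∞ at both ends, f∘ r is continuous and vanishes near both ends, and ∫_g_0ω_g=0. If g_0 crosses g transversally at time t_0, then f∘ r equals -1/2 on one side of the crossing and 1/2 on the other, and splitting the integral at t_0 gives ∫_g_0ω_g= -(f(r(t_0^-))-0)-(0-f(r(t_0^+)))= ± 1 , the sign being dictated by the orientation conventions entering the signed distance, in accordance with the rules defining ϵ; reversing the orientation of g replaces R_g by R_g̅=-R_g and changes the sign, as it should. The half-integer cases of a common endpoint are treated in the same way, one of the two limits of f∘ r being now ± 1/2 instead of 0.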
Then we have For any pair of geodesic g_0 and g_1, ω_g_1∧ω_g_0=farea_ with f bounded and in L^1. The only non-trivial case is if g_0, g_1 share an endpoint. In the upper half plane model let g_0 be the geodesic x=0, while g_1 is the geodesic x=a. Observe that the support of ω_g_1∧ω_g_0 is in the sector V defined by the inequations y>B>0 and | x/y| < C for some positive constants A an C. Finally as the signed distance for g_0 satisfies sinh(R_g_0) = x/y then ω_g_0=f_0 d(x/y) , ω_g_1=f d(x-a/y) , where f_0 and f_1 are functions bounded by a constant D. An easy computation shows that d(x/y)∧ d(x-a/y) =a d x∧ d y/y^3 . Oberve that | f f_0 a| is bounded by D^2a, and ∫_V d x∧ d y/y^3≤ 2C∫_B^∞1/y^2 d y=1/B< ∞ . This completes the proof. The above result is still true whenever g or h are phantom geodesics. From that it follows that For any pair of geodesics, phantom or not, g and g_0, we have ∫_g_0ω_g = ϵ(g_0,g)=∫_ω_g_0∧ω_g . Moreover for any (possibly ideal) triangle T in ∫_∂ Tω_g=0 . §.§ The generic set and barycentric construction For any oriented geodesic g in we denote by g̅ the geodesic with opposite orientation, and we write g≃ h, if either g=h or g=h̅. Let us also denote the extremities of g by (∂^-g,∂^+g) in ×. For n≥ 2, let us the define the singular set as ^n_1{(g_1,…,g_n)|∀ i,j, g_i≃ g_j } , and the generic set to be _⋆^n^n∖^n_1 . We define a Γ-compact set in _⋆^n to be the preimage of a compact set in the quotient _⋆^n/Γ. The barycenter of a family G=(g_1,…,g_n) of geodesics is the point (G) which attains the minimum of the sum of the distances to the geodesics g_i. Choosing a uniformisation, the barycentric construction yields a -equivariant map from :_⋆^n ,(g_1,…,g_n)(y) . It follows from the existence of the barycenter map that the diagonal action of Γ on _⋆^n is proper. The barycentric section is then the section σ of the following fibration restricted to _⋆^n F:()^n→^n , given by σ=(σ_1,…,σ_n) , where σ_i(g_1,…,g_n) is the orthogonal projection of (g_1,…,g_n) on g_i. Obviously The barycentric section is equivariant under the diagonal action of on _⋆^n as well as the natural action of the symmetric group 𝔖_n. §.§ Geodesically bounded forms We abstract the properties of geodesic forms in the following definition: Let α be a closed 1-form on . We say that α is geodesically bounded if * α belongs to ^∞, ∇α is bounded. * for any geodesic g, α(ġ) is in L^1(g, ṭ), ω_g∧α belongs to L^1() and ∫_gα =∫_ω_g∧α . * Moreover for any (possibly ideal) triangle T in ∫_∂ Tα =0 . We denote by the vector space of geodesically bounded forms. We observe that any geodesically bounded form is closed and that any geodesic form belongs to . §.§ Polygonal arcs form We will have to consider geodesic polygonal arcs which are embedded finite union of oriented geodesic arcs =γ_0∪⋯∪γ_p , such that γ_i joins γ_i^- to γ_i^+ and we have γ_i^-=γ_i-1^+, while γ_0^- and γ_p^+ are distinct points at infinity. We say that γ_1,…,γ_p-1 are the interior arcs. We have similarly to above Given a polygonal arc =γ_0∪⋯∪γ_p there exists a closed 1-form ω_ so that * the 1-form ω_ is supported on a 1-neighborhood of , * Let B be a ball containing a 1-neighbourhood of the interior arcs, such that outside of B the 1-neighbourhood V_0 of γ_0 and the 1-neighbourhood V_1 of γ_p are disjoint then .ω_|_V_0=.ω_g_0|_V_0 , .ω_|_V_1=.ω_g_p|_V_1 . where g_0 and g_p are the complete geodesics cointaining the arcs γ_0 and γ_p. * For any element Φ of , ω_Φ()=Φ^*(ω_). * For any geodesic g, ∫_gω_=ϵ(g,[γ_0^-,γ_p^+]). 
* Let be a polygonal arc with extremities at infinity x and y, then for any 1-form α in we have ∫_ω_∧α=∫_[x,y]α . The construction runs as the one for geodesics. Let us fix a function f from ℝ^+ to [0,1] with support in [0,1] which is constant and equal to 1/2 on [0,1/2]. We extend (non-continuously) f to ℝ as an odd function. Let finally R_g be the “signed distance" to g, so that R_g̅=-R_g. We finally set ω_g=-(̣f∘ R_g). Then (1), (2), (3) and (4) are obvious. Then writing ∖=U⊔ V where U and V are open connected sets. We have that ∫_U ω_∧α=∫_U (̣f∘ R_g)∧α=1/2∫_g α , by applying carefully Stokes theorem. The same holds for the integral over V, giving us our wanted result. The form ω_ is the polygonal arc form. §.§ The Goldman symplectic form Let S be a closed surface with Σ its universal cover that we identify with by choosing a complete hyperbolic structure on S. Given a representation ρ:π_1(S) → G we let E = Σ×_ρ𝔤 be the bundle over S by taking the quotient of the trivial bundle over Σ×𝔤 by the action of π_1(S) given by γ(x,v) = (γ(x), Adρ(γ) (v)). Let ∇ be the associated flat connection on the bundle E and denote by Ω^k(S)⊗(E) the vector space of k-forms on S with values in (E). Recall that ∇ gives rise to a differential ^̣∇: Ω^k(S)⊗(E)→Ω^k+1(S)⊗(E) . We say a 1-form α with values in (E) is closed if ^̣∇α=0 and exact if α=^̣∇β. Let then consider C^1_ρ(S) {closed 1-forms with values in (E)} , E^1_ρ(S) {exact 1-forms with values in (E)} , H^1_ρ(S) C^1_ρ(S)/E^1_ρ(S) . When S is closed, the Goldman symplectic form on H^1_ρ(S) is given by (α,β)∫_S(α∧β) , where for u and v in S: (α∧β)(u,v)(α(u)β(v))-(α(v)β(u)). Observe that if consider complex bundles, the Goldman symplectic form is complex valued, while it is real valued for real bundles. § UNIFORMLY HYPERBOLIC BUNDLES AND PROJECTORS We introduce the notion of uniformly hyperbolic bundles over the unit tangent bundle of – see definition <ref>. This notion is a universal version of Anosov representations defined in <cit.>. More specifically, we explain in the projective case, that such objects, which are bundles with data, are associated to sections of the endomorphism bundle given by projectors. One of the main results – proposition <ref> – is a description of the variation of such a projector under a variation of the data defining the uniformly hyperbolic bundles. Finally, we recover Anosov representations as periodic cases of uniformly hyperbolic bundles. Uniformly hyperbolic bundle is the structure underlying the study of quasi-symmetric maps in <cit.>. This notion has a further generalisation to all hyperbolic groups Γ, replacing by a real line bundle X over ∂_∞Γ×∂_∞Γ∖{(x,x)| x∈∂_∞Γ} , equipped with a Γ-action so that X/Γ is the geodesic flow of Γ. We will not discuss it in this paper, since this will uselessly burden our notation. §.§ Uniformly hyperbolic bundles: definition Let be the unit tangent bundle of . We denote by the vector field on generating the geodesic flow ϕ. We consider the trivial bundle E=V×. For any flat connection ∇ on E, we consider the lift Φ^∇ of ϕ given by the parallel transport along the orbits of ∂_t. When D is the trivial connection on E, we just write ΦΦ^D and observe that Φ_t(x,u)=(ϕ_t(x),u) where x is in and v in V. A rank k uniformly hyperbolic projective bundle is a pair (∇,h) where h is a section of the frame bundle on E, ∇ a trivializable connection on the bundle E, satisfying first the (standard) bounded cocyle hypothesis: ‖Φ_1^∇‖ is uniformly bounded. 
Then we assume that we have a Φ^∇ invariant decomposition at every point x E_x=L_x⊕ P_x , where L_x and P_x are subspaces with (L_x)=k and so that * The bundle L⊗ P^* is contracting, that is there exist positive constant B and b so that for all positive real s, for all x in for all non-zero vector u and v in L_x and P_x respectively ‖Φ^∇_s(u)‖/‖ u‖≤ B e^-bs‖Φ^∇_s(v)‖/‖ v‖ . * There exists a positive ϵ, so that for any converging sequences x and y to a point x, and any sequence u and v converging to u and v in E_x, so that u_m and v_m belongs to L_x_m and P_y_m respectively, then |⟨u| v||⟩≤ (1-ϵ) ‖ u‖·‖ v‖ . * There is a volume form on E, which is bounded with respect to h and ∇-parallel along orbits on the flow. The metric and scalar products considered are with respect to the metric g_h for which h is orthonormal. The fundamental projector associated to a uniformly hyperbolic bundle is the section of (E) given by the projection on L parallel to P. Observe that we do not require a priori any continuity on the bundles L and P. When the dimension of L_x is 1, we talk of a projective uniformly hyperbolic bundle, when it is k, we talk of a rank k uniformly hyperbolic bundle. The hypothesis (<ref>) is for simplification purposes. Using that hypothesis, one sees that (L) and (P) are respectively contracting and expanding bundles. The bounded cocycle assumption, akin to a similar condition in Oseledets theorem, implies that there exists positive constants A, B and C so that ‖Φ_s^∇‖≤ A+ Be^Cs . If we have a projection π from a set F to U, we write, for x in , F_x=π^-1(x). Let (∇,h) be a uniformly hyperbolic bundle. Then there exists open sets 𝒱 and 𝒰 of (E) and (E^*), where k is the dimension of L_x, respectively as well as a positive real T, so that * For every u in 𝒰_x and v in 𝒱_x, u and v are transverse. * Φ_T sends 𝒰 to 𝒰 and is 1/2 Lipschitz. * Φ_-T sends 𝒱 to 𝒱 and is 1/2 Lipschitz. * L and P are sections of 𝒱 and 𝒰 This is a rephrasing of the definition of uniformly hyperbolic bundle: let us consider L and P as sections of (E) and (E^*) respectively and let ℒ and 𝒫 be the closure of the images of these sections. The second condition implies for any u in ℒ and v in 𝒫, d(u,w)≥ϵ>0 , ∀ w not transverse to v . It follows that we can find ϵ_0 so that the open sets 𝒱{u| d(u,ℒ)≤ϵ_0} , 𝒰{v| d(v,𝒫)≤ϵ_0} , satisfy the condition of the lemma. As a classical consequence we have. Let (∇,h) be a uniformly hyperbolic bundle. Let us choose a trivialisation so that ∇ is the trivial connection. Then * The fundamental projector is a parallel along the geodesic flow and continuous bounded section of (E). * L is constant along the strong stable foliation of the geodesic flow of . * Finally P is constant along the strong unstable foliation of . The second condition of the definition of uniformly hyperbolic bundles guarantees that there exist open sets 𝒱 and 𝒰 in (E) and (E^*) respectively, so that L and P are sections of 𝒱 and 𝒰 respectively and moreover the closure of 𝒱 and 𝒰 do not intersect. The first condition implies that for s large enough Φ_s is contracting as a map from 𝒱 to 𝒰. Thus L being an invariant section is continuous; the same holds for P. Hence is continuous. Using now that the geodesic flow is contracting along the stable leaves towards the future, and contracting along the unstable leaves towards the past, it follows that L is constant along the strong stable leaves and P is constant along the strong unstable leaves. This allows to define the limit maps of the unformly hyperbolic bundle (∇,h). 
Let us choose a trivialization E=V× so that ∇ is trivial. The limit map of the uniformly hyperbolic bundle is ξ: ∂_∞→(V) , so that ξ(x)=L(y), if y belongs to the strong stable foliation defined by x. Symmetrically, the dual limit map of the uniformly hyperbolic bundle is ξ^*: ∂_∞→(V^*) , so that ξ^*(x)=P(y), if y belongs to the strong unstable foliation defined by x. 0.2truecm Finally let us define a notion of equivalence for uniformly hyperbolic bundles: Two uniformly hyperbolic bundles (∇_0,h_0) and (∇_1,h_1) are equivalent if there is a section B of 𝖦𝖫(E) so that * ∇_1=B^*∇_0 , * The metrics g_h_0 and B^*g_h_1 are uniformly equivalent. §.§ Families of uniformly hyperbolic bundles In order to study families of uniformly hyperbolic bundles, we will adopt two different gauge-fixing points of view: * The fixed gauge point of view: we allow the frame to vary but fix the connection * The fixed frame point of view: we allow the connection to vary but fix the frame. A natural example comes from a projective Anosov representation of a cocompact surface group. We call such an example, where the frame and the connections are invariant under the action of a cocompact surface group a periodic bundle. We discuss periodic bundles in <ref>. For a vector bundle V over a topological space X, we denote by V_x the fiber at a point x in X. A C^k-bounded variation of a uniformly hyperbolic bundle (∇,h) is a family (∇^t,h_t)_t∈ ]-ϵ,ϵ[ of connections and frames on E_0 so that * (∇_0,h_0)=(∇,h), * for all t, ∇^t is trivializable * for all t close to 0, the C^k-derivatives of t↦∇_ḣ_t are bounded with respect to g_h_t. We will see that any smooth family of periodic variation is of bounded variation. Then we have the lemma: Assume that ∇^t,h is a C^k bounded variation of a uniformly hyperbolic bundle where k∈ℕ∪{ω}. Then for t in some neighbourhood of zero, the bundle (∇^t,h_t) is uniformly hyperbolic. Let _t be the associated projector, then _t depends C^k on t. We prove this lemma in paragraph <ref>. §.§ The fundamental projector and its variation Our goal is to compute the variation of the associated family of fundamental projectors of a bounded variation of a uniformly hyperbolic bundle. More precisely, let assume we have a uniformly hyperbolic bundle (∇_0,h_0) with decomposition E_0=L_0⊕ P_0 . We prove in this paragraph the following proposition Assume that we have a bounded variation ∇_t,h of the uniformly hyperbolic bundle (∇_0,h_0) in the fixed connection point of view, that is ∇_t is the trivial connection D. The derivative of the fundamental geodesic at a point x in a geodesic g, is given by _0=[Ȧ,_0] + ∫_g^+ [̣̇A,_0] ·_0+ ∫_g^-_0· [̣̇A,_0] . where g^+ is the geodesic arc from x to g(+∞) and g^- is the arc from x to g(-∞) (in other words with the opposite orientation to g), and Ȧ is the endormorphism so that Ȧ h =.∂/∂ t|_t=0 h_t . §.§.§ Preliminary: subbundles of (E_0) We first adopt the fixed frame point of view. Let ∇ be a flat connection on E_0, Then is parallel for the induced flat connection on (E_0) along the flow. Let also F_0 be the subbundle of (E_0) given by F_0{B∈(E_0)| B+ B=B} . Observe that for any section C of (E_0), [C,] is a section of F_0 and that for any element A in F_0 we have (A)=0. The bundle F_0 decompose as two parallel subbundles F_0=F_0^+⊕ F_0^- , where we have the identification F_0^+=P^*⊗ L , F_0^-=L^*⊗ P . The projection of F_0 to F_0^+ parallel to F_0^- is given by B↦ B, while the projection on F_0^- parallel to F_0^+ is given by B↦ B. 
Finally there exists positive constants A and a, so that for all positive time s, endomorphisms u^+ in F_0^+ and u^- in F_0^-, we have ‖Φ_-s(u^-)‖≤ A e^-as‖ u^-‖ , ‖Φ_s(u^+)‖≤ A e^-as‖ u^+‖ . Consequently, for any section D of F_0, we write D=D^++D^- where D^± are sections of F_0^± according to the decomposition (<ref>). Let us write (E_0)=E_0^*⊗ E_0=(L^*⊗ L)⊕ (P^*⊗ P)⊕ (L^*⊗ P)⊕ (P^*⊗ L) , In that decomposition, F_0=(P^*⊗ L)⊕(L^*⊗ P). Let F_0^+=P^*⊗ L , F_0^-=L^*⊗ P . Thus, we can identify F_0^+ as the set of elements whose image lie in L and F_0^- are those whose kernel is in P. Thus F_0^+ = {B∈ F_0| B=B}= {B∈ F_0| B=0} , F_0^- = {B∈ F_0| B=0}={B∈ F_0| B=B} . Then the equation for any element B of F_0, B= B+ B , corresponds to the decomposition F_0=F_0^+⊕ F_0^-. Thus the projection on F_0^+ is given by B↦ B, while the projection on F_0^- is given by B↦ B. The definition of F_0^+ and F_0^- and the corresponding contraction properties of the definition of a uniformly hyperbolic bundles give the contraction properties on F_0^+ and F_0^-. §.§.§ The cohomological equation Let σ be a bounded section of F_0, then there exists a unique section η of F_0 so that ∇_η=σ. This section η is given by η(x)=∫_-∞^0 ·σ(ϕ_s(x)) ṣ-∫_0^∞σ(ϕ_s(x))· ṣ . Classically, in dynamical systems, the equation ∇_η=σ is called the cohomological equation. Since σ belongs to F_0^+ while σ belongs to F_0^- by lemma <ref>, the right hand side of equation (<ref>) makes sense using the exponential contraction properties given in the inequalities (<ref>). Indeed, for a positive s by lemma <ref> again, ‖Φ_-s(σ(ϕ_s) ·) ‖ ≤ Ae^-as‖σ‖_∞ , ‖Φ_s(·σ(ϕ_-s)) ‖ ≤ Ae^-as‖σ‖_∞ . It follows that using the above equation as a definition for η we have η(ϕ_s(x))=∫_-∞^t·σ(ϕ_u(x)) ụ-∫_t^∞σ(ϕ_u(x))· ụ . Thus ∇_η=σ +σ=σ , since σ is a section of F_0. Uniqueness follows from the fact that F_0 has no parallel section: indeed neither F_0^+ nor F_0^- have a parallel section. §.§.§ Variation of the fundamental projector: metric gauge fixing We continue to adopt the variation of connection point of view and consider after gauge fixing only hyperbolic bundles where the metric is fixed. Let ∇^t,h give rise to a bounded variation of the uniformly hyperbolic bundle (∇_0,h), where ∇_0 is the trivial connection D. Our first result is The variation of the fundamental projector _t associated to (∇^t,h) is given by (x)=∫_-∞^0 ( · [,∇̇_] )(x^s) ṣ- ∫_0^∞([,∇̇_]·)(x^s) ṣ , where x^s=ϕ_s(x) and ∇̇_(u)=.∂/∂ s|_t=0∇^t_ (u). Let us distinguish for the sake of this proof the following connections. Let ∇ be the flat connection on E_0 and ∇^End the associated flat connection on (E_0). Then from the equation ^2=, we obtain after differentiating, += . Thus is a section of F_0. Moreover taking the variation of the equation ∇^End_=0 yields ∇^End_=-∇̇^End_=[,∇̇_ ] . In other words, the variation of the fundamental projector is a solution of the cohomological equation ∇^End_η=σ, where σ=[,∇_] and η=. Applying proposition <ref>, yields the equation (<ref>). §.§.§ The fixed connection point of view and the proof of proposition <ref> We can now compute the variation of the projector in the fixed frame point of view and prove proposition <ref>. We first need to switch from the fixed frame of view to the fixed connection point of view. Let (∇^t,h) be a variation in the fixed frame point of view. Let A^t be so that ∇^t=A_t^-1 DA_t and A_0=Id. In particular, we have ∇̇_= D_Ȧ=̣̇A( ) . Then the corresponding variation in the fixed connection point of view is ( D,h_s) where h_t=A_t(h). 
It follows that ḣ= Ȧ(h) , ∇̇_ =̣̇A()=D_Ȧ . Let now _0^t the projector – in the fixed connection point of view– associated to (D, h_t), while ^t is the projector associated to (∇^s, h). Obviously _0^t=A_t^t A_t^-1 , _0_0^0=^0_0 . Thus _0=[Ȧ,]+ . Using lemma <ref> and equations (<ref>), we have =∫_-∞^0 · [,∇̇_] ∘ϕ_s ṣ- ∫_0^∞ [,∇̇_]·∘ϕ(s) ṣ , which yields (using the fact that _0=): _0=[Ȧ,_0] +∫_-∞^0_0· [_0,∇̇_]∘ϕ_s ṣ- ∫_0^∞ [_0,∇̇_]·_0 ∘ϕ(s) ṣ . From equation (<ref>), we get that ∫_0^∞ [_0,∇̇_]_0 ∘ϕ(s) ṣ=∫_g^+[_0,̣̇A]·_0=- ∫_g^+[̣̇A,_0]·_0 , while ∫_-∞^0 _0 [_0,∇̇_]∘ϕ(s) ṣ=-∫_g^-_0 [_0,̣̇A] =∫_g^-_0[̣̇A,_0] . This concludes the proof of proposition <ref>. §.§ Proof of the stability lemma <ref> Let us first choose a continuous family of gauge transformations g so that g_t^*h_t=h. The bounded variation condition implies that for a given T, for any α, there exists β so that | s|≤β, implies that ‖Φ_T-Φ_T^s‖≤α , where Φ_Y^s is the parallel transport at time T for ∇^s and the norm is computed with respect to h. Thus from lemma <ref>, for α small enough, Φ_T^s preserves 𝒰 and is 3/4-Lipschitz, while the same holds for Φ_-T^s and 𝒱. This implies that for | s|≤β, (∇_s,h) is a uniformly hyperbolic bundle. By the C^k bounded variation hypothesis, Φ_-T^s is a C^k-family of contracting maps, hence the fixed section is itself C^k as a function of s. This proves that the fundamental projector varies C^k in s. §.§ Θ-Uniformly hyperbolic bundles We now generalize the situation described in the previous paragraphs, using the same notational convention. Let V be a finite dimension vector space, let Θ=(K_1,…,K_n) be a strictly increasing n-tuple so that 1≤ K_1<…< K_n < (V) . Then a Θ-uniformly hyperbolic bundle over is given by a pair (∇,h) for which there exists a Φ^∇- invariant decomposition E_0=E_1⊕…⊕ E_n+1 , so that (∇,h) is uniformly of rank K_å for all å in {1,…,n} with invariant decomposition given by E_0=F_å⊕ F^∘_å , with F_å=E_1⊕…⊕ E_å , F^∘_å=E_å+1⊕…⊕ E_n+1 . The flag (F_1,…,F_n) will be called a Θ-flag. In other words, we generalized the situation described before for Grassmannians to flag varieties. §.§ Projectors and notation In this section, we will work in the context of a Θ-uniformly hyperbolic bundle ρ=(∇,h) associated to a decomposition of a trivialisable bundle E=E_1⊕⋯⊕ E_n+1 . Let us denote k_å(E_å) and K_å k_1+… k_å so that Θ=(K_1,…, K_n). We then write for a geodesic g, ^å(g) , the projection on F_å=E_1⊕…⊕ E_å parallel to F_å^∘ E_å+1⊕…⊕ E_n+1. When g is a phantom geodesic we set the convention that ^å(g). Observe that all ^å(g) are well defined projectors in the finite dimensional vector space V which is the space of ∇-parallel sections of E. Or in other words the vector space so that in the trivialization given by ∇, E=V×. Finally, we will consider a Θ-geodesic g, given by a geodesic g_0 labelled by an element å of Θ and write (g)^å(g_0) , Θ_g=(^å)=K_å . §.§ The periodic case Let Σ be the universal cover of a closed surface S. We denote by π the projection from Σ to S and p the projection from to Σ. Let Γ be the fundamental group of S and ρ be a projective Anosov representation of Γ on some vector space ℰ. Let E be the associated flat bundle on S with connection ∇. We will use in the sequel the associated trivialisation of the bundle E_0=p^*π^* E on which ∇ is trivial. Let us choose a Γ-invariant euclidean metric g on the bundle E_0. Let us finally choose a orthonormal frame h for g so that g=g_h. 
It follows from the definition of projective Anosov representations that the corresponding bundle (∇,h) is uniformly hyperbolic. We call such a uniformly hyperbolic bundle periodic. More generally, let 𝖯_Θ be the parabolic group stabilizing a Θ-flag. Then a 𝖯_Θ-Anosov representation defines a Θ-uniformly hyperbolic bundle. Finally we observe * Given a representation ρ, a different choice of a Γ-invariant metric yields an equivalent uniformly hyperbolic bundle. * Similarly, two conjugate representations give equivalent uniformly hyperbolic bundles. § GHOST POLYGONS AND CONFIGURATIONS OF PROJECTORS We introduce here our main tool, ghost polygons, and relate them to configurations of geodesics and correlation functions. This section is mainly concerned with definitions and notation. We will consider the space of oriented geodesics of , and an oriented geodesic g as a pair (g^-,g^+) consisting of two distinct points in . §.§ Ghost polygons 0.2 truecm A ghost polygon is a cyclic collection of geodesics ϑ=(θ_1,…,θ_2p). The ghost edges are the geodesics (possibly phantom) θ_2i+1 , and the visible edges are the even labelled edges θ_2i, such that θ_2i+1^+=θ_2i^+ , θ_2i-1^-=θ_2i^- . * The geodesics are allowed to be phantom geodesics, * It will be convenient some time to relabel the ghost edges as ζ_iθ_2i+1. * It follows from our definition that (θ̅_1,θ_2,θ̅_3,…,θ_2p) is closed ideal polygon. We have an alternative point of view. A configuration of geodesics of rank p is just a finite cyclically ordered set of p-geodesics. We denote the cyclically ordered set of geodesics (g_1,…,g_p) by ⌈ g_1,…, g_p⌉. The cardinality of the configuration is called the rank of the the configuration. We see that the data of a ghost polygon and a configuration of geodesics is equivalent (see figure (<ref>)): * we can remove the ghost edges to obtain a configuration of geodesics from a ghost polygon, * conversely, given any configuration G=(g_1,…,g_p), the associated ghost polygon ϑ=(θ_1,…,θ_2p) is given by θ_2i g_i, θ_2i+1(g_i+1^-,g_i^+) We finally say that two configurations are non-intersecting if their associated ghost polygons do not intersect. Let us add some convenient definitions. Let ϑ=(θ_1,…,θ_2p) be a ghost polygon associated to the configuration configuration ⌈ g_1,…, g_p⌉, We then define the opposite configurations as follows. * For visible edge g_1 of G, the opposite configuration is tuple g_1^* (g_1,g_2,…,g_p,g_1). * For ghost edge θ_1 of G, the ghost opposite configuration is the tuple θ_1^*(g_2,…,g_p,g_1). Observe that both opposite configurations are not configurations per se but actually tuples – or ordered configurations. We finally define the core diameter r(G) of a ghost polygon G to be the minimum of those R such that, if B(R) is the ball of radius R centered at the barycenter (G), then B(R) intersects all visible edges. We obviously have The map G↦ r(G) is a continuous and proper map from _⋆^n/ to ℝ. §.§ Θ-Ghost polygons We now Θ-decorate the situation. Let as in paragraph <ref>, Θ=(K_1,…,K_n) with K_å<K_å+1. Let G be a ghost polygon, a Θ-decoration is a map Å from the set of visible edges to 1,…,n. We again have the equivalent description in terms of configurations. A Θ-configuration of geodesics of rank p is configuration (g_1,…,g_p) with a map Å – the Θ-decoration – from the collection of geodesics to {1,…,n}. We think of a Θ-decorated geodesic, or in short a Θ-geodesic, as a geodesic labelled with an element of Θ. 
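To illustrate these definitions on the smallest non-trivial example, the rank 3 case underlying the triangle functions of the introduction, consider a configuration G=⌈ g_1,g_2,g_3⌉. The associated ghost polygon is (θ_1,…,θ_6), with visible edges θ_2=g_1, θ_4=g_2, θ_6=g_3 and ghost edges θ_1=(g_1^-,g_3^+) , θ_3=(g_2^-,g_1^+) , θ_5=(g_3^-,g_2^+) . A ghost edge θ_2i+1 degenerates to a phantom geodesic exactly when g_i^+=g_i+1^-, so that all ghost edges are phantom precisely when (g_1,g_2,g_3) already form an ideal triangle. With the notation above, the opposite configuration of the visible edge g_1 is g_1^*=(g_1,g_2,g_3,g_1), while the ghost opposite configuration of the ghost edge θ_1 is θ_1^*=(g_2,g_3,g_1).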
When ρ is a uniformly hyperbolic bundle and ^å(g) a fundamental projector associated to a geodesic g, we will commonly use the following shorthand. Let G be ghost polygon (θ_1,θ_2,…,θ_2p) be given by configuration ⌈ g_1,…, g_p⌉. * For visible edge g_i we write _i^Å_i^Å(i)(g_i). * For visible edge g_i, the opposite ghost endomorphism is _G^Å(g^*_j) _j·_j-1⋯_j+1·_j . * For ghost edge ζ_i, the opposite ghost endormorphism is _G^Å(ζ_i^*)_i·_i-1…_i+1 . The reader should notice that in the product above, the indices are decreasing. The opposite ghost endomorphisms have a simple structure in the context of projective uniformly hyperbolic bundles (that is when Θ={1}). When Θ={1}, _G(θ^*_i)=_G(ρ) (θ_i). Let G=(θ_1,…,θ_2p) be a ghost polygon with configuration ⌈ g_1,…, g_p⌉. If g_i^+ = g_i+1^- then p_i+1p_i = 0 and the equality holds trivially with both sides zero. We thus can assume there is a ghost edge ζ_i = θ_2i+1 for each i ∈{1,…,p}. When Θ={1} all projectors have rank 1. Thus for visible edge g_i _G(g_i^*)=_i_i-1…_i+1_i = (_i…_i+1) _i= _G(ρ) (g_i) . For a ghost edge ζ_i as (_i+1_i) ≠ 0 _G(ζ^*_i)=_i_i-1…_i+1 = _i_i+1(_n…_1)/(_i_i+1)= _G(ρ) , where 1/(_i_i+1)_i_i+1. Then we see that has trace 1, its image is the image of _i, and its kernel is the kernel of _i+1. Thus is the rank 1 projector on the image of _i, parallel to the kernel of _i+1. Hence =(ζ_i). The result follows. 0.2 truecm §.§ Correlation function Given a Θ-configuration of geodesics G=⌈ g_1,…,g_p⌉ given by a p-uple of geodesics (g^0_1,…,g^0_p), with a Θ-decoration Å the correlation function associated to G is _G: ρ↦_⌈ g_1,…,g_p⌉(ρ)(^å(p)(g^0_p)⋯^å(1)(g^0_1))=((g_p)⋯(g_1)) , where is the projector associated to the uniformly hyperbolic bundle ρ. The reader should notice (again) that the geodesics and projectors are ordered reversely. §.§ Analyticity in the periodic case In this subsection we will treat first the case of complex bundles, that is representation in in 𝖲𝖫(n,ℂ) of the (complex) parabolic group 𝖯^ℂ_Θ associated to Θ. We now have, as a consequence of <cit.>, the following Let G be a ghost polygon. Let ρ be an analytic family of 𝖯^ℂ_Θ-Anosov representations parametrized by the unit disk . Then, the function u↦_G(ρ_u) is analytic. Moreover the map G↦_G is a continuous function with values in the analytic functions. Indeed the correlation functions only depends on the limit curve of the representation and thus the analyticity of the limit curve proved in <cit.> gives the result. We deduce the general analyticity result from this proposition by complexifying the representation. § GHOST INTEGRATION In this section, given a Θ-uniformly hyperbolic bundle ρ, a Θ-ghost polygon G and a 1-form α on with values in the endormorphism bundle of a uniformly hyperbolic bundle (of a special type), we produce a real number denoted ∮_ρ(G)α . This procedure is called the ghost integration. We introduce the dual cohomology object Ω_ρ(G) which is a 1-form with values in the endomorphism bundle so that ∫_(α∧Ω_ρ(G))=∮_ρ(G)α . The construction is motivated by the following formula that we shall derive and explain in paragraph <ref> _G(∇̇)=∮_ρ(G)∇̇ . Observe that here we use an abuse of language: we use the same notation for 1-form on with values in (E) and their pull-backs which are 1-forms on U with values in (π^*(E)) where π is the projection from U to . §.§ Bounded and geodesically bounded forms In this paragraph, we define a certain type of 1-forms with values in (E), where E is a uniformly hyperbolic bundle (∇,h). 
All norms and metrics will be using the Euclidean metric g_h on E associated to a framing h. A bounded 1-form ω on with values in (E) is a form so that ‖ω_x(u)‖_x is bounded uniformly for all (x,u) in U. Let us denote ^∞(E) the vector spaces of those forms and ‖ω‖_∞=sup_(x,u)∈ U‖ω_x(u)‖_x . As an example of such forms, we have * Given a Θ-geodesic g, given by a (possibly phantom) geodesic g_0, and an element å of Θ, the projector form is β_ρ(g)ω_g (g)=ω_g ^å(g_0) . where we used the notation (<ref>). * Any Γ-equivariant continuous form in the case of a periodic bundle. * Given (A_t)_t∈]-1,1[ a bounded variation of a uniformly hyperbolic bundle (see definition <ref>, the form Ȧ.∂ A_t/∂ t|_t=0 , is by definition a bounded 1-form. We do not require forms in ^∞(E) to be closed. A form α is geodesically bounded if for any parallel section A of (E), (α A) is geodesically bounded as in definition <ref>. We denote by (E) the set of 1-forms which are geodesically bounded. Again for any geodesic, the projector form β_ρ(g) is geodesically bounded. However Γ-equivariant forms are never geodesically bounded unless they vanish everywhere. §.§ Line integration Let ω be a 1-form in ^∞(E). Let x be a point on the oriented geodesic g and Q a parallel section of (E) along g. The line integration of ω – with respect to the uniformly hyperbolic bundle ρ – is given by _x,g,(ω) ∫_g^+( [ω,] ) + ∫_g^-( [ω,]) . Observe that since for a projector , we have (A [B,]) =([,A] B) , we have the equivalent formulation _x,g,(ω) = ∫_g^+(ω [,] ) + ∫_g^-(ω [,] ) . Let now α be a section of (E) so that α̣ belongs to ^∞(E). We also define the primitive line integration of α by _x,g,(α) (α(x) [,] )+ _x,g,(α̣) = ([α(x),] )+ _x,g,(α̣) . §.§.§ Bounded linear forms and continuity The line integration operator ω↦_x,g,Q(ω) , a continuous linear form on ^∞(E). This proposition is an immediate consequence of the following lemma There exist positive constants B and b, only depending on and x, so that for any ω in ^∞(E) if y is a point in g^+, z a point in g^- and denoting the tangent vector to the geodesic g. |( [ω_y(),] )| ≤ Be^-bd(x,y)‖ω‖_∞ , ‖ [,] ‖_z ≤ Be^-bd(x,z) , ‖ [,]‖_y ≤ Be^-bd(x,y) . Let us choose a trivialization of E so that ∇ is trivial. By hypothesis ω is in ^∞(E) and thus ‖ω_y()‖_y≤‖ω‖_∞ . Then σ: y↦σ(y) [ω_y(),] , is a section of F_0^-. Since is bounded – see proposition <ref> – there exists k_1 such that for all y ‖σ(y)‖_y≤ k_1 ‖ω‖_∞ . By lemma <ref>, F_0^- is a contracting bundle in the negative direction, which means there exists positive constants A and a so that if y=ϕ_t(x) with t>0, then ‖Φ_-t^∇( σ(y))‖_x≤ A e^-at‖σ(y)‖_y , where ∇ is the connection. However in our context, since we have trivialized the bundle, Φ_-t^∇ is the identity fiberwise, and thus combining the previous remarks we get that if y is in g^+, then ‖ [ω_y(),] ‖_x ≤ A e^-a(d(y,x)‖ω‖_∞ . By Cauchy–Schwarz, for all endomorphisms U and V, we have |(U V)|≤‖ U‖_x‖ V‖_x . Thus combining equations (<ref>) and (<ref>) we obtain |( [ω_y(),] )|≤‖‖_x ‖ [ω_y(),] ‖_x ≤ A e^-a(d(y,x)‖‖_x ‖ω‖_∞ , and the inequality (<ref>) follows. Similarly, [,] is a parallel section of F_0^-, thus the inequality (<ref>) is an immediate consequence of inequality <ref>. The primitive line integration _x,g,Q(α) does not depend on the choice of x on g. Let us write for the sake of this proof _x_x,g,Q(α). Let μ be the geodesic arc from y to x. Let us consider a parametrization of g so that x=g(s_0) and y=g(t_0). 
Then letting ω = α̣ _y-_x = ((α(y)-α(x)) [,]) + ∫_t_0^∞(ω(ġ) [,Q])ṭ +∫_t_0^-∞(ω(ġ) [,Q] )ṭ - ∫_s_0^∞(ω(ġ) [,Q])ṭ-∫_s_0^-∞(ω(ġ) [,Q] )ṭ = ∫_s_0^t_0(ω(ġ) ( [,Q]- [, Q]-[,Q] )) ṣ =0 , where the last equality comes form the fact that, since is a projector [,Q] + [, Q]=[,Q] . Finally we have, Assume that β is bounded. Then _m,Q(β)= 0. Let ϖ=β̣. It follows that (ϖ() [,])=∂/∂ t(β [,]) . Thus by the exponential decay lemma <ref>, we have ∫_g^+(ϖ [,])=-(β(x) [,]) . Similarly ∫_g^-(ϖ [,])=-(β(x) [,] ) . It follows that _x,g_0,Q(ϖ)= -(β(x) [,])-(β(x) [,] )=-(β(x) [,]) . This concludes the proof. §.§ Ghost integration: the construction Let now G be a configuration of geodesics with a Θ-decoration Å. Let ρ be a Θ-uniformly hyperbolic bundle, where G=⌈ g_1,…, g_p⌉. Let _i=^Å(i)(g_i) and _i=_i-1…_i+1 . Let α be a closed 1-form with values in (E). Assume that α belongs to ^∞(E). Let β be a primitive of α – that is a section of (E) so that β̣=α – let _ρ(G)(β)∑_i=1^n _g_i,_i(β) , The quantity _ρ(G)(β) only depends on the choice of α and not of its primitive. Let β_0 and β_1 two primitives of α. Observe that Bβ_1-β_0 is constant, then _G(β_1)-_G(β_0)=∑_i=1^p (B[_i,_i])=∑_i=1^p (B_i_i)-∑_i=1^p (B_i_i)=0 , since _i_i=_i-1_i-1. We define the ghost integration of a 1-form α in ^∞(E) with respect to a Θ-ghost polygon G and a uniformly hyperbolic bundle ρ to be the quantity ∮_ρ(G)α_ρ(G)(β) , where β is a primitive of α. Gathering our previous results, we summarize the important properties of ghost integration: The ghost integration enjoys the following properties: * The map α↦∮_ρ(G)α is a continuous linear form on ^∞(E). * Assume α=β̣, where β is a bounded section of (E). Then ∮_ρ(G)α=0 . We remark that the second item implies that ghost integration is naturally an element of the dual of the first bounded cohomology with coefficients associated to the bundle. These are consequences of the corresponding properties for J_x,g,Q proved respectively in propositions <ref>, <ref> and <ref>. §.§ Ghost integration of geodesic forms Recall that we denoted by (E) the space of geodesically bounded forms, and observe that for any geodesic g, the projector form β_ρ(g) belongs to (E). Let ρ be a Θ-uniformly hyperbolic bundle. Let G be configuration of geodesics of rank p associated to a ghost polygon ϑ(θ_1,…θ_2p) and a Θ-decoration. Assume that α is in (E). Then ∮_ρ(G)α = - (∑_i=1^2p(-1)^i∫_θ_i(α _G(θ_i^*))) , where _G^Å(θ_i^*) denotes the opposite ghost endomorphism to θ_i. In the context of projective uniformly hyperbolic bundle, that is Θ={1}, then the previous formula is much simpler as an immediate consequence of lemma <ref>. Let G be configuration of geodesics of rank p associated to a ghost polygon ϑ(θ_1,…θ_2p) and a Θ-decoration. Let ρ be a projective uniformly hyperbolic bundle. Assume that α is in (E). Then ∮_ρ(G)α = - _G(ρ)(∑_i=1^2p(-1)^i∫_θ_i(α (θ_i))) . Observe that both formulae above do not make sense for a general bounded form. Observe also that Let G be a ghost polygon, and α a 1-form with values in the center of (E) then ∮_ρ(G)α=0 . §.§.§ An alternative construction: a first step Let x be a point in , γ_i^± the geodesic from x to g_i^±. Assume that α is in (E) then ∮_ρ(G)α = ∑_i=1^p(∫_γ_i^+(α _i [_i,_i])+ ∫_γ_i^-(α [_i,_i] _i )) . Let fix a point x_i in each of the g_i. Let β a primitive of α so that β(x)=0. Let η_i be the geodesic from x to x_i. It follows that, since α is geodesically bounded, we have by the cocycle formula (<ref>) ∫_γ_i^+(_i [α,_i] _i)= ∫_η_i(_i [α,_i] _i)+∫_g_i^+(_i [α,_i] _i) . 
Similarly ∫_γ_i^-(_i _i [α,_i])= ∫_η_i(_i _i [α,_i]+∫_g_i^-(_i _i [α,_i]) . Observe now that, using the relation [,Q] + [, Q]=[,Q], we have ∫_η_i(_i [α,_i] _i)+∫_η_i(_i _i [α,_i]) =∫_η_i(_i [α,_i])=(_i [β(x_i),_i]) . Thus, we can now conclude the proof: _ρ(G)(β) = ∑_i=1^p(∫_γ_i^+(_i [α,_i] _i)+ ∫_γ_i^-(_i _i [α,_i])) = ∑_i=1^p(∫_γ_i^+(_i [_i,_i] α)+ ∫_γ_i^-([_i,_i] _i α)) . §.§.§ Proof of proposition <ref> Let us assume we have a ghost polygon ϑ = (θ_1,…,θ_2p) given by a configuration of geodesics G=⌈ g_1,…, g_p⌉. Let _i=(g_i) and α an element of (E). We have _i [_i,_i] = _i_i- _i_i_i , [_i,_i] _i = _i_i_i- _i_i=_i_i_i- _i-1_i-1 . Since α is geodesically bounded we have ∮_ρ(G)α =∑_i=1^p(∫_γ_i^+(α _i _i )- ∫_γ_i+1^-(α _i _i)) -∑_i=1^p(∫_γ_i^+(α _i _i _i)-∫_γ_i^-(α _i _i _i) ) . For i∈{1,…,p}, let ζ_i be the ghost edge joining g_i+1^- to g_i^+, that is ζ_i=θ_2i+1. For a closed form β which is geodesically bounded the cocycle formula (<ref>) yields ∫_γ_i^+β-∫_γ_i^-β=∫_g_iβ , ∫_γ_i+1^-β-∫_γ_i^+β=-∫_ζ_iβ . Thus _ρ(G)(α) = ∑_i=1^p(∫_ζ_i(α_i_i) -∫_g_i(α_i_i_i)) . To conclude we need first to observe that as g_i is a visible geodesic then _i_i_i is the opposite ghost endomorphism _G(g_i^*). On the other hand as ζ_j is a ghost edge then _j_j is the opposite ghost endomorphism _G(ζ_j^*). Thus _ρ(G)(α) = - (∑_i=1^2p(-1)^i∫_θ_i(α _G(θ_i^*))) . §.§.§ Another altenative form with polygonal arcs Let G = (θ_1,…,θ_2p) be Θ a ghost polygon given by configuration ⌈ g_1,…,g_p⌉ with g_i = θ_2i. Let x be the barycenter of G. Let x_i be the projection of x on g_i. For a ghost edge ζ_i = θ_2i+1, let us consider the polygonal arc _i given by _i=a_i∪ b_i∪ c_i∪ d_i , where * the geodesic arc a_i is the arc (along g_i+1) from g_i+1^- to x_i+1, * the geodesic arc b_i joins x_i+1 to x, * the geodesic arc c_i joins x to x_i, * the geodesic arc d_i joins x_i to g_i^+. We then have, using the same notation as in proposition <ref> We have for α in (E) ∮_ρ(G)α = -∑_i∫_g_i(α _G(θ_i^*)) + ∫__i(α _G(θ_i^*)) . The proof relies on the fact that for α in (E), and ζ_i a ghost edge we have ∫__iα=∫_ζ_iα . Then the formula follows from proposition <ref>. Ghost integration and Rhombus integration. The process described for the ghost integration is a generalisation of the Rhombus integration described in <cit.>. §.§ A dual cohomology class Let ρ be a Θ-uniformly hyperbolic bundle. Let now G be a Θ-ghost polygon with configuration ⌈ g_1,…,g_p⌉ and Θ-decoration Å. Let ϑ=(θ_1,…,θ_2p) be the associated ghost polygon and denote by ζ_i=θ_2i+1 the ghost edges. Let _i be the associated polygonal arc associated to the ghost edge ζ_i as in paragraph <ref>. The ghost dual form to ρ(G) is Ω_ρ(G)∑_i=1^p(ω_g_i_G(g^*_i) - ω__i_G(ζ^*_i)) . Observe that ρ(G) incorporates a Θ-decoration and so Ω_ρ(G) depends on the Θ-decoration. We have the following properties * The ghost dual form belongs to (E). * Assume that α belongs to (E). Then ∮_ρ(G)α=∫_(α∧Ω_ρ(G)) . * (exponential decay inequality) Finally, there exist positive constants K and a only depending on ρ and R_0 so that if the core diameter of G is less than R_0, then ‖Ω_ρ(G)(y)‖_y≤ K e^-a d(y,(G)) , and, moreover, Ω_ρ(G)(y) vanishes when d(y,(G))≥ R_0+2 and d(y,g)>2 for all visible edges g of G. Later we will need the following corollary which we prove right after we give the proof of the proposition. We have the following bounds: The map ϕ_G : y ↦ ‖Ω_ρ(G)(y)‖_y , belongs to L^1(), and ‖ϕ_G‖_L^1() is bounded by a continuous function of the core diameter of G. 
The map ψ_G,y : γ ↦ ‖Ω_ρ(G)(γ y)‖_y , belongs to ℓ^1(Γ), and ‖ψ_G,y‖_ℓ^1(Γ) is bounded by a continuous function of the core diameter of G. Finally the map ϕ : H ↦ ‖Ω_ρ(H)‖_∞= sup_y∈‖Ω_ρ(H)(y)‖_y , is bounded on every compact set of ^p_⋆ We first prove the exponential decay inequality (<ref>) which implies in particular that Ω_ρ(G) belongs to ^∞(E). Let r(G) be the core diameter of G. Let as usual g_i be a visible edges, x be the barycenter of all g_i and x_i be the projection of x on g_i. By the construction of the polygonal arc _i, it follows that outside of the ball of radius r(G)+2 centered at x, then Ω_ρ(G)= ∑_iω^-_g_i [_i,_i]_i + ∑_iω^+_g_i_i [_i,_i] , where ω^±_g_i=f^±_iω_g_i where f^±_i is a function with values in [0,1] with support in the 2-neighbourhood of the arc [x_i,g_i^±]. Then the decay given in equation (<ref>) is an immediate consequence of the exponential decay given in inequality (<ref>). Observe now that Ω_ρ(G) is closed. Let A be a parallel section of (E), then it is easily seen that (Ω_ρ(G)A) is geodesically bounded. It follows that Ω_ρ(G) is in (E). Then the result follows from the alternative formula for ghost integration in proposition <ref>. Given a ghost polygon H whose set of visible edges is g_H, and core diameter less than R_0. Let V_H≤{y∈| d(y,(H))≤ R_0+2 or d(y,g)≤ 2 for some g ∈ g_H} Observe that the volume of V_H(R) V_H∩ B((H),R) has some linear growth as a function of R, and more over this growth is controlled as a function of R_0. This, and the exponential decay inequality (<ref>), implies that ϕ_G, whose support is in V_H, is in L^1() and that is norm is bounded by a constant that only depends on R_0. Similarly consider F_H,y{γ∈Γ| d(γ(y),(H))≤ R_0+2 or d(γ(y),g)≤ 2 for g in g_H} . and F_H,y(R){γ∈ F_H,y| d(γ (y),(H))≤ R} . Then the cardinal of the subset F_H,y(R) has linear growth depending only on R_0. Hence, for every y, γ↦∑_γ∈ F_H,yK_0e^-a(d(γ y,(H)) , seen a function of H is in ℓ^1(Γ) and its ℓ^1 norm is bounded as a function of R_0. Hence – as a consequence of the exponential decay inequality (<ref>) – for every y, the map γ↦‖Ω_ρ(G)(γ y)‖_y , is in ℓ^1(Γ) and its ℓ^1 norm is bounded by a function of R_0. Finally from inequality (<ref>), we have obtain that there is a constant R_1 only depending on R_0 such that sup_y∈‖Ω_ρ(H)(y)‖_y≤sup_y∈ B((H), R_1)‖Ω_ρ(H)(y)‖_y +1 . The bounded cocycle hypothesis, equation (<ref>), implies that sup_y∈ B((H), R_1)‖Ω_ρ(H)(y)‖_y is bounded by a function only depending on R_1, and thus sup_y∈‖Ω_ρ(H)(y)‖_y is bounded by a function of R_0. This completes the proof of the corollary. §.§ Derivative of correlation functions In this paragraph, as a conclusion of this section, we relate the process of ghost integration with the derivative of correlation functions. Let ∇_t,h be a bounded variation of a uniformly hyperbolic bundle ρ=(∇,h). Assume that G is a Θ-ghost polygon, then _G(∇̇)=∮_ρ(G)∇̇ . This proposition is an immediate consequence of the following lemma, which is itself an immediate consequence of the definition of the line integration in paragraph <ref> and lemma <ref>: Let (∇_t,h_t) be a family of uniformly hyperbolic bundles with bounded variation – see definition <ref> – associated to a family of fundamental projectors . Then for a decorated geodesic g, (_0(g)· Q)=_ρ(g),Q(∇̇) . §.§ Integration along geodesics For completeness, let us introduce ghost integration for geodesics: we define for any geodesically bounded 1-form α in Ξ(E) and a Θ-geodesic g, ∮_ρ(g)α∫_g(α_g) . 
It is important to observe that, contrarily to a general ghost polygon, we only integrate geodesically bounded forms, not bounded ones. In particular, we cannot integrate variations of uniformly hyperbolic bundles. § GHOST INTERSECTION AND THE GHOST ALGEBRA In this section we will effectively define and compute the ghost intersections of ghost polygons or geodesics. This is the objective of propositions <ref> and <ref>. We define the associated ghost algebra in paragraph <ref> and relate in <ref> the corresponding ghost bracket for the projective case to the swapping bracket defined in <cit.> by the second author. Finally we relate the intersection of two ghost polygon to the correlation of the brackets of these in the crucial proposition <ref>. In the somewhat independent paragraph <ref>, we define and study natural maps from the ghost algebra to itself. We will use freely the definitions given in section <ref> for ghost polygons. §.§ Ghost intersection: definitions and computation We proceed step by step with the definitions. §.§.§ Intersecting two geodesics Let g and h two Θ-geodesics (in other words, geodesics labelled with an element of Θ). Let us define _ρ(g,h)∮_ρ(g)β^0_h , where β^0_hβ_h-Θ_h/(E) is the trace free part of β_h and Θ_h is defined in equation (<ref>). A straightforward computation using equation (<ref>) and (<ref>) then gives _ρ(g,h)=ϵ(h,g)(_⌈ g,h⌉ (ρ) - 1/(E)Θ_gΘ_h) . By convention, the quantity ϵ(g,h) for two Θ-decorated geodesics g and h is the same as the intersection of the underlying geodesics. §.§.§ Intersecting a ghost polygon with a geodesic Let ρ be a Θ-uniformly hyperbolic bundle. Let G be a Θ-ghost polygon and h a Θ-geodesic. The ghost intersection of G and g is _ρ(G,h) -∮_ρ(G)β_ρ(h)=-∫_(β_ρ(h)∧Ω_ρ(G))=-∫_h(Ω_ρ(G) (h))-_ρ(h,G) . By convention we set _ρ(h,G) -_ρ(G,h). We will prove that we can effectively compute the ghost intersection. Then we have Let G be a configuration of geodesics, associated to a ghost polygon ϑ=(θ_1,…,θ_2p). The ghost intersection of h and G is given by _ρ(G,h)=∑_i=1^2p(-1)^i+1ϵ(h,θ_j) _⌈ h, θ^*_i⌉(ρ) , where θ_i^* is the opposite configuration as in paragraph <ref>. In the projective case, that is Θ={1} we have _ρ(G,h)=_G(ρ)(∑_i=1^2p(-1)^i+1ϵ(h,θ_j) _⌈ h, θ_i⌉(ρ)) . §.§.§ Intersecting two ghost polygons We define the ghost intersection of two ghost-polygons or equivalently of two configuration of geodesics G and H to be _ρ(G,H)∮_ρ(G)Ω_ρ(H)=∫_(Ω_ρ(H)∧Ω_ρ(G)) . We can again compute this relatively effectively: The ghost intersection of the two configuration G and H, associated respectively to the ghost polygons ϑ=(θ_i)_i∈ I, with I=[1,2p], and ς=(σ_j)_j∈ J, with J=[1,2m], respectively, is given by _ρ(G,H) = ∑_i∈ I,j∈ J(-1)^i+jϵ(σ_j,θ_i) _⌈σ^*_j,θ^*_i⌉(ρ) . In the projective case, this simplifies as _ρ(G,H) = _G(ρ)_H(ρ)(∑_i∈ I,j∈ J(-1)^i+jϵ(σ_j,θ_i) _⌈σ_j,θ_i⌉(ρ)) . §.§ Θ-Ghost bracket and the ghost space We develop a more formal point of view. Our goal is proposition <ref> that identifies the intersection as a correlation function. Let 𝒜 be the vector space generated by Θ-ghost polygons (or equivalently configurations of Θ-geodesics) and Θ-geodesics. We add as a generator the element , and call it the Casimir element. By definition, we say has rank 0. We will see that the Casimir element will generate the center. Recall also that we can reverse the orientation on geodesics. The corresponding reverse orientation on configuration is given by ⌈ g_1,… ,g_p⌉⌈g̅_p,… ,g̅_1⌉. We define the bracket on the basis of 𝒜 and extend it by linearity. 
* The bracket of with all elements is 0. * Let G and H be two configurations of Θ-geodesics, associated respectively to the ghost polygons ϑ=(θ_i)_i∈ I, with I=[1, 2p] and ς=(σ_j)_j∈ J, with J=[1,2m] respectively. Their Θ-ghost bracket is given by [G,H] ∑_i∈ I,j∈ Jϵ(σ_j,θ_i)(-1)^i+j⌈θ^*_i,σ^*_j⌉ , where we recall that θ^*_j is the opposite ghost configuration defined in paragraph <ref>. * Let g and h be two Θ-geodesics and G a ghost polygon as above. Then we define [g,h] ϵ(h,g)(⌈ h,g⌉ -Θ_hΘ_g·) , G,h] ∑_j∈ J (-1)^j+1ϵ(h,θ_j) ⌈ h,θ^*_j⌉ -[h,G] , Finally 𝒜 equipped with the ghost bracket is called the ghost algebra. We observe that the ghost bracket is antisymmetric. However, the Θ-ghost bracket does not always satisfy the Jacobi identity: there are some singular cases. We actually prove in the Appendix <ref>, as Theorem <ref> the following result Assume A, B, and C are ghost polygons and that V_A∩ V_B∩ V_C=∅ , where V_A, V_B and V_C are the set of vertices of A, B and C respectively, then [A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0 . Finally we now extend the map on 𝒜 so as to define _G(ρ) for G an element of 𝒜, while defining _(ρ) 1/(E) . The purpose of this formal point of view is to rewrite Propositions <ref> and <ref> as the simple formula: We have for G, H ghost polygons then _ρ(G,H)=_[G,H](ρ) . This formula will allow us to compute recursively Poisson brackets of correlation functions. §.§ The projective case: swapping and ghost algebras Throughout this section, we will restrict ourselves to the projective case, that is Θ={1}. §.§.§ Ghost polygons and multifractions In <cit.>, the second author introduced the swapping algebra ℒ consisting of polynomials in variables (X,x), where (X,x) are points in S^1, together with the relation (x,x)=0. We introduced the swapping bracket defined on the generators by [(X,x),(Y,y)]=ϵ((Y,y),(X,x)) ((X,y)· (Y,x) ) . We proved that the swapping bracket gives to the swapping algebra the structure of a Poisson algebra. We also introduced the multifraction algebra ℬ which is the vector space in the fraction algebra of ℒ generated by the multifractions which are elements defined, when X and x are a n tuples of points in the circle and σ an element of the symmetric group 𝔖(n) by [X,x;σ]∏_i=1^n (X_i,x_σ(i))/∏_i=1^n(X_i,x_i) . We proved that the multifraction algebra is stable by the Poisson bracket, while it is obviously stable by multiplication. Let us consider the algebra ℬ_0 which is generated as a vector space by the multifraction algebra to which we add extra generators denoted ℓ_g for any geodesic g – which are formally logarithms of geodesics ℓ_g=log(g) as well as a central element ; we finally extend the swapping bracket to ℬ_0 by adding [ℓ_g,ℓ_h]1/g h[g,h]+ϵ(g,h) , [G,ℓ_h] 1/h[G,h] -[ℓ_h,G] . We call ℬ_0 with the extended swapping bracket, the extended swapping algebra. The reversing orientation is defined on generators by ℓ_g=ℓ_g, We then have The extended swapping algebra is a Poisson algebra. The reversing isomorphisms antipreserves the Poisson structure: [G,H]=-[ G, H]. This is just a standard check that adding “logarithmic derivatives” to a Poisson algebra still gives a Poisson algebra. We first see that that ∂_g: z↦ [ℓ_g,z]=1/g[g,z] , is a derivation on the fraction algebra of the swapping bracket. Indeed, ∂_g([z,w]) = 1/g[g,[z,w]] =1/g([z,[g,w]]-[w,[g,z]]) = ([z,[g,w]/g]-[w,[z,w]/g])=[z,∂_g(w)]+[∂_g(z),w] . 
Moreover, the bracket of derivation gives [∂_g,∂_h](z)=[[ℓ_g,ℓ_h],z] Let us check this last point: ∂_g (∂_h (z))= 1/g[g,1/h[h,z]]=-1/gh^2[g,h][h,z] + 1/g h[g,[h,z]] . Thus, we complete the proof of the proposition [∂_g,∂_h](z)=[g,h](-[h,z]/gh^2- [g,z]/hg^2) + 1/gh[[g,h],z]]=[[g,h]/gh,z] . §.§ The projective case: ghost algebra and the extended swapping algebra In the projective case, it is convenient to consider the free polynomial algebra 𝒜_P generated by the ghost polygons, and extend the ghost bracket by the Leibnitz rule to 𝒜_P. In this paragraph, we will relate the algebras 𝒜_P and ℬ_0, more precisely we will show: There exists a homomorphisms of commutative algebra map π:𝒜_P→ℬ_0 , which is surjective, preserves the bracket and and the reversing the orientation isomorphism: [π(A),π(B)]=π[A,B] , π(A)=π(A) . Finally if A belongs to the kernel of π, then for any projective Anosov representation ρ, _A(ρ)=0. Thus, 𝒜_P/(π) is identified as an algebra with bracket with ℬ_0; in particular 𝒜_P/(π) is a Poisson algebra. This will allow in the applications to reduce our computations to calculations in the extended swapping algebra, making use of the fact that the extended swapping algebra is a Poisson algebra by proposition <ref>. Unfortunately, we do not have the analogue of the swapping bracket in the general Θ-case, although the construction and result above suggest to find a combinatorially defined ideal ℐ in the kernel of (ρ) for any ρ, so that 𝒜/ℐ satisfies the Jacobi identity. §.§.§ From the ghost algebra to the extended swapping algebra In this paragraph, we define the map π of Theorem <ref>. The map π is defined on the generators by g ⟼ π(g)ℓ_g , G=⌈ g_1,… ,g_p⌉ ⟼ π(G) [X,x;σ]=∏_i=1^n (g^+_i,g^-_i+1)/∏_i=1^n(g^+_i,g^-_i) . where X=(g^+_i), x=(g^-_i), σ(i)=i+1. Cyclicity is reflected by π(⌈ g_1,… ,g_p⌉) = π(⌈ g_2,… ,g_p, g_1 ⌉) . Conversely, we then have the following easy construction. Let X=(X_1,…,X_k), x=(x_1,…, x_k), and g_i the geodesic (X_i,x_i). Let σ be a permutation of {1,…,k} and let us write σ=σ_1,…,σ_q be the decomposition of σ into commuting cycles σ_i or order k_i with support I_i. For every i, let m_i be in I_i and let us define h^i_j= g_σ_i^j-1(m_i) , G_i=⌈ h^i_1, … h^i_k_i⌉ . We then have with the above notation [X,x;σ]=π(G_1… G_q) . The map π is surjective. In the sequel, the decomposition (<ref>) will be referred as the polygonal decomposition of the multifraction [X,x;σ]. We also obviously have Any tuples of ghost polygons is the polygonal decomposition of a multifraction. §.§.§ The map π and the evaluation For any multifraction B=[X,x;σ] and projective Anosov representation ρ associated to limit curves ξ and dual limit curves ξ^*, we define ^P_B(ρ)∏_i ⟨V_i,v_σ(i)|⟩/∏_i ⟨V_i,v_i|⟩ , where V_i is a non-zero vector in ξ^*(X_i) while v_i is a non-zero vector in ξ(x^i). Given ρ, we now extend G↦_G(ρ) and ^P_B(ρ) to homomorphisms of commutative free algebras to 𝒜_p and ℬ_0. We then have the following result which follows at once since we are only considering rank 1 projectors. We have, for all projective Anosov representations ρ ^P_π(G)(ρ)=_G(ρ) , _G(ρ)=_G̅(ρ^*) , This proposition implies that for every G in the kernel of π, for every ρ, _G(ρ)=0. §.§.§ Swapping bracket We now compute the brackets of multifractions. We shall use the notation of paragraph <ref> where the opposite configuration g^* of a ghost or visible edge g is defined. Observe that g^* is an ordered configuration. 
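Before stating the computation, let us record, as an illustration, the simplest instance of the map π, which appears repeatedly in the proofs below. For a configuration of two geodesics G=⌈ g,h⌉, the defining formula gives π(⌈ g,h⌉)=(g^+,h^-)(h^+,g^-)/(g^+,g^-)(h^+,h^-) , that is, the multifraction [X,x;σ] with X=(g^+,h^+), x=(g^-,h^-) and σ the transposition exchanging the two indices.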
Then we have Let G and H be two multifractions that are images of ghost polygons: G = π(θ_1,…,θ_2p) and H = π(ζ_1,…,ζ_2q). Then their swapping bracket is given by [G ,H ] = (G H ( ∑_i,jϵ(ζ_j,θ_i)(-1)^i+jπ(⌈θ_i,ζ_j ⌉)) ) . Moreover, for g=(X,x) and h=(Y,y) geodesics, we have in the fraction algebra of the swapping algebra. [ℓ_h,ℓ_g] = ( ϵ(g,h) π(⌈ g,h⌉)) . [G , ℓ_h] = (G ( ∑_iϵ(h,θ_i)(-1)^i+1π(⌈θ_i,h⌉))) . Moreover, using the notation θ^*_i for the opposite edge, we have, for every i and j π(⌈θ_i^*,ζ_j^*⌉ )=G H π(⌈θ_i,ζ_j⌉) . In this proof, we will omit to write π and confuse a ghost polygon and its image under π. Equation (<ref>) follows at once from the definition. Let now G=⌈ g_1,…, g_p⌉, let η_i be the ghost edges joining g_i+1^- to g_i^+. Then we may write in the fraction algebra of the swapping algebra ⌈ g_1,…, g_p⌉ =∏_i=1^pη_i/∏_i=1^p g_i . Using logarithmic derivatives we then have 1/G [G ,ℓ_h] =∑_i=1^p (1/h η_i[η_i,h] -1/h g_i[g_i,h] )=∑_i=1^p (ϵ(h,η_i)⌈η_i,h⌉ -ϵ(h,g_i)⌈ g_i,h⌉) , which gives equation (<ref>). Writing now G =⌈ g_1,…, g_p⌉ =∏_i=1^pη_i/∏_i=1^q g_i , H =⌈ h_1,…, h_q⌉ =∏_i=1^qν_i/∏_i=1^q h_i , where η_i and ν_i are ghost edges of G and H respectively, we get [G ,H ] /G H = ∑_(i,j)(1/g_i h_j[g_i,h_j] -1/g_i ν_j[g_i,ν_j] +1/η_i ν_j[η_i,ν_j] - 1/η_i h_j[η_i,h_j] ) = ∑_(i,j)(ϵ(h_j,g_i)⌈ g_i,h_j⌉ - ϵ(ν_j,g_i)⌈ g_i,ν_j⌉ +ϵ(ν_j,η_i)⌈η_i,ν_j⌉ - ϵ(h_j,η_i)⌈η_i,h_j⌉) , which is what we wanted to prove. The equation (<ref>) follows from the definition of the map π. As a corollary we obtain The map π preserves the bracket. The proof follows at once from proposition <ref> and <ref> which computes the ghost intersection and recognizing each term as the correlation functions of a term obtained in the corresponding ghost bracket in proposition <ref>. §.§.§ Proof of Theorem <ref> We have proved all that we needed to prove: the theorem follows from corollary <ref> and <ref>, as well as lemma <ref>. §.§ Natural maps into the ghost algebra Let w be a p-multilinear map from the ghost algebra to itself. We say w is natural, if for tuples of integers (n_1,…,n_p) there exists an integer q, a real number A such that given a tuple of ghost polygons G=(G_1,…, G_p) with G_i in ^n_i, then w(G_1,…,G_p)=∑_i=1^q λ_i H_i , where H_i are ghost polygons, λ_i are real numbers less than A and, moreover, every visible edge of H_i is a visible edge of one of the G_i.[The existence of q is actually a consequence of the definition: there only finitely many polygons with a given set of visible edges] We will extend the definition of the core diameter to any element of the ghost algebra by writing, whenever H_i are distinct ghost polygons ghost polygons r(∑_i=1^q λ_i H_i)sup_i=1,…,q(|λ_i| r(H_i)) , We also recall that the core diameter of a ghost polygons, only depends on the set of its visible edeges. We then define the core diameter of a tuple of polygons G=(G_1,…,G_n), as the core diameter of the union of the set of edges of the G_i's. We then have the following inequality of core diameters for a natural map w, G=(G_1,…,G_p) and q and A as in the definition r(w(G))≤ A r(G) . We now give an exemple of a natural map The map (G_1,…,G_n)↦[G_1,[G_2,[… [G_n-1,G_n]…]]] is a natural map. This follows at once from the definition of the ghost bracket and a simple induction argument. § GEODESIC AND CYCLIC CURRENTS In this section, building on the classical notion of geodesic currents, we define the notion of higher order geodesic currents, called cyclic currents. 
Among them we identify integrable currents, show how they can average correlation functions and produce examples of them. Recall that is the set of oriented geodesics in . The set of Θ-geodesics is then denoted ×Θ. §.§ Cyclic current First recall that a signed measure is a linear combination of finitely many positive measure. Any signed measure is the difference of two positive measures. A cyclic current is a Γ-invariant signed measure invariant under cyclic permutation. As a first example let us consider for μ and ν geodesic current, the signed measure μ∧ν given by μ∧ν1/2ϵ (μ⊗ν -ν⊗μ) , where we recall that ϵ(g,h) is the intersection number of the two geodesics g and h. The signed measure μ∧ν is a cyclic current supported on intersecting geodesics. Moreover μ∧ν=-ν∧μ. We have 2∫_^2/Γ f(g,h) μ̣∧ν(g,h)= ∫_^2/Γ f(g,h) ϵ(g,h)(μ̣(g) ν̣(h)-ν̣(g) μ̣(h) ) = ∫_^2/Γ f(h,g) ϵ(h,g)(μ̣(h) ν̣(g)-ν̣(h) μ̣(g) ) = ∫_^2/Γ f(h,g) ϵ(g,h)(ν̣(h) μ̣(g)-μ̣(h) ν̣(g) ) =2∫_^2/Γ f(h,g) μ̣∧ν(g,h) . Hence μ∧ν is cyclic. The last assertions are obvious. Our main definition is the following, let ρ be a Θ-Anosov representation of Γ, the fundamental group of a closed surface. We give several definitions, let w be a natural map from ^p_1×⋯×^p_q to ^m * a w-cyclic current is a Γ-invariant measure μ=μ_1⊗⋯⊗μ_q where μ_i are Γ-invariant cyclic currents on ^n_i, * the w-cyclic current μ is a (ρ,w)-integrable current if there exists a neighborhood U of ρ in the moduli space of (complexified) Θ-Anosov representations of Γ, and a positive function F in L^1(_⋆^k/Γ,η) so that for all σ in U, and G in _⋆^k; |_w(G)(σ)|≤ F(G) , where F_0 is the lift of F to _⋆^k. * When w is the identity map , we just say a current is ρ-integrable, instead of (ρ,)-integrable. * A current of order k, is w-integrable or integrable if it is (ρ,w)-integrable or ρ-integrable for all representations ρ. §.§.§ Γ-compact currents A Γ-invariant w-cyclic current μ is Γ-compact if it is supported on a Γ-compact set of _⋆^p. Obviously a Γ-compact cyclic current is integrable for any natural map w. Here is an important example of a Γ-compact cyclic current: Let ℒ be a geodesic lamination on S with component of its complement C being a geodesic triangle. Let π:↦ S be the universal covering of S and x a point in C Then π^-1C=_i∈π^-1(x) C_i . The closure of each C_i is an ideal triangle with cyclically ordered edges (g_i^1,g_i^2,g_i^3). We consider the opposite cyclic ordering (g_i^3,g_i^2,g_i^1). The notation δ_x denotes the Dirac measure on X supported on a point x of X. Then we obviously have The measure defined on ^p by μ^*_C=1/3∑_i∈π^-1(x)(δ_(g_i^1,g_i^3,g_i^2)+δ_(g_i^2,g_i^1,g_i^3)+δ_(g_i^3,g_i^2,g_i^1)) , is a Γ-compact cyclic current. §.§.§ Intersecting geodesics Let us give an example of integrable current. Let μ be Γ-invariant cyclic current supported on pairs of intersecting geodesics. Assume furthermore that μ(^2/Γ) is finite. Then μ is integrable. This follows at once from the following lemma. Let ρ_0 be a Θ-Anosov representation. Then there exists a constant K_ρ in an neighborhood U of ρ_0 in the moduli space of Anosov representations, such that for any ρ in U, for any pair of intersecting geodesics |_⌈ g, h⌉(ρ)|≤ K_ρ . Given any pair of geodesics (g_1,g_0) intersecting on a point x, then we can find an element γ in Γ, so that γ x belongs to a fundamental domain V of Γ. In particular, there exists a pair of geodesics h_0 and h_1 passing though V so that _⌈ g_0,g_1⌉(η)=_⌈ h_0,h_1⌉(η)=(_η(h_0) _η(h_1)) , where _η is the fundamental projector for η. 
Since the set of geodesics passing through V is relatively compact, the result follows by the continuity of the fundamental projector _η(h) on h and η. Given μ and ν, then μ∧ν is integrable. Let A{(g,h)|ϵ(g,h) = ± 1} , B{(g,h)|ϵ(g,h) = ±1/2} . Observe first that denoting i the Bonahon intersection, we have |μ∧ν (A/Γ)|≤ i(μ̅,ν̅)<∞ , where the last inequality is due to Bonahon <cit.>, and λ̅ is the symmetrised current of λ. As Γ acts with compact quotient on the set of triples of points on ∂, it follows that Γ acts on B with compact quotient and therefore μ∧ν(B) is finite. Therefore taking the sum we have that μ∧ν(^2/Γ) is finite. §.§.§ A side remark Here is an example of (ρ,w)-integrable current. First we the following inequality: given a representation ρ_0, there is a constant K_0, a neighborhood U of ρ_0, such that for every k-configuration G of geodesics and ρ in U then |_G(ρ)|≤ e^kK_0 r(G) . Since this is just a pedagogical remark that we shall not use, we do not fill the details of the proofs. From that inequality we see that if G↦ e^kK_0 r(G) is in L^1(_⋆^k/Γ,μ) then μ is (ρ,w)-integrable. § EXCHANGING INTEGRALS To use ghost integration to compute the Hamiltonian of the average of correlation functions with respect to an integrable current, we will need to exchange integrals. This section is concerned with proving the two Fubini-type exchange theorems we will need. Recall that the form β_ρ(g) is defined in equation <ref>. Let μ a Γ-invariant geodesic current. Let G be a Θ-ghost polygon. Then * ∫_β_gμ̣(g) — defined pointwise — is an element of ^∞(E), * the map g↦∮_ρ(G)β_g is in L^1(,μ), * finally, we have the exchange formula ∮_ρ(G)(∫_β_ρ(g) μ̣(g))=∫_(∮_ρ(G)β_ρ(g))μ̣(g) . Similarly, we have a result concerning ghost intersection forms. We have to state it independently in order to clarify the statement. Let us first extend the assigement G↦Ω_G by linearity to the whole ghost algebra, and observe that if we have distinct ghost polygons G_i and H=∑_i=1^q λ_iG_i , with sup_i∈{1,…,q|λ_i|=A , Then ‖Ω_H(y)‖≤ qA sup_i∈{1,…,q‖Ω_G_i(y)‖ . Let μ be a w-cyclic and Γ-compact current of rank p. Let G be a ghost polygon. Let w be a natural map. Then * ∫_^pΩ_ρ(w(H))μ̣(H) — defined pointwise — is an element of ^∞(E), * the map H↦∮_ρ(G)Ω_ρ(w(H)) is in L^1(^p,μ), * finally, we have the exchange formula ∮_ρ(G)(∫_^pΩ_ρ(w(H)) μ̣(H))=∫_^p(∮_ρ(G)Ω_ρ(w(H)))μ̣(H) . We first concentrate on Theorem <ref>, then prove Theorem <ref> in paragraph <ref>. §.§ Exchanging line integrals Theorem <ref> is an immediate consequence of a similar result involving line integrals. Let μ be a Γ-invariant geodesic current on , then * ∫_β_gμ̣(g) — defined pointiwise — is an element of ^∞(E), * Let g_0 be a geodesic, x a point on g_0 and Q a parallel section of (E) along g_0, then the map g↦_x,g_0,Q(β_g) , is in L^1(,μ). * We have the exchange formula _x,g_0,Q(∫_β_ρ(g) μ̣(g))=∫__x,g_0,Q(β_ρ(g)) μ̣(g) . We prove the first item in proposition <ref>, the second item in <ref> and the third in <ref>. §.§ Average of geodesic forms and the first item Let μ be a Γ-invariant measure on . Let y be a point in , and G(y,R){g∈| d(g,y)≤ R} . As an immediate consequence of the Γ-invariance we have For every positive R, there is a constant K(R) so that for every y in μ(G(y,R))≤ K(R) . Observe now that if g is not in G(y) G(y,2), then y is not in the support of ω_g and thus β_ρ(g)(y)=0. We then define The μ-integral of geodesic forms is the form α so that at a point y in α_y∫_G(y)β_g(y) μ̣(g)=∫_β_g(y) μ̣(g) . 
We use some abuse of language and write α∫_β_ρ(g)μ̣(g) . The form α_y is well defined since G(y) is compact. Moreover, the next lemma gives the proof of the first item of proposition <ref> The μ-integral of geodesic forms belongs to ^∞(E) and we have a constant K_5 only depending on ρ and μ so that ‖∫_β_ρ(g)μ̣(g)‖_∞≤ K_5 . We have |∫_β_ρ(g)(y) μ̣(g)|=|∫_G(y)β_ρ(g)(y) μ̣(g)|≤μ(G(y)) sup_g∈ G(y)‖β_ρ(g)‖_∞ . Then by proposition <ref>, there is a constant k_1 so that μ(G(y))≤ k_1. Recall that β_ρ(g)=ω_g(g). Then by the equivariance, ω_g is bounded independently of g, while by lemma <ref>, is a bounded section of (E). The result follows. §.§ Decay of line integrals We now recall the following definition. _x,g,(ω) = ∫_g^+(ω [,] ) + ∫_g^-(ω [,] ) . We prove in this paragraph the following two lemmas. Let g_0 be a geodesic and x a point in g_0, Let g be a geodesic such that d(g,g_0)>1, then for any function ψ on with values in [0,1]: _x,g_0,Q(ψβ_ρ(g))=0 . This follows at once from the fact that under the stated hypothesis, the support of ω_g does not intersect g_0. For any endomorphism and representation ρ, there exist positive constants K and k, so that for all g so that d(g,x)>R, for any function ψ on with values in [0,1]: |_x,g_0,Q(ψβ_ρ(g))|≤ K e^-kR . We assume x and g are so that d(g,x)> R. It is enough (using a symmetric argument for g_0^- to show that |∫_g_0^+(ψβ_ρ(g)··[,])|≤ K e^-kR , where g_0^+ is the arc on g_0 from x to +∞. Let us denote by g_0^+(R) the set of points of g_0^+ at distance at least R from x: g_0^+(R){y∈ g_0^+| d(y,x)≥ R} . Then if y belongs to g_0^+ and does not belong to g_0^+(R-1), then d(y,x)<R-1. Thus d(y,g)>1. Thus, by lemma <ref>, β_ρ(g)(y) vanishes for y in g_0^+ and not in g_0^+(R-1). Thus |∫_g_0^+(ψβ_ρ(g)·· [,])|≤∫_g_0^+(R-1)|(β_ρ(g)()·· [,])| ṭ . Then the result follows from the exponential decay lemma <ref>. Lemma <ref> now follows immediately after using a symmetric result for g_0^-. §.§ Cutting in pieces and dominating: the second item We need to decompose into pieces. Let g_0 be an element of and x a point on g_0. Let x^+(n) – respectively x^-(n) – the point in g_0^+ – respectively g^-_0 – at distance n from x. Let us consider U_0 {g∈| d(g,g_0)>1} , V^+_n {g∈| d(g,x^+(n))< 2 and for all 0≤ p<n , d(g,x^+(n))≥ 2 } , V^-_n {g∈| d(g,x^-(n))< 2 and for all 0≤ p<n, d(g,x^-(n))≥ 2 } . This gives a covering of : We have the decomposition =U_0∪⋃_n∈ℕ V^±_n , When g does not belong to U_0, there is some y in g_0 so that d(g,y)≤ 1, hence some n so that either d(y,g^+(n))≤ 2, while for all 0≤ p<n we have d(y,g^+(p))> 2, or d(y,g^-(n))≤ 2, while for all 0≤ p<n we have d(y,g^-(p))> 2. Let now n(g)=sup{m∈ℕ| g∈ V^+_m∪ V^-_m} . By convention, we write n(g)=+∞, whenever g does not belong to ⋃_n∈ℕ V^±_n. The non-negative control function F_0 on is defined by F_0(g)=e^-n(g). We now prove For any positive k, the function (F_0)^k is in L^1(,μ). Moreover, there exist positive constants K_9 and k_9 so that for all functions ψ on with values in [0,1] we have |_x,g_0,Q(ψβ_g)|≤ K_9(F_0(g))^k_9 . We now observe that the second item of proposition <ref> is an immediate consequence of this lemma. We first prove that F_0 and all its powers are in L^1(,μ). Observe that V^±_n⊂ G(x^±(n),2). It follows from that μ(V^±_n)≤ K(2) by proposition <ref>. Moreover, for any g in V_n^±, F_0(g)^k≤ e^-kn. The decomposition of lemma <ref> implies that F_0^k is in L^1(,μ). Let g be a element of . * When g belongs to U_0, then by lemma <ref>, _x,g_0,Q(β_ρ(g))=0. 
Hence |_x,g_0,Q(β_ρ(g))|≤ A (F_0(g))^a, for any positive A and a. * When g does not belong to U_0, then g belongs to V^±_n(g) with n(g)<∞. By lemma <ref>, we have d(x,g)≥ n(g). It follows from lemma <ref> that for any positive function ψ, we have |_x,g_0,Q(ψβ_ρ(g))|≤ Ke^-kn(g)=KF_0(g)^k . The last inequality concludes the proof. §.§ Proof of the exchange formula of proposition <ref> Let us choose, for any positive real R, a cut-off function ψ_R, namely a function on with values in [0,1], with support in the ball with center x and radius R+1, and equal to 1 on the ball of radius x and radius R. We write |_x,g_0,Q(∫_β_ρ(g) μ̣(g))-∫__x,g_0,Q(β_ρ(g))μ̣(g)|≤ A(R)+B(R)+C(R) , where A(R) = |_x,g_0,Q(∫_β_ρ(g) μ̣(g))-_x,g_0,Q(ψ_R∫_β_ρ(g)μ̣(g))| , B(R) = |_x,g_0,Q(ψ_R∫_β_ρ(g) μ̣(g))-∫__x,g_0,Q(ψ_R β_ρ(g)) μ̣(g)| , C(R) = |∫__x,g_0,Q( ψ_R β_ρ(g)) μ̣(g)-∫__x,g_0,Q( β_ρ(g)) μ̣(g) | . We will prove the exchange formula (the third item of proposition <ref>) as an immediate consequence of the following three steps 0.2 truecm Step 1: By lemma <ref>, α=∫_β_ρ(g)μ̣_g is in ^∞(E). By definition of a cutoff function, the support of (1-ψ(R)) α vanishes at any point y so that d(x,y)<R. Thus the exponential decay lemma <ref> guarantees that A(R)=|_x,g_0,Q((1-ψ(R)) α)|≤ K_4e^-k_4R‖α‖_∞ . Hence lim_R→∞A(R)=0. 0.2 truecm Step 2: Observe that ψ_R∫_β_ρ(g) μ̣(g)=∫_ψ_Rβ_ρ(g) μ̣(g) . Moreover the function g↦ψ_Rβ_g is continuous from to ^∞(E). Thus follows from the continuity of _x,g_0,Q proved in proposition <ref> implies that B(R)=0. 0.2 truecm Final Step: As a consequence of Lebesgue's dominated convergence theorem and the domination proved in lemma <ref>, we have that lim_R→∞ C(R)=0. 0.1 truecm Combining all steps lim_R→∞(A(R)+B(R)+C(R))=0 . Hence thanks to equation (<ref>), we have _x,g_0,Q(∫_β_ρ(g) μ̣(g))=∫__x,g_0,Q(β_ρ(g)) μ̣(g) . §.§ Proof of Theorem <ref> We assume now that μ is a Γ-compact current of order k>1. We may also assume – by decomposing the positive and negative part that μ is a positive current. We want to show that ∫_^pΩ_ρ(w(H))μ̣(H) — defined pointwise — is an element of ^∞(E). Since μ is Γ-compact, it follows that the core diameter of any H in the support of μ is bounded by some constant R_0 by proposition <ref>. It will be enough to prove that ∫_^p‖Ω_ρ(w(H))(y)‖_y μ̣(H) ≤ K_0 , for some constant K_0 that depends on μ. Let 𝒦 be a fundamental domain for the action of Γ on ^p. Observe now that ∫_^p‖Ω_ρ(w(H))(y)‖_y μ̣(H) = ∑_γ∈Γ∫_γ𝒦‖Ω_ρ(w(H))(y)‖_y μ̣(H) =∫_𝒦(∑_γ∈Γ‖Ω_ρ(w(H))(γ(y))‖_y) μ̣(H) = ∫_𝒦‖ψ_w(H),y‖_ℓ^1(Γ) μ̣(H) , where ψ_w(H),y : γ ↦‖Ω_ρ(w(H))(γ(y))‖_y . By the second assertion of corollary <ref>, the map ψ_H,y is in ℓ^1(Γ) and its norm is bounded by a continuous function of the core diameter r(w(H)) of w(H), hence by a continuous function of r(H) by inequality (<ref>), hence by a constant on the support of μ, since r is Γ-invariant and continuous by proposition <ref> and μ is Γ-compact. Since r(H) is bounded on the support of μ, the first item of the theorem follows. Let us consider the map Ψ: H↦∮_ρ(G)Ω_ρ(w(H))=∫_(Ω_ρ(w(H))∧Ω_ρ(G)) , where we used formula (<ref>) in the last equality. Our goal is to prove Ψ is in L^1(^p,μ). We have that ‖Ω_ρ(w(H))∧Ω_ρ(G)(y)‖≤‖Ω_ρ(w(H))(y)‖ ‖Ω_ρ(G)(y)‖ . It follows that ∫_^p|Ψ(H)| μ̣(H) ≤ ∫_^p∫_‖Ω_ρ(G)(y)‖ ‖Ω_ρ(w(H))(y)‖ ỵ μ̣(H) ≤ ∫_‖Ω_ρ(G)(y)‖ ( ∫_^p‖Ω_ρ(w(H))(y)‖ μ̣(H)) ỵ ≤ K_0∫_‖Ω_ρ(G)(y)‖ ỵ = K_0 ‖Ω_ρ(G)‖_L^1() , where we used the first in the third inequality. We can now conclude by using the first assertion the corollary <ref>. 
We use again a family of cutoff functions {ψ_n}_n∈ℕ defined on ^p with values in [0,1] so that each ψ_n has a compact support, and ψ_n converges to 1 uniformly on every compact set. It follows from the Lebesgue's dominated convergence theorem and the second item that lim_n→∞∫_^p( ∮_ρ(G)Ω_ρ(w(H))) ψ_n μ̣(H) =∫_^p(∮_ρ(G)Ω_ρ(w(H))) μ̣(H) . Recall now that by the last assertion of corollary <ref>, ‖Ω_ρ(H)‖_∞ is bounded on every compact set and Γ-invariant, hence bounded on the support of μ. Thus we have the following convergence in ^∞(E) lim_n→∞∫_^pΩ_ρ(w(H)) ψ_n μ̣(H) =∫_^pΩ_ρ(w(H)) μ̣(H) , From the continuity obtained in proposition <ref>, we then have that lim_n→∞∮_ρ(G)∫_^pΩ_ρ(w(H)) ψ_n μ̣(H) =∮_ρ(G)∫_^pΩ_ρ(w(H)) μ̣(H) . Finally, for every n, since ψ_n has compact support the following formula holds ∮_ρ(G)( ∫_^pΩ_ρ(w(H)) ψ_n μ̣(w(H)))= ∫_^p(∮_ρ(G)Ω_ρ(w(H))) ψ_n μ̣(H) . The exchange formula now follows from both assertions (<ref>) and (<ref>). § HAMILTONIAN AND BRACKETS: AVERAGE OF CORRELATION AND LENGTH FUNCTIONS We now leave the realm of uniformly hyperbolic bundles in general and focus only on periodic ones. This corresponds to the study of Anosov representations of the fundamental group of a closed surface. The fact that S is closed allows us to introduce a new structure: the smooth part of the representation variety of projective representations carries the Goldman symplectic form, defined in paragraph <ref>, see also <cit.>. Hence we have a Poisson bracket on functions on the character variety. In this section, we will introduce averaged correlation functions and length functions and compute their Hamiltonian vector fields and Poisson bracket. §.§ Averaged length function: definition As a first step in the construction, let us consider a Θ-decorated current μ^å supported on ×{å} where å is in Θ. The associated length function on the character variety of Anosov representation is the function ^å_μ^å defined by ^å_μ^å(ρ)log(∫_/Γ R_å^σ μ̣^å) , where R_å^σ is the (complex valued in the case of complex bundles) 1-form associated to a section σ of (F_å) by ∇_uσ=R^σ(u)·σ. Although R^σ depends on the choice of the section σ, the integrand over does not. In the complex case, we see the length functions as taking values in ℂ/2π i ℤ due to the ambiguity of defining the logarithm. Recall that in our convention (F_å) is a contracting bundle and thus the real part of _μ is positive. Moreover for a closed geodesic γ whose associated geodesic current, supported on ×{å} is also denoted by γ^å. ^å_γ(ρ)=-log(.Hol(γ)|_F_å) , where Hol(γ) is the holonomy of γ. For a geodesic current δ supported on a closed geodesic, the length function _δ is analytic. This extends to all geodesic currents by density and Morera's Theorem (See <cit.> for a related discussion in the real case). The notion extends naturally – by additivity – to a general Θ-geodesic current. We can now extend the length function to any Θ-geodesic current. Let μ be a Θ-geodesic current on ×Θ, we can then write uniquely μ=∑_å∈Θμ^å , where μ^å is supported on ×{å}, then by definition the μ-averaged length function[In the complex case, since the logarithm, hence the length, is defined up to an additive constant, the Hamiltonian is well defined and the bracket of a length function and any other function makes sense.] is _μ(ρ)∑_å∈Θ^å_μ^å(ρ) . 
§.§ Averaged correlation function: definition When w is a natural map, μ a (ρ,w)-integrable cyclic current, the associated averaged correlation function of order n _w(μ) on the moduli space of Θ-Anosov representations is defined by _w(μ)(ρ)∫_^n/Γ_w(G)(ρ) μ̣(G) , where G=(G_1,…,G_p) with and _G is the correlation function associated to a Θ-configuration of geodesics defined in paragraph <ref>. As we shall see in proposition <ref>, the function _w(μ) is analytic . Our main result is a formula for the Poisson bracket of those functions. We use a slightly different convention, writing ^k for a correlation function of order k and ^1_μ=_μ. Let μ be either a w-integrable Θ-cyclic currents at ρ_0 or a Θ-geodesic current. Similarly, let ν be either a v-integrable Θ-cyclic currents at ρ_0 or a Θ-geodesic current. Then the measure μ⊗ν is z-integrable at ρ_0, where z(G,H)=[w(G),v(H)] and moreover {^p_w(μ),^n_v(ν)}(ρ) = ∫_^p+n/Γ_ρ(w(G),v(H)) μ̣(G)ν̣(H) = ∫_^p+n/Γ_[w(G),v(H)](ρ) μ̣(G)ν̣(H) . As a corollary, generalizing Theorem <ref> given in the introduction, using a simple induction and proposition <ref> we get The vector space generated by length functions, averaged correlations functions and constants is stable under Poisson bracket. More precisely, let μ_1, …μ_p cyclic currents of order n_i, and N=n_1+… n_p then {^n_1_μ_1,{^n_2_μ_2,…{^n_p-1_μ_p-1,^n_p_μ_p}…}}(ρ) = ∫_^N/Γ^N_[G_1,[G_2,[…,[G_p-1,G_p]…]]](ρ) μ̣_1(G_1)…μ̣_1(G_p) . In the course of the proof, we will also compute the Hamiltonians of the corresponding functions. Let μ be a Θ-geodesic current. The Hamiltonian of the length function _μ is H^0_μ the trace free part of H_μ, where H_μ-∫_β_ρ(g) μ̣(g) , Let w be a natural function. Let ν be a (ρ,w) integrable cyclic current. The Hamiltonian of the correlation function _w(ν) of order n, with n>1 is Ω_w(ν)∫_^nΩ_ρ(w(G)) ν̣(G) , Both H_μ and Ω_w(ν) are in ^∞(E). §.§ Preliminary and convention in symplectic geometry Our convention is that if f is a smooth function and a symplectic form, the Hamiltonian vector field X_f of f and the Poisson bracket {f,g} of f and g are defined by f̣(Y) = (Y,X_f) , {f,g} = f̣(X_g)=(X_g,X_f)=- g̣(X_f) . 0.5 truecm Observe that if Ω is a complex valued symplectic form – which naturally take entries in the complexified vector bundle – and f a complex valued function then the Hamiltonian vector field is a complexified vector field. The bracket of two complex valued functions is then a complex valued function. In the sequel, we will not write different results in the complex case (complex valued symplectic form and functions) and the (usual) real case. We first start by computing the bracket and Hamiltonian of length functions; §.§ Regularity of averaged correlations functions We prove here Let w be a natural function. Let μ be a (ρ,w)-integrable current, then * _w(μ) is an analytic function in a neighborhood of ρ, * For any tangent vector v at ρ, then _w(G)(v) is in L^1(μ) and _w(μ)(v)=∫__⋆^n/Γ_w(G)(v)μ̣(G) . As in proposition <ref>, we work in the context of complex uniform hyperbolic bundles, possibly after complexification of the whole situtation. Let us first treat the case when μ is Γ-compact. In that case, the functions _G:ρ↦ T_w(G) are all complex analytic by proposition <ref>, uniformly bounded with uniformly bounded derivatives in the support of μ. Thus the result follows from classical results. We now treat the non Γ-compact case. Let now consider an exhaustion of _⋆^n/Γ by compacts K_n and write μ_n=1_K_nμ. Let then _n=∫_K_n_w(μ_n)μ̣ . 
Then by our integrability hypothesis and Lebesgue dominated convergence Theorem _n converges uniformly to _w(μ). Since all _n are complex analytic, by Morera Theorem _w(μ) is complex analytic and _n converges C^∞ to _w(μ). It thus follows that _w(μ)(v)=lim_n→∞_n(v)=lim_n→∞∫_K_n_w(G)(v)μ̣(G) . We now conclude by lemma <ref>. §.§ Length functions: their Hamiltonians and brackets The first step in our proof is to understand the variation of length, The derivatives of a length function with respect to a variation ∇̇ is given by _μ(∇̇)=∫_Θ×/Γ(∇̇) μ̣(x) . By the linearity of the definition, see equation (<ref>), it is enough to consider a Θ-geodesic current μ^å supported on ×{å}. Let E^å⋀^(F_å) E, and Λ^å the natural exterior representation from sl(E) to sl(E^å). Then by <cit.> and formula (<ref>) we have _μ(∇̇)=∫_{å}×/Γ(_åΛ^å(∇̇)) μ̣^å(x) , where ^1_å is the section of (E_å) given by the projection on the line (F_a) induced by the projection on F_å parallel to F_å^∘ – see section <ref> for notation. We now conclude by observing –using just a litle bit of linear algebra– that for any element in sl(E) (^1_åΛ^å(A))=(_å A) . Indeed let us choose a basis (e_1,…, e_p) of F_å completed by a basis (f_1,…, f_m) of F_å^∘ and choose a metric so that this basis is orthonormal. Then Λ^å(A)(e_1∧…∧ e_p)=∑_i=1^pe_1∧… e_i-1∧ A(e_i)∧ e_i+1∧… e_p , (^1_åΛ^å(A))=⟨e_1∧…∧ e_p ,Λ^å(A)(e_1∧…∧ e_p)|=⟩∑_i=1^p⟨e_i , A(e_i)|=⟩(_å A) . Let then H_μ= -∫_β_ρ(g) μ̣(g) . We proved that H_μ lies in ^∞(E) in lemma <ref>. We now prove the following proposition The Hamiltonian vector field of _μ is given by H^0_μ, which is the trace free part of H^μ. Then {_ν,_μ}=(H^0_μ,H^0_ν)=∫__⋆^2/Γ_ρ(g,h) ν̣(g)⊗μ̣(h) , Observe that if μ and ν are both supported on finitely many geodesics, then the support of μ⊗ν is finite in ^2 and its cardinality is the geometric intersection number of the support of μ, with the support of ν. This is a generalization of Wolpert cosine formula, see <cit.>. Remark that ϵμ⊗ν is supported in ^2 on a set on which Γ acts properly. Let us first consider the computation of (H_μ,H_ν). let Δ_1 be a fundamental domain for the action of Γ on ^2. Then denoting ^0_g the traceless part of _g (H^0_μ,H^0_ν) = ∫_Δ_0((∫_β^0_h μ̣(h))∧(∫_β^0_g ν̣(g))) = ∫_Δ_0∫_×ω_h∧ω_g (^0 (g)^0 (h)) μ̣(h)ν̣(g) = ∫_Δ_1∫_ω_h∧ω_g (^0 (g)^0 (h)) μ̣(h)ν̣(g) = ∫_Δ_1ϵ(h,g)(^0 (g)^0 (h)) μ̣(h) ν̣(g) . Let us comment on this series of equalities: the first one is the definition of the symplectic form and that of H_μ and H_ν, for the second one we use the pointwise definition of H_μ and H_ν, for the third one we use proposition <ref>. Observe that the final equality gives formula (<ref>). From the third equality we also have (H^0_μ,H^0_ν) = ∫_Δ_1(∫_gω_h (^0 (g)^0 (h))) μ̣(g)ν̣(h) . Let now consider the fibration z:×→^2 and observe that z^-1(Δ_1) is a fundamental domain for the action of Γ in ×. Let Δ_2 be a fundamental domain for the action of Γ on and observe that Δ_2× is a fundamental domain for the action on Γ on ×. Then the above equation leads to (H^0_μ,H^0_ν) = ∫_z^-1(Δ_1)ω_h (^0 (g)^0 (h)) μ̣(g)ν̣(h) =∫_Δ_2×ω_h (^0 (g)^0 (h)) μ̣(g)ν̣(h) = ∫_Δ_2(^0 (g)∫_β^0_ρ(h)ν̣(h) )μ̣(g) =-∫_Δ_2(^0 (g)H^0_ν)μ̣(g) = -_μ(H^0_ν)=_ν(H^0_μ) . As a conclusion, if Ham(_ν) is the Hamiltonian vector field of _ν, then for all length functions _μ _μ(H^0_ν-Ham(_ν))=0 . We proved in <cit.> that the derivatives of the length functions generates the cotangent space of the character variety on some open dense subset. This completes the proof. 
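For concreteness — this special case is only an illustration and is not used in the sequel — suppose that μ and ν are the Θ-geodesic currents defined by two closed geodesics α and β on the surface, with a choice of orientations and of labels in Θ. The intersecting pairs of lifts of β and α form finitely many Γ-orbits, one for each intersection point of α and β, while non-intersecting pairs do not contribute since ϵ vanishes on them. The proposition therefore reduces to the finite sum {_ν,_μ}(ρ)=∑_p∈α∩β_ρ(g_p,h_p) , where g_p is a lift of β and h_p a lift of α crossing at a lift of the intersection point p.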
As noted, the above gives a generalization of Wolpert's cosine formula. Explicitly we have for two Θ-geodesic currents μ,ν then {_ν,_μ} = ∫_(^2)_⋆/Γϵ(g,h)(((g)(h)) - Θ(g)Θ(h)/(E)) μ̣(g)ν̣(h) . §.§ Bracket of length function and discrete correlation function We have Let G be a Θ-configuration and μ a Θ-geodesic current, then {_G,_μ}=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g)=∫__ρ(G,g) μ̣(g) . By proposition <ref>, we have _G(H_μ)=-∮_ρ(G)(∫_β_ρ(g)μ̣) . Thus by the exchange formula (<ref>), we have _G (H_μ)=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g) . Thus conclude using equation (<ref>) {_G,_μ}= _G (H_μ)=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g)=∫__ρ(G,g) μ̣(g) . §.§ Bracket of length functions and correlation functions Our first objective is, given a family of flat connection ∇ whose variation at zero is ∇̇, to compute _μ(∇̇). Assume that the Θ-cyclic current μ is (ρ,w)-integrable. Then {_w(μ),_ν}(ρ) = ∫_^n+1/Γ_ρ(w(G),g) ν̣(g) μ̣(G) . By Theorem <ref>, the hamiltonian vector field of _ν is given by H^0_ν=-∫_β^0_ρ(g) ν̣(g) . Let Δ be a fundamental domain for the action of Γ on ^n, and observe that Δ× is a fundamental domain for the action of Γ on ^n+1. It follows since H_ν is ρ-equivariant and proposition <ref> that {_w(μ),_ν}=_w(μ)(H^0_ν) = ∫_Δ_w(G)(H^0_ν) μ̣(G) =∫_Δ(∮_ρ(w(G))H^0_ν) μ̣(G) = -∫_Δ∫_(∮_ρ(w(G))β_ρ(g)) ν̣(g)μ̣(G) = ∫_Δ∫_(_ρ(w(G),g)) ν̣(g)μ̣(G) = ∫_^n+1/Γ_ρ(w(G),g) μ̣(G)ν̣(g) . For the second equality we used proposition <ref> and that integrating a 1-form with values in the center gives a trivial result by proposition <ref>. §.§ Hamiltonian of correlation functions We are going to prove the following result Let w a natural function. Let μ be a (ρ,w)-integrable Θ-current. Then for every y in , Ω_ρ(G) belongs to L^1(^p,μ). Moreover Ω_w(μ)(ρ)∫_^pΩ_ρ(w(G))μ̣(G) . seen as vector field on the character variety is the Hamiltonian of the correlation function _w(μ). We first prove proposition <ref> under the additional hypothesis that μ is a Γ-compact current, then move to the general case by approximation. Assume μ is a Γ-compact current. By the density of derivatives of length functions, it is enough to prove that for any geodesic current ν associated to a length function _ν whose Hamiltonian is H_ν we have {_ν,_w(μ)}=(Ω_w(μ),H_ν)=_ν(Ω_w(μ)) . Then using a fundamental domain Δ_0 for the action of Γ on , and Δ_1 a fundamental domain for the action of Γ on ^n, and finally denoting ν_0 the flow invariant measure in associated to the current ν _ν(Ω_w(μ)) =∫_Δ_0(Ω_w(μ))ν̣_0(g) = ∫_Δ_0(∫_^n( (g)Ω_ρ(w(G))) μ̣(G))ν̣_0(g) =∫_^n(∫_Δ_0( (g)Ω_ρ(w(G))) ν̣_0(g))μ̣(G) = ∫_Δ_1(∫_( (g)Ω_ρ(w(G))) ν̣_0(g))μ̣(G) =∫_Δ_1∫_∫_g( (g)Ω_ρ(w(G)))) ν̣(g)μ̣(G) = ∫_^n/Γ(∫_∫_(ω_g (g)∧Ω_ρ(w(G))) )ν̣(g) μ̣(G) =-∫_(^n/Γ)×_ρ(w(G),g) μ̣(G)ν̣(g) = {_ν,_μ} . The first equality uses equation (<ref>), the second uses the definition of Ω_μ, the third one comes from Fubini theorem, the fourth one from lemma <ref>, the fifth one from the fibration from to , the sixth one from formula (<ref>), the seventh one definition (<ref>). Let us now prove the general case when μ is a ρ-integrable current. Let us consider an exhaustion K of ^p/Γ by compact sets. Assume that the interior of K_m+1 contains K_m. Let 𝒦 be a fundamental domain of the action Γ on _⋆^p. Let _m(ρ)∫_K_m_w(G)(ρ) μ̣(G) . The functions _m are analytic and converges C^0 on every compact set to _μ by the integrability of μ. Thus, by Morera's Theorem, _μ is analytic and converges C^∞ on every compact . Let us call X the Hamiltonian vector field of _μ and X_m the Hamiltonian vector field of _m. It follows that X converges to X. 
We have just proven in the previous paragraph that the Hamiltionian of _m is X_m=∫_C_mΩ_ρ(H) μ̣ . From corollary <ref>, for every y and H, the function γ↦‖Ω_ρ(γ w(H))(y)‖, is in ℓ^1(γ). It follows that X_m(y)=∫_C_m𝒦(∑_γ∈ΓΩ_ρ(γ H)(y) μ̣(H)) . Since {X_m(y)}_m∈ℕ converges for any exhaustion of 𝒦 to X(y). It follows by lemma <ref> that H↦∑_γ∈ΓΩ_ρ(γ w(H))(y) μ̣(H) , is in L^1(𝒦,μ) and that X(y)= ∫_K∑_γ∈ΓΩ_ρ(γ w(H))(y) μ̣(H)=∫_^pΩ_ρ(w(H))(y) μ̣(H) , where we applied Fubini again in the last equality. This is what we wanted to prove. §.§ Bracket of correlation functions We have Let μ and ν be two integrable Θ-currents of rank m and n respectively. Let p=m+n, then {_w(ν),_v(μ)}=∫_^p/Γ_ρ(w(H),v(G)) ν̣⊗μ̣(H,G) . We have {_w(ν),_v(μ)}=_w(ν)(Ω_v(μ)) = ∫_^n/Γ_w(H)(Ω_v(μ)) ν̣(H) =∫_^n/Γ(∮_ρ(w(H))Ω_v(μ)) ν̣(H) = ∫_^n/Γ(∮_ρ(w(H))∫_^mΩ_ρ(v(G))μ̣(G)) ν̣(H) =∫_^n/Γ(∫_^m∮_ρ(w(H))Ω_ρ(v(G))μ̣(G)) ν̣(H) = ∫_^p/Γ_ρ(w(H),v(G)) ν̣(H) μ̣(G) . The crucial point in this series of equalities is the exchange formula for the fifth equality which comes from Theorem <ref>. With the above, we have completed the proof of the ghost representation Theorem <ref>. § APPLICATIONS In this section we give two applications of our previous results. The first one is a generalization of Kerckhoff theorem <cit.> of the convexity of length functions, and the related Wolpert's sine formula for the second derivatives along twist orbits <cit.>. The second one is to give examples of commuting functions arising from laminations. Both results will follow from computations in the ghost algebra combined with the Ghost Representation Theorem <ref>. §.§ Convexity of length functions for positively ratioed representations We can know prove our convexity theorem. We work in the context of real projective Anosov representation, or 𝖲𝖫(n,ℝ) valued with Θ={1}. Let us first say, following Martone–Zhang <cit.> that a representation has a positive cross ratio if for all intersecting geodesics g and h 0<_⌈ g,h ⌉(ρ)<1 . Let μ be an oriented geodesic current supported on non-intersecting geodesics. Then for any geodesic current ν for any projective representation with a positive cross ratio, we have {_μ,{_μ,_ν}}(ρ)≥ 0 . Furthermore the inequality is strict if and only if i(μ,ν) ≠ 0. This will follow from the definition of a positive cross ratio and our generalisation of Wolpert sine formula: Let μ be an oriented geodesic current supported on non-intersecting geodesics. Then for any geodesic current ν, for any projective representation ρ, we have {_μ,{_μ,_ν}}(ρ)=2∫_^3,+/Γϵ(g_0,h)ϵ(g_1,h)(_⌈ g_1,h,g_0⌉ -⌈ g_1,h⌉⌈ g_0,h⌉)(ρ) μ̣^2 (g_1,g_0) ν̣(h) . where ^3,+ is the set of (g_1,h,g_0) so that if h intersects both g_1 and g_0, then h intersects g_1 before g_0. §.§ Commuting functions arising from laminations Let ℒ be a lamination. Associated to this lamination we get several functions that we called associated to the lamination * The length functions associated to geodesic currents supported on the laminations, * functions associated to any complementary region of the lamination. Let F_ℒ be the vector space generated by these functions. our result is then Let ℒ be a geodesic lamination, then the vector space F_ℒ consists of pairwise Poisson commuting functions. An interesting example is the case of the maximal geodesic lamination coming from a decomposition into pair of pants. An easy check give that there are 6g-6 length functions, and 4g-4 triangle functions. Thus we have 10g-10 commuting functions. 
However in the case the dimension of the space is 16g-16 and it follows that there are relations between these functions. It is interesting to notice that these relations may not be algebraic ones: In that specific case some relations are given by the higher identities <cit.> generalizing Mirzakhani–McShane identities. §.§ Double derivatives of length functions in the swapping algebra In order to prove our convexity result, we will need to calculate the double brackets. By Theorem <ref>, as the map A →_A on the ghost algebra factors through the extended swapping bracket ℬ_0, it suffices to do our calculations in ℬ_0. For simplicity, we will further denote the elements ℓ_g in ℬ_0 by g. Let h be a an oriented geodesic and g_0 and g_1 two geodesics so that ϵ(g_0,g_1)=0. Let ϵ_i=ϵ(g_i,h). Assume first that ϵ_0ϵ_1=0, then [g_1,[g_0,h]]=0. Assume otherwise that h intersect g_1 before g_0 or that g_1=g_0. Then [g_1,[g_0,h]] = ϵ_1ϵ_0 (⌈ g_1,h,g_0 ⌉- ⌈ g_1,h⌉ ⌈ g_0,h ⌉) =ϵ_1ϵ_0 ⌈ g_1,h⌉ ⌈ g_0,h ⌉ (⌈γ_0 ,γ_1 ⌉-1 ) , where γ_0 (g_0^+,h^-) and γ_1 (h^+, g_1^-). Observe that γ_0 and γ_1 are not phantom geodesics by hypothesis. First let us remark that by the Jacobi identity, since [g_0,g_1]=0, then [g_1,[g_0,h]]=[g_0,[g_1,h]] . We apply formulas of paragraph <ref>. We first have from equation (<ref>). [g_0,h]=ϵ(h,g_0)⌈ g_0,h ⌉ + ϵ(g_0,h) . It follows that if ϵ(g_0,h)=0, then [g_1,[g_0,h]]=0 . The same holds whenever ϵ(g_1,h)=0 by the symmetry given by equation (<ref>). Assume now that ϵ_0ϵ_1≠0. Let then (g_0,ζ_0,h,η_0) be the associated ghost polygon to ⌈ g_0,h⌉ with ghost edges ζ_0 = (g_0^+,h^-) and η_0 = (h^+,g_0^-). Thus using the hypothesis ϵ(g_0,g_1)=0, and using the notation ϵ_i=ϵ(g_i,h) we get from equation (<ref>) [g_1,[g_0,h]] = -ϵ_0⌈ g_0,h ⌉(ϵ_1 ⌈ g_1,h⌉- ϵ(g_1,ζ_0)⌈ g_1,ζ_0⌉ -ϵ(g_1,η_0)⌈ g_1, η_0 ⌉) . Since h intersects g_1 before g_0, we have ϵ(g_1,η_0)=0 and ϵ(g_1,ζ_0)=ϵ(g_1,h). Thus [g_1,[g_0,h]] = ϵ_1ϵ_0 ( ⌈ g_1, ζ_0 ⌉⌈ g_0,h ⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉) . As ζ_0 = (g_0^+,h^-) by definition of the swapping algebra ⌈ g_1, ζ_0 ⌉⌈ g_0,h ⌉ = (g_1^+,h^-)(g_0^+,g_1^-)(g_0^+,h^-)(h^+,g_0^-)/(g_1^+,g_1^-)(g_0^+,h^-)(g_0^+,g_0^-)(h^+,h^-) = (g_1^+,h^-)(h^+,g_0^-)(g_0^+,g_1^-)/(g_1^+,g_1^-)(h^+,h^-)(g_0^+,g_0^-) = ⌈ g_1, h, g_0 ⌉ . Similarly ⌈ g_1, h, g_0 ⌉/⌈ g_1, h ⌉⌈ g_0, h⌉ = (g_0^+,g_1^-)(h^+,h^-)/(g_0^+,h^-)(h^+,g_1^-) = ⌈ (g_0^+,h^-),(h^+,g_1^-)⌉ . The result follows from equations (<ref>) and the fact that γ_0=(g_0^+,h^-) and γ_1=(h^+,g_1^-). §.§ Triangle functions and double brackets Let δ_0=(a_1,a_2,a_3) be an oriented ideal triangle, we associate to such a triangle the configuration t_0⌈ a_1,a_3,a_2⌉ . The reader should notice the change of order. One can make the following observation. First t t̅ =1. Thus for a self-dual representation ρ, we have _t(ρ)^2=1 and in particular _t is constant along self dual representations. Let t_0 be a triangle, then [t_0, g] = ∑_j∈{1,2,3}ϵ(a_j,g) t_0 (⌈ g,a_j⌉+⌈ g,a̅_j⌉) . Let t_0, t_1 be triangles. Then [t_1,t_0] = t_1· t_0∑_i,j∈{1,2,3}ϵ(a_i,b_j)(⌈ a_i,b_j⌉ + ⌈ a_i, b_j⌉+⌈a_i,b_j⌉ + ⌈a_i, b_j⌉= t_0∑_i∈{1,2,3} [t_1,a_i-a_i] . Assume now that t_0 and t_1 are two non-intersecting triangles. Then we have the formula: [t_1,[t_0, g]] =t_0 t_1∑_i,j∈{1,2,3} α,β∈{-1,1}ϵ(b_i,g)ϵ(a_j,g) (⌈ b_i^β,g,a_j^α⌉ -⌈ a^α_j,g⌉ ⌈ b^β_i,g⌉) , where c^1=c, c^-1=c̅. Observe first the the hypothesis imply that [t_0,t_1]=0. Thus, by Jacobi identity, [t_0,[t_1,g]]=[t_1,[t_0, g]] . The ghost polygon associated to t is (a_1,a_2,a_3,a_1,a_2,a_3). 
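To illustrate the formula, consider the simplest case allowed by the statement, namely g_1=g_0=g with |ϵ(g,h)|=1. The lemma then reads [g,[g,h]] = ⌈ g,h⌉^2(⌈γ_0 ,γ_1 ⌉-1 ) , with γ_0 =(g^+,h^-) and γ_1 =(h^+, g^-). Since ϵ_0ϵ_1=1>0, the geodesics γ_0 and γ_1 do not intersect, and for a representation ρ with a positive cross ratio one has _⌈γ_0,γ_1⌉(ρ)>1 and 0<_⌈ g,h⌉(ρ)<1, so that _[g,[g,h]](ρ)>0. This is the mechanism exploited in the positivity paragraph below.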
Thus [t_0, g] = t_0 ∑_j∈{1,2,3}ϵ(a_j,g)⌈ g,a_j⌉-ϵ(a_j,g)⌈ g,a̅_j⌉ = t_0 ∑_j∈{1,2,3}ϵ(a_j,g)(⌈ g,a_j⌉+⌈ g,a̅_j⌉) . In particular, if ϵ(g,a_i)=0 for all i, then [t_0, g]=0. Hence, in that case [t_0,[t_1, g]]=[t_1,[t_0, g]]=0 , and the formula (<ref>) is correct. For t_0,t_1 we have [t_1,t_0] = t_1· t_0∑_i,j∈{1,2,3}ϵ(a_i,b_j)⌈ a_i,b_j⌉ - ϵ(a_i,b_j)⌈ a_i, b_j⌉-ϵ(a_i,b_j)⌈a_i,b_j⌉ + ϵ(a_i,b_j)⌈a_i, b_j⌉ = t_0· t_1∑_i,j∈{1,2,3}ϵ(a_i,b_j)(⌈ a_i,b_j⌉ +⌈ a_i, b_j⌉+⌈a_i,b_j⌉ + ⌈a_i, b_j⌉) = t_0∑_i∈{1,2,3} [t_1, a_i- a_i] For the triple bracket, let us focus in the case where g intersects both t_0 and t_1 and by the above symmetry that g intersects t_1, then t_0. Let (a_i,ζ_i,g,η_i) the ghost polygon to ⌈ a_i,g⌉ with ζ_i = (a_i^+,g^-) and η_i = (g^+,a_i^-). Let t_1 = ⌈ b_1, b_3, b_2⌉ be another ideal triangle not intersecting t_0 and such that g intersects t_1, then t_0. Then the associated ghost polygon is (b_1,b_2,b_3,b_1,b_2,b_3). Let h be an edge of the ghost polygon of t_1. Then as g intersects t_1 before t_0 ϵ(h,η_j)=0 , ϵ(h,ζ_j)=ϵ(h,g) . Thus [ t_1,⌈ g, a_j⌉ ] = t_1⌈ g, a_j⌉∑_i∈{1,2,3}(ϵ(g,b_i) ⌈ b_i,g⌉ -ϵ(g,b_i) ⌈b_i,g⌉ - ϵ(ζ_j,b_i)⌈ζ_j, b_i ⌉+ϵ(ζ_j,b_i)⌈ζ_i,b_i⌉) . Simplifying we obtain [ t_1,⌈ g, a_j⌉ ] = t_1⌈ g, a_j⌉∑_i∈{1,2,3}ϵ(g,b_i)( ⌈ b_i,g⌉ + ⌈b_i,g⌉ -⌈ b_i, ζ_j ⌉-⌈b_i, ζ_j ⌉) . By equation (<ref>) ⌈ g, a_j⌉⌈ b_i, ζ_j ⌉ = ⌈ b_i,g, a_j ⌉ ⌈ g, a_j⌉⌈b_i, ζ_j ⌉ = ⌈b_i,g, a_j ⌉ . Thus we obtain [t_1,⌈ g,a_j⌉] = t_1∑_i∈{1,2,3}ϵ(b_i,g) (⌈ b_i,g,a_j⌉-⌈ a_j,g⌉ ⌈ b_i,g⌉ +⌈b̅_j,g,a_j⌉ - ⌈ g,a_j⌉⌈b̅_j, g ⌉) , t_1,⌈ g,a̅_j⌉] = t_1∑_i∈{1,2,3}ϵ(b_i,g) (⌈ b_i,g,a̅_j⌉-⌈ a_j,g⌉ ⌈ b_i,g⌉ +⌈b̅_j,g,a̅_j⌉ - ⌈ g,a̅_j⌉⌈b̅_j, g ⌉) . Combining the two last equations, and writing ϵ(b_i,g)=ϵ^1_i and ϵ(a_j,g)=ϵ^0_j we have (after some reordering) [t_1,[t_0, g]] = t_0 t_1 ∑_i,j∈{1,2,3}ϵ^1_iϵ^0_j(⌈ b_i,g,a_j⌉+⌈ b_i,g,a̅_j⌉+⌈b̅_i,g,a_j⌉+⌈b̅_i,g,a̅_j⌉ - ⌈ a_j,g⌉ ⌈ b_i,g⌉-⌈a̅_j,g⌉ ⌈ b_i,g⌉-⌈a̅_j,g⌉ ⌈b̅_i,g⌉-⌈ a_j,g⌉ ⌈b̅_i,g⌉) , which is what we wanted to prove. Let g be disjoint from the interior of ideal triangle δ. Then g and the triangle function t commute. Similarly let δ_0, δ_1 be ideal triangles with disjoint interiors. Then the associated triangle functions t_0,t_1 commute. We first make an observation. If ϵ(g, h) = ± 1/2 then ⌈ g,h ⌉ + ⌈ g, h⌉ = 1 . To see this, assume g^+=h^-. Then ⌈ g,h⌉ = 0 and ⌈ g, h⌉ has ghost polygon (g,h,h, g) giving ⌈ g, h⌉ = h· g/g·h =1 . By symmetry, this holds for all g,h with ϵ(g,h) = ± 1/2. Let g be disjoint from the interior of ideal triangle δ = (a_1,a_2,a_3). Then from above [g, t] = t∑_i∈{1,2,3}ϵ(g,a_i)(⌈ g,a_i⌉ + ⌈ g, a_i⌉) = t∑_i∈{1,2,3}ϵ(g,a_i) . If ϵ(g,a_i) = 0 for all i then trivially [ g, t] = 0. Thus we can assume ϵ(g, a_1) = 0 and ϵ(g,a_2),ϵ(g,a_3) ≠ 0. If g = a_1 then as ϵ(a_1, a_2) = -ϵ(a_1, a_3) then [g, t] =0. Similarly for g = a_1. Otherwise g, a_2, a_3 share a common endpoint and a_2,a_3 have opposite orientation at the common endpoint. Therefore as g is not between a_2 and a_3 in the cyclic ordering about their common endpoint, then ϵ(g,a_2)= -ϵ(g,a_3) giving [g, t] =0. Let t_0, t_1 be the triangle function associated to ideal polygons δ_0, δ_1 with t_0 = [a_1, a_3,a_2]. Then from above [t_1,t_0] = t_0∑_i [t_1,a_i-a_i] . Thus if t_0,t_1 have ideal triangles with disjoint interiors then by the above, [a_i,t_1] = [a_i,t_1]=0 giving [t_0,t_1]=0. Let t_1, t_2 be ideal triangles intersecting triangle t_0 with sides a_i. Let u = ∑ a_i - a_i. Then [t_2,[t_1,t_0]] = t_0([t_1,u][t_2,u]- [t_2,[t_1,u]]) . From above [t_1,t_0] = -t_0[t_1,u] and [t_2,t_0] = -t_0[t_2,u]. 
Therefore [t_2,[t_1,t_0]] = -[t_2,t_0][t_1,u] - t_0[t_2,[t_1,u]] = t_0[t_2,u][t_1,u]-t_0[t_2,[t_1,u]] . §.§ Positivity Recall that a projective representation ρ has a positive cross ratio if for all g,h intersecting geodesics 0 < _⌈ g,h⌉(ρ) < 1. We now give an equivalent definition which is the one originally given by Martone–Zhang in <cit.>. A projective representation ρ has a positive cross ratio if and only if for all (X,Y,y,x) cyclically oriented _⌈ (X,x),(Y,y)⌉(ρ)>1 . Let X,x,Y,y be 4 points. We observe that (X,Y,y,x) is cyclically oriented if and only if geodesics (X,y),(Y,x) intersect. The result then follows from ⌈ (X,x),(Y,y)⌉ =(X,y) (Y,x)/(X,x) (Y,y)=((X,x) (Y,y)/(X,y) (Y,x))^-1=⌈ (X,y),(Y,x)⌉^-1 . Assume ρ is a projective representation with a positive cross ratio. Let h be so that if h intersects both g_1 and g_0, then h intersects g_1 before g_0. Let g_1, g_0 be such that ϵ(g_0,g_1) = 0 Then we have the inequality ϵ_1ϵ_0 _⌈ g_1 ,h , g_0⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉(ρ)≥ 0 . Furthermore the inequality is strict if and only if h intersects both g_0, g_1 in their interiors (i.e. if and only if |ϵ_0ϵ_1| = 1). By Lemma <ref> we have, since g_1 meets h before g_0. ϵ_0ϵ_1(⌈ g_1,h,g_0⌉ - ⌈ g_1,h⌉⌈ g_0,h⌉) = ϵ_1ϵ_0 ⌈ g_1,h⌉ ⌈ g_0,h ⌉ (⌈γ_0 ,γ_1 ⌉-1 ) . where γ_0 (g_0^+,h^-) and γ_1 (h^+, g_1^-). We will also freely use that if x^+=y^- or x^-=y^+, then _⌈ x,y⌉=0, while if x^+=y^+ or x^-=y^- then _⌈ x,y⌉=1. 0.2 truecm First case: ϵ_0ϵ_1 =0. In that case, we have equality. 0.2 truecm Second case: 0<|ϵ_0ϵ_1| <1. In that situation one of the end point of h is an end point of g_0 or g_1. * Firstly, the cases g_0^± = h^- or g_1^± = h^+ are impossible since h meets g_1 before g_0. * Secondly if g_1^+= h^- or g_0^- = h^+, then _⌈ g_1,h⌉_⌈ g_0,h⌉(ρ) = 0. * Finally, if g_1^-= h^- or g_0^+ = h^+, then either γ_0^+=γ_1^+ or γ_0^-=γ_1^-. In both cases, _⌈γ_0,γ_1⌉(ρ)=1 and hence it follows that _⌈ g_1 ,h , g_0⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉(ρ)=0. 0.2 truecm Final case: |ϵ_0ϵ_1|=1. As both g_0 and g_1 intersect h and ρ has a positive cross ratio, then by proposition <ref>, _⌈ g_1,h⌉ ⌈ g_0,h ⌉(ρ)=_⌈ g_1,h⌉(ρ) _⌈ g_0,h ⌉(ρ)>0 . We can then split in two cases as in figure (<ref>): * If ϵ_0ϵ_1>0, then γ_0 and γ_1 do not intersect, and (h^-,g_0^+,h^+,g_1^-) is a cyclically oriented quadruple. Hence, by definition _⌈γ_0,γ_1⌉(ρ)>1. See figure (<ref>)) * If now ϵ_0ϵ_1<0, then γ_0 and γ_1 intersect, and by proposition <ref> _⌈γ_0,γ_1⌉(ρ)<1.(see figure (<ref>)) Combining both cases, we get that ϵ_0ϵ_1 (_⌈γ_0,γ_1⌉(ρ)-1)> 0 . The result follows from equations (<ref>) and (<ref>). Then we have Assume ρ is a projective representation with a positive cross ratio. Let g_1, g_0 be such that ϵ(g_0,g_1) = 0. Then we have the inequality _[g_1,[g_0, h]](ρ)≥0 . Furthermore the inequality is strict if and only if h intersects both g_0, g_1 in their interiors. The Jacobi identity for the swapping bracket <ref> gives that [g_0,[g_1,h]]=[g_1,[g_0,h]] since [g_0,g_1]=0. Thus the proof follows lemma <ref>, lemma <ref>. §.§ Proof of the convexity theorem <ref> and the sine formula theorem <ref> By the representation theorem and its corollary <ref> {_μ,{_μ,_ν}}(ρ)=∫_^3/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) . Since by lemma <ref>, the integrand is non-negative, the integral is non-negative. If i(μ,ν)= 0 then for all g in the support of μ and h in the support of ν, |ϵ(g,h)| ≠ 1. Thus by lemma <ref> for g_0,g_1 in the support of μ and h in the support of ν then _[g_1,[g_0, h]](ρ) = 0 . Thus the integral is zero for i(μ,ν)= 0. 
If i(μ,ν) ≠ 0 then there exist g_0, h in the supports of μ,ν respectively such that |ϵ(g_0,h)| = 1. If h descends to a closed geodesic, then it is invariant under an element γ of Γ, and we let g_1 = γ g_0. Then the triple (g_1,g_0,h) is in the support of μ⊗μ⊗ν. Thus _[g_1,[g_0, h]](ρ) > 0 and the integral is positive. If h does not descend to a closed geodesic, then as any geodesic current is a limit of discrete geodesic currents, it follows that h intersects g_1 = γ g_0 for some γ in Γ. Again the triple (g_1,g_0,h) is in the support of μ⊗μ⊗ν with _[g_1,[g_0, h]](ρ) > 0. Thus the integral is positive. This completes the proof of Theorem <ref>.

For Theorem <ref>, we use the Jacobi identity for the swapping bracket to get ∫_^3/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) =2∫_^3,+/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) . Then we use lemma <ref>.

§ THE JACOBI IDENTITY FOR A Θ-GHOST BRACKET

We now explain why the Jacobi identity is satisfied for polygons with disjoint sets of vertices.

§.§ Linking number on a set

Let us make a slightly more general construction, recalling some constructions from <cit.>. Let  be a set and let 𝒢_1 be the set of pairs of points of Z. We denote temporarily the pair (X,x) by the symbol Xx. We also define a linking number on  to be a map from ^4 to a commutative ring 𝔸, (X,x,Y,y)→ϵ(Xx,Yy), so that for all points X,x,Y,y,Z,z the following conditions are satisfied ϵ(Xx,Yy)+ϵ(Xx,yY)=ϵ(Xx,Yy)+ϵ(Yy,Xx) = 0 , ϵ(zy,XY)+ϵ(zy,YZ)+ϵ(zy,ZX) = 0 , ϵ(Xx,Yy)·ϵ(Xy,Yx) = 0 . The second author proved in <cit.> the following.

Let (X,x,Z,z,Y,y) be 6 points on the set  equipped with a linking number, then ϵ(Xy,Zz)+ϵ(Yx,Zz)=ϵ(Xx,Zz)+ϵ(Yy,Zz). Moreover, if {X,x}∩{Y,y}∩{Z,z}=∅, then ϵ(Xx,Yy)ϵ(Xy,Zz)+ϵ(Zz,Xx)ϵ(Zx,Yy)+ϵ(Yy,Zz)ϵ(Yz,Xx) = 0 , ϵ(Xx,Yy)ϵ(Yx,Zz)+ϵ(Zz,Xx)ϵ(Xz,Yy)+ϵ(Yy,Zz)ϵ(Zy,Xx) = 0 .
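As an illustration, the example to keep in mind is the one underlying the rest of the paper: take for the set the circle at infinity of the hyperbolic plane, so that a pair of points is an oriented geodesic. For four pairwise distinct points, set ϵ(Xx,Yy)=±1 if the pair {Y,y} separates X from x on the circle — that is, if the corresponding geodesics intersect — the sign being dictated by the cyclic orientation, and ϵ(Xx,Yy)=0 otherwise. For pairwise distinct points the three conditions are readily checked: the first expresses antisymmetry under orientation reversal of a pair and under exchange of the two pairs, the second says that a geodesic crosses the sides of the ideal triangle XYZ algebraically zero times, and the third that the two pairs obtained by exchanging x and y cannot both be linked. When some endpoints coincide, the convention is extended by the values ±1/2 used in the preceding sections.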
* if ⌈ h⌉ is a rank 1 configuration. The opposite of its unique edge h is h itself. §.§ Ghost bracket and our main result We now define the ghost algebra of to be the polynomial algebra 𝒜_0 freely generated by ghost polygons and geodesics. The ghost algebra is equipped with the antisymmetric ghost bracket, given on the generators 𝒜 by, for two ghosts polygons B and C and geodesics g and h, [B,C] = ∑_(b,c)∈ B_∘× C_∘ϵ(c,b)(-1)^i_b+i_c⌈ c^* b^*⌉ . It is worth writing down the brackets of two geodesics g and h, as well as the bracket of a geodesic g and a configuration B, -[g,B]=[B,g] = ∑_b∈ B_∘ϵ(g,b)(-1)^i_b+1⌈ g,b^*⌉ , -[g,h]=[h,g] = ϵ(g,h) ⌈ g, h⌉ . Our goal in this section is to prove Let A, B, C three polygons with no common vertices: V_A∩ V_B∩ V_C=∅, where V_G is the set of vertices of the polygon G. Then the ghost bracket satisfies the Jacobi identity for A, B, C: [A,[B,C]]+[B,[C,A]]+[C,[A,B]] =0 . As the formula for the bracket differs based on whether ghost polygons are rank 1 or higher, will need to consider the different cases based on the rank of the three elements. We will denote rank 1 elements by a,b,c and higher rank by A,B,C. For a, b and c edges in A, B, C ghost or otherwise we label their ghost indexes by i_a ,i_b, i_c and their opposites by a^*, b^*, c^*. §.§ Preliminary: more about opposite edges Let also use the following notation: if θ_k and θ_l are two edges, ghost or visible ⌈ g_1,…, g_n⌉, of a ghost polygon, then G(θ_k,θ_l) = ⌊θ_k_+…θ_l_-⌋ , where again this is an increasing of visible edges. The tuple G(θ_k,θ_l) is an “interval" defined by θ_k and θ_2. In order to continue our description of the triple brackets. We need to understand, in the above formula, what are the opposite of ϕ^* in [b^*,c^*]. Our preliminary result is the following Let B and C be two ghost polygons, b and c edges in B and C respectively. Let ϕ be an edge in ⌈ b^*,c^*⌉, then we have the following eight possibilities 1: Either ϕ is an edge of B, different from b or a ghost edge, then ϕ^*= G(ϕ,b)∙ c^*∙ G(b,ϕ) , 2: b is a visible edge, ϕ is the initial edge b in b^* and then ϕ^*=b^*∙ c^*∙ b . 3: b is a visible edge, ϕ is the final edge b in b^* and then ϕ^*=b∙ c^*∙ b^* . 4, 5, 6: Or ϕ is an an edge of C, and the three items above apply with some obvious symmetry, giving three more possibilities. 7: or ϕ is the edge u_b,c (c_-^-,b_+^+) of ⌈ b^*,c^*⌉ which is neither an edge of b nor an edge of c, a ghost edge, and ϕ^*=⌊ c^*,b^*⌋ . 8: ϕ is the edge u_c,b (b_-^-,c_+^+) of ⌈ b^*,c^*⌉ which is neither an edge of b nor an edge of c, a ghost edge, and ϕ^*=⌊ b^*,c^*⌋ . This follows from a careful book-keeping and the previous definitions. §.§ Cancellations Let us introduce the following quantities for any triple of polygons A, B, C whatever their rank. 
They will correspond to the cases obtained corresponding to the cases observed in lemma <ref>: Case 1: P_1(A,B,C) ∑_(a,c,b,ϕ)∈ A_∘× C_∘× B_∘^2 ϕ≠bϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉ , Case 3: P_2(A,B,C) ∑_(a,b,c,ϕ)∈ A_∘× B_∘× C_∘^2 ϕ≠cϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^*∙ G(ϕ,c)∙ b^*∙ G(c,ϕ) ⌉ , Case 4: Q_1(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,b)ϵ(c,b)(-1)^i_a+i_c⌈ a^*∙ b ∙ c^*∙ b^* ⌉ , Case 5: Q_2(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,b)ϵ(c,b)(-1)^i_a+i_c⌈ a^*∙ b^* ∙ c^*∙ b⌉ , Case 6: R_1(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ a^*∙ c ∙ b^*∙ c^* ⌉ , Case 7: R_2(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ a^*∙ c^* ∙ b^*∙ c ⌉ , Case 8: S_1(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,u_b,c)ϵ(c,b)(-1)^i_a+i_c+i_b⌈ a^*∙ c^* ∙ b^* ⌉ , Case 4: S_2(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,u_c,b)ϵ(c,b)(-1)^i_a+i_c+i_b⌈ a^*∙ b^* ∙ c^* ⌉ . We then have We have the following cancellations, where the two last ones use the hypothesis (<ref>) P_1(A,B,C)+P_2(C,A,B) = 0 , first cancellation , Q_1(A,B,C)+R_2(B,C,A) = 0 , second cancellation , S_1(A,B,C)+S_1(B,C,A)+S_1(C,A,B) = 0 , hexagonal cancellation-1 , S_2(A,B,C)+S_2(B,C,A)+S_2(C,A,B) = 0 , hexagonal cancellation-2 . For the first cancellation, we have P_1(A,B,C)+P_2(C,A,B) = ∑_(a,c,b,ϕ)∈ A_∘× C_∘× B_∘^2 ϕ≠bϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉ + ∑_(c,a,b,ϕ)∈ C_∘× A_∘× B_∘^2 ϕ≠bϵ(c,ϕ)ϵ(b,a) (-1)^i_a+i_ϕ+i_b+i_c⌈ c^* ∙ G(ϕ,b)∙ a^*∙ G(b,ϕ) ⌉ = ∑_(a,c)∈ A_∘× C_∘ (b_0,b_1)∈ B_∘^2 b_0≠b_1(ϵ(a,b_1)ϵ(c,b_0) + ϵ(c,b_0)ϵ(b_1,a) )(-1)^i_a+i_b_0+i_b_1+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉=0 , where we used the change of variables (b_0,b_1)=(b,ϕ) in the second line and (b_0,b_1)=(ϕ,b) in the third and use the cyclic invariance. The second cancellation follows by a similar argument R_1(A,B,C)+Q_2(B,C,A) = ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ b^*∙ c^* ∙ a^*∙ c) ⌉ + ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(b,c)ϵ(a,c)(-1)^i_a+i_b⌈ b^*∙ c^* ∙ a^*∙ c) ⌉ =0 . Finally the hexagonal cancellation-1 follows from the hexagonal relation ϵ(a,u_b,c)ϵ(c,b)+ϵ(b,u_c,a)ϵ(a,c)+ϵ(c,u_a,b)ϵ(b,a)=0 , which is itself a consequence of lemma <ref> and the assumption (<ref>). A similar argument works the second hexagonal relation. §.§ The various possibilities for the triple bracket We have to consider 3 different possibilities for the triple brackets [A,[B,C] taking in account whether B and C have rank 1. The following lemma will be a consequence of lemma <ref>. We will also use the following conventions: if Q_1(U,V,W)=Q_2(U,V,W) , then we write Q(U,V,W) Q_1(U,V,W)=Q_2(U,V,W) , if R_1(U,V,W)=R_2(U,V,W) , then we write R(U,V,W) R_1(U,V,W)=R_2(U,V,W) . We have the following four possibilities (independent of the rank of U) for the triple brackets * The polygons V and W have both rank greater than 1, then [U,[V,W]] = P_1(U,V,W)+P_2(U,V,W)+Q_1(U,V,W)+Q_2(U,V,W) + R_1(U,V,W)+R_2(U,V,W)+ S_1(U,V,W)+S_2(U,V,W) . * Both v V and w W have rank 1, then [U,[v,w]] = Q(U,v,w)+R(U,v,w)+S_1(U,v,w)+S_2(U,v,w) . * The polygon W has rank greater than 1, while v V has rank 1, then [U,[v,W]] = P_2(U,v,W)+Q(U,v,W)+R_1(U,v,W)+R_2(U,v,W) + S_1(U,v,W)+S_2(U,v,W) . * The polygon W has rank greater than 1, while v V has rank 1, then [U,[V,w]] = P_2(U,W,v)+R(U,W,v)+Q_1(U,W,v)+R_2(U,W,v) + S_1(U,W,v)+S_2(U,W,v) . This is deduced from lemma <ref>. Indeed we deduce from that lemma that we have * if B is a geodesic, then case 1 does not happen, and case 4 and case 5 coincide, thus P_1(U,V,W)=0 , Q_1(U,V,W)=Q_2(U,V,W) Q(U,V,W) . 
* Symmetrically, if C is a geodesic, then case 2 does not happen, and case 6 and case 7 coincide, thus P_2(U,V,W)=0 , R_1(U,V,W)=R_2(U,V,W) R(U,V,W) . §.§ Proof of the Jacobi identity We will use freely in that paragraph lemma <ref> The previous discussion gives [A,[B,C]] = P_1(A,B,C)+P_2(A,B,C)+Q_1(A,B,C)+Q_2(A,B,C) + R_1(A,B,C)+R_2(A,B,C)+ S_1(A,B,C)+S_2(A,B,C) , B,[C,A]] = P_1(B,C,A)+P_2(B,C,A)+Q_1(B,C,A)+Q_2(B,C,A) + R_1(B,C,A)+R_2(B,C,A)+ S_1(B,C,A)+S_2(B,C,A) , C,[A,B]] = P_1(C,A,B)+P_2(C,A,B)+Q_1(C,A,B)+Q_2(C,A,B) + R_1(C,A,B)+R_2(C,A,B)+ S_1(C,A,B)+S_2(C,A,B) . The proof of the Jacobi identity then follows from the cancellations (<ref>). In that case, writing a A, b B and c C, we have [a,[b,c]] = Q(a,b,c)+R(a,b,c)+S_1(a,b,c)+S_2(a,b,c) , b,[c,a]] = Q(b,c,a)+R(b,c,a)+S_1(b,c,a)+S_2(b,c,a) c,[b,a]] = Q(c,a,b)+R(c,a,b)+S_1(c,a,b)+S_2(c,a,b) . The Jacobi identity follows from the cancellations (<ref>). Assume a A is a geodesic, B and C has rank 2. Then [a,[B,C]] = P_1(a,B,C)+P_2(a,B,C)+Q_1(a,B,C)+Q_2(a,B,C) + R_1(a,B,C)+R_2(a,B,C)+ S_1(a,B,C)+S_2(a,B,C) , C,[a,B]] = P_2(C,a,B)+Q_1(C,a,B)+Q_2(C,a,B) +R(C,a,B)+ S_1(C,a,B)+S_2(C,a,B) , B,[C,a]] = P_1(B,C,a)+R_1(B,C,a)+R_2(B,C,a) +Q(B,C,a)+ S_1(B,C,a)+S_2(B,C,a) . Then again the cancellations (<ref>), yields the Jacobi identity in that case. We have here that A has rank greater than 1, while b B and c C are geodesics, then [A,[b,c]] = Q(A,b,c)+R(A,b,c)+S_1(A,b,c)+S_2(A,b,c) , b,[c,A]] = P_1(b,c,A)+Q_1(b,c,A)+Q_2(b,c,A) +R(b,c,A)+ S_1(b,c,A)+S_2(b,c,A) , c,[A,b]] = P_2(c,A,b)+R_1(c,A,b)+R_2(c,A,b) +Q(c,A,b)+ S_1(c,A,b)+S_2(c,A,b) . For the last time, the cancellations (<ref>), yields the Jacobi identity in that case. § A LEMMA IN HYPERBOLIC GEOMETRY For any geodesic g and g_0, where g_0 is parametrized by the arc, the following holds. If R> 1 and d(g_0(R), g)<2, while d(g_0(R-1),g)≥ 2, then d(g_0(0),g)≥ R . We let h be a geodesic with d(g_0(R),h) = d(g_0(R-1),h) = 2. Then we observe that d(g_0(0),g) ≥ d(g_0(0), h). We drop perpendiculars from g_0(R-1),g_0(R-1/2) and g_0(0) to h. The perpendicular from g_0(R-1) to h is length 2 and let a be the length of the perpendicular from g_0(R-1/2). Then considering the Lambert quadrilateral with opposite sides of length a, 2 gives sinh(a)cosh(1/2) = sinh(2) , sinh(a)cosh( R-1/2)=sinh D , where D = d(g_0(0), h). It follows easily that e^D/2≥sinh(D) = sinh(a) cosh(R-1/2) ≥sinh (a)/2 e^R-1/2 . Thus d(g_0(0),g) ≥ D ≥ R-1/2+log(sinh(a)) ≥ R . § FUNDAMENTAL DOMAIN AND L^1-FUNCTIONS If Γ is a countable group acting on X preserving a measure μ, a μ-fundamental domain for this action is a measurable set Δ so that ∑_γ∈Γ 1_γ(Δ)=1, μ-almost everywhere. A function F on X is Γ-invariant if for every γ in Γ, F=F∘γ, μ–almost everywhere. Then For any Γ-invariant positive function, if Δ_0 and Δ_1 are fundamental domain then ∫_Δ_0 Fμ̣=∫_Δ_1 Fμ̣ . Using the γ-invariance of F ∫_Δ_0F=∑_γ∈Γ∫_X F· 1_Δ_0∩γ(Δ_1)μ̣=∑_η∈Γ∫_X F· 1_η(Δ_0)∩Δ_1μ̣= ∫_Δ_1F . We define by a slight abuse of language, if Γ-admits a μ fundamental domain Δ on X ∫_X/ΓF μ̣∫_ΔF μ̣ . Let Γ be a group acting properly on X_0 and X_1 preserving μ_0 and μ_1 respectively. Assume that Δ_0 – respectively Δ_1 – is a fundamental domain for the action of Γ on X_0 and X_1, then Let F be a positive function on X_0× X_1 which is Γ invariant, where Γ acts diagonally and the action on each factor preserves measures called μ_0 and μ_1 and admits a fundamental domain called Δ_0 and Δ_1, then ∫∫_Δ_0× X_1F μ̣_0⊗μ̣_1=∫∫_X_0×Δ_1F μ̣_0⊗μ̣_1 . 
Indeed Δ_0× X_1 and X_0×Δ_1 are both fundamental domains for the diagonal action of Γ on X_0× X_1. The lemma then follows from the previous one and Fubini theorem. Let f be a continuous function defined on a topological space X. Let μ be a Radon measure on X. Then the following lemma holds. Assume that there exists a real constant k so that for every exhausting sequence K of compacts of X, lim_m→∞∫_K_mf μ̣=k. Then f belongs to L^1(X,μ) and ∫_Xf μ̣=k. amsplain 10 Atiyah:1983 Michael F Atiyah and Raoul Bott, The Yang-Mills equations over Riemann surfaces, Philos. Trans. Roy. Soc. London Ser. A 308 (1983), no. 1505, 523–615. BGLPW Jonas Beyrer, Olivier Guichard, François Labourie, Beatrice Pozzetti, and Anna Wienhard, Positivity, cross ratios and the collar lemma. Bonahon:1988 Francis Bonahon, The geometry of Teichmüller space via geodesic currents, Inventiones Mathematicae 92 (1988), no. 1, 139–162. Bonahon:2014woa Francis Bonahon and Guillaume Dreyer, Hitchin characters and geodesic laminations, Acta Mathematica 218 (2017), no. 2, 201–295. Bridgeman:2020vg Martin Bridgeman, Richard Canary, and François Labourie, Simple length rigidity for Hitchin representations, Adv. Math. 360 (2020), 106901, 61. 4035950 Bridgeman:2015ba Martin J Bridgeman, Richard Canary, François Labourie, and Andres Sambarino, The pressure metric for Anosov representations, Geometric And Functional Analysis 25 (2015), no. 4, 1089–1179. Choi:2020aa Suhyoung Choi, Hongtaek Jung, and Hong Chan Kim, Symplectic coordinates on PSL_3( R)-Hitchin components, Pure Appl. Math. Q. 16 (2020), no. 5, 1321–1386. 4220999 Fock:2006a Vladimir V Fock and Alexander B Goncharov, Moduli spaces of local systems and higher Teichmüller theory, Publ. Math. Inst. Hautes Études Sci. (2006), no. 103, 1–211. Goldman:1984 William M Goldman, The symplectic nature of fundamental groups of surfaces, Advances in Mathematics 54 (1984), no. 2, 200–225. Goldman:1986 , Invariant functions on Lie groups and Hamiltonian flows of surface group representations, Inventiones Mathematicae 85 (1986), no. 2, 263–302. Kerckhoff:1983th Steven P. Kerckhoff, The Nielsen realization problem, Ann. of Math. (2) 117 (1983), no. 2, 235–265. 690845 Labourie:2020tv François Labourie and Jérémy Toulisse, Quasicircles and quasiperiodic surfaces in pseudo-hyperbolic spaces, arXiv:2010.05704, 2020. Labourie:2006 François Labourie, Anosov flows, surface groups and curves in projective space, Inventiones Mathematicae 165 (2006), no. 1, 51–114. Labourie:2005 , Cross ratios, surface groups, PSL(n, R) and diffeomorphisms of the circle, Publ. Math. Inst. Hautes Études Sci. (2007), no. 106, 139–213. Labourie:2013ka , Lectures on representations of surface groups, Zurich Lectures in Advanced Mathematics, European Mathematical Society (EMS), Zürich, 2013. Labourie:2012vka , Goldman algebra, opers and the swapping algebra, Geometry and Topology 22 (2018), no. 3, 1267–1348. McShane-Lab François Labourie and Gregory McShane, Cross ratios and identities for higher Teichmüller-Thurston theory, Duke Mathematical Journal 149 (2009), no. 2, 279 – 345. Labourie:2018fj François Labourie and Richard Wentworth, Variations along the Fuchsian locus, Annales Scientifiques de l'Ecole Normale Supérieure. Quatrième Série 51 (2018), no. 2, 487–547. Martone:2019uf Giuseppe Martone and Tengren Zhang, Positively ratioed representations, Comment. Math. Helv. 94 (2019), no. 2, 273–345. Nie:2013tu Xin Nie, The quasi-Poisson Goldman formula, J. Geom. Phys. 74 (2013), 1–17. 
Potrie:2014uta Rafael Potrie and Andrés Sambarino, Eigenvalues and Entropy of a Hitchin representation, Inventiones Mathematicae (2017), no. 3, 885–925. Sun:2021tj Zhe Sun, Rank n swapping algebra for PGL_n Fock-Goncharov X moduli space, Math. Ann. 380 (2021), no. 3-4, 1311–1353. Sun:2020vm Zhe Sun, Anna Wienhard, and Tengren Zhang, Flows on the PGL(V)-Hitchin component, Geom. Funct. Anal. 30 (2020), no. 2, 588–692. Sun:2017 Zhe Sun and Tengren Zhang, The Goldman symplectic form on the PGL(V)-Hitchin component, arXiv:1709.03589. Turaev:1991wk Vladimir G Turaev, Skein quantization of Poisson algebras of loops on surfaces, Annales Scientifiques de l'Ecole Normale Supérieure. Quatrième Série 24 (1991), no. 6, 635–704. Wolpert:1981vt Scott Wolpert, An elementary formula for the Fenchel-Nielsen twist, Comment. Math. Helv. 56 (1981), no. 1, 132–135. Wolpert:1983td Scott A Wolpert, On the Symplectic Geometry of Deformations of a Hyperbolic Surface, Annals of Mathematics 117 (1983), no. 2, 207–234.
http://arxiv.org/abs/2307.04029v1
20230708183856
On "Indifference" and Backward Induction in Games with Perfect Information
[ "Nimrod Megiddo" ]
cs.AI
[ "cs.AI" ]
http://arxiv.org/abs/2307.05204v1
20230711121944
Ranging Sensor Fusion in LISA Data Processing: Treatment of Ambiguities, Noise, and On-Board Delays in LISA Ranging Observables
[ "Jan Niklas Reinhardt", "Martin Staab", "Kohei Yamamoto", "Jean-Baptiste Bayle", "Aurélien Hees", "Olaf Hartwig", "Karsten Wiesner", "Gerhard Heinzel" ]
gr-qc
[ "gr-qc", "astro-ph.IM" ]
[email protected] Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Callinstraße 38, 30167 Hannover, Germany Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Callinstraße 38, 30167 Hannover, Germany Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Callinstraße 38, 30167 Hannover, Germany Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany University of Glasgow, Glasgow G12 8QQ, United Kingdom SYRTE, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, LNE, 61 avenue de l'observatoire 75014 Paris, France SYRTE, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, LNE, 61 avenue de l'observatoire 75014 Paris, France Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Callinstraße 38, 30167 Hannover, Germany Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Callinstraße 38, 30167 Hannover, Germany Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany Interspacecraft ranging is crucial for the suppression of laser frequency noise via time-delay interferometry (TDI). So far, the effect of on-board delays and ambiguities in the LISA ranging observables was neglected in LISA modelling and data processing investigations. In reality, on-board delays cause offsets and timestamping delays in the LISA measurements, and PRN ranging is ambiguous, as it only determines the range up to an integer multiple of the pseudo-random noise (PRN) code length. In this article, we identify the four LISA ranging observables: PRN ranging, the sideband beatnotes at the interspacecraft interferometer, TDI ranging, and ground-based observations. We derive their observation equations in the presence of on-board delays, noise, and ambiguities. We then propose a three-stage ranging sensor fusion to combine these observables in order to gain optimal ranging estimates. We propose to calibrate the on-board delays on ground and to compensate the associated offsets and timestamping delays in an initial data treatment (stage 1). We identify the ranging-related routines, which need to run continuously during operation (stage 2), and implement them numerically. Essentially, this involves the reduction of ranging noise, for which we develop a Kalman filter combining the PRN ranging and the sideband beatnotes. We further implement crosschecks for the PRN ranging ambiguities and offsets (stage 3). We show that both ground-based observations and TDI ranging can be used to resolve the PRN ranging ambiguities. Moreover, we apply TDI ranging to estimate the PRN ranging offsets. Ranging Sensor Fusion in LISA Data Processing: Treatment of Ambiguities, Noise, and On-Board Delays in LISA Ranging Observables Gerhard Heinzel August 12, 2023 =============================================================================================================================== § INTRODUCTION The Laser Interferometer Space Antenna (LISA), due for launch in 2034, is an ESA-led mission for space-based gravitational-wave detection in the frequency band between 0.1 and 1 <cit.>. LISA consists of three satellites forming an approximate equilateral triangle with an armlength of 2.5, in a heliocentric orbit that trails or leads Earth by about 20 degrees. 
Six infrared laser links with a nominal wavelength of 1064 connect the three spacecraft (SC), whose relative motion necessitates the usage of heterodyne interferometry. Phasemeters are used to extract the phases of the corresponding beatnotes <cit.>, in which gravitational-waves manifest in form of microcycle deviations equivalent to picometer variations in the interspacecraft ranges.The phasemeter output, however, is obscured by various instrumental noise sources. They must be suppressed to fit in the LISA noise budget of 10-0.5 (single link) <cit.>, otherwise they would bury the gravitational-wave signals. Dedicated data processing algorithms are being developed for each of these instrumental noise sources, their subsequent execution is referred to as initial noise reduction pipeline (INReP). The dominating noise source in LISA is by far the laser frequency noise due to the armlength differences in the order of 1% (25000). It must be reduced by more than 8 orders of magnitude. This is achieved by time-delay interferometry (TDI), which combines the various beatnotes with the correct delays to virtually form equal-optical-path-length interferometers, in which laser frequency noise naturally cancels <cit.>. The exact definition of these delays depends on the location of TDI within the INReP (see <ref>) <cit.>, but wherever we place it, some kind of information about the absolute interspacecraft ranges is required.Yet, absolute ranges are not a natural signal in a continuous-wave heterodyne laser interferometer such as LISA. Therefore, a ranging scheme based on pseudo-random noise (PRN) codes is implemented <cit.>. Each SC houses a free-running ultra-stable oscillator (USO) as timing reference. It defines the spacecraft elapsed time (SCET). PRN codes generated according to the respective SCETs are imprinted onto the laser beams by phase-modulating the carrier. The comparison of a PRN code received from a distant SC, hence generated according to the distant SCET, with a local copy enables a measurement of the pseudorange: the pseudorange is commonly defined as the difference between the SCET of the recei­ving SC at the event of reception and the SCET of the emitting SC at the event of emission <cit.>. It represents a combination of the true geometrical range (light travel time) with the offset between the two involved SCETs (see <ref>).In the baseline TDI topology (upper row in <ref>), TDI is performed after SCET synchronization to the barycentric coordinate time (TCB), the light travel times are used as delays. The pseudoranges comprise information about both the light travel times and the SCET offsets required for synchronizing the clocks (see <ref>). A Kalman filter can be used to disentangle the pseudoranges in order to retrieve light travel times and SCET offsets <cit.>. In the alternative TDI topology (lower row in <ref>), the pseudoranges are directly used as delays. In that topology, TDI is executed on the unsynchronized beatnotes sampled according to the respective SCETs <cit.>. However, PRN ranging (PRNR) does not directly provide the pseudoranges but requires three treatments. First, due to the finite PRN code length (we assume 400), PRNR measures the pseudoranges modulo an ambiguity <cit.>. Secondly, PRNR is limited by white ran­ging noise with an RMS amplitude of about 1 when sampled at 4 <cit.>. Thirdly, on-board delays due to signal propagation and processing cause offsets and time­stamping delays in the PRNR. 
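To make these three limitations concrete, the PRNR observable can be mimicked by a toy model in a few lines of Python. This is only an illustrative sketch under assumed units, not part of any flight or ground processing: the 400 km code length, the 1 m RMS white noise at 4 Hz, the offset, and the drift rate below are placeholder values, and all names are ours.

import numpy as np

# Illustrative parameters (units assumed: metres and seconds).
CODE_LENGTH = 400e3   # assumed PRN code length expressed as a range [m]
FS = 4.0              # telemetry sampling rate [Hz]
SIGMA_PRNR = 1.0      # white ranging noise RMS per sample [m]

def simulate_prnr(pseudorange, offset=50.0, seed=None):
    """Toy PRNR: true pseudorange + offset + white ranging noise,
    wrapped onto the finite PRN code length (ambiguity)."""
    rng = np.random.default_rng(seed)
    noisy = pseudorange + offset + SIGMA_PRNR * rng.standard_normal(pseudorange.shape)
    return np.mod(noisy, CODE_LENGTH)

t = np.arange(0, 3600, 1 / FS)        # one hour of samples
r_true = 2.5e9 + 10.0 * t             # ~2.5 Gm arm drifting by 10 m/s (illustrative)
prnr = simulate_prnr(r_true)          # ambiguous, offset and noisy observable

The unwrapping, noise reduction, and offset and ambiguity treatments discussed in the remainder of this article have to undo exactly these three effects.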
There are three additional pseudorange observables to resolve these difficulties: ground-based observations provide inaccurate but unambiguous pseudorange estimates; time-delay interfero­metric ranging (TDIR) turns TDI upside-down seeking a model for the delays that minimizes the laser frequency noise in the TDI combinations <cit.>; the sideband beatnotes include information about the time derivatives of the pseudoranges <cit.>. The combination of these four pseudorange observables in order to form optimal pseudorange estimates is referred to as ranging sensor fusion in the course of this article. It is common to both TDI topologies (see <ref>) and consequently a crucial stage of the INReP.In <ref>, we first specify the pseudorange definition. We then derive the observation equations of the four pseudorange observables carefully considering the effects of the on-board delays. In <ref>, we introduce a three-stage ranging sensor fusion consisting of an initial data treatment, a core ranging processing, and crosschecks. In the initial data treatment, we propose to compensate for the offsets and timestamping delays caused by the on-board delays. We identify PRNR unwrapping and noise reduction as the ranging processing steps that need to run continuously during operation. In parallel to this core ranging processing, we propose crosschecks of the PRNR ambiguities and offsets. We implement the core ranging processing and the crosschecks numerically. In <ref> we discuss the performance of this implementation, and conclude in <ref>. § RANGING MEASUREMENTS Each SC houses an ultra-stable oscillator (USO) gene­rating an 80 clock signal, the phasemeter clock (PMC). The PMC can be considered as the timing refe­rence on board the SC (see <ref>), its associated counter is referred to as spacecraft elapsed time (SCET): SCET(n)=∑_1^n 1/80. The SCET, denoted by τ̂_i, differs from the barycentric coordinate time (TCB), denoted by t, due to instrumental clock drifts and jitters, and due to relativistic effects. Following the notation of <cit.>, we use superscripts to indicate a quantity to be expressed as function of a certain time scale, e.g., τ̂_1^t denotes the SCET of SC 1 as function of TCB. Note that τ̂_i^τ̂_i(τ) = τ. Each SC contains two movable optical sub-assemblies (MOSAs) connected by an optical fibre (see <ref> for the labeling conventions). Each MOSA has an associated laser and houses a telescope, a free-falling test mass marking the end of the corresponding optical link, and an optical bench with three interferometers: the interspacecraft interferometer (ISI), in which the gravitational-wave signals eventually appear, the reference interfero­meter (RFI) to compare local and adjacent lasers, and the test-mass interferometer (TMI) to sense the optical bench motion with respect to the free-falling test mass in direction of the optical link. The beatnotes in these interferometers are detected with quadrant-photo-receivers (QPRs). They are digitized in analog-to-digital converters (ADCs) driven by the PMCs. Phasemeters extract the beatnote phases[In the current design, the phasemeters deliver the beatnote frequencies with occasional phase anchor points.] using digital phase-locked loops (DPLLs), which are then downsampled to 4 in a multi-stage decimation procedure (DEC) and telemetered to Earth. 
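As a side illustration of the role of the SCET, the deterministic part of its deviation from its reference time scale can be sketched in a single line of Python; the initial offset and the constant fractional frequency offset are placeholder values of the order quoted in the results section, and the function name is ours.

def scet_of_elapsed_time(tau, scet0=1.0, y=1e-7):
    """Toy SCET model: an initial offset plus a counter running fast or slow
    by a constant fractional frequency offset y of the USO (placeholders).
    Stochastic clock jitter and higher-order drifts are ignored here."""
    return scet0 + (1.0 + y) * tau

A differential fractional frequency offset of order 10^-7 between two USOs already translates into a pseudorange drift of order tens of metres per second, which is the magnitude of the PRNR drifts reported later.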
§.§ The pseudorange and on-board delays The pseudorange, denoted by R_ij^τ̂_i, is commonly defined as the difference between the SCET of the receiving SC at the event of reception and the SCET of the emitting SC at the event of emission <cit.>. It represents a combination of the light travel time between the emission at SC j and the reception at SC i, and the differential SCET offset (see <ref>). However, considering the complexity of the LISA metrology system, this definition appears to be rather vague: to what exactly do we relate the events of emission and reception? Two specifications are required here: we need to locate emission and reception, and we need to define the actual events. It is convenient to consider emission and reception at the respective polarizing beam splitters (PBSs) in front of the telescopes (denoted PBS1 in <cit.>), and to treat the on-board signal propa­gation and processing on both SC as on-board delays. Thus, we clearly separate the pseudorange from on-board delays. This definition is not unique, the events of emission and reception could be located elsewhere, assuming that the on-board delays are defined accordingly. The LISA optical links do not involve delta-pulse-like events. In order to define the actual events of emission and reception we, instead, use the instants when the light phase changes at the beginning of the first PRN code chip. At first glance, the PRN code might seem unfavorable for the pseudorange definition, as PRN and carrier phase are oppositely affected by the solar wind: the PRN phase is delayed by the group-delay, while the carrier phase is advanced by the phase delay. However, these effects are at the order of 10 (see <ref>), whereas our best pseudorange estimates are at 0.1 accuracy. Consequently, the solar wind dispersion can be neglected in the pseudorange definition. When expressing the interferometric measurements according to this specified pseudorange definition, we need to consider the excluded on-board signal propagation and processing. For that purpose, we introduce two kinds of delay operators by their action on a function f^τ̂_j. The on-board delay operator describes delays due to on-board signal propagation and processing and is defined on the same SCET as the function it is acting on: D_x^τ̂_j f^τ̂_j(τ) = f^τ̂_j(τ - d_x^τ̂_j(τ)). x is a place holder for any on-board delay, e.g., D_pbs ← l denotes the optical path length from the laser to the PBS and D_dec the decimation filter group delay. The interspacecraft delay operator is defined on a different SCET than the function it is acting on and applies the pseudorange as delay: D_ij^τ̂_i f^τ̂_j(τ) = f^τ̂_j(τ - R_ij^τ̂_i(τ)). For on-board delays that differ between carrier, PRN, and sideband signals, we add the superscripts car, prn, and sb, respectively. To trace the full path of a signal from the distant SC, we need to combine the interspacecraft delay operator for the interspacecraft signal propagation and the SCET conversion (considered at the PBS of the receiving SC) with on-board delay operators on both SC. The application of a delay operator to another time-dependent delay operator results in nested delays: D_x^τ̂_i D_ij^τ̂_i f^τ̂_j(τ) = f^τ̂_j(τ - d_x^τ̂_i(τ) - R_ij^τ̂_i(τ - d_x^τ̂_i(τ))). For a constant delay operator D_x we can define the associated advancement operator A_x acting as its inverse: A^τ̂_j_x f^τ̂_j(τ) = f^τ̂_j(τ + d_x^τ̂_j), A_x D_x f^τ̂_j(τ) = f^τ̂_j(τ-d_x^τ̂_j+d_x^τ̂_j) = f^τ̂_j(τ). 
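In discretely sampled telemetry, these operators amount to evaluating the sampled series at shifted times by interpolation. The following minimal Python sketch (our own names; plain linear interpolation for brevity, whereas production tools typically use higher-order, e.g. Lagrange, interpolating kernels) illustrates the delay, advancement, and nested-delay actions defined above.

import numpy as np

def delay(f, tau, d):
    """Apply D_x: return f(tau - d) by interpolating the sampled series f.
    d may be a scalar or an array aligned with tau (time-dependent delay)."""
    return np.interp(tau - d, tau, f)

def advance(f, tau, d):
    """Apply A_x = D_x^{-1} for a constant delay d: return f(tau + d)."""
    return np.interp(tau + d, tau, f)

def nested_delay(f, tau, d_x, r_ij):
    """Apply D_x D_ij, cf. the nesting formula above:
    f(tau - d_x - R_ij(tau - d_x))."""
    r_shifted = np.interp(tau - d_x, tau, r_ij)   # R_ij evaluated at tau - d_x
    return np.interp(tau - d_x - r_shifted, tau, f)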
For advancement operators associated to propagation delays, e.g., the optical path length from the laser to the PBS, we write D_pbs ← l^-1 = A_l ← pbs, the subscript underlines that the advancement operator undoes the signal propagation. Below, we consider on-board delays as constant or slowly time varying so that their associated advancement operators are well-defined.What does the specified pseudorange definition imply for TDI in the context of on-board delays? In <cit.> the pseudoranges are said to be the delays that are to be applied in TDI in the alternative topology. To find out whether this statement holds, we write down the ISI carrier beatnotes in the presence of on-board delays using the above defined delay operators: ISI_ij^τ̂_i(τ)=D_dec ← bs^car, τ̂_i( D_bs ← pbs^τ̂_i D_ij^τ̂_i D_pbs ← l^τ̂_jΦ_ji^τ̂_j(τ) - D_bs ← l^τ̂_i Φ_ij^τ̂_i(τ) ). D_pbs ← l denotes the optical path length from the laser to the PBS (before transmission), D_bs ← pbs is the optical path length from the PBS to the recombining beam splitter of the interspacecraft interferometer (ISI BS) (after reception), and D_bs ← l denotes the optical path length from the local laser to the ISI BS. These optical path lengths are in the order of 10 to 1 <cit.>. D_dec ← bs^car denotes the delay from the ISI BS to the decimation filters, it differs for sideband and PRN signals. The dominating part of D_dec ← bs^car is the group delay of the deci­mation filters in the order of 1. To identify the delay we need to apply in TDI, it is convenient to split the delays in <ref> into a common and an uncommon delay by inserting D_bs ← l^τ̂_i A_l ← bs^τ̂_i=1 in front of the bracket: ISI_ij^τ̂_i(τ) =C_i^car, τ̂_i(U_ij^τ̂_i Φ_ji^τ̂_j(τ) - Φ_ij^τ̂_i(τ)), C_i^car, τ̂_i =D_dec ← bs^car, τ̂_i D_bs ← l^τ̂_i, U_ij^τ̂_i =A_l ← bs^τ̂_i D_bs ← pbs^τ̂_i D_ij^τ̂_i D_pbs ← l^τ̂_j. C_i^car denotes the common delay of the local and the distant carrier phase. U_ij is the uncommon delay that only applies to the distant carrier phase. We refer to C_i^car and U_ij as common and uncommon carrier delay, respectively. To see how these delays affect the carrier beatnotes, we expand <ref>: ISI_ij^τ̂_i(τ) = Φ_ji^τ̂_j( τ - c_i^τ̂_i - u_ij^τ̂_i(τ - c_i^τ̂_i)) - Φ_ij^τ̂_i(τ - c_i^τ̂_i). The common carrier delay causes a timestamping delay in both the laser phases and the uncommon carrier delay (essentially the pseudorange). It can be compensated by application of its associated advancement operator: (C_i^car, τ̂_i)^-1ISI_ij^τ̂_i(τ) =U_ij^τ̂_i Φ_ji^τ̂_j(τ) - Φ_ij^τ̂_i(τ) , (C_i^car, τ̂_i)^-1 =(D_bs ← l^τ̂_i)^-1(D_dec ← bs^car, τ̂_i)^-1 =A_l ← dec^car, τ̂_i. TDI is blind to the common carrier delay, as it equally delays the laser phases and the pseudorange. Hence, from the perspective of TDI <ref> and <ref> are equiva­lent. Nevertheless, the compensation of the common carrier delay is important for the synchronization of the measurements to TCB. We propose to calibrate C_i^car on ground, so that during operation it can be compensated in an initial data treatment by application of its associated advancement operator (see <ref>). After this initial data treatment, the uncommon carrier delay constitutes the delay that is to be applied in TDI in the alternative topology. It is composed of the optical path length delay from the distant laser source to the local ISI BS and the optical path length advancement from the ISI BS to the local laser source. Hence, it can be thought of as the differential optical path length from both lasers to the ISI BS. 
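Since the on-board path lengths entering the uncommon carrier delay are at the nanosecond scale and vary slowly, the nesting terms are negligible, and the uncommon delay can be assembled to first order by simply adding the calibrated constants to the measured pseudorange. The following helper is a sketch under this first-order assumption (names and sign conventions are ours; all quantities in seconds):

def uncommon_carrier_delay(r_ij, d_pbs_from_laser_distant,
                           d_bs_from_pbs_local, d_bs_from_laser_local):
    """First-order uncommon delay u_ij: measured pseudorange R_ij plus the
    ground-calibrated distant laser-to-PBS and local PBS-to-ISI-BS path
    lengths, minus the local laser-to-ISI-BS path length. Neglects the
    sub-picosecond cross terms from nesting these small delays."""
    return (r_ij + d_pbs_from_laser_distant
            + d_bs_from_pbs_local - d_bs_from_laser_local)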
To construct the uncommon carrier delay, we need to measure the optical path lengths laser to PBS, PBS to ISI BS, and laser to ISI BS on ground, and we need to measure the pseudorange during operation. The sections <ref> to <ref> cover the four pseudorange observables. Before, we close this section with a few comments on the common carrier delay.Parts of the common carrier delay are slowly time varying. To analyze the origin of this time dependence we decompose C_i^car into C_i^car = D^car_dec D^car_dpll D_dpll ← abee D^car_abee D_abee ← qpr D^car_qpr D_qpr ← bs D_bs ← l, these constituents are marked green in <ref>. The dominating contribution is by far the decimation filter group delay D^car_dec in the order of 1. It is constant and predetermined by the design of the decimation filters. The group delays of the quadrant-photo-receiver D^car_qpr and the analog backend electronics[The analog backend electronics comprise analog signal amplifiers, analog low-pass filters, and the ADC.] D^car_abee depend amongst others on the beatnote frequency <cit.>. Hence, they change over time and differ between carrier, sideband, and PRN signals. Together with the cable delays D_abee← qpr and D_dpll← abee they can amount to 10. The DPLL delay D^car_dpll depends on the time-dependent beatnote amplitude. The higher this amplitude the smaller D^car_dpll <cit.>. D_qpr ← bs and D_bs ← l, for completeness, denote the optical path lengths from the local laser to the QPR in the order of 10 to 1 <cit.>. We propose to individu­ally calibrate all constituents of C_i^car on ground. The time-dependent ones should be calibrated for all combinations of the time-dependent parameters. Hence, during operation they can be constructed with the help of the SC monitors, which provide the corresponding parameter values, e.g., beatnote frequency and amplitude. §.§ PRN ranging (PRNR) A set of 6 pseudo-random noise (PRN) sequences has been computed such that the cross-correlations and the auto-correlations for nonzero delays are minimized. These PRN codes are associated to the 6 optical links in the LISA constellation. The PRN codes are genera­ted according to the respective PMCs and imprinted onto the laser beams by phase-modulating the carriers in electro-optical modulators (EOMs). In each phasemeter, DPLLs are applied to extract the beatnote phases. The PRN codes show up in the DPLL error signals since the DPLL bandwidth of 10 to 100 is lower than the PRN chipping rate of about 1. In a delay-locked loop (DLL), these error signals are correlated with PRN codes generated according to the local SCET. This correlation yields a pseudorange measurement, we refer to it as PRN ranging (PRNR) <cit.>.We now derive the PRNR observation equation carefully taking into account on-board delays. We model the path of the PRN code from the distant SC to the local DLL by applying delay operators to the distant SCET: D_dll ← pbs^prn, τ̂_i D_ij^τ̂_i D_pbs ← pmc^prn, τ̂_j τ̂_j^τ̂_j(τ). The two on-board delays can be decomposed into D_pbs ← pmc^prn= D_pbs ← eom D_eom ← prn D_prn D_prn ← pmc, D_dll ← pbs^prn= D_dll D_dpll^prn D_dpll ← abee D_abee^prn D_abee ← qpr D_qpr^prn D_qpr ← bs D_bs ← pbs. D_pbs ← pmc^prn consists of the cable delays from the PMC to the EOM, the processing delay due to the PRN code generation, and the optical path length from the EOM to the PBS. All these delays are constant at the sensitive scale of PRNR, so that we do not have to consider delay nesting in D_pbs ← pmc^prn. 
We added the superscript prn because this path is different for the sideband signal. D_dll ← pbs^prn is explained in the next paragraph as part of the PRN timestamping delay. At the DLL, the received PRN codes are correlated with identical codes generated according to the local SCET. We model this correlation as the difference between the local SCET and the delayed distant SCET (<ref>), and we apply D^prn_dec to model the group delay of the decimation filters applicable to PRN ranging: D^prn_dec(τ̂_i^τ̂_i(τ) - D_dll ← pbs^prn, τ̂_i D_ij^τ̂_i D_pbs ← pmc^prn, τ̂_j τ̂_j^τ̂_j(τ)). To see how the on-board delays affect the PRNR we expand <ref> applying <ref>: D^prn_dec(τ̂_i^τ̂_i(τ) -τ̂_j^τ̂_j(τ - d^τ̂_i_dll ← pbs - R_ij^τ̂_i(τ - d^τ̂_i_dll ← pbs) - d^τ̂_j_pbs ← pmc)) = D_dec ← pbs^prn, τ̂_i R_ij^τ̂_i(τ) +O^prn_ij. The on-board delays cause a timestamping delay D_dec ← pbs^prn, the PRN timestamping delay, and an offset O^prn_ij, the PRNR offset: D_dec ← pbs^prn= D^prn_dec D_dll D_dpll^prn D_dpll ← abee D_abee^prn D_abee ← qpr D_qpr^prn D_qpr ← bs D_bs ← pbs, O_ij^prn= d^τ̂_i_dll ← pbs+d^τ̂_j_pbs ← pmc. The PRN timestamping delay has similar constituents as the common carrier delay, they are marked pink in <ref>. However, most of them are frequency or amplitude dependent. Therefore, they differ between carrier and PRN signals. As for the common carrier delay, we propose to individually calibrate all constituents of the PRN time­stamping delay on ground before mission start. Hence, during operation D_dec ← pbs^prn can be compensated in an initial data treatment by application of its associated advancement operator A_pbs ← dec^prn. After that, the PRNR observation equation including ranging noise and PRN ambiguity can be written as: A_pbs ← dec^prn, τ̂_i PRNR_ij^τ̂_i(τ) = R_ij^τ̂_i(τ) + O^prn_ij + N^prn_ij(τ) - a^prn_ij(τ) · l. l denotes the finite PRN code length. We use 400 as a placeholder, the final value has not been decided. The finite PRN code length leads to an ambiguity, a^prn_ij denote the associated ambiguity integers <cit.>. N^prn_ij is the white ranging noise with an RMS amplitude of about 1 at 4. This ranging noise is mainly due to shot noise and PRN code interference <cit.>. The PRNR offset O^prn_ij involves contributions on the emitter and on the receiver side (see <ref>), they are marked light blue in <ref>. It can amount to 10 and more <cit.>. Similar to the common carrier and the PRN timestamping delay, we propose to calibrate the PRNR offset on ground, so that it can be subtracted in an initial data treatment. §.§ Sideband ranging (SBR) For the purpose of in-band clock noise reduction in the INReP, a clock noise transfer between the SC is implemented <cit.>: the 80 PMC signals are up-converted to ν^m_l=2.400 and ν^m_r=2.401 for left and right-handed MOSAs, respectively (see <ref> for the definition of left and right-handed MOSAs). The EOMs phase-modulate the carriers with the up-converted PMC signals, thereby creating clock sidebands.[We focus on the first order upper clock sidebands, because the lower sidebands contain almost the same information.] We show below that the beatnotes between these clock sidebands constitute a pseudorange observable.Considering on-board delays, the difference between carrier and sideband beatnotes can be written as ISI _ij^τ̂_i(τ) - ISI_sb, ij^τ̂_i(τ) = - D^sb, τ̂_i_dec ← bs { D^τ̂_i_bs ← pbs D^τ̂_i_ij(D^sb, τ̂_j_pbs ← pmc ν^m_ji τ̂_j^τ̂_j(τ) + ν^m_ji M^τ̂_j_ji(τ)) -( D^sb, τ̂_i_bs ← pmc ν^m_ij τ̂_i^τ̂_i(τ) + ν^m_ij M^τ̂_i_ij(τ) ) }. 
D^sb_pbs ← pmc and D^sb_bs ← pmc are the delay operators associated to the paths from the PMC to the PBS and to the ISI BS, respectively. They can be decomposed into: D^sb_(p)bs ← pmc= D_(p)bs ← eom D_eom ← pmc D_up, D_up is the up-conversion delay due to phase-locking a 2.40(1) oscillator to the the 80 PMC signal, D_eom ← pmc is the cable delay from the PMC to the EOM. ν^m_ij is the up-converted USO frequency associated to MOSA_ij. Since <ref> is expressed in the SCET, all clock imperfections are included in τ̂_i^τ̂_i(τ). The modulation noise M^τ̂_i_ij contains any additional jitter collected on the path D^sb_(p)bs ← pmc, e.g., due to the electrical frequency up-converters. The amplitude spectral densities (ASDs) of the modulation noise for left and right-handed MOSAs are specified to be below <cit.> √(S_M_l(f)) = 2.5 × 10^-6-0.5(f/Hz)^-2/3, √(S_M_r(f)) = 2.5 × 10^-5-0.5(f/Hz)^-2/3. The modulation noise on left-handed MOSAs is one order of magnitude lower, because the pilot tone used for the ADC jitter correction, hence being the ultimate phase reference, is derived from the 2.400 clock signal. To derive a pseudorange observation equation from the sideband beatnote we expand <ref> using <ref>. We apply the advancement operator A^sb_pbs ← dec to avoid nested delays in the pseudorange: A^sb, τ̂_i_pbs ← dec( ISI_ij^τ̂_i(τ) - ISI_sb, ij^τ̂_i(τ) ) = ν^m_ij A^τ̂_i_pbs ← bs(D^sb, τ̂_i_bs ← pmc τ̂_i^τ̂_i(τ) + M^τ̂_i_ij) - ν^m_ji D^τ̂_i_ij( D^sb, τ̂_j_pbs ← pmc τ̂_j^τ̂_j(τ) + M^τ̂_j_ji(τ)) = (ν^m_ij - ν^m_ji)τ + ν^m_ji R_ij^τ̂_i(τ) + ν^m_ji· d^τ̂_j_pbs ← pmc -ν^m_ij·(d^τ̂_i_bs ← pmc - d^τ̂_i_pbs ← bs) + ν^m_ij A^τ̂_i_pbs ← bs M^τ̂_i_ij(τ)-ν^m_ji D^τ̂_i_ij M^τ̂_j_ji(τ). We subtract the 1 ramp and then refer to <ref> as sideband ranging (SBR). Taking into account that the SBR phase is defined up to a cycle, the SBR can be written as SBR_ij^τ̂_i(τ) = A^sb, τ̂_i_pbs ← dec( ISI_ij^τ̂_i(τ) - ISI_sb, ij^τ̂_i(τ) ) ±1 τ =ν^m_ji R_ij^τ̂_i(τ)+ O^sb_ij + N^sb_ij(τ) - a^sb_ij(τ). a^sb_ij denote the SBR ambiguity integers. Expressed as length, the SBR ambiguity is 12.5 corresponding to the wavelength of the sidebands. The SBR offset O^sb_ij= ν^m_ji· d^τ̂_j_pbs ← pmc - ν^m_ij·(d^τ̂_i_bs ← pmc - d^τ̂_i_pbs ← bs) can be thought of as the differential phase accumulation of local and distant PMC signals on their paths to the respective PBSs. Similar to the PRNR offset and the various delays, the SBR offset should be measured on ground. N^sb_ij denotes the appearance of the modulation noise in the SBR: N^sb_ij(τ) = ν^m_ij A^τ̂_i_pbs ← bs M^τ̂_i_ij(τ)-ν^m_ji D^τ̂_i_ij M^τ̂_j_ji(τ). This is a combination of left and right-handed modulation noise, their RMS amplitudes are 2.9 × 10^-5 and 2.9 × 10^-4, respectively. As shown in <cit.>, it is possible to combine carrier and sideband beatnotes from the RFI to form measurements of the dominating right-handed modulation noise, which can, thus, be subtracted from the SBRs (see <ref>). The advancement operator A_pbs ← dec^sb (see <ref>) is associated to the delay operator D_dec ← pbs^sb, to which we refer as sideband timestamping delay. The sideband timestamping delay can be decomposed into: D_dec ← pbs^sb = D^sb_dec D^sb_dpll D_dpll ← abee D^sb_abee D_abee ← qpr D^sb_qpr D_qpr ← bs, these constituents are marked dark yellow in <ref>. As for the common carrier and the PRN timestamping delay, we propose to individually calibrate all its constituents on ground. 
The sideband timestamping delay can then be compensated in an initial data treatment by application of its associated advancement operator (see <ref>).In reality, the beatnotes are expected to be delivered not in phase, but in frequency with occasional phase anchor points. Therefore, we consider the derivative of <ref>, we refer to it as sideband range rate (ṠḂṘ): ṠḂṘ_ij^τ̂_i(τ) =ν^m_ji Ṙ_ij^τ̂_i(τ) + Ṅ^sb_ij(τ). The sideband range rates are an offset-free and unambiguous measurement of the pseudorange time derivatives. Phase anchor points enable their integration, so that we recover <ref>. §.§ Time-delay interferometric ranging (TDIR) TDI builds combinations of delayed ISI and RFI carrier beatnotes to virtually form equal-arm interferometers, in which laser frequency noise is suppressed. In the alternative TDI topology, the corresponding delays are given by the pseudoranges in combination with the small optical path lengths between laser, PBS, and ISI BS (see the uncommon carrier delay <ref>). Time delay interferome­tric ranging (TDIR) turns this approach upside-down: it minimizes the power integral of the laser frequency noise in the TDI combinations by varying the delays that are applied to the beatnotes <cit.>. When doing this before clock synchronization to TCB, i.e., with the beatnotes sampled according to the respective SCETs, the uncommon delays show up at the very minimum of that integral. Thus, TDIR constitutes a pseudorange observable.Below, we consider TDI in frequency <cit.>. We introduce the Doppler-delay operator, which can be consi­dered as the time derivative of the interspacecraft delay operator (see <ref>): Ḋ^τ̂_i_ij f^τ̂_j(τ)= (1 - Ṙ_ij^τ̂_i(τ))· f^τ̂_j(τ - R_ij^τ̂_i(τ)). We use the shorthand notation Ḋ^τ̂_i_ijk = Ḋ^τ̂_i_ij Ḋ^τ̂_j_jk to indicate chained interspacecraft Doppler-delay operators. In this paper we neglect on-board delays in the RFI beatnotes. We start our consideration of TDIR from the intermediary TDI variables η_ij. These are combinations of the ISI and RFI carrier beatnotes to eliminate the laser frequency noise contributions of right-handed lasers. In terms of the η_ij the second-generation TDI Michelson variables can be expressed as <cit.> X_2^τ̂_1 =(1 - Ḋ^τ̂_1_121 - Ḋ^τ̂_1_12131 + Ḋ^τ̂_1_1312121)(η^τ̂_1_13 - Ḋ^τ̂_1_13η^τ̂_3_31) -(1 - Ḋ^τ̂_1_131 - Ḋ^τ̂_1_13121 + Ḋ^τ̂_1_1213131)(η^τ̂_1_12 - Ḋ^τ̂_1_12η^τ̂_2_21) Y_2^τ̂_2(τ) and Z_2^τ̂_3(τ) are obtained by cyclic permutation of the indices. For later reference, we also state the first generation TDI Michelson variables: X_1^τ̂_1 = (1 - Ḋ^τ̂_1_121)(η^τ̂_1_13 - Ḋ^τ̂_1_13η^τ̂_3_31) - (1 - Ḋ^τ̂_1_131)(η^τ̂_1_12 - Ḋ^τ̂_1_12η^τ̂_2_21). In the framework of TDIR, the delays applied in TDI are parameterized by a model, e.g., by a polynomial model. We minimize the power integral of the TDI combinations by varying the model parameters. TDIR attempts to minimize the in-band laser frequency noise residual. Therefore, we apply a band-pass filter to first remove other contributions appearing out-of-band, i.e., slow drifts and contributions above 1Hz that are domi­nated by aliasing and interpolation errors. The TDIR pseudorange observables for the second generation TDI Michelson variables can then be expressed as TDIR_ij^τ̂_i = min_Θ1/T∫_1/T^T[X̃_2^τ̂_1]^2 + [Ỹ_2^τ̂_2]^2 + [Z̃_2^τ̂_3]^2 dt, Θ denotes the parameters of the delay model, the tilde indicates the filtered TDI combinations.The TDIR accuracy, we denote it by σ^tdir, increases with the integration time T (length of telemetry dataset). 
It is in the order of <cit.>: σ^tdir(T) ∝10 √(/T), where stands for day. §.§ Ground-observation based ranging (GOR) The mission operation center (MOC) provides orbit determinations (ODs) via the ESA tracking stations and MOC time correlations (MOC-TCs). When combined properly, these two on-ground measurements form a pseudorange observable referred to as ground-observation based ranging (GOR). It has an uncertainty of about 50 due to uncertainties in both the OD and the MOC-TC. Yet, it yields valuable information. It is unambiguous, hence it allows to resolve the PRNR ambiguities.The OD yields information about the absolute positions and velocities of the three SC. New orbit determinations are published every few days. For the position and velocity measurements in the line of sight, radial (with respect to the sun) and cross-track direction conservative estimations by ESA state the uncertainties as 2 and 4, 10 and 4, 50 and 5, respectively <cit.>. The MOC-TC is a measurement of the SCET desynchronization from TCB. It is determined during the telemetry contacts via a comparison of the SCET associated to the emission of a telemetry packet and the TCB of its reception on Earth taking into account the down link delay. We expect the accuracy of the MOC-TC to be better than 0.1 (corresponds to 30). This uncertainty is due to unexact knowledge of the SC-to-ground-station separation, as well as inaccuracies in the time tagging process on board and on ground.As shown in <ref>, the pseudoranges can be expressed in TCB as functions of the reception time: R_ij^t(t) = (1 + δτ̇̂̇_j^t(t)) · d_ij^t(t) + δτ̂_ij^t(t). d_ij^t denotes the light travel time from SC j to SC i, δτ̂_ij^t the offset between the involved SCETs, and δτ̇̂̇_j^t the SCET drift of the emitting SC with respect to TCB. The light travel times can be expressed in terms of the ODs <cit.>: d^t_od, ij(t) =1/c L^t_ij(t) + 1/c^2 L⃗^t_ij(t)·v⃗_j^t(t) + O(c^-3), L⃗_ij = r⃗_i - r⃗_j, L_ij = |L⃗_ij|, r⃗_i denoting the position of the receiving SC, r⃗_j and v⃗_j the position and the velocity of the emitting one, respectively. The terms of order O(c^-3) contribute to the light travel time at the order of 10 and are therefore negligible compared to the large uncertainties of the orbit determination. Combining the light travel times obtained this way with the MOC-TC allows to write the GOR as GOR_ij^t(t) = d^t_od, ij(t)+δτ̂^t_tc, i(t) - δτ̂^t_tc, j(t) + N^gor_ij(t). δτ̂^t_tc, i denotes the MOC-TC of SC i and N_gor^t∼50 the GOR uncertainty. Note that OD and MOC-TC, and hence also the GOR, are given in TCB, while all other pseudorange observables are sampled in the respective SCETs. This desynchronization is negligible: the desynchronization can amount up to 10 after the ten year mission time, the pseudoranges drift with 10 to 100-1 (see central plot in <ref>). Hence, neglecting the desynchronization leads to an error in the order of 100 to 1000, which is negligible compared to the large GOR uncertainty. § RANGING SENSOR FUSION To combine the four pseudorange observables, we propose a three-stage ranging sensor fusion consisting of an initial data treatment, a ranging processing, and crosschecks. The ranging processing (central part of <ref>) refers to the ranging-related routines, which need to run continuously during operation. These are the PRNR unwrapping, and the reduction of ranging and right-handed modulation noise. Simultaneously, the PRNR ambiguities and offsets are steadily crosschecked using TDIR and GOR (lower part of <ref>). 
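Before turning to the individual stages, we illustrate the GOR combination introduced above with a short Python sketch (our own function name; positions and velocities in metres and metres per second, MOC-TCs in seconds, all expressed in TCB; only the O(1/c^2) expansion of the light travel time is kept):

import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def gor(r_i, r_j, v_j, moc_tc_i, moc_tc_j):
    """Ground-observation based ranging [s]: OD-based light travel time
    from SC j to SC i plus the differential MOC time correlations."""
    l_ij = r_i - r_j                                   # separation vector(s)
    d_od = (np.linalg.norm(l_ij, axis=-1) / C
            + np.einsum('...k,...k', l_ij, v_j) / C**2)
    return d_od + moc_tc_i - moc_tc_j

The uncertainty of this estimate, at the level of tens of kilometres, is inherited from the OD and MOC-TC errors, which is why the GOR is used for ambiguity resolution rather than as a precise ranging observable.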
Both ranging processing and crosschecks rely on a preceding initial data treatment (upper part of <ref>), in which the various delays and offsets are compensated for. Ranging processing and crosschecks can be categorized into four parts demonstrated below: PRNR ambiguity, noise, PRNR offset, and SBR ambiguity. §.§ PRNR ambiguity As part of the ranging processing, the PRNR needs to be steadily unwrapped: due to the finite PRN code length, the PRNR jumps back to 0 when crossing 400 and vice versa (see upper plot in <ref>). These jumps are unphysical but easy to track and to remove. Apart from that, the PRNR ambiguities need to be crosschecked regularly. For that purpose we propose two independent methods below.The combination of PRNR and GOR enables an identification of the PRNR ambiguity integers a^prn_ij: GOR_ij^t(t) - PRNR_ij^τ̂_i(τ) = N^gor_ij + a^prn_ij(τ) ·400 +R_ij^t(t) - R_ij^τ̂_i(τ) - O^prn_ij - N^prn_ij(τ)_negligible, a^prn_ij(τ) = round[GOR_ij^t(t) - PRNR_ij^τ̂_i(τ)/400], 400 is the value we assumed for the PRN code length. However, this procedure only succeeds if | N^gor_ij| does not exceed the PRN code's half length, i.e., 200. Otherwise, a wrong value for the associated PRN ambiguity integer is selected resulting in an estimation error of 400 in the corresponding link. Note that GOR_ij^t(t) and PRNR_ij^τ̂_i(τ) are sampled according to different time frames, but this desynchronization is negligible considering the low accuracy that needs to be reached here (see <ref>).TDIR constitutes an unambiguous pseudorange observable too. It can be applied as an independent crosscheck of the PRNR ambiguities. We linearly detrend the ISI, RFI, and TMI beatnotes. We then form the first-generation TDI Michelson variables (see <ref>) assuming constant delays. It is not necessary to apply second-generation TDI, the first-generation already accomplishes the task (see <ref>). The pseudoranges are actually drifting by 10 to 100-1 mainly due to differen­tial USO frequency offsets (see central plot in <ref>). Therefore, we choose a short integration time (we use 150), otherwise the constant delay model is not sufficient. We use the GOR for the initial delay values of the TDIR estimator. The TDIR pseudorange estimates can then be used to crosscheck the PRNR ambiguity integers: a^prn_ij(τ) = round[TDIR_ij^τ̂_i(τ) - PRNR_ij^τ̂_i(τ)/400]. §.§ Noise reduction For the ranging noise reduction in the ranging proces­sing, we propose to combine PRNR and sideband range rates in a linear Kalman filter (KF). The conventional KF requires all measurements to be sampled according to one overall time grid. However, in LISA each SC involves its own SCET. We circumvent this difficulty by splitting up the system and build one KF per SC. Each KF only processes the measurements taken on its associated SC, so that the individual SCETs serve as time-grids.The state vector of the KF belonging to SC 1 and its associated linear system model can be expressed as x^τ̂_1 = (R^τ̂_1_12, R^τ̂_1_13, Ṙ^τ̂_1_12, Ṙ^τ̂_1_13, R̈^τ̂_1_12, R̈^τ̂_1_13)^⊺, x^τ̂_1_k+1 = [ 1 0 Δ t 0 Δ t^2/2 0; 0 1 0 Δ t 0 Δ t^2/2; 0 0 1 0 Δ t 0; 0 0 0 1 0 Δ t; 0 0 0 0 1 0; 0 0 0 0 0 1 ]· x^τ̂_1_k + w^τ̂_1_k, k being a discrete time index. Eq. <ref> describes the time evolution of the state vector from k to k+1. w^τ̂_1_k denotes the process noise vector, its covariance matrix is given by E [ w_k ·w_l^T ] = δ_k, l W, W = diag( 0, 0, 0, 0, 10^-15-1, 10^-15-1)^2. δ_k, l denotes the Kronecker delta. 
Hence, <ref> indicates that each component of w^τ̂_1_k is a white random process. The process noise covariance matrix we used in our implementation is given in <ref>. The measurement vector and the associated observation model are given by y^τ̂_1 = (PRNR^τ̂_1_12, PRNR^τ̂_1_13, ṠḂṘ^τ̂_1_12, ṠḂṘ^τ̂_1_13)^⊺, y^τ̂_1_k = [ 1 0 0 0 0 0; 0 1 0 0 0 0; 0 0 2.401 0 0 0; 0 0 0 2.400 0 0 ]· x^τ̂_1_k + v^τ̂_1_k. Eq. <ref> relates the measurement vector to the state vector. v^τ̂_1_k denotes the measurement noise vector, its covariance matrix is given by E [ v_k ·v_l^T ] = δ_k, l V, V =diag( 3·10^-9-1, 3·10^-9-1, 5.2·10^-13, 5.2·10^-13)^2. The measurement noise covariance matrix we used in our implementation is given in <ref>. The diagonal entries denote the variances of the respective measurements. We assume the measurements to be uncorrelated, so that the off-diagonal terms are zero. The KFs for SC 2 and SC 3 are defined accordingly. In this manner, we remove the ranging noise and obtain estimates for the six pseudo­ranges and their time derivatives.These pseudorange estimates are dominated by the right-handed modulation noise, which is one order of magnitude higher than the left-handed one. As pointed out in <cit.>, the right-handed modulation noise can be subtracted (see <ref>): we combine the RFI measurements to form the Δ M_i, which are measurements of the right-handed modulation noise on SC i (see <ref>). For right-handed MOSAs, the local right-handed modulation noise enters the sideband range rates and we just need to subtract the local Δ M_i (see <ref>). For left-handed MOSAs the Doppler-delayed right-handed modu­lation noise from the distant SC appears in the sideband range rates. Here we need to apply the Kalman filter estimates for the pseudoranges and their time derivatives to form the Doppler-delayed distant Δ M_i, which then can be subtracted (see <ref>). We then process the three KFs again, this time with the corrected sideband range rates. Nowe they are limited by left-handed modulation noise, so that the respective noise levels are lower. Therefore, we need to adjust the measurement noise covariance matrix for the second run of the KFs: V_ cor =diag( 3·10^-9-1, 3·10^-9-1, 7.4·10^-14, 7.4·10^-14)^2. In this way we obtain estimates for the pseudoranges and their time derivatives, which are limited by the left-handed modulation noise. §.§ PRNR offset The PRNR offset is calibrated on ground before mission start. During operation, it is constructed with the help of SC monitors and subtracted in the initial data treatment.TDIR can be used as a crosscheck for residual PRNR offsets, as it is sensitive to offsets in the delays. To obtain optimal performance we choose the second-generation TDI Michelson variables to be ultimately limited by secon­dary noises. In-band clock noise is sufficiently suppressed, since we operate on beatnotes in total frequency and make use of the in-band ranging information provided by the preceding noise reduction step. Accordingly, the offset delay model is parameterized by d_ij^τ̂_i(τ) = R̂_ij^τ̂_i(τ) - O_ij, R̂_ij^τ̂_i denote the pseudorange estimates after noise reduction, O_ij are the 6 offset parameters. As discussed in <ref>, computing TDI in total frequency units gene­rally results in a variable with residual trends. Those trends need to be removed prior to calculation of the TDIR integral to be sensitive to residual laser noise in band. This is achieved by an appropriate band-pass filter with a pass-band from 0.11. 
The TDIR integral then reads Ô_ij = _O_ij∫_0^TX̃^2(t) + Ỹ^2(t) + Z̃^2(t) dt where tilde indicates the filtered quantity. §.§ SBR ambiguity Phase anchor points, together with the pseudorange estimates after noise reduction, enable the resolution of the SBR ambiguity (see <ref>): a^sb_ij(τ) = round[ν^m_ji R̂_ij^τ̂_i(τ) - SBR_ij^τ̂_i(τ)]. SBR_ij^τ̂_i are the phase anchor points, R̂_ij^τ̂_i the pseudorange estimates after noise reduction. Thus, we obtain estimates of the SBR ambiguity integers a^sb_ij. The resolution is successful if the pseudorange estimates are more accurate than 6.25 (half the ambiguity). From the perspective of noise reduction, this is feasible (see <ref>). Having resolved the SBR ambiguity, the pseudorange estimates associated to the phase anchor points serve as initial values for the integration of the sideband range rates. The resolution of the SBR ambiguity is worthwhile: SBR constitutes a very accurate pseudo­range observable, as both its stability and accuracy are limited by the modulation noise. § RESULTS In this section, we demonstrate the performance of our implementation of the core ranging processing and the crosschecks as proposed in <ref> (central and lower part of <ref>). We did not implement the initial data treatment. Instead we assume that the common carrier, PRN, and sideband timestamping delays are compensated beforehand. We further consider offset-free PRNR and apply TDIR as a crosscheck for residual offsets. We use telemetry data simulated by LISA Instrument <cit.> and LISANode <cit.> based on orbits provided by ESA <cit.>. We simulate phase anchor points for the SBR (see <ref>). The SCET deviations from the respective proper times are modeled by δτ̂_i(τ) = δτ̂_i, 0 + y_i τ + ẏ_i/2 τ^2 + ÿ_i/3 τ^3 + ∫_τ_0^τdτ̃ y_i^ϵ(τ̃), the δτ̂_i, 0 denote the initial SCET deviations set to 1, -1.2, and 0.6 for SC 1, 2, and 3, respectively. The y_i model the PMC frequency offsets corresponding to linear clock drifts. They are set to 10^-7, -2 × 10^-7, and 0.6 × 10^-7 for SC 1, 2, and 3, respectively. ẏ_i ∼ 10^-14 -1 and ÿ_i ∼ 10^-23 -2 are constants modeling the linear and quadratic PMC frequency drifts. The y_i^ϵ denote the stochastic clock noise in fractional frequency deviations, the associated ASD is given by √(S_y^ϵ(f)) = 6.32 × 10^-14-0.5(f/)^-0.5. We simulate laser frequency noise with an ASD of √(S_Ṅ^p(f)) = 30-0.5, and ranging and modulation noise as specified in the sections <ref> and <ref>. Furthermore, we consider test-mass acceleration noise √(S_N^δ(f)) = 4.8 × 10^-15-2-0.5√(1 + (0.4/f)^2) and readout noise √(S_N^ro(f)) = A √(1 + (2/f)^4), where A=6.35 × 10^-12-0.5 for the ISI carrier and A=1.25 × 10^-11-0.5 for the ISI sideband beatnotes. For the readout noise we set a saturation frequency of f_sat=0.1, below which we whiten. The orbit determinations are simulated by LISA Ground Tracking with the noise levels specified in <ref>. §.§ Ranging processing Here we demonstrate the performance of our implementation of the core ranging processing for one day of telemetry data simulated by LISA Instrument <cit.>. The first ranging processing step covers the PRNR unwrapping (see <ref>). The upper plot shows the raw PRNR, which jumps back to 0 when crossing 400 and vice versa. These jumps are easy to track and to remove. In our implementation we remove all PRNR jumps bigger than 200. The central plot shows the unwrapped but yet ambiguous PRNR. Here you can see PRNR drifts of the order of 10 to 100-1, which are mainly due to differential USO frequency offsets. 
Inserting the PRNR ambiguity integers obtained from GOR and TDIR yields the unambiguous PRNR shown in the lower plot.In the second step, we use the Kalman filter presented in <ref> to reduce the ranging noise. Subsequently, we subtract the right-handed modulation noise applying the Δ M measurements constructed from the RFI beatnotes (see <ref>). After noise reduction, we resolve the SBR ambiguities combining the estimated pseudo­ranges with the simulated SBR phase anchor points (see <ref>). We then integrate the sideband range rates, to obtain unambiguous SBR.In <ref>, we plot the ASDs of the residual pseudo­range estimates (deviations of the estimates from the true pseudorange values in the simulation) for link 12 (upper plot) and link 21 (lower plot). Blue lines show the ASDs of the residual PRNR, which are essentially the ASDs of the white ranging noises. The residual pseudorange estimates after ranging noise reduction are plotted in orange. They are obtained by combining the PRNR with the sideband range rates. Therefore, they are limited by right-handed modulation noise (dashed black line). In green, we plot the residual pseudorange estimates after subtraction of right-handed modulation noise with the RFI beatnotes. Now the estimates are limited by left-handed modulation noise (dash-dotted black line). The residual SBR are drawn red, they are limited by left-handed modulation noise as well, but involve a smaller offset, since the SBR phase anchor points are more accurate than PRNR after ranging noise reduction (see <ref>). In the case of left-handed MOSAs (see link 12) the RFI beatnotes need to be time shifted to form the delayed Δ M measurements. We apply the time shifting method of PyTDI <cit.>, which consists in a Lagrange interpolation (we use order 5). The interpolation introduces noise in the high frequency band (see the bump at 2 in the upper plot) but this is out of band. Fig. <ref> shows the different residual pseudorange estimates as time series. The upper plot shows the 6 residu­al pseudorange estimates after ranging noise reduction, the second plot after subtraction of right-handed modulation noise. The third plot shows the SBR residuals. The subtraction of right-handed modulation noise reduces the noise floor, but it does not increase the accuracy of the pseudorange estimates. The accuracy can be increased by one order of magnitude through the resolution of the SBR ambiguities. After ambiguity resolution, SBR constitutes pseudorange estimates with sub- accuracy. §.§ Crosschecks Here we demonstrate the performance of our implementation of the crosschecks for PRNR ambiguity and PRNR offset.The PRNR ambiguities can be resolved using either GOR (see <ref>) or TDIR (see <ref>). To evaluate the performance of both methods, we simulate 1000 short (150) telemetry datasets with LISA Instrument <cit.>, and one set of ODs and MOC-TCs for each of them. We compute the GOR and TDIR pseudorange estimates for each of the 1000 datasets. Fig. <ref> shows the GOR residu­als (first row) and the TDIR residuals (second row) in as histogram plots. We see that the GOR accuracy depends on the arm, because we obtain more accurate ODs for arms oriented in line of sight direction than for those oriented cross-track. The PRNR ambiguity resolution via GOR is successful for GOR deviations smaller than 200. In the case of the links 23, 31, 13, and 32 all PRNR ambiguity resolutions via GOR are successful. For each of the links 12 and 21, 2 out of the 1000 PRNR ambiguity resolutions fail. 
The GOR estimates are passed as initial values to TDIR, which then reduces the uncertainty by almost one order of magnitude (lower plot of <ref>), such that eventually all PRNR ambiguity resolutions are successful.TDIR can also be applied to estimate the PRNR offsets. Hence, it constitutes a cross-check of the on-ground PRNR offset calibration. We simulate one year of telemetry data using LISANode <cit.>. We set the PRNR offsets to 160.3, -210.2, 137.3, -250.3, -188.8, and 105.1 for the links 12, 23, 31, 13, 32, and 21, respectively. We divide the dataset into 1 day chunks (left plots in <ref>), 2 day chunks (central plots in <ref>), and 3 day chunks (right plots in <ref>). In each partition we apply the TDIR estimator presented in <ref> to each chunk in order to estimate the PRNR offsets. This computation was parallelized and executed on the ATLAS cluster at the AEI Hannover. In the upper part of <ref> we show the offset estimation residuals for the three chunk sizes. The offset estimation accuracy increases with the chunk size in agreement with the order of magnitude estimate through <ref>. In the lower part of <ref> we plot the residual cumulative averages of the PRNR offset estimates for the different chunk sizes. Here, it can be seen that the TDIR estimator performs similarly for the different chunk sizes. With the 3 day chunk size we can estimate all PRNR offsets with an accuracy of better than 20 after 10 days. The dashed-black lines indicate 6.25 (half the SBR ambiguity). This is the required PRNR offset estimation accuracy for a successful SBR ambiguity resolution. With the 3 day chunk size all offset estimation residuals are below these 6.25 after 179 days. § CONCLUSION The reduction of laser frequency noise in TDI crucially depends on information about the pseudoranges. There are four pseudorange observables each having advantages and disadvantages. In this article, we first derived their observation equations carefully taking into account ambiguities, noise, and on-board delays, which cause offsets and timestamping delays. We then proposed a three-stage ranging sensor fusion (initial data treatment, ran­ging processing, crosschecks, compare <ref>) to combine the four pseudorange observables, such that we obtain optimal pseudorange estimates.We pointed out that the common carrier, PRN, and sideband timestamping delays (see eqs. <ref>, <ref>, and <ref>), as well as the PRNR and SBR offsets (see eqs. <ref> and <ref>) need to be calibrated on ground, so that they can be compensated in the initial data treatment. We further derived that the small optical path lengths between laser and PBS, PBS and ISI BS, and laser and ISI BS show up in the uncommon delays (see <ref>), which are to be applied in TDI. We proposed to measure these optical path lengths on ground, so that during operation they can be combined with the pseudorange estimates to form the uncommon delays.We identified the processing steps, which need to be performed continuously during operation. These are the PRNR unwrapping, and the reduction of ranging and right-handed modulation noise, we referred to them as ranging processing. We implemented the ranging processing numerically: we showed that the white ranging noise can be reduced by combining the PRNR with the sideband range rates in a KF. We split up the system and implemented one KF per SC, such that the individual SCETs served as KF time-grids. We further applied the RFI beatnotes to subtract the right-handed modulation noise. 
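A numerically delicate ingredient used repeatedly in this section is the application of a delay that is not an integer number of samples (for the Doppler-delayed Δ M measurements and inside PyTDI). The sketch below illustrates the idea of the order-5 Lagrange time shifting mentioned above; it is not the PyTDI implementation, and the function name, boundary handling and test signal are our own.

```python
import numpy as np

def lagrange_shift(x, delay_samples, order=5):
    """Evaluate the series x at the fractional indices n - delay_samples
    using Lagrange interpolation over order+1 neighbouring samples."""
    n = np.arange(x.size)
    p = n - delay_samples                        # fractional indices to evaluate at
    k = np.floor(p).astype(int)                  # integer part
    mu = p - k                                   # fractional part in [0, 1)
    half = order // 2
    nodes = np.arange(-half, order - half + 1)   # local stencil, e.g. -2..3 for order 5
    y = np.zeros(x.size)
    for j in nodes:
        lj = np.ones(x.size)                     # Lagrange basis polynomial l_j(mu)
        for m in nodes:
            if m != j:
                lj *= (mu - m) / (j - m)
        y += lj * x[np.clip(k + j, 0, x.size - 1)]   # crude edge handling
    return y

# Example: shift a slow sinusoid by 8.3 samples and compare with the exact answer.
fs, f = 4.0, 1e-2
t = np.arange(0, 1000.0, 1.0 / fs)
x = np.sin(2 * np.pi * f * t)
shifted = lagrange_shift(x, 8.3)
exact = np.sin(2 * np.pi * f * (t - 8.3 / fs))
print(np.max(np.abs(shifted - exact)[100:-100]))   # small away from the edges
```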
The pseudorange estimates we obtained this way were at sub- accuracy. We showed that in combination with phase anchor points they allow for the resolution of the SBR ambiguity resulting in pseudorange estimates at sub- accuracy.We implemented crosschecks for the PRNR ambiguities and offsets. We showed that both GOR and TDIR allow for the resolution of the PRNR ambiguity. We applied TDIR as a crosscheck for the PRNR offset cali­bration and demonstrated its performance for one year of telemetry data: after about 180 days all PRNR offset estimates reached an accuracy of better than 6.25 allowing for the resolution of the SBR ambiguity.In reality, the PRNR offsets are slowly time-varying. The investigation of the PRNR offset estimation via TDIR could be extended for linearly time varying PRNR offsets. The delay model for the TDIR estimator would then become (compare <ref>): d_ij^τ̂_i(τ) = R̂_ij^τ̂_i(τ) - (O^0_ij + O^1_ij·τ) , The TDIR estimator would now have to fit the 12 parame­ters O^0_ij and O^1_ij. Apart from that, tone-assisted TDIR <cit.> could be applied for the PRNR offset estimation in order to reach faster convergence. As a further follow-up investigation, time-varying on-board delays and the associated SC monitors could be included into the simulation, which would enable an inspection of the feasibility of the initial data treatment as proposed in <ref>. Furthermore, the ranging sensor fusion could be included into the different INReP topologies. Apart from that, the algorithms could be applied to real data as, e.g., produced by the hexagon experiment <cit.>, <cit.>. § ACKNOWLEDGEMENTS J. N. R. acknowledges the funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Furthermore, he acknowledges the support by the IMPRS on Gravitational Wave-Astronomy at the Max Planck Institute for Gravitational Physics in Hannover, Germany. This work is also supported by the Max-Planck-Society within the LEGACY (“Low-Frequency Gravitational-Wave Astronomy in Space”) collaboration (M.IF.A.QOP18098). O. H. and A. H. acknowledge support from the Programme National GRAM of CNRS/INSU with INP and IN2P3 co-funded by CNES and from the Centre National d'Études Spatiales (CNES). The authors thank Miles Clark, Pascal Grafe, Waldemar Martens, and Peter Wolf for useful discussions. The study on PRNR offset estimation via TDIR was performed on the ATLAS cluster at AEI Hannover. The authors thank Carsten Aulbert and Henning Fehrmann for their support. § PSEUDORANGES IN TCB The pseudorange can be expressed in TCB by writing the SCETs of receiving and emitting SC as functions of TCB evaluated at the events of reception and emission, respectively: R_ij^t(t_rec)= τ̂_i^t(t_rec) - τ̂_j^t(t_emit), τ̂_i^t denotes the SCET of SC i expressed as a function of TCB. The TCB of emission can be expressed as the difference between the TCB of reception and the light travel time from SC j to SC i, denoted by d_ij^t: R_ij^t(t_rec)=τ̂_i^t(t_rec) - τ̂_j^t(t_rec - d_ij^t(t_rec)), in the following we drop the subscript, hence t refers to the TCB of reception. The SCET can be expressed in terms of the SCET deviation from TCB τ̂_i^t(t) = t + δτ̂_i^t(t), which allows us to write <ref> as R_ij^t(t) = δτ̂_i^t(t) + d_ij^t(t) + δτ̂_j^t(t - d_ij^t(t). 
Expanding the emitting SC SCET deviation from TCB around the reception TCB yields: R_ij^t(t) = δτ̂_ij^t(t) + (1 + δτ̇̂̇_j^t(t)) · d_ij^t(t), δτ̂_ij^t(t): = δτ̂_i^t(t) - δτ̂_j^t(t). Hence, in a global time frame like TCB, the pseudorange can be expressed in terms of the light travel time d_ij^t and the differential SCET offset δτ̂_ij^t. § SUBTRACTION OF RIGHT-HANDED MODULATION NOISE Following the notation in <cit.>, we express the RFI beatnotes in frequency: RFI_ij^τ̂_i(τ) = ν^τ̂_i_ik(τ) - ν^τ̂_i_ij(τ), RFI_sb, ij^τ̂_i(τ) = ν^τ̂_i_sb, ik(τ)- ν^τ̂_i_sb, ij(τ), ν^τ̂_i_sb, ij(τ)= ν^τ̂_i_ij(τ)+ν_ij^m·(1+M_ij^τ̂_i). In this article we do not consider on-board delays in the RFI beatnotes. We combine the RFI carrier and sideband beatnotes to form measurements of the right-handed modulation noise: Δ M_i^τ̂_i = RFI_ij^τ̂_i - RFI_sb, ij^τ̂_i + 1/2 - RFI_ik^τ̂_i - RFI_sb, ik^τ̂_i - 1/2, = ν^m_ij· M^τ̂_i_ij - ν^m_ik· M^τ̂_i_ik, i, j, and k being a cyclic permutation of 1, 2, and 3. We can now subtract the Δ M_i^τ̂_i measurements from the sideband range rates (<ref>). Thus, we reduce the right-handed modulation noise, so that we are limited by the one order of magnitude lower left-handed modulation noise: ṠḂṘ_cor, ij^τ̂_i = ṠḂṘ_ij^τ̂_i - Ḋ^τ̂_̂î_ij·Δ M_j^τ̂_j(τ), = ν^m_ji·Ṙ_ij^τ̂_i +ν^m_ij( M^τ̂_i_ij-Ḋ^τ̂_̂î_ij· M^τ̂_j_jk(τ )), ṠḂṘ_cor, ik^τ̂_i = ṠḂṘ_ik^τ̂_i(τ) + Δ M_i^τ̂_i(τ), = ν^m_ki·Ṙ_ik^τ̂_i +ν^m_ki( M^τ̂_i_ij+Ḋ^τ̂_̂î_ik M^τ̂_k_ki(τ)), i, j, and k being a cyclic permutation of 1, 2, and 3. § SOLAR WIND DISPERSION The average solar wind particle density at the LISA orbit is about 10-3. Hence, at the scales of optical wavelengths the solar wind plasma can be treated as a free electron gas with the plasma frequency <cit.> ν_p^2 = n_e e^2/4π^2 ϵ_0 m_e≈ 8 × 10^8 -2, n_e denotes the electron density, e the elementary charge, m_e the electron mass, and ϵ_0 the the vacuum permittivi­ty. Contributions from protons and ions can be neglected as the plasma frequency is inversely proportional to the mass. We describe the refractive index of the solar wind plasma by the Appleton equation. Neglecting collisions and magnetic fields it denotes n(ν) = √(1 - (ν_p/ν)^2). In a dispersive medium we need to distinguish between phase and group velocity. The phase velocity is given by v_p(ν) = c/n(ν) = c/√(1 - (ν_p/ν)^2)≈ c ·(1 + 1/2ν_p^2/ν^2), where we applied the expansion for ν≫ν_p, as we consider optical frequencies. The product of group and phase velocity yields c^2. Consequently, the group velocity is v_g(ν) = c · n(ν) = c ·√(1 - (ν_p/ν)^2)≈ c ·(1 - 1/2ν_p^2/ν^2). Group and phase delay can now be written as Δτ_g(ν) = L (1/c ·√(1 - (ν_p/ν)^2) - 1/c) ≈L ν_p^2/2 c·1/ν^2, Δτ_p(ν) = L (√(1 - (ν_p/ν)^2)/c - 1/c) ≈ -L ν_p^2/2 c·1/ν^2, where L = 2.5 denotes the LISA armlength. PRN and sideband signals propagate at the group velocity, hence they are delayed by the group delay: Δτ_g^prn = Δτ_g(281±1) ≈12.7, Δτ_g^sb = Δτ_g(281±2.4) ≈12.7. The phase delay is negative, because the phase velocity is bigger than c. Therefore, the laser phase is advanced with respect to a wave propagating in vacuum. For the LISA carrier this phase advancement corresponds to Δτ_p(281) ≈ -12.7.
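The numbers quoted in this appendix are easy to reproduce. The short sketch below evaluates the plasma frequency and the group delay for an electron density of 10 cm^-3, an arm length of 2.5 Gm and a carrier frequency of 281 THz, as used above; it returns roughly 8 × 10^8 Hz^2 and a group delay corresponding to about 12.7 pm of optical path, consistent with the values in the text (the unit labels are ours, since they are partly lost in this copy).

```python
import scipy.constants as const

n_e = 10.0e6      # electron density: 10 cm^-3 expressed in m^-3
L = 2.5e9         # LISA arm length in m
nu = 281e12       # optical carrier frequency in Hz

# plasma frequency squared (free-electron gas, no collisions or magnetic field)
nu_p2 = n_e * const.e**2 / (4 * const.pi**2 * const.epsilon_0 * const.m_e)
print(f"nu_p^2        = {nu_p2:.2e} Hz^2")                              # ~8e8 Hz^2

# group delay of the PRN/sideband signals and phase advance of the carrier,
# in the nu >> nu_p expansion used above, expressed as equivalent optical path
dtau_g = L * nu_p2 / (2 * const.c * nu**2)
print(f"group delay   = {dtau_g * const.c * 1e12:+.1f} pm of path")     # ~ +12.7 pm
print(f"phase advance = {-dtau_g * const.c * 1e12:+.1f} pm of path")    # ~ -12.7 pm
```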
http://arxiv.org/abs/2307.04740v1
20230710175219
On the image of graph distance matrices
[ "William Dudarov", "Noah Feinberg", "Raymond Guo", "Ansel Goh", "Andrea Ottolini", "Alicia Stepin", "Raghavenda Tripathi", "Joia Zhang" ]
math.CO
[ "math.CO", "math.PR", "05C12, 05C50" ]
On the Image of Graph Distance Matrices. MSC 2020: 05C12, 05C50. William Dudarov, Noah Feinberg, Raymond Guo, Ansel Goh, Andrea Ottolini, Alicia Stepin, Raghavendra Tripathi, Joia Zhang. Department of Mathematics, University of Washington, Seattle, WA 98195, USA. [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] Let G=(V,E) be a finite, simple, connected, combinatorial graph on n vertices and let D ∈ℝ^n × n be its graph distance matrix D_ij = d(v_i, v_j). Steinerberger (J. Graph Theory, 2023) empirically observed that the linear system of equations Dx = 1, where 1 = (1,1,…, 1)^T, very frequently has a solution (even in cases where D is not invertible). The smallest nontrivial examples of graphs where the linear system is not solvable are two graphs on 7 vertices. We prove that, in fact, counterexamples exist for all n≥ 7. The construction is somewhat delicate and further suggests that such examples are perhaps rare. We also prove that for Erdős-Rényi random graphs the graph distance matrix D is invertible with high probability. We conclude with some structural results on the Perron-Frobenius eigenvector for a distance matrix. § INTRODUCTION Let G=(V,E) be a finite, simple, connected, combinatorial graph on |V|=n vertices. A matrix naturally associated with G is the graph distance matrix D ∈ℝ^n × n such that D_ij=d(v_i, v_j) is the distance between the vertices v_i and v_j. The matrix is symmetric, integer-valued and has zeros on the diagonal. The graph distance matrix has been extensively studied; we refer to the survey of Aouchiche-Hansen <cit.>. The problem of characterizing graph distance matrices was studied in <cit.>. A result of Graham-Pollack <cit.> ensures that D is invertible when the graph is a tree. Invertibility of the graph distance matrix continues to receive attention and various extensions of Graham-Pollack have been obtained in recent times <cit.>. However, one can easily construct graphs whose distance matrices are non-invertible. Thus, in general the graph distance matrix may exhibit complex behaviour. Our motivation comes from an observation made by Steinerberger <cit.>: for a graph distance matrix D, the linear system of equations Dx = 1, where 1 is a column vector of all 1 entries, tends to frequently have a solution – even when D is not invertible. An illustrative piece of statistics is as follows: among the 9969 graphs with #V ≤ 100 implemented in Mathematica, 3877 satisfy rank(D) < n, while only 7 satisfy 1∉image(D). This is certainly curious. It could be interpreted in a couple of different ways. A first natural guess would be that the graphs implemented in Mathematica are presumably more interesting than `typical' graphs and are endowed with additional symmetries. For instance, it is clear that if D is the distance matrix of a vertex-transitive graph (on more than one vertex) then Dx=1 has a solution. Another guess would be that this is implicitly some type of statement about the equilibrium measure on finite metric spaces. For instance, it is known <cit.> that the eigenvector corresponding to the largest eigenvalue of D is positive (this follows from the Perron-Frobenius theorem) and very nearly constant in the sense of all the entries having a uniform lower bound. The sequence A354465 <cit.> in the OEIS lists the number of graphs on n vertices with 1∉image(D) as 1, 0, 0, 0, 0, 0, 2, 14, 398, 23923, … where the first entry corresponds to the graph on a single vertex for which D=(0).
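Statistics of this kind are straightforward to reproduce for any list of graphs: 1 lies in the image of the symmetric matrix D exactly when the least-squares solution of Dx = 1 has vanishing residual. The sketch below (using networkx; the helper name and tolerance are ours) performs this membership test.

```python
import numpy as np
import networkx as nx

def one_in_image(G, tol=1e-8):
    """True if the all-ones vector lies in the image of the distance matrix of G."""
    D = nx.floyd_warshall_numpy(G)          # graph distance matrix
    b = np.ones(D.shape[0])
    x, *_ = np.linalg.lstsq(D, b, rcond=None)
    return np.linalg.norm(D @ x - b) < tol

# The 4-cycle has a singular distance matrix, yet Dx = 1 is solvable (x = 1/4 * ones):
C4 = nx.cycle_graph(4)
print(np.linalg.matrix_rank(nx.floyd_warshall_numpy(C4)), one_in_image(C4))   # 3 True
# For a tree, D is invertible by Graham-Pollack, so the test is trivially True:
print(one_in_image(nx.path_graph(5)))                                         # True
```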
We see that the sequence is small when compared to the number of graphs but it is hard to predict a trend based on such little information. The first nontrivial counterexamples are given by two graphs on n=7 vertices. Lastly, it could also simply be a `small n' effect where the small examples behave in a way that is perhaps not entirely representative of the asymptotic behavior. It is not inconceivable to imagine that the phenomenon disappears completely once n is sufficiently large. We believe that understanding this is an interesting problem. §.§ Acknowledgements This project was carried out under the umbrella of the Washington Experimental Mathematics Lab (WXML). The authors are grateful for useful conversations with Stefan Steinerberger. A.O. was supported by an AMS-Simons travel grant. § MAIN RESULTS §.§ A plethora of examples Notice that the sequence A354465 <cit.> in the OEIS lists suggests that for n≥ 7 one can always find a graph on n vertices for which Dx=1 does not have a solution. Here, we recall that D represents the distance matrix of the graph, and 1 represents a vector with all of its |V| entries that are equal to one (we often omit the explicit dependence on |V|, when it is understood from the context). The main result of this section is the following. For each n≥ 7, there exists a graph G on n vertices such that Dx=1 does not have a solution. Since we know that no counterexample exists for n<7, the result is sharp. Our approach to find many examples of graphs for which Dx=1 has no solutions is to prove some structural results (of independent interests) that show how to obtain bigger examples out of smaller ones. For a careful statement of such structural results, we will need some definitions. We start with the notion of graph join. The graph join G+H of two graphs G and H is a graph on the vertex set V(G) ∪ V(H) with edges connecting every vertex in G with every vertex in H along with the edges of graph G and H. Our structural result on the distance matrix of the graph join of two graphs is better phrased with the following definition. Let G be a graph with adjacency matrix A_G. Then, define D_G = 2J-2I-A_G. Observe that for a graph of diameter 2, D_G is the distance matrix, justifying this choice of notation. We now state the main ingredient in the proof of Theorem  <ref>. Let G and H be a graphs and suppose that D_G x=1 has no solution. Then, the distance matrix D of the graph join G+H has no solution to Dx=1 if and only if there exists a solution to D_H x = 1 such that ⟨ x, 1⟩ = 0. An alternative approach to the proof of Theorem <ref>, that unfortunately does not allow for the same sharp conclusion (though it can be used to generate examples for infinitely many values of n) relies instead of the notion of Cartesian product. Given two graphs G=(V_1, E_1) and H=(V_2, E_2) their Cartesian product G × H is a graph on the vertex set V=V_1× V_2 such that there is an edge between vertices (v_1,v_2) and (v_1',v_2') if and only if either v_1=v_1' and v_2 is adjacent to v_2' in H or v_2=v_2' and v_1 is adjacent to v_1' in G. If G and H are graphs such that 1 is not in the image of their distance matrices, then the Cartesian product graph G × H also has the property that 1 is not in the image of its distance matrix. We note that examples for which Dx=1 are not so easy to construct. In addition to the numerical evidence we provided in the introduction, we are able to give a rigorous, albeit partial, explanation of why this is the case (see Lemma <ref>). 
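The objects appearing in Theorem <ref> are easy to experiment with numerically. The sketch below builds the join of two graphs by hand, forms the matrix D_G = 2J - 2I - A_G of Definition <ref>, and checks solvability of the relevant linear systems; the helper names are ours and the two example graphs are arbitrary.

```python
import numpy as np
import networkx as nx

def D_matrix(G):
    """The matrix D_G = 2J - 2I - A_G (the distance matrix of G when diam(G) <= 2)."""
    A = nx.to_numpy_array(G)
    n = A.shape[0]
    return 2.0 * np.ones((n, n)) - 2.0 * np.eye(n) - A

def graph_join(G, H):
    """Disjoint union of G and H together with every edge between the two vertex sets."""
    nG, nH = G.number_of_nodes(), H.number_of_nodes()
    J = nx.disjoint_union(G, H)              # vertices of H are relabelled to nG..nG+nH-1
    J.add_edges_from((u, nG + v) for u in range(nG) for v in range(nH))
    return J

def solvable(M, b, tol=1e-8):
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    return np.linalg.norm(M @ x - b) < tol

G, H = nx.path_graph(3), nx.cycle_graph(5)
print(solvable(D_matrix(G), np.ones(3)))      # does D_G x = 1 have a solution?
print(solvable(D_matrix(H), np.ones(5)))      # does D_H x = 1 have a solution?
D_join = nx.floyd_warshall_numpy(graph_join(G, H))
print(solvable(D_join, np.ones(8)))           # and for the distance matrix of G + H?
```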
§.§ Erdős-Rényi random graphs We conclude with a result about Erdős-Rényi random graphs. We first recall their definition. An Erdos-Renyi graph with parameters (n,p) is a random graph on the labeled vertex set V = {v_1,v_2,...,v_n} for which there is an edge between any pair (v_i,v_j) of vertices with independent probability p. The following theorem shows that their distance matrices are invertible with high probability. As a consequence, Dx=1 has a solution for Erdős-Rényi graphs with high probability, as we summarize in the following Theorem. Let 0 < p < 1 and let D_n,p be the (random) graph distance matrix associated of a random graph in G(n,p). Then, as n →∞, ℙ( (D_n,p) = 0) → 0. It is a natural question to ask how quickly this convergence to 0 happens. Our approach relies heavily on recent results <cit.> about the invertibility of a much larger class of random matrices with discrete entries, providing some explicit bounds that are likely to be loose. We propose a conjecture, which is reminiscent of work on the probability that a matrix with random ± 1 Rademacher entries is singular, we refer to work of Komlós <cit.> and the recent solution by Tikhomirov <cit.>. One might be inclined to believe that the most likely way that D_n,p can fail to be invertible is if two rows happen to be identical. This would happen if there are two vertices v, w that are not connected by an edge which, for every other vertex u ∈ V, are both either connected to u or not connected to u. For a graph G ∈ G(n,p) each vertex is connected to roughly ∼ np vertices and not connected to ∼ (1-p)n vertices. This motivates the following Question. Is it true that lim_n →∞log( ℙ( (D_n,p) = 0) )/n = log( p^p (1-p)^1-p) ? The right-hand side log( p^p (1-p)^1-p) = p log(p) + (1-p) log(1-p) is merely (up to constants) the entropy of a Bernoulli random variable. §.§ Perron-Frobenius eigenvectors are nearly constant Let (X, d) be a metric space and let x_1, …, x_n be n distinct points in X. The notion of distance matrix naturally extends to this case. That is, we define D∈ℝ^n× n by setting D_ij=d(x_i, x_j). This notion clearly agrees with the graph distance matrix if X is a graph equipped with the usual shortest path metric. Let λ_D be the Perron-Frobenious eigenvalue of D and let v be the corresponding eigenvector with non-negative entries. In the following we will always assume that v is normalized to have L^2 norm 1 unless otherwise stated. In <cit.>, it was proved that v 1/√(n)≥1/√(2) ;. It is also shown in <cit.> that the above inequality is sharp in general for the distance matrix in arbitrary metric space. However, it was observed that for graphs in the Mathematica database, the inner product tends to be very close to 1, and it was not known if the lower bound of 1/√(2) is sharp for graphs. We show that this bound is sharp for graph distance matrices as well. The lower bound is achieved asymptotically by the Comet graph that we define below. We define a comet graph, C_m_1^m_2, to be the disjoint union of a complete graph on m_1 vertices with the path graph on m_2 vertices and adding an edge between one end of the path graph and any vertex of the complete graph. Let D_m be the graph distance matrix of the Comet graph C_m^2^m. Let v_m be the top eigenvector (normalized to have unit L^2 norm) of the distance matrix D_m. Then, lim_m→∞v_m 1/√(n)=1/√(2) , where n=m^2+m is the number of vertices in C_m^2^m. While Theorem <ref> shows that the lower bound 1/√(2) is sharp, it does not reveal the complete truth. 
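The comet graphs of Theorem <ref> can also be examined numerically. The sketch below builds C_{m^2}^m, computes the Perron eigenvector of its distance matrix and prints ⟨v, 1⟩/√n, which stays above the limiting value 1/√2 ≈ 0.707 and approaches it only slowly as m grows (the helper name is ours, and m is kept small so the dense eigensolver stays fast).

```python
import numpy as np
import networkx as nx

def comet(m1, m2):
    """Complete graph on m1 vertices with a path of m2 further vertices attached by one edge."""
    G = nx.complete_graph(m1)
    nx.add_path(G, range(m1 - 1, m1 + m2))   # the path hangs off clique vertex m1 - 1
    return G

for m in (2, 4, 8, 16):
    G = comet(m * m, m)                      # the comet C_{m^2}^m from Theorem <ref>
    D = nx.floyd_warshall_numpy(G)
    n = D.shape[0]
    eigvals, eigvecs = np.linalg.eigh(D)
    v = np.abs(eigvecs[:, -1])               # Perron eigenvector, unit l2 norm
    print(m, v.sum() / np.sqrt(n))
print(1 / np.sqrt(2))                        # the limiting value
```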
It is worth emphasizing that the lower bound is achieved only in the limit as the size of the graph goes to infinity. The following theorem shows that if a graph has diameter 2 then, ⟨ v, 1⟩/√(n) is significantly larger. Let G be a graph with diameter 2 and let D be the distance matrix of G. Let v be the top-eigenvector of D normalized to have L^2 norm 1. Then, v 1/√(n)≥4/3·1/√(2) . In the light of above theorem, it is reasonable to expect a more general result of the following form that we leave open. Problem. Let G be a graph on n vertices with distance matrix D. Let v be the top eigenvector of D with unit L^2 norm. If G has diameter d then, ⟨ v, 1⟩/n≥1/√(2)(1+f(d)) , for some f such that f(d)→ 0 as d→∞. § PROOF OF THEOREM <REF> This section is dedicated to the proof of the main Theorem <ref>. Since the main ingredient is the structural result about the distance matrix of the graph join (Theorem <ref>), we begin the section with the proof of that. Observe that the distance matrix of G+H is given by D = [ D_G J; J D_H ]. Recall that the orthogonal complement of the kernel for a symmetric matrix is the image of the matrix because the kernel of a matrix is orthogonal to the row space, which in this case, is the column space. In particular, this applies to D_G and D_H. To prove the forwards direction, we will show the contrapositive. We have two cases, namely the case where D_H x = 1 has no solution and the case where there is a solution to D_H x = 1 where ⟨ x, 1⟩≠ 0 First, assume that D_H x = 1 has no solution. Then, we have that D_G ⊥̸1 and D_H ⊥̸1 because 1∉D_G and 1∉D_H. So, there exists x_1 ∈D_G and x_2 ∈D_H such that ⟨ x_1, 1⟩ = ⟨ x_2, 1⟩ = 1. Observe that the vector x=(x_1, x_2)^T satisfies Dx=1 so we are done with this case. Now, suppose that there exists x such that D_H x = 1 and ⟨ x, 1⟩≠ 0. Then, let x_2 = x/⟨ x, 1⟩. Once again, D_G ⊥̸1 so there exists x_1 ∈D_G such that ⟨ x_1, 1⟩ = 1-1/⟨ x, 1⟩. Then, the vector x=(x_1, x_2)^T satisfies Dx=1. Thus, we are done with this direction. Now, for the reverse direction, suppose that there exists y such that D_H y= 1 and ⟨ y, 1⟩ = 0. Assume for a contradiction that there exists a solution to Dx=1. Then, we have x_1, x_2 such that D_G x_1 + J x_2 = 1 and J x_1 + D_H x_2= 1. First, suppose that ⟨ x_1, 1⟩ = 1. Then, we have D_H x_2=0 so x_2 ∈D_H. Note that 1∈D_H so D_H ⊥1. Thus, ⟨ x_2, 1⟩ = 0, implying that Jx_2=0. However, this implies that D_Gx_1=1, which is a contradiction. Now, suppose that ⟨ x_1, 1⟩≠ 1. Then, D_H x_2 = c1 for some c≠ 0. So, x_2= y/c + z for some z ∈D_H. Noting that D_H ⊥1, we have ⟨ x_2, 1⟩ = ⟨ y, 1⟩/c = 0. So, Jx_2 = 0 implying that D_Gx_1=1, which is a contradiction. Now, we will construct a family of graphs {H_n}_n=3^∞ such that each H_n has 2n vertices and there exists x satisfying D_H_nx = 1 with ⟨ x, 1⟩ = 1. First, we will define {H_n}_i=3^∞. For each n ≥ 3, define H_n=C_n^c + K_n, where + is the graph join and C_n^c is the complement of the cycle graph on n vertices. For each n≥ 3, there exists x satisfying D_H_nx = 1 with ⟨ x, 1⟩ = 0. To start, observe that D_H_n is of the form [ B J_n; J_n J_n-I_n ] where B is defined by B_i,j= 0 i=j 2 i=j ± 1 n 1 otherwise. The vector x=(1_n,-1_n)^T satisfies D_H_nx=1 with ⟨ x,1 ⟩ = 0 so we are done. Observe that each H_i has an even number of vertices. We will now show construct a family of graphs {H_n'}_n=3^∞ such that each H_n' has 2n+1 vertices. 
For each n ≥ 3, define H_n' to be the graph formed by attaching one vertex to every vertex of H_n except for one of the vertices of the C_n^c component of H_n. For each n≥ 3, there exists x satisfying D_H_n'x = 1 with ⟨ x, 1⟩ = 0. To start, observe that we can write D_H_n' as [ D_H_n; y ] where y = (2,1, …, 1,0). Then, the vector x=(1_n,-1_n,0)^T satisfies D_H_n'x=1 with ⟨ x,1 ⟩ = 0 so we are done. Now, for sake of notation, we will recall the definition of the cone of a graph. Given a graph G, the graph (G) is defined as the graph join of G with the trivial graph. Take G=(H_(n-1)/2) if n is odd, and G=(H^'_n/2-1) if n is even. The proof is immediate from Theorem <ref>, Lemma <ref> and Lemma <ref>. We now move to the proof of Theorem <ref>, that allows for an alternative way of constructing graphs for which Dx=1 does not have a solution. To this aim, let G and H be two graphs on n and m vertices, respectively. Let A∈ℝ^n× n and B∈ℝ^m× m be the distance matrices of G and H respectively. It is well-known (see for instance <cit.>, <cit.>) that the distance matrix of the Cartesian product G× H is given by J_m ⊗ A+ B ⊗ J_n ∈ℝ^nm× nm where ⊗ is the Kronecker product and J_ℓ denotes ℓ×ℓ matrix with all 1 entries. Theorem <ref> is an immediate consequence of the following Lemma <ref>. Suppose that A is a n× n matrix and B is an m × m matrix such that the linear systems Ay= 1_n and Bz=1_m have no solution. Then, (J_m ⊗ A+ B ⊗ J_n)x=1_nm has no solution. Assume for the sake of contradiction that there exists x∈ℝ^nm× nm with (J_m ⊗ A+ B ⊗ J_n)x=1_nm. Then, we have (J_m ⊗ A)x = 1_nm - (B ⊗ J_n)x = (c_1, …, c_m)^T , where each c_i∈ℝ^1× n is a vector with constant entries. Since Bz=1_m has no solutions, there must be some 1≤ j≤ m for which c_j=α1_n, where α≠ 0. ( [ A ⋯ A; ⋮ ⋱ ⋮; A ⋯ A ] + [ b_1,1J_n ⋯ b_1,mJ_n; ⋮ ⋱ ⋮; b_m,1J_n ⋯ b_m,mJ_n ])x =1 [ A ⋯ A; ⋮ ⋱ ⋮; A ⋯ A ] x = 1- [ b_1,1J_n ⋯ b_1,mJ_n; ⋮ ⋱ ⋮; b_m,1J_n ⋯ b_m,mJ_n ]x. Writing x as the block vector (x_1, ..., x_m)^T where each x_i∈ℝ^1× n, we note that A(x_1+…+x_m) = c_i, ∀ 1≤ i≤ m . In particular the above equation holds for i=j. Thus, we obtain Ay=1_n for y= (x_1+…+x_m)/α which contradicts our assumption. [ A ⋯ A; ⋮ ⋱ ⋮; A ⋯ A ] x = [ c_1; ⋮; c_n ]. [ A (x_1 + ⋯ + x_m); ⋮; A (x_1 + ⋯ + x_m) ] = [ c_1; ⋮; c_n ]. As we pointed out in Section 2, while we have established that there are infinitely many graphs G such that Dx=1 does not have a solution, finding such graphs can be hard. To illustrate this, we conclude this section with a structural result about family of graphs for which Dx=1 does have a solution. Let G=(V, E) be a connected graph. Suppose there are two vertices v,w∈ V such that the following conditions hold. * v is not connected to w * v∼ x for every x∈ V∖{w} * w∼ x for every x∈ V∖{v}. If D is the graph distance matrix of G then Dx=1 has a solution. Furthermore, if there are two or more distinct pairs of vertices satisfying 1-3 then D is non-invertible. Observe that we can write the distance of G such that the first two columns of D are (0,2, 1, …, 1)^T and (2,0, 1, …, 1)^T. Therefore x=(1/2,1/2, 0, ..., 0)^T satisfies Dx=1. If there are two pair of vertices, say w.l.o.g v_1, v_2 and v_3, v_4 satisfying conditions 1-3 then the first four columns of D look like [ 0 2 1 1; 2 0 1 1; 1 1 0 2; 1 1 2 0; 1 1 1 1; ⋮ ⋮ ⋮ ⋮; 1 1 1 1 ]. Labeling the columns c_1,…, c_4, we have c_1+c_2-c_3=c_4. D must be singular. 
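The construction behind Theorem <ref> can be verified directly for small n. The sketch below assembles cone(H_{(n-1)/2}) for odd n and cone(H'_{n/2-1}) for even n, following the proof above, and checks numerically whether 1 lies in the image of the resulting distance matrix (helper names and the tolerance are ours; by the theorem the answer should be False in every case).

```python
import numpy as np
import networkx as nx

def join(G, H):
    """Graph join: disjoint union of G and H plus all edges between the two vertex sets."""
    nG, nH = G.number_of_nodes(), H.number_of_nodes()
    J = nx.disjoint_union(G, H)
    J.add_edges_from((u, nG + v) for u in range(nG) for v in range(nH))
    return J

def H_graph(n):
    """H_n = C_n^c + K_n; vertices 0..n-1 form the complement of the cycle."""
    return join(nx.complement(nx.cycle_graph(n)), nx.complete_graph(n))

def H_prime(n):
    """H_n': H_n with one extra vertex attached to every vertex except vertex 0 of C_n^c."""
    G = H_graph(n)
    w = G.number_of_nodes()
    G.add_edges_from((w, u) for u in range(1, w))
    return G

def cone(G):
    return join(nx.empty_graph(1), G)

def one_in_image(G, tol=1e-8):
    D = nx.floyd_warshall_numpy(G)
    b = np.ones(D.shape[0])
    x, *_ = np.linalg.lstsq(D, b, rcond=None)
    return np.linalg.norm(D @ x - b) < tol

for n in range(7, 16):
    G = cone(H_graph((n - 1) // 2)) if n % 2 else cone(H_prime(n // 2 - 1))
    print(n, G.number_of_nodes(), one_in_image(G))   # expected: n, n, False
```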
§ PROOF OF THEOREM <REF> We start with the following well-known result (see, e.g., <cit.>) about the diameter of an Erdős-Rényi graph. Let p∈ (0, 1). Let P_p,n be the probability that a random Erdős-Rényi graph G(n, p) has diameter at least 3. Then, lim_n→∞P_p,n = 0. Let I be the identity matrix, J be the all-ones matrix, and A be the graph's adjacency matrix. Owing to the Lemma (<ref>), we can write, with high probability, the distance matrix as D = 2J-A-2I. We will now state the following theorem from <cit.>, which describes the smallest singular value σ_n of a matrix M_n=F_n+X_n where F_n is a fixed matrix and X_n is a random symmetric matrix under certain conditions. Assume that ξ has zero mean, unit variance, and there exist positive constants c_1<c_2 and c_3 such that ℙ(c_1 ≤|ξ-ξ'|≤ c_2)≥ c_3, where ξ' is an independent copy of ξ Assume that the upper diagonal entries of x_ij are i.i.d copies of a random variable ξ satisfying <ref>. Assume also that the entries f_ij of the symmetric matrix F_n satisfy | f_ij|≤ n^γ for some γ > 0. Then, for any B>0, there exists A>0 such that ℙ(σ_n(M_n)≤ n^-A)≤ n^-B. Combining all these results, we can prove the main result of the section. Owing to Lemma <ref>, we can assume that with high probability the distance matrix has the form D=2J-2A-2I. Note that the upper diagonal entries of A are i.i.d copies of a random variable satisfying Condition <ref> with c_1=c_3=1 and c_2=1. Furthermore, 2(J-I) is symmetric and its entries are bounded. Therefore, the result follows from Theorem <ref>. § PROOF OF THEOREM <REF> Let D_m be the graph distance matrix of C_m^2^m. We start by observing that D_m= [ J_m^2-I_m^2 B_m; (B_m)^⊤ A_m ] , where A_m as a matrix m× m matrix such that (A_m)_ij=|i-j| and B_m is m× m matrix defined by B_m= [ 2 3 ⋯ m+1; ⋮ ⋮ ⋮ ⋮; 2 3 ⋯ m+1; 1 2 ⋯ m; ] Our first observation is that the first eigenvector of D_m is constant for the first m^2-1 entries (considering the symmetry of the graph, this is not surprising). Let λ_m denote the largest eigenvalue of D_m and let v be the corresponding eigenvector. Then, for all i,j≤ m^2-1, we have v_i=v_j. Let r_i, r_j be i-th and j-th rows of D respectively. We first note that r_i-r_j=e_i-e_j for i, j≤ m^2-1. Now observe that λ_m v_j-λ_m v_i =r_jv-r_iv =e_i-e_jv=v_i-v_j . The conclusion follows since λ_m≥ 0. We start with an estimate for λ_m that will later allow us to bound entries of v. Let λ_m be the largest eigenvalue of D_m then λ_m = (1+o(1)) ·m^5/2/√(3) . Write D = D_m and let λ_m be as above. Let A be the m^2 + m by m^2 + m matrix defined by A_i,j= i - m^2 if i > m^2, j ≤ m^2 j - m^2 if j > m^2, i ≤ m^2 0 otherwise . Let B be the m^2+m by m^2+m matrix defined by B_i,j= 1 if i,j ≤ m^2 0 otherwise . Let C be the m^2+m by m^2+m matrix defined by C_i,j= m+1 if i,j > m^2 0 otherwise . Note that A ≤ D ≤ A + B + C where the inequalities refer to entrywise inequalities. This means that for all x ∈ℝ^m^2 + m with nonnegative entries, x^TAx ≤ x^TDx ≤ x^T(A+B+C)x Let λ_A,λ_B,λ_C be the top eigenvalue of A, B, and C respectively and let λ_A+B+C be the top eigenvalue of A+B+C. Noting that A,B,C are all symmetric nonnegative matrices, letting S ⊂ℝ^m^2+m be the subset of vectors with nonnegative entries such that x_2≤ 1. Then, λ_A ≤λ_m ≤λ_A+B+C≤λ_A+λ_B + λ_C . It is easily seen that λ_B = m^2 and λ_C = m(m+1). We can also compute λ_A explicitly. Let v be the top eigenvector of A. Since the first m^2 rows and columns of M are all identical, the first m^2 entries of v are the same. 
Normalize v so that the first m^2 entries are 1. Then λ_Av = Dv yields λ_Av_1 = λ_A = ∑_j=1^m A_1,jv_m^2+j = ∑_j=1^m jv_m^2+j and for 1 ≤ k ≤ m, λ_Av_m^2+k = ∑_j=1^m^2kv_j = ∑_j=1^m^2 k = m^2k . Plugging v_m^2+k = m^2k/λ_A into the first equation, we get λ_A^2 = ∑_j=1^m m^2j^2 = m^2(m)(m+1)(2m+1)/6 . This yields, √(m^3(m+1)(2m+1)/6)≤λ_m ≤√(m^3(m+1)(2m+1)/6) + m^2 + m(m+1) . With this estimate in hand we can now show stronger bounds on v_∞ than are directly implied by <cit.> in the general case. Let v be the top eigenvector of D_m normalized so that v_1=1 we have v_∞ = 𝒪(√(m)) First we note that D_m_max≤ m+1, second we note that by <cit.> we know 1/2√(m^2 + m)≤v_i/v_2≤ 1 And in particular this means max_i∈ [m^2+m] v_i=max_i∈ [m^2+m]v_i/v_1≤max_i,j∈ [m^2+m]v_i/v_j≤ 2√(m^2+m)≤ 2m+1 It follows from <cit.> that v_∞ = 𝒪(m). when we have normalized v such that v_1=1. Since the first m^2-1 terms of v are 1 and the entries in D are at most (m+1) we get λ_m v_i =∑_k=1^m^2-1 (D_m)_i,kv_k +∑_k=m^2^m^2+m (D_m)_i,kv_k ≤ m^2(m+1)+2m(m+1)^2=𝒪(m^3) . Since λ_m≥ m^5/2/√(3), it follows that v_i≤𝒪(√(m)). Let v be as above. There exists C>0 such that for i≥ m^2, we have √(1/3m)-C/m≤ (v_i-v_i-1)≤√(3/m)+C/m , for all sufficiently large m. For i≥ m^2 we consider the following difference r_i-r_i-1. Observe that first i-1 coordinates are 1 followed by n+m+1-i many -1. Therefore, λ(v_i-v_i-1) =(D_mv)_i-(D_mv)_i-1=r_i-r_i-1v =∑_k=1^i-1 v_i-∑_k=i^m^2+m v_i =(m^2-1)+∑_k=m^2^i-1 v_i-∑_k=i^m^2+m v_i . Using the fact that v_i≤ C√(m) for all i we obtain m^2-1-Cm^3/2≤λ(v_i-v_i-1)≤ m^2-1+Cm^3/2 . Since λ_m∼ m^5/2/√(3), the desired conclusion follows. To conclude the proof we first note that from above ⟨1, v⟩≥ m^2. On the other hand, Now with this we have enough to get good estimates of v_1 and v_2 which will imply the desired result. Starting with ℓ_1 first we have v_1 =∑_k=1^m^2+m v_k =m^2-1+∑_k=m^2^m^2+m v_k ∼ m^2+√(3/m)∑_k=1^m k ∼ m^2+m√(3m)/2 ∼ m^2 Now turning to the ℓ_2 we have v_2^2 =∑_k=1^m^2+m v_k^2 =m^2-1+∑_k=m^2^m^2+m v_k^2 ∼ m^2+3/m∑_k=1^m k^2 = m^2+3(m+1)(2m+1)/6 = 2m^2 We also obtain v_2^2 ≤ 2m^2 + C(m+1)^3/2 . Combining these results tells us that lim inf_m→∞1v/v_2·1_2≥1/√(2) . § PROOF OF THEOREM <REF> Let G be any graph with diameter 2. Since D_ij is either 1 or 2 (except for D_ii=0), it is easy to see that 1v-v_i ≤λ v_i=∑_j=1^n D_i,j v_j ≤ 2(1v-v_i). Rearranging, we obtain the uniform two-sided bound 1v/λ+1≤ v_i < 21v/λ+1. This yields, in particular, that for all 1≤ i, j≤ n 1≤v_i/v_j≤ 2 . This defines a convex region, that we denote by D. In order to prove our result, it suffices to prove that the minimum of v_1= 1v over the set D, subject to the constraint v_2=1, is at least 4/(3√(2)). To this aim, we first notice that the minimizers of this problem are the same, up to a scalar factor, of the maximizers of v_2_2 in D subject to v_1=1 (in fact, in both cases they must be minimizers of the homogeneous function v_1/v_2 on D). Since the latter is a maximization problem for a strictly convex function on a convex set, the maximizers must be extreme points of D. In particular, going back to the original formulation, we conclude that the smallest that 1v can be will be when all entries of v are c,2c for some c so that v_2=1. 
Suppose now that we have m entries equal to c and n-m entries equal to 2c. Then 1 = v_2^2 = ∑_k=1^m c^2 + ∑_k=m+1^n (2c)^2 = mc^2 + (n-m)4c^2. Solving for c we find c = 1/√(4n-3m). So now we can optimize over m to minimize the ℓ_1 norm: v_1/√(n) = (mc + (n-m)2c)/√(n) = (2n-m)/√(n(4n-3m)). Now, treating n as a constant and differentiating with respect to m, we get d/dm [(2n-m)/√(n(4n-3m))] = [-√(4n^2-3mn) + 3n(2n-m)/(2√(4n^2-3mn))]/(4n^2-3mn) = (3mn-2n^2)/(2(4n^2-3mn)^(3/2)). To set this equal to 0 we only need to consider the numerator, so we solve 0 = 3mn-2n^2 = n(3m-2n), which gives n=0 or m=2n/3; since n>0, the critical point is m=2n/3, and it is the minimum. Substituting this into our formula for the ℓ_1 norm we get (2n-m)/√(n(4n-3m)) = (4n/3)/√(n(4n-2n)) = 4/3·1/√(2). Now by <ref> we know that if G is a random graph, then for large n it will have diameter 2 and this bound will hold.
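Combining Theorem <ref> with Lemma <ref>, this prediction is easy to test empirically: sample Erdős-Rényi graphs, keep those of diameter 2, and compare ⟨v, 1⟩/√n against 4/(3√2) ≈ 0.943. A small sketch follows (the sample size, n and p are arbitrary choices of ours).

```python
import numpy as np
import networkx as nx

bound = 4 / (3 * np.sqrt(2))                 # the constant from Theorem <ref>
for seed in range(10):
    G = nx.erdos_renyi_graph(n=40, p=0.5, seed=seed)
    if not nx.is_connected(G) or nx.diameter(G) != 2:
        continue                             # the theorem only covers diameter-2 graphs
    D = nx.floyd_warshall_numpy(G)
    w, U = np.linalg.eigh(D)
    v = np.abs(U[:, -1])                     # Perron eigenvector, unit l2 norm
    ratio = v.sum() / np.sqrt(G.number_of_nodes())
    print(f"{ratio:.4f} >= {bound:.4f}: {ratio >= bound}")
```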
http://arxiv.org/abs/2307.05611v1
20230710225132
Against the "nightmare of a mechanically determined universe": Why Bohm was never a Bohmian
[ "Flavio Del Santo", "Gerd Christian Krizek" ]
physics.hist-ph
[ "physics.hist-ph", "quant-ph" ]
David Bohm has put forward the first deterministic interpretation of quantum physics, and for this he seems to be regarded as a champion of determinism by physicists (both his contemporaries and the supporters of his interpretation, the so-called “Bohmians") as well as by historians of physics. The standard narrative is that he underwent a “conversion" from being a supporter of Bohr to being a staunch determinist, due to his interaction with Einstein and his commitment to Marxism. Here we show that Bohm actually upheld with continuity throughout his career some philosophical tenets that included a strong rejection of mechanistic determinism. As such, we conclude that Bohm was never a Bohmian and that his philosophical views have been largely misinterpreted. “Why on earth are they calling it Bohmian mechanics? Haven't they read a word I have written?!" David Bohm (reported by Basil Hiley) § INTRODUCTION David Bohm (1917-1992) went down in history as the physicist who achieved the impossible by providing an alternative deterministic interpretation of quantum mechanics <cit.>.[ Bohm himself referred to his interpretation as “alternative interpretation"<cit.>, as “causal interpretation"<cit.>, and as “quantum potential interpretation". In the literature it is referred to as “Ontological interpretation" <cit.>, “De Broglie-Bohm causal interpretation"<cit.>, or “De Broglie-Bohm Pilot-Wave Theory", “Bohmian Mechanics" <cit.>, or “Bohm theory" <cit.>. The variety of terminologies reflects different stances and views of Bohm's collaborators and successors which deviate in some cases substantially from Bohm's own ideas and whose discussion would go beyond the scope of this work.] Acclaimed or blamed therefore as a champion of determinism, he was (and still is) regarded by many as a cure against the claims of the Copenhagen school that quantum mechanics necessarily requires a completely novel way of looking at the world. According to this narrative, Bohm restored the seemingly lost comfort of mechanistic determinism, which had characterized physics for centuries, and his work seems therefore animated by a certain intellectual conservatism (see, e.g., <cit.>). Here, we show that it was far from his intention to try to go back to an old pre-quantum paradigm. Bohm's views on philosophy of physics have instead been explicitly aimed, with continuity throughout his whole career, at demolishing certain established views that he perceived as limiting and dogmatic.
As we shall see, one of these was the concept of mechanism, a form of reductionism which Bohm regarded as the assumption that the great diversity of things that appear in all of our experience, every day as well as scientific, can all be reduced completely and perfectly to nothing more than consequences of the operation of an absolute and final set of purely quantitative laws determining the behaviour of a few kinds of basic entities or variables. (<cit.>, p. 37). In this effort, Laplacian determinism was regarded by Bohm as the first and foremost expression of mechanism, and he thus searched for alternatives throughout his whole life. As noted by Nobel laureate Roger Penrose, “there can be few physicists who have delved into the philosophical implications of their subject as has David Bohm” <cit.>. It is indeed possible to identify at least three fundamental tenets in David Bohm's philosophy of physics, namely: (i) realism, (ii) causality, and (iii) anti-mechanism. Here we will not deal with Bohm's realism which has already been the subject of numerous studies, and it is undisputed that Bohm was committed to (some form of) realism (see, e.g., <cit.>, and references therein). On the other hand, we will focus on the latter two tenets, which have been astonishingly misunderstood in most of the vast literature devoted to Bohm's thought and his intellectual legacy. In particular, the term causality has been commonly assumed to be a synonym of determinism; a mistake unfortunately still present in the literature in both physics and philosophy to date. Furthermore, Bohm always opposed mechanism, which, we stress again, has its most striking example (but not the only one) in determinism. It is the main aim of this paper to clarify some of Bohm's original philosophical stances by demolishing certain established misconceptions around his commitment to determinism, which we cannot emphasize enough, was never present in his thought. It is a peculiar case that a scholar to whom so many historical and philosophical studies have been devoted has been so misrepresented. Bohm's sustained rejection of determinism was only partly acknowledged in <cit.> and new important evidences made available thanks to the publication of a collection of letters in <cit.>. Moreover, one of us (F.D.S.) already pointed out in <cit.> that Bohm's commitment to determinism was secondary to his commitment to realism. The same thesis was then put forward in <cit.>. Here, we show that Bohm's position was more radical than this: not only was not determinism his philosophical priority, but he actually always opposed it. In section <ref>, we will recollect the standard narrative about Bohm's ideas. Albeit with some variations, indeed, there seems to be a consensus about the fact that Bohm's main philosophical concern was to retrieve determinism in modern physics (at least at a certain stage of his working life). We will strongly counter, in section <ref>, this standard narrative with a more accurate account of the actual philosophical views of David Bohm, focusing on his take on causality and (non)determinism. We will show that one of Bohm's main commitments was always anti-mechanism, a position that he had understood very early to be incompatible with determinism. This is what actually led him to initially (partly) support the indeterministic doctrine of Copenhagen, which, however, he abandoned when he realized that randomness is another, for him unacceptable, form of mechanism. 
Hence, his commitment to determinism—stemming from his celebrated alternative interpretation—is only ostensible. Bohm's anti-mechanistic position led him to develop a dialectic philosophical view of an unlimited number of levels of description of reality that can be neither deterministic nor fully random, but still allow either of these descriptions to exist at different levels. We will here mainly focus on the period of the 1950s, because it is in that decade that Bohm allegedly underwent a change from being a supporter of Bohr to becoming a determinist and then supposedly abandoned this debate altogether as his commitment to Marxism faded away. To avoid further misinterpretations on our part, we will favor quoting as much as possible from Bohm's original writings rather than presenting our own summaries and analyses. Moreover, in the interest of conciseness, but without the risk of decontextualizing the quotations, we will provide more extended excerpts in the form of appendices, where the interested reader can find further evidence in support of the thesis put forward in the main text. We hope that letting Bohm speak for himself would finally bring justice to some aspects of his complex and original way of conceiving physics. § THE STANDARD NARRATIVE: BOHM'S ALLEGED COMMITMENT TO DETERMINISM After World War II, the practices of physics underwent a drastic change. The foundational debate that had characterized the early days of quantum physics gave away to a pragmatic approach, the so-called “shut up and calculate", oriented towards applications often of a military nature <cit.>; the debate over the interpretation of the quantum formalism seemed to be settled for good. It was only a handful of physicists (and a few philosophers) scattered all over the world who started reviving the uneasiness towards the orthodox interpretation proposed by the school of Copenhagen (see Refs. <cit.>). Among them, David Bohm was a link between the old generation of critics—such as Albert Einstein, who played and active role in his intellectual life, Erwin Schrödinger, or (the early) Luis de Broglie—and the new underground culture concerned with quantum foundations to come. After completing his PhD with Robert Oppenheimer at Berkeley in the 1940s and a post at the Institute of Advanced Studies in Princeton, in 1951, Bohm fell victim of the witch-hunt of McCarthyism because of his adherence to Marxism; this led him to a life of exile: firstly to Brazil, then to Israel, and finally to the UK, where he spent the rest of his life (see <cit.> for biographies of Bohm). Although his research in the group of Oppenheimer was mainly about plasma physics, it is there that Bohm started getting interested in foundational problems of quantum theory, as he later recalled: “When I went to work with J. Robert Oppenheimer, I found a more congenial spirit in his group. For example, I was introduced to the work of Niels Bohr and this stimulated my interest, especially in the whole question of the oneness of the observer and the observed." (cited in <cit.>, p. 1. See also <cit.>, Ch. 4). Bohr, together with Werner Heisenberg and others, was not only among the founding fathers of quantum theory but the initiator of the so-called Copenhagen interpretation thereof. The latter maintains that quantum mechanics necessarily leads to abandoning certain fundamental precepts of classical physics, among which determinism, and instead to embrace the genuine probabilistic nature of quantum phenomena. 
Bohm went so deep in his reflections about quantum theory and its foundations that, in 1951, he published the textbook Quantum Theory <cit.>, fully in the spirit of the Copenhagen interpretation. Shortly after the publication, indeed, Bohm himself stated about his book: “a clear presentation of Bohr’s point of view (the first clear, if I may boast a little)."(Letter from Bohm to Miriam Yevick; Letter 66, Folder C117, January 23, 1952. In <cit.>, p. 235.) However, in the very same year, Bohm submitted, on July 5th, a seminal work (published in two parts <cit.>) wherein he presented the first consistent alternative interpretation of the quantum formalism. He introduced the initial position of quantum particles as a “hidden variable" that, if known, would lead to deterministic trajectories similar to the familiar ones of classical mechanics (but guided by a genuinely additional quantum part in the potential). So far, these are mere historical facts. Based on these, however, a standard narrative about David Bohm has crystallized, which can be summarized as follows: In the span of around a year, Bohm had a dramatic shift in his philosophical agenda moving one of his tenets from indeterminism to determinism. This narrative is not only popularized among physicists in the sort of working history that hovers in the community, but has been advocated by most historians, too. This is however not surprising, since admittedly it prima facie seems a rational account of the facts. A more thorough historical reconstruction, proposed among other works in the recent comprehensive biography of Bohm by Olival Freire Jr. <cit.>, tells a more nuanced story. First of all, it points out that already in his 1951 book <cit.>, Bohm had places some hints of his uneasiness with Copenhagen, such as endorsing ontological realistic assumptions (see <cit.>, pp. 48-51). Moreover, historians tend to add a third phase in which Bohm supposedly distanced himself again from determinism at the end of the 1950s, concurrently with his dropping of Marxism. This double shift, also in relation to Marxism, was strongly emphasized already by Pylkkänen <cit.>, and also Freire, although more cautiously, endorses a similar position: “Indeed, the connection between the break with Marxism and abandonment of determinism in science, particularly in physics, and not only in society, in Bohm’s thoughts is just a guess, albeit a plausible one." (<cit.>, p. 123). At any rate, the main point of the standard narrative is essentially present also in these more informed accounts. The historical question that naturally arises then is: why did Bohm go through such a drastic and abrupt change from an adherent of the school of Copenhagen, i.e. a doctrine explicitly advocating the failure of determinism, to a novel deterministic interpretation? (And, possibly, why did he give in determinism again a few years later?). That is, what caused the sudden “conversion" of Bohm from an open supporter of indeterminism to a staunch determinist (and perhaps back)? Numerous studies have tried to answer this question (<cit.>, apparently quite successfully despite a few minor details that are still the subject of historical debate. But what if the question was the wrong one in the first place? What if determinism has never been a desideratum for Bohm, rather, this change was not about his worldview, but simply it was reflecting different phases of Bohm's experimentation in his attempt to achieve a physical theory that would satisfy his main philosophical tenets? 
In section <ref>, we will, in fact, defend this thesis. That is, that Bohm always upheld an anti-mechanistic view that was clearly incompatible with determinism alone. Before doing that, in the remainder of this section, we will continue summarizing the standard narrative, or rather, its reply to the main question it poses. There is an almost absolute consensus on the fact that the two elements that played the major role in Bohm's turn towards determinism have been, on the one hand, his encounter with Einstein, and, on the other, his Marxist views. This twofold explanation is by now well-established among historians, who mostly debate about the extent of one or the other influences (possibly, concurrently with Bohm's political prosecution; see <cit.>). This reconstruction was already put forward by the illustrious historian and philosopher of physics Max Jammer, according to a late recollection of Bohm himself: Stimulated by his discussion with Einstein and influenced by an essay which, as he told the present author, was “written in English” and “probably by Blokhintsev or some other Russian theorist like Terletzkii,” and which criticized Bohr’s approach, Bohm began to study the possibility of introducing hidden variables. (<cit.> p. 279)[Note however, that there is a controversy about the value of this statement because there were no English translations available of either Blokhintsev's or some other Terletzkii's works at the time of Bohm's “conversion". See <cit.>, Section 3.4.2.] It is indeed well-known that Einstein had opposed Bohr's views since the early days of quantum theory and his attempt to maintain determinism, summarized by the motto “God does not play dice", has entered the popular culture. However, while Einstein was invariably troubled by the abandonment of realism (and possibly of locality and localizability) implied by Bohr and his school, there are quite incontrovertible evidences that determinism was not Einstein's main philosophical concern <cit.>, and even less so in his late years. Actually, in 1953, in a letter to his friend Max Born, he stated: “I have written a little nursery song about physics, which has startled Bohm and de Broglie a little. It is meant to demonstrate the indispensability of your statistical interpretation of quantum mechanics […] This may well have been so contrived by that same ‘non-dice-playing God’ who has caused so much bitter resentment against me, not only amongst the quantum theoreticians but also among the faithful of the Church of the Atheists” (Einstein, A. to Born, M, 12 Oct 1953 <cit.>). In the light of this, we can conjecture that the impact that Einstein had on Bohm at the time of their encounter at Princeton in the early 1950s, was probably that of casting doubt on the Copenhagen interpretation, and suggesting that one could search for an alternative. However, it does not seem likely that he directly pushed Bohm towards determinism, let alone hidden variable that he never supported (see <cit.>). As for whether and to what extent Marxism has been a guiding principle for Bohm in developing his deterministic hidden variable interpretation, the question is subtler. This has been considered in detail by Forstner <cit.>, and partly by Peat <cit.>, Freire <cit.>, and Talbot <cit.>. 
Bohm surely agreed with the ontology supported by Marx and Engels, namely, a materialistic philosophy (or naturalism) which “says that the sole reality is the natural world, and this world is made up solely of matter" and “material things are not dependent for their existence or nature on any mind or minds", thus implying realism (from A. W. Wood, cited in <cit.>, p. 24). Moreover Marx and Engels put together this materialistic view and the dialectic of Hegel, which turned into the main guiding philosophy of Marxism, i.e., dialectical materialism. While dialectical materialism applied in a scientific context deals primarily with the nature of the world, it is in the Marxist analysis of the progress of history and society, historical materialism, that one finds determinism as a main characteristic. In fact, for Marx it is the mode of production and the struggle between social classes that necessarily determines historical change. As explained by Freire <cit.>, it is objectively difficult to know to which Marxist writings Bohm had access to and therefore which parts of that philosophy had a concrete impact on his scientific and philosophical views. However, we will see in section <ref> that it is the dialectic aspect (and partly the materialist one, for what concerns realism) of Marxism that seems to have played the major role in the views about philosophy of science that guided Bohm, rather than the deterministic character of historical materialism. As a matter of fact, Bohm was already a Marxist when he published his book <cit.> in which he endorsed the view of Bohr, so it does not seem to make sense to attribute his alleged conversion towards determinism to his adherence to Marxism. We will show, on the contrary, that his interest in Bohr actually stemmed, at least partly, from Marxism. This should be regarded as Bohm's first attempt to get away from a mechanistic philosophy in a dialectic (i.e. Marxist) spirit. Historians are not the only ones who have misconceived Bohm's point of view. The idea that Bohm's first and foremost concern was that of restoring determinism at any cost was surely always widespread among physicists too. Starting with the contemporaries who were supportive of him—like Einstein, Luis de Broglie, and several Marxist physicists, in particular Jean-Pierre Vigier—and closely followed by his critics, they all emphasized Bohm's commitment to determinism: the former as a merit and the latter as a untenable conservative attitude (see <cit.>, Chapters 4.2-4.5, for the early reactions on Bohm's hidden variable model).[Incidentally, it should be recalled that Bohm's interpretation did not receive the praise that he expected and that he might have deserved. Even Einstein, who supported Bohm in his career and considered him a very talented physicist, stated that the way Bohm's way of restoring determinism “seems too cheap" (see <cit.>). There are several hypotheses about why this has been the case, related to the Zeitgeist of post-war physics, Bohm's political views, the authority of the Copenhagen school, etc. (See <cit.>). It was only in more recent years that the so-called Bohmian mechanics found new momentum in a sub-community of scholars interested in foundations of quantum physics (see <cit.>). Also Bohm's close collaborators rediscovered Bohm's original interpretation and encouraged further works closer to Bohm's non-mechanistic ideas (see <cit.>, <cit.>, <cit.>). 
] As a matter of fact, due to his hidden variable model, Bohm started being regarded as a staunch determinist.

§ AN ALTERNATIVE NARRATIVE: BOHM AGAINST MECHANISTIC DETERMINISM

§.§ Indeterminism in Bohm's book Quantum Theory (1951) and beyond

As we have previously recalled, the first work of Bohm in which he manifestly deals with foundational questions is his 1951 book on quantum theory <cit.>. It is generally known, as we have discussed, that this book takes an approach close to the orthodox view of Copenhagen. Note that in doing so, Bohm was not blindly following the mainstream; rather, he was actively looking for ways to provide quantum mechanics with solid and understandable physical foundations, against the widespread pragmatic acceptance of an uninterpreted abstract formalism. He therefore saw in the thought of Bohr an attractive philosophy because it possessed two main features: the principle of complementarity and irreducible probability (i.e., nondeterminism). In the former he saw elements of dialectics, which we claim was Bohm's main influence from Marxism. In fact, this is a first attempt, which Bohm was to develop in greater detail in the following years (see below), to apply the ideas of Engels who, in his Dialectics of Nature, “is especially opposed to attempts at mechanical reductionism" <cit.>. In the context of quantum physics, this is the fact that it is the interaction between two qualitatively different descriptions (the classical and the quantum one) that determines reality, forming something qualitatively new, not according to necessity. This also satisfied Bohm's antireductionist convictions because the classical world ought to lie outside of the quantum domain as a primitive and cannot in general be fully reduced to a quantum description. As for the acceptance of objective chance (i.e., potentialities), he saw in this the most natural way of abandoning the view of mechanistic determinism. Later Bohm abandoned this approach, but he remained sympathetic to potentialities (see section <ref>). In a letter to his then girlfriend Hanna Loewy, presumably in 1950, Bohm explicitly clarified his motivations for having taken a Bohrian approach in his book:

I just got another idea on the quantum theory also. It is based on the fact that at the microscopic level, the quantum theory deals only with potentialities. For example, the quantum theory describes the probability that an electron can realise its potentiality for a given position. But to realise this potentiality, it must interact with some large scale (classical) system, such as an apparatus which measures position. It is only at the large scale that definite and well-defined events can exist. [...] Thus, the quantum theory presupposes the validity of classical concepts at the classical level. This means that one does not deduce the classical theory from the quantum theory, but that the two work together to describe the whole system. This is in contrast to most theories in physics, in which we analyse all large scale phenomena in terms of the small scale components. Here, we see that at the large scale level, new (classical) phenomena appear, which are not contained logically in the small scale phenomena alone. In other words, the behaviour of the whole system cannot be reduced to a description of the relationship of all its parts, since, new properties appear in a large aggregate, not contained at all in the behaviour of the microscopic systems. (Letter from Bohm to Hanna Loewy; Letter 1. Folder C37, not dated. [February-May, 1950?], <cit.>, p. 99)
Moreover, soon after the publication of the book, he explained to his friend, the mathematician Miriam Yevick, why he got interested in Bohr:

All I knew was that there was one school, which utterly repelled me, in which one was supposed to introduce abstract mathematical postulates, and be satisfied if the calculations agreed with experiment. Against this, Bohr’s school seemed to be a big improvement, because at least he tried to explain the physical meaning of the theory. Moreover, there was an element of dialectics in Bohr’s point of view which attracted me. It seemed progressive because it broke the old mechanist materialist determinism, which left no room for growth and development of something new. (Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227; extended quotation in Appendix <ref>)

Note that at the time when he wrote this letter, Bohm was a staunch Marxist and, most remarkably, had already completed his work on deterministic hidden variables, and yet he was evidently criticizing mechanistic materialist determinism. As far as its content is concerned, Bohm's book is an excellent technical manual of quantum mechanics and, although it endorses the view of the Copenhagen school, it is already possible to pin down where the main philosophical concerns of its author lie: causality is already his main focus, together with his rejection of mechanism. However, at this stage, he explicitly endorses indeterminism as a way out of mechanism, a view that was soon to change when he realised that indeterminism, too, can be mechanistic. We have recalled in the previous section that Freire <cit.> already noticed that a first element that distances Bohm from the Copenhagen school is that in his 1951 book he looks for a realist account of nature. Another main difference with Copenhagen becomes manifest concerning causality. While for Heisenberg “quantum mechanics proves the invalidity of the law of causality,"[The original German phrase reads: “so wird durch die Quantenmechanik die Ungültigkeit des Kausalgesetzes".] <cit.> for Bohm causality was an absolutely indispensable tenet. However, he makes it very clear in his book that while maintaining causality he wants to escape determinism. Hence, a first major distinction, surely not well understood at that time (and alas not even today in most physics circles), is the conceptual difference between causality and determinism. This is also at the center of misunderstandings in the historical literature when referring to Bohm's later views, for instance in Freire's words: “Soon both David Bohm and his critics were using “causal interpretation” to label his approach to quantum theory, clarifying Bohm’s ambition to restore a kind of determinism analogous to that of classical mechanics." (<cit.>, p. 63). In his 1951 book, Bohm actually advocates a causally non-deterministic nature of physical laws, in terms of tendencies (as we will see later, this is closely related to Popper's view in terms of propensities; see section <ref>):

we wish to call attention to the fact that, even in very early times, two alternative general types of causal laws appeared. One of these involved the notion of complete determinism; the other involved the notion of causes as determining general tendencies but not determining the behavior of a system completely. (<cit.>, Ch. 8, Sect. “Completely Deterministic vs. Causal Laws as Tendencies.")
Bohm goes as far as to brilliantly show that actually the determinism of classical physics makes the concept of causality redundant:

It is a curiously ironical development of history that, at the moment causal laws obtained an exact expression in the form of Newton's equations of motion, the idea of forces as causes of events became unnecessary and almost meaningless. The latter idea lost so much of its significance because both the past and the future of the entire system are determined completely by the equations of motion of all the particles, coupled with their positions and velocities at any one instant of time. Thus, we can no more say that the future is caused by the past than we can say that the past is caused by the future. [...] Thus, classical theory leads to a point of view that is prescriptive and not causal. (<cit.>, Ch. 8, Sect. “Classical Theory Prescriptive and not Causal".)

Hence, he saw a way out of the effective lack of causality in a completely deterministic theory in terms of the tendencies or potentialities entailed by (the Copenhagen interpretation of) quantum physics:

With the advent of quantum theory, the idea of complete determinism was shown to be wrong and was replaced by the idea that causes determine only a statistical trend, so that a given cause must be thought of as producing only a tendency toward an effect. [...] (<cit.>, Ch. 8, Sect. “New Properties of Quantum Concepts: Approximate and Statistical Causality".)

Thus, in terms of our new concept, matter should be regarded as having potentialities for developing either comparatively well-defined causal relationships between comparatively poorly defined events or comparatively poorly defined causal relationships between comparatively well-defined events, but not both together. (<cit.>, Ch. 8, Sect. “Relation between Space Time and Causal Aspects of Matter".)

We have thus seen why Bohm became aligned with Bohr in the first place, namely, to find a suitable alternative to mechanistic determinism, which precluded a sensible concept of causality, for Bohm a crucial assumption for a physical theory. However, he soon realized that Bohr's philosophy was not as satisfactory as he had previously sensed, because it indeed contained a dialectical approach but not as much materialism as he would have wanted:

After I had written the book, I finally began to grasp the full meaning of the theory, and could see that it leads inevitably to a form of (dialectical) idealism. But this was not so clear when I started, because of the general confusion in the literature. (Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227; extended quotation in Appendix <ref>)

And again:

I notice that you call me “a disciple of Einstein". This is not very accurate. Actually I was a strong “Bohrian" and wrote my book under the assumption (later proved wrong) that the principle of Complementarity was a materialist point of view. It certainly is very dialectical, but I did not see at that time that it is not materialist. After writing my book, I sent a copy to Einstein. He called me up asking to discuss the book, especially the Section on the paradox of EPR, which he liked very much. He thought I gave Bohr's point of view the most convincingly possible presentation, but he still refused to accept it. He then argued for some time, and he ended up convincing me that his objections were not answered.
I thought about it for a while, becoming more convinced all the time that he was right. Finally I decided to look for a causal interpretation. Within few weeks, I hit upon the idea which I published, not knowing about de Broglie's work until later. It took me 10 hours of work, distributed over 2 months to convince Einstein that it made sense, but he actually never liked it. He only thought it was good to propose it to break out the present stagnant situation in physics. (Bohm to Schatzman; Letter A1.15. September 7, 1952, <cit.>, p. 335)

§.§ Against determinism, despite hidden variables (1952)

Exactly in the same period when his book <cit.> was appearing, Bohm was formulating his alternative, deterministic interpretation in terms of hidden variables. Given his clear motivation recalled in the previous section, why did he do that? Bohm must have found himself in a strange position when he managed to conceive a consistent model based on hidden variables that restored determinism. He clearly wanted to prove something that was considered impossible by the founding fathers of the theory, in particular John von Neumann, who had allegedly proven that a hidden variable completion of quantum mechanics was in principle impossible.[On the history of von Neumann's impossibility proof see <cit.>.] Moreover, Bohm wanted to prove that Bohr and Heisenberg's view was not necessarily the ultimate description of reality. It should be stressed that at that time no other interpretation of quantum physics was known besides (slightly different understandings of) the Copenhagen one, so, probably stimulated by his novel awareness of the limits of Bohr's interpretation and by the discussions with Einstein, he explicitly looked for an alternative interpretation. According to Hiley, indeed, Bohm “was not a deterministic man, he used causality. [...] He was not bound to it [determinism]. David Bohm always used to say to me: `I am making a proposal'. So, all this people think he had rigid views. He didn't have rigid views. He was always making proposals, because he thought he never fully got to the bottom of quantum mechanics." <cit.>. In fact, although Bohm stresses in his papers that the “hidden" variables determine the precise results of each individual measurement process" <cit.>, repeatedly acknowledging very clearly the deterministic character of his model, he certainly never adopted a fundamental ontology merely made of particles plus their deterministic dynamics guided by the wave function. This is something that his followers, the so-called Bohmians (see footnote 1), have instead assumed, namely, considering Bohm's proposal as the ultimate description of reality, much against the view of Bohm himself. In fact, the germ of Bohm's way out of the mechanical determinism (see further) entailed by his proposal is already expressed, although quite subtly, in the conclusion of his second paper on hidden variables <cit.>, when he states:

This hypothesis is based on the simple assumption that the world as a whole is objectively real and that, as far as we now know, it can correctly be regarded as having a precisely describable and analyzable structure of unlimited complexity. The pattern of this structure seems to be reflected completely but indirectly at every level [...]. We should never expect to obtain a complete theory of this structure, because there are almost certainly more elements in existence than we possibly can be aware of at any particular stage of scientific development.
Any specified element, however, can in principle ultimately be discovered, but never all of them.

Indeed, at least since 1951, most likely when he was still in Princeton (see <cit.>, footnote 48, p. 31), Bohm started developing a new philosophy based on the concept of having different levels of description, each of which can be either deterministic or indeterministic, but each giving only a partial account of reality. His ontology was thus made of the wholeness of the different levels of qualitatively different entities. However, he postulated the number of levels to be infinite, thereby making it fundamentally impossible to have mechanism, and in particular determinism:

Because of the existence of an infinite number of levels, the deterministic laws of order at each level probably follow only as a result of conditions of chaos existing at lower levels. If the lower-level conditions of chaos could be altered, then the very framework of description of the higher level laws would also have to be altered. Thus, we are led to a more dynamic concept of the laws of nature; for because of their infinite complexity, richness, and depth, the applicability even of certain very general forms of laws at a particular level may depend on conditions at other levels, which are in principle subject to our prediction and control. This experience should ultimately be repeated at any given level, however deep, as our knowledge is extended. (Bohm to Miriam Yevick; Letter 58. Folder C116, dated: Nov 23 [1951], <cit.>, p. 205)

Note that this idea, while it kept being refined, remained essentially unchanged throughout Bohm's transition from the period of his 1951 book to his hidden variable proposal, and reached its main expression in the book Causality and Chance <cit.> published in 1957 (see section <ref>). For instance, after he had already completed his hidden variable interpretation, he wrote to Yevick:

The “things” at each level, are made up of smaller “elements” at a more fundamental level, and it is the motion of these more fundamental elements (not usually directly visible to us, except with the aid of elaborate scientific research) which causes the appearance and disappearance of the “things” existing at a higher level. These more fundamental “elements” however, cannot be permanent, but must be made up of still more fundamental “elements” and so on ad infinitum. (Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227; extended quotation in Appendix <ref>)

Bohm also points out his position on the need for infinite levels to his collaborator Schatzman in a letter from 1952:

It is most likely that not even the substratum particles could be indestructible and unanalysable. Instead, there is probably another substratum below this (of a qualitatively different kind most probably) and so on ad infinitum. Thus, we should have an infinite series of qualitatively different levels of laws. Any finite number of levels can always be understood by humanity, but never all of them. (<cit.>, p. 351; extended quotation in Appendix <ref>)
And soon after his letter to Miriam Yevick in January, he wrote what is one of the most important quotations from the whole collection of known writings of David Bohm, because it unambiguously states that he could not accept mechanistic determinism, even in the period when he was promoting his hidden variable model:

Most of the errors of both the positivist and the 19th century “mechanical” materialists spring from an implicit assumption that the laws of nature will some day finally be understood in terms of a limited number of hypotheses. From this comes the nightmare of a mechanically determined universe that follows an inevitable course. To avoid this nightmare, positivists and idealists have given up causality and assumed a “spontaneous” (i.e., uncaused) element in physical processes. The concept of a limitless number of levels [...] provides a motive power for continual development & growth. Moreover, the nightmare of complete determinism is avoided. Although each level is causal, the totality of levels cannot ever be taken into account. Thus, as a matter of principle, we say that complete determinism could not even be conceived of, yet, each level can be determined. Here, we part company with the believers in “spontaneity” for we say that what appears to be spontaneous is caused by factors, in principle, knowable, but now hidden to us. But to be able to say this without implying complete determinism, we must assume an unlimited number of levels. (Bohm to Miriam Yevick; Letter 73. Folder C118, dated: Rec Mar 31 [1952], <cit.>, pp. 254-55; extended quotation in Appendix <ref>)

It is now clear that Bohm did not undergo a conversion from indeterminism (à la Copenhagen) to determinism (with hidden variables), as the standard narrative implies. He actually stayed faithful to his tenets of realism and causality, and his shift was merely that of realising that Bohr's approach was not enough to achieve what he had in mind. So it seems that his philosophical theory of the infinite levels was conceived to “cure" his own model of the “nightmare" of determinism. One should also remark that this idea of unlimited levels is very much in the spirit of dialectics, and indeed this is the most Marxist trait in Bohm's work. As pointed out by Talbot, such a connection is perhaps less abstract than one could think, drawing directly from the work of Engels: “especially in the Dialectics of Nature, Engels introduces the idea of levels, or what he calls `forms of motion'. [...] Engels is especially opposed to attempts at mechanical reductionism, which `blots out the specific character' and `qualitative difference' of non-mechanistic forms of motion." (<cit.>, p. 25). For Bohm this dialectic view of nature is a way to maintain a non-trivial form of causality, understood as the possibility of creating new things not out of necessity, contrary to the mechanistic view. In a letter to his friend, the American physicist Melba Phillips, Bohm spelled out this connection in detail:

Also an important additional aspect of causality needs to be discussed in more detail —namely— causality as a means of determining the mode of being of qualitatively new things, which grow out of the old things. The basic aspect of mechanism is that (as in an idealized machine) the universe is conceived of as made of basic elements (particles, fields, or what have you) which simply interact according to fixed roles, and which themselves never change as a result of the processes in which they take part.
[...] However, the concept of the infinity of levels shows that there need exist in nature no such thing as a basic element which never changes. Thus, causal laws not only determine the future in a mechanical sense; i.e., in the sense of determining quantitative changes in the arrangements of entities whose intrinsic character is fixed. The causal laws also tell when qualitative changes will occur and may define the characteristics of the new entities that can come into being. Thus, causality is a broader concept than that of mechanical determinism. [...] A “mechanistic” attitude toward science however, tends to limit the growth of our concepts in an arbitrary and dogmatically conceived way. Such a mechanistic attitude refers not only, however, to the mechanistic determinists, but also to the “mechanistic indeterminists”, who insist that in the quantum of action, we have reached an ultimate, indivisible, and unanalyzable entity, which will never be found to have a structure understandable in terms of a deeper level. (Bohm to Melba Phillips. Letter 43. Folder C48, dated: Oct 13, 1953, <cit.>, p. 164; extended quotation in Appendix <ref>).

In the following years, Bohm kept developing his philosophy of the infinite levels, sharpening the distinction between causality and deterministic mechanism, advocating the former and strongly opposing the latter. Causality is for Bohm the possibility of creating new qualitative entities in a non-trivial sense, i.e., without being able to reduce everything to a finite collection of basic elements that cannot change and that are subject to fixed laws:

Now, at first sight, it may seem that we could eliminate the large-scale level by analyzing it in terms of its basic molecular motions. And if there were a finite number of levels, this would be true. But if there are an infinite number, then each level stands on a footing that is, in the long run, as basic as that of any other. For every level has below it a deeper one. Indeed, matter can be regarded as made up of the totality of all levels. Each level makes its own specific contribution to the totality. (Bohm to Melba Phillips. Letter 46. Folder C48, dated: March 15, 1954, <cit.>, p. 170; extended quotation in Appendix <ref>).

Let us now stop for a moment and go back to the standard narrative. Freire makes a case that in the 1950s Bohm did indeed promote the recovery of determinism. In 1951, before the term `causal interpretation' had gained currency in the debates on Bohm’s proposal, he himself emphasized it in his first letter to the French astrophysicist and Marxist Évry Schatzman, while looking for allies, such as Jean-Pierre Vigier and Louis de Broglie, to get support for his proposal: “My position in these physical questions is that the world along with all observers who are part of it is objectively real and in principle precisely definable (with arbitrarily high accuracy), and subject to precise causal laws that apply in each individual case and not only statistically.” (<cit.>, p. 65). There seems to be a tension between the statements of Bohm here. However, one can hypothesize that his actual point of view on determinism is the one that emerges from the letters to his intimate friends, i.e., a staunch anti-mechanistic position. Thus, these letters seem to be a more trustworthy source than a first contact with somebody from whom Bohm was seeking support.
He probably tamed his more complex philosophical positions and tailored his letters to his interlocutors by highlighting the deterministic aspect in the interactions with Schatzman and later with Vigier, in order to find common ground with these more “traditional" Marxists who definitely prized determinism (see Appendix <ref>). Moreover, note that in the quoted letter to Schatzman, Bohm stresses the causal aspect of his proposal, which, as clarified above, does not necessarily mean determinism.

§.§ An indeterministic causal model by Bohm and Vigier (1954)

So far, the evidence that Bohm was against determinism even during the years in which he devised and promoted his hidden variable model is limited to private correspondence. However, in 1954, Bohm published a paper with Vigier—Model of the causal interpretation of quantum theory in terms of a fluid with irregular fluctuations <cit.>—that is a first attempt to put into practice the idea of a model of the causal interpretation which is, however, fundamentally non-deterministic, due to different levels of description. In fact, therein Bohm and Vigier postulate a field that is described by a fluid of density |ψ|^2, which is then able to recover standard quantum mechanics by introducing the hypothesis of

a very irregular and effectively random fluctuation in the motions of the fluid. [...] Such random fluctuations are evidently consistent within the framework of the causal interpretation of the quantum theory. Thus, there are always random perturbations of any quantum mechanical system which arise outside that system. <cit.>

They indeed clarify that “the causal interpretation of the quantum theory permits an unlimited number of new physical models" and that their proposed “model is an extension of the causal interpretation of the quantum theory already proposed, which provides a more concrete physical image of the meaning of our postulates than has been available before, and which suggests new properties of matter that may exist at deeper levels." <cit.>. Here causal means the possibility of explaining the theory in terms of a sub-quantum level (the fluid) that accounts for the higher quantum level. Note that, contrary to the first hidden variable model <cit.>, this model is based on fundamental random fluctuations, thereby dispelling even further the notion that Bohm was a committed determinist: “In the model that we have proposed here, however, the statistical fluctuation in the results of such [quantum] measurements are shown to be ascribable consistently to an assumed deeper level of irregular motion”. It is interesting to notice that while the postulated fluctuations of the fluid are considered to be (at this level of description) genuinely indeterministic, Bohm and Vigier think of these fluctuations as having a certain structure in terms of potentialities: “The fact that the mean density remains equal to |ψ|^2, despite the effects of the random fluctuations, implies then that a systematic tendency must exist for fluid elements to move toward regions of high mean fluid density.” The ontological basis of this new indeterministic model and how it relates to Bohm’s philosophy of the infinite levels is explained by Bohm in correspondence with Einstein: “The general idea is that at a level more fundamental than that of quantum mechanics, there is a field which satisfies causal laws. This field is, however, in a state of statistical fluctuations. These fluctuations are somehow described by the Ψ field.” (Bohm to Einstein; Letter 16, page 5, Folder C14, February 3, 1954, <cit.>, p. 5).
My own point of view is that below the quantum theory there exists a sub quantum-mechanical level of continuous and causally determined motion, and that the quantum theory is related to the sub-quantum mechanical level, more or less as ordinary Brownian motion is related to the atomic level. In other words, events at the atomic level are contingent on the (in general irregular) motions of some as yet unknown but qualitatively new kind of entity, existing below the atomic level. As a result, the relationships between things, that can be defined at the atomic level will be characterized by the laws of chance, since they will be determined only in terms of some quasi-ergodic type of motion of new kinds of entities existing at the lower level. (Bohm to Einstein; Letter 21. Folder C15, dated: November 14, 1954, <cit.>)

Einstein’s replies may seem surprising to those who still believe that he was also committed to determinism at any cost, because they show once more that he was dissatisfied with Bohm’s first (deterministic) hidden variable model: “I am glad that you are deeply immersed seeking an objective description of the phenomena and that you feel that the task is much more difficult as you felt hitherto.” (Einstein to Bohm; Letter 17. Folder C14, February 10, 1954, <cit.>). And again: “In the last years several attempts have been made to complete quantum theory as you have also attempted. But it seems to me we are still quite remote from a satisfactory solution of the problem.” (Einstein to Bohm; Letter 20. Folder C15, dated: October 28, 1954, <cit.>) Bohm did not develop this approach further, although he most likely perceived it, too, as a proposed first step towards his philosophy of levels of description, but he came back to a stochastic causal interpretation, also with Hiley, in the 1980s <cit.>.

§.§ Causality and Chance in Modern Physics (1957)

It is around the same period that Bohm started thinking not only that either a deterministic or an indeterministic description was possible at every level of an infinite series, but that both individual laws and statistical laws are necessary for a causal interpretation:

The picture which I propose is this: The totality of causal laws includes both statistical and individual laws. We start with this totality as our basic reality. [...] The fundamental reality is that of matter in being and in process of change, or of becoming, as it may more accurately be called. (Bohm to Miriam Yevick. Letter 121. Folder C124, dated: Sept 10 1954, <cit.>, pp. 419-22).

These dialectic ideas grew into a book, Causality and Chance, which Bohm published in 1957 <cit.>. Therein, Bohm identifies two types of causal laws (both considered fundamental): simple causal laws that connect past and future one-to-one (i.e., deterministic), and more general ones that are one-to-many (i.e., that do not lead to a unique evolution but only to an array of possibilities):

[L]et us note that the one-to-many character of a causal law has no essential relationship to a lack of knowledge on our part concerning the additional causal factors to which the more precise details of the effect can be traced. [...] In other words, a one-to-many law represents an objectively necessary causal connection, but in this case, what is necessary is that the effect remain within certain bounds; and not, as in simpler types of causal laws, that the effect be determined uniquely. (<cit.>, p. 17).

And again, Bohm clarifies, as he always maintained (cf. <ref>), that causality is a more general concept than that of necessity (i.e., determinism):
We see, then, that it is appropriate to speak about objectively valid laws of chance, which tell us about a side of nature that is not treated completely by the causal laws alone. Indeed, the laws of chance are just as necessary as the causal laws themselves. [Footnote:] Thus necessity is not to be identified with causality, but is instead a wide category. (<cit.>, p. 23).

Furthermore, Bohm here again stresses the fact that objective chance should be interpreted as a potentiality, i.e., a property of the system and its causal conditions:

On the basis of the above considerations, we are then led to interpret the probability of, for example, a given result in the game of dice as an objective property associated with the dice that are being used and with the process by which they are thrown (<cit.>, p. 27; extended quotation in Appendix <ref>)

Note that this example is exactly the same as the one used by Karl Popper <cit.> when he introduced the propensity interpretation (see section <ref>), again showing the compatibility between Bohm and a worldview based both on causality and on indeterminism. Beyond causality, a large part of Bohm's 1957 book <cit.> is devoted to defending another of his main tenets, namely, anti-mechanism. However, while Bohm remained convinced that determinism is an unacceptable form of mechanism, there is a fundamental difference with respect to his book on quantum theory <cit.>. Here, in fact, Bohm does not consider randomness alone as a way out of mechanism:

The point of view described above evidently renounces an important aspect of the various forms of the mechanistic philosophy that appeared from the sixteenth through the nineteenth centuries; namely, their determinism. But in doing this, it has conserved and in fact enhanced the central and most essential characteristic of this philosophy; namely, the assumption that everything in the whole universe can be reduced completely and perfectly to nothing more than the effects of a set of mechanical parameters undergoing purely quantitative changes. [...] The question of what constitutes a mechanistic philosophy, therefore, cuts across the problems of determinism and indeterminism. For this reason, we shall call the philosophy described in this section by the name of “indeterministic mechanism” (<cit.>, pp. 62-63).

Bohm's criticism of mechanism (and thereby of determinism) does not spare his own hidden variable interpretation, which he again considers an unsatisfactory physical model, whose main feature, he stresses, is consistency:

While our theory can be extended formally in a logically consistent way by introducing the concept of a wave in a 3N-dimensional space, it is evident that this procedure is not really acceptable in a physical theory, and should at least be regarded as an artifice that one uses provisionally until one obtains a better theory in which everything is expressed once more in ordinary three-dimensional space. (<cit.>, p. 117)

Finally, in his Causality and Chance, Bohm for the first time publicly defends his philosophical view of the infinite levels of description as the main alternative to mechanism, be it deterministic or indeterministic (see Appendix <ref> for relevant quotations). As noted already by Freire <cit.>, this marks Bohm's entry into the philosophical debate and would allow him to engage with prominent philosophers of science, the likes of Paul Feyerabend and Karl Popper (see further).
However, these ideas of infinite levels were not appreciated by his more traditional Marxist followers, who saw in them the undermining of determinism: a positive feature for Bohm and an unacceptable price for them. This is the case of Évry Schatzman and Vigier, who wrote to Bohm: “We may be wrong, but we do not agree at all with your ideas about the different levels of reality. It seems to us that it is a formal interpretation of the famous sentence of Lenin, in Materialism and Empiriocriticism, about the different levels of reality” (quoted in <cit.>, p. 108). To conclude, in Causality and Chance Bohm synthesizes his main philosophical tenets, which had been present in his writings since the beginning, but in a quite scattered way. Therein, Bohm defends, for the first time systematically, causality in its broadest sense, advocating the fundamental necessity of both individual laws and statistical laws, depending on the context. Moreover, he firmly rejects mechanism, not only in the form of determinism (as he had done for many years already), but also in its indeterministic form. Finally, Bohm opposes mechanism with a dialectic philosophy of infinite levels of description that he had developed throughout the 1950s. As for physics proper, in 1957 Bohm published with his student Yakir Aharonov a paper where he rejects his own 1952 model, not on the grounds of determinism but of nonlocality: “It must be admitted, however, that this quantum potential seems rather artificial in form [...] that it implies instantaneous interactions between distant particles, so that it is not consistent with the theory of relativity.” <cit.>. Bohm thus kept proposing his dialectical views of different levels, similar to the paper with Vigier <cit.>, looking for a “deeper subquantum-mechanical level” <cit.>. It is interesting to notice that, still at this stage, Bohm's views were completely misunderstood. Louis de Broglie, who wrote the foreword of his Causality and Chance, for instance, kept attributing to Bohm the great merit of giving hope to those who look for a deterministic hidden variable explanation of quantum theory: “It is possible that looking into the future to a deeper level of physical reality we will be able to interpret the laws of probability and quantum physics as being the statistical results of the development of completely determined values of variables which are at present hidden from us. It may be that the powerful means we are beginning to use to break up the structure of the nucleus and to make new particles appear will give us one day a direct knowledge which we do not now have of this deeper level." (<cit.>, p. x). This goes completely against what Bohm conveys in his book, making one wonder whether people like de Broglie were actually reading Bohm’s works or whether they just imposed on him what they wished to hear. Towards the end of the 1950s Bohm abandoned Communism, following the revelations of Stalin’s crimes by Nikita Khrushchev in 1956 (see <cit.>). As already recalled, this has been identified in the literature as the main motivation to abandon his commitment to determinism. But as we have shown, such an alleged commitment to determinism was never present in the first place, and his dialectic attitude remained an important factor in his philosophy. However, probably due to the frustration of being continuously misunderstood, Bohm’s engagement with different models of the causal interpretation became sparser.
Actually, after moving to the UK, first to Bristol and then to London, he engaged more and more in the philosophical debate, becoming friends with Paul Feyerabend, Karl Popper and Stephen Körner, and he kept his interpretational considerations away from his physics colleagues. Hiley joined Bohm at Birkbeck College in London in 1961 and, as a matter of fact, they spent “ten years without actually talking about the causal interpretation" <cit.>. As recalled by Hiley <cit.>, it was only in the 1970s that two of Bohm's students, Chris Dewdney and Chris Philippidis, “rediscovered" the hidden variable papers <cit.> and went to Hiley to ask why Bohm and he were not discussing these important results. Hiley replied “because it is all wrong", but when pressed further, he realized that he did not actually know why; he had only picked up what everybody was saying. And when he finally read Bohm's original papers thoroughly, he understood that nothing was wrong and encouraged the students to use the computer to calculate the trajectories of particles using Bohm's model. This marks the revival of Bohm's hidden variables (see also <cit.>, Ch. 6.1), a revival in which Bohm, however, obviously did not participate. Actually, when approached by Dewdney and Philippidis, “Bohm himself [...] admitted that he had made a tactical error in his original presentation of the theory. The term hidden variables, he said, created the wrong impression, and the papers themselves were too rigid and deterministic." (<cit.>, p. 266). In the following decades Bohm dedicated his work to a holistic approach that continued the ideas from his work on the causal interpretation of quantum theory. The purpose of Bohm’s original proposal in the light of his new ideas was later explained by himself in the following way:

To show that it was wrong to throw out hidden variables because they could not be imagined, it was therefore sufficient to propose any logically consistent theory that explained the quantum mechanics, through hidden variables, no matter how abstract and hypothetical it might be. Thus, the existence of even a single consistent theory of this kind showed that whatever arguments one might continue to use against hidden variables, one could no longer use the argument that they are inconceivable. Of course, the specific theory that was proposed was not satisfactory for general physical reasons, but if one such theory is possible, then other and better theories may also be possible, and the natural implication of this argument is ‘Why not try to find them?’ (<cit.>, p. 104)

His scientific program was based on quantum field theory as a way to approach the concept of the infinite levels that he had already pointed out in his early works. His philosophical ideas remained consistent with his early works in their rejection of mechanistic ideas:

As we have seen, relativity theory requires continuity, strict causality (or determinism) and locality. On the other hand, quantum theory requires noncontinuity, non-causality and non-locality. So the basic concepts of relativity and quantum theory directly contradict each other. [...] What is very probably needed instead is a qualitatively new theory, from which both relativity and quantum theory are to be derived as abstractions, approximations and limiting cases. The basic notions of this new theory evidently cannot be found by beginning with those features in which relativity and quantum theory stand in direct contradiction. The best place to begin is with what they have basically in common.
This is undivided wholeness. Though each comes to such wholeness in a different way, it is clear that it is this to which they are both fundamentally pointing. To begin with undivided wholeness means, however, that we must drop the mechanistic order. (<cit.>, p. 223)

§.§ Propensities and the causal interpretation

Bohm had been in touch with Popper since at least 1959 (for the relationship between them, see <cit.> and references therein). It is exactly in that period that Popper—who was advocating for fundamental indeterminism in physics even at the classical level—developed a new interpretation in which probabilities are objective physical properties, i.e., propensities or tendencies for a system to produce an outcome <cit.>. Here we would like to stress that although Bohm never actually pursued a program based on potentialities, he hinted at it on several occasions (see above). As we have seen, he endorsed that view in his Quantum Theory <cit.>, and in his paper with Vigier <cit.> he hinted that the statistical behavior of quantum mechanics constrains the tendencies of the sub-quantum fluid. Looking at Bohm’s correspondence with Popper, we find explicit support of this view: “I feel that what you have to say about propensities make a genuine contribution to clarifying the issue that you discuss" (Bohm to K. Popper on March 15th 1967. PA, Popper’s Archives, Box/Folder: 84/19. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California) <cit.>). This was not appreciated by Popper himself, who should be listed among the many who misinterpreted Bohm, attributing to him a strong commitment to determinism. In fact, when Popper published his book on the foundations of quantum theory in 1982 <cit.>, although praising Bohm for striving for realism, he harshly criticized him for being a determinist. Bohm replied to him, emphasizing once again that he was not committed to determinism and explicitly acknowledging for the first time, to our knowledge, that his view on the causal interpretation can be regarded in terms of potentialities:

“I certainly think that a realistic interpretation of physics is essential. I think also that I understand your propensity interpretation of probability and I have no objections against it. […]. However, I feel that you have not properly understood my own point of view, which is much less different from yours than is implied in your book. Firstly I am not wedded to determinism. It is true that I first used a deterministic version of […] quantum theory. But later, with Vigier, a paper was written, in which we assumed that the movement of the particle was a stochastic process. Clearly that is not determinism. Indeed, we can regard the stochastic movement of the particle as affected by a field of propensities, in accordance with your ideas […] The key question at issue is therefore not that of determinism vs. indeterminism. I personally do not feel addicted to determinism [...]. [W]hat is real has a being independent of the consciousness of the observer. John Bell has used the term “beable" to describe such an independent reality. From the point of view of realism, the main criticism of the orthodox interpretation of the quantum theory is that it has no room in it for “beables". [...] I introduced the notion that the “beables" of the quantum theory are the particles and the wavefunction (which contains information about the propensities). Along with Vigier, I can say that the “beables" are themselves conditioned by such propensities.
What are called the observables of quantum theory are then potentialities of the “beables", realized according to a context, which in current physics, is determined by the experimental arrangement (though in nature, similar contexts will still exist without the intervention of human being). [...] My proposal has been that the “beables" are particles (moving stochastically), along with the wave function. (Bohm to K. Popper 13.07.1984. Box/Folder: 278/2. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California) <cit.>)

§ DISCUSSION AND CONCLUSION

In this paper, we have shown that Bohm was always against mechanism and therefore determinism. We have rebutted the historical narrative according to which one can identify an early period when Bohm was a supporter of Bohr, a later period when he was a committed determinist (influenced by Einstein and by Marxism), and finally a period, after his break with Marxism, in which determinism ceased to be a main concern of his. On the contrary, Bohm's philosophical tenets never changed throughout his whole life: he was always committed to developing a realistic, causal, non-mechanistic view of physics. This led him to develop a new dialectical philosophy composed of infinite levels of description that guided him in his work for the following decades. As such, Bohm would never have accepted determinism, at any stage of his life. In a slogan, Bohm was never a Bohmian. Although the content of this paper is mostly historical in scope, it may also concern the physicists and philosophers who have proclaimed themselves Bohmians. It is undeniably true that Bohm provided the first deterministic hidden variable model of quantum theory. And yet, we just want to stress that for him this was nothing more than a model, a proof of principle that it was possible to do what was considered fundamentally unattainable. However, at the same time, this was for him most unsatisfactory, for it betrayed one of his deepest convictions about nature, namely, that a basic ontology of particles moved around by deterministic laws cannot be the end of the story. Therefore, the many scholars who today support Bohmian mechanics at face value, giving it an ontological role, should be aware that they are advocating a worldview that stems from what its original proposer considered a mere model which could not satisfy the basic standards of acceptability for a physical theory (except internal consistency). Now, while this is obviously a logically acceptable position, they should be aware that they are going directly against the fundamental views of Bohm, and therefore cannot appeal to his authority in any way. This separation between the original thought of Bohm and those who adopted his model was so striking that, shortly before his death, when he became aware of Sheldon Goldstein and Detlev Dürr's work on his ideas, Bohm bitterly confessed to his main collaborator Basil Hiley: “why on earth are they calling it Bohmian mechanics? Haven't they read a word I have written?" <cit.>. So, concerning determinism, Bohm finds himself in a position comparable (fortunately with fewer ethical implications) to that of Einstein with respect to the atomic bomb: it is a historical fact that it was Einstein who suggested to US President Franklin Roosevelt that research on nuclear weapons be pursued in order to preempt Nazi Germany from achieving the same threat. However, for his whole life—before and after—Einstein was a committed pacifist.
Similarly, it is a historical fact that Bohm developed a deterministic interpretation of quantum theory. However, for his whole life—before and after—he was a committed anti-determinist. Invoking Bohm to defend deterministic views of physics is like invoking Einstein to promote nuclear weapons.

§.§ Acknowledgements

The authors would like to thank Basil Hiley for taking time for an interview and valuable discussions. We would also like to express our thanks to Emma Illingworth from the David Bohm Archive at Birkbeck Library for her support during our research.

§ APPENDIX A – EXCERPTS FROM THE CORRESPONDENCE OF D. BOHM

§.§ Excerpt of a letter from Bohm to Miriam Yevick (January 7, 1952)

Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227.

Now, to retain the concept of matter, we must above all retain the idea that in some aspects at least, matter is indestructible and uncreatable. How then do we explain the prevalence of change and the transiency of material things? This is done by the notion of endless transformation. The “things” at each level, are made up of smaller “elements” at a more fundamental level, and it is the motion of these more fundamental elements (not usually directly visible to us, except with the aid of elaborate scientific research) which causes the appearance and disappearance of the “things” existing at a higher level. These more fundamental “elements” however, cannot be permanent, but must be made up of still more fundamental “elements” and so on ad infinitum. Thus, we can see that every “thing” that exists may at some time come into existence and later go out of existence, but there is always a deeper level, in terms of which this change can be viewed rationally as a transformation of a more elementary form of matter, which is not itself basically altered in this particular transformation. Nevertheless, no single “thing” is uncreatable or indestructible. Only matter as a whole in its infinity of properties and potentialities is eternal.

§.§ Excerpt of a letter from Bohm to Schatzman (not dated, 1952)

Letter A1.20, not dated, 1952. <cit.>, p. 351.

For quantum mechanics has shown that "empty" space contains strongly fluctuating electromagnetic fields and more important still, a very high density (infinite according to the present inadequate theories) of negative energy electrons, protons and neutrons. If one adopts the new interpretation of the quantum mechanics, there is no choice but to suppose that these particles are really in existence. One therefore has been back to the old notion of a material substratum filling all space. As I have said, this substratum is very dense, much denser than any other form of matter. In fact, matter as it is usually called, would be only a disturbance in the uniform background of substratum. Light waves, etc. would also be disturbances of the substratum. The mysterious "annihilation" and "creation" of material particles could now be understood naturally; for with the [ ?] of energy, the substratum could be made non-uniform as a spreading wave. These two forms of energy could be transformed into each other when we look out at the sky, space appears to be almost empty, because light waves are scattered only by inhomogeneities in space. Similarly material particles are likewise inhomogeneities propagated freely in a uniform background. Thus, to a naive way of looking, space appears empty, a similar phenomenon appears in connection with the theory of metals.
As you know, an electron will go through a very dense metal without being scattered as long as the crystal lattice is perfectly regular. Only non-uniformities in the lattice will scatter the electron. A naive observer (for example a positivist) would conclude from this evidence that a metal consists of empty space, with a very thin haze of "matter" . I would like to add one point here. It is most likely that not even the substratum particles could be indestructible and unanalysable. Instead, there is probably another substratum below this ( of a qualitatively different kind most probably) and so on ad infinitum. Thus, we should have an infinite series of qualitatively different levels of laws. Any finite number of levels can always be understood by humanity, but never all of them. Thus, ·we can understand more vividly a number of dialectical principles, for example, many people are puzzled by the dialectical assertion that matter must be eternal ( i.e. no creation). The answer is that at any particular level, the forms of matter as a whole, in its infinite number of properties and inter -connections is eternal. Secondly, consider the statement of dialectics chat "a thing is not equal to itself" . this we understand by the [ ? ] that a materiel "thing" contains an infinity of properties whereas the concepts usually defining what the thing "is" cover only a finite number of these properties. Thus, a thing is not only "what it is" but also a large nun1ber of other things, which will manifest themselves later ; or in other words in "what is coming to be". Moreover, the levels not taken into account in the usual definition of the "theory" will generally produce effects that are in contradiction with the permanent existence of this "thing" . §.§ Excerpt of a letter from Bohm to Miriam Yevick (January 23, 1952) Letter 66. Folder C117, dated: Jan 23, 1952, <cit.>, p. 235: [I]t is essential to think that things are not only “what they are known to be”, but also a whole list of different things connected with the infinite number of levels not known to us. These other things may be thought of roughly as “what is coming into being” since it is in the future form of the thing that the underlying factors will ultimately manifest themselves. [...] As in the structure of “elementary” forms of matter human beings contain an infinite number of at present unknown (or poorly known) levels of complexity of behavior. This fact has two important implications: (1) The most obvious, that by scientific study, we may ultimately learn to control some of the factors at any particular level, and thus to produce startling changes in human nature (including even ourselves) (2) Before this can be done, the different levels will manifest themselves in that people cannot correctly be regarded as “being only what they are”, but that they can also undergo fundamental transformations of character with changing conditions. [...] As for the book [<cit.>], you must try to imagine the situation when I wrote it. You suggest that I may have had some dishonesty, perhaps some desire to please the “big shots” in writing it, and that this led me to back up the usual interpretation of the quantum theory. You must remember several things however: (1) When I wrote this book, there did not exist anywhere a clear statement of the basis of the theory. 
There existed some books which made ridiculous abstract mathematical postulates that no one could possibly understand, and there were other discussions, such as those of Bohr, which aimed at discussing the physics, but in an incredibly vague way. A student at Princeton once told me that Bohr’s statements not only cancelled out with regard to their meaning in the first order, but also with regard to connotation in the second order. It was therefore necessary to go to the third order to find what Bohr meant. When I first started to study this subject 15 years ago, it fascinated me and puzzled me. I had no reason to suspect that the “big shots” had muddled up the subject, since after all, had they not been astonishingly successful in predicting experiment after experiment? Above all, I never got over being puzzled by the theory. When I started the book, I was in no position to see through the matter, because I still hadn’t made complete sense of it. All I knew was that there was one school, which utterly repelled me, in which one was supposed to introduce abstract mathematical postulates, and be satisfied if the calculations agreed with experiment. Against this, Bohr’s school seemed to be a big improvement, because at least he tried to explain the physical meaning of the theory. Moreover, there was an element of dialectics in Bohr’s point of view which attracted me. It seemed progressive because it broke the old mechanist materialist determinism, which left no room for growth and development of something new. After I had written the book, I finally began to grasp the full meaning of the theory, and could see that it leads inevitably to a form of (dialectical) idealism. But this was not so clear when I started, because of the general confusion in the literature. If you tried to read other books, you wouldn’t be able to say that you see through this stuff, just because the other books leave things just vague enough so that you don’t know quite what you are seeing through. In writing this book, I hope that I have not only clarified the issues for myself, but perhaps for other people too. I suspect that a clear presentation of Bohr’s point of view (the first clear one, if I may boast a little) will do more to favor the causal interpretation than to favor Bohr’s interpretation. Now with my new point of view, I can see an infinitely better way to get out of the trap of mechanistic determinism; namely through the concept of an unlimited number of causal levels. I would call Bohr’s point of view “static dialectics”. This is because it is a form of “slinging the lingo” in which the dialectically opposing concepts are made just vague enough so that the contradictions between them are avoided. Thus, one is not faced with the necessity of seeking new concepts that synthesise the opposites, and the dynamic aspects of dialectics (i.e. the contradictions leading to something new at another level) are lost. Finally, I should say that I wrote the book in a spirit of struggle against the obscurantist notion that nature can from now on be understood only in terms of abstract mathematical postulates. The struggle was well worth while, since it led me to a new point of view. §.§ Excerpt of a letter from Bohm to Miriam Yevick (March 31, 1952) Letter 73. Folder C118, dated: Rec Mar 31 [1952], <cit.>, pp. 254-55: I think that the explicit recognition of a limitless number of levels would be a big step forward in science. 
Most of the errors of both the positivist and the 19th century “mechanical” materialists spring from an implicit assumption that the laws of nature will some day finally be understood in terms of a limited number of hypotheses. From this comes the nightmare of a mechanically determined universe that follows an inevitable course. To avoid this nightmare, positivists and idealists have given up causality and assumed a “spontaneous” (i.e., uncaused) element in physical processes. [...] The concept of a limitless number of levels suggests, however that the work of science is never finished and leads one at each level to seek the contradictions which can [unreadable] at the next level etc. Thus it provides a motive power for continual development & growth. Moreover, the nightmare of complete determinism is avoided. Although each level is causal, the totality of levels cannot ever be taken into account. Thus, as a matter of principle, we say that complete determinism could not even be conceived of, yet, each level can be determined. Here, we part company with the believers in “spontaneity” for we say that what appears to be spontaneous is caused by factors, in principle, knowable, but now hidden to us. But to be able to say this without implying complete determinism, we must assume an unlimited number of levels. It is the unlimited number of levels which give matter its “non-mechanical” aspects, for if the analysis of physical laws could ever be completed, the theory would either be deterministic + “mechanical”, or “indeterministic” and “spontaneous”. Another interesting point – if there are an infinite number of levels, we can expect that all existing limitations (such as speed of light and uncertainty principle) can be overcome with the aid of more fundamental levels. Thus, by the use of causal laws, humanity can move toward freedom. Whereas, in the ignorance of causal laws, humanity is enslaved either to determinism or to “spontaneity”, which, being pure accident, is just as tyrannical. One other point, a distinction between “determinism” and “causality”. Although both words have roughly the same meaning, their implications are different. For causality implies (a) that if you know the causes, you can predict the effects. (b) That if you change the causes, you can change the effects in a predictable way. But determinism implies only predictability. In fact, with complete determinism, it would be impossible for us ever to change anything. Now, if there are a finite number of levels, then complete causality obviously implies complete determinism. But if there are an infinite number, then the two concepts part company. For we can have complete causality at every level, in the sense that we can use this causality to change the world in a predictable way,with the error in the predictions dependent only on our level of knowledge; whereas we can in no sense conceive of the world as completely determined. In this connection, note that the statement that new things can come into existence is consistent with causality, only if what is already in existence has an infinite number of levels. For if we have a finite number of causal levels, then the future is already contained logically in the present, but not if we have an infinite number. 
The appearance of qualitatively new things with time is possible with an infinite number, because the effects of the limitless number of lower levels can always surge up into a higher level (and vice versa) producing qualitative [missing words] describable as a rearrangement of things already in existence. §.§ Excerpt of a letter from Bohm to Melba Phillips (October 13, 1953) Letter 43. Folder C48, dated: Oct 13, 1953, <cit.>, p. 164: Also an important additional aspect of causality needs to be discussed in more detail – namely – causality as a means of determining the mode of being of qualitatively new things, which grow out of the old things. The basic aspect of mechanism is that (as in an idealized machine) the universe is conceived of as made of basic elements (particles, fields, or what have you) which simply interact according to fixed roles, and which themselves never change as a result of the processes in which they take part. Naturally, every physical theory has some non-mechanistic aspects. For example, in the field theory, new entities (waves+particle — like singularities) can arise out of the interconnections of the basic field elements through the field equations (especially if the latter are non-linear). Also in a particle theory, new entities can arise out of interactions. [...] Nevertheless, the basic elements in such theories are usually conceived of as fixed and eternal. However, the concept of the infinity of levels shows that there need exist in nature no such thing as a basic element which never changes. Thus, causal laws not only determine the future in a mechanical sense; i.e., in the sense of determining quantitative changes in the arrangements of entities whose intrinsic character is fixed. The causal laws also tell when qualitative changes will occur and may define the characteristics of the new entities that can come into being. Thus, causality is a broader concept than that of mechanical determinism. It contains limited mechanical determinism as a special case. Indeed, the concept of causality is continually evolving with the development of science and other aspects of human activity, so that the potential richness of this concept has no limit. In other words, we may expect future generations to discover more and more aspects of the concept of causality, thus transforming this concept in a way that we have at present no inkling of. Yet these changes will not be arbitrary, but will instead grow in a definite way out of the efforts to solve real problems presented by the successive levels of reality that we shall be able to reach. A “mechanistic” attitude toward science however, tends to limit the growth of our concepts in an arbitrary and dogmatically conceived way. Such a mechanistic attitude refers not only, however, to the mechanistic determinists, but also to the “mechanistic indeterminists”, who insist that in the quantum of action, we have reached an ultimate, indivisible, and unanalyzable entity, which will never be found to have a structure understandable in terms of a deeper level. In fact, the quantum of action presents many aspects of the ultimate particles of the atomists, so that the insistence that the quantum will never be analyzed is as mechanistic as a theory of point particles following determined orbits. 
Similarly, the insistence that chance+probability are not subject to a causal analysis at a deeper level constitutes a mechanistic attitude toward these things, since chance+probability are conceived of as existing in themselves and functioning under all possible circumstances according to fixed rules. [...] According to the mechanistic indeterminists, it is fixed by an equally mechanical "chance" which is conceived of as absolute and not itself capable of change or development. We may make an analogy of a man who is offered the possibility of 100 different ways of being executed. The deterministic school of executioners would choose the way according to certain definite factors, e.g., the chemical concentration of the blood, the wave-length of the light emitted from his skin, etc. The indeterministic school would choose the way by spinning a roulette wheel. The non-mechanistic school would seek a qualitative change - i.e., to find a way to escape execution, taking advantage of all factors, both "determinate" and "chance". So the essential point is that because of the infinite complexity and depth of the laws governing the nature of matter, no preassigned scheme of things can remain adequate forever, not even if it is restricted to being a general framework or outline. But this is just what most people find it difficult to accept – perhaps because our society requires us to accept the idea that a certain general form of social organization is inevitable, although within this general framework, we may make various quantitative changes, either by chance, or by determinate rule, as we please, as long as nothing essential is ever changed. [...] My own opinion is that the synthesis will eventually have to be on a still deeper level and will have to introduce new kinds of entities that are neither particles nor fields, of which we have only a vague idea at present. §.§ Excerpt of a letter from Bohm to Melba Phillips (March 15, 1954) Letter 46. Folder C48, dated: March 15, 1954, <cit.>, p. 170: First of all, it is necessary to sharpen the distinction between causality and mechanism (or deterministic mechanism). Mechanism is characterized by two fundamental aspects: (1) Everything is made of certain basic elements which themselves never change in essence (i.e., qualitatively). (2) All that these elements can do is to undergo some quantitative change according to some fixed laws of change. For example, if they are bodies, they can move in space. If they are fields, they can change their numerical values, etc. But the basic elements themselves never undergo qualitative change. If we postulate an infinity of levels, then we make a step beyond mechanism. For the elements existing at each level are made of still smaller elements in motion (i.e., changing quantitatively), and the mode of being of the higher level elements arises out of the motions of the lower level elements. Thus, there are no elements that can never change. Indeed, even if we have a finite number of levels, some qualitative change is possible within a mechanistic theory. For example, with atoms in chaotic motion, we obtain new large scale properties, such as pressure, temperature, etc., new entities, such as gas, liquid, solid, and qualitative changes between them. Now, at first sight, it may seem that we could eliminate the large-scale level by analyzing it in terms of its basic molecular motions. And if there were a finite number of levels, this would be true.
But if there are an infinite number, then each level stands on a footing that is, in the long run, as basic as that of any other. For every level has below it a deeper one. Indeed, matter can be regarded as made up of the totality of all levels. Each level makes its own specific contribution to the totality. Of course, each level finds an image in others, so that one can deduce many properties of a given level by studying other levels. Yet, there may be properties that cannot so be deduced. Not only may these properties be peculiar to a given level, but they may involve “crossing” of levels. [...] Now, a mechanical law is characterized by the fact that it specifies a rule governing quantitative changes of elements that are fixed in nature. A more general causal law may express the conditions governing qualitative change. But if it does this, it must do something else that a mechanical law is never called upon to do. It must not only determine the mode of change, but also the mode of being of the elements when they are not changing. A mechanical law simply postulates a certain fixed and eternal mode of being of the elements, so that there is a sharp separation between the laws of change and the mode of being of the elements. A more general causal law does not make such a sharp separation. Thus, in the theory of evolution, the principle of natural selection enables us to say something about the mode of being of the various forms of life, in terms of their past history of evolution, struggle for survival, etc. Similarly, in embryology, one can in part, understand the characteristic properties of an animal at a given stage of development in terms of its past history which helped make it what it now is. Thus, a more general causal law may be historical in form. By this, I mean that the very mode of being of the elements which enter into the laws is a necessary consequence of the causal laws governing the whole chain of development.[...] A causal law may express the necessity of a fundamental qualitative change, so that what develops may have something new in it. This something new arise[s] as a necessary consequence of what is old, and yet it is not just a rearrangement or a quantitative change of the old elements. §.§ Excerpt of a letter from Bohm to Miriam Yevick (September 10, 1954) Letter 121. Folder C124, dated: Sept 10 1954, <cit.>, p. 419-22: The picture which I propose is this: The totality of causal laws includes both statistical and individual laws. We start with this totality as our basic reality. Then, we may take various views of this totality, some of which stress the individual aspect of the laws, and some of which stress the statistical aspect. But there is no such thing as a perfect individual law, because there are always fluctuations and errors coming from what has been left out. [...] We start with the idea of a real world, which is in a continual process of change and development. We must now find means of analyzing this change and development. To begin, we seek those aspects that have a relative permanence. Over a short period of time, these aspects may be idealized and abstracted as having a being, conceived of as static. But like the mathematical point, the notion of a property or an aspect of things as having such a static and complete being is only a simplifying abstraction. In reality it does not have such static being, as is shown by the fact that it changes after some time. 
The fundamental reality is that of matter in being and in process of change, or of becoming, as it may more accurately be called. [...] We note that causal laws are relationships between various aspects of reality at different times. Depending on which aspects that we find are necessary, possible, or convenient to relate, we will have different kinds of causal laws, some more nearly statistical and some more nearly individual. But the essential point is that one and the same system simultaneously obeys individual and statistical laws. [...] Thus, we do not regard the world as made of certain fixed eternal basic elements, satisfying corresponding laws. [...] [S]tatistical laws are not purely a matter of convenience and practicability. Moreover every level of individual law ultimately has some deeper statistical basis. A more accurate statement of the problem is thus: Both for reasons of practical convenience and for reasons of principle, we study statistical aggregates in their own right. [...] What must be stressed however is that individual and statistical laws are abstractions as limiting cases of laws in general, and that there remains before us the problem of formulating more general types of laws that could connect these two limiting cases in a continuous and rationally understandable way. § APPENDIX B – EXCERPTS FROM THE WRITINGS OF D. BOHM §.§ Excerpts from Causality and Chance (1957) Evidently, then, the applicability of the theory of probability to scientific and other statistical problems has no essential relationship either to our knowledge or to our ignorance. Rather, it depends only on the objective existence of certain regularities that are characteristic of the systems and processes under discussion, regularities which imply that the long run or average behaviour in a large aggregate of objects or events is approximately independent of the precise details that determine exactly what will happen in each individual case. On the basis of the above considerations, we are then led to interpret the probability of, for example, a given result in the game of dice as an objective property associated with the dice that are being used and with the process by which they are thrown, a property that can be defined independently of the question of whether or not we know enough to predict what will happen in each individual throw. (p. 27) When we study any particular set of processes within one of its relatively autonomous contexts, we discover that certain relationships remain constant under a wide range of changes of the detailed behaviour of the things that enter into this context. Such constancy is interpreted not as a coincidence, but rather as an objective necessity inherent in the nature of the things we are studying. These necessary relationships are then manifestations of the causal laws applying in the context in question. These laws do not have to determine a given effect uniquely. Instead, they may (in the case of one-to-many relationships) determine only that the effect must remain within a certain range of possibilities. (p. 29) Now, as we shall see in this chapter and in other parts of the book, the mechanistic philosophy has taken many specific forms throughout the development of science. 
The most essential aspects of this philosophy seem to the author, however, to be its assumption that the great diversity of things that appear in all of our experience, every day as well as scientific, can all be reduced completely and perfectly to nothing more than consequences of the operation of an absolute and final set of purely quantitative laws determining the behaviour of a few kinds of basic entities or variables. (p. 37) The essential change brought in by this new point of view was the introduction of an element of arbitrariness into the theory. One still thought of the universe as a gigantic mechanical system with the property that everything in it can in principle be reduced completely and perfectly to nothing more than the results of purely quantitative changes taking place in suitable mechanical parameters. But instead of having its behaviour determined completely in terms of definite laws governing these parameters, this universal system could continually be subject to irregular alterations in the course of its motion. [...] For we now see that there is a whole level in which chance fluctuations are an inseparable part of the mode of being of things, so that they must be interwoven into the fabric of the theory of this level in a fundamental way. Thus, we have been led to take an important step beyond the classical notion of chance as nothing more than the effects of contingencies that modify the boundary conditions or introduce randomly fluctuating external forces in a way that is not predictable within the context of interest, but which play no essential part in the formulation of the basic laws that apply within such a context. If we stopped at this point, however, we should, as we have seen in the previous chapter, merely have switched from deterministic to indeterministic mechanism. To avoid indeterministic mechanism, we must suppose that, in their turn, the chance fluctuations come from something else. Since, as Heisenberg and Bohr have shown so well, there is no room in the quantum domain for anything to exist in which these fluctuations might originate, it is clear that to find their origin we must go to some new domain. [...] Of course, if one were now to make the assumption that these new laws would surely be nothing more than purely causal laws, one would then fall back into deterministic mechanism, while the similar assumption that they were surely nothing more than laws of probability would throw one back into indeterministic mechanism. On-the other hand, we have in the proposals made in this chapter avoided both these dogmatic and arbitrary extremes, since we have considered, as the situation demanded, the possibility that there are new features to the causal laws (a “quantum force” not appearing at higher levels) as well as to the laws of chance (random fluctuations originating in the sub-quantum mechanical level). Of course, as we have indicated in Section 5, we do not regard our earlier proposals as providing a completely satisfactory and definitive interpretation of the laws of the quantum domain. The basic reason is, in a sense, that the fundamental concepts considered in the theory (waves and particles in interaction) are still very probably too close to those applying in the classical domain to be appropriate to a completely new domain such as that treated in the quantum theory. (pp. 
126-127) Actually, however, neither causal laws nor laws of chance can ever be perfectly correct, because each inevitably leaves out some aspect of what is happening in broader contexts. [...] Thus, we are led to regard these two kinds of laws as effectively furnishing different views of any given natural process, such that at times we may need one view or the other to catch what is essential, while at still other times, we may have to combine both views in an appropriate way. But we do not assume, as is generally done in a mechanistic philosophy, that the whole of nature can eventually be treated completely perfectly and unconditionally in terms of just one of these sides, so that the other will be seen to be inessential, a mere shadow, that makes no fundamental contribution to our representation of nature as a whole. (p. 143) § APPENDIX C – EXCERPTS FROM THE SECONDARY LITERATURE ABOUT D. BOHM §.§ Excerpt from Freire, O. Jr, David Bohm: A life dedicated to understanding the quantum world Évry Schatzman, who was the intermediary for Bohm to contact Vigier, wrote to Bohm: “Any physical theory should be completely deterministic, because an affirmation of the dialectical materialism is that there is an objective reality and that this reality is cognizable, that we can built an image of that reality in our mind”. Schatzman was far from modest about the work which was being done by Bohm and Vigier, comparing it to Marx’s works: “We should be grateful to people like Vigier, like you, who have with tenacity devoted their efforts to the rebuilding of the quantum theory on its feet, just like the dialectic of Hegel, which had to be put back on its feet!” However, if the Marxist background was the cement, the collaboration between Bohm and Vigier blossomed in a fruitful scientific collaboration. (<cit.>, p. 91)
http://arxiv.org/abs/2307.04019v3
20230708173320
GP-guided MPPI for Efficient Navigation in Complex Unknown Cluttered Environments
[ "Ihab S. Mohamed", "Mahmoud Ali", "Lantao Liu" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.SY", "eess.SY" ]
GP-guided MPPI for Efficient Navigation in Complex Unknown Cluttered Environments [ August 12, 2023 ======================================================================================================== Robotic navigation in unknown, cluttered environments with limited sensing capabilities poses significant challenges in robotics. Local trajectory optimization methods, such as Model Predictive Path Integral (MPPI), are a promising solution to this challenge. However, global guidance is required to ensure effective navigation, especially when encountering challenging environmental conditions or navigating beyond the planning horizon. This study presents the GP-MPPI, an online learning-based control strategy that integrates MPPI with a local perception model based on Sparse Gaussian Process (SGP). The key idea is to leverage the learning capability of SGP to construct a variance (uncertainty) surface, which enables the robot to learn about the navigable space surrounding it, identify a set of suggested subgoals, and ultimately recommend the optimal subgoal that minimizes a predefined cost function to the local MPPI planner. Afterward, MPPI computes the optimal control sequence that satisfies the robot and collision avoidance constraints. Such an approach eliminates the necessity of a global map of the environment or an offline training process. We validate the efficiency and robustness of our proposed control strategy through both simulated and real-world experiments of 2D autonomous navigation tasks in complex unknown environments, demonstrating its superiority in guiding the robot safely towards its desired goal while avoiding obstacles and escaping entrapment in local minima. The GPU implementation of GP-MPPI, including the supplementary video, is available at <https://github.com/IhabMohamed/GP-MPPI>. Autonomous vehicle navigation, MPPI, sparse Gaussian process (SGP), occupancy grid map, path planning. § INTRODUCTION AND RELATED WORK Autonomous navigation of mobile robots in unknown, cluttered, and unpredictable environments with limited sensor capabilities is a challenging task owing to the inherent uncertainty and complexity of such environments. To tackle this challenge, a receding-horizon strategy such as Model Predictive Control (MPC) is commonly employed. The MPC control framework allows the robot to simultaneously plan a short trajectory (sequence of actions), following which the robot executes the immediate action while planning a subsequent trajectory. To successfully achieve receding-horizon planning, the robot must consider both safety and persistent feasibility, where safety is achieved by avoiding collisions with any obstacles while executing a planned trajectory, and persistent feasibility is maintained by always generating a safe trajectory that does not result in dead-ends or local minima while progressing towards the desired goal. One of the significant challenges in robot motion planning is that the desired goal is often situated beyond the planning horizon, which requires the use of local subgoals or cost-to-go heuristics for motion safety and persistent feasibility. A common strategy is to rely on single-query motion planning algorithms, such as A^* and RRT^X, to identify feasible paths that direct the local planner towards its desired goal <cit.>.
For instance, the RRT^X algorithm, introduced in <cit.>, incorporates replanning techniques from Dynamic Rapidly-exploring Random Trees (DRRT) and Rapid-exploring Random Trees (RRT^*) algorithms to adjust the path during exploration based on environmental changes. However, due to its high computational demands, implementing this algorithm in real-time on a robot can be challenging. One alternative method to achieve efficient solutions for motion planning problems is the integration of MPC with data-driven methods, also known as learning-based MPC <cit.>. To name a few, a subgoal planning policy using Deep Reinforcement Learning (DRL) is recently proposed to guide the local MPC planner to navigate in crowded surroundings <cit.>. Similarly, RL was utilized to choose the next subgoal from a set of predefined possibilities <cit.>, which guides the robot through challenging environments with dead-end corridors while also prevents the MPC planner from getting trapped in local minima. Another related work that combines learning with MPC is POLO which aims to enhance MPC performance by learning a global value function <cit.>. Most of these approaches typically rely on either offline training or having access to the global map of the environment. In addition, many recent studies have suggested combining Gaussian Process (GP) with MPC to learn system dynamics, leading to better control performance and robustness to uncertainty <cit.>. Another research avenue employed gap-based techniques that identify gaps as free spaces between obstacles, enabling a robot to move through them while avoiding local minima and obstacles. The first developed method was the Nearness Diagram (ND) <cit.>, but many of its variants exhibited undesired oscillatory motion. To overcome these limitations, robotics researchers have developed techniques that rely on the geometry of the gap. One such technique is the Follow-the-Gap Method (FGM), which selects a gap based on its area and computes the robot's heading using the gap center's direction relative to both the robot and the final goal <cit.>. Another approach is the sub-goal seeking method, which assigns a cost to each sub-goal based on the goal heading error with respect to the robot and the gap heading, and then selects the sub-goal with the lowest cost (error) <cit.>. The Admissible Gap (AG) method <cit.>, an iterative algorithm that takes into account the exact shape and kinematic constraints of the robot, identifies possible admissible gaps, and selects the nearest gap as the goal. Different from all these strategies, our proposed framework leverages a Sparse variant of Gaussian Process (SGP) which is a new perception model by “abstracting” local perception data so that the local sub-goal for navigation can be naturally extracted. Specifically, we introduce the GP-MPPI control strategy, which enhances the state-of-the-art sampling-based MPC, Model Predictive Path Integral (MPPI) <cit.>, by incorporating the GP-subgoal recommender policy. Such a policy takes advantage of the SGP occupancy model to learn about the navigable space surrounding the robot, identifies a set of suggested subgoals, and ultimately recommends the optimal subgoal that minimizes a predefined cost function to the MPPI local planner, as demonstrated in Fig. <ref>. Subsequently, MPPI computes the optimal control sequence that satisfies the robot and collision avoidance constraints while moving towards the recommended subgoal, followed by executing the first optimal control 𝐮_0 to the robot. 
In summary, the contributions of this work can be summarized as follows: * We propose an online learning-based control strategy that recommends subgoals solely based on local sensory information, ensuring safety and persistent feasibility; such an approach eliminates the need for a global map of the environment or an offline training process as in RL techniques, resulting in a more flexible and agile control framework that can be easily deployed in different unexplored environments, as revealed in Section <ref>. * To the best of the authors' knowledge, this is the first attempt to utilize the SGP occupancy model in conjunction with sampling-based trajectory optimization methods, specifically MPPI, to efficiently explore the navigable space surrounding the robot. * In Sections <ref> and <ref>, we validate our GP-MPPI control strategy for collision-free navigation in complex and unknown cluttered environments, using both simulation and experimental demonstrations; by comparing it with two baseline sampling-based approaches (namely, MPPI <cit.>, and log-MPPI <cit.>), we show its effectiveness in overcoming local minima that may arise when the sampled trajectories of MPPI are concentrated in high-cost regions or due to challenging environmental conditions. § PRELIMINARIES To provide the necessary background for our proposed work, in this section, we formulate the optimal control problem and present a concise overview of the MPPI control strategy that can be utilized to address this problem, along with a brief introduction to the Sparse Gaussian Process (SGP) which is the backbone of our GP-subgoal recommender policy. §.§ Problem Formulation Consider a nonlinear discrete-time stochastic dynamical system 𝐱_k+1=f(𝐱_k,𝐮_k+δ𝐮_k), with 𝐱_k ∈ℝ^n_x and 𝐮_k ∈ℝ^n_u representing the state of the system and its control input, respectively. The disturbance introduced into the control input, δ𝐮_k, is modeled as a zero-mean Gaussian noise with co-variance Σ_𝐮. Given a finite time-horizon N, we define the control sequence 𝐔 as 𝐔 = [𝐮_0, 𝐮_1, …,𝐮_N-1]^⊤∈ℝ^n_u N and the resulting state trajectory of the system being controlled as 𝐗 = [𝐱_0, 𝐱_1, …, 𝐱_N]^⊤∈ℝ^n_x (N+1). Furthermore, 𝒳^d is used to represent the d-dimensional space with 𝒳_rob(𝐱_k) ⊂𝒳^d and 𝒳_o b s⊂𝒳^d representing the robot's occupied area and obstacles' area, respectively. Let 𝐱_s and 𝐱_f denote the initial and desired (goal) state of the robot, respectively. Given 𝒳_rob(𝐱_k), 𝒳_o b s, 𝐱_s, and 𝐱_f, we aim to find the optimal control sequence, 𝐔, that allows the robot to safely and efficiently navigate from its initial state, 𝐱_s, to the desired state, 𝐱_f, by avoiding both getting stuck in local minima and collisions with obstacles, while minimizing a cost function J. The optimization problem at hand can be approached utilizing the classical MPPI control strategy described in <cit.>. This optimization can be mathematically expressed as in (<ref>), with the objective of minimizing the cost function, J, which is comprised of the expectation of a combination of state terminal cost ϕ(𝐱_N), running cost q(𝐱_k), and control inputs 𝐮_k, weighted by the positive-definite matrix R∈ℝ^n_u × n_u, taking into consideration the system dynamics outlined in (<ref>) and constraints such as collision avoidance and control constraints as stated in (<ref>). min _𝐔 J = 𝔼[ϕ(𝐱_N)+∑_k=0^N-1(q(𝐱_k)+1/2𝐮_k^⊤ R 𝐮_k)], s.t. 𝐱_k+1=f(𝐱_k, 𝐮_k+δ𝐮_k), δ𝐮_k∼𝒩(0, Σ_𝐮), 𝒳_rob(𝐱_k) ∩𝒳_obs=∅, 𝐡(𝐱_k, 𝐮_k) ≤ 0, 𝐱_0 = 𝐱_s, 𝐮_k∈𝕌, 𝐱_k∈𝕏. 
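To make these ingredients concrete, the short Python sketch below evaluates the bracketed cost in the objective for a single candidate control sequence under one sampled noise realization. It is only an illustration of the problem structure: the helper names (dynamics_step, running_cost, terminal_cost) and the use of a NumPy random generator are our placeholders, not the authors' implementation.

```python
import numpy as np

def sequence_cost(x0, U, Sigma_u, R, dynamics_step, running_cost, terminal_cost, rng):
    """Evaluate phi(x_N) + sum_k [ q(x_k) + 0.5 u_k^T R u_k ] for one noise draw."""
    N, n_u = U.shape
    dU = rng.multivariate_normal(np.zeros(n_u), Sigma_u, size=N)  # dU_k ~ N(0, Sigma_u)
    x, cost = np.asarray(x0, dtype=float), 0.0
    for k in range(N):
        cost += running_cost(x) + 0.5 * U[k] @ R @ U[k]   # q(x_k) + 0.5 u_k^T R u_k
        x = dynamics_step(x, U[k] + dU[k])                # x_{k+1} = f(x_k, u_k + dU_k)
    return cost + terminal_cost(x)                        # terminal cost phi(x_N)
```

The expectation in the objective would then be approximated by averaging such evaluations over many noise draws, which is precisely the Monte Carlo viewpoint developed in the next subsection.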
§.§ Overview of MPPI Control Strategy In order to solve the optimization control problem defined in (<ref>), MPPI leverages Monte Carlo simulation to generate a significant number of real-time simulated trajectories by propagating them from the underlying system dynamics. It then evaluates the cost-to-go of each trajectory based on a predefined cost function and updates the optimal control sequence by considering a weighted average cost from all of the simulated trajectories. More details are given in <cit.>. Subsequently, each trajectory τ_i in the time-horizon N can have its cost-to-go evaluated as given in (<ref>), where the cost-to-go S̃(τ_i) is calculated as the sum of the terminal state cost ϕ(𝐱_N) and the instantaneous running cost q̃(𝐱_k, 𝐮_k, δ𝐮_k,i) over all time steps. The instantaneous running cost, q̃, expressed in (<ref>), is comprised of the state-dependent running cost q(𝐱_k) and the quadratic control cost q(𝐮_k, δ𝐮_k), where γ_𝐮 = ν -1/2ν and the aggressiveness in exploring the state-space is determined by the parameter ν∈ℝ^+. Specifically, S̃(τ_i ) =ϕ(𝐱_N) + ∑_k=0^N-1q̃(𝐱_k, 𝐮_k, δ𝐮_k,i) ∀ i ∈{0, ⋯, M-1}, q̃= q(𝐱_k)_State-dep.+ γ_𝐮δ𝐮_k,i^⊤ R δ𝐮_k,i+ 𝐮_k^⊤ R δ𝐮_k,i+ 1/2𝐮_k^⊤ R 𝐮_k_q(𝐮_k, δ𝐮_k): Quadratic Control Cost. As outlined in (<ref>) from <cit.>, the optimal control sequence {𝐮_k}_k=0^N-1 in the vanilla MPPI algorithm is iteratively updated by taking a weighted average cost from all simulated trajectories, where S̃(τ_m) represents the cost-to-go of the m^th trajectory, and λ∈ℝ^+ denotes the “inverse temperature”, which regulates the selectiveness of the weighted average of the trajectories. After smoothing the resulting control sequence with a Savitzky-Galoy filter <cit.>, the first control 𝐮_0 is executed in the system, with the remaining sequence utilized as a warm-start for the next optimization step. Formally, 𝐮_k←𝐮_k +∑_m=0^M-1exp( -1/λS̃(τ_m) ) δ𝐮_k, m/∑_m=0^M-1exp( -1/λS̃(τ_m) ). §.§ Sparse Gaussian Process Gaussian Process (GP) is a well-established non-parametric model described by a mean function m(z) and a co-variance function k(z, z^') (also referred to as kernel function), where z∈ℝ^n_g is the input to the GP <cit.>; it can be mathematically expressed as f(𝐳) ∼𝒢 𝒫(m(𝐳), k(𝐳, 𝐳^')). Let 𝒟 = {(𝐳_i, y_i)}_i=1^n denote a dataset consisting of n input-output pairs, where each output y_i ∈ℝ is assumed to be the sum of an unknown underlying function f(𝐳_i) and Gaussian noise ϵ_i with a zero-mean and variance σ^2, i.e., ϵ_i ∼𝒩(0, σ^2). In the context of GP regression, to estimate the output y^* for a given new input z^*, the following GP prediction equation is employed p(y^* | y) = 𝒩(y^* | m_y(z^*), k_y(z^*,z^*) + σ^2), m_𝐲(𝐳) =K_𝐳 n(σ^2 I+K_n n)^-1𝐲, k_𝐲(𝐳, 𝐳^') =k(𝐳, 𝐳^')-K_𝐳 n(σ^2 I+K_n n)^-1 K_n 𝐳^', where m_𝐲(𝐳) and k_𝐲(z,z^') are the GP posterior mean and co-variance functions, respectively, while K_nn∈ℝ^n × n refers to the n × n co-variance matrix of the training inputs and K_𝐳n∈ℝ^n is n-dimensional row vector of kernel function values between 𝐳 and the training inputs, with K_n𝐳 = K_𝐳n^⊤. Achieving a more accurate GP prediction requires the optimization of hyper-parameters, such as kernel parameters Θ and noise variance σ^2, by maximizing the log marginal likelihood log p(𝐲)=log[𝒩(𝐲|0, σ^2 I+K_n n)]. The standard GP can be computationally intensive due to its complexity of 𝒪(n^3), where n represents the number of training instances. 
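As a brief aside on the MPPI step just described, the weighted-average update reduces to a softmax over rollout costs. The NumPy sketch below is our own minimal rendition; the variable names and the subtraction of the minimum cost as a numerical safeguard are our choices, not the authors' GPU implementation.

```python
import numpy as np

def mppi_update(U, dU, costs, lam):
    """Update the nominal controls with a softmax-weighted average of perturbations.

    U     : (N, n_u) nominal control sequence
    dU    : (M, N, n_u) sampled perturbations delta_u for the M rollouts
    costs : (M,) cost-to-go S_tilde of each rollout
    lam   : inverse temperature lambda > 0
    """
    w = np.exp(-(costs - costs.min()) / lam)      # shift by the minimum cost for stability
    w /= w.sum()                                  # normalized importance weights
    return U + np.einsum('m,mkj->kj', w, dU)      # u_k <- u_k + sum_m w_m * delta_u_{k,m}
```

The first entry of the updated (and, in the paper, smoothed) sequence is executed while the rest warm-starts the next cycle. Regarding the Gaussian-process predictor above, note that evaluating it requires factorizing the n x n matrix σ^2 I + K_nn, which is where the cubic cost just mentioned comes from.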
To mitigate this issue, various approximation methods, collectively known as Sparse Gaussian Process (SGP), have been developed as an alternative approach. Instead of using the complete training data, SGP employs a smaller set of m_s training points, called inducing points Z_m_s, resulting in a more efficient process and a lower computation complexity of 𝒪(n m_s^2)  <cit.>. Our present work leverages the variational SGP method, proposed in <cit.>, to approximate the true posterior of a GP p(f|𝐲) using an approximated variational posterior distribution q(f,f_m_s), where f_m_s are the values of the underlying function f at the inducing points Z_m_s. This approximation is done by augmenting the true posterior with the variable f_m_s such as p(f,f_m_s|𝐲) = p(f|f_m_s) p(f_m_s|y). Then, the approximated variational distribution q(f,f_m_s) can be factorized in the same manner as the augmented true posterior, as follows q(f,f_m_s) = p(f|f_m_s)ϕ(f_m_s), where ϕ(f_m_s) is an unconstrained variational distribution over f _m_s and p(f|f_m_s) is the conditional GP prior. By minimizing the Kullback-Leibler (KL) divergence between the approximated and true posteriors, 𝕂𝕃[q(f, f_m_s)||p(f|𝐲)], the variational SGP obtains estimates of the inducing inputs Z_m_s and hyperparameters (Θ, σ^2). § GP-MPPI CONTROL STRATEGY The goal of our present research, as outlined in (<ref>), is to determine the optimal control sequence 𝐔={𝐮_k}_k=0^N-1 that enables safe and efficient navigation of the mobile robots through complex and unknown cluttered environments, while avoiding collisions with obstacles and getting trapped in local minima. Although the MPPI control framework, as summarized in <cit.>, has many positive attributes, it is prone to generating infeasible control sequences or trajectories, particularly when the distribution of all sampled trajectories are concentrated within high-cost regions. To mitigate this issue, new sampling strategies proposed in <cit.> have enabled more efficient exploration of the state-space, allowing the algorithm to find better solutions and potentially reduce the risk of trapping in local minima. Nevertheless, for specific tasks such as the one depicted in Fig. <ref>, eliminating the local minima remains a potential challenge that needs to be tackled. One solution could be incorporating MPPI with a global planner, such as the solution presented in <cit.>, which utilizes the RRT algorithm to guide MPPI. Instead, we introduce the GP-MPPI control strategy, a new online navigation technique that leverages the SGP occupancy model to learn about the navigable space surrounding the robot. Specifically, we introduce the GP-subgoal recommender policy, which identifies a set of recommended subgoals and subsequently suggests the optimal subgoal that minimizes a predefined cost function to the MPPI local planner, as depicted in Fig. <ref> and explained in detail in Section <ref>. Unlike conventional methods, a distinctive aspect of the proposed control strategy is that it does not require either a global map for long-term planning or an offline training process. §.§ SGP Occupancy Surface Representation Our proposed GP-subgoal recommendation policy relies on our earlier work presented in <cit.>, where we transformed pointcloud data into an occupancy surface and modeled it using a Sparse Gaussian Process (SGP). Within this approach, the occupancy surface takes the form of a 2D circular surface centered around the sensor origin and has a predefined radius of r_oc. 
This surface serves as the projection space for all observed points, which are represented in spherical coordinates (θ_i, α_i, r_i), where (θ_i, α_i, r_i) correspond to the azimuth, elevation, and radius values of each observed point, respectively. Each point 𝐳_i on the occupancy surface is defined by two attributes: the azimuth and elevation angles 𝐳_i= (θ_i, α_i), and assigned an occupancy value f(𝐳_i) that is a function of the point radius r_i, such as f(𝐳_i)=r_oc-r_i. Afterward, the probability of occupancy f(𝐳) over the occupancy surface is modeled by an SGP occupancy model, as follows f(𝐳) ∼𝒮𝒢𝒫(m(𝐳), k(𝐳, 𝐳^')), k(𝐳, 𝐳^') =σ_f^2(1+(𝐳-𝐳^')^2/2 αℓ^2)^-α, where σ_f^2 is the signal variance, l is the length-scale, and α is the relative weighting factor that manipulates large and small scale variations. In our SGP model, the point's occupancy to radius relation is encoded as a zero-mean function, m(𝐳)=0, where the occupancy value of the non-observed points is set to zero. The Rational Quadratic (RQ) kernel, k(𝐳, 𝐳^'), is selected as the SGP kernel due to its ability to model functions that vary across different length-scale <cit.>. This characteristic makes the RQ kernel well-suited for modeling the occupancy surface. In Fig. <ref>, we present a concrete example of the SGP occupancy model applied to our Jackal robot, which is equipped with a Velodyne VLP-16 LiDAR and located in an unknown cluttered environment, as depicted in Fig <ref>. The figure also illustrates the raw pointcloud generated by the onboard sensor (Fig <ref>), as well as the original occupancy surface, which represents the projection of the point clouds onto the 2D circular surface with radius r_oc, where warmer colors indicate areas of lower occupancy (Fig <ref>). Furthermore, Fig <ref> exhibits the SGP occupancy surface reconstructed by the SGP occupancy model, as previously expressed in (<ref>). The precision of the SGP occupancy model is intensively evaluated in our previous work <cit.>, where the results showed that an SGP occupancy model comprising of 400 inducing points generates a reconstructed point cloud with an average error of approximately 12. §.§ GP-Subgoal Recommender Policy The primary advantage of GP and its variants, compared to other modeling techniques, is their ability to provide a measure of variance, which indicates the level of uncertainty, along with a function estimate (i.e., mean). More precisely, in the context of the occupancy surface, the SGP occupancy model prediction, as defined in (<ref>), provides both mean μ_oc_i and variance σ_oc_i values for each point on the surface, where the mean represents the expected occupancy while the variance reflects the uncertainty associated with the predicted occupancy. Consequently, constructing the SGP occupancy surface is accompanied by an SGP variance surface that captures the uncertainty in the occupancy estimate, as depicted in Fig. <ref>. Within this research, we have opened up a new avenue for effectively utilizing the SGP variance surface as a reliable indicator for distinguishing between occupied and free spaces around the robot, where regions with variances higher than a certain threshold V_th correspond to free space, while low-variance regions indicate occupied space. In fact, the variance surface changes across observations due to variations in the number and distribution of observed points employed in the training of the SGP model. 
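To illustrate how such an occupancy model can be assembled, the sketch below converts one scan into (azimuth, elevation) training inputs with occupancy targets f(z) = r_oc - r and fits a variational sparse GP with a Rational Quadratic kernel using GPflow, the library the authors report building on later in the paper. The preprocessing, the random choice of inducing inputs, and the query grid are illustrative assumptions rather than the exact pipeline of the paper.

```python
import numpy as np
import gpflow

def fit_occupancy_sgp(points_xyz, r_oc=5.0, num_inducing=400):
    """Fit an SGP occupancy model to a single LiDAR scan (illustrative sketch)."""
    r = np.linalg.norm(points_xyz, axis=1)
    keep = (r > 1e-3) & (r < r_oc)                               # points projected onto the surface
    az = np.arctan2(points_xyz[keep, 1], points_xyz[keep, 0])    # azimuth theta_i
    el = np.arcsin(points_xyz[keep, 2] / r[keep])                # elevation alpha_i
    Z_train = np.column_stack([az, el])                          # inputs z_i = (theta_i, alpha_i)
    occ = (r_oc - r[keep]).reshape(-1, 1)                        # targets f(z_i) = r_oc - r_i

    idx = np.random.choice(len(Z_train), size=min(num_inducing, len(Z_train)), replace=False)
    model = gpflow.models.SGPR(
        (Z_train, occ),
        kernel=gpflow.kernels.RationalQuadratic(),               # RQ kernel of the occupancy model
        inducing_variable=Z_train[idx].copy(),
    )
    gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
    return model

# Querying a regular (azimuth, elevation) grid then yields both surfaces:
# mean_occ, var_occ = fit_occupancy_sgp(pcl).predict_f(grid)
```

The variance returned by predict_f is what the surrounding paragraphs exploit: it is low in directions where returns were actually observed and higher elsewhere.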
As a result, the variance threshold V_th is considered to be a variable that relies on the distribution of the variance across the surface and can be calculated as V_th=K_m v_m, where K_m ∈ℝ^+ is a tuning parameter and v_m represents the mean of the variance distribution. To identify free navigable spaces, we define a Gaussian Process frontier (namely, GP frontier) as the centroid point (θ_i, α_i) of each high variance region. These GP frontiers {f_i}_i=1^ℱ serve as local recommended subgoals (see colored circles in Fig. <ref>). Unlike the well-known frontier concept introduced in <cit.>, it is worth noting that our GP frontier does not rely on a global occupancy map; instead, it is extracted from the uncertainty of the current observation. Following the identification of the GP frontiers by the SGP model, a cost function J_gp is utilized to determine the optimal GP frontier f^* that guides the local planner (in our case, MPPI) towards the desired state 𝐱_f. Our cost function J_gp, given in (<ref>), has been established with two distinct terms. The first term, as introduced in <cit.>, calculates the distance d_fs between a GP frontier f_i and the desired state 𝐱_f. This distance criterion is used to identify the GP frontier closest to 𝐱_f. The second term, inspired by the direction criterion proposed in <cit.>, evaluates the direction θ_f_i of a GP frontier with respect to the robot heading. This criterion prioritizes a GP frontier that aligns better with the robot heading. J_gp(f_i) = k_dst d_fs + k_dirθ_fi^2 , f^* =argmin _f_i∈ℱ(J_gp(f_i)), where k_dst, k_dir are weighting factors. The GP frontier direction θ_f_i is squared to indicate the absolute direction. Finally, the local planner receives the optimal subgoal g^*, obtained by acquiring the Cartesian coordinate of the optimal GP frontier f^*, which leads the robot to its desired state 𝐱_f. §.§ Real-Time GP-MPPI Control Algorithm Algorithm <ref> summarizes the real-time control cycle of the GP-MPPI algorithm, which includes two primary components: the local MPPI motion planner (described earlier in Section <ref>) and the GP-subgoal recommender (explained in Section <ref>). Each time-step Δ t, the GP policy recommends the optimal subgoal g^*, the current state is estimated, and a M × N random control variations δ𝐮 are generated (lines 2:4). Then, M trajectories are simulated in parallel, propagated from the system dynamics defined in (<ref>), and evaluated using (<ref>) (lines 5:13). It is noteworthy that the minimum sampled cost trajectory, denoted as S̃_min, among all simulated trajectories prevents numerical overflow or underflow without affecting the optimality of the algorithm <cit.>. After that, the optimal control sequence {𝐮_k}_k=0^N-1 is updated, smoothed with a Savitzky-Galoy filter, and the first control 𝐮_0 is applied to the system (lines 14:18), while the remaining sequence of length N - 1 is slid down to be utilized at next time-step (lines 19:22). In lines 25 to 38, the function known as GP-SubgoalRecommender is described, which takes a pointcloud input (PCL) and returns the optimal subgoal g^* for the local planner. To optimize the hyper-parameters Θ and inducing points Z_m_s of the SGP occupancy model, the pointcloud data is transformed into training data 𝒟 (lines 26:29). The mean occupancy μ_oc and variance σ_oc are then estimated over the surface Z^*, and the GP frontiers are defined as those with σ_oc > V_th, where the centroids of these frontiers are converted to Cartesian coordinates (lines 30:34). 
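One possible realization of this frontier-extraction and subgoal-selection step is sketched below: surface cells whose predicted variance exceeds V_th = K_m v_m are grouped into connected regions, the azimuth centroid of each region defines a GP frontier, and the frontier minimizing J_gp is converted to a Cartesian subgoal. Grouping with scipy.ndimage.label and placing the subgoal a fixed r_oc ahead along the frontier direction are our simplifying assumptions.

```python
import numpy as np
from scipy import ndimage

def gp_subgoal(var_surface, az_grid, robot_pose, goal_xy,
               K_m=0.4, k_dst=5.0, k_dir=4.0, r_oc=5.0):
    """Recommend the optimal subgoal g* from the SGP variance surface (sketch)."""
    V_th = K_m * var_surface.mean()                   # adaptive variance threshold
    labels, n = ndimage.label(var_surface > V_th)     # connected high-variance regions
    best_goal, best_cost = None, np.inf
    for lbl in range(1, n + 1):
        cols = np.where(labels == lbl)[1]             # columns index the azimuth axis
        theta_f = az_grid[cols].mean()                 # frontier direction w.r.t. the heading
        gx = robot_pose[0] + r_oc * np.cos(robot_pose[2] + theta_f)
        gy = robot_pose[1] + r_oc * np.sin(robot_pose[2] + theta_f)
        d_fs = np.hypot(goal_xy[0] - gx, goal_xy[1] - gy)   # distance to the desired state
        cost = k_dst * d_fs + k_dir * theta_f**2            # J_gp(f_i)
        if cost < best_cost:
            best_goal, best_cost = np.array([gx, gy]), cost
    return best_goal
```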
Finally, the cost function J_gp in (<ref>) is used to select the optimal subgoal g^* (lines 35:37). In this study, we introduce two operating modes for the GP-MPPI algorithm: the simple mode (SM) and the recovery mode (RM). Under the simple mode, MPPI consistently leverages the optimal subgoal 𝐠^* suggested by the GP policy. In contrast, in the recovery mode, MPPI generates the optimal control sequence that steers the robot towards its desired state 𝐱_f, adhering to the recommended subgoal only when the robot is at risk of encountering local minima. Such local minima occur when the robot's linear velocity is zero (v=0) and its current state 𝐱_k does not match 𝐱_f (i.e., 𝐱_k ≠𝐱_f). Thanks to the optimal control sequence {𝐮_k}_k=0^N-1 obtained by MPPI, we can efficiently anticipate the occurrence of local minima by imposing a condition on the mean of the predicted linear velocities over the time-horizon N, expressed as follows: μ_𝐮 = 1/N∑_i=0^N-1 |v_i| < 𝐮_th, where 𝐮_th∈ℝ^+ represents a control switching threshold set based on N. If this condition is fulfilled, then MPPI will follow the subgoal recommended by the GP rather than navigating directly towards its desired state 𝐱_f. § SIMULATION-BASED EVALUATION In this section, the effectiveness of our proposed control strategy is assessed and compared with both vanilla MPPI and log-MPPI control strategies in a goal-oriented autonomous ground vehicle (AGV) navigation task conducted in 2D cluttered environments of unknown nature. §.§ Simulation Setup: In this study, we consider the kinematics model of a differential wheeled robot presented in <cit.>, specifically the fully autonomous ClearPath Jackal robot, where the robot's position and orientation in the world frame are given by 𝐱 = [x, y, θ]^⊤∈ℝ^3, and the control input 𝐮 = [v,ω]^⊤∈ℝ^2 denotes the robot's linear and angular velocities. Our autonomous AGV platform is equipped with a 16-beam Velodyne LiDAR sensor utilized for two key functions: (i) constructing the SGP variance surface, and (ii) generating the local costmap. The simulations for all proposed control schemes were conducted with the following parameters: a prediction time of 6, a control frequency of 30 (i.e., N=180), sampling 2528 rollouts per time-step Δ t, and an exploration variance ν of 1200. Additionally, a control weighting matrix R, expressed as λΣ_n^-1/2, is utilized. In the case of MPPI and GP-MPPI, the inverse temperature λ and the control noise co-variance Σ_𝐮 = Σ_n = Diag(σ_v^2, σ_w^2) are both set to 0.572 and Diag(0.023, 0.028), respectively. However, for log-MPPI, different values of 0.169 and Diag(0.017, 0.019) are used for these parameters, along with a normal distribution that has a co-variance of Σ_n = Diag(0.002, 0.0022) (For more details, refer to <cit.>). The Savitzky-Galoy (SG) convolutional filter is utilized with a quadratic polynomial function, i.e., n_sg=2, and a window length l_sg of 51. The occupancy surface was constructed with an occupancy radius r_oc of 5 meters, a full azimuth range of -180^o to 180^o, and elevation height of 0^o to 15^o. The SGP occupancy model was designed with 400 inducing points (Z_m = 400), where the GP frontiers were identified based on a variance threshold of V_th= K_m v_m, where K_m was set to 0.4. For the distance and direction factors K_dst and K_dir of the cost function J_gp, we assigned weighting factors of 5 and 4, respectively. To enable the recovery mode of the GP-MPPI, we have set the control threshold, 𝐮_th, to 0.55[]. 
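Using the threshold just quoted, the mode switch described above amounts to a single check on the planned velocities. A hedged sketch, with our own variable names, is:

```python
import numpy as np

def select_target(U_opt, x_goal, gp_subgoal, u_th=0.55, mode="RM"):
    """Recovery-mode target selection for the local MPPI planner (sketch).

    U_opt[:, 0] holds the planned linear velocities v_i over the horizon N.
    """
    if mode == "SM":                               # simple mode: always follow the GP policy
        return gp_subgoal
    mu_u = np.abs(U_opt[:, 0]).mean()              # mean predicted speed over the horizon
    return gp_subgoal if mu_u < u_th else x_goal   # consult the GP only near a local minimum
```

In other words, the GP recommendation is followed only when the optimized control sequence predicts that the robot is about to stall short of its desired state.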
All the proposed control schemes, which are written in Python and integrated with the Robot Operating System (ROS) framework, are executed in real-time on an NVIDIA GeForce GTX 1660 Ti laptop GPU, with the GP-subgoal recommender built on GPflow<cit.>. To accomplish the 2D navigation task, we adopt a state-dependent cost function described in (<ref>), which comprises two terms (a sketch of one possible implementation is given after this subsection). The first term, with Q = Diag(2.5,2.5,5), aims to steer the robot towards its desired state, whereas the second term incorporates a Boolean variable 𝕀_crash to heavily penalize collisions with obstacles. q(𝐱_k)= (𝐱_k-𝐱_f)^⊤ Q (𝐱_k-𝐱_f) + 10^3 𝕀_crash. Since the robot is operating in unknown environments, it relies on a 2D costmap to maintain a record of obstacles in its vicinity. This costmap is generated by analyzing sensor data from the environment and constructing a 2D occupancy grid, with each cell typically categorized as occupied, free, or unknown <cit.>. The generated occupancy grid is subsequently employed as a 2D local costmap, feeding directly into the sampling-based MPC algorithm, enabling safe and collision-free navigation. The robot-centered 2D local costmap, which is built by the on-board Velodyne VLP-16 LiDAR sensor, has a size of 200×200 and a grid resolution of 0.05/. Finally, throughout the simulations, the maximum linear velocity v_max of the robot is set to 1.5/. §.§ Simulation Scenarios and Performance Metrics: The benchmark evaluation utilizes two types of Gazebo simulation environments, as depicted in Fig. <ref>. The first type, referred to as Forest #1, is a 50×50 forest-like environment characterized by tree-shaped obstacles with a density of 0.2/□; The other type, named Maze #1, is a 20×20 maze-like environment with three U-shaped rooms (i.e., U_1, U_2, and U_3), as well as various other obstacles (highlighted in red in Fig. <ref>)[To evaluate the local planner's obstacle avoidance capability, the red obstacles are intentionally made undetectable as occupied space by the GP-subgoal recommender, as occupancy elevation height is set to a higher value.]. In the first scenario, denoted as Forest #1, the robot is directed to navigate from an initial pose 𝐱_s = [-5,-8,0]^⊤ to a desired pose 𝐱_f = [20,20,45]^⊤ in ([], [], []). Meanwhile, in Maze #1, we conducted two separate control missions to (i) evaluate the robustness of our proposed control strategy, and (ii) examine its performance under the two different operating modes, previously described in Section <ref>. The first mission, MU_1, requires the robot to navigate from 𝐱_s = [-5,-8,60]^⊤ to a desired pose 𝐱_f = [4,4,45]^⊤ located inside U_1; while, in the second mission, named MU_2, the robot starts at 𝐱_s = [-6,8,0]^⊤, crosses U_2, and reaches a desired pose of 𝐱_f = [8,-8,170]^⊤. To ensure a fair and comprehensive comparison of the three control schemes, we have established a set of performance metrics, including the task completion percentage 𝒯_c, the average distance traveled by the robot d_av to reach 𝐱_f from 𝐱_s, the average linear velocity v_av of the robot within the cluttered environment, and the percentage of assistance 𝒜_gp provided by the GP-subgoal recommender policy to MPPI when the recovery mode is utilized. The successful task completion entails the robot reaching the target position without encountering obstacles or getting trapped in local minima ℛ_lm.
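Returning to the state-dependent running cost q(x_k) defined above, a compact sketch of one way to evaluate it against the robot-centred costmap is given below. The costmap helper (world_to_grid) and the lethal threshold are hypothetical names, and the weights simply restate the values listed in this subsection.

```python
import numpy as np

Q = np.diag([2.5, 2.5, 5.0])          # state weighting used in the simulations

def running_cost(x, x_f, costmap, lethal=90):
    """q(x_k) = (x_k - x_f)^T Q (x_k - x_f) + 1e3 * I_crash  (illustrative sketch)."""
    err = x - x_f                                  # heading-error wrapping omitted for brevity
    goal_term = err @ Q @ err
    i, j = costmap.world_to_grid(x[0], x[1])       # hypothetical 2D costmap lookup helper
    crashed = costmap.data[i, j] >= lethal         # occupied cell => collision indicator
    return goal_term + 1e3 * float(crashed)
```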
§.§ Simulation Results: We evaluated the effectiveness of the proposed control strategies in Forest #1 and Maze #1 (i.e., MU_1 & MU_2) through 10 trials each, and the resulting performance statistics are summarized in Table <ref>. The performance results demonstrate that, as expected, the proposed GP-MPPI control strategy outperforms both the vanilla MPPI and log-MPPI as the autonomous vehicle successfully accomplished all control missions (with 𝒯_c=100%) without getting stuck in local minima or colliding with obstacles (i.e., ℛ_lm =0), despite having limited perception range and incomplete knowledge of the environment. In contrast, in Forest #1, log-MPPI achieved a task completion rate 𝒯_c of 95.72% over 10 trials, compared to 86.87% when MPPI was utilized. Additionally, log-MPPI encountered local minima only twice, while MPPI was trapped six times. Nevertheless, both control methods were unable to complete any of the trials in MU_1 and MU_2 due to the challenging environmental conditions (refer to the robot trajectories generated by log-MPPI in Fig. <ref>). Additionally, our proposed approach in Forest #1 provided a shorter route towards the desired state 𝐱_f, especially when the recovery mode (RM) is activated, similar to the optimal trajectory of the baselines, with an average linear velocity v_av of 1.30/, which approaches the maximum specified velocity of 1.5/. Concerning the two modes of GP-MPPI, it is observed that activating the recovery mode (RM) during Forest #1 and MU_1 missions improves the average distance traveled d_av by the robot. For instance, in MU_1, d_av was approximately 32.74 with RM, whereas with the simple mode (SM), which consistently relies on the subgoal recommended by GP, d_av was roughly 34.48. On the other hand, during the MU_2 mission, the RM produced a slightly longer robot trajectory than the SM since operating our proposed GP-MPPI in the RM strikes a balance between the state-dependent cost function that directs the robot to follow a direct route towards the desired state and the optimal subgoal recommended by the GP policy that forces the robot to avoid the dead-ends associated with rooms U_2 and U_3 on its way to 𝐱_f, as illustrated in Fig. <ref>. We can also see that, due to the presence of U-shaped rooms in Maze #1, the GP provides more assistance, represented by 𝒜_gp, than in Forest #1. In Fig. <ref>, we illustrate through an example from the conducted trials the robot trajectories generated by GP-MPPI under the two operating modes in Maze #1. We can clearly observe that our proposed control strategy successfully achieves collision-free navigation in both modes, without getting stuck in local minima. As an example, Fig. <ref> displays the velocity profile of the robot during the MU_1 mission shown in Fig. <ref>, while using GP-MPPI with RM, along with its corresponding mean of the predicted linear velocities μ_𝐮 over the given time-horizon N (see Fig. <ref>). The mean values that fall below the switching threshold 𝐮_th, set at 0.55[], denote the intervals where the RM is active, and are visually emphasized in yellow in Fig. <ref>. § REAL-WORLD DEMONSTRATION In this section, we experimentally demonstrate the applicability of our proposed control strategy in achieving a safe 2D grid-based collision-free navigation in a complex and unknown indoor cluttered environment. 
§.§.§ Experimental Setup and Validation Environment: To conduct our experimental validation, we used the simulation setup previously outlined in Section <ref>, except for (i) setting the maximum speed v_max to 1.0/ to avoid the robot localization error associated with using the RealSense camera as a source of localization, (ii) setting the occupancy radius r_oc to 3.0, and (iii) decreasing the size of the 2D grid map to 120×120. r0.25 < g r a p h i c s > Panoramic photo of our L-shaped indoor environment. We also decreased the recovery mode switching threshold 𝐮_th to 0.3/ to be compatible with the updated v_max. Additionally, to ensure real-time execution of the GP-subgoal recommender policy, we decrease the resolution of the SGP variance surface to one-third of its original value along the azimuth axis while keeping the original resolution along the elevation axis. We employed an L-shaped indoor corridor environment measuring 9×14 for experimental validation. The environment has a varying width between 1.8 and 2.8 and contains randomly placed boxes-like obstacles, as depicted in Fig. <ref>. The assigned control mission of the robot is to navigate from 𝐱_s = [0,0,0]^⊤ and arrive at 𝐱_f = [7.5,13,90]^⊤. §.§.§ Experimental Results: The performance statistics of our proposed GP-MPPI control scheme, gathered from four trials conducted in our indoor environment, are summarized in Table <ref> for the two operating modes. From all trials, we can conclude that both operating modes provide collision-free navigation in the cluttered environment with an average linear velocity of 0.80, without the risk of being trapped in local minima (as ℛ_lm = 0) while moving towards its desired state. This ensures the safety and consistent feasibility of the receding-horizon planning. In contrast, it is observed that the vanilla MPPI and log-MPPI consistently failed to complete any of the trials due to being trapped in the first edge of the L-shaped environment. However, MPPI managed to avoid such traps with the aid of the GP-subgoal recommender policy in the recovery mode (RM), which provides an average assistance percentage 𝒜_gp of roughly 31.36%. More details about the simulation and experimental results, including the behavior of the baselines, are provided in the supplementary video: <https://youtu.be/et9t8X1wHKI>. § CONCLUSION In this work, we proposed the GP-MPPI control strategy, which comprises two primary components: the GP-subgoal recommender policy and the local planner, the MPPI. First, the GP-subgoal recommender utilized the learning capacity of SGP to create a reliable SGP variance surface, which served as an indicator for differentiating between occupied and free spaces around the robot. Consequently, a set of suggested subgoals was identified, and the optimal subgoal that minimizes a predefined cost function was recommended to the local MPPI planner. Based on the recommended subgoal, MPPI computes the optimal control input that enables the robot to navigate towards the goal efficiently and safely while accounting for its dynamics and avoiding collisions. By conducting a combination of simulated and real-world experiments, we have shown that our proposed control strategy is superior to the vanilla MPPI and log-MPPI methods in achieving efficient and safe navigation in unknown and complex environments, thereby avoiding the risk of getting stuck in local minima. IEEEtran
http://arxiv.org/abs/2307.03996v1
20230708153748
ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation
[ "Saifullah Mahbub", "Md. Easin Arafat", "Chowdhury Rafeed Rahman", "Zannatul Ferdows", "Masum Hasan" ]
cs.SE
[ "cs.SE" ]
[email protected] Code review is considered a key process in the software industry for minimizing bugs and improving code quality. Inspection of review process effectiveness and continuous improvement can boost development productivity. Such inspection is a time-consuming and human-bias-prone task. We propose a semi-supervised learning based system, ReviewRanker, which is aimed at assigning each code review a confidence score that is expected to reflect the quality of the review. Our proposed method is trained based on simple and well-defined labels provided by developers. The labeling task requires little to no effort from the developers and has an indirect relation to the end goal (assignment of review confidence score). ReviewRanker is expected to improve industry-wide code review quality inspection by reducing the human bias and effort required for such a task. The system has the potential of minimizing the back-and-forth cycle existing in the development and review process. Usable code and dataset for this research can be found at: https://github.com/saifarnab/code_review ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation Masum Hasan August 12, 2023 ========================================================================================== § INTRODUCTION The editorial world has been using peer review since 1731 <cit.>. Modern software development industries have given it a more common name: Code Review. Since then, Modern Code Review (MCR) <cit.> has become an essential part of software development. MCR is a software quality control process in which one person or a group of people evaluates the system by examining and analyzing different parts of the source code, which can be done either during or after the completion of the implementation phase. The purpose of code review is to find bugs, correct mistakes, and boost the consistency of code by improving performance and by reducing security vulnerabilities. Figure <ref> outlines a typical code review process. A developer or a set of developers prepares the code and submits it for review. A reviewer or a subgroup of reviewers then performs review checking and makes sure that the author’s code causes no system failures in other parts of the codebase. They also ensure a consistent coding style and design pattern. Following all these checks and evaluations, the reviewer or the subgroup of reviewers with a higher role either approves or rejects these reviews. Developers then make changes in the code, revise their work based on the feedback, or provide appropriate explanations against the approved review until both parties are satisfied. Sometimes a reviewer figures out the problematic part of the reviewed code but fails to submit an appropriate explanation of the problem. In such cases, the changes made by the developers will probably not satisfy the reviewer, and we are going to get another couple of develop-review cycles. Such cycles can lead to a substantial decrease in productivity in the software industry. It is possible to minimize such situations if we can somehow assign each review a quality score.
Such scoring will help us in (a) gaining a deeper understanding of quality reviews, (b) identifying quality reviewers in the company and (c) estimating provided review quality before sending off to the developers. Essentially, if after going through a particular review, a developer feels confident about the changes that he has to make in the codebase, then that review is probably of good quality. In this paper, we focus on modeling the developer confidence in a review. One way is to simply form this task as a supervised learning task where the input will be a review and the output will be the confidence score for that review. The output labeling will be performed by the developer to whom the review had been sent for making changes in the codebase. Figure <ref> shows the problem behind such labeling. We can see a review in the figure which has been marked as good, average, below average and poor by a significant set of developers from three different software companies. We performed this experiment on 25 reviews in total and got more or less similar results. Let us understand what this means. There are developers who are broad minded and will give good score even when the review is not that good. The opposite spectrum is also equally visible in the industry. The score assigned by a developer also depends on what type of mood he is in at that particular moment. In short, this labeling process is highly dependent on human perception which can vary widely from person to person. We propose an alternative labeling scheme in this paper which indirectly trains a set of three models and enables them in predicting the confidence scores for a particular set of reviews. We call this semi-supervised learning approach ReviewRanker. The labeling is related to three simple multiple choice questions (for the three models) regarding - (a) the understanding of the type of change to perform in the code, (b) the understanding of what to insert and (c) what to delete from the code based on the review of interest. We performed a similar experiment (as of Figure <ref>) with these three multiple choice questions and found out that the choices made by the developers from different companies are similar unless the review is largely vague. Thus we have come to a conclusion that the answer to these questions are not biased by the human perception side of the developers. During inference (after training is done with a set of labeled reviews), we provide a code review as input to the three models for predicting the answer to the three questions (see Figure <ref>). We get three confidence scores from these three models corresponding to the ground truth answers of these questions (labeled by a developer in advance). We obtain the final confidence score from these three scores. Thus we model the confidence of the developer in understanding the review given to him or her. Mainly three types of related studies have been performed regarding code review analysis: (1) theoretical studies on different aspects of code reviewing <cit.>, (2) assisting reviewers by problematic code snippet identification <cit.> and (3) reviewer recommendation <cit.>. Although RevHelper <cit.> was developed to measure code review usefulness, it is actually a binary classification tool (useful vs not useful) and does not provide any quality score to the review of interest. Also this method has the human bias aspect that we have mentioned in detail in Figure <ref>. § PROBLEM DEFINITION The input of ReviewRanker is a large set of code reviews R. 
The output is a confidence score C_i for each review R_i ∈ R, where C_i ∈ [0, 1]. Higher confidence score denotes higher review quality. C_i is the combination of three different confidence scores coming from three different questions related to review R_i. The answer of each question Q_ij is predicted by a model M_j that forms the question answering as a binary classification task. We get a confidence score C_ij (associated with the ground truth label answer) from each model M_j for each question Q_ij for the review of interest R_i. The final confidence score C_i of review R_i is the geometric mean of all C_ij's, where j ∈{1,2,3}. The three questions are as follows: * What type of operation (change in code) did the code review suggest (multi-class classification)? * Did you understand what to insert in the code from the review (binary classification)? * Did you understand what to delete from the code reading the review (binary classification)? Unlike questions related to directly assigning a quality score to a review, these three questions are straightforward and have little to no human bias. § RELATED WORKS Researches have been undertaken to automate the process of reviewing code by using static checks such as standard violation, and common structure defects; while other researchers have focused on automating the process of reviewer recommendation and problematic code detection. §.§ Studies on Code Review Semi-structured individual interviews were conducted with seven developers from Microsoft in <cit.>. They concluded that prior knowledge of files leads to useful comments and tends to increase efficiency. The contemporary code review process at Microsoft was looked into in <cit.>. Research shows that the average spending time in a week for Microsoft developers is four hours in code review, while open source developers take five hours. Microsoft developers give more attention to reviewing relationships with developers compared to open-source developers. An observational survey on Mozilla’s 88 core developers was conducted in <cit.>. The authors found out that approximately 57-69% developers reviewed fewer than 5 patch files, 10% developers reviewed 11 to 20 such files and 4% developers reviewed more than 21 patch files each week. A study described why code review is responsible for evaluating the reliability of test codes and what professional developers do to review test codes by analyzing 300,000 code reviews from open-source projects <cit.>. §.§ Code Review Automation Empirical Studies A prototype tool named Code Distance Visualiser was proposed in <cit.> to detect problematic codes like string overflow, memory leaks, null pointer references, and incorrect API usages. ReviewBot model was proposed in <cit.> where they automated the checking for source code by using a static analyzer and recommended reviewers based on the belief that every line of code had a past history. cHRev model used three measurement metrics to measure the expertise of the reviewers based on their review comments: 1) higher number of review count, 2) reviewer’s effort in the workday and 3) higher weight assignment to the latest reviews <cit.>. RevFinder, a recommendation model for reviewers based on file location was developed in <cit.>. According to their heuristics, identical path files should be reviewed by identical reviewers. 
To analyze similar file paths, they used four string comparison techniques: 1) longest common prefix, 2) longest common suffix, 3) longest common subsequence and 4) longest common substring. RevRec developed in <cit.> consists of two models: the reviewer expertise model (RevRecRE) and the reviewer collaboration model (RevRecRC). They evaluated three open-source projects - Android, OpenStack, and Qt. A comparative study on code review usefulness was conducted based on textual features and reviewer expertise in <cit.>. The authors proposed a machine learning model named RevHelper to predict the usefulness of a review comment. Their comparative study was based on two heuristics - 1) differences between useful and non-useful reviews and 2) how the reviewers' experience helps them to provide appropriate reviews. § DATASET DESCRIPTION The steps regarding the dataset creation process for this research has been briefly shown in the leftmost box of Figure <ref>. We shall describe each of these steps in detail in this section. §.§ Data Source We have collected our data from multiple open-source projects hosted in Gerrit [https://www.gerritcodereview.com/]. Gerrit is a popular tool for code review in both open-source and commercial code repositories. Gerrit provides an easily accessible REST API [https://gerrit-review.googlesource.com/Documentation/rest-api.html] for collecting code reviews and their related codes. We have created a Gerrit Miner using Java that mines code reviews from open source code repositories such as Android & Iotivity and stores them in a MySQL database. We later query the database and label the reviews with different criteria described in detail in the upcoming subsections. §.§ Data Labeling We have created a labeling application with the Django framework in Python <cit.>. The labeling app was designed to be user-friendly and intuitive. On entry, the web app asks for the login credentials of the user. Once it is provided, it directly goes to the labeling page and displays a code review comment to the user. The user is asked what type of operation (change type in code) the code review suggests (see Figure <ref>). Four options are provided in the form of a drop-down menu: Insert, Delete, Replace, and Not Enough Information. The web app provides the private URLs to the source code, and by clicking the link the user can view the source code, where the code review was submitted, and the later modification (accepted by reviewer) in the source code side by side (see Figure <ref>). When the user selects one of the four operations from the drop down menu, he/she is also asked to provide the code snippet that is impacted by the operation. If the operation is an Insert operation, the user is supposed to provide the code snippet that was to be inserted in a text field named Add Code (only if it is understood from the review what was to be inserted). If the operation is a Remove operation, the user puts the code that was to be removed from the original code in the text box named Remove Code (only if it is understood from the review what was to be removed). If the operation is a Replace operation, the user puts the part of the code that changed in Remove Code text box, and the part that it changed into in the Add Code text box (only if both these parts can be understood from the code review alone). We also took a human-centric design approach to design the labeling app. 
Each time a sample data was submitted, the web page changed the background color so that the labeling process would not become monotonous and also would give a sense of progress to the user. §.§ Label Validation The reviews were labeled by a team of five independent volunteers who possess substantial experience in programming. All the labelers are from Computer Science background and have more than two years of working experience with programming languages such as C and Java, specifically in the areas of Android and Iotivity. To ensure consistency in the labeling process, 10% of the reviews were given to all the participants for labeling. The remaining 90% of samples were unique for each labeler. The admin frequently examined 10% of the data labels to check for any discrepancies among the labelers. If there was a considerable variation in the labeling, appropriate measures were taken to make the data labels more consistent. Later on, the entire dataset was manually labeled and reviewed by senior software developers to ensure proper validation of the assigned labels. The final confirmation for the labeling was obtained from the admin and considered conclusive for this dataset. § MATERIALS AND METHODS Figure <ref> provides an overview of the steps in developing ReviewRanker. We have already described the dataset creation step in the previous section. In this section, we are going to elaborate the next four steps which are more related to ReviewRanker training and inference phase. §.§ Data Preprocessing §.§.§ Data Labeling: Our initial dataset consisted of 2052 review comments. After the elimination of redundant samples, we are now left with 1483 sample reviews in our final dataset. Let us talk about the ground truth label assignment process for the three multiple choice questions asked for each review (the three questions can be found in Section <ref>). In real life scenario, the ground truth labels associated to a particular review are expected to be assigned by the developer/ developers to whom the review is directed to during the development process. Observing the questions, it is evident that it will take little to no effort from the developers to perform this labeling process. We start with the operation (code change) related question. We define four types of operations: (1) replace (class label 0), (2) delete (label 1), (3) insert (label 2) and (4) not enough information (no label assigned). If a review operation is assigned as "not enough information", then we simply assign that review a confidence score of 0 and exclude that review from ReviewRanker training and inference. The next two questions are about understanding of what to insert and what to remove from the current code base (both are binary classification tasks). If it is clear from the review what to insert, then the insertion related question receives ground truth label of 1, else the label is 0. The exact same aspect goes for the deletion related question. If the operation is labeled as "replace" (first question), then it is expected that the label of both the insertion and deletion related questions will be 1 (it will not always happen in non-ideal cases). Similarly, if the operation is labeled as "delete", then the label of deletion related question is expected to be 1, while the insertion related question will have a label of 0 in an ideal world; and the opposite aspect will happen if the operation is labeled as "insert". Let us now look at an example review - “outer parens not needed”. 
The labels for this review are as follows: Operation Type: delete (label 1); Understanding of something to be added: nothing to add (label 0); Understanding of something to be deleted: parentheses need to be deleted (label 1). §.§.§ Similar Word Handling Our corpus contains more than 3000 unique words, which is a large number considering the small corpus size (less than 1500 reviews). So, by replacing all semantically identical words with a single word, we minimize the word list, which helps our model find acceptable relationships between words. While doing so, we use both word stemming and lemmatization. Using word stemming, we can modify a word’s plural instance to singular, normalize grammatical state, and so on. Consider, for example, the set of word forms generated from the word “program”. Through the word-stemming process, we replace all of these words with the word program in our unique word list. Using word lemmatization, we can generate a similar set of words from a single word. For example, the word minor generates a set of words that are verbally similar to it. Thus we replace all of these words with the word minor in our unique word list as well. By doing so, our corpus now contains around 1700 unique words. §.§.§ Special Word Handling: Our dataset contains code reviews that include a significant number of special words specific to C code that have no real meaning but play a very important role in review comments. Our proposed model works based on the textual relationship between normal words and these special words. Hence we replace these words with some common words based on their operational characteristics. First, we lowercase the starting letter of all words in our corpus. After that, for each of the words: * If the word has any uppercase letter, then we replace the word with keywordvariable, considering we usually use camel case to write variables. * Otherwise, if the word contains .h or #, then we replace the word with keyworddoth. The presence of such special characters denotes header files in C programming. * Otherwise, if the word contains _, then we replace the word with keywordunderscore. Having an underscore in a word is ambiguous: it may denote a function or a variable. That is why we treat them with a special keyword. * Otherwise, if the word contains a parenthesis, then we replace the word with keywordfunction, considering all functions must initiate with a pair of parentheses. After such special keyword handling, our corpus now contains 1368 unique words, down from more than 3000 initially. §.§ Feature Extraction In order to feed a review to a model as input, we need a mathematical representation of that review. We have 1368 unique words in our preprocessed dataset (see Section <ref>). Each review contains a subset of these words. So, we represent each review with a vector V of size 1368, where V_i represents the total count of word_i found in the review. Let us look at two examples: Review sample 1: line over fifty characters you should reduce it to twenty characters. Review sample 2: provide line level comment to line. If we create a unique word list from this corpus, it would be: line, over, fifty, characters, you, should, reduce, it, to, twenty, provide, level, comment. We can index these words from 0 to 12. The feature vectors for the two sample reviews (restricted to these 13 entries) are then [1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0] and [2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1], respectively. Instead of utilizing word embedding based approaches such as Word2Vec <cit.> and FastText <cit.>, we have opted for a bag-of-words type of approach <cit.>.
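To make the special-word handling and bag-of-words featurization concrete, the following Python sketch applies the replacement rules listed above and builds the word-count vector of a review; the helper names are ours, and the 13-word vocabulary is only for illustration (the actual system uses the full 1368-word vocabulary obtained after stemming and lemmatization).

from collections import Counter

def replace_special_word(token):
    # Lowercase only the starting letter, as described above.
    token = token[0].lower() + token[1:] if token else token
    if any(ch.isupper() for ch in token):      # camel case -> most likely a variable
        return "keywordvariable"
    if ".h" in token or "#" in token:          # header-file indicators in C
        return "keyworddoth"
    if "_" in token:                           # underscore -> function or variable
        return "keywordunderscore"
    if "(" in token or ")" in token:           # parentheses -> function
        return "keywordfunction"
    return token

def review_to_vector(review, vocabulary):
    # V[i] is the number of times vocabulary[i] occurs in the preprocessed review.
    tokens = [replace_special_word(t.strip(".,;:")) for t in review.split()]
    counts = Counter(tokens)
    return [counts[w] for w in vocabulary]

vocab = ["line", "over", "fifty", "characters", "you", "should", "reduce",
         "it", "to", "twenty", "provide", "level", "comment"]   # indices 0..12
r1 = "line over fifty characters you should reduce it to twenty characters."
r2 = "provide line level comment to line."
print(review_to_vector(r1, vocab))   # [1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(review_to_vector(r2, vocab))   # [2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1]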
Word embedding produces semantic vectors for each word typically employed with recurrent neural networks (RNNs) <cit.>. However, due to our small dataset and straightforward classification tasks, we have observed that a basic shallow neural network with bag-of-words feature outperforms RNNs with word embeddings through five fold cross validation. §.§ Model Details Our proposed algorithm combines three models as shown in Table <ref>. Details of the classes present under each model can be found in Section <ref>. Each model is a fully connected vanilla neural network but with a different set of parameter values. The input layer is of size 1368 (word frequency vector: total unique word no. is 1368). M_1 and M_2 are used for binary classification while M_3 is used for multi-class classification (three classes). Relu activation function <cit.> has been used for the intermediate layers, while Softmax has been used for the output layer. A dropout of 20% has been applied between each consecutive hidden layers to prevent overfitting <cit.>. Categorical Cross Entropy <cit.> has been used as the loss function, while Adam (Adaptive Moment Estimation) optimizer <cit.> has been used for weight update. §.§ Review Confidence Score Generation Table <ref> illustrates the entire process of confidence score generation for two sample reviews (We assume that the three task specific models M_1, M_2 and M_3 are already trained). The feature vector of each review is passed through all three models separately. Each model provides a discrete probability distribution of the task specific classes. For example, model M_3 always provides three probability values (sums to 1) for the three operation type specific classes. For each model, we only take the probability score associated with the ground truth class label (expected to be available for all reviews). Thus, for one review, we get total three confidence scores (predicted probability values) from the three models. The final confidence score is the geometric mean ((C_1 × C_2 × C_3)^1/3) of these three confidence scores. A higher confidence score denotes higher review quality, as it is expected that the developer confidence in such reviews will be high. §.§ Confidence Score Generation for the Entire Review Set The expected input to the ReviewRanker system is not a single review, but an entire set of labeled (the three questions/ tasks) reviews. The three models that are part of ReviewRanker are trained on a fraction of this labeled review set. The confidence scores for the reviews are obtained in a 10-fold cross validation style. Let us understand the entire process. Given a large set of labeled reviews S, we first randomly divide the set into 10 small disjoint subsets S_1, S_2, … S_10 of reviews. For fold no. i of the 10-fold cross validation, we use all S_j (j ≠ i) subsets of reviews for training the three models (from randomly assigned initial weights) and finally, use the trained models to predict the final confidence scores of the validation review subset S_i. After doing this 10 times for the 10 folds, we are going to get review confidence scores for all the reviews available in the entire review set S. The important thing to note here is that the confidence score of each review is obtained only when that review is part of the validation subset. This is done to avoid obtaining overfitted scores on training data (many of the confidence scores of training data are close to 1). 
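A minimal sketch of the three task-specific networks and of the per-review confidence computation is given below, assuming a Keras implementation; the hidden-layer widths are illustrative assumptions (the exact widths are those reported in Table <ref>), while the input size, ReLU activations, 20% dropout, softmax output, categorical cross-entropy loss, and Adam optimizer follow the description above.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes, hidden=(256, 64)):
    # Shallow fully connected network on the 1368-dim bag-of-words input
    # (hidden widths are assumptions for illustration only).
    net = models.Sequential()
    net.add(layers.Dense(hidden[0], activation="relu", input_shape=(1368,)))
    for width in hidden[1:]:
        net.add(layers.Dropout(0.2))            # dropout between consecutive hidden layers
        net.add(layers.Dense(width, activation="relu"))
    net.add(layers.Dense(num_classes, activation="softmax"))
    net.compile(optimizer="adam", loss="categorical_crossentropy")
    return net

# Two binary-classification models (insert / delete understanding) and one
# three-class model (operation type), as in Table <ref>.
m1, m2, m3 = build_model(2), build_model(2), build_model(3)

def review_confidence(x, ground_truth_labels, trained_models):
    # Probability each trained model assigns to the ground-truth label of this review,
    # combined into the final score by the geometric mean.
    probs = [m.predict(x[None, :], verbose=0)[0][y]
             for m, y in zip(trained_models, ground_truth_labels)]
    return float(np.prod(probs) ** (1.0 / 3.0))

In the full pipeline, the three models are re-trained from scratch for every fold of the 10-fold split, and review_confidence is evaluated only on reviews of the held-out fold, exactly as described above.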
§ RESULTS AND DISCUSSION §.§ Manual Inspection of Assigned Review Quality We examine both the review text and its corresponding confidence score to gain insight into the behavior of the proposed ReviewRanker system. Our goal is to understand why certain reviews receive higher scores than others. To this end, we randomly selected several reviews with high, average, and low confidence scores and analyzed their content (shown in Table <ref>). Through our analysis, we discovered that reviews with higher confidence scores are generally easy to understand, provide clear suggestions for changes to the code, and use specific variable and function names. Reviews with average confidence scores are sometimes easy to understand but lack substantive information, are excessively long, or contain lengthy blocks of code. Reviews with very low confidence scores are often too short to understand, lack meaningful information, and include asterisks and other special characters. Since ReviewRanker is composed of three training based neural network models, it is a data hungry system. So, larger the provided review set, better will ReviewRanker be able to model the developer confidence in a particular review. §.§ Model Performance Table <ref> shows the dataset size and performance of the three ReviewRanker models across the 10 folds. The high mean validation accuracy shows that the models can learn to answer the three simple questions associated with review confidence score generation effectively and can generalize well to validation data. The reported performance has some implications on the usage of ReviewRanker. If for some particular set of code reviews, we see that the 10-fold cross validation performance is not upto the mark, then what it means is that the three models have not been able to understand how to answer the three questions for the provided reviews. In that case, the final confidence score provided by ReviewRanker will not be a reliable metric to measure review quality. §.§ ReviewRanker Validation ReviewRanker has not been validated at industry-wide scale. We have made effort of validating ReviewRanker at small scale in three different software companies. But just as we have mentioned in the Introduction section, there is high human bias when it comes to assigning some kind of quality score to a review manually as part of the labeling process. Hence, our effort remains unsuccessful. Nevertheless, this is a system that has the potential of providing us with effective review quality score at industry scale. The system works end-to-end. The input is a set of reviews (no limitation in the number of reviews provided in the set) and the output is a csv file containing confidence score for each of the provided reviews. These scores can be used to find out characteristics of high, average and poor quality reviews; which in turn can aid software industries in coming up with proper guidelines for providing code reviews. This can save considerable time and cost by minimizing the occurrence of develop-review-develop cycles. Designing an effective industry-wide validation study can be an immediate next research step for ReviewRanker. §.§ Limitations ReviewRanker asks three questions regarding change type, code addition and code deletion while providing confidence score for a particular review. It does not use the context of code based on which the review has been provided. 
But we firmly believe that usage of code review context by the models for answering the three questions can greatly benefit the confidence score generation process. In such a case, sequence modeling approaches such as Long Short Term Memory (LSTM) <cit.> or Transformer <cit.> can be used as the three models of ReviewRanker. But one also has to take note of the fact that these sequence models are extremely data-hungry. So, if a particular review set has fewer than 10K reviews (which is our case as well), then it is better to use the simple feature extraction method and model architecture that we have proposed. The three questions that we ask the developers to label for each sample are not based on any large-scale study. We believe that a more suitable set of questions can be used for review quality estimation provided that a well designed large-scale study is undertaken for this purpose. The reviews that we are dealing with in the experimental dataset for ReviewRanker are line-level code reviews. We have not tested the method on block-level code reviews, although we expect similar results for such a case as well. Finally, because of the human bias factor, proper validation of the proposed ReviewRanker method could not be performed. § CONCLUSION In this paper, we propose ReviewRanker with the goal of enabling effective inspection of code review quality. We discover the human bias factor of a supervised learning based approach and thus resort to a human-bias-free multiple choice question scheme in order to indirectly get the confidence score for each review in a semi-supervised fashion. We ensure that the labeling process requires little to no effort from the developers. ReviewRanker can handle a large number of reviews (theoretically no limitation in the number of reviews provided) and can provide the confidence score for each review in an end to end manner with zero external effort required. The proposed system can be implemented easily at industry level to consistently identify the best reviewers and promote the best review practices with minimal time and effort. The adoption of this system is expected to enhance code quality and to reduce the back-and-forth cycle of the review process. Some immediate future research directions are - (a) well designed industry scale evaluation of ReviewRanker effectiveness in review quality estimation, (b) incorporation of code context in ReviewRanker models and (c) replacing the current set of questions with a more suitable set of questions through a large-scale study. We plan to make ReviewRanker publicly available in the form of a Python package upon acceptance.
http://arxiv.org/abs/2307.07310v1
20230714124325
Unsourced Random Access Using Multiple Stages of Orthogonal Pilots: MIMO and Single-Antenna Structures
[ "Mohammad Javad Ahmadi", "Mohammad Kazemi", "Tolga M. Duman" ]
cs.IT
[ "cs.IT", "math.IT" ]
Unsourced Random Access Using Multiple Stages of Orthogonal Pilots: MIMO and Single-Antenna Structures Mohammad Javad Ahmadi, Mohammad Kazemi, and Tolga M. Duman This research is accepted for publications in IEEE Transactions on Wireless Communications <cit.>, with DOI:10.1109/TWC.2023.3288376. The authors are with the Department of Electrical and Electronics Engineering, Bilkent University, 06800 Ankara, Turkey (e-mails: {ahmadi, kazemi, duman}@ee.bilkent.edu.tr). July 14, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================ We study the problem of unsourced random access (URA) over Rayleigh block-fading channels with a receiver equipped with multiple antennas. We propose a slotted structure with multiple stages of orthogonal pilots, each of which is randomly picked from a codebook. In the proposed signaling structure, each user encodes its message using a polar code and appends it to the selected pilot sequences to construct its transmitted signal. Accordingly, the transmitted signal is composed of multiple orthogonal pilot parts and a polar-coded part, which is sent through a randomly selected slot. The performance of the proposed scheme is further improved by randomly dividing users into different groups each having a unique interleaver-power pair. We also apply the idea of multiple stages of orthogonal pilots to the case of a single receive antenna. In all the set-ups, we use an iterative approach for decoding the transmitted messages along with a suitable successive interference cancellation technique. The use of orthogonal pilots and the slotted structure lead to improved accuracy and reduced computational complexity in the proposed set-ups, and make the implementation with short blocklengths more viable. Performance of the proposed set-ups is illustrated via extensive simulation results which show that the proposed set-ups with multiple antennas perform better than the existing MIMO URA solutions for both short and large blocklengths, and that the proposed single-antenna set-ups are superior to the existing single-antenna URA schemes. Unsourced random access (URA), internet of things (IoT), orthogonal pilots, massive MIMO, pilot detection, power diversity, CRC check, performance analysis, fading channel. § INTRODUCTION In contrast to the conventional grant-based multiple access, where the base station (BS) waits for the preamble from devices to allocate resources to them, in grant-free random access, users transmit their data without any coordination. Removing the need for scheduling results in some benefits, such as reducing the latency and signaling overhead, which makes the grant-free set-up interesting for serving many users. Sourced and unsourced random access schemes are the main categories of grant-free random access. In the former, both the messages and identities of the users are important to the BS, so each user is assigned a unique pilot. However, this is inefficient, especially considering the next-generation wireless networks with a massive number of connected devices <cit.>. 
In the so-called unsourced random access (URA), which was introduced by Polyanskiy in <cit.>, the BS cares only about the transmitted messages, i.e., the identity of the users is not a concern. The BS is connected to millions of cheap devices, a small fraction of which are active at a given time. In this set-up, the users employ a common codebook, and they share a short frame for transmitting their messages. In URA, the per-user probability of error (PUPE) is adopted as the performance criterion. Many low-complexity coding schemes are devised for URA over a Gaussian multiple-access channel (GMAC) including T-fold slotted ALOHA (SA) <cit.>, sparse codes <cit.>, and random spreading <cit.>. However, GMAC is not a fully realistic channel model for wireless communications. Therefore, in <cit.>, the synchronous Rayleigh quasi-static fading MAC is investigated, and the asynchronous set-up is considered in <cit.>. Recently, several studies have investigated Rayleigh block-fading channels in a massive MIMO setting <cit.>. In <cit.>, a covariance-based activity detection (AD) algorithm is used to detect the active messages. A pilot-based scheme is introduced in <cit.> where non-orthogonal pilots are employed for detection and channel estimation, and a polar list decoder is used for decoding messages. Furthermore, in a scheme called FASURA <cit.>, each user transmits a signal containing a non-orthogonal pilot and a randomly spread polar code. The coherence blocklength is defined as the period over which the channel coefficients stay constant. As discussed in <cit.>, the coherence time can be approximated as T_c≈ 1/4D_s, where D_s is the maximal Doppler spread. For a typical carrier frequency of 2 GHz, the coherence time may vary in the range of 1 ms–45 ms (corresponding to the transmitter speeds between 3 km/h–120 km/h). Moreover, the sampling frequency should be chosen in the order of coherence bandwidth, whose typical value is between 100 kHz–500 kHz in outdoor environments. Consequently, the coherence blocklength L_c can range from 100 to 20000 samples. Although the AD algorithm in <cit.> performs well in the fast fading scenario (e.g., when L_c≤ 320), it is not implementable with larger blocklengths due to run-time complexity scaling with L_c^2. In contrast, the schemes in <cit.> work well in the large-blocklength regimes (e.g., for L_c=3200); that is, in a slow fading environment where large blocklengths can be employed, their decoding performance is better than that of <cit.>. Most coding schemes in URA employ non-orthogonal pilots/sequences for identification and estimation purposes <cit.>. Performance of detectors and channel estimators may be improved in terms of accuracy and computational complexity by employing a codebook of orthogonal pilots; however, this significantly increases the amount of collisions due to the limited number of available orthogonal pilot sequences. To address this problem, the proposed schemes in this paper employ multiple stages of orthogonal pilots combined with an iterative detector. In the proposed scheme, the transmitted signal of each user is composed of J+1 stages: a polar codeword appended to J independently generated orthogonal pilots. Thus, the scheme is called multi-stage set-up with multiple receive antennas (MS-MRA). At each iteration of MS-MRA at the receiver side, only one of the pilot parts is employed for pilot detection and channel estimation, and the polar codeword is decoded using a polar list decoder. 
Therefore, the transmitted pilots in the remaining J-1 pilot parts are still unknown. To determine the active pilots in these, we adopt two approaches. In the first one, all the pilot bits are coded jointly with the data bits and cyclic redundancy check (CRC) bits (therefore, the transmitted bits of all the pilot parts are detected after successful polar decoding). As a second approach, to avoid waste of resources, we propose an enhanced version of the MS-MRA, where only data and CRC bits are fed to the polar encoder. At the receiver side, the decoder iteratively moves through different J+1 parts of the signal to detect all the parts of an active user's message. Since it does not encode the pilot bits, this is called MS-MRA without pilot bits encoding (MS-MRA-WOPBE). We further improve the performance of the MS-MRA by randomly dividing users into different groups. In this scheme (called multi-stage set-up with user grouping for multiple receive antennas (MSUG-MRA)), each group is assigned a unique interleaver-power pair. Transmission with different power levels increases the decoding probability of the users with the highest power (because they are perturbed by interfering users with low power levels). Since successfully decoded signals are removed using successive interference cancellation (SIC), users with lower power levels have increased chance of being decoded in the subsequent steps. By repeating each user's signal multiple times, we further extend the idea in MS-MRA and MSUG-MRA to the case of a single receive antenna. These extensions are called multi-stage set-up with a single receive antenna (MS-SRA) and multi-stage set-up with user grouping for a single receive antenna (MSUG-SRA). We demonstrate that, while the covariance-based AD algorithm in <cit.> suffers from performance degradation with large blocklengths, and the algorithms in <cit.> do not work well in the short blocklength regime (hence not suitable for fast fading scenarios), the MS-MRA and MSUG-MRA have a superior performance in both regimes. Furthermore, the MS-SRA and MSUG-SRA show better performance compared to similar solutions with a single receive antenna over fading channels <cit.>. Our contributions are as follows: * We propose a URA set-up with multiple receive antennas, namely MS-MRA. The proposed set-up offers comparable performance with the existing schemes with large blocklengths, while having lower computational complexity. Moreover, for the short-blocklength scenario, it significantly improves the state-of-the-art. * We provide a theoretical analysis to predict the error probability of the MS-MRA, taking into account all the sources of error, namely, errors resulting from pilot detection, channel estimation, channel decoding, SIC, and collisions. * We extend the MS-MRA set-up by randomly dividing the users into groups, i.e., MSUG-MRA, which is more energy-efficient than MS-MRA and other MIMO URA schemes. * Two URA set-ups with a single receive antenna, called MS-SRA and MSUG-SRA, are provided by adopting the ideas of the MS-MRA and MSUG-MRA to the case of a single receive antenna. They perform better than the alternative solutions over fading channels. The rest of the paper is organized as follows. Section II presents the system model for the proposed framework. The encoding and decoding procedures of the proposed schemes are introduced in Section III. In Section IV, extensive numerical results and examples are provided. Finally, Section V provides our conclusions. 
The following notation is adopted throughout the paper. We denote the sets of real and complex numbers by ℝ and ℂ, respectively. [𝐓]_(l,:) and [𝐓]_(:,l) represent the lth row and the lth column of 𝐓, respectively; Re(𝐭) and Im(𝐭) give the real and imaginary parts of 𝐭, respectively; the transpose and Hermitian of matrix 𝐓 are denoted by 𝐓^T and 𝐓^H, respectively; |.| denotes the cardinality of a set; 𝐈_M and 1_s denote the M× M identity matrix and the 1× s all-ones vector, respectively; we use [a_1:a_2] to denote {i∈ℤ: a_1≤ i≤ a_2}, and δ_i,j is the Kronecker delta. § SYSTEM MODEL We consider an unsourced random access model over a block-fading wireless channel. The BS is equipped with M receive antennas connected to K_T potential users, of which K_a are active in a given frame. Assuming that the channel coherence time is larger than L, we divide the length-n time-frame into S slots of length L each (n = SL). Each active user randomly selects a single slot to transmit B bits of information. In the absence of synchronization errors, the received signal vector corresponding to the sth slot at the mth antenna is written as 𝐲_m,s = ∑_i∈𝒦_s h_m,i 𝐱(𝐰(i)) + 𝐳_m,s, where 𝐲_m,s∈ℂ^1× L, 𝒦_s denotes the set of active user indices available in the sth slot, K_s:=|𝒦_s|, 𝐱(𝐰(i))∈ℂ^1× L is the encoded and modulated signal corresponding to the message bit sequence 𝐰(i)∈{0,1}^B of the ith user, h_m,i∼𝒞𝒩(0, 1) is the channel coefficient between the ith user and the mth receive antenna, and 𝐳_m,s∼𝒞𝒩(0, σ_z^2𝐈_L) is the circularly symmetric complex white Gaussian noise vector. Letting 𝒦_a and ℒ_d be the set of active user indices and the list of decoded messages, respectively, the PUPE of the system is defined in terms of the probability of false-alarm, p_fa, and the probability of missed-detection, p_md, as P_e = p_fa + p_md, where p_md = (1/K_a)∑_i∈𝒦_a Pr(𝐰(i)∉ℒ_d) and p_fa = 𝔼{n_fa/|ℒ_d|}, with n_fa being the number of decoded messages that were indeed not sent. The energy-per-bit of the set-up can be written as E_b/N_0 = LP/(σ_z^2 B), where P denotes the average power of each user per channel use. The objective is to minimize the required energy-per-bit for a target PUPE. § URA WITH MULTIPLE STAGES OF ORTHOGONAL PILOTS §.§ MS-MRA Encoder In this part, we introduce a multi-stage signal structure which is used in both of the proposed URA set-ups. As shown in Fig. <ref>, we divide the message of the ith user into J+1 parts (one coded part and J pilot parts) denoted by 𝐰_c(i) and 𝐰_p_j(i), j = 1,2,...,J, with lengths B_c and B_p, respectively, where B_c + JB_p = B. The ith user obtains its jth pilot sequence, 𝐛_ji, with length n_p = 2^B_p, by mapping 𝐰_p_j(i) to the orthogonal rows of an n_p× n_p Hadamard matrix 𝐁_n_p, which is generated as 𝐁_2 = [ 1 1; 1 -1 ] and 𝐁_2^i = 𝐁_2 ⊗𝐁_2^(i-1) for i = 2,3,…, where 𝐁_2^i denotes the 2^i × 2^i Hadamard matrix and ⊗ represents the Kronecker product. Since the number of possible pilots in the orthogonal Hadamard codebook is limited, it is likely that the users will be in collision in certain pilot segments, that is, they share the same pilots with other users. However, the parameters are chosen such that two different users are in a complete collision in all the pilot parts with a very low probability. To construct the coded sequence of the ith user, we accumulate all the message bits in a row vector as 𝐰(i) = [𝐰_p_1(i), 𝐰_p_2(i), …, 𝐰_p_J(i), 𝐰_c(i)], and pass it to a (2n_c, B+r) polar code, where r is the number of CRC bits.
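As an illustrative sketch of the pilot codebook generation and the energy-per-bit bookkeeping described above, the following Python/NumPy code builds the Hadamard codebook via the Kronecker recursion and maps a B_p-bit pilot message to one of its orthogonal rows; the parameter values at the end are arbitrary placeholders, not the values used in the paper's simulations.

import numpy as np

def hadamard_codebook(B_p):
    # n_p x n_p Hadamard matrix, n_p = 2^B_p, via the recursion B_{2^i} = B_2 (Kronecker) B_{2^(i-1)}.
    B2 = np.array([[1, 1], [1, -1]], dtype=int)
    B = B2
    for _ in range(B_p - 1):
        B = np.kron(B2, B)
    return B                      # rows are mutually orthogonal, entries in {+1, -1}

def select_pilot(codebook, pilot_bits):
    # Map the B_p pilot bits of a user to one orthogonal row of the codebook.
    idx = int("".join(str(int(b)) for b in pilot_bits), 2)
    return codebook[idx]

# Illustrative parameters only:
B_p, J, n_c, B, sigma2 = 9, 2, 512, 100, 1.0
P_p, P_c = 0.1, 0.05
n_p = 2 ** B_p
L = J * n_p + n_c                      # slot length: J pilot parts plus the coded part
P = (J * n_p * P_p + n_c * P_c) / L    # average power per channel use
EbN0 = L * P / (sigma2 * B)            # energy-per-bit, E_b/N_0 = L P / (sigma_z^2 B)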
Note that contrary to the existing schemes in URA, we feed not only data bits but the pilot bit sequences to the encoder. Hence, in the case of successful decoding, all the pilot sequences for the user can be retrieved. The polar codeword is then modulated using quadrature phase shift keying (QPSK), resulting in 𝐯_i∈{√(P_c/2)(± 1± j)}^1× n_c, where P_c is the average power of the polar coded part, and Gray mapping is used. The overall transmitted signal for the ith user consists of J pilot parts and one coded part, i.e., 𝐱_i = [√(P_p)𝐛_1i,√(P_p)𝐛_2i,,√(P_p)𝐛_Ji,𝐯_i]∈ℂ^1× L , where L = n_c+Jn_p and P_p denotes the average power of the pilot sequence. Accordingly, the received signal in a slot is composed of J+1 parts, for which, at each iteration, the decoding is done by employing one of the J pilot parts (sequentially) and the coded part of the received signal. Generally, only the non-colliding users can be decoded. Some non-colliding users in the current pilot stage may experience collisions in the other pilot parts. Therefore, by successfully decoding and removing them using SIC, the collision density is reduced, and with further decoding iterations, the effects of such collisions are ameliorated. §.§ MS-MRA Decoder We now introduce the decoding steps of MS-MRA where the transmitted signal in (<ref>) is received by M antennas through a fading channel. The jth pilot part and the polar coded part of the received signal in the sth slot of the MS-MRA can be modeled using (<ref>) as 𝐘_p_j = √(P_p)𝐇𝐁_j+𝐙_p_j∈ℂ^M× n_p , j = 1,2, , J, 𝐘_c = 𝐇𝐕+𝐙_c ∈ℂ^M× n_c, where 𝐇∈ℂ^M × K_s is the channel coefficient matrix with h_m,i in its mth row and ith column, 𝐙_p_j and 𝐙_c consist of independent and identically distributed (i.i.d.) noise samples drawn from 𝒞𝒩(0,σ_z^2) (i.e., a circularly symmetric complex Gaussian distribution), and 𝐛_ji and 𝐯_i determine the rows of 𝐁_j ∈{± 1}^K_s × n_p and 𝐕∈{√(P_c/2)(± 1± j)}^K_s × n_c, respectively, with i∈𝒦_s. Note that we have removed the slot indices from the above matrices to simplify the notation. The decoding process is comprised of five different steps that work in tandem. A pilot detector based on a Neyman-Pearson (NP) test identifies the active pilots in the current pilot part; channel coefficients corresponding to the detected pilots are estimated using a channel estimator; maximum-ratio combining (MRC) is used to produce a soft estimate of the modulated signal; after demodulation, the signal is passed to a polar list decoder; and, the successfully decoded codewords are added to the list of successfully decoded signals before being subtracted from the received signal via SIC. The process is repeated until there are no successfully decoded users in J consecutive SIC iterations. In the following, 𝐘^'_p_j and 𝐘^'_c denote the received signals in (<ref>) and (<ref>) after removing the list of messages successfully decoded in the current slot up to the current iteration. §.§.§ Pilot Detection Based on NP Hypothesis Testing At the jth pilot part, we can write the following binary hypothesis testing problem: 𝐮_ji|ℋ_0 ∼𝒞𝒩(0,σ_z^2𝐈_M), 𝐮_ji |ℋ_1 ∼𝒞𝒩(0,σ^2_1𝐈_M), where σ_1=√(σ_z^2+m_ij n_pP_p), 𝐮_ji := 𝐘^'_p_j𝐛̅_i^H /√(n_p), with 𝐛̅_i=[𝐁_n_p]_(i,:), ℋ_1 and ℋ_0 are alternative and null hypotheses that show the existence and absence of the pilot 𝐛̅_i at the jth pilot part, respectively, and m_ij is the number of users that pick the pilot 𝐛̅_i as their jth pilots. (<cit.>): Let 𝒟̂_j be the estimate of the set of active rows of 𝐁_n_p in the jth pilot part. 
Using γ-level Neyman-Pearson hypothesis testing (where γ is the bound on the false-alarm probability), 𝒟̂_j can be obtained as 𝒟̂_j = {l: 𝐮_jl^H𝐮_jl ≥ τ_0^'}, where τ_0^' = 0.5σ_z^2 Γ^-1_2M(1-γ), Γ_k(.) denotes the cumulative distribution function of the chi-squared distribution with k degrees of freedom χ^2_k, and Γ^-1_k(.) is its inverse. The detection probability of a non-colliding user (m_ij=1) is then obtained as P_D(δ_NP) = ℙ(𝐮_ji^H𝐮_ji ≥ τ_0^' | ℋ_1) =^(a) 1-Γ_2M(2τ_0^'/(σ_z^2+n_pP_p)) = 1-Γ_2M(σ_z^2 Γ_2M^-1(1-γ)/(σ_z^2+n_pP_p)), where in (a), we use the fact that (2/σ_1^2) 𝐮_ji^H𝐮_ji | ℋ_1 ∼ χ^2_2M. Note that a higher probability of detection is obtained in the general case of m_ij > 1. It is clear that the probability of detection is increased by increasing the parameters γ, n_p, P_p, and M. §.§.§ Channel Estimation Let 𝐁_𝒟̂_j∈{± 1}^|𝒟̂_j|× n_p be a sub-matrix of 𝐁_n_p consisting of the detected pilots in (<ref>), and suppose that 𝐛̃_jk=[𝐁_𝒟̂_j]_(k,:) is the corresponding pilot of the ith user. Since the rows of the codebook are orthogonal to each other, the channel coefficient vector of the ith user can be estimated as 𝐡̂_i = (1/(n_p√(P_p))) 𝐘^'_p_j 𝐛̃_jk^T. If the ith user is in a collision (m_ij>1), (<ref>) gives an unreliable estimate of the channel coefficient vector. However, this is not important since a CRC check is employed after decoding and such errors do not propagate. §.§.§ MRC, Demodulation, and Channel Decoding Let 𝐡_i be the channel coefficient vector of the ith user, where i∈𝒮̃_s with 𝒮̃_s denoting the set of remaining users in the sth slot. Using 𝐡̂_i in (<ref>), the modulated signal of the ith user can be estimated employing the MRC technique as 𝐯̂_i = 𝐡̂_i^H𝐘^'_c. Plugging (<ref>) into (<ref>), 𝐯̂_i is written as 𝐯̂_i = 𝐡̂_i^H𝐡_i𝐯_i + 𝐧_i, where 𝐧_i = ∑_k∈𝒮̃_s, k≠ i𝐡̂_i^H𝐡_k𝐯_k + 𝐡̂_i^H𝐙_c. The first and second terms on the right-hand side of (<ref>) are the signal and interference-plus-noise terms, respectively. We can approximate 𝐧_i to be Gaussian distributed, i.e., 𝐧_i ∼𝒞𝒩(0, σ_oi^2𝐈_n_c), where σ_oi^2 = (1/n_c)𝔼{𝐧_i𝐧_i^H} = P_c∑_k∈𝒟̂_j, k≠ i |𝐡̂_i^H𝐡_k|^2 + σ_z^2 ‖𝐡̂_i‖^2, which is obtained by treating the coded data sequences of different users to be uncorrelated. The demodulated signal can be obtained as 𝐠_i = [Im(ϑ_1i), Re(ϑ_1i), …, Im(ϑ_n_ci), Re(ϑ_n_ci)], where ϑ_ti = [𝐯̂_i]_(:,t). From (<ref>) and (<ref>), and using 𝐡̂_i^H𝐡_i ≈ ‖𝐡̂_i‖^2, each sample of 𝐠_i can be approximated as ±√(P_c/2) ‖𝐡̂_i‖^2 + n^', where n^' ∼𝒩(0, σ_oi^2/2). The following log-likelihood ratio (LLR) is obtained as the input to the polar list decoder: 𝐟_i = (2√(2P_c) ‖𝐡̂_i‖^2 / σ̂_oi^2) 𝐠_i, where σ̂_oi^2 is an approximation of σ_oi^2 which is obtained by replacing the 𝐡_k's by their estimates. At the jth pilot part, the ith user is declared as successfully decoded if 1) its decoder output satisfies the CRC check, and 2) by mapping the jth pilot part of its decoded message to the Hadamard codebook, 𝐛̃_jk is obtained. Then, all the successfully decoded messages (in the current and previous iterations) are accumulated in the set 𝒮_s, where |𝒮_s|+|𝒮̃_s| = K_s. §.§.§ SIC We can see in (<ref>) that the successfully decoded messages contain bit sequences of pilot parts and the coded part (𝐰_p_j(i), j = 1,2,...,J, and 𝐰_c(i)). Having the bit sequences of successfully decoded messages, we can construct the corresponding transmitted signals using (<ref>).
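Before turning to the SIC step, the sketch below (Python/NumPy with SciPy, intended only as an illustration of the expressions above) summarizes one receiver iteration: energy-based Neyman-Pearson pilot detection, orthogonal-pilot channel estimation, and MRC followed by the LLR computation fed to the polar list decoder.

import numpy as np
from scipy.stats import chi2

def detect_pilots(Yp, B_np, sigma2, gamma, M):
    # Declare codebook row l active if ||u_l||^2 >= tau0' = 0.5*sigma_z^2*Gamma_{2M}^{-1}(1-gamma).
    n_p = B_np.shape[1]
    tau0 = 0.5 * sigma2 * chi2.ppf(1.0 - gamma, df=2 * M)
    U = (Yp @ B_np.T) / np.sqrt(n_p)            # u_l is the l-th column of U (length M)
    energies = np.sum(np.abs(U) ** 2, axis=0)
    return np.where(energies >= tau0)[0]

def estimate_channel(Yp, pilot_row, P_p):
    # h_hat = Y'_{p_j} b^T / (n_p * sqrt(P_p)), exploiting the orthogonality of the pilot rows.
    n_p = pilot_row.size
    return (Yp @ pilot_row) / (n_p * np.sqrt(P_p))

def mrc_llrs(Yc, h_hat, P_c, sigma_o2):
    # MRC combining, [Im, Re] demapping of the QPSK symbols, and LLR scaling.
    v_hat = h_hat.conj() @ Yc                   # soft estimate of the 1 x n_c modulated signal
    g = np.empty(2 * v_hat.size)
    g[0::2], g[1::2] = v_hat.imag, v_hat.real
    return (2.0 * np.sqrt(2.0 * P_c) * np.linalg.norm(h_hat) ** 2 / sigma_o2) * g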
The received signal matrix can be written as 𝐘 = 𝐇_𝒮_s𝐗_𝒮_s+𝐇_𝒮̃_s𝐗_𝒮̃_s+𝐙_s, where 𝐘 is obtained by merging received signal matrices of different parts, i.e., 𝐘 = [𝐘_p_1,,𝐘_p_J, 𝐘_c ]∈ℂ^M× L with 𝐗_𝒮_s∈ℂ^|𝒮_s|× L and 𝐗_𝒮̃_s∈ℂ^|𝒮̃_s|× L including the signals in the sets 𝒮_s and 𝒮̃_s, and 𝐇_𝒮_s∈ℂ^M × |𝒮_s| and 𝐇_𝒮̃_s∈ℂ^M × |𝒮̃_̃s̃| comprising the channel coefficients corresponding to the users in the sets 𝒮_s and 𝒮̃_s, respectively. Employing the least squares (LS) technique, 𝐇_𝒮_s is estimated as 𝐇̂_𝒮_s = 𝐘𝐗_𝒮_s^H(𝐗_𝒮_s𝐗_𝒮_s^H)^-1. Note that 𝐗_𝒮_s consists of all the successfully decoded signals in the sth slot so far, and 𝐘 is the initially received signal matrix (not the output of the latest SIC iteration). The SIC procedure is performed as follows 𝐘^'=[𝐘^'_p_1, 𝐘^'_p_2,,𝐘^'_p_J, 𝐘^'_c ] = 𝐘- 𝐇̂_𝒮_s𝐗_𝒮_s. Finally, 𝐘^' is fed back to the pilot detection algorithm for the next iteration, where the next pilot part is employed. We note that if no user is successfully decoded in J consecutive iterations (corresponding to J different pilot parts), the algorithm is stopped. The details of the decoding stages of MS-MRA are shown in Fig. <ref> and Algorithm 1. Note that we will discuss MS-MRA-WOPBE, which deviates from the above model, in Section <ref>. The signal-to-interference-plus-noise ratio (SINR) at the output of MRC for a non-colliding user in the sth slot can be approximated as β_s≈ω_c_sP_c (ω_p_s𝔼{𝐡_i^4} +σ_z^2 n_pP_p𝔼{𝐡_i^2})(P_c (|𝒮̃_s|-1) + σ_z^2)(ω_p_s𝔼{𝐡_i^2}+M σ_z^2 n_pP_p), where ω_p_s=ω_c_s=1-| 𝒮_s|L if the transmitted signals are randomly interleaved, and ω_p_s=1-1E_xP_p| 𝒮_s|, ω_c_s=1-1E_xP_c| 𝒮_s|, otherwise, with E_x=Jn_pP_p+n_cP_c. See Appendix <ref>. We employ the above approximate SINR expression 1) to estimate the error probability of MS-MRA analytically, and 2) to determine the optimal power allocation for each group in MSUG-MRA. We further note that using this SINR approximation, the performance of the MS-MRA is well predicted in the low and medium K_a regimes (see Fig. <ref>). The reason why the SINR approximation does not work well in the high K_a regime is the employed approximations in Lemma <ref> (see the Appendix <ref> for details). §.§ Analysis of MS-MRA In this part, the PUPE of the MS-MRA is analytically calculated, where errors resulting from the collision, pilot detection, and polar decoder are considered. For our analyses, we assume that after successfully decoding and removing a user using a pilot part, the decoder moves to the next pilot part. Hence, in the tth iteration of the sth slot, we have |𝒮_s| =t-1, |𝒮̃_s| =K_s-t+1. Let ξ_k be the event that k out of K_s users remain in the sth slot, and define η_i:=𝐡_i^2, where 𝐡_i∼𝒞𝒩(0,𝐈_M). Assuming that the strongest users with highest η_i values are decoded first, we have 𝔼{η_i^m|ξ_k}=μ_(k,m), where μ_(k,m):= ∫_-∞^x̅_kη^m f_2M^χ^2(2η)dη∫_-∞^x̅_kf_2M^χ^2(2η) dη, with f_k^χ^2(.) denoting the PDF of the chi-squared distribution with k degrees of freedom and x̅_k=0.5 Γ_2M^-1(k/K_s). In the first iteration of the sth slot for which no user is decoded yet (all the K_s active users are available), since 𝐡_i∼𝒞𝒩(0,𝐈_M), we have 2η_i|ξ_K_s∼χ^2_2M. We assume that the users with higher values of η_i are decoded first. 
Hence, if in an iteration, k out of K_s users remain in the slot, the distribution of η_i is obtained by 2η_i|ξ_k ∼ {χ^2_2M}_k/K_s, where {.}_β removes the 1-β portion of the samples with higher values from the distribution and normalizes the distribution of the remaining samples, i.e., ℙ(η_i=y|ξ_k) = f_2M^χ^2(2y) / ∫_-∞^x̅_k f_2M^χ^2(2η) dη, for y < x̅_k, where x̅_k is obtained by solving the equation ℙ(η_i<x̅_k |ξ_K_s) = k/K_s, which results in x̅_k = 0.5 Γ_2M^-1(k/K_s). Therefore, we obtain 𝔼{η_i^m|ξ_k} = ∫_-∞^x̅_kη^m f_2M^χ^2(2η)dη / ∫_-∞^x̅_k f_2M^χ^2(2η) dη. We can see from (<ref>) and (<ref>) that the input of the polar decoder is a 1× 2n_c real codeword. Thus, the average decoding error probability of a non-colliding user in the tth iteration of a slot with K_s users can be approximated as (see <cit.>) P_K_s,t^dec ≈ Q( (0.5 log(1+α_K_s,t) - (B+r)/(2n_c)) / √((1/(2n_c)) α_K_s,t(α_K_s,t+2) log^2 e / (2(α_K_s,t+1)^2)) ), where Q(.) denotes the standard Q-function, and α_K_s,t is the SINR of a non-colliding user in the tth iteration of a slot with K_s users, which is calculated using Theorem <ref>, Lemma <ref>, and (<ref>) as α_K_s,t ≈ s_c_t P_c (s_p_tμ_(K_s-t+1,2) + (σ_z^2/(n_pP_p))μ_(K_s-t+1,1)) / ((P_c (K_s-t) + σ_z^2)(s_p_tμ_(K_s-t+1,1) + M σ_z^2/(n_pP_p))), where s_p_t = 1 - P_p(t-1)/E_x and s_c_t = 1 - P_c(t-1)/E_x. Note that since the powers of the signal and interference-plus-noise terms of 𝐯̂_i are equal in their real and imaginary parts, the SINRs of 𝐟_i in (<ref>) and 𝐯̂_i are the same. Therefore, in (<ref>), we employ the SINR calculated in Theorem <ref> for the input of the polar list decoder. Since decoding in the initial iterations well represents the overall decoding performance of the MS-MRA, we approximate the SINR of the first iteration by setting t=1 in (<ref>) as α_K_s,1 ≈ P_c M / ((σ_z^2 + P_c K_s)(1 + σ_z^2/(n_p P_p))). Concentrating on (<ref>), we notice that P_K_s,1^dec is a decreasing function of n_c and α_K_s,1. Besides, (<ref>) shows that α_K_s,1 increases by decreasing n_c and J (considering K_s ≈ K_a(Jn_p+n_c)/n), and increasing M, P_c, and P_p; however, it is not a strictly monotonic function of n_p. Since our goal is to achieve the lowest P_K_s,t^dec by spending the minimum E_b/N_0 = (n_cP_c + Jn_pP_p)/B, we can optimize the parameters n_c, n_p, P_c, and P_p. In the tth iteration of the sth slot, the probability of collision for a remaining user i∈𝒮̃_s can be approximated as P_K_s,t^col ≈ 1 - N_1^(t)/(K_s-t+1), where N_i^(k) denotes the average number of pilots that are in i-collision (selected by i different users) in the kth iteration, which is calculated as N_i^(k+1) ≈ N_i^(k) + κ_k((i+1)N_i+1^(k) - iN_i^(k)) for i ≥ 2, and N_1^(k+1) ≈ N_1^(k) + κ_k(2N_2^(k) - N_1^(k)) - 1/J for i = 1, where κ_k = (J-1)/(J(K_s-k+1)), and N_i^(1) ≈ n_p f_p(i;K_s/n_p) with f_p(i;a) denoting the probability mass function (PMF) of the Poisson distribution with the parameter a. See Appendix <ref>. Note that to extend the result in Theorem <ref> to an SIC-based system with only one pilot sequence (orthogonal or non-orthogonal), we only need to set J=1 in the above expressions. From (<ref>), the collision probability in the first iteration can be calculated as P_K_s,1^col ≈ 1 - e^-K_s/n_p, which is a decreasing function of n_p. Since the overall decoding performance of the system depends dramatically on the collision probability in the first iteration, we can increase n_p; however, this results in additional overhead.
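The error-probability building blocks above can be evaluated numerically as in the following Python/NumPy/SciPy sketch; the parameter values are illustrative placeholders, and the formulas are written out as we read the expressions above (our reconstruction of the flattened fractions).

import numpy as np
from scipy.stats import norm

def p_dec(alpha, B, r, n_c):
    # Normal-approximation estimate of the decoding error probability over 2*n_c real symbols:
    # Q( (0.5*log2(1+alpha) - (B+r)/(2*n_c)) / sqrt(V/(2*n_c)) ),
    # with dispersion V = alpha*(alpha+2)*(log2 e)^2 / (2*(alpha+1)^2).
    C = 0.5 * np.log2(1.0 + alpha)
    V = alpha * (alpha + 2.0) * (np.log2(np.e) ** 2) / (2.0 * (alpha + 1.0) ** 2)
    return norm.sf((C - (B + r) / (2.0 * n_c)) / np.sqrt(V / (2.0 * n_c)))   # Q(x) = norm.sf(x)

def alpha_first_iteration(P_c, M, K_s, sigma2, n_p, P_p):
    # First-iteration SINR approximation alpha_{K_s,1}.
    return P_c * M / ((sigma2 + P_c * K_s) * (1.0 + sigma2 / (n_p * P_p)))

def p_col_first_iteration(K_s, n_p):
    # First-iteration collision probability, 1 - exp(-K_s/n_p).
    return 1.0 - np.exp(-K_s / n_p)

# Illustrative numbers only (not the paper's simulation setting):
K_s, M, n_p, n_c, B, r = 25, 50, 512, 512, 100, 11
P_p, P_c, sigma2 = 0.1, 0.05, 1.0
alpha1 = alpha_first_iteration(P_c, M, K_s, sigma2, n_p, P_p)
print(p_dec(alpha1, B, r, n_c), p_col_first_iteration(K_s, n_p))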
Assuming a relatively large CRC length (hence negligible p_fa), the PUPE of the MS-MRA with S slots and K_a active users can be approximated as P_e ≈ 1-∑_r=1^K_a(1-ϵ_r) K_a-1r-1(1S)^r-1(1-1S)^K_a-r, where ϵ_r denotes the PUPE of a slot with r users, which is obtained as ϵ_r≈∑_j=1^rr-j+1rp_j,r, with p_j,r = (e_j,r)^r-j+1∏_f=1^j-1(1-(e_f,r)^r-f+1), and e_t,r = 1- P_D(δ_NP) (1- P_r,t^dec)(1-P_r,t^col), where P_r,t^dec, P_r,t^col, and P_D(δ_NP) are computed in (<ref>), Theorem <ref>, and (<ref>), respectively. Note that the result in Corollary <ref> can also be used in any other slotted system with SIC by replacing appropriate e_j,r. §.§ MS-MRA-WOPBE As discussed in Section <ref>, in the MS-MRA scheme, the pilot bits are fed to the polar encoder along with the data and CRC bits. To improve the performance by decreasing the coding rate, the MS-MRA-WOPBE scheme passes only the data and CRC bits to the encoder. To detect the bit sequences of different parts of the message, it employs an extra iterative decoding block called iterative inter-symbol decoder (IISD) (described in Section <ref>). At each step of IISD, it detects one part of a user's signal (polar or pilot part), appends the detected part to the current pilot (which was used for channel estimation in the previous step) to have an extended pilot, and re-estimates the channel coefficients accordingly. The encoding and decoding procedures of MS-MRA-WOPBE are described below. §.§.§ Encoder The ith user encodes its bits using the following steps (the general construction is shown in Fig. <ref>). Similar to the MS-MRA encoder in Section <ref>, B information bits are divided into J+1 parts as in (<ref>), and the transmitted signal is generated as in (<ref>). The only difference is in the construction of the QPSK signal. The encoder in MS-MRA-WOPBE defines two CRC bit sequences as 𝐜_2(i) = 𝐰(i)𝐆_2 and 𝐜_1(i) = [𝐰_c(i),𝐜_2(i)]𝐆_1, where 𝐆_2 ∈{0,1}^B× r_2 and 𝐆_1∈{0,1}^(B_c+r_2)× r_1 are generator matrices known by the BS and users. Then, it passes [𝐰_c(i),𝐜_2(i),𝐜_1(i)] to an (2n_c, B_c+r_1+r_2 ) polar encoder, and modulates the output by QPSK to obtain 𝐯_i∈{√(P_c/2)(± 1± j)}^1× n_c. §.§.§ Decoder As shown in Algorithm 1, MS-MRA-WOPBE exploits the same decoding steps as the MS-MRA scheme, except for the IISD step. We can see in Algorithm 1 that the jth pilot of the ith user is detected before employing the IISD. Then, IISD must detect the data (polar) sequence and the fth pilot of the ith user, where f=1,...,J, f≠ j. In the following, IISD is described in detail. Step 1 [Detecting 𝐰_c(i) ∀ i∈𝒟̂_j]: We first obtain 𝐠_i using (<ref>), where 𝐯̂_i = 𝐡̂_i^H𝐑_h^-1𝐘^'_c, and 𝐑_h=σ_z^2𝐈_M+P_c∑_l∈𝒟̂_j𝐡̂_l𝐡̂_l^H. Then, we pass 𝐟_i = 2√(2P_c)1-P_c𝐡̂_i^H𝐑_h^-1𝐡̂_i𝐠_i to the list decoder. A CRC check flag_CRC1(i)∈{0,1} and an estimate of [𝐰_c(i),𝐜_2(i),𝐜_1(i)] [In the output of the polar list decoder, there is a list of possible messages. If more than one messages satisfy the CRC check (𝐜_1(i) = [𝐰_c(i),𝐜_2(i)]𝐆_1), the most likely of them is returned as the detected message and the CRC flag is set to one. Otherwise, the most likely message is returned as the detected message and the CRC flag is set to zero.] are obtained by the polar list decoder. Step 2 [Updating 𝐡̂_i]: Since the jth pilot and polar codeword of the ith user are detected so far, we append them to construct a longer signal as 𝐪_i = [𝐛_ji, 𝐯_i]∈ℂ^1× (n_p+n_c). 
Then, we update 𝐡̂_i by MMSE estimation as 𝐡̂_i = 𝐘^'_q 𝐑_q^-1𝐪_i^H, where 𝐑_q=σ_z^2𝐈_(n_p+n_c)+∑_l∈𝒟̂_j𝐪_l^H𝐪_l, and 𝐘^'_q = [𝐘^'_p_j,𝐘^'_c]. Step 3 [Detecting 𝐰_p_f(i) ∀ i∈𝒟̂_j, f≠ j]: Assuming that the tth row of the Hadamard matrix is active in the fth pilot part (f≠ j), we estimate the corresponding channel coefficient as 𝐬_ft=1 n_p√(P_p)𝐘^'_p_f𝐛̃_ft^T (see (<ref>)). To find the fth pilot sequence of the ith user, we find the pilot whose corresponding channel coefficient vector is most similar to 𝐡̂_i, i.e., we maximize the correlation between 𝐡̂_i and 𝐬_ft as t̂_fi = max_t|𝐡̂_i^H𝐬_ft|^2𝐬_ft^H𝐬_ft, f=1,...,J, f≠ j. Step4 [Updating 𝐡̂_i]: Since the bit sequences of all J+1 parts are detected, we can construct 𝐱_i using (<ref>). The channel coefficient vector can be updated by MMSE as 𝐡̂_i = 𝐘^'𝐑^-1𝐱_i^H, where 𝐑=σ_z^2𝐈_L+∑_l∈𝒟̂_j𝐱_l^H𝐱_l. If the number of users that satisfy flag_CRC1(i)=1 is not changed in an iteration, the iteration is stopped, otherwise, the algorithm goes to Step 1 for another iteration with updated 𝐡̂_i. Users whose bit sequences satisfy 𝐜_2(i) = 𝐰(i)𝐆_2 and 𝐜_1(i) = [𝐰_c(i),𝐜_2(i)]𝐆_1 are added to the set 𝒮_t^' j as successfully decoded users of the current iteration. §.§ MSUG-MRA Different from MS-MRA where the power of every user is the same and signals are not interleaved, MSUG-MRA defines G groups, each being assigned unique interleaver and power pair (π_g(.),P_p_g,P_c_g), g=1,2,...,G. We assume that ϕ = P_p_gP_c_g is constant in all groups, hence each group can be identified with a unique interleaver-power pair (π_g(.),P_c_g), which is known at both transmitter and receiver sides. The details of encoding and decoding procedures as well as the power selection strategy are explained below. Note that we assume without loss of generality that P_c_1<P_c_2...<P_c_G. §.§.§ Encoder The encoding is adopted as follows: * Every user randomly selects a group, e.g., with index g. * Each user employs P_c_g and ϕ P_c_g as the powers of the coded and pilot parts, with which it generates its multi-stage signal 𝐱_i similar to MS-MRA (according to (<ref>)). * The transmitted signal is created as 𝐱̃_i = π_g(𝐱_i). §.§.§ Decoder In each iteration, the decoder tends to decode the messages belonging to the users of the dominant group (the Gth group with the highest power level). After decoding and removing users in the Gth group, users in the (G-1)st group become the dominant ones. Using the same trend, all the groups have the chance to be the dominant group at some point. Since users in different groups are interleaved differently, signals of users in other groups are uncorrelated from the signals in the dominant group. Thus, letting the g_0th group to be dominant, we approximately model the fth signal in the the gth group (g≠ g_0) as 𝐱̃_f∼𝒞𝒩(0,ζ P_c_g𝐈_L), where ζ =Jϕ n_p+n_cL. Therefore, when the g_0th group is dominant (the users in the groups with indices greater than g_0 are already removed using SIC), users in the g_0th group are perturbed by i.i.d. noise samples drawn from 𝒞𝒩(0,δ_g_0 ), with δ_g_0≈ζ K_0∑_g=1^g_0-1P_c_g+σ_z^2, where K_0=K_aSG is the average number of users in each group of the current slot. Consequently, by replacing σ_z^2, P_p, and P_c with δ_g_0, ϕ P_c_g_0, and P_c_g_0 in the decoding steps of MS-MRA (in Section <ref>), the decoding procedure of MSUG-MRA is obtained as: * Deinterleave the rows of the received signals:Ỹ^'_p_j =π_g_0^-1(𝐘^'_p_j) and Ỹ^'_c =π_g_0^-1(𝐘^'_c). 
* Find active pilots as 𝒟̂_j = {l:ũ_jl^Hũ_jl≥ 0.5δ_g_0Γ^-1_2M(1-γ)}, where ũ_ji := Ỹ^'_p_j𝐛̅_i^H /√(n_p). * Channel estimation and MRC: 𝐯̂_i = 𝐡̂_i^HỸ^'_c, where 𝐡̂_i =1 n_p√(ϕ P_c_g_0)Ỹ^'_p_j𝐛̃_jk^T, and 𝐛̃_jk is one of the detected pilots. * Pass 𝐟_i = 2√(2 P_c_g_0)𝐡̂_i^2σ̂_oi^2𝐠_i to the polar decoder, where σ̂_oi^2=P_c_g_0∑_k∈𝒟̂_j, k≠ i^ |𝐡̂_i^H𝐡̂_k|^2 +δ_g_0𝐡̂_i^2, and 𝐠_i is defined in (<ref>). * Regenerate signals of successfully decoded users according to Section <ref> (using (π_g_0(.),P_c_g_0) pair), and collect them in the rows of 𝐗̃_𝒮_s. * Apply LS-based SIC similar to (<ref>), i.e., 𝐘^' = 𝐘(𝐈_L- 𝐗̃_𝒮_s^H(𝐗̃_𝒮_s𝐗̃_𝒮_s^H)^-1𝐗̃_𝒮_s). Note that this loop is repeated for G different group indices and J different pilot parts, and the iteration is stopped if there is no successfully decoded users in GJ consecutive iterations. §.§.§ Power Calculation When MSUG-MRA starts the decoding in the g_0th group, there are |𝒮_s| ≈ K_0 (G-g_0) successfully decoded users from previous groups (with higher power levels), |𝒮̃_s|=K_0 users remain in the g_0th group, and users in the current group are perturbed with a complex Gaussian noise with covariance matrix δ_g_0𝐈_M. Therefore, the SINR of a non-colliding user in the current group can be calculated by replacing |𝒮̃_s| ≈ K_0, |𝒮_s|=K_0(G-g_0), 𝔼{𝐡_i^2}=M, 𝔼{𝐡_i^4}=M^2, P_c = P_c_g_0, P_p = ϕ P_c_g_0, σ_z^2 ≈δ_g_0, and ω_p_s=ω_c_s=1-| 𝒮_s|L in (<ref>) as β_g_0^'≈ρ_g_0MP_c_g_0^2+δ_g_0n_p ϕP_c_g_0(P_c_g_0 (K_0-1) + δ_g_0)( P_c_g_0+δ_g_0ρ_g_0 n_pϕ), where ρ_g_0 = 1-K_0 (G-g_0)L. To impose similar performance on different groups, we set β_1^'=β_2^'==β_G^'. Solving this equation, the power of the gth group satisfies c_1 P_g^2+c_2 P_g+c_3=0, where c_1 = (K_0-1) -ρ_gMβ_g-1^', c_2 = δ_g(1+(K_0-1)ϕ n_p ρ_g-1ϕ n_p β_g-1^'), c_3 = δ_g^2ϕ n_p ρ_g. Solving this equation, we have P_t = -c_2 + √(c_2^2 - 4c_1c_3)2c_1, s.t. 1G∑_f=1^G P_f=P and P_t ∈ℝ^+. Note that the MS-MRA scheme is a special case of the MSUG-MRA with G=1. §.§ MS-SRA and MSUG-SRA In this part, we apply the proposed MIMO coding schemes to the case of a single receive antenna. To accomplish this, we repeat each user's length-L signal multiple times to create temporal diversity in MS-SRA and MSUG-SRA. Accordingly, we divide the whole frame into V sub-frames of length n^' = n/V, then divide each sub-frame into S slots of length L = n^'/S. Each user randomly selects a slot index, namely s, and transmits its signal, through the sth slot of each sub-frame. Assuming the coherence time to be L, each sub-frame is analogous to a receive antenna. Therefore, the transmitted messages in MS-SRA and MSUG-SRA can be decoded using MS-MRA and MSUG-MRA decoders in Sections <ref> and <ref>, respectively, considering V receive antennas. Since each user repeats its signal V times, for this case, we have E_b/N_0=VLPσ_z^2 B. §.§ Computational Complexity We focus on the number of multiplications as a measure of the computational complexity, and make a complexity comparison among the proposed and existing URA solutions. The per-iteration computational complexity of the MS-MRA in a slot is calculated as follows: The pilot detection in (<ref>) has a complexity of 𝒪( n_p^2MJS) corresponding to J different pilot parts and S different slots, where 𝒪(.) is the standard big-O notation, denoting the order of complexity. 
The channel estimator in (<ref>) does not require any extra computation, because 𝐡̂_i corresponds to 𝐮_ji which is calculated before for pilot detection; the MRC in (<ref>) has a complexity of 𝒪(∑_j=1^J |𝒟_j|M n_c S ); to compute the LLR in (<ref>), the required computational complexity is 𝒪( ∑_j=1^J|𝒟_j|^2 M S ); the computational complexity of the polar list decoder is <cit.> 𝒪(∑_j=1^J|𝒟_j|n_clog n_c S); and, the SIC has a complexity of 𝒪(ML|𝒮_s|S+|𝒮_s|^2LS). We know from (<ref>) that in the first iteration, we have |𝒟_j|≈ n_p-n_p e^-K_a/(n_pS)< K_a/S, and |𝒮_s|=0; in the last iterations, we have |𝒮_s|≈ K_a/S and |𝒟_j| ≈ 0. Hence, considering M≫log n_c and n_c|𝒟_j|≫ n_p, we can compute the computational complexity of the MS-MRA in the first and last iterations as 𝒪(K_aMJ(n_c+K_a/S)) and 𝒪(LK_a(M+K_a/S)), respectively. Considering the computational complexity in the intermediate iterations to be in the same order, the per-iteration computational complexity of the MS-MRA can be bounded by 𝒪(n_p^2MJS+max(K_aMJ(n_c+K_a/S) ,LK_a(M+K_a/S))). Note that the computational complexity of MSUG-MRA is in the same order as MS-MRA, and for MS-SRA and MSUG-SRA schemes, the computational complexity is obtained by replacing M by V in the above figures. Note that by employing a low-complexity adaptive filter <cit.>, we can considerably reduce the computational complexity of the LS-based channel estimator in (<ref>) (hence the total computational complexity of the proposed schemes). Looking at Algorithm 1, we can infer that MS-MRA-WOPBE is obtained by employing the same pilot detector (with complexity 𝒪( n_p^2MJS)), channel estimator (does not incur any extra computational complexity), and SIC (with complexity 𝒪(ML|𝒮_s|S+|𝒮_s|^2LS)) as in the MS-MRA case, except for employing the IISD block. In Step 1 of IISD, the complexity for computing 𝐟_i and implementing polar decoder are 𝒪( (M n_c+M^2)T_IS∑_j=1^J|𝒟_j| ) and 𝒪(T_In_clog n_cS∑_j=1^J|𝒟_j| ), respectively, where T_I denotes the number of iterations of IISD. In the Step 2 of IISD, computing 𝐡̂_i and e_k has the complexity of 𝒪(T_I(n_c+n_p)^2S∑_j=1^J|𝒟_j|+T_I(n_c+n_p)MS∑_j=1^J|𝒟_j|) and 𝒪(T_I(J-1)n_pMS∑_j=1^J|𝒟_j|), respectively. The computational complexity of obtaining ĥ_i in Step 3 of IISD is 𝒪(T_I(L^2+LM)S∑_j=1^J|𝒟_j|). Then, replacing |𝒟_j| and |𝒮_s| with their approximate values (discussed in the previous paragraph), the overall computational complexity of the MS-MRA-WOPBE is bounded by 𝒪(n_p^2MJS+max(((L^2+M^2)+ML)T_IJK_a,LK_a(M+K_a/S))). For comparison purposes, the dominant per-iteration computational complexity of the FASURA in <cit.> (which is due to energy detector and SIC operation) can also be computed as 𝒪(M(n_p+L^' n_c)2^B_f+K_a(nM+n^2) ), where B_f denotes the number of pilot bits, n is the frame length, and L^' is the length of the spreading sequence. § NUMERICAL RESULTS We provide a set of numerical results to assess the performance of the proposed URA set-ups. In all the results, we set B = 100, the number of CRC bits r = 11, the Neyman-Pearson threshold γ = 0.1, and the list size of the decoder to 64. For MS-MRA and MSUG-MRA, we set the frame length n≈ 3200, and P_e = 0.05. The corresponding values for the MS-SRA and MSUG-SRA are n≈ 30000, and P_e = 0.1. In Fig. <ref>, the performance of the proposed MS-MRA and MSUG-MRA is compared with the short blocklength scheme of <cit.> with the number of antennas M=100 and slot length L= 200. 
(In this scenario, we consider a fast-fading environment, where the coherence blocklength is considered as L_c = 200). To facilitate a fair comparison, we consider (J,n_p,n_c) = (2,32,128) (L=192) and P_p/P_c =1 (ϕ = 1 for MSUG-MRA) for all the proposed schemes. For MSUG-MRA, the value of G is set as G=1 for K_a≤ 400, G = 3 for K_a= 500, G = 6 for 600≤ K_a ≤ 800, G = 8 for 900 ≤ K_a≤ 1000, and G=10 for K_a>1000. The superiority of the proposed schemes over the one in <cit.> is mostly due to the more powerful performance of the polar code compared to the simple coding scheme adopted in <cit.> and the use of the SIC block, which significantly diminishes the effect of interference. We also observe that MS-MRA-WOPBE outperforms MS-MRA, which is due to 1) employing IISD, which iteratively improves the accuracy of the channel estimation, and 2) lower coding rate by not encoding the pilot bits. Besides, the range of the number of active users that are detected by the MSUG-MRA is higher than those of MS-MRA and MS-MRA-WOPBE schemes. This improvement results from randomly dividing users into different groups, which provides each group with a lower number of active users (hence a lower effective interference level). In Fig. <ref>, we compare the proposed MS-MRA and MSUG-MRA with the ones in <cit.>, considering the slow-fading channel with coherence blocklength L_c=3200. We set (J,n_p,n_c) = (2,256,512), M=50, P_p/P_c =0.66 for MS-MRA. We choose (J,n_p,n_c, G) = (2,256,512, 1) for K_a ≤ 700, (J,n_p,n_c, G) = (2,64,512, 6) for K_a = 900, and (J,n_p,n_c, G) = (2,64,512, 18) for K_a > 900 with ϕ = 0.66. Thanks to employing the slotted structure, SIC, and orthogonal pilots, all the proposed schemes have superior performance compared to <cit.>. Due to employing random spreading and an efficient block called NOPICE, FASURA in <cit.> performs better than the proposed MS-MRA and MSUG-MRA in the low K_a regimes; however, its performance is worse than the MSUG-MRA in higher values of K_a (thanks to the random user grouping employed in MSUG-MRA). The proposed MS-MRA-WOPBE also shows a similar performance as FASURA. To achieve the result in Fig. <ref>, FASURA sets n_p = 896, L^' = 9, n_c = 256, n = 3200, B_f = 16, and M=50. The order of computational complexity for these schemes is given in the performance-complexity plot in Fig. <ref>. It can be interpreted from this figure that the proposed MS-MRA-WOPBE has comparable accuracy to FASURA while offering a lower computational complexity. Note also that despite the higher required E_b/N_0 compared to FASURA, MS-MRA offers very large savings in terms of computational complexity, which is attributed to employing orthogonal pilots, slotted structure, and simpler decoding blocks. As a further note, FASURA considers 2^B_p possible spreading sequences of length L^' for each symbol of the polar codeword; hence every transceiver should store n_c 2^B_p vectors of length L^', as well as a pilot codebook of size 2^J× n_p. For typical values reported in <cit.>, the BS and every user must store 1.6× 10^7 vectors of length 9 and a matrix of size 5.8× 10^7. For the proposed schemes in this paper, every transceiver must store only an orthogonal codebook of size n_p× n_p, where n_p = 256. Thus, FASURA requires about 3000 times larger memory than our proposed schemes, which may be restrictive for some target URA applications such as sensor networks, where a massive number of cheap sensors are deployed. 
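The memory figures quoted above can be reproduced with a short back-of-the-envelope computation. Taking B_p = B_f = 16 bits to index FASURA's spreading sequences is our assumption, chosen so that the counts match the numbers reported in the text.

n_c, L_sp, B_f, n_p_fasura = 256, 9, 16, 896    # FASURA parameters quoted above
n_p_proposed = 256                               # proposed schemes: one orthogonal codebook

spreading = n_c * 2 ** B_f * L_sp                # ~1.6e7 vectors of length L' = 9
pilots    = 2 ** B_f * n_p_fasura                # ~5.8e7 pilot codebook entries
fasura    = spreading + pilots
proposed  = n_p_proposed ** 2                    # single n_p x n_p Hadamard codebook

print(f"FASURA ~ {fasura:.2e} entries, proposed ~ {proposed:.1e}, "
      f"ratio ~ {fasura / proposed:.0f}x")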
Moreover, unlike FASURA, the proposed solutions are implementable with short blocklengths (see Fig. <ref>), which makes them appropriate for fast fading scenarios as well. In Fig. <ref>, we compare the theoretical PUPE in (<ref>) with the simulation results of the MS-MRA for three different scenarios (M=50, 100, 200) with P_p/P_c =0.66 and (J,n_p,n_c) = (2,256,512). It is shown that the approximate theoretical analysis well predicts the performance of the MS-MRA for K_a≤ 700, however, the results are not consistent for higher values of K_a. The reason for the mismatch for the K_a> 800 regime is the approximations employed while analyzing SIC in Lemma <ref> (e.g., n_c,n_p≫ 1, |𝒮_s|≫ 1, uncorrelated QPSK codewords of two different users, and uncorrelated samples of 𝐱_i). Fig. <ref> compares the MS-SRA and MSUG-SRA with the existing single-antenna solutions <cit.>. For both set-ups, we set (J,n_p,n_c) = (2,64,512), P_p/P_c = 1 (ϕ = 1 for MSUG-SRA), (S,V) = (6,8) for K_a≤ 200, and (S,V) = (12,4) for K_a≥ 300. For MSUG-SRA, we also choose G=1 for K_a≤ 300, G=3 for 500≤ K_a ≤ 700, and G = 6 for K_a ≥ 900. It is observed that the proposed MS-SRA has a superior performance compared to the existing URA approaches for the low number of active users, However, it performs worse than the scheme in <cit.> for higher values of K_a. Furthermore, the proposed MSUG-SRA outperforms existing solutions, and its effective range of K_a is up to 1500 users. § CONCLUSIONS We propose a family of unsourced random access solutions for MIMO Rayleigh block fading channels. The proposed approaches employ a slotted structure with multiple stages of orthogonal pilots. The use of a slotted structure along with the orthogonal pilots leads to the lower computational complexity at the receiver, and also makes the proposed designs implementable for fast fading scenarios. We further improve the performance of the proposed solutions when the number of active users is very large by randomly dividing the users into different interleaver-power groups. The results show that the proposed MIMO URA designs are superior for both short and large blocklengths, while offering a lower computational complexity. § PROOF OF THEOREM <REF> Assuming that the transmitted data part contains uncorrelated and equally likely QPSK symbols, for i, j ∈𝒮_s and n_p, n_c →∞, the transmitted signals satisfy 1E_x𝐱_i𝐱_j^H p→ 0, where E_x= Jn_pP_p + n_c P_c. Let 𝐛_ji and 𝐛_jr be the jth pilots of the ith and rth users, and 𝐯_i and 𝐯_r be the corresponding polar-coded and QPSK-modulated signals. Since 𝐛_ji and 𝐛_jr are randomly chosen rows of the Hadamard matrix, 𝐛_ji𝐛_ji^T = n_p with probability 1n_p, and it is zero with probability 1-1n_p. Besides, for n_c→∞, v_it and v_rt are zero-mean and uncorrelated, where v_it=[𝐯_i]_(:,t). Therefore, lim_n_p,n_c→∞ ℙ( 1E_x|𝐱_r𝐱_i^H|>0) = lim_n_p,n_c→∞ℙ( 1E_x|P_p ∑_j=1^J 𝐛_jr𝐛_ji^H+𝐯_r𝐯_i^H|>0) ≤ lim_n_p,n_c→∞ℙ( P_pE_x∑_j=1^J |𝐛_jr𝐛_ji^H|+1E_x|𝐯_r𝐯_i^H|>0) ≤ lim_n_p,n_c→∞∑_j=1^J ℙ(P_pE_x|𝐛_jr𝐛_ji^H|>0) +ℙ(1E_x|∑_t=1^n_cv_rtv_it^H|>0) ≈ lim_n_p,n_c→∞JP_pn_pE_x+ℙ(n_cE_x|𝔼{v_rtv_it^H}|>0) ≈ 0. Note that, strictly speaking, the uncorrelated QPSK symbol assumption is not accurate for coded systems. Nevertheless, it is useful to obtain a good approximation of SINR, as we will show later. 
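The near-orthogonality statement of the lemma is also easy to probe empirically. The sketch below (illustrative parameters, with uncoded QPSK symbols standing in for polar codewords) draws two independent multi-stage signals built from random Hadamard rows and evaluates the normalised inner product (1/E_x)|x_r x_i^H| over many trials.

import numpy as np
from scipy.linalg import hadamard

def random_signal(J, n_p, n_c, P_p, P_c, rng):
    H = hadamard(n_p)
    pilots = [np.sqrt(P_p) * H[rng.integers(n_p)] for _ in range(J)]
    qpsk = np.sqrt(P_c / 2) * (rng.choice([-1, 1], n_c) + 1j * rng.choice([-1, 1], n_c))
    return np.concatenate(pilots + [qpsk])

rng = np.random.default_rng(0)
J, n_p, n_c, P_p, P_c = 2, 256, 512, 0.05, 0.05
E_x = J * n_p * P_p + n_c * P_c
vals = [abs(np.vdot(random_signal(J, n_p, n_c, P_p, P_c, rng),
                    random_signal(J, n_p, n_c, P_p, P_c, rng))) / E_x
        for _ in range(2000)]
print(np.mean(vals), np.max(vals))   # both should be well below 1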
By applying LS-based SIC, the residual received signal matrices of pilot and coded parts can be written based on the signal and interference-plus-noise terms as 𝐘^'_p_j≈ √(P_p)𝐡_i𝐛_ji𝐋_p_j+√(P_p)∑_k∈𝒮̃_s, k≠ i^𝐡_k𝐛_jk𝐋_p_j+ 𝐙_n,p_j, 𝐘^'_c≈𝐡_i𝐯_i𝐋_c+∑_k∈𝒮̃_s, k≠ i^𝐡_k𝐯_k𝐋_c+ 𝐙_n,c, where 𝐡_i ∈ℂ^M× 1 is the channel coefficient vector of the ith user, 𝐋_p_j=ω_p_s𝐈_n_p, 𝐋_c=ω_c_s𝐈_n_c, and the elements of 𝐙_n,p_j and 𝐙_n,c are drawn from 𝒞𝒩(0,ω_c_sσ_z^2) and 𝒞𝒩(0,ω_p_sσ_z^2), respectively, with ω_p_s and ω_c_s are as defined in the statement of the Theorem <ref>. Plugging (<ref>) and (<ref>) into (<ref>), we obtain 𝐘^' = 𝐇_𝒮_s𝐗_𝒮_s𝐋+𝐇_𝒮̃_s𝐗_𝒮̃_s𝐋+𝐙_s𝐋 =𝐇_𝒮̃_s𝐗_𝒮̃_s𝐋+𝐙_s𝐋 = 𝐡_i𝐱_i𝐋+ ∑_k∈𝒮̃_s, k≠ i^𝐡_k𝐱_k𝐋+𝐙_n , where 𝐋 = 𝐈_L-𝐗_𝒮_s^H(𝐗_𝒮_s𝐗_𝒮_s^H)^-1𝐗_𝒮_s, and 𝐙_n=𝐙_s 𝐋. Since 𝐋^H 𝐋=𝐋 and 𝐙_s∼𝒞𝒩(0,σ_z^2 𝐈_L), we have 𝐙_n∼𝒞𝒩(0,σ_z^2𝔼{𝐋}) . Since the values of n_p and n_c are large, and using (<ref>), we have 1E_x𝐗_𝒮_s𝐗_𝒮_s^H ≈𝐈_|𝒮_s|, where E_x= Jn_pP_p + n_c P_c. In other words, we can approximate 𝐋 as 𝐋≈𝐈_L-1E_x∑_r∈𝒮_s^𝐱_r^H𝐱_r. Using the weak law of large numbers, and assuming samples of 𝐱_r to be uncorrelated and |𝒮_l|≫ 1, we can rewrite 𝐋 in (<ref>) as 𝐋≈[ 𝐋_p_1 ... 0 0; ⋮ ⋱ ⋮ ⋮; 0 ... 𝐋_p_J 0; 0 ... 0 𝐋_c ] , where 𝐋_p_j=ω_p_s𝐈_n_p and 𝐋_c=ω_c_s𝐈_n_c with ω_p_s=ω_c_s=1-| 𝒮_s|L if the transmitted signals are randomly interleaved, and ω_p_s=1-1E_xP_p| 𝒮_s|, ω_c_s=1-1E_xP_c| 𝒮_s|, otherwise. Letting 𝐙_n=[𝐙_n,p_1, , 𝐙_n,p_J, 𝐙_n,c], we can infer from (<ref>) and (<ref>) that the elements of 𝐙_n,p_j and 𝐙_n,c approximately follow 𝒞𝒩(0,ω_p_sσ_z^2) and 𝒞𝒩(0,ω_c_sσ_z^2), respectively. Besides, using (<ref>) and the signal structure in (<ref>), we can divide (<ref>) into pilot and coded parts as in (<ref>) and (<ref>). The estimated channel coefficients of a non-colliding user approximately satisfy the following expressions: 𝔼{𝐡̂_i^2} ≈ω_p _s^2 𝔼{𝐡_i^2}+M ω_p_sσ_z^2 n_pP_p, 𝔼{ |𝐡̂_i^H𝐡_k|^2 } ≈ω_p_s^2 𝔼{𝐡_i^2}+M ω_p_sσ_z^2 n_pP_p, 𝔼{ |𝐡̂_i^H𝐡_i |^2} ≈ω_p_s^2𝔼{𝐡_i^4} +ω_p_sσ_z^2 n_pP_p𝔼{𝐡_i^2} . Using the approximation of 𝐘^'_p_j in (<ref>) in (<ref>), the channel coefficient vector of the ith user can be estimated as 𝐡̂_i≈ ω_p_s n_p𝐡_i𝐛_ji𝐛̃_jk^H+ω_p_s n_p∑_f∈𝒮̃_s, f≠ i^𝐡_f𝐛_jf𝐛̃_jk^H+ 𝐳_p_j,n a ω_p_s𝐡_i+𝐳_p_j,n, where 𝐳_p_j,n = 1 n_p√(P_p)𝐙_n,p_j𝐛̃_jk^H, and in (a), we use the assumption that the ith user is non-colliding, hence 𝐛̃_jk is only selected by the ith user (𝐛_ji=𝐛̃_jk and 𝐛_jf≠𝐛̃_jk^H for f∈𝒮̃_s, f≠ i). We can argue the following approximation 𝐳_p_j,n∼𝒞𝒩(0,ω_p_sσ_z^2 n_pP_p). Using (<ref>), we can show that 𝔼{𝐡̂_i^2}≈ω_p_s^2 𝔼{𝐡_i^2}+M ω_p_sσ_z^2 n_pP_p, 𝔼{ |𝐡̂_i^H𝐡_i |^2}≈ω_p_s^2𝔼{𝐡_i^4} +ω_p_sσ_z^2 n_pP_p𝔼{𝐡_i^2}, and 𝔼{ |𝐡̂_i^H𝐡_k|^2 } = 𝔼{𝐡̂_i^2}. Plugging (<ref>) into the MRC expression in (<ref>), 𝐯̂_i can be estimated as 𝐯̂_i≈ω_c_s𝐡̂_i^H𝐡_i 𝐯_i+ 𝐳_in, where the first term on the right-hand side is the signal term, and 𝐳_in=∑_k∈𝒮̃_s, k≠ i^𝐡̂_i^H𝐡_k𝐯_k𝐋_c+ 𝐡̂_i^H𝐙_n,c is the interference-plus-noise term. Since 𝐋^H𝐋=𝐋, and using (<ref>), we can show 𝐋_c^H𝐋_c≈𝐋_c. Therefore, by employing Lemma <ref>, we can approximate 𝐳_in∼𝒞𝒩(0,σ_in^2𝐈_n_c), where σ_in^2= ω_c_s(P_c (|𝒮̃_s|-1) +σ_z^2)(ω_p_s^2 𝔼{𝐡_i^2}+Mω_p_sσ_z^2 n_pP_p). Besides, the per-symbol power of the signal term can be obtained as σ_s^2 ≈ω_c_s^2 𝔼{|𝐡̂_i^H𝐡_i|^2}P_c. Then, using Lemma <ref>, the SINR of 𝐯̂_i can be calculated as in (<ref>). 
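The moment expressions above can also be verified by simulation. In the sketch below the per-antenna variance of the estimation noise is taken to be ω_p_s σ_z^2/(n_p P_p), which is our reading of the flattened formulas; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
M, n_p, P_p, sigma2, w_p = 50, 256, 0.05, 1.0, 0.9
var_n = w_p * sigma2 / (n_p * P_p)        # estimation-noise variance per antenna
trials = 20_000

h  = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
zp = np.sqrt(var_n / 2) * (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M)))
h_hat = w_p * h + zp                       # pilot-based estimate of a non-colliding user

print(np.mean(np.sum(abs(h_hat) ** 2, axis=1)),  w_p ** 2 * M + M * var_n)
print(np.mean(abs(np.sum(h_hat.conj() * h, axis=1)) ** 2),
      w_p ** 2 * M * (M + 1) + var_n * M)  # E{||h||^4} = M(M+1) for h ~ CN(0, I_M)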
§ PROOF OF THEOREM <REF> In the first iteration of the sth slot, since K_s users have selected one out of n_p pilots randomly, the number of users that select an arbitrary pilot approximately follows a Poisson distribution with the parameter K_s/n_p. In the kth iteration of the sth slot, let T_j,i^(k) be the average number of i-collision pilots (pilots selected by i different users) in the jth pilot part. We have T_j,i^(1)≈ n_p f_p(i;K_s/n_p), where f_p(i;a) denotes the PMF of the Poisson distribution with the parameter a. The average number of i-collision users in the kth iteration of the jth pilot part is then calculated as K_j,i^(k)≈ i T_j,i^(k). Supposing that in the kth iteration (using the assumption in (<ref>)), the decoder employs the jth pilot part for channel estimation, the removed user is non-colliding (1-collision) in its jth pilot part (we assume that the decoder can only decode the non-colliding users), and it is in i-collision in its j^'th (j^'≠ j) pilot part with probability p_i,j^'^(k) = K_j^',i^(k)K_s-k+1. Therefore, removing a user from the jth pilot part results in * In the jth pilot part, we have T_j,1^(k+1)=T_j,1^(k)-1, and T_j,i^(k+1)=T_j,i^(k) for i>1. * In the j^'th pilot part (j^'≠ j), we have T_j^',i^(k+1)=T_j^',i^(k)+p_i+1,j^'^(k)-p_i,j^'^(k). The collision probability of the jth pilot part in the tth iteration is then obtained as P_col(j,t) = 1- T_j,1^(t)K_s-t+1. Finally, by approximating T_j,i^(t) by its average over different pilot parts (i.e., T_j,i^(t)≈ N_i^(t):=1J∑_j=1^J T_j,i^(t)) in above equations, the results in Theorem <ref> are obtained. Note that since all the pilot parts are equally likely in the first iteration, we have N_i^(1)≈ T_j,i^(1)≈ n_p f_p(i;K_s/n_p), ∀ j=1,...,J. 00 Shao2022Reconfigurable X. Shao, L. Cheng, X. Chen, C. Huang and D. W. K. Ng, “Reconfigurable intelligent surface-Aided 6G massive access: coupled tensor modeling and sparse bayesian learning,” IEEE Trans. Wireless Commun., vol. 21, no. 12, pp. 10145–10161, Dec. 2022. Shao2020Cooperative X. Shao, X. Chen, D. W. K. Ng, C. Zhong and Z. Zhang, “Cooperative activity detection: sourced and unsourced massive random access paradigms,” IEEE Trans. Signal Process., vol. 68, no. , pp. 6578–6593, Nov. 2020. polyanskiy2017perspective Y. Polyanskiy, “A perspective on massive random-access,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, June 2017, pp. 2523–2527. ordentlich2017low O. Ordentlich and Y. Polyanskiy, “Low complexity schemes for the random access Gaussian channel,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, June 2017, pp. 2528–2532. facenda2020efficient G. Kasper Facenda and D. Silva, “Efficient scheduling for the massive random access Gaussian channel,” IEEE Trans. Wireless Commun., vol. 19, no. 11, pp. 7598–7609, Nov. 2020. vem2019user A. Vem, K. R. Narayanan, J.-F. Chamberland, and J. Cheng, “A user-independent successive interference cancellation based coding scheme for the unsourced random access Gaussian channel,” IEEE Trans. Commun., vol. 67, no. 12, pp. 8258–8272, Dec. 2019. glebov2019achievability A. Glebov, N. Matveev, K. Andreev, A. Frolov and A. Turlikov, “Achievability bounds for T-fold irregular repetition slotted ALOHA scheme in the Gaussian MAC,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Marrakesh, Morocco, Apr. 2019, pp. 1–6. amalladinne2018coupled V. K. Amalladinne, A. Vem, D. K. Soma, K. R. Narayanan, and J.-F. Chamberland, “A coupled compressive sensing scheme for unsourced multiple access,” in Proc. IEEE Int. Conf. 
Acoust., Speech Signal Process. (ICASSP), Calgary, Canada, Sep. 2018, pp. 6628–6632. tanc2021massive A. K. Tanc and T. M. Duman, “Massive random access with trellis based codes and random signatures,” IEEE Commun. Lett., vol. 25, no. 5, pp. 1496–1499, May 2021. han2021sparse Z. Han, X. Yuan, C. Xu, S. Jiang and X. Wang, “Sparse Kronecker-product coding for unsourced multiple access,” IEEE Wireless Commun. Lett., vol. 10, no. 10, pp. 2274-2278, Oct. 2021. ebert2021stochastic J. R. Ebert, V. K. Amalladinne, S. Rini, J. -F. Chamberland and K. R. Narayanan, “Stochastic binning and coded demixing for unsourced random access,” in Proc. IEEE Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Lucca, Italy, Sep. 2021, pp. 351–355. pradhan2020polar A. K. Pradhan, V. K. Amalladinne, K. R. Narayanan, and J.-F. Chamberland, “Polar coding and random spreading for unsourced multiple access,” in Proc. IEEE Int. Conf. Commun. (ICC), Dublin, Ireland, June 2020, pp. 1–6. ahmadi2021random M. J. Ahmadi and T. M. Duman, “Random spreading for unsourced MAC with power diversity,” in IEEE Commun. Lett., vol. 25, no. 12, pp. 3995–3999, Dec. 2021. pradhan2021ldpc A. K. Pradhan, V. K. Amalladinne, K. R. Narayanan and J. -F. Chamberland, “LDPC codes with soft interference cancellation for uncoordinated unsourced multiple access,” in Proc. IEEE Int. Conf. Commun. (ICC), Montreal, Canada, June 2021, pp. 1–6. kowshik2019quasi S. S. Kowshik and Y. Polyanskiy, “Quasi-static fading MAC with many users and finite payload,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Paris, France, July 2019, pp. 440–444. kowshik2020energy S. S. Kowshik, K. Andreev, A. Frolov and Y. Polyanskiy, “Energy efficient coded random access for the wireless uplink,” IEEE Trans. Commun., vol. 68, no. 8, pp. 4694–4708, Aug. 2020. kowshik2019energy S. S. Kowshik, K. Andreev, A. Frolov and Y. Polyanskiy, “Energy efficient random access for the quasi-static fading MAC,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Paris, France, July 2019, pp. 2768–2772. kowshik2021fundamental S. S. Kowshik and Y. Polyanskiy, “Fundamental limits of many-user MAC with finite payloads and fading,” IEEE Trans. Inf. Theory, vol. 67, no. 9, pp. 5853–5884, Sep. 2021. andreev2020polar K. Andreev, E. Marshakov and A. Frolov, “A polar code based TIN-SIC scheme for the unsourced random access in the quasi-static fading MAC,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Los Angeles, USA, June 2020, pp. 3019–3024. andreev2019low K. Andreev, S. S. Kowshik, A. Frolov and Y. Polyanskiy, “Low complexity energy efficient random access scheme for the asynchronous fading MAC,” in Proc. IEEE Veh. Technol. Conf. (VTC), Honolulu, USA, Sep. 2019, pp. 1–5. amalladinne2019asynchronous V. K. Amalladinne, K. R. Narayanan, J. -F. Chamberland and D. Guo, “Asynchronous neighbor discovery using coupled compressive sensing,” in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Brighton, UK, May 2019, pp. 4569–4573. fengler2021non A. Fengler, S. Haghighatshoar, P. Jung and G. Caire, “Non-Bayesian activity detection, large-scale fading coefficient estimation, and unsourced random access with a massive MIMO receiver,” IEEE Trans. Inf. Theory, vol. 67, no. 5, pp. 2925–2951, May 2021. fengler2020pilot A. Fengler, P. Jung and G. Caire, “Pilot-based unsourced random access with a massive MIMO receiver in the Quasi-static fading regime,” in Proc. IEEE Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Lucca, Italy, Sep. 2021, pp. 356–360. Gkagkos2022FASURA M. Gkagkos, K. R. Narayanan, J. 
F. Chamberland and C. N. Georghiades, “FASURA: A scheme for quasi-static massive MIMO unsourced random access channels,” in Proc. IEEE Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Oulu, Finland, July 2022, pp. 1–5. ahmadi2021Unsourced M. J. Ahmadi and T. M. Duman, “Unsourced random access with a massive MIMO receiver using multiple stages of orthogonal pilots,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Espoo, Finland, July 2022, pp. 2880-2885. Arikan_channl E. Arikan, “Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, May 2009. Polyanskiy2010Channel Y. Polyanskiy, H. V. Poor and S. Verdu, “Channel coding rate in the finite blocklength regime,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359, May 2010. Ahmadi2020Efficient M. J. Ahmadi, R. Arablouei and R. Abdolee, “Efficient Estimation of Graph Signals With Adaptive Sampling,” IEEE Trans. Signal Process., vol. 68, no. , pp. 3808-3823, June 2020. Abadi2019Diffusion M. S. E. Abadi and M. J. Ahmadi, “Diffusion improved multiband-structured subband adaptive filter algorithm with dynamic selection of nodes over distributed networks,” IEEE Trans. Circuits Syst. II, Exp. Briefs,, vol. 66, no. 3 , pp. 507-511, Mar. 2019. Abadi2019Two M. S. E. Abadi, J. H. Husøy, and M. J. Ahmadi, “Two improved multiband structured subband adaptive filter algorithms with reduced computational complexity,” Signal Process., vol. 154, no. , pp. 15-29, Jan. 2019. Ahmadi2023Unsourced M. J. Ahmadi, M. Kazemi, and T. M. Duman, “Unsourced random access with a massive MIMO receiver using multiple stages of orthogonal pilots: MIMO and single-antenna structures,” IEEE Trans. Wireless Commun., June 2023.
http://arxiv.org/abs/2307.04546v1
20230710132437
Safety Analysis of Parameterised Networks with Non-Blocking Rendez-Vous
[ "Lucie Guillou", "Arnaud Sangnier", "Nathalie Sznajder" ]
cs.LO
[ "cs.LO", "cs.MA", "C.2.4; F.4.3" ]
[ Simon R. Eugster1 August 12, 2023 ===================== We consider networks of processes that all execute the same finite-state protocol and communicate via a rendez-vous mechanism. When a process requests a rendez-vous, another process can respond to it and they both change their control states accordingly. We focus here on a specific semantics, called non-blocking, where the process requesting a rendez-vous can change its state even if no process can respond to it. In this context, we study the parameterised coverability problem of a configuration, which consists in determining whether there is an initial number of processes and an execution allowing to reach a configuration bigger than a given one. We show that this problem is EXPSPACE-complete and can be solved in polynomial time if the protocol is partitioned into two sets of states, the states from which a process can request a rendez-vous and the ones from which it can answer one. We also prove that the problem of the existence of an execution bringing all the processes in a final state is undecidable in our context. These two problems can be solved in polynomial time with the classical rendez-vous semantics. § INTRODUCTION Verification of distributed/concurrent systems. Because of their ubiquitous use in applications we rely on constantly, the development of formal methods to guarantee the correct behaviour of distributed/concurrent systems has become one of the most important research directions in the field of computer systems verification in the last two decades. Unfortunately, such systems are difficult to analyse for several reasons. Among others, we can highlight two aspects that make the verification process tedious. First, these systems often generate a large number of different executions due to the various interleavings generated by the concurrent behaviours of the entities involved. Understanding how these interleavings interact is a complex task which can often lead to errors at the design-level or make the model of these systems very complex. Second, in some cases, the number of participants in a distributed system may be unbounded and not known a priori. To fully guarantee the correctness of such systems, the analysis would have to be performed for all possible instances of the system, i.e., an infinite number of times. As a consequence, classical techniques to verify finite state systems, like testing or model-checking, cannot be easily adapted to distributed systems and it is often necessary to develop new techniques. Parameterised verification. When designing systems with an unbounded number of participants, one often provides a schematic program (or protocol) intended to be implemented by multiple identical processes, parameterised by the number of participants. In general, even if the verification problem is decidable for a given instance of the parameter, verifying all possible instances is undecidable (<cit.>). However, several settings come into play that can be adjusted to allow automatic verification. One key aspect to obtain decidability is to assume that the processes do not manipulate identities in the protocolsand use simple communication mechanisms like pairwise synchronisation (or rendez-vous) <cit.>, broadcast of a message to all the entities <cit.> (which can as well be lossy in order to simulate mobility <cit.>), shared register containing values of a finite set <cit.>, and so on (see <cit.> for a survey). 
In every aforementioned case, all the entities execute the same protocol given by a finite state automaton. Note that parameterised verification, when decidable like in the above models, is also sometimes surprisingly easy compared to the same problem with a fixed number of participants. For instance, liveness verification of parameterised systems with shared memory is Pspace-complete for a fixed number of processes and in NP when parameterised <cit.>. Considering rendez-vous communication. In one of the seminal papers for the verification of parameterised networks <cit.>, German and Sistla (and since then <cit.>) assume that the entities communicate by “rendez-vous”, a synchronisation mechanism in which two processes (the sender and the receiver) agree on a common action by which they jointly change their local state. This mechanism is synchronous and symmetric, meaning that if no process is ready to receive a message, the sender cannot send it. However, in some applications, such as Java Thread programming, this is not exactly the primitive that is implemented. When a Thread is suspended in a waiting state, it is woken up by the reception of a message sent by another Thread. However, the sender is not blocked if there is no suspended Thread waiting for its message; in this case, the sender sends the message anyway and it is simply lost. This is the reason why Delzanno et al. introduced in <cit.> non-blocking rendez-vous, a communication primitive in which the sender of a message is not blocked if no process receives it. One of the problems of interest in parameterised verification is the coverability problem: is it possible that, starting from an initial configuration, (at least) one process reaches a bad state? In <cit.>, and later in <cit.>, the authors introduce variants of Petri nets to handle this type of communication. In particular, the authors investigate in <cit.> the coverability problem for an extended class of Petri nets with non-blocking arcs, and show that for this model the coverability problem is decidable using the techniques of Well-Structured Transition Systems <cit.>. However, since their model is an extension of Petri nets, the latter problem is Expspace-hard <cit.> (no upper bound is given). Relying on Petri nets to obtain algorithms for parameterised networks is not always a good option. In fact, the coverability problem for parameterised networks with rendez-vous can be solved in polynomial time <cit.>, while it is Expspace-complete for Petri nets <cit.>. Hence, no upper bound or lower bound can be directly deduced for the verification of networks with non-blocking rendez-vous from <cit.>. Our contributions. We show that the coverability problem for parameterised networks with non-blocking rendez-vous communication over a finite alphabet is Expspace-complete. To obtain this result, we consider an extension of counter machines (without zero test) where we add non-blocking decrement actions and a restore mechanism, i.e. edges that can bring back the machine to its initial location at any moment. We show that the coverability problem for these extended counter machines is Expspace-complete (<ref>) and that it is equivalent to our problem over parameterised networks (<ref>). We then consider a subclass of parameterised networks – wait-only protocols – in which no state can allow to both request a rendez-vous and wait for one. This restriction is very natural to model concurrent programs since when a thread is waiting, it cannot perform any other action. 
We show that coverability problem can then be solved in polynomial time (<ref>). Finally, we show that the synchronization problem, where we look for a reachable configuration with all the processes in a given state, is undecidable in our framework, even for wait-only protocols (<ref>). Due to lack of space, some proofs are only given in the appendix. § RENDEZ-VOUS NETWORKS WITH NON-BLOCKING SEMANTICS For a finite alphabet Σ, we let Σ^* denote the set of finite sequences over Σ (or words). Given w∈Σ^*, we let |w| denote its length: if w=w_0… w_n-1∈Σ^*, then |w|=n. We write to denote the set of natural numbers and [i,j] to represent the set k∈| i≤ k k ≤ j for i,j ∈. For a finite set E, the set ^E represents the multisets over E. For two elements m,m' ∈^E, we denote m+m' the multiset such that (m+m')(e) = m(e) +m'(e) for all e ∈ E. We say that m ≤ m' if and only if m(e) ≤ m'(e) for all e ∈ E. If m ≤ m', then m'-m is the multiset such that (m'-m)(e) = m'(e)-m(e) for all e ∈ E. Given a subset E' ⊆ E and m ∈^E, we denote by ||m||_E' the sum Σ_e∈ E'm(e) of elements of E' present in m. The size of a multiset m is given by ||m|| =||m||_E. For e ∈ E, we use sometimes the notation e for the multiset m verifying m(e)=1 and m(e')=0 for all e' ∈ E∖e and, to represent for instance the multiset with four elements a, b,b and c, we will also use the notations a, b, b, c or a, 2· b, c. §.§ Rendez-Vous Protocols We can now define our model of networks. We assume that all processes in the network follow the same protocol. Communication in the network is pairwise and is performed by rendez-vous through a finite communication alphabet Σ. Each process can either perform an internal action using the primitive τ, or request a rendez-vous by sending the message m using the primitive !m or answer to a rendez-vous by receiving the message m using the primitive ?m (for m ∈Σ). Thus, the set of primitives used by our protocols is RV(Σ)=τ∪?m,!m | m ∈Σ. A rendez-vous protocol (shortly protocol) is a tuple = (Q, Σ, , q_f, T) where Q is a finite set of states, Σ is a finite alphabet, ∈ Q is the initial state, q_f ∈ Q is the final state and T ⊆ Q × RV(Σ) × Q is the finite set of transitions. For a message m ∈Σ, we denote by m the set of states q from which the message m can be received, i.e. states q such that there is a transition (q, ?m, q') ∈ T for some q' ∈ Q. A configuration associated to the protocol is a non-empty multiset C over Q for which C(q) denotes the number of processes in the state q and ||C|| denotes the total number of processes in the configuration C. A configuration C is said to be initial if and only if C(q)=0 for all q ∈ Q∖. We denote by () the set of configurations and by () the set of initial configurations. Finally for n ∈∖0, we use the notation _n() to represent the set of configurations of size n, i.e. _n()=C ∈() | ||C||=n. When the protocol is made clear from the context, we shall write , and _n. We explain now the semantics associated with a protocol. For this matter we define the relation ⊆⋃_n≥ 1_n ×(τ∪Σ∪𝐧𝐛(m) | m ∈Σ) ×_n as follows (here · is a special symbol). 
Given n ∈∖0 and C,C' ∈_n and m ∈Σ, we have: * C τ C' iff there exists (q, τ, q') ∈ T such that C(q) > 0 and C' = C - q + q' (internal); * C m C' iff there exists (q_1, !m, q_1') ∈ T and (q_2, ?m, q_2')∈ T such that C(q_1)>0 and C(q_2)>0 and C(q_1)+C(q_2)≥ 2 (needed when q_1 = q_2) and C' = C - q_1, q_2 + q_1', q_2' (rendez-vous); * C 𝐧𝐛(m) C' iff there exists (q_1, !m, q_1') ∈ T, such that C(q_1)>0 and (C-q_1)(q_2)=0 for all (q_2, ?m, q_2') ∈ T and C' = C - q_1 + q'_1 (non-blocking request). Intuitively, from a configuration C, we allow the following behaviours: either a process takes an internal transition (labeled by τ), or two processes synchronize over a rendez-vous m, or a process requests a rendez-vous to which no process can answer (non-blocking sending). This allows us to define S_, the transition system ((), ) associated to . We will write C C' when there exists a ∈τ∪Σ∪𝐧𝐛(m) | m ∈Σ such that C a C' and denote by ^∗ the reflexive and transitive closure of . Furthermore, when made clear from the context, we might simply write instead of . An execution is a finite sequence of configurations ρ = C_0C_1… such that, for all 0≤ i< |ρ|, C_i C_i+1. The execution is said to be initial if C_0∈(). Figure <ref> provides an example of a rendez-vous protocol where is the initial state and the final state. A configuration associated to this protocol is for instance the multiset 2 · q_1, 1· q_4, 1 · q_5 and the following sequence represents an initial execution: 2 ·𝐧𝐛(a), b, c 2 ·. When we only allow behaviours of type (internal) and (rendez-vous), this semantics corresponds to the classical rendez-vous semantics <cit.>. In opposition, we will refer to the semantics defined here as the non-blocking semantics, where a process is not blocked if it requests a rendez-vous and no process can answer to it. Note that all behaviours possible in the classical rendez-vous semantics are also possible in the non-blocking semantics, but the converse is false. §.§ Verification Problems We now present the problems studied in this work. For this matter, given a protocol = (Q, Σ, , q_f, T), we define two sets of final configurations. The first one () = { C ∈()  | C(q_f)> 0} characterises the configurations where one of the processes is in the final state. The second one () = { C ∈()  | C(Q ∖{q_f})= 0} represents the configurations where all the processes are in the final state. Here again, when the protocol is clear from the context, we might use the notations and . We study three problems, which all take as input a protocol and can be stated as follows: * the state coverability problem: are there C_0 ∈ and a configuration C_f with C_f(q_f)>0, such that C_0 ^∗ C_f? * the configuration coverability problem: given C ∈, are there C_0 ∈ and C' ≥ C, such that C_0 ^∗ C'? * the synchronization problem: are there C_0 ∈ and a configuration C_f with C_f(Q ∖{q_f})=0, such that C_0 ^∗ C_f? State coverability expresses a safety property: if q_f is an error state and the answer is negative, then for any number of processes, no process will ever be in that error state. Synchronization, on the other hand, is a liveness property: if q_f is a deadlock state (a state in which no action is possible) and the answer is negative, then for any number of processes, all processes together are never blocked at the same time. 
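As a concrete illustration of the three kinds of steps (internal, rendez-vous and non-blocking request), the following Python sketch enumerates the successors of a configuration represented as a multiset of states. The toy protocol is hypothetical and only serves to show a non-blocking request firing when no receiver is available.

from collections import Counter

# Transitions (q, action, q') with action ('tau',), ('send', m) or ('recv', m).
T = {('q0', ('send', 'a'), 'q1'),
     ('q0', ('recv', 'a'), 'q2'),
     ('q1', ('tau',), 'qf')}

def successors(C):
    succ = []
    for (q, act, qp) in T:
        if act == ('tau',) and C[q] > 0:                       # internal step
            succ.append(('tau', C - Counter([q]) + Counter([qp])))
        elif act[0] == 'send' and C[q] > 0:
            m, rest, answered = act[1], C - Counter([q]), False
            for (q2, a2, q2p) in T:
                if a2 == ('recv', m) and rest[q2] > 0:         # rendez-vous on m
                    succ.append((m, rest - Counter([q2]) + Counter([qp, q2p])))
                    answered = True
            if not answered:                                   # non-blocking request
                succ.append((('nb', m), rest + Counter([qp])))
    return succ

print(successors(Counter({'q0': 2})))   # the request on a is answered
print(successors(Counter({'q0': 1})))   # no receiver left: non-blocking step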
The difficulty in solving these problems lies in the fact that we are seeking for an initial configuration allowing a specific execution but the set of initial configurations is infinite. The difference between  and   is that in the first one we ask for at least one process to end up in the final state whereas the second one requires all the processes to end in this state. Note that  is an instance of  but  is not. The rendez-vous protocol of Figure <ref> is a positive instance of , as shown in <ref>. However, this is not the case for : if an execution brings a process in , this process cannot be brought afterwards to . If is the final state,  is now a positive instance of  (see Example <ref>). Note that if the final state is , is not a positive instance of  anymore. In fact, the only way to reach a configuration with a process in is to put (at least) two processes in state as this is the only state from which one process can send the message b. However, this cannot happen, since from an initial configuration, the only available action consists in sending the message a as a non-blocking request. Once there is one process in state q_5, any other attempt to put another process in this state will induce a reception of message a by the process already in q_5, which will hence leave q_5. Finally, note that for any n ∈ℕ, the configuration n · is coverable, even if with as final state is not a positive instance of . § COVERABILITY FOR NON-BLOCKING COUNTER MACHINES We first detour into new classes of counter machines, which we call non-blocking counter machines and non-blocking counter machines with restore, in which a new way of decrementing the counters is added to the classical one: a non-blocking decrement, which is an action that can always be performed. If the counter is strictly positive, it is decremented; otherwise it is let to 0. We show that the coverability of a control state in this model is -complete, and use this result to solve coverability problems in rendez-vous protocols. To define counter machines, given a set of integer variables (also called counters) , we use the notation to represent the set of associated actions given by ,,|∈∪. Intuitively, increments the value of the counter , while decrements it and checks if it is equal to 0. We are now ready to state the syntax of this model. A counter machine (shortly CM) is a tuple M = (, , Δ, ) such that is a finite set of locations, ∈ is an initial location, is a finite set of counters, and Δ⊆×× is finite set of transitions. We will say that a CM is test-free (shortly ) whenever Δ∩×{|∈}× = ∅. A configuration of a CM M = (, , Δ, ) is a pair (ℓ, v) where ℓ∈ specifies the current location of the CM and v∈^ associates to each counter a natural value. The size of a CM M is given by |M|= || + || + |Δ|. Given two configurations (ℓ, v) and (ℓ',v') and a transition δ∈Δ, we define (ℓ, v) δ_M (ℓ', v') if and only if δ = (ℓ, op, ℓ') and one of the following holds: [t]7cm * op = and v =v'; * op = and v'() = v() + 1 and v'(') = v(') for all ' ∈∖; [t]7cm * op = and v'() = v() - 1 and v'(') = v(') for all ' ∈∖; * op = and v() = 0 and v'= v. In order to simulate the non-blocking semantics of our rendez-vous protocols with counter machines, we extend the class of test-free CM with non-blocking decrement actions. A non-blocking test-free counter machine (shortly ) is a tuple M=(, , Δ_b, Δ_nb, ) such that (, , Δ_b, ) is a  and Δ_nb⊆×{|∈}× is a finite set of non-blocking transitions. 
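To make the definition concrete, the following Python sketch enumerates the successors of a configuration (ℓ, v) of a non-blocking test-free counter machine. The non-blocking decrement lowers a counter to max(0, x-1), as made precise in the next paragraph, and all location and counter names are illustrative.

def successors(conf, delta_b, delta_nb):
    loc, v = conf                                  # v: dict counter -> value
    out = []
    for (l, op, lp) in delta_b:
        if l != loc:
            continue
        w = dict(v)
        if op[0] == 'inc':
            w[op[1]] += 1
        elif op[0] == 'dec':
            if w[op[1]] == 0:
                continue                           # blocking decrement disabled at 0
            w[op[1]] -= 1
        out.append((lp, w))                        # a no-op transition leaves w unchanged
    for (l, (_, x), lp) in delta_nb:
        if l == loc:
            w = dict(v)
            w[x] = max(0, w[x] - 1)                # non-blocking decrement
            out.append((lp, w))
    return out

delta_b  = [('l0', ('inc', 'x'), 'l1'), ('l1', ('dec', 'x'), 'l2')]
delta_nb = [('l1', ('nbdec', 'y'), 'l2')]
print(successors(('l1', {'x': 0, 'y': 0}), delta_b, delta_nb))   # only the nb step fires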
Observe that in a , both blocking and non-blocking decrements are possible, according to the definition of the transition relation. Again, a configuration is given by a pair (ℓ,v)∈×^. Given two configurations (ℓ, v) and (ℓ, v') and δ∈Δ_b∪Δ_nb, we extend the transition relation (ℓ,v)δ_M (ℓ,v') over the set Δ_nb in the following way: for δ= (ℓ, , ℓ') ∈Δ_nb, we have (ℓ,v) δ_M (ℓ',v') if and only if v'() = max(0, v() - 1), and v'(') = v(') for all ' ∈∖. We say that M is an  with restore (shortly ) when (ℓ, , ) ∈Δ for all ℓ∈, i.e. from each location, there is a transition leading to the initial location with no effect on the counters values. For a CM M with set of transitions Δ (resp. an   with sets of transitions Δ_b and Δ_nb), we will write (ℓ, v) _M (ℓ', v') whenever there exists δ∈Δ (resp. δ∈Δ_b∪Δ_nb) such that (ℓ, v) δ_M (ℓ', v') and use ^∗_M to represent the reflexive and transitive closure of _M. When the context is clear we shall write instead of _M. We let 0_ be the valuation such that 0_()=0 for all ∈. An execution is a finite sequence of configurations (ℓ_0, v_0) (ℓ_1, v_1) …(ℓ_k, v_k). It is said to be initial if (ℓ_0,v_0)=(, 0_). A configuration (ℓ,v) is called reachable if (, 0_) ^∗ (ℓ,v). We shall now define the coverability problem for (non-blocking test-free) counter machines, which asks whether a given location can be reached from the initial configuration. We denote this problem [ℳ], for ℳ∈{CM, , , }. It takes as input a machine M in ℳ (with initial location and working over a set of counters) and a location ℓ_f and it checks whether there is a valuation v ∈ℕ^ such that (, 0_) ^*(ℓ_f, v). In the rest of this section, we will prove that [] is -complete. To this end, we first establish that [] is in , by an adaptation of Rackoff's proof which shows that coverability in Vector Addition Systems is in Expspace <cit.>. This gives also the upper bound for , since any  is a . This result is established by the following theorem, whose proof is omitted due to lack of space. [] and [] are in . To obtain the lower bound, inspired by Lipton's proof showing that coverability in Vector Addition Systems is -hard <cit.>, we rely on 2Exp-bounded . We say that a CM M = (,, Δ,) is 2Exp-bounded if there exists n ∈ O(|M|) such that any reachable configuration (ℓ, v) satisfies v() ≤ 2^2^n for all ∈. We use then the following result. [2Exp-bounded ] is -hard. We now show how to simulate a 2Exp-bounded  by a , by carefully handling restore transitions that may occur at any point in the execution. We will ensure that each restore transition is followed by a reset of the counters, so that we can always extract from an execution of the  a correct initial execution of the original . The way we enforce resetting of the counters is inspired by the way Lipton simulates 0-tests of a CM in a . As in <cit.>, we will describe the final  by means of several submachines. To this end, we define procedural non-blocking counter machines that are   with several identified output states: formally, a procedural- is a tuple N = (, , Δ_b, Δ_nb, ℓ_in, L_out) such that (, , Δ_b, Δ_nb, ℓ_in) is a , L_out⊆, and there is no outgoing transitions from states in L_out. Now fix a 2Exp-bounded  M = (,, Δ,), ℓ_f∈ the location to be covered. There is some c, such that, any reachable configuration (ℓ, v) satisfies v() < 2^2^c |M| for all ∈, fix n = c|M|. We build a  N as pictured in <ref>. The goal of the procedural  𝚁𝚜𝚝𝙸𝚗𝚌 is to ensure that all counters in are reset. 
Hence, after each restore transition, we are sure that we start over a fresh execution of the  M. We will need the mechanism designed by Lipton to test whether a counter is equal to 0. For a counter bounded by some value K, this is done by duplicating into and ensure along any execution that the sum of and is equal to K. So, we define two families of sets of counters (Y_i)_0≤ i ≤ n and (Y_i)_0≤ i≤ n as follows. Let Y_i = {_i, _i, _i } and Y_i = {_i, _i, _i} for all 0≤ i < n and Y_n = and Y_n = ∅ and '=⋃_0≤ i≤ n Y_i∪Y_i. All the machines we will describe from now on will work over the set of counters '. Procedural- 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(). We use a family of procedural- defined in <cit.>: for all 0≤ i <n, for all ∈Y_i, 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() is a procedural- with an initial location ^𝚃𝚂,i,, and two output locations ℓ^𝚃𝚂,i,_z and ℓ^𝚃𝚂,i,_nz. It tests if the value of is equal to 0, using the fact that the sum of the values of and is equal to 2^2^i. If =0, it swaps the values of and , and the execution ends in the output location ℓ^𝚃𝚂,i,_z. Otherwise, counters values are left unchanged and the execution ends in ℓ^𝚃𝚂,i,_nz. In any case, other counters are not modified by the execution. Note that 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() makes use of variables in ⋃_1≤ j< i Y_i∪Y_i. Formally, these machines have the following property: We use this proposition. Let 0≤ i < n, and ∈Y_i. For all v,v'∈ℕ^X', for ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz}, we have (^𝚃𝚂,i,v)^*(ℓ,v') in 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() if and only if : * (PreTest1): for all 0 ≤ j < i, for all _j ∈Y_j, v(_j) = 2^2^j and for all _j ∈ Y_j, v(_j) = 0; * (PreTest2): v(_i) = 2^2^i and v( _i) = 0; * (PreTest3): v() + v() = 2^2^i; * (PostTest1): For all ∉{,}, v'() = v(); * (PostTest2): either (i) v() = v'() = 0, v() = v'() and ℓ = ℓ^i_z, or (ii) v'() = v() >0, v'() = v() and ℓ = ℓ^𝚃𝚂,i,_nz. Moreover, if for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, v()≤ 2^2^j, then for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, the value of will never go above 2^2^j during the execution. Note that for a valuation v∈ℕ^X' that meets the requirements (PreTest1), (PreTest2) and (PreTest3), there is only one configuration (ℓ,v') with ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz} such that (ℓ_in,v) ^* (ℓ,v'). Procedural  𝚁𝚜𝚝_i. We use these machines to define a family of procedural- (𝚁𝚜𝚝_i)_0≤ i≤ n that reset the counters in Y_i∪Y_i, assuming that their values are less than or equal to 2^2^i. Let 0≤ i≤ n, we let 𝚁𝚜𝚝_i=(^𝚁,i, ',Δ_b^𝚁,i,Δ^𝚁,i_nb, ℓ^𝚁,i_in, {ℓ_out^𝚁,i}). The machine 𝚁𝚜𝚝_0 is pictured Figure <ref>. For all 0≤ i< n, the machine 𝚁𝚜𝚝_i+1 uses counters from Y_i∪Y_i and procedural- 𝚃𝚎𝚜𝚝𝚜𝚠𝚊𝚙_i(_i) and 𝚃𝚎𝚜𝚝𝚜𝚠𝚊𝚙_i(_i) to control the number of times variables from Y_i+1 and Y_i+1 are decremented. It is pictured Figure <ref>. Observe that since Y_n=, and Y_n=∅, the machine 𝚁𝚜𝚝_n will be a bit different from the picture: there will only be non-blocking decrements over counters from Y_n, that is over counters from the initial  M. If _i, _i (and 𝚜_i) are set to 2^2^i and _i, _i (and 𝚜_i) are set to 0, then each time this procedural-  takes an outer loop, the variables of Y_i+1∪Y_i+1 are decremented (in a non-blocking fashion) 2^2^i times. This is ensured by Proposition <ref>the properties of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(). Moreover, the location ℓ^𝚃𝚂, i, _z will only be reached when the counter _i is set to 0, and this will happen after 2^2^i iterations of the outer loop, again thanks to Proposition <ref>the properties of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(). So, all in all, variables from Y_i and Y_i+1 will take a non-blocking decrement 2^2^i.2^2^i times, that is 2^2^i+1. 
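The doubling achieved by these nested loops can be mimicked in a few lines of Python. This is only an abstraction of 𝚁𝚜𝚝_i+1 in which the two bounded loop counters are replaced by plain range loops, not the machine itself.

def rst_next_level(i, counters):
    # counters: values of the level-(i+1) counters to be reset,
    # each assumed to be at most 2^(2^(i+1)).
    bound = 2 ** (2 ** i)
    for _ in range(bound):           # outer loop, driven by one counter pair of level i
        for _ in range(bound):       # inner loop, driven by the other pair
            for c in counters:
                counters[c] = max(0, counters[c] - 1)   # one non-blocking decrement
    return counters

print(rst_next_level(1, {'y': 10, 'ybar': 16}))   # 2^(2^2) = 16 decrements: both reach 0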
These properties are formalized in the following proposition. For all 0≤ i≤ n, for all v∈ℕ^' such that * (PreRst1): for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0, for all v' ∈ℕ^', if (^𝚁,i, v) ^* (ℓ^𝚁,i_out,v') in 𝚁𝚜𝚝_𝚒 then * (PostRst1): for all ∈ Y_i ∪Y_i, v'() = max(0, v() - 2^2^i), * (PostRst2): for all ∉Y_i ∪Y_i, v'() = v(). For all ∈', we say that is initialized in a valuation v if ∈ Y_i for some 0≤ i≤ n and v()=0, or ∈Y_i for some 0≤ i≤ n and v()=2^2^i. For 0≤ i≤ n, we say that a valuation v∈ℕ^' is i-bounded if for all ∈ Y_i ∪Y_i, v() ≤ 2^2^i. The procedural- 𝚁𝚜𝚝_i is taking care of resetting counters in Y_i∪Y_i. The following lemma states that no counter in Y_j∪Y_j, for 1≤ j≤ n, will be increased over 2^2^j during this process, and that it reset properly counters in Y_i ∪Y_i. Let 0≤ i ≤ n, and let v∈ℕ^' satisfying (PreRst1) for 𝚁𝚜𝚝_𝚒. If for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v')∈^𝚁,i×ℕ^' such that (ℓ^𝚁,i_in,v) ^* (ℓ, v') in 𝚁𝚜𝚝_i, v' is j-bounded for all 0≤ j ≤ n. Furthermore, the unique configuration such that (ℓ^𝚁,i_in,v) ^* (ℓ^𝚁,i_out, v') in 𝚁𝚜𝚝_i is defined by v'() = 0 for all ∈ Y_i ∪Y_i and v'() = v() for all ∉ Y_i ∪Y_i. The construction ensures that when one enters 𝚁𝚜𝚝_i with a valuation v that is i-bounded, and in which all variables in ⋃_0≤ j<i Y_j∪Y_j are initialized, the location ℓ^𝚁,i_out is reached with a valuation v' such that: v'() = 0 for all ∈ Y_i ∪Y_i and v'() = v() for all ∉ Y_i ∪Y_i. Moreover, if v is j-bounded for all 0≤ j≤ n, then any valuation reached during the execution remains j-bounded for all 0≤ j≤ n. Procedural  𝙸𝚗𝚌_i. The properties we seek for 𝚁𝚜𝚝_i are ensured whenever the variables in ⋃_0≤ j<iY_j∪Y_j are initialized. This is taken care of by a family of procedural- introduced in <cit.>. For all 0≤ i< n, 𝙸𝚗𝚌_i is a procedural- with initial location ^𝙸𝚗𝚌, i, and unique output location ℓ^𝙸𝚗𝚌, i_out. They enjoy the following property: for 0≤ i<n, when one enters 𝙸𝚗𝚌_i with a valuation v in which all the variables in ⋃_0≤ j<i Y_j∪Y_j are initialized and v()=0 for all ∈Y_i, then the location ℓ^𝙸𝚗𝚌_i_out is reached with a valuation v' such that v'()=2^2^i for all ∈Y_i, and v'()=v() for all other ∈'. Moreover, if v is j-bounded for all 0≤ j≤ n, then any valuation reached during the execution remains j-bounded for all 0≤ j≤ n. For all 0≤ i< n, for all v,v'∈ℕ^', (^𝙸𝚗𝚌, i,v) ^* (ℓ_out^𝙸𝚗𝚌, i, v') in 𝙸𝚗𝚌_i if and only if: * (PreInc1) for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0; * (PreInc2) for all ∈Y_i, v( ) = 0, * (PostInc1) for all ∈Y_i, v'() = 2^2^i; * (PostInc2) for all ∉Y_i, v'() = v(). Moreover, if for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v”) such that (ℓ^𝙸𝚗𝚌,i_in,v) ^* (ℓ, v”) in 𝙸𝚗𝚌_i, then v” is j-bounded for all 0≤ j≤ n. Procedural  𝚁𝚜𝚝𝙸𝚗𝚌. Finally, let 𝚁𝚜𝚝𝙸𝚗𝚌 be a procedural-  with initial location ℓ_a and output location ℓ_b, over the set of counters ' and built as an alternation of 𝚁𝚜𝚝_i and 𝙸𝚗𝚌_i for 0≤ i<n, finished by 𝚁𝚜𝚝_n. It is depicted in <ref>. Thanks to the properties of the machines 𝚁𝚜𝚝_i and 𝙸𝚗𝚌_i, in the output location of each 𝙸𝚗𝚌_i machine, the counters in Y_i are set to 2^2^i, which allow counters in Y_i+1∪Y_i+1 to be set to 0 in the output location of 𝚁𝚜𝚝_i+1. Hence, in location ℓ^𝙸𝚗𝚌,n_out, counters in Y_n= are set to 0. The reduction. 
To build the final  N, we compose the procedural  𝚁𝚜𝚝𝙸𝚗𝚌 with the  M in the way described in <ref>, and we add to every location ℓ of 𝚁𝚜𝚝𝙸𝚗𝚌 and M a restore transition (ℓ, ∅,'), which is represented in the figure in an abstract way with dashed arrows, for readability's sake. From <cit.>, each procedural machine 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() and 𝙸𝚗𝚌_i has size at most C × n^2 for some constant C. Hence, N is of size at most B for some B∈ O(|M|^3). One can show that (, 0_) ^*_M (ℓ_f, v) for some v∈ℕ^, if and only if (', 0_') ^*_N (ℓ_f, v') for some v'∈ℕ^'. Using <ref>, we obtain: [] is -hard. § COVERABILITY FOR RENDEZ-VOUS PROTOCOLS In this section we prove that the  and  problems are both -complete for rendez-vous protocols. To this end, we present the following reductions:  reduces to [] and [] reduces to . This will prove that  is in  and that  is -hard (from <ref> and <ref>). As  is an instance of , the two reductions suffice to prove -completeness for both problems. §.§ From Rendez-vous Protocols to Let = (Q, Σ, , q_f, T) be a rendez-vous protocol and C_F a configuration of  to be covered. We shall also decompose C_F as a sum of multisets 𝐪_1 + 𝐪_2 + … + 𝐪_s. Observe that there might be 𝐪_i=𝐪_j for i≠ j. We build the  M = (, , Δ_b, Δ_nb, ) described in <ref>. Here, the set of counters is Q. A configuration C of the protocol is represented in M by (,v), with v(q)=C(q) for all q∈ Q. The only meaningful location of M is then . The other ones are there to ensure correct updates of the counters when simulating a transition. We let = {}∪{ℓ_(t,t')^1, ℓ_(t,t')^2,ℓ_(t,t')^3| t=(q,!a,q'), t'=(p,?a,p')∈ T}∪{ℓ_t, ℓ_t,p_1^a,⋯,ℓ_t,p_k^a| t=(q,!a,q')∈ T, a={p_1,…, p_k}}∪{ℓ_q| t=(q,τ,q')∈ T}∪{ℓ_1 …ℓ_s}, with final location ℓ_f = ℓ_s, where the set of states that can receive a message m ∈Σ has been defined in <ref>. The sets Δ_b and Δ_nb are shown in <ref>. Transitions pictured in <ref> show how to simulate a rendez-vous protocol with the classical rendez-vous mechanism. The non-blocking rendez-vous are handled by the transitions pictured in <ref> (where the only non-blocking transitions of the  occur): to simulate the occurrence of (q,!a,q'), the  M first decrements the value of q by a transition of the form (3). It then takes a sequence of non-blocking decrements, one for each state in a. The simulation of a non-blocking rendez-vous ends by incrementing the counter q'. If the  M faithfully simulates the protocol, then this loop of non-blocking decrements is taken when the values of the counters in a are equal to 0, and the configuration reached still corresponds to a configuration of the protocol. However, it could be that this loop is taken in M while some counters in a are strictly positive. In this case, a blocking rendez-vous has to be taken in the protocol, e.g. (q,!a,q') and (p,?a,p') if the counter p in M is strictly positive. Therefore, the value of the reached configuration (, v) and the corresponding configuration C in the protocol will differ: first, C(p')>v(p'), since the process in p has moved to the state p' in the protocol while there has been no increment of p' in M. Furthermore, all other non-blocking decrements of counters in a in M may have effectively decremented the counters, whereas in the protocol no other process has left a state of a. In all cases, this ensures that C≥ v. The reduction then ensures that if (, v) is reachable in M, then a configuration C≥ v is reachable in the protocol. Then, if it is possible to reach a configuration (, v) in M whose counters are high enough to cover C_F, then the corresponding initial execution of the protocol will reach a configuration C≥ v, which hence covers C_F.
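To make the shape of this translation concrete, the following sketch builds the list of counter-machine transitions simulating one request (q,!a,q'), following the loop of non-blocking decrements described above. It is only an illustration: the location names, the tuple encoding and the helper compile_send are assumptions made here for readability, not the formal construction given in the referenced figure.

```python
def compile_send(t, receivers):
    """Translate t = (q, !a, q') into transitions of the counter machine.

    receivers: the states that can receive the message a.
    Each produced transition is (source_loc, op, counter, target_loc);
    'nbdec' is the only non-blocking operation.
    """
    q, a, q_prime = t
    l_t = f"l_{q}_{a}_{q_prime}"                  # intermediate location for t
    trans = [("l_in", "dec", q, l_t)]             # blocking decrement of counter q
    prev = l_t
    for p in receivers:                           # one non-blocking decrement per receiver state
        nxt = f"{l_t}_{p}"
        trans.append((prev, "nbdec", p, nxt))
        prev = nxt
    trans.append((prev, "inc", q_prime, "l_in"))  # increment q', back to the main location
    return trans

for tr in compile_send(("q0", "a", "q1"), ["p1", "p2"]):
    print(tr)
```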
 over rendez-vous protocols is in . §.§ From  to Rendez-Vous Protocols The reduction from [] to  in rendez-vous protocols mainly relies on the mechanism that ensures that at most one process evolves in some given set of states, as explained in <ref>. This allows us to select a “leader” among the processes, which will simulate the behaviour of the , whereas the other processes will simulate the values of the counters. Let M = (, , Δ_b, Δ_nb, ) be a  and ℓ_f ∈ a final target location. We build the rendez-vous protocol pictured in <ref>, where 𝒫(M) is the part that simulates the  M. The locations {1_|∈} will be used to encode the values of the different counters during the execution: for a configuration C, C(1_) will represent the value of the counter . We then set 𝒫(M)=(Q_M,Σ_M,,ℓ_f,T_M) with Q_M = ∪{ℓ_δ|δ∈Δ_b}, Σ_M = {inc_,inc_, dec_, dec_, nbdec_|∈}, and T_M ={(ℓ_i,!inc_, ℓ_δ), (ℓ_δ, ?inc_, ℓ_j)|δ=(ℓ_i, , ℓ_j)∈Δ_b} ∪{(ℓ_i, !dec_, ℓ_δ), (ℓ_δ, ?dec_, ℓ_j)|δ = (ℓ_i, , ℓ_j)∈Δ_b} ∪{(ℓ_i, !nbdec_, ℓ_j)| (ℓ_i, ,ℓ_j)∈Δ_nb} ∪{(ℓ_i, τ, ℓ_j)| (ℓ_i, ,ℓ_j)∈Δ_b}. Here, the reception of a message inc_ (respectively dec_) works as an acknowledgement, ensuring that a process has indeed received the message inc_ (respectively dec_), and that the corresponding counter has been incremented (resp. decremented). For non-blocking decrements, obviously no acknowledgement is required. We define =(Q,Σ,T,, ℓ_f) as follows: Q = Q_M∪{1_, q_, q'_|∈}∪{, q, q_}, Σ = Σ_M∪{L, R}, and T =T_M∪{(, !L, q), (q, !R, ), (q, ?L, q_)}∪{(ℓ, ?L, q_)|ℓ∈ Q_M} ∪{(, ?inc_, q_), (q_, !inc_, 1_), (1_, ?dec_, q'_), (q'_, !dec_, ), (1_, ?nbdec_, )|∈} ∪{(q_, ?R, ), (q'_, ?R, )|∈}. These transitions are pictured in <ref>; note in particular that there is a transition (ℓ,?L,q_) for all ℓ∈ Q_M. With two non-blocking transitions on L and R at the beginning, the protocol can faithfully simulate the  M without further ado, provided that the initial configuration contains enough processes to simulate all the counter values during the execution: after having sent a process in state , any transition of M can be simulated in . Conversely, an initial execution of can send multiple processes into the 𝒫(M) zone, which can mess up the simulation. However, each new process entering 𝒫(M) will send the message L, which sends the process already in {q}∪ Q_M to the deadlock state q_, and then the message R, which will be received by any process in {q_,q'_|∈}. Moreover, the construction of the protocol ensures that there can only be one process in the set of states {q_,q'_|∈}. Then, if we have reached a configuration simulating the configuration (ℓ, v) of M, sending a new process into the 𝒫(M) zone will lead to a configuration (, v), and hence simply mimics a restore transition of M. So every initial execution of corresponds to an initial execution of M.  and  over rendez-vous protocols are -complete. § COVERABILITY FOR WAIT-ONLY PROTOCOLS In this section, we study a restriction on rendez-vous protocols in which we assume that a process waiting to answer a rendez-vous cannot perform another action by itself. This allows for a polynomial-time algorithm for solving .
§.§ Wait-Only Protocols We say that a protocol = (Q, Σ, , q_f, T) is wait-only if the set of states Q can be partitioned into Q_A — the active states — and Q_W — the waiting states — with ∈ Q_A and: * for all q ∈ Q_A, for all (q',?m,q”)∈ T, we have q'≠ q; * for all q∈ Q_W, for all (q', !m, q”) ∈ T, we have q' ≠ q, and for all (q', τ, q”) ∈ T, we have q'≠ q. From a waiting state, a process can only perform receptions (if it can perform anything at all), whereas in an active state, a process can only perform internal actions or send messages. Examples of wait-only protocols are given in Figures <ref> and <ref>. In the sequel, we will often refer to the paths of the underlying graph of the protocol. Formally, a path in a protocol = (Q, Σ, , q_f, T) is either a control state q ∈ Q or a finite sequence of transitions in T of the form (q_0,a_0,q_1)(q_1,a_1,q_2)…(q_k,a_k,q_k+1), the first case representing a path from q to q and the second one a path from q_0 to q_k+1. §.§ Abstract Sets of Configurations To solve the coverability problem for wait-only protocols in polynomial time, we rely on a sound and complete abstraction of the set of reachable configurations. In the sequel, we consider a wait-only protocol = (Q, Σ, , q_f, T) whose set of states is partitioned into a set of active states Q_A and a set of waiting states Q_W. An abstract set of configurations γ is a pair (S,Toks) such that: * S ⊆ Q is a subset of states, and, * Toks ⊆ Q_W ×Σ is a subset of pairs composed of a waiting state and a message, and, * q ∉ S for all (q,m) ∈ Toks. We then abstract the set of reachable configurations as a set of states of the underlying protocol. However, as we have seen, some states, like the states in Q_A, can host an unbounded number of processes together (these will be the states in S), while some states can only host a bounded number (in fact, 1) of processes together (these will be the states stored in Toks). The latter happens when a waiting state q answers a rendez-vous m that has necessarily been requested for a process to be in q. Hence, in Toks, along with a state q (which is necessarily in Q_W), we remember the last message m having been sent on the path leading from to q. Observe that, since several paths can lead to q, there can be (q,m_1),(q,m_2)∈ Toks with m_1≠ m_2. We denote by Γ the set of abstract sets of configurations. Let γ=(S,Toks) be an abstract set of configurations. Before we describe the configurations represented by γ, we need some preliminary definitions. We note st(Toks) the set {q ∈ Q_W | there exists m∈Σ such that (q,m) ∈ Toks} of control states appearing in Toks. Given a state q ∈ Q, we let q be the set {m ∈Σ | there exists q'∈ Q such that (q,?m, q') ∈ T} of messages that can be received in state q (if q is not a waiting state, this set is empty). Given two different waiting states q_1 and q_2 in st(Toks), we say q_1 and q_2 are conflict-free in γ if there exist m_1,m_2 ∈Σ such that m_1 ≠ m_2, (q_1,m_1),(q_2,m_2) ∈ Toks, m_1 ∉q_2 and m_2 ∉q_1. We now say that a configuration C respects γ if and only if for all q ∈ Q such that C(q)>0, one of the following two conditions holds: * q ∈ S, or, * q ∈ st(Toks) and C(q)=1 and for all q' ∈ st(Toks)∖{q} such that C(q')=1, we have that q and q' are conflict-free. Note that the condition is on states q such that C(q) > 0, and not on all states q ∈ Q, because it might be that some states do not appear in S∪ st(Toks) (non-reachable states for instance). Let ⟦γ⟧ be the set of configurations respecting γ. Note that in ⟦γ⟧, there is no restriction on the number of processes that can be put in a state q of S, while a state q of st(Toks) can host at most one process.
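The membership test underlying this definition is purely syntactic and runs in polynomial time, as stated in the lemma that follows. The sketch below only illustrates it: the dictionary/set representation and the helper name receivable are assumptions made here for readability, not notation from the paper.

```python
# Sketch: does configuration C (a dict state -> count) respect gamma = (S, Toks)?
def respects(C, S, Toks, receivable):
    """receivable(q) is assumed to return the set of messages that q can receive."""
    st_toks = {q for (q, _) in Toks}

    def conflict_free(q1, q2):
        # two tokens carrying distinct messages, none receivable by the other state
        return any(m1 != m2 and m1 not in receivable(q2) and m2 not in receivable(q1)
                   for (p1, m1) in Toks if p1 == q1
                   for (p2, m2) in Toks if p2 == q2)

    for q, count in C.items():
        if count == 0 or q in S:
            continue
        if q not in st_toks or count != 1:
            return False
        if any(q2 != q and C.get(q2, 0) == 1 and not conflict_free(q, q2)
               for q2 in st_toks):
            return False
    return True
```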
Two states from can both host a process if they are conflict-free. Finally, we will only consider abstract sets of configurations that are consistent. This property aims to ensure that concrete configurations that respect it are indeed reachable from states of S. Formally, we say that an abstract set of configurations γ=(S,) is consistent if (i) for all (q,m) ∈, there exists a path (q_0,a_0,q_1)(q_1,a_1,q_2)…(q_k,a_k,q) in such that q_0 ∈ S and a_0= !m and for all 1≤ i ≤ k, we have that a_i= ?m_i and that there exists (q'_i,!m_i,q”_i) ∈ T with q'_i ∈ S, and (ii) for two tokens (q,m), (q',m') ∈ either m∈q' and m'∈q, or, m∉q' and m'∉q. Condition (i) ensures that processes in S can indeed lead to a process in the states from . Condition (ii) ensures that if in a configuration C, some states in are pairwise conflict-free, then they can all host a process together. Given γ∈Γ and a configuration C, there exists C' ∈γ such that C' ≥ C if and only if C ∈γ. Checking that C∈γ can be done in polynomial time. §.§ Computing Abstract Sets of Configurations Our polynomial time algorithm is based on the computation of a polynomial length sequence of consistent abstract sets of configurations leading to a final abstract set characterising in a sound and complete manner (with respect to the coverability problem), an abstraction for the set of reachable configurations. This will be achieved by a function F:Γ→Γ, that inductively computes this final abstract set starting from γ_0=(, ∅). Formal definition of the function F relies on intermediate sets S”⊆ Q and ”⊆ Q_W ×Σ, which are the smallest sets satisfying the conditions described in <ref>. From S and , rules described in <ref> add states and tokens to S” and ” from the outgoing transitions from states in S and (). It must be that every state added to S” can host an unbounded number of processes, and every state added to ” can host at least one process, furthermore, two conflict-free states in ” should be able to host at least one process at the same time. We now provide the formal definition of this function. For an abstract set of configurations γ=(S,), we will have γ'=F(γ) if and only if γ'=(S',') where S' and ' are built as follows. First we use some intermediate sets of states S”⊆ Q and ”⊆ Q_W ×Σ which are the smallest sets satisfying the following conditions S ⊆ S” and ⊆” and: * for all (p,τ,p') ∈ T with p ∈ S, we have p' ∈ S”; * for all (p,!a,p') ∈ T with p ∈ S, we have: (a) p' ∈ S” if a ∉p' or if there exists (q,?a,q') ∈ T with q ∈ S; (b) (p',a) ∈” otherwise (i.e. when a ∈p' and there does not exists (q,?a,q') ∈ T with q ∈ S); * for all (q,?a,q') ∈ T with q ∈ S or (q,a) ∈, we have q' ∈ S” if there exists (p,!a,p') ∈ T with p ∈ S; * for all (q,?a,q') ∈ T with (q,m) ∈ with m ≠ a, we have: (a) q' ∈ S” if m ∉q' and there exists (p,!a,p') ∈ T with p ∈ S; (b) (q',m) ∈” if m ∈q' and there exists (p,!a,p') ∈ T with p ∈ S. We have then that S' is the smallest set including S” and such that: * for all (q_1, m_1), (q_2, m_2) ∈” such that m_1 m_2 and m_2 ∉q_1 and m_1 ∈q_2, we have q_1 ∈ S'; * for all (q_1, m_1), (q_2, m_2), (q_3,m_2) ∈” s.t m_1 m_2 and (q_2, ?m_1, q_3) ∈ T, we have q_1 ∈ S'; * for all (q_1, m_1), (q_2, m_2), (q_3, m_3) ∈” such that m_1 m_2 and m_1 m_3 and m_2 m_3 and m_1 ∉q_2, m_1 ∈q_3 and m_2∉q_1, m_2 ∈q_3, and m_3 ∈q_2 and m_3 ∈q_1, we have q_1 ∈ S'. And finally '=(q,m) ∈”| q ∉S'. Consider the wait-only protocol _1 depicted on Figure <ref>. 
From (q_in,∅), rules described in <ref> construct the following pair (S_1”, _1”) = (q_in,q_4,(q_1,a),[0](q_1,b),(q_5,c)). In _1, it is indeed possible to reach a configuration with as many processes as one wishes in the state q_4 by repeating the transition (q_in,!d,q_4) (rule <ref>). On the other hand, it is possible to put at most one process in the waiting state q_1 (rule <ref>), because any other attempt from a process in will yield a reception of the message a (resp. b) by the process already in q_1. Similarly, we can put at most one process in q_5. Note that in _1”, the states q_1 and q_5 are conflict-free and it is hence possible to have simultaneously one process in both of them. If we apply rules of <ref> one more time to (S”_1, ”_1), we get S_2”=, q_2, q_4, q_6,q_7 and _2”=(q_1,a), (q_1,b) ,(q_3,a),(q_3,b),(q_5,c). We can put at most one process in q_3: to add one, a process will take the transition (q_1,?c,q_3). Since (q_1,a), (q_1,b)∈”_1, there can be at most one process in state q_1, and this process arrived by a path in which the last request of rendez-vous was !a or !b. Since {a,b}⊆q_3, by rule <ref>, (q_3,a),(q_3,b) are added. On the other hand we can put as many processes as we want in the state q_7 (rule <ref>): from a configuration with one process on state q_5, successive non-blocking request on letter c, and rendez-vous on letter d will allow to increase the number of processes in state q_7. However, one can observe that q_5 can in fact host an unbounded number of processes: once two processes have been put on states q_1 and q_5 respectively (remember that q_1 and q_5 are conflict-free in (S”_1, ”_1)), iterating rendez-vous on letter c (with transition (q_1, ?c, q_3)) and rendez-vous on letter a put as many processes as one wants on state q_5. This is why we need another transformation from S_2”, _2” to F(S”_1, ”_1). As we shall see, this transformation does not have any impact on S”_1 and ”_1 and so it holds that F((, ∅)) = (S”_1, ”_1). Note F(γ) = (S', '), <ref> describes the construction of S' from (S”, ”), while ' = ”∖ (S ×Σ), i.e. all states added to S' are removed from ' so a state belongs either to S' or to '. Now the case of state q_5 evoked in the previous example leads to application of rule <ref>, since (q_5,c), (q_1,a) ∈”_2, and (q_3,a) (q_1,?c,q_3)∈ T. Finally, F(F(q_in,∅))=(q_in, q_2,q_4, q_5, q_6,q_7,[0](q_1,a), (q_1,b) ,(q_3,a),(q_3,b)). Since q_1 and q_3 are not conflict-free, they won't be reachable together in a configuration. We consider now the wait-only protocol _2 depicted on Figure <ref>. In that case, to compute F((q_in,∅)) we will first have S”=q_in and ”=(q_1,a),(q_2,b),(p_1,m_1),(p_2,m_2),[0](p_3,m_3) (using rule <ref>), to finally get F((q_in,∅))=(q_in,q_1,p_1,(q_2,b),(p_2,m_2),[0](p_3,m_3))). Applying rule <ref> to tokens (q_1, a) and (q_2, b) from ”, we obtain that q_1∈ S': whenever one manages to obtain one process in state q_2, this process can answer the requests on message a instead of processes in state q_1, allowing one to obtain as many processes as desired in state q_1. Now since (p_1,m_1), (p_2, m_2) and (p_3, m_3) are in ” and respect the conditions of rule <ref>, p_1 is added to the set S' of unbounded states. This case is a generalisation of the previous one, with 3 processes. Once one process has been put on state p_2 from , iterating the following actions: rendez-vous over m_3, rendez-vous over m_1, non-blocking request of m_2, will ensure as many processes as one wants on state p_1. 
Finally applying successively F, we get in this case the abstract set (q_in,q_1,q_3,p_1,p_2,p_3,p_4,(q_2,b)). We show that F satisfies the following properties. * F(γ) is consistent and can be computed in polynomial time for all consistent γ∈Γ. * If (S',')=F(S,) then S ≠ S' (and S ⊆ S') or ⊆'. * For all consistent γ∈Γ, if C ∈γ and C C' then C' ∈F(γ). * For all consistent γ∈Γ, if C' ∈F(γ), then there exists C”∈ and C ∈γ such that C”≥ C' and C ^∗ C”. Point 1. and 2, ensures us that if we apply successively the function F to (q_in,∅) then the computation will reach a consistent abstract set γ_f such that γ_f=F(γ_f) and it will take a polynomial time. Points 3. ensures that the computed abstraction is complete whereas Point 4. guarantees its soundness. §.§ Polynomial Time Algorithm We now present our polynomial time algorithm to solve  for wait-only protocols. We define the sequence (γ_n)_n ∈ as follows: γ_0=(,∅) and γ_i+1=F(γ_i) for all i ∈. First note that γ_0 is consistent and that γ_0= is the set of initial configurations. Using Lemma <ref>, we deduce that γ_i is consistent for all i ∈. Furthermore, each time we apply F to an abstract set of configurations (S,) either S or increases, or (S, ) stabilises. Hence for all n ≥ |Q|^2*|Σ|, we have γ_n+1=F(γ_n)=γ_n. Let γ_f=γ_|Q|^2*|Σ|. Using Lemma <ref>, we get: Given C ∈, there exists C_0 ∈ and C' ≥ C such that C_0 ^∗ C' if and only if there exists C”∈γ_f such that C”≥ C. We need to iterate |Q|^2*|Σ| times the function F to compute γ_f and each computation of F can be done in polynomial time. Furthermore checking whether there exists C”∈γ_f such that C”≥ C for a configuration C ∈ can be done in polynomial time by Lemma <ref>, hence using the previous lemma we obtain the desired result.  and  restricted to wait-only protocols are in . § UNDECIDABILITY OF It is known that [CM] is undecidable in its full generality <cit.>. This result holds for a very restricted class of counter machines, namely Minsky machines (Minsky-CM for short), which are CM over 2 counters, _1 and _2. Actually, it is already undecidable whether there is an execution (,0_{_1,_2})^* (ℓ_f, 0_{_1,_2}). Reduction from this last problem gives the following result.  is undecidable, even for wait-only protocols. Fix M = (, ℓ_0, {_1, _2}, Δ ) with ℓ_f ∈ the final state. W.l.o.g., we assume that there is no outgoing transition from state ℓ_f in the machine. The protocol  is described in <ref>. The states {0_i,p_i,1_i,p'_i| i=1,2} will be visited by processes simulating values of counters, while the states in will be visited by a process simulating the different locations in the Minsky-CM. If at the end of the computation, the counters are equal to 0, it means that each counter has been incremented and decremented the same number of times, so that all processes simulating the counters end up in the state ℓ_f. The first challenge is to appropriately check when a counter equals 0. This is achieved thanks to the non-blocking semantics: the process sends a message !zero_i to check if the counter i equals 0. If it is does not, the message will be received by a process that will end up in the deadlock state . The second challenge is to ensure that only one process simulates the Minsky-CM in the states in . This is ensured by the states {w, w'}. Each time a process arrives in the state, another must arrive in the w' state, as a witness that the simulation has begun. 
This witness must reach ℓ_f for the computation to be a testifier of a positive instance of , but it should be the first to do so, otherwise a process already in ℓ_f will receive the message “w” and reach the deadlock state . Thus, if two processes simulate the Minsky-CM, there will be two witnesses, and they won't be able to reach ℓ_f together. § CONCLUSION We have introduced the model of parameterised networks communicating by non-blocking rendez-vous, and showed that safety analysis of such networks becomes much harder than in the framework of classical rendez-vous. Indeed,  and  become -complete and  undecidable in our framework, while these problems are solvable in polynomial time in the framework of <cit.>. We have introduced a natural restriction of protocols, in which control states are partitioned between active states (that allow requesting of rendez-vous) and waiting states (that can only answer to rendez-vous) and showed that  can then be solved in polynomial time. Future work includes finding further restrictions that would yield decidability of . A candidate would be protocols in which waiting states can only receive one message. Observe that in that case, the reduction of <ref> can be adapted to simulate a , hence  for this subclass of protocols is as hard as reachability in Vector Addition Systems with States, i.e. non-primitive recursive <cit.>. Decidability remains open though. § PROOFS OF <REF> We present here the omitted proofs of <ref>. §.§ Proof of <ref> We will in fact prove the  upper bound for a more general model: Non-Blocking Vector Addition Systems (). A  is composed of a set of transitions over vectors of dimension d, sometimes called counters, and an initial vector of d non-negative integers, like in VAS. However, in a , a transition is a pair of vectors: one is a vector of d integers and is called the blocking part of the transition and the other one is a vector of d non-negative integers and is called the non-blocking part of the transition. Let d ∈ℕ. A Non-blocking Vector Addition System () of dimension d is a tuple (T, v_0) such that T ⊆ℤ^d ×ℕ^d and v_init∈ℕ^d. Formally, for two vectors v, v' ∈ℕ^d, and a transition t=(t_b, t_nb) ∈ T, we write v t v' if there exists v”∈ℕ^d such that v” = v + t_b and, for all i ∈ [1,d], v'(i) = max(0, v”(i) - t_nb(i)). We write for ⋃_t ∈ Tt. We define an execution as a sequence of vectors v_1 v_2 … v_k such that for all 1 ≤ i < k, v_i v_i+1. Intuitively, the blocking part t_b of the transition has a strict semantics: to be taken, it needs to be applied to a vector large enough so no value goes below 0. The non-blocking part t_nb can be taken even if it decreases some component below 0: the corresponding component will simply be set to 0. We can now define what is the  problem on .  problem for a  V = (T,v_init) of dimension d ∈ℕ and a target vector v_f, asks if there exists v∈ℕ^d, such that v ≥ v_f and v_init^∗ v. Adapting the proof of <cit.> to the model of  yields the following result. The  problem for  is in . Fix a  (T,v_init) of dimension d, we will extend the semantics of  to a slightly relaxed semantics: let v,v' ∈ℕ^d and t = (t_b, t_nb) ∈ T, we will write v t v' when for all 1≤ j ≤ d, v'(j) = max(0, (v+t_b -t_nb)(j)). Note that v t v' implies that v t v' but the converse is false: consider an   of dimension d = 2, with t = (t_b, t_nb) ∈ T such that t_b =(-3, 0) and t_nb = (0, 1), and let v = (1, 2) and v' =(0, 1). One can easily see that there does not exist v”∈ℕ^2 such that v” = v + t_b, as 1 - 3<0. 
So, t cannot be taken from v and it is not the case that vt v', however, v t v'. We use for ⋃_t ∈ Tt. Let J ⊆ [1,d], a path v_0 v_1 … v_m is said to be J-correct if for all v_i such that i < m, there exists t = (t_b, t_nb) ∈ T, such that v_i t v_i+1 and for all j ∈ J, (v_i + t_b)(j) ≥ 0. We say that the path is correct if the path is [1,d]-correct. It follows from the definitions that for all v,v'∈ℕ^d, v^* v' if and only if there exists a correct path between v and v'. Fix a target vector v_f ∈ℕ^d, and define = |v_f| + max_(t_b, t_nb)∈ T(|t_b| + |t_nb|), where |·| is the norm 1 of vectors in ℤ^d. Let ρ = v_0 v_1 … v_m and J ⊆ [1,d]. We say the path ρ is J-covering if it is J-correct and for all j ∈ J, v_m(j) ≥ v_f(j). Let r ∈ℕ, we say that ρ is (J,r)-bounded if for all v_i, for all j ∈ J, v_i(j) < r. Let v ∈ℕ^d, we define m(J,v) as the length of the shortest J-covering path starting with v, 0 if there is none. Note 𝒥_i = {J⊆ [1,d]| |J| = i } and define the function f as follows: for 1 ≤ i ≤ d, f(i) = max{m(J_i, v) | J_i ∈𝒥_i, v∈ℕ^d}. We will see that f is always well defined, in . f(0) = 1. From any vector v ∈ℕ^d, the path with one element v is ∅-covering. For all 0 ≤ i < d, f(i+1) ≤ (· f(i))^i+1 + f(i). Let J ∈𝒥_i+1 and v∈ℕ^d such that there exists a J-covering path starting with v. Note ρ = v_0t^1…t^mv_m the shortest such path. First case: ρ is (J, .f(i))-bounded. Assume, for sake of contradiction, that for some k < ℓ, for all j∈ J, v_k(j)=v_ℓ(j). Then we show that v_0… v_kv_ℓ+1…v_m is also a J-correct path, with the vectors (v_ℓ')_ℓ< ℓ'≤ m, defined as follows. v_ℓ+1(j)=v_ℓ+1(j) for all j∈ J max(0,(v_k(j)+t^ℓ+1_b(j)-t^ℓ+1_nb(j))) otherwise. And for all ℓ + 1< ℓ'≤ m, v_ℓ'(j)=v_ℓ'(j) for all j∈ J max(0, (v_ℓ'-1(j)+t_b^ℓ'(j)-t_nb^ℓ'(j))) otherwise. Then v_0… v_kv_ℓ+1…v_m is also a J-correct path. Indeed, since v_k(j)=v_ℓ(j) for all j∈ J, we have that v_ℓ+1(j)=v_ℓ+1(j)=max(0,(v_ℓ(j) + t^ℓ+1_b(j) - t^ℓ+1_nb(j)))=max(0,(v_k(j) + t^ℓ+1_b(j) - t^ℓ+1_nb(j))). Moreover, for j∈ J, since v_ℓ(j)+t^ℓ+1_b(j)≥ 0, we get that v_k(j)+ t^ℓ+1_b(j)≥ 0. By definition, for j∉ J, v_ℓ+1(j)=max(0,(v_k(j) + t^ℓ+1_b(j) - t^ℓ+1_nb(j))). Hence, v_k^t^ℓ+1v_ℓ+1, and v_0^t^1… v_k^t^ℓ+1v_ℓ+1 is J-correct. Now let ℓ<ℓ'<m. By definition, for j∈ J, v_ℓ'+1(j)=v_ℓ'+1(j). Then, v_ℓ'+1(j)=max(0,(v_ℓ'(j)+t^ℓ'+1_b(j) - t^ℓ'+1_nb(j))) = max(0,(v_ℓ'(j)+t^ℓ'+1_b(j) - t^ℓ'+1_nb(j))). Again, since ρ is J-correct, we deduce that for j∈ J, v_ℓ'(j)+t^ℓ'+1_b(j)≥ 0, hence v_ℓ'(j)+t^ℓ'+1_b(j)≥ 0. For j∉ J, v_ℓ'+1(j)=max(0, (v_ℓ'(j)+t_b^ℓ'+1(j)-t_nb^ℓ'+1(j))). So v_ℓ'^t^ℓ'+1v_ℓ'+1, and v_0^t^1… v_k^t^ℓ'+1v_ℓ'+1 is J-correct. Then, ρ'=v_0… v_kv_ℓ+1…v_m is a J-correct path, and since v_m(j)=v_m(j) for all j∈ J, it is also J-covering, contradicting the fact that ρ is minimal. Hence, for all k < ℓ, there exists j ∈ J such that v_k(j) ≠ v_ℓ(j). The length of such a path is at most (.f(i))^i+1, so m(J,v)≤ (.f(i))^i+1≤ (.f(i))^i+1+f(i). Second case: ρ is not (J, .f(i))-bounded. We can then split ρ into two paths ρ_1 ρ_2 such that ρ_1 is (J,.f(i))-bounded and ρ_2 = v'_0 … v'_n is such that v'_0(j) ≥.f(i) for some j ∈ J. As we have just seen, |ρ_1|≤ (.f(i))^i+1. Note J' = J ∖{j} with j such that v'_0(j) ≥.f(i). Note that ρ_2 is J'-covering, therefore, by definition of f, there exists a J'-covering execution ρ = w_0 … w_k with w_0=v'_0, and such that |ρ|≤ f(i). Also, by definition of , for all 1≤ j' ≤ d, for all (t_b,t_nb)∈ T, ≥ |t_b(j')|+|t_nb(j')|, then t_b(j')≥ -, and t_b(j')-t_nb(j')≥ -. 
Hence, for all v∈^d, 1≤ j'≤ d, and c∈ such that v(j')≥ + c, for all (t_b,t_nb)∈ T, (v+t_b)(j') ≥ c and (v+t_b-t_nb)(j') ≥ c. Now, since w_0 = v'_0, we get w_0(j)≥.f(i). We deduce two things: first, for all 0 ≤ℓ < k, if t=(t_b,t_nb)∈ T is such that w_ℓ^t w_ℓ+1, it holds that (w_ℓ + t_b)(j)≥.(f(i)- ℓ - 1). Since k = f(i) - 1, it yields that ρ is J-correct. Second, for all 0 ≤ℓ≤ k, w_ℓ(j)≥(f(i) - ℓ). Again, k = f(i) - 1, so w_k(j) ≥≥ v_f(j). Hence ρ is also J-covering. Since ρ is the shortest J-covering path, we conclude that |ρ|≤ (.f(i))^i+1 + f(i), and so m(J,v)≤ (.f(i))^i+1 + f(i). We define a function g such that g(0) = 1 and g(i+1) = (+1)^d(g(i))^d for 0 ≤ i < d; then f(i)≤ g(i) for all 1 ≤ i ≤ d. Hence, f(d) ≤ g(d) ≤ (+1)^d^d+1≤ 2^2^cnlog n for some n ≥max( d, , |v_init|) and a constant c which does not depend on d, v_0, nor v_f or the . Hence, we can cover vector v_f from v_init if and only if there exists a path (from v_init) of length ≤ 2^2^cn log n which covers v_f. Hence, there is a non-deterministic procedure that guesses a path of length ≤ 2^2^cn log n, checks if it is a valid path and accepts it if and only if it covers v_f. As |v_init|≤ n, |v_f| ≤ n and for all (t_b, t_nb) ∈ T, |t_b| + |t_nb| ≤ n, this procedure takes an exponential space in the size of the protocol. By Savitch theorem, there exists a deterministic procedure in exponential space for the same problem. We are now ready to prove that the  problem for  is as hard as the  problem for . [] reduces to  in . Let a  M = (, , Δ_b,Δ_nb, ), for which we assume wlog that it does not contain any self-loop (replace a self loop on a location by a cycle using an additional internal transition and an additional location). We note = {_1, …, _m}, and = {ℓ_1…ℓ_k}, with ℓ_1= and ℓ_k=ℓ_f, and let d = k+m. We define the  V = (T, v_init) of dimension d as follows: it has one counter by location of the , and one counter by counter of the . The transitions will ensure that the sum of the values of the counters representing the locations of M will always be equal to 1, hence a vector during an execution of V will always represent a configuration of M. First, for a transition δ = (ℓ_i, op, ℓ_i')∈Δ, we define (t_δ, t'_δ)∈ℤ^d×ℕ^d by t_δ(i) = -1, t_δ(i')= 1 and, * if op=, then t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ=0_d (where 0_d is the null vector of dimension d), i.e. no other modification is made on the counters. * if op=_j, then t_δ(k+j)=1, and t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ=0_d, i.e. the blocking part of the transition ensures the increment of the corresponding counter, while the non-blocking part does nothing. * if op=_j, then t_δ(k+j)=-1, and t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ=0_d, i.e. the blocking part of the transition ensures the decrement of the corresponding counter, while the non-blocking part does nothing. . * if op=_j, then t_δ(y)= 0 for all other 1≤ y≤ d, and t'_δ(k+j)=1 and t'_δ(y)=0 for all other 1≤ y≤ d, i.e. the blocking part of the transition only ensures the change in the location, and the non-blocking decrement of the counter is ensured by the non-blocking part of the transition. We then let T={t_δ|δ∈Δ}, and v_0 is defined by v_init(1)=1 and v_init(y)=0 for all 2≤ y≤ d. We also fix v_f by v_f(k)=1, and v_f(y)=0 for all other 1≤ y≤ d. One can prove that v_f is covered in V if and only if ℓ_f is covered in M. If there exists w ∈ℕ^ such that (, 0_) ^* (ℓ_f, w), then there exists v ∈ℕ^d such that v_0 ^* v and v ≽ v_f. 
Any configuration (ℓ,w) of M can be turned into a valuation v(ℓ_i,w) of T such that v(ℓ_i,w)(i)=1, for all 1≤ i≤ m, v(ℓ_i,w)(k+i)=w(_i) and for all other 1≤ y≤ k, v(ℓ_i,w)(y)=0. Observe that v(,0_)=v_0. It follows from the definitions that (ℓ_i,w)(ℓ_i',w') if and only if v(ℓ_i,w) v(ℓ_i',w'). Hence, v_0^*v(ℓ_f,w)≥ v_f. If there exists v ∈ℕ^d such that v_0 ^* v and v ≽ v_f, then there exists w ∈ℕ^ such that (, 0_) ^* (ℓ_f, w). One can prove by induction that every vector v reachable from v_0 is such that there exists only one 1 ≤ i ≤ k such that v(i) = 1 and for all 1 ≤ i' ≤ k such that i ≠ i', v(i') = 0. Hence, given a reachable vector v, one can define γ_v a machine configuration as (ℓ_i, w) where i is the unique index 1≤ i≤ k such that v(i) = 1 and, for all 1 ≤ j ≤ m, w(_j) = v(k+j). Note v_0 v_1 … v_n = v, and observe that γ_v_n = (ℓ_f, w) for some w ∈ℕ^. Again, by a simple induction, one can prove that γ_v_0γ_v_1…γ_v_n, which concludes the proof. Putting together Lemma <ref> and Lemma <ref>, we obtain the proof of <ref>. §.§ Proof of <ref> In this subsection, we prove <ref> by proving that the [] problem is  hard. Put together with <ref>, it will prove the -completeness of []. §.§.§ Proofs on the Pocedural  Defined in <ref> We formalize some properties on the procedural  presented in <ref> used in the proof. As for the procedural  𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i, we use this proposition from <cit.>. Let 0≤ i < n, and ∈Y_i. For all v,v'∈ℕ^X', for ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz}, we have (^𝚃𝚂,i,v)^*(ℓ,v') in 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i() if and only if: * (PreTest1): for all 0 ≤ j < i, for all _j ∈Y_j, v(_j) = 2^2^j and for all _j ∈ Y_j, v(_j) = 0; * (PreTest2): v(_i) = 2^2^i and v( _i) = 0; * (PreTest3): v() + v() = 2^2^i; * (PostTest1): For all ∉{,}, v'() = v(); * (PostTest2): either (i) v() = v'() = 0, v() = v'() and ℓ = ℓ^i_z, or (ii) v'() = v() >0, v'() = v() and ℓ = ℓ^𝚃𝚂,i,_nz. Moreover, if for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, v()≤ 2^2^j, then for all 0 ≤ j ≤ n, and any counter ∈ Y_j ∪Y_j, the value of will never go above 2^2^j during the execution. Note that for a valuation v∈ℕ^X' that meets the requirements (PreTest1), (PreTest2) and (PreTest3), there is only one configuration (ℓ,v') with ℓ∈{ℓ^𝚃𝚂,i,_z,ℓ^𝚃𝚂,i,_nz} such that (ℓ_in,v) ^* (ℓ,v'). *Procedural  𝚁𝚜𝚝_i. We shall now prove that the procedural s we defined and displayed in <ref> meet the desired requirements. For all 0≤ i≤ n, any procedural  𝚁𝚜𝚝_i has the following property: For all 0≤ i≤ n, for all v∈ℕ^' such that * (PreRst1): for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0, for all v' ∈ℕ^', if (^𝚁,i, v) ^* (ℓ^𝚁,i_out,v') in 𝚁𝚜𝚝_𝚒 then * (PostRst1): for all ∈ Y_i ∪Y_i, v'() = max(0, v() - 2^2^i), * (PostRst2): for all ∉Y_i ∪Y_i, v'() = v(). For 𝚁𝚜𝚝_0, (PreRst1) trivially holds, and it is easy to see that (PostRst1) and (PostRst2) hold. Now fix 0 ≤ i < n, and consider the procedural- 𝚁𝚜𝚝_𝚒+1. Let v_0 ∈ℕ^' such that for all 0 ≤ j < i+1, for all ∈Y_j, v_0() = 2^2^j and for all ∈ Y_j, v_0( ) = 0, and let v_f such that (^𝚁,i, v_0) ^+ (ℓ^𝚁,i_out,v_f) in 𝚁𝚜𝚝_i. First, we show the following property. Property (∗): if there exist v,v'∈ℕ^' such that v(_i)=k, (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, then v'(_i)=2^2^i, v'(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v()-k), and v'()=v() for all other ∈'. If k=0, then Proposition <ref> ensures that v'(_i)=2^2^i, v'(_i)=0, and for all other ∈', v'()=v(). 
Otherwise, assume that the property holds for some k≥ 0 and consider (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, and v(_i)=k+1. Here, since v(_i)=k+1, Proposition <ref> and the construction of the procedural- ensure that (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_nz,v)(ℓ^𝚁,i+1_2,v)^*(^𝚃𝚂,i,,v_1) with v_1(_i)=k, v_1(_i)=v(_i)+1, for all ∈ Y_i+1∪Y_i+1, v_1()=max(0, v()-1), and for all other ∈', v_1()=v(). Induction hypothesis tells us that (^𝚃𝚂,i,,v_1)^* (ℓ^𝚃𝚂,i,_z,v') with v'(_i)=2^2^i, v'(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v()-k-1), and v'()=v() for all other ∈'. Next, we show the following. Property (∗∗): if there exist v,v'∈ℕ^' such that v(_i)=k, v(_i)=2^2^i, v(_i)=0, and (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, then v'(_i)=2^2^i, v'(y_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v()- k.2^2^i), and v'()=v() for all other ∈'. If k=0, then Proposition <ref> ensures that v'(_i)=2^2^i, v'(_i)=0, and v'()=v() for all other ∈'. Otherwise, assume that the property holds for some k≥ 0 and consider (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v') with no other visit of ℓ^𝚃𝚂,i,_z in between, and v(_i)=k+1. Again, since v(_i)=k+1, Proposition <ref> and the construction of the procedural- ensure that (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_nz,v)(^𝚁,i+1,v)^*(^𝚃𝚂,i,,v_1)^* (ℓ^𝚃𝚂,i,_z,v'_1) (^𝚃𝚂,i,,v'_1), with v_1(_i)=v(_i)-1=k, v_1(_i)=v(_i)+1, v_1(_i)=v(_i)-1=2^2^i-1, v_1(_i)=v(_i)+1=1, for all ∈ Y_i+1∪Y_i+1, v_1()=max(0,v()-1), and for all other ∈', v_1()=v(). By Property (∗), v'_1(_i)=2^2^i, v'_1(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'_1()=max(0, v()-2^2^i), and v'_1()=v_1() for all other ∈'. Induction hypothesis allows to conclude that since (^𝚃𝚂,i,,v'_1)^* (ℓ^𝚃𝚂,i,_z,v'), v'(_i)=2^2^i, v'(_i)=0, for all ∈ Y_i+1∪Y_i+1, v'()=max(0, v'_1()- k.2^2^i) = max(0, v() - (k+1).2^2^i), and v'()=v'_1()=v() for all other ∈'. Since (^𝚁,i, v_0) ^+ (ℓ^𝚁,i_out,v_f), we know that (^𝚁,i, v_0) ^* (^𝚃𝚂,i,,v)^*(ℓ^𝚃𝚂,i,_z,v')(^𝚃𝚂,i,,v')^*(ℓ^𝚃𝚂,i,_z,v”) (ℓ^𝚁,i_out,v_f). By construction, v(_i)=2^2^i-1, v(_i)=2^2^i-1, v(_i)=1, v(_i)=1, for all ∈ Y_i+1∪Y_i+1, v()=max(0,v_0()-1), and for all other counter , v()=v_0(). By Property (∗), v'(_i)=2^2^i=v_0(_i), v'(_i)=0=v_0(_i), for all ∈ Y_i∪Y_i+1, v'()=max(0, v_0()-2^2^i) and for all other ∈', v'()=v(). By Property (∗∗), v”(_i)=2^2^i=v_0(_i), v”(_i)=0=v_0(_i), for all ∈ Y_i∪Y_i+1, v”()=max(0, v_0()-2^2^i - (2^2^i-1).2^2^i)=max(0, v_0()-2^2^i.2^2^i)=max(0, v_0()-2^2^i+1), and for all other ∈', v”()=v'()=v_0(). We get the immediate corollary: Let 0≤ i≤ n, and v∈ℕ^' satisfying (PreRst1) for 𝚁𝚜𝚝_i. If v is i-bounded, then the unique configuration such that (ℓ^𝚁,i_in,v) ^+ (ℓ^𝚁,i_out, v') in 𝚁𝚜𝚝_i is defined v'() = 0 for all ∈ Y_i ∪Y_i and v'() = v() for all ∉ Y_i ∪Y_i. Let 0≤ i ≤ n, and let v∈ℕ^' satisfying (PreRst1) for 𝚁𝚜𝚝_𝚒. If for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v')∈^𝚁,i×ℕ^' such that (ℓ^𝚁,i_in,v) ^* (ℓ, v') in 𝚁𝚜𝚝_i, v' is j-bounded for all 0≤ j ≤ n. We will prove the statement of the property along with some other properties: (1) if ℓ is not a state of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) or 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i), then for all 0 ≤ j < i, for all ∈Y_j, v'() = 2^2^j and for all ∈ Y_j, v'() = 0, and v'(_i) =2^2^i and v'(_i) = 0. (2) if ℓ is not a state of 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) or 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) and if ℓℓ_1^𝚁, i+1, then v'(_i) + v'(_i) = 2^2^i, and if ℓℓ_3^𝚁, i+1, then v'(_i) + v'(_i) = 2^2^i. For 𝚁𝚜𝚝_0, the property is trivial. Let 0≤ i <n, and a valuation v∈ℕ^' such that for all 0 ≤ j ≤ i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0, and such that, for all 0≤ j≤ n, v is j-bounded. 
Let now (ℓ,v') such that (ℓ^𝚁,i+1_in,v) ^* (ℓ, v') in 𝚁𝚜𝚝_i+1. We prove the property by induction on the number of occurences of ^𝚃𝚂,i,z and ^𝚃𝚂,i,y. If there is no occurence of such state between in (ℓ^𝚁,i+1_in,v) ^* (ℓ, v'), then, for all ∈ Y_j ∪Y_j∪{_i, _i} and j i, j i+1, then v'() = v() and so v' is j-bounded. Furthermore, for ∈ Y_i ∪ Y_i+1∪Y_i+1, v'() ≤ v(), and for all ∈Y_i, v'() ≤ v() + 1 = 1. The property (2) is easily verified. Hence the properties hold. Assume now we proved the properties for k occurrences of ^𝚃𝚂,i,z and ^𝚃𝚂,i,y, and let us prove the clam for k+1 such occurrences. Note ℓ_k+1∈{^𝚃𝚂,i,z,^𝚃𝚂,i,y} the last occurence such that: (ℓ^𝚁,i+1_in,v) ^+ (ℓ_k, v_k) (ℓ_k+1, v_k+1) ^* (ℓ, v'). By induction hypothesis, v_k is j-bounded for all 0 ≤ j ≤ n and it respects (1) and (2), and by construction, (ℓ_k, , ℓ_k+1) and ℓ_k ℓ_1^𝚁,i+1, ℓ_k ℓ_3^𝚁, i+1, hence v_k+1 is j-bounded for all 0 ≤ j ≤ n and respects (PreTest1), (PreTest2), and (PreTest3) for 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i) and 𝚃𝚎𝚜𝚝𝚂𝚠𝚊𝚙_i(_i). As a consequence, if ℓ is a state of one of this machine such that (ℓ_k+1, v_k+1)^* (ℓ, v'), then by <ref>, for all 0 ≤ j ≤ n, as v_k+1 is j-bounded, so is v'. Assume now ℓ to not be a state of one of the two machines. And keep in mind that v_k+1 respects (1) and (2). Then, either ℓ = ℓ_out^𝚁, i+1 and so v'() = v_k+1() for all ∈ Y_j ∪Y_j for all j i, and v'(_i) = 2^2^i and v'(_i) = 0 and so the claim holds, either ℓ∈{ℓ_in^𝚁,𝚒+1, ℓ_j'^𝚁, i+1}_j' = 1, 2, 3, 4, 5, 6, …, r. In this case, the execution is such that: (ℓ_k+1, v_k+1) ^+ (ℓ_nz, k+1, v_k+1) ^* (ℓ, v'), where if ℓ_k+1 =^𝚃𝚂,i,z, ℓ_nz, k+1 = ℓ^𝚃𝚂, i ,z_nz and otherwise ℓ_nz, k+1 = ℓ^𝚃𝚂, i ,y_nz. In any cases, for all j i, j i+1, ∈ Y_j∪Y̅_j ∪{_i, _i}, v'() = v_k+1(), hence (1) holds and v' is j-bounded for all j < i and j > i+1. Observe as well that for all ∈ Y_i+1∪Y_i+1, v'() ≤ v_k+1(), and so v' is i+1-bounded. The last thing to prove is that (2) holds. This is direct from the fact that v_k+1 respects (2). About the procedural  𝙸𝚗𝚌_i, we use this proposition from <cit.>. For all 0≤ i< n, for all v,v'∈ℕ^', (^𝙸𝚗𝚌, i,v) ^* (ℓ_out^𝙸𝚗𝚌, i, v') in 𝙸𝚗𝚌_i if and only if: * (PreInc1) for all 0 ≤ j < i, for all ∈Y_j, v() = 2^2^j and for all ∈ Y_j, v() = 0; * (PreInc2) for all ∈Y_i, v( ) = 0, * (PostInc1) for all ∈Y_i, v'() = 2^2^i; * (PostInc2) for all ∉Y_i, v'() = v(). Moreover, if for all 0≤ j ≤ n, v is j-bounded, then for all (ℓ,v”) such that (ℓ^𝙸𝚗𝚌,i_in,v) ^* (ℓ, v”) in 𝙸𝚗𝚌_i, then v” is j-bounded for all 0≤ j≤ n. *Procedural  𝚁𝚜𝚝𝙸𝚗𝚌. We shall now prove the properties in the procedural  𝚁𝚜𝚝𝙸𝚗𝚌 defined in <ref>. The next proposition establishes the correctness of the construction 𝚁𝚜𝚝𝙸𝚗𝚌. Let v ∈ℕ^' be a valuation such that for all 0≤ i ≤ n and for all ∈ Y_i ∪Y_i, v() ≤ 2^2^i. Then the unique valuation v' ∈ℕ^' such that (ℓ_a, v) ^* (ℓ_b, v') in 𝚁𝚜𝚝𝙸𝚗𝚌 satisfies the following: for all 0≤ i ≤ n, for all ∈Y_i, v'() = 2^2^i and for all ∈ Y_i, v'() = 0. Moreover, for all (ℓ,v”) such that (ℓ_a, v) ^* (ℓ, v”) in 𝚁𝚜𝚝𝙸𝚗𝚌, for all 0≤ i≤ n, v” is i-bounded. We can split the execution in (ℓ_a,v) (^𝚁,0,v)^*(ℓ^𝚁,0_out, v_0) (^𝙸𝚗𝚌,0,v_0)^* (ℓ_out^𝙸𝚗𝚌,0,v'_0) (^𝚁,1,v'_0)^*(ℓ^𝚁,1_out,v_1)^*(^𝙸𝚗𝚌,n-1, v_n-1)^*(ℓ^𝙸𝚗𝚌,n-1_out, v'_n-1) (^𝚁,n, v'_n-1)^*(ℓ_out^𝚁,n,v_n)(ℓ_b,v'), with v'=v_n and v=v'_-1. We show that for all 0≤ i≤ n: * P_1(i): For all ∈ Y_i∪Y_i, v_i()=0, and for all ∉ (Y_i∪Y_i), v_i()=v'_i-1(). * P_2(i): For all 0≤ j <i, for all ∈ Y_j, v'_i-1()=0 and for all ∈Y_j, v'_i-1()=2^2^j, and for all other ∈', v'_i()=v_i(). 
* P_3(i): For all v” such that (ℓ_a, v) ^* (ℓ, v”)^* (ℓ^𝚁,i_out, v_i), v” is i-bounded, for all 0≤ i≤ n. For k=0, <ref> implies that for all ∈ Y_0∪Y_0, v_0()=0, and that for all other ∈', v_0()=v(). Moreover, for all v” such that (^𝚁,0,v)^*(ℓ, v”)^*(ℓ_out^𝚁,0,v_0), <ref> ensures that v” is i-bounded, for all 0≤ i≤ n. P_2(0) is trivially true. Let 0≤ k< n, and assume that P_1(k), P_2(k) and P_3(k) hold. P_1(k) and P_2(k) and <ref> imply that for all ∈Y_k, v'_k()= 2^2^k, and that for all other counter ∈', v'_k()=v_k(). Thanks to P_1(k), P_2(k+1) holds. Moreover, we also know by <ref> that for all v” such that (ℓ_out^𝚁,k,v_k) (^𝙸𝚗𝚌,k, v_k)^*(ℓ, v”)^*(ℓ_out^𝙸𝚗𝚌,k, v'_k), v” is i-bounded for all 0≤ i≤ n. Since v'_k is then i-bounded for all 0≤ i≤ n, and since P_2(k) holds, <ref> implies that v_k+1()=0 for all ∈ Y_k+1∪Y_k+1, and that, for all other ∈', v_k+1()=v'_k). So P_1(k+1) holds. Moreover, by <ref>, for all v” such that (ℓ_out^𝙸𝚗𝚌,k, v'_k)(^𝚁,k+1,v'_k)^*(ℓ,v”)^* (ℓ_out^𝚁,k+1,v_k+1), v” is i-bounded for all 0≤ i≤ n. Hence P_3(k+1) holds. By P_1(n), v'()=0 for all ∈ Y_n, and since Y_n=∅, v'()=2^2^n for all ∈Y_n. Let ∉ (Y_n∪Y_n). Then v'()=v'_n-1(), and by P_2(n), for all 0≤ i <n, for all ∈Y_i, v'()=2^2^i, and for all ∈ Y_i, v'()=0. By P_3(n), for all (ℓ,v”) such that (ℓ_a, v) ^* (ℓ, v”) in 𝚁𝚜𝚝𝙸𝚗𝚌, for all 0≤ i≤ n, v” is i-bounded. §.§.§ Proofs of the Reduction We are now ready to prove <ref>, i.e. that the reduction is sound and complete. For some subset of counters Y, we will note v_| Y for the valuation v on counters Y, formally, v_| Y : Y →ℕ and is equal to v on its domain. If there exists v ∈ℕ^ such that (, 0_) ^*_M (ℓ_f, v), then there exists v' ∈ℕ^' such that (', 0_') ^*_N (ℓ_f, v'). From <ref>, we have that (', 0_') ^*_N (, v_0) where v_0 is such that, for all 0 ≤ j ≤ n, for all ∈Y_j, v_0() = 2^2^j and for all ∈ Y_j, v_0( ) = 0. By construction of N, (, v_0)^*_N (ℓ_f,v') with v' defined by: for all 0≤ i <n, for all ∈Y_j, v'() = 2^2^j, for all ∈ Y_j, v'() = 0, and, for all ∈, v'() = v(). Note that in this path, there is no restore step. If there exists v' ∈ℕ^' such that (', 0_') ^*_N (ℓ_f, v'), then there exists v ∈ℕ^ such that (, 0_) ^*_M (ℓ_f, v). We will note v_0 the function such that for all 0≤ i ≤ n, and for all ∈Y_i, v_0() = 2^2^i and for all ∈ Y_i, v_0() = 0. Observe that there might be multiple visits of location in the execution of N, because of the restore transitions. The construction of 𝚁𝚜𝚝𝙸𝚗𝚌 ensures that, every time a configuration (,v) is visited, v=v_0. Formally, we show that for all (, v) such that (',0_')^*_N(,v), we have that v=v_0. First let (',w)^*_N(',w'), with w()≤ 2^2^i, and ', not visited in between. Then for all 0≤ i≤ n, for all ∈ Y_i∪Y_i, w'()≤ 2^2^i. Indeed, let (ℓ,w) be such that (',w)^*_N(ℓ, w)_N(',w'). By <ref>, we know that, for all 0≤ i≤ n, for all ∈ Y_i∪Y_i, w()≤2^2^i. Since the last transition is a restore transition, we deduce that, for all 0≤ i≤ n, for all ∈ Y_i∪Y_i, w'()=w()≤ 2^2^i. * Let v∈ℕ^' be such that (',0_')^*_N(,v), and (,v) is the first configuration where is visited. The execution is thus of the form (',0_')^*_N(',w)^*_N(,v), with (',w) the last time ' is visited. We have stated above that w()≤ 2^2^i. Then, we have that (',0_') ^*_N(',w)_N(ℓ_a,w)^*_N(ℓ_b,v)_N(,v), and by <ref>, v=v_0. * Let now v_k,v_k+1∈ℕ^' be such that (',0_')^*_N(,v_k)^*_N(,v_k+1), and v_k and v_k+1 are respectively the k^th and the (k+1)^th time that is visited, for some k≥ 0. Assume that v_k=v_0. 
We have (, v_k)^*_N(ℓ,v)_N(',v)^*_N(',v)_N(ℓ_a,v)_N^*(ℓ_b,v_k+1) _N(, v_k+1). Since the  M is 2EXP-bounded, and v_k=v_0, we obtain that for all ∈=Y_n, v()≤ 2^2^n. For all 0≤ i<n, for all ∈ Y_i∪Y_i, v()=v_0(), then for all 0≤ i≤ n, for all ∈ Y_i∪Y_i, v()≤ 2^2^i. Then, as proved above, v()≤ 2^2^i for all 0≤ i≤ n, for all ∈ Y_i∪Y_i. By <ref>, v'=v_0. Consider now the execution (',0_')^*_N(,v)^*_N(ℓ_f,v'), where (,v) is the last time the location is visited. Then, as proved above, v=v_0. From the execution (,v)^*_N(ℓ_f,v'), we can deduce an execution (, v_|)^*_M (ℓ_f, v'_|). Since v=v_0 and for all ∈=Y_n, v()=0, we can conclude the proof. The two previous lemmas prove that the reduction is sound and complete. By <ref>, we proved the -hardness of the problem, and so <ref>. § PROOFS OF <REF> In this section, we present proofs omitted in <ref>. §.§ Proof of <ref> We present here the proof of <ref>. The two lemmas of this subsection prove the soundness and completeness of the reduction presented in <ref>. Put together with <ref>, we prove <ref>. Let C_0 ∈, C_f ≥ C_F. If C_0 ^* C_f, then there exists v∈ℕ^Q such that (, 0_)^*(ℓ_f, v). For all q∈ Q, we let v_q(q)=1 and v_q(q')=0 for all q'∈ such that q'≠ q. Let n=||C_0||=C_0(), and let C_0C_1⋯ C_mC_f be the configurations visited in . Then, applying the transition (, , ), we get (, 0_) (, v^1) … (, v^n) with v_0 = v^n and v_0()=n and v_0()=0 for all ≠. Let i≥ 0 and assume that (,0_)^*(, C_i). We show that (, C_i)^*(, C_i+1). * If C_im C_i+1, let t=(q_1,!m,q'_1), t'=(q_2, ?m, q'_2)∈ T such that C_i(q_1)>0, C_i(q_2)>0, C_i(q_1)+C_i(q_2)≥ 2, and C_i+1= C_i - q_1,q_2+q'_1,q'_2. Then (, C_i) (ℓ_(t,t')^1, v_i^1)(ℓ_(t,t')^2, v_i^2)(ℓ_(t,t')^3, v_i^3)(, v_i^4), with v_i^1= C_i - v_q_1, v_i^2=v_i ^1 - v_q_2, v_i^3 = v_i^2 + v_q'_1, v_i^4 = v_i^3+v_q'_2. Observe that v_i^4=C_i+1 and then (, C_i)^*(, C_i+1). * If C_iτ C_i+1, let t=(q,τ,q') such that C_i(q)>0 and C_i+1=C_i-q+q'. Then, (, C_i) (ℓ_q, v_i^1) (, v_i^2) with v_i^1=C_i- v_q and v_i^2 = v_i^1+ v_q'. Observe that v_i^2 = C_i+1, then (, C_i)^*(, C_i+1). * If C_im C_i+1, let t=(q,!m,q') such that C_i+1=C_i-q+q', and m = {q_1,…, q_k}. Then C_i(p_j)=0 for all 1≤ j≤ k. We then have that (, C_i) (ℓ_t, v_i^1) (ℓ_t,q_1^m, v_i^1)⋯(ℓ_t,q_k^m, v_i^1) (, v_i^2) with v_i^1= C_i - v_q and v_i^2= v_i^1 + v_q'. Indeed, v_i^1(q_j)=0 for all q_j∈m, so the transitions (ℓ^m_t,q_j, q_j+1), ℓ^m_t,q_j+1) do not change the value of the counters. Hence, v_i^2= C_i+1 and (, C_i)^* (, C_i+1). So we know that (, 0_)^* (, C_f). Moreover, since C_f ≥ C_F, it holds that C_f ≥ v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s. Then (, C_f)^s (ℓ_f, v) with v=C_f-(v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s). Let v∈^Q. If (, 0_)^*(ℓ_f, v), then there exists C_0 ∈, C_f ≥ C_F such that C_0 ^* C_f. Let (, v_0), (, v_1) … (, v_n) be the projection of the execution of M on {}×ℕ^. We prove that, for all 0≤ i≤ n, there exists C_0 ∈, and C≥ v_i such that C_0 ^* C. For i = 0, we let C_0 be the empty multiset, and the property is trivially true. Let 0≤ i < n, and assume that there exists C_0 ∈, C≥ v_i such that C_0 ^* C. * If (, v_i)δ(, v_i+1) with δ=(, , ), then v_i+1 = v_i +v_. The execution C_0^* C built so far cannot be extended as it is, since it might not include enough processes. Let N be such that C_0 C_1… C_N = C, and let C'_0∈ with C'_0()=C_0()+N+1. We build, for all 0≤ j ≤ N, a configuration C'_j such that C'_0^j C'_j, C'_j≥ C_j and C'_j()>C_j()+N-j. For j=0 it is trivial. Assume now that, for 0≤ j < N, C'_j≥ C_j and that C'_j() > C_j()+N-j. 
If C_jm C_j+1 for m∈Σ, with t_1=(q_1,!m, q'_1) and t_2=(q_2,?m,q'_2). Then, C_j+1=C_j - q_1,q_2 + q'_1,q'_2. Moreover, C'_j(q_1) ≥ C_j(q_1)>0 and C'_j(q_2) ≥ C_j(q_2) >0 and C'_j(q_1) + C'_j(q_2)≥ C_j(q_1) + C_j(q_2) ≥ 2. We let C'_j+1 = C'_j - q_1,q_2 + q'_1,q'_2, and C'_jm C'_j+1. It is easy to see that C'_j+1≥ C_j+1. Moreover, C'_j+1() > C_j+1() +N -j > C_j+1 + N -j-1. If C_jm C_j+1 and for all q∈m, C'_j-q_1(q)=0, with t=(q_1,!m,q_2), (respectively C_jτ C_j+1 with t=(q_1,τ,q_2)), we let C'_j+1=C'_j - q_1+q_2, and C'_jmC'_j+1 (respectively C'_jτ C'_j+1). Again, thanks to the induction hypothesis, we get that C'_j+1≥ C_j+1, and C'_j+1 ()> C_j+1() + N - j > C_j+1() + N - j-1. If now C_jm C_j+1, with t_1=(q_1,!m,q_2) and there exists q'_1∈m such that C'_j - q_1(q'_1) >0. Let (q'_1,?m,q'_2)∈ T, and then C'_j+1=C'_j - q_1,q'_1 + q_2, q'_2. Since C'_j≥ C_j, C'_j(q_1)≥ 1, and since C'_j-q_1(q'_1) >0, C'_j(q'_1)≥ 1 and C'_j(q_1) + C'_j(q'_1) ≥ 2. Hence, C'_jm C'_j+1. We have that C'_j(q'_1) > C_j(q'_1), so C'_j+1(q'_1) ≥ C_j+1(q'_1) and C'_j+1(q)≥ C_j+1(q) for all other q∈ Q. Hence C'_j+1 > C_j+1. Also, C_j+1() = C_j() + x, with x∈{0,1}. If q'_1≠, then C'_j+1() = C'_j() + y, with y≥ x. Hence, since C'_j() > C_j() + N - j, we get C'_j+1() > C_j+1() + N - j > C_j+1() + N -j - 1. If q'_1 =, then we can see that C'_j+1() = C'_j() +y, with x-1≤ y ≤ x. In that case, C'_j+1() > C_j() + N-j+y≥ C_j() + N- j + x-1 ≥ C_j+1() + N-j-1. So we have built an execution C'_0 ^* C'_N such that C'_N≥ C_N and C'_N() > C_N(). Hence, C'_N≥ v_i+1. * If (,v_i) (ℓ_(t,t')^1, v_i^1) (ℓ_(t,t')^2, v_i^2) (ℓ_(t,t')^3, v_i^3) (, v_i+1), with t= (q_1,!m,q_2) and t'=(q'_1, ?m, q'_2), then v_i^1 = v_i - v_q_1, v_i^2= v_i^1 - v_q'_1, v_i^3 = v_i^2 + v_q_2, and v_i+1 = v_i^3+ v_q'_2. Then by induction hypothesis, C(q_1)≥ 1, C(q'_1)≥ 1, and C(q_1) + C(q'_1) ≥ 2. We let C' = C - q_1, q'_1 + q_2, q'_2. We have Cm C' and C' ≥ v_i+1. * If (, v_i) (ℓ_q, v_i^1) (, v_i+1) with (q,τ, q')∈ T and v_i^1 = v_i - v_q and v_i+1 = v_i^1 + v_q', then by induction hypothesis, C≥ 1, and if we let C'=C- q+q', then CτC', and C'≥ v_i+1. * If (, v_i) (ℓ_t, v_i^1) (ℓ_t,p_1^m, v_i^2)… (ℓ_t,p_k^m, v_i^k+1) (, v_i+1) with t=(q,!m,q') and m = {p_1,…,p_k}, and (C-q)(p)=0 for all p∈m. We let C' = C- q+q', hence Cm C'. Moreover, v_i^1 = v_i - v_q, and, for all 1≤ j <k, it holds that v_i^j+1(p_j) = max(0, v_i^j(p_j) - 1) and v_i^j+1(p)=v_i^j(p) for all p≠ p_j. By induction hypothesis, C≥ v_i, hence v_i^j(p)=0 for all p∈m, for all 1≤ j≤ k+1. Hence, v_i+1 = v_i^k+1 + v_q' = v_i^1 + v_q', and C' ≥ v_i+1. * If (, v_i) (ℓ_t, v_i^1) (ℓ_t,p_1^m, v_i^2)… (ℓ_t,p_k^m, v_i^k+1) (, v_i+1) with t=(q,!m,q') and m = {p_1,…,p_k}, and (C-q)(p_j)>0 for some p_j∈m. Let (p_j,?m,p'_j)∈ T and C' = C - q,p_j+q',p'_j. Obviously, Cm C'. It remains to show that C'≥ v_i+1. This is due to the fact that in the M, the counter p'_j will not be incremented, unlike C(p'_j). Moreover, in the protocol , only p_j will lose a process, whereas in M, other counters corresponding to processes in m may be decremented. Formally, by definition and by induction hypothesis, C-q≥ v_i^1. Also, for all p∈m, either v_i^1(p)=v_i^k+1(p) = 0, or v_i^k+1(p) = v_i^1(p)-1. Remark that since C≥ v_i, then C-q≥ v_i-v_q = v_i^1, hence (C-q,p_j)(p_j) = (C-q)(p_j) - 1 ≥ v_i^1(p_j)-1. Also, (C-q)(p_j) - 1≥ 0, hence (C-q)(p_j) - 1≥max(0,v_i^1(p_j)-1)=v_i^k+1(p_j). Observe also that, for all p≠ p_j∈m, if v_i^1(p)>0, then (C-q,p_j)(p)= (C-q)(p) ≥ v_i^1(p) > v_i^k+1(p). If v_i^1(p) = 0, then (C-q,p_j)(p)≥ v_i^1(p)= v_i^k+1(p). 
For all other p∈ Q, (C-q,p_j)(p) = (C-q)(p) ≥ v_i^1(p)= v_i^k+1(p). Hence, C-q,p_j≥ v_i^k+1. By definition, v_i+1 = v_i^k+1 + v_q'. Hence, (C-q,p_j+q',p'_j)(p)≥ v_i+1(p), for all p≠ p'_j, and (C-q,p_j+q',p'_j)(p'_j)> v_i+1(p'_j). So, C'> v_i+1. Now we know that the initial execution of M is: (, 0_)^∗(, v_n)^∗ (ℓ_f, v_f) with v_f = v_n - (v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s). Thus v_n>v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s. We have proved that we can build an initial execution of P: C_0^*C_n and that C_n≥ v_𝐪_1 + v_𝐪_2 + … + v_𝐪_s. Hence C_n ≥ C_F. §.§ Proofs of <ref> To prove <ref>, we shall use <ref> along with the reduction presented in <ref>. If the reduction is sound and complete, it will prove that  is -hard. As  is a particular instance of the  problem, this is sufficient to prove <ref>. The two lemmas of this subsection prove the soundness and completeness of the reduction presented in <ref>, put together with <ref>, it proves that  is -hard. For all v∈ℕ^d, if (, 0_)_M^*(ℓ_f, v), then there exists C_0 ∈, C_f ∈ such that C_0 ^* C_f. For all ∈, we let N_ be the maximal value taken by in the initial execution (, 0_)^*(ℓ_f, v), and N=Σ_∈ N_. Now, we let C_0∈∩ C_N+1 be the initial configuration with N+1 processes. In the initial execution of that we will build, one of the processes will evolve in the (M) part of the protocol, simulating the execution of the , the others will simulate the values of the counters in the execution. Now, we show by induction on k that, for all k≥ 0, if (, 0_)^k (ℓ, w), then C_0^* C, with C(1_)=w() for all ∈, C(ℓ)=1, C()=N-Σ_∈ w(), and C(s)=0 for all other s∈ Q. C_0L C_0^1R C_0^2, and C_0^2()=N, C_0^2()=1, and C_0^2(s)=0 for all other s∈ Q. So the property holds for k=0. Suppose now that the property holds for k≥ 0 and consider (, 0_)^k (ℓ,w)δ (ℓ',w'). * if δ=(ℓ,,ℓ'), then Cinc_C_1 with C_1=C-ℓ, +ℓ_δ,q_. Indeed, by induction hypothesis, C(ℓ)=1> 0, and C()>0, otherwise Σ_∈ w()=N and w() is already the maximal value taken by so no increment of could have happened at that point of the execution of M. We also have C_1inc_C', since C_1(ℓ_δ)>0 and C_1(q_)>0 by construction, and C'=C_1-ℓ_δ,q_+ℓ', 1_. So C'(ℓ')=1, for all ∈, C'(1_)=w'(), and C'()=N-Σ_∈ w'(). * if δ=(ℓ,,ℓ'), then C(ℓ)=1>0 and C(1_)>0 since w()>0. Then Cdec_C_1 with C_1=C-ℓ,1_+ℓ_δ,q'_. Then C_1dec_C', with C'=C_1-q'_, ℓ_δ+, ℓ'. So C'(ℓ')=1, C'(1_)=C(1_)-1, C'()=C()+1. * if δ=(ℓ,,ℓ') and w()>0 then Cnbdec_C', and C'=C-ℓ, 1_+ℓ', and the case is proved. * if δ=(ℓ,,ℓ') and w()=0 then by induction hypothesis, C(1_)=0 and Cnbdec_C', with C'=C-ℓ+ℓ'. Then, C'(1_)=0=w'(), and C'(ℓ')=1. * if δ=(ℓ,,ℓ'), then CτC', avec C'=C-ℓ+ℓ'. This includes the restore transitions. Then C_0^* C with C(ℓ_f)=1 and C∈. Let C_0 ∈, C_f ∈ such that C_0 ^* C_f, then (ℓ_0, 0_)^*_M(ℓ_f, v) for some v∈ℕ^. Before proving this lemma we establish the following useful result. Let C_0 ∈. For all C∈ such that C_0^+ C, we have Σ_p∈{q}∪ Q_M C(p)= 1. Note C_0C_1…C_n = C_f. Now, thanks to <ref>, for all 1≤ i≤ n, we can note 𝗅𝖾𝖺𝖽𝖾𝗋(C_i) the unique state s in {q}∪ Q_M such that C_i(s) = 1. In particular, note that 𝗅𝖾𝖺𝖽𝖾𝗋(C_n) = ℓ_f. We say that a configuration C is M-compatible if 𝗅𝖾𝖺𝖽𝖾𝗋(C)∈. For any M-compatible configuration C∈, we define the configuration of the  π(C_i)=(𝗅𝖾𝖺𝖽𝖾𝗋(C), v) with v=C(1_) for all ∈. We let C_i_1⋯ C_i_k be the projection of C_0C_1… C_n onto the M-compatible configurations. We show by induction on j that: P(j): For all 1≤ j≤ k, (,0_)^*_M π(C_i_j), and Σ_∈C_i_j(q_)+C_i_j(q'_)=0. Moreover, for all C such that C_0^*CC_i_j, Σ_∈C(q_)+C(q'_)≤ 1. 
By construction of the protocol, C_0L C_1(L)^k C_2R C_i_1 for some k ∈ℕ. So π(C_i_1)=(, 0_), and for all C such that C_0^*CC_i_1, Σ_∈C(q_)+C(q'_)=0, so P(0) holds true. Let now 1≤ j <k, and suppose that (,0_)^*_M π(C_i_j), and Σ_∈C_i_j(q_)+C_i_j(q'_)=0. We know that C_i_j^+C_i_j+1. * If there is no C∈ such that C(q)=1 and C_i_j^+C^*C_i_j+1, the only possible transitions from C_i_j are in T_M. Let π(C_i_j)=(ℓ,v). * if C_i_jinc_C then C=C_i_j-ℓ,+ℓ_δ,q_ for δ=(ℓ,,ℓ')∈Δ_b. Σ_∈C(q_)+C(q'_)=1. Note that the message inc_ is necessarily received by some process, otherwise C(q_)=0 and C has no successor, which is in contradiction with the fact the the execution reaches C_f. Moreover, the only possible successor configuration is Cinc_ C_i_j+1, with C_i_j+1=C-q_, ℓ_δ+1_, ℓ'. Hence, obviously, π(C_i_j)π(C_i_j+1). * if C_i_jdec_C then C=C_i_j-ℓ,1_+ℓ_δ,q'_ for δ=(ℓ,,ℓ')∈Δ_b. Σ_∈C(q_)+C(q'_)=1. Note that the message dec_ is necessarily received by some process, otherwise C(q'_)=0 and C has no successor, which is in contradiction with the fact the the execution reaches C_f. Besides, C_i_j(1_)>0 hence v()>0. Moreover, the only possible successor configuration is Cdec_ C_i_j+1, with C_i_j+1=C-q'_, ℓ_δ+, ℓ'. Hence, obviously, π(C_i_j)π(C_i_j+1). * if C_i_jnbdec_C_i_j+1 then C_i_j+1=C_i_j-ℓ,1_+ℓ', for δ=(ℓ,,ℓ')∈Δ_nb. Σ_∈C(q_)+C(q'_)=0. Besides, C_i_j(1_)>0 hence v()>0. Hence, obviously, π(C_i_j)π(C_i_j+1). * if C_i_j𝐧𝐛(nbdec_)C_i_j+1 then C_i_j+1=C_i_j-ℓ+ℓ' for δ=(ℓ,,ℓ')∈Δ_nb. Σ_∈C(q_)+C(q'_)=0. Besides, C_i_j(1_)=0 hence v()=0. Hence, obviously, π(C_i_j) π(C_i_j+1). * if C_i_jτC_i_j+1 then C_i_j+1=C_i_j-ℓ+ℓ' for δ=(ℓ,,ℓ')∈Δ_nb. Σ_∈C(q_)+C(q'_)=0. Besides, C_i_j(1_)=C'_i_j+1(1_) for all ∈. Hence, obviously, π(C_i_j)π(C_i_j+1). * Otherwise, let C be the first configuration such that C(q)=1 and C_i_j^+C^*C_i_j+1. The transition leading to C is necessarily a transition where the message L has been sent. Remember also that by induction hypothesis, Σ_∈C_i_j(q_)+C_i_j(q'_)=0. * if C_i_jLC, then C(q)=1, and by induction hypothesis, Σ_∈C(q_)+C(q'_)=0. Then the only possible successor configuration is CRC_i_j+1, with Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0, and π(C_i_j+1)=(, v), so π(C_i_j)π(C_i_j+1), by a restore transition. * if C_i_jinc_C_1LC then C_1=C_i_j-ℓ,+ℓ_δ,q_ for δ=(ℓ,,ℓ')∈Δ_b and Σ_∈C_1(q_)+C_1(q'_)=1. Now, C=C_1 - ℓ_δ, + q_, q, so C(q)=1=C(q_), and Σ_∈C(q_)+C(q'_)=1. * If CRC_i_j+1, then C_i_j+1 = C - q,q_+,, then Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0 and π(C_i_j+1)=(, v), hence π(C_i_j)π(C_i_j+1) by a restore transition. * Now C(q_)=1 so it might be that Cinc_ C', with C'=C - q_+1_. Here, Σ_∈C'(q_)+C'(q'_)=0. However, 𝚕𝚎𝚊𝚍𝚎𝚛(C')={q} so C' is not M-compatible. The only possible transition from C' is now C'R C_i_j+1 with C_i_j+1= C'-q+. Hence, C_i_j+1(1_)= C'(1_)=C_i_j(1_)+1=v()+1, and C_i_j+1(1_)=C'(1_)=C_i_j(1_)=v() for all ≠. So π(C_i_j)=(ℓ,v)δ (ℓ',v+v_)(, v+v_)=π(C_i_j+1), the last step being a restore transition. Finally, Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0. * if C_i_jdec_C_1L C, then C_1=C_i_j-ℓ,1_+ℓ_δ,q'_ for δ=(ℓ,,ℓ')∈Δ_b and Σ_∈C_1(q_)+C_1(q'_)=1. Now, C=C_1 - ℓ_δ, + q_, q, so C(q)=1=C(q'_), and Σ_∈C(q_)+C(q'_)=1. Again, two transitions are available: * If CRC_i_j+1, then C_i_j+1 = C - q,q'_+,, then Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0 and π(C_i_j+1)=(, v), hence π(C_i_j)π(C_i_j+1) by a restore transition. * Now C(q'_)=1 so it might be that Cdec_ C', with C'=C - q'_+. Here, Σ_∈C'(q_)+C'(q'_)=0. However, 𝚕𝚎𝚊𝚍𝚎𝚛(C')={q} so C' is not M-compatible. The only possible transition from C' is now C'R C_i_j+1 with C_i_j+1= C'-q+. 
Hence, C_i_j+1(1_)= C'(1_)=C_i_j(1_)-1=v()-1, and C_i_j+1(1_)=C'(1_)=C_i_j(1_)=v() for all ≠. So π(C_i_j)=(ℓ,v)δ (ℓ',v-v_)(, v+v_)=π(C_i_j+1), the last step being a restore transition. Finally, Σ_∈C_i_j+1(q_)+C_i_j+1(q'_)=0. * If C_i_jinc_ C_1 then, it means that C_i_j()=0. In that case, let δ=(ℓ,,ℓ')∈Δ_b, and C_1=C_i_j -ℓ+ℓ_δ. Since, by induction hypothesis, C_1(q_)=C_i_j()=0, the only possible transition from C_1 would be C_1LC_i_j+1. However, C_i_j()=C_1()=0, so this transition is not possible, and C_1 is a deadlock configuration, a contradiction with the hypothesis that C_i_jC_i_j+1. * If C_i_jdec_ C_1 then it means that C_i_j(1_)=0. In that case, let δ=(ℓ,,ℓ')∈Δ_b, and C_1=C_i_j -ℓ+ℓ_δ. Since, by induction hypothesis, Σ_∈C_1(q_)+C_1(q'_) = Σ_∈C_i_j(q_)+C_i_j(q'_) = 0, the only possible transition from C_1 is C_1LC, with C=C_1 - ,ℓ_δ + q, q_. Again, Σ_∈C(q_)+C(q'_) = 0, and C(ℓ)= for all ℓ∈ Q_M, so the only possible transition is CR C_i_j+1. Observe that C_i_j+1 is M-compatible, with C_i_j+1()=1, and C_i_j+1(1_)=C_i_j(1_) for all ∈. Hence π(C_i_j+1)=(, v), and π(C_i_j)π(C_i_j+1), thanks to a restore transition of M. We then have, by P(k), that (,0_)^*_M π(C_i_k), with C_i_k M-compatible and such that C_i_k^* C_f, and C_i_k is the last M-compatible configuration. Then, by definition of an M-compatible configuration, C_i_k=C_f, and π(C_i_k)=(ℓ_f,v) for some v∈ℕ^. § PROOF OF SECTION <REF> We present here omitted proofs of <ref>. §.§ Technical Lemma We provide here a lemma which will be useful in different parts of this section. Let be rendez-vous protocol and C,C' ∈ such that C=C_0 C_1 ⋯ C_ℓ=C'. Then we have the two following properties. * For all q ∈ Q verifying C(q)=2.ℓ+a for some a ∈, we have C'(q)≥ a. * For all D_0 ∈ such that D_0 ≥ C_0, there exist D_1,…,D_ℓ such that D_0 D_1 ⋯ D_ℓ and D_i ≥ C_i for all 1 ≤ i ≤ℓ. According to the semantics associated to (non-blocking) rendez-vous protocols, each step in the execution from C to C' consumes at most two processes in each control state q, hence the result of the first item. Let C,C' ∈ such that C C'. Let D ∈ such that D ≥ C. We reason by a case analysis on the operation performed to move from C to C' and show that there exists D' such that D D' and D'≥ C'. (To obtain the final result, we repeat k times this reasoning). * Assume C m C' then there exists (q_1, !m, q_1') ∈ T and (q_2, ?m, q_2')∈ T such that C(q_1)>0 and C(q_2)>0 and C(q_1)+C(q_2)≥ 2 and C' = C - q_1, q_2 + q_1', q_2'. But since D ≥ C, we have as well D(q_1)>0 and D(q_2)>0 and D(q_1)+D(q_2)≥ 2 and as a matter of fact D m D' for D' = D - q_1, q_2 + q_1', q_2'. Since D≥ C, we have D' ≥ C'. * The case C τ C' can be treated in a similar way. * Assume C 𝐧𝐛(m) C', then there exists (q_1, !m, q_1') ∈ T, such that C(q_1)>0 and (C-q_1)(q_2)=0 for all (q_2, ?m, q_2') ∈ T and C' = C - q_1 + q'_1. We have as well that D(q_1)>0. But we need to deal with two cases: * If (D-q_1)(q_2)=0 for all (q_2, ?m, q_2') ∈ T. In that case we have D 𝐧𝐛(m) D' for D' = D - q_1 + q'_1 and D' ≥ C'. * If there exists (q_2, ?m, q_2') ∈ T such that (D-q_1)(q_2)>0. Then we have that D m D' for D' = D - q_1, q_2 + q_1', q_2'. Note that since (C-q_1)(q_2)=0 and D ≥ C, we have here again D' ≥ C'. §.§ Properties of Consistent Abstract Sets of Configurations §.§.§ Proof of Lemma <ref> Let C' ∈γ such that C' ≥ C. Let q ∈ Q such that C(q)>0. Then we have C'(q)>0. If q ∉ S, then q ∈ and C'(q)=1 and C(q)=1 too. Furthermore for all q' ∈∖q such C(q')=1, we have that C'(q')=1 and q and q' are conflict-free. 
This allows us to conclude that C ∈γ. Checking whether C belongs to γ can be done in polynomial time applying the definition of ·. §.§.§ Building Configurations from a Consistent Abstract Set Let γ be a consistent abstract set of configurations. Given a subset of states U ⊆ Q, if for all N ∈ and for all q ∈ U there exists C_q ∈γ and C'_q ∈ such that C_q ^∗ C'_q and C'_q(q)≥ N, then for all N ∈, there exists C ∈γ and C' ∈ such that C ^∗ C' and C'(q) ≥ N for all q ∈ U. We suppose γ=(S,) and reason by induction on the number of elements in U∖ S. The base case is obvious. Indeed assume U ∖ S=∅ and let N∈. We define the configuration C such that C(q)=N for all q ∈ S and C(q)=0 for all q ∈ Q∖ S. It is clear that C ∈γ and that C(q) ≥ N for all q ∈ U (since U ∖ S=∅, we have in fact U ⊆ S). We now assume that the property holds for a set U and we shall see it holds for U ∪p, p∉ S. We assume hence that for all N ∈ and for all q ∈ U ∪p there exists C_q ∈γ and C'_q ∈ such that C_q ^∗ C'_q and C'_q(q)≥ N. Let N ∈. By induction hypothesis, there exists C_U ∈γ and C'_U ∈ such that C_U ^∗ C'_U and C_U'(q) ≥ N for all q ∈ U. We denote by ℓ_U the minimal number of steps in an execution from C_U to C'_U. We will see that that we can build a configuration C ∈γ such that C ^∗ C”_U with C”_U ≥ C_U and C”_U(p) ≥ N+2*ℓ_U. Using Lemma <ref>, we will then have that C”_U ^∗ C' with C' ≥ C'_U and C'(p) ≥ N. This will allow us to conclude. We as well know that there exist C_p ∈γ and C'_p ∈ such that C_p ^∗ C'_p and C'_p(p)≥ N+2*ℓ_U+(k*ℓ). We denote by ℓ_p the minimum number of steps in an execution from C_p to C'_p. We build the configuration C as follows: we have C(q)=C_U(q)+2*ℓ_p+(k*ℓ)+C_p(q) for all q ∈ S, and we have C(q)=C_p(q) for all q ∈. Note that since C_p ∈γ, we have that C ∈γ. Furthermore, we have C ≥ C_p, hence using again Lemma <ref>, we know that there exists a configuration C”_p such that C ^∗ C”_p and C”_p ≥ C'_p (i.e. C”_p(p) ≥ N+2*ℓ_U+(k*ℓ) and C”_p(q) ≥ C_U(q)+(k*ℓ) + C_p(q) for all q ∈ S by <ref>,<ref>) Having C_U ∈γ, we name (q_1, m_1) … (q_k, m_k) the tokens in such that C_U(q_j) = 1 for all 1 ≤ j ≤ k, and for all q ∈∖{q_j}_1 ≤ j ≤ k, C_U(q) =0. Since γ is consistent, for each (q_j, m_j) there exists a path (q_0,j,!m_j,q_1,j)(q_1,j,?m_1,j,q_2,j)…(q_ℓ_j,j,?m_ℓ_j,j,q_j) in such that q_0,j∈ S and such that there exists (q'_i,j,!m_i,j,q”_i,j) ∈ T with q'_i,j∈ S for all 1 ≤ i ≤ℓ_j. We denote by ℓ = max_1 ≤ j≤ k(ℓ_j)+1. Assume there exists 1≤ i≤ j≤ k such that (q_i,m_i),(q_j,m_j)∈ and C_U(q_i)=C_U(q_j)=1, and m_i∈q_j and m_j∈q_i. Since C_U respects γ, q_i and q_j are conflict-free: there exist (q_i,m), (q_j,m')∈ such that m∉q_j and m'∉q_i. Hence, (q_i,m_i), (q_i, m), (q_j,m_j), (q_j,m')∈, and m∉q_j and m_j∈q_i. Therefore, we have (q_i,m), (q_j,m_j)∈ and m∉q_j and m_j∈q_i, which is in contradiction with the fact that γ is consistent. Hence, for all 1≤ i≤ j≤ k, for all (q_i,m_i), (q_j,m_j)∈, m_i∉q_j and m_j∉q_i. We shall now explain how from C”_p we reach C”_U in k*ℓ steps, i.e. how we put (at least) one token in each state q_j such that q_j ∈ and C_U(q_j)=1 in order to obtain a configuration C”_U ≥ C_U. We begin by q_1. Let a process on q_0,1 send the message m_1 (remember that q_0,1 belongs to S) and let ℓ_1 other processes on states of S send the messages needed for the process to reach q_1 following the path (q_0,1,!m_1,q_1,1)(q_1,1,?m_1,1,q_2,1)…(q_ℓ_1,1,?m_ℓ_1,1,q_1). 
At this stage, we have that the number of processes in each state q in S is bigger than C_U(q)+((k-1)*ℓ) and we have (at least) one process in q_1. We proceed similarly to put a process in q_2, note that the message m_2 sent at the beginning of the path cannot be received by the process in q_1 since, as explained above, m_2 ∉q_1. We proceed again to put a process in the states q_1 to q_K and at the end we obtain the configuration C”_U with the desired properties. §.§ Proof of Lemma <ref> In this subsection, the different items of Lemma <ref> have been separated in distinct lemmas. F(γ) is consistent and can be computed in polynomial time for all consistent γ∈Γ. The fact that F(γ) can be computed in polynomial time is a direct consequence of the definition of F (see <ref>). Assume γ = (S,) ∈Γ to be consistent. Note (S”, ”) the intermediate sets computed during the computation of F(γ), and note F(γ) = (S', '). To prove that F(γ) is consistent, we need to argue that (1) for all (q, m) ∈”∖, there exists a finite sequence of transitions (q_0, a_0, q_1) … (q_k, a_k, q) such that q_0 ∈ S, and a_0 = !m and for all 1 ≤ i≤ k, we have that a_i = ?m_i and that there exists (q'_i, !m_i, q'_i+1) ∈ T with q'_i ∈ S, and (2) for all (q,m), (q',m') ∈' either m∈q' and m'∈q or m∉q' and m'∉q. We start by proving property (1). If (q, m) has been added to ” with rule <ref>, then by construction, there exists p ∈ S such that (p, !a, p') ∈ T, and (q, m) = (p', a). The sequence of transition is the single transition is (p, !a, q). If (q, m) has been added to ” with rule <ref>, then there exists (q',m) ∈, and (q', ?a, q) with m a. Furthermore, m ∈q and there exists (p, !a,p') ∈ T with p ∈ S. By hypothesis, γ is consistent, hence there exists a finite sequence of transitions (q_0, q_0, q_1) … (q_k, a_k, q') such that q_0 ∈ S, and a_0 = !m and for all 1 ≤ i≤ k, we have that a_i = ?m_i and that there exists (q'_i, !m_i, q'_i+1) ∈ T with q'_i ∈ S. By completing this sequence with transition (q', ?a, q) we get an appropriate finite sequence of transitions. It remains to prove property (2). Assume there exists (q, m), (q',m') ∈' such that m ∈q' and m' ∉q, then as ' ⊆”, (q, m), (q',m') ∈”. By condition <ref>, q ∈ S', therefore, as ' = {(p, a) ∈”| p ∉ S'}, we have that (q, m) ∉', and we reached a contradiction. If (S',')=F(S,) then S ⊊ S' or ⊆'. From the construction of F (see <ref>), we have S ⊆ S”⊆ S'. Assume now that S=S'. First note that ⊆” (see Table <ref>) and that ∩ S=∅. But '=(q,m) ∈”| q ∉S'=(q,m) ∈”| q ∉S. Hence the elements that are removed from ” to obtain ' are not elements of . Consequently ⊆'. For all consistent γ∈Γ, if C ∈γ and C C' then C' ∈F(γ). Let γ = (S,)∈Γ be a consistent abstract set of configurations, and C ∈ such that C ∈γ and C C'. Note F(γ) = (S', ') and γ' = (S”, ”) the intermediate sets used to compute F(γ). We will first prove that for all state q such that C'(q) > 0, q ∈ S' or q ∈('), and then we will prove that for all states q such that q ∈(') and C'(q)>0, C'(q) = 1 and for all other state p∈(') such that C'(p) >0, p and q are conflict-free. Observe that S ⊆ S”⊆ S', ⊆”, and (”) ⊆(') ∪ S'. First, let us prove that for every state q such that C'(q)>0, it holds that q ∈ S' ∪('). Note that for all q such that C(q) > 0, because C respects γ, q ∈() ∪ S. As () ∪ S ⊆(') ∪ S', the property holds for q. Hence, we only need to consider states q such that C(q) = 0 and C'(q) > 0. 
If C τ C' then q is such that there exists (q', τ, q) ∈ T, q' is therefore an active state and so q' ∈ S, (recall that ⊆ Q_W ×Σ). Hence, q should be added to (”) ∪ S” by condition <ref>. As (”) ∪ S”⊆(') ∪ S', it concludes this case. If C a C' then q is such that there exists (q', !a, q) ∈ T, with q' an active state. With the same argument, q' ∈ S and so q should be added to (”) ∪ S” by condition <ref> or <ref>. If C a C', then q is either a state such that (q', !a, q) ∈ T and the argument is the same as in the previous case, or it is a state such that (q', ?a, q) ∈ T, and it should be added to (”)∪ S” by condition <ref>, <ref>, or <ref>. Therefore, we proved that for all state q such that C'(q) >0, it holds that q ∈(') ∪ S'. It remains to prove that if q ∈(), then C'(q) = 1 and for all q' ∈(') ∖{q} such that C'(q') = 1, we have that q and q' are conflict-free. Note that if q ∈() and C(q) = C'(q) = 1, then for every state p such that p ∈() and C(p) = C'(p) = 1, it holds that q and p are conflict-free. Observe that if C τ C', then note q the state such that (q', τ ,q), it holds that {p | p ∈(') and C'(p) > 0}⊆{p | p ∈() and C(p) = 1}: q' is an active state, q might be in () but it is added to S”⊆ S' with rule <ref>, and for all other states, C'(p) = C(p). If p ∈(') and C(p) > 0, it implies that C'(p)= C(p) = 1 and p∈() (otherwise p is in S ⊆ S'). Hence, there is nothing to do as C respects γ. Take now q ∈(') ∖() with C'(q) > 0, we shall prove that C'(q) =1 and for all p ∈(') and C'(p) > 0, q and p are conflict-free. If q ∈(') ∖(), it implies that C(q) = 0 because C respects γ. Hence: either (1) C a C' with transition (q', !a, q) ∈ T, either (2) C a C' with transitions (q_1, !a, q'_1) ∈ T and (q_2, ?a, q'_2) ∈ T and q = q'_1 or q=q'_2. In the latter case, we should be careful as we need to prove that q'_2 q'_1, otherwise, C'(q) = 2. Case (1): Note that as only one process moves between C and C' and C(q)= 0, it is trivial that C'(q) = 1. In this first case, as it is a non-blocking request on a between C and C', it holds that: for all p ∈() such that C(p) = 1, a ∉p. Take p ∈('), such that p q and C'(p) = 1, then C'(p) = C(p) = 1 and so p ∈(), and a ∉p. Suppose (p, m) ∈' such that m ∈q, then we found two tokens in ' such that m ∈q and a ∉p which contradicts F(γ)'s consistency. Hence, p and q are conflict-free. Case (2): Note that if q'_2 ∈('), then q_2 ∈() (otherwise, q'_2 should be in S' by condition <ref>), and note (q_2, m) ∈, with (q'_2, m) ∈'. Note as well that if q'_1 ∈('), then a ∈q'_1 (otherwise, q'_1 should be in S' by condition <ref>) and (q'_1 ,a) ∈' by condition <ref>. Furthermore, if q'_1 ∈('), q_2 ∈() as well as otherwise q'_1 should be added to S' by condition <ref>. We first prove that either q'_1 ∈ S', or q'_2 ∈ S'. For the sake of contradiction, assume this is not the case, then there are three tokens (q'_1, a), (q_2, m), (q'_2, m) ∈' ⊆”, such that (q_2, ?a, q'_2) ∈ T. From condition <ref>, q'_1 should be added to S' and so (q'_1, a) ∉'. Note that, as a consequence q'_1 q'_2 or q'_1 = q'_2 ∈ S'. Take q ∈(') ∖() such that C'(q) >0, if such a q exists, then q = q'_1 or q = q'_2 and q'_1 q'_2. As a consequence, C'(q) = 1 (note that if q'_1 = q_2, C(q_2) = 1). Take p ∈(') ∖{q} such that C'(p) > 0, it is left to prove that q and p are conflict-free. If p q and p ∈('), then C'(p) = C(p) (because q'_1 ∈ S' or q'_2 ∈ S'). Hence, p ∈() and C'(p) = 1. Assume q = q'_1 and assume q and p are not conflict-free. Remember that we justified that q_2 ∈(), and therefore, C(q_2) = 1. 
Hence, either C'(q_2) = 0, or q_2 = q'_2 and in that case q_2,q_2' ∈ S' or q_2' = q_1' and then q_2=q. In any case, p q_2. As C respects γ, there exists (p, m_p) and (q_2, m) ∈ such that m_p ∉q_2 and m ∉p (q_2 and p are conflict-free). As p ∈('), (p,m_p) ∈' and so m_p∈q or a ∈p (q and p are not conflict-free). As F(γ) is consistent, m_p∈q and a ∈p. Note that a m_p because a ∈q_2, a m because m ∉p, and obviously m m_p. Note also that if m ∉q, then we found two tokens (q,a) and (q_2,m) in ' such that a ∈q_2 and m ∉q, which contradicts the fact that F(γ) is consistent (Lemma <ref>). Hence, m∈q. Note that even if q_2 is added to S”, it still is in ”. As ' ⊆” we found three tokens (p, m_p), (q_2,m), (q, a) in ”, satisfying condition <ref>, and so p should be added to S', which is absurd as p ∈('). We reach a contradiction and so q and p should be conflict-free. Finally assume q = q_2'. If q = q_2, then, because C respects γ, q and p are conflict-free. Otherwise, as q_2 is conflict-free with p, there exists (q_2, m ) and (p, m_p) in such that m ∉p and m_p ∉q_2. Note that (q,m) ∈” from condition <ref> (otherwise, q ∈ S” which is absurd). Hence, (q, m) ∈' and, as p ∈('), (p,m_p) is conserved from to '. It remains to show that m_p ∉q. Assume this is not the case, then there exists (p,m_p) and (q,m) ∈' such that m∉p and m_p∈q which is absurd given F(γ)'s consistency. As a consequence, q and p are conflict-free. We managed to prove that for all q such that C'(q) >0, q ∈ S' ∪('), and if q ∈('), then C'(q) = 1 and for all others p∈(') such that C'(p) = 1, p and q are conflict-free. For all consistent γ∈Γ, if C' ∈F(γ), then there exists C”∈ and C ∈γ such that C”≥ C' and C ^∗ C”. Let γ be a consistent abstract set of configurations and C'∈F(γ). We suppose that γ=(S,) and F(γ)=γ'=(S','). We will first show that for all N ∈, for all q ∈ S' there exists a configuration C_q ∈γ and a configuration C_q' ∈ such that C_q ^∗ C_q' and C'_q(q) ≥ N. This will allow us to rely then on Lemma <ref> to conclude. Take N ∈ and q ∈ S', if q ∈ S, then take C_q ∈γ to be N · q. Clearly C_q ∈F(γ), C_q(q) ≥ N and C_q ^∗ C_q. Now let q ∈ S' ∖ S. Note (”, S”) the intermediate sets of F(γ)'s computation. Case 1: q ∈ S”. As a consequence q was added to S” either by one of the conditions <ref>, <ref>, <ref> or <ref>. In cases <ref> and <ref> when a ∉q, note q' the state such that (q', τ, q) or (q', !a, q), and consider the configuration C_q = N · q'. By doing N internal transitions or non-blocking requests, we reach C'_q= N · q. Note that the requests on a are non-blocking as q' ∈ Q_A and a ∉q. C'_q ∈F(γ). In cases <ref> with a∈q and in case <ref>, note (q_1, !a, q_1') and (q_2, ?a, q_2') the two transitions realizing the conditions. As a consequence q_1, q_2 ∈ S. Take the configuration C_q =N · q_1, N · q_2. C_q ∈γ and by doing N successive rendez-vous on the letter a, we reach configuration C'_q = N· q'_1 + N · q'_2. C'_q ∈F(γ), and as q ∈{q'_1, q'_2}, C'_q(q) ≥ N. In case <ref>, there exists (q', m) ∈ such that (q', ?a, q) ∈ T, m ∉q, and there exists p ∈ S such that (p, !a,p') ∈ T. Remember that γ is consistent, and so there exists a finite sequence of transitions (q_0, !m, q_1) (q_1, a_1, q_2) … (q_k, a_k, q') such that q_0 ∈ S and for all 1 ≤ i ≤ k, a_i = ?m_i and there exists (q'_i , !m_i, q”_i) ∈ T with q'_i ∈ S. Take C_q = (N-1) · q_0 + (N-1) · q'_1 + … + (N-1) · q'_k + N · p + q'. Clearly C_q ∈γ as all states except q' are in S and q' ∈(), C_q(q') = 1. 
We shall show how to put 2 processes on q from C_q and then explain how to repeat the steps in order to put N. Consider the following execution: C_q a C_1 x_m C_2 m_1…m_k C_k+2a C_k+3. The first rendez-vous on a is made with transitions (p, !a, p') and (q', ?a, q). Then either m ∉p' and x_m = m, otherwise, x_m = m, in any case, the rendez-vous or non-blocking sending is made with transition (q_0, !m, q_1) and the message is not received by the process on q (because m ∉q) and so C_2 ≥q + q_1. Then, each rendez-vous on m_i is made with transitions (q'_i, !m_i,q”_i) and (q_i, ?m_i, q_i+1) (q_k+1 = q'), . Hence C_k+3≥(N-2)· q_0+ (N-2) · q'_1 + … + (N-2) · q'_k + (N-2) · p + 2 · q. We can reiterate this execution (without the first rendez-vous on a) N-2 times to reach a configuration C'_q such that C'_q ≥N · q. Case 2: q ∉ S”. Hence, q should be added to S' by one of the conditions <ref>, <ref>, and <ref>. If it was added with condition <ref>, let (q_1, m_1), (q_2, m_2) ∈” such that q =q_1, m_1 m_2, m_2 ∉q_1 and m_1 ∈q_2. From the proof of Lemma <ref>, one can actually observe that all tokens in ” correspond to "feasible" paths regarding states in S, i.e there exists a finite sequence of transitions (p_0, !m_1, p_1) (p_1, a_1, p_2) … (p_k, a_k, q_1) such that p_0 ∈ S and for all 1 ≤ i ≤ k, a_i = ?b_i and there exists (p'_i , !b_i, p”_i) ∈ T with p'_i ∈ S. The same such sequence exists for the token (q_2, m_2), we note the sequence (s_0, !m_2, s_1)… (s_ℓ, a_ℓ, q_2) such that s_0 ∈ S and for all 1 ≤ i ≤ℓ, a_i = ?c_i and there exists (s'_i , !c_i, s”_i) ∈ T with s'_i ∈ S. Take C_q = N · p_0 + N · s_0 + N p'_1 + … + N p'_k + N · s'_1 + … + N · s'_ℓ. Clearly, C_q ∈γ, as all states are in S. Consider the following execution: C_q m_1 C_1 b_1…b_k C_k+1, the non-blocking sending of m_1 is made with transition (p_0, !m_1, p_1) and each rendez-vous on letter b_i is made with transitions (p'_i, !b_i, p_i”) and (p_i, ?b_i, p_i+1) (p_k+1 = q_1). Hence, C_k+1 is such that C_k+1≥q_1. From C_k+1, consider the following execution: C_k+1x_m_2 C_k+2c_1…c_ℓ C_k+ℓ +2m_1C_k+ℓ +3, where x_m_2 = m_2 if no process is on a state in R(m_2), or x_m_2 = m_2 otherwise. In any case, as m_2 ∉q_1, C_k+2≥q_1. And each rendez-vous on letter c_i is made with transitions (s'_i, !c_i, s_i”) and (s_i, ?c_i, s_i+1) (s_k+1 = q_2), the last rendez-vous on m_1 is made with transitions (p_0, !m_1, p_1) and (q_2, ?m_1, q_2') (such a q_2' exists as m_1 ∈q_2). Hence, C_k+ℓ +3≥p_1 + q_1. By repeating the two sequences of steps (without the first non-blocking sending of m_1) N-1 times (except for the last time where we don't need to repeat the second execution), we reach a configuration C'_q such that C'_q≥N · q_1. If it was added with condition <ref>, then let (q_1, m_1), (q_2,m_2), (q_3,m_2) ∈” such that m_1 m_2 and (q_2, ?m_1, q_3) ∈ T with q =q_1. From the proof of Lemma <ref>, ” is made of "feasible" paths regarding S and so there exists a finite sequence of transitions (p_0, !m_2, p_1) (p_1, a_1, p_2) … (p_k, a_k, q_2) such that p_0 ∈ S and for all 1 ≤ i ≤ k, a_i = ?b_i and there exists (p'_i , !b_i, p”_i) ∈ T with p'_i ∈ S. The same sequence exists for the token (q_1, m_1), we note the sequence (s_0, !m_1, s_1)… (s_ℓ, a_ℓ, q_1) such that s_0 ∈ S and for all 1 ≤ i ≤ℓ, a_i = ?c_i and there exists (s'_i , !c_i, s”_i) ∈ T with s'_i ∈ S. Take C_q = N · p_0 + N · s_0 + N p'_1 + … + N p'_k + N · s'_1 + … + N · s'_ℓ. Clearly, C_q ∈γ, as all states are in S. 
We do the same execution from C_q to C_k+1 as in the previous case: C_q m_2 C_1 a_1…a_k C_k+1. Here C_k+1 is then such that C_k+1≥q_2. Then, from C_k+1 we do the following: C_k+1m_1 C_k+2c_1…c_ℓ C_k+ℓ+2m_2 C_k+ℓ+3: the rendez-vous on letter m_1 is made with transitons (s_0, !m_1, s_1) and (q_2, ?m_1, q_3). Then, each rendez-vous on letter c_i is made with transitions (s'_i, !c_i, s_i”) and (s_i, ?c_i, s_i+1) (s_k+1 = q_1), and the last rendez-vous on letter m_2 is made with transitions (p_0, !m_2, p_1) and (q_3, ?m_2,q_3') (such a state q_3' exists as (q_3, m_2) ∈” and so m_2∈q_3). Hence, C_k+ℓ+3 is such that C_k+ℓ +3≥q_1 + p_1. We can repeat the steps from C_1 N-1 times (except for the last time where we don't need to repeat the second execution), to reach a configuration C'_q such that C'_q≥N · q_1. pas encore relu condition 8If it was added with condition <ref>, then let (q_1, m_1), (q_2, m_2), (q_3, m_3) ∈”, such that m_1 m_2, m_2 m_3, m_1 m_3, and m_1 ∉q_2, m_1 ∈q_3, and m_2 ∉q_1, m_2 ∈q_3 and m_3 ∈q_2 and m_3 ∈q_1, and q_1 = q. Then there exists three finite sequences of transitions (p_0, !m_1, p_1) (p_1, ?b_1, p_2) … (p_k, ?b_k, p_k+1), and (s_0, !m_2, s_1) (s_1, ?c_1, s_2) … (s_ℓ, ?c_k, s_ℓ +1), and (r_0, !m_3, r_1) (r_1, ?d_1, r_2) … (r_j, ?d_j, r_j+1) such that p_k+1 = q_1, s_ℓ +1 = q_2 and r_j+1 = q_3, and for all messages a ∈{ b_i_1, c_i_2, d_i_3}_1 ≤ i_1 ≤ k, 1 ≤ i_2 ≤ℓ, 1 ≤ i_3 ≤ j = M, there exists q_a∈ S such that (q_a, !a, q'_a). Take C_q = Np_0 + Ns_0 + Nr_0 + ∑_a ∈ MNq_a. From C_q there exists the following execution: C_q m_1 C_1 b_1…b_k C_k +1 where the non-blocking sending is made with the transition (p_0, !m_1, p_1) and each rendez-vous with letter b_i is made with transitions (q_b_i, !b_i, q'_b_i) and (p_i, ?b_i, p_i+1). Hence, C_k+1≥q_1. Then, we continue the execution in the following way: C_k+1x_m_2 C_k+2c_1…c_ℓ C_k+ ℓ +2 where x_m_2 = m_2 if there is no process on R(m_2), and x_m_2 = m_2 otherwise. In any case, the rendez-vous is not answered by a process on state q_1 because m_2 ∉q_1. Furthermore, each rendez-vous with letter c_i is made with transitions (q_c_i, !c_i, q'_c_i) and (s_i, ?c_i, s_i+1). Hence, C_k +ℓ+2≥q_2 + q_1. From C_k+ℓ +2 we do the following execution: C_k+ℓ +2m_3 C_k+ℓ +3d_1…d_j C_k +ℓ + j +3 where the rendez-vous on letter m_3 is made with transitions (r_0, !m_3, r_1) and (q_2, ?m_3, q_2') (this transition exists as m_3 ∈q_2). Each rendez-vous on d_i is made with transitions (q_d_i, !d_i, q'_d_i) and (r_i, ?d_i, r_i+1). Hence, the configuration C_k+ ℓ +j+3 is such that C_k+ℓ +j +3≥q_3 + q_1. Then from C_k+ℓ +j +3: C_k+ℓ + j +3m_1 C_k+ℓ + j +4 where the rendez-vous is made with transitions (p_0, !m_1, p_1) and (q_3, ?m_1, q'_3) (this transition exists as m_1 ∈q_3). By repeating N-1 times the execution from configuration C_1, we reach a configuration C'_q such that C'_q(q_1) ≥ N. Hence, for all N ∈ℕ, for all q ∈ S', there exists C_q ∈γ, such that C_qC'_q and C'_q(q) ≥ N. From Lemma <ref>, there exists C'_N and C_N ∈γ such that C_N ^∗ C'_N and for all q ∈ S', C_N(q) ≥ N. Take C' ∈F(γ), we know how to build for any N ∈, a configuration C'_N such that C'_N(q) ≥ N for all states q ∈ S' and there exists C_N ∈γ, such that C_N ^∗ C'_N, in particular for N bigger than the maximal value C'(q) for q ∈ S', C'_N is greater than C'_N on all the states in S'. To conclude the proof, we need to prove that from a configuration C'_N' for a particular N', we can reach a configuration C” such that C”(q) ≥ C'(q) for q ∈ S' ∪('). 
As C' respects F(γ), remember that for all q ∈('), C'(q) = 1. The execution is actually built in the manner of the end of the proof of Lemma <ref>. Note N_max the maximum value for any C'(q). We enumerate states q_1, …, q_m in (') such that C'(q_i) = 1. As C' respects F(γ), for i j, q_i and q_j are conflict free. From Lemma <ref>, F(γ) is consistent, and so we note (p^j_0, !m^j, p^j_1) (p^j_1, ?m^j_1, p^j_2) … (p^j_k_j, ?m^j_k_j, p^j_k_j+1) the sequence of transitions associated to state q_j such that: p^j_k_j+1 = q_j, (q_j, m^j) ∈ and for all m^j_i, there exists (q_m^j_i, !m_i^j, q'_m^j_i) with q_m^j_i∈ S'. Note that for all i j, q_i and q_j are conflict-free and so there exists (q_i, m), (q_j,m') ∈' such that m ∉q_j and m' ∉q_i. As F(γ) is consistent, it should be the case for all pairs of tokens (q_i, a), (q_j, a'). Hence m^j ∉q_i and m^i ∉q_j. Note ℓ_j = k_j + 1. For N' = N_max + ∑_1≤ j ≤ mℓ_j, there exists a configuration C'_N' such that there exists C_N'∈γ, C_N'^*C'_N', and C'_N'(q) ≥ N' for all q ∈ S'. In particular, for all q ∈ S', C'_N'(q) ≥ C'(q) + ∑_1≤ j ≤ mℓ_j. Then, we still have to build an execution leading to a configuration C” such that for all q ∈('), C”(q) ≥ C'(q). We then use the defined sequences of transitions for each state q_j. With ℓ_1 processes we can reach a configuration C_1 such that C_1(q_1) ≥ 1: C_1 x_m^1 C_2 m_1^1…m_k_1^1 C_ℓ_1+ 1. x_m^1 = m^1 if there is no process on R(m^1), and x_m^1 = m^1 otherwise. Each rendez-vous on m_i^1 is made with transitions (p_i^1, ?m_i^1, p_i+1^1) and (q_m_i^1, ! m_i^1, q'm_i^1). As a result, for all q ∈ S', C_ℓ_1+1(q) ≥ C'(q) +∑_2≤ j ≤ mℓ_j and C_ℓ_1 +1(q_1) ≥ 1. We then do the following execution form C_ℓ_1 + 1: C_ℓ_1 +1x_m^2 C_ℓ_1+2m_1^2…m_k_2^2 C_ℓ_1+ ℓ_2+ 2. x_m^2 = m^2 if there is no process on R(m^2), and x_m^2 = m^2 otherwise. Remember that we argued that m^2 ∉q_1, and therefore C_ℓ_1 + 2(q_1) ≥ C_ℓ_1 +1(q_1) ≥ 1. Each rendez-vous on m_i^2 is made with transitions (p_i^2, ?m_i^2, p_i+1^2) and (q_m_i^2, ! m_i^2, q'm_i^2). As a result, C_ℓ_1+ℓ_2 +2(q) ≥ C'(q) +∑_3≤ j ≤ mℓ_j for all q ∈ S' and C_ℓ_1+ ℓ_2 + 2≥q_1 + q_2. We can then repeat the reasoning for each state q_i and so reach a configuration C” such that C”(q) ≥ C'(q) for all q ∈ S' and, C”≥q_1 + q_2 + …q_m. We built the following execution: C_N'^∗ C'_N'^∗ C”, such that C”≥ C', and C'_N'∈γ. §.§ Proof of Lemma <ref> Assume that there exists C_0 ∈ and C' ≥ C such that C_0 C_1 … C_ℓ =C'. Then using the Lemma <ref> iteratively, we get that C' ∈γ_ℓ. From the definition of F and ·, one can furthermore easily check that γ⊆F(γ) for all γ∈Γ. Hence we have γ_ℓ⊆γ_f and C' ∈γ_f. Before proving the other direction, we first prove by induction that for all i ∈ and for all D ∈γ_i, there exists C_0 ∈ and D' ≥ D such that C_0 ^∗ D'. The base case for i=0 is obvious. Assume the property holds for γ_i and let us show it is true for γ_i+1. Let E ∈γ_i+1. Since γ_i+1=F(γ_i), using Lemma <ref>, we get that there exists E' ∈ and D ∈γ_i such that E' ≥ E and D ^∗ E'. By the induction hypothesis, there exist C_0 ∈ and D' ≥ D such that C_0 ^∗ D'. Using the monotonicity property stated in Lemma <ref>, we deduce that there exists E”∈ such that E”≥ E' ≥ E and C_0 ^∗ D' ^∗ E”. Suppose now that there exists C”∈γ_f such that C”≥ C. By the previous reasoning, we get that there exist C_0 ∈ and C' ≥ C”≥ C such that C_0 ^∗ C'.
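For readers who prefer an operational view of the step relation manipulated throughout these proofs, the following short sketch (purely illustrative; the data structures and function names are our own choices, not part of the formal development) enumerates the successors of a configuration under rendez-vous, non-blocking request and internal steps.

```python
from collections import Counter

def successors(config, transitions):
    """Enumerate successor configurations of a configuration C (a multiset of states).

    `transitions` is a collection of tuples (q, act, q2) with act one of
    ('!', m), ('?', m) or ('tau',); this encoding is an illustrative assumption.
    """
    succs = []
    # Internal steps: one process takes a tau transition.
    for (q, act, q2) in transitions:
        if act == ('tau',) and config[q] > 0:
            succs.append(config - Counter([q]) + Counter([q2]))
    # Steps where some process sends a message m.
    for (q1, act1, q1p) in transitions:
        if act1[0] != '!' or config[q1] == 0:
            continue
        m = act1[1]
        rest = config - Counter([q1])                  # configuration without the sender
        receivers = [(q2, q2p) for (q2, act2, q2p) in transitions
                     if act2 == ('?', m) and rest[q2] > 0]
        if receivers:
            # Rendez-vous on m: the sender and one receiver move simultaneously.
            for (q2, q2p) in receivers:
                succs.append(rest - Counter([q2]) + Counter([q1p, q2p]))
        else:
            # Non-blocking request: no other process can receive m, the sender moves alone.
            succs.append(rest + Counter([q1p]))
    return succs
```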
http://arxiv.org/abs/2307.04232v1
20230709172357
Multi-spin probes for thermometry in the strong-coupling regime
[ "Marlon Brenes", "Dvira Segal" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
[email protected] Department of Physics and Centre for Quantum Information and Quantum Control, University of Toronto, 60 Saint George St., Toronto, Ontario, M5S 1A7, Canada Department of Physics and Centre for Quantum Information and Quantum Control, University of Toronto, 60 Saint George St., Toronto, Ontario, M5S 1A7, Canada Department of Chemistry University of Toronto, 80 Saint George St., Toronto, Ontario, M5S 3H6, Canada We study the sensitivity of thermometric probes that are composed of N spins coupled to a sample prepared at temperature T. Our analysis extends beyond the weak-coupling limit into the strong sample-probe coupling regime. In particular, sample-induced interactions between each of the spins are generated via strong coupling effects and are not fine-tuned amongst each body composing the probe. By employing the reaction-coordinate mapping to evaluate the non-canonical equilibrium state of the probe at finite coupling, we compute the thermometric sensitivity via the quantum Fisher information evaluated on the equilibrium state itself. We find that for single-spin probes (N = 1), temperature sensitivity decreases in the regime of weak-to-intermediate coupling strength; however, as the coupling increases, we observe much higher sensitivity of the probe in the low-temperature regime. Furthermore, as long as N > 1, there exist optimal values of the sample-probe interaction energy that allow one to attain enhanced thermometric sensitivity when compared to the maximum achieved precision obtained from thermal Gibbs states at weak coupling, particularly in the regime of low temperature. Finally, we show that this enhanced sensitivity may be observed from suboptimal measurements. Multi-spin probes for thermometry in the strong-coupling regime Dvira Segal August 12, 2023 =============================================================== § INTRODUCTION Temperature estimation in the quantum domain is a fervent research field, which has received theoretical and practical attention in recent years <cit.>. As a subset of the rapidly growing field of quantum thermodynamics <cit.>, quantum thermometry has emerged to develop and understand precise protocols for temperature estimation at the nanoscale. Achieving high precision in the estimation of very low temperatures is a difficult task, with a number of applications ranging from cold-atomic systems for quantum simulation <cit.> to sensing with nitrogen-vacancy centers in diamond <cit.> and biological systems <cit.>. Diverse approaches have been pursued to achieve high-precision thermometry, most of which fall into two categories: local and global thermometry. While global thermometry <cit.> arose as a means to understand temperature estimation in situations where the temperature range is not well-known a priori, local thermometry concerns the design of temperature probes and the optimal measurements to be carried out to achieve high-temperature sensitivity <cit.>. Adaptive Bayesian strategies have also emerged, with promising precision enhancements in temperature estimation <cit.>. In non-integrable quantum systems, where thermalisation is ubiquitous, the eigenstate thermalisation hypothesis provides the means to estimate temperatures from local operations <cit.>. Quantum thermal machines have also been proposed as a means for temperature estimation <cit.>.
In turn, local thermometry can be sub-categorised into two different classes of protocols: those which estimate temperature by studying the equilibrium state that results from coupling a probe to a sample <cit.>, and those which do so via out-of-equilibrium dynamical response signals <cit.>. We shall refer to the former as equilibrium thermometry, where temperature estimation may only follow from indirect measurements on the equilibrium state. The precision of the temperature estimation, in this case, will depend on both the equilibrium state itself and on the particular indirect measurement chosen. Whenever the probe-sample interaction energy is the smallest energy scale in the configuration, quantum master equations <cit.> predict that the equilibrium state of the probe, i.e., the resulting state in the limit of long times starting from a product state between a probe and a thermalised sample, will be a thermal Gibbs state. We refer to the Gibbs state as the “canonical" state. Certain microscopic conditions need to be met for the equilibrium state to be thermal <cit.>, although thermalisation between a probe and a sample at weak interaction energy is a physical phenomenon that occurs with a high degree of universality. In the coupling regime where the equilibrium state of the probe is canonical, several aspects have been highlighted in order to employ these states for temperature estimation. The optimal measurement that provides the highest temperature sensitivity is the energy measurement of the probe <cit.>, while the design of the probe that provides the ultimate temperature sensitivity is one for which the M levels of the energy spectrum of the probe contain a single, non-degenerate ground state together with an (M - 1)-fold degenerate excited state <cit.>. For practical purposes, achieving such a high degree of control and design is indeed very complicated. In the regime where the probe-sample interaction energy cannot be neglected, the equilibrium state of the probe is non-canonical <cit.>, and it has been argued from perturbation theory that energy measurements remain optimal even in this regime <cit.>, while bath-induced correlations and strong coupling may lead to enhanced temperature sensitivity in integrable and harmonic models <cit.>. It has also been shown that non-Markovian effects, which may be prominent in the regime of strong probe-sample interaction, could also lead to enhancements in temperature estimation from dynamical signals <cit.>. In this work, we consider thermal probes composed of multiple spins, possibly strongly coupled to a sample, as a means for temperature estimation. In particular, we consider sample-induced spin interactions to determine whether an enhancement in the temperature estimation may be achieved. While the equilibrium state in the ultra-strong coupling regime may be accessed via the projection of the probe Hamiltonian onto the eigenbasis of the coupling operator between the sample and the probe <cit.>, in the intermediate (non-perturbative) coupling regime the equilibrium state is most appropriately described via numerical approaches. The reaction-coordinate mapping <cit.> may be employed in certain operational regimes with a high degree of accuracy <cit.> for specific spectral functions of the sample <cit.> to compute the equilibrium state at strong coupling.
The reaction-coordinate mapping provides the means to address strong-coupling effects via a Markovian embedding <cit.>, in which an enlarged system Hamiltonian evolves under Markovian dynamics. It can also be extended via polaron transformations that allow for analytical insight <cit.>. We consider multi-spin probes coupled to a reaction coordinate to model strong-coupling effects and bath-mediated interactions. With this method, we study the reduced state obtained when tracing out the reaction-coordinate degrees of freedom, leading to a non-canonical equilibrium state of the probe. To address the temperature sensitivity, we consider the signal-to-noise ratio (SNR) as a figure of merit, which can be upper-bounded with the quantum Fisher information <cit.> through the quantum Cramér-Rao bound <cit.>. By computing the maximal SNR from the non-canonical equilibrium state of the probe, including bath-induced interactions, we summarise our results as follows: * For a single-spin probe, the effect of strong coupling is detrimental to the optimal temperature sensitivity of the probe at weak-to-intermediate coupling energy. This falls in line with the findings in Ref. <cit.> and extends the results therein to the non-perturbative regime of strong coupling. In the intermediate-to-strong coupling regime, much higher sensitivity may be observed in the low-temperature regime (Fig. <ref>). * For multi-spin probes where the internal interactions amongst each of the N spins are mediated via bath-induced correlations, we find that, as long as N > 1, the temperature range over which the probe is sensitive increases considerably. Furthermore, there exists an optimal coupling λ between the sample and the probe for a given temperature range and probe size N to attain the optimal SNR (Figs. <ref>-<ref>). * The broad-range temperature sensitivity for multi-spin probes can be attained via dephasing operations (diagonal measurements) on the reduced state of the probe, at the cost of decreased sensitivity in the high-temperature regime, but not in the low-temperature regime. Local operations, such as polarisation measurements on the multi-spin probe, diminish temperature sensitivity in the low-temperature regime, but the observed sensitivity is higher than the one obtained from energy measurements at weak coupling (Fig. <ref>). These results contribute to the growing literature aimed at establishing ultimate thermometric precision bounds in strong-coupling thermodynamics. Two drawbacks of equilibrium thermometry that have been pointed out are the long timescales required for equilibration <cit.> and the highly peaked sensitivity of the SNR that is often observed <cit.>, which requires one to somehow obtain prior knowledge of the temperature range to be estimated. We argue that bath-mediated interactions may alleviate these constraints, by increasing the temperature range over which the probe provides high sensitivity and, in many cases, reducing the timescales of equilibration via strong-coupling dynamics. In Sec. <ref> we introduce the common language of equilibrium thermometry and our reaction-coordinate mapping, as well as the model we employ for equilibrium thermometry. In Sec. <ref> we delve into the optimal SNR results for our probe configurations and the SNR obtained from suboptimal measurements. We provide some analysis and conclusions in Sec. <ref>, together with some proposals for future directions.
§ EQUILIBRIUM THERMOMETRY §.§ Ultimate precision bounds Focusing on equilibration processes, thermometry relies on parameter estimation from the equilibrium state of a probe. A thermalised sample is coupled to a probe and the entire configuration is allowed to relax to equilibrium. The equilibrium state of the probe (k_B ≡ 1) ρ̂_p(β) = e^-βĤ_p/Z_p, depends on the parameter under investigation, in this case the temperature T ≡ 1 / β. The temperature may only be estimated via a set of m indirect measurements on the equilibrium state of the probe. The equilibrium state is defined via the Hamiltonian of the probe Ĥ_p and Z_p = Tr[ exp(-βĤ_p)]. The ultimate precision that may be attained via this parameter-estimation protocol is understood from the Cramér-Rao inequality <cit.> δ T ≥ [m ℱ(T)]^-1/2, where δ T stands for the temperature precision and ℱ is the quantum Fisher information (QFI) which, in this context, may be understood as the sensitivity of each optimal measurement <cit.>. The QFI is obtained when maximising the classical Fisher information (FI) over all possible measurements <cit.>. It has been shown that the observable Ô with the largest (optimal) temperature sensitivity at thermal equilibrium is the Hamiltonian Ĥ_p of the probe itself, such that the minimum statistical uncertainty on the signal-to-noise ratio (SNR) is given by <cit.> (T / δ T)^2 ≤ m C(T), where C(T) = (δĤ_p / T)^2 is the heat capacity of the system and δ^2 Ĥ_p = Tr[ρ̂_p(T)Ĥ^2_p] - (Tr[ρ̂_p(T)Ĥ_p])^2. In a more general sense, it will be more useful for our discussion to consider the QFI as <cit.> ℱ(β) = Tr[L̂_β^2 ρ̂_p(β)], where L̂_β is the symmetric-logarithmic derivative (SLD) defined implicitly from the Lyapunov equation ∂_βρ̂_p(β) = 1/2{L̂_β, ρ̂_p(β) }, with {·, ·} denoting the anti-commutator. Following our previous discussion, the most informative measurements can be shown to be the projections onto the eigenbasis of L̂_β <cit.>. For thermal equilibrium processes, where the equilibrium state is of the form ρ̂_p(β) = exp(-βĤ_p) / Z_p, the SLD can be shown to be L̂_β = ⟨Ĥ_p ⟩ - Ĥ_p <cit.>. In this case, the SLD is diagonal in the energy eigenbasis of the Hamiltonian of the probe. If the equilibrium state of the probe is non-canonical, as may be the case when the sample-probe interaction energy is non-negligible, these conditions are not satisfied in general <cit.>. §.§ Thermalisation and strong-coupling thermal fixed point through the reaction coordinate mapping At weak coupling, the state of the probe ρ̂_p(β) may be seen as the steady state of the dynamics that results from coupling the probe to a sample, modelled as a thermal reservoir, whose temperature is to be estimated. The total Hamiltonian of the configuration is given by Ĥ_ tot = Ĥ_p + Ĥ_ B + γĤ_ int, where Ĥ_p, Ĥ_ B and Ĥ_ int are the Hamiltonians of the probe, the bath (sample) and their interaction, respectively. The coupling between the probe and the bath is controlled via the dimensionless parameter γ. In standard open-systems theory, a perturbative approximation to second order in γ together with the Born-Markov approximations yields a quantum master equation in Lindblad form for the dynamics of the probe <cit.> ∂ρ̂_p / ∂ t = -[Ĥ_p, ρ̂_p] + ℒ{ρ̂_p }, where ℒ is the Lindblad superoperator and [·, ·] is the commutator. The above equation dictates the effective dynamics of the probe under environmental effects.
For a given physical configuration, the form of ℒ will depend on the microscopic details of the probe-bath interaction Hamiltonian, and a careful treatment is required for Eq. (<ref>) to yield the correct steady state at long times, i.e., the (canonical) thermal state in Eq. (<ref>) <cit.>. Most importantly, the approximations that lead to Eq. (<ref>) require the probe and the bath to remain approximately in a product state throughout the dynamics and that correlation functions of the bath decay over timescales much shorter than the characteristic timescales of the dynamics of the probe <cit.>. At strong coupling, beyond second-order perturbative approximations in γ, these conditions cannot be guaranteed <cit.>. Instead, in this regime, the total equilibrium state is a Gibbs state of the entire configuration ρ̂_tot = e^-βĤ_tot/Z_tot, such that the reduced state of the probe is the partial trace over environmental degrees of freedom <cit.> ρ̂_p = Tr_B[e^-βĤ_tot/Z_tot], the complication being that this expression requires one to describe the, in principle, infinite number of degrees of freedom of the environment. In certain scenarios, however, one may instead consider the repartitioning of the Hamiltonian into an enlarged system that contains certain degrees of freedom of the bath, and a residual bath to which the system is coupled. The mapping becomes useful as long as the resulting enlarged Hamiltonian remains weakly coupled to the residual bath. This approach is typically known as a Markovian embedding <cit.>, whereby strong-coupling effects are captured via the explicit evolution of the probe's state combined with some bath degrees of freedom. An example of a specific type of Markovian embedding is the so-called reaction coordinate mapping, as depicted in Fig. <ref> <cit.>. Consider a probe coupled to a bosonic bath modelled via an infinite set of harmonic oscillators with total Hamiltonian Ĥ_ tot = Ĥ_p + Ŝ∑_k f_k (b̂^†_k + b̂_k) + ∑_k ν_k b̂^†_k b̂_k, where {b̂_k } are canonical bosonic operators for the k-th mode with frequency ν_k and f_k is the coupling strength between the probe Ĥ_p and the sample through the probe's operator Ŝ. The reaction-coordinate mapping in its most basic form starts by extracting a collective mode (with canonical bosonic operators {â}) from the bath and including it as part of the system, such that the probe is turned into an enlarged system Ĥ_p + Ωâ^†â + λŜ (â^† + â) ↦Ĥ_S, where the extended system Ĥ_S is now weakly coupled to the residual bath, i.e., the resulting bath description after the extraction of the strongly-coupled mode. In Eq. (<ref>), λ is the coupling strength and Ω the frequency of the extracted mode. Both λ and Ω follow from the spectral function of the original (prior to the mapping) sample J(ω), via <cit.> λ^2 = 1/Ω∫_0^∞dω ω J(ω), Ω^2 = ∫_0^∞dω ω^3 J(ω)/∫_0^∞dω ω J(ω). The mapping can be shown to lead to an extended system coupled weakly to a residual bath for certain spectral densities of the original model Eq. (<ref>) <cit.>. If that is the case, then one can justify a master equation of the form Eq. (<ref>) that leads to appropriate thermalisation of the extended system, such that in the limit of long times the steady state is thermal ρ̂_S(β) = exp(-βĤ_S) / Z_S, where Z_S = Tr[ exp(-βĤ_S)] and Ĥ_S is the Hamiltonian of the enlarged system.
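For concreteness, the two integrals defining λ and Ω can be evaluated numerically for a given spectral function. The short sketch below is illustrative only: the Brownian-type form of J(ω), the cutoff and all numerical values are our own assumptions rather than parameters taken from this work.

```python
import numpy as np
from scipy.integrate import quad

def rc_parameters(J, w_max):
    """Reaction-coordinate coupling lambda and frequency Omega from a spectral function J(w).

    Implements lambda^2 = (1/Omega) * int_0^inf dw w J(w) and
    Omega^2 = int_0^inf dw w^3 J(w) / int_0^inf dw w J(w),
    with the integrals cut off at w_max for the numerics.
    """
    m1, _ = quad(lambda w: w * J(w), 0.0, w_max)      # first moment of J
    m3, _ = quad(lambda w: w**3 * J(w), 0.0, w_max)   # third moment of J
    Omega = np.sqrt(m3 / m1)
    lam = np.sqrt(m1 / Omega)
    return lam, Omega

# Assumed Brownian-type spectral function peaked around w0 (illustrative choice only):
gamma, w0, width = 0.1, 15.0, 4.0
J = lambda w: gamma * width * w0**2 * w / ((w**2 - w0**2)**2 + (width * w)**2)
lam, Omega = rc_parameters(J, w_max=60.0)             # lambda and Omega for the assumed J
```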
Through this approach, one may investigate strong-coupling thermometric effects by studying the reduced state of the probe after tracing out the reaction-coordinate degrees of freedom ρ̂_p(β) = Tr_ RC[e^-βĤ_S]/Z_S, and computing first the SLD through Eq. (<ref>) and then the QFI for the reduced state through Eq. (<ref>). § SPIN PROBES The Hamiltonian of the model is given by Eq. (<ref>). We consider a probe composed of N spins with the Hamiltonian Ĥ_p = ∑_i=1^NΔσ̂^z_i. The spins composing the probe do not interact directly with one another; however, they are all coupled strongly to the bath through the system operator Ŝ = ∑_i=1^Nσ̂^x_i. Using the reaction-coordinate mapping, we define the system Hamiltonian Eq. (<ref>) including the original probe model, the reaction coordinate, and their mutual interaction. The reaction coordinate itself couples to the residual bath, allowing thermalisation of the extended system. For details on the mapping see, e.g., Ref. <cit.>. The extraction of a reaction coordinate from the bath (sample) and its inclusion as part of the probe (thermometer) Hamiltonian, as written in Eq. (<ref>), elucidates the generation of an effective coupling between all pairs of spins at non-vanishing coupling λ. This is the case since the spins in Ĥ_S, which are otherwise non-interacting, are coupled via a collective operator Ŝ to the same reaction-coordinate mode. This degree of freedom, which is included explicitly in the equilibrium state in Eq. (<ref>) before being traced out, mediates couplings between the spins of the probe. As mentioned before, the system composed of the probe and the reaction coordinate is assumed to thermalise to a canonical Gibbs state. The reduced state of the probe can be shown to thermalise to a Gibbs state at weak λ <cit.>; however, this is not necessarily the case as λ increases. A natural question is thus whether strong coupling effects in our model could lead to enhanced or detrimental maximal signal-to-noise ratios, which we can compute via T/δ T = √(β^2 ℱ(β)) in the single-shot scenario (m = 1). §.§ Single-spin probe The simplest spin probe, a single spin strongly coupled to the bath, serves as a basic benchmark. In this case, we consider N = 1 in Eq. (<ref>). The extended system Hamiltonian includes a single spin coupled to a reaction-coordinate mode (the latter coupled to the sample). For the SNR, we consider the reduced state of the single spin induced by strong coupling. We compute the SNR as a function of temperature for different values of the coupling parameter λ. The results are shown in Fig. <ref>. The calculation involves the computation of the SLD L̂_β through Eq. (<ref>) to then compute the QFI in Eq. (<ref>), with ρ̂_p = Tr_ RC[e^-βĤ_S] / Z_S and Ĥ_S from Eq. (<ref>). The reaction coordinate with a frequency of Ω = 15Δ is truncated to M = 50 levels, which was sufficient to attain convergence of the results shown in Fig. <ref>. The value of the reaction coordinate frequency emerges from the characteristics of the spectral function of the bath (sample), see Eq. (<ref>) and Appendix <ref>. At weak coupling, the maximal SNR for the single-spin probe can be shown to be related to the heat capacity of the probe through √(C(T)) <cit.>, as depicted by the solid black line in Fig. <ref>. We have that C(T) = ∂_T ⟨Ĥ_p ⟩, which can be computed analytically to obtain the maximum SNR in the single-shot scenario T/δ T = √(C(T)) = 2Δβ e^βΔ/(1 + e^2βΔ). We see in Fig.
<ref>, that the effect of strong coupling is detrimental to the sensitivity of single-spin thermometers in the weak-to-intermediate coupling regime (λ≲ 5Δ). This falls in line with the results presented in Ref. <cit.>, in which a perturbative treatment led to the conclusion that energy measurements in the weakly-coupled case remain the most informative measurements. However, as the coupling strength λ increases, we see that at low temperatures, stronger coupling in the single-spin probe leads to much higher sensitivity than its weakly-coupled counterpart. In fact, in the range T / Δ = [10^-2, 10^-1], strong coupling leads to an SNR several orders of magnitude higher than the weak-coupling value set by the heat capacity of the spin probe. For all curves, however, a fast decay is observed at a given temperature, indicating that this protocol only achieves a certain precision within given temperature ranges. Most interestingly, though, the reduced state of the probe ρ̂_p does not acquire off-diagonal elements in this model <cit.>, which means that both ρ̂_p(β) and L̂_β are diagonal operators. This indicates that the precision shown in Fig. <ref> can be achieved via measurements of the populations of the spin probe at strong coupling, and there is no need to determine the optimal measurement basis, as it corresponds to simple occupations of the reduced density matrix of the probe at equilibrium. We can gather from these results that at intermediate-to-strong coupling, the populations of the spin levels acquire a different temperature dependence than the canonical ones, translating to differences in the SLD compared to weak coupling. This distinct dependence translates to an increased sensitivity of the probe at a lower temperature for sufficiently strong λ. Interestingly, for our choice of Ŝ = σ̂^x, no coherences are generated in the reduced state of the probe. Different choices for the coupling operator Ŝ do indeed lead to temperature-dependent coherences in the state of the probe. The choice of the coupling operator can largely affect the sensitivity of the probe at strong coupling. See Appendix <ref> for further details. §.§ Multi-spin probes Having understood the single-spin probe at strong coupling, we now turn our attention to the N > 1 case. Recalling Eq. (<ref>) and Eq. (<ref>), we do not allow spins to directly interact with each other. However, they do develop an effective interaction via their strong coupling to the sample. Fig. <ref> shows the SNR results as a function of temperature for different N and different coupling parameters λ. It can be seen from Fig. <ref>(a) that at weak-to-intermediate coupling, the behaviour of the SNR is rather similar to the one observed for the single-probe case. The optimal measurements remain the energy measurements in the basis of the weakly-coupled probe, even in the multi-spin case. In fact, in this regime of weak coupling, the bath-mediated interactions are weak enough that the spins composing the probe barely interact with each other. The increased sensitivity follows the trivial √(N) scaling for uncorrelated spins <cit.> at λ→ 0, which can be confirmed from Fig. <ref>(a). However, as shown in Fig. <ref>(b) and Fig. <ref>(c), the effect of strong coupling is non-trivial. In particular, the maximal SNR can be larger at strong coupling than its weakly-coupled counterpart, albeit at higher temperature ranges. Furthermore, strong coupling reveals different SNR peaks at different temperature ranges for certain values of N.
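Although the figures referenced above summarise the outcome, it may help to sketch the numerical pipeline behind them. The following minimal example is our own illustrative sketch, not the production code used for the figures: the helper-function names, the parameter values and the truncation M are assumptions. It builds the extended Hamiltonian of the probe plus reaction coordinate, traces out the reaction coordinate and evaluates the QFI through the SLD.

```python
import numpy as np

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def extended_hamiltonian(N, Delta, lam, Omega, M):
    """H_S = sum_i Delta sz_i + Omega a^dag a + lam (sum_i sx_i)(a^dag + a), RC truncated to M levels."""
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    id2 = np.eye(2)
    a = np.diag(np.sqrt(np.arange(1, M)), 1)           # truncated annihilation operator of the RC
    spin_op = lambda op, site: kron_all([op if i == site else id2 for i in range(N)])
    Hp = sum(Delta * spin_op(sz, i) for i in range(N))
    S = sum(spin_op(sx, i) for i in range(N))
    return (np.kron(Hp, np.eye(M))
            + Omega * np.kron(np.eye(2**N), a.T @ a)
            + lam * np.kron(S, a + a.T))

def reduced_probe_state(H, beta, N, M):
    """rho_p(beta) = Tr_RC[exp(-beta H_S)] / Z_S, via exact diagonalisation of the extended system."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * (E - E.min()))
    w /= w.sum()
    rho = (V * w) @ V.T                                 # thermal state of the extended system
    rho = rho.reshape(2**N, M, 2**N, M)
    return np.trace(rho, axis1=1, axis2=3)              # partial trace over the reaction coordinate

def qfi(rho, drho):
    """QFI: solve the Lyapunov equation for the SLD in the eigenbasis of rho, then Tr[L^2 rho]."""
    p, U = np.linalg.eigh(rho)
    d = U.T @ drho @ U
    L = np.zeros_like(d)
    for i in range(len(p)):
        for j in range(len(p)):
            if p[i] + p[j] > 1e-14:
                L[i, j] = 2.0 * d[i, j] / (p[i] + p[j])  # L_ij = 2 (d_beta rho)_ij / (p_i + p_j)
    return float(sum(p[i] * np.sum(L[i, :]**2) for i in range(len(p))))

# Illustrative usage with assumed parameters: N = 2 spins, Delta = 1, Omega = 15, M = 30 RC levels.
N, Delta, Omega, lam, M, T = 2, 1.0, 15.0, 5.0, 30, 0.5
beta, db = 1.0 / T, 1.0e-4
H = extended_hamiltonian(N, Delta, lam, Omega, M)
rho_p = reduced_probe_state(H, beta, N, M)
drho = (reduced_probe_state(H, beta + db, N, M) - reduced_probe_state(H, beta - db, N, M)) / (2 * db)
snr = np.sqrt(beta**2 * qfi(rho_p, drho))               # T / delta T in the single-shot scenario
```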
We thus come to the conclusion from these results that temperature sensitivity may be higher at strong coupling for multi-spin probes, unlike the single-probe configuration. Furthermore, this effect can only be observed at relatively strong coupling as a collective effect stemming from many-body interactions induced by the sample. In Fig. <ref> we show the SNR as a function of the coupling strength λ for probes composed of a different number of spins at different temperature values. These results show that indeed, at low temperatures, strong coupling translates to a higher sensitivity in multi-probe configurations. In fact, strong coupling translates to temperature sensitivity in certain regimes where weakly-coupled probes provide negligible information through energy measurements. Furthermore, there exists an optimal coupling at a given temperature for which the sensitivity is maximal. This effect washes away as the temperature increases, where we recover that the most informative measurements are the ones related to weakly-coupled configurations. We then see that multi-spin probes are primed for low-temperature thermometry. §.§ Suboptimal measurements We have seen that for a single spin probe, strong coupling increases thermometric sensitivity at sufficiently strong λ in the low-temperature regime. On the other hand, multi-spin probes show interesting behaviour, whereby strong coupling and many-body effects may provide higher temperature sensitivity, particularly in the low-temperature regime. However, achieving such precision from the equilibrium states can be very complicated. Indeed, even at weak coupling, energy measurements can involve highly non-local operations which pose practical and technical complications. At strong coupling, to take advantage of the higher sensitivity that may exist in certain temperature ranges as we have seen in Fig. <ref> and Fig. <ref>, the situation is even more complicated. In this coupling regime, the SLD develops off-diagonal matrix elements which are temperature-dependent, the same as the equilibrium state ρ̂_p from Eq. (<ref>). Therefore, it is even more difficult to understand and choose the optimal basis which renders the SLD in diagonal form, which then fixes the basis for the measurements required to attain the fundamental bound for thermometry. It is therefore imperative to consider suboptimal measurements, ones that are more feasible from the experimental perspective. In light of this, we now consider the temperature sensitivity of the spin probes from suboptimal measurements. The first operation we consider is the dephasing of the reduced state of the probe ρ̂_p onto its diagonal basis ρ̃_p = ∑_k |k⟩⟨k|ρ̂_p |k⟩⟨k|, where |k⟩ = (0,⋯,1_k,⋯,0)^T are basis vectors such that ρ̃_p is diagonal in the spin basis. From this state we can compute the Fisher information via ℱ_ D = [L̃^2_βρ̃_p(β)], where L̃_β is the SLD from Eq. (<ref>) computed through ρ̃_p(β). Naturally, L̃_β is also diagonal in the spin basis, which then implies that the optimal SNR T / δ T = √(β^2 ℱ_ D) follows from the estimation of the occupations (diagonal matrix elements) of ρ̂_p(β). We may also consider the suboptimal measurements which follow from the estimation of a local observable Ô. Given an observable Ô, we may consider the SNR from the measurements following the expectation values of Ô, via T/δ T = T |χ_T(Ô)|/δÔ≤√(β^2 ℱ(β)) in the single-shot scenario (m = 1). In Eq. 
(<ref>), δ^2 Ô≡⟨Ô^2 ⟩ - ⟨Ô⟩^2 is the variance of Ô in the reduced state of the probe ρ̂_p(β) and χ_T(Ô) ≡∂_αTr[ρ̂_p(α) Ô]|_α=T is the temperature susceptibility of Ô <cit.>. We choose an extensive, yet local, operator for the suboptimal measurement. Consider an extensive sum of the spin polarizations in the z component, Ô≡∑_k=1^Nσ̂^z_k, which amounts to estimating the total polarisation of the probe composed of N spins. In Fig. <ref> we display the maximal signal-to-noise ratio T / δ T as a function of temperature for different system sizes N and couplings λ, using four different measurement schemes: energy measurements at weak coupling, the optimal measurement at strong coupling at the value of λ, the dephasing operation which amounts to estimating the occupations of the density matrix, and a sum of local operations where one measures the polarisation of all the spins in the z direction. Panels (a), (b), and (c) in Fig. <ref> show different system sizes, N = 2, 4, and 8, respectively. We start by highlighting that energy measurements at weak coupling have a higher peak for larger system sizes while, as discussed before, strong coupling effects lead to a multi-peaked SNR as a function of T in the optimal basis. Remarkably, even considering the dephasing operation in the spin basis of ρ̂_p(β) leads to the same multi-peaked behaviour, at the cost of reduced sensitivity at higher temperatures. Furthermore, this operation retains the low-temperature sensitivity observed from the optimal measurements at the value of λ, so one need not consider the optimal bound on the SNR at strong coupling for low-temperature thermometry. Considering an extensive sum of local observables, however, indeed leads to decreased sensitivity in the low-temperature regime. We do note that even this operation from local measurements leads to broader temperature sensitivity, with higher values of the SNR than the optimal weakly-coupled counterpart. Our results suggest that high-temperature sensitivity increases with the system size N. This shows that many-body effects and sample-induced correlations in this case lead to relatively high sensitivity even from conducting local operations. Finally, we note that the optimal coupling λ changes with both the system size and the temperature range over which the probe is sensitive. We have selected the values of λ in Fig. <ref> such that they lie close to the optimal value taken from Fig. <ref>. § CONCLUSIONS We have studied the impact of strong-coupling effects on equilibrium thermometry employing multi-spin probes. Using the reaction-coordinate mapping, we showed that a non-canonical equilibrium state of such probes stems from strong-coupling effects with the sample. While the reduced states of the probes are non-canonical, the equilibrium state of the extended Hamiltonian that contains the reaction coordinate is indeed a standard Gibbs state and, therefore, we can take advantage of the Markovian embedding to analyse the reduced states of the probes. This approximation can be shown to yield the correct equilibrium states in certain regimes of the spectral function of the sample <cit.>. From this treatment, we can consider strong-coupling effects in the non-perturbative regime. We have shown that, along the lines of the findings in Ref. <cit.> in which a perturbative treatment was employed, weak-to-intermediate coupling leads to an equilibrium state for which the optimal SNR for thermometry is lower than its weakly-coupled, thermal Gibbs state counterpart.
At stronger values of the coupling parameter, however, thermometric sensitivity in the case N = 1 is higher in the low-temperature regime. We can conclude from these results that the most informative measurements for single-spin probes are the energy measurements of the states that undergo a thermalisation process, up until the value of the sample-probe coupling strength increases such that one may attain higher sensitivity at low T. For multi-spin probes and, in particular, configurations where the probe is composed of spins that are not finely tuned to interact with each other but rather through the sample itself, we have found that strong coupling leads to enhanced precision in the low-temperature regime. This trend is in accord with another configuration, where each body comprising the probe is a harmonic mode and the interaction amongst each mode with the sample is quadratic in bosonic operators, such that the equilibrium state is a non-canonical Gaussian state. As shown in Ref. <cit.>, in such a configuration, low-temperature sensitivity is also enhanced via sample-induced correlations between each body comprising the probe. In both harmonic and anharmonic (spin) cases, the enhanced temperature sensitivity may also be accessed via local measurements. These results suggest that, in a more general sense, bath-induced correlations between local probes enhance low-temperature thermometry. Our study, however, demonstrates that in multi-spin setups the SNR depends on the number of spins in a highly non-monotonic manner once the interaction extends beyond weak coupling. Furthermore, in our model, the coupling mechanism of the spins to the sample provides another route for tunability of thermometric sensitivity (see Appendix <ref>). A direction that has been less explored is related to the effects on the temperature sensitivity from strong coupling effects from dynamical signals. Non-Markovian effects may be prominent in this regime <cit.>. Furthermore, dynamical signals in the non-Markovian regime differ substantially from their weakly-coupled counterparts and may be studied in certain regimes with the reaction-coordinate mapping <cit.>. Given that non-Markovian effects may lead to enhanced temperature sensitivity <cit.>, a promising direction could be to consider strong-coupling effects on thermometric probes from the dynamical perspective using reaction-coordinate mapping. We gratefully acknowledge fruitful discussions with John Goold and Mark T. Mitchison. The work of M.B. has been supported by the Centre for Quantum Information and Quantum Control (CQIQC) at the University of Toronto. D.S. acknowledges support from NSERC and from the Canada Research Chair program. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. § MAXIMALLY-COHERENT Ŝ FOR THE SINGLE-PROBE CASE In the main text, we have considered a specific type of probe-sample coupling interaction. From Eq. (<ref>) and Eq. (<ref>), we have Ŝ = σ̂^x for the single-spin probe. This choice is, in principle, arbitrary. One may instead consider different forms of probe coupling operators that lead to coherences in the spin basis of the reduced state of the single-spin probe ρ̂_p [Eq. (<ref>)]. 
For instance, if we consider Ŝ = 1/√(2) ( σ̂^x + σ̂^z ), then coherences develop in the spin basis of the reduced state of the probe ρ̂_p = Tr_RC[e^-βĤ_S] / Z_S (see Eq. (<ref>)). In turn, these coherences become temperature-dependent <cit.> in the non-canonical equilibrium state of the probe at finite sample-probe interaction energy. In Fig. <ref> we display the SNR for the spin-boson model (single-spin probe) as a function of temperature for different values of the coupling parameter λ. These results are analogous to the ones displayed in Fig. <ref> and we have used the same parameters for the calculation, with the only difference being the coupling operator from Eq. (<ref>). It can be observed that for this type of coupling operator, the temperature sensitivity behaves quite differently at strong coupling than for the case Ŝ = σ̂^x examined in the main text. In particular, the observed higher SNR at low temperatures disappears in this case. The inset in Fig. <ref> displays the coherences being developed in the spin basis of the reduced state of the probe, which vanish in the limit T →∞. In analogy to the results exposed in our main example in Fig. <ref>, at weak-to-intermediate coupling, it is the weakly-coupled Gibbs state SNR (solid black line in Fig. <ref>) that translates to the highest temperature sensitivity. However, as λ increases, higher sensitivity may be achieved from the non-canonical equilibrium states of the probe. Nevertheless, we see that in this case, the SNR is not higher in the low-temperature regime when compared to its weakly-coupled counterpart, irrespective of the temperature-dependent coherences that develop in the reduced states of the probe ρ̂_p(β) at low temperature. We remark that the thermal Gibbs state at weak coupling is the equilibrium state of the probe irrespective of the sample-probe interaction operator, while the equilibrium state at strong coupling heavily depends on the microscopic details via Ŝ <cit.>. This implies that, at strong coupling, the microscopic details of the probe-sample interaction play an important role in the sensitivity of the probes. § EFFECT OF THE SPECTRAL FUNCTION OF THE SAMPLE A free parameter in our simulations is the natural frequency of the reaction coordinate, which we have denoted with Ω and described in Eq. (<ref>). On physical grounds, Ω is the frequency of a collective-effective harmonic mode pertaining to the sample, to which the probe is most strongly coupled <cit.>. Tuning this parameter yields different equilibrium states of the probe in the finite-λ regime, as it is a property of the sample via its spectral function <cit.>. A spectral density of Brownian form J(ω) = 4γΩ^2 λ^2 ω/[(ω^2 - Ω^2)^2 + (2πγΩω)^2], which is peaked around Ω with width γ, leads to an effective spectral density, after the reaction-coordinate mapping, of the Ohmic type J_RC(ω) = γω e^-|ω|/Λ, where Λ is a high-frequency cut-off <cit.>. The dimensionless width parameter γ is kept small, such that the enlarged system, comprising the probe and the reaction coordinate, is weakly coupled to the residual bath, i.e., to the sample after the reaction-coordinate mapping. In Fig. <ref> we display the SNR for the two-body spin probe (N = 2) as a function of temperature for different values of Ω. It can be observed that the effect of reducing Ω is to shift the temperature sensitivity to lower-temperature regimes, up to the point in which the low-temperature sensitivity vanishes for sufficiently low Ω.
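For reference, both spectral functions above are simple to evaluate numerically; the short sketch below is not taken from the paper and uses purely illustrative parameter values (in units of the spin gap). It implements the Brownian density J(ω), peaked at Ω with width γ, and the post-mapping Ohmic density J_RC(ω).

import numpy as np

def j_brownian(omega, lam, big_omega, gamma):
    # Brownian spectral density of the sample, peaked at big_omega with width gamma.
    num = 4.0 * gamma * big_omega**2 * lam**2 * omega
    den = (omega**2 - big_omega**2)**2 + (2.0 * np.pi * gamma * big_omega * omega)**2
    return num / den

def j_rc(omega, gamma, cutoff):
    # Ohmic spectral density seen by the probe plus reaction coordinate after the mapping.
    return gamma * omega * np.exp(-np.abs(omega) / cutoff)

omega = np.linspace(1e-3, 3.0, 400)   # illustrative frequency grid
print(j_brownian(omega, lam=0.5, big_omega=1.0, gamma=0.05).max())
print(j_rc(omega, gamma=0.05, cutoff=50.0).max())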
In our calculations we kept the dimension of the manifold of the reaction coordinate at a very high value, M = 2000, to ensure convergence in the entire temperature regime shown in Fig. <ref>. From these results we can conclude that in employing multi-spin probes for temperature estimation in the strong-coupling regime, two important parameters must be considered hand in hand: the effective probe-sample coupling parameter λ and, for this particular case, the frequency of the collective harmonic mode of the sample to which the probe is most strongly coupled. In a more general sense, this is the result of the equilibrium state at finite coupling depending strongly on the microscopic details of both the sample and its interaction with the probe, unlike thermal Gibbs states at weak coupling which are, in general, independent of these details.
http://arxiv.org/abs/2307.07446v1
20230714160818
An equivariant surgery classification of $C_p$-surfaces
[ "Kelly Pohland" ]
math.GT
[ "math.GT", "math.AT", "57M60" ]
An equivariant surgery classification of C_p-surfaces Kelly Pohland July 2023 =========================================================================== Let p be an odd prime, and let C_p denote the cyclic group of order p. We use equivariant surgery methods to classify all closed, connected 2-manifolds with an action of C_p. We additionally provide a way to construct representatives of each isomorphism class using a series of equivariant surgery operations. The results in this paper serve as an odd prime analogue to a similar classification proved by Dan Dugger. § INTRODUCTION Let p be an odd prime, and let C_p denote the cyclic group of order p. In this paper, we classify all closed and connected 2-manifolds with an action of C_p up to equivariant isomorphism. More specifically, we define ways of constructing classes of C_p-surfaces using equivariant surgery methods and prove that all C_p-surfaces can be constructed in this way. Dugger gave a similar classification of C_2-surfaces in <cit.>. In his paper, Dugger gave a complete list of isomorphism classes of C_2-surfaces and developed a full set of invariants which determine the isomorphism class of a given surface with involution. We use similar methods to show that all nontrivial, closed, connected C_p-surfaces are in one of six families of isomorphism classes of C_p-surfaces. Various papers have treated aspects of the classification result in Theorems <ref> and <ref>, mostly focusing on the orientable case <cit.>. Previous treatments of this classification problem give particular interest to using invariants to quantify the number of isomorphism types of equivariant surfaces. The new idea presented in this classification and in that of <cit.> is the construction of isomorphism classes via equivariant surgeries. By giving a geometric construction of the surfaces, we provide additional information which allows us to use the classification in a new way. One such application is in the computation of RO(G)-graded Bredon cohomology, an important algebraic invariant in equivariant homotopy theory. The decomposition into surgery pieces informs the construction of cofiber sequences which give rise to long exact sequences on cohomology. Hazel used Dugger's classification to compute the cohomology of C_2-surfaces in this Bredon theory <cit.>, and this author performed similar computations in the p=3 case using the classification presented in this paper <cit.>. The idea behind our classification result is to show that all C_p-surfaces can be described in terms of other simpler C_p-surfaces. Some examples of these “building block” surfaces are S^2,1 and M_1^free which can be described as the 2-sphere and torus (respectively) rotating about the axis passing through each of their centers. Other examples include the non-orientable spaces N_2^free and N_1[1] whose C_p-actions are shown in Figure <ref> in the case p=5. The final family of spaces needed for our classification is denoted _n for n≥ 1. We can think of _1 as a 2p-gon with opposite edges identified and a rotation action of e^2π i/p. Then _n consists of n copies of _1 glued together in a particular way. These surfaces are described in greater detail in Section <ref>, but we can also see this gluing demonstrated in Figure <ref> in the case p=3. Before precisely stating the classification result, let us introduce some equivariant surgery operations. Let Y be a non-equivariant surface and X a non-trivial C_p-surface.
We construct a new C_p-surface by removing p disjoint conjugate disks from X and gluing to each boundary component a copy of Y∖ D^2. The result is a new space on which we can naturally define a C_p-action. This is called the equivariant connected sum of X and Y and is denoted X#_p Y. An example of this operation is depicted in Figure <ref> in the case p=3. A precise definition of X#_p Y can be found in Section <ref>. Let R_p denote the space S^2,1 with p disjoint conjugate disks removed. Let X be any non-trivial C_p-surface. After removing p disjoint conjugate disks from X, we can construct a new C_p-surface X+[R_p] by gluing the p boundary components of X to those of R_p via an equivariant map. Figure <ref> depicts an example of this surgery operation in the case p=3. A precise definition of X+[R_p] can be found in Section <ref>. In this paper, we prove that up to isomorphism all C_p-surfaces can be constructed by starting with M_1^free, S^2,1, N_2^free, N_1[1], or _n (for some n) and performing a series of equivariant connected sum and ribbon surgeries. If X is a surface with order p homeomorphism σ_X and Y is a surface with order p homeomorphism σ_Y, we say that X and Y are isomorphic if there exists a homeomorphism f X→ Y such that f∘σ_X=σ_Y∘ f. Let M_g denote the genus g, closed orientable surface. Let X be a connected, closed, orientable surface with an action of C_p. Then X can be constructed via one of the following surgery procedures, up to Aut(C_p) actions on each of the pieces. * M_1^free#_pM_g, g≥ 0 * (S^2,1+k[R_p])#_pM_g, k,g≥ 0 * (_n+k[R_p]) #_pM_g, k,g≥ 0, n≥ 1 Let N_r denote the genus r, closed non-orientable surface. Let X be a connected, closed, non-orientable surface with an action of C_p. Then X can be constructed via one of the following surgery procedures, up to Aut(C_p) actions on each of the pieces. * N_2^free#_pN_r, r≥ 0 * (S^2,1+k[R_p])#_pN_r, r≥ 1 * (N_1[1]+k[R_p])#_pN_r, k,r≥ 0 Unlike the corresponding result of Dugger, this classification does not provide a complete list of invariants distinguishing isomorphism classes. For example, we do not provide invariants with which to distinguish the C_3-surfaces _2 and S^2,1+2[R_3]. We instead prove that these spaces are non-isomorphic and that they represent the only closed and connected genus 4 orientable C_3-surfaces with 6 fixed points up to equivariant isomorphism. §.§ Organization of the Paper Equivariant surgery procedures are outlined in Section <ref>. Section <ref> contains a statement of the main classification theorem for nontrivial C_p-surfaces. Some important equivariant surgery results are proved in Section <ref>. A detailed proof of the main classification theorem from Section <ref> is given in Sections <ref> and <ref>. §.§ Acknowledgements The work in this paper was a portion of the author's thesis project at the University of Oregon. The author would first like to thank her doctoral advisor Dan Dugger for his invaluable guidance and support. The author would also like to thank Christy Hazel and Clover May for countless helpful conversations as well as Robert Lipshitz for many constructive comments. This research was partially supported by NSF grant DMS-2039316. § C_P-EQUIVARIANT SURGERIES OF SURFACES Let p be an odd prime. There are (p-1)/2 isomorphism classes of C_p-actions on ^2 corresponding to rotation about the origin by a pth root of unity. Rotation of the plane by ω_i is isomorphic to rotation by ω_j only when ω_i=ω_j.
However if we consider such rotations up to an action of Aut(C_p), then we are left with only one isomorphism class of nontrivial actions on ^2. In this section we lay the ground work for a classification of closed surfaces with a nontrivial action of C_p up to an action of Aut(C_p). We do this by defining analogues of equivariant surgery methods from <cit.> in the odd prime case. For a C_p-surface X, let F(X) denote the number of fixed points of X. It is useful to note that when the action is non-trivial, F(X) must be finite. We also let β(X) denote the β-genus of X, defined to be dim_/2 H^1_sing(X;/2). §.§ Equivariant Connected Sums Let Y be a non-equivariant surface and X a surface with a nontrivial order p homeomorphism σ X→ X. Define Ỹ:=Y∖ D^2, and let D be a disk in X so that D is disjoint from each of its conjugates σ^i D. Similarly let X̃ denote X with each of the σ^i D removed. Choose an isomorphism f∂Ỹ→∂ D. We define an equivariant connected sum X#_p Y, by [X̃⊔∐_i=0^p-1(Ỹ×{i})]/∼ where (y,i)∼σ^i(f(y)) for y∈∂Ỹ and 0≤ i ≤ p-1. We can see an example of this surgery in Figure <ref>. We will prove in Proposition <ref> that the space X#_p Y is independent of the chosen disk D. Any nontrivial C_p-surface has only a finite number of isolated fixed points since each fixed point must have a neighborhood isomorphic to ^2 with a rotation action. For a C_p-space X with F fixed points and β-genus β_1 and a non-equivariant surface Y with β-genus β_2, X#_p Y has F fixed points and β-genus β_1+pβ_2. §.§ C_p-equivariant Ribbon Surgeries There are (p-1)/2 non-isomorphic C_p-actions on S^2 given by rotation by a primitive pth root of unity about the axis passing through its north and south poles. When the prime p is understood, we let S^2,1_(i) denote this sphere with rotation by e^2π i/p where 1≤ i≤ p-1. We additionally write S^2,1 when only considering such actions up to twisting by Aut(C_p). The sphere S^2,1_(i) is an example of a representation sphere. It can be defined as the one point compactification of a two-dimensional nontrivial C_p representation. These objects are incredibly important in equivariant homotopy theory, so we choose our notation to be consistent with other papers in this field. Let D be a disk in S^2,1_(i) that is disjoint from each of its conjugate disks. We define a C_p-equivariant ribbon as S^2,1_(i)∖(∐_j=0^p-1σ^j D), and we denote this space R_p,(i). We can see R_p,(1) depicted in Figure <ref> in the cases p=3 and p=5. The action of R_p,(i) can be described as rotation about the orange axis. There are two fixed points of this action, given by the points in blue where the axis of rotation intersects the surface. Let X be a surface with a nontrivial order p homeomorphism σ X→ X. Choose a disk D_1 in X that is disjoint from σ^jD_1 for each j. Then remove each of the σ^jD_1 to form the space X̃. As in Definition <ref>, let D be the disk in S^2,1_(i) which was removed (along with its conjugates) to form R_p,(i). Choose an isomorphism f∂ D_1→∂ D and extend this equivariantly to an isomorphism f̃∂X̃→∂ R_p,(i). We then define C_p-ribbon surgery on X to be the space (X̃⊔ R_p,(i))/∼ where x∼f̃(x) for x∈∂X̃. This is a new C_p-surface which we will denote X+[R_p,(i)]. There is an action of Aut(C_p) on S^2,1_(i) (and thus R_p,(i)) given by σ S^2,1_(i)=S^2,1_(σ(i)) for σ∈Aut(C_p). Our goal is to classify all C_p-surfaces using equivariant surgery methods up to this action of Aut(C_p) on each of the surgery pieces. 
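For concreteness, take p=5. Among the rotations e^2π i/5, i=1,…,4, there are two isomorphism classes of actions on the sphere, represented by S^2,1_(1) and S^2,1_(2), since S^2,1_(i)≅ S^2,1_(p-i). The automorphism of C_5 sending a chosen generator to its square identifies these two classes, so up to the action of Aut(C_5) there is a single sphere S^2,1, and likewise a single ribbon R_5; this is the sense in which the undecorated notation used below is to be read.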
Going forward, we will use the notation X+[R_p] to denote a C_p-surface obtained by performing some C_p-ribbon surgery on X. The notation X+[R_p] therefore refers to several distinct isomorphism classes of C_p-surfaces which can be obtained from each other by the action of Aut(C_p) on each of the surgery pieces. We similarly let S^2,1 denote the 2-sphere with a rotation action of C_p, noting that each of these can be obtained from the standard rotation of e^2π i/p by this action of Aut(C_p). In the p=3 case, this action of Aut(C_p) is trivial since S^2,1_(1)≅ S^2,1_(2). Thus the notation X+[R_3] (as well as S^2,1) is well-defined and denotes a single C_3-surface up to equivariant isomorphism. We will prove in Corollary <ref> that the space X+[R_p,(i)] is independent of the chosen disk D_1. For a C_p-surface X with F fixed points and β-genus β, the space X+[R_p,(i)] has F+2 fixed points and β-genus β+2(p-1). Let X+k[R_p,(i)] denote the surface obtained by performing C_p-ribbon surgery k times on X. We will see in Corollary <ref> that +[R_p,(i)]-surgery is independent of the choice of disk D_1. Because of this, C_p-ribbon surgery is associative and commutes with itself, making this notation well-defined. We next define the C_p-surface TR_p,(i) using a gluing diagram. Start with a 2p-gon with a disk removed from its center. Then identify opposite edges of the 2p-gon in the same direction to obtain the space TR_p,(i). Figure <ref> shows this in the case p=3. The action on TR_p,(i) is defined by rotation about its center by an angle corresponding to the pth root of unity e^2π i /p (1≤ i ≤ (p-1)/2). Note that TR_p,(i)≅ TR_p,(j) only when j=i or j=p-i. This surface is orientable with one boundary component. When p=3, TR_3,(1)≅ TR_3,(2), so for simplicity of notation we will denote this space by TR_3. The surface TR_p,(i) has two fixed points. Consider the space TR_p,(i) with its first p edges labeled e_1,… ,e_p as shown in Figure <ref>. Since opposite edges of the 2p-gon are identified, all other edges are named accordingly. Let v_1 be the starting vertex of e_1, and let v_2 be the ending vertex of e_1. We first claim that all other vertices of the 2p-gon representing TR_p,(i) must be identified with either v_1 or v_2. Looking at the edge labeled e_2 towards the top of the polygon, we see that e_2 shares a starting vertex with e_1. Now looking at its opposite edge, it is also the case that e_2 shares an ending vertex with e_1. We can keep going to see that e_3 must share starting and ending vertices with e_2, and in fact all edges e_k must have starting vertex v_1 and ending vertex v_2. Finally observe that since the action of C_p takes e_1 to e_k for some k, the vertices v_1 and v_2 are fixed under the action. Thus, TR_p,(i) has two fixed points. Let X be a non-trivial C_p-space with at least one isolated fixed point x. Choose a neighborhood D_x of x that is fixed by the action of σ. We then let X̃ denote X∖ D_x. The action on the boundary of X̃ will be rotation by e^2π i/p for some i. Fix an isomorphism f∂X̃→∂ TR_p,(i). The C_p-twisted ribbon surgery on X is given by (X̃⊔ TR_p,(i))/∼ where y∼ f(y) for y∈∂X̃. We denote this new space by X+_x[TR_p]. For a C_p-surface X with F fixed points and β-genus β(X), the space X+_x[TR_p] has F+1 fixed points and β-genus β(X)+(p-1). We will see in Corollary <ref> that +[R_p,(i)]-surgery does not depend on the initial disks chosen for the surgery, making the notation X+[R_p,(i)] well defined. Unfortunately, the same is not true of twisted ribbon surgery.
To specify our choice of initial fixed point x, we will use the notation X+_x[TR_p]. We will see in Example <ref> a space X and choices of fixed points x and y where X+_x[TR_p]≇X+_y[TR_p]. Let X be a C_p-space with two distinct fixed points x and y. By Proposition <ref>, there exists a simple path α in X from x to y that does not intersect its conjugate paths. Observe that the union of all conjugates of α is isomorphic to EB_p, where EB_p denotes the unreduced suspension of C_p. In particular, given any C_p-space X with at least two isolated fixed points, we can find a copy of EB_p sitting inside X. We know from Lemma <ref> that a neighborhood of this copy of EB_p must be isomorphic to R_p,(i) or TR_p,(i). Given such a space, we can “undo” the corresponding ribbon surgery to construct a new space X-[R_p,(i)] (respectively X-[TR_p]) which we define below. Figure <ref> shows us how R_p,(i) and TR_p,(i) can be viewed as neighborhoods of EB. Let X be a C_p-surface with isolated fixed points a and b, and suppose the corresponding EB_p containing a and b has a neighborhood homeomorphic to R_p,(i). Then X̃:=X∖ R_p,(i) has p boundary components, and there is an isomorphism f∂X̃→∂(D^2× C_p). Define X-[R_p,(i)] to be (X̃⊔(D^2× C_p))/∼ where a∼ f(a) for a∈∂X̃. As a result of this surgery, the space X-[R_p,(i)] has 2 fewer fixed points, and its β-genus is reduced by 2(p-1) from that of X. Moreover, if X was a connected C_p-surface with at least 3 fixed points, then X-[R_p,(i)] is also connected. This does not have to be the case when F=2 however. For example, there exists EB_p⊆ S^2,1 such that (S^2,1#_p M_1) -[R_p] ≅ M_1× C_p. Let a,b∈ X be fixed points such that a and b live in some copy of TR_p,(i) inside of X. We can similarly define X-_a,b[TR_p] to be the result of surgery which removes this copy of TR_p,(i) from X and glues in D^2,1 along the boundary. As one would expect, the space X-_a,b[TR_p] has one fewer fixed point and β-genus p-1 smaller than X. Although by Corollary <ref> we know +[R_p,(i)] is independent of the disks chosen, -[R_p,(i)] surgery does depend on a choice of EB_p. Two different choices of R_p,(i) in a space can result in different spaces once -[R_p,(i)] is performed. As a result, the notation X-[R_p,(i)] is not well defined. Going forward, we will use the notation X-[R_p] when the choice of R_p,(i) is understood. Figure <ref> shows this using the example S^2,1#_3 M_1. For the choice of EB on the left, -[R_3] surgery results in the space M_1× C_3. For the choice on the right, -[R_3] surgery results in the space M_1^free Let X be a non-trivial C_p-surface and Y a non-equivariant surface. Then (X+[R_p,(i)])#_pY≅(X#_pY)+[R_p,(i)]. If X has a fixed point x, it is also true that (X+_x[TR_p])#_pY≅(X#_pY)+_x[TR_p]. Additionally, if X is a space for which -[R_p] or -[TR_p]-surgeries are defined, then (X-[R_p])#_pY≅(X#_pY)-[R_p] (respectively (X-[TR_p])#_pY≅(X#_pY)-[TR_p]). In other words,the equivariant connected sum surgery operation commutes with ± [R_p,(i)] and ± [TR_p] on all C_p-surfaces X for which these surgeries are defined. In the case of -[R_p] or ± [TR_p] surgeries, this is clear because these surgery operations take place in the neighborhood of fixed points while we can choose to perform any equivariant connected sum operation away from these fixed points. The proof that equivariant connected sum surgery commutes with +[R_p,(i)]-surgery is similar to the argument presented in the proof of Corollary <ref> and is left to the reader. 
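The fixed point and β-genus counts recorded in the remarks above can be checked with a short Euler characteristic computation. Both R_p,(i) (a sphere with p open disks removed) and TR_p,(i) (a 2p-gon with opposite edges identified and one open disk removed) have Euler characteristic 2-p; for TR_p,(i) this uses the two vertex classes identified in the proof above, so that the closed 2p-gon has χ=2-p+1=3-p before the disk is removed. Since +[R_p,(i)]-surgery removes p conjugate disks from X and glues along p circles, χ(X+[R_p,(i)])=(χ(X)-p)+(2-p)=χ(X)-2(p-1), so β increases by 2(p-1) while F increases by 2. Since +_x[TR_p]-surgery removes a single invariant disk and glues along one circle, χ(X+_x[TR_p])=(χ(X)-1)+(2-p)=χ(X)-(p-1), so β increases by p-1 while F increases by 1. For example, when p=3 the surface S^2,1+[R_3] has χ=-2, that is β=4 and F=4, an orientable genus 2 surface with four fixed points.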
§.§ Möbius Band Surgeries Represent the Möbius band as the usual quotient of the unit square where (0,y)∼ (1,1-y). We define (p-1)/2 actions of C_p on the möbius band as follows. For a generator σ of C_p, let σ(x,y)=(x+i/p,1-y) for 1≤ i ≤ (p-1)/2. Denote this space MB_p,(i). Figure <ref> gives a visual representation of this action in the case p=3. Note that the action on the boundary of MB_p,(i) is the rotation action of S^1 by e^-2π i/p. When p=3, MB_3,(1)≅ MB_3,(2), so for simplicity of notation we will denote this space by MB_3. Let X be a non-trivial C_p-surface with fixed point x. Choose a neighborhood D_x of x which is fixed under the action of σ∈ C_p. The C_p-space X̃:=X∖ D_x has a distinguished boundary component isomorphic to S^1 with rotation by some angle e^2π i/p. Fix an equivariant isomorphism f∂X̃→∂ MB_p,(i). We can then define a new C_p-space (X̃⊔ MB_p,(i))/∼ where x∼ f(x) for x∈∂X̃. Denote this new space by X+_x[FMB_p]. This process is called fixed point to möbius band surgery. Given a C_p space X with F fixed points and β-genus β, the space X+[FMB_p] has F-1 fixed points and genus β+1. We can similarly define möbius band to fixed point surgery on a C_p space X with MB_p,(i)⊆ X. This procedure is the reverse process of +_x[FMB_p] surgery in the sense that it removes MB_p from X and glues in a copy of D^2,1 along the boundary. The resulting space is denoted X+[MB_pF]. This notation will only be used when the choice of möbius band is understood. § EXAMPLES IN THE P=3 CASE In this section we will highlight some of the surfaces we can now build using equivariant surgery. Although each of the following examples have analogues for higher p, we will focus mainly on the p=3 case. [Free Torus] There is a free C_3-action on the torus M_1 given by rotation of 120^∘ about its center. Denote this C_3-space by M_1^free. From this, we can perform an equivariant connected sum operation with the g-holed torus M_g to construct the space M_3g+1^free:=M_1^free#_3 M_g. The result is a free C_3-action on the (3g+1)-holed torus (ie. the orientable surface with beta genus β=6g+2). We will see in the next section that up to equivariant isomorphism there is only one free action of C_3 on M_3g+1. The space M_3g+1^free can be seen in Figure <ref> in the case g=2. [_g[F]] The representation sphere S^2,1 is defined as the 2-sphere with a rotation action of 120^∘ about the axis passing through the north and south poles of the sphere. Since ribbon surgery and connected sum surgery commute with each other, we can consider the space _2k+3g[2k+2]:=(S^2,1+k[R_3])#_3M_g which is constructed by performing ribbon surgery k times on S^2,1 and then performing connected sum surgery with the orientable surface M_g. The space _2k+3g[2k+2] has 2k+2 fixed points and is non-equivariantly isomorphic to M_2k+3g. [Non-free Torus] Let _1 (Figure <ref>) denote the space S^2,1+_S[TR_3] where S denotes the south pole of S^2,1. Then _1 has β-genus β=2 and 3 fixed points. We can additionally observe that the space S^2,1+_N[TR_3] (where N is the north pole this time) is isomorphic to _1. For p>3, we can define similar spaces denoted _1^p_(i) (where the action is given by the usual rotation by a pth root of unity about the center). The underlying space is a genus (p-1)/2 orientable surface represented by a 2p-gon with appropriate identifications. The notation was chosen to remark on the fact that this surface is most easily seen through this polygonal representation of M_(p-1)/2. 
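As a quick check of this genus claim, the closed surface underlying _1 can be read off from the 2p-gon directly: the identifications leave two vertex classes, p edge classes, and one face, so χ=2-p+1=3-p, and the underlying orientable surface has genus (p-1)/2. For p=3 this gives χ=0, the torus, in agreement with the stated β-genus β=2; the three fixed points are the two vertex classes together with the center of rotation.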
[_n] Consider the surface _1+[R_3] with β-genus β=6 and F=5 fixed points. Label the fixed points as shown in Figure <ref>. As a result of Lemma <ref>, we know that +_c_i[TR_3]-surgery results in a space isomorphic to S^2,1+2[R_3]. One naturally asks the question: Does twisted ribbon surgery yield the same space when centered around the fixed points a or b? As it turns out, we get the same result after performing +_a[TR_3]-surgery, but twisted ribbon surgery centered on the point b yields a different C_3-surface. This new surface (which we will call _2) is depicted in Figure <ref>. Proposition <ref> contains the proof of the fact that _2 and S^2,1+2[R_3] are non-isomorphic surfaces. Now that we have a new C_3-surface _2, we can construct surfaces of the form _2+k[R_3]#_3 M_g for some k,g≥ 0. This brings us back to our previous question. What if we performed twisted ribbon surgery on _2+k[R_3]#_3M_g? Does the result depend on the chosen fixed point? Ultimately, the answer depends on k. When k=0, twisted ribbon surgery is independent of the chosen fixed point. This is not true when k>0 however. In this case there are two isomorphism classes of spaces which can be obtained by performing twisted ribbon surgery on _2+k[R_3]#_3M_g. We prove these facts in Section <ref>. For now, let us examine this through the k=1, g=0 case. The space _2+[R_3] is shown on the left of Figure <ref>. Performing twisted ribbon surgery centered on any point other than b results in the space _1+3[R_3]. However +_b[TR_3]-surgery produces a different space which we will call _3 (the space on the right of Figure <ref>). In general, we can inductively define a space _n by starting with the space _n-1+[R_3] and performing twisted ribbon surgery centered on a specific fixed point. Just as _3 is represented in Figure <ref> as a tower of three hexagons, the space _n for n≥ 1 can be thought of as a tower of n hexagons connected in a similar way. An analogous collection of C_p-spaces (for p>3) can be defined and will be denoted _n^p when the prime p is not understood. The C_p-space _n^p has 3n fixed points and β-genus β=(3n-2)(p-1). We can again most easily visualize this space as a tower of n polygons. [Free Klein Bottle] The representation sphere S^2,1 has two fixed points, so we can consider the space S^2,1+2[FMB_3]:=(S^2,1+_N[FMB_3])+_S [FMB_3] where we perform +[FMB_3] surgery on both the north and south poles. The resulting space must be free with β-genus β=2. We denote this free Klein Bottle by N_2^free. Other free non-orientable surfaces can be constructed by performing equivariant connected sum surgery on N_2^free. We will see in the next section that up to isomorphism there is only one free action on N_2+3r for each r≥ 0, namely N_2+3r^free:=N_2^free#_3N_r. § CLASSIFYING C_P ACTIONS In this section we state the main classification theorem for nontrivial, closed surfaces with an action of C_p for any odd prime p. All surfaces are defined up to an action of Aut(C_p) on each of the surgery pieces. The proof of the classification of free C_p-surfaces can be found in Section <ref>, while the proof of the non-free case is in Section <ref>. Let X be a surface with beta genus β. If σ X→ X is a C_p-action with F fixed points, then F≡ 2-β (mod p). The space X∖ X^C_p is a free C_p-space with Euler characteristic 2-β-F. Since the action is free, X∖ X^C_p→(X∖ X^C_p)/C_p is a p-fold covering space. In particular, the Euler characteristic of X∖ X^C_p must be a multiple of p. Let X be a connected, closed, orientable surface with an action of C_p.
Then X can be constructed via one of the following surgery procedures, up to Aut(C_p) actions on each of the pieces: * M_1+pg^free:= M_1^free#_pM_g, g≥ 0 * _(p-1)k+pg[2k+2]:=(S^2,1+k[R_p])#_pM_g, k,g≥ 0 * _n,(3n-2)(p-1)/2+(p-1)k+pg[3n+2k]:=(_n+k[R_p]) #_pM_g, k,g≥ 0, n≥ 1 Let X be a connected, closed, non-orientable surface with an action of C_p. Then X can be constructed via one of the following surgery procedures, up to Aut(C_p) actions on each of the pieces: * N_2+pr^free≅ N_2^free#_pN_r, r≥ 0 * N_2(p-1)k+pr[2k+2]≅(S^2,1+k[R_p])#_pN_r, r≥ 1 * N_1+2(p-1)k+pr[1+2k]≅(N_1[1]+k[R_p])#_pN_r, k,r≥ 0 It is important to note that for orientable surfaces, β and F do not provide enough information to distinguish between these families of isomorphism classes. For example, when p=3, _2,4[6] and _4[6] are non-isomorphic orientable surfaces with β=8 and F=6. See Proposition <ref> for a proof of this fact. In the case of non-orientable surfaces, F and β do distinguish between these families. In other words, given a non-orientable surface X with specific values for F and β, one can explicitly determine how X was constructed via equivariant surgeries. Some examples of spaces in each of these families are shown in the case p=3 in Figures <ref>, <ref>, <ref>. § SURGERY INVARIANCE RESULTS Let p be an odd prime. This section contains proofs for some of the basic surgery invariance results outlined in Section <ref>. Let X be a closed, connected 2-manifold with a map σ X→ X such that σ^p=1. Let a,b∈ X∖ X^C_p such that a≠σ^k b for any k. Then there exists a simple path α in X from a to σ^k b for some k such that α does not intersect any of its conjugate paths. In other words, α(s)≠σ^kα(t) for all k, s, and t (k≠ 0 if s=t). Choose an embedded path α in (X∖ X^C_p)/C_p from the image of a to the image of b. The preimage of α in X∖ X^C_p consists of p disjoint conjugate paths from σ^ia to σ^jb. In particular, there is a component of this preimage which is a path from a to σ^kb for some k with the desired property. Let X be a path-connected, closed 2-manifold with a C_p action. Let Y_1 be obtained from X by removing disjoint conjugate disks embedded in X∖ X^C_p and sewing in a C_p-ribbon. Let Y_2 be similarly obtained from X, but using a different set of conjugate embedded disks. Then Y_1≅ Y_2. Let D_i, σ D_i,… ,σ^p-1D_i be the names of the disjoint disks removed to make Y_i from X. Let a_i denote the center of D_i. Then by Proposition <ref> there is a path α from a_1 to σ^k a_2 for some k that does not intersect its conjugate paths. From here, we can obtain an equivariant homeomorphism Y_1→ Y_2 by following a nearly identical procedure to the proof of Corollary A.3 in <cit.>. Let X be a path-connected, closed 2-manifold with a C_p action, and let M be a non-equivariant connected surface. The equivariant isomorphism type of X#_p M is independent of the choice of disks used in the construction. The proof of this proposition is nearly identical to that of Corollary <ref>. Let X and Y be equivariant 2-manifolds that both contain a C_p-ribbon. If X-[R_p]≅ Y-[R_p], then X≅ Y. An analogous statement and proof of this fact for the p=2 case can be found in Proposition 3.11 of <cit.>. § FREE CLASSIFICATION PROOF In order to prove Theorems <ref> and <ref>, we will induct on the number of fixed points of a given C_p-surface.
In this section we prove the base case for this argument, that every closed surface with a free C_p action is either isomorphic to M_1+pg^free for some g, to N_2+pr^free for some r≥ 1, or to N_2 with one of its (p-1)/2 free C_p-actions. Let X be a path-connected non-equivariant space. Let 𝒮_p(X) denote the set of isomorphism classes of free C_p-spaces Y that are path-connected and have the property that Y/C_p ≅ X. There is a bijection between 𝒮_p(X) and the set of nonzero orbits in H^1_sing(X;/p)/Aut(X). An analogous proof of this fact for the p=2 case is provided in <cit.>, but we will summarize the main idea here. Given an element Y of 𝒮(X), we get a principal ℤ/p bundle Y→ X by choosing an isomorphism Y/C_p→ X. This then corresponds to an element of H^1(X;/p) via its characteristic class. To make this association well-defined, we must quotient out by the automorphisms of X. With this proposition, our goal is now to understand the action of Aut(X) on H_sing^1(X;/p). This is given by a group homomorphism Aut(X)→Aut(H_sing^1(X;/p)). Recall that the full mapping class group ℳ(X) of a space X is defined to be ℳ(X)=Aut(X)/ℐ(X) where ℐ(X) is the subgroup of automorphisms that are isotopic to the identity. Since ℐ(X) acts trivially on H_sing^1(X;/p), our action Aut(X)→Aut(H_sing^1(X;/p)) descends to a map ℳ(X)→Aut(H_sing^1(X;/p)). §.§ Non-orientable Case Since homology and cohomology are dual in /p coefficients, it is sufficient to consider the action of ℳ(X) on H_1(X;/p). The space X can be represented as a sphere with r crosscaps α_1,… ,α_r as in Figure <ref>, which we can choose as generators for H_1(X;/p). We begin by discussing generators of ℳ(X) and how they act on the α_i. Let 𝒞 denote the curve shown in Figure <ref>, which passes through the ith and jth crosscaps of X. Note that 𝒞 is orientation preserving and thus has a neighborhood isomorphic to S^1× I. Let T_i,j denote the Dehn twist about 𝒞 as defined in <cit.>. The image of α_i under this map is 2α_i+α_j∈ H_1(X;/p), and the image of α_j is -α_i. The Dehn twist T_i,j fixes all other generators. See <cit.> for full details of this computation. For r≥ 2 we let Y_i,j denote the “crosscap slide” map passing the ith crosscap through the jth. Note that this map is often referred to as a Y-homeomorphsim and is described in <cit.> in greater detail. The image of α_i under this map is -α_i∈ H_1(X;/p), and the image of α_j is 2α_i+α_j. As with the Dehn twist, all other homology generators are fixed by Y_i,j. This computation was carried out in <cit.> and again in <cit.> using more modern language. See also Appendix B of <cit.> for another treatment. The mapping class group of ℝP^2 is trivial, but it turns out that for non-orientable surfaces of genus at least 2, ℳ(X) is generated by the T_i,j and Y_i,j for all i≠ j. This result is due to Chillingworth <cit.>; see also <cit.> for a discussion using language similar to what we use here. Let X be a closed, connected, non-orientable surface of genus r≥ 3. There is only one nonzero orbit in H^1(X;/p)/Aut(X). Let us start by considering the case when r=3. We first claim that T_1,2^ℓ (α_1)=(ℓ+1)α_1 + ℓα_2, which can be quickly verified using induction. The ℓ=0 case is immediate, and T_1,2^ℓ+1((ℓ+1)α_1+ℓα_2) =(ℓ+1)(2α_1+α_2)-ℓα_1 =(2ℓ +2)α_1+(ℓ+1)α_2-ℓα_1 =(ℓ+2)α_1+(ℓ+1)α_2. Let the tuple (c_1,c_2) represent the element c_1α_1+c_2α_2∈ H_1(N_3;/p). 
Now for each 1≤ k≤ p-1, let S_k be the set S_k ={(k,0), (k· 2,k), (k· 3,k· 2), … ,(k· (p-1),k· (p-2)),(0,k·(p-1))} ={T_1,2^ℓ(k,0)|ℓ≥ 0} and let S̃_k be the singleton set containing (k,k). Observe that (c_1,c_2)∈ S_k if and only if c_1-c_2=k. Thus every nonzero element of H_1(X;/p) is in at least one of the S_k or S̃_k for some k. One can also check that the map T_1,2 fixes all elements of the form (k,k). Next we'll consider the action of Y_1,3 on the elements of S_k and S̃_k. Since Y_1,3(α_1)=-α_1=(p-1)α_1, the tuple (1,0) maps to (p-1,0). So these elements are in the same orbit, and it must be that S_1∪ S_p-1 is contained in a single orbit. Similarly, we have Y_1,3(2,1)=(p-2,1)∈ S_p-3. This implies the elements of S_1 and S_p-3 are in the same orbit. Therefore, S_1∪ S_p-1∪ S_p-3 is contained in a single orbit. Continuing in this way, we can see that in general Y_1,3(s,s-1)=(p-s,s-1)∈ S_p-(2s-1). for all 1≤ s≤ p-1. As s ranges from 1 to p-1, S_p-(2s-1) ranges over all the S_k. This tells us that ⋃_k=1^p-1 S_k is contained in a single orbit. Finally, we can check that (k,k) must also be in this orbit for each k. We have Y_1,3(k,k) = (p-k,k)∈ S_p-2k. So S̃_k∪ S_p-2k is contained in the same orbit for each k. Since every nonzero element of H_1(X;/p) is in S_k or S̃_k for some k, there must be a single nonzero orbit in H_1(X;/p)/Aut(X). Let us now turn to the more general r>3 case. For ease of notation, we will denote elements of H_1(X;/p) by an (r-1)-tuple. We will show that every nonzero element is in the same orbit as (1,0,… ,0) under the action of Dehn twists and crosscap slides. Let (c_1,c_2,… ,c_r-1)∈ H_1(X;/p) be nonzero, and let c_i be the rightmost nonzero coordinate of the tuple. First suppose i=1. We know from the r=3 case that there exist compositions of T_1,2 and Y_1,3 which take (c_1,0) to (1,0). Since the maps T_j,k and Y_j,k fix all coordinates other than j and k of any given tuple, we can use T_1,2 and Y_1,r to take (c_1,0,… ,0) to (1,0,… ,0) in the r>3 case. For i>1, our tuple is of the form (c_1,c_2,… ,c_i-1,c_i,0,… ,0). We again know from the r=3 case that there is a composition of T_1,2 and Y_1,3 which takes (c_i-1,c_i) to (1,0). We can use the same compositions (replacing Y_1,3 with Y_1,r) in the r>3 case to take (c_1,…, c_i,0,… ,0) to (c_1,…, c_i-2,1,0,… ,0). Now we have a new nonzero tuple in the same orbit as the original tuple whose rightmost nonzero coordinate is in the (i-1)st position. We can repeat the above process until we get that the tuple (c_1,…, c_i,0,…,0) is in the same orbit as (1,0,…,0). Since every nonzero element is in the same orbit as (1,0,…,0), it must be that there is a single nonzero orbit in H_1(X;/p)/Aut(X). Now let us go back and treat the case where X is non-orientable of genus 2. There are (p-1)/2 nonzero orbits in H^1(N_2;/p)/Aut(N_2). As in the r≥ 3 case, we can choose to represent N_2 as a sphere with 2 crosscaps α_1 and α_2. It is still sufficient in this case to check the action of the mapping class group on H_1(N_2;/p)≅⟨α_1⟩≅/p using Dehn twists and crosscap slide maps. We again have α_1 as a homology generator with α_2=-α_1. It can be easily verified that the Dehn twist about the curve passing through the two crosscaps acts trivially on α_1 and α_2 on homology. We also know that Y_1,2(α_1)=-α_1 and Y_2,1(α_1)=2α_2+α_1=-α_1. This gives us (p-1)/2 nonzero orbits, each containing kα_1 and -kα_1 for each 1≤ k≤ (p-1)/2. Recall that any closed, connected non-orientable surface Y with free C_p-action must have genus 2+pr for some r. 
So Y/C_p is a closed, connected non-orientable surface of genus 2+r. Propositions <ref> and <ref> then guarantee that Y must be isomorphic to N_2+pr^free when r≥ 1 or one of the (p-1)/2 non-isomorphic Klein bottle actions. §.§ Orientable Case When X is an orientable surface, Aut(X) preserves the symplectic form given by the cup product. So the map Aut(X)→Aut(H^1(X)) factors through the symplectic group Sp(2g,/p). We again reference <cit.> for similar details in the p=2 case. Let X be a closed, connected, orientable surface of genus g≥ 1. There is only one nonzero orbit in H^1(X;/p)/ℳ(X). We first show there is one nonzero orbit in the case g=1. One can easily check that the matrices A and B given by A =[ 1 0; 1 1 ] B =[ 1 1; 0 1 ] are in Sp(2,/p). For each nonzero k∈/p, the elements of the set S_k={[ k; 0 ], [ k; k ], [ k; 2k ], … , [ k; (p-1)k ]} are in the same orbit since A[ k; nk ]=[ k; (n+1)k ]. Similarly, for each k the elements of the set T_k={[ 0; k ], [ k; k ], [ 2k; k ], … , [ (p-1)k; k ]} are in the same orbit since B[ nk; k ]=[ (n+1)k; k ]. Thus we can see that the orbit containing [ k; k ] must also contain all elements of S_k and T_k. In particular, S_k∪ T_k is contained in a single orbit for each k. For each nonzero k∈/p, we can find its multiplicative inverse k^-1. Then [ k; k^-1 k ]=[ 1· k; 1 ] is in both S_k and T_1. So for each k, the elements of S_k (and thus T_k) are in the same orbit as T_1. Finally, observe that every nonzero element of (/p)^2 is in S_k or T_k for some k. Thus, all nonzero elements are in the same orbit under the action of Sp(2,/p). Now suppose g≥ 2. Choose a symplectic basis {e_1,f_1,… ,e_g,f_g} so that ⟨ e_i,f_i⟩ =1, ⟨ f_i,e_i⟩ =-1, and all other pairings are 0. Denote v∈(/p)^2g by v=[B_1,… ,B_g] where each B_i∈(/p)^2 and v=(B_1)_1 e_1+(B_1)_2f_1 + ⋯ + (B_g)_1 e_g+(B_g)_2f_g. Consider the evident homomorphism Sp(2,/p)×⋯×Sp(2,/p) →Sp(2g,/p). This allows us to represent orbits by vectors [B_1,… ,B_g] with B_i∈{[0,0],[1,0]} by the g=1 case. Now consider the 4× 4 symplectic matrix A=[ 0 I_2; -I_2 0 ] where I_2 is the identity matrix. Since A is symplectic, so is A^' =I_2k⊕ A ⊕ I_2g-2k-4 for any 0≤ k ≤ g-2. Multiplying a vector v=[B_1,… ,B_g] by A^' allows us to permute its (k+1)st and (k+2)nd blocks with the price of a sign. We can then multiply by the appropriate element of Sp(2,/p)×⋯×Sp(2,/p) to reduce all coefficients to 1 or 0. Thus, there are at most g+1 orbits of the action of Sp(2,/p) on (/p)^2g. These orbits can be represented by the vectors [O,O,… , O] [T,O,… ,O] [T,T,O,… ,O] ⋯ [T,T,… ,T] where O=[0,0] and T=[1,0]. Let B be the symplectic matrix B=[ 1 0 0 0; 0 1 0 -1; 1 0 1 0; 0 0 0 1 ] and observe that when g≥ 2, B⊕ I_2g-4 sends [T,O,… ,O] to [T,T,O,… , O]. In particular, these two representatives are actually in the same orbit. Moreover, for 0≤ k ≤ g-2, I_2k⊕ B⊕ I_2g-2k-4 takes [T,T,… ,T,O,… ,O] (with T in the first k+1 entries) to the vector with T in the first k+2 entries. Thus, all nonzero vectors in (/p)^2g are in the same orbit under the action of the symplectic group. Now let Y be an orientable surface with a free C_p-action. We can see from Lemma <ref> that the genus of Y must be 1+pg for some g. This implies Y/C_p is a closed, connected orientable surface of genus 1+g. Propositions <ref> and <ref> then imply that there is only one isomorphism class of C_p-spaces whose quotient by C_p is M_1+g. So Y must be isomorphic to M_1+pg^free. § NON-FREE CLASSIFICATION PROOF This section contains proofs for Theorems <ref> and <ref>. 
In each case, we begin by establishing several lemmas describing relationships between surfaces constructed using differing equivariant surgery methods. The classification theorems are then proven using induction on the number of fixed points. §.§ Proof of Classification for Orientable Surfaces Let us start with the orientable case. Let X be a closed, connected C_p-surface with distinct fixed points x and y. Then for some i there exists EB_p,(i)⊂ X with x,y∈ EB_p,(i). Moreover, a neighborhood of EB_p,(i) in X must be isomorphic to R_p,(i) or TR_p,(i). This reduces to a question of how we can glue together the surfaces in Figure <ref> (showing the p=3 case) along the red lines using equivariant maps. Any such map is completely determined by how we attach a single edge, and up to isomorphism there are only two choices. One of these produces R_p,(i) and the other TR_p,(i). There is an equivariant automorphism f on TR_p,(i) with distinct fixed points x and y so that f(x)=y and f(y)=x and f|_∂ TR_p,(i)= id. Recall the polygon representation of TR_p,(i) as shown in Figure <ref>. The action of C_p on TR_p,(i) corresponds to a rotation action by e^2π i/p on the polygon. Let A represent the annulus of width ϵ>0 inside TR_p,(i) so that ∂ TR_p,(i) is a boundary component of A. Define f so that f|_A is the Dehn twist with f|_∂ TR_p,(i)= id and f restricted to the other boundary component of A is given by 180^∘ rotation. Then let f|_TR_p,(i)∖ A act as rotation by 180^∘. Notice that f respects the C_p-action of TR_p,(i) and swaps x and y as desired. If x,y∈_1 are distinct fixed points, then _1+_x[TR_p]≅_1+_y[TR_p]. Moreover, _1+_x[TR_p]≅_p-1[4]. Given any two distinct fixed points x,y∈_1, there is a copy of TR_p containing them. By Lemma <ref>, there is an automorphism φ̃ of TR_p⊂_1 swapping x and y. This can be extended to an automorphism φ of _1 by defining φ to be φ̃ on TR_p and the identity everywhere else. Thus we can define an isomorphism _1+_x[TR_p]→_1+_y[TR_p] given by φ everywhere outside of the added copy of TR_p. Observe that _1+_x[TR_p] can be obtained by taking two copies of TR_p and identifying their boundaries. Figure <ref> shows how this gives us _p-1[4] in the p=3 case. Choose one copy of TR_p to be a neighborhood of the red EB_p. Its complement in _p-1[4] is another copy of TR_p containing the purple EB_p. If x,y∈_n (n≥ 2) are distinct fixed points, then _n+_x[TR_p]≅_n+_y[TR_p]. In other words, twisted ribbon surgery on _n is independent of the fixed point chosen. The proof is almost identical to that of Lemma <ref>. The idea is that any two fixed points in _n are contained in a copy of TR_p. More specifically, this argument shows that _n+_x[TR_p]≅(_n-1+(k+2)[R_p])#_pM_g. If x and y are distinct fixed points in _(p-1)k+pg[2k+2] for some k,g≥ 0, then _(p-1)k+pg[2k+2]+_x[TR_p]≅_(p-1)k+pg[2k+2]+_y[TR_p]. In other words, twisted ribbon surgery on _(p-1)k+pg[2k+2] is independent of the chosen fixed point. We can choose to represent _(p-1)k+pg[2k+2] in the following way: * Start with S^2,1. * Choose k+1 disks D_1,… ,D_k+1 centered at the equator of S^2,1 so that σ^s D_i∩σ^s^'D_j=∅ for all i,j,s,s^'. * Perform #_pM_g-surgery using D_k+1 and its conjugates. * Remove D_1,… ,D_k and their conjugates to perform +[R_p]-surgery k times. Let R_p_i denote the copy of R_p glued to the boundary of D_i∪σ D_i∪⋯∪σ^p-1 D_i. Suppose each copy of R_p is glued onto S^2,1 as shown in Figure <ref>. We call a the “north pole” of R_p and b the “south pole”.
Figures <ref> and <ref> depict a path α (in green) from the north pole of R_p_i for some i to the north pole of S^2,1 or R_p_j for some j. This figure only shows the path in the case where k=2 and g=0, but in all other cases a similar path can be chosen. Observe that a neighborhood of α∪σα∪⋯∪σ^p-1α is isomorphic to TR_p. This can be verified by checking that this neighborhood has only a single boundary component. In this case, we know there exists an automorphism of _(p-1)k+pg[2k+2] swapping the two north poles. Similarly, if given two south poles we can find a copy of TR_p containing them. Thus if x and y are both north poles (respectively south poles), then _(p-1)k+pg[2k+2]+_x[TR_p]≅_(p-1)k+pg[2k+2]+_y[TR_p]. It remains to show that if x is a north pole and y is a south pole, then twisted ribbon surgery on _(p-1)k+pg[2k+2] at the points x and y results in isomorphic spaces. We will show this by considering the case x=a and y=b^' as depicted in Figures <ref> and <ref>. The argument for cases when k>1 or g>0 is similar. If we can show the isomorphism in this case, then for any north pole x^' and any south pole y^', we have _(p-1)k+pg[2k+2]+_x^'[TR_p] ≅_(p-1)k+pg[2k+2]+_x[TR_p] ≅_(p-1)k+pg[2k+2]+_y[TR_p] ≅_(p-1)k+pg[2k+2]+_y^'[TR_p]. Figure <ref> depicts the result of +_a[TR_3]-surgery on M_2[4], and Figure <ref> shows M_2[4]+_b^'[TR_3]. We can construct an isomorphism between these spaces as reflection through the plane of the hexagon. So far we have proven that X+_x[TR_p] is independent of x when X is of the form _n#_p M_g or S^2,1+[R_p]#_p M_g. We will now spend some time understanding when twisted ribbon surgery fails to be independent of its chosen fixed point. There does not exist an equivariant isomorphism between the C_p-spaces _2(p-1)[6] and _2 (even up to the action of Aut(C_p)). Let X be a nontrivial, orientable C_p-space, and let X^C_p denote the fixed set of X. We start by defining a map X^C_p→ C_p. Fix an orientation for X, and consider the induced orientation on X/C_p. For each fixed point x∈ X^C_p, let x̅ represent the image of x in X/C_p. Choose a small loop going around x̅ in the direction of the chosen orientation. We can then lift this loop to a path in X going from a point y to gy for some g∈ C_p. Note that the element g is independent of the choice of y. In this way, we can define the map X^C_p→ C_p given by x↦ g. Theorem 1.1 of <cit.> states that this map determines the C_p-space X up to isomorphism. Let us now turn our attention to _2 and _2(p-1)[6]. We will show by direct computation that the maps _2^C_p→ C_p and _2(p-1)[6]^C_p→ C_p as defined above must be distinct. We focus our attention on the p=3 case since the argument can be extended to all odd primes. Let g be the generator of C_3 corresponding to counter-clockwise rotation of _2 by 120^∘ about the axis passing through the center of the hexagons. Figure <ref> demonstrates that for any fixed point x∈_2, the image of x under the above map is g. To see this, start by labeling the six fixed points of _2 as x_1,… ,x_6. Next choose an orientation for _2 and consider the induced orientation on _2/C_3≃ S^2. Figure <ref> depicts _2 (left) and S^2 (right) with the chosen orientation in gray. We can then choose a loop in the direction of the orientation about the image of each x_i in S^2. Each of these loops can be lifted to some path in _2. Let x̃_i (1≤ i≤ 6) denote the starting point of the path lifted from the ith loop. Figure <ref> demonstrates that for each i, we get a path from x̃_i to gx̃_i.
For example, the green loop on the right of Figure <ref> goes about the fixed point x_1. We can lift it to the green path in _2. This path starts at the point labeled x̃_1 and ends at the image of x̃_1 under the action of g. So our map in this case sends x_1 to g. Since _2^C_3 just consists of the fixed points x_1,x_2,… ,x_6, we can describe the above map as the tuple (g,g,g,g,g,g). Let us now choose to represent the space _4[6] as depicted on the left of Figure <ref>. Let g represent counter-clockwise rotation of _4[6] by 120^∘ about the axis passing through the center of the hexagons, and label the six fixed points as x_1,x_2,… ,x_6. We can then fix an orientation for _4[6] and choose oriented paths about the image of x_i in _4[6]/C_3≃ S^2 for each i. As before, we lift each of these loops to a path starting at the point x̃_i, and we look at the endpoint of each lifted path. Figure <ref> demonstrates that these endpoints are gx̃_1, gx̃_2, gx̃_3, g^2x̃_4, g^2x̃_5, and g^2x̃_6. Another way to represent this map _4[6]^C_3→ C_3 is with the tuple (g,g,g,g^2,g^2,g^2). Even up to a relabeling of the fixed points and an action of Aut(C_3), the maps described by (g,g,g,g,g,g) and (g,g,g,g^2,g^2,g^2) must be distinct. In other words, it cannot be the case that _4[6] and _2 are isomorphic. More generally, the same argument shows that _2+k[R_p]#_p M_g is not isomorphic to _4[6]+k[R_p]#_p M_g for any k,g. The same methods can also be used to show _n_1+k_1[R_p]#_p M_g_1, _n_2+k_2[R_p]#_p M_g_2, and S^2,1+k_3[R_p]#_p M_g_3 are always in distinct isomorphism classes (unless of course n_1=n_2, k_1=k_2, and g_1=g_2). When k≥ 1, there are two isomorphism classes of C_p-spaces of the form _1,(p-1)/2+(p-1)k+pg[3+2k]+_x[TR_p] which depend on the choice of fixed point x. In particular, given a fixed point x, _1,(p-1)/2+(p-1)k+pg[3+2k]+_x[TR_p] is isomorphic to one of the following: * _(p-1)(k+1)+pg[2+2(k+1)] * (_2+(k-1)[R_p])#_p M_g We can represent _1,(p-1)/2+(p-1)k+pg[3+2k] by choosing k+1 disks D_1,… D_k+1 on _1 so that σ^s D_i∩σ^s^'D_j=∅ for all i,j,s,s^'. Remove each σ^jD_i. Then attach a copy of R_p (denoted R_p_i) to ∂ D_i∪∂(σ D_i)∪⋯∪∂(σ^p-1 D_i) for each i=1,… ,k. Then attach a copy of C_p×(M_g∖ D^2) to ∂ D_k+1∪∂(σ D_k+1)∪⋯∪∂(σ^p-1 D_k+1). For simplicity of notation, we will let X denote the space _1,(p-1)/2+(p-1)k+pg[3+2k] for the remainder of the proof. A similar argument as in the previous case shows that if x and y are the north poles (respectively south poles) of R_p_i and R_p_j for some i,j, then we can find a copy of TR_p containing x and y. This implies that X+_x[TR_p]≅ X+_y[TR_p] for all such x and y. Let a,b,c be the fixed points originating from the copy of _1 as depicted in Figure <ref>. This figure depicts a copy of EB_p in X containing the north pole of (R_p)_1 and c with a neighborhood isomorphic to TR_p. Figure <ref> depicts the case k=1,g=0, but one could construct a similar copy of EB_p in all other cases. Recall additionally from Lemma <ref> that there is a copy of TR_p containing a and c as well as a copy containing b and c. So we have that X+_x[TR_p]≅ X+_y[TR_p] when x,y∈{north pole of (R_p)_i| 1≤ i≤ k}∪{a,b,c}. This also holds if x,y∈{south pole of (R_p)_i| 1≤ i ≤ k}. At this point we have demonstrated there are at most two isomorphism classes of _1,(p-1)/2+(p-1)k+pg[3+2k]+_x[TR_p]. We know from Lemma <ref> that X+_c[TR_p]≅_p-1[4]+k[R_p]#_p M_g. By construction in Example <ref>, we also know that X+_x[TR_p]≅_2+(k-1)[R_p]#_p M_g when x∈{south pole of (R_p)_i| 1≤ i ≤ k}. 
We know from Proposition <ref> and subsequent remarks that these spaces are not isomorphic. So there must be exactly two isomorphism classes of spaces of the form (_1,(p-1)/2+(p-1)k+pg[3+2k])+_?[TR_p]. For n≥ 2 and k≥ 1, there are two isomorphism classes of C_p-spaces of the form (_n+k[R_p])#_p M_g+_x[TR_p] which depend on the choice of fixed point x. Specifically, given a fixed point x, (_n+k[R_p])#_p M_g+_x[TR_p] is isomorphic to one of the following: * (_n+1+(k-1)[R_p])#_p M_g * (_n-1+(k+2)[R_p])#_p M_g The same ideas presented in the proof of Lemma <ref> can be extended to this more general case. Finally, we present a lemma which will help prove the inductive step of our main classification theorem. Let X be a connected C_p-surface for which X-[R_p] is defined. If F(X)≥ 3, then X-[R_p] must also be connected. Fix a copy of R_p⊂ X on which we will perform -[R_p] surgery. Since F(X)≥ 3, there exists at least one additional fixed point x∈ X such that x∉R_p. In order to show that X-[R_p] is connected, it suffices to show that X∖ R_p (the space obtained by removing R_p from X but before gluing in the p conjugate disks) is connected. We first claim that given any point y in the boundary of X∖ R_p, there is a path from the fixed point x to y. First note that the connected component of X∖ R_p containing x must have at least one boundary component (which we will call C). Otherwise, X could not have been connected. Thus there is a path from x to any point on C. A conjugate to any such path would be a path from x to σ^i C. Thus, x must be in the same connected component as each boundary component of X∖ R_p. Since X is connected, every point z∈ X∖ R_p must be in the same connected component as at least one boundary component. Thus all of X∖ R_p lies in a single connected component, as desired. We are now ready to revisit Theorem <ref> and provide a proof of the result. Let X be a connected, closed, orientable surface with an action of C_p. Then X can be constructed via one of the following surgery procedures, up to Aut(C_p) actions on each of the pieces. * M_1+pg^free:= M_1^free#_pM_g, g≥ 0 * _(p-1)k+pg[2k+2]:=(S^2,1+k[R_p])#_pM_g, k,g≥ 0 * _n,(3n-2)(p-1)/2+(p-1)k+pg[3n+2k]:=(_n+k[R_p]) #_pM_g, k,g≥ 0, n≥ 1 We induct on the number of fixed points F. First let X be a free orientable space. By the classification of free C_p-spaces done in Section <ref>, X≅ M_1+pg^free for some g≥ 0. The case where X is orientable and F=1 does not occur. A proof of this fact can be found in Example 3.3 of <cit.> or Theorem 7.1 of <cit.>. Let us move on to the case F=2. Let x,y∈ X be distinct fixed points. By Lemma <ref>, there exists R_p⊂ X or TR_p⊂ X containing x and y. The latter case is not possible since X-[TR_p] would be a closed, orientable C_p-surface with a single fixed point. So x and y are contained in some R_p in X. Then by the F=0 case, X-[R_p]≅ M_1+pg^free or X-[R_p]≅ M_g× C_p. We know from Figure <ref> that +[R_p] surgery on either of these spaces results in S^2,1#_pM_g^' for some g^'≥ 0. Thus X must be isomorphic to _pg[2]. We additionally observe in the case F=2 that since X≅ S^2,1#_pM_g^', there is an equivariant automorphism of X swapping the fixed points x and y. This map φ can be defined as a reflection through the plane perpendicular to the axis of rotation which bisects X. We can thus define an isomorphism X+_y[TR_p]→ X+_x[TR_p] given by φ everywhere outside of the added copy of TR_p. Next assume F=3. Again, we can find distinct fixed points x and y in X which are contained in R_p⊂ X or TR_p⊂ X.
The former is impossible since X-[R_p] would be a closed, orientable C_p-surface with one fixed point. Thus, x,y∈ TR_p in X. So X-_x,y[TR_p]≅ S^2,1#_pM_g for some g by the previous F=2 case. Finally we can observe that (S^2,1#_pM_g)+[TR_p]≅_1#_pM_g. Since S^2,1#_pM_g+_?[TR_p] is independent of the chosen fixed point, we can conclude that X≅_1#_pM_g. Since X≅_1#_pM_g, all three fixed points of X live in a neighborhood isomorphic to _1∖(D_2× C_p). So given any two fixed points in X, there exists TR_p⊂ X containing them. By Lemma <ref> we can construct an equivariant automorphism of X swapping any two of its fixed points. Therefore +[TR_p] surgery on X is invariant of the choice of fixed point. For the inductive hypothesis, let 3 < ℓ. For any ℓ^' with 3≤ℓ^' <ℓ, suppose that (1) if Z is a connected, closed, orientable C_p-surface with F=ℓ^', then Z is isomorphic to _(p-1)k+pg[2k+2] or _n,(3n-2)(p-1)/2+(p-1)k[3n+2k] for some k,g≥ 0 and n≥ 1, and (2) if x and y in Z are distinct fixed points, then Z+_x[TR_p]≅ Z+_y[TR_p]. Now let X be a closed, orientable C_p-surface with F=ℓ. Let x,y∈ X be distinct fixed points. By Lemma <ref>, there exists R_p⊂ X or TR_p⊂ X containing x and y. Suppose first that x and y are contained in R_p⊂ X. Then X-[R_p] has ℓ-2≥ 2 fixed points. Since X was connected and X-[R_p] has at least one fixed point, X-[R_p] must also be connected by Lemma <ref>. So we can invoke the inductive hypothesis to conclude that X-[R_p] is isomorphic to one of the following: * _(p-1)k+pg[2k+2]≅(S^2,1+k[R_p])#_pM_g * (_n+k[R_p])#_p M_g. In the first case, we can conclude X≅(S^2,1+(k+1)[R_p])#_pM_g≅_(p-1)(k+1)+pg[2(k+1)+2]. In the second case, it follows that X≅(_n+(k+1)[R_p])#_p M_g. If x and y are contained in TR_p⊂ X, then X-_x,y[TR_p] has ℓ-1≥ 3 fixed points. By the inductive hypothesis, X-_x,y[TR_p] is isomorphic to one of the following: * _(p-1)k+pg[2k+2]≅(S^2,1+k[R_p])#_pM_g for some k≥ 1 and g≥ 0 * (_n+k[R_p])#_p M_g for some n≥ 1 and k,g≥ 0. We know from Lemma <ref> that +_?[TR_p]-surgery on _(p-1)k+pg[2k+2] is independent of the chosen fixed point. So if X-_x,y[TR_p]≅_(p-1)k+pg[2k+2], then X≅(_1+k[R_p])#_pM_g. Next suppose X-_x,y[TR_p]≅(_n+k[R_p])#_p M_g for some n≥ 1 and k,g≥ 0. Again, we know from Corollary <ref> that if k≥ 1 there are two isomorphism classes of spaces for ((_n+k[R_p])#_p M_g)+_a[TR_p], depending on the choice of fixed point a. In one case we have X≅(_n-1+(k+2)[R_p])#_pM_g. This is also the result of +_a[TR_p]-surgery on X when k=0. Assuming k≥ 1, it is also possible that X≅(_n+1+(k-1)[R_p])#_pM_g. For the remainder of this section, we use Ñ_n to denote the space N_n∖ D^2. §.§ Free Actions on Non-orientable Surfaces with Boundary Our next goal is to prove the classification theorem for non-orientable C_p-surfaces. We saw that there were no orientable C_p-surfaces with a single fixed point, but this is not the case for non-orientable surfaces. In order to prove the F=1 case of our classification theorem, we need to lay a bit of ground work. We start with a treatment of free C_p-actions on Ñ_pn+1 for n≥ 0. Up to the action of Aut(C_p) there is a single isomorphism class of free C_p actions on Ñ_pn+1 for all n≥ 0. More precisely, there are (p-1)/2 non-isomorphic actions on Ñ_pn+1. These are the Aut(C_p)-conjugates of MB_p#_p N_n where MB_p is defined in Section <ref>. Given a free C_p action on Ñ_pn+1, the quotient Ñ_pn+1/C_p must be a non-orientable surface with a single boundary component and Euler characteristic 1/p(1-(pn+1))=-n. The only such space is Ñ_n+1. 
Recall from Section <ref> that 𝒮(Ñ_n+1) denotes the set of isomorphism classes of path-connected, free C_p spaces X so that X/C_p≅Ñ_n+1. There is a bijection between 𝒮(Ñ_n+1) and the set of nonzero orbits of H^1(Ñ_n+1;ℤ/p) under the action of Aut(Ñ_n+1). To prove Proposition <ref>, we will consider three cases: n=0, n=1, and n>1. When n>1, we will show that there are at most (p+1)/2 nonzero orbits in H_1(Ñ_n+1;ℤ/p)/Aut(Ñ_n+1). Then we construct a free C_p space Y≇Ñ_pn+1 of genus pn+1 whose quotient by C_p is Ñ_n+1. This will guarantee that the (p-1)/2 conjugate actions of C_p on Ñ_pn+1 coming from MB_p#_pN_n can be the only such actions. We carry out a similar procedure in the n=1 case, instead showing that there are p-1 nonzero orbits in H_1(Ñ_2;ℤ/p)/Aut(Ñ_2) and constructing a non-equivariant space distinct from Ñ_n+1 with (p-1)/2 non-isomorphic free C_p-actions. The n=0 case will prove to be even simpler, with only (p-1)/2 nonzero orbits in H_1(Ñ_1)/Aut(Ñ_1). Our proof will be very reminiscent of that of Theorem <ref>. Represent Ñ_n+1 as a disk with n+1 crosscaps, and pick a basis {α_1,… , α_n+1} for H_1(Ñ_n+1;ℤ/p)=(ℤ/p)^n+1 given by the center circles of the crosscaps. Recall our notation T_i,j for the Dehn twist about the curve passing through the ith and jth crosscaps and Y_i,j for the crosscap slide which passes the ith crosscap through the jth. In addition to these mapping class group elements, let ψ denote the reflection as shown in Figure <ref>. When n=0, we do not have Dehn twists or crosscap slide homeomorphisms. It is quick to check that ψ sends kα_1 to (p-k)α_1. This gives us at most (p-1)/2 nonzero orbits in H_1(Ñ_1;ℤ/p)/Aut(Ñ_1). In fact we can conclude that there are exactly (p-1)/2 nonzero orbits because there should also be at least (p-1)/2 nonzero orbits corresponding to the (p-1)/2 non-isomorphic free actions on the Möbius band defined in Section <ref>. Skipping the n=1 case for now, let us assume n≥ 2. Let 𝐜=(c_1,… ,c_n+1) be a nonzero element of H_1(Ñ_n+1;ℤ/p). We will first show that there are at most p-1 nontrivial orbits with representatives of the form (k,0,…,0) for some 1≤ k≤(p-1)/2 or (ℓ,ℓ,…,ℓ) for some 1≤ℓ≤(p-1)/2. Let c_i be the rightmost nonzero entry of 𝐜 with the property that c_i≠ c_i-1. We first claim there exists some power of T_i-1,i so that (c_1,… ,c_i-1,c_i,…,c_n+1)∼ (c_1,… ,c_i-1-c_i,0,c_i+1,… ,c_n+1). We showed in the proof of Proposition <ref> that applying T_i-1,i to the tuple s times produces the tuple whose (i-1)st coordinate is (s+1)c_i-1-sc_i and whose ith coordinate is sc_i-(s-1)c_i-1. Since c_i-1≠ c_i, there exists some positive integer s so that sc_i-1-(s-1)c_i≡ 0 (mod p). For such an s, it is therefore also true that (s+1)c_i-1-sc_i≡ c_i-1-c_i (mod p). So applying T_i-1,i^s to the tuple (c_1,… ,c_n+1) produces (c_1,… ,c_i-1-c_i,0,c_i+1,… ,c_n+1) as desired. Notice that applying the appropriate power of T_i-1,i either increases the number of zeros in the tuple (in the case that c_i-1≠ 0) or shifts an existing zero to the right one position (in the case c_i-1=0). Repeat this process to obtain the orbit representative (c_1,… ,c_n+1)∼ (k,k,…,k,0,…,0) with k≠ 0 and 1≤ℓ≤ n+1 nonzero entries. When ℓ<n+1, Y_ℓ,n+1(k,k,…,k,0,… ,0)=(k,…,k,p-k,0,…,0). Since the ℓth entry is not equal to the (ℓ-1)st entry, we can repeat the steps outlined in the previous paragraph until we obtain (c_1,…,c_n+1)∼ (k^',…,k^',0,…,0) with k^'≠ 0 and ℓ^'<ℓ nonzero entries. Since k is nonzero and k≠ p-k, we know the number of zeros will strictly increase with this process.
Therefore we can repeat it until (c_1,…,c_n+1)∼ (k,0,…,0) for some nonzero k. In the case that ℓ=n+1 we have c_i=c_1 for all i. So (c_1,… ,c_n+1)= (c_1,c_1,…,c_1). Note that the action of ψ puts (k,0,…,0) in the same orbit as (p-k,0,… ,0) and similarly puts (k,k,…,k) in the same orbit as (p-k,p-k,…,p-k), giving us at most p-1 nonzero orbits. To finish the n≥ 2 case, we will now check that all elements of the form (k,0,… ,0) are in the same orbit under the action of Dehn twists and crosscap slides. Let (1,a,0,…,0) be an element of H_1(Ñ_n+1;ℤ/p) with a≠ 0. Note that since n≥ 2, this element has at least one zero entry. Based on our previous arguments, we know this is in the same orbit as (1-a,0,…,0) and (a-1,0,…,0). Alternatively, we can see that under the action of Y_2,3 followed by several Dehn twists, (1,a,0,…,0) is in the same orbit as (1,-a,0,…,0) and (1+a,0,…,0). Putting all of this together, we are able to conclude that (a+1,0,…,0) and (a-1,0,…,0) are in the same orbit for all a. This is enough to conclude that all elements of the form (k,0,…,0) are in the same orbit when k≠ 0. As desired, this leaves us with (p+1)/2 nontrivial orbits with representatives (1,0,…,0) and (ℓ,ℓ,…,ℓ) for all 1≤ℓ≤ (p-1)/2. When n=1, H_1(Ñ_2;ℤ/p)=ℤ/p⊕ℤ/p. As with the n>1 case, analyzing the action of Dehn twists and crosscap slides on the homology generators gives us at most p-1 nontrivial orbits with representatives of the form (k,0) for 1≤ k ≤ p-1 and (ℓ,ℓ) for 1≤ℓ≤ p-1. We will see below that these must represent p-1 distinct orbits in H_1(Ñ_2)/Aut(Ñ_2). Let Y be the space obtained by removing p conjugate disks from N_2^free#_pN_n-1. As proved in Section <ref>, this space has (p-1)/2 non-isomorphic free C_p-actions when n=1 and just one action when n>1. The quotient of Y by any of its free actions is Ñ_n+1 as desired. Moreover, Y≇Ñ_pn+1 since these spaces do not have the same number of boundary components. As in the last section, the trivial orbit of H_1(Ñ_n+1;ℤ/p) corresponds to the non-path-connected C_p-space C_p×Ñ_n+1. §.§ Proof of Classification for Non-orientable Surfaces There is an equivariant isomorphism _1#_p N_1≅ N_1[1]+[R_p]. We will prove this result for the case p=3, noting that p>3 is similar. Figure <ref> shows us how (_1#_3N_1)-[R_3]≅ N_1[1]. To begin, we represent _1#_3N_1 as our usual hexagon picture with fixed points a, b, and c as well as 3 crosscaps. A copy of EB containing a and b can be seen in red in the figure on the left. One can check that a tubular neighborhood of this EB has three boundary components and thus must be isomorphic to R_3. The middle of Figure <ref> shows the result of removing this copy of R_3. To complete -[R_3] surgery, we glue in the orange, pink, and green disks along the resulting boundary. To more easily see these identifications, we can first perform the intermediate step of "flipping" the red regions and identifying the yellow edges, then having the red regions change back to grey. The third picture on the right shows the result of the completed -[R_3] surgery. The resulting space is isomorphic to N_1[1]. The original statement then follows from Lemma <ref>. There is an equivariant isomorphism N_2^free+[R_p]≅ S^2,1#_pN_2. If we perform -[R_p] surgery on a neighborhood of the copy of EB from S^2,1#_pN_2 shown in Figure <ref>, the result is a connected, non-orientable surface with a free C_p-action. Since -[R_p]-surgery reduces β-genus by 2(p-1), this surface must have genus β=2. In particular, it must be N_2^free by our classification of free C_p spaces.
It follows from Lemma <ref> that N_2^free+[R_p]≅ S^2,1#_pN_2. There is an equivariant isomorphism N_1[1]+[TR_p]≅ S^2,1#_pN_1. The C_p-space _1+[FMB_p] can be constructed in two ways. In addition to performing +[FMB_p] surgery on _1, we could start by constructing N_1[1] as S^2,1+[FMB_p]. We can then build N_1[1]+[TR_p] by performing the +[TR_p] surgery on the remaining fixed point. These two constructions are demonstrated in Figure <ref>. Since both of these constructions yield the same space, it follows that N_1[1]+[TR_p]≅_1+[FMB_p]. If we next remove a copy of R_p from _1+[FMB_p] as shown in Figure <ref>, the result is C_p× MB where MB denotes the Möbius band. Thus, when we finish the -[R_p] surgery on _1+[FMB_p] by gluing in p disks on the boundary components, this leaves us with C_p× N_1. Since C_p× N_1≅(S^2,1#_pN_1)-[R_p], we get that _1+[FMB_p]≅ S^2,1#_pN_1 by Lemma <ref>. We are now ready to restate and prove Theorem <ref> for the classification of non-orientable C_p-surfaces. Let X be a connected, closed, non-orientable surface with an action of C_p. Then X can be constructed via one of the following surgery procedures, up to Aut(C_p) actions on each of the pieces. * N_2+pr^free≅ N_2^free#_pN_r, r≥ 0 * N_2(p-1)k+pr[2k+2]≅(S^2,1+k[R_p])#_pN_r, r≥ 1 * N_1+2(p-1)k+pr[1+2k]≅(N_1[1]+k[R_p])#_pN_r, k,r≥ 0 Moreover, the space X is determined by F and β, with the condition that F≡ 2-β (mod p); a short arithmetic check of this congruence is given at the end of the subsection. We induct on the number of fixed points F of X. First let X be a free non-orientable space. By the classification of free C_p-spaces, X≅ N_2+pr^free for some r≥ 0. Let X be a connected, closed, non-orientable C_p-surface with F=1. Then X must have genus pr+1 for some r≥ 0 by Lemma <ref>. Suppose Y is another closed, connected, genus pr+1 non-orientable C_p-surface with a single fixed point. Let X̃ (respectively Ỹ) denote the C_p-space X∖ D^2,1 (respectively Y∖ D^2,1) where D^2,1 is a neighborhood of the fixed point of X (respectively Y). Recall that Ñ_pr+1 has (p-1)/2 non-trivial, pairwise non-isomorphic C_p-actions by Proposition <ref>. After altering the action on Y by Aut(C_p), we can make the action on ∂Ỹ match that on ∂X̃. Then X̃≅Ỹ, which extends to an equivariant isomorphism X→ Y. Thus there is only one non-orientable C_p-surface of genus pr+1 with F=1, so it must be isomorphic to N_1[1]#_pN_r. Suppose F=2. Let x and y be the two distinct fixed points of X. By Lemma <ref>, there exists R_p⊂ X or TR_p⊂ X containing x and y. If there exists R_p⊂ X containing x and y, then X-[R_p] is a free, non-orientable C_p-space. If X-[R_p] is connected, then X-[R_p]≅ N_2+pr^free for some r≥ 0. So X≅ N_2+pr^free+[R_p]≅ S^2,1#_pN_r+2 by Lemma <ref>. If X-[R_p] is not connected, then it must be isomorphic to N_r^'× C_p for some r^'≥ 1. In this case, we can see that X≅ S^2,1#_pN_r^'. Suppose instead we find that x and y are contained in some TR_p⊂ X. Then X-_x,y[TR_p] is a closed, connected, non-orientable C_p-surface with 1 fixed point. In particular, X-_x,y[TR_p]≅ N_1[1]#_pN_r for some r by what we already showed. Recall that equivariant connected sum surgery commutes with all types of C_p-ribbon surgeries. Since X is the result of +[TR_p]-surgery on N_1[1]#_pN_r, Lemma <ref> tells us that X≅(N_1[1]+[TR_p])#_pN_r ≅(S^2,1#_pN_1)#_pN_r≅ S^2,1#_pN_r+1. We next claim that for a closed, non-orientable C_p-surface with F=2, there exists a path α between the two fixed points so that a neighborhood of α∪σα∪⋯∪σ^p-1α is isomorphic to TR_p.
We just showed that X≅ S^2,1#_pN_r for some r≥ 1, so we can represent X by a copy of S^2,1 with pr crosscaps at the equator. Figure <ref> shows a path α on X with the desired property in the case when r=2 and p=3. By Lemma <ref>, there exists an automorphism of X swapping its fixed points. As in previous cases, this allows us to conclude that +[TR_p] surgery on X is independent of the chosen fixed point. For the inductive hypothesis, let 2<ℓ. For any ℓ^' with 2≤ℓ^'<ℓ, suppose that (1) if A is a connected, closed, non-orientable C_p-surface with F=ℓ^', then Z is isomorphic to N_2(p-1)k+pr[2k+2] or N_1+2(p-1)k+pr[1+2k] for some k,r, and (2) if x and y in Z are distinct fixed points, then Z+_x[TR_p]≅ Z+_y[TR_p]. Now let X be a closed, non-orientable C_p-surface with F=ℓ. Let x,y∈ X be distinct fixed points. By Lemma <ref>, there exists R_p⊂ X or TR_p⊂ X containing x and y. Suppose first that x and y are contained in R_p⊂ X. Then X-[R_p] has ℓ-2≥ 1 fixed points and is thus connected by Lemma <ref>. By the inductive hypothesis, X-[R_p] is isomorphic to one of the following: * N_2(p-1)k+pr[2k+2]≅(S^2,1+k[R_p])#_pN_r * N_1+2(p-1)k+pr[1+2k]≅(N_1[1]+k[R_p])#_pN_r. In the first case, we can conclude X≅(S^2,1+(k+1)[R_p])#_pN_r≅ N_2(p-1)(k+1)+pr[2(k+1)+2]. In the second case, we have X≅(N_1[1]+(k+1)[R_p])#_pN_r≅ N_1+2(p-1)(k+1)+pr[1+2(k+1)]. If x and y are contained in TR_p⊂ X, then X-[TR_p] has ℓ-1≥ 2 fixed points. By the inductive hypothesis, X-[TR_p] is isomorphic to one of the following: * N_2(p-1)k+pr[2k+2]≅(S^2,1+k[R_p])#_pN_r (r≥ 1) * N_1+2(p-1)k+pr[1+2k]≅(N_1[1]+k[R_p])#_pN_r. We also know from the inductive assumption that +[TR_p]-surgery on X-[TR_p] is independent of the chosen fixed point, so (X-[TR_p])+[TR_p]≅ X. Thus in the first case, we can choose to center our +[TR_p] surgery on the north pole of S^2,1. Since r≥ 1, we have X ≅((_1#_pN_1)+k[R_p])#_pN_r-1 ≅(N_1[1]+(k+1)[R_p])#_pN_r-1 ≅ N_1+2(p-1)(k+1)+p(r-1)[1+2(k+1)] where the second isomorphism is by Lemma <ref> and the first isomorphism follows from the commutativity of +[R_p]-surgery and equivariant connected sum surgery. In the second case, we can choose to center our +[TR_p] surgery on the fixed point originating from the copy of N_1[1]. By Lemma <ref>, we get X≅(S^2,1+k[R_p])#_pN_r+1≅ N_2(p-1)k+p(r+1)[2k+2]. Next we will show that if x and y are distinct fixed points in X, then X+_x[TR_p]≅ X+_y[TR_p]. The case where X≅ N_2(p-1)k+pr[2k+2] is nearly identical to the orientable case _(p-1)k+pg[2k+2], so we will provide the proof of +[TR_p] invariance only for X≅ N_1+2(p-1)k+pr[1+2k]. We represent N_1+2(p-1)k+pr[1+2k] by first choosing a disk D in N_1[1] that does not intersect its conjugates. Then choose a representation of N_2(p-1)(k-1)+pr[2(k-1)+2] using the same construction as for _(p-1)k+pg[2k+2] in Lemma <ref>. Next remove p disjoint conjugate disks D^' ,σ D^', … ,σ^p-1D^' from the equator of the sphere S^2,1 used to construct N_2(p-1)(k-1)+pr[2(k-1)+2]. Remove D and its conjugates from N_1[1] and identify ∂σ^i D with ∂σ^i D^' (renaming D^' if necessary). Let c denote the fixed point in N_1[1]. We will show that for any other fixed point x there exists an equivariant automorphism of N_1+2(p-1)k+pr[1+2k] which exchanges x and c. If we can show this, then composition of these automorphisms allows us to swap any two fixed points in N_2(p-1)(k-1)+pr[2(k-1)+2]. Let x≠ c be a fixed point in N_1+2(p-1)k+pr[1+2k]. Then x is either contained in the copy of S^2,1 or (R_p)_i for some i. 
In any case, there exists a path α from x to c with a neighborhood of α∪σα∪⋯∪σ^p-1α isomorphic to TR_p. Figure <ref> shows how to construct such a path α when x∈ S^2,1. Note that this figure does not show the (R_p)_i, but α can be constructed so that it does not intersect any of the (R_p)_i. Similarly, Figure <ref> shows how to construct α when x∈ (R_p)_i for some i. The choice of α is similar for all i. Again note that α can be constructed so that it does not intersect (R_p)_j when j≠ i. One can check that the paths depicted in these figures have a neighborhood isomorphic to TR_p by checking that the chosen neighborhood has a single boundary component. Since x and c are contained in a copy of TR_p⊂ N_1+2(p-1)k+pr[1+2k], we know from Lemma <ref> that there exists an automorphism of N_1+2(p-1)k+pr[1+2k] swapping x and c. The result then follows from induction. If X and Y are closed, connected, non-orientable C_p-surfaces with X-[TR_p]≅ Y-[TR_p], then X≅ Y. In particular, X+_x[TR_p] is independent of the choice of x. amsalpha
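As a quick arithmetic check of the congruence F≡ 2-β (mod p) stated in the classification theorem above (reading N_β[F] as the non-orientable C_p-surface of genus β with F fixed points, as the notation suggests), one can verify it directly on the three families: for N_2+pr^free we have 2-β = -pr ≡ 0 = F (mod p); for N_2(p-1)k+pr[2k+2] we have 2-β = 2-2(p-1)k-pr ≡ 2+2k = F (mod p); and for N_1+2(p-1)k+pr[1+2k] we have 2-β = 1-2(p-1)k-pr ≡ 1+2k = F (mod p). This check is supplementary and not part of the original argument.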
http://arxiv.org/abs/2307.06189v1
20230712142740
Cooperative Localization for Autonomous Underwater Vehicles -- a comprehensive review
[ "Milind Fernandes", "Soumya Ranjan Sahoo", "Mangal Kothari" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Cooperative Localization for Autonomous Underwater Vehicles - a comprehensive review
Milind Fernandes, Soumya Ranjan Sahoo, and Mangal Kothari
M. Fernandes and S. R. Sahoo are with the Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India (e-mail: [email protected]; [email protected]). M. Kothari is with the Department of Aerospace Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India (e-mail: [email protected]).
July 2023
Cooperative localization is an important technique in environments devoid of GPS-based localization, more so in underwater scenarios, where none of the terrestrial localization techniques based on radio frequency or optics are suitable due to severe attenuation. Given the large swaths of oceans and seas where autonomous underwater vehicles (AUVs) operate, traditional acoustic positioning systems fall short on many counts. Cooperative localization (CL), which involves sharing mutual information amongst the vehicles, has thus emerged as a viable option in the past decade. This paper assimilates the research carried out in AUV cooperative localization and presents a qualitative overview. The cooperative localization approaches are categorized by their cooperation and localization strategies, while the algorithms employed are reviewed in light of the various challenges posed by the underwater acoustic channel and environment. Furthermore, existing problems and future scope in the domain of underwater cooperative localization are discussed. Cooperative localization, AUV, Underwater, ASV, CNA. § INTRODUCTION It has been said that we know more about the surface of the moon than the ocean floor. A large swath of our oceans remains unexplored and unmapped, owing mainly to its massive size and the cost of operation for any survey activity. A modern solution to this age-old problem is unmanned underwater vehicles or UUVs instead of human-crewed ships. While UUVs generally consist of remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs), it is the latter that are best suited for long-duration and long-range survey missions in the oceans. An AUV is capable of autonomously carrying out pre-planned surveys, opportunistic seeking missions, and target tracking without human intervention and control. However, even though AUVs have increased our capabilities to survey the ocean depths, they do have a few shortcomings. First, an AUV needs a support vehicle in the form of a human-crewed ship for deployment and recovery, which is costly <cit.>. Second, since the AUV works underwater, localization is a huge challenge. In a terrestrial setting, the localization problem is solved mainly by relying on GPS. But for underwater environments, no such large-scale system exists. This is because electromagnetic signals attenuate very rapidly in water and do not propagate over useful distances <cit.>, <cit.>. In the absence of localization references such as GPS, it is common for an autonomous vehicle to rely on its onboard sensors for dead reckoning.
But despite the advances in accuracy and resolutions of sensors such as accelerometers, gyroscopes, compass, etc., the dead reckoning approach still suffers significant drift from true location, especially in large distance surveys <cit.>, <cit.>. Thus, the location of the AUV becomes increasingly uncertain. The cost of these high-accuracy sensors renders them prohibitively expensive in missions utilizing large teams of AUVs. A Doppler velocity log (DVL), which provides velocity measurements, can be used to bound the growth rate of location error to some extent, as was shown in <cit.>. However, the ocean floor needs to be constantly in the range of DVL for it to be useful. This is not possible if the vehicle operates far above the ocean floor in the Pelagic zone. This necessitates external reference systems to minimize or bound the uncertainty in the position within a specified range. One of the simplest solutions is for the AUV itself to periodically surface for a GPS fix. But this is not a very elegant solution because significant energy and mission time is wasted for surfacing and then heading back down. Traditionally underwater localization has relied on acoustic localization systems such as the long baseline (LBL), GPS intelligent buoys (GIB), and ship/surface vehicle-based short or ultra-short baseline systems (SBL/USBL) <cit.>, <cit.>. Other methods, such as those based on Simultaneous Localization and Mapping (SLAM), geophysical features obtained using camera or SONAR imagery, magnetic field maps, and bathymetric maps, have elicited interest in the recent past. The reader can find an excellent review of AUV navigation technologies and techniques in <cit.>, <cit.>. The traditional localization systems, such as LBL and GIB, suffer from installation/deployment and maintenance/recovery issues, whereas SBL and USBL suffer from lower precision and operating range with bearing measurements further affected by the surface vehicles' roll and pitch. Even with LBL, the operating range is limited to a few square kms. Underwater SLAM, on the other hand, suffers from a limited set of available underwater features, whereas optical solutions need clear waters and fail in turbid environments. They also have high computational requirements and a high monetary cost that escalates with the number of AUVs for a given mission. Another option that has gained significant attention in recent times is the range-only single beacon-based localization due to its inherent simplicity and cost-effectiveness <cit.>. The beacon could be either static or moving. While a moving beacon falls in the realm of cooperative localization (CL), as will be seen in the sequel, the static beacon-based localization requires the AUV to perform fast turning or encircling maneuvers for the system to be observable. While this is acceptable in target tracking (where the beacon is the target), if the AUV has a specific mission trajectory, this method cannot be used. All the above issues have contributed to the increasing interest in cooperative localization-based approaches, especially in AUV teams working together. At the very least, CL requires a set of sensors that will, by necessity, be present on each of the vehicles, such as the Inertial Navigation System (INS), acoustic modem, and DVL. Some CL approaches can even accommodate vehicles with lower accuracy sensors. 
The CL-based approach's primary requirement is the mutual information exchange between vehicles, using which each vehicle can improve its respective localization accuracy. With the growing interest in this area, the literature on underwater CL has reached a critical mass. However, to the best of the authors' knowledge, no publication has yet classified and categorized this trove of information. Thus, this paper aims to review and classify the existing literature in this domain qualitatively. With this in mind, the contributions of this paper are as follows: * We present an exhaustive review of underwater cooperative localization literature for AUVs up to date. * We classify the CL approaches and bring forth their salient features. * We identify and discuss the open problems in underwater cooperative localization. The paper is organized as follows. Section II gives the requisite background that underlines the operational performance of underwater cooperative localization strategies. Section III discusses the different categories of cooperative localization strategies and provides a detailed comparison of various performance parameters. In section IV, a discussion on the current shortcomings and future directions is presented. Section V concludes the paper. § BACKGROUND In this section, we put forth some of the considerations relevant to the underwater environment and cooperative localization algorithms. These include the underwater acoustic channel, state estimation techniques, measurements for state estimators, among others. This section also introduces some of the criteria used to compare the current state-of-the-art cooperative localization strategies for AUVs. §.§ Underwater acoustic channel A water body as a communication channel is quite challenging, especially considering the severe attenuation of electromagnetic signals, be it radio frequencies or light. This has led to widespread adoption and advancements in acoustic communication technologies for underwater use. Still, the underwater acoustic channel has a fair share of issues that need to be dealt with and kept in mind while developing algorithms that use acoustically transmitted information, as highlighted below. * Speed of propagation: The water temperature in seas and oceans is not a constant function of depth; instead, it varies with it. The warm waters are near the surface, while the cold waters are near the floor. Similarly, a column of water may consist of strata of different salinity at any location and time, thus, different densities. The density of the water is also a function of the depth. These spatiotemporal gradients of temperature, salinity, and depth affect the speed of propagation of acoustic signals <cit.>, <cit.>. This is especially severe in communications that involve large distances. While in practice, it is common to assume a constant speed of sound (1500 m/s) underwater, it does not represent the actual speed of sound at the instant of time and space, and thus, will be a source of error in methods that rely on range computations for localization. However, some approaches can estimate the sound speed profile of the acoustic channel during a mission and compensate for any such effects <cit.>, <cit.>, <cit.>, <cit.>. * Latency: This is the direct consequence of acoustic signals' speed being much less than RF signals' speed. It leads to time delays between signal transmission and reception. 
The transmitter or receiver might have moved in that time, which causes an offset between the estimated and the actual positions. This has led to the development of delayed-state-based estimators <cit.>, <cit.>, <cit.>. * Propagation Path: The density changes also cause the acoustic signals to travel along a curved path instead of straight lines <cit.>. This introduces errors in range measurements, as the actual traveled distance is greater than the exact Euclidean distance between any two points. Unlike the sound speed profile, this effect is difficult to characterize and compensate for large distances. In most cases, the path is assumed to be a straight line. * Multipath effects: These effects are predominantly encountered in shallow waters, wherein the acoustic signals bounce off the seafloor or surface boundary and arrive at the receiver with a delay. These can also be experienced in deep water missions near the seafloor. Multipath signals give rise to measurement outliers or inaccurate range measurements and cause significant errors in state estimators' output. Outlier mitigation is one of the critical considerations in the performance evaluation of localization algorithms. Some approaches are given in <cit.>. * Bandwidth: The acoustic channel is inherently narrow-band since it operates in the audible/ultrasonic frequency bands. This limits the number of bits one can transmit per second. Although recent advancements have achieved up to 64 kbps of throughput <cit.> over short distances of 300 m, it is a fraction of what is achievable in terrestrial networks. This calls for CL techniques that are robust against the limited channel capacity. * Measurement noise: For mathematical and computational convenience, it is often the practice to assume any noise source in acoustic communications and uncertainty in measurement as being Gaussian distributed. However, as evident from many practical experiments, the distributions are more often than not heavy-tailed, especially in scenarios where multipath is evident <cit.>. This leads to apparent errors in the location estimates generated by the estimation algorithms. * Lost transmissions/Packet Loss: Due to the harsh underwater environment, underwater acoustic channels are far less reliable than terrestrial RF channels and suffer from intermittent lost transmissions/ packets up to 20-50% of the total <cit.>. This severely affects the convergence rate of estimation algorithms and can even render them unstable. As can be seen from above, the underwater acoustic channel has many challenges in terms of communication accuracy and reliability and is an active area of research. A comprehensive survey of communication challenges, solutions, and open problems can be found in <cit.>. §.§ State Estimation Techniques As evident from the previous section, it is difficult to get complete location information in the harsh and dynamic underwater environment. It necessitates estimation techniques that can predict the vehicle's current location with a high degree of confidence by incorporating all the noisy and outlier-affected measurements from the internal and external sensors of an AUV. State estimators can be classified into three categories: a) Stochastic, b) SLAM based, and c) Deterministic <cit.>. Stochastic, Bayes filter-based estimators have found wide use due to their simplicity and computational efficiencies. A brief comparison of the various estimators is in Table  <ref>. 
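To make the predict-update structure of these Bayes-filter estimators concrete, the following minimal sketch illustrates an EKF that dead-reckons an AUV's horizontal position and corrects it with a single acoustic range to a beacon at a known location. The planar kinematic model, the noise levels, and the beacon position are illustrative assumptions of this example and are not taken from any particular implementation reviewed here.

import numpy as np

def ekf_predict(x, P, v, dt, Q):
    """Propagate state and covariance using a dead-reckoned velocity v (m/s)."""
    x = x + v * dt            # kinematic prediction (Jacobian F = identity)
    P = P + Q * dt            # uncertainty grows while dead reckoning
    return x, P

def ekf_range_update(x, P, z, beacon, R):
    """Correct the prediction with a measured range z (m) to a known beacon."""
    dx = x - beacon
    r_pred = np.linalg.norm(dx)          # predicted range h(x)
    H = (dx / r_pred).reshape(1, 2)      # Jacobian of h with respect to position
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K * (z - r_pred)).ravel()   # state correction
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: ten seconds of dead reckoning followed by one range update.
x = np.array([0.0, 0.0])                 # initial horizontal position estimate [m]
P = np.diag([4.0, 4.0])                  # initial position uncertainty [m^2]
Q = np.diag([0.05, 0.05])                # assumed dead-reckoning noise growth [m^2/s]
beacon = np.array([100.0, 50.0])         # assumed beacon (e.g. surface vehicle) position [m]
x, P = ekf_predict(x, P, v=np.array([1.5, 0.2]), dt=10.0, Q=Q)
x, P = ekf_range_update(x, P, z=95.0, beacon=beacon, R=9.0)
print(x, np.diag(P))

The same two-stage structure underlies most of the range-based cooperative localization filters reviewed in the following sections, with the fixed beacon replaced by a moving surface vehicle or a neighboring AUV whose broadcast position (and, ideally, covariance) is used in the update step.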
While there have been many SLAM-based approaches for state estimation, for which the reader is referred to <cit.>, <cit.>, <cit.>, the domain of underwater cooperative SLAM is still largely unexplored except for a few works <cit.>, <cit.>, <cit.>. Similar is the case with deterministic state estimators that require exact plant and measurement models, which are difficult to obtain in an uncertain underwater environment. §.§ Measurement inputs for the state estimators The common state estimation algorithms in Table <ref> have two stages: predict and update. The prediction stage uses the past state and beliefs to predict the next state. The measurement stage corrects this prediction using information from internal and external sensors. The general AUV sensor fusion architecture for estimating the position of a vehicle in a cooperative scenario is shown in Fig. <ref>. An AUV will have a bare minimum sensor suite consisting of an Attitude and Heading Reference System (AHRS) and a pressure (depth) sensor. The AHRS consists of a gyroscope and compass/magnetometer and provides the state estimators with angular accelerations, velocity, orientation, and heading inputs. Additionally, an Inertial Measurement Unit (IMU) combines an AHRS with accelerometers, providing additional 3D acceleration information which can be integrated for linear velocity and position estimates. The Inertial Navigation System (INS) uses data from AHRS, accelerometers, depth sensors, and DVL/ADCP (Acoustic Doppler current profiler) (if present) to estimate the vehicle's pose, also known as the dead reckoned estimate. The more expensive sensors, such as DVL, SONAR (Side-scan/Multibeam), gravity (Gravity map-based localization), etc., may or may not be present. The availability of cheap and accurate pressure-based depth sensors has effectively reduced the three-dimensional underwater localization problem to two dimensions. This simplifies analysis and subsequent computations. However, over time the position estimate becomes more and more uncertain due to inherent drift in the sensors. Fusing INS information with other sensor data, such as from GPS, range, bearing, etc., either eliminates the uncertainty in position or bounds the error within a desired range. While GPS is available only on the sea surface, bearing-based methods either rely on visual information or USBL. Visual information is subject to the water's turbidity and is inherently limited to short distances <cit.>. As mentioned previously, these methods are unsuitable for long-distance cooperative localization. This leaves the range-based methods, which explains their wide popularity in underwater localization. Range information can be acquired through different means. If all the vehicles are synchronized in time, time of flight (ToF) can be used to measure the distance between any two vehicles. This method, also known as one-way travel time (OWTT) ranging, has the benefit that it requires only one acoustic transmission per range measurement and is scalable with the number of vehicles. However, it requires highly accurate and stable clocks that are temperature-, bias-, and drift-compensated, such as chip-scale atomic clocks. If synchronization is not possible, two-way travel time (TWTT) or time difference of arrival (TDOA) can be used <cit.>, <cit.>. In TWTT, the first vehicle sends a request ping, to which the second vehicle replies with a finite delay.
If the delay is fixed and known, the distance between the two vehicles is a function of the total time from transmission to reception at the first vehicle. Since this method requires two acoustic transmissions for each range measurement, it is not scalable to large teams. In the TDOA method, the transmission from one vehicle is received by two or more vehicles. By knowing the arrival times at each of the receiving vehicles and their respective locations, the location or range of the transmitting vehicle can be estimated by exchanging data between the receiving vehicles, as it is a function of the difference in the arrival times at the receiving vehicles. However, this method requires more acoustic transmissions and is thus also not scalable. For the above reasons, OWTT has emerged as the preferred method for range measurements in underwater environments. §.§ Scalability The CL approach's scalability is inherently tied to how ranging is performed and how the data is exchanged between the vehicles involved due to the narrow bandwidth of the acoustic channel. Consequently, multiple simultaneous communications cannot exist. Thus, two of the most popular schemes to share data from one to many are a) broadcasting and b) Time-division multiple access (TDMA). The first approach is scalable to any number of receiving vehicles, whereas the latter is not. In a large team, the total time for updating the location information for one vehicle increases linearly, thus leading to considerable delays and can affect the convergence of the estimators. Other approaches, such as data exchange with neighbors-only, have also been reported <cit.>, which need fewer transmissions but need a scheduling algorithm. There are also recent approaches of using orthogonal frequency division multiple access (OFDMA) for simultaneous communications between vehicles, as reported in <cit.>. Another aspect that affects scalability is the size of the communication packets. The smaller the packets, the more reliable the communication with shorter communication intervals; thus, more vehicles can communicate in any given duration. But, this contravenes the requirement that for CL, the vehicles must exchange their states with each other, which is a lot of information. Thus, there have been attempts to efficiently manage the bandwidth in CL scenarios, as will be seen later. §.§ Other Considerations Ocean currents can have a detrimental effect on cooperative localization and, in general, localization of any AUV. Ocean currents tend to exacerbate the drift in the position estimate of the vehicles. They are dynamic, thus, cannot be accurately accounted for apriori and need to be estimated in situ for accurate localization results. To some extent, ocean general circulation models (OGCM) can be preloaded in the AUV prior to a mission, if available <cit.> and can be used to compensate for ocean currents for tasks or missions involving large areas. § UNDERWATER COOPERATIVE LOCALIZATION The term cooperative implies that the vehicles involved in localization share some information about their respective locations/state with each other <cit.>. The location information shared could be absolute or could be relative. While absolute location information present with any one of the vehicles in the team can essentially drive down the error to a minimal value, even relative position exchange between teams can prevent the error from unbounded growth <cit.>. Some of the common AUV cooperative localization approaches are shown in Fig. <ref>. 
In (a), surface vehicles are employed to aid the underwater vehicle by transmitting its absolute position information through acoustic channels. The surface vehicles could be single or multiple, autonomous or manned, and localized with the help of GPS signals. In (b), a "server/leader" underwater vehicle, which has very high accuracy and expensive sensors for its own localization, aids in the localization of several other "client/follower" AUVs. The client AUVs generally have low-accuracy inertial sensors or an incomplete sensor suite, along with other mission-specific payloads. Approach (c) does away with the aid vehicle altogether and instead relies on inter-vehicle communications to bound their localization error growth. In this type of approach, only the error growth rate can be lowered. This issue can be resolved in (d) type, wherein a team member can surface for GPS fix and then dive back to share the positional information with other team members. The taxonomy of underwater CL methods is shown in Fig. <ref>, and a brief comparison between the various categories is given in Table <ref>. In the following sections, we describe each of these approaches in detail and put forth the research contributions in those areas. Remark 1: Although cooperation for localization could also be with static sensors on the ocean floor or surface, as in the case with GiB, LBL, or UWSN, we restrict ourselves to the review of cases where cooperation is between moving vehicles underwater with or without help from those at the surface. This is for the reasons mentioned in the prior sections and because the dynamics of the moving beacons pose interesting and challenging problems. For UWSN based localization techniques the reader is referred to <cit.>, <cit.>, <cit.>. §.§ With a dedicated support vehicle In this approach, dedicated support vehicles are used as navigation aids (NA). These support vehicles can also act as communication gateways between the AUVs and ground or ship-based mission control stations. This configuration of the support vehicles is often termed a communication and navigation aid (CNA). CNAs have the advantage of being able to facilitate mission parameter changes and telemetry relays on the fly. Other benefits of using a dedicated NA include longer mission durations and possibly a facility for docking and recharging. The NA can either be on the surface of the water body or near the other AUVs underwater. While having a NA for localization results in excellent accuracy, they also pose certain challenges. The primary one being the path planning of the aiding vehicles. This has led to investigations into the observability of the localization problem with NA and optimal path planning strategies. This is more prominent in surface-based navigation aids since aid underwater generally has the same mission plan as the AUVs. In the following sections, we discuss these approaches and the related research. §.§.§ Navigational Aid on the surface Having the CNA on the surface has the advantages of being in constant reception of absolute GPS location data and communication network. However, this approach does have its challenges, such as more complex mission planning, longer distances for acoustic communication with AUVs resulting in acoustic channel issues such as packet loss, higher noise, latency, etc., and mitigation of other surface traffic. Furthermore, their performance is dependent on the surface sea states. 
Some of the earliest results in CL using surface vehicles were with crewed ships and boats, which were non-autonomous. The encouraging results from these experiments were then extended to autonomous surface vehicles.
Crewed Ship as localization aid
In this section, we review cooperative localization using non-autonomous surface vehicles as a navigational aid. These are mainly crewed ships and boats used for the deployment and retrieval of the AUVs. In one of the earliest and simplest approaches, Matos and Cruz <cit.> used two boats as moving beacons to localize a single AUV in a river. The AUV runs an EKF estimator, fusing distance information from the two beacons on the boats and its dead reckoning (DR) data. Both the boats move along a path perpendicular to the AUV path, which is a bank-to-bank, back-and-forth motion along the river. In <cit.>, a single ship was used instead, utilizing a least-squares (LS) based approach. The position is estimated from range information and the known trajectory through a least-squares estimator, and the resulting data is fused with dead reckoning data in a Kalman Filter. The algorithm’s accuracy suffered in cases of poor trilateration geometry. It also required prior knowledge of the sound speed profile (SSP), refraction index, and depth errors, the former two being challenging to acquire. McPhail and Pebody <cit.> tackled the problem of position drift during the dive of a deep-diving AUV by utilizing range information from a surface ship. The AUV, post-diving, is made to move in a circular orbit about the ship while its true position is estimated using a least mean square (LMS) algorithm. The paper also discusses mitigation of the problems mentioned in <cit.>, such as the measurement of SSP, depth error, refraction, sensor errors, etc., and others, such as the effects of tides and atmospheric pressure changes. For the same problem, in <cit.>, a strong tracking algorithm in which the prediction is carried out using the unscented transform is proposed. The observability with successive measurements was proved using Lie algebra. The proposed algorithm showed marginal improvements over a generic UKF. Folk et al. <cit.> evaluated relative and absolute localization of an AUV with respect to a naval ship to measure the latter’s magnetic signatures. In the absolute case, the AUV used an EKF for estimating its and the ship’s states, while in the relative case, only its own states were estimated. Eustice et al. used a ship as a navigation aid for single or multiple AUVs in <cit.>. The AUVs localized themselves using the OWTT of the acoustic signal from the ship and its sensor data through a decentralized least squares (LS)-maximum likelihood estimator (MLE). The proposed approach was shown to perform comparably to an LBL system. The ship, however, was free-drifting without any specific or optimal path. In <cit.>, OWTT information from a ship was used to localize a deepwater AUV, albeit in post-processing. The authors used a delayed state centralized EKF (DS-CEKF) to process the ship, AUV sensor, and range data. The delayed states compensated for the movement of the AUV between the acoustic transmission by the ship and its reception by the AUV. Centralized post-processing led to the incorporation of the cross-correlations between the ship and AUV states, resulting in the lowest estimation errors compared to distributed estimation. Hence, the CEKF is often referred to as the gold standard of estimation, with other estimators compared against it for performance evaluation.
The ship’s motion was confined to a diamond-shaped path to improve observability. The initial position uncertainty was tackled by initializing the EKF with an MLE estimate, as improper EKF initialization leads to its instability. The same authors in <cit.> proposed a decentralized extended information filter (DEIF) for localization of multiple AUVs using a single moving beacon, which is not only scalable but also suited for the low bandwidth, low capacity acoustic channel. The beacon and AUVs maintain separate filters. The beacon broadcasts only the changes in its state and uncertainty, since the last broadcast, to the AUVs asynchronously. The AUV filter reconstructs the beacon state using the information in the acoustic messages. The beacon/AUV, process, and observation models are considered to be linear. The proposed approach is evaluated against CEKF, Egocentric EKF, Interleaved Update (IU) <cit.>, and DR for two cooperating scenarios: a) Ship as a beacon b) AUV (resurfacing for GPS) as a beacon. Results show that DEIF performs similarly to CEKF, although its performance is subject to packet loss. A pre-planned path is chosen for the beacon since the mission is known. Allota et al. <cit.> presented a strategy based on geometrically calculating individual locations through inter-AUV and AUV-Ship range information, which is then utilized in the Kalman filter measurement step. The proposed scheme is evaluated using 3 AUVs and one ship, although it can be scaled while ensuring at least one AUV has a DVL sensor. The AUV, which has a DVL sensor, is denoted as the server, and the tetrahedral geometry-based algorithm is run only on it in a centralized manner with state inputs from other vehicles. The calculated locations are then communicated to other AUVs. The server AUV uses a non-linear complimentary filter, while all other AUVs use KF for position and velocity estimates. Intervehicle communication, excluding server, has no information exchange and is only used to calculate the range. The scheme has no limitations on the relative distances or paths, although it requires more than four vehicles to work. Harris and Whitcomb <cit.> proposed range and range rate estimation based on cooperative localization of AUVs, without DVL or ADCP, with the help of surface ship. A delayed state centralized EKF was used for estimation. It was shown through simulations that including the range rate information didn’t improve the localization error. Only in the case of poor range measurements marginal improvement was observed with range rate. The above approach was extended in <cit.> to use a dynamic model of the vehicle instead of a kinematic model. Compared to AUVs estimating with a kinematic model without DVL, the proposed method gives results comparable to AUVs with DVL and kinematic model. The surface ship was navigating along a circular trajectory about the work area of the AUV. This approach, however, is heavily dependent on accurate modeling of the AUV. In <cit.>, an AUV was localized using a surface ship and a static beacon. The approach relies on utilizing information exchange within the existing ad-hoc network among AUVs, surface vehicles, and beacon nodes for localization. AUVs are assumed to be operating in deep water without the bottom lock and rely on IMU, relative velocity, and range/bearing information for positioning. The ship localizes the AUV using a high-precision acoustic positioning (HiPAP) system. 
Each AUV runs two EKF algorithms, one for its own localization and the other to estimate the neighboring nodes' positions. Outliers are rejected using the Mahalanobis distance metric, while delays are handled using a back-and-forth approach wherein measurements are applied to previous estimates and then propagated forward. The approach requires that the AUVs be equipped with a very high-accuracy INS and a USBL modem. This not only increases cost, but the USBL also limits the size of the team. An overview of all the above approaches using crewed vessels as CNAs is given in Table <ref>. While the results using a ship or boat as a CNA show performance that is almost on par with traditional localization approaches such as LBL, operating a ship or a boat is prohibitively expensive. This is especially true for long-duration missions, due to the expenses of the crew and the operation and maintenance of the vessel <cit.>, <cit.>. Furthermore, such vessels are non-autonomous and thus need the mission path of the AUV to be known a priori, and they can only move along paths that are simple shapes made up of straight lines or circles. ASV as localization aid The costs associated with crewed ships naturally led to research on surface vehicles that are uncrewed, autonomous, and able to perform reliably for long durations at sea. The reader can find a review of uncrewed surface vehicles in <cit.>. While the ASV does away with some of the costs associated with the workforce, maintenance, etc., of a large boat or ship, it introduces new challenges in the form of its control, coordination, and mission planning. This has led to new research directions, such as optimal path planning, formation control, and observability analysis in the context of CL. In the following subsections, we look at the current state of the art in CL of AUVs with the help of autonomous surface vehicles. Single ASV and Range information only: In this approach, the AUVs localize using range information calculated from the ASV's acoustic transmissions and the data therein. In the simplest of these cases, a single ASV is used as a localization aid for a single AUV. To find its X and Y coordinates with respect to a pre-defined frame of reference (since depth is known), the AUV requires at least two distance measurements from two different ASV locations if its current location is known with some uncertainty. For a mobile AUV carrying out its mission, this imposes constraints on the ASV motions and trajectories. Fallon et al. <cit.> used the current and past range measurements, the current and past locations of the ASV, and the distance traveled by the ASV in between measurements for estimating the position with an EKF. For range measurement updates, the position and covariance of the ASV are incorporated, but the cross-correlation is neglected, which can lead to overconfident estimates. As the EKF fails to converge when the initial uncertainty in the location is large, methods for estimating the initial location of the AUV and the ocean currents are discussed in <cit.>. After diving, when the uncertainty is largest, the AUV uses a vision system and a stored image database to calculate its location offset. While it is stationary on the seafloor, an ASV or ship with a towed beacon is used to estimate the initial position and the sound speed profile. In <cit.>, an ASV with USBL was used to localize an AUV without DVL or IMU. The AUV was assumed to have only attitude information and an acoustic model.
The range information was fused with a speed estimate derived from thruster motor current measurements in an EKF whose state was augmented to include the unknown current velocities. Since the EKF linearizes the system, its estimate is less accurate. To mitigate this, Gao et al. <cit.> combined an iterative divided difference filter (I-DDF) algorithm with a Huber M-estimator for localizing an underwater vehicle. The advantage of the DDF is that it does not linearize the system, and compared to the UKF, its covariance matrix estimate is more accurate. The slow convergence of the DDF in systems with weak observability and large initial error is mitigated through iteration. The Huber-based M-estimator is employed to take care of outliers in the range measurements. The proposed Huber-based iterative DDF (HIDDF) algorithm is shown to perform better than the EKF, DDF, and IDDF alone, albeit with a higher computational burden. In <cit.>, a factor graph (FG) based approach is proposed to estimate the AUV's location and the current velocity in the absence of bottom lock/DVL using ranges from a surface vehicle and neighboring AUVs. A factor graph is a graphical representation of the joint probability density function of all the unknown vehicle positions given the sensor measurements. To solve the nonlinear estimation problem, a maximum a posteriori (MAP) algorithm is used, and to maintain the observability of the whole system, a formation-switching strategy is employed. The effects of packet loss and clock drift were also evaluated in field trials. Factor graphs, however, have high memory requirements and can become complex with increasing team size. In <cit.>, an EKF and MAP based moving horizon estimation (MHE) algorithm is proposed wherein the EKF is used to generate high-frequency estimates using depth and IMU data, while the MHE is used to fuse the low-frequency range information along with its history to generate consistent estimates that do not suffer from linearization errors. To keep the computational costs in check, the MHE is implemented as a moving-window version of MAP. The particle filter is another popular filter for state estimation. In <cit.>, <cit.>, the performance of the PF is compared against the EKF. Simulations show similar performance of the PF and EKF due to the assumption that the measurement errors are Gaussian distributed. In <cit.>, the performance of the EKF is compared against PF and nonlinear least squares (NLS) estimators. The NLS estimator outperformed both the EKF and the PF, especially in the case of post-processed data. In <cit.>, <cit.>, a comparison between DR, a distributed EKF, and a loosely coupled PF for estimating vehicle states using OWTT ranges and a dynamic vehicle model is presented. The distributed EKF on every vehicle is augmented with the other vehicles' states. The sum of covariances is used in the augmented covariance matrix to reduce errors due to overconfidence. In the loosely coupled PF case, the PF was used only for the measurement update, while the prediction stage used the output of the EKF. A KF-based estimator for velocity (due to ocean currents) and synchronization bias was used to correct errors caused by ocean currents and clock offset. Experimental results show that the PF performed marginally better than the EKF. Including the bias and ranges from multiple sources (other AUVs in addition to the surface vehicle) improved the estimates even further. While newer techniques such as MHE, PF, IDDF, etc., have been reported, as seen from the above discussion, the EKF remains widely popular due to its simplicity and effectiveness in most cases.
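Most of the range-only schemes above reduce, at the measurement level, to correcting a dead-reckoned position with a scalar range to a beacon at a known location. The following is a minimal sketch of such an EKF range update; it is not taken from any of the cited works, and the state layout, positions, and noise values are invented purely for illustration.

```python
import numpy as np

# Illustrative planar range-only EKF update for an AUV aided by one surface beacon.
# State x = [x, y] in metres; all noise magnitudes are made-up example values.
x = np.array([100.0, -40.0])          # dead-reckoned AUV position
P = np.diag([25.0, 25.0])             # its covariance (m^2)
beacon = np.array([0.0, 0.0])         # ASV position received acoustically
z = 109.0                             # measured range (m), e.g. OWTT times sound speed
R = 4.0                               # range measurement variance (m^2)

# Predicted range and its Jacobian H = d(range)/d(state)
d = x - beacon
r_pred = np.linalg.norm(d)
H = (d / r_pred).reshape(1, 2)

# Standard EKF correction
S = H @ P @ H.T + R                   # innovation covariance (1x1)
K = P @ H.T / S                       # Kalman gain (2x1)
x = x + (K * (z - r_pred)).ravel()    # corrected position
P = (np.eye(2) - K @ H) @ P           # corrected covariance
print(x, np.trace(P))
```

A single range leaves the position under-determined, which is why successive pings from sufficiently different beacon positions (the zig-zag, diamond, or orbit paths discussed next) are needed for the estimate to become observable.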
Another interesting challenge with ASV-based cooperative localization, especially with a single ASV, is the path planning of the ASV to minimize the positional error of the aided AUVs. In <cit.>, the ASV is made to follow a simple pre-planned zig-zag path, which can be easily parameterized and implemented but is not optimal and is unsuitable for more than one AUV. An extension of this work was presented in <cit.>, where it was also shown that with nonlinear estimation the system can be observable under less stringent conditions than with a linearized version of the system, which causes the EKF to diverge. The ASV paths were generated using the AUV position estimates and uncertainty to maintain the observability of the system. Two such paths, a zig-zag and a circular orbit about the AUV, were evaluated. In <cit.>, the ASV, from its knowledge of the AUV's mission, uses a simple heuristic algorithm based on minimizing the integral of the squared inter-vehicle distance to plan its positions for acoustic transmissions. The approach, however, is not scalable, as each iteration of the algorithm requires three transmissions. In <cit.>, to find the optimal waypoint for the next acoustic transmission, the ASV calculates the error uncertainty ellipse using the state information transmitted by the AUV. The waypoint is then selected such that the error ellipse is reduced the most, which corresponds to ranging along the direction of its major axis. These results were extended in <cit.> to an ASV aiding multiple AUVs by incorporating inter-vehicle range measurements. Two cases were considered: one where the ASV helped minimize the positional error of the AUV with the worst error, and a second where the ASV helped minimize the positional error of the whole group. For a similar scenario, Chitre <cit.> proposed a dynamic programming (DP) based approach to generate ASV paths. It minimizes the AUVs' localization error ellipses along the major axis, which is orthogonal to the range measurement vector. The Bellman equation for the optimal DP solution is solved using an approximation of the value function, i.e., planning over a finite horizon. The approach provides a globally optimal trajectory for the ASV from knowledge of the AUV paths. This is similar to the work in <cit.>, <cit.>, <cit.> for localization of AUVs using a single static beacon. In <cit.>, the authors tackle the path planning problem using the Markov decision process (MDP) framework. An MDP policy maps a state to an action; in this case, the probability of choosing the bearing angle of the ASV given its current state, the AUV path, and the relative range and angle. The ASV is further made to adaptively "learn" to position itself through the cross-entropy (C-E) method over a segment of the AUV's path. A smoothing filter is applied to prevent the C-E method from converging to a local minimum. The ASV path was computed using three different strategies: a) proximity to the AUV with the largest error, b) along the centroid of the AUVs' formation, and c) based on the sum of squared errors of all AUVs. The third approach was concluded to be simpler and produced better results. Computationally, this approach is efficient only if the learning is done offline, and its complexity increases linearly with the number of AUVs. A comparison of the DP and MDP methods is given in <cit.>. In <cit.>, a genetic algorithm (GA) based policy search approach that is computationally more efficient than the MDP-CE approach is proposed.
The state space is divided using Voronoi tessellations to reduce the number of representative states compared to the MDP. A variable-length GA is used to select the appropriate state-action pair. Seto et al. <cit.> proposed an optimal path planner in 3D for an ASV aiding multiple AUVs performing mine-sweeping tasks, using an approach similar to <cit.>. The path planner involves a look-ahead strategy that also incorporates a distance penalty, which helps to bound the error and reduces the computational effort. In <cit.>, two cases are considered: a) the ASV knows the AUV paths a priori, and b) it estimates the AUV paths in real time using the next three waypoints communicated by the AUV. In the latter case, the cost of heading decisions at L future time steps is calculated, and the path is chosen such that the AUV position errors are minimized. The latter case is useful in tasks where the AUV may have to change its path during the mission. In <cit.>, the condition number of the observability Gramian and the empirical observability Gramian of the linearized discrete system are used to optimize the trajectory of the ASV. In the former, the condition number is minimized by minimizing the difference between the eigenvalues of the trajectory inertia matrix. For a single ASV localizing multiple AUVs, the condition-number-based approach is shown to be better than the empirical-Gramian-based one while being less computationally taxing. Walls and Eustice <cit.> proposed an information-maximization approach, akin to maximizing the determinant of the Fisher information matrix (FIM), to compute optimal trajectories for an ASV localizing multiple AUVs. Only ASV trajectories that can be parameterized by diamond and/or zig-zag patterns are considered, instead of searching over the full space of trajectories. This is in contrast to <cit.>, where the approach was based on a future segment of the AUV paths at any time t. It was shown that the proposed approach could attain a higher information gain than the others. A problem with such easily parameterizable trajectories is that if the pattern is large, the measurements appear to come from a straight line segment, while if the pattern is small, the difference between successive measurements may not be significant for a vehicle that is far away. Sousa et al. <cit.> presented two FIM-based approaches for finding the optimal path, wherein the ASV calculates its next location with or without using the estimated AUV position. The calculation involves selecting the point with the maximum FIM determinant within an estimated set of all points reachable from the current location before the next communication. The effects of AUV depth and horizontal range on localization performance are evaluated, with the localization error increasing with depth and decreasing with radius. In <cit.>, an extremum seeking (ES) based approach is proposed. In ES, the optimal input for an unknown input-output map is found using online gradient estimation. The proposed approach is vehicle-model agnostic, subsumes constant disturbances such as gravity, currents, etc., requires minimal acoustic data transmission, and does not require the AUV trajectory a priori. The cost function is formulated in terms of the estimation filter's covariance matrix, thus ensuring low computational complexity. Its optimal value minimizes the maximum eigenvalue of the covariance matrix, which in turn maximizes the minimum eigenvalue of the observability Gramian.
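To make the FIM-based waypoint selection concrete, the sketch below scores candidate ASV positions by the determinant of the range-only Fisher information accumulated over two successive pings to the current AUV position estimate. It illustrates only the general D-optimality idea and is not the method of any specific cited paper; the grid, positions, and noise level are invented.

```python
import numpy as np

def range_fim(target, beacon_positions, sigma=2.0):
    """Fisher information of the target's 2D position from range-only
    measurements taken at the given beacon positions (illustrative model:
    independent Gaussian range noise with standard deviation sigma)."""
    fim = np.zeros((2, 2))
    for p in beacon_positions:
        u = target - p
        u = u / np.linalg.norm(u)          # unit line-of-sight vector
        fim += np.outer(u, u) / sigma**2   # each range constrains the position along u
    return fim

auv_est = np.array([200.0, 50.0])          # current AUV position estimate
prev_ping = np.array([0.0, 0.0])           # where the ASV pinged last

# Candidate positions the ASV could reach before the next ping (made-up grid)
candidates = [np.array([x, y]) for x in np.arange(-100, 101, 50)
                               for y in np.arange(-100, 101, 50)]

# D-optimal choice: maximise det(FIM) over {previous ping, candidate}
best = max(candidates,
           key=lambda c: np.linalg.det(range_fim(auv_est, [prev_ping, c])))
print("next ping position:", best)
```

Letting sigma grow with range, as several of the cited works do through distance-dependent noise models, would additionally penalise candidates far from the AUV.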
The extremum-seeking formulation, however, requires the ASV to move arbitrarily in the horizontal plane, which may not always be the case. The approach also requires to-and-fro communication between the AUV and the ASV, which introduces delays and does not scale well to multiple AUVs. An extension to the case of underactuated beacon vehicles moving in a 2D plane was presented in <cit.>. In <cit.>, an algorithm that combines priority-based expansion of a search tree with random-sampling-based exploration to adaptively plan multiple future waypoints is presented. Sampling-based exploration is chosen to reduce the number of states that need to be evaluated, while the search tree is expanded such that the angle between the distance vector and the major axis of the AUV's uncertainty ellipse is minimized. The optimal locations are such that they also correspond to the optimal time for transmission and are calculated by considering the limitations of the ASV dynamics. The optimal time for transmission is chosen from a set of TDMA time slots in which the ASV is allowed to transmit. In the case of multiple AUVs, the sum of the total uncertainties is minimized, since the optimal locations for all AUVs will not be the same; however, priorities, if required, could be assigned. Rua et al. <cit.> proposed a novel solution to single-beacon, range-based cooperative localization of an AUV wherein the beacon is mounted on a rotating arm, which in turn is mounted on the hull of an ASV or at a static location. However, the AUV motion is restricted to trimming trajectories, i.e., only constant linear and angular velocities. It is shown that the system is observable when the AUV moves with a) constant linear and angular velocity or b) constant linear velocity under initial-condition constraints, while the system is unobservable when the AUV is not moving. Further optimality analyses, covering the optimal motion of the beacon alone, of both the beacon and the vehicle, the optimal fixed rotation rate of the beacon, and the optimal energy and rotation rate, are carried out in <cit.>. Another aspect of cooperative localization between vehicles is the observability analysis of the AUV states. Authors have reported analyses using nonlinear weak observability <cit.> and nonlinear to linear time-varying transformations (NL-LTV) <cit.>. Viegas et al. <cit.> presented an observability analysis using NL-LTV transformations with state augmentation. The authors considered two cases under the influence of unknown ocean currents: a) the ASV transmits velocity and position information to the AUV, and b) the ASV transmits only position information, and the AUV estimates the ASV velocity through its own sensors. A Kalman filter was then proposed for the LTV system, which guarantees global asymptotic stability of the error dynamics. In <cit.>, a nonlinear observability analysis in the discrete domain was presented. It was shown that an AUV with only an IMU, a depth sensor, and range information is weakly observable in the nonlinear case but unobservable in the linearized case. There have also been attempts at efficient and optimal use of the acoustic channel to share information between cooperating vehicles. In <cit.>, Meira et al. coupled the CL algorithm from <cit.> with a logic-based communication approach that transmits location information from the ASV to the AUV depending on a threshold instead of at pre-determined periodic intervals. This threshold is based on the difference between the ASV's position estimate and its GPS data.
While the authors analytically proved the boundedness of the position error under certain assumptions on formation, velocity, and currents, the experimental implementation made no such assumptions. It was demonstrated that the approach gives only marginally worse performance than periodic transmission but with almost 62.5% fewer transmissions. It was assumed that the AUV runs a model of the ASV in parallel to its own in order to estimate better filter parameters. In <cit.>, the effects of an adaptive time-of-launch (TOL) of the localization packets within the TDMA time slot were studied. It is shown that by choosing the TOL based on a criterion, the localization error can be reduced compared to a static TOL. EKF and NLS trilateration-based estimators were compared for the cases of a single static beacon, three static beacons, a follower ASV, and a lawn-mowing ASV, wherein the EKF performed better than the NLS-based method. Table <ref> gives an overview of all the above approaches using a single ASV and only range information for localization. Single ASV with Range and bearing information: In this approach, localization between vehicles is carried out with sensors that provide both range and bearing information, such as USBL, SBL, etc. When range and bearing information is combined with depth sensor data, localization of any vehicle in 3D ideally requires only one acoustic transmission if the clocks are synchronized. In practice, considering a single ASV-AUV case, the ASV initially sends an interrogation signal to which the AUV replies. Using the two-way travel time and the calculated bearing, the ASV can localize the AUV. The estimated position is then communicated back to the AUV. In <cit.>, a hierarchical cooperative localization scheme between one ASV and two AUVs is presented. One of the AUVs acted as a guide/server for the other AUV and was stationed between the latter and the ASV in the water column. It localized itself relative to the ASV and the other AUV using USBL, along with absolute position information from the ASV and velocity/depth data from the acoustic packets. It used a linearized system model with a KF for state estimation of the ASV and the other AUV. The other AUV had only an AHRS and estimated its states using a speed estimate and the data communicated by the middle AUV. In <cit.>, USBL in inverted mode (iUSBL) with OWTT is used to localize an AUV using a surface craft. In iUSBL, the USBL modem is onboard the AUV instead of the ASV. The AUV interrogates the ASV, which replies with its position data. This data, along with the calculated range and bearing, is then used for localization. This provides position information for the AUV independent of the INS. Using OWTT further alleviates the need for back-and-forth communication; thus, multiple vehicles can be localized simultaneously. In <cit.>, a UKF-based state estimator is proposed for an AUV having access to range, bearing, and elevation information relative to the ASV, along with its own velocity estimate and the velocity of the ASV. Packet latency issues are resolved by back-calculating the estimates using the current measurements. Salavasidis et al. <cit.> proposed an algorithm that uses an EKF for state estimation of the AUV but runs partially on the AUV and partially on the ASV, such that the measurement update is carried out on the ASV using USBL to reduce the computational burden on the AUV. The computed location estimate is then communicated to the AUV. The approach, however, has a high communication overhead.
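The geometry behind the statement above, that a single transmission ideally suffices when range, bearing, and elevation are available, is a plain spherical-to-Cartesian conversion. The sketch below uses invented numbers and ignores lever arms, attitude compensation, and acoustic ray bending.

```python
import numpy as np

# Illustrative single-fix USBL localization (no roll/pitch compensation, straight rays).
asv_pos = np.array([4.0, -2.0, 0.0])   # ASV GPS position in a local ENU frame (m)
rng = 150.0                            # slant range from the two-way travel time (m)
bearing = np.deg2rad(35.0)             # horizontal angle, measured from north
elevation = np.deg2rad(-55.0)          # negative: target below the transducer

# Relative vector from the ASV to the AUV
rel = rng * np.array([np.sin(bearing) * np.cos(elevation),   # east
                      np.cos(bearing) * np.cos(elevation),   # north
                      np.sin(elevation)])                     # up (negative = down)

auv_fix = asv_pos + rel
print("AUV fix (E, N, U):", auv_fix)
# A depth-sensor reading can replace or cross-check the vertical component,
# which is how the works above typically fuse such a fix in practice.
```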
In <cit.>, the ASV computes the AUV location through USBL and communicates back only the error measured with respect to the GPS position instead of the absolute position. It also tracks the AUV using a virtual target approach while maintaining a given offset. The communicated error is used in a KF onboard the AUV, along with the DR sensors, to compute its position. A maximum a posteriori estimation-based approach is presented in <cit.>. The range and bearing information is acquired using a USBL on a surface vehicle stationed at a fixed point. In <cit.>, an artificial potential field (APF) based controller for the ASV is presented to support the localization of multiple underwater agents, which could be AUVs or human divers. The agents use iUSBL and exchange location, velocity, and course data with the ASV. As iUSBL works only when the communicating nodes are vertically aligned, the APF is created such that it has an attraction basin between the agents and repulsive fields directly above them. In <cit.>, an MDP is combined with a Q-learning-based reinforcement learning approach. However, the Q-learning complexity increases exponentially with the number of AUVs. The back-and-forth communication involved in this approach limits its scalability to a few vehicles. The bearing calculations further depend on the roll and pitch motions of the ASV when the USBL is mounted on it. The limited range of the USBL also imposes constraints on the operating area of the team. Since USBL systems are very costly, utilizing them on each team member as iUSBL increases the cost drastically. These are the primary reasons why range-only localization is more popular and there are very few works in CL utilizing USBL. Table <ref> gives an overview of approaches using a single ASV with range-and-bearing-based CL. Multiple ASVs as localization aids: As seen previously, for a single ASV to aid even a single AUV in localization, the ASV must move to optimal waypoints. The primary advantage of having multiple ASVs to aid in localization is thus the relaxed requirement on the motions and trajectories that an ASV has to execute to keep the localization error of the AUVs within bounds <cit.>. It is also beneficial when the AUV team is large and spread over a large area, as it would be impossible for a single ASV to satisfactorily aid all the AUVs at the same time, which would necessitate a priority-based approach. Multiple ASVs also increase the probability of acoustic data reception from the AUVs, minimizing packet-loss errors. This, however, comes with additional costs in terms of hardware, mission planning complexity, and computation. Although the performance benefits outweigh the costs, only a few works have addressed this problem. One of the first works to propose multiple autonomous surface craft as CNAs is <cit.>. This is an extension of the moving long baseline (MLBL) work done in <cit.>, using ASVs instead of boats. MLBL is similar in concept to LBL, but the beacons are mobile. Bahr et al. <cit.> used two ASVs for cooperative localization of an actual AUV and proposed an Interleaved Update (IU) algorithm that works in the presence of measurement outliers and can scale to large team sizes. At each instant, probable position candidates are evaluated, and the most appropriate one is selected by minimizing a cost function based on the Kullback-Leibler divergence (KLD). In <cit.>, a diver/AUV position estimation technique using multiple ASVs was proposed.
The maximum velocity, acceleration, and turning rate of the diver target are assumed to be known. The position is deduced using a transmit/reply TWTT scheme through a CEKF algorithm that uses the back-and-forth technique to account for delays and the vehicles' motion during measurements. Chen et al. <cit.> extended the work in <cit.>, which uses a single ASV, to MHE-based localization of a single AUV with multiple surface vehicles in an MLBL approach similar to <cit.>. The proposed approach is compared against a KF, showing marginal improvements. The same author in <cit.> investigated the optimal number of ASVs, the AUV-ASV range, and the effect of a distance-dependent noise factor using a cost formulation. It was shown that the cost is inversely proportional to the number of ASVs and directly proportional to the distance-dependent noise factor. An optimal range value that minimizes the AUV localization error is also found. In <cit.>, a shore-based centralized approach that uses knowledge of the ocean current model is presented. The AUVs employ a UKF, with the drift due to ocean currents modeled as a random walk and included as one of the states. The AUVs' estimates are processed using a consensus algorithm at a shore-based centralized server, with the surface vehicles acting as communication intermediaries. The consensus current estimate is then communicated back to the AUVs to improve their estimation. In <cit.>, the concept of using a companion vehicle to aid an ASV in localizing a target using range information is proposed. The companion vehicle can be used solely as an aid or could be performing an independent mission of its own. Furthermore, the companion vehicle's location may or may not be known to the ASV; in the latter case, it is measured. In all cases, the companion vehicle shares its range to the target with the ASV. The optimal ASV trajectories are calculated by maximizing the determinant of the FIM. The target is assumed to be stationary except when the companion is also cooperatively tracking it. In <cit.>, an approach for localizing N target AUVs by M cooperating surface AUVs, where M>N, using the TDOA technique in 3D is presented. The effects of distance-dependent noise, uncertainty in the target AUVs' initial locations, and the curved propagation path of the acoustic signal are taken into account. The optimal formation/locations for the surface AUVs are arrived at by sequentially evaluating the determinant of the FIM at each time step and moving only if it is higher than the current value. The optimal formations are evaluated in simulations for different target AUV depths, centralized and decentralized sensor pairings, and different numbers of surface and target AUVs. However, only cases with static target AUVs are considered. In <cit.>, a multi-ASV system for localization of multiple AUVs that is scalable in the number of AUVs was proposed. The AUVs are passive listeners, while the ASVs transmit their location information using TDMA. The ASV is assumed to know the AUV path so that it can remain in its vicinity to aid in localization. The optimality of the ASV locations, however, is not considered. In <cit.>, a robust KF is proposed that uses a heavy-tailed mixture distribution for outlier mitigation in the case of two ASVs aiding a single AUV. It is shown using experimental data that the proposed approach is better than several contemporary approaches for outlier-affected acoustic communication.
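With several ASVs in acoustic contact at once, a horizontal fix can be obtained by straightforward range trilateration, which is the geometric core of the MLBL-style schemes discussed above, before or instead of filtering. A minimal Gauss-Newton sketch with an invented geometry and noise-free ranges follows; it is illustrative only and not the algorithm of any cited paper.

```python
import numpy as np

# Illustrative 2D trilateration of an AUV from simultaneous ranges to three ASVs.
asvs = np.array([[0.0, 0.0], [300.0, 0.0], [150.0, 260.0]])   # ASV positions (m)
auv_true = np.array([140.0, 90.0])
ranges = np.linalg.norm(asvs - auv_true, axis=1)              # measured ranges (noise-free here)

x = np.array([50.0, 40.0])               # crude initial guess, e.g. the last DR fix
for _ in range(10):                      # Gauss-Newton iterations
    diffs = x - asvs                     # (3, 2) vectors from each ASV to the estimate
    preds = np.linalg.norm(diffs, axis=1)
    J = diffs / preds[:, None]           # Jacobian of the predicted ranges w.r.t. x
    r = ranges - preds                   # range residuals
    x = x + np.linalg.lstsq(J, r, rcond=None)[0]

print("estimated AUV position:", x)      # converges to auv_true
```

In practice such a fix is fused with dead reckoning in one of the filters discussed earlier rather than used on its own.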
Table <ref> gives an overview of approaches using multiple ASVs. §.§.§ Navigational Aid underwater While surface CNAs have the advantages of lower localization error and easier communication, they are unsuitable in certain applications, especially defense applications such as espionage, target tracking, etc. Furthermore, surface craft can be affected by sea states and other surface traffic. However, the acoustic channel's challenges, particularly with deep-diving AUVs, can have a far more detrimental effect on CL performance. An alternative approach is to have the navigation aid (NA) vehicles close to the AUV team performing the specified tasks. This also benefits from outfitting the survey AUVs with high-accuracy task-specific sensors and medium- or low-accuracy (thus low-cost) navigation sensors. The aiding AUVs, which are fewer in number, meanwhile carry high-accuracy inertial navigation sensors. In case absolute positioning information is required, the NA AUVs can then resurface for a GPS fix. While different terminology has been used for this category in the literature, such as leader-follower, mother-daughter, and master-slave, we will refer to it as the server-client approach. Here the server vehicles are the ones that have high-accuracy navigational sensors and provide localization support for the client survey AUVs by sharing their current location information. One of the earliest solutions resembling cooperative strategies for localization was presented in <cit.>, wherein a server-client approach was used. The client AUVs would localize using USBL with respect to a server AUV, which in turn would localize itself using LBL. In <cit.>, a similar approach was proposed, but the client AUVs use only range and location data received from the server through an acoustic modem instead of USBL, along with data from their DR sensors. Vaganay et al. <cit.> is the first work to present the MLBL concept wherein two AUVs perform the role of CNAs for other survey AUVs. The time-synchronized survey AUVs calculate their positions by passively listening to location-update pings from the CNAs, which flank them on both sides. Since the survey AUVs are passive listeners, this approach is highly scalable. In <cit.>, a centralized delayed-state EKF (DSEKF) running on a server was proposed for fusing information in a one-server, multiple-client configuration, taking into account the propagation delay of the acoustic signals. With one filter instance for every client vehicle, the computational complexity increases with the number of vehicles. In <cit.>, a distributed approach combining dynamic SLAM (DSLAM) and cooperative localization of client AUVs is proposed. The server AUVs are assumed to be localized with very low error and are used as dynamic landmarks for the client AUVs' SLAM algorithm. In the absence of server AUVs in proximity, CL with other client AUVs is used to bound the error. Consistency is preserved by using the client AUV with the smallest covariance as the beacon. Both DSLAM and CL are formulated as independent Bayes filters and solved using an EKF. The consistency issue when using server AUVs is resolved in <cit.> by formulating a distributed modified EKF (MEKF). Here, the MEKF takes care of the cross-correlations by employing Jacobian multipliers that the client AUVs use to track the server AUV's location changes during each DSLAM correction phase. But since the AUV model is nonlinear, the error performance of the MEKF is limited, which is why a PF is then proposed in <cit.>.
It is shown that the PF performs marginally better, with similar or lower computational complexity than the MEKF. In <cit.>, an algorithm based on the origin state method is proposed to estimate the states of multiple client vehicles using acoustic range measurements and pose-graph information from multiple server vehicles. It is robust against packet loss and is bandwidth-efficient. The server communicates incremental pose-graph information relative to a server state known by the client, termed the origin state. The server vehicles are assumed to have access to their absolute position information, for example, through a surface vehicle. It was shown that a DEIF estimator on the client vehicles produced consistent results similar to centralized estimation schemes, without any overconfident estimates. The above results were extended in <cit.> by constructing the local state in the form of factor chains using a factor graph framework with odometry and/or GPS as factors. These factor chains are then approximated through composition, unlike the approximations relative to the origin pose in the previous paper, and are shared with the other vehicles. This approach does not require shifting the origin pose and allows vehicles without odometry to join the network. While it is similar to <cit.>, it has lower communication overhead and better performance. Ben et al. <cit.> used factor graphs for the case where both range and bearing information are used for client positioning. Since cycles may exist within the graph, a clustering method is used to obtain cycle-free graphs. Zhang et al. <cit.> presented a triangulation-based approach to localize a single client AUV using three server AUVs. The server paths were independent of each other and of the client's mission. This approach fails if the server tasks take them beyond the AUV's communication range. In <cit.>, a parallel projection algorithm (PPA) based approach is proposed wherein the global pose of the vehicle to be localized (rotation matrix and translation) is estimated through an MLE formulation that is convexified and solved using the PPA. The approach is compared against a semi-definite programming-based approach for the coordinate-alignment-based formulation. The proposed approach is shown to have similar or better performance while having a faster convergence rate. Zhang et al. <cit.> extended the MDP-CE approach proposed in <cit.> to the server-client configuration, using two server AUVs to aid multiple client AUVs. However, the approach requires retraining the CE algorithm every time the trajectory of the clients is changed. In <cit.>, the effects of an unknown constant current are investigated. Only the server is equipped with a DVL and uses USBL to measure the clients' range and orientation. The locations of the server and the clients are estimated by the server using a hybrid UKF-KF estimator, wherein the prediction step uses a UKF while the measurement step uses a linear KF. The authors also consider the case where the clients communicate among themselves but do not consider the cross-correlation, thus producing overconfident results that are worse than with no inter-communication. Yan et al. <cit.> proposed a cooperative localization approach for server-client UUVs in polar regions, where navigation is difficult as the meridians converge rapidly, leading to calculation overflow and error magnification in a conventional SINS.
For this, a polar-grid-algorithm-based state formulation is used, and a delayed-state adaptive KF is employed for state estimation of the follower UUVs. In <cit.>, <cit.>, multiple AUVs are used as localization aids for a main AUV performing a critical task. Centralized processing is carried out on the main AUV, which estimates its own state and the states of the aiding AUVs. The aiding AUVs take turns resurfacing for a GPS fix and use an EKF to estimate their states using this data and the information received from the main AUV. In <cit.>, a KF-based solution to the time delay problem in server-client cooperative localization is presented. The client AUV is localized by the server AUV using USBL. The time delays are handled by modifying the update step of the KF in terms of the delayed measurement updates, i.e., the time delays are converted into a bias in the observation equation. Simulations show that the proposed approach reduces the error in the estimates by orders of magnitude. In <cit.>, an augmented EKF (AEKF) for mitigating time delays in the acoustic channel was proposed. With knowledge of the delay, the client AUV's state vector was augmented with all the states from the actual transmission time to the current time, from which the estimate was then propagated to the current time. The results presented indicate that the AEKF can bound the localization error better than the EKF in the presence of time delays, at the expense of computational cost. In <cit.>, cooperative localization of a single client vehicle with multiple servers using a probability hypothesis density filter is presented. The filter runs on all server vehicles and estimates the client location using TWTT. The estimates are communicated back to the client, which uses an information-entropy-based approach to fuse them and obtain the best estimate of its position. However, the proposed method has a high communication overhead and is computationally complex. In <cit.>, an improved TWTT communication scheme for the server-client CL approach is presented. The server interrogates each client AUV using TWTT, but instead of re-transmitting measurement updates to each client AUV, the complete state information is broadcast, thus requiring 2N+1 acoustic transmissions instead of 3N, where N is the number of client AUVs. In <cit.>, localization of a single client AUV with two server AUVs without time synchronization is presented. The ranges calculated using TWTT from both server AUVs were used in an EKF to estimate the client's position. While the proposed method is not affected by clock drift, it is not scalable. Qu et al. <cit.> investigated the optimal formation for multiple AUVs acting as servers for a single client AUV. The formulation uses the FIM and the area of the information ellipse. This approach requires clock synchronization among the vehicles as well as bearing information. In <cit.>, these shortcomings are mitigated by relying on RSSI information instead. Fan et al. <cit.> proposed a maximum correntropy (MCC) based unscented PF to mitigate outliers in range measurements, which uses KLD-based resampling. Compared to the PF, CKF, MCC-UKF, UPF, Huber-based UPF, and cubature KF, the proposed algorithm produced the lowest average error but is computationally intensive. In <cit.>, an adaptive extended Kalman filter that estimates the unknown process and measurement noise covariances using the expectation-maximization method was presented. For a localization problem involving two servers and one client, the proposed method is compared against DR, EKF, the innovation-based AKF, and the Sage-Husa AKF.
The proposed algorithm converges the fastest and has better error performance than the other methods; compared to the EKF it is only marginally better, with higher computational complexity. In <cit.>, an unscented PF (UPF) based estimator was proposed to handle the non-Gaussian nature of the noise and the depletion of particles in the PF. In the UPF, a UKF is used to update the state of each particle on the client AUV. While the proposed algorithm has the lowest error compared to the EKF, UKF, and PF, it requires ten times more computational time. In <cit.>, a Student's t-based EKF (SEKF) for outlier mitigation is presented. The Student's t-distribution is used for the process and measurement noises instead of the usual Gaussian distribution. Results indicate that the proposed method achieves almost 30-40% better error reduction than a threshold EKF and 50-60% better than a standard EKF, albeit with slightly higher computational requirements. In <cit.>, a maximum correntropy criterion (MCC), adaptive neuro-fuzzy inference system (ANFIS), and CKF-based approach for the mitigation of outliers and acoustic packet loss is proposed. The packet loss is taken care of by ANFIS, a fuzzy system wherein the fuzzy membership functions and rules are trained with a neural network on a large amount of data instead of being selected arbitrarily. For the outliers, the MCC is used. State estimation when there is no packet loss is carried out using a cubature Kalman filter based on the MCC, and the same data are used for training ANFIS. The trained ANFIS model predicts the location when there is packet loss for more than 3 seconds. The proposed approach improves the error performance by over 60% compared to the CKF alone and by over 40% compared to ANFIS-CKF. While the computational requirements of MCC-CKF are almost the same as those of the CKF, ANFIS requires a large amount of training data. This can be an issue if packet loss is frequent and not enough data is acquired to train ANFIS. The MCC kernel bandwidth is chosen heuristically; to mitigate this, an adaptive version of the algorithm is proposed in <cit.>. The same authors in <cit.> combined an adaptive cubature KF to track ranging errors with ANFIS for detecting anomalies in the ranging data. In <cit.>, a robust Gaussian approximate smoother based on expectation maximization is presented for outlier mitigation and sensor faults. A faulty DVL is considered, and any bias in the acoustic modem data is treated as an unknown input. As for the path planning of the aiding vehicle, very few papers investigate the problem, because the server's path generally follows the client's mission. However, there are a few works where this assumption does not hold. In <cit.>, the server plans its path using the dynamic programming-based approach from <cit.>. The server AUV is assumed to know the survey plan a priori. In <cit.>, an algorithm for optimal positioning of server AUVs without prior knowledge of the client AUVs' paths is presented. The server AUV calculates the optimal positions from which to broadcast position information so as to minimize the combined location uncertainty of all client AUVs. This is done by dividing the area into grid points that can be reached by the server AUV before the next broadcast and estimating the uncertainty reduction achieved by broadcasting from each of those grid points. The point that leads to the minimum error estimates is chosen. While the proposed approach is distributed and robust against the number of beacon/survey AUVs at any instant, correlations among the vehicles are ignored.
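The grid-search idea just described can be written down compactly: for each candidate broadcast point, the server predicts how much a range update from that point would shrink each client's covariance and then picks the point with the smallest summed residual uncertainty. The sketch below reuses the scalar range update from the earlier EKF example; all positions, covariances, and noise values are invented, and it is an illustration rather than the cited algorithm.

```python
import numpy as np

def post_update_trace(client_pos, client_cov, beacon, noise_var=4.0):
    """Trace of a client's covariance after a range update from 'beacon'
    (illustrative range-only EKF measurement model)."""
    d = client_pos - beacon
    H = (d / np.linalg.norm(d)).reshape(1, 2)
    S = H @ client_cov @ H.T + noise_var
    K = client_cov @ H.T / S
    return np.trace((np.eye(2) - K @ H) @ client_cov)

# Invented client states (positions and covariances) reported acoustically to the server
clients = [(np.array([250.0, 80.0]), np.diag([60.0, 15.0])),
           (np.array([180.0, -120.0]), np.diag([20.0, 45.0]))]

# Grid of points the server AUV can reach before its next broadcast (made-up)
grid = [np.array([x, y]) for x in np.arange(0, 201, 50) for y in np.arange(-100, 101, 50)]

best = min(grid, key=lambda g: sum(post_update_trace(p, P, g) for p, P in clients))
print("broadcast from:", best)
```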
In <cit.>, a belief-space path planner based on a partially observable Markov decision process (POMDP) model was proposed, which uses a probabilistic acoustic channel model that accounts for randomness in the measurements, such as packet loss. Optimal open-loop control actions or parameterized paths are generated for the server using an EKF, the proposed model, and the known client trajectory. Table <ref> gives a summary of the approaches using an underwater navigational aid. §.§ Without a dedicated support vehicle Although having a dedicated aid vehicle has its merits in better localization and communication, there are several demerits. In addition to requiring their own path planning, aiding vehicles may not carry any mission-specific sensors and thus do not contribute beyond localization. Also, if the aiding vehicle fails, the whole mission can be compromised. In this section, we look at approaches that do away with support vehicles altogether. This can be done either with one of the vehicles surfacing for GPS or through other sensors, such as vision, SONAR, gravity, etc., that aid in localization. §.§.§ Surfacing approach In this approach, one of the AUVs in the team resurfaces to get an absolute GPS position fix. With this information, the AUV dives back and shares its absolute position with the other team members, resulting in a reduction of their localization error. In all of the following works, one of the vehicles surfaces for a GPS fix. The earliest results in this category are by Maczka et al. <cit.>, wherein they demonstrate cooperative navigation by sharing inter-vehicle ranges over the acoustic communication channel to complement the DR estimates. To mitigate the inability to transmit the full estimation error covariance matrix due to insufficient bandwidth, only a scalar function of the main diagonal elements is shared instead. Acoustic latency is taken care of by recalculating past estimates using the range measurement and propagating them to the current time. In <cit.>, a MAP-based scheme is proposed that computes consistent estimates of the full multi-robot trajectory with a communication strategy involving a constant packet size, adaptive behavior with respect to the acoustic channel, and linear scaling with the number of AUVs. Every AUV maintains two factor graphs: a multi-AUV graph and its own DR graph. Instead of all the raw sensor data, only the change-in-position factors and associated covariances from the DR factor graph, depth, range data, acknowledgment bits, and a GPS fix (if available) are communicated. To maintain consistency, bookkeeping-based tracking is employed. Backlogs due to communication channel issues are handled by combining multiple data packets. Liu et al. <cit.> describe the 'SUAVE' algorithm for localization among a swarm of AUVs. In it, the AUVs with an average tracking variance above a certain threshold resurface and remain stationary to act as beacons for the other AUVs. The authors propose an iterative multiple model (IMM) based estimator utilizing a fusion of a KF and an EKF for linear and angular motion, respectively. In <cit.>, a decentralized, opportunistic-communication-based CL within a team of AUVs, wherein the members can join or leave at any time, is proposed. There is no dedicated time slot for vehicles to communicate. The localization is performed through trilateration among the team members via a nonlinear least-squares approach using OWTT ranging and the exchanged data.
Time delays in the data packets are compensated for by taking into account the vehicle's own motion during the time difference between the received timestamp and its clock. In <cit.>, an approach that uses a measure of each AUV's confidence in its location (LC) estimate to fuse relative pose information through a KF for reducing the localization error is presented. When the LC is below a limit for any of the swarm members, they return to the surface for a GPS fix. The effect of a rogue AUV with a high LC propagating wrong information is also considered. Swarm subsections that have a low LC are aided by specially deployed AUVs with a high LC to improve their localization, but this was not validated by the authors in the simulation. Although this approach does not need any support vehicles, as mentioned before, resurfacing for absolute position information wastes time and energy. Also, once the surfacing AUV dives back down, depending upon the depth it has to dive to, its position estimate will have drifted by a significant amount. Hence, the total useful contribution to error reduction from resurfacing will not be as good as with a surface vehicle. An overview of all the papers using this approach is given in Table <ref>. §.§.§ Non Surfacing In this approach, there are no aiding vehicles and no resurfacing for GPS. The team relies only on inter-vehicle ranges and exchanged data, or on SLAM, to keep the localization error in check. Some of the team members may have higher-accuracy sensors than others, but unlike in the server-client approach, they have their own separate missions. In SLAM, the AUVs rely on other geophysical information such as gravity, the magnetic field, and bathymetry maps (using vision, side-scan SONAR, multibeam SONAR, etc.) to bound their localization error. The advantage of this approach is that it does not require a support vehicle or surfacing. However, the localization error growth will be the worst among all the strategies due to the absence of absolute position information. Alternating Landmark Matsuda et al. <cit.> proposed a novel cooperative strategy in which a group of AUVs alternately performs the roles of static landmarks (beacons) and survey vehicles. A particle filter (PF) is used for estimating the horizontal position and yaw of the vehicles from onboard sensors, including DVL, together with relative range and angle measurements. The vehicles are assumed to be hover-capable and equipped with accurate but expensive fiber-optic gyro sensors. While this approach is able to keep the error growth in check, the performance is not guaranteed in the presence of ocean currents, as the group designated to act as landmarks will drift in such a case. In <cit.>, a communication scheme to reduce the communication overhead of the previous work is proposed wherein the particles are clustered using K-means clustering. Only the cluster averages and standard deviations are then shared with the other vehicles, leading to lower data transmission. In <cit.>, the requirement of hover-capable AUVs is removed by having the landmark AUVs remain stationary by landing on the seafloor. The vehicles are divided into two separate groups of landmark and survey vehicles, with only the landmark vehicles alternately remaining stationary. In <cit.>, two methodologies are considered. In the first, one of the AUVs acts as a static landmark while the other two move; later, the other two are static and estimate the position of the first AUV.
In the second method, a server-client approach is used, in which one of the AUVs acts as a moving beacon, and the other two localize with respect to this server AUV. However, the EKF estimation is carried out in a centralized manner on one of the robots, including the estimation of the landmark robot's state, which is then shared with it. This leads to overconfident estimates. An overview of all the works in this approach is given in Table <ref>. Parallel In this approach, the team members share range and position information with their immediate neighbors or with all the team members through broadcast. In the former case, we have a directed-graph topology, while in the latter, a mesh topology. When absolute position information is not available to any of the vehicles, only the error growth rate can be reduced with this method. Eventually, at least one vehicle will have to acquire absolute positioning information from GPS, an ASV, LBL, etc., to bound the error. In <cit.>, the authors present an interleaved update (IU) algorithm that ensures consistent estimates, free from the overconfidence that can be induced by receiving multiple instances of the same information from different vehicles. For this, the authors suggest a bookkeeping approach to properly keep track of the measurements to be incorporated, including the cross-correlations of the position estimates between vehicles. Every vehicle has multiple estimation filters that track the source of each range measurement. Only those estimates which are known to be uncorrelated are used. The team is assumed not to have a structure, and any member can join or leave at any time. The approach can also take care of lost packets. The disadvantage is that the method cannot be scaled beyond three to four vehicles due to the large amount of covariance information that needs to be transmitted. Also, the estimates are quite conservative. In <cit.>, a linear programming and convex optimization-based solution for multiple AUVs equipped with low-cost sensors is presented. Each AUV is assumed to have a sensor that can measure range and bearing with respect to the other AUVs. The complexity of this algorithm increases rapidly for a team size of more than ten. In <cit.>, a probabilistic method to minimize the localization uncertainty for AUVs working under ice sheets is proposed. The acoustic packets exchanged between AUVs are used to estimate the ranges between them, and the vehicle's own velocity is estimated using Doppler shifts. Using these, together with the uncertainties in the other vehicles' locations transmitted in the packets, a probabilistic method minimizes the location uncertainty. An algorithm to optimize the trade-off between communication overhead and localization error is also presented. In <cit.>, an approach using inter-vehicle ranges and range differences that does not need time synchronization is proposed. Each vehicle interrogates the other vehicles sequentially and calculates the relative range from the reply using TWTT. Vehicles not in the current communicating pair eavesdrop on the broadcasts and use the range information to calculate range differences, which are then used along with their own ranging information to construct Euclidean distance matrices (EDMs) for localization. To mitigate noisy and incomplete data, which lead to an ill-defined EDM, three optimization-based techniques are proposed and evaluated against a least-squares-based approach and a non-optimized EDM. The results indicated that the EDM with plain ranges and the lower-bounded epigraph formulation performs best.
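The EDM-based scheme just described amounts to recovering a relative geometry from the matrix of squared inter-vehicle distances, which classical multidimensional scaling does in a few lines, up to an unknown rotation, reflection, and translation. Below is a toy sketch with an invented four-vehicle geometry and complete, noise-free ranges; it illustrates the idea only and is not the optimization formulation of the cited work.

```python
import numpy as np

# Invented positions of four vehicles (unknown to the algorithm itself)
X_true = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 150.0], [-40.0, 90.0]])
n = len(X_true)

# Squared Euclidean distance matrix built from the exchanged inter-vehicle ranges
D2 = np.square(np.linalg.norm(X_true[:, None, :] - X_true[None, :, :], axis=2))

# Classical MDS: double-centre the squared EDM and take the dominant eigenpairs
J = np.eye(n) - np.ones((n, n)) / n            # centring matrix
B = -0.5 * J @ D2 @ J                          # Gram matrix of the centred positions
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]                  # two dominant eigenvalues -> 2D geometry
X_rel = V[:, idx] * np.sqrt(w[idx])            # relative positions (up to a rigid transform)

# Pairwise distances are reproduced even though the absolute frame is not recovered
print(np.allclose(np.linalg.norm(X_rel[0] - X_rel[1]),
                  np.linalg.norm(X_true[0] - X_true[1])))
```

Anchoring the recovered shape in an absolute frame then requires GPS or at least one vehicle with absolute position information, as noted earlier for the parallel approaches.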
In <cit.>, geometric constraints that may exist within the team are exploited through a projection approach. The paper uses the nonlinear-to-LTV transformation from <cit.>, which allows the use of a simpler linear KF for state estimation. The inclusion of the geometric constraints gives much lower positional errors and covariances than without them. It is required that cyclic connectivity exists within the connection graph, which may not always be true. In <cit.>, decentralized state estimation in formations of vehicles with time-varying topologies is presented. Each AUV relies on a local observer using local measurements and limited communication with neighboring vehicles for its state estimation. Some of the vehicles are assumed to have access to absolute position information from an LBL/USBL system. Sufficient conditions for global exponential stability of the error dynamics are derived using switching systems theory. The approach is extended to acyclic formations with fixed topologies in <cit.>. The performance of the KF is shown to be similar to that of the EKF but with observability and stability guarantees. This is important, as the EKF is known to diverge rapidly if the choice of initial conditions is poor. The same authors addressed the problem of distributed state estimation in a multi-vehicle fixed-formation framework using a discrete KF formulation in <cit.>. Two algorithms, one-step and finite-horizon, are proposed to find the steady-state discrete-time KF gains under sparsity constraints. The first, which calculates the current gain based only on the current covariance, is simple and computationally less intensive, but its error performance is worse. The latter has better error performance but a higher computational load. In <cit.>, a DEIF-based decentralized cooperative localization strategy that is well suited to low-bandwidth acoustic communication and robust against packet loss is proposed. Its performance is compared against a distributed naive KF and a single KF for different packet-loss scenarios. It is shown that the proposed method provides better estimates than the other two and is robust against packet loss. In <cit.>, an approach using concurrent relative range and bearing measurements from vision-based sensors is presented. The localization problem is solved using a variant of convex disk relaxation. A set of position/bearing reference nodes, or anchors, is assumed to be deployed at fixed locations and accessible to the vehicles over time. The approach is most effective when the vehicle trajectories are predominantly linear and suffers from position/attitude ambiguity when anchors are not accessible. In <cit.>, localization among a team of AUVs in the mid-ocean zone in the presence of ocean background flows is investigated. The large-scale background flows are preloaded from ocean general circulation models (OGCMs) and used in the localization, while the local flows are measured using an ADCP/DVL sensor. The vehicles communicate state information when in range of each other. Cross-correlation is taken care of with the covariance intersection method. A marginalized (Rao-Blackwellized) PF is used for state estimation, with an EKF for position and velocity estimation to reduce the number of particles otherwise needed given the large state vector. In <cit.>, an approach that uses information-entropy-based criteria to evaluate and select information from neighboring vehicles for updating a vehicle's own state is proposed.
The performance was evaluated based on mutual information, relative distance, and estimated covariance for two cases, a leader-follower and a parallel architecture. The simulations indicate that selecting the closest AUVs for updates gave the best performance. A fuzzy-logic-based localization scheme for large swarms is presented in <cit.>. The localization is carried out using a trilateration approach with particle swarm optimization (PSO) at each AUV. Some of the AUVs are updated from a boat using USBL. These AUVs then communicate with and localize the others in the swarm. Quite a few works have also investigated the observability conditions in the parallel approach. In <cit.>, the observability analysis of localization using DR and range measurements between two AUVs is presented under the assumptions of no velocity measurements and zero latency of the acoustic signals in a 2D scenario. The local weak observability condition is derived from the determinant of the Jacobian of two consecutive range measurements, which requires that the two vehicles do not move parallel to each other at the same velocity. In other words, the team members should exhibit sufficiently exciting relative manoeuvres for the network to be locally weakly observable. This is extended to 3D space in <cit.>. The nonlinear-model-based weak observability condition is shown to be less stringent than the conditions obtained using a linear model. The linear observability condition requires a non-null angular velocity; in contrast, the weak observability condition does not. The authors use an EKF-based observer for state estimation using their own and the other vehicles' linear and angular velocity, orientation, and depth data. Instead of the binary rank condition for observability presented in <cit.>, <cit.> propose the inverse of the condition number of the observability matrix as a better measure of observability. The analysis is carried out in 3D space, and it is shown that the vertical component of the state does not affect observability if it can be directly measured. Further, the system is observable as long as the relative position and velocity vectors are not parallel. In <cit.>, an observability analysis using the state augmentation technique to transform the nonlinear system into a higher-dimensional LTV system is presented for the case of two AUVs. Linear-system observability analysis is used to find the indistinguishable states of the nonlinear system. This is done for the limited case of constant angular and linear velocities and a point-mass model to keep the analysis mathematically tractable. It is shown that the approach leads to global observability conditions rather than the weak local notion of observability, and that there is a one-to-one correspondence between the trajectories of the augmented LTV system and the original nonlinear system. In <cit.>, the above analysis is extended to the case wherein both AUVs move with constant nonzero linear velocities and the observing vehicle with a constant nonzero angular velocity. Also, the relative angular velocity, as seen from the observing vehicle, is assumed constant. The observability analysis of the augmented state vector belonging to R^25 is carried out using the Popov-Belevitch-Hautus (PBH) lemma. However, it is assumed that the vehicle can turn on the spot, i.e., the angular and linear velocities are independent. This restricts the analysis to a limited set of underwater vehicles. An overview of parallel approaches is given in Table <ref>.
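As a small numerical aside before moving on, the geometric condition quoted in the observability discussion above, namely that observability is lost when the relative position and relative velocity vectors are parallel, can be checked directly. The example below is purely illustrative and not taken from any of the cited analyses.

```python
import numpy as np

def relative_ranging_observable(rel_pos, rel_vel, tol=1e-9):
    """Heuristic check of the 2D condition discussed above: range-only
    relative localization degenerates when the relative position and
    relative velocity vectors are parallel (e.g. closing or separating
    straight along the line of sight, or moving in parallel at equal speed)."""
    cross = rel_pos[0] * rel_vel[1] - rel_pos[1] * rel_vel[0]
    return abs(cross) > tol

p_rel = np.array([100.0, 0.0])                                     # AUV B as seen from AUV A
print(relative_ranging_observable(p_rel, np.array([2.0, 0.0])))    # False: degenerate geometry
print(relative_ranging_observable(p_rel, np.array([2.0, 1.5])))    # True: bearing-diverse motion
```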
Cooperative SLAM In simultaneous localization and mapping (SLAM), an extensive review of which can be found in <cit.>, the autonomous vehicle is assumed to be equipped with some sensor with which it can map its surroundings. The most popular are LIDAR, ultrasonic distance sensors, monocular or stereo vision systems, RADARs, and SONARs. In an underwater environment, SONAR-based sensors such as side-scan sonars, multibeam sonars, and echo sounders are much more prevalent than LIDARs and vision. This is due to the reasons mentioned in Section II. However, in clear waters and narrow structures such as flooded mines and caves, LIDARs and vision sensors are finding increasing acceptance. With a single vehicle, though, mapping and subsequent localization can take a large amount of time. This has led to an interest in the field of cooperative SLAM. Here, each of the vehicles is outfitted with some mapping sensor, information from which is shared with others for localization. Walter et al. <cit.> presented an approach using a heterogeneous server-client AUV configuration. The sensor data of environmental observations made by all the vehicles are fused using a SLAM algorithm in a centralized manner on one of the leader vehicles. The output is communicated back for the navigation of the other parent and child vehicles. Data association is carried out using a joint compatibility branch and bound (JCBB) test. In <cit.>, a CSLAM algorithm utilizing side-scan sonar images and INS is proposed. The proposed algorithm is robust against packet loss and generates acoustic packets that are small enough to be transmitted in the underwater acoustic channel. The method employs factor graph-based SLAM with data reduction using intermediate (between communications) state marginalization by Schur's complement and further consistent sparsification by convex optimization using the KLD as the cost between the original and sparsified data. In <cit.>, an acoustic-SLAM approach is proposed in which a client AUV and static beacons are localized using a server AUV equipped with USBL. The static beacons' locations, which later act as landmarks, are assumed to be initially unknown. The estimation is carried out centrally on the server AUV using an EKF, which then communicates the estimates to the client and beacon nodes. The approach does not scale with the number of client vehicles due to the TDMA scheme employed and the two-way communication. Furthermore, the use of static beacons requires deployment and retrieval. Tan et al. <cit.> proposed a bathymetry-based multi-AUV cooperative localization scheme wherein the AUVs utilize only a low-cost sensor set of an altimeter, a depth sensor, and an acoustic modem. The collected sensor data are fused in a decentralized marginalized PF (DMPF): the marginalized linear dynamical states are estimated with a KF while the others are estimated with a PF, reducing the computational and communication load. The other vehicles' beliefs, along with the estimated inter-vehicle range, are used to influence the particle distribution and likelihood computation and are not fused directly in the filter, to prevent large errors in the beliefs/positions from affecting the estimates. The bathymetric map data are used to update the measurement model of the PF. This approach requires the bathymetry map of the area a priori, and the proposed DMPF algorithm does not take cross-correlation into account. In <cit.>, terrain relative navigation with inter-robot measurements is proposed for multi-robot localization.
A particle filter is used along with a covariance intersection filter. The complexity only grows linearly with the number of vehicles. However, it is assumed that the terrain map is available and the clocks are synchronized. The major impediment to this approach is the large amount of data that needs to be exchanged among the vehicles. Given the bandwidth of the acoustic channel, this is a very difficult challenge and remains an active area of research. §.§ Other works Several works have also investigated mixed strategies or combinations of the approaches discussed in the previous sections. In <cit.>, the authors proposed a modular measurement distribution framework that is scalable and allows any cooperating team member to share measurements using TDMA, enabling the whole team to estimate positions consistently and accurately. The framework is amenable to any cooperative approach, such as ASV/CNA-based, server-client, parallel/mesh, or surfacing type, and is independent of any state estimation scheme. It does not need an entire covariance matrix to be transmitted to ensure no overconfidence in position estimates and ensures scalability, although the algorithm is sensitive to packet loss. In <cit.>, an experimental comparison of three estimation algorithms (a simple distributed EKF, the interleaved update (IU), and the distributed extended information filter (DEIF)) against a centralized EKF (post-processed) is reported. In the case of the DEIF, the assumptions made render it useful only in two-node unidirectional topologies. The different cooperating strategies considered between one ASV and two AUVs are AUV aiding AUV, ASV aiding two AUVs, a 3-node mesh, and ASV aiding an AUV, which in turn is a server for other AUVs. Results indicate that the distributed EKF produces overconfident estimates when a strong correlation persists, while the IU and DEIF give consistent results. Otherwise, the DEKF performs nearly as well as the CEKF. While the DEKF and DEIF bound the error, the IU does not. The DEIF requires the largest packet size among the three and is the least robust against packet loss. There have been other approaches as well that combine cooperating vehicles with one or more static beacons. For example, Mirza et al. <cit.> presented a factor graph-based approach using maximum likelihood for CL between multiple underwater vehicles and beacons in a distributed setup. Real-time and non-real-time centralized setups were evaluated against a real-time distributed setup for cases wherein a) all states were shared with all neighbors (RTD-A), b) all states were shared with an immediate neighbor only (RTD-B), and c) current states were shared with an immediate neighbor only (RTD-C), the latter being the one with the least communication overhead and thus preferred in an underwater scenario. The RTD-C scheme was evaluated for different numbers of beacons and vehicles, indicating that the more collaborating vehicles and beacons there are, the lower the error. When all the vehicles are in intermittent contact with the beacons, it was reported that collaboration did not provide any improvement in error, while the minimum error for all vehicles is achieved when the beacons are uniformly distributed. The maximum gain is for those vehicles which are not in contact with the beacon. However, the proposed approach tends to deteriorate collaborative localization performance when one or more vehicles are consistently not in contact with the beacon in a team of more than two vehicles. Rego et al.
<cit.> evaluated the performance of estimating the position and velocity of vehicles under stringent communication bandwidth constraints. It is assumed that there is a fixed beacon at a known location. The vehicles either exchange measurements or state estimates. Adaptive quantization is used to limit the amount of data sent over a communication link under a zero packet loss assumption. In <cit.>, an algorithm for optimal placement of multiple heterogeneous beacon vehicles (including static nodes such as GIB) not capable of rapid motions relative to AUVs is proposed. Optimum locations are found by minimizing the trace of appropriately defined CRLB matrix using range-only information. The locations are always estimated on the perimeter of a circle, and at each step, the dynamic beacons move to the optimal location. In <cit.>, a hierarchical beacon-server-client cooperative localization scheme using range information from time delays is presented. The server AUV localizes with respect to the beacon, while the client AUV localizes with respect to the server AUV and the single beacon. The proposed architecture reduces the number of acoustic transmissions required relative to each AUV localizing with respect to the beacon. However, the server AUV errors are not discussed; the motions are considered to be stop-and-go and require the server AUV speed and heading to be different from the team members. In the coordinate fusion technique, all AUVs have to communicate, which negates the original proposals' advantage of lower communication overhead. This concludes the review of all the works in the domain of cooperative localization in the underwater scenario. In the next section, we briefly discuss the issues and open problems in this area. § DISCUSSION AND OPEN PROBLEMS Before we discuss the open challenges in the cooperative localization of underwater vehicles, we briefly summarize the research presented in the preceding section. As noted, the estimation algorithm forms the heart of the localization problem. While EKF <cit.> is widely popular, as can be gauged from the tables, several different estimation techniques have been proposed across all categories, such as least squares <cit.>, least mean squares <cit.>, UKF <cit.>, Decentralised LS-MLE <cit.>, Decentralised EIF <cit.>, distributed extended information filter <cit.>, Delayed State Centralized EKF <cit.>, centralised EKF <cit.>, Interleaved Update (IU) <cit.>, iterative divided difference filter <cit.>, Factor graphs: <cit.>, Maximum-A-Priori (MAP) <cit.>, moving horizon estimation <cit.>, particle filters <cit.>, distibuted EKF <cit.>, distributed modified EKF <cit.>, origin state method <cit.>, Parallel projection algorithm+MLE <cit.>, hybrid UKF-KF estimator <cit.>, delayed state Adaptive KF <cit.>, augmented EKF <cit.>, probability hypothesis density filter <cit.>, unscented PF <cit.>, Student-t based EKF <cit.>, Linear programming <cit.>, fuzzy logic <cit.> and SLAM <cit.>. Authors have also exploited geometric properties of the localization problem, such as in <cit.>. Often the initial location of the vehicle may not be known; for this, a few methods are suggested in <cit.>, which are useful for initializing EKF-based estimators. As for the challenges presented by the acoustic channel, several authors have presented solutions for low bandwidth, such as by using estimators <cit.>, logic-based communication <cit.>, adaptive time-of-launch <cit.>, reducing communication overhead <cit.> and adaptive quantization <cit.>. 
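Since the EKF recurs as the default estimator throughout the works summarized above, a minimal sketch of a single cooperative step may help fix ideas. This is our own illustration with placeholder noise values: a vehicle dead-reckons its 2D position and then corrects it with one acoustic range to a neighbour whose broadcast position estimate and covariance are folded naively into the measurement noise, i.e., cross-correlation is ignored, which is precisely the source of the overconfidence discussed below.

```python
import numpy as np

def dr_predict(x, P, v, Q, dt):
    """Dead-reckoning prediction for a 2D position state x with velocity v."""
    return x + v * dt, P + Q * dt

def range_update(x, P, z_range, x_nbr, P_nbr, sigma_r):
    """EKF update with one acoustic range to a neighbour at estimated position x_nbr."""
    d = x - x_nbr
    r_pred = np.linalg.norm(d)
    H = (d / r_pred).reshape(1, 2)                   # Jacobian of the range w.r.t. x
    R = sigma_r**2 + (H @ P_nbr @ H.T).item()        # inflate with neighbour uncertainty
    S = (H @ P @ H.T).item() + R                     # innovation covariance
    K = (P @ H.T) / S                                # Kalman gain (2x1)
    x_new = x + (K * (z_range - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x, P = np.array([0.0, 0.0]), np.eye(2) * 25.0        # own estimate, 5 m position std
x, P = dr_predict(x, P, v=np.array([1.0, 0.5]), Q=np.eye(2) * 0.1, dt=10.0)
x, P = range_update(x, P, z_range=105.0,
                    x_nbr=np.array([100.0, 0.0]), P_nbr=np.eye(2) * 4.0, sigma_r=1.0)
```

Consistent handling of the neglected cross-correlation is exactly what the interleaved-update, DEIF, and covariance-intersection schemes discussed above aim to restore.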
For outlier mitigation most approaches use Mahalanobis distance metric <cit.> while others have proposed different noise distributions such as heavy-tailed mixture distribution <cit.>, Student-t based EKF <cit.> or different estimators like Huber based M estimator <cit.>, maximum correntropy-PF <cit.>, unscented PF <cit.> and MCC-ANFIS <cit.>. Managing delays in communication is carried out either through back and forth technique <cit.> or delayed state estimators <cit.>. Some techniques to estimate sound speed profile and refraction are mentioned in <cit.>. For path planning of ASVs and server vehicles the approaches include simple paths such as diamond <cit.>, zig-zag <cit.> or circular <cit.> or online path planning through heuristics <cit.>, uncertainty ellipse <cit.>, Dynamic programming <cit.>, Markov Decision process <cit.>, Genetic algorithm <cit.>, condition number of observability gramian and empirical observability gramian <cit.>, FIM: <cit.>, extremum seeking: <cit.>, priority-based expansion of a search tree<cit.>, artificial potential field <cit.>, Q-learning <cit.> and partially observable Markov decision process <cit.>. Authors have also evaluated optimal formations of aiding vehicles in <cit.>, optimal number of ASV <cit.>, and optimal positioning <cit.>. Related observability analysis is covered in <cit.>. Several works have also tackled problems that deal with other aspects such as centralized <cit.>, hierarchical <cit.> or decentralized <cit.> estimation, the inclusion of known <cit.> and unknown <cit.> ocean current models, combining with other localization sources such as static beacons <cit.>, divers <cit.> and companion vehicles <cit.> and ranging in the absence of time synchronization <cit.>. The underwater cooperative localization problem was first investigated with the help of crewed ships and boats as communication and navigation aids. However, due to the associated low costs and advancements in ASV, acoustic communication capabilities, and better, more cost-effective sensors, the research has since been focused on utilizing ASV as a CNA or relying entirely on the underwater team itself for localization and navigation. Both categories, with and without navigational aid, have seen an almost equal amount of interest and have generated substantial research output over the past decade, as summarized above. However, there are still areas that have not been explored much, especially considering the attention given to the problem of target tracking. Acoustic Channel As evident from the tables, very few results incorporate the challenges afforded by the underwater acoustic channel. In most cases, the acoustic channel is assumed lossless, instantaneous, and Gaussian distributed. The presence of outliers in acoustic communication essentially renders the noise PDF heavy-tailed, and the Gaussian assumption is no longer valid. Many works report the usage of EKF state estimation, which is simple and computationally less taxing but unsuitable in the presence of outliers. Furthermore, EKF is prone to instability in the event of wrong initialization. If computational power requirements are not a concern, the particle filter outperforms EKF, especially in non-Gaussian noise. While little can be done about the acoustic channel's bandwidth from the control perspective, estimation algorithms need to be made robust against the acoustic packet latency and losses. While there are a few approaches that mitigate these issues, there is still scope for more improvements in this area. 
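The Mahalanobis-distance gating mentioned above amounts to a chi-square test on the innovation; the generic sketch below (our illustration, not tied to any cited implementation) can wrap the measurement update of any filter to reject outlying acoustic ranges.

```python
import numpy as np
from scipy.stats import chi2

def passes_gate(innovation, S, prob=0.99):
    """True if the squared Mahalanobis distance of the innovation is below the
    chi-square threshold; otherwise the measurement is treated as an outlier."""
    d2 = innovation @ np.linalg.solve(S, innovation)
    return bool(d2 <= chi2.ppf(prob, df=innovation.size))

print(passes_gate(np.array([0.8]), np.array([[1.0]])))    # True: accept
print(passes_gate(np.array([40.0]), np.array([[1.0]])))   # False: reject as outlier
```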
Estimation of sound speed profile, latency effects on the stability of the estimator, bandwidth and packet size optimization for SLAM-based approaches, TDMA optimization, and event-triggered communications are some of the areas for future research. Scalability to large teams is heavily dependent on solutions to problems in this area. Optimal paths and formations There have been approaches that optimize the trajectory of a single ASV to ensure the observability of the cooperative localization problem. However, in large teams and multiple ASV, optimal path planning and formations have not been explored to the degree seen from investigations into optimal formations of static underwater sensor networks. Optimal trajectories for a single ASV in terms of energy and control input have not been explored. For multiple ASVs aiding a large team of AUVs, problems involving their optimal number, formation, and collision avoidance strategies can be explored. Applications of neural network-based learning methods have not been applied to the path planning problem so far. Furthermore, observability analysis for multiple vehicle teams is still in the nascent stage and has quite a lot of scope. Practical results The basic premise of cooperative localization is the ability of a team of vehicles to exchange data and improve their location estimate. This is even more useful in a large team of vehicles. While many papers have proposed algorithms that are, in general, applicable to large robot teams, only a handful show results with more than 3 or 4 team members. This number is even less in the case of experimental results, which could be explained by the high costs of underwater hardware. However, this is expected to improve in the future with the cost reductions in hardware and more cost-friendly vendors in underwater robotics coming into the picture. One specific area is SLAM utilizing SONAR data for cooperative localization as it requires efficient but high computational capability. Ocean effects Another area that needs attention is the ocean effects, such as ocean currents and tides. Modeling ocean currents has received very little attention in both approaches, more so in the category without a dedicated navigation aid. Considering that most ocean surveys span large areas, this is an important consideration that cannot be overlooked. While DVL sensors can help counter the drifts induced by currents, they are not always useful, such as in the mid-water column where a ground lock is not available. In such instances, ACDP can help to some extent. Tidal effects are essential considerations for long-duration missions, especially for those approaches that use a depth sensor to project 3D localization problems onto a 2D plane. The tidal changes will introduce bias in the depth measurements that would affect the computed location's accuracy. Another effect that has been ignored is the effect of sea states on the performance of ASV as a navigational aid. All the works assume that ASV communicates while in constant motion, which is not valid, as the sea state will induce time-varying changes in the range measurements and increase the difficulty of ASV trajectory planning. § CONCLUSION This paper has presented an exhaustive review of the literature in the underwater cooperative localization domain. A brief overview of the challenges for localization in the underwater acoustic channel was followed by a glimpse of popular state estimation algorithms used in the current state of the art approaches. 
The CL approaches were classified and evaluated on different parameters, and their salient features were highlighted. Finally, we presented a brief discussion on the open problems in the context of underwater cooperative localization.
http://arxiv.org/abs/2307.05550v1
20230709072109
Exploring high scale seesaw models through a supersymmetric portal
[ "Yi Liu", "Stefano Moretti", "Harri Waltari" ]
hep-ph
[ "hep-ph" ]
§ INTRODUCTION Neutrino masses have been known to be non-zero for 25 years <cit.>. As they are so much smaller than all other Standard Model (SM) fermion masses, one usually assumes that they are generated by some kind of a seesaw mechanism <cit.>. The masses are still generated through the Higgs mechanism, but suppressed by a heavy seesaw particle, which can be a singlet neutrino (Type-I), a triplet of Higgs bosons (Type-II) or a triplet of exotic leptons (Type-III) (see Refs. <cit.> for reviews). The seesaw scale is a priori unknown. If the seesaw scale is around the Electro-Weak (EW) scale, one may be able to produce the seesaw particles directly at the Large Hadron Collider (LHC) <cit.>. One of the original ideas <cit.> was that the smallness of the neutrino masses could be related to the breaking of a Grand Unification Theory (GUT), i.e., the relevant Yukawa couplings would be of order unity and the seesaw scale somewhere around α M_GUT∼ 10^14 GeV. Such energy scales are obviously out of the reach of present and future colliders. Supersymmetry, the symmetry between fermions and bosons, is often a necessary ingredient in formulating models with large separations of scales. Due to the cancellation between the bosonic and fermionic loops, the separation of scales is radiatively stable <cit.>, once it has been generated by some dynamics. Thus in the supersymmetric framework, scalar masses would not get quadratic corrections proportional to the seesaw scale and an EW scale Higgs boson would not be unnatural even if the seesaw scale were close to the GUT scale. In the context of high scale seesaw models, supersymmetry has one remarkable property. The scalar potential, and especially its F-terms being of the form V=∑_i| ∂ W/∂φ_i|^2, leads to four-scalar interactions without the seesaw particle but with the seesaw couplings involved. If the couplings are of order unity, they are among the largest ones in the model and could lead to observable consequences. For definiteness, let us consider the Type-I seesaw model, where the extra superpotential terms in addition to those of the Minimal Supersymmetric Standard Model (MSSM) are W=W_MSSM+ y^ν L· H_u N^c+M_NN^cN^c, where we assume y^ν∼ 1 and M_N∼ 10^14 GeV. When differentiating with respect to N^c, one gets the term ∑_k y^ν *_iky^ν_jkL̃^†_i· H_u^†L̃_j· H_u, involving only Higgs bosons and left-handed sleptons, which we assume to be at the TeV scale. If there are significant mass splittings between the sfermion generations, which could well be generated through Renormalisation Group Evolution (RGE) due to the large couplings, one might get processes like ν̃_i→ν̃_jh with a large Branching Ratio (BR). If the sneutrinos decay visibly, the decays can be distinguished from mono-Higgs signatures that could arise from dark matter <cit.>. Slepton decays with Higgs bosons in the final state could offer an indication of a high scale seesaw model and thus provide us with a window to scales otherwise beyond our experimental reach. Our aim is to investigate how one could observe such slepton decay patterns involving Higgs bosons in seesaw models of Type-I and Type-III, which have a similar structure in terms of the TeV scale Lagrangian.
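The F-term step described two paragraphs above can be spelled out explicitly; the following lines are our own rewriting of that derivation in LaTeX (the family indices i, j, k are ours), where the cross terms proportional to M_N involve the heavy field and are irrelevant for TeV-scale phenomenology.

```latex
% F-term of the companion (s)neutrino superfield N^c in Type-I seesaw
\frac{\partial W}{\partial N^c_k} = y^{\nu}_{ik}\,\tilde{L}_i\cdot H_u + 2\,M_N\,\tilde{N}^c_k ,
\qquad
V \supset \sum_k \left|\frac{\partial W}{\partial N^c_k}\right|^2
 \supset \sum_k y^{\nu *}_{ik}\, y^{\nu}_{jk}\,
 \bigl(\tilde{L}_i\cdot H_u\bigr)^{\dagger}\bigl(\tilde{L}_j\cdot H_u\bigr).
```

After EWSB, replacing one neutral Higgs field by its vacuum expectation value turns this quartic coupling into the non-flavour-diagonal three-point slepton-Higgs couplings exploited in the next section.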
Our paper is organised as follows. Higgs-slepton interactions are described in the next section, which is followed by a discussion of the production and decay modes relevant to our research. Our numerical analysis is introduced in the following section, after which we conclude. § HIGGS-SLEPTON INTERACTIONS IN SEESAW MODELS We shall now look at how the Higgs-slepton interactions arise from our seesaw models in some detail. In particular, we look at Type-I and Type-III seesaw models. Both have Yukawa couplings that connect the lepton and Higgs doublets to the seesaw particles, which form a singlet and triplet under SU(2). The superpotential of Type-I seesaw is given in Eq. (<ref>) and for Type-III seesaw it is W = W_MSSM + y^ν L Σ H_u + M_ΣTr(Σ^2), where L is the left-chiral lepton doublet and H_u = (H^+ , H^0)^T is the up-type Higgs doublet. The Σ is an antilepton (L=-1) chiral superfield which transforms as (1,3,0) under the SM gauge group SU(3)_c× SU(2)_L × U(1)_Y. The mass term for Σ violates lepton number by two units. The superfield Σ can be represented Σ = σ^iΣ^i= ( [ Σ^0/√(2) Σ^+; Σ^- -Σ^0/√(2) ]), Σ^± = Σ^1 ∓ iΣ^2/√(2), Σ^0 = Σ^3. The models look very similar in what comes to neutrino mass generation, both having a lepton and a Higgs doublet coupling to the companion neutrinos. The only difference is that the L and H_u superfields combine to a singlet in the case of Type-I and to a triplet in the case of Type-III seesaw. This difference between the two seesaw models leads to a difference in the scalar potential which contributes the processes that lead to slepton decays containing a Higgs boson. When we expand the neutrino Yukawa terms in the superpotential, we get W = y^ν_ij( e^-_iH_u^+-1/√(2)ν_i H_u^0)N^c_j +…, W = y^ν_ij( 1/√(2)e^-_iH_u^+Σ^0_j -ν_iΣ^-_jH_u^++1/√(2)e^-_iΣ^+_jH_u^0+1/2ν_iΣ^0_jH_u^0)+…, for Type-I and Type-III, respectively. Here we have included a factor of 1/√(2) into the definition of the neutral Higgs field. Differentiating with respect to the heavy seesaw fields leads to the scalar potentials V = ∑_k1/2y^ν_iky^ν *_jkν̃_iν̃^*_jH_u^0H_u^0 *+…, V = ∑_k1/4 y^ν_iky^ν *_jk(ν̃_iν̃^*_jH_u^0H_u^0 *+2ẽ^-_iẽ^+_jH_u^0H_u^0 *)+… , for Type-I and Type-III, respectively. Hence one in general gets Higgs interactions with sleptons that are non-diagonal in flavour space and, in the case of a high scale seesaw, have large couplings. After EW Symmetry Breaking (EWSB) we have ⟨ H_u^0⟩ = vsinβ (v=246 GeV), which generates a three-point coupling between sleptons and the SM-like Higgs. One may also note that in Type-III seesaw there is a non-flavour-diagonal coupling between charged sleptons and Higgs bosons, while there is no such coupling in the case of Type-I seesaw. As we discuss below, this leads to a stronger signal arising from Type-III than Type-I seesaw. We further notice that, while the usual D-terms of the scalar potential also contain large couplings between sneutrinos, charged sleptons and Higgs bosons, such couplings are always flavour-diagonal and cannot result in decays of the type ν̃_2→ν̃_1h, which is our smoking gun signature for high scale seesaw models. Besides the decay modes containing Higgs bosons, there are other decay channels and the visibility of the signal depends on the branching ratios. If the Lightest Supersymmetric Particle (LSP) is a higgsino-like neutralino and the gauginos are heavier than the sleptons, the decays of the left-handed sleptons arise from the superpotential term y^ℓ LH_dE^c, so one gets the decays ν̃→χ̃^±ℓ^∓ and ℓ̃^±→χ̃^0ℓ^±. 
These lead to partial widths Γ(ν̃_j→ℓ^±_jχ̃^∓_i) = |y^ℓ_jj|^2|U_i2|^2(m_ν̃^2-m_χ̃^2)^2/32π m_ν̃^3, Γ(ℓ̃^±_j→ℓ^±_jχ̃^0_i) = |y^ℓ_jj|^2|N_i3|^2(m_ℓ̃^2-m_χ̃^2)^2/16π m_ℓ̃^3, where U_i2 gives the higgsino component of the chargino (for our benchmarks |U_i2|≃ 1), N_i3 gives the down-type higgsino component of the neutralino (for our benchmarks |N_13|≃ 1/√(2)). If the soft slepton masses are not flavour diagonal, an appropriate linear combination of the leptonic Yukawas corresponding to the flavour composition of the sleptons must be used. If the LSP is a gaugino there are additional decay channels ν̃→νχ̃^0 and ℓ̃^±→χ̃^±ν (if winos are light) and the decay widths are propotional to g^2 instead of |y^ℓ|^2 and gaugino components instead of higgsino components. Since we have the hierarchy y^ℓ_11≪ y^ℓ_22≪ y^ℓ_33≪ g, the strength of our signal will depend on the nature of the light neutralinos and charginos and in the case of higgsinos, the flavour of the heavier sleptons. As the electron and muon Yukawas are so tiny, in practice the mixing between the gaugino and higgsino components will be significant for the overall decay widths of the sneutrinos and charged sleptons unless the gauginos are extremely heavy. We shall concentrate on the higgsino case, since as we shall see, already the tau Yukawa is so large that the signal containing Higgs bosons will have a too small branching ratio if stau is the heavy slepton that decays. Hence in all our benchmarks we make our gauginos heavier than the sleptons. § THE PRODUCTION AND DECAY MECHANISMS To study the high-scale seesaw signatures with Higgs bosons, we build some Benchmark Points (BPs) with m(ẽ^±)<m(μ̃^±)<m(τ̃^±) and mass splittings between generations larger than m_h≈ 125 GeV (the mass of the SM-like state h). As we shall see, this will be the limiting case, where we still can see a signal. If the second slepton (assuming the third one to be too heavy to be produced efficiently) would be a selectron, the signal would be similar (as the mixing with gauginos dominates the other decay modes already for smuons), while in the case of a stau, the signal would almost vanish due to the larger partial widths from equations (<ref>) and (<ref>). We consider the charged current process pp→ℓ̃_2^±ν̃_2, where the subscript indicates mass ordering. The charged current portal is more promising as the final state contains charged leptons even when the sneutrino decays invisibly. As discussed above, in Type-III seesaw both sneutrinos and charged sleptons can decay to final states with Higgs bosons. The dominant process is ℓ̃_2→ℓ̃_1 h while ν̃_2 →ℓ^±χ̃_1^∓, νχ̃^0. The Feynman diagram for such a process is shown in Fig. <ref>. There is also a process, where the Higgs originates from a sneutrino decay, but that has a smaller BR as can be seen from equation (<ref>). In Type-I seesaw, only the sneutrino can decay into a Higgs boson via ν̃_2 → h ν̃_1. The corresponding Feynman diagram is shown in Fig. <ref>. These processes can lead to a variety of final state topologies. Currently the limit for charged slepton masses is m(ẽ^±),m(μ̃^±)> 700 GeV for neutralino masses below 350 GeV <cit.>, which we take as our lower limit of charged slepton masses[With more compressed spectra m(ℓ̃)-m(χ̃^0)≲ 100 GeV, one obviously can have significantly lighter sleptons. Such cases need a different analysis strategy than the one adopted here as we rely on large E_T to suppress SM backgrounds.]. 
This means that the overall production rate of slepton-sneutrino pairs will be low, especially as we have to produce second generation sleptons with a large mass splitting compared to the first generation ones. In fact, the production rate at the LHC even with nominal collision energy (√(s)=14 TeV) is so low (∼ 30 ab for 1 TeV sleptons), that there will not be sufficient statistics even at the High-Luminosity LHC (HL-LHC) <cit.>. Hence we turn to the proposed High-Energy LHC (HE-LHC) <cit.> with a nominal collision energy of √(s)=27 TeV. This increases the production cross section by an order of magnitude compared to the standard LHC. In Tab. <ref> we show the lepton multiplicities for some typical benchmark points (BP1 and BP3, defined in Table <ref>). We see that the single lepton final state has the highest multiplicity for both seesaw models. As we will lose a part of the signal due to different BRs involved in the model, it is reasonable to look at the state with the highest multiplicity first. We also pick the Higgs decay mode to b-quarks as that has the highest BR and allows to reconstruct the Higgs boson, although not with a too high precision in mass. Unfortunately the channels with good mass resolution (i.e., γγ and ZZ^*→ 4 leptons) are too rare to be useful with such a small event rate. Our signal events will then consist of events with a single lepton, two b-tagged jets and missing momentum carried by the LSP. The largest SM backgrounds to this final state arise from the following processes: * tt̅ production where one the top (anti)quarks decays semileptonically and the other one hadronically; * W^±h production in the case where the W^± boson decays into a lepton and a neutrino. These have been considered to be the dominant backgrounds in similar types of experimental analyses (e.g., <cit.>). § SIMULATION AND RESULTS In this section we will describe our numerical toolbox and the Monte Carlo (MC) simulations that we have pursued with it. §.§ Analysis strategy The model files are produced by the Mathematica package Sarah v4.14 <cit.>. This code also generates a source code for Spheno v4.0.4 <cit.> to obtain the mass spectrum and couplings as well as for Madgraph5 v2.8.2 <cit.> to simulate collider events. We use Pythia v8.2 <cit.> for parton showering and hadronisation while we simulate the detector response by using Delphes3 <cit.>. We simulate the analysis and present our numerical results with Madanalysis5 v1.8 <cit.>. We prepare two BPs for Type-III seesaw and two for Type-I seesaw, which can be detected in the HE-LHC with 27 TeV collision energy and the integrated luminosity 10 ab^-1. We simulate proton-proton collisions to produce the second generation sneutrino (ν̃_2) and slepton (ℓ_2), which in our cases are smuon-like, and select decays to the SM-like Higgs boson plus corresponding first generation particles. The mass of ν̃_2 and ℓ_2 should be heavy enough to allow for the decay kinematics. At the same time, the mass of lightest slepton is required to be larger than 700 GeV <cit.>. The particle mass spectra and relevant BRs are shown in Tab. <ref>. All of the BPs have the same Lightest Supersymmetric Particle (LSP) and Next-to-LSP (NLSP), which are higgsino-like neutralinos and charginos. BP1 has a mass spectrum similar to BP3 and the same situation arises between BP2 and BP4. However, there is a significant difference in the Higgs production cross section times BRs between Type-III seesaw and Type-I seesaw. 
For the sneutrino decay process, Type-I seesaw has BRs larger than the Type-III ones, which can be traced back to the factors in equations (<ref>) and (<ref>). However, the charged slepton decay channel does not exist in Type-I seesaw whereas it dominates the Higgs signal in Type-III seesaw, consistent with equations (<ref>) and (<ref>). As the slepton masses increase, the BR shows a decreasing trend. The BR for μ̃^±→ẽ^±h is high in Type-III seesaw, since the competing decay mode of eq. (<ref>) is proportional to the small muon Yukawa coupling squared or the small gaugino-higgsino mixing factor squared. Had the second slepton been a selectron, the BR would have been similar as the gaugino-higgsino mixing would dominate the decays to neutralinos/charginos, while for staus the corresponding branching ratio is only a few percent as the tau Yukawa is large enough to dominate the branching ratio. As a pre-selection, we require a single lepton and at least two b-jets, as shown in Tab. <ref>. We use a working point, where the b-jet tagger achieves 70% efficiency and only a 1.5% probability of misidentifying a light-parton jet as a b-one <cit.>. Then several cuts are imposed to select the Higgs signal as per the process in Fig. <ref>. The leading lepton is dominantly produced from the process ν̃_1→ e + χ̃_1^±. As the mass difference between sneutrino and the lightest chargino is larger than 500 GeV for BP1 and 400 GeV for BP2, we choose the transverse momentum of the leading lepton to be larger than 400 GeV to preserve the single lepton signal and reduce the background, as shown in Fig. <ref>. The E_T (MET) cut is chosen to be 500 GeV as the NLSP mass is around that value. In order handle properly the MC generation of the tt̅ background, we add a cut at the generation level (MET above 300 GeV) so as to generate this SM process automatically in the signal region of interest. The Higgs selection is done by choosing the interval of invariant mass of the leading and next-to-leading b-jets from 100 GeV to 150 GeV. Fig. <ref> shows a peak around the SM-like Higgs mass for the signal and W^± h background, while the tt̅ noise is rather flat therein. Hence, this requirement proves effective against the latter. Finally, the 100 GeV cut on the transverse mass defined using the highest p_T lepton plus missing transverse momentum, M_T(l_1,E_T), can also significantly reduce background, especially tt̅, as evident from Fig. <ref>. §.§ Numerical analysis We have applied the cuts of Tab. <ref> to all BPs as well as backgrounds and the results are presented in Tab. <ref>, for the discussed HE-LHC energy and luminosity. As expected, Type-III seesaw preserves more signal events (25.8 for BP1 and 27.7 for BP2) than Type-I seesaw (15.5 for BP3 and 9.2 for BP4). Furthermore, BP2 and BP4 show the interesting feature of having fewer initial events (compared to BP1 and BP3, respectively) but displaying a similar final result. This is because the sneutrino and smuon in BP2(BP4) are heavier than those in BP1(BP3), leading to a larger MET and higher transverse momentum of the leading lepton (p_T(ℓ_1)), thereby increasing the efficiency of the corresponding selections. The significances are shown in Tab. 
<ref>, for the usual HE-LHC parameters, wherein one can appreciate rather significant signal excesses above the SM backgrounds for Type-III seesaw while for Type-I seesaw the sensitivity is somewhat limited (but larger values of Yukawa couplings could be probed and there could be room to improve the analysis or increase the amount of data). We also tested a benchmark similar to BP1, but with the mass ordering m(ẽ)<m(τ̃)<m(μ̃) with the smuon too heavy to be produced. This gave just 0.6 events after the cuts, so we can get a significant signal only arising from selectrons or smuons and their sneutrinos. In addition it is essential for our analysis that there is a significant mass splitting between the sleptons and the LSP. With a softer MET cut the tt background would be problematic, while the cut on the transverse mass of the lepton and MET would keep W^±h under control. In summary, though, it is clear that the HE-LHC is a machine with clear potential to access high scale seesaw models (like Type-III and Type-I embedded within the MSSM) by exploiting the SM-like Higgs (eventually decaying to bb̅) plus a hard lepton and MET signature. § CONCLUSIONS How neutrino mass generation occurs in Nature is one of the outstanding questions in particle physics. Current probes of neutrinos hardly include colliders, as herein such particles appear as E_T, thereby offering no scope to identify their properties. However, in a supersymmetric world, there exist sneutrinos, which share with neutrinos their interactions. Therefore, given that sneutrinos can decay visibly at the LHC (i.e., inside the detectors), it makes sense, in order to study neutrino properties in supersymmetry, to study sneutrinos. One, however, needs a paradigm for supersymmetry to do so, i.e., a model realisation of it, which we assumed here to be the MSSM, supplemented with two kinds of seesaw mechanism for (s)neutrino mass generation, the so-called Type-I and Type-III. These mechanisms have a similar structure to generate neutrino masses and hence both lead to Higgs-sneutrino interactions, which are non-diagonal in flavour space. These two are examples of high scale seesaw mechanisms, wherein the companion neutrinos (to the SM ones) can have masses of order 10^12-10^14 GeV. However, left-handed sneutrino and slepton masses are necessarily linked to the typical supersymmetry breaking scale, which ought to be 10 TeV or so at the most (in order to preserve gauge coupling unification, successful dynamical EWSB, etc.). In the case of a high seesaw scale the neutrino Yukawa couplings are among the largest ones in the model and, due to the structure of the supersymmetric scalar potential, they can lead to observable consequences at the supersymmetry breaking scale. We found that the current LHC, for which √(s)=14 TeV (in turn recalling that √(ŝ) is only a fraction of that), cannot test such seesaw scenarios. However, a possible energy upgrade has been proposed for it: the so-called HE-LHC. This offers √(s)=27 TeV (and ∫ L dt=10 ab^-1), therefore, it is in a position to test the aforementioned seesaw scenarios of neutrino mass generation. In this paper, we have, in particular, tested the scope of a particular signal stemming from these two seesaw mechanisms. In fact, the signature is common to both, i.e., charged current induced slepton-sneutrino production and subsequent decay into the SM-like Higgs boson (in turn decaying to bb̅ pairs), a single lepton (l=e,μ) and MET (or E_T). 
Upon assessing that the single lepton channel (as opposed to multi-lepton ones also stemming in these two scenarios) is the most sensitive one, for any number of b-jets beyond 1, we have devised a simple cut-and-count analysis, deployed identically for both Type-I and -III, that has enabled us to reach evidence to discovery significances at the HE-LHC for the Type-III case while for the Type-I case a more refined selection and/or additional data would be required. This was shown, in both cases, for BPs currently compliant with standard theoretical requirements as well as current experimental searches. Parameterwise, the signature requires the gauginos to be heavier than the sleptons, a sufficient mass splitting (≳ 300 GeV) between the sleptons and the higgsino-like LSP and a sufficient mass splitting between the slepton generations so that the decay with a Higgs boson is kinematically allowed. Even though this signal is common to the two seesaw models, the fact that in Type-I seesaw only sneutrinos have decay modes containing Higgs bosons, while for Type-III also charged sleptons have such decay channels allows us to distinguish the models. This distinction might be more difficult at a hadron collider but, if there was an electron-positron collider with sufficient collision energy, the pair production of charged sleptons above √(s)=2m_ℓ̃ would lead to an enhanced signal with Higgs bosons in case of Type-III, while no such an enhancement would be present in Type-I. As an outlook of our work, we would like to highlight that a Future Circular Collider in hadron-hadron mode (FCC-hh) <cit.>, running at √(s) values up to 100 TeV, will not improve the scope of the HE-LHC since, herein, background rates increase more that the signal ones that we pursued (although this may not be true for other channels not considered here). Altogether, we have shown that there exist cases where, in supersymmetric theories, it is possible to probe the neutrino mass generation mechanism through sneutrino phy­sics while the (seesaw) scale related to this mechanism is extremely high, roughly, up to 10^14 GeV. § ACKNOWLEDGEMENTS SM is supported in part through the NExT Institute and STFC Consolidated Grant No. ST/L000296/1. HW is supported by the Carl Trygger Foundation under grant No. CTS18:164. We finally acknowledge the use of the IRIDIS5 High-Perfor­mance Computing Facility and associated support services at the University of Southampton in the completion of this work. 99 Super-Kamiokande:1998kpq Y. Fukuda et al. [Super-Kamiokande], Phys. Rev. Lett. 81 (1998), 1562-1567 [arXiv:hep-ex/9807003 [hep-ex]]. Minkowski:1977sc P. Minkowski, Phys. Lett. B 67 (1977), 421. Konetschny:1977bn W. Konetschny and W. Kummer, Phys. Lett. B 70 (1977), 433. Gell-Mann:1979vob M. Gell-Mann, P. Ramond and R. Slansky, Conf. Proc. C 790927 (1979), 315 [arXiv:1306.4669 [hep-th]]. Mohapatra:1980yp R. N. Mohapatra and G. Senjanovic, Phys. Rev. D 23 (1981), 165-180. Foot:1988aq R. Foot, H. Lew, X. G. He and G. C. Joshi, Z. Phys. C 44 (1989), 441. Khalil:2022toi S. Khalil and S. Moretti, CRC Press, 2022, ISBN 978-1-138-33643-8. Moretti:2019ulc S. Moretti and S. Khalil, CRC Press, 2019, ISBN 978-0-367-87662-3. CMS:2017ybg A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 119 (2017) no.22, 221802 [arXiv:1708.07962 [hep-ex]]. CMS:2018jxx A. M. Sirunyan et al. [CMS], JHEP 01 (2019), 122 [arXiv:1806.10905 [hep-ex]]. ATLAS:2019kpx G. Aad et al. [ATLAS], JHEP 10 (2019), 265 [arXiv:1905.09787 [hep-ex]]. ATLAS:2020wop G. Aad et al. [ATLAS], Eur. Phys. J. 
C 81 (2021) no.3, 218 [arXiv:2008.07949 [hep-ex]]. Dimopoulos:1981zb S. Dimopoulos and H. Georgi, Nucl. Phys. B 193 (1981), 150. Petrov:2013nia A. A. Petrov and W. Shepherd, Phys. Lett. B 730 (2014), 178 [arXiv:1311.1511 [hep-ph]]. Berlin:2014cfa A. Berlin, T. Lin and L. T. Wang, JHEP 06 (2014), 078 [arXiv:1402.7074 [hep-ph]]. ATLAS:2019lff G. Aad et al. [ATLAS], Eur. Phys. J. C 80 (2020) no.2, 123 [arXiv:1908.08215 [hep-ex]]. Gianotti:2002xx F. Gianotti, M. L. Mangano, T. Virdee, S. Abdullin, G. Azuelos, A. Ball, D. Barberis, A. Belyaev, P. Bloch and M. Bosman, et al. Eur. Phys. J. C 39 (2005), 293 [arXiv:hep-ph/0204087 [hep-ph]]. FCC:2018bvk A. Abada et al. [FCC], Eur. Phys. J. ST 228 (2019) no.5, 1109. ATLAS:2022enb G. Aad et al. [ATLAS], JHEP 06 (2023), 016 [arXiv:2207.00230 [hep-ex]]. Staub:2015kfa F. Staub, Adv. High Energy Phys. 2015 (2015), 840780 [arXiv:1503.04200 [hep-ph]]. Porod:2003um W. Porod, Comput. Phys. Commun. 153 (2003), 275 [arXiv:hep-ph/0301101 [hep-ph]]. Porod:2011nf W. Porod and F. Staub, Comput. Phys. Commun. 183 (2012), 2458 [arXiv:1104.1573 [hep-ph]]. Alwall:2011uj J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, JHEP 06 (2011), 128 [arXiv:1106.0522 [hep-ph]]. Sjostrand:2014zea T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen and P. Z. Skands, Comput. Phys. Commun. 191 (2015), 159 [arXiv:1410.3012 [hep-ph]]. deFavereau:2013fsa J. de Favereau et al. [DELPHES 3], JHEP 02 (2014), 057 [arXiv:1307.6346 [hep-ex]]. Conte:2012fm E. Conte, B. Fuks and G. Serret, Comput. Phys. Commun. 184 (2013), 222 [arXiv:1206.1599 [hep-ph]]. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], PTEP 2022 (2022), 083C01 CMS:2012feb S. Chatrchyan et al. [CMS], JINST 8 (2013), P04013 [arXiv:1211.4462 [hep-ex]]. FCC:2018byv A. Abada et al. [FCC], Eur. Phys. J. C 79 (2019) no.6, 474.
http://arxiv.org/abs/2307.05696v1
20230709011908
A Personalized Reinforcement Learning Summarization Service for Learning Structure from Unstructured Data
[ "Samira Ghodratnama", "Amin Beheshti", "Mehrdad Zakershahrak" ]
cs.IR
[ "cs.IR", "cs.AI", "cs.CL" ]
A Personalized Reinforcement Learning Summarization Service for Learning Structure from Unstructured Data Samira Ghodratnama Macquarie University, Australia W.W. Grainger, USA [email protected] [email protected] Amin Beheshti Macquarie University, Australia [email protected] Mehrdad Zakershahrak Macquarie University, Australia [email protected] Received August 12, 2023; accepted August 12, 2023
The exponential growth of textual data has created a crucial need for tools that assist users in extracting meaningful insights. Traditional document summarization approaches often fail to meet individual user requirements and lack structure for efficient information processing. To address these limitations, we propose Summation, a hierarchical personalized concept-based summarization approach. It synthesizes documents into a concise hierarchical concept map and actively engages users by learning and adapting to their preferences. Using a Reinforcement Learning algorithm, Summation generates personalized summaries for unseen documents on specific topics. This framework enhances comprehension, enables effective navigation, and empowers users to extract meaningful insights from large document collections aligned with their unique requirements. Document summarization, personalized summarization, hierarchical summarization, concept-based summarization. § INTRODUCTION The availability of a vast amount of information on various topics has led to a phenomenon known as information overload, where the volume of data exceeds an individual's capacity for effective processing within a reasonable timeframe. While this abundance of data can be valuable for analytical applications, it necessitates efficient exploration tools to harness its potential benefits without succumbing to information overload, which can strain cognitive resources. Data summaries serve as effective tools for gathering relevant information, organizing it into a coherent and manageable form, and facilitating complex question answering, insight generation, and conceptual boundary discovery <cit.>. Automatic document summarization has been extensively studied to address the challenges of data reduction for analysis, commercialization, management, and personalization purposes. Furthermore, users often seek information in an organized and coherent structure. However, despite the speed of document generation and the massive collections of unstructured documents, producing personalized summaries comparable to human-written ones remains challenging. Most previous work on automatic text summarization has focused on generating textual summaries rather than structured ones. These approaches typically produce a single, short, general, and flat summary that applies to all users, lacking interpretability and personalization. Moreover, they are incapable of producing more extended and detailed summaries, even if users express interest in obtaining additional information. Additionally, the lack of structure in these summaries hampers further processing, and they heavily rely on reference or gold summaries created by humans, which are subjective and costly <cit.>.
To address these limitations, we propose Summation, a hierarchically interactive structured summarization approach that generates personalized summaries. We emphasize the significance of the following aspects in our contribution: i) Structured summaries, ii) Personalization, iii) Interaction, and iv) The elimination of reference summaries. Structured Summaries. Studies have demonstrated that when individuals encounter numerous documents, they seldom formulate fully-fledged summaries. Instead, they attempt to extract concepts and understand the relationships among them <cit.>. Consequently, structured data has become crucial in various domains. It offers a concise overview of the document collection's contents, unveils interesting relationships, and serves as a navigational structure for further exploration of the documents. Our approach, Summation, provides summaries in the form of a hierarchical concept map, which caters to diverse user requirements by being interpretable, concise, and simultaneously providing an overview and detailed information. Personalization. Existing summarization approaches typically generate a generic summary comprising a few selected sentences intended to meet the needs of all users. In contrast to such generic summaries, there is a dearth of user-centric summarization approaches that allow users to specify the desired content in the summaries <cit.>. Interaction. Conventional summarization approaches treat a topic-related document set as input and generate a summary that captures the most salient aspects. However, research on this topic often neglects the usefulness of the approach for users, focusing primarily on the accuracy of the generated summaries. As a result, these approaches produce short (3-6 sentences), inflexible, and flat summaries that are the same for all users. Consequently, these approaches fail to provide more extensive summaries even when users express interest in obtaining additional information. Reference Summaries. Traditional document summarization techniques rely on reference summaries created by humans for training their systems. However, this approach is subjective and, more importantly, resource-intensive. For instance, Lin <cit.> reported that creating summaries for the Document Understanding Conferences (DUC) required 3,000 hours of human effort. Personalized summaries eliminate the need for such reference summaries by generating a specific summary for each user instead of optimizing a single summary for all users. Our Contribution. We study the automatic creation of personalized, structured summaries, allowing the user to quickly overview a document collection's content without much reading. The goal here is to dynamically maintain a federated summary view, resulting in a unified framework for intelligent summary generation and data discovery tools from a user-centered perspective. The unique contributions of this paper include: * We provide summaries in the form of a hierarchical concept map, i.e., labeled graphs representing concepts and relationships in a visual and concise format. Their structured nature can reveal interesting patterns in documents that users would otherwise need to discover manually. It enables providing more information than traditional approaches within the same size limit. It can be used as a navigator in the document collection. Such visualization is beneficial for decision-making systems. * We introduce and formalize a theoretically grounded method.
We propose a personalized interactive summarization approach utilizing a reinforcement learning algorithm to learn to generate user-adapted results. To the best of our knowledge, it is the first approach to predict a user's desired structured summary. * We provide various forms of evidence evaluating different aspects to demonstrate Summation's usability, using both human and automatic evaluation. We divide the proposed framework into two steps. The first step is the organizer, which structures unstructured data by building a hierarchical concept map. Then the summarizer is responsible for: i) predicting users' preferences based on the given feedback by employing preference learning and ii) learning to provide personalized summaries by leveraging reinforcement learning. A general overview of the algorithm is depicted in Figure <ref>. § RELATED WORK We categorize previous approaches into three groups: traditional approaches, structured approaches, and personalized and interactive approaches, discussed below. Traditional Approaches. A good summary should provide the maximum information about the input documents within a size limit and be fluent and natural. Different aspects for categorizing traditional multi-document summarization approaches exist, such as the input type, the process, and the summarization goal <cit.>. However, the main category considers the process and the output type of the summarization algorithm: extractive and abstractive approaches. The input in both cases is a set of documents, and the output is a few sentences. Abstractive summaries are generated by interpreting the main concepts of a document and then stating those contents in another format. Therefore, abstractive approaches require deep natural language processing, such as semantic representation and inference <cit.>. However, extractive text summarization selects some sentences from the original documents as the summary. These sentences are then concatenated into a shorter text to produce a meaningful and coherent summary <cit.>. Early extractive approaches focused on shallow features, employing graph structure, or extracting the semantically related words <cit.>. Different machine learning approaches, such as naive Bayes, decision trees, neural networks, and deep reinforcement learning models, are used for this purpose <cit.>. Structured Approaches. While traditional summarization approaches produce unstructured summaries, there exist a few attempts at structured summaries. Structured summaries are defined by generating Wikipedia articles and biographies to extract the significant aspects of a topic using approaches such as topic modeling or an entity-aspect LDA model <cit.>. Discovering threads of related documents is another category of structured summaries. These works mostly use a machine learning algorithm to find the threads using a supervised approach and features such as the temporal locality of stories for event recognition and time-ordering to capture dependencies <cit.>. A few papers have examined the relationship between summarization and hierarchies. However, the concept of hierarchy in these approaches is the relation between different elements of a document. An example is creating a hierarchy of words or phrases to organize a set of documents <cit.>. There is a related thread of research on identifying the hierarchical structure of the input documents and generating a summary which prioritizes the more general information according to the hierarchical structure <cit.>.
However, the information unit is a sentence, and the hierarchy is based on time measures. Concept-based multi-document summarization is a variant of traditional summarization that produces structured summaries using concept maps. It learns to identify and merge coreferent concepts to reduce redundancy and finds an optimal summary via integer linear programming. However, it produces a single flat summary for all users <cit.>. Personalized and Interactive Approaches. Recently, there have been a few attempts at personalized and interactive approaches in different NLP tasks. Unlike non-interactive systems that only present the system output to the end-user, interactive NLP algorithms ask the user to provide certain forms of feedback to refine the model and generate higher-quality outcomes tailored to the user. Multiple forms of feedback have also been studied, including mouse-clicks for information retrieval <cit.>, post-edits and ratings for machine translation <cit.>, error markings for semantic parsing <cit.>, and preferences for translation <cit.>. A significant category of interactive approaches presents the output of a given automatic summarization system to users as a draft summary, asking them to refine the results without further interaction. The refining process includes cutting, pasting, and reorganizing the essential elements to formulate a final summary <cit.>. Other interactive summarization systems include the iNeATS <cit.> and IDS <cit.> systems that allow users to tune several parameters for customizing the produced summaries. Avinesh and Meyer <cit.> proposed the most recent interactive summarization approach, which asks users to label important bigrams within candidate summaries. Their system can achieve near-optimal performance. However, labeling important bigrams is an enormous burden on the users, as users have to read through many potentially unimportant bigrams. Besides, it produces extractive summaries that are unstructured. § THE PROPOSED APPROACH (SUMMATION) The ultimate goal of summarization is to provide a concise, understandable, and interpretable summary tailored to the users' needs. However, making such a summary is challenging due to the massive document collections, the speed at which documents are generated, and their unstructured format. In this regard, Summation aims to produce structured summaries that facilitate further processing and are concise and easily understandable, while engaging users in creating their personalized summaries. This novel framework has two components: an organizer and a summarizer. First, we discuss the problem definition, and then each component is explained. Problem Definition. The input is a set of documents D={D_1,D_2, ... ,D_N} and each document consists of a sequence of sentences S=[s_1,s_2,...,s_n]. Each sentence s_i is a set of concepts {c_1,c_2, ..,c_k}, where a concept can be a word (unigram) or a sequence of words. The output is a personalized hierarchical concept map. This novel framework has two components, an organizer and a summarizer, explained in Sec. <ref> and <ref>, respectively. §.§ Adding Structure to Unstructured Data The first step is to structure unstructured information by making a hierarchical concept map. A concept map is a graph with directed edges, where nodes indicate concepts and edges indicate relations. Both concepts and relations are sequences of related words representing a semantic unit. Consequently, the first step in creating a concept map is to identify all concepts and relations.
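To make this definition concrete, a hierarchical concept map can be represented directly as labelled propositions plus parent-child links between maps. The sketch below is only our illustration of the data structures implied above (all names are ours, not the paper's); the proposition shown anticipates the worked example of the next subsection.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Proposition:
    """A directed, labelled edge of the concept map: (concept_1, relation, concept_2)."""
    concept1: str
    relation: str
    concept2: str

@dataclass
class ConceptMapNode:
    """One summary in the hierarchy: a concept map plus children that refine it."""
    label: str                                        # representative concept of this node
    propositions: List[Proposition] = field(default_factory=list)
    children: List["ConceptMapNode"] = field(default_factory=list)

root = ConceptMapNode(
    label="cancer treatment",
    propositions=[Proposition("cancer treatment", "is underpinned by",
                              "the Pharmaceutical Benefits Scheme")],
)
```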
Here, we propose hierarchical clustering to form the hierarchical concept map. §.§.§ Concept and Relation Extraction. Concepts come in different syntactic types, including nouns, proper nouns, more complex noun phrases, and verb phrases that describe activities <cit.>. For this purpose, we used open information extraction (OIE) <cit.> through which the entities and relations are obtained directly from the text. OIE finds binary propositions from a set of documents in the form of (con_1,R,con_2), which are equivalent to the desired concepts and relations. For example, the output for the sentence, ‘cancer treatment is underpinned by the Pharmaceutical Benefits Scheme’, is: Cancer treatment by the Pharmaceutical Benefits Scheme Balancing precision and recall in extracting concepts is a challenging task. A high precision causes to define all identified spans as mentions of concepts. Therefore, some constructions are usually missed, which leads to lowering the recall. On the other hand, a high recall is necessary since missed concepts can never be in summary. Obtaining a higher recall may extract too many mentions, including false positives. Generalizability is also essential. The reason is that extracting a particular syntactic structure might generate only correct mentions, causing too broad mentions. Ideally, a proper method applies to many text types. To avoid meaningless and long concepts, we processed the OIE results such that concepts with less than one noun token or more than five tokens are omitted. The original nouns also replace pronouns. If an argument is a conjunction indicating conj-dependency in the parse tree, we split them. §.§.§ Concept Map Construction. Among various extracted concepts and relations, multiple expressions can refer to the same concept while not using precisely the same words; that is, they can also use synonyms or paraphrases. However, distinguishing similar concepts to group them is challenging and subjective. For example, adding a modifier can completely change the meaning of a concept based on the purpose of summarization. Consequently, grouping them may lead to propositions that are not stated in the document. Therefore, we need to group every subset that contains mentions of a single, unique concept. Scalability is another critical issue. For example, pairwise comparisons of concepts cause a quadratic run-time complexity applicable only to limited-sized document sets. The same challenges exist for relation grouping. However, we first grouped all mentions by the concepts' pairs, and then performed relation grouping. Therefore, this task’s scope and relevance are much smaller than when concepts are used. Therefore, in practise, comparison-based quadratic approaches are feasible. Moreover, as the final goal is to create a defined size summary, the summary size significantly affects the level of details in grouping concepts. This is because the distinction between different mentions of a concept might not be required, as it is a subjective task. Ideally, the decision to merge must be made based on the final summary map’s propositions to define the necessary concept granularity. We further propose hierarchical conceptual clustering using k-means with word embedding vectors to tackle this problem, as it spans a semantic space. Therefore, word embedding clusters give a higher semantic space, grouping semantically similar word classes under the Euclidean metric constraint defined below. 
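A rough sketch of the post-processing rules just described (at least one noun token, at most five tokens; pronoun replacement and conjunction splitting omitted for brevity) is given below. The triples are assumed to come from an off-the-shelf open information extraction system, and the noun check uses a plain vocabulary lookup as a stand-in for a real POS tagger, so all names and thresholds here are illustrative.

```python
from typing import List, Set, Tuple

MAX_TOKENS = 5  # concepts longer than five tokens are discarded

def is_valid_concept(concept: str, noun_vocab: Set[str]) -> bool:
    """Filtering rule: keep a concept only if it contains at least one noun token
    and no more than MAX_TOKENS tokens.  `noun_vocab` stands in for POS tagging."""
    tokens = concept.lower().split()
    has_noun = any(t in noun_vocab for t in tokens)
    return has_noun and 1 <= len(tokens) <= MAX_TOKENS

def clean_propositions(triples: List[Tuple[str, str, str]],
                       noun_vocab: Set[str]) -> List[Tuple[str, str, str]]:
    """Keep only (concept, relation, concept) triples whose arguments pass the rules."""
    return [(a, r, b) for a, r, b in triples
            if is_valid_concept(a, noun_vocab) and is_valid_concept(b, noun_vocab)]

# Example triples as they might come out of an OIE step.
triples = [("cancer treatment", "is underpinned by", "the pharmaceutical benefits scheme"),
           ("it", "is", "underpinned")]          # rejected: pronoun argument, no noun
nouns = {"treatment", "cancer", "scheme", "benefits"}
print(clean_propositions(triples, nouns))
```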
Before defining the proposed hierarchical conceptual clustering, we review word embedding schemes used in the proposed model. Word Embedding. Word embedding is a learnt representation of text such that the same meaning words have similar representations. Different techniques can be used to learn a word embedding from the text. Word2Vec <cit.> is an example of a statistical model for learning a word embedding representation from a text corpus, utilising different architectures. As such, we used skip-gram and bag of character n-grams in our experiments. The skip-gram model uses the current word for predicting the surrounding words by increasing the weights of nearby context words more than other words using a neural network model. One drawback of skip-gram is its inability to detect rare words. In another model, authors define an embedding method by representing each word as the sum of the vector representations of its character n-grams, known as ‘bag of character n-grams’ <cit.>. If the training corpus is small, character n-grams will outperform the skip-gram (of words) approach. [We used fastText for word embedding: https://fasttext.cc/docs/en/support.html] Conceptual Hierarchical Clustering. Given word (concept) embeddings learnt from a corpus, {v_w_1,v_w_2,...,v_w_T}, we propose a novel recursive clustering algorithm to form a hierarchical concept map, H. This variable denotes a set of concept maps organised into a hierarchy that incrementally maintains hierarchical summaries from the most general node (root) to the most specific summary (leaves). Within this structure, any non-leaf summary generalises the content of its children nodes. Hierarchical summarization has two critical strengths in the context of large-scale summarization. First, the initial information under review is small and grows upon users’ request, so as not to overwhelm them. Second, the parent-to-child links facilitate user navigation and drilling down for more details on interesting topics. The hierarchical conceptual clustering minimizes the objective function Eq. <ref> over all k clusters as C={c_1,c_2,..,c_k}. J = ∑_k=1^K∑_t=1^| T | |v_w_t- c_k|^2 +αmin _c∈ C size(c), where c_k is the randomly selected centre k-th cluster, and T is the number of word vectors. The second term is the evenness of the clusters, added to avoid clusters with small sizes. α tunes the evenness factor, which was defined by employing a grid search over a development set. We also implemented hierarchical clustering top-down at each time, optimising Eq. <ref>. After defining the clusters, we must find the concept that best represents every concept at the lower levels to ensure hierarchical abstraction. A concise label is the desired label for each node; however, shortening mentions can introduce propositions that are not asserted by a text. For example, the concept labelled ‘students’ can change in meaning where the emphasis is on a few students or some students. To this end, a centre of a cluster at each level of the hierarchy was defined as a label. The inverse distance to the cluster centres is the membership degree or the similarity to each label. The cluster distance for a word w_t is defined as d_v_w_t. Consequently, the membership of each word w_t in cluster c_k to its label is the inverse distance defined in Eq. <ref>. m_v_w_t=1/d_v_w_t =1/|c_k-v_w_t|^2∀ w_t ∈ c_k We then fine-tuned K within the 5–50 range based on the dataset size and chose the cluster number according to gap statistic value <cit.>. 
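The recursive clustering step can be sketched as follows. The sketch assumes concept embeddings are already available (e.g., fastText vectors), uses plain k-means rather than the evenness-penalised objective of Eq. <ref>, and fixes k instead of choosing it by the gap statistic, so it only illustrates the structure of the computation, including the inverse-distance membership of Eq. <ref>.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means over embedding vectors (evenness penalty not enforced)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

def build_hierarchy(words, X, k=3, min_size=4, depth=0, max_depth=3):
    """Recursively cluster concepts; each child node is labelled by the concept closest
    to its cluster centre, and memberships are inverse squared distances to that centre."""
    if len(words) <= min_size or len(words) <= k or depth >= max_depth:
        return {"label": words[0], "members": list(words), "children": []}
    labels, centres = kmeans(X, k)
    node = {"label": words[0], "members": list(words), "children": []}
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue
        d2 = ((X[idx] - centres[j]) ** 2).sum(-1)
        child = build_hierarchy([words[i] for i in idx], X[idx], k, min_size, depth + 1, max_depth)
        child["label"] = words[idx[np.argmin(d2)]]                      # concept nearest the centre
        child["membership"] = (1.0 / np.maximum(d2, 1e-12)).tolist()    # m = 1/|c_k - v|^2
        node["children"].append(child)
    return node
```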
The output H can be directly used as a new dataset for other actions, such as browsing, querying, data mining, or any other procedure requiring a reduced but structured version of the data. The hierarchical clustering can also be pruned at each level to represent a summarised concept map for different purposes or users. Therefore, H is fed to the summariser for pruning to generate a personalized summary. Moreover, by using preference-based learning and RL, we learn users' preferences in making personalized summaries for unseen topic-related documents, discussed in Sec. <ref>. §.§ Summarizer The hierarchical concept map produced in the previous step is given to the summariser to make the desired summaries for users based on their given preferences. Therefore, the summariser consists of two phases—(i) predicting user preferences and (ii) generating the desired summary. §.§.§ Predicting User Preference. The first step towards creating personalized summaries is to understand users' interests. These can be extracted implicitly from users' profiles, browsing history, likes or dislikes, or retweeting behaviour in social media <cit.>. When this information is not available, interaction with users is an alternative way to retrieve users' perspectives. The user feedback can take many forms, such as mouse-clicks or post-edits, as explained in Section <ref>. Preference-based interactive approaches are another form of feedback that puts a lower cognitive burden on human subjects <cit.>. For instance, asking users to select one concept among "cancer treatment" and "cancer symptoms" is more straightforward than asking them to assign a score to each of these concepts. Therefore, in this paper, to reduce users' cognitive load, queries are in the form of concept preferences. Preference learning is a classification method that learns to rank instances based on the observed preference information. It is trained on a set of pairwise preferred items and obtains a total ranking of objects <cit.>. H is the hierarchical concept map, where at the i-th level of the hierarchy there exist m_i nodes, each defining a label l. L={l_11,...,l_nm_i} is the set of all labels, where l_i1 indicates the first node at the i-th level of the hierarchy, n is the number of levels, and L_i indicates the labels at the i-th level. We queried users with a set of pairwise concepts at the same levels, {p(l_i1,l_i2), p(l_i2,l_i3), ..., p(l_im_i-1, l_im_i)}, where p(l_i1,l_i2) is defined in Eq. <ref>. p(l_i1,l_i2) = 1 if l_i1 > l_i2, and 0 otherwise, where > indicates the preference of l_i1 over l_i2. Preference learning aims to predict the overall ranking of concepts, which requires transforming concepts into real numbers via a utility function. The utility function U is such that l_i > l_j ⟺ U(l_i) > U(l_j), where U is a function U: C → R. In this problem, the ground-truth utility function (U) measures each concept's importance based on users' attitudes, defined as a regression learning problem. According to U, we defined the ranking function R, measuring the importance of each concept relative to the other concepts based on users' attitudes. This is defined in Eq. <ref>. R(l_i) = ∑_l_j ∈ L 1{U(l_i) > U(l_j)}, ∀ l_i ∈ L, where 1 is the indicator function. The Bradley–Terry model <cit.> is a probability model widely used in preference learning. Given a pair of individuals l_i and l_j drawn from some population, the model estimates the probability that the pairwise comparison l_i > l_j is true.
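The preference query format and the ranking function R can be made concrete with a few lines of code; the utility values below are hand-set placeholders standing in for the learned utility U, purely for illustration.

```python
from itertools import combinations
from typing import Dict, List, Tuple

def pairwise_preferences(labels: List[str], utility: Dict[str, float]) -> Dict[Tuple[str, str], int]:
    """p(l_i, l_j) = 1 if l_i is preferred over l_j, else 0 (one entry per ordered pair)."""
    prefs = {}
    for a, b in combinations(labels, 2):
        prefs[(a, b)] = 1 if utility[a] > utility[b] else 0
        prefs[(b, a)] = 1 - prefs[(a, b)]
    return prefs

def ranking(labels: List[str], utility: Dict[str, float]) -> Dict[str, int]:
    """R(l_i) = number of labels whose utility is smaller than that of l_i."""
    return {a: sum(utility[a] > utility[b] for b in labels if b != a) for a in labels}

labels = ["cancer treatment", "cancer symptoms", "funding"]
utility = {"cancer treatment": 0.9, "cancer symptoms": 0.6, "funding": 0.1}
print(pairwise_preferences(labels, utility)[("cancer treatment", "cancer symptoms")])  # 1
print(ranking(labels, utility))  # {'cancer treatment': 2, 'cancer symptoms': 1, 'funding': 0}
```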
Having n observed preference items, the model approximates the ranking function R by computing the maximum likelihood estimate in Eq. <ref>. J_x(w) = ∑_i=1^n [ p(l_i,l_j) log F(l_i,l_j;w) + p(l_j,l_i) log F(l_j,l_i;w) ], where F(l_i,l_j;w) is the logistic function defined in Eq. <ref>. F(l_i,l_j;w) = 1 / (1 + exp[U^*(l_j;w) − U^*(l_i;w)]). Here, U^* is the approximation of U parameterised by w, which can be learnt using different function approximation techniques. In our problem, a linear regression model was designed for this purpose, defined as U^*(l;w) = w^T ϕ(l), where ϕ(l) is the feature representation vector of the concept l. For any l_i, l_j ∈ L, the ranker prefers l_i over l_j if w^T ϕ(l_i) > w^T ϕ(l_j). By maximizing J_x(w) in Eq. <ref>, w^* = arg max_w J_x(w), the resulting w^*, obtained with stochastic gradient ascent optimisation, is used to estimate U^*, and consequently the approximated ranking function R^*: C → R. Thus, Summation learns a ranking over concepts and uses the ranking to generate personalized summaries. §.§.§ Generating Personalized Summaries. The summarization task is to transform the input (a cluster of documents) d into the best summary among all possible summaries, called Y(d), for the learnt preference ranking function. This problem can be defined as a sequential decision-making problem, starting from the root and sequentially selecting concepts and adding them to a draft summary. Therefore, it can be defined as an MDP problem. An MDP is a tuple (S,A,R,T), where S is the set of states, A is the set of actions, R(s,a) is the reward for performing an action (a) in a state (s), and T is the set of terminal states. In our problem, a state is a draft summary, and A includes two types of action—either adding a new concept to the current draft summary or terminating the construction process if it reaches the user's limit size. The reward function R returns an evaluation score in one of the termination states and 0 in all other states. A policy π(s,a): S × A → R in an MDP defines the selection of actions in state s. The goal of RL algorithms is to learn a policy that maximises the accumulated reward. The learnt policy, trained on a specific user's interests, is used on unseen data at test time (in this problem, to generate summaries for new, topically related documents). We defined the reward as the summation of the importance of all concepts included in the summary. A policy π defines the strategy to add concepts to the draft summary to build a user's desired summary. We defined π as the probability of choosing a summary y among all possible summaries within the limit size using different hierarchy paths, Y(d), denoted as π(y). The expected reward of performing policy π, where R(y) is the reward for selecting summary y, is defined in Eq. <ref>. R^RL(π|d) = E_y ∈ Y(d)[R(y)] = ∑_y ∈ Y(d) π(y) R(y). The goal of the MDP is to find the optimal policy π^* that has the highest expected reward. Therefore, the optimal policy π^* is the function that finds the desired summary for a given input based on user feedback (Eq. <ref>). π^* = arg max_π R^RL(π|d) = arg max_π ∑_y ∈ Y(d) π(y) R(y). We also used the linear temporal difference algorithm to obtain π^*. The process is explained in Algorithm <ref>. § EVALUATION In this section, we present the experimental setup for assessing our summarization model's performance. We discuss the datasets, give implementation details, and explain how system output was evaluated.
§.§ Datasets and Evaluation We evaluated Summation using three commonly employed benchmark datasets from the Document Understanding Conferences (DUC) [Produced by the National Institute of Standards and Technology (https://duc.nist.gov/)]. Each dataset contains a set of document clusters accompanied by several human-generated summaries used for training and evaluation. Details are given in Table <ref>. Automatic Evaluation. We evaluate the quality of summaries using the ROUGE_N measure <cit.>[We run ROUGE 1.5.5: http://www.berouge.com/Pages/defailt.aspx with parameters -n 2 -m -u -c 95 -r 1000 -f A -p 0.5 -t 0], which measures the recall of reference n-grams in the candidate summary. The three variants of ROUGE (ROUGE-1, ROUGE-2, and ROUGE-L) are used. We used the limited-length ROUGE recall-only evaluation (75 words) to avoid length bias. Human Evaluation. For this purpose, we hired fifteen Amazon Mechanical Turk (AMT)[https://www.mturk.com/] workers to attend the tasks; no specific prior background was required. Then five document clusters were randomly selected from the DUC datasets. Each evaluator was presented with three documents to avoid any subject bias and was given two minutes to read each article. To make sure human subjects understood the study's objective, we asked workers to complete a qualification task first, in which they were required to write a summary of their understanding. We manually removed spam from our results. §.§ Results and Analysis Summation was evaluated from different evaluation aspects, first concerning the organiser's output, the hierarchical concept map (H), which can be served individually to users as structured summarised data. We evaluated H using both human and automatic evaluation techniques to answer the following questions: * Do users prefer hierarchical concept maps to explore new and complex topics? * How much do users learn from a hierarchical concept map? * How coherent is the produced hierarchical concept map? * How informative are summaries in the form of a hierarchical concept map? Personalized summaries generated on test data were also evaluated from various perspectives to analyse the effect of RL and preference learning, including: * The impact of different features in approximating the proposed preference learning. * The role of the query budget in retrieving pairwise preferences. * The performance of the RL algorithm and the information coverage in terms of ROUGE. * Users' perspectives on the learned summaries based on their given feedback. Hierarchical Concept Map Evaluation. To answer the questions in Sec. <ref>, we performed three experiments. First, within the same limit size as the reference summaries, we compared the summaries produced by three models on selected documents (with ROUGE-1 and ROUGE-2 scores based on the reference summaries): ExDos, which is a traditional approach; a traditional hierarchical approach <cit.>; and a structured summarization approach <cit.>. The average ROUGE-1 for Summation was 0.65 and ROUGE-2 was 0.48. The structured approach <cit.> showed similar performance, with ROUGE-1 and ROUGE-2 at 0.65 and 0.45, respectively. Meanwhile, traditional hierarchical approaches <cit.> produced a ROUGE-1 of 0.27 and a ROUGE-2 of 0.18. In the same task, the percentage of covered unigrams and bigrams relative to the documents was also compared. Both Summation and the structured approach covered approximately 4% of unigrams and 2% of bigrams, but coverage dropped below 1% in both cases for the hierarchical approaches.
In the third experiment, all competitors’ outputs were rated based on three measures, including usability in exploring new topics, level of informativeness, and coherency. Summation’s rate for the first and second criteria was 96% and 94%, respectively. However, it was 34% for coherency. We removed all concepts with low similarity to their parents based on a different threshold at each level. After repeating the same experiment, and rate of coherency increased to 76%. Feature Analysis. Before evaluating the effect of conceptual preference, it is important to explain the ground-truth concept ranker function (U) and the approximate function (U^*), indicating the importance of concepts. To estimate the approximate function (U^*), we defined a linear model U^*(c)=W^Tϕ(c), where ϕ are the features. To this end, a set of features (whose importance was validated in ExDos) was used, including surface-level and linguistic-level features. Surface-level features include frequency-based features (TF-IDF, RIDF, gain and word co-occurrence), word-based features (upper-case words and signature words), similarity-based features (Word2Vec and Jaccard measure) and named entities. Linguistic features are generated using semantic graphs and include the average weights of connected edges, the merge status of concepts as a binary feature, the number of concepts merged with a concept, and the number of concepts connected to the concept. We defined different combinations of features with different sizes,{2,5,8,10}, starting from the most critical one. Then, we repeated the experiments for 10 cluster documents. We used the concepts included in the reference summary as preferences, and then evaluated the concept coverage in a concept map compared to the reference summaries using ROUGE-1 and ROUGE-2. The results reported in Fig. <ref> show that the model’s performance improved after adding more features. Summary Evaluation. To avoid subjectivity in the evaluation process, we used the reference summaries as feedback. The mentioned concepts that exist in reference summaries receive the maximum score by the ranked function. We compared the summaries produced by three models, including the traditional approach (ExDos), a range of hierarchical approaches <cit.>, and a structured summarization approach <cit.>, each tested on randomly selected documents from three datasets using ROUGE-1, ROUGE-2 and ROUGE-L scores based on the references summaries. The average results reported in Table <ref> show the supremacy of Summation in selecting specific contents. Query Budget Size. We also measure the effectiveness of the users' query budget size in the process. The pairwise preferences are defined based on the reference summaries, defining in a dictionary format. We selected the query size among the selection of {10,15,20,25,30,35}, demonstrating the user's number of feedback. The results are reported in Figure <ref>. As expected, by increasing the number of feedback, the ROUGE score increases significantly. However, the difference rate decreases through the process. Human Analysis. Since the goal of Summation is to help users make their desired summary, we conducted two human experiments to evaluate the model. In the first experiments, to assess the possibility of finding their desired information, they were asked to answer a given question about each topic. Their level of confidence in answering questions and their answers were recorded. An evaluator assessed their accuracy in answering questions. 
Among the fifteen workers, 86.67% were completely confident in their answers; however, only 57% answered completely accurately. In another task, after querying users for feedback, we asked them to select some concepts as the summary for the test data. The outputs were then shown to the users, and all of them expressed satisfaction with the results. In addition, an evaluator manually compared the outputs and reported more than 80% correlation between them. § CONCLUSION AND FUTURE WORK Extensive information in various formats is produced by single or multiple simultaneous sources in different systems and applications. For instance, data can be structured, such as data in SQL databases; unstructured, such as data stored in NoSQL systems; semi-structured, like web server logs; or streaming data from a sensor. We propose a summarization approach based on a hierarchical concept map to tackle the variety and volume of such generated big data. We trained our approach using document collections as input and employed users' feedback to generate desired summaries for users, and the approach can be extended to other data types. Many future directions are possible. First, capturing users' interests is a significant challenge in providing practical personalized information, because users are reluctant to specify their preferences: entering lists of interests may be a tedious and time-consuming process. Therefore, techniques that extract implicit information about users' preferences are the next step for making useful personalized summaries. Another potential direction is to use human feedback records to provide personalized summaries on new domains using transfer learning. Moreover, we aim to use fuzzy clustering to build the hierarchical concept map. § ACKNOWLEDGEMENT We acknowledge the Centre for Applied Artificial Intelligence at Macquarie University, Sydney, Australia, for funding this research.
http://arxiv.org/abs/2307.04703v1
20230710170154
Coexistence of self-similar and anomalous scalings in turbulent small-scale solar magnetic fields
[ "Gorobets Andrei Y.", "Berdyugina Svetlana V" ]
physics.flu-dyn
[ "physics.flu-dyn", "astro-ph.SR", "physics.plasm-ph" ]
Leibniz-Institut für Sonnenphysik (KIS), Schöneckstr. 6, Freiburg 79104, Germany Leibniz-Institut für Sonnenphysik (KIS), Schöneckstr. 6, Freiburg 79104, Germany Istituto ricerche solari Aldo e Cele Daccò (IRSOL), Faculty of Informatics, Università della Svizzera italiana, 6605 Locarno, Switzerland Coexistence of self-similar and anomalous scalings in turbulent small-scale solar magnetic fields Svetlana V. Berdyugina August 12, 2023 ================================================================================================== We report evidence that self-similarity and anomalous scalings coexist in a turbulent medium, particularly in fluctuations of the magnetic field flux density in the magnetized plasma of the solar photosphere. The structure function scaling exponents in the inertial range have been analyzed for fluctuations grouped according to the sign of the path-dependent stochastic entropy production. It is found that the scaling exponents for fluctuations with positive entropy production follow the phenomenological linear dependence for magnetohydrodynamic turbulence. For fluctuations with negative entropy production, the scaling is anomalous. In the lower solar atmosphere (photosphere), the evolution of magnetic fields is influenced by turbulent magnetoconvective motions of plasma, especially in regions with weak fields (≤ 0.1 Mx m^-2) of the so-called "quiet Sun", i.e. away from pores, sunspots, and their groups (active regions), where stronger magnetic fields suppress convective motions. The quiet Sun line-of-sight magnetic flux density (MFD) is observed as a rapidly evolving, spatially intermittent (fractal) quantity in magnetic field maps (magnetograms) <cit.>. Photospheric magnetograms (Fig. <ref>) are recorded by space missions with a high cadence during several 11-year solar cycles. The range of physical parameters in the solar atmosphere provides a unique laboratory for unprecedented continuous high-spatial-resolution studies of dynamic magnetic phenomena <cit.>. In this Letter, we report first empirical evidence for a dual character of the scaling law in temporal fluctuations of the MFD B(t) when their statistical realizations are analysed separately according to the sign of the stochastic entropy production. We employ an uninterrupted observation of the quiet Sun at the solar disk center obtained by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) space mission <cit.>. The analyzed time series consists of 51,782 magnetograms in the Fe I 617.3 nm line from 2019 December 11, 00:00:22 UT to 2020 January 06, 23:58:07 UT, with the instrument-fixed cadence Δt = 45 s. This is exactly 27 days, which is somewhat longer than one synodic rotation period of 26.24 days. The magnetogram series is considered pixel-wise as discrete, time-ordered snapshots of magnetic flux evolution in the Eulerian frame of reference. In this context, every pixel, as a probe in the field of view (FoV), provides a finite-length random realization of MFD fluctuations (also called a trajectory or path) B(t) := {B(t_1), B(t_1+Δt), …, B(t_1+nΔt)} = {b_1, b_2, …, b_n} = 𝐛, t ∈ [1,n], where t is the local time index starting at the local origin t_1, and n is the length of the trajectory. The trajectory 𝐛 is a set of identically distributed, signed, non-Gaussian random variables; the sign of b_t designates the polarity of B(t) at a given time instance, and n is an exponentially distributed random number. At a given pixel, the total number of trajectories is arbitrary.
It depends on the overall observation time, the particular solar magnetic field topology within the FoV, and the noise cutoff. Statistical properties of trajectories are assumed to be homogeneous in space for the quiet Sun, at least at the HMI spatio-temporal resolution [The empirical test of the Markov property at a higher resolution in <cit.> revealed that granular and intergranular regions had, to some extent, different statistical properties, which were neglected at that stage of the studies. More details of the relevant discrepancies were reported in <cit.>.]. Hence, trajectories of different pixels contribute to the overall statistics equally. The Markovian nature of the fluctuations enables an analysis that includes a measure of their irreversibility. Namely, Δt-transitions in 𝐛 obey the Markov property <cit.>, and so allow computing the trajectory-dependent (total) stochastic entropy production ΔS(𝐛) = ln[ p_n(b_1,b_2,⋯,b_n) / p_n(b_n,⋯,b_2,b_1) ] = ln[ (p(b_1)/p(b_n)) ∏_k=1^n-1 p(b_k+1|b_k)/p(b_k|b_k+1) ], where p, p_n and p(b_j|b_i) are, respectively, the marginal, n-joint and Δt-step conditional probability density functions (PDF). The random quantity ΔS is the measure of irreversibility of the trajectory, and its PDF has an exact symmetry relation, known as the detailed fluctuation theorem [For an introduction and review see, for example: <cit.>]: p(ΔS>0)/p(ΔS<0) = e^|ΔS|. That is, the total entropy consumption, ΔS^- ≡ ΔS<0, is exactly exponentially less probable than the total entropy generation, ΔS^+ ≡ ΔS>0, of the same magnitude |ΔS|. Hereafter, the corresponding signs are placed as superscripts in the notations of estimated quantities. The detailed pixel calculus and the Markov property test for B(t) at a higher spatial resolution are described in <cit.>. For the HMI MFD, properties of the regular Markov chains were considered in <cit.>, and the validity of the fluctuation theorems (including Eq. (<ref>)) was shown in <cit.>. Henceforth, in our investigation of the scale invariance of B(t) fluctuations of turbulent origin, we take into account the sign of ΔS, which defines two disjoint sets 𝐛^±. The conventional method of studying manifestations of scale invariance involves an analysis of the signal's self-similarity in terms of the q-order structure functions (SF) S_q(ℓ) ≡ ⟨|δ_ℓ B(t)|^q⟩ = ⟨|B(t+ℓ) − B(t)|^q⟩, where δ_ℓ(·) is an increment of a turbulent quantity at two points of the flow at a distance ℓ. Taylor's "frozen turbulence" hypothesis connects temporal and spatial scales in measurements, so scales in Eq. (<ref>) are used in units of spatial distance. The solar data we investigate do not resolve all vector components of the observable/inferred quantities like photospheric velocity and magnetic fields, and consequently details of the real flows are quite uncertain. However, we assume that Taylor's hypothesis is applicable for the MFD of the quiet Sun <cit.>. For the set of 1D trajectories of finite length, SF are computed as the ensemble average, and ℓ is expressed in units of the sampling interval Δt. The phenomenological theory of turbulence establishes fundamental scaling relations for observable quantities, and hence defines power-law dependencies between SF. The Kolmogorov phenomenology <cit.> of fully developed hydrodynamic (HD) turbulence at a high Reynolds number R = vℓ_0/ν predicts the scaling law in the inertial range λ ≪ ℓ ≪ ℓ_0: δ_ℓ v ∼ ε^1/3 ℓ^1/3, where v is the velocity, ε is the average energy dissipation rate, ν is the viscosity, and ℓ_0 and λ are the integral and dissipation scales, respectively.
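As a rough numerical illustration of the two quantities just introduced, the following sketch computes ensemble-averaged structure functions and a histogram-based estimate of the pathwise entropy production. The binning and the probability estimates are deliberate simplifications of the pixel calculus referenced above, so the snippet is illustrative rather than a reproduction of the published procedure.

```python
import numpy as np

def structure_functions(trajs, ells, qs):
    """S_q(ell) = <|B(t+ell) - B(t)|^q>, averaged over all trajectories in the ensemble.
    `ells` are separations in units of the sampling interval."""
    S = np.zeros((len(qs), len(ells)))
    for j, ell in enumerate(ells):
        incs = np.concatenate([np.abs(b[ell:] - b[:-ell]) for b in trajs if len(b) > ell])
        for i, q in enumerate(qs):
            S[i, j] = np.mean(incs ** q)
    return S

def entropy_production(traj, bins):
    """Crude estimate of Delta S for one trajectory from histogram-based marginal and
    single-step transition probabilities."""
    idx = np.digitize(traj, bins)                  # discretise the flux-density values
    n = len(bins) + 1
    counts = np.full((n, n), 1e-12)                # transition counts with a small floor
    for a, b in zip(idx[:-1], idx[1:]):
        counts[a, b] += 1.0
    p = counts.sum(axis=1) / counts.sum()          # marginal occupation probabilities
    P = counts / counts.sum(axis=1, keepdims=True) # conditional transition probabilities
    dS = np.log(p[idx[0]] / p[idx[-1]])
    for a, b in zip(idx[:-1], idx[1:]):
        dS += np.log(P[a, b] / P[b, a])
    return dS
```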
Turbulence of a magnetized plasma is described in the framework of magnetohydrodynamics (MHD). The corresponding Iroshnikov-Kraichnan phenomenology <cit.> includes the Alfvén wave effect of coupling between velocity and magnetic field fluctuations on small scales by the integral-scale magnetic field B_0 <cit.>. At a high magnetic Reynolds number Rm = v_A ℓ_0/η, the self-similar scaling exponents are δ_ℓ v ∼ δ_ℓ B ∼ [ε v_A]^1/4 ℓ^1/4, where η is the magnetic diffusivity, v_A ≡ B_0(4πρ)^-1/2 is the Alfvén velocity in B_0, ρ is the mass density, and ℓ_0 = v_A^3 ε^-1. In terms of SF, the self-similar (linear) scalings in Eqs. (<ref>-<ref>) read S_q(ℓ) ∼ ℓ^ξ(q), ξ(q) = q/m, with m=3 for HD and m=4 for MHD turbulence. To cope with experimental limitations and irregularities of flows which hinder the analysis of scaling in S_q(ℓ), the concept of Extended Self-Similarity (ESS) was proposed in Refs. <cit.>. In essence, ESS is a set of functional dependencies of the SF of any order on the SF of the order for which ξ(q)=1. Hence, for the case of MHD turbulence we focus on ESS with the relative exponents ξ_4: S_q(ℓ) ∼ [S_4(ℓ)]^ξ_4(q), ξ_4(q) = ξ(q)/ξ(4). The linear scalings in Eq. (<ref>) are violated by spatial inhomogeneities of the dissipation on small scales, known as intermittency. Thus, the scaling exponents (anomalously) deviate from the exact linear relations, as has become evident from extensive experimental and numerical studies <cit.>. Models for intermittency differ by their assumptions about the statistical properties of the energy dissipation rate ε, such as log-normal <cit.>, multifractal <cit.>, and log-Poisson <cit.>. The latter was revealed for solar wind MHD turbulence <cit.> and applied to photospheric flows <cit.>. The "standard model" of Ref. <cit.>, the non-parametric version of the log-Poisson model for MHD turbulence, ξ_4(q) = q/8 + 1 − (1/2)^q/4, is used as a reference for anomalous scaling in the results presented below. In Fig. <ref>, the SF scalings are shown according to Eq. (<ref>), computed separately for the two sets 𝐛^±. The discrepancy in slopes with respect to the sign of ΔS is clearly seen, especially for higher orders. Following ideas from Ref. <cit.>, the inertial range is defined as the range in which Kolmogorov's 4/5 law S_3(ℓ) = −(4/5)εℓ holds. For our data, we found the inertial range to be from 15 to 19. The range boundaries were modified by ±1 to compensate for the rather coarse sampling rate Δt, because linear fits showed substantial variations with the range boundaries. This modification also helps to improve the statistics of the fits. Therefore, an SF scaling (Eq. <ref>) in the inertial range is estimated by a set of independent linear fits within the extended inertial range [15±1, 19±1]. The ultimate value of the scaling exponent ξ_4 is then computed as the weighted mean of 9 exponents, one for every combination of the inertial range boundary variations given by (0, ±1). This procedure was applied to three groups of fluctuations: 𝐛^± and their joint data set. The result is shown in Fig. <ref>. Statistical robustness of the result is highlighted by the 99.99% confidence level computed by χ^2 minimization. Errors of the means are smaller than the symbols and not shown. Summarizing, anomalous scaling is an intrinsic property of the MFD fluctuations in the quiet Sun (diamonds in Fig. <ref>). The main result is the statistically significant difference between ξ^+(q) and ξ^-(q).
The former exhibits scaling exponents rather distinctly following the linear dependence q/4, in accordance with the Iroshnikov-Kraichnan phenomenology. Contrastly, fluctuations along ^--trajectories have anomalous scaling exponents, and the curve of ξ^-(q) resembles the MHD log-Poisson model (Eq. <ref>). However, we note that models describing curves of ξ(q)^- and ξ(q) are out of the scope of the present Letter. Following the arguments of She and Leveque <cit.>, one can interpret our finding that entropy consuming fluctuations could be related to entropy (energy) sinks which support building up of coherent structures at larger scales due to correlations induced by intermittency. Correspondingly, entropy generating fluctuations are related to dissipation processes according to the phenomenological cascade model. To conclude, splitting measurements according to the sign of the entropy production allows detecting an unexpected coexistence of self-similar and anomalous scalings in the inertial range of turbulent small-scale photospheric magnetic fields on the Sun. Future numerical and experimental/observational applications of the method proposed in this Letter may advance understanding of the self-similarity in turbulent phenomena. We thank Petri Käapylä for stimulating discussions. Solar Dynamics Observatory (SDO) is a mission for NASA's Living With a Star (LWS) program. The Helioseismic and Magnetic Imager (HMI) data were provided by the Joint Science Operation Center (JSOC). 10 benziExtendedSelfSimilarityDissipation1993 R. Benzi, S. Ciliberto, C. Baudet, G. Ruiz Chavarria, and R. Tripiccione. Extended Self-Similarity in the Dissipation Range of Fully Developed Turbulence. Europhysics Letters, 24(4):275, November 1993. benziExtendedSelfsimilarityTurbulent1993 R. Benzi, S. Ciliberto, R. Tripiccione, C. Baudet, F. Massaioli, and S. Succi. Extended self-similarity in turbulent flows. Physical Review E, 48(1):R29–R32, July 1993. biskampCascadeModelsMagnetohydrodynamic1994 D. Biskamp. Cascade models for magnetohydrodynamic turbulence. Physical Review E, 50(4):2702–2711, October 1994. bustamanteNonequilibriumThermodynamicsSmall2005 Carlos Bustamante, Jan Liphardt, and Felix Ritort. The nonequilibrium thermodynamics of small systems. Physics Today, 58(7):43–48, 2005. consoliniCharacterizationSolarPhotospheric1999 G. Consolini, F. Berrilli, E. Pietropaolo, R. Bruno, V. Carbone, B. Bavassano, and G. Ceppatelli. Characterization of the Solar Photospheric Velocity Field: A New Approach. In Magnetic Fields and Solar Processes, volume 448 of ESA Special Publication, page 209, December 1999. consoliniScalingBehaviorVertical1999 G. Consolini, V. Carbone, F. Berrilli, R. Bruno, B. Bavassano, C. Briand, B. Caccin, G. Ceppatelli, A. Egidi, I. Ermolli, A. Florio, G. Mainella, and E. Pietropaolo. Scaling behavior of the vertical velocity field in the solar photosphere. Astronomy and Astrophysics, 344:L33–L36, April 1999. faurobert-schollTurbulentMagneticFields1995 M. Faurobert-Scholl, N. Feautrier, F. Machefert, K. Petrovay, and A. Spielfiedel. Turbulent magnetic fields in the solar photosphere: Diagnostics and interpretation. Astronomy and Astrophysics, 298:289, June 1995. frischTurbulence1995 Uriel Frisch. Turbulence. 1995. giannattasioScalingPropertiesMagnetic2022 F. Giannattasio, G. Consolini, F. Berrilli, and P. De Michelis. Scaling properties of magnetic field fluctuations in the quiet Sun. Astronomy & Astrophysics, 659:a180, 2022. gorobetsStochasticEntropyProduction2019 A. Y. Gorobets and S. V. 
Berdyugina. Stochastic entropy production in the quiet Sun magnetic fields. Monthly Notices of the Royal Astronomical Society: Letters, 483(1):L69–L74, February 2019. gorobetsMaximumEntropyLimit2017 A. Y. Gorobets, S. V. Berdyugina, T. L. Riethmüller, J. Blanco Rodríguez, S. K. Solanki, P. Barthol, A. Gandorfer, L. Gizon, J. Hirzberger, M. noortvan Noort, J. C. Del Toro Iniesta, D. Orozco Suárez, W. Schmidt, V. Martínez Pillet, and M. Knölker. The Maximum Entropy Limit of Small-scale Magnetic Field Fluctuations in the Quiet Sun. The Astrophysical Journal Supplement Series, 233(1):5, 2017. gorobetsMARKOVPROPERTIESMAGNETIC2016 A. Y. Gorobets, J. M. Borrero, and S. Berdyugina. Markov Properties of The Magnetic Field in The Quiet Solar Photosphere. The Astrophysical Journal, 825(2):L18, July 2016. grauerScalingHighorderStructure1994 R. Grauer, J. Krug, and C. Marliani. Scaling of high-order structure functions in magnetohydrodynamic turbulence. Physics Letters A, 195(5):335–338, December 1994. guerraSpatioTemporalScalingTurbulent2015 J. A. Guerra, A. Pulkkinen, V. M. Uritsky, and S. Yashiro. Spatio-Temporal Scaling of Turbulent Photospheric Line-of-Sight Magnetic Field in Active Region NOAA 11158. Solar Physics, 290(2):335–350, 2015. harrisFluctuationTheoremsStochastic2007 R. J. Harris and G. M. Schütz. Fluctuation theorems for stochastic dynamics. Journal of Statistical Mechanics: Theory and Experiment, 2007(07):P07020–P07020, July 2007. iroshnikovTurbulenceConductingFluid1964 P. S. Iroshnikov. Turbulence of a Conducting Fluid in a Strong Magnetic Field. Soviet Astronomy, 7:566, February 1964. janssenFractalDimensionSmallscale2003 K. Janßen, A. Vögler, and F. Kneer. On the fractal dimension of small-scale magnetic structures in the Sun. Astronomy & Astrophysics, 409(3):1127–1134, October 2003. jarzynskiEqualitiesInequalitiesIrreversibility2011 Christopher Jarzynski. Equalities and Inequalities: Irreversibility and the Second Law of Thermodynamics at the Nanoscale. Annual Review of Condensed Matter Physics, 2(1):329–351, March 2011. klagesNonequilibriumStatisticalPhysics2013 Rainer Klages, W. Just, and Christopher Jarzynski, editors. Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond. Reviews of Nonlinear Dynamics and Complexity. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, 2013. kolmogorovRefinementPreviousHypotheses1962 A. N. Kolmogorov. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. Journal of Fluid Mechanics, 13(1):82–85, 1962. K41 A. N. Kolmogorov. Dokl. Akad. Nauk SSSR 31, 538 (1941) [Proc. R. Soc. London A 434, 15 (1991)]. kraichnanInertialRangeSpectrumHydromagnetic1965 Robert H. Kraichnan. Inertial-Range Spectrum of Hydromagnetic Turbulence. Physics of Fluids, 8:1385–1387, July 1965. liuComparisonLineofSightMagnetograms2012 Y. Liu, J. T. Hoeksema, P. H. Scherrer, J. Schou, S. Couvidat, R. I. Bush, T. L. Duvall, K. Hayashi, X. Sun, and X. Zhao. Comparison of Line-of-Sight Magnetograms Taken by the Solar Dynamics Observatory/Helioseismic and Magnetic Imager and Solar and Heliospheric Observatory/Michelson Doppler Imager. Solar Physics, 279(1):295–316, July 2012. marconiFluctuationDissipationResponse2008 Umberto Marini Bettolo Marconi, Andrea Puglisi, Lamberto Rondoni, and Angelo Vulpiani. Fluctuation–dissipation: Response theory in statistical physics. Physics Reports, 461(4):111–195, June 2008. meneveauSimpleMultifractalCascade1987 C. Meneveau and K. R. 
Sreenivasan. Simple multifractal cascade model for fully developed turbulence. Physical Review Letters, 59(13):1424–1427, 1987. politanoModelIntermittencyMagnetohydrodynamic1995 H. Politano and A. Pouquet. Model of intermittency in magnetohydrodynamic turbulence. Physical Review E, 52(1):636–641, July 1995. rinconSunSupergranulation2018a François Rincon and Michel Rieutord. The Sun's supergranulation. Living Reviews in Solar Physics, 15(1):6, 2018. schekochihinMHDTurbulenceBiased2022 Alexander A. Schekochihin. MHD turbulence: A biased review. Journal of Plasma Physics, 88(5):155880501, October 2022. scherrerHelioseismicMagneticImager2012 P. H. Scherrer, J. Schou, R. I. Bush, A. G. Kosovichev, R. S. Bogart, J. T. Hoeksema, Y. Liu, T. L. Duvall, J. Zhao, A. M. Title, C. J. Schrijver, T. D. Tarbell, and S. Tomczyk. The Helioseismic and Magnetic Imager (HMI) Investigation for the Solar Dynamics Observatory (SDO). Solar Physics, 275:207–227, January 2012. schouDesignGroundCalibration2012 J. Schou, P. H. Scherrer, R. I. Bush, R. Wachter, S. Couvidat, M. C. Rabello-Soares, R. S. Bogart, J. T. Hoeksema, Y. Liu, T. L. Duvall, D. J. Akin, B. A. Allard, J. W. Miles, R. Rairden, R. A. Shine, T. D. Tarbell, A. M. Title, C. J. Wolfson, D. F. Elmore, A. A. Norton, and S. Tomczyk. Design and Ground Calibration of the Helioseismic and Magnetic Imager (HMI) Instrument on the Solar Dynamics Observatory (SDO). Solar Physics, 275(1-2):229–259, January 2012. schumacherColloquiumUnusualDynamics2020 Jörg Schumacher and Katepalli R. Sreenivasan. Colloquium: Unusual dynamics of convection in the Sun. Reviews of Modern Physics, 92:041001, October 2020. seifertStochasticThermodynamicsFluctuation2012 Udo Seifert. Stochastic thermodynamics, fluctuation theorems and molecular machines. Reports on Progress in Physics, 75(12):126001, December 2012. seifertStochasticThermodynamicsThermodynamic2019 Udo Seifert. From stochastic thermodynamics to thermodynamic inference. Annual Review of Condensed Matter Physics, 10(1):171–192, March 2019. sheHierarchicalStructuresScalings1997 Zhen-Su She. Hierarchical structures and scalings in turbulence. In Oluş Boratav, Alp Eden, and Ayse Erzan, editors, Turbulence Modeling and Vortex Dynamics, Lecture Notes in Physics, pages 28–52, Berlin, Heidelberg, 1997. Springer. sheUniversalScalingLaws1994 Zhen-Su She and Emmanuel Leveque. Universal scaling laws in fully developed turbulence. Physical Review Letters, 72(3):336–339, 1994. stenfloScalingLawsMagnetic2012 J. O. Stenflo. Scaling laws for magnetic fields on the quiet Sun. Astronomy and Astrophysics, 541:A17, 2012. stolovitzkyKolmogorovRefinedSimilarity1992 G. Stolovitzky, P. Kailasnath, and K. R. Sreenivasan. Kolmogorov's refined similarity hypotheses. Physical Review Letters, 69(8):1178–1181, 1992.
http://arxiv.org/abs/2307.04651v1
20230710154937
Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning
[ "Aixuan Li", "Jing Zhang", "Yunqiu Lv", "Tong Zhang", "Yiran Zhong", "Mingyi He", "Yuchao Dai" ]
cs.CV
[ "cs.CV" ]
Salient objects attract human attention and usually stand out clearly from their surroundings. In contrast, camouflaged objects share similar colors or textures with the environment. In this case, salient objects are typically non-camouflaged, and camouflaged objects are usually not salient. Due to this inherent contradictory attribute, we introduce an uncertainty-aware learning pipeline to extensively explore the contradictory information of salient object detection (SOD) and camouflaged object detection (COD) via data-level and task-wise contradiction modeling. We first exploit the dataset correlation of these two tasks and claim that the easy samples in the COD dataset can serve as hard samples for SOD to improve the robustness of the SOD model. Based on the assumption that these two models should lead to activation maps highlighting different regions of the same input image, we further introduce a contrastive module with a joint-task contrastive learning framework to explicitly model the contradictory attributes of these two tasks. Different from conventional intra-task contrastive learning for unsupervised representation learning, our contrastive module is designed to model the task-wise correlation, leading to cross-task representation learning. To better understand the two tasks from the perspective of uncertainty, we extensively investigate the uncertainty estimation techniques for modeling the main uncertainties of the two tasks, namely task uncertainty (for SOD) and data uncertainty (for COD), and aiming to effectively estimate the challenging regions for each task to achieve difficulty-aware learning. Experimental results on benchmark datasets demonstrate that our solution leads to both state-of-the-art performance and informative uncertainty estimation. Salient Object Detection, Camouflaged Object Detection, Task Uncertainty, Data Uncertainty, Difficulty-aware Learning Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning Aixuan Li,  Jing Zhang*,  Yunqiu Lv,  Tong Zhang,  Yiran Zhong,  Mingyi He,  Yuchao Dai*  A. Li, Y. Lv, M. He and Y. Dai are with School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China and Shaanxi Key Laboratory of Information Acquisition and Processing. J. Zhang is with School of Computing, the Australian National University, Canberra, Australia. T. Zhang is with IVRL, EPFL, Switzerland. Y. Zhong is with Shanghai AI Laboratory, Shanghai, China. A preliminary version of this work appeared at <cit.>. Our code and data are available at: <https://npucvr.github.io/UJSCOD/>. A. Li and J. Zhang contributed equally. Corresponding authors: Y. Dai ([email protected]) and J. Zhang ([email protected]). This research was supported in part by National Natural Science Foundation of China (62271410) and by the Fundamental Research Funds for the Central Universities. 
August 12, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Visual salient object detection (SOD) aims to localize the salient object(s) of the image that attract human attention. The early work of saliency detection mainly relies on human visual priors based handcrafted features <cit.> to detect high contrast regions. Deep SOD models <cit.> use deep saliency features instead of handcrafted features to achieve effective global and local context modeling, leading to better performance. In general, existing SOD models <cit.> focus on two directions: 1) constructing effective saliency decoders <cit.> that facilitate high/low-level feature aggregation; and 2) designing appropriate loss functions <cit.> to achieve structure-preserving saliency detection. Unlike salient objects that immediately attract human attention, camouflaged objects evolve to blend into their surroundings, effectively avoiding detection by predators. The concept of camouflage has a long history <cit.>, and finds application in various domains including biology <cit.>, military <cit.> and other fields <cit.>. From a biological evolution perspective, prey species have developed adaptive mechanisms to camouflage themselves within their environment <cit.>, often by mimicking the structure or texture of their surroundings. These camouflaged objects can only be distinguished by subtle differences. Consequently, camouflaged object detection (COD) models <cit.> are designed to identify and localize these "subtle" differences, enabling the comprehensive detection of camouflaged objects. To address the contradictory nature of SOD and COD, we propose a joint-task learning framework that explores the relationship between these two tasks. Our investigation reveals an inverse relationship between saliency and camouflage, where a higher level of saliency typically indicates a lower level of camouflage, and vice versa. This oppositeness is clearly demonstrated in Fig. <ref>, where the object gradually transits from camouflaged to salient as the contrast level increases. Hence, we explore the correlation of SOD and COD from both data-wise and task-wise perspectives. For data-wise correlation modeling, we re-interpret the data augmentation by defining easy samples from COD as hard samples for SOD. By doing so, we achieve contradiction modeling from the dataset perspective. Fig. <ref> illustrates that typical camouflaged objects are never salient, but samples in the middle can be defined as hard samples for SOD. Thus, we achieve context-aware data augmentation by the proposed data interaction as data augmentation method. 
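The data-interaction idea can be sketched as a simple dataset-mixing step; the easiness score, threshold, and mixing ratio below are illustrative assumptions rather than values from the paper.

```python
import random
from typing import List, Tuple

def build_sod_training_list(sod_samples: List[Tuple[str, str]],
                            cod_samples: List[Tuple[str, str]],
                            cod_easiness: List[float],
                            easy_threshold: float = 0.9,
                            ratio: float = 0.2,
                            seed: int = 0) -> List[Tuple[str, str]]:
    """Augment the SOD training list (image, mask pairs) with 'easy' COD samples.

    `cod_easiness` is a per-sample score of how easy the sample is for COD
    (e.g. the IoU of a pretrained COD model against ground truth); samples above
    `easy_threshold` are re-used as hard SOD samples."""
    rng = random.Random(seed)
    easy_cod = [s for s, e in zip(cod_samples, cod_easiness) if e >= easy_threshold]
    n_extra = min(len(easy_cod), int(ratio * len(sod_samples)))
    mixed = sod_samples + rng.sample(easy_cod, n_extra)
    rng.shuffle(mixed)
    return mixed
```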
In addition, for COD, we find the performance is sensitive to the size of camouflaged objects. To explain this, we crop the foreground camouflaged objects with different percentages of background, and show their corresponding prediction maps and uncertainty maps in Fig. <ref>. We observe that the cropping based prediction uncertainty,  variance of multiple predictions, is relatively consistent with region-level detectability of the camouflaged objects, validating that performance of the model can be influenced by the complexity of the background. The foreground-cropping strategy can serve as an effective data augmentation technique and a promising uncertainty generation strategy for COD, which also simulates real-world scenarios that camouflaged objects in the wild may appear in different environments. We have also investigated the foreground cropping strategy for SOD, and observed relatively stable predictions, thus the foreground cropping is only applied to COD training dataset. Aside from data augmentation, we integrate contrastive learning into our framework to address task-wise contradiction modeling. Conventional contrastive learning typically constructs their positive/negative pairs based on semantic invariance. However, since both SOD and COD are class-agnostic tasks that rely on contrast-based object identification, we adopt a different approach for selecting positive/negative pairs based on region contrast. Specifically, given the same input image and its corresponding detected regions for the two tasks, we define region features with similar contrast as positive pairs, while features with different contrast serve as negative pairs. This contrastive module is designed to cater to class-agnostic tasks and effectively captures the contrast differences between the foreground objects in both tasks. Additionally, we observe two types of uncertainty for SOD and COD, respectively, as depicted in Fig. <ref>. For SOD, the subjective nature <cit.> and the prediction uncertainty due to themajority voting mechanism in labeling procedure, which we define as task uncertainty. On the other hand, in COD, uncertainty arises from the difficulty of accurately annotating camouflaged objects due to their resemblance to the background, which we refer to as data uncertainty. To address these uncertainties, as shown in the fifth column of Fig. <ref>, we extensively investigate uncertainty estimation techniques to achieve two main benefits: (1) a self-explanatory model that is aware of its prediction, with an additional uncertainty map to explain the model's confidence, and (2) difficulty-aware learning, where the estimated uncertainty map serves as an indicator for pixel-wise difficulty representation, facilitating practical hard negative mining. A preliminary version of our work appeared at <cit.>. Compared with the previous version, we have made the following extensions: 1): We have fully analyzed the relationship between SOD and COD from both dataset and task connection perspectives to further build their relationships. 2): To further investigate the cross-task correlations from the contrast perspective, we have introduced contrastive learning to our dual-task learning framework. 3): As an adversarial training based framework, we have investigated more training strategies for the discriminator, leading to more stable training. 4): We have conducted additional experiments to fully explain the task connections, the uncertainty estimation techniques, the experiment setting, and the hyper-parameters. 
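The foreground-cropping augmentation described earlier in this section can be sketched as follows; the margin range is an illustrative choice, not the exact setting used in the experiments.

```python
import numpy as np

def random_foreground_crop(image: np.ndarray, mask: np.ndarray,
                           min_margin: float = 0.0, max_margin: float = 0.5,
                           rng: np.random.Generator = None):
    """Crop the camouflaged object together with a randomly sized background margin.

    `mask` is the binary ground-truth map aligned with `image`; the margin is a
    fraction of the object's bounding-box size, so larger margins keep more of
    the surrounding context."""
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:                       # no labelled foreground: return unchanged
        return image, mask
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    h, w = y1 - y0 + 1, x1 - x0 + 1
    m = rng.uniform(min_margin, max_margin)
    y0 = max(0, int(y0 - m * h)); y1 = min(mask.shape[0] - 1, int(y1 + m * h))
    x0 = max(0, int(x0 - m * w)); x1 = min(mask.shape[1] - 1, int(x1 + m * w))
    return image[y0:y1 + 1, x0:x1 + 1], mask[y0:y1 + 1, x0:x1 + 1]
```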
Our main contributions are summarized as: * We propose that salient object detection and camouflaged object detection are tasks with opposing attributes for the first time and introduce the first joint learning framework which utilizes category-agnostic contrastive module to model the contradictory attributes of two tasks. * Based on the transitional nature between saliency and camouflage, we introduce data interaction as data augmentation by defining simple COD samples as hard SOD samples to achieve context-aware data augmentation for SOD. * We analyze the main sources of uncertainty in SOD and COD annotations. In order to achieve reliable model predictions, we propose an uncertainty-aware learning module as an indicator of model prediction confidence. * Considering the inherent differences between COD and SOD tasks, we propose random sampling-based foreground-cropping as the COD data augmentation technique to simulate the real-world scenarios of camouflaged objects, which significantly improves the performance. § RELATED WORK Salient Object Detection. Existing deep saliency detection models <cit.> are mainly designed to achieve structure-preserving saliency predictions. <cit.> introduced an auxiliary edge detection branch to produce a saliency map with precise structure information. Wei  <cit.> presented structure-aware loss function to penalize prediction along object edges. Wu  <cit.> designed a cascade partial decoder to achieve accurate saliency detection with finer detailed information. Feng  <cit.> proposed a boundary-aware mechanism to improve the accuracy of network prediction on the boundary. There also exist salient object detection models that benefit from data of other sources. <cit.> integrated fixation prediction and salient object detection in a unified framework to explore the connections of the two related tasks. Zeng  <cit.> presented to jointly learn a weakly supervised semantic segmentation and fully supervised salient object detection model to benefit from both tasks. Zhang  <cit.> used two refinement structures, combining expanded field of perception and dilated convolution, to increase structural detail without consuming significant computational resources, which are used for salient object detection task on high-resolution images. Liu  <cit.> designed the stereoscopically attentive multi-scale module to ensure the effectiveness of the lightweight salient object detection model, which uses a soft attention mechanism in any channel at any position, ensuring the presence of multiple scales and reducing the number of parameters. Camouflaged Object Detection. The concept of camouflage is usually associated with context <cit.>, and the camouflaged object detection models are designed to discover the camouflaged object(s) hidden in the environment. Cuthill  <cit.> concluded that an effective camouflage includes two mechanisms: background pattern matching, where the color is similar to the environment, and disruptive coloration, which usually involves bright colors along edge, and makes the boundary between camouflaged objects and the background unnoticeable. Bhajantri  <cit.> utilized co-occurrence matrix to detect defective. Pike  <cit.> combined several salient visual features to quantify camouflage, which could simulate the visual mechanism of a predator. 
Le  <cit.> fused a classification network with a segmentation network and used the classification network to determine the likelihood that the image contains camouflaged objects to produce more accurate camouflaged object detection. In the field of deep learning, Fan  <cit.> proposed the first publicly available camouflage deep network with the largest camouflaged object training set. Mei  <cit.> incorporated the predation mechanism of organisms into the camouflaged object detection model and proposed a distraction mining strategy. Zhai  <cit.> introduced a joint learning model for COD and edge detection based on graph networks, where the two modules simultaneously mine complementary information. Lv  <cit.> presented a triple-task learning framework to simultaneously rank, localize and segment the camouflaged objects. Multi-task Learning. The basic assumption of multi-task learning is that there exists shared information among different tasks. In this way, multi-task learning is widely used to extract complementary information about positively related tasks. Kalogeiton  <cit.> jointly detected objects and actions in a video scene. Zhen  <cit.> designed a joint semantic segmentation and boundary detection framework by iteratively fusing feature maps generated for each task with a pyramid context module. In order to solve the problem of insufficient supervision in semantic alignment and object landmark detection, Jeon  <cit.> designed a joint loss function to impose constraints between tasks, and only reliable matched pairs were used to improve the model robustness with weak supervision. Joung  <cit.> solved the problem of object viewpoint changes in 3D object detection and viewpoint estimation with a cylindrical convolutional network, which obtains view-specific features with structural information at each viewpoint for both two tasks. Luo  <cit.> presented a multi-task framework for referring expression comprehension and segmentation. Uncertainty-aware Learning. Difficulty-aware (or uncertainty-aware, confidence-aware) learning aims to explore the contribution of hard samples, leading to hard-negative mining <cit.>, which has been widely used in medical image segmentation <cit.>, semantic segmentation <cit.>, and other fields <cit.>. To achieve difficulty-aware learning, one needs to estimate model confidence. To achieve this, Gal  <cit.> used Monte Carlo dropout (MC-Dropout) as a Bayesian approximation, where model uncertainty can be obtained with dropout neural networks. Deep Ensemble <cit.> is another popular type of uncertainty modeling technique, which usually involves generating an ensemble of predictions to obtain variance of predictions as the uncertainty estimation. With extra latent variable involved, the latent variable models <cit.> can also be used to achieve predictive distribution estimation, leading to uncertainty modeling. Following the uncertainty-aware learning pipeline, Lin  <cit.> introduced focal loss to balance the contribution of simple and hard samples for loss updating. Li  <cit.> presented a deep layer cascade model for semantic segmentation to pay more attention to the difficult parts. Nie  <cit.> adopted adversarial learning to generate confidence levels for predicting segmentation maps, and then used the generated confidence levels to achieve difficulty-aware learning. Xie  <cit.> applied difficulty-aware learning to an active learning task, where the difficult samples are claimed to be more informative. Contrastive learning. 
The initial goal of contrastive learning <cit.> is to achieve effective feature representation via self-supervised learning. The main strategy to achieve this is through constructing positive/negative pairs via data augmentation techniques <cit.>, where the basic principle is that similar concepts should have similar representation, thus stay close to each other in the embedding space. On the contrary, dissimilar concepts should stay apart in the embedding space. Different from augmentation based self-supervised contrastive learning, supervised contrastive learning builds the positive/negative pairs based on the given labels <cit.>. Especially for image segmentation, the widely used loss function is cross-entropy loss. However, it's well known that cross-entropy loss is not robust to labeling noise <cit.> and the produced category margins are not separable enough for better generalizing. Further, it penalizes pixel-wise predictions independently without modeling the cross-pixel relationships. Supervised contrastive learning <cit.> can fix the above issues with robust feature embedding exploration, following the similar training pipeline as self-supervised contrastive learning. § OUR METHOD We propose an uncertainty-aware joint learning framework via contrastive learning (see Fig. <ref>) to learn SOD and COD in a unified framework. Firstly, we explain that these two tasks are both contradictory and closely related (Sec. <ref>), and a joint learning pipeline can benefit each other with effective context modeling. Then, we present a Contrastive Module to explicitly model the contradicting attributes of these two tasks (Sec. <ref>), with a data-interaction technique to achieve context-level data augmentation. Further, considering uncertainty for both tasks, we introduce a difficulty-aware learning network (Sec. <ref>) to produce predictions with corresponding uncertainty maps, representing the model's awareness of the predictions. §.§ Tasks Analysis §.§.§ Tasks Relationship Exploration Model Perspective: At the task level, both SOD and COD are class-agnostic binary segmentation tasks, where a UNet <cit.> structure is usually designed to achieve mapping from input (image) space to output (segmentation) space. Differently, the foreground of SOD usually stands out highly from the context, while camouflaged instances are evolved to conceal in the environment. With the above understanding about both SOD and COD, we observe complementary information between the two tasks. Given the same image, we claim that due to the contradicting attributes of saliency and camouflage, the extracted features for each task should be different from each other, and the localized region of each task should be different as well. Dataset Perspective: At the dataset level, we observe some samples within the COD dataset can also be included in the SOD dataset (see Fig. <ref>), where the camouflaged region is consistent with the salient region. However, due to the similar appearance of foreground and background, these samples are easy for COD but challenging for SOD, making them effective for serving as hard samples for SOD to achieve hard negative mining. On the other side, most of the salient foreground in the SOD dataset has high contrast, and the camouflaged regions of the same image usually differ from the salient regions. In this way, samples in the SOD dataset usually cannot serve as simple samples for COD. 
Considering the dataset relationships of both tasks, we claim that easy samples in the COD dataset can effectively serve as hard samples for SOD to achieve context-level data augmentation. §.§.§ Inherent Uncertainty Subjective Nature of SOD: To reflect the human visual system, the initial saliency annotation of each image is obtained with multiple annotators <cit.>, and then majority voting is performed to generate the final ground truth saliency map that represents the majority salient regions,  the DUTS dataset <cit.>, ECSSD <cit.>, DUT <cit.> dataset are annotated by five annotators and HKU-IS <cit.> is annotated by three annotators. Further, to maintain consistency of the annotated data, some SOD datasets adopt the pre-selection strategy, where the images contain no common salient regions across all the annotators will be removed before the labeling process,  HKU-IS <cit.> dataset first evaluates the consistency of the annotation of the three annotators, and removes the images with greater disagreement. In the end, 4,447 images are obtained from an initial dataset with 7,320 images. We argue that both the majority voting process for final label generation and the pre-selection process for candidate dataset preparation introduce bias to both the dataset and the models trained on it. We explain this as the subjective nature of saliency. Labeling Uncertainty of COD: Camouflaged objects are evolved to have similar texture and color information to their surroundings <cit.>. Due to the similar appearance of camouflaged objects and their habitats, it's more difficult to accurately annotate the camouflaged instance than generic object segmentation, especially along instance boundaries. This poses severe and inevitable labeling noise while generating the camouflaged object detection dataset, which we define as labeling uncertainty of camouflage. §.§ Joint-task Contrastive Learning As a joint learning framework, we have two sets of training dataset for each individual task, namely a SOD dataset D_s={x_i^s,y_i^s}_i=1^N_s for SOD and a COD dataset D_c={x_i^c,y_i^c}_i=1^N_c for COD, where {x_i^s,y_i^s} is the SOD image/ground truth pair and {x_i^c,y_i^c} is the COD image/ground truth pair, and i indexes images, N_s and N_c are the size of training dataset for each task. Motivated by both the task contradiction and data sharing attributes of the two tasks, we introduce a contrastive learning based joint-task learning pipeline for joint salient object detection and camouflaged object detection. Firstly, we model the task contradiction (Section <ref>) with a contrastive module. Secondly, we select easy samples by weighted MAE from the COD training dataset (Section <ref>), serving as hard samples for SOD. §.§.§ Task Correlation Modeling via Contrastive Learning To model the task-wise correlation, we design a Contrastive Module in Fig. <ref> and introduce another set of images from the PASCAL VOC 2007 dataset <cit.> as connection modeling dataset D_p={x_i^p}_i=1^N_p, from which we extract both the camouflaged features and the salient features. With the three datasets (SOD dataset D_s, COD dataset D_c and connection modeling dataset D_p), our contradicting modeling framework uses the Feature Encoder module to extract both the camouflage feature and the saliency feature. The Prediction Decoder is then used to produce the prediction of each task. We further present a Contrastive Module to model the connection of the two tasks with the connection modeling dataset. 
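Before detailing the individual components, the alternation over the three training sets can be sketched as below. This is only a schematic under assumed interfaces: the loss terms are defined in the following subsections, the exact pipeline is the one summarized in Algorithm <ref>, and the 5-step interval and 0.1 weight for the contrastive term follow the training details reported later.

import itertools

def joint_training(model, sod_loader, cod_loader, pascal_loader, optimizer,
                   ctrs_every=5, ctrs_weight=0.1, num_steps=30000):
    """Schematic joint-learning loop over D_s (SOD), D_c (COD) and the
    connection-modeling set D_p (PASCAL VOC images, no masks needed)."""
    sod_iter, cod_iter, pas_iter = (itertools.cycle(l) for l in
                                    (sod_loader, cod_loader, pascal_loader))
    for step in range(num_steps):
        x_s, y_s = next(sod_iter)                # SOD image / ground-truth batch
        x_c, y_c = next(cod_iter)                # COD image / ground-truth batch
        loss = model.sod_loss(x_s, y_s) + model.cod_loss(x_c, y_c)
        if step % ctrs_every == 0:               # contrastive module trained every few steps
            x_p = next(pas_iter)
            loss = loss + ctrs_weight * model.contrastive_loss(x_p)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()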
Feature Encoder: The Feature Encoder takes the RGB image (x^s or x^c) as input to produce task-specific predictions and also serves as the feature extractor for the Contrastive Module. We design both the saliency encoder E_α_s and camouflage encoder E_α_c with the same backbone network,  the ResNet50 <cit.>, where α_s and α_c are the corresponding network parameter sets. The ResNet50 backbone network has four groups[We define feature maps of the same spatial size as same group.] of convolutional layers of channel size 256, 512, 1024 and 2048 respectively. We then define the output features of both encoders as F_α_s={f^s_k}_k=1^4 and F_α_c={f^c_k}_k=1^4, where k indexes the feature group. Prediction Decoder: As shown in Fig. <ref>, we design a shared decoder structure for our joint learning framework. To reduce the computational burden, also to achieve feature with larger receptive field, we first attach a multi-scale dilated convolution <cit.> of output channel size C=32 to each backbone feature to generate the new backbone features F'_α_s={f^cs_k}_k=1^4 and F'_α_c={f^cc_k}_k=1^4 for each specific task from F_α_s and F_α_c. Then, we adopt the residual attention based feature fusion strategy from <cit.> to achieve high/low level feature aggregation. Specifically, the lower-level features are fed to a residual connection module <cit.> with two 3× 3 convolutional layers, which is then added to the higher level feature. The sum of the high/low level feature is then fed to another residual connection block of the same structure as above to generate the fused feature. We perform the above feature fusion operation until we reach the lowest level feature,  f^cc_1 or f^cs_1. To generate the prediction for each task, we design a classifier module, which is composed of three cascaded convolutional layers, where the kernel size of the first two convolutional layers is 3× 3, and that of the last convolutional layer is 1× 1. After generating initial predictions, we used the holistic attention module <cit.> for feature optimization to obtain further improved predictions, as the final predictions. To simplify the explanation, we only use prediction after the holistic attention module as the decoder output. We then define prediction of each task as: G_β(F_α_s) for SOD and G_β(F_α_c) for COD, where β represents the parameter set of the shared prediction decoder. Contrastive Module: The Contrastive Module 𝐶𝑡𝑟𝑠_θ aims to enhance the identity of each task with the feature of other tasks as guidance. Specifically, it takes image x^p from the connection modeling dataset D_p={x_i^p}_i=1^N_p as input to model the feature correlation of SOD and COD, where θ is parameter set of the contrastive module. For image x^p from the connection modeling dataset, its saliency and camouflage features are F^p_α_s={f^p_sk}_k=1^4 and F^p_α_c={f^p_ck}_k=1^4, respectively. With the shared decoder G_β, the prediction map are G_β(F^p_α_s) indicating the saliency map and G_β(F^p_α_c) as the camouflage map. The contrastive module decides positive/negative pairs based on contrast information, where regions of similar contrast are defined as positive pairs and the different contrast regions are defined as negative pairs. The intuition behind this is that COD and SOD are both contrast based class-agnostic binary segmentation tasks, making conventional category-aware contrastive learning infeasible to work in this scenario. 
Considering the goal of building the positive/negative pairs for contrastive learning is to learn representative features via exploring the inherent data correlation,  the category information, we argue the inherent correlation in our scenario is the contrast information. For SOD, the foreground shows higher contrast compared with the background, indicating the different contrast level. For COD, the contrast levels of foreground and background are similar. Thus given the same input image x^p, we decide positive/negative pairs based on the contrast information of the activated regions. In Fig. <ref>, we show the activation region (the processed predictions) of the same image from both the saliency encoder (first row) and camouflage encoder (second row). Specifically, given same image x^p, we compute its camouflage map and saliency map, and highlight the detected foreground region in red. Fig. <ref> shows that the two encoders focus on different regions of the image, where the saliency encoder pays more attention to the region that stands out from the context. The camouflage encoder focuses more on the hidden object with similar color or structure as the background, which is consistent with our assumption that these two tasks are contradicting with each other in general. Feature definition: Following the conventional practice of contrastive learning, our contrastive module Ctrs_θ maps image features,  F^p_α_s and F^p_α_c for the connection modeling data x^p, to the lower dimensional feature space via four spectral normed convolutional layers (SNconv) <cit.>, which is proven effective in preserving the geometric distance in the compressed space. We then compute saliency and camouflage features of the same image: F^p_sf =S(G_β(F^p_α_s),Ctrs_θ(F^p_α_s)), F^p_sb =S((1-G_β(F^p_α_s)),Ctrs_θ(F^p_α_s)), F^p_𝑐𝑓 =S(G_β(F^p_α_c),Ctrs_θ(F^p_α_c)), F^p_cb =S((1-G_β(F^p_α_c)),Ctrs_θ(F^p_α_c)), where S(·,·) computes the region feature via matrix multiplication <cit.>, where the feature maps,  Ctrs_θ(F^p_α_s), are scaled to be the same spatial size as the activation map,  G_β(F^p_α_s). F^p_sf∈ℝ^1× C and F^p_sb∈ℝ^1× C in Eq. (<ref>) represent the SOD foreground and background features, and F^p_𝑐𝑓 and F^p_cb are the COD foreground and background features, respectively. Positive/negative pair construction: According to our previous discussion, we define three sets of positive pairs based on contrast similarity: (1) The SOD background feature and COD background feature of the same image should be highly similar, indicating similar contrast information; (2) Due to the nature of the camouflaged object, the foreground and the background features of COD are similar as well as camouflaged object shares similar contrast with the background; (3) Similarly, the COD foreground feature and SOD background feature are also similar in contrast. On the other hand, the negative pair consists of SOD foreground feature and background feature. Contrastive loss: Given the positive/negative pairs, we follow <cit.> and define the contrastive loss as: ℒ_ctrs=-log∑_pos/∑_pos+exp(c(F^p_sf,F^p_sb)), where c(· ) measures the cosine similarity of the normalized vectors. ∑_pos represents the similarity of positive pairs, which is defined as: ∑_pos = exp(c(F^p_cf,F^p_cb))+exp(c(F^p_sb,F^p_cb))+exp(c(F^p_sb,F^p_cf)). §.§.§ Data Interaction In Section <ref>, we discuss the contradicting modeling strategy to model the two tasks from the model correlation perspective. 
In this section, we further explore the task relationships from dataset perspective, and introduce data interaction as data augmentation. Sample selection principle: As shown in Fig. <ref>, saliency and camouflage are two properties that can transfer from each other. We find that there exist samples in the COD dataset that are both salient and camouflaged. We argue that those samples can be treated as hard samples for SOD to achieve robust learning. The main requirement is that the activation of those samples for SOD and COD should be similar. In other words, the predictions of the selected images for both tasks need to be similar. To select those samples from the COD dataset, we resort to weighted Mean Absolute Error (𝑤𝑀𝐴𝐸), and select samples in the COD dataset <cit.> which achieve the smallest 𝑤𝑀𝐴𝐸 by testing it using a trained SOD model. The weighted mean absolute error 𝑤𝑀𝐴𝐸 is defined as : 𝑤𝑀𝐴𝐸 = ∑_u=1^W∑_v=1^H |y^u, v - p^u,v |/∑_u=1^W∑_v=1^H y^u, v, where u,v is the pixel index, p represents the model prediction, y is the corresponding ground-truth, and W and H indicate size of y. Compared with mean absolute error, 𝑤𝑀𝐴𝐸 avoids the biased selection caused by different sizes of the foreground object(s). Data interaction: For the COD training dataset D_c ={x_i^c, y_i^c}_i=1^N_c and the trained SOD model M_θ_s, we obtain saliency prediction of the images in D_c as P^c_s=M_θ_s({x^c})={p^c_i}_i=1^N_c, where p_i^c is the saliency prediction of the COD training dataset. We assume that easy samples for COD can be treated as hard samples for SOD as shown in Fig. <ref>. Then we select M=403 samples D_c^M with the smallest 𝑤𝑀𝐴𝐸 in D_c via Eq. (<ref>), and add in our SOD training dataset <cit.> as a data augmentation technique. We show the selected samples in Fig. <ref>, which clearly illustrates the partially positive connection of the two tasks at the dataset level. §.§.§ Foreground Cropping as Data Augmentation: Considering the real-life scenarios, camouflaged objects can appear in different sizes, we introduce foreground cropping to achieve context-aware data augmentation. Note that we only perform foreground cropping for COD as the prediction of SOD is relatively stable with different sizes of the foreground object(s). Specifically, we first define the largest bounding box region that covers all the camouflaged objects as the compact cropping (CCrop). Then, we obtain the median cropping (MCrop) and loose cropping (LCrop) by randomly extending 0-80 and 0-150 pixels respectively randomly outward along the compact bounding box. We perform cropping on the raw images and resize the cropped image back to the pre-defined training image size for training. §.§ Uncertainty-aware Learning In Section <ref>, we discussed that both SOD and COD have inherent uncertainty, where the subjective nature of SOD poses serious model uncertainty <cit.> for SOD and difficulty of labeling introduces data uncertainty <cit.> for COD. As shown in Fig. <ref>, for the SOD dataset, the uncertainty comes from the ambiguity of saliency. For the COD dataset, the uncertainty mainly comes from the difficulty of labeling (the accuracy of y_i). To model the uncertainty of both tasks for reliable model generation, we introduce an uncertainty-aware adversarial training strategy to model the task-specific uncertainty in our joint learning framework. 
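Returning to the data interaction step above, the weighted MAE and the selection of the easiest COD samples can be sketched as follows; the dataset and model interfaces are assumptions, and predictions are taken as logits from an already trained SOD model.

import torch

def weighted_mae(pred, gt, eps=1e-8):
    """Weighted MAE defined above: sum |y - p| / sum y, computed per image."""
    return torch.abs(gt - pred).sum() / (gt.sum() + eps)

@torch.no_grad()
def select_easy_cod_samples(sod_model, cod_dataset, num_samples=403):
    """Rank COD training images by the wMAE of a trained SOD model and keep
    the ones it already segments well; these camouflaged-yet-salient images
    are then added to the SOD training set as hard SOD samples."""
    scores = []
    for idx in range(len(cod_dataset)):
        image, gt = cod_dataset[idx]                            # (3, H, W), (H, W)
        pred = torch.sigmoid(sod_model(image.unsqueeze(0)))[0, 0]
        scores.append((weighted_mae(pred, gt).item(), idx))
    scores.sort(key=lambda s: s[0])                             # smallest wMAE first
    return [idx for _, idx in scores[:num_samples]]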
Adversarial learning framework: Following the conventional practice of generative adversarial network (GAN) <cit.>, we design a fully convolutional discriminator network to evaluate confidence of the predictions. The fully convolutional discriminator network D_γ consists of five SNconv layers <cit.> of kernel size 3× 3. As a conditional generation task, the fully convolutional discriminator takes the prediction/ground truth and the conditional variable,  the RGB image, as input, and produces a one-channel confidence map, where γ is the network parameter set. Note that we have batch normalization and leaky relu layers after the first four convolutional layers. D_γ aims to distinguish areas of uncertainty, which produce all-zero output with ground truth y as input, and produce |p-y| output with prediction map p as input. In our case, the fully convolutional discriminator aims to discover the hard (or uncertain) regions of the input image. We use the same structure of discriminators with parameter sets γ_s and γ_c for SOD and COD respectively, to identify the two types of challenging regions,  the subjective area for SOD, and the ambiguous regions for COD. Uncertainty-aware learning: For the prediction decoder module, we first have the task-specific loss function to learn each task. Specifically, we adopt the structure-aware loss function <cit.> for both SOD and COD, and define the loss function as: ℒ_str(p,y)=ω*ℒ_ce(p,y)+ℒ_iou^ω(p,y), where ω is the edge-aware weight, which is defined as ω=1+5* | (avg_pool(y)-y) |, y is task-specific ground truth, ℒ_ce is the binary cross-entropy loss, ℒ_iou^ω is the weighted boundary-IOU loss <cit.>. In this way, the task specific loss functions ℒ_str^s and ℒ_str^c for SOD and COD are defined as: ℒ_str^s=ℒ_str(G_β(F_α_s),y^s), ℒ_str^c=ℒ_str(G_β(F_α_c),y^c), To achieve adversarial learning, following <cit.>, we further introduce adversarial loss function to both SOD and COD predictors, which is defined as a consistency loss between discriminators prediction of prediction map and discriminators prediction of ground-truth, aiming to fool the discriminators that the prediction of SOD or COD is the actual ground truth. The adversarial loss functions (ℒ_adv^s and ℒ_adv^c) for SOD and COD, respectively, are defined as: ℒ_adv^s = ℒ_ce(D_γ_s(x^s,G_β(F_α_s)), D_γ_s(x^s,y^s)), ℒ_adv^c =ℒ_ce(D_γ_c(x^c,G_β(F_α_c)), D_γ_c(x^c,y^c)), Both the task specific loss in Eq. (<ref>), Eq. (<ref>) and the adversarial loss in Eq. (<ref>), Eq. (<ref>) are used to update the task-specific network (the generator). To update the discriminator, following the conventional GAN, we want it to distinguish areas of uncertainty clearly. Due to the inherent uncertainty that cannot be directly described, the uncertainty in inputting the ground truth cannot be accurately represented. However, because the correctly annotated regions are dominant in the complete dataset, we believe that the network can perceive the areas that are difficult to learn. The adversarial learning mechanism makes it difficult for the discriminator to distinguish between predicted and ground truth maps, and it can differentiate between noisy ground truth images and areas where RGB images cannot be aligned. Therefore, the output of the discriminator when inputting ground truth is defined as an all-zero map. Additionally, it produces a residual output for the prediction map. The outputs corresponding to different inputs of the discriminator are shown in Fig. <ref>. 
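For reference, the structure-aware loss ℒ_str defined above can be implemented along the following lines. This is a sketch: predictions are assumed to be logits, and the 31×31 average-pooling window used for the edge-aware weight ω is an assumption, since the window size is not specified here.

import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    """Edge-aware weighted BCE plus weighted IoU; pred: (B,1,H,W) logits,
    mask: (B,1,H,W) binary ground truth."""
    # edge-aware weight omega = 1 + 5 * |avg_pool(y) - y|, large near object boundaries
    weit = 1 + 5 * torch.abs(F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask)

    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction="none")
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))

    prob = torch.sigmoid(pred)
    inter = (prob * mask * weit).sum(dim=(2, 3))
    union = ((prob + mask) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)

    return (wbce + wiou).mean()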
Then, the discriminators (D_γ_s and D_γ_c) are updated via: ℒ_dis^s=ℒ_ce(D_γ_s(x^s,G_β(F_α_s)), |G_β(F_α_s)-y^s|), + ℒ_ce(D_γ_s(x^s,y^s),0), ℒ_dis^c=ℒ_ce(D_γ_c(x^c,G_β(F_α_c)), |G_β(F_α_c)-y^c|), + ℒ_ce(D_γ_c(x^c,y^c),0), Note that the two discriminators are updated separately. §.§ Objective Function As a joint confidence-aware adversarial learning framework, we further introduce the objective functions in detail for better understanding of our learning pipeline. Firstly, given a batch of images from the SOD training dataset x^s, we define the confidence-aware loss with contrastive modeling for the generator as: ℒ^s = ℒ_str^s +λ_adv*ℒ_adv^s+λ_ctrs*ℒ_ctrs, where ℒ_str^s is the task specific loss, defined in Eq. (<ref>), ℒ_avd^s is the adversarial loss in Eq. (<ref>), and ℒ_ctrs is the contrative loss in Eq. (<ref>). The parameters λ_adv=1,λ_ctrs=0.1 are used to balance the contribution of adversarial loss/contrastive loss for robust training. Similarly, for image batch x^c from the COD training dataset, the confidence-aware loss with contrastive modeling for the generator is defined as: ℒ^c = ℒ_str^c + λ_adv*ℒ_adv^c+λ_ctrs*ℒ_ctrs. The discriminators are optimized separately, where D_γ_s and D_γ_c are updated via Eq. (<ref>) and Eq. (<ref>). Note that, we only introduce contrastive learning to our joint-task learning framework after every 5 steps, which is proven more effective in practice. We show the training pipeline of our framework in Algorithm <ref> for better understanding of the implementation details. § EXPERIMENTAL RESULTS §.§ Setting: Dataset: For salient object detection, we train our model using the augmented DUTS training dataset <cit.> via data interaction (see Sec. <ref>), and testing on six other testing dataset, including the DUTS testing datasets, ECSSD <cit.>, DUT <cit.>, HKU-IS <cit.>, PASCAL-S dataset <cit.> and SOD dataset <cit.>. For camouflaged object detection, we train our model using the benchmark COD training dataset, which is a combination of COD10K training set <cit.> and CAMO training dataset <cit.>, and test on four camouflaged object detection testing sets, including the CAMO testing dataset <cit.>, CHAMELEON <cit.>, COD10K testing dataset <cit.> and NC4K dataset <cit.>. Evaluation Metrics: We use four evaluation metrics to evaluate the performance of the salient object detection models and the camouflaged object detection models, including Mean Absolute Error (ℳ), Mean F-measure (F_β), Mean E-measure <cit.> (E_ξ) and S-measure <cit.> (S_α). Mean Absolute Error (ℳ): measures the pixel-level pairwise errors between the prediction s and the ground-truth map y, which is defined as: ℳ = ∑_u=1^W∑_v=1^H |y^u, v - s^u,v |/W × H, where W and H indicate size of the ground-truth map. Mean F-measure (F_β): measures the precision and robustness of the model, which is defined as: F_β = TP/TP + 1/2(FP + FN), where TP denotes the number of true positives, FP shows the false positives and FN indicates the false negatives. Mean E-measure (E_ξ): measures the pixel-level matching and image-level statistics of the prediction <cit.>, which is defined as: E_ξ = 1/W × H∑_u=1^W∑_v=1^H ϕ_p(u, v), where ϕ_p(u, v) is the alignment matrix <cit.>, measuring the alignment of model prediction and the ground truth. S-measure (S_α): measures the regional and global structural similarities between the prediction and the ground-truth <cit.> as: S_α = α· S_o + (1 - α) · S_r. 
where S_o measures the global structural similarity, in terms of the consistencies in the foreground and background predictions and contrast between the foreground and background predictions, S_r measures the regional structure similarity, and α = 0.5 balances the two similarity measures following <cit.>. Training details: We train our model in Pytorch with ResNet50 <cit.> as backbone, as shown in Fig. <ref>. Both the encoders for saliency and camouflage branches are initialized with ResNet50 <cit.> trained on ImageNet, and other newly added layers are initialized by default. We resize all the images and ground truth to 352×352, and perform multi-scale training. The maximum step is 30000. The initial learning rate are 2e-5, 2e-5 and 1.2e-5 with Adam optimizer for the generator, discriminators and contrastive module respectively. The whole training takes 26 hours with batch size 22 on an NVIDIA GeForce RTX 3090 GPU. §.§ Performance Comparison Quantitative Analysis: We compare the performance of our SOD branch with SOTA SOD models as shown in Table <ref>. One observation from Table <ref> is that the structure-preserving strategy is widely used in the state-of-the-art saliency detection models, SCRN <cit.>, F^3Net <cit.>, ITSD <cit.>, and it can indeed improve model performance. Our method shows significant improvement in performance on four evaluation metrics compared to other SOD methods, except for the SOD dataset <cit.>. Due to the small size of the SOD dataset <cit.>(300 images), we believe that fluctuations in predictions are reasonable. We also compare the performance of our COD branch with SOTA COD models in Table <ref>. Except for COD10k<cit.>, where our method is slightly inferior to ZoomNet <cit.>, our method shows significant superiority over all other COD methods on all datasets. The reason for this may be that ZoomNet <cit.> was tested at resolution 384 × 384, while our method was tested at resolution 352 × 352, and resolution can affect the performance of COD. The consistent best performance of our camouflage model further illustrates the effectiveness of the joint learning framework. Qualitative Analysis: Further, we show predictions of ours and SOTA models of SOD method in Fig. <ref>, and COD method in Fig. <ref>, where the Uncertainty is obtained based on the prediction from the discriminator. Fig. <ref> shows that we produce both accurate prediction and reasonable uncertainty estimation, where the brighter areas of the uncertainty map indicate the less confident regions. It can be observed that our approach can better distinguish the boundaries between salient objects and the background. Fig. <ref> illustrates that our proposed joint learning approach and random-sampling based foreground cropping can better localize camouflaged targets. Further, the produced uncertainty map clearly represents model awareness of the prediction, leading to interpretable prediction for the downstream tasks. Run-time Analysis: For COD task, the inference time of our model is 53.9 ms per image. And for SOD task, the inference time of our model is 40.4 ms per image on an NVIDIA GeForce RTX 3090 GPU, which is comparable to the state-of-the-art model in terms of speed. §.§ Ablation Study We extensively analyze the proposed joint learning framework to explain the effectiveness of our strategies, and show the performance of our SOD and COD models in Table <ref> and Table <ref> respectively. Note that, unless otherwise stated, we do not perform multi-scale training for the related models. 
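The per-image computation of the two simplest metrics used in these tables, ℳ and F_β, can be sketched as follows; the fixed binarization threshold for F_β is an assumption (in practice the score is commonly averaged over a sweep of thresholds).

import torch

def mae(pred, gt):
    """Mean Absolute Error between a prediction and its ground truth, both in [0, 1]."""
    return torch.abs(pred - gt).mean()

def f_measure(pred, gt, threshold=0.5):
    """F-measure following the definition above: TP / (TP + 0.5 * (FP + FN))."""
    p = (pred >= threshold).float()
    tp = (p * gt).sum()
    fp = (p * (1 - gt)).sum()
    fn = ((1 - p) * gt).sum()
    return tp / (tp + 0.5 * (fp + fn) + 1e-8)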
Train each individual task: We use the same Feature encoder, Prediction decoder in Fig. <ref> to train the SOD model with original DUTS dataset and the COD model trained without random-sampling based foreground cropping following the same training related setting as in the Training details section, and show their performance as SSOD and SCOD, respectively. And we used the augmented DUTS dataset and foreground cropping COD training dataset to train the SOD model and the COD model separately, the results are shown as ASOD and ACOD. The comparable performance of SSOD and SCOD with their corresponding SOTA models proves the effectiveness of our prediction decoder. Further, the two data augmentation based models show clear performance improvement compared with training directly with the raw dataset, especially for the COD task, where foreground cropping is applied. We generated the augmented SOD dataset via data interaction (see Sec. <ref> and Fig. <ref>). Experimental results show a reasonable performance improvement, indicating that our proposed data augmentation techniques are effective in enriching the diversity of the training data. Joint training of SOD and COD: We train the Feature encoder and Prediction decoder within a joint learning pipeline to achieve simultaneous SOD and COD. The performance is reported as JSOD1 and JCOD1, respectively. For the COD task, there was a slight improvement in performance compared to the uni-task setting, indicating that under the joint learning framework, SOD can provide effective prediction optimization for COD. For SOD task, there was a slight decrease in performance, which we believe is due to the lack of consideration of the contradicting attribute between the two tasks. The subsequent experiments in the paper fully demonstrate this point. Joint training of SOD and COD with contrastive learning: We add the task connection constraint to the joint learning framework,  the contrastive module in particular, and show performance as JSOD2 and JCOD2 respectively. As discussed in Sec. <ref>, our contrastive module is designed to enhance the context information, and the final results show performance improvement for SOD. However, we observe deteriorated performance for COD when the contrastive module is applied. We have analyzed the predictions and find that the context enhancement strategy via contrastive learning can be a double-edged sword, which is effective for SOD but leads to performance deterioration for COD. Different from the conventional way of constructing positive/negative pairs based on augmentation or category information, SOD and COD are both class-agnostic tasks, and our positive/negative pairs are designed based on contrast information. Experimental results explain its effectiveness for high-contrast based foreground detection,  salient object detection, while minimal context difference between foreground and background of COD poses new challenges for applying contrastive learning effectively to achieve distinguishable foreground/background feature representation. Joint adversarial training of SOD and COD: Based on the joint learning framework (JSOD1 and JCOD1), we further introduce the adversarial learning pipeline, and show performance as JSOD3 and JCOD3. We observe relatively comparable performance of JSOD3 (JCOD3) to JSOD1 (JCOD1), explaining that the adversarial training pipeline will not sacrifice model deterministic performance. 
Note that with adversarial training, our model can output prediction uncertainty with single forward, serving as an auxiliary output to explain confidence of model output (see Uncertainty in Fig. <ref> and Fig. <ref>). The proposed joint framework: We report our final model performance with both the contrastive module and the adversarial learning solution as Ours. As a dual-task learning framework, Ours shows improved performance compared with models with each individual strategy,  contrastive learning and adversarial training. As discussed in Sec. <ref>, the former is introduced to model the task-wise correlation, and the latter is presented to model the inherent uncertainty within the two tasks. Although these two strategies show limitations for some specific datasets, we argue that as a class-agnostic task, both our contrast based positive/negative pair construction for contrastive learning and residual learning based discriminator learning within the adversarial training pipeline are effective in general, and more investigation will be conducted to further explore their contributions for the joint learning of the contradictory tasks. §.§ Framework Analysis As discussed in Sec. <ref>, SOD and COD are correlated from both task's point of view and the data's perspective. In this Section, we further analyze their relationships and the inherent uncertainty modeling techniques for SOD and COD. §.§.§ Data interaction analysis SOD and COD are both context based tasks (see Fig. <ref>), and can be transformed into each other, where the former represents the attribute of object(s) with high-contrast and the latter is related to concealment. Considering the opposite object attribute of saliency and camouflage, we introduce a simple data selection strategy as data augmentation for saliency detection. Based on the nature of the two task, we explicitly connected the SOD and COD datasets. Experimental results show that incorporating an additional 3.8% of data, specifically 403 out of 10,553 images, led to performance improvement for SOD, comparing ASOD and SSOD in Tabel <ref>. §.§.§ Task interaction analysis In our preliminary version <cit.>, we used the entire PASCAL VOC 2007 as a bridge dataset to model the contradictory properties of SOD and COD via similarity modeling. Here, we apply contrative learning based on contrast information instead, which is proven effective for SOD, comparing JSOD2 and JSOD1 in Tabel <ref>. As contrastive learning is sensitive to the positive/negative pools, and PASCAL VOC 2007 dataset contains samples that pose challenges for either SOD or COD to decide the foreground, we thus selected a portion of the images from the bridge dataset as the updated PASCAL dataset. Specifically, we tested the PASCAL VOC 2007 dataset using the trained SOD and COD models to obtain the weighted MAE of the SOD and COD prediction maps. Then, we selected 200 images from the PASCAL VOC 2007 dataset with the smallest weighted MAE as the new bridge dataset for training the contradicting modeling module. The contradicting module is trained every 5 steps of the other modules to avoid involving feature conflicting for COD. Although our contrastive learning solution is proven effective for SOD, the final performance still shows deteriorated performance of COD, comparing JCOD2 and JCOD1 in Tabel <ref>. The main reason is that the contrastive learning module tries to push the feature spaces of foreground and background to be close as Eq. 
(<ref>), while the main task of COD is to distinguish the foreground from the background. These contradicting objectives make it harder for the COD task to converge. §.§.§ Discriminator analysis Considering that the uncertainty regions of both tasks are associated with the image, we concatenate the prediction/ground truth with the image and feed it to the discriminator. Following <cit.>, we define the portions of a network's incorrect predictions as areas that are difficult to learn. In the early stages of training, the network fits the correctly annotated regions, and in later training, the predicted maps gradually approach the ground truth maps with the uncertain/noisy annotations <cit.>. When image information is introduced, the areas that are difficult to predict or annotated incorrectly (inherent uncertainty) can be gradually discovered under the guidance of the RGB image. §.§ Hyper-parameters analysis In our joint learning framework, several hyper-parameters affect the final performance, including the maximum number of iterations, the base learning rates, and the weights of the contrastive learning loss and the adversarial loss. We found that although the SOD training dataset is three times the size of the COD dataset, the COD images are more complex than the SOD images. Therefore, we kept the same number of iterations for the SOD and COD tasks. Due to the overlapping regions of saliency and camouflage, we trained the contrastive learning module only every 5 steps to avoid introducing too much conflict into COD. With the same goal, we set the weight of the contrastive loss to 0.1. For the Confidence estimation module, we observed that an excessively large adversarial training loss may lead to over-fitting on noise. Our main goal in using adversarial learning is to provide reasonable uncertainty estimation. In this case, we define the ground-truth output of the discriminator as the residual between the main network prediction and the corresponding ground truth, and set the weights of Eq. (<ref>) and Eq. (<ref>) to 1.0 to achieve a trade-off between model performance and effective uncertainty estimation. § CONCLUSION In this paper, we proposed the first joint salient object detection and camouflaged object detection framework to explore the contradicting nature of these two tasks. Firstly, we conducted an in-depth analysis of the intrinsic relationship between the two tasks. Based on it, we designed a contrastive module to model the task-wise correlation, and a data interaction strategy to achieve context-aware data augmentation for SOD. Secondly, considering that camouflage is a local attribute, we proposed random-sampling-based foreground cropping as the COD data augmentation technique. Finally, uncertainty-aware learning is explored to produce uncertainty estimation within a single forward pass. Experimental results across different datasets prove the effectiveness of our proposed joint learning framework. We observed that although contrast-based task-wise contrastive learning is proven effective for SOD, it damages the performance of COD due to the contradicting attributes of these two tasks. More investigation will be conducted to further explore informative feature representation learning via contrastive learning for class-agnostic tasks.
http://arxiv.org/abs/2307.05463v1
20230711175015
EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone
[ "Shraman Pramanick", "Yale Song", "Sayan Nag", "Kevin Qinghong Lin", "Hardik Shah", "Mike Zheng Shou", "Rama Chellappa", "Pengchuan Zhang" ]
cs.CV
[ "cs.CV" ]
EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone Shraman Pramanick^1,2 † Yale Song^2 Sayan Nag^3 Kevin Qinghong Lin^4 Hardik Shah^2 Mike Zheng Shou^4 Rama Chellappa^1 Pengchuan Zhang^2 ^1Johns Hopkins University, ^2Meta AI, ^3University of Toronto, ^4National University of Singapore August 12, 2023 =================== Video-language pre-training (VLP) has become increasingly important due to its ability to generalize to various vision and language tasks. However, existing egocentric VLP frameworks utilize separate video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of a unified system. In this work, we introduce the second generation of egocentric video-language pre-training (EgoVLPv2), a significant improvement over the previous generation, by incorporating cross-modal fusion directly into the video and language backbones. EgoVLPv2 learns a strong video-text representation during pre-training and reuses the cross-modal attention modules to support different downstream tasks in a flexible and efficient manner, reducing fine-tuning costs. Moreover, our proposed fusion-in-the-backbone strategy is more lightweight and compute-efficient than stacking additional fusion-specific layers. Extensive experiments on a wide range of VL tasks demonstrate the effectiveness of EgoVLPv2, which achieves consistent state-of-the-art performance over strong baselines across all downstream tasks. Our project page can be found at https://shramanpramanick.github.io/EgoVLPv2/. ^†Part of this work was done during an internship at Meta AI. § INTRODUCTION Video-Language Pre-training (VLP) has proven to be the de-facto solution for a variety of video-text tasks, e.g., video-text retrieval <cit.>, VQA <cit.>, zero-shot recognition <cit.>, and video-text grounding <cit.>. This is fueled by recent advances in vision <cit.> and language <cit.>, coupled with large-scale data <cit.>. Existing video-language datasets generally fall under two categories: third-person view and first-person view (egocentric). The noticeable domain gap between them restricts VLP frameworks pre-trained on third-person videos from performing well on egocentric benchmarks <cit.>. However, the recent introduction of the massive-scale egocentric dataset Ego4D <cit.> helps unlock the full potential of egocentric VLP. Existing egocentric VLP approaches <cit.> pre-train separate (dual) video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of unified egocentric VL frameworks. Moreover, they lack strong zero-shot inference ability on multi-modal downstream tasks. This issue is commonly addressed by stacking dedicated fusion layers on top of the dual video and text encoders <cit.>, or with a shared video-language architecture <cit.>. However, these approaches introduce a large number of fusion-specific parameters, and the resulting encoder cannot be directly applied to uni-modal (video-only) tasks. In this work, we present the second generation of egocentric VLP (EgoVLPv2), a significant improvement over the previous generation <cit.>, by incorporating cross-modal fusion directly into the video and language backbones.
Our approach improves over existing VLP frameworks by: (i) fewer fusion parameters compared to stacked fusion-specific transformer layers or shared encoders, requiring less GPU memory, compute resources, and training time; (ii) the flexibility to switch between dual and fusion encoders, by turning on and off cross-attention fusion using a gating mechanism; (iii) being applicable to both uni- and multi-modal tasks. Inserting cross-modal fusion directly into the backbone helps unify a wide range of dual- and fusion-encoder-based downstream tasks. Specifically, the “switching” ability of enables us to utilize the same pre-trained encoders for fast retrieval and grounding tasks, which require dual and fusion encoders, respectively. Moreover, in contrast to existing egocentric VLP frameworks that learn task-specific fusion parameters during fine-tuning, reuses the pre-trained cross-attention modules across different tasks, significantly reducing the fine-tuning cost. This enables us to introduce query-focused video summarization as a downstream task, which has recently gained attention in the community <cit.>. The scarcity of annotated data has been a bottleneck to training decent-sized models end-to-end on this task, with the only available egocentric dataset, QFVS <cit.>, providing merely 135 video-query training samples. achieves new state-of-the-art results on QFVS with a decent margin over the baselines. In summary, our contributions are: (i) We advance a step forward in egocentric VLP by proposing , the second generation of EgoVLP <cit.> with cross-modal fusion in the backbone. Our proposed framework can switch between dual and fusion encoders and requires 45% lesser compute (GMACs) than learning additional fusion-specific transformer layers. (ii) The switching capability of allows us to unify a wide range of dual- and fusion-encoder-based downstream tasks under the same VLP framework and reduce the task-specific fine-tuning cost by employing the same pre-trained cross-attention modules across different video-language tasks. (iii) We demonstrate the effectiveness of on eight egocentric benchmarks and achieve state-of-the-art performance among comparable-sized backbones. We summarize these results in Figure <ref>. § RELATED WORKS §.§ VLP Frameworks Video-language pre-training (VLP) has attracted increasing attention in recent years, following the success of image-language pre-training <cit.> and their applications <cit.>. There are three broad categories of VLP frameworks (see Figure <ref>): Dual Encoders: Many existing egocentric VLP frameworks <cit.> falls into this category. They use separate video and language backbones and learn task-specific cross-modal fusion during fine-tuning <cit.>. They are commonly trained using InfoNCE <cit.> or MIL-NCE <cit.> objectives, and have been successful in video-text retrieval. Shared Encoder: Approaches that learn a combined encoder for video and text fall under this category <cit.>. They are modality independent and can be applied to an image, video, text, audio, time-series, and single-view 3D data. Common learning objectives include masked language modeling <cit.>, masked frame modeling <cit.>, masked token modeling <cit.>, masked modal modeling <cit.>, sentence ordering modeling <cit.>, frame ordering modeling <cit.>, and video-text matching <cit.>. Encoders with Stacked Fusion Layers: This line of work uses dedicated cross-modal fusion layers on top of dual encoders <cit.>, trained using similar objectives as shared encoders. 
The latter two categories introduce a large number parameters for cross-modal fusion. In this work, we propose a fourth category (Figure <ref> (d)) by inserting cross-modal fusion in uni-modal backbones using a gating mechanism. Our framework is flexible to act as either dual or shared encoders by switching cross-attention modules off and on. §.§ Video-Language Datasets The success of VLP can be partially attributed to the availability of large-scale open-world video-text datasets such as ActivityNet <cit.>, WebVid-2M <cit.>, and HowTo100M <cit.>. These datasets comprise videos sourced from the Web, such as YouTube, and are paired with the corresponding ASR captions, making them popular for VLP pre-training. Despite their impressive size, these existing video-text pretraining datasets typically feature 3rd-person views. On the other hand, egocentric videos has received increasing interests from the community. Previous egocentric datasets <cit.> were small-scale and domain-specific. The recently released Ego4D <cit.> is the first massive-scale egocentric dataset consisting of 3670 hours of videos collected by 931 people from 74 locations across 9 different countries world-wide. Recently, EgoClip <cit.> offered a filtered version of Ego4D with variable-length clip intervals instead of single timestamps. We train our proposed framework, , on the EgoClip version of Ego4D. § §.§ Fusion in the Backbone We use TimeSformer <cit.> and RoBERTa <cit.> as our video and language backbones. However, such separate (dual) uni-modal encoder design does not capture cross-modality interaction and, thus, fails to produce fine-grained multi-modal representation. Existing VLP frameworks achieve cross-modal fusion by: (i) learning a shared architecture <cit.> or stack fusion layers on top of dual encoders <cit.>, or (ii) learning cross-modal fusion during fine-tuning <cit.>. While the former offers superior cross-modal representation and zero-shot inference ability on multi-modal downstream tasks, they introduce a large number of fusion parameters than the latter. In this work, we insert cross-modal fusion into the top few layers of uni-modal backbones to strike a balance between the two ideas. Figure <ref> shows the architecture of . Each TimeSformer encoder layer has a divided space-time attention module containing temporal and spatial self-attentions with residual connections. The output of space-time attention at k^th encoder layer, z^(k), can be expressed as: x̂^(k)_vid = x^(k-1)_vid + Temp-SA(x^(k-1)_vid) z^(k) = x^(k-1)_vid + Spa-SA(x̂^(k)_vid) = Space-Time(x^(k-1)_vid) where x^(k-1)_vid is the output of the (k-1)^th encoder layer, Temp-SA and Spa-SA represent temporal and spatial self-attention blocks, respectively. We insert multi-modal fusion inside the backbone by introducing gated cross-attention after the space-time attention module. Hence, the output of k^th fused TimeSformer layer, x^(k)_vid , can be expressed as: z^(k) = Space-Time(x^(k-1)_vid) x^(k)_vid = x^(k-1)_vid + z^(k) + α * CA( z^(k), x^(k-1)_text) x^(k)_vid = x^(k)_vid + FFN(x^(k)_vid) where x^(k-1)_text is the output from the (k-1)^th RoBERTa layer, CA, FFN denote cross-attention block and feed-forward network, respectively, and α is a learnable gating parameter initialized from 0. Each RoBERTa layer contains multi-head self-attention <cit.> followed by feed-forward layers. 
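Before turning to the text side, the gated cross-attention insertion described by the equations above can be sketched as follows. This is a simplified illustration: `space_time_attn` stands for an existing divided space-time attention module (with its residual connections inside), the normalization layers and head count are assumptions, and the zero-initialized gate is what lets the same layer run as a plain uni-modal block when fusion is switched off.

import torch
import torch.nn as nn

class FusedVideoBlock(nn.Module):
    """One fused video-encoder layer: divided space-time attention, gated
    cross-attention to the text tokens, then a feed-forward network."""

    def __init__(self, space_time_attn, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.space_time_attn = space_time_attn
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )
        self.alpha = nn.Parameter(torch.zeros(1))    # gate starts closed

    def forward(self, x_vid, x_text=None):
        z = self.space_time_attn(x_vid)              # z^(k) = Space-Time(x^(k-1))
        if x_text is not None:                       # fusion-encoder mode
            ca, _ = self.cross_attn(query=z, key=x_text, value=x_text)
            x = x_vid + z + self.alpha * ca
        else:                                        # dual-encoder mode: fusion switched off
            x = z
        return x + self.ffn(x)

The fused RoBERTa layer below is built symmetrically, with self-attention in place of space-time attention.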
Similar to the fused TimeSformer module, we insert cross-attention into the RoBERTa backbone: x̂^(k)_text = SA(x^(k-1)_text) x^(k)_text = x^(k-1)_text + x̂^(k)_text + α * CA(x̂^(k)_text, x^(k)_vid) x^(k)_text = x^(k)_text + FFN(x^(k)_text) where SA is the traditional self-attention module. For simplicity, we insert cross-attention into the same number of layers in both backbones. Notably, such fusion in the backbone strategy is not only limited to TimeSformer and RoBERTa; but can also be applied to any transformer-based video <cit.> and text <cit.> encoders. Fusion in the backbone with gated cross-attention has the following advantages: (i) Cross-attention parameters can easily be switched off by setting the gating scalar α to 0; thus, the model behaves as a dual encoder, which is helpful for scenarios that require “unfused” video and textual features; (ii) Our fusion approach is more lightweight and compute-efficient than adding fusion-specific transformer layers, which is demonstrated in detail in Section <ref>. §.§ Pre-training Objectives We use three pre-training objectives: (1) Egocentric noise contrastive estimation (EgoNCE), (2) masked language modeling (MLM), and (3) video-text matching (VTM). EgoNCE: Lin et al. <cit.> proposed EgoNCE for dual-encoder-based egocentric VLP. It makes two modifications over InfoNCE <cit.>: (i) Besides the matched video-text samples, all pairs that share at least one noun or one verb are treated as positives. (ii) Every batch of N video-text samples is augmented with another N visually similar videos, which are treated as additional negatives. Overall, video-to-text EgoNCE objective, ℒ^ego_v2t, can be expressed as: ℒ^ego_v2t=1/| ℬ |∑_i∈ℬlog∑_k∈𝒫_iexp(𝐯_i^T𝐭_k /τ) /∑_j∈ℬ( exp(𝐯_i^T𝐭_j/τ) + exp(𝐯_i^T𝐭_j'/τ)) where the i^th video embedding v_i and j^th text embedding t_j are L_2 normalized features, and τ is a temperature factor. B is the augmented batch with 2N samples. The term in brown are the modified positive samples, and the term in blue are the modified negative samples. The text-to-video EgoNCE objective, ℒ^ego_t2v, can be defined symmetrically. The total EgoNCE loss is: ℒ_EgoNCE = ℒ^ego_v2t + ℒ^ego_t2v. We compute EgoNCE in a dual-encoder setting. Specifically, we set α = 0, and thus, the cross-attention modules are switched off to calculate the EgoNCE loss. MLM: Masked language modeling and video-text matching are proven helpful in fusion-encoder-based VLP literature <cit.>. For MLM, we randomly mask 15% text tokens,[Following BERT, we decompose this 15% into 10% random words, 10% unchanged, and 80% with a special token [MASK]. ] and the loss, ℒ_MLM, aims to reconstruct the masked tokens based on surrounding words and video patches by minimizing the negative log-likelihood. VTM: For the VTM objective, the model is given a video-text sample, and the output is a binary label y∈{0,1} indicating if the input pair is matched. ℒ_VTM is constructed as a binary cross-entropy loss over the predicted scores. Following <cit.>, we sample the global hard-negative video-text pairs using the similarities computed by EgoNCE. We compute ℒ_MLM and ℒ_VTM in a fusion-encoder setting. In this case, α≠ 0 and the cross-attention modules are switched on. Overall, our pre-training pipeline can be summarized in the following three steps: * EgoNCE requires unfused video and text features, so we switch off cross-attention (α = 0). Thus, ℒ_EgoNCE is computed with acting as a dual encoder. * MLM & VTM requires multi-modal representation. 
We switch on cross-attention modules and compute ℒ_MLM and ℒ_VTM with acting as a fusion encoder. * For back-propagation, the three losses are added, resulting in ℒ_total = (1 - γ - δ) ℒ_EgoNCE + γℒ_MLM + δℒ_VTM, and back-propagated into the model end-to-end. γ and δ are hyper-parameters that control the contribution of different terms on ℒ_total. An ablation on different pre-training objectives of is provided in Section <ref>. The pseudo-code for pre-training can be found in the supplementary. §.§ Adaptation to Downstream Tasks We now describe how we adapt to different downstream tasks as shown in Figure <ref>. Video-Text Retrieval: We perform retrieval in two settings: (i) dual encoders: we switch off cross-attention and use as a dual encoder, and compute the cosine similarity between video clips and text narrations. (ii) fusion encoders: we switch on cross-attention. The top M layers of the video and language backbones interact and produce multi-modal representations, which are fed into the pre-trained VTM head to compute matching scores. We also compute an ensemble of the two to further boost the performance, discussed in Section <ref>. Video Grounding and Question Answering: We perform both uni- (video-only) and multi-modal (text-guided) video grounding. We switch off cross-attention for uni-modal grounding and use only the video encoder. We use as a fusion encoder for text-guided grounding and video question answering. Query-focused Video Summarization: The input videos are very long (3-5 hours) for this task. We first use the unfused N-M layers[For simplicity, we keep the number of unfused and fused layers the same in the video and text encoder.] of our video and text encoders to extract uni-modal features from 5 second clips and the text query. Next, we apply the KTS shot boundary detector <cit.> to segment the long video. After this, the query and segment-wise clip features are fed into the top M fused layers of to compute the multi-modal representation. Finally, we learn an additional single-layer transformer to design the interrelation across all 5 second long clips in every segment. We present additional details for the query-focused video summarization framework in the supplementary. § EXPERIMENTS §.§ Pre-training & Downstream Datasets We pre-train on the EgoClip <cit.> version of Ego4D <cit.>, the largest publicly available egocentric video dataset. EgoClip sources untrimmed egocentric videos from Ego4D and offers filtered video-narration samples with variable-length clip intervals instead of single timestamps of Ego4D. Moreover, EgoClip excludes the videos appearing in the validation and test sets of the Ego4D benchmark <cit.>, resulting in around 3.8M pre-training samples covering over 2927 hours of video from 129 different scenarios. We evaluate across multiple benchmarks on five egocentric datasets, summarized in Table <ref>: * On Ego4D <cit.> benchmarks: Multiple-Choice Questions (EgoMCQ) is a text-to-video (T → V) retrieval task with five video clips for every query text. Natural Language Query (EgoNLQ) is a natural language grounding <cit.> task that aims to localize a single time interval within a video given a text query. Moment Query (EgoMQ) is a video-only temporal action localization <cit.> task. * Query-focused video summarization (QFVS) <cit.> aims to generate a concise version of a long (3-5 hours) egocentric video based on a natural language query. 
* Video question-answering on EgoTaskQA <cit.> provides four question types (descriptive, predictive, explanatory, and counterfactual) with direct and indirect references, and evaluates the prediction over spatial, temporal, and causal domains of goal-oriented task understanding. Notably, to the best of our knowledge, we are the first to unify QFVS and EgoTaskQA as two downstream tasks of a VLP framework. * Action Recognition on CharadesEgo <cit.>: a multi-class classification of daily indoor activities, with class names being short natural language phrases like `Putting something on a shelf.' Hence, leveraging text representations with class names, we treat this task as a retrieval problem. * Multi-instance retrieval on Epic-Kitchens-100 <cit.> (EK-100 MIR): this is a text-to-video (T → V) and video-to-text (V → T) retrieval task, with a significant semantic overlap between different narrations. Detailed statistics of pre-training and downstream datasets and evaluation metrics are given in the supplementary. §.§ Evaluation Protocol We evaluate using three evaluation protocols: * Zero-Shot (ZS). The pre-trained backbones are directly applied for V ↔ T retrieval without fine-tuning on downstream datasets. We perform zero-shot retrieval via: (i) dual encoders, computing the cosine similarity between video clips and textual narrations, and (ii) fusion encoder, incorporating the pre-trained VTM head to compute the video-text matching score. * Task-specific Head-tune (HT). We extract features using the frozen encoder and train task-specific heads[VSLNet <cit.> for EgoNLQ, VSGN <cit.> for EgoMQ, single-layer transformer encoder <cit.> for summarization, and linear layers for video QA.] using the training split of downstream datasets. * Fine-tune (FT). We fine-tune the entire pre-trained video-text model end-to-end using the training split of downstream datasets. §.§ Implementation Details We use TimeSformer-B <cit.> and RoBERTa-B <cit.> as our video and language backbones. The video encoder has 12 layers and 12 heads, and is configured with the patch size of 16 × 16 and the hidden dimension of 768. The spatial attention modules are initialized from a ViT <cit.>. We resize videos to 224 × 224 and sample 4 frames per video for pre-training and 16 for fine-tuning on downstream tasks. We use RoBERTa-B pre-trained on English Wikipedia and Toronto Book Corpus. For our best model,[An ablation on the number of fusion layers is provided in Section <ref>.] we fuse the top 6 layers of the two encoders. We pre-train our model for 20 epochs with a batch size of 256, using AdamW <cit.> with a peak learning rate of 3e-5 for the backbones and 12e-5 for the cross-modal parameters. We use linear warmup over the first 2 epochs and use linear decay. Pre-training takes five days on 32 A100 GPUs. Other necessary pre-training and downstream details are given in the supplementary. §.§ Main Results We use boldface and underline for the best and second-best performing methods in every table and indicate the performance improvements over the state-of-the-art with Δ. Ego4D: Table <ref> and <ref> present the performance of on three different Ego4D benchmarks: EgoMCQ, EgoNLQ and EgoMQ. On EgoMCQ, our model achieves 91.0% inter-video and 60.9% intra-video accuracy, significantly improving over the baselines. Note that achieves 1% absolute gain on the challenging intra-video MCQ task over , which is trained using 15× more narrations generated by a pre-trained large language model, GPT-2 <cit.>. 
On EgoNLQ, yields an impressive gain of 2.11% R@1 for IoU = 0.3 over EgoVLP. Moreover, using a smaller task-specific head and fewer epochs of head-tuning, outperforms existing baselines, which indicates the importance of learning cross-modal information during pre-training.[Additional details are provided in supplementary.] On the uni-modal grounding task, EgoMQ, our framework also sets a new state-of-the-art, outperforming EgoVLP by 1.54% R@1 for IoU = 0.3, implying the flexibility of fusion in the backbone over dual and shared encoder-based pre-training. QFVS: We evaluate on query-focused video summarization task. The QFVS dataset contains only 135 video-query training samples with long (3-5 hours) videos, and all existing baselines are trained end-to-end. In contrast, we learn a tiny head (single-layer transformer) on top of the pre-trained encoders. As shown in Table <ref>, our model persistently attains the state-of-the-art F-1 score across all four videos in this dataset. The pre-trained video-language representation helps to achieve strong performance, whereas the baselines struggle to learn good cross-modal features due to the small training set. EgoTaskQA: Table <ref> shows the results on the egocentric video question-answering tasks on the EgoTaskQA dataset. Our model achieves significant gains across various baselines in the fine-tuning regime. Notably, performs consistently well in the challenging indirect split, which demonstrates its ability to solve complicated reference tasks. In the head-tuning regime, we only learn a linear layer on top of frozen encoders, where beats EgoVLP by a strong margin, which proves the efficacy of cross-modal pre-trained representation. CharadesEgo: This is a multi-class action recognition task, with class names as short text phrases. We convert this to a video-to-text (V → T) retrieval problem as in CLIP <cit.>, and perform dual-encoder-based retrieval. As shown in Table <ref>, obtains a new state-of-the-art in both fine-tuning and zero-shot regimes. Since CharadesEgo videos are significantly different from Ego4D, being captured by crowd-sourced workers using mobile cameras, these results demonstrate the generalizability of . EK-100: Table <ref> shows our results on EK-100 MIR. In the fine-tuning regime, achieves noticeable improvements over the supervised approaches (S3D, MME, JPoSE) and VLP methods (EgoVLP, HierVL). In the zero-shot setup, beats EgoVLP and HierVL by 7.8% mAP and 4.4% nDCG scores. The consistent performance gains again show the quality of pre-trained encoders. §.§ Ablation Study Fusion in the Backbone: We compare our fusion module to the conventional practice of using fusion-specific transformer layers, which we implement following ALBEF <cit.>.[ <https://github.com/salesforce/ALBEF/>] Table <ref> shows that the proposed fusion strategy performs slightly better than stacked fusion layers. For both methods, increasing the number of fusion layers to 6 results in a non-trivial performance gain. However, our proposed architecture is significantly more parameter- and compute-efficient. For instance, with 6 fusion layers, the proposed architecture contains 33M fewer parameters and requires 45% lesser computing cost, which shows the efficacy of our method. Pre-training Objectives: We ablate different pre-training objectives and evaluate the pre-trained models on EgoMCQ using as a dual encoder, as a fusion encoder, and an ensemble of the two by summing their similarity scores for each video-text pair. 
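Concretely, this ensembling step can be sketched as follows. This is a minimal sketch rather than the released implementation: the interface names (encode_video, encode_text, fuse, vtm_head, and the use_fusion flag) are placeholders we introduce for illustration, and we assume the VTM head outputs two logits (not matched / matched).

import torch
import torch.nn.functional as F

@torch.no_grad()
def egomcq_scores(model, video_clips, text_query):
    """Score 5 candidate clips against one narration, as in EgoMCQ.

    Dual-encoder score: cosine similarity of unfused embeddings (gate alpha = 0).
    Fusion-encoder score: video-text matching (VTM) probability with gates on.
    Ensemble: sum of the two scores.
    """
    # --- dual-encoder branch: cross-attention switched off ---
    v = F.normalize(model.encode_video(video_clips, use_fusion=False), dim=-1)  # (5, D)
    t = F.normalize(model.encode_text(text_query, use_fusion=False), dim=-1)    # (1, D)
    dual = (v @ t.T).squeeze(-1)                                                 # (5,)

    # --- fusion-encoder branch: cross-attention switched on ---
    vtm_logits = model.vtm_head(model.fuse(video_clips, text_query))             # (5, 2)
    fusion = vtm_logits.softmax(dim=-1)[:, 1]                                    # P(matched)

    return dual + fusion  # ensemble by summing the similarity scores

The predicted answer is then simply the candidate clip with the highest ensembled score.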
As shown in Table <ref>, removing any pre-training objective leads to a performance drop. Specifically, VTM with hard-negative mining is largely beneficial across all three evaluation strategies. Fusion-encoder-based evaluation brings significant improvements over dual encoders; moreover, as EgoMCQ contains only 5 sentences for every video, both evaluation methods offer similar latency. Ensembling the two yields a further 1-2% performance gain for both inter- and intra-video accuracy metrics.

§.§ Attention Visualization & Error Analysis

In Figure <ref>, we show that different heads in the cross-modal attention can attend to different semantic regions of the video frames, guided by the narration. We observe that the pre-trained model learns to recognize a wide variety of objects appearing in egocentric actions, such as indoor furniture, cooking appliances, phones, tablets, car steering wheels, bicycle handles, etc. Such strong cross-modal information learned during pre-training helps in multi-modal downstream tasks. The visualizations in Figure <ref> are obtained with 960p video frames, resulting in sequences of 3601 tokens for 16 × 16 patches. However, heavily occluded objects in cluttered environments, especially in low-light conditions, are occasionally missed. We show such error cases in the supplementary.

§ CONCLUSION

This work introduces the second generation of egocentric video-language pre-training, a significant improvement over the previous generation <cit.>, obtained by incorporating cross-modal fusion directly into the video and language backbones. Our proposed fusion in the backbone strategy is lightweight, compute-efficient, and allows us to unify various VL tasks in a flexible and efficient manner. We conduct extensive experiments to demonstrate the effectiveness of our model on a wide range of downstream tasks, consistently achieving state-of-the-art performance. Moreover, we visually demonstrate the effectiveness of the learned cross-attention representation.

§ RADAR CHART FIGURE 1 DETAILS

Here, we explain the details of the radar chart in Figure <ref>, which summarizes the comparative performance of our model and EgoVLP <cit.>. First, for illustrative purposes, we normalize each axis by the score achieved by our model, which maps each axis into the range (0, 1]. Next, we place the origin of each axis at a normalized value of 0.7, which reasonably separates the inner and outer frames for better readability. Finally, we annotate each vertex with the absolute performance metric scores. Notably, in most previous radar charts in the vision-language literature <cit.>, the axes have different scales and shifts, which may cause misinterpretations and fallacies. In contrast, our illustration is uniform and accurate to scale.

§ ALGORITHM

The algorithm for pre-training is given in Algorithm <ref>. Section <ref> provides details of the different pre-training objectives.

§ DATASET DETAILS

This section provides additional details of our pre-training and downstream datasets.

Ego4D & EgoClip: Ego4D <cit.> is the first-of-its-kind massive-scale egocentric video-language dataset and benchmark suite. It offers 3670 hours of daily life activity videos captured by 931 unique camera wearers from 74 worldwide locations and 9 different countries. The videos in Ego4D span hundreds of scenarios (kitchen, laboratory, workshop, porch, shopping, driving, leisure, etc.) with various daytime and weather conditions.
A portion of the dataset is accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and synchronized videos from multiple egocentric cameras at the same event. Each narration in Ego4D is a free-form sentence and has a single timestamp. For example, the narration “” is associated with the video content, which occurs at 28.3s of a particular video. However, an activity occurs for a certain duration, and such a single timestamp can not reflect the start and end points where the particular activity takes place. EgoClip <cit.> offers a filtered version of Ego4D and designs a contextual variable-length clip pairing strategy to assign every narration with start and end timestamps. Moreover, EgoClip excludes videos that belong to the validation and test sets of the Ego4D benchmark challenges and retains textual annotation from multiple narrators, allowing us to have narration diversity during pre-training. Overall, EgoClip contains 2927 hours of videos which form 3.8M clip-text pairs, with an average clip length of 1.0s and a standard deviation of 0.9s. We use this EgoClip version of Ego4D for pre-training. We evaluate on three different downstream benchmarks of Ego4D: multiple-choice questions (EgoMCQ), natural language query (EgoNLQ), and moment query (EgoMQ). QFVS: The query-focused video summarization (QFVS) <cit.> dataset builds upon previously existing UT egocentric (UTE) <cit.> dataset, which contains four 3-5 hours long videos captured in uncontrolled everyday scenarios. QFVS curates 46 queries for every video, where each query contains two distinct concepts (nouns) <cit.>. For example, a query can be {HAT, PHONE}, or {FOOD, DRINK}. These 46 queries cover four distinct scenarios: (i) both the concepts appear in the same video shot (15 such queries),[QFVS defines every consecutive 5s video clip as a shot.] (ii) the concepts appear in the video, but not in a single shot (15 such queries), (iii) only one concept appears in the video (15 such queries), and (iv) none of the concepts in the query are present in the video (1 such query). We use prompt engineering to generate natural language using the concepts in the query and feed the sentence in our model. For instance, a given query {HAT, PHONE} is converted as “All scenes containing hats and phones”. We use 10 different prompts during head-tuning. The QFVS dataset also annotates concepts for every video shot. It proposes a robust evaluation strategy: find the similarity between the concepts in the generated and ground truth summary by maximum weight matching of a bipartite graph, and compute precision, recall, and F1 score from the number of matched concepts. This evaluation strategy helps to capture how well a system summary can retain semantic information instead of visual quantities, as used in previously existing evaluation methods, such as a system-generated summary has to consist of the same key units (frame or shot) as in the user summary <cit.> or comparing pixels and low-level features <cit.>. EgoTaskQA: The EgoTaskQA <cit.> benchmark uses the same egocentric videos as the LEMMA dataset <cit.>, which contains goal-oriented and multi-tasked human activities with rich human-object interactions and action dependencies in both single-agent and two-agent collaboration scenarios. The videos are segmented into clips with an average duration of 25s. 
The questions in the EgoTaskQA dataset are machine-generated and aim to evaluate models' capabilities to describe, explain, anticipate, and make counterfactual predictions about goal-oriented events. The answers are of two types - open-answer queries and binary statement verifications. The EgoTaskQA dataset contains 40K balanced question-answer pairs selected from 368K programmatically generated questions from 2K egocentric videos. Moreover, this dataset offers two different benchmark splits (i) normal or direct split where the train, test, and validation sets are randomly sampled in a 3:1:1 ratio and (ii) indirect split where the actions and objects are strongly correlated and test the model's task understanding capability with challenging questions. We approach the video QA as a classification task and report accuracy for open queries and binary verification in the direct and indirect splits. CharadesEgo: The CharadesEgo <cit.> dataset consists of 68.5K annotated samples from 7860 videos from both first and third-person views, covering 157 classes of daily indoor activities. We only use the first-person subset, which contains 3085 videos for training and 846 videos for testing. ChardesEgo is originally a multi-class classification problem, with class labels being short phrases like `Putting something on the shelf.' We treat this problem to a video-to-text (V → T) retrieval task as in CLIP <cit.> by leveraging the text encoder to extract features from class names. We directly evaluate the model on the validation set in the zero-shot setting. In the fine-tuning setting, we leverage the 33.1K training samples to perform an end-to-end fine-tuning of . Following the previous literature <cit.>, we report video-level mAP as the evaluation metric. EK-100: The Epic-Kitchens-100 <cit.> dataset contains 100 hours of egocentric cooking videos. The training set consists of 67.2K video samples, whereas the validation and test set has 9.6K and 13.1K samples, respectively. Each sample is associated with text narration. We perform multi-instance retrieval (V ↔ T) on the EK-100 dataset, which is challenging due to the significant semantic overlap between different narrations. The evaluation metrics are mean Average Precision (mAP) and the normalized Discounted Cumulative Gain (nDCG). § IMPLEMENTATION DETAILS §.§ Pre-training on EgoClip Table <ref> presents the hyper-parameters used during pre-training. We use TimeSformer-B <cit.> and RoBERTa-B <cit.> as our video and language backbones. We chose the best learning rate using a grid search. We ablate our other design choices in Section <ref>. We use PyTorch’s native FP16 mixed precision training and gradient checkpoint during pre-training. After every epoch, we validate the pre-trained checkpoint on EgoMCQ and select the model with the best EgoMCQ intra-video score for other downstream tasks. We extract 4 frames for every video sample during pre-training and reshape those to 224 × 224. We also apply standard , , and normalization to every frame. We tokenize the text using RoBERTa tokenizer and pad/truncate every narration to a maximum length of 30. Pre-training takes five days on 32 A100 GPUs. §.§ Downstream Settings This section presents our fine-tuning and head-tuning strategy for different downstream tasks. For a fair comparison with the baselines <cit.>, we follow the same downstream configuration as the baselines when possible. The downstream is performed with 16 frames per video sample. 
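Before turning to the individual tasks, the input pipeline implied by these settings — frame sampling (4 frames per clip for pre-training, 16 for downstream), 224 × 224 resizing with flip/crop augmentation and normalization, and RoBERTa tokenization padded or truncated to 30 tokens — can be sketched roughly as follows. The crop scale and normalization statistics are not spelled out in the text, so the values below are illustrative assumptions.

import torch
from torchvision import transforms
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

frame_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),   # illustrative crop scale
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],        # standard ImageNet stats (assumed)
                         std=[0.229, 0.224, 0.225]),
])

def prepare_sample(frames, narration, num_frames=4):
    """Uniformly sample `num_frames` PIL frames from a clip and tokenize its narration."""
    idx = torch.linspace(0, len(frames) - 1, num_frames).long().tolist()
    video = torch.stack([frame_transform(frames[i]) for i in idx])   # (T, 3, 224, 224)
    text = tokenizer(narration, padding="max_length", truncation=True,
                     max_length=30, return_tensors="pt")
    return video, text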
EgoNLQ: This task is a video-text localization problem, with each video clip longing up to 1200s. Hence, performing end-to-end fine-tuning can be hard on EgoNLQ. Following <cit.>, we pre-extract features from the video-text samples using our pre-trained model and train VSLNet <cit.> for 100 epochs, with a learning rate of 1e-3 and batch size of 32. We keep all other configurations the same as <cit.>.[<https://github.com/showlab/EgoVLP>] However, we observe that we can beat the baselines using even a smaller task head and fewer epochs of tuning, which we describe in Section <ref>. We show the complete EgoNLQ pipeline in Figure <ref>. EgoMQ: This is a video-only localization problem, and similar to EgoNLQ, the input videos are very long. Hence, end-to-end fine-tuning is also hard to perform on EgoMQ. Following EgoVLP <cit.>, we pre-extract video features using pre-trained and train VSGN <cit.> for 100 epochs, with a learning rate of 1e-4 and batch size of 32. We keep all other configurations the same as <cit.>. We perform a grid search for other hyper-parameters of VSGN. QFVS: Query-focused video summarization aims to generate an abridged version of input video guided by a natural language query. To the best of our knowledge, we are the first to unify QFVS as a downstream of a VLP framework. The input videos for this task are very long (3-5 hours). We first use the unfused N-M layers[For simplicity, we keep the number of unfused and fused layers the same in the video and text encoder.] of our video and text encoders to extract uni-modal features from every 5-second clip and the text query. Next, we apply the KTS shot boundary detector <cit.> to segment the long video.[Segmentation helps in two ways: (i) TimeSformer can not process the whole 3-5 hours long video (containing tens of thousands of frames) at once. (ii) Segmentation is also used to convert frame-level prediction scores into key shots. For details, please refer to <cit.>.] After this, the query and segment-wise clip features are fed into the top M fused layers of to compute the multi-modal representation. Finally, we learn an additional single-layer transformer to design the interrelation across all 5 second long clips in every segment. We train the single-layer transformer for 20 epochs, with a batch size of 20, a peak learning rate of 1e-5 using AdamW <cit.> optimizer, cosine scheduler, and a linear warmup for the first 2 epochs. We also perform an ablation on the single-layer transformer in Section <ref>. EgoTaskQA: We treat the video QA as a classification problem, where we train linear layers on top of the fused feature representation generated by the pre-trained . In the fine-tuning setting, we fine-tune the pre-trained model for 36 epochs with a batch size of 64, using the AdamW <cit.> optimizer. We use cosine annealing with 10% linear warmup steps, with the peak learning rate of 2e-4 for the direct split and 1e-4 for the indirect split. In the head-tuning setup, we only train the classifier head on top of frozen backbones with the same configuration. CharadesEgo: Following <cit.>, we convert CharadesEgo as a retrieval problem. In the zero-shot setup, we perform dual-encoder-based inference. In the fine-tuning setup, we use EgoNCE as our objective. We fine-tune the model for 10 epochs with a batch size of 128 using AdamW <cit.> optimizer with (β_1, β_2) = (0.9, 0.98), and weight decay of 0.01. We use cosine annealing with warmup, with 10% linear warmup steps, peak learning rate of 1.5e-4 and end learning rate of 1e-7. 
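A schedule of this shape is easy to reproduce as a function of the training step. The sketch below implements linear warmup followed by cosine decay using the values quoted above (10% warmup steps, peak learning rate 1.5e-4, end learning rate 1e-7); the LambdaLR hook in the comment is one standard way to attach it to a PyTorch optimizer.

import math

def warmup_cosine_lr(step, total_steps, peak_lr=1.5e-4, end_lr=1e-7, warmup_frac=0.10):
    """Linear warmup for the first `warmup_frac` of steps, then cosine decay to `end_lr`."""
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        return peak_lr * (step + 1) / max(warmup_steps, 1)
    # cosine decay from peak_lr to end_lr over the remaining steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return end_lr + 0.5 * (peak_lr - end_lr) * (1.0 + math.cos(math.pi * progress))

# Example usage (optimizer base lr set to the peak value):
# scheduler = torch.optim.lr_scheduler.LambdaLR(
#     optimizer, lr_lambda=lambda s: warmup_cosine_lr(s, total_steps) / 1.5e-4)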
Since this is a multi-class dataset, where each video can include multiple actions, we report mAP as the evaluation metric. For input, we sample 16 frames from each video clip, and reshape the frames into 224 × 224. EK-100 MIR: Since a narration can jointly be associated with multiple videos for EK-100 multi-instance retrieval task, we use the adaptive multi-instance max-margin loss <cit.> for this task with a margin value of 0.2. We keep the zero-shot configuration the same as CharadesEgo. We fine-tune the model for 100 epochs with a batch size of 128 using AdamW <cit.> optimizer with (β_1, β_2) = (0.9, 0.98), and weight decay of 0.01. We use cosine annealing with warmup, with 10% linear warmup steps, peak learning rate of 2e-4 and end learning rate of 1e-7. § ADDITIONAL ABLATIONS ON PRE-TRAINING We conduct additional ablation experiments in this section to validate our design choices. Reported results on EgoMCQ in Table <ref>, <ref>, <ref> and Figure <ref> are achieved by directly ensembling dual- and fusion-encoder-based inference. Effect of EgoNCE: We study the effect of the EgoNCE loss <cit.> compared to the more popular InfoNCE objective <cit.>. Given a batch of N video-text pairs, InfoNCE treats the matched N pairs as positives and every other pair as negatives. However, egocentric videos pose two unique challenges: (i) Same actions in different scenarios appear to be visually different (talking on the phone indoors and outdoors). (ii) Different actions in same scenarios appear to be similar (writing on a tablet and watching a movie on a tablet are visually indistinguishable). To overcome these challenges, EgoNCE is built upon InfoNCE with two modifications: (i) Besides the matched video-text samples in every batch, all narration pairs which share at least one noun and one verb are treated as positives. (ii) Every batch of N video-text pairs is augmented with another N visually similar videos, often containing different actions in the same scenarios. These added videos with the same texts as in the original batch are treated as additional negatives. Table <ref> shows the effect of the modified positive and negative sampling of EgoNCE on . First, we observe that replacing EgoNCE with InfoNCE leads to a performance drop of 5.7% accuracy on the challenging intra-video metric of EgoMCQ. Further, discarding either positive or negative sampling also drops the results by 2.1-1.8% intra-video accuracy. These results align with the findings in <cit.> and indicate the efficacy of the EgoNCE objective for egocentric video-language pre-training. Effect of Gated Cross-attention: Next, we study the importance of gated cross-attention modules with learnable gating scalar, α. Table <ref> shows that a fixed value of α leads to a significant performance drop. In our best pre-trained model, we also find that the learned value of α varies in different layers, ranging from 0.05 to 0.4. Effect of Projector: We compare different choices of projector dimensions used in the EgoNCE head in Figure <ref>. We observe that a three-layer projector works better than single and two-layer projectors. For instance, a 4096-4096-4096 dimensional projector improves the EgoMCQ intra-video retrieval performance by 0.85% over a single 4096 dimensional projector. Moreover, an increase in the width of the projector also helps in performance. Hence, we use 4096-4096-4096 as our default projector. Notably, these results oppose the findings in Zhao et al. 
<cit.>, where the authors observe that using 256-dimension achieves better performance than a 512 dimensional projector. The reason behind such results is, in contrast to Zhao et al., <cit.>, who only use InfoNCE, a larger projector helps us both in EgoNCE and VTM objectives by offering a stronger hard-negative sampling. Effect of Batch Size: Next, we study the effect of pre-training batch size in Table <ref>. The performance improves using a batch size of 256 over 128. However, the performance drops if we further increase the batch size to 512 or 1024. Therefore, we use 256 as our default batch size in all other experiments. Effect of Number of Frames: Lastly, we ablate the number of frames per sample during pre-training in Table <ref>. We see a good improvement in the EgoMCQ performance when the number of frames is increased to 4. However, after 4, the performance improvement diminishes. We keep 4 as our default frame number for a fair comparison with the baselines <cit.>, who also use 4 frames per sample during pre-training. § ABLATIONS ON DOWNSTREAM This section presents an ablation on downstream task-specific heads for EgoNLQ and QFVS. EgoNLQ: Following EgoVLP <cit.> and <cit.>, we use VSLNet <cit.> as the task-head for EgoNLQ. However, since our model learns cross-modal features during pre-training, we observe that we can beat the previous methods by a significant margin even using smaller task heads. As shown in Table <ref>, when we only use the conditional span predictor module, which is just a linear layer, we can beat EgoVLP by 2.43% R@5 for IoU=0.3. Adding the QGH module further helps in improving the performance. Using the whole VSLNet can significantly beat EgoVLP and across all metrics. Moreover, the previous methods train VSLNet for 200 epochs, whereas we achieve the best performance within 100 epochs. These results prove the efficacy of the cross-modal pre-trained representation of . QFVS: Next, we compare different heads for QFVS in Table <ref>. Notably, this dataset is very small, with only 135 training samples. We observe that a single-layer transformer head performs better than linear layers and multi-layer transformers. Linear layers can not model temporal relations across different video shots, which a transformer can efficiently do. However, multi-layer transformers overfit this dataset due to the small training set. Hence, we use a single-layer transformer for QFVS. § ERROR ANALYSIS Although learns impressive cross-modal representation during pre-training, there are still some cases where the model fails to identify tiny and hindered objects, especially in cluttered environments. We show two such examples in Figure <ref>. In the first video, the objects `bicycle handle' and `T-wrench' are barely visible even in human eyes, and thus, can not consistently attend to these objects in all frames. However, it can recognize larger, more familiar things like tables and human hands. In the second video, we show an egocentric activity in a wet lab, where the camera wearer is wearing gloves, holding a test tube, and heating a wire using a bunsen burner. This is a complex scenario with multi-agent collaborative activities and fine-grained actions. Interestingly, can correctly identify the human hands and track the motion of the thumb in different frames, even when wearing gloves. However, the test tube and the wire are hindered and are partially attended by the model. Since we pre-train with 224 × 224 video frames, such tiny objects are often hard to be distinguished. 
However, higher-resolution frames will be more helpful in addressing such intricate scenarios, which we plan to explore in future works. § QUALITATIVE DOWNSTREAM PERFORMANCE EgoMCQ: In Figure <ref>, we show example predictions made by EgoVLP <cit.> and on multiple choice questions from EgoMCQ validation set. beats EgoVLP substantially on the challenging intra-video setting, where all 5 choices are visually similar. The VTM head pre-trained with hard-negative sampling helps to distinguish between similar videos and boosts the performance over EgoVLP. QFVS: Figure <ref> shows some examples of query-focused summaries generated by on the QFVS dataset. Given a long egocentric video and a natural language query, our model can summarize all relevant scenes successfully. Notably, the input videos on this dataset are very long (3-5 hours), and the length of the generated summary is 2% input video, which makes this task challenging. EgoNLQ: Figure <ref> shows examples of predictions made by EgoVLP <cit.> and on text-guided video localization from the EgoNLQ dataset. Given an untrimmed video and a natural language query, this task aims to predict a single temporal window to answer the query. The predictions of are significantly more aligned with the ground truth than EgoVLP, which supports the impressive quantitative performance gain by over EgoVLP across all metrics.
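Finally, to make the gated cross-attention fusion used throughout this paper concrete, the sketch below spells out one fused text-encoder layer following the equations in the main text (self-attention, α-gated cross-attention into the video stream, then a feed-forward block). It is an illustration, not the released implementation: the layer norms and the zero initialization of α are our own assumptions, and all module names are placeholders.

import torch
import torch.nn as nn

class GatedCrossAttentionLayer(nn.Module):
    """One fused text layer: self-attention, alpha-gated cross-attention to video, FFN."""

    def __init__(self, dim=768, num_heads=12, ffn_mult=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, ffn_mult * dim), nn.GELU(), nn.Linear(ffn_mult * dim, dim))
        self.norm_sa = nn.LayerNorm(dim)
        self.norm_ca = nn.LayerNorm(dim)
        self.norm_ffn = nn.LayerNorm(dim)
        # Learnable gating scalar; alpha = 0 recovers a plain (dual-encoder) text layer.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x_text, x_vid):
        # x_hat^(k) = SA(x^(k-1))
        h = self.norm_sa(x_text)
        x_hat, _ = self.self_attn(h, h, h)
        # x^(k) = x^(k-1) + x_hat^(k) + alpha * CA(x_hat^(k), x_vid^(k))
        ca, _ = self.cross_attn(self.norm_ca(x_hat), x_vid, x_vid)
        x = x_text + x_hat + self.alpha * ca
        # x^(k) = x^(k) + FFN(x^(k))
        return x + self.ffn(self.norm_ffn(x))

Because α is a single learnable scalar per fused layer, setting it to 0 at inference time falls back to the unfused dual encoder, which is how the EgoNCE loss and the dual-encoder retrieval mode are computed.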
http://arxiv.org/abs/2307.05610v1
20230710224010
Substance or Style: What Does Your Image Embedding Know?
[ "Cyrus Rashtchian", "Charles Herrmann", "Chun-Sung Ferng", "Ayan Chakrabarti", "Dilip Krishnan", "Deqing Sun", "Da-Cheng Juan", "Andrew Tomkins" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
Substance or Style: What Does Your Image Embedding Know?

Vision foundation models based on masking or contrastive learning are heavily studied in terms of semantic signals. Less understood is what non-semantic information these embeddings contain. For example, can we detect a blurred, recolored, or brightened image using an embedding like MAE, SimCLR, or CLIP without accessing the pixels? To address this, we design a systematic transformation prediction task and measure the visual content of six models that use different training schemes. Surprisingly, all six embeddings (including SimCLR) capture enough information to identify dozens of transformations. We further compare the sensitivities of each embedding. Masking-based models (CAN and MAE) perform best on fine-grained transformation prediction, while image-text models (CLIP and ALIGN) generalize better to unseen transformations. Finally, we demonstrate that representations can contain object-level content and low-level details without sacrificing either. Overall, modern embeddings encode a variety of visual aspects, despite being trained on large datasets in a self-supervised way.

§ INTRODUCTION

Machine learning systems often use embeddings from large pre-trained models as a way to standardize and improve data representations. Such embeddings, known as foundation models, provide a `general-purpose' data encoding method <cit.>. The models perform very well on many downstream tasks. They can be used with or without fine-tuning and even in a zero- or few-shot way. Despite the popularity of foundation models, it is unclear what qualities of these embeddings are responsible for their good performance. A reasonable hypothesis is that better embeddings have a higher capacity in the sense that they capture more information about the raw data. An opposing hypothesis is that these embeddings precompute important, high-level features while ignoring low-level attributes that are immaterial for downstream tasks. Considering vision foundation models, the embeddings may (a) capture all the information in the image and achieve compression because natural images lie on a low dimensional manifold; or (b) compute a lossy compression, where their pre-training objectives guide what information they keep or discard. It also may be that some models are closer to (a) while others resemble (b). Before we, as a community, adopt foundation models, we should understand their predispositions. One challenge is that researchers evaluate embeddings on the same axes. Prior work shows that foundation models perform well on downstream tasks. However, these findings stem from benchmark tasks, such as ImageNet, VTAB <cit.>, or COCO <cit.>.
Conclusions from these analyses focus on the semantic content of embeddings (e.g., object-level details). We can only speculate about how the pre-training algorithm impacts what other visual aspects the model captures. A masked autoencoder (MAE) <cit.> fills in portions of the image, so MAE-based embeddings may be more sensitive to style. Contrastive losses like SimCLR <cit.> could encourage invariance to the augmentations used to form image pairs during training. Newer models such as CAN <cit.> combine both masking and contrastive pretraining and add other elements such as predicting random noise. Image-text models, such as CLIP <cit.> and ALIGN <cit.>, may learn visual concepts beyond the object-level categories of image datasets. In this paper, we investigate these speculations and perform complementary experiments to understand what non-semantic information these embeddings contain. §.§ Predicting Transformations If we aim to go beyond a semantic evaluation of models, we need to measure whether other types of information appear in the embeddings. We also want an approach that applies to arbitrary vision models, regardless of their training methods, dataset, architecture, etc. One way to accomplish this is the following experiment. We can modify an image and then see if this change is detectable after computing the image’s embedding. For example, consider two images: one that is a sample from ImageNet, and another where the same image has been slightly blurred. Then, compute embeddings for both images and throw out the original pixels. Assume that in both cases a linear probe will predict the correct ImageNet class for the image. The next question is: does the embedding contain enough information to determine which image was blurred and which was unaltered? If the embedding contains sufficient information to detect blurring, then it should be possible to train a network to perform well on a `blurry or not’ classification task. Specifically, we can apply Gaussian blur to all images in ImageNet, and we can train a network to predict whether the transformation has been applied given access only to the image embeddings. Foundation models that capture more of the transformation information will perform better on this task, whereas models that perform poorly must be insensitive to the transformation. Note that freezing the embedding model is crucial for this analysis. If we fine-tuned on the transformation prediction task, then we would not know whether the original model captured the transformation. The usefulness of being sensitive to blurring or other transformations depends on the downstream task. Some embeddings might completely ignore the blurring (leading to a blurring-invariant model) or encode the blurring in a consistent way (leading to a blurring-equivariant model). The former approach has advantages for stability, where the embedding should be unchanged. The latter equivariant approach is desirable for data cleaning or content filtering. We posit that if foundation models are going to be general-purpose, they should create nearly lossless embeddings, including low-level details. This is crucial for tasks such as determining if an image is a painting or photograph, if it has been taken during the day or night, if it is high-fidelity or grainy, or if it has been edited from the original. §.§ Our Contributions We propose a transformation prediction task to measure the sensitivity of embeddings to changes to an image. 
Given the embedding of an image from a pre-trained model, the goal of this task is to predict how the image has been modified (e.g., blurred, brightened, darkened, noised, saturated, solarized, stylized, etc). We carefully design the set of transformations, ensuring enough variety to elicit whether embeddings capture different types of visual content. We also have two variations: a fine-grained version, where the train and test sets use the same 31 transformations, and a coarse-grained version, where we group together similar transformations into 10 classes and hold out some transformations. All of the embeddings that we consider perform well at predicting transformations. This is surprising. <ref> shows that several transformations alter images in a subtle way. The frozen embedding networks must retain a lot of low-level image information, despite not being explicitly trained to do so. Our transformation prediction metric is orthogonal to, and hence complements, existing semantic accuracy measurements for image embeddings. The transformation prediction tasks lead to new insights about the embeddings. CAN and MAE are more sensitive than SimCLR in some cases. Specifically, SimCLR is fairly invariant to hue, saturation, and brightness, but it is still quite sensitive to other transformations (including blurring, which is part of the contrastive training). We also evaluate the image embeddings of CLIP and ALIGN. These image-text models excel in recognizing the concept of style transfer. They can generalize to new styles after being only trained on a few. As a baseline, we test a supervised model, and we find that it performs comparably on the fine-grained task but significantly worse on the coarse-grained version. A natural next question is whether post-processing the embedding to improve transformation prediction will effect the semantic accuracy (e.g., ImageNet top-1 accuracy). We actually find that it is possible to achieve good performance on both metrics when training a 2-layer MLP with two heads and optimizing a multi-task loss. This implies that the transformation information does not interfere with the object-level features. Both can coexist. With transformed images there is a related question around robust accuracy. Common wisdom suggests that generalizing to OOD data can be facilitated by encouraging invariance to corruptions or style <cit.>. However, we find that increasing sensitivity to transformations (less invariance) does not significantly impact the semantic accuracy on transformed images. In summary, our main findings are (see also <ref>): * Foundation models capture information about dozens of transformations. Hence, we can use embeddings to detect a domain shift due to transformations. * Vision models with masking (CAN, MAE) are more sensitive than those using only a contrastive loss (SimCLR) to changes in hue, saturation, and brightness. * Image-text models (CLIP and ALIGN) generalize better than image-only embeddings when classifying unseen transformations, such as new styles. * Many errors come from mistaking images as normal (i.e., `Identity' transform) when they have been modified in unseen ways (e.g., background blur, grayscale, line shift). * Sharing one hidden layer for semantic and transformation prediction does not harm the performance on either task. Overall, our results support the hypothesis that foundation models provide a higher-capacity representation, rather than ignoring irrelevant features. § RELATED WORK Foundation Models. 
SimCLR <cit.> trains on pairs of transformed images, and the representation is penalized if the embeddings differ. The embedding should be less sensitive to these transformations (cropping, color distortion, and Gaussian blur). MAE <cit.> trains on images that have been subject to patch-wise masking and reconstructs the missing pixels. CAN <cit.> combines contrastive learning, masked autoencoders, and noise prediction. Image embeddings also come from multi-modal models, such as CLIP <cit.> and ALIGN <cit.>. Both use a contrastive loss to form visual and language representations of image-text pairs. Work has also investigated fine-tuning <cit.> and dataset quality <cit.>. Compared to vision, much more work studies the information captured by language models <cit.>. Invariance and Equivariance. The popularity of contrastive losses has led researchers to question whether embeddings should be encouraged to be insensitive (a.k.a., invariant) or sensitive (a.k.a., equivariant) to transformations <cit.>. This extends research that aims to understand rotation prediction <cit.>, a seminal task for unsupervised representation learning <cit.>. There has been efforts to measure CNN equivariance through individual features <cit.>, and to examine embeddings by reconstructing images <cit.>. Augmentation-aware learning has been proposed to improve semantic accuracy <cit.>. Another direction shows that contrastive training learns domain-sensitive features, which helps OOD generalization <cit.>. Transformation prediction. Work on visual chirality shows that, surprisingly, it is possible to train a model to detect whether an image has been horizontally flipped <cit.>. A related effort considers predicting domains, such as painting, sketch, or cartoon <cit.>. Researchers have identified nuisance factors of X-ray images <cit.> even with a pre-trained chest radiography model <cit.>. Part of training diffusion models involves reversing the (artificial) Gaussian noise in an image, and part of the optimization involves a noise-prediction loss <cit.>. Recent work on cold diffusion considers reversing other transformations, including deblurring, inpainting, super-resolution, and snow removal <cit.>. Compared to prior work, we use transformation prediction to probe image embeddings, and we consider a much broader set of transformations. § PROBING EMBEDDINGS BY PREDICTING TRANSFORMATIONS Evaluating only the typical semantic accuracy on class labels leaves open questions regarding what information from the raw data is retained or lost in the embedding. Therefore, we also measure the ability of a network to predict the type of transformation that has been applied to an image. To do so, we define a transformation prediction task along with new metrics. This task can be formulated for any dataset/task as long as there is a way to synthetically apply transformations. §.§ Transformation Prediction Task Assume we have T image transformations (<ref> shows examples). Here, for transformation, we take a broad definition. One option is a well-defined function, such as adding Gaussian noise with certain variance independently to each pixel. Another possibility is to have some random parameters, such as uniformly choosing a value in a range and increasing the image’s saturation by this much. Finally, we can have transformation families, containing several sub-transformations. For example, the family “color quantizing’’ could mean choosing a sub-transformation that modifies hue, inverts colors, or solarizes the image. 
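A family of this kind can be written down directly with standard image operations. The sketch below uses torchvision's functional transforms; the particular sub-transformations and parameter ranges are illustrative and not necessarily the exact ones used to build our datasets.

import random
import torchvision.transforms.functional as TF

# One "family": color quantizing, with several sub-transformations.
QUANTIZE_FAMILY = {
    "hue_shift": lambda img: TF.adjust_hue(img, hue_factor=random.uniform(0.1, 0.4)),
    "invert":    lambda img: TF.invert(img),
    "solarize":  lambda img: TF.solarize(img, threshold=random.randint(64, 192)),
}

def apply_family(img, family=QUANTIZE_FAMILY, rng=random):
    """Pick a random sub-transformation (with random parameters) and apply it."""
    name = rng.choice(list(family))
    return name, family[name](img)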
Sub-transformations have their own (possibly random) parameters. We apply each of the T transformations to all images in the training/test sets. This generates T+1 copies of the dataset, including the original images. Also, this process defines a (T+1)-way classification problem, labeling each image either with `Identity’ or one of the T transformations. Metrics. Our tasks involve both unaltered (clean) images and transformed ones, as well as a new label for the type of transformation. For a dataset such as ImageNet, which contains images x and semantic class labels y, we will use t to denote the transformation label of our augmented dataset, leading to a labeled triple (x,y,t). A network can predict the semantic label y, the transformation label t, or both in a multi-task scenario. The transformation prediction accuracy is the fraction of images receiving the correct transformation label (the network does not see the class label). We use clean semantic accuracy to refer to the fraction of correctly predicted class labels on unaltered images (i.e., the transformation t is the identity). The obfuscated semantic accuracy is the fraction of correct class labels when the image has been transformed (i.e., t is not the identity). §.§ Evaluating Frozen Image Embeddings Consider an image x, let t be one of the T+1 transformations, and use t(x) to denote the transformed version of x. For a frozen embedding model ϕ, we compute the embedding ϕ(t(x)). We then train a network that takes ϕ(t(x)) as input and outputs a semantic label or a transformation label or both. In a multi-task setting with a two-headed network that outputs two labels, we independently measure the clean/obfuscated semantic and transformation accuracies. Training a linear probe on top of the embedding ϕ(t(x)) is the simplest setting to predict transformation labels. The last-layer weights can be trained using the transformation labels (while the embedding model is fixed). We find that we can improve performance by using an MLP with a single hidden layer instead of a linear probe. In this case, training the hidden layer leads to a new representation that has been post-processed for transformation prediction. We can also do this in a multi-task way, incorporating the loss from both the semantic and transformation prediction tasks. We do not fine-tune the embedding model itself. We expect that it would lead to improved transformation prediction accuracy. However, it would conflate the information in the original embedding with the new information learned from the fine-tuning. Freezing the model, on the other hand, allows us to draw conclusion about existing embeddings. §.§ Fine-grained vs. coarse-grained In our experiments, we will consider a fine-grained task (where the train and test sets use the same transformations) and a coarse-grained version (where the same label contains different sub-transformations). For both tasks, the post-processing network should learn which features of the embedding correspond to different transformation labels. The fine-grained task has 31 labels, including `Identity' for unaltered images. In a few cases, we use the same transformation with disjoint parameter ranges as separate classes. Specifically, two categories come from each of (i) a low or medium amount of motion blur, (ii) a low or high amount of Gaussian blur, (iii) a low or medium amount of Gaussian noise, and (iv) increasing or decreasing the brightness. 
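Such paired classes can be realized by drawing the same transformation from disjoint parameter ranges. The concrete ranges are not stated in the text, so the numbers in this sketch are purely illustrative.

import random
import torchvision.transforms.functional as TF

# Two fine-grained classes built from the same transformation with disjoint ranges.
def gaussian_blur_low(img):    # class "Gaussian blur (low)"
    return TF.gaussian_blur(img, kernel_size=9, sigma=random.uniform(0.5, 1.5))

def gaussian_blur_high(img):   # class "Gaussian blur (high)"
    return TF.gaussian_blur(img, kernel_size=9, sigma=random.uniform(2.5, 4.0))

def brighten(img):             # class "brighten"
    return TF.adjust_brightness(img, brightness_factor=random.uniform(1.3, 1.8))

def darken(img):               # class "darken"
    return TF.adjust_brightness(img, brightness_factor=random.uniform(0.4, 0.8))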
During test-time, the same transformation applied to an image will only differ in its randomized parameters that are restricted to different ranges. In the coarse-grained task, the training set has 28 transformations, split across 9 categories, plus the Identity transformation. The test set has 43 sub-transformations, split across the same 9 categories, plus the Identity transformation. Hence, there are 15 held-out transformations that the network only sees during test time. We define the coarse categories so that the visual content should be similar in some way. For example, `Quantize' contains 7 recoloring options (4 for training and 3 held-out). The `Style Transfer' label has 13 style options (6 for training and 7 held-out). For some categories, there are no held-out sub-transformations (e.g., Icon Overlay, Image Overlay, Line Halftoning). Justifying the transformations. When choosing the sets of transformations, we have tried to cover a range of visual effects. Noise affects individual pixels and blurring affects nearby regions. Overlays are independent of the image, while style transfer heavily depends on the content. The filtering and quantizing options focus on hue, saturation, or value separately. Some transformations are barely human-visible, and others are strikingly obvious. Of course, the space of all possible transformations is impossible cover fully, but we aim to probe many aspects of embeddings. §.§ Drawing conclusions about embeddings We can use the transformation prediction task to measure if an embedding model captures certain visual content. Consider a transformation t, where t(x) denotes the transformed version of x. Assume we can train a post-processing network to predict that ϕ(t(x)) is transformed and ϕ(x) is not. Then, we can conclude that ϕ must preserve enough information about the image so t can be detected. That is, ϕ(x) ≠ϕ(t(x)). More interestingly, a network may succeed at predicting most transformations t from a set 𝒯 when they are applied to images in a dataset 𝒳. Hence, the sets A_t, ϕ = {ϕ(t(x)) | x ∈𝒳} for t ∈𝒯 are mostly disjoint. It is possible to use a sample from A_t, ϕ to determine t with high accuracy. We also believe the transformation prediction task is a direct measure of equivariance, as opposed to k-NN results <cit.>. If the network cannot detect the transformation t, then we may conclude the opposite. The embedding ϕ does not preserve enough information. We can further qualify this based on the amount of post-processing required to extract this information. If t is detectable after zero or one layers, then the information must be readily accessible in ϕ(t(x)). Otherwise, if t can be detected but only after numerous layers, then the information is still present but can only be recovered after combining several sources of information from ϕ(t(x)). If no amount of post-processing suffices, then the embedding must truly be invariant, and ϕ(x) ≈ϕ(t(x)). Given the above discussion, the fine-grained and coarse-grained tasks yield complementary insights. The benefit of the fine-grained task is that we can investigate the precision of the embedding's information. Distinguishing a blur of radius three vs. five should require more detailed information than distinguishing blurring vs. brightening. Also, using the same transformations for train and test simplifies the task. In the coarse-grained task, the network does not see some sub-transformations during training, which enables us to measure a type of generalization. 
For example, consider transformations t and t' from the same class (e.g., two different styles). In the best case, we only use t during training, and the network can recognize that ϕ(t'(x)) is similar to ϕ(t(x)). It could be that the embeddings are close together or that ϕ encodes the style in some way. On the other hand, the network may fail to generalize, and predict ϕ(t'(x)) and ϕ(t(x)) differently. One conclusion is that ϕ may not be sensitive to t'. However, we will show later that prediction accuracy is quite high for the fine-grained task. The coarse-grained mistakes actually imply that ϕ captures both transformations but does so in a divergent way. § EXPERIMENTAL RESULTS Datasets. We evaluate on transformed versions of ImageNet-1k <cit.>. In addition to the original image (Identity), we apply 30 transformations to each train/test image. This leads to 31 classes for the fine-grained transformation prediction task. We also construct a coarse-grained dataset with 10 categories, where each category contains one or more transformations along with a range of parameters (e.g., noise level or type of style transfer). The test set transformations form a superset of those applied to the training images. Full details in <ref>. Metrics. We measure semantic and transformation prediction accuracies as defined in <ref>. In the fine-grained case, the model predicts one of 31 transformation classes; in the coarse-grained case, it predicts one of 10. For both cases, we average over a test set with size being the number of labels times the number of original images, i.e., (# classes) × 50k for ImageNet-1k. We measure semantic accuracy with ImageNet-1k class labels, separating the accuracy on clean and transformed (a.k.a., obfuscated) images. Embedding Models. CAN, MAE, and SimCLR produce a 1024-dimensional embedding from a ViT L/16 trained on JFT-300M <cit.>. The SimCLR model also contains a projection to a 128-dimensional embedding that we use for one comparison. CLIP uses ViT L/14 for a 768-dimensional image embedding. ALIGN uses EfficientNet-L2 for the image encoder and outputs a 1376-dimensional embedding. Our baseline is a 1024-dimensional embedding from a supervised ViT L/16 trained on ImageNet-1k. Post-processing. We pre-compute embeddings for all train and test images, and then we ignore the pixels. We then train a linear probe or a small MLP network on the frozen embeddings. Unless stated otherwise, the MLP has one hidden layer of width 2048, and we optimize it with ADAM and with a 0.2 dropout rate. We experimented with deeper/wider networks and with other dropout rates, but this did not lead to significantly different results in most cases. Note that while the embedding model is not trained on transformed images, the post-processing network can indirectly learn from them, depending on what information is in the embedding. §.§ Do embeddings capture transformations? We compare the transformation prediction performance of six embeddings in <ref>. All embeddings perform extremely well on the transformation detection task: over 93% accuracy for the fine-grained and over 79% accuracy for the coarse-grained. These embeddings preserve fairly detailed information about the input image that can be extracted with minimal post-processing (2-layer MLP). §.§ What is the most equivaraint embedding? In the fine-grained task (a test of which embedding has the most detailed information about the image), the CAN embedding performs the best with MAE being a close second. 
Note that both CAN and MAE use masking as part of the self-supervised training. This suggests that filling in the image patches increases the transformation sensitivity. The SimCLR embedding performs fairly well, despite the expectation that a contrastive loss would lead to high levels of invariance (we discuss SimCLR more in <ref>). CLIP and ALIGN perform slightly worse than CAN on the fine-grained task but still quite well. In the coarse-grained task (a test of how well an embedding's information about transformations can generalize), the two text-image embeddings (CLIP and ALIGN) perform better than all other methods. This suggests that training with text improves the generalization ability of the image embedding. We note that, for all methods, the decreased accuracy between fine-grained and coarse-grained occurs because the held-out sub-transformations present a challenging OOD task. Also, all self-supervised models perform significantly better than the supervised baseline, suggesting that optimizing an embedding directly for semantic information does not by default retain as much transformation information. In <ref> we analyze in detail the different kinds of mistakes that the embeddings make. §.§ Isn't SimCLR supposed to be invariant? <ref> shows transformation prediction results for SimCLR. We consider two layers of the embedding model. Specifically, `SimCLR embed' refers to the second-to-last layer and has 1024 dimensions (which is standard and used in <ref>). Then, the network projects this onto the last layer `SimCLR proj' to form a 128 dimensional vector. We see that `SimCLR embed' generally outperforms `SimCLR proj' on both fine- and coarse-grained datasets, and this holds regardless of the post-processing method. One implication is that the final projection layer of SimCLR is responsible for much of the invariance that we expect from a contrastive loss. On the other hand, the layer right before this retains more information about transformations. We also control for the dimensionalities (1k vs. 128) by evaluating a network that has one hidden layer of width 128. With a small width, we still see a large improvement from using SimCLR embed vs. proj (+14.88% for fine-grained, +7.19% for coarse-grained). Finally, we can greatly improve the performance of SimCLR proj by post-processing with a width 16k network (+10.66% for fine-grained, +5.81% for coarse-grained). This means that after the projection, there are transformation details that are not available via a linear probe but can be extracted with a 2-layer network. §.§ Do all embeddings make the same mistakes? We dig into the confusion matrices and how trends in the mistakes further illuminate the information in embeddings. The fine-grained and coarse-grained datasets lead to slightly different insights, and so we discuss them separately. Fine-grained errors. The most common mistakes for all embeddings come from (i) misclassifying medium Gaussian blur as low Gaussian blur, and (ii) underpredicting `Identity' for the unaltered images. Both mistakes are fairly expected. Comparing MAE to CAN, we find that MAE has worse performance for central cropping, which is likely due to its more aggressive masking during training (CAN uses 50% masking while MAE uses 75%). Considering SimCLR, the lower accuracy comes mostly from mispredicting hue shift, brighten, and saturate. For example, SimCLR labels 45% images as `Identity' when their hue has been offset by 64. 
On the other hand, SimCLR performs comparably on the other transformations, including Gaussian blurring, despite this augmentation being part of the contrastive training. Compared to CAN and MAE, both CLIP and ALIGN have trouble with motion blur, perhaps because this is not an effect that is easily tied to textual cues. Coarse-grained errors. We focus on style transfer results here. <ref> contains full confusion matrices, as well as <ref> and <ref>, which compare embeddings on held-out transformations. CLIP performs quite well on the style transfer category, whereas this accounts for a sizeable fraction of errors for CAN, MAE, and Supervised. For the held-out styles, CLIP correctly labels 86% of images. The best vision-only model is SimCLR, which has 54% accuracy. The errors for CAN/MAE come from the fact that they often predict restyled images as clean or filtered (e.g., blurred). CLIP and SimCLR achieve over 70% accuracy on the `Pasta' style, while CAN and MAE are below 4%. §.§ Does transformation information interfere with semantic information? We next explore the interplay between semantic and transformation accuracy by training two-head networks in a multi-task setting. The first head predicts the ImageNet-1k class. The second head predicts the transformation label. Both heads share the same 2048-dimensional hidden layer of the MLP that post-processes the embedding. As a baseline, we also train a one-head model that only predicts the semantic class (also using a 2-layer MLP with width 2048). We aim to determine how the multi-task setting affects the three metrics: semantic accuracy on clean images, obfuscated semantic accuracy on transformed images, and transformation prediction accuracy. <ref> reports these accuracies for both the fine-grained and coarse-grained versions. As before, we fix the embedding model and only train the MLP. Clean semantic accuracy. The two-head network achieves semantic prediction comparable to that of a one-head network. Only in some cases do we see a decrease in accuracy. The post-processing, despite being only 2048 dimensional, is able to effectively combine both semantic and transformation information in the MLP's hidden layer. Comparing semantic accuracies, CLIP and ALIGN outperform the other methods by a large margin. This is expected since the linear probe accuracy of vision-only self-supervised methods (CAN, MAE, SimCLR) tends to be lower than the accuracies after fine-tuning <cit.>. Obfuscated semantic accuracy. We move on to discuss the semantic accuracy on the transformed images (ObfSem). In essence, this is a metric for the robustness of the models to a dataset shift. Moreover, many of the transformations were not seen during training, and hence, we can consider the images to be OOD. Across the embeddings, we observe a mix of increases and decreases in the ObfSem accuracy. In general, the deviations are small, and we conclude that transformation sensitivity does not impact the ability to succeed at object-level predictions. §.§ What have we learned about embeddings? Transformation prediction is surprisingly easy. While our main goal was to uncover new insights about foundation models, along the way we discovered that embeddings can be used to predict transformations. This ability is useful for OOD detection and content filtering. There is growing evidence that cloud-based classification systems are susceptible to transformation-based attacks, such as style transfer, Gaussian noise, or recoloring <cit.>.
We believe this is an important direction, in addition to current OOD and anomaly detection efforts <cit.>. Fortunately, based on our results, modern embeddings suffice both for classification and for detecting many transformations. Possible to have semantic & transformation accuracy. From <ref>, we see that the multi-task training leads to good performance on both semantic and transformation prediction. In some cases, the sensitivity to transformations even improves the obfuscated semantic accuracy. Another observation is that we do the post-processing via the low-cost training of a 2-layer MLP. It is possible to fine-tune representations while freezing the large embedding model. Different embeddings capture different information. By analyzing transformation prediction, we have drawn conclusions about the sensitivity of several embeddings. <ref> has summarized these insights, which help inform a choice between competing models. All of the models capture a lot of transformation information, which is useful to know (and perhaps unexpected). We hope that transformation prediction becomes a standard evaluation metric. §.§ Where could we have done more? One deficiency of our work is that we have not proposed ways to translate our observations into improvements on benchmarks. Transformation awareness could improve performance on downstream tasks beyond ImageNet. It would be ideal to couple transformation prediction with a new architecture or algorithm and create a self-supervised method that outperforms CAN, MAE, SimCLR, CLIP, and ALIGN. Another shortcoming is that we have not actually used our models to detect a dataset shift in real-world data. There are many settings where discovering image transformations is important, including content safety, detecting copyright evasion, and discovering manipulation. In this direction, we have shown that different types of semantically-trained embeddings can perform well on these detection tasks. § CONCLUSION We constructed and investigated a new classification task that allowed us to shed new light on image embeddings. We showed that popular models capture enough information to distinguish dozens of transformations. Our experiments uncovered some ways in which SimCLR is more invariant than CAN and MAE, and the types of transformations that are captured by self-supervised vision models vs. image-text models, such as CLIP and ALIGN. We demonstrated that it is possible to post-process embeddings using a small network and extract more transformation information than a linear probe. The findings from the transformation prediction task provide new insights into the capacity of image embedding methods, complementing prior experiments on semantic accuracy. We discuss future work in <ref>. § WHAT CAN YOU DO NEXT? Our work is motivated by improving foundation models for their uses beyond semantic classification. We list many open directions for future work inspired by our findings: * A central question is to create a nearly lossless image embedding that is also easily adapted for many downstream tasks. Our work suggests that it should be possible to keep more low-level information in the representation without compromising semantic performance. We believe this is an important direction because some of the downstream tasks may require these low-level features. * Our results also suggest that networks that can predict transformations do not perform any worse in terms of data shift robustness (obfuscated semantic accuracy).
This suggests that robust training methods might benefit from incorporating equivariance strategically, instead of focusing on invariance. Or, in contrast, transformers may be inherently equivariant, and achieving invariance may require even more aggressive training methods. * Text-to-image generative models depend heavily on their pre-trained image encoder <cit.>. Fine-tuning the image backbone with transformation prediction could help in synthesizing transformed images. On the other hand, the invariance of image embeddings could prohibit the ability to generate certain visual features. * A different direction is extending the transformation prediction task to be more fine-grained. We could ask the network to predict the specific parameters or strength of one or more transformations. One option is to predict both the transformation and strength of ImageNet-C transformations <cit.>. This should make the task more challenging, and thus, reveal larger quantitative gaps in the performance of various embeddings. * Another extension could be to identify which part of an image has been altered. This could uncover further differences between embedding methods. For example, masking-based embeddings might struggle with this given that they are trained on heavily obscured images. Image-text models might perform well because language cues can refer to parts of images and relative positioning of objects. * An alternative way to probe the visual side of image-text models is through text prompts that describe visual aspects. This has been studied for some attributes like color, shape, and material <cit.>. For example, <cit.> uses questions like “what is the color of a penguin?” or “what is the size of an ant?” to probe the image-text model. Interestingly, our results suggest that CLIP and ALIGN retain quite a bit of visual information in the embedding. Hence, errors for the text prompts may be due to image-text alignment or to the language side of the model itself. It would be interesting to compare transformation prediction performance to these prompts and see if there are trends in the performance of different models. * Recent work considers probes to understand how transformers process information <cit.>. In our SimCLR experiments, we saw that the final projection layer is responsible for much of the invariance. This suggests that certain layers may have a larger impact than others, as transformation information flows through the transformer model. Future work could continue the study of this interesting phenomenon. * Considering other modalities, our transformation prediction task can apply to text or audio. For example, words can be changed with synonyms, characters can be replaced with symbols or typos, or sentences can be reordered based on syntactic freedoms. Following our analysis, this approach could then draw conclusions about the predispositions of different language models. This would complement some of the existing language model probing work <cit.>. * From an application point of view, it would be interesting to use transformation prediction for a data cleaning or filtering task. Another application is detecting (adversarial) image manipulations. It is possible to use an MLP trained on top of an embedding to find anomalous images, such as those that have been stylized or heavily edited. §.§ Where could we have done more? One deficiency of our work is that we have not proposed ways to translate our observations into improvements on benchmarks. 
Transformation awareness could improve performance on downstream tasks beyond ImageNet. It would be ideal to couple transformation prediction with a new algorithm and create a self-supervised method that outperforms CAN, MAE, SimCLR, CLIP, and ALIGN. Another shortcoming is that we have not actually used our models to detect a dataset shift in real-world data. There are many settings where discovering image transformations is important, including content safety, detecting copyright evasion, and discovering manipulation. In this direction, we have shown that different types of semantically-trained embeddings can perform well on detection tasks. Our generalization task also shows that training even a small MLP on top of an embedding can suffice to detect held-out transformations. § EXPERIMENTAL SET-UP §.§ Drawing conclusions about embeddings We can use the transformation prediction task to measure whether an embedding model captures certain visual content. Consider a transformation t, where t(x) denotes the transformed version of x. Assume we can train a post-processing network to predict that ϕ(t(x)) is transformed and ϕ(x) is not. Then, we can conclude that ϕ must preserve enough information about the image so that t can be detected. That is, ϕ(x) ≠ ϕ(t(x)). More interestingly, a network may succeed at predicting most transformations t from a set 𝒯 when they are applied to images in a dataset 𝒳. Hence, the sets A_t, ϕ = {ϕ(t(x)) | x ∈𝒳} for t ∈𝒯 are mostly disjoint. It is possible to use a sample from A_t, ϕ to determine t with high accuracy. We also believe the transformation prediction task is a direct measure of equivariance, as opposed to k-NN results <cit.>. If the network cannot detect the transformation t, then we may conclude the opposite: the embedding ϕ does not preserve enough information. We can further qualify this based on the amount of post-processing required to extract this information. If t is detectable after zero or one layers, then the information must be readily accessible in ϕ(t(x)). Otherwise, if t can be detected but only after numerous layers, then the information is still present but can only be recovered after combining several sources of information from ϕ(t(x)). If no amount of post-processing suffices, then the embedding must truly be invariant, and ϕ(x) ≈ ϕ(t(x)). Given the above discussion, the fine-grained and generalization tasks yield complementary insights. The benefit of the fine-grained task is that we can investigate the precision of the embedding's information. Distinguishing a blur of radius three vs. five should require more detailed information than distinguishing blurring vs. brightening. In the generalization task, the network does not see some sub-transformations during training, which enables us to measure a type of generalization. For example, consider transformations t and t' from the same class (e.g., two different styles). In the best case, we only use t during training, and the network recognizes that ϕ(t'(x)) is similar to ϕ(t(x)). It could be that the embeddings are close together or that ϕ encodes the style in some way. On the other hand, the network may fail to generalize, and predict ϕ(t'(x)) and ϕ(t(x)) differently. One conclusion is that ϕ is insensitive to t'. However, we show later that prediction accuracy is quite high for the fine-grained task. The generalization mistakes actually imply that ϕ captures both transformations but does so in a divergent way.
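To make the probing recipe concrete, the sketch below shows one possible implementation of the 2-layer MLP probe trained on frozen, pre-computed embeddings to predict the fine-grained transformation label. It is an illustrative sketch rather than the code behind the reported numbers: the hidden width (2048), optimizer (ADAM, learning rate 10^-3), batch size (1024), and number of classes (31) follow the settings given in the next subsection, while the initialization scale, helper names, and the use of optax are assumptions of this sketch, and dropout (rate 0.2) is omitted for brevity.

```python
# Minimal sketch (not the authors' exact code) of a 2-layer MLP probe on frozen embeddings.
import jax
import jax.numpy as jnp
import optax  # assumption of this sketch; any ADAM implementation would do

EMB_DIM, HIDDEN, NUM_CLASSES = 1024, 2048, 31  # e.g. a CAN/MAE/SimCLR embedding, fine-grained task

def init_params(key):
    k1, k2 = jax.random.split(key)
    return {"w1": 0.02 * jax.random.normal(k1, (EMB_DIM, HIDDEN)), "b1": jnp.zeros(HIDDEN),
            "w2": 0.02 * jax.random.normal(k2, (HIDDEN, NUM_CLASSES)), "b2": jnp.zeros(NUM_CLASSES)}

def logits_fn(params, emb):
    hidden = jax.nn.relu(emb @ params["w1"] + params["b1"])  # dropout (0.2) omitted in this sketch
    return hidden @ params["w2"] + params["b2"]

def loss_fn(params, emb, labels):
    log_probs = jax.nn.log_softmax(logits_fn(params, emb))
    return -jnp.mean(jnp.take_along_axis(log_probs, labels[:, None], axis=1))  # cross entropy

optimizer = optax.adam(1e-3)

@jax.jit
def train_step(params, opt_state, emb, labels):
    loss, grads = jax.value_and_grad(loss_fn)(params, emb, labels)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state, loss

params = init_params(jax.random.PRNGKey(0))
opt_state = optimizer.init(params)
# emb_batch: (1024, EMB_DIM) frozen embeddings; label_batch: (1024,) integer transformation ids
# params, opt_state, loss = train_step(params, opt_state, emb_batch, label_batch)
```

For the two-head variant, a second output head over the 1000 ImageNet classes would share the same hidden layer, and its cross-entropy loss would simply be added to the transformation loss.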
Interestingly, in contrast to some work in NLP probing <cit.>, we observe the same trends using a linear probe and a 2-layer MLP. It would be worthwhile to also consider control metrics and random feature embeddings to further understand whether the transformation information is readily available or not, similar to <cit.>. §.§ More experiment and training details We use JAX to implement the models and run the experiments. The learning rate is 10^-3, and we use a per-device batch size of 1024 with roughly 41.2 epochs for the fine-grained dataset and 127.9 for the coarse-grained dataset. For both datasets, the model trains while seeing a total of roughly 1.6B examples. We do not use any warm-up steps. All models are optimized with ADAM and the learning rate decreases linearly. Given that we are only training 2-layer MLPs, the wall clock time for training is under a few hours using TPUs. We use dropout with rate 0.2 for all experiments except when comparing SimCLR in <ref>, which has a dropout rate of 0.0 because we compare with a linear probe. We experimented with deeper/wider networks and other dropout rates, but this did not lead to significantly different results. For 2-head models, we sum the losses for the semantic prediction and transformation prediction tasks. We use categorical cross entropy for both losses on the separate tasks. We do not weight the losses separately. We also randomly sample batches without controlling for the distribution of transformations in each batch. Hence, for the fine-grained task, we expect 1/31 of the images to be clean, and 1/10 to be clean for the coarse-grained task. Embeddings are computed consistently with the external implementations of the various embedding models. We were given access to the ALIGN weights, and we also use a standard CLIP checkpoint. We trained the supervised model from scratch, without optimizing the data augmentation strategy. We use three pre-trained models (CAN, MAE, SimCLR) that were trained by the authors of the CAN paper <cit.> and shared with us (all trained on JFT-300M). Interestingly, we achieve slightly higher top-1 accuracy on ImageNet-1k with our 2-head multi-task MLPs compared to the linear probe results that they report. Specifically, our MLPs on top of CAN, MAE, SimCLR have 76.04, 70.18, 74.49 top-1 accuracy, respectively. Their linear probe results for CAN, MAE, SimCLR are 75.4, 64.1, 73.4, respectively. This improvement could either be due to (i) the extra layer in the MLP or (ii) training the MLP with both clean and transformed data. The MAE paper reports higher accuracy than ours, with 73.5 top-1 linear probe <cit.>. § DATASET AND TRANSFORMATION DETAILS For the images, we use the ImageNet-1k dataset. We transform the images using standard methods in OpenCV <cit.> and Pillow <cit.>, and a few pre-trained models listed below. §.§ High-level motivation for our choice of transformations. We aim to determine whether embeddings from image foundation models can be used to train classifiers for non-semantic tasks. This is advantageous because in large ML systems, embeddings are often pre-computed and stored as a way to compress and pre-process image datasets. Hence, using a small MLP on top of an embedding offers a light-weight way to automatically compute predicted signals about the images. There are many non-semantic tasks that would fit into this framework. For example, for data cleaning, it is important to recognize poor image quality (e.g., JPEG artifacts, motion blur, cropping, etc.).
For content filtering and policy enforcement, it may be crucial to detect image manipulations (e.g., style transfer, text/icon overlays). In general, non-semantic image information is crucial for a myriad of tasks, such as determining if an image is a painting or photograph, if it has been taken during the day or night, if it is high-fidelity or grainy, or if it has been edited from the original. When choosing the sets of transformations, we have tried to cover a range of visual effects. Noise affects individual pixels and blurring affects nearby regions. Overlays are independent of the image, while style transfer heavily depends on the content. The filtering and quantizing options focus on hue, saturation, or value separately. Some transformations are barely human-visible, and others are strikingly obvious. Of course, the space of all possible transformations is impossible to cover fully, but we aim to probe many aspects of embeddings. In the generalization task, we have also tried to set up an experiment that reflects real-world usage. For example, with style transfer, we train with a subset of styles and ask the model to recognize examples of transformed images with unseen styles. For the other categories, we also believe that quantizing captures a variety of related recoloring effects, and filtering covers many types of blur and noise. There is certainly room to expand and refine the taxonomy of transformations, and this is a nice direction for future work. §.§ Fine-grained transformations Below is the list of transformations in the fine-grained transformation set. For transformations which are parameterized, multiple sets of parameters may be used. In this case, different parameter sets are considered as different "classes" in the transformation prediction problem. The parameters are the same for training and for testing. * Identity * No transformation, i.e. the original images. * No parameter. * Hue Scaling & Shift * Scale and shift the hue channel in the hue-saturation-lightness (HSL) color space. hue_new = (hue × scale + 𝚘𝚏𝚏𝚜𝚎𝚝) mod 360. * Parameter set 1: 𝚜𝚌𝚊𝚕𝚎 = -32, 𝚘𝚏𝚏𝚜𝚎𝚝 = -4 * Parameter set 2: 𝚜𝚌𝚊𝚕𝚎 = 1, 𝚘𝚏𝚏𝚜𝚎𝚝 = 64. * Saturate & Desaturate * Scale and shift the saturation channel in the HSL color space. saturation_new = clip(saturation × scale + 𝚘𝚏𝚏𝚜𝚎𝚝, 0, 255). * Parameter set 1: 𝚜𝚌𝚊𝚕𝚎 = 5, 𝚘𝚏𝚏𝚜𝚎𝚝 = -4 * Parameter set 2: 𝚜𝚌𝚊𝚕𝚎 = 0.25, 𝚘𝚏𝚏𝚜𝚎𝚝 = 32 * Brighten & Darken * Shift the lightness channel in the HSL color space. lightness_new = clip(lightness + 𝚘𝚏𝚏𝚜𝚎𝚝, 0, 255). * Parameter set 1: 𝚘𝚏𝚏𝚜𝚎𝚝 = 96 * Parameter set 2: 𝚘𝚏𝚏𝚜𝚎𝚝 is uniformly sampled between -128 and -64. * Gaussian Noise * Add random noise to each pixel. The noise distribution is a Gaussian distribution with mean 0 and standard deviation σ. * Parameter set 1: σ = 0.05 * Parameter set 2: σ = 0.15 * Gaussian Blur * Blur an image by a Gaussian function with a given radius. * Parameter set 1: The radius is uniformly sampled between 3 and 5. * Parameter set 2: The radius is uniformly sampled between 7 and 9. * Motion Blur * Simulate a motion of an image (as a 2D rectangle) along a random direction by a given length (in pixels). * Parameter set 1: 𝚕𝚎𝚗𝚐𝚝𝚑 = 5 * Parameter set 2: 𝚕𝚎𝚗𝚐𝚝𝚑 = 10 * Corner Crop * Keep only the bottom-right quadrant of an image. * No parameter. * Rotation * Rotate an image counter-clockwise by a given degree. * The degree is uniformly sampled between 90 and 270. * JPEG Compression * Re-compress an image with a given JPEG quality. * The quality is uniformly sampled between 10 and 15.
* Floyd-Steinberg Dithering <cit.> * Reduce the bit depth of an image by applying the Floyd-Steinberg dithering algorithm. * The bit depth is set to 1. * Posterize * Reduce the bit depth of an image by quantizing each pixel value independently. * The bit depth is set to 2. * Pixelate * Create a pixelation effect by downsampling an image with a factor and then upsampling to its originnal size. * The downsampling factor is set to 0.15. * Solarize * Simulate photo solarization by inverting each pixel value above a threshold. * The threshold is set to 192. * Grayscale * Change an image to a grayscale image. * No parameter. * Vertical Line Shift * Rotate each column by a given distance (wrapping around), with even columns rotating down and odd columns rotating up. * The distance is set to 3. * Grid Overlay * Change the pixels on even rows and on even columns to a fixed color RGB=(204, 255, 127). * No parameters. * Line Overlay * Paint horizontal lines on an image. * Each line is 4-pixel wide, and the distance between adjacent lines is 20 pixels. The lines are painted dark red RGB = (101, 0, 0). * Icon Overlay * Paint a wall of `grinning face' icons on an image. * The opacity (alpha channel) of the icons is set to 32. The width ratio between image and icon is set to 10. * Text Overlay * Paint a wall of constant gibberish text on an image. * The text is colored dark gray RGB=(25, 25, 25). * Line Halftoning * Apply a halftone process based on amplitude-modulated sinusoidal waves <cit.>. * The waves are drawn with lines of 1-pixel width, and the maximum amplitude is 5 pixels. * Style Transfer * Apply the style transfer model <cit.> with a given style image. * Parameter set 1: The style image is Vincent van Gogh's The Starry Night. * Parameter set 2: The style image is Gyula Derkovits's Taligás. * Parameter set 3: The style image is a photo of a bonfire. * Parameter set 4: The style image is a photo of pasta. §.§ Coarse-grained transformations Below is the list of transformation categories and sub-transformations in the coarse-grained transformation set. The testing set of some categories may contain more sub-transformations or wider parameter ranges. For randomized parameters, we use U(a, b) to denote a uniform sample between a and b (inclusive). The parameters are independently sampled once for each image. * Identity * No transformation, i.e. the original images. * No parameter. * Icon Overlay * Paint a wall of icons on an image. * Training parameters: For each image an icon is randomly chosen from 5 candidate icons. The opacity (alpha channel) of the icons is U(64, 128). The width ratio between image and icon is U(8, 12). * Testing parameters: Additional 5 candidate icons (10 in total). The opacity (alpha channel) of the icons is U(64, 144). The width ratio between image and icon is U(5, 15). * Line Halftoning * Apply a halftone process based on amplitude-modulated waves <cit.>. * Training parameters: For each image an waveform is randomly chosen from 2 candidate waveforms. The waves are drawn with lines of U(1, 2)-pixel width, and the maximum amplitude is U(5, 7) pixels. * Testing parameters: Additional 2 candidate waveforms (4 in total). The waves are drawn with lines of U(1, 2)-pixel width, and the maximum amplitude is U(4, 7) pixels. * Filtering: Transformations making the image blurry or less clear. * Gaussian Blur * Blur an image by a Gaussian function with a given radius. * Training parameter: The radius is U(3, 6). * Testing parameter: The radius is U(2, 9). 
* Motion Blur * Simulate a motion of an image (as a 2D rectangle) along a random direction by a given length (in pixels). * Training parameter: The radius is U(18, 27). * Testing parameter: The radius is U(15, 35). * Pixelate * Create a pixelation effect by downsampling an image with a factor and then upsampling to its originnal size. * Training parameter: The downsampling factor is U(0.25, 0.5). * Testing parameter: The downsampling factor is U(0.125, 0.5). * Blurry Background * Change the aspect ratio and use a Gaussian blurred copy of the same image as background. * Training parameters: Width and height scaling factors are U(1.0, 1.8). Gaussian blur radius is U(20, 40). * Testing parameters: Width and height scaling factors are U(0.7, 2.0). Gaussian blur radius is U(20, 50). * Line Shift * Rotate each row or column by a given distance (wrapping around), with even rows/columns rotating in an opposite direction to odd rows/columns. * This transformation is held out in training. * Testing parameter: The distance is U(2, 8) pixels. * Noise: Transformations adding high frequency artifacts. * Gaussian Noise * Add a random noise to each pixel. The noise distribution is a Gaussian distribution with mean 0 and standard deviation σ. * Training parameter: σ∼ U(0.1, 0.5) * Testing parameter: σ∼ U(0.1, 0.7) * Impulse Noise * Randomly sample a percentage of pixels and paint half of them white and half of them black. * Training parameter: Noise percentage is U(10%, 30%). * Testing parameter: Noise percentage is U(5%, 40%). * Random Dithering * Quantize each pixel to 0 or 255 using a per-pixel random threshold. * No parameters. * Ordered Dithering * Quantize each pixel to using a 2×2 Bayer threshold matrix <cit.>. * No parameters. * Floyd-Steinberg Dithering <cit.> * Reduce the bit depth of an image by applying the Floyd-Steinberg dithering algorithm. * This transformation is held out in training. * Testing parameter: The bit depth is U(1, 2). * Image Fusing: Transformations which fuse another image (as a distraction) in foreground or background. * Image Overlay * Add a small distraction image to foreground with partial opacity. * Training parameters: 5 choices of distraction images. The distraction image's dimensions are U(0.5, 0.7) fraction in size of the content image. The opacity is U(64, 128). * Testing parameters: Additional 6 choices of distraction images (11 in total). The distraction image's dimensions are U(0.4, 0.8) fraction in size of the content image. The opacity is U(64, 128). * Fusing * Add a distraction image as background. * Training parameters: 5 choices of distraction images. The foreground image's dimensions are U(0.6, 0.8) fraction in size of the background image. The foreground's opacity is U(128, 196). * Testing parameters: Additional 6 choices of distraction images (11 in total). The foreground image's dimensions are U(0.4, 0.9) fraction in size of the background image. The foreground's opacity is U(128, 196). * Quantizing: Transformations dealing with colors. * Quantize Colors * Reduce the number of distinct colors in an image. The colors are clustered and then replaced by the cluster centroids. * Training parameter: The number of distinct colors after quantization is U(16, 64). * Testing parameters: The number of distinct colors after quantization is U(8, 128). * Invert Colors * Invert all pixel values. * No parameter. * Solarize * Simulate photo solarization by inverting each pixel value above a threshold. * Training parameter: The threshold is U(96, 192). 
* Testing parameter: The threshold is U(64, 224). * HSL To RGB * Convert an image to the HSL color space, and then directly read the values as RGB. * No parameter. * Grayscale * Change an image to a grayscale image. * This transformation is held-out in training. * No parameter. * Hue Shift & Scaling * Scale and shift the hue channel in the hue-saturation-lightness (HSL) color space. hue_new = (hue × scale + 𝚘𝚏𝚏𝚜𝚎𝚝) mod 360. * This transformation is held-out in training. * Testing parameters 1: 𝚜𝚌𝚊𝚕𝚎=1, 𝚘𝚏𝚏𝚜𝚎𝚝=U(60, 300). * Testing parameters 2: 𝚜𝚌𝚊𝚕𝚎=± U(8, 32), 𝚘𝚏𝚏𝚜𝚎𝚝=U(0, 360). * Static Overlay * Line Overlay * Paint a series of equidistant parallel lines on an image. The lines in one image are in a random direction and of the same random color. * Training parameters: Each line is U(5, 7)-pixel wide, and the distance between adjacent lines is U(18, 24) pixels. * Testing parameters: Each line is U(3, 10)-pixel wide, and the distance between adjacent lines is U(15, 30) pixels. * Text Overlay * Paint a wall of gibberish text on an image. The text in one image are of the same random color. * Training parameter: 5 choices of gibberish text. * Testing parameter: Additional 5 choices of gibberish text (10 in total). * Grid Overlay * Change the pixels on even rows and on even columns to a random color (sampled per image). * No parameters. * Style Transfer * Arbitrary Neural Style Transfer * Apply a style transfer model <cit.> with a given style image. * Training parameter 1: The style image is Vincent van Gogh's The Starry Night. * Training parameter 2: The style image is Gyula Derkovits's Taligás. * Training parameter 3: The style image is Edvard Munch's The Scream. * Training parameter 4: The style image is Katsushika Hokusai's The Great Wave off Kanagawa. * Held-out parameter 1: The style image is Amadeo de Souza-Cardoso's Landscape with Black Figure. * Held-out parameter 2: The style image is Pablo Picasso's Violon. * Held-out parameter 3: The style image is a photo of a a bonfire. * Held-out parameter 4: The style image is a photo of pasta. * Artistic Style Transfer * Apply a style transfer model <cit.> with a set of pre-trained weights. * Training parameter 1: The weights are pre-trained to mimic stained glass mosaics. * Training parameter 2: The weights are pre-trained toward Francis Picabia's Udnie. * Held-out parameter 1: The weights are pre-trained toward Vincent van Gogh's The Starry Night. * Held-out parameter 2: The weights are pre-trained toward a painting of candies. * Deep Dream * Run a pre-trained DeepDream model <cit.> to enhance the patterns that the model recognizes. * This transformation is held-out in training. * Testing parameter: The DeepDream process is configured with U(7, 12) update iterations, learning rate U(0.05, 0.08), number of octaves U(6, 12), and octave scale U(1.5, 2.0). * Warping: Transformations which rotate or transpose images. * Rotation * Rotate an image counter-clockwise by a given degree. * Train parameter: The degree of rotation is 90. * Held-out parameter 1: The degree of rotation is 180. * Held-out parameter 2: The degree of rotation is 270. * Vertical Flip * Flip an image top to bottom. * No parameter. * Transpose * Flip an image diagonally along a diagonal. * Training parameter: Flipping along the minor diagonal. * Held-out parameter: Flipping along the major diagonal. 
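The dataset generation code itself is not reproduced here, but the colour transformations above are specified precisely enough to sketch. The snippet below is one possible NumPy/OpenCV realisation of Hue Scaling & Shift, Brighten & Darken, and Solarize; the use of OpenCV's 8-bit HLS convention (hue stored as degrees divided by two) and the function names are assumptions of this sketch, and the original pipeline, which also relies on Pillow, may differ in detail.

```python
# Illustrative sketch of three of the colour transformations defined above (not the original pipeline).
import cv2
import numpy as np

def hue_scale_shift(rgb, scale, offset):
    hls = cv2.cvtColor(rgb, cv2.COLOR_RGB2HLS).astype(np.float32)
    hue_deg = hls[..., 0] * 2.0                                  # OpenCV stores hue as degrees / 2 for uint8 images
    hls[..., 0] = ((hue_deg * scale + offset) % 360.0) / 2.0     # hue_new = (hue * scale + offset) mod 360
    return cv2.cvtColor(hls.astype(np.uint8), cv2.COLOR_HLS2RGB)

def brighten_darken(rgb, offset):
    hls = cv2.cvtColor(rgb, cv2.COLOR_RGB2HLS).astype(np.float32)
    hls[..., 1] = np.clip(hls[..., 1] + offset, 0, 255)          # lightness_new = clip(lightness + offset, 0, 255)
    return cv2.cvtColor(hls.astype(np.uint8), cv2.COLOR_HLS2RGB)

def solarize(rgb, threshold):
    rgb = np.asarray(rgb)
    return np.where(rgb > threshold, 255 - rgb, rgb).astype(np.uint8)  # invert pixel values above the threshold

# Example usage with parameter set 2 of Hue Scaling & Shift (scale = 1, offset = 64):
# img = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
# out = hue_scale_shift(img, scale=1, offset=64)
```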
§ FURTHER EXPERIMENTAL RESULTS §.§ Smaller Representation (width 1k MLP) <ref> shows the one-head and two-head accuracies for an MLP with one hidden layer of width 1k (the main paper table had width 2k). We use a one-headed model for each of transformation and semantic prediction, and then a two-headed model that trains to predict both labels. For the fine-grained dataset, performance is comparable for one- and two-head models. However, the two-headed models underperforms slightly (less than 1% worse). Interestingly, the coarse-grained dataset has the opposite trend for CAN, MAE, and SimCLR when it comes to transformation accuracy. Here, the two-headed model actually leads to a significant improvement in transformation prediction accuracy. The multi-task set-up likely prevents overfitting. This is beneficial because the coarse-grained dataset has held-out transformations (whereas the fine-grained dataset has the same train/test transformations). §.§ Analyzing the Held-Out Transformations and Generalization Dataset <ref> presents average accuracies for the held-out sub-transformations. <ref> zooms in the on the style transfer accuracies, showing the fraction of correct prediction for each of the thirteen styles that are displayed in <ref>. Then, we present the 10 × 10 confusion matrices for the coarse-grained dataset, one for each embedding. This matrices demonstrate the common errors made by the models, which inform the ways that the probe uncovers properties of the embeddings. Specifically, mispredicting certain transformations as `Identity' either points to invariance or to an inconsistent encoding of the transformation information.
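For reference, the row-normalised confusion matrices and per-class accuracies discussed in this section can be assembled from the probe predictions in a few lines of NumPy. The snippet below is an illustrative sketch with assumed array names (y_true and y_pred holding integer transformation labels), not the evaluation code used to produce the figures.

```python
# Illustrative sketch: a row-normalised confusion matrix for the 10 coarse-grained categories.
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=10):
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)        # cm[i, j] = number of class-i examples predicted as class j
    return cm

def row_normalise(cm):
    totals = np.maximum(cm.sum(axis=1, keepdims=True), 1)
    return cm / totals                        # each row gives the prediction distribution for one true class

# Per-class accuracy, e.g. the per-style accuracies reported for the style-transfer category:
# per_class_acc = np.diag(row_normalise(confusion_matrix(y_true, y_pred)))
```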
http://arxiv.org/abs/2307.05532v1
20230708070820
Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators
[ "Andreas Liesenfeld", "Alianda Lopez", "Mark Dingemanse" ]
cs.CL
[ "cs.CL" ]
[email protected] 0000-0001-6076-4406 Centre for Language Studies Radboud University The Netherlands [email protected] 0009-0004-5873-5496 Centre for Language Studies Radboud University The Netherlands [email protected] 0000-0002-3290-5723 Centre for Language Studies Radboud University The Netherlands Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as `open source', many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human annotation labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment. <ccs2012> <concept> <concept_id>10010147.10010178.10010179.10010182</concept_id> <concept_desc>Computing methodologies Natural language generation</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010583.10010786</concept_id> <concept_desc>Hardware Emerging technologies</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10002944.10011122.10002945</concept_id> <concept_desc>General and reference Surveys and overviews</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10002951.10003227.10003233.10003597</concept_id> <concept_desc>Information systems Open source software</concept_desc> <concept_significance>100</concept_significance> </concept> <concept> <concept_id>10002944.10011123.10011130</concept_id> <concept_desc>General and reference Evaluation</concept_desc> <concept_significance>100</concept_significance> </concept> </ccs2012> [500]Natural language generation [300]Emerging technologies [300]Surveys and overview [100]Open-source software [100]Evaluation < g r a p h i c s > A table with 16 rows and 13 columns. The first row is headed “Project" and lists the project names and organization behind it. Some projects also feature more information regarding the base large language and reinforcement learning models that are used. The The remaining 12 rows are each names after one of evaluation features in Table 1. Each cell of the table then evaluates the project for the respective feature, either giving it a pass, a partial pass, or a fail. More detailed information as well as the content of each cell can be found in the data repository that accompanies the paper. 
20 April 2023 [accepted]26 May 2023 Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators Mark Dingemanse August 12, 2023 ============================================================================================================ § INTRODUCTION Open research is the lifeblood of cumulative progress in science and engineering. In today's technological landscape, it is hard to find any research finding or technology that does not rely to a significant extent on the fruits of open research, often publicly funded. For instance, AlexNet <cit.>, the deep neural net kickstarting the deep learning revolution a decade ago, derived its strength from a human-annotated dataset of 3.2 million images created by Princeton computer scientists <cit.>. And the striking progress in protein folding in recent years (with the AlphaFold deep learning system predicting the structure of nearly all known proteins <cit.>, where decades of prior work had reached a comparatively meagre 17%) has only been possible thanks to openly deposited structural data in the Protein Data Bank that goes back half a century <cit.>. The talk of the town in conversational interfaces today is undoubtedly ChatGPT, an instruction-tuned text generator that impresses many because of its fluid prose. Yet striking new capabilities should not detract us from the risks of proprietary systems. Only three months after OpenAI rolled out ChatGPT, it abruptly discontinued API support for its widely used Codex model that had been available as a “free limited beta” since 2021 <cit.> — surprising users with only three days' notice and undercutting at one blow the reproducibility of at least 100 research papers.[See https://aclanthology.org/search/?q=openai-davinci-002aclanthology.org/search/?q=openai-davinci-002 (the same search term yields >150 arXiv preprints and >800 entries on Google Scholar) ] This is a stark reminder that proprietary systems are designed to offer smooth onboarding and convenience but come at the price of user lock-in and a lack of reliability. Proprietary systems come with considerable further risks and harms <cit.>. They tend to be developed without transparent ethical oversight, and are typically rolled out with profit motives that incentivise generating hype over enabling careful scientific work. They allow companies to mask exploitative labour practices, privacy implications <cit.> and murky copyright situations <cit.>. Today there is a growing division between global academia and the handful of firms who wield the computational resources required for training large language models. This “Compute Divide” <cit.> contributes to the growing de-democratisation of AI. Against this, working scientists call for avoiding the lure of proprietary models <cit.>, for decolonizing the computational sciences <cit.>, and for regulatory efforts to counteract harmful impacts <cit.>. §.§ Why openness matters Open data is only one aspect of open research; open code, open models, open documentation, and open licenses are other crucial elements <cit.>. Openness promotes transparency, reproducibility, and quality control; all features that are prequisites for supporting robust scientific inference <cit.> and building trustworthy AI <cit.>. Openness also allows critical use in research and teaching. For instance, it enables the painstaking labour of documenting ethical problems in existing datasets <cit.>, important work that can sometimes result in the retraction of such datasets <cit.>. 
In teaching, it can help foster critical computational literacy <cit.>. Despite strong evidence of the scientific and engineering benefits of open research practices, openness is not a given in machine learning and AI research <cit.>. Gundersen and Kjensmo, in one of the most detailed examinations of reproducibility in AI to date <cit.>, systematically surveyed 400 papers for a range of open science practices. They found that only about a third of papers share test datasets, only 8% share source code, and only a single paper shared training, validation and test sets along with results. We are not aware of more recent systematic surveys of this kind (nor do we attempt this here), but the increasing trend of corporate releases with glossy blog posts replacing peer-reviewed scientific documentation provides little reason for optimism. Openness is perhaps especially important for today's breed of instruction-following text generators, of which ChatGPT is the best known example. The persuasiveness of these language models is due in large part to an additional reinforcement learning component in which text generator output is pruned according to a reward function that is based on human feedback <cit.>, using insights from early work on evaluative reinforcement <cit.>. Human users appear to be highly susceptible to the combination of interactivity and fluid text generation offered by this technology. The ubiquity of ChatGPT interfaces makes it easy for anyone today to try out some prompt engineering (while freely providing further training data to OpenAI) — but it does not allow one to gain a critical and holistic understanding of the constraints and capabilities of such systems, nor of their risks and harms. For true progress in this domain, we will need open alternatives. In this paper, we survey alternatives to ChatGPT and assess them in terms of openness of data, models, documentation and access methods. The aim of our survey is threefold: to sketch some of the major dimensions along which it is useful to assess openness and transparency of large language models; to provide a view of the state of the art in open source instruction-tuned text generation; and to contribute towards a platform for tracking openness, transparency and accountability in this domain. §.§ Previous work Existing work reviewing and comparing large language models falls into two categories: informal lists and structured surveys. Informal lists are crowd-sourced pointers to available resources, from open RLHF datasets[https://github.com/yaodongC/awesome-instruction-datasetgithub.com/yaodongC/awesome-instruction-dataset ] to open examples of instruction-tuned text generators.[https://github.com/nichtdax/awesome-totally-open-chatgpt/blob/main/README.mdgithub.com/nichtdax/awesome-totally-open-chatgpt ] Systematic surveys of instruction-tuned language models are still rare and mostly focus on comparing model capabilities and performance, e.g., of “augmented language models” <cit.> and language models for writing code <cit.> (not our focus here). Complementary to our focus on degrees of openness in instruction-tuned models, a recent survey of generative AI systems more broadly focuses on gradience in release methods, from closed to staged to fully open <cit.>. An important development in this domain the introduction of data statements <cit.> and model cards <cit.>. 
These are structured documents that help creators document the process of curating, distributing and maintaining a dataset or model, and that help users to critically judge underlying assumptions, potential risks and harms, and potential for broader use. These resources have seen considerable uptake in the scientific community, though their adoption by for-profit entities lags behind. The risks of relying on proprietary solutions has spurred the development of several more open alternatives. For instance, the Bloom collaboration <cit.> is a team science project of unprecedented magnitude. It has trained and open-sourced a large language model based on a collection of almost 500 HuggingFace datasets amounting to 1.6TB of text and code in 46 spoken languages and 13 programming languages. <cit.>. A related initiative is The Pile <cit.>, a 800GB dataset of English text that serves as pre-training data for language models by EleutherAI <cit.>. Meta AI's LLaMA <cit.> provides researchers with access to a series of base models trained on data claimed to be `publicly available'. It should be noted that none of these initiatives have undergone rigorous peer-review or data auditing at this point, and that claims of openness do not cancel out problems, legal or otherwise. In recent years, the private company HuggingFace has emerged as an important hub in the open source community, bringing together developers and users of projects in machine learning and natural language processing. It offers infrastructure for hosting code, data, model cards, and demos <cit.>. It also provides a widely used setup for automated evaluation, generating leaderboards and allowing quick comparison on a number of automated metrics, making it somewhat of a balancing act between offering incentives for documentation and for SOTA-chasing <cit.>. Our focus here is not performance evaluation of the kind offered by leaderboards; instead it is to survey degrees of openness in the fast-evolving landscape of text generators. § METHOD We survey open-source instruction-tuned text generators and evaluate them with regard to openness, scientific documentation, and access methods. Since any survey in this fast-growing field deals with moving targets, we focus here mainly on dimensions of enduring relevance for transparency and accountability. An up to date list of all models surveyed can be found at https://osf.io/d6fsrosf.io/d6fsr. §.§ Requirements The target breed of models in focus here is characterized by the following two features: its architecture is at base a large language model with reinforcement learning from human feedback (LLM + RLHF) and it aims for openness and transparency (along degrees we quantify). Projects are not included if they are as proprietary and undocumented as ChatGPT (like Google's Bard), or if they merely provide a front-end that calls some version of ChatGPT through an OpenAI API (like Microsoft's Bing). We explicitly include small-scale projects and projects that are in early stage development if they are open, sufficiently documented, and released under an open source license. Querying academic search engines and open code repositories, we find at least 15 projects that have sprung up in the last six months alone. §.§ Survey elements We assess projects on 13 features divided over three areas (Table 1): availability, documentation, and access methods. For each feature, we document openness along a scale from maximum to partial to no openness and transparency. 
For licenses, only systems that are fully covered by a true open-source licence count as maximally open, less permissive or partial licensing counts as partially open, and non-open or unclear licensing situations count as closed. Figure 1 shows a snapshot of 15 projects assessed for all features, with degrees of openness colour-coded (, ∼ , ×). Please refer to the data repository for more information about how each feature is evaluated, and for a more up to date listing. § RESULTS Projects roughly fall into two categories. First, small, relatively bare bones projects that only provide source code and build on existing large language models. These projects often cannot share information on architecture, training data, and documentation because they inherit closed-source data from the LLMs they build on. They usually also do not provide APIs or other user interfaces. However, some of such small projects do come with high-quality documentation and some build only on explicitly open LLMs. What such small projects lack in performance, they make up in utility for the open source community as they can provide useful entry points to learning about LLM+RLHF tools. We also identify a handful of projects backed by larger organisations, which aim to offer similar features to proprietary tools such as ChatGPT but are open-sourced and well documented. Two such initiatives top our list of open-source alternatives to ChatGPT: bigscience-workshop's xmtf tool building on the BLOOMZ and mT0 models (sponsored by HuggingFace) and LAION-AI's OpenAssistant based on an open, crowd-sourced RLHF training dataset (oasst1). Open Assistant also features a text-based and graphical user interface as well as a web resources for crowd-sourcing training data. We also found that several projects are not as open as they initially seemed to be, with many of them merely wrappers of closed models. We observe three recurring issues in the area of availability and documentation. Inheritance of undocumented data. Many tools build on existing large language models (which we here call base models) and inherit the undocumented datasets (often web-scraped and often of dubious legality) these base models are trained on. Training data of RLHF component is not shared. Building RLHF training datasets requires labour-intensive work by human annotators. The lack of RLHF training data is a major performance bottleneck for smaller research teams and organisations, and hampers reproducible research into the use of instruction-tuned text generators for conversational user interfaces. Papers are rare, peer-review even rarer. Most projects reviewed here follow the corporate `release by blog post' model. While there are some preprints, none of the systems we review is currently documented in a peer-reviewed paper. Habitually bypassing this important (albeit sometimes flawed) quality assurance mechanism allows systems to escape critical scrutiny and risks undermining scientific and ethical standards. Some other patterns are worth noting. One is the rise of synthetic data especially for the instruction component. Prominent examples are Self-Instruct (derived from GPT3) <cit.>, and Baize, a corpus generated by having ChatGPT engage in interaction with itself, seeded by human-generated questions scraped from online knowledge bases <cit.>. 
This stretches the definition of LLM + RLHF architectures because the reinforcement learning is no longer directly from human feedback but has a synthetic component, in effect parasitizing on the human labour encoded in source models. The consequences of using synthetic reinforcement learning data at scale are unknown and in need of close scrutiny. The derivative nature of synthetic datasets is probably one reason they are released specifically “for research purposes only” <cit.>, with commercial use strictly prohibited. This leads to an important wrinkle. Baize models and data are incorporated in several popular instruction-tuned text generators, including the Falcon family of models which bills itself as ready for “research and commercial utilization”[Technology Innovation Institute, https://falconllm.tii.ae/, June 7, 2023] in direct violation of Baize's prohibition against commercial use. This is merely one example of the complex dependencies embedded in these tools, and the legal quagmires obscured by simple claims of `openness'. § DISCUSSION The goal of this short paper has been to provide a critical review of degrees of openness in the fast-moving field of instruction-tuned large language models. We have found projects at varying stages of implementation, documentation, and useability. Most of them offer access to source code and some aspects of pre-training data, sometimes in legally ambiguous ways. Data from the reinforcement learning step, crucial to the simulation of instruction-following in these interfaces, is more elusive, provided by at best half of the initiatives. Strikingly, only a handful of projects are underpinned by a scientific write-up and none of them have as yet undergone scientific peer review. There are many shades of openness <cit.>, yet all of the projects surveyed here are significantly more open than ChatGPT. ChatGPT was announced in a company blog post and rolled out to the public with an interface designed to capture as much free human labour as possible, but without any technical documentation. (The RLHF component, arguably the biggest differentiator for the instruction-following behavior, was sketched in <cit.>, though without data.) Its follow-up GPT-4 continues OpenAI's tradition of openness in name only: it comes with an evaluation framework that primarily benefits the company yet contains the absolute minimum of technical documentation. In particular, an unreviewed preprint distributed by OpenAI and billed as a “technical report" <cit.> mostly provides cherry-picked examples and spends more space on crediting company workers for blog post content, communications, revenue, and legal advice than on actual technical details. (Companies like OpenAI sometimes give “AI safety" as a pretext for closedness; this is hard to take seriously when their own public-facing proprietary models provide clear and present harms <cit.>.) How can we foster more openness and accountability? First, incentives need changing. In high-stakes AI research, data work is often seen as low-level grunt work <cit.> and incentive structures generally encourage a `move fast and break things' mentality over careful scientific work <cit.>. But work that documents data provenance and traces harmful impacts <cit.> deserves major scholarly and societal credit. Here, AI and NLP might benefit from work in software engineering and infrastructure, where strong frameworks already exist to foster accountability for datasets <cit.>. 
Interactive model cards <cit.> offer a promising step towards a human-centered approach to documentation. Second, corporate capture and user lock-in are well-known strategies by which companies exercise control over scientific results and research infrastructure. In the age of large language models, this is amplified by the possibility to extract human labour and repackage it in amiable conversational formats. Openness not only aligns with principles of sound and ethical scholarship <cit.>; it also safeguards transparent and reproducible research <cit.>. Recent work on legal datasets offers an example in responsible data curation with insights that may be more broadly applicable <cit.>. Third, technology is never a fait accompli unless we make it so. It is one of the achievements of publicly funded science that it can afford to not jump on the bandwagon and instead make room for reflection <cit.>. Today's language technology landscape offers ample opportunities for what philosopher Ivan Illich has called counterfoil research: “Counterfoil research must clarify and dramatize the relationship of people to their tools. It ought to hold constantly before the public the resources that are available and the consequences of their use in various ways. It should impress on people the existence of any trend that threatens one of the major balances of which life depends” <cit.>. Among the consequences of unleashing proprietary LLM + RLHF models are untold harms to workers exploited in labeling data; energy demands of computational resources <cit.>; and tidal waves of plausible-looking text generated without regard for truth value (technically, bullshit <cit.>). One possible outcome of the kind of deeper understanding fostered by openness is a call for responsibly limited technology <cit.>. The spectre of regulation (a key way to keep corporate powers in check) is a powerful incentive for companies to keep things proprietary and so shield them from scrutiny. The systems we have surveyed here provide elements of a solution. Open to various degrees, they provide ways to build reproducible workflows, chart resource costs, and lessen reliance on corporate whims. § CONCLUSION Openness is not the full solution to the scientific and ethical challenges of conversational text generators. Open data will not mitigate the harmful consequences of thoughtless deployment of large language models, nor the questionable copyright implications of scraping all publicly available data from the internet. However, openness does make original research possible, including efforts to build reproducible workflows and understand the fundamentals of LLM + RLHF architectures. Openness also enables checks and balances, fostering a culture of accountability for data and its curation, and for models and their deployment. We hope that our work provides a small step in this direction. This research is funded by Dutch Research Council (NWO) grant 016.vidi.185.205 to MD. For the purpose of Open Access the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. ACM-Reference-Format
http://arxiv.org/abs/2307.06816v1
20230710024453
Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder
[ "SiHun Lee", "Sangmin Lee", "Kijoo Jang", "Haeseong Cho", "SangJoon Shin" ]
cs.LG
[ "cs.LG", "physics.data-an", "physics.flu-dyn" ]
[Short title: Data-driven Nonlinear pROM using Deep Hierarchical VAE] Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder

SiHun Lee^1 ([email protected]), Sangmin Lee^1 ([email protected]), Kijoo Jang^1 ([email protected]), Haeseong Cho^2 ([email protected]), SangJoon Shin^{1,3} (corresponding author, [email protected])

^1 Department of Aerospace Engineering, Seoul National University, Seoul, 08226, Republic of Korea
^2 Department of Aerospace Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea
^3 Institute of Advanced Aerospace Technology, Seoul National University, Seoul, 08226, Republic of Korea

A data-driven parametric model order reduction (MOR) method using a deep artificial neural network is proposed. The present network, the least-squares hierarchical variational autoencoder (LSH-VAE), is capable of performing nonlinear MOR for the parametric interpolation of a nonlinear dynamic system with a significant number of degrees of freedom. LSH-VAE exploits two major changes to the existing networks: a hierarchical deep structure and a hybrid weighted, probabilistic loss function. These enhancements result in significantly improved accuracy and stability compared with the conventional nonlinear MOR methods, the autoencoder and the variational autoencoder. Building on LSH-VAE, a parametric MOR framework is presented based on spherical linear interpolation of the latent manifold. The present framework is validated and evaluated on three nonlinear and multiphysics dynamic systems. First, the framework is evaluated on the fluid-structure interaction benchmark problem to assess its efficiency and accuracy. Then, a highly nonlinear aeroelastic phenomenon, limit cycle oscillation, is analyzed. Finally, the framework is applied to a three-dimensional fluid flow to demonstrate its capability of efficiently analyzing a very large number of degrees of freedom. The performance of LSH-VAE is emphasized by comparing its results against those of the widely used nonlinear MOR methods, the convolutional autoencoder and β-VAE. The present framework exhibits significantly enhanced accuracy relative to the conventional methods while still delivering a large speed-up factor.

§ INTRODUCTION

Modern high-fidelity, nonlinear computational analysis is mostly computationally intensive in terms of time and memory. In particular, many multiphysics analyses adopt a partitioned method in which the solvers for each type of physics are executed separately. Such an approach also requires interpolating data between the different discretizations and iterating within each time step, demanding even more intensive computation. Consequently, model order reduction (MOR) has been suggested to alleviate the computational time and memory consumption. Two types of MOR frameworks exist: intrusive and non-intrusive. Intrusive MOR depends on the governing equation to construct the reduced bases. Galerkin projection is one of the most widely used approaches, which projects an ensemble of full-order model (FOM) results onto the governing equation <cit.>. However, a parametric analysis may become extremely challenging when the algorithm is not explicitly established, as it manipulates the governing equation directly <cit.>. Instead, a completely data-driven approach, non-intrusive MOR (NIMOR), may be considered. NIMOR aims to discover the patterns embedded in the FOM dataset and rescale them to a much smaller dimensionality.
Unlike intrusive MOR, NIMOR is independent of the governing equation, making it to be extremely versatile. Among MOR methods, linear subspace MOR (LS-MOR) has been widely considered as they are mathematically rigorous and efficient. LS-MOR has been successfully employed in fluid dynamics, flow control, structural dynamics, aeroelasticity, and fluid-structure interaction (FSI) <cit.>. However, LS-MOR may require an excessive number of the subspaces to accurately represent a nonlinear, complex FOM. For example, in complex turbulent fluid flows, proper orthogonal decomposition (POD) extracts its modes with respect to the energy ratio and details are filtered out <cit.>. Those details are usually excluded because they contain very small energy and the corresponding coefficients are quite random. LS-MOR methods are generally known to be less effective on advection-dominated, sharp-gradient, multiphysics systems, and especially systems with slowly decaying Kolmogorov n-width <cit.>. Recent exponential development in the field of machine learning has enabled neural networks to be used for MOR. Specifically, autoencoder has become a viable nonlinear MOR method where a shallow, well-trained autoencoder with a linear activation function is known to behave similarly to POD <cit.>. Instead of the linear activation functions, many autoencoders adopt nonlinear activation functions, using them to generate nonlinear subspace <cit.>. Such an autoencoder-based method has been implemented widely to reduce the dimensionality of various engineering problems including fluid dynamics, convection problems, and structural dynamics <cit.>. However, the performance of an autoencoder as a generative ANN is known to be quite limited <cit.>. The deterministic aspect of its loss function, which was designed to only reconstruct the input, limits autoencoders to generate diverse outputs. Attempts to enhance the generative capability have led to the development of the variational autoencoder (VAE) and generative adversarial network (GAN) <cit.>. These methods implement probabilistic loss functions that construct a dense and smooth latent space. Between the two alternatives, VAE is selected for use in this study owing to its stable training property <cit.>. VAE has been widely studied for use in the field of computer vision but it has also been used to interpolate dynamic systems <cit.>. VAE in its simplest form, vanilla VAE, is capable of generating data of significantly superior quality compared with the autoencoder. However, VAE commonly suffers from a phenomenon known as posterior collapse, where the generative model learns to ignore a subset of the latent variables <cit.>. The posterior collapse was easily alleviated by applying a technique known as Kullback-Leibler divergence (KL divergence) annealing, or β-VAE <cit.>. Another problem with vanilla VAE is that it is restricted to a shallow network, limiting its expressiveness. Vanilla VAE tends to perform worse as the network becomes deeper due to the loss of long-range correlation and its performance was found to be insufficient when complex data were processed <cit.>. Deep hierarchical VAEs, such as the LVAE, IAF-VAE, and NVAE, have been developed to enhance the performance of vanilla VAE <cit.>. These VAEs mainly adopt a type of residual cells that connect the encoder and decoder directly without passing through the latent space. 
Similar to U-nets, the skip connections allow bidirectional information sharing between the encoder and decoder, thereby preventing the loss of long-range correlation. Recently, various types of VAEs are being adopted as a nonlinear MOR method owing to their superior generative capability compared to conventional autoencoders. VAEs have been adopted on flow problems <cit.>, transonic flow <cit.>, numerics <cit.>, biology <cit.>, brain MRI images <cit.>, and anomaly detection <cit.>. While earlier studies adopt the simplest convolutional VAE, many recent studies consider β-VAE due to its near-orthogonal latent space <cit.>. Previous studies show that β-VAE may successfully construct nonlinear subspace, but the majority of networks used in those studies were quite shallow. The use of shallow networks may result in insufficient expressiveness if the input data consists of a large number of DOF and exhibits a complex response. Instead, a deep hierarchical VAE is proposed, least-squares hierarchical VAE (LSH-VAE) for nonlinear MOR of a dynamic system. LSH-VAE is a very deep hierarchical network that incorporates a modified loss function similar to that of β-VAE. The deep hierarchical structure enables a very deep, stable network (>100 layers) with highly expressive and accurate interpolation results. The modified loss function consists of a hybrid weighted least-squares and Kullback-Leibler divergence function that alleviates posterior collapse and enhances orthogonality of the latent space <cit.>. The least-squares error in the loss function is also known to enhance the accuracy when used on the continuous dataset <cit.>. There has been no report on a very deep VAE (>100 layers) implemented for nonlinear MOR. The present framework is validated by solving the following three problems. First, a standard two-dimensional FSI benchmark problem developed by Turek and Hron will be exemplified <cit.>. Then, the highly nonlinear aeroelastic phenomenon of limit cycle oscillation (LCO) will be considered to examine the accuracy of the proposed framework under nonlinearity. Finally, the flow surrounding a three-dimensional cylinder is to be analyzed to establish the capability of the current framework to accommodate a system with a significantly large number of degrees of freedom. The computational efficiency and accuracy will be assessed as well as comparison to the existing nonlinear MOR methods will be presented. § MACHINE-LEARNING METHODS This section provides the theoretical background of the machine learning methods. Based on the existing convolutional autoencoder and β-VAE, the formulation of the proposed network, LSH-VAE is presented. §.§ Convolutional autoencoder (CAE) A convolutional autoencoder (CAE) is an ANN that is trained to output data that are similar to its input. The typical architecture of the CAE, shown in Fig. <ref>, enables the encoder to compress the input data into a smaller latent dimensionality. The decoder then expands the latent code back to its original dimensionality. By training both the encoder and decoder, CAE learns to extract important features of the input dataset. The latent codes contain the embedded features recognized by the CAE that can be used as the reduced bases in the ROM. The interpolation of data using CAE is conducted by interpolating the latent codes. The interpolated latent code contains the interpolated features, which leads to the interpolation of the input data. The loss function of CAE is quite intuitive. 
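To make the encoder–decoder reduction described above concrete before the loss is written out symbolically, the following is a minimal, illustrative sketch of a convolutional autoencoder for snapshot data, assuming PyTorch. The layer widths, kernel sizes and latent dimension are assumptions chosen for illustration, not the settings used by the authors.

# Minimal CAE sketch for nonlinear MOR of transient snapshots.
# PyTorch's Conv1d expects (batch, channels, length), so the DOFs are treated
# as channels and the time steps as the convolutional (length) dimension.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, n_dof: int, n_t: int, latent_dim: int = 16):
        super().__init__()
        # Encoder Phi: (batch, n_dof, n_t) -> (batch, latent_dim)
        self.encoder = nn.Sequential(
            nn.Conv1d(n_dof, 64, kernel_size=5, padding=2), nn.ELU(),
            nn.Conv1d(64, 32, kernel_size=5, padding=2), nn.ELU(),
            nn.Flatten(),
            nn.Linear(32 * n_t, latent_dim),
        )
        # Decoder Psi: (batch, latent_dim) -> (batch, n_dof, n_t)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * n_t), nn.ELU(),
            nn.Unflatten(1, (32, n_t)),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ELU(),
            nn.Conv1d(64, n_dof, kernel_size=5, padding=2),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent code = reduced coordinates
        return self.decoder(z), z

# One reconstruction (MSE) training step on a dummy snapshot ensemble.
model = CAE(n_dof=8, n_t=200)
x = torch.randn(4, 8, 200)
y, z = model(x)
loss = nn.functional.mse_loss(y, x)  # L = MSE(Psi(Phi(x)) - x)
loss.backward()

The latent vector z plays the role of the reduced basis coordinates; interpolating it, rather than the full field, is what yields the reduced-order interpolation described in the text.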
CAE takes the input, x, and passes it through the encoder, Φ, to obtain the latent vector, z. Then, the decoder, Ψ, receives the latent vector and generates the output, y. The output, y, is compared against the input, x, using the mean squared error (MSE) loss function. In this way, the CAE is trained such that the difference between y and x is reduced, aiming for a more accurate reconstruction of the input. The equations for the encoder and decoder network are presented in Eq. (<ref>), where the loss function is shown in Eq. (<ref>). z=Φ(x),   y = Ψ(z) L = MSE(Ψ(Φ(x))-x) The simplest form of CAE, known as the vanilla CAE, has been shown to produce unsatisfactory interpolation outcomes <cit.>. Hence, derivatives thereof such as VAE, and GAN may be utilized to enhance the performance. §.§ Variational autoencoder (VAE) VAE and autoencoder share a similar architecture. The largest difference lies in that the encoder of VAE utilizes probabilistic latent values instead of discrete latent codes. The probabilistic encoder models the latent feature probability distribution. The resultant latent space is continuous and smooth, enabling higher quality generated outcomes. The encoder of VAE extracts the mean, μ, and the variance, σ, which are used to generate the latent code, z. A typical VAE structure can be observed in Figure <ref>. VAE aims to efficiently infer the intractable posterior distribution, p(z | x). It is performed by adopting an approximate posterior, q(z | x), because determining the true posterior is quite challenging. Here, the encoder or inference network is represented by q(z | x), whereas the decoder network is denoted as p(x | z). Kullback-Leibler (KL) divergence is the expectation of the difference between two distributions, which is always a positive value. KL divergence between the approximate and the real posterior is written as Eq. (<ref>). D_KL(q(z | x) || p(z | x))=-∫ q(z | x)log(p(z | x)/q(z | x))dz≥ 0 Applying Bayes' theorem to Eq. (<ref>) yields Eq. (<ref>). D_KL(q(z | x) || p(z | x)) = -∫ q(z | x) log(p(x | z)p(z)/q(z | x)p(x)) dz = -∫ q(z | x) log(p(x | z)p(z)/q(z | x)) dz + log p(x)≥ 0 Equation (<ref>) can be rewritten as Eq. (<ref>). Applying the rules of logarithm to Eq. (<ref>) will yield Eq. (<ref>). log p(x) ≥∫ q(z | x)logp(x | z)p(z)/q(z | x)dz log p(x) ≥∫ q(z | x) log(p(z)/q(z | x))dz + ∫ q(z | x)log p(x | z) dz ≥𝔼_q(z | x)[log p(x | z)]-D_KL(q(z | x) || p(z)) The right hand side of Eq. (<ref>) is the evidence lower bound (ELBO). VAE aims to maximize ELBO which maximizes the logarithmic probability of the data by proxy. Following the convention of minimizing the loss function, the right hand side of Eq. (<ref>) is converted as Eq. (<ref>), which is the goal of VAE. min[ -𝔼_q(z | x)[log p(x | z)]+ D_KL(q(z | x) || p(z)) ] The goal of VAE is to minimize both the reconstruction and KL divergence loss. In Eq. (<ref>), the first term corresponds to the reconstruction loss and the second term corresponds to KL divergence loss. KL divergence loss enforces the decoder (approximate posterior) to become similar to the inverse of the encoder. The loss function in Eq. (<ref>) has to be differentiable to minimize it during the training. Usually, KLD term can be integrated analytically <cit.>; however, the reconstruction loss is not directly differentiable. To enforce the reconstruction loss to be differentiable, the reparameterization technique is adopted <cit.>. First, Gaussian sampled random noise, ε will be introduced. The latent code z, is formulated as shown in Eq. 
(<ref>), introducing the mean and standard deviation to the equation. z=μ+(σ×ε), ε∼ N(0,1) Since the latent code is formulated as Eq. (<ref>), KL divergence in Eq. (<ref>) is rewritten as Eq. (<ref>), assuming the posterior and prior follow the Gaussian distribution. D_KL(q(z| x)|| p(z)) = 1/2∑(σ^2+μ^2-(log(σ^2)+1)) The latent code with the reparameterization technique enforces the latent space to be stochastically determined. The reparameterization enables the reconstruction loss to be differentiable by Monte Carlo method. For further details and step-by-step derivation of the VAE loss function, reference can be found in works by Kingma and Odaibo <cit.>. §.§ Least-squares hierarchical variational autoencoder (LSH-VAE) Conventional vanilla VAE is limited to shallow networks due to vanishing gradients and the loss of long-range correlation. However, shallow networks may lack expressiveness on complex systems with a significant number of DOFs. In this study, a deep VAE with a hierarchical structure is proposed to enhance the performance. Specifically, to alleviate the loss of long-range correlation and stabilize the training process of a very deep network. The hierarchical structure creates direct passages between the earlier layers of the encoder and the latter layers of the decoder, circumventing the middle layers. Those direct passages enable bidirectional information sharing between the encoder and decoder network. The bidirectional information enables the earlier layers of the VAE to greatly affect the outcome, thus, alleviating the loss of long-range correlation. The diagram in Fig. <ref> shows the hierarchical structure of LSH-VAE. In the hierarchical VAE, the latent variables are divided into L groups. By the divided latent dimension, the prior and posterior distributions are rewritten as in Eq. (<ref>) and Eq. (<ref>). p(z)=p(z_L) ∏_i=1^L-1 p(z_i| z_i+1) q(z | x)=q(z_1| x) ∏_i=2^L q(z_i| z_i-1) p(z_i| z_i+1)=𝒩(z_i|μ(z_i+1), σ^2(z_i+1)) p(z_L)=𝒩(z_L| 0, I) q(z_i| z_i-1)=𝒩(z_i|μ(z_i-1), σ^2(z_i-1)) q(z_1| x)=𝒩(z_1|μ(x), σ^2(x)) The loss function for hierarchical VAE is shown in Eq. (<ref>), which is obtained by computing the KL divergence separately for each group. By breaking down the KL divergence into groups, bidirectional information flows are created between the inference and generative network. Detailed descriptions about the deep hierarchical structure of VAE can be found in <cit.>. min [ -𝔼_q(z | x)[log p(x | z)]+ D_K L(q(z | x) | p(z)) +∑_i=1^L-1𝔼_q(z_<i| x)[D_K L(q(z_i| z_<i, x) | p(z_i| z_>i))]] The present LSH-VAE adopts hierarchical structures motivated by LVAE, IAF-VAE, and NVAE <cit.>. The latent codes in the hierarchical VAE are formed by both bottom-up and top-down information. The latent codes of each of the groups output shared information (from the encoder and decoder) to the next decoder block. Because the information of the encoder and decoder network is shared via latent code, the network delivers higher performance. Upon the hierarchical structure, LSH-VAE implements a hybrid weighted loss function. The loss function consists of the mean squared error (MSE) and KL divergence instead of conventional binary cross entropy. The use of MSE as a reconstruction error has been known to be successful for continuous datasets <cit.>. The loss function of LSH-VAE is shown in Eq. (<ref>), where the coefficients α and β denote the weights of the MSE and KL divergence, respectively. 
min _ϕ, θ [α MSE(x, x̃)+ β D_K L(q(z | x) | p(z)) +∑_i=1^L-1𝔼_q(z_<i| x)[β D_K L(q(z_i| z_<i, x) | p(z_i| z_>i))]] Usually, the weights α and β are set to be α / β_target≈ 10^6. During the training, α is a fixed value whereas β is a variable that varies with respect to the epochs. The variable β is implemented to prevent posterior collapse in which some latent variables become inactive. This method is known as KL-annealing or β-VAE, where β is formulated as Eq. (<ref>) <cit.>. β = 1× 10^-4β_target if epoch <0.3n_epochs β_targetepoch/n_epochs if epoch >0.3n_epochs During the training, β is assigned a low value at the start such that LSH-VAE behaves as an autoencoder. During the first few epochs, input data will be mapped on the latent space. Beyond a few prescribed epochs, β will be gradually ramped up such that LSH-VAE may behave as a VAE, generating smooth latent space. § PRESENT FRAMEWORK §.§ Architecture of the least-squares hierarchical VAE (LSH-VAE) LSH-VAE adopts a one-dimensional (1D) convolutional layer to accommodate the transient response of the unstructured grids. The use of a 1D convolutional layer enables the temporal continuity of the physical variables to be considered. The encoder and decoder of the LSH-VAE consist of the blocks discussed in the previous section, where a detailed schematic of these blocks is shown in Fig. <ref>. Being a deep neural network (DNN), LSH-VAE encoder and decoder blocks are composed of stacks of multiple layers. These layers consist of the following layers: spectral normalization (SN), 1D convolution, dense, exponential linear unit (ELU), Swish, and batch normalization (BN). Swish, and ELU nonlinear activation functions are chosen as their continuous derivatives enhance the stability of a DNN <cit.>. The LSH-VAE implements a normalization-activation sequence instead of the conventional activation-normalization sequence. Such sequence is known to deliver benign performance empirically when used before the convolutional computation <cit.>. The output of the encoder block is branched in three ways. The first branch connects to the input of the next block and the remaining two branches form μ, and σ. The encoder latent code is formulated by reparameterizing μ, and σ. The reparameterized latent code and ELU layer infer bottom-up information transfer, shown in green in Fig. <ref>. In the current configuration, the decoder network is significantly deeper and more complex than the encoder network. The deep decoder network enables an expressive output when accompanied by a system with many DOFs. The decoder network receives two inputs: top-down information from the predecessor decoder block and encoder-decoder shared information from the latent code. Through a series of layers, the decoder outputs top-down information, shown in blue. The decoder block generates the decoder latent code and input for the next block. The encoder latent code and the decoder latent code are added to generate shared latent code, z^i. The shared latent code contains both top-down and bottom-up information, enabling bidirectional information sharing. §.§ Preprocessing dataset Acquiring many FOM samples may be quite cumbersome. In particular, many-queried FOM computations are extremely time-consuming if FOM is highly nonlinear, includes multiphysics, and involves a significant number of DOFs. Acquiring those FOM data through experiments and simulations is considered prohibitive for computational, financial reasons. 
Instead, data augmentation is considered to sample sparsely and expand the amount of training data. A larger amount of training data improves the generalization of the ANN and thus enhances the accuracy. Similar to the data augmentation typically performed on images, the pre-acquired FOM results are processed using the following three methods. First, the temporal data are resampled by shortening the time step, i.e., frequency elongation. Then, the training data are augmented by changing the amplitude and adding a random offset within the bound of ±30% for every epoch. Training the ANN using the augmented data ensures that the ANN is effectively trained against a very large dataset, resulting in a high-performance network.

§.§ LSH-VAE training and interpolation

The current framework performs MOR directly on the FOM results. LSH-VAE employs 1D convolutional layers, which require a three-dimensional input of the format (batch, sequence, channel). In the current configuration, the temporal continuity of the FOM results is considered in the convolutional dimension. The resultant input composition of LSH-VAE becomes (batch, N_t, N_DOF), where N_t denotes the number of time steps and N_DOF denotes the number of DOFs in the dynamic system. LSH-VAE receives such input and compresses it into latent vectors via the encoder. The dimensionality change throughout LSH-VAE is expressed in Eq. <ref>, where N_i represents the latent dimension in the i-th latent group. The total latent dimension, ∑ N_i, is much smaller than the FOM dimension, achieving MOR. (batch, N_t, N_DOF) → (batch, ∑ N_i) → (batch, N_t, N_DOF) The training algorithm for LSH-VAE is shown in Algorithm <ref>. The algorithm starts by normalizing the physical variables of interest, v. v is normalized to the range of [-0.7, 0.7] for each DOF by the normalizing function, N(). The normalized variable is then augmented by resampling for N_A instances. Then, the training dataset, x_train, is constructed by concatenating the original normalized variable with the augmented ones. The training dataset of the network becomes x_train = [x, R(x)_1, R(x)_2, ..., R(x)_N_A], where R(x)_n denotes the resampled normalized variable of interest. The training dataset is further augmented for amplitude and offset. The amplitude and offset augmentation is performed using random values for every epoch. The network therefore receives a different input in every epoch, enabling it to be trained against a very large dataset. After the data augmentation is completed, the encoder and the decoder networks are trained. After the decoder is trained, the loss function can be obtained by Eq. <ref>. The training of LSH-VAE is optimized by the Adamax optimizer, which has shown good performance compared with the conventional Adam and SGD optimizers. Generative ANNs usually require the latent vectors to be sought by an explicit search; this is a consequence of the probabilistic formulation used to parameterize the latent vector. However, we empirically found that sufficient epochs and a small number of parameters obviate the need for latent searching. In this study, rather than attempting a latent search, the latent vectors are obtained directly as the mean values output by the encoder network. Upon acquiring the latent vectors, slerp interpolation is performed to obtain the targeted latent vector. The latent space created by VAEs is in the form of a well-structured, multi-dimensional hypersphere, which enables complex operations by vector arithmetic <cit.>.
This is possible because the reparameterization trick introduces a Gaussian random variable, which contributes to the vector length and angle in the latent hypersphere. The slerp interpolation shown in Algorithm <ref> not only interpolates the rotation angle of the vectors, but also interpolates the arc length. Such slerp interpolation enables the latent vectors to be interpolated following the path of the complex latent manifold. The use of slerp interpolation has been widely accepted for performing latent interpolation <cit.>.

§ NUMERICAL RESULTS

This section presents the numerical results obtained by the proposed framework. First, the framework is applied to solve the FSI benchmark problem previously developed by Turek and Hron <cit.>. The accuracy of the current method is evaluated and compared against that obtained by the conventional nonlinear MOR method, CAE. Then, the proposed framework is examined on a wing section that undergoes limit cycle oscillation (LCO). The LCO analysis is performed to evaluate the accuracy of the proposed framework on a nonlinear multiphysics phenomenon. Last, the applicability of LSH-VAE to a system with many DOFs is demonstrated by analyzing a three-dimensional fluid flow. The numerical results presented in this paper are obtained by intentionally sampling a small number of initial FOM results. Sparse sampling is adopted because, when the parameter space is sampled densely, an ANN that merely replicates its training data already attains sufficient accuracy. In addition, sparse sampling is preferred because dense, repeated computations on a nonlinear system with many DOFs are rather unrealistic. For all of the results, the same LSH-VAE network is used for each variable of interest. The hyperparameters used for the training are shown in Table <ref>. In Table <ref>, the first value for the latent dimension criterion denotes the latent dimension in which the interpolation is performed. The latter value denotes the latent dimension used for information sharing between the encoder and decoder networks. The LSH-VAE used for the following numerical results consists of 7 encoder and decoder blocks, with a total of 107 layers. While detailed optimization of the hyperparameters would yield better accuracy, such a procedure is not performed, in order to emphasize the generality of the framework. However, different batch sizes are used considering the number of DOFs, limited by the VRAM of the GPU. For all of the results presented in this paper, computations are carried out on an AMD 3950X CPU to obtain the FOM results. The ANNs are trained using an NVIDIA GeForce RTX 3090 GPU.

§.§ Turek-Hron FSI benchmark

§.§.§ Description of the analysis

The widely accepted FSI benchmark developed by Turek and Hron is described in this section <cit.>. The benchmark problem consists of a rigid cylinder with a diameter of 0.1 m and a highly flexible tail. The fluid flows from the inlet to the outlet, with laminar separation occurring behind the cylinder. The von Kármán vortex street created by the flow separation excites the tail, which exhibits a large deflection. A hyperbolic inlet profile is used to account for the no-slip wall boundary condition at the upper and lower boundaries of the computational domain. A detailed schematic of the analysis is shown in Fig. <ref>. The current framework requires a few parametric initial FOM samples to extract the embedded patterns. For the Turek-Hron FSI benchmark problem, seven initial FOM results are collected.
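Before the benchmark details are given, the following sketch illustrates the slerp latent interpolation described in the previous section (Algorithm <ref>). It is one plausible reading of the procedure, interpolating both the angle and the norm of two latent vectors; the latent dimension and the function name are illustrative assumptions, not the authors' exact implementation.

# Rough sketch of spherical linear (slerp) interpolation between two latent
# vectors of trained parameter samples, used to target an unseen parameter.
import numpy as np

def slerp_latent(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Interpolate between latent vectors z0 (t=0) and z1 (t=1)."""
    n0, n1 = np.linalg.norm(z0), np.linalg.norm(z1)
    u0, u1 = z0 / n0, z1 / n1
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))  # angle between codes
    if np.isclose(omega, 0.0):                             # nearly parallel codes
        direction = (1.0 - t) * u0 + t * u1
    else:
        direction = (np.sin((1.0 - t) * omega) * u0
                     + np.sin(t * omega) * u1) / np.sin(omega)
    radius = (1.0 - t) * n0 + t * n1                       # blend the norms (arc length)
    return radius * direction

# Example: latent codes of two neighbouring training parameters; t = 0.5
# targets the midpoint parameter between them.
z_a, z_b = np.random.randn(32), np.random.randn(32)
z_target = slerp_latent(z_a, z_b, 0.5)

The interpolated code z_target is then fed to the decoder to generate the field at the unseen parameter, exactly as described for each test case below.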
The inflow speed was selected as a parameter and speeds ranging from 0.7 m/s to 1.3 m/s, in 0.1 m/s intervals were sampled. The FOM samples are analyzed using Navier-Stokes computational fluid dynamics (CFD) and finite element method (FEM) two-way FSI analysis provided in the commercial software, ANSYS. The flow field is discretized by 29,788 CFD nodes and the flexible body is discretized by 954 FEM nodes. The ensemble of FOM results is constructed by collecting 2 s of the fully converged response in intervals of 0.01 s. The pre-acquired FOM ensemble is then subjected to interpolation by LSH-VAE shown in Table <ref>. After the training of LSH-VAE is completed, the latent code is interpolated. In the present case, the target parameter is selected as the unseen inflow speed of 0.95m/s. The latent code corresponding to 0.95m/s is acquired by the slerp interpolation shown in Algorithm <ref>. The interpolated latent code is then decoded by the decoder network where the resultant interpolated variables are generated. §.§.§ Accuracy and efficiency The accuracy of the current framework is assessed by comparing the results of the ROM against those obtained with the FOM. Five physical variables, dX, dY, u, v, and p are considered for interpolation in this case. Among them, the first two variables denote the grid deformation in x- and y-direction. Using the interpolated variables, the interpolated FSI field will be constructed. The interpolated FSI field and FOM are shown in Fig. <ref>. Evaluation of the results shown in Fig. <ref> verifies that the proposed framework is reasonably accurate. Subsequently, the accuracy of LSH-VAE is compared against that of CAE and β-VAE. For comparison, the CAE and β-VAE networks are constructed using the same hyperparameters that were used for LSH-VAE. The comparison between CAE, β-VAE, and LSH-VAE is performed by comparing the extent to which their results differed from those of FOM. The discrepancy contours of various networks are shown in Fig. <ref>. The minimum and maximum of each variable are matched for the respective variable. Overall, LSH-VAE exhibits the smallest discrepancy while β-VAE performs the worst. Interestingly, the regions that exhibit a relatively larger discrepancy are found to be quite similar for all of the networks. This is caused by the finite number of latent dimensions considered in the generative networks. Small details of FOM would have been neglected in the finite latent representation, which lead to the discrepancy in the similar areas. Another one to note is that the pressure contour of CAE and β-VAE shows a considerably larger discrepancy compared against that by LSH-VAE. This is caused by the large variation between the maximum and minimum values of the pressure. The inability of CAE and β-VAE to generate an expressive output is considered to be the reason for small details being neglected by large variations. Then, the efficiency of the proposed framework is assessed. The computational procedures for the proposed framework comprise four stages and the computational time required for each stage is listed in Table <ref>. For Turek-Hron FSI problem, each FOM query requires 109.0 h whereas the online stage consumes 0.11 h. The proposed framework therefore exhibits a speed-up factor of 990 for each unseen parametric estimation. The expected computational time in terms of the number of computations is shown in Fig. <ref>. 
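As a back-of-the-envelope illustration of the amortization argument behind the speed-up factor quoted above: the per-query FOM and online timings below are taken from the text, while the training time is a placeholder value, since the exact figure appears only in the table.

# Simple cost model: total wall time versus number of unseen parametric queries.
t_fom_per_query = 109.0     # hours per full-order FSI solve (from the text)
t_online_per_query = 0.11   # hours per ROM evaluation (from the text)
n_samples = 7               # initial FOM samples used for training
t_train = 3.5               # hours, placeholder for the LSH-VAE training time

def total_time_fom(n_queries: int) -> float:
    # Querying every new parameter with the full-order model only.
    return t_fom_per_query * n_queries

def total_time_rom(n_queries: int) -> float:
    # One-time offline stage plus cheap online evaluations.
    offline = n_samples * t_fom_per_query + t_train
    return offline + t_online_per_query * n_queries

for n in (1, 5, 10, 20):
    print(n, round(total_time_fom(n), 1), round(total_time_rom(n), 1))

Under these assumed numbers the offline stage is recovered after roughly seven to eight unseen queries, after which each additional query enjoys the quoted speed-up of about 990.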
§.§ Limit cycle oscillations §.§.§ Description of the analysis Limit cycle oscillation (LCO) is a nonlinear periodic oscillation with limited amplitude on an aerodynamic surface. LCO of an aircraft is a highly nonlinear FSI phenomenon that is caused by nonlinearities in both the fluid and structure. Typical causes of LCO include flow separation, transonic shock, geometric nonlinearity, and nonlinear stiffness of the control surface. For an aircraft, LCO may result in structural fatigue in the wings, thus requiring high-fidelity analysis for safety. During the design stage of an aircraft, iterative LCO analysis is performed to satisfy the vibration criterion. Such parametric LCO analysis is considered to be quite cumbersome and tedious as it is highly nonlinear and involves many DOFs. In this section, the proposed framework is used to conduct a simplified nonlinear parametric LCO analysis of a wing section. The wing section considered in this analysis is derived from that reported by O'Neil et al. <cit.>. In it, a two-dimensional wing section was constrained by the pitch and heave springs as shown in Fig. <ref>. The pitch and heave stiffnesses are nonlinear in their cubic terms, which are expressed in Eq. <ref>. LCO is caused by the cubic stiffness in the structure and LCO is observed at the inflow stream speed of 15.5 m/s to 50 m/s. K_α = 2.57(α+500α^3) K_h = 0.09(h+2860h^3)) The inflow speed is chosen as the parameter in this analysis. The initial FOM samples are collected by adjusting the inflow speed from 20 m/s to 45 m/s in increments of 5 m/s. The relevant flow field is discretized by 19,381 nodes and solved using the commercial Navier-Stokes solver, ANSYS. The initial FOM samples are obtained by collecting 2 s of the fully converged response in intervals of 0.01 s. The FOM ensemble is subjected to MOR and interpolation by LSH-VAE. After LSH-VAE is trained, the latent code for the desired parameter is acquired via slerp interpolation. The target parameter is an unseen inflow speed of 32.5 m/s, and the corresponding latent code is interpolated using Algorithm <ref>. The interpolated latent code is then decoded by the decoder and the interpolated FSI field is generated. §.§.§ Evaluation of accuracy and efficiency The accuracy of LSH-VAE is assessed by comparing the ROM results against those produced by FOM. In this case, the five physical variables discussed in the previous section were considered. The interpolated variables were used to generate the FSI field, where the interpolated FSI field and FOM are shown in Fig. <ref>. In Fig. <ref>, the interpolated FSI field constructed by LSH-VAE is found to be accurate. Then, the accuracy of LSH-VAE is compared against that of CAE and β-VAE. The discrepancy contours between LSH-VAE, CAE, and β-VAE are shown in Fig. <ref>. The minimum and maximum of the variable are each matched for the same variable. Similar to Turek-Hron problem, LSH-VAE exhibits the smallest discrepancy. However in this case, β-VAE performed better than CAE. For dX, all networks exhibit a similar discrepancy, as the wing section is constrained in x-direction. Only the pitching motion affects the deformation of surrounding grids in x-direction, resulting in a small variation. dY, however, shows different behavior. The discrepancy is spread evenly as the wing heaves and LSH-VAE shows a significantly reduced discrepancy. Another important point to note is that the discrepancy regarding the pressure is quite small. 
This is due to the stagnation point which creates a concentrated high-pressure region. The efficiency of the proposed framework is also assessed. The computational time required for each stage is summarized in Table <ref>. The offline FOM computation required 280.1 h including six initial FOM sample computations. LSH-VAE training required 3.52 h for the five variables of interest, resulting in a total offline stage of 283.6 h. For the online stage, FSI field reconstruction and saving to disk requires the most time as it requires 0.06 h. The present framework exhibits a speed-up factor of 660 for each unseen parametric estimation. The expected computational time in terms of the unseen parametric queries is shown in Fig. <ref>. §.§ Three-dimensional fluid flow §.§.§ Description of the analysis Finally, fluid flow surrounding a simple stationary three-dimensional (3D) cylinder is analyzed. The analysis of the 3D fluid serves to demonstrate the use of the proposed framework to analyze a system with a significant number of DOFs. A 3D cylinder with a diameter of 1 m was subjected to a uniform inflow, as shown in Fig. <ref>. Similar to Turek-Hron FSI benchmark, a von Kàrmàn vortex is formed behind the cylinder. For CFD analysis, a cuboid computational domain of 20m×10m×10m was discretized into 1,121,000 tetrahedral elements. The Reynolds number of the inflow varied from 100 to 160 in intervals of 10. The initial FOM samples are obtained by using the ANSYS Navier-Stokes solver and 2s of FOM data are collected in intervals of 0.01 s. Then, the LSH-VAE is trained against the FOM ensemble and interpolation is performed with respect to the parameter. After LSH-VAE is trained, the latent code representing the targeted parameter is acquired. The target parameter is selected as an unseen inflow Reynolds number of Re = 125. The latent code corresponding to Re = 125 is acquired by the interpolation shown in Algorithm <ref>. The interpolated latent code is then decoded and the resultant interpolated flow field is generated. §.§.§ Evaluation of the accuracy and efficiency The accuracy of LSH-VAE is assessed by comparing the results of ROM with those obtained using FOM. In this case, four physical variables, u, v, w, and p are considered for the interpolation. Using the interpolated variables, the interpolated flow field is generated. The interpolated and original flow fields are displayed in Fig. <ref>. The interpolated flow field constructed by LSH-VAE is found to be quite accurate. Particularly, the velocity in z-direction, w, is accurately interpolated even though w exhibits quite a complex response. As the initial physical variables are interpolated well, the relationship between the variables is inspected. Comparison against CAE and β-VAE is not conducted in this case as the large number of DOF caused instability of the networks. Instead, the normalized Q-criterion is considered to assess whether the interpolated flow field preserves its vorticity. In Fig.<ref>, the normalized Q-criterion is obtained using the interpolated variables shown in Fig. <ref>. Figure <ref> shows the iso-surface generated based on the normalized Q-criterion. The iso-surface is colored by u-velocity and pressure for visualization. The good agreement in terms of the Q-criterion indicates that LSH-VAE interpolates the direct variables sufficiently well such that the relationship between variables may be well preserved. Lastly, the efficiency of the present framework is assessed. 
The computational time required for each stage is listed in Table <ref>. The offline FOM computation requires 193.7 h including the seven initial FOM samples. LSH-VAE training requires 11.3 h resulting in a total offline stage of 205.0 h. For the online stage, variable reconstruction and writing to disk requires the most time as it required 2.02 h. The proposed framework exhibits a speed-up factor of 14 for each unseen parametric estimation. The expected computational time in terms of queries is as shown in Fig. <ref>. § CONCLUSIONS This paper proposes a nonlinear data-driven parametric MOR framework based on a neural network. The present framework adopts a novel neural network, LSH-VAE, to perform parametric MOR and interpolation. The present validations demonstrates that the LSH-VAE is capable of the parametric interpolation of dynamic system while significantly reducing the computational time. The following results are obtained in this study. * A novel machine-learning method, LSH-VAE, is developed for nonlinear MOR and the parametric interpolation of nonlinear, dynamic systems. * LSH-VAE is assessed on three nonlinear and multiphysics dynamic systems with many DOFs. The proposed framework is proven to be accurate and to significantly reduce the computational time. * Compared against the existing nonlinear MOR methods, convolutional autoencoder and β-VAE, LSH-VAE demonstrates significantly higher accuracy. The performance of LSH-VAE is assessed on three nonlinear dynamic systems: FSI benchmark, LCO, and three-dimensional flow. For all of the systems, LSH-VAE is capable of constructing an accurate parametric ROM. Especially, LSH-VAE exhibited a significantly enhanced accuracy compared to CAE and β-VAE. Also, LSH-VAE is found to be effective as not only did it interpolate the variables well, but it also interpolated the vorticity with high accuracy, which is embedded in the patterns of variables. Upon the accurate parametric MOR, LSH-VAE exhibites a speed-up factor of 990, 660, and 14 respectively. Such results are possible owing to the improvements in the LSH-VAE. First, it adopts a hierarchical structure that enables a much deeper and more stable network. Second, it adopts a hybrid weighted loss function consisting of mean-squared error and KL divergence. The use of mean-squared error improved the performance against continuous datasets while the hybrid weights reduced posterior collapse. Lastly, the use of slerp interpolation instead of linear interpolation in the latent space significantly enhanced the interpolation quality following the complex latent manifolds. However, there still exist a few challenges to be dealt with. First, LSH-VAE may require a significant amount of video random access memory (VRAM) if it is incorporated with an extensive number of DOF. The excessive VRAM requirement stems from its deep structure. By adopting a deep structure, LSH-VAE is capable of generating an expressive result at the cost of training an extensive number of learnable nodes. The excessive VRAM requirements necessitate limiting the batch size for the 3D fluid flow example. Yet, VRAM limitations may be alleviated by adopting parallel computing and utilizing many GPUs. Splitting the DOFs into several groups and merging them after interpolation may also be considered as a solution. Second, extrapolation is limited in the proposed framework. Accurate extrapolation would require dense sampling in the parametric space. 
However, the construction of ROM with sufficiently dense sampling accompanied by an effective latent manifold tracking method would make reasonable extrapolation viable. Finally, the effectiveness of the proposed framework decreases as the FOM becomes simpler and increasing DOFs are involved. An example of this tendency is demonstrated in the 3D fluid flow example where the speed-up factor diminished to 14 compared to 990 and 660 in the previous cases. In the future, the plan is to extend the evaluation of the proposed framework to various multiphysics problems such as the analysis of the heat-structure systems. Considering that the present framework is purely data-driven, LSH-VAE is expected to be used in its current form. In addition, multi-parametric analysis coupled with sampling algorithms such as Latin hypercube will be attempted by adopting conditional tokens in the latent space. Acknowledgments This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (2023R1A2C1007352). § DECLARATIONS The authors declare that they have no conflict of interest.
http://arxiv.org/abs/2307.05622v1
20230711042445
On the gap probability of the tacnode process
[ "Luming Yao", "Lun Zhang" ]
math-ph
[ "math-ph", "math.CA", "math.MP", "math.PR", "33C15, 60B20, 60K35" ]
[1]School of Mathematical Sciences, Fudan University, Shanghai 200433, China. E-mail: {lumingyao, lunzhang}'100fudan.edu.cn. On the gap probability of the tacnode process Luming Yao[1]   and  Lun Zhang[1] August 12, 2023 ============================================= The tacnode process is a universal determinantal point process arising from non-intersecting particle systems and tiling problems. It is the aim of this work to explore the integrable structure and large gap asymptotics for the gap probability of the thinned/unthinned tacnode process over (-s,s). We establish an integral representation of the gap probability in terms of the Hamiltonian associated with a system of differential equations. With the aids of some remarkable differential identities for the Hamiltonian, we also compute large gap asymptotics, up to and including the constant term in the thinned case. As direct applications, we obtain expectation, variance and a central limit theorem for the associated counting function. § INTRODUCTION Since the seminal work of Dyson on the Brownian motion model for eigenvalues of Gaussian unitary ensemble <cit.>, there has been significant interest in ensembles of non-intersecting paths. Among them, non-intersecting Brownian motion models and their variants have been most studied over the last few decades. Besides their intimate connections with a variety of physical, combinatorial and probabilistic models <cit.>, the long-standing interest in non-intersecting Brownian motions is due in large part to the fact that the scaling limits lead to universal determinantal point processes related to random matrix theory and the KPZ universality class. In a typical case, we consider n 1D non-intersecting Brownian motion paths with several prescribed starting and ending points. For any fixed time, the positions of these paths form a determinantal point process. As the number of paths tends to infinity, these paths will, after proper scalings, fill out a region in the time-space plane with a deterministic limit shape. It comes out that the local statistics of this model is governed by sine process in the interior of the shape <cit.>, by the Airy process at the edge of the limit shape <cit.>, and by the Pearcey process at the cusp <cit.>; see also <cit.> for relevant studies. In this paper, we focus on a critical process arising from non-intersecting Brownian motions and random walk paths called tacnode process. This process appears in the case of critical separation, that is, two groups of Brownian motions are asymptotically distributed in two ellipses in the time-space plane which are tangent to each other (critical separation) and create a tacnode point; see Figure <ref> for an illustration. The tacnode process then describes local correlations of the paths around this point. As a determinantal process, the tacnode process is characterized by a two-variable correlation function K_(x,y) called the tacnode kernel and the kernel also depends on some extra parameters relevant to the scalings. The tacnode process was studied by different groups of authors using different techniques. Adler, Ferrari and van Moerbeke <cit.> resolved the tacnode problem for non-intersecting random walks on ℤ (discrete space and continuous time). Johansson <cit.> gave an integral representation of the tacnode kernel in the continuous time-space setting. Ferrari and Vető <cit.> extended the results of Johansson to the non-symmetric case when the two touching groups of Brownian motions may have different sizes. 
In all these studies, the tacnode kernel is expressed using resolvents and Fredholm determinants of the Airy integral operator. An alternative expression of the tacnode kernel is given by Delvaux, Kuijlaars and the second author <cit.> with the aid of a new 4× 4 matrix-valued Riemann-Hilbert (RH) problem. This RH problem has a remarkable connection with the Hastings-McLeod solution <cit.> of the homogeneous Painlevé II equation q”(x)=2q(x)^3+xq(x). A natural question is then to ask whether all these formulas for the tacnode kernel lead to the same process, although it is generally believed to be the case. The equivalence of the RH formulation of the tacnode kernel in <cit.> and the Airy resolvent type formula of Johansson <cit.> was later established in <cit.>, while the two different Airy type formulas obtained in <cit.> and <cit.> was proved to be equivalent in <cit.> based on an indirect way. Similar to the canonical Sine, Airy and Pearcey point processes, the tacnode process (or its variant) represents a universality class in a wide range of problems in probability and mathematical physics. Some concrete examples include non-intersecting Brownian motions on the unit circle <cit.> and various random tilling models <cit.>, among others. We intend to investigate the gap probability of the tacnode process – a basic object in the theory of point processes. More precisely, let 𝒦_ be the integral operator acting on L^2(-s, s), s≥ 0, with the tacnode kernel K_ and consider the associated Fredholm determinant D(s;γ):=(I-γ𝒦_), where 0< γ≤ 1 is a real parameter. The determinantal structure implies that D(s; 1) can be interpreted as the probability of finding no particles (a.k.a. the gap probability) on the interval (-s,s) for the tacnode process, while the deformed determinant D(s; γ), 0<γ<1, gives us gap probability for the thinned tacnode process. The thinned process is related to the original one by removing each particle independently with probability 1-γ (cf. <cit.>), and according to a general result in <cit.>, the information of D(s; γ) is essential in establishing global rigidity result for the tacnode process. Beyond the fundamental meaning of D(s;γ) just described, our study is also highly motivated by rich structures of Fredholm determinants for canonical universality classes. For instance, the celebrated Tracy-Widom distribution established in <cit.> shows that the Airy-kernel determinant admits an integral representation via the Hastings-McLeod solution of (<ref>), which also leads to a conjecture of the large gap asymptotic formula. This conjecture was rigorously proved in <cit.> using different approaches. The same integral formula holds for the deformed Airy-kernel determinant but in terms of the Ablowitz-Segur solution <cit.> of (<ref>) instead; cf. <cit.>. Large gap asymptotics in this case, however, exhibits a significantly different behavior from the undeformed case, as conjectured in <cit.> and lately proved in <cit.>. Analogous results can be found in <cit.> for the sine-kernel determinant, and in <cit.> for the Pearcey-kernel determinants. Due to the highly transcendental form of the tacnode kernel, it remains an intriguing open problem to explore the integrable structure and large s asymptotics of D(s;γ); see however <cit.> for the transitions between the tacnode process and the Airy, Pearcey processes. It is the aim of this paper to resolve these problems and our results are stated in the next section. 
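For readers unfamiliar with the thinning interpretation invoked above, the following standard identity for determinantal point processes (recorded here as a sketch in the present notation, not as a statement proved in this paper) links the deformed determinant to the counting function N(s) of particles in (-s,s):

% Gap probability of the thinned tacnode process as a deformed determinant:
% each particle is kept independently with probability \gamma, so
\begin{equation*}
  \mathbb{P}\bigl(\text{no particle of the thinned process in } (-s,s)\bigr)
  = \mathbb{E}\bigl[(1-\gamma)^{N(s)}\bigr]
  = \det\bigl(I-\gamma\,\mathcal{K}_{\mathrm{tac}}\big|_{L^2(-s,s)}\bigr)
  = D(s;\gamma),
\end{equation*}
% which reduces to the ordinary gap probability when \gamma = 1.

Setting γ = 1 - e^{-2πν} recovers the generating-function identity used later for the counting statistics of the tacnode process.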
Notations. Throughout this paper, the following notations are frequently used.

* If A is a matrix, then (A)_ij stands for its (i,j)-th entry and A^T stands for its transpose. An unimportant entry of A is denoted by ∗. We use I to denote an identity matrix, whose size might differ in different contexts. To emphasize a k× k identity matrix, we also use the notation I_k.

* It is notationally convenient to denote by E_j,k the 4× 4 elementary matrix whose entries are all 0, except for the (j,k)-entry, which is 1, that is, E_j,k=(δ_l,j δ_k,m)_l,m=1^4, where δ_j,k is the Kronecker delta.

* We denote by D(z_0, r) the open disc centred at z_0 with radius r > 0, i.e., D(z_0, r) := { z∈ℂ : |z-z_0|<r }, and by ∂ D(z_0, r) its boundary. The orientation of ∂ D(z_0, r) is taken in a clockwise manner.

* As usual, the three Pauli matrices {σ_j}_j=1^3 are defined by σ_1=[ 0 1; 1 0 ], σ_2=[ 0 -i; i 0 ], σ_3= [ 1 0; 0 -1 ].

* From time to time, we will encounter functions that depend on the real parameters r_1, r_2, s_1, s_2 and τ. If x(·; r_1, r_2, s_1, s_2, τ) is such a function, we set x̌(·; r_1, r_2, s_1, s_2, τ) = x(·; r_2, r_1, s_2, s_1, τ), ẋ(·; r_1, r_2, s_1, s_2, τ) = x(·; r_1, r_2, s_1, s_2, -τ). Clearly, one has x=x̌ if r_1=r_2 and s_1=s_2, and x=ẋ if τ=0.

§ MAIN RESULTS

§.§ Definition of the tacnode kernel

As mentioned previously, there exist several equivalent formulas for the tacnode kernel. We use the one defined through the following 4 × 4 tacnode RH problem <cit.>.

(a) M(z)=M(z; r_1, r_2, s_1, s_2, τ) is analytic for z ∈ℂ∖Γ_M, where the parameters r_1,r_2,s_1,s_2,τ are real with r_i>0, i=1,2, and Γ_M:=∪_k=0^5 Γ_k ∪{0} with Γ_0= (0,+∞), Γ_1=e^{iφ}(0,+∞), Γ_2=e^{-iφ}(-∞,0), Γ_3= (-∞,0), Γ_4=e^{iφ}(-∞,0), Γ_5=e^{-iφ}(0,+∞), 0<φ<π/3; see Figure <ref> for an illustration of the contour Γ_M.

(b) For z∈Γ_k, k=0,1,…,5, the limiting values M_+(z) = lim_{ζ→ z, ζ on +-side of Γ_k} M(ζ), M_-(z) = lim_{ζ→ z, ζ on --side of Γ_k} M(ζ), exist, where the +-side and --side of Γ_k are the sides which lie on the left and right of Γ_k, respectively, when traversing Γ_k according to its orientation. These limiting values satisfy the jump relation M_+(z) = M_-(z)J_k(z), k=0,…,5, where the jump matrix J_k(z) for each ray Γ_k is shown in Figure <ref>.

(c) As z →∞ with z ∈ℂ∖Γ_M, we have M(z) =( I+M^(1)/z+ O(z^-2) ) diag((-z)^-1/4, z^-1/4, (-z)^1/4, z^1/4) × A diag(e^-θ_1(z)+τ z, e^-θ_2(z)- τ z, e^θ_1(z)+τ z, e^θ_2(z)- τ z), where the matrix M^(1) is independent of z but depends on the parameters, A :=1/√(2)[ 1 0 -i 0; 0 1 0 i; -i 0 1 0; 0 i 0 1 ], θ_1(z) = 2/3 r_1(-z)^3/2 +2 s_1 (-z)^1/2, z∈ℂ∖ [0,∞), θ_2(z) =2/3 r_2 z^3/2 +2 s_2 z^1/2, z∈ℂ∖ (-∞,0].

(d) M(z) is bounded near z=0.

The jump contour of the original tacnode RH problem consists of ten rays emanating from the origin. Here, as in <cit.>, we reduce the number of rays to six by combining the two jumps in each of the open quadrants. The existence of a unique solution to the tacnode RH problem was proved for τ=0 by Delvaux, Kuijlaars and the second author <cit.>, for the symmetric case r_1=r_2=1, s_1=s_2 with general τ by Duits and Geudens <cit.>, and for the non-symmetric case by Delvaux <cit.>. The RH problem for M is related to the Hastings-McLeod solution of the Painlevé II equation (<ref>) through the "residue" term M^(1) in (<ref>). More precisely, the Hastings-McLeod solution and the associated Hamiltonian appear in the top-right 2 × 2 block of the matrix M^(1); see <cit.>.
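As a brief aside (added here for completeness and not spelled out in the surrounding text), the Hastings–McLeod solution referred to above is the distinguished solution of (<ref>) singled out by its Airy-type decay:

% Characterization of the Hastings--McLeod solution of Painleve II:
\begin{equation*}
  q''(x)=2q(x)^{3}+xq(x), \qquad
  q(x)\sim \operatorname{Ai}(x)\ (x\to+\infty), \qquad
  q(x)\sim\sqrt{-x/2}\ (x\to-\infty),
\end{equation*}
% where \operatorname{Ai} denotes the standard Airy function.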
Note that M satisfies the following symmetric relations (see <cit.>): M (-z;r_1,r_2,s_1,s_2,τ) = [ J 0; 0 -J ]M(z) [ J 0; 0 -J ], M (z;r_1,r_2,s_1,s_2,τ)^- T = [ 0 -I_2; I_2 0 ]Ṁ(z) [ 0 I_2; -I_2 0 ], where J = [ 0 1; 1 0 ], M and Ṁ are defined through (<ref>) and (<ref>). It is then readily seen that M^(1)=M^(1)(r_1,r_2,s_1,s_2,τ) satisfies the symmetric relations M^(1) = -[ J 0; 0 -J ]M^(1)[ J 0; 0 -J ], ( M^(1))^ T = -[ 0 -I_2; I_2 0 ]Ṁ^(1)[ 0 I_2; -I_2 0 ]. As a consequence, we have M^(1)_11 = -M^(1)_22=-Ṁ^(1)_33=Ṁ^(1)_44, M^(1)_13 = M^(1)_24=Ṁ^(1)_13, M^(1)_23 = M^(1)_14=Ṁ^(1)_14. Let M be the analytic continuation of the restriction of M in the sector bounded by the rays Γ_1 and Γ_2 to the whole complex plane. The tacnode kernel K_(x,y):=K_(x,y;r_1,r_2,s_1,s_2,τ) is then given in terms of M by <cit.> [There is a misprint in <cit.>. It should be `M(v)^-1M(u)' instead of `M(u)^-1M(v)'.] K_(x,y) = 1/2π (x-y)[ 0 0 1 1 ]M(y)^-1M(x) [ 1; 1; 0; 0 ]. Define F(s;γ)=F(s;γ, r_1, r_2,s_1,s_2, τ):=ln (D(s;γ))=ln(I-γ𝒦_), 0< γ≤ 1. Our first result is an integral representation of F as stated in what follows. §.§ An integral representation of F The integral representation of F involves the Hamiltonian of a system of coupled differential equations. These differential equations are given as follows: { p_1'(s) =- r_1 s p_3(s) -p_1(s)p_5(s)- r_1 p_2(s)q_6(s)- r_1 p_3(s)q_5(s)+p_4(s)p_6(s)-τ p_1(s) + s_1 p_3(s)-p_2(s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)), p_2'(s) = r_2 s p_4(s) + r_2 p_1(s) q_6(s)+p_2(s)p_5(s) +p_3(s)p_6(s)- r_2 p_4(s)q_5(s) +τ p_2(s) + s_2 p_4(s)-p_1(s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)), p_3'(s) =- r_1 p_1(s) +p_3(s)p_5(s)- r_2p_4(s)q_6(s)-τ p_3(s) +p_4(s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)), p_4'(s) = - r_2 p_2(s) + r_1 p_3(s)q_6(s) -p_4(s)p_6(s)+τ p_4(s) +p_3 (s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)), p_5'(s) =- r_1(p_3(s)q_1(s)+p_4(s)q_2(s)), p_6'(s) = r_1 (p_3(s)q_4(s)-p_4(s)q_3(s))+ r_2 (p_1(s)q_2(s)-p_2(s)q_1(s)), q_1'(s) =p_5(s)q_1(s)- r_2 q_2(s)q_6(s)+ r_1 q_3(s)+τ q_1(s) +q_2(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)), q_2'(s) = r_1 q_1(s)q_6(s)-p_5(s)q_2(s)+ r_2q_4(s)-τ q_2(s) +q_1(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)), q_3'(s) = r_1 sq_1(s)+ r_1 q_1(s)q_5(s) -p_6(s) q_2(s)-p_5(s)q_3(s)- r_1q_4(s)q_6(s)- s_1 q_1(s) +τ q_3(s)-q_4(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)), q_4'(s) =- r_2sq_2(s)-p_6(s)q_1(s)+ r_2q_2(s)q_5(s) + r_2 q_3(s)q_6(s)+p_5(s)q_4(s)- s_2q_2(s) -τ q_4(s)-q_3(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)), q_5'(s) =p_1(s)q_1(s)-p_3(s)q_3(s)-p_2(s) q_2(s)+p_4(s) q_4(s), q_6'(s) =-p_4(s)q_1(s)-p_3(s) q_2(s), . where p_k(s)=p_k(s;γ, r_1, r_2,s_1,s_2, τ), q_k(s)=q_k(s;γ, r_1, r_2,s_1,s_2, τ), k=1,…,6, are 12 unknown functions, p_k(s) and q_k(s) are related to p_k(s) and q_k(s) by swapping the parameters r_1↔ r_2 and s_1↔ s_2; see the definition (<ref>). 
By introducing the matrix-valued functions A_0(s) = [ p_5(s)+τ - r_2 q_6(s) r_1 0; r_1 q_6(s) -p_5(s)-τ 0 r_2; r_1 q_5(s)- s_1 -p_6(s) -p_5(s)+τ - r_1 q_6(s); -p_6(s) r_2 q_5(s)- s_2 r_2 q_6(s) p_5(s)-τ ], A_1(s) = [ q_1(s); q_2(s); q_3(s); q_4(s) ][ p_1(s) p_2(s) p_3(s) p_4(s) ], and A_2(s) = [ q_2(s); q_1(s); -q_4(s); -q_3(s) ][ p_2(s) p_1(s) -p_4(s) -p_3(s) ], one can check H(s)=H(s;γ, r_1, r_2,s_1,s_2, τ) := [ p_1(s) p_2(s) p_3(s) p_4(s) ]( (r_1E_3,1-r_2 E_4,2)s + A_0(s)+A_2(s)/2s)[ q_1(s); q_2(s); q_3(s); q_4(s) ] +[ p_2(s) p_1(s) -p_4(s) -p_3(s) ]( (r_1E_3,1-r_2 E_4,2)s - A_0(s)+A_1(s)/2s) [ q_2(s); q_1(s); -q_4(s); -q_3(s) ], with E_i,j, i,j =1,…,4, being the matrices defined in (<ref>), is the Hamiltonian for the above system of differential equations, under the extra condition ∑_k=1^4 p_k(s)q_k(s)=0. That is, we have q_k'(s)=∂ H/∂ p_k, p_k'(s)=-∂ H/∂ q_k, k=1,...,6. For γ∈ [0, 1], with the function F(s;γ) defined in (<ref>), we have, F(s;γ) = ∫_0^s H(t; γ) t, s ∈ (0, ∞), where H is the Hamiltonian (<ref>) associated with a family of special solutions to the system of differential equations (<ref>). Moreover, H(s) satisfies the following asymptotic behaviors: as s → 0^+, H(s)=(1), and as s → +∞, H(s)= -r_1^2 + r_2^2/4 s^2 + (r_1s_1 + r_2 s_2) s - s_1^2-s_2^2 - 1/4s +(s^-2), γ=1, 2 β(r_1 + r_2) s^1/2 - 2 β (s_1+s_2) s^-1/2  - (3 β^2 + β/2cos(2 ϑ(s))+β/2cos(2 ϑ(s)))s^-1+ (s^-3/2), 0 ≤γ <1, where β := 1/2 πln (1-γ), γ∈ [0, 1), and ϑ (s) = ϑ (s;r_1,s_1)= 2 r_1/3 s^3/2-2s_1 s + 3 β/2ln s + βln (8(r_1 - s_1/s)) + Γ (1+β), ϑ(s) = ϑ (s;r_2,s_2), with Γ being the Euler's gamma function. The local behavior of H near the origin in (<ref>) ensures the integral (<ref>) is well-defined. For 0<γ<1, we also derive asymptotics of the family of special solutions in Proposition <ref> below, which plays an important role in asymptotic studies of F. §.§ Large gap asymptotics and applications A direct application of Theorem <ref> is that we can obtain the first few terms in the asymptotic expansion of F(s;γ) as s → +∞ except for the constant term by inserting (<ref>) into (<ref>). For 0<γ<1, we are also able to determine the notoriously difficult constant term. With the function F(s;γ) defined in (<ref>), we have, as s → +∞, F(s;γ) = -r_1^2 + r_2^2/12 s^3 + r_1s_1 + r_2 s_2/2 s^2 - (s_1^2+s_2^2) s - lns/4 + C + (s^-1), γ=1, 4 β (r_1 + r_2)/3 s^3/2 - 4 β (s_1 + s_2) s^1/2 - 3 β^2 lns + 2 ln(G(1+β)G(1-β))   -β^2 ln(64 r_1 r_2)+ (s^-1/2), 0 ≤γ <1, uniformly for r_i>0, i=1,2 and s_i ∈ℝ, i=1,2, where C is an undetermined constant independent of s, β is given in (<ref>) and G(z) is the Barnes G-function. If γ =0, we have that β =0 and G(1+β)=G(1)=1. It is then straightforward to see F(s;0)= (s^-1/2), which matches the fact that F(s;0)=0. If γ=1, our result supports the so-called Forrester-Chen-Eriksen-Tracy conjecture <cit.>. This conjecture claims that the probability E(s) of emptiness over the interval (x^*-s,x^*+s) behaves like exp(-Ks^2α+2) for large positive s with K being some constant, provided the density of state behaves as |x-x^*|^α as x→ x^*. Since the limiting mean density for the non-intersecting Brownian paths at the time of tangency consists of two touching semicircles, it follows that α=1/2 for the tacnode process. Thus, one should have F(s;1)=(s^3), as confirmed in (<ref>). Also, we cannot evaluate explicitly the constant C therein with our method, which in general is a challenging task; cf. <cit.>. Our final result is about counting statistics of the tacnode process. 
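Before turning to it, we indicate how the s-dependent terms in (<ref>) arise from Theorem <ref>. Splitting the integral in (<ref>) at t=1, using H(t)=O(1) as t→ 0^+ on the first piece and the γ=1 expansion in (<ref>) on the second, we find
 ∫_0^s H(t;1) dt = -(r_1^2 + r_2^2) s^3/12 + (r_1 s_1 + r_2 s_2) s^2/2 - (s_1^2+s_2^2) s - (ln s)/4 + O(1), s → +∞,
which accounts for all terms in the first line of (<ref>) except the constant C. The terms in the second line of (<ref>) follow in the same way from the 0 ≤γ <1 expansion in (<ref>); the main additional work in Corollary <ref> is the identification of the constant term.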
To proceed, we denote by N(s) the random variable that counts the number of points in the tacnode process falling into the interval (-s,s), s≥ 0. It is well known that the following generating functionx 𝔼(e^-2πν N(s)) = ∑_k=0^∞ℙ (N(s)=k)e^-2πν k, ν≥ 0, is equal to the deformed Fredholm determinant (I - (1-e^-2πν)𝒦_). This, together with Theorem <ref>, allows us to establish various asymptotic statistical properties of N; see also <cit.> for relevant results about the sine, Airy and Pearcey point determinantal processes. As s → +∞, we have 𝔼(N(s)) =μ (s) + (ln s/s^1/2), (N(s)) = σ (s)^2 + 2 + 2γ_E + ln(64 r_1 r_2)/2 π^2 + ((ln s)^2/s^1/2), where γ_E=-Γ'(1)≈ 0.57721 is Euler’s constant, μ (s) = 2(r_1+r_2)/3 π s^3/2 - 2(s_1 + s_2)/π s^1/2, σ (s)^2 = 3/2 π^2ln s. Furthermore, the random variable N(s) - μ (s)/√(σ (s)^2) converges in distribution to the normal law 𝒩 (0,1)as s → +∞, and for any ϵ > 0, we have lim_a →∞ℙ(sup_s>a|N(s) - μ(s)/ln s| ≤3 √(2)/2 π + ϵ)=1. The probabilistic bound (<ref>) particularly implies that, for large positive s, the counting function of the tacnode process lies in the interval (μ(s)-(3√(2)/(2π)+ϵ) ln s, μ(s)+ (3√(2)/(2π)+ϵ) ln s) with high probability. Organization of the rest of the paper The rest of this paper is devoted to the proofs of our main results. The idea is to relate various derivatives of F to a 4 × 4 RH problem under the general framework <cit.>. In Section <ref>, we connect F/ s to a 4 × 4 RH problem for X with constant jumps. We then derive a lax pair for X in Section <ref>, and some useful differential identities for the Hamiltonian will also be included for later calculation. We perform a Deift-Zhou steepest descent analysis <cit.> on the RH problem for X as s → +∞ in Sections <ref> and <ref> for the cases γ=1 and 0 ≤γ < 1, respectively, and deal with the small positive s case in Section <ref>. After computing the asymptotics of a family of special solutions to the system of differential equations (<ref>) and (<ref>) in Section <ref>, we finally present the proofs of our main results in Section <ref>. § PRELIMINARIES We intend to establish a relation between F / s and an RH problem with constant jumps. To proceed, we note that / sF(s;γ)= -tr((I-γ𝒦_)^-1γ/ s𝒦_) = -(R(s,s)+R(-s,-s)), where R(u,v) stands for the kernel of the resolvent operator. By (<ref>), one readily sees that γ K_ (x,y) = f⃗(x)^ Th⃗(y)/x-y, where f⃗(x)=[ f_1; f_2; f_3; f_4 ]:= M (x) [ 1; 1; 0; 0 ], h⃗(y)=[ h_1; h_2; h_3; h_4 ] := γ/2 πM(y)^- T[ 0; 0; 1; 1 ]. This integrable structure of kernel K_ in the sense of Its et al. <cit.> implies that the resolvent kernel R(u,v) is integrable as well; cf. <cit.>. Indeed, by setting F⃗(u)= [ F_1; F_2; F_3; F_4 ]:=(I-γ𝒦_)^-1f⃗, H⃗(v)=[ H_1; H_2; H_3; H_4 ] :=(I-γ𝒦_)^-1h⃗, we have R(u,v)=F⃗(u)^ TH⃗(v)/u-v. Moreover, the functions F⃗(u) and H⃗(u) are closely related to the following RH problem. (a) Y(z) is a 4× 4 matrix-valued function defined and analytic in ℂ∖ [-s,s], where the orientation is taken from the left to the right. (b) For x∈(-s,s), we have Y_+(x)=Y_-(x)(I-2πf⃗(x)h⃗(x)^ T), where the functions f⃗ and h⃗ are defined in (<ref>). (c) As z →∞, we have Y(z)=I+Y^(1)/z+ (z^-2). where the function Y^(1) is independent of z. (d) As z →± s, we have Y(z) = (ln(z ∓ s)). By <cit.>, it follows that Y(z)=I-∫_-s^sF⃗(w)h⃗(w)^ T/w-z w and F⃗(z)=Y(z)f⃗(z), H⃗(z)=Y(z)^- Th⃗(z). We now make an undressing transformation to arrive at an RH problem that is related to F/ s with the aid of the RH problems for M and Y. 
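Before doing so, we record an observation to be used when evaluating the resolvent kernel on the diagonal. By (<ref>), f⃗(x)^ Th⃗(x) is a constant multiple of [ 1 1 0 0 ]M(x)^ TM(x)^- T[ 0; 0; 1; 1 ]=0, and hence, by (<ref>),
 F⃗(z)^ TH⃗(z)=f⃗(z)^ TY(z)^ TY(z)^- Th⃗(z)=f⃗(z)^ Th⃗(z)=0.
Consequently, the resolvent kernel in (<ref>) extends continuously to the diagonal, with R(z,z)=F⃗'(z)^ TH⃗(z).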
We start with definitions Γ_0^(s):=(s,+∞), Γ_1^(s):=s+e^φ(0,+∞), Γ_2^(s):=-s+e^-φ(-∞,0), Γ_3^(s):= (-∞,-s), Γ_4^(s):=-s+e^φ(-∞,0), Γ_5^(s):=s+e^-φ(0,+∞), 0<φ<π/3. Clearly, the rays Γ_k^(s), k=1,2,4,5, and ℝ divide the whole complex plane into 6 regions Ω_j^(s), j=1,…,6; see Figure <ref> for an illustration. The transformation is defined by X(z) = X(z; s,γ, r_1,r_2,s_1,s_2,τ) = Y(z)M(z), z ∈Ω_1^(s)∪Ω_3^(s)∪Ω_4^(s)∪Ω_6^(s), Y(z)M(z), z ∈Ω_2^(s), Y(z)M(z)[ 1 0 -1 -1; 0 1 -1 -1; 0 0 1 0; 0 0 0 1 ], z ∈Ω_5^(s). On account of the RH problems <ref> and <ref> for M and Y, it is straightforward to check that X satisfies the following RH problem. (a) X(z) is defined and analytic in ℂ∖Γ_X, where Γ_X:=∪^5_j=0Γ_j^(s)∪[-s,s] with the rays Γ_j^(s), j=0,1,…,5, defined in (<ref>) and (<ref>); see Figure <ref> for the orientations of Γ_X. (b) For z ∈Γ_X, X satisfies the jump condition X_+(z)=X_-(z)J_X(z), where J_X(z):={[ [ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ], z∈Γ_0^(s),; I-E_2,1+E_3,1+E_3,4, z∈Γ_1^(s),; I-E_1,2+E_4,2+E_4,3, z∈Γ_2^(s),; [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ], z∈Γ_3^(s),; I+E_1,2+E_4,2-E_4,3, z∈Γ_4^(s),; I+E_2,1+E_3,1-E_3,4, z∈Γ_5^(s),; [ 1 0 1-γ 1-γ; 0 1 1-γ 1-γ; 0 0 1 0; 0 0 0 1 ], z∈ (-s,s). ]. (c)As z →∞ with z∈ℂ∖Γ_X, we have X(z) =( I+X^(1)/z+ (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4) × A ( e^-θ_1(z)+τ z, e^-θ_2(z)- τ z, e^θ_1(z)+τ z,e^θ_2(z)- τ z), where A, θ_1 and θ_2 are defined in (<ref>)–(<ref>), respectively and X^(1) = Y^(1) + M^(1) with Y^(1) and M^(1) given in (<ref>) and (<ref>). (d) As z → s, we have X(z) = X_R(z) [ 1 0 -γ/2πln(z-s) -γ/2πln(z-s); 0 1 -γ/2πln(z-s) -γ/2πln(z-s); 0 0 1 0; 0 0 0 1 ] × I, z ∈Ω_2^(s), [ 1 0 -1 -1; 0 1 -1 -1; 0 0 1 0; 0 0 0 1 ], z ∈Ω_5^(s), where the principal branch is taken for ln(z-s), and X_R(z) is analytic at z=s satisfying X_R(z) = X_R,0(s)(I+X_R,1(s)(z-s)+((z-s)^2) ), z→ s, for some functions X_R,0(s) and X_R,1(s) depending on the parameters r_1,r_2,s_1,s_2,τ and γ. (e) As z → -s, we have X(z) = X_L(z) [ 1 0 -γ/2πln(-z-s) -γ/2πln(-z-s); 0 1 -γ/2πln(-z-s) -γ/2πln(-z-s); 0 0 1 0; 0 0 0 1 ] ×[ 0 1 0 0; 1 0 0 0; 0 0 0 -1; 0 0 -1 0 ], z ∈Ω_2^(s), [ 0 1 1 1; 1 0 1 1; 0 0 0 -1; 0 0 -1 0 ], z ∈Ω_5^(s), where ln(-z-s) is analytic for z∈ℂ∖ [-s,+∞) and X_L(z) is analytic at z=-s satisfying X_L(z) = X_L,0(s)(I+X_L,1(s)(z+s)+((z+s)^2) ), z→ -s, for some functions X_L,0(s) and X_L,1(s) depending on the parameters r_1,r_2,s_1,s_2,τ and γ. The matrix-valued function X(z) satisfies the following symmetric relations: X (-z) = [ J 0; 0 -J ]X(z) [ J 0; 0 -J ], X (z)^- T = [ 0 -I_2; I_2 0 ]Ẋ(z) [ 0 I_2; -I_2 0 ], where J is given in (<ref>), X and Ẋ are defined through (<ref>) and (<ref>). Moreover, the matrix X^(1) = X^(1)(γ, r_1,r_2,s_1,s_2,τ) in (<ref>) satisfies X^(1) = -[ J 0; 0 -J ]X^(1)[ J 0; 0 -J ], (X^(1))^ T = -[ 0 -I_2; I_2 0 ]Ẋ^(1)[ 0 I_2; -I_2 0 ]. One can check that the left and right hand sides of (<ref>) satisfy the same RH problem. Then (<ref>) follows from the uniqueness of the solution to this RH problem. The same argument applies to (<ref>). By substituting the asymptotic behavior of X(z) in (<ref>) into equations (<ref>) and (<ref>), the symmetric relations (<ref>) and (<ref>) follow from a straightforward calculation. This finishes the proof of Proposition <ref>. The connection between the derivatives of F and the RH problem for X is revealed in the following proposition. 
With F defined in (<ref>), we have / s F(s;γ, r_1, r_2,s_1,s_2, τ) =-γ/2 π[ lim_z → s∑_i=3^4∑_j=1^2 (X(z)^-1X'(z))_ij+ lim_z → -s∑_i=3^4∑_j=1^2 (X(z)^-1 X'(z))_ij] , where the limit is taken from Ω_2^(s) and ∂/∂ s_1 F(s;γ, r_1, r_2,s_1,s_2, τ) = 2(X^(1)_13-M^(1)_13), ∂/∂ s_2 F(s;γ, r_1, r_2,s_1,s_2, τ) = 2(X^(1)_13-M^(1)_13), ∂/∂τ F(s;γ, r_1, r_2,s_1,s_2, τ) = -X^(1)_11- X^(1)_11+Ẋ^(1)_11+Ẋ^(1)_11 +M^(1)_11+ M^(1)_11-Ṁ^(1)_11-Ṁ^(1)_11, where X^(1) and M^(1) are given in (<ref>) and (<ref>). Here, for a function x=x(·; r_1, r_2,s_1,s_2, τ), we have ẋ=x(·; r_2, r_1,s_2,s_1, -τ)); see the notations (<ref>) and (<ref>). To prove Proposition <ref>, we need the following lemma. Let M be the unique solution to the tacnode RH problem <ref>, we have ∂ M/∂ s_1 = -2( E_1,3+zE_3,1+[M^(1), E_3,1]) M, ∂ M/∂ s_2 = -2( -E_2,4+zE_4,2+[M^(1), E_4,2]) M, ∂ M/∂τ = ( [ z 0 0 0; 0 -z 0 0; 0 0 z 0; 0 0 0 -z ]+[M^(1), [ 1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 0 0 -1 ]])M, where [A,B] denotes the commutation of two matrices, i.e., [A,B]=AB-BA. Since the RH problem for M has constant jumps, we obtain from the local behavior of M near the origin that (∂ M / ∂ s_1) M^-1 is entire. As z →∞, we find from (<ref>) that ∂ M/∂ s_1 M^-1 = ( I+M^(1)/z+ (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A (-2(-z)^1/2,0,2(-z)^1/2,0 ) × A^-1((-z)^1/4,z^1/4,(-z)^-1/4,z^-1/4) ( I-M^(1)/z+ (z^-2) ) + (z^-1) = -2( E_1,3+zE_3,1+[M^(1), E_3,1]) + (z^-1) Keeping only the polynomial terms in z, we obtain (<ref>). A similar argument yields (<ref>) and (<ref>). This finishes the proof of Lemma <ref>. For z∈Ω_2^(s), we see from (<ref>), (<ref>) and (<ref>) that F⃗(z)=Y(z)f⃗(z)=Y(z)M(z) [ 1; 1; 0; 0 ]= X(z) [ 1; 1; 0; 0 ] and H⃗(z) =Y(z)^- Th⃗(z)= X(z)^- TM(z)^ T·γ/2 πM(z)^- T[ 0; 0; 1; 1 ] =γ/2 π X(z)^- T[ 0; 0; 1; 1 ]. Combining the above formulas and (<ref>), we obtain R(z,z) = γ/2 π∑_i=3^4∑_j=1^2 (X(z)^-1X'(z))_ij, z∈Ω_2^(s). This, together with (<ref>), gives us (<ref>). To show (<ref>), we note from (<ref>) and (<ref>) that ∂f⃗/∂ s_1(x) = -2( E_1,3+xE_3,1+[M^(1), E_3,1])f⃗(x), ∂h⃗/∂ s_1(y) = 2( E_1,3+yE_3,1+[M^(1), E_3,1])^ Th⃗(y). The above equations, together with (<ref>), imply that ∂/∂ s_1(γ K_ (x,y)) = ∂f⃗^ T/∂ s_1(x)h⃗(y)+ f⃗(x)^ T∂h⃗/∂ s_1(y)/x-y =-2 f⃗(x)^ T E_1,3h⃗(y) = -2 f_1(x)h_3(y). Thus, it is readily seen from (<ref>) and (<ref>) that ∂/∂ s_1F(s;γ, r_1, r_2,s_1,s_2, τ) =∂/∂ s_1ln(I-γ𝒦_) = -((I - γ𝒦_)^-1∂/∂ s_1(γ𝒦_)) =2∫_-s^s F_1(v) h_3(v) v On the other hand, from (<ref>) and (<ref>) we have Y^(1) = ∫_-s^s F⃗(v) h⃗(v)^ T v. A combination of the above two equations gives us ∂/∂ s_1 F(s;γ, r_1, r_2,s_1,s_2, τ) = 2 Y^(1)_13. We thus obtain (<ref>) by applying (<ref>) to the above formula. Through a similar calculation, we have ∂/∂ s_2 F(s;γ, r_1, r_2,s_1,s_2, τ) = 2 Y^(1)_24, ∂/∂τ F(s;γ, r_1, r_2,s_1,s_2, τ) = -Y^(1)_11+Y^(1)_22-Y^(1)_33+Y^(1)_44. Substituting (<ref>) into the above equations and using the symmetric relations (<ref>), (<ref>), (<ref>) and (<ref>), we arrive at (<ref>) and (<ref>). This completes the proof of Proposition <ref>. § LAX PAIR EQUATIONS AND DIFFERENTIAL IDENTITIES FOR THE HAMILTONIAN In this section, we will derive a Lax pair for X(z;s) from RH problem <ref>. Several useful differential identities for the associated Hamiltonian will also be presented for later use. 
§.§ The Lax system for X For the matrix-valued function X(z) = X(z;s) defined in (<ref>), we have ∂/∂ z X(z;s) = L(z;s) X(z;s), ∂/∂ s X(z;s) = U(z;s) X(z;s), where L(z;s) =(r_1 E_3,1- r_2 E_4,2)z+A_0(s)+A_1(s)/z-s+A_2(s)/z+s, and U(z;s) = -A_1(s)/z-s+A_2(s)/z+s with the functions A_k(s), k=0,1,2, given in (<ref>)–(<ref>), respectively. Moreover, the functions p_i (s) and q_i(s), i=1,…,6 in the definitions of A_k(s) satisfy the equations (<ref>), (<ref>) and p_3(s)q_1(s)-p_4(s) q_2(s) = - r_1 q_5(s)-p_5(s)^2/ r_1 + r_2 q_6(s) q_6(s)- s_1. The proof is based on the RH problem <ref> for X. Since the jump matrices for X are all independent of z and s, it's easily seen that L(z;s)=∂/∂ z X(z;s) X(z;s)^-1, U(z;s)=∂/∂ s X(z;s) X(z;s)^-1 are analytic in the complex z plane except for possible isolated singularities at z=± s and z=∞. We next calculate the functions L(z;s) and U(z;s) one by one. From the large z behavior of X given in (<ref>), we have, as z →∞, L(z;s)=(r_1 E_3,1- r_2 E_4,2)z+[ τ 0 r_1 0; 0 -τ 0 r_2; - s_1 0 τ 0; 0 - s_2 0 -τ ] + [X^(1),(r_1 E_3,1- r_2 E_4,2)]+(z^-1), where X^(1) is given in (<ref>). This gives us the leading term in (<ref>) and A_0(s)=[ τ 0 r_1 0; 0 -τ 0 r_2; - s_1 0 τ 0; 0 - s_2 0 -τ ] + [X^(1),(r_1 E_3,1- r_2 E_4,2)]. According to the symmetric relation of X^(1) established in (<ref>), it follows that A_0(s)=-[ J 0; 0 -J ]A_0(s) [ J 0; 0 -J ], where J is given in (<ref>). Thus, if we define p_5(s) = r_1 X^(1)_13, p_6(s) = - r_1 X^(1)_43 - r_2 X^(1)_21, q_5(s) = X^(1)_33-X^(1)_11, q_6(s) = X^(1)_14, the formula of A_0(s) in (<ref>) follows directly by combining (<ref>)–(<ref>). If z → s, it follows from (<ref>) that L(z;s) ∼A_1(s)/z-s, where A_1(s) = -γ/2 π X_R,0(s) [ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]X_R,0(s)^-1 with X_R,0(s) given in (<ref>). By setting [ q_1(s); q_2(s); q_3(s); q_4(s) ]=X_R,0(s) [ 1; 1; 0; 0 ] and [ p_1(s); p_2(s); p_3(s); p_4(s) ]=-γ/2 πX_R,0(s)^- T[ 0; 0; 1; 1 ], we obtain the expression of A_1(s) in (<ref>). From (<ref>), we also note that A_1(s) = ∑_k=1^4 q_k(s)p_k(s)=0. If z → -s, we have from (<ref>) that L(z;s) ∼A_2(s)/z+s, where A_2(s) = -γ/2 π X_L,0(s) [ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]X_L,0(s)^-1 with X_L,0(s) given in (<ref>). On account of the symmetric relation (<ref>), it is readily seen that X_L,0(s;r_1, r_2, s_1, s_2, τ)=[ J 0; 0 -J ]X_R,0(s; r_2, r_1, s_2, s_1, τ). This, together with (<ref>) and (<ref>), implies that A_2(s)=[ J 0; 0 -J ]A_1(s)[ J 0; 0 -J ], which leads to the expression for A_2(s) in (<ref>). The computation of U(z;s) is similar. It is easy to check that U(z;s)=(z^-1), z →∞, and U(z;s) ∼ -A_1(s)/z-s, z → s; U(z;s) ∼A_2(s)/z+s, z → -s, where A_1(s) and A_2(s) are given in (<ref>) and (<ref>). The above equations imply the expression of U(z;s) in (<ref>). It remains to establish the various relations satisfied by the functions p_i (s) and q_i(s), i=1,…,6, in the definitions of A_k(s), k=0,1,2. By (<ref>), we have proved (<ref>). The relation (<ref>) follows by computing the (1,3) entry of the (z^-1) term on both sides of (<ref>). To show the differential equations in (<ref>), we recall that the compatibility condition ∂^2/∂ z ∂ s X(z;s) = ∂^2/∂ s ∂ z X(z;s) for the Lax pair (<ref>) is the zero curvature relation ∂/∂ sL(s;z) - ∂/∂ zU(s;z) = A_0'(s)+A_1'(s)/z-s+A_2'(s)/z+s=[U,L]. 
Inserting (<ref>) and (<ref>) into the above equation and taking z →∞, we get A_0'(s) = [A_2(s)-A_1(s), (r_1 E_3,1-r_2 E_4,2)], which leads to p_5'(s)=- r_1(p_3(s)q_1(s)+p_4(s)q_2(s)), p_6'(s)= r_1 (p_3(s)q_4(s)-p_4(s)q_3(s))+ r_2 (p_1(s)q_2(s)-p_2(s)q_1(s)), q_5'(s)=p_1(s)q_1(s)-p_3(s)q_3(s)-p_2(s) q_2(s)+p_4(s) q_4(s), q_6'(s)=-p_4(s)q_1(s)-p_3(s) q_2(s). If we calculate the residue at z=s on the both sides of (<ref>), it is easily seen that A_1'(s)=-[A_1(s), (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s]. On account of (<ref>)–(<ref>), we obtain (∂/∂ zX(z;s)+∂/∂ sX(z;s))X(z;s)^-1=L(z;s)+U(z;s) ∼ (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s, z → s. On the other hand, substituting (<ref>) into the left hand side of the above equation gives us (∂/∂ zX(z;s)+∂/∂ sX(z;s))X(z;s)^-1∼/ s X_R,0(s) · X_R,0(s)^-1, z → s. Therefore, we have from the above two formulas that / s X_R,0(s)=( (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s)X_R,0(s). Recall the definitions of q_k(s), k=1,…,4, given in (<ref>), it is readily seen that [ q_1'(s); q_2'(s); q_3'(s); q_4'(s) ]=( (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s)[ q_1(s); q_2(s); q_3(s); q_4(s) ]. It then follows from (<ref>), (<ref>) and straightforward calculations that { q_1'(s) =p_5(s)q_1(s)- r_2 q_2(s)q_6(s)+ r_1 q_3(s)+τ q_1(s) +q_2(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)), q_2'(s) = r_1 q_1(s)q_6(s)-p_5(s)q_2(s)+ r_2q_4(s)-τ q_2(s) +q_1(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)), q_3'(s) = r_1 sq_1(s)+ r_1 q_1(s)q_5(s) -p_6(s) q_2(s)-p_5(s)q_3(s)- r_1q_4(s)q_6(s)- s_1 q_1(s) +τ q_3(s)-q_4(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)), q_4'(s) =- r_2sq_2(s)-p_6(s)q_1(s)+ r_2q_2(s)q_5(s) + r_2 q_3(s)q_6(s)+p_5(s)q_4(s)- s_2 q_2(s) -τ q_4(s)-q_3(s)/s(p_2(s)q_1(s)+p_1(s)q_2(s) -p_4(s) q_3(s) -p_3(s)q_4(s)). . To show the derivatives of p_i(s), i=1,…,4, we see from (<ref>) and (<ref>) that A_1'(s)= [ q_1'(s); q_2'(s); q_3'(s); q_4'(s) ][ p_1(s) p_2(s) p_3(s) p_4(s) ]+ [ q_1(s); q_2(s); q_3(s); q_4(s) ][ p_1'(s) p_2'(s) p_3'(s) p_4'(s) ] = -[A_1(s), (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s] = -[ q_1(s); q_2(s); q_3(s); q_4(s) ][ p_1(s) p_2(s) p_3(s) p_4(s) ]( (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s) +( (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s)[ q_1(s); q_2(s); q_3(s); q_4(s) ][ p_1(s) p_2(s) p_3(s) p_4(s) ] A combination of this formula and (<ref>) gives us [ p_1'(s) p_2'(s) p_3'(s) p_4'(s) ] = -[ p_1(s) p_2(s) p_3(s) p_4(s) ]( (r_1 E_3,1-r_2 E_4,2)s+A_0(s)+A_2(s)/s), or equivalently, by using (<ref>), (<ref>), { p_1'(s) =- r_1 s p_3(s) -p_1(s)p_5(s)- r_1 p_2(s)q_6(s)- r_1 p_3(s)q_5(s)+p_4(s)p_6(s)-τ p_1(s) + s_1 p_3(s)-p_2(s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)), p_2'(s) = r_2 s p_4(s) + r_2 p_1(s) q_6(s)+p_2(s)p_5(s) +p_3(s)p_6(s)- r_2 p_4(s)q_5(s) +τ p_2(s) + s_2 p_4(s)+p_1(s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)), p_3'(s) =- r_1 p_1(s) +p_3(s)p_5(s)- r_2p_4(s)q_6(s) -τ p_3(s) +p_4(s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)), p_4'(s) = - r_2 p_2(s) + r_1 p_3(s)q_6(s) -p_4(s)p_6(s)+τ p_4(s) +p_3 (s)/s (p_1(s) q_2(s)+p_2(s) q_1(s) - p_3(s) q_4(s)-p_4(s) q_3(s)). . This completes the proof of Proposition <ref>. From the general theory of Jimbo-Miwa-Ueno <cit.>, we have the Hamiltonian associated with the Lax system (<ref>) is given by H(s) = γ/2 π( (X_L,1(s)-X_R,1(s))[ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]) = γ/2 π∑_i=3^4 ∑_j=1^2 (X_L,1(s)-X_R,1(s))_ij, where X_R,1(s) and X_L,1(s) are given in (<ref>) and (<ref>), respectively. 
Taking z → s in the first equation of the Lax pair (<ref>), we obtain from (<ref>) and (<ref>) that the (1) term gives X_R,1(s) = γ/2 π[X_R,1(s), [ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]] + X_R,0(s)^-1[ (r_1 E_3,1-r_2 E_4,2)s + A_0(s)+A_2(s)/2s] X_R,0(s). Similarly, by taking z → -s we see from (<ref>) that X_L,1(s) = γ/2 π[X_L,1(s), [ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]] + X_L,0(s)^-1[- (r_1 E_3,1-r_2 E_4,2)s + A_0(s)-A_1(s)/2s] X_L,0(s). Inserting the above two equations into (<ref>) yields H(s) = -γ/2 π[ 0 0 1 1 ](X_R,0(s)^-1[ (r_1 E_3,1-r_2 E_4,2)s + A_0(s)+A_2(s)/2s] X_R,0(s). . +X_L,0(s)^-1[ (r_1 E_3,1-r_2 E_4,2)s - A_0(s)+A_1(s)/2s] X_L,0(s)) [ 1; 1; 0; 0 ]. Recall the definitions of A_k(s), k=0, 1, 2, and q_i(s), p_i(s), i=1, … 4, given in (<ref>)–(<ref>) and (<ref>), we recover the definition of H given in (<ref>), or equivalently, H(s)=  s( r_1(p_3(s)q_1(s)-p_4(s) q_2(s))+ r_2(p_3(s) q_1(s)-p_4(s)q_2(s)))+p_5(s)(p_1(s)q_1(s) -p_3(s)q_3(s)-p_2(s) q_2(s)+p_4(s) q_4(s))+p_5(s)(p_1(s) q_1(s)-p_3(s) q_3(s)-p_2(s)q_2(s) +p_4(s)q_4(s))-p_6(s)(p_3(s) q_2(s) +p_4(s)q_1(s))-p_6(s)(p_3(s)q_2(s)+p_4(s) q_1(s)) + r_1 q_5(s)(p_3(s)q_1(s)+p_4(s) q_2(s)) + r_2 q_5(s)(p_3(s) q_1(s)+p_4(s)q_2(s)) + r_1 q_6(s) (p_4(s) q_3(s)-p_3(s)q_4(s))+ r_2 q_6(s)(p_2(s) q_1(s) -p_1(s)q_2(s)) + r_1 q_6(s)(p_2(s)q_1(s)-p_1(s) q_2(s))+ r_2 q_6(s)(p_4(s)q_3(s) -p_3(s) q_4(s))+ r_1(p_2(s) q_4(s) +p_1(s)q_3(s))+ r_2(p_2(s)q_4(s)+p_1(s) q_3(s)) +τ (p_1(s)q_1(s)+ p_3(s)q_3(s)-p_2(s)q_2(s) -p_4(s)q_4(s)+p_1(s) q_1(s)+p_3(s) q_3(s)-p_2(s) q_2(s)-p_4(s) q_4(s))- s_1(p_3(s)q_1(s) +p_4(s) q_2(s))- s_2(p_3(s) q_1(s)+p_4(s)q_2(s))+1/s(p_2(s) q_1(s) +p_1(s) q_2(s)-p_4(s)q_3(s) -p_3(s)q_4(s))(p_2(s)q_1(s)+p_1(s) q_2(s)-p_4(s)q_3(s)-p_3(s) q_4(s)). §.§ Differential identities for the Hamiltonian With the Hamiltonian H defined in (<ref>), we have / sH(s) = r_1(p_3(s)q_1(s)-p_4(s) q_2(s))+ r_2(p_3(s) q_1(s)-p_4(s)q_2(s))-1/s^2(p_2(s) q_1(s) p_1(s) q_2(s) -p_4(s)q_3(s)-p_3(s)q_4(s))(p_2(s)q_1(s)+p_1(s) q_2(s)-p_4(s)q_3(s)-p_3(s) q_4(s)), and when τ = 0, ∑_k=1^6 (p_k(s)q'_k(s) +p_k(s)q'_k(s) ) - H(s) =H(s) - 1/3/ s(2s H(s) + p_1(s)q_1(s) + p_2(s)q_2(s) + p_1(s)q_1(s) + p_2(s)q_2(s) - 2p_5(s)q_5(s). -2p_5(s)q_5(s) - p_6(s)q_6(s)-p_6(s)q_6(s)+2s_1/r_1 p_5(s) + 2 s_2/r_2p_5(s)). We also have the following differential identity with respect to the parameter γ: ∂/∂γ(∑_k=1^6 (p_k(s)q'_k(s) +p_k(s)q'_k(s) ) - H(s)) = / s∑_k=1^6(p_k(s)∂/∂γq_k(s) + p_k(s)∂/∂γq_k(s)). The differential indentities (<ref>) and (<ref>) follows directly from (<ref>), (<ref>) and cumbersome calculations. To see the differential identity with respect to the parameter γ, we have from (<ref>) that ∂/∂γH(s) = ∑_k=1^6(∂ H/∂ p_k∂/∂γ p_k(s) + ∂ H/∂ q_k∂/∂γq_k(s) +∂ H/∂p_k∂/∂γp_k(s)+∂ H/∂q_k∂/∂γq_k(s)) =∑_k=1^6(q'_k(s) ∂/∂γ p_k(s) -p'_k(s) ∂/∂γq_k(s) +q'_k(s) ∂/∂γp_k(s)-p'_k(s) ∂/∂γq_k(s)), which leads to (<ref>). This completes the proof of Proposition <ref>. § ASYMPTOTIC ANALYSIS OF THE RH PROBLEM FOR X AS S → +∞ WITH Γ=1 In this section, we will analyze the RH problem for X as s → +∞ with γ=1. In this case, it is readily seen from (<ref>) that X has no jump on the interval (-s,s). §.§ First transformation: X → T This transformation is a rescaling of the RH problem for X, which is defined by T(z)= ( s^1/4,s^1/4, s^-1/4,s^-1/4) X(sz). In view of RH problem <ref> for X, it is readily seen that T satisfies the following RH problem. 
(a) T(z) is defined and analytic in ℂ∖{Γ_T ∪{-1}∪{1}}, where Γ_T:=∪^5_j=0Γ_j^(1), and where the contours Γ_j^(1), j=0,1,…,5, are defined in (<ref>) with s=1. (b) For z∈Γ_T, T(z) satisfies the jump condition T_+(z)=T_-(z)J_T(z), where J_T(z):={[ [ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ], z∈Γ_0^(1),; I-E_2,1+E_3,1+E_3,4, z∈Γ_1^(1),; I-E_1,2+E_4,2+E_4,3, z∈Γ_2^(1),; [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ], z∈Γ_3^(1),; I+E_1,2+E_4,2-E_4,3, z∈Γ_4^(1),; I+E_2,1+E_3,1-E_3,4, z∈Γ_5^(1).; ]. (c)As z →∞ with z∈ℂ∖Γ_T, we have T(z) =( I+T^(1)/z+ (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4) × A ( e^-s^3/2θ_1(z;s)+τ s z, e^-s^3/2θ_2(z;s)- τ s z, e^s^3/2θ_1(z;s)+τ s z,e^s^3/2θ_2(z;s)- τ s z), where T^(1) is independent of z and A is defined in (<ref>) and θ_1(z) = 2/3 r_1(-z)^3/2 + 2 s_1/s (-z)^1/2, z ∈ℂ∖ [0, ∞), θ_2(z) =2/3 r_2z^3/2 +2 s_2/s z^1/2, z ∈ℂ∖ (-∞, 0]. (d) As z →± 1, we have T(z)=(ln(z ∓ 1)). §.§ Second transformation: T → S In this transformation we partially normalize RH problem <ref> for T at infinity. For this purpose, we introduce the following two g-functions: g_1(z) =2/3 r_1(1-z)^3/2+(-r_1+2s_1/s)(1-z)^1/2, z∈ℂ∖ [1,+∞), g_2(z) =2/3 r_2(z+1)^3/2+(-r_2+2s_2/s)(z+1)^1/2, z∈ℂ∖ (-∞,-1]. As z→∞, it is readily seen that g_1(z) =θ_1(z;s)+(-r_1/4+s_1/s)(-z)^-1/2+(z^-3/2), g_2(z) =θ_2(z;s)+(-r_2/4+s_2/s)z^-1/2+(z^-3/2), where θ_i(z;s), i=1,2, are defined in (<ref>). The second transformation is set to be S(z) =(I+s^3/2(-r_1/4+s_1/s) E_3,1-s^3/2(-r_2/4+s_2/s) E_4,2)T(z) ×( e^s^3/2g_1(z)-τ s z, e^s^3/2g_2(z)+ τ s z, e^-s^3/2g_1(z)-τ s z, e^-s^3/2g_2(z) + τ s z). Then, S satisfies the following RH problem. The matrix-valued function S defined in (<ref>) has the following properties: (a) S(z) is defined and analytic in ℂ∖{Γ_T ∪{-1}∪{1}}, where Γ_T is defined in (<ref>). (b) For z∈Γ_T, S(z) satisfies the jump condition S_+(z)=S_-(z)J_S(z), where J_S(z):={[ [ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ], z∈Γ_0^(1),; I-e^s^3/2(g_1(z)-g_2(z))-2τ s zE_2,1+e^2s^3/2g_1(z)E_3,1; +e^s^3/2(g_1(z)-g_2(z))+2τ s z E_3,4, z∈Γ_1^(1),; I- e^-s^3/2(g_1(z)-g_2(z))+2τ s z E_1,2+ e^2s^3/2g_2(z) E_4,2; + e^-s^3/2(g_1(z)-g_2(z))-2τ s z E_4,3, z∈Γ_2^(1),; [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ], z∈Γ_3^(1),; I+ e^-s^3/2(g_1(z)-g_2(z))+2τ s z E_1,2+ e^2s^3/2g_2(z)E_4,2; - e^-s^3/2(g_1(z)-g_2(z))-2τ s z E_4,3, z∈Γ_4^(1),; I+e^s^3/2(g_1(z)-g_2(z))-2τ s z E_2,1+e^2s^3/2g_1(z) E_3,1; -e^s^3/2(g_1(z)-g_2(z))+2τ s zE_3,4, z∈Γ_5^(1).; ]. (c)As z →∞ with z∈ℂ∖Γ_T, we have S(z)=( I+S^(1)/z+ (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A, where S^(1) is independent of z and A is defined in (<ref>). (d) As z →± 1, we have S(z)=(ln(z ∓ 1)). All the items follow directly from (<ref>) and RH problem <ref> for T. In particular, to check the jump condition of S on ℝ, we need the facts that g_1,+(x)+g_1,-(x) =0, x ∈ [1,+∞), g_2,+(x)+g_2,-(x) =0, x ∈ (-∞,1]. To establish the large z behavior of S shown in item (c), we observe from (<ref>), (<ref>) and (<ref>) that, as z→∞, T(z)( e^s^3/2g_1(z)-τ s z, e^s^3/2g_2(z)+ τ s z, e^-s^3/2g_1(z)-τ s z, e^-s^3/2g_2(z) + τ s z) = ( I+ (z^-1) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A × (I+ s^3/2(-r_1/4+s_1/s)(-z)^-1/2E_1,1 + s^3/2(-r_2/4+s_2/s)z^1/2E_2,2 -s^3/2(-r_1/4+s_1/s)(-z)^-1/2E_3,3 -s^3/2(-r_2/4+s_2/s)z^-1/2E_4,4+ (z^-3/2) ). 
By a direct calculation, it follows that ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A (I+ s^3/2(-r_1/4+s_1/s)(-z)^-1/2E_1,1 + s^3/2(-r_2/4+s_2/s)z^-1/2E_2,2 -s^3/2(-r_1/4+s_1/s)(-z)^-1/2E_3,3 -s^3/2(-r_2/4+s_2/s)z^-1/2E_4,4) =(I-s^3/2(-r_1/4+s_1/s) E_3,1+s^3/2(-r_2/4+s_2/s) E_4,2 +(z^-1)) ×((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A. This, together with (<ref>) and (<ref>), gives us (<ref>). §.§ Global parametrix As s→ +∞, it comes out that the jump matrix J_S(z) of S given in (<ref>) tends to the identity matrix exponentially fast except for z∈Γ_0^(1)∪Γ_3^(1). Indeed, by (<ref>) and (<ref>), it is readily seen that as z →∞, g_1(z) ∼( 2/3 r_1(1-z)^3/2){[ <0, (z-1) ∈ (0,2π/3)∪ (4π/3,2π),; >0, (z-1) ∈ (2π/3,4π/3), ]. and g_2(z) ∼( 2/3 r_2(z+1)^3/2){[ <0, (z+1) ∈ (π/3,π)∪ (-π,-π/3),; >0, (z+1) ∈ (-π/3,π/3), ]. where we have made use of the fact that r_i > 0, i=1, 2. Moreover, for large positive s, we have g_1(z)-g_2(z) = -√(2)/3r_2+o(1), z → 1, g_1(z)-g_2(z) = √(2)/3r_1+o(1), z → -1. A combination of (<ref>)–(<ref>) and (<ref>) implies that, by deforming the contours if necessary, we may assume that J_S(z) → I as s→ +∞ for z∈Γ_T∖ (Γ_0^(1)∪Γ_3^(1)). As a consequence, away from the points ± 1, we expect that S should be well approximated by the following global parametrix. (a) N(z) is defined and analytic in ℂ∖{(-∞,-1] ∪ [1,∞) }. (b) For x∈ (-∞,-1) ∪ (1,∞), N(x) satisfies the jump condition N_+(x)=N_-(x){[ [ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ], x>1,; [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ], x<-1. ]. (c)As z →∞ with z ∈ℂ∖ℝ, we have N(z)=( I+N^(1)/z+ (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A, where N^(1) is independent of z and A is defined in (<ref>). The above RH problem can be solved explicitly, and its solution is given by N(z)=((1-z)^-1/4,(z+1)^-1/4,(1-z)^1/4,(z+1)^1/4)A, where we take the branch cuts of (1-z)^1/4 and (z+1)^1/4 along [1,∞) and (-∞,-1], respectively. §.§ Local parametrix near z=-1 For z near the endpoints -1 and 1, S(z) and N(z) are not uniformly close to each other, hence, local parametrices need to be constructed near these endpoints. We start with the local parametrix P^(-1)(z) near -1, which satisfies the following RH problem. (a) P^(-1)(z) is defined and analytic in D(-1, ε) ∖Γ_T, where D(z_0, ε) and Γ_T are defined in (<ref>) and (<ref>), respectively. (b) For z ∈ D(-1, ε) ∩Γ_T, we have P^(-1)_+(z) = P^(-1)_-(z) J_S(z), where J_S(z) is defined in (<ref>). (c) As s →∞, P^(-1)(z) satisfies the following matching condition P^(-1)(z) = (I+ (s^-3/2)) N(z), z ∈∂ D(-1, ε), where N(z) is given in (<ref>). RH problem <ref> can be solved explicitly by using the Bessel parametrix Φ^()(z) defined in Appendix <ref>. To do this, we introduce the function f_-1(z): = g_2(z)^2 =(r_2-2s_2/s)^2(z+1) -4/3 r_2(r_2-2s_2/s)(z+1)^2 +4/9 r_2^2 (z+1)^3, where g_2 is given in (<ref>). Clearly, f_-1(z) is analytic in D(-1, ε) and is a conformal mapping for large positive s. Let Ω_j^(1), j=1,…,6, be the six regions shown in Figure <ref> with s=1, we now define P^(-1)(z) = E_-1(z) [ 1 0 0 0; 0 Φ^()_11(s^3f_-1(z)) 0 Φ^()_12(s^3f_-1(z)); 0 0 1 0; 0 Φ^()_21(s^3f_-1(z)) 0 Φ^()_22(s^3f_-1(z)) ] ×[ 1 0 0 0; 0 e^s^3/2g_2(z) 0 0; 0 0 1 0; 0 0 0 e^-s^3/2g_2(z) ] × I -e^-s^3/2(g_1(z)-g_2(z)) + 2 τ s z E_1,2 +e^-s^3/2(g_1(z)-g_2(z)) - 2 τ s z E_4,3, z ∈Ω_2^(1)∪Ω_5^(1)∩ D(-1, ε), I, z ∈Ω_3^(1)∪Ω_4^(1)∩ D(-1, ε), where f_-1(z) is defined in (<ref>) and E_-1(z) = 1/√(2) N(z) [ √(2) 0 0 0; 0 π^1/2 s^3/4 f_-1(z)^1/4 0 - π^-1/2 s^-3/4 f_-1(z)^-1/4; 0 0 √(2) 0; 0 -π^1/2 s^3/4 f_-1(z)^1/4 0 π^-1/2 s^-3/4 f_-1(z)^-1/4 ]. 
The local parametrix P^(-1)(z) defined in (<ref>) solves RH problem <ref>. First, we show the prefactor E_-1(z) is analytic near z=-1. To achieve this, we notice that the only possible jump for E_-1(z) is on the interval (-1-ε, -1). For z ∈ (-1-ε, -1), we see from (<ref>) and (<ref>) that E_-1,-(z)^-1E_-1,+(z) = 1/2[ √(2) 0 0 0; 0 π^-1/2 s^-3/4 f_-1,-(z)^-1/4 0 π^-1/2 s^-3/4 f_-1,-(z)^-1/4; 0 0 √(2) 0; 0 π^1/2 s^3/4 f_-1,-(z)^1/4 0 π^1/2 s^3/4 f_-1,-(z)^1/4 ][ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ] ×[ √(2) 0 0 0; 0 π^1/2 s^3/4 f_-1,+(z)^1/4 0 - π^-1/2 s^-3/4 f_-1,+(z)^-1/4; 0 0 √(2) 0; 0 -π^1/2 s^3/4 f_-1,+(z)^1/4 0 π^-1/2 s^-3/4 f_-1,+(z)^-1/4 ]=I. Moreover, as z → -1, we have E_-1(z) = E_-1(-1) + E_-1'(-1)(z+1) + ((z+1)^2), where E_-1(-1) = [ 2^-3/4 0 - 2^-3/4 0; 0 π^1/2 s^3/4(r_2 - 2s_2/s)^1/2 0 0; - 2^-1/4 0 2^-1/4 0; 0 0 0 π^-1/2 s^-3/4(r_2 - 2s_2/s)^-1/2 ] and E_-1'(-1) = [ 1/8 · 2^3/4 0 -/8 · 2^3/4 0; 0 -π^1/2 s^3/4r_2/3(r_2 - 2s_2/s)^-1/2 0 0; /8 · 2^1/4 0 -1/8 · 2^1/4 0; 0 0 0 π^-1/2 s^-3/4 r_2/3(r_2 - 2s_2/s)^-3/2 ]. Therefore, E_-1(z) is indeed analytic in D(-1, ε). It is then straightforward to verify the jump condition of P^(-1) in (<ref>) by using the analyticity of E_-1 and (<ref>). Next, we check the matching condition (<ref>). From the definitions of g_1(z) and g_2(z) in (<ref>) and (<ref>), it is clear that functions e^-s^3/2(g_1(z)-g_2(z)) + 2 τ s z and e^-s^3/2(g_1(z)-g_2(z)) - 2 τ s z in (<ref>) are exponentially small as s → +∞ for z ∈ D(-1, ε); cf. (<ref>). Thus, it follows from the asymptotic behavior of the Bessel parametrix at infinity in (<ref>) that, as s → +∞, P^(-1)(z) N(z)^-1 = I + J^(-1)_1(z)/s^3/2 + (s^-3), z ∈∂ D(-1, ε), where J^(-1)_1(z) = 1/8f_-1(z)^1/2 N(z) [ 0 0 0 0; 0 -1 0 -2; 0 0 0 0; 0 -2 0 1 ]N(z)^-1. This completes the proof of Proposition <ref>. For later use, we include following local behavior of J^(-1)_1(z) near z = -1: J^(-1)_1(z) = -/8(r_2 - 2s_2/s)(z+1) E_2,4 - r_2/12(r_2 - 2s_2/s)^2 E_2,4 -3/8(r_2 - 2s_2/s) E_4,2 -( r_2^2/18(r_2 -2s_2/s)^3 E_2,4 + r_2/4(r_2 - 2s_2/s)^2E_4,2)(z+1) + ((z+1)^2), z → -1. §.§ Local parametrix near z=1 In a small disc D(1, ε) around z=1, the local parametrix P^(1)(z) reads as follows. (a) P^(1)(z) is defined and analytic in D(1, ε) ∖Γ_T, where Γ_T is defined in (<ref>). (b) For z ∈ D(1, ε) ∩Γ_T, we have P^(1)_+(z) = P^(1)_-(z) J_S(z), where J_S(z) is defined in (<ref>). (c) As s →∞, P^(1)(z) satisfies the following matching condition P^(1)(z) = (I+ (s^-3/2)) N(z), z ∈∂ D(1, ε), where N(z) is given in (<ref>). Similar to the construction of P^(-1), the above RH problem can be solved again by using the Bessel parametrix Φ^()(z) defined in Appendix <ref>. In this case, we need the following function f_1(z) := g_1(z)^2=(r_1 - 2s_1/s)^2 (1-z) -4/3 r_1(r_1-2s_1/s)(1-z)^2 +4/9 r_1^2 (1-z)^3, where g_1(z) is defined in (<ref>). Clearly, f_1(z) is analytic in D(1, ε) and is a conformal mapping for large positive s. We now define P^(1)(z) = E_1(z) [ Φ^()_11(s^3f_1(z)) 0 -Φ^()_12(s^3f_1(z)) 0; 0 1 0 0; -Φ^()_21(s^3f_1(z)) 0 Φ^()_22(s^3f_1(z)) 0; 0 0 0 1 ] ×[ e^s^3/2g_1(z) 0 0 0; 0 1 0 0; 0 0 e^-s^3/2g_1(z) 0; 0 0 0 1 ] × I -e^s^3/2(g_1(z)-g_2(z)) - 2 τ s z E_2,1+e^s^3/2(g_1(z)-g_2(z)) + 2 τ s zE_3,4, z ∈Ω_2^(1)∪Ω_5^(1)∩ D(1, ε), I, z ∈Ω_1^(1)∪Ω_6^(1)∩ D(1, ε), where f_1(z) is defined in (<ref>) and E_1(z) = 1/√(2) N(z) [ π^1/2 s^3/4 f_1(z)^1/4 0 π^-1/2 s^-3/4 f_1(z)^-1/4 0; 0 √(2) 0 0; π^1/2 s^3/4 f_1(z)^1/4 0 π^-1/2 s^-3/4 f_1(z)^-1/4 0; 0 0 0 √(2) ]. P^(1)(z) defined in (<ref>) solves RH problem <ref>. 
First, we show E_1(z) is analytic in D(1, ε). From (<ref>), the only possible jump is on (1, 1 + ε), and for z ∈ (1, 1 + ε), E_1,-(z)^-1E_1,+(z) =1/2[ π^-1/2 s^-3/4 f_1,-(z)^-1/4 0 -π^-1/2 s^-3/4 f_1,-(z)^-1/4 0; 0 √(2) 0 0; -π^1/2 s^3/4 f_1,-(z)^1/4 0 π^1/2 s^3/4 f_1,-(z)^1/4 0; 0 0 0 √(2) ][ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ] ×[ π^1/2 s^3/4 f_1,+(z)^1/4 0 π^-1/2 s^-3/4 f_1,+(z)^-1/4 0; 0 √(2) 0 0; π^1/2 s^3/4 f_1,+(z)^1/4 0 π^-1/2 s^-3/4 f_1,+(z)^-1/4 0; 0 0 0 √(2) ]=I Moreover, as z → 1, we have E_1(z) = E_1(1) + E_1'(1)(z-1) + ((z-1)^2), where E_1(1) = [ π^1/2 s^3/4(r_1 - 2s_1/s)^1/2 0 0 0; 0 2^-3/4 0 2^-3/4; 0 0 π^-1/2 s^-3/4(r_1 - 2s_1/s)^-1/2 0; 0 2^-1/4 0 2^-1/4 ] and E_1'(1) = [ π^1/2 s^3/4r_1/3(r_1 - 2s_1/s)^-1/2 0 0 0; 0 -1/8 · 2^3/4 0 -/8 · 2^3/4; 0 0 π^-1/2 s^-3/4 r_1/3(r_1 - 2s_1/s)^-3/2 0; 0 /8 · 2^1/4 0 1/8 · 2^1/4 ]. Therefore, E_1(z) is indeed analytic in D(1, ε). The jump condition of P^(1)(z) in (<ref>) can be verified from the analyticity of E_1(z) and the jump condition in (<ref>). Finally, we check the matching condition in (<ref>), it follows from the asymptotic behavior of the Bessel parametrix at infinity in (<ref>) that, as s → +∞, P^(1)(z) N(z)^-1 = I + J^(1)_1(z)/s^3/2 + (s^-3), where J^(1)_1(z) = 1/8f_1(z)^1/2 N(z) [ -1 0 2 0; 0 0 0 0; 2 0 1 0; 0 0 0 0 ]N(z)^-1. This completes the proof of Proposition <ref>. For later use, we calculate the behavior of J^(1)_1(z) near z = 1 as follows, J^(1)_1(z) = /8(r_1 - 2s_1/s)(1-z) E_1,3 + r_1/12(r_1 - 2s_1/s)^2E_1,3+3/8(r_1 - 2s_1/s)E_3,1 +( r_1^2/18(r_1 - 2s_1/s)^3E_1,3+ r_1/4(r_1 - 2s_1/s)^2E_3,1)(1-z) + ((1-z)^2), z → 1. §.§ Final transformation The final transformation is defined by R(z) = S(z) P^(-1)(z)^-1, z ∈ D(-1, ε), S(z) P^(1)(z)^-1, z ∈ D(1, ε), S(z) N(z)^-1, elsewhere. From the RH problems for S, N and P^(±1), it follows that R satisfies the following RH problem. (a) R(z) is defined and analytic in ℂ∖Γ_R, where Σ_R:=Γ_T ∪∂ D(-1,ε) ∪∂ D(1,ε) ∖{ℝ∪ D(-1,ε) ∪ D(1,ε) }; see Figure<ref> for an illustration. (b) For z ∈Γ_R, we have R_+(z) = R_-(z) J_R (z), where J_R(z) = P^(-1)(z) N(z)^-1, z ∈∂ D(-1, ε), P^(1)(z) N(z)^-1, z ∈∂ D(1, ε), N(z) J_S(z) N(z)^-1, z ∈Γ_R∖∂ D(±1, ε), with J_S(z) defined in (<ref>). (c) As z →∞, we have R(z) = I + R^(1)/z + (z^-1), where R^(1) is independent of z. Since the jump matrix J_S(z) of S given in (<ref>) tends to the identity matrix exponentially fast except for z∈Γ_0^(1)∪Γ_3^(1) as s → +∞, from the matching condition (<ref>) and (<ref>), we have J_R(z) = I + (s^-3/2), s → +∞. By a standard argument <cit.>, we conclude that R(z) = I + R_1(z)/s^3/2 + (s^-3), s → +∞, uniformly for z∈ℂ∖Γ_R. Moreover, inserting the above expansion into (<ref>), it follows that function R_1 is analytic in ℂ∖ (∂ D(-1, ε) ∪∂ D(1, ε)) with asymptotic behavior (1/z) as z→∞, and satisfies R_1,+(z)-R_1,-(z)= {[ J^(-1)_1(z), z∈∂ D(-1,ε),; J^(1)_1(z), z∈∂ D(1,ε), ]. where the functions J^(-1)_1(z) and J^(1)_1(z) are given in (<ref>) and (<ref>), respectively. By Cauchy's residue theorem, we have R_1(z) = 1/2 π∮_∂ D(-1, ε)J_1^(-1) (ζ)/z-ζζ + 1/2 π∮_∂ D(1, ε)J_1^(1) (ζ)/z-ζζ =_ζ = -1 J^(-1)_1(ζ)/z+1 + _ζ = 1 J^(1)_1(ζ)/z-1, z ∈ℂ∖{D(-1, ε) ∪ D(1, ε)}, _ζ = -1 J^(-1)_1(ζ)/z+1 + _ζ = 1 J^(1)_1(ζ)/z-1 - J^(-1)_1(z), z ∈ D(-1, ε), _ζ = -1 J^(-1)_1(ζ)/z+1 + _ζ = 1 J^(1)_1(ζ)/z-1 - J^(1)_1(z), z ∈ D(1, ε). 
In view of (<ref>) and (<ref>), it follows from direct calculations that R_1'(-1) = /32(r_1 - 2s_1/s ) E_1,3 + r_2^2/18(r_2 - 2s_2/s)^3E_2,4 + r_2/4(r_2 - 2s_2/s)^2E_4,2 and R_1'(1) = r_1^2/18(r_1 - 2s_1/s)^3 E_1,3 + /32(r_2 - 2s_2/s)E_2,4 + r_1/4(r_1 - 2s_1/s)^2E_3,1. § ASYMPTOTIC ANALYSIS OF THE RH PROBLEM FOR X AS S → +∞ WITH 0<Γ<1 If 0< γ <1, the matrix-valued function X has a non-trivial jump [ 1 0 1-γ 1-γ; 0 1 1-γ 1-γ; 0 0 1 0; 0 0 0 1 ] over the interval (-s,s). This extra jump will lead to a completely different asymptotic analysis of the RH problem for X as s → +∞, which will be carried out in this section. §.§ First transformation: X ↦ T This transformation is a rescaling and normalization of the RH problem for X, and it is defined by T(z) = ( s^1/4,s^1/4, s^-1/4,s^-1/4) X(sz) ×( e^θ_1(sz)-τ s z, e^θ_2(sz)+ τ s z, e^-θ_1(sz)-τ s z,e^-θ_2(sz) + τ s z), where the functions θ_1 and θ_2 are given in (<ref>) and (<ref>), respectively. In view of the facts that θ_1,+(sx)+θ_1,-(sx) = 0, x>0, θ_2,+(sx)+θ_2,-(sx) =0, x<0, and RH problem <ref> for X, it is readily seen that T defined in (<ref>) satisfies the following RH problem. (a) T(z) is defined and analytic in ℂ∖Γ_ T, where Γ_ T:=∪^5_j=0Γ_j^(1)∪ [-1,1], and where the contours Γ_j^(1), j=0,1,…,5, are defined in (<ref>) with s=1. (b) For z∈Γ_ T, T(z) satisfies the jump condition T_+(z)= T_-(z)J_ T(z), where J_ T(z):={[ [ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ], z∈Γ_0^(1),; I-e^θ_1(sz)-θ_2(sz)-2τ s zE_2,1+e^2θ_1(sz)E_3,1; +e^θ_1(sz)-θ_2(sz)+2τ s z E_3,4, z∈Γ_1^(1),; I- e^-θ_1(sz)+θ_2(sz)+2τ s z E_1,2+ e^2θ_2(sz) E_4,2; + e^-θ_1(sz)+θ_2(sz)-2τ s z E_4,3, z∈Γ_2^(1),; [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ], z∈Γ_3^(1),; I+ e^-θ_1(sz)+θ_2(sz)+2τ s z E_1,2+ e^2θ_2(sz)E_4,2; - e^-θ_1(sz)+θ_2(sz)-2τ s z E_4,3, z∈Γ_4^(1),; I+e^θ_1(sz)-θ_2(sz)-2τ s z E_2,1+e^2θ_1(sz) E_3,1; -e^θ_1(sz)-θ_2(sz)+2τ s zE_3,4, z∈Γ_5^(1),; J_ L(z), z ∈ (-1,0),; J_ R(z), z ∈ (0,1), ]. with J_ L(z) =[ 1 0 (1-γ)e^-2θ_1(sz) (1-γ)e^-θ_1(sz)-θ_2,+(sz)+2τ sz; 0 e^θ_2,+(sz)-θ_2,-(sz) (1-γ)e^-θ_1(sz)-θ_2,-(sz)-2τ sz 1-γ; 0 0 1 0; 0 0 0 e^θ_2,-(sz)-θ_2,+(sz) ], and J_ R(z) =[ e^θ_1,+(sz)-θ_1,-(sz) 0 1-γ (1-γ)e^-θ_1,-(sz)-θ_2(sz)+2τ sz; 0 1 (1-γ)e^-θ_1,+(sz)-θ_2(sz)-2τ sz (1-γ)e^-2θ_2(sz); 0 0 e^θ_1,-(sz)-θ_1,+(sz) 0; 0 0 0 1 ]. (c)As z →∞ with z∈ℂ∖Γ_ T, we have T(z)=( I+ T^(1)/z + (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A, where T^(1) is independent of z and A is defined in (<ref>). (d) As z →± 1, we have T(z)=(ln(z ∓ 1)). §.§ Second transformation: T ↦ S On account of the definitions of θ_1(z) and θ_2(z) given in (<ref>) and (<ref>), it is readily seen that J_ T(z) in (<ref>) tends to I exponentially fast as s→ +∞ for z∈Γ_ T∖ℝ. Moreover, the (2,2), (4,4) entries of J_L in (<ref>) and the (1,1), (3,3) entries of J_R in (<ref>) are highly oscillatory for large positive s. The second transformation then involves the so-called lens opening around the interval (-1,0)∪ (0,1). The idea is to remove the highly oscillatory terms of J_ T with the cost of creating extra jumps that tend to the identity matrices on some new contours. 
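Indeed, for 0<x<1 the boundary values of θ_1 on the two sides of the cut are purely imaginary and opposite to each other: with the branch chosen in (<ref>),
 θ_1,±(sx) = ± i(2/3 r_1 (sx)^3/2-2 s_1 (sx)^1/2),
so that the entry e^θ_1,+(sx)-θ_1,-(sx) of J_ R in (<ref>) has modulus one and oscillates rapidly as s→+∞; the same applies to the entry e^θ_2,+(sx)-θ_2,-(sx) of J_ L in (<ref>) for -1<x<0.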
To proceed, we observe from (<ref>), (<ref>) and (<ref>) that J_ L(z)= J_1,-(z) [ 1 0 0 0; 0 0 0 1-γ; 0 0 1 0; 0 1/γ-1 0 0 ] J_2,+(z), z∈(-1,0), where J_1(z) = I+e^-θ_1(sz)+θ_2(sz)+2τ s zE_1,2+e^2θ_2(sz)/1-γE_4,2-e^-θ_1(sz)+θ_2(sz)-2 τ szE_4,3, J_2(z) = I - e^-θ_1(sz) + θ_2(sz) + 2τ s zE_1,2 + e^2θ_2(sz)/1-γE_4,2 + e^-θ_1(sz)+θ_2(sz)-2 τ szE_4,3, and J_ R(z)= J_3,-(z) [ 0 0 1-γ 0; 0 1 0 0; 1/γ-1 0 0 0; 0 0 0 1 ] J_4,+(z), z∈(0,1), where J_3(z) = I+e^θ_1(sz)-θ_2(sz)-2τ s zE_2,1+e^2θ_1(sz)/1-γE_3,1-e^θ_1(sz)-θ_2(sz)+2τ szE_3,4, J_4(z) = I- e^θ_1(sz)-θ_2(sz)-2τ s zE_2,1+e^2θ_1(sz)/1-γE_3,1+e^θ_1(sz)-θ_2(sz)+2τ szE_3,4. It is easily seen that, for i=1,…,4, we have J_i(z)^-1= 2I - J_i(z). Let Ω_ L, ± and Ω_ R, ± be the lenses on the ±-side of (-1,0) and (0,1), as shown in Figure <ref>. The second transformation is defined by S = T{[ J_2(z)^-1, z∈Ω_ L,+,; J_1(z), z∈Ω_ L,-,; J_4(z)^-1, z∈Ω_ R,+,; J_3(z), z∈Ω_ R,-,; I, elsewhere. ]. It is then readily seen from RH problem <ref> for T that S satisfies the following RH problem. (a) S(z) is defined and analytic in ℂ∖Γ_ S, where Γ_ S:=∪^5_j=0Γ_j^(1)∪ [-1,1] ∪∂Ω_ L, ±∪∂Ω_ R, ±; see Figure <ref> for an illustration. (b) For z∈Γ_ S, S(z) satisfies the jump condition S_+(z)= S_-(z)J_ S(z), where J_ S(z):={[ J_ T(z), z∈∪^5_j=0Γ_j^(1),; J_2(z), z∈∂Ω_ L,+,; J_1(z), z∈∂Ω_ L,-,; J_4(z), z∈∂Ω_ R,+,; J_3(z), z∈∂Ω_ R,-,; [ 1 0 0 0; 0 0 0 1-γ; 0 0 1 0; 0 1/γ-1 0 0 ], z ∈ (-1,0),; [ 0 0 1-γ 0; 0 1 0 0; 1/γ-1 0 0 0; 0 0 0 1 ], z ∈ (0,1), ]. where the functions J_ T(z) and J_i(z), i=1,…,4, are defined in (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), respectively. (c)As z →∞ with z∈ℂ∖Γ_ S, we have S(z)=(I+ T^(1)/z + (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A, where T^(1) is given in (<ref>) and A is defined in (<ref>). (d) As z →± 1, we have S(z)=(ln(z ∓ 1)). §.§ Global parametrix As s→ +∞, it is now readily seen that all the jump matrices of S tend to the identity matrices exponentially fast except for those along ℝ and we are lead to consider the following global parametrix. (a) N(z) is defined and analytic in ℂ∖ℝ. (b) For x∈ℝ, N(x) satisfies the jump condition N_+(x)= N_-(x){[ [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ], x<-1,; [ 1 0 0 0; 0 0 0 1-γ; 0 0 1 0; 0 1/γ-1 0 0 ], x ∈ (-1,0),; [ 0 0 1-γ 0; 0 1 0 0; 1/γ-1 0 0 0; 0 0 0 1 ], x ∈ (0,1),; [ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ], x>1. ]. (c)As z →∞, we have N(z)=( I+ (z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A, where A is defined in (<ref>). To solve the above RH problem, we observe from the jump condition for N that it is natural to expect N should take a chessboard structure, i.e., N= [ ⋆ 0 ⋆ 0; 0 ⋆ 0 ⋆; ⋆ 0 ⋆ 0; 0 ⋆ 0 ⋆ ], where ⋆ denotes a matrix entry to be specified. This sparsity pattern then implies that we could decompose N into two 2× 2 RH problems. Indeed, let N_1(z):= [ N_22(z) N_24(z); N_42(z) N_44(z) ]. It is readily seen that N_1 solves the following RH problem. (a) N_1(z) is defined and analytic in ℂ∖ (-∞,0]. (b) For x∈ (-∞,0), N_1(x) satisfies the jump condition N_1,+(x)= N_1,-(x){[ [ 0 1; -1 0 ], x<-1,; [ 0 1-γ; 1/γ-1 0 ], x ∈ (-1,0). ]. (c)As z →∞, we have N_1(z)=( I+ (z^-1) ) z^- σ_3/4/√(2)[ 1 ; 1 ]. Similarly, N_2(z):= [ N_11(z) N_13(z); N_31(z) N_33(z) ] is a solution of the following RH problem. (a) N_2(z) is defined and analytic in ℂ∖ [0,∞). (b) For x∈ (0,∞), N_2(x) satisfies the jump condition N_2,+(x)= N_2,-(x){[ [ 0 1-γ; 1/γ-1 0 ], x∈ (0,1),; [ 0 1; -1 0 ], x > 1. ]. (c)As z →∞, we have N_2(z)=( I+ (z^-1) ) (-z)^- σ_3/4/√(2)[ 1 -; - 1 ]. 
Since it is easily seen that N_2(z)=σ_3 N_1(-z) σ_3, it suffices to solve RH problem <ref> for N_1. For that purpose, we set λ(ζ):= (ζ-/ζ+)^β, ζ∈ℂ∖ [-,], where β=ln (1-γ)/(2π) (see (<ref>)) and the branch cut is chosen such that λ(ζ) → 1 as ζ→∞ with the orientation from to -. With the aid of the function λ, we define d_1(z)=λ(z^1/2), d_2(z)=λ(-z^1/2). Some properties of d_1 and d_2 are collected in the proposition below. The functions d_1(z) and d_2(z) in (<ref>) satisfy the following properties. (i) d_1(z) and d_2(z) are analytic in ℂ∖ (-∞,0]. Moreover, we have d_1,±(x) =d_2,∓(x), x<-1, d_1,±(x) =d_2,∓(x)e^-2βπ, -1<x<0. (ii) As z→∞, we have d_1(z) =1-2β/z^1/2-2β^2/z+(z^-2), d_2(z) =1+2β/z^1/2-2β^2/z+(z^-2). (iii) As z → 0, we have d_1(z) = e^-βπ(1 + 2 β√(z)- 2 β^2 z - 2 (β + 2 β^3)/3 z^3/2 + (z^2)), d_2(z) = e^βπ(1 - 2 β√(z)- 2 β^2 z + 2 (β + 2 β^3)/3 z^3/2 + (z^2)). (iv) As z → -1 and z>0, we have d_1(z) = e^-βπ 4^-β (z+1)^β(1+ β/2 (z+1) + ((z+1)^2)), d_2(z) = e^βπ 4^β (z+1)^-β(1- β/2 (z+1) + ((z+1)^2)). (v) As z → 1, we have d_1(z) = e^-βπ/2(1 + β/2(z-1) + ((z-1)^2)), d_2(z) = e^βπ/2(1 - β/2(z-1) + ((z-1)^2)). From the definition of λ given in (<ref>), it is readily seen that λ_+(ζ)=λ_-(ζ){[ e^-2βπ, ζ∈ (0,),; e^2βπ, ζ∈ (-,0), ]. and λ(ζ) =1-2 β/ζ-2β^2/ζ^2+(ζ^-3), ζ→∞, λ(ζ) = e^-βπ(1 + 2βζ-2β^2 ζ^2 -2 (β + β^3)/3ζ^3 + (ζ^4)), ζ>0, e^βπ(1 + 2βζ-2β^2 ζ^2 -2 (β + β^3)/3ζ^3 + (ζ^4)), ζ<0, ζ→ 0, λ(ζ) = e^-βπ/2 (ζ-)^β(2-β/2(ζ-)+((ζ-)^2)), ζ→, λ(ζ) = e^-βπ/2(1 + β (ζ-1) + ((ζ-1)^2)), ζ→ 1. This, together with (<ref>), gives us the claims in the proposition after straightforward calculations. We can now solve RH problem <ref> for N_1 by using the functions d_1 and d_2. Let d_1 and d_2 be two functions defined in (<ref>). A solution of RH problem <ref> is given by N_1(z)= [ 1 0; -2β 1 ] z^- σ_3/4/√(2)[ 1 ; 1 ] (d_1(z),d_2(z)). By (<ref>) and (<ref>), it follows that for x<-1 N_1,-(z)^-1 N_1,+(z) =1/2(1/d_1,-(x),1/d_2,-(x)) [ 1 -; - 1 ](-,) [ 1 ; 1 ] (d_1,+(x),d_2,+(x)) = [ 0 1; -1 0 ], as required. The jump of N_1 on (-1,0) can be checked in a similar manner and we omit the details here. To show the large z behavior of N_1 in (<ref>), we observe from item (ii) of Proposition <ref> that, as z→∞, z^- σ_3/4/√(2)[ 1 ; 1 ] (d_1(z),d_2(z))=([ 1 0; 2β 1 ]+(z^-1))z^- σ_3/4/√(2)[ 1 ; 1 ]. Thus, N_1 in (<ref>) indeed satisfies the asymptotic condition (<ref>). This completes the proof of Proposition <ref>. In view of (<ref>), (<ref>), (<ref>) and Proposition <ref>, the following lemma is immediate. A solution of RH problem <ref> is given by N(z) = (I+2β E_3,1-2β E_4,2) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A ×(d_1(-z),d_1(z),d_2(-z),d_2(z) ), where the functions A, d_1 and d_2 are defined in (<ref>) and (<ref>). Moreover, we have N(z) = (I+ N^(1)/z+(z^-2) ) ((-z)^-1/4,z^-1/4,(-z)^1/4,z^1/4)A, z →∞, where A is defined in (<ref>) and N^(1)=[ 2 β^2 0 -2β 0; 0 ∗ 0 ∗; ∗ 0 ∗ 0; 0 ∗ 0 ∗ ]. §.§ Local parametrix near z=0 Since the jump matrices for S and N are not uniformly close to each other near z=0 and z=±1, we need to construct the local parametrices near these points and start with the local parametrix near the origin. (a) P^(0)(z) is defined and analytic in D(0, ε)∖Γ_ S, where Γ_ S is defined in (<ref>). (b) For z ∈ D(0, ε) ∩Γ_ S, P^(0)(z) satisfies the jump condition P^(0)_+(z)= P^(0)_-(z)J_ S(z), where J_ S(z) is given in (<ref>). (c)As s →∞, we have the matching condition P^(0)(z)=( I+ (s^-1/2) ) N(z), z ∈∂ D(0, ε), where N(z) is given in (<ref>). 
This RH problem can be solved by using the solution M(z) of the tacnode RH problem <ref>. More precisely, we define P^(0)(z) = E_0(z) M(sz) ((1-γ)^-1/2e^θ_1(sz) - τ sz,(1-γ)^-1/2e^θ_2(sz) + τ sz,(1-γ)^1/2e^-θ_1(sz) - τ sz., .(1-γ)^1/2e^-θ_2(sz) + τ sz), with E_0(z) = N(z)((1-γ)^1/2,(1-γ)^1/2,(1-γ)^-1/2,(1-γ)^-1/2) A^-1 ×((-sz)^1/4, (sz)^1/4,(-sz)^-1/4,(sz)^-1/4), where A is defined in (<ref>) and N (z) is given in (<ref>). The local parametrix P^(0)(z) defined in (<ref>) solves RH problem <ref>. We first show the prefactor E_0(z) is analytic in D(0, ε). From its definition in (<ref>), the only possible jump is on (-ε, ε). For x ∈ (-ε, 0), recalling the jump matrix of N(z) in (<ref>), we have E_0,-(x)^-1 E_0,+(x) = ((-sz)^-1/4, (sz)_-^-1/4,(-sz)^1/4,(sz)_-^1/4) A ((1-γ)^-1/2,(1-γ)^-1/2,(1-γ)^1/2,(1-γ)^1/2) ×[ 1 0 0 0; 0 0 0 1-γ; 0 0 1 0; 0 1/γ-1 0 0 ]((1-γ)^1/2,(1-γ)^1/2,(1-γ)^-1/2,(1-γ)^-1/2) A^-1 ×((-sz)^1/4, (sz)_+^1/4,(-sz)^-1/4,(sz)_+^-1/4) =((-sz)^-1/4, (sz)_-^-1/4,(-sz)^1/4,(sz)_-^1/4) (1,-,1,) ×((-sz)^1/4, (sz)_+^1/4,(-sz)^-1/4,(sz)_+^-1/4)=I. Similarly, for x ∈ (0, ε), we have E_0,-(x)^-1 E_0,+(x) = ((-sz)_-^-1/4, (sz)^-1/4,(-sz)_-^1/4,(sz)^1/4) A ((1-γ)^-1/2,(1-γ)^-1/2,(1-γ)^1/2,(1-γ)^1/2) ×[ 1 0 1-γ 0; 0 1 0 0; 1/γ-1 0 0 0; 0 0 0 1 ]((1-γ)^1/2,(1-γ)^1/2,(1-γ)^-1/2,(1-γ)^-1/2) A^-1 ×((-sz)_+^1/4, (sz)^1/4,(-sz)_+^-1/4,(sz)^-1/4) =((-sz)_-^-1/4, (sz)^-1/4,(-sz)_-^1/4,(sz)^1/4)(,1,-,1) ×((-sz)_+^1/4, (sz)^1/4,(-sz)_+^-1/4,(sz)^-1/4)=I. Moreover, as z → 0, one has E_0(z) = E_0(0)(I + E_0(0)^-1 E_0'(0)z + (z^-2))(s^1/4, s^1/4,s^-1/4,s^-1/4), where E_0(0) = [ 1 0 -2β 0; 0 1 0 2 β; 2 β 0 1- 4 β^2 0; 0 -2 β 0 1-4 β^2 ] and E_0(0)^-1 E_0'(0) = [ -2 β^2 0 2β (4β^2-1)/3 0; 0 2 β^2 0 2β (4β^2-1)/3; -2 β 0 2 β^2 0; 0 -2 β 0 -2β^2 ]. Thus, E_0(z) is indeed analytic in D(0, ε) and the jump condition (<ref>) can be verified easily from this fact, (<ref>) and the jump condition of M given in (<ref>). For the matching condition, it follows from the asymptotic behavior of M at infinity given in (<ref>) that, for z ∈∂ D(0, ε), P^(0)(z) N(z)^-1 = I + J^(0)_1(z)/s^1/2 + J^(0)_2(z)/s+(s^-3/2), s → +∞, where J^(0)_1(z) = 1/z E_0(z) [ 0 0 M^(1)_13 M^(1)_14; 0 0 M^(1)_23 M^(1)_24; 0 0 0 0; 0 0 0 0 ]E_0(z)^-1 and J^(0)_2(z)= 1/z E_0(z) [ M^(1)_11 M^(1)_12 0 0; M^(1)_21 M^(1)_22 0 0; 0 0 M^(1)_33 M^(1)_34; 0 0 M^(1)_43 M^(1)_44 ]E_0(z)^-1 with E_0(z) := E_0(z) (s^-1/4, s^-1/4,s^1/4,s^1/4) and M^(1) given in (<ref>). §.§ Local parametrix near z=-1 Near z=-1, we intend to find an RH problem as follows. (a) P^(-1)(z) is defined and analytic in D(-1, ε)∖Γ_ S, where Γ_ S is defined in (<ref>). (b) For z ∈ D(-1, ε) ∩Γ_ S, P^(-1)(z) satisfies the jump condition P^(-1)_+(z)= P^(-1)_-(z)J_ S(z), where J_ S(z) is given in (<ref>). (c)As s → +∞, we have the matching condition P^(-1)(z)=( I+ (s^-3/2) ) N(z), z ∈∂ D(-1, ε), where N(z) is given in (<ref>). This local parametrix can be constructed by using the confluent hypergeometric parametrix Φ^() introduced in Appendix <ref>. To proceed, we introduce the function f_-1(z) = -2 s^-3/2θ_2(sz) - θ_2,+(-s), z > 0, -θ_2(sz) + θ_2,-(-s), z < 0. By (<ref>), it is easily seen that f_-1(z) = 2(r_2 - s_2/s)(z+1) + ((z+1)^2), z → -1. Thus, f_-1(z) is a conformal mapping near z=-1 for large positive s. 
We set P^(-1)(z) = E_-1(z) [ 1 0 0 0; 0 Φ^()_11(s^3/2 f_-1(z); -β) 0 Φ^()_12(s^3/2 f_-1(z); -β); 0 0 1 0; 0 Φ^()_21(s^3/2 f_-1(z); -β) 0 Φ^()_22(s^3/2 f_-1(z); -β) ] ×(1, e^θ_2(sz) - βπ/2, 1, e^-θ_2(sz) + βπ/2) × I-e^-θ_1(sz)+θ_2(sz)+2 τ sz E_1,2+e^-θ_1(sz)+θ_2(sz)-2 τ szE_4,3, z ∈ (Ω_2∪Ω_5)∩ D(-1, ε) ∖∂Ω_L,±, I, z ∈ (Ω_3∪Ω_4)∩ D(-1, ε), where β is given in (<ref>), f_-1 is defined in (<ref>) and E_-1(z) = N(z) ×(1, e^-θ_2,+(-s) + βπ/2s^-3β/2 f_-1(z)^-β, 1, e^θ_2,+(-s) -βπ/2s^3β/2 f_-1(z)^β), z>0, [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 -1 0 0 ](1, e^-θ_2,+(-s) - 3βπ/2s^-3β/2 f_-1(z)^-β, 1, e^θ_2,+(-s) +3βπ/2s^3β/2 f_-1(z)^β), z<0. The local parametrix P^(-1)(z) defined in (<ref>) solves RH problem <ref>. We first show the analyticity of E_-1(z). From its definition in (<ref>), the only possible jump is on (-1-ε, -1+ε). For x ∈ (-1-ε, -1), we have f_-1,+(x)^β = e^2 βπ f_-1,-(x)^β, it then follows from (<ref>) that E_-1,-(x)^-1 E_-1,+(x) = I. Similarly, for x ∈ (-1, -1+ε), we also have E_-1,-(x)^-1 E_-1,+(x) = I. Moreover, as z → -1, applying (<ref>), (<ref>) and (<ref>), we have E_-1(z) = E_-1(-1)(I+ E_-1(-1)^-1 E_-1'(-1)(z+1) +((z+1)^2)), where E_-1(-1) = 1/√(2) (I + 2 β E_3,1-2 β E_4,2) [ e^-βπ/2 0 - e^βπ/2 0; 0 e^-π/4 0 e^π/4^-1; - e^-βπ/2 0 e^βπ/2 0; 0 -e^-π/4 0 e^π/4^-1 ] and E_-1(-1)^-1 E_-1'(-1)=[ -β/2 0 -/4e^βπ 0; 0 β/4(2 + r_2+s_2/s/r_2-s_2/s) 0 /4^2; /4e^-βπ 0 β/2 0; 0 -^2/4 0 -β/4(2 + r_2+s_2/s/r_2-s_2/s) ] with = e^-θ_2,+(-s)-πβ/2 f_-1'(-1)^-β 4^-β s^-3β/2. Thus, the prefactor E_-1(z) is indeed analytic in D(-1, ε), the jump condition (<ref>) is satisfied due to this fact and the jump condition of Φ^() given in (<ref>). From the definitions of θ_1(z) and θ_2(z) in (<ref>) and (<ref>), it is clear that functions e^-θ_1(sz)+θ_2(sz)+2 τ sz and e^-θ_1(sz)+θ_2(sz)-2 τ sz in (<ref>) are exponentially small for z ∈ D(-1, ε) with large positive s. As s → +∞, the matching condition (<ref>) follows directly from (<ref>), (<ref>) and the asymptotic behavior of the confluent hypergeometric parametrix Φ^()(z) at infinity in (<ref>). This completes the proof of Proposition <ref>. §.§ Local parametrix near z=1 Near the endpoint z=1, the local parametrix P^(1)(z) reads as follows. (a) P^(1)(z) is defined and analytic in D(1, ε)∖Γ_ S, where Γ_ S is defined in (<ref>). (b) For z ∈ D(1, ε) ∩Γ_ S, P^(1)(z) satisfies the jump condition P^(1)_+(z)= P^(1)_-(z)J_ S(z), where J_ S(z) is given in (<ref>). (c)As s →∞, we have the matching condition P^(1)(z)=( I+ (s^-3/2) ) N(z), z ∈∂ D(1, ε), where N(z) is given in (<ref>). Again, the above RH problem can be solved by using the confluent hypergeometric parametrix Φ^(). To do this, we introduce the function f_1(z) = -2 s^-3/2θ_1(sz) - θ_1,+(s), z > 0, -θ_1(sz) + θ_1,-(s), z < 0. By (<ref>), it is easily seen that f_1(z) = 2(r_1 - s_1/s)(z-1) + ((z-1)^2), z → 1. Hence, f_1 is a conformal mapping near z=1 for large positive s. We then define P^(1)(z) = E_1(z) [ Φ^()_11(s^3/2 f_1(z); β) 0 Φ^()_12(s^3/2 f_1(z); β) 0; 0 1 0 0; Φ^()_21(s^3/2 f_1(z); β) 0 Φ^()_22(s^3/2 f_1(z); β) 0; 0 0 0 1 ] ×(e^θ_1(sz) - βπ/2, 1, e^-θ_1(sz) + βπ/2, 1) × I-e^θ_1(sz)-θ_2(sz)-2 τ sz E_2,1+e^θ_1(sz)-θ_2(sz)+2 τ szE_3,4, z ∈ (Ω_2∪Ω_5)∩ D(1, ε) ∖∂Ω_R,±, I, z ∈ (Ω_1∪Ω_6)∩ D(1, ε), where β is given in (<ref>), f_1 is defined in (<ref>) and E_1(z) = N(z) ×(e^-θ_1,+(s) + βπ/2s^3β/2 f_1(z)^β, 1, e^θ_1,+(s) -βπ/2s^-3β/2 f_1(z)^-β, 1), z>0, [ 0 0 1 0; 0 1 0 0; -1 0 0 0; 0 0 0 1 ](e^θ_1,-(s) +βπ/2s^3β/2 f_1(z)^β, 1, e^-θ_1,-(s) -βπ/2s^-3β/2 f_1(z)^-β,1), z<0. 
The following proposition can be proved in a manner similar to that of Proposition <ref>, and we omit the details here. The local parametrix P^(1)(z) defined in (<ref>) solves RH problem <ref>. For later use, we include the following asymptotic behavior of E_1(z) near z=1 by applying (<ref>), (<ref>) and (<ref>): E_1(z) = E_1(1)(I + E_1(1)^-1 E_1'(1)(z-1) +((z-1)^2)), z→ 1, where E_1(1) = 1/√(2) (I + 2 β E_3,1-2 β E_4,2) [ e^π/4 0 e^-π/4^-1 0; 0 e^-βπ/2 0 e^βπ/2; -e^π/4 0 e^-π/4^-1 0; 0 e^-βπ/2 0 e^βπ/2 ] and E_1(1)^-1 E_1'(1)=[ β/4(2 + r_1+s_1/s/r_1-s_1/s) 0 /4^2 0; 0 β/2 0 -/4e^βπ; -^2/4 0 -β/4(2 + r_1+s_1/s/r_1-s_1/s) 0; 0 /4e^-βπ 0 -β/2 ] with = e^-θ_1,+(s)+πβ/2 f_1'(1)^-β 4^β s^3 β /2. §.§ Final transformation We define the following final transformation R(z) = S(z) P^(0)(z)^-1, z ∈ D(0, ε), S(z) P^(-1)(z)^-1, z ∈ D(-1, ε), S(z) P^(1)(z)^-1, z ∈ D(1, ε), S(z) N(z)^-1, elsewhere. From the RH problems for S, N, P^(0) and P^(± 1), it follows that R satisfies the following RH problem. (a) R(z) is defined and analytic in ℂ∖Γ_ R, where Σ_ R:=Γ_ S∪∂ D(-1,ε) ∪∂ D(0,ε) ∪∂ D(1,ε) ∖{ℝ∪ D(-1,ε) ∪ D(0,ε) ∪ D(1,ε) }; see Figure <ref> for an illustration. (b) For z ∈Γ_ R, we have R_+(z) = R_-(z) J_ R (z), where J_ R(z) = P^(0)(z) N(z)^-1, z ∈∂ D(0, ε), P^(-1)(z) N(z)^-1, z ∈∂ D(-1, ε), P^(1)(z) N(z)^-1, z ∈∂ D(1, ε), N(z) J_ S(z) N(z)^-1, Γ_ R∖{∂ D(0, ε) ∪∂ D(± 1, ε) }, with J_ S(z) defined in (<ref>). (c) As z →∞, we have R(z) = I + R^(1)/z + (z^-2), where R^(1) is independent of z. As s→ +∞, we have the following estimate of J_ R(z) in (<ref>). For z ∈Γ_ R∖{∂ D(0, ε) ∪∂ D(± 1, ε) }, it is readily seen from (<ref>) and (<ref>) that there exists a positive constant c such that J_ R(z) = I + (e^-c s^3/2), for z ∈∂ D(± 1, ε), it follows from (<ref>) and (<ref>) that J_ R(z) = I + (s^-3/2), and for z ∈ D(0, ε), it follows from (<ref>) that J_ R(z) = I + J^(0)_1(z)/s^1/2 + J^(0)_2(z)/s+(s^-3/2), where J^(0)_1(z) and J^(0)_2(z) are given in (<ref>) and (<ref>). By <cit.>, the estimates (<ref>)–(<ref>) imply that R(z) = I + R_1(z)/s^1/2 + R_2(z)/s+ (s^-3/2), s → +∞, uniformly for z ∈ℂ∖Γ_ R. Moreover, by inserting (<ref>) into (<ref>), it follows from (<ref>)–(<ref>) that R_1 satisfies the following RH problem. (a) R_1(z) is analytic in ℂ∖∂ D(0, ε). (b) For z ∈∂ D(0, ε), we have R_1,+(z)- R_1,-(z)= J^(0)_1(z), where J^(0)_1(z) is given in (<ref>). (c) As z →∞, we have R_1(z) = (z^-1). By Cauchy's residue theorem, we have R_1(z) = 1/2 π∫_∂ D(0, ε) J^(0)_1(ζ)/ζ -zζ =_ζ =0 J^(0)_1(ζ)/z - J^(0)_1(z), z ∈ D(0, ε), _ζ =0 J^(0)_1(ζ)/z, elsewhere. Similarly, we have that R_2 in (<ref>) satisfies the following RH problem. (a) R_2(z) is analytic in ℂ∖∂ D(0, ε). (b) For z ∈∂ D(0, ε), we have R_2,+(z)- R_2,-(z)= R_1,-(z) J^(0)_1(z)+ J^(0)_2(z), where J^(0)_1(z) and J^(0)_2(z) are given in (<ref>) and (<ref>), respectively. (c) As z →∞, we have R_2(z) = (z^-1). By Cauchy's residue theorem, we have R_2(z) = 1/2 π∫_∂ D(0, ε) R_1,-(ζ) J^(0)_1(ζ)+ J^(0)_2(ζ)/ζ -zζ =_ζ =0( R_1,-(ζ) J^(0)_1(ζ)+ J^(0)_2(ζ))/z - R_1,-(z) J^(0)_1(z)- J^(0)_2(z), z ∈ D(0, ε), _ζ =0( R_1,-(ζ) J^(0)_1(ζ)+ J^(0)_2(ζ))/z, elsewhere. § ASYMPTOTIC ANALYSIS OF THE RH PROBLEM FOR X AS S → 0^+ In this section, we analyze the asymptotics for X as s → 0^+, which is relatively simpler than the case when s → +∞. Throughout this section, it is assumed that 0 < γ≤ 1. 
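In this regime no g-functions or lens openings are needed: as we will see, the jump of X on the shrinking interval (-s,s) produces only an O(s) correction, so it suffices to approximate X by a piecewise modification of the tacnode parametrix M away from the origin, and by a local parametrix built from M and an explicit scalar function in a fixed neighbourhood of the origin.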
§.§ Global parametrix As s → 0^+, the interval (-s, s) vanishes, and it is then easily seen that the RH problem for X is approximated by the following global parametrix N for |z|>δ > s: N(z) = M(z) J_1(z), z < φ and (z-s) > φ, J_5(z)^-1, z >- φ and (z-s) <- φ, J_2(z), z > π - φ and (z+s) < π -φ, J_4(z)^-1, z < φ-π and (z+s) >φ-π, I, elsewhere, where M is the solution of the tacnode RH problem <ref>, φ is given in (<ref>) and J_k, k=0,…, 5, denotes the jump matrix of M on the ray Γ_k as shown in Figure <ref>. §.§ Local parametrix For |z|<δ, which particularly includes a neighborhood of (-s, s), we approximate X by the following local parametrix. (a) P^(0)(z) is defined and analytic in D(0, δ) ∖Γ_X, where Γ_X is defined in (<ref>). (b) For z ∈ D(0, δ) ∩Γ_X, P^(0) satisfies the jump condition P^(0)_+(z) = P^(0)_-(z) J_X(z), where J_X(z) is given in (<ref>). (c) As s → 0^+, we have the matching condition P^(0)(z) = (I+ (s)) N(z), z ∈∂ D(0, δ), where N(z) is given in (<ref>). Recall that M is the analytic continuation of the restriction of M in the sector bounded by the rays Γ_1 and Γ_2 to the whole complex plane and Ω_k^(s), k=1,…,6, are six regions as shown in Figure <ref>. We look for a solution to the above RH problem of the following form: P^(0)(z) = M(z) [ 1 0 η(z/s) η(z/s); 0 1 η(z/s) η(z/s); 0 0 1 0; 0 0 0 1 ] J_1(z)^-1, z ∈Ω_1^(s), I, z ∈Ω_2^(s), J_2(z)^-1, z ∈Ω_3^(s), J_1(z)^-1J_0(z)^-1J_5(z)^-1J_4(z), z ∈Ω_4^(s), J_1(z)^-1J_0(z)^-1J_5(z)^-1, z ∈Ω_5^(s), J_1^-1(z)J_0^-1(z), z ∈Ω_6^(s), where η is a function to be determined later. In view of RH problem <ref>, it follows that η solves the following scalar RH problem. (a) η (z) is defined and analytic in ℂ∖ [-1, 1]. (b) For x ∈ (-1, 1), we have η_+(x) = η_-(x) -γ. (c) As z →∞, we have η(z) = (z^-1). By the Sokhotski-Plemelj formula, it is easy to find η (z) = - γ/2 πln(z-1/z+1). Since η(z/s) = (s) as s → 0^+ for z ∈∂ D(0, δ), we deduce the matching condition (<ref>) from (<ref>), (<ref>) and the fact that M is bounded near the origin. §.§ Final transformation We define the final transformation as R(z) = X(z) N(z)^-1, z ∈ℂ∖ D(0, δ), X(z) P^(0)(z)^-1, z ∈ D(0, δ). From the RH problems for X and P^(0), it is readily seen that R is analytic in D(0, δ) ∖{-s, s}. On account of the fact that η (z/s) = (ln(z+s)), z → -s (ln(z-s)), z → s, we conclude from (<ref>), (<ref>) and (<ref>) that both s and -s are removable singularities. Moreover, it is readily seen that R solves the following RH problem. (a) R(z) is defined and analytic in ℂ∖∂ D(0, δ). (b) For z ∈∂ D(0, δ), we have R_+(z) = R_-(z) J_R(z), where J_R(z) = P^(0)(z) N(z)^-1. (c) As z →∞, we have R(z) = I + (z^-1). According to (<ref>), it is easily seen that J_R(z) =I+ (s), as s → 0^+. Therefore, we have R(z)=I+ (s), ∂/∂ z R(z)=(s), s → 0^+, uniformly for z ∈ℂ∖∂ D(0, δ). § ASYMPTOTICS OF P_K(S) AND Q_K(S) FOR LARGE AND SMALL S We have defined the functions p_k(s) and q_k(s), k=1,…,6, in (<ref>), (<ref>) and (<ref>), which satisfy the equations (<ref>) and (<ref>). It is the aim of this section to derive large and small s asymptotics of these functions for 0< γ < 1. As we will see later, these asymptotic formulas are essential in the proof of large gap asymptotics of F(s;γ).
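Before turning to these asymptotics, we note that the Sokhotski-Plemelj solution for η used in the preceding section is easy to check numerically. The short Python sketch below writes the constant as -γ/(2π i), the normalisation that produces the additive jump η_+ = η_- - γ, and uses the arbitrary test value γ = 0.6; it is an illustrative check only, not part of the argument.

import numpy as np

gamma_par = 0.6                                   # arbitrary test value of gamma
eta = lambda z: -gamma_par / (2.0j * np.pi) * np.log((z - 1.0) / (z + 1.0))

x = np.linspace(-0.9, 0.9, 7)                     # points on the cut (-1, 1)
eps = 1e-9
jump = eta(x + 1j * eps) - eta(x - 1j * eps)
print(np.allclose(jump, -gamma_par))              # jump condition eta_+ = eta_- - gamma

print(abs(eta(1e7 + 1e7j)))                       # decays like 1/z at infinity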
For the purely imaginary parameter β given in (<ref>), there exist a family of special solutions to the system of differential equations (<ref>) and (<ref>) with the following asymptotic behaviors: as s → +∞, p_1(s) = e^-τ sγ |Γ(1-β)| s^1/4/√(2)πe^-βπ/2( sin(ϑ(s)-π/4) - 2 βcos(ϑ(s)-π/4))(1+(s^-1/2)), p_2(s) = e^-τ s√(2)γβ |Γ(1-β)| M^(1)_14/π s^1/4 e^-βπ/2sin(ϑ(s)-π/4)(1+(s^-1/2)), p_3(s) = e^-τ sγ |Γ(1-β)| /√(2)π s^1/4 e^-βπ/2cos(ϑ(s)-π/4)(1+(s^-1/2)), p_4(s) = e^-τ s×(s^-3/4), p_5(s) = r_1(-2 β s^1/2 + M^(1)_13+2β s^-1/2(M^(1)_11 -(M^(1)_13)^2-M^(1)_14M^(1)_23-M^(1)_33)) +(s^-1), p_6(s) = r_1 (2 βṀ^(1)_14 s^1/2 + Ṁ^(1)_12-4β^2(Ṁ^(1)_12+Ṁ^(1)_13Ṁ^(1)_14+Ṁ^(1)_14Ṁ^(1)_24+Ṁ^(1)_34)) + r_2 (2 βM^(1)_14 s^1/2 + M^(1)_12-4β^2(M^(1)_12+M^(1)_13M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34)) +(s^-1/2), q_1(s) =√(2) e^τ s|Γ(1-β)|e^-βπ/2cos(ϑ(s)-π/4)s^-1/4(1+(s^-1/2)), q_2(s) = e^τ s×(s^-3/4), q_3(s) = √(2) e^τ s|Γ(1-β)|e^-βπ/2s^1/4(2βcos(ϑ(s)-π/4)+sin(ϑ(s)-π/4))(1+(s^-1/2)), q_4(s) = -2√(2)β e^τ s|Γ(1-β)|M^(1)_23e^-βπ/2sin(ϑ(s)-π/4)s^-1/4(1+(s^-1/2)), q_5(s) =-4β^2 s + 2β M^(1)_13 s^1/2- M^(1)_11 + 4β^2(M^(1)_11 -(M^(1)_13)^2-M^(1)_14M^(1)_23-M^(1)_33) +2βṀ^(1)_13 s^1/2- Ṁ^(1)_11 + 4β^2(Ṁ^(1)_11 -(Ṁ^(1)_13)^2-Ṁ^(1)_14Ṁ^(1)_23-Ṁ^(1)_33)+(s^-1/2), q_6(s) =M^(1)_14-2β s^-1/2(M^(1)_12+M^(1)_13M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34)+(s^-1); as s → 0^+ p_1(s) =(1), p_2(s)=(1), p_3(s)=(1), p_4(s)=(1), p_5(s) = r_1 M^(1)_13+(s), p_6(s) = r_1 Ṁ^(1)_12 + r_2 M^(1)_12 +(s), q_1(s) =(1), q_2(s)=(1), q_3(s)=(1), q_4(s)=(1), q_5(s) =-Ṁ^(1)_11-M^(1)_11+(s), q_6(s) = M^(1)_14+(s), where Γ (z) is Euler's gamma function, ϑ (s) and M^(1) are given in (<ref>) and (<ref>), M^(1) and Ṁ^(1) are defined through (<ref>) and (<ref>). We split the proof into two parts, which deal with the large and small s asymptotics, respectively. §.§.§ Asymptotics of p_k(s) and q_k(s) as s → +∞ We first consider the asymptotics of p_k and q_k for k=1, … 4. Recall that [ q_1(s); q_2(s); q_3(s); q_4(s) ]=X_R,0(s) [ 1; 1; 0; 0 ] and [ p_1(s); p_2(s); p_3(s); p_4(s) ]=-γ/2 πX_R,0(s)^- T[ 0; 0; 1; 1 ], where X_R,0(s) = lim_z → 1, z ∈Ω_2^(1) X(sz) [ 1 0 γ/2 πln(sz-s) γ/2 πln(sz-s); 0 1 γ/2 πln(sz-s) γ/2 πln(sz-s); 0 0 1 0; 0 0 0 1 ]. Tracing back the transformations X →T→ S → R in (<ref>), (<ref>) and (<ref>), it follows that, for z ∈Ω_2^(1)∖Ω_R,+, X(sz) = (s^-1/4, s^-1/4, s^1/4, s^1/4)T(z) (e^-θ_1(sz)+τ sz, e^-θ_2(sz)-τ sz, e^θ_1(sz)+τ sz, e^θ_2(sz)-τ sz) =(s^-1/4, s^-1/4, s^1/4, s^1/4)S(z) (e^-θ_1(sz)+τ sz, e^-θ_2(sz)-τ sz, e^θ_1(sz)+τ sz, e^θ_2(sz)-τ sz) =(s^-1/4, s^-1/4, s^1/4, s^1/4)R(z) P^(1)(z) ×(e^-θ_1(sz)+τ sz, e^-θ_2(sz)-τ sz, e^θ_1(sz)+τ sz, e^θ_2(sz)-τ sz). This, together with (<ref>) and (<ref>), implies that X_R,0(s) = (s^-1/4, s^-1/4, s^1/4, s^1/4)R(1) E_1(1) ×lim_z → 1, z ∈Ω_2^(1)[[ Φ^()_11(s^3/2 f_1(z); β) 0 Φ^()_12(s^3/2 f_1(z); β) 0; 0 1 0 0; Φ^()_21(s^3/2 f_1(z); β) 0 Φ^()_22(s^3/2 f_1(z); β) 0; 0 0 0 1 ]. ×(e^θ_1(sz) - βπ/2, 1, e^-θ_1(sz) + βπ/2, 1)[ 1 0 0 0; -e^θ_1(sz)-θ_2(sz)-2 τ sz 1 0 0; 0 0 1 e^θ_1(sz)-θ_2(sz)+2 τ sz; 0 0 0 1 ] ×(e^-θ_1(sz)+τ sz, e^-θ_2(sz)-τ sz, e^θ_1(sz)+τ sz, e^θ_2(sz)-τ sz) ×.[ 1 0 γ/2 πln(sz-s) γ/2 πln(sz-s); 0 1 γ/2 πln(sz-s) γ/2 πln(sz-s); 0 0 1 0; 0 0 0 1 ]], where E_1 and f_1 are defined in (<ref>) and (<ref>), respectively. 
On account of the behavior of Φ^() near the origin given in (<ref>), it is readily seen that X_R,0(s) = (s^-1/4, s^-1/4, s^1/4, s^1/4)R(1) E_1(1)Υ_0 ×[ e^τ s 0 e^τ sγ/2 π(ln s - ln (e^-π/2 s^3/2 f_1'(1))) e^τ sγ/2 π(ln s - ln (e^-π/2 s^3/2 f_1'(1))); -e^-θ_2(s)-τ s e^-θ_2(s)-τ s 0 0; 0 0 e^τ s e^τ s; 0 0 0 e^θ_2(s)-τ s ], where Υ_0 = [ (Υ_0)_11 0 (Υ_0)_12 0; 0 1 0 0; (Υ_0)_21 0 (Υ_0)_22 0; 0 0 0 1 ] with Υ_0 defined in (<ref>). Inserting (<ref>) into the first equation of (<ref>), it follows that [ q_1(s); q_2(s); q_3(s); q_4(s) ]=e^τ s(s^-1/4, s^-1/4, s^1/4, s^1/4)R(1) E_1(1)Υ_0[ 1; 0; 0; 0 ]. We note from (<ref>), (<ref>) and (<ref>) that R(1) = I + s^-1/2 (I+2β E_3,1-2β E_4,2) [ 0 0 M^(1)_13 M^(1)_14; 0 0 M^(1)_23 M^(1)_24; 0 0 0 0; 0 0 0 0 ](I+2β E_3,1-2β E_4,2)^-1 + (s^-1). A combination of (<ref>), (<ref>), (<ref>) and (<ref>) shows q_1(s) =1/√(2) e^τ s s^-1/4( e^π/4- βπΓ (1- β)+ ^-1 e^-π/4Γ (1+ β))(I+(s^-1/2)) with defined in (<ref>). Since β =0, the two term inside the first bracket of (<ref>) are complex conjugate of each other, which implies that q_1(s) = √(2) e^τ s s^-1/4( e^π/4- βπΓ (1- β))(I+(s^-1/2)). The asymptotic formula (<ref>) then follows directly from the above equation. The asymptotics of q_k(s) for k=2, 3, 4, in (<ref>)–(<ref>) can be derived in a similar way. Similarly, by inserting (<ref>) into the second equation of (<ref>), we have [ p_1(s); p_2(s); p_3(s); p_4(s) ]=-γ/2 π e^-τ s(s^1/4, s^1/4, s^-1/4, s^-1/4)R(1)^- T E_1(1)^- TΥ_0^- T[ 0; 0; 1; 0 ]. The asymptotic formulas (<ref>)–(<ref>) of p_k(s), k=1, 2, 3, 4, then follows from (<ref>), (<ref>), (<ref>) and direct calculations. To derive the asymptotics of p_5(s), p_6(s), q_5(s) and q_6(s) defined in (<ref>)–(<ref>), we trace back the transformations X →T→ S → R in (<ref>), (<ref>), (<ref>), and obtain that for z∈ℂ∖{D(0,ε) ∪ D(± 1,ε)}, X(sz)=(s^-1/4, s^-1/4, s^1/4, s^1/4) R(z) N(z) ×(e^- θ_1(sz) + τ s z,e^- θ_2(sz) - τ s z,e^θ_1(sz) + τ s z,e^θ_2(sz) - τ s z). Taking z →∞ and comparing the coefficient of (1/z) term on both sides of the above formula, we have X^(1) = s (s^-1/4, s^-1/4, s^1/4, s^1/4)( R^(1)+ N^(1))(s^1/4, s^1/4, s^-1/4, s^-1/4), where N^(1) and R^(1) are given in (<ref>) and (<ref>), respectively. In view of (<ref>), it is readily seen that the first row of N^(1) is (2 β^2, 0, -2 β, 0). By (<ref>), (<ref>), (<ref>) and (<ref>), one has R^(1) = s^-1/2_ζ =0 J^(0)_1(ζ) + s^-1_ζ =0( R_1,-(ζ) J^(0)_1(ζ)+ J^(0)_2(ζ)) +(s^-3/2). Substituting (<ref>), the expressions of J^(0)_1 and J^(0)_2 in (<ref>) and (<ref>) into the above formula gives us R^(1)_11 = -2β M^(1)_13 s^-1/2 + s^-1(M^(1)_11-4β^2(M^(1)_11-(M^(1)_13)^2-M^(1)_14M^(1)_23-M^(1)_33))+ (s^-3/2 ), R^(1)_12 = 2β M^(1)_14 s^-1/2 + s^-1(M^(1)_12-4β^2(M^(1)_12+M^(1)_13M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34))+ (s^-3/2 ), R^(1)_13 = M^(1)_13 s^-1/2 + 2 β s^-1(M^(1)_11-(M^(1)_13)^2-M^(1)_14M^(1)_23-M^(1)_33)+ (s^-3/2 ), R^(1)_14 = M^(1)_14 s^-1/2 -2 β s^-1(M^(1)_12+M^(1)_13M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34)+ (s^-3/2 ), where M^(1) is given in (<ref>). Combining (<ref>) and (<ref>)–(<ref>), it follows that X^(1)_11 = 2 β^2 s - 2 β M^(1)_13 s^1/2 + M^(1)_11 - 4 β^2 (M^(1)_11 -(M^(1)_13)^2-M^(1)_14M^(1)_23-M^(1)_33) + (s^-1/2), X^(1)_12 = 2 β M^(1)_14 s^1/2 + M^(1)_12 - 4 β^2 (M^(1)_12 +(M^(1)_13)^2+M^(1)_14M^(1)_24+M^(1)_34) + (s^-1/2), X^(1)_13 = -2 β s^1/2 +M^(1)_13+ (s^-1/2), X^(1)_14 = M^(1)_14+ (s^-1/2). 
Substituting the above equations into (<ref>) and (<ref>) yields the asymptotics of p_5(s), p_6(s), q_5(s) and q_6(s) shown in (<ref>),(<ref>),(<ref>) and (<ref>). §.§.§ Asymptotics of p_k(s) and q_k(s) as s → 0^+ The small s asymptotics of p_k(s) and q_k(s), k=1, …, 6, are outcomes of the asymptotic analysis performed in Section <ref>. For |z|>δ, it follows from (<ref>) and (<ref>) that X(z)= R(z) N(z)=(I+(s))N(z). Thus, on account of (<ref>), (<ref>) and (<ref>), we have X^(1)=M^(1)(I+(s)). It is then straightforward to obtain asymptotics of p_5(s), p_6(s), q_5(s), q_6(s) in (<ref>), (<ref>), (<ref>) and (<ref>) by using (<ref>), (<ref>) and the symmetric relation of M^(1) established in (<ref>) and (<ref>). If |z| < δ, it follows from (<ref>), (<ref>) and (<ref>) that X(z)= R(z) P^(0)(z) = R(z) M(z) (I+γ/2 πln(z+s/z-s) [ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]), z ∈Ω_2^(s). This, together with (<ref>), (<ref>) and (<ref>), implies that X_R,0(s) = R(s) M(s) (I+γ/2 πln (2s) [ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]) =( M(0)+(s))(I+γ/2 πln (2s) [ 0 0 1 1; 0 0 1 1; 0 0 0 0; 0 0 0 0 ]), s → 0^+. As a consequence, it follows from the definitions of p_k(s) and q_k(s), k=1,…,4, in (<ref>) that [ p_1(s); p_2(s); p_3(s); p_4(s) ] = -γ/2 π( M(0)^- T + (s))[ 0; 0; 1; 1 ]=(1) and [ q_1(s); q_2(s); q_3(s); q_4(s) ] = ( M(0) + (s))[ 1; 1; 0; 0 ]=(1), as claimed in (<ref>) and (<ref>). This completes the proof of Proposition <ref>. Substituting large s asymptotics of X^(1)_11 and X^(1)_13 given in (<ref>) and (<ref>) into (<ref>)–(<ref>), it is readily seen that, as s → +∞, ∂/∂ s_1 F(s;γ, r_1, r_2,s_1,s_2, τ) = - 4 β s^1/2 + (s^-1/2), ∂/∂ s_2 F(s;γ, r_1, r_2,s_1,s_2, τ) = - 4 β s^1/2 + (s^-1/2), ∂/∂τ F(s;γ, r_1, r_2,s_1,s_2, τ) = (s^-1/2). The above estimates particularly imply that the the non-trivial constant term in the asymptotics of F(s;γ), γ∈[0,1) is independent of the parameters s_1, s_2 and τ. § PROOFS OF MAIN RESULTS §.§ Proof of Theorem <ref> On account of (<ref>) and (<ref>), it is easily seen that / s F(s;γ)=H(s), which leads to the integral representation of F claimed in (<ref>) after integrating with respect to t. It then remains to establish asymptotics of the Hamiltonian H, which will be discussed in what follows. §.§.§ Asymptotics of H(s) as s → +∞ for γ=1 We make use of the representation of H in (<ref>), which reads H(s)=-1/2 π∑_i=3^4 ∑_j=1^2 (X_R,1(s)-X_L,1(s))_ij, where X_R,1(s) is given in the local behavior of X near z = s in (<ref>) and X_L,1(s) is given in the local behavior of X near z = -s in (<ref>). Therefore, from (<ref>) and (<ref>) we obtain H(s)=-1/2 π[ lim_z → s∑_i=3^4∑_j=1^2 (X(z)^-1X'(z))_ij+ lim_z → -s∑_i=3^4∑_j=1^2 (X(z)^-1 X'(z))_ij], z ∈Ω_2^(s). By inverting the transformations X → T → S in (<ref>) and (<ref>), it follows from the above formula that H(s) =-1/2 π s[ lim_z → 1∑_i=3^4∑_j=1^2 (T(z)^-1T'(z))_ij+ lim_z → -1∑_i=3^4∑_j=1^2 (T(z)^-1 T'(z))_ij] = -1/2 π s[ lim_z → 1∑_i=3^4∑_j=1^2 ((e^s^3/2g_1(z)-τ s z, e^s^3/2g_2(z)+ τ s z, e^-s^3/2g_1(z)-τ s z, e^-s^3/2g_2(z) + τ s z).. . × S(z)^-1S'(z)(e^s^-3/2g_1(z)+τ s z, e^-s^3/2g_2(z)- τ s z, e^s^3/2g_1(z)+τ s z, e^s^3/2g_2(z) -τ s z))_ij +lim_z → -1∑_i=3^4∑_j=1^2 ((e^s^3/2g_1(z)-τ s z, e^s^3/2g_2(z)+ τ s z, e^-s^3/2g_1(z)-τ s z, e^-s^3/2g_2(z) + τ s z). . .× S(z)^-1S'(z)(e^s^-3/2g_1(z)+τ s z, e^-s^3/2g_2(z)- τ s z, e^s^3/2g_1(z)+τ s z, e^s^3/2g_2(z) -τ s z))_ij], where the limits are taken from Ω_2^(1). We next calculate the limits of (S(z)^-1S'(z))_ij, i=3,4, j=1,2, as z→± 1. 
If z is close to -1, it follows from (<ref>) and (<ref>) that S(z)^-1S'(z) = P^(-1)(z)^-1 R(z)^-1 R'(z) P^(-1)(z) + P^(-1)(z)^-1 (P^(-1))'(z) =𝒜_-1(z)^-1ℬ_-1(s^3f_-1(z))^-1E_-1(z)^-1 R(z)^-1 R'(z) E_-1(z)ℬ_-1(s^3f_-1(z))𝒜_-1(z) +𝒜_-1(z)^-1ℬ_-1(s^3f_-1(z))^-1E_-1(z)^-1E_-1'(z)ℬ_-1(s^3f_-1(z)𝒜_-1(z) +s^3f_-1'(z)𝒜_-1(z)^-1ℬ_-1(s^3f_-1(z))^-1ℬ_-1'(s^3f_-1(z))𝒜_-1(z) + 𝒜_-1(z)^-1𝒜_-1'(z), where E_-1 is given in (<ref>), 𝒜_-1(z) = [ 1 -e^-s^3/2(g_1(z)-g_2(z)) + 2 τ s z 0 0; 0 e^s^3/2g_2(z) 0 0; 0 0 1 0; 0 0 e^-s^3/2g_1(z) - 2 τ s z e^-s^3/2g_2(z) ] and ℬ_-1(z) = [ 1 0 0 0; 0 Φ^()_11(z) 0 Φ^()_12(z); 0 0 1 0; 0 Φ^()_21(z) 0 Φ^()_22(z) ]. Recalling the definitions of g_1(z) and g_2(z) in (<ref>) and (<ref>), we have g_1(-1) = √(2)/3 r_1 + 2^3/2s_1/s, g_2(-1) = 0. Thus, one can check directly that for an arbitrary 4 × 4 matrix M=(m_ij)_i,j=1^4, as s → +∞, lim_z → -1(𝒜_-1(z)^-1 M 𝒜_-1(z))_31 =m_31, lim_z → -1(𝒜_-1(z)^-1 M 𝒜_-1(z))_32 =m_32 + (e^-√(2)/3 r_1 s^3/2), lim_z → -1(𝒜_-1(z)^-1 M 𝒜_-1(z))_41 =m_41 + (e^-√(2)/3 r_1 s^3/2), lim_z → -1(𝒜_-1(z)^-1 M 𝒜_-1(z))_42 =m_42 + (e^-√(2)/3 r_1 s^3/2), and lim_z → 0(ℬ_-1(z)^-1 M ℬ_-1(z))_ij = m_ij, for i = 3,4, and j=1,2. Recall the following properties of the modified Bessel functions I_0 and K_0 (cf. <cit.>): I_0(z) = ∑_k=0^∞(z/2)^2k/(k!)^2, K_0(z) =-(ln(z/2) + γ_E)I_0(z) + (z^2), z → 0, where γ_E is the Euler's constant, we obtain from (<ref>) and (<ref>) that lim_z → 0(ℬ_-1(z)^-1ℬ_-1'(z))_31 = lim_z → 0(ℬ_-1(z)^-1ℬ_-1'(z))_32=lim_z → 0(ℬ_-1(z)^-1ℬ_-1'(z))_41=0, lim_z → 0(ℬ_-1(z)^-1ℬ_-1'(z))_42 = π/2. With the aid of local behavior of E_-1(z) near z=-1 in (<ref>) and the explicit expression of R_1'(-1) in (<ref>), we have lim_z → -1 E_-1(z)^-1R(z)^-1 R'(z) E_-1(z) = lim_z → -1 E_-1(z)^-1(R_1'(z)/s^3/2) E_-1(z) =π r_2/4(r_2 - 2s_2/s)E_4,2 + (s^-3/2), and lim_z → -1 E_-1(z)^-1 E_-1'(z) = [ 0 0 -/8 0; 0 - r_2/3(r_2 - 2s_2/s) 0 0; /8 0 0 0; 0 0 0 r_2/3(r_2 - 2s_2/s) ] + (s^-3/2). A combination of (<ref>) and (<ref>)–(<ref>) gives us lim_z → -1(S(z)^-1S'(z))_31 = /8 + (s^-3/2), lim_z → -1(S(z)^-1S'(z))_32 = (e^-√(2)/3r_1 s^3/2), lim_z → -1(S(z)^-1S'(z))_41 =(e^-√(2)/3r_1 s^3/2), lim_z → -1(S(z)^-1S'(z))_42 = π/2(r_2 - 2s_2/s)^2 s^3 + π/4 + (s^-1). Similarly, if z is close to 1, we use (<ref>) to obtain S(z)^-1S'(z) = P^(1)(z)^-1 R(z)^-1 R'(z) P^(1)(z) + P^(1)(z)^-1 (P^(1))'(z) =𝒜_1(z)^-1ℬ_1(s^3f_1(z))^-1E_1(z)^-1 R(z)^-1 R'(z) E_1(z)ℬ_1(s^3f_1(z))𝒜_1(z) +𝒜_1(z)^-1ℬ_1(s^3f_1(z))^-1E_1(z)^-1 E_1'(z)ℬ_1(s^3f_1(z)𝒜_1(z) +s^3f_1'(z)𝒜_1(z)^-1ℬ_1(s^3f_1(z))^-1ℬ_1'(s^3f_1(z))𝒜_1(z) + 𝒜_1(z)^-1𝒜_1'(z), where E_1 is defined in (<ref>), 𝒜_1(z) = [ e^s^3/2g_1(z) 0 0 0; -e^s^3/2(g_1(z)-g_2(z)) - 2 τ s z 1 0 0; 0 0 e^-s^3/2g_1(z) e^-s^3/2g_2(z) + 2 τ s z; 0 0 0 1 ] and ℬ_1(z)= [ Φ^()_11(z) 0 -Φ^()_12(z) 0; 0 1 0 0; -Φ^()_21(z) 0 Φ^()_22(z); 0 0 0 1 ]. As z → 1, from (<ref>) and (<ref>), it follows that g_1(1) =0, g_2(1) = √(2)/3 r_2 + 2^3/2s_2/s. For an arbitrary 4 × 4 matrix M=(m_ij)_i,j=1^4, as s → +∞, we have lim_z → 1(𝒜_1(z)^-1 M 𝒜_1(z))_31 = m_31 + (e^-√(2)/3r_2 s^3/2), lim_z → 1(𝒜_1(z)^-1 M 𝒜_1(z))_32 = m_32 + (e^-√(2)/3r_2 s^3/2), lim_z → 1(𝒜_1(z)^-1 M 𝒜_1(z))_41 = m_41 + (e^-√(2)/3r_2 s^3/2), lim_z → 1(𝒜_1(z)^-1 M 𝒜_1(z))_31 = m_42. and lim_z → 0(ℬ_1(z)^-1 M ℬ_1(z))_ij = m_ij, for i=3,4 and j = 1, 2. From the properties of the modified Bessel functions in (<ref>) and (<ref>), we see from (<ref>) and (<ref>) that lim_z → 0(ℬ_1(z)^-1ℬ_1'(z))_32 = lim_z → 0(ℬ_1(z)^-1ℬ_1'(z))_41=lim_z → 0(ℬ_1(z)^-1ℬ_1'(z))_42=0, lim_z → 0(ℬ_1(z)^-1ℬ_1'(z))_31 = -π/2. 
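The small-z expansions of I_0 and K_0 recalled above can be confirmed numerically; for instance, the following Python lines (an illustrative check only) compare K_0 with its stated leading behaviour near the origin.

import numpy as np
from scipy.special import i0, k0

z = 1e-3
leading = -(np.log(z / 2.0) + np.euler_gamma) * i0(z)   # -(ln(z/2) + gamma_E) I_0(z)
print(k0(z), leading, abs(k0(z) - leading))             # agreement up to O(z^2)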
By using the local behavior of E_1(z) in (<ref>) and the explicit expression of R_1'(1) in (<ref>), we have lim_z → 1 E_1(z)^-1 R(z)^-1 R'(z) E_1(z) = lim_z → 1 E_1(z)^-1(R_1'(z)/s^3/2 + (s^-3)) E_1(z) = π r_1/4(r_1 - 2s_1/s)E_3,1 + (s^-3/2), and lim_z → 1 E_1(z)^-1 E_1'(z) = [ r_1/3(r_1 - 2s_1/s) 0 0 0; 0 0 0 -/8; 0 0 -r_1/3(r_1 - 2s_1/s) 0; 0 /8 0 0 ] + (s^-3/2). Thus, we obtain after a direct calculation that lim_z → 1(S(z)^-1S'(z))_31 = π/2(r_1 - 2s_1/s)^2 s^3 + π/4 + (s^-1), lim_z → 1(S(z)^-1S'(z))_32 = (e^-√(2)/3r_2 s^3/2), lim_z → 1(S(z)^-1S'(z))_41 =(e^-√(2)/3r_2 s^3/2), lim_z → 1(S(z)^-1S'(z))_42 = /8 + (s^-3/2). Substituting equations (<ref>)–(<ref>) and (<ref>)–(<ref>) into (<ref>), it follows from (<ref>) and (<ref>) that H(s) = -r_1^2 + r_2^2/4s^2 + (r_1s_1 + r_2s_2)s - s_1^2-s_2^2 - 1/4s + (s^-2), s → +∞, as shown in (<ref>). §.§.§ Asymptotics of H(s) as s → +∞ for 0≤γ<1 If γ∈ [0, 1), one can theoretically obtain asymptotics of H(s) by combining (<ref>) and the large s asymptotics of p_k(s) and q_k(s), k=1,…,6, established in Theorem <ref>. This approach, however, is too complicated. Alternatively, we again turn to use the relation (<ref>), which can be written as H(s) = -γ/2 π[ 0 0 1 1 ]( X_R,1(s) - X_L,1(s))[ 1; 1; 0; 0 ], where X_R,1(s) and X_L,1(s) are given in (<ref>) and (<ref>). From (<ref>) and (<ref>), it follows that X_R,1(s) = X_R,0(s)^-1/slim_z → 1, z ∈Ω_2^(1)[X(sz) [ 1 0 γ/2 πln(sz-s) γ/2 πln(sz-s); 0 1 γ/2 πln(sz-s) γ/2 πln(sz-s); 0 0 1 0; 0 0 0 1 ]]' Note that (see (<ref>)) [ 0 0 1 1 ] X_R,0(s)^-1 = [ 0 0 e^-τ s 0 ]Υ_0^-1 E_1(1)^-1 R(1)^-1 (s^1/4, s^1/4, s^-1/4, s^-1/4), and by tracing back the transformations X →T→ S → R in (<ref>), (<ref>) and (<ref>), lim_z → 1, z ∈Ω_2^(1)[X(sz) [ 1 0 γ/2 πln(z-s) γ/2 πln(z-s); 0 1 γ/2 πln(z-s) γ/2 πln(z-s); 0 0 1 0; 0 0 0 1 ]]'[ 1; 1; 0; 0 ] = (s^-1/4, s^-1/4, s^1/4, s^1/4) ×lim_z → 1, z ∈Ω_2[ R(z) P^(1) (z) (e^- θ_1(sz) + τ s z,e^- θ_2(sz) - τ s z,e^θ_1(sz) + τ s z,e^θ_2(sz) - τ s z) ]'[ 1; 1; 0; 0 ] = (s^-1/4, s^-1/4, s^1/4, s^1/4) ( R'(1) E_1 (1)Υ_0 e^τ s + R(1) E_1' (1)Υ_0 e^τ s. . +s^3/2 f_1'(1) R(1) E_1(1)Υ_0 Υ_1 e^τ s + τ s R(1) E_1(1)Υ_0e^τ s)[ 1; 0; 0; 0 ], where Υ_0 is defined in (<ref>) and Υ_1 is a 4× 4 matrix related to Υ_1 in (<ref>) with (Υ_1)_31=βπ e^-βπ/sin(βπ ). It follows from the above three equations that -γ/2 π[ 0 0 1 1 ] X_R,1(s) [ 1; 1; 0; 0 ] =-γ/2 π s(Υ_0^-1 E_1(1)^-1 R(1)^-1 R'(1) E_1 (1)Υ_0. .+Υ_0^-1 E_1(1)^-1 E_1' (1)Υ_0+s^3/2 f_1'(1) Υ_1 + τ s I )_31. To calculate the first term in the bracket, we substitute (<ref>), (<ref>) and (<ref>) into the above equation and obtain (Υ_0^-1 E_1(1)^-1 R(1)^-1 R'(1) E_1 (1)Υ_0)_31=(s^-1/2). From the explicit expression of Υ_0 in (<ref>) and E_1(1)^-1 E_1' (1) in (<ref>), one has (Υ_0^-1 E_1(1)^-1 E_1' (1)Υ_0)_31 =-3 β e^-βπ/2Γ (1+β) Γ (1-β) -/4(Γ(1+β)^2/^2 + ^2 Γ(1-β)^2 e^-2 βπ) + (s^-1), where is given in (<ref>). Moreover, from <cit.>, we have Γ (β) Γ (1-β) = π/sin (βπ), and hence Γ (1+β) Γ (1-β) = |Γ (1+β)|^2=βπ/sin (βπ). Therefore, we get (Υ_0^-1 E_1(1)^-1 E_1' (1)Υ_0)_31=-3 β^2 π e^-βπ/2 sin (βπ) -βπ e^-βπ/2sin (βπ)cos (2 ϑ (s))+ (s^-1). where ϑ (s) is given in (<ref>). We see from (<ref>) that (s^3/2 f_1'(1) Υ_1)_31 = 2 βπ e^-βπ/sin (βπ)(r_1 s^3/2-s_1 s^1/2) and (τ s I )_31 = 0. 
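The Gamma-function identities used above are easy to verify numerically for purely imaginary β; for example, in Python with the arbitrary test value β = 0.37i:

import numpy as np
from scipy.special import gamma

beta = 0.37j                                      # purely imaginary test value
lhs = gamma(1 + beta) * gamma(1 - beta)
rhs = beta * np.pi / np.sin(beta * np.pi)         # equals pi*0.37/sinh(0.37*pi), a real number
print(np.allclose(lhs, rhs))
print(np.allclose(abs(gamma(1 + beta))**2, rhs.real))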
It is then straightforward to calculate the following result by substituting (<ref>), (<ref>) (<ref>), (<ref>) and (<ref>) into (<ref>) -γ/2 π[ 0 0 1 1 ] X_R,1(s) [ 1; 1; 0; 0 ] =2 β r_1 s^1/2 - 2 β s_1 s^-1/2 - 3 β^2/2 s^-1 - β/2cos(2 ϑ (s)) s^-1 + (s^-3/2), where ϑ (s) is given in (<ref>). Similarly, from (<ref>) and (<ref>), it follows that X_L,1(s) = X_L,0(s)^-1/s ×lim_z → -1, z ∈Ω_2^(1)[X(sz)[ 0 1 0 0; 1 0 0 0; 0 0 0 -1; 0 0 -1 0 ][ 1 0 γ/2 πln(-sz-s) γ/2 πln(-sz-s); 0 1 γ/2 πln(-sz-s) γ/2 πln(-sz-s); 0 0 1 0; 0 0 0 1 ]]' with [ 0 0 1 1 ] X_L,0(s)^-1 = [ 0 0 0 e^-τ s-βπ ]Υ_0^-1 E_-1(-1)^-1 R(-1)^-1 (s^1/4, s^1/4, s^-1/4, s^-1/4), where Υ_0=[ 1 0 0 0; 0 (Υ_0)_11 0 (Υ_0)_12; 0 0 1 0; 0 (Υ_0)_21 0 (Υ_0)_22 ]. Tracing back the transformations X →T→ S → R in (<ref>), (<ref>) and (<ref>) yields lim_z → -1, z ∈Ω_2^(1)[X(sz) [ 0 1 γ/2 πln(z-s) γ/2 πln(z-s); 1 0 γ/2 πln(z-s) γ/2 πln(z-s); 0 0 0 -1; 0 0 -1 0 ]]'[ 1; 1; 0; 0 ] = (s^-1/4, s^-1/4, s^1/4, s^1/4) ×lim_z → -1, z ∈Ω_2[ R(z) P^(-1) (z) (e^- θ_1(sz) + τ s z,e^- θ_2(sz) - τ s z,e^θ_1(sz) + τ s z,e^θ_2(zs) - τ s z) ]'[ 1; 1; 0; 0 ] = (s^-1/4, s^-1/4, s^1/4, s^1/4) ( R'(-1) E_-1 (-1)Υ_0 e^τ s -βπ + R(-1) E_-1' (-1)Υ_0 e^τ s -βπ. . +s^3/2 f_-1'(-1) R(-1) E_-1(-1)Υ_0 Υ_1 e^τ s -βπ - τ s R(-1) E_-1(-1)Υ_0e^τ s - βπ)[ 0; 1; 0; 0 ], where Υ_0 is given in (<ref>) and Υ_1 is a 4× 4 matrix related to Υ_1 in (<ref>) with (Υ_1)_42=βπ e^-βπ/sin(βπ ). With the aid of the explicit expressions of E_-1(-1) and E_-1'(-1) in (<ref>) and (<ref>), we obtain from (<ref>) and (<ref>) that γ/2 π[ 0 0 1 1 ] X_L,1(s) [ 1; 1; 0; 0 ] = 2 β r_2 s^1/2 - 2 β s_2 s^-1/2 - 3 β^2/2 s^-1 - β/2cos(2 ϑ (s)) s^-1+ (s^-3/2), where ϑ(s) is given in (<ref>). Inserting (<ref>) and (<ref>) into (<ref>), we arrive at H(s) = 2 β(r_1 + r_2) s^1/2 - 2 β (s_1+s_2) s^-1/2 - (3 β^2 + β/2cos(2 ϑ(s))+β/2cos(2 ϑ(s)))s^-1 + (s^-3/2), as required. §.§.§ Asymptotics of H(s) as s → 0^+ Recall the symmetric relation of M in (<ref>), it is easily seen from (<ref>) and (<ref>) that, as s → 0^+, p_1(s) = -p_2(s)+ (s), p_3(s) =p_4(s)+ (s), q_1(s) = q_2(s) + (s), q_3(s) = -q_4(s)+ (s). This, together with (<ref>), (<ref>) and the relation (<ref>), implies that p_2(s)q_1(s)+p_1(s) q_2(s)-p_4(s)q_3(s)-p_3(s) q_4(s) =p_2(s)q_2(s)+p_1(s) q_1(s)+p_4(s) q_4(s)+p_3(s) q_3(s)+ (s)=(s), or equivalently, p_2(s) q_1(s) +p_1(s) q_2(s)-p_4(s)q_3(s) -p_3(s)q_4(s)=(s). Inserting the above two estimates into the expression of H(s) in (<ref>), it then follows from the small s asymptotics of p_k and q_k, k=1…,6, established in Theorem <ref> that H(s) = (1), s→ 0^+. This finishes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> Large s asymptotics of F(s; 1) in (<ref>) follows directly from (<ref>) and the integral representation of F in (<ref>). To establish large s asymptotics of F(s; γ) for γ∈ [0, 1), we note from (<ref>) that the first few terms till the constant term in the expansion are independent of τ. We thus simply take τ=0 and obtain from (<ref>) that ∫_0^s H(t) t =∫_0^s (∑_k=1^6(p_k(t)q_k'(t)+p_k(t)q_k'(t)) -H(t)) t + 1/3 [2tH(t) + p_1(t)q_1(t) + p_2(t)q_2(t)+p_1(t)q_1(t) + p_2(t)q_2(t) -2p_5(t)q_5(t)-2p_5(t) q_5(t)-p_6(t)q_6(t) -p_6(t) q_6(t)+2s_1/r_1 p_5(t) + 2 s_2/r_2p_5(t)]_t=0^s. The integral on the right hand side of the above equation can be evaluated with the aid of (<ref>), where also holds if we replace γ by β. 
By integrating both sides of (<ref>) with respect to s, it is readily seen that ∂/∂β∫_0^s (∑_k=1^6(p_k(t)q_k'(t)+p_k(t)q_k'(t)) -H(t)) t = ∑_k=1^6(p_k(s)∂/∂βq_k(s)+ p_k(s)∂/∂βq_k(s)-p_k(0)∂/∂βq_k(0)- p_k(0)∂/∂βq_k(0)) =∑_k=1^6(p_k(s)∂/∂βq_k(s)+ p_k(s)∂/∂βq_k(s)), where the second equality follows from small s asymptotics of p_k and q_k, k=1,…,6, established in Theorem <ref>. Inserting the above formula into (<ref>), with the aids of the relations (<ref>), (<ref>) and the small s asymptotics of p_5, p_6, q_5, q_6 in (<ref>), (<ref>), (<ref>), (<ref>), we arrive at ∫_0^s H(t) t = ∫_0^β∑_k=1^6(p_k(s)∂/∂β'q_k(s)+ p_k(s)∂/∂β'q_k(s)) β'+1/3 [2tH(t) + p_1(t)q_1(t) + p_2(t)q_2(t)+p_1(t)q_1(t) + p_2(t)q_2(t) -2p_5(t)q_5(t)-2p_5(t) q_5(t)-p_6(t)q_6(t) -p_6(t) q_6(t)+2s_1/r_1 p_5(t) + 2 s_2/r_2p_5(t)]_t=0^s =∫_0^β∑_k=1^6(p_k(s)∂/∂β'q_k(s)+ p_k(s)∂/∂β'q_k(s)) β' + 1/3 [2sH(s) + p_1(s)q_1(s) + p_2(s)q_2(s)+p_1(s)q_1(s) + p_2(s)q_2(s) -2p_5(s)q_5(s)-2p_5(s) q_5(s)-p_6(s)q_6(s) -p_6(s) q_6(s)+2s_1/r_1 p_5(s) + 2 s_2/r_2p_5(s)-4 r_1 M_11^(1) M_13^(1)-4 r_2 M_11^(1)M_13^(1) + ( M_14^(1)+ M_14^(1))( r_1 M_12^(1) + r_2 M_12^(1))-2 s_1 M_13^(1)-2 s_2 M_13^(1)]. We next estimate the terms p_k(s) ∂/∂βq_k(s), k=1,…,6, as s→ +∞ by using Theorem <ref>. Substituting (<ref>) and (<ref>) into p_1(s) ∂/∂βq_1(s), it follows that p_1(s) ∂/∂βq_1(s)=p_1(s) q_1(s)∂/∂βln q_1(s) =β(cos (2ϑ(s))+2 βsin(2ϑ(s))+2β)(-π/2 + ∂/∂βln |Γ(1-β)|-tan(ϑ(s)-π/4)∂/∂βϑ(s)) +(ln s/s^1/2). Similarly, we have p_2(s) ∂/∂βq_2(s) = (ln s/s) from (<ref>) and (<ref>). With the help of (<ref>) and (<ref>), we obtain p_3(s) ∂/∂βq_3(s)=p_3(s) q_3(s)∂/∂βln q_3(s) =2 β(2 cos^2(ϑ(s)-π/4)(1-βπ/2 + β∂/∂βln |Γ(1-β)|-βtan(ϑ(s)-π/4)∂/∂βϑ(s)). .+1/2sin(2 ϑ(s)-π/2)(π/2+∂/∂βln |Γ(1-β)|+(ϑ(s)-π/4)∂/∂βϑ(s)))+(ln s/s^1/2), and by (<ref>) and (<ref>), p_4(s) ∂/∂βq_4(s) = (ln s/s). Adding the above four formulas together, it follows from a direct calculation that ∑_k=1^4 p_k(s) ∂/∂βq_k(s) = -βcos (2ϑ(s))tan(ϑ(s)-π/4)∂/∂βϑ(s)+2β(sin(2 ϑ(s))+1) -βcos (2ϑ(s))(ϑ(s)-π/4)∂/∂βϑ(s)+(s^-1/2) =2β∂/∂βϑ(s) + 2 β(sin(2 ϑ(s))+1)+(ln s/s^1/2). To estimate p_5(s) ∂/∂βq_5(s), we refer to (<ref>) and rewrite it as p_5(s) ∂/∂βq_5(s) =1/ r_1p_5(s)∂/∂β( r_2 q_6(s)q_6(s) - p_5(s)^2/ r_1 - p_3(s)q_1(s)+p_4(s) q_2(s)- s_1) =r_2/r_1 p_5(s)∂/∂β(q_6(s)q_6(s))+2/3r_1^2∂/∂βp_5(s)^3-1/ r_1 p_5(s)∂/∂β(p_3(s)q_1(s))+(ln s/s^3/2). Note that, with the aid of (<ref>) and (<ref>), r_2/r_1∫_0^βp_5(s)∂/∂β'(q_6(s)q_6(s)) β' =2 r_2 β^2(M^(1)_14(M^(1)_12 +M^(1)_13 M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34)+M^(1)_14(M^(1)_12+M^(1)_13M^(1)_14.. ..+M^(1)_14M^(1)_24+M^(1)_34)) + (ln s/s^1/2), and 2/3r_1^2∫_0^β∂/∂β'p_5(s)^3 β' = 2/3r_1^2p_5(s)^3+ r_1/3 (M^(1)_13)^3 + (ln s/s^1/2). Moreover, we obtain from integration by parts and the asymptotic formulas in (<ref>), (<ref>) and (<ref>) that -1/ r_1∫_0^βp_5(s)∂/∂β'(p_3(s)q_1(s)) β' =-1/ r_1p_5(s)p_3(s)q_1(s) -∫_0^β4β'cos^2(ϑ(s)-π/4) β' + (ln s/s^1/2). Therefore, it is readily seen from the above four formulas that ∫_0^βp_5(s) ∂/∂β'q_5(s)β' =2 r_2 β^2(M^(1)_14(M^(1)_12 +M^(1)_13 M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34)+M^(1)_14(M^(1)_12+M^(1)_13M^(1)_14+M^(1)_14M^(1)_24.. ..+M^(1)_34))+ 2/3r_1^2p_5(s)^3+ r_1/3 (M^(1)_13)^3-1/ r_1p_5(s)p_3(s)q_1(s)-∫_0^β4β'cos^2(ϑ(s)-π/4) β' + (ln s/s^1/2). At last, we observe from (<ref>) and (<ref>) that ∫_0^βp_6(s) ∂/∂β'q_6(s)β' =-2β^2 r_1 M^(1)_14(M^(1)_12 +M^(1)_13 M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34) -2β^2 r_2M^(1)_14(M^(1)_12 +M^(1)_13 M^(1)_14+M^(1)_14M^(1)_24+M^(1)_34) + (ln s/s^1/2). 
A combination of (<ref>), (<ref>), (<ref>), (<ref>) and Theorem <ref> gives us that ∫_0^s H(t) t =∫_0^β2 β' ∂/∂β'(ϑ(s) + ϑ(s))β' + 4/3β (r_1+r_2)s^3/2 -4 β (s_1+s_2)s^1/2-2β^2+(ln s/s^1/2). Recall the definition of ϑ(s) in (<ref>), we have ∫_0^β2 β' ∂/∂β'ϑ(s) β' =∫_0^β2 β' (∂/∂β'Γ(1+β) + 3/2ln s + ln(8r_1-s_1/s)) β' =ln G(1+β)G(1-β)-3/2β^2 ln s+β^2 -β^2 ln(8r_1)+(s^-1). Inserting (<ref>) into (<ref>), we obtain the large gap asymptotic formula (<ref>) for γ∈ [0, 1) with the error term (ln s/s^1/2). In fact, integrating on the both sides of (<ref>), one can find the error term in (<ref>) is (s^-1/2) instead of (ln s/s^1/2). This completes the proof of Theorem <ref>. §.§ Proof of Corollary <ref> It is readily seen that, as ν→ 0, 𝔼(e^-2πν N(s)) = 1-2 π𝔼(N(s)) ν + 2 π^2 𝔼(N(s)^2) ν^2 + (ν^3). Then we have F(s; 1-e^-2πν) =ln𝔼(e^-2πν N(s)) =-2 π𝔼(N(s)) ν + 2 π^2 (N(s)) ν^2 + (ν^3), ν→ 0, where F ie defined in (<ref>). In view of (<ref>), we have F(s; 1-e^-2πν) = -2 πμ (s) ν + 2 π^2(σ (s)^2 + ln (64r_1 r_2)/2 π^2)ν^2 +2 ln (G(1+ν)G(1-ν)) + (s^-1/2), s→ +∞, where the functions μ (s) and σ (s)^2 are defined in (<ref>). Note that G(1+z) = 1 + ln (2π)-1/2 z+((ln (2π)-1)^2/8-1 + γ_E/2) z^2 + (z^3), z → 0, where γ_E is Euler's constant, we then obtain (<ref>) and (<ref>) from (<ref>) and (<ref>). Note that the additional factors ln s and (ln s)^2 appear in the error terms due to the derivative with respect to ν. Finally, since σ (s)^2 → +∞ for large positive s, it is easily shown that 𝔼(e^t ·N(s) - μ (s)/√(σ (s)^2)) → e^t^2/2, s → +∞, which implies the convergence of N(s) - μ (s)/√(σ (s)^2) in distribution to the normal law 𝒩 (0,1). The upper bound (<ref>) follows directly from a combination of <cit.>, (<ref>) and (<ref>). This completes the proof of Corollary <ref>. § THE BESSEL PARAMETRIX Let I, II, III, be the three regions shown in Figure <ref>, the Bessel parametrix Φ^() is defined as follows: Φ^()(z) = [ I_0(z^1/2) /π K_0(z^1/2); π z^1/2I_0'(z^1/2) -z^1/2K_0'(z^1/2) ], z ∈I, [ I_0(z^1/2) /π K_0(z^1/2); π z^1/2I_0'(z^1/2) -z^1/2K_0'(z^1/2) ][ 1 0; -1 1 ], z ∈II, [ I_0(z^1/2) /π K_0(z^1/2); π z^1/2I_0'(z^1/2) -z^1/2K_0'(z^1/2) ][ 1 0; 1 1 ], z ∈III, where I_0(z) and K_0(z) denote the modified Bessel function of order 0 (cf. <cit.>) and the principle branch is taken for z^1/2. By <cit.>, Φ^() is a solution of the following RH problem. §.§ RH problem for Φ^() * (a) Φ^()(z) is analytic in ℂ∖{∪_j=1^3Σ_j∪{0}}; where the contours Σ_j, j=1,…,3, are indicated in Figure <ref>. * (b) For z ∈Σ_j, j=1, 2, 3, Φ^()(z) satisfies the jump condition Φ_+^()(z) = Φ_ -^()(z) [ 1 0; 1 1 ], z ∈Σ_1, [ 0 1; -1 0 ], z ∈Σ_2, [ 1 0; 1 1 ], z ∈Σ_3. * (c) As z →∞, Φ^()(z) satisfies the following asymptotic behavior: Φ^()(z) = (π^2 z)^-σ_3/4/√(2)[ 1 ; 1 ](I + 1/8 z^ 1/2[ -1 -2; -2 1 ] + (z^-1)) e^z^1/2σ_3. * (d) As z → 0, we have Φ^()(z)=(ln|z|). § THE CONFLUENT HYPERGEOMETRIC PARAMETRIX The confluent hypergeometric parametrix Φ^()(z)=Φ^()(z;β) with β being a parameter is a solution of the following RH problem. §.§ RH problem for Φ^() * (a) Φ^()(z) is analytic in ℂ∖{∪^6_j=1Σ_j∪{0}}, where the contours Σ_j, j=1,…,6, are indicated in Figure <ref>. * (b) Φ^() satisfies the jump condition Φ^()_+(z)=Φ^()_-(z) J_j(z), z ∈Σ_j, j=1,…,6, where J_1(z) = [ 0 e^-βπ; - e^βπ 0 ], J_2(z) = [ 1 0; e^βπ 1 ], J_3(z) = [ 1 0; e^ -βπ 1 ], J_4(z) = [ 0 e^βπ; - e^-βπ 0 ], J_5(z) = [ 1 0; e^- βπ 1 ], J_6(z) = [ 1 0; e^βπ 1 ]. 
* (c) As z→∞, Φ^()(z) satisfies the following asymptotic behavior: Φ^()(z)=(I + (z^-1)) z^-βσ_3e^- z/2σ_3{[ I, 0< z <π,; [ 0 -e^βπ; e^-βπ 0 ], π< z<3π/2,; [ 0 -e^-βπ; e^βπ 0 ], -π/2< z<0. ]. * (d) As z→ 0, we have Φ^()(z)=(ln |z|). It follows from <cit.> that the above RH problem can be solved explicitly in terms of the confluent hypergeometric functions. Moreover, as z → 0, we have Φ^()(z) e^-βπ/2σ_3 = Υ_0( I+ Υ_1z+(z^2) ) [ 1 -γ/2πln (e^-π/2z); 0 1 ], for z belonging to the region bounded by the rays Σ_2 and Σ_3, where γ=1-e^2βπ, Υ_0 =[ Γ(1-β) e^-βπ 1/Γ(β)( Γ'(1-β)/Γ(1-β) +2γ_E); Γ(1+β) -e^βπ/Γ(-β)( Γ'(-β)/Γ(-β) +2γ_E) ] with γ_E being the Euler's constant, and (Υ_1)_21=βπ e^-βπ/sin(βπ ). § ACKNOWLEDGEMENTS This work was partially supported by National Natural Science Foundation of China under grant numbers 12271105, 11822104, and “Shuguang Program” supported by Shanghai Education Development Foundation and Shanghai Municipal Education Commission. 10 AS76 M. J. Ablowitz and H. Segur, Asymptotic solutions of the Korteweg-deVries equation, Stud. Appl. Math. 57 (1976/77), 13–44. ADV11 M. Adler, J. Delépine, P. van Moerbeke and P. Vanhaecke, A PDE for non-intersecting Brownian motions and applications, Adv. Math. 226 (2011), 1715–1755. ADV10 M. Adler, J. Delépine and P. van Moerbeke, Dysons nonintersecting Brownian motions with a few outliers, Comm. Pure Appl. Math. 62 (2009), 334–395. AFV13 M. Adler, P. L. Ferrari and P. van Moerbeke, Non-intersecting random walks in the neighborhood of a symmetric tacnode, Ann. Probab. 41 (2013), 2599–2647. AJV22 M. Adler, K. Johansson and P. van Moerbeke, A singular Toeplitz determinant and the discrete tacnode kernel for skew-Aztec rectangles, Ann. Appl. Probab. 32 (2022), 1234–1294. AJV14 M. Adler, K. Johansson and P. van Moerbeke, Double Aztec diamonds and the tacnode process, Adv. Math. 252 (2014), 518–571. AOV10 M. Adler, N. Orantin and P. van Moerbeke, Universality for the Pearcey process, Phys. D 239 (2010), 924–941. AV23 M. Adler and P. van Moerbeke, Double interlacing in random tiling models, J. Math. Phys. 64 (2023), 033509. AV07 M. Adler and P. van Moerbeke, PDEs for the gaussian ensemble with external source and the Pearcey distribution, Comm. Pure Appl. Math. 60 (2007), 1261–1292. ABK05 A. Aptekarev, P. Bleher and A. B. J. Kuijlaars, Large n limit of Gaussian random matrices with external source. II, Comm. Math. Phys. 259 (2005), 367–389. BRD08 J. Baik, R. Buckingham and J. DiFranco, Asymptotics of Tracy-Widom distributions and the total integral of a Painlevé II function, Comm. Math. Phys. 280 (2008), 463–497. BW83 E. Basor and H. Widom, Toeplitz and Wiener-Hopf determinants with piecewise continuous symbols, J. Funct. Anal. 50 (1983), 387–413. BC13 M. Bertola and M. Cafasso, The gap probabilities of the tacnode, Pearcey and Airy point processes, their mutual relationship and evaluation, Random Matrices Theory Appl. 2 (2013), 1350003. BK07 P. Bleher and A. B. J. Kuijlaars, Large n limit of Gaussian random matrices with external source. III. Double scaling limit, Comm. Math. Phys. 270 (2007), 481–517. BK04I P. Bleher and A. B. J. Kuijlaars, Large n limit of Gaussian random matrices with external source. I, Comm. Math. Phys. 252 (2004), 43–76. BCI A. Bogatskiy, T. Claeys and A. Its, Hankel determinant and orthogonal polynomials for a Gaussian weight with a discontinuity at the edge, Comm. Math. Phys. 347 (2016), 127–162. BCP O. Bohigas, J. X. de Carvalho and M. Pato, Deformations of the Tracy-Widom distribution, Phys. Rev. E 79 (2009), 031117. 
BD02 A. Borodin and P. Deift, Fredholm determinants, Jimbo-Miwa-Ueno τ-functions, and representation theory, Comm. Pure Appl. Math. 55 (2002), 1160–1230. BB18 T. Bothner and R. Buckingham, Large deformations of the Tracy-Widom distribution I: non-oscillatory asymptotics, Comm. Math. Phys. 359 (2018), 223–263. BIP T. Bothner, A. Its and A. Prokhorov, The analysis of incomplete spectra in random matrix theory through an extension of the Jimbo-Miwa-Ueno differential, Adv. Math. 345 (2019), 483–551. BH98a E. Brézin and S. Hikami, Level spacing of random matrices in an external source, Phys. Rev. E. 58 (1998), 7176–7185. BH98b E. Brézin and S. Hikami, Universal singularity at the closure of a gap in a random matrix theory, Phys. Rev. E. 57 (1998), 4140–4149. BL19 R. Buckingham and K. Liechty, The k-tacnode process, Probab. Theory Related Fields 175 (2019), 341–395. Charlier21 C. Charlier, Upper bounds for the maximum deviation of the Pearcey process, Random Matrices Theory Appl. 10 (2021), 2150039. CC21 C. Charlier and T. Claeys, Global rigidity and exponential moments for soft and hard edge point processes, Prob. Math. Phys. 2 (2021), 363–417. CM C. Charlier and P. Moreillon, On the generating function of the Pearcey process, preprint arXiv:2107.01859, to appear in Ann. Appl. Probab.. chen Y. Chen, K. Eriksen and C. A. Tracy, Largest eigenvalue distribution in the double scaling limit of matrix models: a Coulomb fluid approach, J. Phys. A 28 (1995), L207–L211. CNV20 T. Claeys, T. Neuschel and M. Venker, Critical behavior of non-intersecting Brownian motions, Comm. Math. Phys. 378 (2020), 1501–1537. DKV E. Daems, A. B. J. Kuijlaars and W. Veys, Asymptotics of non-intersecting Brownian motions and a 4×4 Riemann-Hilbert problem, J. Approx. Theory 153 (2008), 225–256. DXZ22 D. Dai, S.-X. Xu and L. Zhang, On the deformed Pearcey determinant, Adv. Math. 400 (2022), 108291, 64pp. DXZ21 D. Dai, S.-X. Xu and L. Zhang, Asymptotics of Fredholm determinant associated with the Pearcey kernel, Comm. Math. Phys. 382 (2021), 1769–1809. Deift1999 P. Deift, Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, vol. 3, New York University, 1999. DIK2008 P. Deift, A. Its and I. Krasovsky, Asymptotics of the Airy-kernel determinant, Comm. Math. Phys. 278 (2008), 643–678. DIKZ P. Deift, A. Its, I. Krasovsky and X. Zhou, The Widom-Dyson constant for the gap probability in random matrix theory, J. Comput. Appl. Math. 202 (2007), 26–47. DIZ97 P. Deift, A. Its and X. Zhou, A Riemann-Hilbert approach to asymptotic problems arising in the theory of random matrix models, and also in the theory of integrable statistical mechanics, Ann. of Math. 146 (1997), 149–235. Deift1993 P. Deift and X. Zhou, A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation, Ann. Math. (2) 137 (1993), 295–368. Del S. Delvaux, The tacnode kernel: equality of Riemann-Hilbert and Airy resolvent formulas, Int. Math. Res. Not. IMRN 2018 (2018), 160–201. DKZ11 S. Delvaux, A. B. J. Kuijlaars and L. Zhang, Critical behavior of non-intersecting Brow- nian motions at a tacnode, Comm. Pure Appl. Math. 64 (2011), 1305–1383. DG13 M. Duits and D. Geudens, A critical phenomenon in the two-matrix model in the quartic/ quadratic case, Duke Math. J. 162 (2013), 1383–1462. Dyson76 F. Dyson, Fredholm determinants and inverse scattering problems, Comm. Math. Phys. 47 (1976), 171–183. Dyson F. Dyson, A Brownian-motion model for the eigenvalues of a random matrix, J. Math. Phys. 
3 (1962), 1191–1198. Eh06 T. Ehrhardt, Dyson's constant in the asymptotics of the fredholm determinant of the sine kernel, Comm. Math. Phys. 262 (2006), 317–341. FV12 P. L. Ferrari and B. Vető, Non-colliding Brownian bridges and the asymmetric tacnode process, Electron. J. Probab. 17 (2012), 17 pp. Fisher M. E. Fisher, Walks, walls, wetting, and melting, J. Stat. Phys. 34 (1984), 667–729. F1993 P. J. Forrester, The spectrum edge of random matrix ensembles, Nucl. Phys. B 402 (1993), 709–728. For11 P. J. Forrester, S. N. Majumdar and G. Schehr, Non-intersecting Brownian walkers and Yang-Mills theory on the sphere, Nuclear Phys. B 844 (2011), 500–526. GZ D. Geudens and L. Zhang, Transitions between critical kernels: from the tacnode kernel and critical kernel in the two-matrix model to the Pearcey kernel, Int. Math. Res. Not. IMRN 2015 (2015), 5733–5782. Gir14 M. Girotti, Asymptotics of the tacnode process: a transition between the gap probabilities from the tacnode to the Airy process, Nonlinearity 27 (2014), 1937–1968. GOV A. J. Guttmann, A. L Owczarek and X. G. Viennot, Vicious walkers and Young tableaux I: without walls, J. Phys. A 31 (1998), 8123–8135. HM S. P. Hastings and J. B. McLeod, A boundary value problem associated with the second Painlevé transcendent and the Korteweg-de Vries equation, Arch. Ration. Mech. Anal. 73 (1980), 31–51. Huang J.-Y. Huang, Edge universality for nonintersecting Brownian bridges, preprint arXiv:2011.01752. IllianBook J. Illian, A. Penttinen, H. Stoyan and D. Stoyan, Statistical Analysis and Modelling of Spatial Point Patterns, Wiley, 2008. IIKS90 A. R. Its, A. G. Izergin, V. E. Korepin and N. A. Slavnov, Differential equations for quantum correlation functions, Internat. J. Modern Phys. B 4 (1990), 1003–1037. IK A. R. Its and I. Krasovsky, Hankel determinant and orthogonal polynomials for the Gaussian weight with a jump, Contemp. Math. 458 (2008), 215–248. JMMS M. Jimbo, T. Miwa, Y. Môri and M. Sato, Density matrix of an impenetrable Bose gas and the fifth Painlevé transcendent, Phys. D 1 (1980), 80–158. JMU81 M. Jimbo, T. Miwa and K. Ueno, Monodromy preserving deformation of linear ordinary differential equations with rational coefficients: I. General theory and τ-function, Physica D 2 (1981), 306–352. John13 K. Johansson, Non-colliding Brownian motions and the extended tacnode process, Comm. Math. Phys. 319 (2013), 231–267. John05 K. Johansson, Non-intersecting, simple, symmetric random walks and the extended Hahn kernel, Ann. Inst. Fourier 55 (2005), 2129–2145. John02 K. Johansson, Non-intersecting paths, random tilings and random matrices, Probab. Theory Related Fields 123 (2002), 225–280. Joh01 K. Johansson, Universality of the local spacing distribution in certain ensembles of hermitian wigner matrices, Comm. Math. Phys. 215 (2001), 683–705. KT07 M. Katori and H. Tanemura, Noncolliding Brownian motion and determinantal processes, J. Stat. Phys. 129 (2007), 1233–1277. KT04 M. Katori and H. Tanemura, Symmetry of matrix-valued stochastic processes and noncolliding diffusion particle systems, J. Math. Phys. 45 (2004), 3058–3085. Kra09 I. Krasovsky, Large Gap Asymptotics for Random Matrices, XVth International Congress on Mathematical Physics, New Trends in Mathematical Physics, Springer, 2009, 413–419. Krasovksy04 I. Krasovsky, Gap probability in the spectrum of random matrices and asymptotics of polynomials orthogonal on an arc of the unit circle, Int. Math. Res. Not. IMRN 2004 (2004), 1249–1272. Kuij A. B. J. 
Kuijlaars, The tacnode Riemann-Hilbert problem, Const. Approx. 39 (2014), 197–222. KMVV04 A. B. J. Kuijlaars, K. T-R. McLaughlin, W. Van Assche and M. Vanlessen, The Riemann-Hilbert approach to strong asymptotics for orthogonal polynomials on [-1, 1], Adv. Math. 188 (2004), 337–398. LSY19 B. Landon, P. Sosoe and H.-T. Yau, Fixed energy universality of Dyson Brownian motion, Adv. Math. 346 (2019), 1137–1332. LY17 B. Landon and H.-T. Yau, Convergence of local statistics of Dyson Brownian motion, Comm. Math. Phys., 355 (2017), 949–1000. LY17b B. Landon and H.-T. Yau, Edge statistics of Dyson Brownian motion, preprint arXiv:1712.03881. LW16 K. Liechty and D. Wang, Nonintersecting Brownian motions on the unit circle, Ann. Probab. 44 (2016), 1134–1211. NV22 T. Neuschel and M. Venker, Boundary asymptotics of non-intersecting Brownian motions: Pearcey, Airy and a transition, preprint arXiv:2212.03816. DLMF F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller and B. V. Saunders, eds, NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.0.21 of 2018-12-15. SA81 H. Segur and M. J. Ablowitz, Asymptotic solutions of nonlinear evolution equations and a Painlevé transcendent, Phys. D 3 (1981), 165–184. Sosh A. Soshnikov, Gaussian uctuation for the number of particles in Airy, Bessel, sine, and other determinantal random point fields, J. Statist. Phys. 100 (2000), 491–522. TW06 C. Tracy and H. Widom, The Pearcey Process, Comm. Math. Phys. 263 (2006), 381–400. TW94 C. Tracy and H. Widom, Level spacing distributions and the Airy kernel, Comm. Math. Phys., 159 (1994), 151–174. WFS T. Weiss, P. L. Ferrari and H. Spohn, Reflected Brownian motions in the KPZ universality class, Springer Briefs in Mathematical Physics, 18. Springer, Cham, 2017. vii+118 pp. Widom94 H. Widom, The asymptotics of a continuous analogue of orthogonal polynomials, J. Approx. Theory 77 (1994), 51–64.
http://arxiv.org/abs/2307.04793v1
20230710180004
Stellar triples with chemically homogeneously evolving inner binaries
[ "Andris Dorozsmai", "Silvia Toonen", "Alejandro Vigna-Gómez", "Selma E. de Mink", "Floris Kummer" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.HE" ]
Observations suggest that massive stellar triples are common. However, their evolution is not yet fully understood. We investigate the evolution of hierarchical triples in which the stars of the inner binary experience chemically homogeneous evolution (CHE), particularly to understand the role of the tertiary star in the formation of gravitational-wave (GW) sources. We use the triple-star rapid population synthesis code to determine the evolution of these systems at two representative metallicities: Z = 0.005 and Z = 0.0005. About half of all triples harbouring a CHE inner binary (CHE triples) experience tertiary mass transfer (TMT) episodes, an event which is rare for classically evolving stars. In the majority of TMT episodes, the inner binary consists of two main-sequence stars (58-60 per cent) or two black holes (BHs, 24-31 per cent). Additionally, we explore the role of von Zeipel-Lidov-Kozai (ZLK) oscillations for CHE triples. ZLK oscillations can result in eccentric stellar mergers or lead to the formation of eccentric compact binaries in systems with initial outer pericenters smaller than ∼ 1200 R_⊙. Approximately 24-30 per cent of CHE triples form GW sources, and in 31 per cent of these, the tertiary star plays a significant role and leads to configurations that are not predicted for isolated binaries. We conclude that the evolution of CHE binaries can be affected by a close tertiary companion, resulting in astronomical transients such as BH-BH binaries that merge via GW emission orders of magnitude faster than their isolated binary counterparts and tertiary-driven massive stellar mergers. gravitational waves, stars: evolution, stars: massive, stars: black holes, binaries: close § INTRODUCTION An accurate and detailed understanding of the evolution of massive stars is essential for various important open questions in astrophysics, such as nucleosynthesis of heavy elements, the origin of supernova events, gamma-ray bursts, and GW sources (e.g. ). Observational evidence shows that the fraction of stars in hierarchical triples or in higher-order multiple-stellar systems increases with the mass of the primary star (). In particular, <cit.> showed that the majority of O-type stars reside either in triple or quadruple stellar systems. This implies that in order to understand the evolution of massive stars, and to correctly interpret the various astrophysical phenomena related to them, we need to consider stellar interactions in hierarchical triples. The evolution of hierarchical triples involves a complex interplay between three-body dynamics, stellar evolution, and stellar interactions <cit.>. Three-body interactions can result in e.g. ZLK oscillations <cit.>, a secular effect where the eccentricity of the inner binary can be significantly enhanced as a result of dynamics. ZLK oscillations coupled with various dissipative processes (e.g., tides, GWs) can shrink the orbit <cit.> and prompt the merger of the inner binary <cit.>. These types of mergers can result in astronomical transient events such as Type Ia supernovae <cit.> or double compact object mergers <cit.>. Furthermore, stellar evolution can affect the orbital dynamics of the triple.
For example, radial expansion and mass loss can prompt ZLK oscillations or dynamical instabilities <cit.>. Population synthesis studies of stellar triples show that the inner binaries in hierarchical triples have increased stellar interactions compared to isolated binaries <cit.>. Similarly, tertiary-driven dynamics could play an essential role in double compact object mergers. While GW sources detected by the LIGO/Virgo collaboration <cit.> have been studied in the context of stellar triples, this has been done so far only in a limited parameter space. For example, for systems in which the inner binary is wide enough such that interaction between the two stars in the form of mass exchange can be neglected <cit.>, or in which the stars of the inner binary merge during the main sequence <cit.>. There are still major uncertainties and a need to explore and to understand the population of merging binary BHs from hierarchical triples. In this paper, we focus on the evolution of hierarchical triples in which the stars of the inner binaries are chemically homogeneously evolving. CHE stars have been discussed in the context of rapidly-rotating stars <cit.>, which can experience enhanced mixing during the MS stage. This mixing allows hydrogen-rich matter in the radiative envelope to be deposited into the convective core, where it is fused to helium. At the same time, helium is mixed throughout the star. This prevents the build-up of a chemical gradient inside the star and the classical core-envelope structure. As a result, the stars remain very compact over their lifetime. CHE has been proposed to occur in very close binaries where the tidal deformation of both stars is strong and they are forced to rotate rapidly <cit.>. More recently, CHE binaries received renewed interest as they have been proposed as a new pathway to form BH binaries that can merge within the age of the universe <cit.>. Recently, <cit.> studied triples with CHE inner binaries in the context of sequential merging BH-BHs with masses that fall in the pair-instability mass gap. Specifically, they considered sequential mergers of hierarchical co-planar triples, a simplified approach which neglected three-body dynamics. In this paper, we remove the constraints of co-planarity and explore, for the first time, the evolution of massive stellar triples with CHE inner binaries in the entire parameter space. As isolated CHE binaries are known to be promising GW progenitors, we will mostly focus on the role of the tertiary star in the evolution of the inner binary in the context of GW astronomy. This paper is structured as follows. In section <ref>, we introduce , the triple evolutionary code we use in this study, and the adaptations we have made to model CHE and contact binaries. In section <ref>, we discuss the results of our population synthesis in and identify the most important evolutionary channels. In section <ref>, we show that the initial parameters of the tertiary star are sufficient to predict the evolutionary channel of each system. In section <ref>, we use analytical and numerical methods to explore our synthetic population of stellar triples in the context of GW sources. Finally, we discuss the main difference between the evolution of triples with and without CHE stars in their inner binary. § METHODOLOGY We use to simulate the evolution of our hierarchical triples <cit.>. 
couples secular dynamics of stellar triples with stellar evolution, and takes into account additional physical processes such as stellar interactions and dissipative processes. determines the evolution of each star by using the fitting formulae of <cit.> to the stellar tracks of <cit.> from the rapid binary synthesis code <cit.>, while interactions between the stars are determined by . treats three-body dynamics in the following way. For secular evolution, we include secular three body dynamics (subscript `3b') including quadrupole () and octupole terms ( with corrections of ). Regarding the additional physical processes, we take into account: i) general relativistic effects (GR) and GW emission <cit.>, ii) tidal friction <cit.>, iii) the effects of stellar winds under the assumptions of fast, adiabatic wind at the mass loss rate provided by (subscript `wind'), iv) precession due to ZLK, GR, tides <cit.> and intrinsic stellar rotation <cit.>, and v) the change in the stellar rotation due to stellar evolution based on spin angular momentum conservation (subscript `I'). This gives rise to a set of first-order ordinary differential equations, that are solved numerically. These equations are: {[ ȧ_ in = ȧ_ in, GR +ȧ_ in, TF +ȧ_ in, wind; ȧ_ out = ȧ_ out, GR +ȧ_ out, TF +ȧ_ out, wind; ė_ in = ė_ in,3b + ė_ in,GR +ė_ in,TF; ė_ out = ė_ out,3b +ė_ out,GR + ė_ out,TF; ġ_ in = ġ_ in,3b + ġ_ in,GR + ġ_ in,tides + ġ_ in,rotate; ġ_ out = ġ_ out, 3b + ġ_ out,GR + ġ_ out,tides +; ġ_ out,rotate; ḣ_ in = ḣ_ in, 3b; θ̇ = -1/J_ b, inJ_ b, out [J̇_ b, in(J_ b, in+J_ b, outθ) +; J̇_ b, out(J_ b, out+ J_ b, inθ)]; Ω̇_1 = Ω̇_ 1, TF +Ω̇_ 1, I +Ω̇_ 1, wind; Ω̇_2 = Ω̇_ 2, TF +Ω̇_ 2, I +Ω̇_ 2, wind; Ω̇_3 = Ω̇_ 3, TF +Ω̇_ 3, I +Ω̇_ 3, wind ]. where a, e, g, h and J_b represent the semimajor axis, eccentricity, argument of pericenter, line of ascending nodes, and the orbital angular momentum for the inner (subscript `in') and outer (subscript `out') orbit. The dot represents the time derivatives. Lastly θ≡cos(i), where i is the mutual inclination between the inner and outer orbit, and Ω_1, Ω_2, Ω_3 the spin frequency of the primary, secondary and tertiary star respectively. Per definition the primary and secondary stars are the stars in the inner binary, with the primary star initially more massive than the secondary star, and the tertiary star orbits the inner binary. We highlight three aspects of the orbital evolution of hierarchical triples that is particularly relevant for the systems we study in this paper. Firstly, if the apsidal precession of the inner binary due to short range forces, such as tides (ġ_ in, tides) and GR effects (ġ_ in, GR) occurs on a much shorter timescale than the precession due to three-body dynamics (ġ_ in, 3b), ZLK oscillations will be quenched <cit.>. The timescale of ZLK oscilations can be approximated as <cit.>: t_ ZLK = (M_1 + M_2/G M_ out^2)^1/2(a_ out/a_ in^1/2)^3(1-e_ out^2)^3/2. The timescale related to the apsidal precession due to tides are <cit.>: t_ tides = (M_1/15 k_ amμ_ in^1/2M_2) (a_ in^11/2/R_1^5) ((1 - e_ in^2)^5/1 + 3/2 e_ in^2 + 1/8 e_ in^4), where k_ am the apsidal motion constant, which we assume to be 0.0144 for MS and helium stars, μ_ in = G(M_1+M_2), i.e. the standard gravitational parameter for the inner binary and R_1 is the radius of the inner star. The timescale related to precession due to general relativistic effects is <cit.>: t_ GR = c^2/3μ_ in^3/2a_ in^5/2 (1 - e_ in^2). If t_ ZLK≫min(t_ GR,t_ tides), then three-body dynamics are suppressed. 
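To illustrate this criterion, the following Python sketch evaluates t_ZLK and t_GR for an assumed CHE triple (two 55 M_⊙ stars in a 30 R_⊙ inner orbit with a 30 M_⊙ tertiary at 2000 R_⊙ and e_out = 0.3; these are illustrative values only). The tidal timescale can be added in the same way.

import numpy as np
from astropy import units as u, constants as const

m1, m2, m3 = 55 * u.Msun, 55 * u.Msun, 30 * u.Msun
a_in, e_in = 30 * u.Rsun, 0.0
a_out, e_out = 2000 * u.Rsun, 0.3

mu_in = const.G * (m1 + m2)

# Quadrupole ZLK timescale and GR apsidal-precession timescale from the expressions above.
t_zlk = (np.sqrt((m1 + m2) / (const.G * m3**2))
         * (a_out**3 / a_in**1.5) * (1 - e_out**2)**1.5).to(u.yr)
t_gr = (const.c**2 / (3 * mu_in**1.5) * a_in**2.5 * (1 - e_in**2)).to(u.yr)

print(t_zlk, t_gr)   # here t_ZLK exceeds t_GR by more than an order of magnitude,
                     # so the ZLK cycles of this tight inner binary are strongly suppressed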
If the timescales are comparable, then the maximum eccentricity induced by the ZLK oscillations is diminished. In principle, rotation-induced oblateness in the inner binary also induces apsidal precession <cit.>. However, as long as the rotational period of the inner stars is not shorter than the orbital period (which is true for all systems considered here), ġ_ tides≫ġ_ rot and therefore precession due to stellar rotation does not play a role in suppressing three-body dynamics <cit.>. Secondly, octupole terms in the three-body dynamics are typically negligible for CHE triples, as the mass ratio of the inner binary is always very close to one. Finally, we estimate the time it takes for the inner binary to merge due to GWs following <cit.>, if the tertiary is dynamically decoupled from the inner binary. If ZLK oscillations are still relevant during the inspiral phase, we follow the approximation of <cit.>: t_ GW≈ t_ GW, Peters(a_ in,e_ in, max)(1 - e_ in, max)^-1/2, where t_ GW is the time required for the merger, t_ GW, Peters is the time to merger based on the relation of <cit.>, e_ in, max is the maximum eccentricity reached during ZLK oscillations and a_ in is the initial inner semimajor axis. The approximation in equation <ref> is based on <cit.> and it neglects the effects of precession due GR. When the latter is taken into account, <cit.> finds that equation <ref> underestimates the actual merger timescale typically by a factor of 2-3. §.§ Modelling of chemically homogeneous evolution We follow <cit.> in order to incorporate CHE stars in . That means that we assume a star evolves chemically homogeneously, if the angular frequency of the spin of the star is above a certain critical value, i.e. ω_ star > ω_ CHE, crit. <cit.> provides a fit to this critical value based on <cit.> models at different masses and metallicities. In order to determine whether a star evolves chemically homogeneously, we check whether our simulated star is spinning above ω_ CHE, crit at every timestep. If a star meets this criteria we do not evolve its radius during that timestep. We assume that the star by the end of core hydrogen burning forms a helium star with a mass M_ He, ZAMS = M_ TAMS, where M_ He, ZAMS is the initial mass of the helium star and M_ TAMS is the terminal age main sequence mass of the star. With these assumptions, CHE stars experience an instantaneous drop in radii at the end of their MS phase <cit.>. This is a simplification of the results of detailed simulations of CHE stars, where the latter suggests a gradual contraction of the radius during the MS <cit.>. If a CHE star loses angular momentum (e.g. due to stellar winds), its rotational frequency decreases. If the frequency reduces to below the critical value, we assume the evolution of the star transitions back to the classical non-CHE case. For simplicity, we only consider systems in which the stars of the inner binary are CHE from zero-age main sequence (ZAMS). Stars that do not evolve chemically homogeneously from ZAMS could, in theory, become CHE if they attained a sufficiently high-spin frequency before a significant chemical gradient is built up in their interior. This can be achieved for example, if a star is spun up by accretion during a mass transfer event <cit.>. We neglect such systems in this study. §.§ Contact binaries We follow the implementation of <cit.> for modelling contact binaries <cit.>. We assume that contact binaries, i.e. 
binaries in which both stars fill their Roche-lobes, can maintain co-rotation and consequently survive the contact phase without merging as long as neither of the stars fills the outer Lagrangian points (L2 and L3). For contact binaries, <cit.> finds that mass is transferred between the two stars back and forth until their masses equalise. We follow <cit.> and approximate the L2 point as (R_L2,2 - R_RL,2)/R_RL,2 = 0.299 tan^-1(1.84q^0.397), where R_RL,2 is the Roche-lobe radius of the secondary star, which we approximate following <cit.>. If the stars in the inner binary are in contact but without filling their L2 points, we assume that the masses of the binary equalise via a fully conservative mass transfer phase. We follow <cit.> and assume this mass equalisation occurs instantaneously and readjust the orbit of the inner binary as <cit.>: a_ fin/a_ init = ( M_ 1,init M_ 2,init/M_ 1,fin M_ 2,fin)^2, where a_ init, a_ fin are the initial and the final orbital separation and M_ 1,init, M_ 2,init are the initial masses of the primary and the secondary, respectively. The final masses are M_ 1,fin = M_ 2,fin = 1/2·(M_ 1,init + M_ 2,init) by definition. The assumption of mass equalisation for contact binaries results in the prediction of the CHE channel leading to mostly equal-mass binary BH mergers <cit.>. §.§ Stellar winds The mass loss rates of stellar winds and their effects on the evolution of the star are determined by <cit.>, while the effects on the orbit of the triple are determined by (equation <ref>). In this study, we use the same implementation of stellar winds for massive stars as in <cit.>, with one difference: the mass loss rates of helium stars and giants are calculated according to the empirical formula of <cit.> instead of <cit.>. For reference, we summarise the mass loss rate prescriptions used in this study. For MS stars, we follow <cit.>, if T_eff≤ 50 kK and <cit.>, if T_eff > 50 kK. For evolved stars crossing the Hertzsprung gap or core helium burning (CHeB) stars, we follow <cit.>, if T_eff≥ 8 kK, or the maximum of <cit.> and <cit.>, if T_eff < 8 kK. For evolved stars beyond the Humphreys-Davidson limit, we assume Ṁ_LBV = 1.5·10^-4 M_⊙yr^-1 <cit.>. For Asymptotic Giant Branch stars and double shell burning supergiants, we calculate the maximum between <cit.>, <cit.> and <cit.>. Finally, for helium stars we follow the empirical formula of <cit.>, Ṁ_WR = 0.5·10^-13·(L/L_⊙)^1.5(Z/Z_⊙)^0.86 M_⊙yr^-1, with a clumping factor of η = 0.5 from <cit.> and a metallicity scaling of Ṁ_WR∼ Z^0.86 <cit.>. In order to compute the change in the orbit due to stellar winds, we assume stellar winds are spherically symmetric and fast compared to the orbital velocity; additionally, we neglect wind accretion by the companions. In that case the inner and the outer orbit of the triple widen (the terms ȧ_ in, wind and ȧ_ out, wind in equation <ref>) as ( a_ final/a_ init)_ in = M_ 1,init + M_ 2,init/M_ 1,final + M_ 2,final, and ( a_ final/a_ init)_ out = M_ 1,init + M_ 2,init + M_ 3, init/M_ 1,final + M_ 2,final + M_ 3,final, where subscripts `init' and `final' refer to properties before and after the stellar winds have carried mass away from the stars in a given timestep. We assume that the eccentricity remains unchanged by stellar winds <cit.>. We neglect stellar wind accretion by the other stars in the triple system <cit.>. Neglecting accretion is justified for line-driven winds due to their large terminal velocities <cit.>.
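As a quick illustration of how these relations are applied, the short Python sketch below implements the wind-driven widening of the inner and outer orbits given above, together with the instantaneous mass-equalisation prescription for contact binaries from the previous subsection; masses and separations can be given in any consistent units (e.g. M_⊙ and R_⊙), and the example numbers are placeholders.

def widen_inner_by_winds(a_in, m1_init, m2_init, m1_final, m2_final):
    # Fast, spherically symmetric wind: (a_final/a_init)_in = (M1,init + M2,init) / (M1,final + M2,final)
    return a_in * (m1_init + m2_init) / (m1_final + m2_final)

def widen_outer_by_winds(a_out, m_tot_init, m_tot_final):
    # Same relation for the outer orbit, using the total mass of the triple
    return a_out * m_tot_init / m_tot_final

def equalise_contact_binary(a_in, m1_init, m2_init):
    # Conservative, instantaneous mass equalisation:
    # a_fin/a_init = (M1,init*M2,init / (M1,fin*M2,fin))^2 with M1,fin = M2,fin = (M1,init + M2,init)/2
    m_fin = 0.5 * (m1_init + m2_init)
    return a_in * (m1_init * m2_init / (m_fin * m_fin))**2, m_fin

# Placeholder example: each inner star loses 5 Msun to winds in a timestep,
# while the 30 Msun tertiary loses 0.5 Msun
a_in_new = widen_inner_by_winds(22.4, 70.0, 70.0, 65.0, 65.0)                 # -> ~24.1 Rsun
a_out_new = widen_outer_by_winds(400.0, 70.0 + 70.0 + 30.0, 65.0 + 65.0 + 29.5)
a_eq, m_eq = equalise_contact_binary(22.4, 75.0, 65.0)                        # orbit shrinks slightly
print(a_in_new, a_out_new, a_eq, m_eq)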
The assumptions of a fast and spherically symmetric wind might not always be valid <cit.>, and rapidly rotating stars might not have fully symmetric outflows <cit.>. In particular, stellar winds in certain binary configurations might even lead to orbital shrinking <cit.>. §.§ Remnant formation The mass of the compact object remnant is computed based on the delayed supernova model from <cit.>. This prescription gives the mass of the stellar remnant as a function of CO core mass, where the latter is determined in based on the fits of <cit.>. The natal kick velocity for BHs is calculated as v_BH = (1 - f_ b)(M_ NS/M_BH)v_ kick, where f_ b is the fallback fraction <cit.>, M_ NS is the canonical neutron star mass (M_ NS = 1.4 M_⊙) and v_ kick is a random kick velocity drawn from the distribution inferred by <cit.> from proper motion measurements of pulsars. We determine the change in the inner and outer orbit due to the core collapse of any of the stars in the triple system based on the formalism developed in <cit.>. Models of <cit.> predict that the most massive stars collapse directly (typically M_ ZAMS≳ 40 M_⊙), without any ejecta, and the only mass loss during the remnant formation is due to neutrino losses, which is assumed to be 10 per cent of the pre-core-collapse mass of the star. Additionally, we assume that the neutrino emission is spherically symmetric and does not impart a natal kick onto the BH. In this case, the orbit is only changed due to the instantaneous mass loss <cit.>. We note that, if the pre-core-collapse orbit is circular, a Blaauw kick due to neutrino losses does not lead to a significant change in the inner orbital elements. However, this is no longer the case for eccentric pre-core-collapse orbits. In particular, if the core collapse occurs near the pericenter, the orbit can become significantly wider <cit.>. By the onset of core-oxygen burning, the core temperatures of the most massive stars can reach above T_ core∼ 3× 10^9 K. Under these conditions, the emitted gamma-ray photons in the core are energetic enough to form electron-positron pairs. This leads to pair instability (see e.g. <cit.>). Depending on the mass of the star, this instability can result in a pulsational pair instability supernova (PPISN), in which the star experiences a series of pulsations leading to severe mass loss <cit.>, or a pair instability supernova (PISN), in which the star is completely disrupted and no remnant is formed <cit.>. For the treatment of pair instability in massive stars, we follow <cit.>. If the mass of the helium star pre-core-collapse is 35 M_⊙≤ M_ HE, pre-SN < 60 M_⊙, the star is assumed to undergo a PPISN, and its remnant mass is determined by the fitting formula of <cit.>, based on the detailed stellar simulations of <cit.>. If 60 M_⊙≤ M_ HE, pre-SN≤130 M_⊙, we assume the star undergoes a PISN and leaves no remnant behind. In principle, if M_ HE, pre-SN≥ 130 M_⊙, photo-disintegration prevents the pair instability supernova from occurring and the star collapses directly into a BH (<cit.>); however, this does not occur for any of our simulated systems. §.§ Tertiary mass transfer (TMT) episodes If the tertiary star fills its Roche-lobe, it will transfer mass to the inner binary. There have been some efforts to study and model this process <cit.>, but this complex scenario remains to be fully understood. In order to calculate the Roche-lobe of the tertiary star, we assume the inner binary can be approximated as a point mass and estimate the Roche radius with the fitting formula of <cit.>.
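The remnant-formation logic described above can be summarised in a few lines of Python. This is an illustrative sketch only: the Maxwellian dispersion of 265 km/s stands in for the pulsar kick distribution cited in the text, and the PPISN remnant-mass cap of 45 M_⊙ is a placeholder for the fitting formula actually used.

import numpy as np

M_NS = 1.4   # canonical neutron-star mass [Msun]

def bh_natal_kick(m_bh, f_fallback, rng=None):
    # v_BH = (1 - f_b) * (M_NS / M_BH) * v_kick, with v_kick drawn from a Maxwellian
    rng = rng or np.random.default_rng()
    v_kick = np.linalg.norm(rng.normal(0.0, 265.0, size=3))   # km/s; sigma is an assumed placeholder
    return (1.0 - f_fallback) * (M_NS / m_bh) * v_kick

def remnant_mass(m_he_pre_sn, m_bh_no_pi):
    # Pair-instability branching for a pre-core-collapse helium star of mass m_he_pre_sn [Msun];
    # m_bh_no_pi is the remnant mass the delayed prescription would give without pair instability.
    if 60.0 <= m_he_pre_sn <= 130.0:
        return None                          # PISN: the star is completely disrupted, no remnant
    if 35.0 <= m_he_pre_sn < 60.0:
        return min(m_bh_no_pi, 45.0)         # PPISN: pulsational mass loss caps the remnant (placeholder cap)
    return m_bh_no_pi                        # ordinary core collapse (direct collapse for the most massive stars)

print(remnant_mass(40.0, 38.0), remnant_mass(80.0, 70.0))          # -> 38.0, None
print(f"{bh_natal_kick(30.0, f_fallback=1.0):.1f} km/s (direct collapse, no natal kick)")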
This assumption is valid in the regime where the orbital separation of the outer star is much larger than that of the inner binary (i.e. a_out≫ a_in). determines the stability of TMT based on extrapolating typical methods from binary star evolution, i.e. by using critical mass ratios <cit.>. The relevant parameter is the mass ratio q = M_ donor/M_ accretor, i.e. the ratio of the mass of the donor and the mass of the accretor star at the onset of the mass transfer episode. The mass transfer phase is assumed to be dynamically unstable, if this mass ratio is above the critical mass ratio, i.e. q>q_ crit. We obtain q_ crit for each stellar evolutionary stage from <cit.> and <cit.>. We quote these values for the two most common donor types in our simulations <cit.>. These are q_ crit = 3 and q_ crit = (1.37+2[M_ donor, core/M_ donor]^5)/2.13 for Hertzsprung gap stars (i.e. hydrogen shell burning stars which have not yet regained thermal equilibrium) and core helium burning (CHeB) stars, respectively. The term in the squared bracket is the core mass to total mass ratio of the donor. If this equals ∼ 0.45 - 0.65, which is fairly typical for massive CHeB stars <cit.>, then q_ crit≈ 0.7-0.75. This reflects the assumption made by <cit.> that CHeB stars tend to have deep convective envelopes <cit.>, and are therefore more likely to experience unstable mass transfer episodes <cit.>. Stable TMT could be accompanied by the formation of a circumbinary disc or it could occur in a ballistic accretion fashion. These two types could lead to significantly different evolution of the inner orbit <cit.>. We assume that TMT occurs via ballistic accretion, if a_ in(1 + e_ in) ≥ R_ cd at the onset of the TMT phase, where R_ cd is (adapting the fitting formulae for mass transferring binaries of <cit.> and <cit.> to triples): R_ cd = 0.0425 a_ out(1-e_ out)[1/q_ out(1 + 1/q_ out) ]^1/4. §.§.§ TMT: Evolution of the inner orbit If the tertiary star fills its Roche-lobe, stops the simulation of the system. However, when discussing potential GW progenitors (Section <ref>), we determine the orbital evolution due to TMT by applying simplified assumptions, if the mass transfer episode is dynamically stable. In this subsection we describe our assumptions about the evolution of the inner orbit during a stable phase of TMT, while in subsection <ref> we discuss the evolution of the outer orbit. We distinguish three particular TMT configurations, based on the evolutionary stage of the inner binary and on whether or not the transferred mass forms a circumbinary disc around the inner binary: * an inner binary with compact objects and with ballistic accretion, * an inner binary with compact objects and with a circumbinary disc, * a non-compact inner binary. (i) An inner binary with compact objects and with ballistic accretion. Hydrodynamical simulations of <cit.> showed that in case of a TMT episode with ballistic accretion, the transferred mass eventually engulfs the inner binary and exerts friction on it. This leads to a scenario that could be considered similar to the common-envelope evolution of binaries <cit.>, since in both cases drag forces exerted by a gaseous medium supplied from the donor star lead to the orbital shrinking of the binary. Inspired by this similarity, <cit.> applied a modified version of the α-formalism <cit.> to model the inner binary evolution of triples experiencing TMT <cit.>. For configuration (i), we take the same approach.
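As an aside, the two checks introduced earlier in this subsection (dynamical stability via the critical mass ratio, and circumbinary-disc formation via the R_cd criterion) can be illustrated with the short Python sketch below; this is our own paraphrase of the prescriptions, with placeholder input values.

def q_crit(donor_type, core_mass_fraction=None):
    # Critical mass ratio above which TMT is assumed to be dynamically unstable
    if donor_type == "HG":                   # Hertzsprung-gap donor
        return 3.0
    if donor_type == "CHeB":                 # core-helium-burning donor
        return (1.37 + 2.0 * core_mass_fraction**5) / 2.13
    raise ValueError("donor type not covered in this sketch")

def tmt_is_unstable(m_donor, m_accretor, donor_type, core_mass_fraction=None):
    return m_donor / m_accretor > q_crit(donor_type, core_mass_fraction)

def forms_circumbinary_disc(a_in, e_in, a_out, e_out, q_out):
    # Ballistic accretion if a_in(1 + e_in) >= R_cd, otherwise a circumbinary disc forms
    r_cd = 0.0425 * a_out * (1.0 - e_out) * ((1.0 / q_out) * (1.0 + 1.0 / q_out))**0.25
    return a_in * (1.0 + e_in) < r_cd

# A CHeB donor with a core-mass fraction of 0.55 gives q_crit ~ 0.7, as quoted above
print(q_crit("CHeB", core_mass_fraction=0.55))
print(tmt_is_unstable(m_donor=30.0, m_accretor=70.0, donor_type="HG"))                   # stable (q ~ 0.43)
print(forms_circumbinary_disc(a_in=50.0, e_in=0.0, a_out=1500.0, e_out=0.0, q_out=0.4))  # lengths in Rsun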
Below we explain in detail how the post-mass-transfer inner orbit is determined based on this formalism. Δ M_ trnsf is the mass that is transferred from the tertiary in a timestep Δ t. When Δ M_ trnsf ends up encompassing the inner binary, it has a binding energy of E_ bind. As the inner orbit is shrinking due to the friction during the TMT episode, the orbital energy of the inner binary changes by Δ E_ orb. We assume that a fraction (α_ TMT) of Δ E_ orb is used to unbind Δ M_ trnsf. We can write an equation expressing the energy balance as: α_ TMTΔ E_ orb = E_ bind, with Δ E_ orb = GM_1M_2/2a_ in, fin - G(M_1 + Δ M_ trnsf/2) (M_2 + Δ M_ trnsf/2)/2a_ in, init, and E_ bind = -G(M_1 + M_2) Δ M_ trnsf/λ_ TMT a_ init, where λ_ TMT is a parameter related to the structure of Δ M_ trnsf, parameterising its binding energy, a_ in, init is the initial orbital separation before Δ M_ trnsf is transferred to the inner binary and a_ in, fin is the final orbital separation after Δ M_ trnsf is expelled from the inner binary. We assume that the total mass transferred to the inner binary throughout the entire TMT episode equals the mass of the hydrogen envelope of the tertiary M_ out,env (but see <cit.>). Then, assuming a constant α_ TMT and λ_ TMT, the orbit changes due to the entire TMT episode as: a_ in, fin/a_ in, init = M_1M_2/2(M_1 + M_2) M_ out,env/α_ TMTλ_ TMT + (M_1 + M_ out,env/2) (M_2 + M_ out,env/2). As both α_ TMT and λ_ TMT are unknown, we combine them and try three different values: α_ TMTλ_ TMT = 0.05, 0.5, 5. Here α_ TMTλ_ TMT = 5 is the fiducial value used in <cit.>, which is in good agreement with the hydrodynamical simulations of <cit.>, in which the inner stars are on the MS during the TMT episode. We note that we neglect the possibility of a TMT episode with ballistic accretion transitioning to a TMT episode with a circumbinary disc. Additionally, for configuration type (i), we assume that the inner binaries circularise as a result of the mass transfer phase (as a_ in, new = a_ in(1 - e_ in)). We note that this assumption might not be correct for highly eccentric inner binaries. For example, <cit.> showed that binaries at the onset of common-envelope events with e≳ 0.95 might retain eccentricities as high as e∼0.2. (ii) An inner binary with compact objects and with a circumbinary disc. If a circumbinary disc is formed during a mass transfer phase towards an inner BH-BH binary, we assume that the orbit of the inner binary remains unchanged. The actual physics underlying such a process is very complex <cit.>. The circumbinary disc may exert a torque on the inner binary and extract angular momentum from it, while the accreted matter can transfer angular momentum onto the inner binary. Furthermore, the circumbinary disc and the inner binary could be tidally distorted by the tertiary star. It is commonly assumed that circumbinary accretion of a BH-BH binary from a gaseous medium leads to the shrinking of its orbit due to the torques exerted by the circumbinary disc and due to dynamical friction of the gas (e.g. <cit.>). However, a consensus regarding this physical process is still missing, with some hydrodynamical simulations suggesting that accretion from a circumbinary disc could even lead to orbital widening instead of orbital decay <cit.>. (iii) A non-compact inner binary. If the mass transfer occurs with a MS-MS accretor, we assume that this results in the merger of the inner binary.
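For configuration (i), the net effect of the relation above for a_in,fin/a_in,init is easy to evaluate; the sketch below does so for the three adopted values of α_TMT λ_TMT, with illustrative placeholder masses.

def inner_orbit_after_ballistic_tmt(a_in_init, m1, m2, m_env, alpha_lambda):
    # a_in,fin / a_in,init for a full TMT episode in the alpha-lambda (common-envelope-like) formalism:
    # the transferred envelope mass m_env is unbound at the expense of the inner orbital energy.
    numerator = m1 * m2
    denominator = (2.0 * (m1 + m2) * m_env / alpha_lambda
                   + (m1 + 0.5 * m_env) * (m2 + 0.5 * m_env))
    return a_in_init * numerator / denominator

# Placeholder example: a 30 Msun hydrogen envelope transferred onto a 35 + 35 Msun BH-BH inner binary
for alpha_lambda in (0.05, 0.5, 5.0):
    a_fin = inner_orbit_after_ballistic_tmt(a_in_init=100.0, m1=35.0, m2=35.0,
                                            m_env=30.0, alpha_lambda=alpha_lambda)
    print(f"alpha*lambda = {alpha_lambda}: a_in shrinks from 100 to {a_fin:.1f} Rsun")

As expected, smaller values of α_TMT λ_TMT (i.e. a less efficient use of the orbital energy to unbind the transferred mass) lead to a much stronger shrinkage of the inner orbit.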
We make this assumption because these binaries have very short periods and a sizeable fraction of them are in contact; during TMT they would most likely expand, overfill their L2 point, and merge (see later subsection <ref>). As we discuss in subsection <ref>, we do not consider GW sources from those triple systems, in which the TMT occurs towards a binary with evolved (i.e. non-MS), non-compact stars. We do not model unstable phases of TMT (as we will show later, they are very rare among the systems we discuss in this paper). We note, however, that during this type of mass transfer episode, the outer orbital separation is predicted to rapidly decrease due to the common-envelope-like evolution in the triple system; this could result in a regime where the secular approximation for the triple is no longer valid <cit.>. §.§.§ TMT: Evolution of the outer orbit When determining the evolution of the outer orbit due to a stable phase of TMT, we apply the same method for all accretor types, irrespective of whether a circumbinary disc is formed. We calculate the evolution of the outer orbit during the TMT phase based on the following relation: ȧ_ out/a_ out = -2 Ṁ_3/M_3[1 - βM_3/M_1 + M_2 - (1 - β)(γ + 1/2)M_3/M_ tot], where β is the fraction of mass accreted by the inner binary, γ is the specific angular momentum lost from the system as a fraction of the specific angular momentum of the triple and Ṁ_3 is the mass transfer rate from the tertiary star. Equation <ref> can be derived from angular momentum arguments. It is an adaptation of the relation describing the orbital evolution of a circular, mass transferring binary comprised of point particles <cit.>, applied to a triple experiencing a TMT episode. This adaptation is valid if the tertiary star is sufficiently far away from the inner binary, such that the inner binary can be treated as a point particle with a mass of M_1 + M_2. We assume that eventually all the transferred mass is isotropically expelled from the triple (β = 0), from near the inner binary. This expelled matter thus carries away a specific angular momentum that is equal to that of the inner binary (γ = M_3/(M_1 + M_2), see also <cit.> for a similar approach). In this case equation <ref> can be expressed as a_ out, fin/a_ out, init = M_ tot,init/M_ tot, fin(M_ 3,init/M_ 3,fin)^2exp(2M_ 3,fin- M_ 3,init/M_1 + M_2). In case of BH-BH inner accretors, these assumptions might be valid, as the accretion rate of BHs might be capped by the Eddington limit, and most of the mass could indeed be expelled from the system, for example in the form of a jet (e.g. <cit.>). On the other hand, MS stars are likely to accrete more efficiently, and therefore β = 0 might no longer be a good approximation. §.§ Initial conditions We sample 10^5 triples at two representative (moderate and low) metallicities: Z = 0.005 and Z = 0.0005. We simulate each hierarchical triple from ZAMS. After drawing the parameters for a given triple system, we further check whether it is dynamically stable <cit.> and whether the stars in the inner binary are CHE at ZAMS. If either of the two criteria is not met, we do not evolve the triple system and only take it into account for the normalisation of event rate calculations. We terminate the simulation of a triple system when either a Hubble time (assumed to be 13.5 Gyr) has passed, or when the tertiary star fills its Roche lobe, a merger occurs, a dynamical instability occurs or any of the stars becomes unbound from the triple.
We also stop the simulation if any of the stars in the inner binary transitions back from CHE to classical evolution. That is, we only consider triples in which the stars of the inner binary evolve chemically homogeneously throughout their entire MS lifetimes. We refer to this population as the CHE triple population. In this study, we motivate the choice of the initial distributions of the parameters of the inner binaries based on recent surveys of massive binaries <cit.>. In such surveys, a possible tertiary companion is not always unequivocally identified and therefore it is not clear whether the inferred distributions also hold for triples or only for isolated binaries. We assume the ZAMS mass of the primary star (M_ 1,ZAMS) follows the power-law mass distribution of <cit.>, i.e. N∼M_ZAMS^-2.3 for M_ ZAMS≥ 0.5 M_⊙ and N∼M_ZAMS^-1.3 for M_ ZAMS < 0.5 M_⊙. We sample M_ 1,ZAMS from a mass range of 20-100M_⊙. The lower limit approximately coincides with the lowest initial mass at which CHE is still possible in a tidally locked binary <cit.>, while the upper limit is roughly the maximum mass at which the stellar tracks used in are still reasonably accurate. We assume a flat inner mass-ratio (i.e. q_ in, ZAMS = M_ 2,ZAMS/M_ 1,ZAMS) distribution, which is in broad agreement with <cit.>. We restrict the range of q_ in, ZAMS to 0.7-1, given that inner binaries in which both of the stars are chemically homogeneously evolving and have q_ in≤ 0.7 would merge early during the MS (we found the lower limit of 0.7 from our simulations). We sample the inner semimajor axis from a log-uniform distribution (<cit.>, and in broad agreement with <cit.>) in the range of 16 to 40 R_⊙. We assume that the inner binaries are tidally locked at ZAMS. This has three implications: i) the inner binaries have circular orbits, ii) their rotational angular frequency is synchronised with the orbital angular frequency, and iii) the spins of the stars are aligned with the orbital angular momentum vector. We draw the properties of the outer binary from the same distributions that we assume for the inner binaries, with the exception of the outer eccentricities. Observations of hierarchical multiple systems of galactic solar-type stars support the assumption that the distributions of the initial parameters of the inner and the outer binaries are the same <cit.>. We sample the outer semimajor axis from a log-uniform distribution in the range of 100 to 10^5 R_⊙. We assume that the distribution of the outer mass ratio (i.e. q_ out, ZAMS = M_ out,ZAMS/(M_ 1,ZAMS + M_ 2,ZAMS)) is flat on a range of 0.1 to 1; furthermore, the mass of the tertiary is restricted to a range of 5-100M_⊙. We assume non-spinning tertiary stars. The eccentricities of the outer orbit are drawn from a thermal distribution <cit.>. The mutual inclination between the inner and outer orbit is assumed to be uniform in cos(i_ZAMS), where i_ ZAMS is the initial inclination. The initial argument of the pericenter is assumed to be uniformly distributed between -π and π. In Section <ref>, we compare our CHE triple population to a CHE isolated binary population. To this end, we also perform population synthesis of isolated binaries with CHE stars. We sample 10^5 isolated binaries at Z = 0.005 and Z = 0.0005 and evolve them with . We sample from the same initial distributions that we assumed for the inner binaries of our triple population.
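The triple initial conditions described above can be drawn with a simple Monte Carlo procedure; the Python sketch below mirrors the distributions and boundaries listed in this section (the dynamical-stability and CHE-at-ZAMS checks, as well as the redrawing of tertiary masses outside the allowed 5-100 M_⊙ range, are omitted for brevity).

import numpy as np
rng = np.random.default_rng(1)

def sample_primary_mass(n, m_min=20.0, m_max=100.0, alpha=2.3):
    # Power-law N ~ M^-alpha between m_min and m_max via inverse-CDF sampling [Msun]
    u = rng.uniform(size=n)
    p = 1.0 - alpha
    return (m_min**p + u * (m_max**p - m_min**p))**(1.0 / p)

def sample_triples(n):
    m1 = sample_primary_mass(n)
    q_in = rng.uniform(0.7, 1.0, n)                               # flat inner mass ratio
    m2 = q_in * m1
    a_in = 10.0**rng.uniform(np.log10(16.0), np.log10(40.0), n)   # log-uniform inner semimajor axis [Rsun]
    q_out = rng.uniform(0.1, 1.0, n)                              # flat outer mass ratio
    m_out = q_out * (m1 + m2)                                     # tertiary mass (redrawn in practice if outside 5-100 Msun)
    a_out = 10.0**rng.uniform(2.0, 5.0, n)                        # log-uniform outer semimajor axis [Rsun]
    e_out = np.sqrt(rng.uniform(size=n))                          # thermal eccentricity distribution, f(e) = 2e
    cos_i = rng.uniform(-1.0, 1.0, n)                             # mutual inclination uniform in cos(i)
    g_out = rng.uniform(-np.pi, np.pi, n)                         # argument of pericenter
    # inner binaries are tidally locked at ZAMS: circular orbits, synchronised and aligned spins
    return m1, m2, a_in, m_out, a_out, e_out, cos_i, g_out

sample = sample_triples(100_000)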
Similarly to the triple population, we discard systems that are not CHE at ZAMS and stop the simulation if a Hubble time has passed, or if any of the stars in the binary transitions from CHE to classical evolution. We only analyse binaries in which the stars remain CHE throughout their entire MS lifetime (hereafter CHE binaries). Throughout the paper, we estimate birth rate and merger rate densities of different evolutionary channels (discussed in detail in the appendix, section <ref>). In order to determine each of these quantities, one must know how common single and multiple stellar systems are. We assume two different stellar populations, with different binary and triple fractions. In the first, we assume that about 73 per cent of massive stars are found in triples <cit.>[<cit.> finds that 73 per cent of O stars are either in triples or quadruples. Therefore f_ triple = 0.73 should be considered as a rough upper limit. We also note that <cit.> finds that there is a strong correlation between the inner period and the triple multiplicity; among solar-type stellar systems, 96 per cent of the spectroscopic binaries with periods less than 3 days have a tertiary companion. Therefore CHE triples, which also have inner binaries with periods of a few days, could have exceptionally high triple fractions too.], whereas in the second test population, we assume there are no triples and about 70 per cent of massive stars are in binaries <cit.> [Strictly speaking, <cit.> did not make any statements about triple fractions, but they found that 70 per cent of massive stars have companions that are sufficiently close such that mass exchange will occur some time in their evolution.]. § RESULTS OF POPULATION SYNTHESIS SIMULATIONS In Table <ref>, we provide an overview of our sampled systems based on the evolutionary type of the inner binary. Out of our sampled population of triples, only about 10 per cent of the triples have an inner binary where both stars evolve chemically homogeneously from ZAMS (CHE at ZAMS triples, see Table <ref>), and we follow the further evolution only for these triples. About 75 per cent of CHE at ZAMS triples qualify as CHE triples and we focus on these systems for the majority of the paper. For the remaining 25 per cent, we distinguish three scenarios: * The inner stars transition to classical evolution. As the orbit of the inner binary widens due to stellar winds, the rotational frequencies of the inner stars decrease, because the stellar tides enforce synchronisation between the stellar spins and the (new, longer) orbital period. If the inner orbit widens sufficiently, the angular rotational frequencies of the inner stars drop below ω_ CHE, crit and therefore these stars transition to classical evolution. This occurs only in our moderate metallicity model (15.5 per cent of all CHE at ZAMS triples at Z = 0.005 and 0 per cent at Z = 0.0005). * The inner binary does not survive the contact phase during the MS phase of the inner stars. We assume a merger takes place when both stars overflow their outer Lagrangian point during the contact phase. This occurs during mass equalisation in the contact phase or due to GW emission, both of which lead to shrinkage of the inner orbit. As orbital widening due to stellar winds prevents mergers, the process occurs more efficiently at low metallicities (about 9 per cent of all CHE at ZAMS triples at Z = 0.005 and 17.5 per cent at Z=0.0005). * Computational issue.
Finally, we note that the simulation of about 2 (6.7) per cent of CHE at ZAMS triples fails at Z = 0.005 (Z = 0.0005). This can occur because either no solution is found for the secular orbital evolution of the system, or the computation time exceeds the allowed CPU time (which is 5000 seconds per system). §.§ Main evolutionary outcomes In Table <ref>, we show the most common evolutionary outcomes for CHE triples. We distinguish 5 different evolutionary channels: * No post-MS mass transfer phase: During the MS, the inner binary may be in contact, but the system does not experience any other form of mass transfer events. The inner binary eventually forms a BH-BH binary in all these triples. * Stellar merger of the inner binary due to ZLK: Stellar merger occurs in the inner binary due to ZLK oscillations. * Tertiary mass transfer (TMT): The tertiary star fills its Roche lobe. * Unbound systems: This evolutionary outcome takes place if any of the stars becomes unbound from the system. This occurs when a stellar remnant is formed in the system, with three major subtypes: (i) natal kick imparted onto the remnant object during the SN explosion, (ii) instantaneous mass loss during pulsational PISN, or (iii) complete disruption of the star due to PISN. * Dynamical instability: These systems eventually become dynamically unstable, where the secular approximation is no longer valid. We discuss these channels in detail in sections <ref> - <ref>. §.§ Examples for the evolution of a few selected systems In the following, we present the evolution of a few selected systems from some of the channels introduced in section <ref>. In all of these example systems, the initial parameters of the inner binary are the same: M_ 1,ZAMS = M_ 2,ZAMS = 70 M_⊙, a_ in, ZAMS = 22.4 R_⊙. These have been specifically chosen such that this system would form a GW source via the binary CHE channel within a Hubble time, if it were an isolated binary (i.e. in about 8.9 Gyr). The inner binary is tidally locked and therefore e_ in, ZAMS = 0. The stars of the inner binary are in contact from ZAMS and equalise in mass soon after ZAMS. The initial mutual inclination is i_ ZAMS = 90^∘ in all systems discussed below, which allows for ZLK oscillations to develop, unless they are suppressed by short range forces <cit.>. In order to understand the evolutionary paths of CHE triples introduced below, we first show which configurations of CHE triples lead to efficient ZLK oscillations (see Fig. <ref>). We evolve the previously introduced CHE inner binary as an isolated system, and take four snapshots during different evolutionary stages (ZAMS, end of MS, at the onset of core collapse, and at the formation of an inner BH-BH binary). For each snapshot, we show a range of possible tertiary companions to this inner binary with different tertiary masses (M_ out) and outer semimajor axes (a_ out) and identify those regions where three-body dynamics are relevant. As shown in the leftmost panel, precession due to tides completely suppresses three-body dynamics when the inner stars are still on the MS for almost the entire parameter space of CHE triples. The limited number of triples for which this is not true typically become dynamically unstable later in the evolution (e.g. compare panel 1 with panel 4). By the time of hydrogen depletion in the inner stars, the stellar radii of CHE stars have shrunk typically by a factor of 3-5 with respect to their ZAMS values.
Therefore, at this stage tides become less efficient (since t_ tides∼ R^-5, see equation <ref>) and precession due to GR becomes the major limitation to three-body dynamics. For the systems shown in Fig. <ref>, ZLK oscillations occur only if a_ out≲ 500 R_⊙. During the CHeB phase of the inner stars, the typical timescale of precession due to GR further increases, as a result of the strong Wolf-Rayet winds that significantly widen the inner orbit. As long as the inner orbit widens faster than the outer orbit (which is always true if the tertiary star is initially the least massive star in the system), the timescale related to ZLK oscillations will not significantly increase. Therefore during this stage, the parameter space where three-body dynamics are relevant increases. This is also shown in the rightmost panel of Fig. <ref>; by the time the inner binary forms BHs, triples with a_ out≲ 2000 R_⊙ will develop ZLK oscillations. §.§.§ Example for stellar merger of the inner binary due to ZLK oscillations First, we discuss the evolution of a CHE triple, in which the inner binary merges as a double helium star due to strong ZLK oscillations (shown in Fig. <ref>). This triple has a tertiary with an initial mass of M_ out, ZAMS = 32.1 M_⊙ and a circular outer orbit with a_ out, ZAMS=200 R_⊙. As indicated by Fig. <ref>, when the stars of the close inner binary are still on the MS, precession associated with strong tides suppresses the effects of the three-body dynamics <cit.>. At 3.9 Myr, the stars of the inner binary evolve off the MS. By this time, these stars have lost a small amount of mass due to stellar winds and the inner orbit has widened by only 2 per cent as a result. Similarly, the outer orbit also widens only by a negligible amount. Consequently, the timescale of the ZLK oscillations does not change significantly. On the other hand, the tidal effects become much weaker, as the radii of the stars have decreased by a factor of 5 with respect to their ZAMS value. As a result, the ZLK oscillations are no longer suppressed (see also the second panel of Fig. <ref>). At this stage, there are two competing mechanisms that drive the evolution of the pericenter: ZLK oscillations and the strong Wolf-Rayet-like winds, which decrease and increase the pericenter, respectively. For this triple, the ZLK timescale is extremely short (a few years) and a large inner eccentricity of e_ in≈ 0.65 is reached shortly after the onset of CHeB, during which the orbital widening due to stellar winds is negligible. At this stage, the pericenter becomes sufficiently small such that the helium stars fill their Roche-lobes at the point of closest approach. We assume this results in the merger of the inner binary. §.§.§ Example for TMT towards an eccentric BH-BH binary The next triple we discuss experiences a TMT episode towards an eccentric BH-BH inner binary (shown in Fig. <ref>). This system has the same parameters as the previously discussed triple, but with a slightly larger initial outer semimajor axis: a_ out,ZAMS = 421 R_⊙. When the inner stars evolve off the MS, ZLK oscillations are quenched by precession due to GR (compare the second panels of Fig. <ref> and Fig. <ref>). Three-body dynamics become effective later, as the orbit of the inner binary widens significantly and faster than the outer orbit due to strong WR winds (compare the third panels of Fig. <ref> and Fig. <ref>, although by this stage the parameters of the inner binary differ slightly).
As a result, t_ GR increases by a factor of 5, while t_ ZLK barely changes. As ZLK cycles only become effective once the inner orbit has sufficiently widened, the inner binary does not come into contact despite reaching similarly high inner eccentricities as in the previous system. As the stars of the inner binary have the same mass, they co-evolve, and they become stellar remnants at the same time. This occurs around 4.2 Myr, when the inner eccentricity is e_ in, max = 0.75. Since the core collapse occurs in an eccentric orbit, a large range of possible post-supernova orbits exists (a_ in = 42-186 R_⊙), depending on where exactly the stars are in their orbit. In the particular example shown in Fig. <ref>, the core collapse occurs while both stars are near the pericenter (which is less likely, as they spend more time near the apocenter). This leads to an inner semimajor axis of a_ in = 171 R_⊙ after BH-BH formation. As the outer orbit is circular at the onset of the core-collapse, it only widens by a moderate amount. As the inner period to outer period ratio has increased by a factor of 7, the timescale of the ZLK oscillations also further decreases, making the three-body dynamics even more relevant for the further evolution of the system. The evolution of this triple therefore demonstrates that, if the ZLK oscillations are strong enough to induce eccentricities before the formation of an inner BH-BH binary, the importance of three-body dynamics can be significantly increased during the last stages of the evolution of the triple, depending on (i) where the inner stars are in their orbit when the formation of the compact objects occurs and (ii) the eccentricity of the outer orbit. After the formation of the inner BH-BH binary, the tertiary star evolves off the MS, and at 6.1 Myr fills its Roche-lobe and transfers mass to the highly eccentric (e_ in = 0.94) BH-BH binary on a highly inclined orbit (i = 71.5^∘). At this stage, we stop the simulation (but see later section <ref>, where we predict the further evolution of some of these systems). We note, however, that even if the TMT episode does not affect the inner binary, it still merges due to GWs about a factor of 8 faster than its isolated binary counterpart, simply due to the high eccentricities induced by the ZLK oscillations. §.§.§ Example for TMT towards a circular BH-BH binary Next, we show the evolution of a CHE triple, which also experiences a TMT episode towards a BH-BH binary, but in which three-body dynamics remain suppressed throughout the entire evolution. The initial outer semimajor axis is a_ out,ZAMS = 1069 R_⊙. For this system the timescales of the ZLK oscillations remain too long with respect to the timescale associated with precession due to GR effects throughout the entire post-MS phase. At the onset of the core-collapse, at which the parameter space for ZLK oscillations is typically the largest for CHE triples with inner binaries composed of non-compact objects, the outer semimajor axis is a_ out = 1720 R_⊙ and the tertiary mass is M_ out = 31.9 M_⊙. The third panel in Fig. <ref> implies that three-body dynamics are just quenched by the relativistic precession at this stage. Therefore, the inner orbit remains circular when the BHs are formed, and it only widens moderately due to BH formation. The inner and the outer orbit after the formation of a BH-BH binary are a_ in = 46.6 R_⊙ and a_ out = 1860 R_⊙ and therefore the ZLK oscillations remain quenched.
At 6 Myr, the tertiary reaches a radius of 547 R_⊙ and fills its Roche-lobe while crossing the Hertzsprung gap. The last two examples suggest (and we will show in section <ref> that this is generally true for the vast majority of CHE triples) that three-body dynamics are only relevant for the evolution of CHE triples if the tertiary star is on a sufficiently short orbit, such that it will eventually fill its Roche-lobe and initiate a TMT episode. Conversely, if the tertiary star remains detached throughout the evolution of the triple, the inner binary evolves effectively as an isolated binary for the vast majority of CHE triples. §.§ No post-MS mass transfer In these triples, the tertiary star remains bound and detached, while the stars of the inner binary form a BH-BH binary. The inner stars are in contact in the majority of the cases (e.g. around 90 per cent at Z = 0.005). There are no other mass transfer phases during the evolution of these systems (by definition). About 27 per cent of CHE triples evolve this way in our moderate metallicity model (see Table <ref>). This decreases to 11 per cent at Z = 0.0005. The main reason for this difference is the larger number of PISNe that occur at lower metallicities, which prevent the formation of BHs. After the formation of the BH-BH binary, the system may merge due to GW emission within a Hubble time. This occurs for all systems of this type at Z = 0.0005. However, at Z = 0.005, the stellar winds are strong enough such that 32 per cent of the inner binaries of these triples end up with orbits that are too wide to merge within a Hubble time due to GW emission. We note that these are not necessarily all of the GW sources from our simulations, as triples in other channels discussed here can also potentially form merging binary BHs (see discussion in section <ref>). For the majority of these triples (>97 per cent), the inner binary evolves essentially unaffected by the tertiary star (see also section <ref>). Therefore, the properties of the inner binaries of this channel are nearly indistinguishable from those of isolated CHE binaries. The initial outer pericenters of the triples of this channel are large enough such that the outer star remains detached (i.e. a_ p, out, ZAMS≳ 2000-3000 R_⊙ at Z = 0.005, see also section <ref>). At such large tertiary separations, the three-body dynamics remain suppressed during the entire evolution of the triple. The properties of the subgroup in which three-body dynamics drive the evolution of the inner binary are very different. Firstly, they have very short initial outer pericenters (i.e. a_ p, out, ZAMS≈ 100-700 R_⊙), and secondly, the tertiary has a relatively low mass (typically M_ out,ZAMS = 10-30 M_⊙). In these systems, the ZLK oscillations drive the eccentricity of the inner BH-BH binary up to large values (e.g. e_ in≳ 0.7-0.9). Above a given eccentricity, the GW emission becomes so efficient that the inner binary decouples from the tertiary and it plunges due to GWs <cit.>. These systems typically have a relatively low-mass tertiary star compared to the stars in the inner binary, such that the inner binary merges as a BH-BH binary due to GW emission before the tertiary star evolves off the MS and fills its Roche-lobe. Overall, the parameter space for this subgroup is very small, and therefore we predict a negligible GW merger rate (see later discussion in section <ref>).
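To quantify how strongly such ZLK-induced eccentricities accelerate the GW-driven inspiral, the Python sketch below evaluates the merger-time approximation quoted earlier, t_GW ≈ t_GW,Peters(a_in, e_in,max)(1 - e_in,max)^-1/2; the (1 - e^2)^(7/2) scaling used here for the eccentric Peters time is a standard approximation adopted only for illustration and is not necessarily the exact relation used by the code.

import numpy as np

G, C = 6.674e-11, 2.998e8            # SI units
MSUN, RSUN, YR = 1.989e30, 6.957e8, 3.156e7

def t_gw_circular(a, m1, m2):
    # Peters (1964) coalescence time of a circular binary
    return 5.0 * C**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

def t_gw_peters(a, e, m1, m2):
    # Approximate Peters time for an eccentric orbit (assumed (1 - e^2)^(7/2) scaling)
    return (768.0 / 425.0) * t_gw_circular(a, m1, m2) * (1.0 - e**2)**3.5

def t_gw_with_zlk(a_in, e_max, m1, m2):
    # Merger-time estimate with the ZLK correction factor (1 - e_max)^(-1/2)
    return t_gw_peters(a_in, e_max, m1, m2) * (1.0 - e_max)**-0.5

m1 = m2 = 35.0 * MSUN                 # placeholder BH masses
a_in = 40.0 * RSUN                    # placeholder inner semimajor axis
for e_max in (0.0, 0.7, 0.9):
    print(f"e_max = {e_max}: t_GW ~ {t_gw_with_zlk(a_in, e_max, m1, m2) / YR / 1e9:.2f} Gyr")

Already at e_max ≈ 0.7-0.9 the estimated merger time drops by one to two orders of magnitude, which illustrates why the eccentric systems of this subgroup can merge before the tertiary evolves off the MS.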
§.§ Stellar merger of the inner binary due to ZLK In this scenario, the inner binary merges due to three-body dynamics before it would form a BH-BH binary. At Z = 0.005, about 3.3 per cent of the CHE triple population evolves this way. In our low metallicity model, this fraction decreases slightly, to 2.2 per cent. This is because at lower metallicities, the inner period to outer period ratio increases less due to the weaker stellar winds, and therefore ZLK oscillations remain less efficient (see equation <ref>). Mergers in this channel occur in inner binaries in which one or both of the stars have already evolved off the MS; otherwise the strong tidal effects typically quench the ZLK oscillations (see section <ref>). As shown in Table <ref>, most of the mergers occur between two helium stars (59-75 per cent). The rest occur in helium star - MS star or helium star - BH binaries. The majority of the double helium star mergers (>90 per cent) originate from triples in which the stars in the inner binary were in contact during the MS and co-evolved. This also implies that the majority of them have equal masses at the time of the merger. The masses of these helium inner stars typically range from 29 to 94 M_⊙ at Z = 0.005. The outer orbital period of the triples from this channel has to be sufficiently short, such that the ZLK oscillations are strong enough to prompt the inner binary to merge. The outer pericenter at the moment of the merger typically ranges from 100 to 200 R_⊙ and it does not exceed 700 R_⊙. The eccentricities of the inner binary at the moment of the merger typically have values of e_ in≈ 0.5-0.9. For all of these triples, the tertiary is a MS star at the time of the merger and less massive than the stars of the inner binary; otherwise it would evolve faster than the stars in the inner binary and would fill its Roche-lobe while the inner stars are still on the MS. If the outer orbit does not significantly change after the merger, the tertiary star is expected to transfer mass to the merger product once it has evolved off the MS. §.§ Systems with tertiary mass transfer (TMT) Among CHE triples, this is the most common evolutionary pathway. In these systems, the outer star eventually initiates a mass transfer phase while the inner binary is detached or in contact. Approximately 55 (52) per cent at Z = 0.005 (Z = 0.0005) of all CHE triples experience this type of evolution (see Table <ref>). This means that a TMT episode would eventually occur in about 40 per cent of all stellar systems containing a binary with CHE stars (with f_ binary = 0.21, f_ triple = 0.73). While systems containing binaries with CHE stars are rare (see e.g. typical birth rates in Table <ref>), they form GW sources very efficiently <cit.>. Therefore, our predictions suggest that a non-negligible fraction of potential GW progenitors could experience a TMT episode. This is an interesting result, as TMT is thought to be very uncommon for classically evolving hierarchical triples, which would imply that such episodes play a limited role in important astrophysical phenomena <cit.>. In particular, <cit.> found that about 1 per cent of triples with primaries in the intermediate mass range belong to this evolutionary channel. Similarly, <cit.> predicts that only about 1 per cent of the observed 725 triples in the catalogue of <cit.> would eventually initiate TMT. In the following sections (<ref>-<ref>), we discuss the properties of the triples of this channel at the onset of TMT.
While predicting the outcome of a TMT episode is currently extremely challenging, highlighting several important aspects of these systems (e.g. dynamical stability of TMT, timescales of TMT episodes, the amount of transferred mass, the type of accretors, etc.) helps to better understand the nature of these systems and the role they potentially play in the evolution of GW progenitors. §.§.§ Donors of TMT episodes Here, we discuss the stellar evolutionary stage of the donor star at the onset of the mass transfer episode, as it is highly relevant for determining if the mass transfer episode occurs in a dynamically stable or unstable way <cit.>. In particular, convective envelopes can be developed by core-helium-burning or asymptotic giant branch stars. Mass transfer episodes initiated by such cool-giant donors with deep convective envelopes are more likely to occur in a dynamically unstable way than mass transfer phases initiated by giant donors with mostly radiative envelopes <cit.>. At Z = 0.005, around 80 per cent of the donors of TMT systems are stars crossing the Hertzsprung gap. At this metallicity, the largest expansion in the radius of the star occurs during this evolutionary phase, which makes binary interaction during this stage the most probable. The second most common donor type is a CHeB star with 11.3 per cent, while the rest are either stars on the first giant branch (when the tertiary M_ out,ZAMS≲ 8 M_⊙) or stars on the asymptotic giant branch. At lower metallicities, CHeB donors are more prevalent. At Z = 0.0005, only 58 per cent of the tertiary donors are HG stars while 40 per cent are CHeB stars; this is because the onset of CHeB occurs at a higher effective temperature with respect to systems at Z = 0.005. Consequently, at lower metallicities, the onset of CHeB is followed by a larger increase in radius with respect to their higher metallicity counterparts. This in turn implies that stars are more likely to fill their Roche-lobes at this evolutionary stage. §.§.§ Stability of TMT episodes The vast majority of mass transfer episodes in this channel occur in a dynamically stable way (99.9 per cent at Z = 0.005 and 98.8 per cent at Z = 0.0005). This is due to the relatively low mass ratios at the onset of the mass transfer phase (i.e. typically q_ out < q_ crit, see right panel of Fig. <ref> for our moderate metallicity model, and Fig. <ref> for our low metallicity model). Typical mass ratios for systems with HG donors are q_ out = 0.4-0.8, while for CHeB donors, they are q_ out = 0.3-0.5. The values for CHeB donors are smaller because the strong LBV winds that CHeB stars experience decrease the mass ratio over time. Unstable mass transfer phases exclusively occur with CHeB donors in our simulations. These low mass ratios also imply that the expansion due to stellar evolution drives the TMT episodes <cit.>. Consequently, we expect TMT episodes with HG donors to last 10^4 yrs, while TMT episodes with CHeB donors could last much longer, up to 10^4-10^5 years. §.§.§ Accretors of TMT episodes In this subsection, we discuss the type of accretors of TMT episodes. The evolutionary stage of the inner binaries has a crucial role in the outcome of TMT episodes. If the inner binary comprises CHE MS stars, a TMT episode probably leads to a merger, as CHE binaries have very short periods and the majority of them are in contact at the onset of the TMT <cit.>.
On the other hand, if the inner binary consists of BHs, a TMT episode is unlikely to lead to a merger by itself; however, it could, in principle, be a source of (observable) X-ray emission <cit.>. As shown in Table <ref>, the two most common types of accretors are MS-MS and BH-BH binaries. Only 11-15 per cent of CHE triples experience TMT with different accretors, such as an inner binary consisting of two helium stars or a helium star with a MS or BH companion. We highlight the relatively large fraction of BH-BH accretors (24-31 per cent of CHE triples experiencing TMT). For classically evolving triples, mass transfer towards a BH-BH binary is highly unlikely. Firstly, in systems in which a TMT episode were to occur towards a BH-BH inner binary, the stars of the inner binary need to be more massive than the tertiary, such that they form BHs before the outer star fills its Roche-lobe. Secondly, the outer star has to be sufficiently close, otherwise it would remain detached throughout its evolution. This, in turn, puts a limit on the largest possible inner orbit, if the system is to remain dynamically stable. The maximum inner orbit for such systems is so small that classically evolving inner stars (which eventually expand) would initiate mass transfer and would most likely merge, which would reduce the triple to a binary and a tertiary mass transfer would never occur <cit.>. On the other hand, if the triple has CHE inner stars, the stars will not expand and will not merge with one another; instead, the system will evolve to contain a BH-BH binary by the time the tertiary fills its Roche-lobe. §.§.§ Mass transferred towards the inner binary We discuss the amount of mass that is transferred during the TMT episode. This is an important aspect, as the relative transferred mass determines the angular momentum reservoir available to change the orbit of the inner binary. Assuming that the entire envelope of the donor star is transferred towards the inner binary, the amount of transferred mass ranges between 1-40M_⊙ for BH-BH accretors and between 10-50M_⊙ for MS-MS accretors (see left panel of Fig. <ref> for Z = 0.005 and Fig. <ref> in section <ref> of the Appendix for Z = 0.0005). Systems with MS-MS accretors typically receive a larger amount of mass than BH-BH accretors, because the tertiary star is typically more massive in the former case. This is because for the tertiary to fill its Roche lobe while the inner stars are still on the MS, the initial tertiary star needs to evolve faster and hence be more massive than the MS stars. The relative transferred mass expressed as a fraction of the total mass of the inner binary (i.e. M_ transferred/M_ tot,inner) has the same maximum value (∼ 0.5) for both BH-BH and MS-MS accretors (see grey histogram in the left panel of Fig. <ref>). §.§.§ Formation of circumbinary disc We discuss how common it is for TMT systems to develop a circumbinary disc at the onset of the mass transfer episode. As explained in section <ref>, whether a TMT episode is accompanied by the formation of a circumbinary disc can have important consequences for the evolution of the inner orbit. We find that about 63 per cent of all TMT systems develop circumbinary discs in our moderate metallicity model, while in the rest TMT proceeds in a ballistic fashion. Systems in which a circumbinary disc is formed during the TMT phase typically have larger outer pericenters at the onset of the mass transfer (a_ p, out≈ 300-6000 R_⊙) than those where TMT proceeds in a ballistic manner (a_ p, out≈ 100-600 R_⊙).
TMT with a circumbinary disc is more prevalent at lower metallicities. About 74 per cent of all TMT systems develop circumbinary discs at Z = 0.0005. This occurs because the ratio of the inner to the outer orbital separation increases less by the onset of the mass transfer phase due to the weaker stellar winds (see equation <ref>). TMT episodes with inner BH-BH binaries are somewhat more likely to occur in a ballistic fashion than with MS-MS inner binaries. About 45 (23) per cent of TMT systems with BH-BH inner binaries do not develop circumbinary discs at Z = 0.005 (Z = 0.0005), while 32 (27) per cent of TMT episodes with MS-MS inner binaries occur in a ballistic fashion. This is mainly because the inner apocenter to outer pericenter ratios at the onset of TMT are typically higher for inner BH-BHs than for inner MS-MS binaries (see equation <ref>). This difference is due to Wolf-Rayet winds, supernova kicks and possible ZLK oscillations that BH-BH inner binaries experienced prior to the TMT episode. §.§.§ Three-body dynamics prior to TMT Three-body dynamics can increase the eccentricities of the inner binary. This can, for example, significantly decrease the coalescence time due to GWs <cit.>. Three-body dynamics are almost always suppressed during the MS phase of the inner binaries due to the strong tides (see also section <ref>). Consequently, the inner orbits of TMT systems with MS-MS inner binaries are always circular at the onset of the mass transfer episode. On the other hand, this is no longer the case when the inner stars are in their post-MS phase. In Fig. <ref>, we show the cumulative distribution of the inner binary eccentricities at the onset of the mass transfer phase of TMT systems with BH-BH accretors at Z = 0.005. We see that systems without circumbinary discs tend to have eccentric inner orbits at the onset of mass transfer. The high eccentricities are caused by ZLK cycles during the post-MS evolution of the inner binary. About 40 per cent of such triples have e_ in≳ 0.4 at this stage. This is in contrast with the systems with circumbinary discs; about 90 per cent of those systems have eccentricities e_ in≲0.1. The difference is due to the larger inner period to outer period ratios that systems without circumbinary discs have (see equation <ref>). In our low metallicity model, high eccentricities at the onset of TMT are much less common (see Fig. <ref> in section <ref> of the Appendix). For these systems the inner period to outer period ratio does not increase significantly because of the weak stellar winds. §.§ Unbound systems In this channel, one of the stars in the triple becomes unbound as a result of core collapse. We distinguish systems based on whether this occurs via PISN or via classical core collapse <cit.>. As shown in Table <ref>, PISN does not occur in our moderate metallicity model, whereas at Z = 0.0005, it becomes quite prevalent; about 84 per cent of the unbound systems occur due to PISN. If the triple becomes unbound as a result of a classical core collapse, we further distinguish whether it is due to the core collapse occurring in the inner binary (97 per cent of all classical core-collapse systems at Z = 0.005 and 99 per cent at Z = 0.0005) or in the tertiary star (3 per cent at Z = 0.005 or 1 per cent at Z = 0.0005). As the inner binary consists of CHE stars, they have large initial masses (i.e. M_ ZAMS≳ 30 M_⊙) and furthermore they develop more massive CO cores than their classically evolving counterparts.
Therefore, they receive weak (if any) natal kicks when they form BHs according to the natal kick prescription we implemented. Yet weak natal kicks, or even a completely symmetric instantaneous mass loss due to neutrino losses <cit.>, can unbind the tertiary star, if the outer orbit is highly eccentric. We find that in systems in which one of the stars becomes unbound due to the core collapse in the inner binary, the outer eccentricities are large; about 70 per cent of them have e_ out≥ 0.8. In the vast majority of the cases (about 99 per cent of such unbound systems), only the tertiary is ejected, while the inner binary remains bound. If the triple becomes unbound due to the core collapse of the tertiary star while the outer eccentricity is low, this almost always occurs as a result of a strong natal kick. Consequently, most of such unbound systems have initial tertiary masses of M_ out,ZAMS≈ 8-25 M_⊙ (see also discussion in section <ref>), as these systems are expected to receive the largest kicks according to the supernova prescription of <cit.>. §.§ Systems which become dynamically unstable These triples typically have very short initial outer pericenters (a_ p, out, ZAMS≈ 70-400 R_⊙) and therefore are very close to the stability limit at ZAMS. Such systems can transition to non-secular or non-hierarchical evolution, if a_ in/a_ out, e_ out or q_ out significantly increases during the evolution <cit.>. Among CHE triples, there are primarily two processes that can trigger this change: stellar winds and core collapse. If the relative wind mass loss rate (i.e. Ṁ/M) in the inner binary is higher than that of the tertiary star, a_ in/a_ out and q_ out will increase, which can prompt the triple to experience a dynamical instability <cit.>. About 30 per cent of the systems of this channel destabilise due to stellar winds, and the destabilisation occurs when the stars of the inner binary are in their post-MS phase. At this stage, the inner stars experience strong Wolf-Rayet winds, while the tertiary star is still on the MS with significantly lower mass loss rates. In the remaining 70 per cent, the instability sets in due to the collapse of the core of one of the stars. We find that this only occurs when BH formation takes place in the inner binary. As noted in section <ref>, CHE stars typically form BHs via direct collapse, such that q_ out only increases slightly. Furthermore, the direct collapse is expected to be accompanied by a weak Blaauw kick due to neutrino losses, such that a_ in/a_ out and e_ out only increase significantly, if the inner or the outer pre-core-collapse orbits are eccentric, respectively. The pre-core-collapse inner orbit is eccentric in 72 per cent of the systems of this channel, and the eccentricity is caused by ZLK oscillations. In the remaining 28 per cent, three-body dynamics are not efficient in driving up the eccentricity because the mutual inclination is outside of the critical Kozai range <cit.>. Therefore, the core collapse occurs when the inner orbit is circular. These systems still become unstable during the BH formation, because 1) either a_ in/a_ out already increased strongly due to stellar wind mass losses before the BH formation or 2) the outer orbit is eccentric and the core collapse occurs while the tertiary star is near the outer pericenter (leading to a significant increase in e_ out). The occurrence rate of this channel is strongly dependent on metallicity (3.5 per cent of all CHE triples at Z = 0.005 and 0.7 per cent at Z = 0.0005, see Table <ref>).
This dependence is due to the reduced strength of stellar winds and ZLK oscillations (which are responsible for any eccentricity in CHE inner binaries) at lower metallicities. § THE ORIGIN OF EACH EVOLUTIONARY CHANNEL In this section, we discuss the initial parameters of the triples from each evolutionary channel introduced in section <ref>. We find that the initial parameters can be used as a proxy to determine the final evolutionary outcome of CHE triples. In particular, the evolutionary outcome can be parameterised by the initial mass and orbital separation of the tertiary star. The parameters of the inner binary play a less important role in this regard, as the parameter space for CHE inner binaries is already quite reduced. We illustrate this in the left panel of Fig. <ref> by showing an ensemble of CHE triples at Z = 0.005, in which the parameters of the inner binary are the same, but the mass and the orbital separation of the tertiary star are varied (therefore this grid represents only a small subset of the entire CHE population discussed in section <ref>). The inner binary consists of two 70 M_⊙ stars on a circular initial orbit with a_ in, ZAMS = 22.4 R_⊙ (similarly to the example systems discussed in section <ref>). The initial tertiary mass ranges from 5 to 100M_⊙, while a_ out,ZAMS ranges from 200 to 10^4 R_⊙. §.§ Initial parameters of systems of different evolutionary channels The majority of the triples shown in the left panel of Fig. <ref> experience TMT episodes. Their initial outer orbital separations are relatively short and range roughly from 100 to 3300 R_⊙. The evolutionary phase of the inner stars at the onset of the TMT episode depends on the initial mass of the tertiary star. For the systems shown in the left panel of Fig. <ref>, the inner binary at the onset of TMT comprises BHs, if M_ out, ZAMS≲59 M_⊙, helium stars, if 59 M_⊙≲ M_ out, ZAMS≤ 70 M_⊙, and MS stars, if M_ out, ZAMS≥ 70 M_⊙. The majority (53 per cent) of the TMT systems in the left panel of Fig. <ref> have BH-BH inner binaries. For the entire population of CHE triples presented in section <ref>, the same percentage is smaller (i.e. 31 per cent) at the same metallicity (see Table <ref>). As shown in Fig. <ref>, this quantity (i.e. the ratio of the number of TMT systems with BH-BH inner binaries and the number of all TMT systems) scales proportionally to the initial mass of the secondary star in the inner binary. This means that TMT episodes occur more frequently with BH-BH accretors among CHE triples with more massive inner stars. This is due to our assumptions about the initial distribution of the triples (section <ref>). If the TMT occurs towards a BH-BH inner binary, the tertiary has to be initially the least massive star in the triple. With increasing M_ 2,ZAMS, the fraction of triples for which M_ 2,ZAMS > M_ out,ZAMS increases because of our assumptions of a maximum initial stellar mass of M_ ZAMS, max = 100 M_⊙ and a flat outer mass ratio distribution. In 15 per cent of the triples shown in the left panel of Fig. <ref>, the inner binary merges before BH formation or before a TMT episode occurs. All such mergers in the grid occur between two helium stars, and are due to ZLK oscillations that arise when the stars of the inner binary evolve off the MS. The initial outer orbital separations in this channel are very short, i.e. 200 to 241 R_⊙, while the tertiary masses range between 32 ≤ M_ out, ZAMS/M_⊙≤ 68.
For lower tertiary masses (M_ out, ZAMS<32 M_⊙), the ZLK oscillations are not strong enough to boost the inner eccentricity and cause a mass transfer episode in the inner binary. For larger tertiary masses (M_ out, ZAMS>70 M_⊙), the tertiary typically fills its Roche-lobe before the stars of the inner binary evolve off the MS. However, during the main-sequence phase of the inner stars, the effects of ZLK cycles are quenched, and consequently no mergers are prompted by three-body dynamics before the tertiary initiates a TMT episode. Triples of the no post-MS MT channel in the left panel of Fig. <ref> have initial outer orbits a_out≳ 2000-3000 R_⊙. Their initial tertiary mass is also typically outside of the range of ∼8-25 M_⊙, such that the system does not dissociate due to SN kicks. As we show in the next subsection, three-body dynamics are not important for the evolution of these systems. In the left panel of Fig. <ref>, we show the initial pericenter (a_ outer,ZAMS) distribution of the entire CHE triple population for each evolutionary channel at Z = 0.005. As can be seen, the range of initial pericenters is in agreement with those shown in Fig. <ref> for all channels except for the unbound systems (since for the unbound systems, the outer eccentricity plays a crucial role, as explained in section <ref>, and in the grid we assume circular outer orbits). This again confirms that the parameters of the tertiary star play the most important role in determining the evolutionary path of a CHE triple. As shown in the left panel of Fig. <ref>, the range of a_ outer,ZAMS of systems with TMT episodes increases with decreasing metallicity. At lower metallicity, the stellar winds are weaker and consequently the outer orbit widens less. Therefore, the maximum a_ outer,ZAMS at which the tertiary stars can still fill their Roche-lobes also increases with decreasing metallicity. §.§ Initial parameters of triples with three-body dynamics In the right panel of Fig. <ref>, we show the maximum eccentricities that the inner binaries reach during their evolution (e_ in, max). About 29 per cent of the triples shown in the right panel of Fig. <ref> reach e_ in, max≥0.4 due to ZLK cycles. In all of these triples, the tertiary star eventually fills its Roche-lobe (although in some cases, the inner binary merges first). For the systems shown in Fig. <ref>, ZLK cycles are efficient when a_ out, ZAMS≲1200 R_⊙ and M_ out, ZAMS≲70 M_⊙. When the outer orbit is a_ out, ZAMS≳1200 R_⊙, the ZLK cycles are quenched by various short-range forces (e.g. precession caused by tides or general relativistic effects). If a_ out, ZAMS≲1200 R_⊙ but M_ out, ZAMS≳70 M_⊙, the tertiary star fills its Roche-lobe while the stars in the inner binary are still on the MS. The inner binaries of these triples do not develop high eccentricities, as ZLK cycles are quenched during the MS due to strong tides (see section <ref>), and TMT episodes with MS-MS accretors are expected to result in the merger of the inner binary (see section <ref>). The right panel of Fig. <ref> also shows that e_ in, max does not decrease smoothly with increasing outer orbital separation; instead, it drops rather abruptly across a_ out, ZAMS≈1200 R_⊙. Triples with a_ out, ZAMS≈1200 R_⊙ reach very large inner eccentricities (e_ in, max≈ 0.9), while at slightly larger orbital separations (i.e. a_ out, ZAMS≈1500 R_⊙) the ZLK cycles are completely quenched.
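To make the quoted a_ out thresholds more tangible, the minimal sketch below compares the standard quadrupole-order ZLK oscillation timescale with the period of the 1PN (general relativistic) apsidal precession of the inner orbit, one of the short-range effects that quenches the cycles. All input values (a 40 M_⊙ + 40 M_⊙ BH inner binary at a_in = 40 R_⊙ with a 30 M_⊙ tertiary on a circular outer orbit) are assumed, illustrative numbers rather than outputs of our simulations, and the sketch ignores tidal precession and stellar evolution, both of which are included in the population synthesis.

```python
# Illustrative only: quadrupole-order ZLK timescale (e.g. Kiseleva et al. 1998)
# versus the period of the 1PN (GR) apsidal precession of the inner orbit.
# The masses and separations below are assumed example values.
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
Msun = 1.989e30      # kg
Rsun = 6.957e8       # m

m1 = m2 = 40.0 * Msun            # BH-BH inner binary (assumed)
m3 = 30.0 * Msun                 # tertiary mass (assumed)
a_in = 40.0 * Rsun               # inner separation after wind widening (assumed)
e_in, e_out = 0.0, 0.0

def period(a, m_tot):
    """Keplerian orbital period in seconds."""
    return 2.0 * np.pi * np.sqrt(a**3 / (G * m_tot))

def t_zlk(a_out):
    """Quadrupole-order ZLK oscillation timescale."""
    P_in, P_out = period(a_in, m1 + m2), period(a_out, m1 + m2 + m3)
    return (2.0 / (3.0 * np.pi)) * (P_out**2 / P_in) \
        * ((m1 + m2 + m3) / m3) * (1.0 - e_out**2)**1.5

def t_gr():
    """Period of the 1PN apsidal precession of the inner orbit."""
    omega_dot = 3.0 * (G * (m1 + m2))**1.5 / (a_in**2.5 * c**2 * (1.0 - e_in**2))
    return 2.0 * np.pi / omega_dot

for a_out in [600.0, 1200.0, 2400.0, 4800.0]:
    ratio = t_zlk(a_out * Rsun) / t_gr()
    print(f"a_out = {a_out:6.0f} Rsun : t_ZLK / t_GR = {ratio:6.2f}")
# ZLK eccentricity growth is expected to be suppressed once t_ZLK >~ t_GR.
```

With these illustrative values, the ratio t_ZLK/t_GR crosses unity at outer separations of order 10^3 R_⊙, broadly in line with the thresholds quoted above.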
The above-mentioned effects also hold qualitatively for the entire CHE triple population presented in section <ref> (see the right panel of Fig. <ref>). At Z = 0.005, the ZLK oscillations are only efficient if a_ p, out, ZAMS≲ 1200 R_⊙. This implies that three-body dynamics are only relevant for those triples in which the tertiary star would eventually fill its Roche-lobe (compare the right and left panels of Fig. <ref>). Consequently, if the tertiary in a CHE triple remains detached throughout its evolution, the evolution of the inner binary will almost always be dynamically decoupled from the tertiary star. If a_ p, out, ZAMS≲ 1200 R_⊙, a wide range of inner eccentricities is possible (e_ in, max = 0-0.9) for all a_ p, out, ZAMS. In this case, the value of e_ in, max is primarily determined by the mutual inclination of the triple <cit.>. In our low metallicity model (Z = 0.0005), the maximum initial outer pericenter at which three-body dynamics are still relevant is lower than in our moderate metallicity model (right panel of Fig. <ref> in section <ref> of the Appendix). At such low metallicities, stellar winds do not widen the orbit of the inner binary significantly, and thus the timescales of the ZLK cycles do not decrease as much as at Z = 0.005. § GRAVITATIONAL WAVE SOURCES We now discuss the possible formation channels of GW sources that originate from CHE triples and their properties. In section <ref> we predict the merger rate densities and compare them to those of GW sources from isolated CHE binaries. For this, we assume two test populations with different stellar multiplicity fractions. One population is composed of only single and binary stellar systems (i.e. with stellar multiplicity fractions at ZAMS of f_ single = 0.3, f_ binary = 0.7, f_ triple = 0), while in the other, triples dominate (f_ single = 0.06, f_ binary = 0.21, f_ triple = 0.73). In sections <ref> - <ref> we discuss the properties of each GW formation channel from CHE triples and binaries. These predictions are based on the synthetic populations discussed previously, and in cases where the simulations are stopped before the formation of a BH-BH binary, we predict the further evolution of CHE triples beyond the stopping conditions (Section <ref>) by applying simple assumptions (as detailed below). The four main identified formation channels of GW sources within our CHE triple population are (see also Fig. <ref>): * Effectively isolated inner binary: For such triples, three-body dynamics are suppressed by various short-range forces and the tertiary star remains detached throughout the entire evolution. The inner binary therefore evolves effectively as an isolated binary, and the properties of these GW sources are indistinguishable from those of the CHE binary channel. There are two ways these systems can form: i) systems in which the tertiary star remains bound to the triple (the no post-MS MT channel, see section <ref>) and ii) systems in which the tertiary star becomes unbound from the triple (the unbound channel discussed in section <ref>). For the latter, we assume that the orbit of the inner binary is not affected by the tertiary unbinding from the triple system. * TMT with a BH-BH accretor: This channel comprises systems in which the tertiary star fills its Roche-lobe when the inner binary is a BH-BH binary. The inner binary components do not coalesce during the TMT phase, but will merge afterwards due to GW emission.
In these systems, the tertiary star can affect the evolution of the inner binary in two major ways: via a TMT episode and via three-body dynamics (see section <ref>). In section <ref>, we introduced our assumptions regarding the evolution of the inner orbit during a TMT episode. * TMT with a MS-MS accretor: In this scenario, two sequential mergers take place in the system <cit.>. First, the inner binary merges when the stars are still on the MS as a result of mass transfer from the tertiary to the inner binary. This reduces the triple to a binary. We assume that the merger product of the inner binary evolves further in a classical way (as opposed to CHE). Consequently, the merger product expands, eventually fills its Roche-lobe and transfers mass to the initial tertiary star. The orbit shrinks due to this second phase of mass transfer and, as a result, a merging double compact object is formed. The second phase of mass transfer is essential. Systems in which no mass transfer takes place after the inner binary merger might form detached BH-BH binaries, but these are too wide to merge due to GWs within the Hubble time. We note that double MS mergers among CHE triples typically occur due to TMT episodes, as three-body dynamics are suppressed during the MS phase. * Dynamical mergers: In the triples of this channel, ZLK oscillations are very efficient and drive up the inner eccentricities to e_ in≈ 0.6-0.9 after the stars of the inner binary have become BHs. Such systems merge due to GW emission within a few Myr. The tertiary remains detached until the inner binary merges, and therefore these triples belong to the no post-MS MT channel. As discussed in section <ref>, these systems are rare. We ignore the possibility of a GW source forming in a CHE triple through stellar mergers that do not occur between two MS stars. Such mergers can occur due to TMT or three-body dynamics in (i) helium star-MS binaries or (ii) double helium star binaries. We justify the omission of the first type by its rarity: this type of merger occurs in only 0.2-2 per cent of all CHE triples, depending on metallicity. For the second type, the merger product is a helium star, which is not expected to expand significantly and is unlikely to ever fill its Roche-lobe. Without a phase of mass transfer that leads to orbital shrinkage, the binary remains too wide to merge within a Hubble time. However, if the merger remnant accretes matter during the TMT phase, it could regain a hydrogen-rich envelope and expand later in its evolution. For simplicity, we neglect this scenario. §.§ Rates of GW mergers In the population without triples, the predicted merger rate density is R_ merger = 44.2 Gpc^-3yr^-1 (see Table <ref>). This is about a factor of two higher than predicted by <cit.>, which we consider rough agreement given the simplicity of our rate calculation (see the discussion in Appendix <ref>). The total merger rate density of the population containing triples is R_ merger = 23 Gpc^-3yr^-1. This is about a factor of two lower than that of the population without triples. There are two reasons for this difference. Firstly, stellar mergers frequently occur in CHE triples, preventing the formation of compact BH-BH binaries. While all CHE binaries form BH-BH binaries, only about 60 (45) per cent of CHE triples form (inner) BH-BH binaries at Z = 0.005 (Z = 0.0005).
Secondly, the number of systems formed in the population with triples is always lower per unit stellar mass formed than in the population without triples, as triple systems, on average, have larger total masses than binaries and/or single stars. In the population with triples, about half of the GW mergers from CHE systems originate from triples. The role of the tertiary is negligible for 69 per cent of the GW progenitors from CHE triples. In the remaining 31 per cent, the evolution of the inner binary is affected by the tertiary star via TMT and/or three-body dynamics. §.§ Isolated binaries At Z = 0.005, about 68 per cent of the CHE binary population forms a BH-BH binary that merges within the Hubble time, while at Z = 0.0005, all CHE binaries merge due to GWs within the age of the universe. In our moderate metallicity model, the delay times of the BH-BH binaries from this population range from 3 to 50 Gyr (and therefore the delay times of GW sources range from 3 to 13.5 Gyr). In our low metallicity model, the delay times are considerably shorter, ranging roughly from 100 to 600 Myr. At Z = 0.005, only those binaries merge which were in contact during their MS phase. At Z = 0.0005, about 97 per cent of all GW progenitors were in contact during their MS phase. Since we assume such binaries equalise in mass, we predict that the vast majority of GW sources from this population consist of equal-mass black hole binaries (in broad agreement with ). The masses of the merging binary black holes from this channel range from 20 to 42 M_⊙ at Z = 0.005 and 33 to 54 M_⊙ at Z = 0.0005. §.§ Effectively isolated inner binaries This is the dominant channel among CHE triples, with a predicted merger rate density of 8.8 Gpc^-3yr^-1. At Z = 0.005 (Z = 0.0005), about 19 (12) per cent of all CHE systems (i.e. CHE binaries and CHE triples, see section <ref>) are expected to form GW sources via this channel. In 53 per cent of the GW progenitors of this channel, the tertiary star becomes unbound by the time both stars in the inner binary form BHs. This percentage drops to 38 per cent at Z = 0.0005. The demographics of this channel are nearly indistinguishable from those of the isolated binary population. The merger efficiency of this channel, which we define as the number of GW sources as a fraction of the BH-BH inner binaries formed via a given channel, is 68 per cent. Unsurprisingly, this is the same as the merger efficiency of the isolated CHE binary channel. Similarly to the CHE binary case, the majority of the inner binaries of these triples were also in contact during their MS phase, and therefore this channel also produces overwhelmingly equal-mass mergers. §.§ TMT with a BH-BH accretor This is the dominant formation channel in which the evolution of the inner binary is affected by the tertiary star. The predicted merger rate density is R_ merger = 3.8 Gpc^-3yr^-1, which accounts for about 16 per cent of all GW mergers from CHE systems. About 10 per cent of all CHE systems form merging binary BHs via this channel. With our simplistic models of TMT (see subsection <ref>), we predict that the outer orbit widens as a result of the TMT episode for all triples considered in this study. In the lower panel of Fig. <ref>, we show how the outer pericenter changes after the mass transfer phase for triples experiencing TMT with a BH-BH inner binary accretor for our moderate metallicity model (and in the lower panel of Fig. <ref> for our low metallicity model).
The orbital separations typically widen by a factor of 1.5-2. Even if the inner orbit remains unchanged by the TMT, the outer orbit widens enough that three-body dynamics typically become negligible after the TMT episode for the majority of these triples. For example, at Z = 0.005, in those TMT systems in which ZLK oscillations are effective prior to the mass transfer event, 70 per cent of the inner binaries become decoupled from the tertiary star after the TMT episode. If the evolution of the BH-BH inner binary is decoupled from the tertiary, its orbital evolution is solely determined by the emission of GWs (and therefore the coalescence time can be determined according to , otherwise we use equation <ref>). As noted in section <ref>, we make different assumptions about the evolution of the inner orbit based on whether a circumbinary disc is formed during TMT. We therefore discuss the properties of GW sources from these two subtypes separately. §.§.§ Accretion through a circumbinary disc The predicted merger rate of this channel is 2.4 Gpc^-3yr^-1. The merger efficiency is just 6 per cent higher than that of isolated binaries. The slight increase is due to the small number of eccentric inner binaries at the onset of the mass transfer (∼10 per cent of systems undergoing TMT with BH-BH accretors and circumbinary discs have e_ in>0.4, see Fig. <ref>). The small difference is not surprising, as we have assumed here that the orbit of the inner binary does not change due to circumbinary disc accretion. However, if circumbinary disc accretion leads to a significant increase (decrease) in the inner period, the compact object merger fraction decreases (increases) significantly as well. Clearly, better models are required to understand circumbinary accretion of a BH binary from a mass-transferring tertiary star. §.§.§ Ballistic accretion The properties of these GW sources depend on how the inner binary evolves due to TMT. If we simplistically assume that the inner orbit does not change (i.e. Scenario 1, see section <ref>), then the merger rate density of this channel in the local universe is R_ merger = 1.4 Gpc^-3yr^-1. In this case, about 3.8 (2.3) per cent of all stellar systems containing a CHE binary form GW sources via this channel at Z = 0.005 (Z = 0.0005). The merger efficiency of this channel is 75 per cent at Z = 0.005, which is slightly higher than that of the CHE binary population (68 per cent). As discussed in section <ref>, a considerable fraction of these sources have high eccentricities, namely 48 per cent with e_ in≳ 0.4 at Z = 0.005 and 10 per cent at Z = 0.0005. This results in shorter delay times and more mergers with respect to the isolated CHE binary channel (top left panel of Fig. <ref>). If the orbital evolution can be described by equation <ref> (i.e. Scenario 2, see section <ref>), then the inner pericenters of BH-BH binaries decrease by 1-3 orders of magnitude due to the TMT episode, depending on the efficiency parameter α_ TMT. In this case, all inner binaries become dynamically decoupled from the tertiary star after the TMT episode. As shown in the left panel of Fig. <ref>, the peak of the orbital separation distribution shifts from 32 R_⊙ to 25, 5 and 1 R_⊙ with α_ TMTλ_ TMT = 5, 0.5 and 0.05, respectively. With such short periods, nearly all (i.e. typically ≳ 99 per cent) of the inner binaries eventually merge. However, none of the inner binaries merge during the mass transfer itself; they merge due to GW emission afterwards. In Fig.
<ref>, we show that the typical delay times in Scenario 2 are also orders of magnitude shorter than those of isolated CHE binaries. With α_ TMT = 0.05, the delay times of these GW sources are dominated by the stellar evolution. Such timescales could make TMT episodes relevant in young clusters in which star formation is still active. Even when assuming a weaker friction exerted by the transferred mass (i.e. α_TMTλ_ TMT =5), resulting in the smallest orbital shrinkage in our models, most of the BHs merge within a few hundred Myr at Z = 0.005. Despite the higher merger efficiency, the predicted merger rate density for Scenario 2 is considerably lower (i.e. R_ merger = 0.5 Gpc^-3yr^-1) than in Scenario 1. This is due to the extremely short delay times, implying that the progenitor stars must have formed recently, when the cosmic star formation rate is low <cit.>. As the cosmic star formation rate is expected to increase strongly from z=0 to z=2, we expect the merger rate density of this channel to be significantly higher at z≈2 than at z = 0. This would make these sources more relevant for third-generation GW detectors. We mention two interesting aspects of this channel. Firstly, depending on the efficiency parameter of the TMT episode, these systems could be in the LISA frequency band <cit.> during the mass transfer phase. In the right panel of Fig. <ref>, we show the frequency at which the BH-BH binaries emit GWs after the mass transfer episode. With α_TMT = 0.5, about half, and with α_TMT = 0.05, all of our systems enter the mHz regime during the mass transfer phase. The evolution through the LISA frequency range would be primarily driven by gas dynamics instead of GW emission <cit.>. Such sources would be detectable by LISA if the corresponding luminosity distances are not larger than ∼10 kpc and ∼10^4 kpc in the case of α_TMT = 0.5 and α_TMT = 0.05, respectively <cit.>. Secondly, a TMT episode could be accompanied by a detectable electromagnetic signal, as the transferred mass is expected to heat up when it reaches the inner BH binary. If the delay time between this signal and the GW merger is within the lifetimes of typical observing missions, then the GW merger could be associated with this electromagnetic counterpart <cit.>. We find that the time between the end of the TMT episode and the GW merger in the case of α_ TMTλ_ TMT = 0.05 is shorter than a year for 6 per cent of these sources at Z = 0.0005. This implies that in this case an electromagnetic counterpart could be detected shortly before the GW merger. This is in contrast with the possible electromagnetic signatures associated with BH mergers in AGN discs, where the electromagnetic counterpart would occur after the GW merger <cit.>. §.§ TMT with a MS-MS accretor This channel has a low merger rate density of R_ merger = 0.2 Gpc^-3yr^-1. Even though 25 per cent of all systems containing a CHE binary experience a double MS merger in the inner binary at Z = 0.005, only 1.1 per cent of them form merging binary BHs. This low merger efficiency is due to two reasons. Firstly, if the mass transfer episode between the merger product and the tertiary star proceeds in a dynamically unstable way, the process mostly ends in a stellar merger and no double compact binary is formed. Secondly, if the same mass transfer instead proceeds in a stable way, the binary BH typically has an orbit that is too wide to merge within the Hubble time.
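The statement that wide BH-BH orbits cannot merge within the Hubble time can be made quantitative with the circular-orbit coalescence time of Peters (1964). The sketch below is only illustrative: the component masses are assumed values, and the delay times quoted in this paper are computed for the actual (possibly eccentric) orbits of the synthetic populations.

```python
# Minimal sketch: GW coalescence time of a circular BH-BH binary (Peters 1964).
# The 30 + 30 Msun masses are assumed example values.
import numpy as np

G, c = 6.674e-11, 2.998e8           # SI units
Msun, Rsun = 1.989e30, 6.957e8
Gyr = 3.156e16                       # seconds per Gyr

def t_gw_circular(a, m1, m2):
    """Coalescence time (s) of a circular binary with separation a (m)."""
    return 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

m1 = m2 = 30.0 * Msun
for a_rsun in [1.0, 5.0, 25.0, 50.0]:
    t = t_gw_circular(a_rsun * Rsun, m1, m2)
    print(f"a = {a_rsun:5.1f} Rsun -> t_GW = {t / Gyr:10.3e} Gyr")
# Only separations of a few tens of Rsun or less lead to mergers within a
# Hubble time for such masses; wider orbits remain as detached BH-BH binaries.
```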
We note, however, that these predictions are sensitively dependent on uncertain stellar physics (such as the efficiency of the CEE phase, the mass-loss radius exponent and the binding energy of stars with M_ ZAMS≳ 100 M_⊙). We also note that the merger efficiency is significantly higher in our low metallicity model: 12.3 per cent of triples with a double MS merger form merging binary BHs. As the merger efficiency seems to increase with decreasing metallicity, and we only calculate the merger rate density based on two metallicities, it is likely that we underestimate the merger rate density for this channel (see the more detailed explanation in Appendix section <ref>). In the case of a TMT episode with a MS-MS accretor, we always assume that the inner binary merges due to the mass transfer phase. We justify this assumption by the fact that CHE MS-MS binaries tend to be on very close orbits (∼20-30 R_⊙) compared to their stellar radii (∼5-10 R_⊙). A significant fraction of them are already in contact. Furthermore, these stars may swell up as a result of accretion, and this type of mass transfer event is likely to end in a merger <cit.>. The merger product is a rejuvenated MS star with a mass of M_1+2 = M_1 + M_2. This means that we neglect any accretion during TMT and assume a fully conservative merger without mass outflows. At Z = 0.005, the mass of the inner binary merger remnant M_ 1+2 ranges from 65 to 188 M_⊙. The distribution has a peak around ∼ 100 M_⊙. At Z = 0.0005, the mass of the merger product ranges from 70 M_⊙ to 190 M_⊙. The orbital separations after the TMT episode are shown in the upper panel of Fig. <ref> (and in Fig. <ref> for our low metallicity model). We can see that the outer orbit typically widens by a factor of 1.7-2.5 and that the orbital separations range from 150 to 6800 R_⊙. While the ranges are similar at both metallicities, at Z = 0.0005 the typical orbital separations are significantly shorter. Most of the systems experience a second phase of mass transfer after the TMT episode (62 per cent at Z = 0.005 and 96 per cent at Z = 0.0005), and typically the donor star is on the Hertzsprung gap during this second phase of mass transfer (about 99 per cent at Z = 0.005 and about 86 per cent at Z = 0.0005). More evolved donor stars are not expected to occur frequently, as the onset of CHeB occurs at a cooler effective temperature with increasing mass and is followed by a less significant subsequent radial expansion <cit.>. In particular, for M_ ZAMS≳ 100 M_⊙, stars are predicted to expand negligibly after the CHeB, even at low metallicities. Regarding the stability of the mass transfer between the merger remnant and the initial tertiary, we find that it occurs in a dynamically unstable manner in 66 (30) per cent of cases at Z = 0.005 (Z = 0.0005). We assume that CE phases with a donor star on the Hertzsprung gap result in a merger, following <cit.> <cit.>. At both metallicities, binary BHs are only produced when the second phase of mass transfer proceeds in a stable manner. Furthermore, in order to form a GW source, the orbit needs to be compact enough (a_ out≲ 1000 R_⊙) at the onset of the second mass transfer event. This only occurs in about 5 per cent (30 per cent) of systems with stable mass transfer at Z = 0.005 (Z = 0.0005). This is the only GW formation channel of CHE triples that yields significantly different mass and mass ratio distributions from those of the CHE binary channel. The masses of the merging binary BHs range from 16 to 27 M_⊙ at Z = 0.005 and 17 to 54 M_⊙ at Z = 0.0005.
The mass ratios range from 0.7 to 0.8 at Z = 0.005 and from 0.5 to 1.0 at Z = 0.0005. All other channels produce merging binary BHs with masses that range from 20 to 42 M_⊙ at Z = 0.005 and 33 to 54 M_⊙ at Z = 0.0005. The vast majority (≳ 90 per cent) of these systems have equal masses, as the inner binaries had been in contact during their MS phase. §.§ Dynamical mergers The merger rate density of this channel is very low, R_ merger = 0.05 Gpc^-3yr^-1. The delay times of these systems are very short and range from 4 to 20 Myr. Similarly to the GW progenitors that have experienced TMT episodes with ballistic accretion, the short delay times imply that the merger rate density could be about an order of magnitude larger at z ≈ 2. About 25 per cent of these systems have eccentricities e_ in≳ 10^-4 when the characteristic GW frequency reaches 10 Hz, making the eccentricities detectable by third-generation detectors <cit.>. For all systems, the tertiary star is still on the MS when the inner binary merges due to GWs, with outer pericenters of a_p,out≈ 120-790 R_⊙. It is therefore expected that the initial tertiary star will eventually fill its Roche-lobe once it evolves off the MS. § CONCLUSION We studied the evolution of hierarchical triples with CHE stars in the inner binary with a rapid population synthesis approach. We performed simulations with the triple population synthesis code at two representative metallicities: Z = 0.005 and Z = 0.0005. We showed that the evolution of CHE stars can be altered by the presence of a tertiary star in several ways. This can potentially lead to the formation of a number of diverse and unique astrophysical phenomena, e.g. TMT phases with BH-BH accretors, highly eccentric mergers of helium stars, and mergers of binary BHs with very short (a few Myr) delay times. To summarise our main findings: * Tertiary mass transfer (TMT) episodes are common among CHE triples: Unlike in classically evolving hierarchical triples, we predict that the TMT phase is very common among CHE triples. The tertiary star fills its Roche-lobe in about 50 per cent of all triples with CHE inner binaries. The same fraction for classically evolving systems is predicted to be a few per cent at best <cit.>. We find that the mass transfer episodes initiated by the tertiary star typically occur in a dynamically stable way. * BH-BH inner binaries that accrete from the tertiary star are also common: About 31 (24) per cent of the tertiary-driven mass-transfer episodes occur with BH-BH accretors at Z = 0.005 (Z = 0.0005). Previous population synthesis studies suggest that such a scenario is probably not possible in triples with classically evolving stars <cit.>. Therefore, mass transfer towards a BH-BH inner binary represents a unique scenario for triples (or higher-order multiples) with CHE stars in the inner binaries. An exciting prospect would be a possible EM counterpart from such an event <cit.>. * Importance of three-body dynamics: ZLK oscillations can be effective for CHE triples if the stars in the inner binary have evolved off the MS (otherwise precession due to strong tides quenches ZLK cycles) and if the initial outer pericenter is a_ p, outer,ZAMS≲ 2000 R_⊙ (otherwise ZLK cycles are quenched by various short-range forces throughout the entire evolution of the inner binary). ZLK oscillations are only present in those CHE triples in which the outer pericenter is short enough that the tertiary star would eventually fill its Roche-lobe.
The inner eccentricities of these systems can reach values up to e_ in, max∼0.9 (left panel of Fig. <ref>). The effects of three-body dynamics are negligible for those CHE triples in which the tertiary remains detached. In this case, the inner binary evolves effectively as an isolated binary. * Three-body dynamics can drive the inner binary to a stellar merger: In about 3 per cent of CHE triples, the inner binary merges before BH-BH formation. The most common type is a merger of a double helium star binary that comes into contact in a highly eccentric orbit (Table <ref>). * CHE triples form GW sources efficiently: About 30 (24) per cent of the CHE triple population forms BH binaries that merge due to GWs within the Hubble time at Z = 0.005 (Z = 0.0005). We predict a merger rate density of GW sources from CHE triples of R_ merger≈ 12 Gpc^-3yr^-1 (Table <ref>). We also predict that about half of the GW sources from CHE systems originate from triples. In 69 per cent of all GW sources from CHE triples, the inner binary evolves effectively as an isolated binary, and therefore its properties are indistinguishable from those of CHE binaries. In the remaining 31 per cent, the evolution of the GW progenitor is affected by three-body dynamics and/or TMT episodes. * Tertiary mass transfer and three-body dynamics could lead to the formation of BH-BH binaries that merge within Myrs: The vast majority of those GW progenitors from CHE triples in which the evolution of the inner binary is not decoupled from the tertiary object experience a TMT episode with a BH-BH inner binary. In this case, we model the evolution of the inner binary during the TMT phase with energy arguments <cit.> and with different assumptions on how efficiently the transferred mass shrinks the orbit of the inner binary. We find typical values for the delay time of these GW sources of a few hundred Myr and a few Myr in our model variations with the least and the most orbital shrinkage, respectively. § ACKNOWLEDGEMENTS SdM acknowledges Fabio Antonini, Adrian Hamers and Lieke van Son for insightful discussions. AD acknowledges a travel grant from the HPC3 Europa programme for providing computational resources at the Snelius supercomputer in the Netherlands and acknowledges support from API for allowing an extended visit. Computational work was performed on the Snelius supercomputer in the Netherlands and on the University of Birmingham's BlueBEAR HPC service. ST acknowledges support from the Netherlands Research Council NWO (VENI 639.041.645 and VIDI 203.061 grants). SdM acknowledges funding by the Netherlands Organization for Scientific Research (NWO) as part of the Vidi research program BinWaves with project number 639.042.728. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. § ADDITIONAL FIGURES In this section, we present the low metallicity model (i.e. Z = 0.0005) counterparts of some of the figures presented in the main text. In Fig. <ref>, we show the mass ratios at the onset of TMT and the amount of (relative) mass transferred towards the inner binary. In Fig. <ref>, we show the cumulative distribution of eccentricities at the onset of the mass transfer phase for CHE triples that experience TMT. In Fig. <ref>, we show the M_ 2,ZAMS distribution of TMT sources, distinguishing them based on the evolutionary phase of the inner binary. In Fig.
<ref>, we show the distribution of initial inner pericenters of CHE triples, distinguishing systems based on the maximum inner eccentricity reached during the evolution (left panel) and based on the evolutionary channel (right panel). In Fig. <ref>, we show the outer pericenter before and after the TMT episode for systems with MS-MS inner binaries (upper panel) and BH-BH inner binaries (lower panel) at the onset of the mass transfer phase. § CALCULATION OF BIRTH AND EVENT RATES Throughout the paper, we estimate the: * Formation efficiency (equation <ref>) * Birth rate density (equation <ref>) * Merger rate density (equation <ref>) for each identified evolutionary channel. In this section, we discuss in detail how we determine these quantities. (i) Formation efficiency: The formation efficiency expresses the number of ZAMS stellar systems formed that will evolve according to a specific evolutionary channel as a fraction of all ZAMS stellar systems formed. We calculate this quantity as: ϵ_ formation = f_ pm·N_ channel/N_ simulated, where N_ channel is the number of simulated systems that evolve according to the channel of interest, N_ simulated is the total number of sampled systems, and f_ pm is the portion of the simulated parameter space with respect to the complete parameter space, that is: f_ pm = f_ triple· f_ M_ 1, ZAMS· f_ q, in· f_ q, out· f_ a, in· f_ a, out, where f_ triple is the assumed triple fraction and f_ M_ 1, ZAMS is the fraction of the simulated parameter space of primary masses: f_ M_ 1, ZAMS = [∫_20 M_⊙^100 M_⊙ M_ 1, ZAMS^-2.3 dm] / [∫_0.08 M_⊙^0.5 M_⊙ M_ 1, ZAMS^-1.3 dm + ∫_0.5 M_⊙^100 M_⊙ M_ 1, ZAMS^-2.3 dm], where we assumed that the absolute minimum stellar mass is M_ ZAMS,min = 0.08 M_⊙ and the absolute maximum stellar mass is M_ ZAMS,max = 100 M_⊙, and, as explained in section <ref>, we sample primary masses in the range of 20-100 M_⊙. The fraction of the simulated parameter space of inner mass ratios is: f_ q, in = (1.0-0.7)/(1.0-0.0), since the distribution of (inner and outer) mass ratios is assumed to be uniform. In equation <ref>, we assume that the inner mass ratios of hierarchical triples can take values in the interval (0,1] and we sample from the interval [0.7,1]. The fraction of the simulated parameter space of outer mass ratios is f_ q, out = (1.0-0.1)/(1.0-0.0), where we assume that the outer mass ratios of triples can take values in the interval (0,1] and we sample from the interval [0.1,1]. The fraction of the simulated parameter space of inner semimajor axes is: f_ a, in = [log_10(40 R_⊙)-log_10(14 R_⊙)]/[log_10(10^5 R_⊙)-log_10(14 R_⊙)], since the distribution of the (inner and outer) semimajor axes is assumed to be uniform in logarithmic space. We assume that the inner semimajor axes of all triples range from 14 R_⊙ to 10^5 R_⊙ and we sample from the interval [14,40] R_⊙. Finally, the fraction of the simulated parameter space of outer semimajor axes is: f_ a, out = [log_10(10^5 R_⊙)-log_10(10^2 R_⊙)]/[log_10(10^5 R_⊙)-log_10(10^2 R_⊙)] = 1, where we assume that the outer semimajor axes of all triples range from 10^2 R_⊙ to 10^5 R_⊙ and we sample from the entire interval. Equation <ref> for channels involving isolated binaries reduces to f_ pm = f_ binary· f_ M_ 1, ZAMS· f_ q, in· f_ a, in. (ii) Birth rate density: The birth rate density gives the number density of ZAMS stellar systems in the local universe (that is, at redshift z≈0) that will evolve according to a specific channel.
We calculate the birth rate of systems in a certain channel as: R_ birth = ∑_Z_i [SFRd^*(Z_i,z_ ZAMS = 0)/M̃] ·ϵ_ formation, where we sum over the two metallicity values at which we performed our simulations: Z = 0.005 and Z = 0.0005. SFRd^*(Z, z) is defined as the metallicity-specific star formation rate density, and it gives the stellar mass formed within a metallicity range Z_ low≤ Z ≤ Z_ high at redshift z: SFRd^*(Z, z) = ∫_Z_ low^Z_ high f_ met(Z, z) SFRd(z) dZ, where Z_ low and Z_ high are 0.0015 (10^-10) and 0.01 (0.0015), respectively, for our model with Z = 0.005 (Z = 0.0005). Here, Z = 0.0015 is the midpoint between Z = 0.005 and Z = 0.0005 in logarithmic space, Z = 0.01 is the highest metallicity at which CHE binaries can still form GW sources in appreciable numbers, and Z = 10^-10 is an arbitrarily chosen, extremely low metallicity value. In equation <ref>, SFRd(z) is the star formation rate density, and we use the model from <cit.>: SFRd(z) = 0.01·(1 + z)^2.6/[1 + ((1+z)/3.2)^6.2] M_⊙yr^-1Mpc^-3, and f_ met(Z,z) is the metallicity distribution of the stellar mass formed. This quantity is also redshift dependent and assumed to follow a log-normal distribution <cit.>: f_ met(Z,z) = 1/(σ√(2π)) exp(-(log_10(Z) - μ(z))^2/(2σ^2)), with a standard deviation of σ = 0.5 and a redshift-dependent mean metallicity μ(z) = log_10(Z_⊙· 10^(0.153 - 0.074 z^1.34)) - 0.5 ln(10) σ^2. Finally, the term M̃ in equation <ref> is the average mass of all stellar systems, and we calculate this as: M̃ = f_ single·M̃_ 1,ZAMS + f_ binary·∫_0^1 (1 + q_ in) M̃_ 1,ZAMS dq_ in + f_ triple·∫_0^1∫_0^1 (1 + q_ in) (1 + q_ out) M̃_ 1,ZAMS dq_ in dq_ out, where we have defined M̃_ 1,ZAMS as the average mass of the primary star, i.e.: M̃_ 1,ZAMS = ∫_0.08 M_⊙^100 M_⊙ M_ 1,ZAMS f_ IMF dM_ 1,ZAMS, where f_ IMF is the normalised, piecewise continuous initial mass function of <cit.>, and f_ single and f_ binary are the single and binary fractions, respectively. We neglect higher-order systems, such that f_ triple = 1 - f_ single - f_ binary. We note that we also assume that the binary and triple fractions are independent of the primary mass of the system <cit.>. Assuming flat mass ratio distributions for both the inner and outer binary, equation <ref> becomes: M̃ = (f_ single + 3/2 f_ binary + 9/4· f_ triple) ·M̃_ 1,ZAMS. The term SFRd^*(Z_i,z_ ZAMS = 0)/M̃ in equation <ref> then gives the number of stellar systems formed per unit time and volume at redshift z = 0 in the metallicity range Z_ i,low≤ Z ≤ Z_ i, high. Multiplying this term by ϵ_ formation gives the birth rate density of systems that evolve through a given formation channel in the above-mentioned metallicity range for a given star formation history model. Summing these values over all of our metallicity bins therefore yields the total birth rate of systems in a specific channel. (iii) Merger rate density: The merger rate density gives the rate density of a given astrophysical event (such as GW transients from coalescing double compact objects) in the local universe. The main difference between the birth and merger rates arises from the considerable delay time between the formation of the stellar system and the occurrence of the GW merger. For example, if the delay time for a GW source at z = 0 is t_ delay = 10.5 Gyr, then the redshift at ZAMS of its progenitor system is z_ ZAMS≈ 2, at which the star formation rate density is an order of magnitude higher with respect to its value at z = 0 <cit.>.
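The conversion between delay time and formation redshift used in this example can be reproduced by numerically inverting the flat-ΛCDM lookback-time relation given below (with Ω_m = 0.3, Ω_λ = 0.7 and H_0 = 70 km s^-1 Mpc^-1, as adopted in this appendix). The following sketch is only meant to illustrate the quoted numbers:

```python
# Illustrative inversion of the lookback-time relation used below to map a
# delay time onto the formation redshift z_ZAMS (flat LambdaCDM with
# Omega_m = 0.3, Omega_Lambda = 0.7, H0 = 70 km/s/Mpc, as in the text).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H0 = 70.0 * 1.0e3 / 3.086e22      # Hubble constant in s^-1
Gyr = 3.156e16                     # seconds per Gyr
Om, OL = 0.3, 0.7

def E(z):
    return np.sqrt(Om * (1.0 + z)**3 + OL)

def lookback_time_gyr(z):
    integral, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)
    return integral / H0 / Gyr

def z_of_delay(t_delay_gyr):
    """Formation redshift of a source merging at z = 0 after t_delay_gyr."""
    return brentq(lambda z: lookback_time_gyr(z) - t_delay_gyr, 1e-6, 50.0)

print(f"t_delay = 10.5 Gyr  ->  z_ZAMS = {z_of_delay(10.5):.2f}")   # ~2
```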
We determine the merger rate density at z = 0 as: R_ event = ∑_Z_i∫_0 Gyr^13.5 Gyr [SFRd^*(Z_i,z_ ZAMS(t_ delay))/M̃] ·ϵ̃(t_ delay) dt_ delay, where z_ ZAMS is the redshift at which the progenitor of a given astrophysical event is formed (and therefore it is a function of the delay time), and ϵ̃ is the number of astrophysical events occurring at z = 0 with a delay time of t_ delay as a fraction of all ZAMS stellar systems formed at z = z_ ZAMS. We determine z_ ZAMS for a given delay time via the standard relation for the lookback time: t_ delay = 1/H_0∫_0^z_ ZAMS dz'/[(1 + z') E(z')], where E(z) = √(Ω_m(1+z)^3 + Ω_λ), with Ω_m = 0.3, Ω_λ = 0.7 and H_0 = 70 km s^-1 Mpc^-1. We note that our merger rate density should only be considered an order-of-magnitude estimate at best. This imprecision is due to several uncertainties in stellar physics and, notably, the limited density of our metallicity grid. We performed simulations only at two metallicities to determine the merger rate density. However, the formation efficiency and delay times of GW sources originating from CHE systems are expected to be sensitively dependent on metallicity. In particular, we overestimate the delay times for GW sources formed at 0.001<Z≤0.005, which in turn leads to an overestimation of the merger rate density at z = 0. This is because we represent all systems formed in this metallicity range with our models at Z = 0.005, at which the stellar winds are stronger and therefore lead to wider BH-BH binaries. The longer time delays imply that GW sources merging at z = 0 are predicted to have formed at a larger redshift, at which the star formation rate is higher. In particular, the cosmic star formation rate is predicted to increase monotonically up to z∼2. This could also explain why our merger rate is a factor of two higher than predicted by <cit.>. Similarly, we underestimate the delay times for GW sources formed at 0.0005<Z≤0.001, and therefore we might underestimate the merger rate densities for such systems. In particular, this could mean that the merger rate density of the TMT with a MS-MS accretor channel (discussed in section <ref>) could be significantly higher than predicted (shown in Table <ref>).
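For concreteness, the sketch below strings together the quantities defined in this appendix to evaluate the birth rate density of a single channel at z = 0. The parameter-space fractions and the SFRd(z) and f_met(Z, z) expressions follow the equations above; the channel counts, the adopted solar metallicity and the average system mass are placeholders (in the actual calculation the latter follows from the IMF integral and the multiplicity fractions), and the log-normal metallicity distribution is integrated over log_10 Z so that it is normalised to unity.

```python
# Schematic evaluation of the birth rate density of one channel at z = 0,
# assembling the expressions of this appendix. Values marked "placeholder"
# are not taken from the paper.
import numpy as np
from scipy.integrate import quad

# fraction of the full parameter space covered by the simulations
def f_M1():
    num, _ = quad(lambda m: m**-2.3, 20.0, 100.0)
    den1, _ = quad(lambda m: m**-1.3, 0.08, 0.5)
    den2, _ = quad(lambda m: m**-2.3, 0.5, 100.0)
    return num / (den1 + den2)

f_q_in = (1.0 - 0.7) / (1.0 - 0.0)
f_q_out = (1.0 - 0.1) / (1.0 - 0.0)
f_a_in = (np.log10(40.0) - np.log10(14.0)) / (np.log10(1e5) - np.log10(14.0))
f_a_out = 1.0                      # the full outer-orbit range is sampled
f_triple = 0.73                    # triple-dominated test population
f_pm = f_triple * f_M1() * f_q_in * f_q_out * f_a_in * f_a_out

# formation efficiency; the counts below are placeholders
N_channel, N_simulated = 1.0e3, 1.0e5
eps_formation = f_pm * N_channel / N_simulated

# metallicity-specific star formation rate density at z = 0
def SFRd(z):                       # Msun / yr / Mpc^3
    return 0.01 * (1.0 + z)**2.6 / (1.0 + ((1.0 + z) / 3.2)**6.2)

def f_met(log10Z, z, sigma=0.5, Zsun=0.014):   # Zsun is an assumed value
    mu = np.log10(Zsun * 10.0**(0.153 - 0.074 * z**1.34)) - 0.5 * np.log(10.0) * sigma**2
    return np.exp(-(log10Z - mu)**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def SFRd_star(Z_low, Z_high, z):
    frac, _ = quad(lambda lz: f_met(lz, z), np.log10(Z_low), np.log10(Z_high))
    return frac * SFRd(z)

M_avg = 1.2                        # placeholder average system mass in Msun
R_birth = SFRd_star(0.0015, 0.01, 0.0) / M_avg * eps_formation   # yr^-1 Mpc^-3
print(f"R_birth ~ {R_birth * 1.0e9:.2e} per Gpc^3 per yr")
```

The merger rate density additionally folds in the delay-time distribution of each channel via the delay-time-to-redshift conversion sketched above.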
http://arxiv.org/abs/2307.04394v3
20230710075623
Relieving the $S_8$ Tension: Exploring the Surface-type DBI Model as a Dark Matter Paradigm
[ "Xingpao Suo", "Xi Kang", "Huanyuan Shan" ]
astro-ph.CO
[ "astro-ph.CO" ]
APS/123-QED [email protected] Institute for Astronomy, School of Physics, Zhejiang University, Hangzhou 310027, China [email protected] Institute for Astronomy, School of Physics, Zhejiang University, Hangzhou 310027, China Purple Mountain Observatory, 10 Yuan Hua Road, Nanjing 210034, China [email protected] Shanghai Astronomical Observatory (SHAO), Nandan Road 80, Shanghai 200030, China Recent observations of weak gravitational lensing surveys indicate a smoother Universe compared to the predictions of the Cosmic Microwave Background (CMB). This is known as σ_8 tension or S_8 tension, where σ_8 represents the present root-mean-square matter fluctuation averaged over a sphere of radius 8 h^-1Mpc and S_8 ≡σ_8√(Ω_m/0.3). In this Letter, we investigate a kind of general Dirac-Born-Infeld (DBI) Lagrangian referred as surface-type DBI (s-DBI) model. We have found that, up to the linear order, the constraints on the s-DBI model with CMB from Planck2018 and low-redshift probes (WL and GC) yield S_8= 0.7685_-0.0066^+0.0077 and S_8=0.766_-0.0376^+0.0471, respectively, which are not only self-consistent but also consistent with the values derived from most low-redshift probes. Furthermore, we provide an outlook for searching the non-linear effects of this model, which could be helpful to resolve other issues by Cold Dark Matter on small scales. Relieving the S_8 Tension: Exploring the Surface-type DBI Model as a Dark Matter Paradigm Huanyuan Shan August 12, 2023 ========================================================================================= Introduction. –The ΛCDM model stands as the most widely accepted cosmological model, serving as the standard framework for Big Bang cosmology. It offers a simple yet effective description that agrees with most observations. However, with the development of theoretical and observational studies, some disagreement between different observations or between theory and observations have emerged, challenging the ΛCDM model and suggesting the need for new extended model or physics<cit.>. Among these challenges, σ_8, or S_8 tension is one of the most significant<cit.>. It shows that the low-redshift probes such as weak gravitational lensing (WL) <cit.>, galaxy clustering (GC) <cit.> as well as their combined analyses <cit.>, indicate a smoother Universe than the constraint by cosmic microwave background (CMB)<cit.>. Quantitatively, the structure growth parameter S_8 ≡σ_8 √(Ω_m/0.3) derived from low-redshift probes is systematically 2-3σ lower than the value obtained from the CMB<cit.>. Recently, a joint cosmological analysis of cosmic shear + galaxy-galaxy lensing + GC yielded a constraint of (Ω_m, S_8) = (0.305^+0.010_-0.015,0.766^+0.02_-0.014)(see <cit.>, hereafter referred as K1K-3×2pt). This result is deviated by 8.3 ± 2.6% relative to (Ω_m, S_8) = (0.3166±0.0084, 0.834±0.016) given by of Planck2018<cit.>. In this Letter, we present a novel dark matter model which offers a solution to the S_8 tension. Referred as the surface-type Dirac-Born-Infeld (s-DBI) model, it adopts an area functional form as the dark matter Lagrangian, which presents a special case within the broader class of general DBI models. Our study demonstrates that this model effectively addresses the S_8 tension by smoothing out the low-redshift structure while preserving the perturbation evolution at high redshifts. The surface-type DBI as a dark matter model. 
–Here we consider the Lagrangian ℒ ≡R/2κ + Λ_I + Λ_II√(1 + ∂_μϕ∂^μϕ) + ℒ_m and its corresponding action S = ∫ d^4x √(-g)ℒ, where g ≡(g_μν) represents the determinant of the space-time metric g_μν with signature [-1,1,1,1], R denotes the scalar curvature of Levi-Civita connection, κ≡ 8 π G with gravitational constant G, Λ_I is the vacuum energy or equivalently cosmological constant, ℒ_m is the lagrangian of normal matter including radiation and baryon, and Λ_II√(1 + ∂_μϕ∂^μϕ) with a constant Λ_II and scalar field ϕ is the Lagrangian that we introduce to represent dark matter, which we refer to as the surface-type Dirac-Born-Infeld (s-DBI) model. It is important to note that our consideration of s-DBI is primarily from a mathematical standpoint. The terms ∫ d^4x √(-g) and ∫ d^4 x √(-g)√(1 + ∂_μϕ∂^μϕ) can be viewed as formal area or volume functionals. Meanwhile, it is worth mentioning that the s-DBI also processes strong physical motivations. It can be interpreted as a general DBI with the constant warp factor <cit.> or as a low-dimension deduced equivalence in membrane theory <cit.>. For the Lagrangian given in Eq. (<ref>), applying the principle of least action leads to the Einstein field equation: R_μν - 1/2R g_μν = -κ( T_μν^(Λ_I) + T_μν^(Λ_II) + T_μν^(m)), where R_μν is the Ricci tensor, T_μν^(Λ_I) = - Λ_I g_μν and T_μν^(Λ_II) = Λ_II(∂_μϕ∂_νϕ/√(1+∂_ρϕ∂^ρϕ) - g_μν√(1+∂_ρϕ∂^ρϕ)) represent the energy-stress tensor of dark energy and dark matter in this model, respectively. Now our focus turns to the s-DBI field. In a homogeneous Universe, according to Eq. (<ref>), this field can be treated as a perfect fluid characterized by the Equation of State (EoS) w = - Λ_II^2/ρ^2= - 1/1+(a_d/a)^6 , where w ≡ P / ρ, P and ρ denoting the pressure and mass density of the s-DBI field, respectively. Here, a is the scale factor normalized to unity at the present time, and a_d is a free parameter. When the Universe evolves from a=0 to a=∞, the s-DBI field transforms from the dark-matter phase (w=0) to the dark energy phase (w=-1). The parameter a_d characterizes the scale at which this phase transition occurs and can be interpreted as the decay scale factor or decay parameter. Notably, this phase transition is rapid with a power index of six. Using Eq. (<ref>), we can derive the density evolution of ρ regard to a as follows: ρ(a) = ρ_today/√(1+a_d^-6)√(a_d^-6+a^-6)≡ρ_s √(a_d^-6+a^-6) . Moreover, considering a linear perturbation in the homogeneous Universe, the sound speed of the s-DBI field can be given by c_s^2 = c_a^2 = - w , where c_s and c_a are the rest-frame and adiabatic sound speed, respectively. The EoS and sound speed provide sufficient information to complete the scalar linear evolution equations of the Universe <cit.>. The dark matter with the above form EoS and sound speed has such properties that during the early stages (a≪ a_d), it behaves similarly to the pressure-less standard cold dark matter, but at the late stages (a close to a_d), it exhibits a certain sound speed and pressure, which leads to the smoothing out the structures that formed during the early stages. This may provide an explanation for the observed smoother Universe compared to the predictions from the CMB. In Fig. <ref>, we present the linear matter spectra of different redshifts with a_d = 3.8 as a reference. It is evident that the suppression related to the ΛCDM increases with time. The value of the decay parameter will greatly influence this process. Fig. 
<ref> shows the power spectra of different a_d values at z=0, along with the matter power spectrum of the ΛCDM model for comparison. As a_d tends towards infinity, the s-DBI model will degenerate to ΛCDM. An initial estimate for a_d can be made based on the following considerations: if a_d≤ a_today = 1, the dark matter would have already decayed to the dark energy phase, which is against the observation. Therefore, a_d should be larger than one. However, a_d should not be so large that it becomes indistinguishable from standard cold dark matter. According to <cit.> and <cit.>, any solution to the S_8 tension must be effective after z≈ 1. Hence, a_d should not exceed approximately ten. In summary, if the constraint yields a value outside the range of [1, 10], it should be considered as providing insufficient support for this model. Note that the non-relativistic approximation of the s-DBI field is equivalent to the Chaplygin gas<cit.>, which has the EoS of P= - A/ρ with a constant A>0. However, in the relativistic region, we need to consider Eq. (<ref>) and the evolution equation for ϕ (1/2∂_μlog( -g ) + ∂_μ) ∂^μϕ/√(1+∂_νϕ∂^νϕ)=0 , which represents a general minimal surface equation. Since the perturbation evolution of dark matter, particularly on large scales and in the early stage of our Universe, is dominated by non-relativistic and linear part, we can ignore the non-linear and relativistic aspects of the theory. Constraints by the observations. –To demonstrate that the s-DBI model can alleviate the S_8 tension, we perform a series of constraints using different observational datasets. We begin with the of Planck2018, which combines the TT, TE, EE and low-E angular power spectra of the CMB to constrain the cosmological parameters<cit.>. This baseline analysis is advantageous as it avoids model-dependent non-linear effects that may introduce uncertainties <cit.>. For the low-redshift probes, we employ the WL shear catalog from KiDS1000<cit.> and the GC data from SDSS-III BOSS<cit.>. In our analysis, we treat the high-redshift probe (CMB) and low-redshift probes (WL and GC) separately, instead of combining them, since if the two data sets can give a consistent result, it will be a stronger proof of the correctness of a model. Additionally, we employ the same data sets to constrain the ΛCDM model in parallel, serving as a control group for comparison. We modified the Boltzmann code <cit.> [<https://lesgourg.github.io/class_public/class.html>] to perform perturbation calculations. Based on it, a public Markov Chain Monte Carlo (MCMC) sampler <cit.>[<https://baudren.github.io/montepython.html>] was used. All the MCMC samplings in our constraint are done with Metropolis-Hasting algorithm coded in . To constrain this model with Planck2018 , we assume a flat prior on some nuisance parameters in Planck likelihood <cit.> and the cosmological parameters {ω_b, Ω_s, h, A_s, n_s, τ_reio, a_d}, where Ω_s ≡ρ_s/ρ_cr≡8π G /3H_0^2ρ_s is the reduced dark matter density in our model. The names and prior of base cosmological parameters are listed in Table <ref>. For comparison, we also conducted a parallel ΛCDM constraint using a similar setup. Note that in all the analysis we always assume the spacial curvature is zero (Ω_K=0) and our neutrinos model is the same as Planck2018 with two massless species and one massive with 0.06eV. The posterior distributions with Planck2018 are presented in Table <ref>. 
The Markov chain used for the analysis satisfies the Gelman-Rubin convergence criterion with R-1 ≈ 10^-3, indicating good convergence. The posterior distributions for all parameters are approximately Gaussian, and the acceptance rate of the chain is around 0.22, indicating reliable convergence. Furthermore, our constraints on the ΛCDM model are consistent with the results reported by the Planck2018 collaboration <cit.>, validating the accuracy of our analysis. The results reveal slight differences in the mean values or best fits of common cosmological parameters between the s-DBI and ΛCDM. However, significant discrepancies are observed in the total matter density Ω_m and the structure growth parameter S_8. The s-DBI model yields values of (Ω_m, S_8) = (0.3072_-0.0055^+0.0071,0.7685_-0.0066^+0.0077) , which are in strong agreement with the results from K1K-3×2pt and clearly deviate from the result given by Planck2018 <cit.>. To assess the goodness of fit, we present the CMB temperature power spectrum with the best-fit model in Fig. <ref>. It is evident that the discrepancy between the two models is significantly smaller than the discrepancy between the theoretical predictions and observational data, giving χ^2_obs,LCDM=4.51× 10^-12, χ^2_obs,s-DBI= 4.38 × 10^-12 and χ^2_s-DBI,LCDM = 1.19× 10^-13, where χ^2_i,j is defined as χ^2_i,j≡∑_k=0^N-1(f_k^(i) - f_k^(j))^2/f_k^(j) with f_k^(i) the k-th entry of data set i with total length N. These results suggest that both the s-DBI and the ΛCDM model are strongly favored by Planck2018 data. Due to the similarity of the results, we did not include the plots of other components of the power spectra. It's also worth noting that the s-DBI model does not exacerbate the Hubble tension<cit.>. On the contrary, it relieves the Hubble tension by increasing the Hubble constant slightly higher to h≈ 0.68, compared with the result from ΛCDM with h≈0.67. After constraining the model with Planck2018 CMB power spectrum, we proceed with the combined constraint using low-redshift probes, i.e., WL and GC. We perform parallel constraints for both the s-DBI and ΛCDM models. The non-linear scale evolution of our model is not available, so we eliminate the non-linear effect reliably. For WL, we adopt the correlation function ξ_+(θ) and truncate the small scale portion (θ<10) using the KiDS cosmology analysis pipeline <cit.>. The validity of this truncation is ensured through a comparison between the correlation function data vector ξ⃗_+^NL and ξ⃗_+^L, which include the non-linear and linear effects, respectively. By increasing the angular variable θ, we verify that the relative distance between the two vectors ||Δξ⃗|| / ||ξ⃗_+^NL|| reaches a level of 10^-2, where Δξ⃗≡ξ⃗_+^L - ξ⃗_+^NL and ||· || ≡√(⟨·, ·⟩). Note that we discard the correlation function ξ_- since the effect of non-linear on ξ_- can hardly be removed. For GC, we focus only on the measurements of the baryon acoustic oscillations (BAO) and discard the redshift-space distortions. Due to the strict elimination of the non-linear effect, the constraint capacity on the five common base parameters becomes weaker. Hence, for both the s-DBI and ΛCDM models, we fix these parameters according to their respective best-fit values in Table <ref>. However, for the s-DBI model, we allow the decay parameter a_d to have a prior within the interval [2,6], as the constraint capability of WL + GC on a_d is unknown. For the ΛCDM model, the low-redshift data still prefer a lower value of S_8 compared to the Planck2018 . 
The constraint yields (Ω_m, S_8) = (0.299_-0.0105^+0.011, 0.770_-0.035^+0.0371), which is consistent with the results from K1K-3×2pt but with a difference of about 0.6σ for Ω_m and 0.1σ for S_8. However, as shown in Fig. <ref>, the tension between low-redshift probes and CMB still persists. On the other hand, for the s-DBI model, the S_8 tension does not exist. As depicted in Fig. <ref>, the constraint on WL+GC data gives a value of (Ω_m, S_8) = (0.305_-0.0127^+0.0107, 0.766_-0.0376^+0.0471), which is highly consistent with our constraint using the Planck2018 . Note that the area of credibility interval is larger than that of ΛCDM due to the degeneration between Ω_s and a_d. In conclusion, our analysis reveals that the S_8 tension persists in the ΛCDM model when considering non-linear-free data. This suggests that modifying the non-linear model such as <cit.> or <cit.>, is unlikely to resolve the tension effectively. On the other hand, the s-DBI model, within the scope of the data sets we have considered, successfully alleviates the S_8 tension. Non-linear effect and outlook. –A key issue exists about whether small-scale structures such as dark matter halos can form under the s-DBI model. To answer this question, we note that s-DBI's non-relativistic approximation, Chaplygin gas is barotropic, for which we can introduce an effective potential h ≡ - ∫_ρ^∞dP(ρ')/ρ' = - 1/2Λ_II^2/ρ^2 to substitute the effect of pressure. We include this external potential in the N-body simulation software <cit.> by modifying the implementation of the PM algorithm. Setting the cosmological parameters Ω_m, Ω_vac and h as the best-fit values of the s-DBI model from Table <ref>, we carry out the simulation with 512^3 particles in a cube box with the length of the edge of 100Mpc. The parallel simulation for ΛCDM is also performed. The simulations reveal that in the s-DBI model, the dark matter halo can indeed form. Furthermore, we find that the differences between the s-DBI and ΛCDM models are tiny at redshifts z>1. However, as the redshift z approaches zero, the s-DBI model predicts a lower non-linear power spectrum compared to ΛCDM. The "bias" between the power spectra of the two models, defined as b_M ≡√(P_s-DBI/P_Λ CDM), is shown in Fig. <ref>. Note that the leading order bias between observed and simulated power spectrum, denoted as b_1 ≡√(P_gg/P_mm ), can range from about 1.4 to 3.5<cit.>. In comparison, the bias b_M ≈ 0.9 is close enough to unity, suggesting that our model can fit the observed galaxy power spectrum by minor regulation on b_1. Besides, In the s-DBI simulation, we found that some small dark matter halos are dissolved by external pressure, suggesting that our model may hold promise in addressing other inconsistencies related to cold dark matter, such as the presence of dark matter lacking galaxy<cit.>, cuspy halo<cit.> and dwarf galaxy missing problem<cit.>. However, a rigorous numerical analysis is necessary to fully investigate these issues. Moreover, a more comprehensive understanding of the non-linear effects is crucial for further constraints using various cosmological probes. Given the complexity of this topic, we leave it to future work. Xingpao Suo and Xi Kang acknowledge the support from the National Key Research and Development Program of China (No.2022YFA1602903), the NSFC (No. 11825303, 11861131006), the science research grants from the China Manned Space project with No. 
CMS-CSST-2021-A03, CMS-CSST-2021-A04, the Fundamental Research Funds for the Central Universities of China (226-2022-00216) and the start-up funding of Zhejiang University. Huanyuan Shan acknowledges the support from NSFC of China under grant 11973070, the Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7013, the Program of Shanghai Academic/Technology Research Leader, and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A01, CMS-CSST-2021-A04. We thank Joe Zuntz and Benjamin Stölzner for helpful discussions.
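As a closing illustration of the spectrum-comparison statistic χ^2_i,j and the linear-versus-non-linear data-vector check used earlier in this entry, the following Python sketch computes both quantities on synthetic placeholder arrays; the array contents are made up for illustration and are not the actual Planck or KiDS data vectors.

import numpy as np

def chi2(f_i, f_j):
    # chi^2_{i,j} = sum_k (f_k^(i) - f_k^(j))^2 / f_k^(j), as defined in the text.
    f_i, f_j = np.asarray(f_i, dtype=float), np.asarray(f_j, dtype=float)
    return np.sum((f_i - f_j) ** 2 / f_j)

def relative_distance(xi_linear, xi_nonlinear):
    # ||xi^L - xi^NL|| / ||xi^NL|| with the Euclidean norm.
    delta = np.asarray(xi_linear) - np.asarray(xi_nonlinear)
    return np.linalg.norm(delta) / np.linalg.norm(xi_nonlinear)

# Synthetic placeholder spectra (three binned TT-like curves).
ell = np.arange(2, 2000)
cl_obs = 1.0 / ell ** 2
cl_model_a = cl_obs * (1.0 + 1e-6)   # stand-in for one best-fit model
cl_model_b = cl_obs * (1.0 + 2e-6)   # stand-in for a second model

print(chi2(cl_obs, cl_model_a), chi2(cl_obs, cl_model_b), chi2(cl_model_b, cl_model_a))
print(relative_distance(cl_model_a, cl_model_b))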
http://arxiv.org/abs/2307.04445v1
20230710095314
Learning Behavioral Representations of Routines From Large-scale Unlabeled Wearable Time-series Data Streams using Hawkes Point Process
[ "Tiantian Feng", "Brandon M Booth", "Shrikanth Narayanan" ]
cs.LG
[ "cs.LG", "eess.SP" ]
University of Southern California Los Angeles CA USA [email protected] University of Colorado Boulder Boulder CO USA [email protected] University of Southern California Los Angeles CA USA [email protected] Continuously-worn wearable sensors enable researchers to collect copious amounts of rich bio-behavioral time series recordings of real-life activities of daily living, offering unprecedented opportunities to infer novel human behavior patterns during daily routines. Existing approaches to routine discovery through bio-behavioral data rely either on pre-defined notions of activities or use additional non-behavioral measurements as contexts, such as GPS location or localization within the home, presenting risks to user privacy. In this work, we propose a novel wearable time-series mining framework, Hawkes point process On Time series clusters for ROutine Discovery (HOT-ROD), for uncovering behavioral routines from completely unlabeled wearable recordings. We utilize a covariance-based method to generate time-series clusters and discover routines via the Hawkes point process learning algorithm. We empirically validate our approach for extracting routine behaviors using a completely unlabeled time-series collected continuously from over 100 individuals both in and outside of the workplace during a period of ten weeks. Furthermore, we demonstrate this approach intuitively captures daily transitional relationships between physical activity states without using prior knowledge. We also show that the learned behavioral patterns can assist in illuminating an individual's personality and affect. Learning Behavioral Representations of Routines From Large-scale Unlabeled Wearable Time-series Data Streams using Hawkes Point Process Shrikanth Narayanan ======================================================================================================================================= § INTRODUCTION Wearable sensors have garnered considerable interest in many fields, such as healthcare, user authentication, and entertainment, over the last two decades <cit.>. These non-obtrusive devices, which are often small in size and have efficient computational capabilities, can be extremely useful in capturing vital bio-metric and bio-behavioral data from individuals over a prolonged period in natural settings <cit.>. Such rich and vast amounts of multimodal time-series data collected directly from everyday life allow for a more comprehensive understanding of the factors affecting activities of daily living (ADLs) <cit.>, including but not limited to social interactions <cit.>, sleep patterns <cit.>, physical activities <cit.>, and even emotion variations <cit.>. Increasingly, the ability to recognize ADLs offers researchers opportunities to investigate broad human behavior patterns and infer common daily routines. Routine behavior is notably meaningful in quantifying what activity pattern people adopt and whether these patterns cause variations of psychological well-being and personality within groups of people <cit.>.
In this paper, we present a novel data processing approach, Hawkes point process On Time series clusters for ROutine Discovery (HOT-ROD) for learning routine patterns in biobehavioral time series from wearable sensors. Our proposed HOT-ROD pipeline includes data processing components ranging from aggregation, imputation, filtering, time-series clustering, and routine discovery. We show that the proposed routine features, comprised of temporally linked cluster transitions in multimodal wearable recordings, can assist in illuminating an individual's personality and affect as well as aspects of task performance such as job behaviors. The main contributions of this work are as follows: * We propose a novel combination of Toeplitz Inverse Covariance-Based Clustering (TICC) <cit.> and Hawkes point process <cit.> for discovering routine characteristics in long-term real-world wearable recordings. The approach can operate without expert knowledge of the data or the collection of sensitive contextual information. * Our learned routine patterns capture temporal relationships between adjacent time-series clusters, thus providing valuable insights into understanding the shared behavior patterns within a group of individuals. * Using naturalistic heart rate and step count features from over 100 individuals in the workplace and at home over a period of ten weeks, we show that our HOT-ROD approach combined with daily summaries of physical activity from wearable sensors helps identify job properties and personalities of participants. Furthermore, we empirically show that our approach can achieve modest improvement in predicting individual attributes from a few days of recordings. § RELATED WORKS Many conventional approaches in characterizing daily routines require the acquisition of labeled contexts, like trajectory in home settings <cit.>, life event sequences (eating, sleeping, etc.) <cit.>, and GPS locations <cit.>. The Kasteren data set <cit.> was collected in a 3-bedroom apartment setting for a period of 28 days using 14 state change sensors. The investigators then achieved a timeslice accuracy of 95.6% on this data set using a hidden Markov model and conditional random fields. Some following studies have successfully utilized a probabilistic neural network learning model to separate the normal routine from unusual and suspected routines <cit.>. Meanwhile, other researchers have studied human behavior by tracking the spatial properties of participants via GPS <cit.>. However, one primary concern about these studies is that the data collection protocol could invade privacy by tracking sensitive and identifiable knowledge about an individual, such as continuous GPS. These approaches might also be costly and not scalable due to the substantial amount of effort required from researchers in either setting up the recording system or annotating the data. To prevent the acquisition of personally identifiable information, there have been a number of studies in establishing machine learning models to infer a pre-selected set of activities (walk, stand, etc.) from unlabeled wearable time-series <cit.>, like motion and posture. However, these models are typically trained from data gathered in laboratory settings and may not yield good performances on in situ data sets. 
As an alternative, motif-based methods have obtained empirical success in detecting repeated patterns; however, they can be computationally prohibitive because the optimal granularity of the motif patterns must be searched over the whole time-series. Unlike motif-based data mining methods, one recent study proposed to learn routine behaviors via a sparse and low-rank matrix decomposition technique <cit.>. In this work, real-world physical activity data collected from Fitbit were used to cluster the behaviors of participants without expert knowledge or micro-pattern extraction. The main disadvantage of this approach, however, is the limited interpretability of the decomposed matrices returned from the sensor matrix operations. To our knowledge, our proposed causality-based pattern extraction from unlabeled wearable sensor recordings has not been previously considered. § DATASET INTRODUCTION In this study, we use the publicly available TILES-2018 dataset <cit.> for our experiments. This data set comprises a comprehensive study of how physiological and behavioral variables affect employee wellness, personality, and workplace stress. Throughout a ten-week period, physiological, environmental, and human interaction data were gathered from hospital employees who primarily provide patient care (nurses, technicians, etc.) at a large critical-care hospital. The complete dataset consists of 213 hospital workers, 120 of whom are female. In total, there were 113 (54.3%) registered nurses enrolled in the study, with the rest reporting some other job title, such as occupational or lab technicians. A total of 54 participants reported working the night shift and the rest worked during the day. More details regarding the dataset can be found in <cit.>. §.§ Study Procedure The participants in this study completed a survey during onboarding, consisting of a web-based series of questionnaires pulled from existing test batteries that assessed standard demographic information, personality, and affect variables. In this work, we primarily investigate how automatically discovered routine patterns correlate with job type, gender, shift type, personality, and affect. Each of the five personality factors (extraversion, agreeableness, conscientiousness, openness, neuroticism) is measured via the Big Five Inventory-2 survey <cit.>. Five-factor scores were computed by taking the average of all responses, where each factor score is in the range of 1-5. The Positive and Negative Affect Schedule (PANAS) was administered <cit.> to measure affect. The PANAS consists of 10 positive affect items and 10 negative affect items. Positive and negative affect scores were calculated by summing the individual scores from each group (positive and negative), with higher scores representing higher levels of the corresponding affect. §.§ Wearable Data In this study, researchers instructed participants to wear a Fitbit Charge 2 <cit.>, an OMsignal garment-based sensor <cit.>, and a customized audio badge <cit.>, which collectively track heart rate, physical activity, speech characteristics, and many other human-centric signals. Participants were asked to wear the OMsignal garment and audio badge only during their work shifts due to the battery limitations of these devices. However, participants were instructed to wear the Fitbit sensor as often as possible throughout the 10-week data collection period.
In the present study, we focus on the Fitbit time series data since it is present for most participants both during and outside of their working hours. This data stream offers information about energy expenditure, sleep quality, step count and heart rate measured through photoplethysmography (PPG). § METHOD In this section, we introduce our HOT-ROD analysis pipeline for discovering routine features from Fitbit time-series recordings. As shown in Fig. <ref>, our proposed HOT-ROD data pipeline consists of three major modules: 1. data pre-processing; 2. time-series clustering; 3. routine feature extraction via the Hawkes point process. To calculate routine features using the Hawkes point process, we first group the time-series by day. In this work, a day is defined as the variable period of time between sleep onsets as determined from the Fitbit daily summary. Sleep durations shorter than six hours (presumed to be naps) are ignored when determining these day boundaries in the time series. Some participants may sleep less than six hours regularly or may not wear their Fitbit devices to bed, which would result in measured days lasting upwards of 30 hours. To remove these outliers, we retain only days whose length is 20-28 hours, in line with the definition of a circadian cycle <cit.>. The pre-processing component includes data aggregation, data imputation, and data filtering. We first aggregate the multivariate time-series data at a fixed rate of one minute. The second step uses an Autoregressive integrated moving average (ARIMA) model to fill in the missing data in the aggregated output. We then utilize the Savitzky-Golay filter to smooth the imputed time-series data without substantially distorting the signal, following previous work <cit.>. Following the data pre-processing scheme, we cluster the pre-processed data stream using Toeplitz Inverse Covariance-Based Clustering (TICC) <cit.>. At the final stage, we extract routine features utilizing the Hawkes point process technique <cit.>, where cluster transitions serve as the process events. The details of each module are described below. §.§ Data Pre-processing Data Aggregation Fitbit Charge 2 sensors read PPG heart rate samples at intervals of less than one minute, but the time differences between two consecutive samples are inconsistent. Prior studies have suggested that PPG-based heart rate should be averaged over a one-minute duration to obtain a reliable measurement <cit.>, and we thus adopt this strategy. Another compelling reason to aggregate the PPG heart rate samples is to rate-match the data output of the step count samples, which are also made available every minute. Data Imputation Missing data in wearable sensor recordings are unfortunately unavoidable and are often encountered for various reasons including intermittent disconnections, body movements, and firmware malfunctions <cit.>. In this work, we select an Autoregressive integrated moving average (ARIMA) model to impute missing values <cit.>. We utilize ARIMA to populate missing values based on past observations. Missing points that occur in the first five data points of a time series are filled with mean values from the corresponding day. Prior literature <cit.> suggests that imputation works better when the proportion of missing data is small; thus we choose to fill only segments that are missing for at most 15 consecutive minutes. In a validation experiment, we masked 10%, 25%, and 50% of the data as contiguous gaps within fixed 60-minute windows for ARIMA to impute.
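Before turning to the choice of the 15-minute threshold below, here is a minimal Python sketch of the ARIMA gap-filling step just described; it uses statsmodels on a synthetic minute-level heart-rate series, fixes the ARIMA order by hand rather than selecting it by an information criterion, and is meant only as an illustration of the idea, not as the authors' exact implementation.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic minute-level heart-rate series with one short gap (placeholder data).
rng = np.random.default_rng(0)
hr = pd.Series(70 + 5 * np.sin(np.arange(240) / 30.0) + rng.normal(0, 1, 240))
hr.iloc[100:110] = np.nan  # a 10-minute gap, below the 15-minute threshold

def fill_short_gaps(series, order=(2, 1, 1), max_gap=15):
    # Forecast each missing stretch of at most max_gap minutes from the history before it.
    s = series.copy()
    i = 0
    while i < len(s):
        if pd.isna(s.iloc[i]):
            j = i
            while j < len(s) and pd.isna(s.iloc[j]):
                j += 1
            if j - i <= max_gap and i > 5:
                fit = ARIMA(s.iloc[:i].dropna(), order=order).fit()
                s.iloc[i:j] = np.asarray(fit.forecast(steps=j - i))
            i = j
        else:
            i += 1
    return s

print(fill_short_gaps(hr).iloc[98:112])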
We choose 15 minutes since we observe that ARIMA yields significantly higher mean absolute errors when the missing data rate is 50% (30 minutes) than for missing rates of 25% (15 minutes) and 10% (6 minutes). Data Filtering Wearable sensor recordings are vulnerable to noise and motion artifacts, and thus filtering becomes an essential prerequisite for further processing of the signal. We address this issue by applying the Savitzky-Golay filter <cit.>. This is a well-known smoothing approach that increases the precision of signals while simultaneously preserving the signal trend. The filter fits a polynomial of pre-selected fixed degree z to the time-series window 𝐒 of length 2m + 1 centered at i = 0, such that the squared error is minimized over the coefficients of the polynomial p_i = ∑_x=0^z a_x i^x: min_a_0:z∑_i = -m^m (p_i - s_i)^2 It is worth noting that there are many other attractive approaches available in the literature to pre-process the time-series data, but empirically testing each is beyond the scope of this work. §.§ Time-series Clustering Definitions We first introduce some notation and definitions used in this and later sections. The pre-processed time-series data is a set of observations (𝐬_1, 𝐬_2,..., 𝐬_n) ordered by time, where each 𝐬_i ∈ℝ^m is the i-th observation in time with m features. The sensor features in this case are PPG heart rate and step count, observed every minute, so m=2. We further create 𝐗_i, which consists of w consecutive observations (𝐬_i, 𝐬_i+1, ... , 𝐬_i+w-1). The aim of TICC, described next, is to partition these sequential time-series observations into K clusters. Toeplitz Inverse Covariance-Based Clustering (TICC) In this method, each cluster is characterized by a Toeplitz Gaussian inverse covariance Θ_k∈ℝ^mw× mw and an empirical mean μ_k. The Toeplitz Gaussian inverse covariance essentially captures the interdependencies between different observations. This clustering approach also enforces temporal consistency between consecutive vectors 𝐗_i and 𝐗_i+1 to find repeated long-range patterns in the data that represent particular behaviors. In summary, the TICC method assigns each frame to one Gaussian inverse covariance Θ_k by minimizing the following objective function: minimize_𝐏, Θ∑_k=1^K∑_𝐗_i∈𝐏_k (-𝑙𝑙(𝐗_i, Θ_k) + β·1_𝐗_i+1∉𝐏_k) where in Eq. <ref>, K is the number of clusters and 𝐏_k is the set of points assigned to cluster k. 1_𝐗_i+1∉𝐏_k is an indicator function equal to one when the next observation is assigned to a cluster different from that of the current one, and zero otherwise. β is the penalty parameter that controls the temporal consistency. A larger β encourages neighboring samples to be assigned to the same cluster. <cit.> also suggests that the performance of time-series clustering largely depends on the choice of β when the sample size is adequate. Finally, 𝑙𝑙(𝐗_i, Θ_k) represents the log-likelihood that 𝐗_i belongs to 𝐏_k, defined as: 𝑙𝑙(𝐗_i, Θ_k) = -1/2(𝐗_i - μ_k)^⊺Θ_k(𝐗_i - μ_k) + 1/2log det Θ_k - mw/2log(2π) where μ_k is the empirical mean of cluster k, and mw is the dimension of each stacked observation 𝐗_i. In our context, we choose this method for time-series clustering since our interest is to robustly identify long-range repeated patterns while contending with the possible presence of irrelevant data points.
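As a concrete illustration of the filtering and windowing just described, the sketch below applies a Savitzky-Golay filter with window length 2m+1 = 5 and cubic polynomials to a synthetic two-feature series and then stacks w consecutive observations into the vectors 𝐗_i that TICC clusters; the TICC solver itself (alternating cluster assignment and Toeplitz-constrained inverse-covariance estimation) is not reproduced here, and the data are placeholders.

import numpy as np
from scipy.signal import savgol_filter

# Placeholder pre-processed series: one row per minute, columns = [PPG heart rate, step count].
rng = np.random.default_rng(1)
n = 300
heart_rate = 75 + 8 * np.sin(np.arange(n) / 40.0) + rng.normal(0, 2, n)
steps = rng.poisson(20, n) * (np.sin(np.arange(n) / 60.0) > 0)
data = np.column_stack([heart_rate, steps]).astype(float)

# Savitzky-Golay smoothing: window 2m+1 = 5 (m = 2), cubic polynomial, applied per feature.
smoothed = savgol_filter(data, window_length=5, polyorder=3, axis=0)

# Stack w consecutive m-dimensional observations into X_i in R^(m*w) for TICC.
def stack_windows(x, w):
    return np.stack([x[i:i + w].reshape(-1) for i in range(len(x) - w + 1)])

X = stack_windows(smoothed, w=5)
print(X.shape)  # (n - w + 1, m * w)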
§.§ Hawkes Point Process The Hawkes process <cit.>, also known as a self-exciting counting process, is a stochastic point process for studying event sequence patterns in which historical event occurrences are assumed to increase the chance of arrival of new events. In general, a point process is a list of discrete events in time together with their associated type, {t_i, u_i}, with time t_i∈ [0, n] and event type u_i∈{1, …, U}, where n and U are the maximum time in a time-series and the total number of event types, respectively. Here, we let d_i denote the cluster assigned to the i-th time point. In this study, we define an event as a transition between different time-series clusters. For instance, given two consecutive time points {t_i, d_i} and {t_i+1, d_i+1} in the time-series of a day, we define an event as {t_i+1, (d_i→ d_i+1)}, such that d_i≠ d_i+1. In this setting, n is the total number of points in the time-series of a day. A multi-dimensional point process with U types of events can be equivalently represented by U counting processes: 𝒩_u = {𝒩_u(t)| t ∈ [0, n]}, where 𝒩_u(t) denotes the number of type-u events occurring before time t. By the definition of events in this work, there are K^2-K possible event types. Let the history ℋ^𝒰_t be the list of events up to time t: ℋ^𝒰_t = { (t_i, u_i) | t_i < t, u_i∈𝒰} Then, the expected instantaneous rate of type-u events given the history is: λ_u(t) dt = 𝔼[d𝒩_u(t) |ℋ^𝒰_t] We can then write the intensity function λ_u(t) of the multi-dimensional Hawkes process as: λ_u(t) = μ_u + ∑_i:t_i<tϕ_uu_i(t - t_i) where μ_u is the exogenous (base) intensity independent of the history. ϕ_uu'(t) is the impact function capturing the temporal influence of type-u' events on subsequent type-u events. We can further define ϕ_uu'(t) as: ϕ_uu'(t) = ∑_m=1^M a_uu'^m k_m(t) where k_m(t) is the m-th basis function and a_uu'^m represents the coefficient corresponding to k_m(t). We adopt Gaussian basis functions in this study. We say that type-u' events do not Granger-cause type-u events when the function λ_u(t) is independent of historical events of type u'. Finally, we can build a Granger causality graph G=(𝒰, ℰ) with the U event types as nodes and directed edges representing causation. In this study, the routine feature is naturally defined as the adjacency matrix 𝐀 of the Granger causality graph learned from the Hawkes process. Element 𝐀_uu' can be viewed as the infectivity (∫_0^∞ϕ_uu'(s) ds) of type-u' cluster transitions on type-u cluster transitions. A detailed description of the above definitions can be found in <cit.>. To discover the Granger causality graph from a Hawkes point process, we adopt the learning algorithm proposed by <cit.>. This algorithm learns the Granger causality graph robustly given a few training sequences. Summary Our proposed HOT-ROD pipeline aims to learn routine characteristics from wearable time-series without prior knowledge or the acquisition of labeled event sequences. The time series data used in this study contain continuous measurements of physiological response and physical activity. The proposed learned routine features capture patterns in how people advance from one state to another in everyday life. § RESULT In this section, we validate the effectiveness of the proposed HOT-ROD approach for predicting demographics, personality, and affect with Fitbit time series data.
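Before moving on to the experimental setup, the sketch below makes the event construction of the previous subsection concrete: it turns a per-minute cluster-label sequence into transition events {t_i+1, (d_i → d_i+1)} and evaluates a Hawkes intensity λ_u(t) with Gaussian basis functions. The base rates, coefficients and basis centres are made-up placeholders; in the paper these quantities are learned with the cited algorithm rather than set by hand.

import numpy as np

def transition_events(labels):
    # Return (time, event_type) pairs for every change of cluster label, plus the type mapping.
    events, types = [], {}
    for t in range(1, len(labels)):
        a, b = labels[t - 1], labels[t]
        if a != b:
            u = types.setdefault((a, b), len(types))
            events.append((t, u))
    return events, types

def hawkes_intensity(t, u, events, mu, coeffs, centers, width=10.0):
    # lambda_u(t) = mu_u + sum_{t_i < t} sum_m a^m_{u, u_i} k_m(t - t_i), with Gaussian basis k_m.
    lam = mu[u]
    for t_i, u_i in events:
        if t_i < t:
            dt = t - t_i
            lam += np.sum(coeffs[u, u_i] * np.exp(-((dt - centers) ** 2) / (2.0 * width ** 2)))
    return lam

labels = [0, 0, 1, 1, 1, 2, 2, 0, 0, 1]             # toy per-minute cluster assignments
events, types = transition_events(labels)
U, M = len(types), 3                                 # number of event types and basis functions
mu = np.full(U, 0.01)                                # placeholder base intensities
coeffs = np.abs(np.random.default_rng(2).normal(0.05, 0.02, size=(U, U, M)))
centers = np.array([5.0, 20.0, 60.0])                # placeholder basis centres (minutes)
print(events)
print(hawkes_intensity(9.5, 0, events, mu, coeffs, centers))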
§.§ Experimental Setup §.§.§ Daily Summary Routine Feature We use Fitbit daily summary data to construct our baseline model. The Fitbit daily summary data is extracted using the API provided by Fitbit. The measurements we select from the Fitbit daily summary report include sleep duration, sleep efficiency, step counts, resting heart rate, and heart rate zone duration. Here, Fitbit categorizes heart rate into 4 zones: * Out of Zone (heart rate is below 50% of its maximum). * Fat-burn Zone (heart rate is 51% to 69% of its maximum). * Cardio Zone (heart rate is 70% to 84% of its maximum). * Peak Zone (heart rate is above 85% of its maximum). A set of 5 statistical functionals (e.g., max, standard deviation) is then applied to the Fitbit summary features. §.§.§ HOT-ROD Routine Feature We compute the HOT-ROD routine features from the Fitbit Charge 2 time-series that measure PPG heart rate and step count. Routine behaviors may differ between workdays and off-days. Thus we separately learn the routine features of workdays and off-days. We also observe that the amount of data available for each participant varies (min: 2 days, max: 70 days), which can lead to biased results without proper data selection. Hence, we choose to randomly select an equal number of days of data from the workdays and off-days of a participant for our analysis. In our study, we experiment with picking n=5 days of each type. We choose 5 days since it ensures a reasonable amount of data per participant while also retaining a sufficient number of qualified participants in the analysis. In the end, 101 participants are retained in this experiment. Following Section <ref>, we first aggregate PPG heart rate and step count every minute for each day of data. We impute the missing data in each aggregated time-series using the ARIMA model, where we estimate the number of time lags, the degree of differencing, and the order of the moving average according to the Akaike information criterion. Segments that are missing for more than 15 consecutive minutes are filled with large negative placeholder values. We then filter the imputed time series using an S-G filter. To minimize distortion of the signal, we choose a small window size m = 2 and cubic polynomials in the S-G filter. The window size parameter could be tuned systematically based on criteria and heuristics defined in <cit.>, but we leave this endeavor for future work. Prior to time-series clustering, we z-normalize each time-series to remove variance between participants. We empirically experiment with the number of clusters K ∈{3, 4, 5} in TICC. We set β to 10 in this experiment to encourage neighboring samples to be assigned to the same cluster. We want to highlight that we plan to tune β systematically in future work. Since some input time series may still contain missing portions (segments missing for more than 15 minutes), one output cluster is associated with "missing measurements". We ignore this cluster in the subsequent analysis. Finally, we apply the Hawkes point process to learn the infectivity between cluster transitions. §.§.§ Model Description We observe that the distributions of ground truth assessments exhibit significant skew, posing a difficulty for most supervised learning algorithms, as they will be biased towards the majority group, leading to poor predictions on minority labels. Therefore, we choose to binarize the ground truth labels as our final prediction target.
We binarize personality and affect scores by a median split, while we categorize job type and work shift as nurse/non-nurse and day-shift/night-shift, respectively. We then evaluate the efficacy of the Fitbit summary routine features and the HOT-ROD routine features extracted above by predicting the binarized ground-truth labels using a Random Forest (RF) classifier. We select Random Forest models since they have considerable advantages over other techniques with respect to robustness to noise, tuning simplicity, and the ability to choose the most relevant features from high-dimensional data input, where many features are often redundant <cit.>. Specifically, we perform the predictions using three sets of features: 1. Fitbit summary routine features; 2. HOT-ROD routine features; 3. Fitbit summary routine features and HOT-ROD routine features combined. We perform 5-fold cross-validation and report the average macro-F1 score. We grid-search the hyper-parameters of the RF model as follows: 1. Number of estimators: [10, 20, 30]; 2. Feature selection criterion: ["gini", "entropy"]; 3. Max depth of the tree: [4, 5, 6]; 4. Minimum samples to split: [2, 3, 5]. §.§ Prediction Results The experimental results for predicting the IGTB assessments (personality and affect) and demographic information (job type, shift) are listed in Table <ref> and Table <ref>, respectively. For predicting demographics, HOT-ROD features combined with Fitbit summary routine features achieve the best performance, with a better F1 score in predicting work shift than using either the Fitbit summary features or the HOT-ROD routine features alone. We also observe that HOT-ROD reaches the best performance in forecasting neuroticism when the assigned number of clusters is 4. HOT-ROD features can also outperform the Fitbit summary and the combined feature set in classifying conscientiousness and openness when the number of clusters is 3. Combining the feature sets achieves the best performance in determining extraversion, agreeableness, and affect-related variables. § DISCUSSIONS The results in Table <ref> and Table <ref> demonstrate foremost that personality and affect prediction from this data (collected in a natural setting outside of a well-controlled lab) is considerably challenging using standard physiological features and simple machine learning techniques, since none of the validation scores reaches above 70% even when the label is binarized. They also suggest that routine patterns, derived from wearable recordings by our HOT-ROD analysis, modestly improve performance. The prediction results demonstrate that the HOT-ROD features work comparatively reliably when the number of clusters is below 4. This may be because the available points for each cluster transition in a day decrease dramatically when the number of clusters increases, leading to an imprecise estimation of the Granger graph from the time-series events. We further identify that HOT-ROD features yield substantially better performance in predicting conscientiousness, which is known to measure self-discipline. Moreover, HOT-ROD routine features combined with Fitbit summary routines are better predictors of job type and shift type. These prediction results demonstrate that the HOT-ROD routine features align quite well with job type and self-discipline, which closely relate to human behavior. Finally, we observe that the HOT-ROD features produce a noticeably better F1-score in classifying positive affect.
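Returning to the evaluation protocol described at the beginning of this section (median-split binarization, Random Forest with the stated hyper-parameter grid, 5-fold cross-validation, macro-F1), a minimal scikit-learn sketch is given below; the feature matrix and labels are random placeholders standing in for the Fitbit summary and HOT-ROD routine features.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(101, 30))               # placeholder: 101 participants, 30 routine features
score = rng.normal(size=101)                 # placeholder: e.g. a conscientiousness score
y = (score > np.median(score)).astype(int)   # median-split binarization

param_grid = {
    "n_estimators": [10, 20, 30],
    "criterion": ["gini", "entropy"],
    "max_depth": [4, 5, 6],
    "min_samples_split": [2, 3, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, scoring="f1_macro", cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("mean macro-F1 over 5 folds:", round(search.best_score_, 3))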
Although the HOT-ROD pipeline can capture routine features that predict human behaviors using only a few days of data, the prediction results depend on the number of clusters assigned in TICC. This draws attention to the fact that more data are needed to generate a reliable estimate of routine behavior. To better understand the learned routine features in the case where the number of clusters is K = 4, we further infer an interpretation for each cluster as follows: 1. Rest activity; 2. Light activity; 3. Moderate activity or exercise; 4. Missing measurements. Fig. <ref> displays the infectivity matrix of the cluster transitions for the high- and low-conscientiousness groups. There are 13 noticeable causality relations between cluster transitions, while none of the cluster-transition events have obvious self-triggering patterns. This implies that most events do not have a periodic daily behavior. Both groups behave similarly on workdays, while Light activity → Rest transitions are more likely to be triggered by Rest → Light activity transitions in the low-conscientiousness population on off-days. Additionally, Rest → Moderate activity transitions also tend to impact Moderate activity → Light activity transitions in the low-conscientiousness population on off-days. These causal relations indicate that the low-conscientiousness population tends to be more sedentary and less active on non-working days. This finding is consistent with prior work <cit.> and <cit.>. Consistent with the feature importances returned by the random forest model, we also observe that these learned causal relations offer important information in predicting conscientiousness. Similarly, we observe that Rest → Moderate activity transitions are more likely to be triggered by Moderate activity → Rest transitions on off-days in the group with high positive affect scores. § CONCLUSION We propose a technique, HOT-ROD, for discovering routine patterns in wearable sensor time-series data utilizing the Granger causality graph extracted from a time-series of cluster-transition events. Using a data set of over 100 participants working in a hospital environment for ten weeks, we show that this data-driven technique intuitively captures transitional behaviors between activity states in a manner consistent with personality, without using any prior knowledge. We have also shown that routine features extracted with HOT-ROD, combined with routine features derived from the Fitbit daily summary information, modestly improve the performance in predicting job type, work shift, extraversion, agreeableness, and affect variables compared with using a single set of features. As the next step, we want to examine how the switching penalty β impacts the learned routine behaviors. Moreover, we believe the performance of the proposed technique will further increase when more data are available, but we also hope to systematically study the amount of data required to learn robust routine behavior patterns using our approach. In addition, we believe this technique is simple enough to generalize to other data sets, possibly more broadly than wearable sensor readings. Ultimately, we plan to compare our method with other popular time-series approaches based on time-series clustering <cit.>, motif finding <cit.>, and deep representation learning <cit.>.
§ ACKNOWLEDGEMENT This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2017-17042800005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
http://arxiv.org/abs/2307.05952v1
20230712064809
Sparse factor models of high dimension
[ "Benjamin Poignard", "Yoshikazu Terada" ]
math.ST
[ "math.ST", "stat.TH", "Primary: 62H25, 62F12, Secondary: 62J07" ]
http://arxiv.org/abs/2307.04198v2
20230709150720
Compact monotone tall complexity one $T$-spaces
[ "Isabelle Charton", "Silvia Sabatini", "Daniele Sepe" ]
math.SG
[ "math.SG" ]
In this paper we study compact monotone tall complexity one T-spaces. We use the classification of Karshon and Tolman, and the monotone condition, to prove that any two such spaces are isomorphic if and only if they have equal Duistermaat-Heckman measures. Moreover, we show that the moment polytope is Delzant and reflexive, and provide a complete description of the possible Duistermaat-Heckman measures. Whence we obtain a finiteness result that is analogous to that for compact monotone symplectic toric manifolds. Furthermore, we show that any such T-action can be extended to a toric (T × S^1)-action. Motivated by a conjecture of Fine and Panov, we prove that any compact monotone tall complexity one T-space is equivariantly symplectomorphic to a Fano manifold endowed with a suitable symplectic form and a complexity one T-action. § INTRODUCTION Fano manifolds play an important role in complex algebraic geometry and beyond. A compact complex manifold is Fano if its anticanonical bundle is ample. Any such manifold is simply connected (see <cit.> and <cit.>), and its Todd genus equals one (see <cit.> for a definition). Moreover, in any complex dimension there are finitely many topological types of Fano manifolds (see <cit.>). The Fano condition can be reformulated in Kähler terms: A compact complex manifold (Y,J) is Fano if and only if there exists a Kähler form ω∈Ω^1,1(Y) such that c_1 = [ω], where c_1 is the first Chern class of (Y,J) – see, for instance, <cit.>. This motivates the following definition[In the literature there are slight variations on this definition and sometimes these manifolds are also called symplectic Fano (see <cit.>).]: A symplectic manifold (M,ω) is (positive) monotone if there exists (a positive) λ∈ such that c_1 = λ [ω], where c_1 is the first Chern class of (M,J) and J is any almost complex structure that is compatible with ω. If (M,ω) is positive monotone, then ω can be rescaled so as to be equal to c_1 in cohomology. A driving question in symplectic topology is to determine whether every compact positive monotone symplectic manifold is diffeomorphic to a Fano manifold. The answer is affirmative in (real) dimension up to four by work of McDuff (see <cit.>), and is negative starting from dimension twelve by work of Fine and Panov (see <cit.>). To the best of our knowledge, this is an open problem in the remaining dimensions. Motivated by a conjecture due to Fine and Panov (see <cit.>), which is supported by recent results (see <cit.>), we study the above question in the presence of a Hamiltonian torus action: An action of a compact torus T on a symplectic manifold (M,ω) by symplectomorphisms that is codified by a smooth T-invariant map Φ : M →^*, called moment map (see Section <ref> for details). If the action is effective and M is connected, the triple (M,ω, Φ) is called a Hamiltonian T-space. We remark that a monotone Hamiltonian T-space is necessarily positive monotone (see Proposition <ref>). The following is the long-term question behind this paper. Find necessary and sufficient conditions for a compact monotone Hamiltonian T-space to be diffeomorphic to a Fano variety. A starting point to attack Problem <ref> is to consider `large' torus symmetries.
This is codified precisely by the complexity of a Hamiltonian T-space (M,ω,Φ), which is the non-negative integer k:=1/2 M - T. Intuitively, the lower the complexity, the larger the symmetry. A Hamiltonian T-space of complexity k is called a complexity k T-space. Problem <ref> has been already solved in complexity zero, i.e., for compact monotone symplectic toric manifolds. To recall the solution, we say that two Hamiltonian T-spaces are isomorphic if they are symplectomorphic so that the moment maps are intertwined (see Definition <ref> for a precise statement). Let (M,ω, Φ) be a compact monotone symplectic toric manifold. By Delzant's classification (see <cit.>), the isomorphism class of (M,ω, Φ) is determined by the moment polytope Φ(M) ⊂^*, which is Delzant (see Section <ref>). Moreover, if without loss of generality we assume that c_1 = [ω], then, up to translation, Φ(M) is also reflexive (see Definition <ref>). Reflexive polytopes were introduced by Batyrev in <cit.> in the study of toric Fano varieties and, like Fano manifolds, enjoy special properties. For instance, if ℓ⊂ is the standard lattice, then there are finitely many reflexive polytopes of full dimension in ^* up to the standard action of GL(ℓ^*) – see Corollary <ref>. The combination of Delzant's classification and this result above yields finiteness for compact monotone symplectic toric manifolds up to the following notion of equivalence: Two Hamiltonian T-spaces (M_1,ω_1,Φ_1) and (M_2,ω_2,Φ_2) are equivalent if there exists a symplectomorphism Ψ : (M_1,ω_1) → (M_2,ω_2) and an affine transformation a ∈GL(ℓ^*) ⋉𝔱^* of 𝔱^* such that Φ_2 ∘Ψ = a ∘Φ_1. More precisely, the following holds. For each n∈_≥ 0, there are finitely many equivalence classes of compact symplectic toric manifolds of dimension 2n with first Chern class equal to the class of the symplectic form. Moreover, by Delzant's classification and the Kähler description of the Fano condition, the following result solves Problem <ref> in complexity zero. If is a compact monotone symplectic toric manifold, then there exists an integrable complex structure J on M that is compatible with ω and invariant under the torus action such that the Kähler manifold (M,J) is Fano. In fact, Theorem <ref> proves the stronger result that any compact monotone symplectic toric manifold is symplectomorphic to a Fano manifold (endowed with a suitable symplectic form). §.§ The results In this paper we solve Problem <ref> for tall complexity one T-spaces, i.e., those for which no reduced space is a point (see Definition <ref>). Such spaces have been classified by Karshon and Tolman in a series of papers (see <cit.>). This classification is more involved than that of compact symplectic toric manifolds: For instance, there are several invariants, namely the moment polytope, the genus, the painting and the Duistermaat-Heckman measure (see Section <ref> for more details), and these invariants satisfy some compatibility conditions (see <cit.>). In order to attack Problem <ref> in the above setting, first we study the isomorphism classes of compact monotone tall complexity one T-spaces. Our first main result states that, for these spaces, the Duistermaat-Heckman measure determines all other invariants. For our purposes, we codify this measure by the unique continuous function that represents its Radon-Nikodym derivative with respect to the Lebesgue measure on ^*, which we call the Duistermaat-Heckman function (see Theorem <ref> and Definition <ref>). 
Two compact monotone tall complexity one T-spaces are isomorphic if and only if their Duistermaat-Heckman functions are equal. Our second main result is finiteness of compact monotone tall complexity one T-spaces up to equivalence, which is the analog of Theorem <ref> . To this end, we observe that the moment polytope of a space with c_1 = λ [ω] is a reflexive Delzant polytope if and only if c_1 = [ω] and the moment map Φ satisfies the so-called weight sum formula (see Proposition <ref> and Lemma <ref>). If is a compact monotone tall complexity one T-space, then we may rescale ω and translate Φ so as to satisfy the above conditions (see Corollary <ref> and Proposition <ref>). The next result is a crucial step towards establishing finiteness. Given a reflexive Delzant polytope Δ, there exist finitely many isomorphism classes of compact monotone tall complexity one T-spaces with Φ(M) = Δ. The following result is a simple consequence of Theorem <ref> and answers a question originally posed to us by Yael Karshon. For each n∈_≥ 0, there are finitely many equivalence classes of compact tall complexity one T-spaces of dimension 2n with first Chern class equal to the class of the symplectic form. Our third main result concerns the extendability of a tall complexity one T-action on a compact monotone symplectic manifold (M,ω) to a toric (T × S^1)-action. To the best of our knowledge, there is no criterion to ensure such extendability for compact tall complexity one T-spaces of dimension at least six. If is a compact monotone tall complexity one T-space , then the Hamiltonian T-action extends to a symplectic toric (T × S^1)-action. Finally, our last main result is a solution to Problem <ref> for tall complexity one T-spaces. We recall that, given a compact torus T, there exists a unique complex Lie group T_ such that the Lie algebra of T_ is the complexification of and T is a maximal compact subgroup of T_ (see <cit.>). For instance, if T = (S^1)^d, then T_ = (^*)^d. The following result is concerned with the existence of an integrable complex structure. If is a compact monotone tall complexity one T-space, then there exists a T-invariant integrable complex structure J on M that is compatible with ω such that the complex manifold (M,J) is Fano and the T-action extends to an effective holomorphic T_-action. As an immediate consequence of Theorem <ref>, the following result solves a stronger version of Problem <ref> for compact tall complexity one T-spaces. Any compact monotone tall complexity one T-space is equivariantly symplectomorphic to a Fano manifold endowed with a suitable symplectic form and a complexity one T-action. §.§ Structure of the paper In Section <ref> we recall fundamental properties of (compact) Hamiltonian T-spaces. While many of the notions and results presented therein are well-known to experts, we also set the terminology and notation used throughout. Some basic properties of Hamiltonian T-spaces are considered in Section <ref>, which describes in detail the local models near any orbit, and introduces the notion of exceptional and regular points and orbits. These are a generalization of the corresponding concepts introduced by Karshon and Tolman in complexity one, which play a key role in their classification. Moreover, we also discuss the notion of exceptional and regular sheets, which are closely related to the notion of x-ray (see <cit.>). 
In Section <ref> we take a closer look at the invariants of compact Hamiltonian T-spaces, starting with the so-called Convexity Package and its consequences. A large part of Section <ref> is taken up by the existence of the Duistermaat-Heckman function of a compact Hamiltonian T-space (see Theorem <ref> and Definition <ref>). We could not find an appropriate reference for this result and, hence, included the material for completeness. We focus on the complexity one case, showing that in this case there is a polytope in ^* × that encodes the Duistermaat-Heckman function (see Corollary <ref>). In Section <ref> we introduce compact complexity preserving Hamiltonian T-spaces, a class of spaces that generalizes simultaneously compact symplectic toric manifolds and compact tall complexity one T-spaces. For instance, we prove that their moment polytopes are Delzant polytopes (see Proposition <ref>), and their Duistermaat-Heckman functions enjoy natural properties (see Corollary <ref>). These spaces may be of independent interest. Finally, we review the classification of compact tall complexity one T-spaces due to Karshon and Tolman in Section <ref>. In Section <ref> we prove some properties of Hamiltonian T-actions on compact monotone symplectic manifolds. We show that the presence of a Hamiltonian T-action forces monotonicity to be positive (see Proposition <ref>). Hence, the symplectic form of any compact monotone Hamiltonian T-space can be rescaled so that it is equal to c_1 in cohomology (see Corollary <ref>). Moreover, we show that the moment map of any compact Hamiltonian T-space with c_1 = [ω] can be translated to satisfy the so-called weight sum formula (see Proposition <ref>). Hence, in order to study compact monotone Hamiltonian T-spaces, it suffices to consider those that satisfy both aforementioned conditions, which we call normalized monotone. That is the content of Section <ref>. In Section <ref>, we characterize normalized monotone complexity preserving Hamiltonian T-spaces as being precisely those that are compact monotone and have reflexive Delzant polytopes as moment map images (see Proposition <ref> and Lemma <ref>). Section <ref> is the technical heart of the paper and is where we prove our first main result, Theorem <ref>. In Section <ref>, we recall that the genus of a compact monotone tall complexity one T-space is zero, a result proved in <cit.>. Moreover, we show that there is a facet of the moment polytope along which the Duistermaat-Heckman function is constant and equal to the minimal value (see Proposition <ref>). Such a facet, which we call minimal, plays an important role in the arguments of Sections <ref> and <ref>. In Section <ref>, we characterize isolated fixed points in normalized monotone tall complexity one T-spaces (see Proposition <ref>). This is another fundamental result that we use extensively in Sections <ref> and <ref>. The space of exceptional orbits and the painting of a normalized monotone tall complexity one T-space are studied in Section <ref>. We show that the isotropy data associated to the former can be reconstructed by `looking at the moment polytope' (see Remark <ref> for a precise statement). Moreover, we prove that the painting of a normalized monotone tall complexity one T-space is trivial (see Definition <ref> and Theorem <ref>). In Section <ref>, we provide an explicit formula for the Duistermaat-Heckman function of a normalized monotone tall complexity one T-space (see Theorem <ref>). 
It is given in terms of the number of connected components of the space of exceptional orbits and an integer that can be associated to the preimage of any vertex that lies on a given minimal facet (see Lemma <ref>). Finally, in Section <ref> we prove Theorem <ref> by bringing the above results together. In Section <ref> we prove all our remaining main results. Section <ref> addresses the question of finding necessary conditions for a function to be the Duistermaat-Heckman function of a normalized monotone tall complexity one T-space (see Proposition <ref>). This allows us to prove the finiteness result, namely Theorem <ref> and Corollary <ref>. In Section <ref>, we prove that the aforementioned necessary conditions are, in fact, sufficient (see Theorem <ref> and Corollary <ref>). Our method to prove this result leads us naturally to obtain the extendability result, namely Theorem <ref>. Finally, in Section <ref>, we prove Theorem <ref>. §.§ Acknowledgments We would like to thank Yael Karshon for posing the question of finiteness. The authors were partially supported by SFB-TRR 191 grant Symplectic Structures in Geometry, Algebra and Dynamics funded by the Deutsche Forschungsgemeinschaft. D.S. was partially supported by FAPERJ grant JCNE E-26/202.913/2019 and by a CAPES/Alexander von Humboldt Fellowship for Experienced Researchers 88881.512955/2020-01. D. S. would like to thank Universität zu Köln for the kind hospitality during a long stay. This study was financed in part by the Coordenao de Aperfeioamento de Pessoal de Nvel Superior – Brazil (CAPES) – Finance code 001. I.C. would like to thank Instituto de Matemtica Pura e Aplicada (IMPA), Rio de Janeiro, for the support of a stay in Brazil. §.§ Conventions §.§.§ Tori Throughout the paper, we identify the integral lattice of S^1 ⊂ with . This means that the exponential map exp : Lie(S^1) = i→ S^1 is given by exp(ix) = e^2π i x, where z ↦ e^z is the standard complex exponential function. Moreover, we often identify Lie(S^1) and its dual with tacitly; we trust that this does not cause confusion. In this paper, T is a compact torus of dimension d with Lie algebra . We denote its integral lattice by ℓ, namely ℓ=(exp→ T), and the dual of ℓ by ℓ^*. Moreover, we denote by ⟨·, ·⟩ the standard pairing between ^* and . Finally, we fix an inner product on once and for all. §.§.§ Convex polytopes Let be a real vector space of dimension d with full-rank lattice ℓ⊂. A (convex) polytope Δ in ^* is a subset that satisfies either of the following two equivalent conditions: * Δ is the convex hull of a finite set of points, or * Δ is the bounded intersection of a finite set of (closed) half-spaces of ^*, (see <cit.>). Throughout the paper, we assume that a polytope Δ has dimension equal to that of ^*. We often write Δ in its minimal representation, i.e., Δ=⋂_i=1^l {w∈^* |⟨ w,ν_i ⟩≥ c_i} where ν_i ∈ is the inward normal, c_i ∈, and the affine hyperplane {w∈^* |⟨ w,ν_i ⟩ = c_i} supports a facet of Δ for i=1,…, l. A finite non-empty intersection of facets of Δ is a face of Δ. For convenience, we think of Δ also as a face. The dimension of a face ℱ of Δ is the dimension of the affine span of ℱ in ^*. Faces that are 1-dimensional are called edges, while 0-dimensional faces are vertices. § (COMPACT) HAMILTONIAN T-SPACES AND THEIR INVARIANTS §.§ Definition and first properties Let (M,ω) be a symplectic manifold of dimension 2n. 
A smooth T-action ψ T × M → M is Hamiltonian if it admits a moment map, i.e., a smooth T-invariant map Φ M →^* that satisfies d ⟨Φ, ξ⟩ = -ι_ξ^#ω for all ξ∈, where ξ^#∈𝔛(M) denotes the vector field associated to ξ. In this case, the diffeomorphism ψ(t, ·) M → M is a symplectomorphism for each t ∈ T, i.e., it preserves ω. For brevity we denote ψ(t, p) by t· p. * A (compact) Hamiltonian T-space is a (compact) connected symplectic manifold (M,ω) endowed with an effective Hamiltonian T-action and a moment map Φ M →^*. We denote such a space by (M,ω,Φ). * Two Hamiltonian T-spaces (M_1,ω_1,Φ_1) and (M_2,ω_2,Φ_2) are isomorphic if there exists a symplectomorphism Ψ: (M_1,ω_1) → (M_2,ω_2) such that Φ_2 ∘Ψ = Φ_1. Since T is connected, an isomorphism of Hamiltonian T-spaces is necessarily a T-equivariant diffeomorphism. Definition <ref> includes T = 0 (this is used, for instance, in Theorem <ref>). In this case, a Hamiltonian T-space is simply a symplectic manifold and an isomorphism is simply a symplectomorphism. §.§.§ Orbital moment map and reduced spaces We endow the quotient space M/T with the quotient topology. Since Φ is T-invariant, it descends to a continuous map Φ̅ : M/T →^* that is called the orbital moment map. If Ψ is an isomorphism between (M_1,ω_1,Φ_1) and (M_2,ω_2,Φ_2) , then there is a homeomorphism Ψ̅ : M_1/T → M_2/T such that Φ̅_1 = Φ̅_2 ∘Ψ̅. The fibers of Φ̅ can be canonically identified with the quotient of the fibers of Φ by the T-action. These are known as reduced spaces. If α∈Φ(M) is a regular value of Φ, then the reduced space at α, Φ^-1(α)/T, is an orbifold that inherits a symplectic form ω_red (see <cit.> and <cit.>). §.§.§ Complexity of a Hamiltonian T-space Since the T-action on M is effective and since orbits are isotropic submanifolds of (M,ω), we have that d≤ n. The difference n-d is a simple, but important invariant of . The complexity of a Hamiltonian T-space is k:=1/2 M - T. Complexity zero Hamiltonian T-spaces are symplectic toric manifolds. Throughout the paper, we refer to torus actions of complexity zero as toric. Intuitively, the complexity of a Hamiltonian T-space is half of the dimension of a reduced space at a regular value. §.§.§ Local model and local normal form Given p ∈ M, its stabilizer is the closed subgroup H:={t ∈ T | t · p = p}. We set h:= H. Since T is abelian, any two points on the same orbit have equal stabilizers. Hence, the stabilizer of an orbit is well-defined. If 𝒪 denotes the T-orbit containing p, the infinitesimal symplectic linear action of H on (T_pM,ω_p) fixes T_p 𝒪. Thus there is a symplectic linear action of H on the quotient vector space (T_p 𝒪)^ω/T_p𝒪 endowed with the quotient linear symplectic structure. We call the underlying Lie group homomorphism the symplectic slice representation of p. The symplectic slice representations of two points lying on the same orbit are naturally isomorphic. Hence, the symplectic slice representation of an orbit is well-defined. This allows us to `decorate' the quotient space M/T by attaching the symplectic slice representation to every orbit (this data includes the stabilizer of the orbit). Let Ψ be an isomorphism between (M_1,ω_1,Φ_1) and (M_2,ω_2,Φ_2) and let Ψ̅ : M_1/T → M_2/T be the homeomorphism given by Remark <ref>. For any p ∈ M_1, Ψ̅([p]) and [p] have equal stabilizers and symplectic slice representations. Fix a T-invariant almost complex structure on (M,ω); the existence of such a structure is proved in <cit.>. 
We observe that (T_p 𝒪)^ω/T_p𝒪 has real dimension 2(h+n-d)=2(h+k), where k is the complexity of . Hence, we use the above almost complex structure to identify (T_p 𝒪)^ω/T_p𝒪 with ^h+k endowed with the standard symplectic form i/2 ∑_j=1^h+k dz_j ∧ dz_j. Moreover, under this identification, the linear H-action is by unitary transformations. Let ρ : H → U(^h+k) be the associated homomorphism of Lie groups. Since H is abelian, ρ(H) is contained in a maximal torus of U(^h+k). We denote the maximal torus of U(^h+k) consisting of diagonal transformations by (S^1)^h+k. Hence, we may assume that ρ factors through a Lie group homomorphism H → (S^1)^h+k that we also denote by ρ by a slight abuse of notation. We write ρ_j for the jth component of ρ, where j = 1,…, h+k. Let d_eρ_j denote the derivative at the identity of ρ_j and set d_eρ_1:=2π i α_1,…, d_eρ_h+k:=2π i α_h+k. Hence, α_j ∈ℓ^*_𝔥 for all j=1,…, h+k, where 𝔥 is the Lie algebra of H and ℓ_𝔥 denotes the integral lattice in 𝔥. We call α_1,…, α_h+k the isotropy weights of p (for the H-action). The multiset of isotropy weights of p does not depend on the choice of T-invariant almost complex structure on (M,ω). Moreover, this multiset encodes the action of the identity component of H on ^h+k. Explicitly, exp(ξ)· (z_1,…,z_h+k)=(e^ 2π i ⟨α_1,ξ⟩z_1,…, e^ 2π i ⟨α_h+k,ξ⟩z_h+k) for every ξ∈𝔥 . In particular, if H is connected, then the multiset of isotropy weights of p determine the symplectic slice representation up to unitary isomorphisms. Finally, by Remark <ref>, the multisets of isotropy weights of two points lying on the same orbit are equal. From the stabilizer H ≤ T of p and the Lie group homomorphism ρ : H → (S^1)^h+k, we construct a symplectic manifold together with a Hamiltonian T-action and a moment map. This is the local model for a T-invariant neighborhood of 𝒪 in . We do this in two equivalent ways, seeing as one is more convenient for proofs and the other is more convenient for calculations. §.§.§ The abstract construction Let Ω denote the symplectic form on T^*T ×^h+k given by taking the sum of the pullbacks of the canonical symplectic form on T^*T and the standard symplectic form on ^h+k (see equation (<ref>)). Let H act (on the right) on T^*T ×^h+k as follows: On T^*T it acts by the cotangent lift of (right) multiplication, while on ^h+k it acts by z · h := ρ(h^-1)(z), for h ∈ H and z ∈^h+k. By construction, the H-action on (T^*T ×^h+k,Ω) is Hamiltonian. Let Φ̂ : T^*T ×^h+k→𝔥^* be the moment map that sends (0,0) ∈ T^*T ×^h+k to the origin in 𝔥^*. Since this H-action is free and proper, the quotient (T^*T ×^h+k) / /H:= Φ̂^-1(0)/H is a smooth manifold that inherits a symplectic form ω_red (see <cit.>). Let T act (on the left) on T^*T ×^h+k as follows: On T^*T it acts by the cotangent lift of (left) multiplication, while on ^h+k it acts trivially. This T-action is Hamiltonian and commutes with the above H-action. Hence, it induces a Hamiltonian T-action on ((T^*T ×^h+k) / /H, ω_red). As a moment map for this T-action we take the one that sends [0,0] ∈ (T^*T ×^h+k) / /H to the origin in ^* and we denote it by Φ_red. The desired local model is the triple ((T^*T ×^h+k) / /H, ω_red, Φ_red). §.§.§ The explicit construction The choice of inner product on 𝔱 induces an inner product on 𝔱^* which, in turn, determines an isomorphism ^* ≃Ann(𝔥) ⊕𝔥^*. Moreover, we choose a trivialization T^*T ≅ T ×^*. With these choices, we fix an identification T^*T ×^h+k with T ×Ann(𝔥) ×𝔥^* ×^h+k. 
Under this identification, the above (right) H-action is given by (t,α,β,z) · h = (th,α,β,ρ(h^-1)z), while the above moment map Φ̂ is given by (t,α,β,z) ↦β - Φ_H(z), where Φ_H : ^h+k→𝔥^* is the homogeneous moment map for the (left) H-action on ^h+k given by h · z = ρ(h)z. The map T ×Ann(𝔥) ×^h+k →Φ̂^-1(0) (t,α,z) ↦ (t,α,Φ_H(z),z) is an H-equivariant diffeomorphism, where the left hand side is equipped with the (right) H-action on T ×Ann(𝔥) ×^h+k given by (t,α,z) · h = (th, α, ρ(h^-1)z). Hence the quotient Y:=T ×_H Ann(𝔥) ×^h+k is diffeomorphic to (T^*T ×^h+k) / /H. We denote by ω_Y (respectively Φ_Y) the pullback of ω_red (respectively Φ_red) under the above diffeomorphism. The (left) T-action on Y is given by s · [t,α,z] = [st,α,z], while the moment map Φ_Y takes the form Φ_Y([t,α,z]) = α + Φ_H(z). The stabilizer of [1,0,0] ∈ Y is H and the symplectic slice representation of [1,0,0] is ρ : H → (S^1)^h+k. Moreover, if the T-action is effective, the complexity of (Y,ω_Y,Φ_Y) is equal to k. We refer to (Y,ω_Y,Φ_Y) as the local model of p. By Remark <ref>, the local models at two points lying on the same orbit are equal. By a slight abuse of terminology, we also refer to ((T^*T ×^h+k) / /H, ω_red, Φ_red) as the local model of p. Moreover, throughout the paper we sometimes state results in terms of Y but use (T^*T ×^h+k) / /H in the proof. We trust that this does not cause confusion. As a consequence of the local normal form theorem for Hamiltonian actions of compact Lie groups due to Guillemin-Sternberg <cit.> and Marle <cit.>, any Hamiltonian T-space is isomorphic to a local model near an orbit. More precisely, the following holds. Let be a Hamiltonian T-space. Given p ∈ M, let (Y,ω_Y,Φ_Y) be the local model of p. There exist T-invariant open neighborhoods U⊂ M of p and V ⊂ Y of [1,0,0] and an isomorphism between (U,ω,Φ) and (V,ω_Y,Φ_Y + Φ(p)) that maps p to [1,0,0]. Let be a Hamiltonian T-space. For any p ∈ M with stabilizer H, the homomorphism ρ : H → (S^1)^h+k is injective. Fix p ∈ M and notation as above. Let (Y,ω_Y,Φ_Y) be the local model of p. Since the T-action on M is assumed to be effective, so is the T-action on Y by Theorem <ref> (see Remark <ref>). By definition of Y and of the T-action on Y, the T-action on Y = T ×_H Ann(𝔥) ×^h+k is effective if and only if the Lie group homomorphism ρ : H → (S^1)^h+k is injective, as desired. Given p ∈ M, let H be its stabilizer and let {α_j} be the multiset of isotropy weights of p. By Corollary <ref>, the H-action on (T_p 𝒪)^ω/T_p𝒪 is effective. In particular, so is the action by its identity component. Using equation (<ref>), it follows that the -span of {α_j} equals ℓ^*_𝔥. The above discussion simplifies significantly if p is a fixed point, i.e., if H=T. In this case, h =d so that h+k = n, and the isotropy weights α_1,… , α_n lie in ℓ^*. Moreover, Y = ^n, the symplectic form ω_Y is the standard symplectic form on ^n, the T-action on Y is given by exp(ξ)· (z_1,…,z_n)=(e^ 2π i ⟨α_1 , ξ⟩z_1,…, e^ 2π i ⟨α_n , ξ⟩z_n) for every ξ∈ , and the moment map Φ_Y : Y →^* is given by Φ_Y(z)=π∑_j=1^n α_j |z_j|^2, where z = (z_1,…,z_n) ∈^n. §.§.§ Regular and exceptional local models Theorem <ref> allows us to understand a Hamiltonian T-space in a T-invariant open neighborhood of a point p by studying the local model of p. For this reason, in this subsection we take a closer look at local models. In what follows, we fix a non-negative integer k and a closed subgroup H ≤ T. As above, we set d:= T and h:= H. 
We also fix an injective Lie group homomorphism ρ : H ↪ (S^1)^h+k. We denote the subspace of fixed points of the H-action induced by ρ by (^h+k)^H. As in Section <ref>, we use k, H and ρ to construct Hamiltonian T-spaces ((T^*T ×^h+k) / /H, ω_red, Φ_red) ≅ (Y,ω_Y,Φ_Y) that we refer to as the local model determined by k, H and ρ (see equations (<ref>), (<ref>) and (<ref>) for the definition of Y, the T-action and Φ_Y respectively). We remark that k is the complexity of (Y,ω_Y,Φ_Y), that H is the stabilizer of p:=[1,0,0] and that ρ is the symplectic slice representation of p. Let k be a non-negative integer, let H ≤ T be a closed subgroup and let ρ : H ↪ (S^1)^h+k be an injective Lie group homomorphism, where h = H. Set s:= _ (^h+k)^H. There exists an isomorphism of Hermitian vector spaces ^h+k≃^h+k-s⊕^s such that ρ = (ρ',1) : H ↪ (S^1)^h+k-s× 1 ↪ (S^1)^h+k-s× (S^1)^s ≃ (S^1)^h+k and (^h+k-s)^H = {0}. Moreover, s ≤ k and, if s = k, then ρ is an isomorphism between H and (S^1)^h and the H-action on ^h is toric. To simplify notation, set V := (^h+k)^H. The standard Hermitian product on ^h+k induces an isomorphism of Hermitian vector spaces ^h+k≃ V^⊥⊕ V, and both V and V^⊥ are endowed with the restriction of the standard Hermitian product. By definition ρ(H) fixes V pointwise and is a subgroup of U(h+k). Hence, ρ splits as the direct sum of two H-representations that we denote by ρ_V : H → U(V) and ρ': H → U(V^⊥). By construction, ρ_V is the trivial representation. Moreover, ρ'(H) is contained in a maximal torus of U(V^⊥) (see Section <ref>). Since the Hermitian vector spaces V and V^⊥ are isomorphic to ^s and to ^h+k-s respectively and since {0}= (^h+k-s)^H, this proves the first statement. To prove the second, we observe that ρ': H → U(V^⊥) is injective, since ρ is injective. Since maximal tori in U(V^⊥) have dimension equal to h+k-s and since h = H, it follows at once that s≤ k. Finally, if s = k, then, since ρ' is injective, it follows that the map H ↪ (S^1)^h is an isomorphism of Lie groups. Lemma <ref> allows us to `decompose' local models as follows (see <cit.> for a proof in the case k=s=1). Let k be a non-negative integer, let H ≤ T be a closed subgroup and let ρ : H ↪ (S^1)^h+k be an injective Lie group homomorphism, where h = H. Let s = _(^h+k)^H and ρ' : H ↪ (S^1)^h+k-s be as in the statement of Lemma <ref>, so that (^h+k-s)^H={0}. The local model determined by k, H and ρ is isomorphic to (Y' ×^s,pr^*_1ω_Y' + pr^*_2ω_0, pr_1^*Φ_Y'), where (Y',ω_Y',Φ_Y') is the local model determined by k-s, H and ρ', ω_0 is the standard symplectic form ^s, and pr_j is the projection from Y' ×^s to its jth component, for j=1,2. In this proof we use the abstract construction of local models (see Section <ref> and Remark <ref>). Fix an isomorphism ^h+k≃^s ⊕^h+k-s as in the statement of Lemma <ref>. This induces a symplectomorphism between (T^*T ×^h+k, Ω) and ((T^*T ×^h+k-s) ×^s,pr^*_1Ω' + pr^*ω_0), where Ω' denotes the symplectic form on T^*T ×^h+k-s and, as above, pr_j is the projection from (T^*T ×^h+k-s) ×^s to its jth component, for j=1,2. We endow (T^*T ×^h+k-s) ×^s with the following (right) H-action and (left) T-action: * (H-action): On T^*T ×^h+k-s we consider the (right) H-action used to construct the local model determined by k-s, H and ρ', while H acts trivially on ^s, and * (T-action): On T^*T ×^h+k-s we consider the (left) T-action used to construct the local model determined by k-s, H and ρ', while T acts trivially on ^s. The above symplectomorphism is both H- and T-equivariant. 
Hence, there is a T-equivariant symplectomorphism between the symplectic quotients with respect to the H-actions, i.e., an isomorphism between the resulting Hamiltonian T-spaces. Once the abstract constructions are identified with the explicit ones as in Section <ref>, this yields the desired isomorphism of Hamiltonian T-spaces. Proposition <ref> is particularly simple if s = k, i.e., if dim_ℂ (ℂ^h+k)^H is maximal. In this case, (Y',ω_Y',Φ_Y') is toric. Motivated by Theorem <ref> and by Remark <ref>, we introduce the following dichotomy. Let (M,ω,Φ) be a complexity k T-space and let p ∈ M be a point with stabilizer H. Let 𝒪 be the orbit containing p. We say that p is regular if dim_ℂ(ℂ^h+k)^H = k and exceptional otherwise, where H acts on (T_p𝒪)^ω/T_p 𝒪≃ℂ^h+k via the symplectic slice representation at p. We observe that Definition <ref> extends also to orbits (see Remark <ref>); hence, throughout the paper, we also refer to regular and exceptional orbits. Let (M,ω,Φ) be a Hamiltonian T-space. We define the set of exceptional orbits M_exc as the subset of M/T consisting of exceptional orbits. A fixed point p in a complexity k T-space is regular if and only if it lies on a fixed submanifold of dimension 2k. We observe that the latter condition is equivalent to the T-action on the normal bundle to the fixed submanifold that contains p being toric. The techniques of Example <ref> and Lemma <ref>, together with Theorem <ref>, can be used to prove the following result. * Any point with trivial stabilizer, as well as any point in a complexity zero T-space, is regular. * If a Hamiltonian T-space has positive complexity, then any isolated fixed point is exceptional. * The subset of regular points in a Hamiltonian T-space is open. * If a point p is regular, then the stabilizer of p is connected. We conclude this section with two results in complexity one. Let (M,ω,Φ) be a complexity one T-space. A point p ∈ M^T is isolated if and only if it is exceptional. By Lemma <ref>, if p ∈ M^T is isolated, then it is exceptional. Conversely, if p ∈ M^T is regular, then, by Theorem <ref> and Example <ref>, p is not isolated. Finally, as an immediate consequence of Theorem <ref> and of Proposition <ref>, the following characterization of exceptional points holds (cf. <cit.>). Let (M,ω,Φ) be a complexity one T-space. A point p ∈ M is exceptional if and only if every nearby orbit in the same moment fiber has a strictly smaller stabilizer. In particular, Definition <ref> agrees with the definition of exceptional orbits of <cit.> in complexity one, and is the appropriate notion for our purposes.
§.§.§ Regular and exceptional sheets
Let (M,ω,Φ) be a Hamiltonian T-space. For any closed subgroup H ≤ T, the action of H on M is also Hamiltonian: Let 𝔥 be the Lie algebra of H and let i: 𝔥↪𝔱 denote the inclusion. Then a moment map for the H-action is given by the composition M Φ⟶𝔱^* i^*⟶𝔥^*, where i^* : 𝔱^* →𝔥^* is the dual of i. We denote the set of fixed points of the H-action by M^H = { p ∈ M | h · p = p for all h ∈ H}. The following result is used throughout and a proof can be found in <cit.>. Let (M,ω,Φ) be a Hamiltonian T-space and let H ≤ T be closed. Then any connected component of M^H is an embedded symplectic submanifold of (M,ω). We refer to the connected components of M^T as the fixed submanifolds of (M,ω,Φ). If N⊂ M^T is a fixed submanifold, then for any p,p'∈ N, the isotropy weights of p are equal to those of p'. Hence, the isotropy weights of the fixed submanifold N are well-defined.
Moreover, they can be used to determine N: By Theorem <ref>, dim N equals twice the number of isotropy weights of N that are equal to 0. Let H ≤ T be a closed subgroup that is the stabilizer of a point p ∈ M and let N ⊂ M^H denote the connected component that contains p. We set ω_N:= ω|_N. By Lemma <ref>, (N,ω_N) is an embedded symplectic submanifold of (M,ω). Moreover, since T is abelian and connected, and since N is connected, N is T-invariant. The T-action on N induces a T':=T/H-action on N that is effective because the stabilizer of p for the T-action equals H. This T'-action on N is Hamiltonian. To see this, we construct a moment map starting from Φ and the point p. To this end, let pr : T → T' be the quotient map. We denote its derivative at the identity by pr_*: 𝔱→𝔱' and the dual of pr_* by pr^* : (𝔱')^* →𝔱^*. By construction, pr^* is an isomorphism between (𝔱')^* and Ann(𝔥). Since N ⊂ M^H is connected and since Φ is a moment map, ⟨Φ|_N,ξ⟩ = ⟨Φ(p), ξ⟩ for all ξ∈𝔥. Consequently, Φ(N) ⊂Φ(p) + Ann(𝔥). Hence, there exists a unique smooth map Φ_N : N → (𝔱')^* such that pr^* ∘Φ_N = Φ|_N - Φ(p). By (<ref>), since Φ : M →𝔱^* is a moment map for the T-action on M and since (N,ω_N) is a symplectic submanifold of (M,ω), it follows that Φ_N is a moment map for the T'-action on N. Since the T'-action on N is effective, this shows that (N,ω_N,Φ_N) is a Hamiltonian T'-space. Let (M,ω,Φ) be a Hamiltonian T-space and let H ≤ T be a subgroup that occurs as a stabilizer of some point p ∈ M. The Hamiltonian T'-space (N,ω_N,Φ_N) constructed above is called a sheet stabilized by H. Whenever we wish to emphasize the role of p, we say that (N,ω_N,Φ_N) is the sheet through p. Any fixed submanifold N ⊂ M^T gives rise to a sheet that we denote simply by (N,ω_N). For our purposes, it is useful to allow some freedom in the choice of moment map associated to a sheet. Modulo pr^*, Φ_N and Φ|_N only differ by a constant. Depending on the context, we use either moment map. We trust that this does not cause confusion. Given a sheet (N,ω_N,Φ_N) of (M,ω,Φ), we are interested in understanding the complexity of (N,ω_N,Φ_N) in relation to that of (M,ω,Φ). By Definition <ref>, the complexity of (N,ω_N,Φ_N) is 1/2 dim N - (d-h). Let (M,ω,Φ) be a Hamiltonian T-space and let (N,ω_N,Φ_N) be a sheet stabilized by H ≤ T. The complexity of (N,ω_N,Φ_N) is at most that of (M,ω,Φ). Moreover, the following are equivalent: * The complexity of (N,ω_N,Φ_N) is less than that of (M,ω,Φ). * Any p ∈ N with stabilizer H is exceptional. * The fiberwise H-action on the symplectic normal bundle to N has positive complexity. Let p ∈ N be a point that is stabilized by H and let (Y,ω_Y,Φ_Y) be the local model of p. Since the above conditions can be checked locally, by Theorem <ref>, it suffices to prove the result in the case (M,ω,Φ) = (Y,ω_Y,Φ_Y) and p = [1,0,0]. Recall that Y = T ×_H Ann(𝔥) ×ℂ^h+k, where k is the complexity of (Y,ω_Y,Φ_Y), and that the T-action on Y is given by equation (<ref>). The submanifold Y^H is given by T ×_H Ann(𝔥) × (ℂ^h+k)^H ≃ T/H ×Ann(𝔥) × (ℂ^h+k)^H, where (ℂ^h+k)^H is the fixed point set for the H-action on ℂ^h+k. Since (ℂ^h+k)^H is a subspace of ℂ^h+k, it follows that Y^H is connected. Since N ⊆ Y^H is a connected component of Y^H, N = Y^H. Moreover, by (<ref>), the dimension of N equals 2(d-h) + 2 dim_ℂ(ℂ^h+k)^H. By Lemma <ref>, dim_ℂ(ℂ^h+k)^H ≤ k. Hence, the complexity of (N,ω_N,Φ_N) is at most k. Moreover, since (ℂ^h+k)^H is a complex subspace of ℂ^h+k, the H-action on the symplectic normal vector space to N at p can be identified with the linear H-action on ℂ^h+k/(ℂ^h+k)^H.
Suppose that the complexity of (N,ω_N,Φ_N) is less than that of (Y,ω_Y,Φ_Y). Since N = 2(d-h) + 2_(^h+k)^H, it follows that _(^h+k)^H<k, so that p is exceptional. Therefore, <ref> implies <ref>. If p is exceptional, so that _(^h+k)^H<k, then _^h+k/(^h+k)^H > h. Hence, the linear H-action on ^h+k/(^h+k)^H has positive complexity. Therefore, <ref> implies <ref>. Finally, if the H-action on the symplectic normal vector space to N at p has positive complexity, then _^h+k/(^h+k)^H > h. The latter is equivalent to _(^h+k)^H<k, so that N < 2(d-h) + 2k. Hence, the complexity of (N,ω_N,Φ_N) is less than that of (Y,ω_Y,Φ_Y). Therefore, <ref> implies <ref>. The proof of Proposition <ref> can be adapted to prove equivalence of the following conditions: * The complexity of (N,ω_N,Φ_N) is equal to that of . * Any p ∈ N with stabilizer H is regular. * The fiberwise H-action on the symplectic normal bundle to N has complexity zero. We observe that, by Proposition <ref>, either <ref> or <ref> needs to hold for any sheet. If (N,ω_N) is a fixed submanifold in , properties <ref> and <ref> of Proposition <ref> (respectively properties <ref> and <ref> of Remark <ref>) simplify as follows: The dimension of N is less than (respectively equal to) twice the complexity of , and all points in N are exceptional (respectively regular). Moreover, N ≤ 2k, where k is the complexity of . Motivated by Proposition <ref> and Remark <ref>, we introduce the following terminology. A sheet (N,ω_N,Φ_N) in is exceptional if it satisfies any of the conditions of Proposition <ref>, and regular otherwise. Exceptional sheets enjoy the following stronger characterization. A sheet (N,ω_N,Φ_N) in is exceptional if and only if every point in N is exceptional. If every point in N is exceptional, then (N,ω_N,Φ_N) is exceptional by Proposition <ref>. Conversely, suppose that (N,ω_N,Φ_N) is exceptional. By contradiction, suppose that p ∈ N is regular. By Lemma <ref>, every point in N that is sufficiently close to p is also regular. By the principal orbit theorem (see <cit.>), the set of points in N that is stabilized by H is dense. Hence, there exist points in N that are stabilized by H that are arbitrarily close to p. However, by Proposition <ref>, any such point is exceptional, a contradiction. It is not necessarily true that, if (N,ω_N,Φ_N) is a regular sheet, then all points in N are regular. A counterexample is as follows: Let be a Hamiltonian T-space of positive complexity that contains an isolated fixed point p. Since T is compact and abelian, and since M is connected, the principal orbit theorem (see <cit.>) implies that there exists an open and dense subset of M whose points have trivial stabilizer. Hence, taking H = {e}, it follows that is a regular sheet. However, by Lemma <ref>, p is exceptional. §.§ Invariants of compact Hamiltonian T-spaces In this section we recall some fundamental results about compact[In fact, many of the theorems presented in this section hold under the weaker assumption that the moment map be proper as a map to a convex open subset of ^*. However, this degree of generality goes beyond the scope of this paper.] Hamiltonian T-spaces. §.§.§ Convexity package and its consequences We start with the following foundational result (see <cit.>). Let be a compact Hamiltonian T-space. * (Connectedness) The fibers of the moment map are connected. * (Stability) The moment map is open as a map onto its image. * (Convexity) The moment map image is the convex hull of the images of the fixed submanifolds. 
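As a simple illustration of the Convexity Package (not needed in what follows, and stated with one standard normalization of the Fubini–Study form), consider the action of T = (S^1)^2 on ℂP^2 given by (t_1,t_2)·[z_0:z_1:z_2] = [z_0: t_1 z_1: t_2 z_2]. It is Hamiltonian with moment map Φ([z_0:z_1:z_2]) = (|z_1|^2,|z_2|^2)/(|z_0|^2+|z_1|^2+|z_2|^2), the fixed points are [1:0:0], [0:1:0] and [0:0:1], and Φ(ℂP^2) is the convex hull of their images (0,0), (1,0) and (0,1), i.e., a triangle. Moreover, the fiber over any point in the interior of the triangle is a single T-orbit, so the fibers are connected and Φ is open onto its image, in accordance with the Connectedness and Stability properties above.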
We remark that, since the action is effective, the moment map image of a compact Hamiltonian T-space is a polytope that has dimension equal to ^*. Let be a compact Hamiltonian T-space. The image Φ(M) is called the moment polytope. By Theorem <ref>, the moment polytope of a compact Hamiltonian T-space is convex. In fact, more is true and, in order to prove this, we need to recall a few notions. We say that a polytope Δ⊂^* is rational if any edge e ⊂Δ is of the form e = {v + t α| t ∈ [0,l]} for some v ∈^*, α∈ℓ^* and l ∈_>0. Moreover, a subset C ⊆^* is a cone if, for all v ∈ C and all λ∈_≥ 0, λ v ∈ C. A cone in ^* is proper if it does not contain any subspace of ^* of positive dimension. The following result provides a local description of the moment polytope of a compact Hamiltonian T-space near the image of a fixed submanifold. Let be a compact Hamiltonian T-space of dimension 2n. Let N be a fixed submanifold and let α_1,…α_n be the isotropy weights of N (see Remark <ref>). Consider the cone 𝒞_N = _≥ 0-span{α_1,…α_n}⊆^*, and let ℋ_N ⊆^* be the maximal subspace that is contained in 𝒞_N. * There exist an open neighborhood V of Φ(N) in Φ(M) and an open neighborhood W of 0 in 𝒞_N such that V = W + Φ(N). In particular, Φ(M) is rational. * The intersection ( ℋ_N + Φ(N)) ∩Φ(M) is a face of Φ(M) and the dimension of this face equals the dimension of ℋ_N. In particular, Φ(N) is a vertex of Φ(M) if and only if the cone 𝒞_N is proper. For any p ∈ N, by Theorem <ref>, there exist a T-invariant open neighborhood U_p of p and an open neighborhood W_p of 0 in 𝒞_N such that Φ(U_p) = W_p + Φ(N). Moreover, by Theorem <ref>, Φ(U_p) is an open neighborhood of Φ(p) in Φ(M). Since N is compact, there exist finitely many p_1,…, p_r ∈ N such that N is contained in ⋃_j=1^r U_p_j. Set W:= ⋂_j=1^r W_p_j. By construction, W + Φ(N) = ⋂_j=1^r Φ(U_p_j) is an open neighborhood of Φ(N) in Φ(M). Since the cone 𝒞_N is convex and rational, and since vertices of the moment polytope are the image of fixed submanifolds by Theorem <ref>, the moment polytope is rational. This proves part <ref>. To prove part <ref>, observe that any cone is the product of the maximal subspace that it contains with a proper cone. Thus we can write 𝒞_N = ℋ_N ×𝒞'_N for some proper cone 𝒞'_N. Hence, without loss of generality, we may take an open neighborhood W in 𝒞_N as in the statement of part <ref> to be of the form W_ℋ× W', where W_ℋ (respectively W') is an open neighborhood of 0 in ℋ_N (respectively 𝒞'_N). Therefore, the desired result holds `locally' by part <ref>; convexity of Φ(M) (see Theorem <ref>) implies that it is true `globally'. Until the end of the section, we deduce some consequences of Theorem <ref> that we use throughout the paper. We start with the following sufficient condition for a sheet to be exceptional (see Definition <ref>). Let be a compact Hamiltonian T-space and let (N,ω_N,Φ_N) be a sheet stabilized by a non-trivial subgroup H. If Φ(N) is not contained in the boundary of Φ(M), then (N,ω_N,Φ_N) is exceptional. We prove the contrapositive. Let (N,ω_N,Φ_N) be a regular sheet. It suffices to show that the set of regular values of Φ|_N that are contained in Φ(N) is contained in the boundary of Φ(M). Let x ∈Φ(N) be a regular value for Φ|_N. By Theorem <ref>, there exists p ∈Φ|_N^-1(x) that has trivial stabilizer for the T/H-action, i.e., the stabilizer of p for the T-action is H. By Remark <ref>, p is regular and, hence, the T/H-action on the symplectic normal bundle to N is toric. 
By the local normal form (Theorem <ref>), and by openness of the moment map (Theorem <ref>), it follows that Φ(x) lies in the boundary of Φ(M). For compact Hamiltonian T-spaces, the existence of exceptional sheets is intimately connected to the existence of exceptional fixed points. More precisely, the following holds. A compact Hamiltonian T-space contains an exceptional sheet if and only if it contains an exceptional fixed point. Suppose first that (N,ω_N,Φ_N) is an exceptional sheet of . Since N is compact, it contains a fixed point p ∈ M^T. By Lemma <ref>, p is exceptional. Conversely, suppose that p ∈ M^T is exceptional. By definition, the sheet (N,ω_N,Φ_N) through p is exceptional. Next we deduce some general results about isotropy weights of isolated fixed points that are used in one of the key results of our paper, Proposition <ref>. Let be a compact complexity k T-space and let p ∈ M^T be isolated. For any isotropy weight α of p, there exists a sheet (N_α,ω_α,Φ_α) with the following properties: * the point p lies in N_α, * the sheet (N_α,ω_α,Φ_α) is stabilized by the codimension 1 subgroup H_α :=exp({ξ∈𝔱|⟨α,ξ⟩∈ℤ}), * the dimension of N_α is at most 2(k+1), * the moment map image Φ_α(N_α) is contained in the affine line Φ(p) + ⟨α⟩ and intersects the open half-ray Φ(p) + _>0⟨α⟩, and * there exists q_α∈ M^T ∩ N_α such that Φ(q_α) = Φ_α(q_α) is a global extremum of Φ_α(N_α) with Φ(q_α) ∈Φ(p) + _>0⟨α⟩, and -α is an isotropy weight of q_α. Let α_1,…, α_n ∈ℓ^* be the isotropy weights of p. Without loss of generality, we may assume that α=α_n. By the local normal form of Theorem <ref>, we may identify a T-invariant open neighborhood of p in M with a T-invariant open neighborhood of 0 ∈^n so that the action becomes that of (<ref>) and the moment map is given by (<ref>). Since p ∈ M^T is isolated, α_j ≠ 0 for all j. Hence, since ⟨α_1,…, α_n ⟩ = ℓ^*, there can be at most k isotropy weights that are multiples of α_n. Therefore the subgroup H_α of (<ref>) stabilizes a subspace of ^n that is of real dimension at most 2(k+1). Moreover, by (<ref>), the subgroup H_α is the stabilizer of some point p'∈ M. The sheet (N,ω_N,Φ_N) through p' in the sense of Definition <ref> satisfies the desired conditions. For our purposes, it is useful to introduce the following terminology. Let be a compact Hamiltonian T-space and let p ∈ M^T be isolated. Given an isotropy weight α of p, we say that the sheet (N_α,ω_α,Φ_α) of Lemma <ref> is along α. Let be a compact Hamiltonian T-space. Let ℱ be the facet of Φ(M) supported on {w ∈^* |⟨ w, ν⟩ = c}. If p ∈ M^T satisfies ⟨Φ(p), ν⟩ > c, then there exists an isotropy weight α of p with ⟨α, ν⟩ < 0. Let α_1,…,α_n be the isotropy weights of p and suppose that ⟨α_j, ν⟩≥ 0 for all j=1,…, n. By part <ref> of Corollary <ref>, an open neighborhood of Φ(p) in Φ(M) equals an open neighborhood of Φ(p) in Φ(p) + _≥ 0⟨α_1,…,α_n⟩. Hence, since ⟨α_j,ν⟩≥ 0 for all j, an open neighborhood V of Φ(p) in Φ(M) is contained in the half-space {w ∈𝔱^* |⟨ w, ν⟩≥⟨Φ(p), ν⟩}. Since ⟨Φ(p), ν⟩ > c and since Φ(M) has a facet supported on {w ∈^* |⟨ w, ν⟩ = c}, this is a contradiction. Motivated by Lemma <ref>, we introduce the following terminology. Let be a compact Hamiltonian T-space and let ℱ be the facet of Φ(M) supported on {x ∈^* |⟨ x, ν⟩ = c}. For any p ∈ M^T with ⟨Φ(p), ν⟩ > c, we say that the isotropy weight α of p of Lemma <ref> is (ℱ-)downward pointing. Combining Lemmas <ref> and <ref>, we obtain the following result. 
Let (M,ω,Φ) be a compact Hamiltonian T-space and let ℱ be the facet of Φ(M) supported on {x ∈𝔱^* |⟨ x, ν⟩ = c}. Let p ∈ M^T be isolated with ⟨Φ(p), ν⟩ > c and let α be an isotropy weight of p that is ℱ-downward pointing. Let (N_α,ω_α,Φ_α) be the sheet along α. There exists q_α∈ M^T ∩ N_α such that * Φ(q_α) = Φ_α(q_α) is a global extremum of Φ_α, * -α is an isotropy weight of q_α, and * ⟨Φ(q_α), ν⟩ < ⟨Φ(p),ν⟩. Taking q_α as in Lemma <ref>, we need to prove only the last property. This follows immediately by observing that Φ(q_α) ∈Φ(p) + ℝ_>0⟨α⟩ and that α is ℱ-downward pointing. To conclude this section, we look at the preimage of faces of the moment polytope. Given a face ℱ of Φ(M), we set 𝔥_ℱ:= {ξ∈𝔱 |⟨ x-y, ξ⟩=0 for all x,y∈ℱ}. By part <ref> of Corollary <ref>, 𝔥_ℱ is a rational subspace of 𝔱 of dimension equal to the codimension of ℱ in Φ(M). Hence exp(𝔥_ℱ) is a subtorus of T. The subset M_ℱ:=Φ^-1(ℱ) ⊂ M is T-invariant; we set H_ℱ:= { t ∈ T | t · p = p for all p ∈ M_ℱ}. Let (M,ω,Φ) be a compact Hamiltonian T-space and let ℱ be a face of Φ(M). Then M_ℱ=Φ^-1(ℱ) is a connected component of M^H_ℱ and the Lie algebra of H_ℱ equals 𝔥_ℱ. Connectedness of M_ℱ can also be proved using the fact that the Convexity Package implies that the preimage of any convex set is connected (see <cit.>). Let (M,ω,Φ) be a compact Hamiltonian T-space and let ℱ be a face of Φ(M). There exists a connected, open and dense subset of M_ℱ whose points have stabilizer equal to H_ℱ. By the principal orbit theorem (see <cit.>) and since T is abelian, there exists a subgroup H of T and a connected, open and dense subset of M_ℱ such that H is the stabilizer of any point in this subset. Hence, H_ℱ⊆ H. However, since H is the stabilizer of orbits of principal type and since T is abelian, H is contained in the stabilizer of any point in M_ℱ. Therefore, H ⊆ H_ℱ. By Corollary <ref>, the preimage of a face ℱ of Φ(M) gives rise to a sheet in the sense of Definition <ref> that is stabilized by H_ℱ. We denote it by (M_ℱ,ω_ℱ,Φ_ℱ). In particular, since the complexity of (M_ℱ,ω_ℱ,Φ_ℱ) is at most that of (M,ω,Φ) by Proposition <ref>, if the codimension of ℱ is r, then dim M_ℱ≤ 2n - 2r.
§.§.§ The Duistermaat-Heckman measure and its density function
In this section we take a close look at an invariant of compact Hamiltonian T-spaces that is central to this paper. We start by recalling the following notion. Let (M,ω,Φ) be a compact Hamiltonian T-space of dimension 2n. The Duistermaat-Heckman measure of (M,ω,Φ) is the pushforward of the (normalized) Liouville measure, i.e., for any Borel set U⊂𝔱^*, m_DH(U)=1/(2π)^n∫_Φ^-1(U)ω^n/n!. The Duistermaat-Heckman measure is absolutely continuous with respect to the Lebesgue measure on 𝔱^* (see <cit.>). Therefore its Radon-Nikodym derivative with respect to the Lebesgue measure is a Lebesgue integrable function f_DH : 𝔱^* →ℝ that is uniquely defined up to a set of measure zero. Without loss of generality, we henceforth assume that f_DH vanishes identically on 𝔱^* ∖Φ(M). In <cit.>, Duistermaat and Heckman give an explicit representative of the restriction of f_DH to the intersection of the moment polytope with the set of regular values of Φ. In order to state this result, we denote the set of regular values of Φ contained in Φ(M) by Φ(M)_reg. Moreover, we recall that for any x∈Φ(M)_reg, the reduced space M_x is an orbifold that inherits a symplectic form that we denote by ω_x (see Section <ref>). Let (M,ω,Φ) be a compact complexity k T-space.
The restriction of the Radon-Nikodym derivative of the Duistermaat-Heckman measure of to Φ(M)_reg can be chosen to be equal to the function Φ(M)_reg→ x ↦1/(2π)^k∫_M_xω_x^k/k! =: 1/(2π)^kVol(M_x), x ∈Φ(M)_reg. By <cit.>, the restriction of the function (<ref>) to each connected component of Φ(M)_reg is a polynomial of degree at most k. Moreover, if this polynomial has positive degree, the coefficients of the monomials of top degree are integral. This is because the cohomology classes [ω_x] vary linearly with x on such a connected component and the variation is controlled by a cohomology class with integral coefficients (see <cit.>). Theorem <ref> is sufficient to calculate the Duistermaat-Heckman measure, as the set of singular values has measure zero by Sard's theorem. By Remark <ref>, the function of (<ref>) is continuous. The aim of this section is to prove Theorem <ref> below, which is probably well-known to experts (cf. <cit.> for linear symplectic actions on vector spaces and <cit.>). However, since we use it extensively throughout the paper, we include a proof for completeness. Given a compact Hamiltonian T-space , there exists a unique continuous function DH : Φ(M) → that extends the function of (<ref>). By Remark <ref>, if the interior of the moment polytope consists solely of regular values, then Theorem <ref> is trivial. In this case, DH is the restriction of a polynomial of degree at most the complexity of . For instance, the desired function in the case of compact symplectic toric manifolds is the indicator function of the moment polytope, since reduced spaces are connected by Theorem <ref>, and since k = M_x = 0 for all x ∈Φ(M)_reg. Theorem <ref> allows us to introduce the following notion. Let be a compact Hamiltonian T-space. We call the continuous map DH : Φ (M) → given by Theorem <ref> the Duistermaat-Heckman function of . Let be a compact Hamiltonian T-space and let H ⊂ T be a subtorus. Choose a complementary subtorus K of T so that T = H × K. This induces an identification ^* ≃𝔥^* ⊕𝔨^*. We write the Lebesgue measure on ^* as dxdy, where dx (respectively dy) is the Lebesgue measure on 𝔥^* (respectively 𝔨^*). Let π : ^* →𝔥^* be the projection induced by the inclusion H ⊂ T. The H-action is Hamiltonian with moment map Φ':= π∘Φ. Since DH is continuous, by Fubini's theorem, the Duistermaat-Heckman function of (M,ω, Φ') is given by DH (M,ω, Φ') (x) = ∫_Δ_xDH(x,y) dy for all x ∈Φ'(M), where Δ_x = π^-1(x) ∩Φ(M). Suppose further that is a compact symplectic toric manifold and that H has codimension 1. For any x ∈Φ'(M), Δ_x is an interval in {x}×𝔨^* ≃{x}× that we write as {(x,y) ∈𝔥^*⊕| y ∈ [p_min(x), p_max(x)] }. By Remark <ref>, DH (M,ω, Φ') (x) = p_max(x) - p_min(x) for all x ∈Φ'(M). We observe that, since Φ(M) is a convex polytope, the difference p_max - p_min is concave (cf. Proposition <ref> below). Our proof of Theorem <ref> uses extensively ideas from <cit.>. Before proceeding with the proof, we need to recall some facts about singular values of the moment map. §.§.§ Intermezzo 1: chambers of the moment map We begin by relating singular points of the moment map of a Hamiltonian T-space (that is not necessarily compact), to sheets arising from one dimensional stabilizers (see Definition <ref>). If K ≤ T is a closed one-dimensional subgroup that occurs as the stabilizer of some point in M, then every p ∈ M^K is a singular point of Φ. In fact, the converse also holds. Let be a Hamiltonian T-space. 
If K ≤ T is a one-dimensional closed subgroup that occurs as the stabilizer of some point in M, then Φ(M^K) is contained in the set of singular values of Φ. Conversely, if x ∈𝔱^* is a singular value of Φ, then there exists a one-dimensional closed subgroup K ≤ T that occurs as the stabilizer of some point in M such that x ∈Φ(M^K). Since the action is Hamiltonian, the first statement holds. Conversely, let p ∈ M be a singular point with Φ(p) =x. Let H be the stabilizer of p, let h = dim H and let k be the complexity of (M,ω,Φ). Since the T-action is Hamiltonian and since p is a singular point of Φ, dim H ≥ 1. If dim H = 1, there is nothing to prove. Hence, suppose that dim H ≥ 2. By Theorem <ref>, it suffices to prove the result for the Hamiltonian T-action on the local model of p. Hence, we may assume that (M,ω,Φ) is the local model (Y,ω_Y,Φ_Y) at p and that p = [1,0,0]. Moreover, by Corollary <ref>, the symplectic slice representation ρ : H → (S^1)^h+k of p is injective. The stabilizer of any point in Y = T ×_H Ann(𝔥) ×ℂ^h+k is a subgroup of H. In fact, if H̃≤ H is the stabilizer of some point in Y, we have that Y^H̃ = T ×_H Ann(𝔥) × (ℂ^h+k)^H̃, where H acts on ℂ^h+k via the symplectic slice representation of p. Since the H-action on ℂ^h+k is linear, (ℂ^h+k)^H̃ is a subspace for any subgroup H̃≤ H. Hence, to prove the result, it suffices to show that there exists a one-dimensional closed subgroup K ≤ H that occurs as a stabilizer of a point in ℂ^h+k. Let η_1,…, η_h+k∈𝔥^* be the isotropy weights of p. Since the symplectic slice representation of p is injective, the H-action on ℂ^h+k is effective. This implies that η_1,…, η_h+k span 𝔥^*. Hence we may assume that there exists s ≥ 1 such that the span of η_s+1,…, η_h+k has codimension 1. Since the H-action on ℂ^h+k is Hamiltonian with moment map Φ_H(z) = 1/2∑_j=1^h+k |z_j|^2 η_j, it can be checked directly that all points in {z = (z_1,…,z_h+k) ∈ℂ^h+k| z_j = 0 if j=1,…,s, z_j ≠ 0 if j ≥ s+1 } have stabilizers of dimension one, as desired. In other words, the set of singular values of the moment map equals the union of the moment map images of all sheets that are stabilized by some one-dimensional closed subgroup of T (see Definition <ref>). Each such image is contained in some affine hyperplane (see Remark <ref> and the discussion preceding it). If, in addition, M is compact, there are two important consequences. First, there are only finitely many subgroups of T that occur as the stabilizer of some point in M (see <cit.>). Second, since the action is Hamiltonian, Φ has some singular point; hence, by Lemma <ref>, there exists a one-dimensional closed subgroup of T that occurs as the stabilizer of some point in M. Let K_1,…, K_r ≤ T be the collection of such one-dimensional closed subgroups. Since M is compact, by Lemma <ref>, M^K_i is a compact submanifold of M for each i=1,…, r and, therefore, has finitely many connected components N_i1,…, N_is_i. For each i,j, we denote the corresponding sheet by (N_ij,ω_ij,Φ_ij) and we set Δ_ij:= Φ_ij(N_ij). We observe that the union of the Δ_ij's includes the union of all facets. More precisely, the following holds. Let (M,ω,Φ) be a compact Hamiltonian T-space. For any facet ℱ of Φ(M), there exist indices i,j as above so that ℱ = Δ_ij. Let (M_ℱ,ω_ℱ,Φ_ℱ) be the sheet corresponding to ℱ (see Corollary <ref> and (<ref>)), and let H_ℱ be its stabilizer. By Lemma <ref>, since the codimension of ℱ in 𝔱^* is one, the dimension of H_ℱ is also one. By Corollary <ref>, it follows that (M_ℱ,ω_ℱ,Φ_ℱ) is one of the sheets constructed above.
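To illustrate the objects just constructed (in the toric case, and up to a choice of normalization), consider again the (S^1)^2-action (t_1,t_2)·[z_0:z_1:z_2] = [z_0: t_1 z_1: t_2 z_2] on ℂP^2, whose moment polytope is the triangle with vertices (0,0), (1,0) and (0,1). The one-dimensional closed subgroups that occur as stabilizers are S^1 ×{1}, {1}× S^1 and the diagonal circle {(t,t)}; the fixed point set of each consists of a coordinate projective line together with the fixed point opposite to it, and the corresponding sets Δ_ij are the three edges and the three vertices of the triangle. In particular, every singular value of Φ lies in the boundary of the moment polytope, consistent with the fact that all points of a complexity zero T-space are regular.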
The complement of the union of the Δ_ij's in the moment polytope is precisely Φ(M)_reg. We call the closure of a connected component of this complement a chamber of Φ(M). These chambers partition the moment polytope into subpolytopes, i.e., the following properties hold (see Figure <ref>): * Each chamber is a polytope in 𝔱^* of full dimension and any two chambers intersect in a common face. * Let F be a facet of a chamber. The set of points x∈ F that are regular values for all the moment maps Φ_ij of the sheets (N_ij,ω_ij,Φ_ij) constructed above is dense in F. Moreover, this set is contained in the interior of F. Conversely, if x is a singular value that is a regular value for all the moment maps Φ_ij of the sheets (N_ij,ω_ij,Φ_ij) constructed above, then x lies in the interior of a facet of a chamber. * If two chambers 𝔠 and 𝔠' intersect in a face F, then there exists a sequence of chambers 𝔠_0 = 𝔠, 𝔠_1,…, 𝔠_s = 𝔠' with the property that the intersection of 𝔠_l and 𝔠_l+1 is a facet that contains F for all l=0,…, s-1. Properties <ref> – <ref> follow from the above definition of the N_ij's and from Theorems <ref> and <ref> (see also <cit.>). By Remark <ref>, Theorem <ref> follows at once if the interior of Φ(M) consists entirely of regular values. The following result describes precisely when this happens in terms of exceptional sheets. Let (M,ω,Φ) be a compact Hamiltonian T-space. There is precisely one chamber of Φ(M) if and only if there are no exceptional sheets. If there are no exceptional sheets, then all sheets are contained in the boundary of Φ(M) by Lemma <ref>. This implies that the union of the Δ_ij's constructed above is contained in the boundary of Φ(M), which equals the union of the facets of Φ(M) since Φ(M) is a convex polytope. By Lemma <ref>, the union of all the facets of Φ(M) is contained in the union of the Δ_ij's. Hence, the union of the Δ_ij's equals the boundary of Φ(M). Since Φ(M) is a convex polytope of full dimension, the complement of the boundary of Φ(M) in Φ(M) equals the interior of Φ(M), which is connected. Hence, there is precisely one chamber of Φ(M). Conversely, if Φ(M) has at least two chambers, then at least one of the Δ_ij's constructed above cannot be contained in the boundary of Φ(M). By Lemma <ref>, the corresponding sheet is exceptional. Seeing as the case of only one chamber has already been proved, in what follows (namely, in Intermezzo 2 and in the proof of Theorem <ref> below), we assume that there are at least two chambers.
§.§.§ Intermezzo 2: the wall-crossing formula
The main tool that we use in the proof of Theorem <ref> is the so-called wall-crossing formula for compact Hamiltonian T-spaces (see <cit.>, <cit.> and <cit.>). We recall it here for completeness and we draw on the above Intermezzo for notation. Moreover, in this subsection we consider the closure of the complement of the moment map image as a chamber. Let 𝔠_± be two chambers in Φ(M) that intersect in a facet F. Let ξ∈ℓ be the primitive element that is normal to the hyperplane supporting F and that points out of 𝔠_- (see Figure <ref>). We fix a point x ∈ F that has the property that it is a regular value for all the moment maps Φ_ij of the sheets (N_ij,ω_ij,Φ_ij) constructed in the above Intermezzo; such a point exists by property <ref>. We use the fixed inner product on 𝔱 to choose a complementary subspace 𝔨 to the span of ξ, i.e., 𝔱 = ⟨ξ⟩⊕𝔨. Hence, viewing 𝔱 as the space of homogeneous polynomials of degree one on 𝔱^*, we can view polynomials on 𝔱^* as being generated by ξ and polynomials on 𝔨^*.
Moreover, since ξ∈ℓ, we have that exp (⟨ξ⟩) is a circle in T that we denote by S^1. The subspace 𝔨 is isomorphic to the Lie algebra of the quotient T/S^1. In what follows, we use this identification tacitly since it is compatible with the identification of Remark <ref>. Let f_±: ^* → be the polynomial that, when restricted to the interior of 𝔠_±, equals (<ref>). Since x is a regular value of Φ_ij, it lies in the interior of a chamber 𝔠_ij of Φ_ij(N_ij) for all i,j. Hence, there is a corresponding polynomial f_ij: 𝔨^* → that, when restricted to the interior of the 𝔠_ij, equals (<ref>). (If x ∉Φ_ij(N_ij), this polynomial is identically zero.) Finally, for each i,j, we let κ_ij be half the codimension of N_ij in M and, if i,j are such that x ∈Φ_ij(N_ij), we let α_ij1,…, α_ijκ_ij∈ be the isotropy weights for the S^1-action on the normal bundle to N_ij. Set the notation as above. For all y ∈^* we have that f_+(y)- f_-(y) = ∑_{i,j | x ∈Φ_ij(N_ij)}ξ^κ_ij-1(y-x)(∏_s=1^κ_ijα_ijs)^-1[f_ij(y-x)/(κ_ij-1)! + P_ij(y-x)], where P_ij is a polynomial depending on i,j that is divisible by ξ. * We stress that Theorem <ref> also holds if, say, 𝔠_- is the closure of the complement of the moment map image. * The polynomials P_ij have been computed explicitly (see <cit.>). They depend on the symplectic reduction of N_ij at x by the T/S^1-action. §.§.§ Back to the Duistermaat-Heckman function of With the above Intermezzos, we have all the ingredients to proceed with the proof of Theorem <ref>. First, we prove existence of a continuous extension. Let 𝔠_1,…, 𝔠_l be the chambers of Φ(M) and, for each i = 1,…, l, let f_i:^* → be the polynomial that equals (<ref>) when restricted to the interior of 𝔠_i. The result is proved if we show that the map given by x ↦ f_i(x) if x ∈𝔠_i is well-defined, i.e., given two chambers 𝔠_i and 𝔠_j such that 𝔠_i∩𝔠_j ≠∅, the following holds: f_i(x) = f_j(x) for all x ∈𝔠_i ∩𝔠_j. Clearly equation (<ref>) holds if i=j, so suppose that i ≠ j. Set F := 𝔠_i ∩𝔠_j; this is a face of both 𝔠_i and 𝔠_j (see property <ref>). We consider first the special case that F is a facet. Since the set of points in F that are regular values for all the sheets (N,ω_N,Φ_N) constructed in Intermezzo 1 is dense in F (see property <ref>), and since both f_i and f_j are continuous, it suffices to check that equation (<ref>) holds for those points. Let x ∈ F be such a point. If we show that the codimension of each N such that x ∈Φ(N) is at least four, then we are done by Theorem <ref> (cf. <cit.>). Since F is a facet of two distinct chambers, it is not contained in the boundary of Φ(M). In particular, since x lies in the interior of F, it does not lie in the boundary of Φ(M). Therefore, if N is such that x ∈Φ(N), then Φ(N) is not contained in the boundary of Φ(M). By Lemma <ref>, (N,ω,Φ) is an exceptional sheet. Thus the complexity of (N,ω,Φ) is strictly less than that of by Proposition <ref>. Since the dimension of the torus that acts effectively on N is one less than that of T, it follows that the codimension of N in M is at least four, as desired. If F is not a facet, then by property <ref> above we can find chambers 𝔠_0 = 𝔠_i, 𝔠_1,…, 𝔠_s = 𝔠_j such that 𝔠_l and 𝔠_l+1 intersect in a facet that contains F for all l=0,…, s-1 (see property <ref>). The result then follows immediately by the above special case. Uniqueness of the continuous extension follows immediately by observing that Φ(M)_reg is a dense subset of Φ(M) and that the function of (<ref>) is continuous. 
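As a concrete illustration of Theorem <ref> and of the wall-crossing formula of Intermezzo 2 (a standard example, with normalizations chosen so that on each sphere the moment map of the rotation action is the height function z_i ∈ [-1,1]), let S^1 act diagonally on S^2 × S^2 equipped with the product symplectic form, so that Φ = z_1 + z_2 and Φ(M) = [-2,2]. This is a compact complexity one T-space; its fixed points are the four pairs of poles, with images -2, 0, 0 and 2, and the two fixed points over 0 are isolated, hence exceptional by Lemma <ref>, so the chambers are [-2,0] and [0,2]. With these normalizations the Duistermaat-Heckman measure is the pushforward of dz_1 dz_2 under (z_1,z_2) ↦ z_1+z_2, so that DH(x) = 2 - |x|: a continuous, piecewise linear function whose derivative jumps by -2 at the interior singular value 0, consistent with Theorem <ref>, since each of the two exceptional fixed points contributes a point sheet with κ_ij = 2, α_ij1α_ij2 = -1 and f_ij ≡ 1.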
The Duistermaat-Heckman function is an invariant of the isomorphism class of a compact Hamiltonian T-space that plays an important role in this paper. As an illustration, the following result describes the restriction of the Duistermaat-Heckman function of (M,ω,Φ) to any facet of the moment polytope. Let (M,ω,Φ) be a compact Hamiltonian T-space. If ℱ is a facet of Φ(M), then DH |_ℱ = DH(M_ℱ,ω_ℱ,Φ_ℱ) if dim M_ℱ = dim M - 2, and DH |_ℱ = 0 otherwise, where (M_ℱ,ω_ℱ,Φ_ℱ) is as in (<ref>). In this proof we denote the closure of the complement of the moment map image by 𝔠_- (cf. the paragraph preceding Theorem <ref> and Remark <ref>). By Lemma <ref>, (M_ℱ,ω_ℱ,Φ_ℱ) is one of the sheets constructed in Intermezzo 1; in particular, the quotient T/H_ℱ is isomorphic to S^1, where H_ℱ is the stabilizer of (M_ℱ,ω_ℱ,Φ_ℱ). Let x ∈ℱ be a regular value of Φ_ℱ. Since M_ℱ = Φ^-1(ℱ) (see Lemma <ref>), it follows that (M_ℱ,ω_ℱ,Φ_ℱ) is the only sheet constructed in Intermezzo 1 that contains x in its moment map image. In particular, x is a regular value for all the moment maps of all the sheets constructed in Intermezzo 1. Hence, there exist a chamber 𝔠_+ and a facet F of 𝔠_+ such that x lies in the interior of F. Let ξ∈ℓ be the primitive normal to the hyperplane supporting F that points out of 𝔠_-. Let f_+ : 𝔱^* →ℝ denote the polynomial that, when restricted to 𝔠_+, equals (<ref>). Since f_- ≡ 0, by Theorem <ref>, f_+(y)= ξ^κ_ℱ-1(y-x) (∏_s=1^κ_ℱα_s)^-1[f_ℱ(y-x)/(κ_ℱ-1)! + P_ℱ(y-x)] for all y ∈𝔱^*, where κ_ℱ is half of the codimension of M_ℱ in M, α_1,…, α_κ_ℱ∈ℤ are the isotropy weights of the S^1-action on the normal bundle to M_ℱ, f_ℱ is the polynomial that equals (<ref>) when restricted to the chamber of Φ_ℱ containing x, and P_ℱ is the polynomial associated to (M_ℱ,ω_ℱ,Φ_ℱ) in (<ref>). By (<ref>), if κ_ℱ≥ 2, then f_+(x) = 0. On the other hand, if κ_ℱ = 1, since P_ℱ is divisible by ξ (see Theorem <ref>), the right hand side of (<ref>) evaluated at x equals f_ℱ(0)/α_1. Since the S^1-action is effective and ξ is chosen to point out of 𝔠_-, it follows that α_1 = 1 and, hence, f_+(x) = f_ℱ(0). Since f_+ and f_ℱ are restrictions of DH and DH (M_ℱ,ω_ℱ,Φ_ℱ) respectively, we have shown that, if x is a regular value of Φ_ℱ, then DH(x) = DH (M_ℱ,ω_ℱ,Φ_ℱ)(x) if dim M_ℱ = dim M - 2, and DH(x) = 0 otherwise. Since Φ_ℱ(M_ℱ)_reg is dense in Φ_ℱ(M_ℱ), and since DH and DH (M_ℱ,ω_ℱ,Φ_ℱ) are continuous and defined on ℱ, equation (<ref>) implies the desired result.
Let (M,ω,Φ) be a compact complexity one T-space. If there are no isolated fixed points, then M_exc = ∅ and the Duistermaat-Heckman function DH : Φ(M) →ℝ is the restriction of an affine function. Since there are no isolated fixed points, Lemma <ref> implies that there are no exceptional fixed points. Hence, by Lemma <ref>, there are no exceptional sheets. By Lemma <ref>, M_exc = ∅. Moreover, by Lemma <ref>, there is only one chamber of Φ(M). The result then follows by Remark <ref>. Let (M,ω,Φ) be a compact complexity one T-space. If the Duistermaat-Heckman function DH is constant, then there are no singular values in the interior of Φ(M). We prove the contrapositive. Suppose that there is a singular value in the interior of Φ(M). Hence there exist two chambers 𝔠_- and 𝔠_+ of Φ(M) that intersect in a facet F that is not contained in the boundary of Φ(M). As in Intermezzo 2, we let ξ∈ℓ be a primitive element that is normal to the hyperplane supporting F and points out of 𝔠_-. Moreover, we fix x ∈ F that is a regular value for all the moment maps Φ_ij of the sheets (N_ij,ω_ij,Φ_ij) constructed in Intermezzo 1; we observe that x lies in the interior of F (see property <ref>). Since F is not contained in the boundary of Φ(M), neither is x. In particular, if x ∈Φ_ij(N_ij), then Φ_ij(N_ij) is not contained in the boundary of Φ(M). Hence, (N_ij,ω_ij,Φ_ij) is exceptional by Lemma <ref>. By Proposition <ref>, the complexity of (N_ij,ω_ij,Φ_ij) is strictly less than that of (M,ω,Φ). Since the complexity of (M,ω,Φ) is one, the complexity of (N_ij,ω_ij,Φ_ij) is zero. Hence, the codimension of N_ij in M equals 4. Therefore, the lowest order term in ξ in the right hand side of equation (<ref>) is ξ(∑_{i,j | x ∈Φ_ij(N_ij)} (α_ij1α_ij2)^-1f_ij), where, for each i,j, the polynomial f_ij and the integers α_ij1,α_ij2 are as in the discussion leading up to Theorem <ref>. Since x lies in the interior of Φ(M), it follows that α_ij1α_ij2 < 0 and f_ij(x) > 0 for each i,j. In particular, the polynomial in equation (<ref>) is not identically zero and, therefore, neither is the right hand side of equation (<ref>). Let f_± : 𝔱^* →ℝ be the polynomial that, when restricted to 𝔠_±, equals (<ref>). By (<ref>) and Theorem <ref>, f_+ and f_- are not equal. Hence, since f_± is the restriction of the Duistermaat-Heckman function DH to 𝔠_±, DH is not constant, as desired. The next result describes DH near a vertex of Φ(M) that corresponds to a fixed surface. To this end, let v ∈Φ(M) be a vertex and let Σ = Φ^-1(v) be a fixed surface. Let α_1,…,α_n be the isotropy weights of Σ (see Remark <ref>), labeled so that α_n = 0. Since v is a vertex, by part <ref> of Corollary <ref>, a sufficiently small neighborhood of v in Φ(M) is of the form { v + ∑_i=1^n-1t_i α_i | t_i ≥ 0 sufficiently small}. Moreover, by part <ref> of Corollary <ref>, for each i=1,…, n-1, the edge e_i of Φ(M) that is incident to v is contained in the half-line {v+t_iα_i | t_i ≥ 0}. Let (M_i,ω_i,Φ_i) be the sheet corresponding to the edge e_i as in (<ref>) with stabilizer H_i. By Proposition <ref>, dim M_i = 4 and (M_i,ω_i,Φ_i) is a compact complexity one Hamiltonian T/H_i-space. In what follows, we identify T/H_i ≃ S^1. Let N be the normal bundle of Σ in M. Then N splits T-equivariantly as a direct sum N = L_1 ⊕…⊕ L_n-1, where L_i denotes the normal bundle of Σ in M_i. Let (M,ω,Φ) be a compact complexity one T-space of dimension 2n and let v ∈Φ(M) be a vertex such that Σ = Φ^-1(v) is a fixed surface. Let α_1, …, α_n-1 be the non-zero isotropy weights of Σ.
For all t_1,…, t_n-1≥ 0 sufficiently small, DH (v+ ∑_i=1^n-1t_iα_i)= ∫_Σω-∑_i=1^n-1t_i c_1(L_i) [ Σ], where L_1,…, L_n-1 are as in (<ref>) and c_1(L_i) is the first Chern class of L_i for i=1,…, n-1. In particular, if DH attains its minimum at v, then c_1(L_i)[Σ] ≤ 0 for all i=1,…, n-1. Since v is a vertex and the complexity of is one, the restriction of DH to a sufficiently small neighborhood of v is an affine function. Thus there exist real numbers β_0,β_1,…, β_n-1 such that, for all t_1,…, t_n-1≥ 0 sufficiently small, DH (v+ ∑_i=1^n-1t_iα_i) = β_0 + ∑_i=1^n-1 t_iβ_i. In order to determine the constants β_0,β_1,…, β_n-1, it suffices to understand the restriction of DH to elements of the form v + t_jα_j for j=1,…, n-1. Fix i=1,…, n-1. By Corollary <ref>, DH (v+t_iα_i) = DH(M_i,ω_i,Φ_i)(v + t_iα_i). By <cit.>, for all t_i ≥ 0 sufficiently small, DH(M_i,ω_i,Φ_i)(v + t_iα_i) = ∫_Σω_i - t_i c_1(L_i)[Σ]. The result follows immediately by comparing equations (<ref>), (<ref>) and (<ref>). The following result is an immediate consequence of piecewise linearity of the Duistermaat-Heckman function of a compact complexity one T- space and of Proposition <ref>. Let be a compact complexity one T-space. The subset {(x,t) ∈𝔱^* ×| x ∈Φ(M) , t ∈ [0,DH(x)]} is a convex polytope in ^* ×. The convex polytope of Corollary <ref> and the Duistermaat-Heckman function of a compact complexity one T-space are equivalent in the sense that knowing one allows to reconstruct the other. To emphasize the combinatorial nature of the problem we study, we introduce the following notion. The convex polytope (<ref>) of a compact complexity one T-space is called the Duistermaat-Heckman polytope of . Suppose that is a compact complexity one T-space obtained by restricting a complexity zero T × S^1-action on (M,ω) to the subtorus T ×{1}. The moment polytope of the complexity zero action need not agree with the Duistermaat-Heckman polytope of . However, the two are related as follows. If Δ denotes the moment polytope of the complexity zero T × S^1-action, we have that Δ = { (x,y)∈Φ'(M) ×| y ∈ [p_min(x), p_max(x)] }, whereas, combining Example <ref> and (<ref>), the Duistermaat-Heckman polytope of equals {(x,t) ∈𝔱^* ×| x ∈Φ(M) , t ∈ [0,p_max(x) - p_min(x)]}. Finally, we observe that the latter can be obtained from the former by applying a piecewise integral affine transformation of ^* ⊕, where the lattice is ℓ^* ⊕ (see Figure <ref>). §.§ Compact complexity preserving Hamiltonian T-spaces In this section, we introduce a class of Hamiltonian T-spaces that enjoy special properties, which are also enjoyed by compact symplectic toric manifolds (see Corollary <ref>, Proposition <ref> and Corollary <ref>). We begin with the following result, which extends <cit.>. Let be a compact complexity k T-space. If N ⊂ M^T is a fixed submanifold with N = 2k, then Φ(N) is a vertex of Φ(M). Moreover, for every face ℱ of Φ(M) that contains Φ(N), the sheet (M_ℱ,ω_ℱ,Φ_ℱ) is stabilized by a connected subgroup and has complexity equal to k. Since N= 2k, by Remark <ref>, (N,ω_N) is regular and every point in N is regular. Hence, by Remark <ref>, the T-action on the normal bundle to N is toric. By Corollary <ref>, Φ(N) is a vertex of Φ(M). Let ℱ be a face of Φ(M) that contains Φ(N) and let p ∈ N ∩ M_ℱ. By Corollary <ref>, there exist points arbitrarily close to p that have stabilizer equal to H_ℱ, the stabilizer of (M_ℱ,ω_ℱ,Φ_ℱ). Since p is regular and all stabilizers in a regular local model are connected, H_ℱ is connected by Theorem <ref>. 
Finally, we observe that, since (N,ω_N) is a fixed submanifold and N ⊆ M_ℱ, (N,ω_N) is a sheet in (M_ℱ,ω_ℱ,Φ_ℱ). Since N = 2k and N ⊂ M^T/H_ℱ_ℱ, by Proposition <ref>, the complexity of (M_ℱ,ω_ℱ,Φ_ℱ) is at least k. On the other hand, (M_ℱ,ω_ℱ,Φ_ℱ) is a sheet in and the complexity of the latter is k. Hence, by Proposition <ref>, the complexity of (M_ℱ,ω_ℱ,Φ_ℱ) is at most k. Let be a compact complexity k T-space. The following are equivalent: * for each face ℱ of Φ(M), the complexity of the sheet (M_ℱ,ω_ℱ,Φ_ℱ) equals k; * for each face ℱ of Φ(M) of codimension r, M_ℱ has maximal dimension, i.e., M_ℱ = 2n-2r; * for each vertex v of Φ(M), Φ^-1(v) has maximal dimension, i.e., Φ^-1(v) = 2k. For any face ℱ of Φ(M) of codimension r, the complexity of (M_ℱ,ω_ℱ,Φ_ℱ) equals that of if and only if M_ℱ = M - 2r. This shows that <ref> and <ref> are equivalent. Since vertices are faces of maximal codimension, <ref> implies <ref>. The converse follows from Proposition <ref>. Motivated by Corollary <ref>, we introduce the following terminology. A compact Hamiltonian T-space is said to be complexity preserving if it satisfies any (and hence all) of the conditions <ref> – <ref> in Corollary <ref>. compact complexity preserving Hamiltonian T-spaces generalize compact symplectic toric manifolds. By Corollary <ref>, if is a compact complexity preserving Hamiltonian T-space, then, for every face ℱ of Φ(M), so is (M_ℱ,ω_ℱ,Φ_ℱ). The following result describes a property of the Duistermaat-Heckman function of compact complexity preserving Hamiltonian T-spaces and is an immediate consequence of Propositions <ref>, <ref> and Remark <ref>. Let be a compact complexity k T-space and suppose that N ⊂ M^T is a fixed submanifold with N = 2k. For any face ℱ of Φ(M) containing Φ(N), DH |_ℱ = DH(M_ℱ,ω_ℱ,Φ_ℱ), where (M_ℱ,ω_ℱ,Φ_ℱ) is the sheet given in (<ref>). In particular, if is complexity preserving, then (<ref>) holds for all faces of Φ(M). To conclude this section, we prove the following result, which we need in Section <ref>. Let be a compact complexity preserving Hamiltonian T-space of positive complexity. If there are no singular values of Φ contained in the interior of Φ(M), then the action has no isolated fixed points. Suppose that p ∈ M^T is isolated. By condition <ref> in Corollary <ref>, Φ(p) is not a vertex. Hence, if ℱ is the face of smallest dimension in which Φ(p) lies, then ℱ≥ 1. By part <ref> of Corollary <ref>, an open neighborhood of Φ(p) in Φ(M) can be identified with an open neighborhood of (0,0) ∈^ℱ×^d-ℱ in ^ℱ×𝒞'_p, where 𝒞'_p is the proper cone in the proof of part <ref> of Corollary <ref>. We observe that, since ℱ≥ 1, the subset {0}×𝒞'_p intersects the interior of ^ℱ×𝒞'_p. Choose d - ℱ linearly independent isotropy weights of p that span 𝒞'_p and, if needed, complete this set with ℱ - 1 linearly independent isotropy weights of p whose span is contained in ^ℱ. The span of these isotropy weights α_1,…, α_d-1 satisfies (Φ(p) + _≥ 0⟨α_1,…, α_d-1⟩) ∩Int(Φ(M)) ≠∅, where Int(Φ(M)) denotes the interior of Φ(M). By Theorem <ref>, we may identify a T-invariant neighborhood of p with a T-invariant neighborhood of 0 ∈^n so that Φ becomes the map (z_1,…, z_n) ↦π∑_i=1^n α_i |z_i|^2 +Φ(p). Moreover, by part <ref> of Corollary <ref>, an open neighborhood of Φ(p) in Φ(M) can be identified with an open neighborhood of Φ(p) in the image of the map of equation (<ref>). By (<ref>), the affine hyperplane Φ(p) + ⟨α_1,…, α_d-1⟩ intersects Int(Φ(M)). 
All values in this intersection have a one-dimensional stabilizer: this is because (Ann(⟨α_1 ⟩) ∩…∩Ann(⟨α_d-1⟩)) = 1, since α_1,…, α_d-1 are linearly independent. Hence, there is a singular value of Φ in Int(Φ(M)), a contradiction. §.§.§ Moment polytopes for compact complexity preserving Hamiltonian T-spaces In this subsection, we characterize the moment map image of complexity preserving compact Hamiltonian T-spaces (see Proposition <ref> below). To this end, given a polytope Δ⊂^* and a vertex v ∈Δ, we say that Δ is smooth at v if * there are exactly d edges e_1,…, e_d that are incident to v, and * there exists a basis α_1,…, α_d of ℓ^* such that α_i is a tangent vector to the edge e_i for all i=1,…, d. A polytope Δ⊂^* is smooth at v if and only if the collection of inward (or outward) normals to the facets of Δ that contain v can be chosen to be a basis of ℓ. We say that a polytope Δ is Delzant if it is smooth at every vertex. The moment map image of a compact symplectic toric manifold is a Delzant polytope and, conversely, every Delzant polytope arises as such an image (see <cit.>). In general, this fails to be true in higher complexity. However, under the additional hypothesis of complexity preserving, the following result holds. The moment map image of a compact complexity preserving Hamiltonian T-space is a Delzant polytope in ^*. Conversely, for every Delzant polytope Δ in ^* and for every integer k ≥ 0, there exists a compact complexity preserving Hamiltonian T-space of complexity k such that Φ(M) = Δ. Let be a compact complexity preserving Hamiltonian T-space of complexity k. Let v ∈Φ(M) be a vertex. By Corollary <ref>, Φ^-1(v) = 2k. Let α_1,…,α_n be the isotropy weights of Φ^-1(v) (see Remark <ref>). Since Φ^-1(v) =2k, precisely k weights are zero. Without loss of generality, we may assume that α_n-k+1,…, α_n = 0. By part <ref> of Corollary <ref> an open neighborhood of v in Φ(M) looks like an open neighborhood of 0 in _≥ 0-span{α_1,…,α_n}= _≥ 0-span{α_1,…α_n-k}. Since the complexity of is k, d = T = n-k. Hence, by equation (<ref>), there are exactly d edges that are incident to v. Moreover, by Remark <ref>, the -span of α_1,…,α_d equals ℓ^*. Hence, Φ(M) is smooth at v and the first statement follows. Conversely, fix an integer k ≥ 0 and suppose that Δ is a Delzant polytope in ^*. By the classification of compact symplectic toric manifolds (see <cit.>), there exists a compact complexity zero T-space (M',ω',Φ') such that Φ'(M') = Δ. Let (M”,ω”) be a closed symplectic manifold of dimension 2k. Consider the T-action on M:= M'× M” given by taking the product of the above T-action on M' with the trivial T-action on M”. This action is Hamiltonian for the symplectic form ω obtained by summing the pullbacks to M of ω' and ω” along the projections. A moment map for this T-action is given by the pullback to M of Φ' along the projection M → M'; we denote this moment map by Φ. Then (M,ω, Φ) is a compact complexity preserving complexity k T-space with moment map image given by Δ, as desired. §.§ Compact tall complexity one T-spaces In this section we introduce an important class of compact complexity one T-spaces. A compact complexity one T-space is called tall if no reduced space is a point. To shed light on Definition <ref> we observe that, if is a compact complexity one T-space, then the reduced space M_x is homeomorphic to a closed, connected orientable surface for any x ∈Φ(M)_reg (see Section <ref>). If is tall, then this holds for all x ∈Φ(M) (see <cit.>). 
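A simple family of examples (included only as an illustration) is obtained as follows: fix g ≥ 0, let Σ_g be a closed orientable surface of genus g, endow M = S^2 ×Σ_g with a product symplectic form, and let S^1 act by rotating the S^2 factor. The moment map is the pullback of the height function on S^2, its image is a closed interval, the fixed point set consists of the two surfaces {pole}×Σ_g, and the reduced space at every point of the interval (including the endpoints) is diffeomorphic to Σ_g. In particular, this compact complexity one T-space is tall, all of its points are regular, and every closed orientable surface arises as a reduced space of some compact tall complexity one T-space.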
Moreover, the following result holds. A compact complexity one T-space is tall if and only if it is complexity preserving. Let (M,ω,Φ) be a compact tall complexity one T-space and let v ∈ Φ(M) be a vertex. By Remark <ref>, Φ^-1(v) is either a fixed point or a fixed surface. Since the reduced space at v can be identified with Φ^-1(v) and since (M,ω,Φ) is tall, Φ^-1(v) has dimension two. Hence, since (M,ω,Φ) has complexity one, it satisfies property <ref> in Corollary <ref>; therefore, it is complexity preserving. Conversely, if (M,ω,Φ) is complexity preserving, then it satisfies property <ref> in Corollary <ref>. Hence, by <cit.>, no reduced space is a point and (M,ω,Φ) is tall. In <cit.> the authors classify tall complexity one T-spaces[In loc. cit. the authors consider a more general class of tall complexity one spaces, namely those for which M is connected but not necessarily compact and such that there exists an open convex set 𝒯 ⊆ 𝔱^* containing the image of the moment map with the property that Φ : M → 𝒯 is proper. However, we state all results in loc. cit. only in the compact case.]. Below we recall this classification. Henceforth, we fix a compact complexity one T-space (M,ω,Φ). As a consequence of <cit.> or <cit.>, any two reduced spaces of (M,ω,Φ) are homeomorphic. This motivates introducing the following notion. The genus of a compact tall complexity one T-space (M,ω,Φ) is the genus of the reduced space M_x for any x ∈ Φ(M). The following result is a stepping stone for the classification of compact tall complexity one T-spaces (see Proposition 2.2 in <cit.>, and Proposition 1.2 and Remark 1.9 in <cit.>). If (M,ω,Φ) is a compact tall complexity one T-space, then there exist a closed oriented surface Σ and a map f : M/T → Σ such that (Φ,f) : M/T ⟶ Φ(M)×Σ is a homeomorphism and the restriction f : Φ^-1(x)/T → Σ is orientation-preserving for any x ∈ Φ(M). Given two such maps f and f', there exists an orientation-preserving homeomorphism ξ : Σ' → Σ such that f is homotopic to ξ ∘ f' through maps that induce homeomorphisms M/T → Φ(M)×Σ. By Proposition <ref>, the genus of (M,ω,Φ) is the genus of Σ. The next invariant of tall complexity one T-spaces is related to the exceptional orbits (see Remark <ref>), and is introduced below. To this end, we observe that, given a closed surface Σ and a map f : M/T → Σ as in Proposition <ref>, its restriction to M_exc makes (Φ,f) : M_exc → Φ(M) × Σ injective. Let (M,ω, Φ), (M',ω',Φ') be compact tall complexity one T-spaces and let Σ, Σ' be closed oriented surfaces. * A painting of (M,ω,Φ) is a map f : M_exc → Σ such that (Φ,f) : M_exc → Φ(M) × Σ is injective. * An isomorphism of exceptional orbits is a homeomorphism i : M_exc → M'_exc satisfying Φ = Φ' ∘ i that sends each orbit to an orbit with the same symplectic slice representation. * A painting f : M_exc → Σ of (M,ω, Φ) is equivalent to a painting f' : M'_exc → Σ' of (M',ω', Φ') if there exists an isomorphism of exceptional orbits i : M_exc → M_exc' and an orientation-preserving homeomorphism ξ : Σ → Σ' such that f' ∘ i and ξ ∘ f are homotopic through paintings. By Proposition <ref>, we can associate an equivalence class of paintings to a compact tall complexity one T-space (see <cit.>). For our purposes, it is useful to introduce the following terminology. Let (M,ω,Φ) be a tall, compact complexity one T-space. The equivalence class of paintings [f] associated to (M,ω,Φ) is trivial if there exists a painting f : M_exc → Σ representing [f] that is constant on each connected component of M_exc. The classification of compact tall complexity one T-spaces is as follows.
(Karshon–Tolman, Theorem 1 in <cit.>, and Theorem 1.8 and Remark 1.9 <cit.>) Two compact tall complexity one T-spaces are isomorphic if and only if they have equal genera, equal Duistermaat-Heckman measures, and equivalent paintings. The invariants of a compact tall complexity one T-space determine the moment map image, as it is the support of the Duistermaat-Heckman measure (cf. <cit.>). § COMPACT MONOTONE HAMILTONIAN T-SPACES In this section we use ideas and techniques from equivariant cohomology, referring the reader to <cit.> for details and background. §.§ The weight sum formula In this paper we are mostly concerned with compact Hamiltonian T-spaces satisfying the following condition. A symplectic manifold (M,ω) is monotone if there exists λ∈ such that c_1=λ[ω], where c_1 is the first Chern class of (M,ω). It is positive monotone if λ>0. * If (M,ω) is compact and monotone, since [ω] ≠ 0, then λ in Definition <ref> is unique. * Let (M,ω) be a monotone symplectic manifold and let Ψ : (M',ω') → (M,ω) be a symplectomorphism. Since Ψ pulls back almost complex structures that are compatible with ω to almost complex structures that are compatible with ω', (M',ω') is monotone. Moreover, if (M,ω) is compact and if λ, λ' ∈ are such that c_1 = λ[ω] and c_1' = λ'[ω'], then λ = λ'. If (M,ω) is such that H^2(M;)=, then it is monotone (e.g. P^n). In general, (positive) monotonicity is very restrictive. In the presence of a Hamiltonian torus action, the following result holds. If (M,ω) is compact and monotone, and admits an effective Hamiltonian T-action, then (M,ω) is positive monotone. The proof follows mutatis mutandis that of <cit.>, in which it is assumed that M^S^1 is discrete (see <cit.>). Let H ≤ T be a one dimensional subtorus and let ϕ : M →𝔥^* be the induced moment map. We identify H ≃ S^1 and consider (M,ω, ϕ) as a Hamiltonian S^1-space. We observe that ϕ M → (Lie(S^1))^* is a Morse-Bott function; moreover, by (<ref>), the isotropy weights in the positive normal bundle to a fixed point are positive (cf. <cit.>). Therefore, if F_min (respectively F_max) denotes a fixed component on which ϕ attains its minimum (respectively maximum), all the isotropy weights in the normal bundle to F_min (respectively F_max) are positive (respectively negative). Moreover, even if some of the isotropy weights of p_min∈ F_min (respectively at p_max∈ F_max) are zero, by the effectiveness of the action some of them must be different from zero. Hence, the sum of the isotropy weights of p_min (resp.p_max) is strictly positive (respectively strictly negative). To complete the proof, it is enough to consider the equivariant extensions of [ω] and c_1 in the equivariant cohomology ring of M, which are respectively [ω-ϕ] and c_1^S^1, and to compare them at p_min and p_max to deduce that λ must be positive (see equation (5.1) in <cit.>). Throughout this paper, a Hamiltonian T-space is monotone if (M,ω) is. The following result is an immediate consequence of Remark <ref> and Proposition <ref>. If (M,ω, Φ) is a compact monotone Hamiltonian T-space, then there exists a unique λ >0 such that c_1 = [λω]. The next proposition extends <cit.>. If is a compact Hamiltonian T-space with c_1 = [ω], then there exists a unique w ∈^* such that the moment map Φ:=Φ + w satisfies the weight sum formula, i.e., Φ(p) =-∑_j=1^n α_j, for all p∈ M^T , where α_1,…,α_n ∈ℓ^* are the isotropy weights of p. Since c_1=[ω] and since the action is Hamiltonian, there exists a unique w ∈^* such that c_1^T+w=[ω-Φ]. 
Thus the moment map Φ + w satisfies c_1^T = [ω - (Φ + w)]. Since M is compact and the action is Hamiltonian, there exists p ∈ M^T. The equality in (<ref>) is obtained by comparing these two equivariant cohomology classes at p ∈ M^T and observing that c_1^T(p) = ∑_j=1^n α_j. Let (M,ω,Φ) be a compact Hamiltonian T-space with c_1 = [ω]. If (M',ω', Φ') is isomorphic to (M,ω,Φ), then c_1' = [ω'] by Remark <ref>. Let Ψ : (M,ω,Φ) → (M',ω',Φ') be an isomorphism. By Remark <ref>, Ψ is equivariant. Hence, by (<ref>), if w, w' ∈ 𝔱^* are as in Proposition <ref> for Φ and Φ' respectively, then w = w'. A compact monotone Hamiltonian T-space (M,ω,Φ) is normalized if * c_1 = [ω], and * the moment map Φ satisfies the weight sum formula (<ref>). In this case we call (M,ω,Φ) a normalized monotone Hamiltonian T-space. Since the isotropy weights of a fixed point lie in ℓ^*, Proposition <ref> has the following immediate consequence. If (M,ω,Φ) is a normalized monotone Hamiltonian T-space, then [ω] ∈ H^2(M;ℤ) and, for any p ∈ M^T, Φ(p) ∈ ℓ^*. The following result is an immediate consequence of Corollary <ref> and Proposition <ref>. If (M,ω,Φ) is a compact monotone Hamiltonian T-space, then there exist unique λ > 0 and w ∈ 𝔱^* such that (M,λω, λΦ + w) is normalized monotone. Classifying compact monotone Hamiltonian T-spaces is almost equivalent to classifying normalized monotone Hamiltonian T-spaces. More precisely, the following holds. Let (M,ω,Φ), (M',ω',Φ') be compact monotone Hamiltonian T-spaces. Let λ, λ' > 0 and v, v' ∈ 𝔱^* be as in Corollary <ref>. Then (M,ω,Φ) and (M',ω',Φ') are isomorphic if and only if λ = λ', v = v' and (M,λω, λΦ + v) is isomorphic to (M',λ' ω', λ'Φ' + v'). Suppose that (M,ω,Φ) and (M',ω',Φ') are isomorphic. Let Ψ : (M,ω) → (M',ω') be a symplectomorphism such that Φ' ∘ Ψ = Φ. By part <ref> of Remark <ref> and by Remark <ref>, λ = λ' and v = v'. Hence, Ψ : (M,λω) → (M',λ'ω') is a symplectomorphism and (λ'Φ' + v') ∘ Ψ = λΦ + v, i.e., Ψ is an isomorphism between (M,λω, λΦ + v) and (M',λ' ω', λ'Φ' + v'). The converse is entirely analogous and is left to the reader. §.§ Moment polytopes of monotone complexity preserving Hamiltonian T-spaces We recall that a polytope Δ in 𝔱^* can be described by its minimal representation (see Section <ref>): Δ = ⋂_i=1^l {w ∈ 𝔱^* | ⟨ w,ν_i ⟩ ≥ c_i} for some inward normals ν_1,…, ν_l ∈ 𝔱 and constants c_1,…, c_l ∈ ℝ. Such a polytope Δ is integral if its vertices belong to ℓ^*. If Δ is integral, then it is possible to choose the inward normal ν_i so that it is a primitive element of ℓ, for every i=1,…,l. The corresponding constants c_i's are therefore uniquely determined by this choice of ν_i's. A polytope Δ ⊂ 𝔱^* is reflexive if it is integral and ν_i ∈ ℓ in its minimal representation is primitive with corresponding c_i = -1, for all i=1,…, l. The following result is an immediate consequence of Definition <ref> and is stated below without proof (see <cit.>). For any reflexive polytope the origin is the only interior lattice point. Lemma <ref> and a result of Lagarias and Ziegler <cit.> imply the following result. Up to the action of GL(ℓ^*), there are only finitely many reflexive polytopes in 𝔱^*. For instance, there are sixteen two-dimensional reflexive polytopes (see <cit.>), five of which are also Delzant (see Figure <ref>). For a rational polytope Δ ⊂ 𝔱^*, given a vertex v of Δ one can choose the vectors α_i's in (<ref>), which support the edges coming out of v, to be primitive elements of ℓ^*; these vectors are uniquely determined and referred to as the weights of the vertex v.
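For instance, identifying 𝔱^* with ℝ^2 and ℓ^* with ℤ^2, the triangle Δ with vertices (-1,-1), (2,-1) and (-1,2) has minimal representation Δ = {w ∈ ℝ^2 | ⟨ w,(0,1)⟩ ≥ -1, ⟨ w,(1,0)⟩ ≥ -1, ⟨ w,(-1,-1)⟩ ≥ -1}: the inward normals (0,1), (1,0) and (-1,-1) are primitive and all the constants equal -1, so Δ is reflexive. Moreover, Δ is Delzant; for example, the weights of the vertex (-1,-1) are (1,0) and (0,1), which form a basis of ℤ^2. (This is the moment map image of ℂP^2 equipped with a suitable multiple of the Fubini–Study form and the standard T^2-action, and is one of the five reflexive Delzant polygons mentioned above; we include it here only as an illustration of Definition <ref>.)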
Reflexive Delzant polytopes are in particular rational and they can be characterized in terms of the weights of their vertices. More precisely, the following result, proved in various contexts by various authors, holds (see, in particular, <cit.>, <cit.> and <cit.>). Let Δ⊂^* be a d-dimensional Delzant polytope. The following conditions are equivalent: * Δ is a reflexive polytope. * Δ satisfies the weight sum formula, i.e., for each vertex v ∈Δ, v = - ∑_j=1^d α_j , where α_1,…,α_d are the weights of v. In <cit.> it is assumed that the origin is an interior point of Δ to prove that <ref> implies <ref>. However this follows by (<ref>). Indeed, consider the multiset 𝒲 of all the primitive vectors appearing as weights of vertices of Δ. Note that, if α∈𝒲 has multiplicity r, then so does -α. Hence the sum of all the weights in 𝒲 is 0 ∈^*. Therefore, if Δ satisfies (<ref>), then ∑_v∈𝒱 v = 0 , where 𝒱 is the set of vertices of Δ. Since Δ is the convex hull of its vertices, the interior points of Δ are precisely those that can be written as follows: ∑_v∈𝒱λ_v v with λ_v>0 for all v∈𝒱 and ∑_v∈𝒱λ_v=1 . Let λ_v=1/|𝒱| for all v∈𝒱. Then by (<ref>), we have 0 =∑_v∈𝒱λ_v v. Hence, (<ref>) yields that 0 belongs to the interior of Δ. The following technical lemma regarding reflexive Delzant polytopes is used extensively in Sections <ref> and <ref> below. Let Δ be a reflexive Delzant polytope in ^*, let ℱ be a facet of Δ supported on the affine hyperplane {w ∈^* |⟨ w, ν⟩ = -1}, let v be a vertex of Δ in ℱ, and let α_1,…, α_d ∈ℓ^* be the weights of v ordered so that ℱ⊂ v + _≥ 0⟨α_1,…,α_d-1⟩. Then ⟨α_d, ν⟩ = 1. Moreover, if e is the edge that is incident to v and comes out of ℱ, then there exists t_max∈_> 0 such that e = {v + t α_d | 0 ≤ t ≤ t_max}. By (<ref>), ⟨α_i, ν⟩ = 0 for all i=1,…, d-1. By Proposition <ref>, Δ satisfies the weight sum formula at v. Since v ∈ℱ, ⟨α_d, ν⟩ = 1. If e is an edge of Δ as in the statement, then there exists t_max > 0 such that e = {v + t α_d | 0 ≤ t ≤ t_max}. It remains to show that t_max is a positive integer. To this end, we observe that v':= v + t_maxα_d is a vertex of Δ. Since Δ is reflexive, v' ∈ℓ^*. By definition of weight, α_d is primitive in ℓ^*. Hence, t_max∈ℤ_>0. To conclude this section, we look at the relation between normalized monotone complexity preserving T-spaces and reflexive Delzant polytopes in ^*. We start with the following strengthening of Proposition <ref> under the additional assumption of monotonicity. The moment map image of a normalized monotone complexity preserving T-space is a reflexive Delzant polytope in ^*. Conversely, for every reflexive Delzant polytope Δ in ^* and for every integer k ≥ 0, there exists a normalized monotone complexity preserving Hamiltonian T-space of complexity k such that Φ(M) = Δ. Before proving Proposition <ref>, we recall the following result, stated below without proof (see <cit.>). Let be a compact symplectic toric manifold. If Φ(M) is a reflexive Delzant polytope, then is normalized monotone. Let be a normalized monotone complexity preserving T-space. By Proposition <ref>, Φ(M) is a Delzant polytope in ^*. It remains to show that Φ(M) is reflexive. Since is complexity preserving, by part <ref> of Corollary <ref>, the weights of Φ(M) at a vertex v are equal to the non-zero weights of any p ∈Φ^-1(v). Since is normalized monotone, Φ satisfies the weight sum formula (<ref>). Hence, Φ(M) also satisfies the weight sum formula (<ref>) and so, by Proposition <ref>, it is reflexive. 
Conversely, let Δ be a reflexive Delzant polytope in 𝔱^* and let k ≥ 0 be an integer. We adapt the second half of the proof of Proposition <ref> (and fix the notation therein) to show that we can make appropriate choices so that the resulting complexity preserving T-space of complexity k is normalized monotone. Since Δ is reflexive, by Proposition <ref>, (M',ω',Φ') is normalized monotone. Choose M” = ℂP^k and ω” to be a monotone symplectic form on ℂP^k such that c_1(ℂP^k) = [ω”]. The complexity preserving T-space (M,ω,Φ) constructed in the second half of the proof of Proposition <ref> has complexity k and is normalized monotone, as desired. We finish with the following simple, useful result. Let (M,ω,Φ) be a compact monotone complexity preserving T-space. If Φ(M) is reflexive Delzant, then (M,ω,Φ) is normalized monotone. If dim M = 0, there is nothing to prove, so we may assume dim M > 0. Let k be the complexity of (M,ω,Φ). Since (M,ω) is monotone, by the proof of Proposition <ref>, there exists a unique constant w ∈ 𝔱^* such that c_1^T + w = λ [ω - Φ]. We fix a vertex v ∈ Φ(M) and a fixed point p ∈ Φ^-1(v). Evaluating both sides of the above displayed equality at p, ∑_i=1^n-k α_i + w = - λ v, where α_1,…, α_n-k are the non-zero weights of p. Since (M,ω,Φ) is complexity preserving, the non-zero isotropy weights of p in M are precisely the weights of v in Φ(M). Hence, by Proposition <ref>, (<ref>) gives that -v + w = - λ v. Since this equality holds for any vertex v ∈ Φ(M), we have w = 0 and λ = 1, as desired. § COMPLETE INVARIANTS OF COMPACT MONOTONE TALL COMPLEXITY ONE T-SPACES §.§ The genus and a minimal facet In this section we explore the first consequences of the combination of tallness and monotonicity of compact complexity one T-spaces, recovering and extending some of the results in <cit.>. To this end, let (M,ω,Φ) be a monotone tall complexity one T-space of dimension 2n and let v ∈ Φ(M) be a vertex. Let N = L_1⊕…⊕ L_n-1 be the normal bundle to Σ:=Φ^-1(v) in M together with its T-equivariant splitting into T-invariant complex line bundles as in (<ref>). Let (M,ω,Φ) be a compact monotone tall complexity one T-space of dimension 2n and let v ∈ Φ(M) be a vertex that attains the minimum of DH. If c_1(Σ), c_1(L_i) denote the first Chern class of Σ and of the complex line bundle L_i for any i=1,…, n-1 respectively, then c_1(Σ)[Σ] > - ∑_i=1^n-1 c_1(L_i)[Σ]. Moreover, the genus of (M,ω,Φ) in the sense of Definition <ref> equals zero. Since (M,ω) is monotone, by Proposition <ref>, it is positive monotone. Hence, since Σ is a symplectic submanifold of (M,ω), 0 < c_1[Σ] = c_1(Σ)[Σ] + c_1(N)[Σ] = c_1(Σ)[Σ] + ∑_i=1^n-1 c_1(L_i)[Σ], whence (<ref>) holds. By Lemma <ref>, the right hand side of (<ref>) is non-negative; hence c_1(Σ)[Σ] > 0, which implies that Σ is diffeomorphic to a sphere, as desired. Lemma <ref> is a stepping stone towards the following important result. Let (M,ω,Φ) be a compact monotone tall complexity one T-space of dimension 2n. There exists a facet ℱ of Φ(M) such that DH(M_ℱ,ω_ℱ,Φ_ℱ) is constant and equal to the minimum of DH, where (M_ℱ,ω_ℱ,Φ_ℱ) is defined as in (<ref>). Moreover, for any vertex v ∈ ℱ, there exist n-2 non-zero isotropy weights α_1,…, α_n-2 of Φ^-1(v) such that * ℱ ⊂ v + ℝ_≥ 0⟨α_1, …, α_n-2⟩, and * the self-intersection of Φ^-1(v) in Φ^-1(e_j) equals zero for all j=1,…, n-2, where e_j ⊂ ℱ is the edge incident to v with tangent vector given by α_j. By Proposition <ref>, (M,ω,Φ) is complexity preserving. Hence, by Corollary <ref>, for any face ℱ̃ of Φ(M), DH(M_ℱ̃,ω_ℱ̃,Φ_ℱ̃) = DH |_ℱ̃.
Therefore, to prove the first statement, it suffices to prove that there exists a facet ℱ of Φ(M) such that DH |_ℱ is constant and equal to the minimum of DH – see Figure <ref>. By Corollary <ref>, the minimum of DH is attained at a vertex of Φ(M), say v_0. Let α_1,…, α_n-1 be the non-zero isotropy weights of the fixed surface Σ:=Φ^-1(v_0). By Lemma <ref>, DH( v_0 + ∑_i=1^n-1t_i α_i) = ∫_Σω - ∑_i=1^n-1t_i c_1(L_i)[Σ] for all t_1,…, t_n-1≥ 0 sufficiently small. By Lemma <ref>, 2 > - ∑_i=1^n-1c_1(L_i)[Σ]; moreover, by Lemma <ref>, c_1(L_i)[Σ]≤ 0 for all i=1,…, n-1. Thus at least n-2 of c_1(L_1)[Σ], …, c_1(L_n-1)[Σ] vanish; there is no loss of generality in assuming that c_1(L_i)[Σ]=0 for all i=1,…, n-2. Hence, DH( v_0 + ∑_i=1^n-1t_i α_i) = ∫_Σω - t_n-1 c_1(L_n-1)[Σ] for all t_1,…, t_n-1≥ 0 sufficiently small. In particular, the restriction of DH to a sufficiently small neighborhood of v_0 in (v_0 + _≥ 0⟨α_1, …, α_n-2⟩) ∩Φ(M) is constant and equal to DH(v_0), which, by assumption, is the minimum of DH. Let ℱ be the facet of Φ(M) that is contained in v_0 + _≥ 0⟨α_1, …, α_n-2⟩. Since DH is concave (see Proposition <ref>), since DH attains its minimum at v_0, and since v_0 ∈ℱ, DH|_ℱ is constant and equal to the minimum of DH. Moreover, since c_1(L_i)[Σ] = 0, the self-intersection of Σ in Φ^-1(e_i) is zero for each i= 1,…, n-2. It remains to show that the bullet points in the statement hold for all vertices of ℱ. However, since DH|_ℱ is constant and equal to the minimum of DH, it follows that any vertex of ℱ is a minimum of DH. Hence, the above argument gives the desired result. A facet as in Proposition <ref> plays an important role throughout Section <ref>. Let be a compact monotone tall complexity one T-space. A facet of Φ(M) satisfying the conclusions of Proposition <ref> is called a minimal facet of Φ(M) and denoted by ℱ_min. Given a minimal facet ℱ_min, the sheet corresponding to ℱ_min is denoted by (M_min,ω_min,Φ_min). We observe that, in spite of the notation, (M_min,ω_min,Φ_min) clearly depends on ℱ_min. However, we trust that the notation does not cause confusion. Let be a compact monotone tall complexity one T-space. If ℱ_min is a minimal facet of Φ(M), then M_min contains no isolated fixed point of . By Remark <ref> and Proposition <ref>, (M_min,ω_min,Φ_min) is a compact tall complexity one T/H_ℱ_min-space. By Corollary <ref> and Proposition <ref>, DH(M_min,ω_min,Φ_min) is constant. Hence, by Lemma <ref>, there are no singular values in the (relative) interior of Φ_min(M_min). Thus, by Lemma <ref>, (M_min,ω_min,Φ_min) has no isolated fixed points for the T/H_ℱ_min-action and, hence, for the T-action. To conclude this section, we prove that certain self-intersections of the pre-image of vertices in a minimal facet are independent of the vertices. To this end, we say that an edge e of a polytope Δ comes out of a facet ℱ if it is not contained in ℱ but it is incident to a vertex of Δ contained in ℱ. Let be a compact monotone tall complexity one T-space and let ℱ_min be a minimal facet of Φ(M). There exists s ∈ such that, for any vertex v ∈ℱ_min, the self-intersection of Φ^-1(v) in Φ^-1(e) equals s, where e is the edge of Φ(M) that comes out of ℱ_min and is incident to v. Let v_1,v_2 ∈ℱ_min be vertices and let e_1,e_2 be the edges that come out of ℱ_min that are incident to v_1 and to v_2 respectively. By Proposition <ref>, DH(v_1) = DH (v_2). Let Σ_i:= Φ^-1(v_i) for i=1,2. Thus, by Lemma <ref>, [ω](Σ_1) = [ω](Σ_2). Since (M,ω) is monotone, it is positive monotone by Proposition <ref>. 
Hence, c_1(Σ_1) = c_1(Σ_2). Moreover, by Proposition <ref>, the only self-intersection of Σ_i that can possibly be different from zero is that in Φ^-1(e_i), for i=1,2. Hence, since Σ_1 ≃ Σ_2, the result follows. §.§ A characterization of isolated fixed points Henceforth, we assume that (M,ω,Φ) is a normalized monotone tall complexity one T-space unless otherwise stated. Moreover, we fix a minimal facet ℱ_min of Φ(M) (which exists by Proposition <ref>). By Proposition <ref>, Φ(M) is a reflexive Delzant polytope. Hence, by Definition <ref>, there exists a unique primitive ν_min ∈ ℓ such that ℱ_min ⊂ {w ∈ 𝔱^* | ⟨ w,ν_min⟩ = -1}, Φ(M) ⊂ {w ∈ 𝔱^* | ⟨ w,ν_min⟩ ≥ -1 }. Finally, if v ∈ ℱ_min is any vertex and α_1,…,α_n-1 are the non-zero isotropy weights of Φ^-1(v), then, by Proposition <ref>, we may assume that the weights are ordered so that ℱ_min ⊂ v + ℝ_≥ 0⟨α_1,…, α_n-2⟩. In particular, by Lemma <ref>, ⟨α_n-1, ν_min⟩ = 1. Let (M,ω,Φ) be a normalized monotone tall complexity one T-space of dimension 2n. If p ∈ M^T is isolated, then there exists an edge e of Φ(M) that comes out of ℱ_min such that Φ(p) is (the only element) in the intersection of e with the linear hyperplane {w ∈ 𝔱^* | ⟨ w,ν_min⟩ = 0 }. Moreover, if v ∈ ℱ_min is the vertex that is incident to e and if α_1,…,α_n-1 are the non-zero isotropy weights of Φ^-1(v) ordered so that (<ref>) holds, then the isotropy weights of p are α_1,…,α_n-2, α_n-1,-α_n-1. Since (M,ω,Φ) is normalized monotone, by Corollary <ref>, Φ(p) ∈ ℓ^*. Hence, since ν_min ∈ ℓ, ⟨Φ(p), ν_min⟩ ∈ ℤ. Moreover, by Corollary <ref>, Φ(p) ∉ ℱ_min; hence, by (<ref>), ⟨Φ(p), ν_min⟩ is a non-negative integer. Let β_1,…, β_n ∈ ℓ^* be the isotropy weights of p. By Lemma <ref>, there exists an isotropy weight β of p such that ⟨β, ν_min⟩ < 0. Without loss of generality, we may assume that β_n = β. Let (N,ω_N,Φ_N) be the sheet along β_n given by Lemma <ref>, let H = exp({ξ ∈ 𝔱 | ⟨β_n,ξ⟩ ∈ ℤ}) be its stabilizer, and let q ∈ M^T ∩ N be a fixed point satisfying the conclusions of Corollary <ref>, i.e., * Φ(q) = Φ_N(q) is a global extremum of Φ_N, * -β_n is an isotropy weight of q, and * ⟨Φ(q), ν_min⟩ < ⟨Φ(p),ν_min⟩. We split the proof in two cases: first, we assume that ⟨Φ(p), ν_min⟩ is minimal among isolated fixed points and, second, we deduce the general case from this special one. Case 1: We suppose that ⟨Φ(p), ν_min⟩ is minimal among isolated fixed points. By (<ref>), the fixed point q is not isolated. Hence, it lies on a fixed surface. By Proposition <ref>, Φ(q) is a vertex of Φ(M). Let α_1,…, α_n-1 be the non-zero isotropy weights of Φ^-1(Φ(q)) ordered so that α_n-1 = - β_n. By Proposition <ref>, α_1,…, α_n-1 are a basis of ℓ^*. Hence, β_n is a primitive element in ℓ^*. Moreover, since H must be one of the stabilizers of dimension n-2 for points sufficiently close to q in M, it follows that dim N = 4 and that Φ(p) + ⟨β_n⟩ contains an edge of Φ(M). Hence, since (M,ω,Φ) is tall, Φ(p) lies in the (relative) interior of this edge. Hence, by Corollary <ref>, there exists precisely one i=1,…, n-1 such that β_i is a multiple of β_n; without loss of generality, we assume that i=n-1. By Remark <ref>, the ℤ-span of β_1,…, β_n-2,β_n equals ℓ^*. Hence, by a dimension count, β_1,…, β_n-2,β_n are linearly independent. We claim that ⟨β_j, ν_min⟩ ≥ 0 for all j = 1,…, n-2. Suppose, on the contrary, that there exists an index j=1,…, n-2 such that ⟨β_j, ν_min⟩ < 0. By Corollary <ref> applied to β_j and by the above argument, Φ(p) must lie in the (relative) interior of an edge of Φ(M) that is contained in Φ(p) + ⟨β_j⟩.
(Observe that this uses the fact that ⟨Φ(p), ν_min⟩ is minimal among all isolated fixed points.) Since any point in a convex polytope is contained in the (relative) interior of at most one edge, it follows that ⟨β_j⟩ = ⟨β_n⟩, which contradicts the linear independence of β_1,…, β_n-2,β_n. Since β_n-1 is a multiple of β_n, since β_n is primitive, and since Φ(p) lies in the (relative) interior of an edge of Φ(M), there exists a positive integer λ such that β_n-1 = -λβ_n. Since (M,ω,Φ) is normalized, the weight sum formula at p, Φ(p) = -∑_j=1^n β_j, implies that 0 ≤ ⟨Φ(p), ν_min⟩ = - ∑_j=1^n-2⟨β_j , ν_min⟩ + (λ - 1)⟨β_n, ν_min⟩ ≤ 0, since ⟨β_j, ν_min⟩ ≥ 0 for all j=1,…, n-2, λ - 1 ≥ 0 and ⟨β_n, ν_min⟩ < 0. Therefore ⟨Φ(p), ν_min⟩ = 0, λ = 1 and ⟨β_j, ν_min⟩ = 0 for all j=1,…, n-2. Since ⟨Φ(q), ν_min⟩ < ⟨Φ(p), ν_min⟩ by (<ref>), since ⟨Φ(q), ν_min⟩ ∈ ℤ by Corollary <ref>, and by (<ref>), we have that ⟨Φ(q), ν_min⟩ = -1, i.e., v:=Φ(q) is a vertex of ℱ_min. Moreover, since α_n-1 = -β_n and since ⟨β_n, ν_min⟩ < 0, the edge incident to v contained in Φ(p) + ⟨β_n ⟩ comes out of ℱ_min. Since β_n-1 = -λβ_n and λ = 1, it follows that β_n-1 = α_n-1. We observe that the set (!) {β_1,…,β_n-2} is precisely the multiset of isotropy weights for the T-action on the normal bundle to the T-invariant submanifold N at the point p. Hence, this set equals {α_1,…,α_n-2} modulo ⟨β_n ⟩ = ⟨α_n-1⟩, since the latter is the multiset of isotropy weights for the T-action on the normal bundle to the T-invariant submanifold N at another point. The affine hyperplane v + ⟨α_1,…,α_n-2⟩ contains ℱ_min by (<ref>). Hence, ⟨α_j, ν_min⟩ = 0 for all j =1,…, n-2. Since ⟨α_n-1, ν_min⟩ > 0 and ⟨β_j, ν_min⟩ = 0 for all j=1,…, n-2, then {β_1,…,β_n-2} = {α_1,…,α_n-2}. This completes the proof of the result under the hypothesis that ⟨Φ(p), ν_min⟩ is minimal among all isolated fixed points. Case 2: To conclude the proof, it suffices to show that, if p ∈ M^T is isolated, then ⟨Φ(p), ν_min⟩ is minimal. Suppose not; then, by the above argument and since ⟨Φ(p), ν_min⟩ ∈ ℤ_≥ 0, there exists p' ∈ M^T such that ⟨Φ(p'), ν_min⟩ > 0 and minimal among fixed points with positive pairing. Let β_1',…, β_n' ∈ ℓ^* be the isotropy weights of p'. By Lemma <ref>, we may assume that ⟨β_n', ν_min⟩ < 0. We consider the sheet (N',ω_N',Φ_N') along β_n' given by Lemma <ref> and the point q' ∈ M^T ∩ N' satisfying the conclusions of Corollary <ref>. In analogy with (<ref>), ⟨Φ(q'), ν_min⟩ < ⟨Φ(p'), ν_min⟩. Hence, by minimality, q' is either isolated and satisfies ⟨Φ(q'), ν_min⟩ = 0, or is not isolated. In either case, by the arguments used in the special case above, β'_n ∈ ℓ^* is primitive, dim N' = 4 and Φ(p') lies in the (relative) interior of an edge of Φ(M). In particular, there is precisely one other isotropy weight of p' that is collinear with β'_n, say β'_n-1. Moreover, as above, ⟨β'_j, ν_min⟩ ≥ 0 for all j = 1,…, n-2. Set β'_n-1 = -λ' β'_n for some positive integer λ'; since β'_n is primitive, λ' ≥ 1. We use again the weight sum formula (<ref>) and, in analogy with (<ref>), we obtain the following absurd string of inequalities: 0 < ⟨Φ(p'), ν_min⟩ = - ∑_j=1^n-2⟨β'_j , ν_min⟩ + (λ' - 1)⟨β'_n, ν_min⟩ ≤ 0, since ⟨β'_j, ν_min⟩ ≥ 0 for all j=1,…, n-2, λ' - 1 ≥ 0 and ⟨β'_n, ν_min⟩ < 0.
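As a consistency check on Proposition <ref> in the lowest interesting dimension (this example is purely combinatorial and is not used in what follows), suppose that dim M = 6, identify 𝔱^* with ℝ^2 and ℓ^* with ℤ^2, and suppose that Φ(M) is the reflexive Delzant triangle with vertices (-1,-1), (2,-1) and (-1,2), with ℱ_min the facet lying on {w ∈ ℝ^2 | ⟨ w,(0,1)⟩ = -1}, so that ν_min = (0,1). If p ∈ M^T is an isolated fixed point associated as in Proposition <ref> to the vertex v = (-1,-1) and to the edge e joining (-1,-1) to (-1,2), then the non-zero isotropy weights of Φ^-1(v) are α_1 = (1,0) and α_2 = (0,1), the isotropy weights of p are (1,0), (0,1) and (0,-1), and the weight sum formula gives Φ(p) = -((1,0)+(0,1)+(0,-1)) = (-1,0), which is indeed the unique point of e on the linear hyperplane {w ∈ ℝ^2 | ⟨ w,ν_min⟩ = 0}.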
We define the following subgroups of T: H:=exp( {ξ∈|⟨α_i , ξ⟩ = 0, for all i=1,…, n-2}), which is of dimension 1, and T':=exp( {ξ∈|⟨α_n-1 , ξ⟩ = 0}), which is of codimension 1. Observe that T ≃ T' × H; moreover, we use the given inner product to identify the duals 𝔥^*, (')^* of the Lie algebras of H and T with ⟨α_n-1⟩ and ⟨α_1,…, α_n-2⟩ respectively. We start by looking at the local model determined by the above isotropy weights (see Section <ref>). We consider the following T-action on ^n exp(ξ)· (z_1,…,z_n-2,z_n-1,z_n)= (e^ 2π i ⟨α_1 , ξ⟩z_1,…, e^ 2π i ⟨α_n-2 , ξ⟩z_n-2,e^ 2π i ⟨α_n-1 , ξ⟩ z_n-1, e^ 2π i ⟨ -α_n-1 , ξ⟩ z_n ) for ξ∈, with moment map Φ_0: ^n →^* given by Φ_0(z_1,…, z_n) = π(∑_j=1^n-2α_j |z_j|^2 + α_n-1(|z_n-1|^2-|z_n|^2) ). From (<ref>) it is clear that 0 ∈^n is a fixed point, that the circle H acts trivially on ^n-2 = ⟨ z_1,…, z_n-2⟩, and that the (n-2)-dimensional torus T' acts trivially on ^2=⟨ z_n-1, z_n⟩. Therefore the linear T-action on ^n of (<ref>) splits as the product of a toric T'-action on ^n-2 = ⟨ z_1,…, z_n-2⟩, and a complexity one H-action on ^2 = ⟨ z_n-1, z_n⟩. Moreover, * the stabilizer in T of a point q := (z_1,…,z_n) ∈^n is the product of the stabilizer in T' of q_1:= (z_1,…, z_n-2) ∈^n-2 and of the stabilizer in H of q_2:=(z_n-1,z_n) ∈^2, * the symplectic slice representation of q ∈^n for the action of T splits as the product of the symplectic slice representations of q_1∈^n-2 for the action of T' and of q_2 ∈^2 for the action of H, and * a point q∈^n is exceptional with respect to the action of T if and only if at least one of q_1 ∈^n-2 and q_2∈^2 is exceptional with respect to the corresponding actions of T' and H. We observe that property <ref> follows from properties <ref> and <ref>. Hence, in order to understand properties of the product, we consider each factor separately. This is the content of the following two results. Consider ^n-2 with the above linear toric T' action. Then every point in ^n-2 is regular for the action of T'. Moreover, for each q_1= (z_1,…, z_n-2) ∈^n-2, the subset J:={ j ∈{1,…,n-2}| z_j≠ 0} is the unique subset such that * the moment map image π∑_j=1^n-2α_j |z_j|^2 ∈ (')^* lies in _> 0⟨{α_j | j ∈ J }⟩ (if J = ∅, then q_1 = 0 and the moment map image equals zero), * the stabilizer of q_1 is K_J:= exp( {ξ∈'|⟨α_j , ξ⟩ = 0 for all j∈ J}), and * the isotropy weights of q_1 are {α_j | j ∉ J}⊂ (')^*, where we identify Lie(K_J)^* with ⟨{α_j| j∉ J ∪{n-1}}⟩⊆ (')^*⊂^*. Conversely, given any subset J ⊆{1,…,n-2} and any w ∈_> 0⟨{α_j | j ∈ J }⟩, there exists q_1 = (z_1,…, z_n-2) ∈^n-2 such that π∑_j=1^n-2α_j |z_j|^2 =w, the stabilizer of q_1 is K_J, and the isotropy weights of q_1 are {α_j | j ∉ J}. Finally, the subset of points with trivial stabilizer is path-connected and dense. By Lemma <ref>, every point in a complexity zero Hamiltonian space is regular. The linear toric T'-action on ^n-2 is given explicitly by exp(ξ')· (z_1,…,z_n-2)=(e^ 2π i ⟨α_1 , ξ' ⟩z_1,…, e^ 2π i ⟨α_n-2 , ξ' ⟩z_n-2) for ξ'∈' , (cf. (<ref>)). By definition of J, the moment map image π∑_j=1^n-2α_j |z_j|^2 lies in _> 0⟨{α_j | j ∈ J }⟩. Since α_1,…,α_n-2 are linearly independent, J is the only such subset of {1,…,n-2}. This proves the first bullet point. Next we prove the second bullet point. Let K be the stabilizer of q_1. By (<ref>), if ξ' ∈' then exp(ξ') ∈ K if and only if ⟨α_j,ξ'⟩∈ for all j ∈ J. However, since each α_j is primitive, K=exp( {ξ'∈'|⟨α_j, ξ'⟩∈ for all j∈ J})=exp( {ξ'∈'|⟨α_j, ξ'⟩ =0 for all j∈ J}), which, by definition, is exactly K_J. 
We turn to the proof of the third bullet point. The symplectic slice representation of q_1 is the following representation of K_J: We set ^J: ={(w_1,…, w_n-2) ∈^n-2| w_j = 0 for all j ∈ J}. This is a T'-invariant complex subspace of ^n-2 that can be identified symplectically with the symplectic normal to the T'-orbit of q_1, once the tangent space at q_1 is identified with ^n-2. Under this identification, since the T'-action on ^n-2 is linear, the K_J-action on ^J is given by the restriction of the T'-action to K_J. The isotropy weights of this K_J-action are given by the set {α_j | j ∉ J}⊂ (')^*. Conversely, given a subset J ⊆{1,…,n-2} and w ∈_> 0⟨{α_j | j ∈ J }⟩, there exist positive constants λ_j for j ∈ J such that w = ∑_j ∈ Jλ_j α_j. The point q_1 = (z_1,…, z_n-2) ∈^n-2 with coordinates given by z_j = π^-1√(λ_j) if j ∈ J 0 if j ∉ J is such that π∑_j=1^n-2α_j |z_j|^2 =w, its stabilizer is K_J, and its isotropy weights are {α_j | j ∉ J}. Finally, q_1 = (z_1,…, z_n-2) ∈^n-2 has trivial stabilizer if and only if z_j ≠ 0 for all j=1,…, n-2. The subset {(z_1,…, z_n-2) ∈^n-2| z_j ≠ 0 for all j =1,…, n-2 } is clearly path-connected and dense. Consider ^2 with the above linear complexity one H-action. A point q_2 ∈^2 is exceptional if and only if q_2 = (0,0). Moreover, q_2 = (0,0) is the only point stabilized by H. In this case, the isotropy weights of q_2 are {± α_n-1}. We fix an isomorphism between H and S^1 so that α_n-1 corresponds to +1. Under this isomorphism, the above linear H-action on ^2 can be identified with the linear S^1-action on ^2 with weights equal to +1 and -1. The result then follows from a simple computation and from Lemma <ref>. Theorem <ref> and Lemmas <ref>, <ref> imply the following result that is central to this section. Let be a complexity one T-space of dimension 2n and let p ∈ M^T be isolated with isotropy weights α_1,…,α_n-2,α_n-1,-α_n-1. There exists an open neighborhood U of p such that the following are equivalent: * q ∈ U is exceptional, and * Φ(q) ∈Φ(p) + _≥ 0⟨{α_j | j = 1,…, n-2}⟩ and the stabilizer of q contains H. Moreover, given q ∈ U exceptional, if J ⊆{1,…, n-2} is defined by Φ(q) ∈Φ(p) + _> 0⟨{α_j | j ∈ J }⟩, then * the stabilizer of q is K_J × H, where K_J is defined in (<ref>), and * the isotropy weights of q are {α_j | j ∉ J}∪{± α_n-1}, where we identify Lie(K_J)^*⊆ (')^* ⊂^* with ⟨{α_j | j∉ J ∪{n-1}}⟩. Conversely, given any subset J ⊆{1,…, n-2} and any w ∈Φ(p) + _>0⟨α_j | j ∈ J }⟩, there exists an exceptional point q ∈ U such that Φ(q) = w, the stabilizer of q is K_J × H, and the isotropy weights of q are {α_j | j ∉ J}∪{± α_n-1}. Finally, the subset {q ∈ U | q is exceptional and has stabilizer H } is path-connected and dense in {q ∈ U | q is exceptional}. By Theorem <ref>, it suffices to consider the local model determined by the isotropy weights, p = 0 ∈^n and Φ(p) = 0. By property <ref>, and Lemmas <ref> and <ref>, a point q = (q_1,q_2) ∈^n is exceptional if and only if q_2 = (0,0), which is also equivalent to the stabilizer of q_2 being H. Suppose that q = (q_1,q_2) is exceptional and let J ⊆{1,…, n-2} be the subset given by Lemma <ref>. Since q_2 = (0,0), by (<ref>), Φ(q) lies in Φ(p) + _> 0⟨{α_j | j ∈ J }⟩ if and only if π∑_j=1^n-2α_j |z_j|^2 ∈ (')^* lies in _> 0⟨{α_j | j ∈ J }⟩. Hence, J is the unique subset of {1,…, n-2} such that (<ref>) holds. Properties <ref> and <ref> in the statement follow immediately from <ref> and <ref> in the discussion preceding Lemma <ref>, and from Lemmas <ref> and <ref>. 
Conversely, let J ⊆{1,…, n-2} be a subset and w ∈_> 0⟨{α_j | j ∈ J }⟩. Let q_1 ∈^n-2 be the point given by Lemma <ref>. By property <ref> in the discussion preceding Lemma <ref> and Lemma <ref>, the point q = (q_1,0,0) ∈ V is exceptional. Moreover, by (<ref>), Φ(q) = w. By Lemmas <ref> and <ref>, and by properties <ref> and <ref>, the stabilizer of q is K_J × H and the isotropy weights of q are {α_j | j ∉ J}∪{± α_n-1}, as desired. Finally, by Lemmas <ref> and <ref>, {q =(q_1,q_2) ∈^n-2×^2 | q is exceptional and has stabilizer H } equals {q = (q_1,0,0) ∈^n-2×^2 | q_1 has trivial stabilizer}. By Lemma <ref>, {q_1 ∈^n-2| q_1 has trivial stabilizer} is path-connected and dense in ^n-2, thus completing the proof. The subset J associated to an exceptional point near the isolated fixed point of Lemma <ref> has the following useful property. Let be a complexity one T-space of dimension 2n and let p ∈ M^T be isolated with isotropy weights α_1,…,α_n-2,α_n-1,-α_n-1. Let U be the open neighborhood of p given by Lemma <ref>. Given exceptional points q, q' ∈ U, let J, J' ⊆{1,…, n-2} be the subsets corresponding to q, q' as in Lemma <ref>. The symplectic slice representations of q and q' are isomorphic if and only if J = J'. If J = J', then by parts <ref> and <ref> of Lemma <ref>, the points q and q' have equal stabilizers and the same isotropy weights. Since their common stabilizer is connected, it follows that they have isomorphic symplectic slice representations. Conversely, suppose that q and q' have isomorphic symplectic slice representations. Hence, by Lemma <ref>, they have connected stabilizers, so that K_J = K_J'. Since the dual of the Lie algebra of K_J can be identified with ⟨{α_j | j∉ J ∪{n-1}}⟩, and since α_1,…, α_n-2 are linearly independent, it follows that J = J'. Throughout this section, we apply Lemma <ref> and Corollary <ref> to an isolated fixed point in a normalized monotone tall complexity one T-space. In this case, H = H_ℱ_min, the stabilizer of (M_ℱ_min,ω_ℱ_min,Φ_ℱ_min), see Definition <ref>. Intuitively speaking, the next result is the `global version' of Lemma <ref> for normalized monotone tall complexity one T-spaces. Let be a normalized monotone tall complexity one T-space of dimension 2n. Let q ∈ M be exceptional. There exist p ∈ M^T isolated and a unique subset J ⊆{1,…, n-2} such that, if α_1,…,α_n-2,α_n-1,-α_n-1 are the isotropy weights of p as in Proposition <ref>, then * the moment map image Φ(q) lies in Φ(p) + _>0⟨{α_j | j ∈ J}⟩, * the stabilizer of q is K_J × H_ℱ_min, where K_J ≤ T' is as in (<ref>), and * the isotropy weights of q are {α_j | j ∉ J}∪{± α_n-1} (see part <ref> of Lemma <ref>). By Lemma <ref>, the sheet (N,ω_N,Φ_N) through q is exceptional. Since N is compact, it contains a fixed point p ∈ M^T that is exceptional and therefore isolated by Lemma <ref>. Since N is connected, by the principal orbit theorem (see <cit.>), there exists a relatively open, dense and connected subset N' of N such that, if q' ∈ N', then q and q' have isomorphic symplectic slice representations. In particular, if U is the open neighborhood of p given by Lemma <ref>, then U ∩ N' is not empty; moreover, for all q' ∈ U ∩ N', the symplectic slice representation of q' is isomorphic to that of q. 
By Lemma <ref> and Corollary <ref>, there exists a unique subset J ⊆{1,…, n-2} such that, for all q' ∈ U ∩ N', * the moment map image Φ(q') lies in Φ(p) + _>0⟨{α_j | j ∈ J}⟩, * the stabilizer of q' is K_J × H_ℱ_min, where K_J ≤ T' is as in (<ref>), and * the isotropy weights of q' are {α_j | j ∉ J}∪{± α_n-1}. Since U ∩ N ≠∅, the second and third bullet points imply properties <ref> and <ref>. To see that property <ref> holds, we observe that, by the first bullet point, Φ(U ∩ N') is contained in Φ(p) + _>0⟨{α_j | j ∈ J}⟩. Since N_reg is dense in N, we have that Φ(U ∩ N) is contained in Φ(p) + _≥ 0⟨{α_j | j ∈ J}⟩. On the other hand, since the stabilizer of q is K_J × H_ℱ_min, the sheet (N,ω_N,Φ_N) is a compact Hamiltonian T”-space, where T” = T/(K_J × H_ℱ_min) ≃ T' /K_J. By construction, we may identify the dual of the Lie algebra of T” with Φ(p) + ⟨{α_j | j ∈ J}⟩. Hence, by the Convexity Package (Theorem <ref>), the moment map image Φ_N(N) = Φ(N) is a convex polytope in Φ(p) + ⟨{α_j | j ∈ J}⟩. Since Φ(p) + _≥ 0⟨{α_j | j ∈ J}⟩ is convex in Φ(p) + ⟨{α_j | j ∈ J}⟩ and since Φ(U ∩ N) is contained in Φ(p) + _≥ 0⟨{α_j | j ∈ J}⟩, Φ(N) is contained in Φ(p) + _≥ 0⟨{α_j | j ∈ J}⟩. In particular, the interior of Φ(N) is contained in Φ(p) + _>0⟨{α_j | j ∈ J}⟩. Since q is a regular point for the moment map Φ_N, the moment map image Φ_N(q) = Φ(q) lies in the (relative) interior of Φ(N), as desired. By Lemma <ref>, if q is exceptional, then the moment map Φ(q) can be used to reconstruct the symplectic slice representation of q. To see this, we observe that, by property <ref> and Proposition <ref>, Φ(q) lies in the affine hyperplane Φ(p)+{w ∈^* |⟨ w, ν_min⟩ = 0}. We recall that a basis for the linear subspace {w ∈^* |⟨ w, ν_min⟩ = 0} is given by α_1,…, α_n-2 (cf. (<ref>)). Hence, by Lemma <ref>, Φ(q) determines the subset J uniquely, and J determines the stabilizer and the isotropy weights of q. Since the stabilizer of q is connected, the claim follows. Our next aim is to prove Proposition <ref>, which plays an important role in several key results below (e.g., Theorems <ref> and <ref>. We start with the following result. Let be a normalized monotone tall complexity one T-space. The following are equivalent: * there exists an isolated fixed point, and * for each edge e that comes out of ℱ_min, there exists an isolated fixed point p ∈ M^T such that Φ(p) ∈ e. Clearly <ref> implies <ref>. Conversely, suppose that there exists an isolated fixed point p ∈ M^T. If M = 4, there is nothing to prove, so we may assume that M ≥ 6. By Proposition <ref>, there exists an edge e that comes out of ℱ_min such that Φ(p) ∈ e. Let v ∈ℱ_min be the vertex to which e is incident and let α_1,…, α_n-2,α_n-1 be the non-zero isotropy weights of Φ^-1(v) ordered so that (<ref>) holds. For j=1,…, n-2, let v_j ∈ℱ_min be the vertex that lies on the edge supported on v + _≥ 0⟨α_j ⟩ and is not v. Let e_j be the edge that comes out of ℱ_min that is incident to v_j. We claim that there exists an isolated fixed point p_j such that Φ(p_j) ∈ e_j (see Figure <ref>). To this end, by Lemma <ref>, there exists an exceptional point q ∈ M arbitrarily close to p such that Φ(q) ∈Φ(p) + _>0⟨α_j ⟩; moreover, the stabilizer of q has codimension 1. Let (N,ω_N,Φ_N) be the sheet through q. Since q is exceptional, so is (N,ω_N,Φ_N); furthermore, p ∈ N by construction. Hence, (N,ω_N,Φ_N) is a compact symplectic toric manifold with moment map image contained in Φ(p) + _≥ 0⟨α_j ⟩. Let p_j ∈ M^T ∩ N be the unique fixed point such that Φ(p_j) ∈Φ(p) + _>0⟨α_j ⟩. 
Since (N,ω_N,Φ_N) is exceptional, so is p_j; moreover, by Lemma <ref>, p_j is isolated. Hence, by Proposition <ref>, the image Φ(p_j) lies on an edge that comes out of ℱ_min. This edge is necessarily e_j: To see this, we observe that the moment map image Φ(N) is contained in the affine two-dimensional plane v + ⟨α_j,α_n-1⟩. This plane supports a two-dimensional face of Φ(M) that contains e and the edge of ℱ_min that is incident to both v and v_j. Hence, there exists only one other edge that is incident to v_j that is contained in this affine plane. Since this plane intersects ℱ_min precisely in the edge that is incident to both v and v_j, by (<ref>) applied to the non-zero isotropy weights of Φ^-1(v_j), the other edge that is incident to v_j and contained in the above affine plane must come out of ℱ_min, i.e., it must be e_j. By the last paragraph, <ref> holds for each edge that comes out of ℱ_min that is incident to a vertex of ℱ_min that is adjacent to v in ℱ_min (i.e., there exists an edge of ℱ_min that is incident to both vertices). We define the following relation on the set of vertices of ℱ_min: v_1 ∼ v_2 ⇔ either v_1=v_2 or v_1 is adjacent to v_2. Since the transitive closure of the above relation has one equivalence class and since there is a one-to-one correspondence between edges that come out of ℱ_min and vertices of ℱ_min, <ref> holds. As a consequence of Lemma <ref>, we obtain the following sufficient condition for a normalized monotone tall complexity one T-space to be without isolated fixed points. Let be a normalized monotone tall complexity one T-space. If there is a vertex of Φ(M) on the linear hyperplane {w ∈^* |⟨ w, ν_min⟩ = 0}, then there are no isolated fixed points. Moreover, M_exc = ∅. Let v ∈Φ(M) be a vertex of Φ(M) such that ⟨ v, ν_min⟩ = 0. First we show that there is an edge e that comes out of ℱ_min that is incident to v. Let α_1,…, α_n-1 be the non-zero isotropy weights of Φ^-1(v). By Lemma <ref>, we may assume that ⟨α_n-1, ν_min⟩ < 0. Let e be the edge that is contained in v + _≥ 0⟨α_n-1⟩ and let v' ∈Φ(M) be the other vertex to which e is incident. By construction, ⟨ v', ν_min⟩ < 0. Moreover, since is normalized monotone, Φ(M) is integral. Therefore, by (<ref>), ⟨ v', ν_min⟩ = -1, i.e., v' is a vertex of ℱ_min. Since ⟨α_n-1, ν_min⟩ < 0, e is an edge of Φ(M) that comes out of ℱ_min. Hence, v is the only element in the intersection of e and {w ∈^* |⟨ w, ν_min⟩ = 0}. By Theorem <ref> and Proposition <ref>, there is no isolated fixed point that is mapped to e under Φ. Hence, by Lemmas <ref>, <ref> and <ref>, the result follows. The next result plays a key role throughout the paper. Let be a normalized monotone tall complexity one T-space of dimension 2n. If (N,ω_N,Φ_N) is an exceptional sheet that is stabilized by a one-dimensional subgroup H, then H = H_ℱ_min and Φ(N) = Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0}. Since (N,ω_N,Φ_N) is exceptional, every point in N is exceptional. Moreover, since (N,ω_N,Φ_N) is stabilized by H, there exists a point q ∈ N with stabilizer equal to H. Hence, by part <ref> of Lemma <ref>, there exist p ∈ M^T isolated and a unique subset J ⊆{1,…, n-2} such that, if α_1,…,α_n-2,α_n-1,-α_n-1 are the isotropy weights of p as in Proposition <ref>, then the stabilizer of q' is K_J × H_ℱ_min, where K_J ≤ T' is as in (<ref>). By definition, K_J is connected. Hence, if the dimension of the stabilizer of q is one, then it must be H_ℱ_min, thus proving the first statement. 
By Proposition <ref>, Φ(M) is a reflexive Delzant polytope and therefore the origin lies in the interior of Φ(M) (see Lemma <ref>). Hence, the interior of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} in {w ∈^* |⟨ w, ν_min⟩ = 0} is non-empty. Since both M and N are compact, and since {w ∈^* |⟨ w, ν_min⟩ = 0} is a linear hyperplane in ^*, by the Convexity Package (Theorem <ref>), both Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} and Φ_N(N) = Φ(N) are convex polytopes. Therefore, in order to prove that (<ref>) holds, it suffices to show that Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} and Φ(N) have the same vertices. Since M_exc≠∅, by Corollary <ref>, there is no vertex of Φ(M) lying on {w ∈^* |⟨ w, ν_min⟩ = 0}. Hence, a point v̂∈Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} is a vertex if and only if there exists an edge e of Φ(M) that comes out of ℱ_min such that v̂ is the intersection of e with {w ∈^* |⟨ w, ν_min⟩ = 0}. On the other hand, since (N,ω_N,Φ_N) is exceptional and since the complexity of is one, by Proposition <ref>, the complexity of (N,ω_N,Φ_N) is zero, i.e., it is a compact symplectic toric manifold. Therefore, v̂∈Φ(N) is a vertex if and only if there exists an isolated fixed point p ∈ N such that Φ(p) = v̂. Let v̂∈Φ(N) be a vertex and let p ∈ N be as above. By Proposition <ref>, there exists v̂ an edge e of Φ(M) that comes out of ℱ_min such that v̂ is the intersection of e with {w ∈^* |⟨ w, ν_min⟩ = 0}. Hence, each vertex of Φ(N) is a vertex of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0}. Moreover, by Lemma <ref>, there exists an open neighborhood U of p such that U ∩ N is precisely the subset of exceptional points in U and Φ(U ∩ N) = Φ(U) ∩{w ∈^* |⟨ w, ν_min⟩ = 0}. Set V:= Φ(U). By the Convexity Package (Theorem <ref>), V is an open neighborhood of v̂. Moreover, by (<ref>), V ∩Φ(N) = V ∩{w ∈^* |⟨ w, ν_min⟩ = 0}, (see Figure <ref>). Suppose that there exists a vertex of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} that is not a vertex of Φ(N). Since both Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} and Φ(N) are convex polytopes of full dimension in {w ∈^* |⟨ w, ν_min⟩ = 0} and since the vertices of the latter are a subset of those of the former, there exists a vertex v̂ of Φ(N) such that for any open neighborhood V of v̂ V ∩Φ(N) ⊊ V ∩{w ∈^* |⟨ w, ν_min⟩ = 0}, (see Figure <ref>). By (<ref>), this is a contradiction. Our penultimate aim in this section is to prove Theorem <ref> below. To this end, first we prove the following result. Let be a normalized monotone tall complexity one T-space of dimension 2n. If q ∈ M is exceptional, then there exists a unique exceptional sheet (N,ω_N,Φ_N) that is stabilized by H_ℱ_min such that q ∈ N. First we prove uniqueness. Let (N_i, ω_i, Φ_i) be an exceptional sheet that is stabilized by H_ℱ_min such that q ∈ N_i for i=1,2. Hence, both N_1 and N_2 are connected components of M^H_ℱ_min. Since q ∈ N_1 ∩ N_2, it follows that N_1 = N_1 ∪ N_2 = N_2, as desired. Next we prove existence. Let (N',ω',Φ') be the sheet through q. Since q is exceptional, by Lemma <ref>, (N',ω',Φ') is exceptional. Since N' is compact, there exists p ∈ M^T ∩ N' that, by Lemma <ref>, is isolated. We claim that there exists an exceptional sheet (N,ω_N,Φ_N) stabilized by H_ℱ_min such that p ∈ N. To this end, we use Proposition <ref> and Lemma <ref>. Let U be the open neighborhood of p given by Lemma <ref> and let U_1 be the subset of U consisting of exceptional points stabilized by H_ℱ_min, which is path-connected and dense in the subset of U consisting of exceptional points. In particular, U_1 ≠∅. Given any point q' ∈ U_1, we consider the sheet (N,ω_N,Φ_N) through q'. 
By Lemma <ref>, (N,ω_N,Φ_N) is exceptional, while, by definition, it is stabilized by H_ℱ_min. Since U_1 is dense in the subset of U consisting of exceptional points and since p ∈ U is exceptional by Lemma <ref>, it follows that p ∈ N, thus proving the claim. Hence, N' ∩ N ≠∅. Moreover, since any point in N' is exceptional, by Lemma <ref> the stabilizer of any point in N' contains H_ℱ_min. Since both N' and N are connected and since N is a connected component of M^H_ℱ_min, it follows that N' ∪ N = N, so that N' is contained in N. Hence, q ∈ N, as desired. In fact, the proof of Lemma <ref> yields a slightly stronger result, namely that, under the hypotheses of the lemma, any exceptional sheet is contained in one that stabilized by H_ℱ_min. Let be a normalized monotone tall complexity one T-space of dimension 2n. Each connected component of M_exc is mapped homeomorphically to Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0 } by the orbital moment map. In particular, each connected component of M_exc is contractible. Moreover, * if m is the number of vertices of the minimal facet, then the number of connected components of M_exc is precisely the number of isolated fixed points divided by m, and * if e is an edge of Φ(M) that comes out of ℱ_min, then the number of isolated fixed points lying on Φ^-1(e) equals the number of connected components of M_exc. First we show that the image of {p ∈ M^H_ℱ_min| p is exceptional} under the quotient map M → M/T equals M_exc and that number of connected components of the latter equals the number of exceptional sheets that are stabilized by H_ℱ_min. Clearly, the image of {p ∈ M^H_ℱ_min| p is exceptional} under the quotient map M → M/T is contained in M_exc. Conversely, given an exceptional orbit 𝒪∈ M_exc, every point in 𝒪 is exceptional by Remarks <ref> and <ref>. Fix a point p ∈𝒪. By Proposition <ref> and Lemma <ref>, p ∈ M^H_ℱ_min. Since M^H_ℱ_min is T-invariant, it follows that 𝒪 is contained in M^H_ℱ_min. Hence, M_exc is contained in the image of {p ∈ M^H_ℱ_min| p is exceptional} under the quotient map M → M/T and the first claim follows. To prove the second claim, we observe that the connected components of {p ∈ M^H_ℱ_min| p is exceptional} are exactly the exceptional sheets that are stabilized by H_ℱ_min. Since sheets are T-invariant and T is connected, the restriction of the quotient map M → M/T to {p ∈ M^H_ℱ_min| p is exceptional} induces a bijection between the connected components of {p ∈ M^H_ℱ_min| p is exceptional} and those of M_exc. By Proposition <ref>, the image of an exceptional sheet that is stabilized by H_ℱ_min is precisely Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0 }. Moreover, any such sheet is a compact symplectic toric manifold, so that the corresponding orbital moment map image is a homeomorphism onto its image. Hence, the first claim follows. By the above argument, in order to prove the bulleted statements, it suffices to show that the number of exceptional sheets that are stabilized by H_ℱ_min is precisely the number of isolated fixed points divided by m. We begin by observing that M_exc = ∅ if and only if there are no isolated fixed points. This is a consequence of Lemma <ref> and the complexity of being one. So we may assume that M_exc≠∅. Let (N,ω_N,Φ_N) be such an exceptional sheet that is stabilized by H_ℱ_min. Since (N,ω_N,Φ_N) is a compact symplectic toric manifold, the number of vertices in Φ_N(N) = Φ(N) is precisely the number of fixed points. 
By Proposition <ref>, Φ(N) equals Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0 }; moreover, there is a bijection between the vertices of Φ(N) and those of ℱ_min. Hence, the cardinality of M^T ∩ N equals m. Moreover, if (N',ω',Φ') is another sheet stabilized by H_ℱ_min, then either N = N' or N ∩ N' = ∅. In particular, if N ≠ N', the subsets M^T ∩ N and M^T ∩ N' are disjoint. Finally, since the set of isolated fixed points is a subset of the set of exceptional points by Lemma <ref>, by Lemma <ref>, M^T_isolated equals the disjoint union over all exceptional sheets (N,ω_N,Φ_N) stabilized by H_ℱ_min of the intersections M_isolated^T ∩ N. Since any such sheet is exceptional, by Lemma <ref>, M^T ∩ N equals the intersection of N with the set of isolated fixed points for any exceptional sheet (N,ω_N,Φ_N) stabilized by H_ℱ_min. Putting the above facts together, the bulleted statements follow. We conclude this section with the following important result. The equivalence class of paintings of a compact normalized monotone tall complexity one T-space is trivial. By Lemma <ref>, the genus of is zero. Let f : M_exc→ S^2 be a painting of . We need to construct a painting f' : M_exc→ S^2 that is constant on each connected component of M_exc and that is equivalent to f. If M_exc = ∅, there is nothing to prove, so we may assume that M_exc≠∅. Since Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩ = 0} is a convex polytope, it is contractible. We fix a point w_0 in Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩ = 0} (for instance, the origin). Since Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩ = 0} is contractible, there exists a deformation retraction of Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩ = 0} onto w_0 that we denote by H_t, where H_0 = id and H_1(w) = w_0 for all w ∈Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩ = 0}. Let k >0 be the number of connected components of M_exc. We have that M_exc = ∐_i=1^k M_i, where M_i is a connected component of M_exc for i=1,…, k. By Theorem <ref>, the restriction of Φ M/T →^* to M_i is a homeomorphism onto Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩ = 0}; let Φ^-1_i be the inverse to this homeomorphism. We denote the restriction of f to M_i by f_i and, for any 0 ≤ t ≤ 1, we define f_i,t : M_i → S^2 as the composite f_i ∘Φ_i^-1∘ H_t ∘Φ. Since M_exc is the disjoint union of the M_i's, for any 0 ≤ t ≤ 1, there exists a unique continuous map f_t : M_exc→ S^2 such that the restriction of f_t to M_i equals f_i,t for all i=1,…, k. We claim that f_t : M_exc→ S^2 is a painting of for all 0≤ t ≤ 1. To this end, we fix 0 ≤ t ≤ 1, consider the map (Φ,f_t) and two distinct points [q_i],[q_j] ∈ M_exc belonging to M_i and M_j respectively. We aim to show that (Φ([q_i]),f_t ([q_i])) ≠ (Φ([q_j]),f_t ([q_j])). If i = j, since the restriction of Φ to M_i = M_j is a homeomorphism and since [q_i] ≠ [q_j], it follows that Φ([q_i]) ≠Φ([q_j]), so that the result follows. Suppose next that i ≠ j. If Φ([q_i]) ≠Φ([q_j]), then the result follows, so we may assume that Φ([q_i]) = Φ([q_j]) =: w. If f_t ([q_i]) = f_t([q_j]), then the points Φ^-1_i(H_t(w)) ∈ M_i and Φ^-1_j(H_t(w)) ∈ M_j are distinct (since i ≠ j), but are such that (Φ(Φ^-1_i(H_t(w))),f(Φ^-1_i(H_t(w)))) = (Φ(Φ^-1_j(H_t(w))),f(Φ^-1_j(H_t(w)))). This implies that f is not a painting, which is absurd. Hence, f_t ([q_i]) ≠ f_t([q_j]), which implies that (Φ([q_i]),f_t ([q_i])) ≠ (Φ([q_j]),f_t ([q_j])), as desired. We set f':= f_1. Since f_t : M_exc→ S^2 is a painting of for all 0≤ t ≤ 1, f = f_0 and f' are homotopic through paintings. Moreover, the restriction of f' to M_i is constant by construction for any i=1,…, k, thus completing the proof. 
§.§ The Duistermaat-Heckman function In order to prove the main result of this section (see Theorem <ref>), we prove the following intermediate result. To this end, we recall that (<ref>) and (<ref>) hold, that for any vertex v and any edge e of Φ(M), the preimages Φ^-1(v) and Φ^-1(e) are a two dimensional sphere and a four dimensional manifold respectively. Moreover, by Lemma <ref>, there exists s ∈ such that, if v ∈ℱ_min and e is the edge that comes out of ℱ_min that is incident to v, then the self-intersection of Φ^-1(v) in Φ^-1(e) equals s (and does not depend on v). Let be a compact normalized monotone tall complexity one T-space of dimension 2n. Let v ∈ℱ_min be a vertex and let e be the edge incident to v that comes out of ℱ_min. Let s ∈ be as in Lemma <ref> and let k ≥ 0 be the number of isolated fixed points contained in Φ^-1(e). Then the restriction of the Duistermaat-Heckman function DH to e is the function e → w ↦ 2 - s ⟨ w, ν_min⟩ - k ρ(w), where ρ : ^* → is the function given by w ↦ 0 if ⟨ w, ν_min⟩≤ 0 ⟨ w, ν_min⟩ if ⟨ w, ν_min⟩≥ 0. Let α_1,…, α_n-1 be the non-zero isotropy weights of Φ^-1(v) ordered so that (<ref>) holds; in particular, ⟨α_n-1, ν_min⟩ = 1 (see Lemma <ref> and (<ref>)). Since e is the edge that comes out of ℱ_min and is incident to v, by Lemma <ref>, there exists t_max∈_>0 such that e = { v + tα_n-1| 0 ≤ t ≤ t_max}. Let (M_e, ω_e, Φ_e) be the sheet corresponding to e as in (<ref>). Since α_n-1∈ℓ^* is primitive, the stabilizer H_e of (M_e, ω_e, Φ_e) is precisely exp(Ann( ⟨α_n-1⟩) ). Since is tall and compact, (M_e, ω_e, Φ_e) is a compact tall complexity one Hamiltonian T/H_e ≃ S^1-space by property <ref> of Corollary <ref> and by Proposition <ref>. In what follows, Φ_e is chosen so that Φ_e(M_e) = [0,t_max]. Moreover, the isolated fixed points for the S^1-action on M_e are precisely the isolated fixed points for the T-action contained in M_e. By Corollary <ref>, the restriction of the Duistermaat-Heckman function DH to e equals DH (M_e,ω_e,Φ_e). Since ⟨ v + tα_n-1,ν_min⟩ = t-1, in order to prove that (<ref>) holds, it suffices to check that the Duistermaat-Heckman function of (M_e,ω_e,Φ_e) is the function [0,t_max] → given by t ↦ 2 - s(t-1) -k Θ(t-1), where Θ: → is the function t ↦ 0 if t ≤ 0 t if t ≥ 0. By Proposition <ref> and the definition of Φ_e, any isolated fixed point p ∈ M_e for the S^1-action satisfies Φ_e(p) = 1 and its isotropy weights are +1,-1. In particular, if k > 1, then t_max > 1. Since (M_e,ω_e,Φ_e) is tall and since the domain of the Duistermaat-Heckman function is Φ_e(M_e) = [0,t_max] (see Definition <ref>), by <cit.>, we have that DH (M_e,ω_e,Φ_e)(t) = ∫_Φ_e^-1(v)ω_e - t c_1(L_e)[Φ_e^-1(v)] - k Θ(t-1) for all t ∈ [0,t_max], where c_1(L_e) is the first Chern class of the normal bundle L_e to Φ_e^-1(v) in M_e. Since is normalized monotone, since Φ^-1_e(v) = Φ^-1(v) is a sphere by Lemma <ref>, and by Lemma <ref>, we have that ∫_Φ_e^-1(v)ω_e = c_1(M)[Φ^-1(v)] = 2 + ∑_j=1^n-1c_1(L_j)[Φ^-1(v)] = 2 + c_1(L_n-1)[Φ^-1(v)], where L_1⊕…⊕ L_n-1 is the T-equivariant splitting of the normal bundle to Φ^-1(v) in M, and the last equality follows from Proposition <ref>. Combining equations (<ref>) and (<ref>), and observing that L_e = L_n-1, we have that DH (M_e,ω_e,Φ_e)(t) = 2 - s(t-1) - k Θ(t-1), as desired. By Theorem <ref>, the number of isolated fixed points that lies in the preimage of an edge that comes ouf of ℱ_min equals the number of connected components of M_exc. Hence, we can use Proposition <ref> to prove the following result. 
Let be a compact normalized monotone tall complexity one T-space of dimension 2n. Let s ∈ be as in Lemma <ref> and let k ≥ 0 be the number of connected components of M_exc. The Duistermaat-Heckman function DH : Φ(M) → is given by DH (w) = 2 - s ⟨ w, ν_min⟩ - k ρ(w), where ρ : ^* → is the function given by w ↦ 0 if ⟨ w, ν_min⟩≤ 0 ⟨ w, ν_min⟩ if ⟨ w, ν_min⟩≥ 0. First, we show that the interior of the intersection Φ(M) ∩{w ∈^* |±⟨ w, ν_min⟩ > 0} consists entirely of regular values of Φ. To this end, suppose that w ∈^* ∖∂Φ(M) is a singular value of Φ. Hence, there exists q ∈Φ^-1(w) whose stabilizer has positive dimension. By <cit.>, q is exceptional. Thus, by Proposition <ref> and Lemma <ref>, Φ(q) ∈{w ∈^* |⟨ w, ν_min⟩ = 0}, as desired. Hence, by Remark <ref>, the restriction of DH to Φ(M) ∩{w ∈^* |±⟨ w, ν_min⟩ > 0} is the restriction of an affine function f^±: ^* → of the form f^±(w) = c^± + ⟨ w,β^±⟩ for some c^±∈ and some β^±∈ℓ. We fix a vertex v ∈ℱ_min, we let e be the edge that comes out of v and α_1,…, α_n-1 be the non-zero isotropy weights of Φ^-1(v) ordered so that (<ref>) and (<ref>) hold. Since DH is continuous (by Theorem <ref>) and since the restriction of DH to ℱ_min is constant by Proposition <ref>, the restriction of the affine function f^- to the affine hyperplane supporting ℱ_min is constant. Since f^- is an affine function and since ℱ_min is supported on the hyperplane v + ⟨α_1,…, α_n-2⟩, the restriction of the linear part of f^- to ⟨α_1,…, α_n-2⟩ is identically zero. In other words, β^- ∈Ann(⟨α_1,…, α_n-2⟩). Hence, since ν_min∈Ann(⟨α_1,…, α_n-2⟩), since β^-, ν_min∈ℓ, and since ν_min is primitive, there exists λ^- ∈ such that β^- = λ^- ν_min. By Lemma <ref>, there exists t_max∈_>0 such that e = { v + tα_n-1| 0 ≤ t ≤ t_max}. By (<ref>) and since v ∈ℱ_min, ⟨ v + tα_n-1,ν_min⟩ = -1+t for all 0 ≤ t ≤ t_max. Therefore, by Proposition <ref>, the restriction of DH to e ∩{w ∈^* |⟨ w, ν_min⟩ < 0} is given by the function that sends t to 2 + s -st, where 0 ≤ t < 1. Hence, c^- + λ^-(-1+t) = 2 + s -s t for all 0 < t < 1. Equation (<ref>) readily implies that λ^- = - s and c^- = 2. Hence, f^-(w) = 2 - s⟨ w, ν_min⟩. We split the remainder of the proof in two cases, depending on whether t_max = 1 or t_max≥ 2. In the former case, the other vertex v' of Φ(M) that is incident to e lies on the linear hyperplane {w ∈^* |⟨ w,ν_min⟩ = 0}. Hence, by Corollary <ref>, there are no isolated fixed points. Therefore, by Lemma <ref>, the function DH is the restriction of an affine function. Since the interior of Φ(M) ∩{w ∈^* | - ⟨ w, ν_min⟩ > 0} is not empty and since the restriction of DH to this subset equals the affine function f^-(w) = 2 - s⟨ w, ν_min⟩, it follows that DH (w) = 2 - s⟨ w, ν_min⟩ = 2 - s⟨ w, ν_min⟩ - k ρ(w) for all w ∈Φ(M), where the last equality follows from the fact that k=0, since there are no isolated fixed points (see Lemma <ref> and Corollary <ref>). It remains to consider the case t_max≥ 2. Since DH is continuous, the restrictions of f^- and f^+ to Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} are equal. Hence, the restriction of the affine function f^+ to Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} equals 2. By Proposition <ref>, Φ(M) is a reflexive Delzant polytope; thus, by Lemma <ref>, the origin lies in the interior of Φ(M). Therefore, the interior of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} in {w ∈^* |⟨ w, ν_min⟩ = 0} is non-empty. Hence, the restriction of the linear part of f^+ to the hyperplane {w ∈^* |⟨ w, ν_min⟩ = 0} is identically zero. 
Arguing as above, this implies that there exists λ^+ ∈ such that β^+ = λ^+ ν_min. Since t_max≥ 2, the intersection e ∩{w ∈^* |⟨ w, ν_min⟩ > 0} is not empty. By Proposition <ref>, the restriction of DH to e ∩{w ∈^* |⟨ w, ν_min⟩ > 0} is the function that sends t to 2 + s - st -k(t-1) for 1 < t ≤ t_max. Hence, we have that λ^+(-1+t) + 2 = 2 + s - st - k(t-1) for all 1 < t < t_max. Equation (<ref>) readily implies that λ^+ = - s - k. Hence, f^+(w) = 2 -s⟨ w, ν_min⟩-k⟨ w, ν_min⟩. Thus the restriction of DH to Φ(M) ∖{w ∈^* |⟨ w, ν_min⟩ = 0} is the function given by w ↦ 2 - s ⟨ w, ν_min⟩ if ⟨ w, ν_min⟩ < 0 2 - s ⟨ w, ν_min⟩ - k ⟨ w, ν_min⟩ if ⟨ w, ν_min⟩ >0. This function equals the restriction of (<ref>) on the dense subset Φ(M) ∖{w ∈^* |⟨ w, ν_min⟩ = 0} of Φ(M). Since DH is continuous, equation (<ref>) holds. §.§ The Duistermaat-Heckman function is a complete invariant We can proceed with the proof of our first main result. Let and (M',ω',Φ') be compact monotone tall complexity one T-spaces of dimension 2n. If they are isomorphic, then they have equal Duistermaat-Heckman measures. By Theorem <ref>, they have equal Duistermaat-Heckman functions. Suppose conversely that they have equal Duistermaat-Heckman functions. First, we show that it suffices to show the result under the additional assumption that both spaces are normalized. To this end, assume this special case. Since and (M',ω',Φ') are monotone, by Corollary <ref>, there exist λ, λ' >0 and v, v' ∈^* such that (M,λω, λΦ+v) and (M',λ'ω', λ'Φ'+v') are normalized monotone. Moreover, since and (M',ω',Φ') are tall and have equal Duistermaat-Heckman functions, there exists a Delzant polytope Δ such that Φ(M) = Δ = Φ'(M'). Since (M,λω, λΦ+v) and (M',λ'ω', λ'Φ'+v') are normalized monotone, by Proposition <ref>, the polytopes λΔ + v and λ' Δ + v' are reflexive Delzant. We observe that there exists at most one reflexive Delzant polytope obtained from Δ by rescaling and translating because the vertices of a reflexive Delzant polytope are determined by the tangent cones to its vertices (see Proposition <ref>). Hence, λ = λ' and v = v'; in particular, λΔ + v = λ' Δ + v'. On the other hand, we observe that, for any w ∈λΔ + v = λ' Δ + v', DH (M,λω, λΦ+v)(w) = λ DH(λ w+v) DH (M',λ'ω', λ' Φ'+v')(w) = λ' DH (M',ω',Φ') (λ' w+v'). This is an immediate consequence of the definition of the Duistermaat-Heckman function, of the fact that its restriction to the set of regular values is given by (<ref>), and of the complexity of both spaces being one. Hence, DH (M,λω, λΦ+v) = DH (M',λ'ω', λ' Φ'+v'). By assumption, (M,λω, λΦ+v) and (M',λ'ω', λ' Φ'+v') are isomorphic. Therefore, by Lemma <ref>, it follows that and (M',ω',Φ') are isomorphic, as desired. It remains to prove the result under the additional assumption that the spaces are normalized. By Remark <ref> and Theorem <ref>, the two spaces have equal Duistermaat-Heckman measures. Moreover, both spaces have genus zero by Lemma <ref>, and equal moment map images, both of which are equal to a reflexive Delzant polytope Δ by Proposition <ref>. Since the Duistermaat-Heckman functions are equal , there is a facet ℱ_min of Δ that is a minimal facet for both spaces. Let s, s' ∈ be the integers as in Lemma <ref> for and (M',ω',Φ') respectively. Since the Duistermaat-Heckman functions of and (M',ω',Φ') are equal, by Theorem <ref>, 2 - s⟨ w ,ν_min⟩ - k ρ(w) = 2 - s'⟨ w ,ν_min⟩ - k' ρ(w) for all w ∈Δ, where ρ : ^* → is the function of equation (<ref>). 
Since the interior of Δ is not empty and by (<ref>), there exists a point w ∈Δ such that -1 < ⟨ w, ν_min⟩ < 0. Evaluating both sides of (<ref>) at this point, we obtain that s = s'. By the same argument and since Δ is reflexive so that the origin is an interior point of Δ, there exists w' ∈Δ such that ⟨ w', ν_min⟩ > 0. Evaluating both sides of (<ref>) at w', we obtain that k = k'. Hence, either both M_exc and M'_exc are empty or neither is. We suppose first that neither M_exc nor M'_exc is empty. As in the proof of Theorem <ref>, we write M_exc = ∐_j=1^k M_j and M'_exc = ∐_j=1^k M'_j, where M_j is a connected component of M_exc for all j=1,…, k, and analogously for M'_j and M_exc'. By Theorem <ref>, there exist trivial paintings f : M_exc→ S^2 and f': M'_exc→ S^2 of and (M',ω',Φ') respectively (see Definition <ref>). Since f, f' are paintings, if i ≠ j, f(M_i) ≠ f(M_j) and f'(M'_i) ≠ f'(M'_j). In particular, the images of both f and f' consist of k distinct points in S^2. We claim that we may assume that f(M_j) = f'(M'_j) for all j = 1,…, k. To see this, we can use the argument of <cit.> to construct an orientation-preserving diffeomorphism ξ : S^2 → S^2 such that ξ(f(M_j)) = f'(M'_j) for all j =1,…, k. The composite ξ∘ f is a trivial painting of that is equivalent to f. By Theorem <ref>, for all j=1,…, n, the orbital moment maps Φ (respectively Φ') maps each connected component of M_j (respectively M'_j) homeomorphically onto Δ∩{ w ∈^* |⟨ w, ν_min⟩ = 0}, where we use the fact that Φ(M) = Δ = Φ'(M'). If Φ_j (respectively Φ'_j) denotes the restriction of the orbital moment map to M_j (respectively M_j'), then the composite i_j:= (Φ'_j)^-1∘Φ_j : M_j → M'_j is a homeomorphism that satisfies Φ = Φ' ∘ i_j. Hence, since M_exc and M'_exc have the same number of connected components, there is a unique map i : M_exc→ M'_exc that, when restricted to M_j, equals i_j for all j=1,…,n. Thus i is a homeomorphism such that Φ = Φ' ∘ i. Moreover, since the moment map images of and (M',ω',Φ') are equal, by Remark <ref>, the homeomorphism i maps each orbit to an orbit with the same symplectic slice representation. Hence, i : M_exc→ M'_exc is an isomorphism of exceptional orbits such that f' = i ∘ f. Thus and (M',ω',Φ') have equivalent paintings. If M_exc = ∅ = M'_exc, then and of (M',ω',Φ') also have equivalent paintings (trivially). Hence, in either case, the result follows by Theorem <ref>. We observe that, by Corollary <ref>, Theorem <ref> can be restated equivalently as saying that two compact monotone tall complexity one T-spaces are isomorphic if and only if they have equal Duistermaat-Heckman polytopes. § REALIZABILITY, EXTENSION TO TORIC AND FINITENESS RESULTS §.§ Necessary conditions for the realization and a finiteness result To state the first result of this subsection, we recall that, by Lemma <ref>, there exists s ∈ such that, if v ∈ℱ_min is a vertex and e is the edge of Φ(M) that comes out of ℱ_min and is incident to v, then the self-intersection of the sphere Φ^-1(v) in the four-dimensional submanifold Φ^-1(e) is s. Let be a normalized monotone tall complexity one Hamiltonian T-space, let s ∈ be as above and let k ∈ be the number of connected components of M_exc. The pair (s,k) ∈^2 belongs to the set {(0,0), (-1,0), (-1,1), (-1,2) }. We fix a vertex v ∈ℱ_min and we let e=e_n-1 be the edge of Φ(M) that comes out of ℱ_min and is incident to v. The normal bundle N to Σ:=Φ^-1(v) splits T-equivariantly as L_1⊕⋯⊕ L_n-1. 
By Proposition <ref>, the first Chern number c_1(L_j)[Σ], which agrees with the self-intersection of Σ in Φ^-1(e_j), equals zero for all j=1,…,n-2. Therefore, by Lemma <ref> and equation (<ref>) in its proof, 0<2+c_1(L_n-1)[Σ]=2+s. Moreover, by (<ref>) in Lemma <ref>, s=c_1(L_n-1)[Σ]≤ 0. Hence, we conclude that s∈{0,-1}. Suppose first that s= 0. By Theorem <ref>, the Duistermaat-Heckman function DH : Φ(M) → is given by w ↦ 2 - kρ(w), where ρ : ^* → is the function of (<ref>). Since k ≥ 0, 2 - kρ(w) ≤ 2 for all w ∈Φ(M). Since DH(v) = 2 and since v ∈ℱ_min, it follows that DH(w) = 2 for all w ∈Φ(M). By Proposition <ref>, Φ(M) is a reflexive (Delzant) polytope. Hence, by Lemma <ref>, Φ(M) contains the origin in its interior. Thus, by definition of the function ρ, k = 0. Suppose that s = -1. We must show that k ≤ 2. To this end, we may assume that k > 0. Let α∈ℓ^* be the isotropy weight of Φ^-1(v) such that the edge e is contained in the half-ray v + _≥ 0⟨α⟩. By Lemma <ref>, let t_max∈_>0 be such that e = {v + t α| 0 ≤ t ≤ t_max}. First, we prove that t_max≥ 2. Let v' ∈Φ(M) be the other vertex to which e is incident. If t_max = 1, then v' is a vertex of Φ(M) that lies on the linear hyperplane {w ∈^* |⟨ w, ν_min⟩ = 0}. By the Convexity Package (Theorem <ref>) and Proposition <ref>, there are no isolated fixed points in Φ^-1(e). Hence, by Theorem <ref>, M_exc = ∅, a contradiction. Hence, t_max≥ 2, as desired. As a consequence, v + 2α∈Φ(M). To conclude the proof, we evaluate DH at v + 2α. By (<ref>) and the fact that v∈ℱ_min, we obtain that ⟨ v+2α, ν_min⟩ = 1. Moreover, since is tall, DH(w) > 0 for all w ∈Φ(M). Therefore, by (<ref>), DH(v+2α)= 2 + 1- k > 0. Since k∈, it follows that k ≤ 2. We proceed with the proof of our second main result. Suppose that Φ(M) = Δ is reflexive Delzant. Since is monotone, by Lemma <ref>, is normalized monotone. Since there are finitely many facets of Φ(M) and since, by Proposition <ref>, there are finitely many possibilities for (s,k), by Theorem <ref>, there are finitely many possibilities for DH. Hence, the result follows from Theorem <ref>. By Theorem <ref> and Proposition <ref>, and since there are precisely as many edges that come out of ℱ_min as there are vertices of ℱ_min, we obtain the following bound on the number of isolated fixed points. Let be a normalized monotone tall complexity one T-space of dimension 2n. If m is the number of vertices of a minimal facet, then there are precisely either zero, m or 2m isolated fixed points in M. The next result gives a combinatorial property of the moment map image of a normalized monotone tall complexity one T-space such that M_exc has two connected components. Let be a normalized monotone tall complexity one T-space of dimension 2n. Let ℱ_min be a minimal facet of Φ(M) supported on the affine hyperplane {w ∈^* |⟨ w, ν_min⟩ = -1}. If M_exc has two connected components, then there exists a minimal facet ℱ'_min of Φ(M) supported on the affine hyperplane {w ∈^* |⟨ w, -ν_min⟩ = -1}. In particular, Φ(M) is contained in the strip {w ∈^* | -1 ≤⟨ w, ν_min⟩≤ 1}. We fix a vertex v ∈ℱ_min. Since the number k of connected components of M_exc is 2, by Proposition <ref>, it follows that s=-1. In particular, by Theorem <ref>, the Duistermaat-Heckman function DH : Φ(M) → of is given by DH (w) = 2 + ⟨ w, ν_min⟩ - 2 ρ(w), where ρ : Φ(M) → is the non-negative function given by (<ref>). Since v ∈ℱ_min, DH (v) = 1. Hence, since ℱ_min is a minimal facet, the minimal value of DH equals 1. 
Let e be the edge of Φ(M) that comes out of ℱ_min and is incident to v, and let α∈ℓ^* be the isotropy weight of Φ^-1(v) so that e is contained in the half-ray v + _≥ 0⟨α⟩. By (<ref>) and Proposition <ref>, ⟨α, ν_min⟩ =1 and there exists t_max∈_>0 such that e = {v + tα| 0 ≤ t ≤ t_max}. We set v':= v + t_maxα; this is a vertex of Φ(M). Moreover, we observe that ⟨ v', ν_min⟩ = t_max -1. Since s=-1 and k =2, arguing as in the last paragraph of the proof of Proposition <ref>, t_max≥ 2. In particular, by (<ref>) and since the minimal value of DH is 1, DH (v') = 3 - t_max≥ 1, whence t_max≤ 2. Hence, t_max = 2 and DH (v') =1, so that DH attains its minimum at v'. By Proposition <ref>, v' lies on a minimal facet ℱ_min'. In fact, ℱ_min' is contained in the connected component of the level set (DH )^-1(1) that contains v'. By (<ref>), the latter is given by the affine hyperplane {w ∈^* |⟨ w, -ν_min⟩ = -1}. Since the affine span of a facet is an affine hyperplane, the first statement follows. The second statement follows at once from the first and the fact that Φ(M) is contained in the intersection {w ∈^* |⟨ w, ν_min⟩≥ -1}∩{w ∈^* |⟨ w, -ν_min⟩≥ -1}. The next result is the most important building block of the main finiteness result of this paper, Corollary <ref>. Given a reflexive Delzant polytope Δ in ^*, there are finitely many isomorphism classes of normalized monotone tall complexity one T-spaces with moment map image equal to Δ. Let ℱ be a facet of Δ and let (s,k) ∈{(0,0), (-1,0), (-1,1), (-1,2) }. Since Δ has finitely many facets and by Proposition <ref>, it suffices to show that there are finitely many isomorphism classes of normalized monotone tall complexity one T-spaces with moment map image equal to Δ such that * ℱ is a minimal facet of Φ(M), * given any vertex v ∈ℱ and the edge e of Φ(M) that comes out of ℱ and is incident to v, the self-intersection of Φ^-1(v) in Φ^-1(e) equals s (cf. Lemma <ref>), and * the set of exceptional orbits has precisely k connected components. If there is no compact normalized monotone tall complexity one T-space with the above properties, there is nothing to prove, so we may assume that there exists such a space . By Theorem <ref>, the data Δ, ℱ and (s,k) determine uniquely the Duistermaat-Heckman function of . Hence, by Theorem <ref>, there is exactly one isomorphism class of compact normalized monotone tall complexity one T-space with the above properties, as desired. Theorem <ref> and Corollary <ref> allow us to prove Corollary <ref>, thus answering a question posed to us by Yael Karshon. We recall that two Hamiltonian T-spaces (M_1,ω_1,Φ_1) and (M_2,ω_2,Φ_2) are equivalent if there exists a symplectomorphism Ψ : (M_1,ω_1) → (M_2,ω_2) and an affine transformation a ∈GL(ℓ^*) ⋉𝔱^* such that Φ_2 ∘Ψ = a ∘Φ_1. In this case, we write (M_1,ω_1,Φ_1) ∼ (M_2,ω_2,Φ_2). * In the above notion of equivalence, the reason why we restrict to elements in GL(ℓ^*) ⋉𝔱^* is the following: Given an effective Hamiltonian T-space and an affine transformation a of 𝔱^*, the triple (M,ω, a ∘Φ) is an effective Hamiltonian T-space if and only if a ∈GL(ℓ^*) ⋉𝔱^*. * Isomorphic Hamiltonian T-spaces in the sense of Definition <ref> are necessarily equivalent, but the converse need not hold. We fix n and we denote the set of equivalence classes of compact tall complexity one T-spaces of dimension 2n with first Chern class equal to the class of the symplectic form by ℳ_n. 
By Definition <ref>, any normalized monotone tall complexity one T-space of dimension 2n is such that its first Chern class equals the class of the symplectic form. We define an auxiliary equivalence relation ≈ on the set of normalized monotone tall complexity one T-spaces of dimension 2n as follows: Given two such spaces (M_1,ω_1,Φ_1), (M_2,ω_2,Φ_2), we say that (M_1,ω_1,Φ_1) ≈ (M_2,ω_2,Φ_2) if there exists a linear transformation l ∈GL(ℓ^*) such that Φ_2 ∘Ψ = l ∘Φ_1. We denote the set of ≈-equivalence classes of normalized monotone tall complexity one T-spaces of dimension 2n by 𝒩ℳ_n. We observe that there is a natural map 𝒩ℳ_n →ℳ_n sending the ≈-equivalence class of to its ∼-equivalence class. Moreover, we claim that this map is a bijection. First, we show that it is injective. Suppose that (M_1,ω_1,Φ_1), (M_2,ω_2,Φ_2) are normalized monotone tall complexity one T-spaces of dimension 2n such that (M_1,ω_1,Φ_1) ∼ (M_2,ω_2,Φ_2). Then there exists a ∈GL(ℓ^*) ⋉𝔱^* such that a (Φ_1(M_1)) = Φ_2(M_2). We write a = (l, v) for unique l ∈GL(ℓ^*) and v ∈𝔱^*. It suffices to show that v = 0. By Proposition <ref>, both Φ_1(M_1) and Φ_2(M_2) are reflexive (Delzant) polytopes. In particular, all vertices of Φ_1(M_1) and of Φ_2(M_2) lie in ℓ^*. Since both polytopes have at least one vertex and since l ∈GL(ℓ^*), v ∈ℓ^*. Moreover, by Lemma <ref>, the origin is the only interior lattice point in both Φ_1(M_1) and Φ_2(M_2). Hence, since a (Φ_1(M_1)) = Φ_2(M_2), the lattice point v lies in the interior of Φ_2(M_2). Thus v = 0, as desired. Next we prove surjectivity. To this end, let be a compact tall complexity one T-space of dimension 2n with c_1(M) = [ω]. By Proposition <ref>, there exists (a unique) v ∈𝔱^* such that (M,ω,Φ +v) is normalized. Hence, ∼ (M,ω,Φ +v), as desired. Therefore it suffices to prove that 𝒩ℳ_n is finite. To this end, we denote the orbit space of the standard GL(ℓ^*)-action on the set of reflexive Delzant polytopes in ^* by ℛ𝒟_n. By Proposition <ref>, the map p: 𝒩ℳ_n →ℛ𝒟_n that sends the ≈-equivalence class of to the GL(ℓ^*)-orbit of Φ(M) is surjective. By Corollary <ref>, ℛ𝒟_n is finite. Hence, it suffices to prove that the fibers of the above map are finite. We fix a reflexive Delzant polytope Δ and we consider the map from the set of isomorphism classes of normalized monotone tall complexity one T-spaces with moment map image equal to Δ to p^-1([Δ]) that sends the isomorphism class of to its ≈-equivalence class. This map is surjective: If is such that [Φ(M)] = [Δ], then there exists l ∈GL(ℓ^*) such that Δ = l(Φ(M)). The isomorphism class of (M, ω, l ∘Φ) is then mapped to the ≈-equivalence class of . The result now follows from Corollary <ref>. §.§ Sufficient conditions for the realization and extension to a toric action Let be a normalized monotone tall complexity one T-space. By the results of Section <ref>, there exists a quadruple (Δ,ℱ,s,k) determined by , where Δ = Φ(M), ℱ is a facet of Δ that is a minimal facet of (see Definition <ref>), s ∈ is the self-intersection of the sphere Φ^-1(v) in Φ^-1(e), where v ∈ℱ is any vertex and e is an edge of Δ that comes out of ℱ and is incident to v, and k ∈ is the number of connected components of M_exc. The overall aim of this section is to determine which quadruples arise in this fashion (see Corollary <ref>). So far, we have established the following necessary conditions: (i) Δ is a full-dimensional reflexive Delzant polytope in ^* (Propositions <ref> and <ref>). 
(ii) ℱ is a facet of Δ that is supported on the affine hyperplane {w ∈^* |⟨ w,ν⟩ = -1}. (iii) The pair (s,k) belongs to the set {(0,0), (-1,0), (-1,1), (-1,2) } (Proposition <ref>). (iv) If there is a vertex of Φ(M) on the linear hyperplane {w ∈^* |⟨ w, ν⟩ = 0}, then k=0 (Corollary <ref>). (v) If k=2 then there exists a facet ℱ' supported on the hyperplane {w∈^* |⟨ w, -ν⟩ = -1} (Corollary <ref>). The first step towards proving which quadruples are associated to a normalized monotone tall complexity one T-space and whether the T action extends to a toric action (Corollary <ref>) is establishing its combinatorial analogue, namely Theorem <ref>. To this end, we introduce the following terminology. We say that a quadruple (Δ, ℱ,s,k) consisting of a polytope Δ in ^*, a facet ℱ⊂Δ, and integers s,k, is admissible if it satisfies conditions (i)–(v) above. If (Δ, ℱ, -1,2) is admissible, then, by the proof of Corollary <ref>, the polytope Δ is contained in the strip {w ∈^* | -1 ≤⟨ w, ν⟩≤ 1}. Let k ∈{1,2}. If (Δ, ℱ, -1,k) is admissible, then for any edge e of Δ that intersects the linear hyperplane {w ∈^* |⟨ w,ν⟩ = 0}, there exists a vertex v ∈ℱ and a weight α_v ∈ℓ^* at v such that e ∩{w ∈^* |⟨ w,ν⟩ = 0} = {v + α_v}. In particular, e ∩{w ∈^* |⟨ w,ν⟩ = 0} is contained in ℓ^*. Fix such an edge e. Since (Δ, ℱ, -1,k) is admissible and k >0, Δ has no vertex on the linear hyperplane {w ∈^* |⟨ w,ν⟩ = 0}. Hence, e intersects {w ∈^* |⟨ w,ν⟩ = 0} in the relative interior of e, so that the intersection e ∩{w ∈^* |⟨ w,ν⟩ = 0} consists of one element, which is not a vertex since (Δ, ℱ, -1,k) is admissible. Let v ∈Δ be the vertex that is incident to e and satisfies ⟨ v, ν⟩ < 0. Since Δ is integral and is contained in the upper-half plane {w ∈^* |⟨ w,ν⟩≥ -1}, and since ν∈ℓ^* is primitive, ⟨ v, ν⟩ = -1, i.e., v ∈ℱ. Moreover, e is the edge that comes out of ℱ that is incident to v. Let α_v ∈ℓ^* be the weight of v such that e is contained in the half-ray v + _≥ 0⟨α_v ⟩. By Lemma <ref>, ⟨ v+ α_v, ν⟩ = 0, whence e ∩{w ∈^* |⟨ w,ν⟩ = 0} = {v + α_v}. Since v, α_v ∈ℓ^*, the result follows. Admissible quadruples encode the abstract analogs of the functions given by (<ref>) in Theorem <ref>. Let (Δ, ℱ, s,k) be an admissible quadruple. The abstract Duistermaat-Heckman function determined by (Δ, ℱ, s,k) is the map Δ→ that sends w ∈Δ to DH (w) = 2 - s ⟨ w, ν⟩ - k ρ(w), where ρ : ^* → is given by w ↦ 0 if ⟨ w, ν⟩≤ 0 ⟨ w, ν⟩ if ⟨ w, ν⟩≥ 0. Let (Δ, ℱ, s,k) and (Δ', ℱ', s',k') be admissible quadruples. By the arguments in the proof of Theorem <ref> (see page proof theorem thm:DH_classifies), if the abstract Duistermaat-Heckman functions determined by (Δ, ℱ, s,k) and (Δ', ℱ', s',k') are equal, then (Δ, ℱ, s,k) = (Δ', ℱ', s',k'). In what follows, we fix the map pr : ^* ×→^* given by projection to the first component and we denote the Lebesgue measure on by dy. Given a polytope Δ' in ^* ×, the projection pr(Δ') is a polytope in ^*. On such a projection we define the combinatorial analog of the function constructed in Example <ref> (the terminology used below is not standard). Let Δ' be a polytope in ^* ×. The height function of Δ :=pr(Δ') is the map Δ→ that sends w ∈Δ to Length(Δ'_w):=∫_Δ'_w dy, where Δ'_w:= pr^-1(w) ∩Δ. The combinatorial realizability result is as follows. For each admissible quadruple (Δ, ℱ, s,k), there exists a reflexive Delzant polytope Δ' in ^* × such that pr(Δ') = Δ and the height function of Δ equals the abstract Duistermaat-Heckman function determined by (Δ, ℱ, s,k). 
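For a concrete illustration of the abstract Duistermaat-Heckman function (a one-dimensional example given here only for orientation; it is not one of the cases drawn in the figures), take Δ = [-1,1] with ℱ = {-1}, so that ν = 1 and ρ(w) = max(w,0). Every pair (s,k) listed in condition (iii) then gives an admissible quadruple, and the corresponding abstract Duistermaat-Heckman functions are DH(w) = 2 for (s,k) = (0,0), DH(w) = 2 + w for (-1,0), DH(w) = 2 + min(w,0) for (-1,1), and DH(w) = 2 - |w| for (-1,2). Each of these functions is continuous, positive on Δ, and affine on {w ≤ 0} and on {w ≥ 0}.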
In Figure <ref> we provide the complete list of reflexive Delzant polytopes Δ' such that the projection is the reflexive square Δ of Figure <ref>. Before turning to the proof of Theorem <ref>, following <cit.>, we introduce an important construction on smooth polytopes. Let Δ be a Delzant polytope in ^* given by Δ=⋂_i=1^l {w∈^* |⟨ w,ν_i ⟩≥ c_i}, let ℱ be a face of Δ of codimension at least two, and let I ⊂{1,…, l} be the subset of those indices corresponding to the facets containing ℱ. We set ν_0:= ∑_i ∈ Iν_i and, given ϵ > 0, we also set c_0:= ϵ + ∑_i ∈ I c_i. For any ϵ >0 such that any vertex v of Δ not lying on ℱ satisfies ⟨ v, ν_0 ⟩ > c_0, we define the blow-up of Δ along ℱ of size ϵ to be the polytope Δ∩{w ∈^* |⟨ w, ν_0 ⟩≥ c_0}. As remarked in <cit.>, any blow-up of a Delzant polytope (along any face and of any size) is a Delzant polytope (so long as the face has codimension at least two and the size satisfies the condition stated in Definition <ref>). We fix an admissible quadruple (Δ, ℱ, s,k). We split the proof in two cases, depending on whether k=0 or not. Case 1: k=0. The abstract Duistermaat-Heckman function is DH(w) = 2 - s⟨ w, ν⟩ for all w ∈Δ. We set Δ_(s,0)':={(w,y) ∈^* ×| w ∈Δ , -1 ≤ y ≤ DH(w) -1}, where DH : Δ→ is the abstract Duistermaat-Heckman function determined by (Δ, ℱ, s,k) – see Figure <ref>. First, we claim that pr(Δ_(s,0)') = Δ. To this end, it suffices to prove that DH(w) ≥ 1 for all w ∈Δ. This follows immediately from the fact that, by Definition <ref>, s ∈{0,-1} and Δ is contained in the upper half-space of ^* given by {w ∈^* |⟨ w, ν⟩≥ -1}. By construction, the height function of Δ_(s,0)' equals DH, so it remains to show that Δ_(s,0)' is a reflexive Delzant polytope. To this end, we write Δ in its minimal representation (see (<ref>)) Δ=⋂_i=1^l {w ∈^* |⟨ w, ν_i ⟩≥ -1}, where ν_i ∈ℓ^* is primitive for all i=1,…, l and, without loss of generality, the hyperplane supporting ℱ is {w∈^* |⟨ w,ν_1⟩≥ -1}, i.e., ν_1 = ν. By (<ref>), Δ_(s,0)' ={(w,y) ∈^* ×|⟨ (w,y), (0,1) ⟩≥ -1} ∩{(w,y) ∈^* ×|⟨ (w,y),(-sν,-1) ⟩≥ -1} ∩⋂_i=1^l {(w,y) ∈^* ×|⟨ (w,y),(ν_i,0) ⟩≥ -1}, where, by a slight abuse of notation, we denote the natural pairing between ^* × and × also by ⟨·, ·⟩. Therefore, Δ_(s,0)' is a polytope (see Section <ref>). Moreover, the vertices of Δ_(s,0)' are precisely the elements of the set {(v, -1) ∈^* ×| v ∈Δ vertex} ∪ {(v, 1 - s⟨ v, ν⟩) ∈^* ×| v ∈Δ vertex}. We observe that, since Δ is reflexive, any vertex of Δ lies in ℓ^*; since ν∈ℓ, it follows that, if v ∈Δ is a vertex, then 1 - s⟨ v, ν⟩∈. Hence, by (<ref>), any vertex of Δ_(s,0)' lies in ℓ^* ×, i.e., Δ_(s,0)' is integral. Since ν_i ∈ℓ^* is primitive, (ν_i,0) ∈ℓ^* × is primitive. Moreover, since ν = ν_1 and since s ∈{0,-1}, (-sν,-1) ∈ℓ^* × is also primitive. Hence, by (<ref>), Δ_(s,0)' is reflexive. Finally, to see that Δ_(s,0)' is Delzant, we fix a vertex v ∈Δ. By (<ref>), the set of inward normals of the facets of Δ_(s,0)' that contain (v,-1) (respectively (v, 1 - s⟨ v,ν⟩)) consists of (0,1) (respectively (-sν, -1)), and of {(ν_v, 0)}, where {ν_v} is the set of inward normals of the facets of Δ that contain v. Since Δ is Delzant, it follows that Δ_(s,0)' is smooth at (v,-1) (respectively (v, 1 - s⟨ v,ν⟩)). Since any vertex of Δ_(s,0)' is equal to (v,-1) or (v, 1 - s⟨ v,ν⟩) for some vertex v of Δ, Δ_(s,0)' is Delzant, thus completing the proof in this case. Case 2: k≠ 0. Since (Δ, ℱ, s,k) is admissible, (s,k) ∈{(-1,1),(-1,2)}. 
Moreover, by Definition <ref>, Δ has no vertices on the linear hyperplane {w ∈^* |⟨ w, ν⟩ = 0 } and the quadruple (Δ, ℱ, 0,0) is also admissible. Hence, by Case 1, there exists a reflexive Delzant polytope Δ_(0,0)' satisfying the conclusions of the statement for the admissible quadruple (Δ, ℱ, 0,0). We deal with the cases (s,k) = (-1,1) and (s,k) = (-1,2) separately. ∙ Suppose that (s,k) = (-1,1). By (<ref>), the reflexive Delzant polytope Δ_(0,0)' has a codimension two face ℱ̃ given by the intersection of the facets supported by the affine hyperplanes {(w,y) ∈^* ×|⟨ (w,y), (0,-1) ⟩ = -1 } and {(w,y) ∈^* ×|⟨ (w,y), (ν,0) ⟩ = -1 }. This is a copy of ℱ on the affine hyperplane {(w,y) ∈^* ×| y = 1}. We wish to perform the blow-up of Δ'_(0,0) along ℱ̃ of size 1 (see Figure <ref>). To this end, with the notation in Definition <ref>, ν_0 = (ν,-1) and c_0 = -1. By (<ref>) and since s = -1, a vertex of Δ'_(0,0) that does not lie on ℱ̃ is either of the form (v,-1) for some vertex v of Δ or of the form (v,1) for some vertex v of Δ that does not lie on ℱ. Since ⟨ w, ν⟩≥ -1 for any w ∈Δ, if v is a vertex of Δ, then ⟨ (v,-1),(ν,-1) ⟩≥ -1 + 1 = 0 > -1 = c_0. On the other hand, since there are no vertices of Δ lying on the linear hyperplane {w ∈^* |⟨ w, ν⟩ = 0 } and since Δ is integral, if v is a vertex of Δ that does not lie on ℱ, then ⟨ v, ν⟩≥ 1. Hence, in this case, ⟨ (v,1),(ν,-1) ⟩≥ 1 -1 > -1 = c_0. By (<ref>) and (<ref>), we can perform the blow-up of Δ'_(0,0) along ℱ̃ of size 1 that we denote by Δ'_(-1,1), i.e., Δ'_(-1,1) = Δ'_(0,0)∩{(w,y) ∈^* ×|⟨ (w,y),(ν,-1) ⟩≥ -1}. By (<ref>) and (<ref>), and since s = -1, Δ'_(-1,1) = {(w,y) ∈^* ×| w ∈Δ , -1 ≤ y ≤min(1,1 + ⟨ w, ν⟩)}. Since Δ'_(0,0) is Delzant, by Remark <ref>, Δ'_(-1,1) is also Delzant. By (<ref>), it can be checked directly that a vertex of Δ'_(-1,1) is one of the following three types: * (v, min(1,1 + ⟨ v, ν⟩)) for some vertex v of Δ, * (w,1), where w lies on an edge of Δ and satisfies ⟨ w, ν⟩ = 0, or * (v,-1) for some vertex v of Δ. Since Δ is integral and since ν∈ℓ^*, if v is a vertex of Δ, then (v, min(1,1 + ⟨ v, ν⟩)) and (v,-1) belong to ℓ^* ×. Moreover, by Lemma <ref>, if w lies on an edge of Δ and satisfies ⟨ w, ν⟩ = 0, then w ∈ℓ^*. Hence, (w,1) ∈ℓ^* ×, so that Δ'_(-1,1) is integral. Moreover, by (<ref>) and (<ref>), Δ'_(-1,1) is reflexive. Since (s,k) = (-1,1), it follows that the map Δ→ that sends w to min(1,1 + ⟨ w, ν⟩) equals DH - 1, where DH : Δ→ is the abstract Duistermaat-Heckman function determined by (Δ, ℱ, -1,1). This implies both that pr(Δ'_(-1,1)) = Δ (since the minimal value of the above map on Δ is 0), and that the height function of Δ equals the abstract Duistermaat-Heckman function determined by (Δ, ℱ, -1,1), as desired. ∙ Suppose that (s,k) = (-1,2): Since (Δ, ℱ, -1,2) is admissible, there exists a facet ℱ' of Δ supported on the affine hyperplane {w ∈^* |⟨ w,-ν⟩ = -1} and the quadruple (Δ, ℱ, -1,1) is also admissible. Let Δ'_(-1,1) be the reflexive Delzant polytope constructed from (Δ, ℱ, -1,1) as above. Hence, by (<ref>) and (<ref>), the reflexive Delzant polytope Δ'_(-1,1) has a codimension two face ℱ̃' given by the intersection of the facets supported by the affine hyperplanes {(w,y) ∈^* ×|⟨ (w,y), (0,1) ⟩ = -1 } and {(w,y) ∈^* ×|⟨ (w,y), (-ν,0) ⟩ = -1}. This is a copy of ℱ' on the affine hyperplane {(w,y) ∈^* ×| y = -1}. We wish to perform the blow-up of Δ'_(-1,1) along ℱ̃' of size 1 (see Figure <ref>). To this end, with the notation in Definition <ref>, ν_0 = (-ν,1) and c_0 = -1. 
A vertex of Δ'_(-1,1) that does not lie on ℱ̃' is of one of three types: * (v, min(1,1 + ⟨ v, ν⟩)) for some vertex v of Δ, * (w,1), where w ∈Δ lies on an edge of Δ and satisfies ⟨ w, ν⟩ = 0, or * (v,-1) for some vertex v of Δ that does not lie on ℱ' (see the proof in the case (s,k) = (-1,1)). In the first case, we have that ⟨ (v, min(1,1 + ⟨ v, ν⟩)), (-ν,1) ⟩ = min(1- ⟨ v, ν⟩,1) > -1 = c_0, where the inequality follows from the fact that Δ is contained in the strip {w ∈^* | -1 ≤⟨ w, ν⟩≤ 1} (see Remark <ref>). In the second case, we have that ⟨ (w,1), (-ν,1)⟩ = 1 > -1 = c_0. As in the case (s,k)=(-1,1), if v is a vertex of Δ, then ⟨ v, -ν⟩≥ 1. Hence, in the third case, we have that ⟨ (v,-1), (-ν,1) ⟩≥ 0 > -1 =c_0. By (<ref>), (<ref>) and (<ref>), we can perform the blow-up of Δ'_(-1,1) along ℱ̃' of size 1 that we denote by Δ'_(-1,2), i.e., Δ'_(-1,2) = Δ'_(-1,1)∩{(w,y) ∈^* ×|⟨ (w,y),(-ν,1) ⟩≥ -1}. By (<ref>), we have that Δ'_(-1,2) = {(w,y) ∈^* ×| w ∈Δ , max(-1,-1 + ⟨ w, ν⟩) ≤ y ≤min(1,1 + ⟨ w, ν⟩)}. Since Δ'_(-1,1) is smooth, by Remark <ref>, Δ'_(-1,2) is also smooth. Moreover, by (<ref>), it can be checked directly that a vertex of Δ'_(-1,2) is one of the following three types: * (v, min(1,1 + ⟨ v, ν⟩)) for some vertex v of Δ, * (w, ± 1), where w lies on an edge of Δ and satisfies ⟨ w, ν⟩ = 0, or * (v,max(-1,-1+⟨ v, ν⟩)) for some vertex v of Δ. As in the case (s,k)=(-1,1), it follows that Δ'_(-1,2) is integral. Moreover, since Δ'_(-1,1) is reflexive, by (<ref>) Δ'_(-1,2) is reflexive. Since Δ is contained in the strip {w ∈^* | -1 ≤⟨ w, ν⟩≤ 1}, the maximal (respectively minimal) value of the map Δ→ that takes w to max(-1,-1 + ⟨ w, ν⟩) (respectively min(1,1 + ⟨ w, ν⟩)) is zero. Hence, pr(Δ'_(-1,2)) = Δ. Moreover, the height function of Δ is the map Δ→ that sends w ∈Δ to min (1,1 + ⟨ w, ν⟩) - max(-1,-1 + ⟨ w, ν⟩) = min(2 - ⟨ w, ν⟩, 2 + ⟨ w, ν⟩). Since (s,k) = (-1,2), the above map equals the abstract Duistermaat-Heckman function determined by (Δ, ℱ, -1,2), as desired. Theorem <ref> and Delzant's classification of compact symplectic toric manifolds <cit.> yield the following geometric realizability and extension result. If (Δ, ℱ, s,k) is an admissible quadruple, then there exists a normalized monotone tall complexity one T-space such that * Φ(M) = Δ and the Duistermaat-Heckman function of equals the abstract Duistermaat-Heckman function determined by (Δ, ℱ, s,k), and * the Hamiltonian T-action extends to an effective Hamiltonian (T × S^1)-action. By Theorem <ref>, there exists a reflexive Delzant polytope Δ' in ^* × such that pr(Δ') = Δ and the height function of Δ equals the abstract Duistermaat-Heckman function determined by (Δ, ℱ, s,k). By <cit.>, there exists a compact complexity zero (T × S^1)-space (M,ω, Φ̃ = (Φ,Ψ)) such that the moment map image Φ̃(M) = Δ', where we identify (×)^* with ^* ×. We claim that satisfies the desired properties. To see this, we observe that, by construction, it is tall and has complexity one, and the T-action extends to an effective Hamiltonian (T × S^1)-action. Moreover, since Δ' is a reflexive Delzant polytope, by Proposition <ref>, (M,ω, Φ̃) is normalized monotone so that, in particular, (M,ω) satisfies c_1 = [ω]. Since Δ is reflexive Delzant and since pr(Δ') = Δ, Φ(M) = Δ, so that Φ satisfies the weight sum formula. Hence, is normalized monotone. Finally, by Example <ref>, the Duistermaat-Heckman function of equals the height function of Δ. Since the latter equals the abstract Duistermaat-Heckman function determined by (Δ, ℱ, s,k), the result follows. 
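The equality of the height function with the abstract Duistermaat-Heckman function can also be checked numerically. The following minimal Python sketch (an illustration, not part of the original text) does so for the reflexive square Δ = [-1,1]^2 with ℱ the facet supported on {w_2 = -1}, so that ν = (0,1); the fiber bounds are taken from the explicit descriptions of Δ_(s,0)', Δ'_(-1,1) and Δ'_(-1,2) obtained above, and the sample points are arbitrary.

import numpy as np

nu = np.array([0.0, 1.0])  # inward normal of the facet F = {w_2 = -1} of the square [-1,1]^2

def abstract_DH(w, s, k):
    # abstract Duistermaat-Heckman function 2 - s<w,nu> - k*rho(w)
    p = float(np.dot(w, nu))
    return 2.0 - s * p - k * max(p, 0.0)

def fiber_bounds(w, s, k):
    # fiber of Delta'_(s,k) over w, following the explicit descriptions above
    p = float(np.dot(w, nu))
    if k == 0:
        return -1.0, 1.0 - s * p                       # Delta'_(s,0): -1 <= y <= DH(w) - 1
    if k == 1:
        return -1.0, min(1.0, 1.0 + p)                 # Delta'_(-1,1), only used with s = -1
    return max(-1.0, -1.0 + p), min(1.0, 1.0 + p)      # Delta'_(-1,2), only used with s = -1

rng = np.random.default_rng(0)
for s, k in [(0, 0), (-1, 0), (-1, 1), (-1, 2)]:       # the admissible pairs (s,k)
    for w in rng.uniform(-1.0, 1.0, size=(200, 2)):    # sample points of the square
        lo, hi = fiber_bounds(w, s, k)
        assert abs((hi - lo) - abstract_DH(w, s, k)) < 1e-12
print("fiber length of Delta'_(s,k) equals the abstract DH function in all four cases")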
In fact, the constructions in the proof of Theorem <ref> have geometric counterparts that allow to give an explicit geometric description of in Corollary <ref>. For instance, the case k=0 is described explicitly in <cit.>, while it is well-known that the combinatorial blow-up of a polytope along a face corresponds to an equivariant symplectic blow-up (see <cit.>). We can prove another important result of this paper. By Corollary <ref>, we may assume that is normalized monotone. Hence, Δ := Φ(M) is a reflexive Delzant polytope by Proposition <ref>. Let ℱ_min⊂Φ(M) be a minimal facet and let (s,k) be as in the statement of Proposition <ref>. By construction, the quadruple (Δ, ℱ_min, s,k) is admissible. Hence, by Corollary <ref>, there exists a normalized monotone tall complexity one T-space (M',ω', Φ') such that * its Duistermaat-Heckman function equals the abstract Duistermaat-Heckman function associated to (Δ, ℱ_min, s,k), and * the Hamiltonian T-action extends to an effective Hamiltonian (T × S^1)-action. By construction, and (M',ω', Φ') have equal Duistermaat-Heckman functions. Hence, by Theorem <ref>, they are isomorphic and the result follows. §.§ Compact monotone tall complexity one spaces are equivariantly Fano In this section, we prove the last main result of our paper, Theorem <ref>. To this end, we recall that a compact complex manifold (Y,J) is Fano if and only if there exists a Kähler form σ∈Ω^1,1(Y) such that c_1(Y) = [ω]. By Corollary <ref>, there is no loss of generality in assuming that is normalized monotone. By Theorem <ref>, the Hamiltonian T-action extends to an effective Hamiltonian (T × S^1)-action. We denote the corresponding normalized monotone symplectic toric manifold by (M,ω, Φ̃ = (Φ,Ψ)). By the classification of compact symplectic toric manifolds in <cit.> that there exists an integrable almost complex structure J on M that is compatible with ω and (T × S^1)-invariant; moreover, ω equals the Kähler form of (M,J). By Proposition <ref>, [ω] = c_1(M) > 0, so that the Kähler manifold (M,J) is Fano. Finally, by <cit.>, the T-action extends to an effective holomorphic T_-action, as desired. 99 atiyah M.F. Atiyah, Convexity and commuting Hamiltonians, Bull. London Math. Soc., 14, no. 1, (1982), 1 – 15. ballmann W. Ballmann, Lectures on Kähler Manifolds, ESI Lect. Math. Phys., European Mathematical Society (EMS), Zürich, 2006. batyrev V. V. Batyrev, Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties, J. Algebraic Geom., 3, no. 3, (1994), 493 – 535. bp M. Brion, C. Procesi, Action d'un tore dans une variété projective, Operator algebras, unitary representations, enveloping algebras, and invariant theory (Paris 1989), 509 – 539, Progr. Math., 92, Birkhäuser Boston, Boston, MA, 1990. ck Y. Cho, M.K. Kim, Log-concavity of complexity one Hamiltonian torus actions, C. R. Math. Acad. Sci. Paris, 350, no. 17-18, (2012), 845 – 848. cho Y. Cho, Classification of six dimensional monotone symplectic manifolds admitting semifree circle actions I, Internat. J. Math., 30, no.6, Paper No. 1950032, (2018), 71 pp. cho2 Y. Cho, Classification of six dimensional monotone symplectic manifolds admitting semifree circle actions II, Internat. J. Math., 32, no. 2, Paper No. 2050120, (2021), 47 pp. cho3 Y. Cho, Classification of six dimensional monotone symplectic manifolds admitting semifree circle actions III, preprint, (2019), arXiv:1905.07292v1. delzant T. Delzant, Hamiltoniens périodiques et image convexes de l'application moment, Bull. Soc. Math. 
France, 116, no. 3, (1988), 315 – 339. DeVito J. DeVito, Homeomorphisms of the 2-sphere S^2 fixing a set of points, Mathematics Stack Exchange, https://math.stackexchange.com/q/2947614https://math.stackexchange.com/q/2947614, version: 2018-10-09. dh J.J. Duistermaat, G.J. Heckman, On the variation in the cohomology of the symplectic form of the reduced phase space, Invent. Math., 69, no. 2, (1982), 259 – 268. dk J.J. Duistermaat, J.A.C. Kolk, Lie Groups, Universitext, Springer-Verlag, Berlin, 2000. ep M. Entov, L. Polterovich, Rigid subsets of symplectic manifolds, Compos. Math., 145, no. 3, (2009), 773 – 826. fp_hyp J. Fine, D. Panov, Hyperbolic geometry and non-Kähler manifolds with trivial canonical bundle, Geom. Top., 14, no. 3, (2010), 1723 – 1763. fp J. Fine, D. Panov, Circle invariant fat bundles and symplectic Fano 6-manifolds, J. London Math. Soc., 91, no. 3, (2015), 709 – 730. gvhs L. Godinho, F. von Heymann, S. Sabatini, 12, 24 and beyond, Adv. Math., 319, (2017), 472 – 521. GLS V. Guillemin, E. Lerman, S. Sternberg, Symplectic Fibrations and Multiplicity Diagrams, Cambridge University Press, Cambridge, 1996. gs V. Guillemin, S. Sternberg, Convexity properties of the moment mapping, Invent. Math., 67, no. 3, (1982), 491 – 513. gs-kahler V. Guillemin, S. Sternberg, Geometric Quantization and Multiplicities of Group Representations, Invent. Math., 67, no. 3, (1982), 515 – 538. gs-local V. Guillemin, S. Sternberg, A normal form for the moment map, Differential geometric methods in mathematical physics (Jerusalem, 1982), Math. Phys. Stud., 6, Reidel, Dordrecht, 1984, 161 – 175. gs-inve V. Guillemin, S. Sternberg, Birational equivalence in the symplectic category, Invent. Math., 97, no. 3, (1989), 485 – 522. gs-supersymmetry V. Guillemin, S. Sternberg, Supersymmetry and Equivariant de Rham Theory, Mathematics Past and Present, Springer-Verlag, Berlin, 1999. hnp C. Haase, B. Nill, A. Paffenholz, Lecture Notes on Lattice Polytopes preprint, available at https://www2.mathematik.tu-darmstadt.de/ paffenholz/daten/preprints/20201007_ Lattice_Polytopes.pdfhttps://www2.mathematik.tu-darmstadt.de/∼paffenholz/daten/preprints/20201007_Lattice_Polytopes.pdf. hirze F. Hirzebruch, T. Berger, R. Jung, Manifolds and modular forms, Aspects of Mathematics, E20, With appendices by Nils-Peter Skoruppa and by Paul Baum, Friedr. Vieweg & Sohn, Braunschweig, 1992. isko_prok V.A. Iskovskikh, Yu. G. Prokhorov, Fano varieties, in Algebraic Geometry, V, Encyclopaedia Math. Sci., 47, Springer, Berlin, 1999, 1 – 247. kar_not_log Y. Karshon, Example of a non-log-concave Duistermaat-Heckman measure, Math. Res. Lett., 3, no. 4, (1996), 537 – 540. karshon Y. Karshon, Periodic Hamiltonian flows on four dimensional manifolds, Mem. Amer. Math. Soc., 141, no. 672, 1999. kt1 Y. Karshon, S. Tolman, Centered complexity one Hamiltonian torus actions, Trans. Amer. Math. Soc., 353, no. 12, (2001), 4831 – 4861. kt2 Y. Karshon, S. Tolman, Complete invariants for Hamiltonian torus actions with two dimensional quotients, J. Symplectic Geom., 2, no. 1, (2003), 25 – 82. kt3 Y. Karshon, S. Tolman, Classification of Hamiltonian torus actions with two-dimensional quotients, Geom. Topol., 18, no. 2, (2014), 669 – 716. kirwan F.C. Kirwan, Cohomology of quotients in symplectic and algebraic geometry, Mathematical Notes, 31, Princeton University Press, Princeton, NJ, 1984. kollar J. Kollár, Y. Miyaoka, S. Mori, Rational connectedness and boundedness of Fano manifolds, J. Diff. Geom., 36, no. 3, (1992), 765 – 775. lz J.C. Lagarias, G.M. 
Ziegler, Bounds for lattice polytopes containing a fixed number of interior points in a sublattice, Canad. J. Math., 43, no. 5, (1991), 1022 – 1035. lerman_tolman E. Lerman, S. Tolman, Hamiltonian torus actions on symplectic orbifolds and toric varieties, Trans. Amer. Math. Soc., 349, no. 10, (1997), 4201 – 4230. li H. Li, The fundamental group of symplectic manifolds with Hamiltonian Lie group actions, J. Symplectic Geom., 4, no. 3, (2006), 345 – 372. lp N. Lindsay, D. Panov, S^1-invariant symplectic hypersurfaces in dimension 6 and the Fano condition, J. Top., 12, no. 1, (2019), 221 – 85. lindsay N. Lindsay, Hamiltonian circle actions on symplectic Fano manifolds, Ph.D. thesis, King's College London, 2018. marle C.-M. Marle, Modèle d'action hamiltonienne d'un groupe de Lie sur une variété symplectique, Rend. Sem. Mat. Univ. Politec. Torino, 43, no. 2, (1985), 227 – 251. marsden_weinstein J. E. Marsden, A. Weinstein, Reduction of symplectic manifolds with symmetry, Rep. Math. Phys., 5, (1974), 121 – 130. mcduff displacing D. McDuff, Displacing Lagrangian toric fibers via probes, Low-Dimensional and Symplectic Topology, in: Proc. Sympos. Pure Math., vol. 82, Amer. Math. Soc., Providence, RI, (2011), 131 – 160. mcduff_structure D. McDuff, The structure of rational and ruled symplectic 4-manifolds, J. Amer. Math. Soc. 3, (1990), 679–712. mcduff-salamon D. McDuff, D. Salamon, Introduction to symplectic topology, Oxford Mathematical Monographs, Second Edition, The Clarendon Press, Oxford University Press, New York, 1998. mcduff_sal D. McDuff, D. Salamon, J-holomorphic curves and symplectic topology, AMS Colloquium Publications, 52, American Mathematical Society, Providence, RI, 2004. mcduff_tolman D. McDuff, S. Tolman, Polytopes with Mass Linear Functions II: The Four-Dimensional Case, Int. Math. Res. Not. IMRN, no. 15, (2013), 3509 – 3599. Nicolaescu L. Nicolaescu, An Invitation to Morse Theory, Universitext, second edition, New York, (2011). paradan P.-E. Paradan, Wall crossing formulaes in Hamiltonian geometry, Progress in Mathematics, Geometric Aspects of Analysis and Mechanics. In Honor of the 65th Birthday of Hans Duistermaat. (292), 2011, 295 – 343. prv B. Poonen, F. Rodriguez-Villegas, Lattice Polygons and the Number 12, The American Mathematical Monthly, 3, no. 3 (Mar., 2000), 238 – 250. rez A.G. Reznikov, Symplectic twistor spaces, Ann. Global Ann. Geom., 11, no. 2, (1993), 109 – 118. ss S. Sabatini, D. Sepe, On topological properties of positive complexity one spaces, Transform. Groups, 27, no. 2, (2022), 723 – 735. Sjamaar R. Sjamaar, Convexity properties of the moment mapping re-examined, Adv. Math. 138, (1998), 46 – 91. tolman_inven S. Tolman, Examples of non-Kähler Hamiltonian torus actions, Invent. Math., 131, (1998), 299 – 310. ziegler G. M. Ziegler, Lectures on Polytopes, Graduate Texts in Mathematics, 152. Springer-Verlag, New York, (1995).
http://arxiv.org/abs/2307.04307v1
20230710020825
Weyl semimetallic state in the Rashba-Hubbard model
[ "Katsunori Kubo" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195, Japan We investigate the Hubbard model with the Rashba spin-orbit coupling on a square lattice. The Rashba spin-orbit coupling generates two-dimensional Weyl points in the band dispersion. In a system with edges along [11] direction, zero-energy edge states appear, while no edge state exists for a system with edges along an axis direction. The zero-energy edge states with a certain momentum along the edges are predominantly in the up-spin state on the right edge, while they are predominantly in the down-spin state on the left edge. Thus, the zero-energy edge states are helical. By using a variational Monte Carlo method for finite Coulomb interaction cases, we find that the Weyl points can move toward the Fermi level by the correlation effects. We also investigate the magnetism of the model by the Hartree-Fock approximation and discuss weak magnetic order in the weak-coupling region. Weyl semimetallic state in the Rashba-Hubbard model Katsunori Kubo August 12, 2023 ===================================================== § INTRODUCTION In a two-dimensional system without inversion symmetry, such as in an interface of a heterostructure, a momentum-dependent spin-orbit coupling is allowed. It is called the Rashba spin-orbit coupling <cit.>. The Rashba spin-orbit coupling lifts the spin degeneracy and affects the electronic state of materials. Several interesting phenomena originating from the Rashba spin-orbit coupling have been proposed and investigated. By considering the spin precession by the Rashba spin-orbit coupling, Datta and Das proposed the spin transistor <cit.>, in which electron transport between spin-polarized contacts can be modulated by the gate voltage. After this proposal, the tunability of the Rashba spin-orbit coupling by the gate voltage has been experimentally demonstrated <cit.>. Such an effect may be used in a device in spintronics. The possibility of the intrinsic spin Hall effect, which is also important in the research field of spintronics, by the Rashba spin-orbit coupling has been discussed for a long time <cit.>. Another interesting phenomenon with the Rashba spin-orbit coupling is superconductivity. When the Rashba spin-orbit coupling is introduced in a superconducting system, even- and odd-parity superconducting states are mixed due to the breaking of the inversion symmetry <cit.>. This mixing affects the magnetic properties of the superconducting state, such as the Knight shift. While the above studies have mainly focused on the one-electron states in the presence of the Rashba spin-orbit coupling, the effects of the Coulomb interaction between electrons have also been investigated. The Hubbard model with the Rashba spin-orbit coupling on a square lattice called the Rashba-Hubbard model is one of the simplest models to investigate such effects. In this study, we investigate the ground state of this model at half-filling, i.e., electron number per site n=1, by the variational Monte Carlo method and the Hartree-Fock approximation. In the strong coupling limit, an effective localized model is derived and the possibility of long-period magnetic order is discussed <cit.>. The long-period magnetism is a consequence of the Dzyaloshinskii-Moriya interaction caused by the Rashba spin-orbit coupling. Such long-period magnetic order is also discussed by the Hartree-Fock approximation for the Rashba-Hubbard model <cit.>. 
However, there is a contradiction among these studies even within the Hartree-Fock approximation. In the weak-coupling region with a finite Rashba spin-orbit coupling, an antiferromagnetic order is obtained in Ref. <cit.>, but a paramagnetic phase is obtained in Refs. <cit.> and <cit.>. We will discuss this point in Sec. <ref>. The knowledge of the electron correlation beyond the Hartree-Fock approximation is limited. The electron correlation in the Rashba-Hubbard model is studied by a dynamical mean-field theory mainly focusing on magnetism <cit.> and by a cluster perturbation theory investigating the Mott transition in the paramagnetic state <cit.>. We will study the electron correlation in the paramagnetic phase by using the variational Monte Carlo method in Sec. <ref>. The results concerning the Mott transition are consistent with Ref. <cit.>. In addition, we find a transition to a Weyl semimetallic state by the electron correlation. Even without the Coulomb interaction, the band structure of this model is intriguing. When the Rashba spin-orbit coupling is finite, the upper and lower bands touch each other at Weyl points. In the large Rashba spin-orbit coupling limit, all the Weyl points locate at the Fermi level for half-filling. Topological aspects of the Weyl points and corresponding edge states of this simple model are discussed in Sec. <ref>. § MODEL The model Hamiltonian is given by H=H_kin+H_R+H_int. The kinetic energy term is given by H_kin = -t∑_(r,r') σ (c_rσ^†c_r' σ +c_r' σ^†c_rσ) =∑_kσϵ_k c_kσ^†c_kσ, where c_rσ is the annihilation operator of the electron at site r with spin σ and c_kσ is the Fourier transform of it. (r,r') denotes a pair of nearest-neighbor sites, t is the hopping integral, and the kinetic energy is ϵ_k=-2t (cos k_x + cos k_y), where the lattice constant is set as unity. The Rashba spin-orbit coupling term is given by <cit.> H_R = iλ_R ∑_rσσ' a=± 1 a (σ^x_σσ' c_rσ^†c_r+aŷσ' -σ^y_σσ' c_rσ^†c_r+ax̂σ') = -2λ_R∑_kσσ'(sin k_y σ^x_σσ'-sin k_x σ^y_σσ') c_kσ^†c_kσ' = ∑_kσσ'[h_x(k) σ^x_σσ'+h_y(k) σ^y_σσ'] c_kσ^†c_kσ' = ∑_kσσ' H_R σσ'(k) c_kσ^†c_kσ', where x̂ (ŷ) is the unit vector along the x (y) direction, σ are the Pauli matrices, λ_R is the coupling constant of the Rashba spin-orbit coupling, h_x(k)=-2λ_R sin k_y, and h_y(k)= 2λ_R sin k_x. We can assume t ≥ 0 and λ_R ≥ 0 without loss of generality. We parametrize them as t=t̃cosα and λ_R=√(2) t̃sinα. The band dispersion of H_0=H_kin+H_R is E_±(k) =-2t(cos k_x+cos k_y) ± |h(k)|, where |h(k)|=√(h_x^2(k)+h_y^2(k)) =2λ_R√(sin^2 k_x+sin^2 k_y). The bandwidth is W=8t̃. Due to the electron-hole symmetry of the model, the Fermi level is zero at half-filling. For α=0, that is, without the Rashba spin-orbit coupling, the band is doubly degenerate [Fig. <ref>(a)]. For a finite λ_R, the spin degeneracy is lifted except at the time-reversal invariant momenta X^(0)=(0,0), X^(1)=(π,0), X^(2)=(0,π), and X^(3)=(π,π) [Figs. <ref>(b) and <ref>(c)]. These are two-dimensional Weyl points. The energies at the Weyl points X^(1) and X^(2) are always zero. By increasing α to 0.5π (t=0), the energies at the other Weyl points X^(0) and X^(3) also move to zero. In Fig. <ref>(d), we show the energy dispersion in the entire Brillouin zone for α=0.5π. We can see the linear dispersions around the Weyl points. The Coulomb interaction term is given by H_int=U∑_rn_r↑n_r↓, where n_rσ=c_rσ^†c_rσ and U is the coupling constant of the Coulomb interaction. 
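The band structure described above is easy to reproduce. The following minimal Python sketch (not part of the original paper; the parameter values are arbitrary) evaluates E_±(k) from the formulas of this section and confirms that the two bands touch at the four time-reversal invariant momenta, with zero energy at X^(1) and X^(2).

import numpy as np

def bands(kx, ky, t, lam):
    # E_pm(k) = -2t(cos kx + cos ky) -/+ |h(k)|, with |h(k)| = 2*lam*sqrt(sin^2 kx + sin^2 ky)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    habs = 2.0 * lam * np.sqrt(np.sin(kx) ** 2 + np.sin(ky) ** 2)
    return eps - habs, eps + habs

# parametrization t = t_tilde*cos(alpha), lambda_R = sqrt(2)*t_tilde*sin(alpha); W = 8*t_tilde
t_tilde, alpha = 1.0, 0.3 * np.pi
t, lam = t_tilde * np.cos(alpha), np.sqrt(2.0) * t_tilde * np.sin(alpha)

trim = {"X0": (0.0, 0.0), "X1": (np.pi, 0.0), "X2": (0.0, np.pi), "X3": (np.pi, np.pi)}
for name, (kx, ky) in trim.items():
    em, ep = bands(kx, ky, t, lam)
    print(name, em, ep)   # E_- = E_+ at every Weyl point; E = 0 at X1 and X2
# for alpha = 0.5*pi (t = 0) the Weyl points at X0 and X3 also move to E = 0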
§ TOPOLOGY AND EDGE STATES OF THE NON-INTERACTING HAMILTONIAN The energy bands degenerate when h(k)=0, i.e., at the Weyl points. In the vicinity of these points, we set k=X^(l)+p and obtain H_R(k) = ∑_j h_j(k)σ^j ≃∑_ij. ∂ h_j(k)/∂ k_i|_k=X^(l) p_iσ^j = ∑_ijv^(l)_ijp_iσ^j. The chirality of each Weyl point X^(l) is defined as χ_l = sgn [ v^(l)] <cit.> and we obtain χ_0=χ_3=1 and χ_1=χ_2=-1. The winding number of a normalized two-component vector field ĥ(k)=h(k)/|h(k)| is <cit.> w_l = ∮_C_ldk/2π·[ ĥ_x(k)∇ĥ_y(k) -ĥ_y(k)∇ĥ_x(k)], where C_l is a loop enclosing X_l. We obtain w_l=χ_l. Figure <ref> shows ĥ(k) around k=X^(0) and X^(1) as examples. We can recognize the winding numbers 1 and -1, respectively, from this figure. These topological numbers are related to the Berry phase <cit.>. The eigenvector of H_R(k) with eigenvalue -|h(k)| is |k⟩ =(1/√(2))(-1,ĥ_x(k)+iĥ_y(k))^T. The Berry connection is a(k) = -i⟨k | ∇ |k⟩ = 1/2[ ĥ_x(k)∇ĥ_y(k) -ĥ_y(k)∇ĥ_x(k)]. Then, the Berry phase is γ_l = ∫_C_ldk·a(k) =w_lπ. From the existence of such topological defects like the Weyl points, we expect edge states as in graphene with Dirac points <cit.>. We consider two types of edges: the edges along an axis direction [straight edges, Fig. <ref>(a)] and the edges along [11] direction [zigzag edges, Fig. <ref>(b)]. We denote the momentum along the edges as k and the momentum perpendicular to the edges as k_⊥. To discuss the existence of the edge states, the chiral symmetry and the winding number for a fixed k are important <cit.>. The Rashba term has a chiral symmetry: { H_R(k), σ^z } = H_R(k)σ^z+σ^zH_R(k)=0 and σ^zσ^z †=I with I being the unit matrix. The winding number for a fixed k is given by w(k) = ∫_0^2πdk_⊥/2π[ ĥ_x(k) ∂/∂ k_⊥ĥ_y(k) -ĥ_y(k) ∂/∂ k_⊥ĥ_x(k) ]. For the straight edges, we find w(k)=0 and we expect that the edge states are probably absent. For the zigzag edges, h_x(k)=-2λ_R sin(k-k_⊥) and h_y(k)= 2λ_R sin(k+k_⊥), where we have set 1/√(2) times the bond length as unity, and we find w(k)=-sgn[sin(2k)] except for k = 0, ±π/2, and ±π (projected Weyl points). At the projected Weyl points, w(k)=0. Thus, the edge states should exist except for the projected Weyl points at least without t. We note that the edge states can be understood as those of a one-dimensional topological insulator. The model only with the Rashba term with fixed k is a one-dimensional model. When this one-dimensional system has a gap with a non-zero topological number, the system can be regarded as a one-dimensional topological insulator and has edge states. This one-dimensional system is of symmetry class BDI and can possess a topological number of ℤ <cit.>. To explicitly demonstrate the existence of the edge states, we numerically evaluate the band energy for lattices with finite widths. We denote the number of lattice sites perpendicular to the edges as N (see Fig. <ref>) and obtain 2N bands. The obtained energy bands are shown in Fig. <ref>. For the straight edges [Figs. <ref>(a)–(c)], we do not find the edge states. It is consistent with w(k)=0. For the zigzag edges [Figs. <ref>(d)–(f)], we obtain isolated zero-energy states except for λ_R=0 [Fig. <ref>(d)]. In particular, for α=0.5π, the zero-energy states appear at all the k points except for the projected Weyl points as is expected from w(k) 0. We find that the zero-energy states remain even for finite t as shown in Fig. <ref>(e). For an even number of N, the energy of the zero-energy states shifts from zero around the projected Weyl points when N is small. 
For an odd number of N, we obtain zero energy even for a small N. Thus, we set N=51 in the calculations. We discuss the characteristics of the zero-energy edge states. We define c_i kσ as the Fourier transform of c_rσ along the edges, where i labels the site perpendicular to the edges (see Fig. <ref>). For the lattice with the zigzag edges, we can show that the states c_-(N-1)/2, π/4, ↓^†|0⟩ and c_(N-1)/2, π/4, ↑^†|0⟩ do not have matrix elements of H_R, where |0⟩ is the vacuum state. Thus, these states are the zero-energy states for α=0.5π completely localized on the left and right edges, respectively, with opposite spins. This helical character of the edge states is natural since the system lacks inversion symmetry due to the Rashba spin-orbit coupling. For other momenta and α, we calculate the spin density of the zero-energy edge states n_0 k σ(i)=⟨ 0 k| c_ikσ^† c_ikσ|0 k ⟩, where |0 k ⟩ denotes the zero-energy state at momentum k. The zero-energy states are doubly degenerate, and we take the average of the two states. We show n_0 k σ(i) for α=0.3π, as an example, in Fig. <ref>. At k where the bulk band gap is sufficiently large, the zero-energy states are localized well on the edges [Figs. <ref>(c) and <ref>(d)]. As the bulk band gap becomes small, the zero-energy states penetrate inner sites [Figs. <ref>(b) and <ref>(e)] and the zero-energy states extend in the entire lattice when the gap closes [Figs. <ref>(a) and <ref>(f)]. The spin components are opposite between the edges. For example, for k=0.4π and 0.45π, the up-spin state dominates on the right edge while the down-spin state dominates on the left edge. Thus, the edge states are helical. The spin components are exchanged between states at k and -k [compare Fig. <ref>(d) with Fig. <ref>(g) and Fig. <ref>(e) with Fig. <ref>(h)]. In Fig. <ref>(i), we show a schematic view of the spin density corresponding to k≃ 0.4π on the real-space lattice. § WEYL SEMIMETALLIC STATE INDUCED BY THE CORRELATION EFFECTS In this section, we investigate the effects of the Coulomb interaction U at half-filling, i.e., the electron number per site n=1, within the paramagnetic phase by applying the variational Monte Carlo method <cit.>. To achieve this objective, it is necessary to select a wave function capable of describing the Mott insulating state, as a Mott transition is anticipated, at least in the ordinary Hubbard model without the Rashba spin-orbit coupling. In this study, we employ a wave function with doublon-holon binding factors [doublon-holon binding wave function (DHWF)] <cit.>. A doublon means a doubly occupied site and a holon means an empty site. Such intersite factors like doublon-holon binding factors are essential to describe the Mott insulating state <cit.>. Indeed, the DHWF has succeeded in describing the Mott transition for the single-orbital <cit.> and two-orbital <cit.> Hubbard models. The DHWF is given by |Ψ(α_eff)⟩ = P_d P_h P_G | Φ(α_eff)⟩. The Gutzwiller projection operator P_G=∏_r[1-(1-g)P_d r], describes onsite correlations, where P_d r = n_r↑n_r↓ is the projection operator onto the doublon state at r and g is a variational parameter. The parameter g tunes the population of the doubly occupied sites. When the onsite Coulomb interaction is strong and n=1, most sites should be occupied by a single electron each. In this situation, if a doublon is created, a holon should be around it to reduce the energy by using singly occupied virtual states. P_d and P_h describe such doublon-holon binding effects. 
P_d is an operator to include intersite correlation effects concerning the doublon states. It is defined as follows <cit.>: P_d=∏_r[1-(1-ζ_d) P_d r∏_a (1-P_h r+a) ], where P_h r = (1-n_r↑)(1-n_r↓) is the projection operator onto the holon state at r and a denotes the vectors connecting the nearest-neighbor sites. P_d gives a factor ζ_d when site r is in the doublon state and there is no holon at the nearest-neighbor sites r+a. Similarly, P_h, describing the intersite correlation effects on the holon state, is defined as P_h=∏_r[1-(1-ζ_h) P_h r∏_a (1-P_d r+a) ]. A factor ζ_h appears when a holon exists without a nearest-neighboring doublon. For the half-filled case, we can use the relation ζ_d=ζ_h due to the electron-hole symmetry of the model. The one-electron part |Φ(α_eff) ⟩ of the wave function is given by the ground state of the non-interacting Hamiltonian H_0(α_eff), in which α in H_0 is replaced by α_eff. We can choose α_eff different from the original α in the model Hamiltonian. Such a band renormalization effect of the one-electron part is discussed for a Hubbard model with next-nearest-neighbor hopping <cit.>. We define the normal state as |Ψ_N⟩=|Ψ(α_eff=α)⟩, i.e., α_eff remains the bare value. We also define the Weyl semimetallic state as |Ψ_Weyl⟩=|Ψ(α_eff=0.5π)⟩, i.e., all the Weyl points are at the Fermi level and the Fermi surface disappears. In addition, we can choose other values of α_eff, but in a finite-size lattice, a slight change of α_eff does not change the set of the occupied wave numbers and the wave function |Φ(α_eff) ⟩. Thus, we have limited choices for α_eff, as in the band renormalization of the Hubbard model with next-nearest-neighbor hopping <cit.>. We use the antiperiodic-periodic boundary conditions since the closed shell condition is satisfied, i.e., no k point is exactly on the Fermi surface for a finite-size lattice, and there is no ambiguity in constructing |Φ(α_eff)⟩. The calculations are done for L × L lattices with L=12, 14, and 16. We evaluate the expectation value of the energy by the Monte Carlo method. We optimize the variational parameters g and ζ_d=ζ_h to minimize the energy. We denote the optimized energy of |Ψ(α_eff) ⟩ as E(α_eff). In particular, we denote E_N=E(α_eff=α) and E_Weyl=E(α_eff=0.5π). By using the Monte Carlo method, we also evaluate the momentum distribution function n(k)=∑_σ⟨ c_kσ^†c_kσ⟩, where ⟨⋯⟩ represents the expectation value in the optimized wave function.
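To illustrate how the projection factors defined above act, the following sketch (our own, with hypothetical array and function names; a helper one might use inside a variational Monte Carlo sampler) evaluates the total weight assigned by P_G, P_d, and P_h to one real-space occupation configuration on a periodic L×L lattice.

```python
import numpy as np

def correlation_weight(n_up, n_dn, g, zeta_d, zeta_h):
    """Weight from P_G, P_d, P_h for one occupation configuration.

    n_up, n_dn : (L, L) integer arrays of 0/1 occupations (periodic boundaries).
    P_G contributes a factor g per doublon; P_d contributes zeta_d per doublon
    with no nearest-neighbor holon; P_h contributes zeta_h per holon with no
    nearest-neighbor doublon.
    """
    doublon = (n_up == 1) & (n_dn == 1)
    holon = (n_up == 0) & (n_dn == 0)

    # True where at least one of the four nearest neighbors is a holon / doublon
    nn_holon = np.zeros_like(holon)
    nn_doublon = np.zeros_like(doublon)
    for axis in (0, 1):
        for shift in (+1, -1):
            nn_holon |= np.roll(holon, shift, axis=axis)
            nn_doublon |= np.roll(doublon, shift, axis=axis)

    n_d = doublon.sum()                          # doublons (Gutzwiller factor g)
    n_d_isolated = (doublon & ~nn_holon).sum()   # doublons without an adjacent holon
    n_h_isolated = (holon & ~nn_doublon).sum()   # holons without an adjacent doublon
    return g**n_d * zeta_d**n_d_isolated * zeta_h**n_h_isolated
```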
In Fig. <ref>(a), we show n(k) in the normal state at α=0.25π for L=16. For U/t̃=10, n(k) has clear discontinuities at the Fermi momenta. On the other hand, for U/t̃=14, n(k) does not have such a discontinuity; that is, the system is insulating and a Mott metal-insulator transition takes place between U/t̃=10 and U/t̃=14. To determine the Mott metal-insulator transition point U_MIT, we evaluate the quasiparticle renormalization factor Z, which is inversely proportional to the effective mass and becomes zero in the Mott insulating state, from the jump in n(k). Except for α=0, we evaluate Z from the jump between (π,0) and (π,π) as shown in Fig. <ref>(a). For α=0, this path does not intersect the Fermi surface and we use the jump between (π,π) and (0,0) instead. In Fig. <ref>(b), we show the U dependence of Z for α=0.25π and L=16. By extrapolating Z to zero, we determine U_MIT/t̃≃ 12.9. We note that for a small α with a large L, the Mott transition becomes first order, consistent with a previous study for α=0 <cit.>. We have also evaluated energies for some values of α_eff≠α. Figure <ref>(a) shows energies for α_eff=0.18π and 0.22π measured from the normal state energy at α=0.2π for L=16. The normal state has the lowest energy, at least for U/t̃≤ 20. Thus, the renormalization of α, even if it exists, is weak for a system distant from the Weyl semimetallic state (α=0.5π). A similar conclusion is obtained for the case of a small intersite spin-orbit coupling in the Kane-Mele-Hubbard model <cit.>. It is in contrast to the onsite spin-orbit coupling case <cit.>, where the effective spin-orbit coupling is enhanced by the Coulomb interaction even when the bare spin-orbit coupling is small. On the other hand, the renormalization of α becomes strong around α=0.5π. In Fig. <ref>(b), we show the energy E_Weyl of the Weyl semimetallic state measured from that of the normal state for α=0.4π for L=16. E_Weyl becomes lower than the normal state energy at U>U_Weyl≃ 9.4t̃. There is a possibility that the normal state changes to the Weyl semimetallic state gradually by changing α_eff continuously. However, for a finite lattice, the choices of α_eff are limited between α_eff=α and α_eff=0.5π. For example, at α=0.4π, there is no choice for L=12 and L=14 and only one choice 0.4017<α_eff/π<0.4559 for L=16. For this reason, we evaluate U_Weyl by comparing the energies of the normal and the Weyl semimetallic states to show the tendency toward the Weyl semimetallic state by the renormalization effect on α. Figure <ref> shows a phase diagram without considering magnetic order. The size dependence of the phase boundaries is weak. For a weak Rashba spin-orbit coupling region, i.e., for a small α, the Rashba spin-orbit coupling stabilizes the metallic phase. It is consistent with a previous study by a cluster perturbation theory <cit.>. Around α=0.5π, we obtain a wide region of the Weyl semimetallic phase. Thus, we expect that phenomena originating from the Weyl points can be realized even away from α=0.5π with the aid of electron correlations. In the Weyl semimetallic state, the density of states at the Fermi level vanishes, and thus an energy gain is expected, similar to the energy gain by a gap opening in an antiferromagnetic transition. We note that such a renormalization effect on α cannot be expected within the Hartree-Fock approximation and is a result of the electron correlations beyond the Hartree-Fock approximation. § HARTREE-FOCK APPROXIMATION FOR MAGNETISM In this section, we discuss the magnetism of the model by the Hartree-Fock approximation. The energy dispersion given in Eq. (<ref>) has the following property: E_±(k+Q)=-E_∓(k) for Q=(π,π). When E_a(k)=0, in particular, E_-a(k+Q)=E_a(k)=0. Thus, the Fermi surface is perfectly nested at half-filling (the Fermi energy is zero) with the nesting vector Q=(π,π) [see Figs. <ref>(a)–(c)]. Due to this nesting, the magnetic susceptibility at Q=(π,π) diverges at zero temperature <cit.>. It indicates that the magnetic order occurs with an infinitesimally small value of the Coulomb interaction U at zero temperature. However, some recent Hartree-Fock studies argue for the existence of a paramagnetic phase up to a finite U <cit.>. To resolve this contradiction and gain insights into magnetism, we apply the Hartree-Fock approximation to the model within two-sublattice magnetic order, i.e., with an ordering vector of Q=(π,π) or Q=(π,0).
The Hartree-Fock Hamiltonian is given by H_HF = ∑_k [ c_k^† c_k+Q^† ] [ ϵ̂(k) -Δ·σ; -Δ·σ ϵ̂(k+Q) ] [ c_k; c_k+Q ], where the k-summation runs over the folded Brillouin zone of the antiferromagnetic state, c_k=(c_k↑,c_k↓)^T, ϵ̂(k)=ϵ_k I+H_R(k), and Δ=U m_AF. Here, m_AF=[1/(2L^2)]∑_rσσ' e^-iQ·r⟨ c_rσ^†σ_σσ' c_rσ'⟩_HF, where ⟨⋯⟩_HF represents the expectation value in the ground state of H_HF. We solve the gap equation Δ=U m_AF self-consistently. First, we consider the magnetic order for Q=(π,π). Without the Rashba spin-orbit coupling, the asymptotic form m_AF=|m_AF|∼ (t̃/U)e^-2π√(t̃/U) for the weak-coupling region Δ=|Δ| ≪ W was obtained by Hirsch by analyzing the gap equation <cit.>. If we take into consideration the fact that the asymptotic form of the density of states ρ(ϵ) ≃ -[1/(2π^2 t̃)] ln [|ϵ|/(16t̃)] for ϵ≃ 0 <cit.> is a good approximation even up to the band edge [see Fig. <ref>(d)], we obtain m_AF ≃ (32t̃/U)e^-2π√(t̃/U). Indeed, this approximate form reproduces the numerical data well in the weak-coupling region, as shown in Fig. <ref>(a). For a finite λ_R, we find numerically that m_AF is parallel to the x or y direction. This is expected from the effective Hamiltonian in the strong-coupling limit, which we discuss later. By assuming Δ≪λ_R and Δ≪ W, we obtain m_AF∼ (t̃/U)e^-2/[Uρ(0)] for a finite ρ(0), where ρ(0) is the density of states at the Fermi level. The coefficient of m_AF is determined by the entire behavior of the density of states up to the band edge [see Figs. <ref>(e) and <ref>(f)] and cannot be obtained analytically in general. Figures <ref>(b) and <ref>(c) show the numerically obtained m_AF for α=0.2π and 0.4π, respectively, along with fitted curves of the form (at̃/U)e^-2/[Uρ(0)], where a is the fitting parameter. The fitted curves reproduce the numerical data well in the weak-coupling region. From the obtained asymptotic form and the numerical data supporting it, we conclude that the magnetic order occurs for an infinitesimally small U for 0 ≤α < 0.5π, consistent with the divergence of the magnetic susceptibility <cit.>. We cannot apply this asymptotic form for α=0.5π since ρ(0)=0 there. The numerical result shown in Fig. <ref>(d) indicates a first-order transition for α=0.5π. Here, we discuss previous papers indicating the existence of the paramagnetic phase with finite U. In Ref. <cit.>, the authors introduced a threshold ε for the magnetization m_AF and determined the magnetic transition point as the point where m_AF becomes smaller than ε. However, m_AF becomes exponentially small in the weak-coupling region, as understood from the above analysis. In Ref. <cit.>, ε is not sufficiently small to resolve such an exponentially small m_AF, and a finite region of the paramagnetic phase was obtained. In Ref. <cit.>, the authors calculated the energy difference Δ E between the paramagnetic state and the antiferromagnetic state. They then introduced a scaling between Δ E and U-U_AF, where U_AF is the antiferromagnetic transition point, and tuned U_AF to collapse the data with different α onto a single curve in a large-U region. In this way, they obtained a finite U_AF for α≠0. However, this scaling analysis does not have a firm basis. In particular, if such a scaling holds for critical behavior, the data collapse should occur for U ≃ U_AF, not for a large-U region.
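The practical consequence of these asymptotic forms, namely that the ordered moment becomes exponentially small at weak coupling, can be made explicit with a few lines of code. The sketch below evaluates the λ_R=0 expression m_AF ≃ (32 t̃/U) e^{-2π√(t̃/U)} quoted above; the chosen values of U/t̃ and the comparison with a fixed detection threshold are our own illustration.

```python
import numpy as np

# Weak-coupling asymptotic form of the (pi, pi) ordered moment for lambda_R = 0
# (energies in units of t~):  m_AF ~ (32 t~/U) * exp(-2 pi sqrt(t~/U))
def m_af_asymptotic(u):
    return 32.0 / u * np.exp(-2.0 * np.pi / np.sqrt(u))

for u in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"U/t = {u:4.2f}   m_AF ~ {m_af_asymptotic(u):.2e}")
# A fixed magnetization threshold (e.g. 1e-2) is only exceeded above U/t ~ 0.5,
# so threshold-based criteria can easily mistake the weak-coupling region for a
# paramagnetic phase.
```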
We have also solved the gap equation for Q=(π,0) and obtained m_AF parallel to the y direction. By comparing the energies for Q=(π,π) and Q=(π,0), we construct the phase diagram shown in Fig. <ref>. As noted, the antiferromagnetic state with Q=(π,π) occurs at infinitesimally small U except for α=0.5π. The Weyl semimetallic state remains for U/ t̃≲ 4.4 at α=0.5π. The antiferromagnetic state with Q=(π,0) appears at large U for α/π≳ 0.2. This phase boundary can be understood from the effective Hamiltonian in the strong-coupling limit. The effective Hamiltonian is derived from second-order perturbation theory in t and λ_R and is given by <cit.> H_eff = ∑_raμ[ J^μ_a S_r^μ S_r+a^μ +D_a^μ(S_r×S_r+a)^μ], where a=x̂ or ŷ, μ=x, y, or z, S_r is the spin operator at site r, J_x̂^x =J_x̂^z =J_ŷ^y =J_ŷ^z = 4(t^2-λ_R^2)/U, J_x̂^y =J_ŷ^x = 4(t^2+λ_R^2)/U, D_x̂^y =-D_ŷ^x =8tλ_R/U, and the other components of D_a are zero. From the anisotropy in the interaction, we expect the ordered moments along the x or y direction for Q=(π,π) and along the y direction for Q=(π,0). Thus, the directions of the ordered moments obtained with the Hartree-Fock approximation are in accord with the effective Hamiltonian. For t ≫λ_R (α≃ 0), the magnetic order with Q=(π,π) is stable, as in the ordinary Heisenberg model. For t ≪λ_R (α≃ 0.5π), the magnetic order with Q=(π,0) has lower energy than that with Q=(π,π) due to the anisotropic interaction. For t=λ_R (J_x̂^x =J_x̂^z =J_ŷ^y =J_ŷ^z=0), if we ignore the Dzyaloshinskii-Moriya interaction D_a, the model reduces to the compass model <cit.>, which is known as a highly frustrated model. The condition t=λ_R corresponds to α=tan^-1(1/√(2))=0.1959π. Thus, the phase boundary α≃ 0.2π obtained with the Hartree-Fock approximation in the large-U region corresponds to the highly frustrated region of the model. However, in the large-U region, we expect longer-period magnetic order due to the Dzyaloshinskii-Moriya interaction. This is beyond the scope of the present study and has already been investigated in previous studies using the effective Hamiltonian <cit.>. Our important finding in this section is the absence of the paramagnetic phase except for α=0.5π in the weak-coupling region. However, the ordered moment and the energy gain of the antiferromagnetic state in the weak-coupling region are exponentially small. Thus, the effects of this magnetic order should be weak. In addition, this magnetic order would be easily destroyed by perturbations such as a next-nearest-neighbor hopping breaking the nesting condition <cit.>. Thus, the discussions in the previous sections without considering magnetic order are still meaningful. § SUMMARY We have investigated the Rashba-Hubbard model on a square lattice. The Rashba spin-orbit coupling generates two-dimensional Weyl points, which are characterized by non-zero winding numbers. We have investigated lattices with edges and found zero-energy states on a lattice with zigzag edges. The zero-energy states are localized around the edges and have a helical character. The large density of states due to the flat zero-energy band may result in magnetic polarization at the edges, similar to graphene <cit.>. We have also examined the effects of the Coulomb interaction U. The Coulomb interaction effectively renormalizes the ratio of the Rashba coupling constant λ_R to the hopping integral t. As a result, the Weyl points can be moved to the Fermi level by the correlation effects. Thus, the Coulomb interaction can enhance the effects of the Weyl points and assist in observing phenomena originating from the Weyl points even if the bare Rashba spin-orbit coupling is not large.
We have also investigated the magnetism of the model by the Hartree-Fock approximation. We have found that the antiferromagnetic state with the ordering vector Q=(π,π) occurs at infinitesimally small U due to the perfect nesting of the Fermi surface, even for a finite λ_R. However, the density of states at the Fermi level becomes small for a large λ_R and, as a result, the energy gain by the antiferromagnetic order is small in the weak-coupling region. Therefore, the effects of the magnetic order should be weak in such a region. In addition, this magnetic order would be unstable against perturbations, such as the inclusion of next-nearest-neighbor hopping <cit.>. Thus, we conclude that the discussions on the Weyl semimetal without assuming magnetism are still meaningful. This work was supported by JSPS KAKENHI Grant Number JP23K03330.
References
[Bychkov and Rashba (1984)] Y. A. Bychkov and E. I. Rashba, Properties of a 2D electron gas with lifted spectral degeneracy, JETP Lett. 39, 78 (1984).
[Datta and Das (1990)] S. Datta and B. Das, Electronic analog of the electro-optic modulator, Appl. Phys. Lett. 56, 665 (1990). https://doi.org/10.1063/1.102730
[Schultz et al. (1996)] M. Schultz, F. Heinrichs, U. Merkt, T. Colin, T. Skauli, and S. Løvold, Rashba spin splitting in a gated HgTe quantum well, Semicond. Sci. Technol. 11, 1168 (1996). https://doi.org/10.1088/0268-1242/11/8/009
[Nitta et al. (1997)] J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Gate Control of Spin-Orbit Interaction in an Inverted In0.53Ga0.47As/In0.52Al0.48As Heterostructure, Phys. Rev. Lett. 78, 1335 (1997). https://doi.org/10.1103/PhysRevLett.78.1335
[Engels et al. (1997)] G. Engels, J. Lange, T. Schäpers, and H. Lüth, Experimental and theoretical approach to spin splitting in modulation-doped InxGa1-xAs/InP quantum wells for B→0, Phys. Rev. B 55, R1958 (1997). https://doi.org/10.1103/PhysRevB.55.R1958
[Sinova et al. (2004)] J. Sinova, D. Culcer, Q. Niu, N. A. Sinitsyn, T. Jungwirth, and A. H. MacDonald, Universal Intrinsic Spin Hall Effect, Phys. Rev. Lett. 92, 126603 (2004). https://doi.org/10.1103/PhysRevLett.92.126603
[Inoue et al. (2004)] J.-i. Inoue, G. E. W. Bauer, and L. W. Molenkamp, Suppression of the persistent spin Hall current by defect scattering, Phys. Rev. B 70, 041303(R) (2004). https://doi.org/10.1103/PhysRevB.70.041303
[Chalaev and Loss (2005)] O. Chalaev and D. Loss, Spin-Hall conductivity due to Rashba spin-orbit interaction in disordered systems, Phys. Rev. B 71, 245318 (2005). https://doi.org/10.1103/PhysRevB.71.245318
[Dimitrova (2005)] O. V. Dimitrova, Spin-Hall conductivity in a two-dimensional Rashba electron gas, Phys. Rev. B 71, 245327 (2005). https://doi.org/10.1103/PhysRevB.71.245327
[Sugimoto et al. (2006)] N. Sugimoto, S. Onoda, S. Murakami, and N. Nagaosa, Spin Hall effect of a conserved current: Conditions for a nonzero spin Hall current, Phys. Rev. B 73, 113305 (2006). https://doi.org/10.1103/PhysRevB.73.113305
[Dugaev et al. (2010)] V. K. Dugaev, M. Inglot, E. Y. Sherman, and J. Barnaś, Robust impurity-scattering spin Hall effect in a two-dimensional electron gas, Phys. Rev. B 82, 121310(R) (2010). https://doi.org/10.1103/PhysRevB.82.121310
[Shitade and Tatara (2022)] A. Shitade and G. Tatara, Spin accumulation without spin current, Phys. Rev. B 105, L201202 (2022). https://doi.org/10.1103/PhysRevB.105.L201202
[Gor'kov and Rashba (2001)] L. P. Gor'kov and E. I. Rashba, Superconducting 2D System with Lifted Spin Degeneracy: Mixed Singlet-Triplet State, Phys. Rev. Lett. 87, 037004 (2001). https://doi.org/10.1103/PhysRevLett.87.037004
[Yanase and Sigrist (2008)] Y. Yanase and M. Sigrist, Superconductivity and Magnetism in Non-centrosymmetric System: Application to CePt3Si, J. Phys. Soc. Jpn. 77, 124711 (2008). https://doi.org/10.1143/JPSJ.77.124711
[Beyer et al. (2023)] J. Beyer, J. B. Hauck, L. Klebl, T. Schwemmer, D. M. Kennes, R. Thomale, C. Honerkamp, and S. Rachel, Rashba spin-orbit coupling in the square-lattice Hubbard model: A truncated-unity functional renormalization group study, Phys. Rev. B 107, 125115 (2023). https://doi.org/10.1103/PhysRevB.107.125115
[Cocks et al. (2012)] D. Cocks, P. P. Orth, S. Rachel, M. Buchhold, K. Le Hur, and W. Hofstetter, Time-Reversal-Invariant Hofstadter-Hubbard Model with Ultracold Fermions, Phys. Rev. Lett. 109, 205303 (2012). https://doi.org/10.1103/PhysRevLett.109.205303
[Radić et al. (2012)] J. Radić, A. Di Ciolo, K. Sun, and V. Galitski, Exotic Quantum Spin Models in Spin-Orbit-Coupled Mott Insulators, Phys. Rev. Lett. 109, 085303 (2012). https://doi.org/10.1103/PhysRevLett.109.085303
[Gong et al. (2015)] M. Gong, Y. Qian, M. Yan, V. W. Scarola, and C. Zhang, Dzyaloshinskii-Moriya Interaction and Spiral Order in Spin-orbit Coupled Optical Lattices, Sci. Rep. 5, 10050 (2015). https://doi.org/10.1038/srep10050
[Minář and Grémaud (2013)] J. Minář and B. Grémaud, From antiferromagnetic ordering to magnetic textures in the two-dimensional Fermi-Hubbard model with synthetic spin-orbit interactions, Phys. Rev. B 88, 235130 (2013). https://doi.org/10.1103/PhysRevB.88.235130
[Kennedy et al. (2022)] W. Kennedy, S. dos Anjos Sousa-Júnior, N. C. Costa, and R. R. dos Santos, Magnetism and metal-insulator transitions in the Rashba-Hubbard model, Phys. Rev. B 106, 165121 (2022). https://doi.org/10.1103/PhysRevB.106.165121
[Kawano and Hotta (2023)] M. Kawano and C. Hotta, Phase diagram of the square-lattice Hubbard model with Rashba-type antisymmetric spin-orbit coupling, Phys. Rev. B 107, 045123 (2023). https://doi.org/10.1103/PhysRevB.107.045123
[Zhang et al. (2015)] X. Zhang, W. Wu, G. Li, L. Wen, Q. Sun, and A.-C. Ji, Phase diagram of interacting Fermi gas in spin-orbit coupled square lattices, New J. Phys. 17, 073036 (2015). https://doi.org/10.1088/1367-2630/17/7/073036
[Brosco and Capone (2020)] V. Brosco and M. Capone, Rashba-metal to Mott-insulator transition, Phys. Rev. B 101, 235149 (2020). https://doi.org/10.1103/PhysRevB.101.235149
[Mireles and Kirczenow (2001)] F. Mireles and G. Kirczenow, Ballistic spin-polarized transport and Rashba spin precession in semiconductor nanowires, Phys. Rev. B 64, 024426 (2001). https://doi.org/10.1103/PhysRevB.64.024426
[Hou (2013)] J.-M. Hou, Hidden-Symmetry-Protected Topological Semimetals on a Square Lattice, Phys. Rev. Lett. 111, 130403 (2013). https://doi.org/10.1103/PhysRevLett.111.130403
[Sun et al. (2012)] K. Sun, W. V. Liu, A. Hemmerich, and S. Das Sarma, Topological semimetal in a fermionic optical lattice, Nat. Phys. 8, 67 (2012). https://doi.org/10.1038/nphys2134
[Berry (1984)] M. V. Berry, Quantal phase factors accompanying adiabatic changes, Proc. R. Soc. London, Ser. A 392, 45 (1984). https://doi.org/10.1098/rspa.1984.0023
[Fujita et al. (1996)] M. Fujita, K. Wakabayashi, K. Nakada, and K. Kusakabe, Peculiar Localized State at Zigzag Graphite Edge, J. Phys. Soc. Jpn. 65, 1920 (1996). https://doi.org/10.1143/JPSJ.65.1920
[Ryu and Hatsugai (2002)] S. Ryu and Y. Hatsugai, Topological Origin of Zero-Energy Edge States in Particle-Hole Symmetric Systems, Phys. Rev. Lett. 89, 077002 (2002). https://doi.org/10.1103/PhysRevLett.89.077002
[Hatsugai (2009)] Y. Hatsugai, Bulk-edge correspondence in graphene with/without magnetic field: Chiral symmetry, Dirac fermions and edge states, Solid State Commun. 149, 1061 (2009). https://doi.org/10.1016/j.ssc.2009.02.055
[Schnyder et al. (2008)] A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Classification of topological insulators and superconductors in three spatial dimensions, Phys. Rev. B 78, 195125 (2008). https://doi.org/10.1103/PhysRevB.78.195125
[Kitaev (2009)] A. Kitaev, Periodic table for topological insulators and superconductors, AIP Conf. Proc. 1134, 22 (2009). https://doi.org/10.1063/1.3149495
[Ryu et al. (2010)] S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, Topological insulators and superconductors: Tenfold way and dimensional hierarchy, New J. Phys. 12, 065010 (2010). https://doi.org/10.1088/1367-2630/12/6/065010
[Yokoyama and Shiba (1987)] H. Yokoyama and H. Shiba, Variational Monte-Carlo Studies of Hubbard Model. I, J. Phys. Soc. Jpn. 56, 1490 (1987). https://doi.org/10.1143/JPSJ.56.1490
[Kaplan et al. (1982)] T. A. Kaplan, P. Horsch, and P. Fulde, Close Relation between Localized-Electron Magnetism and the Paramagnetic Wave Function of Completely Itinerant Electrons, Phys. Rev. Lett. 49, 889 (1982). https://doi.org/10.1103/PhysRevLett.49.889
[Yokoyama and Shiba (1990)] H. Yokoyama and H. Shiba, Variational Monte-Carlo Studies of Hubbard Model. III. Intersite Correlation Effects, J. Phys. Soc. Jpn. 59, 3669 (1990). https://doi.org/10.1143/JPSJ.59.3669
[Yokoyama (2002)] H. Yokoyama, Variational Monte Carlo Studies of Attractive Hubbard Model. I, Prog. Theor. Phys. 108, 59 (2002). https://doi.org/10.1143/PTP.108.59
[Capello et al. (2006)] M. Capello, F. Becca, S. Yunoki, and S. Sorella, Unconventional metal-insulator transition in two dimensions, Phys. Rev. B 73, 245116 (2006). https://doi.org/10.1103/PhysRevB.73.245116
[Watanabe et al. (2006)] T. Watanabe, H. Yokoyama, Y. Tanaka, and J.-i. Inoue, Superconductivity and a Mott Transition in a Hubbard Model on an Anisotropic Triangular Lattice, J. Phys. Soc. Jpn. 75, 074707 (2006). https://doi.org/10.1143/JPSJ.75.074707
[Yokoyama et al. (2006)] H. Yokoyama, M. Ogata, and Y. Tanaka, Mott Transitions and d-Wave Superconductivity in Half-Filled-Band Hubbard Model on Square Lattice with Geometric Frustration, J. Phys. Soc. Jpn. 75, 114706 (2006). https://doi.org/10.1143/JPSJ.75.114706
[Onari et al. (2007)] S. Onari, H. Yokoyama, and Y. Tanaka, Phase diagram of half-filled square lattice for frustrated Hubbard model, Physica C 463–465, 120 (2007). https://doi.org/10.1016/j.physc.2007.05.017
[Koga et al. (2006)] A. Koga, N. Kawakami, H. Yokoyama, and K. Kobayashi, Variational Monte Carlo Study of Two Dimensional Multi-Orbital Hubbard Model, AIP Conf. Proc. 850, 1458 (2006). https://doi.org/10.1063/1.2355252
[Takenaka and Kawakami (2012)] Y. Takenaka and N. Kawakami, Variational Monte Carlo Study of Two-Dimensional Multi-Orbital Hubbard Model on Square Lattice, J. Phys.: Conf. Ser. 400, 032099 (2012). https://doi.org/10.1088/1742-6596/400/3/032099
[Kubo (2021)] K. Kubo, Destabilization of ferromagnetism by frustration and realization of a nonmagnetic Mott transition in the quarter-filled two-orbital Hubbard model, Phys. Rev. B 103, 085118 (2021). https://doi.org/10.1103/PhysRevB.103.085118
[Kubo (2022)] K. Kubo, Enhanced Spin-Orbit Coupling in a Correlated Metal, J. Phys. Soc. Jpn. 91, 124707 (2022). https://doi.org/10.7566/JPSJ.91.124707
[Kubo (2023)] K. Kubo, Enhancement of an Effective Spin-Orbit Coupling in a Correlated Metal, JPS Conf. Proc. 38, 011161 (2023). https://doi.org/10.7566/JPSCP.38.011161
[Sato and Yokoyama (2016)] R. Sato and H. Yokoyama, Band-Renormalization Effects and Predominant Antiferromagnetic Order in Two-Dimensional Hubbard Model, J. Phys. Soc. Jpn. 85, 074701 (2016). https://doi.org/10.7566/JPSJ.85.074701
[Richter et al. (2021)] M. Richter, J. Graspeuntner, T. Schäfer, N. Wentzell, and M. Aichhorn, Comparing the effective enhancement of local and nonlocal spin-orbit couplings on honeycomb lattices due to strong electronic correlations, Phys. Rev. B 104, 195107 (2021). https://doi.org/10.1103/PhysRevB.104.195107
[Liu et al. (2023)] Z. Liu, J.-Y. You, B. Gu, S. Maekawa, and G. Su, Enhanced spin-orbit coupling and orbital moment in ferromagnets by electron correlations, Phys. Rev. B 107, 104407 (2023). https://doi.org/10.1103/PhysRevB.107.104407
[Jiang (2023)] K. Jiang, Correlation Renormalized and Induced Spin-Orbit Coupling, Chin. Phys. Lett. 40, 017102 (2023). https://doi.org/10.1088/0256-307X/40/1/017102
[Hirsch (1985)] J. E. Hirsch, Two-dimensional Hubbard model: Numerical simulation study, Phys. Rev. B 31, 4403 (1985).
[Fazekas (1999)] P. Fazekas, Lecture Notes on Electron Correlation and Magnetism, Series in Modern Condensed Matter Physics, Vol. 5 (World Scientific, 1999). https://doi.org/10.1142/2945
[Kugel and Khomskii (1982)] K. I. Kugel and D. I. Khomskii, The Jahn-Teller effect and magnetism: Transition metal compounds, Sov. Phys. Usp. 25, 231 (1982). https://doi.org/10.1070/PU1982v025n04ABEH004537
http://arxiv.org/abs/2307.05301v1
20230711144552
Signal-background separation and energy reconstruction of gamma rays using pattern spectra and convolutional neural networks for the Small-Sized Telescopes of the Cherenkov Telescope Array
[ "J. Aschersleben", "T. T. H. Arnesen", "R. F. Peletier", "M. Vecchi", "C. Vlasakidis", "M. H. F. Wilkinson" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.HE" ]
J. Aschersleben^1,2, T. T. H. Arnesen^1, R. F. Peletier^1, M. Vecchi^1, C. Vlasakidis^1, M. H. F. Wilkinson^2
^1 Kapteyn Astronomical Institute, University of Groningen, PO Box 800, NL-9700 AV Groningen, The Netherlands
^2 Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, PO Box 407, NL-9700 AK Groningen, The Netherlands
Imaging Atmospheric Cherenkov Telescopes (IACTs) detect very high-energy gamma rays from ground level by capturing the Cherenkov light of the induced particle showers. Convolutional neural networks (CNNs) can be trained on IACT camera images of such events to differentiate the signal from the background and to reconstruct the energy of the initial gamma ray. Pattern spectra provide a 2-dimensional histogram of the sizes and shapes of features comprising an image and they can be used as an input for a CNN to significantly reduce the computational power required to train it. In this work, we generate pattern spectra from simulated gamma-ray and proton images to train a CNN for signal-background separation and energy reconstruction for the Small-Sized Telescopes (SSTs) of the Cherenkov Telescope Array (CTA). A comparison of our results with a CNN directly trained on CTA images shows that the pattern spectra-based analysis is about a factor of three less computationally expensive but not able to compete with the performance of the CTA images-based analysis. Thus, we conclude that the CTA images must contain additional information not represented by the pattern spectra.
Keywords: CTA, gamma rays, Imaging Atmospheric Cherenkov Telescopes, atmospheric shower reconstruction, machine learning
§ INTRODUCTION When a gamma ray reaches the Earth's atmosphere, it induces a cascade of secondary particles known as an air shower. The secondary particles can reach velocities higher than the speed of light in air, inducing a flash of Cherenkov light <cit.>. The Cherenkov light can be captured by Imaging Atmospheric Cherenkov Telescopes (IACTs) from the ground to reconstruct specific properties of the initial particle, such as its type, energy and direction (see <cit.> for an overview of ground-based gamma-ray astronomy). The Cherenkov Telescope Array (CTA) <cit.> is the next-generation ground-based observatory for gamma-ray astronomy at very high energies, offering 5-10 times better flux sensitivity than current-generation gamma-ray telescopes <cit.>, such as H.E.S.S. <cit.>, MAGIC <cit.> and VERITAS <cit.>. It will cover a wide energy range between 20 GeV and 300 TeV, benefiting from three different telescope types: Large-Sized Telescopes (LSTs), Medium-Sized Telescopes (MSTs) and Small-Sized Telescopes (SSTs). The CTA Observatory will be distributed over two arrays, one in the northern hemisphere on La Palma (Spain) and one in the southern hemisphere near Paranal (Chile). CTA will outperform the energy and angular resolution of current instruments, providing an energy resolution of ∼5% around 1 TeV and an angular resolution of ∼1 arcmin at its upper energy range. With its short timescale capabilities and large field of view of 4.5-8.5°, it will enable the observation of a wide range of astronomical sources, including transient, high-variability or extended gamma-ray sources.
Several analysis methods for IACT data have been developed to classify the initial particle and reconstruct its energy and direction. Hillas parameters <cit.>, proposed by A. M. Hillas in 1985, are one of the first reconstruction techniques. They describe features of the Cherenkov emission within the camera images and are widely used as input to machine learning algorithms like Random Forest <cit.> or Boosted Decision Trees <cit.> to perform full event reconstruction of gamma rays. Another approach is the ImPACT algorithm <cit.>, which performs event reconstruction using expected image templates generated from Monte Carlo simulations. Other methods such as the model analysis <cit.> and the 3D model analysis <cit.>, which are based on a semi-analytical shower model and a Gaussian photosphere shower model, respectively, proved to be more sensitive to certain properties of the shower <cit.>. Recently, convolutional neural networks (CNNs) <cit.> have been proposed and applied to IACT data <cit.>. CNNs are machine learning algorithms that are specialised for image data and are currently one of the most successful tools for image classification and regression tasks <cit.>. They rely on convolutional layers, which consist of image filters that are able to extract relevant features within an image. Among many others, models such as AlexNet <cit.>, GoogLeNet <cit.> and ResNet <cit.> established new techniques, such as the Rectified Linear Unit (ReLU) <cit.> activation function and deeper architectures, which set milestones for many subsequent architectures. ResNet won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2015 by introducing shortcut connections into the architecture and achieving a top-5 classification error of only 3.6% <cit.>. CNNs that contain these shortcut connections often achieve higher performance and are referred to as residual neural networks (ResNets). The first event classifications with a CNN trained on IACT images were presented in <cit.> and <cit.>, which demonstrated the signal-background separation capabilities of CNNs. Later work has shown the energy and direction reconstruction capabilities of CNNs for gamma rays <cit.>, their ability to run in stereo telescope mode <cit.> and their applicability to real data <cit.>. However, one of the main drawbacks of this method is that the training of CNNs is computationally very expensive <cit.>. It typically requires access to computing clusters with powerful graphics processing units (GPUs) and large amounts of random-access memory (RAM). The larger the dimension of the input image, the larger the computational power and time needed for the CNN training. A significant reduction of the dimension of the input image without any performance losses would therefore result in substantial savings in hardware and human resources, increase the efficiency of related scientific work and lower the environmental impact of CNNs <cit.>. An approach to this problem is pattern spectra <cit.>, which are commonly used tools for image classification <cit.> and can significantly reduce the computational power needed to train CNNs. They provide a 2-dimensional distribution of the sizes and shapes of features within an image and can be constructed using a technique known as granulometries <cit.>. The features within the image are extracted with connected operators <cit.>, which merge regions within an image with the same grey-scale value.
Compared to other feature extraction techniques, this approach has the advantage of not introducing any distortions into the image. In this work, we generate pattern spectra from simulated CTA images and apply them to a ResNet for signal-background separation and energy reconstruction of gamma rays. The application of a ResNet to pattern spectra takes advantage of their 2D nature by selecting relevant combinations of features within the CTA images. Our pattern spectra algorithm is based on the work presented in <cit.>, which provides two main advantages compared to other existing pattern spectra algorithms: (i) the computing time for creating the pattern spectra is independent of their dimensions and (ii) it is significantly less sensitive to noise. These properties merit the investigation of pattern spectra-based analysis for IACTs. Direction reconstruction of gamma rays is not considered here since pattern spectra are rotation invariant, meaning that the same CTA image rotated by an arbitrary angle would result in the same pattern spectrum. By generating pattern spectra from simulated CTA images, we aim to obtain a competitive algorithm that is significantly faster and less computationally intensive while keeping comparable performance to a CNN trained on CTA images in terms of signal-background separation and energy reconstruction of gamma rays. The structure of this article is as follows: In Section <ref>, the CTA dataset used in this analysis is described. Section <ref> is devoted to our analysis methods, including the pattern spectra algorithm, the ResNet architecture and the performance evaluation methods for our algorithms. The results are shown in Section <ref> and discussed in detail in Section <ref>. Finally, we state our conclusions in Section <ref>. The source code of this project is publicly available at <cit.>. § DATASET The dataset consists of simulated gamma-ray and proton events detected by the southern CTA array (Prod5_DL1 (ctapipe v0.10.5 <cit.>), zenith angle of 20°, North pointing <cit.>). Due to the hexagonal pixels integrated in the LST and MST cameras, which cannot be processed by the current version of the pattern spectra algorithm, only the 37 SSTs with rectangular pixels are considered in this analysis. The SST images containing the charge information, i.e. the integrated photodetector pulse, will be referred to as CTA images in the following. CTA images generated by gamma rays with an energy between 500 GeV and 100 TeV and protons with an energy between 1.5 TeV and 100 TeV have been considered for this study to match the operating energy range of the SSTs. For the energy reconstruction, ∼ 3 · 10^6 gamma-ray events generated with a 0.4° offset from the telescope pointing position, referred to as pointlike gamma rays in the following, are used. For the signal-background separation, ∼ 2 · 10^6 diffuse gamma rays and ∼ 2 · 10^6 diffuse protons are used, where the term diffuse describes events generated in a view cone of 10°. The pointlike and diffuse events are considered in the analysis to represent real observation conditions. When observing a source, background events reach the telescopes not only from the direction of the source but potentially from a much larger view cone. However, using pointlike gamma rays and diffuse proton events for signal-background separation would introduce a bias in the learning process of the CNN. Therefore, we consider diffuse events for the signal-background separation and pointlike events for the energy reconstruction task.
In particular for high energies, the dataset often includes single events that were captured by multiple SSTs. This results in several CTA images for a single event. Since the construction and training of a CNN that is able to handle a varying number of input images is very challenging, we constructed a single CTA image for each event as a first step towards the implementation of pattern spectra for the analysis of CTA images. In order to obtain a single CTA image per event, all CTA images of the same event are combined into a single image by adding up the individual pixel values of each image. We are aware that this reduces the performance of the array, but we adopt this strategy to simplify our proof-of-concept work. However, we do not promote the idea of image stacking for CNN analyses with CTA data when trying to maximise the performance of the CNN. § ANALYSIS §.§ Pattern spectra The algorithm used to extract pattern spectra from the CTA images is based on the work presented in <cit.> and will be briefly summarised in the following. Let f be a grey-scale image with grey levels h. Consider an image domain E ⊆ℝ^2 and let the set X ⊆ E denote a binary image with domain E. A grain of a binary image X is defined as a connected component of X. The peak component P^k_h(f) of an image f is defined as the kth grain of the threshold set T_h(f), which is defined as T_h(f) = {x ∈ E | f(x) ≥ h }. For each image f, a Max-tree is computed according to the algorithm described in <cit.>. The Max-tree is composed of nodes N^k_h(f), which consist of subsets of the peak components P^k_h(f). Figure <ref> (a) shows an example of a 2D grey-scale image, (b) the corresponding peak components P^k_h(f) and (c) its Max-tree with nodes N^k_h(f). The pattern spectra are based on the size and shape attributes of the peak components P^k_h(f). The size attribute corresponds to the area A(P^k_h(f)), which is computed as the number of pixels belonging to the detected feature. The shape attribute corresponds to I/A^2, with the moment of inertia I describing the sum of squared distances of the pixels to the centre of gravity of the feature. The size and shape attributes are binned into N = 20 size classes r and shape classes s, which results in a good compromise between the performance of the pattern spectra and the computational power needed to train the ResNet. The 2D pattern spectrum is computed from the Max-tree as follows <cit.>:
* Construct a 2D array Φ[r,s] of size N × N = 20 × 20.
* Set all elements of Φ[r,s] to zero.
* For each node N^k_h(f) of the Max-tree, compute the size class r from the area A(P^k_h(f)), the shape class s from I(P^k_h(f))/A(P^k_h(f))^2, and the grey-level difference δ_h between the current node and its parent.
* Add the product of δ_h and A(P^k_h(f)) to Φ[r,s].
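For reference, the accumulation rule above can be realised directly in code. The following sketch is a simplified, unoptimised implementation of our own (not the Max-tree code of <cit.>): it steps through the integer grey levels, labels the peak components of each threshold set with scipy.ndimage, and adds the area-weighted contributions to a 20 × 20 array; the logarithmic bin edges are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def pattern_spectrum(image, n_bins=20,
                     size_range=(1.0, 1.0e4), shape_range=(1.0e-2, 1.0e1)):
    """Simplified (size, shape) pattern spectrum via threshold decomposition.

    For every integer grey level h, the connected components of the threshold
    set T_h(f) = {x : f(x) >= h} are the peak components P_h^k; each one adds
    its area A to the bin selected by (A, I/A^2).  Stepping h by one grey level
    reproduces the Max-tree accumulation of delta_h * A, only more slowly.
    """
    size_edges = np.geomspace(*size_range, n_bins + 1)
    shape_edges = np.geomspace(*shape_range, n_bins + 1)
    spectrum = np.zeros((n_bins, n_bins))

    img = np.round(image).astype(int)
    for h in range(img.min() + 1, img.max() + 1):
        labels, n_comp = ndimage.label(img >= h)     # peak components P_h^k
        for k in range(1, n_comp + 1):
            ys, xs = np.nonzero(labels == k)
            area = ys.size                           # size attribute A
            inertia = np.sum((ys - ys.mean())**2 + (xs - xs.mean())**2)
            shape = inertia / area**2                # shape attribute I / A^2
            r = np.clip(np.searchsorted(size_edges, area) - 1, 0, n_bins - 1)
            s = np.clip(np.searchsorted(shape_edges, shape) - 1, 0, n_bins - 1)
            spectrum[r, s] += area                   # delta_h = 1 per grey level
    return spectrum
```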
An example of a pattern spectrum extracted from a CTA image is shown in Figure <ref>. The image in the top-left shows a CTA image of a 1.9 TeV gamma-ray event that was captured by eight SSTs. The bright features in the centre of the image correspond to the Cherenkov emission induced by the particle shower. Due to the different locations of the SSTs, the Cherenkov light is captured with different intensities and at different positions on the SST cameras. The pattern spectrum generated from the CTA image is shown in the bottom-left. Each pattern spectrum pixel represents a set of detected features. An example of the detected features is shown in the middle of Figure <ref>. The image on top shows a set of detected features within the CTA image highlighted in red. The image at the bottom shows the pattern spectrum with the red pixel representing these features. This specific example shows features with a small A and small I/A^2, referring to features with a small size and a circular-like shape. They correspond to individual pixels in the CTA image and represent mostly noise. Another example is shown in the top-right and bottom-right of Figure <ref>. Compared to the previous example, the red-marked pattern spectrum pixels correspond to larger A and I/A^2 values. Thus, the highlighted objects (red/orange) in the CTA image correspond to features with a larger size and a more elliptical shape. The detected features in this example are of particular interest since they represent the Cherenkov photons induced by the particle shower, which contain information about the type and energy of the initial particle. §.§ Residual neural network architecture For the signal-background separation and energy reconstruction of gamma-ray events, two individual but almost identical ResNet architectures are constructed and trained with either CTA images or pattern spectra. The architectures of our ResNets are identical to the ResNets presented in <cit.> and are based on the work presented in <cit.>. The ResNet is illustrated in Figure <ref>. Due to the rather shallow architecture compared to the ResNet presented in <cit.>, we refer to our architectures as thin residual neural networks (TRNs) in the following. They are constructed using TensorFlow 2.3.1 <cit.> and Keras 2.4.3 <cit.> and consist of 13 convolutional layers with Rectified Linear Unit (ReLU) <cit.> activation function, a global average pooling layer and two fully connected (dense) layers with 64 and 32 neurons, respectively. The output layer consists of a single neuron for the energy reconstruction and of two neurons with softmax <cit.> activation function for the signal-background separation. Shortcut connections <cit.> at every third convolutional layer were implemented in order to improve the stability and performance of the algorithm. The solid arrows in Figure <ref> represent linear shortcut connections, in which the input of a building block x is added to the output of the last layer of the building block F(x). If the input and output of a building block have different dimensions, the input x is passed through another convolutional layer with the same number of filters as the last layer of the building block. The output of this residual operation G(x) is added to the output of the last layer of the building block F(x). A filter size of 1× 1 is used for all shortcut connections with a convolutional operation. In total, the two TRNs have about 150000 trainable parameters.
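For concreteness, a minimal Keras sketch of such a TRN is given below. The description above fixes the overall ingredients (13 convolutional layers with ReLU activations, shortcut connections every third layer with 1×1 convolutions when the dimensions differ, global average pooling, dense layers with 64 and 32 neurons and a task-dependent output layer); the filter counts, kernel sizes and block arrangement used here are our own illustrative choices, not the published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Three conv layers F(x) plus a shortcut added to the block output.
    A 1x1 convolution G(x) matches the dimensions when they differ."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)  # G(x)
    return layers.Add()([y, shortcut])                                  # F(x) + G(x)

def build_trn(input_shape, task="energy"):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    for filters in (16, 32, 32, 64):   # 1 + 4*3 = 13 conv layers (1x1 shortcuts not counted)
        x = residual_block(x, filters)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(x)
    if task == "energy":
        outputs = layers.Dense(1)(x)                        # reconstructed energy
    else:
        outputs = layers.Dense(2, activation="softmax")(x)  # gammaness
    return tf.keras.Model(inputs, outputs)
```

A model for the pattern spectra input would then be built with input_shape=(20, 20, 1), while the CTA-image model takes the stacked camera image as input.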
§.§ Experiments The TRNs described in the previous section are trained and evaluated 10 times each on the datasets for both signal-background separation and energy reconstruction to perform a statistical analysis of the training process. Similar to the work presented in <cit.>, a multiplicity cut of four or more triggered telescopes is applied for both the gamma-ray and proton events. The dataset is split into 90% training data, of which 10% is used as validation data, and 10% test data. The weights of the TRN are initialized using the Glorot Uniform Initializer <cit.> and the training, validation and test data are randomized for each run. The adaptive moment (ADAM) optimizer <cit.> with a learning rate of 0.001 and a batch size of 32 is used for the TRN training. The training is stopped if there is no improvement on the validation dataset for over 20 epochs, and the model with the lowest validation loss is saved. The categorical cross entropy and the mean squared error <cit.> are applied as loss functions for the signal-background separation and energy reconstruction, respectively. The results shown in Section <ref> are obtained by evaluating the performance of each TRN on the test data. §.§.§ Signal-background separation Each event is labelled by its gammaness Γ, where Γ = 1 corresponds to a gamma ray (photon) and Γ = 0 corresponds to a proton. The output of the TRN is a Γ-value between 0 and 1, which describes a pseudo-probability of the event being a photon according to the TRN. For a fixed Γ-threshold α_Γ, the photon efficiency η_γ is defined as η_γ = TP / P, where TP is the number of true positives, i.e. photon events with Γ≥α_Γ (correctly classified photons), and P is the total number of positives (photons) that pass the selection criteria described in Section <ref>. Similarly, the proton efficiency η_p is defined as η_p = FP / N, where FP is the number of false positives, i.e. proton events with Γ≥α_Γ (misclassified protons), and N is the total number of negatives (protons) that pass the selection criteria. A good classifier results in a high photon efficiency η_γ and a low proton efficiency η_p for a given Γ-threshold. In order to evaluate the performance of our TRNs, the efficiencies as a function of the Γ-threshold and the effective area A_eff as a function of the true energy E_true are calculated. The effective area is determined by A_eff = η̃_γ· A_geom, where A_geom is the geometrical area of the instrument, i.e. A_geom = π r_max^2 with r_max being the maximum simulated impact radius, and η̃_γ= TP / P̃ with P̃ being the total number of simulated photons, including the events that did not pass the selection criteria in Section <ref>. Similarly, we define η̃_p = FP / Ñ with Ñ being the total number of simulated protons. The energy range is split into seven logarithmic bins, where each event is assigned to an energy bin based on its true energy E_true. The effective area is then calculated for each energy bin by increasing the Γ-threshold until η̃_p = 10^-3 is reached and extracting the corresponding η̃_γ. The value η̃_p = 10^-3 is motivated by the photon flux of the Crab Nebula being about three orders of magnitude lower than the isotropic flux of cosmic rays (CRs) within an angle of 1° around the direction of the source: Φ_γ^Crab≈ 10^-3·Φ_CR <cit.>. Furthermore, the receiver operating characteristic (ROC) curve <cit.> is determined. The ROC curve describes the photon efficiency η_γ versus the proton efficiency η_p. The area under the ROC curve (AUC) is calculated and used as a measure of the performance of each TRN. For part of our calculations, we make use of pyirf v0.7.0 <cit.>, which is a Python library for the generation of Instrument Response Functions (IRFs) and sensitivities for CTA. From the 10 TRNs, the mean efficiencies, effective area, ROC curve and AUC value are calculated for both the CTA images and pattern spectra-based analyses.
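The classification quantities defined in this subsection can be computed directly from the per-event gammaness values. The sketch below (plain numpy on hypothetical score arrays; the actual analysis uses pyirf for the instrument-response part) evaluates η_γ and η_p on a grid of thresholds and integrates the resulting ROC curve to obtain the AUC.

```python
import numpy as np

def efficiencies(gammaness, is_photon, thresholds):
    """eta_gamma = TP / P and eta_p = FP / N for each gammaness threshold."""
    g = np.asarray(gammaness)
    photon = np.asarray(is_photon, dtype=bool)
    P, N = photon.sum(), (~photon).sum()
    eta_gamma = np.array([(g[photon] >= a).sum() / P for a in thresholds])
    eta_p = np.array([(g[~photon] >= a).sum() / N for a in thresholds])
    return eta_gamma, eta_p

# hypothetical scores: gammaness of the test events and their true labels
gammaness = np.random.rand(10000)
is_photon = np.random.rand(10000) < 0.5

thresholds = np.linspace(0.0, 1.0, 201)
eta_gamma, eta_p = efficiencies(gammaness, is_photon, thresholds)

# ROC curve (eta_gamma versus eta_p) and its area; ~0.5 for the random scores
# above, close to 1 for a well-separating classifier
auc = np.trapz(eta_gamma[::-1], eta_p[::-1])
print(f"AUC = {auc:.3f}")
```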
§.§.§ Energy reconstruction The gamma-ray events are labelled by their true energy E_true, which the TRN learns to predict based on the training input. The performance of the TRN on the test data is evaluated by comparing the reconstructed energy E_rec of the TRN with the true energy E_true of the initial gamma ray. To this end, the relative energy error Δ E / E_true = (E_rec - E_true) / E_true is calculated for each event. The whole energy range between 500 GeV and 100 TeV is split into seven logarithmic bins and each event is assigned to an energy bin based on its true energy E_true. For each of these energy bins, the distribution of the relative energy error Δ E / E_true is determined and its median calculated. The median of Δ E / E_true is referred to as the energy bias in the following. Small (large) energy biases indicate high (low) accuracies. The distributions of the relative energy error Δ E / E_true are then bias-corrected by subtracting the median, i.e. (Δ E / E_true)_corr = Δ E / E_true - median(Δ E / E_true). The energy resolution is defined as the 68th percentile of the distribution |(Δ E / E_true)_corr|. From the 10 TRNs, the mean energy bias and energy resolution with their standard deviations are calculated for each energy bin for both the CTA images and pattern spectra-based analyses.
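These two figures of merit can be computed per energy bin as in the following numpy sketch (our own helper; the bin edges shown simply encode seven logarithmic bins between 500 GeV and 100 TeV, with energies expressed in TeV).

```python
import numpy as np

def energy_bias_resolution(e_true, e_rec, bin_edges):
    """Per-bin energy bias (median of dE/E_true) and energy resolution
    (68th percentile of |dE/E_true - median|), as defined above."""
    rel_err = (e_rec - e_true) / e_true
    bin_idx = np.digitize(e_true, bin_edges) - 1
    bias, resolution = [], []
    for b in range(len(bin_edges) - 1):
        d = rel_err[bin_idx == b]
        med = np.median(d)
        bias.append(med)
        resolution.append(np.percentile(np.abs(d - med), 68))
    return np.array(bias), np.array(resolution)

# seven logarithmic bins between 500 GeV and 100 TeV (energies in TeV)
bin_edges = np.geomspace(0.5, 100.0, 8)
```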
§ RESULTS §.§ Signal-background separation Two examples of the gammaness distributions obtained from a single TRN trained with the CTA images and pattern spectra are shown in Figure <ref>. Figure <ref> (left) shows a distinct separation between photon and proton events for the TRN trained with CTA images. The majority of photon events are classified with Γ = 1 and the majority of proton events with Γ = 0. The number of proton (photon) events continuously decreases for larger (smaller) Γ-values, which indicates a good separation capability of the TRN. Figure <ref> (right) shows the performance of the TRN trained with the pattern spectra, which results in a lower signal-background separation capability compared to the TRN trained with CTA images. Once again, the majority of photon events are classified with Γ = 1 and the majority of proton events with Γ = 0. However, the distributions decrease less rapidly compared to the CTA images-based analysis. The mean photon efficiency η_γ and proton efficiency η_p as a function of the Γ-threshold α_Γ are shown in Figure <ref>. The shaded regions in this figure and the following ones depict the standard deviation across the 10 TRNs. Both the photon efficiency and the proton efficiency decrease steadily for an increasing α_Γ-value. Up to Γ∼ 0.1, the pattern spectra-based analysis results in a very similar photon efficiency but in a much higher proton efficiency in comparison to the CTA images-based analysis. The proton efficiency of the pattern spectra approaches a similar value compared to the CTA images at Γ∼ 0.9, at which, however, the CTA images outperform the pattern spectra in the photon efficiency. Therefore, the CTA images result overall in better photon and proton efficiencies independent of the Γ-threshold α_Γ. Figure <ref> (left) shows the mean effective area A_eff as a function of the true energy E_true. The CTA images result in a higher effective area than the pattern spectra for all energies. The difference between the two analyses increases with increasing energy. The CTA images result in a maximum effective area of ∼12.8 × 10^5 m^2 at ∼80 TeV, whereas the pattern spectra result in a maximum effective area of ∼7.0 × 10^5 m^2 at ∼80 TeV, which corresponds to a factor of 1.8 between the two analyses. The mean ROC curve and corresponding AUC value are shown in Figure <ref> (right). As expected from the gammaness distributions discussed above, the ROC curve obtained from the CTA images is significantly steeper than the ROC curve obtained from the pattern spectra. The mean AUC value of 0.987 for the CTA images is therefore larger than the value of 0.929 obtained from the pattern spectra by a factor of 1.06. Therefore, the TRN trained with CTA images shows a higher signal-background separation capability than the pattern spectra-based analysis. §.§ Energy reconstruction Figure <ref> shows two examples of the energy migration matrices, i.e. the 2D histograms of E_rec against E_true, obtained from a single TRN trained with the CTA images and pattern spectra. Most of the events are distributed around the E_rec = E_true line for both the CTA images and pattern spectra-based analysis. However, the distribution obtained from the pattern spectra is more spread out compared to the CTA images-based analysis. The mean energy accuracy obtained from 10 independent TRNs is shown in Figure <ref> (left). The energy biases obtained from the CTA images-based analysis are closely distributed around 0, with the largest energy bias of ∼5% at the lowest energy bin. The energy biases obtained from the pattern spectra-based analysis reach up to ∼20%, with the largest energy biases at the lowest and highest energy bins. The absolute value of the energy bias obtained from the pattern spectra-based analysis is larger than the values obtained from the CTA images for all energies. The mean energy resolution obtained from 10 independent TRNs is shown in Figure <ref> (right). The CTA images-based analysis ranges from 0.08 to 0.12 with a minimum at ∼7.5 TeV. While we simplified our analysis by stacking CTA images for each event, the energy resolution still meets the CTA requirements <cit.> for all energy bins, except for the lowest energy bin. The pattern spectra result in an energy resolution between 0.22 and 0.25 with a minimum at the highest energy bin and do not meet the CTA requirements. Thus, the CTA images-based analysis outperforms the pattern spectra for all energies with a maximum factor of 2.9 at ∼7.5 TeV between the two curves. § DISCUSSION A comparison of the computational performance of the analyses is shown in Figure <ref>. The TRN training with pattern spectra is about a factor of 2.5 faster and requires a factor of 2.5 less RAM compared to the TRN training with CTA images. The pattern spectra are capable of detecting and classifying relevant features in the CTA images, which is illustrated by the gammaness distributions shown in Figure <ref> (right) and the energy migration matrix shown in Figure <ref> (right). However, the pattern spectra-based analysis is outperformed by the CTA images with respect to its signal-background separation and energy reconstruction capabilities. For a given Γ-threshold α_Γ, the pattern spectra result in a poorer photon and proton efficiency compared to the CTA images (see Figure <ref>), which is a main drawback of the analysis since both efficiencies are important quantities for the analysis of real gamma-ray data. Moreover, we infer from the effective area versus energy plot shown in Figure <ref> (left) that the signal-background capabilities of the pattern spectra-based analysis are below the capabilities of the CTA images-based analysis independent of the energy of the initial particle.
The AUC value obtained from the CTA images is a factor 1.06 larger than the pattern spectra AUC value and illustrates once again the overall lower signal-background capabilities of the pattern spectra-based analysis. The CTA images result in a better energy resolution and a lower energy bias for all energies compared to the pattern spectra. Although our choice of attributes, i.e. size and shape attribute, is well-motivated, these two attributes do not seem to be sufficient to fully describe all relevant features within the CTA images. Potentially, the pattern spectra might not be able to detect, e.g., the electromagnetic substructure in proton showers. Other feature attributes, e.g. the perimeter, sum of grey levels and compactness (perimeter / A^2), were tested for both signal-background separation and energy reconstruction but did not result in a significantly better performance. Furthermore, we applied pattern spectra on other algorithms including classification and regression trees (CART) <cit.>, Learning Vector Quantization (LVQ) and Generalized Matrix Learning Vector Quantization (GMLVQ) <cit.>. None of these algorithms achieved a better performance than the TRN. We, therefore, conclude that the TRN relies on features within the CTA images that are not detected by the pattern spectra algorithm. The performances stated in this work do not represent the expected performance by the CTA Observatory at the end of its construction phase. § CONCLUSIONS For the first time, signal-background separation and energy reconstruction of gamma rays were performed under the application of pattern spectra. We have shown that the pattern spectra algorithm has the capability to detect and classify relevant features in IACT images. The detected features are capable of differentiating between gamma-ray and proton events and to reconstruct the energy of gamma-ray events. The training of the TRN with pattern spectra requires 2.5 less RAM and is about a factor of 2.5 faster than the TRN trained with CTA images, which agrees with our expectation due to the smaller size of the pattern spectra as compared to CTA images. The reduction in computational power was one of the main motivations to test the performance of pattern spectra on IACT data. However, the pattern spectra-based analysis is not competitive with the CTA images-based analysis in signal-background separation and energy reconstruction. The AUC value, which is a measure of the signal-background separation capability of an algorithm, obtained from the CTA images is a factor 1.06 larger than the value obtained from the pattern spectra. The CTA images result in better energy accuracy and energy resolution for all energies with a maximum factor of 2.9 at ∼7.5 in energy resolution compared to the pattern spectra. We, therefore, conclude that the relevant features within the CTA images are not sufficiently detected or described by our choice of size and shape attributes. Other sets of attributes were tested but resulted in no major improvements. Thus, the TRN trained on CTA images must rely on additional features not captured by the pattern spectra. In other applications, especially when the input images are larger, or vary in size, the results may be different. § ACKNOWLEDGEMENTS This work was conducted in the context of the CTA Consortium and CTA Observatory. We gratefully acknowledge financial support from the agencies and organizations listed at http://www.cta-observatory.org/consortium acknowledgements. 
We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high-performance computing cluster.
http://arxiv.org/abs/2307.05593v2
20230710180946
Quantum Simulation of Lattice QCD with Improved Hamiltonians
[ "Anthony N. Ciavarella" ]
hep-lat
[ "hep-lat", "nucl-th", "quant-ph" ]
IQuS@UW-21-056 [email protected] InQubator for Quantum Simulation (IQuS), Department of Physics, University of Washington, Seattle, Washington 98195-1550, USA Quantum simulations of lattice gauge theories are anticipated to directly probe the real time dynamics of QCD, but scale unfavorably with the required truncation of the gauge fields. Improved Hamiltonians are derived to correct for the effects of gauge field truncations on the SU(3) Kogut-Susskind Hamiltonian. It is shown in 1+1D that this enables low chromo-electric field truncations to quantitatively reproduce features of the untruncated theory over a range of couplings and quark masses. In 3+1D, an improved Hamiltonian is derived for lattice QCD with staggered massless fermions. It is shown in the strong coupling limit that the spectrum qualitatively reproduces aspects of two flavor QCD and simulations of a small system are performed on IBM's Perth quantum processor. Quantum Simulation of Lattice QCD with Improved Hamiltonians Anthony N. Ciavarella 0000-0003-3918-4110 August 12, 2023 ============================================================ § INTRODUCTION The real time dynamics of quantum chromodynamics (QCD) are of relevance to a number of phenomena in particle and nuclear physics. These range from collisions of hadrons at high energies to the behavior of quark-gluon plasma in the early universe. The simulation of QCD discretized onto a lattice has enabled non-perturbative calculations of static observables in QCD such as hadron masses and form factors <cit.>. Quantum computers are expected to be able to directly probe the real time dynamics of quantum field theories. The recent developments in quantum hardware have inspired studies into how to implement simulations of lattice gauge gauge theories on quantum computers. The first quantum simulations of pure non-Abelian lattice gauge theories have been performed in low dimensions on quantum hardware <cit.>. There have also been quantum simulations of non-Abelian gauge theories coupled to matter in one spatial dimension <cit.>. Theoretical studies have been performed into how to scale up these calculations to larger systems <cit.> and large scale simulations have been performed of Abelian gauge theories <cit.>. However, all these approaches to simulating gauge theories require the gauge field to be truncated and scale poorly with the gauge field truncation. Similar problems were found in the classical simulation of lattice gauge theories with the scaling of errors with lattice spacing. These problems were mitigated through the development of improved Symanzik actions with more favorable scaling of errors with lattice spacing <cit.>. It is expected that improved Hamiltonians can be found that mitigate the effects of truncating the gauge field as well. In this work, improved Hamiltonians are derived for lattice gauge theories through the application of the similarity renormalization group (SRG). SU(3) gauge fields coupled to fermions in 1+1D are used as a case study for the improved Hamiltonians studied. Tensor network simulations are used to demonstrate that the improved Hamiltonians derived in 1+1D correctly reproduce observables on large lattices. An improved Hamiltonian for lattice QCD with two flavors is derived for 3+1D and a small simulation is performed on IBM's quantum processors. 
§ 1+1D §.§ 1+1D Hamiltonian Gauge theories in one spatial dimension have been used as toy models to study the quantum simulation of gauge theories in higher dimensions as they share many qualitative features and their reduced complexity makes simulation more tractable. Previous simulations on quantum hardware have studied the dynamics of hadrons in one spatial dimension <cit.> and β decay <cit.>. In this work, the SU(3) Kogut Susskind Hamiltonian <cit.> with a single flavor of staggered fermions in 1+1D will be used as a toy model to study the effects of gauge field truncation and the performance of improved Hamiltonians. The Hamiltonian describing this theory is Ĥ = Ĥ_Kin + Ĥ_m + Ĥ_E Ĥ_Kin = ∑_x,a,b1/2ψ̂_x,a^†Û^a,b_x,x+1ψ̂_x+1,b + h.c. Ĥ_m = m ∑_x,a (-1)^x ψ̂_x,a^†ψ̂_x,a Ĥ_E = ∑_x,cg^2/2Ê_x,x+1^cÊ_x,x+1^c , where g is the gauge coupling, m is the fermion mass, ψ̂_x,a is the fermion field at site x with color a, Û^a,b_x,x+1 is the parallel transporter on the link between the sites x,x+1 and Ê_x,x+1^c is the chromo-electric field operator. By working with open boundary conditions in the axial gauge, and enforcing Gauss's law, the gauge fields in this theory can be completely integrated out yielding the Hamiltonian Ĥ = Ĥ_Kin + Ĥ_m + Ĥ_E Ĥ_Kin = ∑_x,a1/2ψ̂_x,a^†ψ̂_x+1,a + h.c. Ĥ_m = m ∑_x,a (-1)^x ψ̂_x,a^†ψ̂_x,a Ĥ_E = ∑_x,cg^2/2(∑_y<xQ̂_y^c) (∑_y<xQ̂_y^c) , where Q̂_y^c is the chromo-electric charge at site x defined by Q̂_y^c = ∑_a,bψ̂^†_y,a T^c_a,bψ̂_y,b , where T^c_a,b are the Gell-Mann matrices. By working with this Hamiltonian, we can directly study the untruncated theory and the performance of improved Hamiltonians that correct for the gauge field truncation. §.§ Strong Coupling Expansion m=0 Before the Hamiltonian in Eq. (<ref>) can be mapped onto a quantum computer, it must first be truncated to a finite Hilbert space. Typically, this is done by working in the basis of the chromo-electric field and truncating the field below some cutoff. It has been shown numerically for some small systems <cit.> and rigorously proven in general <cit.> that the error induced by this truncation falls off exponentially with the truncation. The error due to gauge field truncation can be reduced even further by first performing a unitary rotation on the Hamiltonian to reduce the coupling to the higher electric field states and then truncating. In other words, there is a low-energy subspace coupled to a high-energy subspace and one would like to derive an effective field theory description of the low-energy subspace with the high-energy subspace decoupled. Previous work has explored how to perform this decoupling variationally <cit.>. One alternative method to construct such an effective Hamiltonian is Schrieffer-Wolff perturbation theory which systematically constructs approximate unitary transformations that decouple the high-energy subspace <cit.>. As an example, we will consider the Hamiltonian in Eq. (<ref>) on two staggered sites (one physical site) with massless fermions, truncated at zero electric field. This is the harshest possible truncation that can be applied, and the only physical states left in the Hilbert space are those where sites are unoccupied or have three fermions present forming a color singlet, i.e., a baryon. At this truncation, the Hamiltonian in Eq. (<ref>) is trivial, and there are no dynamics. The states kept in this truncation span the zero electric energy subspace while all states with higher electric energy are being discarded. 
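The statement above can be made concrete with a small numerical check: a fully occupied site (a colour-singlet baryon) carries zero colour charge, whereas a single quark carries the fundamental Casimir 4/3, so only the former can appear without sourcing electric flux on the neighbouring links. The following NumPy sketch (not code from this work) builds Jordan-Wigner fermion operators for the three colours on one staggered site and evaluates ∑_c Q̂^c Q̂^c on both states.

```python
import numpy as np

def annihilation_ops(n_modes):
    """Jordan-Wigner matrices for fermionic annihilation operators on n_modes modes."""
    a = np.array([[0, 1], [0, 0]], dtype=complex)        # destroys |1>
    Z = np.diag([1.0, -1.0]).astype(complex)
    I2 = np.eye(2, dtype=complex)
    ops = []
    for j in range(n_modes):
        factors = [Z] * j + [a] + [I2] * (n_modes - j - 1)
        op = factors[0]
        for fct in factors[1:]:
            op = np.kron(op, fct)
        ops.append(op)
    return ops

psi = annihilation_ops(3)                                 # one staggered site, three colours

# SU(3) generators T^c = lambda^c / 2 (Gell-Mann matrices).
lam = [np.array(m, dtype=complex) for m in (
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
)] + [np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3)]

# Colour charge Q^c = sum_{a,b} psi_a^dag T^c_{ab} psi_b of a single site; the electric
# energy cost it forces on the adjacent links is proportional to sum_c Q^c Q^c.
Q = [sum(0.5 * l[a, b] * psi[a].conj().T @ psi[b]
         for a in range(3) for b in range(3)) for l in lam]
QQ = sum(q @ q for q in Q)

vac = np.zeros(8, dtype=complex); vac[0] = 1.0            # empty site
baryon = psi[2].conj().T @ psi[1].conj().T @ psi[0].conj().T @ vac
quark = psi[0].conj().T @ vac

print("Q.Q on the baryon:", np.real(baryon.conj() @ QQ @ baryon))   # 0   (colour singlet)
print("Q.Q on one quark :", np.real(quark.conj() @ QQ @ quark))     # 4/3 (fundamental Casimir)
```

Any site content with a nonvanishing colour charge necessarily sources electric flux on the adjacent links and is therefore removed by the zero-field truncation.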
Using the Schrieffer-Wolff perturbation theory, an effective Hamiltonian for the zero electric energy subspace at leading order is given by Ĥ_eff = ∑_x9/16g^2Ẑ_xẐ_x+1 + 27/32g^4(X̂_x X̂_x+1 + Ŷ_x Ŷ_x+1) + 𝒪(g^-6) , where X̂_x, Ŷ_x, Ẑ_x are the corresponding Pauli matrices at site x on the lattice. In this basis, spin up states correspond to a site being unoccupied and spin down states correspond to a baryon being present on the site. The details of this derivation and how to systematically derive higher order terms are in Appendix <ref>. In this context, the Schrieffer-Wolff expansion corresponds to performing a strong coupling expansion around the zero electric energy subspace. Note that similar results have been derived for SU(2) lattice gauge theories and the Schwinger model with multiple flavors, showing that they are equivalent to spin systems in the strong coupling limit <cit.>. The effective Hamiltonian in Eq. (<ref>) requires only a single qubit per site to be mapped onto a quantum computer. The Hamiltonian in Eq. (<ref>) with gauge fields integrated out requires three qubits per site to represent the state of the system. By using this effective Hamiltonian to describe a subspace of the system, the computational resources required are reduced. However, the Schrieffer-Wolff expansion is known to have a finite radius of convergence <cit.>, so this effective Hamiltonian should only be valid over a limited range of couplings. The energy gap for the effective Hamiltonians obtained at different orders in the Schrieffer-Wolff expansion over a range of couplings are shown in Fig. <ref>. Note that both the ground state and first excited state are in the baryon number zero sector. As this figure shows, the effective Hamiltonians obtained through the Schrieffer-Wolff expansion are only valid for strong couplings, and the expansion fails to converge at weak couplings. §.§ Similarity Renormalization Group m=0 The strong coupling expansion in the previous section was able to yield an improved Hamiltonian to correct for the chromo-electric field truncation for a small system. However, the performance of the improved Hamiltonian was limited by the convergence of the strong coupling expansion. An alternative approach to derive an improved Hamiltonian is the SRG. This method works by choosing a generator of unitary rotations that should decouple the high energy subspace and then continuously flowing to decouple the high energy subspace <cit.>. Explicitly the Hamiltonian being flowed is parametrized as Ĥ_s = Ĥ_Λ + V̂_s , where Ĥ_Λ determines the energy scales that should be decoupled, V̂_s is the remaining terms in the Hamiltonian and s is the flow parameter. The generator of the SRG flow is traditionally taken to be η̂_s = [Ĥ_Λ,Ĥ_s] . The evolution of the Hamiltonian under SRG is given by dĤ_s/ds = dV̂_s/ds = [[Ĥ_Λ,V̂_s],Ĥ_̂ŝ] = [[Ĥ_Λ,V̂_s], Ĥ_Λ] + [[Ĥ_Λ,V̂_s],V̂_s] . By flowing to s→∞, the low and high energy sectors will be decoupled. The similarity renormalization group has previously been used in low energy nuclear physics to derive low energy nuclear potentials with improved convergence properties <cit.>. In the following sections, it will be shown how the SRG can be used to derive improved Hamiltonians that correct for the effects of gauge field truncation. §.§.§ Two Staggered Sites Once again, the Hamiltonian in Eq. (<ref>) on two staggered sites (one physical site), truncated at zero electric field will be used as an example to construct an improved Hamiltonian. 
The generator of the SRG flow will be chosen to decouple states with different electric energies, i.e. Ĥ_Λ = Ĥ_E. The SRG equations can then be solved to recover an improved Hamiltonian of the form Ĥ_SRG = A(g)(X̂_1 X̂_2 + Ŷ_1 Ŷ_2) + B(g) Ẑ_1Ẑ_2 , where A(g) and B(g) are constants computed numerically. Note that this Hamiltonian takes the same form as that derived in the strong coupling expansion in Eq. (<ref>) except now the coefficients multiplying the operators have been determined through SRG instead of a perturbative expansion. The energy gap for this Hamiltonian as a function of the coupling is shown in Fig. <ref>. Unlike the improved Hamiltonian obtained through the strong coupling expansion, the improved Hamiltonian obtained through the SRG suffers from no convergence issues and is able to correctly reproduce the energy gap at all values of the coupling. §.§.§ Larger Systems As shown in the previous section, the SRG was capable of producing an improved Hamiltonian that correctly describes the physics of a small system. In practice, improved Hamiltonians will be needed for larger systems. The setup of the SRG used in the previous section does not scale efficiently to larger lattices. This is because as the SRG evolves, the number of operators generated can be exponential in the system size. This can be mitigated through the use of the in-medium similarity renormalization group (IMSRG) which truncates operators in the SRG flow above a certain weight <cit.>. The cost of performing the IMSRG scales exponentially with the size truncation. However the convergence with operator size is also exponential due to the exponential decay of correlations in low energy states. As an explicit example, improved Hamiltonians for the zero electric field truncation will be derived with IMSRG. The smallest nontrivial operator size truncation is at two staggered sites. The improved Hamiltonian derived with IMSRG at this truncation with coupling g on L staggered sites is Ĥ_SRG = ∑_x < L A(g)(X̂_x X̂_x+1 + Ŷ_x Ŷ_x+1) + B(g) Ẑ_xẐ_x+1 . The accuracy of the improved Hamiltonians derived through IMSRG at this electric field truncation can be improved by computing the IMSRG flow for larger operator size truncations. In general, one would expect this method to work well when the operator size truncation used is comparable to the correlation length of the system in question. Explicitly, the form of the improved Hamiltonians obtained by truncating at operators defined on three staggered sites takes the form Ĥ_3,SRG = ∑_x A_1(g) (X̂_x X̂_x+1 + Ŷ_x Ŷ_x+1) + B_1(g) Ẑ_xẐ_x+1 + A_2(g) (X̂_x X̂_x+2 + Ŷ_x Ŷ_x+2) + B_2(g) Ẑ_xẐ_x+2 where A_i(g), and B_i(g) are constants determined from solving the SRG equations numerically. Note that this takes the same form as Eq. (<ref>) just with the inclusion of next to nearest neighbor hopping. The performance of the improved Hamiltonians can be improved further by truncating the operator size at four staggered sites. 
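The coefficients such as A(g), B(g) and A_i(g), B_i(g) quoted here are obtained by integrating the flow equation dĤ_s/ds = [[Ĥ_Λ, V̂_s], Ĥ_s] numerically. The sketch below shows such an integration for a generic toy matrix Hamiltonian with SciPy's ODE solver; the matrices, gap, and flow time are illustrative and not those of the lattice model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def srg_flow(h_lambda, h0, s_max=15.0):
    """Integrate dH/ds = [[H_Lambda, H], H] and return the flowed Hamiltonian H(s_max)."""
    n = h0.shape[0]

    def rhs(_, y):
        h = y.reshape(n, n)
        eta = h_lambda @ h - h @ h_lambda      # generator eta_s = [H_Lambda, H_s]
        return (eta @ h - h @ eta).ravel()     # dH/ds = [eta_s, H_s]

    sol = solve_ivp(rhs, (0.0, s_max), h0.ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

# Toy example: two zero-electric-energy states coupled to two high-energy states.
# After the flow the two sectors decouple; the upper-left 2x2 block is the improved
# Hamiltonian for the truncated (low-energy) subspace.
h_lambda = np.diag([0.0, 0.0, 3.0, 3.0])
v = 0.4 * (np.ones((4, 4)) - np.eye(4))
h_flowed = srg_flow(h_lambda, h_lambda + v)

print(np.round(h_flowed, 6))
print("spectrum before:", np.sort(np.linalg.eigvalsh(h_lambda + v)))
print("spectrum after :", np.sort(np.linalg.eigvalsh(h_flowed)))
```

In the lattice calculations the analogous integration is carried out within the chosen operator-size truncation, and the couplings of the improved Hamiltonian are read off from the flowed matrix elements.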
The improved Hamiltonian obtained at this truncation takes the form Ĥ_4,SRG = ∑_x A_1(g) (b̂_x b̂^†_x+1 + b̂^†_x b̂_x+1) +B_1(g) Ẑ_x Ẑ_x+1 + A_2(g) (b̂_x b̂^†_x+2 + b̂^†_x b̂_x+2) + B_2(g) Ẑ_x Ẑ_x+2 + A_3(g) (b̂_x b̂^†_x+3 + b̂^†_x b̂_x+3) + B_3(g) Ẑ_x Ẑ_x+3 + C_1(g) (b̂_x b̂^†_x+1 + b̂^†_x b̂_x+1) Ẑ_x+2Ẑ_x+3 + C_2(g) (b̂_x b̂^†_x+2 + b̂^†_x b̂_x+1) Ẑ_x+1Ẑ_x+3 + C_2(g) (b̂_x+1b̂^†_x+3 + b̂^†_x+1b̂_x+3) Ẑ_xẐ_x+2 + C_3(g) (b̂_x b̂^†_x+3 + b̂^†_x b̂_x+1) Ẑ_x+1Ẑ_x+2 + C_4(g) (b̂_x+1b̂^†_x+2 + b̂^†_x+1b̂_x+2) Ẑ_xẐ_x+3 + C_5(g) Ẑ_xẐ_x+1Ẑ_x+2Ẑ_x+3 + D_1(g) (b^†_x b^†_x+1 b_x+2 b_x+3 + b_x b_x+1 b^†_x+2 b^†_x+3) + D_2(g) (b^†_x b_x+1 b_x+2^† b_x+3 + b_x b^†_x+1 b_x+2 b^†_x+3) + D_3(g) (b^†_x b_x+1 b_x+2 b_x+3^† + b_x b^†_x+1 b^†_x+2 b_x+3) where b̂_x = 1/2(X̂_x + i Ŷ_x) is a qubit annihilation operator at site x and A_i(g), B_i(g), C_i(g), and D_i(g) are constants determined from solving the SRG equations numerically. To test the performance of the improved Hamiltonians derived through SRG, density matrix renormalization group (DMRG) calculations were performed using the C++ iTensor library <cit.> to obtain the vacuum state and the single baryon ground state of the Hamiltonian in Eq. (<ref>) and the improved Hamiltonians described above for lattices with up to fifteen physical sites with open boundary conditions. Fig. <ref> shows the mass of the baryon (difference of the energy of the single baryon state and vacuum state) for the full Hamiltonian and the improved Hamiltonians for the zero electric field truncation for g=2. As this figure shows, the relative error in the baryon mass computed with the improved Hamiltonians grows with system size and then saturates. By using improved Hamiltonians with a larger operator size truncation in the IMSRG, the relative error in the baryon mass can be reduced down to the percent level. The baryon mass for g=1 was also computed and is shown in Fig. <ref>. At this weaker coupling, the correlation length is longer and the relative error in the baryon mass grows uncontrollably with the lattice size for the improved Hamiltonian obtained by the two staggered site truncation IMSRG. However, increasing the size of the operator truncation used in the IMSRG decreases the error in the baryon mass to controllable levels. In addition to studying the energy of different states on the lattice, the IMSRG flows of operators can be computed and their expectation values can be computed using improved Hamiltonians. As an explicit example, the SRG flow of the chromo-electric energy density was computed. The operators corresponding to the chromo-electric operators in the improved basis are the same as those that show up in the improved Hamiltonians, just with different coefficients. The vacuum expectation of the chromo-electric energy density is shown in Fig. <ref> for g=1 and g=2. As before, increasing the size of the operator truncation in the IMSRG improves the accuracy of the improved Hamiltonians. Remarkably, even though the improved Hamiltonians are being truncated at zero electric field, their ground states still reproduce the electric energy density of the full untruncated theory. §.§ Similarity Renormalization Group m ≠ 0 In the previous section, IMSRG was used to derive an improved Hamiltonian that describes the dynamics of baryons in QCD in one dimension with massless quarks. The same technique can be used to setup improved Hamiltonians in the case of massive quarks as well. 
In a theory with massive quarks, the piece of the Hamiltonian that should be used to generate the SRG flow is the combination of the mass and electric energy terms. At the zero electric energy truncation, the only state left after truncation is the one with matter sites empty and anti-matter sites filled. Therefore with massive quarks, there are no dynamics at this level of truncation. The next lowest truncation in the SRG flow depends on the relative size of the fermion mass m and the coupling g. If 2/3g^2 > m, then the next lowest lying state in the spectrum consists of a baryon at a site. The improved Hamiltonian derived by truncating at this level takes the same form as in the previous section except with the addition of a mass term for the baryons. If instead 2/3g^2 < m, then the next lowest lying state in the spectrum corresponds to a quark anti-quark pair connected by a link of electric flux. In the strong coupling limit, this corresponds to a meson at the excited link. Denoting the trivial vacuum state by |Vac⟩, and the state with a qq pair on link l by |l⟩, the Hamiltonian obtained under IMSRG flow truncating the energy at single link excitations and the operator size at two link operators takes the form Ĥ_SRG = E_0(g,m) |Vac⟩⟨Vac| + ∑_l h(g,m)(|l+1⟩⟨l| + |l⟩⟨l+1|) + E_1(g,m) |l⟩⟨l| , where E_0(g,m), E_1(g,m), and h(g,m) are constants determined through numerically solving the SRG flow. Note that this Hamiltonian has the same form as that of a single non-relativistic particle. The Hamiltonian in Eq. (<ref>) can be viewed as a Hamiltonian for a single link excitation (or meson) and can be mapped onto a second quantized Hamiltonian to describe a system with more excited links. Explicitly, the single excitation sector of Ĥ_SRG = ∑_l h(g,m)/2(X̂_l X̂_l+1 + Ŷ_l Ŷ_l+1) + E_0(g,m) - E_1(g,m)/2Ẑ_l , will be identical to the Hamiltonian in Eq. (<ref>). This improved Hamiltonian will also be capable of describing states with multiple links excited as well. The description of these states with multiple links excited can be improved by raising the truncation of states kept after SRG flow to include states where two links are excited. By keeping these states after the SRG flow and keeping the other truncations as before, the improved Hamiltonian given by Ĥ_SRG2 = ∑_l h(g,m)/2(X̂_l X̂_l+1 + Ŷ_l Ŷ_l+1) + s(g,m)Ẑ_l Ẑ_l+1 + E_0(g,m) - E_1(g,m)/2Ẑ_l , will have single and two excitation sectors that match the improved Hamiltonians derived through SRG. As a test of the performance of this improved Hamiltonian, the mass of the meson was computed on a lattice with two physical sites for g=1 and various values of m in Fig. <ref>. Similar to the massless case, the improved Hamiltonian derived with the SRG performs well when there is a large separation in energy scales between the states being decoupled. Note that in principle, the same comparison can be done with larger lattices, however the meson is in the same baryon number sector as the vacuum which complicates the calculation of the meson mass. It is expected that this improved Hamiltonian scales to larger lattices as in the massless case. §.§.§ Quantum Simulation As an example of how these improved Hamiltonians can be used for quantum simulation, a simulation will be performed of a meson's time evolution on three physical sites with open boundary conditions. Using the Hamiltonian in Eq. 
(<ref>) would require a quantum computer with 18 qubits to encode the state, and non-local interactions between the qubits to implement the electric energy piece of the Hamiltonian. Using the improved Hamiltonian in Eq. (<ref>) requires only 5 qubits to represent the state and only requires nearest neighbor interactions on the quantum computer to perform time evolution. Fig. <ref> shows the real time evolution of a single meson on three physical sites with g=1,m=1 simulated on IBM's Perth quantum processor <cit.>. A meson state was prepared on the quantum processor by applying an X̂ gate to the qubit assigned to the leftmost link. Time evolution was performed using a first order Trotter formula. Explicitly, the Hamiltonian was decomposed as Ĥ=∑_l=1^4Ĥ_l where Ĥ_l = h(g,m)/2(X̂_l X̂_l+1 + Ŷ_l Ŷ_l+1) + s(g,m)Ẑ_l Ẑ_l+1 , and the Trotterized time evolution operator was given by Û(Δ t) = e^-iĤ_2 Δ t e^-iĤ_4 Δ t e^-iĤ_3 Δ t e^-iĤ_1 Δ t . Each individual e^-iĤ_l Δ t was decomposed into a circuit with 3 CNOT gates using standard techniques <cit.>. The sum over Pauli Ẑ operators can be ignored when performing time evolution because it commutes with the full Hamiltonian and the operators being measured. The noise in the quantum simulation was mitigated using self-mitigation combined with Pauli twirling <cit.>. For each Trotter step, 50 circuits describing the time evolution were used along with 50 circuits with Δ t = 0 used to determine the strength of the depolarizing noise channel. Each circuit was sampled 10,000 times. As Fig. <ref> shows, the quantum hardware is able to describe the time evolution well at short times, but at long times the hardware noise begins to dominate. However, despite the presence of hardware noise at late times, the location of the peak of the wavepacket of the meson can still be located at late times. § 3+1D §.§ 3+1D Hamiltonian Performing a quantum simulation of lattice QCD requires a choice of Hamiltonian to be used. This choice is complicated by the phenomena of fermion doubling, where the naive discretization of the Dirac field on the lattice in d dimensions actually describes 2^d fermions. Furthermore, the Nielson-Ninomiya theorem forbids the presence of chiral symmetry on the lattice when all doublers are removed <cit.>. In this work, staggered fermions will be used. Staggered fermions work by distributing the components of the Dirac field across different sites of the lattice. This preserves some chiral symmetry at the cost of still having some fermion doublers remain. In lattice QCD calculations on classical computers, space and time are both discretized leading to staggered fermions describing 4 types of fermions, referred to as tastes in the literature. For practical calculations, these can be reduced to a single flavor through the process of rooting <cit.>. In quantum simulation, time is left continuous and only space is discretized. This changes the counting of the number of tastes present. Explicitly, with three dimensions of space discretized and time left continuous, staggered fermions describe two tastes. This is a feature, not a bug for using lattice QCD to study nuclear physics as one taste can be identified as an up quark and the other can be identified as a down quark. Therefore, we would expect lattice QCD with a single staggered fermion on a quantum computer to describe two flavor QCD where both quarks have the same mass. 
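As a brief implementation note on the 1+1D meson simulation described above: because the X̂X̂, ŶŶ and ẐẐ terms acting on a given pair of qubits commute, each exponential e^{-iĤ_lΔt} can be written as a product of two-qubit rotations, which a transpiler can further consolidate into the three-CNOT form mentioned in the text. The Qiskit sketch below builds one first-order Trotter step in this way; the coupling values are placeholders for the SRG-determined h(g,m) and s(g,m), and the error-mitigation circuits (Pauli twirling and self-mitigation) are omitted.

```python
from qiskit import QuantumCircuit

def meson_trotter_step(h, s, dt, n_links=5):
    """One first-order Trotter step for H = sum_l h/2 (X_l X_{l+1} + Y_l Y_{l+1}) + s Z_l Z_{l+1}.

    The couplings h and s stand in for h(g,m) and s(g,m); the sum of single-qubit Z terms
    is dropped, as in the text, because it commutes with the Hamiltonian and observables.
    """
    qc = QuantumCircuit(n_links)
    # Append the commuting two-body exponentials in the order of U(dt) given above
    # (H_1 acts first on the state, then H_3, H_4 and finally H_2).
    for l in (1, 3, 4, 2):
        a, b = l - 1, l                       # 0-indexed qubits for link pair (l, l+1)
        qc.rxx(h * dt, a, b)                  # exp(-i (h/2) dt X X)
        qc.ryy(h * dt, a, b)                  # exp(-i (h/2) dt Y Y)
        qc.rzz(2.0 * s * dt, a, b)            # exp(-i  s    dt Z Z)
    return qc

# Prepare a meson on the leftmost link, evolve one step, measure.
qc = QuantumCircuit(5)
qc.x(0)
qc.compose(meson_trotter_step(h=0.3, s=0.2, dt=0.5), inplace=True)
qc.measure_all()
```

We now return to the 3+1D staggered-fermion construction.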
With massless quarks, this lattice regularization should reproduce the predictions of chiral perturbation theory as the continuum limit is approached. Explicitly, the Hamiltonian that should be used for 3+1 dimensional two flavor massless lattice QCD on a quantum computer is Ĥ = Ĥ_K + Ĥ_E + Ĥ_B Ĥ_K = ∑_r,μ̂,a,bη_r,μ̂1/2ψ̂_r,a^†Û^a,b_r,r+μ̂ψ̂_r+μ̂,b+h.c. Ĥ_E = g^2/2∑_l ∈links,cÊ_l^c Ê_l^c Ĥ_B = -1/2g^2∑_p ∈plaquettes_p , where ψ_r,a is a fermion field at site r with color a, μ̂ is a unit vector in the x̂, ŷ, or ẑ directions, η_r,μ̂ are the spin diagonalization phases, Û^a,b_r,r+μ̂ is an SU(3) parallel transporter between sites r and r+μ̂, Ê_l^c is the SU(3) chromo-electric field on link l and _p is the Hermitian component of the trace over color indices of the product of parallel transporters on plaquette p. Previous work has shown that this Hamiltonian has a discrete chiral symmetry corresponding to translation by one lattice site that is spontaneously broken and an isospin symmetry that corresponds to diagonal translations <cit.>. §.§ Improved Hamiltonian As is the case for 1D QCD, mapping the Hamiltonian in Eq. (<ref>) onto qubits is challenging, especially if one wishes to perform a quantum simulation with existing hardware. Improved Hamiltonians can also be derived for performing quantum simulations of this theory. Following the discussions of the previous sections, IMSRG can be applied to this theory with a truncation in operator size. The smallest non-trivial operator size IMSRG can be applied to is a single link and the lowest electric field truncation that can be used is zero electric field. The resulting improved Hamiltonian on the 3 dimensional lattice will take the same form as in the 1D case except now the hopping terms will have phases that result from the spin diagonalization. Explicitly, the improved Hamiltonian obtained through SRG at this truncation in operator size and electric field is Ĥ_SRG = ∑_r A(g) (ψ̂_r^†ψ̂_r+x̂ + ψ̂_r+x̂^†ψ̂_r) + A(g) (-1)^r_1(ψ_r^†ψ̂_r+ŷ + ψ̂_r+ŷ^†ψ̂_r) + A(g) (-1)^r_1 + r_2(ψ̂_r^†ψ̂_r+ẑ + ψ̂_r+ẑ^†ψ̂_r) + B(g) ∑_μ̂(2ψ̂^†_rψ̂_r-1) (2ψ̂^†_r+μ̂ψ̂_r+μ̂-1) , where ψ_r is a colorless fermion field at site r and A(g) and B(g) are numerical constants determined through solving the SRG equations. Note that this improved Hamiltonian only describes the QCD Hamiltonian accurately for large coupling g. At large coupling, the π meson is massive and is integrated out of this improved Hamiltonian. By increasing the chromo-electric field truncation of states kept after the SRG flow, states with quark-antiquark pairs separated by a link will be included in the low energy Hilbert space kept after truncation and will yield an improved Hamiltonian that describes meson degrees of freedom as well. §.§.§ Spectrum The improved Hamiltonian in Eq. (<ref>) will describe the untruncated theory accurately in the limit of large g. While the continuum limit of lattice QCD is in the limit of g→0, large couplings can be used to study the theory at finite lattice spacing. In the limit g→∞, A(g)→ 0 and some qualitative features of low energy QCD are recovered. In particular, it has been shown that in the strong coupling limit this theory has an isospin symmetry and a spontaneously broken chiral symmetry <cit.>. In addition to the previously studied features of this regularization, the strong coupling limit of this Hamiltonian also reproduces the approximate SU(4) spin flavor symmetry of nuclear physics. As an example, we will study the improved Hamiltonian in Eq. 
(<ref>) on a single cube. The fermionic fields will be mapped onto qubits using a Jordan-Wigner encoding. When A(g)=0, the Hamiltonian in Eq. (<ref>) can be rewritten in terms of Pauli matrices as Ĥ_SRG = 9/16g^2∑_μ̂Ẑ_rẐ_r+μ̂ . The ground state is in the baryon number B=0 sector and is a degenerate Néel state. For the rest of this discussion, we will only consider the sector that is even under reflection across the ẑ axis. The lowest lying excited states in the B=0 sector correspond to performing a SWAP operation on one of the links. Denoting the energy cost of flipping one link as Δ=9/8g^2, this set of excited states has energy 4Δ and there are 12 of them. These 12 states should correspond to spin one and spin zero baryon anti-baryon pairs, i.e., pp, nn, np and pn states. The lowest lying energy states in the B=1 sector correspond to flipping one site from the Néel state on the cube. There are four corners that can be flipped in the Néel state to end up in the B=1 sector so there are four degenerate states with energy 3Δ. These correspond to the two spin modes of the proton and neutron. Note that the proton and neutron mass are degenerate which should be expected from isospin symmetry. In the B=2 sector, the lowest lying states correspond to flipping two spins in the Néel state. This results in six degenerate states with energy 6Δ. These states correspond to spin 1 pn states and spin 0 pp, pn and nn states. The fact that these states are degenerate is reflective of spin-flavor symmetry which is approximately present in low energy nuclear physics. The spin-flavor symmetry has been shown to emerge in the large N_c limit of QCD <cit.> and is related to the minimization of entanglement in low energy nucleon scattering <cit.>. We also see that in the strong coupling limit, the deuteron has binding energy zero. Similar calculations can be done in the higher baryon number sectors which also show that these sectors also demonstrate spin-flavor symmetry and nuclei with binding energy = 0. It is also interesting to note that the nucleon-nucleon scattering lengths are large. As a result, the pionless EFT describing nucleon scattering is an expansion around a non-trivial fixed point where the binding energy of nuclei vanishes as is the case in this lattice regularization <cit.>. §.§.§ Quantum Simulation The Hilbert space describing the Hamiltonian in Eq. (<ref>) consists of a single fermion mode for each site. Using the Jordan-Wigner encoding, the state of each site can be represented with a single qubit. In this encoding, a list of fermion operators ψ_1,ψ̂_2,...,ψ̂_N are mapped onto qubit operators as ψ̂_n = ⊗_k<n1/2Ẑ_k (X̂_n + i Ŷ_n) . For a local one dimensional fermionic theory, this fermion encoding leads to a Hamiltonian that is local in qubits. However, in higher dimensions, the operators in the Hamiltonian will include strings of Pauli Ẑ operators that wrap around the lattice. These long range operators are necessary to enforce the anti-commutation relations of the fermionic operators and may make it difficult to practically scale to calculations on a large lattice. As a demonstration of how this improved Hamiltonian works in practice, time evolution on six vertices connected to a single vertex at the center as shown in Fig. <ref> will be simulated. This is the smallest non-trivial subsystem of a full three dimensional lattice that will be repeated periodically and will be useful for understanding how simulations on a larger lattice will work. 
Each of the seven vertices can be mapped onto a single qubit. The Hamiltonian describing their time evolution is given by Ĥ_SRG = ∑_v A(g) (ψ̂_0^†ψ̂_v + ψ̂_v^†ψ̂_0) + B(g) ∑_v(2ψ̂^†_vψ̂_v-1) (2ψ̂^†_0ψ̂_0-1) , where the 0 subscript denotes the vertex at the center and the sum is over the other vertices. The quantum processor is initialized with the center qubit in the 1 state and the remaining qubits are in the 0 state. In the staggered fermion lattice regularization, sites are alternatively identified with matter and anti-matter degrees of freedom so this state should correspond to the trivial vacuum. By evolving with the Hamiltonian in Eq. (<ref>), it should be possible to observe matter anti-matter fluctuations. Note that with this initial state, a single Trotter step can be performed without having to implement CNOT gates from the Jordan-Wigner strings. A single Trotter step was implemented on IBM Perth with the size of the time step being varied to sample different times. Due to the connectivity of the hardware, this circuit required 28 CNOT gates. Fig. <ref> shows the results of performing a single Trotter step for g=2 on IBM Perth. For small times, the quantum simulation is able to describe the evolution of the system accurately, however beyond t=1, the error in the single Trotter step used is large and limits the accuracy of the quantum simulation. While the Jordan-Wigner encoding is efficient in the number of qubits used, the Hamiltonian generated has long range interactions which are necessary to preserve the anti-commutation relation of the fermions. Scaling these calculations to a larger lattice will require making use of a more efficient fermion encoding. For example, the Bravyi-Kitaev superfast encoding can be used to map fermions onto qubits <cit.>. In this encoding, a qubit is associated with each link on the lattice and represents the parity of the number of fermions on the link. The length of the strings of Pauli Ẑ operators for an operator on a link extends only to neighboring links. For a large lattice, this will limit the circuit depth necessary to perform time evolution and potentially allow for larger calculations to be performed. § DISCUSSION In this work, the SRG has been used to derive improved Hamiltonians that mitigate the effects of gauge field truncation. It was demonstrated in 1+1D that the improved Hamiltonians derived this way outperform those derived through the strong coupling expansion for small systems. Tensor network calculations were performed to demonstrate that these improved Hamiltonians perform well as the system size is increased. These techniques were also applied to 3+1D giving an improved Hamiltonian capable of describing two flavour QCD on the lattice. Real time dynamics on small systems were simulated on IBM's Perth quantum processor. Previous strategies for quantum simulation of lattice gauge theories improved accuracy by increasing the truncation of the gauge field. This comes at the cost of needing more qubits to represent the system and a more complicated circuit to implement the time evolution. The improved Hamiltonians introduced in this work are capable of improving accuracy only at the cost of requiring more complicated circuits to simulate. Improved Hamiltonians have been derived for a single flavor of staggered fermions coupled to SU(3) gauge fields truncated at low electric field. This has enabled quantum simulation of systems that would otherwise be out of reach of current quantum hardware. 
The same approach introduced here can be used to derive improved Hamiltonians for larger electric field truncations and with more flavors of fermions. Future work will extend these methods to higher spatial dimensions with larger operator truncations where the plaquette terms will modify the SRG flow. This will enable quantum simulations of lattice gauge theories in multiple dimensions to be performed in the near term. The authors would like to acknowledge useful conversations about SRG with Zhiyao Li on a related project. We would also like to thank Marc Illa and Roland Farrell for feedback in preparing this manuscript. The authors would also like to acknowledge many useful conversations with Martin Savage, Francesco Turro, Xiaojun Yao and Niklas Mueller. The material presented here was funded by U.S. Department of Energy, Office of Science, Office of Nuclear Physics, Inqubator for Quantum Simulation (IQuS)[<https://iqus.uw.edu>] under Award Number DOE (NP) Award DE-SC0020970 via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science. This work was enabled, in part, by the use of advanced computational, storage and networking infrastructure provided by the Hyak supercomputer system at the University of Washington[<https://itconnect.uw.edu/research/hpc>]. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. § SCHRIEFFER-WOLFF PERTURBATION THEORY The improved Hamiltonians derived in this work are based on performing a unitary transformation before truncating the electric field to reduce the coupling to the states being removed by the truncation. This can be done perturbatively through the use of Schrieffer-Wolf perturbation theory (SWPT). In this section, the application of SWPT to the Hamiltonian in Eq. (<ref>) with m=0 will be demonstrated. The Hamiltonian for lattice gauge theories in 1D we wish to simulate takes the form Ĥ = Ĥ_E + Ĥ_D + V̂ , where Ĥ_E is the electric Hamiltonian, V̂ couples the low energy subspace to the high energy subspace and Ĥ_D describes dynamics in the high energy Hilbert space. Note that the kinetic term of Eq. (<ref>) is equal to Ĥ_D + V̂. For the zero electric field truncation, V̂ is the piece of the kinetic term that corresponds to a baryon on a site ejecting a quark to a neighboring site and Ĥ_D is the piece of the kinetic term that describes a quark propagating freely between sites. SWPT systematically generates a unitary, e^Ŝ that decouples the selected low energy subspace. For lattice gauge theories, we will be decoupling the electric vacuum and states with low energy relative to the electric Hamiltonian. To leading order we have e^Ŝ_1Ĥ e^-Ŝ_1 = Ĥ_E + [Ŝ_1,Ĥ_E + Ĥ_D] + Ĥ_D + V̂ + [Ŝ_1,V̂] + 1/2[Ŝ_1,[Ŝ_1,Ĥ_E + Ĥ_D] + 𝒪(V̂^3) . The leading order coupling between the low and high energy subspace comes from V̂ and be cancelled at leading order by choosing Ŝ_1 such that [Ŝ_1,Ĥ_E + Ĥ_D]=-V̂. Explicitly, the matrix elements of Ŝ_1 are (S_1)_ab = 1/E_a - E_b V_ab , where the indices label eigenstates of Ĥ_E + Ĥ_D with eigenvalues E_a. To leading order, the effective Hamiltonian is Ĥ_eff^1 = Ĥ_E + 1/2[Ŝ_1,V̂] , and provided that the low energy subspace has an electric energy of 0, the commutator is equal to 1/2[Ŝ_1,V̂] = -V̂1/Ĥ_E + Ĥ_DV̂ = -∑_n (-Ĥ_E^-1Ĥ_D)^n 1/Ĥ_EV̂ . Therefore to 𝒪(H_E^-2), the effective Hamiltonian is given by Ĥ_eff^1 = Ĥ_E - V̂1/Ĥ_EV̂ + V̂1/Ĥ_EĤ_D1/Ĥ_EV̂ + 𝒪(Ĥ_E^-3) . 
Plugging in the corresponding pieces of Eq. (<ref>) yields the improved Hamiltonian in Eq. (<ref>). Techniques for performing this expansion to higher orders can be found in Ref. <cit.>.
http://arxiv.org/abs/2307.07226v1
20230714084153
Challenge Results Are Not Reproducible
[ "Annika Reinke", "Georg Grab", "Lena Maier-Hein" ]
cs.CV
[ "cs.CV" ]
FaIRGP: A Bayesian Energy Balance Model for Surface Temperatures Emulation Paolo Pegolo 0000-0003-1491-8229 August 12, 2023 ============================================================================ While clinical trials are the state-of-the-art methods to assess the effect of new medication in a comparative manner, benchmarking in the field of medical image analysis is performed by so-called challenges. Recently, comprehensive analysis of multiple biomedical image analysis challenges revealed large discrepancies between the impact of challenges and quality control of the design and reporting standard. This work aims to follow up on these results and attempts to address the specific question of the reproducibility of the participants methods. In an effort to determine whether alternative interpretations of the method description may change the challenge ranking, we reproduced the algorithms submitted to the 2019 Robust Medical Image Segmentation Challenge (ROBUST-MIS). The leaderboard differed substantially between the original challenge and reimplementation, indicating that challenge rankings may not be sufficiently reproducible. § INTRODUCTION Robust segmentation of biomedical images is an important precursor to many new, innovative computer-assisted applications. Deep learning-based segmentation methods have proven to work successfully on a wide range of medical imaging data, including computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy <cit.>. For benchmarking which type of model works best on a given medical domain, challenges have become an important tool, and are now commonplace in conferences such as the conference on Medical Image Computing and Computer Assisted Interventions (MICCAI) or the IEEE International Symposium on Biomedical Imaging (ISBI). However, recent comprehensive analysis of challenges in the biomedical domain revealed that the current state of quality control severely limits interpretation of rankings and reproducibility, with only a fraction of the relevant information typically provided <cit.>. In order to concretely analyze the reproducibility of the participating methods in challenges, we aimed to reimplement the algorithms of all participating teams in a challenge only based on their submitted method descriptions. As an example, we performed our experiments for the 2019 Robust Medical Image Segmentation Challenge (ROBUST-MIS). Given the obligation to submit a detailed description of their methods together with their actual results, this challenge had a disproportionately high amount of algorithmic information available, which should in theory faciliate the reproducibility of results. However, in this work, we show that even with this high amount of information available, we were not able to reproduce the challenge results. § MATERIALS AND METHODS The ROBUST-MIS challenge <cit.> focused on the robustness and generalization capabilities of algorithms. A collection of surgical data with 10 040 annotated images from 30 surgical procedures across three different types of surgery served as the basis for the challenge. The challenge was validated across competing methods in three stages with a growing domain gap between the training and test data, i.e. higher stages contained more difficult images requiring a higher degree of generalization to be segmented successfully. A detailed overview of the challenge can be found in <cit.>. In the following experiments, we focused on the multi-instance instrument segmentation task of the challenge. 
In the challenge, alongside their algorithm submission, participating teams were required to submit a document summarizing their method in detail to the point of being reproducible, such as the used network architecture, data augmentations and all hyperparameters. These method descriptions, along with the summaries included in the challenge paper <cit.> were used as a basis for reproducing the challenge results. In general, we aimed to stay as close to the descriptions as possible, meaning the same programming languages and libraries were used, if this information was made available. In case of ambiguous or missing information in method descriptions, we first attempted to infer the correct meaning using literature directly cited by the method description. Only if this was not possible, secondary literature was considered. As a last resort, we filled the missing information by surveying publicly available similar implementations and taking the most popular approach that worked reasonably well on the problem domain. For example, if a team would not document the type of optimizer, and relevant citations did not explicitly mention this either, the default choice of the most popular or official implementation was used. If two interpretations were equally likely, the method was trained using both interpretations, and the one resulting in better validation performance was chosen. In the original challenge, participants were ranked according to two different criteria, robustness and generalization capabilities, resulting in two rankings based on the multi-instance Dice Similarity Coefficient (MI_DSC) <cit.>. The robustness ranking was determined by calculating a metric-based ranking using the 5% quantile of MI_DSC values obtained from the testing set. The accuracy ranking was calculated as a test-based ranking using a Wilcoxon signed rank test at a 5% significance level <cit.>. For our calculation, we considered stage three of the test set. Additionally, we compared the rankings with Kendall's τ correlation coefficient <cit.>, which yields a value of 1 for two perfectly agreeing rankings and -1 if rankings are reversed. Ranking variability was investigated via bootstrapping <cit.>. We used the challengeR package <cit.> for calculating rankings and ranking uncertainty. § RESULTS During our reimplementation, lots of ambiguities were found in the method descriptions. Fig. <ref> presents a qualitative summary of the assumptions made across all descriptions. Here, the term minor deficiency was defined as an assumption that had to be taken due to missing or clearly incorrect information, but was thought to either have a minor impact on model performance or there was high confidence that the right assumption has been made from context. Major deficiencies were defined as missing design decisions either thought to have a major impact on final model performance, there was low confidence that the correct assumption had been made from context or context was unavailable. In such a case, it was highly unlikely that our choice was identical to that of the original implementation. From the figure, it can be seen that both the model selection and data augmentation showed the highest amount of major and minor deficiencies during the reimplementation, followed by the data splits and the description of inference. When calculating the metric values of the reimplemented methods, the distribution of values substantially differed between the original challenge and the reimplementation, except for team A2. 
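For reference, the metric-based robustness ranking and the Kendall's τ comparison used in the following can be reproduced along the lines of the sketch below; the per-case MI_DSC arrays and team identifiers are placeholders and do not correspond to the actual challenge data.

```python
import numpy as np
from scipy.stats import kendalltau

# Placeholder per-case MI_DSC values; in the study these are the stage-3 results of the
# original submissions and of our reimplementations.
rng = np.random.default_rng(0)
teams = [f"T{i}" for i in range(1, 11)]
mi_dsc_orig = {t: rng.beta(5, 2, size=2500) for t in teams}
mi_dsc_reimpl = {t: rng.beta(5, 2, size=2500) for t in teams}

def robustness_ranking(results):
    """Metric-based ranking by the 5% quantile of the per-case MI_DSC (higher is better)."""
    q5 = {t: np.quantile(v, 0.05) for t, v in results.items()}
    ordered = sorted(q5, key=q5.get, reverse=True)
    return {t: rank + 1 for rank, t in enumerate(ordered)}

rank_orig = robustness_ranking(mi_dsc_orig)
rank_reimpl = robustness_ranking(mi_dsc_reimpl)
tau, _ = kendalltau([rank_orig[t] for t in teams], [rank_reimpl[t] for t in teams])
print(f"Kendall's tau between the two robustness rankings: {tau:.2f}")
```

The test-based accuracy ranking and the bootstrap analysis follow the same pattern, using the Wilcoxon signed rank test and resampled case sets as provided by the challengeR package.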
This was also visible in the rankings. Tab. <ref> shows the accuracy ranking for the original challenge and the reimplementation. The original winner changed for the reimplementation and teams moved mostly up or down by one single rank with an average change of one rank. Kendall's τ was 0.59 between both rankings, indicating a high variability. The ranking variability was analyzed by applying bootstrapping. The average (median, Interquartile Range (IQR) Kendall's τ over 1,000 bootstrap rankings was 1.00 (median: 1.00; IQR: (1.00, 1.00)) for the original challenge, which was thus very robust against small perturbations. The average (median, IQR) Kendall's τ for the reimplementation was slightly less with a mean (median, IQR) Kendall's τ of 0.98 (median: 0.98; IQR: (0.98, 1.00)). Similarly, Tab. <ref> shows the original and reimplemented versions for the robustness ranking. Again the winners according to this ranking changed and the average change in ranks was higher for this ranking scheme (1.3). Comparing both rankings yielded a Kendall's τ of 0.40. Notably, four algorithms failed to achieve a 5% quantile of the MI_DSC above 0 in the reimplementation, which only happened for two algorithms in the original challenge. We further found a higher ranking uncertainty for the original challenge with a mean (median, IQR) Kendall's τ of 0.85 (median: 0.98; IQR: (0.98, 1.00)). On the other hand, this ranking scheme was more stable for the reimplementation (mean: 0.97; median: 1.00; IQR: (1.00, 1.00)). § DISCUSSION In this work, we attempted to reproduce the rankings of the ROBUST-MIS challenge by means of reimplementing algorithms of participating teams given the method description they were required to submit. This attempt failed: both ranking schemes yielded results that substantially differed for the reimplementation, including changing the winners. While training deep learning models comes with a substantial amount of non-determinism, which additionally contributes to the problem of reproducibility, we think the primary reason for failing to reproduce the results is the insufficient documentation provided by the participants. As shown in Fig. <ref>, the number of assumptions needed to be taken for reproducing the methods were numerous and spanned all relevant steps of model development, including data preprocessing, model architecture, and inference. For one team, we were not even able to identify the basic network architecture. We found that complex design decisions tended to be described less accurately than design decisions that are typically simpler to document. For example, the standard choices for optimizers are limited and typically prominently visible in the source code. This may be a reason why almost all participants succeeded in unambiguously stating the utilized optimizer and associated hyperparameters. On the other hand, model selection and data augmentation are complex processes, which were documented poorly by challenge teams. While the best-performing model is usually selected by calculating the loss on a separate validation data set, this does not necessarily have to be the case. In the ROBUST-MIS challenge, in particular, it was beneficial to select a sensitive over a specific model, since a false negative fraction of only 5% would be enough to completely fail the robustness ranking, i.e. yielding a 5% quantile of 0. Many teams either overlooked this aspect of the challenge completely in their documentation or provided incomplete information. 
Similarly, while the types of data augmentations were typically well reported, the respective hyperparameters were usually not documented. In addition, data augmentations can be applied individually or be combined with other data augmentation techniques. In such a case, the order and probabilities need to be specified. Finally, data augmentations complicate the exact meaning of the term 'epoch': is the original dataset extended only once with a certain percentage of augmented images, or are augmentations continually applied on the fly during training? All these choices need to be documented in detail in order to allow for faithful reimplementation. Most design choices going into an algorithm relevant for challenge participation directly map to the source code, and thus reproducibility would be greatly improved by making the source code publicly available. However, since this is practically challenging, e.g. for teams from industry, certain aspects of the method description should be handled with great care: Reasoning for complexity: Some teams made complicated design decisions. For example, one team used a complex multi-stage approach for inference but did not elaborate on the reasoning for choosing this procedure. While a detailed explanation would have increased the understanding in general, it could also have been used to verify that an implementation was correct while reproducing the results. Hyperparameters: Although simple to document, many teams failed to properly list their chosen hyperparameters, especially for data augmentation and final threshold values for the purpose of inference. Model Selection: While most design decisions directly map to the source code, model selection is often a notable exception to this, and may involve manual analysis and comparison of several models using different performance metrics. This may be a reason why this work identified many deficiencies related to this aspect. Especially in segmentation tasks, the considerations may go beyond minimizing the validation loss, since the final ranking methods are often not suitable for being utilized as loss functions. In any case, model selection should ideally be quantifiable and documented. It should be noted that drawing conclusions from this work is limited since only a single challenge has been analyzed. However, for this challenge, an exceptionally high amount of information regarding the algorithms was available, strengthening our hypothesis that reproduction of challenge results is limited even if a detailed method description is required from the organizers. Furthermore, training deep learning models is inherently associated with a certain degree of non-determinism, where two identical training runs can potentially lead to severely different results <cit.>. Only one challenge participant addressed this limitation by employing ensembling and averaging their results during inference. Thus, ironically, this work itself may be deemed non-reproducible. With this work, we showed that even well-documented methods are not easily reproducible. However, we think that the most effective way of reducing the issue of non-reproducibility would be publicly available source code of all participating teams of a challenge, although maybe practically challenging. Especially for the winning teams, such an action would be desirable since the winning method is typically seen as the new state-of-the-art method for a specific problem. 
We hope that this work will trigger further actions by stakeholders involved in policy-making for challenges. § ACKNOWLEDGEMENTS Part of this work was funded by Helmholtz Imaging, a platform of the Helmholtz Incubator on Information and Data Science. We would like to thank Marcel Knopp and Minu D. Tizabi for proofreading the document.
http://arxiv.org/abs/2307.04019v3
20230708173320
GP-guided MPPI for Efficient Navigation in Complex Unknown Cluttered Environments
[ "Ihab S. Mohamed", "Mahmoud Ali", "Lantao Liu" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.SY", "eess.SY" ]
Explicit a posteriori error representation for variational problems and application to TV-minimization [ August 12, 2023 ======================================================================================================== @topnum0 @botnum0 empty empty Robotic navigation in unknown, cluttered environments with limited sensing capabilities poses significant challenges in robotics. Local trajectory optimization methods, such as Model Predictive Path Intergal (MPPI), are a promising solution to this challenge. However, global guidance is required to ensure effective navigation, especially when encountering challenging environmental conditions or navigating beyond the planning horizon. This study presents the GP-MPPI, an online learning-based control strategy that integrates MPPI with a local perception model based on Sparse Gaussian Process (SGP). The key idea is to leverage the learning capability of SGP to construct a variance (uncertainty) surface, which enables the robot to learn about the navigable space surrounding it, identify a set of suggested subgoals, and ultimately recommend the optimal subgoal that minimizes a predefined cost function to the local MPPI planner. Afterward, MPPI computes the optimal control sequence that satisfies the robot and collision avoidance constraints. Such an approach eliminates the necessity of a global map of the environment or an offline training process. We validate the efficiency and robustness of our proposed control strategy through both simulated and real-world experiments of 2D autonomous navigation tasks in complex unknown environments, demonstrating its superiority in guiding the robot safely towards its desired goal while avoiding obstacles and escaping entrapment in local minima. The GPU implementation of GP-MPPI, including the supplementary video, is available at <https://github.com/IhabMohamed/GP-MPPI>. Autonomous vehicle navigation, MPPI, sparse Gaussian process (SGP), occupancy grid map path planning. § INTRODUCTION AND RELATED WORK Autonomous navigation of mobile robots in unknown, cluttered, and unpredictable environments with limited sensor capabilities is a challenging task owing to the inherent uncertainty and complexity of such environments. To tackle this challenge, a receding-horizon strategy such as Model Predictive Control (MPC) is commonly employed. The MPC control framework allows the robot to simultaneously plan a short trajectory (sequence of actions), following which the robot executes the immediate action while planning a subsequent trajectory. To successfully achieve receding-horizon planning, the robot must consider both safety and persistent feasibility, where safety is achieved by avoiding collisions with any obstacles while executing a planned trajectory, and persistent feasibility is maintained by always generating a safe trajectory that does not result in dead-ends or local minima while progressing towards the desired goal. One of the significant challenges in robot motion planning is that the desired goal is often situated beyond the planning horizon, which requires the use of local subgoals or cost-to-go heuristics for motion safety and persistent feasibility. A common strategy is to rely on single-query motion planning algorithms, such as A^* and RRT^X, to identify feasible paths that direct the local planner towards its desired goal <cit.>. 
For instance, the RRT^X algorithm, introduced in <cit.>, incorporates replanning techniques from the Dynamic Rapidly-exploring Random Trees (DRRT) and Rapidly-exploring Random Trees (RRT^*) algorithms to adjust the path during exploration based on environmental changes. However, due to its high computational demands, implementing this algorithm in real-time on a robot can be challenging. One alternative method to achieve efficient solutions for motion planning problems is the integration of MPC with data-driven methods, also known as learning-based MPC <cit.>. To name a few, a subgoal planning policy using Deep Reinforcement Learning (DRL) was recently proposed to guide the local MPC planner to navigate in crowded surroundings <cit.>. Similarly, RL was utilized to choose the next subgoal from a set of predefined possibilities <cit.>, guiding the robot through challenging environments with dead-end corridors while also preventing the MPC planner from getting trapped in local minima. Another related work that combines learning with MPC is POLO, which aims to enhance MPC performance by learning a global value function <cit.>. Most of these approaches typically rely on either offline training or having access to the global map of the environment. In addition, many recent studies have suggested combining Gaussian Processes (GPs) with MPC to learn system dynamics, leading to better control performance and robustness to uncertainty <cit.>. Another research avenue employed gap-based techniques that identify gaps as free spaces between obstacles, enabling a robot to move through them while avoiding local minima and obstacles. The first developed method was the Nearness Diagram (ND) <cit.>, but many of its variants exhibited undesired oscillatory motion. To overcome these limitations, robotics researchers have developed techniques that rely on the geometry of the gap. One such technique is the Follow-the-Gap Method (FGM), which selects a gap based on its area and computes the robot's heading using the gap center's direction relative to both the robot and the final goal <cit.>. Another approach is the sub-goal seeking method, which assigns a cost to each sub-goal based on the goal heading error with respect to the robot and the gap heading, and then selects the sub-goal with the lowest cost (error) <cit.>. The Admissible Gap (AG) method <cit.>, an iterative algorithm that takes into account the exact shape and kinematic constraints of the robot, identifies possible admissible gaps and selects the nearest gap as the goal. Different from all these strategies, our proposed framework leverages a sparse variant of the Gaussian Process (SGP) as a new perception model that “abstracts” local perception data so that the local subgoal for navigation can be naturally extracted. Specifically, we introduce the GP-MPPI control strategy, which enhances the state-of-the-art sampling-based MPC, Model Predictive Path Integral (MPPI) <cit.>, by incorporating the GP-subgoal recommender policy. Such a policy takes advantage of the SGP occupancy model to learn about the navigable space surrounding the robot, identifies a set of suggested subgoals, and ultimately recommends to the local MPPI planner the optimal subgoal that minimizes a predefined cost function, as demonstrated in Fig. <ref>. Subsequently, MPPI computes the optimal control sequence that satisfies the robot and collision avoidance constraints while moving towards the recommended subgoal, followed by applying the first optimal control 𝐮_0 to the robot.
In summary, the contributions of this work can be summarized as follows: * We propose an online learning-based control strategy that recommends subgoals solely based on local sensory information, ensuring safety and persistent feasibility; such an approach eliminates the need for a global map of the environment or an offline training process as in RL techniques, resulting in a more flexible and agile control framework that can be easily deployed in different unexplored environments, as revealed in Section <ref>. * To the best of the authors' knowledge, this is the first attempt to utilize the SGP occupancy model in conjunction with sampling-based trajectory optimization methods, specifically MPPI, to efficiently explore the navigable space surrounding the robot. * In Sections <ref> and <ref>, we validate our GP-MPPI control strategy for collision-free navigation in complex and unknown cluttered environments, using both simulation and experimental demonstrations; by comparing it with two baseline sampling-based approaches (namely, MPPI <cit.>, and log-MPPI <cit.>), we show its effectiveness in overcoming local minima that may arise when the sampled trajectories of MPPI are concentrated in high-cost regions or due to challenging environmental conditions. § PRELIMINARIES To provide the necessary background for our proposed work, in this section, we formulate the optimal control problem and present a concise overview of the MPPI control strategy that can be utilized to address this problem, along with a brief introduction to the Sparse Gaussian Process (SGP) which is the backbone of our GP-subgoal recommender policy. §.§ Problem Formulation Consider a nonlinear discrete-time stochastic dynamical system 𝐱_k+1=f(𝐱_k,𝐮_k+δ𝐮_k), with 𝐱_k ∈ℝ^n_x and 𝐮_k ∈ℝ^n_u representing the state of the system and its control input, respectively. The disturbance introduced into the control input, δ𝐮_k, is modeled as a zero-mean Gaussian noise with co-variance Σ_𝐮. Given a finite time-horizon N, we define the control sequence 𝐔 as 𝐔 = [𝐮_0, 𝐮_1, …,𝐮_N-1]^⊤∈ℝ^n_u N and the resulting state trajectory of the system being controlled as 𝐗 = [𝐱_0, 𝐱_1, …, 𝐱_N]^⊤∈ℝ^n_x (N+1). Furthermore, 𝒳^d is used to represent the d-dimensional space with 𝒳_rob(𝐱_k) ⊂𝒳^d and 𝒳_o b s⊂𝒳^d representing the robot's occupied area and obstacles' area, respectively. Let 𝐱_s and 𝐱_f denote the initial and desired (goal) state of the robot, respectively. Given 𝒳_rob(𝐱_k), 𝒳_o b s, 𝐱_s, and 𝐱_f, we aim to find the optimal control sequence, 𝐔, that allows the robot to safely and efficiently navigate from its initial state, 𝐱_s, to the desired state, 𝐱_f, by avoiding both getting stuck in local minima and collisions with obstacles, while minimizing a cost function J. The optimization problem at hand can be approached utilizing the classical MPPI control strategy described in <cit.>. This optimization can be mathematically expressed as in (<ref>), with the objective of minimizing the cost function, J, which is comprised of the expectation of a combination of state terminal cost ϕ(𝐱_N), running cost q(𝐱_k), and control inputs 𝐮_k, weighted by the positive-definite matrix R∈ℝ^n_u × n_u, taking into consideration the system dynamics outlined in (<ref>) and constraints such as collision avoidance and control constraints as stated in (<ref>). min _𝐔 J = 𝔼[ϕ(𝐱_N)+∑_k=0^N-1(q(𝐱_k)+1/2𝐮_k^⊤ R 𝐮_k)], s.t. 𝐱_k+1=f(𝐱_k, 𝐮_k+δ𝐮_k), δ𝐮_k∼𝒩(0, Σ_𝐮), 𝒳_rob(𝐱_k) ∩𝒳_obs=∅, 𝐡(𝐱_k, 𝐮_k) ≤ 0, 𝐱_0 = 𝐱_s, 𝐮_k∈𝕌, 𝐱_k∈𝕏. 
§.§ Overview of MPPI Control Strategy In order to solve the optimization control problem defined in (<ref>), MPPI leverages Monte Carlo simulation to generate a significant number of real-time simulated trajectories by propagating them from the underlying system dynamics. It then evaluates the cost-to-go of each trajectory based on a predefined cost function and updates the optimal control sequence by considering a weighted average cost from all of the simulated trajectories. More details are given in <cit.>. Subsequently, each trajectory τ_i in the time-horizon N can have its cost-to-go evaluated as given in (<ref>), where the cost-to-go S̃(τ_i) is calculated as the sum of the terminal state cost ϕ(𝐱_N) and the instantaneous running cost q̃(𝐱_k, 𝐮_k, δ𝐮_k,i) over all time steps. The instantaneous running cost, q̃, expressed in (<ref>), is comprised of the state-dependent running cost q(𝐱_k) and the quadratic control cost q(𝐮_k, δ𝐮_k), where γ_𝐮 = ν -1/2ν and the aggressiveness in exploring the state-space is determined by the parameter ν∈ℝ^+. Specifically, S̃(τ_i ) =ϕ(𝐱_N) + ∑_k=0^N-1q̃(𝐱_k, 𝐮_k, δ𝐮_k,i) ∀ i ∈{0, ⋯, M-1}, q̃= q(𝐱_k)_State-dep.+ γ_𝐮δ𝐮_k,i^⊤ R δ𝐮_k,i+ 𝐮_k^⊤ R δ𝐮_k,i+ 1/2𝐮_k^⊤ R 𝐮_k_q(𝐮_k, δ𝐮_k): Quadratic Control Cost. As outlined in (<ref>) from <cit.>, the optimal control sequence {𝐮_k}_k=0^N-1 in the vanilla MPPI algorithm is iteratively updated by taking a weighted average cost from all simulated trajectories, where S̃(τ_m) represents the cost-to-go of the m^th trajectory, and λ∈ℝ^+ denotes the “inverse temperature”, which regulates the selectiveness of the weighted average of the trajectories. After smoothing the resulting control sequence with a Savitzky-Galoy filter <cit.>, the first control 𝐮_0 is executed in the system, with the remaining sequence utilized as a warm-start for the next optimization step. Formally, 𝐮_k←𝐮_k +∑_m=0^M-1exp( -1/λS̃(τ_m) ) δ𝐮_k, m/∑_m=0^M-1exp( -1/λS̃(τ_m) ). §.§ Sparse Gaussian Process Gaussian Process (GP) is a well-established non-parametric model described by a mean function m(z) and a co-variance function k(z, z^') (also referred to as kernel function), where z∈ℝ^n_g is the input to the GP <cit.>; it can be mathematically expressed as f(𝐳) ∼𝒢 𝒫(m(𝐳), k(𝐳, 𝐳^')). Let 𝒟 = {(𝐳_i, y_i)}_i=1^n denote a dataset consisting of n input-output pairs, where each output y_i ∈ℝ is assumed to be the sum of an unknown underlying function f(𝐳_i) and Gaussian noise ϵ_i with a zero-mean and variance σ^2, i.e., ϵ_i ∼𝒩(0, σ^2). In the context of GP regression, to estimate the output y^* for a given new input z^*, the following GP prediction equation is employed p(y^* | y) = 𝒩(y^* | m_y(z^*), k_y(z^*,z^*) + σ^2), m_𝐲(𝐳) =K_𝐳 n(σ^2 I+K_n n)^-1𝐲, k_𝐲(𝐳, 𝐳^') =k(𝐳, 𝐳^')-K_𝐳 n(σ^2 I+K_n n)^-1 K_n 𝐳^', where m_𝐲(𝐳) and k_𝐲(z,z^') are the GP posterior mean and co-variance functions, respectively, while K_nn∈ℝ^n × n refers to the n × n co-variance matrix of the training inputs and K_𝐳n∈ℝ^n is n-dimensional row vector of kernel function values between 𝐳 and the training inputs, with K_n𝐳 = K_𝐳n^⊤. Achieving a more accurate GP prediction requires the optimization of hyper-parameters, such as kernel parameters Θ and noise variance σ^2, by maximizing the log marginal likelihood log p(𝐲)=log[𝒩(𝐲|0, σ^2 I+K_n n)]. The standard GP can be computationally intensive due to its complexity of 𝒪(n^3), where n represents the number of training instances. 
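The MPPI update described at the beginning of this section can be summarised in a short NumPy sketch before turning to the sparse GP approximation below. It is only an illustrative reading of the cost-to-go evaluation and the weighted-average update, not the authors' GPU implementation; the callback names (dynamics, running_cost, terminal_cost) are placeholders, and the default hyperparameters simply mirror the values reported later in the simulation setup.

import numpy as np

def mppi_iteration(u_nom, x0, dynamics, running_cost, terminal_cost,
                   M=2528, lam=0.572, Sigma_u=np.diag([0.023, 0.028])):
    # u_nom: (N, n_u) nominal control sequence warm-started from the previous step
    N, n_u = u_nom.shape
    # Sample M zero-mean Gaussian perturbation sequences, delta_u ~ N(0, Sigma_u)
    du = np.random.multivariate_normal(np.zeros(n_u), Sigma_u, size=(M, N))
    S = np.zeros(M)                                  # cost-to-go of each rollout
    for m in range(M):
        x = x0
        for k in range(N):
            x = dynamics(x, u_nom[k] + du[m, k])     # propagate the system model
            S[m] += running_cost(x, u_nom[k], du[m, k])
        S[m] += terminal_cost(x)                     # terminal state cost phi(x_N)
    # Exponentiated weights; subtracting the minimum cost avoids numerical overflow
    w = np.exp(-(S - S.min()) / lam)
    w /= w.sum()
    # Weighted average of the perturbations added to the nominal sequence
    return u_nom + np.tensordot(w, du, axes=1)

In the full controller, the first control of the returned sequence is smoothed, executed on the robot, and the remaining sequence is slid forward as a warm start for the next time step.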
To mitigate this issue, various approximation methods, collectively known as Sparse Gaussian Process (SGP), have been developed as an alternative approach. Instead of using the complete training data, SGP employs a smaller set of m_s training points, called inducing points Z_m_s, resulting in a more efficient process and a lower computation complexity of 𝒪(n m_s^2)  <cit.>. Our present work leverages the variational SGP method, proposed in <cit.>, to approximate the true posterior of a GP p(f|𝐲) using an approximated variational posterior distribution q(f,f_m_s), where f_m_s are the values of the underlying function f at the inducing points Z_m_s. This approximation is done by augmenting the true posterior with the variable f_m_s such as p(f,f_m_s|𝐲) = p(f|f_m_s) p(f_m_s|y). Then, the approximated variational distribution q(f,f_m_s) can be factorized in the same manner as the augmented true posterior, as follows q(f,f_m_s) = p(f|f_m_s)ϕ(f_m_s), where ϕ(f_m_s) is an unconstrained variational distribution over f _m_s and p(f|f_m_s) is the conditional GP prior. By minimizing the Kullback-Leibler (KL) divergence between the approximated and true posteriors, 𝕂𝕃[q(f, f_m_s)||p(f|𝐲)], the variational SGP obtains estimates of the inducing inputs Z_m_s and hyperparameters (Θ, σ^2). § GP-MPPI CONTROL STRATEGY The goal of our present research, as outlined in (<ref>), is to determine the optimal control sequence 𝐔={𝐮_k}_k=0^N-1 that enables safe and efficient navigation of the mobile robots through complex and unknown cluttered environments, while avoiding collisions with obstacles and getting trapped in local minima. Although the MPPI control framework, as summarized in <cit.>, has many positive attributes, it is prone to generating infeasible control sequences or trajectories, particularly when the distribution of all sampled trajectories are concentrated within high-cost regions. To mitigate this issue, new sampling strategies proposed in <cit.> have enabled more efficient exploration of the state-space, allowing the algorithm to find better solutions and potentially reduce the risk of trapping in local minima. Nevertheless, for specific tasks such as the one depicted in Fig. <ref>, eliminating the local minima remains a potential challenge that needs to be tackled. One solution could be incorporating MPPI with a global planner, such as the solution presented in <cit.>, which utilizes the RRT algorithm to guide MPPI. Instead, we introduce the GP-MPPI control strategy, a new online navigation technique that leverages the SGP occupancy model to learn about the navigable space surrounding the robot. Specifically, we introduce the GP-subgoal recommender policy, which identifies a set of recommended subgoals and subsequently suggests the optimal subgoal that minimizes a predefined cost function to the MPPI local planner, as depicted in Fig. <ref> and explained in detail in Section <ref>. Unlike conventional methods, a distinctive aspect of the proposed control strategy is that it does not require either a global map for long-term planning or an offline training process. §.§ SGP Occupancy Surface Representation Our proposed GP-subgoal recommendation policy relies on our earlier work presented in <cit.>, where we transformed pointcloud data into an occupancy surface and modeled it using a Sparse Gaussian Process (SGP). Within this approach, the occupancy surface takes the form of a 2D circular surface centered around the sensor origin and has a predefined radius of r_oc. 
This surface serves as the projection space for all observed points, which are represented in spherical coordinates (θ_i, α_i, r_i), where (θ_i, α_i, r_i) correspond to the azimuth, elevation, and radius values of each observed point, respectively. Each point 𝐳_i on the occupancy surface is defined by two attributes: the azimuth and elevation angles 𝐳_i= (θ_i, α_i), and assigned an occupancy value f(𝐳_i) that is a function of the point radius r_i, such as f(𝐳_i)=r_oc-r_i. Afterward, the probability of occupancy f(𝐳) over the occupancy surface is modeled by an SGP occupancy model, as follows f(𝐳) ∼𝒮𝒢𝒫(m(𝐳), k(𝐳, 𝐳^')), k(𝐳, 𝐳^') =σ_f^2(1+(𝐳-𝐳^')^2/2 αℓ^2)^-α, where σ_f^2 is the signal variance, l is the length-scale, and α is the relative weighting factor that manipulates large and small scale variations. In our SGP model, the point's occupancy to radius relation is encoded as a zero-mean function, m(𝐳)=0, where the occupancy value of the non-observed points is set to zero. The Rational Quadratic (RQ) kernel, k(𝐳, 𝐳^'), is selected as the SGP kernel due to its ability to model functions that vary across different length-scale <cit.>. This characteristic makes the RQ kernel well-suited for modeling the occupancy surface. In Fig. <ref>, we present a concrete example of the SGP occupancy model applied to our Jackal robot, which is equipped with a Velodyne VLP-16 LiDAR and located in an unknown cluttered environment, as depicted in Fig <ref>. The figure also illustrates the raw pointcloud generated by the onboard sensor (Fig <ref>), as well as the original occupancy surface, which represents the projection of the point clouds onto the 2D circular surface with radius r_oc, where warmer colors indicate areas of lower occupancy (Fig <ref>). Furthermore, Fig <ref> exhibits the SGP occupancy surface reconstructed by the SGP occupancy model, as previously expressed in (<ref>). The precision of the SGP occupancy model is intensively evaluated in our previous work <cit.>, where the results showed that an SGP occupancy model comprising of 400 inducing points generates a reconstructed point cloud with an average error of approximately 12. §.§ GP-Subgoal Recommender Policy The primary advantage of GP and its variants, compared to other modeling techniques, is their ability to provide a measure of variance, which indicates the level of uncertainty, along with a function estimate (i.e., mean). More precisely, in the context of the occupancy surface, the SGP occupancy model prediction, as defined in (<ref>), provides both mean μ_oc_i and variance σ_oc_i values for each point on the surface, where the mean represents the expected occupancy while the variance reflects the uncertainty associated with the predicted occupancy. Consequently, constructing the SGP occupancy surface is accompanied by an SGP variance surface that captures the uncertainty in the occupancy estimate, as depicted in Fig. <ref>. Within this research, we have opened up a new avenue for effectively utilizing the SGP variance surface as a reliable indicator for distinguishing between occupied and free spaces around the robot, where regions with variances higher than a certain threshold V_th correspond to free space, while low-variance regions indicate occupied space. In fact, the variance surface changes across observations due to variations in the number and distribution of observed points employed in the training of the SGP model. 
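As a rough illustration of this pipeline, the sketch below converts a pointcloud into the (azimuth, elevation) → occupancy training set with f(z) = r_oc - r and fits a variational sparse GP with a Rational Quadratic kernel using GPflow, the library the authors report using for the recommender. The helper names and the random initialisation of the inducing points are assumptions made for illustration; the original preprocessing may differ in detail.

import numpy as np
import gpflow

def occupancy_training_data(points_xyz, r_oc=5.0):
    # Project each observed point onto the occupancy surface of radius r_oc
    r = np.linalg.norm(points_xyz, axis=1)
    keep = (r > 1e-6) & (r < r_oc)
    x, y, z = points_xyz[keep, 0], points_xyz[keep, 1], points_xyz[keep, 2]
    azimuth = np.arctan2(y, x)
    elevation = np.arcsin(z / r[keep])
    occupancy = r_oc - r[keep]                       # f(z) = r_oc - r
    Z = np.column_stack([azimuth, elevation])
    return Z, occupancy.reshape(-1, 1)

def fit_sgp_occupancy(Z, y, num_inducing=400):
    # Variational sparse GP occupancy model with a Rational Quadratic kernel
    idx = np.random.choice(len(Z), size=min(num_inducing, len(Z)), replace=False)
    model = gpflow.models.SGPR((Z, y),
                               kernel=gpflow.kernels.RationalQuadratic(),
                               inducing_variable=Z[idx].copy())
    gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
    return model    # model.predict_f(grid) returns the mean and variance surfaces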
As a result, the variance threshold V_th is considered to be a variable that relies on the distribution of the variance across the surface and can be calculated as V_th=K_m v_m, where K_m ∈ℝ^+ is a tuning parameter and v_m represents the mean of the variance distribution. To identify free navigable spaces, we define a Gaussian Process frontier (namely, GP frontier) as the centroid point (θ_i, α_i) of each high variance region. These GP frontiers {f_i}_i=1^ℱ serve as local recommended subgoals (see colored circles in Fig. <ref>). Unlike the well-known frontier concept introduced in <cit.>, it is worth noting that our GP frontier does not rely on a global occupancy map; instead, it is extracted from the uncertainty of the current observation. Following the identification of the GP frontiers by the SGP model, a cost function J_gp is utilized to determine the optimal GP frontier f^* that guides the local planner (in our case, MPPI) towards the desired state 𝐱_f. Our cost function J_gp, given in (<ref>), has been established with two distinct terms. The first term, as introduced in <cit.>, calculates the distance d_fs between a GP frontier f_i and the desired state 𝐱_f. This distance criterion is used to identify the GP frontier closest to 𝐱_f. The second term, inspired by the direction criterion proposed in <cit.>, evaluates the direction θ_f_i of a GP frontier with respect to the robot heading. This criterion prioritizes a GP frontier that aligns better with the robot heading. J_gp(f_i) = k_dst d_fs + k_dirθ_fi^2 , f^* =argmin _f_i∈ℱ(J_gp(f_i)), where k_dst, k_dir are weighting factors. The GP frontier direction θ_f_i is squared to indicate the absolute direction. Finally, the local planner receives the optimal subgoal g^*, obtained by acquiring the Cartesian coordinate of the optimal GP frontier f^*, which leads the robot to its desired state 𝐱_f. §.§ Real-Time GP-MPPI Control Algorithm Algorithm <ref> summarizes the real-time control cycle of the GP-MPPI algorithm, which includes two primary components: the local MPPI motion planner (described earlier in Section <ref>) and the GP-subgoal recommender (explained in Section <ref>). Each time-step Δ t, the GP policy recommends the optimal subgoal g^*, the current state is estimated, and a M × N random control variations δ𝐮 are generated (lines 2:4). Then, M trajectories are simulated in parallel, propagated from the system dynamics defined in (<ref>), and evaluated using (<ref>) (lines 5:13). It is noteworthy that the minimum sampled cost trajectory, denoted as S̃_min, among all simulated trajectories prevents numerical overflow or underflow without affecting the optimality of the algorithm <cit.>. After that, the optimal control sequence {𝐮_k}_k=0^N-1 is updated, smoothed with a Savitzky-Galoy filter, and the first control 𝐮_0 is applied to the system (lines 14:18), while the remaining sequence of length N - 1 is slid down to be utilized at next time-step (lines 19:22). In lines 25 to 38, the function known as GP-SubgoalRecommender is described, which takes a pointcloud input (PCL) and returns the optimal subgoal g^* for the local planner. To optimize the hyper-parameters Θ and inducing points Z_m_s of the SGP occupancy model, the pointcloud data is transformed into training data 𝒟 (lines 26:29). The mean occupancy μ_oc and variance σ_oc are then estimated over the surface Z^*, and the GP frontiers are defined as those with σ_oc > V_th, where the centroids of these frontiers are converted to Cartesian coordinates (lines 30:34). 
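The frontier extraction and subgoal selection just described can be condensed into the following sketch. The variance surface is assumed to be evaluated on a 2D (elevation × azimuth) grid, high-variance regions are found with a connected-component labelling step, and the frontier minimising J_gp is returned; placing the subgoal at radius r_oc along the frontier azimuth is an illustrative simplification rather than the exact conversion used in the paper.

import numpy as np
from scipy import ndimage

def recommend_subgoal(var_grid, azimuths, robot_pose, goal_xy,
                      K_m=0.4, k_dst=5.0, k_dir=4.0, r_oc=5.0):
    # Variance threshold proportional to the mean of the variance distribution
    v_th = K_m * var_grid.mean()
    labels, n_regions = ndimage.label(var_grid > v_th)
    best_goal, best_cost = None, np.inf
    for region in range(1, n_regions + 1):
        # Centroid of each high-variance region (a GP frontier), in grid indices
        _, cj = ndimage.center_of_mass(labels == region)
        theta = azimuths[int(round(cj))]             # frontier azimuth w.r.t. the robot
        gx = robot_pose[0] + r_oc * np.cos(robot_pose[2] + theta)
        gy = robot_pose[1] + r_oc * np.sin(robot_pose[2] + theta)
        d_fs = np.hypot(goal_xy[0] - gx, goal_xy[1] - gy)   # distance term to x_f
        cost = k_dst * d_fs + k_dir * theta ** 2            # J_gp of the frontier
        if cost < best_cost:
            best_cost, best_goal = cost, (gx, gy)
    return best_goal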
Finally, the cost function J_gp in (<ref>) is used to select the optimal subgoal g^* (lines 35:37). In this study, we introduce two operating modes for the GP-MPPI algorithm: the simple mode (SM) and the recovery mode (RM). Under the simple mode, MPPI consistently leverages the optimal subgoal 𝐠^* suggested by the GP policy. In contrast, in the recovery mode, MPPI generates the optimal control sequence that steers the robot towards its desired state 𝐱_f, adhering to the recommended subgoal only when the robot is at risk of encountering local minima. Such local minima occur when the robot's linear velocity is zero (v=0) and its current state 𝐱_k does not match 𝐱_f (i.e., 𝐱_k ≠𝐱_f). Thanks to the optimal control sequence {𝐮_k}_k=0^N-1 obtained by MPPI, we can efficiently anticipate the occurrence of local minima by imposing a condition on the mean of the predicted linear velocities over the time-horizon N, expressed as follows: μ_𝐮 = 1/N∑_i=0^N-1 |v_i| < 𝐮_th, where 𝐮_th∈ℝ^+ represents a control switching threshold set based on N. If this condition is fulfilled, then MPPI will follow the subgoal recommended by the GP rather than navigating directly towards its desired state 𝐱_f. § SIMULATION-BASED EVALUATION In this section, the effectiveness of our proposed control strategy is assessed and compared with both vanilla MPPI and log-MPPI control strategies in a goal-oriented autonomous ground vehicle (AGV) navigation task conducted in 2D cluttered environments of unknown nature. §.§ Simulation Setup: In this study, we consider the kinematics model of a differential wheeled robot presented in <cit.>, specifically the fully autonomous ClearPath Jackal robot, where the robot's position and orientation in the world frame are given by 𝐱 = [x, y, θ]^⊤∈ℝ^3, and the control input 𝐮 = [v,ω]^⊤∈ℝ^2 denotes the robot's linear and angular velocities. Our autonomous AGV platform is equipped with a 16-beam Velodyne LiDAR sensor utilized for two key functions: (i) constructing the SGP variance surface, and (ii) generating the local costmap. The simulations for all proposed control schemes were conducted with the following parameters: a prediction time of 6, a control frequency of 30 (i.e., N=180), sampling 2528 rollouts per time-step Δ t, and an exploration variance ν of 1200. Additionally, a control weighting matrix R, expressed as λΣ_n^-1/2, is utilized. In the case of MPPI and GP-MPPI, the inverse temperature λ and the control noise co-variance Σ_𝐮 = Σ_n = Diag(σ_v^2, σ_w^2) are both set to 0.572 and Diag(0.023, 0.028), respectively. However, for log-MPPI, different values of 0.169 and Diag(0.017, 0.019) are used for these parameters, along with a normal distribution that has a co-variance of Σ_n = Diag(0.002, 0.0022) (For more details, refer to <cit.>). The Savitzky-Galoy (SG) convolutional filter is utilized with a quadratic polynomial function, i.e., n_sg=2, and a window length l_sg of 51. The occupancy surface was constructed with an occupancy radius r_oc of 5 meters, a full azimuth range of -180^o to 180^o, and elevation height of 0^o to 15^o. The SGP occupancy model was designed with 400 inducing points (Z_m = 400), where the GP frontiers were identified based on a variance threshold of V_th= K_m v_m, where K_m was set to 0.4. For the distance and direction factors K_dst and K_dir of the cost function J_gp, we assigned weighting factors of 5 and 4, respectively. To enable the recovery mode of the GP-MPPI, we have set the control threshold, 𝐮_th, to 0.55[]. 
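The switch between the simple and recovery modes reduces to a few lines. The sketch below is an illustrative reading of the switching condition rather than the authors' code: it compares the mean of the predicted linear velocities over the horizon with u_th and decides whether MPPI should head directly for the desired state or follow the GP-recommended subgoal.

import numpy as np

def select_target(u_seq, x_goal, gp_subgoal, u_th=0.55):
    # u_seq: (N, 2) optimal [v, omega] sequence returned by MPPI
    mu_u = np.mean(np.abs(u_seq[:, 0]))      # mean predicted linear speed over N
    if mu_u < u_th:
        return gp_subgoal   # risk of a local minimum: track the GP subgoal
    return x_goal           # otherwise head straight for the desired state x_f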
All the proposed control schemes, which are written in Python and integrated with the Robot Operating System (ROS) framework, are executed in real-time on an NVIDIA GeForce GTX 1660 Ti laptop GPU, with the GP-subgoal recommender built on GPflow<cit.>. To accomplish the 2D navigation task, we adopt a state-dependent cost function described in (<ref>), which comprises two terms. The first term, with Q = Diag(2.5,2.5,5), aims to steer the robot towards its desired state, whereas the second term incorporates a Boolean variable 𝕀_crash to heavily penalizes collisions with obstacles. q(𝐱_k)= (𝐱_k-𝐱_f)^⊤ Q (𝐱_k-𝐱_f) + 10^3 𝕀_crash. Since the robot is operating in unknown environments, it relies on a 2D costmap to maintain a record of obstacles in its vicinity. This costmap is generated by analyzing sensor data from the environment and constructing a 2D occupancy grid, with each cell typically categorized as occupied, free, or unknown <cit.>. The generated occupancy grid is subsequently employed as a 2D local costmap, feeding directly into the sampling-based MPC algorithm, enabling safe and collision-free navigation. The robot-centered 2D local costmap, which is built by the on-board Velodyne VLP-16 LiDAR sensor, has a size of 200×200 and a grid resolution of 0.05/. Finally, throughout the simulations, the maximum linear velocity v_max of the robot is set to 1.5/. §.§ Simulation Scenarios and Performance Metrics: The benchmark evaluation utilizes two types of Gazebo simulation environments, as depicted in Fig. <ref>. The first type, referred to as Forest #1, is a 50×50 forest-like environment characterized by tree-shaped obstacles with a density of 0.2/□; The other type, named Maze #1, is a 20×20 maze-like environment with three U-shaped rooms (i.e., U_1, U_2, and U_3), as well as various other obstacles (highlighted in red in Fig. <ref>)[To evaluate the local planner's obstacle avoidance capability, the red obstacles are intentionally made undetectable as occupied space by the GP-subgoal recommender, as occupancy elevation height is set to a higher value.]. In the first scenario, denoted as Forest #1, the robot is directed to navigate from an initial pose 𝐱_s = [-5,-8,0]^⊤ to a desired pose 𝐱_f = [20,20,45]^⊤ in ([], [], []). Meanwhile, in Maze #1, we conducted two separate control missions to (i) evaluate the robustness of our proposed control strategy, and (ii) examine its performance under the two different operating modes, previously described in Section <ref>. The first mission, MU_1, requires the robot to navigate from 𝐱_s = [-5,-8,60]^⊤ to a desired pose 𝐱_f = [4,4,45]^⊤ located inside U_1; while, in the second mission, named MU_2, the robot starts at 𝐱_s = [-6,8,0]^⊤, crosses U_2, and reaches a desired pose of 𝐱_f = [8,-8,170]^⊤. To ensure a fair and comprehensive comparison of the three control schemes, we have established a set of performance metrics, including the task completion percentage 𝒯_c, the average distance traveled by the robot d_av to reach 𝐱_f from 𝐱_s, the average linear velocity v_av of the robot within the cluttered environment, and the percentage of assistance 𝒜_gp provided by the GP-subgoal recommender policy to MPPI when the recovery mode is utilized. The successful task completion entails the robot reaching the target position without encountering obstacles or getting trapped in local minima ℛ_lm. 
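For completeness, the state-dependent running cost together with the local costmap lookup can be sketched as follows. The grid indexing and the lethal-cell threshold are placeholders for the actual ROS costmap interface; they are only meant to show how the collision indicator 𝕀_crash enters the cost.

import numpy as np

Q = np.diag([2.5, 2.5, 5.0])

def running_cost(x, x_goal, costmap, origin, resolution=0.05, crash_penalty=1e3):
    # Quadratic goal-attraction term (x - x_f)^T Q (x - x_f)
    err = x - x_goal        # note: the heading error is not wrapped in this sketch
    cost = err @ Q @ err
    # Placeholder lookup of the robot cell in the robot-centered 2D costmap
    i = int((x[1] - origin[1]) / resolution)
    j = int((x[0] - origin[0]) / resolution)
    crashed = costmap[i, j] >= 100          # lethal cell in the occupancy grid
    return cost + crash_penalty * float(crashed)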
§.§ Simulation Results: We evaluated the effectiveness of the proposed control strategies in Forest #1 and Maze #1 (i.e., MU_1 & MU_2) through 10 trials each, and the resulting performance statistics are summarized in Table <ref>. The performance results demonstrate that, as expected, the proposed GP-MPPI control strategy outperforms both the vanilla MPPI and log-MPPI as the autonomous vehicle successfully accomplished all control missions (with 𝒯_c=100%) without getting stuck in local minima or colliding with obstacles (i.e., ℛ_lm =0), despite having limited perception range and incomplete knowledge of the environment. In contrast, in Forest #1, log-MPPI achieved a task completion rate 𝒯_c of 95.72% over 10 trials, compared to 86.87% when MPPI was utilized. Additionally, log-MPPI encountered local minima only twice, while MPPI was trapped six times. Nevertheless, both control methods were unable to complete any of the trials in MU_1 and MU_2 due to the challenging environmental conditions (refer to the robot trajectories generated by log-MPPI in Fig. <ref>). Additionally, our proposed approach in Forest #1 provided a shorter route towards the desired state 𝐱_f, especially when the recovery mode (RM) is activated, similar to the optimal trajectory of the baselines, with an average linear velocity v_av of 1.30/, which approaches the maximum specified velocity of 1.5/. Concerning the two modes of GP-MPPI, it is observed that activating the recovery mode (RM) during Forest #1 and MU_1 missions improves the average distance traveled d_av by the robot. For instance, in MU_1, d_av was approximately 32.74 with RM, whereas with the simple mode (SM), which consistently relies on the subgoal recommended by GP, d_av was roughly 34.48. On the other hand, during the MU_2 mission, the RM produced a slightly longer robot trajectory than the SM since operating our proposed GP-MPPI in the RM strikes a balance between the state-dependent cost function that directs the robot to follow a direct route towards the desired state and the optimal subgoal recommended by the GP policy that forces the robot to avoid the dead-ends associated with rooms U_2 and U_3 on its way to 𝐱_f, as illustrated in Fig. <ref>. We can also see that, due to the presence of U-shaped rooms in Maze #1, the GP provides more assistance, represented by 𝒜_gp, than in Forest #1. In Fig. <ref>, we illustrate through an example from the conducted trials the robot trajectories generated by GP-MPPI under the two operating modes in Maze #1. We can clearly observe that our proposed control strategy successfully achieves collision-free navigation in both modes, without getting stuck in local minima. As an example, Fig. <ref> displays the velocity profile of the robot during the MU_1 mission shown in Fig. <ref>, while using GP-MPPI with RM, along with its corresponding mean of the predicted linear velocities μ_𝐮 over the given time-horizon N (see Fig. <ref>). The mean values that fall below the switching threshold 𝐮_th, set at 0.55[], denote the intervals where the RM is active, and are visually emphasized in yellow in Fig. <ref>. § REAL-WORLD DEMONSTRATION In this section, we experimentally demonstrate the applicability of our proposed control strategy in achieving a safe 2D grid-based collision-free navigation in a complex and unknown indoor cluttered environment. 
§.§.§ Experimental Setup and Validation Environment: To conduct our experimental validation, we used the simulation setup previously outlined in Section <ref>, except for (i) setting the maximum speed v_max to 1.0/ to avoid the robot localization error associated with using the RealSense camera as a source of localization, (ii) setting the occupancy radius r_oc to 3.0, and (iii) decreasing the size of the 2D grid map to 120×120. (Figure: Panoramic photo of our L-shaped indoor environment.) We also decreased the recovery mode switching threshold 𝐮_th to 0.3/ to be compatible with the updated v_max. Additionally, to ensure real-time execution of the GP-subgoal recommender policy, we decrease the resolution of the SGP variance surface to one-third of its original value along the azimuth axis while keeping the original resolution along the elevation axis. We employed an L-shaped indoor corridor environment measuring 9×14 for experimental validation. The environment has a varying width between 1.8 and 2.8 and contains randomly placed box-like obstacles, as depicted in Fig. <ref>. The assigned control mission of the robot is to navigate from 𝐱_s = [0,0,0]^⊤ and arrive at 𝐱_f = [7.5,13,90]^⊤. §.§.§ Experimental Results: The performance statistics of our proposed GP-MPPI control scheme, gathered from four trials conducted in our indoor environment, are summarized in Table <ref> for the two operating modes. From all trials, we can conclude that both operating modes provide collision-free navigation in the cluttered environment with an average linear velocity of 0.80, without the risk of being trapped in local minima (as ℛ_lm = 0) while moving towards the desired state. This ensures the safety and consistent feasibility of the receding-horizon planning. In contrast, it is observed that the vanilla MPPI and log-MPPI consistently failed to complete any of the trials due to being trapped in the first edge of the L-shaped environment. However, MPPI managed to avoid such traps with the aid of the GP-subgoal recommender policy in the recovery mode (RM), which provides an average assistance percentage 𝒜_gp of roughly 31.36%. More details about the simulation and experimental results, including the behavior of the baselines, are provided in the supplementary video: <https://youtu.be/et9t8X1wHKI>. § CONCLUSION In this work, we proposed the GP-MPPI control strategy, which comprises two primary components: the GP-subgoal recommender policy and the local MPPI planner. First, the GP-subgoal recommender utilized the learning capacity of the SGP to create a reliable SGP variance surface, which served as an indicator for differentiating between occupied and free spaces around the robot. Consequently, a set of suggested subgoals was identified, and the optimal subgoal that minimizes a predefined cost function was recommended to the local MPPI planner. Based on the recommended subgoal, MPPI computes the optimal control input that enables the robot to navigate towards the goal efficiently and safely while accounting for its dynamics and avoiding collisions. By conducting a combination of simulated and real-world experiments, we have shown that our proposed control strategy is superior to the vanilla MPPI and log-MPPI methods in achieving efficient and safe navigation in unknown and complex environments, thereby avoiding the risk of getting stuck in local minima.
http://arxiv.org/abs/2307.07344v1
20230714134705
Inverse Evolution Layers: Physics-informed Regularizers for Deep Neural Networks
[ "Chaoyu Liu", "Zhonghua Qiao", "Chao Li", "Carola-Bibiane Schönlieb" ]
cs.LG
[ "cs.LG", "cs.NA", "math.NA" ]
Inverse Evolution Layers: Physics-informed Regularizers for Deep Neural Networks Chaoyu Liu^1,First author, Zhonghua Qiao^1, Chao Li^2,3, Carola-Bibiane Schönlieb^3,Corresponding author ^1 Department of Applied Mathematics, The Hong Kong Polytechnic University ^2 School of Science and Engineering & School of Medicine, University of Dundee ^3 Department of Applied Mathematics and Theoretical Physics, University of Cambridge [email protected]; [email protected]; [email protected]; [email protected] This paper proposes a novel approach to integrating partial differential equation (PDE)-based evolution models into neural networks through a new type of regularization. Specifically, we propose inverse evolution layers (IELs) based on evolution equations. These layers can achieve specific regularization objectives and endow neural networks' outputs with corresponding properties of the evolution models. Moreover, IELs are straightforward to construct and implement, and can be easily designed for various physical evolutions and neural networks. Additionally, the design process for these layers can provide neural networks with intuitive and mathematical interpretability, thus enhancing the transparency and explainability of the approach. To demonstrate the effectiveness, efficiency, and simplicity of our approach, we present an example of endowing semantic segmentation models with the smoothness property based on the heat diffusion model. To achieve this goal, we design heat-diffusion IELs and apply them to address the challenge of semantic segmentation with noisy labels. The experimental results demonstrate that the heat-diffusion IELs can effectively mitigate the overfitting problem caused by noisy labels. § INTRODUCTION In recent years, deep learning has made a profound impact on many image processing tasks such as classification, segmentation and inpainting <cit.>. Compared to conventional mathematical models based on partial differential equations (PDEs), deep neural networks can extract shallow and deep features from large-scale training datasets, enabling them to outperform the PDE-based methods on many image processing tasks when given sufficient data. However, it can be challenging to mathematically analyze the outputs of neural networks and regulate them to exhibit desired properties. In contrast, PDE-based methods have a strong theoretical foundation and are well-established in the fields of mathematics and physics. This makes it easier to analyze and understand the behavior of these models, as well as provide theoretical guarantees on the solutions. Many PDE-based models have been extended to image processing tasks, achieving remarkable success <cit.>. The results obtained from these models can be guaranteed to have certain desirable properties under mathematical analysis. For example, in <cit.>, Liu proposes a PDE-based model by introducing phase field models and corresponding numerical schemes for image segmentation. The introduction of the phase field model enables the proposed model to exhibit strong robustness to noise.
Therefore, there is a meaningful research opportunity to establish a bridge between traditional mathematical models and data-driven models and to integrate powerful mathematical properties and tools into the latter. There have been many studies on connections between partial differential equations and neural networks. In <cit.>, Weinan provided an explicit exposition of the connection between neural networks, including the ultradeep ResNet <cit.>, and dynamical systems. Subsequent research by Lu <cit.> shows that many effective networks can be seen as different numerical discretizations of differential equations. Instead of explaining through discretized differential equations, Chen <cit.> proposed a novel approach whereby a neural network is employed to parameterize the derivative of the hidden state with respect to a continuous time variable. In addition to identifying the similarities between mathematical models and neural networks, many efforts have been made to integrate PDE-based models into neural networks. The most direct approach to achieving this objective is to add loss functions derived from various PDE models to regularize the outputs <cit.>. However, this method can cause intractable problems, including gradient explosion during the training process, when the added loss function lacks good derivative properties. Additionally, the added loss makes no contribution to the forward propagation during training or prediction. Recently, Liu <cit.> proposed a novel method of adding loss by integrating spatial regularization into the constrained optimization problem generalized from the softmax activation function. This approach enables the loss to affect the forward propagation. However, this method may significantly increase the computation cost during training since it requires solving an optimization problem during each forward propagation. In addition to loss-inserting methods, researchers have attempted to modify the architecture of neural networks to enhance their interpretability. For example, Chen <cit.> developed a neural network for image restoration by parameterizing a reaction-diffusion process derived from conventional PDE-based image restoration models. Similar techniques that employ convolutional neural networks (CNNs) to learn parameters in active contour models can be found in <cit.>. In <cit.>, Lunz combines neural networks with variational models for inverse problems by using a neural network to learn the regularizers in the models. Ruthotto <cit.> imparted the mathematical properties of different types of PDEs to neural networks by imposing certain constraints on their layers. Raissi <cit.> proposed Physics-Informed Neural Networks (PINNs), which approximate differential operators with automatic differentiation and impose two loss functions, for the equations and the boundary conditions, on a neural network to enforce it to behave as the given partial differential equations. Although the interpretability of these networks has significantly improved, their performance and learning ability could be limited by the specified architecture or constraints. Moreover, the specified architecture and constraints make it challenging to find an optimal balance between interpretability and learning ability. Consequently, their specified architectures and constraints limit their applicability to a particular type of problem and make them unsuitable for improving other popular and advanced neural networks.
For instance, PINNs have demonstrated efficacy in solving partial differential equations; however, their applicability to image processing tasks presents a significant challenge. In this paper, we propose a novel method to integrate PDE models into a given neural network by adding layers derived from the inverse processes of corresponding evolution equations. These layers enable the neural network outputs to possess the desired properties of the solutions of the evolutions. Our proposed method differs from traditional loss-inserting techniques in that it has stronger interpretability and affects both forward and backward propagation during training. In addition, it does not suffer from gradient-related issues during the backward propagation stage. Furthermore, it also allows for convenient trade-offs between desired properties and neural network performance through adjustment of the evolution time. The main contributions of this work are: 1. We propose a novel and straightforward framework for integrating PDE-based mathematical models into neural networks during training. This framework can be easily implemented and does not require any additional learning parameters. More importantly, it has strong generalizability and is applicable to a wide range of neural networks. In addition, our framework preserves the learning capabilities of the given neural networks as it does not alter their original structure or impose any constraints on them. 2. We develop the inverse evolution layers (IELs), which are derived from the inverse processes of the evolution equations in mathematical models. These layers have good interpretability and can act as regularizations by amplifying undesired properties of neural networks, thus compelling the neural networks' outputs to possess the desired properties during training. 3. We introduce heat-diffusion IELs and evaluate their performance on several semantic segmentation models on datasets with normal labels and noisy labels. Experimental results demonstrate the effectiveness of our proposed approach. The rest of this paper is organized as follows. In section <ref>, we provide a comprehensive description of the inverse evolution layers and the generalized neural network architecture with these layers, along with an explanation of how these layers can be utilized to regularize neural networks. In section <ref>, we present an example that employs heat-diffusion inverse evolution layers to address the issue of semantic segmentation with noisy labels. The experiment results show that our heat-diffusion IELs can significantly inhibit the networks' noise overfitting issue. In section <ref>, we summarize the main contributions and findings of our work, and discuss potential avenues for future research. § NEURAL NETWORKS WITH INVERSE EVOLUTION LAYERS §.§ Inverse Evolution Layers In this section, we introduce the inverse evolution layers (IELs) which play a pivotal role in our framework for integrating specific characteristics of mathematical models into neural networks. In physics, evolution processes can be modelled by partial differential equations that vary over time. For instance, diffusion processes can be modelled by the diffusion equations <cit.>. Additionally, advection–diffusion–reaction systems can be depicted through advection–diffusion–reaction equations <cit.>. The solutions of these processes inherently possess favorable properties. For example, the solution to a diffusion process will have good smoothness. 
Rather than enhancing these desirable properties, we aim to develop layers that amplify opposing unfavorable properties. To achieve this goal, we first consider a general form of the partial differential equations of forward evolutions u_t = ℱ(u), t∈[0,T], where u = u(t, x) denotes the analytical solution of an evolution, and ℱ refers to a combination of linear and nonlinear differential operators which may include a variety of gradient operators. Using powerful tools in numerical mathematics, one can efficiently solve these PDEs and determine the values of variables at any time in the evolution. For example, if solving the general partial differential equation by a simple explicit forward Euler's formula for temporal discretization and a finite difference scheme for spatial discretization, we can obtain the following discrete numerical scheme u_t+1-u_t/Δ t = F(u_t). Here u_t and u_t+1 respectively denote the solution at time T_t and T_t+1, where T_t+1-T_t = Δ t. F represents finite difference approximations of ℱ, and F(u_t) can be conceptualized as a combination of convolutions of u with designated filters. We rewrite equation (<ref>) as follows u_t+1 = u_t + Δ t*F(u_t). The above equation (<ref>) demonstrates that once the precise value of u_t is known, the value of u_t+1 can be determined. Moreover, given u_0, it is possible to obtain a numerical solution at any time by iteratively applying this equation. Based on the equation (<ref>), we propose inverse evolution layers. Instead of solving evolutions in a forward manner, these layers are utilized to numerically simulate the inverse process of evolutions. From the equation (<ref>), we can see that u_t+1 is calculated by adding the term Δ t*F(u_t) to u_t. Therefore, a natural way to obtain the simulation of the inverse evolution is to replace the "+" with "-". On this basis, we introduce the concept of inverse evolution layers. The construction of an inverse evolution layer is quite straightforward: we design a layer such that, given an input u, its output is L(u) = u - Δ t*F(u). From equation (<ref>), we can see that the inverse layer involves no learning parameters. In contrast to forward evolution, the inverse evolution layers are expected to amplify certain undesired properties, making them suitable for use in neural networks as a form of regularization. §.§ The Architecture of Neural Networks plus Inverse Evolution Layers Given a neural network, the way we incorporate inverse evolution layers is to add them to the output of the network before computing the loss function, as depicted in Figure <ref>. In our framework, we will activate the inverse evolution layers during training, while during the evaluation and prediction we will deactivate the inverse evolution layers. As previously mentioned, inverse evolution layers are designed to magnify undesirable properties and increase the inconsistency with the corresponding forward physical process. For example, as the solution of the heat diffusion process is typically smooth, the corresponding inverse evolution layers will accentuate the roughness of their inputs. Consequently, if the input to the inverse evolution layers, the output of the neural network, contains noise, the noise will be amplified after passing through the IELs. Due to the specialized construction of the IELs, it can be anticipated that during the training phase, the undesired characteristics of the neural network's outputs will be amplified after passing through the IELs. 
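Following this construction, an IEL is a single explicit inverse-Euler step applied to the network output before the loss is computed. A minimal PyTorch-style sketch is given below, assuming F is supplied as a differentiable finite-difference operator; the function names are illustrative and not taken from the authors' code.

import torch

def inverse_evolution_step(u, F, dt=0.1):
    # One IEL: L(u) = u - dt * F(u); the layer contains no learnable parameters
    return u - dt * F(u)

def training_output(network, images, F, num_iels=1, dt=0.1, training=True):
    # IELs are appended to the network output only during training (cf. Fig. <ref>)
    u = network(images)
    if training:
        for _ in range(num_iels):
            u = inverse_evolution_step(u, F, dt)
    return u    # fed to the loss during training; bypassed at evaluation/prediction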
Thus, the IELs can serve as a form of regularization by penalizing the neural network when it generates outputs that exhibit undesirable properties, so that such outputs occur much less frequently. § DIFFUSION IELS §.§ Derivation of Diffusion IELs In this section, we present a simple example by developing inverse evolution layers based on the heat diffusion equation. As the solution to the heat diffusion process is typically smooth, we can expect that the heat-diffusion IELs can effectively prevent neural networks from overfitting noise in images. The heat diffusion equation is formulated as u_t = Δ u, t∈[0,T]. According to equation (<ref>), we can construct IELs through the following formula L(u) = u - Δ t*F_Δ(u), where F_Δ, the finite difference approximation of Δ, can be depicted by the following 3× 3 filter, [ 0 1 0; 1 -4 1; 0 1 0 ]. §.§ Experiments for Heat-diffusion IELs In our experiments, we evaluate the effectiveness of heat-diffusion IELs on semantic segmentation tasks using different datasets with both normal and noisy labels. We use three well-known neural network architectures, namely Unet <cit.>, DeepLab <cit.> and HRNetV2-W48 <cit.>. Table <ref> provides the details on the number of IELs and Δ t we adopt for each network. Our experiments demonstrate that the heat-diffusion IELs are particularly effective for datasets with noisy labels. §.§.§ Unet with Heat-diffusion IELs on WBC We conducted a performance comparison between Unet and Unet with IELs on a small dataset, the White Blood Cell (WBC) dataset <cit.>, which consists of three hundred images. In the experiment, 270 samples were used for training and the remaining 30 samples were used for validation. The comparison is conducted on both normal and noisy labels. The way we add noise to training labels is to randomly choose some small windows on each label and replace their values with a random class. All noise windows in each label are 2× 2 and the total area of the noise is set to 10 percent of the image size. The Unet structure used in the experiment has a depth of 5, and the specific configuration in each level consists of 3× 3 convolution layers, instance normalization and leaky ReLU. Downsampling and upsampling are achieved by pooling operations and up-convolutions, respectively. The total number of epochs is set to 20, with a learning rate of 0.0001. The loss function used is the cross-entropy loss. The experimental results for the WBC dataset are presented in Figure <ref>, where the evaluation metric is the mean Dice score (DS) on the validation dataset. Segmentation results on noisy labels are displayed in Figure <ref>. Figure <ref> shows that the original Unet and Unet with IELs have comparable performance on normal labels but Unet with IELs is much more robust to noisy labels. These results suggest that heat-diffusion IELs do not reduce the performance of the original Unet but rather significantly improve it on noisy labels. In Figure <ref>, the results of the Unet are full of noise, which can be expected since neural networks are prone to noise <cit.>, while the segmentation maps of Unet with IELs have almost no noise. These results also indicate that the designed heat-diffusion IELs can act as regularizers and effectively prevent the Unet from overfitting noise. §.§.§ Unet with Heat-diffusion IELs on 2018 Data Science Bowl After the experiments on the WBC dataset, we extend the comparison to the 2018 Data Science Bowl dataset <cit.>, which contains 607 training and 67 test images.
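For the heat-diffusion case, F_Δ is a fixed 3×3 Laplacian convolution applied channel-wise. Mirroring the generic step sketched above, a hedged PyTorch implementation of a stack of heat-diffusion IELs acting on segmentation logits could read as follows; the layer count and Δt defaults are illustrative and not the values of Table <ref>.

import torch
import torch.nn.functional as F_nn

_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]])

def laplacian(u):
    # Channel-wise finite-difference Laplacian F_Delta(u) for a (B, C, H, W) tensor
    c = u.shape[1]
    weight = _LAPLACIAN.to(u.device, u.dtype).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    u_pad = F_nn.pad(u, (1, 1, 1, 1), mode='replicate')
    return F_nn.conv2d(u_pad, weight, groups=c)

class HeatDiffusionIEL(torch.nn.Module):
    # Stack of heat-diffusion IELs: u <- u - dt * Laplacian(u), repeated num_layers times
    def __init__(self, num_layers=2, dt=0.1):
        super().__init__()
        self.num_layers, self.dt = num_layers, dt

    def forward(self, logits):
        u = logits
        for _ in range(self.num_layers):
            u = u - self.dt * laplacian(u)   # inverse (anti-diffusion) step
        return u

# Training:   loss = cross_entropy(HeatDiffusionIEL()(net(images)), noisy_labels)
# Prediction: the IELs are deactivated and net(images) is used directly.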
In our experiment, the test images were used for validation during training. The comparison was also conducted on both normal labels and noisy labels, and the way we add noise is identical to that in the WBC dataset except that the size of the noise windows is tuned to 3× 3. The comparison results are displayed in Figure <ref> and Figure <ref>. Figure <ref> shows that on the 2018 Data Science Bowl dataset, the heat-diffusion IELs can still maintain the performance of the Unet on normal labels and prevent the Unet from overfitting to noisy labels. As shown in Figure <ref>, the original Unet still suffers a lot from noisy labels while Unet with IELs achieves much more satisfactory results on noisy labels. §.§.§ DeepLabV3+ with Heat-diffusion IELs on Cityscapes In addition to the datasets for medical images, we also evaluate the performance of our IELs on images obtained from real-world scenarios. The dataset we employ for this evaluation is the Cityscapes dataset <cit.>, which contains 2795 training and 500 validation images, and the corresponding neural networks we test on this dataset are DeepLabV3+ <cit.> and HRNetV2-W48 <cit.>. The way we add noise is identical to the previous experiments except that the size of the noise windows is 5× 5, which is also quite small compared to the image size (1024× 2048). For DeepLabV3+, we use ResNet-101 as the backbone and the results are presented in Figure <ref>, where the mean class-wise intersection over union (mIoU) is adopted as the evaluation metric. These results indicate that for normal labels, DeepLabV3+ and its IELs counterpart show similar performance, while for noisy labels DeepLabV3+ with IELs outperforms the original DeepLabV3+. This demonstrates that IELs are effective in mitigating the impact of noise in labels. Furthermore, Figure <ref> provides the detailed segmentation maps, which indicate that DeepLabV3+ tends to overfit the noise in labels whereas DeepLabV3+ with heat-diffusion IELs can generate accurate segmentation maps with minimal noise. §.§.§ HRNetV2-W48 with Heat-diffusion IELs on Cityscapes Furthermore, we test HRNetV2-W48 and its IELs counterpart on the Cityscapes dataset. To expedite the training process, a pretrained model is utilized. The results are displayed in Figure <ref>. We use the same evaluation metric, mIoU, for this experiment as well. While HRNetV2-W48 achieves a high mIoU for the validation dataset, its tendency to overfit on noisy labels is more prominent than that of DeepLabV3+. This is due to the fact that HRNetV2-W48 relies heavily on high-resolution features, which are more susceptible to noise. Nevertheless, our heat-diffusion IELs significantly mitigate this overfitting, as evident from the results. Additionally, Figure <ref> provides detailed segmentation maps, which further demonstrate the effectiveness of our heat-diffusion IELs in handling noise in labels. § CONCLUSION AND FUTURE WORK This paper proposes inverse evolution layers (IELs) as a regularization technique for neural networks. The aim of IELs is to guide the neural networks to produce outputs with expected partial differential equation (PDE) priors, thereby endowing them with appropriate properties. IELs can be easily incorporated into various neural networks without the introduction of new learnable parameters. Moreover, the integration of IELs does not compromise the learning ability of neural networks.
We demonstrate the effectiveness of IELs through the use of heat-diffusion IELs to address the issue of noisy labels in semantic segmentation. Our experiments illustrate the efficacy of diffusion IELs in this regard. Overall, our proposed approach represents a remarkable contribution to the field of neural network regularization, offering a promising means of integrating PDE-based mathematical models into neural networks in a transparent, interpretable, and effective manner. Regarding future work, we envisage that, apart from heat-diffusion layers, a variety of other IELs could be formulated to equip neural networks with diverse mathematical or physical properties to address specific image processing tasks such as image inpainting, image generation, classification using noisy or mislabeled data, and others. Additionally, extending IELs to other research domains such as natural language processing and generative models presents a promising area for further investigation. Furthermore, we anticipate developing rigorous theorems to clarify the working principle of IELs, thus facilitating their use in practice.
http://arxiv.org/abs/2307.05139v1
20230711093347
Coherent phonon and unconventional carriers in the magnetic kagome metal Fe$_3$Sn$_2$
[ "M. V. Gonçalves-Faria", "A. Pashkin", "Q. Wang", "H. C. Lei", "S. Winnerl", "A. A. Tsirlin", "M. Helm", "E. Uykur" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
[email protected] Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany Technische Universität Dresden, 01062 Dresden, Germany Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany Department of Physics and Beijing Key Laboratory of Opto-electronic Functional Materials & Micro-nano Devices, Renmin University of China, Beijing 100872, China Department of Physics and Beijing Key Laboratory of Opto-electronic Functional Materials & Micro-nano Devices, Renmin University of China, Beijing 100872, China Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany Felix Bloch Institute for Solid-State Physics, Leipzig University, 04103 Leipzig, Germany Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany Technische Universität Dresden, 01062 Dresden, Germany [email protected] Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany Temperature- and fluence-dependent carrier dynamics of the magnetic Kagome metal were studied using the ultrafast optical pump-probe technique. Two carrier relaxation processes (τ_1 and τ_2) and a laser induced coherent optical phonon were observed. By using the two-temperature model for metals, we ascribe the shorter relaxation τ_1 (∼1 ps) to hot electrons transferring their energy to the crystal lattice via electron-phonon scattering. τ_2 (∼25 ps), on the other hand, cannot be explained as a conventional process and is attributed to the unconventional (localized) carriers in the material. The observed coherent oscillation is assigned to be a totally symmetric A_1g optical phonon dominated by Sn displacements out of the Kagome planes, and possesses a prominently large amplitude, on the order of 10^-3, comparable to the maximum of the reflectivity change (ΔR/R). This amplitude is equivalent to charge-density-wave (CDW) systems, although no signs of such an instability were hitherto reported in . Our results set an unexpected connection between and kagome metals with CDW instabilities, and suggest a unique interplay between phonon and electron dynamics in this compound. Coherent phonon and unconventional carriers in the magnetic kagome metal E. Uykur August 12, 2023 ========================================================================= Introduction. During the last years magnetic Kagome metals have emerged as an interesting new class of materials due to their unusual properties. In terms of electronic structure, a simple tight-binding model constructed on the Kagome lattice is well known to result in dispersionless flat bands and Dirac cones <cit.>. Thus, strongly correlated and localized electrons together with topological states are expected for these materials. Combining such features with magnetism makes Kagome metals suitable to host different types of exotic phenomena, and in this regard the family of Kagome FeSn-binary compounds (FeSn, Fe_3Sn, ) presents several promising candidates. For both the linearly dispersing bands and the flat bands were previously observed experimentally <cit.>, and recently it attracted great attention after discoveries of massive Dirac fermions <cit.>, large anomalous Hall effect <cit.>, skyrmion bubbles at room temperature <cit.> and tunable spin textures using an external magnetic field <cit.>. 
is a layered rhombohedral material belonging to the R3̅m space group, with hexagonal lattice parameters a = b = 5.3 Å and c = 19.8 Å. Its crystalline structure is composed of Fe_3Sn kagome bilayers, where the Fe kagome network is stabilized with Sn1 atoms and sandwiched between honeycomb Sn2 layers [Fig. <ref>(a,b)] <cit.>. Previous studies reported a high ferromagnetic ordering temperature, T_C ∼ 640 K <cit.>, and identified a temperature-driven spin reorientation in . The spin reorientation, where the spins are realigned from the out-of-plane direction towards in-plane <cit.>, occurs in a broad temperature range, between 150 K and 70 K, and its signatures were observed with several different experimental techniques, such as Mössbauer <cit.>, neutron diffraction <cit.>, electronic transport <cit.> and infrared spectroscopy <cit.>. Furthermore, due to the bilayer nature of the compound and the presence of a breathing Kagome distortion on the Fe-Kagome layer [Fig. <ref>(b)], a slightly modified band structure is expected <cit.>. It has been shown that the observed properties of are closely related to peculiarities of its lattice and magnetic structure <cit.>. The fingerprints of the non-trivial carrier dynamics have been identified in optical studies <cit.>, whereas the interplay of the topological orders with magnetism and strongly correlated electrons is yet to be clarified. The tunability of different contributions is highly desirable, also for possible future applications of . Here, we present an ultrafast optical pump-probe spectroscopy investigation on . This method has been extensively used to study the dynamics of non equilibrium charge carriers and phonon dynamics in solids <cit.>, and it is well suited to study metallic systems <cit.>, where different contributions can be identified. So far, the ultrafast carrier dynamics for Kagome metals have not yet been widely explored, with only a few reports on nonmagnetic CsV_3Sb_5 <cit.>, where the ultrafast response of the unusual charge-density-wave (CDW) state has been probed. In this letter, we report the temperature- and fluence-dependent transient reflectivity measurements of using optical pump-probe. Our results reveal an unusually large amplitude of coherent phonon oscillations in , with intriguing similarities to the CDW case, even though no CDW has been reported in as a ferromagnetic Kagome metal. Thus, giving new insights into the electron-phonon coupling as the possible mechanism related to the unconventional carriers in kagome metals. Experimental. A temperature- and fluence- dependent optical pump-probe spectroscopy study has been performed on as-grown single crystals <cit.> in the reflection geometry. For pump and probe we used ∼ 60 fs long laser pulses, centered at 800 nm and generated by a Ti:sapphire laser amplifier with 250 kHz repetition rate. Further experimental details can be found in the supplementary material. In Fig. <ref>(c), we summarize the general behavior of the observed relaxation processes. Here, reflectivity change (Δ R/R) is given as a function of the pump-probe time delay at 300 K with pump fluence of 1.6 mJ/cm^2. The best fit for the spectrum was achieved with the following equation: Δ R/R = y0 + c_1exp(-t/τ_1) + c_2exp(-t/τ_2), where c_1 and c_2 are constants, y0 is an offset parameter and τ_1 and τ_2 are the relaxation times. 
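For illustration, the biexponential model just defined can be fitted to a measured ΔR/R(t) trace with a standard nonlinear least-squares routine. The snippet below is a generic sketch (the initial guesses and function names are ours, not the authors' analysis code), with t in ps so that the fitted τ_1 and τ_2 come out directly in the units used in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def dRR_model(t, y0, c1, tau1, c2, tau2):
    """Two exponential relaxations on top of a constant offset, as in the text."""
    return y0 + c1 * np.exp(-t / tau1) + c2 * np.exp(-t / tau2)

def fit_relaxations(t, dRR):
    """Fit the transient reflectivity; t in ps, dRR measured after time zero."""
    p0 = (dRR[-1], dRR.max(), 1.0, 0.3 * dRR.max(), 25.0)   # rough initial guesses
    popt, pcov = curve_fit(dRR_model, t, dRR, p0=p0, maxfev=10000)
    names = ("y0", "c1", "tau1_ps", "c2", "tau2_ps")
    return dict(zip(names, popt)), np.sqrt(np.diag(pcov))
```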
The time scales of the relaxations were: τ_1 in the order of ∼ 1 ps (yellow solid line), τ_2 in the order of a few tens of picoseconds (green solid line), and finally a much longer relaxation that had to be approximated using the offset constant term y0 (blue dashed line). Another interesting feature is that around the first ∼ 8 ps after pump-probe temporal overlap, the decaying signal is modulated by pronounced oscillations, as seen in the inset of Fig. <ref>(c). This is a coherent optical phonon induced by the ultrashort pump pulse and will be discussed in more detail later on. Relaxations. The temperature dependence of the transient reflectivity, the obtained relaxation times and the offset constant y0 are given in Fig. <ref>(a-d), whereas Fig. <ref>(e) depicts c1 and c2, the constants representing the amplitude of the τ_1 and τ_2 according to Eq. <ref>, respectively. Fig. <ref>(f-j) demonstrate the fluence dependence of the same parameters. We limited the time delay to 8 ps, longer time delays can be found in the supplementary material. Due to the metallic nature of <cit.>, τ_1 and y0 can be explained using the phenomenological two temperature model (TTM) for metals <cit.>, where τ_1 is the relaxation of the hot electrons, whereas y_0 reflects the dissipation of the residual lattice heating. As given in Fig. <ref>(b), τ_1 is temperature independent, lying around 1.1 ps. y0, on the other hand, increases with increasing temperature up to around 175 K and then it saturates for higher temperatures, indicating that cooling down the sample removes the excess heat and brings the system to equilibrium faster [Fig. <ref>(d)]. The fluence dependencies of τ_1 and y0 also corroborate this explanation as seen in Fig. <ref>(g) and (i), respectively. By simply taking into account the electron/lattice temperature and the electron-phonon coupling, the increase of τ_1 with fluence can be nicely reproduced by the TTM model [red solid line in Fig. <ref>(g)]. A similar change has also been observed for 10 K and 170 K (see supplementary material for details of the TTM and the analysis for 10 K and 170 K). Coming to the τ_2 as represented by the green solid line in Fig. <ref>, the dynamics behind this process indicates a departure from a simple Drude metal. Considering that the spectra are dominated by the coherent phonon oscillations and the excess heat of the system generates a background, τ_2 is more reliably extracted at low temperatures, where y0 vanishes. The τ_2 value is weakly temperature dependent [Fig. <ref>(c)] changing from 30 ps to ∼ 25 ps with decreasing temperature. At high temperatures, we did not observe any fluence dependence [Fig. <ref>(h)]. With decreasing temperature at lower fluences, a small decrease is present and it goes into the saturation limit at higher pump fluences. We ascribe τ_2 to the unconventional carriers that are expected in kagome metals. Previous optical studies <cit.> suggest that is not a simple metal. Its optical conductivity shows two distinct intraband contributions. A sharp Drude contribution is accompanied by a second peak due to localized carriers (localization peak), which is the common situation on both magnetic and nonmagnetic kagome metals <cit.>. Here pumping leads to the delocalization of these unconventional carriers and we believe to observe the time scale of the localization process. The amplitude of this process should be proportional to the spectral weight of the localization peak observed in the broadband IR spectroscopy measurements <cit.>. 
Indeed a direct comparison reveals a temperature-independent behavior for both the spectral weight of the localization peak [Fig.S5(e) of Ref. <cit.>] and the amplitude of τ_2 [c_2 in Fig. <ref>(e)]. The temperature-driven dynamics show a different evolution of c_1 and c_2 as given in Fig. <ref>(e). While with decreasing temperature, c_2 does not change, c_1 shows a slight increase and saturates below the spin-reorientation temperature. Here the change of the carrier density is probably not related with the change in the Fermi level with temperature, but rather with gapping of the certain parts of the Fermi surface upon the reorientation of the spins. With increasing fluence, on the other hand [Fig. <ref>(j)], a linear increase is observed for both c_1 and c_2, which is consistent with the increase of photo-excited carriers at higher fluences. Phonon mode. Now let us turn to the coherent optical phonon identified in the spectra. These laser-induced oscillations are generated by the lattice atoms vibrating in phase to each other, and measured as a periodic modulation of the optical properties <cit.>. In supplementary material the details regarding data analysis for the resonance frequency and amplitude of the mode can be found. Such coherent phonon oscillations are reported for different systems in the literature <cit.>. However, the extraordinary strength of these oscillations in the current measurements is an interesting finding. Such strong oscillations are usually observed in systems with periodic lattice distortions, charge density waves, and other types of collective order <cit.>, whereas does not possess any of those. On the other hand, the correlated nature of the kagome metals has been identified by different means, including the observation of the aforementioned localization peak in the optical spectra. Here, the intraband carriers are damped by the back-scattering from the collective modes, which in principle can have any bosonic excitation as origin. Our observation of this unusual phonon coupling makes phonons a plausible candidate for this collective mode. Fig. <ref>(a-c) depicts the temperature dependence of the obtained phonon parameters, namely the resonance frequency, amplitude and width. Its frequency, retrieved using a Fourier transform was found to be around 2.40 to 2.50 THz, which corresponds to an A_1g totally symmetric mode  <cit.>. As shown in Fig. <ref>, this is primarily an out-of-plane Sn mode that does not affect the kagome network significantly. It is dependent on both temperature and magnetic structure, presenting a clear phonon softening with increasing temperature and anomalies on its amplitude and peak width around the spin reorientation region ∼150 K, as demonstrated with the blue arrow in Fig. <ref>(b). The phonon softening and the increase of the amplitude of the phonon oscillations have also been observed with increasing fluence [See suplementary Fig.S5]. Phonon softening with temperature and fluence is often attributed to anharmonic terms in the vibrational potential energy <cit.>. However, other signatures of these anharmonic effects are not observed in our data. For instance, the increase of the amplitude does not follow the expected increasing behavior. Furthermore, the width does not show a decrease, in fact a slight increase at higher temperatures indicates a strong electron-phonon coupling. 
Other evidence against the anharmonic phonon softening is that the decay rate of the phonon does not change significantly with temperature (details of the analysis are given in supplementary material). Along with the evidences against the anharmonic phonon coupling, the absence of E_g phonon modes, the cosine-like character of the oscillations [see supplementary], and the large amplitude of the oscillations when compared to the non-oscillatory decaying signal (also increasing linearly with fluence), are strong indications of displacive excitation of coherent phonons (DECP) as the mechanism behind this coherent phonon generation <cit.>. This indicates a strong electron-phonon coupling (e-p) in in both low and room temperature regimes, as DECP depends exclusively on this coupling to induce the coherent oscillations. The maximum of the non-oscillatory exponential decay increases with fluence, indicating a larger photo-excited carriers density at higher fluences, and then a considerable electronic softening of the lattice is expected <cit.>. As a consequence, the reduction of the restoring force for the A_1g lattice displacement appears naturally with the excitation of a larger number of electrons. Thus, this phenomenon can be understood as solely an electronic softening of the crystal lattice. Such a strong phonon coupling suggests some sort of an incipient lattice distortion in . In first glance, the breathing kagome distortion [Fig. <ref>(b)], where the successive Fe-bonds in kagome network are slightly different, is a reasonable cause. On the other hand, in this case, it is expected that the breathing E_g mode, which directly affects the kagome network, would be the phonon that modulates reflectivity. Considering that the observed A_1g mode does not affect this breathing kagome structure, this assumption seems to be unlikely. Another possibility why is special lies in the proclivity of kagome metals for charge-density-wave instabilities that have been revealed not only in nonmagnetic compounds like AV_3Sb_5 and ScV_6Sn_6 <cit.>, but also in the magnetic kagome metal FeGe <cit.>. Our data support growing evidence that even in the absence of a CDW transition, charge carriers in kagome metals can be strongly coupled to specific phonons that, in turn, have crucial effect on their dynamics.. Finally, we use density-functional-theory (DFT) to elucidate the effect of the A_1g phonon mode on the optical conductivity, details of the calculations are given in supplementary. We have introduced the atomic displacements due to the phonon mode as demonstrated in Fig. <ref>, and calculated the optical conductivity as given in Fig. <ref>(d). The displacement amplitude is taken as 0.1 Å that is consistent with the estimated atomic displacement (see supplementary). To demonstrate changes in the optical conductivity, and ensuing changes in the reflectivity, we have plotted in Fig. <ref>(e) the difference in optical conductivity with respect to the undistorted structure. The results suggest that at 800 nm [red line in Fig. <ref>(e)], the observed 2.5 THz phonon mode has a strong impact on the optical conductivity and can clearly be the reason behind the observed 10^-3 change in the reflectivity (the orange circles are the estimates over the experimental reflectivity spectra). The distortion of the structure in two opposite directions nicely leads to a symmetric change of the optical conductivity. 
For comparison, changes in the optical conductivity induced by the other A_1g modes (presented in the supplementary materials) have also been calculated. The results suggest that at 800 nm the most prominent change is due to the observed 2.5 THz A_1g mode, and the other modes do not alter the optical conductivity significantly. These calculations may also explain why we could measure only a single phonon mode as a reflectivity modulation, while the other totally symmetric A_1g modes have not been observed. Conclusions. Photo-induced changes in the reflectivity of the kagome metal Fe_3Sn_2 reveal the dynamics of carriers and coherent optical phonons. We detect three time scales. Two of them, the faster and slower ones, are clearly related to the highly metallic nature of the material and can be well explained using the two-temperature model for metals. The intermediate time scale, on the other hand, is related to the unconventional localized carriers in kagome metals. Their distinct relaxation time and coupling to short optical pulses allow an independent probe of Drude and localized carriers, as well as the control of localization using ultrafast optical probes. Additionally, strong coherent phonon oscillations have been observed, indicating a strong electron-phonon coupling in Fe_3Sn_2 even at room temperature. The nature of this phonon mode is attributed to the electronic softening of the crystal lattice due to the large photo-induced carrier density. The spin reorientation of Fe_3Sn_2 around 150 K does not seem to have a significant effect on the dynamics of charge carriers, although it manifests itself in the temperature dependence of the coherent phonon. In conclusion, our study demonstrates the salient role of phonon dynamics and electron-phonon coupling even in those kagome metals where no CDW instabilities occur. H. C. L. acknowledges support from the National Key R&D Program of China (Grants No. 2016YFA0300504 and No. 2018YFE0202600), and the National Natural Science Foundation of China (Grants No. 11574394, No. 11774423, and No. 11822412). The work in Germany has been supported by the Deutsche Forschungsgemeinschaft (DFG) via Grant UY63/2-1. Computations for this work were done (in part) using resources of the Leipzig University Computing Center. Supplementary Material for “Ultrafast Carrier and Phonon Dynamics of the Magnetic Kagome Metal Fe_3Sn_2” M. V. Gonçalves-Faria, A. Pashkin, Q. Wang, H. C. Lei, S. Winnerl, A. A. Tsirlin, M. Helm, and E. Uykur §.§ Samples and the Experimental Details Single crystals of Fe_3Sn_2 were grown using the self-flux method, as described elsewhere Wang2016[1]. The (001)-plane sample, with dimensions of 1000 μm×800 μm× 200 μm, is used for the optical pump-probe transient reflectivity measurements. We used the same as-grown sample as in our previous infrared spectroscopy study Biswas2020[2]. Temperature- and fluence-dependent optical pump-probe spectroscopy was performed in the reflection geometry. For pump and probe we used 60 fs long laser pulses, centered at 800 nm and generated by a Ti:sapphire laser amplifier with a 250 kHz repetition rate. The probe spot on the sample was around 25 μm and the pump spot was around 30 μm. §.§ Data Analysis Transient reflectivity was measured up to 150 ps delay time with 1 ps time resolution. Up to around 9 ps delay time, we increased the time resolution to 33 fs to resolve the phonon oscillations.
The overall temperature and fluence dependence of the transient reflectivity does not change drastically and can be analysed by employing equation (1) from the main text. τ_1 and τ_2 are the retrieved relaxation times, and y0, the offset parameter, is also related to a relaxation process, as described in the main text; however, since it is much longer than our measurement limit, the relaxation time cannot be obtained reliably. Therefore, we model it with a constant. In Fig. <ref>, the temperature and fluence dependence of the measured transient reflectivity can be seen in the short and long time delays. The oscillations dominating the short time delays are the coherent phonons. The analysis process of these are given below and the details are discussed in the main text. §.§ Phonon Mode The short delay time of the transient reflectivity is dominated by the coherent phonon oscillations. Here, we fit the non oscillatory part of the transient reflectivity as demonstrated in Fig. <ref>. Afterwards, this fit is subtracted from the main signal to obtain the isolated oscillations as shown with the blue curve in the same figure. The fast Fourier transform (FFT) of these oscillations gives the resonance frequency of the phonon mode that couples to the electronic background. As depicted in the inset of Fig. <ref>, we fitted the FFT with a Lorentzian in order to obtain the resonance frequency, amplitude, and the width of the phonon mode, as discussed in the main text. The details of the observed coherent phonon oscillations were analyzed with the wavelet transform of the frequency of the oscillations as given in Fig. <ref>. The decay rate of the phonon does not change significantly with temperature. It is around 0.32 ps^-1 both at room temperature and 10 K. With increasing fluence, it slightly increases to 0.35 ps^-1, nonetheless, does not change appreciably with temperature. Moreover, here it is demonstrated that the A_1g mode is the only mode observed and it does not vary with time. As depicted in Fig. <ref>, we observed no chirp in the A_1g phonon mode, in contrast to what has been observed in other compounds, such as Bismuth for example Misochko2004[3]. Therefore, we can fit the oscillatory part of the signal (blue line in Fig. <ref>) with the following equation y = Ae^-(t/t_0)sin(ω t + ϕ), in order to retrieve its phase ϕ. This is an additional way of testing the displacive excitation of coherent phonons (DECP) as the generating mechanism of the coherent phonons, since the oscillations should present a cosine-like behavior Zeiger1992[4]. Fig. <ref> displays the experimental data (dots) and the fitting (red solid line) using Eq. <ref> at 10 K (panel a) and room temperature (panel b). The oscillations were extrapolated until time zero, that was determined from the pump-probe interference pattern generated before the increase in reflectivity. At 10 K the phase is 90° away from a sine function and at 300 K the phase is 102°, which indicates a cosine description, being; therefore, in good agreement with a DECP-launched coherent phonon. The fluence dependence of the phonon is also investigated. In the main text, the phonon softening with increasing fluence for different temperatures has been discussed along with the changes on its amplitude (increasing with fluence). Here we plot (Fig. <ref>) the resonance frequency and the amplitude on the same graph demonstrating the remarkable match between them as another evidence to the DECP mechanism. 
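The oscillation analysis described in this subsection can be sketched as follows: subtract the fitted non-oscillatory background, Fourier-transform the residual, and fit a Lorentzian around the phonon peak to obtain its frequency, amplitude, and width; the damped sinusoid given above can then be fitted directly to extract the phase. The snippet is illustrative only; windowing, the fit range, and initial guesses are our choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, f0, w, base):
    """Lorentzian peak of amplitude A, center f0, and full width w on a flat base."""
    return A * (w / 2) ** 2 / ((f - f0) ** 2 + (w / 2) ** 2) + base

def phonon_from_residual(t_ps, residual):
    """FFT of the background-subtracted oscillations and Lorentzian fit of the peak.
    With t in ps (33 fs steps here), the frequency axis comes out in THz."""
    dt = t_ps[1] - t_ps[0]
    spec = np.abs(np.fft.rfft(residual * np.hanning(residual.size)))
    freq = np.fft.rfftfreq(residual.size, d=dt)
    i0 = np.argmax(spec[1:]) + 1                 # skip the zero-frequency bin
    popt, _ = curve_fit(lorentzian, freq, spec, p0=(spec[i0], freq[i0], 0.2, 0.0))
    return dict(amplitude=popt[0], f0_THz=popt[1], width_THz=popt[2])

def damped_sine(t, A, t0, omega, phi):
    """Damped sinusoid used to extract the oscillation phase (cosine-like for DECP)."""
    return A * np.exp(-t / t0) * np.sin(omega * t + phi)
```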
Please note that lower fluences, especially at higher temperatures, show a worse signal-to-noise ratio; hence some deviations are observed. §.§ Heat Capacity and Two Temperature Model In Fig. <ref>, we plot the temperature dependence of the heat capacity between 2 and 300 K, provided by the crystal grower (H. Lei, unpublished). The smooth evolution of the heat capacity confirms the absence of structural phase transitions in this temperature range. At high temperatures, the heat capacity reaches the Dulong-Petit limit. The inset shows C_p/T vs T^2 with its linear fit according to: C_p = γ T + (12/5)π^4 nR(T/Θ_D)^3, where γ, n, R, and Θ_D are the Sommerfeld coefficient of the electronic contribution, the number of atoms per formula unit, the molar gas constant, and the Debye temperature of the lattice contribution, respectively. Our fit gives γ = 9.45 mJ mol^-1 K^-2 and Θ_D = 263 K. The parameters τ_1 and y_0, retrieved from the exponential fitting of the experimental data, can be explained by the two-temperature model (TTM) Ultrafast_metals2[5]. In this model, the metal is described as two coupled thermal subsystems, composed of the electrons and the crystal lattice, respectively. A difference between the electronic and lattice temperatures can be induced by the incident ultrashort laser pulse, which transfers its energy to the conduction electrons; these thermalize rapidly via electron-electron scattering. Since the electronic heat capacity C_e is much smaller than the lattice heat capacity C_l, it is possible to create transient electronic temperatures much higher than the lattice temperature. Then, on a time scale of the order of τ_1, these hot electrons return to a local equilibrium solely through electron-phonon scattering processes Ultrafast_metals[6]. In such systems a much longer relaxation process is also observed. This results from residual lattice heating, which cools down on a longer time scale via heat diffusion. Based on the TTM, the relaxation time associated with the electron-phonon scattering, τ_1, is given by Groeneveld1995[7]: τ = γ(T_e^2 - T_l^2)/[2H(T_e,T_l)], where γ is the Sommerfeld coefficient calculated with the heat capacity data from Fig. <ref> and fitted with Eq.(<ref>), T_e and T_l are the temperatures of the electronic and lattice subsystems, respectively, and H(T_e,T_l) is the energy transfer rate from electrons to phonons, per unit volume and per second, given by: H(T_e,T_l) = f(T_e) - f(T_l), where f(T) = (4G_∞T^5/Θ_D^4) ∫_0^(Θ_D/T) x^4/(e^x - 1) dx, with G_∞ being the electron-phonon coupling constant. The electronic temperature can be estimated by T_e = (T_l^2 + 2U_l/γ)^(1/2), where U_l is the deposited laser energy density, calculated using the penetration depth of Fe_3Sn_2 and the incident fluences. The fluence dependence of T_e at room temperature according to Eq. <ref> is shown in Fig. <ref>. Figure 2(g) of the main text presents the fit for τ_1 as a function of fluence at room temperature using Eq.(<ref>). At 10 and 170 K the results and the fits for τ_1 are presented in Fig. <ref>. In all cases an increase of τ_1 with increasing fluence was observed, which is the behavior expected from the TTM. In Table (<ref>) the retrieved values for the electron-phonon coupling constant G_∞ are summarised. These are the results from the best fits of the experimental data to the model in Eq. (<ref>). One can see that G_∞ decreases with increasing temperature, which is in agreement with the phonon amplitude shown in Fig.(3b) of the main text.
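The TTM expressions above lend themselves to direct numerical evaluation. The sketch below computes τ_1 from the stated formulas, with the Debye integral evaluated numerically; the caveat is that γ, G_∞, and the deposited energy density U_l must be supplied in mutually consistent (e.g. volumetric) units, and all numbers other than γ and Θ_D are placeholders rather than values from the paper.

```python
import numpy as np
from scipy.integrate import quad

THETA_D = 263.0   # K, Debye temperature from the C_p fit above
GAMMA = 9.45      # Sommerfeld coefficient from the fit (mJ mol^-1 K^-2); for a real
                  # evaluation gamma, G_inf and U_l must share consistent units.

def f_TTM(T, G_inf):
    """Electron-phonon energy-transfer function f(T) of the two-temperature model."""
    integral, _ = quad(lambda x: x**4 / np.expm1(x), 0.0, THETA_D / T)
    return 4.0 * G_inf * T**5 / THETA_D**4 * integral

def tau_ep(T_l, U_l, G_inf, gamma=GAMMA):
    """Electron-phonon relaxation time tau_1 = gamma (T_e^2 - T_l^2) / (2 H)."""
    T_e = np.sqrt(T_l**2 + 2.0 * U_l / gamma)        # electronic temperature
    H = f_TTM(T_e, G_inf) - f_TTM(T_l, G_inf)
    return gamma * (T_e**2 - T_l**2) / (2.0 * H)
```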
§.§ Density Functional Theory Calculations Density-functional-theory (DFT) calculations of the band structure were performed in the  wien2k[8,9] code using Perdew-Burke-Ernzerhof flavor of the exchange-correlation potential pbe96[10]. We used experimental structural parameters determined by x-ray diffraction measurements, as summarized in Table (<ref>). The module optic[11] was used for evaluating the optical conductivity. Spin-orbit coupling was included in the calculations of band structure and optical conductivity. Furthermore, ferromagnetic order was taken into account, where the spins on Fe-atoms are aligned along in-plane direction, which eventually occurs below the spin-reorientation temperature. The DFT-obtained magnetic moment per Fe-atom was 2.18 μ_B, which is very close to the experimental values Wang2016[1]. Self-consistent calculations were converged on the 24× 24× 24 k-mesh. Optical conductivity was calculated on the k-mesh with up to 100× 100× 100 points within the Brillouin zone. The phonon calculations were performed in using the same structural parameters and the built-in procedure with frozen atomic displacements of 0.015 Å. Magnetic moments were directed along the c axis to avoid symmetry lowering. The obtained Gamma-point phonon energies are given in Table (<ref>). Relevant to the current work, the four A_1g modes are also depicted in Fig. <ref>, where the Ph1 is the coherent phonon oscillations in this pump-probe study. The conductivity change under the influence of phonon modes was investigated with DFT by deforming the structure under the mentioned phonon mode. The procedure is described below. Firstly, we estimated the amplitude of the atomic displacement for the relevant Ph1 A_1g using our transient reflectivity experimental data. Calculating the exact values of coherent displacement amplitude in ultrafast structural dynamics is a great challenge. However, the order of magnitude of such atomic displacements may be determined from the reflectivity variation ΔR/R using the following equation, that is based on equations (11) and (12) from Ref. Stevens2002[12]: 1/R∂ R/∂ E_cQ_0^2 = Δ R/RIm(ε)/hΩ_0^2ρ cΦ/1-R, where Q_0 is the displacement amplitude, E_c is the incident laser pulse frequency, Ω_0 is the coherent phonon frequency, ρ is the mass density of the oscillating atoms (Sn in this case) in the material, c is the speed of light, Φ is the incident fluence on the surface, h is Planck's constant, and ε is the dielectric constant. The measured values of ΔR/R around 10^-3 result in Q_0 of about 5-40 pm. The fluence dependence for the displacement amplitude according to Eq. (<ref>) is given in Fig. <ref>. For the DFT calculations, we chose 10 pm as an average displacement. Then, this 10 pm displacement is introduced to the structure given in Table (<ref>), and the optical conductivity is re-calculated with the new structural parameters. In Fig. <ref>(a), the optical conductivity calculated with the non distorted structure is shown as a comparison to the experimental optical conductivity Biswas2020[2]. The good match with the experiment is obtained only after re-scaling the energy scale of the calculations by /1.4, which is commonly observed for strongly correlated systems. Here the intraband contributions were subtracted from the experimental optical conductivity for a direct comparison with the DFT results. In Fig. <ref>(b), the difference in the optical conductivity with respect to the nominal optical conductivity is given for all the A_1g modes. 
As one can immediately notice, in the 800 nm energy range of our pump-probe study an appreciable change in the conductivity occurs only for Ph1, whereas the changes associated with the other modes are negligibly small. This is one of the reasons why we observe only Ph1 in this energy range.
§ SUPPLEMENTARY REFERENCES
Wang2016[S1] Q. Wang, S. Sun, X. Zhang, F. Pang, and H. Lei, Anomalous Hall effect in a ferromagnetic Fe_3Sn_2 single crystal with a geometrically frustrated Fe bilayer kagome lattice, https://doi.org/10.1103/PhysRevB.94.075135 Phys. Rev. B 94, 075135 (2016).
Biswas2020[S2] A. Biswas, O. Iakutkina, Q. Wang, H. Lei, M. Dressel, and E. Uykur, Spin-Reorientation-Induced Band Gap in Fe_3Sn_2: Optical Signatures of Weyl Nodes, https://doi.org/10.1103/PhysRevLett.125.076403 Phys. Rev. Lett. 125, 076403 (2020).
Misochko2004[S3] O. Misochko, M. Hase, K. Ishioka, and M. Kitajima, Observation of an Amplitude Collapse and Revival of Chirped Coherent Phonons in Bismuth, https://doi.org/10.1103/PhysRevLett.92.197401 Phys. Rev. Lett. 92, 197401 (2004).
Zeiger1992[S4] H. Zeiger, J. Vidal, T. Cheng, E. Ippen, G. Dresselhaus, and M. Dresselhaus, Theory for displacive excitation of coherent phonons, https://doi.org/10.1103/PhysRevB.45.768 Phys. Rev. B 45, 768 (1992).
Ultrafast_metals2[S5] P. Allen, Theory of thermal relaxation of electrons in metals, https://doi.org/10.1103/PhysRevLett.59.1460 Phys. Rev. Lett. 59, 1460 (1987).
Ultrafast_metals[S6] R. Schoenlein, W. Lin, J. Fujimoto, and G. Eesley, Femtosecond studies of nonequilibrium electronic processes in metals, https://doi.org/10.1103/PhysRevLett.58.1680 Phys. Rev. Lett. 58, 1680 (1987).
Groeneveld1995[S7] H. Groeneveld, R. Sprik, and A. Lagendijk, Femtosecond spectroscopy of electron-electron and electron-phonon energy relaxation in Ag and Au, https://doi.org/10.1103/PhysRevB.51.11433 Phys. Rev. B 51, 11433 (1995).
wien2k[S8] P. Blaha, K. Schwarz, G. K. H. Madsen, D. Kvasnicka, J. Luitz, R. Laskowski, F. Tran, and L. D. Marks, WIEN2k, An Augmented Plane Wave + Local Orbitals Program for Calculating Crystal Properties (Karlheinz Schwarz, Techn. Universität Wien, Austria), 2018. ISBN 3-9501031-1-2.
Blaha2020[S9] P. Blaha, K. Schwarz, F. Tran, R. Laskowski, G. K. H. Madsen, and L. D. Marks, WIEN2k: An APW+lo program for calculating the properties of solids, https://doi.org/10.1063/1.5143061 J. Chem. Phys. 152, 074101 (2020).
pbe96[S10] J. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, https://doi.org/10.1103/PhysRevLett.77.3865 Phys. Rev. Lett. 77, 3865 (1996).
optic[S11] C. Ambrosch-Draxl and J. Sofo, Linear optical properties of solids within the full-potential linearized augmented planewave method, https://doi.org/10.1016/j.cpc.2006.03.005 Computer Physics Communications 175, 1 (2006).
Stevens2002[S12] T. Stevens, J. Kuhl, and R. Merlin, Coherent phonon generation and the two stimulated Raman tensors, https://doi.org/10.1103/PhysRevB.65.144304 Phys. Rev. B 65, 144304 (2002).
http://arxiv.org/abs/2307.10113v1
20230714181009
Multiscale studies of delayed afterdepolarizations II: Calcium-overload-induced ventricular arrhythmias
[ "Navneet Roshan", "Rahul Pandit" ]
q-bio.TO
[ "q-bio.TO" ]
AIP/123-QED ]Multiscale studies of delayed afterdepolarizations II: Calcium-overload-induced ventricular arrhythmias Centre for Condensed Matter Theory, Department of Physics, Indian Institute of Science, Bangalore, 560012, India Centre for Condensed Matter Theory, Department of Physics, Indian Institute of Science, Bangalore, 560012, India [email protected] Disturbances in calcium homeostasis in a cardiac myocyte can lead to calcium-overload conditions and abnormal calcium releases, which occur primarily in the following two phases of the action potential (AP): (a) triggered or late calcium release (LCR) during the plateau phase; (b) spontaneous calcium release (SCR) during the diastolic interval (DI). Experimental and numerical studies of LCRs and SCRs have suggested that these abnormal calcium releases can lead to triggered excitations and thence to life-threatening ventricular arrhythmias. We explore this suggestion in detail by building on our work in the previous accompanying Paper I, where we have studied abnormal calcium releases and delayed afterdepolarizations (DADs) in two state-of-the-art mathematical models for human ventricular myocytes. Here, we carry out a detailed in-silico study of one of these models, namely, the ten Tusscher-Panfilov TP06 <cit.> model. We increase the L-type Ca-channel current I_CaL, to trigger LCRs, and calcium leak through the ryanodine receptor (RyR), to trigger SCRs, in the myocyte. We then perform multiscale simulations of coupled TP06-model myocytes in tissue in one-, two-, and three-dimensional (1D, 2D, and 3D) domains, with clumps of DAD-capable myocytes, to demonstrate how these clumps precipitate premature ventricular complexes (PVCs) that lead, in turn, to fibrillatory excitations like spiral and scroll waves. We examine possible pharmacological implications of our study for the class of ventricular arrhythmias that result from Ca2+ overload. [ Rahul Pandit August 12, 2023 =================== § INTRODUCTION Mammalian hearts are made up of electrically excitable tissue in which individual muscle cells (or cardiomyocytes) are electrically coupled with their neighbors. Normal electrical stimulation starts from the sino-atrial node (SAN) and propagates, in the form of an electrical wave of activation, through this tissue, mediates the synchrony of the myocytes, and results in the normal rhythm of the heart. Cardiac arrhythmias, which arise from disturbances in this normal propagation, are among the leading causes of mortality <cit.>; and risk factors associated with arrhythmias have become worse after COVID-19 <cit.>. Ventricular arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), can lead to sudden cardiac death (SCD). These arrhythmias are associated with reentrant waves such as spiral or scroll waves, which have been studied extensively in mathematical models for cardiac tissue. VT or VF can arise in several ways [see, e.g., Refs. <cit.>], some of which originate at cellular and sub-cellular scales <cit.>. One important sub-cellular precursor that can lead to such arrhythmias is a disruption of the Ca2+ handling inside the myocyte. The concentration of Ca2+ inside a myocyte plays a critical role in cell functions; and it is coupled to the membrane potential V_m, so it can affect the myocyte action potential (AP). The proteins in cardiomyocytes help to maintain multiple compartments, with varying amounts  <cit.> of Ca2+ concentrations [Ca2+ homeostasis]. 
The disruption in Ca2+ homeostasis can occur because of upregulations, downregulations, or mutations of proteins involved in calcium handling <cit.>. An increase in the intracellular Na+ concentration <cit.>, the use of certain medications <cit.>, ischemia  <cit.>, and catecholamines <cit.> also disrupt Ca2+ homeostasis. Many of these conditions are associated with a Ca2+ overload in the cell or an abnormal release of Ca2+ from the sarcoplasmic reticulum (SR) to the cytosol or both [see Fig. 1 of the previous accompanying Paper I (henceforth Paper I)]. The abnormal release of Ca2+ occurs primarily in the following two phases or intervals of the AP: (a) triggered or late Ca2+ release (LCR) during the plateau phase <cit.>; (b) spontaneous Ca2+ release (SCR) during the diastolic interval (DI) <cit.>; some studies have shown that both LCRs and SCRs have been observed in ischemia <cit.>, in hypertrophied <cit.> or failing hearts <cit.>, and also during catecholaminergic polymorphic ventricular tachycardias (CPVTs) <cit.>. Here, abnormal Ca2+ releases affect V_m through the electrogenic sodium-calcium exchanger NCX (or NaCa) and thus disturb the phase of the AP, in which they occur; the strength of these releases determines the amplitude of the disturbances in V_m. The occurrence of LCRs, during the plateau phase of the AP, promotes the elongation of the action-potential duration (APD) via early afterdepolarizations (EADs). Similarly, SCRs are responsible for delayed afterdepolarizations (DADs) in the AP. Such EADs and DADs can lead to premature ventricular complexes (PVCs) <cit.>. We refer the reader to Refs. <cit.> for discussions of the different types of EADs and DADs. The SCR-driven DADs can lead to premature ventricular complexes (PVCs) <cit.>, if there is synchrony between DADs of neighboring myocytes and the source-sink relationship <cit.> in cardiac tissue, which occurs, e.g., in non-ischemic heart failures <cit.>, hypertrophied failing hearts <cit.>, the post-acidotic incidence of cardiac arrhythmias <cit.>, and CPVTs <cit.>. These PVCs are one of the causes of arrhythmias <cit.>. Numerical simulations have suggested that DAD-driven PVCs can promote reentry and spiral waves if (a) subthreshold DADs (defined in Paper I Fig. 5) interact with infarcted cardiac tissue (as shown, e.g., for porcine cardiac tissue in Ref. <cit.>) or (b) there is an interplay of DADs with mutated fast Na+ ion channels (as shown, e.g., for leporine cardiac tissue in Ref. <cit.>). Furthermore, DAD-driven PVCs are associated with CPVT, which is responsible for SCDs in young and healthy individuals <cit.> or sports-related SCDs <cit.>; in the latter two cases, hearts may not have infarcted tissue or mutated Na+ channels. Recent experiments and a few numerical studies suggest that both LCRs and SCRs can be present in a myocyte during Ca2+ overload <cit.>; these studies shed some light on the arrhythmogenic role of these LCRs, but a detailed mechanistic understanding has to be developed. It is important, therefore, to study DAD-driven arrhythmias by using state-of-the-art computational models and methods, which are becoming increasingly important in the study of all cardiac arrhythmias <cit.>. Our study uses the ten Tusscher-Panfilov TP06 <cit.> mathematical model to explore whether Ca2+-release events, such as LCRs and SCRs, can promote reentry in ventricular tissue via the formation of PVCs. 
To the best of our knowledge there has been no comprehensive study, at the tissue level, of ventricular arrhythmias induced by a group of myocytes that are capable of abnormal calcium release and which can lead to PVCs in tissue with no infarction-induced scars. We provide a theoretical framework for investigating ventricular arrhythmias that arise from Ca2+ overload [see, e.g., Refs. <cit.>] and which are associated with LCR- and SCR-induced SCDs. Only a few mathematical models for ventricular tissue can yield both LCRs and SCRs [see, e.g., Paper I and Ref. <cit.>]. We use one of these models, namely, the TP06 model, because it shows various types of DADs at the myocyte scale. Our multi-scale study, which spans myocyte, cable, tissue, and ventricular scales [see the schematic diagram in Fig. <ref>], yields several new insights into PVC-induced non-shockable arrhythmias <cit.> that are precipitated by Ca2+ overload and which cannot be eliminated by straightforward low-amplitude electrical-defibrillation protocols. Before we present the details of our study, we give a qualitative overview of our principal results. In Subsection <ref>, we increase I_CaL to provide the Ca2+-overload and we quantify its effects on V_m and the Ca2+ concentration in the sarcoplasmic store, for two Ca2+-overload protocols. In Subsection <ref>, we demonstrate the emergence of LCR-induced EADs and SCR-triggered DADs in the TP06 myocyte model. In Subsection <ref> we show that a selective reduction of the late part of I_CaL can eliminate LCRs and hence EADs; we distinguish two types of myocytes: those that are capable of producing DADs (henceforth, DAD myocytes) and those that cannot (henceforth, non-DAD myocytes). In the next Subsection <ref> we investigate the propagation of electrical waves of activation in a cable-type domain with non-DAD myocytes with a clump of DAD myocytes in the middle; we examine this propagation for the two Ca2+-overload protocols [Subsection <ref>] with the Ca2+-overload either localized to the DAD-clump or spread over the entire cable. We demonstrate, then, that (a) the DAD clump can lead to the emergence of DAD-driven premature ventricular complexes (PVCs) and (b) the interaction of successive propagating PVCs results in conduction blocks. Our simulations in two-dimensional (2D) [Subsection <ref>] and three-dimensional (3D) anatomically realistic human bi-ventricular [Subsection <ref>] domains, with DAD clumps, also display such PVCs that lead to reentry, i.e., the formation of spiral (2D) or scroll (3D) waves. We show explicitly that these PVCs cannot be removed by a standard low-amplitude electrical-defibrillation technique. We have organized the rest of this paper as follows. In Sec. <ref>, we describe the models we use and the numerical and theoretical methods that we employ. Section <ref> is devoted to a detailed discussion of our results. The concluding Section <ref> examines the possible clinical implications of our principal results. § MODEL AND METHODS In Paper I, we have studied two human-ventricular mathematical models. Here, we consider one of them, namely, the TP06 model. We stimulate the TP06-model myocytes by an electric current (square pulses of height -52 pA/pF and duration 1 ms with a 1 Hz pacing frequency). §.§ TP06 Model In the TP06 model <cit.> for cardiac tissue, we use the following partial differential equation (PDE) for the transmembrane potential V_m: ∂V_m/∂t = -(I_stim + I_TP06)/C_m + ∇·(𝐃 ∇V_m); here, t is the time; C_m is the capacitance per unit area of the myocyte membrane; I_stim is the externally applied current stimulus to the myocyte; I_TP06 is the sum of all the transmembrane ionic currents [Eq. <ref>]; and 𝐃 is the diffusion tensor, which is taken to be a scalar for cable (1D) and two-dimensional (2D) simulation domains and a tensor for the 3D bi-ventricular domain; in isolated-myocyte studies, there is no diffusion term. The single-myocyte TP06 model contains the following 12 currents that contribute to the dynamics of V_m: I_TP06 = I_Na + I_CaL + I_K1 + I_Kr + I_Ks + I_to + I_pK + I_bCa + I_NaCa + I_NaK + I_bNa + I_pCa. These currents are defined in Table <ref>; and the full set of ODEs for this model is given in Ref. <cit.>. We concentrate on DADs, which are affected significantly by I_CaL, so we give the equation for this current below: I_CaL = G_CaL · d · f · f_2 · f_Cass · F(Ca_SS, V_m, Ca_o), where d is a voltage-dependent activation gate, f and f_2 are, respectively, voltage-dependent, slow- and fast-inactivation gates, and f_Cass is the cytosolic-calcium-dependent inactivation gate; F is a function of the subspace Ca2+ concentration Ca_SS, V_m, and the extracellular Ca2+ concentration Ca_o. f_2 obeys the following equations: df_2/dt = (f_2,∞ - f_2)/τ_f_2; f_2,∞ = 0.67/(1 + exp((V_m + 35)/7)) + f_2,sat; τ_f_2 = 562·exp(-(V_m + 27)^2/240) + 31/(1 + exp((25 - V_m)/10)) + 80/(1 + exp((V_m + 30)/10)); here, f_2,sat determines the late phase of I_CaL and τ_f_2 is a time constant; the control value of f_2,sat = 0.33 in the TP06 model. In Subsection <ref>, we tune the value of f_2,sat to suppress LCRs. The following currents (see Paper I) also play an important role in these LCRs: I_up = V_maxup/(1 + (K_up/Ca_i)^2); I_leak = V_leak·(Ca_SR - Ca_i); here, I_up, K_up, and V_maxup are, respectively, the sarco-endoplasmic-reticulum Ca2+ ATPase (SERCA) uptake rate, a constant, and the maximal SERCA uptake rate; Ca_i is the cytosolic calcium concentration; I_leak is the leak rate; V_leak dictates the strength of the leakage; and Ca_SR is the sarcoplasmic-reticulum (SR) calcium concentration. In our 2D and 3D studies with the TP06 model, we set I_leak = 0. The opening of the ryanodine receptor (RyR) and calcium release through it is modeled by the following equation: I_rel = (V_rel·O + V_RyRL)·(Ca_SR - Ca_SS), where I_rel is the release rate; V_rel dictates the strength of the release; O, the probability that the RyR is open, is a function of Ca_SR, Ca_SS (the subspace Ca2+ concentration), and other parameters in the TP06 model (see the Appendix of Paper I); V_RyRL controls the RyR leak (we use V_RyRL = 0, if there is no RyR leak, and V_RyRL = 0.00018 ms^-1 to introduce a representative RyR leak); we use this V_RyRL to model the calcium leak from the SR to the cytosol which in turn leads to DADs <cit.>. We employ the following two ways of increasing the calcium overload for such myocytes: (a) increasing G_CaL; and (b) increasing G_CaL along with G_Kr to maintain the APD. We have discussed (b) in Paper I, which we also follow to scale the maximal conductances or fluxes as in Ref. <cit.>; e.g., to scale G_CaL, we define G_CaL ≡ S_GCaL × G_CaL0, where G_CaL0 is the control value of G_CaL and S_GCaL is the scale factor for G_CaL. §.§ Numerical Methods The numerical methods we use are the same as those in Paper I. In short, we use the forward-Euler method for time marching in Eq. <ref> and the Rush-Larsen scheme for integrating the ODEs for the gating variables in the ionic currents [see, e.g., Eq. <ref>].
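To make the equations above concrete, and to illustrate the Rush-Larsen update just mentioned, a minimal Python sketch of the f_2 gate and of the SERCA/RyR fluxes is given below. The function and variable names are ours, the remaining TP06 gates and currents are omitted, and the parameter values follow the text (f_2,sat = 0.33 in control, V_RyRL = 0.00018 ms^-1 for the representative RyR leak).

```python
import numpy as np

def f2_inf(Vm, f2_sat=0.33):
    """Steady state of the f2 gate: 0.67/(1 + exp((Vm + 35)/7)) + f2_sat."""
    return 0.67 / (1.0 + np.exp((Vm + 35.0) / 7.0)) + f2_sat

def tau_f2(Vm):
    """Time constant (ms) of the f2 gate, as given above."""
    return (562.0 * np.exp(-(Vm + 27.0) ** 2 / 240.0)
            + 31.0 / (1.0 + np.exp((25.0 - Vm) / 10.0))
            + 80.0 / (1.0 + np.exp((Vm + 30.0) / 10.0)))

def rush_larsen_f2(f2, Vm, dt, f2_sat=0.33):
    """One Rush-Larsen step: exact update of the linear gate relaxation
    df2/dt = (f2_inf - f2)/tau_f2 over dt, with Vm held fixed."""
    finf, tau = f2_inf(Vm, f2_sat), tau_f2(Vm)
    return finf + (f2 - finf) * np.exp(-dt / tau)

def I_up(Ca_i, Vmaxup, K_up):
    """SERCA uptake rate: Vmaxup/(1 + (K_up/Ca_i)^2)."""
    return Vmaxup / (1.0 + (K_up / Ca_i) ** 2)

def I_leak(Ca_SR, Ca_i, V_leak):
    """SR-to-cytosol leak rate."""
    return V_leak * (Ca_SR - Ca_i)

def I_rel(O, Ca_SR, Ca_SS, V_rel, V_RyRL=0.0):
    """RyR release rate; V_RyRL = 0.00018 (ms^-1) adds the representative RyR leak."""
    return (V_rel * O + V_RyRL) * (Ca_SR - Ca_SS)
```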
For spatial discretization, we use three-, five-, and seven-point stencils for the Laplacian in one (1D), two (2D), and three dimensions (3D). We apply no flux boundary conditions at the boundaries of these simulation domains. The time step δ t = 0.02 ms; the spatial resolution is δ x = 0.025 cm;and D = 0.00154 . We have checked that, with these parameters, (a) we obtain a conduction velocity ≃ 68 , which is in the bio-physically relevant range, and (b) the von-Neumann stability criterion is satisfied. In our anatomically realistic (AR) simulation, we use the human bi-ventricular geometry and fiber-orientation data [obtained from diffusion tensor magnetic resonance imaging (DTMRI) as in Ref. <cit.>]. The diffusion constant along the fiber is D_∥=0.00154 and in the transverse direction D_⊥= D_∥/4 . In tensorial notation 𝐃 = D_∥δ_ij + (D_∥-D_⊥)α_iα_j, where δ_ij is Kronecker delta, and α_i, with i = 1, 2, 3, are the direction cosines that represent the local orientation of a myocyte. We employ the phase-field method [see, e.g., Refs. <cit.>], which eliminates the need for applying the von-Neumann boundary conditions on irregular boundaries. § RESULTS We present our results in the following Subsections: In Subsection <ref> we examine the dependence of V_m and Ca_SR on the Ca2+-overload protocols that we use. In Subsection <ref> we demonstrate the emergence of Ca2+ sparks that are responsible for EADs and DADs. In Subsection <ref> we show that a reduction in the late part of the L-type Calcium current (LCC) eliminates EADs. In Subsection <ref> we present the results of our cable simulations in which we introduce a clump of myocytes that yield DADs. Subsection <ref> contains illustrative results for the evolution of DAD-induced PVCs and their development into spiral waves; we also examine the dependence of these PVCs on the shape of the DAD clump; and we show that these PVCs are non-shockable. In Subsection <ref> we generalize our 2D study to one in a 3D anatomically realistic human-bi-ventricular domain. §.§ The dependence of V_m and Ca_SR on I_CaL In Figs. <ref> (a) and (b) we present plots that compare the two Ca2+-overload protocols, (a) and (b), that we use. In protocol (a), we scale the maximal conductance G_CaL of the L-type calcium channel (LCC) and the maximal conductance G_Kr of the rapid delayed rectifier potassium channel so that the APD is unchanged. By contrast, in protocol (b), we scale G_CaL to obtain calcium overload in the myocyte; in this case, the increase in the inward current I_CaL enhances the APD slightly. In Figs. <ref> (a) and (b) the subplots (i), (ii), and (iii) show, respectively, how V_m, I_CaL, and Ca_SR change with time t; we have recorded the data for these plots after the first pacing. In summary, in protocol (a), we scale the G_CaL and G_Kr together to increase the Ca_SR, without changing the APD; by contrast, in protocol (b), the APD and Ca_SR both increase concomitantly as we increase G_CaL. By comparing subplots (i) and (ii) we see that an increase in APD elongates I_CaL too. From subplot (iii), we conclude that the SR calcium content Ca_SR increases with S_GCaL and it is higher in protocol (b) than in protocol (a). 
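Referring back to the Numerical Methods above, one forward-Euler step of the monodomain equation on a 1D cable, with the three-point Laplacian, no-flux boundaries, and the stated values δt = 0.02 ms, δx = 0.025 cm, and scalar D = 0.00154, might look as follows. This is a structural sketch only: the ionic-current routine is left as a stub, and treating C_m as unity is our simplification, not a value from the paper.

```python
import numpy as np

DT, DX, DIFF = 0.02, 0.025, 0.00154   # ms, cm, and the scalar D quoted above
CM = 1.0                               # membrane capacitance; placeholder value

def laplacian_1d(V, dx=DX):
    """Three-point Laplacian with no-flux (zero-gradient) boundaries via mirrored ghost nodes."""
    Vp = np.empty(V.size + 2)
    Vp[1:-1] = V
    Vp[0], Vp[-1] = V[1], V[-2]
    return (Vp[2:] - 2.0 * Vp[1:-1] + Vp[:-2]) / dx ** 2

def euler_step(V, state, I_stim, ionic_current):
    """Explicit update of V_m; `ionic_current` should return I_TP06 per cell (stub)."""
    I_ion = ionic_current(V, state)                 # sum of the 12 TP06 currents
    return V + DT * (-(I_stim + I_ion) / CM + DIFF * laplacian_1d(V))
```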
§.§ Emergence of Ca2+ sparks and afterdepolarizations To explore the formation of Ca2+ sparks, we stimulate the myocyte by using the calcium-overload protocols (a) and (b) [see Subsection <ref>], with the following illustrative parameter values for these two protocols: (a) S_Vmaxup=4.5, S_KNaCa=2, S_GCaL = 2, S_GKr = 2.5; and (b) S_Vmaxup=4.5, S_KNaCa=2, S_GCaL = 2, S_GKr = 1. For each protocol, we present two sets of results, namely, with an RyR leak (blue curves) and without an RyR leak (red curves) [i.e., V_RyRL = 0.00018 ms^-1 and V_RyRL = 0, respectively, in Eq. (<ref>)]. In Fig. <ref> we show plots versus time t of (i) V_m, (ii) I_stim, (iii) I_NaCa, and (iv) I_rel, after 15 pacing stimulations, for protocols (a) and (b) in panels (a) and (b), respectively. From subplots (i), we see that, for both protocols, the inclusion of an RyR leak results in a suprathreshold DAD, whose upstroke is not because of the current stimulus I_stim (subplots (ii)) but it is associated with an inward spike in I_NaCa (subplots (iii)) and a sharp upstroke (SCR) in I_rel (subplots (iv)). [The relation of SCR, I_NaCa, and V_m has been explained in Paper I. Experiments suggest that “The SR Ca2+ leak through the RyR channel is believed to be the primary mechanisms for SCRs <cit.>. The SCRs increase V_m by activating the Na+/Ca2+ exchangers (NCX or NaCa) <cit.> in the forward mode, in which the electrogenic NCX extrudes one Ca2+ out of a myocyte and exchanges it with three Na+ ions.”] For both protocols (a) and (b), we find oscillations during the plateau phase of the AP (subplot (i)), which are associated with oscillations in I_NaCa (subplot (ii)) and sharp structures in I_rel (subplot (iii)). These structures are arising during Ca2+ overload and are due to late Ca2+ releases (LCRs). With the RyR leak, we observe both LCRs and SCRs. The latter lead to DADs. Note that the large inward spike in I_NaCa is associated with the SCR through cytosolic Ca_i [cf., Eq. (5), in the Appendix of the previous paper, which shows the thermodynamic forces on I_NaCa that arise, inter alia, from a competition between inward and outward Na+ and Ca2+ and also depend on V_m]; during the resting phase of the membrane potential (V_m=-86 mV), the thermodynamic force on I_NaCa is maximal and its direction is inward <cit.>, which has the important consequence of unloading Ca2+ in the diastolic phase of the AP. §.§ Role of the late part of the I_CaL current in triggering LCRs In the previous Subsection <ref>, we have shown that, to observe DADs, we need a Ca2+ overload and the RyR leak from SR to SS. However, in the model, we have found that LCRs are present even without this RyR leak. Therefore, we now investigate the causes of these LCRs. In the TP06 model, Ca-induced-Ca-release (CICR) is dependent on Ca_SR and the Ca_SS. The former modulates O, the opening probability of the RyR [Eqs. (<ref>) and (10) in the Appendix of the accompanying paper], whereas the latter acts as a trigger [in Eq. (11) in the Appendix of Paper I] <cit.>. We hypothesize that the late part of the I_CaL current increases Ca_SS and is, therefore, responsible for triggering these LCRs; we check this hypothesis as follows: For the illustrative parameter values S_GCaL = 2, S_GKr = 2.5, S_Vmaxup=4.5, and S_KNaCa=2, we stimulate the myocyte for 15 pacings, and register the state variables, which we with use as initial conditions. 
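For bookkeeping, the four single-myocyte runs compared above, i.e., protocols (a) and (b) each with and without the RyR leak, can be collected into a small parameter table; the layout below is ours, but the numerical values are those quoted in the text (15 pacings at 1 Hz with -52 pA/pF, 1 ms square pulses).

```python
BASE = dict(S_Vmaxup=4.5, S_KNaCa=2.0, S_GCaL=2.0)

RUNS = {
    ("a", "no RyR leak"): dict(BASE, S_GKr=2.5, V_RyRL=0.0),
    ("a", "RyR leak"):    dict(BASE, S_GKr=2.5, V_RyRL=0.00018),  # ms^-1
    ("b", "no RyR leak"): dict(BASE, S_GKr=1.0, V_RyRL=0.0),
    ("b", "RyR leak"):    dict(BASE, S_GKr=1.0, V_RyRL=0.00018),
}

PACING = dict(n_beats=15, frequency_Hz=1.0, amplitude_pA_per_pF=-52.0, duration_ms=1.0)
```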
We then stimulate the myocyte, for both protocols (a) and (b), and present the two sets of results, namely, with f_2,sat = 0.33, the control value, (blue curves) and f_2,sat = 0.11 (red curves) in Eq. (<ref>), with V_RyRL = 0 ms^-1. In Fig. <ref> we show plots versus the time t of (i) V_m, (ii) I_CaL, (iii) Ca_SR, and (iv) I_rel, for protocols (a) and (b) in panels (a) and (b), respectively. From subplots (i) we see that, for both protocols, the reduction in f_2,sat results in a reduction of the magnitude of I_CaL (subplot (ii)), in its plateau phase (but not in the initial transient), suppression of oscillations in Ca_SR (subplot (iii)), and the elimination of the blue spikes in I_rel (LCRs in subplot (iv)) and a concomitant decrease in the APD. As we have discussed Paper I, the CICR process in the TP06 model depends only on Ca_SR and Ca_SS; and Ca_SR is the same for both the cases (a) and (b); therefore, the reduction in I_CaL is responsible for eliminating LCRs, as I_CaL directly influences Ca_SS. It is instructive to compare the EADs discussed in Refs. <cit.> with the LCR-driven EADs (henceforth LCRs) we obtain in Fig. <ref>. The former arise because of the reactivation of I_CaL, whereas the latter are associated with the sharp peaks in I_rel, which are triggered by the plateau-part of I_CaL [see Fig. <ref> (a) (iv)]. §.§ Cable simulations We now carry out simulations in a 1D cable domain, by using the numerical methods of Subsection <ref>; this cable has 256 myocytes, of which 20 myocytes in the center of the domain are DAD-capable myocytes, which have an RyR leak (V_RyRL = 0.00018 ms^-1) and a calcium overload as in Subsection <ref>. The schematic diagram in Fig. <ref>(a) shows a cable with non-DAD (blue) and the DAD-capable (green) myocytes. We stimulate the first myocyte in the cable and apply the no-flux boundary condition at both ends of the cable. To model the various calcium-overload situations that may arise in pathophysiological conditions, we have considered the four sets of parameters mentioned in Table <ref>: * Case(i): Ca2+ overload in the entire cable (both normal and DAD-capable myocytes); protocol (i) of Subsection <ref>. * Case(ii): Ca2+ overload in the entire cable (both normal and DAD-capable myocytes); protocol (ii) of Subsection <ref>. * Case(iii): Ca2+ overload localized at DAD-capable myocytes; protocol (i) of Subsection <ref>. * Case(iv): Ca2+ overload localized at DAD-capable myocytes; protocol (ii) of Subsection <ref>. We present pseudocolor space-time plots of V_m for Cases (i)-(iv), in Fig. <ref> (b), for the initial two pacings, and Fig. <ref> (c), after 120 pacings, with a pacing frequency of 1 Hz. Initially, the stimulation, provided at one end of the cable, reaches the other end of the cable as in Figs. <ref>(b), (i)-(iv); however, after a few pacings, the Ca_SR load builds up in the myocytes and leads to suprathreshold DADs, which, in turn, precipitate premature ventricular complexes (PVCs) in the cable; these PVCs overtake the periodic stimulation of the cable. The PVCs, which originate from the DAD clump, then stimulate the entire cable; however, after 120s, signatures of conduction block appear in the cable for Case (ii); by contrast, in Cases (i), (iii), and (iv), we observe PVCs emerging from the center of the cable, from where they reach, uninterrupted, both the ends of the cable. In Figs. 
<ref>(a) and (b), we present plots of the AP from the myocytes at sites 1, 16, 32, …, 256 along the cable to demonstrate the emergence of LCR-induced EADs and SCR-induced DADs (highlighted at representative points by arrows). In Figs. <ref> (a) and (b), we present the temporal evolution of the APD and Ca_SR, for non-DAD and DAD-capable myocytes from representative points [stars in Fig. <ref>(a)] in the simulation domain; for Cases (i)-(iv) [see Table <ref>] and for the entire 120s duration of our cable simulation. The emergence of suprathreshold DADs, which have a higher frequency of incidence than the stimulation frequency, leads to the drop in the APD that is seen clearly in Figs. <ref>(a) (i)-(iv); this drop is significantly larger in Figs. <ref>(a) (iii)-(iv) than in Figs. <ref>(a) (i)-(ii). After 100s, the APDs for subsequent excitations are spread over a wide range [see Figs. <ref>(a) (i) and (ii)], but over a limited range in Figs. <ref>(a) (iii) and (iv). Furthermore, Figs. <ref>(a) (i) and (ii) show that after 100s the beat-to-beat APDs from DAD-capable and non-DAD myocytes can differ significantly, i.e., there is dispersion in the APD across these myocytes. Note that this difference in APDs, which is not present initially but develops as time progresses, can provide a substrate for reentry in 2D and 3D tissue (see below) <cit.>. For Cases (i) and (ii), i.e., for global Ca2+ overloads, the plots of Ca_SR versus time in Figs. <ref>(b) (i) and (ii), for DAD-capable (blue points) and non-DAD (red +) myocytes, overlap substantially. However, for localized Ca2+ overload, the plots of Ca_SR for the DAD-capable and non-DAD myocytes do not overlap [see Figs. <ref>(iii) and (iv)]. In summary, then, we have demonstrated how the calcium-overload-induced LCRs and SCRs, which appear at the single-cell level in DAD-capable myocytes [see Fig. <ref> in Subsection <ref>], can lead to dispersion in the APD and conduction blocks in the cable. Once the amplitude of these SCRs, from the myocytes in the DAD clump, is large enough to overcome the source-sink mismatch and go beyond the threshold for triggered excitations, DAD-driven PVCs emerge from this clump. The incidence frequency of these PVCs [which is much higher than the pacing frequency (1 Hz)] depends on the Ca2+ uptake rate in the SR (via the SERCA pump) and Ca_SR. The PVC-driven high-frequency stimulation leads to more Ca2+ build-up in the SR and other myocyte compartments; this increased Ca_SR leads to an enhancement in the PVC frequency and results in a positive feedback loop between the PVC frequency and Ca2+. The PVCs from the DAD clump stimulate the myocytes outside the clump, thereby increasing the extent of the Ca2+ overload; this promotes LCRs. After about 20s, a significant difference emerges between the APDs from DAD and non-DAD myocytes and leads, in turn, to conduction block. We will show in the following Subsections that such block leads to spiral- and scroll-wave formation in 2D and 3D tissue, respectively. §.§ From PVCs to spiral waves We now generalize the study of Subsection <ref>, of the effects of LCRs and SCRs, to a 2D simulation domain [and the TP06 model for ventricular tissue]. We concentrate on Case(ii) [Table <ref>], because it is most likely to cause conduction blocks in cable-type domains.
We perform simulations in a rectangular domain with 512×220 grid points [≃ 12.5 × 5.5 cm^2]; we embed a clump of DAD-capable myocytes (henceforth DAD clump) with Case-(ii) parameters: for purposes of illustration, we employ either a circular clump (diameter 4 cm or 160 grid points) or a square clump (side 4 cm or 160 grid points). We stimulate the first two columns of myocytes at the left boundary of this domain, with a pacing frequency of 1 Hz, and we apply no-flux boundary conditions on all the boundaries of the rectangle. We find that, after 3 stimulations, PVCs emerge from such DAD clumps. With the passage of time, a wavefront, arising from a PVC that originates from the clump, meets the waveback of the previous PVC and results in a conduction block, which leads, in turn, to the development of rotating spiral waves (reentry) and, eventually, broken spiral waves (fibrillation): In Fig. <ref> we present pseudocolor plots of V_m, at representative times t, to illustrate how plane-wave pacing [Figs. <ref> (i) and (ii)] stimulates the DAD-clump and leads to PVCs [Figs. <ref> (iii) and (iv)], which propagate outwards from this clump. The collisions of the wavefronts and wavebacks of two successive PVCs lead first to conduction blocks [Figs. <ref> (v) and (vi)] and finally to the formation of spiral waves [two spiral-wave tips appear in Fig. <ref> (ix)]. For the complete spatiotemporal evolution of V_m, see Video <ref> in the Appendix. In Fig. <ref>, we present the time series of the APDs extracted from plots of V_m(t), from two representative points, one outside the DAD clump (i) and the other inside it (ii) [see Fig. <ref>(a)]; here, Fig. <ref>(b)(i) is the plot of APDs, obtained from the non-DAD region, and Fig. <ref>(b)(ii) is the APD from the DAD region. As time progresses, the APD of the myocytes in the DAD region remains stable; however, the APD from the non-DAD myocytes varies in time. The transition to reentry, after the conduction block, is reflected in the sudden jump in the APD of the DAD myocytes and the large variations in the APD of the non-DAD myocytes. In Fig. <ref> we compare the effects of circular [with diameter = 160 grid points] and square [with side = 160 grid points] DAD clumps, with all other model parameters held fixed. If we stimulate the first two columns of myocytes at the left boundary of this domain, with a pacing frequency of 1 Hz, PVCs originate from both these clumps. The PVC from the square clump is flatter in parts than the one from the circular clump, as we can see by comparing Figs. <ref>(a)(iv) and (b)(iv). As time progresses, conduction block occurs [see, e.g., Figs. <ref> (v) and (vi) for the circular clump]; and then two spiral-wave cores form, more clearly for the circular clump than for the square one. [For the complete spatiotemporal evolution of V_m for these cases see Video <ref> and Video <ref> in the Appendix.] We conjecture that this difference arises because of the flatness of the PVCs from the square clump. We now use the low-amplitude defibrillation scheme suggested in Refs. <cit.>, in which we divide the simulation domain into square subdomains of size 30×30 grid points; at the boundaries of each subdomain, we apply a current stimulus (to each one of the 3 grid points that straddle every point on these boundaries) with amplitude -50 pA/pF for 10 ms.
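For concreteness, the geometric bookkeeping just described (embedding a circular DAD clump in the sheet and building the mesh of subdomain boundaries on which the stimulus is applied) can be set up as in the following minimal NumPy sketch; this is an illustration only, not our production code, and the grid spacing is inferred from the quoted domain size.

import numpy as np

NX, NY = 512, 220                 # grid points (roughly 12.5 x 5.5 cm^2 at dx ~ 0.025 cm)

# --- DAD clump: circular region of DAD-capable myocytes in the domain centre ---
x, y = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
clump_radius = 80                 # 160 grid points in diameter
dad_clump = (x - NX // 2) ** 2 + (y - NY // 2) ** 2 <= clump_radius ** 2

# Per-myocyte parameters: an RyR leak only inside the clump (illustrative values)
V_RyRL = np.where(dad_clump, 0.00018, 0.0)   # ms^-1

# --- Defibrillation mesh: boundaries of 30 x 30 subdomains, 3 grid points wide ---
stim_mesh = np.zeros((NX, NY), dtype=bool)
for i in range(0, NX + 1, 30):
    stim_mesh[max(i - 1, 0):i + 2, :] = True   # 3 grid points straddling each line
for j in range(0, NY + 1, 30):
    stim_mesh[:, max(j - 1, 0):j + 2] = True

def defibrillation_current(t_ms, t_on_ms):
    """External current map (pA/pF) at time t_ms for a -50 pA/pF pulse
    applied on the mesh for 10 ms starting at t_on_ms."""
    if t_on_ms <= t_ms < t_on_ms + 10.0:
        return np.where(stim_mesh, -50.0, 0.0)
    return np.zeros((NX, NY))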
Such a current stimulus, applied on the spatially extended mesh formed by the boundaries of the subdomains, yields the mathematical analog of defibrillation, i.e., spiral waves, generated by using the S1-S2 cross-field protocol (see, e.g., Ref. <cit.>) in the original TP06 model <cit.>, are eliminated, as we show by representative pseudocolor plots of V_m in Fig. <ref>(a) and Video <ref> in the Appendix. We find, however, that this defibrillation fails to eliminate PVCs that originate from the DAD clump that we have studied above; we illustrate this by representative pseudocolor plots of V_m in Fig. <ref>(b) and Video <ref> in the Appendix. Our results are reminiscent of those reported in Refs. <cit.> for models with EAD-capable myocytes in the whole simulation domain; these models display, inter alia, phase waves that move through the types of meshes we have used above. Therefore, one way of understanding the failure of our defibrillation scheme in Fig. <ref>(b) is to realise that phase waves form inside the DAD clump, pass through it, unimpeded by the current stimulus on the mesh, and continue to yield PVCs. Defibrillation failure in such arrhythmias occurs because, at the wavefront, the current I_NaCa leads I_CaL and I_Na, as we show explicitly via the pseudocolor plots of V_m and these currents in Figs. <ref> (a)-(d) and Video <ref> in the Appendix. Such PVC-induced arrhythmias cannot be eliminated by low-amplitude current stimuli in our model [see Fig. <ref> (b)]. By contrast, in conventional arrhythmias, with self-sustaining spirals, I_Na is the lead current, and such arrhythmias can be eliminated by the application of small current stimuli for a brief duration [as we show in Fig. <ref> (a)]. §.§ Human bi-ventricular simulation Human ventricular tissue is anisotropic because of the orientation of muscle fibers; furthermore, the anatomically realistic human bi-ventricular domain is geometrically complex. We use the human bi-ventricular geometry [obtained from diffusion tensor magnetic resonance imaging (DTMRI)], which we enclose in a cubical box with 512×512×512 grid points. The bi-ventricular domain inside this cube is conveniently described by using the phase-field variable ϕ, with ϕ=1 in this domain and ϕ=0 outside [see, e.g., Refs. <cit.>]; ϕ changes continuously from 1 to 0, typically over 4-5 grid points. We take the DAD clump to be the region of overlap of this bi-ventricular geometry with a sphere; this clump is the green region (with 775,596 grid points) in the pseudocolor plot of Fig. <ref>, in which the region with non-DAD myocytes is shaded blue. To generate the Ca2+-overload for the DAD-capable and non-DAD myocytes, we use Case-(ii) parameters [Table <ref>]. We pace 40×512×512 myocytes at the apex of the bi-ventricular domain [Fig. <ref>]. The pseudocolor plots of the membrane potential V_m, in Figs. <ref> (a)-(i), show how the initial, pacing-induced plane wave [Fig. <ref> (a)] interacts with the DAD clump to yield PVCs, after the initial five pacings [Fig. <ref>(d)]; these PVCs stimulate the entire bi-ventricular domain, with a frequency higher than the 1 Hz pacing frequency provided at the apex. These PVCs evolve into a rotating scroll wave that eventually becomes a broken scroll wave [see Figs. <ref>(g)-(i) and Video <ref> in the Appendix and compare these with our 2D-tissue results in Subsection <ref>].
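For readers who wish to reproduce the geometric setup, the phase-field bookkeeping used above can be sketched as follows. This is an illustration only: the smoothing width, clump centre, and radius are placeholders, and the full anisotropic TP06 solver is not shown; the diffusion term is written in the standard form div(ϕ D grad V)/ϕ, which enforces no-flux conditions on the irregular tissue boundary.

import numpy as np
from scipy.ndimage import gaussian_filter

def make_phase_field(inside_mask, smoothing_sigma=1.5):
    """Smooth a binary geometry mask into a phase field that falls from 1 to 0
    over roughly 4-5 grid points."""
    return gaussian_filter(inside_mask.astype(float), sigma=smoothing_sigma)

def _faces(a, axis):
    """Average of neighbouring grid values along `axis` (values on cell faces)."""
    lo = [slice(None)] * a.ndim; lo[axis] = slice(None, -1)
    hi = [slice(None)] * a.ndim; hi[axis] = slice(1, None)
    return 0.5 * (a[tuple(lo)] + a[tuple(hi)])

def phase_field_diffusion(V, phi, D, dx, eps=1e-6):
    """Second-order evaluation of div(phi*D*grad V)/phi on a regular grid;
    outside the tissue (phi ~ 0) the result is irrelevant and simply clipped."""
    out = np.zeros_like(V)
    for axis in range(V.ndim):
        flux = _faces(phi, axis) * np.diff(V, axis=axis)      # phi * dV on cell faces
        interior = [slice(None)] * V.ndim
        interior[axis] = slice(1, -1)
        out[tuple(interior)] += np.diff(flux, axis=axis)
    return D * out / (dx * dx * np.maximum(phi, eps))

def spherical_clump(phi, centre, radius):
    """DAD clump as the overlap of the geometry with a sphere (centre and radius
    here are placeholders, not the values used in the paper)."""
    zz, yy, xx = np.indices(phi.shape)
    cz, cy, cx = centre
    sphere = (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2 <= radius ** 2
    return (phi > 0.5) & sphere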
§ DISCUSSION AND CONCLUSIONS We have presented a detailed multiscale study – from a single myocyte to an anatomically realistic bi-ventricular domain – of LCR-induced EADs, SCR-triggered DADs, and DAD-clump-promoted PVCs in the TP06 mathematical model for human ventricular tissue. We first show [Subsection <ref>] how an increase of I_CaL provides the Ca2+ overload and the way in which this affects V_m and the Ca2+ concentration in the sarcoplasmic store, for two Ca2+-overload protocols. We demonstrate [Subsection <ref>] the emergence of LCR-induced EADs and SCR-triggered DADs and their dependence on the RyR. Next, we explore [Subsection <ref>] how a selective reduction of the late part of I_CaL can eliminate LCRs and hence EADs; we distinguish between DAD myocytes and non-DAD myocytes. This leads naturally to our investigation of the propagation of electrical waves of activation in a cable-type domain [Subsection <ref>] with non-DAD myocytes and a clump of DAD myocytes in the middle. We examine this propagation for the two Ca2+-overload protocols [Subsection <ref>], with the Ca2+ overload either localized to the DAD clump or spread over the entire cable. In the latter case, we demonstrate the development of DAD-clump-driven PVCs, which lead, in turn, to conduction blocks. Our simulations in two-dimensional (2D) [Subsection <ref>] and three-dimensional (3D) anatomically realistic human bi-ventricular [Subsection <ref>] domains, with DAD clumps, also display such PVCs that lead to reentry and the formation of spiral (2D) or scroll (3D) waves. For a recent overview of all types of PVCs and their clinical implications and management, we refer the reader to Ref. <cit.>; this paper notes that PVCs occur often and they " …are observed in the majority of individuals monitored for more than a few hours …". We have studied those PVCs that arise from DAD clumps; and we have examined the conditions under which these PVCs evolve into life-threatening ventricular fibrillation. Earlier studies have also investigated other LCR- and SCR-induced DADs and EADs. For example, the authors of Ref. <cit.> have used isoproterenol-treated myocytes to generate such afterdepolarizations; they have then discussed the possibility of the dispersion of APDs, among EAD myocytes, as a substrate for reentry. Myocytes from different transmural layers of ventricles may have different APD elongation, in response to β-adrenergic stimulation, which can be another possible mechanism for reentry and fibrillation <cit.>. Reference <cit.> has suggested, based on experiments <cit.> and a single-myocyte model <cit.>, that triggered waves associated with DADs and EADs might lead to ectopic foci and thence to arrhythmias. We have explored this last suggestion via detailed multiscale simulations, by using the TP06 model for human ventricular tissue. We note the following points: (1) Although our study is based on a ventricular-myocyte model, it can be generalized mutatis mutandis to study LCRs and SCRs in atrial tissue and to elucidate their roles in atrial arrhythmias [see, e.g., Ref. <cit.>]. (2) We have used Ca2+-overload to trigger SCRs and LCRs; however, LCRs and SCRs can occur without Ca2+-overload, e.g., if there is heart failure. Irrespective of the way in which LCRs and SCRs are generated, they should lead to PVCs and the formation of spiral (2D) or scroll (3D) waves as we have discussed above.
We hope that our detailed study of DAD-clump-promoted PVCs and fibrillation will lead to experimental verifications of our results, e.g., in in vitro experiments with engineered human-heart tissue obtained from pluripotent stem cells [see, e.g., Ref. <cit.>]. §.§ Possible clinical implications of our study Disturbed Ca2+ homeostasis is intimately associated with cardiac arrhythmias as discussed, e.g., in Refs. <cit.>. The disruption in Ca2+ homeostasis has been observed in a variety of conditions, e.g., heart failure (HF) <cit.> and catecholaminergic-polymorphic ventricular tachycardia (CPVT) <cit.>. Reference <cit.> has suggested, based on experiments <cit.> and a single-myocyte model <cit.>, that triggered waves associated with DADs and EADs might lead to ectopic foci and thence to arrhythmias, a possibility that we have examined in detail for the TP06 model. In particular, we have considered two Ca2+-overload protocols, one with an increased APD and the other without an increase in the APD; the former protocol, with Ca2+ overload in the entire domain, leads to conduction block, reentry, and fibrillation. Our study demonstrates (a) the protective role of the repolarizing current I_Kr in the development of these Ca2+-overload-induced arrhythmias and (b) that the selective reduction of I_CaL, in the plateau phase of the AP, can suppress LCRs, in agreement with the suggestions of Refs. <cit.>; both these possibilities (a) and (b) can be used in the development of drugs for the suppression of such arrhythmias. Our study suggests [see Subsection <ref>] that it might not be easy to eliminate these arrhythmias by electrical defibrillation. We have noted, while discussing Fig. <ref>, that the reduction in f_2,sat reduces the magnitude of I_CaL (subplot (ii)) in its plateau phase (but not in the initial transient), suppresses the oscillations in Ca_SR (subplot (iii)), and eliminates the blue spikes in I_rel (subplot (iv)). Therefore, medications that selectively target the plateau phase of I_CaL can eliminate LCRs, without affecting the transient phase of I_CaL. References <cit.> suggest that blocking I_CaL-related channels can eliminate LCR-induced arrhythmias; our study yields a more nuanced suggestion, to wit, we must suppress only the plateau phase of I_CaL to remove LCRs. We observe that an increase of I_CaL or a reduction of I_Kr or both can lead to an enhancement of the APD [see Fig. <ref>]. However, from Figs. <ref>, <ref>, and <ref>, we conclude that, if we strengthen I_Kr, by increasing S_GKr, we can control the APD and, therefore, suppress reentry and fibrillation. §.§ Limitations of our study We have not used sub-cellular, microscopic descriptions for SCRs and LCRs. The synchronization of the SCRs and DADs across the myocyte is related, in our model, to the previous AP; however, the synchronization process is more complicated if we use microscopic descriptions of the SCRs and LCRs. We have induced Ca2+ overload principally by controlling I_CaL. We have not included overload via the stimulation of β-adrenergic receptors (β-AR) [see, e.g., Refs.
<cit.>]; heterogeneous β-AR stimulation can generate a spatially heterogeneous APD distribution, which can, in turn, act as a substrate for reentry; such stimulation enhances the L-type calcium-channel (LCC) current in the cardiac myocyte via the pathway of intracellular cyclic AMP (cAMP) and protein kinase A (PKA) <cit.>; this can, in turn, enlarge the APD and thus increase the Ca2+-overload and abnormal calcium releases as discussed, e.g., in the models of Refs. <cit.>. We note that Ca2+-overload-induced afterdepolarizations and subsequent PVCs can occur not only in ventricular myocytes, but also in myocytes in the atria, Purkinje fibers, and pacemaker cells [see, e.g., Refs. <cit.>]; these are not included in our study. We have not accounted for electrical heterogeneity across the ventricular wall <cit.> and in the apico-basal direction <cit.>. § DATA AND CODE AVAILABILITY Data from this study and the computer scripts can be obtained from the authors upon reasonable request. § CONFLICTS OF INTEREST No conflicts of interest, financial or otherwise, are declared by the authors. § AUTHOR CONTRIBUTIONS NR and RP planned the research and analysed the numerical data; NR carried out the calculations and prepared the tables, figures, and the draft of the manuscript; NR and RP then revised the manuscript in detail and approved the final version. § FUNDING We thank the Science and Engineering Research Board (SERB) and Council for Scientific and Industrial Research (CSIR), and the National Supercomputing Mission (NSM), India for support, and the Supercomputer Education and Research Centre (IISc) for computational resources. We thank Mahesh K. Mulimani and Soling Zimik for valuable discussions. § APPENDIX §.§ Video SV1 Animations of pseudocolor plots of the membrane potential V_m showing the propagation of plane waves, the emergence and propagation of PVCs originating from a DAD clump, the collision of the wavefront and waveback of two successive PVCs, and the formation of two stable spiral waves as in Fig. <ref>. For all the videos here, we use 30 frames per second, with each frame separated from the succeeding frame by 20ms in real time. See video here: <https://youtu.be/MDqVnDQ-8sg>. §.§ Video SV2 Animations of pseudocolor plots of the membrane potential V_m showing the propagation of plane waves, the emergence and propagation of PVCs originating from a DAD clump, the collision of the wavefront and waveback of two successive PVCs, and the formation of two spiral cores (temporarily) for a square DAD clump as in Fig. <ref>. See video here: <https://youtu.be/ptcnRiMi9iA>. §.§ Video SV3 Animations of pseudocolor plots of the membrane potential V_m showing the elimination of arrhythmogenic spiral waves in the conventional 2D TP06 model for cardiac tissue by using the low-amplitude defibrillation scheme suggested in Refs. <cit.> (cf., Fig. <ref>). See video here: <https://youtu.be/x8aBZsdNZMc>. §.§ Video SV4 Pseudocolor plots of the membrane potential V_m showing the elimination of arrhythmogenic PVCs in the TP06 model for cardiac tissue with a DAD clump by using the low-amplitude defibrillation scheme suggested in Refs. <cit.> (cf., Fig. <ref>). See video here: <https://youtu.be/GWpFT0kD__Q>. §.§ Video SV5 Animations of pseudocolor plots of V_m, superimposed on the bi-ventricular simulation domain (in blue), depicting the spatiotemporal evolution of the electrical excitations, the emergence of PVCs, and the evolution of PVCs to scroll and broken scroll waves (cf., Fig. <ref>).
See video here: <https://youtu.be/BAtsmGVwiQE>. §.§ Video SV6 Animations of pseudocolor plots of the membrane potential and a few currents during the emergence of a PVC: (top left) V_m, (top right) -I_CaL, (bottom left) -I_Na, and (bottom right) -I_NaCa (cf., Fig. <ref>). See video here: <https://youtu.be/B44qx-kaNA4>.
http://arxiv.org/abs/2307.07373v1
20230714142716
Measurement of $Λ$ hyperon spin-spin correlations in p+p collisions by the STAR experiment
[ "Jan Vanek" ]
nucl-ex
[ "nucl-ex", "hep-ex" ]
Measurement of Λ hyperon spin-spin correlations in p+p collisions by the STAR experiment Jan Vanek, for the STAR collaboration Brookhaven National Laboratory Abstract: Polarization of Λ hyperons has been observed in various collision systems over a wide range of collision energies over the last 50 years since its discovery at Fermilab in the 1970s. The existing experimental and theoretical techniques were not able to provide a conclusive answer about the origin of the polarization. In these proceedings, we discuss the possibility of using a new experimental method which utilizes the measurement of ΛΛ̅, ΛΛ, and Λ̅Λ̅ pair spin-spin correlations. With this new approach, it should be possible to distinguish whether the polarization originates from early-stage effects, such as initial-state parton spin correlation, or whether it is a final-state effect originating from hadronization. Furthermore, we study the feasibility of performing this measurement in p+p collisions at √(s) = 200 GeV collected by STAR in 2012, which should provide sufficient statistics of ΛΛ̅, ΛΛ, and Λ̅Λ̅ pairs to perform this measurement. Presented at DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023. § INTRODUCTION An interesting discovery by Fermilab was published in 1976. They observed that Λ hyperons produced in p+Be collisions with a 300 GeV proton beam are polarized <cit.>. This observation is surprising because neither the proton beam nor the beryllium target was polarized. As a result, experimentalists and theorists all around the world started investigating this phenomenon. The Λ hyperon polarization can be measured through reconstruction of the hadronic decay channel Λ^0 → pπ^- (and charge conjugate) and subsequent measurement of the angle (θ_p, or θ^⋆) between the decay proton momentum in the Λ rest frame (p) and a normal vector to the production plane (n̂). This decay channel is selected because the proton is preferentially emitted in the direction of the Λ polarization in the Λ rest frame. A cartoon illustrating the production plane determination, using the variables defined above, is shown in Fig. <ref>. The polarization (P_Λ) is then extracted from the angular distribution of the protons according to the formula dN/dcos(θ^⋆) = 1 + α P_Λcos(θ^⋆), where α is the weak decay constant of the Λ hyperon. This method was used in the first Λ hyperon polarization measurement from Ref. <cit.> and other measurements that followed. It is also possible to measure the Λ hyperon polarization with respect to a different reference direction, e.g., a jet axis, or the beam polarization for measurements with polarized beams. A brief overview of experimental results using these traditional methods is provided in Sec. <ref>. Section <ref> provides a description of a new method for Λ hyperon polarization measurement, which relies on the determination of Λ hyperon pair spin-spin correlations. In addition, the section shows the first steps of an analysis utilizing this method in p+p collisions at √(s) = 200 GeV by the STAR experiment. § OVERVIEW OF Λ HYPERON POLARIZATION RESULTS One of the results presented in the first Λ hyperon polarization paper is shown in Fig. <ref>. The polarization α P_Λ of the Λ hyperons, produced in collisions of a 300 GeV proton beam with a Be target, rises with the Λ transverse momentum (p_T).
In pursuit of an explanation of the origin of Λ hyperon polarization, a number of independent investigations were performed over a wide range of collision systems and energies. A selection of such results is shown in Fig. <ref>. In this case, the polarization P_Λ is plotted as a function of x_F = p_z^Λ/p_beam. The key observation here is that the polarization appears to depend primarily on the x_F and not on the collision type or energy. All presented results are from collisions of unpolarized particles. It is also important to investigate whether the polarization of the produced Λ hyperons is correlated with the polarization of the beams, for example, in polarized p⃗+p⃗ collisions at the STAR experiment. An example of such a measurement is presented in Fig. <ref>, which shows the longitudinal spin transfer D_LL of Λ and Λ̅ hyperons at positive pseudorapidity (0 < η < 1.2) measured in p⃗+p⃗ collisions at √(s) = 200 GeV. No significant longitudinal polarization of Λ hyperons is observed at STAR, which suggests that the beam polarization does not play a significant role in the polarization of the Λ hyperon at RHIC energies within the studied kinematic range[The x_F in this η region at RHIC is rather small, which will likely lead to a small Λ polarization, as seen in Fig. <ref>. It is not possible to make a direct comparison to Fig. <ref> due to the polarization of the beams and also a different observable of the measurement.]. The examples above are a small selection of all experimental efforts over the last 50 years in the attempt to explain the origin of Λ hyperon polarization. Unfortunately, none of the measurements, or the theoretical models, can provide a definitive answer on where the Λ hyperon polarization is generated. In the following section, we investigate a possibility to improve our knowledge of the phenomenon by measuring Λ hyperon pair spin-spin correlations. § Λ HYPERON SPIN-SPIN CORRELATIONS Most experimental techniques for the measurement of Λ hyperon polarization are based on the same general idea. It is the measurement of an angle, usually denoted θ^⋆, between a reference direction and the momentum of the decay proton in the hyperon rest frame. The reference direction can be chosen based on specific physics considerations. As shown above, the first possibility is to use the production plane. Other common alternatives are, e.g., the polarization of the beam, or a jet axis, in case the Λ hyperon is part of a jet. Another possibility is to look for events with two or more Λ or Λ̅ hyperons and measure the angle θ^⋆_12 between the decay (anti-)protons, both boosted into the rest frame of their mother particles. Since these (anti-)protons are preferentially emitted in the direction of their mother's polarization, such a measurement gives access to ΛΛ̅, ΛΛ, and Λ̅Λ̅ pair spin-spin correlations <cit.>. For this method, it is possible to use the following formula (see also Ref. <cit.>): dN/dcos(θ^⋆_12) = 1 + α_1α_2 P_Λ_1Λ_2 cos(θ^⋆_12), where α_1 and α_2 are weak decay constants of the Λ hyperons in the pair and P_Λ_1Λ_2 is the polarization of the pair. Λ_1 and Λ_2 can both be either Λ or Λ̅. One of the key advantages of this approach is that it should be able to identify whether the polarization comes from initial-stage effects, such as spin-spin correlation of the initial-stage partons, or whether it is a final-state effect originating from, e.g., hadronization.
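Before turning to what such a correlation would imply physically, we note that the extraction of P_Λ_1Λ_2 from the formula above reduces to a one-parameter linear fit of the normalised cos(θ^⋆_12) distribution. The short Python sketch below makes this explicit; it is an illustration only, not the STAR analysis code, and the histogram inputs and helper names are purely illustrative (for Λ̅ the appropriate, negative, decay constant should be passed).

import numpy as np

ALPHA_LAMBDA = 0.732           # weak decay constant of the Lambda (PDG value)

def pair_polarization(cos_theta12, alpha1=ALPHA_LAMBDA, alpha2=ALPHA_LAMBDA,
                      n_bins=10):
    """Extract P_{Lambda1 Lambda2} from an array of per-pair cos(theta*_12) values."""
    counts, edges = np.histogram(cos_theta12, bins=n_bins, range=(-1.0, 1.0))
    centres = 0.5 * (edges[:-1] + edges[1:])
    density = counts / counts.mean()            # normalise so the flat level is ~1
    # least-squares straight line: density = p0 + p1 * cos(theta*_12)
    p1, p0 = np.polyfit(centres, density, deg=1)
    return p1 / (p0 * alpha1 * alpha2)

# toy usage with an isotropic (uncorrelated) sample: the result is consistent with zero
rng = np.random.default_rng(0)
print(pair_polarization(rng.uniform(-1.0, 1.0, size=100_000)))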
The initial state correlation should be seen in data as spin-spin correlation of the ΛΛ̅ pairs, as those likely originate from a single ss̅ quark pair produced in the hard partonic scattering. At the same time, no strong correlation is expected for ΛΛ and Λ̅Λ̅ pairs, as those cannot originate from a single ss̅ quark pair. In order to investigate the Λ hyperon pair spin-spin correlations in p+p collisions at √(s) = 200 GeV at STAR, it is important to verify that there are no other known mechanisms to generate non-zero P_Λ_1Λ_2. This was done with a PYTHIA 8.3 simulation of p+p collisions at √(s) = 200 GeV. The extracted 1/N_evt dN/dcos(θ^⋆_12) distributions as a function of cos(θ^⋆_12) for ΛΛ̅ and ΛΛ pairs are shown in Fig. <ref>. The distributions are fitted with a linear function which is used to extract the value of P_Λ_1Λ_2 using equation (<ref>). For both combinations, the polarization is zero, meaning that pure PYTHIA does not predict any Λ hyperon spin-spin correlations at mid-rapidity in p+p collisions at √(s) = 200 GeV. The extraction of the dN/dcos(θ^⋆_12) distributions from the data starts with the selection of Λ and Λ̅ hyperon candidates. This is done by pairing protons and pions reconstructed and identified with the STAR Time Projection Chamber. Each Λ candidate then corresponds to one pπ pair. Events with two or more Λ candidates are considered for further analysis. For events which contain at least two such pairs, a 2D distribution is filled where one axis is the invariant mass of one of the pπ pairs and the second axis is the invariant mass of the second pπ pair. This is done for two combinations of the pπ pairs. In the first combination, an unlike-sign (US) pπ pair is matched with a different US pair from the same event; this distribution contains both the signal and the combinatorial background. In the second combination, US pairs are matched to like-sign (LS) pπ pairs, which provides an estimate of the background. The US-LS distribution is then subtracted from the US-US distribution and subsequently fitted with a 2D Gaussian function to determine the Λ candidate invariant-mass peak mean and width. The signal region is defined as the mean ±3σ, where both the mean and σ are taken from the fit. Two examples of the 2D invariant-mass distributions are shown in Fig. <ref>. This procedure is done separately for ΛΛ̅, ΛΛ, and Λ̅Λ̅ candidate pairs in p+p collisions at √(s) = 200 GeV measured by STAR in 2012. The extracted numbers of signal and background pairs for each of the three possible combinations are shown in Tab. <ref>. The number of candidate pairs provides sufficient statistics to perform this measurement using the 2012 p+p collisions data-set. § SUMMARY The Λ hyperon polarization puzzle is one of the main unresolved mysteries of experimental high-energy particle physics. The polarization has been observed in several different collision systems at various energies. The magnitude of the polarization appears to depend primarily on x_F and not much on the specific collision energy. Despite enormous experimental and theoretical efforts to explain the Λ hyperon polarization, no conclusive answer has been found. In order to improve the knowledge in this field, a new method was developed which relies on the measurement of Λ hyperon pair spin-spin correlations. Any non-zero signal measured in p+p collisions at √(s) = 200 GeV at STAR would provide more insight into Λ hyperon polarization, as a simulation using PYTHIA 8.3 predicts no spin-spin correlation signal.
The number of ΛΛ̅, ΛΛ, and Λ̅Λ̅ candidate pairs extracted from the aforementioned STAR data-set is sufficient to perform this type of measurement and thus is going to provide important additional insight into Λ hyperon polarization physics in p+p collisions at RHIC energies.
ref-first_paper Bunce G., et al., Λ^0 Hyperon Polarization in Inclusive Production by 300-GeV Protons on Beryllium, Phys. Rev. Lett. 36, 1113 (1976)
ref-HERMES Airapetian A., et al. [HERMES Collaboration], Transverse Polarization of Λ and Λ̅ Hyperons in Quasireal Photoproduction, Phys. Rev. D 76, 092008 (2007)
ref-ATLAS Aad G., et al. [ATLAS Collaboration], Measurement of the transverse polarization of Λ and Λ̅ hyperons produced in proton-proton collisions at √(s) = 7 TeV using the ATLAS detector, Phys. Rev. D 91, 032004 (2015)
ref-BELLE Guan Y., et al. [BELLE Collaboration], Observation of Transverse Λ/Λ̅ Hyperon Polarization in e^+e^- Annihilation at Belle, Phys. Rev. Lett. 122, 042001 (2019)
ref-STAR_transfer Adam J., et al. [STAR Collaboration], Improved measurement of the longitudinal spin transfer to Λ and Λ̅ hyperons in polarized proton-proton collisions at √(s_NN) = 200 GeV, Phys. Rev. D 98, 112009 (2018)
ref-Tornqvist Törnqvist N.A., Suggestion for Einstein-Podolsky-Rosen experiments using reactions like e^+e^- →ΛΛ̅→π^-pπ^+p̅, Found. Phys. 11, 171–177 (1981)
ref-Gong Gong W., et al., Measurement of Bell-type inequalities and quantum entanglement from Λ-hyperon spin correlations at high energy colliders, Phys. Rev. D 106, L031501 (2022)
http://arxiv.org/abs/2307.04291v1
20230710005229
Wait, wasn't that code here before? Detecting Outdated Software Documentation
[ "Wen Siang Tan", "Markus Wagner", "Christoph Treude" ]
cs.SE
[ "cs.SE" ]
Wait, wasn't that code here before? Detecting Outdated Software Documentation Wen Siang Tan School of Computer Science University of Adelaide Adelaide, SA, Australia [email protected] Markus Wagner Department of Data Science & AI Monash University Melbourne, VIC, Australia [email protected] Christoph Treude School of Computing and Information Systems The University of Melbourne Melbourne, VIC, Australia [email protected] August 12, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================== Encountering outdated documentation is not a rare occurrence for developers and users in the software engineering community. To ensure that software documentation is up-to-date, developers often have to manually check whether the documentation needs to be updated whenever changes are made to the source code. In our previous work, we proposed an approach to automatically detect outdated code element references in software repositories and found that more than a quarter of the 1000 most popular projects on GitHub contained at least one outdated reference. In this paper, we present a GitHub Actions tool that builds on our previous work's approach that GitHub developers can configure to automatically scan for outdated code element references in their GitHub project's documentation whenever a pull request is submitted. Video—<https://www.youtube.com/watch?v=4cA10vdlmns> software repositories, outdated documentation, outdated references, code elements, workflow automation § INTRODUCTION Not only developers but also users often find encountering outdated software documentation a frustrating experience. In our previous work <cit.>, we found that 28.9% of the top 1000 most popular projects[Top 1000 projects ranked by the number of stars] on GitHub contain at least one outdated reference to source code in their documentation. In the same paper, we proposed an approach named DOCER (Detecting Outdated Code Element References) to automatically detect outdated code element references in software repository documentation. The approach works by extracting code element references from documentation (README and wiki pages) using a list of regular expressions. These extracted references include variables, functions and class names found in the documentation such as HttpClient, Promise.reject(err) and ArrayList<String>. To determine if a reference is outdated, we match the reference to two revisions of the source code: the repository snapshot when the documentation was last updated and the current revision. We compare the number of instances found in the two versions and flag the reference as outdated if it existed in the snapshot but is no longer found in the current revision. <Ref> shows an overview of the DOCER approach. In our previous paper, we provided an implementation that developers can use to scan for outdated code element references. However, running the script whenever new changes are proposed may be mundane and repetitive. To simplify this process, we created a tool based on GitHub Actions workflow that is automatically triggered whenever a pull request is submitted to the repository. 
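The core check behind this process is easy to state, and the following Python sketch summarises it: references that existed in the snapshot matching the last documentation update but are absent from the current revision are flagged as outdated. This is an illustration only; the single regular expression and the helper names below are stand-ins for the full list of expressions and scripts used in our previous work.

import re
from pathlib import Path

CODE_ELEMENT_RE = re.compile(r"`([A-Za-z_][\w.]*(?:\(\))?)`")  # e.g. `HttpClient`

def extract_references(doc_text):
    """Pull code element references out of the documentation text."""
    return set(CODE_ELEMENT_RE.findall(doc_text))

def count_occurrences(reference, source_dir):
    """Count occurrences of a reference in all files under a source tree."""
    name = reference.rstrip("()")
    return sum(path.read_text(errors="ignore").count(name)
               for path in Path(source_dir).rglob("*") if path.is_file())

def outdated_references(doc_text, snapshot_dir, current_dir):
    """Flag references that existed in the snapshot but are gone now."""
    outdated = []
    for ref in extract_references(doc_text):
        then = count_occurrences(ref, snapshot_dir)
        now = count_occurrences(ref, current_dir)
        if then > 0 and now == 0:
            outdated.append(ref)
    return outdated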
This workflow automates all the steps mentioned above and reports outdated references by commenting on the pull request. In the following sections of this paper, we provide an in-depth introduction to the tool's implementation (Section <ref>), and describe real-world examples where the DOCER approach successfully detected outdated documentation (Section <ref>). Limitations of the tool are discussed in Section <ref> before we conclude the paper with related work (Section <ref>) and future work (Section <ref>). § TOOL In this section, we introduce: (1) the GitHub Actions workflow that the tool is based on, (2) an example repository showing how the tool can be configured to run whenever a pull request is submitted, and (3) how false positives reported by the tool can be ignored. §.§ Implementation GitHub Actions,[<https://github.com/features/actions>] a feature on GitHub, enables developers to automate workflows based on events. This feature is typically employed for building Continuous Integration and Continuous Delivery (CI/CD) pipelines. We created the tool using GitHub Actions because it provides developers with a convenient way to integrate the tool with existing GitHub projects. Developers also have the flexibility to configure their projects in a way that the tool automatically scans for outdated code element references in their documentation whenever a pull request is submitted. The workflow is defined by a YAML file[<https://yaml.org/>] containing a series of actions that gets executed when the workflow is triggered. To begin, we list the name of the workflow (DOCER), the events that trigger the workflow (pull requests), followed by the name of the GitHub-hosted runner[<https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners>] (latest Long Term Support version of Ubuntu) and the permissions needed for the job (read repository contents and write to pull requests).
[bgcolor=mygray]yaml
name: DOCER
on: pull_request
jobs:
  run:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
The rest of the file defines the steps to execute in the workflow. Three repositories are cloned on the runner (repositories containing the source code, wiki pages, and scripts for the analysis) using a GitHub Action named checkout.[<https://github.com/actions/checkout>]
[bgcolor=mygray]yaml
- name: Checkout repository
  uses: actions/checkout@v3
  with:
    repository: ${{ github.repository }}
    ref: ${{ github.event.pull_request.head.sha }}
    path: repo
    fetch-depth: 0
- name: Checkout wiki
  continue-on-error: true
  uses: actions/checkout@v3
  with:
    repository: ${{ github.repository }}.wiki
    path: wiki
- name: Checkout tool
  uses: actions/checkout@v3
  with:
    repository: wesleytanws/DOCER_tool
    path: tool
Once the repositories are cloned, the runner possesses all the necessary files to scan for outdated references. The workflow then commences the analysis, installs the necessary Python packages used by the report, generates the report and finally stores the results in an environment variable.
[bgcolor=mygray]yaml
- name: Run tool
  run: |
    bash tool/analysis.sh
    pip install pandas
    pip install numpy
    echo 'report<<EOF' >> $GITHUB_ENV
    python tool/report.py ${{ github.repository }} ${{ github.run_id }} >> $GITHUB_ENV
    echo 'EOF' >> $GITHUB_ENV
In the case where merging the pull request may result in outdated documentation, the workflow uses a GitHub Action named github-script[<https://github.com/actions/github-script>] to post a comment on the pull request listing the potentially outdated references.
[bgcolor=mygray]yaml
- name: Comment on pull request
  if: env.report
  uses: actions/github-script@v6
  env:
    report: ${{ env.report }}
  with:
    script: |
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: process.env.report
      })
Figuring out why a code element reference has been flagged as potentially outdated can be challenging, especially when there are numerous modifications in the pull request. This final step uploads the report and summary files to GitHub using a GitHub Action named upload-artifact,[<https://github.com/actions/upload-artifact>] allowing developers to view the full report.
[bgcolor=mygray]yaml
- name: Upload artifact
  if: env.report
  uses: actions/upload-artifact@v3
  with:
    name: report
    path: |
      output/report.csv
      output/summary.csv
      output/summary.md
The GitHub repository, which includes the workflow outlined above and the source code for the tool, is available for public access.[<https://github.com/wesleytanws/DOCER_tool/tree/v1.0.1>] <Ref> summarises the steps defined by the workflow. §.§ Adding to GitHub projects To demonstrate how the GitHub Actions tool works, we will integrate the tool with an example repository with three files (<Ref>):
* README.md documents the mathematical functions defined in arithmetic.py
* arithmetic.py defines the mathematical functions
* main.py calls the functions defined in arithmetic.py
Integrating the tool into a repository is as convenient as copying the YAML file defining the workflow[<https://github.com/wesleytanws/DOCER_tool/blob/v1.0.1/DOCER.yml>] to the .github/workflows folder. Suppose a pull request as shown in <Ref> is submitted to the repository. Looking at the submitted pull request, we see that two files in the repository have been modified. In arithmetic.py, the subtract and divide functions were removed and a new power function was added. Similarly, the main.py file was modified to remove the subtract function and the chained multiply functions were refactored into a power function. Notice that the tool reports that merging the pull request may result in two outdated references in the documentation (<Ref>). This discrepancy arises because the README file was not updated to reflect the removal of the `divide' and `subtract' functions from the source code. To keep the documentation up-to-date, we can simply remove the two outdated references in the README file. Better still, we can document the new function and mention that the two functions are now deprecated as shown in <Ref>. §.§ Excluding code elements One useful feature that we added to the tool is the ability to exclude certain code elements from the report, which allows developers to stop keeping track of code elements that have been determined to be false positives. Developers can add a list of code elements separated by newlines in a file named .DOCER_exclude located at the root of the repository. Code elements in the exclude list will be ignored by the tool when scanning for outdated references. § EXAMPLES In our previous work <cit.>, we evaluated the approach's usefulness in real-world software projects by submitting GitHub issues to 15 different projects. Here, we present two examples of true positives and false positives in the issues submitted <cit.>. DOCER automates the creation of such notifications.
True positives The google/cctz project was one of the 15 projects that responded positively to our GitHub issue.[<https://github.com/google/cctz/issues/210>] All instances of the code element int64_t were removed from the source code in one of the commits but the documentation continued to reference the deleted code element. In response to our GitHub issue, the developer updated the documentation to align with the changes in the source code (<Ref>). In the google/hs-portray project, the function prettyShow was renamed to showPortrayal in the source code, but the README file was not updated (<Ref>). We alerted the developers of this discrepancy, and the issue was fixed subsequently.[<https://github.com/google/hs-portray/issues/7>] False positives In another Google project, google/clif (<Ref>), a CMake flag was removed from the source code but the documentation was not updated. The developer responded that the flag is no longer required in the source code but it is still relevant for users that have installed multiple versions of Python, to configure the installation directory correctly.[<https://github.com/google/clif/issues/52>] A false positive was reported in the google/gnostic project (<Ref>) where the code element text_out was deleted from the source code. Although the code element is no longer found in the source code, the functionality remains in the program logic. This leads to the code element reference getting falsely flagged as outdated.[<https://github.com/google/gnostic/issues/273>] § LIMITATIONS Trying to understand and use documentation which features code elements that do not exist is just one of many frustrations that software developers encounter when they are confronted with outdated documentation. Addressing this particular frustration is the goal of DOCER. Other forms of outdated documentation, such as inaccurate descriptions of the functionality of code elements or not-yet-documented code elements, are beyond the scope of our current work. DOCER is currently limited to detecting outdated documentation in GitHub (README and wiki pages) and would not be able to find issues in documentation hosted externally. DOCER detects code elements in documentation using the set of regular expressions from our previous work. These regular expressions have not been validated on all possible programming languages and refining them to work on popular programming languages is part of our future work. Our tool may sometimes falsely categorise references as outdated due to limitations of the approach. For example, the change log of a project may contain references to deleted code elements in the source code. However, these references should not be flagged as outdated as they only serve as a notice. As a workaround, developers can add the code elements to the .DOCER_exclude file to avoid the tool reporting the references as outdated. In addition, our tool only detects code elements written as text. Other kinds of outdated documentation such as images and videos in the documentation cannot be detected. § RELATED WORK There is a large body of existing work related to detecting and fixing inconsistencies between source code and documentation, with source code comments being one of the main focuses. Wen et al. <cit.> conducted an empirical study of 1500 Java systems, citing deprecation and refactoring as causes of code-comment inconsistencies. In one of the earliest attempts to address these inconsistencies, Tan et al. <cit.> proposed @tcomment, aiming to catch exceptions related to null values in Javadoc comments.
Ratol and Robillard <cit.> introduced Fraco, a tool targeting source code comments and identifier renaming. Panthaplackel et al. <cit.> proposed a model that can modify natural language comments based on source code changes, outperforming existing comment generation models. Other work related to documentation but not limited to source code comments includes DocRef by Zhong and Su <cit.>. Combining natural language tools and code analysis techniques to identify discrepancies between source code and documentation, DocRef was able to detect more than 1000 errors in API documentation. Designed to report documentation changes, AdDoc by Dagenais and Robillard <cit.> uses traceability links to identify changes to the documentation that deviate from existing code patterns. Using static program analysis, Zhou et al. <cit.> proposed a framework, DRONE, that automatically discovers defects in Java API documentation and generates helpful recommendations. Another work addressing API documentation is FreshDoc by Lee et al. <cit.>. By using a grammar parser and analysing multiple source code versions, FreshDoc can automatically update class, method and field names found in the documentation. In contrast to these approaches and to the best of our knowledge, DOCER is the first tool which attempts to prevent inconsistent and outdated documentation by alerting software developers before their documentation becomes outdated. We accomplish this through a GitHub Action which is GitHub's implementation of a software bot <cit.>. Software bots have recently attracted the attention of the software engineering research community, with a particular focus on code review bots which—similar to DOCER—comment on pull requests. For example, Wessel et al. <cit.> found that the adoption of code review bots increases the number of monthly merged pull requests, decreases monthly non-merged pull requests, and decreases unnecessary communication among developers. Our goal with DOCER is to enable code review bots to also decrease the amount of outdated documentation. § FUTURE WORK AND CONCLUSION In this paper, we presented DOCER, which developers can use to automatically scan for outdated code element references. The tool analyses the repository and generates a report on the state of code element references whenever a pull request is submitted. If merging the pull request results in outdated references in the documentation, the tool will upload the report and comment on the pull request alerting developers of the situation. Developers can choose to fix the outdated references in their documentation, or add the references to the exclude list if they have been determined to be false positives. As mentioned in <Ref>, refining the list of regular expressions used to detect code elements is part of our future work. One such refinement could be ensuring that the regular expressions can accurately extract code elements found in popular programming languages such as JavaScript, Python and Java. In addition, several improvements can be made to the tool. Adding a feature where developers can reply to the tool's comment for code elements they do not want to keep track of could be helpful. The tool will then automatically add the code elements to the project's exclude list. Another improvement could be adding a file that defines a list of documentation files to exclude, e.g., a wiki page that contains the project's change log. Expanding the tool to work not only on GitHub but also on other version control platforms is another direction worth exploring.
This allows more developers to scan for outdated code element references in their projects.
http://arxiv.org/abs/2307.05587v1
20230710154713
Active Learning for Video Classification with Frame Level Queries
[ "Debanjan Goswami", "Shayok Chakraborty" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Active Learning for Video Classification with Frame Level Queries This research was supported in part by the National Science Foundation under Grant Number: 2143424 Debanjan Goswami Department of Computer Science Florida State University Shayok Chakraborty Department of Computer Science Florida State University August 12, 2023 ========================================================================================================================================================================= Deep learning algorithms have pushed the boundaries of computer vision research and have depicted commendable performance in a variety of applications. However, training a robust deep neural network necessitates a large amount of labeled training data, acquiring which involves significant time and human effort. This problem is even more serious for an application like video classification, where a human annotator has to watch an entire video end-to-end to furnish a label. Active learning algorithms automatically identify the most informative samples from large amounts of unlabeled data; this tremendously reduces the human annotation effort in inducing a machine learning model, as only the few samples that are identified by the algorithm, need to be labeled manually. In this paper, we propose a novel active learning framework for video classification, with the goal of further reducing the labeling onus on the human annotators. Our framework identifies a batch of exemplar videos, together with a set of informative frames for each video; the human annotator needs to merely review the frames and provide a label for each video. This involves much less manual work than watching the complete video to come up with a label. We formulate a criterion based on uncertainty and diversity to identify the informative videos and exploit representative sampling techniques to extract a set of exemplar frames from each video. To the best of our knowledge, this is the first research effort to develop an active learning framework for video classification, where the annotators need to inspect only a few frames to produce a label, rather than watching the end-to-end video. Our extensive empirical analyses corroborate the potential of our method to substantially reduce human annotation effort in applications like video classification, where annotating a single data instance can be extremely tedious. active learning, video classification, deep learning § INTRODUCTION With the widespread deployment of modern sensors and cameras, images and videos have become ubiquitous. This has encouraged the development of video classification algorithms to analyze their semantic content for various applications, such as search, summarization, security and surveillance among others. Deep neural networks (CNN and LSTM architectures) have depicted commendable performance in this field <cit.>. Common methods include obtaining global video-level descriptors using CNN architectures <cit.>, processing videos at two spatial resolutions: a low-resolution context stream and a high-resolution fovea stream <cit.>, fusion technique to integrate data representations at the frame level and video level <cit.> among others. However, for all these models to work reliably, a large amount of labeled training data is essential, gathering which is an expensive process in terms of time, labor and human expertise. Thus, an algorithm to reduce the human labeling effort is of immense importance in video classification applications. 
Active Learning (AL) is a machine learning paradigm, where the goal is to automatically identify the salient and exemplar samples from large amounts of redundant data <cit.>. This tremendously reduces the human annotation effort in inducing a machine learning model, since the human expert only has to label the samples queried by the algorithm. Further, since the model gets trained on the exemplar samples from the data population, it typically depicts better generalization performance than a model where the training data is selected at random. This is an extremely relevant paradigm in today's world, where an enormous amount of digital data is being generated, but there is a serious dearth of human labor to annotate the data to induce learning models. AL has been successfully used in a variety of applications, including computer vision <cit.>, text analysis <cit.>, computational biology <cit.> and medical diagnosis <cit.> among others. Active learning is particularly relevant in the context of deep learning, in order to reduce human annotation effort in training the data-hungry deep neural networks <cit.>. Designing an AL algorithm for a video classification application requires the human annotator to meticulously watch each queried video end-to-end in order to furnish a label [we use the terms annotators, oracles, labelers and users synonymously in this paper]. This is an extremely time-consuming and laborious process; the annotators may get bored and fatigued quickly and lose interest in the task. This necessitates specialized and more user-friendly query and annotation mechanisms, to utilize the available human labor more efficiently. In this paper, we propose a novel active learning algorithm to address this challenging and practical problem. Our algorithm identifies a batch of informative videos, together with a set of exemplar frames from each; the human annotator merely has to review the queried frames and furnish a label for each video. This is illustrated in Figure <ref>. Providing such feedback is significantly less time-consuming and burdensome than watching a video end-to-end. We formulate an optimization problem based on an uncertainty and diversity based criterion to identify a batch of informative videos, and exploit representative sampling techniques to select a subset of exemplar frames from each. To our knowledge, this is the first active learning framework for video classification which poses label queries based on a set of exemplar frames, rather than the complete video. We hope this research will motivate the development of AL algorithms with other novel annotation mechanisms, with the goal of further reducing the labeling burden on human oracles in a video classification application. The rest of the paper is organized as follows: we present a survey of related research in Section <ref>, our active sampling framework is detailed in Section <ref>, the results of our empirical studies are presented in Section <ref>, and we conclude with discussions in Section <ref>. § RELATED WORK In this section, we present an overview of active learning in general, followed by a survey of AL for video classification. Active Learning: AL has received significant research attention in the machine learning community. Uncertainty sampling is by far the most common strategy for active learning, where unlabeled samples with the highest classification uncertainties are queried for their labels.
The uncertainty of an unlabeled sample can be computed by its entropy <cit.>, its distance from the separating hyperplane in the feature space for SVM classifiers <cit.>, the disagreement among a committee of classifiers regarding the label of the sample <cit.>, the expected error reduction of the future learner <cit.> and so on. Submodular optimization techniques have also been exploited for active data sampling <cit.>. The growing success and popularity of deep learning has motivated research in the field of deep active learning (DAL), where the goal is to select informative unlabeled samples to efficiently train a deep neural network <cit.>. A task agnostic AL framework was proposed by Yoo and Kweon <cit.> that incorporated a loss prediction module in the network architecture, to predict the loss value of an unlabeled sample and query samples accordingly. A DAL framework based on core-set selection was proposed by Sener and Savarese <cit.>, which selected a batch of samples, such that the deep model trained on the selected samples depicts similar performance to that trained on the whole dataset. DAL has also been studied in conjunction with neural architecture search <cit.>, which queries samples for labeling and simultaneously searches for the best neural architectures on-the-fly. A novel training loss function for DAL was proposed by Shui et al., where active sample selection and traning the network parameters were achieved through alternating optimization <cit.>. Deep active learning techniques based on adversarial training have depicted particularly impressive performance <cit.>. Active learning has also been studied in conjunction with other learning paradigms such as transfer learning <cit.>, reinforcement learning <cit.> etc. Moreover, the idea of identifying an informative set of samples for human inspection has been extended to other problem domains, such as matrix completion <cit.>, video summarization <cit.> and feature selection <cit.> among others. Recently, there have been efforts to design AL systems with novel query and annotation mechanisms, with the goal of further reducing the labeling burden on human annotators. Joshi et al. <cit.> proposed a binary query mechanism which queried an unlabeled sample together with a potential class label and the user had to provide the binary answer as to whether the queried unlabeled sample belonged to the selected class or not. Along similar lines, Biswas and Jacobs proposed an AL algorithm for clustering, which queried a pair of samples and the oracles needed to specify whether or not the samples in a pair correspond to the same cluster <cit.>. Xiong et al. <cit.> proposed a triplet query framework to learn approximate distance metrics for a nearest neighbor classifier; the algorithm queried unlabeled data triplets (x_i, x_j, x_k) and posed the question whether instance x_i was more similar to x_j than to x_k. Qian et al. <cit.> proposed an active learning algorithm where the query strategy was to place an ordering (or partial ordering) on the similarity of the neighbors of the selected unlabeled sample, rather than querying its actual label. Active Learning for Video Classification: While AL has been extensively studied for image recognition <cit.>, it is much less explored for video classification. Similar to image classification, uncertainty sampling (using metrics like entropy, error reduction) is a popular AL query strategy for video recognition <cit.>. Yan et al. 
<cit.> proposed a multi-class AL framework for video classification using expected error reduction. Since the estimation of the posterior probability distribution P(y|x) may be unreliable due to the lack of sufficient training data, simple heuristics were also proposed to simplify the sample selection strategies. Another approach was developed in the context of SVMs, which queried a set of samples which can produce the maximum expected reduction in the SVM objective <cit.>. Bandla and Grauman <cit.> used AL to train an action detector for videos which selected the video which was expected to maximally reduce the entropy among all unlabeled videos. The core idea was to use the current trained detector to extract relevant portions in the video where the action of interest occurs, so that the video segment outside the interval does not introduce noise in the entropy computation. However, this method is specifically designed to actively learn an action detector from videos. Active contrastive learning has also been explored for learning audio-visual representations from unlabeled videos <cit.>. All these methods require the human annotator to watch an unlabeled video end-to-end in order to provide a label, which may be extremely time-consuming and arduous. In contrast, our framework identifies a subset of exemplar frames, and the human labeler has to label a video by merely reviewing the frames, which is a much more efficient annotation strategy. Our method is applicable to any type of videos and does not make any assumptions about the contents of the video. Other related efforts include AL for video tracking <cit.>, video description <cit.>, video recommendation <cit.> and video segmentation <cit.>. However, these methods attempt to solve a different problem than video classification, which is the focus of this paper. We now describe our framework. § PROPOSED FRAMEWORK Consider an active learning problem for video classification, where we are given a labeled training set L and an unlabeled set U, with |L| ≪ |U|. Each data sample x in L and U is a video. Let w be the deep neural network trained on L and C be the number of classes in the data. Our objective is two-fold: (i) select a batch B containing b unlabeled videos so that the model trained on L ∪ B has maximum generalization capability; (ii) however, we are not allowed to show an entire video to a human annotator and ask for its label; we are required to select a subset of k exemplar frames from each queried video, so that only those can be shown to an annotator for labeling the video. Both these objectives are critical in improving the generalization capability of the deep model. The first objective ensures that the salient videos are selected from the unlabeled set for active query. The second objective ensures that the most representative frames are selected from each video for query. This is important, as otherwise, the annotator may not be confident enough to provide a label or may provide an incorrect label, both of which will result in a wastage of query budget and degrade the performance of the model. In the following sections, we discuss our active sampling strategies for sampling videos and frames. §.§ Active Video Sampling We quantified the utility of a batch of b videos and selected a batch furnishing the maximal utility. The informativeness and diversity metrics were used to compute the utility of a batch of videos in this research. 
An active learning framework driven by these conditions ensures that the video samples in the batch augment useful knowledge to the underlying deep neural network, and there is high diversity (minimum redundancy) of information among the samples in the batch. These conditions have been used in previous active learning research <cit.>. Computing informativeness: The informativeness of an unlabeled video sample x_i was computed as the uncertainty of the deep model w in predicting a label for x_i. The Shannon's entropy was used to compute the prediction uncertainty: e(x_i) = -∑_y=1^C P(y|x_i, w) log P(y|x_i, w) Computing diversity: We computed a diversity matrix R ∈^|U| × |U| where R(i,j) denotes the diversity between videos x_i and x_j in the unlabeled set. We used the kernelized distance on the deep feature representations to compute the diversity between a pair of videos in this research: R(i,j) = K (x_i, x_j) where K = (. , .) denotes the distance in the Reproducing Kernel Hilbert Space (RKHS) <cit.>. §.§.§ Active Video Selection By definition, all the entries in e and R are non-negative, that is, e_i≥ 0 and R(i,j) ≥ 0, ∀ i,j. Given e and R, our objective is to select a batch of videos with high uncertainties (given by the entries in e) and high mutual diversity (given by the entries in R). We define a binary selection vector z ∈^|U| × 1 where z_i denotes whether the unlabeled video x_i will be selected in the batch (z_i = 1) or not (z_i = 0). Our batch selection task (with batch size b) can thus be posed as the following NP-hard integer quadratic programming (IQP) problem: max_z e^T z + μ z^T R z s.t. z_i∈{0,1}, ∀ i and∑_i=1^|U| z_i = b where μ is a weight parameter governing the relative importance of the two terms. The binary integer constraints on z allow us to combine e and R into a single matrix Q ∈^|U| × |U| and express the optimization problem as follows: max_z z^T Q z s.t. z_i∈{0,1}, ∀ i and∑_i=1^|U| z_i = b where the matrix Q is constructed as follows: Q(i,j) = μ R(i,j), if i≠ j e(i), if i = j The binary integer constraints on the variable z make the IQP in Equation (<ref>) NP-hard. We used the Iterative Truncated Power algorithm <cit.> to solve this optimization problem. §.§.§ The Iterative Truncated Power Algorithm This algorithm was originally proposed in the context of the sparse eigenvalue and the densest k-subgraph problems. It attempts to solve an optimization problem similar to that in Equation (<ref>). The algorithm starts with an initial solution z_0 and then generates a sequence of solutions z_1, z_2, …. The solution z_t at iteration t is obtained by multiplying the solution z_t-1 at iteration (t-1) by the matrix Q and then truncating all the entries to 0, except the b largest entries. The process is repeated until convergence. The algorithm is guaranteed to converge monotonically for a positive semi-definite (psd) matrix Q. When the matrix Q is not psd, the algorithm can be run on the shifted quadratic function (with a positive scalar added to the diagonal elements) to guarantee a monotonic convergence <cit.>. The algorithm is computationally efficient and converges fast. It benefits from a good starting point. In our empirical studies, the initial solution z_0 was taken as the indicator vector corresponding to the b largest column sums of the matrix Q, as it produced competitive results in our preliminary experiments. The pseudo-code for our active video sampling algorithm is presented in Algorithm <ref>. 
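As a concrete, unofficial illustration of this selection step, the short sketch below assembles the matrix Q from the prediction entropies and a kernelized diversity matrix and then runs the iterative truncated power method; the function names, the Gaussian-kernel diversity, and the default parameters are assumptions made for illustration and are not taken from the paper's implementation.

import numpy as np

def predictive_entropy(probs):
    # probs: (n_videos, C) softmax outputs of the current model
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def build_Q(entropies, features, mu=0.01, gamma=1.0):
    # Diagonal: uncertainties e(x_i); off-diagonal: mu * diversity R(i, j).
    # The diversity here is a Gaussian-kernel distance on deep features
    # (one possible choice of kernelized distance).
    sq_dists = np.square(features[:, None, :] - features[None, :, :]).sum(-1)
    R = 1.0 - np.exp(-gamma * sq_dists)
    Q = mu * R
    np.fill_diagonal(Q, entropies)
    return Q

def truncated_power(Q, b, n_iter=100):
    # Approximately solve  max_z z^T Q z  s.t.  z binary, sum(z) = b.
    n = Q.shape[0]
    # Shift the diagonal so the quadratic form is psd (monotone convergence).
    shift = max(0.0, -float(np.linalg.eigvalsh(Q).min()))
    Qs = Q + shift * np.eye(n)
    # Initialization: indicator of the b largest column sums.
    z = np.zeros(n)
    z[np.argsort(Qs.sum(axis=0))[-b:]] = 1.0
    for _ in range(n_iter):
        g = Qs @ z                              # power step
        z_new = np.zeros(n)
        z_new[np.argsort(g)[-b:]] = 1.0         # keep only the b largest entries
        if np.array_equal(z_new, z):
            break
        z = z_new
    return np.flatnonzero(z)                    # indices of the videos to query

The returned indices would then be used to fetch the b unlabeled videos that are passed on to the frame-sampling step.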
§.§.§ Computational Considerations Computing the diversity matrix R involves quadratic complexity. We first note that R needs to be computed only once in our framework, before the start of the AL iterations. As the unlabeled videos get queried through AL, we can keep deleting the corresponding rows and columns in R to derive the new diversity matrix. Moreover, random projection algorithms can be used to speed up computations. The theory of random projections states that, if we have a point cloud in a high dimensional space, they may be projected into a suitable lower-dimensional space such that the distances between the points are approximately preserved <cit.>. A data matrix A ∈^N × D in the D dimensional space is multiplied by a random projection matrix X ∈^D × d (d ≪ D) to obtain a projected matrix B ∈^N × d in the lower dimensional space d: B = AX <cit.>. This can be used to substantially reduce the computational overhead, as distance computations are more efficient in the low dimensional space. We will explore this as part of future research. §.§ Active Frame Sampling Once we select b videos from the unlabeled set, our next task is to identify a subset of k frames from each of these videos; we exploited representative sampling techniques for this purpose. These techniques identify the exemplar data points which well-represent a given dataset. In particular, the coreset algorithm selects a subset of points such that a model trained over the selected subset is maximally similar to that trained on the whole dataset. For the sake of completeness, we discuss the main ideas here and request interested readers to refer to <cit.> for further details. Coreset poses the subset selection problem as: min_s: |s|=k | 1/n∑_i ∈ [n] l(x_i, y_i, A_i) - 1/|s|∑_j ∈ s l(x_j, y_j, A_j) | where (x_i, y_i) denotes a training sample and its label, A_i denotes a learning algorithm which outputs a set of parameters by minimizing a loss function l(. , . , .) on a given labeled set i. Informally, given a budget k, the goal is to select a set of samples s, such that the model trained on s depicts similar performance as the model trained on the whole dataset with n samples. This function cannot be directly optimized, as the labels of the samples in the unlabeled set are unknown. An upper bound of this function was derived and the problem of active sampling was shown to be equivalent to the k-center problem (also called min-max facility location problem) <cit.>. The objective of this problem is to select k center points from n samples, such that the largest distance between a data point and its nearest center is minimized. Formally, this can be posed as follows: min_s: |s| = kmax_imin_j ∈ sΔ(x_i, x_j) This problem is NP-Hard <cit.>. However, a greedy algorithm, as detailed in Algorithm <ref>, is guaranteed to produce a solution s such that: max_imin_j ∈ sΔ(x_i, x_j) ≤ 2 × OPT, where OPT is the optimal solution. We used this algorithm to select a subset of k frames from each of the queried videos. As evident from the formulation, our method does not make any assumptions about the contents of the video, and is applicable to any type of video. § EXPERIMENTS AND RESULTS §.§ Datasets We used the UCF-101 <cit.> and the Kinetics datasets <cit.> to study the performance of our algorithm. Both these datasets contain videos of humans performing a variety of actions, captured under unconstrained, real-world conditions, and are extensively used to study the performance of video classification algorithms. 
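For concreteness, the greedy 2-approximation to the k-center problem used in the frame-sampling step above can be rendered as in the following sketch; it is an illustrative snippet (not the original implementation) that operates on per-frame feature vectors of a single queried video, and all names are chosen here.

import numpy as np

def greedy_k_center(frame_feats, k, seed=0):
    # frame_feats: (n_frames, d) array of features, one row per frame.
    rng = np.random.default_rng(seed)
    n = frame_feats.shape[0]
    selected = [int(rng.integers(n))]            # start from an arbitrary frame
    # distance of every frame to its nearest selected center
    d_min = np.linalg.norm(frame_feats - frame_feats[selected[0]], axis=1)
    while len(selected) < min(k, n):
        nxt = int(np.argmax(d_min))              # frame farthest from all centers
        selected.append(nxt)
        d_new = np.linalg.norm(frame_feats - frame_feats[nxt], axis=1)
        d_min = np.minimum(d_min, d_new)         # update nearest-center distances
    return selected                              # indices of the exemplar frames

As discussed above, this greedy construction guarantees that the largest frame-to-center distance is within a factor of two of the optimal k-center solution.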
We used data from 5 classes at random from each dataset for our experiments. §.§ Oracle Simulation All the publicly available video datasets contain annotations for the complete videos; we did not find any datasets which contain annotations based on a subset of frames. Also, different active sampling algorithms will select different subsets of frames, and it is challenging to obtain annotations from a human labeler for every possible subset of frames for a given video, to conduct experiments. We therefore used a deep neural network to simulate the human labeling oracle in our empirical studies. The oracle model was trained on a completely different subset of the data. No information about the oracle model was used in the design and development of our active learning algorithm. During AL, when a video sample was selected for query, the selected frames were passed as an input to the trained oracle model and its prediction entropy on the sample was computed. If the entropy exceeded a particular threshold τ_oracle, the oracle was assumed to be not confident enough to produce a label, and no label was returned; otherwise, the oracle returned the predicted label (which may be correct or incorrect). These were done to appropriately mimic a real-world data annotation setup with a human annotator. §.§ Implementation Details Base Model: We used a CNN-RNN architecture in our experiments where InceptionV3 pretrained on the ImageNet-1k dataset was used as the feature extractor and a GRU network as the decoder [<https://keras.io/examples/vision/video_classification/>]. The input frames were scaled and normalized to a fixed input size of 224 × 224 pixels and fed into the Convolutional Neural Network (CNN). The features extracted were fed into a 5-layer GRU network which consists of 2 GRU layers and 1 fully connected layer with one dropout layer. The 2 GRU layers had 20 and 12 neurons, while the first fully connected layer had 8 neurons with the ReLU activation function. We used the adam optimizer with a learning rate of 0.001, momentum of 0.99, batch size of 32, and the network was trained for 20 epochs in each active learning iteration. Oracle Model: We used a similar CNN-RNN architecture as the oracle model. However, for the oracle model, the 2 GRU layers of the GRU network had 40 and 16 neurons. We used the adam optimizer with a learning rate of 0.001 for the UCF dataset and 0.01 for the Kinetics dataset, momentum of 0.99, batch size of 64, and the network was trained for 30 epochs. As part of future research, we plan to study the performance of our framework with other architectures for the oracle model, and also conduct experiments with real people as annotators. §.§ Experimental Setup Each dataset was split into 5 parts: (i) an initial training set L; (ii) unlabeled set U; (iii) test set T; (iv) training set to train the oracle model L_oracle; and (v) test set T_oracle to test the oracle model and compute the entropy threshold τ_oracle. The number of samples (videos) in each of these sets, together with the accuracy of the oracle model (A_oracle) for each dataset are depicted in Table <ref>. We note that a better trained oracle could have potentially improved the performance of our algorithm; however, we wanted to validate our algorithm in a challenging real-world setup, where the annotators can abstain from labeling samples and can also provide incorrect labels. We therefore used an oracle model with moderate accuracy (≈ 70 - 75%) in our empirical studies. 
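The simulated-oracle protocol described above can be sketched as follows; this is an illustrative snippet assuming a Keras-style model with a predict method, not the code used in the experiments, and the names are placeholders.

import numpy as np

def simulated_oracle(oracle_model, queried_frames, tau_oracle):
    # queried_frames: array of shape (1, k, H, W, 3) holding the k frames
    # selected from one unlabeled video. Returns a class index, or None if
    # the oracle abstains from labeling.
    probs = oracle_model.predict(queried_frames)[0]   # softmax over C classes
    probs = np.clip(probs, 1e-12, 1.0)
    entropy = -np.sum(probs * np.log(probs))
    if entropy > tau_oracle:
        return None                                   # not confident: no label
    return int(np.argmax(probs))                      # label may be right or wrong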
The oracle model was trained on L_oracle; each sample in T_oracle was then passed as an input to the trained oracle and the prediction entropy was noted. The 50^th percentile of the prediction entropy distribution was taken as the entropy threshold τ_oracle; during the AL iterations, if the entropy of any queried video exceeded this threshold, the oracle was assumed to abstain from labeling. The base model was first trained on the set L. In each AL iteration, each algorithm queried b videos from the set U, and k frames from each of the b videos. The k frames of each video were then passed as an input to the oracle model. Based on its prediction entropy on the sample, the oracle may or may not furnish a label for a given unlabeled video sample. If the oracle does not furnish a label for a given video, it was discarded. The other unlabeled videos (which were labeled by the oracle), together with the returned labels were then appended to the training set, the base model was updated, and its accuracy was computed on the test set. The process was repeated for 10 iterations, which was taken as the stopping criterion in this work. All the results were averaged over 3 runs (with different training, unlabeled and test sets) to rule out the effects of randomness. The video budget b was taken as 25 and the frame budget k as 100 in each AL iteration. The weight parameter μ in Equation (<ref>) was taken as 0.01 and a Gaussian kernel was used to compute the diversity matrix in Equation (<ref>). §.§ Comparison Baselines As mentioned in Section <ref>, existing AL techniques for video classification query the complete videos for annotation and the labels obtained are assumed to be always correct. In our framework, the labeling oracle may refuse to provide a label to a queried video and may also provide an incorrect label. This is a more challenging and real-world scenario. It will thus not be fair to compare our method against the existing techniques. We used three comparison baselines in this work: (i) Random-Random (RR), where we selected a batch of b videos at random and a subset of k frames from each video at random; (ii) Entropy-Random (ER), where the b videos with the highest classification entropies were queried and k frames were queried from each at random; and (iii) Entropy-kmeans (EK), where b videos were first selected using entropy sampling; k-means clustering was then performed and the k frames corresponding to the k cluster centroids were selected for query from each video. §.§ Active Learning Performance The AL performance results are depicted in Figure <ref>. In each figure, the x-axis represents the iteration number, and the y-axis denotes the accuracy on the test set. The proposed method comprehensively outperforms the RR method on both datasets. The ER method depicts random fluctuations in the test accuracy over the AL iterations; our method, on the other hand, depicts a more steady growth in the test accuracy. The EK method depicts the best performance among the baselines, but is not as good as our method. Our method outperforms EK in most of the AL iterations across both the datasets. It also attains the highest accuracy after 10 AL iterations for both the datasets. 
We can conclude the following: (i) our video selection criterion based on uncertainty and diversity identifies the most informative videos in the unlabeled set; and (ii) our frame selection criterion based on representative sampling selects a subset of exemplar frames from each queried video, so that a large percentage of them can be correctly annotated by the oracle, which enriches the quality of our training data. As a result, our method augments maximal useful information to the deep neural network, which boosts its generalization capability. These results unanimously corroborate the potential of our framework in substantially reducing the human annotation effort in real-world video classification applications, where labeling a single sample involves significant time and human labor. The performance of the oracle model is reported in Tables <ref> and <ref> for the UCF and Kinetics datasets respectively. A total of 250 videos were queried from these datasets (25 videos in each of the 10 AL iterations). The tables show the percentage of these videos that were correctly annotated, incorrectly annotated and discarded by the labeling oracle. For the UCF dataset, and for the proposed method, the oracle correctly annotated 58.66% of the queried videos (the highest among all the methods). This shows that representative sampling through coreset is an effective strategy to identify the exemplar frames from a queried video, which have a high probability of receiving the correct label from the oracle, and augmenting useful information to the training set. For the Kinetics dataset, 66% of the videos queried by our method were correctly annotated by the oracle, where as 67.33% of the videos queried by Random Sampling were annotated correctly by the oracle. However, we note that, having a high percentage of unlabeled videos correctly labeled by the oracle does not necessarily mean that the useful samples are being queried. For instance, it is easy to select a batch of videos, which do not have much useful content and are easy to label, and get a high percentage of them correctly labeled by the oracle. However, these videos, even if correctly labeled, will not augment much useful information to the training set, as they are devoid of any useful content. Even though RR depicts a slightly higher percentage of correctly labeled samples than our method in Table <ref>, its generalization accuracy is much worse than our method, as evident from Figure <ref>. The key challenge is to query a set of informative videos and get a high percentage of them correctly labeled by the oracle; both of these are crucial in improving the generalization capability of the model over time. The results in Figure <ref> jointly capture both these aspects, and show that our method outperforms the baselines. §.§ Effect of the Number of Queried Frames per Video In this experiment, we studied the effect of the frame budget k (number of frames allowed to be selected from a queried video) on the AL performance. The results on the UCF dataset, with frame budgets 10, 20, 50 and 100 are presented in Figure <ref>. Our method depicts impressive performance across different frame budgets. For frame budgets 20, 50 and 100, our framework attains the highest test accuracy after 10 AL iterations. Note that querying lesser number of frames from a video lessens the labeling burden on the oracle, as the oracle has to review an even smaller number of frames to furnish a label. 
These results show the promise and potential of our technique to further reduce human annotation effort in a video classification application. §.§ Effect of the Number of Queried Videos The goal of this experiment was to study the effect of the video budget b (number of videos queried in each AL iteration) on the AL performance. The results on the UCF dataset with b = 15, 20, 25 and 30 are shown in Figure <ref>. Our framework once again surpasses the baselines across the different budgets. These results are important from the standpoint of a real-world application, where the batch size is governed by the time, man-power and other available resources in a given application, and is different for different applications. § CONCLUSION AND FUTURE WORK The goal of this research was to devise an efficient annotation mechanism to reduce human annotation effort in video classification applications, where annotating a single data instance is extremely tedious. Our framework identifies a batch of informative videos, together with a set of exemplar frames from each; the human annotator has to produce a label for each video just by reviewing the subset of frames, instead of watching the complete video end-to-end. To the best of our knowledge, this is the first research effort to develop an AL technique for video classification, with this kind of a query and annotation mechanism. Our empirical results validated the promise and potential of our framework to drastically reduce human annotation effort in training a deep neural network for video classification. We hope this research will motivate the development of AL algorithms with other annotation mechanisms, with the goal of further reducing the human annotation effort in video classification. As part of future work, we plan to validate the performance of our algorithm on other applications where the data has a temporal nature. For instance, the proposed query mechanism will also be very relevant in a text classification application to identify informative text snippets, so that a human annotator can furnish a label by reviewing only the snippets, rather than reading the document end-to-end. We will also study the performance of our framework with different size of the data splits, as outlined in Table <ref>. IEEEtran
http://arxiv.org/abs/2307.04478v1
20230710110110
A closed form exact formulation of the spectral representation of a second-order symmetric tensor and of its derivatives
[ "Andrea Panteghini" ]
cs.CE
[ "cs.CE", "cs.NA", "math.NA" ]
A closed form exact formulation of the spectral representation of a second-order symmetric tensor and of its derivatives Andrea Panteghini August 12, 2023 ==================================================================================================================================================== The spectral decomposition of a symmetric, second-order tensor is widely adopted in many fields of Computational Mechanics. As an example, in elasto-plasticity under large strains and rotations, given the Cauchy deformation tensor, it is a fundamental step to compute the logarithmic strain tensor. Recently, this approach has also been adopted in small-strain isotropic plasticity to reconstruct the stress tensor as a function of its eigenvalues, allowing the formulation of predictor-corrector return algorithms in the invariants space. These algorithms not only reduce the number of unknowns at the constitutive level, but also allow the correct handling of stress states in which the plastic normals are undefined, thus ensuring better convergence with respect to the standard approach. While the eigenvalues of a symmetric, second-order tensor can be simply computed as a function of the tensor invariants, the computation of its eigenbasis can be more difficult, especially when two or more eigenvalues are coincident. Moreover, when a Newton-Raphson algorithm is adopted to solve nonlinear problems in Computational Mechanics, the tensorial derivatives of the eigenbasis, whose computation is even more involved, are also required to assemble the tangent matrix. A simple and comprehensive method is presented, which can be adopted to compute a closed-form spectral representation of a second-order tensor, as well as its derivatives with respect to the tensor itself, allowing a simpler implementation of the spectral decomposition of a tensor in Computational Mechanics applications. § INTRODUCTION This paper presents important developments regarding the eigenvalues and eigenvectors of a symmetric second-order tensor and the determination of the associated basis required for its spectral representation. The results presented here apply to situations involving isotropic scalar-valued functions and isotropic tensor-valued functions of a symmetric second-order tensor. For instance, the findings of this article are useful for the integration of constitutive laws of isotropic materials and in finite deformations (e.g., to compute the logarithmic strain tensor from the displacement gradient). The numerical integration of isotropic elasto-plastic constitutive laws can be carried out more efficiently by formulating the return algorithms in terms of eigenvalues of the elastic strain tensor (e.g. Borja et al. <cit.> and de Souza Neto et al. <cit.>), or in the elastic strain invariants space <cit.>, <cit.>. Differently from the standard approach <cit.>, an invariant-based return algorithm allows the correct handling of stress states in which the plastic normals are undefined. These two integration algorithms require the spectral representation of the stress, as well as the determination of its derivatives to assemble the stiffness matrix. Unfortunately, their determination using the approach described in the literature is very cumbersome (see e.g., De Souza Neto et al. <cit.>, Borja et al. <cit.>), particularly when two or three eigenvalues coincide.
This key aspect certainly makes these invariant-based integration algorithms, although more efficient, less attractive than standard return algorithms formulated in terms of full tensorial components. Regarding applications in large-strain theories, to avoid the complexity of the standard procedure, commercial codes (e.g. SIMULIA Abaqus <cit.>) often employ approximate formulations to numerically integrate the logarithmic strain in finite deformation analyses. For specific isotropic functions, some authors suggest resorting to numerical approximations based on series expansions (e.g. Ortiz et al. <cit.>, de Souza Neto <cit.>, Hudobivnik et al. <cit.>). However, it should be noted that these series-based procedures, although simpler and numerically efficient, can hardly be adopted when the isotropic functions are not known explicitly (for instance, in the integration of the isotropic elastoplastic materials described above). The author later discovered that Ogden <cit.> incidentally describes, in an exercise contained in his book, a very important result which, to the best of the author's knowledge, seems to have been missed by the vast majority of the research community. He suggests a very simple method for retrieving a closed-form expression for the basis of the spectral decomposition of a second-order tensor which does not require the computation of the originating eigenvectors. This result was later also reported by Miehe <cit.>, who however states that "the formulation above is restricted to the case of distinct eigenvalues of the tensor". Moreover, the same author <cit.> points out that such an approach requires the inversion of the second-order tensor, which severely restricts the applicability of the method. De Souza Neto et al. <cit.> describe a very cumbersome method to evaluate both the basis and its spin. They also state that "...a methodology similar to that adopted here was introduced by Miehe (1993, 1998a), where a particularly compact representation for the function derivative is used. However, the compact representation allows only the computation of the derivative at invertible arguments and cannot be used...". In this paper, it is mathematically shown that the basis required for the spectral representation of a symmetric second-order tensor can indeed be derived without the computationally expensive evaluation of the associated eigenvectors. It is also shown that this basis can be derived directly from the secular (or characteristic) equation of the tensor, without any assumption about the invertibility of the second-order tensor. Most importantly, it is clarified how the result can be particularized to the case of two and three coinciding eigenvalues, hence removing the strong limitation of the approach described by Miehe <cit.>, <cit.>, which de facto prevents the application of this extremely useful result. This paper also provides the tensor derivatives of the basis, i.e. its spin. Moreover, a simple and generic approach is presented to compute the spectral representation of isotropic tensor-valued functions, as well as their derivatives with respect to the tensor variable itself. The proposed procedures can be practically adopted in computational mechanics since all limitations of the procedures available in the literature have been removed (the approach of De Souza Neto et al. <cit.> does not have such limitations but is laborious to implement).
Finally two applications are presented for isotropic elasto-plasticity and for the evaluation of the logarithmic strain tensor in finite deformations. § EIGENVALUES, EIGENVECTORS AND SPECTRAL REPRESENTATION OF A SYMMETRIC, SECOND-ORDER TENSOR Given the symmetric, second-order tensor , its (ordered) eigenvalues λ_i and their corresponding eigenvectors n_i are obtained by solving the eigenvalues-eigenvectors problem <cit.>: ( -λ I )n = 0 n^Tn=1 being I the second-order identity tensor. The principal components λ_i can be obtained by solving the third-order scalar equation in λ, namely the secular equation: λ^3 -I_1 λ^2 + I_2 λ - I_3 = 0 The coefficients I_1 = I_2 = 1/2(I_1^2 - :^T) I_3 = () are the invariants of , since their values do not depend on the reference system in which is expressed. The three ordered solutions of Eq. (<ref>) are the eigenvalues of the problem described in Eq. (<ref>). As explained in <cit.>, they can be computed in closed form as: λ_I=I_1/3+2/√(3)√(J_2)sin(θ+ 2/3π) λ_II=I_1/3+2/√(3)√(J_2)sin(θ) λ_III=I_1/3+2/√(3)√(J_2)sin(θ- 2/3π) where J_2 = 1/2: J_3 = () are the invariants of the second-order, deviatoric symmetric tensor = - I_1/3 I, and the Lode's angle θ is defined as θ = 1/3arcsin( - √(27)/2J_3/√(J_2^3)) where -π/6 ≤θ≤π/6. It is well known that the second-order symmetric tensor T can be expressed as a function of its eigenvalues λ_i and the corresponding eigenvectors n_i by resorting to the spectral theorem[ Let consider that, unless otherwise specified, it is always intended ∑ f_i = ∑_i=I,II,III f_i ]: T = ∑λ_i n_i ⊗n_i = λ_i N_i where N_i is the eigenbasis of T related to λ_i. § CLOSED-FORM EXPRESSION FOR THE EIGENBASIS OF T We will consider three cases, as a function of the multiplicity of the eigenvalues λ_i: * λ_I> λ_II >λ_III * λ_I> λ_II=λ_III or λ_I= λ_II>λ_III * λ_I= λ_II=λ_III Let observe that the number of non coincident eigenvalues, i.e., the the eigenvalues multiplicity can be simply determined from the invariants of T. Hence, case (i) occurs when J_2≠0 and θ±π/6, the case (ii) implies J_2≠0 and θ=±π/6, and finally the case (iii) requires that J_2=0 (while θ is undefined). §.§.§ A general property of the eigenbasis N_i. We will initially prove that it results: ∑N_i = I Let consider that the i-th eigenvalue and eigenvector of will satisfy Eq. (<ref>), i.e. n_i= λ_in_i Since n_i is a unit vector, it results n_i^T n_i = : ( n_i ⊗n_i)= λ_i ( n_i^T n_i)= λ_i one can compute the first invariant I_1 in the principal coordinate system as I_1==:I=∑λ_i = : ∑N_i From this equation it must result :I = : ∑N_i This conditions yields ∑N_i = I §.§.§ Case (i): λ_I> λ_II >λ_III. One will prove that the spectral theorem =∑λ_i (n_i ⊗n_i ) = ∑λ_i N_i can be written as = ∑λ_i λ_i i.e., we will prove that it simply results[It should be noted that, to the best of the Author's knowledge, this result appears for the first time, without any demonstration or explanation in Ogden's book <cit.>. It has been used by Mihe <cit.>, <cit.>, but, as explained in the Introduction, due to the limitations of his approach, it seems it is not commonly adopted in Computational Mechanics. ]: N_i= λ_i By considering the symmetry of T, the derivatives of the invariants I_1, I_2 and I_3, defined by Eq. (<ref>), (<ref>) and (<ref>) with respect to are: I_1=I I_2=I_1 I- I_3=I_3 ^-1 = where denotes the adjugate matrix of . By substituting the property (<ref>) and the spectral theorem (<ref>) into Eq. 
(<ref>) and (<ref>) respectively, one obtains: I_1=∑N_i I_2=I_1 I-∑λ_i N_i Finally, by resorting to the spectral theorem (<ref>), one can write (<ref>) as [ Let observe that, by multiplying Eq. (<ref>) by = I_3 ^-1 one obtains I_3 ^-1n= λ I_3 ^-1n which gives n = I_3/λn Hence, the eigenvectors n of and are coincident, whilst the i-th eigenvalue μ_i of associated to n_i can be computed from λ_i as: μ_i= I_3/λ_i= (λ_jλ_k)_i j k The spectral representation of is then: = ∑( λ_jλ_kN_i )_i j k ] I_3= ∑(λ_jλ_kN_i )_i j k Let consider now that the value of I_1, I_2 and I_3 are independent with respect to the reference systems, hence one can compute them also in terms of principal components. It result: I_1= λ_I+λ_II+λ_III I_2= λ_Iλ_II+ λ_Iλ_III +λ_IIλ_III I_3 = λ_Iλ_IIλ_III The derivatives of the invariants I_1, I_2 and I_3 can also be computed by differentiating these last three expressions, observing that λ_i=λ_i(). It results: I_1=λ_I+λ_II+λ_III =∑λ_i I_2=I_1∑λ_i-∑λ_iλ_i= I_1 I -∑λ_iλ_i I_3 = λ_IIλ_IIIλ_I + λ_Iλ_IIIλ_II + λ_Iλ_IIλ_III =∑(λ_jλ_kλ_i)_i j k One can now compute the eigenbasis N_i as a function of the derivatives of the eigenvalues λ_i with respect to by solving the linear system of equations obtained by equating Eq. (<ref>), (<ref>) and (<ref>) with Eq. (<ref>), (<ref>) and (<ref>) respectively. One obtains { ∑N_i = ∑λ_i ∑λ_i N_i = ∑λ_i λ_i ∑(λ_jλ_kN_i )_i j k=∑(λ_jλ_kλ_i)_i j k. which, under the assumption λ_I> λ_II >λ_III[ Let observe that the determinant of the matrix of the system (<ref>) reads: [ 1 1 1; λ_I λ_II λ_III; λ_IIλ_III λ_Iλ_III λ_Iλ_II ] = - (λ_I-λ_II)(λ_I-λ_III)(λ_II-λ_III) It is always nonzero if λ_I> λ_II >λ_III. ] simply gives N_i = λ_i so that the spectral theorem (<ref>) can be re-written as: = ∑λ_i (n_i ⊗n_i ) = ∑λ_i λ_i §.§.§ Case (ii): λ_I> λ_II=λ_III or λ_I= λ_II>λ_III. If one or more eigenvalues are coincident of , then the linear system (<ref>) will not admit a unique solution. Let λ̂ be the non-repeated eigenvalue of and N̂ the correspondent eigenbasis. The first invariant I_1 is equal to: I_1 = λ̂+ 2 λ_II so that, it results: λ_II= 1/2(I_1 - λ̂) Eq. (<ref>) can be rewritten as: N̂ + 2 N_II = I hence, it results: N_II = 1/2(I -N̂) The spectral theorem can be rewritten as: =λ̂N̂ + 1/2(I_1-λ̂) (I -N̂) = 3/2(λ̂- I_1/3) N̂+ 1/2( I_1- λ̂) I Eq. (<ref>) can be further simplified by computing the deviatoric part l̂ of λ̂ as l̂=λ̂-I_1/3. One obtains = I_1/3I+ 3/2l̂( N_j -1/3I) This last equation clearly shows that, when two eigenvalues are coincident, the deviatoric part of N̂, defined as N̂^d= N̂ -I/3, is simply proportional to the deviatoric part of the tensor , i.e. N̂^d = 1/λ̂-λ_IIt = ∓1/qt θ=±π/6 where q=√(3 J_2). This result is a consequence of the multiplicity of the deviatoric principal components. When two eigenvalues of T coincide, the two coincident deviatoric principal components result to be minus half of the (only) independent one, since their sum must vanish. Eq. (<ref>) results to be the sum of two independent terms: the volumetric and the deviatoric parts. The basis of the volumetric part is obviously proportional to the identity tensor I, whilst that of the deviatoric part can only be proportional to the tensor itself. 
= I_1/3I+ 3/2l̂N̂^d It should be noted that, as in Case (i), it is still possible to demonstrate that N̂=λ̂ To prove this result, let compute the second invariant J_2 of the deviatoric tensor as a function of the principal component λ̂: J_2= :/2 =(λ̂-I_1/3 )^2 + 2 (λ_II -I_1/3 )^2 /2 = 3 (λ̂-I_1/3 )^2 /4 By differentiating this expression with respect to , one obtains J_2= = - I_1/3I=3/2(λ̂-I_1/3) (λ̂- 1/3I) so that, solving for one obtains: = 3/2(λ̂- I_1/3)λ̂+ 1/2( I_1- λ̂) I By equating this last expression with Eq. (<ref>) and solving for N̂[This can be done under the condition λ̂ I_1/3 that, observing Eq. (<ref>) is equivalent to J_20] one obtains: N̂ = λ̂ §.§.§ Case (iii): λ_I= λ_II=λ_III. Finally, let consider the case of three coincident eigenvalues λ=λ_I= λ_II=λ_III. The tensor is purely volumetric in any reference system. By observing that it results l_i = 0 ∀ i and I_1=3λ, Eq. (<ref>) simply becomes = λI From Eq. (<ref>) it results N_I=N_II=N_III=1/3I § COMPUTATION THE EIGENBASIS DIRECTLY FROM THE SECULAR EQUATION Since the three eigenbasis are equal to the derivatives of its conjugate principal components with respect to the tensor , one can determine them by simply differentiating Eqs. (<ref>) with respect to . Using the chain rule, one obtains: N_i = λ_i = 1/3I+ √(3)/3( sinβ_i/√(J_2)J_2+2 J_2 cosβ_i θ) where β_I= θ+2/3 π, β_II= θ, β_III= θ-2/3 π, and [ It should be noted that Eq. (<ref>) requires the computation of ^-1. An expression more suitable for the implementation is θ=-1/cos 3 θ(√(3)/2 √(J_2^3)J_3+ √(3)/6 √(J_2)I + sin 3θ/2 J_2) where J_3= [ s_yy s_zz-s_yz^2 s_xz s_yz- s_xy s_zz s_xy s_yz - s_xz s_yy; s_xz s_yz- s_xy s_zz s_xx s_zz -s_xz^2 s_xy s_xz - s_yz s_xx; s_xy s_yz - s_xz s_yy s_xy s_xz - s_yz s_xx s_xx s_yy - s_xy^2 ] that is undefined only for J_2=0 or θ=±π/6 ] J_2= θ=1/cos 3 θ(sin 3θ/3^-1 - √(3)/6 √(J_2)I - sin 3θ/2 J_2) The computation of the spin of the eigenbasis, i.e. N_i is even more tiring. A more elegant and simpler approach can be obtained by working directly on the secular equation (<ref>). Each of the eigenvalues λ_i will satisfy Eq. (<ref>), i.e. f()= λ_i^3 - I_1 λ_i^2 + I_2 λ_i- I_3 =0 hence, it must result f()=[(3 λ_i^2 - 2 I_1 λ_i + I_2 ) λ_i- Iλ_i^2 . . + (I_1 I - )λ_i + I_3 ^-1]: =0 ∀ This imply the condition: (3 λ_i^2 - 2 I_1 λ_i + I_2 ) λ_i- Iλ_i^2 + (I_1 I - )λ_i + I_3 ^-1=0 The eigenbasis N_i can be obtained by simply solving this last equation of λ_i. By observing that J_2=1/3 I_1^2-I_2, after some simple algebraic manipulation, one obtains[ A very compact way to write this derivative is I_3 ^-1. However, it should be noted that it is not completely correct from a formal point of view, since it is undefined when I_3=0. The invariant I_3, being defined as , is simply the adjugate matrix of , that is always defined. In simpler words, being I_3= a third degree polynomial in _ij, its derivative with respect to is always defined. It results: I_3= =[ _yy_zz-_yz^2 _xz_yz- _xy_zz _xy_yz - _xz_yy; _xz_yz- _xy_zz _xx_zz -_xz^2 _xy_xz - _yz_xx; _xy_yz - _xz_yy _xy_xz - _yz_xx _xx_yy - _xy^2 ] Eq. (<ref>) becomes N_i = λ_i = λ_i [(λ_i-I_1 ) I+ ]+I_3/J_2 (4 sin^2 β_i -1) ]: N_i = λ_i = λ_i [(λ_i-I_1 ) I+ ]+I_3^-1/J_2 (4 sin^2 β_i -1) The spin of the eigenbasis can be obtained by differentiating Eq. (<ref>) by the tensor . One obtains N_i= ^2λ_i/⊗ =1/J_2 (4 sin^2 β_i -1)[ - 4 √(3 J_2)sinβ_i (N_i⊗N_i) . . + (2 λ_i -I_1 ) (N_i⊗I + I⊗N_i). 
.+ (N_i⊗ + ⊗N_i) + λ_i (I -I⊗I) + ^2 I_3/⊗] where I is the fourth-order identity tensor and ^2 I_3/⊗ = δ_jk_il+ _jkδ_il being δ_ij the Kroneker delta operator. Let note that, even in the case of two coincident λ_i, the spin of the basis associated to the non-repeated eigenvalue λ̂ can still be computed using Eq. (<ref>). It is the only spin required to compute the derivative of Eq. (<ref>). However, by exploiting the proportionality between the deviatoric part of the tensor and the basis itself, it can be simpler obtained by means of Eq. (<ref>). As explained in the previous section, when all the eigenvalues coincide, the three eigenbasis N_i are simply equal to I/3. Their spin is not defined, but, as explained in the next section, it is still possible to evaluate the derivative of the spectral representation of the tensor when its invariants are isotropic functions. § ISOTROPIC FUNCTIONS In many mechanical applications it is a priori known that two second-order, symmetric tensors S and T share the same principal directions. Under these conditions, the two tensors are called co-axial. These applications usually involve isotropic tensor functions, i.e., the invariants of the tensor T are function of the those of the tensor S. In these applications, once the principal components η_i of the tensor S are computed as a function of those of T, say λ_i it is finally required to compute the Cartesian components of S. Let S be a symmetric, second-order tensor, co-axial with . Let assume that the generic eigenvalues η_i( λ_I, λ_II, λ_III) of S can be computed as a function of the eigenvalues λ_i of . Since S and are co-axial, they will share the same eigenbasis N_i and it results S=∑η_i ( λ_I, λ_II, λ_III)N_i Since it results N_i ⊗N_j=0 for i j, the derivative of this expression with respect to the tensor T will be S = ∑η_iλ_iN_i ⊗N_i + η_i N_i Let consider the case in which two eigenvalues λ_i of coincide. As explained in the section above, under this condition it results that the deviatoric part of S, say s, results to be proportional to the deviatoric part of T, say t. Hence, one can compute S as S= I_1S/3I+ q_S/q_Tt where I_1T=T is the first invariant of T, q_T=√(3/2t:t), I_1S=S=I_1S(I_1T,q_T) and q_S=√(3/2s:s)=q_S(I_1T,q_T). Let now compute S. Since s and t are simply proportional, it must result θ_S=θ_T and then θ_Sθ_T=1 Moreover, considering that Eq. (<ref>) gives: q_S(θ_S)= √(-27/2J_3S/sin (3 θ_S)) it results q_Sθ_T=qθ_Sθ_Sθ_T = 3√(4)/2J_3Scos (3 θ_S) √(sin^2 (3 θ_S))/√(J_3S^2)sin^2 (3 θ_S)= 0 θ_S=θ_T=±π/6 Analogously I_1Sθ_T=I_1Sθ_Sθ_Sθ_T=0 ∀θ_T Hence, observing that from Eq. (<ref>) it results that q_TT = 3/2 q_Tt = ∓3/2N̂^d θ_T=±π/6 by differentiating Eq. (<ref>) with respect to T one obtains: S= 1/3I_1SI_1TI⊗I∓1/2I_1Sq_TI⊗N̂^d + q_S/q_T(ℐ - 1/3I⊗I) + 3/2(q_Sq_T - q_S/q_T) N̂^d ⊗N̂^d ∓q_SI_1TN̂^d ⊗I θ_T = ±π/6 where ℐ is the fourth-order identity tensor. Finally, when all the eigenvalues coincide, Eq. (<ref>) reduces to: S= I_1S/3I whilst it derivative can be computed by particularizing Eq. (<ref>). By observing that when t→0, N_i →I/3, so that its deviatoric part N̂^d→0. Observing that q_S → 0 when q_T → 0, using a Tayor expansion for q_T → 0, it will result: q_S(I_1T, 0)≈q_Sq_T q_T so that q_S/q_T →q_Sq_T, and finally: S= 1/3I_1SI_1TI⊗I+ q_Sq_T(ℐ - 1/3I⊗I) § APPLICATIONS §.§ Isotropic elastoplastic materials under small-strains and displacements Let consider a generic elastoplastic isotropic material, in which the principal directions of the elastic strains and of the stress coincides. 
Let be s the deviatoric part of the Cauchy stress tensor σ, and p=1/3σ q=√(3/2s:s) θ_σ=1/3arcsin( -27/2s/q^3) the stress invariants, i.e. the hydrostatic pressure, the equivalent von Mises stress, and the stress Lode's angle respectively. In a general backward Euler integration scheme, let be ^* and Δ^p the elastic strain predictor and the plastic strain increment respectively. The plastic strain increment can be computed as a function of an isotropic plastic potential g(p,q,θ_σ) as Δ^p= g(p,q,θ_σ)σΔγ where Δγ is the plastic multiplier. Since g(p,q,θ_σ) is an isotropic function of σ, its derivative respect to σ will be co-axial with the stress <cit.> <cit.>. Then, since the elastic strain ^e is co-axial with σ for the assumption of isotropy, it results that also ^*=^e+Δ^p is co-axial with σ. For these reasons, the principal directions of stress are a priori known, being coincident with those of the predictor ^*. Let e^* the deviatoric part of the elastic predictor ^*, and _v^*=^* _q^*= √(2/3e^*:e^* ) θ^*_=1/3arcsin( - 4 e^*/_q^*3) the its invariants, i.e. the volumetric strain predictor, the equivalent von Mises strain predictor, and the strain predictor Lode's angle. In general, if a standard return algorithm in the full tensorial space is employed, numerical problems and convergence difficulties can arise when two or more eigenvalues coincide. Instead, p, q, θ_σ can be more easily computed formulating a return algorithm in the invariants strain space <cit.>. Once p, q and θ_σ have been obtained as a function of the strain invariants predictor, it is necessary to compute the stress tensor σ. If _q^* 0 and |θ^*_| π/6, one can compute the stress tensor from its invariants and from the eigenbasis N_i^* of the elastic strain predictor ^* by resorting to the spectral theorem. It results σ= ∑[p(_v^*,_q^*,θ^*_) + 2/3 q(_v^*,_q^*,θ^*_) sinβ_i (_v^*,_q^*,θ^*_)] N_i^* where β_I = θ_σ (_v^*,_q^*,θ^*_) + 2/3π β_II = θ_σ(_v^*,_q^*,θ^*_) β_III = θ_σ (_v^*,_q^*,θ^*_) - 2/3π and N_i^* is computed from Eq. (<ref>) as a function of the invariants of ^* and its principal components. The consistent jacobian matrix[It should be noted that this general approach has been recently adopted by the Author in <cit.>, while in his older work <cit.>, in order to avoid the computation of the spin of the eigenbasis, the spectral representation of the stress was computed as a function of the eigenvectors of the strain predictor, while jacobian matrix was obtained by means of a "simplified" procedure based on the inversion of a 6x6 matrix. Unfortunately, this procedure is model-specific and requires the smoothness in the deviatoric plane of the yield function and of the plastic potential.] can be computed from Eq. (<ref>) as σ^*=∑[p+ 2/3 q sinβ_i ] N_i^*^* + N_i^* ⊗{[ p_v^* +2/3(q_v^*sinβ_i + q θ_σ_v^*cosβ_i ) ]I. . +2/3_q^*[p_q^* +2/3(q_q^*sinβ_i + q θ_σ_q^*cosβ_i ) ]e^* . . +[pθ^*_ +2/3(qθ^*_sinβ_i + q θ_σθ^*_cosβ_i ) ]θ^*_^*} where the eigenbasis spin N_i^*^* and θ^*_^* are computed as a function of the invariants and principal components of ^* from Eqs. (<ref>) and (<ref>) respectively. If _q^* is not nil, at least two eigenvalues of the strain predictor ^* are distinct. Specifically, if θ^*_=±π/6 two eigenvalues of ^* will be coincident. In this case, from Eq. (<ref>) it will result that e^* will be proportional to the deviatoric part of the eigenbasis associated to its non-repeated eigenvalue. Hence, from Eq. 
(<ref>) one simply obtains: σ=p (_v^*,_q^*,θ^*_) I+2 /3 _q^* q (_v^*,_q^*,θ^*_) e^* Also the eigenbasis of the deviatoric part of the plastic strain increment Δe^p and of the elastic strains will coincide with those of e^*, and then it will result: Δ^p=^p_v/3I+_q^p/_q^*e^* ^e=^e_v/3+_q^e/_q^*e^* The jacobian matrix can be obtained simplifying Eq. (<ref>) using Eq. (<ref>). It yields: σ^*= p_v^*I⊗I+ 2/3_q^*[p_q^*(I⊗e^*) +q_v^*(e^* ⊗I) . . + 2/3 _q^*(q_q^* -q/_q^*) ( e^* ⊗e^*) + q ( ℐ-1/3I⊗I) ] If _q^* is nil, the strain predictor ^* will be a volumetric tensor, since its spectral decomposition has the same structure of Eq. (<ref>). Moreover, _q^*=0 implies e^* = 0. Since the material is isotropic, the eigenbasis of σ and ^* the same, resulting to be coincident with the second-order identity tensor I. Then, from Eq. (<ref>) it will result σ= p (_v^*) I The derivative of the eigenbasis is undefined. However, as explained in the section above, the Jacobian Matrix can be obtained as a limit case of Eq. (<ref>), i.e., using Eq. (<ref>). Let observe that, under purely volumetric conditions, the convexity of the elastic potential requires <cit.>: p_q^* =q_v^*=0 It results: σ^*= p_v^*I⊗I+ 2/3q_q^*( ℐ-1/3I⊗I) §.§ Computation of logarithmic strain tensor from displacement gradient In the framework of large strains and rotations, let p denotes the reference coordinate system. Indicating with u( p) the vector function describing the displacement of each material point, it results that its final position will be (i.g. <cit.>) x=p+ u( p) The deformation gradient F is defined as F= ∇_p x = I+∇_p u( p) By applying the polar decomposition (i.g. <cit.>) to the deformation gradient F, one obtains: F = VR where the orthogonal tensor R describes the local rotation, whilst the symmetric positive definite tensor V is the left stretch tensor, where V^2 = B = FF^T B being the left Cauchy-Green tensor. The logarithmic strain tensor can be computed as: =lnV=1/2lnB i.e., = 1/2∑ln( λ^B_i) N^B_i where λ^B_i and N^B_i are the i-th principal component and eigenbasis of the tensor B respectively. The invariants of B, I_1B, J_2B and θ_B can be computed using Eqs. (<ref>), (<ref>) and (<ref>), whilst the principal components λ^B_i can be obtained using Eqs. (<ref>). If λ^B_i are distinct, i.e., if J_2B 0 and |θ_B| π/6, all the eigenbasis N^B_i of the left Cauchy-Green tensor can be computed as a function of its invariants and its principal components using Eq. (<ref>). The logarithmic strain tensor can be computed using Eq. (<ref>). The jacobian matrix B can be computed by using Eq. (<ref>): B=1/2∑[ ln( λ^B_i) N^B_iB + 1/λ^B_iN^B_i⊗N^B_i] where N^B_iB can be computed using Eq. (<ref>). When two principal components of B are coincident, i.e. if J_2B 0 and |θ_B| = π/6, one can compute by exploiting the proportionality between the deviatoric part b of B and e. Let start by computing the invariants q_=√(3 J_2) and I_1 of as a function of q_B= √(3 J_2B) and I_1B. Let observe that it results q_B = ±( λ^B_II-λ̂^B) θ_B = ±π/6 By solving this expression for λ^B_II one obtatins λ^B_II= λ̂^B ± q_B θ_B = ±π/6 Substituting this result into the definition of I_1B=λ̂^B + 2λ^B_II and solving for λ̂^B gives λ̂^B= I_1B∓ 2 q_B/3 θ_B = ±π/6 By substituting this expression into Eq. (<ref>) one obtains λ^B_II= I_1B± q_B/3 θ_B = ±π/6 One can now compute the invariants of as a function of those of B. 
It results: I_1 =λ̂^+ 2 λ^_II = 1/2[ ln( I_1B∓ 2 q_B/3)+2ln( I_1B± q_B/3) ], q_= ±( λ^_II-λ̂^) = ±1/2( lnλ^B_II - lnλ̂^B ) = ±1/2ln( I_1B± q_B/I_1B∓ 2 q_B) θ_ = θ_B = ±π/6 The logarithmic strain tensor can be finally computed using Eq.(<ref>). It results: = I_1/3I + q_/q_B b Its derivative can be obtained by applying Eq. (<ref>). It results: B= 1/3I_1I_1BI⊗I∓1/2I_1q_BI⊗N̂^d_B + q_/q_B(ℐ - 1/3I⊗I) + 3/2(q_q_B - q_/q_B) N̂^d_B ⊗N̂^d_B ∓q_I_1BN̂^d_B ⊗I θ_ = θ_B = ±π/6 where, from Eq. (<ref>): N̂^d_B= ∓1/q_Bb θ_B = ±π/6 and, by computing the derivatives of Eq. (<ref>): I_1I_1B=3(± q_B - I_1B)/(I_1B± q_B)(± 4 q_B -2 I_1B) I_1q_B=3 q_B/(± 2 q_B-I_1B)(I_1B± q_B) q_I_1B = 3 q_B/(I_1B± q_B)(± 4 q_B -2 I_1B) q_q_B = - 3 I_1B/(I_1B± q_B)(± q_B -2 I_1B)) θ_ = θ_B = ±π/6 Finally, if J_2B = 0, then the logarithmic strain will be purely volumetric, and it will result λ_i^B = λ^B. Eqs. (<ref>) become: I_1 = 3/2ln( I_1B/3) = 3/2lnλ^B q_= 0 By applying Eq. (<ref>) it will result: = 1/2lnλ^B I To compute the derivative of with respect to B, let start substituting Eqs. (<ref>) into Eqs. (<ref>). It results: I_1I_1B=3/2 I_1B = 1/2 λ^B I_1q_B=q_I_1B = 0 q_q_B = 3/2 I_1B = 1/2 λ^B By substituting these expressions into Eq. (<ref>) one obtains: B= 1/2λ^Bℐ § CONCLUSIONS The spectral representation of a symmetric, second-order tensor is an important tool in many applications of computational mechanics. While the computation of the eigenvalues of a symmetric, second-order tensor is a relative simple task, obtaining a closed-form expression for the eigenbasis is more complicate, especially when some eigenvalue is repeated. Moreover, in many computational mechanics applications, also the derivative of the spectral representation is required. The exact closed-form expressions available in the literature for both the eigenbasis and their derivative are quite hard to implement (see, e.g., <cit.>). For this reason, many Authors suggest to resort to series expansions, that however are available only specific functions (see, e.g., <cit.>, <cit.>) or require automatic differentiation techniques for a generic function <cit.>, These approximate techniques are hard to apply when the isotropic tensor-valued functions are not known explicitly, such as, for instance, in the numerical integration of elastoplastic isotropic constitutive laws formulated in invariants space (<cit.> <cit.> <cit.>). In this paper, starting from a incidental result reported by Ogden <cit.> working only in the case of not coincident eigenvalues, an exact, simple and clear approach has been developed. Differently from that described by Miehe <cit.>, <cit.> no particular requirements about the invertibility of the tensor, or its eigenvalues multiplicity are necessary. Two applications have been presented: (i) the computation of stress tensor and of the stiffness matrix in the case of the numerical integration of an elastoplastic isotropic material in the invariant stress space, and (ii) the calculation of the logarithmic strain tensor from the displacement gradient, as well as its derivative with respect to the left Cauchy-Green tensor. plain
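As a compact, self-contained illustration of the second application discussed above, the sketch below (added here, not part of the original paper) evaluates the logarithmic strain ε = ½ ln B from the deformation gradient using only the closed-form eigenvalues and eigenbases, i.e. without computing eigenvectors; only the distinct-eigenvalue branch is shown, the repeated-eigenvalue branches would follow the expressions derived above, and the test deformation gradient is an arbitrary example chosen here.

import numpy as np

def log_strain(F):
    # Logarithmic strain eps = 0.5*ln(B), with B = F F^T, via the closed-form
    # spectral representation (distinct eigenvalues of B assumed).
    B = F @ F.T                                   # left Cauchy-Green tensor
    I1 = np.trace(B)
    s = B - I1 / 3.0 * np.eye(3)
    J2 = 0.5 * np.sum(s * s)
    J3 = np.linalg.det(s)
    theta = np.arcsin(np.clip(-np.sqrt(27.0) / 2.0 * J3 / np.sqrt(J2**3), -1.0, 1.0)) / 3.0
    I2 = 0.5 * (I1**2 - np.trace(B @ B))
    adjB = B @ B - I1 * B + I2 * np.eye(3)        # adj(B) by Cayley-Hamilton
    eps = np.zeros((3, 3))
    for beta in (theta + 2.0 * np.pi / 3.0, theta, theta - 2.0 * np.pi / 3.0):
        lam = I1 / 3.0 + 2.0 / np.sqrt(3.0) * np.sqrt(J2) * np.sin(beta)
        N = (lam * ((lam - I1) * np.eye(3) + B) + adjB) / (J2 * (4.0 * np.sin(beta)**2 - 1.0))
        eps += 0.5 * np.log(lam) * N
    return eps

# deterministic self-check against a standard eigendecomposition
F = np.array([[1.2, 0.1, 0.0],
              [0.0, 1.0, 0.05],
              [0.0, 0.0, 0.8]])
w, V = np.linalg.eigh(F @ F.T)
assert np.allclose(log_strain(F), 0.5 * (V * np.log(w)) @ V.T, atol=1e-9)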
http://arxiv.org/abs/2307.03973v1
20230708131320
Autonomy 2.0: The Quest for Economies of Scale
[ "Shuang Wu", "Bo Yu", "Shaoshan Liu", "Yuhao Zhu" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.CY" ]
Autonomy 2.0: The Quest for Economies of Scale. Shuang Wu, Bo Yu, Shaoshan Liu, Yuhao Zhu. August 12, 2023. § INTRODUCTION With the advancement of robotics and AI technologies in the past decade, we have now entered the age of autonomous machines. In this new age of information technology, autonomous machines, such as service robots, autonomous drones, delivery robots, and autonomous vehicles, rather than humans, will provide services <cit.>. The rise of autonomous machines promises to completely transform our economy. However, after more than a decade of intense R&D investment, autonomy has yet to deliver on its promise <cit.>. In this article, by examining the technical challenges and economic impact of the digital economy, we argue that scalability is both highly necessary from a technical perspective and significantly advantageous from an economic perspective, and is thus the key for the autonomy industry to achieve its full potential. Nonetheless, the current development paradigm, dubbed Autonomy 1.0, scales with the number of engineers instead of with the amount of data or compute resources, preventing the autonomy industry from fully benefiting from economies of scale, especially the exponentially cheapening cost of compute and the explosion of available data. We further analyze the key scalability blockers and explain how a new development paradigm, dubbed Autonomy 2.0, can address these problems to greatly boost the autonomy industry. § SCALABILITY OF THE DIGITAL ECONOMY The digital economy refers to the use of information technology to create, market, distribute, and consume goods and services. It has been the key driving force of the world's economic growth in the past two decades. Consider the internet industry, for instance. The internet industry accounted for 21% of GDP growth in mature economies from 2005 to 2010 <cit.>. In 2019, the internet industry contributed $2.1 trillion to the U.S. economy, about 10% of U.S. GDP, and is the fourth largest industry of the U.S. economy (behind only real estate, government, and manufacturing) <cit.>. Along with its contribution to the economy, the internet industry provides nearly 6 million direct jobs, accounting for 4% of U.S. employment. Two key forces fuel the continuous growth of the digital economy, both of which have to do with scalability: * The commoditization of computing power, as exemplified by Moore's law <cit.>, is the greatest driving force behind the digital industry. The most successful digital economy companies have developed core technology stacks that scale with the available compute resources and data, not with the size of their engineering teams. One remarkable example is WhatsApp: when acquired by Facebook for $19 billion, WhatsApp had only 32 engineers serving over 450 million users. * The breakthrough of artificial intelligence in the last decade has demonstrated that, in addition to many technical improvements and tuning, scaling neural network models and training datasets has been our most effective strategy for achieving continuous performance gains <cit.>. Autonomy technologies such as those found in autonomous driving are widely seen as the pillar of the next digital economy era. However, today's autonomous machine technologies, dubbed Autonomy 1.0, represent everything a scalable industry should not do.
To illustrate the problem facing autonomous driving companies, Figure <ref> analyzes the R&D expenditures and revenue per employee of two leading public digital economy companies, Microsoft representing the software industry and Alphabet representing the internet industry, and two public autonomous driving companies, TuSimple representing the robot truck industry and Aurora representing the robotaxi industry. We selected these autonomous driving companies for the accessibility of their financial data. Both Alphabet and Microsoft spend less than 20% of their total operating expenditures on R&D. For instance, Google employs less than 30,000 engineers while serving over 4.3 billions of users. Their scalability is mainly constrained by available compute resources and data instead of by the number of engineers. In comparison, both TuSimple and Aurora spend more than 70% of their operating expenditures on R&D. Often, to reach new users or to deploy services to new locations, autonomous driving companies need to pour additional R&D resources to re-calibrate their existing technology stacks to adapt to new environments. Hence, their scalability is constrained by R&D investment or, more directly, the number of engineers. As a result, Alphabet and Microsoft are able to generate $1.5 million and $0.8 million of revenue per employee respectively while maintaining a high growth rate, whereas TuSimple and Aurora generate negligible revenue per employee and struggle with growth. For the autonomy industry to achieve economies of scale, we have to revolutionize the R&D paradigm. In following sections, we will describe key scalability issues with Autonomy 1.0, and outline promising solutions that are already at the horizon to achieve scalability in Autonomy 2.0. § AUTONOMY 1.0: THE END OF THE ROAD OF AN AGING ARCHITECTURE Current commercial autonomous driving systems mostly inherited the software architecture from competitors in the DARPA Grand Challenges between 2005 and 2007  <cit.>. This software architecture, while represented a great leap of autonomy technology at the time, has showed its age and become difficult to scale after more than a decade of intense industry efforts to improve and adapt. Figure  <ref> illustrates Autonomy 1.0's scalability problems using autonomous driving operation data from California from 2018 to 2022. Over the past five years, although enormous amount of investment has been poured into autonomous driving, we did not observe significant growth of the number of vehicles under operation, which increased only from 400 in 2018 to 1,500 in 2022. The operation mileage per year increased only from 2 million miles to 5 million miles. Most importantly, there are still over 2,000 disengagement incidents per year. Given this trend in Autonomy 1.0, we are still years away from serious commercial operations of autonomous vehicles. Autonomy 1.0 is modular and consists of functional modules such as sensing, perception, localization, high-definition maps, prediction, planning and control <cit.>, each further consists of several functional sub-modules integrated by explicit and hand-crafted logic. Most decision-making tasks, such as planning, which is responsible for generating optimal and drivable paths, are solved with constraint optimization under a set of hand-tuned rules. 
When a disengagement incident happens, engineers usually have to go through a long process of debugging to identify which specific module or rule may have been the root cause of the disengagement, then optimize that module or develop logic changes to handle the specific problem. Often, due to intricate dependency and coupling among modules or rules, the new software version leads to other problems that need to be addressed, thus greatly slowing down development process. The Autonomy 1.0 software stack over time became a complicated collection of ad-hoc rules and a set of interdependent modules for handling various long-tail events, which has been increasingly difficult to debug, maintain and evolve for improved performance. Taking the open-source project Apollo <cit.> as an example, its perception module alone consists of multiple individual leaning-based sub-modules to accomplish object detection in 2D images, LiDAR point cloud segmentation, traffic light detection, lane detection, and others. To integrate information from these perception sub-modules, a post-processing module then fuses 2D and 3D information and outputs an integrated representation of the environment to the downstream prediction module. The planning module makes decisions and plans routes based on the data from the prediction, localization, and map modules. These modules often have strong dependencies among themselves. Making changes to one module not only impacts the overall system performance, possibly violating real-time constraints and resource allocation, but also impacts the algorithmic performance of other downstream modules due distributional shift of data. The whole system has become complicated and even brittle, demanding enormous amount of engineering resources to maintain, let alone to scale. We summarize the three Autonomy 1.0's major scalability bottlenecks below. * Complexity Bottleneck: The design of autonomy 1.0 systems demands extensive engineering efforts to define software interfaces, distribute data among modules, and map various workloads in a heterogeneous computing system. It is challenging, given the complexity, to debug and continuously update the software stack. The myriad of components also make it challenging to schedule tasks and optimize the latency of the unwieldy stack at run-time. As a result, typical autonomy 1.0 systems exhibit large latency variations <cit.>, which can harm the reliability of the autonomous driving system. * Human-Data Bottleneck: Autonomy 1.0 systems depend on fleets of physical vehicles operated by humans to collect data and perform system-level tests. This is a time-consuming and expensive process that is difficult to scale out. The scalability issue will only get worse as increasingly more modules of autonomy stack adopt data-driven approaches, which requires continuous collection and labeling, because any specific instance of the recorded data reflects only a particular subset of the world states. * Generalization Bottleneck: Autonomy 1.0 systems consist rule-based processing logic and hand-crafted interfaces, which makes them difficult to generalize to new environments. This is because the complexity and diversity of real-world environments makes it difficult to design the autonomy system to anticipate all possible challenging scenarios, whether for perception or planning. As a result, autonomy 1.0 systems are often over-fitted to frequently operated regions and common situations. 
To handle new environments and newly encountered rare cases, additional changes to the system are required, which is increasingly difficult and time-consuming. § AUTONOMY 2.0: SCALABILITY IS EVERYTHING Recent research breakthroughs in artificial intelligence, such as Transformer <cit.>, large language models (LLM) <cit.> and offline reinforcement learning <cit.>, have sparked new ideas in architecture design, data and model infrastructure, and engineering practices of autonomous driving, leading to a new development paradigm, which we dub Autonomy 2.0. The key of Autonomy 2.0 is scalability, which is delivered through two ingredients: 1) a software stack that improves continuously with increasing scale of data and compute resources. 2) a simulation paradigm based on digital twins for algorithmic exploration using large-scale, real-time, realistic data before deployment. Figure  <ref> illustrates the differences between Autonomy 1.0 and Autonomy 2.0 system architectures. Table  <ref> summarizes how Autonomy 2.0 addresses the three bottlenecks in Autonomy 1.0. §.§ Learning-Native Software Stack Any autonomous machine performs two main tasks: perception and action, reflecting the natural dichotomy of the past and the future. The perception task observes the environment and infers its current state based on observations so far. The action task, based on these observations, chooses an appropriate sequence of actions to achieve goals while considering how the environment may evolve in the near future. The software stack in Autonomy 2.0, thus, naturally consists of a perception module and an action module. Unlike in Autonomy 1.0 where each module is implemented by a number of sub-modules, there is a strong evidence that the two modules, in Autonomy 2.0, will each be implemented as a single large deep learning model, likely based on transformer or its variants due to their ability to generalize, as demonstrated in their recent successes in LLMs. Benefits. Before describing how the two-model architecture will look like in Autonomy 2.0, we will first discuss why such an architectural design choice is key to scalability. The two-model architecture addresses the Complexity Bottleneck by drastically reducing the amount of code that needs to be maintained and reasoned about. Figure <ref>a) compares the lines of code in the Apollo Perception module <cit.>, which represents the Autonomy 1.0 approach, with an example of the perception module in Autonomy 2.0, BEVFormer  <cit.>. The Apollo Perception module's size is ten times larger than BEVFormer, and BEVFormer has achieved state of the art perception results. The software architecture also handles corner cases through data-driven model learning instead of hand-crafted logic, and thus address the Generalization Bottleneck in Autonomy 1.0. In Figure <ref>b), we analyze over 400 issues associated with the Apollo planning modules, 47% of the issues are related to Apollo failing to handle a specific usage case, and 30% of the issues are related to software engineering problems such as interfaces with other modules. In Autonomy 1.0, many hand crafted rules are implemented to handle specific use cases. As the rules accumulate, software quality naturally becomes an issue. Architectural Design. The perception and action modules have different goals and traditionally require distinctive algorithmic approaches. The perception module is trained using supervised learning and self-supervised learning to infer one unique ground truth of world states. 
In contrast, the action module needs to search and choose from many acceptable action sequences, while anticipating the behaviors of other agents. Therefore, the action module makes use of methods from reinforcement learning, imitation learning, and model predictive control. Interestingly, while the fundamental distinctions of the two modules have not changed in Autonomy 2.0, there is a growing convergence of the implementation of the two modules: recent successes of large language models (LLM) <cit.> to comprehend a large amount of information to perform multiple sub-tasks suggest that both modules can be implemented using a similar architecture based on Transformer <cit.>. Transformer is a great algorithmic substrate for both the perception and action modules because of its ability to generalize. For perception, a transformer can effectively fuse perceptual data from multiple sensors and multiple moments into a unified representation, avoiding information loss from sparsification and module serialization. For action, the sequential nature of transformer makes it a perfect fit for processing and generating temporal data, especially for sampling multiple possible future paths. Perception. In Autonomy 1.0, the perception module consists of multiple DNNs, each trained separately to support individual tasks such as 2D/3D object detection, segmentation, and tracking. In contrast, the perception module in Autonomy 2.0 uses a single transformer backbone to provide a unified representation of the ego-vehicle's environment (e.g., 2D Bird's Eye View (BEV) <cit.> or 3D occupancy <cit.>), which is then attached to a number of decoder “heads”, each of which is tuned for an individual task. This single-transformer approach toward the perception module has been gaining popularity across the AV industry. For instance, this is the approach described by Tesla engineers in their “AI Day 2022” event  <cit.>, and has been deployed by another leading intelligent electric vehicle company XPENG  <cit.>. Action. The action module anticipates a combinatorially large number of possible “world trajectories”, hypothesizes multiple action sequences, and evaluates them to send the optimal one to actuators. In Autonomy 1.0, the action module is implemented as a set of sub-modules for prediction, planning, and control. The action module in Autonomy 2.0 is end-to-end learned using transformer-inspired architectures for sequential decision making <cit.>. The action transformer incorporates two models: a policy model and a world model. First, the pre-trained, transformer-based policy model leverages the large amount of historical data for agent behavior prediction and ego vehicle decision making and trajectory planning <cit.>. Second, the world model is essentially a behaviorally realistic simulator (validated against real-world data) of the world. The two models are connected with a closed-loop in the transformer so that the policies can be fine-tuned online <cit.>. §.§ Digital-Twin Based Development and Deployment Autonomy 1.0 relies almost exclusively on human efforts for tasks such as manual data labeling and physical testing, posing a scalability bottleneck. Autonomy 2.0 addresses the “Human-Data Bottleneck” using an emerging simulation technology called digital twins, where a virtual representation acts as the counterpart of the physical world. 
As highlighted by the recent National Artificial Intelligence R&D Strategic Plan 2023 published by the White House <cit.>, digital twins have fueled many real-world applications (e.g., urban planning and management of smart cities, and additive manufacturing), and they are a main strategy for sustaining AI technologies. Under the digital-twin paradigm, one instruments the physical system to collect real-world, real-time data, which is then interactively shared with the digital counterpart. In the digital world, one can further synthesize scenarios (e.g., traffic) with statistically meaningful fidelity, whose behavioral distributions resemble those of real human driving. Developing and testing autonomous driving software using synthesized virtual scenarios accelerates the evaluation process by 10^3 to 10^5 times <cit.> and reduces testing costs by two orders of magnitude <cit.> compared to the physical-only approach of Autonomy 1.0. Figure <ref>c) demonstrates the R&D cost efficiency of Autonomy 1.0, which costs $180/hr through physical testing, vs. Autonomy 2.0, which costs $2/hr through virtual testing, a 100-fold improvement <cit.>. Figure <ref>d) demonstrates the R&D efficiency of Autonomy 1.0, which achieves around 3,000 miles per physical vehicle per year through physical testing <cit.>, vs. Autonomy 2.0, which achieves over 3 million miles per virtual vehicle per year through simulation, a 1000-fold improvement <cit.>. Combining these two factors brings an over 10^5-fold improvement under the same engineering investment in Autonomy 2.0, and scalability is thus constrained only by the available compute resources instead of by the number of engineers, effectively eliminating the human-data bottleneck. § SUMMARY The autonomy economy, i.e., the use of autonomous machines to provide goods and services, will fuel the world's economic growth in the coming decades. Huge investments are pouring into the autonomy economy. Such investment will only be justified if autonomous machines can reach, and provide utility for, every person on the planet. As in today's digital economy, scalability will necessarily be the winning formula in this process. The current practice of developing and deploying autonomous machines carries the historical baggage of the complexity bottleneck, the human-data bottleneck, and the generalization bottleneck, and is thus unscalable. We must start from a clean slate and rethink the architecture design of autonomous machines. We posit that Autonomy 2.0 will embrace a learning-native software stack, which addresses the complexity bottleneck through software simplicity and the generalization bottleneck through end-to-end learning. Digital-twin technologies will have to be integrated throughout the development, evaluation, and deployment cycle of Autonomy 2.0 to address the human-data bottleneck.
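To make the "single shared backbone with task-specific decoder heads" perception design of the learning-native stack more concrete, a toy sketch is given below. It is only a schematic illustration, not the BEVFormer architecture or any production stack; the dimensions, the query mechanism and the three heads are placeholder assumptions.

```python
import torch
import torch.nn as nn

class PerceptionModel(nn.Module):
    """One shared transformer backbone produces a unified scene representation;
    small task-specific heads decode detection, occupancy and lane outputs from it."""

    def __init__(self, d_model=256, n_layers=6, n_heads=8, n_queries=900):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.det_head = nn.Linear(d_model, 10)    # e.g. box parameters + class logits
        self.occ_head = nn.Linear(d_model, 2)     # e.g. occupied / free
        self.lane_head = nn.Linear(d_model, 3)    # e.g. lane polyline parameters

    def forward(self, sensor_tokens):
        # sensor_tokens: (batch, n_tokens, d_model) fused camera/LiDAR features
        q = self.queries.unsqueeze(0).expand(sensor_tokens.shape[0], -1, -1)
        scene = self.backbone(torch.cat([q, sensor_tokens], dim=1))[:, : q.shape[1]]
        return {"boxes": self.det_head(scene),
                "occupancy": self.occ_head(scene),
                "lanes": self.lane_head(scene)}

outputs = PerceptionModel()(torch.randn(2, 128, 256))   # toy fused sensor features
```

The design point is that every head consumes the same learned scene representation, so improving the shared backbone with more data benefits all perception tasks at once, which is the scalability argument made above.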
http://arxiv.org/abs/2307.03979v1
20230708140755
Attacking (EC)DSA scheme with ephemeral keys sharing specific bits
[ "M. Adamoudis", "K. A. Draziotis", "D. Poulakis" ]
cs.CR
[ "cs.CR", "94A60" ]
Attacking (EC)DSA scheme with ephemeral keys sharing specific bits. M. Adamoudis, K. A. Draziotis, D. Poulakis. In this paper, we present a deterministic attack on the (EC)DSA signature scheme, provided that several signatures are known such that the corresponding ephemeral keys share a certain number of bits, without their values being known. By eliminating the shared blocks of bits between the ephemeral keys, we obtain a lattice, of dimension equal to the number of signatures, containing a vector that reveals the private key. We compute an upper bound for the distance of this vector from a target vector, and next, using Kannan's enumeration algorithm, we determine it and hence the secret key. The attack can be made highly efficient by appropriately selecting the number of shared bits and the number of signatures. § INTRODUCTION - STATEMENT OF RESULTS In August 1991, the U.S. government's National Institute of Standards and Technology (NIST) proposed an algorithm for digital signatures. The algorithm is known as DSA, for Digital Signature Algorithm <cit.>. It is an efficient variant of the ElGamal digital signature scheme <cit.> intended for use in electronic mail, electronic funds transfer, electronic data interchange, software distribution, data storage, and other applications which require data integrity assurance and data authentication. In 1998, an elliptic curve analogue called the Elliptic Curve Digital Signature Algorithm (ECDSA) was proposed and standardized <cit.>. §.§ The (EC)DSA Signature Scheme First, we recall the DSA scheme. The signer selects a prime p of size between 1024 and 3072 bits with increments of 1024, as recommended in FIPS 186-3 <cit.>. Also, he selects a prime q of size 160, 224 or 256 bits, with q|p-1, and a generator g of the unique order-q subgroup G of the multiplicative group 𝔽_p^* of the prime finite field 𝔽_p. Furthermore, he randomly selects a ∈{1,…,q-1} and computes R = g^a mod p. The public key of the signer is (p,q,g,R) and his private key is a. He also publishes a hash function h : {0,1}^* →{0,…,q-1}. To sign a message m∈{0,1}^*, he randomly selects k ∈{1,…,q-1}, which is the ephemeral key, and computes r = (g^k mod p) mod q and s = k^-1(h(m)+ar) mod q. The signature of m is (r,s). The signature is accepted as valid if and only if the following holds: r = ((g^s^-1h(m) mod q R^s^-1r mod q) mod p) mod q. Next, let us recall the ECDSA scheme. The signer selects an elliptic curve E over 𝔽_p and a point P∈ E(𝔽_p) with order a prime q of size at least 160 bits. Following FIPS 186-3, the bit size of the prime p must belong to the set {160,224,256,512}. Further, he chooses randomly a ∈{1,…,q-1} and computes Q = aP. Finally, he publishes a hash function h : {0,1}^* →{0,…,q-1}. The public key of the signer is (E,p,q,P,Q) and his private key is a. To sign a message m, he randomly selects k ∈{1,…,q-1}, which is the ephemeral key, and computes kP = (x,y) (where x and y are regarded as integers between 0 and p-1). He computes r = x mod q and s = k^-1(h(m)+ar) mod q. The signature of m is (r,s). The verifier computes u_1 = s^-1h(m) mod q, u_2 = s^-1r mod q, and u_1P+u_2Q = (x_0,y_0). He accepts the signature if and only if r = x_0 mod q.
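To make the signing and verification equations above concrete, here is a minimal textbook-style sketch in Python. Parameter generation, message hashing and all side-channel considerations are omitted; the toy parameters in the usage example are chosen only so that q divides p-1 and g has order q.

```python
import secrets

def dsa_sign(h_m, p, q, g, a):
    """r = (g^k mod p) mod q,  s = k^{-1} (h(m) + a r) mod q."""
    while True:
        k = secrets.randbelow(q - 1) + 1          # ephemeral key, 1 <= k <= q-1
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (h_m + a * r)) % q
        if r != 0 and s != 0:
            return r, s

def dsa_verify(h_m, r, s, p, q, g, R):
    """Accept iff r = ((g^{s^-1 h(m) mod q} R^{s^-1 r mod q}) mod p) mod q."""
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    return r == (pow(g, h_m * w % q, p) * pow(R, r * w % q, p) % p) % q

p, q, g = 467, 233, 4        # toy parameters: q | p - 1 and g has order q
a = 123                      # private key
R = pow(g, a, p)             # public key
sig = dsa_sign(57, p, q, g, a)
assert dsa_verify(57, *sig, p, q, g, R)
```

The attack described below exploits precisely the ephemeral key k drawn inside dsa_sign: if several such k share blocks of bits, the signing equation leaks enough structure to recover a.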
§.§ Previous Results Researchers have explored various attacks on DSA schemes by analyzing the signature equation s= k^-1(h(m)+ar) mod q and using lattice reduction techniques such as LLL and CVP algorithms. One study focused on the use of a linear congruential pseudorandom number generator (LCG) for generating random numbers in DSA <cit.>, showing that combining the DSA signature equations with LCG generation equations can lead to a system of equations that provide the secret key. To recover the secret key, several heuristic attacks have been proposed <cit.> in another study, which assume the revelation of a small fraction of the corresponding nonce k. However, these attacks are based on heuristic assumptions, making it difficult to make precise statements on their theoretical behavior. The first rigorous lattice attack on (EC)DSA was presented in <cit.>. The authors successfully decreased the security of (EC)DSA to a Hidden Number Problem (HNP), which can then be further reduced to an approximation Closest Vector Problem (CVP) for a specific lattice. The signer's secret key a can be computed using this reduction in polynomial time. The attack was also adapted to the case of ECDSA, as described in <cit.>. The paper <cit.> describes an attack on DSA schemes that uses the LLL reduction method and requires one message. By computing two short vectors of a three-dimensional lattice, the attack derives two intersecting lines in (a, k), provided that a and k are sufficiently small, and the second shortest vector is sufficiently short. If two messages are available, the same attack can be applied to derive a linear congruence relating to the corresponding ephemeral keys. The papers <cit.> and <cit.> describe attacks on DSA schemes using the LLL algorithm and one or two messages. In <cit.>, the combination of LLL with algorithms for finding integral points of two classes of conics gives a, provided that at least one of the sets {a,k^-1 q}, {k,a^-1 q}, {a^-1 q,k^-1 q} is sufficiently small. In <cit.>, the Lagrange Reduction algorithm is applied on a 2-dimensional lattice defined by a signed message, and provides two straight lines intersecting at (a, k). Similar attacks can be applied to the pairs (k^-1 q, k^-1a q) and (a^-1 q,a^-1k q). If two signed messages are available, the above two attacks can be applied to the equation relating the two ephemeral keys. The article <cit.> presents an attack using Coppersmith's method to compute the secret key a. The attack works when a and k satisfy a specific inequality, and in this case, the secret key a can be efficiently computed. The article <cit.> describes an attack that involves constructing a system of linear congruences using signed messages. This system has at most one unique solution below a certain bound, which can be computed efficiently. Thus, if the length of a vector containing the secret and ephemeral keys of a signed message is quite small, the secret key can be computed using the above system. The article <cit.> presents an improved version of this attack. In <cit.>, the proposed attacks take advantage using of the bits in the ephemeral key and the Fast Fourier Transform. In <cit.>, it is shown that, using lattice reduction under some heuristic assumptions, that partial information about the nonces of multiple signatures can lead to recovery of the full private key. The original approach to doing so is based on discrete Fourier analysis techniques <cit.>. 
A very important issue is the attacks on cryptosystems based on the malicious modification of memory registers. These attacks may affect the randomness of the secret parameters, and so, to force certain bits of the ephemeral key to be equal, without their values being known. In <cit.>, it is discussed how such attacks could occur in a real-life scenario. Following the line of research from <cit.>, the authors of <cit.> focus on an attack scenario where ephemeral keys share specific bits, such as the least significant bits (LSB) and/or most significant bits (MSB), either within multiple blocks. By eliminating the shared blocks of bits between the ephemeral keys, a lattice of dimension equal to the number of signatures is provided, which contains a quite short vector with components that reveal the secret key. Then, the LLL algorithm is used for the computation of this vector. Note that these attacks are based on heuristic assumptions. Later, in <cit.>, the authors further improved upon the attack proposed in <cit.> by providing a probabilistic attack with a success probability approaching 1 when the pair (δ,n) is appropriately selected, where n represents the number of signatures, and δ represents the number of shared bits in the ephemeral keys. This attack relies on a mild assumption regarding the hash function used in (EC)DSA. §.§ Our Contribution Our study builds on the research presented in <cit.>, and we present a deterministic attack that, although not always polynomial in complexity, proves to be highly efficient in practical scenarios. Instead of using methods like LLL, approximate, or exact CVP, which were employed in previous attacks, we use enumeration on a suitable lattice to find lattice vectors that are close to a specific target vector. From these solutions, we can readily extract the secret key to the system. It is important to highlight that the attacks presented in <cit.> rely on heuristics assumptions that aim to force the presence of a vector containing the private key as a solution to the Shortest Vector Problem (SVP) in a relatively large lattice. In <cit.>, the authors provide a probabilistic approach to <cit.>, where an assumption for the hash function is made and the attack is modelled as a Closest Vector Problem (CVP). Due to the computational complexity of finding such a vector using a deterministic algorithm, an approximation algorithm can be used instead. Our approach takes a different path. We calculate a bound for the distance between the vector of the lattice containing the private key and a target vector. Then, we leverage Kannan's enumeration algorithm to determine this vector and, consequently, extract the secret key. Our experiments demonstrate that the attack can be made highly efficient by appropriately selecting values for δ and n. Finally, we improve the results provided in <cit.>. §.§ Our results In the subsequent Theorem, we apply the framework suggested by <cit.>, which presupposes that we have access to a collection of signed messages with ephemeral keys that are shorter than q. These messages have some of their most and least significant bits in common, with a total of δ bits shared. Suppose we have a (EC)DSA scheme with a binary length ℓ prime number q and secret key a. Let m_j (j=0,…,n) be messages signed with this scheme, (r_j,s_j) their signatures, and k_j = ∑_i=1^ℓ k_j,i 2^ℓ-i (where k_j,i∈{0,1}) are the corresponding ephemeral keys, respectively. Set A_j = -r_js_j^-1 q. 
Suppose that 0< k_j < q (j=0,…,n), and there are integers δ >0 and 0 ≤δ_L≤δ such that the following conditions hold: * k_0,i+1 = ⋯ = k_n,i+1 (i=1,…,δ-δ_L,ℓ-δ_L, …,ℓ-1). * For i = 0,…,n, set C_i,j = (A_j-1 -A_i) 2^-δ_L q, (j=1,…,i), and C_i,j = (A_j -A_i) 2^-δ_L q (j=i+1,…,n). The shortest vector of the lattice ℒ_i spanned by the vectors (2^δ+1q,0,…, 0),…, (0,…, 0, 2^δ+1q , 0), (2^δ+1C_i,1, …, 2^δ+1C_i,n, 1) has length > 1/2 (2^δ+1q)^n/n+1. Then, the secret key a can be computed in 𝒪(2^ℓ-δ n+2n n ( (nℓ)^c 2^𝒪(n) +ℓ^4 2^n (n+1)^n+1/2)) bit operations, for some c > 0. By the Gaussian heuristic <cit.> the length of the vectors of the lattice ℒ is > q^n/(n+1). Thus, the hypothesis (2) of Theorem <ref> will very often be satisfied. In the above complexity estimate, if ℓ≤δ n, then the time complexity is polynomial in ℓ. Roadmap. The paper is structured as follows: Section 2 presents an auxiliary lemma that will prove crucial in the proof of Theorem <ref>. Section 3 is dedicated to the proof of Theorem <ref>, providing a detailed explanation and justification. In Section 4, an attack on (EC)DSA, derived from Theorem <ref>, is presented. Additionally, several experiments are conducted to illustrate the effectiveness of the attack. Finally, Section 5 concludes the paper, summarizing the main findings and discussing potential avenues for future research. § LATTICES Let ℬ = { b_1, …, b_n}⊂^n be a basis of ^n. A n-dimensional lattice spanned by ℬ is the set ℒ = {z_1 b_1+⋯ +z_n b_n/ z_1,…,z_n ∈}. Recall that the scalar product of two vectors 𝐮 = (u_1,…,u_n) and 𝐯 = (v_1,…,v_n) in ℝ is the quantity ⟨𝐮,𝐯⟩ = u_1v_1+⋯ + u_nv_n, and the Euclidean norm of a vector v = (v_1,…,v_n) ∈^n the quantity 𝐯 = ⟨𝐯,𝐯⟩^1/2 = (v_1^2+⋯ +v_n^2)^1/2. The Gram-Schmidt orthogonalisation (GSO) of the basis ℬ is the orthogonal family {𝐛_1^⋆,…,𝐛_n^⋆} defined as follows: 𝐛_i^⋆ = 𝐛_i-∑_j=0^i-1μ_i,j𝐛_j^⋆, with μ_i,j = ⟨𝐛_i,𝐛_j^⋆⟩/𝐛_j^⋆^2 (j= 0,…,i-1). Let L be a lattice. If K is a convex body in ^n+1 symmetric about the origin, we denote by λ_i(K,L) (i=1,…,n+1) the ith successive minimum of K with respect to L which it is defined as follows λ_i(K, L) = inf{λ > 0/ (λ K) ∩ L contains i linearly independent points}. Further, we denote by s(L) the length of a shortest vector in L. Let B_𝐯(R) be the closest ball of center 𝐯 and radius R in ℝ^n+1 and L a lattice. Then,we have: |B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1. Set 𝒟_𝐯(R) = {𝐱-𝐲/ 𝐱,𝐲∈ B_𝐯(R)}. Then, 𝒟_𝐯(R) is a convex body, symmetric about the origin. Then <cit.> implies: |B_𝐯(R)∩ L | < ∏_i=1^n+1(1/λ_i(𝒟_𝐯(R),L)+1). Let 𝐱,𝐲∈ B_𝐯(R). Then, we have: 𝐱-𝐲≤𝐱-𝐯+ 𝐯-𝐲≤ 2R. It follows that 𝒟_𝐯(R)⊆ B_0(2R), and so we deduce λ_1(B_0(2R),L) ≤λ_i(𝒟_𝐯(R),L) (i=1,…,n). Further, we have λ_1(B_0(2R),L) ≥ s(L)/2R. Combining the inequalities (<ref>), (<ref>) and (<ref>), we obtain: |B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1. § PROOF OF THEOREM 1.1 Let a be the secret key and k_j, j = 0,…,n the ephemeral keys. We put A_j = -r_js_j^-1 q and B_j = -h(m_j) s_j^-1 q for j = 0,…,n. The signing equation for (EC)DSA provides that, k_j+A_j a +B_j ≡ 0 ( q) (j=0,…,n). Suppose first that k_0 = min{k_0,…,k_n}. We set δ_M=δ-δ_L. From the hypothesis of the Theorem we get z_j=k_j-k_0=ε 2^ℓ-δ_M-1+⋯+ε' 2^δ_L, for some ε, ε'∈{0,1}. Since z_j>0 we get 0<z_j<2^ℓ-δ_M and there exists positive integer z_j' such that z_j = 2^δ_Lz^'_j Furthermore, we set C_j = (A_j-A_0)2^-δ_L q and D_j = (B_j-B_0)2^-δ_L q. From (<ref>) we have the congruences: z_j^'+C_j a +D_j ≡ 0 ( q) (j=1,…,n). 
Since z_j^' is positive, there is a positive integer c_j such that -C_ja-D_j+c_jq= z_j^'. Thus, we obtain: 0 < c_jq-C_j a-D_j < 2^ℓ-δ. It follows that -2^ℓ-δ-1 < c_jq-C_j a-D_j-2^ℓ-δ-1 < 2^ℓ-δ-1, whence we get 0 < |c_jq-C_j a-D_j-2^ℓ-δ-1| < 2^ℓ-δ-1. Therefore, we have: 0 < |c_jq2^δ+1 -C_j2^δ+1 a-D_j2^δ+1-2^ℓ| < 2^ℓ. We consider the lattice ℒ spanned by the rows of the matrix 𝒥 = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1q 0 … 0 0; 0 0 2^δ+1q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1q 0; 2^δ+1C_1 2^δ+1C_2 2^δ+1C_3 … 2^δ+1C_n 1 ]). The vectors of the lattice ℒ are of the form (2^δ+1(qx_1+x_n+1C_1),2^δ+1(qx_2+x_n+1C_2),…,2^δ+1(qx_n+x_n+1C_n),x_n+1), for some integers x_1,…,x_n+1. By setting (x_1,…,x_n+1)=(c_1,…,c_n,-a), we get the lattice vector 𝐮 = (2^δ+1(c_1q-C_1a),…,2^δ+1(c_nq-C_na),-a). Further we consider the vector in the span of ℒ, 𝐯 = (D_12^δ+1+2^ℓ,…,2^δ+1D_n+2^ℓ,0). Now, we have u- v=(2^δ+1(qc_1-C_1a-D_1)-2^ℓ,…,2^δ+1(qc_n-C_na-D_n)-2^ℓ,-a), and inequalities (<ref>) yield: 𝐮-𝐯 < 2^ℓ√(n+1). Put R = 2^ℓ√(n+1). Then 𝐮∈ B_𝐯(R). Next, we compute a LLL-reduced basis for ℒ, say ℬ = {𝐛_1,…,𝐛_n+1}. This can be done in time 𝒪(n^6 (log q)^3). By hypothesis (2) of Theorem, we have: s(ℒ) > 1/2 (2^δ+1 q)^n/n+1. Let {𝐛_1^*,…,𝐛_n+1^*} the Gram-Schmidt orthogonalisation of ℬ. By <cit.>, we get: 4 b_i^*^2 ≥ 2 b_i-1^*^2 ≥ b_i-1^2 ≥ s(L)^2 Thus, we obtain: 1/4 (2^δ+1q)^n/n+1≤𝐛_i^* (i=1,…,n+1). Next, using Kannan's enumeration algorithm <cit.>, we compute all the elements of B_𝐯(R)∩ℒ. Combining <cit.> with the inequality (<ref>), we obtain that the bit complexity of the procedure is (nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/n+1)^n+1 , where c is a constant >0. Then we check whether the last coefficient of 𝐮∈ B_𝐯(R)∩ℒ is the minus of the secret key -aq. Every such operation needs 𝒪((log q)^4) bit operations <cit.>. If none of the elements of 𝐮∈ B_𝐯(R)∩ℒ gives the secret key, then we repeat the procedure assuming that k_1 = min{k_0,…,k_n}, and we continue until we found the secret key. By Lemma <ref>, we have: |B_𝐯(R)∩ℒ | < ( 2^ℓ+2√(n+1)/ (2^δ+1q)^n/n+1 +1)^n+1. Thus, the overall bit complexity of the computation of a is 𝒪(n(nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/n+1)^n+1 +n ( 2^ℓ+2√(n+1)/ (2^δ+1q)^n/n+1 +1)^n+1 (log q)^4), whence the result. § THE ATTACK The proof of Theorems 1.1 yields the following attack: ATTACK-DSA Input: Messages m_j (j=0,…,n) and (r_j,s_j) their (EC)DSA signatures and integers δ >0 and 0 ≤δ_L≤δ and the public key (p,q,g,R) (resp. (E,p,q,P,Q)). Output: The private key a. * For j=0,…, n compute A_j = -r_is_i^-1 q, B_j = -h(m_j) s_j^-1 q. * For i=0,…,n, * For j=1,…,i compute C_i,j = (A_j-1 -A_i) 2^-δ_L q, D_i,j = (B_j-1 -B_i) 2^-δ_L q, and for j= i+1,…,n compute C_i,j = (A_j -A_i) 2^-δ_L q, D_i,j = (B_j -B_i) 2^-δ_L q. * Consider the lattice ℒ_i spanned by the rows of the matrix J_i = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1 q 0 … 0 0; 0 0 2^δ+1 q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1 q 0; 2^δ+1C_i,1 2^δ+1C_i,2 2^δ+1C_i,3 … 2^δ+1C_i,n 1 ]) and compute a LLL-basis ℬ_i for ℒ_i. * Consider the vector 𝐯_i = (2^δ+1D_i,1+2^ℓ,…,2^δ+1D_i,n+2^ℓ,0), and using Kannan's enumeration algorithm with basis ℬ_i, compute all 𝐮∈ℒ_i satisfying 𝐮-𝐯_i < 2^ℓ√(n+1). * Check whether the last coordinate of 𝐮 say u_n+1 satisfies g^-u_n+1≡ Rq (resp. P(-u_n+1) = Q). If it is so, then return the secret key -u_n+1q=a. For the Pseudocode of Kannan's Enumeration Algorithm, one can see <cit.>. Supposing that condition (2) is satisfied, taking n quite small and nδ≥ℓ, Theorem <ref> implies that the attack is polynomial in ℓ. 
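For illustration, the lattice set-up of ATTACK-DSA can be written in a few lines of Python; the sketch below only builds the basis J_i and the target vector v_i and checks a candidate close vector, while the LLL reduction and Kannan enumeration are delegated to a lattice library (SageMath is used in the experiments reported below). Variable names are illustrative.

```python
def build_attack_lattice(A, B, q, delta, delta_L, i, ell):
    """Basis rows of J_i and target v_i for the i-th run of ATTACK-DSA.

    A, B : lists with A_j = -r_j s_j^{-1} mod q and B_j = -h(m_j) s_j^{-1} mod q,
           j = 0, ..., n (n+1 signatures); i is the index assumed to carry the
           minimal ephemeral key."""
    n = len(A) - 1
    inv = pow(2, -delta_L, q)                       # 2^{-delta_L} mod q
    others = [j for j in range(n + 1) if j != i]
    C = [(A[j] - A[i]) * inv % q for j in others]
    D = [(B[j] - B[i]) * inv % q for j in others]
    w = 2 ** (delta + 1)
    rows = [[w * q if c == r else 0 for c in range(n)] + [0] for r in range(n)]
    rows.append([w * Cj for Cj in C] + [1])         # last basis vector
    target = [w * Dj + 2 ** ell for Dj in D] + [0]  # v_i; search radius 2^ell * sqrt(n+1)
    return rows, target

def check_candidate(u, q, p, g, pub_R):
    """Final check: does the last coordinate of a close vector u reveal -a mod q?"""
    a = -u[-1] % q
    return a if pow(g, a, p) == pub_R else None
```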
Furthermore, if s(L) is closed to the Gauss heuristic, then the upper bound for the number of points of B_𝐯(R)∩ℒ will be the smaller possible, and so it is expect that the attack will be quite efficient. Experiments. We conducted a thorough analysis of our experiments, and we compared our results with those presented by Gomez et al. <cit.>. Our findings indicate a significant improvement in almost all cases. Our experiments were conducted on a Linux machine with an i5-12400 CPU, using Sagemath 9.8 <cit.>. We made the assumption that we already knew the minimum ephemeral key. However, in the general case, where the minimum key is unknown, we would need to perform n executions, where n+1 represents the number of signatures. This worst-case scenario would require multiplying the execution time of each experiment by n. Overall, our results demonstrate a notable improvement compared to the previous findings (see the Table below). Finally, we have successfully found the secret key even when the shared bits in the ephemeral keys are only 5. Remarkably, in this case, we only needed a minimum of 58 signatures. It is worth noting that in <cit.>, no successful attack was provided for the specific scenario where δ=5. § CONCLUSION Attacks based on the malicious modification of memory registers is a topic of high importance, since it may affect the randomness of the secret parameters by forcing a limited number of bits to a certain value, which can be unknown to the attacker. In this context, we developed a deterministic attack on the DSA schemes, providing that several signatures are such that the corresponding ephemeral keys share a number of bits without knowing their value. Our attack is deterministic, meaning that it always produces a result for a given input every time. However, it is important to note that while the attack is deterministic, it may not always be practical to execute. Deterministic attacks on the (EC)DSA are relatively rare, as they typically rely on heuristic assumptions. While deterministic attacks on (EC)DSA, are rare, our attack demonstrates practical feasibility in specific scenarios, surpassing previous results, (see Table <ref>). However, it is important to note that the practicality and effectiveness of our attack may vary depending on the specific choice of (δ,n). Acknowledgement The author, Marios Adamoudis is co-financed by Greece and the European Union (European Social Fund-ESF) through the Operational Programme ”Human Resources Development, Education and Lifelong Learning” in the context of the Act ”Enhancing Human Resources Research Potential by undertaking a Doctoral Research” Sub-action 2: IKY Scholarship Programme for PhD candidates in the Greek Universities. 99 marios M. Adamoudis, K. A. Draziotis and D. Poulakis, Enhancing a DSA attack, CAI 2019, p. 13-25. LNCS 11545, Springer 2019. Aranha D. F. Aranha, F. R. Novaes, Akira Takahashi, M. Tibouchi, and Y. Yarom. LadderLeak: Breaking ECDSA with less than one bit of nonce leakage. In Jay Ligatti, Xinming Ou, Jonathan Katz, and Giovanni Vigna, editors, ACM CCS 2020, pages 225-242. ACM Press, November 2020. Bellare M. Bellare, S. Goldwasser and Micciancio, “Pseudo-random" number generation within cryptographic algorithms: the DSS case. In Proc. of Crypto '97, LNCS 1294 IACR, Palo Alto, CA. Springer-Verlag, Berlin 1997. Blake I. F. Blake and T. Garefalakis, On the security of the digital signature algorithm. Des. Codes Cryptogr., 26, no. 1-3 (2002), 87-96. Bleichenbacher D. Bleichenbacher. 
On the generation of one-time keys in DL signature schemes. In Presentation at IEEE P1363 working group meeting, page 81, 2000. Draziotis K. A. Draziotis and D. Poulakis, Lattice attacks on DSA schemes based on Lagrange's algorithm. 5th international Conference on Algebraic Informatics, CAI 2013. Berlin: Springer. LNCS 8080, 119-131 (2013). Draziotis2 K. A. Draziotis, (EC)DSA lattice attacks based on Coppersmith's method, Information Processing Letters 116(8), Elsevier (2016), Pages 541-545. ElGamal T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithm, IEEE Transactions on Information Theory, 31 (1985), 469-472. fips FIPS PUB 186-3, Federal Information Processing Standards Publication, Digital Signature Standard (DSS). Faugere J. -L. Faugère, C. Goyet, and G. Renault, Attacking (EC)DSA Given Only an Implicit Hint, Selected Area of Cryptography, LNCS 7707, p. 252–274, Springer-Verlag, Berlin - Heidelberg 2013. Gomez Ana I. Gomez, D. Gomez-Perez, and G. Renault, A probabilistic analysis on a lattice attack against DSA. Des. Codes Cryptogr. 87, 2469-2488 (2019). Hanrot G. Hanrot and D. Stehlé, Improved analysis of kannan’s shortest lattice vector algorithm. In Proceedings of Crypto, LNCS 4622, 170-186. Springer, 2007. Hanrot2 G. Hanrot, X. Pujol and D. Stehlé, Algorithms for the shortest and closest lattice vector problems. Chee, Yeow Meng (ed.) et al., Coding and cryptology. Third international workshop, IWCC 2011, Qingdao, China, May 30 – June 3, 2011. Proceedings. Berlin: Springer. Lecture Notes in Computer Science 6639, 159-190 (2011). Hoffstein J. Hoffstein, J. Pipher, H. H. Silverman, An introduction to mathematical cryptography. 2nd ed. Undergraduate Texts in Mathematics. New York, NY: Springer 2014. Howgrave N. A. Howgrave-Graham and N. P. Smart, Lattice Attacks on Digital Signature Schemes, Des. Codes Cryptogr. 23 (2001) 283-290. Johnson D. Johnson, A. J. Menezes and S. A. Vastone, The elliptic curve digital signature algorithm (ECDSA), Intern. J. of Information Security, 1 (2001) 36-63. Koblitz N. Koblitz, A. J. Menezes and S. A. Vastone, The state of elliptic curve cryptography, Des. Codes Cryptogr. 19 (2000), 173-193. Koblitz2 N. Koblitz and A. J. Menezes, A survey of Public-Key Cryptosystems, SIAM REVIEW, 46, No. 4 (2004), 599-634. Leadbitter P.J. Leadbitter, D. Page, N.P. Smart. Attacking DSA Under a Repeated Bits Assumption. In: Joye, M., Quisquater, JJ. (eds) Cryptographic Hardware and Embedded Systems - CHES 2004. CHES 2004. Lecture Notes in Computer Science, vol 3156, (2004) 428-440. Springer, Berlin, Heidelberg. Lenstra A. K. Lenstra, H. W. Lenstra Jr., and L. Lovász, Factoring polynomials with rational coefficients, Math. Ann., 261 (1982), 513-534. Malikiosis R.-D. Malikiosis, Lattice-point enumerators of ellipsoids, Combinatorica 33, No. 6 (2013) 733-744. Menezes A. J. Menezes, P. C. van Oorschot and S. A. Vanstone, Handbook of Applied Cryptography, CRC Press, Boca Raton, Florida, 1997. Micciancio D. Micciancio and P. Voulgaris. A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. In Proc. of STOC, ACM, (2010) pages 351-358. Mulder1 E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. Using Bleichenbacher s solution to the Hidden Number Problem to attack nonce leaks in 384-bit ECDSA. In Cryptographic Hardware and Embedded Systems-CHES 2013, 435-452. Springer, 2013. Mulder2 E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. 
Using Bleichenbacher's solution to the hidden number problem to attack nonce leaks in 384-bit ecdsa: extended version. Journal of Cryptographic Engineering, 4(1):33-45, 2014. National National Institute of Standards and Technology (NIST). FIPS Publication 186: Digital Signature Standard. May 1994. Nguyen P. Nguyen and I. E. Shparlinski, The Insecurity of the Digital Signature Algorithm with Partially Known Nonces, J. Cryptology, 15 (2002), 151-176. Nguyen2 P. Nguyen and I. E. Shparlinski, The Insecurity of the Elliptic Curve Digital Signature Algorithm with Partially Known Nonces, Des. Codes Cryptogr. 30, (2003), 201-217. Poulakis D. Poulakis, Some Lattice Attacks on DSA and ECDSA, Applicable Algebra in Engineering, Communication and Computing, 22, (2011), 347-358. Poulakis1 D. Poulakis, New lattice attacks on DSA schemes, J. Math. Cryptol. 10 (2) (2016), 135–144. sage Sage Mathematics Software, The Sage Development Team. <http://www.sagemath.org>. Sun C. Sun, T. Espitau, M. Tibouchi, and M. Abe, Guessing Bits: Improved Lattice Attacks on (EC)DSA with Nonce Leakage, IACR Transactions on Cryptographic Hardware and Embedded Systems, ISSN 2569-2925, Vol. 2022, No. 1, pp. 391-413. Zheng Z. Zheng, Modern Cryptography, Volume 1, Springer 2021.
http://arxiv.org/abs/2307.04507v1
20230710120118
Improving Factuality of Abstractive Summarization via Contrastive Reward Learning
[ "I-Chun Chern", "Zhiruo Wang", "Sanjan Das", "Bhavuk Sharma", "Pengfei Liu", "Graham Neubig" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Improving Factuality of Abstractive Summarization via Contrastive Reward Learning. Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information. In this paper, we propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics. Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics using contrastive reward learning, leading to more factual summaries as judged by human evaluations. This suggests that further advances in learning and evaluation algorithms can feed directly into providing more factual summaries. Code and human evaluation results will be publicly available at <https://github.com/EthanC111/factuality_summarization>. § INTRODUCTION One major challenge in current abstractive summarization models is how to generate summaries that are factually consistent with the source text <cit.>. Various approaches have been proposed to address this challenge, including augmenting the model input <cit.>, performing post-processing <cit.>, and modifying the learning algorithms <cit.>. In particular, learning-based methods have the advantage of not requiring modifications to the existing model architecture or the addition of new modules. Meanwhile, with the growing interest in aligning learning objectives with evaluation criteria of interest, utilizing the feedback of automatic evaluation metrics <cit.> or human preferences <cit.> as rewards for fine-tuning abstractive summarization models has gained substantial attention. These methods learn to optimize rewards using techniques such as reinforcement learning (RL) <cit.>, minimum risk training (MRT) <cit.>, and contrastive reward learning (CRL) <cit.>. Given the benefits of learning-based methods in improving the factuality of abstractive summarization, and recent advances in factuality metrics for detecting factual inconsistencies in generated summaries, it is of interest to apply reward learning so that models learn from the feedback of factuality metrics and thereby produce more factual summaries. We aim to investigate the following questions in this paper. Q1: Can contrastive reward learning effectively utilize existing factuality metrics to improve the factuality of abstractive summarization models? Q2: Can the improvement in factuality be reflected in human evaluation studies? In this paper, we propose a contrastive reward learning framework that enables abstractive summarization models to directly learn from the feedback of factuality metrics in a sample-efficient manner. In contrast to other contrastive learning frameworks <cit.>, our proposed framework does not rely on the complex construction of negative samples. Instead, similar to <cit.>, all candidate summaries used for contrastive learning are generated from pretrained sequence-to-sequence models <cit.> using diverse beam search <cit.>. Additionally, our framework also incorporates the use of quality metrics to provide more fine-grained information on the ranking (positive / negative) of candidate summaries. Specifically, we investigate learning from the rewards of two factuality metrics: BARTScore <cit.> and DAE <cit.>.
Through automatic and human evaluation studies, we demonstrate that our framework enables summarization models to generate significantly more factual summaries. § CONTRASTIVE LEARNING FROM FACTUALITY REWARDS §.§ Contrastive Learning for Abstractive Summarization Abstractive Summarization Given a source document D, the summarization model learns a generative model g_θ, that converts the source document D into a summary S: S = g_θ(D) MLE Loss Given a training sample pair {D,S^r} consists of source document D and reference summary S^r (note that S^r consists of L tokens, S^r = {s^r_1, ⋯, s^r_j, ⋯, s^r_L}), the MLE loss ℒ_mle aims to maximize the likelihood of reference summary S^r given the source document D: ℒ_mle = log p_g_θ(S^r | D) = ∑_j = 1^Llog p_g_θ(s^r_j | D, s^r_<j) where s^r_<j = {s^r_0, ⋯, s^r_j-1} and s^r_0 is a pre-defined start token. Despite its effectiveness in enforcing generated summaries to align with the reference summaries, the MLE loss is not aware of the quality (evaluated by some quality metric M) of the generated summaries. To address this issue, we introduce a contrastive loss <cit.>. Contrastive Loss Given a training sample pair {D,S^r}, and that S_i, S_j are candidate summaries generated from a pre-trained model given D, and that M(S_i) > M(S_j) ∀ i,j,i < j [ M could be reference-free (e.g., BARTScore, DAE) or reference-based (e.g., ROUGE) metric. If M is a reference-free metric, then M(S_i) = M(S_i, D) ; if M is a reference-based metric, then M(S_i) = M(S_i, S^r)], the contrastive loss is defined as: ℒ_ctr = ∑_i ∑_j > imax (0, f(S_j) - f(S_i) + λ_ij) Note that λ_ij = (j - i) ×λ is the rank difference between two candidates times a constant λ (usually set as 1) [The magnitude of contrastive loss can be directly regulated through the weight of contrastive loss γ, so we simply set λ equal to 1.] and that f(S) is the length-normalized estimated log-probability given by: f(S) = ∑_t=1^llog p_g_θ(s_t|D, S_<t)/|S|^α where α is a constant. Intuitively, the contrastive loss penalizes any discoordination between the length-normalized estimated log-probability and the quality metric evaluation (i.e., when f(S_j) > f(S_i) but M(S_i) > M(S_j)). The quality metric M could be any evaluation criteria, including automatic evaluation metrics <cit.>, or human preferences <cit.>. Combined Loss The combined loss used for fine-tuning is described by <ref>. ℒ_com=ℒ_mle+γℒ_ctr where ℒ_mle is the MLE loss given in <ref>, ℒ_ctr is the contrastive loss given in <ref>, and γ is the weight of contrastive loss. Summarization models fine-tuned with ℒ_com is referred as CRL-COM. §.§ Reward from Factuality Metrics We use two factuality metrics as quality metrics M for use in the contrastive loss described in <ref>. BARTScore <cit.>'s factuality score is calculated as the log-likelihood of the summary given the source calculated from a reference-free version of BARTScore. DAE <cit.> is calculated as the softmax output of the least-factual dependency-arc inside the sentences in the summary. These two metrics were chosen for relative computational efficiency, as they are evaluated many times in the training process. [ In contrast, QA-based factuality metrics are computationally inefficient <cit.>. As a result, they are less feasible for use in reward-learning settings. 
] § EXPERIMENTS §.§ Experimental Setup Driven by the two research questions presented in the introduction, we train two kinds of factuality-driven summarization models, namely CRL-COM (B) and CRL-COM (D), trained from contrastive reward learning using BARTScore and DAE as quality metrics, respectively. A baseline summarization model CRL-COM (R) is also trained from contrastive reward learning using ROUGE as quality metric. Note that commonly used n-gram based metrics, including ROUGE <cit.>, have been shown to have a low correlation with human evaluations, particularly on factuality perspective <cit.>. Thus, we focus on evaluating the factuality of CRL-COM (B) and CRL-COM (D) compared to CRL-COM (R), with the hypothesis that CRL-COM (B) and CRL-COM (D) should be capable of generating more factual summaries compare to CRL-COM (R). Datasets: We use two abstractive summarization datasets – CNN/Daily Mail (CNNDM) dataset <cit.> and the XSUM dataset <cit.>. CNNDM summaries tend to be more extractive and are composed of multi-sentence summaries, while XSUM summaries are more abstractive and are composed of single-sentence summaries. Models: Following the setting outlined in <cit.>, we fine-tuned a pre-trained BART model <cit.> on the CNNDM dataset and a pre-trained PEGASUS <cit.> model on the XSUM dataset. Implementation and Fine-tuning Details: The combined loss (with weight of the contrastive loss γ = 100) described in <ref> is used to fine-tune the pre-trained models. Following <cit.> few-shot fine-tuning learning paradigm, we sampled 1000 training samples from each dataset for few-shot fine-tuning. A constant learning rate of 10^-5 and 10^-4 was applied to the fine-tuning process for the CNNDM and XSUM datasets, respectively, in order to facilitate fast convergence. For each dataset, we fine-tuned three models using three different quality metrics: ROUGE (R), BARTScore (B), and DAE (D), designated as CRL-COM (R), CRL-COM (B), and CRL-COM (D), respectively. During validation, we employed the same quality metric used for fine-tuning for early stopping. Automatic Evaluation Each model is evaluated on three metrics: ROUGE (with variants ROUGE-1, ROUGE-2, ROUGE-L), BARTScore, and DAE. Human Evaluation To objectively evaluate the factual consistencies of the generated summaries from each model, we randomly sampled 100 samples from CNNDM and 200 samples from XSUM for human evaluation. We assess each summary from three different perspectives: Factuality (FAC), Coherence (COH), and Relevance (REL), with a particular emphasis on factuality. The assessment follow similar guidelines as in <cit.>. The evaluation guidelines provided to the annotators are listed in <ref>. An expert annotator is involved in the human evaluation studies. §.§ Results and Analysis Contrastive reward learning can enforce models to learn from feedback of factuality metrics Driven by Q1, we observe that results from automatic evaluation presented in <ref> indicate that contrastive reward learning enables abstractive summarization models to develop in a direction that aligns with existing factuality metrics. Learning from factuality metrics improves factuality of abstractive summarization. Driven by Q2, we observe that results from human evaluation presented in <ref> indicate that on both datasets, CRL-COM (B) and CRL-COM (D) exhibit superior performance in terms of factuality compared to CRL-COM (R). 
This suggests that while learning from factuality metrics such as BARTScore and DAE may potentially result in sacrificing the performance of the models on ROUGE scores, the resulting models can generate more factually consistent summaries. In other words, summaries with higher BARTScore or DAE scores but lower ROUGE scores tend to be more factually consistent with the source article compared to those with lower BARTScore or DAE scores but higher ROUGE scores. This further supports the assertion that BARTScore and DAE are effective at capturing factual information. Learning from factuality metrics did not sacrifice coherence and relevance. According to human evaluations, the summaries generated by CRL-COM (B) and CRL-COM (D) showed comparable coherence and relevance to those generated by CRL-COM (R). This suggests that BARTScore and DAE has comparable abilities to ROUGE in terms of measuring coherence and relevance. § RELATED WORK §.§ Factuality Metrics for Abstractive Summarization Various factuality metrics assess the factual consistency between a summary and its corresponding source document. QA-based factuality metrics leverage question generation (QG) models to generate questions from the summary and question answering (QA) models to answer those questions, given both the source and summary <cit.>. Factuality is then evaluated based on the alignment between the answers from the source and summary. Another class of metrics, entailment-based factuality metrics <cit.>, evaluates whether all the information in the summary is entailed by the source document. Recent studies on leveraging pre-trained language model as evaluation <cit.> also achieve competitive performance on evaluating factuality. §.§ Improving Factuality of Abstractive Summarization via Contrastive Learning Several contrastive learning frameworks have been proposed to enable models to learn factuality from positive samples (such as reference summaries) and negative samples (such as edited reference summaries and system generated summaries). For example, CLIFF <cit.> and CO2Sum <cit.>. Both of which are similar in nature but CO2Sum employs more sophisticated methods for negative sample construction. § CONCLUSION In this work, we present a simple contrastive reward learning framework that enforces abstractive summarization models to learn from feedback of existing factuality metrics. Empirical studies demonstrate the effectiveness of this approach, showing that abstractive summarization models that learn from factuality metric feedback through contrastive reward learning can generate more factual summaries without sacrificing coherence or relevance. This suggests that further advancements in the reward learning paradigm and factuality metrics can facilitate the development of more factually consistent abstractive summarization models. § LIMITATIONS While we have included two distinctive dataset (CNNDM and XSUM) in our experiments, more non-news datasets could be included in future studies. Other possibilities for future work include comparing the capability of RL-based reward learning and contrastive reward learning in improving the factuality of abstractive summarization models. § ETHICS STATEMENT Even though some of the investigated systems may achieve a high level of factuality on the CNNDM dataset, this does not guarantee that they can be used as off-the-shelf factual consistent summarization models. Thorough evaluation should be conducted before using these models in high-stakes settings to ensure their reliability. 
§ ACKNOWLEDGEMENTS We would like to thank Yixin Liu for helpful discussion on BRIO. We would also like to thank Tanya Goyal for helpful discussion on DAE.
http://arxiv.org/abs/2307.05733v1
20230711190025
Dzyaloshinskii-Moriya interactions, Néel skyrmions and V$_4$ magnetic clusters in multiferroic lacunar spinel GaV$_4$S$_8$
[ "Vladislav Borisov", "Nastaran Salehi", "Manuel Pereiro", "Anna Delin", "Olle Eriksson" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.str-el" ]
[Corresponding author: [email protected]] Using ab initio density functional theory with static mean-field correlations, we calculate the Heisenberg and Dzyaloshinskii-Moriya interactions (DMI) for an atomistic spin Hamiltonian for the lacunar spinel, GaV_4S_8. The parameters describing these interactions are used in atomistic spin dynamics and micromagnetic simulations. The magnetic properties of the lacunar spinel GaV_4S_8, a material well-known from experiment to host magnetic skyrmions of Néel character, are simulated with these ab initio calculated parameters. The Dzyaloshinskii-Moriya contribution to the micromagnetic energy is a sum of two Lifshitz invariants, supporting the formation of Néel skyrmions, and its symmetry agrees with what is usually expected for C_3ν-symmetric systems. There are several conclusions one may draw from this work. One concerns the quantum nature of the magnetism, where we show that the precise magnetic state of the V_4 cluster is crucial for understanding quantitatively the magnetic phase diagram. In particular, we demonstrate that a distributed-moment state of each V_4 cluster explains well a variety of properties of GaV_4S_8, such as the band gap, observed Curie temperature and especially the stability of Néel skyrmions in the experimentally relevant temperature and magnetic-field range. In addition, we find that electronic correlations visibly change the calculated value of the DMI. Dzyaloshinskii-Moriya interactions, Néel skyrmions and V_4 magnetic clusters in multiferroic lacunar spinel GaV_4S_8 Olle Eriksson August 12, 2023 ==================================================================================================================== § INTRODUCTION Magnetic skyrmions, which are topological spin textures, have been found in a few materials classes (B20 compounds [Muehlbauer2009,Yu2011], Co-Mn-Zn alloys etc., see also a review in Ref. [Kanazawa2017]) as well as in low-dimensional systems of transition metal multilayers (Pt/Co/Ta [Wang2019], Ir/Fe/Co/Pt [Soumyanarayanan2017], Pd/Fe/Ir(111) [Romming2013] etc.) where, for some systems, topological magnetism was observed even at room temperature and in the absence of applied magnetic field. Even more unique are bulk magnets where skyrmions coexist with ferroelectricity and the only known examples are Cu_2OSeO_3 and the lacunar spinels GaV_4S_8 [Kezsmarki2015], GaV_4Se_8 [Bordacs2017,Fujima2017], GaMo_4S_8 [Butykai2022] and GaMo_4Se_8 [Schueller2020]. The spinel compounds are especially interesting, because the ferroelectricity is of rare orbital-driven origin and has a considerable magnitude and because the skyrmions are of Néel character. This is in contrast to all the other bulk systems, where only Bloch skyrmions are observed. Such a unique behavior of lacunar spinels has been attributed to the C_3ν point group of the crystal structure (Fig. <ref>). It was argued [Bogdanov2002] that this symmetry implies a specific form of the Lifshitz invariants describing the Dzyaloshinskii-Moriya interaction (DMI), which leads to the stability of Néel skyrmions and contrasts with the isotropic DMI in B20 compounds where only Bloch skyrmions emerge. At the same time, theoretical studies of the DMI in lacunar spinels are sparse. In many papers, some values of magnetic interactions between S=1/2 V_4-clusters are assumed and spin models with only nearest neighbors are then used to model the magnetic textures at varying external magnetic field strengths[Kezsmarki2015,Fujima2017].
On the other hand, there are a few works where the Heisenberg and DM interactions between the V_4-clusters are actually calculated, using perturbation theory or total-energy fitting methods [Zhang2017,Nikolaev2019,Nikolaev2020,Schueller2020], and the results clarify the symmetry of DM vectors and the relative energy scales of different interactions in the system. Neutron experiments [Dally2020] and theoretical studies [Schueller2019] propose that the magnetization is uniformly distributed over all V sites in the V_4 cluster. Nevertheless, it is not clear yet how the DMI in lacunar spinels is affected by electronic correlations and details of the magnetic state of the four-site transition metal clusters, which play the role of effective spins and form a face-centered network (Fig. <ref>). The work presented here is aimed at closing this gap by a systematic analysis of electronic and magnetic properties of the first skyrmionic lacunar spinel GaV_4S_8. § THEORETICAL METHODS To understand the magnetic phenomena in GaV_4S_8 we follow a multiscale approach where we start with a description on the level of individual electrons, proceed with atomistic magnetic interactions between effective V_4 cluster moments and, finally, model the magnetic textures at finite temperature in external field on length scales in the range [10-10^3]nm. Details about these three main steps are given further below. Step 1 (electronic properties). The electronic structure and magnetic properties of lacunar spinels are studied in this work using density functional theory (DFT) [Hohenberg1964] in the full-potential Linear Muffin-Tin Orbital implementation, available in the RSPt code [Wills1987,Wills2010]. Electronic correlations are modeled here on the static mean-field level by means of the DFT+U approach with varied U and fixed Hund's coupling J_H=[0.9]eV on top of the spin-polarized local-density or generalized-gradient approximations of the DFT. Summation in the Brillouin zone is performed on the shifted (16× 16× 16) k-mesh and the Fermi smearing with a temperature of [1]mRy is used for electronic occupations. All calculations are performed for the known experimental structure reported in the literature. Two different V_4 cluster configurations are considered here, where the magnetic moment is either localized mostly on one V site or distributed over the whole cluster (four V sites, see Fig. <ref>c). Note that the total moment per cluster is around [1]μ_B in both cases. It also has to be noted that, because of the elongation of V_4 tetrahedra along the [111]-direction, one of the V sites (V_1) is not symmetry-related to the other three sites (V_2) which are, however, equivalent to each other. As will become clear in the following, the cluster configuration can change the calculated properties substantially. As our on-going calculations suggest and in accordance with literature [Schueller2020], the other lacunar spinel GaMo_4Se_8 with 4d states shows more uniformly magnetized Mo_4 clusters with parallel-aligned spin moments, in contrast to the 3d V-based spinels. This may indicate a fundamental difference between the 3d and 4d lacunar spinels, which will be discussed in a future work. Step 2 (magnetic exchange). Magnetic interactions are calculated using the well-established Lichtenstein-Katsnelson-Antropov-Gubanov (LKAG) approach [LKAG1987] (for a recent review see [Jijreview2023]), where the idea is to relate the interaction between two spins to the energy change due to a small perturbation of the magnetic state. 
We use the muffin-tin projection to calculate site-specific electronic parameters [Kvashnin2015] and restrict ourselves to bilinear contributions to the magnetic energy for each pair of spins S⃗_i, which is written as follows: H = -J_ij (S⃗_i ·S⃗_j) -D⃗_ij· (S⃗_i ×S⃗_j) - S⃗_i Γ̂_ij S⃗_j, where one can distinguish between the isotropic Heisenberg (J_ij), Dzyaloshinskii-Moriya (D⃗_ij) and symmetric anisotropic (Γ̂_ij) exchange interactions. To calculate the DM interaction, we perform three independent calculations where the spin axis is oriented along the x-, y- and z-directions to obtain the D_x, D_y and D_z components. It is also worth mentioning that the total magnetic moment of GaV_4S_8 varies only slightly upon such a global rotation of the magnetization. We also find that the symmetric anisotropic exchange Γ̂_ij is one or two orders of magnitude smaller than the DM interaction for different bonds, so we do not include this type of exchange in further simulations for GaV_4S_8. Application of the whole approach described above to several transition metal systems in our previous works [Borisov2021,Ntallis2021,Borisov2022] has demonstrated the reliability of the calculated values of magnetic interactions, which justifies the use of this approach in the present work. The critical temperature T_c for the magnetic ordering is estimated from Monte Carlo simulations based on the calculated exchange interactions J_ij and D⃗_ij. Simulations are done using the UppASD code [uppasd,Eriksson2017] for bulk supercells containing (N× N× N) unit cells with periodic boundary conditions, where we compared N=10, 20, 30 to estimate the size effects (see example in Fig. <ref>a in the SI). Initial annealing is performed for 5·10^4 steps at the simulated temperature and statistical sampling of different observables is done afterwards for 10^5 steps. To make it easier to discuss and present graphically different interactions in the studied spinels, we define effective magnetic interactions which characterize interactions between whole V_4 clusters (assuming frozen intra-cluster, magnetic degrees of freedom), instead of single atoms: J_eff^mn = ∑_i∈{m}∑_j∈{n} J_ij, D⃗_eff^mn = ∑_i∈{m}∑_j∈{n}D⃗_ij. Here, the summation runs over all sites of cluster m and all sites of another cluster n. Such effective parameters imply the assumption that the coupling between four spins within each V_4 cluster is significantly stronger than between different clusters, which is confirmed by our calculations, described below, and that during spin dynamics at not too high temperature the spins of the same cluster rotate synchronously. Numerical results obtained in this way are discussed below (Sections IV and V) for GaV_4S_8. These parameters, however, lead to a different temperature-dependent magnetization (from Monte-Carlo simulations) in the ferromagnetic state compared to the interactions J_ij and D⃗_ij between individual V sites (see example in Fig. <ref>b in the SI). The nature of this effect will be studied in the future. Step 3 (micromagnetics). From the atomistic interaction parameters J_ij and D⃗_ij defining the spin model (<ref>) one can go to the continuous limit and obtain the micromagnetic energy which can be used to model the magnetic properties on length scales that range up to hundreds of nanometers. Let us consider the derivation of the micromagnetic energy density ε_DM due to the DM interaction (derivation for the Heisenberg exchange is similar). 
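Before turning to that derivation, the cluster-level aggregation of Eqn. (<ref>) introduced above can be illustrated with a short sketch. The data layout (dictionaries of pair parameters and a site-to-cluster map) is purely an assumption made for illustration and does not correspond to the RSPt or UppASD file formats.

```python
# Minimal sketch (assumed data layout): collapse the site-resolved exchange
# parameters into effective cluster-cluster interactions,
# J_eff^{mn} = sum_{i in m} sum_{j in n} J_ij, and likewise for the DM vectors.
import numpy as np

def effective_cluster_couplings(J_ij, D_ij, cluster_of):
    """J_ij: dict {(i, j): float}; D_ij: dict {(i, j): ndarray of shape (3,)};
    cluster_of: dict mapping a site index to its V4-cluster index."""
    J_eff, D_eff = {}, {}
    for (i, j), J in J_ij.items():
        m, n = cluster_of[i], cluster_of[j]
        if m == n:
            continue  # intra-cluster terms are frozen out of the effective model
        J_eff[(m, n)] = J_eff.get((m, n), 0.0) + J
        D_eff[(m, n)] = D_eff.get((m, n), np.zeros(3)) + D_ij[(i, j)]
    return J_eff, D_eff
```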
Following the derivations in [Poluektov2018,Zhang2017], one can start from the atomistic DM interactions between spin on site i and all other spins on sites j: ε_DM = -∑_jD⃗_ij· (S⃗_i ×S⃗_j) and replace S⃗_i with the micromagnetic order parameter, m⃗≡m⃗(r⃗), as well as S⃗_j with 1^st-order expansion m⃗+(R⃗_ij·∇⃗)m⃗, where R⃗_ij is the distance between the two sites. Substituting this into Eqn. (<ref>) leads to a DM contribution to the micromagnetic energy density: ε_DM = -∑_jD⃗_ij· (m⃗× (R⃗_ij·∇⃗)m⃗) = = +m⃗·[ ∑_jD⃗_ij (R⃗_ij·∇⃗)]×m⃗, where, one can define the spiralization matrix D̂≡ D_αβ (α,β=x,y,z) as D_αβ = ∑_j≠ i D_ij^α R_ij^β. In general, this matrix can contain nine non-zero components and has to be consistent with the crystal symmetry. For GaV_4S_8, we find that the only non-zero components are D_xy = -D_yx = D. Due to this specific form, the x-component of the vector in the square brackets in Eqn. (<ref>) is D_xy ∂/∂ y and the y-component is -D_xy ∂/∂ x, while the z-component is zero. Accordingly, the DM contribution to the micromagnetic energy reads: ε_DM = (m_x, m_y, m_z)·| [ e⃗_x e⃗_y e⃗_z; D ∂∂ y -D ∂∂ x 0; m_x m_y m_z ]|. A straightforward calculation gives the following energy density, ε_DM, in a form which coincides with the interfacial type of DM interaction often discussed in the literature for magnetic films (see Eqn. (8) in Ref. Bogdanov2001): -D [ m_x ∂ m_z/∂ x - m_z ∂ m_x/∂ x + m_y ∂ m_z/∂ y - m_z ∂ m_y/∂ y]. This result agrees with the Lifshitz invariants expected for the C_3ν crystal symmetry, as discussed, for example, in Ref. [Bogdanov2002] (Eqn. 6 in this reference) and Ref. [Ado2020] (Table I in this reference), and with the derivation for another spinel GaV_4Se_8 in the SI of Ref.[Zhang2017]. In the latter reference, however, only nearest neighbors and cluster-cluster interactions were taken into account, while in our work we include the full information on the intersite interaction parameters J_ij and D⃗_ij between several hundred and a couple of thousand neighbors. As a side remark, for bulk systems with cubic crystal symmetry and isotropic DMI, such as B20 compounds MnSi and FeGe, the second line in the determinant in Eqn. (<ref>) would be D·(∂/∂ x,∂/∂ y,∂/∂ z), since the spiralization matrix D̂ is diagonal, as verified by our direct calculations (not shown here). This would lead to the usual expression for the isotropic DM energy density ε_DM = D m⃗·(∇⃗×m⃗). Calculation of the micromagnetic parameters and magnetic textures (helical states and skyrmions) for B20 systems is discussed, for example, in our recent works [Borisov2021,Borisov2022] as well as in Refs.[Gayles2015,Kashin2018,Grytsiuk2019]. Similarly, one can also derive the magnetic energy for the Heisenberg exchange: ε_H = ∑_j≠ i J_ijS⃗_i ·S⃗_j → ∑_j≠ i J_ijm⃗·( m⃗ + (R⃗_ij·∇)m⃗ + 1/2 (R⃗_ij·∇)^2m⃗). Here, the first term is a constant energy contribution, since it is proportional to m⃗^2, that is equal to 1 (because constant length of magnetic moment vectors is considered). One can also show that the next term with a 1^st-order derivative reads: m⃗· (R⃗_ij·∇)m⃗≡ m_α R_ij^β∇_β m_α = R_ij^β∇_β( m_α^2/2 ). which is zero, because m_α m_α = 1. Finally, the leading-order term in Eqn. (<ref>) reads: ε_H = 1/2∑_j≠ i J_ijm⃗· (R⃗_ij·∇)^2m⃗ = = 1/2∑_j≠ i J_ij R_ij^α R_ij^β m_γ∇_α∇_β m_γ. Usually, only the diagonal term is considered in micromagnetic simulations (α = β), and the energy becomes proportional to the spin stiffness A defined as follows: A = 1/2∑_j≠ i J_ij R^2_ij. 
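Both of these sums can be transcribed almost literally into code. The following hedged numpy sketch evaluates the bare spiralization matrix and spin stiffness from a list of pair parameters; the pair-list layout (J in meV, D and R as 3-vectors in meV and Angstrom) is an assumption.

```python
# Sketch of the bare (unregularized) evaluation of the spiralization matrix
# D_{alpha beta} = sum_j D_ij^alpha R_ij^beta and the spin stiffness
# A = 1/2 sum_j J_ij |R_ij|^2 from a list of pair parameters.
import numpy as np

def spiralization_and_stiffness(pairs):
    """pairs: iterable of (J_ij, D_ij, R_ij) with D_ij and R_ij of shape (3,)."""
    A = 0.0
    D = np.zeros((3, 3))
    for J, Dvec, R in pairs:
        A += 0.5 * J * np.dot(R, R)   # meV * Angstrom^2
        D += np.outer(Dvec, R)        # meV * Angstrom, D[alpha, beta]
    return A, D
```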
As discussed in the literature [Pajda2001], and our recent work [Borisov2022], the numerical evaluation of micromagnetic parameters according to Eqns. (<ref>) and (<ref>) shows a convergence problem with respect to the real-space cutoff. For that reason, following the literature recipe in Ref. [Pajda2001], we introduce an exponential regularization factor (in contrast to Eqn. (<ref>), indices i and j refer either to atomic sites or different clusters): A = 1/2∑_j≠ i J_ijR^2_ij e^-μ R_ij, D_αβ = ∑_j≠ i D_ij^α R_ij^β e^-μ R_ij. where the limit μ→ 0 is taken at the final step. The exponential factors are introduced here to improve the convergence with respect to the real-space cutoff for the magnetic interactions. The calculated values of A and D̂ as functions of parameter 1.0 < μ < 2.0 are then extrapolated to μ = 0 using a 3^rd-order polynomial, which gives a reasonable fitting quality, and a discussion of some further technical details can be found in the SI of Ref. Borisov2022. As mentioned on page 2, we compare the properties of GaV_4S_8 for two electronic configurations of V_4 clusters which we find to be relatively close in energy (Δ E ∼[100]meV). In case of the distributed-moment configuration (Fig. <ref>b), we calculate the spin stiffness A and DM spiralization D̂ from the effective interactions defined by Eqn. (<ref>), because the spin stiffness from the original J_ij interactions between individual V sites is negative. This is natural to expect since the cluster configuration is ferrimagnetic in our calculations. The effective cluster-cluster DM interaction, on the other hand, disregards the internal DMI between V sites in the same cluster, which should not matter for large-scale magnetic textures. This is because we assume that the internal magnetic structure of V_4 clusters is frozen, which is a good approximation due to the large calculated intracluster magnetic exchange which can have a magnitude as large as [28]meV. For the localized-moment state (Fig. <ref>c), we compute the micromagnetic parameters calculated from the original interatomic interactions while taking into account only the V_1 sites with the largest magnetic moment. The resulting micromagnetic parameters are scaled down by the square of the V_1 moment to simulate an effective [1]μ_B V_4 cluster. Regarding the DM spiralization matrix (Eqn. <ref>), initially we obtain it in the basis of Cartesian unit vectors e⃗_α (xyz-basis). It turns out, however, that it is more convenient to work in the basis where the e⃗_z is along the [111] direction (relevant for skyrmions) and the other two basis vectors are in the (111)-plane. In particular, we choose e⃗_y as a vector connecting the [111] line with the end of the lattice vector a⃗_2 and e⃗_x is obtained as a normalized cross product of e⃗_y and e⃗_z. The inversion of the matrix, where rows are formed by these new unit vectors e⃗_α, results in a unitary matrix Û that defines a transformation to the [111]-basis. For the DM spiralization matrix D̂ in this new basis, one gets: D̂ = Û^TD̂' Û, where D̂' is the matrix in the xyz-basis. It is in the [111]-basis where the spiralization matrix of lacunar spinel GaV_4S_8 has a dominant component D_xy = -D_yx = D, in agreement with the C_3ν crystal symmetry. For that reason, in the paper we discuss the DM interaction and perform the micromagnetic simulations in the basis where the z axis is along the [111]-direction. 
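The regularized sums, the μ → 0 extrapolation and the change to the [111]-basis described above might be transcribed as follows. The μ grid and the pair-list layout are assumptions, while the third-order polynomial fit and the relation D̂ = Û^T D̂' Û follow the recipe given in the text.

```python
# Sketch: every pair is weighted by exp(-mu * |R_ij|), the sums are evaluated on a
# grid of mu values between 1.0 and 2.0, and a cubic polynomial fit is used to
# extrapolate to mu = 0. The final rotation follows D = U^T D' U from the text.
import numpy as np

def regularized_sums(pairs, mu):
    A, D = 0.0, np.zeros((3, 3))
    for J, Dvec, R in pairs:
        w = np.exp(-mu * np.linalg.norm(R))
        A += 0.5 * J * np.dot(R, R) * w
        D += np.outer(Dvec, R) * w
    return A, D

def extrapolate_to_mu_zero(pairs, mus=np.linspace(1.0, 2.0, 11)):
    A_mu, D_mu = [], []
    for mu in mus:
        A, D = regularized_sums(pairs, mu)
        A_mu.append(A)
        D_mu.append(D)
    D_mu = np.array(D_mu)                               # shape (len(mus), 3, 3)
    A0 = np.polyval(np.polyfit(mus, A_mu, 3), 0.0)      # cubic fit, evaluated at mu = 0
    D0 = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            D0[a, b] = np.polyval(np.polyfit(mus, D_mu[:, a, b], 3), 0.0)
    return A0, D0

def rotate_to_111_basis(D_prime, U):
    """U: unitary matrix whose rows are the new basis vectors, with z along [111]."""
    return U.T @ D_prime @ U
```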
Regarding the on-site anisotropy, we assume in our simulations that the uniaxial anisotropy energy constant K_1 is between [(10-16)]kJ/m^3, as suggested in previous works [Ehlers2016,Padmanabhan2019]. This anisotropy constant is even smaller than K_1 = [45]kJ/m^3 for hcp Co and cubic anisotropy constant K'_1 = [48]kJ/m^3 for bcc Fe but larger than K_1 = [-0.5]kJ/m^3 for fcc Ni. Despite the small value of K_1 for GaV_4S_8, as we will see in the following, it is important to include anisotropy in simulations of magnetic skyrmions in this class of systems. Using the calculated micromagnetic parameters, including the DM interaction (Eqn.<ref>) and the literature values of the uniaxial anisotropy [(10-16)]kJ/m^3, we perform micromagnetic simulations of magnetic textures at finite temperature and in external magnetic field using the UppASD­ [uppasd,Eriksson2017] and MuMax3 [mumax3] codes. The temperature is varied between 0 and [30]K and the external field – between 0 and [600]mT, which corresponds to the experimentally studied range of parameters. The micromagnetic region is described by a (512× 512× 1) mesh with an equidistant step Δ h in all directions (Δ h = [0.5]nm for simulating skyrmions and Δ h = [1.0]nm for simulating ferromagnetic state and spin spirals) and non-periodic boundary conditions. We have also verified the effect of dimension in the z-direction and the boundary conditions (see discussion in Section VI). The magnetization dynamics is described by the Landau-Lifshitz-Gilbert equation [Landau1935,Gilbert2004]: ∂m⃗_i/∂ t = -γ/1 + α^2( m⃗_i ×B⃗_i + α/m m⃗_i × (m⃗_i ×B⃗_i) ), where m⃗_i is the magnetization of a given micromagnetic region i, and the effective field B⃗_i is determined from the micromagnetic parameters A, D and uniaxial anisotropy, K, as well as the external field, B⃗, and the classical dipolar field; γ is the gyromagnetic ratio. At finite temperature, random field proportional to √(α T) is added to B⃗_i, and the damping constant α is set to 0.04. Variable time step in the range [3·10^-15-5·10^-14]s is used for the dynamics simulations, and the total simulation time was around [(2-5)]ns, which allowed to reach the equilibrium state starting from a random magnetic configuration. Micromagnetic simulations are run starting from a random magnetic configuration at a temperature of [20]K. The system is cooled in [2]K-temperature steps down to [0]K, which corresponds to simulated annealing, and then the system is heated up to [20]K with the same speed. Results for the anisotropy values K = [10]kJ/m^3 and K = [16]kJ/m^3 are compared. Since the [111]-axis of the crystal structure is defined now as the z-axis, which leads to expression (<ref>) for the DMI energy, the anisotropy easy-axis is also along the z-direction. Data Availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. § ELECTRONIC PROPERTIES In the following, we discuss the electronic structure, and related properties, of the lacunar spinel GaV_4S_8, calculated using the local-density (LDA) as well as the generalized-gradient (GGA) approximation, also including correlations described on the static mean-field level of spin-polarized DFT with Hubbard-U corrections. It is noteworthy that the electronic structure work presented here shows two stable minima, one with a total moment of ∼ 1 μ_B for each tetrahedron of V atoms of the lacunar spinel structure (Fig. 
<ref>c) and one solution in which the total moment per V tetrahedron still is close to 1 μ_B, but where one of the V atoms of each tetrahedron carries a significantly larger moment compared to the other V atoms (Fig. <ref>b). We refer to the first configuration as the distributed moment case, and the second configuration as the localized moment case. The calculated electronic states of GaV_4S_8 near the Fermi level, E_F, are represented by a number of states with a narrow bandwidth crossing E_F, as indicated by the LDA results for the localized moment state of V_4 clusters in Fig. <ref>a. The spin-splitting of bands increases somewhat in response to electronic correlations in DFT+U calculations and a small gap of the order of [0.1]eV appears in the electronic spectrum for U=[2]eV (Fig. <ref>c). Also, the distributed moment state (Fig. <ref>b) of V_4 clusters becomes stable, but we find that the energy of the localized moment state is somewhat lower by around [76]meV/f.u. Varying U between 0 and [2]eV increases the total magnetic moment per formula unit from [0.8]μ_B to almost [1.0]μ_B, and these values agree with the range of measured values in the literature [Pocha2000]. The results discussed above are obtained within the local-density approximation. Using the generalized-gradient approximation (GGA) we obtain essentially the same trends, but GGA shows a stronger tendency to magnetism. This leads to the magnetic moment of each V_4 tetrahedron being close to [1.0]μ_B already for pure DFT (U=[0]eV) calculations (Table I). At increasing correlation strength U, the moments of both V_1 and V_2 sites are enhanced (Table I), but the total moment of each V_4 cluster remains almost the same. Next, we notice that the DFT+U band structures and densities of states (Fig. <ref>) within LDA and GGA are similar but there is an offset around [1]eV in terms of the U values between the two approximations, meaning that in GGA+U one needs U values smaller by roughly [1.0]eV to get results similar to LDA+U. At U=[2]eV, we find again that the distributed-moment state can be stabilized (last row in Table I) but it is higher in energy by [99]meV/f.u. compared to the localized-moment state. Finally, the band gap calculated in GGA is larger compared to the LDA estimate, which is in accordance with literature (see Fig. 10 in [YiqunWang2019]). § MAGNETIC EXCHANGE At U=[0]eV, both in LDA and in GGA, we find a considerable ferromagnetic interlayer Heisenberg interaction (J_⊥) of the order of [4]K, which couples the neighboring (111)-planes of V_4 clusters. Furthermore, we find a much weaker intralayer interaction (J_||), which is between the neighboring clusters in each of these (111)-planes. This picture changes when correlations are included within DFT+U. In particular, the intralayer exchange J_|| becomes stronger (reaching up to almost [12]K) and can outweigh the interlayer exchange J_⊥. Interestingly, within the LDA, J_⊥ increases as a function of U until U=[1]eV and then decreases, while GGA results show decreasing J_⊥ which even becomes antiferromagnetic at U=[2]eV where the localized-moment state of the V_4 clusters changes to the distributed-moment state (Fig. <ref>c). In LDA, however, such a transformation of the V_4 cluster does not lead to change of sign of effective magnetic interactions. It is necessary to emphasize that the parameters J_⊥ and J_|| that we discuss here are actually the effective interactions defined by Eqn. (<ref>) and are introduced to make the discussion of the results more transparent. 
For Monte Carlo (MC) simulations at different temperatures, we use the original data, i.e. interactions between individual V sites (even weakly magnetic ones). One should note that, even though the effective interlayer exchange becomes AFM for U=[2]eV in GGA, the Monte Carlo simulations using not just effective but all calculated intersite interactions actually find the correct ferromagnetic ground state, allowing to conclude that the effective cluster-cluster models probably cannot cover the whole physics in this system (see further Monte Carlo simulations in Fig. <ref>b in the SI). Figure <ref> shows how the Curie temperature, estimated from our MC simulations using both the calculated Heisenberg and Dzyaloshinskii-Moriya interactions, changes as a function of electronic correlation strength characterized by the U parameter. For the localized-moment state, the ordering temperature for U=[(1.0-1.5)]eV is around [12]K and agrees nicely with the measured value of [13]K [Kezsmarki2015]. Electronic correlations increase slightly the Curie temperature (by a few degrees) and at some point (U=[2]eV) allow to stabilize the distributed moment state, as discussed above. The temperature dependence of magnetization (M(T)) curves for both cluster configurations is similar (Fig. <ref>) and look like typical M(T) curves for ferromagnets. However, the distributed-moment state is characterized by a higher Curie temperature (around [23]K), which is overestimated compared to the experimental value of [13]K. Stronger magnetism for the distributed-moment configuration is likely caused by the larger number of magnetic exchange paths when all V sites in each V_4 cluster have non-zero magnetic moments. § DZYALOSHINSKII-MORIYA INTERACTION For GaV_4S_8 within pure DFT (LDA approach, U=[0]eV), we obtain the spiralization matrix in the [111]-basis (in units of meV·Å)[In this paper, all DM spiralization matrices are given with a precision of two digits after comma and all numbers with the absolute value below 0.01 are written as 0.], as described in the Methods section: D̂ = ( [ 0 +0.12 0; -0.12 0 0; 0 0 0; ]). The [111]-basis appears to be more convenient for studying the micromagnetic behavior than the Cartesian basis, since the D̂ matrix has just one dominating component D_xy which is to be substituted as the D parameter in Eqn. (<ref>). The obtained form of the spiralization matrix agrees with a previous work [Zhang2017] and the observation of Néel skyrmions in this bulk system [Kezsmarki2015]. Possible origin of the slight asymmetry (∼ 10^-3) of the calculated spiralization matrix may be related to a weak dependence of electronic properties on the total magnetization direction due to spin-orbit coupling. When electronic correlations are included on the mean-field level with LDA and U=[1]eV, the DMI increases dramatically by more than a factor of 4 and changes sign: D̂ = ( [ 0 -0.54 0; 0.54 0 0; 0 0 0; ]) Similar response to moderate correlations is observed for the cluster-cluster DM interactions, in particular, for the nearest-neighbor in-plane interaction D_1a (Fig. <ref>). Large enhancement of the DMI can be related to the change of the magnetic state of the V_4 clusters: at U=[1]eV the magnetic moment of one of V sites increases by more than a factor of 2, while the other three sites remain weakly magnetic but change to the opposite direction. This more asymmetric distribution of magnetization at U=[1]eV can, in principle, create additional inversion-symmetry breaking effect which increases the DMI. 
On top of that, the system becomes semimetallic (Fig. <ref>), since the band gap is zero but no bands directly cross the Fermi level. Within the generalized-gradient approximation (GGA), we find a contrasting behavior, since the DMI parameter D_xy decreases in absolute value for the localized-moment state as a function of electronic correlations U, from D_xy=[-0.20]meV·Å at U=[0]eV to [-0.16]meV·Å at U=[2]eV. We should mention that, in contrast to LDA, correlations added within GGA lead to a band gap opening already at U=[1]eV (Fig. <ref>) which could partially explain the differences in the calculated DMI. Notably, for the distributed-moment state in GGA, which is also stable at U=[2]eV, the DM parameter D_xy=[-1.37]meV·Å is considerably larger. Strong dependence of the DM spiralization constant on the cluster state makes sense in view of the other findings shown in Fig. <ref> (for LDA approximation) where the atomistic DMI is larger for the more asymmetric magnetization distribution in the cluster which may break further the inversion symmetry. Also, the localized- and distributed-moment states of V_4 cluster lead to distinct band structures (Fig. <ref> in the SI) and this can have a sizeable effect on the DMI too. In addition, the spin stiffness A decreases by almost a factor of 5 (in GGA) in response to additional electronic correlations with U=[2]eV. Because of that, the A/D_xy ratio decreases from [26.3]nm to [7.4]nm, suggesting again that the magnetic properties of GaV_4S_8 are very sensitive to electronic correlations. On the other hand, at U=[2]eV also the distributed moment state is stable and it shows a relatively large spin stiffness, compared to the localized moment state. Also, the DM interaction is considerably larger resulting in the A/D ratio of [5.0]nm. The A/D ratio is important in the context of non-collinear magnetism as well as skyrmions and lower values of A/D, in general, are expected to indicate more compact skyrmions and helical spin states, which is confirmed by the micromagnetic simulations in Sec. VI. Notably, the LDA+U results show a spin stiffness which is roughly a factor of two larger compared to the GGA+U estimates for U≥[1]eV. It is worth mentioning that the strongest interatomic DM interaction (D_0∼[0.6]meV≈[6.8]K) in these spinels, according to our calculations, comes from the V-V bonds within each metal cluster, implying that the actual magnetic state of V_4 clusters may be non-collinear. On the other hand, the non-collinearity is not expected to be large, since the canting angles within the cluster should be of the order of D_0/J_0 where J_0 is the nearest-neighbor Heisenberg V-V exchange which is, as we find, antiferromagnetic and in the range of several hundred Kelvin. For that reason, canting angles around a few degrees can be expected, which should not change the main findings reported in the present work. This intracluster DMI would actually contribute significantly to the calculated micromagnetic constant D, and it is a subtle question whether to include this DMI or not when addressing the behavior of skyrmions in this system. A strong argument not to do so, we suggest, is that the internal magnetic exchange in each V_4 cluster is very large in our calculations and, for that reason, the spins of each cluster are expected to co-rotate. In that case, the internal DMI as well as the internal Heisenberg exchange do not contribute to the micromagnetic energy. 
Table II summarizes our findings for the spin stiffness and DM spiralization for the different calculation setups and two V_4 cluster states. § MICROMAGNETIC SIMULATIONS Part of the motivation for this study is the experimental realization of Néel skyrmions in the lacunar spinel GaV_4S_8. As an ultimate test of the accuracy of the calculated electronic structure and interatomic exchange parameters, as well as the transition to micromagnetic interaction strengths, we explore here the possibility of skyrmion formation with the aforementioned parameters. It should be noted that so far no fitting to experimental data has been made and all calculations are made in ab-initio mode. To undertake this investigation we compare three sets of calculated micromagnetic parameters and their ability to reproduce skyrmions. The parameters used are: a) the localized-moment state at U = [1]eV: A = [0.1694]pJ/m, D_xy = [0.0128]mJ/m^2 b) the localized-moment state at U = [2]eV: A = [0.0824]pJ/m, D_xy = [-0.0112]mJ/m^2 c) the distributed-moment state at U = [2]eV: A = [0.4886]pJ/m, D_xy = [-0.0979]mJ/m^2 Note that the values are different here when compared to those in Table II, because they are divided by the unit cell volume and converted to the units usually used in experimental reports. The saturation magnetization of the simulations is [41.35]kA/m which corresponds to [1]μ_B per formula unit, a value found in experiment as well as in the calculations (Sec.III). For the parameter set “c” (distributed-moment state), we obtain a multi-domain ferromagnetic state with isolated skyrmions (Fig. <ref>a) at zero temperature and zero applied field. The emergence of a spin-spiral magnetic state (Fig. <ref>b), for zero-field simulations at low but finite temperature, with a period of around [20]nm, agrees well with the measured value a_cyc=[17.7]nm (see Fig. 3 in Ref. Kezsmarki2015). In addition, a state with stable skyrmions, with calculated topological charge close to ± 1, is found when the external magnetic field is applied along the ± z-direction with a strength between [(50-300)]mT (see Fig. <ref>c). The skyrmion size in our simulations depends on the external field and ranges from [27]nm at B = [25]mT to [13]nm at B = [300]mT, where the number of skyrmions is dramatically decreased. This estimated size is compatible with the experimentally observed skyrmion size a_sky=[22.2]nm reported in Fig. 3 of Ref. Kezsmarki2015. At higher fields, the system is ferromagnetic up to temperatures around [(12-14)]K. The latter marks the critical temperature also for other types of magnetic order in this system. All these findings are summarized in the calculated phase diagram in Fig. <ref>a and are in a good agreement with experiments in Ref. multiferroic. In contrast, the parameter sets “a” and “b” (obtained from the localized moment state) produce practically no skyrmions and show a magnetic order only at relatively low temperatures (see phase diagram in Fig. <ref>b), which is due to the smaller values of the A and D parameters. In general, larger spin stiffness for the distributed moment state can be explained by the fact that there are more interaction paths in the structure, since all V atoms are then magnetic. 
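As a side note on the unit conversion mentioned at the start of this section, the atomistic sums (in meV·Å^n per cluster) are turned into the SI values quoted above by dividing by the volume per V_4 cluster. In the sketch below, the volume (about 224 Å^3 per formula unit) is an assumption, chosen only because it maps the quoted D_xy = -1.37 meV·Å to roughly -0.098 mJ/m^2, consistent with parameter set “c”.

```python
# Sketch of the conversion from atomistic parameters (meV*Angstrom^n per cluster)
# to micromagnetic SI units by dividing by the volume per V4 cluster. The volume
# value is an assumption used only for illustration.
MEV_TO_JOULE = 1.602176634e-22   # 1 meV in J
ANGSTROM = 1e-10                 # 1 Angstrom in m

def to_micromagnetic(value_meV_angstrom_n, n, volume_angstrom3):
    """n = 2 for the spin stiffness A (result in J/m), n = 1 for the DMI D (J/m^2)."""
    value_si = value_meV_angstrom_n * MEV_TO_JOULE * ANGSTROM**n
    return value_si / (volume_angstrom3 * ANGSTROM**3)

if __name__ == "__main__":
    d_xy = -1.37  # meV*Angstrom, distributed-moment state within GGA+U (see Sec. V)
    print(to_micromagnetic(d_xy, 1, 224.0) * 1e3, "mJ/m^2")   # approximately -0.098
```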
By varying the micromagnetic parameters A, D and K, we find (data not shown) that the A/D, A/K and D/K ratios are all important for stabilizing skyrmions, which explains why the parameter set “c” (distributed moment state) gives a better agreement with experiment, given that the anisotropy is in the range [(10-16)]kJ/m^3. Our conclusions for the three parameter sets used in the micromagnetic simulations are qualitatively robust with respect to at least 10%-variations of the strength of the interactions. We note that lower DM values lead to smaller saturation fields for stabilizing a ferromagnetic state. Results for non-periodic and periodic boundary conditions in the simulations are also found here to yield similar results. We have also verified that increasing the out-of-plane dimension (parallel to the external magnetic field; the z-axis) of the simulation cell from n_z = 1 to n_z = 16 does not change qualitatively the phase diagrams shown in Fig. <ref> (which are obtained using n_z = 1). The most prominent, quantitative change of these simulations, compared to the thin 2D simulation cell, is an increase of the Curie temperature for the distributed moment configuration up to around [20]K and a lowering of the magnetic field needed to induce the ferromagnetic state. For the localized-moment configuration, the Curie temperature remains essentially the same for thinner and thicker simulation cells. To summarize this section, our results indicate that the distributed-moment configuration (Fig. <ref>c) describes better the magnetic phase diagram (Fig. <ref>a) of bulk GaV_4S_8 spinel, which shows ferromagnetic phase, spin spirals and Néel skyrmions. An example of the magnetic texture of a Néel skyrmion is shown in Fig. <ref>, isolated in a ferromagnetic background a) and in a lattice b). Based on the experimental phase diagram reported in Ref. multiferroic, the results shown in Fig. <ref>a) have similar trends for the transition to the paramagnetic phase. In addition, the transition to the ferromagnetic phase occurs at small magnetic fields (in the range around [250-400]mT-as shown in Fig. <ref>a)) which agrees with observations (in the experiments it is [50-160]mT). The Curie temperature calculated in our simulations is in the neighborhood of [17]K which is close to the experimental one, i.e. ∼[13]K. The only difference between phase diagrams is related to the skyrmionic and cycloidal regions. In the experimental phase diagram, the skyrmion lattice phase only appears from [9]K to [12.7]K but in the theoretical phase diagram, the temperature range where both phases appear goes from [0]K until near [17]K. Figures <ref>a) and b) show the phase diagram of the system in 2D for distributed and localized moments, respectively. By performing simulations for the 3D system, we obtained the same Curie temperature for the localized moments as the 2D case but the phase transition to the ferromagnetic ordering was occurring at very small magnetic fields (around [15-20]mT). In the 3D simulations for the distributed moments, the calculations indicate that the same phase transitions for skyrmionic and cycloidal regions occur at the same values for the temperature and external magnetic field but the transition to the paramagnetic ordering arises at higher Curie temperature. In Fig. <ref> in the SI are also shown eight layers of the 3D simulation for the distributed moments. The figure emphasizes the tubular quasi-2D nature of skyrmions in GaV_4S_8. 
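The skyrmionic regions of the phase diagrams above are identified through a topological charge close to ±1. As a hedged illustration (not the internal algorithm of MuMax3 or UppASD), the standard continuum estimate Q = (1/4π) ∫ m · (∂_x m × ∂_y m) dx dy can be evaluated for a single two-dimensional layer as follows, assuming unit-length magnetization vectors on a uniform grid.

```python
# Sketch: continuum finite-difference estimate of the topological charge of one
# 2D magnetization layer. Lattice-based definitions used internally by
# micromagnetic codes may differ in detail.
import numpy as np

def topological_charge(m, dx=1.0, dy=1.0):
    """m: array of shape (nx, ny, 3) with |m| = 1 on every grid point."""
    dmdx = np.gradient(m, dx, axis=0)
    dmdy = np.gradient(m, dy, axis=1)
    density = np.einsum('ijk,ijk->ij', m, np.cross(dmdx, dmdy))
    return density.sum() * dx * dy / (4.0 * np.pi)
```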
§ CONCLUSIONS In this theoretical study of the lacunar spinel GaV_4S_8, we find that the Dzyaloshinskii-Moriya interaction (DMI) calculated from first principles and the associated micromagnetic energy reflect the C_3ν crystal symmetry and support the formation of Néel skyrmions, previously reported for this system [Kezsmarki2015]. In contrast to previous works [Zhang2017,Nikolaev2019], we obtain a detailed picture of the magnetic interactions, both between individual V sites and between different V_4 clusters, not just the nearest neighbors. Electronic correlations are important in this multiferroic system, since, on the one hand, they open up a semiconducting band gap ∼[100]meV and, on the other hand, they enhance significantly the energy scale of the Dzyaloshinskii-Moriya interactions (spiralization D in Eqn. (<ref>)) relative to the Heisenberg exchange (spin stiffness A in Eqn. (<ref>)). In particular, within the generalized-gradient approximation we find a smaller A/D ratio of [7.4]nm for U=[2]eV (moderate correlations, localized-moment state) compared to the pure DFT result A/D=[26.3]nm, even though the absolute value of D is reduced by correlations. We believe that this behavior is related to the opening of the band gap and redistribution of the magnetic density in the V_4 cluster. Compared to the localized-moment state, the ratio A/D=[5.0]nm for the distributed-moment configuration is remarkably smaller, while the spin stiffness A is a factor of 6 larger, allowing to distinguish these two cluster states based on the predicted magnetic properties. Based on our micromagnetic simulations using our computed first-principles parameters, we conclude that a small |A/D| ratio is important for stabilizing Néel skyrmions with a size [13-27]nm close to the measured one (a_sky=[22.2]nm, [Kezsmarki2015]), while the value of the spin stiffness A determines the critical temperature for magnetic order. Although the uniaxial magnetic anisotropy is weak, it has a considerable effect on the magnetic phase diagram (Fig. <ref>). With the literature values of the anisotropy, we find that the distributed-moment configuration, where all four V sites in each V_4 cluster have sizeable moments, describes much better the magnetic properties and textures in GaV_4S_8 for the experimentally studied temperature and magnetic field range. We note that there is a difference between the atomistic and micromagnetic results for this system. For example, in Fig. <ref> the Curie temperature, T_C, of the two different types of electronic (and magnetic) configurations are [15]K and [23]K, while the respective estimates from Fig. <ref> are [4]K and [15]K, meaning a shift around [10]K between the 3D atomistic and 2D micromagnetic results. We have also made 3D micromagnetic simulations, which showed somewhat larger T_C for the distributed-moment cluster, but the same T_C for the localized-moment cluster. Overall, it is from these types of simulations difficult to pin-point an ordering temperature with an accuracy of a few kelvin,<cit.> so that from these comparisons it is difficult to identify which electronic configuration (distributed or localized moments) is relevant for this system. 
However, magnetic textures at lower temperatures are more faithfully reproduced by the type of calculations/simulations presented here<cit.>, and for this reason we argue that the four-site, distributed moment configuration in GaV_4S_8, which is crucial to reproduce the magnetic properties, represents the correct electronic configuration of this material. It should be noted that the localized-moment configuration has a lower energy in the DFT calculations presented here, which seems to contradict the conclusion that the distributed moment configuration is the relevant one for GaV_4S_8. However, the energy difference between the types of configurations is not large and it is possible that dynamical correlations may change the balance so that the distributed-moment case becomes lower in energy than the local moment configuration. In view of the calculated magnetic phase diagrams (Fig. <ref>) and their comparison with experiment, in particular, the observation of Néel skyrmions suggests strongly that the V_4 clusters are in the distributed-moment state. However, given the closeness of the different electronic configurations, and their distinctly different magnetic states, we speculate that the excitation spectra of GaV_4S_8 should be particularly interesting, both from traditional electron- and x-ray spectroscopic methods, as well as magnetic excitations, e.g., as provided by inelastic neutron scattering experiments. Experimental work is necessary to map out the complexities of the here proposed electronic and magnetic configurations of the V-based lacunar spinel GaV_4S_8. Author contributions VB designed the study, performed all density functional theory calculations and a large part of micromagnetic simulations, wrote most of the manuscript and prepared figures 1–6 and 8–11. NS did further more extensive micromagnetic simulations to construct the detailed phase diagrams and prepared figures 7, 12 and 13 as well as wrote an accompanying text. MP helped NS with running some of the simulations and writing of the manuscript. AD contributed by discussing and analyzing the results and editing of the manuscript. OE helped with planning the theoretical work and analyzing the results, and contributed to the writing and editing of the manuscript. All authors have read and approved the final manuscript. Competing interests All authors declare no financial or non-financial competing interests. Acknowledgements This work was financially supported by the Knut and Alice Wallenberg Foundation through grant numbers 2018.0060, 2021.0246, and 2022.0108, and Göran Gustafsson Foundation (recipient of the “small prize”: Vladislav Borisov). Olle Eriksson also acknowledges support by the Swedish Research Council (VR), the Foundation for Strategic Research (SSF), the Swedish Energy Agency (Energimyndigheten), the European Research Council (854843-FASTCORR), eSSENCE and STandUP. Anna Delin acknowledges financial support from Vetenskapsrådet (VR)(grant numbers VR 2016-05980 and VR 2019-05304). The computations/data handling were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputing Centre (NSC, Tetralith cluster) partially funded by the Swedish Research Council through grant agreement no. 2018-05973 and by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at the National Supercomputing Centre (NSC, Tetralith cluster) partially funded by the Swedish Research Council through grant agreement no. 2022-06725. 
The funder played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript. Structural sketches in Figs. <ref> and <ref> have been produced by the VESTA3 software <cit.>. Figures 6 and 13 are produced using the ParaView software <cit.>. § BAND STRUCTURES The comparison of electronic band structures of lacunar spinel GaV_4S_8 is shown in Fig. <ref> where the calculation results obtained within the local-density (LDA) and generalized-gradient approximations (GGA) are collected. The effects of electronic correlations missing in the pure DFT are estimated by including DFT+U corrections with varying U parameter, which characterizes the strength of correlations. While the system is erroneously described as metallic by pure DFT (U=[0]eV), DFT+U corrections help to open the electronic band gap for U>[1]eV (LDA) and already at smaller U values for GGA. The resulting gap is of the order of several hundred meV, in acceptable agreement with experiment. To give more details, Fig. <ref> shows the spin-resolved bands for DFT and DFT+U with U=[1]eV. Overall, it appears that both the spin-up and spin-down bands are shifted mostly equally by the correlation corrections. For U<[2]eV, our calculations converge mostly to the localized-moment state of V_4 clusters (Fig. <ref>b in the main text), but for large U values it is possible to stabilize also the distributed-moment state (Fig. <ref>c in the main text). The band structures for that configuration are shown in Fig. <ref>c,f and are different from the localized-moment results, but the semiconducting band gap is similar for both V_4 cluster configurations. However, the absence of detailed ARPES data in the literature prevents the prediction of the V_4 cluster state based solely on the electronic properties. § ATOMISTIC SPIN DYNAMICS When performing atomistic spin dynamics simulations of the lacunar spinel using the calculated magnetic interactions, we verified that the size effects do not change the results considerably. In Fig. <ref>a, the temperature-dependent magnetization is plotted for simulation (N× N× N) cells of increasing size N=10, 20, 30. The variations due to the cell size are apparently very small when all the interactions are included. If only Heisenberg interaction is taken into account (open symbols in Fig. <ref>a), then only the 30× 30× 30 or larger cells provide sufficiently converged results. Interestingly, the Dzyaloshinskii-Moriya interaction appears to change dramatically the temperature-dependence of the magnetization. This can be seen from the Monte-Carlo results obtained using only the Heisenberg interactions J_ij (open symbols in Fig. <ref>a) compared to the simulations which include also the DM interaction D⃗_ij (filled symbols in Fig. <ref>a). The results suggest that the DMI enhances the long-range ferromagnetic order in a temperature range between 0 and [12]K, which is rather surprising considering that DMI usually tries to make the magnetic structure more non-collinear. In this respect, GaV_4S_8 spinel contrasts many other known magnets and this aspect deserves further analysis which will be done in the future work. Another important point is the difference between atomistic simulations based on inter-site (4-site model) or inter-cluster (1-site model) interactions, i.e. whether the individual sites or whole V_4 clusters are considered as the elementary magnetic units. From Fig. 
<ref>b, we can clearly see that neglecting the intracluster degrees of freedom by considering a 1-site spin model with effective interactions (eqn. (<ref>)) changes the behavior of the magnetization M(T) significantly, as obtained in atomistic spin dynamics simulations, even when the DM interaction is not included. On the other hand, if the 1-site model includes the effective DM interaction, defined similarly to eqn. (<ref>), we find that the long-range magnetic order is essentially completely suppressed at any temperature above [1]K. In micromagnetic simulations, we do not see this drawback of the 1-site model. These findings suggest that the modelling of intracluster degrees of freedom in lacunar spinel GaV_4S_8 is a subtle issue and has to be considered in greater detail in future studies. § MICROMAGNETIC SIMULATIONS Based on the micromagnetic simulations for different temperatures and magnetic field strengths, we constructed the phase diagrams for GaV_4S_8 with two different V_4 cluster states, shown in Fig. <ref> in the main text. The original data points that we obtained in those mumax3 simulations are depicted here in Fig. <ref> using the same color code for convenience. It is worth noting that in our UppASD simulations using the multiscale module μASD<cit.> we observed skyrmion lattices in some conditions (example shown in Fig. <ref>). For selected points in the mumax3 phase diagrams, we have verified the effect of cell size along the z-direction, i.e. we compared the 2D and 3D simulation results. One example for an external field of [250]mT is shown in Fig. <ref> based on the simulation with 8 layers along the z-direction. It turns out that the phase diagrams do not change qualitatively, but the Curie temperature for the distributed-moment configuration increases up to [25]K, resulting in a shift of the boundary to the paramagnetic phase in Fig. <ref>a and agreeing better with the atomistic simulations in Fig. <ref> in the main text. For the localized-moment state (Fig. <ref>b), however, there is no visible shift of the Curie temperature in the 3D simulations.
http://arxiv.org/abs/2307.04868v1
20230710193252
Leveraging an Alignment Set in Tackling Instance-Dependent Label Noise
[ "Donna Tjandra", "Jenna Wiens" ]
cs.LG
[ "cs.LG" ]
Leveraging an Alignment Set in Tackling Instance-Dependent Label Noise Donna Tjandra, Jenna Wiens August 12, 2023 ==================================================================================================== Noisy training labels can hurt model performance. Most approaches that aim to address label noise assume label noise is independent of the input features. In practice, however, label noise is often feature- or instance-dependent, and therefore biased (i.e., some instances are more likely to be mislabeled than others). For example, in clinical care, female patients are more likely to be under-diagnosed for cardiovascular disease compared to male patients. Approaches that ignore this dependence can produce models with poor discriminative performance, and in many healthcare settings, can exacerbate issues around health disparities. In light of these limitations, we propose a two-stage approach to learn in the presence of instance-dependent label noise. Our approach utilizes alignment points, a small subset of data for which we know the observed and ground truth labels. On several tasks, our approach leads to consistent improvements over the state-of-the-art in discriminative performance (AUROC) while mitigating bias (area under the equalized odds curve, AUEOC). For example, when predicting acute respiratory failure onset on the MIMIC-III dataset, our approach achieves a harmonic mean (AUROC and AUEOC) of 0.84 (SD [standard deviation] 0.01) while that of the next best baseline is 0.81 (SD 0.01). Overall, our approach improves accuracy while mitigating potential bias compared to existing approaches in the presence of instance-dependent label noise. *Data and Code Availability This paper uses the MIMIC-III dataset <cit.>, which is available on the PhysioNet repository <cit.>. We also use two public datasets outside of the healthcare domain: 1) the Adult dataset[https://github.com/AissatouPaye/Fairness-in-Classification-and-Representation-Learning], and 2) the COMPAS dataset[https://www.kaggle.com/danofer/compass]. A link to the source code is provided in the footnote[https://github.com/MLD3/Instance_Dependent_Label_Noise]. *Institutional Review Board (IRB) This work is not regulated as human subjects research since data are de-identified. § INTRODUCTION Motivation and Problem Setting Datasets used to train machine learning models can contain incorrect labels (i.e., label noise), which can lead to overfitting. While label noise is widely studied, the majority of past work focuses on instance-independent label noise (i.e., when the noise is independent of an instance's features) <cit.>. However, label noise can depend on instance features <cit.>, leading to different noise rates within subsets of the data. Furthermore, in settings where the noise rates differ with respect to a sensitive attribute, this can lead to harmful disparities in model performance <cit.>. For example, consider the task of predicting cardiovascular disease among patients admitted to a hospital. Compared to male patients, female patients may be more likely to be under-diagnosed <cit.> and thus mislabeled, potentially leading to worse predictions for female patients. Although instance-dependent label noise has recently received more attention <cit.>, the effect of the corresponding approaches on model bias has been relatively understudied <cit.>.
Here, we address current limitations and propose a novel method for learning with instance-dependent label noise in a setting inspired by healthcare, specifically examining how modeling assumptions affect existing issues around potential model bias. Gaps in Existing Work Broadly, current work addressing instance-dependent label noise takes one of two approaches: 1) learn to identify mislabeled instances <cit.>, or 2) learn to optimize a noise-robust objective function <cit.>. In the first category, instances identified as mislabeled are either filtered out <cit.> or relabeled <cit.>. In some settings, this approach can have a negative effect on model bias. Revisiting our example on cardiovascular disease, approaches that filter out mislabeled individuals could ignore more female patients, since they have a potentially higher noise rate. While relabeling approaches use all available data, they can be sensitive to assumptions around the noise distribution <cit.>. In the second category, current approaches rely on objective functions that are less prone to overfitting to the noise and use all of the data and observed labels <cit.>. However, past work has empirically shown that these objective functions improve discriminative performance the most when used to augment filtering approaches, and thus, the limitations and scenarios described above still potentially hold. Our Idea In light of these limitations, we propose an approach that addresses instance-dependent label noise, makes no assumptions about the noise distribution, and uses all data during training. We focus on a setting that frequently arises in healthcare, where we are given observed labels for a condition of interest (e.g., cardiovascular disease) and have a clinical expert who can evaluate whether the observed labels are correct for a small subset of the data (e.g., by manual chart review). Using this subset, which we refer to as the `alignment' set, we learn the underlying pattern of label noise in a pre-training step. We then minimize a weighted cross-entropy over all the data. Note that our alignment set is a special case of anchor points <cit.>, with the added requirement that the alignment set contains instances for which the ground truth and observed labels do and do not match. On synthetic and real data, we demonstrate that our approach improves on state-of-the-art baselines from the noisy labels and fairness literature, such as stochastic label noise <cit.> and group-based peer loss <cit.>. Overall, our contributions include: * A novel approach to learn from datasets with instance-dependent noise that highlights a setting frequently found in healthcare * A systematic examination of different settings of label noise, evaluating discriminative performance and bias mitigation * Empirical results showing that the proposed approach is robust both to the noise rate and to the amount of noise disparity between subgroups, reporting the model's ability to maintain discriminative performance and mitigate potential bias * A demonstration of how the performance of the proposed approach changes when assumptions about the alignment set are violated § METHODS We introduce a two-stage approach for learning with instance-dependent label noise that leverages a small set of alignment points for which we have both observed and ground truth labels. *Notation and Setting Our notation is summarized in Table <ref>, with additional notation defined throughout as needed. Our dataset, D = A ∪ A̅, consists of instances in A={x^(j), ỹ^(j), y^(j)}_j=1^a and A̅={x^(i), ỹ^(i)}_i=1^a̅.
A is the set of points (i.e., the set), where both ỹ^(j) and y^(j) are known, and we assume that it includes instances where ỹ^(i)≠ y^(i). points are a special case of anchor points <cit.>, where points that do and do not have matching observed and ground truth labels are both required. A is the non-set and contains instances for which we do not know the ground truth labels. In the presence of noisy labels, we assume that whether ỹ=y is dependent on x (i.e., P(ỹ==y) ≠ P(ỹ==y |x)). Given this dataset, we aim to train a model to learn f: ℝ^d → [0, 1] (i.e. the function used to predict the ground truth labels), so that we can map unseen instances into one of two classes based on their feature vectors. Our learned model parameters, θ, are such that the output of the corresponding model represents the predicted class probabilities, (i.e., ŷ). Although we focus on binary classification, our setup can be applied to multiclass classification. Justification and Desired Properties Our setting is inspired by the use of pragmatic labeling tools in healthcare. Such tools are often based on various components of the electronic health record (EHR), and they are applied to identify cohorts or outcomes of interest <cit.>. However, while practical, such definitions are not always reflective of the ground truth, and thus, require validation through manual chart review. This is often done on a randomly chosen subset of individuals, which can be constructed to represent the target population and account for known heterogeneity. As a result, f is the function that predicts whether the condition is actually present, and the set is the chart reviewed subset used to help learn f. Through our approach, we aim to achieve: 1) robustness to the overall noise rate and 2) robustness to differences in noise rates between groups (i.e., the noise disparity). Revisiting our motivating example with EHR-based labeling tools, previous work has shown that labeling tools for rarer conditions such as drug-induced liver injury and dementia are more likely to be less reliable than those for common conditions <cit.>. Similar to how different noise rates can arise in practice, differences in noise rates between subgroups can also vary in practice <cit.>. As a result, achieving these properties can potentially make our approach generalize to a wide variety of settings. *Proposed Approach Here, we describe the proposed network and training procedure. Proposed Network. Our proposed network (Figure <ref>) consists of two components. The first, parameterized by θ, is a feed-forward network that uses feature vector x to predict the class probability, ŷ=P(y==1 |x; θ). The second component, paramaterized by ϕ, is an auxiliary feed-forward network that uses observed label ỹ and features x to compute β̂=P(y==ỹ|ỹ, x; ϕ), an instance-dependent prediction for whether the observed label is correct based on x and ỹ. β̂ can be considered as a confidence score for the observed label, with higher values indicating higher confidence. Learning β̂ models the underlying pattern of label noise by forcing the model to learn which instances are correctly labeled. We use β̂ to reweight the objective function during the second step of training, as described below. By including the observed label as input to ϕ, our approach also applies to instance-independent label noise because it accounts for the case when the underlying pattern of label noise does not depend on the features. 
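To make the architecture concrete, a minimal PyTorch sketch of the two components is given below. It is only an illustration of the structure described above: the hidden widths, the class names ClassifierNet and ConfidenceNet, and the default sizes are assumptions on our part, while the use of two hidden layers with ReLU for the main network and two feed-forward layers for the auxiliary component follows the description in the appendix.

```python
# Sketch of the two-component network; widths and names are illustrative.
import torch
import torch.nn as nn


class ClassifierNet(nn.Module):
    """theta-network: predicts y_hat = P(y == 1 | x)."""

    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.layers(x)).squeeze(-1)


class ConfidenceNet(nn.Module):
    """phi-network: predicts beta_hat = P(y == y_tilde | y_tilde, x)."""

    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        # The observed label is concatenated to the feature vector.
        self.layers = nn.Sequential(
            nn.Linear(d_in + 1, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, x, y_tilde):
        inp = torch.cat([x, y_tilde.float().unsqueeze(-1)], dim=-1)
        return torch.sigmoid(self.layers(inp)).squeeze(-1)
```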
In order to learn β̂, we assume that the label noise pattern can be represented as some function, though the specific form of this function (e.g., linear) does not need to be known. During training, we compute the loss using the outputs from both networks. At inference time (i.e., in practical use after training), we compute the class predictions from the network parameterized by θ only since ỹ is unavailable. Training Procedure. Our training procedure is summarized in Figure <ref> and Appendix <ref>. In Step 1, we pre-train both networks using the points, A, minimizing an objective function based on cross entropy: θ', ϕ' = argmin_θ, ϕℒ_θ + α_1 ℒ_ϕ. α_1∈ℝ^+ is a scalar hyperparameter; θ' and ϕ' are parameters that represent the initial values of θ and ϕ. ℒ_θ is the cross-entropy loss between the class predictions and ground truth labels. It aids in learning the parameter values for θ, and thus, the model's decision boundary. 𝕀 is an indicator function. ℒ_θ = -1/| A |∑_j ∈ A𝕀(y^(j)==1)log(ŷ^(j)) + 𝕀(y^(j)==-1)log(1-ŷ^(j)) ℒ_ϕ is the cross-entropy loss between the predicted confidence score β̂^(j) and the actual agreement between ỹ^(j) and y^(j). It aids in learning the weights for ϕ, and thus, the underlying label noise pattern. ℒ_ϕ = -1/| A |∑_j ∈ A𝕀(ỹ^(j)==y^(j))log (β̂^(j)) + 𝕀(ỹ^(j)≠ y^(j))log (1 - β̂^(j)) In Step 2, we initialize θ and ϕ as θ' and ϕ' and fine tune using the complete dataset. Step 2 consists of two parts, Step 2a and Step 2b. Each part aims to improve a specific component of the network (e.g., θ) using another component of the network (e.g., ϕ). We begin with Step 2a, move to Step 2b, and continue to alternate between Step 2a and Step 2b in a manner similar to expectation maximization so that we continually improve both θ and ϕ. In Step 2a, we freeze ϕ and find θ that minimizes the objective ℒ'_θ + γℒ_θ. γ∈ℝ^+ is a scalar hyperparameter. In Step 2b, we freeze θ and find ϕ that minimizes the objective ℒ'_θ + α_2 ℒ_ϕ. α_2 ∈ℝ^+ is a scalar hyperparameter. ℒ_θ' computes the cross-entropy loss over the potentially noisy, non-points. Each instance is weighted by the model's confidence in whether the observed label is correct via β̂^(i), taking advantage of the model's learned noise pattern. Our approach aims to mitigate bias by up-weighting groups, k=1,2,...,g with a higher estimated noise rate, r̂_k, so that they are not dominated by/ignored compared to groups with a lower estimated noise rate. ℒ_θ' = -1/|A|∑_k=1^g1/1-r̂_k∑_i ∈A∩ G_k ∑_j∈{-1, 1}β̂^(i)_ϕ𝕀(ỹ^(i)==j)log(ŷ^(i)_j) We calculate 1 - r̂_k is as follows. We introduce sets G_k for k=1,2,...,g to represent disjoint subgroups of interest in the data, which are assumed to be known in advance. G_a ∩ G_b = ∅ for all a=1, 2, ..., g, b=1, 2, ..., g with a ≠ b and ∪_k=1^g G_k = D. Each group G_k is then associated with estimated noise rate r̂_k=1/| G_k |∑_i ∈ G_k 1-β̂^(i). Although weighting each instance by β̂ is a form of soft filtering, weighting each group by the inverse of its overall `clean' rate avoids the effect of de-emphasizing groups with higher predicted noise rates. As a result, the expected value of ℒ_θ' with respect to β̂ is equal to the cross-entropy loss between the model's predictions and ground truth labels (see Appendix <ref> for proof). However, this assumes accurate estimates of β̂. Thus, we expect that the proposed approach will perform best when the set is representative of the target population. 
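As a rough sketch of how these objective terms and the Step 2a/2b alternation might be implemented, consider the snippet below. Labels are coded in {-1, +1} to match the indicator notation above, group membership is assumed to be supplied as an integer id per instance, and all helper names (bce_pm1, loss_theta_prime, step2_round, and so on) are ours; this illustrates the training scheme under those assumptions and is not the authors' code.

```python
# Sketch of L_theta, L_phi, L'_theta and one Step 2a/2b round (labels in {-1, +1}).
import torch


def bce_pm1(p, y_pm1, eps=1e-8):
    """Cross entropy for labels coded as -1/+1, given p = P(y == +1)."""
    pos = (y_pm1 == 1).float()
    return -(pos * torch.log(p + eps) + (1 - pos) * torch.log(1 - p + eps)).mean()


def loss_theta(y_hat, y):
    """L_theta: cross entropy against ground truth labels (alignment set)."""
    return bce_pm1(y_hat, y)


def loss_phi(beta_hat, y_tilde, y, eps=1e-8):
    """L_phi: cross entropy between beta_hat and the observed/true-label agreement."""
    agree = (y_tilde == y).float()
    return -(agree * torch.log(beta_hat + eps)
             + (1 - agree) * torch.log(1 - beta_hat + eps)).mean()


def loss_theta_prime(y_hat, y_tilde, beta_hat, group_ids, eps=1e-8):
    """L'_theta: beta-weighted cross entropy on observed labels, with each
    group k upweighted by 1 / (1 - r_hat_k), the inverse estimated clean rate."""
    pos = (y_tilde == 1).float()
    ce = -(pos * torch.log(y_hat + eps) + (1 - pos) * torch.log(1 - y_hat + eps))
    total = 0.0
    for k in torch.unique(group_ids):
        mask = group_ids == k
        clean_rate = beta_hat[mask].mean().clamp(min=eps)  # estimate of 1 - r_hat_k
        total = total + (beta_hat[mask] * ce[mask]).sum() / clean_rate
    return total / y_hat.shape[0]


def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)


def step2_round(f_theta, g_phi, non_align, align, gamma, alpha_2, opt_theta, opt_phi):
    """One alternation of Step 2a (update theta) and Step 2b (update phi)."""
    x, y_t, g = non_align        # non-alignment set: features, observed labels, groups
    xa, yta, ya = align          # alignment set: features, observed and true labels

    # Step 2a: freeze phi, minimize L'_theta + gamma * L_theta w.r.t. theta.
    set_requires_grad(g_phi, False)
    beta = g_phi(x, y_t).detach()
    loss_a = loss_theta_prime(f_theta(x), y_t, beta, g) + gamma * loss_theta(f_theta(xa), ya)
    opt_theta.zero_grad()
    loss_a.backward()
    opt_theta.step()
    set_requires_grad(g_phi, True)

    # Step 2b: freeze theta, minimize L'_theta + alpha_2 * L_phi w.r.t. phi.
    set_requires_grad(f_theta, False)
    beta = g_phi(x, y_t)
    loss_b = (loss_theta_prime(f_theta(x).detach(), y_t, beta, g)
              + alpha_2 * loss_phi(g_phi(xa, yta), yta, ya))
    opt_phi.zero_grad()
    loss_b.backward()
    opt_phi.step()
    set_requires_grad(f_theta, True)
```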
In scenarios where the set is biased (e.g., some groups are underrepresented), if the learned noise function does not transfer to the underrepresented group, then the proposed approach may not be beneficial. In Section <ref>, we test this. During Step 2a, ℒ_θ' is used to train θ by learning to predict ŷ such that it matches observed label ỹ on instances that are predicted to be correctly labeled. During Step 2b, ℒ_θ' is used to train ϕ. Here, since θ is frozen and ϕ is not, the network learns to predict the optimal β̂. Based on ℒ_θ' alone, there are two possible options to learn β̂: 1) consistently make β̂ close to 0, and 2) predict β̂ such that it is close to 1 when ŷ matches ỹ and close to 0 when ŷ does not match ỹ. Since ỹ is used as a proxy for y in this step, the second option aligns with what we want β̂ to represent. To encourage this over the first option (i.e., consistently predicting 0 for β̂), we include ℒ_ϕ in Step 2b, which is not minimized by consistently predicting 0 for β̂. Note that, in Step 2b, we rely on the cluster assumption <cit.> from semi-supervised learning, which broadly states that labeled data fall into clusters and that unlabeled data aid in defining these clusters. In the context of Step 2b, `labeled' and `unlabeled' are analogous to whether we know if the ground truth and observed labels match (i.e., point versus non-point), rather than the actual class labels themselves. As a result, we also rely on the set being representative of the target population here to avoid dataset shift. In contrast to previous filtering approaches, our approach utilizes all data during training. Moreover, it does not require a specialized architecture beyond the auxiliary network to compute β̂. Thus, it can be used to augment existing architectures. § EXPERIMENTAL SETUP We empirically explore the performance of our proposed approach relative to state-of-the-art baselines on five benchmark prediction tasks with two different label noise settings. For reproducibility, full implementation details are provided in Appendices <ref> and <ref>. We aim to test 1) the extent to which our desired properties hold, 2) the extent to which the proposed approach is robust to changes in the composition of the set, and 3) which components of the proposed approach contribute the most. *Datasets We consider five different binary prediction tasks on four datasets from several domains with synthetic and real datasets. Though inspired by healthcare, we also consider domains outside of healthcare to show the broader applicability of our approach in areas where harmful biases can arise (e.g., predicting recidivism and income). Throughout our experiments, we start by assuming the labels in the dataset are noise free, and we inject varying amounts of synthetic label noise. In this subsection, we describe the tasks, features, and `ground truth' labels we use. The next subsection will describe how we introduce synthetic label noise. Synthetic: We generate a dataset containing 5,000 instances according to the generative process in Appendix <ref>. The positive rates for the majority and minority groups are 37.5% and 32.3%, respectively. MIMIC-III: Within the healthcare domain, we leverage a publicly available dataset of electronic health record data <cit.>. We consider two separate prediction tasks: onset of 1) acute respiratory failure (ARF) and 2) shock in the ICU (intensive care unit) <cit.>. 
MIMIC-III includes data pertaining to vital signs, medications, diagnostic and procedure codes, and laboratory measurements. We consider the four hour prediction setup for both tasks as described by <cit.>, resulting in 15,873 and 19,342 ICU encounters, respectively. After preprocessing (see Appendix <ref>), each encounter had 16,278 and 18,186 features for each task respectively. We use race as a sensitive attribute, with about 70% of patients being white (positive rate 4.5% [ARF], 4.1% [shock]) and 30% being non-white (positive rate 4.4% [ARF], 3.7% [shock]). Beyond healthcare, we use two benchmark datasets frequently considered in the fairness domain. Adult: a publicly available dataset of census data <cit.>. We consider the task of predicting whether an individual's income is over $50,000. This dataset includes data pertaining to age, education, work type, work sector, race, sex, marital status, and country. Its training and test sets contain 32,561 and 16,281 individuals, respectively. We use a pre-processed version of this dataset and randomly select 1,000 individuals out of 32,561 for training. We also only include features pertaining to age, education, work type, marital status, work sector, and sex to make the task more difficult (see Appendix <ref>). After preprocessing, each individual was associated with 56 features, and all features had a range of 0-1. We use sex as a sensitive attribute, with 67.5% of individuals being male (positive rate 30.9%) and 32.5% being female (positive rate 11.3%). COMPAS: a publicly available dataset collected by ProPublica from Broward County, Florida, USA <cit.>. We consider the task of predicting recidivism within two years, i.e., whether a criminal defendant is likely to re-offend. COMPAS includes data pertaining to age, race, sex, and criminal history. We use a pre-processed version of this dataset and also normalize each feature to have a range of 0-1 (see Appendix <ref>). After preprocessing, the dataset included 6,172 individuals with 11 features per individual. We use race as a sensitive attribute, with 65.8% of individuals being white (positive rate 39.1%) and 34.2% being non-white (positive rate 44.5%). *Label Noise To test the robustness of our approach in different settings of label noise, we introduce synthetic instance-dependent label noise to our datasets. Like past work <cit.>, our setup is limited for the real datasets because our added noise is synthetic and we use the labels provided in the dataset as ground truth, since we do not have access to actual ground truth labels on these public datasets. To introduce instance-dependent noise, mislabeling was a function of the features. Let w_m ∼ N(0, 0.33)^D and z_m = σ(x·w_m), where σ is the sigmoid function, denote the coefficients describing the contribution of each feature to mislabeling and the risk of mislabeling, respectively. Whether an instance was mislabeled was based on z_m and the desired noise rate. For example, for a noise rate of 30%, instances whose value for z_m was above the 70^th percentile had their labels flipped. This allowed us to vary the noise rate within subgroups in a straightforward manner. Across datasets, we focused on cases where the noise rate in the `minority' population was always greater than or equal to that of the `majority' group since this is more likely to occur <cit.>. *Evaluation Metrics We evaluate our proposed approach in terms of discriminative performance and model bias. 
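Before turning to the metrics, the instance-dependent noise injection described in the previous paragraphs can be sketched as follows. We read N(0, 0.33) as a normal distribution with scale 0.33, code labels in {-1, +1} (use 1 - y for 0/1 labels), and the function name and NumPy phrasing are our own.

```python
# Sketch of instance-dependent label flipping with per-group noise rates.
import numpy as np


def inject_instance_dependent_noise(X, y, group_ids, noise_rates, seed=0):
    """Flip the labels of the instances with the highest mislabeling risk z_m,
    separately within each group; noise_rates maps group id -> rate in [0, 1]."""
    rng = np.random.default_rng(seed)
    w_m = rng.normal(0.0, 0.33, size=X.shape[1])   # mislabeling coefficients
    z_m = 1.0 / (1.0 + np.exp(-(X @ w_m)))         # mislabeling risk, sigmoid(x . w_m)

    y_tilde = y.copy()
    for k, rate in noise_rates.items():
        idx = np.where(group_ids == k)[0]
        threshold = np.percentile(z_m[idx], 100 * (1 - rate))
        flip = idx[z_m[idx] > threshold]
        y_tilde[flip] = -y_tilde[flip]             # labels coded in {-1, +1}
    return y_tilde
```

For instance, calling this with noise_rates={0: 0.2, 1: 0.4} would reproduce the 20%/40% majority/minority setting used later in the set-composition experiments.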
For discriminative performance, we evaluate using the area under the receiver operating characteristic curve (AUROC) (higher is better). With respect to model bias, while there exist many different measures, we focus on equalized odds <cit.>, since it is commonly used in the context of healthcare <cit.>, when similar performance across groups is desired <cit.>. Because equalized odds focuses on the difference between the true and false positive rates among groups, it is applicable to many settings in healthcare since the consequences of failing to treat a patient in need <cit.>, or giving an inappropriate treatment <cit.> can be serious. More specifically, we measure the area under the equalized odds curve (AUEOC) <cit.> (higher is better). For classification threshold τ, we calculate the equalized odds (EO(τ)) between two groups, called 1 and 2, as shown below. TP_a(τ) and FP_a(τ) denote true and false positive rates for group a at threshold τ, respectively. The AUEOC is obtained by plotting the EO against all possible values of τ and calculating the area under the curve. We compute the harmonic mean (HM) between the AUROC and AUEOC to highlight how the different approaches simultaneously maintain discriminative performance and mitigate bias. In the harmonic mean the worse performing metric dominates. For example, if a classifier has AUROC=0.5 and AUEOC=1.0, the harmonic mean will emphasize the poor discriminative performance. EO(τ) = 2 - | TP_1(τ) - TP_2(τ) | - | FP_1(τ) - FP_2(τ) |/2 *Baselines We evaluate our proposed approach with several baselines to test different hypotheses. Standard does not account for label noise and assumes that ỹ=y is always true. SLN + Filter <cit.> combines filtering <cit.> and SLN <cit.> and was shown to outperform state-of-the-art approaches like Co-Teaching <cit.> and DivideMix <cit.>. It relies on filtering heuristics, which indirectly rely on uniform random label noise to maintain discriminative performance and mitigate bias. JS (Jensen-Shannon) Loss <cit.> builds on semi-supervised learning and encourages model consistency when predicting on perturbations of the input features. It was shown to be competitive with other state-of-the-art noise-robust loss functions <cit.>. It was proposed for instance-independent label noise. Transition <cit.> learns to correct for noisy labels by learning a transition function and was shown to outperform state-of-the-art approaches such as MentorNet <cit.>. It applies to instance-dependent label noise, but it assumes that the contributions of each feature to mislabeling and input reconstruction are identical. CSIDN (confidence-scored instance-dependent noise) <cit.> also learns a transition function and was shown to outperform state-of-the-art approaches such as forward correction <cit.>. Like our approach, CSIDN uses the concept of `confidence' in the observed label to help with training. Unlike our approach, CSIDN uses the model's class predictions directly as confidence scores (instead predicting them via an auxiliary network) and uses them to learn the transition function (as opposed to re-weighting the loss). Fair GPL <cit.> builds on work addressing uniform random label noise <cit.> and uses peer loss (i.e., data augmentation that reduces the correlation between the observed label and model's predictions) within subgroups <cit.>. It assumes that label noise only depends on group membership. 
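Returning to the evaluation metrics defined above, the following sketch computes EO(τ) over a threshold grid, its area (AUEOC), and the harmonic mean with AUROC. Here the whole quantity 2 - |TP_1 - TP_2| - |FP_1 - FP_2| is divided by 2; labels and group ids are coded as 0/1, and the helper names and the 101-point threshold grid are our choices, with AUROC taken from scikit-learn.

```python
# Sketch of the equalized odds curve, AUEOC, and the AUROC/AUEOC harmonic mean.
import numpy as np
from sklearn.metrics import roc_auc_score


def tp_fp_rates(y_true, y_score, tau):
    pred = (y_score >= tau).astype(int)
    tp = ((pred == 1) & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
    fp = ((pred == 1) & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
    return tp, fp


def equalized_odds(y_true, y_score, group, tau):
    """EO(tau) = (2 - |TP_1 - TP_2| - |FP_1 - FP_2|) / 2 for two groups."""
    tp1, fp1 = tp_fp_rates(y_true[group == 0], y_score[group == 0], tau)
    tp2, fp2 = tp_fp_rates(y_true[group == 1], y_score[group == 1], tau)
    return (2.0 - abs(tp1 - tp2) - abs(fp1 - fp2)) / 2.0


def area_under_eo_curve(y_true, y_score, group, n_thresholds=101):
    taus = np.linspace(0.0, 1.0, n_thresholds)
    eo = [equalized_odds(y_true, y_score, group, t) for t in taus]
    return np.trapz(eo, taus)


def harmonic_mean_auroc_aueoc(y_true, y_score, group):
    auroc = roc_auc_score(y_true, y_score)
    aueoc = area_under_eo_curve(y_true, y_score, group)
    return 2 * auroc * aueoc / (auroc + aueoc)
```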
We also train a model using the ground truth labels (called Clean Labels) as an empirical upper bound for discriminative performance. *Implementation Details For each dataset, we randomly split the data into 80/20% training/test, ensuring that data from the same individual did not appear across splits. For the Adult dataset, we used the test set provided and randomly selected 1,000 individuals from the training set. We then randomly selected 10% of the training data for all datasets except MIMIC-III from each subgroup to be points, thereby ensuring that they were representative of the overall population. For the MIMIC-III dataset, 2% from each subgroup were selected as points due to the larger size of the dataset. points were selected randomly to simulate our setting of focus, where we have a proxy labeling function and then randomly select a subset of the data to chart review in order to validate the proxy function. Then, for all datasets, half of the points were then set aside as a validation set to use during training for early stopping and hyperparameter selection, while the other half remained in the training set. Later, in our experiments, we evaluated when the set size varied and when the set was biased. All approaches (i.e., baselines and proposed) were given the ground truth labels for data in the set (i.e., no noise added to points) during training so that some approaches did not have an unfair advantage. All models were trained in Python3.7 and Pytorch1.7.1 <cit.>, using Adam <cit.>. Hyperparameters, including the learning rate, L2 regularization constant, and objective function scalars (e.g., α), were tuned using random search, with a budget of 20. We used early stopping (patience=10) based on validation set performance, which we measured with the HM. We report results on the held-out test set, showing the mean and standard deviation over 10 replications. § RESULTS AND DISCUSSION We describe the results from experiments with instance-dependent noise. For each plot, we combined discriminative performance and bias mitigation and plotted the HM of the AUROC and AUEOC to assess general performance with respect to both metrics. We show the AUROC and AUEOC separately in Appendix <ref>. Additional experiments are provided in Appendix <ref>. Their results are summarized here. *Robustness to Noise Rate Here, we investigated how robust the proposed approach and baselines were to varying amounts of instance-dependent label noise (Figure <ref>). Since noise was synthetically introduced and not dataset specific, we conducted two experiments on the synthetic dataset. In the first, we varied the overall noise rate from 10-60% in the majority group. For the minority group, we considered noise rates that were consistently 20% higher than that of the majority group, to keep the noise disparity level (i.e., the difference in noise rates between subgroups) constant. In the second, we varied the minority noise rate from 20-90% with a majority noise rate fixed at 20% throughout (i.e., from 0-70% disparity) on the synthetic dataset. Part 1: Overall Noise Rate. Overall, our proposed approach demonstrated robustness to a variety of noise rates within a realistic range (Figure <ref>). At low minority noise rates (i.e., below 40%), the proposed approach and baselines, with the exception of JS Loss, were competitive. As the noise rate increased, many of the baselines experienced noticeable degradation in performance. 
The proposed approach and Transition showed more robustness, with the proposed approach being the most robust until a minority noise rate of 80%, which represents an extreme case of label noise. Part 2: Noise Disparity. Like the previous experiment, the proposed approach was robust over a variety of noise disparities (Figure <ref>). This is likely because the objective function ℒ'_θ from Step 2 of training accounts for disparities by scaling each instance-specific loss term with the reciprocal of its estimated group clean rate (i.e., 1 - the estimated group noise rate). Similar to the previous experiment, at a minority noise rate of 80% and above, the proposed approach was no longer the most robust, though this setting is unlikely to occur in practice. *Sensitivity to Set Composition Our next set of experiments tested the proposed approach in settings where we relax key settings about the set. We considered all datasets with instance-dependent noise. The majority/minority noise rates were 20%/40%, respectively. Here we show performance with respect to the proposed approach, Standard, and Clean Labels. Results for the other baselines are included in Appendix <ref>. Part 1: set size. We varied the size of the set, from 1% and 15% of the training set, with the set being representative of the test set (Figure <ref>). The proposed approach was robust to a wide range of set sizes, only showing noticeable degradation at set sizes of 3% or lower. As the size of the set grew, performance improved, likely since having a larger set provided access to a larger set of ground truth labels at training time. Although the minimum number of points required in the set is likely to vary depending on the task, our results are promising in that they show that our approach is effective on a variety of real life tasks, even when the set is small (i.e., as little as 3% of the data). Part 2: Biased set. Here, we test how the proposed approach performs when the set is not representative of the population. We varied the amount of bias in the set by changing the proportion at which the subgroups were present. We kept the size of the set constant at 10% of the training data (2% for MIMIC-III on both tasks). We observed that the proposed approach was robust over a wide range of conditions, i.e., when the minority proportion is 20%-80% (Figure <ref>). We hypothesize that this is because the learned relationship between the features and noise can generalize across groups to an extent. In scenarios where performance of the proposed approach degraded, one subgroup heavily dominated the set. This is shown in Figure <ref> on the extremes of the x-axis of some datasets, which correspond to an set that is heavily over-represented for one subgroup and heavily under-represented for the other. Our approach relies, in part, on having a relatively unbiased set for estimating β̂ in order to avoid introducing dataset shift between the two steps of our training pipeline. Thus, these results are in line with our expectations and highlight a limitation of our approach. However, despite this reliance, we observe that our approach is still robust in some scenarios where the set is biased. *Which Parts of Our Approach Matter? Our last set of results examines the individual components of the approach itself on the synthetic dataset. Here, we performed an ablation study where we began with training on only the points (i.e., Step 1 of our approach), and then gradually added the other components of our approach (e.g., add Step 2a). 
In summary, while each component improved performance, we find that the most improvement came from adding ℒ_θ and ℒ_ϕ during Steps 2a and 2b, respectively, as opposed to using only ℒ_θ' during those steps. We also performed a hyperparameter sensitivity analysis on the three hyperparameters, α_1, γ, and α_2, that our approach introduced. The approach was most sensitive to the α_2 hyperparameter and more robust to α_1 and γ. We include results for the ablation study and hyperparameter sensitivity analysis in Appendix <ref>. § RELATED WORK We build from previous work in label noise and address key limitations. Generally, many state-of-the-art approaches <cit.> are limited in that they do not consider instance-dependent noise, do not consider the potential consequences of bias in label noise, or do not leverage the information our setting provides. We tackle these limitations by accounting for differences in noise rates among subsets of the data and taking advantage of additional information that can be found in our setting. In this section, we summarize past work and highlight our contributions. *Identifying Mislabeled Data Approaches that learn to identify mislabeled instances fall into two sub-categories: 1) filtering approaches and 2) relabeling approaches. Filtering approaches use heuristics to identify mislabeled instances (e.g., MentorNet <cit.>, Co-teaching <cit.>, FINE <cit.>). Many are based on the idea that correctly labeled instances are easier to classify than mislabeled instances (i.e., the memorization effect) <cit.>. For example, mislabeled instances could be those that the model incorrectly classifies <cit.>, have a high loss value <cit.>, or significantly increase the complexity of the model <cit.>. Given the identified mislabeled instances, these approaches either ignore them during training <cit.> or treat them as `unlabeled' and apply techniques from semi-supervised learning (e.g., DivideMix <cit.>, SELF <cit.>). Overall, these heuristics have been shown to improve discriminative performance. However, depending on the setting, they can disproportionately discard subsets of data, which could exacerbate biases in model performance. For binary classification, some approaches `correct' (i.e., switch) the observed label for instances that are predicted to be incorrect <cit.>. Building on this idea, others make use of a transition function that estimates the probability of the observed label being correct. Model predictions can then be adjusted by applying the transition function to the classifier's predictions for each class. Some works manually construct the transition function from expert knowledge <cit.>, while others learn it <cit.>. However, such approaches often make assumptions on the form of the noise distribution, and past work has shown that results are sensitive to the choice of distribution <cit.>. To date, much of the work described above assumes instance-independent label noise (i.e., mislabeling is independent of the features).
However, when this assumption is violated, the model may overfit to label noise <cit.>. From an emerging body of work in instance-dependent label noise <cit.>, current approaches remain limited in that they still rely on filtering heuristics. Although we use soft filtering, we filter based on the learned relationship between the features and noise rather than existing heuristics and upweight groups with a higher estimated noise rate. While similar to a transition function in some aspects, our approach requires fewer probability estimates on label correctness (two estimates compared to the number of classes squared for a transition function) while achieving state-of-the-art performance. *Noise-Robust Loss Functions Prior work examines how regularization techniques can be adapted to the noisy labels setting, addressing issues related to overfitting on noisy data <cit.>. Label smoothing, and in some cases negative label smoothing, were found to improve the accuracy on both correctly labeled and mislabeled data <cit.>. With this approach, the observed labels are perturbed by a small, pre-determined value, with all labels receiving the same perturbation at every training epoch. Follow-up work found that, instead of applying the same perturbation at each epoch, adding a small amount of Gaussian stochastic label noise (SLN) at each epoch resulted in further improvements, as it helped to escape from local optima <cit.>. However, these approaches were most beneficial in the context of augmenting existing methods that identify mislabeled instances (e.g., stochastic label noise is applied to instances that are identified as correctly labeled by filtering approaches), and thus, potentially suffer from the same limitations. Alternatively, recent work has also proposed perturbing the features to encourage consistency in the model's predictions <cit.>, though mainly in the context of instance-independent label noise. Others have proposed noise-robust variations of cross entropy loss <cit.> but generally relied on assumptions like the memorization effect. *Label Noise in Fairness Label noise has also been addressed within the fairness literature recently. When the frequencies at which subgroups (defined by a sensitive attribute) appear are different within a dataset, past work has shown that common approaches addressing label noise can increase the prediction error for minority groups (i.e., rarer subgroups) <cit.>. Past work proposed to re-weight instances from subgroups during training where model performance is poorer <cit.> in the instance-independent noise setting. Others use peer loss <cit.> within subgroups <cit.> but assume that noise depends only on the sensitive attribute. We also train with a weighted loss, but weights are based on predicted label correctness rather than performance on the observed labels. Recently, <cit.> addressed some of the gaps of past work by examining the instance-dependent case. Our proposed approach differs from theirs in that we do not require our features to be grouped into distinct categories, such as root and low level attributes. *Anchor Points for Addressing Label Noise Another related setting in past work uses anchor points. Anchor points are subsets of the data where the ground truth labels are known <cit.>. To date, anchor points are generally used to learn a transition function <cit.> or for label correction directly <cit.>. We use a similar concept, points, to 1) pre-train the model, and 2) predict label correctness. 
The first part builds from work in semi-supervised learning <cit.>, which has shown improvements from pre-training on labeled data. The second part is similar to a transition function, but differs in that we use the correctness predictions to re-weight the loss rather than adjust the predictions. We also assume that, for some alignment points, the ground truth and observed labels do not match. Generally, anchor-based approaches mitigate model bias by implicitly assuming that the anchor points are representative of the target population. Our approach also uses this assumption, but we empirically explore how model performance changes when the anchor points are biased (i.e., not representative), since it may be easier to obtain correct labels for specific subgroups <cit.>. § CONCLUSION We introduce a novel approach for learning with instance-dependent label noise. Our two-stage approach uses the complete dataset and learns the relationship between the features and label noise using a small set of points. On several datasets, we show that the proposed approach leads to improvements over state-of-the-art baselines in maintaining discriminative performance and mitigating bias. Our approach is not without limitations. We demonstrated that the success of the approach depends, in part, on the representativeness in the set. Our experiments were also on pseudo-synthetic data in which we injected noise; this assumes we start from a noise free dataset. Finally, we only examined one form of bias in a specific case of instance-dependent label noise. Nonetheless, our case frequently arises in healthcare, especially when pragmatic (e.g., automated) labeling tools are used on large datasets, and chart review on the entire dataset is infeasible. This work was supported by Cisco Research and the National Science Foundation (NSF award no. IIS 2124127). The views and conclusions in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of Cisco Systems Inc. or the National Science Foundation. We also thank the anonymous reviewers for their valuable feedback. § PROPOSED APPROACH: ADDITIONAL DETAILS We provide additional details on our approach, including a general overview in the form of pseudocode as well as a justification for the proposed objective function and its relation to the clean label loss. §.§ General Overview We summarize our approach with pseudocode below in Algorithm <ref>. We begin with the dataset and initial model parameters, and we aim to use the dataset to learn the final model parameters. A is the set of anchor points. θ' and ϕ' are the initial model parameters for the θ and ϕ networks. Here, 'stopping criteria' may refer to any stopping criteria, such as early stopping. The Freeze() function takes as input model parameters and freezes them, and the Unfreeze() function takes as input model parameters and unfreezes them. §.§ Proposed and Clean Label Loss We show that minimizing the proposed loss ℒ'_θ from Step 2 of the proposed method is equal to minimizing cross entropy on the clean labels in expectation. 
ℒ'_θ = -1/|A|∑_k=1^g∑_i ∈A∩ G_k1/1-r̂_k∑_j=1^c β̂^(i)_ϕ𝕀(ỹ^(i)==j)log(ŷ^(i)_j) Therefore, 𝔼[∑_k=1^g∑_i ∈A∩ G_k1/1-r̂_k∑_j=1^cβ̂^(i)_ϕ𝕀(ỹ^(i)==j)log(ŷ^(i)_j) ] = ∑_k=1^g∑_i ∈A∩ G_k1/1-r̂_k∑_j=1^c𝔼 [β̂_ϕ^(i)𝕀(ỹ^(i)==j)log(ŷ^(i)_j) ] =∑_k=1^g∑_i ∈A∩ G_k1/1-r̂_k∑_j=1^c(1-r̂_k)𝕀(y^(i)==j)log(ŷ^(i)_j) =∑_k=1^g∑_i ∈A∩ G_k∑_j=1^c𝕀(y^(i)==j)log(ŷ^(i)_j) As a reminder, each group G_k is then associated with estimated noise rate r̂_k=1/| g_k |∑_i ∈ G_k 1-β̂^(i)_ϕ and estimated clean (i.e., correct) rate 1 - r̂_k = 1/| G_k |∑_i ∈ G_kβ̂^(i)_ϕ. We can express the noise and clean rates in terms of β̂^(i)_ϕ since 1 - r_k = 1/| G_k |∑_i ∈ G_k𝕀(ỹ^(i)==y^(i)) = P(y==ỹ|ỹ, x) for a random instance in G_k = 1/| G_k |∑_i ∈ G_k P(y^(i)==ỹ^(i)|ỹ^(i), x^(i)) where r_k and 1 - r_k are the actual noise and clean rates within group k, respectively. Therefore, since β̂_ϕ is trained to predict P(y==ỹ|ỹ, x), we estimate the noise and clean rates using β̂_ϕ. § PREPROCESSING DETAILS Here, we provide more detail on our synthetic data generation process and real dataset pre-processing. §.§ Synthetic Our data generation process is as described below. Note that the Percentile(p, {z}) function outputs the p^th percentile over all values in {z}. We defined the feature at index 0 to be a synthetic sensitive attribute. Instances with values below the 20^th percentile for this feature were considered as the `minority', and the rest were considered as the `majority'. Features 10-19 for the majority instances and features 20-29 for the minority instances were set to 0 to provide more contrast between the two groups. For individual i, d=30, x^(i)∼ N(0, 1)^30 w∼ N(0, 1)^30, z^(i)=x^(i)·w y^(i)=1 if z^(i)>Percentile(50, {z^(j)}_j=1^5000) else 0 x^(i)_j=0 for ȷ=10,11,...,19 if x^(i)_0 >Percentile(20, {x^(j)_0}_j=1^5000) x^(i)_j=0 for ȷ=20,21,...,29 if x^(i)_0 <Percentile(20, {x^(j)_0}_j=1^5000) §.§ MIMIC-III Data were processed using the FlexIble Data Driven pipeLinE (FIDDLE), [<cit.>], a publicly available pre-processing tool for electronic health record data. We used the same features as [<cit.>] for our tasks. More information can be found at https://physionet.org/content/mimic-eicu-fiddle-feature/1.0.0/. §.§ Adult Although, we used a pre-processed version of this dataset, we omitted features pertaining to education, work type, and work sector to make the task more difficult. More specifically, in the file `headers.txt' at the repository mentioned in Footnote 1, we kept all features beginning with `age', `workclass', `education', `marital status', and `occupation'. We also kept the `Sex_Female' feature. The remaining features were excluded to make the task more difficult. Values were normalized for each feature to have a range of 0-1 by subtracting by the minimum value observed among all individuals and dividing by the range. During training, we only used 1,000 randomly selected individuals from the provided dataset to make the task more difficult, since there would be fewer samples from which to learn. We made the task more difficult for this dataset to further highlight the differences in performance between the approaches. §.§ COMPAS Although, we used a pre-processed version of this dataset, we omitted the feature `score_factor' (i.e., the risk score for recidivism from the ProPublica model) to make the task more difficult. Values were normalized for each feature to have a range of 0-1 by subtracting by the minimum value observed among all individuals and dividing by the range. 
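The synthetic generative process above can be written out as the short sketch below; the seeding and the function name are our additions, while the group-specific zeroing of features 10-19 and 20-29 follows the description.

```python
# Sketch of the synthetic data generation (5,000 instances, d = 30).
import numpy as np


def generate_synthetic(n=5000, d=30, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 1.0, size=(n, d))
    w = rng.normal(0.0, 1.0, size=d)
    z = X @ w
    y = (z > np.percentile(z, 50)).astype(int)      # label 1 above the median score

    # Feature 0 acts as the sensitive attribute: bottom 20 percent form the minority.
    minority = X[:, 0] < np.percentile(X[:, 0], 20)

    # Zero disjoint feature blocks to sharpen the contrast between the groups.
    X[~minority, 10:20] = 0.0
    X[minority, 20:30] = 0.0
    return X, y, minority.astype(int)
```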
§ ADDITIONAL NETWORK AND TRAINING DETAILS Here, our ranges of hyperparameters and implementation choices for the proposed network. All networks were trained on Intel(R) Xeon(R) CPUs, E7-4850 v3 @ 2.20GHz and Nvidia GeForce GTX 1080 GPUs. All layers were initialized with He initialization from a uniform distribution. We divide our training data into five batches during training. All random seeds (for Pytorch, numpy, and Python's random) were initialized with 123456789. §.§ Hyperparameter Values Considered Here, we show the range of values we considered for our random search. More details are provided in Table <ref>. For any hyperparameters associated with the Adam optimizer not mentioned above, we used the default values. Not all hyperparameters were used with each approach. `Filter Threshold' and `Noise Added' were only used with the baseline SLN + Filter. Here, Filter Threshold refers to the minimum value of the predicted probability of the observed label for an instance to be considered `correctly labeled'. For example, if Filter Threshold=0.5, then all examples whose predicted probability for the observed label is at least 0.5 are considered `correct' and used during training. `Number of Parts' was only used with the baseline Transition. `α_GPL' was only used with the baseline Fair GPL. `α_1Proposed', `α_2Proposed', and `γ_Proposed' was only used with the proposed method. Here, `α_1Proposed' and `α_2Proposed' correspond to the terms α_1 and α_2 that were used in the objective functions. We refer to them with the added term `Proposed' in the subscript in this section to distinguish it from the α value used by the baseline Fair GPL. §.§ Network Details For the overall architecture, we used a feed forward network with two hidden layers. The auxiliary β prediction component was also implemented with two feed forward layers. All layer sizes are as described in Table <ref>. In addition, we used the ReLU activation function. The complete implementation can be found in the attached code. § EXPANDED RESULTS Here, we describe additional results that were not included in the main text. We begin with followup experiments on the synthetic data and then describes results from the real data. §.§ Robustness to Noise Rate Expanded Here we include the AUROC and AUEOC plotted separately for the experiments where we varied the overall noise rate and noise disparity. As we varied the overall noise rate (Figure <ref>), the proposed approach is able to consistently outperform the baselines with respect to discriminative performance until a minority noise rate of 80%. This observation is similar to what we observed with the HM. With respect to bias mitigation, the proposed approach is not more beneficial than the baselines up to a minority noise rate of 60%. At a minority noise rate above 60%, our approach experienced the least degradation compared to the baseline approaches. This is in line with our expectations since our approach explicitly accounts for differences in noise rates among groups during training. As we varied the noise disparity (Figure <ref>), we have similar observations to the previous experiment in that the proposed approach is able to consistently outperform the baselines with respect to discriminative performance until a minority noise rate of 80%. With respect to bias mitigation, the proposed approach is not more beneficial than the baselines up to a minority noise rate of 40%. 
At a minority noise rate above 40%, our approach experienced the least degradation compared to most of the other baseline approaches and was comparable to the Transition baseline. Unlike the previous experiment, the degradation in AUEOC among many of the baseline approaches is larger, which is in line with our expectations since we were directly changing the difference in noise rates between the groups while the previous experiment kept the difference constant. §.§ Ablation Study We also examined our approach more closely by conducting an ablation study and a hyperparameter sensitivity analysis on the synthetic data. We used the synthetic dataset since our noise was synthetically introduced and not dataset specific. In our ablation study (Figure <ref>), we began with training on only the points (i.e., Step 1 only), which achieved the worst performance. We then introduced Step 2 and added the remaining training data (i.e., non-points) but only trained using ℒ_θ'. This led to an improvement in performance, but not to the level of the full approach. The next two ablations build on the previous one. In the first one, we added continued supervision on the points with ℒ_θ, and observed an improvement in performance, likely due to the retention of high quality data in this step. In the second one, we added continued supervision on the points using ℒ_ϕ, and observed an even larger improvement. This is likely because including ℒ_θ prevented the model from learning a solution where β̂ was small for all instances, as previously discussed. Finally, we end with our full proposed approach, which performed noticeably better than each of the ablations, showing the importance of each component. §.§ Hyperparameter Sensitivity Analysis In our sensitivity analysis on the synthetic data (Figure <ref>), we tested how performance of the (full) proposed approach varied to changes in the hyperparameters α_1, α_2, and γ. For each of these hyperparameters, we measured performance at values between 0.01 and 100 on a logarithmic scale while keeping the other two values constant at 1. We found that α_1 and γ were the most robust to changes in the value. We found that α_2 was more sensitive, with values between 0.1 and 10 generally working best. §.§ Sensitivity to Set Composition Expanded In our analysis on sensitivity to set composition, we include results for the other baselines in (Figure <ref>). At set sizes of below 5% on the real datasets, the proposed approach was beneficial to the baselines. At larger set sizes, the baseline Transition was able to match the proposed method due to the increased amount of clean data. When the set was biased, the proposed approach outperformed the baselines in the unbiased settings and was competitive as bias in the set increased.
http://arxiv.org/abs/2307.04123v1
20230709083214
Towards cross-language prosody transfer for dialog
[ "Jonathan E. Avila", "Nigel G. Ward" ]
cs.CL
[ "cs.CL" ]
Bounced Model of Droplet on Moving Substrate Chengwu Liu August 12, 2023 ============================================ Speech-to-speech translation systems today do not adequately support use for dialog purposes. In particular, nuances of speaker intent and stance can be lost due to improper prosody transfer. We present an exploration of what needs to be done to overcome this. First, we developed a data collection protocol in which bilingual speakers re-enact utterances from an earlier conversation in their other language, and used this to collect an English-Spanish corpus, so far comprising 1871 matched utterance pairs. Second, we developed a simple prosodic dissimilarity metric based on Euclidean distance over a broad set of prosodic features. We then used these to investigate cross-language prosodic differences, measure the likely utility of three simple baseline models, and identify phenomena which will require more powerful modeling. Our findings should inform future research on cross-language prosody and the design of speech-to-speech translation systems capable of effective prosody transfer. Index Terms: speech-to-speech translation, corpus, prosodic dissimilarity metric, English, Spanish § INTRODUCTION Speech-to-speech translation systems are valuable tools for enabling cross-language communication. While very useful today for short, transactional interactions, they are less so for long-form conversation <cit.>. One reason is that, without proper prosody transfer, translation systems are unable to reliably convey many intents and stances, impeding users' ability to deepen their interpersonal relationships and social inclusion. In dialog, prosody conveys pragmatic functions such as in turn-taking, expressions of attitudes, and negotiating agreement. Regarding prosody, current translation systems generally aim only to produce prosody that sounds natural, but this is not always sufficient. In traditional models, translation is done by a cascade of subsystems — for automatic speech recognition, machine translation, and speech synthesis — and the intermediate representations are just text, with all prosodic information lost. The prospect instead of transferring the additional information provided by the source-language prosody was a motivation for the development of unified, end-to-end models <cit.>. Despite rapid recent advances <cit.>, the ability of such models to perform prosody transfer seems not to have been examined. Rather, current approaches to prosody transfer handle it with specific modules <cit.>. To date, these target only specific functions of prosody, notably its roles in conveying paralinguistic/emotional state, emphasis, and syntactic structure, and target only a few prosodic features, notably F_0, pausing, and word duration. Very recent work has shown that this can significantly improve perceived translation quality <cit.>, but also that these techniques so far only close less than half of the perceived gap between default prosody and the human reference. Clearly, something is still missing. This paper investigates what that might be. While one might hope that the answer could be found in the linguistics literature, published knowledge of how prosody differs across languages focuses mostly on syllable-level, lexical, and syntactic prosody. In particular, there is relatively little work on differences in how prosody conveys pragmatic functions. 
Even for English and Spanish, a well-studied pair, our knowledge is sparse beyond a few topics such as turn-taking <cit.>, questions and declaratives <cit.>, and expression of certainty <cit.>. However, these certainly do not exhaust the prosodic meanings important for dialog. Further, these studies have been mostly limited to differences in intonation and duration, leaving out most prosodic features. Accordingly, this paper takes a fresh look, using a corpus-based approach. § PROTOCOL AND CORPUS To investigate prosodic differences in dialog, we need a suitable cross-language corpus. However, corpora for speech-to-speech translation today primarily comprise monologues, derived from readings <cit.>, political discussions <cit.>, or informative talks <cit.>. Those comprising dialogs were derived from television show dubs <cit.>, lectures and press conferences <cit.>, or speech synthesis <cit.>. Speech collected in these settings lacks interactivity, spontaneity, and most of the prosodic variation found in real dialog. We accordingly developed the Dialogs Re-enacted Across Languages (DRAL) protocol. This involves pairs of nonprofessional, bilingual participants. They first have a ten-minute conversation, which we record. These conversations are unscripted, although we sometimes suggest topics, which allows for pragmatic diversity and spontaneous interactions. Depending on their relationship, the participants mostly get to know each other, catch up on recent happenings, and/or share personal experiences. Subsequently, under the direction of a producer, they select an utterance or exchange and closely re-enact it in their other language, which may take several attempts to get right. They then re-enact another utterance. The yield is typically a few dozen matched pairs per one-hour session, with overall good pragmatic diversity, as suggested by Table <ref>. Our design choices and the DRAL corpus are discussed further in our technical report <cit.>. Following this protocol we have so far collected matched EN-ES utterance pairs, from a total of 42 speakers. The latest release, including source recordings and metadata, is available at <https://cs.utep.edu/nigel/dral/>. In the following explorations, we use the first 1139 matched “short” utterances, which each feature a single interlocutor. The average duration is 2.5 seconds. § UTTERANCE PROSODY REPRESENTATION As our aim here is exploratory, we chose to work with simple, explicit, interpretable representations of prosody. We use the Midlevel Prosodic Features Toolkit[<https://github.com/nigelgward/midlevel>], as its features were designed to be robust for dialog data, generally perceptually relevant, and normalized per speaker. From the available features, we selected ten based on previous utility for many tasks for several languages <cit.>, specifically: intensity, lengthening, creakiness, speaking rate, pitch highness, pitch lowness, pitch wideness, pitch narrowness, peak disalignment (mostly late peak), and cepstral peak prominence smoothed (CPPS), the latter an inverse proxy for breathy voice. This rich set of prosodic features supports more comprehensive analyses than most prosody research efforts. To characterize the prosody of an utterance, each base feature is computed over ten non-overlapping windows, together spanning the whole utterance. Thus, each utterance is represented by 100 features. 
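As an illustration only, the pooling into a 100-dimensional utterance vector can be sketched as below, assuming the ten per-frame base feature tracks have already been extracted and normalized (e.g., with the Midlevel toolkit) and using the proportional spans given in the next paragraph; a simple per-span mean stands in here for the toolkit's own per-span computation.

```python
# Sketch: pool ten base prosodic features over ten proportional spans.
import numpy as np

# Span boundaries as fractions of the utterance duration.
SPANS = [(0.00, 0.05), (0.05, 0.10), (0.10, 0.20), (0.20, 0.30), (0.30, 0.50),
         (0.50, 0.70), (0.70, 0.80), (0.80, 0.90), (0.90, 0.95), (0.95, 1.00)]


def utterance_vector(frame_features):
    """frame_features: (n_frames, 10) array, one column per base feature
    (intensity, lengthening, creakiness, rate, pitch highness/lowness/
    wideness/narrowness, peak disalignment, CPPS).
    Returns a 100-dimensional vector: one value per feature and span."""
    n = frame_features.shape[0]
    pooled = []
    for lo, hi in SPANS:
        start = int(lo * n)
        stop = max(int(hi * n), start + 1)          # guard against empty spans
        pooled.append(frame_features[start:stop].mean(axis=0))
    return np.concatenate(pooled)                   # 10 spans x 10 features
```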
The window sizes are proportional to an utterance's duration and span fixed percentages of its duration: 0–5%, 5–10%, 10–20%, 20–30%, 30–50%, 50–70%, 70–80%, 80–90%, 90–95%, 95–100%, as seen in Figure <ref>. This representation is thus not aligned to either syllables or words, but is appropriate for representing the sorts of overall levels and contours that are most often associated with pragmatic functions. Normalization occurs at two steps in the feature computation. The low-level (frame-level) features — pitch, energy, and CPPS — are normalized per track to mitigate individual differences. Subsequently, the mid-level features (peak disalignment, lengthening, etc.) are computed over each specified span for every utterance, and after being computed for all utterances in a track, each is z-normalized. § CROSS-LANGUAGE FEATURE CORRELATIONS For our first glimpse at the EN-ES prosody mapping, we examined the Spearman correlations between the 100 EN prosodic features and the 100 ES prosodic features, across all matched pairs. (We computed Spearman correlations as well within each language for comparison.) Were EN and ES prosodically identical, we would expect each EN feature to correlate perfectly with its ES counterpart. In fact, the correlations were far more modest but always positive and often substantial: more than half the features sharing the base feature and span have correlation ρ≥0.3. Thus, overall, EN and ES prosody is quite similar, and pitch highness is generally the most similar, especially towards the middle of utterances (e.g. 30–50%, ρ=0.59). While some features, such as pitch highness, have much stronger span-for-span correlations, other features, notably speaking rate, lengthening, and CPPS, have correlations that are strong throughout the utterances. For example, speaking rate at every span in an EN utterance correlates with speaking rate at every span in the corresponding ES utterance. These findings are compatible with the idea that English and Spanish prosody is overall roughly similar, but that the locations of local prosodic events can vary, likely due to differences in word order and lexical accents. However, some correlations were much weaker. The lowest cross-language correlations for the same features were for creakiness and peak disalignment, suggesting that these are likely to have different functions in the two languages. There were also many off-diagonal correlations. Most of these were unsurprising, such as the anticorrelations between the speaking rate and lengthening features, but not all. For example, intensity at the end of an EN utterance correlates with CPPS throughout an ES utterance (EN 90–95% vs. ES 5–20%, 30–70%, and 80–100%, ρ≥0.3), while no such relationship was found within either language. Examination of the ten pairs that most closely reflect this pattern (EN high near final intensity and ES high CPPS), showed that in half the speaker is preparing a follow-up explanation. Thus, we have identified a pragmatic function that seems to be prosodically marked differently in EN and ES. Figure <ref> shows the values for these two features for one such pair. § PROSODIC DISSIMILARITY METRIC To judge the quality of prosody transfer, we need a measure of how far the predicted prosody diverges from the observed prosody in the human reference translation. If there existed a synthesizer capable of realizing arbitrary prosodic specifications, we could just use it and then use human perceptions of the match between the synthesized and reference speech. 
However, no existing synthesizer is capable of this, especially for the rich set of prosodic features we are investigating here. Existing metrics for estimating similarity from prosodic feature representations exist, such as <cit.> and <cit.>, but these again are limited in the prosodic features considered. Accordingly, we propose a new simple metric. This estimates the dissimilarity of two utterances as the Euclidean distance between their respective prosodic representations, as computed in Section <ref>, with all features given equal weight. We do not expect this metric to accurately match human perceptions, but we can hope that it might be useful as a first-pass metric for judging prosodic dissimilarity. To gauge this, we compared its outputs to our perceptions of a few dozen within-language utterance pairs. To structure this process, we wrote software to randomly select an utterance (the “anchor”) from the data and retrieve the four most similar utterances and four most dissimilar utterances according to the metric. Ideally, perhaps, we would have made holistic judgments of the degree of prosodic similarity between each sample-anchor pair, but, probably like most people, we lack this ability. Instead, we repeatedly listened and identified whatever similarities and dissimilarities we could note, taking 2 or 3 minutes per pair to do so. The most salient of these were always at the level of pragmatic function, rather than prosodic features, but we considered this unproblematic, as the ultimate aim of prosody transfer is pragmatic fidelity, not prosodic fidelity. We did this process for seven anchors and eight comparisons utterances each, all from the English half of the data. We found, first, that the metric captures many aspects of pragmatic similarity — including speaker confidence, revisiting unpleasant experiences, discussing plans, describing sequences of events, and describing personal feelings — all of which were generally also prosodically similar. Table <ref> shows one set of utterances to illustrate. The prosody of this anchor utterance suggested that the topic is personal feelings: a slow then fast then slow speaking rate, a pause, and occasional use of creaky voice. Each of the utterances rated similar by the metric shared these qualities, albeit to varying degrees. Second, we noted that the similarities found were not generally lexically governed. While some words and syntactic structures have characteristic prosody, and some of the pairs considered similar by the metric shared lexical content, such as music in the fourth and fifth examples in Table <ref>, generally prosodic similarity seemed to be orthogonal to lexical similarity. Third, we noted that the metric does not always appear to match perceptions. To try to understand its limitations and what needs improving, we examined examples where our judgments diverged most from the metric's estimates, namely four which the metric judged very similar but sounded rather different to us, including EN_025_1 in Table <ref>, and two which we felt had significant similarities but which the metric judged very different, including EN_024_1 in Table <ref>. Of these, two pairs had very salient nasality differences, which our model does not capture, and sounded very different in terms of pragmatic function, specifically relating to the presumption of common ground. For three pairs the problem seemed to be differences in syllable-aligned pitch and energy contours, which are not directly represented by our features. 
However, for 50 of the 56 pairs examined, our judgments aligned with those of the model. Thus, while the metric needs improving, overall we deemed it likely to be useful. We consider these findings also to be evidence that our prosody representation is meaningful. Accordingly, below we rely on both for evaluating the quality of prosody transfer, as a way to obtain insight. § COMPARISON OF MODELING STRATEGIES Our corpus and metric enable the evaluation of different models of the cross-language prosody mappings. The task is, given the prosody of an utterance in the source language, to predict the prosody of its translation in the target language. The error is the dissimilarity between the inferred prosody and the prosody of the human re-enactment. We here report the results for models in both directions, EN→ES and ES→EN, using the partition described in Table <ref>. The first model is intended to represent the best that can be achieved with a typical cascaded speech-to-speech model, with a synthesizer that operates in ignorance of the input-utterance prosody. Our implementation relies on the lookup of the human-generated translation in the target language, to avoid the impact of ASR or MT errors. We use Whisper <cit.> to transcribe this to a word sequence with punctuation and then use Coqui TTS[<https://github.com/coqui-ai/TTS>] to synthesize speech from that transcription. To ensure a fair comparison, utterances incorrectly transcribed were excluded from the data. Table <ref> reflects the 252 excluded utterances. To judge the quality of each output, we compute a representation of the prosody of the synthesized speech using the method of Section <ref>. The second model predicts the prosody of the translation to be identical to the prosody of the input: it trivially outputs the same representation. This “naive” model embodies a strategy of directly transferring the input prosody. The third model is trained by linear regression. Thus, each feature of the target prosody representation is predicted as a linear function of the 100 features of the input utterance. Table <ref> shows the three models' overall average error. The synthesizer baseline is outperformed by the naive baseline, suggesting that keeping the same prosody in translation may be a reasonable basic strategy. The naive baseline is in turn outperformed by the linear regression model, suggesting that even a simple model can learn some aspects of the mapping between English and Spanish prosody. While our simple linear model shows a benefit, its prediction error is still very high. We think the likely factors include not only the existence of mappings too complex for a linear model, but also the small size of the training data, the existence of free variation implying a permissible margin of error for our metric, unmodeled dependencies of target-language prosody on the source-utterance context and its lexical content, and speaker-specific prosody behavior tendencies. § QUALITATIVE ANALYSIS To better understand the challenges of cross-language prosody modeling, we examined examples where the various models did well or poorly. First, we examined the 16 examples in each direction whose synthesized prosody was least similar to the human-produced target. 
The most common and salient differences were: failure to lengthen vowels and vary the speaking rate for utterances where speakers are thinking or expressing uncertainty or hesitation, failure to change pitch at turn ends, and generally sounding read or rehearsed and thus unnatural for conversational speech. Next, we examined the 16 pairs for which the naive model did worst, that is, the cases where the English and Spanish prosody diverged most. Often there were salient differences, in a few common patterns, such as ES utterances being creakier than the English, EN but not ES utterances ending with rising pitch, and EN utterances being breathier in some regions. The latter two may reflect the common use of uptalk in English, that is to say, the use of breathy voice and rising pitch to establish common ground regarding a referent <cit.>, a pattern rare in the Spanish dialect of our corpus. In other cases there were no highly salient differences; presumably, these had multiple smaller differences which added up to a big difference according to the metric. Next, we examined the examples where the linear regression model provided the most improvement relative to the naive baseline; unsurprisingly, these were often cases where it corrected for the divergences mentioned above. Finally, we examined the highest-magnitude coefficients of the linear model. Most were unsurprising and reflected correlations noted above. However, among the top three, there was a –.32 coefficient relating EN lengthening over 5%–10% to ES CPPS over 0%–5%. This may reflect the tendency for EN speakers, but not ES speakers <cit.>, to start turns with fast speech (low lengthening); the latter perhaps tend instead to start turns with more harmonic (higher CPPS) speech. § IMPLICATIONS AND FUTURE WORK As we expected, these investigations indicate that effective cross-language prosody transfer will require attention to prosodic features beyond pitch and duration. These include at least breathy voice, creaky voice, and intensity. We also found that the prosody of some pragmatic functions, as they occur in dialog, differs in previously unsuspected ways across languages. These include at least grounding, getting personal, leading into something, and taking the turn. These findings suggest that well-designed prosody transfer techniques will be important for effective speech-to-speech translation. Finally, our results indicate that doing so has the potential to convey many more pragmatic functions and intents than have previously been managed. These investigations relied on a small corpus, a non-comprehensive prosody representation, and a crude metric. The fact that these enabled us to obtain interesting findings is evidence for their utility. At the same time, all of these need extensions and improvements, and doing so would enable future work to produce a clearer and broader picture of what prosody conveys in the two languages, how it does so, and what the differences are. In addition to such basic research, we envisage our findings informing the design of speech-to-speech translation systems, potentially via two paths. In one path, for end-to-end models, an improved version of our dissimilarity metric, properly extended and tuned to model human perceptions, could serve as the loss function for training.
In the other path, for cascaded models, our analysis techniques could inform the design of a specific prosody-transfer module, and inspire the development of synthesizers capable of following a rich prosody specification and thereby conveying a wide range of pragmatic functions. Given the unavoidably high cost and consequent low volume of matched conversation data, either approach will most likely need to exploit per-language or joint self-supervised training techniques. We share all our data, code, and observations at our public repository: <https://github.com/joneavila/DRAL>. § ACKNOWLEDGEMENTS We thank Emilia Rivas for assistance with the data collection; Ann Lee, Benjamin Peloquin, and Justine Kao for discussions; and UTEP URI for internal funding.
http://arxiv.org/abs/2307.04995v1
20230711031740
PowerFusion: A Tensor Compiler with Explicit Data Movement Description and Instruction-level Graph IR
[ "Zixuan Ma", "Haojie Wang", "Jingze Xing", "Liyan Zheng", "Chen Zhang", "Huanqi Cao", "Kezhao Huang", "Shizhi Tang", "Penghan Wang", "Jidong Zhai" ]
cs.LG
[ "cs.LG", "cs.PL" ]
PowerFusion: A Tensor Compiler with Explicit Data Movement Description and Instruction-level Graph IR Zixuan Ma, Haojie Wang, Jingze Xing, Liyan Zheng, Chen Zhang, Huanqi Cao, Kezhao Huang, Shizhi Tang, Penghan Wang and Jidong Zhai Tsinghua University ================================================================================================================================================================= Deep neural networks (DNNs) are of critical use in different domains. To accelerate DNN computation, tensor compilers are proposed to generate efficient code on different domain-specific accelerators. Existing tensor compilers mainly focus on optimizing computation efficiency. However, memory access is becoming a key performance bottleneck because the computational performance of accelerators is increasing much faster than memory performance. The lack of a direct description of memory access and data dependence in current tensor compilers' intermediate representation (IR) brings significant challenges to generating memory-efficient code. In this paper, we propose PowerFusion, a tensor compiler that can generate high-performance code for memory-intensive operators by considering both computation and data movement optimizations. PowerFusion represents a DNN program using an instruction-level graph IR, which includes primitives indicating its computation, data movement, and parallel strategies. This information is further composed into an instruction-level dataflow graph to perform holistic optimizations by searching different memory access patterns and computation operations, and generating memory-efficient code on different hardware. We evaluate PowerFusion on NVIDIA GPU, AMD GPU, and Cambricon MLU, showing speedup up to 1.97×, 2.93×, and 16.91× (1.28×, 1.23×, and 2.31× on average), respectively, compared to current most performant frameworks. § INTRODUCTION Deep neural networks (DNNs) have been widely used in a number of important domains, such as computer vision (CV), natural language processing (NLP), and so on. Due to the massive computation power required by DNN models, domain-specific accelerators have been developed to improve DNN efficiency. While the computational performance of accelerators has rapidly increased in recent years, memory performance is lagging far behind. <Ref> illustrates the trend of half-precision performance and memory bandwidth of typical accelerators, from 21.2 TFLOPS and 732 GB/s (NVIDIA Tesla P100) in 2016 to 330.3 TFLOPS and 1,008 GB/s (NVIDIA RTX 4090) in 2023. During this period, the ratio of computation to memory performance has increased by 11.3×, making memory performance the main bottleneck of DNN models. Moreover, the divergence in architectures and memory hierarchies of different accelerators also brings significant optimization challenges to designing performance-portable DNN systems. Therefore, optimizing memory efficiency is becoming essential to fully exploit hardware performance. Tensor compilers are designed to generate efficient code on different accelerators. There are two main approaches to generating memory-efficient code: improving hardware memory bandwidth utilization and minimizing low-level (large and slow memory hierarchy) memory access. The former requires the program's memory access pattern to align with the hardware characteristics to achieve peak performance, while the latter involves analyzing data reuse relationships and reusing data in high-level memory to reduce low-level memory access. Achieving both requires the tensor compiler to optimize memory access directly, which in turn requires an explicit description of data movement in its intermediate representation (IR). Additionally, the compiler needs to analyze data dependencies at a fine granularity to reuse data in the high-level memory hierarchy, improving overall memory performance. Existing tensor compilers face significant challenges in meeting these requirements. For example, TVM <cit.> and Ansor <cit.> represent a tensor program with an abstraction of compute and schedule, inspired by Halide <cit.>. These compilers first convert DNN models into a loop-based IR (compute) and apply a series of optimizations (schedule) to transform DNN programs to find better performance. Consequently, they generate efficient code tailored to specific architectures. However, memory-intensive programs with this abstraction incur three main challenges: Implicit data movement representation In TVM, the pattern of data movement is implicitly represented through a schedule, making it challenging to assess memory performance during optimization and to optimize memory performance effectively. Schedule search order When merging multiple computing operations, TVM first fuses multiple loops into one and then applies schedules to generate efficient code. This generation order prevents the application of different memory access patterns to distinct computation operations, missing potential optimization opportunities. Coarse-grained dependence analysis Existing tensor compilers represent DNN programs as perfectly nested loops, which limits dependence analysis only to loops.
This dependence analysis granularity does not align with the data's granularity in the memory hierarchy, leading to missing opportunities to reduce memory access through data multiplexing. These constraints make it challenging for tensor compilers to achieve optimal memory performance. To address these limitations, we propose , a tensor compiler that considers both computation and data movement patterns to address the challenge of generating memory-efficient code for various architectures. represents tensor programs using , which describes operators using three primitives, parallel, computation, and data movement. The parallel primitive indicates parallel strategies for a given computation. The computation and data movement primitives, which we call , are constructed as nodes of an instruction-level dataflow graph, called , and the edge between them represents the data on certain memory hierarchy and also their dependence. By explicitly expressing memory access operations and instruction-level dependence, can search different memory access patterns for multiple computing operations, thus supporting a more thorough optimization space for memory-intensive operators. With this representation, can generate different for each computation with various memory access patterns to find the configuration that maximizes memory bandwidth utilization. By explicitly representing data dependence at the instruction level, can analyze fine-grained dependence and apply graph optimization to reduce redundant memory access and reuse data in the proper memory hierarchy with the lowest cost. This optimization is automatic and unlimited, requiring no rules to guide the merging process and supporting more complex tensor programs. Additionally, we provide a performance model in to evaluate the key performance indicators of . This model can assist in deciding which operators to generate or use from a DNN library and which performs best and is worth generating. Finally, the methods employed by are cross-platform with platform-independent optimization strategies. We have designed a general abstraction that can accommodate various architectures with different parallel structures and memory hierarchies. By configuring the generation and searching process, can generate optimized code for a specific hardware platform, requiring developers to only implement instruction-to-instruction mappings for computation and data movement to adapt to a new hardware platform. We have implemented from scratch and support multiple architectures, including NVIDIA GPU, AMD GPU, and Cambricon MLU. Our evaluation with various DNN models demonstrates that can efficiently generate code on different platforms, achieving higher performance than native computing systems like TensorRT <cit.> on NVIDIA GPU and MagicMind <cit.> on Cambricon with 1.28× and 2.3× respectively. By using descriptions, can seamlessly adapt to different architectures, requiring only up to 1,000 lines of code for adaptation. This paper makes the following contributions: * We present , which targets optimizing memory performance by explicitly representing data movement patterns and fine-grained data dependence through instruction-level graph descriptions. * We design a set of optimization strategies that include searching and transformations on , to reduce memory access and improve memory efficiency. * We propose a hardware abstraction and develop an end-to-end tensor compiler that utilizes this abstraction, and generating optimized code for various architectures. 
* is implemented on different hardware, including NVIDIA GPU, AMD GPU, and Cambricon MLU, and achieves up to 1.97×, 2.93×, and 16.91× (1.28×, 1.23×, and 2.31× on average), respectively, compared with the most efficient DNN framework on each hardware. § BACKGROUND TVM and Ansor adopt the compute/schedule separation idea and use auto-tuning techniques to search over different schedules for code generation, working with loop-based IR. As illustrated in <Ref>, when optimizing multiple computation operations, TVM first merges the loops of different operations and then searches for the execution plan of the merged loop before generating codes. However, this approach leads to two limitations. Firstly, during dependence analysis, it becomes impossible to analyze the reuse relationship of the data blocks that correspond to each data movement operation on the high memory hierarchy, preventing many operations from being fused. Secondly, the merging-first-then-scheduling approach makes it impossible for different operators to adopt different execution plans, which results in suboptimal in-memory performance. Polyhedral works, represented by PPCG <cit.> and PLUTO <cit.>, are capable of analyzing element-level dependence in a loop-based program, by representing the program mathematically as points in a high-dimensional integer space. The program can then be optimized by transforming the space. This type of work typically employs integer linear programming to find a mathematically optimal transformation. However, due to the lack of consideration for specific hardware constraints, the optimized program may not perform well when running on real hardware. Considering the huge overhead of integer linear programming, it is also hard to adopt auto-tuning techniques. Another approach, represented by TensorFlow-XLA <cit.> and DNNFusion <cit.>, uses kernel fusion at the computational graph level to reduce memory access. It generates the fusion plan on the computation graph and then employs the code generation tool to generate device code. TensorRT <cit.> is another type of work that uses a series of rules to map complex computation operations in the DNN model to manually-optimized kernels. This approach takes the operation as the granularity of analysis, and its extensibility is limited by the capability of the back-end operator library or code generation. To address these limitations, utilizes instruction block and memory slice as the granularity of analysis. This granularity matches the computation and data movement operations that are actually performed on the hardware, providing the required information for data reuse. By organizing the program as a graph with computation and data movement operations at this granularity, the dependence analysis overhead is significantly reduced. As shown in <Ref>, compared to TVM, first searches for the execution plan of different operations and then combines different execution plans through fine-grained dependence analysis to achieve higher memory performance. Various domain-specific accelerators, such as NVIDIA GPUs and Cambricon MLUs, generally employ a hierarchical memory structure based on scratch-pad memory, comprising several layers such as register file, shared memory, and global memory. This complex and varied memory hierarchy presents two main challenges to memory performance optimization for accelerators: 1) the deep coupling of the memory hierarchy and parallel structure, and difficulty in determining the optimal memory hierarchy for data usage. 
2) complex data movement between different memory hierarchies, and difficult to optimize. To address these challenges, we abstract the memory hierarchies of different hardware in a unified abstraction and represent the characteristics of different hardware in its attributes. This approach enables us to provide unified optimization and simplify memory performance optimization for different architectures. Furthermore, by incorporating synchronization operations in GIR, we can efficiently analyze the highest memory level that data can use, allowing to provide optimal memory performance optimization for different architectures. § OVERVIEW <Ref> provides an overview of , along with a running example of how it optimizes memory performance for a real-world DNN model. The given model fragment is from ShuffleNet, which merges two tensors into one, shuffles it, and then divides it into two new tensors. To optimize this model, takes this model fragment as input, and converts it to instruction-level graph IR, called . Then optimizes this model by -based optimizations, and finally generates high-performance code. The abstraction will be introduced in <Ref>. -based optimizations take a computation graph as input and apply graph generation to each operator separately to obtain the of each operator. Next, it searches and merges these to obtain the corresponding to the computation graph. With the merged , then applies graph rewriting rules to optimize the using three graph rewriting rules, resulting in a reduction of memory access amount, significantly improving memory performance. Graph rewriting and -based optimization process will be introduced in <Ref> and <Ref>, respectively. Finally, rearranges the optimized and sequentially generates code for specific hardware. Code generation techniques will be discussed in <Ref>. § ABSTRACTION This section presents the design of , which aims to address the lack of a comprehensive description of data movement between memory hierarchies in existing tensor compilers, thereby hindering memory access performance optimization. The core idea of the is to represent a tensor program as a graph consisting of computation and data movement primitives, called s, with the granularity of instruction block at a specific level of parallelism. The and the device code for a operation as shown on <Ref>. is represented as a dataflow graph wrapped by parallel primitives. Each node of the denotes an instruction-level computation or data movement , and each edge represents a memory slice on a specific memory hierarchy. The parallel primitive indicates the parallel strategies of this instruction-level dataflow graph. This approach offers two advantages: first, all computation and data movement operations are expressed with proper granularity, making them easier to optimize; second, the dependences between computation and data movement are clear and easy to analyze. The graph structure in the intermediate representation also enables the use of graph algorithms to optimize the and achieve end-to-end optimization effects. Memory slice In , data is represented as a memory slice, which is a set of contiguous elements on a tensor. Each memory slice has four attributes: memory hierarchy, num, width, and stride. Memory hierarchy indicates which level of memory hierarchy this memory slice is stored, e.g., DRAM or SRAM. The num attribute specifies the number of continuous segments, each with a length of width elements and a fixed stride of stride between segments. This format has two advantages. 
First, it aligns with best practices for memory performance since most memory hierarchies, such as DRAM and SRAM, are optimized for continuous access. Second, it matches the granularity and shape of computation instruction blocks, which are often designed in this format. The two-dimensional format enables fine-tuning of data movement operations without significant performance loss. Computation The uses computation abstraction, namely computation , at the instruction block level, where each computation corresponds to a set of instructions corresponding to the optimized computation kernel in the code. The computation in is defined by three types of s: element-wise, reduce, and broadcast. Element-wise perform element-to-element computations, such as arithmetic and activation. Reduce and broadcast handle dimension expansion and contraction. Note that most wrapped computation can also be represented as the above three , e.g., we can represent matrix multiplication with element-wise and reduce operations. will automatically choose the representation method to achieve better performance (an example is shown in <Ref>). Using this fine-grained computation primitive, we can describe program computation and optimize their schedules for higher performance without considering hardware-related information such as instruction ordering. Data movement Data movement in describe the transfer of data within and across different memory hierarchies. These operations require the input and output to have the same pattern, which includes the number of elements, the width of each element, and the stride between consecutive elements. By explicitly expressing data movement operations, enables efficient handling of data transfer and optimization of memory performance. also explicitly introduces a special case of data movement , called synchronization , to ensure correctness between data movement . For instance, in CUDA, synchronization operations within a thread block are necessary when different warps access the same shared memory object. The scope attribute of a synchronization in specifies its scope, which determines the implementation of the operation. In CUDA, intra-warp synchronization uses warp shuffle, while block synchronization uses shared memory. In , a synchronization operation not only implies waiting but also indicates a change in the memory slice pattern, reflecting that different data movement operations use different memory slice patterns to read and write the same memory data. By incorporating synchronization operations, can better analyze the dependence between data and ensure program correctness. Parallel In , we define the parallel granularity as the smallest memory performance granularity. For example, on GPUs, it corresponds to a streaming multiprocessor (SM), which is equivalent to a warp in the CUDA programming model. While on CPUs and Cambricon MLUs, it corresponds to a core or IPU, which is equivalent to a thread. We refer to this smallest unit as the parallel unit and assign an index to each unit. We then map the nested parallel structure of the hardware onto the index space, with distinct segments contiguously allocated to each structure. For instance, in CUDA, the indexes of all warps in the same block are contiguous. This approach enables us to analyze the parallel scope of each data movement operation and eliminate the impact of different parallel nesting structures across architectures. As a result, we can streamline the code optimization process for more efficient execution. 
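To summarize how these pieces fit together, the following Python sketch models the abstraction as we have described it; the class and field names are our own shorthand (they are not taken from the implementation), and the set of memory levels is deliberately minimal.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class MemLevel(Enum):
    DRAM = 0       # off-chip memory
    SRAM = 1       # shared / scratch-pad memory
    REGISTER = 2   # per-unit register file

@dataclass
class MemorySlice:
    """A set of contiguous segments on a tensor: `num` segments of `width`
    elements each, separated by a fixed `stride`, resident at a given
    memory hierarchy level."""
    level: MemLevel
    num: int
    width: int
    stride: int

    def elements(self) -> int:
        return self.num * self.width

@dataclass
class Node:
    """An instruction-level node of the dataflow graph: either a computation
    (element-wise / reduce / broadcast), a data movement between memory
    slices, or a synchronization with a given scope."""
    kind: str                      # "compute", "move", or "sync"
    inputs: List[MemorySlice] = field(default_factory=list)
    outputs: List[MemorySlice] = field(default_factory=list)
    attrs: dict = field(default_factory=dict)

@dataclass
class Graph:
    """Dataflow graph wrapped by a parallel primitive: `num_units` parallel
    units execute the same node sequence over distinct memory slices,
    indexed by a flat unit id."""
    num_units: int
    nodes: List[Node] = field(default_factory=list)

# Example: each unit copies 4 segments of 32 elements (stride 128) from
# DRAM into registers before an element-wise computation.
g = Graph(num_units=32, nodes=[
    Node("move",
         inputs=[MemorySlice(MemLevel.DRAM, 4, 32, 128)],
         outputs=[MemorySlice(MemLevel.REGISTER, 4, 32, 32)]),
    Node("compute", attrs={"op": "elementwise:relu"}),
])

Making every edge an explicit memory slice is what later allows the rewrite rules to reason about where data lives and which units must synchronize.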
§ REWRITING As discussed in <Ref>, can express data dependence between computation and data movement , enabling additional analysis and optimization opportunities. To optimize a , we propose graph rewriting techniques using a set of rules called rewrite rules, which are commonly used for graph transformations. In , we propose three types of rewriting rules and an optimization strategy to deploy these rules on . §.§ Rewrite Rules Synchronization insertion The goal of synchronization insertion is to increase the memory hierarchy of memory slices in the and to add synchronization operations to ensure the correctness of the program. When multiple data movement access the same memory slice, synchronization is necessary among the parallel units participating in the . The requirement for synchronization arises from the fact that different parallel units before and after synchronization access the same memory slice. Essentially, the process of read-synchronize-write involves exchanging data between various parallel units. Different memory hierarchies correspond to different parallel scopes. For instance, on NVIDIA GPUs, data exchange within the same warp is stored in registers and synchronized using warp shuffle. When data is exchanged among different warps within a thread block, shared memory is utilized, and synchronization takes place at the thread block. Therefore, the highest memory hierarchy for data storage can be determined by analyzing the parallel units of data exchange for the read and write operations of the same memory block. The mapping between the memory hierarchy and parallel units is specific to each hardware architecture and is expressed through a set of conditions. Although these conditions are associated with the kernel execution configuration, their format is independent of the hardware. For example, in a CUDA kernel with four warps per thread block, the synchronization scope condition is shown in Figure <ref>. Each square in the figure represents a memory slice, and its pattern represents its memory slice pattern. The different colors represent different parallel units that access the data in the corresponding memory. The synchronization scope is determined by the units involved in the reading and writing operations. If the unit that writes is the same as the unit that reads, the synchronization scope is restricted to the warp level. In cases where the pattern of reading and writing is the same, the scope of the synchronization is restricted to the lane level, and the operation has no effect. If the result of dividing the IDs of the unit that reads or writes by 4 is the same, then the synchronization scope is restricted to the block level. Otherwise, the scope is restricted to the device level. Using the conditions mentioned above, we define a synchronization insertion operation. When two consecutive data movement are encountered (synchronization is allowed between them, as is the case for all consecutive in this section), we analyze the synchronization relationship between the two memory slices and increase their memory hierarchy to the highest level. A new synchronization is then inserted into the rewritten graph, with the exception of lane synchronization, which only inserts a new memory slice without synchronization. This process improves the memory performance of the and ensures program correctness. 
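Read as a decision procedure, the scope conditions above can be captured in a few lines. The sketch below is our simplified reading of the figure for a CUDA kernel with four warps per thread block, restricted to a single writer and a single reader unit per memory slice; the function name and string labels are illustrative only.

def sync_scope(writer_unit: int, reader_unit: int,
               same_slice_pattern: bool, warps_per_block: int = 4) -> str:
    """Infer the narrowest synchronization scope needed when one parallel
    unit (warp) writes a memory slice and another reads it, for a CUDA
    kernel with `warps_per_block` warps per thread block."""
    if writer_unit == reader_unit:
        # Same warp: if the read and write patterns also match, the data
        # never leaves the lane and the synchronization has no effect.
        return "lane" if same_slice_pattern else "warp"
    if writer_unit // warps_per_block == reader_unit // warps_per_block:
        # Different warps in the same thread block: exchange through
        # shared memory and synchronize at block scope.
        return "block"
    # Different thread blocks: global memory and device-wide synchronization.
    return "device"

# Warps 0 and 1 share a block (ids 0..3); warps 3 and 4 do not.
assert sync_scope(0, 0, same_slice_pattern=True) == "lane"
assert sync_scope(0, 1, same_slice_pattern=False) == "block"
assert sync_scope(3, 4, same_slice_pattern=False) == "device"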
merging Although we can improve the memory hierarchy of memory slices by applying synchronization insertion, there are still a large number of redundant data movement in and introduce additional data movement and synchronization overhead, meanwhile generating redundant memory slices, thereby reducing execution performance. This redundancy falls into two main categories: read-after-write and read-after-read. Read-after-write means that two consecutive data movement sequentially operate on the same memory slice, write first, and then read. This kind of data movement may be redundant. Read-after-read means that two data movement read the same memory slice. When there is no synchronization relationship between the two , this combination will cause redundant data movement. To solve this redundancy problem, we propose the merging rule, which consists of two rules: * Replace two consecutive data movement with a new data movement whose input is the input of the first and output is the output of the last . If the memory slice pattern read and written is consistent, it will be replaced by a null . * Replace two data movement that read the same memory slice without dependence with one . By combining multiple into one, this rule can reduce memory access for thus optimizing the memory performance. swapping Applying these two rules mentioned above does not guarantee optimal memory performance of the program. In fact, a , as shown in <Ref>, cannot be further optimized. This is because the program is constrained by two global synchronization , resulting in the generation of four DRAM read and write operations. This is due to the disconnect between data movement and synchronization , which makes it impossible to apply optimization rules. To solve this problem, we can move the computation . For example, element-wise represented by arithmetic computations, activation functions, etc., are interchangeable with data movement, synchronization, broadcast, and some of reduce . This interchangeability allows us to move element-wise , providing more opportunities for optimization. <Ref> shows a process of optimizing through swapping. By applying swapping, the can further apply merging rules, thereby reducing the total execution time of the program. §.§ Optimization Strategy To optimize the memory usage of a program, we can apply three graph rewriting rules on the . These rules ensure that the program's theoretical memory performance does not decrease after applying the rules. Therefore, we can apply as many rewriting rules as possible during the optimization process. However, if all the rules are irreversible, we can use a greedy strategy instead of the search-based method to quickly find the optimal solution. Among the three rules, synchronization insertion and merging are irreversible and can only be applied in one direction. On the other hand, the swapping rule is reversible, so our optimization approach only allows forward swapping of element-wise to make it irreversible. With the revised rules, we propose a greedy optimization strategy outlined in the pseudo-code shown in <Ref>. The strategy applies synchronization insertion and swapping as much as possible, followed by merging until merging can no longer be applied. Since the rules are irreversible, the optimization is directional and guaranteed to terminate. This algorithm enables us to quickly obtain an optimized , and in practice, the solution time is negligible. 
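As a rough illustration, the greedy strategy can be phrased as the following fixpoint loop. This is our paraphrase of the referenced pseudo-code, with the three rules passed in as opaque callables; only the overall structure (exhaustive synchronization insertion and forward swapping, then merging until nothing fires) is meant to be faithful.

def optimize(graph, insert_sync, swap_forward, merge):
    """Greedy graph rewriting: exhaustively apply synchronization insertion
    and forward swapping, then merge until no merging rule applies. Each
    rule(graph) returns (new_graph, changed); because every rule is
    irreversible, both loops terminate."""
    changed = True
    while changed:
        changed = False
        for rule in (insert_sync, swap_forward):
            graph, fired = rule(graph)
            changed = changed or fired
    merged = True
    while merged:
        graph, merged = merge(graph)
    return graph

# Trivial demo with no-op rules on a placeholder graph object.
noop = lambda g: (g, False)
result = optimize({"nodes": []}, insert_sync=noop, swap_forward=noop, merge=noop)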
Thus, we can search complex graph structures and experiment with different combinations of rules to optimize in various applications. This fast optimization method is a valuable tool for optimizing tensor programs with . § GENERATION The primary objective of is to optimize the memory performance of tensor programs from end to end. To achieve this, we generate for the input model, which can be further optimized using Graph Rewriting techniques and generate efficient code. In this section, we will explain how the are generated from the model. Preprocessing of computation graph The input to is a model represented as a computation graph. As the primary optimization goal of is memory performance, operators bounded by computing performance directly use DNN libraries like cuDNN on NVIDIA GPU or CNNL on Cambrian MLU. We determine whether to call the DNN library based on the operators' computation and memory access amount, rather than its type. As a typical example shown in <Ref>, which is a subgraph in the EfficientNet model, the depth-wise convolution operators are memory-intensive. Thus it will be expressed using and jointly optimized with its preceding operator . On the other hand, the two point-wise convolutions are computation-intensive, so chooses to call the DNN library directly. will also adjust the data layout of the operators when calling the library function to achieve better performance by introducing additional transpose operators. These transpose operators will be jointly optimized with other memory-intensive operators to further improve the performance. After these optimizations, the model is split into separated subgraphs containing only memory-intensive operators. generation The next step in is to generate from the subgraphs composed of memory-intensive operators. As described in <Ref>, searching the execution plan of different operators and then applying the combination can effectively improve the search space of the program, leading to higher performance. To achieve this, complex operators need to be split. For instance, the operator is a composite operator comprising a and a operator. splits all such composite operators into four basic operators: element-wise, broadcast, reduce, and transpose. Note that these basic operators are high-level operators on the computation graphs, not the same as 's . These operators can apply predefined code templates to generate . For each basic operator, we define several corresponding structures. We take the operator as an example. The reduction with one parallel unit and with multiple parallel units requires different computation and data movement s that correspond to different graph structures. Besides, with a fixed graph structure, each operator has several implementation methods, including parameters such as parallelism at execution time and the tiling size for specific implementations. traverses the possible values of these parameters on the graph structure. The combination of the graph structure and the parameters are defined as a template that can generate a set of for a specific basic operator. The graph generation of the and the , both of which generate multiple , as shown in <Ref>. merging After generating for each basic operator, attempts to merge several to a larger one, thus joint optimizing multiple operators. Our design allows merging only if the parallel structures of the two match and there are no external dependencies between the operators on the corresponding computation graph. 
For instance, two operators can be independent or directly connected. Given this rule, we can efficiently generate a merging plan when we choose a for each basic operator. When merging two , we also insert global synchronization operations between data movement s that read and write the same tensor. For sequential reads and writes, a global synchronization needs to be inserted between two data movement s. For parallel reads, a global synchronization operation needs to be inserted before the two s. The resulting merged can be rapidly optimized to achieve high memory performance by applying Graph Rewriting techniques. <Ref> shows an example of how synchronization operation is inserted into merged . To explore the search space as much as possible, traverses all of all basic operators and generates all possible merge plans. However, the direct search approach for this step is computationally expensive. To address this issue, we introduce a performance model. Since all s in a are regular and performance-independent, we can accurately predict the running time of the using the performance model. We can then prune the search space and efficiently obtain an optimization plan that guarantees the optimization result. § CODE GENERATION The design principle of involves abstracting different hardware into a unified structure and extracting hardware features as parameters. This abstraction allows us to optimize programs and generate code for parallel computation and data movement s that are specific to the hardware being used. By decoupling hardware and optimization, the system becomes more extensible. Before generating device code, reorders the to determine the execution order of computation and data movement s. This determines the reuse of memory slices in the physical memory hierarchy. The reordering process is similar to traditional compiler instruction reordering and can use existing methods for optimization. After determining the execution sequence, generates as code in the native programming language and compiles it into the device executable using native compiler. During the code generation phase, generates parallel structure code related to hardware information first, and then sequentially generates device codes for computation and data movement s in order. The generated code is compiled using a native compiler and evaluated to ensure optimal performance. This design enables the low porting cost of to new hardware, as porting only requires mapping the parallel abstraction and primitives for computation and data movement. For example, porting to NVIDIA GPU and AMD GPU requires less than 1,000 lines of new code for each. We will explain how adapts to different platforms and generates efficient code using NVIDIA GPU and Cambrian as examples. NVIDIA GPU The GPU is the most commonly used accelerator in AI domain, with vendors such as NVIDIA and AMD having similar hardware architectures and programming models. To illustrate how is ported to GPUs, we show how it maps to CUDA on NVIDIA GPUs. CUDA's parallel structure consists of three layers: thread block, warp, and thread. To map the parallel unit of onto CUDA, we map it to the warp, as described in <Ref>. The warp is the smallest scheduling unit in CUDA with independent performance. In contrast, threads in CUDA have dependent memory access and scheduling and correspond to SIMD lanes in the SIMD architecture, which makes them unsuitable as parallel unit in . 
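As a simple illustration of this mapping, the sketch below emits the index bookkeeping that a generated CUDA kernel could begin with, assuming the flat parallel-unit index is the global warp id. It is a string-templating toy, not the actual code generator, and the emitted variable names are arbitrary.

def emit_unit_index(warps_per_block: int) -> str:
    """Emit a CUDA-C prologue that maps the nested CUDA hierarchy onto the
    flat parallel-unit index: warps within a block get contiguous unit ids,
    and the lane id addresses elements inside a unit's memory slice."""
    return "\n".join([
        "const int lane = threadIdx.x % 32;               // position within the warp",
        "const int warp_in_block = threadIdx.x / 32;      // warp id inside this block",
        f"const int unit = blockIdx.x * {warps_per_block} + warp_in_block;  // flat parallel-unit index",
    ])

print(emit_unit_index(warps_per_block=4))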
During the search process, the number and size of thread blocks in CUDA can be configured and used as search parameters. For the instructions of computation and memory access operations, the CUDA program represents the program executed on a specific thread. Thus, in , an implementation that says "read contiguous memory containing n × 32 elements" would generate "read n element with stride 32". Cambricon MLU Cambricon MLU is a domain-specific architecture that differs significantly from GPUs in terms of architecture. It has two layers of scratch-pad memory, NRAM and SRAM, which are located in the scope of one IPU or one MTL Cluster, respectively. However, data movement operations on the MLU platform require direct memory access (DMA), which can introduce overhead and limit the memory performance achievable by synchronously performing computation and data movement operations on each IPU. To improve memory efficiency, applies pipeline parallelism, a common optimization technique on the MLU platform. In this technique, multiple pipelines are executed on a single IPU simultaneously, enabling data movement to overlap with computation. During code generation, selects the parallel unit as a pipeline on the IPU and interleaves multiple pipelines to generate execution code. Synchronization operations are then inserted to ensure correct execution. This pipeline parallelism optimization introduces a new parallel dimension under the MTP Cluster and IPU, and does not affect any existing optimization on . The successful porting of to Cambricon MLU platform demonstrates its strong extensibility. § EVALUATION §.§ Evaluation Setup We evaluated on three different hardware platforms: NVIDIA GPU, AMD GPU, and Cambricon. For NVIDIA GPU, we used a Tesla A100 40GB PCIe GPU, which supports Tensor Core for accelerating matrix computation. The peak performance of Tesla A100 on TF32 datatype is 156 TFLOPS, and its peak DRAM bandwidth is 1,555 GB/s. We used CUDA version 11.7.0 for the evaluation and set the memory and application clocks to their maximum values. For AMD GPU, we used INSTINCT MI100, which is optimized for high-performance matrix computation. The peak performance of MI100 on matrix core is 46.1 TFLOPS, and its peak DRAM bandwidth is 1,200 GB/s. We used ROCm version 4.3 for the evaluation. For Cambricon, we used MLU-370x4, which has a peak FP32 performance of 24 TFLOPS and peak DRAM bandwidth of 307.2 GB/s. We used BANGC version 1.0 for the evaluation. These hardware platforms were chosen to represent a diverse range of architectures and to provide a comprehensive evaluation of 's performance. §.§ End-to-End performance We evaluated seven models on to compare with different baselines on various architectures. DNN Models We evaluated on seven models that include three transformer models and four CNN models for model diversity. BERT <cit.> and GPT-2 <cit.> are popular language models that use transformer architecture for natural language processing tasks. The difference is that GPT-2 is autoregressive that should run the model as many times as the number of tokens to generate. Vision Transformer (ViT) <cit.> applies transformer model to computer vision tasks such as image classification and object detection. SAR-DRN <cit.> and EfficientNet <cit.> is a popular CNN models for super-resolution image generation; ShuffleNet <cit.> and RedNet-50 <cit.> introduce more complex memory-intensive operators to CNN models. 
In the evaluation, we set batch size of all models to 1 except GPT-2, whose batch size is 128 as an autoregressive model. Baselines TensorFlow <cit.> and PyTorch <cit.> represent traditional ML frameworks that support both training and inference. We evaluated both of them on NVIDIA and AMD GPUs and PyTorch on Cambricon MLU. TorchScript <cit.> is a library provided by PyTorch that converts PyTorch models into a more efficient, serialized format that can be used for inference and deployment. We evaluated TorchScript on NVIDIA and AMD GPUs. TensorFlow-XLA (Accelerated Linear Algebra) <cit.> is a domain-specific compiler for linear algebra used in TensorFlow. TensorFlow-XLA can automatically parallelize and vectorize TensorFlow computations to make use of the full power of modern hardware. We evaluated TensorFlow-XLA on NVIDIA GPU. TensorRT <cit.> and MagicMind are deep learning inference optimizers developed by their vendors that enable high-performance inference based on high-performance kernels. We evaluated TensorRT on NVIDIA GPU and MagicMind on Cambricon MLU. TVM <cit.> is a ML compiler using loop-based IR. Ansor <cit.> is an automated scheduling tool for TVM, representing the state-of-the-art for tensor compilers targeting computation kernels. We evaluated TVM/Ansor on NVIDIA and AMD GPUs. Results The evaluation results, as presented in <Ref>(a), demonstrate that outperforms all baselines on A100. achieves an average speed-up of 9.7×, 7.3×, 8.2×, and 4.1× over PyTorch, TorchScript, TensorFlow, and TensorFlow-XLA, respectively. Additionally, compared to TensorRT, accelerates models by 28% on average and achieves a speed-up of 1.98× on the GPT-2 model. Moreover, when compared to TVM, achieves an average speed-up of 1.98×. For BERT and ViT, achieves similar performance as TensorRT due to GEMM taking up most of the execution time, thus limiting optimization opportunities. Meanwhile, for shuffleNet and EfficientNet, achieves similar performance as TVM. This is because spends more time on convolution operations compared to TVM, which affects the overall performance. On MI100, outperforms all baselines, achieving an average speed-up of 3.1×, 2.7×, and 18.0× over PyTorch, TorchScript, and TensorFlow, respectively, as shown in <Ref>(b). Compared to TVM, achieves an average speed-up of 1.24×. The lower acceleration ratio on AMD GPU is due to the ratio of peak computation performance to memory bandwidth being lower than that of NVIDIA GPU. The memory access operation takes less time, resulting in a less significant optimization effect of . Finally, as demonstrated in <Ref>(c), on Cambricon MLU-370, achieves an average speed-up of 3.1× over PyTorch and a speed-up of 2.3× over MagicMind. This result indicates that supports different architectures and has good cross-platform optimization capabilities. §.§ Case Study We use GPT-2 as an example to demonstrate how outperforms existing frameworks. <Ref> presents the performance of GPT-2 in a generation application on the A100 GPU. The figure shows that without utilizing KVCache, 's performance is comparable to that of TensorRT. For this model, TensorRT manually optimizes the three fixed memory-intensive operators in the computation graph. In this case, proposes similar optimization strategies compared to TensorRT, resulting in similar performance. KVCache optimization is a widely deployed technique in generative models <cit.>. 
It leverages the characteristics of the GPT model by storing intermediate computation results in every iteration and reusing them later. To use the intermediate results from previous iterations, the KVCache optimization introduces two Concat operators in the attention layer, which concatenate the current and previous computation results. Thus, it reduces computation time by avoiding re-computation and maintains the equivalence of results. As shown in <Ref>, the execution time of TensorRT significantly decreases after applying the KVCache technique, especially for large input indices. can apply further optimization to GPT-2 after using KVCache. As each iteration only requires the computation of one token, the sequence length is always set to 1. In the context of the attention layer in GPT-2 with KVCache, the two matrix multiplication operations degenerate into matrix-vector multiplication, which is limited by memory performance. Thus, using to jointly optimize all operations of the attention layer and generate a memory-efficient kernel significantly improves program performance. <Ref> demonstrates how optimizes the attention layer, where the two matrix multiplications have identical shapes and computation patterns, but the program with the best performance after optimization uses distinct for them. These differ in their data slice pattern and axis of computation operation, making such optimizations challenging to achieve through loop-based approaches. Therefore, exhibits a more significant performance advantage than TensorRT after applying KVCache, improving performance by up to 3.16×. §.§ Breakdown To illustrate the source of 's performance improvements, we break down the numbers of launched kernels of TensorRT, TVM, and . As shown in <ref>, we define computation operations like matrix multiplication and 2D-convolution in kernel libraries as computation kernels, while other operations are classified as memory kernels. Notably, since TVM regards all computation as tensor expressions and generates kernels for them, we only compare the total number of kernels to TVM. The results indicate that for BERT, ViT, and RedNet50, and TensorRT use the same number of computation kernels, but generates fewer memory kernels, particularly in RedNet50, where the number of memory kernels is reduced by 59% compared to TensorRT. On the other hand, TVM's total kernel numbers are higher than and TensorRT. This observation demonstrates 's stronger kernel fusion capability. For SAR-DRN, TVM has the same number of kernels as and TensorRT's computation kernel. This finding suggests that TVM can fuse all memory kernels in this model into computation kernels. However, is limited by the vendor-provided kernel library and cannot fuse them into computation kernels. Our future work includes using for computation-intensive kernels, which will give the ability to fuse computation-intensive and memory-intensive kernels. In contrast, for GPT-2, ShuffleNet, and EfficientNet, reduces the number of computation kernels by combining them with memory kernel optimization. Consequently, the total number of kernels is reduced. For example, in GPT-2, the number of kernels generated by is reduced by 73% and 71% compared to TensorRT and TVM, respectively. These findings demonstrate 's strong optimization and code generation capabilities. §.§ Searching Time We compare the search times of and TVM when optimizing various models. As depicted in <Ref>, the search time for is significantly shorter, at two orders of magnitude, compared to TVM. 
This outcome can be primarily attributed to two factors. First, our system directly uses the native library's implementation for computation-intensive operations, eliminating the need to search for it and thereby reducing search time. Second, we incorporate memory performance model pruning during the optimization process, which prevents the exploration of solutions involving substantial memory access and consequently decreases search overhead. Our evaluation results demonstrate that can generate code for models with less time and offer significant advantages to users who require rapid model deployment. § RELATED WORKS Many tensor compilers are capable of generating high-performance code for standalone deep learning operators, including TVM <cit.>, FlexTensor <cit.>, Ansor <cit.>, AMOS <cit.>, Roller <cit.>, TensorIR <cit.>, and Hidet <cit.>. However, their approaches do not model the data dependence between operators and cannot explore the data locality of nearby operators. Polyhedral approaches, including PPCG <cit.> and PLUTO <cit.>, model the data dependence at the element level, which conducts too large searching space to find efficient solutions. Other works accelerate DNN models at the graph level. TensorRT <cit.> and AITemplate <cit.> manually optimize frequently-used patterns. TASO <cit.>, Rammer <cit.>, and PET <cit.> can combine existing operators to generate more efficient code. These pre-defined patterns and operator sets cannot cover the diversified memory access pattern in existing deep learning models. Astitch <cit.> and DNNFusion <cit.> can generate memory-optimized code for unseen patterns, but their fusion is rule-based and overlooks many more efficient schedules. Some domain-specific languages like Triton <cit.>, FreeTensor <cit.>, and Graphene <cit.> allows user to express tensor programs in finer granularity than operator-based computation graph but requires much more expert knowledge and human effort to explicitly define how the program should be executed. § CONCLUSION In this paper, we proposed , a compiler that optimizes memory performance through by explicitly representing data movement patterns and fine-grained data dependence through instruction-level graph descriptions. By designing optimization strategies based on and hardware abstraction, we are able to achieve significant speedup results on different hardware. Our evaluation results demonstrated that achieves up to 1.97×, 2.93×, and 16.91× speedup (1.28×, 1.23×, and 2.31× on average), respectively, compared to current most performant frameworks. plain
http://arxiv.org/abs/2307.07609v1
20230714200912
Interpretable machine learning to understand the performance of semi local density functionals for materials thermochemistry
[ "Santosh Adhikari", "Christopher J. Bartel", "Christopher Sutton" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
APS/123-QED Department of Chemistry and Biochemistry, University of South Carolina, Columbia, SC-29208, USA Department of Chemical Engineering and Materials Science, University of Minnesota, Minneapolis, MN 55455, USA [email protected] Department of Chemistry and Biochemistry, University of South Carolina, Columbia, SC-29208, USA This study investigates the use of machine learning (ML) to correct the enthalpy of formation (ΔH_f) from two separate DFT functionals, PBE and SCAN, to the experimental ΔH_f across 1011 solid-state compounds. The ML model uses a set of 25 properties that characterize the electronic structure as calculated using PBE and SCAN. The ML model significantly decreases the error in PBE-calculated ΔH_f values from an mean absolute error (MAE) of 195 meV/atom to an MAE = 80 meV/atom when compared to the experiment. However, a similar reduction in the MAE was not observed for SCAN. Rather, the errors from the ML model (MAE = 76 meV/atom) and SCAN (MAE = 85 meV/atom) were observed to be comparable. To explain the substantial decrease in the error of PBE ΔH_f values and less so for SCAN, we employed partial dependence plots (PDPs) of an interpretable model, specifically generalized additive models (GAMs). The PDP+GAM approach allowed for an examination of the impact of all 25 features on the errors associated with the PBE and SCAN ΔH_f values. For PBE, the PDP+GAM analysis shows compounds with a high ionicity (I), i.e., I>0.22, have errors in ΔH_f that are twice as large as compounds having I < 0.22 (246 meV/atom compared to 113 meV/atom). Conversely, no analogous trend is observed for SCAN-calculated ΔH_fs, which explains why the ML model for PBE can more easily correct the systematic error in calculated ΔH_fs for PBE but not for SCAN. Subgroup discovery (SGD) is used to better understand the relationship between the electronic structure features and the errors in the PBE calculated ΔH_f. Out of these 25 features, SGD identifies the most reliable region or lowest error subgroup (108 meV/atom) to be comprised of 368 compounds (out of 1011 total) containing low charge transfer between atoms based on the selector I<=0.52, ζ_D<=0.43, and P_D<=0.02. Interestingly, although the literature suggests PBE is reliable for intermetallics but less so for oxides and halides, our analysis reveals intermetallics pose a challenge for PBE only when the charge transfer is significant (I>0.22). Meanwhile, oxides and halides may be described accurately by PBE for systems in which charge transfer is relatively low (I < 0.22). Interpretable machine learning to understand the performance of semi local density functionals for materials thermochemistry Christopher Sutton August 12, 2023 ============================================================================================================================ § INTRODUCTION Enthalpy of formation (ΔH_f) and enthalpy of decomposition (ΔH_d) are the two critical factors for understanding the stability of materials <cit.>. Because ΔH_d is calculated from the convex hull construction using ΔH_f for the set of relevant compositions, the development of effective computational approaches for materials design primarily relies on the accurate calculation of ΔH_f. Density functional theory (DFT) approximations based on generalized gradient approximation (GGA) are widely applied to calculate ΔH_f for solid-state materials. 
Although the accuracy of DFT-calculated ΔH_f can depend on the choice of functional, PBE <cit.> is the most common GGA and has been applied to several hundred thousand compounds in open computational databases such as the Materials Project (MP) <cit.>, the Open Quantum Materials Database (OQMD) <cit.>, and AFLOW <cit.>. However, the reliability of the calculated ΔH_f using PBE depends on the specific material class. For instance, PBE-calculated ΔH_f values are reasonably accurate for intermetallic alloys, such as Al-based and transition metal-based alloys <cit.>. On the other hand, PBE is unreliable in predicting ΔH_f for systems that combine metals and non-metals, such as oxides <cit.> and nitrides <cit.>. The primary source of these increased errors is related to the significant self-interaction error (SIE) inherent in semilocal functionals such as PBE <cit.>. Meta-GGA functionals such as SCAN <cit.> have been demonstrated to enhance the accuracy of ΔH_f predictions by as much as 50% relative to PBE <cit.>. However, issues remain in overestimating ΔH_f for compounds that are considered to be weakly bound (typically indicated by ΔH_f values between -1.0 eV/atom and -0.5 eV/atom) <cit.>, such as in the case of intermetallics. A recent advancement of SCAN, known as r^2SCAN <cit.>, has been shown to further improve the MAE of SCAN by approximately 15 meV/atom for a dataset of over 1000 solids <cit.>. However, the problem of overestimation for intermetallics persists, as pointed out in Ref. <cit.>. Alternatively, methods like random phase approximations (RPA) and hybrid functionals, such as HSE06 <cit.>, address some of the limitations of GGA functionals by, for example, incorporating nonlocality. However, these methods are computationally demanding and may not always lead to higher accuracies in the predicted ΔH_f values <cit.>. A cheaper approach to improving PBE-calculated ΔH_f (ΔH_f^PBE), which is widely used in databases such as MP, OQMD, and AFLOW, is to apply an on-site Hubbard U to d- or f-orbitals to reduce the effect of SIE <cit.>. However, determining the appropriate +U value remains an open question, and different strategies have been proposed, such as performing self-consistent calculations using linear response <cit.> and tuning +U to recover higher accuracies for specific properties <cit.>. Additionally, since +U corrections are commonly applied only to states with d or f characters (e.g., strongly correlated materials), SIE associated with states with s and p characters may still remain <cit.>. It is also possible to correct much of the error made by GGA (or GGA+U) functionals by fitting corrections to the elemental reference energies. This approach was first shown for oxides <cit.> and generalized to compounds spanning the periodic table <cit.>. Fitted corrections are most effective when errors in ΔH_f are systematic with respect to certain elements (GGAs) and were shown to yield less of an improvement for SCAN than for PBE <cit.>. Alternatively, machine learning (ML) has conventionally been applied to predict the DFT-calculated ΔH_f of solids at a large scale (> 1000 materials) <cit.>. However, these models inherit the underlying errors of the DFT functional (typically PBE). 
A more recent study <cit.> instead uses DFT-calculated ΔH_fs reported in the MP database (computed using PBE with elemental corrections) to predict experimental ΔH_fs (ΔH_f^expt) with 30% more accuracy compared to the MP database using compositional <cit.> features (i.e., just considering the chemical formulas of each compound). Although this study reports a substantial correction to the DFT ΔH_f errors, it does not explain the evolution or origin of the error. An understanding of ΔH_f errors due to the selection of a specific DFT functional necessitates using properties as features beyond compositional ones. These properties should be able to characterize the electron density distribution computed by the chosen functional. In this work, we use ML to both understand where the DFT-calculated ΔH_f values are reliable and correct them to align with the corresponding ΔH_f^expt values. To achieve this, we initially formulate a set of numerical electronic-structure-based features that are sensitive to the choice of functional, for example, charge transfer and the density of states. We subsequently analyze the influence of these features using an interpretable ML approach to explain the varying degrees of improvement in ΔH_f^DFT for two DFT functionals, PBE and SCAN. § DATASET AND FEATURES This work utilizes the dataset of ΔH_f^expts previously examined by Bartel et al. <cit.>, which consists of 714 binary, 270 ternary, and 28 quaternary compounds. The dataset spans over 62 elements across the periodic table and a diverse set of chemical families such as oxides, sulfides, nitrides, phosphides, halides, and intermetallics. To generate the electronic features, we use LOBSTER (version 4.1.0) <cit.> and Density Derived Electrostatic and Chemical Methods (DDEC6, version 3.5) <cit.>, which post-process the outputs from VASP calculations (see METHODS section) to generate a set of features (described below in more detail). The initial dataset by Bartel et al. <cit.> contained 1012 compounds; however, we were only able to generate all the features summarized in Table <ref> for 1011 compounds using PBE and 984 compounds using SCAN because of issues with post-processing via LOBSTER and DDEC6. §.§ Structure-based features: Table <ref> lists several structure-based features that are generated from the PBE and SCAN optimized geometries from Ref. <cit.>. These features include the total number of atoms per unit volume (N) and the average coordination number (CN), which is defined as the number of nearest neighbors for each atom within a cutoff radius of 8 Å. The CN is calculated using Brunner's algorithm <cit.> within pymatgen <cit.>. We have also included the product of CN and N, which is labeled as packing (η). §.§ LOBSTER-based features: LOBSTER <cit.> was used to calculate the integrated projected crystal orbital overlap population (IpCOOP) <cit.>, projected crystal orbital Hamilton population (IpCOHP) <cit.> and crystal orbital bond index (ICOBI) <cit.> for all atom pairs separated up to 5 Å. IpCOOP, IpCOHP, and ICOBI quantify the number of electrons, contribution to the band-structure energy, and bond index (degree of covalency, ionicity), respectively, associated with the given bond. As a result, a set of IpCOOP, IpCOHP, and ICOBI values are calculated for all unique pairs of atoms within the cutoff radius of 5 Å. For example, for the ternary compound ABC, each of the three quantities will be calculated for all A-B, B-C, and A-C atom pairs within 5 Å. 
To ensure that a consistent set of nine numbers is computed for each compound, the maximum, average, and standard deviation were calculated across the unique pairs of atoms for each compound (e.g., maximum, average, and standard deviation across the values generated for A-B, B-C, and A-C). If multiple IpCOOP, IpCOHP, and ICOBI values were produced for each unique pair of atoms in a given compound, potentially due to minor variations in the bond distance within the structure, we consolidated this information by taking the average of these values for each pair. The normalized contribution of s-, p-, d- and f- orbitals to the bands within an energy window of ±3 eV around the Fermi level for each compound was calculated using LOBSTER. §.§ DDEC6 features: Several features have been previously used to quantify the charge-transfer character in compounds such as the net dipole moment per unit cell (p) <cit.> and ionicity (I) <cit.> by evaluating the contribution of each atom in the system to the total dipole moment and the extent of ionic bonding, respectively. For illustration, I for a quaternary compound A_αB_βC_θD_γ is computed as: I = (αδ_A/s_A + βδ_B/s_B + θδ_C/s_C + γδ_D/s_D)/(α + β + θ + γ), where δ_A and s_A denote, respectively, the net charges assigned and summed bond orders for the element A in the compound <cit.>, and so on. Building off of this previous work, the dipole moment per unit volume (P) was calculated according to the equation: P = l/(2V)∑_i=1^N |Q_i|, where N is the total number of atoms, Q_i is the net charge acquired (positive) or lost (negative) by the i^th atom in the system, and l is the distance between the center of positive and negative charges. The average charge transfer (ζ) in the system is also calculated based on the equation: ζ=∑_i=1^N|Q_i|/(2N). In the expressions of both P and ζ, the factor of 2 in the denominator compensates for the double-counting of the charge transfer (charge acquired vs. charge lost), and for all samples both P and ζ are greater than zero. The net charges and dipole moment contribution of each atom computed with DDEC6 <cit.> were used to compute p <cit.>, I <cit.>, ζ_D and P_D, where the subscript, D, indicates computation using the DDEC6 methods. Separate from DDEC6, we also computed the net charges on each atom using Bader analysis <cit.> and calculated ζ_B and P_B starting from these charges (hence, the subscript, B, for Bader). Both features are incorporated as the calculation of atomic charges differs when using DDEC6 in comparison to using Bader atomic charges, which are both widely used. Bader charges are calculated by dividing the electron density at zero flux surfaces, whereas DDEC6-based charges are calculated as a functional of the electron density. More importantly, the computed net atomic charges in both cases are derived from the total electron density, which naturally incorporates the effects of SIE. §.§ Atomic features: In addition to these electronic features, the compositionally averaged modified Pettifor index <cit.> (Z) was also used to distinguish between compounds based on the chemical elements. Modified Pettifor's index is a unique value assigned to each element in the periodic table that gives the measure of the extent of its replaceability in the crystal structure. § METHODS DFT calculations using the PBE <cit.> and SCAN <cit.> functionals were performed using the projector augmented wave (PAW) formalism as implemented in the Vienna ab initio simulation package (VASP) code version 5.4.4. 
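As a rough editorial illustration of how these charge-transfer features and the pairwise aggregation could be assembled once the per-atom charges, summed bond orders, and per-pair LOBSTER values have been extracted, the following minimal Python sketch transcribes the expressions given above; all function and variable names are illustrative and are not part of the original workflow.

import numpy as np

def ionicity(stoich, delta, s):
    # I = (alpha*delta_A/s_A + beta*delta_B/s_B + ...)/(alpha + beta + ...),
    # transcribing the expression for I above; one entry per element in the compound.
    stoich, delta, s = map(np.asarray, (stoich, delta, s))
    return float(np.sum(stoich * delta / s) / np.sum(stoich))

def avg_charge_transfer(Q):
    # zeta = sum_i |Q_i| / (2N), with Q_i the net atomic charges in the cell.
    Q = np.asarray(Q)
    return float(np.abs(Q).sum() / (2.0 * len(Q)))

def dipole_per_volume(Q, l, V):
    # P = l/(2V) * sum_i |Q_i|, with l the distance between the centers of
    # positive and negative charge and V the cell volume.
    Q = np.asarray(Q)
    return float(l * np.abs(Q).sum() / (2.0 * V))

def aggregate_pairs(pair_values):
    # Collapse per-pair LOBSTER descriptors (e.g. IpCOHP for A-B, B-C, A-C)
    # into the fixed-length maximum/average/standard-deviation features.
    v = np.asarray(list(pair_values.values()), dtype=float)
    return {"max": v.max(), "mean": v.mean(), "std": v.std()}

For example, aggregate_pairs({"A-B": -2.1, "B-C": -1.4, "A-C": -0.3}) returns the three numbers that enter the feature vector for a hypothetical ternary compound.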
The structure files required for the calculations were obtained from Ref. <cit.>. We used the PAW pseudopotentials as recommended in the VASP manual, a plane-wave cutoff of 520 eV, an energy convergence criterion of 10^-6 eV, a smearing parameter (k_BT) of 0.01 eV (following the first-order Methfessel-Paxton scheme), and a gamma-centered Monkhorst-Pack k-point grid with 20|b_i| discretizations along each reciprocal lattice vector, b_i, for the Brillouin zone sampling in all the calculations. The ML task here adopted the so-called “Δ” learning scheme <cit.>, in which the electronic structure properties from DFT calculations (Table <ref>) are used as model inputs to predict the difference between ΔH_f^expt and ΔH_f^DFT, where the DFT functional is either PBE or SCAN. This difference is then added back to ΔH_f^DFT to predict ΔH_f^expt (with Δ-learned predictions denoted ΔH_f^ML). Linear ridge regression (Linear), random forest regressor (RFR), and kernel ridge regression (KRR) using the Laplacian kernel (KRR+Lap) and the Gaussian rbf kernel (KRR+rbf) were performed with scikit-learn <cit.> version 1.1.1. All hyperparameters were tuned via grid search using 5-fold cross-validation. The GAM models were trained using the pyGAM package <cit.>. A linear response function (LinearGAM) was used in training the Δ-learning model. We note that to evaluate the variability of each of the ML models, 51 distinct models were trained using 51 unique random seeds, leading to different random 80 % / 20 % training/test splits for each model (see Table <ref> for the mean accuracy and standard deviation of the test error). ΔH_f^DFT/MLs were utilized to calculate the enthalpy of decomposition (ΔH_d^DFT/ML) for stability analysis using the approach described in Ref. <cit.>. For consistency of the notation, we use δΔH_f^DFT and δΔH_f^ML to represent the difference between ΔH_f for the DFT functional and ML model, respectively, with experiment (i.e., δΔH_f^DFT/ML=ΔH_f^expt - ΔH_f^DFT/ML). To evaluate model performance, the mean error (ME) and mean absolute error (MAE) of ΔH_f^DFT/ML are computed relative to ΔH_f^expt. The ME (MAE) is computed by averaging δΔH_f^DFT (|δΔH_f^DFT|). Figures S1 - S25 display the distribution of δΔH_f^DFTs versus all 25 features listed in Table <ref>. Finally, for our analysis of the errors of each functional relative to experiment, the partial dependence plots (PDPs) generated from the gamma response function (GammaGAM) were used to predict |δΔH_f^DFT|. Both LinearGAM and GammaGAM are referred to as `GAM' in the results section. To specifically obtain regions where a given set of ΔH_f^DFT has a decreased error relative to experiment (i.e., reliable regions), we used subgroup discovery (SGD) as described in Ref. <cit.>. The SGD target variable for the identification of the reliable regions was the absolute error |δΔH_f^DFT|. All computations were performed using the SGD implementation in realKD 0.7.2. § RESULTS AND DISCUSSION §.§ Δ-learning ΔH_fs Table <ref> summarizes the average ΔH_f errors calculated from several ML methods. The highest performing Δ-learned ML model for PBE was RFR, which has an MAE of 80 meV/atom for the 20% test set, substantially smaller than PBE (MAE = 195 meV/atom) for the same 20% test set. Fig. <ref> shows the distribution of δΔH_f^MLs, which is equally distributed above and below zero, indicating that the ML model has a mean error near zero and hence no systematic bias, in stark contrast to PBE. 
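As a rough illustration of the Δ-learning workflow described in the METHODS section, the following sketch trains a random forest on the 25 features to predict the difference between experiment and DFT and adds the predicted correction back to ΔH_f^DFT; the hyperparameter grid and all names are illustrative and are not the settings used in this work.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

def delta_learn_rfr(X, dHf_dft, dHf_expt, seed=0):
    # Delta-learning target: the error of the functional with respect to experiment.
    target = dHf_expt - dHf_dft
    X_tr, X_te, t_tr, t_te, dft_tr, dft_te = train_test_split(
        X, target, dHf_dft, test_size=0.2, random_state=seed)
    # Illustrative hyperparameter grid, tuned by 5-fold cross-validation.
    search = GridSearchCV(RandomForestRegressor(random_state=seed),
                          {"n_estimators": [100, 300], "max_depth": [None, 10, 30]},
                          cv=5, scoring="neg_mean_absolute_error")
    search.fit(X_tr, t_tr)
    # Add the predicted correction back to the DFT value: dHf_ML = dHf_DFT + delta.
    dHf_ml = dft_te + search.predict(X_te)
    dHf_expt_te = dft_te + t_te
    mae_ml = np.mean(np.abs(dHf_expt_te - dHf_ml))
    mae_dft = np.mean(np.abs(dHf_expt_te - dft_te))
    return search.best_estimator_, mae_ml, mae_dft

Repeating such a split over many random seeds, as done in the METHODS section, gives the mean and spread of the test-set MAE.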
The significant reduction in δΔH_f^ML is attributed to the compounds with a large δΔH_f^PBE (Fig. <ref>), which trends with the number of elements in each compound. More specifically, the set of compounds for which |δΔH_f^PBE| > 200 meV includes 70% of the ternary compounds (184 out of 270) and all the 28 quaternary compounds. The gain in accuracy with ΔH_f^ML compared with ΔH_f^PBE is a factor of 7.5 (43 meV/atom vs. 322 meV/atom) for quaternary and 3.7 (69 meV/atom vs. 252 meV/atom) for ternary compounds, both of which are much higher than the factor of 2 for binary compounds (86 meV/atom vs. 168 meV/atom). Elemental corrections to PBE, so-called PBE+, as fit by Bartel et al. <cit.> for the same set of experimental compounds have an MAE = 103 meV/atom, which is approximately 50% lower than PBE (MAE = 195 meV/atom). Moreover, the MAEs in ΔH_f^PBE+ are nearly uniform across binary (103 meV/atom), ternary (106 meV/atom), and quaternary (82 meV/atom) compounds, which indicates that the elemental reference energy corrections address most of the systematic errors present with PBE. However, the errors are still higher with PBE+ than with the Δ-learned ML models, as can be observed from Fig. <ref>(a), which shows a parity plot of δΔH_f^ML and δΔH_f^PBE/PBE+, indicating that the ML model outperforms both PBE and PBE+ for the prediction of materials' ΔH_fs. The overall reduced errors in ΔH_f^ML suggest that the feature set is contributing additional, material-specific knowledge, which aids in increasing the model's accuracy. Moreover, as mentioned in the Introduction, databases such as OQMD, MP, and AFLOW often use +U corrections for specific elements to mitigate the effect of the SIE; these are generally the cases where δΔH_f^PBE is larger. Analysis of the MAEs of PBE, PBE+, and ML (Table <ref>) shows that PBE is most problematic for oxides of a few transition metals and actinides where OQMD would use +U (365 meV/atom), followed by oxides and fluorides of transition metals (348 meV/atom) where MP would use +U. The errors are much higher for ternary and quaternary compounds (432 meV/atom (OQMD), and 418 meV/atom (MP)) compared to binary compounds. Across each category in Table <ref>, ML models show consistently lower MAE of ∼ 20 meV/atom compared with PBE+. For SCAN, ΔH_f^ML has a test-set MAE of 76 meV/atom, which is comparable to the MAEs of SCAN and of SCAN fitted with elemental corrections (SCAN+), 85 and 65 meV/atom, respectively. The mean of δΔH_f^SCAN is equal to zero, indicating that it has no systematic bias. In contrast to what was observed for ΔH_f^PBE, ΔH_f^SCANs are relatively more accurate for the ternary (MAE = 59 meV/atom) and quaternary compounds (MAE = 45 meV/atom), but less so for the binary compounds (MAE = 99 meV/atom), see Fig. <ref>. This supports the findings in the literature that SCAN is not as accurate as PBE for intermetallics (which are primarily binary compounds in this dataset) <cit.>. In order to gain an understanding of the ability of ML to substantially increase the accuracy of the predictions of ΔH_f^PBE while providing little improvement for ΔH_f^SCAN, we have trained an interpretable GAM using the same Δ-learned procedure previously described for the RFR model. Despite the fact that the Δ-learned ML model within the GAM framework exhibits a slightly higher error for ΔH_fs compared to the RFR model (91 vs. 80 meV/atom for PBE; 86 meV/atom vs. 
76 meV/atom for SCAN, respectively; see Table <ref>), the advantage of GAMs is their simple, additive structure, which facilitates a more straightforward analysis of the feature set through partial dependence plots (PDP). The interpretability of GAMs is preferred over RFR, which does not consider the effect of each feature separately but instead by design incorporates interactions between features. As a result, the PDPs generated from RFR-based models may potentially show unreliable trends, and are therefore avoided in this work. Using the GAM model trained to the set of |δΔH_f^PBE| and |δΔH_f^SCAN| values, PDPs were generated (see Figs S26-S50) for all 25 features inputted into our model (see Table <ref>). Out of those 25 different input features, PDP analysis shows that the features ζ_B (in e^-/atom), Z and I have the highest impact on |δΔH_f^PBE/SCAN|. Although the ζ_B and Z PDPs displayed qualitatively similar trends for |δΔH_f^PBE| and |δΔH_f^SCAN| values, the PDP of I indicates a stark difference. Thus, in this text, we focus our discussion on the PDP for I (see Fig. <ref>). The PDP of I shows a positive impact for I > 0.22 on |δΔH_f^PBE|, which indicates a higher error in the calculation of ΔH_f^PBE for compounds with more ionic character (see Fig. <ref>). There are 636 compounds that have I > 0.22, and nearly all of the compounds that contain either alkali or alkaline earth elements (301 out of 314 compounds in the overall dataset) are included in this subset. These 301 compounds have larger δΔH_f^PBE values (MAE of 261 meV/atom), which is comparable to the MAE (246 meV/atom) of all 636 compounds. For the other 13 of the 314 alkali- or alkaline-earth-containing compounds, which have I < 0.22, the MAE of δΔH_f^PBE is 71 meV/atom. Although the current literature indicates compounds containing diatomic elements (O_2, Cl_2, N_2, F_2, H_2) have a very high error (δΔH_f^PBE), for the 41 (out of a total of 528) compounds within I < 0.22, the MAE is relatively low (116 meV/atom) despite the presence of a few transition metal chlorides (NiCl_2 (390 meV/atom), TiCl_2 (360 meV/atom), FeCl_3 (300 meV/atom), FeCl_2 (350 meV/atom)) in this subset. In comparison, the 487/528 compounds that contain diatomic elements and are in the range I > 0.22 have an MAE=250 meV/atom. A similar trend is observed for binary compounds with metallic bonding (intermetallics), such as CaMg_2, AlNi, AlTi, where the 13/25 with I < 0.22 have an MAE of 61 meV/atom. In comparison, the remaining 12 intermetallics with an I > 0.22, which include Ba_2Pb, Ca_2Sn, FeTi, etc., have a significantly higher MAE (233 meV/atom). These results indicate that the evolution of ΔH_f^PBE errors and hence the corrections required for such compounds can vary based on the bonding environments. This could potentially be influenced by the number of elements (e.g., binary, ternary, or quaternary) present in the compound, which could lead to more variation in the bonding of some material. In contrast, for |δΔH_f^SCAN|, a highly uncertain trend with I is observed, as indicated by the flat black line in Figure <ref>. The explicit values of I are also provided as subfigures in the bottom panels, confirming that the qualitative trend with I changes between PBE and SCAN. This observation is in agreement with the previous literature that suggests SCAN improves thermochemical calculations compared with PBE by better treatment of diversely bonded systems <cit.>. 
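For reference, a partial dependence curve of the kind analyzed above can be produced with pyGAM in only a few lines; the sketch below is illustrative (function and variable names are assumptions, not taken from this work) and uses the gamma response, which is appropriate for the strictly positive absolute errors.

import numpy as np
from pygam import GammaGAM

def pdp_for_feature(X, abs_err, feature_index):
    # Small floor avoids zeros, which the log-link Gamma model cannot handle.
    y = np.clip(abs_err, 1e-3, None)
    gam = GammaGAM().gridsearch(X, y)
    XX = gam.generate_X_grid(term=feature_index)
    pdep, confi = gam.partial_dependence(term=feature_index, X=XX, width=0.95)
    # Grid of feature values, partial dependence, and a 95% confidence band.
    return XX[:, feature_index], pdep, confi

A flat curve with a wide confidence band, as observed here for SCAN, signals that the feature carries little systematic information about the error.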
An examination of δΔH_f^ML as a function of I, compound class (binary, ternary, quaternary), and the presence of diatomic elements within the compound indicates that it is exactly these compounds (e.g., compounds with diatomic elements and I>0.22) for which a considerable decrease in δΔH_f^ML compared to δΔH_f^PBE is observed (see Fig. <ref>(a)). This observation indicates there exists a specific domain formed by conditions on various properties where δΔH_f^PBE is relatively low, thus constituting a reliable region. To identify such a reliable region, we employ the approach introduced in Ref. <cit.> and identify a set of 368 compounds (out of 1011) with an MAE = 107 meV/atom for ΔH_f^PBE, which are selected by this combination of properties: I<=0.52, ζ_D<=0.43, and P_D<=0.02 (Fig. <ref>(b)). In comparison, the remaining 643 compounds have an MAE = 247 meV/atom. The constraint I<=0.52 identified by SGD is in agreement with the analysis from the PDPs (I<0.22) discussed above, but expands the range of compounds in this reliable region. Finally, because a material competes with all possible phases in its compositional space (enthalpy of decomposition; ΔH_d) for stability rather than just the elemental phases (ΔH_f) <cit.>, we assess how the lower errors in δΔH_f^ML translate to ΔH_ds (i.e., ΔH_d^ML) using the leave-one-chemical-space-out scheme and compare them with the ΔH_ds obtained using ΔH_f^expt (ΔH_d^expt) and ΔH_f^DFT (ΔH_d^DFT). As analyzed in Ref. <cit.>, the majority of compounds in the MP database <cit.> compete for stability with compound phases (Type 2) or a mixture of compounds and elemental phases (Type 3) rather than elemental phases only (Type 1). Although ΔH_d^DFTs (Type 2 and Type 3) are typically 1-2 orders of magnitude smaller than ΔH_f^DFTs, previous literature suggests that ML models trained to predict accurate ΔH_f^DFT values perform poorly for ΔH_d^DFTs <cit.>. This is because errors made by ML models for ΔH_f^DFT are not as systematic as the DFT errors with respect to the chemical composition <cit.>. In contrast, our results (Table S1) show that the ΔH_d^ML error is comparable to the ΔH_d^PBE error for both Type 2 and Type 3 compounds, suggesting that ML corrections to ΔH_f^PBE mostly cancel out for stability prediction (ΔH_ds). We observe similar findings for ΔH_d^ML in the case of SCAN (Table S2).
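Once such a selector has been identified, its effect can be checked directly by splitting the dataset on the three conditions and comparing the MAEs inside and outside the subgroup; a small pandas sketch with illustrative column names is given below.

import pandas as pd

def subgroup_mae(df):
    # Columns are assumptions: "I", "zeta_D", "P_D" from the DDEC6 analysis and
    # "abs_err_PBE" = |dHf_expt - dHf_PBE| in meV/atom, one row per compound.
    inside = (df["I"] <= 0.52) & (df["zeta_D"] <= 0.43) & (df["P_D"] <= 0.02)
    return {"n_inside": int(inside.sum()),
            "mae_inside": df.loc[inside, "abs_err_PBE"].mean(),
            "mae_outside": df.loc[~inside, "abs_err_PBE"].mean()}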
More specifically, the interpretable PDP analysis identified the ionicity (I), which quantifies the ratio of the average charge transferred to the summed bond orders, as a strong indicator for high ΔH_f^PBE errors. Based on the PDP analysis, we explicitly showed that the MAE for compounds with I < 0.22 (113 meV/atom) is a factor of 2 smaller than that for compounds with I > 0.22 (246 meV/atom), indicating a different evolution of errors across the two regions. This trend from the PDPs is further supported by the application of subgroup discovery (SGD) to the PBE errors using the same set of 25 features to identify the reliable regions (i.e., lowest errors). Out of these 25 features, SGD selects the lowest-error subgroup (MAE = 108 meV/atom), containing about one-third of the compounds (368 out of 1011 total) and predominantly comprised of compounds with low charge transfer between atoms, based on the selector I<=0.52, ζ_D<=0.43, and P_D<=0.02. Here ζ and P quantify the average charge transfer and dipole moment per volume from DDEC6 analysis. Interestingly, although the literature suggests PBE is reliable for intermetallics and less so for oxides and halides, our analysis reveals that intermetallics pose a challenge for PBE only when the charge transfer is significant (I>0.22) and oxides and halides may be more amenable to PBE if the charge transfer is relatively low (I<0.22), for example, compounds such as MoCl_5 (98 meV/atom), PtCl_3 (2 meV/atom), Ag_2O (29 meV/atom), PdO (116 meV/atom), etc. Overall, our work leads to a better understanding of the strengths and limitations of the semilocal density functional predictions of materials thermochemistry and can potentially guide the future development of more accurate methods. We acknowledge support from departmental start-up funds at the University of South Carolina and the NSF-funded MadeinSC (Award number OIA-1655740).
http://arxiv.org/abs/2307.03947v1
20230708101048
Hyperelliptic Gorenstein curves and logarithmic differentials
[ "Luca Battistella", "Sebastian Bozlee" ]
math.AG
[ "math.AG", "math.GT", "14H20 (Primary) 14H10 (Secondary)" ]
http://arxiv.org/abs/2307.07318v1
20230714125122
A Unified Distributed Method for Constrained Networked Optimization via Saddle-Point Dynamics
[ "Yi Huang", "Ziyang Meng", "Jian Sun", "Wei Ren" ]
math.OC
[ "math.OC" ]
A Unified Distributed Method for Constrained Networked Optimization via Saddle-Point Dynamics Yi Huang, Ziyang Meng, Senior Member, IEEE, Jian Sun, Senior Member, IEEE, and Wei Ren, Fellow, IEEE This work has been supported in part by the National Natural Science Foundation of China under Grants 62103223, 61925303, 62088101, 61833009 and U19B2029. Yi Huang and Jian Sun are with the School of Automation, Beijing Institute of Technology, Beijing 100081, China (e-mail: [email protected], e-mail: [email protected]). Ziyang Meng is with the Department of Precision Instrument, Tsinghua University, Beijing 100084, China (e-mail: [email protected]). Wei Ren is with the Department of Electrical and Computer Engineering, University of California, Riverside, CA 92521, USA (e-mail: [email protected]). August 12, 2023 ======================================================================================================================== This paper develops a unified distributed method for solving two classes of constrained networked optimization problems, i.e., the optimal consensus problem and the resource allocation problem with non-identical set constraints. We first transform these two constrained networked optimization problems into a unified saddle-point problem framework with set constraints. Subsequently, two projection-based primal-dual algorithms via the Optimistic Gradient Descent Ascent (OGDA) method and the Extra-gradient (EG) method are developed for solving constrained saddle-point problems. It is shown that the developed algorithms achieve exact convergence to a saddle point with an ergodic convergence rate O(1/k) for general convex-concave functions. Based on the proposed primal-dual algorithms via saddle-point dynamics, we develop unified distributed algorithm design and convergence analysis for these two networked optimization problems. Finally, two numerical examples are presented to demonstrate the theoretical results. Distributed optimization, Constrained saddle-point problem, Optimistic Gradient Descent Ascent (OGDA) method, Extra-Gradient (EG) method § INTRODUCTION The problem of distributed optimization has attracted considerable attention in recent decades due to its wide applications in machine learning, power systems, multi-robot localization, sensor networks, and resource allocation <cit.>. In general, most distributed optimization problems in the existing literature can be divided into two categories: optimal consensus problem and optimal resource allocation problem <cit.>. 
The main difference of these two problems is that in the first problem, each agent has its own objective function with respect to a common decision variable, while in the second one, all the agents own independent local objective functions and decision variables but these decision variables are coupled in a global equality constraint. To solve the optimal consensus problem, a common approach is to introduce a consensus constraint such that the coupled objective functions can be separated in terms of the local decision variables. In such a case, the optimal consensus problem and optimal resource allocation problem can both be regarded as a class of optimization problems with a linear equality constraint. For these two classes of optimization problems, many discrete-time and continuous-time algorithms are developed in <cit.>. Note that most existing distributed optimization algorithms to solve the optimal consensus problem and resource allocation problem are designed separately. Fewer results provide a unified framework for analysis and design of these two optimization problems. As mentioned above, these two optimization problems can both be viewed as constrained optimization problems with a linear equality constraint. For the constrained optimization problem, we can transform it into a class of saddle-point problems in terms of the corresponding Lagrangian functions <cit.>. This fact illustrates that the above-mentioned two optimization problems can both be transformed into saddle-point problems. Therefore, when the saddle points of the corresponding Lagrangian functions are obtained, these two optimization problems can be solved. It is well known that saddle-point problems arise in many areas such as constrained optimization <cit.>, robust control <cit.>, zero-sum games <cit.> and generative adversarial networks (GANs) <cit.>. Some typical first-order optimization methods (e.g., Gradient Descent Ascent (GDA), Optimistic Gradient Descent Ascent (OGDA) and Extra-gradient (EG) methods) have been proposed to solve the saddle-point problems. This paper focuses on OGDA and EG methods, whose ideas were first proposed in <cit.> and <cit.>, and have attracted considerable attention. The authors of <cit.> showed the linear convergence rates of OGDA and EG methods for a special case, i.e., f(x,y)=x^TAy, where A is square and full rank. In <cit.>, the authors proposed a variant of the EG method with linear convergence when f(x,y) is strongly convex-strongly concave, and applied it to GAN training. The authors of <cit.> showed OGDA and EG methods as approximate variants of the proximal point method, and provided their linear convergence for strongly convex-strongly concave functions. For general convex-concave functions, the authors of <cit.> provided a unified convergence analysis of OGDA and EG methods and proved that these two methods can both achieve an ergodic convergence rate of O(1/k). Nevertheless, the last iterate of the methods in <cit.> is shown to only converge to a bounded neighborhood of a saddle point instead of achieving exact convergence to a saddle point. In addition, we note that most results on OGDA and EG methods mentioned above only consider the saddle-point problems in the absence of constraints. Actually, the saddle-point problems with set constraints are very common in practical applications. 
Inspired by the above discussions, this paper tries to establish the relationship between two classes of constrained networked optimization problems and general constrained saddle-point problems, and then solve them under a unified saddle-point dynamics framework. Compared with the related results, the main contributions of this paper are three-fold. c1) We develop unified distributed algorithm design and convergence analysis via saddle-point dynamics to solve two classes of constrained networked optimization problems, i.e., the optimal consensus problem and the resource allocation problem with non-identical set constraints. c2) Two projection-based primal-dual algorithms via OGDA and EG methods are developed for the constrained saddle-point problem with general convex-concave functions. Unlike the results of <cit.> that are only shown to converge into a bounded neighborhood of a saddle point, the developed algorithms achieve exact convergence to a saddle point with an ergodic convergence rate O(1/k). c3) The developed distributed algorithms use constant step-sizes and exhibit better convergence performance than the algorithms in <cit.> with diminishing step-sizes. In contrast with the constant step-size algorithms in <cit.> and <cit.>, the developed algorithms are easier to implement since no sub-optimization problem needs to be solved at each iteration. The rest of this paper is organized as follows. Section II formulates the considered problem. Section III proposes two primal-dual algorithms via OGDA and EG methods. Section IV develops unified distributed algorithms to solve two networked optimization problems. Section V gives simulation examples and Section VI concludes this paper. § PRELIMINARIES AND FORMULATION Notation: Let ℝ be the set of real numbers and ℕ be the set of natural numbers. I_n is the n× n identity matrix and 1_n is the n× 1 ones vector. ‖·‖ denotes the Euclidean norm. Let ℐ_N={1,2,…, N} and col(x_i)^N_i=1 be a column stack of the vector x_i, i∈ℐ_N. diag(W_i)_i=1^N denotes a diagonal block matrix and W_i is placed in the ith diagonal block, and ⊗ represents the Kronecker product. §.§ Problem Formulation In this section, two classes of constrained networked optimization problems are formulated. Consider a network graph 𝒢 of N agents. The distributed optimal consensus problem with non-identical set constraints is described by <cit.> min_x ∈Ω f(x)=∑^N_i=1 f_i(x_i),  s.t. (L⊗ I_m)x=0, where x_i∈ℝ^m, x=col(x_i)^N_i=1∈ℝ^Nm, Ω=∏^N_i=1Ω_i is the Cartesian product, and L∈ℝ^N× N is the Laplacian matrix of graph 𝒢. In this problem, each agent privately has access only to its local objective function f_i(x_i) and set constraint Ω_i, i∈ℐ_N. Provided that graph 𝒢 is connected, (L⊗ I_m)x=0 implies that the consensus x_i=x_j is satisfied for ∀ i,j∈ℐ_N. Next, the distributed resource allocation problem via a multi-agent network is formulated as <cit.> min_y∈Ω h(y)=∑^N_i=1h_i(y_i),  s.t. ∑^N_i=1W_iy_i=∑^N_i=1d_i, where h_i(y_i): ℝ^q_i→ℝ is the local objective function of agent i, y_i∈ℝ^q_i is its local decision variable, y=col(y_i)^N_i=1∈ℝ^q with q=∑^N_i=1q_i, and Ω=∏^N_i=1Ω_i is the Cartesian product. ∑^N_i=1W_iy_i=∑^N_i=1d_i is the coupled equality constraint, in which W_i∈ℝ^m× q_i and d_i∈ℝ^m are the local data only known by agent i. The following standard assumptions are imposed. (i) The graph 𝒢 is undirected and connected. (ii) Ω_i, i∈ℐ_N, is closed and convex. 
The local objective functions f_i(x_i) and h_i(y_i) are differentiable and convex on Ω_i, and their gradients ∇ f_i(x_i) and ∇ h_i(y_i) are Lipschitz continuous for ∀ i∈ℐ_N. (iii) There exists at least one solution to the problems (<ref>) and (<ref>). The problems (<ref>) and (<ref>) capture a wide class of networked optimization problems in practical applications. For instance, the optimal rendezvous, cooperative localization and machine learning in <cit.> can be described by problem (<ref>). The resource scheduling, economic dispatch and flow control in smart grids <cit.> can be formulated by problem (<ref>). We establish the relationships between the above two classes of constrained networked optimization problems and constrained saddle-point problems for general convex-concave functions. For the problem (<ref>), its augmented Lagrangian function is L_1(x,v)=∑^N_i=1 f_i(x_i)+v^T(L⊗ I_m)x+1/2x^T(L⊗ I_m)x, where v=col(v_i)^N_i=1∈ℝ^Nm is the dual variable <cit.>. Then, the optimization problem (<ref>) can be transformed into the following constrained saddle-point problem min_x ∈Ωmax_v∈ℝ^Nm L_1(x,v). Note that L_1(x,v) is a convex-concave function. We have that the problem (<ref>) is reformulated as a constrained saddle-point problem for general convex-concave functions. For the problem (<ref>), its modified Lagrangian function can be derived as L_2(y,z,λ)=∑^N_i=1h_i(y_i)+λ^T(Wy-d-(L⊗ I_m)z)-1/2λ^T(L⊗ I_m)λ, where W=diag(W_i)^N_i=1∈ℝ^Nm× q, d=col(d_i)^N_i=1∈ℝ^Nm, λ=col(λ_i)^N_i=1∈ℝ^Nm is the dual variable, and z=col(z_i)^N_i=1∈ℝ^Nm is an auxiliary variable (see eq. (12) in <cit.>). The problem (<ref>) is transformed into the following constrained saddle-point problem <cit.> min_y ∈Ω, z∈ℝ^Nmmax_λ∈ℝ^Nm L_2(y,z,λ). Similarly, we have that L_2(y,z,λ) is a convex-concave function. This implies that problem (<ref>) can also be transformed into a constrained saddle-point problem for general convex-concave functions. §.§ Unified Problem Framework To solve the above two classes of constrained networked optimization problems via a unified framework, we consider the following general constrained saddle-point problem min_x ∈𝒳max_y∈𝒴 f(x,y), where 𝒳⊆ℝ^n and 𝒴⊆ℝ^m are both closed and convex, and f: 𝒳×𝒴→ℝ is a convex-concave objective function, i.e., for any y∈𝒴, f(x,y) is a convex function with respect to x∈𝒳, and for any x ∈𝒳, f(x,y) is a concave function with respect to y∈𝒴. We focus on finding a saddle point (x^*,y^*)∈𝒳×𝒴 of problem (<ref>) that satisfies f(x^*,y)≤ f(x^*,y^*)≤ f(x,y^*), ∀ (x,y)∈𝒳×𝒴. According to the optimality condition of <cit.>, the pair (x^*,y^*) is a saddle point of (<ref>) if the following variational inequality holds for ∀ (x,y)∈𝒳×𝒴. [ ∇_x f(x^*,y^*); -∇_y f(x^*,y^*) ]^T[ x-x^*; y-y^* ]≥ 0. The function f(x,y) is continuously differentiable for any x∈𝒳 and y∈𝒴. The gradient ∇_x f(x,y) is l_xx-Lipschitz in x, and l_xy-Lipschitz in y. The gradient ∇_y f(x,y) is l_yx-Lipschitz in x, and l_yy-Lipschitz in y. If f(x,y)=x^TBy is a bilinear function with constant matrix B, we obtain that the Lipschitz constants l_xx and l_yy are zero. The solution set of problem (<ref>) is nonempty. This paper aims to develop a unified distributed method for solving two classes of networked optimization problems. To achieve this goal, we first propose two primal-dual algorithms for the constrained saddle-point problem (<ref>), and then develop unified distributed algorithms via saddle-point dynamics for constrained networked optimization problems (<ref>) and (<ref>). 
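To make the saddle-point reformulation of the consensus problem concrete, the following minimal Python sketch evaluates the two gradients of L_1 that drive the saddle-point dynamics developed in the next sections; the quadratic local objectives and the path graph are illustrative choices and are not taken from the paper.

import numpy as np

# Toy setup: N agents on a path graph with scalar decisions (m = 1) and the
# illustrative local objectives f_i(x_i) = 0.5*(x_i - a_i)^2.
N = 4
a = np.array([1.0, 2.0, 3.0, 4.0])
Adj = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # path-graph adjacency
L = np.diag(Adj.sum(axis=1)) - Adj                               # graph Laplacian

def grad_x_L1(x, v):
    # Gradient of L_1(x, v) = sum_i f_i(x_i) + v^T L x + 0.5 x^T L x with respect to x;
    # for a symmetric L this equals grad f(x) + L (x + v).
    return (x - a) + L @ (x + v)

def grad_v_L1(x, v):
    # Gradient of L_1(x, v) with respect to the dual variable v.
    return L @ x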
§ SADDLE-POINT DYNAMICS DESIGN In this section, we first develop two projection-based primal-dual algorithms by using OGDA and EG methods to solve the constrained saddle-point problem (<ref>). Next, the convergence analysis of these two algorithms is provided. §.§ Primal-dual algorithm via OGDA method We develop a projection-based primal-dual algorithm via OGDA to solve the constrained saddle-point problem (<ref>) x_k+1 =𝒫_𝒳(x_k-α∇_x f(x_k,y_k) -α(∇_x f(x_k,y_k)-∇_x f(x_k-1,y_k-1) )), y_k+1 =𝒫_𝒴(y_k+α∇_y f(x_k,y_k) +α(∇_y f(x_k,y_k)-∇_y f(x_k-1,y_k-1))), where 𝒫_𝒳(·) and 𝒫_𝒴(·) represent the projection operations on 𝒳 and 𝒴, respectively, and α is the constant step-size that will be specified later. Let z=col(x,y)∈𝒳×𝒴⊂ℝ^m+n and define the operator F: 𝒳×𝒴→ℝ^m+n as F(z)=col(∇_x f(x,y), -∇_y f(x,y)). Eq. (<ref>) can be arranged as z_k+1=𝒫_Λ(z_k-α F(z_k)-α(F(z_k)-F(z_k-1))), where Λ=𝒳×𝒴. In contrast to the GDA algorithm that is formulated as z_k+1=𝒫_Λ(z_k-α F(z_k)) in <cit.>, the main difference of the proposed OGDA-based algorithm (<ref>) is the added gradient correction term -α (F(z_k)-F(z_k-1)), which includes the gradient information of f(x,y) at the current iteration and previous iteration. The advantage of adding the gradient correction term is to guarantee exact convergence to a saddle point for general convex-concave functions. As mentioned in <cit.>, the GDA algorithm of <cit.> requires a strongly convex-strongly concave objective function to ensure exact convergence and may not converge to a saddle point for general convex-concave functions. This result is also illustrated in Example 1 given in the following simulation section. §.§ Primal-dual algorithm via EG method We also develop a projection-based primal-dual algorithm via the EG method to solve the problem (<ref>). Firstly, we compute the mid-point iteration (x_k+1/2,y_k+1/2), i.e., x_k+1/2=𝒫_𝒳 (x_k-α∇_x f(x_k,y_k)), y_k+1/2=𝒫_𝒴 (y_k+α∇_y f(x_k,y_k)), where α is the constant step-size that will be specified later. By using the mid-point (x_k+1/2,y_k+1/2), we further compute the next iteration (x_k+1, y_k+1) as x_k+1=𝒫_𝒳(x_k-α∇_x f(x_k+1/2,y_k+1/2)), y_k+1=𝒫_𝒴(y_k+α∇_y f(x_k+1/2,y_k+1/2)). According to the definitions of z and F(z) in (<ref>), we can rewrite the algorithm (<ref>)-(<ref>) as z_k+1/2=𝒫_Λ(z_k-α F(z_k)), z_k+1=𝒫_Λ(z_k-α F(z_k+1/2)). It follows from (<ref>) that the crucial idea of the EG method is to find a mid-point z_k+1/2 by using the GDA method at the current point, and then obtain the next iteration by using the gradient F(z_k+1/2) at this mid-point. Compared with the GDA method in <cit.>, the EG-based algorithm (<ref>), by adding the mid-point step, can achieve exact convergence to a saddle point for general convex-concave functions. In contrast to the work of <cit.>, the main differences of our proposed algorithms are two-fold. (i) We consider the constrained saddle-point problem while <cit.> studied the unconstrained one. (ii) Our algorithms achieve exact convergence to a saddle point while the result of <cit.> only converges into a bounded neighborhood of a saddle point. §.§ Convergence analysis The convergence analyses of the proposed two primal-dual algorithms via OGDA and EG are provided. Firstly, we show the convergence result for the algorithm (<ref>) in the following theorem, whose proof can be found in Appendix A. Suppose that Assumptions 2-3 hold and the step-size α satisfies 0<α< 1/2κ_m with κ_m=2max(l_xx, l_xy,l_yx,l_yy). 
Under the initial conditions x_0=x_-1 and y_0=y_-1, the developed OGDA-based algorithm (<ref>) guarantees that the iteration sequence {x_k,y_k} converges to a saddle point of problem (<ref>). Moreover, it holds that for any T≥ 1 | f(x̂_T,ŷ_T)-f(x^*,y^*) |≤1/2α T‖ z_0-z^*‖^2, where x̂_T=1/T∑^T_k=1x_k and ŷ_T=1/T∑^T_k=1y_k. We next provide the convergence result of the algorithm (<ref>) with its proof given in Appendix B. Suppose that Assumptions 2-3 hold and the step-size α satisfies 0<α<1/κ_m. Under the initial condition z_0=z_-1, the developed EG-based algorithm (<ref>) guarantees that the iteration sequence {x_k,y_k} converges to a saddle point of problem (<ref>). Furthermore, it holds that for any T≥ 1 | f(x̂_T,ŷ_T)-f(x^*,y^*) |≤1/2α T‖ z_0-z^*‖^2, where x̂_T=1/T∑^T-1_k=0x_k+1/2 and ŷ_T=1/T∑^T-1_k=0y_k+1/2. It follows from Theorems 1-2 that the proposed OGDA-based algorithm (<ref>) and EG-based algorithm (<ref>) both achieve exact convergence to a saddle point rather than a bounded neighborhood of a saddle point shown in <cit.>. Moreover, based on (<ref>) and (<ref>) in Theorems 1-2, we have that the objective function f(x,y) at the average iteration generated by these two algorithms converge to an optimal value with a sublinear rate O(1/T). § UNIFIED DISTRIBUTED ALGORITHM VIA SADDLE-POINT DYNAMICS Based on the primal-dual algorithms via OGDA and EG methods for constrained saddle-point problems, we develop unified distributed algorithm design and convergence analysis for solving the networked optimization problems (<ref>) and (<ref>). §.§ Distributed constrained optimal consensus problem Note that the constrained optimal consensus problem (<ref>) can be transformed into the constrained saddle-point problem (<ref>). Based on the proposed OGDA-based algorithm (<ref>), we develop a distributed primal-dual algorithm as x^k+1_i =𝒫_Ω_i (x^k_i-2α∇ f_i(x^k_i)+α∇ f_i(x^k-1_i) -2α∑_j∈𝒩_i (x^k_i-x^k_j+v^k_i-v^k_j) +α∑_j∈𝒩_i (x^k-1_i-x^k-1_j+v^k-1_i-v^k-1_j) ), v^k+1_i =v^k_i+2α∑_j∈𝒩_i(x^k_i-x^k_j)-α∑_j∈𝒩_i (x^k-1_i-x^k-1_j). Let x_k=col(x^k_i)^N_i=1, v_k=col(v^k_i)^N_i=1, ∇ f(x_k)=col(∇ f_i(x^k_i))^N_i=1, and 𝒫_Ω(·)=col(𝒫_Ω_i(·))^N_i=1. From the definition of L_1(x, v) in Section II, one has that ∇_x L_1(x,v)=∇ f(x)+(L⊗ I_m)(x+v) and ∇_v L_1(x,v)=(L⊗ I_m)x. Then, a compact form of (<ref>) can be obtained as x_k+1 =𝒫_Ω (x_k-2α∇_x L_1(ϖ_k)+α∇_x L_1(ϖ_k-1) ), v_k+1 =v_k+2α∇_v L_1(ϖ_k)-α∇_v L_1(ϖ_k-1), where ϖ_k=col(x_k,v_k). Define Φ(ϖ)=[ ∇_x L_1(x,v); -∇_v L_1(x,v)]=[∇ f(x)+(L⊗ I_m)(x+v); -(L⊗ I_m)x], and then algorithm (<ref>) can be arranged as ϖ_k+1=𝒫_Θ_1(ϖ_k-2αΦ(ϖ_k)+αΦ(ϖ_k-1)) with Θ_1=Ω×ℝ^Nm. This illustrates that algorithm (<ref>) has the same structure as (<ref>). Thus, the results of (<ref>) given in Theorem 1 can be easily extended to the case of (<ref>). Under Assumption 3, one has that Φ(ϖ) is Lipschitz continuous, i.e., ‖Φ(ϖ_1)-Φ(ϖ_2)‖≤κ_c‖ϖ_1-ϖ_2‖ for any ϖ_1, ϖ_2, where κ_c is determined by Lipschitz constants of ∇ f_i(x_i), i∈ℐ_N and the largest eigenvalue of L. Similar to the results of Theorem 1, we obtain the following corollary. Suppose that Assumption 1 holds and the step-size α satisfies 0<α<1/2κ_c. The developed distributed algorithm (<ref>) guarantees that x_k converges to an optimal solution of problem (<ref>). Moreover, for any T≥ 1, it holds that | L_1(x̂_T,v̂_T)-L_1(x^*,v^*)|≤1/2α T(‖ x_0-x^*‖^2+‖ v_0-v^*‖^2), where x̂_T=1/T∑^T_k=1x_k and v̂_T=1/T∑^T_k=1v_k. 
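As a concrete illustration of the distributed OGDA consensus iteration above, the following sketch continues the toy setup introduced in the sketch of Section II; the quadratic objectives, box bounds, step size, and iteration count are illustrative assumptions, and the box projection onto Ω_i=[lo,hi] is realized with np.clip.

import numpy as np

def ogda_consensus(a, L, lo=-10.0, hi=10.0, alpha=0.05, iters=2000):
    # Compact form of the distributed OGDA update:
    # x_{k+1} = P_Omega(x_k - 2*alpha*grad_x L1(w_k) + alpha*grad_x L1(w_{k-1})),
    # v_{k+1} = v_k + 2*alpha*grad_v L1(w_k) - alpha*grad_v L1(w_{k-1}),
    # with the illustrative choice f_i(x_i) = 0.5*(x_i - a_i)^2 and Omega_i = [lo, hi].
    gx = lambda x, v: (x - a) + L @ (x + v)
    gv = lambda x: L @ x
    x = np.zeros_like(a); v = np.zeros_like(a)
    x_prev, v_prev = x.copy(), v.copy()          # initial condition x_0 = x_{-1}, v_0 = v_{-1}
    for _ in range(iters):
        x_new = np.clip(x - 2*alpha*gx(x, v) + alpha*gx(x_prev, v_prev), lo, hi)
        v_new = v + 2*alpha*gv(x) - alpha*gv(x_prev)
        x_prev, v_prev, x, v = x, v, x_new, v_new
    return x

For the quadratic objectives used in this sketch, all entries of the returned vector approach the average of a (the unique consensus minimizer) whenever it lies in [lo, hi], which is consistent with the exact-convergence statement of Corollary 1.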
By applying the EG-based algorithm (<ref>)-(<ref>), we develop another distributed primal-dual algorithm to solve the optimization problem (<ref>), which is composed of two steps. Step 1: Calculate the mid-point iteration (x^k+1/2_i,v^k+1/2_i). x^k+1/2_i =𝒫_Ω_i (x^k_i-α∇ f_i(x^k_i) -α∑_j∈𝒩_i (x^k_i-x^k_j+v^k_i-v^k_j) ), v^k+1/2_i =v^k_i+α∑_j∈𝒩_i(x^k_i-x^k_j). Step 2: Calculate the next iteration (x^k+1_i,v^k+1_i). x^k+1_i =𝒫_Ω_i (x^k_i-α∇ f_i(x^k+1/2_i) -α∑_j∈𝒩_i (x^k+1/2_i-x^k+1/2_j+v^k+1/2_i-v^k+1/2_j) ), v^k+1_i =v^k_i+α∑_j∈𝒩_i(x^k+1/2_i-x^k+1/2_j). The proposed algorithm (<ref>)-(<ref>) obtains the same convergence results as Corollary 1, and its detailed proof can be derived from that of Theorem 2. From the algorithm (<ref>), it seems that the neighbors' states (x_j, v_j) at the current iteration and previous iteration are both transmitted, which would lead to twice the communication of <cit.> and <cit.>. In fact, at the current iteration k, only (x^k_j, v^k_j) is required to be transmitted since (x^k-1_j, v^k-1_j) has been transmitted in the previous iteration. Thus, the communication requirement of the proposed algorithm (<ref>) is the same as those of <cit.> and <cit.>. §.§ Distributed resource allocation problem Based on the OGDA-based algorithm (<ref>), we propose a distributed algorithm to solve the optimization problem (<ref>) y^k+1_i =𝒫_Ω_i ( y^k_i-2α (∇ h_i(y^k_i)+W^T_iλ^k_i) +α (∇ h_i(y^k-1_i)+W^T_iλ^k-1_i)), (18a) z^k+1_i =z^k_i+2α∑_j∈𝒩_i(λ^k_i-λ^k_j) -α∑_j∈𝒩_i(λ^k-1_i-λ^k-1_j), (18b) λ^k+1_i =λ^k_i+2α (W_iy^k_i-d_i-∑_j∈𝒩_i(z^k_i-z^k_j +λ^k_i-λ^k_j) )-α (W_iy^k-1_i-d_i -∑_j∈𝒩_i(z^k-1_i-z^k-1_j+λ^k-1_i-λ^k-1_j)). (18c) Let y_k=col(y^k_i)^N_i=1∈ℝ^q, z_k=col(z^k_i)^N_i=1∈ℝ^Nm, λ_k=col(λ^k_i)^N_i=1∈ℝ^Nm, ∇ h(y_k)=col(∇ h_i(y^k_i))^N_i=1∈ℝ^q, W=diag(W_i)^N_i=1∈ℝ^Nm× q, and d=col(d_i)^N_i=1∈ℝ^Nm. According to the definition of L_2(y,z,λ) in Section II, we have that ∇_y L_2(y,z,λ)=∇ h(y)+W^Tλ, ∇_z L_2(y,z,λ)=-(L⊗ I_m)λ, and ∇_λ L_2(y,z,λ)=Wy-d-(L⊗ I_m)(z+λ). Then, a compact form of (18) is written as y_k+1 =𝒫_Ω (y_k-2α∇_y L_2(ξ_k)+α∇_y L_2(ξ_k-1)), z_k+1 =z_k-2α∇_z L_2(ξ_k)+α∇_z L_2(ξ_k-1), λ_k+1 =λ_k+2α∇_λ L_2(ξ_k)-α∇_λ L_2(ξ_k-1), where ξ_k=col(y_k,z_k,λ_k). Define Ψ(ξ)=[∇_yL_2(y,z,λ); ∇_z L_2(y,z,λ);-∇_λ L_2(y,z,λ)]=[∇ h(y)+W^Tλ; -(L⊗ I_m)λ; -(Wy-d-(L⊗ I_m)(z+λ))], and (<ref>) is rewritten as ξ_k+1=𝒫_Θ_2(ξ_k-2αΨ(ξ_k)+αΨ(ξ_k-1)) with Θ_2=Ω×ℝ^Nm×ℝ^Nm, which has the same structure as (<ref>). In addition, we obtain that Ψ(ξ) is κ_s-Lipschitz continuous, where κ_s is determined by the Lipschitz constants of ∇ h_i(y_i), i∈ℐ_N, and the largest eigenvalues of the matrices L and W. Under Assumption 1 and the step-size satisfying 0<α<1/2κ_s, the distributed algorithm (18) guarantees that y_k converges to an optimal solution of the problem (<ref>). Moreover, | L_2(ŷ_T,ẑ_T, λ̂_T)-L_2(y^*,z^*,λ^*)|≤1/2α T(‖ y_0-y^*‖^2+‖ z_0-z^*‖^2+‖λ_0-λ^*‖^2) holds for any T≥ 1, where ŷ_T=1/T∑^T_k=1y_k, ẑ_T=1/T∑^T_k=1z_k and λ̂_T=1/T∑^T_k=1λ_k. Based on the EG-based algorithm (<ref>)-(<ref>), another distributed primal-dual algorithm is developed to solve the optimization problem (<ref>), which is formulated as Step 1: Calculate the mid-point (y^k+1/2_i,z^k+1/2_i, λ^k+1/2_i). y^k+1/2_i =𝒫_Ω_i ( y^k_i-α (∇ h_i(y^k_i)+W^T_iλ^k_i)), z^k+1/2_i =z^k_i+α∑_j∈𝒩_i(λ^k_i-λ^k_j), λ^k+1/2_i =λ^k_i+α (W_iy^k_i-d_i -∑_j∈𝒩_i(z^k_i-z^k_j+λ^k_i-λ^k_j) ). Step 2: Calculate the next iteration (y^k+1_i,z^k+1_i, λ^k+1_i). 
y^k+1_i =𝒫_Ω_i ( y^k+1/2_i-α (∇ h_i(y^k+1/2_i)+W^T_iλ^k+1/2_i)), z^k+1_i =z^k_i+α∑_j∈𝒩_i(λ^k+1/2_i-λ^k+1/2_j), λ^k+1_i =λ^k_i+α (W_iy^k+1/2_i-d_i -∑_j∈𝒩_i(z^k+1/2_i-z^k+1/2_j+λ^k+1/2_i-λ^k+1/2_j) ). Actually, the algorithm (<ref>)-(<ref>) has the same formulation as that in <cit.>. However, only asymptotic convergence was proven in <cit.> and its convergence rate analysis was not given. Based on the result of Theorem 2, we easily prove that the algorithm (<ref>)-(<ref>) achieves exact convergence to an optimal solution with O(1/k) convergence rate. Although the traditional centralized optimization method (e.g., the ADMM-based algorithm in <cit.>) can also solve these two networked optimization problems, it requires massive communication and large bandwidth for the central node. In contrast, the developed distributed algorithm via local information interaction can overcome the issues of the centralized method and therefore can be applied to solve a large-scale networked optimization problem. In addition, unlike the distributed algorithms in <cit.> and <cit.> that require solving a sub-optimization problem at each iteration, the developed algorithms are easier to implement since no sub-optimization problem needs to be solved. § NUMERICAL SIMULATION In this section, we provide some numerical simulation examples for solving networked optimization problems (<ref>) and general constrained saddle-point problem (<ref>) to demonstrate the effectiveness of the proposed algorithms. Example 1: We first verify the proposed OGDA-based algorithm (<ref>) and EG-based algorithm (<ref>) by solving the following constrained saddle-point problem min_x∈𝒳max_y∈𝒴 f(x,y)=x^TBy, where B ∈ℝ^10× 10 is a random matrix whose elements are generated from a uniform distribution on [0,5], and the constraint sets 𝒳 and 𝒴 are set to be 𝒳=[-5,5]^10 and 𝒴=[-2,2]^10. Then, we obtain that (x^*, y^*)=(0_10, 0_10) is a saddle point of problem (<ref>) and the optimal value is f(x^*,y^*)=0. We carry out the OGDA-based algorithm (<ref>) and EG-based algorithm (<ref>) under the same initial values x_0=10× 1_10, y_0=10× 1_10, and the chosen step-size α=0.01. In addition, the GDA algorithm of <cit.> is also implemented as a comparison. Fig. 1 shows the convergence results of the objective error | f(x_k,y_k)-f(x^*,y^*)| under the OGDA-based algorithm (<ref>), EG-based algorithm (<ref>) and GDA algorithm of <cit.>. It is shown that the developed OGDA algorithm (<ref>) and EG algorithm (<ref>) both guarantee that the iterate (x_k,y_k) converges to the saddle point (0_10,0_10) while the GDA algorithm of <cit.> does not converge. Example 2: We next demonstrate the distributed OGDA-based algorithm (18) and EG-based algorithm (<ref>)-(<ref>) to solve the resource allocation problem (<ref>). Consider a network of N=20 agents whose topology is described by a ring graph. Each local objective function is h_i(y_i)=a_iy_i+b_ilog(1+e^c_iy_i), and the local set constraint is Ω_i=[-1,1], i∈ℐ_N. The data in the function h_i(y_i) and the coupled equality constraint ∑^N_i=1(W_iy_i-d_i)=0 are randomly generated with a_i∈ [-5,5], b_i∈ [0,2], c_i∈ [0,1], W_i∈ [-1,1] and d_i∈ [-2,2]. The distributed OGDA-based algorithm (18) and EG-based algorithm (<ref>)-(<ref>) are implemented by choosing different step-sizes α. The left subfigure of Fig. 2 describes the objective error | h(y_k)-h(y^*)| under these two algorithms with respect to the number of gradient computations, which shows that the two developed algorithms both drive | h(y_k)-h(y^*)| to zero. 
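For reference, the local objectives and gradients evaluated by both algorithms in this example have simple closed forms; the following sketch regenerates problem data of the stated type (the random seed is arbitrary and the draws differ from those used for Fig. 2, and W_i, d_i are taken as scalars, i.e. m = q_i = 1, for simplicity).

import numpy as np

rng = np.random.default_rng(0)
N = 20
a = rng.uniform(-5, 5, N); b = rng.uniform(0, 2, N); c = rng.uniform(0, 1, N)
W = rng.uniform(-1, 1, N); d = rng.uniform(-2, 2, N)

def h(y):
    # h(y) = sum_i a_i*y_i + b_i*log(1 + exp(c_i*y_i))
    return float(np.sum(a * y + b * np.log1p(np.exp(c * y))))

def grad_h(y):
    # dh_i/dy_i = a_i + b_i*c_i*sigmoid(c_i*y_i); this is the only local gradient
    # evaluation each agent needs per (half-)iteration of the two algorithms.
    return a + b * c / (1.0 + np.exp(-c * y))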
The right subfigure of Fig. 2 provides the performance comparison between the developed two algorithms and the algorithms in <cit.> and <cit.>. It is shown that the algorithm in <cit.> enjoys better convergence performance than the other algorithms, and the developed EG-based algorithm has similar convergence performance as that in <cit.>. § CONCLUSION This paper develops a unified distributed method for solving two classes of networked optimization problems with non-identical set constraints. We first establish the relationship between two networked optimization problems and constrained saddle-point problems, and then propose two projection-based primal-dual algorithms via OGDA and EG methods. Subsequently, we develop unified distributed algorithms via saddle-point dynamics to solve these two networked optimization problems. The final examples demonstrates the effectiveness of the developed algorithms. Before presenting the proofs of Theorems 1-2, some preliminary results are provided. [Lemma 4, <cit.>] Let F(·) be defined in (<ref>). Under Assumption 1, the following results hold (i) F(·) is a monotone operator, i.e., (F(z_1)-F(z_2))^T(z_1-z_2)≥ 0 for any z_1, z_2∈𝒳×𝒴. (ii) F(·) is Lipschitz continuous, i.e., ‖ F(z_1)-F(z_2)‖≤κ_m‖ z_1-z_2‖ holds for any z_1, z_2∈𝒳×𝒴, where κ_m=2max(l_xx,l_xy,l_yx,l_yy). [Proposition 7, <cit.>] Let {z_k} be the iteration sequence generated by the following update z_k+1=z_k-α F(z_k+1)+ε_k, where F:ℝ^m+n→ℝ^m+n is a continuous function, α is a positive constant, and ε_k∈ℝ^m+n is an arbitrary vector. For any z∈ℝ^m+n and k≥ 1, it holds that ‖ z_k+1-z‖^2=‖ z_k-z‖^2-2α (z_k+1-z)^TF(z_k+1)            -‖ z_k+1-z_k‖^2+2ε^T_k(z_k+1-z). [Proposition 5, <cit.>] Define x̂_T=1/T∑^T_k=1x_k and ŷ_T=1/T∑^T_k=1y_k. Under Assumption 1, it follows that f(x̂_T,y^*)-f(x^*,ŷ_T)       ≤1/T∑^T-1_k=0 (z_k+1-z^*)^TF(z_k+1). §.§ Proof of Theorem 1 Define Y_k+1=𝒫_Λ(Υ_k)-Υ_k, where Υ_k=z_k-α F(z_k)-α(F(z_k)-F(z_k-1)). It then follows from (<ref>) that z_k+1=z_k-α F(z_k)-α(F(z_k)-F(z_k-1))+Y_k+1, which can be rewritten as z_k+1=z_k-α F(z_k+1)+χ_k with χ_k=α{(F(z_k+1)-F(z_k)-(F(z_k)-F(z_k-1))}+Y_k+1. By using (<ref>) of Lemma 2, we obtain that (z_k+1-z)^TF(z_k+1) =1/2α‖ z_k-z‖^2-1/2α‖ z_k+1-z‖^2 -1/2α‖ z_k+1-z_k‖^2+1/αχ^T_k(z_k+1-z) =1/2α‖ z_k-z‖^2-1/2α‖ z_k+1-z‖^2-1/2α‖ z_k+1-z_k‖^2 +1/αY^T_k+1(z_k+1-z)+(z_k+1-z)^T(F(z_k+1)-F(z_k)) -(z_k-z)^T(F(z_k)-F(z_k-1)) -(z_k+1-z_k)^T(F(z_k)-F(z_k-1)). According to the Lipschitz continuity of F(z) in Lemma 1 and Young's inequality, we have that -(z_k+1-z_k)^T(F(z_k)-F(z_k-1))≤κ_m/2‖ z_k-z_k+1‖^2+κ_m/2‖ z_k-z_k-1‖^2. Then, (<ref>) can be simplified as (z_k+1-z)^TF(z_k+1) ≤1/2α‖ z_k-z‖^2-1/2α‖ z_k+1-z‖^2-η‖ z_k+1-z_k‖^2 -κ_m/2‖ z_k+1-z_k‖^2+κ_m/2‖ z_k-z_k-1‖^2 +(z_k+1-z)^T(F(z_k+1)-F(z_k)) -(z_k-z)^T(F(z_k)-F(z_k-1))+1/αY^T_k+1(z_k+1-z), where η=1/2α-κ_m>0 if α<1/2κ_m is chosen. Let z^*=col(x^*,y^*) be a saddle point of problem (<ref>). Since z_k∈𝒳×𝒴 for all k≥ 1, it then follows from (<ref>) that (z_k-z^*)^TF(z^*)≥ 0, ∀ k≥ 1. According to the monotone property of F(·) given in Lemma 1, one has that (z_k-z^*)^T(F(z_k)-F(z^*))≥ 0, which further implies that (z_k-z^*)^TF(z_k)≥ 0, ∀ k≥ 1. In addition, according to the definition of Y_k+1, one has that 𝒫_Λ(z_k+1-Y_k+1)=z_k+1. This implies that -Y_k+1∈𝒩_Λ(z_k+1), where 𝒩_Λ(z_k+1) is the normal cone of the set Λ at z_k+1. Since z^*∈Λ, one can further derive that (z_k+1-z^*)^TY_k+1≤ 0. 
Setting z=z^* of (<ref>), and combining (<ref>)-(<ref>), one has that 0 ≤ (z_k+1-z^*)^TF(z_k+1)≤1/2α‖ z_k-z^*‖^2 -1/2α‖ z_k+1-z^*‖^2-η‖ z_k+1-z_k‖^2-κ_m/2‖ z_k+1-z_k‖^2 +κ_m/2‖ z_k-z_k-1‖^2+(z_k+1-z^*)^T(F(z_k+1)-F(z_k)) -(z_k-z^*)^T(F(z_k)-F(z_k-1)). Summing (<ref>) over k from 0 to t, we obtain that η∑^t_k=0‖ z_k+1-z_k‖^2≤1/2α‖ z_0-z^*‖^2-1/2α‖ z_t+1-z^*‖^2 -κ_m/2‖ z_t+1-z_t‖^2+κ_m/2‖ z_0-z_-1‖^2+(z_t+1-z^*)^T (F(z_t+1)-F(z_t))-(z_0-z^*)^T(F(z_0)-F(z_-1)) ≤1/2α‖ z_0-z^*‖^2-(1/2α-κ_m/2)‖ z_t+1-z^*‖^2, where the last second inequality is obtained by using the initial condition z_0=z_-1 and (z_t+1-z^*)^T(F(z_t+1)-F(z_t))≤κ_m/2‖ z_t+1-z^*‖^2+κ_m/2‖ z_t+1-z_t‖^2. Letting t→∞ and under α<1/2κ_m, it follows from (<ref>) that ∑^∞_k=0‖ z_k+1-z_k‖^2≤1/2αη‖ z_0-z^*‖^2<∞. Consequently, we obtain that lim_k→∞(z_k+1-z_k)=0. In addition, it follows from (<ref>) that (1/2α-κ_m/2)‖ z_t+1-z^*‖^2≤1/2α‖ z_0-z^*‖^2 holds for any t≥ 0. This implies that z_k is bounded for ∀ k∈ℕ. Then, we obtain that z_k has the subsequence {z_n_k} that converges to some limit point z^∞, i.e., lim_k→∞ z_n_k=z^∞=col(x^∞,λ^∞). Moreover, from (<ref>), we derive that z^∞=𝒫_Λ(z^∞-α F(z^∞))=0. This implies that -α F(z^∞) ∈𝒩_Λ(z^∞) and then we obtain that (z-z^∞)^TF(z^∞)≥ 0 holds for z∈𝒳×𝒴. It follows from (<ref>) that z^∞ is a saddle point of problem (<ref>). To this end, we have shown that {z_k} has a convergence subsequence {z_n_k}. We next prove the convergence of the original sequence {z_k}. From (<ref>), one has that 1/2α‖ z_k+1-z^*‖^2+κ_m/2‖ z_k+1-z_k‖^2-(z_k+1-z^*)^T(F(z_k+1)-F(z_k))≤1/2α‖ z_k-z^*‖^2+κ_m/2‖ z_k-z_k-1‖^2-(z_k-z^*)^T(F(z_k)-F(z_k-1))-η‖ z_k+1-z_k‖^2. Define Δ_k=1/2α‖ z_k-z^*‖^2-κ_m/2‖ z_k-z_k-1‖^2+(z_k-z^*)^T(F(z_k)-F(z_k-1)) and one has that Δ_k≥ (1/2α-κ_m/2)‖ z_k-z^*‖^2≥ 0. It then follows that Δ_k+1≤Δ_k-η‖ z_k+1-z_k‖^2. According to the monotonicity and boundedness of Δ_k, we have that Δ_k is convergent. Based on the fact that lim_k →∞ (z_k+1-z_k)=0, one has that ‖ z_k-z^*‖ is convergent. By setting z^*=z^∞, we have that lim_k→∞ z_n_k=z^∞=z^*. Based on lim_k→∞ z_n_k=z^* and lim_k→∞(z_k+1-z_k)=0, we obtain that lim_k→∞Δ_k=0. Under the fact that Δ_k≥ (1/2α-κ_m/2)‖ z_k-z^*‖^2≥ 0, we obtain that lim_k→∞‖ z_k-z^*‖^2=0. Thus, we have shown that the sequence {z_k} converges to a saddle point of problem (<ref>). We next analyze the convergence rate of algorithm (<ref>). From (<ref>) in Lemma 3, we obtain that f(x̂_T,y^*)-f(x^*,ŷ_T)≤1/T∑^T-1_k=0 (z_k+1-z^*)^TF(z_k+1) ≤1/T (1/2α‖ z_0-z^*‖^2-1/2α‖ z_T-z^*‖^2-κ_m/2‖ z_T -z_T-1‖^2+(z_T-z^*)^T(F(z_T)-F(z_T-1)) ) ≤1/T (1/2α‖ z_0-z^*‖^2-(1/2α-κ_m/2)‖ z_t+1-z^*‖^2 ) ≤1/2α T‖ z_0-z^*‖^2. where the second inequality is derived from (<ref>) and the third inequality is obtained by using (<ref>). Note that f(x̂_T,y^*)-f(x^*,ŷ_T)=f(x̂_T,y^*)-f(x^*,y^*)+f(x^*, y^*)-f(x^*,ŷ_T)≤1/2α T‖ z_0-z^*‖^2. Since (x̂_T,ŷ_T) ∈𝒳×𝒴 and f(x,y) is a convex and concave function on 𝒳×𝒴, we obtain that 0 ≤ f(x̂_T,y^*)-f(x^*,y^*)≤1/2α T‖ z_0-z^*‖^2 and 0 ≤ f(x^*, y^*)-f(x^*,ŷ_T)≤1/2α T‖ z_0-z^*‖^2. In addition, since f(x̂_T,ŷ_T)≤ f(x̂_T,y^*) and f(x̂_T,ŷ_T)≥ f(x^*,ŷ_T), we further derive that f(x̂_T,ŷ_T)-f(x^*,y^*) ≤ f(x̂_T,y^*)-f(x^*,y^*)≤1/2α T‖ z_0-z^*‖^2 and f(x^*, y^*)-f(x̂_T,ŷ_T)≤ f(x^*, y^*)-f(x^*,ŷ_T)≤1/2α T‖ z_0-z^*‖^2. Thus, we obtain that | f(x̂_T,ŷ_T)-f(x^*,y^*) |≤1/2α T‖ z_0-z^*‖^2. §.§ Proof of Theorem 2 Let m_k+1=z_k+1/2-(z_k-α F(z_k)) and we obtain that z_k+1/2=z_k-α F(z_k)+m_k+1. It then follows from (<ref>) that 𝒫_Λ(z_k+1/2-m_k+1)=z_k+1/2, which implies -m_k+1∈𝒩_Λ(z_k+1/2). 
Since z_k+1∈Λ, one has that (z_k+1/2-z_k+1)^Tm_k+1≤ 0. In addition, define n_k+1=z_k+1-(z_k-α F(z_k+1/2)) and one can derive that z_k+1=z_k-α F(z_k+1/2)+n_k+1. Note from (<ref>) that 𝒫_Λ(z_k+1-n_k+1)=z_k+1, which infers -n_k+1∈𝒩_Λ(z_k+1). It then follows that (z_k+1-z^*)^Tn_k+1≤ 0. Also, the above equation (<ref>) can be rewritten as z_k+1=z_k-α F(z_k+1)+ψ_k+1, where ψ_k+1=α F(z_k+1)-α F(z_k+1/2)+n_k+1. By setting z=z^* of (<ref>) in Lemma 2, it follows from (<ref>) that ‖ z_k+1-z^*‖^2=‖ z_k-z^*‖^2-2α (z_k+1-z^*)^TF(z_k+1)     -‖ z_k+1-z_k‖^2+2ψ^T_k+1(z_k+1-z^*) ≤‖ z_k-z^*‖^2-‖ z_k+1-z_k‖^2-2α (z_k+1-z^*)^TF(z_k+1/2) ≤‖ z_k-z^*‖^2-‖ z_k+1-z_k+1/2‖^2-‖ z_k+1/2-z_k‖^2 -2(z_k+1-z_k+1/2)^T(-α F(z_k)+m_k+1) -2α (z_k+1-z_k+1/2)^TF(z_k+1/2)-2α (z_k+1/2-z^*)^TF(z_k+1/2) ≤‖ z_k-z^*‖^2-‖ z_k+1-z_k+1/2‖^2-‖ z_k+1/2-z_k‖^2 -2α(z_k+1-z_k+1/2)^T(F(z_k+1/2)-F(z_k)) -2α (z_k+1/2-z^*)^TF(z_k+1/2), where the first inequality is obtained by using (z_k+1-z^*)^Tn_k+1≤ 0, and the second inequality is derived with ‖ a-b ‖^2=‖ a-c ‖^2+‖ b-c ‖^2+2(a-c)^T(c-b) for any vector a, b, c and z_k+1/2=z_k-α F(z_k)+m_k+1, and the last inequality is obtained by using (z_k+1/2-z_k+1)^Tm_k+1≤ 0. Note that -2α(z_k+1-z_k+1/2)^T(F(z_k+1/2)-F(z_k))≤α^2κ^2_m‖ z_k+1-z_k+1/2‖^2+‖ z_k+1/2-z_k‖^2, and it then follows from (<ref>) that (z_k+1/2-z^*)^T F(z_k+1/2)≤1/2α‖ z_k-z^*‖^2         -1/2α‖ z_k+1-z^*‖^2-ρ‖ z_k+1-z_k+1/2‖^2, where ρ=1-α^2κ^2_m/2α>0 if α<1/κ_m. Similar to the derivation of (<ref>), we obtain that (z_k+1/2-z^*)^TF(z_k+1/2)≥ 0 for ∀ k≥ 0. It then follows that ‖ z_k+1-z^*‖^2≤‖ z_k-z^*‖^2-2αρ‖ z_k+1-z_k+1/2‖^2. Similar to the analysis of Theorem 1, we have that lim_k→∞‖ z_k-z^*‖^2=0. Thus, we conclude that the sequence {z_k} converges to a saddle point z^* of the problem (<ref>). We further analyze the convergence rate of algorithm (<ref>). Let x̂_T=1/T∑^T-1_k=0x_k+1/2 and ŷ_T=1/T∑^T-1_k=0y_k+1/2. From (<ref>) in Lemma 3 and (<ref>), one has that f(x̂_T,λ^*)-f(x^*,ŷ_T) ≤1/T∑^T-1_k=0(z_k+1/2-z^*)^TF(z_k+1/2) ≤1/2α T‖ z_0-z^*‖^2. Similar to the proofs of Theorem 1, we obtain that | f(x̂_T,ŷ_T)-f(x^*,y^*) |≤1/2α T‖ z_0-z^*‖^2. 1 3 T. Yang, X. Yi, J. Wu, Y. Yuan, D. Wu, and Z. Meng, “A survey of distributed optimization,” Annu. Rev. Control, vol. 47, pp. 278-305, 2019. 4a A. Falsone, I. Notarnicola, G. Notarstefano, and M. Prandini, “Tracking-ADMM for distributed constraint-coupled optimization,” Automatica, vol. 117, pp. 108962, 2020. 5 A. Nedic, A. Ozdaglar, P. A. Parrilo, “Constrained consensus and optimization in multi-agent networks,” IEEE Trans. Autom. Control, vol. 55, no. 4, pp. 922-938, 2010. 6 Z. Qiu, S. Liu, and L. Xie, “Distributed constrained optimal consensus of multi-agent systems,” Automatica, vol. 68, pp. 209-215, 2016. 8 Q. Liu, S. Yang, and Y. Hong, “Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks,” IEEE Trans. Autom. Control, vol. 62, no. 8, pp. 4259-4265, 2017. 7 J. Lei, H. Chen, and H. Fang, “Primal-dual algorithm for distributed constrained optimization,” Syst. Control Lett., vol. 96, pp. 110-117, 2016. 7a B. Gharesifard, and J. Cortes, “Distributed continuous-time convex optimization on weight-balanced digraphs,” IEEE Trans. Autom. Control, vol. 59, no. 3, pp. 781-786, 2013. 9 P. Lin, W. Ren, C. Yang, and W. Gui, “Distributed continuous-time and discrete-time optimization with nonuniform unbounded convex constraint sets and nonuniform stepsizes,” IEEE Trans. Autom. Control, vol. 64, no. 12, pp. 5148-5155, 2019. 10 W. Yu, H. Liu, W.X. 
Zheng, and Y. Zhu “Distributed discrete-time convex optimization with nonidentical local constraints over time-varying unbalanced directed graphs,” Automatica, vol. 134, pp. 109899, 2021. 11 P. Yi, Y. Hong, and F. Liu, “Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and its application to economic dispatch of power systems,” Automatica, vol. 74, no. 12, pp. 259-269, 2016. 12 A. Cherukuri and J. Cortes, “Initialization-free distributed coordination for economic dispatch under varying loads and generator commitment,” Automatica, vol. 74, no. 12, pp. 183-193, 2016. 14 T. H. Chang, A. Nedic, and A. Scaglione, “Distributed constrained optimization by consensus-based primal-dual perturbation method,” IEEE Trans. Autom. Control, vol. 59, no. 6, pp. 1524-1538, 2014. 15 I. Notarnicola and G. Notarstefano, “Constraint-coupled distributed optimization: a relaxation and duality approach,” IEEE Trans. Control Netw. Syst., vol. 7, no. 1, pp. 483-492, 2019. 16 A. Falsone, K. Margellos, S. Garatti, and M. Prandini, “Dual decomposition for multi-agent distributed optimization with coupling constraints,” Automatica, vol. 84, pp. 149-158, 2017. 14a T. H. Chang, “A proximal dual consensus ADMM method for multi-agent constrained optimization,” IEEE Trans. Signal Process., vol. 64, no. 14, pp. 3719-3734, 2016. 14b A. Nedic, A. Olshevsky, and W. Shi, “Achieving geometric convergence for distributed optimization over time-varying graphs,” SIAM J. Optim., vol. 27, no. 4, pp. 2597-2633, 2017. 17 S. Liang and G. Yin, “Distributed smooth convex optimization with coupled constraints,” IEEE Trans. Autom. Control, vol. 65, no. 1, pp. 347-353, 2019. 17a X. Zeng, J. Lei, and J. Chen, “Dynamical primal-dual accelerated method with applications to network optimization,” IEEE Trans. Autom. Control, 2022. DOI: 10.1109/TAC.2022.3152720 17b J. Xu, Y. Tian, Y. Sun, and G. Scutari, “Distributed algorithms for composite optimization: Unified framework and convergence analysis,” IEEE Trans. Signal Process., vol. 69, pp. 3555-3570, 2021. 17c T. Sherson, R. Heusdens, W.B. Kleijn, “On the distributed method of multipliers for separable convex optimization problems,” IEEE Trans. Signal Inf. Process. Netw., vol. 5, no. 3, pp. 495-510, 2019. 18a D. Mateos-Nunez, and J. Cortes, “Distributed saddle-point subgradient algorithms with Laplacian averaging,” IEEE Trans. Autom. Control, vol. 62, no. 6, pp. 2720-2735, 2016. 18b D. P. Bertsekas, Convex optimization theory. Athena Scientific, Nashua, NH, 2009. 18c P. Mercader, K.J. Astrom, A. Banos, et al, “Robust PID design based on QFT and convex-concave optimization,” IEEE Trans. Control Syst. Techn., vol. 25, no. 2, pp. 441-452, 2016. 18 T. Basar and G. J. Olsder, Dynamic noncooperative game theory. Society for Industrial and Applied Mathematics, 1998. 19a I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Advances in Neural Information Processing Systems, pp. 2672-2680, 2014. 20 G. M. Korpelevich, “The extragradient method for finding saddle points and other problems,” Matecon, vol. 12, pp. 747-756, 1976. 21 L. D. Popov, “A modification of the arrow-hurwicz method for search of saddle points,” Mathematical Notes, vol. 28, no. 5, pp. 845-848, 1980. 23 T. Liang and J. Stokes, “Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks,” Artificial Intelligence and Statistics, PMLR, pp. 907-915, 2019. 24 G. Gidel, H. 
Berard, G. Vignoud, P. Vincent, and S. Lacoste-Julien, “A variational inequality perspective on generative adversarial networks,” arXiv preprint arXiv: 1802.10551, 2018. 25 A. Mokhtari, A. E. Ozdaglar, and S. Pattathil, “A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach,” Artificial Intelligence and Statistics, PMLR, pp. 1497-1507, 2020. 26 A. Mokhtari, A. E. Ozdaglar, and S. Pattathil, “Convergence rate of O(1/k) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems,” SIAM J. Optim., vol. 30, no. 4, pp. 3230-3251, 2020. 28 S. Cui, and U.V. Shanbhag, “On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems,” Set-Valued Var. Anal., vol. 29, pp. 453-499, 2021. 27 A. Nedic and A. Ozdaglar, “Subgradient methods for saddle-point problems,” J. Optim. Theory Appl., vol. 142, no. 1, pp. 205-228, Jul. 2009. 31 S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations Trends Mach. Learn., vol. 3, no. 1, pp. 1-122, 2011.
http://arxiv.org/abs/2307.05255v1
20230711134437
Quantum dynamic response-based NV-diamond magnetometry: Robustness to decoherence and applications in motion detection of magnetic nanoparticles
[ "Wenkui Ding", "Xingyu Zhang", "Jing Liu", "Xiaoguang Wang" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall" ]
APS/123-QED Department of Physics, Zhejiang Sci-Tech University, 310018 Zhejiang, China Department of Physics, Xiamen University, 361005 Fujian, China MOE Key Laboratory of Fundamental Physical Quantities Measurement, National Precise Gravity Measurement Facility, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China [email protected] Department of Physics, Zhejiang Sci-Tech University, 310018 Zhejiang, China We propose a novel quantum sensing protocol that leverages the dynamical response of physical observables to quenches in quantum systems. Specifically, we use the nitrogen-vacancy (NV) color center in diamond to realize both scalar and vector magnetometry via quantum response. Furthermore, we suggest a method for detecting the motion of magnetic nanoparticles, which is challenging with conventional interference-based sensors. To achieve this, we derive the closed exact form of the Berry curvature corresponding to NV centers and design quenching protocols to extract the Berry curvature via dynamical response. By constructing and solving non-linear equations, the magnetic field and instantaneous motion velocity of the magnetic nanoparticle can be deduced. We investigate the feasibility of our sensing scheme in the presence of decoherence and show through numerical simulations that it is robust to decoherence. Intriguingly, we have observed that a vanishing nuclear spin polarization in diamond actually benefits our dynamic sensing scheme, which stands in contrast to conventional Ramsey-based schemes. In comparison to Ramsey-based sensing schemes, our proposed scheme can sense an arbitrary time-dependent magnetic field, as long as its time dependence is nearly adiabatic. Quantum dynamic response-based NV-diamond magnetometry: Robustness to decoherence and applications in motion detection of magnetic nanoparticles Xiaoguang Wang August 12, 2023 ================================================================================================================================================ § INTRODUCTION Quantum metrology <cit.> and quantum sensing <cit.> have attracted significant attention in recent years. Quantum sensors, leveraging the unique properties of quantum systems, hold promise for detecting weak or nanoscale signals that surpass the capabilities of classical sensors. While most quantum sensors rely on interference schemes, there are situations where implementing interferometry or Ramsey-based schemes becomes challenging <cit.>. One such scenario arises when the signal to be detected exhibits a short period of viability, making it impractical to accumulate the necessary phase for information encoding in the interference-based scheme <cit.>. As a result, there is a growing emphasis on exploring novel mechanisms to realize innovative quantum sensing schemes, driving rapid developments in the field of quantum science and technology <cit.>. In recent studies <cit.>, the concept of dynamical response has been proposed as a means to detect geometric quantities in quantum many-body systems. Notably, the emergence of Berry curvature in the nonadiabatic response of physical observables to slow quenches, irrespective of the system's interaction nature, has been identified <cit.>. Building upon these findings, our study showcases the potential of utilizing the mechanism of dynamic response for quantum sensing, offering a complementary approach to the conventional interference-based sensing schemes. 
Specifically, we present quantum response-based sensing schemes utilizing nitrogen-vacancy (NV) color centers in diamond <cit.>. The NV center in diamond is a highly attractive candidate for quantum sensing due to its efficient initialization and readout capabilities through optical excitations, as well as its relatively long coherence time, even at ambient temperature <cit.>. Consequently, extensive theoretical and experimental investigations have been conducted to explore the quantum sensing potential of NV centers <cit.>. Notably, NV centers have demonstrated the ability to sense magnetic fields with nanoscale spatial resolution <cit.>. Besides, the geometric quantity, like the geometric phase, in NV centers has been investigated <cit.> and proposed in the applications in quantum sensing, like gyroscope <cit.> and magnetometer <cit.>. Furthermore, owing to diamond's chemical inertness and the excellent quantum property under ambient condition, NV sensors hold promise for applications in bioimaging <cit.>. In this study, we propose a novel approach using NV centers through quantum dynamic response to sense the motion of magnetic nanoparticles, which has the potential to find applications in the field of bioimaging. Before introducing the dynamic response-based sensing scheme, we provide a brief overview of the quantum response theory <cit.>. By employing adiabatic perturbation theory <cit.>, the general formula for quantum response can be derived as follows (see Appendix A for detailed information): M_μ=const+v_λℱ^(m)_μλ+𝒪(v_λ^2). Here, M_μ represents the observable being measured in the experiment, often referred to as the generalized force along the μ-direction. It can be defined as M_μ≡-⟨ψ(t_f)|∂_μ H|ψ(t_f)⟩, with ∂_μ H≡∂ H/∂μ. The quantum state evolves according to |ψ(t_f)⟩=𝒯e^-i∫_0^t_fH(t^')dt^'|Ψ_m(0)⟩, where 𝒯 denotes the time-ordering operator and the time dependence of the Hamiltonian is introduced by the time-dependent parameters, H(t)=H(λ(t),μ(t),…)≡ H(λ,μ,…). The initial state is prepared as one of the instantaneous eigenstates of the Hamiltonian, H(t)|Ψ_m(t)⟩=E_m(t)|Ψ_m(t)⟩. The quench process is achieved by varying the parameter λ(t) over time, with v_λ≡∂λ/∂ t representing the instantaneous quench velocity along the λ-direction at time t_f. Notably, the Berry curvature ℱ^(m)_μλ corresponding to the instantaneous eigenstate |Ψ_m(t_f)⟩, emerges as the coefficient in the non-adiabatic response when the quench velocity approaches zero. We would like to make some comments regarding the sensor utility of the quantum response formula presented in Eq. (<ref>). The validity of this equation does not rely on the specific details of the quench process, as long as the quench is performed in a nearly adiabatic manner. Most notably, this formula indicates that by implementing quenches along the λ-direction and measuring the corresponding response along the μ-direction, we can extract the value of the Berry curvature ℱ_μλ^(m). Since the Berry curvature is a geometric quantity solely determined by the parameter-dependent instantaneous eigenstate of the quantum system, it remains independent of the specific details of the quench process. Moreover, if the physical quantity of interest is encoded within the Berry curvature, we can determine its value by measuring the Berry curvature using the quench-response mechanism. Conversely, if the Berry curvature is known a priori, we can determine the instantaneous quench velocity by measuring the system's response. 
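As a minimal numerical illustration of this quench-response relation, consider a toy example (not taken from the original analysis): a spin-1/2 in a field of fixed magnitude b whose polar angle is quenched as θ(t)=vt^2/(2t_f), so that the drive starts with zero velocity and reaches velocity v at the measurement time t_f. The response along φ, with its adiabatic value subtracted and divided by v, should then reproduce the ground-state Berry curvature obtained from the sum-over-states formula. The sketch below is illustrative only; all numerical values are arbitrary.

# Illustrative toy check (not from the original work): spin-1/2 in a field of
# fixed magnitude b with polar angle quenched as theta(t) = v t^2 / (2 t_f).
# The measured response -<dH/dphi>, minus its adiabatic value, divided by v,
# is compared with the ground-state Berry curvature F_{phi,theta}.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
b = 1.0

def nhat(th, ph):
    return np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

def H(th, ph):
    n = nhat(th, ph)
    return 0.5 * b * (n[0] * sx + n[1] * sy + n[2] * sz)

def dH_dphi(th, ph):
    return 0.5 * b * np.sin(th) * (-np.sin(ph) * sx + np.cos(ph) * sy)

def dH_dtheta(th, ph):
    return 0.5 * b * (np.cos(th) * np.cos(ph) * sx + np.cos(th) * np.sin(ph) * sy - np.sin(th) * sz)

def berry_phi_theta(th, ph, m=0):
    # sum-over-states (spectral) formula for the Berry curvature of level m
    Ev, V = np.linalg.eigh(H(th, ph))
    F = 0.0
    for nn in range(len(Ev)):
        if nn == m:
            continue
        a = (V[:, m].conj() @ dH_dphi(th, ph) @ V[:, nn]) * (V[:, nn].conj() @ dH_dtheta(th, ph) @ V[:, m])
        F += (1j * (a - a.conjugate())).real / (Ev[nn] - Ev[m]) ** 2
    return F

v, th_f, ph0, nsteps = 0.02, np.pi / 2, 0.0, 20000
t_f = 2.0 * th_f / v                            # theta(t_f) = th_f, d(theta)/dt(t_f) = v
dt = t_f / nsteps
psi = np.linalg.eigh(H(0.0, ph0))[1][:, 0]      # instantaneous ground state at t = 0
for k in range(nsteps):
    th = v * ((k + 0.5) * dt) ** 2 / (2.0 * t_f)
    ns = sum(c * s for c, s in zip(nhat(th, ph0), (sx, sy, sz)))
    U = np.cos(0.5 * b * dt) * np.eye(2) - 1j * np.sin(0.5 * b * dt) * ns   # exact step propagator
    psi = U @ psi

M_phi = -np.real(psi.conj() @ dH_dphi(th_f, ph0) @ psi)      # measured generalized force
g = np.linalg.eigh(H(th_f, ph0))[1][:, 0]
M_ad = -np.real(g.conj() @ dH_dphi(th_f, ph0) @ g)           # adiabatic (constant) part
print("dynamic-response estimate:", (M_phi - M_ad) / v)
print("Berry curvature F_phi,theta:", berry_phi_theta(th_f, ph0))

For a quench velocity small compared with the level splitting b, the two printed numbers agree up to corrections of order v^2, in line with the expansion above.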
This article is organized as follows: In Sec. II, we derive the closed exact form of the Berry curvature associated with NV centers. In Sec. III, we present a concrete dynamic response-based scheme for NV-magnetometry and assess its feasibility by considering the effects of decoherence. In Sec. IV, we propose a protocol for detecting the motion of a magnetic nanoparticle using quantum response. In Sec. V, we discuss the sensitivity of our dynamic sensing scheme. Finally, summaries are made in Sec. VI. § BERRY CURVATURE OF NV CENTERS Our focus is on utilizing the NV center in diamond to realize dynamic response-based quantum sensing. In this section, we aim to derive the analytical expression for the Berry curvature associated with this quantum system. Generally, the Berry curvature can be expressed as the imaginary part of the geometric tensor, ℱ_μλ=-2[χ_μλ]. The geometric tensor χ_μλ is defined as follows <cit.>: χ_μλ=⟨∂_μΨ|∂_λΨ⟩-⟨∂_μΨ|Ψ⟩⟨Ψ|∂_λΨ⟩. where |∂_λΨ⟩≡∂|Ψ⟩/∂λ, and |Ψ⟩≡|Ψ(λ,μ)⟩ represents a parameter-dependent quantum state. In particular, when the parameter-dependent quantum state corresponds to the instantaneous eigenstates of the parameter-dependent Hamiltonian, given by H(λ,μ)|ϕ_m(λ,μ)⟩=E_m(λ,μ)|ϕ_m(λ,μ)⟩, the Berry curvature can be determined using the following expression: ℱ_μλ^(m)=i∑_n≠ m⟨ϕ_m|∂_μ H|ϕ_n⟩⟨ϕ_n|∂_λ H|ϕ_m⟩-μ↔λ/[E_n(λ,μ)-E_m(λ,μ)]^2, assuming the eigenstate |ϕ_m⟩ is non-degenerate. The Hamiltonian that describes the NV center driven by a time-varying magnetic field is given by <cit.>: H(t)=DS_z^2+E(S_x^2-S_y^2)+g_eμ_B𝐡(t)·𝐒+𝐒·∑_k=1^N𝐀_k·𝐈_k. Here, 𝐒=(S_x,S_y,S_z) represents the spin operator of the NV electronic spin, which has a spin quantum number S=1. The Hamiltonian contains several important terms: The first term represents the diagonal term of the zero-field splitting and D≈ 2.87 GHz, represents the zero-field splitting parameter, which exhibits temperature dependence and can be exploited for temperature sensing. The second term corresponds to the off-diagonal term of the zero-field splitting, which captures the interaction between the NV center's electronic spin and an external electric field or stress, providing a means for electric field and stress detection <cit.>. The third term corresponds to the Zeeman energy of the NV electronic spin in the presence of a time-varying magnetic field 𝐡(t)=(h_x(t),h_y(t),h_z(t)), while g_e is the NV electronic g-factor and μ_B is the Bohr magneton. This term enables the sensing of magnetic fields. The last term describes the hyperfine interaction between the NV electronic spin and the surrounding nuclear spins, such as ^13C nuclear spins with a spin quantum number I=1/2. This term enables spin-sensing, where 𝐈_k represents the spin operator of the k-th nucleus and 𝐀_k represents the coupling strength of the NV electronic spin and the k-th nuclear spin. The NV center in diamond possesses remarkable quantum properties, making it a versatile and promising quantum sensor under ambient temperature. While the last term is typically considered as the origin of decoherence of the NV electronic spin, for the purpose of demonstrating our dynamic response-based sensing protocol, we temporarily neglect this coupling term. Its effect will be carefully investigated in the subsequent section. By neglecting the last coupling term in Eq. (<ref>), the simplified Hamiltonian can be expressed (by assuming g_eμ_B=1 for simplicity) as follows: H=[ D+h_z h_x-ih_y/√(2) E; h_x+ih_y/√(2) 0 h_x-ih_y/√(2); E h_x+ih_y/√(2) D-h_z ]. 
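As a quick consistency check, the matrix above can be reproduced by assembling the Hamiltonian from the standard spin-1 matrices in the |+1⟩, |0⟩, |−1⟩ basis. The following short sketch is illustrative only; the field values are arbitrary numbers chosen for the test.

# Sketch: build the simplified NV Hamiltonian from spin-1 operators and check
# it against the 3x3 matrix form given above (basis |+1>, |0>, |-1>).
import numpy as np

Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def H_nv(D, E, h):
    hx, hy, hz = h
    return D * Sz @ Sz + E * (Sx @ Sx - Sy @ Sy) + hx * Sx + hy * Sy + hz * Sz

def H_matrix(D, E, h):          # the explicit matrix displayed above
    hx, hy, hz = h
    hp, hm = (hx + 1j * hy) / np.sqrt(2), (hx - 1j * hy) / np.sqrt(2)
    return np.array([[D + hz, hm, E],
                     [hp, 0.0, hm],
                     [E, hp, D - hz]], dtype=complex)

D, E, h = 2.87, 0.01, (0.3, -0.2, 0.5)          # illustrative values (g_e mu_B = 1 units)
print(np.allclose(H_nv(D, E, h), H_matrix(D, E, h)))   # -> True
print(np.linalg.eigvalsh(H_nv(D, E, h)))               # three real eigenenergies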
To obtain the analytic form of the Berry curvature using Eq. (<ref>), we need to find the explicit eigenstates and eigenvalues of this parameter-dependent Hamiltonian. Fortunately, for a general 3× 3 Hermitian matrix <cit.>, since all the eigenvalues are real, we can analytically calculate them in terms of the trignometric solutions (see Appendix B for more details). To be specific, the eigenvalues of the Hamiltonian in Eq. (<ref>) can be obtained as follows: E_1 =2/3[D-Δ_0cos(φ-π/3)], E_2 =2/3[D-Δ_0cos(φ+π/3)], E_3 =2/3[D+Δ_0cos(φ/3)], where Δ_0≡√(3/2Tr[ℋ^2]) and cosφ=1/2(3/Δ_0)^3(ℋ). Here, we have used the traceless Hamiltonian ℋ≡ H-Tr[H]/31, with Tr[ℋ^2]= 2/3D^2+2E^2+2h^2, (ℋ)= 2D/3(E^2+h^2-D^2/9)+h_x^2(E-D)-h_y^2(E+D), where h^2=h_x^2+h_y^2+h_z^2. An obvious advantage of this trigonometric analytic form is that, it immediately reveals E_1≤ E_2≤ E_3 since 0≤φ≤π. To the best of our knowledge, the exact form of the eigenenergy of the NV Hamiltonian presented in this study has not been utilized in the existing literature. Conventionally, discussions on the eigenenergies or transitions of the NV center, a 3-level system, often rely on perturbation methods to obtain approximate results <cit.>. However, these approximate approaches can pose challenges when it comes to calculating the Berry curvature, which requires a more precise understanding of the system's eigenenergy structure. The instantaneous eigenstates of the 3× 3 Hermitian matrix can be represented as the cross product of two three-dimensional vectors <cit.>, |Ψ̃_m⟩=[(𝐡_1-E_m𝐞_1)×(𝐡_3-E_m𝐞_3)]^*, where 𝐡_j is the j-th column of the Hamiltonian and 𝐞_i is the unit vector, like 𝐞_1=(1,0,0)^T. Consequently, the explicit form of the instantaneous eigenstate corresponding to E_m is given by |Ψ_m⟩=1/√(2𝒩_m)[ -E(h_x+ih_y)+(D_m-h_z)(h_x-ih_y); √(2)(-D_m^2+E^2+h_z^2); -E(h_x-ih_y)+(D_m+h_z)(h_x+ih_y) ], where we have defined D_m≡ D-E_m and 𝒩_m=D_m^4+ (h^2-2E^2-3h_z^2)D_m^2+2E(h_y^2-h_x^2)D_m +(E^2+h_z^2)(E^2+h^2), is the corresponding normalization factor. Equipped with these exact eigenenergies and eigenstates, we can now calculate the Berry curvature corresponding to the eigenstate |Ψ_m⟩ using Eq. (<ref>). While the Berry curvature is fundamentally determined by Eq. (<ref>) once the explicit form of the parameter-dependent eigenstate is known, however, directly applying this equation poses challenges due to the intricacy of computing wave function derivatives. Hence, we resort to the alternative expression provided by Eq. (<ref>), which enables us to determine the Berry curvature without explicitly calculating the wave function derivatives. After performing the involved yet straightforward calculations, we obtain the analytical expression for the Berry curvature associated with NV centers when the Cartesian components of the magnetic field are utilized as the driven parameters. The explicit forms of the Berry curvature components are given as follows: ℱ_xy^(m) =∑_n≠ m-2h_z(D_m+D_n)/𝒩_m𝒩_n(D_m-D_n)[(D_m^++D_n^+)(D_m^-D_n^-h_x^2-h_y^2h_z^2)+(D_m^-+D_n^-)(D_m^+D_n^+h_y^2-h_x^2h_z^2)], ℱ_xz^(m) =∑_n≠ m2h_y(D_m^++D_n^+)/𝒩_m𝒩_n(D_m-D_n)[2E(D_m^-D_n^-+h_z^2)h_x^2-(D_m+D_n)(h^2-h_z^2)h_z^2], ℱ_yz^(m) =∑_n≠ m2h_x(D_m^-+D_n^-)/𝒩_m𝒩_n(D_m-D_n)[2E(D_m^+D_n^++h_z^2)h_y^2+(D_m+D_n)(h^2-h_z^2)h_z^2], where we have introduced the notation D_m^±≡ D_m± E. It is worth noting that these analytical results reveal some intriguing features. Specifically, when h_z=0, we have ℱ_xy=0, and similarly, when E=0 and h_z=0, we find ℱ_xz=ℱ_yz=0. 
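The closed-form eigenenergies can be checked directly against numerical diagonalization. In the sketch below, the cosine arguments for E_1 and E_2 are taken as (φ∓π)/3, the form consistent with the general trigonometric solution of Appendix B and with the ordering E_1≤E_2≤E_3; the parameter values are arbitrary illustrative numbers.

# Sketch: compare the closed-form (trigonometric) eigenenergies with direct
# diagonalization of the 3x3 NV Hamiltonian.  Cosine arguments for E_1 and
# E_2 are written as (phi -/+ pi)/3, matching the Appendix-B solution.
import numpy as np

def H_matrix(D, E, h):
    hx, hy, hz = h
    hp, hm = (hx + 1j * hy) / np.sqrt(2), (hx - 1j * hy) / np.sqrt(2)
    return np.array([[D + hz, hm, E], [hp, 0.0, hm], [E, hp, D - hz]], dtype=complex)

def trig_eigenvalues(D, E, h):
    Hm = H_matrix(D, E, h)
    Htl = Hm - np.trace(Hm) / 3 * np.eye(3)                 # traceless part
    Delta0 = np.sqrt(1.5 * np.trace(Htl @ Htl).real)        # Delta_0 = sqrt(3/2 Tr[H^2])
    phi = np.arccos(np.clip(0.5 * (3 / Delta0) ** 3 * np.linalg.det(Htl).real, -1.0, 1.0))
    E1 = 2.0 / 3.0 * (D - Delta0 * np.cos((phi - np.pi) / 3))
    E2 = 2.0 / 3.0 * (D - Delta0 * np.cos((phi + np.pi) / 3))
    E3 = 2.0 / 3.0 * (D + Delta0 * np.cos(phi / 3))
    return np.array([E1, E2, E3])

D, E, h = 2.87, 0.005, (0.4, 0.1, -0.3)                     # illustrative values
print(trig_eigenvalues(D, E, h))
print(np.linalg.eigvalsh(H_matrix(D, E, h)))                # should agree, in ascending order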
With the explicit formulation of the Berry curvature at our disposal, we are now equipped to develop sensing protocols that harness the quantum dynamical response mechanism described by Eq. (<ref>). In the subsequent sections, we will illustrate specific sensing schemes based on quantum response and thoroughly examine their feasibility. Through these investigations, we aim to establish the practicality and effectiveness of employing the quantum dynamical response for sensing applications. § SCALAR MAGNETOMETRY AND THE ROBUSTNESS TO DECOHERENCE §.§ dynamic response-based sensing scheme using the rotating quench field In this subsection, we present a specific scheme for scalar magnetometry utilizing the quantum response, focusing on a rotating quench protocol. Furthermore, we consider the simplified scenario where E=0, which allows for a clear and concise presentation of the sensing procedure. Under these conditions, the Hamiltonian governing the dynamics of the NV center, driven by a magnetic field 𝐡(t)=h(sinθcosϕ,sinθsinϕ,cosθ), takes the form: H=DS_z^2+e^-iϕ S_ze^-iθ S_yS_ze^iθ S_ye^iϕ S_z, where we have adopted the convention of rescaling the zero-field coupling strength by setting h=1, effectively incorporating it into the parameter D/h→ D. Utilizing the eigenenergy expression derived in Eq. (<ref>), we can explicitly calculate the eigenenergies as follows: E_1 =2/3[D-√(D^2+3)cos(φ-π/3)], E_2 =2/3[D-√(D^2+3)cos(φ+π/3)], E_3 =2/3[D+√(D^2+3)cos(φ/3)], where cosφ=D(-9-2D^2+27cos^2θ)/2√((D^2+3)^3). Notably, due to the commutation relation [e^-iϕ S_z,H]=0, the eigenenergies do not depend on the value of ϕ. Furthermore, the corresponding eigenstates can be obtained as follows: |Ψ_m⟩=1/√(𝒩_m)( [ e^-iϕsinθ(D_m-cosθ); √(2)(cos^2θ-D_m^2); e^iϕsinθ(D_m+cosθ) ]), where D_m≡ D-E_m and the normalization factor, 𝒩_m=2D_m^4-D_m^2+(1-3D_m^2)cos2θ+1. It should be noted that the analytic form of the eigenstate given in Eq. (<ref>) is not applicable when θ=π/2 and E_m=D (see Appendix B for more details). In fact, when θ=π/2, the exact eigenvalues can be further simplified as E_1=(D-√(4+D^2))/2, E_2=D, and E_3=(D+√(4+D^2))/2, while the eigenstate corresponding to E_2 is represented by |Ψ_2⟩=1/√(2)(e^-iϕ,0,-e^iϕ)^T. Having obtained the explicit form of the eigenenergies and eigenstates, we can now proceed to calculate the Berry curvature using the formula in Eq. (<ref>), where the derivatives of the Hamiltonian with respect to ϕ and θ are given by ∂_ϕ H =-sinθsinϕ S_x+sinθcosϕ S_y, ∂_θ H =cosθcosϕ S_x+cosθsinϕ S_y-sinθ S_z. Utilizing the analytic form of the eigenstates presented in Eq. (<ref>), we obtain ⟨Ψ_m|∂_ϕ H|Ψ_n⟩ =2i/√(𝒩_m𝒩_n)(D_m^2-D_n^2)sin^2θcosθ, ⟨Ψ_n|∂_θ H|Ψ_m⟩ =1/√(𝒩_m𝒩_n)(D_n+D_m)(1-D_nD_m)sin2θ. Applying Eq. (<ref>), we immediately observe that ℱ_ϕϕ=ℱ_θθ=0, while ℱ_ϕθ^(m)=-ℱ_θϕ^(m). In particular, the explicit form of the Berry curvature corresponding to the eigenstate |Ψ_m⟩ is given by ℱ_ϕθ^(m) =8sin^3θcos^2θ∑_n≠ m(D_n+D_m)^2(1-D_nD_m)/𝒩_m𝒩_n(D_n-D_m). It is worth noting that ℱ_ϕθ^(m) is also independent of ϕ. Furthermore, we can express the Berry curvature for the ground state as follows: ℱ_ϕθ^(1) =8sin^3θcos^2θ× [(D_1+D_2)^2(1-D_1D_2)/𝒩_1𝒩_2(D_2-D_1)+(D_1+D_3)^2(1-D_1D_3)/𝒩_1𝒩_3(D_3-D_1)]. Moreover, in the special case when θ=π/2, the Berry curvature assumes a more compact form: ℱ^(1)_ϕθ(ϕ,θ=π/2)=D-D^2+2/√(D^2+4). We now present a concrete quenching protocol to demonstrate how the quantum response-based sensing scheme operates. 
We apply a rotating quench field given by [ h_x(t)=sin(v^2t^2/2π), h_y(t)=0, h_z(t)=cos(v^2t^2/2π), ] where the quench is realized through θ(t)=v^2t^2/2π. This choice of the rotating quench ensures that the driving at the initial time is adiabatic since v_θ(t=0)=0. Specifically, we measure the response ⟨∂_ϕ H⟩ at t_f=π/v with an instantaneous quench velocity of v_θ(t_f)=v. Firstly, we perform numerical simulations to verify the validity of the quantum response formula stated in Eq. (<ref>), which asserts that ⟨ψ(t_f)|∂_ϕ H|ψ(t_f)⟩≈⟨Ψ_1(0)|∂_ϕ H|Ψ_1(0)⟩+ v_θℱ_ϕθ^(1), where |ψ(t_f)⟩=𝒯e^-i∫_0^t_fH(t^')dt^'|Ψ_1(0)⟩. Specifically, for the rotating quench protocol given by Eq. (<ref>), we aim to verify that ⟨ψ(t_f)|S_y|ψ(t_f)⟩/v≈D^2+2/√(D^2+4)-D. To accomplish this, we solve the time-dependent Schrödinger equation to obtain the left-hand side of the equation. The result is depicted as the black solid line in Fig. <ref>. Meanwhile, the green dashed line in Fig. <ref> corresponds to the right-hand side of the equation. Evidently, the figure demonstrates that as the quench velocity approaches zero, the Berry curvature can be accurately approximated by the ratio of the response signal to the quench velocity. To implement quantum sensing based on the dynamic response, we note that the quantity ⟨ψ(t_f)|S_y|ψ(t_f)⟩ can be measured in the experiment. By solving this non-linear equation, we can deduce the value of D or, equivalently, the magnitude of the magnetic field h. It is important to note that the quench process cannot cross the degenerate point. §.§ robustness to decoherence of the sensing protocol In this section, we examine the retrieval of the Berry curvature using quantum response in the presence of decoherence. Building upon the numerical simulation presented in Sec. <ref>, we extend our analysis to incorporate the influence of the environment, specifically the interaction with N nuclear spins. This interaction is captured by the inclusion of the last term in Eq. (<ref>), which accounts for the coupling between the NV electronic spin and the nuclear spins. The presence of coupling to the nuclear spins introduces decoherence effects on the NV electronic spin, particularly when the nuclear spins are partially polarized. In this context, we consider the quenching process described by Eq. (<ref>), and our objective is to calculate the response signal M_y=Tr[ρ(t_f) S_y] at t_f=π/v, where ρ(t_f) is the state of the compound system, ρ(t_f)=U(t_f)ρ(0)U^†(t_f). The time evolution operator U(t_f)=𝒯e^-i∫_0^t_fH(t^')dt^', with H(t) given by the Hamiltonian in Eq. (<ref>). The initial state, ρ(0)=|ϕ_0⟩⟨ϕ_0|⊗ρ_n, consists of two components: |ϕ_0⟩⟨ϕ_0|, which corresponds to the ground state of the NV Hamiltonian in the absence of coupling to the nuclear spin bath, and ρ_n, which represents the initial state of the nuclear spin bath. The nuclear spin bath is assumed to be in a thermal state and is characterized by the density matrix ρ_n=(1/Z)exp(-β∑_k=1^NI_kz). Here, Z=[2cosh(β/2)]^N represents the partition function, and β=2tanh^-1(P) denotes the inverse temperature, determined by the average nuclear polarization P <cit.>. When dealing with a large number of nuclear spins (N), simulating the dynamics governed by a time-dependent Hamiltonian using the density matrix formalism becomes computationally challenging due to the exponential growth of the Hilbert space dimension (∼ 2^N+1× 2^N+1). To simplify the simulation for larger N, we employ certain approximations. 
First, we assume a homogeneous coupling between the NV electronic spin and the nuclear spins, namely 𝐀_k=A, based on the quasistatic approximation <cit.>. This allows us to utilize the collective nuclear spin operator 𝐈=∑_k=1^N𝐈_k, and the total angular momentum 𝐉=𝐒+𝐈 becomes a constant of motion, leading to the reduction of the dimension of the Hilbert space. Second, since the initial state of the nuclear spins is assumed to be in a thermal state, we can employ wave function dynamics instead of density matrix calculations. Namely, ρ(t_f)=∑_I_0=k^N/2∑_M_0=-I_0^I_0ω(I_0,M_0)|ψ(t_f)⟩⟨ψ(t_f)|, where k=1/2 if N is odd, and k=0 if N is even. The time evolution of the wave function is given by |ψ(t_f)⟩=U(t_f)(|ϕ_0⟩⊗|I_0,M_0⟩), and the statistical weight associated with the nuclear spin state |I_0,M_0⟩ is given by ω (I_0,M_0) =C_N^N/2-I_0(1+P/2)^N/2-M_0(1-P/2)^N/2+M_02I_0+1/N/2+I_0+1, where C_N^M represents the binomial coefficient. By employing these simplifications, we can tackle the simulation of the dynamics in a more computationally feasible manner while capturing the essential features of the system's behavior. After making the simplifications mentioned earlier, we perform simulations of the quantum response experiment considering N=20 nuclear spins (with spin I=1/2). The retrieved Berry curvatures for different average nuclear polarizations P are presented in Fig. <ref>. The results reveal an interesting phenomenon: when the nuclear polarization is non-zero (P=0.2, red solid line), there exists an optimal quenching speed v for extracting the Berry curvature using the quantum response formula. Several factors contribute to this observation. Firstly, according to adiabatic perturbation theory, a smaller quenching speed v leads to a more accurate retrieval of the Berry curvature through quantum response. Secondly, a slower quenching speed implies a longer evolution time, increasing the impact of decoherence. This competing mechanism leads to the existence of the optimal quench velocity. However, a counterintuitive finding arises when the nuclear spins are completely unpolarized (P=0, blue solid line). Our calculations demonstrate that as the nuclear polarization approaches zero, indicating increased decoherence, the influence of decoherence on the quantum response experiment becomes less significant instead. This novel feature is in contrast to conventional Ramsey-based sensing schemes, where higher nuclear polarization is typically required to mitigate electronic spin decoherence. The origin of this unique characteristic can be attributed to adiabatic perturbation theory, which suggests that the presence of decoherence or dephasing in the quantum system can actually enhance the applicability of the quantum response formula <cit.>. This finding highlights the robustness of our quantum response-based sensing scheme to decoherence, making it highly feasible for realistic experiments, since the polarization of nuclear spins in solid-state systems is usually difficult and time-consuming <cit.>. § VECTOR MAGNETOMETRY AND MOTION SENSING OF MAGNETIC NANOPARTICLES In recent years, several proposals have been put forward to realize vector magnetometry using solid-state spins <cit.>. In this section, we present a concrete example to demonstrate the implementation of vector magnetometry and the motion sensing of magnetic nanoparticles using NV centers in diamond through quantum dynamic response. The schematic diagram in Fig. 
<ref> illustrates the setup, where NV centers are utilized to sense the motion of a magnetic nanoparticle and determine the instantaneous magnetic field generated by the magnetic nanoparticle itself <cit.>. Typically, the magnetic nanoparticle undergoes Brownian motion, leading to a time-varying magnetic field experienced by the NV center. By formulating equations based on the quantum response formula, we can, in principle, determine the motion of the magnetic nanoparticle for arbitrary time dependencies, as long as the motion is nearly adiabatic. Here, for clarity, we restrict the motion of the magnetic nanoparticle along the x-axis, to demonstrate the capability of the vector magnetometry and the motion sensing. We now consider two ensembles of NV centers, and the Hamiltonian for the NV center in the i-th ensemble is given by: H^(i)(t)=DS_z^2+h_z^(i)S_z+h_yS_y+h_x(t)S_x. Here, h_z^(i) represents the static magnetic field applied to the i-th ensemble along the z-axis. Since these two ensembles of NV centers are usually close to each other, this different static field can be generated by mounting a nano magnet on the diamond. The static magnetic field h_y is common to both NV ensembles. The static fields h_z^(i) and h_y are assumed to be known beforehand, which can be determined, for example, through conventional Ramsey-based magnetometry. The magnetic field h_x(t), which we aim to detect, is generated by the magnetic nanoparticle. Initially, at time t=0, the magnetic nanoparticle is far away from the NV center, resulting in a negligible value for h_x(t=0). The initial state of the NV center is prepared in its ground state, which can be optically polarized by illuminating a 532 nm laser <cit.>. When the magnetic nanoparticle moves in close proximity to the NV center, the NV center experiences a time-varying magnetic field h_x(t) along the x-axis. At a specific measurement time t_f, we perform measurements on the spin expectation values of the two NV ensembles, denoted as ⟨ S_z^(1)⟩ and ⟨ S_z^(2)⟩, utilizing spin state-dependent photoluminescence (PL) <cit.>. According to the quantum response formula, the relationship between these measured spin expectation values and the magnetic field components can be described by the following equations: ⟨ S_z^(1)⟩/v_x=ℱ^(1)_xz[h_x(t_f),h_y,h_z^(1)], ⟨ S_z^(2)⟩/v_x=ℱ^(1)_xz[h_x(t_f),h_y,h_z^(2)], where the Berry curvature ℱ_xz^(1)[h_x,h_y,h_z^(i)] is determined by Eq. (<ref>) and v_x is the quench velocity. By solving these nonlinear equations, we can obtain the instantaneous values of the magnetic field h_x and the velocity v_x at time t_f. From the perspective of motion sensing, the proposed method allows us to determine the instantaneous velocity of the magnetic nanoparticle <cit.>, and extract valuable information about its position by determining the magnetic field h_x. The vector magnetometry can be realized in the same manner. For instance, in the case where the value of h_y is not known in advance, we can extend the setup by incorporating an additional ensemble of NV centers with a different h_z^(i). This allows us to construct an additional nonlinear equation, enabling the determination of h_y as well. In other words, by constructing groups of these nonlinear equations using static field gradients, we eliminate the need to know the quench velocity beforehand to estimate the magnetic field. In conclusion, we present a novel sensing proposal for detecting the motion of magnetic nanoparticles based on the mechanism of quantum dynamic response. 
This approach enables us to realize highly sensitive motion sensing within nanoscale, where the position and instantaneous velocity of the magnetic nanoparticle can be determined through the analysis of the measured spin expectation values. It offers a promising avenue for accurately tracking and characterizing the motion of nano-scale objects using solid-state spins. This has significant implications in various fields, including bioimaging, where magnetic nanoparticles can serve as indicators for targeted imaging <cit.>. § DISCUSSION ON THE SENSITIVITY In this section, we investigate the sensitivity of our dynamic response-based sensing scheme, specifically focusing on the sensing scheme discussed in Sec. <ref>. By analyzing the closed exact form of the Berry curvature given in Eq. (<ref>), we can calculate the susceptibility of the Berry curvature with respect to the parameter D. Remarkably, we can analytically calculate the susceptibility as θ approaches zero when D=1. In this limit, the susceptibility exhibits the following behavior: lim_θ→ 0∂/∂ D[8sin^3θcos^2θ(D_1+D_2)^2(1-D_1D_2)/𝒩_1𝒩_2(D_2-D_1)] =∞, lim_θ→ 0∂/∂ D[8sin^3θcos^2θ(D_1+D_3)^2(1-D_1D_3)/𝒩_1𝒩_3(D_3-D_1)] =-1/8√(2), which indicates that, lim_θ→ 0∂ℱ_ϕθ^(1)/∂ D=∞. Apparently, this indicates that near the work point (θ=0,D=1), a slight change in D will result in a significant variation in the Berry curvature ℱ_ϕθ^(1), which corresponds to a measurable quantity divided by the quench velocity in the experiment. Consequently, we anticipate an exceptionally high sensitivity near the work point in our dynamic response sensing scheme. This is reminiscent of the sensor utility of non-Hermitian systems, where the susceptibility of certain measurable quantities can also exhibit divergent behaviors <cit.>. However, it is important to note that the work point (θ=0, D=1) actually corresponds to an energy degenerate point (E_1=E_2). Thus, achieving near adiabatic conditions when approaching this point requires an extremely small quenching velocity. Consequently, while the susceptibility near the work point may be divergent, it is accompanied by a significantly longer evolution time. Therefore, the divergence in susceptibility does not necessarily translate into a divergence in sensitivity. In fact, a general bound for the estimation uncertainty has been proposed in Ref. <cit.> for dynamic quantum sensing schemes, taking into account the evolution time explicitly. When the parameter encoding process (for both sudden quench and adiabatic quench) is governed by the parameter Hamiltonian Ĥ_λ, this bound is given by δλ≥1/t||∂Ĥ_λ/∂λ||, where ||Â|| represents the seminorm defined as the difference between the maximum and minimum eigenvalues of the operator Â, i.e., ||Â||=E_max-E_min. In the dynamic sensing protocol described in Sec. III, the ultimate sensitivity bound for estimating the parameter D is given by δ D≥ 1/t. Since both our dynamic response-based sensing scheme and the conventional Ramsey-based sensing scheme are subject to the same ultimate sensitivity bound as described by Eq. (<ref>), the divergence in the susceptibility of the Berry curvature presented in Eq. (<ref>) does not necessarily imply a divergent sensitivity. Hence, our dynamic response-based sensing scheme does not offer an inherently enhanced ultimate sensitivity compared to the Ramsey-based scheme. 
However, the advantage of our dynamic response-based sensing scheme lies in its capability to sense time-varying magnetic fields or the motion of magnetic nanoparticles, which remains challenging for conventional interference-based sensing schemes. This opens up new possibilities for applications in dynamic sensing scenarios where conventional schemes fall short. § SUMMARY The essence of our dynamic response-based sensing scheme lies in utilizing the dynamics governed by a time-dependent Hamiltonian to encode the parameter of interest into the quantum state. Usually, calculating the dynamics governed by a time-dependent Hamiltonian, like using the time-ordering evolution operator, can be challenging, limiting its application in quantum sensing. However, the quantum response theory offers a valuable tool by providing a simple and clear expression of the observable dynamics in terms of the Berry curvature, as long as the time dependence of the Hamiltonian is near adiabatic. In this study, we leverage this relation to demonstrate the power of the quench-response mechanism in realizing quantum sensing. Unlike conventional interference or Ramsey-based sensing schemes, which rely on time-independent Hamiltonians to encode the parameter, our dynamic response-based sensing scheme offers distinct advantages. It enables the sensing of instantaneous magnetic fields and the detection of the motion of magnetic nanoparticles. This capability opens up new possibilities in quantum sensing, particularly in scenarios where the parameter to be estimated are time-dependent and require real-time measurements. In this study, we employ the NV center in diamond as our platform to demonstrate the effectiveness of the dynamic response-based sensing scheme. By analytically deriving the exact form of the Berry curvature, we are able to design quench-response protocols that enable us to accurately estimate the magnitude of the magnetic field or the quench velocity. One of the notable advantages of our dynamic response-based sensing scheme is its robustness to decoherence. Contrary to conventional interference-based approaches, we find that a vanishing nuclear polarization actually benefits our scheme. This counterintuitive result highlights the unique properties of the dynamic response-based approach and its resilience to decoherence effects. This robustness is a significant advantage, making our scheme highly feasible for realistic experiments. Furthermore, by exploiting the quench-response mechanism, we propose schemes that enable the detection and characterization of the motion of magnetic nanoparticles. This advancement opens up new possibilities for applications in bioimaging and other areas where accurate motion tracking within nanoscale is essential. In fact, the principle of our dynamic sensing scheme can be extended to other quantum systems, including quantum many-body systems, whether they are interacting or not. While the exact form of the Berry curvature may not be obtainable in these systems, it can still be measured experimentally through alternative methods <cit.> or via the quantum response theory introduced here. By measuring the value of the Berry curvature in advance, we can design dynamic sensing protocols to detect the quench velocity in these systems. The dynamic response-based sensing scheme proposed in this study offers the advantage of technical simplicity, making it highly accessible for practical implementation in experimental settings. 
Our study demonstrates the potential of utilizing the dynamic response and the quench-response mechanism to realize a novel sensing scheme. However, there are still untapped possibilities and further potentials to explore in the field of quantum sensing using this approach. Future research can delve deeper into these unexplored avenues and uncover new applications and insights. This work was supported by the National Key Research and Development Program of China (Grants No. 2017YFA0304202 and No. 2017YFA0205700), the NSFC through Grant No. 11875231 and No. 11935012, and the Fundamental Research Funds for the Central Universities through Grant No. 2018FZA3005. § REVIEW OF THE QUANTUM RESPONSE THEORY To render this work more self-consistent, we now make a brief review on the adiabatic perturbation theory and the quantum response theory. More details can be found in Refs. <cit.>. The Schrödinger equation for a time-dependent Hamiltonian is i∂ |ψ(t)⟩/∂ t=H(t)|ψ(t)⟩. Here we expand the wave function using the instantaneous eigenstates as |ψ(t)⟩=∑_n a_n(t)|ϕ_n(t)⟩, with H(t)|ϕ_n(t)⟩=E_n(t)|ϕ_n(t)⟩. Thus, the Schrödinger equation can be represented as (by left multiplying ⟨ϕ_m(t)| on both sides), i∂ a_m(t)/∂ t+i∑_n a_n(t)⟨ϕ_m(t)|∂/∂ t|ϕ_n(t)⟩=E_m(t)a_m(t). We now make the gauge transformation a_n(t)=α_n(t)e^-iω_n(t)e^iγ_n(t), where the dynamic phase is defined as ω_n(t)≡-∫_t^t_fE_n(τ)dτ, and the Berry phase is defined as γ_n(t)=-i∫_t^t_f⟨ n|∂/∂ t^'|n⟩ dt^'. As a result, we obtain (the indices m↔ n are exchanged) ∂α_n(t)/∂ t=-∑_m≠ nα_m(t)⟨ϕ_n(t)|∂/∂ t|ϕ_m(t)⟩ e^i(ω_nm(t)-γ_nm(t)), where ω_nm(t)=ω_n(t)-ω_m(t) and γ_nm(t)=γ_n(t)-γ_m(t). Alternatively, we can write it in the integral form as follows: α_n(t)= -∫_t_i^tdt^'∑_m≠ nα_m(t^')⟨ϕ_n(t^')|∂/∂ t^'|ϕ_m(t^')⟩ e^i(ω_nm(t^')-γ_nm(t^')). Now if the initial state is in the ground state, namely, α_0(0)=1 and α_m(0)= 0 for m ≠ 0, by making the adiabatic perturbation approximation <cit.>, we obtain α_n(t)≈ -∫_t_i^tdt^'⟨ϕ_n(t^')|∂/∂ t^'|ϕ_0(t^')⟩ e^i(ω_n0(t^')-γ_n0(t^')). Using integration by parts, we obtain that, α_n(t_f)≈[ i⟨ϕ_n(t)|∂/∂ t|ϕ_0(t)⟩/E_n(t)-E_0(t). .-1/E_n(t)-E_0(t)d/dt⟨ϕ_n(t)|∂/∂ t|ϕ_0(t)⟩/E_n(t)-E_0(t)+…]× .e^i(ω_n0(t)-γ_n0(t))|_t_i^t_f. Since the time dependence of the Hamiltonian is usually introduced through the time varying parameter, namely, H(t)≡ H(λ(t)), we have the following relation, ⟨ϕ_n(t)|∂/∂ t|ϕ_0(t)⟩=∂λ/∂ t⟨ϕ_n(λ)|∂/∂λ|ϕ_0(λ)⟩, while ω_n(λ)≡-∫_λ^λ_fE_n(λ^')/v(λ^')dλ^', with v(λ)=dλ/dt, and γ_n(λ)=-i∫_λ^λ_f⟨ n|∂/∂λ^'|n⟩ dλ^'. Therefore, the integral above can be rewritten as follows: α_n (λ_f)≈[ i∂λ/∂ t⟨ϕ_n(λ)|∂/∂λ|ϕ_0(λ)⟩/E_n(λ)-E_0(λ)-∂^2 λ/∂ t^2⟨ϕ_n(λ)|∂/∂λ|ϕ_0(λ)⟩/[E_n(λ)-E_0(λ)]^2. .-(∂λ/∂ t)^21/E_n(λ)-E_0(λ)d/dλ⟨ϕ_n(λ)|∂/∂λ|ϕ_0(λ)⟩/E_n(λ)-E_0(λ)+…]× .e^i(ω_n0(λ)-γ_n0(λ))|_λ_i^λ_f. When the quench is near adiabatic (∂λ/∂ t→ 0), the transition amplitude can be approximated as α_n (λ_f)≈ i.∂λ/∂ t⟨ϕ_n(λ)|∂/∂λ|ϕ_0(λ)⟩/E_n(λ)-E_0(λ)e^i(ω_n0(λ)-γ_n0(λ))|_λ_i^λ_f. Particularly, when the energy gap is large or the quench velocity is vanishing at the initial time, we have a_n(λ_f) =α_n(λ_f)e^-iω_n(λ_f)e^iγ_n(λ_f) ≈ i.∂λ/∂ t⟨ϕ_n(λ)|∂/∂λ|ϕ_0(λ)⟩/E_n(λ)-E_0(λ)|_λ_f. This is the result presented in Ref. <cit.>. We can also utilize the following relation: ⟨ϕ_n(λ)|∂/∂λ|ϕ_m(λ)⟩=-⟨ϕ_n(λ)|∂ H/∂λ|ϕ_m(λ)⟩/E_n(λ)-E_m(λ). 
Thus, we have the response signal along the μ-direction as a function of the quench velocity v_λ≡∂λ/∂ t up to the leading order as follows: M_μ ≡-⟨ψ(t_f)|∂ H/∂μ|ψ(t_f)⟩≈ -⟨ϕ_0|∂ H/∂μ|ϕ_0⟩ +.i∂λ/∂ t∑_n≠ 0⟨ϕ_0|∂ H/∂μ|ϕ_n⟩⟨ϕ_n|∂ H/∂λ|ϕ_0⟩-μ↔λ/[E_n(λ)-E_0(λ)]^2|_λ_f. This leads to the general formula of the quantum response as follows: M_μ=const+v_λℱ^(0)_μλ+𝒪(v_λ^2), where the Berry curvature is given by ℱ_μλ^(m)=i∑_n≠ m⟨ϕ_m|∂ H/∂μ|ϕ_n⟩⟨ϕ_n|∂ H/∂λ|ϕ_m⟩-μ↔λ/[E_n(λ)-E_m(λ)]^2. § EXACT EIGENVALUES AND EIGENVECTORS OF A 3× 3 HERMITIAN MATRIX In this section, we provide the analytic solution of the eigenvalues and eigenvectors of a general 3× 3 Hermitian matrix represented as follows: H=[ a_11 a_12 a_13; a_12^* a_22 a_23; a_13^* a_23^* a_33 ]. The secular equation to calculate the eigenvalue is (H-λ1)=0, which, according to the Cayley-Hamilton theorem, corresponds to the cubic equation λ^3-Tr(H)λ^2-1/2[Tr(H^2)-(Tr(H))^2]λ-(H)=0. Since H is a Hermitian operator, Tr(H^2), Tr(H) and (H) are all real quantities. To further simplify the corresponding cubic equation, we now make some transformations as follows: B =H-Tr[H]/31, A =√(2/Tr[B^2])B. As a result, the eigenvalues of H and the eigenvalues of A follow the relation: λ_k=√(Tr[B^2]/2)t_k+Tr[H]/3. We notice that Tr[A]=0 and Tr[A^2]=2. Consequently, the secular equation to calculate the eigenvalues of A becomes a depressed cubic equation: t^3-t-q=0, with q=(A). Since the operator A is still a Hermitian operator, all the eigenvalues are real, then we can assume the solution to be t=ucosθ. We can prove that -2/3√(3)<q<2/3√(3), when Eq. (<ref>) has three distinct real roots (it is easy to observe by plotting the graph of the function). Specifically, when q=2/3√(3), two multiple roots correspond to the stationary point of f(t)=t^3-t, namely t_1=t_2=1/√(3), and t_3=-2/√(3). It is similar when q=-2/3√(3), and we can conclude that -2/√(3)≤ t≤2/√(3). As a result, we can choose u=2/√(3). After dividing the equation by u^3/4, the depressed cubic equation in Eq. (<ref>) now becomes, 4cos^3θ-3cosθ-3/2√(3)q=0. Using the trigonometric identity 4cos^3θ-3cosθ=cos(3θ), we obtain that, cos(3θ)=3/2√(3)q. As a result, we have the three eigenvalues of matrix A as follows: t_k=2/√(3)cos[1/3arccos(3/2√(3)(A))-2π k/3], for k=0,1,2. Then, the eigenvalues of H can be determined by Eq. (<ref>). The eigenstates of the 3× 3 Hermitian matrix H can be represented as the cross product of two three-dimensional vectors, |Ψ̃_m⟩=[(𝐡_1-E_m𝐞_1)×(𝐡_3-E_m𝐞_3)]^*, as long as the two vectors are linear independent <cit.>. Here, 𝐡_j is the j-th column of the Hermitian matrix H, and 𝐞_i is the unit vector, like 𝐞_1=(1,0,0)^T. We now make a brief proof to show that |Ψ̃_m⟩ is indeed the eigenstate. First, if |Ψ̃_m⟩ is the eigenstate, then we have (H-E_m1)|Ψ̃_m⟩=0, or equivalently we have to prove that, ⟨Ψ̃_m|(H-E_m1)|ψ⟩=0, where |ψ⟩=α_1 𝐞_1+α_2 𝐞_2+α_3 𝐞_3 is an arbitrary wave vector. After the expansion, we have ⟨Ψ̃_m |(H-E_m1)|ψ⟩ = α_1⟨Ψ̃_m|(𝐡_1-E_m𝐞_1)⟩+α_2⟨Ψ̃_m|(𝐡_2-E_m𝐞_2)⟩ +α_3⟨Ψ̃_m|(𝐡_3-E_m𝐞_3)⟩. Obviously, both the first term and the last term equal zero. To prove the second term equals zero, we need to utilize the property of the mixed product as follows: (𝐚×𝐛)·𝐜=[ a_1 b_1 c_1; a_2 b_2 c_2; a_3 b_3 c_3 ]. As a result, we can prove that ⟨Ψ̃_m |(𝐡_2-E_m𝐞_2)⟩ =[(𝐡_1-E_m𝐞_1)×(𝐡_3-E_m𝐞_3)]·(𝐡_2-E_m𝐞_2) =(H-E_m1)=0. 
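A quick numerical sanity check of this cross-product construction, for a randomly generated Hermitian matrix, can be sketched as follows (illustrative only, with an arbitrary random seed).

# Sketch: numerical check of the cross-product construction of eigenvectors
# for a random 3x3 Hermitian matrix.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                      # random Hermitian matrix
evals = np.linalg.eigvalsh(H)
e1, e3 = np.eye(3)[0], np.eye(3)[2]

for Em in evals:
    v = np.cross(H[:, 0] - Em * e1, H[:, 2] - Em * e3).conj()
    v = v / np.linalg.norm(v)                 # normalized candidate eigenvector
    print(np.linalg.norm(H @ v - Em * v))     # ~ 0 when the two columns are independent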
When these two vectors are linear dependent, namely, (𝐡_1-E_m𝐞_1)=μ(𝐡_3-E_m𝐞_3), the eigenstate can be straightforwardly calculated by solving (H-E_m1)|Ψ̃_m⟩=0 and the normalized eigenstate is given by |Ψ_m⟩=1/1+|μ|^2( [ 1; 0; -μ ]). For instance, this is the situation when θ=π/2 in the Hamiltonian [Eq. (<ref>)] in the main text, where for E_2=D, the corresponding eigenstate can be determined using the above expression. 58 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Braunstein and Caves(1994)]braunstein1994statistical author author S. L. Braunstein and author C. M. Caves, title title Statistical distance and the geometry of quantum states, https://doi.org/10.1103/PhysRevLett.72.3439 journal journal Phys. Rev. Lett. volume 72, pages 3439 (year 1994)NoStop [Giovannetti et al.(2006)Giovannetti, Lloyd, and Maccone]giovannetti2006quantum author author V. Giovannetti, author S. Lloyd, and author L. Maccone, title title Quantum metrology, https://doi.org/10.1103/PhysRevLett.96.010401 journal journal Phys. Rev. Lett. volume 96, pages 010401 (year 2006)NoStop [Giovannetti et al.(2011)Giovannetti, Lloyd, and Maccone]giovannetti2011advances author author V. Giovannetti, author S. Lloyd, and author L. Maccone, title title Advances in quantum metrology, @noop journal journal Nat. Photonics volume 5, pages 222 (year 2011)NoStop [Escher et al.(2011)Escher, de Matos Filho, and Davidovich]escher2011general author author B. Escher, author R. de Matos Filho, and author L. Davidovich, title title General framework for estimating the ultimate precision limit in noisy quantum-enhanced metrology, @noop journal journal Nat. Phys. volume 7, pages 406 (year 2011)NoStop [Pezzè et al.(2018)Pezzè, Smerzi, Oberthaler, Schmied, and Treutlein]pezze2018quantum author author L. Pezzè, author A. Smerzi, author M. K. Oberthaler, author R. Schmied, and author P. Treutlein, title title Quantum metrology with nonclassical states of atomic ensembles, https://doi.org/10.1103/RevModPhys.90.035005 journal journal Rev. Mod. Phys. volume 90, pages 035005 (year 2018)NoStop [Braun et al.(2018)Braun, Adesso, Benatti, Floreanini, Marzolino, Mitchell, and Pirandola]braun2018quantum author author D. Braun, author G. Adesso, author F. Benatti, author R. Floreanini, author U. Marzolino, author M. W. Mitchell, and author S. Pirandola, title title Quantum-enhanced measurements without entanglement, https://doi.org/10.1103/RevModPhys.90.035006 journal journal Rev. Mod. Phys. volume 90, pages 035006 (year 2018)NoStop [Degen et al.(2017)Degen, Reinhard, and Cappellaro]degen2017quantum author author C. L. Degen, author F. Reinhard, and author P. Cappellaro, title title Quantum sensing, https://doi.org/10.1103/RevModPhys.89.035002 journal journal Rev. Mod. Phys. volume 89, pages 035002 (year 2017)NoStop [Barry et al.(2020)Barry, Schloss, Bauch, Turner, Hart, Pham, and Walsworth]barry2020sensitivity author author J. F. Barry, author J. M. Schloss, author E. Bauch, author M. J. Turner, author C. A. Hart, author L. M. Pham, and author R. L. Walsworth, title title Sensitivity optimization for nv-diamond magnetometry, https://doi.org/10.1103/RevModPhys.92.015004 journal journal Rev. Mod. Phys. volume 92, pages 015004 (year 2020)NoStop [Yurke et al.(1986)Yurke, McCall, and Klauder]yurke1986su author author B. Yurke, author S. L. McCall, and author J. 
http://arxiv.org/abs/2307.05545v2
20230708232436
Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives
[ "Zhongliang Jiang", "Septimiu E. Salcudean", "Nassir Navab" ]
cs.RO
[ "cs.RO" ]
Zhongliang Jiang^1,*, Septimiu E. Salcudean^2, Nassir Navab^1,3
^1Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
^2Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
^3Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
*Corresponding author at: Technische Universität München, Fakultät für Informatik – I16, Boltzmannstr. 3, 85748 Garching bei München; [email protected]

Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. The Robotic US System (RUSS) aims to overcome this shortcoming by offering reproducibility, while also aiming at improved dexterity and intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from the shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. Regarding teleoperated RUSS, we summarize their technical developments and clinical evaluations. This survey then focuses on the review of recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence provide the key techniques, which enable intelligent patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that the research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action. Here, we call this process the recovery of the “language of sonography". This side result of research on autonomous robotic US acquisitions could be considered as valuable and essential as the progress made in the robotic US examination itself. This article will provide both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques. Additionally, we present the challenges that the scientific community needs to face in the coming years in order to achieve its ultimate goal of developing intelligent robotic sonographer colleagues. These colleagues are expected to be capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.

Keywords: Ultrasound imaging, robotic ultrasound, telesonography, medical robotics, orientation optimization, path planning, visual servoing, compliant control, robotic US, robot learning, reinforcement learning, learning from demonstrations

§ INTRODUCTION Today, medical imaging is one of the most crucial components of the entire healthcare industry, from wellness and screening to early diagnosis, treatment selection, and follow-up <cit.>. Compared to the other three most common medical imaging modalities used in current clinical practice [i.e., radiography (X-ray), computerized tomography (CT), and magnetic resonance imaging (MRI)], ultrasound (US) imaging has the advantage of being noninvasive, low-cost, portable, and free of ionizing radiation <cit.>. These merits make it particularly suitable for some clinical needs, such as image-guided interventions <cit.> and obstetric applications <cit.>.
In October 2021, 0.79 million US examinations were performed in England, whereas there were 0.52 million CT scans and 0.31 million MRI scans <cit.>. However, regarding traditional free-hand US examinations, substantial experience and visuo-tactile skills are required for achieving high-quality US images <cit.>. These factors limit the utilization of US in clinical applications requiring reliable biometric measurements or repeatable images for monitoring lesions. To obtain high-quality images, sonographers need to maintain the probe with proper pressure and adjust the probe orientation for optimal acoustic windows. To overcome intra- and inter-operator variations, the robotic US system (RUSS) has been gaining attention for two decades. To illustrate the increased interest in RUSS, the number of related peer-reviewed publications per year and cumulatively is depicted in Fig. <ref>. For individual years, the number of publications has grown from 1,020 in the year 2001 to 15,500 in the year 2022. The accumulated number of publications increased exponentially to 125,110 from 2001 to 2022. This dramatic rise in interest can be attributed to three distinct communities: engineers, clinicians, and entrepreneurs <cit.>. Clinicians' need for high-quality images and for efficient, easy-to-use RUSS stimulates the development of RUSS by engineers. Due to the considerable economic benefits, entrepreneurs are motivated to develop prototypes and market them [https://www.adechotech.com/], [https://en.mgi-tech.com/], [https://www.bkmedical.com/]. To assist in combating global pandemics (e.g., COVID-19 and Ebola), the demand for intelligent systems and robotics has grown substantially in the fields of disease prevention, screening, diagnosis, treatment, home care, etc. <cit.>. RUSS has been investigated to remotely or autonomously perform US examinations for early detection and diagnosis <cit.>. Deploying RUSS in hospitals enables the separation of patients and sonographers, hence lowering the risk of virus transmission between patients and medical staff. This paper is motivated by the desire to assist both robotic US technicians and clinicians. For roboticists, we provide a comprehensive summary of enabling technologies (i.e., compliant force control and path planning) that are commonly needed for a variety of applications. In addition to the enabling technologies, the advanced solutions developed by integrating additional techniques (e.g., surface registration, visual servoing, and image segmentation) are summarized to demonstrate the potential of RUSS for addressing real-world challenges (e.g., tissue motion and deformation). Using these techniques, clinicians and technicians can further consider how RUSS can assist them in addressing particular clinical needs by sensibly integrating the different techniques. This will help to bridge the gap between medical and technological research. Prior to this survey, there were some reviews that summarized the development of RUSS <cit.>. Recently, Salcudean et al. discussed the roles robotics play in the acquisition of medical images, including US, endoscopy, X-ray, optical coherence tomography, and nuclear medicine <cit.>. Specific to RUSS, Von Haxthausen et al. provided a systematic summary of recent publications between 2016 and 2020 <cit.>. Li et al. focused on the development of autonomous RUSS <cit.>.
These two surveys categorize the literature based on the level of automation; in contrast, this article emphasizes the connection between the potential clinical applications and the enabling techniques. In addition, some novel concepts of application-oriented techniques (e.g., motion-aware <cit.> and deformation-aware <cit.> characteristics) have not been discussed before. However, they are important to further pave the way for applying RUSS in real scenarios. Due to the fast development of artificial intelligence (AI), learning-based RUSS is emerging to automatically perform specific US examinations <cit.>. Li et al. also noted this trend and mentioned AI-based RUSS as one of the future directions <cit.>. Nevertheless, learning-based RUSS solutions have not yet been systematically discussed. Therefore, a comprehensive survey article covering these new trends of RUSS will be helpful for roboticists to quickly and systematically learn the key knowledge of RUSS, as well as for clinicians to comprehend how the robot benefits their specific clinical needs. Regarding the future development of RUSS, we discuss some open challenges and promising perspectives to inspire the research community and other stakeholders. § MATERIALS AND METHODS §.§ Searching Policy In order to provide an objective view of the development of robotic US imaging over the last two decades, we carried out an extensive search of RUSS on the Web of Science and Google Scholar. The search terms were “(remote OR teleoperat*) AND (ultrasound OR US OR ultrasonography OR echography)" and “robot* AND (ultrasound OR US OR ultrasonography OR echography) AND (Imaging OR screening OR scan* OR acquisition* OR servoing)". To further narrow the results to the most relevant and most impactful articles, the titles and abstracts were carefully reviewed to exclude the articles that were (a) not focusing on the medical domain, (b) not using robotic imaging adjustment or optimization, or (c) not employing traditional 2D/3D probes. This excludes papers using endocavitary probes <cit.> for cardiology and prostate applications. Finally, among similar articles, the most representative ones (the newest or most cited) were selected. §.§ Technological Developments in RUSS Skilled sonographers are often in short supply, particularly in rural areas. To allow accurate adjustment of US acquisition parameters and to address the unbalanced distribution of healthcare resources across nations and regions, teleoperated RUSS solutions have been developed over the past two decades (see Section <ref>). For such systems, the operations are fully carried out by experts via teleoperation techniques; thereby, remote experts take responsibility for the robotic acquisition. To improve the level of autonomy of RUSS, quite a large number of RUSS solutions have been proposed for different applications in the past decades. To review the key characteristics of autonomous RUSS, we first summarize the existing articles in terms of enabling technologies, namely three key acquisition parameters: contact force (Section <ref>), probe orientation (Section <ref>), and scan path (Section <ref>). By precisely controlling these parameters, the accuracy and reproducibility of US imaging can be improved <cit.>. In addition, more advanced techniques need to be developed to tackle additional practical complications occurring in clinical routines, e.g., patient movement and probe pressure-induced deformation.
In this article, we feature four advanced techniques: 1) motion-aware US imaging (Section <ref>), 2) deformation-aware US imaging (Section <ref>), 3) US visual servoing (Section <ref>), and 4) elastography imaging (Section <ref>). Sonographers often need to search for standard examination planes for biometric measurement and diagnosis. It is a time-consuming and non-repeatable process, even for experienced sonographers, due to noisy US images and tissue motion. Benefiting from the development of artificial intelligence, and in particular deep learning, the area of medical image processing has achieved phenomenal success <cit.>. Learning-based image processing techniques lead to accurate and robust understanding of US images, which further enables training RUSS to learn both manipulation skills and clinical knowledge directly from human sonographers. We summarize the most recent developments in learning-powered RUSS (Section <ref>), aiming to automatically search for specific anatomy or navigate a probe to visualize standard US planes. Finally, we discuss the open challenges and provide a few potential directions for future developments in Section <ref>. The important components of robotic US and the organizational structure of this article are depicted in Fig. <ref>. By incorporating additional techniques into the fundamental enabling technologies, the level of technical complexity increases from Section <ref> to Section <ref>. In this way, we would like to highlight our strategy to inspire the community to achieve the ultimate goal of developing an intelligent robotic sonographer that can collaborate with human sonographers to improve diagnostic and intraoperative imaging in real scenarios. § TELEOPERATION IN RUSS Teleoperation allows operators to remotely carry out certain tasks. Due to the development of networks, multimedia, and communication technologies in the past decades, teleoperation has become one of the most mature techniques for reforming modern medical procedures <cit.>. The main characteristic of teleoperation is that the robot's motion is controlled by operators. This is important for obtaining regulatory approval. The most successful representative is the da Vinci from Intuitive Surgical, which has become the clinical standard of minimally invasive surgery for a wide range of surgical procedures <cit.>. Teleoperated RUSS has been seen as a solution for work-related musculoskeletal disorders of sonographers <cit.>. In addition, separating operators from patients reduces the risk of disease transmission during pandemics (e.g., COVID-19) <cit.>. This section summarizes the technical and clinical contributions of remote RUSS. §.§ Technical Developments Teleoperated RUSS often consists of three individual components: 1) an expert console, 2) a patient-side manipulator (PSM) used to maneuver a US probe, and 3) a software control system mapping the movements made by experts to the PSM. Teleoperated RUSS allows sonographers to manually, unconstrainedly, and safely control the probe motion onto the patient via the PSM. Teleoperated systems are also utilized on-site because robotic systems can overcome human limits in manipulation and perception by adding dexterity and precision. A common example is the da Vinci, which is often employed on-site <cit.>. §.§.§ Robotic Mechanism In 1999, Salcudean et al. designed a six degree of freedom (DOF) lightweight mechanism with limited force capability for teleoperated RUSS <cit.>.
Due to the need for a large orientation workspace, a parallelogram linkage was employed to decouple the orientation and translation in their final design, achieving the control resolution of 0.1 mm for translation and 0.09^∘ for rotation. Similarly, Lessard et al. designed the PSM in parallel structure in order to have enough workspace <cit.>. Masuda et al. designed a 6-DOF mechanism consisting of gimbals, pantograph and slide mechanisms, which weighed 3.3 kg <cit.>. To guarantee the safety of patients, there are four sensors symmetrically deployed around the probe to monitor real-time force. In addition, a number of soft mechanisms were developed for force-sensitive applications, e.g., obstetric examinations, to strictly limit the maximum US probe pressure. Vilchis et al. proposed a cable-driven nonrigid remote robot <cit.>. This system has been used on 100 patients with abdominal aortic aneurysm (AAA) at a distance of 1125 km. Tsumura et al. designed a passive mechanism using springs for fetal examinations, which can prevent excessive contact force <cit.>. Besides, a portable and attachable robotic system has been designed by Ito et al. <cit.> [see Fig. <ref> (e)]. In the same direction, Vieyres et al. proposed a 4-DOF light mechanism with 3-DOF rotation and 1-DOF translation in probe centerline <cit.>. Then, they updated the design of the portable RUSS to allow all 6-DOF motions using serial mechanism <cit.>. The portable RUSS is easily used by paramedics, which makes it ideal for use in emergency medical circumstances. Nevertheless, owing to the need of the compact structure, portable RUSS typically have restricted working space. Since mechanical design is beyond the scope of this survey's primary focus on imaging acquisition, we refer readers to two comprehensive review articles with mechanical designs for RUSS <cit.>. To reduce the cost of RUSS, commercial robotic manipulators e.g., Universal Robot (University robot, Denmark) and Franka Emika Panda (Franka Emika GmbH, Germany) are often used as PSM <cit.> [see Fig. <ref> (b) and (c)]. It is noteworthy that another typical standard robotic arm KUKA LBR iiwa (KUKA Robotics GmbH, Germany), with integrated joint torque sensors, is also commonly employed as a PSM <cit.>. HIPPOCRATE is a representative of teleoperated RUSS developed using a serial industrial robotic arm <cit.>. §.§.§ Shared Autonomy in Teleoperated RUSS To fully take advantage of the stability and accuracy of robotic techniques, Abolmaesumi et al. proposed a shared autonomy strategy between an expert and an image servo <cit.>. The in-plane three DOFs were controlled by visual servoing to automatically center the carotid artery in cross-sectional images, while the other three DOFs were teleoperated by an expert. In this case, the image servo can provide pixel-by-pixel control accuracy and further mitigate the negative influence of human tremor. To keep the tissue of interest always visible in the image and give more flexibility to the expert, Li et al. and Krupa et al. shared all four (in-plane and out-of-plane) DOFs of a lightweight body-mounted mechanism between the visual servoing algorithm and a human operator via teleoperation <cit.>. The visual servoing technique has also been widely used in autonomous RUSS to estimate and compensate for the motion of internal organs <cit.>, visualize and track the object of interest <cit.>, and improve the image quality by optimizing the acoustic windows <cit.>, etc. Please refer to Section <ref> for more details. 
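To make the shared-autonomy idea above concrete, the following minimal Python sketch illustrates how an image-based servo can take over the in-plane translations while the remaining degrees of freedom follow the tele-operator. It is only an illustration under simplifying assumptions: the vessel is segmented by a crude intensity threshold (a stand-in for whatever segmentation the cited systems actually use), the gain and scaling values are arbitrary, and all function names are hypothetical rather than taken from any of the cited works.

import numpy as np

def segment_vessel(image, thresh=30):
    # crude stand-in for a real lumen segmentation (threshold, active contour, CNN, ...)
    return image < thresh

def inplane_servo_velocity(image, px_per_mm, gain=0.5):
    # P-control law driving the vessel centroid toward the image centre
    mask = segment_vessel(image)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(2)                      # nothing visible: no automatic correction
    centroid = np.array([xs.mean(), ys.mean()])
    centre = np.array([image.shape[1], image.shape[0]]) / 2.0
    error_mm = (centre - centroid) / px_per_mm  # image error expressed in probe coordinates
    return gain * error_mm                      # in-plane velocity command [mm/s]

def blend_commands(operator_twist, servo_xy):
    # shared autonomy: the servo owns the two in-plane translations,
    # the remaining DOFs follow the tele-operator
    cmd = np.array(operator_twist, dtype=float)  # [vx, vy, vz, wx, wy, wz]
    cmd[0:2] = servo_xy
    return cmd

In the systems discussed above, such a correction loop would run at image rate, while force limits and the force control discussed in Section <ref> remain active underneath.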
§.§.§ User Interface Masuda et al. employed two joysticks to remotely control the three-dimensional rotation and translation individually of the PSM <cit.>. Yet, this manner differs from how experts conduct conventional US examinations. To enhance the intuitiveness of the interaction, a dummy probe is frequently utilized to intuitively control PSM from the expert console <cit.>. A gyroscope was installed within the dummy probe so that it could track the motion of the expert <cit.>. To improve the accuracy of the motion estimation, some mature techniques, such as optical and electromagnetic tracking can be utilized. As the use of a dummy probe allows experts to conduct US examinations as usual, RUSS can reduce training time and increase examination efficiency. However, the lack of force feedback on expert side may hinder the clinical acceptance. To tackle this problem, Martinelli et al. employed a haptic control system that rendered contact force in three dimensions <cit.>. Conti et al. employed a commercial 6-DOF haptic device (Omega 6) to reflect the contact force in six dimensions <cit.> [see Fig. <ref> (a)]. Recently, Naceri et al. directly deployed two 7-DOF Franka Emika Panda <cit.>, one of which was used at expert console with force feedback, and the other one used at patient side to precisely reproduce the movements of the experts. Benefiting from the development of virtual reality (VR) techniques, a VR simulator was designed as a new type of interface for teleoperated RUSS <cit.> [see Fig. <ref> (f)]. Compared to traditional joysticks or other haptic devices, an immersive experience can be achieved using VR simulators, which could intuitively visualize the remote scenes in 3D. The initial evaluation of a VR simulator has been performed by 12 experienced sonographers and the results suggest that the immersive simulator could be used for teleoperated RUSS <cit.>. A deeper discussion about human-robotic interaction studies will be beyond the focus of this paper. To inspire further research incorporating novel human-machine interfaces to improve the efficiency, intuitiveness, and robustness of teleoperated RUSS, we refer readers to two comprehensive surveys on interface approaches <cit.>. Specific to medical applications, Abdelaal et al. provided a crucial review of interfaces that have been used or tested in vivo <cit.>. §.§ Clinical Feasibility Evaluation   Teleoperated RUSS can fully utilize the advanced knowledge of experts. Compared to autonomous RUSS, teleoperated RUSS is easier to be certified for clinical use due to the fact that all diagnostic decisions and scan trajectory are made by experts. To achieve this objective, clinical studies have been performed using different teleoperated RUSS for a number of examinations. Clinical evaluations of existing teleoperated RUSS solutions have been categorized according to their clinical applications as TABLE-<ref>. §.§.§ Abdominal Imaging The abdomen is often examined using US images, which is one of the primary focuses of teleoperated RUSS. To validate the feasibility and diagnostic accuracy of such systems, Arbeille et al. evaluated a preliminary version of a teleoperated RUSS for general abdominal imaging on 20 patients <cit.>. The expert was in a room at some distance (20-50 km) from the patient's site. The time delay between experts and the PSM was less than 0.1 s using ISDN (terrestrial) telephone lines and less than 0.5 s using satellite links. 
To evaluate the performance, the authors validated their approach on four different groups of organs. The results demonstrated that the expert could image the main views (longitudinal and transverse) of the liver, gallbladder, kidneys, aorta, pancreas, bladder, and uterus on the patient. Only the heart and spleen were not identified, in two and four of the 20 cases, respectively. The experiments also showed that sonographers can master the teleoperated RUSS in less than 3 hours, while the examination time (27±7 min for three or four organs) was approximately 50% longer than that of the traditional US examination. In a follow-up study, Arbeille et al. further compared the performance of robotized and conventional US examinations on 87 patients examined in the emergency department at Tours University in France <cit.>. The results demonstrated that each organ (e.g., liver, gallbladder, pancreas, kidney) can be correctly imaged by a robotized system in 91–100% of cases compared with the conventional US examinations. In addition, the mean visualization score for the teleoperated RUSS was 87.4% for the abdomen, while there were no false diagnoses made in this study <cit.>. In another clinical evaluation, Adams et al. also assessed the feasibility of performing adult abdominal US examination using a remote RUSS on 18 patients at the University of Saskatchewan, Canada <cit.>. Telerobotic examinations were successful in 92% of the examinations on various abdominal organs (given the organs were sufficiently visualized on the conventional examination); five pathological findings were identified on both modalities, and three and two findings were only identified using the conventional and telerobotic systems, respectively. Furthermore, they reported that all participating patients were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>. Martinelli et al. carried out a study on 58 patients with a focus on the aorta <cit.>. The examination results demonstrated that all aneurysm cases were correctly detected by both conventional scans and the teleoperated RUSS. Furthermore, the quantitative results show that the diameter of the patient's aorta can be accurately measured. The interobserver correlation coefficient was 0.98 and the difference in measurement was less than 4 mm in 96.3% of cases. In addition, the examination durations (mean±SD) of the teleoperated system and traditional examinations were 17±8 min and 12±7 min, respectively. Finally, they also reported that the acceptability among patients was 84±18%, which is similar to the result in <cit.>. §.§.§ Cardiovascular Imaging Compared with general abdominal organs, cardiac examinations are considered more technically demanding procedures. Regarding echocardiography, the clinical needs include the visualization and evaluation of the four cardiac chambers, measurements of aortic flow, and the identification of mitral, tricuspid, or aortic valve leaks or aortic stenosis <cit.>. To successfully perform tele-echocardiography, the probe was held by a 3-DOF robotic arm providing three orthogonal rotations, and then the robotic arm was fixed to a motorized plate for obtaining translational movements <cit.>. The results on 41 cardiac patients demonstrated that similar measurements can be achieved in most cases (93–100%). Among the 71 valve leaks or aortic stenosis patients, 61 (86%) were successfully detected using tele-echocardiography and there was no false-positive diagnosis reported. Boman et al.
also carried out a similar study on cardiovascular examination in Sweden <cit.>. The evaluations were carried out in three different stages. In stage 1, there were 27 patients in a different place than sonographers with a distance of 80 km. Regarding the other two stages, a total of 31 subjects were recruited in a place at 135 km from the experts. The results indicate that real-time echocardiographic examinations are possible <cit.>. Boman et al. compared the tele-echocardiography examination with the standard of care referral approach in terms of time and diagnosis <cit.>. 19 patients were randomized to remote consultation and imaging, and 19 to the standard of care consultation. The results demonstrated that the processing time was significantly reduced in the remote one (only 26.5 days vs 114 days for the standard one). Therefore, compared with the standard of care approach, patients were more satisfied with the remote consultation strategy, which offered an increased rapidity of diagnosis and the likelihood of receiving faster patient management <cit.>. In 2007, Sekar et al. evaluated tele-echocardiography examination in the diagnosis of congenital heart diseases in pediatric populations <cit.>. In this 3-year study, 102 pediatric telecardiology examinations were performed between a tertiary care cardiac center and a remote rural hospital located 193 km away. Pathology was ruled out in 50 children by tele-echocardiography. In addition, heart lesions were identified in 52 children and 30 among them required surgery. By using teleoperation techniques, the total cost for such remote care can be controlled under 90 USD, which becomes considerable for most developing areas <cit.>. Sengupta et al. further validate the feasibility of long-distance (trans-Atlantic) telerobotic US scans for vascular examinations <cit.>. The results showed that the procedure to localize the remote probe along the short axis of the carotid artery took less than 60 s and an examination could successfully be conducted in 4 min. Avgousti et al. employed 4G wireless networks in order to reduce the time delay for live tele-echography <cit.>. However, it is also important to note that the communication stability and potential signal interference may lead to uncertainty. §.§.§ Obstetric Imaging Obstetric imaging is also one of the most frequent applications of US examination in clinical practice. From the beginning phase to the birth of infants, more than five fetal examinations are carried out and such examinations are important to evaluate the health of both fetuses and pregnant women <cit.>. To assess the feasibility of teleoperating fetal US examinations in pregnant women, Arbeille et al. carried out a study on 29 pregnant women in an isolated hospital 1700 km away using both conventional and teleoperation examinations <cit.>. The results demonstrated that the biometric parameters, placental location, and amniotic fluid volume can be correctly measured in most cases (93.1%) using a teleoperated RUSS. Only in two cases, femur length could not be correctly measured. The mean duration of US examination of the remote examinations (18 min) was longer than that of conventional examinations (14 min). Another study with a similar objective was presented by Adams et al. on 30 patients in Canada <cit.>. 
In this study, the results indicated that there was no statistically significant difference between teleoperated RUSS and conventional measurements of head circumference, biparietal diameter, or single deepest vertical pocket of amniotic fluid; however, there were slight differences in the measures of abdominal circumference and femur length. In addition, 80% of the fetal structures could be sufficiently acquired by the telerobotic system (range, 57%–100% for each patient). Finally, a survey of participants showed that 92% of patients were willing to have another telerobotic examination in the future. The aforementioned studies demonstrated the feasibility of using teleoperation to remotely carry out fetal US examinations while keeping biometric measurements as precise as those of the conventional approach. §.§.§ General Applications Georgescu et al. reported the usability of a teleoperation system for general applications over one year <cit.>. In total 300 patients were involved: 138 supra-aortic vessels, 68 abdomen, 33 thyroid, 30 lower limb vein, 20 pelvis, 7 kidneys, 3 small parts, and 1 obstetrics. The reported average duration of a teleoperation examination was 24±5 min over all 300 examinations. In addition, the results showed that the use of teleoperation in general medicine practice significantly reduced the waiting time (saving several days) for patients, and information similar to that of conventional US examinations was obtained. It also contributed to saving costs for the healthcare system and facilitating earlier treatment of conditions, potentially leading to improved patient outcomes and less time in care facilities <cit.>. Most recently, a teleoperated RUSS was tested on 22 COVID-19 patients, and the authors concluded that teleoperated RUSS can be used to diagnose common abdominal, vascular, and superficial organ pathologies with acceptable accuracy <cit.>. § ENABLING TECHNOLOGIES FOR AUTONOMOUS RUSS Recently, interest in autonomous RUSS has increased relative to teleoperated RUSS. Autonomous RUSS has the potential to achieve standardized and reproducible US acquisitions. RUSS solutions further release sonographers from burdensome manipulation tasks and allow them to focus on diagnosis, which requires deep anatomical and physiological knowledge. The move of the research community toward autonomous RUSS has also raised novel scientific questions, which define important and exciting challenges. To develop autonomous RUSS, we first need to understand how human sonographers perform US scans. In this paper, we call this process the recovery of the “language of sonography". The community has not investigated this consciously, but this path can be traced throughout the analysis of the state of the art. The adjustment of contact force, probe position, and orientation for optimal image acquisition has often been the first focus. Then, it is also crucial to plan an appropriate path for covering the area of interest and to compensate for the potential motion and deformation of the target anatomy during imaging. These points will be discussed explicitly and in more detail in the following sections, when we review some of the most relevant state of the art.
In this section, three fundamental techniques used in RUSS are elaborated: 1) compliant control, used to apply and maintain a given contact force between the US probe and patients, 2) orientation optimization, to determine the appropriate probe orientation for a given scan (often orthogonal to the contacted surface), and 3) path planning, to best localize and visualize the anatomy of interest. §.§ Force Control Approaches Due to the inherent characteristics of US imaging, a certain contact force between a US probe and human tissues is required to optimize acoustic coupling, thereby achieving high-quality US images. It is challenging for human operators to maintain a constant force during US scans. A varying force will result in non-homogeneously deformed US images. Thus, a dedicated force controller is needed to maintain the contact force during scans. Furthermore, such a controller is also crucial for guaranteeing the safety of patients by preventing excessive force. Depending on the target tissues, the acceptable contact force is less than approximately 20 N <cit.>. Meanwhile, a small force (less than 1.2 N) commonly indicates that the probe is not in complete contact with the skin <cit.>. It is noteworthy that this subsection only summarizes the force control approaches (both software- and hardware-wise) that have been used for developing RUSS. A more general and comprehensive summary of force control can be found in <cit.>. §.§.§ Hybrid Force/Position Controller The traditional hybrid force/position control approaches are implemented in two decoupled subspaces, taking a position control law and a force control law into account, respectively <cit.>. Both force and position differences between current and desired values are fed into the robotic dynamic model to update the manipulator's motion. To apply a constant contact force between a probe and subjects, Gilbertson et al. implemented a hybrid position/force controller for a 1-DOF hand-held RUSS <cit.>. In this study, they simplified the contact model as two interfaces (human-machine and probe-patient) using a set of masses, springs, and dampers. Thereby, the contact force can be dynamically related to the probe position and velocity by selecting proper interface parameters. A similar hybrid position/force method based on an external 6-DOF force/torque (F/T) sensor was designed for 6-DOF RUSS <cit.>. Their approaches can automatically switch between velocity and force control modes according to the contact condition (free or contact space). External hybrid force/position control is also often used in RUSS. The external controller first updates the position based on the force; then, the positional error is controlled using an internal servo. Pierrot et al. used a PI controller to maintain the contact force and a PID controller to continually run the joint position servo loop for a 7-DOF robotic US system <cit.>. Similarly, Ma et al. used a PID controller to actively compute the variation of the Cartesian position based on the force error, and then used a position controller (provided by the manufacturer) in the inner loop <cit.>. To limit the negative effect caused by potential force measurement errors, a low-pass filter and a moving filter were used to smooth the measured force. The authors claimed that the implementation of such an external force controller is simpler and can be adapted for any kind of robot <cit.>.
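The external force-control scheme described above lends itself to a compact illustration. The following Python sketch shows an outer PI force loop that filters the measured contact force and converts the force error into a small position correction along the probe axis, which is then handed to the robot's inner position servo (assumed to be supplied by the vendor, as in the works cited above). The gains, the filter coefficient, and the class and method names are illustrative assumptions, not values or interfaces from the cited systems.

class ExternalForceLoop:
    """Outer PI force loop producing a position offset along the probe axis."""

    def __init__(self, f_des, kp=0.3e-3, ki=0.05e-3, alpha=0.2):
        self.f_des = f_des          # desired contact force [N]
        self.kp, self.ki = kp, ki   # gains [m/N] and [m/(N*s)]
        self.alpha = alpha          # low-pass filter coefficient (0..1)
        self.f_filt = 0.0           # filtered force estimate [N]
        self.err_int = 0.0          # integrated force error [N*s]

    def step(self, f_meas, dt):
        # first-order low-pass filter to suppress force-sensor noise
        self.f_filt = self.alpha * f_meas + (1.0 - self.alpha) * self.f_filt
        err = self.f_des - self.f_filt
        self.err_int += err * dt
        # too little force -> advance along the probe axis; too much -> retract
        dz = self.kp * err + self.ki * self.err_int
        return dz                   # Cartesian correction [m] for the inner position servo

In use, the returned offset would simply be added to the commanded probe pose at every control cycle before it is sent to the position controller. The compliant (impedance/admittance) formulations discussed next replace such an explicit position correction with a mass-spring-damper relation between force and motion.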
§.§.§ Compliant Controller Regarding the hybrid force/position controller, a position controller is employed either in a sub-space for the traditional ones or in the low-level servoing loop for the external ones. Since the environment is unknown in real scenarios, the position control may result in excessive force when moving to the computed positions. To ensure the safety of patients, two compliant control methods (impedance control and admittance control) are often used. The dynamic model of a compliant controller is described by Eq. (<ref>) <cit.>. F + F_ext = K_m e + D ė + M ë where F is the applied force/torque in Cartesian space, e = (x_d - x_c) is the Cartesian position and orientation error between the current pose x_c and the target pose x_d, F_ext is the desired force/torque, and K_m, D and M are the stiffness, damping and inertia matrices, respectively. Based on Eq. (<ref>), compliant performance can be achieved in all directions by choosing different K_m and D, which enables safe/soft interactions between RUSS and patients. Regarding Eq. (<ref>), there are two different interpretations, which refer to impedance control and admittance control, respectively. For the former, the pose error is seen as feedback, and the computed force and torque are applied to achieve the expected force F_ext. On the other hand, for an admittance controller, the force applied at the end-effector F is measured as input, while the output is the Cartesian movement. Since admittance control only requires the measurement of external force/torque, it is often used for low-cost robots without accurate joint torque sensors, e.g., Universal Robots <cit.>. On the contrary, impedance control is more often used when robotic manipulators are equipped with accurate joint torque sensors, e.g., the KUKA LBR iiwa <cit.> and the Franka Emika Panda <cit.>. When the stiffness of the environment diminishes, the performance of impedance control will decrease due to friction and unmodeled dynamics, while the performance of admittance control will increase <cit.>. Therefore, admittance control could achieve better performance on soft tissues, while impedance control could be more suitable for stiff tissues. §.§.§ Spring-based Mechanism Since some clinical applications, e.g., fetal examination, are highly sensitive to the applied force during US examinations, Tsumura et al. proposed a spring-based mechanism to maintain the contact force and passively adjust the probe pose with respect to the constrained surface <cit.>. Compared to the aforementioned sensor-based controllers, the passive mechanism can apply a constant force quickly and safely, especially in unstructured environments. Wang et al. proposed a spring-loaded ball clutch to limit the maximum contact force <cit.>. In normal cases, the detent structure is in its engaged position with the ball restricted by a preloaded compressed spring. Once excessive force occurs, the ball comes out of the detent hole. Thus, the involved clutch joint loses the ability to transmit torque <cit.>. In these ways, the maximum contact force of such mechanisms can be mechanically limited to 10 N <cit.> and 21.98±0.96 N <cit.>. Yet, this approach cannot precisely and dynamically control the contact force. To address this challenge, Housden et al. extended their work <cit.> by integrating a customized multi-axis F/T sensor to allow active adjustment of the contact force <cit.>.
The designed F/T sensor consists of two pieces with eight legs in total, and the displacements of the legs were measured with eight optoelectronic sensors. By using the measured force as feedback, this system can actively adjust the contact force toward the desired values <cit.>. Bao et al. designed a parallel, motor-spring-based end-effector to actively generate a certain force for US scanning <cit.>. The force is adjusted by changing the position of two sliders connected to a moving platform using springs. The symmetrical configuration constrains the contact force to remain aligned with the probe's centerline. §.§.§ Others Huang et al. attached two thin force sensors (IMS-Y-Z03, I-Motion Inc., China) on both sides of the front face of a linear probe <cit.>. Then, a simple rule was implemented to control the applied force: the probe moves downward 3.1 mm when the force is smaller than 1 N, the probe moves upward 3.1 mm when the force is larger than 8 N, and scans are only performed when both sensors' measurements are in the range of [1, 8] N. Their team extended this work by replacing a 3-DOF linear stage with a 6-DOF robotic arm <cit.>. A robotic arm enables in-plane rotation; thereby, an updated rule was used to maintain a constant force: the probe moves downward 0.2 mm when both forces are smaller than the desired force, the probe moves upward 0.2 mm when the forces are larger than the desired one, and the probe rotates 0.2^∘ (in-plane) when the two forces are different. Compared with other force adjustment approaches, this method is easy to implement, while the handcrafted rule needs further improvement to adapt to inter-patient variations. §.§ Probe Orientation Optimization The relative probe orientation with respect to the contacted surface is also a key factor dominating the image quality. For some applications, like US imaging of bone, the US probe orientation is often optimized to be orthogonal to the constrained surface <cit.>. In certain applications, such as image-guided interventions, the US probe may need to be tilted from the orthogonal direction in order to better visualize the targets and/or inserted instruments <cit.>. In this section, the articles discussing probe orientation adjustment are summarized in three subcategories: in-plane orientation, out-of-plane orientation, and full orientation optimization. §.§.§ In-Plane Optimization The in-plane orientation of a 2D probe represents the rotation around the short axis of the probe (see Fig. <ref>). In other words, in-plane motion only happens in the plane of the US view. In <cit.>, the in-plane rotation was optimized using the visual servoing technique to improve the general image quality. To quantitatively assess the image quality and further use it as the input signal for servoing control, the US confidence map <cit.> was computed for individual images. The US confidence map provides a pixel-wise measure of signal loss based on a simplified model of wave propagation in tissues. The computed confidence map is often used as a metric of image quality <cit.>. However, it is worth noting that the quality here refers only to the strength of the US signal. The best US images according to the confidence map may not be the best images expected by clinicians in examinations. To obtain US images leading to higher overall confidence values, the probe's orientation was often optimized to the orthogonal direction of the surface <cit.>. In addition, Jiang et al. and Welleweerd et al.
also employed US confidence map-based in-plane adjustments to improve sub-optimal contact conditions for limb arm and breast scans <cit.>, respectively. Huang et al. adjusted in-plane orientation to balance the contact forces measured at two endpoints on the probe tip <cit.>. Zettinig et al. proposed a 3D-to-3D volume registration to adapt the movement of target anatomy; then they further optimized the in-plane orientation to align the current needle guideline with the planned path on a preoperative CT or MR <cit.>. §.§.§ Out-of-Plane Optimization The out-of-plane motion is defined as the rotation around the probe's axial direction (see Fig. <ref>). In <cit.>, authors claimed that in-plane adjustment only benefit axial aortic scans marginally; therefore, they optimized out-of-plane rotation to improve the imaging quality in terms of overall US confidence values <cit.>. A fixed rotation angle interval was applied step by step. However, it is uncommon for existing articles to only optimize the out-of-plane orientation. §.§.§ Full Orientation Optimization To estimate the normal direction of a constrained surface, depth camera-based approaches are most often used in the existing literature <cit.>. The advantage of these approaches is high computational efficiency, while the main limitation is relatively low accuracy of the estimations. Recently, Ma et al. designed a probe holder with four laser distance sensors to actively adjust the probe's orientation to be normal to the surface <cit.>. The results demonstrated their adjustment can be computed in real-time. In addition, Jiang et al proposed a method to identify the normal direction of the restricted surface using contact force for out-of-plane optimization and US images for in-plane optimization <cit.> (see Fig. <ref>). The bone boundary was used to demonstrate the probe orientation's impact on the imaging quality. In this study, Jiang et al proposed a feature called the smooth derivative of contact force, which enabled the accurate estimation of the out-of-plane orientation without the requirement for an expensive external F/T sensor <cit.>. To further improve the accuracy of the estimated normal direction, Jiang et al. deduced the underlying mechanical model based on the force measured during two orthogonal fan motions at a given contact point <cit.>. The upgraded method works for both convex and linear probes, and due to its purely force-based nature, it is invariant to image noises. Yet, due to nonnegligible deformations of the soft tissue (e.g., breast), the force-based approaches are more suitable for orthopedic applications (e.g., limbs and back). Besides, a number of studies optimized the probe's full orientation solely using US images. Welleweerd et al. proposed a framework for automatic breast scanning without requiring patient-specific models <cit.>. To achieve this, in-plane optimization was firstly carried out to ensure acoustic coupling between the probe and the examined breast. Once the mean confidence value <cit.> of the resulting image is inside the given range, the probe will be moved tangentially to the breast. If the current mean confidence value differs from the specified range, out-of-plane corrections will be carried out to maintain constant confidence. The mean error between the estimated normal directions and ground truth at all points of trajectory was 12.6^∘ out-of-plane and 4.3^∘ in-plane <cit.>. Chatelain et al. 
extended their preliminary work <cit.> from in-plane control of a 2D probe to full-orientation control of a 3D wobbler probe using the confidence map <cit.>. Recently, Osburg et al. used Convolutional Neural Network (CNN) to compute the surface normal at the point of contact based on native 3D volumetric data <cit.>. Instead of identifying the normal direction of constraint surfaces, Jiang et al. estimated the normal direction of a subcutaneous tubular structure directly based on the segmented vessels of the most recent images <cit.>. The vascular boundaries obtained at different positions contain the local geometrical information (radius and centerline) of the blood vessel; thus, the US probe can be oriented orthogonally to the estimated centerline of the local segment of the tubular structure. §.§ Path Generation for Autonomous US Scanning   In order to accomplish US examinations, a proper path is essential to visualize the object or locate the lesion on human tissue, e.g., along a target blood vessel and covering a volume of interest. This section categorizes the existing path planning methods as 1) offline scan path generation methods and 2) online scan path generation methods. §.§.§ Offline Scan Path Generation To locate and evaluate the length and severity of stenosis for planning the treatment of peripheral arterial disease (PAD), Merouche et al. directly give the scanning path by manually moving the robotic arm along the target artery <cit.>. To address the potential visualization issue caused by small motions after path planning procedures and to facilitate the tracking of the artery during automatic scans, the probe's position was tuned to maintain the cross-sectional lumen horizontally centered in the US view. Similarly, Jiang et al. manually drew a scan path on the surface of a vascular phantom, and then extracted the path based on RGB images <cit.>. Considering autonomous path planning, scan trajectories can be determined on pre-scanned images (e.g., MRI and CT); then, transferring the planned path to the current setup by registering the live US or RGB-D image to the preoperative atlas. Hennersperger et al. validated the feasibility of autonomously transferring a planned scan path from MRI to the current setup based on the registration between the MRI and 3D surface point clouds acquired by a Kinect camera (Microsoft Corporation, USA) <cit.>. Similarly, Langsch et al. computed the scanning trajectory of an aorta by registering 3D US volume to the patient's MRI <cit.>. However, due to the need for tomographic data (MRI or CT) of each patient, the advantage of these approaches is reduced in clinical practice. To further address this challenge, Virga et al. carried out non-rigid registration between the patient-specific 3D surface extract from a depth camera and a generic preoperative MRI template <cit.> [see Fig. <ref> (a)]. Specific to thorax examinations, Jiang et al. presented a skeleton graph-based non-rigid registration between the cartilage point clouds extracted from a tomographic template and US images of patients <cit.>. To further improve the registration accuracy, Jiang et al. introduced the dense skeleton graph to replace the manually designed key points of the skeleton <cit.> [see Fig. <ref> (b)]. Akbari et al. presented a complete US-based approach to find a proper trajectory for breast US imaging <cit.>. 
A manual prior scan is carried out in advance; then, the desired trajectory for the post scan is computed based on geometrical analysis of the target using the pre-scanned US images. In addition, the scanning path is often planned solely on the surface extracted by an external camera directly <cit.>. Mustafa et al. extracted the patient's abdomen surface from an RGB image acquired using a web camera (2D) based on a preset HSV color filter; then, the position of the liver was estimated and a four-step acquisition protocol was applied <cit.>. Due to the lack of imaging depth information, the camera needed to be carefully configured anteriorly to subjects. Ma et al. used a Realsense SR305 RGB-D camera (Intel Corporation, USA) to extract the 3D surface data using a depth threshold and further planned the scanning path on the extracted 3D surface <cit.>. Huang et al. extracted 2D skin surfaces of patients from an RGB image using the rule “red>Green>Blue" <cit.> [see Fig. <ref> (c)]. They claimed this is more generic and robust than the threshold-based approaches. Then, a “snake" trajectory was automatically generated to cover the area of interest. Suligoj et al. used the same logic to generate scan paths over a region manually annotated in an RGB image <cit.> [see Fig. <ref> (d)]. Recently, Ma et al. proposed a learning-based method to extract the human abdomen from a depth camera, and further divided the extracted region into four parts for autonomously generating scanning paths of the lung <cit.>. The aforementioned path planning approaches for US scanning were directly determined on the patient's surface. However, the optimal coverage of an underlying volume of interest is not considered. To address this challenge, Graumann et al. proposed a method to automatically compute a suitable scanning path to cover a volume of interest easily selected in preoperative images <cit.>. Depending on the sizes of targeting volumes, one or multiple lines were automatically generated for full coverage. To automatically determine the optimal probe position on the skin to monitor the motion of the internal organ of interest, Bruder et al. computed patient-specific US image quality from a given CT scan <cit.>. To further consider the full coverage of subcostal organs like liver and heart, Göbl et al. proposed a framework integrating both geometrical and physics-based constraints to estimate the best US scanning path with respect to the limited acoustic windows <cit.>. The poses maximizing the image quality (i.e., less acoustic attenuation) are finally selected. The results on both human and phantom data demonstrated that superior image quality was achieved using their method in comparison with a naive planning approach while maintaining the necessary coverage of the target. §.§.§ Online Scan Path Generation Although the off-line path planning are more often used in RUSS, some online planning approaches based on live US images have also been developed. Online approaches can generate more flexible trajectories than offline approaches, which can effectively guarantee the target's visibility inside the US view, even in the presence of unexpected motion. In <cit.>, Jiang et al. proposed a pipeline to enable a RUSS to automatically perform US screening of tubular structures based only on real-time US image feedback. The US probe was manually positioned on the tubular structures [see Fig. <ref> (e)]. 
Afterward, a U-Net was activated to constantly segment cross-sectional vessel lumen from US images; and thereby, a set of boundary point clouds were extracted and further used to estimate the geometry (centerline and radius) of the local artery sections. To completely scan the whole artery, the US probe was moved forward in the direction of the estimated local vessel centerline in real-time. In addition, similar work was accomplished by Huang et al. for automatically screening of carotid artery based on the US image feedback <cit.>. In <cit.>, Kim et al. employed a CNN as a classifier for real-time B-mode images to update the probe position for heart examinations. Since the next action is planned in real-time, the online path planning approach can facilitate the robust tracking of the target during autonomous scans. To ensure the scanning quality to facilitate the clinical diagnosis, Jiang et al. first presented an online segmentation quality-aware method based on the Doppler signal <cit.>. Once the segmentation performance is considered low, the probe orientation will be adjusted to enhance the Doppler signal and thereby improve the accuracy and completeness of the reconstructed 3D vessel. The significance of this study lies in its ability to inspire future research into quality-aware, closed-loop robotic scanning. § APPLICATION-ORIENTED ADVANCED TECHNOLOGIES FOR AUTONOMOUS RUSS   The aforementioned three enabling technologies (force control, orientation optimization, and scanning path generation) have been extensively studied in the existing literature. However, the enabling technologies can only guarantee the quality of US acquisition in ideal cases. To further enable the implementation of extensive and autonomous RUSS screening programs, more advanced technologies tackling practical challenges in real scenarios should be considered. In this section, four distinctive techniques are discussed: 1) Motion-aware US imaging: regarding the autonomous scanning of the anatomy of interest, the potential body motion should be monitored and properly compensated to achieve accurate and complete 3D anatomy geometry. 2) Deformation-aware US imaging: due to the inherited characteristic of US imaging, a certain force is necessary for properly visualizing the underlying anatomy of interest; thereby, the inevitable force-induced deformation hinders the correct measurements of the target anatomy. 3) US visual servoing: by providing pixel-to-pixel control to accurately move the probe to reach the desired cross-sectional images and guarantee the visibility of the object of interest in US views. 4) Elastography imaging: benefiting from the accurate control over probe position and contact force between the probe and tested objects, the underlying tissue properties can be estimated for diagnosis using RUSS. §.§ Motion-Aware US Imaging   §.§.§ Periodic Motion Detection and Compensation In this context, periodic or quasiperiodic motions refer primarily to internal physiological motions such as respiration and pulsation. Because of the advantages of non-invasive and real-time performance, US can be used to monitor internal tissue motion <cit.>. In free-hand mode, it is extremely difficult to compensate for such motions to achieve stable US images. To tackle this challenge, RUSS has been seen as a promising solution <cit.> because robots usually can provide higher accuracy in terms of positioning and repeatability than humans <cit.>. Esteban et al. 
reported that RUSS can intrinsically compensate for small motions caused by breathing or human tremor using compliant force control <cit.>. Heunis et al. employed a 6-DOF Stewart platform to mimic the involuntary periodic movements that occur during scans; and further proposed a pipeline to create an effective scanning path to cover a surface while compensating for these motions and adhering to preset contact forces <cit.>. This movement was also compensated for by using force control. The results demonstrated that the reconstruction error of arteries was 1.9±0.3 mm in non-static scenarios. To actively compensate for the respiration-induced motion in the liver or prostate, Ipsen et al. applied a constant force control to accomplish continuous US scans in long-term monitoring <cit.>. Furthermore, visual servoing (Section <ref>) is another potential solution for compensating the respiration motion <cit.> and pulsation caused by heart beating <cit.>. §.§.§ Non-Periodic Motion Detection and Compensation Subjects are often adjusted by sonographers to better visualize the target during scans. Thus, the ability to compensate for non-periodic patient’s motion is crucial for the practical use of RUSS. A representative example of the influence caused by non-periodic motion of the imaged patients is shown in Fig. <ref>. The scanned results are significantly different when the same object is kept stationary and moved during scanning. To obtain complete and accurate 3D US scans of a vascular phantom in the presence of rigid motion, Jiang et al. proposed a vision-based RUSS to actively compensate for such non-periodic motion <cit.>. In this study, five passive markers were rigidly attached to the imaged phantom surface and further used to monitor the potential target motion. Once the target is moved, the motion-aware RUSS automatically computes the transformation and updates the trajectory to recover the scanning from the breaking point. To eliminate the requirement for careful configuration of the passive markers in real scenarios, Jiang et al monitored the patient's motion based on the real-time segmentation of objects in RGB images and computed the compensation matrix using extracted surface point clouds acquired before and after the motion <cit.>. The results on a realistic arm phantom demonstrate the effectiveness of this marker-less compensation method. The advantages of robotic US (accuracy and stability) and free-hand US (flexibility) were combined by including active compensation for potential patient motion during scans. However, such systems only considered the rigid motion of objects. To further tackle non-rigid articulated joint motions, Jiang et al. proposed a vision-based framework, combining joint detection and non-rigid surface registration, to automatically update scanning trajectories from a template to individual volunteers with varying arm gestures <cit.>. The robustness and accuracy of the proposed system have been evaluated on multiple volunteers. §.§ Deformation-Aware US Imaging   Due to the probe-patient contact force, shape distortion of the visualized anatomy's geometry is inevitable, particularly for soft tissues such as superficial blood vessels (see Fig. <ref>). The force-induced deformation reduces the precision and repeatability of US images, and thereby could further limiting the diagnostic accuracy and consistency, especially for computer-assisted diagnosis. To provide precise and reliable US images, pressure-induced image deformation needs to be properly corrected. 
Unlike human sonographers, robots/computers are not trained to make the diagnosis based on deformed images. Therefore, such corrections are particularly important for RUSS. To achieve distortion-free images, Treece et al. combined non-rigid image-based registration with position sensing to correct pressure-induced deformations for free-hand 3D imaging <cit.>. Sun et al. computed 2D deformation fields based on the estimated pixel displacements and corresponding contact forces using polynomial regression models <cit.>. The pixel displacements were computed based on flow techniques using raw echo frequency (RF) data. Based on their experimental results, the parabolic polynomial regression model significantly outperforms the linear model. However, there was no significant performance difference between 2nd order and higher-order polynomial models. Burcher et al. build a model using the finite element method (FEM) to predict the deformation <cit.>. Nonetheless, the performance of the FEM-based approach is heavily dependent on the prior knowledge of tissue properties, which are usually hard to measure in real scenarios. To overcome this challenge, Dahmani et al. employed a linear elastic model to approximate personalized biomedical properties of involved tissues from the images <cit.>. To alleviate the inter-variation of pressure-induced deformation between the acquired images along a scanning path, RUSS is often required to maintain a constant force during the screening. To correct distorted images, Virga et al. built a 4th-order polynomial model to regress the pixel displacement with respect to contact force and further propagate the computed deformation field at sparse sampling points to the whole sweep direction <cit.>. The sampling points were selected manually on the first frame and this method took 186 s on average to compute a deformation field at one location. To speed up the process for compression-free 3D volume, Jiang et al. proposed a stiffness-based deformation correction approach, incorporating image pixel displacements, contact forces, and nonlinear tissue stiffness <cit.>. To obtain patient-specific stiffness models, robotic palpation was performed at sampling positions. Since tissue stiffness is the key factor dominating the deformation, the optimal deformation regression models at sampling positions can be propagated to other positions on the trajectory by interpolating the estimated local stiffness. However, the state of the art in the field of US image correction for force-induced deformation is not yet applicable to clinical practice. To further achieve this objective, a pixel-wise tissue properties estimator and anatomy-aware correction system should be developed to bridge the gap between different anatomy and different patients. §.§ Ultrasound Visual Servoing   Understanding the interaction of sonographers with the patient and the US probe is of high importance when developing RUSS. In order to acquire B-mode images of the anatomy of interest, sonographers perform a rough positioning of the probe on the human body. Consecutively, the B-mode images are analyzed while adjusting the probe to obtain the final view with the anatomy of interest in focus. This dynamic image-based adjustment and exploring of the anatomy can be defined as “visual servoing". 
While this has been the subject of research in the last decades, we believe that the introduction of deep learning and the advances in reinforcement learning could allow the scientific community to further understand and solve this image-based optimization problem. Recent work that has been published in this field <cit.> can be taken as an indicator for being a potentially interesting research topic in the coming years. In this section, we review some prior work on visual servoing that can be considered as a development of the state of the art towards the goal of autonomous intelligent exploration of particular anatomy and physiology views needed for examination and treatment. §.§.§ Autonomous US Probe Guidance To automatically rediscover a previously registered US imaging view, Bachta et al. developed an image-based visual servoing approach using boundary information and tested it in a simulator <cit.>. The target edge was retrieved using a polynomial regression analysis, and the optimized coefficients were used as visual features to guide a robot-controlled probe to reach a desired image section. However, this method suffers from image noise and is limited to a specific shape. To overcome this challenge, Mebarki et al. employed image moments as visual features <cit.>, which are generic and robust with respect to measurement perturbations. To further achieve a model-free servoing task on unknown targets, they compute the interaction matrix in real-time using B-mode images <cit.>. The experiments on gelatin phantoms demonstrated promising results in terms of minimizing the visual-features error; however, only local convergence can be guaranteed. In particular, in the case of a roughly symmetric object, similar geometric properties can be observed from different cross-sectional images. To overcome this shortage, Nadeau et al. defined a set of 2D features based on a three-dimensional space using a motorized 3D probe <cit.>. To accurately and actively navigate the probe to a given US plane using the visual servoing technique, Duflot et al. first used the subsampled shearlet coefficients as novel visual features as an input to the controller, instead of pure image signal information, i.e., point, lines, moments, etc. <cit.>. Since a set of noiseless and redundant features can be extracted using shearlet coefficients, promising performances of their approach in terms of accuracy, repeatability, and robustness could be achieved. A comprehensive comparison between shearlet-based and photometric-based visual servoing controllers was carried out in both simulator and physical phantom <cit.>. §.§.§ Imaging Stabilization and Object Tracking Visual servoing has also been used to track anatomies of interest and perform online compensation of the anatomy’s motion to stabilize the real-time US images. Without compensating for some potential motion like breathing, the resulting images will be affected. This will lead to inaccuracies in the estimation of the precise location of intervention target tissues. US visual servoing technologies are developed to compute the corresponding probe adjustment against environment dynamics based on real-time image feedback. Nadeau et al. presented an intensity-based approach to maintain the view of an organ while compensating for the physiological motion of the patient <cit.>. Since the computation of image moments depends on object segmentation, image intensity values were directly used as visual features. 
In an extension work, they adapted their method for 3D probes and did first validations on soft animal tissues <cit.>. In 2015, Nadeau et al. applied a similar intensity-based visual servoing method to keep a target centered within a virtual imaging view in the context of intracardiac surgery <cit.>. Its effectiveness has been validated on in-vivo data. Besides cardiac applications, Nadeau et al. applied visual servoing to stabilize respiratory motion by compensating periodic disturbances with a predictive controller <cit.>. In addition to intensity-based approaches, Krupa et al. employed US speckle information to estimate both in-plane and out-of-plane motion, thereby, realizing the tracking of soft tissue movements in US view <cit.>. Speckle is often considered to be noise, however, it conveys valuable data on the tissue of interest. Speckle contains spatially coherent information between consecutive US images because it physically results from coherent reflections of small components in human tissue. The preliminary experiments performed on a phantom with 2-DOF in-plane and out-of-plane motions demonstrated the potential of a speckle-based servoing approach. The validation for 6-DOF motion was further reported in <cit.>. To further consider soft tissues' deformation, Royer et al. developed a physics-based model to facilitate the accurate tracking of the target of interest in 3D US images <cit.>. §.§.§ Imaging Quality Optimization Visual servoing techniques have also been investigated to improve imaging quality. Chatelain et al. first introduced the US confidence map as a new feature for visual servoing <cit.>. The authors claimed that the US imaging quality could be improved by optimizing the probe orientation to maximize the overall confidence value. An interesting extension using 3D probes instead of 2D probes has been reported in <cit.>. To evaluate the effect of the proposed method in real scenarios, in-vivo validations were performed on healthy volunteers. In addition, Patlan et al. directly employed elastography as the input of the visual servoing controller <cit.>. To optimize the quality of the resulting elastography, the probe was automatically actuated to image a soft tissue object from different views, and further fused to enhance the computed elastography. §.§ Elastography Imaging   US elastography is a non-invasive technique aiming to estimate the mechanical proprieties (i.e., stiffness) of the underlying soft tissues. Elastography has gained great interest in applications such as differentiating tumors from healthy tissues (breast, prostate, liver, etc.) and guiding radiofrequency ablation surgeries <cit.>. Based on the underlying principles for producing US elastography, the currently available techniques can be mainly grouped into shear wave imaging and mechanical strain imaging. In shear wave imaging, the propagation speed of shear wave is measured. In addition, for strain imaging, a mechanical compression is performed using a US probe on the object's skin, where the mechanical compression process can be accurately controlled and measured based on robotic techniques. Thereby, accurate and standardized elastography is expected to be achieved. Compared with shear wave imaging, strain images are more common for robotic elastography imaging because it doesn't require specialized US hardware. Schneider et al. computed laparoscopic US elastography using an external vibrator positioned on the patient skin, where the US probe was remotely controlled by da Vinci (see Fig. 
<ref>) <cit.>. Patlan-Rosales et al. computed strain images using real-time radio-frequency (RF) signals to precisely locate subcutaneous tumors <cit.>. In this study, robot-assisted palpation was used instead of an external vibrator and the resulting strain images were used to horizontally maintain the object in the imaging center. To estimate the strain map of moving tissues, Patlan-Rosales et al. estimated and compensated the non-rigid motion using visual servoing on an abdominal phantom <cit.>. Instead of 2D elastography, the same team extended their work to create 3D elastography based on the pre- and post-compressed volumes obtained by a 3D US probe <cit.>. To compute 3D elastography without using a 3D probe, Huang et al. designed a linear sliding track with a position sensor and a height-adjustable holder for conventional 2D probes <cit.>. In this study, the pre- and post-compression echo signals were recorded by manually adjusting the height of the probe holder. Then, paired frames of RF data from the pre- and post-compression sweeps were obtained by interpolation. 2D strain images were computed using the paired RF data; thereby, 3D strain maps were obtained by stacking the computed 2D strain images. To allow automatic acquisition of 3D strain maps, they replaced the linear track with a motorized 3-DOF linear stage <cit.> and a 6-DOF robotic arm <cit.>, respectively. § AI-POWERED ROBOTIC US ACQUISITION   AI techniques have been seen as a promising way to further improve the automation level of RUSS by enhancing the understanding of US images and enabling the intuitive transfer of senior sonographers' advanced physiological knowledge. Such techniques have gained increasing attention most recently. A diverse set of tasks like segmentation and classification of US images have achieved great success. Regarding the field of US image segmentation and classification, a large number of research articles have been published. More detailed techniques can be found in these survey articles <cit.>. In this article, we will only focus on the studies that aim to automatize and/or standardize US scanning using AI-based approaches. More specifically, the approaches tried to automatically search for specific anatomical features or navigate a probe to display standard US planes needed for examinations. These tasks are challenging because RUSS must be able to properly interpret the current states (US image, contact force, probe pose) and the surrounding context. Due to the potential tissue deformation and inconsistent acoustic artifacts of medical US images, guiding a probe to visualize target objects in desired planes is a highly sophisticated task, which requires years of training <cit.>. However, such knowledge is not yet available for robots or computers. Due to the great advantage in feature representation over naive handcrafted features, CNN has the potential to achieve superhuman performance to robustly and accurately locate standard planes on challenging US images. Chen et al. employed a deep CNN to identify the fetal abdominal standard plane from recorded US video <cit.>. Since data collection and manual labeling are time-consuming, a transfer learning strategy was used to guarantee the performance with limited training data. To achieve real-time performance, Baumgartner et al. proposed a deep CNN architecture called SonoNet to automatically detect 13 fetal standard planes as well as provide localization of the fetal structures using a bounding box <cit.>. 
The SonoNet was trained in a weakly supervised mode with only image-level scan plane labels, which make it possible to prepare a large data set. These approaches aid sonographers to locate standard planes that can also improve efficiency in particular for novices. Yet, these methods cannot automatically guide the probe towards target planes or anatomical structures of interest. To enable the ability of RUSS to automatically perform US scans, Mylonas et al. proposed a learning-based approach allowing autonomous execution of US scanning according to expert demonstrations <cit.>. To achieve this objective, a Gaussian Mixture Modeling (GMM) was employed to model the demonstrations (trajectories) towards target objects in a probabilistic manner. However, since the real-time US image was not taken into consideration, all the demonstrations roughly started from the same initial position. This limitation severely impairs the usability of this method in real scenarios. To overcome this limitation and further provide real-time probe movement guidance for obtaining standard planes, Droste et al. proposed a behavioral cloning framework to mimic the process of sonographers searching for standard planes <cit.>. The proposed US-GuideNet consists of two fully connected layers and a gated recurrent unit (GRU) used to extract the sequential information. Due to hardware limitations, the predicted next movement of the probe and the estimated final standard planes only accounted for the rotational component, while the translational component remained unaccounted for. The performance of the imitation-based approach heavily relies on the given demonstrations. However, human US demonstrations are frequently and inherently sub-optimal, where the sonographers often need to adjust the probe around the desired pose to finally determine the optimal view. To tackle sub-optimal demonstrations, Burke et al. introduced a probabilistic temporal ranking model which assumes that the images shown in the later stage are more important than the earlier images <cit.>. The probabilistic ranking model can generate a large data set consisting of pair-wise images based on limited demonstrations; and then, a reward inference network was trained to assess individual B-mode images in self-supervised mode. To automatically navigate the probe to the viewpoint visualizing the mimicked tumor inside the gel phantom, an exploratory Bayesian optimization policy was employed. Nonetheless, due to safety concerns, it is impractical to interact richly with patients to gain enough experience to achieve the optimal searching policy in real scenarios. The process of navigating a US probe to a proper viewpoint displaying standard planes can be seen as a series of probe motions performed in accordance with current observations (e.g., US images, force, probe pose). Therefore, the reinforcement learning (RL) architecture has been seen as a particularly suitable solution for this type of task. Milletari et al. presented an initial work using a deep Q-learning (DQN) architecture to guide sonographers towards the correct sonic window for cardiac examination <cit.>. To avoid dynamic interaction with patients, a grid world environment was built over the chest using recorded videos to simulate acquisition environment. The results demonstrated that the DQN-based approach achieved better results (86.1% correct guidance) than a supervised approach (77.8% correct guidance) trained on the same data. 
A similar work also trained a DQN on a simulated 2D grid environment to navigate the probe towards the sacrum <cit.>. To automatically terminate the navigation process, a binary classifier (ResNet18) was employed to determine if the target object had been reached. Since this method only considered 3-DOF translational movements, the probe orientation is necessary to be carefully initialized. To further eliminate the requirement of manual initialization and automatically localize the paramedian sagittal oblique plane (a standard plane used in spine US examination), Li et al. trained a DQN to predict the potential actions in 5-DOF spaces (besides the translation in the probe centerline) <cit.>. In contrast to the grid word environment, this work built a simulator using 3D US volumes that cover the target anatomy of interest. This simulator can generate synthetic US images based on arbitrary probe poses. The experimental results demonstrated that the method can repeatably navigate the probe to the target standard plane with an accuracy of 4.91 mm (translational) and 4.65^∘ (orientational) in the intra-patient setting. Then, the authors extended the work by adding a deep learning module (VGG-16) to recognize the target standard views from real-time US images <cit.>. Due to the US simulator, a large amount of state-action data can be obtained for training the DQN agent. In addition, to learn the policy to guide the probe to the position visualizing the kidney, Chen et al. used a supervised learning process to predict the next actions based on the current US image; and an actor-critic RL module was developed to improve the utilization of data and enhance the generalization <cit.>. Recently, to bridge the gap between simulation and real scenarios, Bi et al. proposed VesNet-RL to perform US standard plane (longitudinal view) searching for vascular structures <cit.>. To achieve high generalization capability, this study computed the binary mask of real-time B-mode images and used the background-irreverent binary masks as the input to train the RL agent. Instead of performing validation in the simulated environment with a virtual probe, Ning et al. proposed a state representation model to encode the force and US images into the scene image space acquired using an RGB camera; and then an agent was trained using the proximal policy optimization (PPO) method to control the robotic manipulator to automatically perform US scans in real world <cit.>. Similarly, Deng et al. employed a deep neural network to encapsulate the scanning skill (the US images, the pose/position of the probe, and the contact force) into a high-dimensional multi-modal model; then, a policy was trained based on expert demonstrations <cit.>. Due to the differences between the images in the given demonstrations and real ones obtained during dynamic interactions, the trained model was further improved with guided explorations carried out by human operators. However, such manual correction is very expensive during clinical examinations, and it will limit the efficiency of the RUSS. Instead of directly learning a policy to search for standard planes, Jiang et al. proposed a novel machine learning framework (MI-GPSR) to understand the implicit physiological knowledge from expert demonstrations, which is implemented in a fashion of self-supervised mode using a probability ranking approach <cit.>. 
To ensure the generalization capability of the method, the authors employed the mutual information <cit.> to explicitly disentangle the task-related features from the domain features. The results on three types of phantoms [gel tubular structure, chicken heart, and lamb kidney phantom (see Fig. <ref>)] demonstrated that MI-GPSR can properly predict the reward of individual US images from unseen demonstrations and unseen phantoms with the same anatomy <cit.>. Understanding and modeling the semantic reasoning and intention of expert sonographers can facilitate not only the development of autonomous intelligent RUSS but also the design of US education and training systems and advanced methods for grading and evaluating the performance of human and robotic sonography. § OPEN CHALLENGES AND FUTURE PERSPECTIVES   Medical robots have gained increased attention, in particular during the COVID-19 pandemic. The role of robotics in managing public health and infectious diseases has been widely discussed among the community <cit.>. In order to apply RUSS in clinical practice, there are still many open challenges, including both technological (e.g., deep understanding of the dynamic scene, and advanced sensing technologies) and nontechnological (e.g., regulatory affairs and financing) aspects <cit.>. Here, we highlight two aspects that will widely affect the roadmap for RUSS, particularly for clinical translation and commercialization: 1) the acceptance of RUSS, and 2) the ethical and legal issues. In addition, we discussed some promising research directions to inspire the future development of RUSS. §.§ Acceptance by Patients and Clinicians The RUSS are designed to help both sonographers and patients in clinical practice. Besides demonstrating comparable or even better outcomes, the acceptance for RUSS is also important. Here, we want to first make a distinction between the concepts of acceptance and trust. Trust is mostly based on how well RUSS performs in terms of technical performance, such as safety, clinical results, robustness, repeatability, and so on. Yet, effective communication, friendly interaction, and mental development would also be necessary for improving acceptance. Regarding teleoperated RUSS, Adams et al. indicated that all patients (18) were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>. A similar result was reported by <cit.>, where 97% of 28 patients were willing to have another teleoperation scan. However, the number of participating patients in these two studies is limited. A more comprehensive survey about the patients' acceptance of RUSS should be carried out in the future. Furthermore, it is noteworthy that the clinicians' attitudes toward RUSS are still missing. Teleoperation systems are controlled by human operators, and there are some very successful teleoperation surgical systems, e.g., da Vinci system. This fact contributes to the positive attitude of stakeholders for teleoperated RUSS <cit.>. In contrast, since autonomous RUSS are partially or fully out of the control of experts, non-negligible worries about safety arise, which stress both patients and experts during scans. Autonomous RUSS is still far from gaining widespread acceptance. A standard evaluation metric considering clinical practices will help improve the trustiness of emerging autonomous medical robotics <cit.>. Nagy et al. 
defined the concept of level of Clinical Realism: 1) Training tasks with rigid phantoms; 2) Surgical tasks with simple phantoms; 3) Surgical tasks with realistic phantoms, but little or no soft-tissue interaction; 4) Surgical tasks with soft-tissue interaction; 5) Surgical tasks with soft-tissue topology changes <cit.>. To tackle the safety concern of autonomous RUSS, robotic arms are often controlled in compliant force mode, which will result in soft interaction between the probe and patients to prevent excessive contact force <cit.>. A force threshold is specified as a hard limitation in the low-level controllers to completely eliminate the potential extreme situation. The RUSS will stop instantly whenever the real-time force exceeds the predetermined threshold, which was 25 N in <cit.>. During robotic scans, two emergency buttons are often held by the clinical expert and the patient, respectively, to incorporate their observations into the safety-aware loop. Such a dedicated multi-layer safety-aware framework is beneficial for increasing the trust of clinicians and patients. By offering detailed explanations of the ongoing robotic US scans over audio and doing some straightforward interactions with patients such as ”high five", Eilers et al. claimed that the acceptance from patients could be enhanced <cit.>. To improve the acceptance of new medical devices in clinical practices, the robotic system with a medical certification can speed up the process in both research and market-driven developments <cit.>. For example, KUKA LBR iiwa has been widely used as the key component for developing RUSS <cit.>. Nevertheless, this comes with a high unit cost and may necessitate the assistance of an experienced engineer for imaging acquisition or routine system maintenance <cit.>. Since the fee will be paid by the end-users, the financial issue will become a practical factor hindering the acceptance from the patients. Most recently, Kosa et al. examined the role of robotics in Intensive Care Medicine and their acceptability to patients and caregivers <cit.>. They concluded that it is still immature to use robots directly handling patients, and close collaborations between roboticists and clinicians are required to advance robotics to benefit the ICU. §.§ Ethical and Legal Issues The ethical and legal issues regarding medical robotics are still not clearly defined, particularly for autonomous systems. The distribution of responsibility between experts and RUSS (or other surgical robotic systems) remains unclear. Clinical translation will also need regulatory acceptance. In order to properly tackle the ethical, regulatory, and legal issues for RUSS, Yang et al. divided surgical robots into six subgroups in terms of autonomy levels: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy <cit.>. To further improve the concept of level of autonomy, Haidegger defined the term “situation awareness" as the operator’s perception, comprehension, and prediction of a robot’s behavior in its environment <cit.>. Then, “situation awareness" is used to distinguish the required level of human supervision. Up to the time of writing this article, commercial surgical robots are still solidly resting at Level-0, while a very large number of high-autonomy surgical robotic systems are waiting for clinical translation <cit.>. Since commercial surgical robots are dominated by a few disproportionately large companies; thereby they have no rush in disrupting the status quo <cit.>. 
Ethical and legal regulations are critical for clinical translation and further commercialization. The need for such a regulation has been highlighted by various senior researchers in multiple impactful publications recently <cit.>. To establish such regulations for medical robots, O'Sullivan et al. defined three different responsibilities: (1) accountability: the capacity of a system to give an explanation for its actions; (2) liability: the legal liability for potential damages caused by a robot; and (3) culpability: whom and how to implement punishment <cit.>. In addition, Vayena et al. discussed ethical and legal issues for digital health in terms of privacy and security, trust, and accountability <cit.>. As a large amount of data is often necessary for analysis, protecting privacy is undoubtedly important for avoiding misuse. Public trust is of paramount importance. Vayena et al. considered that the creation of a culture of trust will enable all stakeholders to benefit from the development of digital health <cit.>. Similarly, Yang et al. summarised five increasingly pressing topics in terms of ethics for robotics and AI <cit.>. Besides the aforementioned terms like responsibility, this works further emphasized some societal issues such as potential influence on employment and human freedom. Due to the quick evolution of the area of medical robotics, a proper and comprehensive regulatory system will boost a prosperous market and gradually benefit all stakeholders. To deal with the unsolved issues regarding the safety, transparency, and trustworthiness of modern medical devices with a certain level of autonomy, the two leading Standard Development Organizations International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created the first joint standardization document (IEC/TR 60601-4-1) regarding autonomy for technical developers <cit.>. Recently, Prestes et al. established the first global ontological standard for AI and robotics: IEEE 7007—Ontological Standard for Ethically Driven Robotics and Automation Systems <cit.>. For an in-depth review of the ongoing initiatives regarding regulations, we highly recommend that readers refer to these two articles <cit.>. §.§ Future Perspectives In addition to challenges, there are also numerous opportunities in the field of RUSS, particularly in light of the boom in both fundamental sensor development and advanced AI research. This survey will elaborate on future perspectives from these two aspects. By providing an understanding of the state of the art, we hope it can stimulate a number of exciting ideas. To clarify, the opportunities extend far beyond what are described below. §.§.§ Fundamental Sensing Systems Sensors are essential components of all intelligent systems. Generally, the development of new sensors has a substantial effect on existing systems in numerous ways. To achieve the ultimate goal of an autonomous RUSS, it is necessary to integrate multiple sensing systems mimicking the sophisticated human sensing system. By developing efficient data fusion techniques, redundancy, and multi-modality data would aid in achieving robust and reliable perception results. This applies not only to RUSS but to a vast array of autonomous systems. Most recently, the novel concept and development of US patches have become attractive. Due to the advantages of small size, stretchable probability, and no need for US gel, it is very desired for continuous healthcare monitoring. 
The traditional US probes are rigid and bulky, making them unsuitable for imaging through nonplanar surfaces. To address this challenge, Hu et al. proposed a stretchable US probe that can conform to and detect nonplanar complex surfaces <cit.>. This soft probe consisted of a 10× 10 array of piezoelectric transducers covered by compliant silicone elastomers, and the results demonstrated that it could be stretched more than 50%. Similarly, Wang et al. developed and tested a skin-conformal ultrasonic phased array to monitor the physiological signals from tissues up to 14 cm <cit.>. To tackle the practical issue that the image quality is highly affected by US gels, Wang et al. designed a bioadhesive US device consisting of a thin and rigid US probe robustly adhered to the skin via a couplant made of a soft, tough, antidehydrating, and bioadhesive hydrogel-elastomer hybrid <cit.>. Based on this device, continuous imaging of internal tissues over days becomes feasible. Most recently, Hu et al. demonstrate a wearable cardiac US imager providing direct cardiac function assessment <cit.>. Such fundamental changes in US probe would open numerous opportunities for revolutionizing the techniques of robot-assisted US imaging. §.§.§ Advanced AI-based RUSS We consider the AI-based RUSS would be another promising direction, where the core task is to improve the intelligence of RUSS. To this end, the research community needs first to improve the computer's understanding of dynamic environments through multi-modality signals. Only when the system owns precise perception abilities, we can further expect and explore the way to make proper decisions autonomously. Several studies have demonstrated that AI-based approaches outperformed conventional image processing methods <cit.>. Benefiting from the accurate segmentation of target objects (e.g., blood vessels), precise state representations will further facilitate the development of autonomous scanning <cit.> or autonomous exploration of standard US planes <cit.>. In addition, advanced learning-based frameworks have the potential to be used to transfer senior sonographers' physiological knowledge and experience to novices. Recent studies in the direction of learning from demonstrations <cit.> implicitly result in an attractive and influential new research topic on recovery of “language of sonography". Hands-on experience is very important and necessary for sonographers. Senior sonographers who can perform flawless US scans are still unable to directly parameterize and intuitively describe the acquisition requirements. However, US examinations are carried out based on their understanding of high-level physiological knowledge. Such knowledge is common among sonographers, although their comprehension may vary slightly due to experience. The concept of recovery of “language of sonography" refers to the underlying understanding of high-level anatomical knowledge. We believe that efforts to retract the “language of sonography" from intuitive demonstrations with multiple signals, such as US images, RGB-D images, force information, probe movement, gaze information, etc., are as valuable and essential as the progress made in robotic sonography itself <cit.>. § DISCUSSION Robotic technologies have demonstrated promising potential to extend the use of US imaging in the healthcare industry, such as remote examinations, and accurate and quantitative control of acquisition parameters. 
Compared with conventional US examinations, although current RUSS cannot yet show superiority in terms of improving clinical outputs, a number of benefits have been demonstrated. From the perspective of patients, the waiting time for the healthcare intervention was significantly reduced from 144 to 26.5 days <cit.> and their cost was reduced as well <cit.>. As for sonographers, robots bring dexterity as well as reduce work-related musculoskeletal disorders <cit.>. Additionally, RUSS has the potential to make a significant contribution in a variety of clinical scenarios, including performing trauma examinations in pre-hospital settings <cit.>, freeing up a clinician's hand during the intervention <cit.>, and performing routine PAD screening or monitoring without radiation <cit.>. When it comes to trauma scans, it is vital to spot life-threatening intracavitary hemorrhage as soon as possible because this will enable doctors to make prompt treatment decisions to save lives in emergency scenarios. RUSS could be used for reliable and accurate trauma scan identification in pre-hospital settings by fusing precise sensing devices with a cutting-edge learning-based semantic segmentation framework. Continuing the current progress on RUSS requires a deep understanding of how its embedded technologies add value to healthcare practices. Intelligent robotic imaging systems could provide different benefits. On one hand, they can democratize the healthcare by making US examination available at locations in which patient populations do not currently have access to expert sonographers. On the other hand, to maximize the added value of RUSS, it is important to also focus on enabling new types of interventions or new procedures that are impractical or impossible based on traditional US examination, e.g., 3D or 4D visualization of scanned anatomy compensating or embedding physical breathing and heartbeat. Although there is not yet any fully autonomous system for US examinations, autonomy is one of the main objectives of the scientific community. Similar to surgical robotics, autonomous RUSS will be more challenging to commercialize <cit.>, however, due to its nature of offering images and visualization rather than decision making, cutting, and suturing tissues, we believe autonomous RUSS is easier to be certified and productized than autonomous surgical robotic solutions. On the other hand, compared to robotic X-ray and nuclear imaging, RUSS may be harder to certify because it requires direct interaction with patients. Researchers, therefore, need to continue their studies to guarantee the trust in and acceptance of autonomous RUSS by both doctors and patients. The reported results on current autonomous RUSS are still far from maturity and do not perform as well as or outperform clinicians. Most existing research makes simplifying assumptions and often uses artificial setups for their validation. For example, most US servoing approaches (Section <ref>) are validated on phantoms or using simulation rather than on human subjects, and the existing motion and deformation compensation approaches may not perform as well on patients within the complex and dynamic clinical setups. * Could advanced machine learning allow us to learn the “language of sonography" by observing expert sonographers? * Could our RUSS systems understand the physics of imaging and its interaction with dynamic patient physiology? * Could RUSS allow optimizing B-Mode, 3D and 4D image acquisition? 
* Could advanced sensing and intelligent control allow for guaranteeing reproducibility and safety of scanning procedures? * Could multimodal imaging and pretraining allow RUSS systems to observe and understand the specific anatomy and physiology of each patient? * Could explainable AI enable RUSS systems to report and justify their actions and decisions to physicians? * Could user-centric RUSS design allow smooth and friendly communication between sonographer robots, physician colleagues, and patients? Answering each of these exciting and essential questions requires large multi-disciplinary scientific and engineering communities to gather, communicate and collaborate. The current review paper hopes to play a small role in gathering and highlighting some of the requirements and opening the path for the community to study and analyze the next crucial steps to take. § CONCLUSION This survey has provided a brief picture of the rapidly evolving field of robot-assisted US imaging systems. Starting from the technical developments and clinical translations of various teleoperation systems in the first decade of the new millennium, in Section <ref>, the article summarizes the path the community took to get to its recent research focus on autonomous RUSS, in particular after the booming of machine learning and artificial intelligence throughout the last decade. It is challenging to develop intelligent RUSS solutions, which require a number of advanced capabilities to understand dynamic environments, physics of US imaging, human anatomy and physiology, and thereby to tackle complex cases of diagnostic and interventional imaging. To date, there are no such systems available. This paper aims at reviewing the state of the art and discussing the paths the community has taken or needs to take in the future. The survey shows that the recent progress has demonstrated that RUSS may be able to improve image acquisition and 3D visualization, also taking motion and deformation into account, real-time geometrical (including volumetric) measurements, and in particular their reproducibility. The US handling habits vary among expert sonographers, and cannot be well described using handcrafted features. We believe that in the near future, the development of advanced machine learning will allow for figuring out the underlying “language of sonography" based on expert demonstrations. This can not only allow for autonomous intelligent RUSS development but also for designing US education and training systems, and advanced methodologies for grading and evaluating the performance of human and robotic US examinations. In view of its speed of progress, RUSS has the potential to revolutionize not only the US-based medical interventions themselves but also clinical screening, diagnosis, and robotic-assisted surgery. § DECLARATION OF COMPETING INTEREST The authors report no conflicts of interest. § ACKNOWLEDGMENTS The authors would like to acknowledge the Editors and anonymous reviewers for their time, and implicit contributions to the improvement of the article's thoroughness, readability, and clarity. model2-names.bstauthoryear
http://arxiv.org/abs/2307.06115v1
20230712121454
The next gap in the subrank of 3-tensors
[ "Fulvio Gesmundo", "Jeroen Zuiddam" ]
math.AG
[ "math.AG", "cs.CC", "math.CO", "quant-ph", "5A69, 4N07, 15A72, 68R05" ]
The next gap in the subrank of 3-tensors Fulvio Gesmundo Jeroen Zuiddam Saarland University University of Amsterdam August 12, 2023 Abstract Recent works of Costa–Dalai, Christandl–Gesmundo–Zuiddam, Blatter–Draisma–Rupniewski, and Briët–Christandl–Leigh–Shpilka–Zuiddam have investigated notions of discreteness and gaps in the possible values that asymptotic tensor ranks can take. In particular, it was shown that the asymptotic subrank and asymptotic slice rank of any nonzero 3-tensor are equal to 1, equal to 1.88, or at least 2 (over any field), and that the set of possible values of these parameters is discrete (in several regimes). We determine exactly the next gap, showing that the asymptotic subrank and asymptotic slice rank of any nonzero 3-tensor are equal to 1, equal to 1.88, equal to 2, or at least 2.68. Keywords: subrank, asymptotic subrank, tensor degeneration 2020 Math. Subj. Class.: (primary) 15A69, (secondary) 14N07, 15A72, 68R05 § INTRODUCTION Unlike matrix rank, many natural notions of tensor rank (in particular, those defined in an asymptotic or amortized manner) may take non-integral values. For instance, the asymptotic subrank and asymptotic slice rank of the tensor e_1 ⊗ e_2 ⊗ e_2 + e_2 ⊗ e_1 ⊗ e_2 + e_2 ⊗ e_2 ⊗ e_1 equal 2^h(1/3)≈ 1.88 where h is the binary entropy function. Applications often ask for determining the value of these parameters for specific tensors. This raises the question: What values can such parameters take? Are there gaps between the values? Are there accumulation points? Concrete gaps. We briefly summarize the known results in this area, which can roughly be grouped into results about “concrete gaps” and the “general structure” of gaps. Since the first gap result of Strassen <cit.>, who proved that the asymptotic subrank (and as a consequence, the asymptotic partition rank) of any nonzero k-tensor is either 1 or at least 2^2/k, several works have investigated notions of discreteness and gaps in the values of tensor parameters. Costa and Dalai <cit.> proved that the asymptotic partition rank of any nonzero k-tensor is either 1 or at least 2^h(1/k)≈ 1.88, where h denotes the binary entropy function. Christandl, Gesmundo and Zuiddam <cit.> proved the stronger analogous statement for the asymptotic subrank, and moreover showed for k=3 that the asymptotic subrank is equal to 1, equal to 2^h(1/k)≈ 1.88, or at least 2, leaving as an open problem whether the set of all possible values is discrete, and in particular what is the next possible value. General structure. Blatter, Draisma and Rupniewski <cit.> proved that the set of values of every normalized monotone tensor parameter over finite fields is well-ordered: in particular, it has no accumulation points from above. Christandl, Vrana and Zuiddam <cit.> proved, using methods from representation theory and quantum information, that the asymptotic slice rank over the complex numbers takes only finitely many values on tensors of any fixed format, and thus only countably many values in general. Blatter, Draisma and Rupniewski <cit.> proved that a class of asymptotic tensor parameters over the complex numbers takes only countably many values; this class includes asymptotic subrank and asymptotic slice rank, over arbitrary fields. Briët, Christandl, Leigh, Shpilka and Zuiddam <cit.> proved that for a general class of asymptotic tensor parameters over several regimes, the set of values of any function in the class is discrete.
This includes the asymptotic subrank over finite fields and the asymptotic slice rank over complex numbers. New results and methods. In this paper, we prove a new concrete gap for the asymptotic subrank of 3-tensors over any field, showing that the asymptotic subrank of any nonzero tensor is equal to 1, equal to 2^h(1/k)≈ 1.88, equal to 2, or at least ≈ 2.68 (<ref>). The last value is the asymptotic subrank of the multiplication tensor of the trivial unital algebra of dimension 3. To obtain this result, we prove a structural result about restrictions between tensors. In our proof we make use of the notion of the maximum rank in the slice span of a tensor that was also central in <cit.>, and in particular we prove that this parameter remains as large as possible under generic restriction. We moreover use a result about degenerating to the trivial algebra of Blaser–Lysikov <cit.> and a classification of matrix subspaces of low-rank of Atkinson <cit.> and Eisenbud–Harris <cit.>. § GAPS IN THE ASYMPTOTIC SUBRANK In this section we will provide some preliminary definitions, briefly summarize the known results from <cit.>, and discuss the new results in detail. §.§ Basic definitions We provide basic definitions here. For more background we refer to <cit.>. Let be any field. In this work, we consider tensors of order three: let T ∈^n_1⊗^n_2⊗^n_3 be a tensor over with dimensions (n_1, n_2, n_3). The subrank of T, denoted by (T), is the largest number r∈ such that there are linear maps A_i : ^n_i→^r such that (A_1 ⊗ A_2⊗ A_3)T = ∑_i=1^r e_i ⊗ e_i ⊗ e_i. The asymptotic subrank of T is defined as (T) = lim_n →∞(T^⊠ n)^1/n where ⊠ is the Kronecker product on tensors. This limit exists and equals the supremum by Fekete's Lemma, since is super-multiplicative. The flattenings of T are the elements in (^n_1⊗^n_2) ⊗^n_3, ^n_1⊗ (^n_2⊗^n_3) and ^n_2⊗ (^n_1⊗^n_3), obtained by naturally grouping the tensor factors of T. They are regarded as matrices, that is tensors of order two. For any two tensors T ∈^n_1⊗^n_2⊗^n_3 and S ∈^m_1⊗^m_2⊗^m_3 we say T restricts to S and write T ≥ S if there are linear maps A_i : ^n_i→^m_i such that (A_1 ⊗ A_2⊗ A_3)T = S. In particular, (T) is the largest number r∈ such that T ≥∑_i=1^r e_i ⊗ e_i ⊗ e_i. §.§ Previously known gaps Let be any algebraically closed field. Let the tensors ∈^2 ⊗^2 ⊗^2 and ∈^2 ⊗^2 ⊗^2 be defined by = e_1 ⊗ e_1 ⊗ e_1 + e_2 ⊗ e_2 ⊗ e_2, = e_1 ⊗ e_1 ⊗ e_2 + e_1 ⊗ e_2 ⊗ e_1 + e_2 ⊗ e_1 ⊗ e_1. In <cit.> the following classification in terms of and is proven: Let n_1, n_2, n_3 ∈ be arbitrary. For every nonzero T ∈^n_1⊗^n_2⊗^n_3 exactly one of the following is true: * T has a flattening of rank one; * ≥ T and T ≥; * T ≥. By monotonicity of asymptotic subrank, using the known values () = 2^h(1/3) and () = 2, the following gap theorem for asymptotic subrank can be deduced immediately from <ref>: For every nonzero T ∈^n_1⊗^n_2⊗^n_3, exactly one of the following is true: * (T) = 1; * (T) = c_1 2^h(1/3)≈ 1.88988; * (T) ≥ 2. <ref> is also true with replaced by asymptotic slice rank, since for and the asymptotic subrank equals the asymptotic slice rank. §.§ New result We prove a classification that extends <ref>. To state it we need to define the following tensors. 
Let ^(1), ^(2),^(3)∈^3 ⊗^3 ⊗^3 be defined by ^(1) = e_1 ⊗ e_1 ⊗ e_1 + e_2 ⊗ e_1 ⊗ e_2 + e_2 ⊗ e_2 ⊗ e_1 + e_3 ⊗ e_1 ⊗ e_3 + e_3 ⊗ e_3 ⊗ e_1 ^(2) = e_1 ⊗ e_1 ⊗ e_1 + e_2 ⊗ e_2 ⊗ e_1 + e_1 ⊗ e_2 ⊗ e_2 + e_3 ⊗ e_3 ⊗ e_1 + e_1 ⊗ e_3 ⊗ e_3 ^(3) = e_1 ⊗ e_1 ⊗ e_1 + e_1 ⊗ e_2 ⊗ e_2 + e_2 ⊗ e_1 ⊗ e_2 + e_1 ⊗ e_3 ⊗ e_3 + e_3 ⊗ e_1 ⊗ e_3. Note that these tensors are cyclic shifts of each other. As a bilinear map, regarding the first and second tensor factors as “inputs” and the third factor as “output”, ^(3) is the tensor encoding the multiplication map of the 3-dimensional trivial unital algebra [x,y]/(x^2,xy,y^2). Let ∈^3 ⊗^3 ⊗^3 be defined by = e_1 ⊗ e_2 ⊗ e_3 - e_1 ⊗ e_3 ⊗ e_2 + e_2 ⊗ e_1 ⊗ e_3 - e_2 ⊗ e_3 ⊗ e_1 + e_3 ⊗ e_1 ⊗ e_2 - e_3 ⊗ e_2 ⊗ e_1. In other words, is the unique up to rescaling fully skew-symmetric tensor e_1 ∧ e_2 ∧ e_3 ∈Λ^3 ^3. As a trilinear map, it takes three vectors in ^3 to the determinant of the 3 × 3 matrix whose columns are the three vectors. For any two tensors T and S we say T degenerates to S, and write T S, if there are linear maps A(), B(), C() whose coefficients are Laurent polynomials in the formal variable , such that (A() ⊗ B() ⊗ C()) T = S + S_1 + ^2 S_2 + ⋯ + ^t S_t for some arbitrary tensors S_1, …, S_t. It is known that asymptotic subrank is monotone under degeneration <cit.>, <cit.>. Let n_1, n_2, n_3 ∈ be arbitrary. For every nonzero T ∈^n_1⊗^n_2⊗^n_3 exactly one of the following is true: * T has a flattening of rank one; * ≥ T and T ≥; * T ≥ and T has a flattening of rank two; * all flattenings of T have rank at least three, in which case at least one of the following is true * T ^(i) for some i ∈ [3]; * T ≥ and ≥ T. In order to discuss the resulting gaps in the values of the asymptotic subrank, we need to know the values of (^(i)) and (). Define the number c_2 2^τ + h(τ) where τ∈ (0,1/2) is the unique solution of h(2τ)- h(τ)+τ = 0, (where h is the binary entropy function) which numerically evaluates to c_2 ≈ 2.68664. The value of (^(i)) was computed in <cit.>, the one of () was computed implicitly in <cit.> and more explicitly in <cit.>. We record here the two results: For every i ∈ [3], we have (^(i)) = c_2 ≈ 2.68664. The tensor ^(3) is the structure tensor of the 3-dimensional null-algebra as described in <cit.>. From Equation 6.19 (with q=2), we find that (^(3)) = c_2. The values of (^(1)) and (^(2)) are the same as (^(3)), since they are obtained from ^(3) by permuting the tensor factors. () = 3. The value of () is at most 3 because the flattenings ranks of are 3. The lower bound follows from a standard application of the support functional method of Strassen <cit.>. For this we first observe that the support of is tight in the basis that we presented it. The support is symmetric so the minimum over θ is attained for the uniform θ. The maximum over the probability distributions on the support is attained for the uniform distribution, which gives the required value. The essential information from <ref> is that () ≥(^(i)). The above leads to the following: For every nonzero T ∈^n_1⊗^n_2⊗^n_3, exactly one of the following is true: * (T) = 1; * (T) = c_1 ≈ 1.88988; * (T) = 2; * (T) ≥ c_2 ≈ 2.68664. We follow <ref> case by case. In case (a), the tensor has a flattening of rank one (but is nonzero), and thus (T) = 1. In case (b), T is equivalent to so (T) = () = c_1. In case (c), T ≥ so (T) ≥ 2 and T has a flattening of rank two, so (T) ≤ 2. In case (d), either T ^(i) for some i ∈ [3], in which case (T) ≥(^(i)) = c_2 or T, in which case (T) ≥() = 3 ≥ c_2. 
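Since the statements above involve only small, explicit tensors, they can be spot-checked numerically. The following Python sketch is our own illustration (it is not part of the paper, and the variable names are invented); it builds the three-dimensional trivial-algebra tensor and the fully antisymmetric determinant tensor defined above and reports, for each of the three directions, the flattening rank and the rank of a random (hence generic) element of the slice span.

import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def flattening_rank(T, mode):
    # rank of the flattening of T that groups the other two factors together
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    return np.linalg.matrix_rank(M)

def generic_slice_rank(T, mode):
    # rank of a random linear combination of the slices in the given direction;
    # with probability 1 this equals the maximal rank in the slice span
    slices = np.moveaxis(T, mode, 0)
    coeffs = rng.standard_normal(slices.shape[0])
    return np.linalg.matrix_rank(np.tensordot(coeffs, slices, axes=1))

# multiplication tensor of the 3-dimensional trivial unital algebra (third variant above)
E3 = np.zeros((3, 3, 3))
for (a, b, c) in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (0, 2, 2), (2, 0, 2)]:
    E3[a, b, c] = 1.0

# fully antisymmetric tensor e_1 wedge e_2 wedge e_3 (the determinant tensor)
DET = np.zeros((3, 3, 3))
for perm in permutations(range(3)):
    DET[perm] = round(np.linalg.det(np.eye(3)[list(perm)]))  # permutation sign +/-1

for name, T in [("trivial algebra tensor", E3), ("determinant tensor", DET)]:
    fr = [flattening_rank(T, m) for m in range(3)]
    sr = [generic_slice_rank(T, m) for m in range(3)]
    print(name, "flattening ranks:", fr, "generic slice-span ranks:", sr)

# expected output: flattening ranks [3, 3, 3] for both tensors,
# slice-span ranks [3, 3, 2] for the algebra tensor and [2, 2, 2] for the determinant tensor

In the notation used in the proof of the main theorem below, these are the flattening ranks r_i and the slice-span ranks q_i.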
<ref> is also true with replaced by asymptotic slice rank, since (by standard results) for each of the tensors , , and ^(i) the asymptotic subrank equals the asymptotic slice rank. § PRELIMINARY RESULTS Before proving our main result we discuss three preliminary lemmas that will play a central role in the proof. §.§ Rank of slice-spans under restriction For any matrix subspace 𝒜, let (𝒜) be the largest matrix rank of any element of 𝒜. By semicontinuity, () is the rank of generic elements of . Let T ∈^n_1*⊗^n_2⊗^n_3. For every i, let T^(i) : ^n_i→^n_j⊗^n_j be the i-th flattening of T, with {i,j,k} = { 1,2,3}. Define _i(T) = ( T^(i) (^n_i *)), where T^(i) (^n_i *) is regarded as a linear space of n_j × n_k matrices. The parameters _i and their properties were used in <cit.> to prove discreteness of asymptotic tensor ranks. Clearly _i(T) ≤min{n_j,n_k} for every distinct i,j,k ∈ [3], and that _i is monotone under restriction. We prove that _i(T) remain as large as possible under (generic) restriction. This result, as well as its proof, is similar to <cit.>. Let T ∈^n_1⊗^n_2⊗^n_3. For any m_1, m_2, m_3 ∈ there is a tensor S ∈^m_1⊗^m_2⊗^m_3 such that T ≥ S and for every distinct i,j,k ∈ [3] we have _i(S) = min{_i(T), m_j, m_k}. It is clear that _i(S) ≤min{_i(T), m_j, m_k} for every choice of i,j,k. For every i ∈ [3], let A^(i)_1, …, A^(i)_n_i be the i-slices of T, that is A^(i)_j = T^(i)(e_j) for a fixed basis e_1 e_n_i of ^n_i*. For every i∈ [3], there is a non-empty (Zariski) open set of _n_i such that the matrices B^(i)_1, …, B^(i)_n_i obtained by taking any such linear combinations of A^(i)_1, …, A^(i)_n_i has the property that B^(i)_1 has rank equal to _i(T). For every i∈ [3], there is a nonempty open set of column operations and row operations on B^(i)_1 such that the new matrix C^(i)_1 has the property that its submatrix C^(i)_1|_m_j × m_k has rank min{_i(T), m_j, m_k}. The intersection of finitely many Zariski open subset is Zariski open and dense. Hence, we can obtain the above properties simultaneously for every i ∈ [3] by acting on T with an operation from the intersection. After these operations, let S be the subtensor obtained projecting T on the coordinates [m_1] × [m_2] × [m_3]. The tensor S satisfies _i(S) = min{_i(T), m_j, m_k}. §.§ Degenerating to the trivial algebra Recall the tensor ^(i)∈^3⊗^3⊗^3 for i ∈ [3]. We consider the analog tensor in higher dimension. For any n ∈, let ^(3)_n ∈^n ⊗^n ⊗^n be defined by ^(3)_n = e_1 ⊗ e_1 ⊗ e_1 + ∑_i=2^n (e_1 ⊗ e_i ⊗ e_i + e_i ⊗ e_1 ⊗ e_i) and let ^(1)_n, ^(2)_n ∈^n ⊗^n ⊗^n be obtained from ^(3)_n by cyclically shifting the tensor factors. Note that ^(i) = ^(i)_3. As a bilinear map, regarding the first and second tensor factors as “inputs” and the third factor as “output”, ^(3)_n is the tensor encoding the multiplication map of the n-dimensional trivial unital algebra [x_1 x_n-1]/^2 where = (x_1 x_n-1) is the ideal generated by the variables. We will be using the following result, which says that if the slice spans have maximum rank in two directions i≠ j, then the tensor degenerates to _n^(k) in the third direction k ≠ i, k ≠ j. Let T ∈^n⊗^n⊗^n. For every distinct i,j,k ∈ [3], if _i(T) = _j(T) = n, then T ^(k)_n. <ref> follows directly from combining the following two lemmas. Let T ∈^n⊗^n⊗^n. If _1(T) = _2(T) = n, then T is isomorphic to the multiplication tensor of an n-dimensional unital algebra. In particular, T ≃^(3)_n + ∑_a=2^n ∑_b=2^n ∑_c=1^n T_a,b,c e_a ⊗ e_b ⊗ e_c for some T_a,b,c∈. 
The multiplication tensor of an n-dimensional unital algebra degenerates to ^(3)_n. More precisely, for every n ∈ and every T_a,b,c∈, ^(3)_n + ∑_a=2^n ∑_b=2^n ∑_c=1^n S_a,b,c e_a ⊗ e_b ⊗ e_c ^(3)_n. Let A() = B() map e_1 ↦^-2 e_1 and e_j ↦ e_j for j ≥ 2. Let C() map e_1 ↦^4 e_1 and e_j ↦ e_j for j ≥ 2. Then (A() ⊗ B() ⊗ C()) (^(3)_n + ∑_a=2^n ∑_b=2^n ∑_c=1^n S_a,b,c e_a ⊗ e_b ⊗ e_c) = ^(3)_n + ^3 U_1 + ^6 U_2 for some tensors U_1, U_2. §.§ Classification of matrix subspaces of small rank The final ingredient is a classification of matrix subspaces of small rank. Let 𝒜⊆^n_1× n_2 and ℬ⊆^m_1 × m_2 be matrix subspaces. We say 𝒜 and ℬ are equivalent if 𝒜 can be obtained from ℬ by simultaneous invertible row and column operations and adding or removing any number of zero rows or columns. The classification of linear subspaces of small rank is the subject of a long line of research in linear algebra and algebraic geometry. In this section, we only need the classification for rank 2, which dates back to <cit.>. In <cit.>, the classification for rank at most three was obtained, whereas the classification for rank at most four was only recently obtained in <cit.>. The classification uses the space of 3 × 3 skew-symmetric matrices, which is the space {[ 0 a b; -a 0 c; -b -c 0 ] : a,b,c ∈} Note that the slice span, in each of the three directions, of the tensor defined earlier is equal to the space of 3 × 3 skew-symmetric matrices. Let 𝒜 be a matrix subspace. * If (𝒜) = 1, then up to equivalence, 𝒜 is supported in a single row or column. * If (𝒜) = 2, then up to equivalence, 𝒜 is supported in two rows, or two columns, or a row and a column, or 𝒜 is equivalent to the space of 3× 3 skew-symmetric matrices. § PROOF OF <REF> We now give the proof of <ref>. Let T ∈^n_1⊗^n_2⊗^n_3 be nonzero. Let r_i = _i(T) and q_i = _i(T) for i ∈ [3]. If r_1, r_2, r_3 ≥ 3, then q_1, q_2, q_3 ≥ 2. If q_i = 1 for some i ∈ [3], then the the span of the i-slices is supported on a single row or column by <ref>. This implies that r_j = 1 for some j ∈ [3]. We consider two cases: * q_i, q_j ≥ 3 for some distinct i,j ∈ [3] * q_i,q_j = 2 for some distinct i,j ∈ [3] Suppose r_1, r_2, r_3 ≥ 3. Let i,j,k ∈ [3] be distinct. If q_i, q_j ≥ 3, then T ^(k). Suppose q_1, q_2 ≥ 3. By <ref>, there is a tensor S ∈^3⊗^3⊗^3 such that T ≥ S and _1(S) = min{q_1, 3} = 3 and _2(S) = min{q_2, 3} = 3. Then S ^(3) by <ref>. Suppose r_1, r_2, r_3 ≥ 3. Let i,j,k ∈ [3] be distinct. If q_i, q_j = 2, then q_k = 2 and T is isomorphic to . For i=1,2,3, let 𝒜_i = T^(i)(^n_i*) be the image of the i-th flattening of T. If any of these is equivalent to the subspace of 3 × 3 skew-symmetric matrices then all three are and T is equivalent to . Suppose none of these is equivalent to the subspace of 3 × 3 skew-symmetric matrices. We will deduce a contradiction. Suppose for simplicity of notation that q_1 = q_2 = 2. We apply <ref> to the space _1. Since r_2, r_3 ≥ 3, _1 is not equivalent to a space supported on only two rows or two columns. Therefore, after changing coordinates, we may assume it is supported on the first row and the first column. In other words _1 = {( [ ℓ_11 ℓ_12 ⋯ ℓ_1r_3; ℓ_21 ; ⋮ ; ℓ_r_21 ]) ∈^r_2⊗^r_3 : ℓ_bc∈^r_1}. If ℓ' is a generic linear combination of ℓ_11 ,ℓ_21ℓ_r_2 1 then ℓ' , ℓ_12ℓ_1r_3 are linearly independent, otherwise the rank of the third flattening of T would be smaller than r_3. 
After possibly acting on the second tensor factor of T, or equivalently performing row operations on _1, assume ℓ_11 = ℓ' and applying a linear transformation on the first tensor factor we may assume ℓ_j1 = e_j. In this case, the space _2 turns out to be _2 = {( [ e_1 z_12 ⋯ z_1r_3; e_1 ; ⋱ ; e_1; ]) ∈^r_1⊗^r_3 : z_1c∈^r_2}. If r_1 ≥ 3, then _2 contains a matrix of rank at least 3, in contradiction with the condition q_2 ≤ 2. This provides a contradiction and concludes the proof. Suppose r_i ≤ 2 for some i ∈ [3]. Then we must be in case (a), (b) or (c) by <ref>. Suppose r_i ≥ 3 for all i ∈ [3]. Then we cannot be in case (a), (b) or (c). From <ref>, <ref> and <ref> follows that (d) must hold. § OPEN PROBLEMS * <ref> says that the smallest possible values of the asymptotic subrank are 0, 1, 1.88988, 2, 2.68664. We know that the set of possible values ≥ 2.68664 is also discrete <cit.>. What is the next smallest value? As a candidate, we know that there exists a tensor T with (T) ≈ 2.7551 <cit.>. * What general structure is there in the gaps in the asymptotic subrank? For every natural number n ∈ what is the smallest (largest) value that the asymptotic subrank takes that is strictly larger (smaller) than n? Acknowledgements. We thank the organisers of the Workshop on Algebraic Complexity Theory (WACT) 2023 at the University of Warwick, where this project was conceived. J.Z. was supported by NWO Veni grant VI.Veni.212.284. alphaurl
http://arxiv.org/abs/2307.03968v1
20230708125450
Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation
[ "Y. K. Negi", "N. Balakrishnan", "S. M. Rao" ]
cs.CE
[ "cs.CE", "cs.NA", "math.NA" ]
Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation Y. K. Negi, N. Balakrishnan, S. M. Rao In this paper, we propose a new multi-level power series solution method for solving a large surface and volume electric field integral equation-based H-Matrix. The proposed solution method converges in a fixed number of iterations and is applied at each level of the H-Matrix computation. The solution method avoids the computation of the full matrix, as the system can be solved independently at each level, starting from the leaf level. The solution at each level can be used as the final solution, thus saving the matrix computation time for the full H-Matrix. The paper shows that the leaf-level matrix computation and power series solution give results as accurate as the full H-Matrix iterative solver method. The method results in considerable time and memory savings compared to the H-Matrix iterative solver. Further, the proposed method retains the O(NlogN) solution complexity. Method of Moments (MoM), H-Matrix, surface electric field integral equation, volume electric field integral equation. § INTRODUCTION With the use of ever-increasing frequencies for various defence and civilian applications, the electrical size of electromagnetic scattering/radiation problems has grown drastically <cit.>. Solving electrically large problems numerically to obtain fast and accurate results is the biggest challenge in the Computational Electromagnetics (CEM) community. Also, with the increase in computing power and memory, the need for large-scale solution algorithms has grown even more. Out of the various numerical methods in CEM, the most popular methods are: a) the Finite Difference Time Domain (FDTD) <cit.> method in the time domain and b) the Method of Moments (MoM) <cit.> and Finite Element Method (FEM) <cit.> in the frequency domain. Traditionally, the frequency domain methods have been more popular than the time domain methods as most of the early experimental results were available in the frequency domain and validating the computational results was convenient and easy. Out of the various frequency domain methods, MoM-based methods are highly accurate and flexible for modeling irregular structures. The MoM matrix can be computed with the Surface Electric Field Integral Equation (S-EFIE) for solving Perfect Electrical Conductor (PEC) problems with a surface mesh, and the Volume Electric Field Integral Equation (V-EFIE) <cit.> for solving inhomogeneous dielectric problems with a volume mesh. Further, MoM leads to a smaller number of unknowns compared to FEM and is free from grid dispersion error. However, the MoM matrix is a full matrix, in contrast to the sparse matrix of the FEM method. Hence, the solution of large-size problems with MoM in electromagnetics requires high matrix memory and computation time due to the dense matrix. Note that the MoM dense matrix computation, matrix-vector product, and storage costs scale as O(N^2) for N unknowns. Solving the dense matrix with an iterative solver leads to N_itr O(N^2) calculations for N_itr iterations, with O(N^2) as the matrix-vector multiplication cost. With a direct solver, the complexity grows as O(N^3).
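As a rough back-of-the-envelope illustration of these costs (our own addition, assuming 16 bytes per double-precision complex matrix entry; the first size is an arbitrary reference, the other two are the problem sizes used later in Section V), the memory for the dense MoM matrix alone quickly becomes prohibitive:

# dense complex double-precision MoM matrix: 16 bytes per entry
for n in (10_000, 67_200, 158_830):
    print(f"N = {n:>7d}: {16.0 * n * n / 2**30:8.1f} GiB")

# N =   10000:      1.5 GiB
# N =   67200:     67.3 GiB
# N =  158830:    375.9 GiB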
Various fast solver algorithms like Multi-Level Fast Multipole Algorithm (MLFMA) <cit.>, Adaptive Integral Method (AIM) <cit.>, FFT <cit.>, IE-QR <cit.>, and Hierarchical Matrix (H-Matrix) <cit.> have been proposed to overcome the MoM limitations of high memory and computation cost. Fast solver reduces the matrix memory, matrix fill time, and matrix-vector product time to O(NlogN). The reduced matrix-vector product time improves the solution time to N_itr O(NlogN) for N_itr iterations with various iterative solution methods like Bi-Conjugate Gradient (BiCG) or Generalized Minimum Residual (GMRES). Fast solvers are built on the compressibility property of the far-field interaction matrices. The compression of the far-field matrices can be done using analytical matrix compression methods like MLFMA or AIM, and also with numerical matrix compression methods like H-Matrix. Compared to analytical compression methods, numerical compression methods are easy to implement and are kernel independent. All the fast solvers depend on the iteration count of the iterative solution methods. The convergence of the iterations depends on the condition number of the computed MoM matrix, and further, for a large number of unknowns, the convergence iteration count also increases. The high iteration count can be mitigated by using various preconditions like ILUT, Null-Field, and Schur's complement method based preconditioners <cit.>. The matrix preconditioner improves the condition number of the matrices and reduces the iteration count of the overall matrix solution. Despite the improvement in solution time, the use of preconditioners comes with the overhead of preconditioner computation time and extra preconditioner solution time for each iteration. Also, for the solving of a large number of unknowns, the iteration count may still be high. Recently there has been a trend in the CEM community for the development of an iteration-free fast solver method for solving problems with a large number of unknowns. Various fast direct solvers <cit.> have been proposed to overcome the iteration dependency of the solution process. These direct solvers are based on LU decomposition and compression methods. The methods are complex to implement and give quadratic scaling for complex real-world problems. In this work, we propose a Multi-Level (ML) fast matrix solution method based on the power series <cit.>. The proposed method exploits the property of ML matrix compression of the H-Matrix. The matrix is solved for each level using the matrix computation of the leaf level only, and the matrix solution can be terminated at the desired level as per the required accuracy. Our experimental results show that we get good accuracy even for the lowest level solution. The method relies on matrix-vector multiplication at each level and using the solution of the lowest level saves matrix computation time and memory requirement for the overall matrix solution. The rest of the paper is organized as follows. Section II gives a summary of MoM computation for S-EFIE and V-EFIE, section III covers H-Matrix computation for S-EFIE and V-EFIE. The derivation of the proposed ML power series solver is given in section IV. The numerical results of the proposed method, and conclusion are discussed in sections V, and VI. § METHOD OF MOMENTS MoM is a popular and efficient integral equation based method for solving various electromagnetic radiation/scattering problems. 
MoM can be computed using the Electric Field Integral Equation (EFIE) for both surface and volume modeling. Surface modeling can be done using the Rao Wilton Glisson (RWG) <cit.> triangle basis function, whereas volume modeling can be done using the Schaubert Wilton Glisson (SWG) <cit.> tetrahedral basis function. In the case of dielectric modeling, V-EFIE is an integral equation of the second kind and, compared to S-EFIE, is better conditioned and more stable. V-EFIE can also model inhomogeneous bodies more efficiently than surface EFIE. In this work, we use the RWG basis function for PEC surface S-EFIE modeling and the SWG basis function for volume V-EFIE modeling. The governing surface/volume EFIE for a conductor/dielectric scattering body illuminated by an incident plane wave states that the total electric field (E^total) at the scattering surface/volume is the sum of the incident electric field (E^inc) and the scattered electric field (E^scatt): E^total=E^inc+E^scatt. The scattered electric field is due to the surface current on the PEC surface or the volume polarization current in the dielectric media and is given as: E^scatt=-jωA(r)- ∇ϕ(r). In the above equation, A(r) is the magnetic vector potential, which describes the radiation of the current, and ϕ(r) is the electric potential, which describes the associated bound charge. Applying the boundary condition for a PEC structure, the S-EFIE can be written as: E^inc=jωA(r)+ ∇ϕ(r). Similarly, the V-EFIE can be written for a dielectric inhomogeneous body as: E^inc=D(r)/ϵ(r) + jωA(r) + ∇ϕ(r). In the above equation, D(r) is the electric flux density and ϵ(r) is the dielectric constant of the scattering volume media. The surface current in equation (3) for the PEC structure is expanded with RWG functions, and similarly, in equation (4) the polarization current and charge of the dielectric volume structure are modeled with SWG basis functions. Performing Galerkin testing on each term and integrating over the surface/volume, the final system boils down to the linear system of equations below: [Z]x=b. In the above equation, Z is a dense MoM matrix, b is the known incident plane wave excitation, and x is the unknown coefficient vector to be computed. The dense matrix leads to high matrix computation cost and memory requirements as well as high solution time complexity. In the next section, we discuss the implementation of the H-Matrix to mitigate the high cost of the conventional MoM matrix. § H-MATRIX The high cost of MoM limits its application to problem sizes of a few λ. This limitation of MoM can be overcome by incorporating fast solvers. Most of the fast solvers work on the principle of compressibility of the far-field matrices. For the implementation of a fast solver, the mesh of the geometry is divided into blocks using an oct-tree or binary-tree division process and terminated at the desired level with a limiting edge or face count in each block. The non-far-field interaction blocks at the lowest level are considered near-field blocks and are kept in dense matrix form. The compression of the far-field block matrices at each level can be done analytically or numerically. The system of equations in equation (5) can now be written as the sum of near-field and far-field matrices as: [Z_N+Z_F]x=b. In the above equation, Z_N is the near-field block matrix and Z_F collects the compressed far-field block matrices of the MoM fast solver matrix. Numerical compression of far-field matrices is easy to implement and is kernel-independent. A few of the popular fast solvers using numerical compression methods are IE-QR and the H-Matrix.
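To make the compressibility statement above concrete, the following short Python sketch (our own illustration; the point clusters, the static 1/r kernel, and the tolerance are invented for the example and are not the EFIE kernel of the paper) builds the interaction block between two well-separated clusters and inspects its singular values:

import numpy as np

rng = np.random.default_rng(0)

# two unit-size point clusters whose centres are ten diameters apart,
# i.e. a pair that a typical admissibility test would mark as far field
src = rng.random((300, 3))
obs = rng.random((300, 3)) + np.array([10.0, 0.0, 0.0])

# smooth 1/r interaction block between the clusters
Z_far = 1.0 / np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)

s = np.linalg.svd(Z_far, compute_uv=False)
rank = int(np.sum(s > 1e-6 * s[0]))
print(f"block size {Z_far.shape}, numerical rank at a 1e-6 cut-off: {rank}")

The numerical rank comes out far below the block dimension of 300, so the block can be stored and applied as a low-rank product of two thin matrices; computing such factorizations on the fly, without ever forming the full block, is exactly what the ACA compression described below does.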
In this work, we have implemented H-Matrix for ML matrix compression. For the ML compression computation, the mesh is divided into ML binary tree division-based subgroups. H-Matrix works on the computation of a far-field matrix for the interaction blocks satisfying the admissibility condition given in equation (7). The admissibility condition states that η times the distance between the observation cluster (Ω_t) and source cluster (Ω_s) should be greater or equal to the minimum diameter of the observation cluster or source cluster for far-field computation, where η is the admissibility control parameter, and its value is taken as 1.0. η dist(Ω_t,Ω_s) ≥ min(diam(Ω_t),diam(Ω_s)). The far-field matrix block compression is done in such a way that its parent interaction matrix should not be computed at the top level. Matrix compression at each level is carried out using Adaptive Cross Approximation (ACA) <cit.> <cit.> method. The method exploits the rank deficiency property of the far-field matrix blocks. The low-rank sub-block of the far-field Z_sub with m rows and n columns is decomposed into approximate U_(m× k) and V_(k× n) matrices where k is the numerical rank of the low-rank sub-block far-field matrix such that k<<min(m,n). In this work, for memory savings, we only compute half of the H-Matrix <cit.> by making the computation process symmetric, and to maintain the accuracy of the H-Matrix, we use re-compressed ACA <cit.> for far-field block compression. The solution of the iterative solver is iteration count dependent, and further, the convergence iteration count depends on the condition number of the matrix. Also, as the number of unknowns increases, the iterating count for the convergence increases. In the next section, we discuss our proposed method, which is an iteration count and far-field level block independent solution process. § MULTI-LEVEL POWER SERIES SOLUTION The full H-Matrix is a combination of near-field and far-field block matrices. The far-field compressed block matrices are computed for various levels, and in equation (6), the far-field matrix (Z_F) can be further decomposed into the different matrix levels as below: [Z_F]=[Z_F1]+[Z_F2]+[Z_F3]. In the above equation far-field matrix Z_F1 is for level 1, Z_F2 is for level 2. and, Z_F3 is for level 3. Level 3 forms the leaf level of the binary tree and level 1 as the top level of the tree. Fig. 1. shows the H-Matrix layout for a two-dimension strip. In Fig. 1. light gray boxes represent Z_F1 far-field matrix at level 1, dark gray boxes as Z_F2 is for level 2 and large white boxes as Z_F3 for level 3, the black boxes are the near-field dense matrices. For illustrative purposes, the near-field matrix is a diagonal block form for a two-dimension strip. The real-world problems are three-dimension in structure, giving a non-diagonal block near-field matrix. To implement our ML power series solution method, we must diagonalize the near-field block matrix. The near-field matrix in equation (6) is diagonalized using diagonal scaling coefficient [α], as computed in <cit.> such that the scaled diagonal block near-field matrix can be given as: [Z̃_N]=[α][Z_N]. Expanding equation (8) and scaling it with the scaling coefficients [α] gives: [α][Z_N+Z_F1+Z_F2+Z_F3]x=[α]b. [Z̃_N]x+[α][Z_F1]x+[α][Z_F2]x+[α][Z_F3]x=b̃. In the above equation b̃ is a [α] scaled vector b and can be further simplified as : x+ [Z̃_N]^-1[α][Z_F1]x+[Z̃_N]^-1[α][Z_F2]x +[Z̃_N]^-1[α][Z_F3]x= [Z̃_N]^-1b̃. 
Let [Z̃_N]^-1[α][Z_F1]=[U_1], [Z̃_N]^-1[α][Z_F2]=[U_2] and [Z̃_N]^-1[α][Z_F3]=[U_3] equation (12) can further be simplified as x+ [U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃. [I+ U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃. x+[I+ U_1]^-1[U_2]x +[I+ U_1]^-1[U_3]x =[I+ U_1]^-1 [Z̃_N]^-1b̃. Let [I+ U_1]^-1[U_2]=[V_2] and [I+ U_1]^-1[U_3] =[V_3] equation (15) can further be simplified as x+ [V_2]x+[V_3]x = [I+ U_1]^-1 [Z̃_N]^-1b̃. x+[I+ V_2]^-1[V_3]x=[I+ V_2]^-1[I+ U_1]^-1 [Z̃_N]^-1b̃. Let [I+V_2 ]^-1 [V_3 ]=[W_3] and equation (17) can be written as x+[W_3]x=[I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃. x=[I+W_3 ]^-1 [I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃. In the above equations [I+W_3 ]^-1,[I+ V_2 ]^-1 and [I+ U_1 ]^-1 can be solved independently at each level using a power series solution method with the expansion as below: [I+ U_1 ]^-1=[I+ [Z̃_N]^-1[α][Z_F1]]^-1. [I+V_2 ]^-1=[I+[I+U_1 ]^-1 [U_2 ]]^-1 =[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1 [Z̃_N]^-1[α][Z_F2]]^-1. [I+W_3 ]^-1=[I+[I+V_2 ]^-1 [V_3 ]]^-1 =[I+[I+[I+U_1 ]^-1[U_2 ]]^-1[I+U_1 ]^-1[U_3 ]]^-1 =[I+[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1[Z̃_N]^-1 [α][Z_F2 ]]^-1 [I+[[Z̃_N]^-1 [α][Z_F1]]^-1[Z̃_N]^-1[α][Z_F3 ]]^-1. From equations (20), (21), and (22), it can be observed that the solution of these equations is dependent on that level and the lower levels of the binary tree block interaction matrix. At each level, the inverse of the matrix system equation can be efficiently computed by using a fast power series solution<cit.>. The fast power series iterative solution converges in two fixed iterations. The solution process only depends on the matrix-vector product of the H-Matrix, thus retaining the complexity of O(NlogN)<cit.>. The ML solution can be computed at the desired level per the required accuracy. Our results show that the solution at the leaf level gives an accurate result leading to time and memory savings. § NUMERICAL RESULTS In this section, we show the accuracy and efficiency of the proposed method. The simulations are carried out on 128 GB memory and an Intel (Xeon E5-2670) processor system for the double-precision data type. The H-Matrix computation is done with the ACA matrix compression error tolerance of 1e-3 <cit.> and solved with GMRES iterative solver with convergence tolerance of 1e-6 <cit.>. For a compressed or dense matrix [Z] if we want to expand [1+Z]^-1 in power series, the necessary and sufficient condition for convergence is |Z|<1 and we choose 0.1 for our simulations <cit.>.The conductor and dielectric geometry with dielectric constant ϵ_r is meshed with an element size less than λ/10 and λ/(10√(ϵ_r)) respectively. To show the accuracy of the proposed method, the RCS results are compared with full H-Matrix iterative solver<cit.>. In the further subsections, we demonstrate the far-field memory and computation time savings along with in solution time saving with our proposed ML power series solution with different examples. §.§ PEC square plate To show the accuracy and efficiency on a PEC object in this subsection, we consider a square plate of size 15.0 λ along x and y axis meshed with 67,200 unknown edges. The square plate mesh is divided with binary tree division till level 6. The PEC S-EFIE H-Matrix is solved with ML power series solution method and H-Matrix iterative solver. ML power series converges in 2 iterations, and the iterative solver solution converges in 686. Only the far-field matrix at leaf level 6 is computed for the ML power series solution, ignoring far-field computation from levels 1 to 5 of the binary tree. Fig. 2. 
shows the Bi-static RCS of a PEC square plate, and from the Fig., it can be observed that the solution with ML power series solver matches with the H-Matrix iterative solver. Table 1 shows the savings in memory, computation, and solution time of the ML power series solution method as compared with conventional H-Matrix-based iterative solver. §.§ Dielectric slab To show the accuracy and efficiency for a considerable size dielectric problem in this subsection, we consider a dielectric slab elongated along the y-axis with a height of 10.0 λ length, 1.0 λ width, and 0.1 λ thickness and dielectric constant (ϵ_r=2.0) meshed with 120,080 tetrahedral faces. The ML power series converges in 2 iterations, and the regular H-Matrix iterative solver converges in 33 iterations. The dielectric slab mesh is divided with binary tree division till level 10. Only the far-field matrix at leaf level 10 is computed for the ML power series solution. The accuracy of the method for a Bi-static RCS is shown in Fig. 3. Table 2 shows the significant matrix memory, matrix fill and solution time savings of the ML power series solution compared to the conventional H-Matrix-based iterative solver. §.§ Dielectric hollow cylinder In this subsection, we consider a dielectric hollow cylinder elongated along the y-axis with a size of 6.0λ length, 0.4λ outer radii, and 0.05λ thickness with a dielectric constant (ϵ_r=2.0), meshed with 158,830 tetrahedral faces. The ML power series converges in 2 iterations, and the H-Matrix iterative solver converges in 24 iterations. The hollow cylinder mesh is partitioned with a binary tree division till level 8, and for the ML power series solution only the far-field matrix at leaf level 8 is computed. Fig. 4. shows the close match in the bi-static RCS computed using the ML power series method and that with regular H-Matrix iterative solver. Table 3 shows the memory and time saving of the ML power series solution compared to the conventional H-Matrix iterative solver. § CONCLUSION It can be observed from the illustrative examples in the previous sections that our proposed ML power series solution method gives considerable matrix memory, fill and solve time saving for significant size problems. The solution method is as accurate as the H-Matrix iterative solver. The savings may not be substantial for small-size mesh structures. Still, the method will give significant savings for large-size problems taken up for illustration and for complex and sizeable electrical problems like antenna arrays and complex composite structures. Also, the technique is entirely algebraic in nature and can apply to fast analytical solver-based methods like AIM and MLFMA. The matrix block in each level can be computed independently, and the solution of the method only depends on the matrix-vector product of the system matrix. Hence, the proposed method is amenable to efficient parallelization. ACESJournal Yoginder Kumar Negi pict/yknegi.jpg obtained the B.Tech degree in Electronics and Communic-ation Engineering from Guru Gobind Singh Indraprastha University, New Delhi, India, in 2005, M.Tech degree in Microwave Electronics from Delhi University, New Delhi, India, in 2007 and the PhD degree in engineering from Indian Institute of Science (IISc), Bangalore, India, in 2018. Dr Negi joined Supercomputer Education Research Center (SERC), IISc Bangalore in 2008 as a Scientific Officer. He is currently working as a Senior Scientific Officer in SERC IISc Bangalore. 
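To make the level-by-level inversion of Section IV concrete, the following Python sketch is our own illustration: dense random matrices with small spectral norm stand in for the scaled far-field level matrices, the scaled near-field contribution is folded into the right-hand side, and the number of series terms is chosen freely, so none of these choices reflect the actual H-Matrix blocks or the fixed two-iteration scheme used in the paper. Each factor of the factorized solution is applied through a truncated power (Neumann) series that needs only matrix-vector products, and the result is compared against a dense direct solve of the same scaled system.

import numpy as np

def neumann_inverse_apply(op, b, terms=8):
    # approximate (I + Op)^(-1) b by the truncated series b - Op b + Op(Op b) - ...
    # (valid when the operator norm of Op is below 1)
    x = b.copy()
    term = b.copy()
    for _ in range(terms - 1):
        term = -op(term)
        x = x + term
    return x

rng = np.random.default_rng(1)
n = 400

def contraction(scale):
    # dense stand-in for a scaled level matrix, normalized so its spectral norm equals scale
    M = rng.standard_normal((n, n))
    return scale * M / np.linalg.norm(M, 2)

U1, U2, U3 = contraction(0.1), contraction(0.1), contraction(0.1)
c = rng.standard_normal(n)          # stands in for the scaled right-hand side

inv1 = lambda v: neumann_inverse_apply(lambda w: U1 @ w, v)   # (I + U1)^-1
V2   = lambda v: inv1(U2 @ v)                                 # V2 = (I + U1)^-1 U2
inv2 = lambda v: neumann_inverse_apply(V2, v)                 # (I + V2)^-1
V3   = lambda v: inv1(U3 @ v)                                 # V3 = (I + U1)^-1 U3
W3   = lambda v: inv2(V3(v))                                  # W3 = (I + V2)^-1 V3
inv3 = lambda v: neumann_inverse_apply(W3, v)                 # (I + W3)^-1

x = inv3(inv2(inv1(c)))             # level-by-level application of the factorized solution

x_ref = np.linalg.solve(np.eye(n) + U1 + U2 + U3, c)
print("relative error vs dense solve:", np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))

Because every level enters only through products with vectors, the same structure carries over when the dense stand-ins are replaced by compressed H-Matrix levels applied with fast matrix-vector products, which is what preserves the O(NlogN) solution complexity quoted above.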
His current research interests include numerical electromagnetics, fast techniques for electromagnetic application, bio-electromagnetics, high-performance computing, and antenna design and analysis. B. Narayanaswamypict/nbk.jpg received the B.E. degree (Hons.) in Electronics and Communi-cation from the University of Madras, Chennai, India, in 1972, and the Ph.D. degree from the Indian Institute of Science, Bengaluru, India, in 1979. He joined the Department of Aerospace Engineering, Indian Institute of Science, as an Assistant Professor, in 1981, where he became a Full Professor in 1991, served as the Associate Director, from 2005 to 2014, and is currently an INSA Senior Scientist at the Supercomputer Education and Research Centre. He has authored over 200 publications in the international journals and international conferences. His current research interests include numerical electromagnetics, high-performance computing and networks, polarimetric radars and aerospace electronic systems, information security, and digital library. Dr. Narayanaswamy is a fellow of the World Academy of Sciences (TWAS), the National Academy of Science, the Indian Academy of Sciences, the Indian National Academy of Engineering, the National Academy of Sciences, and the Institution of Electronics and Telecommunication Engineers. Sadasiva M. Rao pict/smr.jpg obtained his Bachelors, Masters, and Doctoral degrees in electrical engineering from Osmania University, Hyderabad, India, Indian Institute of Science, Bangalore, India, and University of Mississippi, USA, in 1974, 1976, and 1980, respectively. He is well known in the electromagnetic engineering community and included in the Thomson Scientifics Highly Cited Researchers List. Dr. Rao has been teaching electromagnetic theory, communication systems, electrical circuits, and other related courses at the undergraduate and graduate level for the past 30 years at various institutions. At present, he is working at Naval Research Laboratories, USA. He published/presented over 200 papers in various journals/conferences. He is an elected Fellow of IEEE.
http://arxiv.org/abs/2307.04508v1
20230710120620
Laplace-Transform GW
[ "Johannes Tölle", "Niklas Niemeyer", "Johannes Neugebauer" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph" ]
Laplace-Transform GW Johannes Tölle^1,[email: [email protected]], Niklas Niemeyer^2,, and Johannes Neugebauer^2[email: [email protected]] ^1Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125, USA ^2Theoretische Organische Chemie, Organisch-Chemisches Institut and Center for Multiscale Theory and Computation, Westfälische Wilhelms-Universität Münster, Corrensstraße 36, 48149 Münster, Germany ^Both authors contributed equally. Date: July 9, 2023 empty Abstract We present a simple and accurate GW implementation based on a combination of a Laplace transformation (LT) and other acceleration techniques used in post-SCF quantum chemistry, namely, natural auxiliary functions and the frozen-core approximation. The LT-GW approach combines three major benefits: (a) a small prefactor for the computational scaling, (b) easy integration into existing molecular GW implementations, and (c) significant performance improvements for a wide range of possible applications. Illustrating these advantages for systems consisting of up to 352 atoms and 7412 basis functions, we further demonstrate the benefits of this approach combined with an efficient implementation of the Bethe–Salpeter equation. INTRODUCTION – After its introduction in 1965 <cit.>, the GW (G: time ordered one-body Green’s function, W: screened Coulomb interaction) method has now become the standard approach for the accurate ab-initio determination of ionization potentials (IPs), electron affinities (EAs) (or more generally quasi-particle energies), and in combination with the Bethe–Salpeter equation (BSE), for excitation energies in condensed matter physics <cit.>. The adoption within the realm of quantum chemistry has been established in recent years <cit.> with the availability of implementations in a wide range of molecular quantum chemistry codes, see e.g., Refs. <cit.>. The success of the GW method is owed to the fact that it offers good accuracy while being computationally feasible for a wide range of systems, c.f. Ref. <cit.>. However, the GW method generally relies on error cancellation, and G_0W_0, in particular, depends on the starting point chosen, the approach used for determining the dielectric function, and the self-consistency scheme chosen for the GW calculation. An excellent overview of the different aspects related to the GW approximation can be found in Ref. <cit.>. Especially the computational cost for determining the screened Coulomb interaction and therefore the G_0W_0 self-energy Σ_0 varies significantly for different practical realizations of the GW method in molecular orbital bases. The “fully-analytic” approach <cit.>, for example, scales as 𝒪(N^6). The scaling can be reduced significantly by numerical integration of the self-energy Σ_0, Σ_0(,,ω) = i/2π∫ dω' e^iω'η G_0(,,ω+ω') W_0(,,ω'), where the non-interacting one-particle Green's function is denoted as G_0 and the screened Coulomb interaction as W_0. To avoid divergences along the real frequency axis <cit.>, the integration in Eq. (<ref>) is commonly performed along the imaginary frequency axis in combination with analytic continuation (AC) to the real frequency axis leading to a formal scaling of 𝒪(N^4) <cit.>. Alternatively, one can employ the so-called contour-deformation approach (CD) <cit.> by dividing the integration in Eq. (<ref>) into an integration along the imaginary frequency axis and the real-frequency axis. 
The scaling, however, is 𝒪(N^4-5) and depends on the quasi-particles to be determined (see Ref. <cit.>). Σ_0 can also be determined within the space-time formulation of the GW method <cit.>. In this approach, the construction of W_0 is performed in imaginary-time rather than frequency space in combination with additional techniques, among others, real-space grid representation of the Green's function <cit.>, pair atomic density fitting <cit.>, or separable density-fitting <cit.> to reduce the overall scaling to 𝒪(N^3). Note that this ansatz shares certain similarities to Laplace-transform (LT) techniques developed in molecular quantum chemistry <cit.>. One drawback of these methods is, however, related to increasing memory requirements and larger prefactors due to the real-space representation <cit.>, potentially uncontrollable errors introduced by exploiting locality <cit.>, or the necessity to construct specialized real-space grids <cit.>. These aspects also lead to more challenging numerical implementations of these methods, potentially limiting their widespread application. This work demonstrates an alternative efficient evaluation of the GW self-energy by combining different ideas for reducing the computational cost based on the AC-GW formulation. In particular, we make use of a Laplace transformation for the evaluation of W_0, a truncation of the auxiliary basis using natural auxiliary functions (NAF) <cit.> and the frozen-core (FC) approximation. We refer to this approach as LT-GW which is based on three guiding principles: (a) a small prefactor should be preserved, (b) adaptation of existing AC-GW implementations should require minimal effort, and (c) significant performance improvements should result for a wide range of system sizes with controllable error.   THEORY – In the following, a concise overview of the modified GW implementation based on the Laplace-transform (LT) technique is given. More detailed information regarding GW implementations based on imaginary frequency integration can be found in Refs. <cit.>. A diagonal element nm for the correlation part of the screened-Coulomb interaction W^c_nm in a molecular orbital basis for an imaginary frequency iω is calculated as W^c_nm(iω') = ∑_PQ R^P_nm{[1 - Π(iω')]_PQ^-1 - δ_PQ}R^Q_nm, where molecular spin-orbital (ϕ) and auxiliary basis function (χ) indices are given in lowercase and uppercase letters, respectively. Furthermore, i,j,… refer to occupied, a,b,… to virtual, and n,m,… to arbitrary orbitals with eigenvalues ϵ. Π_PQ(iω') is evaluated as Π_PQ(iω') = - 2 ∑_iaR^P_ia(ϵ_a - ϵ_i)/ω'^2 + (ϵ_a - ϵ_i)^2 R^Q_ia, and the transformed three-center integrals R^P_nm are defined as R^Q_nm = ∑_P (nm|P) [𝐕^-1/2]_PQ, with (nm|P) = ∫ d∫ dϕ_n() ϕ_m() χ_P()/| - |, and V_PQ = ∫ d∫ dχ_P() χ_Q()/|-|. In AC-GW, the construction of Π_PQ(iω') is the most time-consuming step, formally scaling as 𝒪(N_oN_vN_aux^2) for each imaginary frequency (N_o being the number of occupied orbitals, N_v the number of virtual orbitals, and N_aux the number of auxiliary functions). Finally, the correlation (dynamical) part of the G_0W_0 self-energy Σ^c is obtained (ϵ_F denotes the Fermi-level) Σ_n^c(iω)= -1/π∑_m ∫_0^∞ d ω' iω + ϵ_F - ϵ_m/(iω + ϵ_F - ϵ_m )^2 + ω'^2 W_nm(iω'), which is integrated numerically using a modified Gauss-Legendre quadrature, see Refs. <cit.>. Quasi-particle energies are then determined by AC of Σ^c to the real frequency axis. For the AC to the real frequency axis, we use a N-point Padé approximation as described in the appendix of Ref. 
<cit.>. In this work, we make use of the LT for evaluating Π_PQ(iω'). In a first step, the denominator in Eq. (<ref>) is rewritten as 1/ω'^2 + (ϵ_a - ϵ_i)^2 = ∫^∞_0 dτexp(-(ω'^2 + (ϵ_a - ϵ_i)^2)τ) = ∫^∞_0 dτexp(-ω'^2τ) exp(-( ϵ_a - ϵ_i)^2 τ). holding for (ω'^2 + (ϵ_a - ϵ_i)^2) > 0 which is guaranteed to be true. Replacing the denominator with the integral in Eq. (<ref>) allows to apply a numerical integration of the form 1/ω'^2 + (ϵ_a - ϵ_i)^2 ≈ - ∑_m^N_LT w_m exp(-(ω'^2 + (ϵ_a - ϵ_i)^2) x_m) = - ∑_m^N_LT w_m exp(-ω'^2 x_m) exp(-(ϵ_a - ϵ_i)^2 x_m), where the N_LT quadrature points and their corresponding weights are denoted as x_m and w_m, respectively. Factorizing the exponential functions with frequencies and orbital-energy differences as their arguments through the LT allows evaluating their contributions to Π_PQ(iω') separately as Π_PQ(iω') ≈ -2 ∑_m ∑_iaR^P_ia w_m (ϵ_a - ϵ_i) e^-(ϵ_a - ϵ_i)^2 x_m R^Q_ia_M^m_PQ(iω') e^-ω'^2 x_m. In practice, M^m_PQ(iω') is calculated for each quadrature point, which requires N_LT N_oN_vN_aux^2 operations, followed by the outer loop over imaginary frequencies [see Eq. (<ref>)] counting N_LT N_aux^2 N_iω operations. In contrast, the evaluation of Eq. (<ref>) for the determination of quasi-particle energies requires N_iω N_oN_vN_aux^2 operations. It becomes clear that the formal scaling remains unchanged with 𝒪(N^4) since neither N_iω nor N_LT depends on the system size represented by N. A constant speed-up can, however, be expected using the LT technique as long as N_LT < N_iω which is proportional to the ratio N_iω/N_LT. The natural auxiliary function (NAF) approximation <cit.> reduces the size of the three-index integral tensor that commonly appears in post-SCF methodology making use of the resolution of the identity approximation. Its basis is given by a symmetric, positive definite matrix K that reads K_PQ = ∑_nm R^P_nmR^Q_nm. A rank reduction of the three-index integral list is achieved by first diagonalizing K to yield the NAFs labeled by P̃, ∑_Q K_PQ V_Q,P̃ = V_P P̃ϵ_P̃ , followed by setting up a transformation matrix U_PP̃ that only includes NAFs with corresponding eigenvalues above a certain threshold ε_NAF (assembled from the columns of V_P P̃). Finally, the three-center integral tensor is transformed to the NAF space following R^P̃_nm = ∑_P R^P_nm U_PP̃. In the limit of U including all eigenvectors of K, Eq. (<ref>) represents an orthogonal transformation. Our implementation omits the virtual–virtual part of the sum in Eq. (<ref>) due to its unfavorable scaling with the system size. Closed-shell molecules are handled by including a factor of two in Eq. (<ref>) to account for the single set of spatial orbitals. Determining the NAFs formally scales as 𝒪(N_o N_v N^2_aux). The theoretical speed-up of the NAF approximation in AC-GW calculations becomes apparent when inspecting Eqs. (<ref>) and (<ref>). The time-determining step includes an inner product of the three-index integral tensor contracting the occupied–virtual composite index ia. As a result, the expected speed-up scales quadratically with the quotient of the number of original auxiliary basis functions N_aux and the number of NAFs N_NAF, that is, (N_aux/N_NAF)^2.   Quasi-particle energies using LT-G_0W_0 – A detailed overview of the computational details is given in Sec. S1 of the Supporting Information (SI). In the following, we will demonstrate the robustness, scalability, and speed-up of combining AC-G_0W_0 with the LT, NAF, and FC techniques. 
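Before turning to the benchmarks, the NAF step just introduced can be illustrated with a compact Python sketch. This is our own schematic example: the dimensions, the fake three-center integrals, the orbital-energy gaps, and the truncation threshold are all invented and only play the roles of their counterparts above, and the LT grid construction is not reproduced here.

import numpy as np

rng = np.random.default_rng(2)
n_occ, n_virt, n_aux = 30, 120, 500
n_ov = n_occ * n_virt

# fake three-center integrals R^P_ia with decaying importance of the auxiliary index,
# and positive orbital-energy differences eps_a - eps_i (all values made up, in a.u.)
R = rng.standard_normal((n_aux, n_ov)) * np.exp(-0.01 * np.arange(n_aux))[:, None]
gaps = 0.5 + 4.0 * rng.random(n_ov)

def polarizability(R_pia, omega):
    # Pi_PQ(i omega) = -2 sum_ia R^P_ia (eps_a - eps_i) / (omega^2 + (eps_a - eps_i)^2) R^Q_ia
    d = gaps / (omega**2 + gaps**2)
    return -2.0 * (R_pia * d) @ R_pia.T

# NAF construction: K_PQ = sum_ia R^P_ia R^Q_ia, keep eigenvectors above a threshold
K = R @ R.T
eigval, eigvec = np.linalg.eigh(K)
keep = eigval > 1e-2 * eigval.max()          # toy cut-off playing the role of eps_NAF
U_naf = eigvec[:, keep]
R_naf = U_naf.T @ R                          # integrals transformed to the NAF basis

omega = 0.3
Pi_full = polarizability(R, omega)
Pi_naf = U_naf @ polarizability(R_naf, omega) @ U_naf.T   # back-transformed for comparison
rel_err = np.linalg.norm(Pi_full - Pi_naf) / np.linalg.norm(Pi_full)
print(f"kept {U_naf.shape[1]} of {n_aux} auxiliary functions, relative error in Pi: {rel_err:.1e}")

In the truncated basis, the time-critical contraction over the composite ia index runs over the retained NAFs rather than over all auxiliary functions, which is the origin of the roughly (N_aux/N_NAF)^2 speed-up estimated above.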
First, its accuracy is determined for a subset of the GW100 benchmark set <cit.>. Reference orbitals were obtained using the Hartree–Fock approximation throughout. All results are compared to reference quasi-particle (QP) energies based on the “fully-analytic” evaluation of the G_0W_0 self-energy without employing the RI approximation (also for the mean-field calculation) <cit.>. The results of 15 representative molecular systems are explicitly shown here and deviations for the rest of the benchmark set can be found in the SI. Note that we omitted all molecular systems containing very heavy atoms such as iodine and xenon, as well as the rubidium and silver dimers because we restrict ourselves here to a non-relativistic description and do not use effective core potentials in this work. This reduces the total number of systems included in our calculations to 93. The signed error for the HOMO and LUMO QP energies relative to the “fully-analytic” evaluation of the G_0W_0 self-energy without making use of the RI approximation are shown in Tabs. <ref> and <ref>. The approximate treatments include (a) the “fully-analytic” approach using the RI approximation, (b) AC-G_0W_0, (c) AC-G_0W_0 in combination with LT (ε_LT=10^-7), (d) AC-G_0W_0 in combination with FC, (e) AC-G_0W_0 in combination with the NAF approximation (ε_NAF = 10^{-6,-4,-2}), and (f) combining AC-G_0W_0 with LT/NAF/FC (ε_LT=10^-7, ε_NAF = 10^{-6,-4,-2}). Comparing the “fully-analytic” evaluation with and without the RI approximation, a mean absolute error (MAE) of 1.1 meV (HOMO) and 1.6 meV (LUMO) in the quasi-particle energies is found. Virtually identical deviations are obtained for AC-G_0W_0 highlighting its applicability for determining valence G_0W_0 quasi-particle energies. Applying the LT leads to almost identical results with deviations smaller than 0.1 meV, numerically justifying the chosen parameters for the LT quadrature. Introducing additional approximations such as NAF and FC increases the QP errors. However, the overall accuracy for the different thresholds and combinations of the various approximations remains below an MAE of 10.0 meV for both HOMO and LUMO quasi-particle energies with the largest deviation of 29.6 meV for the HOMO quasi-particle energy of vinyl bromide in the case of FC and AC/FC/LT/NAF. As described in the SI, this error originates from the FC for bromine and can readily be reduced to below 5 meV by adjusting the number of frozen core orbitals. Because all systems in the following mainly contain first- and second-row elements (with the exception of WW-6 which is separately benchmarked against non-FC calculations), we continue to use the default number for frozen core orbitals as described in Sec. S1 of the Supporting Information. From the above analysis, it becomes clear that AC-G_0W_0 in combination with a comparatively loose NAF threshold of 10^-2 leads to an almost negligible error. As a result, all further calculations shown in this article will be confined to this threshold. Next, we performed G_0W_0 calculations on water clusters (see Fig. <ref>) of increasing size containing ten to 100 water molecules (corresponding to 430 to 4300 SCF basis functions in a def2-TZVP basis, respectively) and investigate QP energies and computational timings (computational details are given in Sec. S1 of the Supporting Information). 
The geometries were obtained by first generating a cubic 20× 20× 20 Å^3 water cluster containing 233 water molecules with VMD <cit.>, optimizing it with GFN2-xTB (6.4.1) <cit.> and then including the respective number of molecules closest to the center of mass of the whole cluster. In Fig. <ref>, we display the signed error in QP energies as a function of the number of molecules included in the water cluster for the HOMO and the LUMO for the different approximate strategies employed here as well as a combination thereof. Again, we find that the LT approximation does not introduce significant errors in QP energies for either the HOMOs or the LUMOs. For the NAF approximation (ε_NAF = 10^-2), the error with respect to the reference calculation is constant at about 1.5 meV and 3.0 meV for the HOMO and the LUMO, respectively. For the FC approximation, a constant error of about 3.5 meV and -0.5 meV is observable for the HOMO and the LUMO energies, respectively. While the error of the approximation combining LT, NAF, and FC exceeds the individual errors in the HOMO case (about 4.5 meV), we find partial error cancellation in the LUMO case (about 1.8 meV). Most importantly, however, it can be seen that (a) the error in QP energies is essentially independent of the system size and (b) the magnitude of QP energy errors is within a tolerable range using the approximations and thresholds suggested here (compare SI, Sec. 1). As a next step, we show computational timings of the various G_0W_0 methods. To assess the practical scaling behavior with the system size, we consider a double logarithmic plot of wall-clock timings for the calculation of the screened Coulomb interaction W_0 [see, e.g., Eq. (<ref>)] as a function of the number of SCF basis functions in Fig. <ref>. A non-logarithmic wall-clock timing plot along with the resulting speed-ups can be found in Fig. S2 of the Supporting Information. Taking a look at the corresponding linear fits performed on the data in Fig. <ref>, we find a slope of 3.34 for the unmodified AC-G_0W_0 algorithm, which is only slightly smaller than the formal scaling exponent of four that would be expected for the AC approach. The exponent is reduced by both the FC and NAF approximations to 3.30 and 3.13, respectively, where no such reduction would be expected for the exponent but rather for the prefactor only. Here, we note that the number of NAFs included in the calculations is on average 25–30% lower than the number of original auxiliary basis functions. For the water cluster containing 100 water molecules, the auxiliary-basis size reduction is 26%, which should result in a speed-up of 0.74^-2≈ 1.83, and which is close to the observed speed-up of 2.0. The LT approximation leads to a lowering of the exponent from 3.34 to 2.78. In this case, the expected speed-up should be proportional to the quotient of the original number of imaginary frequencies and the number of Laplace grid points (see Eq. <ref>). For the cluster containing 100 water molecules, this ratio is 128/17 ≈ 7.5 which compares well with the observed speed-up of 6.7. Inspecting the exponents of the two combined approximations LT/NAF as well as LT/NAF/FC, we find that the individual reductions in computational scaling add up so that for LT/NAF/FC the slope of the linear fit (as a measure of the computational scaling) is lowered by almost one with respect to the regular AC-G_0W_0 calculation. 
For the presented wall-clock timings, it can thus be seen that, although the formal scaling behavior is unchanged by the approximations introduced, LT-G_0W_0 leads to a drastically lower practical computational scaling while retaining a very high degree of accuracy. Additionally, we consider absolute timings of the G_0W_0 and eigenvalue-self-consistent GW (five cycles) calculations for the cluster containing 100 water molecules to illustrate the speed-up that can be expected in practical calculations with moderately sized systems and the LT-G_0W_0 method. The results can be found in Tab. <ref>. It turns out that the speed-ups of the composite approximation LT/NAF/FC are 18.1 and 17.6 for G_0W_0 and evGW, respectively, which slightly exceeds the product of the speed-ups of the individual LT (6.7 and 6.6), NAF (2.0 and 2.1), and FC (1.2 and 1.3) approximations, each amounting to roughly 16. The individual approximations thus do not interfere with each other but can constructively be used in combination, and the respective speed-up directly carries over to (partially) self-consistent GW calculations. Finally, we note that the G_0W_0 calculation using only the LT approximation is about twice as fast as the regular one already for the smallest investigated water cluster containing 10 molecules (10 seconds vs 20 seconds), providing evidence for the small prefactor of LT-GW combined with the NAF and FC approximations. LT-G_0W_0 with BSE – We apply a combination of LT-G_0W_0 and the Bethe–Salpeter (BSE) equation to investigate the effect of the LT approximation on the accuracy of linear absorption spectra. The BSE calculations are performed with the efficient integral-direct resolution of the identity implementation for the Hartree–Fock and long-range exchange part of the response matrix in Serenity originally presented in our work in Ref. <cit.>. As introduced above, the LT-G_0W_0 method refers to the application of the LT, NAF, and FC approximation and will be used in the following. As a first test case, we consider the WW-6 dye relevant in photovoltaics <cit.>. The molecular geometry was taken from Ref. <cit.> and is displayed in Fig. <ref>. Within the def2-TZVP basis set, there are 5583 SCF basis functions as well as 13802 auxiliary basis functions for the GW/BSE part of the calculation. In Fig. <ref>, we compare the linear absorption spectra for the WW-6 system that was obtained with the regular AC-G_0W_0/BSE calculation with the LT-G_0W_0 calculation employing both the NAF (ε_NAF = 10^-2) and the FC approximations. In both cases, eight of the lowest-lying excitation energies and corresponding oscillator strengths were determined. The FC approximation was not applied for the BSE calculations. We find no visible difference between the linear absorption spectra calculated with the regular and the approximate approach. Numerical results for QP energies as well as excitation energies and oscillator strengths can be found in Tabs. <ref> and <ref>, respectively. The mean deviation of QP energies is about 9.6 meV which far exceeds the mean error of excitation energies and oscillator strengths which amount to 0.75 meV and 0.39· 10^-3 a.u., respectively. The occupied and virtual QP energy errors are more systematic for this test system than for the HOMOs and LUMOs of the water clusters investigated beforehand. This results in more favorable error cancellation for excitation energies, which depend on QP energy differences. 
The errors of the oscillator strengths are equally negligible, which, in turn, is probably a result of the eigenvectors of the BSE problem being largely unaffected because of the error cancellation mentioned above. Inspecting the computational timings (given in Fig. <ref>), we find that in the regular case, the overall wall-clock timings are dominated by the calculation of the screened Coulomb interaction W with 2293 minutes, while in the approximate case, the BSE part of the calculation exceeds the time needed for the GW calculation by far. Here, the overall G_0W_0 calculation time is, in fact, dominated by the preparation of the three-index MO integrals, as the calculation of W only took 103 minutes. We also note that for the approximate calculation, setting up the NAF matrix, diagonalizing it, and then performing the NAF transformation to the three-index integral tensor introduces a small overhead of about 25 minutes (or ten percent), which is summarized in the timings for the “MO Ints”. The number of NAFs included in the calculation was 8755 corresponding to a reduction of 37% with respect to the full number of auxiliary basis functions. The speed-up for the entire calculation amounts to 2.3 (3915 minutes vs 1720 minutes) while the speed-up for the calculation of the screened Coulomb interaction alone is 22.3 (2293 minutes vs 103 minutes). These calculations demonstrate that LT-GW is able to provide accurate references for BSE calculations, while drastically reducing the computational demand of the preceding G_0W_0 calculation. As a second test system, we consider stacks of BODIPY dyes, which are of interest in the field of supramolecular polymer design <cit.>. Additionally, supermolecular BODIPY-based compounds are interesting for GW/BSE calculations in particular because alternative (standard) methods for predicting their absorption spectra may either lack the necessary accuracy (e.g. linear response time-dependent density-functional theory, see e.g. Ref. <cit.>) or are simply not feasible for this kind of system size (e.g. coupled cluster-based methodology such as coupled cluster with singles and approximate doubles <cit.> and even local variants thereof <cit.>). In our calculations, we include monomer, dimer, and tetramer geometries (provided by the authors of Ref. <cit.> and displayed in Fig. <ref>) and compare our G_0W_0/BSE-based spectra with experimental ones in Fig. <ref>. For all n-mers, 32 of the lowest-lying excitation energies and corresponding oscillator strengths were determined after calculating 20 of both the lowest-lying virtual and highest-lying occupied QP energies for each monomer in each geometry, that is, 40 for the dimer as well as 80 for the tetramer. Based on the findings of the approximate calculations for the WW-6 test system, we omit G_0W_0 calculations that do not apply any further approximations here. The experimental spectra exhibit three main bands at about 600, 400, and 300 nm. Interestingly, a strong blue shift of, in particular, the energetically lowest-lying absorption band is observed upon aggregation (experimentally induced by lowering the solution temperature). This behavior can most likely be attributed to the corresponding interaction of the transition dipole moments of the monomers in this stacking pattern. Going over to the computed spectra, one finds that the monomer spectrum reproduces the position and intensity of the experimental bands with a high degree of accuracy (given a constant shift of the absorption spectrum of 0.48 eV). 
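The computed spectra compared with experiment are obtained by broadening the discrete excitation energies and oscillator strengths into a continuous profile and, where stated, applying a constant shift (0.48 eV above). A minimal sketch of this post-processing step, assuming Gaussian broadening with an arbitrary width; the stick data below are placeholders and not the computed BODIPY or WW-6 values.

```python
import numpy as np

def broadened_spectrum(exc_eV, osc, grid_eV, fwhm=0.3, shift=0.0):
    """Gaussian-broadened absorption spectrum from stick data.

    exc_eV : excitation energies in eV
    osc    : corresponding oscillator strengths
    grid_eV: energy grid on which the spectrum is evaluated
    fwhm   : Gaussian full width at half maximum in eV (assumed value)
    shift  : constant shift applied to all excitation energies in eV
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    spec = np.zeros_like(grid_eV)
    for e, f in zip(exc_eV, osc):
        spec += f * np.exp(-0.5 * ((grid_eV - (e + shift)) / sigma) ** 2)
    return spec

# Placeholder stick spectrum
exc = np.array([2.1, 3.0, 4.1])      # eV
osc = np.array([0.8, 0.1, 0.4])
grid = np.linspace(1.5, 5.0, 701)
spectrum = broadened_spectrum(exc, osc, grid, fwhm=0.3, shift=0.48)

# Convert the grid to wavelength (nm) for comparison with experiment
wavelength_nm = 1239.84 / grid
print(wavelength_nm[np.argmax(spectrum)], "nm at the spectrum maximum")
```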
It can further be seen that the blue shift of the lowest-lying absorption band of the dimer compares well with the experimental one. The computed tetramer spectrum exhibits a blue shift far exceeding the experimental one. This is most likely due to a combination of different factors. On the one hand, the experimental spectrum is a combination of several different aggregates of varying sizes and particular arrangements. On the other hand, the tetramer geometry was obtained by stacking two dimers on top of each other followed by a reoptimization. As a result, the distance between the inner two monomers is smaller than the distance between the outer pairs which could lead to an overestimation of the excitonic couplings leading to the blue shift. The GW calculation (screened Coulomb interaction W) took 6, 70, and 813 minutes for the monomer, dimer, and tetramer, respectively.   CONCLUSION – We have presented the LT-GW method, for which we numerically demonstrated that it follows our three main objectives: (a) a small prefactor, (b) minimal effort for adaptation in existing AC-GW codes, and (c) significant performance improvements (up to 22-fold) for a wide range of system sizes with controllable error. For this, LT-GW combines the GW approximation in the context of the analytic continuation (AC) approach with a Laplace transformation (LT), natural auxiliary functions (NAFs), and the frozen-core (FC) approximation. We have highlighted its synergy with the BSE for calculations of excitation energy and properties for extended systems consisting of up to 7412 basis functions. We are convinced that the LT-GW method constitutes a practical and widely applicable extension to existing GW implementations for molecular systems. In the LT-G_0W_0/BSE calculations, we have shown that the computational time is now dominated by the BSE calculation. Based on our three guiding principles, we aim to achieve similar improvements also for the BSE in the future by making use of, for example, minimal auxiliary basis sets <cit.> or simplified integrals <cit.>.   Computational details, additional analysis of quasi-particle energies for atoms and molecules from the GW100 benchmark set as well as non-logarithmic wall-clock-timings and the speed-up plot of the water clusters can be found in the Supporting Information. J.T. gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through DFG-495279997. N.N. and J.N. gratefully acknowledge funding by the DFG through SFB 1459 (Project A03, Project-ID A03-433682494). We would like to thank Christian Mück-Lichtenfeld for providing the monomer, dimer, and tetramer BODIPY geometries originally presented in Ref. <cit.>. We would like to thank Alexander Rödle and Gustavo Fernández for providing the raw data of the experimental absorption spectra originally presented in Ref. <cit.>. The data supporting the findings of this study are available either within the supplementary material or upon reasonable request from the authors.
http://arxiv.org/abs/2307.04356v1
20230710054920
Reducing Information Loss for Spiking Neural Networks
[ "Yufei Guo", "Yuanpei Chen", "Liwen Zhang", "Xiaode Liu", "Xinyi Tong", "Yuanyuan Ou", "Xuhui Huang", "Zhe Ma" ]
cs.NE
[ "cs.NE", "cs.CV" ]
headings 88 ECCV-22 submission ID ECCV-22 submission ID Paper ID Reducing Information Loss for SNNs Guo, Y. et al. Intelligent Science & Technology Academy of CASIC, Beijing 100854, China Chongqing University, Chongqing, 400044, China [email protected], [email protected], [email protected] Reducing Information Loss for Spiking Neural Networks Yufei Guo1Equal contribution. Yuanpei Chen1^⋆ Liwen Zhang1 YingLei Wang1 Xiaode Liu1 Xinyi Tong1 Yuanyuan Ou2 Xuhui Huang1 Zhe Ma1 August 12, 2023 ====================================================================================================================================== The Spiking Neural Network (SNN) has attracted more and more attention recently. It adopts binary spike signals to transmit information. Benefitting from the information passing paradigm of SNNs, the multiplications of activations and weights can be replaced by additions, which are more energy-efficient. However, its “Hard Reset" mechanism for the firing activity would ignore the difference among membrane potentials when the membrane potential is above the firing threshold, causing information loss. Meanwhile, quantifying the membrane potential to 0/1 spikes at the firing instants will inevitably introduce the quantization error thus bringing about information loss too. To address these problems, we propose to use the “Soft Reset" mechanism for the supervised training-based SNNs, which will drive the membrane potential to a dynamic reset potential according to its magnitude, and Membrane Potential Rectifier (MPR) to reduce the quantization error via redistributing the membrane potential to a range close to the spikes. Results show that the SNNs with the “Soft Reset" mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets. § INTRODUCTION Deep Neural Networks (DNNs) have greatly improved many applications in computational vision, , object detection and recognition <cit.>, object segmentation <cit.>, object tracking <cit.>, etc. In pursuit of models with better performance, more and more complex networks are proposed. However, the increasing complexity poses a new challenge to model deployment on power-constrained devices, thus becoming an impediment to the applications of these advanced complex models. There have been several approaches to address this problem, such as quantization <cit.>, pruning <cit.>, knowledge distillation <cit.>, spiking neural networks (SNNs) <cit.>, and so on. Among these approaches, the biology-inspired method, SNNs provide a unique way to reduce energy consumption by mimicking the spiking nature of brain neurons. A spiking neuron integrates the inputs over time and fires a spike output whenever the membrane potential exceeds the firing threshold. And using 0/1 spike to transmit information makes SNNs enjoy the advantage of multiplication-free inference by converting multiplication to additions. Furthermore, SNNs are energy-efficient on neuromorphic hardwares, such as SpiNNaker <cit.>, TrueNorth <cit.>, Darwin <cit.>, Tianjic <cit.>, and Loihi <cit.>. Despite the attractive benefits, there is still a huge performance gap between existing SNN models and their DNN counterparts. We argue that the reason for the low accuracy is there exists information loss in SNNs. 
First, the information processing of neurons in supervised training-based SNNs are generally following the rules of the Integrate-and-Fire (IF) model or Leaky IF (LIF) model, where once a membrane potential exceeds the firing threshold, a “Hard Reset” operation will force the “residual” potential to be set to 0, , once fired, all the information will be taken away. Obviously, this mechanism of “residual” membrane potential-ignored reset mode would fail to preserve the diversity of various membrane potentials. Hence the information encoding capacity of the network is compromised, such that the risk of information loss increases accordingly. Second, although the 0/1 spike information processing paradigm enables SNNs to enjoy the advantage of high efficiency, quantifying the real-valued membrane potential to 0/1 spikes will inevitably introduce the quantization error, which also brings about information loss. To address the information loss problem, we propose a “Soft Reset”-based IF (SRIF) neuron model that retains the “residual” membrane potential from subtracting its spike value at the firing instants. Hence the diversity of the membrane potentials that exceed the firing threshold will be preserved. Though “Soft Reset” is commonly used in converting methods from ANN to SNN (ANN2SNN) <cit.> methods, rarely applied in supervised SNNs <cit.>, and has not been discussed in SNN enhancement from the perspective of information loss reducing. In addition, for alleviating quantization error, the Membrane Potential Rectifier (MPR) is proposed, which is performed before the firing activity to adjust the membrane potentials towards the spike values (, 0/1). With MPR, the membrane potential will be decoupled as an original one and a modulated one. The original one can keep the mechanism of a neuron and the modulated one enjoys less quantization error than the original one without suffering from any negative effects. The difference between our neuron and the vanilla neuron is illustrated in Fig. <ref>. Our main contributions are as follows: * We propose using the SRIF model for supervised training-based SNNs. By retaining the “residual” membrane potential, SRIF enables the networks to distinguish the differences among those membrane potentials that exceed the firing threshold via subtracting their spike values thus enhancing the information encoding capacity of supervised training-based SNNs. * We present MPR to mitigate the quantization error. By utilizing a non-linear function to modulate the membrane potential close to 0/1 before firing activity triggers, the gap between the potential and its corresponding 0/1 spike value is minified while maintaining the sparse spike activation mechanism of SNNs. To our best knowledge, few works have noticed the quantization error in SNNs, and a simple but effective method for addressing this problem is presented. * Extensive experiments on both static and dynamic datasets were conducted to verify our method. Results show that the SNN trained with the proposed method is highly effective and efficient compared with other state-of-the-art SNN models, , 96.49% top-1 accuracy and 79.41% top-1 accuracy are achieved on the CIFAR-10 and CIFAR-100. These results of our models even outperform their DNN counterparts surprisingly, and it is very rare that SNNs may have a chance to surpass their DNN counterparts. § RELATED WORK §.§ Learning Methods of Spiking Neural Networks The training methods of SNNs can be divided into two categories. The first one is ANN2SNN <cit.>. 
ANN2SNN yields the same input-output mapping for the ANN-SNN pair via approximating the continuous activation values of an ANN using ReLU by averaging the firing rate of an SNN under the rate-coding scheme. Since the ANN has achieved great success in many fields, ANN2SNN can maintain the smallest gap with ANNs in terms of performance and can be generalized to large-scale structures. However, being restricted to rate-coding, ANN2SNN usually requires dozens or even hundreds of timesteps to obtain well-performed networks. Lots of efforts have been done to reduce the long inference time, such as weight normalization <cit.>, threshold rescaling <cit.>, soft reset <cit.>, threshold shift <cit.>, and the quantization clip-floor-shift activation function <cit.>, it is still hard to obtain high-performance SNNs with ultra-low latency. The second one is supervised learning-based SNNs. SNNs quantize the real-valued membrane potentials into 0/1 spikes via the firing activity. Since the gradient of the firing activity function is zero almost everywhere, the gradient descent-based optimizer can not be directly used for the training of SNNs. To alleviate the optimization difficulty, the approximate gradient-based strategy is commonly used, and some related approaches had been proposed to achieve trainable SNNs with high performance. For example, by regarding the SNN as a special RNN, a training method of back-propagation through time with different kinds of surrogate gradient was proposed <cit.>. The spatio-temporal back-propagation (STBP) <cit.> method enables SNNs to be trained on the ANN programming platform, which also significantly promotes the direct training research of SNNs. Differentiable spike which can match the finite difference gradient of SNNs well was proposed in <cit.>. The temporal efficient training (TET) <cit.> method with a novel loss and a gradient descent regime that succeeds in obtaining more generalized SNNs, has also attracted much attention. In RecDis-SNN <cit.>, a new perspective to understand the difficulty of training SNNs by analyzing undesired membrane potential shifts is presented and the MPD-Loss to penalize the undesired shifts is proposed. Numerous works verify that supervised learning can greatly reduce the number of timesteps and handle dynamic datasets. It has increasingly aroused researchers’ interest in recent years. In this work, we focus on improving the performance of the supervised learning-based SNNs by repressing information loss, which is rarely mentioned in other works. §.§ Threshold-dependent Batch Normalization Batch Normalization (BN) is one of the most widely used normalization technologies, which is initially designed for very deep Convolutional Neural Networks (CNNs). As it only focuses on normalizing the spatial feature maps, directly applying BN to SNNs would damage the temporal characteristic of SNNs, which stand with spatio-temporal feature maps, leading to low accuracy. To address this issue, some specially-designed normalization methods for SNNs were proposed recently. Typically, to simultaneously balance neural selectivity and normalize the neuron activity, NeuNorm <cit.> was proposed. Then, a more effective normalization technique that can take good care of the firing threshold, named threshold-dependent Batch Normalization (tdBN) was further proposed in <cit.>. It can normalize the feature maps of SNNs in both spatial and temporal domains <cit.>. 
Specifically, let X_t ∈ℝ^B× C× H× W represent the input maps at each timestep, where t=1,…,T (B: batch size; C: channel; (H, W): spatial domain). Then for each channel c, the spatio-temporal sequence X^(c) = {X_1^(c), ⋯ ,X_T^(c)} is normalized by tdBN as follows, X̃^(c) = λ·α V_th(X^(c)-x̅^(c))/√( mean((X^(c)-x̅^(c))^2)+ϵ) + β, where V_th is the firing threshold, α is a network-structure-dependent hyper-parameter, ϵ is a tiny constant, λ and β are two learnable parameters, x̅^(c)= mean(X^(c)) is the mean value of X^(c), X̃^(c) is the normalized maps. In this paper, tdBN is also adopted considering its spatio-temporal normalization mechanism. § PRELIMINARY AND METHODOLOGY To avoid the information loss in supervised training-based SNNs, we propose the “Soft Reset” IF (SRIF) model and Membrance Potential Rectificater (MPR). §.§ “Soft Reset" IF Model An SNN adopts a biology-inspired spiking neuron that accumulates inputs along the time dimension as its membrane potential and fires a spike when the potential exceeds the firing threshold. This mechanism makes it much different from its DNN counterpart. For better introducing the proposed SRIF neuron, a unified form defined by a recent work <cit.>, is given to describe the dynamics of all kinds of spiking neurons as follows, H[t] = f(U[t-1],X[t]), O[t] = Θ(H[t]-V_th), U[t] = H[t](1-O[t])+V_resetO[t], where X[t], H[t], U[t], and O[t] are the input, membrane potentials before and after the trigger of a spike, and output spike at the timestep t, respectively. V_th is the firing threshold, and is usually set to 0.5. Θ(·) is the step function defined by Θ(x) = 1 for x ≥ 0 and Θ(x) = 0 for x < 0. V_reset denotes the reset potential, which is set as 0. The function f(·) describes the neuronal dynamics of spiking neuron models, for the commonly used IF neuron and LIF neuron, f(·) can be respectively defined as follows, H[t] = U[t-1]+X[t], H[t] = τ U[t-1]+ X[t], where τ denotes the membrane time constant. Both LIF and IF neurons have some unique advantages, with decay characteristics introduced by the membrane time constant, LIF neuron behaves more biologically compared with IF neuron, while IF neuron is more efficient due to its addition-only processing manner. In terms of accuracy performance, neither of them show an overwhelming advantage, and more detailed experimental results of these two neurons are provided in Section 4. Considering the subtle gap in performance, we prefer to use LIF model due to its neurodynamic characteristic, from the perspective of brain science research. Conversely, from the perspective of computer science research, we recommend using IF model, since it is more friendly to hardwares. However, both the IF model and LIF model might undertake a greater or lesser risk of information loss by the “Hard Reset" mechanism, , when the input membrane potentials exceed the firing threshold, the neurons will force the membrane potentials to a fixed value. Such mechanism ignores the “residual" parts of those fired membrane potentials. These “residual" parts contain the diversity of the input potentials, and we argue that a neuron model which can preserve the diversity or differences of these membrane potentials that cause the firing is more suitable. To this end, along with the consideration of efficiency, we propose using a “Soft Reset" mechanism-based IF neuron, SRIF, which can keep the diversity of the membrane potentials by subtracting their firing spike values from themselves at the time where the threshold is exceeded. 
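The two reset rules can be made concrete in a few lines of code. Below is a minimal NumPy sketch of the unified dynamics above for a single IF-type neuron (V_th = 0.5, V_reset = 0), where the soft reset simply subtracts the emitted spike value instead of forcing the potential back to V_reset; for an illustrative weighted input sequence of 1.5, 1.2, 1.5, 0.9 and 1.4 times V_th, the hard-reset neuron fires four times while the soft-reset neuron fires three times. This sketch only illustrates the update rules and is not the training code used in the experiments.

```python
import numpy as np

def run_if_neuron(x, v_th=0.5, soft_reset=False, v_reset=0.0):
    """Simulate a single IF neuron over the input sequence x.

    Hard reset:  U[t] = H[t](1 - O[t]) + V_reset * O[t]
    Soft reset:  U[t] = H[t] - O[t]
    """
    u, spikes = 0.0, []
    for x_t in x:
        h = u + x_t                      # IF integration: H[t] = U[t-1] + X[t]
        o = 1.0 if h >= v_th else 0.0    # O[t] = Theta(H[t] - V_th)
        if soft_reset:
            u = h - o                    # keep the residual membrane potential
        else:
            u = h * (1.0 - o) + v_reset * o
        spikes.append(o)
    return spikes

v_th = 0.5
x = np.array([1.5, 1.2, 1.5, 0.9, 1.4]) * v_th
print("hard-reset IF spikes:", run_if_neuron(x, v_th, soft_reset=False))  # 4 spikes
print("soft-reset spikes   :", run_if_neuron(x, v_th, soft_reset=True))   # 3 spikes
```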
Though this similar “Soft Reset” mechanism has been widely used in ANN2SNN <cit.>, there are few works to use it in supervised learning-based SNNs <cit.>. We found its value in this field from a new perspective to reduce information loss. In SRIF neuron, Eq. (<ref>) is updated as U[t] = H[t](1-O[t])+(H[t]-O[t])O[t]. It can be further simplified as U[t] = H[t]-O[t]. It can be seen that, similar to IF neuron, SRIF is also an addition-only model, thus enjoying computational efficiency when implementing on hardwares. Fig. <ref> compares the difference between IF neuron and SRIF neuron in an intuitive way. Suppose that both models receive weighted input sequence of 1.5V_th, 1.2V_th, 1.5V_th, 0.9V_th, and 1.4V_th across 5 consecutive timesteps. Our SRIF neuron will produce three spikes by retaining the residual potentials at the firing instants as depicted in Fig. <ref>. Whereas, the IF neuron will produce four spikes. §.§ Membrane Potential Rectificater To further mitigate the information loss, we present a non-linear function, called MPR by reducing the quantization error. MPR aims to redistribute the membrane potential before it is operated by the step function. It only modulates the membrane potential that is presented to the step function but does not modify the value of membrane potential, which receives and accumulates spikes from other neurons. Specifically, we further distinguish the membrane potentials as the original one, H as in Eq. (<ref>) and the modulated one, Ĥ, which is the membrane potential that will be presented to the step function. In all previous works, H and Ĥ are treated as the same. While in this paper, we would like to provide a new perspective that using a decoupling function to separate H and Ĥ can be helpful. Specifically, H manages the original tasks as in other work, Ĥ derives from H with a non-linear function, φ(·), and it will be fed into the step function with a modulated form that can shrink the quantization error. With this decoupling mechanism, a neuron model can not only keep the membrane potential updating rule but also enjoy less quantization error. Before giving the full details of the MPR, we try to formulate the quantization error first. It is clear that the quantization errors corresponding to different membrane potentials should be different. Hence, a value closer to its quantization spike, o, enjoys less quantization error. In specific, the firing threshold divides the membrane potentials into two parts, the part with smaller values is assigned to “0" spike, and the other with larger values is assigned to “1" spike. Then the quantization error depends on the margin between the membrane potential and its corresponding spike. Therefore, the quantization error can be defined as the square of the difference between the membrane potential and its corresponding quantization spike value as follows: ℒ_q = (u-o)^2, where u is the membrane potential and o ∈{0,1}. when u is below the firing threshold, o is 0, otherwise, 1. Hence, the design of MPR should obey the following two principles: * Spike-approaching: the modulated membrane potential, Ĥ should be closer to the 0/1 spikes than the original membrane potential, H. This principle ensures quantization error reduction. * Firing-invariance: for the H less than V_th, the MPR should not produce the Ĥ greater than V_th and vice versa. This principle ensures the neuron output be consistent with or without using MPR. 
Based on the above two principles, we define the MPR as the following symmetrical function: φ (u) = {[ -(1-u)^1/3+1, u 0,; 1/2tanh(3/2) tanh(3(u-1/2))+1/2, 0≤ u≤ 1,; (u)^1/3, u 1. ]. Fig. <ref> shows the response curve of the designed MPR function following the principles of spike-approaching and firing-invariance. According to <cit.>, the membrane potential follows a Gaussian distribution, 𝒩(μ ; σ). Hence, to visualize the effect of the MPR, we sample 1000,00 values from a Gaussian distribution with 𝒩(1/2 ; 1), and present them to the MPR. Then the distribution of these 1000,00 MPR outputs is drawn in Fig. <ref>. It can be seen that the unimodal distribution, 𝒩(1/2 ; 1) is adjusted to a bimodal distribution which is with less quantization error since it can naturally gather the membrane potentials near “0" and “1". Moreover, it is worth noting that, the redistributed membrane potential, Ĥ by MPR is only used for narrowing the gap between the true membrane potential, H and its quantization spike. It will not replace the original H in our SRIF neuron model. Then the complete new dynamics of the SRIF model can be described as follows, H[t] = U[t-1]+X[t], Ĥ[t] = φ(H[t]), O[t] = Θ(Ĥ[t]-V_th), U[t] = H[t]-O[t]. The detailed Feed-Forward procedure for the SRIF neuron with MPR is given in Algo.1. § EXPERIMENT The proposed methods were evaluated on various static datasets (CIFAR-10 <cit.>, CIFAR-100 <cit.>, ImageNet <cit.>) and one neuromorphic dataset (CIFAR10-DVS <cit.>) with widely-used spiking archetectures including ResNet20 <cit.>, VGG16 <cit.>, ResNet18 <cit.>, ResNet19 <cit.>, and ResNet34 <cit.>. §.§ Datasets and Settings Datasets. The CIFAR-10(100) dataset consists of 60,000 images in 10(100) classes with 32× 32 pixels. The number of the training images is 50,000, and that of the test images is 10,000. The CIFAR10-DVS dataset is the neuromorphic version of the CIFAR-10 dataset. It is composed of 10,000 images in 10 classes, with 1000 images per class. ImageNet dataset has more than 1,250,000 training images and 50,000 test images. Preprocessing. Data normalization is applied on all static datasets to ensure that input images have 0 mean and 1 variance. Besides, the random horizontal flipping and cropping on these datasets were conducted to avoid overfitting. For CIFAR-10, the AutoAugment <cit.> and Cutout <cit.> were used for data augmentation. For the neuromorphic dataset, since the CIFAR10-DVS dataset does not separate data into training and testing sets, we split the dataset into 9000 training images and 1000 test images similar to <cit.>. For data preprocessing and augmentation, we resized the training image frames to 48× 48 as in <cit.> and adopted random horizontal flip and random roll within 5 pixels. And the test images are just resized to 48× 48 without any additional processing. Training setup. For all the datasets, the firing threshold V_th was set as 0.5 and V_reset as 0. For static image datasets, the images were encoded to binary spike using the first layer of the SNN, as in recent works  <cit.>. This is similar to rate-coding. For the neuromorphic image dataset, we used the 0/1 spike format directly. The neuron models in the output layer accumulated the incoming inputs without generating any spike as the output like in <cit.>. For CIFAR-10(100) and CIFAR10-DVS datasets, the SGD optimizer with the momentum of 0.9 and learning rate of 0.01 with cosine decayed <cit.> to 0. All models were trained within 400 epochs with the same batch size of 128. 
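Returning to the MPR defined above, its two design principles can be checked numerically. The following is a minimal NumPy sketch, assuming the three branches of φ(u) apply for u < 0, 0 ≤ u ≤ 1, and u > 1, respectively; it verifies the firing-invariance property around V_th = 0.5 and reproduces the redistribution of 𝒩(1/2 ; 1)-distributed membrane potentials into a bimodal shape.

```python
import numpy as np

def mpr(u):
    """Membrane Potential Rectifier phi(u), applied element-wise.

    Assumed piecewise form:
      -(1 - u)**(1/3) + 1                            for u < 0
      tanh(3 * (u - 1/2)) / (2 * tanh(3/2)) + 1/2    for 0 <= u <= 1
      u**(1/3)                                       for u > 1
    """
    u = np.asarray(u, dtype=float)
    out = np.empty_like(u)
    lo, hi = u < 0.0, u > 1.0
    mid = ~(lo | hi)
    out[lo] = -np.cbrt(1.0 - u[lo]) + 1.0
    out[mid] = np.tanh(3.0 * (u[mid] - 0.5)) / (2.0 * np.tanh(1.5)) + 0.5
    out[hi] = np.cbrt(u[hi])
    return out

# Firing-invariance check: the MPR never moves a potential across V_th = 0.5
u = np.linspace(-2.0, 3.0, 10001)
assert np.all((mpr(u) >= 0.5) == (u >= 0.5))

# Spike-approaching check: N(1/2, 1) samples pile up near 0 and 1 after the MPR
samples = np.random.default_rng(0).normal(0.5, 1.0, 100_000)
hist, edges = np.histogram(mpr(samples), bins=20)
print(hist)   # bimodal: large counts in the first and last bins
```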
For the ImageNet dataset, the SGD optimizer with a momentum set as 0.9 and a learning rate of 0.1 with cosine decayed <cit.> to 0. All models are trained within 320 epochs as in <cit.>. The batch size is set to 64. §.§ Ablation Study for Different Neuron Models We first conducted a set of ablation experiments to verify the effectiveness of the proposed SRIF model on CIFAR-10(100) using ResNet20 as the backbone under various timesteps without MPR. The results are shown in Tab. 1. It can be seen that whether on CIFAR-10 or CIFAR-100, the SRIF neuron always obtains the best result ranging from 2 timesteps to 8 timesteps. This indicates the superiority of the SRIF neuron. On the other hand, the LIF neuron performs better than the “Hard Reset" IF neuron on CIFAR-10, while the IF neuron performs better on CIFAR-100, even though the LIF neuron is more like a biological neuron. This comparison also shows that, although SNNs are proposed to imitate the biological neural networks, for the implementation of large-scale networks, they still need to rely on computer hardwares. Hence, the characteristics of computational science should also be considered. In this respect, the SRIF neuron is more suitable for its advantage of low power consumption and capacity of reducing information loss. §.§ Addition of MPR Then, a set of ablation experiments for the MPR were conducted on CIFAR-10(100) using ResNet20 and ResNet19 as backbones within 4 timesteps. Results in Tab. 2 show that the MPR can greatly improve performance. Especially on CIFAR-100, where ResNet20 with MPR increases the accuracy by 2.73%. These results verify the effectiveness of MPR in terms of performance improvement. We also computed the average quantization error of the first layer of the second block in the ResNet20/19 before and after MPR on the test set of CIFAR-10(100), respectively. Results in Tab. 3 show that the quantization error is obviously reduced by the MPR. The overall original membrane potential distribution and modulated membrane potential distribution by MPR of the first layer of the second block in ResNet20 on CIFAR-10 and CIFAR-100 test sets are shown in Fig. <ref>. It shows that the MPR adjusts the membrane potential distribution near “0" and “1", which is closer to its quantization spike. Put together, these results quantitatively support the effectiveness of MPR in reducing quantization error. §.§ Comparisons with Other Methods Our method was further compared with other state-of-the-art SNNs on static and neuromorphic datasets. Results are shown in Tab. 4, where for each run, the mean accuracy and standard deviation of 3 trials are listed. For simplification, InfLoR (, short for Information Loss Reducing) is used to denote the combination of SRIF and MPR. CIFAR-10(100). For CIFAR-10, our method improves network performance across all commonly used backbones in SNNs. ResNet19-based InfLoR-SNN achieved 96.49% top-1 accuracy with 6 timesteps, which outperforms its STBP-tdBN counterpart with 3.33% higher accuracy and its ANN counterpart 0.20% higher accuracy even. The ResNet20-based InfLoR-SNN can reach to 93.65%, while only 92.54% in <cit.>. And our VGG16-based network also shows higher accuracy than other methods with fewer timesteps. On CIFAR-100, InfLoR-SNN also performs better and achieves a 1.89% increment on VGG16. Noteworthy, InfLoR-SNN significantly surpasses Diet-SNN <cit.> with 7.12% higher accuracy, which is not easy to achieve in the SNN field. Again, our ResNet19 also outperforms its ANN counterpart. 
To our best knowledge, it is the first time that the SNN can outperform its ANN counterpart. ImageNet. For the ImageNet dataset, ResNet18 and ResNet34 were used as the backbones. Results show that our ResNet18 achieves a 1.60% increment on SEW ResNet18 and a 2.46% increment on Spiking ResNet18. The accuracy of our ResNet34 does not exceed SEW ResNet34. However, SEW ResNet34 <cit.> transmits information with integers, which is not a typical SNN. For a fair comparison, we also report the result of Spiking ResNet34 in <cit.> which is worse than our method. Moreover, our InfLoR-based ResNet34 with 4 timesteps still obviously outperforms STBP-tdBN-based RersNet34 with 6 timesteps. CIFAR10-DVS. For the neuromorphic dataset, CIFAR10-DVS, InfLoR-SNN achieves the best performance with 75.50% and 75.10% top-1 accuracy in 10 timesteps with ResNet19 and ResNet18 as backbones, and obtains 7.80% improvement compared with STBP-tdBN for ResNet19. It's worth noting that, as a more complex model, ResNet19 only performs a little better than ResNet20 on CIFAR10-DVS. It might be that this neuromorphic dataset suffers much more noise than static ones, thus a more complex model is easier to overfit. § CONCLUSIONS This work aims at addressing the information loss problem caused by the “Hard Reset" mechanism of neurons and the 0/1 spike quantification. Then, the SRIF model, which will drive the membrane potential to a dynamic reset potential, and the MPR that can adjust the membrane potential to a new value closer to quantification spikes than itself are proposed. A detailed analysis of why the SRIF and MPR can reduce the information loss is provided. Furthermore, abundant ablation studies of the proposed methods are given. Combining these two methods, our SNNs outperform other state-of-the-art methods. splncs04
http://arxiv.org/abs/2307.04391v1
20230710075459
Vehicle Detection in 6G Systems with OTFS Modulation
[ "Pavel Karpovich", "Tomasz P. Zielinski" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT", "H.1.1" ]
emptyfancy [t]1.0 Accepted for Konferencja Radiokomunikacji i Teleinformatyki KRiT-2023, Krakow 2023 (author's version) VEHICLE DETECTION IN 6G SYSTEMS WITH OTFS MODULATION Pavel Karpovich ^1,2; Tomasz P. Zielinski^2; [t]0.4 ^1 Institute of Telecommunications AGH, Krakow mailto:[email protected],[email protected] ^2 Nokia Solutions and Networks, Krakow, mailto:[email protected] 2 The recently introduced orthogonal time frequency space modulation (OTFSM) is more robust to large narrow-band Doppler frequency shift than the orthogonal frequency division multiplexing (OFDM), used in the 5G standard. In this paper it is shown how the telecommunication OTFSM-based signal with random padding can be used with success in the 6G standard for detection of high-speed vehicles. Two approaches for detecting targets during the random padded OTFS based transmission are compared in the paper. 5G, 6G, OFDM, OTFSM, radar. § INTRODUCTION In last few years, the scientific community attention has been focused on the discussion of next generation 6G communication. There are a lot of publications about what applications will drive the 6G network and what technologies should be included in the 6G standard to satisfy their requirements <cit.> <cit.>. Among large number of proposals, there are some that are most common, such as a terahertz wave and an integrated sensing and communication (ISAC) <cit.> <cit.>. This paper addresses a problem of adding a radar functionality to the communication systems of the future which will use higher frequency carriers and support high-mobility users. The usage of terahertz band is challenging. Even relatively slow objects could generate very high Doppler frequency shifts. The strong Doppler effect limits the usage of the orthogonal frequency division multiplexing (OFDM) waveform which is at present de-facto a standard waveform in telecommunication systems (e.g. DVB-T2, Wi-Fi, LTE, 5G <cit.>). The OFDM is based on assumptions that linear convolution of the signal and the channel impulse response can be replaced by circular convolution, and that the channel impulse response is time-invariant or almost time-invariant. This allows to do a very fast and simple channel impulse response estimation. In case of the strong Doppler environment the assumption about constant channel impulse response is no longer valid since any channel coefficient can rotate in complex plane all the time due to the Doppler effect. Using OFDM in such conditions leads to errors in channel estimation and equalization, and eventually to inter-carrier-interference (ICI) and subsequently errors in bit detection. Increasing sub-carrier spacing (SCS) in OFDM helps to deal with the strong Doppler frequency shift. However, this operation will increase also the OFDM cyclic prefix overhead and reduce transmission efficiency <cit.>. In order to eliminate the mentioned above disadvantage of the OFDM, the orthogonal time frequency and space (OTFS) modulation was recently introduced in <cit.>. Due to its unique features it is seriously treated as one of possible 6G waveforms <cit.>. In this article simulation results for an ISAC system using the OTFS waveform are shown. We will start with the OTFS waveform description, present the delay-Doppler domain used in OTFS and discuss different pilot configurations exploited in it. Next, we will introduce the ISAC system using the OTFS waveform. Finally, in experimental part, we will show results from simulation of a radar part of the discussed RP-OTFS-based ISAC system. 
In work <cit.> results from simulation of the communication part of the RP-OTFS transmission system were presented while this paper addresses simulation of the radar part of the system only. Practical verification of the general RP-OTFS based transmission and sensing concept was already presented in <cit.>. § ORTHOGONAL TIME FREQUENCY AND SPACE The concept of the OTFS is shown in the figure <ref> <cit.> <cit.>. In comparison to OFDM, the OTFS is a two-dimensional modulation technique. In case of OTFS the modulation process looks as follows. At the beginning modulated IQ/QAM symbols are put into elements of the matrix A in figure <ref>, i.e. on the grid in a delay-Doppler (DD) domain. Then, the inverse Zak transform (inverse Fourier transform over the Doppler axis) <cit.> is used to transform (demodulate) data from the DD to a fast time - slow time (TT) domain. Finally, the obtained samples are reshaped from a matrix into a vector. The DD grid usage for data modulation makes the OTFS waveform attractive for ISAC since it is “native” domain for radars. §.§ The delay-Doppler grid Names of the DD grid directions reflect their physical sense. The delay direction (the first D in DD) consists of adjacent samples from time domain, the Δ t between samples is small. This direction is suitable for detecting small time changes in a observed signal. For example, in multi-path propagation environment, the difference between paths is not very big and can be estimated in the delay direction of the DD grid. But the delay direction is not suitable for observation of long time processes like the Doppler effect because Doppler frequencies are usually very small and require more time for estimation. In the Doppler direction (the second D of DD) only every Mth sample from time domain is used and this allows to estimate long time signal changes using FFTs with small sizes. Parameters of the DD grid should be chosen taking into account that the sent OTFS modulated waveform will be used, both, for digital data transmission and moving vehicles detection. As sources of multi-path reflections could be treated as “radar targets”, the OTFS DD grid should fulfill, both, telecommunication and radar requirements. The DD grid has two parameters: M — the number of samples in the delay (fast time) direction, and N — the number of samples in the Doppler direction. Looking at figure <ref>, we can say that in Doppler direction a signal is practically decimated by M. Hence taking into account the Nyquist theorem, the maximum Doppler offset that can be estimated using such DD grid is f_d max = ±f_s 2 M, where f_s is a sampling rate. Resolution in the Doppler direction depends on N: increasing N and keeping f_s and M constant will increase the FFT length and the Doppler resolution. The resolution in delay direction depends only on f_s. Choosing f_s, M, N and carrier frequency f_c one can optimize the OTFS-based radar and digital transmission. For example, lets choose the DD grid parameters for radar detection of many moving cars (reflections from stationary objects are not interesting for us). For maximum car speed of 60 m/s, carrier frequency f_c = 52.6 GHz (the maximum carrier frequency for 5G FR2), sampling ratio f_s = 50 MHz and maximum M = 1190, we can assume that the maximum Doppler frequency shift is equal to 21 kHz. Then, by fixing M=1024 and changing N, one can get different resolution of velocity estimation, changing from 9 m/s (for N=8) to 0.1 m/s (N=512), where values of N=8 and N=512 are exemplary ones. 
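These grid-parameter trade-offs are easy to reproduce numerically. The sketch below evaluates the maximum Doppler offset f_s/(2M) for the quoted configuration and compares it with the two-way Doppler shift of a 60 m/s target at 52.6 GHz; the monostatic two-way convention f_d = 2 v f_c / c is our assumption here.

```python
C = 299_792_458.0        # speed of light, m/s

f_s = 50e6               # sampling rate, Hz
f_c = 52.6e9             # carrier frequency, Hz
M, N = 1190, 512         # delay / Doppler grid sizes
v_max = 60.0             # maximum target speed, m/s

# Maximum Doppler offset representable on the DD grid: in the Doppler
# direction the signal is effectively decimated by M, so Nyquist gives
f_d_max = f_s / (2 * M)

# Two-way (monostatic) Doppler shift of the fastest target
f_d_target = 2 * v_max * f_c / C

print(f"grid Doppler limit f_s/2M : {f_d_max / 1e3:.1f} kHz")    # ~21 kHz
print(f"60 m/s target at 52.6 GHz : {f_d_target / 1e3:.1f} kHz")  # ~21 kHz

# Doppler resolution (bin width) improves as 1/N at fixed f_s and M
delta_f_d = f_s / (M * N)
print(f"Doppler bin width (N={N}) : {delta_f_d:.1f} Hz")
```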
§.§ Pilots configurations As OFDM, the OTFS uses pilots for estimation of a channel impulse response (CIR). Their configurations are different. Here we will discuss two types of pilot placement strategies, shown in figure <ref>: a zero-padded one (ZP-OTFS) and a random-padded one (RP-OTFS). In both configurations the DD matrix A is divided into two parts: the data zone and the pilots zone. Every carrier in the DD grid is assigned to the pilot or data zone only, not to both of them the same time. §.§.§ ZP-OTFS In the ZP-OTFS, the pilot has a form of a rectangular zone of the DD matrix A, shown in figure <ref>, which is filled with zeros and have only one non-zero carrier in its center. We will call this non-zero carrier a pilot pulse. In case of the ZP-OTFS, the length of the pilot zone in the delay direction is twice bigger than length of the channel impulse response. In the Doppler direction the pilot zone usually makes use of all cells, as shown in figure <ref>. Due to zeros surrounding the pilot pulse, the channel estimation process becomes very simple in the ZP-OTFS. There is also no interference between pilot and data zones as well as no ZP-OTFS symbol interference. In case of ZP-OTFS the channel impulse response is estimated by division of every cell of the received pilot zone by the known, transmitted pilot pulse (some threshold should be introduced here in order not to neglect reflection free samples of the pilot zone). The main disadvantage of the ZP-OTFS is low energy efficiency, because the pilot zone is very sparse. §.§.§ RP-OTFS The recently introduced RP-OTFS <cit.> <cit.> is designed to correct deficiencies of the ZP-OTFS. Here the pilot zone is filled by short OFDM symbols, treated as pilots, with random data inside — see figure <ref>. In case of the ZP-OTFS, discussed earlier, the data zone is generated in the DD domain and transformed to fast-slow time domain by the inverse Zak transform. In turn, in case of the RP-OTFS, OFDM symbols of pilots are inserted directly into fast-slow time grid (without the inverse Zak transformation). Absence of zeros in the pilot zone increases signal-to-noise ratio (SNR) and causes that the RP-OTFS application is more efficient than the ZP-OTFS in CIR estimation what is very important for both for communication and radar. The CIR estimation begins with conventional OFDM channel estimation with the only difference that we treat the whole OFDM symbol as a pilot. After that, when all CIR momentum estimates are found using all OFDM symbols (having transmitted and received pilots one can easily estimates CIR taps from them), we transform the matrix of CIR taps to the DD domain by the Zak transform, i.e. by performing FFT over the CIR matrix rows. Note, that in the RP-OTFS the Zak transform is performed upon CIR estimates, do not upon time samples of OFDM symbols which were used for CIR calculation. There are two disadvantages of the RP-OTFS application. Firstly, the length of cyclic prefix (CP) of the OFDM-based pilot should be equal to the OFDM symbol length, i.e. it is long and the CP overhead reduces the achievable bit-rate. Secondly, we assume that the CIR is quasi time-invariant and, therefore, we can not use long OFDM pilots for very high frequency Doppler channels. § INTEGRATED SENSING AND COMMUNICATION (ISAC) In case of ISAC <cit.><cit.>, usually, the communication processing is the same as in conventional system. 
In this paper we are concentrating our attention on peculiarity of RP-OTFS radar processing since efficiency of the RP-OTFS based communication sub-system has been already tested <cit.> <cit.>. Two approaches of target detection are analyzed: correlation-based and pilot-based. The first correlation-based method origins from classical radar processing in which a cross ambiguity function is used <cit.>: transmitted, reference signal (known, re-modulated in the receiver or acquired by special reference antenna) is shifted in time and frequency and correlated with the received, surveillance signal. The problem of the correlation base radar approach is that usually it is hard to find weak signal reflections, coming from small, moving objects, on the background of strong signal reflections caused by buildings (the radar clutter problem) <cit.>. In the second pilot-based approach of vehicle detection transmitted pilots, known in the receiver, are used to CIR estimation <cit.>. In case of reflections coming from moving vehicles some CIR taps are complex-value numbers that oscillates in time with frequency of Doppler frequency shift caused the reflecting object movement. Here, we treat radar targets as sources of multi-path propagation. By CIR analysis we can retrieve information about signal reflections and about reflecting objects. The pilot based ISAC system requires non distorted CIR estimates for Doppler frequency shifts extraction. As mentioned in the introduction, high Doppler objects can not be detected by OFDM. This also limits application of pilot-based radars making use of OFDM-based pilots. § EXPERIMENTAL PART In experimental part we simulated a radar performance of the discussed RP-OTFS-based ISAC system. Parameters of the applied OTFS-based signal was following: size of the grid in delay and doppler direction 64x256 (MxN), length of the pilot zone 16 (meaning of L is explained in fig. <ref>), modulatiotion 4-QAM, carrier frequency 4 GHz and bandwidth 20 MHz. In simulation we used different target velocities in order to test the system performance in different conditions. Delay-Doppler (distance-velocity) radar maps for a target moving with velocity about 139 m/s (500 km/h), calculated for both tested radar approaches (correlation based and pilot based ones), are shown in figure <ref>. In both methods integration/observation time 100 milliseconds was used. Input signal had signal-to-noise ratio (SNR) equal to 0 decibels. In both cases one can clearly see sharp peaks in the delay-Doppler (distance-velocity) matrix which correspond to parameters of moving vehicles. However for CAF two additional lower peaks are visible which are generated by the CP of the pilot part of the RP-OTFS waveform. As in case of the pilot-based approach we eliminate CP from signal processing chain, such peaks are missing in DD map of this method. In case of correlation-based radar mean level of background side-lobes, surrounding the detection peak, is equal to about -30 decibels while for pilot-based radar -40 decibels. In figures <ref> and <ref> processing gain charts for both discussed RP-OTFS-based radars are shown, i.e. expressed in decibels root mean square (RMS) value of the method noise floor (visible in figure <ref>) as a function of signal to noise ratio (SNR) of an input signal. Simulated maximum vehicle speed (v_m) was equal to 50 (13.9 m/s) albeit 500 km/h (139 m/s) and integration/observation time (T_i) was varying from 10 ms to 200 ms. 
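For reference, the correlation-based processing described in this section reduces to a discrete cross-ambiguity function between the surveillance and reference signals. A minimal NumPy sketch of a direct evaluation follows, without the clutter cancellation and batched processing used in practical passive-radar chains; the toy target parameters are arbitrary.

```python
import numpy as np

def cross_ambiguity(surv, ref, max_delay):
    """Discrete cross-ambiguity function magnitude |chi[l, k]|.

    surv, ref : complex baseband surveillance and reference signals (length n)
    max_delay : number of delay bins to evaluate
    Returns an array of shape (max_delay, n); delay bin l corresponds to
    l / f_s and Doppler bin k to k * f_s / n (wrapped around f_s).
    """
    n = len(surv)
    caf = np.zeros((max_delay, n))
    for l in range(max_delay):
        prod = surv * np.conj(np.roll(ref, l))   # circular delay for simplicity
        caf[l] = np.abs(np.fft.fft(prod))        # Doppler compression via FFT
    return caf

# Toy check: a single target at 40 samples delay and 2 kHz Doppler
f_s, n = 1e6, 2**16
t = np.arange(n) / f_s
rng = np.random.default_rng(1)
ref = np.exp(1j * 2 * np.pi * rng.uniform(size=n))          # noise-like waveform
surv = 0.1 * np.roll(ref, 40) * np.exp(1j * 2 * np.pi * 2e3 * t)
surv = surv + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

caf = cross_ambiguity(surv, ref, max_delay=64)
l_hat, k_hat = np.unravel_index(np.argmax(caf), caf.shape)
print("delay bin:", l_hat, " Doppler:", k_hat * f_s / n, "Hz")  # ~40, ~2000 Hz
```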
In figure <ref> both tested RP-OTFS-based radars are compared: it is seen that the pilot-based version outperforms the correlation-based one in DD detection hight, i.e. in noise robustness. § DISCUSSION The main limitation factor in case of the correlation based RP-OTFS radar is its high level of CAF side-lobes, resulting in significantly lower output SNR in comparison to the pilot-based radar. Figures  <ref> and <ref> confirm quantitative conclusions which can be drown from figure <ref>. As mentioned before, in the development of the discussed RP-OTFS-based ISAC system we have assumed that channel pulse response is quasi time-invariant in the pilot zone. In case of high-mobility Doppler channels this assumption is fulfilled only approximately. This fact will limit the maximum processing gain of the presented pilot based RP-OFTS radar. Consequences of this method drawback will increase for higher velocities as it is visible in fig. <ref>. The same effect will be observed also when the pilot zone length will be increased. Nevertheless, obtained results confirm that the pilot-based radar outperforms the correlation-based one in terms of noise robustness. § CONCLUSION Two moving vehicles detection approaches based on the RP-OTFS ISAC system were compared in this paper. The main limitation factor of the correlation based radar method is high level of CAF side-lobes, apart from existence of two additional peaks in CAF which are caused by repetition of the pilot samples. Detection of targets with low radar cross section on the background of strong background signal, so called clutter, e.g. direct path signal, is very challenging here. Presence of many ghost peaks in the delay-Doppler (distance-velocity) map makes subsequent processing steps in this method very challenging. In turn, the pilot based RP-OTFS radar is characterized by lower level of side-lobes in the delay-Doppler map and it does not have extra peaks caused by the repeating pilot samples. But this approach is sensitive to the quality of the channel impulse response estimation. In order to minimize error of the channel impulse response estimate, and in consequence error of the moving object detection, we need to keep pilot zone as short as possible. 99 6G_harsh H. Tataria et al., "6G Wireless Systems: Vision, Requirements, Challenges, Insights, Opportunities," Proc. IEEE, vol. 109, no. 7, pp. 1166-1199, July 2021. 6G_vision W. Saad, M. Bennis and M. Chen, "A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems," IEEE Network, vol. 34, no. 3, pp. 134-142, May/June 2020. isac1 F. Liu et al., “Integrated Sensing and Communications: Toward Dual-Functional Wireless Networks for 6G and Beyond,” IEEE J. on Selected Areas in Comm., vol. 40, no. 6, pp. 1728-1767, June 2022. isac2 Z. Wei et al., “Integrated Sensing and Communication Signals Towards 5G-A and 6G: A Survey,” IEEE Internet of Things Journal, early access, 2023. ofdm_numerology Josue Flores de Valgas, Jose F. Monserrat, Hüseyin Arslan, "Flexible Numerology in 5G NR: Interference Quantification and Proper Selection Depending on the Scenario", Mobile Information Systems, vol. 2021, Article ID 6651326, 9 pages, 2021. otfs1 R. Hadani et al., "Orthogonal Time Frequency Space Modulation," 2017 IEEE Wireless Comm. and Networking Conf. (WCNC), San Francisco, CA, USA, 2017, pp. 1-6, 2017. otfs2 Z. Wei at al., “Orthogonal Time-Frequency Space Modulation: A Promising Next-Generation Waveform,” IEEE Wireless Comm., vol. 28, iss. 4, pp. 136-144, 2021. 
my_rp1 P. Karpovich and T. P. Zielinski, "Random-Padded OTFS Modulation for Joint Communication and Radar/Sensing Systems," 2022 23rd Int. Radar Symp. (IRS), pp. 104-109, Gdansk 2022. my_rp2 P. Karpovich et al., “Field Tests of a Random-Padded OTFSM Waveform in a Joint Sensing and Communication System,” IEEE ICC Int. Communications Conf., Rome 2023. zak H. Bolcskei and F. Hlawatsch, "Discrete Zak transforms, polyphase transforms, and applications," in IEEE Trans. on Signal Processing, vol. 45, no. 4, pp. 851-866, April 1997. radar M.A. Richards, “Fundamentals of Radar Signal Processing,” McGraw-Hill Education, 2014. my_dvbt2 P. Karpovich et al., "Practical Results of Drone Detection by Passive Coherent DVB-T2 Radar," 21st Int. Radar Symp. (IRS), pp. 77-81, Warsaw 2020. ofdm_base_radar M. Braun et al., "Parametrization of joint OFDM-based radar and comm. systems for vehicular applications," 2009 IEEE Int. Symp. on Personal, Indoor & Mobile Radio Comm., pp. 3020-3024, Tokyo 2009.
http://arxiv.org/abs/2307.03878v2
20230708014803
New Constraints on ALP Electron and Photon Couplings from ArgoNeuT and the MiniBooNE Beam Dump
[ "Francesco Capozzi", "Bhaskar Dutta", "Gajendra Gurung", "Wooyoung Jang", "Ian M. Shoemaker", "Adrian Thompson", "Jaehoon Yu" ]
hep-ph
[ "hep-ph", "hep-ex" ]
apsrev4-1 Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’Aquila, 67100 L’Aquila, Italy Istituto Nazionale di Fisica Nucleare (INFN), Laboratori Nazionali del Gran Sasso, 67100 Assergi (AQ), Italy Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA Department of Physics, University of Texas, Arlington, TX 76019, USA Department of Physics, University of Texas, Arlington, TX 76019, USA Center for Neutrino Physics, Department of Physics, Virginia Tech, Blacksburg, VA 24061, USA Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA Department of Physics, University of Texas, Arlington, TX 76019, USA Beam dumps and fixed-target experiments have been very sensitive probes of such particles and other physics beyond the Standard Model (BSM) by considering the production of new states from the primary interaction in the beam dump. In a proton beam dump, there are many secondary interactions taking place in electromagnetic showers which may be additional production channels for pseudoscalar bosons or axion-like particles (ALPs). The target-less configuration of the MiniBooNE experiment, which collected data from 1.86 × 10^20 protons impinging directly on the steel beam dump, is an excellent test of sensitivity to these production channels of ALPs in the MeV mass region. Using the null observation of the MiniBooNE dump mode data, we set new constraints on ALPs coupling to electrons and photons produced through a multitude of channels and detected via both scattering and decays in the MiniBooNE detector volume. We find that the null result rules out parameter space that was previously unconstrained by laboratory probes in the 10-100 MeV mass regime for both electron and photon couplings. Lastly, we make the case for performing a dedicated analysis with 1.25× 10^20 POT of data collected by the ArgoNeuT experiment, which we show to have complementary sensitivity and set the stage for future searches. MI-HET-808 New Constraints on ALP Electron and Photon Couplings from ArgoNeuT and the MiniBooNE Beam Dump Jaehoon Yu ============================================================================================== § INTRODUCTION Particle beam dumps have proven to be ultra-sensitive probes of new physics sectors beyond the Standard Model (BSM), where the myriad electromagnetic and hadronic cascades produce showers of electrons, positrons, gamma rays, and mesons; each a potential channel for BSM particle production. Studying the beam target environment and the particle showers within is thus a crucial first step to understanding what kind of physics is possible, and at what energy scales. Already many searches have been performed by electron beam dumps (E137, NA64, E141, Orsay, E774, etc. <cit.>) and proton beam dumps at the GeV energy scale (e.g. CHARM, NuCal, NA62, SeaQuest/SpinQuest <cit.>) and sub-GeV sources (e.g. CCM <cit.>, IsoDAR <cit.>, and COHERENT <cit.>), and others <cit.>. The existence of pseudoscalar bosons with small couplings to the SM are predicted in models of broken symmetries in connection with explaining many puzzles in nature. 
Axions and axion-like particles (ALPs) are central features in the landscape of solutions, in particular, to the strong CP problem <cit.> and to the dark matter problem <cit.>, and otherwise appear ubiquitously in string theory <cit.>, and the ultraviolet spectra of many other puzzle-solving models with spontaneously broken symmetries. In many of these scenarios, it is possible that the ALP has couplings to SM leptons and the electromagnetic field, making the particle showers inside the beam target good laboratory probes of ALPs, reaching up to GeV mass scales. ALPs at the MeV to GeV mass scales are of particular interest to beam dump and fixed target experiments and have been studied in the context of heavy axions <cit.>, whose parameter space extends beyond that of traditional QCD axion models. In 2018 MiniBooNE collaboration performed an analysis of their targetless-mode run <cit.>, in which they collected data associated with 1.86 × 10^20 protons on target (POT) bypassing the main beryllium target and impinging on the steel beam dump. Expected neutrino rates for this mode were very low, and no excess of events was observed, in contrast to the results from the target-mode runs <cit.>. In this work, we show that the null result from this data set is sensitive enough to ALPs produced in electromagnetic showers in the dump to set new limits on photon and electron couplings. Running in a target-less mode has the effect of suppressing the fluxes of neutrinos coming from charged meson decays. Searches for BSM particles that have production channels orthogonal to the charged pion decay gain a big advantage here; in the case of a thin target, the charged mesons decay in flight after getting produced, allowing them to be focused by the magnetic horn system. In the thick beam dump case, however, the charged pions are stopped in the material and decay isotropically, suppressing the subsequent neutrino background that would lie in the signal region for the BSM search. This realization is especially important for future beam dump experiments at higher energies, where the higher intensity of electromagnetic cascades provide both the coupling and mass reach necessary to significantly extend the limits tested so far by laboratory searches in the MeV to GeV mass range. We will show that data collected by the ArgoNeuT detector <cit.> already has this capability, and depending on the specific sensitivity of a dedicated analysis, null observations in this data could already rule out parameter space unconstrained by laboratory probes to-date. In  <ref> we outline the production and detection channels we consider for electromagnetically-coupled ALPs. In  <ref> we describe the statistical analysis performed for the MiniBooNE dump-mode data and the ArgoNeuT data given an ALP signal hypothesis, with the resulting limits placed on the parameter space of photon and electron couplings in  <ref>. Finally we conclude in  <ref>. § BSM PRODUCTION AND DETECTION IN A BEAM DUMP We consider primarily ALPs produced in electromagnetic cascades inside the beam dump or beam target environment, e.g., those that get produced from couplings to photons and to electrons; ℒ_ALP⊃ i g_ae a ψ̅_e γ^5 ψ_e - 1/4 g_aγ a F_μνF^μν This Lagrangian, which for simplicity we will assume only one tree-level coupling active or dominant at a time, opens up a slew of production and detection channels available to beam target and beam dump experiments. These have recently been investigated in refs. <cit.>, and we summarize them in Table <ref>. 
For ALPs coupled to electrons, the dominant final state will be e^+ e^- pairs appearing in the detector as single Cherenkov rings, either from the pair being highly collinear with a separating angle less than the typical angular resolution of the detector or if one of the electrons/positrons is too soft. This final state appears mainly through decays for m_a > 2 m_e and otherwise through the Bethe-Heitler lepton pair production process (a Z → e^+ e^- Z) for sub-MeV ALPs, considered before to set limits on light (pseudo)scalars appearing in a proton beam target <cit.>. The cross-section for this process was computed in refs. <cit.> using the formalism and atomic form factors presented in ref. <cit.>, and it is larger than inverse-Compton scattering (a e^- → γ e^-) by up to an order of magnitude for ALP energies in the 100 MeV - 1 GeV range, which is the energy region of interest for this study. The resonant cross section in the electron rest frame is σ = 2π m_e g_ae^2 s/(m_a^2 √(s(s-4m_e^2))) δ(E_+ - (m_a^2/(2m_e) - m_e)) ≃ 2π m_e g_ae^2/m_a^2 δ(E_+ - (m_a^2/(2m_e) - m_e)). To simulate the production fluxes, we first generate the SM particle fluxes inside the MiniBooNE dump with GEANT4 using the physics list, then pass a high-statistics sample of each particle flux (e^±, γ, π^±) into the event generator.[https://github.com/athompson-git/alplib] The positron and electron fluxes are shown in Fig. <ref>, while the photon flux is shown in Fig. <ref>. We show a large phase space of the e^± and γ fluxes to illustrate the many low-energy features that come about from processes like nuclear de-excitation and beta decay. However, in principle, only the high energy tail (>75 MeV) in the forward-going region (θ ≲ 10^-2 rad) is responsible for the bulk of BSM particle production that is captured within the signal region and pointing within the solid angle of the MiniBooNE detector. This is illustrated in Fig. <ref> where we show the energy spectra before and after an angular cut of 10 mrad. Further details of the event selection and signal window are discussed in the following section. For ALPs produced from electrons or positrons in resonant production (e^+ e^- → a), associated production (e^+ e^- → a γ), or bremsstrahlung (e^± Z → e^± Z a), the energy loss of the electrons and positrons in the material during particle transport must also be folded into the event rate calculation. This modifies the number flux leaving the beam dump as dN_a/dE_a = (N_A X_0/A) (ħ c)^2 ∫ d^2Φ_e^+/(dE_e dΩ_e) I(t, E_+, E^') × Θ_det d^2σ(E^')/(dE^' dΩ^') dΩ_e dΩ^' dE_+ dt dE^', where N_A is Avogadro's number, X_0 is the radiation length of the electrons/positrons in the dump material, and A is the atomic weight. I(t, E_i, E_f) = θ(E_i - E_f)/(E_i Γ(4t/3)) (ln E_i/E_f)^(4t/3 - 1) is the energy loss smearing function for the electron/positron radiation length t integrated up to target radiation thickness T <cit.>. We integrate over the solid angle of the positron with respect to the beamline, Ω_e, and the outgoing ALP solid angle with respect to the positron direction, Ω^', taking care to integrate only those ALPs pointed in the direction of the detector solid angle through the Heaviside function Θ_det <cit.>. § DATA ANALYSIS §.§ MiniBooNE Dump Mode The final states of concern in our search for ALPs in the MiniBooNE detector are photon-like events and electron-like events, listed in Table <ref>. We have adopted the same selection cuts made in the ν-e analysis of the MiniBooNE dump mode data for these states. 
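The production ingredients above lend themselves to a quick numerical illustration. The following minimal Python sketch (ours, not code from the analysis) implements the resonant positron energy E_+ = m_a^2/(2 m_e) - m_e and the energy-loss smearing function I(t, E_i, E_f); the positron spectrum, shower-depth range, and overall normalisation are placeholder assumptions, not the GEANT4 fluxes used in the text.

import numpy as np
from math import gamma

M_E = 0.511e-3  # electron mass in GeV

def resonant_positron_energy(m_a):
    # E_+ = m_a^2 / (2 m_e) - m_e, the positron energy at which e+ e- -> a is resonant
    return m_a**2 / (2.0 * M_E) - M_E

def energy_loss_weight(t, E_i, E_f):
    # I(t, E_i, E_f) = theta(E_i - E_f) (ln(E_i/E_f))^(4t/3 - 1) / (E_i Gamma(4t/3))
    if E_f >= E_i or t <= 0.0:
        return 0.0
    return np.log(E_i / E_f) ** (4.0 * t / 3.0 - 1.0) / (E_i * gamma(4.0 * t / 3.0))

def toy_positron_flux(E):
    # placeholder e+ spectrum leaving the primary interactions (GeV^-1 per POT); NOT the GEANT4 flux
    return 1.0e-2 * np.exp(-E / 1.0)

def smeared_flux_at_resonance(m_a, T=5.0, n_t=60, n_E=200, E_max=10.0):
    # fold the toy spectrum with I(t, E, E_res) over shower depths t in [0, T] radiation lengths
    E_res = resonant_positron_energy(m_a)
    if E_res <= 0.0 or E_res >= E_max:
        return 0.0
    ts = np.linspace(1.0e-3, T, n_t)
    Es = np.linspace(1.001 * E_res, E_max, n_E)
    flux = toy_positron_flux(Es)
    inner = [np.trapz(flux * np.array([energy_loss_weight(t, E, E_res) for E in Es]), Es)
             for t in ts]
    return np.trapz(np.array(inner), ts)

print(smeared_flux_at_resonance(0.02))   # ALP mass of 20 MeV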
Here we study the detector response with true simulated information to analyze the efficiency of the electron-like event selection from reconstructed events inside the detector. For the analysis of the Monte Carlo generated data, after the preliminary cuts have been applied, the first round of the reconstructed events is fit under the one-track electron and muon hypothesis. Each fit returns the likelihood of the corresponding hypothesis: ℒ_e and ℒ_μ. Those events satisfying the log(ℒ_e/ℒ_μ) > -0.05 continue the next round of reconstruction. In the second round, reconstructed events are fit under the general two-photon hypothesis. Similarly, the events should satisfy log(ℒ_π^0/ℒ_e) < 0. The efficiencies of these two cuts using simulated data as functions of electron visible energy and electron scattering angle are shown in Fig. <ref>. The selection efficiencies as a function of the visible energy, E^vis_e, are fitted as an arctangent function (p_0arctan(p_1 x) + p_2). The selection efficiencies as a function of the cosine of the angle with respect to the beam axis, cosθ_e, are fitted as a straight line (p_0+p_1x) except for the forward region of log(ℒ_e/ℒ_μ) which has a second-order polynomial fit (p_0+p_1x+p_2x^2). Uncertainties from the goodness-of-fit on the efficiency curve as a function of E_e^vis and cosθ_e are constrained to be less than 20%, so their impact on the exclusions over the model parameter space shown in the following section will not be qualitatively different. In addition to these log-likelihood efficiencies, we also take into account the cut on the reconstructed vertex radius of 500 cm, which effectively reduces the MiniBooNE volume to a sphere of 10 m in diameter. Other cuts, such as the number of tank and veto hits, and the Scintillation / Cherenkov ratios we assume to have perfect signal efficiency for the detection channels in Table <ref>. However, we do check that the γγ, e^+ e^-, and γ e^- final states from axion interactions and decays are collinear enough to be identified as a single electron-like Cherenkov ring in the detector. This also ensures that the cut on the di-gamma invariant mass m_γγ≤ 80 MeV is passed by selection for our ALP signals. Lastly, we bin the ALP signal Monte Carlo events into visible energy and cosine bins between 75 ≤ E_γ≤ 850 MeV and cosθ≥ 0.9 (taking E_γ = E_e^vis for the electron-like visible energy measurement). Since inverse Primakoff scattering is characterized by a forward outgoing photon, while inverse Compton scattering is characterized by a forward outgoing electron and a soft off-forward photon (typically below the lower energy cut), these scattering channels are well within the selection region for most choices of the couplings and the ALP mass. Example spectra for photon and electron coupling channels are shown in Fig. <ref>, where we have convolved the predicted event rates with the efficiency functions described above. For the case of ALPs undergoing inverse Primakoff scattering in the detector, a Z →γ Z, we integrate over the visible energy and outgoing angle of the final state photon; d^2R/dE_γ dΩ_γ = N_T ∫dN_a/dE_ad^2σ(E_a)/dE_γ dΩ_γϵ(E_γ) ϵ(Ω_γ) dE_a where ϵ(E_γ) and ϵ(Ω_γ) = ϵ(cosθ_γ) are equivalent to the visible energy and cosine efficiencies, respectively, of the electron-like signals shown in Fig. <ref>. Here, recall the differential event rate dN_a / dE_a passing into the detector from Eq. <ref>. Integrating Eq. 
<ref> over energy bin edges [75, 100, 150, 200, 250, 300, 500, 850] (in MeV) and cosine bin edges [0.9, 0.95, 0.99, 1.0] yields the ALP signal s_i in each bin i as a function of the mass and couplings. In the case of decays, instead of the differential cross section in Eq. <ref> we use the probability of decays occurring inside the detector P_decay= e^-ℓ/(τ v_a)[ 1 - e^-Δℓ /(τ v_a) ] where τ v_a is the ALP decay length in the lab frame, ℓ is the baseline distance between the ALP production in the dump, and Δℓ is the fiducial path length in the detector during which the decay must take place. For the other detection channel final states (2γ, 1γ1e^-, or e^+e^-), both final state particles leave visible energy in the detector, so we need to ensure that they are collinear enough to be reconstructed as a single Cherenkov ring in the detector. We check the angular distribution of the final state and cut events if two final state particles are separated by more than 5 degrees. We use a binned log-Poisson likelihood to obtain the confidence limits; ln L(θ⃗) = ∑_i=1^7 d_i ln[s_i(θ⃗) + b_i] - [s_i(θ⃗) + b_i] - ln[Γ(d_i + 1)] for data d_i, backgrounds b_i, and signal s_i(θ⃗), where θ⃗ = (m_a, g_aγ) in the case of dominant ALP-photon coupling and θ⃗ = (m_a, g_ae). The CLs are then given by finding regions of constant delta-log-likelihood, -2Δln L ≡ 2(ln L(θ) - ln L(θ)_min), in the relevant model parameter space θ⃗. §.§ ArgoNeuT ArgoNeuT <cit.> collected data from 1.25 × 10^20 POT impinging on the NuMI target, with its LArTPC detector situated 1.04 km downstream of the target while the beamline was in anti-neutrino mode <cit.>. With a fiducial volume of 0.40×0.47×0.90 cm^3, the angular acceptance of the detector coverage corresponds to roughly 0.325 mrad in solid angle. We perform a similar simulation with GEANT4 using the physics list to model the particle cascades inside the NuMI beam target environment (120 GeV protons on graphite). The ALP flux is calculated in the same way explained in the case of the MiniBooNE dump. From the GEANT4 flux distributions of e^± and γ in the solid angle of ArgoNeuT, shown in Fig. <ref>, we estimate the ALP flux produced from 1.25× 10^20 POT during data collection. A dedicated search for heavy ALPs decaying to di-muon pairs was performed by the ArgoNeuT collaboration <cit.>, exhibiting an event topology with very low background expectations. However, here we are interested in different types of event topologies: e^+ e^-, e^- γ, 2γ and 1γ (see Table <ref>), for which a dedicated analysis is missing. Therefore, we will not perform a likelihood analysis. We will just provide the contours in the parameter space for which the following number of signal ALP events would be observed in ArgoNeuT: 3, 20, and 100. These numbers are equal to the Poisson error of ∼ 10, 400, and 10^4 background events, respectively. § RESULTS The constraints on the ALP-photon coupling g_aγ as a function of the ALP mass m_a derived from MiniBooNE beam dump mode data is shown in Fig. <ref>. The 1σ and 2σ CLs are shown individually using the delta-log-likelihood method, and we find that the MiniBooNE data sets new laboratory limits on the ALP coupling for masses below 100 keV or so, where previously astrophysics (HB star cooling and SN1987a <cit.>, see also refs. 
<cit.>) had placed the only constraints ahead of beam dump constraints <cit.>[The measurement of the explosion energy of SN1987A can have tension to the cosmological triangle region unless the star cooling process is significantly different from the standard picture <cit.>.] and recently, constraints set by the CCM120 engineering run <cit.>. Limits set by the ArgoNeuT null result from 1.25× 10^20 POT of collected data are shown in blue, benchmarking the signal event rate at 3, 20, and 100 events in the absence of a dedicated analysis with backgrounds and proper event selection. Comparing the shape of the exclusion contours between MiniBooNE and ArgoNeuT, one can see the impact of the longer baseline between beam target and detector at ArgoNeuT (∼ 1 km) versus MiniBooNE (489 m) shifting the sensitivity contour to larger masses reflecting longer ALP lifetimes for a →γγ decay. In this space, we also show the parameter space associated with QCD Axion model benchmarks spanned between the dashed black lines. Here the range of couplings and masses are shown for Kim-Shifman-Vainshtein-Zakharov (KSVZ) benchmark models <cit.>, where the range is defined by taking the anomaly number ratios of E/N = 44/3 to E/N = 2 in the model. The correlations between the QCD axion mass and its effective couplings are taken from ref. <cit.> (see also Appendix <ref>). While the constraints shown here are purely on the photon-ALP couplings, independent constraints on the ALP-gluon couplings in these model variants are stringent and would indirectly rule out much of the parameter space <cit.>. These bands are of course only representative of these traditional QCD models shown for a sense of scale. QCD axions that are invoked to solve the strong CP problem which have parametrically heavier or lighter masses in other non-traditional models are also possible <cit.>. We set limits in the same way on the electron-ALP coupling g_ae as a function of the ALP mass in Fig. <ref>. The parameter space associated with Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) benchmark models <cit.>, for which couplings to electrons would be dominant relative to the photon couplings, the span between the dashed black lines. Again, we show this span of model parameter space for reference although the constraints shown here from pure g_ae-driven channels are conservative and indirect constraints on the DFSZ gluon couplings would be more stringent. In the electron coupling, we find that MiniBooNE dump mode tests parameter space already ruled out by existing laboratory searches (e.g. NA64, E137, and other beam dumps). Although, in the mass range ∼ 10 MeV the resonant channel e^+ e^- → a produces a highly peaked signal which becomes visible inside the energy region of interest, 75 < E_vis < 850 MeV (see Fig. <ref>). This is because the resonant energy tracks the square of the ALP mass, as E_a = m_a^2 / (2m_e), producing the first visible peak within this energy range for m_a ≃ 10 MeV. The MiniBooNE dump mode becomes highly sensitive to ALP signals here for those masses but is consistent with the existing E137 constraints in this region. The subtle undulating features in the CL contours from m_a = 10 - 30 MeV then reflect the signal rising and falling to accommodate the two data points in the 3rd and 6th energy bins in Fig. <ref>. ArgoNeuT sensitivity to this coupling is fairly powerful in the m_a > 2m_e mass range and would exclude new parameter space ahead of the limits set by the CCM120 engineering run between m_a = 1 MeV and m_a = 5 MeV. 
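As an illustration of the statistical procedure of the previous section that underlies these contours, the sketch below (ours) evaluates the binned log-Poisson likelihood and the -2 Δ ln L used to define the confidence regions, together with the decay probability P_decay; all bin counts, backgrounds, and the signal shape are placeholders rather than the MiniBooNE inputs.

import numpy as np
from scipy.special import gammaln

def log_poisson_likelihood(data, signal, background):
    # ln L = sum_i [ d_i ln(s_i + b_i) - (s_i + b_i) - ln Gamma(d_i + 1) ]
    mu = np.asarray(signal) + np.asarray(background)
    d = np.asarray(data, dtype=float)
    return np.sum(d * np.log(mu) - mu - gammaln(d + 1.0))

def decay_probability(ell, dell, decay_length):
    # P_decay = exp(-ell/(tau v_a)) * (1 - exp(-dell/(tau v_a)))
    return np.exp(-ell / decay_length) * (1.0 - np.exp(-dell / decay_length))

# placeholder 7-bin "observed" counts and expected backgrounds (NOT the MiniBooNE numbers)
data = np.array([2.0, 1.0, 0.0, 1.0, 3.0, 2.0, 1.0])
background = np.array([1.5, 1.2, 0.8, 0.9, 2.0, 1.8, 1.1])

def signal_template(norm):
    # toy signal shape scaled by an overall normalisation, standing in for s_i(m_a, g)
    shape = np.array([0.30, 0.25, 0.18, 0.12, 0.08, 0.05, 0.02])
    return norm * shape

norms = np.linspace(0.0, 20.0, 201)
lnl = np.array([log_poisson_likelihood(data, signal_template(n), background) for n in norms])
delta = -2.0 * (lnl - lnl.max())
print("toy 90% CL upper limit on the signal normalisation:", norms[delta < 2.71].max())
print("P_decay for a 489 m baseline, 10 m fiducial path, 1 km lab-frame decay length:",
      decay_probability(489.0, 10.0, 1000.0))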
This is owed in part to the energy scale and long distance from the detector to the target being ideal to probe long ALP lifetimes, and also the relatively larger e^± fluxes produced in the NuMI target (Fig. <ref>). This exclusion would be possible even for a benchmark signal rate of 100 events, corresponding roughly to a Poisson background of 10^4 events without taking into account signal efficiency. This sensitivity is lost in the scattering limit for m_a < 2 m_e where NA64 missing energy and CCM120, where being at much closer proximity to the production site, ℓ∼ 20 m plays a bigger role, set the leading constraints. § OUTLOOK The analysis of the MiniBooNE dump mode data shows significant sensitivity to dark sector states produced by the secondary electromagnetic cascades in the BNB dump environment. By utilizing the off-target configuration and examining the interactions of 1.86 × 10^20 protons with the steel beam dump, we have expanded the existing constraints on ALPs in the 10-100 MeV mass regime that couple to photons. Simultaneously, despite a small exposure and fiducial detector mass, the null observations of ArgoNeuT could potentially rule out parameter space for ALPs in the same mass range coupling to electrons, due to the higher beam energy. Stopped-pion experiments at ∼GeV scale proton beam dumps also have the capability to probe new physics in the secondary electromagnetic showers, expanding in complementary regions of model parameter space to the higher energy, longer baseline beam dump experiments situated at the NuMI, BNB, or LBNF beams. Future beam dump searches may be possible to fully probe QCD axion parameter space for MeV masses, such as a proposed dump mode or target-less running mode for DUNE <cit.>. A dedicated target-less mode was shown to test electron-ALP couplings down to g_ae∼ 10^-6 for m_a < 2 m_e and down to g_ae∼10^-9 from ALP decays to e^+ e^- pairs with a limited 3 month to 1-year exposure. § ACKNOWLEDGMENTS We are grateful to Ornella Palamara for the helpful discussions regarding the potential for dedicated ALP studies at ArgoNeuT. The work of IMS is supported by DOE under the award number DE-SC0020250. The work of BD and AT is supported by the DOE Grant No. DE-SC0010813. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High-Performance Research Computing. The work of GG, WJ, and JY is supported by the U.S. Department of Energy under Grant No. DE-SC0011686. We thank the Center for Theoretical Underground Physics and Related Areas (CETUP*) and SURF for facilitating portions of this research. § QCD AXION MODELS The correlations between the QCD axion mass and its effective couplings are given below, taken from ref. <cit.>. We simply reiterate those correlations here for the convenience of the reader. The relation between the Peccei-Quinn breaking scale f_a and the axion mass is f_a = (5.691× 10^6eV/m_a) GeV To find the correlations between the axion mass and its effective couplings to photons in the Kim-Shifman-Vainshtein-Zakharov (KSVZ) benchmark model <cit.> is then given by Eq. <ref>; g_aγ = m_a/GeV(0.203 E/N - 0.39) We then consider a range of model parameter space by considering anomaly number ratios of E/N = 44/3 to E/N = 2. This defines a band in (m_a, g_aγ) parameter space in which the QCD axion's couplings and mass may reside. 
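The relations above fix the KSVZ band shown in the figures once the range of E/N is chosen; a short sketch (ours) of that conversion:

import numpy as np

def f_a_GeV(m_a_eV):
    # Peccei-Quinn scale from the axion mass, f_a = (5.691e6 eV / m_a) GeV
    return 5.691e6 / m_a_eV

def g_agamma_GeV_inv(m_a_GeV, E_over_N):
    # KSVZ-type photon coupling, g_agamma = (m_a / GeV) (0.203 E/N - 0.39) GeV^-1
    return m_a_GeV * (0.203 * E_over_N - 0.39)

print("f_a for m_a = 1 eV:", f_a_GeV(1.0), "GeV")
for m in np.logspace(-6, -1, 6):          # 1 keV to 100 MeV
    lo = g_agamma_GeV_inv(m, 2.0)          # E/N = 2
    hi = g_agamma_GeV_inv(m, 44.0 / 3.0)   # E/N = 44/3
    print(f"m_a = {m:9.3e} GeV :  g_agamma in [{lo:9.3e}, {hi:9.3e}] GeV^-1")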
For the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) benchmark model <cit.>, for which couplings to electrons would be dominant relative to the photon couplings, we take g_ae = m_e C_ae(m_a, tanβ)/f_a where the coefficient C_ae is dependent on the rotation angle β for the vacuum expectation values of the extended Higgs sector in DFSZI and DFSZII models; DFSZ(I): C_ae = -1/3 sin^2β + loop factors DFSZ(II): C_ae = 1/3 sin^2β + loop factors Here we take tanβ values between 0.25 and 120, which equates to sinβ = 0.242536 and sinβ = 0.999965, respectively <cit.>.
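The DFSZ electron-coupling band follows in the same way; the sketch below (ours) drops the loop-factor terms quoted above and should be read only as an order-of-magnitude illustration.

import numpy as np

M_E_GEV = 0.511e-3

def f_a_GeV(m_a_eV):
    # f_a = (5.691e6 eV / m_a) GeV
    return 5.691e6 / m_a_eV

def g_ae_dfsz(m_a_GeV, tan_beta, variant="I"):
    # g_ae = m_e C_ae / f_a with C_ae = -(1/3) sin^2(beta) for DFSZ(I), +(1/3) sin^2(beta) for DFSZ(II);
    # loop-factor corrections are neglected in this illustration
    sin2b = np.sin(np.arctan(tan_beta)) ** 2
    c_ae = (-1.0 if variant == "I" else 1.0) * sin2b / 3.0
    return M_E_GEV * c_ae / f_a_GeV(m_a_GeV * 1.0e9)

for tb in (0.25, 120.0):
    print(f"tan(beta) = {tb:6.2f}:  |g_ae| at m_a = 10 MeV =", abs(g_ae_dfsz(0.01, tb)))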
http://arxiv.org/abs/2307.07510v1
20230714175813
NNLL Resummation for Projected Three-Point Energy Correlator
[ "Wen Chen", "Jun Gao", "Yibei Li", "Zhen Xu", "Xiaoyuan Zhang", "Hua Xing Zhu" ]
hep-ph
[ "hep-ph", "hep-ex" ]
Wen Chen,^a Jun Gao,^b Yibei Li,^a Zhen Xu,^a Xiaoyuan Zhang,^c Hua Xing Zhu^a [Current address: School of Physics, Peking University, Beijing 100871, China] [a] Zhejiang Institute of Modern Physics, Department of Physics, Zhejiang University, Hangzhou, 310027, China [b] INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology, School of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240, China [c] Department of Physics, Harvard University, Cambridge, MA 02138, USA The projected energy correlator measures the energy deposited in multiple detectors as a function of the largest angular distance x_L = (1 - cosχ_L)/2 between detectors. The collinear limit x_L→ 0 of the projected energy correlator is particularly interesting for understanding jet substructure, while the large logarithms of x_L could potentially spoil the perturbation theory and must be resummed. As a necessary ingredient for its resummation at next-to-next-to-leading logarithmic (NNLL) accuracy, we calculate the two-loop jet functions for the projected three-point energy correlator (E3C), using the direct integration method and the parameter space Integration-by-Parts (IBP) method. We then present the NNLL resummation for e^+e^- annihilation and an approximate NNLL resummation for the pp→ jj process, where the two-loop hard constant is estimated in the latter case. The convergence is improved and the hadronization effect in the collinear limit is suppressed when considering the ratio of the E3C distribution to the two-point energy-energy correlator (EEC). Our results show potential for a precision determination of the strong coupling constant using energy correlators from both e^+e^- data and pp data. NNLL Resummation for Projected Three-Point Energy Correlator [ received: ** 2023, accepted: * 2023 ============================================================= § INTRODUCTION Energy correlators are a class of multi-particle angle correlation functions, weighted by the particle energy. Thanks to the energy weighting, they are infrared and collinear safe observables and can be calculated in perturbation theory. The simplest energy correlator is the two-point energy correlator, or Energy-Energy Correlation function (EEC). Proposed in the 1970s <cit.>, EEC measures the correlation of energy deposited in two detectors as a function of the angle χ between them. In perturbation theory, the definition of EEC reads dσ^[2]/dcosχ ≡ ∑_i,j ∫ dσ E_i E_j/Q^2 δ(n⃗_i·n⃗_j - cosχ), where i, j run over all the final state particles, n⃗_i and n⃗_j are unit three-vectors that define the directions of the particles, and Q is the total energy in the center-of-mass frame. Compared with other event shape variables studied at the Large Electron–Positron Collider (LEP), one advantage of EEC is its simple analytic properties. As far as we are aware, EEC is the only event shape that can be calculated analytically beyond leading order, e.g. it is now known analytically through to next-to-next-to-leading order (NNLO) <cit.> in N = 4 super Yang-Mills (SYM) theory and through to NLO in QCD <cit.>. In recent years, increasing attention has been paid to the generalization of EEC to N-point energy correlators, which measure the energies of the outgoing particles with N detectors at colliders and turn out to be a function of N(N-1)/2 angles among these detectors <cit.>. 
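To make the definition concrete, here is a small, self-contained Python sketch (ours, using a toy event rather than physical data) that histograms the EEC weight E_i E_j/Q^2 in cosχ for a list of final-state energies and directions; by energy conservation the weights sum to one, which gives a quick sanity check.

import numpy as np

def eec_histogram(energies, directions, bins=50):
    # fill sum_{i,j} E_i E_j / Q^2 into a histogram in cos(chi_ij)
    E = np.asarray(energies, dtype=float)
    n = np.asarray(directions, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # unit vectors n_i
    Q = E.sum()                                     # total energy in the c.m. frame
    edges = np.linspace(-1.0, 1.0, bins + 1)
    hist = np.zeros(bins)
    for i in range(len(E)):
        for j in range(len(E)):
            c = np.clip(np.dot(n[i], n[j]), -1.0, 1.0)
            k = min(np.searchsorted(edges, c, side="right") - 1, bins - 1)
            hist[k] += E[i] * E[j] / Q**2
    return edges, hist

# toy 4-particle "event" (energies in GeV and flight directions)
energies = [45.0, 30.0, 15.0, 10.0]
directions = [[0, 0, 1], [0.1, 0, -1], [0.5, 0.5, 0.7], [-0.3, 0.2, -0.9]]
edges, hist = eec_histogram(energies, directions)
print("sum of EEC weights (should be 1 by energy conservation):", hist.sum())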
For example, the three-point energy correlator (EEEC) is defined as d^3σ/dx_1dx_2dx_3≡∑_i,j,k∫ dσE_iE_jE_k/Q^3 ×δ(x_1-1-cosθ_jk/2) δ(x_2-1-cosθ_ik/2) δ(x_3-1-cosθ_ij/2) , which gives rise to rich functional dependence on the angles and can be used to probe various properties of perturbative QCD. The LO EEEC was first computed in the triple collinear limit in Ref. <cit.>, later genelarized to arbitrary angle dependence in both =4 SYM <cit.> and QCD <cit.>. To reduce the dimension of the kinematic space of the measured angles without losing too much useful information, one can project the kinematic dependence into a 1D subspace, which leads to the so-called projected energy correlator<cit.>. In momentum space, projected N-point energy correlator (ENC) is given by restricting the maximum angular distance to be x_L: dσ^[N]/dx_L≡∑_n∑_1≤ i_1,⋯ i_N≤ n∫ dσ∏_a=1^N E_i_a/Q^Nδ(x_L-max{x_i_1,i_2,x_i_1,i_3, ⋯ x_i_N-1, i_N}) , and for example, EEEC is then reduced to the projected three-point correlator (E3C). In this work we are mainly interested in the small angle, or collinear limit of E3C, namely x_L → 0. It is well-known in the boundary of phase space, incomplete cancellation of infrared divergences can lead to large logarithms that could possibly spoil the convergence of the perturbation theory and thus it is essential to resum these large logarithms to all orders. EEC is special as it exhibits both large logarithms in collinear limit and back-to-back limit. In this work we are interested in the large logarithms in the collinear limit, for which the most singular terms behave as α_s^n ln^n x_L at n loops. In the collinear region, EEC can be factorized into a hard function and a jet function, both of which live in the flavor space. The resummation of collinear EEC has been performed up to NNLL accuracy in both QCD <cit.> and =4 SYM <cit.>. More interestingly, the collinear factorization can be easily generalized to three-point energy correlator <cit.> and even the projected N-point energy correlator <cit.>. Previously, LL and NLL resummation has been performed in <cit.>. To improve upon those results, it is necessary to compute the relevant jet and hard function to higher order. While the hard function is universal for them, the jet functions differ by the measurement function. One of the key new results in this paper is the calculation of two-loop jet function for projected three-point energy correlator, which is the last missing ingredient for NNLL resummation of projected three-point energy correlator in e^+e^- collider. One of the main motivations for improving the theoretical accuracy of projected energy correlators comes from the possibility of determining the strong coupling constant α_s by measuring the ratio of projected energy correlators <cit.>. Measurements of strong coupling constant using classical QCD event shape observable has been actively studied for a long time, e.g. <cit.>. In recent years, there has been increasing attention to using jet substructure observables to extract α_s, such as soft-drop thrust and jet mass <cit.>, see also <cit.> for α_s determination from jet substructure by demixing quark and gluon jets. Since we are mainly concerned with the collinear limit of projected energy correlators in this paper, our results naturally provide theory input for measuring projected energy correlator within a jet, treating it as a jet substructure observable. 
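The projection onto the largest angular distance can be illustrated in the same spirit; in the sketch below (ours, again with a toy event), every triplet of particles contributes E_i E_j E_k/Q^3 at x_L equal to the maximum of the pairwise (1 - cosθ)/2, which is the N = 3 case of the projected correlator defined above.

import numpy as np

def projected_e3c(energies, directions, edges):
    # histogram of sum_{i,j,k} E_i E_j E_k / Q^3 versus x_L = max pairwise (1 - cos theta)/2
    E = np.asarray(energies, dtype=float)
    n = np.asarray(directions, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    Q = E.sum()
    x = 0.5 * (1.0 - n @ n.T)                     # pairwise angular distances x_ij
    hist = np.zeros(len(edges) - 1)
    m = len(E)
    for i in range(m):
        for j in range(m):
            for k in range(m):
                xL = max(x[i, j], x[i, k], x[j, k])
                b = np.searchsorted(edges, xL, side="right") - 1
                b = min(max(b, 0), len(hist) - 1)
                hist[b] += E[i] * E[j] * E[k] / Q**3
    return hist

energies = [45.0, 30.0, 15.0, 10.0]
directions = [[0, 0, 1], [0.1, 0, -1], [0.5, 0.5, 0.7], [-0.3, 0.2, -0.9]]
edges = np.logspace(-4, 0, 21)                    # logarithmic x_L bins for the collinear region
hist = projected_e3c(energies, directions, edges)
print("total E3C weight (should be 1):", hist.sum())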
We will show that considering the ratio of E3C and EEC can significantly reduce scale uncertainties and hadronization corrections, which makes it a good candidate for precision determination of α_s using jet substructure. We also note that energy correlators have the advantage that they can be defined and calculated using charged hadrons only <cit.>. Using the track function formalism <cit.>, it is possible to perform precision calculation for projected energy correlators on tracks in the future. The outline of this paper is as follows. In Sec. <ref>, we present the factorization theorem for ENC in the collinear limit and the RG evolution for both hard function and jet function. The desired orders required for all the ingredients to achieve NNLL resummation are briefly summarized there. In Sec. <ref>, we calculate the two-loop E3C jet function. Modern multiloop techniques like IBP and differential equation (DE) are applied for both finite and contact terms. Combining all together, we are able to extract the two-loop E3C jet constants, which is the last missing piece of the NNLL resummation for collinear E3C in e^+e^- collision. In Sec. <ref>, we present the matched NNLL results for both E3C and the ratio of E3C to EEC in e^+e^- collision. A qualitative analysis is performed to estimate the leading hadronization correction. The resummation procedure is extended to the case of pp collision, in particular, the pp→dijet process in Sec. <ref>. We present the highest perturbative prediction given the available ingredients, the approximate NNLL, with the missing two-loop hard function constants estimated and included as an additional uncertainty. We summarize and conclude in Sec. <ref>. § RESUMMATION FORMALISM §.§ Factorization theorem In this subsection, we summarize the factorization theorem for the projected N-correlator in the collinear limit and describe the necessary ingredients for NNLL resummation <cit.>. Similar to EEC, N-point energy correlator (ENC) in this limit is dominated by the logarithmic series of the largest angular distance x_L dσ^[N]/dx_L= ∑_L=1^∞∑_j=-1^L-1(α_s(μ)/4 π)^L c_L,j^j (x_L) +… , where ^-1(x_L)=δ(x_L) and ^j(x_L)= [ln^j(x_L)/x_L ]_+ for j≥ 0, with standard plus distribution. We do the logarithm counting in the projected N-point energy correlator cumulant, defined as Σ^[N](x_L, lnQ^2/μ^2) = 1/σ_tot∫_0^x_L dx_L^' dσ^[N]/d x_L^'(x_L^', lnQ^2/μ^2) , which maps [ln^j(x_L)/x_L ]_+ → 1/(j+1)×ln^j+1(x_L) and δ(x_L)→ 1. Then N^kLL accuracy refers to the logarithmic series ∑_i=0^∞∑_j=max{0, i-k}^i(α_s(μ)/4 π)^i d_i,jln^j x_L in the cumulant Σ^[N]. At leading power, the e^+e^- cumulant Σ^[N] can be written in terms of a modified factorization formula in the collinear limit x_L → 0<cit.>: Σ_ee^[N](x_L, lnQ^2/μ^2) = ∫_0^1 d x x^N J⃗^[N](lnx_L x^2 Q^2/μ^2) ·H⃗_ee(x, lnQ^2/μ^2) , where the hard function H⃗_ee^[N] encodes the production of a parent parton with energy fraction x with respect to the center of mass energy, and the jet function J⃗^[N] encodes the evolution of the parent parton into a number of collinear partons which contribute to the observable. Similar factorization formula for EEC was first obtained in <cit.>, and checked explicitly with known NLO results in QCD <cit.> and N = 4 SYM  <cit.>. We note the explicit dependence on the variable x in both the jet function and the hard function. Ignoring the dependence on different quark flavor, both jet and hard functions are two-component vectors living in the flavor space, i.e. 
J⃗^[N]={J_q^[N], J_g^[N]}, H⃗_ee={H_ee,q,H_ee,g}. We will describe their definition for both e^+e^- annihilation and pp collision in detail in the following subsections. We also emphasize that the factorization theorem holds for any N at leading power, though we only calculate the N=3 case in this paper. Finally the energy weights in the distribution makes projected N-point energy correlator insensitive to the soft radiations and non-global logarithms. In hadron colliders, the largest angular distance x_L is replaced by the rapidity-azimuth distance R_L = max_i,j ∈ X_E√(Δη_ij^2 + Δϕ_ij^2), where X_E is the set of particles that contributes to the energy weight. When the projected energy correlators are measured within a jet, as is typical for jet substructure observable, the cumulant Σ^[N]_had also depends on the jet radius R_0 parameter. In the limit of R_L ≪ R_0, the modified factorization formula can be written as Σ^[N]_had(R_0, R_L, lnp_T^2/μ^2) = ∫_0^1 d x x^N J⃗^[N](lnR_L^2 x^2 p_T^2/μ^2) ·H⃗_had(R_0, x, lnp_T^2/μ^2) , where p_T is the jet transverse momentum. Around R_L ∼ R_0, the jet function can also depend on R_0. However, there is no large logarithms associated with R_0, and its dependence can be obtained from fixed-order matching. For simplicity, we will ignore the R_0 dependence in the jet function. In that case the jet function become universal between e^+e^- and pp collision. For pp collision, the hard function depends on the partonic scattering process, as well as parton distribution functions (PDFs). §.§ Hard functions §.§.§ e^+e^- annihilation For e^+e^-, the hard function is simply the semi-inclusive hadron fragmentation function <cit.>, which depends on the parton flavor and parton energy fraction x=2p· q/Q^2, where q is the total momentum and p is the parton momentum. The leading order hard function follows from the born process e^+e^-→ qq̅, H⃗_ee^(0) (x) = {2δ(1-x), 0}. At one-loop, we find 1/2 H_ee,q^(1)(x) = α_s/4 π C_F [ (4 π ^2/3-9) δ(1-x) +4 [ln(1-x)/1-x]_+ +(4 ln (x)-3/2)(2 1/[1-x]_+-x-1)-9 x/2-2 (x+1) ln (1-x)+7/2] , H_ee,g^(1)(x) = α_s/4 πC_F [ 4 (x^2-2 x+2) ln (1-x)/x+8 (x^2-2 x+2) ln (x)/x] . The factor 1/2 in front of the quark channel indicates for identical contribution from anti-quark, since we do not dinstinguish quark and anti-quark flavor. At two-loop, the hard function can be found from the coefficient functions in <cit.>. Similar to the hadron fragmentation function, the renormalization group evolution (RGE) for the hard function H⃗ is simply the DGLAP equation, d H⃗(x, lnQ^2/μ^2)/d lnμ^2 = - ∫_x^1dy/yP(y) ·H⃗( x/y, lnQ^2/μ^2) , with P(y) being the singlet timelike splitting matrix, which is now known to three loops <cit.>. While it is very difficult to derive an analytic solution for DGLAP to all orders in α_s, as we will see below, our resummation only uses a α_s-expanded solution (which turns out to be a very good approximation) and only requires certain moments of the hard function. Explicitly, we will only need the regular and logarithmic moments for the hard function defined as the following <cit.>, ∫_0^1 dx x^N H_q,g(x,μ=Q) = ∑_L=0^∞( α_s/4π)^L h_L^q,g(N) , ∫_0^1 dx x^N ln x H_q,g(x,μ=Q) = ∑_L=1^∞( α_s/4π)^Lḣ_L^q,g(N) , ∫_0^1 dx x^N ln^2 x H_q,g(x,μ=Q) = ∑_L=1^∞( α_s/4π)^Lḧ_L^q,g(N) . Here we use x^N ln x =∂_N x^N and the dot on the RHS stands for the derivative. The expressions of needed hard function moments can be found in Appendix <ref>. 
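As a cross-check of the moment definitions, the gluon-channel entries can be reproduced numerically, since H_ee,g^(1) contains neither delta functions nor plus distributions; the sketch below (ours) evaluates ∫ dx x^N H_ee,g^(1)(x) and the corresponding single-logarithmic moment in the normalisation of the equations above. The quark channel would additionally require the delta-function and plus-distribution pieces to be integrated analytically.

import numpy as np
from scipy.integrate import quad

def H_g_one_loop(x):
    # one-loop gluon hard function divided by (alpha_s / 4 pi) C_F:
    # 4 (x^2 - 2x + 2) ln(1-x)/x + 8 (x^2 - 2x + 2) ln(x)/x
    p = x**2 - 2.0 * x + 2.0
    return 4.0 * p * np.log(1.0 - x) / x + 8.0 * p * np.log(x) / x

def moment(N, log_power=0):
    # int_0^1 dx x^N (ln x)^log_power H_g^(1)(x) / [(alpha_s/4 pi) C_F]
    val, _ = quad(lambda x: x**N * np.log(x)**log_power * H_g_one_loop(x), 0.0, 1.0, limit=200)
    return val

C_F = 4.0 / 3.0
for N in (2, 3):
    print(f"N = {N}:  h_1^g(N) = {C_F * moment(N):.6f},  hdot_1^g(N) = {C_F * moment(N, 1):.6f}")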
§.§.§ pp collision In hadronic collisions, we mainly focus on the dijet production pp→ jj, which has a relatively large cross section at the LHC. Different from e^+e^- collider, this hard function incorporates the partonic scattering cross sections, the contribution from parton distribution functions (PDFs) and the jet algorithms for clustering the particles. Currently, to the best of our knowledge, the hard function is not know at two-loop. However, important progress are being made to compute those hard functions, e.g. <cit.>. Similar to the e^+e^- case, our resummation will only need the hard function moments. In this work we evaluate the needed moments of the hard function numerically in Madgraph5 <cit.>. To investigate the sensitivity of the result to the values of α_s, we used three different PDF sets: , and through Lhapdf <cit.>. Each PDF set fixes also the value of α_s(m_Z) and the corresponding evolution in Madgraph5. To address the fact that the hard function contains collinear divergence when resolving the energy fraction of the quarks and gluons, we use the one cut-off phase space slicing to regularize the collinear singularity, as implemented in <cit.>. With the collinear divergent contribution singled out and calculated analytically, the remaining contributions can be evaluated numerically. The detailed discussion can be found in Appendix <ref>. For pp→ jj, we adopt the anti-k_t algorithm <cit.> for jet detection and use the following parameters in the calculation R_0=0.4, p_T > 15 GeV, |η|<1.5 . The two leading jets are further subject to the following cuts |Δϕ(j_1, j_2)| >2 , |p_T^1 - p_T^2|/(p_T^1 + p_T^2) < 0.5 , and cast to the corresponding p_t bins for the analysis. The calculated moments need to be normalized with the cross section σ_J of jet production within specific p_t range. In particular, we expand H_had/σ_J to NLO in a_s, and take the 𝒪(a_s^0) and 𝒪(a_s^1) as the leading and next-to-leading order results. For the purpose of phenomenological studies, we will focus on two different p_t ranges: [300,350] GeV and [500,550] GeV. The hard function moments needed for NNLL are also summarized in Appendix <ref>. §.§ Jet functions The E3C jet function, on the other hand, encodes the measurement information. From RG invariance of the modified factorization formula (<ref>), the jet function satisfies a modified timelike DGLAP evolution equation d J⃗^[N]( lnx_L Q^2/μ^2)/d lnμ^2 = ∫_0^1 dy y^N J⃗^[N]( lnx_L y^2 Q^2/μ^2) ·P(y) . In order to write down an operator description of the E3C jet function, we first recall the collinear EEEC jet function from <cit.>: J_q(x_1,x_2,x_3,Q,μ^2) = ∫dl^+/2π1/2N_CTr∫ d^4x e^i l· x⟨ 0 | n̅/2χ_n(x) _EEEC δ (Q+n̅·) δ^2(_⊥) χ̅_n(0) |0⟩ J_g(x_1,x_2,x_3,Q,μ^2) = ∫dl^+/2π1/2 (N^2_C-1)Tr∫ d^4x e^i l· x⟨ 0 | ^a,μ_n,⊥(x) _EEEC δ (Q+n̅·) δ^2(_⊥) ^a,μ_n,⊥(0) |0⟩ , where χ_n≡ W_n^†ξ_n is the collinear quark and ^μ_n,⊥≡1/g[1/n̅·W_n^† [in̅· D_n, iD_n⊥^μ] W_n] is the collinear gluon, and _n⊥^μ form a complete set of collinear gauge invariant building blocks <cit.> in SCET <cit.>. The triple collinear measurement function _EEEC is defined as ℳ_EEEC(x_1,x_2,x_3)=∑_i,j,kE_i E_j E_k/Q^3δ(x_1-θ_ij^2/4)δ(x_2-θ_jk^2/4)δ(x_3-θ_ki^2/4) , with θ_ij being the angle between parton i and j. Then our E3C jet function has the same form as EEEC jet function, with a replacement of the measurement function: ℳ_EEEC⇒ℳ_E3C(x_L) =∫_0^x_L dx_L^'∫_K dx_1 dx_2 dx_3 ℳ_EEEC δ(x_L^'-max(x_1,x_2,x_3)) =∫_K dx_1 dx_2 dx_3 ℳ_EEEC θ(x_L-max(x_1,x_2,x_3)) . 
There are two folds integration in the first line. The first one is performed in the allowed kinematic space {x_1,x_2,x_3}∈ K that will be discussed below, projecting the shape-dependent EEEC jet function into a single-scale jet function. The second integration brings the differential measurement to the cumulant level. For N>3, the measurement function takes a similar structure, with more δ functions and integrations. Perturbatively, the E3C jet function can be written as J_q,g=∑_L (α_s/4π)^L J^(L)_q,g, and we use the normalization condition 2^3 · J^(0)_q=2^3 · J^(0)_g=1 as in Ref. <cit.>. The one-loop correction can be calculated from the QCD 1 → 2 timelike splitting kernel and is given by 2^3 J^(1)_q = 9C_F/2lnx_L Q^2/μ^2 -37 C_F/2 , 2^3 J^(1)_g = (21 C_A/5 +3 n_f/10) lnx_L Q^2/μ^2 -449 C_A/25-21 n_f/25 . Note that the μ-dependent terms are precisely captured by the jet RGE, while the remaining constants have to come from the fixed-order calculation. One of the main result in this paper is to calculate the two-loop constants described below. §.§ Two-loop calculation for the E3C jet function In this subsection, we present the two-loop calculation of the E3C jet functions for both quark jets and gluon jets. Since they are universal in the small angle limit, they can be used in both e^+e^- collision and pp collision. We start from recalling the definition of E3C at finite angle before taking the small angle limit. At two loops, E3C receives contributions from double-real (RR) and real-virtual (RV) as well as double-virtual (VV) corrections to q→ q, from which the quark jet function can be extracted by matching to the factorization formula, (<ref>). Similarly, the gluon jet function can be extracted from the NLO E3C distribution of Higgs gluonic decay H→ gg. To organize the calculation, we rewrite the definition of E3C in Eq. (<ref>) with the number of energy weight: 1/σ_0dσ^[3]/dx_L =∑_1≤ i_1≠ i_2≠ i_3≤ 4∫LIPS_4 |ℳ_4|^2 E_i_1 E_i_2 E_i_3/Q^3δ(x_L-max{x_i_1,i_2,x_i_1,i_3,x_i_2,i_3}) + ∑_n∈{3,4}∑_1≤ i_1≠ i_2≤ n∫LIPS_n |ℳ_n|^2 E^2_i_1 E_i_2/Q^3δ(x_L-x_i_1,i_2) + ∑_n∈{2,3,4}∑_1≤ i_1≤ n∫LIPS_n |ℳ_n|^2 E^3_i_1/Q^3δ(x_L) , where we normalize the distribution to the born cross-section in d dimension. The first line represents the contribution from nonidentical energy weights measurement and the other lines are called contact terms. If we define x_1=x_L z z̅, x_2=x_L (1-z)(1-z̅) and x_3=x_L, then in the collinear limits, they are the contact terms for δ(zz̅) that captures the strict squeeze limit and δ(x_L) that captures the strict triple collinear limit. The main goal of this section is to compute the collinear limit of Eq. (<ref>) and extract the corresponding two-loop constants. The lowest regular distribution of the E3C quark jet function comes from tree-level process γ^*→4 partons in electron-positron annihilation, which under the triple collinear limit, factorizes into the born process γ^*→ qq̅ and the 1→ 3 splitting functions, and we will call it nonidentical energy weight term. Below we will introduce two different methods to compute this part. The traditional method is to calculate the EEEC jet function to order (ϵ) and to integrate two angular distances x_2, x_3 numerically by the interpolation method. The OPE singularities (sometimes called squeezed singularities) of EEEC are subtracted and integrated in d dimension separately. The second approach benefits from the parameter space IBP method <cit.> developed very recently. 
Only 7 master integrals are needed to express EEEC, allowing the precise calculation of the remaining two-fold integral. The other two parts contribute to the contact terms and cancel the infrared divergence, which is guaranteed by the Kinoshita-Lee-Nauenberg (KLN) theorem <cit.>. Similar to EEC at NLO, the measurement function in the contact terms can be treated as a non-standard cut propagators, which allows for a generalized IBP reduction in Litered <cit.> and Fire6 <cit.>. The master integrals then can be calculated in packages like Canonica <cit.> or Libra <cit.> with the differential equation method implemented. §.§.§ Nonidentical energy-weight terms We start by computing the nonidentical energy-weight contribution in the traditional approach. As discussed in Ref. <cit.>, the inclusive jet function J_ijk is related to the 1→ 3 splitting function P_ijk<cit.> through J^nonid≡ J_ijk=∫dΦ^(3)_c(μ^2e^γ_E/4π)^2 ϵ4g^4/s_123^2∑_i,j,kP_ijkℳ_EEEC , where dΦ^(3)_c is the triple collinear phase space <cit.>, and i,j,k run over all final-state particles. The fully differential distribution with respect to all angular distances {x_1,x_2,x_3} in d=4-2ϵ dimension is then written as d J^nonid/dx_L dRe(z) dIm(z)=(μ^2/Q^2)^2ϵα_s^2/π^3e^2ϵγ_E/Γ(1-2ϵ)1/x_L^1+2ϵ1/ (2Im(z))^2ϵ ×[G(z)+ϵ F(z)+ϵ^2 H(z)+(ϵ^3)] , where G(z),F(z),H(z),⋯ the shape function in ϵ expansion. The order (1) part G(z) is computed analytically in <cit.> and following the same approach, we also calculate the complete result for F(z) and the z→ 1 limit of H(z). We will see that these are all the needed ingredients for nonidentical part. Note that the x_L dependence is defined by plus distribution, where 1/x_L^1+2ϵ=-δ(x_L)/2ϵ+(1/x_L)_+-2ϵ(ln x_L/x_L)_++⋯ . In order to perform the integral over z, we need to figure out the integration region first. Compared with the first line in Eq. (<ref>), it is straightforward to show that d J^nonid/dx_L=(μ^2/Q^2)^2ϵα_s^2/π^3e^2ϵγ_E/Γ(1-2ϵ)6/x_L^1+2ϵ∫_d (z)d (z)/(2 (z))^2ϵ[G(z)+ϵ F(z)+ϵ^2 H(z)+(ϵ^3)]_≡ A(ϵ) , where the constant factor 6 comes from the S_3 permutation symmetry and the integration region is given in Fig. <ref>. To calculate A(ϵ) numerically, we also need to subtract the OPE singularities around z→ 1 at the integrand level, and evaluate its z integration analytically in d dimension. The full asymptotic expansion of z→ 1 is given in the appendix <ref>. The most singular term is proportional to 1/(1-z)(1-z̅), which gives rise to ∫_0^√(3)/2d (z)∫_1/2^√(1-( (z))^2)d (z)1/(2 (z))^2ϵ1/(1-z)(1-z̅) =-π/4ϵ-κ +ϵ(-167/1080π^3-1/20πln^2 3+κln 3+12/5η)+(ϵ^2) . Here κ=_2 e^iπ/3 is the Gieseking's constant living in the transcendentality-two family and η=_3(i/√(3)) is a parity-odd transcendentality-three constant. These constants are typical numbers in loop integrals, especially in trijet observable calculations. With subtraction terms, the integral A in Eq. (<ref>) up to order (ϵ) is then written as A =∫_0^√(3)/2d (z)∫_1/2^√(1-( (z))^2)d (z)1/(2 (z))^2ϵ[G(z→ 1)+ϵ F(z→ 1)+ϵ^2 H(z→ 1)] +∫_0^√(3)/2d (z)∫_1/2^√(1-( (z))^2)d (z)1/(2 (z))^2ϵ[(G(z)-G(z→ 1))+ϵ(F(z)-F(z→ 1))] . The first term is proportional to Eq. (<ref>) and it is straightforward to compute it to (ϵ). For the second integral, we have to expand in ϵ and evaluate it numerically. To implement the interpolation method, we first change the integration variables via v_1=2/√(3) (z) and v_2= (z)-1/2/√(1-((z))^2)-1/2, such that both v_1,2 range from 0 to 1. 
Then we can build a 2D lattice by discretizing v_1,2 and approximate our integrand with polynomials. This allows one to perform the two-fold numerical integral directly in Mathematica. To check the stability of the integration and estimate the statistical error, we vary the lattice size and the order of polynomials and see which significant figure remains unchanged. Eventually we obtain both δ(x_L) contact term and 1/x_L finite term for the nonidentical energy weight contribution. The explicit expression for both quark and gluon jet function can be found in Eq. (<ref>)-(<ref>) in the appendix. Alternatively, benefiting from the recent development of the IBP method in the Feynman parameter space, we can simplify the whole jet function calculation with integral reduction. First of all, recall that Eq. (<ref>) takes the form J^nonid≡∫dx_1dx_2dx_3 d J^R/dx_1dx_2dx_3∝∫dx_1dx_2dx_3dω_1dω_2dω_3δ(1-ω_1-ω_2-ω_3)P̂_ijk . Here P̂ is a homogeneous function of the energy fraction ω_i of the final-state particles. Explicitly, it is of the form P̂_ijk=ω_1^α_1ω_2^α_2ω_3^α_3/f_1^β_1f_2^β_2 , with f_1 linear in ω_i, and f_2 a polynomial of ω_i of degree 2. Following the idea in Ref <cit.>, the integral d^3J^R/dx_1dx_2dx_3 in Eq. (<ref>) can be related to a Feynman parameter integral through[In the special cases where β_1=0 or f_1=U, we don't need to introduce the parameter ω_4.] d^3J^nonid/dx_1dx_2dx_3= Γ(β_1+β_2)/Γ(β_1)Γ(β_2)∫dω_1dω_2dω_3dω_4δ(1-ω_1-ω_2-ω_3)ω_1^α_1ω_2^α_2ω_3^α_3ω_4^β_1-1/(f_2+f_1ω_4)^β_1+β_2 = Γ(β_1+β_2)/Γ(β_1)Γ(β_2)∫dω_1dω_2dω_3dω_4δ(1-ω_1-ω_2-ω_3)ω_1^α_1ω_2^α_2ω_3^α_3ω_4^β_1-1/(f_2+f_1ω_4)^β_1+β_2 = Γ(β_1+β_2)/Γ(β_1)Γ(β_2)∫dω_1dω_2dω_3dω_4δ(1-U)ω_1^α_1ω_2^α_2ω_3^α_3ω_4^α_4/U^λ_1F^λ_2 ≡ Γ(α_1)Γ(α_2)Γ(α_3)/Γ(β_1)Γ(β_2)I(α_0,α_1,α_2,α_3,α_4) , where U=ω_1+ω_2+ω_3, F=f_2+f_1ω_4, λ_1=α_1+α_2+α_3-β_1-2β_2+3, λ_2=β_1+β_2, and α_0=-β_1-β_2. The integral in the last line is a standard parametric Feynman integral, which can be reduced with IBP reduction <cit.> in the parametric representation <cit.>[The algorithms described in ref. <cit.> to generate symbolic rules work only when all the indices are nonnegative. Thus, here we carry out the reduction by merely solving IBP identities using Kira <cit.>.]. The master integrals are ℐ_1=I_1(α _0,-2 ϵ ,1-2 ϵ ,-2 ϵ), ℐ_2=I_1(α _0,1-2 ϵ ,-2 ϵ ,-2 ϵ), ℐ_3=I_1(α _0,-2 ϵ ,-2 ϵ ,1-2 ϵ), ℐ_4=I_1(α _0,-2 ϵ ,-2 ϵ ,-2 ϵ), ℐ_5=I_2(α _0,-2 ϵ ,-2 ϵ ,-2 ϵ ,0), ℐ_6=I_3(α _0,-2 ϵ ,-2 ϵ ,-2 ϵ ,0), ℐ_7=I_4(α _0,-2 ϵ ,-2 ϵ ,-2 ϵ ,0) , with the integrals I_i defined by the F polynomials F_1=x_1ω_2ω_3+x_2ω_1ω_3+x_3ω_1ω_2 , F_2=F_1+(ω_1+ω_2)ω_4 , F_3=F_1+(ω_1+ω_3)ω_4 , F_4=F_1+(ω_2+ω_3)ω_4 , and α_0=6ϵ-2[Notice that though here α_0 and ϵ are not independent, we should treat them as independent parameters during the IBP reduction, because otherwise some integrals may be ill-defined.]. The master integrals can be evaluated using the differential equation technique <cit.>. For simplicity, we set μ=x_3=1, and introduce u and v following z=u(1+iv). Then we construct the differential-equation system with respect to u, and derive the canonical basis <cit.> using Libra <cit.> ℐ_1^'= 6 u (v-1) ℐ_4+x_1 (1-2 ϵ )/ϵℐ_1 , ℐ_2^'= 6 (u-1) ℐ_4+x_2 (1-2 ϵ )/ϵℐ_2 , ℐ_3^'= 6 (u v+u-x_1) ℐ_4+x_1 x_2 (1-2 ϵ )/ϵℐ_3 , ℐ_4^'= 6 u v ℐ_4 , ℐ_5^'= (x_1-x_2)ℐ_5 , ℐ_6^'= (x_3-x_1)ℐ_6 , ℐ_7^'= (x_2-x_3)ℐ_7 , with the corresponding alphabet {u, 2u-1, x_2, x_2-1}. By solving the differential-equation system, we can express the master integrals via Goncharov polylogarithms (GPLs) <cit.>. 
The GPL is defined iteratively by G(a_1,⋯ a_n; x)≡∫_0^x dt/t-a_1 G(a_2,⋯ a_n; t) , with G(;x)≡1, G(0⃗_n;x)≡1/n!ln^n (x) . After finishing the simplified calculation of EEEC in the collinear limit, we still need to integrate two angular distances for the projected EEEC as the previous approach. By virtue of the S_3 permutation symmetry, this amount to consider dJ^nonid/dx_L= 6∫dx_1dx_2 Θ(x_1,x_2)d^3J/dx_1dx_2dx_3 = 24∫dudv Θ(x_1,x_2)u^2vd^3J/dx_1dx_2dx_3 ≡ ∫dudv Θ(x_1,x_2)J(u,v) , where Θ(x_1,x_2)≡θ(1-√(x_2))θ(√(x_2)-√(x_1))θ(√(x_2)+√(x_1)-1). Now the OPE singularity corresponds to u→ 0 limit, and similarly, we need to subtract the singular behavior and do the integration separately: dJ^nonid/dx_L=∫dudv Θ(x_1,x_2)J̃(u→0)+∫dudv Θ(x_1,x_2)[J(u,v)-J̃(u→0)] , where again we can evaluate the first integral in d dimension and expand the integrand of the second one in ϵ. To calculate the J̃(u→0), now we can directly extract the asymptotic expansion of the integral I in Eq. (<ref>) from DE, in which we identify two expansion regions: hard region: ω_1∼ω_2∼ω_3∼ 1 , small region: ω_2∼ω_3∼ 1, ω_1∼ u^2 . Evantually we only need to integrate the reduced master integrals in d dimension. Regarding the second integral in Eq. (<ref>), the u integral is straightforward since J̃(u,v) is expressed in terms of GPLs of the form G(…,u). However, the v integral becomes unstable in two regions v→0 and v→∞. To resolve this problem, we decompose the v∈ [0,∞] integration into three parts: [0, 1/C], [1/C, C], and [C, ∞], with a arbitrary cut parameter C>1. In the region (1/C, C), we carry out the integration numerically, with the GPLs numerically using Handyg <cit.>. The other two regions require expanding the integrand in v (or 1/v) to 𝒪(v^100) (or 𝒪(v^-100)) and performing the integration analytically. This expansion can easily be done by asymptotically solving the differential equations satisfied by the GPLs. Eventually, we find the same result as in Eq. (<ref>)-(<ref>). §.§.§ Contact terms While it is convenient to calculate the nonidentical E_i_1E_i_2E_i_3 part starting with the splitting functions, it is preferable to compute the full angular dependence on x_L for corresponding processes (namely e^+e^- annihilation and gluonic Higgs decay) with energy weights E^2_i_1E_i_2 (i_1≠ i_2) and E^3_i_1, and extract the contact term from the collinear limit x_L→ 0. In other words, we will adopt the full matrix elements squared and compute the full phase space integral using modern multi-loop techniques, with which the collinear expansion gives σ^[3]_E^2EC(x_L)/ x_L (the E^2_i_1E_i_2 (i_1≠ i_2) part) and σ^[3]_E^3C(x_L)/ x_L (the E^3_i_1 part) in the x_L→ 0 limit. We start with the relevant processes in perturbation theory for two-loop jet functions, e^+e^- annihilation Higgs decays γ^*→ qq̅+VV H→ gg+VV γ^*→ qq̅g+V H→ ggg+V H→ qq̅g+V γ^*→ qq̅gg H→ gggg γ^*→ qq̅qq̅ H→ qq̅gg γ^*→ qq̅q'q̅' H→ qq̅qq̅ H→ qq̅q'q̅' where V and VV denotes one-loop and two-loop correction respectively. In particular, in the x_L→ 0 limit, 1→2 processes only contribute to δ(x_L)-terms (i.e., σ^[3]_E^3C(x_L)/ x_L). The calculation setup of σ^[3]_E^2EC(x_L,ϵ)/ x_L shares the same structure as the original EEC, which basically follows the approach described in Ref. <cit.> and more detail in <cit.>. 
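Returning briefly to the GPLs defined above: for real indices that do not lie on the integration path they can be evaluated by direct recursive integration, which provides a useful independent check of dedicated tools such as Handyg. A minimal sketch (ours), verified here against G(0,1;x) = -Li_2(x):

import math
from scipy.integrate import quad
from scipy.special import spence

def gpl(a, x):
    # G(a_1,...,a_n; x) = int_0^x dt/(t - a_1) G(a_2,...,a_n; t), with G(;x) = 1 and
    # G(0,...,0; x) = ln^n(x)/n!  (valid here for real indices away from the open interval (0, x))
    a = tuple(a)
    if len(a) == 0:
        return 1.0
    if all(ai == 0 for ai in a):
        return math.log(x) ** len(a) / math.factorial(len(a))
    val, _ = quad(lambda t: gpl(a[1:], t) / (t - a[0]), 0.0, x, limit=200)
    return val

x = 0.5
print("G(0,1;x) =", gpl((0.0, 1.0), x))
print("-Li_2(x) =", -spence(1.0 - x))   # scipy's spence(z) equals Li_2(1 - z)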
Briefly speaking, using the Cutkosky rules <cit.>, we can replace the phase-space on-shell delta functions with the cut propagators δ(p^2)=1/2πi(1/p^2-i0-1/p^2+i0) , and also the EEC measurement function δ(x_L-x_i,j) with δ(x_L-1-cosθ_ij/2)=(p_i· p_j)/x_Lδ[2x_L(p_i· Q)(p_j· Q)-p_i· p_j] = 1/2πi(p_i· p_j)/x_L{1/[2x_L(p_i· Q)(p_j· Q)-p_i· p_j]-i 0-1/[2x_L(p_i· Q)(p_j· Q)-p_i· p_j]+i 0} , where we set the center-of-mass energy Q=1 for simplicity. After topology classification and identification as described in Ref. <cit.>, the E^2EC integral can be reduced to a set of master integrals ℐ_k(x_L,ϵ) using IBP reduction and E^2EC distribution can be written as a linear combination of the master integrals, / x_Lσ^[3]_E^2EC(x_L,ϵ)=∑_k 𝒞_k(x_L, ϵ)ℐ_k(x_L,ϵ) . Specifically, we generate the standard IBP equations using Litered <cit.>, add the missing one that is associated with the EEC measurement function by hand, and do the reduction in Fire6 <cit.>. The master integrals turn out to be the same as in NLO EEC calculation for both e^+e^- annihilation and gluonic Higgs decays, which can be converted into the canonical basis using the DE package Canonica <cit.>. In order to obtain the collinear σ^[3]_E^2EC(x_L,ϵ)/ x_L, one could surely expand the differential equation asymptotically and derive the analytical expression of the master integrals in that limit. However, the fact that the most singular power of 𝒞_k's is x_L^-8 requires us to compute the master integrals up to (x_L^7) order, which turns out to be expensive and time-consuming. This becomes worse in the higher-point energy correlator since the singular power increases as well. One antidote is to reconstruct the coefficients from DE following an ansatz on the structure of asymptotic expansion. In fact, the pattern turns out to be x_L^-ϵU^(1)_1(x_L,ϵ) at (α_s) and x_L^-ϵU^(2)_1(x_L,ϵ)+x_L^-2ϵU^(2)_2(x_L,ϵ) at (α_s^2), where U denotes a series in x_L with rational fractions of ϵ as the coefficients. Therefore, we perform the asymptotic expansion in the following way. First of all, we solve the canonical DE at 0<x_L<1 to transcendental-weight 5, which can be used to obtain the finite part of the contact term via Eq. (<ref>). The result can be converted to Harmonic polylogarithms (HPLs) with the package Hpl <cit.> or even classical polylogarithms. Then we can extract the leading power x_L^-1 and match it to a resummed ansatz x_L^-1-ϵC_1(ϵ)+x_L^-1-2ϵC_2(ϵ) , with unknown ϵ-series C_1(ϵ) and C_2(ϵ). The matching between fixed order calculation and the resummed structure in ϵ leads to the solution of C_1(ϵ) and C_2(ϵ) in ϵ expansion. Since x_L^-1-ϵ and x_L^-1-2ϵ are defined with plus distribution similar to Eq. (<ref>), now we obtain the correct (ϵ^0) formula for σ^[3]_E^2EC(x_L,ϵ)/ x_L in the collinear limit. The last remaining piece is σ^[3]_E^3C(x_L,ϵ)/ x_L. The computation of the self-energy correlator is much easier since its dependence on x_L is factorized out by δ(x_L) and the integrals are simply standard cut integrals. The master integrals can be found in the literature, e.g. <cit.>. Eventually adding σ^[3]_E^2EC(x_L)/ x_L and σ^[3]_E^3C(x_L,ϵ)/ x_L together, we obtain the complete contact terms σ^[3]_C(x_L,ϵ)/ x_L for E3C distribution. The results are also summarized in Eq. (<ref>)-(<ref>). Combined with the nonidentical energy weight contributions, we find all 1/ϵ canceled and thus the infrared safety is guaranteed as expected. 
§.§.§ Results of two-loop jet function constants With all individual contributions at hand, the full expressions of 2-loop E3Cs in the collinear limit can be written as 1/σ_0σ^[3],2-loop_q/ x_L= 2 J^nonid,2-loop_q/ x_L+1/σ_0σ^[3],2-loop_C,q/ x_L (e^+e^- annihilation) , 1/σ^'_0σ^[3],2-loop_g/ x_L= 2 J^nonid,2-loop_g/ x_L+1/σ^'_0σ^[3],2-loop_C,g/ x_L (gluonic Higgs decay) . Here a factor of 2 is added because we only consider a single jet in Sec. <ref>. Given the tree-level hard functions, {H^(0)_q,H^(0)_g}={2δ(1-x),0} for e^+e^- annihilation and {H̃^(0)_q,H̃^(0)_g}={0,2δ(1-x)} for the Higgs decay through the effective Hgg coupling, we can extract the two-loop jet constant directly from the δ(x_L) contribution from Eq. (<ref>) and Eq. (<ref>). We find that the μ dependence are in full agreement with prediction from RG evolution, providing strong check to our calculation. The μ independent part are the new results from this calculation. For the quark jet function, we get j_2^q,[3]=12.3020 C_F T_F n_f-26.2764 C_A C_F +21.3943 C_F^2 , and for gluon jet functions j_2^g,[3]=17.5487 C_A T_F n_f -2.05342 C_F T_F n_f -5.97991 C_A^2+0.904693 n_f^2 T_F^2 . §.§ Perturbative resummation We start by defining the logarithmic order for our E3C resummation. The ingredients needed for our E3C resummation are summarized in Table <ref>. This includes the order of timelike splitting kernel P̂(y), the boundary information (hard and jet constants), the β function for running coupling as well as the fixed-order matching.[This is the same log counting as N^kLL^' in SCET, except that we omit all ^' for convenience.] Due to the absent of analytic method to solve the RG equation exactly, we also truncate in the number of loops of the RGE solution to the desired logarithmic order <cit.>. We first review the LL resummation in e^+e^- annihilation. Based on our resummation setting, it is safe to set x=1 in the argument of E3C jet function in Eq. (<ref>), which only affects the higher-order terms beyond LL. This leads to dJ⃗^[N]_LL (lnx_LQ^2/μ^2)/dlnμ^2=J⃗^[N]_LL (lnx_LQ^2/μ^2)·α_s/4π∫_0^1 dy y^N P̂^(0)(y)=-J⃗^[N]_LL (lnx_LQ^2/μ^2)·α_s/4πγ_T^(0)(N+1) . Here, we introduce the anomalous dimension to be the moment of timelike splitting kernel γ_T(N)≡ -∫_0^1 dy y^N P̂(y)=(α_s/4π)γ_T^(0)+(α_s/4π)^2 γ_T^(1)+⋯ . Then given the boundary condition J⃗^(0)={2^-N,2^-N}, we can directly write down the solution to LL jet function: J⃗_LL^[N]=2^-N(1,1)·exp[-γ_T^(0)/β_0lnα_s(√(x_L)Q)/α_s(μ)] . Plugging both jet and hard functions into the factorization for the cumulant Σ^[N] and differentiating it with respect to x_L, we obtain the LL resummed physical spectrum for E3C. Beyond LL, the x=1 approximation is no longer valid, and instead we have to solve the jet RGE directly. While it is difficult to obtain a close-form solution for this modified DGLAP equation, we find that a truncated solution in α_s is already in good convergence. Explicitly, we assume the jet function takes the form J⃗^[N]=∑_i=1^∞α_s^i L^i c⃗_i,i_LL+∑_i=1^∞α_s^i L^i-1c⃗_i,i-1_NLL+∑_i=1^∞α_s^i L^i-2c⃗_i,i-2_NNLL+⋯ , with L≡lnx_L Q^2/μ^2 and c_i,j unknown constants, and solve both the jet RGE and β RGE order by order in α_s (which is referred as expanded solution). In practice, we evaluate it numerically up to 𝒪(α_s^50). Another advantage of using expanded solution is that we only need certain moments of the hard functions. For example, consider one term from the jet function, J⃗^[N]⊃α_s^2 c⃗_2,2 L^2, and plug into Eq. 
(<ref>), we find Σ^[N]⊃α_s^2 c⃗_2,2·∫_0^1 dx x^N ln^2 (x_L x^2 Q^2/μ^2)·H⃗_ee(x, lnQ^2/μ^2) =α_s^2 c⃗_2,2·[ln^2 (x_L Q^2/μ^2)^2 ∫_0^1 dx x^N H⃗_ee(x, lnQ^2/μ^2) +2 ln(x_L Q^2/μ^2) ∫_0^1dx ln x^2 x^N H⃗_ee(x, lnQ^2/μ^2)+ ∫_0^1dx ln^2 x^2 x^N H⃗_ee(x, lnQ^2/μ^2)] =α_s^2 c⃗_2,2·[ ln^2 (x_L Q^2/μ^2)+2 ln(x_L Q^2/μ^2) ∂_N+4∂_N^2] ∫_0^1 x^N H⃗_ee(x, lnQ^2/μ^2) , where the three terms correspond to the standard moment, the single logarithmic moment and the double logarithmic moment of the E3C hard function. To derive the last line, we also use the following relation ∫_0^1 ln^k x^2 x^N H⃗_ee(x, lnQ^2/μ^2)=2^k ∂_N^k ∫_0^1 x^N H⃗_ee(x, lnQ^2/μ^2) . In the Appendix <ref>, we provide all the hard moments with N=2,3 that are required for NNLL resummation. In this paper, we present results for the NNLL resummation of E3C for e^+e^- annihilation, and approximate NNLL resummation for jets from the hadronic collision process pp→ jj. For e^+e^- annihilation, we have all ingredients needed for NNLL resummation. And since there is no accurate fixed-order data for E3C at NNLO, we will instead match the NNLL result to NLO. Regarding the dijet production, due to the absence of the two-loop hard constant, we will present the approximate NNLL resummation (which we refer as NNLL_approx), with an additional uncertainty coming from the missing two-loop hard constant. Resummation with the accurate two-loop hard function as well as the matching with fixed-order result are left as future improvements. § NNLL RESUMMATION IN E^+E^- ANNIHILATION With all the ingredients at hand, now we can present the NNLL resummation prediction. In this section, we first consider e^+e^- collision at two different energies: 250 GeV and 1 TeV. In the resummation calculation, we will use α(m_Z)=0.118. §.§ Resummation results Following the discussion in Sec. <ref>, our resummation is performed by perturbatively solving the jet function RG equation to order 𝒪(α_s^50), plugging back to the cumulant factorization and finally truncating the logarithms lnx_L Q^2/μ^2 to the desired order. In the resummation formula, we set canonical jet scale μ_j=μ_h √(x_L) in the factorization, leaving a single hard scale μ_h=μ in the resummed expression. We vary the scale μ to estimate the uncertainty from higher order corrections. Regarding the observables, below we consider three cases: N=2, N=3 and their ratio. The N=2 case is precisely the EEC observable, where we directly use the result from Ref. <cit.>, and the singular expansion has been verified against the NLO EEC fixed-order calculation. For N=3 case, this is the main result of this paper. In Fig. <ref>, we first check our 𝒪(α_s^2) expansion with the Monte Carlo program Event2. In the collinear limit, we find excellent agreement between theory and numeric result, while in the meantime, this also suggests the non-singular contribution from fixed-order calculation is negligible in this limit. Nevertheless, the matching formula can be written as dσ^match/dx_L=dσ^resum/dx_L-dσ^sing/dx_L+dσ^FO/dx_L . Here each term is a function of α_s(μ) evaluated at the hard scale μ_h=μ. In Fig. <ref>, we present the E3C resummation up to NNLL, matched to fixed-order. As explained above, due to the absence of NNLO data, we only match NNLL to NLO. The hard scale is chosen to be half of the center-of-mass energy μ=Q_jet≡ Q/2, the typical energy for each quark jet, and the scale uncertainty is obtained by varying the hard scale by a factor of 2. 
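To make the order-by-order (expanded) solution of the jet RGE described above more concrete, the following SymPy sketch solves a single-channel toy version of the jet RGE: the scalar anomalous dimensions γ_0, γ_1 and boundary constants j_1, j_2, j_3 stand in for the full 2×2 timelike kernel and jet constants, so this illustrates the truncation procedure rather than reproducing our actual numerics. The leading-logarithmic tower obtained in this way is checked against the closed-form LL solution.

# Toy single-channel version of the expanded solution (illustration only; the real
# computation evolves the coupled (J_q, J_g) system with the full splitting kernels).
import sympy as sp

a, L, g0, g1, b0, b1 = sp.symbols('a_s L gamma0 gamma1 beta0 beta1')
j1, j2, j3 = sp.symbols('j1 j2 j3')      # fixed-order boundary constants (inputs, not solved for)
order = 3

# ansatz J = 1 + sum_i a_s^i ( j_i + sum_{j=1..i} c_{ij} L^j ),  L = ln(x_L Q^2 / mu^2)
c = {(i, j): sp.Symbol(f'c{i}{j}') for i in range(1, order + 1) for j in range(1, i + 1)}
bnd = {1: j1, 2: j2, 3: j3}
J = 1 + sum(a**i * (bnd[i] + sum(c[i, j] * L**j for j in range(1, i + 1)))
            for i in range(1, order + 1))

# RGE  dJ/dln mu^2 = -J (a_s gamma0 + a_s^2 gamma1); note dL/dln mu^2 = -1, and the
# running coupling obeys da_s/dln mu^2 = -beta0 a_s^2 - beta1 a_s^3 (truncated).
rge = sp.expand(-sp.diff(J, L) + sp.diff(J, a) * (-b0 * a**2 - b1 * a**3)
                + J * (g0 * a + g1 * a**2))
eqs = [rge.coeff(a, i).coeff(L, j) for i in range(1, order + 1) for j in range(0, i)]
sol = sp.solve(eqs, list(c.values()), dict=True)[0]

# the leading-log tower c_{ii} must reproduce the closed-form LL solution
# (1 + beta0 a_s L)^(gamma0/beta0) = exp[-(gamma0/beta0) ln(a_s(sqrt(x_L) Q)/a_s(mu))]
closed = sp.expand(sp.series((1 + b0 * a * L)**(g0 / b0), a, 0, order + 1).removeO())
print(all(sp.simplify(sol[c[i, i]] - closed.coeff(a, i).coeff(L, i)) == 0
          for i in range(1, order + 1)))          # True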
At both energies, the width of the uncertainty band decreases as we increase the resummation order, while at 1 TeV we have a tighter band because the coupling α_s runs more slowly at high energy. At NNLL, we find a relative 4% hard uncertainty for Q=250 GeV and 2% for Q=1 TeV. We find large corrections as we go from LL to NNLL, as was also observed previously in <cit.>, which emphasizes the importance of higher-order corrections. For higher center-of-mass energies, the convergence between different orders is improved. To improve the convergence, we also introduce the ratio of different-point energy correlators, namely <cit.> Δ_m,n(x_L,μ, μ^')≡dσ^[m]/d x_L/dσ^[n]/d x_L, m,n≥ 2 , where μ and μ^' are the hard scales in dσ^[m]/d x_L and dσ^[n]/d x_L, respectively. In particular, we focus on the ratio between the fully matched E3C and EEC, i.e. Δ_3,2(x_L). In Fig. <ref>, we show the NNLL resummed Δ_3,2(x_L), again at Q=250 GeV and Q=1 TeV, and find good convergence. This implies that the ratio can be used as a precision observable. For the hard-scale uncertainty, we use the seven-point scale variation, which amounts to varying the scales in both the numerator and the denominator independently by a factor of 2, to a combination of (μ/Q_jet,μ^'/Q_jet)∈{(1/2,1/2), (2,2 ), (1, 2), (1, 1), (2,1), (1,1/2), (1/2,1) } , and taking the envelope as the uncertainty estimate. The good convergence also indicates that the different energy correlators share similar non-perturbative behavior in the collinear limit and that taking the ratio strongly suppresses the power corrections. §.§ Hadronization corrections In this subsection, we consider the power-suppressed hadronization corrections in the collinear limit. At present, hadronization corrections cannot be computed from first principles. For simplicity, we use a phenomenological form for the leading non-perturbative power correction as suggested in <cit.>, and fit the unknown parameters from a Monte Carlo program. This provides some insight into how to model the hadronization effect for a global fit in the future. In general, the non-perturbative corrections to infrared-collinear safe observables are (at least) suppressed as Λ_ QCD/Q to some power, where Q is the hard scale of the process. Following from the LL result in Eq. (<ref>), we observe that in the collinear limit there exists a lower scale √(x_L)Q in the coupling, and the most important non-perturbative correction that could potentially appear is linear in Λ_ QCD and takes the form Λ_ QCD/(√(x_L)Q), multiplied by an extra kinematic factor 1/x_L. The sub-leading non-perturbative corrections with additional powers of Λ_ QCD/(√(x_L)Q) will become necessary down to small x_L ∼Λ_ QCD^2/Q^2, where perturbation theory also breaks down. For the leading non-perturbative correction we are considering, such a structure is in fact recovered for the EEC both in the fragmentation modeling of non-perturbative radiation <cit.> and in analyses using renormalon or dispersive techniques <cit.>. As a qualitative analysis, we use the following parametrization of the leading non-perturbative correction, dσ^ NP-soft/d x_L = 1/x_L·( Λ̃/√(x_L) Q )^1+γ ( soft fragmentation ). We verify this scaling behaviour of the non-perturbative correction in the collinear limit for both the EEC and E3C distributions with Pythia8 <cit.>, and extract the non-perturbative parameters by fitting the difference between the hadron-level and parton-level predictions. Note that the issues of extracting non-perturbative power corrections from Monte Carlo generators have been pointed out in Ref. <cit.>.
In particular, the corrections from the hadronization modeling in the Monte Carlo programs in fact unfaithfully absorb partial subleading-log contributions, as the hadronization modeling has been tuned to reproduce some collider data with limited perturbative accuracy. Therefore, in this paper we only use the Monte Carlo to illustrate the impact of the power correction on the individual EEC and E3C distributions as well as on their ratio. For our case, we stay with the default settings of Pythia8 and obtain the following fits at the 95% confidence level. At Q=250 GeV, we find for EEC and E3C: Λ̃_2 = (0.956 ± 0.031) GeV , γ_2 = 0.462 ± 0.017 , Λ̃_3 = (0.500 ± 0.040) GeV , γ_3 = 0.335 ± 0.031 . And in the case with Q=1 TeV, we have Λ̃_2 = (0.775 ± 0.013) GeV , γ_2 = 0.383 ± 0.008 , Λ̃_3 = (0.435 ± 0.015) GeV , γ_3 = 0.325 ± 0.012 . We emphasize that for too small x_L values, the leading-order non-perturbative approximation itself becomes invalid. The enhancement of the non-perturbative corrections in the collinear limit must be turned off before entering the fully non-perturbative phase, where the degrees of freedom become freely interacting hadrons and a nice scaling behavior follows <cit.>. In this qualitative analysis, we choose the lower bound of the fit range by finding the extreme point of the hadron-level distributions predicted by Pythia8. Multiplying the extreme point by a factor of 2 gives a good estimate of the lower bound of the range where the non-perturbative correction follows the described scaling behavior. In Fig. <ref>, we show the relative hadronization correction from both Pythia8 and our two-parameter fit. Except for the shaded region, our parametrization agrees with the Monte Carlo result and is sufficient for understanding its structure. In Fig. <ref>, we include the non-perturbative correction in the matched E3C resummation, which strongly enhances the distribution in the extreme collinear limit. At Q=1 TeV, the non-perturbative correction changes our NNLL+NLO prediction by only a few percent at x_L∼ 0.1, while this modification reaches 50% at x_L∼ 10^-4. This shows that the non-perturbative corrections for energy correlators, though power suppressed at high energies, can become sizable even at the energies of future e^+e^- colliders. However, since EEC and E3C share a similar power law in the leading power correction, the enhancement is significantly canceled when considering their ratio Δ_3,2(x_L). As shown in Fig. <ref>, the leading non-perturbative correction only gives rise to a roughly 4% effect at Q=250 GeV and a 2% effect at Q=1 TeV for the matched NNLL. This confirms that Δ_3,2(x_L) is insensitive to hadronization and is indeed a good candidate for a precise α_s measurement. We also investigate the impact of the uncertainties of the two-parameter fit on the final resummation results. The statistical errors for both Λ̃ and γ are given in Eqs. (<ref>) and (<ref>). Fig. <ref> shows the final uncertainty in the matched NNLL distribution from varying these two NP parameters. At both Q=250 GeV and Q=1 TeV, excluding the shaded region, the NP uncertainty is much smaller than the hard uncertainty estimated by the seven-point variation. In particular, at Q=1 TeV, the NP uncertainty is reduced to 1% in the potential fit region. Despite that, we note that the effect of the non-perturbative corrections tends to increase in the small-x_L region, and a more accurate understanding of the non-perturbative corrections will be required to further improve the precision.
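For orientation, the short sketch below evaluates the fitted leading-power form at the central Q=1 TeV values quoted above. It only illustrates how fast the modeled correction grows toward small x_L; the normalization relative to the perturbative spectrum is not reproduced here (we simply assume a spectrum that scales like 1/x_L for this comparison), so the numbers are indicative rather than quantitative.

# Sketch: the fitted leading non-perturbative (NP) form
#   d sigma^NP / d x_L = (1/x_L) * (Lambda / (sqrt(x_L) * Q))**(1 + gamma)
# evaluated with the central Q = 1 TeV fit values quoted above.  Assuming a
# perturbative spectrum that scales roughly like 1/x_L, the *relative* correction
# scales like x_L**(-(1+gamma)/2); only this growth is illustrated here.
Q = 1000.0                                              # GeV
fits = {"EEC (N=2)": (0.775, 0.383),                    # (Lambda [GeV], gamma)
        "E3C (N=3)": (0.435, 0.325)}

def np_correction(x_L, lam, gam, Q):
    return (1.0 / x_L) * (lam / (x_L**0.5 * Q))**(1.0 + gam)

for name, (lam, gam) in fits.items():
    hi, lo = 0.1, 1.0e-4
    growth = (lo * np_correction(lo, lam, gam, Q)) / (hi * np_correction(hi, lam, gam, Q))
    print(f"{name}: modeled relative NP correction grows by a factor ~{growth:.0f} "
          f"between x_L = {hi} and x_L = {lo}")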
§.§ Anticipation of α_s determination In this subsection, we discuss the potential of extracting the strong coupling constant α_s from measuring the resummed E3C/EEC ratio Δ_3,2(x_L). In the literature <cit.>, the back-to-back limit of EEC has been resummed to NNLL+NLO and used for α_s measurements from e^+e^- data. As for other event shapes, the non-perturbative correction is significantly large in this region and requires careful modeling, and how the resummation and the power correction are profiled has a sizable effect on the final theory uncertainty. Alternatively, we can also perform the α_s measurement in the collinear limit only. First of all, as discussed in Sec. <ref>, the non-singular contribution is almost zero in this limit, and thus it is safe to ignore the higher fixed-order contribution. Secondly, by considering the ratio distribution Δ_m,n(x_L), the suppressed power corrections lead to a smaller theory uncertainty and thus a more precise α_s determination. As an illustration, we first investigate the sensitivity of Δ_3,2(x_L) when slightly changing the value of α_s. In particular, we vary the value of the strong coupling at the Z-pole, α_s(m_Z), by 5%, namely α_s(m_Z)={0.112, 0.118, 0.124}, and compare the effect on the matched resummation result. We first consider the NNLL+NLO Δ_3,2(x_L) at Q=91.2 GeV with all three values of α_s(m_Z). As observed in Fig. <ref>, the slope becomes sensitive to α_s in the collinear region x_L=10^-3∼ 10^-4, while the relative difference with respect to α_s(m_Z)=0.118 ranges from 10% to 20%. The slope sensitivity and the cancellation of the hadronization correction make the ratio of E3C and EEC, Δ_3,2(x_L), an advantageous observable for extracting α_s from e^+e^- annihilation. Similar behaviors also exist at other energies, and for completeness we present the comparison at Q=250 GeV and Q=1 TeV in Fig. <ref>. The fact that the resummed E3C/EEC ratio has a larger sensitivity to α_s and reduced non-perturbative corrections in the collinear limit makes it a promising candidate for the α_s determination. Further improving the α_s determination requires improving the resummation accuracy, matching with the NNLO fixed-order correction, as well as refining the non-perturbative modeling. § APPROXIMATE NNLL RESUMMATION IN PP COLLISIONS In this section, we consider dijet production pp→ jj at the LHC. There are several motivations to study energy correlators in pp collisions. First of all, the LHC provides unique opportunities to study energy-flow correlations in QCD at extremely high energy. While LEP or a future CEPC provides a very clean environment for precise measurements, pp collisions at the LHC can produce multiple jets with very high energies (p_T ≳ 500 GeV), and a high angular resolution can be achieved to probe the underlying dynamics of their formation and evolution. Secondly, as we have observed in e^+e^- collisions, the non-perturbative corrections for ENC have a relatively simple form compared to other event-shape observables (at least at leading power), which may make non-perturbative QCD easier to study. At the same time, with multiple scales involved, pp collisions can provide robust data from high to low energies, which is beneficial for understanding non-perturbative effects. In this section, we still focus on improving the perturbative predictions for ENC. As in Sec. <ref>, the jet functions are universal across different hard processes, and the new ingredients are the moments of the pp hard function, both regular and logarithmic.
The main complication for pp collisions is that the hard function now involves convolutions with PDFs and an algorithmic jet definition, allowing only a numerical calculation of the hard function. For the numerical calculation of the hard function, we adopt the anti-k_t jet algorithm and choose the jet radius to be R_0=0.4. The complete kinematic cuts are summarized in Eqs. (<ref>)-(<ref>). The μ-independent part of the NLO hard function is presented in Appendix <ref>. We observe large corrections going from LO to NLO. The μ-dependent part of the NNLO hard function can be derived using the RG equation in (<ref>). The μ-independent part requires a genuine two-loop calculation and is beyond the scope of this work. Instead, we make a simple estimate of the two-loop constant terms, and dub the resulting prediction approximate NNLL resummation (NNLL_approx). Specifically, we use a modified Padé approximation to estimate the two-loop hard function constants in both the quark channel and the gluon channel: a_s^2 h_0^(2)≈κ(a_s h_0^(1))^2/h_0^(0) , where we vary κ in the range [0, 1/2] as a naive way to estimate our theory uncertainty from the missing two-loop constants. For the splitting function, the β function, as well as the jet functions, we use the ones required by NNLL accuracy as shown in Table <ref>. In Fig. <ref>, we show the E3C/EEC ratio Δ_3,2(R_L) up to NNLL_approx, with the hard uncertainty estimated by the seven-point variation. Due to the lack of knowledge of the genuine two-loop hard function moment, we have chosen to normalize the E3C/EEC distribution in the range R_L ∈ [0.01,0.4] to reduce the impact of not knowing the full two-loop hard function. We find good convergence for both p_t ranges: [300,350] GeV and [500,550] GeV. In the future, it would be interesting to compute the two-loop hard function, as well as to match the resummed results to fixed order to improve the prediction around R_L ∼ R_0. §.§ Anticipation of α_s determination Similar to e^+e^- annihilation, in this subsection we discuss the potential of extracting the strong coupling constant α_s from the resummed Δ_3,2(R_L) distribution in pp→ jj. In particular, we also investigate the slope sensitivity of the distribution with respect to different values of α_s. For hadron colliders, we need to change the PDFs as we vary the strong coupling over α_s(m_Z)=0.118± 0.006. For this purpose, we use three PDF sets, one for each value of α_s(m_Z), when calculating the hard function using the method in <cit.>. As shown in Fig. <ref>, for each p_t range, the uncertainty is significantly reduced from NLL to NNLL_approx, leading to distinguishable slopes with respect to different α_s. This suggests that ratios of energy correlators are good candidates for extracting α_s. We note that there is a larger slope variation for lower jet p_t, in agreement with the expectation that a measurement at lower energy is more sensitive to α_s due to the asymptotically free nature of QCD. § CONCLUSION In this paper we have performed a systematic study of the resummation of the projected three-point energy correlator E3C <cit.>, and of its ratio to EEC, at both e^+e^- and pp colliders. We have achieved NNLL accuracy for the first time in the e^+e^- case, and NNLL_approx accuracy in the pp case. Our results show that good perturbative convergence can be achieved for the ratios of projected energy correlators. The current theoretical uncertainties are at the level of a few percent, and can be further improved in the future when the higher-order ingredients become available.
We have also shown that the ratio observable is sensitive to the variation of α_s, and therefore provides a good candidate for a precision α_s determination using jet substructure. To achieve the above theory accuracy, one of the main new ingredients is the two-loop E3C jet function computed in this work. The calculation includes three pieces: double-real, real-virtual and double-virtual. The last two contributions only involve a single δ measurement function in the phase-space integral and share a similar form with the analytic EEC calculation at NLO <cit.>. Regarding the double-real emissions, which amount to integrating the fully-differential EEEC distribution over the collinear kinematic space, we used two different approaches and found the same results. The first method is to subtract the infrared divergence in the collinear EEEC jet function, integrate it separately over the d-dimensional kinematic space, and expand the finite terms in ϵ. The second approach benefits from the recently developed parametric IBP, where we can also simplify the integrand with IBP reduction and calculate the integrals via differential equations. Regarding the ENC resummation, for e^+e^- annihilation we solve the E3C jet RGE (which is a modified DGLAP equation) order by order in α_s with the two-loop boundary, and push the resummation up to NNLL. For pp collisions, we calculate the combined hard function moments for dijet production using the method in <cit.>. We present the complete NLL and the approximate NNLL resummation results, where the approximation is due to the missing genuine two-loop hard function constant. The uncertainty is reduced compared with the previous results <cit.>. For the fixed-order matching, we notice that the singular contribution dominates the collinear limit and the non-singular contribution from the matching has only small effects in the e^+e^- case. Nevertheless, we perform the matching for e^+e^-, given that the fixed-order result is already available, but leave the matching with fixed order in the pp case for future study. For a complete phenomenological analysis and a precise α_s extraction at hadron colliders, several ingredients are still needed in the future. Perturbatively, we need to compute both the two-loop hard function and the NLO non-singular distribution for pp→ jj in order to achieve full NNLL accuracy. Moreover, it would be interesting to solve the RG equation exactly following <cit.>, and to compare the results with the truncation method. At the same time, for both e^+e^- and pp, it would be interesting to better understand the hadronization power corrections to help further reduce the theoretical uncertainties. We hope that all these efforts can lead to a precision determination of α_s from jet substructure in the future. The authors thank Hao Chen, Kyle Lee, Meng Xiao, Tong-Zhi Yang, and Yulei Ye for useful discussions. XYZ also thanks the MIT CTP for its hospitality while part of this work was performed. The work of WC, YL, ZX, and HXZ was supported by the National Natural Science Foundation of China under Grant No. 11975200. The work of JG was sponsored by the National Natural Science Foundation of China under Grants No. 12275173 and No. 11835005. § HARD AND JET FUNCTIONS §.§ e^+e^- Hard function The ENC hard function for e^+e^- can be obtained from the semi-inclusive hadron fragmentation function.
At NNLL, following our resummation procedure, we need the regular up to two-loop, single logarithmic up to one-loop and the double logarithmic moments at tree level with respect to the energy fraction x: ∫_0^1 dx x^N H_q,g(x,μ=Q) = ∑_L=0^∞( α_s/4π)^L h_L^q,g(N) , ∫_0^1 dx x^N ln x H_q,g(x,μ=Q) = ∑_L=1^∞( α_s/4π)^Lḣ_L^q,g(N) , ∫_0^1 dx x^N ln^2 x H_q,g(x,μ=Q) = ∑_L=1^∞( α_s/4π)^Lḧ_L^q,g(N) . For EEC (N=2), we have h_0^q = 2 , h_0^g = 0 , h_1^q = 131/4 C_F , h_1^g = - 71/12 C_F , h_2^q = ( 64 ζ_4 - 1172/3ζ_3 - 166 ζ_2 + 2386397/2592) C_A C_F + ( - 128 ζ_4 + 1016/3ζ_3 + 1751/18ζ_2 - 1105289/5184) C_F^2 + ( 32 ζ_3 + 118/15ζ_2 - 8530817/54000) C_F T_F n_f , h_2^g = ( - 76/3ζ_3 + 188/45ζ_2 - 29802739/324000) C_A C_F + ( 124/3ζ_3 + 523/18ζ_2 - 674045/5184) C_F^2 , ḣ_0^q =0 , ḣ_1^q = ( 40 ζ_3 + 61/3ζ_2 - 5303/72) C_F , ḣ_0^g=0 , ḣ_1^g = ( - 7/3ζ_2 + 31/4) C_F , ḧ_0^q =0 , ḧ_0^g=0 . Note that the EEC hard moments are also summarized in the appendix of Ref. <cit.>). However, the normalization condition in <cit.> is different from ours, due to the scaled energy E_i/(Q/2) there in contrast with E_i/Q here in the definition of the jet function. For E3C (N=3), we find h_0^q = 2, h_0^g=0, h_1^q=11909/300C_F, h_1^g=-547/150C_F , h_2^q = (-942/5ζ_3-17/45ζ_2+17147309/32400) C_A C_F + (32ζ_3+322/25ζ_2-6169957/30000)C_F n_f T_F + (-2012/15ζ_3-8987/30ζ _2+3256506739/3240000)C_F^2 , h_2^g = (52/5ζ_3+4396/225ζ _2-101763773/810000)C_A C_F+ (392/15ζ_3+397/15ζ_2-163115357/1620000)C_F^2 , ḣ_0^q =0 , ḣ_1^q= (40ζ_3+337/15ζ_2-709693/9000)C_F , ḣ_0^g =0 , ḣ_1^g=(-22/15ζ_2+16739/4500) C_F , ḧ_0^q =0 , ḧ_0^g=0 . For completeness, we also provide the E3C (N=3) hard moments for the gluonic Higgs decay, which is needed for extracting the two-loop gluon jet constants. Here we use h̃ to distinguish from the e^+e^- case. h̃_0^q = 0, h̃_0^g=2, h̃_1^q=-2461/450 n_f T_F, h̃_1^g=11491/150C_A-494/45n_f T_F , h̃_2^q = n_f T_F [C_A (88/3ζ_3+3428/75ζ_2-219509243/810000)+(1727/225ζ_2-187858397/1620000) C_F] +(-352/45ζ_2+7224/125) n_f^2 T_F^2 , h̃_2^g = n_f T_F [C_A (-208/3ζ_3+1264/15ζ_2-38190113/40500) +C_F (96 ζ_3-242/225ζ_2-113165189/810000)] +C_A^2 (-388ζ_3-31684/75ζ _2+837482633/270000) +n_f^2 T_F^2(-64/9ζ_2+44252/675) , ḣ̃̇_0^q =0 , ḣ̃̇_1^q= n_f T_F(-22/15ζ_2+404/125) , ḣ̃̇_0^g =0 , ḣ̃̇_1^g= C_A (40 ζ_3+346/15ζ_2-2134817/27000)+(-8/3ζ _2+5369/1350) n_f T_F , ḧ̃̈_0^q =0 , ḧ̃̈_0^g=0 . §.§ pp → jj Hard function The following table gives the hard function moments for pp→ jj calculated in Madgraph5 in two different p_t ranges: [300,350] GeV and [500,550] GeV, needed for the resummation of both EEC (N=2) and E3C (N=3). [-2.5ex] 7||c||pp → jj at 13 TeV, with [1.2ex] [-2.5ex] (300,350) GeV h_0^q h_0^g a_s h_1^q a_s h_1^g a_s ḣ_1^q a_s ḣ_1^g [0.8ex] N=2 0.3571 0.6429 0.1003 0.3304 0.0546 0.2149 N=3 0.3571 0.6429 0.1463 0.4996 0.0393 0.1379 [-2.5ex] (500,550) GeV h_0^q h_0^g a_s h_1^q a_s h_1^g a_s ḣ_1^q a_s ḣ_1^g [0.8ex] N=2 0.4417 0.5583 0.1337 0.2473 0.0568 0.1816 N=3 0.4417 0.5583 0.1820 0.3894 0.0417 0.1150 tableValues for hard function moments in pp collision for different p_t ranges. The NLO corrections turn out to be significant. As one of the important checks of our calculation, we show in Fig. <ref> the independence of the slicing parameter δ_ cut when evaluating the hard function moments using the method in <cit.>. The values of the moments are in agreement within the numeric uncertainty for three values of δ_ cut across two orders of magnitude, namely δ_ cut∈{0.003, 0.03, 0.3}. 
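The logarithmic moments listed above are tied to the regular moment by the relation ∫_0^1 dx ln^k(x^2) x^N H(x) = 2^k ∂_N^k ∫_0^1 dx x^N H(x) used in the resummation section. As a quick numerical illustration (the physical moments themselves are the analytic and Monte Carlo numbers collected here), the following sketch checks this identity for an arbitrary smooth stand-in for the hard function.

# Check of the logarithmic-moment identity for a toy "hard function";
# the derivative with respect to N is taken by central finite differences.
import math
from scipy.integrate import quad

H_toy = lambda x: x * (1.0 - x)**2                 # arbitrary smooth stand-in for H(x)
moment = lambda N: quad(lambda x: x**N * H_toy(x), 0.0, 1.0)[0]

N, h = 3.0, 1.0e-3
lhs1 = quad(lambda x: math.log(x**2) * x**N * H_toy(x), 0.0, 1.0)[0]
rhs1 = 2.0 * (moment(N + h) - moment(N - h)) / (2.0 * h)               # 2 * dM/dN

lhs2 = quad(lambda x: math.log(x**2)**2 * x**N * H_toy(x), 0.0, 1.0)[0]
rhs2 = 4.0 * (moment(N + h) - 2.0 * moment(N) + moment(N - h)) / h**2  # 2^2 * d^2M/dN^2

print(lhs1, rhs1)    # equal up to the finite-difference error
print(lhs2, rhs2)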
§.§ Jet function For ENC, solving the jet function RGE requires the regular anomalous dimensions and their derivatives, and at NNLL, similar to hard function, we need the regular terms up to two-loop, the first derivative up to one-loop as well as the second derivative at tree-level. The QCD timelike splitting function is expanded in α_s/4π P_ij(x)=∑_L=0^∞(α_s/4π)^L+1 P_ij^(L)(x) , and the anomalous dimension for ENC is defined to be the (N+1) Mellin moment of the splitting function. Explicitly, γ_T,ij^(L) ≡ -∫_0^1 x x^N P_ij^(L)(x) , γ̇_T,ij^(L) ≡ -∫_0^1 x ln x x^N P_ij^(L)(x) , γ̈_T,ij^(L) ≡ -∫_0^1 x ln^2 x x^N P_ij^(L)(x) . Here the dot also represents the derivative with respect to N. Note that {i,j}={q,g} and the anomalous dimension is a 2× 2 matrix. The results for EEC (N=2) are derived and summarized in the appendix of Ref. <cit.>, so here we provide the expressions for E3C (N=3). At LO, we find γ_T,qq^(0) = 157/30 C_F , γ_T,gq^(0)= -11/15 C_F , γ_T,qg^(0) = 11/30 n_f , γ_T,gg^(0) = 21/5 C_A + 2/3 n_f , γ̇_T,qq^(0) = ( 4ζ_2 - 10169/1800) C_F , γ̇_T,gq^(0)= 247/900 C_F , γ̇_T,qg^(0) = 137/1800 n_f , γ̇_T,gg^(0) = ( 4ζ_2 - 2453/450) C_A , γ̈_T,qq^(0) = ( - 8ζ_3 + 507103/54000) C_F , γ̈_T,gq^(0)= - 5489/27000 C_F , γ̈_T,qg^(0)= - 1919/54000 n_f , γ̈_T,gg^(0) = ( - 8ζ_3 + 124511/13500) C_A , and at NLO, we have γ_T,qq^(1) = ( -628/15+2905763/54000) C_F^2 + 16157/675 C_A C_F - 13427/3000 C_F n_f , γ_T,gq^(1) = (88/15ζ_2-104389/27000) C_F^2 -142591/13500 C_A C_F , γ_T,qg^(1) = (44/15ζ_2 -60391/27000) C_A n_f - 166729/54000 C_F n_f - 6/25 n_f^2 , γ_T,gg^(1) = ( -168/5ζ_2+90047/1500) C_A^2 + (- 16/3ζ_2 +2273/1350) C_A n_f + 57287/27000 C_F n_f , γ̇_T,qq^(1) = (-120ζ_4+ 422/3ζ_3+10169/150ζ_2-162656941/1080000) C_F^2 + (20ζ_4-1181/15ζ_3+268/9ζ_2+992579/36000) C_F C_A + (16/3ζ_3-40/9ζ_2-433757/1620000) C_F n_f , γ̇_T,gq^(1) = (-286/15ζ_3+1034/225ζ_2+15207541/810000) C_F C_A + (44/5ζ_3-71/9ζ_2+235643/540000) C_F^2 , γ̇_T,qg^(1) = (11/5ζ_3-25/18ζ_2-1490669/1620000) C_A n_f + (-22/3ζ_3+217/225ζ_2+8521133/1080000) C_F n_f + (-22/45ζ_2+10121/13500) n_f^2 , γ̇_T,gg^(1) = (-100ζ_4+772/15ζ_3+21418/225ζ_2-42705619/405000) C_A^2 + (32/3ζ_3-40/9ζ_2-21958/3375) C_A n_f - 59659/540000 C_F n_f , as well as NNLO: γ_T,qq^(2) = ( 1439/75ζ_3+ 136066373/972000) C_F C_A^2 + ( 628/3ζ_4+ 172466/225ζ_3- 113212/225ζ_2-443247883/9720000) C_F^2 C_A +( 1256 ζ_4-14936/15ζ_3-2251148/3375ζ_2+ 47976425617/48600000) C_F^3 +(-2126/45ζ_3 + 8492/3375ζ_2-57923471/4050000) C_A C_F n_f +(-2656/225ζ_3+88163/1125ζ_2-638186993/8100000) C_F^2 n_f -19711/18000 C_F n_f^2 , γ_T,gq^(2) = (6448/75ζ_3-10898/375ζ_2-2010250477/12150000) C_F C_A^2 +(88/3ζ_4-31346/225ζ_3+234407/1125ζ_2-1694499413/24300000) C_F^2 C_A + (-176 ζ_4 +1796/15ζ_3+79268/3375ζ_2-1061823161/24300000) C_F^3 + (704/45ζ_3-3736/675ζ_2+2334509/405000)C_A C_F n_f+ (-88/45ζ_3+152/225ζ_2-14837573/4050000) C_F^2 n_f , γ_T,qg^(2) = (-220/3ζ_4+1004/225ζ_3+323629/6750ζ_2-140682763/6075000) C_A^2 n_f +(6503/225ζ_3+19387/1125ζ_2-509985949/24300000) C_A C_F n_f +(622/225ζ_3+79361/6750ζ_2-2412861131/48600000) C_F^2 n_f +(176/45ζ_3+389/675ζ_2-51449/9000) C_A n_f^2 + (497/135ζ_2-915539/300000) C_F n_f^2 - 86/375 n_f^3 , γ_T,gg^(2) = (840 ζ_4-3752/25ζ_3-342578/375ζ_2+1069405919/1350000)C_A^3 +(400/3ζ_4-29534/225ζ_3-30316/675ζ_2+129284923/2430000) C_A^2 n_f +(2744/45ζ_3-2158/125ζ_2-188283293/6075000) C_A C_F n_f +(-352/225ζ_3+4037/3375ζ_2+27742123/24300000)C_F^2 n_f +(-64/9ζ_3+160/27ζ_2-71341/27000) C_A n_f^2+(-484/675ζ_2-165553/270000) C_F n_f^2 . 
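To give a feel for the size of these ingredients, the sketch below assembles the LO (N=3) anomalous-dimension matrix from the expressions above and evaluates the LL evolution factor exp[-(γ_T^(0)/β_0) ln(α_s(√(x_L)Q)/α_s(μ))] numerically. The (q,g) ordering of the matrix acting on the row vector (J_q, J_g), the use of one-loop running with n_f=5 and no flavour thresholds, and the chosen values of Q and x_L are simplifying assumptions made only for this illustration.

# Sketch: LO (N=3) timelike anomalous-dimension matrix and the LL evolution factor.
import numpy as np
from scipy.linalg import expm

CF, CA, TF, nf = 4.0 / 3.0, 3.0, 0.5, 5.0
beta0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * TF * nf

# ordering assumed as [[qq, qg], [gq, gg]], multiplied from the left by the row vector (J_q, J_g)
gamma0 = np.array([[157.0 / 30.0 * CF,  11.0 / 30.0 * nf],
                   [-11.0 / 15.0 * CF,  21.0 / 5.0 * CA + 2.0 / 3.0 * nf]])

def alpha_s_1loop(mu, alpha_ref=0.118, mu_ref=91.2):
    # one-loop running, da/dln(mu^2) = -beta0 a^2 with a = alpha_s/(4 pi)
    a_ref = alpha_ref / (4.0 * np.pi)
    return 4.0 * np.pi * a_ref / (1.0 + a_ref * beta0 * np.log(mu**2 / mu_ref**2))

Q, x_L = 91.2, 1.0e-2                          # example hard scale (mu = Q) and x_L
log_ratio = np.log(alpha_s_1loop(np.sqrt(x_L) * Q) / alpha_s_1loop(Q))
J_LL = 2.0**-3 * np.array([1.0, 1.0]) @ expm(-gamma0 / beta0 * log_ratio)
print("gamma_T^(0) =\n", gamma0)
print("LL jet function (J_q, J_g) at x_L = 1e-2:", J_LL)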
§ Β-FUNCTION RGE AND RUNNING COUPLING The well-known QCD β-function is written as dα_s(μ)/dlnμ= β (α_s(μ)), β (α)=- 2 α [ ( α/4 π) β_0 + ( α/4 π)^2 β_1 + ( α/4 π)^3 β_2 + ⋯] , where the coefficient up to three loops are given by <cit.> β_0 =11/3 C_A - 4/3 T_F n_f , β_1 = 34/3 C_A^2 - 20/3 C_A T_F n_f - 4 C_F T_F n_f , β_2 = n_f^2 T_F^2 (158 /27C_A+44 /9C_F)+n_f T_F (2 C_F^2-205 /9C_FC_A-1415 /27C_A^2)+2857 /54C_A^3 , β_3 = 1093/729 n_f^3 +(50065/162 + 6472/81ζ_3) n_f^2 +(-1078361/162 - 6508/27ζ_3 ) n_f + 3564 ζ_3 + 149753/6 . At one-loop, the β-RGE can be solved exactly. At two-loop and beyond, there are different solutions. In terms of L≡lnμ^2/Λ_QCD^2, a expanded solution can be written as: α_s (μ) = 4 π/β_0[ 1/L - β_1/β_0^2 L^2ln L + β_1^2/β_0^4 L^3 (ln^2 L - ln L - 1) + β_2/β_0^3 L^3. . + β_1^3/β_0^6 L^4( - ln^3 L + 5/2ln^2 L + 2 ln L - 1/2) - 3 β_1 β_2/β_0^5 L^4ln L + β_3/2 β_0^4 L^4/] . Here we can obtain the two-loop running coupling for NLL resumation by setting β_2=β_3=0 and three-loop running coupling for NNLL by only β_3=0. Alternatively, one can iteratively solve the RGE order by order in a formal expansion parameter ϵ∼β_n/β_0, with n≥ 1. For NLL, the two-loop running coupling is written as α_s(μ) = α_s(Q)[X+α_s(Q)β_1/4πβ_0ln X]^-1, X≡ 1+α_s(Q)/2πβ_0lnμ/Q , and at three loops for NNLL α_s(μ) = α_s(Q){X+α_s(Q)β_1/4πβ_0ln X+α_s^2(Q)/16π^2[β_2/β_0(1-1/X)+β_1^2/β_0^2(1/X-1+ln X/X)]}^-1 . For the resummation in this paper, we use the iterative solution (the latter one) and set the coupling at Q=91.2 GeV to be the world average value α_s(m_Z)=0.118. § SQUEEZE LIMIT OF EEEC JET FUNCTIONS In this section, we provide the perturbative data for the squeeze limit of the EEEC jet function in Eq. (<ref>), which is needed for E3C jet function calculation. Given the conformal parameterization, x_1=x_L z z̅, x_2=x_L(1-z)(1-z̅), x_3=x_L , the squeeze limits correspond to z→ 0, 1, ∞, related by a 𝕊_3 symmetry. Without loss of generality, we provide the z→ 1 limit for the shapes function up to 𝒪(ϵ^2). In the quark jet, we find for G(z) G_q(z) z→1≈ C_FT_Fn_f( 13/4800 (1-z) (1-z̅) + z-2/1440 (1-z̅)^2+z̅/1440 (1-z)^2 -39 z+1/28800 (1-z)^2 +13/9600 (1-z̅))+C_F C_A( 91/4800 (1-z) (1-z̅) +2-z/2880 (1-z̅)^2-z̅/2880 (1-z)^2 -273 z-293/28800 (1-z)^2+91/9600 (1-z̅))+C_F^2(1/20 (1-z) (z̅-1) -z+z̅-2/40 (z-1) (1-z̅)) , and for F(z): F_q(z) z→1≈ C_FT_Fn_f( 649/28800 (1-z) (1-z̅) -259/43200 (1-z)^2-259/43200 (1-z̅)^2) +C_F C_A( 561/3200 (1-z)(1-z̅)+ 229/86400 (1-z)^2+229/86400 (1-z̅)^2) +C_F^2( 3307/7200 (1-z)(1-z̅)) , as well as the H(z): H_q(z) z→1≈ C_FT_Fn_f( 664193-23400π^2/4320000(1-z) (1-z̅) + 1800 π ^2-53191/1296000 (1-z)^2+1800 π ^2-53191/1296000 (1-z̅)^2) +C_F C_A( 1805867 - 54600 π^2/1440000(1-z)(1-z̅) +45421-1800 π ^2/2592000 (1-z̅)^2-1800 π ^2-45421/2592000 (1-z)^2) +C_F^2 ( 352451 - 10800 π^2/108000 (1-z)(1-z̅)) . Here the red stands for the most singular term, which contributes to 1/ϵ divergence in the E3C jet function calculation. 
For the gluon jet, we also find G_g(z) z→1≈ C_FT_Fn_f( 3/320 (1-z) (1-z̅)+3/640(1-z)+3/640(1-z̅)) +C_A T_F n_f(7/800 (1-z) (1-z̅) +z-2/1440 (1-z̅)^2+z̅/1440 (1-z)^2-63 z-43/14400 (1-z)^2 +7/1600 (1-z̅)) +C_A^2( 49/800(1-z)(1-z̅) +2-z/2880 (1-z̅)^2-z̅/2880 (1-z)^2 -441 z-451/14400 (1-z)^2+49/1600 (1-z̅)) , F_g(z) z→1≈ C_FT_Fn_f( 241/3200 (1-z) (1-z̅))+C_A T_F n_f( 343/4800(1-z)(1-z̅) -259/43200 (1-z)^2 -259/43200 (1-z̅)^2)+C_A^2( 557/960(1-z)(1-z̅) +229/86400 (1-z)^2 +229/86400 (1-z̅)^2) , H_g(z) z→1≈ C_FT_Fn_f( 434309 - 16200 π^2/864000 (1-z) (1-z̅))+C_A T_F n_f( 1033981-37800π^2/2160000(1-z)(1-z̅) +1800 π ^2-53191/1296000 (1-z)^2+1800 π ^2-53191/1296000 (1-z̅)^2)+C_A^2(2999389 - 88200 π^2/720000 (1-z)(1-z̅) -1800 π ^2-45421/2592000 (1-z)^2 -1800 π ^2-45421/2592000 (1-z̅)^2) . § RESULT OF TWO-LOOP E3C JET FUNCTION CALCULATION We list the individual results for the two-loop jet function calculation in Sec. <ref>. As we discussed above, the calculation is reorganized as nonidentical energy weight contribution and contact terms. For the nonidentical energy weight in Sec. <ref>, we find for the quark jet d J^nonid_q/dx_L =(α_s/4π)^2{δ(x_L)f_q(μ,Q,ϵ)+1/x_L[C_F T_F n_f(-13/200ϵ+13/100ln(Q^2x_L/μ^2) -0.44158(3))+C_F^2(-6/5ϵ+12/5ln(Q^2x_L/μ^2)-10.963(1) ) +C_F C_A(-91/200ϵ+91/100ln(Q^2x_L/μ^2) -4.3743(7))]} , with the coefficient of the δ(x_L) being f_q(μ,Q,ϵ) =C_F T_F n_f [13/400ϵ^2+1/ϵ(13/200ln(μ^2/Q^2)+0.22079(2))+0.44158(3)ln(μ^2/Q^2) +13/200ln^2(μ^2/Q^2)+0.5441(8)]+C_F C_A[91/400ϵ^2+1/ϵ(91/200ln(μ^2/Q^2)+2.1871(8)) +4.3743(7)ln(μ^2/Q^2)+91/200ln^2(μ^2/Q^2)+10.483(2)]+C_F^2[24.60(4)+3/5ϵ^2 +1/ϵ(6/5ln(μ^2/Q^2)+5.4815(3))+10.963(1)ln(μ^2/Q^2)+6/5ln^2(μ^2/Q^2)] . The ln(Q^2x_L/μ^2) term is verified by the jet RGE. For a gluon jet, the (α_s^2) contribution is d J^nonid_g/dx_L =(α_s/4π)^2{δ(x_L)f_g(μ,Q,ϵ)+1/x_L[C_F T_F n_f (-9/40ϵ+9/20ln(Q^2x_L/μ^2). .-1.8862(6))+C_A T_F n_f (-21/100ϵ+21/50ln(Q^2x_L/μ^2)-1.5376(9)) +C_A^2(-147/100ϵ+147/50ln(Q^2x_L/μ^2)-14.031(3))]} , with the corresponding coefficient f_g(μ,Q,ϵ) =C_A T_F n_f[21/200ϵ^2+1/ϵ(21/100ln(μ^2/Q^2)+0.7688(5))+1.5376(9) ln(μ^2/Q^2) +21/100ln^2(μ^2/Q^2) +2.350(8) ]+C_F T_F n_f [ 9/80ϵ^2+1/ϵ(9/40ln(μ^2/Q^2)+0.9431(3)) +1.886(3) ln(μ^2/Q^2)+9/40ln^2(μ^2/Q^2)+3.757(1)]+C_A^2[33.188(4)+ 147/200ϵ^2 +1/ϵ(147/100ln(μ^2/Q^2)+7.01569(5))+14.031(3) ln(μ^2/Q^2) +147/100ln^2(μ^2/Q^2) ] . Regarding the contact term in Sec. <ref>, for e^+e^- annihilation, we have the sum of E^2EC and E^3C 1/σ_0σ^[3],2-loop_C,q(x_L,ϵ)/ x_L = (α_s/4π)^2 {δ(x_L) r_q(μ,Q,ϵ) + [1/x_L]_+[C_A C_F (91/100 ϵ+1189/200ln(μ ^2/Q^2) -6 ζ_3+25 π ^2/6-52307/18000) +C_F n_fT_F (13/100 ϵ-31/25ln(μ ^2/Q^2)-14809/2000) +C_F^2 (12/5 ϵ+24/5ln(μ ^2/Q^2)+12 ζ_3-43 π ^2/6+274081/3600)] +[ln (x_L)/x_L]_+ (-1343/200C_A C_F+113/100 C_F n_f T_F+87/80C_F^2) } , with the singular part r_q(μ,Q,ϵ) r_q(μ,Q,ϵ) =C_A C_F [-91/200 ϵ ^2+1/ϵ(-91/100ln(μ^2/Q^2)+3 ζ_3-25 π^2/12+452921/36000)-91/100ln ^2(μ^2/Q^2) +(6 ζ_3+890167/36000-25 π ^2/6) ln(μ ^2/Q^2)-347 ζ_3/2+7 π ^4/20-6697 π ^2/1800+47220317/270000] +C_F n_f T_F [-13/200 ϵ^2 +1/ϵ(-13/100ln(μ ^2/Q^2)-5299/12000)-13/100ln^2(μ ^2/Q^2) -4349/6000ln(μ ^2/Q^2)+4 ζ_3+137 π ^2/400-1413979/720000]+C_F^2 [-6/5 ϵ ^2 +1/ϵ(-12/5ln(μ ^2/Q^2)-6 ζ_3+43 π ^2/12-281641/7200)-12/5ln^2(μ ^2/Q^2) +(-12 ζ_3-281641/3600+43 π^2/6) ln(μ ^2/Q^2)+293 ζ_3-7π ^4/10+15371π^2/1440-380074411/864000] . 
Similarly, in the gluonic Higgs decay, we get 1/σ^'_0σ^[3],2-loop_C,g(x_L,ϵ)/ x_L = λ(μ)(α_s/4π)^2{δ(x_L) r_g(μ,Q,ϵ) + [1/x_L]_+ { n_f^2 T_F^2 (-3/5ln(μ ^2/Q^2)-131/60) +n_f T_F [C_A (21/50 ϵ-171/100ln(μ ^2/Q^2)+7 π ^2/15-140917/9000) +C_F (9/10ln(μ ^2/Q^2)+9/20 ϵ+1579/400)] +C_A^2 (147/50 ϵ+1743/100ln(μ ^2/Q^2)+6 ζ_3-97 π ^2/30+211829/2250) } +[ln (x_L)/x_L]_+ [n_f T_F (51 /25C_A-69 /40C_F)-133 /25C_A^2+2/5 n_f^2 T_F^2] } , with the gluonic singular term r_g(μ,Q,ϵ) r_g(μ,Q,ϵ) = C_A T_F n_f [-21/100 ϵ ^2+1/ϵ(-21/50ln(μ ^2/Q^2)-7 π^2/30+6887/9000)-1163/150ln^2(μ^2/Q^2) +(-948847/18000-7 π ^2/15) ln(μ^2/Q^2) -211 ζ_3/10+3037 π ^2/1800-5585159/67500] + C_F T_F n_f [-9/40 ϵ ^2+1/ϵ(-9/20ln(μ ^2/Q^2)-1509/800)-9/20ln ^2(μ ^2/Q^2)-3109/400ln(μ ^2/Q^2)+15 ζ_3 +5 π ^2/8-230393/6000]+C_A^2 {-147/100 ϵ ^2+1/ϵ[-147/50ln(μ ^2/Q^2)-3 ζ_3+97 π ^2/60-474857/18000] +2143/300ln ^2(μ ^2/Q^2)+(-6 ζ_3+261281/18000+97 π ^2/30) ln(μ ^2/Q^2)+1133 ζ_3/10-7 π ^4/20 +373 π ^2/100-12512789/90000}+n_f^2 T_F^2 [4/3ln ^2(μ ^2/Q^2)+2971/300ln(μ ^2/Q^2)-23 π ^2/45+579043/27000] , where λ is the effective Hgg coupling[For the case of gluonic Higgs decays, we normalize the E3C into the form where the LO E3C is 1/σ^'_0σ^[3]_0 / x_L=λ(μ) (1/4δ(x_L) + 3/4δ(1-x_L)) in d=4-2ϵ dimensions. ]<cit.>. These results are then used to extract the two-loop jet constants. § FIXED-ORDER EXPANSION In this section, we provide the singular expansion of projected energy correlator up to NNLO 𝒪(α_s^3) in e^+e^- annihilation. This can be achieved by expanding our resummed distribution with canonical scale μ=Q. For EEC, we find 1/σ_0dσ^[2]/dx_L =(α_s/4π)C_F3/2 x_L+(α_s/4π)^2 C_F{[53/30n_f T_F+25/4C_F-107/15C_A]ln x_L/x_L +[-4913/450n_f T_F+(-8263/216+43/9π^2-8ζ_3)C_F+(35336/675-25/9π^2+4ζ_3)C_A]1/x_L} +(α_s/4π)^3C_F{[ 8059/300C_A^2-340/9C_F C_A+625/48C_F^2-16259/900C_A T_F n_f+4619/360C_F T_F n_f +92/45n_f^2 T_F^2 ]ln^2 x_L/x_L +[ -17734/675n_f^2 T_F^2 +(-64 ζ_3/3-6760183/32400+416 π ^2/27) C_F T_F n_F +( 32 ζ_3/3+6644267/27000-36 π ^2/5) C_A T_F n_f+(-172 ζ_3/3-723533/2592+1849 π ^2/54)C_F^2 +(-74 ζ_3/3-2916859/6750+503 π ^2/30) C_A^2 +( 262 ζ_3/3+105425/144-550 π ^2/9) C_F C_A ]ln x_L/x_L +[(88031/1125+4π^2/5)n_f^2 T_F^2 +(-15988 ζ _3/45+236 π ^4/135-15161 π ^2/360+164829499/243000) C_F T_F n_F +( 3679 ζ _3/15-118 π ^4/135+379579 π ^2/16200-1025118113/1080000) C_A T_F n_F +(8 π ^2 ζ _3+52 ζ _3+208 ζ _5-167 π ^4/27-18805 π ^2/1296+742433/1944) C_F^2 +(4 π ^2 ζ _3-47483 ζ _3/90+56 ζ _5-481 π ^4/540-906257 π ^2/16200+964892417/540000) C_A^2 +(-12 π ^2 ζ _3+10604 ζ _3/15-216 ζ _5+847 π ^4/180+137305 π ^2/1296-105395741/51840) C_F C_A]1/x_L} . 
Similarly, for E3C, we have 1/σ_0dσ^[3]/dx_L =(α_s/4π)C_F9/8x_L+(α_s/4π)^2 C_F{[139/100n_f T_F+471/80C_F-979/200C_A]ln x_L/x_L +[-24863/3000n_f T_F-21/10C_F+66769/3000 C_A]1/x_L} +(α_s/4π)^3C_F{[ 17743/1000C_A^2-412753/12000C_F C_A+24649/1600C_F^2-19019/1500C_A T_F n_f +35369/3000C_F T_F n_f+128/75n_f^2 T_F^2 ]ln^2 x_L/x_L+[-4559891/22500C_A^2-814823/48000C_F^2 +(34399441/120000-11 π ^2/2) C_F C_A + (2 π ^2-1026851/10000)C_F T_F n_f+ 3055907/22500C_A T_F n_f -23494/1125n_f^2 T_F^2 ]ln x_L/x_L+[j_2^q,[3](157/15-44 C_A/3 C_F+16 n_f T_F/3C_F)-22/15j_2^g,[3] + (106027/54000-22 π ^2/225) n_f^2 T_F^2 +( 1827 ζ _3/25-3877 π ^2/3000-3239027203/10800000) C_F T_F n_f +(-1037 ζ _3/50-2167 π ^2/4500-24958553/3600000) C_A T_F n_f +( 3267 ζ _3/20-111313 π ^2/14400-6031520921/17280000) C_F^2 + ( -829 ζ _3/100+4433 π ^2/2250+363491521/5400000) C_A^2 +(-42321 ζ _3/200+284797 π ^2/36000+4941457181/7200000) C_F C_A ]1/x_L} , with the two-loop jet constant j_2^q/g,[3] from Eq. (<ref>)-(<ref>). JHEP
http://arxiv.org/abs/2307.04727v1
20230710173714
Deceptive Information Retrieval
[ "Sajani Vithana", "Sennur Ulukus" ]
cs.IT
[ "cs.IT", "cs.CR", "cs.NI", "eess.SP", "math.IT" ]
Deceptive Information Retrieval Sajani Vithana Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 20742 [email protected] [email protected] ============================================================================================================================================================================= We introduce the problem of deceptive information retrieval (DIR), in which a user wishes to download a required file out of multiple independent files stored in a system of databases while deceiving the databases by making the databases' predictions on the user-required file index incorrect with high probability. Conceptually, DIR is an extension of private information retrieval (PIR). In PIR, a user downloads a required file without revealing its index to any of the databases. The metric of deception is defined as the probability of error of databases' prediction on the user-required file, minus the corresponding probability of error in PIR. The problem is defined on time-sensitive data that keeps updating from time to time. In the proposed scheme, the user deceives the databases by sending real queries to download the required file at the time of the requirement and dummy queries at multiple distinct future time instances to manipulate the probabilities of sending each query for each file requirement, using which the databases' make the predictions on the user-required file index. The proposed DIR scheme is based on a capacity achieving probabilistic PIR scheme, and achieves rates lower than the PIR capacity due to the additional downloads made to deceive the databases. When the required level of deception is zero, the proposed scheme achieves the PIR capacity. § INTRODUCTION Information is generally retrieved from a data storage system by directly requesting what is required. This is the most efficient form of information retrieval in terms of the download cost, as the user only downloads exactly what is required. However, if the user does not want to reveal the required information to the data storage system from which the information is retrieved, extra information must be requested to increase the uncertainty of the database's knowledge on the user's requirement. This is the core idea of private information retrieval (PIR) <cit.>, where the user downloads a required file out of K independent files stored in N non-colluding databases without revealing the required file index. In PIR, the databases' prediction of the user-required file based on the received queries is uniformly distributed across all files. Hence, the probability of error of the database's predictions in a PIR setting with K files is 1-1/K. In weakly private information retrieval <cit.>, a certain amount of information on the user-required file index is revealed to the databases to reduce the download cost. In such cases, as the databases have more information on the file index that the user requests, the error probability of the database's prediction is less than 1-1/K. In this work, we study the case where the error probability of databases' prediction is larger than 1-1/K. Note that with no information received by the user at all, the databases can make a random guess on the user-required file index, and reach an error probability of 1-1/K. Therefore, to result in a prediction error that is larger than 1-1/K, the user has to deceive the databases by sending fake information on the required file index. 
The goal of this work is to generate a scheme that allows a user to download a required file k, while forcing the databases' prediction on the user-required file index to be ℓ, where k≠ℓ, for as many cases as possible. This is coined as deceptive information retrieval (DIR). DIR is achieved by sending dummy queries to databases to manipulate the probabilities of sending each query for each file requirement, which results in incorrect predictions at the databases. However, sending dummy queries increases the download cost compared to PIR. Fig. <ref> shows the behavior of the prediction error probability and the corresponding download costs for different types of information retrieval.[The regions marked as “weakly PIR" and “DIR" in Fig. <ref> show the points that are conceptually valid for the two cases and does not imply that every point in those regions are achievable. The achievable points corresponding to “weakly PIR" and “DIR" lie within the marked regions.] The concept of deception has been studied as a tool for cyber defense <cit.>, where the servers deceive attackers, adversaries and eavesdroppers to eliminate any harmful activities. In all such cases, the deceiver (servers in this case), gains nothing from the deceived, i.e., attackers, adversaries and eavesdroppers. In contrast, the main challenge in DIR is that what needs to be deceived is the same source of information that the user retrieves the required data from. This limits the freedom that a DIR scheme could employ to deceive the databases. To this end, we formulate the problem of DIR based on the key concepts used in PIR, while also incorporating a time dimension to aid deception. The problem of DIR introduced in this paper considers a system of non-colluding databases storing K independent files that are time-sensitive, i.e., files that keep updating from time to time. We assume that the databases only store the latest version of the files. A given user wants to download arbitrary files at arbitrary time instances. The correctness condition ensures that the user receives the required file, right at the time of the requirement, while the condition for deception requires the databases' prediction on the user-required file to be incorrect with a probability that is greater than 1-1/K, specified by the predetermined level of deception required in the system. The scheme that we propose for DIR deceives the databases by sending dummy queries to the databases for each file requirement, at distinct time instances. From the user's perspective, each query is designed to play two roles as real and dummy queries, with two different probability distributions. This allows the user to manipulate the overall probability of sending each query for each message requirement, which is known by the databases. The databases make predictions based on the received queries and the globally known probability distribution of the queries used for each file requirement. These predictions are incorrect with probability >1-1/K as the probability distributions based on which the real queries are sent are different from the globally known overall distribution. This is the basic idea used in the proposed scheme which allows a user to deceive the databases while also downloading the required file. The download cost of the proposed DIR scheme increases with the required level of deception d, and achieves the PIR capacity when d=0. 
§ PROBLEM FORMULATION AND SYSTEM MODEL We consider N non-colluding databases storing K independent files, each consisting of L uniformly distributed symbols from a finite field 𝔽_q, i.e., H(W_1,…,W_K)=∑_i=1^K H(W_i)=KL, where W_i is the ith file. The files keep updating from time to time, and a given user wants to download an arbitrary file at arbitrary time instances T_i, i∈ℕ. We assume that all files are equally probable to be requested by the user. The user sends queries at arbitrary time instances to download the required file while deceiving the databases. We assume that the databases are only able to store data (files, queries from users, time stamps of received queries etc.) corresponding to the current time instance, and that the file updates at distinct time instances are mutually independent. Therefore, the user's file requirements and the queries sent are independent of the stored files at all time instances, i.e., I(θ^[t],Q_n^[t];W_1:K^[t])=0, n∈{1,…,N}, ∀ t, where θ^[t] is the user's file requirement, Q_n^[t] is the query sent by the user to database n, and W_1:K^[t] is the set of K files, all at time t.[The notation 1:K indicates all integers from 1 to K.] At any given time t when each database n, n∈{1,…,N}, receives a query from the user, it sends the corresponding answer as a function of the received query and the stored files, thus, H(A_n^[t]|Q_n^[t],W_1:K^[t])=0, n∈{1,…,N}, where A_n^[t] is the answer received by the user from database n at time t. At each time T_i, i∈ℕ, the user must be able to correctly decode the required file, that is, H(W_θ^[T_i]|Q_1:N^[T_i],A_1:N^[T_i])=0, i∈ℕ. At any given time t when each database n, n∈{1,…,N}, receives a query from the user, it makes a prediction on the user-required file index using the maximum aposteriori probability (MAP) estimate as follows, θ̂^[t]_Q=max_i P(θ^[t]=i|Q_n^[t]=Q), n∈{1,…,N}, where θ̂^[t]_Q is the predicted user-required file index based on the realization of the received query Q at time t. The probability of error of each database's prediction is defined as, P_e=𝔼[P(θ̂^[T_i]_Q≠θ^[T_i])], where the expectation is taken across all Q and T_i. Note that in PIR, P(θ^[t]_Q=i|Q_n^[t]=Q)=P(θ^[t]_Q=j|Q_n^[t]=Q) for all i,j∈{1,…,N}, any Q^[t], which results in P_e^PIR=1-1/K. Based on this information, we define the metric of deception as, D=P_e-(1-1/K). For PIR, the amount of deception is D=0, and for weakly PIR where some amount of information is leaked on the user-required file index, the amount of deception takes a negative value as the probability of error is smaller than 1-1/K. The goal of this work is to generate schemes that meet a given level of deception D=d>0, while minimizing the normalized download cost defined as, D_L=H(A_1:N)/L, where A_1:N represents all the answers received by all N databases, corresponding to a single file requirement of the user. The DIR rate is defined as the reciprocal of D_L. § MAIN RESULT In this section we present the main result of this paper, along with some remarks. Consider a system of N non-colluding databases containing K identical files. A user is able to retrieve any file k, while deceiving the databases by leaking information about some other file k' to the databases. Consider a system of N non-colluding databases storing K independent files. 
A required level of deception d, satisfying 0≤ d<(K-1)(N-1)/K(N^K-N) is achievable at a DIR rate, R=(1+(N^K-N/N-1)e^ϵ/1+(N^K-1-1)e^ϵ+(N/N-1)(2u-u(u+1)α))^-1, where ϵ=ln(dKN+(K-1)(N-1)/dKN+(K-1)(N-1)-dKN^K), α=N+(N^K-N)e^ϵ/(N-1)e^2ϵ+(N^K-N)e^ϵ+1, u=⌊1/α⌋ For given N and K, ϵ≥0 is a one-to-one continuous function of d, the required level of deception, and α∈(0,1] is a one-to-one continuous function of ϵ. For a given u∈ℤ^+, there exists a range of values of α, specified by 1/u+1< α≤1/u, which corresponds to a unique range of values of ϵ, for which (<ref>) is valid. Since (0,1]=∪{α:1/u+1< α≤1/u, u∈ℤ^+}, there exists an achievable rate (as well as an achievable scheme) for any ϵ≥0 as well as for any d in the range 0≤ d<(K-1)(N-1)/K(N^K-N). When the user specified amount of deception is zero, i.e., d=0, the corresponding values of α and u are α=1 and u=1. The achievable rate for this case is 1-1/N/1-1/N^K, which is equal to the PIR capacity. The achievable DIR rate monotonically decreases with increasing amount of deception d for any given N and K. The variation of the achievable DIR rate with the level of deception for different number of databases when the number of files fixed at K=3 is shown in Fig. <ref>. The achievable rate for different number of files when the number of databases is fixed at N=2 is shown in Fig. <ref>. For any given N and K, the rate decreases exponentially when the level of deception is close to the respective upper bound, i.e., d<(K-1)(N-1)/K(N^K-N). § DIR SCHEME The DIR scheme introduced in this section is designed for a system of N non-colluding databases containing K independent files, with a pre-determined amount of deception d>0 required. For each file requirement at time T_i, i∈ℕ, the user chooses a set of M+1 queries to be sent to database n, n∈{1,…,N}, at time T_i as well as at future time instances t_i,j, j∈{1,…,M}, such that each t_i,j>T_i. The query sent at time T_i is used to download the required file, while the rest of the M queries are sent to deceive the databases. The queries sent at times T_i, i∈ℕ and t_i,j, j∈{1,…,M}, i∈ℕ are known as real and dummy queries, respectively. The binary random variable R is used to specify whether a query sent by the user is real or dummy, i.e., R=1 corresponds to a real query sent at time T_i, and R=0 corresponds to a dummy query sent at time t_i,j. Next, we define another classification of queries used in the proposed scheme. An ϵ-deceptive query Q with respect to file k is defined as a query that always satisfies, P(Q_n=Q|θ=k,R=1)/P(Q_n=Q|θ=ℓ,R=1)=e^-ϵ, P(θ=k|Q_n=Q)/P(θ=ℓ|Q_n=Q)=e^ϵ, ∀ℓ∈{1,…, K}, ℓ≠ k, for some ϵ>0, where Q_n and θ are the random variables representing a query sent to database n, n∈{1,…,N}, and the user-required file index. An equivalent representation of (<ref>) is given by, P(R=1|θ=ℓ)+P(Q_n=Q|θ=ℓ,R=0)/P(Q_n=Q|θ=ℓ,R=1)P(R=0|θ=ℓ)/P(R=1|θ=k)+P(Q_n=Q|θ=k,R=0)/P(Q_n=Q|θ=k,R=1)P(R=0|θ=k)=e^-2ϵ, ∀ℓ∈{1,…, K}, ℓ≠ k. A query Q that satisfies (<ref>) with ϵ=0 for all k∈{1,…,K}, i.e., a 0-deceptive query, is known as a PIR query. The intuition behind the definition of an ϵ-deceptive query with respect to message k in Definition <ref> is as follows. Note that the second equation in (<ref>) fixes the databases’ prediction on the user’s requirement as W_k for the query Q̃. This is because the aposteriori probability corresponding to message k, when Q̃ is received by the databases, is greater than that of any other message ℓ, ℓ≠ k. 
However, the first equation in (<ref>), which is satisfied at the same time, ensures that the user sends the query Q̃ with the least probability when the user requires to download message k, compared to the probabilities of sending Q̃ for other message requirements. In other words, since we assume equal priors, the query Q̃ is mostly sent when the user requires to download W_ℓ for ℓ≠ k, and is rarely sent to download W_k, while the databases’ prediction on the user-required message upon receiving query Q̃ is fixed at W_k, which is incorrect with high probability, hence, the deception. At a given time t, there exists a set of queries consisting of both deceptive and PIR queries, sent to the N databases. Database n, n∈{1,…,N}, is aware of the probability of receiving each query, for each file requirement, i.e., P(Q_n=Q|θ=k), for k∈{1,…,K}, Q∈𝒬, where 𝒬 is the set of all queries. However, the databases are unaware of being deceived, and are unable to determine whether the received query Q is real or dummy or deceptive or PIR. The proposed scheme generates a list of real and dummy queries for a given N and K along with the probabilities of using them as ϵ-deceptive and PIR queries, based on the required level of deception d. The scheme also characterizes the optimum number of dummy queries M to be sent to the databases for each file requirement, to minimize the download cost. As an illustration of the proposed scheme, consider the following representative examples. §.§ Example 1: Two Databases and Two Files, N=K=2 In this example, we present how the proposed DIR scheme is applied in a system of two databases containing two files each. In the proposed scheme, the user generates M+1 queries for any given file-requirement which consists of one real query and M dummy queries. The user sends the real query at the time of the requirement T_i, and the rest of the M dummy queries at M different future time instances t_i,j. Tables <ref> and <ref> give possible pairs of real queries that are sent to the two databases to retrieve W_1 and W_2, respectively, at time T_i, i∈ℕ. The probability of using each pair of queries is indicated in the first columns of Tables <ref> and  <ref>. Note that the correctness condition in (<ref>) is satisfied at each time T_i as each row of Tables <ref> and <ref> decodes files W_1 and W_2, respectively, with no error. The dummy queries sent to each database at time t_i,j are given in Tables <ref> and <ref>. The purpose of the dummy queries sent at future time instances is to deceive the databases by manipulating the aposteriori probabilities, which impact their predictions. For example, if the user wants to download W_1 at time T_i, the user selects one of the four query options in Table <ref> based on the probabilities in column 1,[The values of p and p' are derived later in this section.] and sends the corresponding queries to database 1 and 2 at time T_i. Based on the information in Table <ref>, the user sends the query W_1 to both databases at M distinct future time instances t_i,j, j∈{1,…,M}. Based on the information in Tables <ref>-<ref>, when the user-required file is W_1, the probability of each query being received by database n, n∈{1,2}, at an arbitrary time instance t is calculated as follows. Let P(R=1|θ=i)=α for i∈{1,2}.[The intuition behind P(R=1|θ=i) is the probability of a query received by any database being real when the user-required file index is i. For a fixed M, P(R=1|θ=i)=1/M+1.] 
Then, P(Q_n=W_1|θ=1) =P(Q_n=W_1|θ=1,R=1)P(R=1|θ=1) +P(Q_n=W_1|θ=1,R=0)P(R=0|θ=1) =pα+1-α P(Q_n=W_2|θ=1) =P(Q_n=W_2|θ=1,R=1)P(R=1|θ=1) +P(Q_n=W_2|θ=1,R=0)P(R=0|θ=1) =p'α P(Q_n=W_1+W_2|θ=1) =P(Q_n=W_1+W_2|θ=1,R=1)P(R=1|θ=1) +P(Q_n=W_1+W_2|θ=1,R=0)P(R=0|θ=1) =p'α P(Q_n=ϕ|θ=1) =P(Q_n=ϕ|θ=1,R=1)P(R=1|θ=1) +P(Q_n=ϕ|θ=1,R=0)P(R=0|θ=1) =pα Thus, writing these probabilities compactly, we have, P(Q_n=W_1|θ=1) =pα+1-α P(Q_n=W_2|θ=1) =p'α P(Q_n=W_1+W_2|θ=1) =p'α P(Q_n=ϕ|θ=1) =pα. Similarly, when the user-required file is W_2, the corresponding probabilities are, P(Q_n=W_1|θ=2) =p'α P(Q_n=W_2|θ=2) =pα+1-α P(Q_n=W_1+W_2|θ=2) =p'α P(Q_n=ϕ|θ=2) =pα. These queries and the corresponding probabilities of sending them to each database for each message requirement are known to the databases. However, the decomposition of these probabilities based on whether the query is real or dummy, i.e., Tables <ref>-<ref>, is not known by the databases. When database n, n∈{1,…,N}, receives a query Q at time t, it calculates the aposteriori probability distribution of the user-required file index, to predict the user's requirement using (<ref>). The aposteriori probabilities corresponding to the four queries received by database n, n∈{1,2}, are calculated as follows, P(θ=i|Q_n=Q) =P(Q_n=Q|θ=i)P(θ=i)/P(Q_n=Q). Then, the explicit a posteriori probabilities are given by, P(θ=1|Q_n=W_1) =1/2(pα+1-α)/P(Q_n=W_1) P(θ=2|Q_n=W_1) =1/2p'α/P(Q_n=W_1) P(θ=1|Q_n=W_2) =1/2p'α/P(Q_n=W_2) P(θ=2|Q_n=W_2) =1/2(pα+1-α)/P(Q_n=W_2) P(θ=1|Q_n=W_1+W_2) =1/2p'α/P(Q_n=W_1+W_2) P(θ=2|Q_n=W_1+W_2) =1/2p'α/P(Q_n=W_1+W_2) P(θ=1|Q_n=ϕ) =1/2pα/P(Q_n=ϕ) P(θ=2|Q_n=ϕ) =1/2pα/P(Q_n=ϕ). While queries ϕ and W_1+W_2 are PIR queries as stated in Definition <ref>, queries W_1 and W_2 are ϵ-deceptive with respect to file indices 1 and 2, respectively, for an ϵ that depends on the required amount of deception d. The values of p and p' in Tables <ref>-<ref> are calculated based on the requirements in Definition <ref> as follows. It is straightforward to see that p'=pe^ϵ follows from the first part of (<ref>) for each query Q=W_1 and Q=W_2, which also gives p=1/2(1+e^ϵ). The second part of (<ref>) (as well as (<ref>)) results in α=2/1+e^ϵ for both ϵ-deceptive queries W_1 and W_2. Based on the aposteriori probabilities (<ref>)-(<ref>) calculated by the databases using the information in (<ref>)-(<ref>), each database predicts the user's requirement at each time it receives a query from the user. The predictions corresponding to each query received by database n, n=1,2, which are computed using (<ref>), are shown in Table <ref>. Based on this information, when a database receives query Q=W_1, it always decides that the requested message is W_1, and when it receives query Q=W_2, it always decides that the requested message is W_2. For queries Q=ϕ and Q=W_1+W_2, the databases flip a coin to choose either W_1 or W_2 as the requested message. As the queries are symmetric across all databases, the probability of error corresponding to some query Q received by database n at time T_i is given by, P(θ̂^[T_i]_Q≠θ^[T_i]) =P(θ^[T_i]=1,θ̂_Q^[T_i]= 2|Q_n^[T_i]=Q)+P(θ^[T_i]=2,θ̂^[T_i]_Q=1|Q_n^[T_i]=Q) =1/P(Q_n^[T_i]=Q)(P(θ̂^[T_i]_Q=2|θ^[T_i]=1,Q_n^[T_i]=Q)P(Q_n^[T_i]=Q|θ^[T_i]=1)P(θ^[T_i]=1). .+ P(θ̂_Q^[T_i]=1|θ^[T_i]=2,Q_n^[T_i]=Q)P(Q_n^[T_i]=Q|θ^[T_i]=2)P(θ^[T_i]=2)) =1/P(Q_n^[T_i]=Q)(P(θ̂^[T_i]_Q=2|Q_n^[T_i]=Q)P(Q_n^[T_i]=Q|θ^[T_i]=1)P(θ^[T_i]=1). .+P(θ̂_Q=1|Q_n^[T_i]=Q)P(Q_n^[T_i]=Q|θ^[T_i]=2)P(θ^[T_i]=2)), as the predictions only depend on the received queries. 
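A quick numerical check (an illustrative sketch; the value of ϵ below is arbitrary) confirms that, with these choices of p, p' and α, the query W_1 indeed satisfies both conditions of Definition <ref>; the case of W_2 follows by symmetry.

# Sketch: verify the two eps-deceptive conditions for Q = W_1 in the N = K = 2 example.
import math

eps = 0.7                                       # arbitrary eps > 0
p = 1.0 / (2.0 * (1.0 + math.exp(eps)))
p_prime = p * math.exp(eps)
alpha = 2.0 / (1.0 + math.exp(eps))             # P(R = 1 | theta = i)

# overall probabilities of sending W_1 (real or dummy); dummy queries for theta = 1 are always W_1
P_W1_theta1 = p * alpha + (1.0 - alpha)
P_W1_theta2 = p_prime * alpha

real_ratio = p / p_prime                        # P(Q=W_1|theta=1,R=1) / P(Q=W_1|theta=2,R=1)
posterior_ratio = P_W1_theta1 / P_W1_theta2     # equals P(theta=1|Q=W_1)/P(theta=2|Q=W_1) for uniform priors

print(math.isclose(real_ratio, math.exp(-eps)))       # first condition:  e^(-eps)
print(math.isclose(posterior_ratio, math.exp(eps)))   # second condition: e^(+eps)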
The explicit probabilities corresponding to the four queries are,[Note that P(Q_n=Q|θ^[T_i]=i) implies P(Q_n=Q|θ=i,R=1) as only real queries are sent at time T_i.] P(θ̂_W_1^[T_i]≠θ^[T_i]) =1/P(Q_n^[T_i]=W_1)e^ϵ/4(1+e^ϵ) P(θ̂_W_2^[T_i]≠θ^[T_i]) =1/P(Q_n^[T_i]=W_2)e^ϵ/4(1+e^ϵ) P(θ̂_W_1+W_2^[T_i]≠θ^[T_i]) =1/P(Q_n^[T_i]=W_1+W_2)e^ϵ/4(1+e^ϵ) P(θ̂_ϕ^[T_i]≠θ^[T_i]) =1/P(Q_n^[T_i]=ϕ)1/4(1+e^ϵ). As the same scheme is used for all user-requirements at all time instances, the probability of error of each database's prediction for this example is calculated using (<ref>) as, P_e =∑_Q∈𝒬 P(Q_n^[T_i]=Q)P(θ̂_Q^[T_i]≠θ^[T_i]) =3e^ϵ+1/4(1+e^ϵ) where 𝒬={W_1,W_2,W_1+W_2,ϕ}, which results in a deception of D=3e^ϵ+1/4(1+e^ϵ)-1/2=e^ϵ-1/4(1+e^ϵ). Therefore, for a required amount of deception d<1/4, the value of ϵ is chosen as ϵ=ln(4d+1/1-4d). The download cost of this scheme is computed as follows. As the scheme is symmetric across all file retrievals, and since the apriori probability distribution of the files is uniform, without loss of generality, we can calculate the download cost of retrieving W_1. The download cost of retrieving W_1 for a user specified amount of deception d is given by, D_L =1/L(2Lp+2(2L)pe^ϵ+2L∑_m=0^∞ p_mm) =1+2e^ϵ/1+e^ϵ+2𝔼[M] where p_m is the probability of sending m dummy queries per each file requirement. To minimize the download cost, we need to find the probability mass function (PMF) of M which minimizes 𝔼[M] such that P(R=1|θ=i)=α=2/1+e^ϵ is satisfied for any i. Note that for any i, P(R=1|θ=i) can be written as, P(R=1|θ=i)=α=∑_m=0^∞ p_m1/m+1=𝔼[1/M+1], where M is the random variable representing the number of dummy queries sent to each database per file requirement. Thus, the following optimization problem needs to be solved, for a given ϵ, that is a function of the given value of d, min 𝔼[M] s.t. 𝔼[1/M+1]=2/1+e^ϵ=α. The solution to this problem is given in Lemma <ref>, and the resulting minimum download cost is given by, D_L =1+2e^ϵ/1+e^ϵ+4u-2u(u+1)α, where u=⌊1/α⌋. When d=0, it follows that ϵ=0 and u=1, and the achievable rate is 2/3, which is the same as the PIR capacity for N=2 and K=2. §.§ Example 2: Three Databases and Three Files, N=K=3 Similar to the previous example, the user sends real queries at time T_i and dummy queries at times t_i,j, j∈{1,…,M}, for each i∈ℕ, based on the probabilities shown in Tables <ref>-<ref>. The notation W_i^j in these tables correspond to the jth segment of W_i, where each file W_i is divided into N-1=2 segments of equal size. Database n, n∈{1,…,N}, only knows the overall probabilities of receiving each query for each file requirement of the user shown in Table <ref>. These overall probabilities which are calculated using, P(Q_n=Q|θ=k) =P(Q_n=Q|θ=k,R=1)P(R=1|θ=k) +P(Q_n=Q|θ=k,R=0)P(R=0|θ=k), k∈{1,…,K} where P(R=1|θ=i)=α for any i=1,2,3, are the same for each database as the scheme is symmetric across all databases. The entry “other queries" in Table <ref> includes all queries that have sums of two or three elements. Based on this available information, each database calculates the aposteriori probability of the user-required file index conditioned on each received query Q using (<ref>). Each query of the form W_k^j is an ϵ-deceptive query with respect to file k, where ϵ is a function of the required amount of deception, which is derived towards the end of this section. All other queries including the null query and all sums of two or three elements are PIR queries. 
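Before completing the derivation for Example 2, the short Python sketch below (ours, not part of the paper) collects the closed-form quantities obtained for Example 1 above: given a deception target d < 1/4 it evaluates ϵ, the real-query probabilities p and p', the fraction α of real queries, the optimal expected number of dummy queries, and the resulting download rate, recovering the PIR capacity 2/3 at d = 0. All function and variable names are ours.

```python
import math

def dir_two_dbs_two_files(d):
    """Example 1 (N = K = 2): closed-form quantities as a function of d < 1/4."""
    assert 0 <= d < 0.25
    eps = math.log((4 * d + 1) / (1 - 4 * d))
    E = math.exp(eps)
    p, p_prime = 1 / (2 * (1 + E)), E / (2 * (1 + E))   # real-query probabilities p, p' = p e^eps
    alpha = 2 / (1 + E)                                  # P(R = 1 | theta)
    P_e = (3 * E + 1) / (4 * (1 + E))                    # databases' prediction error at time T_i
    u = math.floor(1 / alpha)
    EM = 2 * u - u * (u + 1) * alpha                     # optimal E[M] (Lemma)
    DL = (1 + 2 * E) / (1 + E) + 2 * EM                  # download cost
    return {"eps": eps, "p": p, "p'": p_prime, "alpha": alpha,
            "deception": P_e - 0.5, "E[M]": EM, "rate": 1 / DL}

print(dir_two_dbs_two_files(0.0)["rate"])   # 2/3, the PIR capacity for N = K = 2
print(dir_two_dbs_two_files(0.1))           # a positive deception target lowers the rate
```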
As all ϵ-deceptive queries must satisfy (<ref>), the value of p' is given by p'=pe^ϵ, which results in p=1/3(1+8e^ϵ), based on the same arguments used in the previous example. Using (<ref>) and (<ref>) for any given deceptive query, the value of α is calculated as follows. Note that for a query of the form W_k^j, for each database n, n∈{1,…,N}, using P(θ=k)=1/K, we have P(θ=k|Q_n=W_k^j)/P(θ=ℓ|Q=W_k^j) =P(Q_n=W_k^j|θ=k)/P(Q_n=W_k^j|θ=ℓ)=pα+1/2(1-α)/p'α, The value of α is computed as α=1/2p(e^2ϵ-1)+1, using (<ref>) and (<ref>) by solving pα+1/2(1-α)/p'α=e^ϵ. Assume that the user wants to download W_2 at some time T_i. Then, at time T_i, the user picks a row of queries from Table <ref> based on the probabilities in the first column, and sends them to each of the three databases. Note that correctness is satisfied as it is possible to decode W_2 from any row of Table <ref>. Next, the user picks M future time instances t_i,j, j∈{1,…,M}, and at each time t_i,j the user independently and randomly picks a row from Table <ref> and sends the queries to the databases. This completes the scheme, and the value of M that minimizes the download cost is calculated at the end of this example. The databases make predictions with the received query at each time t, based on the information available in Table <ref>. As the aposteriori probabilities P(θ=k|Q_n=Q) are proportional to the corresponding probabilities given by P(Q_n=Q|θ=k) from (<ref>), the databases' predictions (using (<ref>)) and the corresponding probabilities are shown in Table <ref>. The probability of error for each type of query is calculated as follows. First, consider the ϵ-deceptive queries with respect to file k, given by W_k^j, j∈{1,2}. For these queries, the error probability from the perspective of database n, n∈{1,…,N}, is given by, P(θ̂^[T_i]_W_k^j≠θ^[T_i]) =P(θ^[T_i]≠ k|Q_n^[T_i]=W_k^j) =∑_ℓ=1,ℓ≠ k^3 P(θ^[T_i]= ℓ|Q_n^[T_i]=W_k^j) =∑_ℓ=1,ℓ≠ k^3 P(Q_n^[T_i]=W_k^j|θ^[T_i]= ℓ)P(θ^[T_i]= ℓ)/P(Q_n^[T_i]=W_k^j) =1/P(Q_n^[T_i]=W_k^j)2/3pe^ϵ, where (<ref>) follows from the fact that the databases' prediction on a received query of the form W_k^j is file k with probability 1 from Table <ref>, and the probabilities in (<ref>) are obtained from real query tables as they correspond to queries sent at time T_i. Next, the probability of error corresponding to each of the the other queries, i.e., PIR queries that include the null query and sums of two or three elements, is given by, P(θ̂^[T_i]_Q≠θ^[T_i]) =P(θ̂^[T_i]≠θ^[T_i]|Q_n^[T_i]=Q) =∑_j=1^3 ∑_m=1,m≠ j^3 P(θ̂^[T_i]=m,θ^[T_i]=j,Q_n^[T_i]=Q)/P(Q_n^[T_i]=Q) =∑_j=1^3 ∑_m=1,m≠ j^3 P(θ̂^[T_i]=m|θ^[T_i]=j,Q_n^[T_i]=Q)P(Q_n^[T_i]=Q|θ^[T_i]=j)P(θ^[T_i]=j)/P(Q_n^[T_i]=Q) =1/P(Q_n^[T_i]=Q)2p/3, if Q=ϕ 2pe^ϵ/3, if Q if of the form ∑_s=1^ℓ W_k_s^j_s for ℓ∈{2,3} where (<ref>) follows from the fact that θ̂^[T_i] is conditionally independent of θ^[T_i] given Q_n, from (<ref>). The probability of error at each time T_i, i∈ℕ, is the same, as the scheme is identical at each T_i, and across all file requirements. Therefore, the probability of error of each database's prediction, using (<ref>) is given by, P_e =P(θ̂^[T_i]≠θ^[T_i]) =∑_Q∈𝒬P(Q_n=Q)P(θ̂^[T_i]_Q≠θ^[T_i]) =∑_k=1^3∑_j=1^2P(Q_n=W_k^j)1/P(Q_n^[T_i]=W_k^j)2/3pe^ϵ+P(Q_n=ϕ)1/P(Q_n=ϕ)2p/3 +20P(Q_n=Q̂)1/P(Q_n=Q̂)2pe^ϵ/3 =4pe^ϵ+2p/3+40pe^ϵ/3 =52e^ϵ+2/9(8e^ϵ+1). where 𝒬 is the set of all queries and Q̂ is a query of the form ∑_s=1^ℓ W_k_s^j_s for ℓ∈{2,3}. The resulting amount of deception is, D =P_e-(1-1/K)=52e^ϵ+2/9(8e^ϵ+1)-2/3=4(e^ϵ-1)/9(8e^ϵ+1). 
Therefore, for a required amount of deception d<1/18, ϵ is chosen as ϵ=ln(9d+4/4(1-18d)). Without loss of generality, consider the cost of downloading W_1, which is the same as the expected download cost, as the scheme is symmetric across all file retrievals, D_L =1/L(L×3p+3L/2×24pe^ϵ+3L/2∑_m=0^∞ p_m m)=1+12e^ϵ/1+8e^ϵ+3/2𝔼[M] To find the scheme that achieves the minimum D_L we need to find the minimum 𝔼[M] that satisfies P(R=1|θ=i)=α=𝔼[1/M+1]=3(1+8e^ϵ)/2e^2ϵ+24e^ϵ+1, i.e., the following optimization problem needs to be solved. min 𝔼[M] s.t. 𝔼[1/M+1]=3e^-2ϵ(1+8e^ϵ)/2+e^-2ϵ+24e^-ϵ. The solution to this problem is given in Lemma <ref>. The resulting minimum download cost for a given value of ϵ, i.e., required level of deception d, is given by, D_ϵ/L =1+12e^ϵ/1+8e^ϵ+3/2(2u-u(u+1)α), α=3e^-2ϵ(1+8e^ϵ)/2+e^-2ϵ+24e^-ϵ, where u=⌊1/α⌋. When d=0, it follows that ϵ=0, α=1 and u=1, and the achievable rate is 9/13, which is equal the PIR capacity for the case N=3,K=3. §.§ Generalized DIR Scheme for Arbitrary N and K In the general DIR scheme proposed in this work, at each time T_i, i∈ℕ, when the user requires to download some file W_k, the user sends a set of real queries to each of the N databases. These queries are picked based on a certain probability distribution, defined on all possible sets of real queries. For the same file requirement, the user sends M dummy queries at future time instances t_i,j, j∈{1,…,M}, where t_i,j>T_i. The dummy queries sent at each time t_i,j are randomly selected from a subset of real queries. We assume that the databases are unaware of being deceived, and treat both real and dummy queries the same when calculating their predictions on the user-required file index at each time they receive a query. The overall probabilities of a given user sending each query for each file requirement is known by the databases. However, the decomposition of these probabilities based on whether each query is used as a real or a dummy query is not known by the databases. It is also assumed that the databases only store the queries received at the current time instance. The main components of the general scheme include 1) N^K possible sets of real queries to be sent to the N databases for each file requirement and their probabilities, 2) N-1 possible sets of dummy queries and their probabilities, 3) overall probabilities of sending each query for each of the K file requirements of the user. Note that 1) and 2) are only known by the user while 3) is known by the databases. As shown in the examples considered, the set of all possible real queries takes the form of the queries in the probabilistic PIR scheme in <cit.>, with a non-uniform probability distribution unlike in PIR. The real query table used when retrieving W_k consists of the following queries: * Single blocks: W_k is divided into N-1 parts, and each part is requested from N-1 databases, while requesting nothing ϕ from the remaining database. All cyclic shifts of these queries are considered in the real query table. * Sums of two blocks/Single block: One database is used to download W_j^l, l∈{1,…,N-1},j≠ k and each one in the rest of the N-1 databases is used to download W_k^r+W_j^l for each r∈{1,…,N-1}. All cyclic shifts of these queries are also considered as separate possible sets of queries. * Sums of three/Two blocks: One database is used to download W_j_1^ℓ_1+W_j_2^ℓ_2, ℓ_1,ℓ_2∈{1,…,N-1} and j_1≠ j_2≠ k. Each one in the rest of the N-1 databases is used to download W_j_1^l_1+W_j_2^l_2+W_k^r for each r∈{1,…,N-1}. 
All cyclic shifts of these queries are also considered as separate possible sets of queries. * Sums of K/K-1 blocks: The above process is repeated for all sums of blocks until K/K-1. Out of the N^K different sets of queries described above in the real query table, the queries except ϕ in single blocks, i.e., queries of the form W_k^ℓ, ℓ∈{1,…,N-1}, are chosen as ϵ-deceptive ones with respect to file k, for each k∈{1,…,K}, and are included in the set of dummy queries sent to databases when the user-required file index is k. The N-1 ϵ-deceptive queries W_k^r, r∈{1,…,N-1}, corresponding to the kth file requirement must guarantee the condition in (<ref>). For that, we assign, P(Q_n=W_k^r|θ=k,R=1)=p, r∈{1,…,N-1} and P(Q_n=W_k^r|θ=j,R=1)=pe^ϵ, r∈{1,…,N-1}, j≠ k, for each database n, n∈{1,…,N}. The rest of the queries, i.e., ϕ and sums of ℓ blocks where ℓ∈{2,…,K}, are PIR queries in the proposed scheme. Note that the query ϕ is always coupled with the ϵ-deceptive queries with respect to file index k (required file) for correctness (see Tables <ref>, <ref>, <ref>). Thus, ϕ is assigned the corresponding probability given by, P(Q_n=ϕ|θ=m,R=1)=p, m∈{1,…,K}, n∈{1,…,N}. Similarly, as the rest of the PIR queries are coupled with ϵ-deceptive queries with respect to file indices j, j≠ k, or with other PIR queries, they are assigned the corresponding probability given by, P(Q_n=Q̂|θ=m,R=1)=pe^ϵ, m∈{1,…,K}, n∈{1,…,N}, where Q̂ is any PIR query in the form of ℓ-sums with ℓ∈{2,…,K}. Since the probabilities of the real queries sent for each file requirement must add up to one, i.e., ∑_Q∈𝒬 P(Q_n=Q|θ=m,R=1)=1 for each m∈{1,…,K}, p is given by, p=1/N+(N^K-N)e^ϵ, as there are N query sets in the real query table with probability p, and N^K-N sets with probability pe^ϵ. Each ϵ-deceptive query with respect to file index k is chosen with equal probability to be sent to the databases as dummy queries at times t_i,j when the file requirement at the corresponding time T_i is W_k. Since there are N-1 deceptive queries, P(Q_n=W_k^r|θ=k,R=0)=1/N-1, r∈{1,…,N-1}. and P(Q_n=W_k^r|θ=j,R=0)=0, r∈{1,…,N-1}, j≠ k. for each database n, n∈{1,…,N}. Therefore, for all ϵ-deceptive queries with respect to file index k of the form W_k^i, the condition in (<ref>) can be written as, α/α+1/p(N-1)(1-α) =e^-2ϵ thus, α =1/p(N-1)(e^2ϵ-1)+1=N+(N^K-N)e^ϵ/(N-1)e^2ϵ+(N^K-N)e^ϵ+1, which characterizes α=𝔼[1/M+1]. The information available to database n, n∈{1,…,N}, is the overall probability of receiving each query for each file requirement of the user P(Q_n=Q|θ=k), k∈{1,…,K}, given by, P(Q_n=Q|θ=k) =P(Q_n=Q|θ=k,R=1)P(R=1|θ=k) +P(Q_n=Q|θ=k,R=0)P(R=0|θ=k). For ϵ-deceptive queries with respect to file index k, i.e., W_k^j, j∈{1,…,N-1}, the overall probability in (<ref>) from the perspective of database n, n∈{1,…,N}, is given by, P(Q_n=W_k^j|θ=ℓ) =α p+1-α/N-1=e^2ϵ/(N-1)(e^2ϵ-1)+N+(N^K-N)e^ϵ, ℓ=k α pe^ϵ=e^ϵ/(N-1)(e^2ϵ-1)+N+(N^K-N)e^ϵ, ℓ≠ k. The probability of sending the null query ϕ to database n, n∈{1,…,N}, for each file-requirement k, k∈{1,…,K}, is, P(Q_n=ϕ|θ=k)=α p=1/(N-1)(e^2ϵ-1)+N+(N^K-N)e^ϵ. For the rest of the PIR queries denoted by Q̂, i.e., queries of the form ∑_s=1^ℓ W_i_s^j_s for ℓ∈{2,…,K}, the overall probability in (<ref>), known by each database n, n∈{1,…,N} for each file requirement k, k∈{1,…,K} is given by, P(Q_n=Q̂|θ=k)=α pe^ϵ=e^ϵ/(N-1)(e^2ϵ-1)+N+(N^K-N)e^ϵ. 
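As a consistency check on the construction above, the sketch below (ours) assembles the per-database distribution P(Q_n = Q | θ = k) for general N, K and ϵ by mixing the real and dummy components with weight α, and verifies that it normalises over all N^K distinct queries; the query-type labels and multiplicities follow the description above.

```python
import math

def per_database_query_probs(N, K, eps):
    """Overall probabilities P(Q_n = Q | theta = k) seen by a single database."""
    E = math.exp(eps)
    p = 1.0 / (N + (N**K - N) * E)                                       # real-query probability p
    alpha = (N + (N**K - N) * E) / ((N - 1) * E**2 + (N**K - N) * E + 1)  # P(R = 1 | theta)
    probs = {
        # label: (probability, number of such queries)
        "deceptive w.r.t. required file": (alpha * p + (1 - alpha) / (N - 1), N - 1),
        "deceptive w.r.t. another file":  (alpha * p * E, (K - 1) * (N - 1)),
        "null query":                     (alpha * p, 1),
        "PIR sum query":                  (alpha * p * E, N**K - 1 - K * (N - 1)),
    }
    total = sum(prob * count for prob, count in probs.values())
    assert abs(total - 1.0) < 1e-12    # sums to one over the N**K possible queries
    return p, alpha, probs

p, alpha, probs = per_database_query_probs(N=3, K=3, eps=0.5)   # the Example 2 setting
```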
Based on the query received at a given time t, each database n, n∈{1,…,N}, calculates the aposteriori probability of the user-required file index being k, k∈{1,…,K}, using, P(θ=k|Q_n=Q) =P(Q_n=Q|θ=k)P(θ=k)/P(Q_n=Q). Since we assume uniform priors, i.e., P(θ=k)=1/K for all k∈{1,…,K}, the posteriors are directly proportional to P(Q_n=Q|θ=k) for each Q. Therefore, the databases predict the user-required file index for each query received using (<ref>) and (<ref>)-(<ref>). For example, when the query W_1^1 is received, it is clear that the maximum P(θ=k|Q_n=W_1^1) in (<ref>) is obtained for k=1 from (<ref>) and (<ref>). The prediction corresponding to any query received is given in Table <ref> along with the corresponding probability of choosing the given prediction.[The superscript j in the first column of Table <ref> corresponds to any index in the set {1,….N-1}.] Based on the information in Table <ref>, the probability of error when a database n, n∈{1,…,N}, receives the query W_k^ℓ at some time T_i is given by, P(θ̂^[T_i]_W_k^ℓ≠θ^[T_i]) =P(θ^[T_i]≠ k|Q_n^[T_i]=W_k^ℓ) =∑_j=1,j≠ k^K P(θ^[T_i]=j|Q_n^[T_i]=W_k^ℓ) =∑_j=1,j≠ k^K P(Q^[T_i]_n=W_k^ℓ|θ^[T_i]=j)P(θ^[T_i]=j)/P(Q_n^[T_i]=W_k^ℓ) =1/Kpe^ϵ(K-1)/P(Q^[T_i]_n=W_k^ℓ), where (<ref>) follows from the fact that the user sends real queries based on the probabilities P(Q_n=Q|θ=k,R=1) at time T_i. For all other queries Q, the corresponding probability of error is given by, P(θ̂^[T_i]_Q≠θ^[T_i]) =P(θ̂^[T_i]≠θ^[T_i]|Q^[T_i]_n=Q) =∑_j=1^K ∑_m=1,m≠ j^K P(θ̂^[T_i]=m,θ^[T_i]=j,Q_n^[T_i]=Q)/P(Q^[T_i]_n=Q) =∑_j=1^K ∑_m=1,m≠ j^K P(θ̂^[T_i]=m|θ^[T_i]=j,Q_n^[T_i]=Q)P(Q^[T_i]_n=Q|θ^[T_i]=j)P(θ^[T_i]=j)/P(Q^[T_i]_n=Q) =1/P(Q^[T_i]_n=Q)(K-1)p/K, if Q=ϕ (K-1)pe^ϵ/K, if Q of the form ∑_s=1^ℓ W_i_s^j_s, ℓ∈{2,…,K} where (<ref>) follows from the fact that θ̂^[T_i] is conditionally independent of θ^[T_i] given Q from (<ref>). The probability of error of each database's prediction is given by, P_e =∑_QP(Q_n^[T_i]=Q)P(θ̂^[T_i]≠θ^[T_i]|Q^[T_i]=Q) =∑_k=1^K∑_ℓ=1^N-1P(Q_n^[T_i]=W_k^ℓ)1/Kpe^ϵ(K-1)/P(Q_n^[T_i]=W_k^ℓ)+P(Q_n^[T_i]=ϕ)1/K(K-1)p/P(Q^[T_i]_n=ϕ) +(N^K-1-K(N-1))P(P(Q_n^[T_i]=Q̂)1/K(K-1)pe^ϵ/P(Q_n^[T_i]=Q̂)) =pe^ϵ (K-1)(N-1)+(K-1)p/K+(K-1)pe^ϵ(N^K-1-K(N-1))/K =(K-1)(1+e^ϵ(N^K-1))/K(N+(N^K-N)e^ϵ), where Q̂ in (<ref>) represents the queries of the form ∑_s=1^ℓ W_i_s^j_s for ℓ∈{2,…,K}. Note that P(Q_n^[T_i]=Q̂) is the same for each Q̂ as P(Q_n^[T_i]=Q̂|θ=j)=pe^ϵ for each Q̂ and all j∈{1,…,K} from (<ref>). Thus, the amount of deception achieved by this scheme for a given ϵ is given by, D=P_e-(1-1/K)=(K-1)(N-1)(e^ϵ-1)/K(N+(N^K-N)e^ϵ). Therefore, for a required amount of deception d, satisfying d<(K-1)(N-1)/K(N^K-N), the value of ϵ must be chosen as, ϵ=ln(dKN+(K-1)(N-1)/dKN+(K-1)(N-1)-dKN^K). The download cost of the general scheme is, D_L =1/L(NpL+(N^K-N)pe^ϵNL/N-1+NL/N-1𝔼[M]) D_L =Np+N(N^K-N)/N-1pe^ϵ+(N/N-1)𝔼[M] D_L =N/N-1(1-1/N+(N^K-N)e^ϵ+𝔼[M]). Following optimization problem needs to be solved to minimize the download cost while satisfying α=N+(N^K-N)e^ϵ/(N-1)e^2ϵ+(N^K-N)e^ϵ+1, from (<ref>), min 𝔼[M] s.t. 𝔼[1/M+1]=N+(N^K-N)e^ϵ/(N-1)e^2ϵ+(N^K-N)e^ϵ+1=α. The solution to the optimization problem in (<ref>) is given by, 𝔼[M]=2u-u(u+1)α, where u=⌊1/α⌋ for a given value of α, which is specified by the required level of deception d. The proof of Lemma <ref> is given in the Appendix. The minimum download cost for the general case with N databases, K files and a deception requirement d, is obtained by (<ref>) and (<ref>). 
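The end-to-end calculation for arbitrary N, K and a required deception level d then takes only a few lines. The sketch below (ours) evaluates ϵ(d), α, the optimal 𝔼[M] from the Lemma, the download cost, the achieved deception and the retrieval rate; at d = 0 it reproduces the PIR capacities 2/3 (N = K = 2) and 9/13 (N = K = 3) quoted in the examples.

```python
import math

def dir_scheme(N, K, d):
    """General DIR scheme: rate and deception for a target d < (K-1)(N-1)/(K(N^K - N))."""
    assert 0 <= d < (K - 1) * (N - 1) / (K * (N**K - N))
    a = d * K * N + (K - 1) * (N - 1)
    eps = math.log(a / (a - d * K * N**K))                         # epsilon chosen from d
    E = math.exp(eps)
    p = 1.0 / (N + (N**K - N) * E)
    alpha = (N + (N**K - N) * E) / ((N - 1) * E**2 + (N**K - N) * E + 1)
    u = math.floor(1 / alpha)
    EM = 2 * u - u * (u + 1) * alpha                               # optimal E[M] (Lemma)
    DL = N / (N - 1) * (1 - p + EM)                                # download cost
    D = (K - 1) * (N - 1) * (E - 1) / (K * (N + (N**K - N) * E))   # achieved deception (= d)
    return {"eps": eps, "alpha": alpha, "E[M]": EM, "D_L": DL, "rate": 1 / DL, "deception": D}

print(dir_scheme(2, 2, 0.0)["rate"], dir_scheme(3, 3, 0.0)["rate"])   # 2/3 and 9/13
print(dir_scheme(3, 3, 0.05))                                         # any d < 1/18 for N = K = 3
```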
The corresponding maximum achievable rate is given in (<ref>). § DISCUSSION AND CONCLUSIONS We introduced the problem of deceptive information retrieval (DIR), in which a user retrieves a file from a set of independent files stored in multiple databases, while revealing fake information about the required file to the databases, which makes the probability of error of the databases' prediction on the user-required file index high. The proposed scheme achieves rates lower than the PIR capacity when the required level of deception is positive, as it sends dummy queries at distinct time instances to deceive the databases. When the required level of deception is zero, the achievable DIR rate is the same as the PIR capacity. The probability of error of the databases' prediction on the user-required file index is calculated at the time of the user's requirement, as defined in Section <ref>. In the proposed scheme, the user sends dummy queries at other (future) time instances as well. As the databases are unaware of being deceived, and are unable to distinguish between the times corresponding to real and dummy queries, they make predictions on the user-required file indices every time a query is received. Note that whenever a query of the form W_k^ℓ is received, the databases prediction is going to be θ̂=k from Table <ref>. Although this is an incorrect prediction with high probability at times corresponding to user's real requirements, these predictions are correct when W_k^ℓ is used as a dummy query, as W_k^ℓ is only sent as a dummy query when the user requires to download file k. However, the databases are only able to obtain these correct predictions at future time instances, after which the user has already downloaded the required file while also deceiving the databases. The reason for the requirement of the time dimension is also explained as follows. An alternative approach to using the time dimension is to select a subset of databases to send the dummy queries and to send the real queries to rest of the databases. As explained above, whenever a database receives a query of the form W_k^ℓ as a dummy query, the database predicts the user-required file correctly. Therefore, this approach leaks information about the required file to a subset of databases, right at the time of the retrieval, while deceiving the rest. Hence, to deceive all databases at the time of retrieval, we exploit the time dimension that is naturally present in information retrieval applications that are time-sensitive. A potential future direction of this work is an analysis on the time dimension. Note that in this work we assume that the databases do not keep track of the previous queries and only store the information corresponding to the current time instance. Therefore, as long as the dummy queries are sent at distinct time instances that are also different from the time of the user's requirement, the calculations presented in this paper are valid. An extension of basic DIR can be formulated by assuming that the databases keep track of all queries received and their time stamps. This imposes additional constraints on the problem as the databases now have extra information along the time dimension, which requires the scheme to choose the time instances at which the dummy queries are sent, in such a way that they do not leak any information about the existence of the two types (real and dummy) queries. 
Another direction is to incorporate the freshness and age of information into DIR, where the user may trade the age of the required file for a reduced download cost, by making use of the previous dummy downloads present in DIR. § PROOF OF LEMMA <REF> The solution to the optimization problem in (<ref>) for the general case with N databases and K files is as follows. The optimization problem in (<ref>), for a required amount of deception d and the corresponding ϵ with α=N+(N^K-N)e^ϵ/(N-1)e^2ϵ+(N^K-N)e^ϵ+1 is given by, min 𝔼[M]=∑_m=0^∞ mp_m s.t. 𝔼[1/m+1]=∑_m=0^∞(1/m+1)p_m = α ∑_m=0^∞ p_m=1 p_m ≥ 0, m∈{0,1,…}. We need to determine the optimum PMF of M that minimizes 𝔼[M] while satisfying the given condition. The Lagrangian L of this optimization problem is given by, L=∑_m=0^∞ mp_m+λ_1(∑_m=0^∞(1/m+1)p_m-α)+λ_2(∑_m=0^∞ p_m-1)-∑_m=0^∞μ_mp_m. Then, the following set of equations need to be solved to find the minimum 𝔼[M], ∂ L/∂ p_m=m+λ_1(1/m+1)+λ_2-μ_m =0, m∈{0,1,…} ∑_m=0^∞(1/m+1)p_m =α ∑_m=0^∞ p_m =1 μ_mp_m =0, m∈{0,1,…} μ_m,p_m ≥ 0, m∈{0,1,…}. Case 1: Assume that the PMF of M contains at most two non-zero probabilities, i.e., p_0,p_1≥0 and p_i=0, i∈{2,3,…}. Then, the conditions in (<ref>)-(<ref>) are simplified as, ∂ L/∂ p_0=λ_1+λ_2-μ_0 =0 ∂ L/∂ p_1=1/2λ_1+λ_2-μ_1 =-1 p_0+1/2p_1 =α p_0+p_1 =1 μ_0p_0 =0 μ_1p_1 =0 μ_0,μ_1,p_0,p_1 ≥ 0. From (<ref>) and (<ref>) we obtain, p_0+1/2(1-p_0) = α and thus, p_0= 2α-1, p_1=2-2α, which along with (<ref>) implies that this solution is only valid for 1/2≤α≤ 1. The corresponding optimum value of 𝔼[M] is given by, 𝔼[M]=1-p_0=2-2α, 1/2≤α≤1. Case 2: Now consider the case where at most three probabilities of the PMF of M are allowed to be non-zero. i.e., p_0,p_1,p_2≥0 and p_i=0, i∈{3,4,…}. The set of conditions in (<ref>)-(<ref>) for this case is, ∂ L/∂ p_m=m+λ_1(1/m+1)+λ_2-μ_m =0, m∈{0,1,2} ∑_m=0^2(1/m+1)p_m =α ∑_m=0^2 p_m =1 μ_mp_m =0, m∈{0,1,2} μ_m,p_m ≥ 0, m∈{0,1,2}. The set of conditions in (<ref>)-(<ref>) can be written in a matrix form as, [ 1 1 -1 0 0 0 0 0; 1/2 1 0 -1 0 0 0 0; 1/3 1 0 0 -1 0 0 0; 0 0 0 0 0 1 1/2 1/3; 0 0 0 0 0 1 1 1; ][ λ_1; λ_2; μ_0; μ_1; μ_2; p_0; p_1; p_2 ] = [ 0; -1; -2; α; 1 ]. Three of the above eight variables, i.e., either μ_i or p_i for each i, are always zero according to (<ref>). We consider all choices of {μ_i,p_i} pairs such that one element of the pair is equal to zero, and the other one is a positive variable, and solve the system for the non-zero variables. Then we calculate the resulting 𝔼[M], along with the corresponding regions of u for which the solutions are applicable. For each region of u, we find the solution to (<ref>) that results in the minimum 𝔼[M]. Based on this process, the optimum values of p_i, i∈{0,1,2}, the corresponding ranges of u and the minimum values of 𝔼[M] are given in Table <ref>. As an example, consider the calculations corresponding to the case where μ_0>0, μ_1=μ_2=0 which implies p_0=0, p_1,p_2>0. Note that for this case, (<ref>) simplifies to, [ 1 1 -1 0 0; 1/2 1 0 0 0; 1/3 1 0 0 0; 0 0 0 1/2 1/3; 0 0 0 1 1; ][ λ_1; λ_2; μ_0; p_1; p_2 ] = [ 0; -1; -2; α; 1 ]. The values of p_1 and p_2, from the solution of the above system, and the corresponding range of α, from (<ref>), along with the resulting 𝔼[M] are given by, p_1=6α-2, p_2=3-6α, 1/3≤α≤1/2, 𝔼[M]=4-6α. Case 3: At most four non-zero elements of the PMF of M are considered in this case, i.e., p_0,p_1,p_2,p_3≥0 and p_i=0, i∈{4,5,…}. 
The conditions in (<ref>)-(<ref>) can be written in a matrix form as, [ 1 1 -1 0 0 0 0 0 0 0; 1/2 1 0 -1 0 0 0 0 0 0; 1/3 1 0 0 -1 0 0 0 0 0; 1/4 1 0 0 0 -1 0 0 0 0; 0 0 0 0 0 0 1 1/2 1/3 1/4; 0 0 0 0 0 0 1 1 1 1; ][ λ_1; λ_2; μ_0; μ_1; μ_2; μ_3; p_0; p_1; p_2; p_3 ] = [ 0; -1; -2; -3; α; 1 ]. Using the same method described in Case 2, the optimum values of p_i, i∈{0,1,2,3}, corresponding ranges of α and the resulting minimum 𝔼[M] for Case 3 are given in Table <ref>. Case 4: At most five non-zero elements of the PMF of M are considered in this case, i.e., p_0,p_1,p_2,p_3,p_4≥0 and p_i=0, i∈{5,6,…}. The conditions in (<ref>)-(<ref>) can be written in a matrix form as, [ 1 1 -1 0 0 0 0 0 … 0; 1/2 1 0 -1 0 0 0 0 … 0; 1/3 1 0 0 -1 0 0 0 … 0; 1/4 1 0 0 0 -1 0 0 … 0; 1/5 1 0 0 0 0 -1 0 … 0; 0 … 0 0 0 1 1/2 1/3 1/4 1/5; 0 … 0 0 0 1 1 1 1 1; ][ λ_1; λ_2; μ_0; μ_1; μ_2; μ_3; μ_4; p_0; p_1; p_2; p_3; p_4 ] = [ 0; -1; -2; -3; -4; α; 1 ]. Using the same method as before, the optimum values of p_i, i∈{0,1,2,3,4}, the corresponding ranges of α and the resulting minimum 𝔼[M] for Case 4 are given in Table <ref>. Note that the PMF of M and the resulting 𝔼[M] are the same for a given α in all cases (see Tables <ref>-<ref>) irrespective of the support of the PMF of M considered. Therefore, we observe from the above cases that, for a given α in the range 1/ℓ+1≤α≤1/ℓ, 𝔼[M] is minimized when the PMF of M is such that, p_ℓ,p_ℓ-1>0, and p_i=0 for i∈ℤ^+∖{ℓ,ℓ-1}, which requires p_ℓ and p_ℓ-1 to satisfy, p_ℓ+p_ℓ-1 =1 𝔼[1/M+1]=p_ℓ1/ℓ+1+p_ℓ-11/ℓ =α. Therefore, for a given α in the range 1/ℓ+1≤α≤1/ℓ, the optimum PMF of M and the resulting minimum 𝔼[M] are given by, p_ℓ=(ℓ+1)(1-ℓα), p_ℓ-1=ℓ((ℓ+1)α-1), 𝔼[M]=2ℓ-αℓ(ℓ+1). unsrt
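The closed-form solution above is easy to cross-check numerically: minimising 𝔼[M] subject to 𝔼[1/(M+1)] = α over a truncated support is a small linear program. The sketch below (ours, using SciPy's LP solver) confirms that the LP optimum agrees with 2ℓ - αℓ(ℓ+1) for ℓ = ⌊1/α⌋.

```python
import math
import numpy as np
from scipy.optimize import linprog

def min_expected_dummies(alpha, m_max=200):
    """Minimise E[M] over PMFs on {0, ..., m_max} subject to E[1/(M+1)] = alpha."""
    m = np.arange(m_max + 1)
    c = m.astype(float)                                      # objective: E[M]
    A_eq = np.vstack([1.0 / (m + 1), np.ones(m_max + 1)])    # constraints: E[1/(M+1)], total mass
    b_eq = np.array([alpha, 1.0])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

alpha = 0.37                                      # any alpha in (0, 1]
ell = math.floor(1 / alpha)
closed_form = 2 * ell - alpha * ell * (ell + 1)   # E[M] from the Lemma
assert abs(min_expected_dummies(alpha) - closed_form) < 1e-6
```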
http://arxiv.org/abs/2307.05334v1
20230711152545
Exploring Model Misspecification in Statistical Finite Elements via Shallow Water Equations
[ "Connor Duffin", "Paul Branson", "Matt Rayson", "Mark Girolami", "Edward Cripps", "Thomas Stemler" ]
physics.data-an
[ "physics.data-an", "stat.AP", "stat.CO" ]
The abundance of observed data in recent years has increased the number of statistical augmentations to complex models across science and engineering. By augmentation we mean coherent statistical methods that incorporate measurements upon arrival and adjust the model accordingly. However, in this research area methodological developments tend to be central, with important assessments of model fidelity often taking second place. Recently, the statistical finite element method (statFEM) has been posited as a potential solution to the problem of model misspecification when the data are believed to be generated from an underlying partial differential equation system. Bayesian nonlinear filtering permits data-driven finite element discretised solutions that are updated to give a posterior distribution which quantifies the uncertainty over model solutions. The statFEM has shown great promise in systems subject to mild misspecification, but its ability to handle scenarios of severe model misspecification has not yet been demonstrated. In this paper we fill this gap, studying statFEM in the context of shallow water equations chosen for their oceanographic relevance. By deliberately misspecifying the governing equations, via linearisation, viscosity, and bathymetry, we systematically analyse misspecification through studying how the resultant approximate posterior distribution is affected, under additional regimes of decreasing spatiotemporal observational frequency. Results show that statFEM performs well with reasonable accuracy, as measured by theoretically sound proper scoring rules. Keywords: data assimilation, Bayesian filtering, finite element methods, uncertainty quantification, model misspecification. § INTRODUCTION In a crude sense every physical model is misspecified <cit.>. Approximations and intentional omission of processes are necessary in order to build tractable mathematical representations of reality; however, this leads to model discrepancies when comparisons to observations are drawn <cit.>. Thus the phenomenon of model misspecification, whereby the data show inconsistencies with the model employed, is ubiquitous throughout engineering and the physical sciences. Bayesian statistical approaches, where implementable, provide an optimal solution to rectify this mismatch with data <cit.>. In such an approach, the posterior probability distribution over any unknown quantities-of-interest is estimated. When the quantity-of-interest is the model state, this estimation is typically the data assimilation problem with relevant posterior distributions being the filtering or smoothing distributions <cit.>. In such an approach, model uncertainties are typically assumed to be extrusive to the physical model. Solving the inverse problem allows for a similar estimation; however, uncertainty in such models is taken inside the physical model, such as model parameters, initial conditions, or boundary conditions. See <cit.> for a summary in the infinite-dimensional setting. The combination of intrusive model parameter estimation and extrusive additive model error was formalised as Bayesian calibration in the seminal work of <cit.> (for a review of recent works see also <cit.>).
This additive error was modelled via a *GP, a common and flexible tool which allows for uncertainty over functions to be modelled in an interpretable fashion <cit.>. Adjacent to these works is the recently proposed *statFEM <cit.>; a statistically coherent Bayesian procedure which updates finite element discretised *PDE solution fields with observed data. Different to previous works, model errors are intrusive, with *GP priors placed on model components which are potentially unknown, for example external forcing processes or diffusivity. This uncertainty is then leveraged to update *PDE solutions in an online fashion, to compute an approximate Gaussian posterior measure using classical nonlinear filtering algorithms <cit.>. Previous work <cit.> has focused on applying the *EnKF or *ExKF, demonstrating the methodology on canonical systems. Results show that this approach can correct for model mismatch with sparse observations, allowing the reconstruction of these phenomena using an interpretable and statistically coherent physical-statistical model. An interpretation of the methodology is that it provides a physics-based interpolator which can be applied to models where assumptions of stationarity may not necessarily hold. This enables the application of simpler physical models, correcting for their behaviour with observed data. However, as yet there has been no systematic analysis of *statFEM under varying degrees of model misspecification. Work so far has been limited to situations where the posited dynamics well-approximates the data generating process, with either model parameters or initial conditions having minor perturbations from the truth. In this work we fill this gap through studying *statFEM in regimes of increasing model misspecification. Using the *SWE as the example system, we deliberately misspecify model parameters from the known values which are used to generate the data (in this case, the model viscosity and bathymetry) to see how the method performs in these various regimes. We detail a suite of simulation studies to analyse how this mismatch can be corrected for. We also study how linearising the governing equations and reducing the observation frequency affects inference. Our results show that
* increasing the observational frequency, in both space and time, results in reduced model error, with notable improvements as more spatial locations are observed;
* misspecifying bathymetry tends to result in less model error than misspecifying viscosity;
* linearising the model may ameliorate some degree of model error if parameters are poorly specified.
We acknowledge that the *SWE may not include the more highly nonlinear behaviour that one would expect to see in real-life settings. However, as we investigate joint parameter and linearisation misspecification, a desideratum was that the linear dynamics should approximate the true dynamics. From a statistical perspective, we are interested in how robust *statFEM is to model misspecification. As such our study follows the statistical description where our parameters _DGP, which generate the data , are not the same as those used to compute the posterior over the model state , p(, ). Our likelihood is thus misspecified as p() ≠ p(_DGP) <cit.>. We study misspecification in this setting as, in reality, our models will be misspecified and inference will never be performed in the so-called “perfect model scenario” <cit.>. Parameters and topography are in reality never known and approximations will need to be made.
Furthermore, linear approximations are often employed <cit.> and their use with *statFEM is desirable as the resultant posterior distributions can be computed exactly (using the Kalman filter) without the need for linearising the prediction step. Our results are thus relevant for many contexts in which linear approximate models are employed. Synthetic data provides the appropriate setting as we can control the severity of misspecification, without the obfuscation from additional model approximations involved when modelling experimental or in situ measurements. Assimilation of data into 1D shallow water equations has so far focussed on bathymetry inversion <cit.>, analysis of error covariance parameterisations <cit.>, and the convergence of schemes with sparse surface height observations <cit.>. Previous work on *statFEM <cit.> has demonstrated that under cases of mild misspecification, solutions to nonlinear and time-dependent *PDE can be corrected for with data, to give an interpretable posterior distribution. In these previous works, misspecification was due to either deliberately incorrect parameters, initial conditions, or missing physics. However these studies were necessarily focussed on methodological developments, and did not include comprehensive analyses of *statFEM model misspecification. This systematic analysis is the focus of this paper. Using the *SWE as the example system, we demonstrate our results using experimental designs similar to those of the *SWE data assimilation works detailed above <cit.>. We study how misspecification affects the filtering posterior distribution across a variety of parameter values and observation patterns, and also provide comparisons between linear approximations and fully nonlinear models. Different to the previous *statFEM works, we study the performance as the degree of model misspecification is varied from mild to severe; model performance is assessed through the log-likelihood and the root mean square error scoring rules <cit.>. The paper is structured as follows. In Section <ref> we give an overview of the *SWE model and the *statFEM methodology we employ to condition on data. This includes the numerical scheme employed and the chosen *GP priors over unknown model components. In Section <ref> we outline the general procedure of the experiments. We detail how the data are generated, how much noise is added, what the *GP hyperparameters are set to, and which viscosity and bathymetry parameters the linear and nonlinear models are run with. In Section <ref> we detail the results across four subsections. In Section <ref> we look at four posterior distributions, computed for cases of mild misspecification and different spatiotemporal observation frequencies, to provide some intuition for how the models are performing. In Section <ref> we look at how varying spatiotemporal observation frequency affects the estimated posterior distribution. Similar analyses of physical parameter misspecification and linearisation are included in Sections <ref> and <ref>, respectively. The results are discussed and the paper is concluded in Section <ref>. For quick reference the paper structure is given in Table <ref>. Additionally, we include an online repository containing all code used to generate the results in this paper; see . Section Contents <ref> Physical model, *GP priors, discretisation, algorithms. <ref> Data generation, noise level, hyperparameters, prior distribution. <ref> Posterior distribution: introductory examples, RMSE.
<ref> Posterior distribution: analysis of spatiotemporal observation frequency. <ref> Posterior distribution: bathymetry and viscosity misspecification. <ref> Posterior distribution: linear model results with bathymetry and viscosity misspecification. <ref> Discussion and conclusion. Code Quick-reference paper structure. § PHYSICAL-STATISTICAL MODEL For our example system we use the one-dimensional *SWE. The *SWE are derived from the two-dimensional incompressible Navier-Stokes equations through integrating over the vertical direction <cit.>. In this work we also assume that the single-layer flow is irrotational. What results is a coupled *PDE system consisting of state variables (u, η) ∈^2, with u := u(x, t), the velocity field, and η := η(x, t), the surface height, for spatial variable x and time variable t. Our model is that of an idealised, tidally forced flow into an inlet with a spatial domain of length 10 . The model employed is thus: u_t + u u_x - ν u_xx + g η_x = 0, x ∈ [0, 10000], η_t + ((H + η) u )_x = 0, x ∈ [0, 10000], u(10000, t) = 0, η(0, t) = τ(t). The tidal forcing is τ(t) := 2 (1 + cos( 4 π t/86400) ). The mean surface height, H(x) implies the topography b(x) of the solution domain. In our setting therefore we set H(x) = H̅ - b(x), with H̅ = 30. The topography b(x) is a gradual sloping shore with a horizontal displacement parameter s: b(x) = 5 (1 + tanh( x - s/2000) ). The fluid starts at rest, u(x, 0) ≡ 0, η(x, 0) = 0, and the model is run up to time t = 12 . An illustration of these functions is shown in Figure <ref>. We also consider the linearised version of (<ref>), which ignores second-order terms and assumes that η≪ H. This gives u_t - ν u_xx + g η_x= 0, x ∈ [0, 10000], η_t + (H u )_x = 0, x ∈ [0, 10000], u(10000, t) = 0, η(0, t) = τ(t). Initial conditions, bathymetry, and tidal forcing are the same as those for the fully nonlinear system. To reconcile the model with observed data we begin by introducing uncertainty into the governing equations (i.e., (<ref>) or (<ref>)) through additive *GP forcing, following *statFEM. This derives a prior distribution which forms the reference measure for posterior inference. For the nonlinear case this is u_t + u u_x - ν u_xx + g η_x = ξ_u, x ∈ [0, 10000], η_t + ((H + η) u )_x = ξ_η, x ∈ [0, 10000], u(10000, t) = 0, η(0, t) = τ(t). The linear case follows similarly and is detailed in Appendix <ref>. The a priori uncorrelated *GP forcing terms ξ_u and ξ_η are given by [ ξ_u; ξ_η ]∼( [ 0; 0 ], δ(t - t') [ k_u(·, ·) 0; 0 k_η(·, ·) ]). The kernels k_u(·, ·) and k_η(·, ·) have hyperparameters Θ which in this work are fixed and known. Estimation methods are available <cit.>, but we choose to fix parameters for consistency across comparisons, as, in this work, we are interested only in the posterior *statFEM filtering inference — not in joint filtering and hyperparameter inference. We use the squared-exponential kernel, given by k(, ') = ρ^2 exp(-‖ - ' ‖^2 / (2 ℓ^2)), which we notationally subscript to represent the individual component kernels k_u(·, ·) and k_η(·, ·), with hyperparameters Θ = {Θ_u, Θ_η} = {ρ_u, ℓ_u, ρ_η, ℓ_η}. Discretisation of this system now proceeds via the *FEM to give a finite-dimensional approximation to the prior. To do so we use the discretisation of <cit.>. We use a uniform mesh _h ⊆ with vertices {x_j}_j = 1^n_v; the subinterval length is h. 
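For concreteness, the model ingredients above translate directly into code. The following sketch (ours, independent of the paper's linked repository) implements the bathymetry b(x), the mean depth H(x), the tidal forcing τ(t) and the squared-exponential kernel; units are read as metres and seconds, which the equations leave implicit, and the kernel hyperparameters shown are the values fixed later in the experiments.

```python
import numpy as np

H_BAR = 30.0                                    # mean surface height H-bar

def bathymetry(x, s=2000.0):
    """Gradually sloping shore b(x) with horizontal displacement s."""
    return 5.0 * (1.0 + np.tanh((x - s) / 2000.0))

def mean_depth(x, s=2000.0):
    """H(x) = H-bar - b(x)."""
    return H_BAR - bathymetry(x, s)

def tidal_forcing(t):
    """Boundary forcing eta(0, t) = tau(t)."""
    return 2.0 * (1.0 + np.cos(4.0 * np.pi * t / 86400.0))

def sq_exp_kernel(x, xp, rho, ell):
    """Squared-exponential covariance used for the GP forcing terms."""
    return rho**2 * np.exp(-((x[:, None] - xp[None, :]) ** 2) / (2.0 * ell**2))

x = np.linspace(0.0, 10000.0, 501)
K_eta = sq_exp_kernel(x, x, rho=2e-3, ell=1000.0)   # hyperparameters used later in the experiments
```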
We use the P2-P1 element pair to discretise the state, giving the basis function expansions of u(x, t) ≈ u_h(x, t) = ∑_i = 1^n_u u_i(t) ϕ_i(x), η(x, t) ≈η_h(x, t) = ∑_i = 1^n_ηη_i(t) ψ_i(x). The span of the basis functions {ϕ_i}_i = 1^n_u and {ψ_i}_i = 1^n_η defines the *FEM trial and test spaces for the velocity and surface height perturbations, respectively. The weak form of (<ref>) is given by multiplying by testing functions (v_u, v_η) and integrating over the spatial domain ⟨ u_t, v_u ⟩ + ⟨ u u_x ⟩ + ν⟨ u_x, v_u, x⟩ + g ⟨η_x, v_u ⟩ = ⟨ξ_u, v_u ⟩, ⟨η_t, v_η⟩ + ⟨((H + η) u )_x, v_η⟩ = ⟨ξ_η, v_η⟩, where ⟨ f, g ⟩ = ∫_ f g x. Substituting the finite-dimensional *FEM approximations to both the trial and test functions gives differential equations over the *FEM coefficients = (u_1, …, u_n_u), η = (η_1, …, η_n_η): _u ∂/∂ t + _u() + ν + g η = ξ_u, _η∂η/∂ t + _η(, η) = ξ_η, where _u, ji = ⟨ϕ_i, ϕ_j ⟩, _ji = ⟨ϕ_i, x, ϕ_j, x⟩, _ji = ⟨ψ_i, x, ϕ_j⟩, and _u(·), _η(·, ·) are functions which result from discretising the nonlinear operators. The *FEM discretised *GP forcing terms, ξ_u and ξ_η, are given by the approximation ξ∼(, ^⊤), where _ij = k(x_i, x_j), for nodal x_i, x_j <cit.> (note omitted subscripts are for readability). This approximation uses different mass matrices across components due to the components u and η using different basis functions for the *FEM approximation. Thus we have, jointly, (ξ_u, ξ_η) ∼(, δ(t - t') ), where has the block-diagonal structure: = [ _u _u _u^⊤ ; _η_η_η^⊤ ]. A low-rank approximation is required, in order to run our filtering methodology <cit.>. To get a low-rank approximation of this covariance matrix we make use of the block structure. Using the factorisation = ^1/2^⊤/2, we approximate ^1/2≈[ _u _u^1/2 ; _η_η^1/2 ], where _u^1/2∈^n_u × q_u and _η^1/2∈^n_η× q_η, for q_u ≪ n_u, q_η≪ n_η. The block-structured low-rank approximation gives (ξ_u, ξ_η)^⊤∼(, δ(t - t') ^1/2^⊤/2), where ^1/2∈^(n_u + n_η) × (q_u + q_η). Approximations can be computed through e.g., GPU computing <cit.> or Nyström approximation <cit.>, but in this work we use the Hilbert-*GP approach of <cit.>. This enforces that the additive *GP should be zero on the boundaries. It was found empirically that our *GP approximations needed to respect zero boundary conditions or otherwise the posterior covariance would be overly uncertain on the edges of the domain, leading to poor numerical approximation of the covariance. To discretise the dynamics in time we use the θ-method <cit.> for stability. Letting ^n := (n ) and η^n := η(n ), the time-discretised stochastic dynamics are _u ^n - ^n - 1/ + _u(^n - θ) + ν^n - θ + g η^n - θ = 1/√()ξ_u^n - 1, _ηη^n - η^n - 1/ + _η(, η^n - θ) = 1/√()ξ_η^n - 1, where ^n - θ := θ^n + (1 - θ) ^n - 1 (similarly for η), for θ∈ [0, 1]. The initial conditions (^0, η^0) are known so we begin by solving for (^1, η^1). Running the scheme gives the entire set of states {(^n, η^n)}_n = 0^N so that N = T. Observations may also be arriving at particular timepoints, and we want to condition on these observations to get an estimate of the state. We assume that the time between observations is k, for some integer k ≥ 1, giving the observations as _m := (m k ), and the total set of observations {_m}_m = 1^M — the initial state (^n, η^n) is always assumed known. This ensures {m k }_m = 1^M ⊂{n }_n = 0^N and thus M ≤ N. 
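The rank-q factor Σ^{1/2} can be produced in several ways; as a simple stand-in for the Hilbert-GP expansion used here, the sketch below (ours) factorises a squared-exponential Gram matrix by truncated eigendecomposition. It covers only the kernel factor, not the mass-matrix weighting or the zero-boundary treatment discussed above.

```python
import numpy as np

def lowrank_factor(K, q):
    """Rank-q factor L with L @ L.T approximately equal to K (truncated eigendecomposition)."""
    w, V = np.linalg.eigh(K)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:q]            # keep the q largest
    w_q = np.clip(w[idx], 0.0, None)         # guard against tiny negative eigenvalues
    return V[:, idx] * np.sqrt(w_q)

x = np.linspace(0.0, 10000.0, 501)
K_eta = 2e-3**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * 1000.0**2))
L_eta = lowrank_factor(K_eta, q=32)
print(np.linalg.norm(K_eta - L_eta @ L_eta.T) / np.linalg.norm(K_eta))   # relative error
```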
The joint dynamics and observation model is to take k model steps, thus for n = (m - 1) k, …, mk we predict using the model: _u ^n - ^n - 1/ + _u(^n - θ) + ν^n - θ + g η^n - θ = 1/√()ξ_u^n - 1, _ηη^n - η^n - 1/ + _η(, η^n - θ) = 1/√()ξ_η^n - 1. We abbreviate this by writing the l.h.s. of (<ref>) as : ^(n_u + n_η) × 2→^n_u + n_η, with Jacobian matrix _n. At the observation timepoint mk we condition on the data _m = (_m, η_m) + _m, where (_m, η_m) = (^mk, η^mk) and _m ∼(, ). The linear observation operator : ^n_u + n_η→^n_y is known . In this work, it is given by the *FEM polynomial interpolants. To condition on the observations we use the *LR-ExKF — a recursive two-step scheme consisting of prediction and update steps. At timesteps which are not observed only the model prediction steps are completed. This computes the approximation p(_m, η_m _1:m, Θ, ν, c) ∼(μ_m, _m _m^⊤), a multivariate Gaussian over the concatenation of (_m, η_m), an n_u + n_η dimensional object. For a rank-q approximation to the covariance matrix we thus have _m ∈^(n_u + n_η) × q. For a single prediction-update cycle, the algorithm is shown in Algorithm <ref>. [t] Prediction-update cycle of the *LR-ExKF algorithm (rank q). § EXPERIMENTAL SETUP To generate the synthetic dataset, {_m }_m = 1^M, we use the *FEM discretisation of the fully nonlinear *SWE (of Equation (<ref>)) as detailed above for the *statFEM model. That is, we use the same P2-P1 basis function pairs to give the *FEM discretised approximations (u_h^DGP(x, t), η_h^DGP(x, t)). These are computed using a uniform mesh with n_v = 500 elements (h = 20 ), and timesteps of size = 1 . We set θ = 0.6. Observations are given by _m = η_m^obs + _m, η_m^obs := ( η_h^DGP(x_1^obs, mk ), …, η_h^DGP(x_n_y^obs, mk ) )^⊤, where the i.i.d. noise is _m ∼(, σ^2 ), with σ = 5 × 10^-2. This data is generated with ν = 1, and the shore position is s = 2000. We take n_y observations per observed time point, the locations of which are uniformly spaced between in the interval [1000, 2000] . Note that when n_y = 1 this corresponds to observing at x_obs = 1000 . Our experiments compare results with results under different model configurations to those used to generate the data. The posterior distribution is given by p(_m, η_m _1:m, Θ, σ, ν, c), which we compute an approximation to using the *LR-ExKF. The posterior (u_h, η_h) is computed using the same numerical settings as for the data. The observation operator is defined via (_m, η_m) := (η_h(x_1^obs, mk ), …, η_h(x_n_y^obs, mk ) )^⊤, and we assume that the noise level σ is known, simulating the scenario of known measurement device error. We compute the posterior distribution across a Cartesian product of the different input parameters, with k ∈{1, 30, 60, 120, 180}, n_y ∈{1, 2, 5}, s ∈{2000, 3500, 5000, 6500, 8000}, and ν∈{5, 500, 1000, 10000, 50000}. Across the nonlinear and linear models this gives 750 different configurations. Note that we do not estimate the *statFEM posterior for ν = 1 due to numerical instabilities when computing the posterior covariance, however results were similar to that with ν = 5, which is reported here. For the stochastic *GP forcing, we set ℓ_u = ℓ_η = 1000, ρ_u = 0, and ρ_η = 2 × 10^-3. The magnitudes of ρ_u and ρ_η are chosen to balance between accurate UQ when estimating a well-specified model, and adequate uncertainty when estimating a poorly specified model. 
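A small sketch (ours) of the experimental bookkeeping just described: it enumerates the Cartesian product of model configurations, confirming the quoted total of 750, and generates noisy surface-height observations. The observation locations are read as uniformly spaced in [1000, 2000] m, so that n_y = 1 corresponds to x_obs = 1000 m as stated; the data-generating solution η is passed in as a callable.

```python
import itertools
import numpy as np

ks  = [1, 30, 60, 120, 180]            # observation period (time steps)
nys = [1, 2, 5]                        # number of observed locations
ss  = [2000, 3500, 5000, 6500, 8000]   # shore position
nus = [5, 500, 1000, 10000, 50000]     # viscosity
configs = list(itertools.product(["nonlinear", "linear"], ks, nys, ss, nus))
assert len(configs) == 750             # matches the count quoted above

def synthetic_observations(eta_dgp, times, n_y, sigma=5e-2, seed=0):
    """Noisy observations y_m of the surface height at n_y locations, noise ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    x_obs = np.linspace(1000.0, 2000.0, n_y)    # x_obs = [1000] when n_y = 1
    return [np.array([eta_dgp(x, t) for x in x_obs]) + sigma * rng.standard_normal(n_y)
            for t in times]
```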
To get a feel for model performance — and hence the severity of model misspecification — we estimate the prior distribution for the nonlinear model, p(_m, η_m ν, s, Θ) ∼(μ_m, _m _m^⊤), across the grid of s and ν values. This is done through running the filter with the prediction steps only, for all timesteps. To compare with the data we compute the *RMSE. The *RMSE is _m = ‖_m - μ_m ‖_2/√(n_y), where ‖·‖_2 is the Euclidean l^2 norm. Results are shown in Figure <ref>. For ν = 5 there is a clear stratification between the well-specified s = 2000 model and the others which are misspecified. The errors in these models appear to lack the consistent periodicity that models with larger ν see. In these cases we see that there is a consistently large error across each model with no synchronicity across the systems. The stratification between these models becomes less apparent as ν increases up to 5× 10^4, a result of the dissipative effects dominating the dynamics. This leads to models with different s performing similarly as the wave profiles dissipate the energy input from the tidal forcing. There emerges a periodicity across the solutions as ν increases, thought to be due to the tidal forcing. We see that there is a sharp increase in early times, then a similar increase approximately in the middle of the time domain. This increase is thought to be due to the cycle of the forcing starting to “swing down” into the lower cycle of the tidal forcing. We note also that there are similar timescales in the error dynamics and no models appear to dissipate to equilibrium — again due to the oscillatory forcing. § RESULTS In this section we analyse the posterior results. First, we conduct a preliminary analysis of the model posteriors, to give intuition on how our chosen metrics relate to the posterior distribution. Next, we analyse how the observation frequencies k and n_y effect the posterior distribution in the face of misspecification. We then study the case of joint viscosity-bathymetry misspecification, and then conclude with the analysis of the linearised model, also under joint viscosity-bathymetry misspecification. §.§ Preliminary analysis of posterior distributions To get a feel for the posterior results we now describe the results for four models. Each have observations arriving every k = 30 timesteps (every 30 ). We run the nonlinear model with s ∈{2000, 3500} and ν∈{5, 10^4}. At time t = 11.67 we have plotted the posterior means and variances in Figure <ref>. The well specified model captures the more complex dynamical behaviour well with a notable improvement in the estimation of the velocity fields, in comparison to the other models. The more damped models, with ν = 10^4, appear unsurprisingly to underestimate the data at this observation point. Due to the right-shifted bathymetry an increase in velocity is seen to the right of the observation location when s = 3500, ν = 5. The velocity fields are all underestimated, with a notably poor-performing case with s = 3500 and ν = 10^4. In this case the data (observed only on the surface height perturbation) can only correct for so much, and the dynamics must also be appropriately specified in order for the model to be accurate. We also see that the uncertainty on η has given rise to uncertainty in u following intuition; it is seen that the unobserved velocity components have increased uncertainty. As introduced above, to quantitatively compare performance we use the *RMSE. 
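For completeness, the RMSE used throughout the results is a one-liner; the helper below (ours) also returns the empirical mean and standard deviation across observation times, as reported later in the tables. The second argument is the filter mean mapped through the observation operator.

```python
import numpy as np

def rmse(y_m, H_mu_m):
    """RMSE_m = ||y_m - H mu_m||_2 / sqrt(n_y)."""
    y_m, H_mu_m = np.asarray(y_m), np.asarray(H_mu_m)
    return np.linalg.norm(y_m - H_mu_m) / np.sqrt(len(y_m))

def rmse_summary(ys, H_mus):
    """Empirical mean and standard deviation of RMSE_m across observation times."""
    vals = np.array([rmse(y, mu) for y, mu in zip(ys, H_mus)])
    return vals.mean(), vals.std()
```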
For the models introduced above the average values of these, across all time, are shown within the second row in Figure <ref>. Across the variations in *RMSE there is a qualitative stratification which is especially apparent on the unobserved velocity components. The *RMSE are plotted across time in Figure <ref>; similar stratification is seen to that in Figure <ref>. Variation is seen across the models as data is conditioned on; this is most clearly observed with the poorly performing high-viscosity models. The low-viscosity model with ν = 5 performs well. The mildly-misspecified {ν = 5, s = 3500} performs moderately well and improves upon the prior (see Figure <ref>). §.§ Investigating observation frequency In the second simulation study we study the model performance as we vary the observation frequency in space and time, taking n_y ∈{1, 2, 5} and k ∈{1, 30, 60, 120, 180}, whilst also varying the topography and viscosity. First, we look at the case of a well-specified viscosity (ν = 5), with misspecified bathymetry, with s ∈{2000, 3500, 5000, 6500, 8000}. In Figure <ref> we plot the *RMSE values over all the observed timepoints, for each model. Increasing n_y decreases the model error across all models. Similar reductions in the *RMSE are not seen with the increase of k. Whilst there are improvements, especially for all k = 1, and n_y = 1, it is seen otherwise that the observation frequency k does not have the same drastic effect. We next look at the case of a well-specified s = 2000 and a variable viscosity ν∈{5, 500, 10^3, 10^4, 5 × 10^4 }. Results when varying ν are shown in Figure <ref>. For ν≤ 10^3 we see that the models perform relatively well; conditioning on data ensures that the misspecification induced through the viscosity is corrected for. As ν increases we see the regular increase in error, through the middle of simulation time (at t ≈ 6 ). Whilst the *RMSE values vary magnitude-wise, this regular quasi-periodic structure emerges across each of the models. This approximately corresponds to the tidal forcing τ(t) hitting its minimum through the simulation (Figure <ref>). Errors decrease as this forcing begins to increase once again. Increasing the observation density in space again results in a marked improvement in model discrepancy. As previous with k = 1 this results in the most notable improvement in the *RMSE, with mild improvements for k ≥ 30. As ν≥ 10^4, there is no visual distinction between the models with such high viscosities. Error due to the topography appears to result in a greater degree of stratification between each of the models. This is unsurprising as whilst misspecifying the viscosity leads to mismatch, beyond ν = 10^4 the viscous effects dominate the flow, resulting in similar behaviour. For a single instance of “mild misspecification” with s = 3500 and ν = 5, the empirical means and standard deviations (computed across time) of their *RMSE are shown in Table <ref>. As more spatial locations are observed, the frequency of observations in time has less of an effect on the accuracy of the model. When observing n_y = 5 locations, small increases in the *RMSE are seen with less frequent observations in time. These increases are notably larger when taking n_y = 1. An interesting result is that increasing n_y from 1 to 5 results in improved performance, even when observing every 180 . The inclusion of additional spatial measurement locations results in a dramatic improvement in the performance of the model. 
This is thought to be due to the fact that the flow in this case has a long wavelength — incorporating data over a larger spatial domain therefore has a more corrective effect on the model as it is now observed over a set of spatiotemporal locations. §.§ Investigating parametric misspecification Following these results, we now investigate joint parametric misspecification of s ∈{2000, 3500, 5000, 6500, 8000} and ν∈{5, 500, 10^3, 10^4, 5 × 10^4}. We set n_y = 1, and k = 30 (1 spatial location observed every 30 ). In Figure <ref> (top) the *RMSE is shown for the estimated posterior distributions p(_m, η_m _1:m, ν, s, Θ, σ). As previous we see that the models with small ν are more accurate. Additionally, as s is increasingly misspecified there is a stratification of model performance, with, unsurprisingly, the correctly specified s = 2000 quite noticeably out-performing the misspecified models. With larger ν values we see that there is a mild increase in the *RMSE through conditioning on data. Less stratification appears to be present as ν is increased; damping dominates the misspecified bathymetry in terms of mismatch. For additional model comparison, we use the log-likelihood. Due to the structure of this problem we can write this via factorisation log p(_1:Mν, s, Θ, σ) = log p(_1 ν, s, Θ, σ) + ∑_m = 2^M log p(_m_1:m - 1, ν, s, Θ, σ). This can be approximated when running the *LR-ExKF, due to the Gaussian approximation. The individual likelihoods are of the form p(_m_1:m - 1, ν, s, Θ, σ) = = (μ̂_m, (_m) (_m)^⊤ + σ^2 ), where p(_m_1:m - 1, ν, s, Θ, σ) = (μ̂_m, _m _m^⊤). This is a strictly proper scoring rule with respect to Gaussian measure <cit.>. Intuitively, this is an uncertainty-weighted scoring rule that punishes models which are more certain about inaccurate predictions of the data, at each observation time. The log-likelihoods are shown, across time, in Figure <ref>. The models stratify across s more obviously for the well-specified models, with less stratification as ν is increased. All models show a gradual decrease in the log-likelihood values over time; conditioning on data results in more accurate models. Note also that similar to the *RMSE values (see also Figure <ref>) we see that there is the same quasi-periodic behaviour as the tidal forcing begins to approach 0, resulting in decreases in the likelihood. For visual comparison, the average *RMSE values and the log-likelihoods are shown in Figure <ref>. As previous, we see that with ν≥ 10^4 there is a clear increase in the *RMSE marking a qualitative change in the dynamics. Similar model stratification is seen for the *RMSE as is for the log-likelihood; in these examples they perform similarly as model comparison metrics. These log-likelihoods are tabulated in Table <ref>. Models with ν = 5 are preferred across each bathymetry. Following the computations of the log-likelihoods, we can perform model comparison via Bayes factors <cit.>. The Bayes factor is given by the ratio of the probabilities of the data given the different assumed models: log_10 = log p(_1:Mν_1, s_1, Θ, σ) - log p(_1:Mν_0, s_0, Θ, σ). We see that there is strong evidence in favour of the well-specified model in comparison to the others (smallest log_10≈ 10^5). In each case it is clear that increasing the degree of misspecification, by either shifting the topography, or, increasing the misspecification, results in less performant models. Models with smaller ν are preferred over those which have a larger ν. 
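These comparisons rest on quantities that are straightforward to accumulate while filtering. The sketch below (ours) evaluates each Gaussian predictive log-likelihood term, sums the factorisation over observation times, and converts a difference of log-likelihoods into the log10 Bayes factor used above.

```python
import numpy as np

def gaussian_loglik(y, mean, cov):
    """log N(y; mean, cov) for one predictive term p(y_m | y_{1:m-1})."""
    diff = y - mean
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (len(y) * np.log(2.0 * np.pi) + logdet + quad)

def model_loglik(predictives):
    """Sum of log p(y_m | y_{1:m-1}) over observation times; `predictives` yields
    tuples (y_m, H mu_hat_m, H P_hat_m H^T + sigma^2 I) produced by the filter."""
    return sum(gaussian_loglik(y, mean, cov) for y, mean, cov in predictives)

def log10_bayes_factor(loglik_1, loglik_0):
    """log10 BF of model 1 against model 0 from their data log-likelihoods."""
    return (loglik_1 - loglik_0) / np.log(10.0)
```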
Interestingly, there is very strong evidence against the model with {ν = 5 × 10^4, s=2000}, in comparison with that of {ν = 5, s = 8000} (log_10≈ 10^7). We notice that the trend misspecification due to s tends to be less severe than that due to ν (see also Figure <ref>). §.§ Linearisation Finally we investigate joint viscosity-bathymetry misspecification as in the previous subsection, with the addition of model linearisation. As previous, we vary s ∈{2000, 3500, 5000, 6500, 8000} and ν∈{5, 500, 10^3, 10^4, 5 × 10^4}, whilst fixing k = 30 and n_y = 1 to compute the posterior estimates. We plot the *RMSE values across time for these linearised model approximations in Figure <ref>. The *RMSE values, in comparison to those of the nonlinear models, are slightly larger with notable increases in the cases of well-specified bathymetry. This disparity in model performance is further realised in the log-likelihoods (seen in Table <ref>) being larger for the well-specified models in comparison to those of the poorly specified models. We see the unsurprising results that small-ν models perform better than the others. In comparing Tables <ref> and <ref> it is seen that when the severity of model misspecification is larger (approximately s ≥ 5000, ν≥ 10^4) the linear model outperforms the nonlinear model. For s ≥ 5000 we posit this is due to the ignoring of the resultant interactions between the misspecified bathymetry and velocity. When damping is very highly misspecified, even for a well-specified bathymetry the linear model is preferred. Again this is thought to be due to the addition of nonlinearity not really contributing to the dynamics — in this regime the dynamics are dominated by the linear dissipative behaviour, in any case. § DISCUSSION AND CONCLUSION In this work we studied the efficacy of *statFEM as applied to the 1D *SWE, to see how the methodology responds to scenarios of increasing model misspecification. Previous work has necessarily included smaller studies of milder cases of model misspecification; this work provides the first systematic analysis of the approach under gradually increased misspecification severity. Misspecification was induced via linearisation, viscosity, and bottom-topography (bathymetry), in regimes of reduced spatiotemporal observational frequency. The *RMSE and log-likelihood were used for model comparison. The methodology is able to appropriately deal with model misspecification with notably large improvements in model error as the number of observation locations is increased; the method performs well in recovering misspecified dynamics. This is thought to be due to spatial variation being more informative to the model error than increasing the frequency of observations. The changes in observation frequency are small in comparison to the timescale of the flow and thus the differences in the observations, arriving at different times are not large enough to warrant drastic reductions in model error (though there is still a reduction). However, as wavelengths are relatively long the additional information included via spatial variation, through additional observation locations, does indeed result in marked reductions in model error. Note also that our model error term, the *GP ξ, induces spatial correlations over components of model error. Therefore including additional observation locations, which make use of this error structure is again thought to be helpful. 
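As a small illustration of the spatially correlated model error just mentioned, the sketch below assembles a stationary covariance over 1D node coordinates and extracts the kind of truncated low-rank factor a low-rank filter can carry. The squared-exponential kernel, the node coordinates, the lengthscale and the rank of 16 are assumptions made purely for this example; they are not the hyperparameters used in the experiments.

```python
import numpy as np

def sq_exp_cov(x1, x2, variance=1.0, lengthscale=0.5):
    """Squared-exponential covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    diff = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (diff / lengthscale) ** 2)

nodes = np.linspace(0.0, 1.0, 200)          # 1D mesh-node coordinates (illustrative)
K = sq_exp_cov(nodes, nodes)                # spatial covariance of the error forcing

# truncated eigendecomposition gives a low-rank factor K ~ L L^T
eigvals, eigvecs = np.linalg.eigh(K)
keep = np.argsort(eigvals)[::-1][:16]       # retain the 16 leading modes
L = eigvecs[:, keep] * np.sqrt(np.clip(eigvals[keep], 0.0, None))
print(np.linalg.norm(K - L @ L.T) / np.linalg.norm(K))   # relative truncation error
```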
We note that whilst we did not include temporal correlation in ξ, , the additional study and comparison of model error structures being correlated in both space and time is of interest. The effects of misspecification have different qualitative behaviors. When ν is well-specified the bathymetry parameter s results in immediate increases in error which are of similar magnitude. On the other hand, as noted previously (see also Figure <ref>) when ν is misspecified a regular quasi-periodic pattern in the error emerges which results in nearly visually indistinguishable error patterns. We see, also, that due to the domination of the dissipation these periodic-type patterns in the error are also seen in the linear model for lower values of ν. More severe mismatch is seen to result from large dissipation values rather than large shifts of the bottom-topography. Both, however, are reduced through observing more spatial locations. It is worth noting that there is a clear visual decrease in the amount of model error present when taking n_y = 2 instead of n_y = 1 (see Figure <ref>), when the topography is misspecified. When designing observation systems (i.e. measurement/sensor locations) this suggests that taking additional observation locations is valuable when they are of a similar lengthscale to the flow under consideration. In cases of severe misspecification linear approximations aid in slightly reducing the model error, as seen via the log-likelihoods. Whilst the *RMSE and log-likelihood are useful and theoretically sound metrics, the study of appropriate additional metrics (such as, e.g., the Brier score <cit.>) would be a useful tool for practitioners when implementing and diagnosing models. We also note that our results are conditioned on sets of *GP hyperparameters which, whilst chosen to ensure appropriate UQ on a well-specified model, are not optimal with regards to the log-likelihood. Joint investigation of hyperparameter estimation and filtering is of interest and is a possible avenue of further research. Model error in this study appears to arrive in similar timescales no matter which parameter is misspecified. In exploring alternate models before we settled on the model used in this paper, we found that there were intuitive interactions between the timescales of model error and posterior updating. When mismatch occurs in fast timescales more frequent updating is preferred. For slow timescales less frequent updating is required. These results provide additional evidence that the *statFEM approach allows for statistically coherent inference in regimes of potentially severe model misspecification. The admission of spatially correlated and physically sensible uncertainty results in improvements in model accuracy as data is assimilated. The induced uncertainty is sensible and reflects modelling choices: for example, boundary conditions are respected and unobserved components are less certain . From the statistical point-of-view, the inclusion of physical information alongside the *GP enables the use of sparse data. Results suggest that the inclusion of data, irrespective of the amount, only aids in model proficiency when using *statFEM. § FEM DISCRETISATION To justify the chosen discretisation and level of mesh-refinement, the deterministic *FEM convergence results are plotted in Figure <ref>. 
We run a reference model (u_h^ref, η_h^ref) with n_v = 3000 cells, and compute the L^2 errors against this reference model, after running the models with = 1 up to time t = 600 with meshes having n_v ∈{ 500, 600, 750, 1000, 1500 }. Errors shrink with a cubic rate (est. gradient 3.0144). § NOTES ON LINEAR *STATFEM In the linear case, we recall that the *statFEM model definition is u_t + g η_x + ν u_xx = ξ_u, x ∈, η_t + (H u )_x = ξ_η, x ∈, u_x = 0, η = 0, x ∈∂. As previous we model the forcing terms ξ_u and ξ_η by uncorrelated *GP. Making use of the same P2-P1 discretisation as previous gives _u ^n - ^n - 1/ + ν^n - θ + g η^n - θ = 1/√()ξ_u^n - 1, _ηη^n - η^n - 1/ + (H)^n - θ = 1/√()ξ_η^n - 1, where we have recycled the notation for the operators as in the main text. Here we also have _ji(H) = ⟨ H ϕ_i, x + H_x (H) ϕ_i, ψ_j ⟩. The filtering procedure proceeds as previous, where now instead of using a linearised approximation to the prediction step, we compute this exactly (as now the Jacobian of the r.h.s. of (<ref>) does not depend on the state (^n, η^n)). This is because we can write the linear updating rule for the state as _n ^n = _n - 1^n - 1 + √()ξ_u^n - 1, _n η^n = _n - 1η^n - 1 + √()ξ_η^n - 1, where , are defined as appropriately from (<ref>). For computation we employ the same low-rank approximation over the *GP ξ_u and ξ_η. Hyperparameters for these are the same as those used for the nonlinear model. Inference in this scenario now proceeds via a standard low-rank Kalman filter <cit.> instead of the extended Kalman filter employed for the nonlinear models. Data accessibility: All code and data used in this work is publicly available on GitHub . Acknowledgements: The authors would like to thank Bedartha Goswami and Youssef Marzouk for helpful discussions. Funding information: C. Duffin and M. Girolami were supported by EPSRC grant EP/T000414/1. E. Cripps, M. Girolami, M. Rayson and T. Stemler are supported by the ARC ITRH for Transforming energy Infrastructure through Digital Engineering (TIDE, <http://TIDE.edu.au>) which is led by The University of Western Australia, delivered with The University of Wollongong and several other Australian and International research partners, and funded by the Australian Research Council, INPEX Operations Australia, Shell Australia, Woodside Energy, Fugro Australia Marine, Wood Group Kenny Australia, RPS Group, Bureau Veritas and Lloyd's Register Global Technology (Grant No. IH200100009). M. G was supported by a Royal Academy of Engineering Research Chair, and EPSRC grants EP/W005816/1, EP/V056441/1, EP/V056522/1, EP/R018413/2, EP/R034710/1, and EP/R004889/1. E. Cripps was supported by Australian Research Council Industrial Transformation Training Centre (Grant No. IC190100031). Competing interests: The authors have no competing interests to declare. Authors' contributions: C. Duffin conceptualised the research, developed the code-base, ran the experiments, and wrote the manuscript. P. Branson, M. Rayson, E. Cripps, and T. Stemler conceptualised the research and revised the manuscript. M. Girolami conceptualised the research. agsm
http://arxiv.org/abs/2307.03943v1
20230708093708
Camouflaged Object Detection with Feature Grafting and Distractor Aware
[ "Yuxuan Song", "Xinyue Li", "Lin Qi" ]
cs.CV
[ "cs.CV" ]
Camouflaged Object Detection with Feature Grafting and Distractor Aware *Corresponding author. This work is supported in part by the National Natural Science Foundation of China (Grant No. 41927805). Yuxuan Song College of Computer Science and Technology Ocean University of China Qingdao, China [email protected] Xinyue Li College of Computer Science and Technology Ocean University of China Qingdao, China [email protected] Lin Qi* College of Computer Science and Technology Ocean University of China Qingdao, China [email protected] August 12, 2023 The task of Camouflaged Object Detection (COD) aims to accurately segment camouflaged objects that are integrated into the environment, which is more challenging than ordinary detection as the texture between the target and the background is visually indistinguishable. In this paper, we propose a novel Feature Grafting and Distractor Aware network (FDNet) to handle the COD task. Specifically, we use CNN and Transformer to encode multi-scale images in parallel. In order to better explore the advantages of the two encoders, we design a cross-attention-based Feature Grafting Module to graft features extracted from the Transformer branch into the CNN branch, after which the features are aggregated in the Feature Fusion Module. A Distractor Aware Module is designed to explicitly model the two possible types of distractors in the COD task and refine the coarse camouflage map. We also propose the largest artificial camouflaged object dataset, named ACOD2K, which contains 2000 images with annotations. We conducted extensive experiments on four widely used benchmark datasets and the ACOD2K dataset. The results show that our method significantly outperforms other state-of-the-art methods. The code and the ACOD2K will be available at https://github.com/syxvision/FDNet. Camouflaged Object Detection, Transformer, Convolutional Neural Networks, Distractor § INTRODUCTION Camouflage refers to creatures using the similarity of color, texture, etc. to hide themselves in the background without being discovered by predators. Inspired by the natural camouflage of animals such as the chameleon, artificial camouflage was created to deceive human visual inspection. The computer vision task of Camouflaged Object Detection (COD) aims to accurately segment concealed objects from the background environment, which has recently attracted the interest of researchers and facilitated many applications in different fields. However, due to its inherent nature, locating and segmenting camouflaged objects is much more difficult than ordinary object detection, which makes the COD task extremely challenging. Recently, many deep learning based methods have been proposed to solve the COD task and have achieved impressive progress. SegMaR <cit.> introduces a Magnification Module to iteratively upsample images to segment camouflaged objects with complex structures. ZoomNet <cit.> showed that multi-scale information is very effective for resolving the appearance and shape variation of objects at different scales. This model uses a shared encoder to encode the images of three scales.
However, shared encoders cannot take full advantage of multi-scale images and may cause error propagation. Therefore, we proposed use two different encoders in parallel, and designed a Feature Grafting Module for better feature transfer. Existing COD methods only consider the background as distractor, such as SINetv2 <cit.> which uses reverse attention to erase the foreground and use the background to mine potential camouflage areas. However, in the COD task, due to the similarity between the object and the surrounding environment, there are two different types of distractors as shown in Figure <ref>: 1) in the first row, the stem of the branch is misclassified as camouflaged object since its texture is very similar to the target. 2) in the second row, the lower half of the animal's body is blended with the black background, and the network misses it. This observation inspired us that explicitly modeling semantic features of these two types of distractors with supervision can improve detection performance. In this paper, we propose a Feature Grafting and Distractor Aware network (FDNet) for camouflaged object detection. We employ Transformer and CNN to exploit information on different scales, where Transformer models long-term dependence for rich context information and CNN mines local details for edge information. To aggregate the features from these two encoders, we developped a Feature Grafting Module based on cross-attention, which fuses features in a bottom-up manner to produce a coarse prediction map. A Distractor Aware Module was designed to guide the learning by modeling the two types of distractor and exploring potential camouflage regions under the supervision of groundtruth. Benefited from the designed modules, our proposed network can better recognize distractors and achieve better detection performance. In addition, we contribute to the COD community with a new COD dataset under the fact that most existing COD datasets consists of natural camouflaged animals, whereas only a small portion are camouflage created by human. To address this limitation, we collected and annotated 2000 images of artificial camouflages from the Internet, constituting the current largest artificial camouflage dataset, named ACOD2K. Figure <ref> shows some exmaple images of this dataset. We compared our proposed model with other state-of-the-art models on public datasets and this new dataset. Our contributions. 1) Camouflaged objects can be segmented more accurately by our proposed FDNet which featured by the multi-scale feature extractor and the explicitly modeling of distractors. 2) The parallel encoding and the Feature Grafting Module are able to extract and fuse multi-scale features, which are utilized by the Distractor Aware Module to incorporate two different types of distracting semantic cues for target segmentation. 3) A large artificial camouflage dataset, ACOD2K, was proposed and tested to compare the performance of our proposed model and other existing models. § RELATED WORK The release of large-scale camouflage datasets (such as COD10K <cit.>) has triggered the invention of many deep learning-based methods, which have shown impressive results for the COD task. A majority of the recent work are inspired by how human observers visually search camouflaged targets, as SINet <cit.>, ZoomNet <cit.> and SegMaR <cit.>. SINet was designed to have two stages for searching and recognition respectively. 
ZoomNet <cit.> and the recently proposed SegMaR <cit.> enlarge the image in potential target regions to further mine distinguishing clues in a coarse-to-fine manner. Other work proposed to use auxiliary cues to improve performance, such as making better use of boundary clues <cit.> and frequency-domain perceptual cues <cit.>. The joint task learning was also found to be useful when SOD(Salient Object Detection) and COD are simultaneously considered to boost each other's performance <cit.>. Unlike CNN, Transformer has a global receptive fields, which can capture richer contextual information. Its success in the natural language processing has been observed by computer vision tasks. UGTR <cit.> uses Bayesian and Transformer to infer areas of uncertainty. To take the advantage of both architecture, we employ CNN and Transformer together to enhance the performance of the model. § OUR METHOD §.§ ACOD2K dataset Camouflage images can be categorized as natural or artificial. Natural camouflage refers to the ability of animals to blend into their surroundings through changes in their physiological characteristics, making them difficult to detect by predators. Artificial camouflage refers to camouflage designed using human reasoning through methods such as painting and camouflage uniforms, with a specific aim to target human visual perception characteristics in order to more effectively deceive the human visual system. It has great practical value for tasks such as disaster-assisted search and rescue operations. Leveraging this advantage, we have constructed ACOD2K, the largest artificial camouflage dataset.It's worth noting that current camouflaged object detection methods are exclusively trained on natural camouflaged images. This is because existing datasets mainly feature natural camouflaged animals, making it difficult to train models that can accurately detect artificial camouflage. For instance, the two most commonly used training datasets in COD tasks, CAMO and COD10K, have an imbalanced distribution of natural and artificial camouflage images. Of the 2,500 images in CAMO, less than 10% are artificial camouflage images. Similarly, COD10K, a large-scale dataset with 10,000 images covering multiple camouflaged objects in natural scenes divided into 5 super classes, lacks artificial camouflage images. This highlights the need for datasets like ACOD2K, which has a significant number of artificial camouflage images, to enable the development of more robust camouflaged object detection methods.ACOD2K are consisted by 2000 images, where 1500 images are with camouflaged objects, 400 images are with non-camouflaged objects, and 100 are background images. Most of the images are collected from the Internet (80%), searched using the keywords such as “military camouflage”, “body painting”, “Ghillie suit”, and the rest are from public COD and SOD dataset. Figure <ref> shows some examples of ACOD2K, from which it can be seen that artificial camouflages are intentionally made by humans using materials and colors to conceal the whole target body in the background. High-quality and fine-grained pixel-level matting annotations were carried out for each image. In order to guarantee the quality, an additional researcher further verified all annotations. §.§ Overall Architecture The overall structure of our proposed FDNet is shown in Figure <ref>. It is divided into two stages, the first stage generates a coarse feature map, and the second stage refines the feature map based on the Distractor Aware Module. 
FDNet uses multi-scale images as input. Unlike ZoomNet which uses shared encoders, we instead used the PVT <cit.> for the main scale and used the Res2Net50 <cit.> for the sub-scale, which constitue a parallel encoder. We designed a Feature Grafting Module based on cross-attention to aggregate features of these two scales, which not only extracts valuable semantic clues, but also fully suppresses redundant information and background noise. Then the multi-scale features are sent to the Feature Fusion Module for decoding, it achieved more efficient transmission of encoded information through bottom-up dense connections. Finally, Send it into the dual-branch Distractor Aware Module to refine the feature map, and use ground truth for supervision. §.§ Feature Grafting Module For the main scale image, we use PVT as the backbone to extract feature maps of 4 stages, which can be denoted as g_i;i=1,2,3,4. Since the features with too small resolution will lose most of the information, we did not use g_4. For the sub-scale image, we use Res2Net50 as the backbone to extract a set of feature maps, which can be denoted as f_i;i=1,2,3,4.We choose to graft feature on feature groups with the same feature resolution. Since the resolution of the sub-scale is twice that of the main scale, the resolution of g_i,f_i+1;i=1,2,3 is same. For the first two groups, we use pooling for feature grafting to maintain and highlight useful information. In neural networks, deeper features have richer semantic clues. For g_3 extracted using Transformer, which has rich global context information. For f_4 extracted using CNN, which has edge detail information complementary to global information. We believe that using simple fusion methods such as pooling, concatenation, or addition is not effective enough for mutual learning between these two features, and cannot well suppress background noise from CNN. Therefore, we use cross-attention to incorporate the global semantic cue learned from the main scale into each pixel of the sub-scale. The detail is shown in Figure <ref>. F_4 = Softmax(f_4^Q ·g_3^K^T/√(k))· f_4^V f_4^Q,f_4^V=θ(f_4) g_3^K=ϕ(g_3) θ() uses flatten and permute operations to transform f_4∈ R^C × H × W into f_4^'∈ R^HW × C. Same as self attention, we have passed it through Layer Normalization and linear transformation to get f_4^Q, f_4^V, the process of g_3 getting g_3^K through ϕ() is same as θ. §.§ Feature Fusion Module Unlike the previous method that directly performs convolution after channel concat on the adjacent feature layer to output the prediction map, we fuse deeper features as a semantic filter. We first element-wise multiply it with the current layer features to suppress background interference that may cause abnormality, and then preserve the original information by residual addition. The details are shown in Figure <ref>. The features by the Feature Grafting Module are denoted as F_i;i=1,2,3,4. Since F_4 is the last layer of features, we directly perform 3x3 convolution on F_4 to form F̂_̂4̂, For F_3, we perform filtering on F4 to form F_3^filter. Correspondingly, F_2^filter and F_1^filter are shown in the following formula. We take the top-level feature F̂_̂1̂ as the final result of the Feature Fusion Module, and the coarse prediction is F_c. 
F̂_̂4̂ = Conv3(F_4) F_3^filter = Conv3(Conv1(F_4↑_2) F̂_̂3̂ = Conv3([F_3^filter * F_3+F_3;F̂_̂4̂]) F_2^filter = Conv3(Conv1([F_4↑_4;F_3↑_2])) F̂_̂2̂ = Conv3([F_2^filter * F_2+F_2;F̂_̂3̂]) F_1^filter = Conv3(Conv1([F_4↑_8;F_3↑_4;F_2↑_2])) F̂_̂1̂ = Conv3([F_1^filter * F_1+F_1;F̂_̂2̂]) F_c=Conv3(F̂_̂1̂) Conv3, Conv1 represents 3x3, 1x1 convolution respectively, ↑ refers to upsample, [;] means channel concatenation, and * represents element-wise multiplication. §.§ Distractor Aware Module We believe that there are two types of distractors present in the coarse prediction map generated in the first stage, namely: (i) objects that are camouflaged but not detected, referred to as false negatives, ξ_fn, and (ii) objects that are not camouflaged but are misdetected, referred to as false positives, ξ_fp. To address this, we propose a dual-branch Distractor Aware Module that explicitly models the potential interference and aims to improve the accuracy of the segmentation results.As illustrated in the lower part of Figure <ref>, we first use F̂_̂1̂∈ R^64 × H × W to extract ξ_fn features through a lightweight encoder, the encoder is designed as two 3x3 convolutions, following BN and Relu. In order to make better use of ξ_fn, We generated the predicted map of ξ_fn. During training, the ground truth of ξ_fn is approximated by the difference between the ground truth of the segmentation map and the coarse predicted map F_c. Then we concate ξ_fn with F̂_̂1̂ and send it into the attention mechanism to generate augmented weights ξ_fn^a. The attention mechanism aims to enhance the features of possible ξ_fn regions. we perform element-wise multiplication for ξ_fn^a and original feature F̂_̂1̂, and then perform residual connection to generate the enhanced feature F_fn. Now, the network can better segment those regions that are ignored as background. ξ_fn = Small Encoder(F̂_̂1̂) fn_GT = GT - φ(F_c) Similarly, we use the same encoder to extract ξ_fp features and the predicted map. The ground truth of ξ_fp is approximated by the difference between the coarse predicted map F_c and the ground truth of the segmentation map. we concate F_fn with ξ_fp on channel dimension, then send it into the refine unit consisting of two 3x3 convolutional layers to capture richer context information, so as to better distinguish the misdetected areas. Finally, it is subtracted from F_fn to obtain the prediction feature that suppresses ξ_fp distractor. After 3x3 convolution, we obtain the final prediction map F_p. φ() represents binarization operation. ξ_fp = Small Encoder(F̂_̂1̂) fp_GT = φ(F_c) - GT §.§ Loss Functions Our network has two types of supervision. For the loss L_F_p of the prediction map, same as most COD methods, we use the weighted BCE loss and the weighted IOU loss(Loss1). For the loss L_fn, L_fp of fn and fp, we use the weighted BCE loss(Loss2). The loss function is as follows. Loss = L_F_p+ λ L_fn + β L_fp Loss1 = L_BCE^ω+L_IOU^ω Loss2 = ∑_i(-[N_p/N_p+N_n(y_i)log(p_i)+ N_n/N_p+N_n(1-y_i)log(1-p_i)]) In the experiment, λ and β are set to 10. N_n and N_p represent the number of pixels of positive pixels and negative pixels, respectively. § EXPERIMENTS §.§ Experiment Setup Datasets.We perform experiments on four COD benchmark datasets and ours ACOD2K. Public datasets include CAMO <cit.>, CHAMELON <cit.>, COD10K <cit.> and NC4K <cit.>, Like the previous methods, we use 3040 images from COD10K and 1000 images from CAMO for training, and other datasets for testing. 
For the ACOD2K, we divide it into train set and test set according to the ratio of 8:2. Evaluation Criteria.We use four metrics that commonly used in COD tasks to evaluate the model performance: Mean absolute error(MAE) <cit.>, F_β^w-measure <cit.>, E-measure <cit.>, S-measure <cit.>. Implementation Details.Our network uses PVT <cit.> and Res2Net50 <cit.> pretrained on ImageNet as backbone. We use data augmentation strategy of random flips and rotations. During training, in order to balance efficiency and performance, the size of the main scale is set to 288x288. The batchsize is 32. We use SGD with momentum and weight decay initialized to 0.9 and 0.0005 as the optimizer, the learning rate is initialized to 0.05, follows a linear decay strategy, and the maximum training epoch is set to 50. The entire network is performed on NVIDIA GeForce GTX 3090Ti. §.§ Comparisons with State-of-the-arts To show the effectiveness of our method, we compare with 10 SOTA methods on public datasets. On ours ACOD2K, we compare with 3 COD methods. For fair comparison, the results of these models are either provided by the authors or retrained from open source code. Quantitative Evaluation.As shown in the Table <ref>, our method achieves the superior performance on multiple evaluation metrics. Specifically, our method increases F_β^ω by 1.5%, 3.3%, 6%, 1.9% over the second-best method on all four datasets. Table <ref> shows the FDNet outperforms the second-best method on the four metrics by increasing 1.4%, 2.4%, 1%,0.4% on the ACOD2K. Qualitative Evaluation.We further show the qualitative comparison of FDNet with other methods, presented in the form of visualization maps. As shown in Figure <ref>, our method not only recognizes them well, but also segments fine edges. In addition, in the second row, our method also works well with the presence of distractor in the image. §.§ Ablation Studies As shown in the Table <ref>, we conducted five ablation experiments. In A, we removed all key modules, only used single-scale images, and simply perform convolution after channel concatenation to get the final prediction map. In B, we replaced the Feature Fusion Module on the basis of A. In C, we use multi-scale images, but share the encoder, and the features of different scales are fused by pooling. In D, we use CNN and Transformer to encode the images of two scales respectively, and use the Feature Grafting Module to fuse feature. In E, we added Distractor Aware Module based on D. Effectiveness of multi-scale. By fusing features of different scales, we can explore richer semantic representations. From the second and third rows in the table <ref>, it can be seen that the performance of C is significantly better than that of B, especially in the COD10K, S_α, F_β^w , E_ϕ, ℳ increased by 4.4%, 8.5%, 2.9%, 0.9% respectively.Effectiveness of Feature Fusion. From the first and second rows of the table <ref>, B's performance on the four indicators increased by 0.8%, 2.2%, 1.1%, 0.4% on average, this is due to the positive impact of the Feature Fusion Module's bottom-up dense feature-guided structure. Effectiveness of Feature Grafting. Compared with C, all indicators of D on the two datasets have different degrees of increase, especially F_β^w on the CAMO increased by 1%. This is largely because Feature Grafting Module aggregates the advantages of two different types of encoders well. Effectiveness of Distractor Aware. 
E outperforms D on all datasets, and the visual comparison results in Figure <ref> also clearly verify that the module can mine potential interference areas. § CONCLUSION We propose a novel COD network, FDNet. First, we design the Feature Grafting Module to extract valuable semantic information and suppress background noise. Then, in the Distractor Aware Module, we obtained more accurate prediction map by refining the two types of distractors. Additionally, we also construct a new artificial camouflage dataset, ACOD2K. Experiments on four public datasets and ACOD2K show that our method outperforms other methods significantly both qualitatively and quantitatively. In the future, we will explore more effective supervision methods for two types of distractors. IEEEtran
http://arxiv.org/abs/2307.03996v1
20230708153748
ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation
[ "Saifullah Mahbub", "Md. Easin Arafat", "Chowdhury Rafeed Rahman", "Zannatul Ferdows", "Masum Hasan" ]
cs.SE
[ "cs.SE" ]
[email protected] Code review is considered a key process in the software industry for minimizing bugs and improving code quality. Inspection of review process effectiveness and continuous improvement can boost development productivity. Such inspection is a time-consuming and human-bias-prone task. We propose a semi-supervised learning based system, ReviewRanker, which is aimed at assigning each code review a confidence score that is expected to resonate with the quality of the review. Our proposed method is trained on simple and well-defined labels provided by developers. The labeling task requires little to no effort from the developers and has an indirect relation to the end goal (assignment of a review confidence score). ReviewRanker is expected to improve industry-wide code review quality inspection by reducing the human bias and effort required for such a task. The system has the potential to minimize the back-and-forth cycle existing in the development and review process. Usable code and dataset for this research can be found at: https://github.com/saifarnab/code_review ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation Masum Hasan August 12, 2023 § INTRODUCTION The editorial world has been using peer review since 1731 <cit.>. Modern software development industries have given it a more common name: Code Review. Since then, Modern Code Review (MCR) <cit.> has become an essential part of software development. MCR is a software quality control process in which one person or a group of people evaluates the system by examining and analyzing different parts of the source code; this can be done either during or after the completion of the implementation phase. The purpose of code review is to find bugs, correct mistakes, and boost the consistency of code by improving performance and reducing security vulnerabilities. Figure <ref> outlines a typical code review process. A developer or a set of developers prepares the code and submits it for review. A reviewer or a subgroup of reviewers then performs review checking and makes sure that the author's code causes no system failures in other parts of the codebase. They also ensure a consistent coding style and design pattern. Following all these checks and evaluations, the reviewer or the subgroup of reviewers who have a higher role either approve or reject these reviews. Developers then make changes in the code, revise their work based on the feedback, or provide appropriate explanations against the approved review until both parties are satisfied. Sometimes a reviewer figures out the problematic part of the reviewed code but fails to submit an appropriate explanation of the problem. In such cases, the changes made by the developers will probably not satisfy the reviewer and we are going to get another couple of develop-review cycles. Such cycles can lead to a substantial decrease in productivity in the software industry. It is possible to minimize such situations if we can somehow assign each review a quality score.
Such scoring will help us in (a) gaining a deeper understanding of quality reviews, (b) identifying quality reviewers in the company and (c) estimating provided review quality before sending off to the developers. Essentially, if after going through a particular review, a developer feels confident about the changes that he has to make in the codebase, then that review is probably of good quality. In this paper, we focus on modeling the developer confidence in a review. One way is to simply form this task as a supervised learning task where the input will be a review and the output will be the confidence score for that review. The output labeling will be performed by the developer to whom the review had been sent for making changes in the codebase. Figure <ref> shows the problem behind such labeling. We can see a review in the figure which has been marked as good, average, below average and poor by a significant set of developers from three different software companies. We performed this experiment on 25 reviews in total and got more or less similar results. Let us understand what this means. There are developers who are broad minded and will give good score even when the review is not that good. The opposite spectrum is also equally visible in the industry. The score assigned by a developer also depends on what type of mood he is in at that particular moment. In short, this labeling process is highly dependent on human perception which can vary widely from person to person. We propose an alternative labeling scheme in this paper which indirectly trains a set of three models and enables them in predicting the confidence scores for a particular set of reviews. We call this semi-supervised learning approach ReviewRanker. The labeling is related to three simple multiple choice questions (for the three models) regarding - (a) the understanding of the type of change to perform in the code, (b) the understanding of what to insert and (c) what to delete from the code based on the review of interest. We performed a similar experiment (as of Figure <ref>) with these three multiple choice questions and found out that the choices made by the developers from different companies are similar unless the review is largely vague. Thus we have come to a conclusion that the answer to these questions are not biased by the human perception side of the developers. During inference (after training is done with a set of labeled reviews), we provide a code review as input to the three models for predicting the answer to the three questions (see Figure <ref>). We get three confidence scores from these three models corresponding to the ground truth answers of these questions (labeled by a developer in advance). We obtain the final confidence score from these three scores. Thus we model the confidence of the developer in understanding the review given to him or her. Mainly three types of related studies have been performed regarding code review analysis: (1) theoretical studies on different aspects of code reviewing <cit.>, (2) assisting reviewers by problematic code snippet identification <cit.> and (3) reviewer recommendation <cit.>. Although RevHelper <cit.> was developed to measure code review usefulness, it is actually a binary classification tool (useful vs not useful) and does not provide any quality score to the review of interest. Also this method has the human bias aspect that we have mentioned in detail in Figure <ref>. § PROBLEM DEFINITION The input of ReviewRanker is a large set of code reviews R. 
The output is a confidence score C_i for each review R_i ∈ R, where C_i ∈ [0, 1]. Higher confidence score denotes higher review quality. C_i is the combination of three different confidence scores coming from three different questions related to review R_i. The answer of each question Q_ij is predicted by a model M_j that forms the question answering as a binary classification task. We get a confidence score C_ij (associated with the ground truth label answer) from each model M_j for each question Q_ij for the review of interest R_i. The final confidence score C_i of review R_i is the geometric mean of all C_ij's, where j ∈{1,2,3}. The three questions are as follows: * What type of operation (change in code) did the code review suggest (multi-class classification)? * Did you understand what to insert in the code from the review (binary classification)? * Did you understand what to delete from the code reading the review (binary classification)? Unlike questions related to directly assigning a quality score to a review, these three questions are straightforward and have little to no human bias. § RELATED WORKS Researches have been undertaken to automate the process of reviewing code by using static checks such as standard violation, and common structure defects; while other researchers have focused on automating the process of reviewer recommendation and problematic code detection. §.§ Studies on Code Review Semi-structured individual interviews were conducted with seven developers from Microsoft in <cit.>. They concluded that prior knowledge of files leads to useful comments and tends to increase efficiency. The contemporary code review process at Microsoft was looked into in <cit.>. Research shows that the average spending time in a week for Microsoft developers is four hours in code review, while open source developers take five hours. Microsoft developers give more attention to reviewing relationships with developers compared to open-source developers. An observational survey on Mozilla’s 88 core developers was conducted in <cit.>. The authors found out that approximately 57-69% developers reviewed fewer than 5 patch files, 10% developers reviewed 11 to 20 such files and 4% developers reviewed more than 21 patch files each week. A study described why code review is responsible for evaluating the reliability of test codes and what professional developers do to review test codes by analyzing 300,000 code reviews from open-source projects <cit.>. §.§ Code Review Automation Empirical Studies A prototype tool named Code Distance Visualiser was proposed in <cit.> to detect problematic codes like string overflow, memory leaks, null pointer references, and incorrect API usages. ReviewBot model was proposed in <cit.> where they automated the checking for source code by using a static analyzer and recommended reviewers based on the belief that every line of code had a past history. cHRev model used three measurement metrics to measure the expertise of the reviewers based on their review comments: 1) higher number of review count, 2) reviewer’s effort in the workday and 3) higher weight assignment to the latest reviews <cit.>. RevFinder, a recommendation model for reviewers based on file location was developed in <cit.>. According to their heuristics, identical path files should be reviewed by identical reviewers. 
To analyze similar file paths, they used four string comparison techniques: 1) longest common prefix, 2) longest common suffix, 3) longest common subsequence and 4) longest common substring. RevRec developed in <cit.> consists of two models: the reviewer expertise model (RevRecRE) and the reviewer collaboration model (RevRecRC). They evaluated three open-source projects - Android, OpenStack, and Qt. A comparative study on code review usefulness was conducted based on textual features and reviewer expertise in <cit.>. The authors proposed a machine learning model named RevHelper to predict the usefulness of a review comment. Their comparative study was based on two heuristics - 1) differences between useful and non-useful reviews and 2) how the reviewers' experience helps them to provide appropriate reviews. § DATASET DESCRIPTION The steps regarding the dataset creation process for this research has been briefly shown in the leftmost box of Figure <ref>. We shall describe each of these steps in detail in this section. §.§ Data Source We have collected our data from multiple open-source projects hosted in Gerrit [https://www.gerritcodereview.com/]. Gerrit is a popular tool for code review in both open-source and commercial code repositories. Gerrit provides an easily accessible REST API [https://gerrit-review.googlesource.com/Documentation/rest-api.html] for collecting code reviews and their related codes. We have created a Gerrit Miner using Java that mines code reviews from open source code repositories such as Android & Iotivity and stores them in a MySQL database. We later query the database and label the reviews with different criteria described in detail in the upcoming subsections. §.§ Data Labeling We have created a labeling application with the Django framework in Python <cit.>. The labeling app was designed to be user-friendly and intuitive. On entry, the web app asks for the login credentials of the user. Once it is provided, it directly goes to the labeling page and displays a code review comment to the user. The user is asked what type of operation (change type in code) the code review suggests (see Figure <ref>). Four options are provided in the form of a drop-down menu: Insert, Delete, Replace, and Not Enough Information. The web app provides the private URLs to the source code, and by clicking the link the user can view the source code, where the code review was submitted, and the later modification (accepted by reviewer) in the source code side by side (see Figure <ref>). When the user selects one of the four operations from the drop down menu, he/she is also asked to provide the code snippet that is impacted by the operation. If the operation is an Insert operation, the user is supposed to provide the code snippet that was to be inserted in a text field named Add Code (only if it is understood from the review what was to be inserted). If the operation is a Remove operation, the user puts the code that was to be removed from the original code in the text box named Remove Code (only if it is understood from the review what was to be removed). If the operation is a Replace operation, the user puts the part of the code that changed in Remove Code text box, and the part that it changed into in the Add Code text box (only if both these parts can be understood from the code review alone). We also took a human-centric design approach to design the labeling app. 
Each time a sample data was submitted, the web page changed the background color so that the labeling process would not become monotonous and also would give a sense of progress to the user. §.§ Label Validation The reviews were labeled by a team of five independent volunteers who possess substantial experience in programming. All the labelers are from Computer Science background and have more than two years of working experience with programming languages such as C and Java, specifically in the areas of Android and Iotivity. To ensure consistency in the labeling process, 10% of the reviews were given to all the participants for labeling. The remaining 90% of samples were unique for each labeler. The admin frequently examined 10% of the data labels to check for any discrepancies among the labelers. If there was a considerable variation in the labeling, appropriate measures were taken to make the data labels more consistent. Later on, the entire dataset was manually labeled and reviewed by senior software developers to ensure proper validation of the assigned labels. The final confirmation for the labeling was obtained from the admin and considered conclusive for this dataset. § MATERIALS AND METHODS Figure <ref> provides an overview of the steps in developing ReviewRanker. We have already described the dataset creation step in the previous section. In this section, we are going to elaborate the next four steps which are more related to ReviewRanker training and inference phase. §.§ Data Preprocessing §.§.§ Data Labeling: Our initial dataset consisted of 2052 review comments. After the elimination of redundant samples, we are now left with 1483 sample reviews in our final dataset. Let us talk about the ground truth label assignment process for the three multiple choice questions asked for each review (the three questions can be found in Section <ref>). In real life scenario, the ground truth labels associated to a particular review are expected to be assigned by the developer/ developers to whom the review is directed to during the development process. Observing the questions, it is evident that it will take little to no effort from the developers to perform this labeling process. We start with the operation (code change) related question. We define four types of operations: (1) replace (class label 0), (2) delete (label 1), (3) insert (label 2) and (4) not enough information (no label assigned). If a review operation is assigned as "not enough information", then we simply assign that review a confidence score of 0 and exclude that review from ReviewRanker training and inference. The next two questions are about understanding of what to insert and what to remove from the current code base (both are binary classification tasks). If it is clear from the review what to insert, then the insertion related question receives ground truth label of 1, else the label is 0. The exact same aspect goes for the deletion related question. If the operation is labeled as "replace" (first question), then it is expected that the label of both the insertion and deletion related questions will be 1 (it will not always happen in non-ideal cases). Similarly, if the operation is labeled as "delete", then the label of deletion related question is expected to be 1, while the insertion related question will have a label of 0 in an ideal world; and the opposite aspect will happen if the operation is labeled as "insert". Let us now look at an example review - “outer parens not needed”. 
The labels for this review are as follows: Operation Type: delete (label 1) Understanding of something to be added: nothing to add (label 0) Understanding of something to be deleted: parentheses need to be deleted (label 1) §.§.§ Similar Word Handling Our corpus contains more than 3000 unique words, which is a large number considering the small corpus size (less than 1500 reviews). So, by replacing all semantically identical words with a single word, we minimize the word list, which helps our model find acceptable relationships between words. While doing so, we use both the process of word stemming and lemmatization. Using word-stemming, we can modify a word’s plural instance to singular, normalize grammatical state, and so on. Consider the words provided below: The above words are generated from the word “program”. Through the word-stemming process, we replace all of these words with the word program in our unique word list. Using word lemmatization, we can generate a similar set of words from a single word. For example, the word minor generates the following words: These words are verbally similar to the word minor. Thus we replace all of these words with the word minor in our unique word list as well. By doing so, our corpus now contains around 1700 unique words. §.§.§ Special Word Handling: Our dataset contains code reviews that include a significant amount of special words specific to C code that have no real meaning but play a very important role in review comments. Our proposed model works based on the textual relationship between normal words and these special words. Hence we replace these words with some common words based on their operational characteristics. First, we lowercase the starting letter of all words in our corpus. After that for each of the words: * If the word has any uppercase letter, then we replace the word with keywordvariable, considering we usually use camel case to write variables. * Otherwise, if the word contains .h or #, then we replace the word with keyworddoth. The presence of such special characters denotes header files in C programming. * Otherwise, if the word contains _, then we replace the word with keywordunderscore. Having an underscore in a word is a bit confusing, it may denote a function or a variable. That is why we treat them with a special keyword. * Otherwise, If the word contains parenthesis, then we replace the word with keywordfunction, considering all functions must initiate with a pair of parentheses. After such special keyword handling, our corpus now contains 1368 unique words which started with 3000 initially. §.§ Feature Extraction In order to feed a review to a model as input, We need a mathematical representation of that review. We have 1368 unique words in our preprocessed dataset (see Section <ref>). Each review contains a subset of these words. So, we represent each review with a vector V of size 1368, where V_i represents the total count of word_i found in the review. Let us look at two examples: Review sample 1: line over fifty characters you should reduce it to twenty characters. Review sample 2: provide line level comment to line. If we create a unique word list from this corpus, it would be: We can index these words from 0 to 12. The feature vector for the two sample reviews is as follows: Instead of utilizing word embedding based approaches such as Word2Vec <cit.> and FastText <cit.>, we have opted for a bag-of-words type of approach <cit.>. 
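As a concrete illustration of the preprocessing and feature-extraction steps described above, here is a small Python sketch — not the authors' implementation — that canonicalises morphological variants with a Porter stemmer and WordNet lemmatizer, applies the special-keyword replacements, and builds the bag-of-words count vector. The toy reviews and the tiny vocabulary stand in for the real 1368-word vocabulary, and the exact tokenisation is an assumption.

```python
import re
from collections import Counter

from nltk.stem import PorterStemmer, WordNetLemmatizer   # needs nltk's "wordnet" data

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

def normalise_token(token):
    """Collapse morphological variants and replace code-like tokens with generic keywords."""
    if not token:
        return token
    token = token[0].lower() + token[1:]              # lowercase only the starting letter
    if any(ch.isupper() for ch in token):             # camelCase -> variable name
        return "keywordvariable"
    if ".h" in token or "#" in token:                 # header files
        return "keyworddoth"
    if "_" in token:                                  # underscore: function or variable
        return "keywordunderscore"
    if "(" in token or ")" in token:                  # parentheses: function call
        return "keywordfunction"
    return stemmer.stem(lemmatizer.lemmatize(token))  # e.g. programs/programming -> program

def count_vector(review, vocabulary):
    """Bag-of-words feature: V[i] is the count of vocabulary[i] in the review."""
    counts = Counter(normalise_token(t) for t in re.findall(r"\S+", review))
    return [counts.get(word, 0) for word in vocabulary]

reviews = [
    "line over fifty characters you should reduce it to twenty characters",
    "provide line level comment to line",
    "rename maxCount and fix calculate_sum() in utils.h",
]
vocabulary = sorted({normalise_token(t) for r in reviews for t in r.split()})
for r in reviews:
    print(count_vector(r, vocabulary))
```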
Word embedding produces semantic vectors for each word typically employed with recurrent neural networks (RNNs) <cit.>. However, due to our small dataset and straightforward classification tasks, we have observed that a basic shallow neural network with bag-of-words feature outperforms RNNs with word embeddings through five fold cross validation. §.§ Model Details Our proposed algorithm combines three models as shown in Table <ref>. Details of the classes present under each model can be found in Section <ref>. Each model is a fully connected vanilla neural network but with a different set of parameter values. The input layer is of size 1368 (word frequency vector: total unique word no. is 1368). M_1 and M_2 are used for binary classification while M_3 is used for multi-class classification (three classes). Relu activation function <cit.> has been used for the intermediate layers, while Softmax has been used for the output layer. A dropout of 20% has been applied between each consecutive hidden layers to prevent overfitting <cit.>. Categorical Cross Entropy <cit.> has been used as the loss function, while Adam (Adaptive Moment Estimation) optimizer <cit.> has been used for weight update. §.§ Review Confidence Score Generation Table <ref> illustrates the entire process of confidence score generation for two sample reviews (We assume that the three task specific models M_1, M_2 and M_3 are already trained). The feature vector of each review is passed through all three models separately. Each model provides a discrete probability distribution of the task specific classes. For example, model M_3 always provides three probability values (sums to 1) for the three operation type specific classes. For each model, we only take the probability score associated with the ground truth class label (expected to be available for all reviews). Thus, for one review, we get total three confidence scores (predicted probability values) from the three models. The final confidence score is the geometric mean ((C_1 × C_2 × C_3)^1/3) of these three confidence scores. A higher confidence score denotes higher review quality, as it is expected that the developer confidence in such reviews will be high. §.§ Confidence Score Generation for the Entire Review Set The expected input to the ReviewRanker system is not a single review, but an entire set of labeled (the three questions/ tasks) reviews. The three models that are part of ReviewRanker are trained on a fraction of this labeled review set. The confidence scores for the reviews are obtained in a 10-fold cross validation style. Let us understand the entire process. Given a large set of labeled reviews S, we first randomly divide the set into 10 small disjoint subsets S_1, S_2, … S_10 of reviews. For fold no. i of the 10-fold cross validation, we use all S_j (j ≠ i) subsets of reviews for training the three models (from randomly assigned initial weights) and finally, use the trained models to predict the final confidence scores of the validation review subset S_i. After doing this 10 times for the 10 folds, we are going to get review confidence scores for all the reviews available in the entire review set S. The important thing to note here is that the confidence score of each review is obtained only when that review is part of the validation subset. This is done to avoid obtaining overfitted scores on training data (many of the confidence scores of training data are close to 1). 
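To make the three-model setup and the scoring rule above concrete, the following is a minimal Keras sketch — a hedged illustration, not the released package. The hidden-layer widths, the single training epoch and the random toy data are assumptions; the 1368-dimensional bag-of-words input, ReLU hidden layers with 20% dropout, softmax outputs, categorical cross-entropy, the Adam optimizer and the geometric-mean combination of the three ground-truth-class probabilities follow the description above.

```python
import numpy as np
import tensorflow as tf

N_WORDS = 1368   # size of the bag-of-words vocabulary

def build_model(n_classes, hidden=(256, 64)):
    """Fully connected classifier: ReLU hidden layers, 20% dropout, softmax output."""
    layers = [tf.keras.Input(shape=(N_WORDS,))]
    for width in hidden:
        layers.append(tf.keras.layers.Dense(width, activation="relu"))
        layers.append(tf.keras.layers.Dropout(0.2))
    layers.append(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

def review_confidence(models, features, true_labels):
    """Geometric mean of the probabilities the models assign to the ground-truth classes."""
    scores = []
    for model, label in zip(models, true_labels):
        probs = model.predict(features[None, :], verbose=0)[0]
        scores.append(float(probs[label]))
    return float(np.prod(scores) ** (1.0 / len(scores)))

# toy demonstration: operation type (3 classes), insert (2 classes), delete (2 classes)
rng = np.random.default_rng(0)
n_classes = (3, 2, 2)
X = rng.integers(0, 3, size=(64, N_WORDS)).astype("float32")
labels = [rng.integers(0, c, size=64) for c in n_classes]
models = [build_model(c) for c in n_classes]
for model, y, c in zip(models, labels, n_classes):
    model.fit(X, tf.keras.utils.to_categorical(y, c), epochs=1, verbose=0)
print(review_confidence(models, X[0], [int(labels[j][0]) for j in range(3)]))
```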
§ RESULTS AND DISCUSSION §.§ Manual Inspection of Assigned Review Quality We examine both the review text and its corresponding confidence score to gain insight into the behavior of the proposed ReviewRanker system. Our goal is to understand why certain reviews receive higher scores than others. To this end, we randomly selected several reviews with high, average, and low confidence scores and analyzed their content (shown in Table <ref>). Through our analysis, we discovered that reviews with higher confidence scores are generally easy to understand, provide clear suggestions for changes to the code, and use specific variable and function names. Reviews with average confidence scores are sometimes easy to understand but lack substantive information, are excessively long, or contain lengthy blocks of code. Reviews with very low confidence scores are often too short to understand, lack meaningful information, and include asterisks and other special characters. Since ReviewRanker is composed of three training based neural network models, it is a data hungry system. So, larger the provided review set, better will ReviewRanker be able to model the developer confidence in a particular review. §.§ Model Performance Table <ref> shows the dataset size and performance of the three ReviewRanker models across the 10 folds. The high mean validation accuracy shows that the models can learn to answer the three simple questions associated with review confidence score generation effectively and can generalize well to validation data. The reported performance has some implications on the usage of ReviewRanker. If for some particular set of code reviews, we see that the 10-fold cross validation performance is not upto the mark, then what it means is that the three models have not been able to understand how to answer the three questions for the provided reviews. In that case, the final confidence score provided by ReviewRanker will not be a reliable metric to measure review quality. §.§ ReviewRanker Validation ReviewRanker has not been validated at industry-wide scale. We have made effort of validating ReviewRanker at small scale in three different software companies. But just as we have mentioned in the Introduction section, there is high human bias when it comes to assigning some kind of quality score to a review manually as part of the labeling process. Hence, our effort remains unsuccessful. Nevertheless, this is a system that has the potential of providing us with effective review quality score at industry scale. The system works end-to-end. The input is a set of reviews (no limitation in the number of reviews provided in the set) and the output is a csv file containing confidence score for each of the provided reviews. These scores can be used to find out characteristics of high, average and poor quality reviews; which in turn can aid software industries in coming up with proper guidelines for providing code reviews. This can save considerable time and cost by minimizing the occurrence of develop-review-develop cycles. Designing an effective industry-wide validation study can be an immediate next research step for ReviewRanker. §.§ Limitations ReviewRanker asks three questions regarding change type, code addition and code deletion while providing confidence score for a particular review. It does not use the context of code based on which the review has been provided. 
However, we firmly believe that usage of the code review context by the models when answering the three questions can greatly benefit the confidence score generation process. In such a case, sequence modeling approaches such as Long Short-Term Memory (LSTM) <cit.> or Transformer <cit.> models can be used as the three models of ReviewRanker. One also has to take note of the fact that these sequence models are extremely data-hungry. So, if a particular review set has fewer than 10K reviews (which is our case as well), then it is better to use the simple feature extraction method and model architecture that we have proposed. The three questions that we ask the developers to label for each sample are not based on any large-scale study. We believe that a more optimal set of questions can be used for review quality estimation, provided that a well-designed large-scale study is undertaken for this purpose. The reviews that we are dealing with in the experimental dataset for ReviewRanker are line-level code reviews. We have not tested the method on block-level code reviews, although we expect similar results for that case as well. Finally, because of the human bias factor, proper validation of the proposed ReviewRanker method could not be performed. § CONCLUSION In this paper, we propose ReviewRanker with the goal of enabling effective inspection of code review quality. We identify the human bias factor of a supervised learning based approach and thus resort to a human-bias-free multiple choice question scheme in order to indirectly obtain the confidence score for each review in a semi-supervised fashion. We ensure that the labeling process requires little to no effort from the developers. ReviewRanker can handle a large number of reviews (theoretically, there is no limitation on the number of reviews provided) and can provide the confidence score for each review in an end-to-end manner with zero external effort required. The proposed system can be implemented easily at industry level to consistently identify the best reviewers and promote the best review practices with minimal time and effort. The adoption of this system is expected to enhance code quality and to reduce the back-and-forth cycle of the review process. Some immediate future research directions are: (a) well-designed industry-scale evaluation of ReviewRanker's effectiveness in review quality estimation, (b) incorporation of code context in the ReviewRanker models, and (c) replacing the current set of questions with a more suitable set of questions through a large-scale study. We plan to make ReviewRanker publicly available in the form of a Python package upon acceptance.
http://arxiv.org/abs/2307.05721v1
20230709084446
HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding
[ "Hao Zheng", "Regina Lee", "Yuqian Lu" ]
cs.CV
[ "cs.CV" ]
HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding
Hao Zheng, Regina Lee, Yuqian Lu
==============================================================================================
Understanding comprehensive assembly knowledge from videos is critical for the futuristic ultra-intelligent industry. To enable technological breakthroughs, we present HA-ViD – the first human assembly video dataset that features representative industrial assembly scenarios, a natural procedural knowledge acquisition process, and consistent human-robot shared annotations. Specifically, HA-ViD captures diverse collaboration patterns of real-world assembly, natural human behaviors and learning progression during assembly, and granulates action annotations into subject, action verb, manipulated object, target object, and tool. We provide 3222 multi-view, multi-modality videos (each video contains one assembly task), 1.5M frames, 96K temporal labels and 2M spatial labels. We benchmark four foundational video understanding tasks: action recognition, action segmentation, object detection and multi-object tracking. Importantly, we analyze their performance for comprehending knowledge in assembly progress, process efficiency, task collaboration, skill parameters and human intention. Details of HA-ViD are available at: <https://iai-hrc.github.io/ha-vid> § INTRODUCTION Assembly knowledge understanding from videos is crucial for futuristic ultra-intelligent industrial applications, such as robot skill learning <cit.>, human-robot collaborative assembly <cit.> and quality assurance <cit.>. To enable assembly video understanding, a video dataset is required. Such a video dataset should (1) represent real-world assembly scenarios, (2) capture comprehensive assembly knowledge, and (3) follow a consistent annotation protocol that aligns with human and robot assembly comprehension. However, existing datasets cannot meet these requirements. First, the assembled products in existing datasets are either too scene-specific <cit.> or lack typical assembly parts and tools <cit.>. Second, existing datasets did not design assembly tasks to foster the emergence of natural behaviors (e.g., varying efficiency, alternative routes, pauses and errors) during procedural knowledge acquisition. Third, a thorough understanding of nuanced assembly knowledge is not possible via existing datasets, as they fail to annotate subjects, objects, tools and their interactions in a systematic way. Therefore, we introduce HA-ViD: a human assembly video dataset recording people assembling the Generic Assembly Box (GAB, see Figure <ref>). We benchmark four foundational tasks: action recognition, action segmentation, object detection and multi-object tracking (MOT), and analyze their performance for comprehending application-oriented knowledge. HA-ViD features three novel aspects: * Representative industrial assembly scenarios: GAB includes 35 standard and non-standard parts frequently used in real-world industrial assembly scenarios and requires 4 standard tools to assemble. The assembly tasks are arranged onto 3 plates featuring different task precedence and collaboration requirements to promote the emergence of two-handed collaboration and parallel tasks. Different from existing assembly video datasets, GAB represents generic industrial assembly scenarios (see Table <ref>).
* Natural procedural knowledge acquisition process: Progressive observation, thought and practice process (shown as varying efficiency, alternative assembly routes, pauses, and errors) in acquiring and applying complex procedural assembly knowledge is captured via the designed three-stage progressive assembly setup (see Figure <ref>). Such a design allows in-depth understanding of the human cognition process, where existing datasets lack (see Table <ref>). * Consistent human-robot shared annotations: We designed a consistent fine-grained hierarchical task/action annotation protocol following a Human-Robot Shared Assembly Taxonomy (HR-SAT[HR-SAT, developed by the same authors, is a hierarchical assembly task representation schema that both humans and robots can comprehend. See details via: <https://iai-hrc.github.io/hr-sat>] , to be introduced in Section 2.3). Using this protocol, we, for the first-time, (1) granulate action annotations to subject, action verb, manipulated object, target object, and tool; (2) provide collaboration status annotations via separating two-handed annotations; and (3) annotate human pauses and errors. Such detailed annotation embeds more knowledge sources for diverse understanding of application-oriented knowledge (see Table <ref>). § DATASET In this section, we present the process of building HA-ViD and provide essential statistics. §.§ Generic Assembly Box To ensure the dataset can represent real-world industrial assembly scenarios, we designed the GAB shown in Figure <ref>. First, GAB[Find GAB CAD files at: <https://iai-hrc.github.io/ha-vid>.] is a 250×250×250mm box including 11 standard and 24 non-standard parts frequently used in real-world industrial assembly. Four standard tools are required for assembling GAB. The box design also allows participants to naturally perform tasks on a top or side-facing plate, closer to the flexible setups of real-world assembly. Second, GAB consists of three plates featuring different task precedence and collaboration requirements. Figure <ref> shows the subject-agnostic task precedence graphs (SA-TPG) for the three plates with different precedence constraints. These different task precedence graphs provide contextual links between actions, enabling situational action understanding with different complexities. The cylinder plate also has more collaboration tasks, posing greater challenges for understanding collaborative assembly tasks. Gear and cylinder plates contain parts that become hidden after assembly, e.g., spacers under the gears. This introduces additional complexities for understanding assembly status. §.§.§ Dataset Collection Data was collected on three Azure Kinect RGB+D cameras mounted to an assembly workbench facing the participant from left, front and top views, as shown in Figure <ref>. Videos were recorded at 1280×720 RGB resolution and 512×512 depth resolution under both lab lighting and natural lighting conditions. 30 participants (15 males, 15 females) assembled each plate 11 to 12 times during a 2-hour session. To capture the progression of human procedural knowledge <cit.> acquisition and behaviors (e.g., varying efficiency, alternative routes, pause, and errors) during learning, a three-stage progressive assembly setup is designed. Inspired by discovery learning <cit.>, we design the three stages as[The instruction files can be found at <https://iai-hrc.github.io/ha-vid>. 
The detailed instructions were written following HR-SAT to align assembly instructions with our annotations.]: Discovery – participants are given minimal exploded view instructions of each plate; Instruction – participants are given detailed step-by-step instructions of each plate; Practice – participants are asked to complete the task without instruction. The first stage encourages participants to explore assembly knowledge to reach a goal, the second stage provides targeted instruction to deepen participants’ understanding, and the last stage encourages participants to reinforce their learning via practicing. During Instruction and Practice stages, the participants were asked to perform the assembly with the plate facing upwards and sideways. §.§.§ Dataset Annotations We provide temporal and spatial annotations to capture rich assembly knowledge shown in Figure <ref>. To enable human-robot assembly knowledge transfer, the structured temporal annotations are made following HR-SAT. According to HR-SAT (shown in Figure <ref>), an assembly task can be decomposed into primitive tasks and further into atomic actions. Each primitive task and atomic action contain five description elements: subject, action verb, manipulated object, target object and tool. Primitive tasks annotations describe a functional change of the manipulated object, such as inserting a gear on a shaft or screwing a nut onto a bolt. Atomic actions describe an interaction change between the subject and manipulated object such as a hand grasping the screw or moving the screw. HR-SAT ensures the annotation transferability, adaptability, and consistency. The ST-TPGs files can be downloaded at: <https://iai-hrc.github.io/hr-sat> We annotate human pause and error as null and wrong respectively to enable research on understanding assembly efficiency and learning progression. Our annotations treat each hand as a separate subject. Primitive tasks and atomic actions are labeled for each hand to support multi-subject collaboration related research. Alongside the primitive task annotations, we annotate the two-handed collaboration status as: collaboration, when both hand work together on the same task; parallel, when each hand is working on a different task; single-handed, when only one hand is performing the task while the other hand pauses; and pause, when neither hand is performing any task. More details about the temporal annotations can be found in Supplementary Section 2.3. For spatial annotations, we use CVAT[<https://www.cvat.ai/>], a video annotation tool, to label bounding boxes for subjects, objects and tools frame-by-frame. Different from general assembly datasets, we treat important assemblable features, such as holes, stud and USB female, as objects, to enable finer-grained assembly knowledge understanding. §.§ Statistics In total, we collected 3222 videos with side, front and top camera views. Each video contains one task – the process of assembling one plate. Our dataset contains 86.9 hours of footage, totaling over 1.5 million frames with an average of 1 min 37 sec per video (1456 frames). To ensure annotation quality, we manually labeled temporal annotations for 609 plate assembly videos and spatial annotations for over 144K frames. The selected videos for labeling collectively capture the dataset diversity by including videos of different participants, lighting, instructions and camera views. 
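To make the annotation protocol described above concrete, the snippet below sketches one possible in-memory representation of a temporal segment (with its five HR-SAT description elements and the two-handed collaboration status) and of a spatial bounding-box label. This is only an illustrative sketch: the class names, field names and example values are assumptions and do not necessarily reflect the exact file format of the released annotations.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple


class CollaborationStatus(Enum):
    COLLABORATION = "collaboration"   # both hands work together on the same task
    PARALLEL = "parallel"             # each hand works on a different task
    SINGLE_HANDED = "single-handed"   # one hand acts while the other pauses
    PAUSE = "pause"                   # neither hand performs any task


@dataclass
class TemporalSegment:
    """One primitive-task or atomic-action segment, annotated per hand."""
    level: str                         # "primitive_task" or "atomic_action"
    start_frame: int
    end_frame: int
    subject: str                       # each hand is a separate subject, e.g. "left_hand"
    action_verb: str                   # e.g. "insert" or "screw"; pauses and errors are
                                       # annotated as "null" and "wrong" in the dataset
    manipulated_object: Optional[str]  # e.g. "spur_gear"
    target_object: Optional[str]       # e.g. "gear_shaft"
    tool: Optional[str]                # e.g. "screwdriver", or None when no tool is used
    collaboration_status: CollaborationStatus


@dataclass
class SpatialBox:
    """One per-frame bounding box for a subject, object or tool."""
    frame: int
    label: str                                      # one of the 42 spatial classes
    bbox_xyxy: Tuple[float, float, float, float]    # box corners in image coordinates
    track_id: Optional[int] = None                  # identity used for multi-object tracking
```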
Overall, our dataset contains 18831 primitive tasks across 75 classes, 63864 atomic actions across 219 classes, and close to 2M instances of subjects, objects and tools across 42 classes. Figure <ref> presents the annotation statistics of the dataset. Our dataset shows potential for facilitating small object detection research as 46.6% of the annotations are of small objects. More statistics can be found in Supplementary Section 2.4. Our temporal annotations can be used to understand the learning progression and efficiency of participants over the designed three-stage progressive assembly setup, shown in Figure <ref>. The combined annotation of wrong primitive task, pause collaboration status and total frames can indicate features such as errors, observation patterns and task completion time for each participant. Our dataset captures the natural progress of procedural knowledge acquisition, as indicated by the overall reduction in task completion time and pause time from stage 1 to 3, as well as the significant reduction in errors. The wrong and pause annotations enable research on understanding varying efficiency between participants. By annotating the collaboration status and designing three assembly plates with different task precedence and collaboration requirements, HA-ViD captures the two-handed collaborative and parallel tasks commonly featured in real-world assembly, shown in Figure <ref>. Overall, 49.6% of the annotated frames consist of two-handed tasks. The high percentage of two-handed tasks enables research in understanding the collaboration patterns of complex assembly tasks. § BENCHMARK EXPERIMENTS We benchmark SOTA methods for four foundational techniques for assembly knowledge understanding, i.e., action recognition, action segmentation, object detection, and MOT. Due to page limit, we highlight key results and findings in this section, and present implementation details, more results and discussions in the Supplementary Section 3. §.§ Action Recognition, Action Segmentation, Object Detection and MOT Action recognition is to classify a sequence of video frames into an action category. We split 123 out of 609 temporally labeled videos to be the testset, and the rest is trainset. We benchmark five action recognition methods from three categories: 2D models (TSM <cit.>, TimeSFormer <cit.>), 3D models (I3D <cit.>, MVITv2 <cit.>), and skeleton-based method (ST-GCN <cit.>) and report the Top-1 accuracy and Top-5 accuracy in Table <ref>. Action segmentation is to temporally locate and recognize human action segments in untrimmed videos <cit.>. Under the same train/test split, we benchmark three action segmentation methods, MS-TCN <cit.>, DTGRM <cit.> and BCN <cit.>, and report the frame-wise accuracy (Acc), segmental edit distance (Edit) and segmental F1 score at overlapping thresholds of 10% in Table <ref>. Object detection is to detect all instances of objects from known classes <cit.>. We split 18.4K out of 144K spatially labeled frames to be testset, and the rest is trainset. We benchmark classical two-stage method FasterRCNN <cit.>, one-stage method Yolov5 <cit.>, and the SOTA end-to-end Transformer-based method DINO <cit.> with different backbone networks, and report parameter size (Params), average precision (AP), AP under different IoU thresholds (50% and 75%) and AP under different object scales (small, medium and large) in Table <ref>. 
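For the action segmentation results above, the segmental metrics (Edit and F1@10) are computed from frame-wise label sequences by first collapsing them into segments. The following is a minimal sketch of this commonly used protocol; the function names and the exact matching and tie-breaking conventions are illustrative and may differ slightly from the evaluation code used for the reported numbers.

```python
import numpy as np


def frame_accuracy(pred_labels, gt_labels):
    """Frame-wise accuracy (Acc): fraction of frames with the correct label."""
    return float((np.asarray(pred_labels) == np.asarray(gt_labels)).mean())


def to_segments(frame_labels):
    """Collapse a per-frame label sequence into (label, start, end) segments (end exclusive)."""
    segments, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((frame_labels[start], start, i))
            start = i
    return segments


def edit_score(pred_labels, gt_labels):
    """Segmental edit score: 100 * (1 - normalized Levenshtein distance) between
    the predicted and ground-truth segment label sequences."""
    p = [s[0] for s in to_segments(pred_labels)]
    g = [s[0] for s in to_segments(gt_labels)]
    D = np.zeros((len(p) + 1, len(g) + 1))
    D[:, 0] = np.arange(len(p) + 1)
    D[0, :] = np.arange(len(g) + 1)
    for i in range(1, len(p) + 1):
        for j in range(1, len(g) + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            D[i, j] = min(D[i - 1, j] + 1, D[i, j - 1] + 1, D[i - 1, j - 1] + cost)
    return 100.0 * (1.0 - D[len(p), len(g)] / max(len(p), len(g), 1))


def f1_at_k(pred_labels, gt_labels, overlap=0.1):
    """Segmental F1@k: a predicted segment counts as a true positive if its temporal IoU
    with an unmatched ground-truth segment of the same label reaches the threshold."""
    pred, gt = to_segments(pred_labels), to_segments(gt_labels)
    matched, tp = [False] * len(gt), 0
    for label, ps, pe in pred:
        best_iou, best_j = 0.0, -1
        for j, (glabel, gs, ge) in enumerate(gt):
            if matched[j] or glabel != label:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))
            union = max(pe, ge) - min(ps, gs)
            iou = inter / union if union > 0 else 0.0
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= overlap:
            tp += 1
            matched[best_j] = True
    fp, fn = len(pred) - tp, len(gt) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```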
MOT aims at locating multiple objects, maintaining their identities, and yielding their individual trajectories given an input video <cit.>. We benchmark the SORT <cit.> and ByteTrack <cit.> trackers on the detection results of DINO and on the ground truth annotations (test split of object detection), respectively. We report average multi-object tracking accuracy (MOTA), ID F1 score (IDF1), false positives (FP), false negatives (FN), and ID switches (IDS) over the videos in our testing dataset in Table <ref>. The baseline results show that our dataset presents great challenges for the four foundational video understanding tasks compared with other datasets. For example, BCN has 70.4% accuracy on Breakfast <cit.>, MViTv2 has 86.1% Top-1 accuracy on Kinetics-400 <cit.>, DINO has 63.3% AP on COCO test-dev <cit.>, and ByteTrack has 77.8% MOTA on MOT20 <cit.>. Beyond these baseline results, we are more concerned with whether existing video understanding methods can effectively comprehend application-oriented knowledge (see Figure <ref>). We present our subsequent analysis in Sections 3.2-3.5. §.§ Assembly progress Insight #1: Assembly action recognition could focus on compositional action recognition and leveraging prior domain knowledge. Understanding assembly progress, as an essential application-oriented task, requires real-time recognition of actions (action verb + interacted objects and tools) and comparison of the action history with a predefined assembly plan (represented in a task graph). After further analysis of the sub-optimal action recognition performance in Table <ref>, we found that recognizing interacted objects and tools is more challenging than recognizing action verbs (as shown in Table <ref>). Therefore, a promising research direction could be compositional recognition of action verbs and interacted objects and tools. By leveraging prior domain knowledge, such as task precedence and the probabilistic correlation between action verbs and feasible objects and tools, one may improve the performance of action recognition. With defined task precedence graphs and a rich list of action verb/object/tool pairs, HA-ViD enables research on this aspect. Insight #2: Assembly action segmentation should focus on addressing under-segmentation issues and improving segment-wise sequence accuracy. Assembly progress tracking requires obtaining the accurate number of action segments and their sequence. For obtaining the accurate number of action segments from a given video, previous action segmentation algorithms <cit.> focused on addressing over-segmentation issues, but lack metrics for quantifying under/over-segmentation. Therefore, we propose segmentation adequacy (SA) to fill this gap. Consider the predicted segments s_pred={s_1',s_2',…,s_F'} and the ground truth segments s_gt={s_1,s_2,…,s_N} for a given video, where F and N are the numbers of segments; then SA = tanh(2(F-N)/(F+N)). Table <ref> reveals significant under-segmentation issues on our dataset. This reminds the community to pay attention to addressing under-segmentation issues for assembly action understanding. The proposed SA can offer evaluation support, and can even assist in designing the loss function, as it utilizes the hyperbolic tangent function. As for segment-wise sequence accuracy, the low value of Edit in Table <ref> suggests that substantial research effort is still required. Compared with Breakfast <cit.> (66.2% Edit score with the BCN algorithm), our dataset presents greater challenges. §.§ Process Efficiency Understanding process efficiency is essential for real-world industry.
It requires video understanding methods to be capable of recognizing human pause and error. HA-ViD supports this research by providing null and wrong labels. Insight #3: For null action understanding, efforts need to be made on addressing imbalanced class distribution. Table <ref> shows the recall and precision of action recognition and action segmentation of null actions. We suspect the high recall and low precision is caused by the imbalanced class distribution, as null is the largest head class (see Figure <ref>). Insight #4: New research from wrong action annotations. Wrong action is the assembly action (primitive task level) occurred at wrong position or order. Our annotation for wrong actions allows in-depth research on understanding its appearing patterns between participants across the three stages. Joint understanding between wrong actions and their adjacent actions could also trigger new research of predicting wrong actions based on action history. §.§ Task Collaboration Insight #5: New research on understanding parallel tasks from both hands Table <ref> shows that both action recognition and segmentation have lowest performance on parallel tasks during assembly. One possible reason is that the foundational video understanding methods rely on global features of each image, and do not explicitly detect and track the action of each hand. This calls for new methods that can independently track both hands and recognize their actions through local features. Recent research on human-object interaction detection in videos <cit.> could offer valuable insights. §.§ Skill Parameters and Human Intention Understanding skill parameters and human intentions from videos is essential for robot skill learning and human-robot collaboration (HRC) <cit.>. Typically, skill parameters vary depending on the specific application. However, there are certain skill parameters that are commonly used, including trajectory, object pose, force and torque <cit.>. While videos cannot capture force and torque directly, our dataset offers spatial annotations that enable tracking the trajectory of each object. Additionally, the object pose can be inferred from our dataset via pose estimation methods. Therefore, HA-ViD can support research in this direction. Understanding human intention in HRC refers to a combination of trajectory prediction, action prediction and task goal understanding <cit.>. Our spatial annotations provide trajectory information, SA-TPGs present action sequence constraints, and GAB CAD files offer the final task goals. Therefore, HA-ViD can enhance the research in this aspect. § CONCLUSION We present HA-ViD, a human assembly video dataset, to advance comprehensive assembly knowledge understanding toward real-world industrial applications. We designed a generic assembly box to represent industrial assembly scenarios and a three-stage progressive learning setup to capture the natural process of human procedural knowledge acquisition. The dataset annotation follows a human-robot shared assembly taxonomy. HA-ViD includes (1) multi-view, multi-modality data, fine-grained action annotations (subject, action verb, manipulated object, target object, and tool), (2) human pause and error annotations, and (3) collaboration status annotations to enable technological breakthroughs in both foundational video understanding techniques and industrial application-oriented knowledge comprehension. 
As for limitation of HA-ViD, the imbalanced class distribution of primitive tasks and atomic actions could cause biased model performance and insufficient learning. In addition, the true complexities and diversities of real-world assembly scenarios may still not be fully captured. We benchmarked strong baseline methods of action recognition, action segmentation, object detection and multi-object tracking, and analyzed their performance on comprehending application-oriented knowledge in assembly progress, process efficiency, task collaboration, skill parameter and human intention. The results show that our dataset captures essential challenges for foundational video understanding tasks, and new methods need to be explored for application-oriented knowledge comprehension. We envision HA-ViD will open opportunities for advancing video understanding techniques to enable futuristic ultra-intelligent industry. § ACKNOWLEDGEMENTS This work was supported by The University of Auckland FRDF New Staff Research Fund (No. 3720540). 10 Duque2019 D. A. Duque, F. A. Prieto, and J. G. Hoyos, “Trajectory generation for robotic assembly operations using learning by demonstration,” Robotics and Computer Integrated Manufacturing, vol. 57, no. December 2018, pp. 292–302, 2019. Lamon2019 E. Lamon, A. De Franco, L. Peternel, and A. Ajoudani, “A Capability-Aware Role Allocation Approach to Industrial Assembly Tasks,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3378–3385, 2019. Frustaci2020 F. Frustaci, S. Perri, G. Cocorullo, and P. Corsonello, “An embedded machine vision system for an in-line quality check of assembly processes,” Procedia Manufacturing, vol. 42, pp. 211–218, 2020. Cicirelli2022 G. Cicirelli, R. Marani, L. Romeo, M. G. Domínguez, J. Heras, A. G. Perri, and T. D'Orazio, “The HA4M dataset: Multi-Modal Monitoring of an assembly task for Human Action recognition in Manufacturing,” Scientific Data, vol. 9, p. 745, dec 2022. Ben-Shabat2021 Y. Ben-Shabat, X. Yu, F. Saleh, D. Campbell, C. Rodriguez-Opazo, H. Li, and S. Gould, “The IKEA ASM Dataset: Understanding people assembling furniture through actions, objects and pose,” Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021, pp. 846–858, 2021. Sener2022 F. Sener, R. Wang, and A. Yao, “Assembly101: A Large-Scale Multi-View Video Dataset for Understanding Procedural Activities,” Cvpr, 2022. Toyer2017 S. Toyer, A. Cherian, T. Han, and S. Gould, “Human Pose Forecasting via Deep Markov Models,” DICTA 2017 - 2017 International Conference on Digital Image Computing: Techniques and Applications, vol. 2017-Decem, pp. 1–8, 2017. Zhang2020 J. Zhang, P. Byvshev, and Y. Xiao, “A video dataset of a wooden box assembly process: Dataset,” DATA 2020 - Proceedings of the 3rd Workshop on Data Acquisition To Analysis, Part of SenSys 2020, BuildSys 2020, pp. 35–39, 2020. Ragusa2021 F. Ragusa, A. Furnari, S. Livatino, and G. M. Farinella, “The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain,” in 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1568–1577, IEEE, jan 2021. Georgeff1986 M. Georgeff and A. Lansky, “Procedural knowledge,” Proceedings of the IEEE, vol. 74, no. 10, pp. 1383–1398, 1986. Mayer2004 R. E. Mayer, “Should There Be a Three-Strikes Rule Against Pure Discovery Learning?,” American Psychologist, vol. 59, no. 1, pp. 14–19, 2004. Lin2019 J. Lin, C. Gan, and S. 
Han, “TSM: Temporal Shift Module for Efficient Video Understanding,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7082–7092, IEEE, oct 2019. Bertasius2021 G. Bertasius, H. Wang, and L. Torresani, “Is Space-Time Attention All You Need for Video Understanding?,” in Proceedings of the 38th International Conference on Machine Learning, pp. 813–824, feb 2021. Carreira2017 J. Carreira and A. Zisserman, “Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724–4733, IEEE, jul 2017. Li2022 Y. Li, C.-Y. Wu, H. Fan, K. Mangalam, B. Xiong, J. Malik, and C. Feichtenhofer, “MViTv2: Improved Multiscale Vision Transformers for Classification and Detection,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4794–4804, IEEE, jun 2022. Yan2018 S. Yan, Y. Xiong, and D. Lin, “Spatial temporal graph convolutional networks for skeleton-based action recognition,” in 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, pp. 7444–7452, jan 2018. Wang2021 D. Wang, D. Hu, X. Li, and D. Dou, “Temporal Relational Modeling with Self-Supervision for Action Segmentation,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 2729–2737, dec 2021. Farha2019 Y. A. Farha and J. Gall, “MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2019-June, pp. 3570–3579, IEEE, jun 2019. Wang2020 Z. Wang, Z. Gao, L. Wang, Z. Li, and G. Wu, “Boundary-Aware Cascade Networks for Temporal Action Segmentation,” in ECCV, vol. Part XXV 1, pp. 34–51, 2020. Amit2014 Y. Amit and P. Felzenszwalb, “Object Detection,” in Computer Vision, pp. 537–542, Boston, MA: Springer US, 2014. Ren2017 S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137–1149, jun 2017. Jain G. J. A. C. A. S. J. B. N. Y. K. K. M. T. J. F. i. L. Z. Y. C. W. A. V. D. M. Z. W. C. F. J. N. L. U. V. Jain, “YOLOv5,” Zhang2022a H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum, “DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection,” mar 2022. Luo2021 W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T. K. Kim, “Multiple object tracking: A literature review,” Artificial Intelligence, vol. 293, p. 103448, apr 2021. Bewley2016 A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and realtime tracking,” in 2016 IEEE International Conference on Image Processing (ICIP), pp. 3464–3468, IEEE, sep 2016. Zhang2022 Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and X. Wang, “ByteTrack: Multi-Object Tracking by Associating Every Detection Box,” in Proceedings of the European Conference on Computer Vision (ECCV), vol. 2, oct 2022. Kuehne2014 H. Kuehne, A. Arslan, and T. Serre, “The Language of Actions: Recovering the Syntax and Semantics of Goal-Directed Human Activities,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 780–787, IEEE, jun 2014. Kay2017 W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, “The Kinetics Human Action Video Dataset,” may 2017. Lin2014 T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. 
Zitnick, and P. Dollár, “Microsoft COCO: Common Objects in Context,” may 2014. Dendorfer2020 P. Dendorfer, H. Rezatofighi, A. Milan, J. Shi, D. Cremers, I. Reid, S. Roth, K. Schindler, and L. Leal-Taixé, “MOT20: A benchmark for multi object tracking in crowded scenes,” mar 2020. Tu2022 D. Tu, W. Sun, X. Min, G. Zhai, and W. Shen, “Video-based Human-Object Interaction Detection from Tubelet Tokens,” in Advances in Neural Information Processing Systems 35, pp. 23345—-23357, 2022. Chiou2021 M.-J. Chiou, C.-Y. Liao, L.-W. Wang, R. Zimmermann, and J. Feng, “ST-HOI: A Spatial-Temporal Baseline for Human-Object Interaction Detection in Videos,” in Proceedings of the 2021 Workshop on Intelligent Cross-Data Analysis and Retrieval, (New York, NY, USA), pp. 9–17, ACM, aug 2021. Mees2020 O. Mees, M. Merklinger, G. Kalweit, and W. Burgard, “Adversarial Skill Networks: Unsupervised Robot Skill Learning from Video,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 4188–4194, IEEE, may 2020. Zheng2022 P. Zheng, S. Li, L. Xia, L. Wang, and A. Nassehi, “A visual reasoning-based approach for mutual-cognitive human-robot collaboration,” CIRP Annals, vol. 71, no. 1, pp. 377–380, 2022. Jeon2022 J. Jeon, H.-r. Jung, F. Yumbla, T. A. Luong, and H. Moon, “Primitive Action Based Combined Task and Motion Planning for the Service Robot,” Frontiers in Robotics and AI, vol. 9, feb 2022. Berger2016 E. Berger, S. Grehl, D. Vogt, B. Jung, and H. B. Amor, “Experience-based torque estimation for an industrial robot,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 144–149, IEEE, may 2016. Lu2022 Y. Lu, H. Zheng, S. Chand, W. Xia, Z. Liu, X. Xu, L. Wang, Z. Qin, and J. Bao, “Outlook on human-centric manufacturing towards Industry 5.0,” Journal of Manufacturing Systems, vol. 62, pp. 612–627, jan 2022. Supplementary Document for HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding § OVERVIEW This supplementary document contains additional information about HA-ViD. Section <ref> further describes the process of building HA-ViD, including the design of the Generic Assembly Box, data collection, data annotation, and annotation statistics. Section <ref> presents the implementation details of our baselines, discusses the experimental results, and provides the licenses of the benchmarked algorithms. Section <ref> discusses the bias and societal impact of HA-ViD. Section <ref> presents the research ethics for HA-ViD. § HA-VID CONSTRUCTION In this section, we further discuss the process of building HA-ViD. First, we introduce the design of the Generic Assembly Box. Second, we describe the three-stage data collection process. Third, we describe data annotation details. Finally, we present critical annotation statistics. §.§ Generic Assembly Box Design To ensure the dataset is representative of real-world industrial assembly scenarios, we designed the Generic Assembly Box (GAB), a 250×250×250mm box (see Figure <ref>), which consists of 11 standard parts and 25 non-standard parts and requires 4 standard tools during assembly (see Figure 2). GAB has three assembly plates, including General Plate, Gear Plate, and Cylinder Plate, and three blank plates. The opposite face of each assembly plate is intentionally left blank to allow a different assembly orientation. Three assembly plates feature different design purposes. General Plate (see Figure <ref>) was designed to capture action diversity. The general plate consists of 11 different parts. 
The parts used in this plate were designed to include the different directions, shapes, and forces in which the common assembly actions can be performed. Since there is close to no precedence between assembling different parts, General Plate results in the most variety of possible assembly sequences. Gear Plate (see Figure <ref>) was designed to capture parallel two-handed tasks, e.g., two hands inserting two spur gears at the same time. Gear Plate has three gear sub-systems: large gear, small gear, and worm gear, which mesh together to form a gear mechanism. The plate consists of 12 different parts. Gear Plate has a higher precedence constraint on assembly sequence than the general plate. Cylinder Plate (see Figure <ref>) was designed to capture two-handed collaborative tasks, e.g., two hands collaborating on screwing the cylinder cap onto the cylinder base. Cylinder Plate requires assembling a cylinder subassembly and fastening it onto the plate. This plate consists of 11 parts. The parts were designed to represent assembling a subassembly where parts become fully occluded or partially constrained to another part (see the cylinder in Figure <ref>). Table <ref> shows a summary of the three assembly plates. The box can be easily replicated using standard components, laser cutting, and 3D printing. The CAD files and bill of material can be downloaded from our website[<https://iai-hrc.github.io/ha-vid>]. §.§ Data Collection Data was collected on three Azure Kinect RGB+D cameras mounted to an assembly workbench. 30 participants (15 male, 15 female) were recruited for a 2-hour session to assemble the GAB. During the data collection session, participants were given a fully disassembled assembly box, assembly parts, tools, and instructions. To capture the natural progress of human procedural knowledge acquisition and behaviors (varying efficiency, alternative routes, pauses, and errors), we designed a three-stage progressive assembly setup: Discovery: Participants were asked to assemble a plate twice following the minimal visual instructions (see Figure <ref>). Instruction: Participants were asked to assemble a plate six times following the detailed step-by-step instructions (see Figure <ref>). Six different instruction versions were created, each presenting a different assembly sequence. Each participant was given three different instruction versions, where two attempts were completed following each instruction version. The three instruction versions given to one participant must contain assembling the plate facing both upwards and sideways. Practice: After the first two stages, participants were asked to assemble a plate four times without any instructions. During this stage, participants performed two attempts of each plate facing upwards and two attempts of each plate facing sideways. The instruction files are available on our website[https://iai-hrc.github.io/ha-vid]. §.§ Data Annotation To capture rich assembly knowledge, we provide temporal and spatial annotations. Temporal Annotations: In HR-SAT[Details for the definitions of primitive task and atomic action can be found at: https://iai-hrc.github.io/hr-sat], an assembly task can be decomposed into a series of primitive tasks, and each primitive task can be further decomposed into a series of atomic actions. For both primitive task and atomic action, there are five fundamental description elements: subject, action verb, manipulated object, target object, and tool (see Figure <ref>). 
We follow HR-SAT to provide primitive task and atomic action annotations for the assembly processes recorded in the videos. To enable the research in two-handed collaboration task understanding, we defined the two hands of each participant as two separate subjects, and we annotated action verb, manipulated object, target object, and tool for each subject. For both primitive task and atomic action annotations, we follow the annotation specification shown in Figure <ref>. Spatial Annotations: For spatial annotations, we use CVAT[https://www.cvat.ai/] to annotate the subjects (two hands), objects (manipulated object, target object), and tools via bounding boxes, shown in Figure <ref>. §.§ Annotation Statistics Overall, the dataset contains temporal annotations of 81 primitive task classes and 219 atomic action classes. The trainset and testset were split by subjects to balance data diversity. Figure <ref> and Figure <ref> show the class distributions of primitive task and atomic action annotations in the trainset and testset, respectively. Overall, the dataset contains spatial annotations of 42 classes. The trainset and testset were split by subjects to balance data diversity. Figure <ref> shows the class distributions of spatial annotation classes in the trainset and testset. § EXPERIMENT In this section, we provide the implementation details of the baselines, the results unreleased in the main paper, further discussions on the results, and the licenses of the benchmarked algorithms. §.§ Action Recognition We use the MMSkeleton[https://github.com/open-mmlab/mmskeleton] toolbox to benchmark ST-GCN <cit.>; the MMAction2[https://github.com/open-mmlab/mmaction2] toolbox to benchmark I3D <cit.>, TimeSformer <cit.>, and MVITv2 <cit.>; and the original codes to benchmark TSM <cit.>. For ST-GCN, we first extracted the upper 26 skeleton joints from each frame as the input. Action clips which consisted of frames where the skeleton could not be extracted, were excluded from reporting the performance. For I3D (rgb), TSM, MVITv2, and TimeSformer, the RGB frames of each clip were used as input. For I3D (flow), we extracted TV-L1 optical flow frames from each clip as input. To compare model performance on different views (side, front, and top), hands (left and right hands) and annotation levels (primitive task and atomic action), we conducted a combinational benchmark, which means we benchmark each model on 12 sub-datasets (see Figure <ref>). We report the Top-1 and Top-5 accuracy on these sub-datasets in Table <ref>. ST-GCN: Following the default parameters from MMSkeleton, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.1 and decayed by a factor of 10 after epochs 10 and 50. We sampled all frames as the input. The ST-GCN was pretrained on NTU <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 70 epochs, we set the total training epochs to be 80 with a batch size of 16. TSM: Following the original paper’s suggestions, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.0025 and decayed by a factor of 10 after epochs 20 and 40. 8 frames were uniformly sampled from each clip. The TSM was pretrained on ImageNet <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 40 epochs, we set the total training epochs to be 50 with a batch size of 16. 
TimeSformer: Following the default parameters from MMAction2, we use the SGD optimizer. The learning rate was initialized as 0.005 and decayed by a factor of 10 after epochs 5 and 10. 8 frames were uniformly sampled from each clip. The TimeSformer was pretrained on ImageNet-21K <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 8. I3D (rgb) and (flow): Following the default parameters from MMAction2, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.01 and decayed by a factor of 10 after epochs 40 and 80. 32 frames were uniformly sampled from each clip. I3D takes ResNet50 pretrained on ImageNet-1K <cit.> as the backbone, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 4. MVITv2: Following the default parameters from MMAction2, we use the AdamW optimizer with a cosine annealing learning rate with the minimum learning rate of 0.00015. 16 frames were uniformly sampled from each clip. The MVITv2 was pre-trained on Kinetics-400 <cit.> via MaskFeat <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 4. The benchmarking results of action recognition are shown in Table <ref>. We use a single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model for each sub-dataset. §.§ Action Segmentation We benchmark three action segmentation algorithms: MS-TCN, DTGRM, and BCN, and report the frame-wise accuracy (Acc), segmental edit distance (Edit) and segmental F1 score at overlapping thresholds 10% in Table <ref>. Before benchmarking, we extract I3D features for each frame as the input of the action segmentation algorithms. We use the Pytorch version of the I3D implementation[https://github.com/piergiaj/pytorch-i3d] and the pretrained model on ImageNet <cit.> and Kinetics <cit.>. For action segmentation, we also conducted a combinational benchmark. MS-TCN: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with a fixed learning rate of 0.0005, dropout of 0.5 and sampling rate of 1 (taking all frames into the network). As the slowest convergence of the 12 sub-datasets was observed around 800 epochs, we set the total training epochs to be 1000 with a batch size of 10. DTGRM: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with a fixed learning rate of 0.0005, dropout of 0.5 and sampling rate of 1. As the slowest convergence of the 12 sub-datasets was observed around 800 epochs, we set the total training epochs to be 1000 with a batch size of 16. BCN: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with the learning rate of 0.001 for the first 30 epochs and 0.0001 for the rest epochs, dropout of 0.5 and sampling rate of 1. As the slowest convergence of the 12 sub-datasets was observed around 200 epochs, we set the total training epochs to be 300 with a batch size of 1. The benchmarking results of action segmentation are shown in Table <ref>. We use a single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model for each sub-dataset. 
§.§ Object Detection We benchmark three object detection algorithms: Faster-RCNN <cit.>, YOLOv5 <cit.> and DINO <cit.> with different backbone networks. The results have been reported in the main paper. Therefore, we only discuss the implementation details here. We train Faster-RCNN and DINO using the implementation provided by the MMDetection <cit.> and train YOLOv5 using the implementation provided by the MMYOLO[https://github.com/open-mmlab/mmyolo]. Faster-RCNN: We train Faster-RCNN with three backbone networks: ResNet50, ResNet101, and ResNext101. All the networks have been pretrained on the coco_2017_train dataset <cit.> and finetuned on our dataset. Following the default setting provided by MMDetection, we use the SGD optimizer with a momentum of 0.9 and weight decay of 0.0001. The learning rate was initialized as 0.02 and decayed by a factor of 10 at epochs 8 and 11. As the slowest convergence of the three models was observed around 14 epochs, we set the total training epochs to be 20. We set the batch size as 4, 1, and 5, respectively, for ResNet50, ResNet101, and ResNext101. YOLOv5: We train YOLOv5-small and YOLOv5-large using MMDetection. These two models have been pretrained on the coco_2017_train dataset, and finetuned on our dataset. Following the default setting provided by MMDetection, we use the SGD optimizer with a momentum of 0.937, weight decay of 0.0005 for both models. The linear learning rate with base learning rate of 0.0025 and factor of 0.01 was applied to YOLOv5-small. The linear learning rate with base learning rate of 0.0025 and factor of 0.1 was applied to YOLOv5-large. We set the total training epochs to be 100 epochs with a batch size of 32 and 50 epochs with a batch size of 10, respectively, for YOLOv5-small and YOLOv5-large to ensure convergence. DINO: We benchmark the DINO model with the Swin-large network as the backbone. The model has been pretrained on the coco_2017_train dataset, and finetuned on our dataset. Following the default setting provided by MMDetection, we use the AdamW optimizer with a learning rate of 0.0001 and weight decay of 0.0001. As the convergence was observed around 6 epochs, we set the total training epochs to be 10 with a batch size of 1. We use single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model. §.§ Multi-Object Tracking In this paper, we focus on tracking-by-detection methods because, normally, tracking-by-detection methods perform better than joint-detection-association methods <cit.>. Since we already benchmarked the object detection methods, we only need to test the SOTA trackers. We benchmark SORT <cit.> and ByteTrack <cit.> trackers on the detection results of DINO and ground truth annotations, respectively. The results have been reported in the main paper. Since the trackers are not neural networks, we do not need to train them and explain the implementation details. We always use the default parameters of the algorithm. For more details, please refer to the papers <cit.> and their GitHub repositories. §.§ Discussion In this section, we further discuss the results from the above experiments and analyze a prevalent problem of video understanding – occlusion. §.§.§ General Discussion Action recognition: We found the Top-1 accuracy of primitive task recognition is 15.6% higher on average than atomic action recognition, and the atomic action recognition performance of the left hand is 2.4% higher on average than the right hand. 
One possible reason behind these two observations can be occlusion since (1) primitive task recognition is less influenced by occlusion because it can rely on the key motion or relevant object recognition; and (2) the left hand is less occluded because the side-view camera is mounted on the left-side of the participant. Action segmentation: We found (1) the frame-wise accuracy (Acc) of atomic action segmentation is 4% lower on average than primitive task segmentation, as atomic actions have higher diversity and current methods face under-segmentation issues (refer to the main paper); and (2) on the atomic action level, the Acc of the left hand is 6% higher on average than the right hand, where one possible reason could be that the left hand is less occluded. Object detection: From Table 4 of the main paper, we found that (1) the large-scale end-to-end Transformer based model (DINO) performs the best, and the traditional two-stage method (Faster-RCNN) has better performance on small objects but worse performance on large objects than the one-stage method (YOLOv5), which is consistent with the conclusion of <cit.>; (2) current methods still face great challenges in small object detection, as the best model only has 27.4% average precision on small object detection; and (3) recognizing objects with same/similar appearances but different sizes is challenging (see Figure <ref>, e.g., Bar and Rod, Hole C1-C4, and two Wrenches). Multi-object detection: From Table 5 of the main paper, we found that (1) object detection performance is the decisive factor in tracking performance; (2) with perfect detection results, even the simple tracker (SORT) can achieve good tracking results, as SORT has 94.5% multi-object tracking accuracy on the ground truth object bounding boxes; and (3) ByteTrack can track blurred and occluded objects better (comparing b1-2, c1-2, and f1-2 in Figure <ref>) due to taking low-confidence detection results into association, but it generates more ID switches (IDS) (seeing a2-f2 in Figure <ref>) due to the preference of creating new tracklets. §.§.§ Occlusion Analysis From the discussion in Section <ref>, we can see occlusion is a prevalent problem of video understanding. Therefore, we further explore the impact of occlusion on video understanding tasks in this Section. Table <ref> reports the average results over two hands of action recognition and segmentation on three views and the combined view (Com). We fuse the features from three views before the softmax layer to evaluate the performance of the combined view. The results show the significant benefits of combining three views which offers a viable solution for mitigating occlusion challenges in industrial settings. Figure <ref> shows the impact of occlusion on tracking and reidentification via visualizing SORT and ByteTrack tracking results on sampled ground truth object annotations. To quantitatively analyze the occlusion problem, we design two metrics: occlusion duration (OD) and occlusion frequency (OF). Given a video of n frames v=[f_1,…,f_n], the observation of object k is denoted as O_k=[o_t^k,o_t+1^k,…,o_t+m^k], where t and t+m are the frame numbers that object k first, and last appear, respectively. o_j^k={0,1}, where 0 denotes observed, and 1 denotes unobserved. OD_k=1/m∑_j=t^j=t+mo_j^k and OF_k=1/2∑_j=t^j=t+m-1|o_j+1^k-o_j^k|. OD_k and OF_k describe the occluded duration and occluded frequency of object k in a video. 
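As a concrete reference, both quantities can be evaluated from a per-frame visibility mask as in the following minimal sketch; the helper name and the boolean input convention are assumptions, while the two formulas follow the definitions above.

```python
import numpy as np


def occlusion_metrics(observed):
    """Occlusion duration (OD) and occlusion frequency (OF) for one object.

    `observed` is a boolean sequence over the frames between the object's first and
    last appearance (True = visible, False = occluded), i.e. the observations
    o_t, ..., o_{t+m} with o = 1 encoding "unobserved" as in the definition above."""
    o = 1 - np.asarray(observed, dtype=int)   # 1 = unobserved, 0 = observed
    m = len(o) - 1                            # span between first and last appearance
    if m <= 0:
        return 0.0, 0.0
    od = o.sum() / m                          # OD_k = (1/m) * sum_j o_j^k (division by m per the definition)
    of = np.abs(np.diff(o)).sum() / 2         # OF_k = (1/2) * sum_j |o_{j+1}^k - o_j^k|
    return float(od), float(of)


# Example: an object visible for 10 frames, occluded for three frames in the middle
# (one occlusion event): OD = 3/9 ~ 0.33 and OF = 1.0
visible = [True, True, True, False, False, False, True, True, True, True]
od_k, of_k = occlusion_metrics(visible)
```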
We calculate the average OD and OF over every object in our testing dataset and compare the results with the tracking results on ground truth object annotations in Table <ref>. Table <ref> shows a negative correlation between mOD and mOF with MOTA and IDS, which is also consistent with the findings in Figure <ref>. We envision OD and OF will serve as effective occlusion evaluation tools for developing better object association modules and reidentification modules in MOT. §.§ Licenses of the benchmarked algorithms The licenses of the benchmarked algorithms are listed in Table <ref>. § DATASET BIAS AND SOCIETAL IMPACT Our objective is to construct a dataset that can represent interesting and challenging problems in real-world industrial assembly scenarios. Based on this objective, we developed the Generic Assembly Box that encompasses standard and non-standard parts widely used in industry and requires typical industrial tools to assemble. However, there is still a gap between our dataset and the real-world industrial assembly scenarios. The challenges lie in: 1) the existence of numerous unique assembly actions, countless parts, and tools in the industry; 2) the vast diversity of operating environments in the industry; 3) various agents and multi-agent collaborative assembly scenarios in the industry. Therefore, additional efforts would be needed to apply the models trained on our dataset to real-world industrial applications. We hope the fine-grained annotations of this dataset can advance the technological breakthrough in comprehensive assembly knowledge understanding from videos. Then, the learned knowledge can benefit various real-world applications, such as robot skill learning, human-robot collaboration, assembly process monitoring, assembly task planning, and quality assurance. We hope this dataset can contribute to technological advancements facilitating the development of smart manufacturing, enhancing production efficiency, and reducing the workload and stress on workers. § ETHICS APPROVAL HA-ViD was collected with ethics approval from the University of Auckland Human Participants Ethics Committee. The Reference Number is 21602. All participants were sent a Participant Information Sheet and Consent Form[The participant consent form is available at: <https://www.dropbox.com/sh/ekjle5bwoylmdcf/AACLd_NqT3p2kxW7zLvvauPta?dl=0>] prior to the collection session. We confirmed that they had agreed to and signed the Consent form before proceeding with any data collection. § DATA DOCUMENTATION We follow the datasheet proposed in <cit.> for documenting our HA-ViD dataset: 1. Motivation (a) For what purpose was the dataset created? This dataset was created to understand comprehensive assembly knowledge from videos. The previous assembly video datasets fail to (1) represent real-world industrial assembly scenarios, (2) capture natural human behaviors (varying efficiency, alternative routes, pauses and errors) during procedural knowledge acquisition, (3) follow a consistent annotation protocol that aligns with human and robot assembly comprehension. (b) Who created the dataset, and on behalf of which entity? This dataset was created by Hao Zheng, Regina Lee and Yuqian Lu. At the time of creation, Hao and Regina were PhD students at the University of Auckland, and Yuqian was a senior lecturer at the University of Auckland. (c) Who funded the creation of the dataset? The creation of this dataset was partially funded by The University of Auckland FRDF New Staff Research Fund (No. 3720540). 
(d) Any other Comments? None. 2. Composition (a) What do the instances that comprise the dataset represent? For the video dataset, each instance is a video clip recording a participant assembling one of the three plates of the designed Generic Assembly Box. Each instance consists of two-level temporal annotations: primitive task and atomic action, and spatial annotations, which means the bounding boxes for subjects, objects, and tools. (b) How many instances are there in total? We recorded 3222 videos over 86.9 hours, totaling over 1.5M frames. To ensure annotation quality, we manually labeled temporal annotations for 609 plate assembly videos and spatial annotations for over 144K frames. (c) Does the dataset contain all possible instances, or is it a sample (not necessarily random) of instances from a larger set? Yes, the dataset contains all possible instances. (d) What data does each instance consist of? See 2. (a). (e) Is there a label or target associated with each instance? See 2. (a). (f) Is any information missing from individual instances? No. (g) Are relationships between individual instances made explicit? Yes, each instance (video clip) contains one participant performing one task (assembling one of the three plates of the designed Generic Assembly Box.) (h) Are there recommended data splits? For action recognition and action segmentations, we provide two data splits: trainset and testset. For object detection and multi-object tracking, we provide another two data splits: trainset and testset. Refer to Section <ref> for details. (i) Are there any errors, sources of noise, or redundancies in the dataset? Given the scale of the dataset and complexity in annotation, it is possible that some ad-hoc errors exist in our annotations. However, we have given our best efforts (via human checks and quality checking code scripts) in examining manually labelled annotations to minimize these errors. (j) Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained. (k) Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)? No. (l) Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? No. (m) Does the dataset relate to people? Yes, all videos are recordings of human assembly activities, and all annotations are related to the activities. (n) Does the dataset identify any subpopulations (e.g., by age, gender)? No. Our participants have different ages and genders. But our dataset does not identify this information. To ensure this, we have blurred participants’ faces in the released videos. (o) Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? No, as explained in 2. (n), we have blurred participants’ faces in the released videos. (p) Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? No. (q) Any other comments? None. 3. 
Collection Process (a) How was the data associated with each instance acquired? For each video instance, we provide temporal annotations and spatial annotations. We follow HR-SAT to create temporal annotations to ensure the annotation consistency. The temporal annotations were manually created and checked by our researchers. The spatial annotations were manually created by postgraduate students at the University of Auckland, who were trained by one of our researchers to ensure the annotation quality. (b) What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? Data were collected on three Azure Kinect RGB+D cameras via live video capturing while a participant is performing the assembly actions, and we manually labeled all the annotations. (c) If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? No, we created a new dataset. (d) Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? For video recordings, volunteer participants were rewarded gift cards worth NZ$50.00 upon completion of the 2-hour data collection session. For data annotations, we contracted students at the University of Auckland, and they were paid at a rate of NZ$23.00 per hour. (e) Over what timeframe was the data collected? The videos were recorded during August to September of 2022, and the annotations were made during October of 2022 to March of 2023. (f) Were any ethical review processes conducted (e.g., by an institutional review board)? Yes, we obtained ethics approval from the University of Auckland Human Participants Ethics Committee. More information can be found in Section <ref>. (g) Does the dataset relate to people? Yes, we recorded the process of people assembling the Generic Assembly Box. (h) Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)? We collected the data from the individuals in question directly. (i) Were the individuals in question notified about the data collection? Yes, all participants were informed of the data collection purpose, process and the intended use of the data. They were sent a Participant Information Sheet and signed Consent Form prior to the collection session. All sessions started with an introduction where instructions on data collection, health and safety and confirmation of the Consent Form were discussed. (j) Did the individuals in question consent to the collection and use of their data? Yes, all participants were sent a Participant Information Sheet and Consent Form prior to the collection session. We confirmed that they had agreed to and signed the Consent form regarding the collection and use of their data before proceeding with any data collection. Details can be found in Section <ref>. (k) If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? Yes. The Participant Information Sheet and Consent Form addressed how they can request to withdraw and remove their data from the project and how the data will be used. (l) Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? 
No, all data have been processed to be made de-identifiable and all annotations are on objective world states. The potential impact of the dataset and its use on data subjects were addressed in the Ethics Approval, Participant Information Sheet and Consent Form. Details can be found in Section <ref>. (m) Any other comments? None. 4. Preprocessing, Cleaning and Labeling (a) Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? Yes, we have cleaned the videos by blurring participants’ faces. We have also extracted I3D features from the video for action segmentation benchmarking. (b) Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? No, we only provide the cleaned videos (participants’ faces being blurred) to the public due to the ethics issues. (c) Is the software used to preprocess/clean/label the instances available? Yes, we used CVAT to draw bounding boxes. Details can be found in Section <ref>. (d) Any other comments? None. 5. Uses (a) Has the dataset been used for any tasks already? No, the dataset is newly proposed by us. (b) Is there a repository that links to any or all papers or systems that use the dataset? Yes, we provide the link to all related information on our website. (c) What (other) tasks could the dataset be used for? The dataset can also be used for Compositional Action Recognition, Human-Object Interaction Detection, and Visual Question Answering. (d) Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? We granulated the assembly action annotation into subject, action verb, manipulated object, target object and tool. We believe the fine-grained and compositional annotations can be used for more detailed and precise descriptions of the assembly process, and the descriptions can serve various real-world industrial applications, such as robot learning, human robot collaboration, and quality assurance. (e) Are there tasks for which the dataset should not be used? The usage of this dataset should be limited to the scope of assembly activity or task understanding, e.g., action recognition, action segmentation, action anticipation, human-object interaction detection, visual question answering, and the downstream industrial applications, e.g., robot learning, human-robot collaboration, and quality assurance. Any work that violates our Code of Conduct are forbidden. Code of Conduct can be found at our website[<https://iai-hrc.github.io/ha-vid>.]. (f) Any other comments? None. 6. Distribution (a) Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? Yes, the dataset will be made publicly available. (b) How will the dataset will be distributed (e.g., tarball on website, API, GitHub)? The dataset could be accessed on our website. (c) When will the dataset be distributed? We provide private links for the review process. Then the dataset will be released to the public after the review process. (d) Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? We release our dataset and benchmark under CC BY-NC 4.0[<https://creativecommons.org/licenses/by-nc/4.0/>.] license. 
(e) Have any third parties imposed IP-based or other restrictions on the data associated with the instances? No. (f) Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? No. (g) Any other comments? None. 7. Maintenance (a) Who is supporting/hosting/maintaining the dataset? Regina Lee and Hao Zheng are maintaining, with continued support from Industrial AI Research Group at The University of Auckland. (b) How can the owner/curator/manager of the dataset be contacted (e.g., email address)? E-mail addresses are at the top of the paper. (c) Is there an erratum? Currently, no. As errors are encountered, future versions of the dataset may be released and updated on our website. (d) Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances’)? Yes, see 7.(c). (e) If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? No. (f) Will older versions of the dataset continue to be supported/hosted/maintained? Yes, older versions of the dataset and benchmark will be maintained on our website. (g) If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? Yes, errors may be submitted to us through email. (h) Any other comments? None. 10 Yan2018 S. Yan, Y. Xiong, and D. Lin, “Spatial temporal graph convolutional networks for skeleton-based action recognition,” in 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, pp. 7444–7452, jan 2018. Carreira2017 J. Carreira and A. Zisserman, “Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724–4733, IEEE, jul 2017. Bertasius2021 G. Bertasius, H. Wang, and L. Torresani, “Is Space-Time Attention All You Need for Video Understanding?,” in Proceedings of the 38th International Conference on Machine Learning, pp. 813–824, feb 2021. Li2022 Y. Li, C.-Y. Wu, H. Fan, K. Mangalam, B. Xiong, J. Malik, and C. Feichtenhofer, “MViTv2: Improved Multiscale Vision Transformers for Classification and Detection,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4794–4804, IEEE, jun 2022. Lin2019 J. Lin, C. Gan, and S. Han, “TSM: Temporal Shift Module for Efficient Video Understanding,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7082–7092, IEEE, oct 2019. Shahroudy2016 A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, “NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1010–1019, IEEE, jun 2016. Deng2009 J. Deng, W. Dong, R. Socher, L.-J. Li, Kai Li, and Li Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, IEEE, jun 2009. Kay2017 W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, “The Kinetics Human Action Video Dataset,” may 2017. Wei2022 C. Wei, H. Fan, S. Xie, C.-Y. Wu, A. Yuille, and C. Feichtenhofer, “Masked Feature Prediction for Self-Supervised Visual Pre-Training,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14648–14658, IEEE, jun 2022. Farha2019 Y. A. Farha and J. 
Gall, “MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2019-June, pp. 3570–3579, IEEE, jun 2019. Wang2021 D. Wang, D. Hu, X. Li, and D. Dou, “Temporal Relational Modeling with Self-Supervision for Action Segmentation,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 2729–2737, dec 2021. Wang2020 Z. Wang, Z. Gao, L. Wang, Z. Li, and G. Wu, “Boundary-Aware Cascade Networks for Temporal Action Segmentation,” in ECCV, vol. Part XXV 1, pp. 34–51, 2020. Ren2017 S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137–1149, jun 2017. Jain G. J. A. C. A. S. J. B. N. Y. K. K. M. T. J. F. i. L. Z. Y. C. W. A. V. D. M. Z. W. C. F. J. N. L. U. V. Jain, “YOLOv5,” Zhang2022a H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum, “DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection,” mar 2022. Chen2019 K. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu, Z. Zhang, D. Cheng, C. Zhu, T. Cheng, Q. Zhao, B. Li, X. Lu, R. Zhu, Y. Wu, J. Dai, J. Wang, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, “MMDetection: Open MMLab Detection Toolbox and Benchmark,” jun 2019. Lin2014 T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common Objects in Context,” may 2014. Luo2021 W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T. K. Kim, “Multiple object tracking: A literature review,” Artificial Intelligence, vol. 293, p. 103448, apr 2021. Bewley2016 A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and realtime tracking,” in 2016 IEEE International Conference on Image Processing (ICIP), pp. 3464–3468, IEEE, sep 2016. Zhang2022 Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and X. Wang, “ByteTrack: Multi-Object Tracking by Associating Every Detection Box,” in Proceedings of the European Conference on Computer Vision (ECCV), vol. 2, oct 2022. Zhao2019 Z.-q. Zhao, P. Zheng, S.-T. Xu, and X. Wu, “Object Detection With Deep Learning: A Review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, pp. 3212–3232, nov 2019. Gebru2018 T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé, and K. Crawford, “Datasheets for Datasets,” mar 2018.
http://arxiv.org/abs/2307.03916v1
20230708064241
Phased Geometric Controls of V-Shaped Three-Level System for Zero-field Quantum Sensing
[ "Zhijie Li", "Xiangyu Ye", "Xi Kong", "Tianyu Xie", "Zhiping Yang", "Pengju Zhao", "Ya Wang", "Fazhan Shi", "Jiangfeng Du" ]
quant-ph
[ "quant-ph" ]
revtex4-2 These authors contributed equally to this work. CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China These authors contributed equally to this work. CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China [email protected] The State Key Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, 210093 Nanjing, China CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China [email protected] CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China School of Biomedical Engineering and Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China [email protected] CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China School of Physics, Zhejiang University, Hangzhou 310027, China Here we propose and demonstrate a phased geometric control protocol for zero-field double quantum gates in a V-shaped three-level spin system. This method utilizes linearly polarized microwave pulses and exploits the geometric qubit properties to prevent state leakage. By employing specific phased geometric controls, we realize a low-power multi-pulse zero-field sensing technique using single nitrogen-vacancy centers in diamond. 
Our method offers a novel approach to implement precise double quantum gate operations with an adaptable driving power, making it a valuable tool for zero-field spin-based quantum technology. Phased Geometric Controls of V-Shaped Three-Level System for Zero-field Quantum Sensing Jiangfeng Du August 12, 2023 ======================================================================================= In recent years, quantum sensing techniques based on controllable quantum systems have seen significant development. One successful example is the nitrogen-vacancy (NV) center in diamond, which possesses numerous merits, including nanoscale size, biocompatibility, and long coherence time under ambient conditions <cit.>. Typically, solid-state quantum systems require a static external magnetic field to lift the degeneracy of their ground-state manifolds. However, the presence of an external magnetic field suppresses the anisotropic interactions within the target sample, resulting in the loss of anisotropic physical information and causing inhomogeneous spectral broadening. A well-known zero-field technology is the zero- to ultralow-field nuclear magnetic resonance (ZULF NMR) spectroscopy. This technique effectively mitigates the inhomogeneous broadening of the spectrum in heterogeneous environments by attenuating the broadening effects induced by magnetic susceptibility <cit.>. More zero-field scenarios can be found in the field of electromagnetic biology <cit.> and in the research of ferromagnetic film magnetization <cit.>. In order to extend the zero-field condition to solid-state quantum systems like NV centers, the implementation of high-fidelity quantum control for the three-level system (3LS) is imperative. To address the near-degenerate quantum states in the absence of external fields, one approach is to employ circularly polarized microwave pulses <cit.>. While this method is effective when using a few pulses, it is limited in its ability to utilize double quantum (DQ) transitions with a multi-pulse method, which is crucial for sensing weak AC signals. Recent works have paved the way for realizing dynamical decoupling (DD) with linearly polarized microwave pulses at zero field by manipulating the 3LS via an effective Raman coupling <cit.>. This method enables the utilization of high-power multiple pulses, leveraging the advantage of DQ transitions at zero field to offer a significantly broader sensing bandwidth and expanded sensitivity range. However, the effectiveness of this method is compromised by the occurrence of state leakage due to the contradiction between the unavoidable hyperfine non-degeneracy and the limited driving field strength <cit.>. Subsequently, sequences that counteract the effects of the non-degeneracy detuning were proposed <cit.>. However, these methods, while relaxing the requirements for a strong driving field, lack versatility in their operations. In this study, we propose a method that prevents state leakage with a weak driving field by leveraging the geometric properties of the dressed states. Through this approach, a collection of effective DQ rotation operations can be achieved. Furthermore, we demonstrate a zero-field quantum sensing scheme utilizing single NV centers based on the proposed method. A single NV center in diamond consists of a substitutional nitrogen and a neighboring vacancy, its electron ground states form a typical 3LS (Fig. <ref>(a)). 
The Hamiltonian of a single NV center driven by a linearly polarized microwave field can be given by (ħ=1) <cit.> H= (D+d_∥Π_z) S_z^2+(Δ+δ/2)S_z+Ωcos(ω t+ϕ)S_x +d_⊥[Π_x(S_y^2-S_x^2)+Π_y(S_xS_y+S_yS_x)], where S=(S_x,S_y,S_z) is the spin-1 operator, D is the zero-field splitting, d_∥ and d_⊥ are the longitudinal and transverse electric dipole moment components, Δ refers to the Zeeman splitting induced by the external magnetic field along the NV center's principle axis, δ contains hyperfine couplings with the surrounding spin-1/2 nuclei, and Π=(Π_x,Π_y,Π_z) denotes the total effective electric field. Furthermore, Ω,ω and ϕ correspond to the amplitude, angular frequency, and phase of the linearly polarized microwave, respectively. Provided that the NV center's native nitrogen atom is a ^15N atom and there is no magnetic field along the NV center's symmetric axis, the splitting within each electronic state manifold is primarily attributed to hyperfine interactions and transverse electric dipole couplings. When a linearly polarized microwave pulse with angular frequency ω=D+d_∥Π_z is applied, it drives the oscillations |0⟩↔|+1⟩ and |0⟩↔|-1⟩ simultaneously. As a result, an effective Raman coupling emerges (Fig. <ref>(a)). By utilizing phase-fixed geometric controls <cit.> on the ground-state 3LS, it is possible to accumulate a geometric π phase on the state |+⟩ while keeping the state |-⟩ nearly unchanged, as long as the 2π cycle occurs rapidly compared to the detuning modulation (Fig. <ref>(b)). This approach enables the realization of a nearly π pulse within the {|+1⟩,|-1⟩} subspace. However, the presence of the hyperfine coupling δ and the transverse effective electric field (Π_x,Π_y) can induce state leakage to the |0⟩ state. Consequently, the imperfect controls in the dynamical decoupling sequence result in degraded spin coherence and distorted signal filtering, thereby diminishing the sensitivity. In this Letter, we introduce a novel phased geometric control method that prevents state leakage and enables a diverse range of operations. With the resonance condition ω=D+d_∥Π_z and the microwave polarization perpendicular to the transverse projection of Π, the Hamiltonian of the system can be expressed as <cit.> H̃'̃(Ω,ϕ)=(Ω e^iϕ/2 |0⟩ + δ'e^iψ/2 |-⟩)⟨+|+H.c., where δ'=√(δ^2+4d^2_⊥Π_y^2) and ψ=arctan(-2d_⊥Π_y/δ). Set Ω=δ', a complete transition between the states |0⟩ and |-⟩ is activated (Fig. <ref>(c)). The operation U_ϕ, which enables the complete transition |0⟩↔|-⟩, is defined by the incident microwave phase ϕ. Defining |ϕ'⟩=(e^iϕ|0⟩+e^iψ|-⟩)/√(2), the Hamiltonian Eq. (<ref>) can be written as H̃'̃(δ',ϕ)=δ'/√(2)(|ϕ'⟩⟨+|+|+⟩⟨ϕ'|). In the qubit spanned by {|+⟩,|ϕ'⟩}, Eq. (<ref>) is proportional to the Pauli-X operator, and U_ϕ acts as a 2π pulse defined by the duration T'=√(2)π/δ' (Fig. <ref>(c)). In this geometric spin qubit, any 2π cycle generates a microwave-phase independent factor of -1 before |+⟩ <cit.>. Moreover, the operation U_ϕ introduces conjugate phase factors in the {|0⟩,|-⟩} subspace (Fig. <ref>(d)), i.e. ⟨ -| U_ϕ |0⟩=-e^-i(ϕ-ψ), ⟨ 0| U_ϕ |-⟩=-e^i(ϕ-ψ). Therefore, the 4π pulse defined as G_π=U_ϕ U_ϕ+π precisely leads to |+⟩→|+⟩ and |-⟩→-|-⟩ (Fig. <ref>(e)). Consequently, the π pulse in the {|+1⟩,|-1⟩} subspace can be achieved without any leakage to the state |0⟩, directly bringing about the zero-field dynamical decoupling (ZDD) sequence with equally spaced G_π operations. 
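To make the gate construction above concrete, the following is a minimal numerical sketch (not code accompanying this work) that builds the effective Hamiltonian H̃'(Ω,ϕ) with Ω=δ' in the {|+⟩,|0⟩,|-⟩} basis, forms the 2π pulse U_ϕ of duration T'=√2π/δ' and the 4π gate G_π=U_ϕ U_ϕ+π, and checks the claimed action |+⟩→|+⟩, |-⟩→-|-⟩ without leakage into |0⟩. The numerical values chosen for δ', ψ and ϕ are purely illustrative, as are the variable names; numpy and scipy are assumed to be available for the matrix exponential.

    import numpy as np
    from scipy.linalg import expm

    delta_p = 2 * np.pi * 3.04e6   # non-degenerate splitting delta' in rad/s (illustrative value)
    psi = 0.3                      # psi = arctan(-2 d_perp Pi_y / delta), arbitrary here
    plus, zero, minus = np.eye(3)  # basis order: |+>, |0>, |->

    def H_eff(phi):
        # H'(Omega=delta', phi) = (delta' e^{i phi}/2 |0> + delta' e^{i psi}/2 |->)<+| + H.c.
        ket = 0.5 * delta_p * (np.exp(1j * phi) * zero + np.exp(1j * psi) * minus)
        H = np.outer(ket, plus)
        return H + H.conj().T

    def U(phi):
        # 2*pi pulse of duration T' = sqrt(2)*pi/delta'
        return expm(-1j * H_eff(phi) * np.sqrt(2) * np.pi / delta_p)

    phi = 1.1                      # arbitrary microwave phase
    # complete |0> <-> |-> transition with conjugate phase factors:
    assert np.isclose(minus @ U(phi) @ zero, -np.exp(-1j * (phi - psi)))
    # G_pi = U_phi U_{phi+pi} acts as |+> -> |+> and |-> -> -|->, with no leakage to |0>:
    G_pi = U(phi) @ U(phi + np.pi)
    assert np.allclose(G_pi @ plus, plus) and np.allclose(G_pi @ minus, -minus)

Both assertions hold for any choice of ϕ and ψ, reflecting that the sign flip on |-⟩ depends only on the relative phase π between the two interleaved 2π cycles, not on the absolute microwave phase ϕ or on ψ.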
Generally, phased geometric gate G_θ=U_ϕ U_ϕ+θ is equivalent to the phase gate P(θ) in the {|+⟩,|-⟩} subspace <cit.>, thus the effect of G_θ can be depicted as a rotation on the Bloch sphere (Fig. <ref>(f)). Following the scheme outlined above, arbitrary effective rotations along z-axis in the {|+⟩,|-⟩} subspace can be implemented. In addition to the G_±π gates, the G_±π/2 gates are particularly relevant in quantum sensing protocols due to their ability to convert coherence into state population in the {|+1⟩,|-1⟩} basis, which can be used to perform correlation of phases accumulated in separate DD sequences. We use a ^12C enriched diamond chip implanted with 40 keV ^15N^+ ions for our experiments. To counterbalance the geomagnetic field, a set of permanent magnets is employed, reducing the field strength to below 0.005 mT. In this regime, we ensure that Δ/δ<<1, where δ is dominated by the intrinsic ^15N hyperfine interaction A_∥. The transverse microwave polarization is aligned perpendicular to the transverse effective electric field vector, with the polarization direction along the x-axis. The resultant non-degenerate splitting is given by δ'=√(A_∥^2+4d^2_⊥Π_y^2)=2π×3.04(1) MHz. Therefore, the manipulating microwave can be determined by Ω=δ' and ω=D+d_∥Π_z=2π× 2870.79(1) MHz. Setting ϕ=0, the |+⟩↔|0'⟩ transition is driven with the angular frequency Ω=√(δ'^2+Ω^2)=√(2)δ', and the pulse length of the 2π operation U_ϕ is defined by T'=2π/Ω (Fig. <ref>(a)). Applying the Ramsey sequence with two separate 2π pulses, oscillation of the frequency δ'/2π emerges. The envelope of this oscillation directly reflects the dephasing occurring in the {|+ 1⟩,|- 1⟩} subspace. By inserting G_π in the middle of the Ramsey sequence, coherence revival is realized (Fig. <ref>(b)). With the specific 4π pulse available, we construct the ZDD-N sequence in the form of 2π (t'/2 4π t' 4π t'/2)^N/22π (Fig. <ref>(a)), where t'=t-2T' is the duration of each free evolution, t denotes the pulse interval, Nt is the total evolution time, and the superscript indicates the interchange of the phases of constituent 2π pulses. This interlaced sequence is designed to compensate fidelity errors caused by pulse imperfections up to the second order <cit.>. By applying the ZDD-N sequences, significant prolongation of the DQ coherence in the {|+ 1⟩,|0⟩,|- 1⟩} basis is observed as the pulse number N increases (Fig. <ref>(b)), indicating that there are sufficient manipulation fidelity and coherence resources available for quantum sensing purposes. Measurements of an AC signal with a frequency of f=0.5 MHz are shown in Fig. <ref>(c, e). The ZDD-64 sensed frequency is f'=1/(2t_s)=0.499(1) MHz, corresponding to the coherence dip at t_s=1.002(1)µs (Fig. <ref>(c)). In nanoscale NMR applications, the correlation spectroscopy sequence <cit.> is utilized to achieve high-resolution spectroscopy or to mitigate the effects of unwanted harmonics <cit.>. However, conventionally performing this free precession technique at zero field is challenging due to the incomplete manipulation of the 3LS. Nevertheless, it can be implemented by inserting G_π/2 gates between separate DD sequences (Fig. <ref>(d)). The lowest order correlation reveals the signal frequency <cit.>, as expressed by ⟨sinψ_1sinψ_2⟩∼cos (2π f(2τ+t) ), where τ is set to t_s according to the coherence dip in the ZDD spectrum, ψ_i is the phase accumulated during each individual ZDD sequence. The correlation signal of two ZDD-16 sequences for the AC field sensed in Fig. 
<ref>(c) is shown in Fig. <ref>(e). In order to demonstrate the advantage of the ZDD sequence constructed with phased geometric gates, we conduct a comparison with other DD sequences. As shown in Fig. <ref>(a), state evolutions of different DD sequences with distinct driving powers are simulated in the absence of signal fields. The state evolution under normal DD sequence is significantly distorted by detuning, while the LDD and the OC sequences <cit.> which utilize detuning-resistant phase arrangements as well as optimal control techniques, effectively suppress the distortion. In comparison, the ZDD sequence ensures equivalent populations during the free evolution periods. Measurements of the filter functions (FFs) F(t,ω) of different DD sequences at ω=0.5 MHz are presented in Fig. <ref>(b). With low driving fields, the signal filtering of the LDD and the OC sequences are distorted. However, the ZDD sequence operating with Ω=δ' exhibits a reasonable lineshape. The deviation between the ZDD-16 FF and the ideal FF is primarily caused by the finite duty cycle of the manipulating pulses. Nonetheless, this deviation is insignificant when the duty cycle is lower than 40% (Fig. <ref>(c)). In practice, the non-degenerate splitting δ' can be controlled by applying transverse strains, allowing for an adjustable duty cycle. In this work, we introduce a phased geometric control protocol and demonstrate its application in a zero-field quantum sensing technique. The sequences employed for dynamical decoupling and correlation spectroscopy are specifically designed using phased geometric gates. Compared to previous approaches, our method provides a wider range of gate operations in sequence design and prevents the detrimental effects of state leakage by utilizing the properties of the geometric phase. In addition to the NV center, other solid spin systems such as divacancies in SiC <cit.> offer more alternatives for implementing the DQ manipulations with phased geometric gates. These systems possess a non-degenerate splitting that can be easily adjusted by strains or electric fields, enabling precise operations even with a short dephasing time. This allows for a broadened sensing bandwidth and the analysis of electric field noise. Furthermore, it is worth noting that our protocol can be extended to any other spin-based 3LS with similar energy configuration, thereby expanding its potential applications in various quantum technologies. § ACKNOWLEDGEMENTS This work was supported by the National Natural Science Foundation of China (Grant No. T2125011, 81788101), the National Key R&D Program of China (Grant No. 2018YFA0306600), the CAS (Grant No. XDC07000000, GJJSTD20200001, Y201984, YSBR-068), Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302200, 2021ZD0303204), the Anhui Initiative in Quantum Information Technologies (Grant No. AHY050000), Hefei Comprehensive National Science Center, and the Fundamental Research Funds for the Central Universities. This work was partially carried out at the USTC Center for Micro and Nanoscale Research and Fabrication. unsrt
http://arxiv.org/abs/2307.04049v1
20230708212820
Parallel Algorithms Align with Neural Execution
[ "Valerie Engelmayer", "Dobrik Georgiev", "Petar Veličković" ]
cs.LG
[ "cs.LG" ]
[ Parallel Algorithms Align with Neural Execution Valerie Engelmayeraux Dobrik Georgievcam Petar Veličkovićdm auxDepartment of Applied Computer Science, University of Augsburg, Augsburg, Germany camDepartment of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom dmGoogle DeepMind, London, United Kingdom Valerie [email protected] Machine Learning, ICML 0.3in ] Neural algorithmic reasoners are parallel processors. Teaching them sequential algorithms contradicts this nature, rendering a significant share of their computations redundant. Parallel algorithms however may exploit their full computational power, therefore requiring fewer layers to be executed. This drastically reduces training times, as we observe when comparing parallel implementations of searching, sorting and finding strongly connected components to their sequential counterparts on the CLRS framework. Additionally, parallel versions achieve strongly superior predictive performance in most cases. § MOTIVATION In neural algorithmic reasoning, neural networks (NN) act as computational machines. In graph neural networks (GNN), graph nodes take on the role of storage space (interpreting edge labels as nodes adjacent to its endpoints throughout this paper), while edges indicate which ways information may flow. The update function of choice defines the set of constant (neural) time operations. But note how nodes update their features in parallel, each one acting as a processor of its own rather than sheer memory. The parallel nature of neural networks is widely known. Running them in parallel fashion on processing devices like GPUs and TPUs drastically saves computational resources <cit.>. It seems natural that this translation between computational models would also hold the other way around. And indeed, Loukas loukas_what_2020 proves how Neural Networks (NN) are analogous to distributed computational models under certain assumptions. Kaiser & Sutskever kaiser2015neural exploit the advantages of parallel processing in their Neural GPU. Freivalds et al. freivalds_neural_nodate derive their architecture from the parallel computational model of Shuffle-Exchange-Networks. Xu et al. xu_what_2020 observe how their model learns to compute a shortest path starting from both ends in parallel when executing Bellman Ford. Veličković et al. velickovic_clrs_2022 and Veličković et al. velickovic_neural_2020 hint at parallelized computations whenever possible. It is time the parallel processing capabilities of NN are exploited systematically. Theory on parallel computational models and algorithms explicitly designed for them are abundant <cit.>. Their trajectories are shorter and align more closely with neural architectures, as illustrated in figure <ref>. Hinting at these during training teaches NN to execute algorithmic tasks much more efficiently than when providing hints for sequential algorithms, as we demonstrate in section <ref> for the examples of searching, sorting and finding strongly connected components. While it is common practice to modify the neural architecture for better alignment <cit.>, it seems promising to narrow the gap from the other side, by choosing algorithms that naturally align with neural execution. § PARALLEL COMPUTING Fundamentally, the parallel computational models addressed here assume multiple processors collaborating to solve a task. The line between parallel and distributed computing is blurry and depends on how controlled interactions between processors are. 
We assume a fixed and known interconnection graph, uniquely identified processors and a common clock to govern computation. Therefore, we choose to speak of parallel computing. §.§ Parallel Computational Models Processor Arrays. Communication may take place via hard-wired channels between the processors. These induce an interconnection graph that may in principle take any shape. At every time step, each processor executes some computation based on the contents of its local memory and the information received from its neighbours in the previous step, and may in turn send out a tailored message through any of its channels. PRAM Models. Alternatively, communication may be realised by reading from and writing to global memory, giving rise to PRAM (parallel random access machine) models <cit.>. Submodels allowing for concurrent reading and writing by multiple processors are referred to as CRCW PRAM. Different conventions exist on whether attempting to concurrently write different values is permitted, and if so, how to decide who succeeds. In the most powerful model, the priority CRCW PRAM, the value from the processor with the lowest index taking part in the concurrent write will be taken on. §.§ Efficiency Since multiple steps can be carried out at the same time, the required number of operations in a parallel algorithm does not impose a lower bound to its run time as in the sequential case, but the product of time and processor number. Optimal speedup is achieved if the use of n processors speeds up computation by a factor of n. This gives rise to a notion of efficiency frequently used in parallel computing <cit.>. The efficiency of a parallel algorithm solving a task of sequential complexity C on p processors in time t is defined as C/pt. It is not hard to see that optimal speedup entails an efficiency of Ω(1). §.§ Examples of Parallel Algorithms Searching. For a simple parallel search for value x in a descending list of n items, assume a priority CRCW PRAM with n processors. Distribute the first item to processor 1, the second to processor 2 etc., while x is stored in the global memory. If a processor's item is ≥ x, it tries to write its index to a designated location in the global memory. Since the one with the smallest index will succeed, the location now contains the desired position of x. The run time is independent of the input size[Distributing values to processors can be done in constant time by routing over the shared memory. We neglect distributing/returning in-/outputs from/to a host computer in the following as it is omitted in neural execution.], so the time-processor-product is Θ(n), missing optimal speed-up as searching can be done in O(log n). Sorting. Habermann habermann_parallel_1972 proposes a simple parallel sorting algorithm for a linear array of processors called Odd Even Transposition Sort (OETS). Each processor holds one item. In an odd (even) round, all neighbouring pairs starting at an odd (even) index swap their items if they are out of order. The two types of rounds take turns for at most n rounds total when n items are to be sorted, yielding O(n^2) operations when accounting for the n processors. Again, this is not optimal for comparison-based sorting, which may be done in O(n log n). Strongly Connected Components. Fleischer et al. rolim_identifying_2000 propose a Divide-and-Conquer algorithm for computing strongly connected components (SCC) of a digraph, which they call DCSC. First, find all descendants and predecessors of an arbitrary node, e.g. 
by carrying out breadth-first search (BFS) in the graph and its reversed version. The intersection of both sets constitutes a SCC. Observe how each further SCC has to be completely contained in either the descendants, the predecessors or the undiscovered nodes, such that the described routine may be called recursively for start nodes in each subset independently, until each vertex is assigned to a SCC. They prove an expected serial time complexity of O(n log n) for graphs on n nodes whose degrees are bounded by a constant. This is not optimal, but parallelization of the two searches per vertex, as well as the recursive calls may significantly speed up execution. §.§ Analogy to Neural Networks Loukas loukas_what_2020 formally establishes an analogy between models like processor arrays and GNN by identifying processors with graph nodes and communication channels with edges. Therefore, the width of a GNN corresponds to p, and its depth to t. Loukas coins the term capacity for the product of width and depth of a GNN, reflecting the time-processor product of parallel algorithms. The shared memory of a PRAM finds its neural analog in graph-level features. Since the computation of a graph feature may take into account positional encodings of the nodes, we may assume a priority CRCW PRAM, encompassing all other PRAM models. § EFFICIENCY OF EXECUTING ALGORITHMS NEURALLY Inspired by the definition of efficiency in parallel computing, we define the efficiency of a neural executioner as follows. Let 𝒩 be a GNN with capacity c(n) executing an algorithm 𝒜 of sequential complexity C(n). Define its node efficiency as η(𝒩, 𝒜) := C(n)/c(n). This definition implies an important assumption we make throughout this paper. When executing an algorithm on a GNN, one constant-time operation is to be executed per node per layer. This is not entirely unproblematic as discussed in section <ref>, but often expected when providing hints and helps to identify theoretical properties. Under this assumption, node efficiency denotes the share of nodes doing useful computations throughout the layers. Since the computational cost of a GNN also scales with the number of messages that are being sent, it is insightful to study the share of edges that transport relevant information as well. Let 𝒩 be a GNN operating over a graph G=(V,E), m := | E |, to execute an algorithm 𝒜. Then we call an edge (i,j) ∈ E active at layer t for a certain input x, if the operation to be executed by node j at time t involves information stored at node i at time t-1. Let a(t) be the number of active edges at time t, and T the total number of time-steps. Then define edge efficiency as the worst case share of active edges when processing inputs x_n of size n, ϵ(𝒩, 𝒜) := min_x_n 1/T∑_t=1^T a(t)/m. Note how neural efficiencies are defined relative to the algorithm they are executing as opposed to the task they solve. This allows for a neural executioner to be efficient in executing an algorithm that is itself not efficient in solving a task. §.§ Parallel Algorithms Entail Higher Efficiency Contradicting a GNN's parallel nature by teaching it to execute sequential algorithms artificially impedes the task. Training to solve tasks in parallel instead is more efficient, which may also simplify the function to learn. Shorter Trajectories. As observed by Loukas loukas_what_2020, the complexity of an algorithm lower bounds the capacity of a GNN executing it.
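To make these definitions concrete before turning to the general statement, here is a small illustrative calculation (a sketch under simplifying assumptions, not a measurement from this work: constant factors are dropped, the list length n equals the number of nodes, and the parallel search is idealised to a single layer) of node efficiencies for the searching task:

    import math

    def node_efficiency(num_ops, width, depth):
        # eta = C(n) / c(n) with capacity c(n) = width * depth
        return num_ops / (width * depth)

    for n in [16, 256, 4096]:
        eta_seq = node_efficiency(math.log2(n), width=n, depth=math.log2(n))  # binary search: C ~ log n, one node per item
        eta_par = node_efficiency(n, width=n, depth=1)                        # parallel CRCW search: C ~ n, constant depth
        print(f"n = {n}: eta_sequential = {eta_seq:.4f}, eta_parallel = {eta_par:.1f}")

The sequential variant wastes a factor of n of the capacity, in line with the O(1/n) and O(1) bounds derived next.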
If the number of processors is one, the depth alone needs to match the complexity, while the width might theoretically be set to one. But in practice, the width has to scale with the input size n to ensure applicability to different n. Therefore, training sequential algorithms forces overspending on capacity by a factor of n. Setting the width to n, as is often done to distribute one unit of information over each node, entails n available processors. Making use of them may shorten the trajectory of an algorithm by a factor of up to n in the case of optimal speedup, which allows the capacity to take on its lower bound. The capacity of a GNN directly translates to the time needed to train and execute it. Additionally, long roll-outs give rise to an issue Bansal et al. bansal_end–end_2022 refer to as overthinking, where many iterations degenerate the behaviour of a recurrent processor. Less Redundancy. Neural efficiencies denote the share of nodes and edges involved in useful computations. Redundant computations not only harm run times, but may also interfere with the algorithmic trajectory. Parameterising them correctly to prevent this can complicate the function to learn. Assuming the redundant nodes (grey in figure <ref>) need to preserve their information to be processed or put out later, their self-edges should execute an identity, while the additional incoming messages need to be ignored, i.e. mapped to a constant. In practice, this will be hard to do, which could entail a temporal variant of oversmoothing, where relevant information gets lost throughout the layers <cit.>. Oyedotun et al. skipconnections highlight how skip connections help to avoid the issue, Ibarz et al. ibarz_generalist_2022 introduce a gating mechanism to leave information unchanged, Bansal et al. bansal_end–end_2022 let their architecture recall the original input. So let's explore the efficiency of executing sequential and parallel algorithms. Let 𝒩 be a scalable GNN operating over a graph with n nodes and m edges. Further let 𝒜_s be a sequential, and 𝒜_p an efficient parallel algorithm on n processors, both of complexity C. Then executing 𝒜_s and 𝒜_p on 𝒩, respectively, entails efficiencies η(𝒩, 𝒜_s) = O(1/n), ϵ(𝒩, 𝒜_s) = O(1/m), η(𝒩, 𝒜_p) = O(1), ϵ(𝒩, 𝒜_p) = O(n/m). As observed above, the capacity c of a GNN executing a sequential algorithm of complexity C has to be c ≥ nC, while it may be c=C in the case of optimal speedup. Node efficiencies follow immediately. Since one processor can read only so much information, only a constant number of edges can be active at each layer during sequential processing, while up to a multiple of n edges can be active during parallel algorithms. This yields the stated edge efficiencies. Therefore, the share of nodes avoiding redundant computation cannot exceed 1/n when executing sequential algorithms, whereas it may reach up to 1 for efficient parallel algorithms. At the same time, the number of redundant messages is reduced by a factor of n. Removing the artificial bottleneck of a single processor prevents data from having to be stored until the processor gets to it. Allowing nodes to carry out meaningful computation frees them of the dead weight of acting as memory. Local Exchange of Information. In neural networks, information exchange is inherently local. The feature h_i^t of node i at time t may only depend on itself and its neighbours 𝒩_i. E.g.
for permutation invariant MPNN <cit.>, h_i^t = f (h_i^t-1, j ∈_i⊕ g(h_i^t-1, h_j^t-1)) This paradigm is often not respected by classical algorithms, as depicted in figure <ref>. In the RAM model, the state h_i_t^t of register i_t updated at time t may depend on any two registers j_t and k_t: h_i_t^t = f^t_i (h_k_t^t-1, h_j_t^t-1), j_t, k_t arbitrary. Not being able to restrict which nodes have to communicate may render it advisable for a GNN to operate over a complete graph to make sure all necessary information is available at all times (see e.g. <cit.>). The situation is different in the setting of interconnected processing arrays, see figure <ref>. For example OETS only ever requires neighbouring processors to compare their items. In general, at time t, the memory state h_i^t of processor i is computed by h_i^t = f^t_i (h_i^t-1, j ∈ J_i^t|| h_j^t-1), J_i^t ⊆_i, where concatenation indicates how i may tell apart its neighbours. Therefore it suffices for the GNN to only rely on edges present in the interconnection graph. To emulate a PRAM algorithm, an empty graph would in principle be enough, though it might not deem advantageous to route all communication over the graph feature in practice. Restricting the number of edges further reduces the use of resources and may help performance, since fewer unnecessary messages are being passed. Interconnection graphs are mostly chosen to be sparse, enabling maximum edge efficiency. § METHODOLOGY To test the hypothesis, we consider the two elementary tasks of searching and sorting, as well as computing SCC as an example of a graph algorithm. The parallel algorithms are chosen from section <ref>; as sequential counterparts we use binary search, bubble sort and Kosaraju's SSC algorithm from the CLRS-30 benchmark <cit.>. Key data of the GNN we use are listed in table <ref>. We compare performances across various processor networks, namely the wide-spread architectures of DeepSets <cit.>, GAT <cit.>, MPNN <cit.>, and PGN <cit.>. The trajectories of the new algorithms are encoded for the CLRS framework as follows below. Note that in every case, randomized positional information, as proposed by Mahdavi et al. mahdavi_towards_2023 and standard on CLRS, is provided as part of the input, to emulate the situation of uniquely identified processors. §.§ Searching Parallel Search. The hints for parallel search of x in A closely resemble its template. As to be seen in figure <ref>, each item A_i of A is represented by one node of an empty graph. A node indicates whether A_i ≤ x. The position rank_A (x) of x in A is predicted by the graph feature as categorical variable over the nodes ( in <cit.>). Therefore we introduce an extra node carrying x as a placeholder to allow for as many categories as possible positions of x. To perfectly predict the outcome in this setting, the graph nodes may be updated by h_i = ReLU (A_i -x), yielding h_i = 0 if and only if A_i ≤ x. So the graph feature may be computed by rank_A (x) = min{i=1,…,n : h_i = 0 }. These steps closely align with the considered neural update functions, especially since the function updating the graph level possesses its own set of parameters. Additionally, the roll-out has constant length, leaving room for only a constant number of redundant edges, see figure <ref> and table <ref>. Altogether, we expect high performance on parallel search. Binary Search. Opposed to parallel search, binary search has an optimal complexity of O(log n). 
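Before examining binary search further, here is a minimal numpy sketch of the parallel search readout just described (not the CLRS-30 implementation; indices are 0-based and the list is assumed to be sorted in descending order):

    import numpy as np

    def parallel_search_readout(A, x):
        A = np.asarray(A, dtype=float)                  # one node per item of the descending list A
        h = np.maximum(A - x, 0.0)                      # node update h_i = ReLU(A_i - x), all nodes in one layer
        hits = np.flatnonzero(h == 0.0)                 # nodes with A_i <= x
        return int(hits[0]) if hits.size else len(A)    # rank of x; it goes to the end if no A_i <= x

    print(parallel_search_readout([9, 7, 5, 3, 1], 6))  # -> 2, i.e. x is inserted between 7 and 5

The single vectorised update mirrors the constant-length roll-out: every node evaluates ReLU(A_i - x) in the same layer, and the graph-level readout only has to locate the first node whose feature is zero.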
But given the need for n nodes, it still requires an enhanced capacity of O(n log n), yielding low node efficiency. In CLRS-30, binary search is executed on a complete graph (whose edges are omitted in figure <ref> to avoid clutter), impairing edge efficiency, see table <ref>. Low efficiency is visible in figure <ref> by the amount of grey components. §.§ Sorting OETS. Actually swapping the items would require making numerical predictions. Instead, we predict changing predecessors as , following preimplemented examples. To still provide edges between nodes holding items to compare, we have to operate on a complete graph, sacrificing edge efficiency (see table <ref>), since only Θ(n) edges are active in each round, so ϵ = n/n^2. As hints, we feed for each round the current predecessors along with an edge indicating whether two nodes have to switch their role, and a graph-level with the parity of the round, serving as rudimentary clock. Bubble Sort. Though Bubble Sort induces the same amount of operations O(n^2) as OETS, it requires a larger network to be executed on (table <ref>). Again, along with operating over a complete graph, this entails low efficiencies. §.§ Strongly Connected Components DCSC. We input the undirected adjacency matrix as edge , along with the directed one as . Parallelizing the recursive calls of DCSC on multiple disjoint sets would require an extra feature dimension for every search that is going on. Therefore we only let the two BFS starting from the same source node be executed in parallel, which we each encode as is standard in CLRS-30. Additionally, a binary on each node is flipped to 1 as soon as it is discovered from both directions, indicating it belongs the currently constructed SCC (this is reset at the start of every new search). At the same time, it receives a to the source, which in the end constitutes the output. Throughout, we keep track of undiscovered nodes in another node . We choose the node with the smallest index from this set as next source. DCSC spends most of its time on the repeated BFS, a subroutine known to be learned well even on relatively simple architectures <cit.>, as it aligns well with neural execution <cit.>. Note how they let each node consider all its incoming edges in parallel, as is done on CLRS-30. This not only allows the trajectory to be shortened from O(n+m) to O(n), but also prevents redundant computations from having to be handled explicitly. Except for the source, each node can carry out the same computation at each step (see <cit.> for details) – just that this will only change its state whenever information flowing from the start node reaches it. DCSC only has to pass the index s of the source node instead of computing predecessor pointers, so computation looks like depicted in figure <ref>, closely resembling the situation in figure <ref>. Therefore, efficiency is expected to be less important for predictive performance in this special case. An obvious upper bound to DCSC's run time is O(n^2), accounting for one (two-sided) BFS per node, resulting in the big capacity reported in table <ref>. There is also no guarantee for more than one node and edge being active per step per BFS, resulting in low efficiencies. But this represents edge cases at best, such that the average trajectories will be much shorter and more efficient, as experiments will show. The core of DCSC aligning so well with neural execution promises good results. Kosaraju. 
The skeleton of Kosaraju's algorithm as implemented in CLRS-30 on the other hand is formed by a depth first search (DFS), which is more challenging for neural executioners <cit.>. As opposed to the closely related BFS, it is hard to parallelize. In fact, when relying on lexicographic ordering for tie-braking, it is considered an inherently sequential algorithm <cit.>. Since nodes have to wait for the search to retract from its siblings, computation cannot be carried out as in figure <ref>, so processing needs be timed correctly. The total run time is O(n+m), entailing the capacity and efficiencies reported in table <ref>. § RESULTS Predictive performance is reported in table <ref>. As expected, parallel search achieves almost perfect results. Meanwhile, training time is reduced by a factor of almost 3 as compared to binary search (see figure <ref>). Despite DCSC's only partial parallelization and the asymptotically optimal linear run time of its sequential opponent, training time is more than halved for the SCC task. At the same time, predictions become up to more than twice as accurate. On the sorting task, the sequential algorithm entails better accuracy, with the parallel one mostly falling within one standard deviation. Though both algorithms require the same asymptotic number of operations, training OETS takes a fraction of the time needed for bubble sort (figure <ref>). § DISCUSSION Neural efficiency only loosely correlates with predictive performance when comparing tables <ref> and <ref>. This is not too surprising, since correctly parameterising redundant computations is only one of many aspects that make a function hard to learn. We propose a rather one-sided relationship, where low efficiencies can harm accuracy (if not circumvented as in BFS, see section <ref>), but high efficiencies do not necessarily enhance learning success. We would like to highlight the importance of taking the perspective on neural networks as computational models when executing algorithms, as it opens access to the rich theory of computational complexity. E.g. the classes of NC (efficiently parallelizable) and P-complete problems (mostly thought of as inherently sequential) <cit.> inform us on which tasks may be hard to execute neurally, to tackle them more effectively. However in doing so, it is important to keep in mind the gap between the respective sets of constant time operations, with none being strictly more powerful than the other. On the one hand, a single RAM instruction may need to be approximated by entire subnetworks. On the other hand, one neural step suffices to process all incoming edges of a node during execution of BFS <cit.>. This breaks up the strict correspondence between time-processor product and capacity. § CONCLUSION As suggested in section <ref>, parallel algorithms prove to be a lot more efficient to learn and execute on neural architectures than sequential ones. Often, OOD predictions on algorithmic tasks are significantly improved as well, suggesting that higher node and edge efficiency can help learning. Future work has to show how performance is impacted for other tasks, on more elaborate architectures like in <cit.>, and in generalist settings. § ACKNOWLEDGEMENTS We would like to thank Razvan Pascanu and Karl Tuyls for their valuable comments, as well as Pietro Liò for insightful discussions and Torben Hagerup for the support he provided. icml2023