Columns: id, title, abstract, authors, published_date, link, markdown
2301.01773
Large eddy simulations of turbulent compressible supersonic jet flows using discontinuous Galerkin methods
In this work, a discontinuous Galerkin scheme is employed to perform LES of supersonic jet flows. A total of four simulations are performed with different meshes and orders of accuracy. The number of degrees of freedom in the simulations varies from 50 × 10^6 to 400 × 10^6. The results indicate that, in general, increasing the resolution of the simulation brings the results closer to experimental data; the jet lipline is the only region in which this behavior is not observed. A procedure of using lower-order simulations to initialize high-order simulations, and thereby reduce the total time required by high-order schemes, is also investigated. This strategy is successful and allows high-order simulations to be performed with only 5% more computational effort than a complete second-order simulation.
D. F. Abreu, C. Junqueira-Junior, E. T. V. Dauricio, J. L. F. Azevedo
2023-01-04T15:49:35Z
http://arxiv.org/abs/2301.01773v1
# Large-Eddy Simulations of Turbulent Compressible Supersonic Jet Flows Using Discontinuous Galerkin Methods ###### Abstract In this work, a discontinuous Galerkin scheme is employed to perform LES of supersonic jet flows. A total of four simulations are performed with different meshes and orders of accuracy. The number of degrees of freedom in the simulations varies from \(50\times 10^{6}\) to \(400\times 10^{6}\). The results indicate that, in general, increasing the resolution of the simulation brings the results closer to experimental data; the jet lipline is the only region in which this behavior is not observed. A procedure of using lower-order simulations to initialize high-order simulations, and thereby reduce the total time required by high-order schemes, is also investigated. This strategy is successful and allows high-order simulations to be performed with only \(5\%\) more computational effort than a complete second-order simulation. **Keywords:** Numerical Methods, Large-Eddy Simulation, Turbulent Flow, Jet Flow, Discontinuous Galerkin Methods ## 1 Introduction The Reynolds-Averaged Navier-Stokes (RANS) formulation has difficulty representing some types of fluid motions predominantly governed by free shear flows or wall-bounded flows with separated boundary layers. This difficulty is related to the underlying assumptions of the formulation, in which all turbulent quantities are modeled. The recent progress in computational power is enabling the employment of large-eddy simulations (LES) for problems in which the RANS formulation fails to model important aspects of the flow. Recent studies show the capability of LES for reproducing free shear layers [4, 21] and detached flows [13, 22]. Another advantage of using LES is its capability to produce high-frequency unsteady information, which is necessary for aerodynamics, acoustics, loads, and heat transfer analyses. The authors are interested in the simulation of supersonic jet flows to perform aerodynamic analyses of the shear layer, regarding velocity and pressure fluctuations, in order to improve the design of nozzles and adjacent structures. Different numerical options are employed to obtain the solution of the LES formulation for jet flows. For example, low-order [18] and high-order [3, 9] finite difference schemes on structured meshes were employed to perform LES of subsonic and supersonic jet flows. A low-order finite volume approach on unstructured meshes [23, 6] is another option employed. Due to the employment of structured meshes, the finite difference schemes have difficulty handling complex geometries. Finite volume schemes are applied to unstructured meshes, which makes it easier to represent complex geometries; however, it is difficult to implement high-order discretizations with these schemes [15]. In this context, discontinuous Galerkin schemes are gaining relevance because they are easily implemented with high-order accurate discretizations and can be employed with unstructured meshes. Some works already simulate jet flows with discontinuous Galerkin schemes [1, 8] or with similar strategies, for example, flux reconstruction schemes [25]. The discontinuous Galerkin schemes have multiple options for implementation. For example, one may choose to represent the solution by nodal or modal polynomials. 
It is possible to choose between different options of test functions that could better suits different types of elements, which are utilized to discretize the computational domain. One set of choices for the discontinuous Galerkin formulation is named discontinuous Galerkin spectral element method (DGSEM) [19, 16]. The DGSEM, implemented in a numerical framework called FLEXI [20], was investigated for performing LES of a supersonic round jet flows with Mach number equal to \(1.4\) and Reynolds number based on jet inlet diameter of \(1.58\times 10^{6}\)[1]. The simulations were performed with two numerical meshes with \(6.2\times 10^{6}\) and \(1.8\times 10^{6}\) elements with second-order and third-order accurate discretizations, respectively. The two simulations were performed with nearly \(50\times 10^{6}\) degrees of freedom (DOF). They presented similar results, with the simulation performed with third-order accuracy requiring twice the time to perform the same simulation time as the second-order accurate simulation. When comparing the results to experimental data, excessive dissipation is observed, which led to shorter potential cores. The potential core of the jet is the length in the centerline of the jet where the velocity reaches \(0.95\) of jet velocity. Other aspects of the flow, for example, the root mean square (RMS) of velocity fluctuations in the centerline and lipline of the jet, also presented some differences with experimental data. In this work, the results obtained using a new mesh are presented. The new mesh has a larger refinement and improved topology than the meshes utilized in previous work. The new mesh is simulated with second-order and third-order accurate discretizations. Discussions regarding the quality and improvement of the simulations are presented. A discussion of computational efficiency utilizing discontinuous Galerkin methods is also performed to develop guidelines for future works. ## 2 Numerical Formulation ### Governing Equations The work has an interest in the solution of the filtered Navier-Stokes equations. The filtering strategy is based on a spatial filtering process that separates the flow into a resolved part (\(\cdot\)) and a non-resolved part (\(\cdot\))\({}^{\prime}\). Implicit filter size is obtained from the mesh size. The filtered Navier-Stokes equations in conservative form can be written by \[\frac{\partial\bar{\mathbf{Q}}}{\partial t}+\nabla\cdot\mathbf{F}(\bar{\mathbf{Q}},\nabla\bar{\mathbf{Q}})=0, \tag{1}\] where \(\bar{\mathbf{Q}}=[\bar{\rho},\bar{\rho}\tilde{u},\bar{\rho}\tilde{v},\bar{\rho} \tilde{w},\bar{\rho}\tilde{E}]^{T}\) is the vector of filtered conserved variables and \(\mathbf{F}\) is the flux vector. The flux vector can be divided into the Euler fluxes and the viscous flux, \(\mathbf{F}=\mathbf{F}^{e}-\mathbf{F}^{v}\). 
The fluxes with the filtered variables may be written as \[\mathbf{F}^{e}_{i}=\left[\begin{array}{c}\tilde{\rho}\tilde{u}_{i}\\ \tilde{\rho}\tilde{u}\tilde{u}_{i}+\delta_{1i}\bar{p}\\ \bar{\rho}\tilde{v}\tilde{u}_{i}+\delta_{2i}\bar{p}\\ \bar{\rho}\tilde{w}\tilde{u}_{i}+\delta_{3i}\bar{p}\\ (\bar{\rho}\tilde{E}+\bar{p})\tilde{u}_{i}\end{array}\right]\qquad\qquad \mathbf{F}^{v}_{i}=\left[\begin{array}{c}0\\ \tau_{1i}^{mod}\\ \tau_{2i}^{mod}\\ \tau_{3i}^{mod}\\ \tilde{u}_{j}\tau_{ij}^{mod}-q_{i}^{mod}\end{array}\right]\qquad\qquad\text{, for }i=1,2,3, \tag{2}\] where \(\tilde{u}_{i}\) or \((\tilde{u},\tilde{v},\tilde{w})\) are the Favre averaged velocity components, \(\bar{\rho}\) is the filtered density, \(\bar{p}\) is the filtered pressure and \(\bar{\rho}\tilde{E}\) is the filtered total energy per unit volume. The terms \(\tau_{ij}^{mod}\) and \(q_{i}^{mod}\) are the modified viscous stress tensor and heat flux vector, respectively, and \(\delta_{ij}\) is the Kronecker delta. The filtered total energy per unit volume, according to the definition proposed by Vreman [28] in its "system I" approach, is given by \[\bar{\rho}\tilde{E}=\frac{\bar{p}}{\gamma-1}+\frac{1}{2}\bar{\rho}\tilde{u}_{ i}\tilde{u}_{i}. \tag{3}\] The filtered pressure, Favre averaged temperature and filtered density are correlated using the ideal gas equation of state \(\bar{p}=\bar{\rho}R\tilde{T}\), and \(R\) is the gas constant, written as \(R=c_{p}-c_{v}\). The properties \(c_{p}\) and \(c_{v}\) are the specific heat at constant pressure and volume, respectively. The modified viscous stress tensor may be written as \[\tau_{ij}^{mod}=(\mu+\mu_{SGS})\left(\frac{\partial\tilde{u}_{i}}{\partial x _{j}}+\frac{\partial\tilde{u}_{j}}{\partial x_{i}}\right)-\frac{2}{3}(\mu+\mu _{SGS})\left(\frac{\partial\tilde{u}_{k}}{\partial x_{k}}\right)\delta_{ij} \tag{4}\] where \(\mu\) is the dynamic viscosity coefficient, calculated by Sutherland's Law, and \(\mu_{SGS}\) is the SGS dynamic viscosity coefficient, which is provided by the subgrid-scale model. The strategy of modeling the subgrid-scale contribution as an additional dynamic viscosity coefficient is based on the Boussinesq hyphotesis. The modified heat flux vector, using the same modeling strategy, is given by \[q_{i}^{mod}=-(k+k_{SGS})\frac{\partial\tilde{T}}{\partial x_{i}} \tag{5}\] where \(k\) is the thermal conductivity coefficient of the fluid and \(k_{SGS}\) is the SGS thermal conductivity coefficient given by \[k_{SGS}=\frac{\mu_{SGS}c_{p}}{Pr_{SGS}} \tag{6}\] and \(Pr_{SGS}\) is the SGS Prandtl number. The present work employs the static Smagorinsky model [26] in order to calculate the subgrid-scale contribution. ### Nodal Discontinuous Galerkin Method The nodal discontinuous Galerkin method used in this work is based on the modeling called discontinuous Galerkin spectral element method [19, 16]. In this modeling strategy, the domain is divided into multiple hexahedral elements. This choice of elements permits the interpolating polynomial to be defined as a tensor product basis with degree \(N\) in each space direction. This set of options leads to an algorithm that can be easily implemented and also produce a high level of computational efficiency due to reduced calculations. In this method, the elements from the physical domain are mapped onto a reference unit cube elements \(E=[-1,1]^{3}\). 
The equations, presented in (1) need also to be mapped to this new reference domain, leading to \[J\frac{\partial\bar{\mathbf{Q}}}{\partial t}+\nabla_{\xi}\cdot\bar{\mathcal{ F}}=0, \tag{7}\] where \(\nabla_{\xi}\) is the divergence operator with respect to the reference element coordinates, \(\xi=(\xi^{1},\xi^{2},\xi^{3})^{T}\), \(J=|\partial\mathbf{x}/\partial\xi|\) is the Jacobian of the coordinate transformation and \(\bar{\mathcal{F}}\) is the contravariant flux vector. The discontinuous Galerkin formulation is obtained multiplying (7) by the test function \(\psi=\psi(\xi)\) and integrating over the reference element \(E\) \[\int_{E}J\frac{\partial\bar{\mathbf{Q}}}{\partial t}\psi d\xi+\int_{E}\nabla_ {\xi}\cdot\bar{\mathcal{F}}\psi d\xi=0. \tag{8}\] It is possible to obtain the weak form of the scheme by integrating by parts the second term in (8) \[\frac{\partial}{\partial t}\int_{E}J\bar{\mathbf{Q}}\psi d\xi+\int_{\partial E }(\bar{\mathcal{F}}\cdot\vec{N})^{*}\psi dS-\int_{E}\bar{\mathcal{F}}\cdot( \nabla_{\xi}\psi)d\xi=0, \tag{9}\] where \(\vec{N}\) is the unit normal vector of the reference element faces. Because the discontinuous Galerkin scheme allows discontinuities in the interfaces, the surface integral above is ill-defined. In this case, a numerical flux, \(\bar{\mathcal{F}}^{*}\), is defined, and a Riemann solver is used to compute the value of this flux based on the discontinuous solutions given by the elements sharing the interface. For the nodal form of the discontinuous Galerkin formulation, the solution in each element is approximated by a polynomial interpolation of the form \[\bar{\mathbf{Q}}(\xi)\approx\sum_{p,q,r=0}^{N}\bar{\mathbf{Q}}_{h}(\xi_{p}^{1},\xi_{q}^{2},\xi_{r}^{3},t)\phi_{pqr}(\xi), \tag{10}\] where \(\bar{\mathbf{Q}}_{h}(\xi_{p}^{1},\xi_{q}^{2},\xi_{r}^{3},t)\) is the value of the vector of conserved variables at each interpolation node in the reference element and \(\phi_{pqr}(\xi)\) is the interpolating polynomial. For hexahedral elements, the interpolating polynomial is a tensor product basis with degree N in each space direction \[\phi_{pqr}(\xi)=l_{p}(\xi^{1})l_{q}(\xi^{2})l_{r}(\xi^{3}),\ \ \ l_{p}(\xi^{1})=\prod_{ \begin{subarray}{c}i=0\\ i\neq p\end{subarray}}^{N_{p}}\frac{\xi^{1}-\xi_{i}^{1}}{\xi_{p}^{1}-\xi_{i} ^{1}}. \tag{11}\] The definitions presented are applicable to other two directions. The numerical scheme used in the simulation additionally presents the split formulation [24], with the discrete form [11], to enhance the stability of the simulation. The split formulation is employed for Euler fluxes only. The solution and the fluxes are interpolated and integrated at the nodes of a Gauss-Lobatto Legendre quadrature, which presents the summation-by-parts property, that is necessary to employ the split formulation. The Riemann solver used in the simulations is a Roe scheme with entropy fix [14] to ensure that the second law of thermodynamics is respected, even with the split formulation. For the viscous flux, since the discontinuous Galerkin scheme is not suitable for discretizing the high order derivative operator, the lifting scheme of Bassi and Rebay [2] is used, which is also known for BR2. The time marching method chosen is a five-stage, fourth-order explicit Runge-Kutta scheme [7]. The shock waves that appear in the simulation are stabilized using the finite-volume sub-cell shock-capturing method [27]. The shock indicator of Jameson, Schmidt, and Turkel [17] is utilized. 
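To make the nodal representation in Eqs. (10) and (11) concrete, the following minimal Python/NumPy sketch builds the Legendre-Gauss-Lobatto nodes and the tensor-product Lagrange basis for a degree \(N=2\) (third-order) hexahedral element and evaluates the resulting interpolant. It is an illustration written for this text, not code from the FLEXI framework, and all function and variable names are ours.

```python
import numpy as np

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto nodes on [-1, 1] for a degree-N basis (N + 1 nodes)."""
    if N == 1:
        return np.array([-1.0, 1.0])
    interior = np.polynomial.legendre.Legendre.basis(N).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

def lagrange(nodes, p, x):
    """p-th one-dimensional Lagrange polynomial l_p(x) built on `nodes`, as in Eq. (11)."""
    val = 1.0
    for i, xi in enumerate(nodes):
        if i != p:
            val *= (x - xi) / (nodes[p] - xi)
    return val

N = 2                                        # polynomial degree -> third-order accuracy
xi = lgl_nodes(N)                            # interpolation nodes per direction
field = lambda x1, x2, x3: x1**2 * x2 + x3   # smooth test field standing in for Q

# Nodal values Q_h(xi_p, xi_q, xi_r), as in Eq. (10)
nodal = np.array([[[field(a, b, c) for c in xi] for b in xi] for a in xi])

def interpolate(x1, x2, x3):
    """Evaluate the tensor-product nodal polynomial at a point of the reference cube."""
    total = 0.0
    for p in range(N + 1):
        for q in range(N + 1):
            for r in range(N + 1):
                total += (nodal[p, q, r] * lagrange(xi, p, x1)
                          * lagrange(xi, q, x2) * lagrange(xi, r, x3))
    return total

# The test field is a polynomial of degree <= N in each direction, so it is reproduced exactly.
print(interpolate(0.3, -0.2, 0.7), field(0.3, -0.2, 0.7))
```

In the DGSEM setting described above, these same Gauss-Lobatto nodes also serve as the quadrature points, which is what provides the summation-by-parts property required by the split formulation.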
## 3 Experimental Configuration The experimental work [5] provides a good characterization of the flow properties of jet flows. Many configurations were analyzed. In this work, the interest is in simulating the fully expanded free jet flow configuration with a Mach number of 1.4. In this configuration, the static pressure at the nozzle exit section equals the ambient static pressure while the velocity is supersonic; for this reason, it is possible to avoid the use of nozzle wall geometries, and the shock waves are also weaker when compared to other operating conditions. The experimental apparatus for the analyzed configuration is composed of a convergent-divergent nozzle designed with the method of characteristics [5]. The nozzle exit diameter is 50.8 mm. The Reynolds number based on the nozzle exit diameter is approximately \(1.58\times 10^{6}\), which is large when compared to other jet experiments available in the literature. The data acquisition in the tests applies Time-Resolved Particle Image Velocimetry (TRPIV) operated primarily with a 10 kHz sample rate. The experiment uses two sets of cameras, one positioned to capture the flow along the nozzle centerline and the other positioned to capture the flow of the mixing layer along the nozzle lipline. ## 4 Numerical Setup ### Geometry and Mesh Configuration The geometry used for the calculations in this work presents a divergent shape, with an axis length of \(40D\), where \(D\) is the jet inlet diameter, and external diameters of \(16D\) and \(25D\). Figure 1 illustrates a 2-D representation of the computational domain, indicating the inlet surface in red, the far-field region in blue, the lipline in gray, and the centerline in black. The computational grids used in the work are named M-1, M-2, and M-3. The M-1 and M-2 meshes are adaptations of the mesh utilized in previous work [18], owing to the different restrictions of each computational code. The M-3 mesh is generated with topological differences from the M-1 and M-2 meshes. The M-3 mesh topology presents a high refinement level around the jet inlet boundary external diameter that transitions to a uniform distribution when moving forward in the longitudinal direction. In addition to the new topology, the M-3 mesh also presents a larger number of elements. The mesh generation uses a multiblock strategy that only handles hexahedral elements. Fig. 2 exhibits a cut plane of the M-2 and M-3 meshes. The M-2 mesh is presented to illustrate the topological differences between the two strategies. The M-1 mesh is not presented because it differs from the M-2 mesh only in the number of elements. The M-1 and M-2 meshes have a total of \(6.2\times 10^{6}\) and \(1.8\times 10^{6}\) elements and are simulated with second- and third-order accuracy, respectively, resulting in simulations with approximately \(50\times 10^{6}\) DOF. The M-3 mesh has \(15.4\times 10^{6}\) elements and is simulated with second- and third-order accuracy, resulting in approximately \(120\times 10^{6}\) and \(410\times 10^{6}\) DOF, respectively. All the meshes utilized in the work are generated with the GMSH [12] generator. ### Boundary Conditions The simulations use different reference states to characterize the jet inflow, \((\cdot)_{jet}\), and the far-field, \((\cdot)_{ff}\). The inflow and the far-field surfaces are indicated in Fig. 1 in red and blue, respectively. A weakly enforced solution of a Riemann problem with a Dirichlet condition is imposed at the boundaries. 
The flow is characterized as perfectly expanded and unheated, _i.e._, \(p_{jet}/p_{ff}=T_{jet}/T_{ff}=1\), where \(p\) stands for pressure and \(T\) for temperature. The Mach number of the jet at the inlet is \(M_{jet}=1.4\) and the Reynolds number based on the diameter of the nozzle is \(Re_{jet}=1.58\times 10^{6}\). A small velocity component with \(M_{ff}=0.01\) in the streamwise direction is imposed at the far-field to avoid numerical issues. A sponge zone [10] is employed close to all far-field boundaries to damp any oscillation that could reach the boundaries. ### Simulation Settings A total of 4 simulations are compared in this work. The simulations utilize 3 different meshes with two orders of accuracy, obtained by changing the degree of the polynomial representing the solution. The S-1 simulation utilizes the M-1 mesh with second-order accuracy. The S-2 simulation utilizes the M-2 mesh with third-order accuracy. The S-3 and S-4 simulations utilize the M-3 mesh with second- and third-order accuracy, respectively. Table 1 summarizes the simulations performed and the total number of degrees of freedom in each of them. ### Calculation of Statistical Properties Two different approaches are taken to perform the 4 simulations. In the first approach, utilized for the S-1, S-2, and S-3 simulations, the procedure involves three steps. The first one is to clean off the domain, since the computation starts with a quiescent flow initial condition. The simulations run three flow-through times (FTT) to develop the jet flow. One FTT is the time required for a particle traveling at the jet velocity to cross the computational domain. In the sequence, the simulations run an additional three FTT to produce a statistically steady condition. Then, in the last step, data are collected at a sample rate of approximately 250 kHz for another FTT to obtain the statistical properties of the flow. In the second approach, utilized for the S-4 simulation, the solution obtained from the S-3 simulation is utilized as the initial condition. The simulation is performed for 0.5 FTT to clean the second-order accurate solution and allow it to evolve into a third-order accurate solution. Then 2 additional FTT are simulated to extract data for the analysis. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Simulation & Meshes & Order of & DOF/cell & Cells & Total \# of DOF \\ & & Accuracy & & \((10^{6})\) & \((10^{6})\) \\ \hline S-1 & M-1 & 2nd order & 8 & 6.2 & \(\approx 50\) \\ S-2 & M-2 & 3rd order & 27 & 1.8 & \(\approx 50\) \\ S-3 & M-3 & 2nd order & 8 & 15.4 & \(\approx 120\) \\ S-4 & M-3 & 3rd order & 27 & 15.4 & \(\approx 410\) \\ \hline \end{tabular} \end{table} Table 1: Summary of simulation settings. The cost of the S-4 simulation is higher than that of the other simulations, and the authors had some difficulties stabilizing the simulation, which consumed part of the available computational resources; for this reason, it was not possible to run 3 FTT to obtain the statistics. The mean and the root mean square (RMS) fluctuations of the flow properties are calculated along the centerline, the lipline, and different domain surfaces in the streamwise direction. The centerline is defined as the line in the center of the geometry, \(y/D=0\), whereas the lipline is a surface parallel to the centerline and located at the nozzle lip, \(y/D=0.5\). The results from the lipline are an azimuthal mean over six equally spaced positions. The four surfaces are located at the streamwise positions \(x/D=2.5\), \(x/D=5.0\), \(x/D=10.0\), and \(x/D=15.0\). 
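As an illustration of how these statistics can be assembled from the probe data, the short sketch below computes time means, RMS fluctuations, and the azimuthal average over the six lipline positions. The array shapes, variable names, and synthetic data are assumptions made for the example only.

```python
import numpy as np

# Assumed layout: u_centerline[t, x] holds the sampled streamwise velocity at each
# centerline station; u_lipline[t, k, x] holds samples at the six equally spaced
# azimuthal positions (k = 0..5) on the lipline, y/D = 0.5.
rng = np.random.default_rng(0)
u_centerline = rng.normal(1.0, 0.05, size=(2000, 200))   # synthetic stand-in data
u_lipline = rng.normal(0.6, 0.10, size=(2000, 6, 200))

# Time mean and RMS of the fluctuation u' = u - <u> along the centerline
u_mean_cl = u_centerline.mean(axis=0)
u_rms_cl = u_centerline.std(axis=0)

# Lipline statistics: time statistics per azimuthal line, then the azimuthal mean
u_mean_lip = u_lipline.mean(axis=0).mean(axis=0)
u_rms_lip = u_lipline.std(axis=0).mean(axis=0)
```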
Fig. 3 illustrates a snapshot of the jet flow with the lines and surfaces of data extraction. Mach number contours are presented in the figure. ## 5 Results ### Analysis of Numerical Results The results from S1, S2, S3, and S4 simulations are presented in this section, which is divided into two parts to group different types of comparisons. In the first part, contours of mean longitudinal velocity, RMS longitudinal velocity fluctuation, and mean density are presented for each simulation. In the second part, the distribution of mean longitudinal velocity and RMS of longitudinal velocity fluctuation are presented along the jet centerline and ilpline for the four simulations and compared to experimental data. In the final results, the mean longitudinal velocity, RMS of longitudinal velocity fluctuation, RMS of radial velocity fluctuation, and shear-stress tensor are presented in four spanwise lines for all the simulations and compared to experimental data. In the first part, three main aspects can be analyzed from the different contours investigated and each contour is better to discuss one of the three aspects. The length of the potential core cannot be directly assessed from visual inspection, so the authors prefer to refer to the region of high velocity that can be easily inspected from the results of mean longitudinal velocity. The development of the shear layer can be visualized in all results, however, the one in which its intensity can be better visualized is in the results of RMS of longitudinal velocity fluctuation. The last aspect that can be assessed from the contours is the development of the series of shock and expansion waves in the early stages of the jet. Figure 3: Snapshot of the jet simulation with the two longitudinal lines and three crossflow lines along which data is extracted. Mach number contours are shown. Figure 4 presents the contours for the mean longitudinal velocity for all simulations. In Figs. 4a and 4b the contours of velocity are very similar, with the high velocity region in Fig. 4b being slightly longer. Analyzing the results in Fig. 4c, one can observe that the high-velocity region has increased significantly when compared to previous results. The improvement in the results obtained shows the importance of distributing elements where they are necessary. Finally, in Fig. 4d, the results from S4 simulation are presented. It is possible to observe that the high-velocity region is the longest among all the simulations, which is indicative that it was lacking resolution in previous simulations to adequately capture the development of the jet flow by adding too much dissipation. In Fig. 5 the contours of RMS of longitudinal velocity fluctuation is presented. Once more, the results presented in Figs. 5a and 5b from S1 and S2 simulations are similar, with the shear-layer development starting approximately \(1D\) far from the jet inlet section. Just after the initial of the shear layer development, one can observe that the peak of RMS fluctuation occurs, which can be related to the large difference between the velocities and possibly the transition of the shear layer from laminar to turbulent. The results presented in Fig. 5c from the S3 simulation have significant differences from the other two previously discussed. The development of the shear layer is starting closer to the jet inlet section with smaller peaks of RMS of velocity fluctuation and with a smaller spreading. 
One can visualize that the two mixing layers cross in the center of the jet farther downstream in Fig. 5c than in Figs. 5a and 5b, even though the shear layer develops sooner. Analyzing the results in Fig. 5d, it can be observed that the tendencies from the previous comparison become even more pronounced, which means that the beginning of the shear layer development gets even closer to the jet inlet section and the crossing of the two mixing layers happens farther from the jet inlet section than in the S3 simulation, Fig. 5c. The S4 simulation is the one with the smallest spreading and the earliest development of the shear layer among all the simulations. Figure 4: Contours of mean longitudinal velocity component on a cut plane at \(z/D=0\). Figure 5: Contours of RMS of longitudinal velocity fluctuation on a cut plane at \(z/D=0\). Finally, in Fig. 6, the contours of mean density are presented for all simulations. In these results, it is possible to better visualize the development of the series of shocks and expansion waves. Different from what has been observed in Figs. 4 and 5, where the results from the S1 and S2 simulations are very similar, in Figs. 6a and 6b, which present the mean density results for the S1 and S2 simulations, one can observe clear differences regarding the series of shock and expansion waves. In Fig. 6a only three sets of shocks and expansion waves are clearly visible, while in Fig. 6b it is possible to observe more than six sets. It is also possible to observe that the sets of shocks and expansion waves from the S2 simulation are stronger than those from the S1 simulation. Analyzing Fig. 6c, it is possible to observe that the S3 simulation produces even more sets of shocks and expansion waves than the S2 simulation, Fig. 6b, with larger intensity, which is evaluated by the level of density variation produced by the shocks and expansion waves. Another aspect that can be observed is that the first set of shock and expansion waves in the S3 simulation occurs closer to the jet inlet section than in the S2 and S1 simulations, and there appears to be a relation between the first set of shock and expansion waves and the beginning of the shear layer development. In Fig. 6d, the results from the S4 simulation are presented. One can observe the largest number of sets of shock and expansion waves among all the simulations and also a thinner representation of the shocks and expansion waves, which can be closely related to the increased resolution of the simulation. It is also possible to observe a reduction in the intensity of the sets of shock and expansion waves when compared to the S3 simulation, Fig. 6c. The first part of this section is concluded with the comparison of the contours of mean longitudinal velocity, RMS of longitudinal velocity fluctuation, and mean density among the simulations. In the second part, in Fig. 7, the numerical results of all simulations are compared with the experimental data. In Fig. 7a, the distribution of mean longitudinal velocity \(<U>/U_{j}\) along the jet centerline is presented. One can observe in the figure that the results from the S1 and S2 simulations are almost equal. Results from the S3 simulation present a significant improvement when compared to the previously performed simulations, and the S4 simulation could almost capture the behavior observed in the experiments. In Fig. 7b, the RMS of longitudinal velocity fluctuation \(u_{rms}/U_{j}\) along the jet centerline is presented. 
It is possible to observe once more in these results how strongly the resolution influences the numerical results in terms of proximity to the experimental data. The differences between the S1 and S2 simulations are small, the S3 simulation gets closer to the experimental data, and the S4 simulation once more presents the best match with the experimental data. A double peak can be observed in the results from the S4 simulation that does not appear in any other simulation or in the experimental data. The authors believe that this result is a consequence of the fewer FTTs over which data was gathered in this simulation. If more data could be used, the authors believe that this characteristic would disappear and only one peak would form. While the results for the jet centerline always present improvements in the simulations with increased resolution, along the lipline this behavior is not always observed. In Fig. 7c, where the mean longitudinal velocity \(<U>/U_{j}\) along the jet lipline is presented, one may observe that, far from the jet inlet section, the increased resolution produces a monotonic improvement in the numerical results, whereas close to the jet inlet section the S3 simulation is the one that better captures the experimental data. However, it is not in the mean results that the greatest differences are observed. When analyzing the distribution of the RMS of longitudinal velocity fluctuation along the jet lipline, Fig. 7d, one can observe that the increased resolution monotonically pushes the results away from the experimental data. While in the experimental data it is possible to observe a smooth growth of the RMS of longitudinal velocity fluctuation and almost a plateau from \(x/D=5\) to \(x/D=15\), in all the simulations there is a sudden increase in the RMS of longitudinal velocity fluctuation and, after the peak, only a reduction of the values is observed. The authors believe that the differences observed in these results are related to the choice of the boundary condition imposed at the jet inlet, which represents neither the boundary layer profile from the nozzle nor the turbulence intensity at the nozzle exit section. Figure 6: Contours of mean density on a cut plane at \(z/D=0\). The results in Fig. 8 present different statistical properties of the flow at different longitudinal positions. The first set of results, in Figs. 8a to 8d, concerns the mean longitudinal velocity. The S1 simulation is in agreement with the experimental data at \(x/D=2.5\), Fig. 8a. At the position \(x/D=5\), Fig. 8b, all the simulations produce very similar results. Moving forward and analyzing the results at position \(x/D=10\), Fig. 8c, it is possible to observe a similar behavior between the S1 and S2 simulations, with the S3 simulation presenting improvements over the other two and the S4 simulation presenting the best match with the experimental data. At the last position, \(x/D=15\), Fig. 8d, a monotonic improvement is observed with the increased resolution of the simulations. Figure 7: Results of mean streamwise velocity component distribution (left) and RMS of streamwise velocity fluctuation (right) in the jet centerline \(y/D=0\) (top) and lipline \(y/D=0.5\) (bottom). Figure 8: Profiles of mean streamwise velocity component, RMS of streamwise velocity fluctuation, RMS of radial velocity fluctuation, and mean shear-stress tensor component (from top to bottom) at four streamwise positions \(x/D=2.5\), \(x/D=5\), \(x/D=10\) and \(x/D=15\) (from left to right). 
The profiles of RMS values of longitudinal velocity fluctuation are presented in Figs. 8e to 8h. The simulation results at \(x/D=2.5\), Fig. 8e, present similar profiles, with differences only in the peak of RMS of longitudinal velocity fluctuation, which decreases monotonically, approaching the experimental data, as the resolution increases. Similar behavior is observed at the next position, \(x/D=5\), Fig. 8f. At the position \(x/D=10\), Fig. 8g, the main aspects of the flow are captured, except for the smaller values of RMS of longitudinal velocity fluctuation in the center of the jet, which are only poorly captured by the S4 simulation. The results for all simulations and experimental data are very similar at the last position, \(x/D=15\), Fig. 8h. Profiles of RMS values of the radial velocity component fluctuation are presented in Figs. 8i to 8l. They exhibit behavior similar to the longitudinal velocity fluctuation. One can also observe the positive effects of the increased resolution on the profiles of the mean shear-stress tensor component, Figs. 8m to 8p. The profiles from the S4 simulation are in good agreement with the experimental data and indicate considerable improvement when compared to the simulations with smaller resolution. This result concludes our analysis of the numerical results from the simulations. It was possible to observe, in general, that the improved resolution of the simulations produced better results compared to the experimental data. The simulation with the highest resolution, the S4 simulation, was the one that best matched the experimental data. Only along the jet lipline is this behavior not observed. The authors are confident that this has nothing to do with the effect of the resolution; instead, it may be strongly related to the choice of the jet inlet boundary condition, which reproduces neither the boundary layer developed inside the nozzle nor the turbulence intensity in that region. To improve the quality of the simulations along the jet lipline, a new boundary condition or a different simulation strategy should be adopted. ### Analysis of Computational Effort At this point, it is important to discuss some other aspects of the computational effort in order to improve the computational efficiency of the simulations. The main parameter utilized to measure the efficiency of a simulation is the Performance Index \(PID\), which can be calculated by \[PID=\frac{wall\ clock\ time\times n_{cores}}{n_{DOF}\ n_{time\ steps}\ n_{RK-stages}}, \tag{12}\] where \(wall\ clock\ time\) is the time the simulation needed to perform \(n_{time\ steps}\) time steps, \(n_{cores}\) is the number of cores used in the simulation, \(n_{DOF}\) is the number of DOF of the simulation, and \(n_{RK-stages}\) is the number of stages of the Runge-Kutta scheme. The PID was calculated for all four simulations and the results are presented in Tab. 2. \begin{table} \begin{tabular}{c|c} Simulation & PID (\(\mu s\)) \\ \hline S1 & 8 \\ S2 & 15 \\ S3 & 5 \\ S4 & 2 \\ \hline \end{tabular} \end{table} Table 2: Summary of Performance Index \(PID\) from all simulations. It is important to clarify to the reader that the numerical solver received some improvements during the execution of the simulations, and these can be related to the \(PID\) reduction observed for the S3 and S4 simulations compared to the S1 and S2 simulations. 
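For reference, Eq. (12) translates directly into a one-line helper; the run parameters below are purely illustrative placeholders and are not measurements from the paper.

```python
def performance_index(wall_clock_time_s, n_cores, n_dof, n_time_steps, n_rk_stages):
    """Performance Index of Eq. (12), returned in microseconds per DOF per RK stage."""
    return (wall_clock_time_s * n_cores) / (n_dof * n_time_steps * n_rk_stages) * 1e6

# Hypothetical run: 2048 cores advancing a 410e6-DOF case by 10,000 time steps
# of a 5-stage Runge-Kutta scheme in 10 hours of wall clock time.
pid = performance_index(10 * 3600, 2048, 410e6, 10_000, 5)
print(f"PID = {pid:.1f} microseconds")   # ~3.6 in this made-up example
```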
It is also possible to argue that with the increased number of degrees of freedom it is expected that more computation is performed with a similar number of cores, which also should increase the efficiency of the simulations. If we compare only S1 and S2 simulations, for the same number of DOF, the third-order accurate simulations cost almost twice the effort of a second-order accurate simulation. If these values are employed in the two simulations procedure performed in this analysis, the total cost of the simulation can be compared. The first simulation procedure involves the whole calculation of the 9 FTT with a mesh of \(50\times 10^{6}\) elements that produces \(\approx 400\times 10^{6}\) DOF when simulated with second-order accurate discretization, which could produce a very similar result to those of S4 simulation. If the procedure for S4 simulation could be completely performed, it could initially start its 5 FTT with second-order accurate discretization with a total of \(120\times 10^{6}\) DOF. At this point, we do not consider the effect of the number of degrees of freedom in the PID, only the order of discretization. For this 5 FTT, the total time would be \(3.34\times\) smaller than those to perform the first simulation procedure due to the reduced number of degrees of freedom. Then, it is possible to consider the next 4 FTT calculated with third-order accurate discretization and a total of \(\approx 400\times 10^{6}\) DOF. In this 4FTT the cost of the third-order accurate discretization is twice of the second-order accurate simulation. If the total time of the second procedure is calculated, it costs \(\approx 5\%\) more than the procedure with second-order accuracy. What is discussed here is that, once it is possible to start the high-order simulation with a previous result from another order of accuracy with the same mesh, it is possible to reduce the time required to obtain the desired data with high-order simulation and consequently reduce the cost of the total simulation. In the proposed procedure, the cost of the total third-order accurate simulation was only \(5\%\) larger than those of a second-order accurate simulation. This result is expressive and very interesting for high-order simulations. Another important point to present is that the computational code utilized presented very good scalability with the number of cores. The tests for the S4 simulation varied the number of cores from a few hundred cores to a few thousand cores and the \(PID\) was always close to \(2\mu s\). ## 6 Concluding Remarks In this work, the employment of a discontinuous Galerkin framework called FLEXI was investigated for the LES simulation of supersonic free round jets. A total of four simulations are performed with 3 different meshes and second and third-order accuracy. The range of degrees of freedom from the simulations varies from \(50\times 10^{6}\) to \(400\times 10^{6}\). All the simulations are performed for the same geometric model and with the same boundary conditions. The results of the simulations are firstly compared visually between themselves to compare how they are capturing the main features of the flow: extension of the high-velocity region, development of the shear layer, and development of the sets of shocks and expansion waves. The results showed that with increased resolution the high-velocity regions got longer. The development of the shear layer starts closer to the jet inlet section and presents a smaller spreading. 
The number of sets of shocks and expansion waves increased and the visual of the shocks and expansion waves are thinner. The numerical results showed that, in general, the increase in the resolution of the simulation, especially the number of degrees of freedom, produced better results when compared to experimental data. This behavior is observed in the results of mean longitudinal velocity distribution and RMS of longitudinal velocity fluctuation distribution in the centerline. It is also observed in the results from the four spanwise planes. The only results that do not follow this behavior are the mean longitudinal and RMS of longitudinal velocity fluctuation distributions along the jet lipline. In these regions, with increased resolutions, the results are pushed away from experimental data. The authors believe this behavior is related to the lack of resolution from the jet inlet boundary condition that could not reproduce the boundary layer and turbulent intensity from the experiments. The analysis of the computational effort of the simulation showed that even utilizing a high-order method that costs more than a second-order method for the same number of degrees of freedom it was possible to reproduce a third-order simulation with only 5% more computational cost of total simulation by initializing the simulation with a smaller order of accuracy. The work reached is objective of identifying the guidelines for performing LES simulations of supersonic jet flows using a discontinuous Galerkin scheme with adequate results with a reasonable computational cost. The open point on the jet inlet condition is the next step in the development of the work. ## Acknowledgments The authors acknowledge the support for the present research provided by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico, CNPq, under the Research Grant No. 309985/2013-7. The work is also supported by the computational resources from the Center for Mathematical Sciences Applied to Industry, CeMEAI, funded by Fundacao de Amparo a Pesquisa do Estado de Sao Paulo, FAPESP, under the Research Grant No. 2013/07375-0. The authors further acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the SDumont supercomputer. This work was also granted access to the HPC resources of IDRIS under the allocation 2020-A0092A12067 / 2021-A0112A12067 made by GENCI. The first author acknowledges authorization by his employer, Embraer S.A., which has allowed his participation in the present research effort. The doctoral scholarship provide by FAPESP to the third author, under the Grant No. 2018/05524-1, is thankfully acknowledged. Additional support to the fourth author under the FAPESP Research Grant No. 2013/07375-0 is also gratefully acknowledged. This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001.
2310.17650
A Coarse-to-Fine Pseudo-Labeling (C2FPL) Framework for Unsupervised Video Anomaly Detection
Detection of anomalous events in videos is an important problem in applications such as surveillance. Video anomaly detection (VAD) is well-studied in the one-class classification (OCC) and weakly supervised (WS) settings. However, fully unsupervised (US) video anomaly detection methods, which learn a complete system without any annotation or human supervision, have not been explored in depth. This is because the lack of any ground truth annotations significantly increases the magnitude of the VAD challenge. To address this challenge, we propose a simple-but-effective two-stage pseudo-label generation framework that produces segment-level (normal/anomaly) pseudo-labels, which can be further used to train a segment-level anomaly detector in a supervised manner. The proposed coarse-to-fine pseudo-label (C2FPL) generator employs carefully-designed hierarchical divisive clustering and statistical hypothesis testing to identify anomalous video segments from a set of completely unlabeled videos. The trained anomaly detector can be directly applied on segments of an unseen test video to obtain segment-level, and subsequently, frame-level anomaly predictions. Extensive studies on two large-scale public-domain datasets, UCF-Crime and XD-Violence, demonstrate that the proposed unsupervised approach achieves superior performance compared to all existing OCC and US methods , while yielding comparable performance to the state-of-the-art WS methods.
Anas Al-lahham, Nurbek Tastan, Zaigham Zaheer, Karthik Nandakumar
2023-10-26T17:59:19Z
http://arxiv.org/abs/2310.17650v1
# A Coarse-to-Fine Pseudo-Labeling (C2FPL) Framework for Unsupervised Video Anomaly Detection ###### Abstract Detection of anomalous events in videos is an important problem in applications such as surveillance. Video anomaly detection (VAD) is well-studied in the one-class classification (OCC) and weakly supervised (WS) settings. However, fully unsupervised (US) video anomaly detection methods, which learn a complete system without any annotation or human supervision, have not been explored in depth. This is because the lack of any ground truth annotations significantly increases the magnitude of the VAD challenge. To address this challenge, we propose a simple-but-effective two-stage pseudo-label generation framework that produces segment-level (normal/anomaly) pseudo-labels, which can be further used to train a segment-level anomaly detector in a supervised manner. The proposed coarse-to-fine pseudo-label (C2FPL) generator employs carefully-designed hierarchical divisive clustering and statistical hypothesis testing to identify anomalous video segments from a set of completely unlabeled videos. The trained anomaly detector can be directly applied on segments of an unseen test video to obtain segment-level, and subsequently, frame-level anomaly predictions. Extensive studies on two large-scale public-domain datasets, UCF-Crime and XD-Violence, demonstrate that the proposed unsupervised approach achieves superior performance compared to all existing OCC and US methods, while yielding comparable performance to the state-of-the-art WS methods. Code is available at: [https://github.com/AnasEmad11/C2FPL](https://github.com/AnasEmad11/C2FPL) ## 1 Introduction Applications such as video surveillance continuously generate large amounts of video data. While a vast majority of these videos only contain normal behavior, it is essential to detect anomalous events (e.g., shooting, road accidents, fighting, etc.) that deviate from normal behavior and may occur occasionally in such videos. Hence, video anomaly detection (VAD) is a critical problem, especially in surveillance applications [9, 13, 14, 19]. Conventional VAD methods rely heavily on manually annotated anomaly examples (Figure 1(right)) [2]. However, given the rare occurrence and short temporal nature of anomalies in real-world scenarios, obtaining accurate fine-grained annotations is a laborious task. Recently, several VAD methods have been proposed to leverage video-level labels and perform weakly supervised (WS) training [5, 15, 19, 25, 39, 15] to reduce the annotation costs. However, since surveillance datasets are usually a large-scale collection of videos, it is still cumbersome to obtain any kind of labels. For example, to obtain even a video-level binary label, an annotator may still have to watch the Figure 1: Supervised (right) vs. unsupervised (left) video anomaly detection pipeline. In a supervised setting, some sort of manual annotation of the recorded videos is required for training an anomaly detection model. We aim to eliminate this annotation step by proposing a fully unsupervised approach. whole video, which can take a considerable amount of time. For example, a well-known WS-VAD dataset called XD-Violence [31] contains videos spanning \(217\) hours. An alternative paradigm for VAD is one-class classification (OCC), which assumes that only normal videos are available for training [16, 19, 25, 30, 35]. 
However, the OCC setting does not completely alleviate the annotation problem because an annotator still has to watch all the training videos to ensure that no anomaly is present within them. A label-free fully unsupervised approach is a more practical and useful setting, especially in real-world scenarios where recording video data is easier than annotating it [36]. An unsupervised video anomaly detection (US-VAD) method can address the aforementioned disadvantages of supervised methods by completely eradicating the need for manual annotations (Figure 1). However, US-VAD methods are yet to gain much traction within the computer vision community. Recently, Zaheer _et al_. [36] introduced an US-VAD approach in which the model is trained on unlabeled normal and anomalous videos. Their idea is to utilize several properties of the training data to obtain pseudo-labels via cooperation between a generator and a classifier. While this method is elegant, its performance is significantly lower than the state-of-the-art WS and OCC methods [23, 36]. In this work, we attempt to bridge this gap between unsupervised and supervised methods by taking unlabelled set of training videos as input and producing segment-level pseudo-labels without relying on any human supervision. Towards this end, we make the following key contributions: * We propose a two-stage coarse-to-fine pseudo-label (C2FPL) generator that utilizes hierarchical divisive (top-down) clustering and statistical hypothesis testing to obtain segment-level (fine-grained) pseudo-labels. * Based on the C2FPL framework, we propose an US-VAD system that is trainable without any annotations. To the best of our knowledge, this is among the first few works to explore the US-VAD setting in detail. * We evaluate the proposed approach on two large-scale VAD datasets, UCF-Crime [19] and XD-Violence [31], and achieve state-of-the-art performance in the unsupervised category, while also outperforming all existing OCC and several WS-VAD methods. ## 2 Related Work Early VAD methods mostly relied on supervised learning, where anomalous frames in a video are explicitly labeled in the training data [6, 27]. Since supervised approaches require large amounts of annotated data and annotation of anomalies is a laborious task, WS, OCC, and US VAD methods are gaining more attention. ### One-Class Classification for VAD To avoid the capturing of anomalous examples, researchers have widely explored one-class classification (OCC) methods [7, 12, 29, 32]. In OCC-VAD, only normal videos are used to train an outlier detector. At the time of inference, data instances that do not conform to the learned normal representations are predicted as anomalous. Since OCC methods are known to fail if normal data contains some anomaly examples [36], they require careful verification of all the videos in the dataset, which does not reduce the annotation load. Furthermore, video data is often too diverse to be modeled successfully and new normal scenes differing from the learned representations may be classified as anomalous. Therefore, OCC approach has limited applicability in the context of VAD. ### Weakly Supervised VAD Taking advantage of weakly labeled (i.e., video-level labels) anomalous samples has led to significant improvements over OCC training [19, 25]. Multiple Instance Learning (MIL) is one of the most commonly used methods for WS-VAD [16, 19, 25, 30], where segments of a video are grouped into a bag and bag-level labels are assigned. Sultani _et al_. 
[19] first introduced the MIL framework with a ranking loss function, which is computed between the top-scoring segments of normal and anomaly bags. One of the key challenges in WS-VAD is that the positive (anomaly) bags are noisy. Since anomalies are localized temporally, most of the segments in an anomaly bag are also normal. Therefore, Zhong _et al_. [39] reformulated the problem as binary classification in the presence of noisy labels and used a graph convolution network (GCN) to remove label noise. The training of GCN was computationally expensive due to the presence of an action classifier. Furthermore, MIL-based methods require complete video inputs at each training iteration. Consequently, the correlation of the input data significantly affects the training of an anomaly detection network. To minimize this correlation, CLAWS Net [35] proposed a random batch selection approach in which temporally consistent batches are arbitrarily selected for training a binary classifier. ### Unsupervised VAD Unsupervised video anomaly detection (US-VAD) methods are learned using unlabeled training data. This problem is extremely challenging due to the lack of ground truth supervision and the rarity of anomalies. However, it is highly rewarding because it can completely eradicate the costs associated with obtaining manual annotations and allow such systems to be deployed without human intervention. Due to the difficulty of the problem, it has received little attention in the literature. Generative Cooperative Learning [36] is a recent work that presents an US-VAD system to detect anomalies in a given video by first training a generative model to reconstruct normal video frames and then using the discrepancy between the reconstructed frames and the actual frames as a measure of anomaly. It involves training two models simultaneously: one to reconstruct the normal frames and the other to generate classification scores. ## 3 Proposed Methodology **Problem Definition**: Let \(\mathcal{D}=\{V_{1},V_{2},\cdots,V_{n}\}\) be a training dataset containing \(n\) videos without any labels. The goal of US-VAD is to use \(\mathcal{D}\) and learn an anomaly detector \(\mathcal{A}(\cdot)\) that classifies each frame in a given test video \(V_{*}\) as either _normal_ (\(0\)) or _anomalous_ (\(1\)). **Notations**: We split each video \(V_{i}\) into a sequence of \(m_{i}\) non-overlapping segments \(S_{ij}\), where each segment is in turn composed of \(r\) frames. Note that \(i\in[1,n]\) refers to the video index, and \(j\in[1,m_{i}]\) is the segment index within a video. While many WS-VAD methods [16, 19, 25, 30] compress each video into a fixed number of segments (i.e., \(m_{i}=m,\forall i\in[1,n]\)) along the temporal axis, we avoid any compression and make use of all available non-overlapping segments. For each segment \(S_{ij}\), a feature vector \(\mathbf{f}_{ij}\in\mathbb{R}^{d}\) is obtained using a pre-trained feature extractor \(\mathcal{F}(\cdot)\). **High-level Overview of the Proposed Solution**: Our coarse-to-fine pseudo-labeling (C2FPL) framework for US-VAD consists of three main stages during training (see Figure 2). In the first coarse pseudo-labeling (CPL) stage, we generate a video-level pseudo-label \(\hat{y}_{i}\in\{0,1\}\), \(i\in[1,n]\) for each video in the training set using a hierarchical divisive clustering approach. 
In the second fine pseudo-labeling (FPL) stage, we generate segment-level pseudo-labels \(\tilde{y}_{ij}\in\{0,1\}\), \(i\in[1,n]\), \(j\in[1,m_{i}]\) for all the segments in the training set through statistical hypothesis testing. In the third anomaly detection (AD) stage, we train a segment-level anomaly detector \(\tilde{\mathcal{A}}_{\theta}(\cdot):\mathbb{R}^{d}\rightarrow[0,1]\) that assigns an anomaly score between \(0\) and \(1\) (higher values indicate higher confidence of being an anomaly) to the given video segment based on its feature representation \(\mathbf{f}_{ij}\). ### Coarse (Video-Level) Pseudo-Label Generator Since the training dataset does not contain any labels, we first generate pseudo-labels for the videos in the training set by recursively clustering them into two groups: normal and anomalous (see Alg. 1). The idea of using iterative clustering to generate pseudo-labels has been considered earlier in other application domains [1, 4, 38]. However, direct application of these methods to the US-VAD problem fails to provide satisfactory solutions due to two reasons. Firstly, directly clustering multivariate features \(\mathbf{f}_{ij}\) leads to a curse of dimensionality (features are high-dimensional but the sample size is small). Secondly, the clusters in our context are not permutation-invariant (normal and anomalous cluster labels cannot be interchanged). To overcome these problems, we propose a method that relies on a low-dimensional feature summary and divisive hierarchical clustering. Previous works in WS-VAD have shown that normal video segments have lower temporal feature magnitude compared to anomalous segments [26]. Furthermore, we also observed that the variations in feature magnitude across different segments are lower for normal videos. Based on this intuition, we represent each video \(V_{i}\) using a statistical summary \(\mathbf{x}_{i}=[\mu_{i},\sigma_{i}]\) of its features as follows: \[\mu_{i}=\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}||\mathbf{f}_{ij}||_{2}, \tag{1}\] \[\sigma_{i}=\sqrt{\frac{1}{(m_{i}-1)}\sum_{j=1}^{m_{i}}(||\mathbf{f}_{ij}||_{2} -\mu_{i})^{2}}, \tag{2}\] Figure 2: Overall training pipeline of the proposed C2FPL framework for unsupervised video anomaly detection (US-VAD). All training videos are first partitioned into segments and each segment is represented by a feature vector obtained using a pre-trained feature extractor. Then, our two-stage coarse-to-fine pseudo-label (C2FPL) generator produces segment-level pseudo-labels, which are used to train a segment-level anomaly detector. Pseudo-label generation consists of two stages: hierarchical divisive clustering that generates coarse (video-level) pseudo-labels (CPL) and statistical hypothesis testing that creates fine-grained (segment-level) pseudo-labels (FPL). where \(\|\cdot\|_{2}\) represents the \(\ell_{2}\) norm of a vector. Thus, each video \(V_{i}\) is represented using a 2D vector \(\mathbf{x}_{i}\), corresponding to the mean and standard deviation of the feature magnitude of its segments. This ensures a uniform representation of all videos despite their varying temporal length. Videos in the training set are iteratively divided into two clusters (\(\mathcal{C}_{0}^{t}\) and \(\mathcal{C}_{1}^{t}\)) based on the above representation \(\mathbf{x}_{i}\). Here, \(t\) denotes the step index and \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) represent the normal and anomaly clusters, respectively. 
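As a rough sketch of the ingredients just described, the snippet below computes the two-dimensional video summary of Eqs. (1)-(2) and performs a single divisive split. The paper does not name the base clustering routine, so 2-means is used here purely as a stand-in; the size-based labeling of the two child clusters is discussed in the next paragraph.

```python
import numpy as np
from sklearn.cluster import KMeans

def video_summary(segment_features):
    """Eqs. (1)-(2): 2-D summary x_i = [mu_i, sigma_i] of a video's per-segment
    l2 feature magnitudes (segment_features has shape [m_i, d])."""
    norms = np.linalg.norm(segment_features, axis=1)
    return np.array([norms.mean(), norms.std(ddof=1)])

def split_once(X):
    """One divisive step: split the current normal cluster into two child clusters.
    2-means is an assumption made for this sketch. Returns (larger, smaller)."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    c0, c1 = X[labels == 0], X[labels == 1]
    return (c0, c1) if len(c0) >= len(c1) else (c1, c0)
```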
Since no data labels are available, assigning normal and anomaly labels to the clusters is not trivial. Intuitively, easy anomalies (considered as easy outliers) may be separated into a smaller cluster. On the other hand, the larger cluster is likely to contain more normal videos as well as some hard anomalies that need further refinement. Therefore, initially, all the videos in the training set are assigned to the normal cluster and the anomaly cluster is initialized to an empty set, i.e., \(\mathcal{C}_{0}^{0}=\{\mathbf{x}_{i}\}_{i\in[1,n]}\) and \(\mathcal{C}_{1}^{0}=\varnothing\). At each step \(t\) (\(t\geq 1\)), the normal cluster \(\mathcal{C}_{0}^{t-1}\) is re-clustered to obtain two new child clusters, say \(\mathcal{C}_{l}\) and \(\mathcal{C}_{s}\) with \(|\mathcal{C}_{l}|\) and \(|\mathcal{C}_{s}|\) samples, respectively. Without loss of generality, let \(|\mathcal{C}_{s}|<|\mathcal{C}_{l}|\). The smaller cluster \(\mathcal{C}_{s}\) is merged with the previous anomaly cluster, i.e., \(\mathcal{C}_{1}^{t}=\left(\mathcal{C}_{1}^{t-1}\ \cup\ \mathcal{C}_{s}\right)\), while the larger cluster is labeled as normal, i.e., \(\mathcal{C}_{0}^{t}=\mathcal{C}_{l}\). This process is repeated until the ratio of the number of videos in the anomaly cluster (\(|\mathcal{C}_{1}^{t}|\)) to the number of videos in the normal cluster (\(|\mathcal{C}_{0}^{t}|\)) exceeds a threshold, i.e., \(\frac{|\mathcal{C}_{1}^{t}|}{|\mathcal{C}_{0}^{t}|}>\eta\). At the end of the CPL stage, all the videos in the training set are assigned a pseudo-label based on their corresponding cluster index, i.e., \(\hat{y}_{i}=k\) if \(\mathbf{x}_{i}\in\mathcal{C}_{k}^{\tau}\), where \(k\in\{0,1\}\) and \(\tau\) denotes the final clustering iteration.

### Fine (Segment-Level) Pseudo-Label Generator

All the segments from videos that are "pseudo-labeled" as normal (\(\hat{y}_{i}=0\)) by the previous stage can be considered as normal. However, most of the segments in an anomalous video are also normal due to the temporal localization of anomalies. Hence, further refinement of the coarse (video-level) labels is required to generate segment-level labels for anomalous videos. To achieve this goal, we treat the detection of anomalous segments as a statistical hypothesis testing problem. Specifically, the null hypothesis is that a given video segment is normal. By modeling the distribution of features under the null hypothesis as a Gaussian distribution, we identify the anomalous segments by estimating their p-value and rejecting the null hypothesis if the p-value is less than the significance level \(\alpha\). To model the distribution of features under the null hypothesis, we consider only the segments from videos that are pseudo-labeled as normal by the CPL stage. Let \(\mathbf{z}_{ij}\in\mathbb{R}^{\tilde{d}}\) be a low-dimensional representation of a segment \(S_{ij}\). We assume that \(\mathbf{z}_{ij}\) follows a Gaussian distribution \(\mathcal{N}(\mathbf{\Gamma},\mathbf{\Sigma})\) under the null hypothesis and estimate the parameters \(\mathbf{\Gamma}\) and \(\mathbf{\Sigma}\) as follows: \[\mathbf{\Gamma}=\frac{1}{M_{0}}\sum_{i=1,\hat{y}_{i}=0}^{n}\sum_{j=1}^{m_{i}}\mathbf{z}_{ij}, \tag{3}\] \[\mathbf{\Sigma}=\frac{1}{(M_{0}-1)}\sum_{i=1,\hat{y}_{i}=0}^{n}\sum_{j=1}^{m_{i}}(\mathbf{z}_{ij}-\mathbf{\Gamma})(\mathbf{z}_{ij}-\mathbf{\Gamma})^{T}, \tag{4}\] where \(M_{0}=\sum_{i=1,\hat{y}_{i}=0}^{n}m_{i}\).
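Stepping back to the CPL stage of Section 3.1, a minimal sketch of the divisive clustering loop is given below, assuming GMM-based clustering as stated in the implementation details; the function names and the guard on the minimum cluster size are our additions to keep the loop well defined, not part of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def coarse_pseudo_labels(X: np.ndarray, eta: float = 1.0, seed: int = 0) -> np.ndarray:
    """Hierarchical divisive clustering over video summaries X of shape (n, 2).

    Returns video-level pseudo-labels y_hat (0 = normal, 1 = anomalous):
    repeatedly re-cluster the current normal set, move the smaller child
    cluster into the anomaly set, and stop once |C_1| / |C_0| exceeds eta.
    """
    n = len(X)
    normal_idx = np.arange(n)                 # C_0^0: all videos start as normal
    anomaly_idx = np.array([], dtype=int)     # C_1^0: empty

    while len(anomaly_idx) / max(len(normal_idx), 1) <= eta and len(normal_idx) > 2:
        gmm = GaussianMixture(n_components=2, random_state=seed).fit(X[normal_idx])
        assign = gmm.predict(X[normal_idx])
        c0, c1 = normal_idx[assign == 0], normal_idx[assign == 1]
        small, large = (c0, c1) if len(c0) < len(c1) else (c1, c0)
        anomaly_idx = np.concatenate([anomaly_idx, small])  # merge smaller cluster into C_1
        normal_idx = large                                   # larger cluster stays normal

    y_hat = np.zeros(n, dtype=int)
    y_hat[anomaly_idx] = 1
    return y_hat
```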
Subsequently, for all the segments in videos that are pseudo-labeled as anomalous, the \(p\)-value is computed as: \[p_{ij}=\frac{1}{(2\pi)^{(d/2)}\sqrt{|\Sigma|}}\exp\left(-\frac{1}{2}(\mathbf{ z}_{ij}-\mathbf{\Gamma})^{T}\Sigma^{-1}(\mathbf{z}_{ij}-\mathbf{\Gamma})\right), \tag{5}\] \(\forall j\in[1,m_{i}],i\in[1,n]\) such that \(\hat{y}_{i}=1\). If \(p_{ij}<\alpha\), the segment can be potentially assigned a pseudo-label of \(1\). Figure 3 shows an illustration of this approach, which clearly indicates strong agreement between the estimated p-values and the ground truth anomaly labels of the validation set. One unresolved question in the above formulation is how to obtain the low-dimensional representation \(\mathbf{z}_{ij}\) for a segment \(S_{ij}\). In this work, we simply set \(\mathbf{z}_{ij}=\|\mathbf{f}_{ij}\|_{2}\) and hence \(\tilde{d}=1\). Note that other statistics could also be employed in addition to (or in lieu of) the \(\ell_{2}\) feature magnitude. Directly assigning a pseudo-label to a segment based on its p-value ignores the reality that anomalous segments in a video tend to be temporally contiguous. One way to overcome this limitation is to mark a contiguous sequence of \(w_{i}=\left\lceil\beta m_{i}\right\rceil\) segments, \(0<\beta<1\) and \(\left\lceil\cdot\right\rceil\) represents the ceil function, as the anomalous region within each video that is pseudo-labeled as an anomaly. The anomalous region is determined by sliding a window of size \(w_{i}\) across the video and selecting the window that has the lowest average p-values (i.e., \(\min_{l}\left\{\frac{1}{w_{i}}\sum_{j=(l+1)}^{(l+w_{i})}p_{ij},\ \forall\ l\in[0,m_{i}-w_{i}]\right\}\)). Each segment present in this anomalous region is assigned a pseudo-label of \(1\), while all the remaining segments are pseudo-labeled as normal (value of \(0\)). At the end of this FPL stage, a pseudo-label \(\tilde{y}_{ij}\in\{0,1\}\) is assigned to all the segments in the training set. ### Anomaly Detector The coarse and fine pseudo-label generators together provide a pseudo-label for every video segment in the training dataset. This results in a pseudo-labeled training set \(\tilde{\mathcal{D}}=\{(\mathbf{f}_{ij},\tilde{y}_{ij})\}\) containing \(M\) samples, where \(i\in[1,n]\), \(j\in[1,m_{i}]\), and \(M=\sum_{i=1}^{n}m_{i}\). This labeled training set \(\tilde{\mathcal{D}}\) can be used to train the anomaly detector \(\tilde{\mathcal{A}}_{\theta}(\cdot)\) in a supervised fashion by minimizing the following objective: \[\min_{\theta}\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\mathcal{L}(\mathcal{A}_{\theta}( \mathbf{f}_{ij}),\tilde{y}_{ij}), \tag{6}\] where \(\mathcal{L}\) is an appropriate loss function and \(\theta\) denotes the parameters of the anomaly detector \(\tilde{\mathcal{A}}(\cdot)\). Following recent state-of-the-art methods [19, 25, 35, 36], two basic neural network architectures are considered for our anomaly detector. In particular, we employ a shallow neural network (Figure 2) with two fully connected (FC) hidden layers and one output layer mapped to a binary class. A dropout layer and a ReLU activation function are applied after each FC layer. Additionally, following Zaheer _et al_. [35], we add two self-attention layers (detailed architecture is provided in the Supplementary material). A softmax activation function follows each of the self-attention layers, each of which has the same dimensions as the corresponding FC layer in the backbone network. 
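Returning to the FPL stage described above, the following sketch fits the null Gaussian of Eqs. (3)-(4) in the one-dimensional case \(z_{ij}=\|\mathbf{f}_{ij}\|_{2}\), evaluates the density of Eq. (5), and applies the sliding-window labeling; all names are illustrative assumptions.

```python
import numpy as np

def fine_pseudo_labels(feat_norms, y_hat, beta=0.2):
    """Segment-level pseudo-labels via the FPL stage (1-D case, z_ij = ||f_ij||_2).

    feat_norms: list of 1-D arrays, feat_norms[i][j] = ||f_ij||_2 for video i.
    y_hat: video-level pseudo-labels from the CPL stage.
    Returns a list of 0/1 arrays, one per video.
    """
    # Fit the null (normal) Gaussian from segments of pseudo-normal videos (Eqs. 3-4)
    z0 = np.concatenate([z for z, y in zip(feat_norms, y_hat) if y == 0])
    gamma, var = z0.mean(), z0.var(ddof=1)

    labels = []
    for z, y in zip(feat_norms, y_hat):
        lab = np.zeros(len(z), dtype=int)
        if y == 1:
            # Gaussian density used as the "p-value" score in Eq. (5)
            p = np.exp(-0.5 * (z - gamma) ** 2 / var) / np.sqrt(2 * np.pi * var)
            w = int(np.ceil(beta * len(z)))       # window of size ceil(beta * m_i)
            # choose the contiguous window with the lowest average p-value
            avgs = [p[l:l + w].mean() for l in range(len(z) - w + 1)]
            start = int(np.argmin(avgs))
            lab[start:start + w] = 1              # segments in the window -> anomalous
        labels.append(lab)
    return labels
```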
Final anomaly score prediction is produced by the output sigmoid function. Unlike many existing methods (e.g., [19]) that require having a complete video in one training batch, our approach allows random segment selection for training. Recently, Zaheer _et al_. [35] demonstrated the benefits of feature vector randomization for training. However, based on its design, their method was limited to randomizing consecutive batches while maintaining the temporal order of segments within a batch. In our case, since we have obtained pseudo-labels for each segment, we can apply training with complete randomization to reap maximum benefits. Therefore, feature vectors are obtained across the dataset to form the training batches. Formally, each training batch \(\mathcal{B}\) contains \(B\) randomly selected samples from the set \(\tilde{\mathcal{D}}\) without any order constraints between the samples.

### Inference

During inference, a given test video \(V_{*}\) is partitioned into \(m_{*}\) non-overlapping segments \(S_{*j}\), \(j\in[1,m_{*}]\). Feature vectors \(\mathbf{f}_{*j}\) are extracted from each segment using \(\mathcal{F}(\cdot)\), which are directly passed to the trained detector \(\tilde{\mathcal{A}}_{\theta}(\cdot)\) to obtain segment-level anomaly score predictions. Since the eventual goal is frame-level anomaly prediction, all the frames within a segment of the test video are marked as anomalous if the predicted anomaly score for that corresponding segment exceeds a threshold.

## 4 Experimental Results

### Experimental Setup

**Datasets**: Two large-scale VAD datasets are used to evaluate our approach: UCF-Crime [19] and XD-Violence [31]. **UCF-Crime** consists of \(1610\) (\(290\)) training (test) videos collected from real-world surveillance camera feeds, totaling 128 hours in length. **XD-Violence** is a multi-modal VAD dataset that is collected from sports streaming videos, movies, web videos, and surveillance cameras. It consists of \(3954\) (\(800\)) training (test) videos that span around 217 hours. We utilize only the visual modality of the XD-Violence dataset for our experiments. Both these datasets originally contain video-level ground-truth labels for the training set and frame-level labels for the test set. Hence, they are primarily meant for the WS-VAD task. In this work, we ignore the training labels and only use test labels to evaluate our US-VAD model.

Figure 3: Statistical hypothesis testing approach applied to pseudo-label an anomalous video **arson011** from the _validation set_. The top row shows the segment-level p-values for all the segments in the video, where a lower p-value means less likelihood of being normal. A possible pseudo-labeling strategy is to mark all segments with a p-value lower than the significance level (denoted by the horizontal orange line) as anomalous. Strong agreement can be observed between the estimated p-values and the ground-truth (GT) anomaly label for the given video shown in the bottom row.

**Evaluation Metric**: We adopt the commonly used frame-level area under the receiver operating characteristic curve (AUC) as the evaluation metric for all our experiments [19, 23, 35, 33, 25, 34, 36]. Note that the ROC curve is obtained by varying the threshold on the anomaly score during inference and higher AUC values indicate better results. **Implementation Details**: Each video is partitioned into multiple segments, with each segment containing \(r=16\) frames.
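Anticipating the training details given in the remainder of this subsection, the following PyTorch sketch trains a simplified detector (the self-attention layers are omitted) on pseudo-labeled segment features with binary cross-entropy, an \(\ell_{2}\) penalty applied via weight decay, SGD with the stated learning rate, and fully randomized batches, and then expands segment scores to frames for AUC evaluation; the layer widths, dropout rate, and weight-decay value are assumptions, not the authors' settings.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class Detector(nn.Module):
    """Two FC hidden layers with ReLU and dropout, sigmoid output in [0, 1]."""
    def __init__(self, d=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 32), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(32, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x).squeeze(-1)

def train(features, pseudo_labels, epochs=100, batch=128, lr=0.01):
    """features: (M, d) array; pseudo_labels: (M,) 0/1 array from the C2FPL stages."""
    model = Detector(features.shape[1])
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=1e-3)  # l2 regularization (value assumed)
    bce = nn.BCELoss()
    X = torch.as_tensor(features, dtype=torch.float32)
    y = torch.as_tensor(pseudo_labels, dtype=torch.float32)
    for _ in range(epochs):
        perm = torch.randperm(len(X))          # fully randomized batches, no order constraints
        for idx in perm.split(batch):
            opt.zero_grad()
            loss = bce(model(X[idx]), y[idx])
            loss.backward()
            opt.step()
    return model

def frame_level_auc(model, seg_feats, frames_per_segment, frame_gt):
    """Repeat each segment score over its frames and compute frame-level AUC."""
    with torch.no_grad():
        scores = model(torch.as_tensor(seg_feats, dtype=torch.float32)).numpy()
    frame_scores = np.repeat(scores, frames_per_segment)[: len(frame_gt)]
    return roc_auc_score(frame_gt, frame_scores)
```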
The well-known I3D [3] method is used as the pre-trained feature extractor \(\mathcal{F}(\cdot)\) to extract RGB features with dimensionality \(d=2048\). Following [22], we also apply 10-crop augmentation to the I3D features. The CPL generator uses Gaussian Mixture Model (GMM)-based clustering [17] and the threshold \(\eta\) is set to \(1.0\). The parameter \(\beta\) used in the FPL generator is set to \(0.2\). The anomaly detector \(\tilde{\mathcal{A}}_{\theta}(\cdot)\) is trained using a binary cross-entropy loss function along with \(\ell_{2}\) regularization. The detector is trained for \(100\) epochs using a stochastic gradient descent optimizer with a learning rate of \(0.01\). The batch size \(B\) is set to \(128\). ### Comparison with state-of-the-art In this section, we provide performance comparison of our proposed unsupervised C2FPL method with recent state-of-the-art (SOTA) supervised and unsupervised VAD methods [7, 18, 29, 36, 37, 19, 21]. **UCF-Crime**. The AUC results on the UCF-Crime dataset are shown in Table 1. Wherever possible, results based on I3D RGB features are reported to ensure a fair comparison. The proposed C2FPL method achieves an AUC performance of \(80.65\%\), outperforming the existing US and OCC methods while performing comparably to existing SOTA WS methods. Note that OCC methods assume that the training data contains only normal videos, while we do not make any such assumption. Furthermore, our unsupervised C2FPL framework even outperforms some methods in the WS setting [19, 36, 37], thus bridging the gap between unsupervised and supervised approaches. However, compared to the top performing WS method S3R [30] using the same I3D features, our approach yields \(5.34\%\) lower AUC. While this is impressive considering that our method does not require any supervision, it highlights the need for further improvement in the accuracy of the CPL stage. **XD-Violence**. Our C2FPL framework is also evaluated on XD-Violence dataset and the results are reported in Table 2. The proposed method has an AUC of \(80.09\%\), which is significantly better than the unsupervised RareAnom [23] method. Additionally, our framework achieves good results even in comparison to other OCC and WS methods. \begin{table} \begin{tabular}{c c c c c} \hline Supervision & Method & Features & FNS & AUC(\%) \\ \hline \hline \multirow{8}{*}{OCC} & SVM [19] & I3D & - & 50 \\ & Hasan [7] & - & - & 50.60 \\ & SSV [18] & - & - & 58.50 \\ & BODS [29] & I3D & - & 68.26 \\ & GODS [29] & I3D & - & 70.46 \\ & SACR [20] & - & - & **72.70** \\ & Zaheer [36] & ResNext & ✗ & **74.20** \\ \hline \multirow{8}{*}{WS} & Sultani [19] & I3D & ✓ & 77.92 \\ & Zaheer [36] & ResNext & ✗ & 79.84 \\ & RTFM [25] & I3D & ✓ & 84.30 \\ & MSL [11] & I3D & ✓ & **85.30** \\ & S3R [30] & I3D & ✓ & **85.99** \\ & C2FPL* (Ours) & I3D & ✗ & **85.5** \\ \hline \multirow{8}{*}{US} & Kim [10] & ResNext & - & 52.00 \\ & Zaheer [36] & ResNext & ✗ & 71.04 \\ \cline{1-1} & DyAnNet [24] & I3D & ✓ & **79.76** \\ \cline{1-1} & C2FPL (Ours) & I3D & ✗ & **80.65** \\ \hline \end{tabular} \end{table} Table 1: Frame-level AUC performance comparison on UCF-Crime dataset. Wherever available, RGB results are reported. Our unsupervised C2FPL method is compared against both unsupervised and supervised (WS and OCC) methods. The column FNS indicates whether the method uses a fixed number of segments \(m\) (\(m=32\) when FNS is true) and ‘-’ indicates this information is not available. 
The top two results under each supervision setting are shown in blue and red in that order. \(\dagger\) indicates that results are reported from [25], where the method in [19] was retrained using I3D features. \begin{table} \begin{tabular}{c c c c c} \hline Supervision & Method & Features & FNS & AUC(\%) \\ \hline \hline \multirow{8}{*}{OCC} & Hasan [7] & AE & - & 50.32 \\ & Lu [12] & I3D & - & 53.56 \\ & BODS [29] & I3D & - & **57.32** \\ & GODS [29] & I3D & - & **61.56** \\ \hline \multirow{3}{*}{WS} & S3R [30] & I3D & ✓ & **53.52** \\ & RTFM\(\dagger\)[25] & I3D & ✓ & **89.34** \\ & C2FPL* (Ours) & I3D & ✗ & **90.4** \\ \hline \multirow{3}{*}{US} & RareAnom [23] & I3D & ✓ & **68.33** \\ & C2FPL (Ours) & I3D & ✗ & **80.09** \\ \hline \end{tabular} \end{table} Table 2: Frame-level AUC performance comparison on XD-Violence dataset. The column FNS indicates whether the method uses a fixed number of segments \(m\) (\(m=32\) when FNS is true) and “-” indicates this information is not available. The top two results under each supervision setting are shown in blue and red in that order. \(\dagger\) indicates that we re-compute the AUC of method in [25] using I3D features. **Qualitative Results**: We also provide some qualitative results in Figure 4, where anomaly scores predicted by our C2FPL approach are visualized for several videos from the UCF-Crime dataset. It can be observed that the predicted anomaly scores generally correlate well to the anomaly ground truth in many cases, demonstrating the good anomaly detection capability of our approach despite being trained without any supervision. A **failure case**, shooting008 video (UCF-Crime), is also visualized in Figure 4(d). Our detector predicts several frames after the actual shooting event as anomalous. Careful inspection of this video shows a person with a gun entering the scene after the actual event, which our method marks as anomalous, but the ground-truth frame label is normal. Such discrepancies affect the frame-level AUC. ### Ablation Study Next, we conduct a detailed ablation study to analyze the impact of each component of the proposed C2FPL framework for US-VAD using the UCF-Crime dataset. **Impact of CPL**: The objective of CPL is to generate coarse video-level labels for all videos in the training dataset. To evaluate the impact of this component, we carry out two experiments and report the results in Table 3. In the first experiment, the CPL stage is removed and the video-level pseudo-labels are assigned randomly. In this case, the performance drops significantly to \(69.4\%\) indicating that the coarse pseudo-labels generated by CPL are indeed very useful in guiding the subsequent stages of the proposed system. On the other extreme, we also experimented with using the ground-truth video-level labels instead of the generated coarse pseudo-labels. Note that this setting is equivalent to WS training used widely in the literature. As expected, the performance improves to \(85.5\%\), which is almost on par with the best WS method S3R [30] using the same I3D features (see Table 1). On the XD-Violence dataset, the C2FPL method adapted for the WS setting achieves an AUC of \(90.4\%\), which is better than existing WS methods on the same dataset. These results highlight the potential improvement that can be achieved by improving the accuracy of the CPL stage. It also demonstrates the ability of our proposed approach to learn without labels, but at the same time exploit the ground-truth WS labels when they are available. 
**Impact of FPL**: Since the goal of FPL is to obtain segment-level labels, we consider the following three scenarios. Firstly, when the C2FPL framework is completely ignored and the segment-level pseudo-labels are assigned randomly, the performance of the trained anomaly detector collapses to a very low AUC of \(38.72\%\). This experiment proves that the generated segment-level pseudo-labels are indeed very informative and aid the training of an accurate anomaly detector. Secondly, we ignore only the FPL stage and assign the coarse video-level labels obtained from CPL to all the segments in the corresponding video. There is still a substantial performance drop to \(64.1\%\) (from \(80.65\%\) when FPL is used). Finally, we again consider the WS setting and assign the ground-truth video-level labels to all the segments in a video. Even in this case, the performance improves only to \(72.7\%\) (compared to \(85.5\%\) when FPL is used in the WS setting). The last two results clearly prove that the use of FPL reduces segment-level label noise to a large extent, thereby facilitating better training of the anomaly detector.

Figure 4: Qualitative results of our method on different test videos of the UCF-Crime dataset. The blue color shadow shows the ground truth anomalous frames.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Stage 1 & Stage 2 & Stage 3 & Scenario & AUC \\ (**CPL**) & (**FPL**) & (**AD**) & & (**\%**) \\ \hline ✓ & ✓ & ✓ & US C2FPL framework & 80.6 \\ \hline ✗ & ✓ & ✓ & Ground-truth video-level labels (WS) & 85.5 \\ ✗ & ✓ & ✓ & Random video-level labels & 69.4 \\ \hline ✓ & ✗ & ✓ & CPL pseudo-labels assigned to segments & 64.1 \\ ✗ & ✗ & ✓ & Ground-truth video-level labels assigned to segments & 72.7 \\ \hline ✗ & ✗ & ✓ & Random segment-level labels & 38.7 \\ \hline ✓ & ✓ & ✗ & (1 – p-value) as anomaly score & 57.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies analyzing the impact of each component of the proposed approach on the UCF-Crime dataset.

**Impact of Anomaly Detector**: To understand the impact of the segment-level anomaly detector, we excluded the detector and directly used (1-p-value) obtained during the FPL stage as the anomaly score. This results in a significant drop in AUC to \(57.0\%\), which indicates that while the C2FPL framework can generate informative pseudo-labels, these labels are still quite noisy and cannot be directly used for frame-level anomaly prediction. The anomaly detector is critical to learn from these noisy pseudo-labels and make more accurate fine-grained predictions.

### Parameter Sensitivity Analysis

**Sensitivity to \(\eta\)**: The sensitivity of the proposed method to the value of \(\eta\) is studied on the UCF-Crime dataset. The best results are achieved when \(\eta=1\), which corresponds to having a roughly equal number of videos in the normal and anomaly clusters. When \(\eta=0.5\) or \(\eta=1.5\), the AUC drops to \(75.64\%\) and \(71.22\%\), respectively. It is important to emphasize that though the number of normal segments in a dataset is usually much larger than the number of anomalous segments, the numbers of normal and anomalous videos in the available datasets are roughly equal. For example, UCF-Crime has \(800\) normal videos and \(810\) anomalous videos, while XD-Violence has \(2049\) normal videos and \(1905\) anomalous videos. Therefore, the choice of \(\eta=1\) is appropriate for these two datasets.
In real-world unsupervised settings, the ratio of anomalous to normal videos in a given dataset may not be known in advance because there are no labels. When \(\eta\) is mis-specified, there is some performance degradation, which is a limitation of the proposed C2FPL approach.

**Sensitivity to \(\beta\)**: In the FPL stage, a window of size \(\left\lceil\beta m_{i}\right\rceil\) is used to _loosely_ incorporate the temporal contiguity constraint. In the earlier experiments, \(\beta\) was set to \(0.2\) (\(20\%\) of the video length). However, in practice, the number of anomalous segments in a video may vary widely and would not be known in advance. The sensitivity of the proposed method to the value of \(\beta\) is shown in Figure 5. These results indicate that our method is quite robust to changes in \(\beta\).

Figure 5: Sensitivity of C2FPL framework to parameter \(\beta\).

**Fixed number of segments**: We hypothesize that not compressing the videos at test/train time, as commonly done in the existing literature [19, 26], is beneficial for the overall anomaly detection performance. To validate this hypothesis, we experiment with compressing each video into a fixed number of segments \(m=32\) before applying the proposed method. With such compression, the performance of our method drops to \(77.70\%\) and \(78.08\%\) for the UCF-Crime and XD-Violence datasets, respectively. This justifies our choice of not using any compression.

### Computational Complexity Analysis

Apart from feature extraction, the proposed C2FPL training method requires a few invocations of the GMM clustering subroutine, a single round of Gaussian distribution fitting, and training of the segment-level anomaly detector \(\tilde{\mathcal{A}}_{\theta}\). Since GMM clustering is performed at the video level on 2D data, the computational cost of the two-stage pseudo-label generator is insignificant (\(0.6\) seconds) compared to that of the anomaly detector training (\(60\) seconds per epoch). As seen in Figure 2, the architecture of \(\tilde{\mathcal{A}}_{\theta}\) is fairly simple with only \(2.13\)M parameters, which is significantly lower than all SOTA methods except Sultani _et al_. [19], as shown in Table 4. It may be noted that, despite having fewer parameters, the WS variant of our approach outperforms almost all the other methods on both datasets (Tables 1 & 2). The only exception is S3R, which has \(0.5\%\) higher AUC compared to our approach, while having over \(71\)M more parameters than our method. During inference, our method achieves 70 frames per second (fps) on an NVIDIA RTX A6000, which is almost double the rate required for real-time applications. This indicates that our system can achieve good real-time detection in real-world scenarios.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|} \hline Method & Sultani _et al_. [19] & & & S3R [30] & C2FPL (Ours) \\ \hline \hline Params & 1.07M & 6.5M & 24.72M & 73.5M & 2.13M \\ \hline \end{tabular} \end{table} Table 4: Number of trainable parameters of the proposed approach in comparison with some existing methods. Our approach achieves good performance with significantly fewer parameters.

## 5 Conclusion

Unsupervised video anomaly detection (US-VAD) methods are highly useful in real-world applications as a complete system can be trained without any annotation or human intervention. In this work, we propose a US-VAD approach based on a two-stage pseudo-label generator that facilitates the training of a segment-level anomaly detector.
Extensive experiments conducted on two large-scale datasets, XD-Violence and UCF-Crime, demonstrate that the proposed approach can successfully reduce the gap between unsupervised and supervised approaches.
2303.10007
Gyroid-like metamaterials: Topology optimization and Deep Learning
Triply periodic minimal surface (TPMS) metamaterials characterized by mathematically-controlled topologies exhibit better mechanical properties compared to uniform structures. The unit cell topology of such metamaterials can be further optimized to improve a desired mechanical property for a specific application. However, such inverse design involves multiple costly 3D finite element analyses in topology optimization and hence has not been attempted. Data-driven models have recently gained popularity as surrogate models in the geometrical design of metamaterials. Gyroid-like unit cells are designed using a novel voxel algorithm, a homogenization-based topology optimization, and a Heaviside filter to attain optimized densities of 0-1 configuration. Few optimization data are used as input-output for supervised learning of the topology optimization process from a 3D CNN model. These models could then be used to instantaneously predict the optimized unit cell geometry for any topology parameters, thus alleviating the need to run any topology optimization for future design. The high accuracy of the model was demonstrated by a low mean square error metric and a high dice coefficient metric. This accelerated design of 3D metamaterials opens the possibility of designing any computationally costly problems involving complex geometry of metamaterials with multi-objective properties or multi-scale applications.
Asha Viswanath, Diab W Abueidda, Mohamad Modrek, Kamran A Khan, Seid Koric, Rashid K. Abu Al-Rub
2023-03-17T14:30:26Z
http://arxiv.org/abs/2303.10007v1
# Gyroid-like metamaterials: Topology optimization and Deep Learning ###### Abstract Triply periodic minimal surface (TPMS) metamaterials characterized by mathematically-controlled topologies exhibit better mechanical properties compared to uniform structures. The unit cell topology of such metamaterials can be further optimized to improve a desired mechanical property for a specific application. However, such inverse design involves multiple costly 3D finite element analyses in topology optimization and hence has not been attempted. Data-driven models have recently gained popularity as surrogate models in the geometrical design of metamaterials. In this paper, we build a deep learning based surrogate model for the topology optimization of a Schoen's Gyroid TPMS unit cell to obtain the optimal 3D TPMS unit cell topology for desired properties without requiring intensive computation. Gyroid-like unit cells are designed using a novel voxel algorithm, a homogenization-based topology optimization, and a Heaviside filter to attain optimized densities of 0-1 configuration. Few optimization data are used as input-output for supervised learning of the topology optimization process from a 3D CNN model. These models could then be used to instantaneously predict the optimized unit cell geometry for any topology parameters, thus alleviating the need to run any topology optimization for future design. The high accuracy of the model was demonstrated by a low mean square error metric and a high dice coefficient metric. This accelerated design of 3D metamaterials opens the possibility of designing any computationally costly problems involving complex geometry of metamaterials with multi-objective properties or multi-scale applications. **Keywords**: Metamaterials, Triply periodic minimal surface, Gyroid, Homogenization, Topology optimization, Deep Learning, Surrogate model. ## 1 Introduction Metamaterials have emerged in the recent past as a 'holy grail' to material scientists as they showed abundant possibilities in their physical properties and the versatility in the fields of applications (mechanical, thermal, acoustic, optical, electromagnetics, biomedical to name a few [1],[2],[3],[4],[5]). Their mechanical properties studied by engineers showed colossal promise as their unique architectures, which could be tailored to any desired geometry, enhanced the properties of the structure beyond the capabilities of the material [6],[7],[8]. The attractive feature was that their extreme properties could be topologically controlled. The microstructure of the base unit of these materials, referred to as the representative unit cell (RUC), determines their mechanical and physical properties [9]. The design of the RUC of metamaterials satisfying some desired properties is called the 'inverse design' problem and has been performed through experiments and/or topology optimization (TO) [10],[11]. TO aims to obtain optimal layouts of the microstructure for a desired objective function of a metamaterial such as maximizing the bulk/shear moduli or minimizing Poisson's ratio, subject to constraints, such as volume constraint [12]. This area of research has been extensively studied in 2D [13],[14] and 3D microstructures and MATLAB codes are also available for the same [15]. An initial design of the microstructure may or may not be used. 
Some of the initial designs used in the literature consist of simple designs with a hole at the center or a few distributed voids, which after topology optimization, give new topologies satisfying the desired objective [14]. Triply periodic minimal surface (TPMS), a concept from differential geometry, is one of the topologies adopted for the RUC of micro-structured materials. These surfaces minimize the surface area locally for a given boundary and possess the property of the mean curvature being zero at every point on the surface [16]. They divide the unit cell domain into two or more non-intersecting domains. What makes them attractive is their fascinating topologies, when repeated periodically in 3D. TPMS can be mathematically-controlled and exhibit some unique properties, such as a large surface area to volume ratio [17]. The advances in the manufacturing industry, like the use of additive manufacturing [18], also facilitated their fabrication, which was previously a major inhibition in their usage with traditional methods. Many research works have dealt with designs of the TPMS structures based on experimental studies on its properties due to its geometry [19],[20],[21],[22],[23],[24]. This work deals with a specific TPMS of the Gyroid structure. Gyroid TPMS structure is used in various applications including orthopedic implants due to its efficient load transfer along with continuous filling of the void space [25] and catalytic converters due to efficient heat transfer through void space [26]. Gyroid-structure is also found in nature in soap films [27] and butterfly wings [28]. In this work, we propose a novel method of designing 'Gyroid-like' unit cells for a desirable mechanical property subject to boundary conditions and a volume constraint using TO. Here, we start with the Gyroid structure as our initial design, then optimize it for a specific objective function yielding a Gyroid-like structure but with optimized properties. The novel approach discussed in this work captures the surface geometry of TPMS in a voxel form, and when subjected to TO, it renders a design similar to Gyroid but which may not possess the property of mean curvature being zero at all points. In other words, an initial design of a voxelized Gyroid isosurface is subjected to TO to obtain a Gyroid-like final structure with improved material distribution satisfying the desired objective and the volume constraints. The major challenge faced during 3D unit-cell design using the above approach is the computational time taken for TO, which exponentially increases with the number of finite elements or the mesh size (number of voxels in this study) of the unit cell. For example, a mesh of 32 elements in all three dimensions takes around 67seconds on a Workstation for a single iteration of the optimization process, which may take around 200-800 iterations to converge! To alleviate this cost of computations, we search for an alternative model to the optimization process that can use information from a few optimization runs and can consequently be used as a computationally cheap alternative for unit cell design. Recently machine learning models have emerged as surrogate models to ease the computationally intensive design and make possible the design even on a laptop. Among them, many references in literature on deep learning models ([29],[30],[31][32],[33],[34],[35],[36],[37],[38],[39]) inspired the authors to use them in this context of TO for designing TPMS based metamaterials. 
Deep learning models based on 2D convolutional neural networks (CNN) have been used in literature for this purpose in 2D unit cell generation [40], [41],[42]. CNNs are found to be robust in image recognition tasks, and this advantage of CNN is exploited for quantitatively predicting mechanical properties of composite structures over the entire volume fraction space by using checkerboard composites as image inputs to CNN [42]. Inspired by these works, we extend these CNN-based models to predict 3D unit cell TO design. Optimizing a TPMS geometry for designing the unit cell of metamaterials using TO to attain the desired objectives has not been attempted in literature other than by the authors themselves [43]. This study improves from this previous work by authors in two ways - 1) previous work dealt only with a single topology optimization parameter of volume fraction and objective function of bulk modulus and 2) The optimal densities did not follow a 0-1 configuration and hence not learned well by the CNN algorithm thereby showing high mean square error for the CNN model. This study focuses on this gap in the previous work and proposes the potential applications of using such accelerated 3D TO for unit cell design of Gyroid TPMS. The structure of the paper is as follows: Section 2 explains the novel methodology of generation of Gyroid-like structures for unit cell geometry and design using 3D homogenization based TO. Section 3 elaborates the surrogate deep learning model describing the data generation procedure and the architecture of the 3D CNN network used. Section 4 lists the different errors encountered in various approximations in this study and how they are accounted for. Section 5 discusses the results obtained from the proposed model. The last section summarizes the insights gained from this model and future directions in this research. ## 2 Methodology ### Voxel-based architecture of Gyroid TPMS The generation of the Gyroid microstructure is first discussed. Figure 1a shows the isosurface of a Gyroid structure along with its unit cell and periodic structure (Figure 1b). This surface is generated from the level set approximation equation \[\sin\frac{2\pi x}{L_{x}}\cos\frac{2\pi y}{L_{y}}+\sin\frac{2\pi y}{L_{y}}\cos\frac{ 2\pi z}{L_{z}}+\sin\frac{2\pi z}{L_{z}}\cos\frac{2\pi x}{L_{x}}=c \tag{1}\] where \(c\) denotes level set value which can be a constant or a function of \(x\), \(y\), and \(z\). \(L_{x}\), \(L_{y}\), and \(L_{z}\) are unit cell lengths in the three directions. The isosurface generated from Equations (1) with \(c=0\) is passed to a voxel generation algorithm [44] by passing the vertices' and edges' information of the isosurface. The voxelized RUC (shown in Figure 1c) is generated with a mesh size of 32 voxels in each direction. Each voxel is given a value of 1 (black) if any part of the isosurface edges (obtained from isosurface information) passes through that voxel, else given the value zero (white) to indicate the void space inside the Gyroid. The thin isosurface is thus thickened by the voxel algorithm due to the crisscross connections of the edges-vertices defining the isosurface. The major 2D slices shown in Figure 2 help visualize how the curved interior surfaces of the Gyroid are captured by the voxels. The voxelized RUC has a relative density of 58.7%, obtained by calculating the number of black voxels divided by the total number of voxels (32 x 32 x 32). 
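The voxelization used in the paper operates on the vertex and edge data of the extracted isosurface [44]; purely as an illustration of the idea, the following numpy sketch samples the level-set function of Equation (1) at voxel corners and marks a voxel as solid when the \(c=0\) surface crosses it (a sign change among its corners). The resulting relative density will differ from the 58.7% reported for the edge-based algorithm, since no thickening is applied here.

```python
import numpy as np

def gyroid(x, y, z, L=1.0):
    """Level-set function of Eq. (1) for a cubic unit cell of side L."""
    k = 2 * np.pi / L
    return (np.sin(k * x) * np.cos(k * y)
            + np.sin(k * y) * np.cos(k * z)
            + np.sin(k * z) * np.cos(k * x))

def voxelize(n=32, L=1.0, c=0.0):
    """Return an (n, n, n) 0/1 array; a voxel is solid (1) if the c-level set
    changes sign among the voxel's eight corner samples, i.e. the surface crosses it."""
    edges = np.linspace(0.0, L, n + 1)
    X, Y, Z = np.meshgrid(edges, edges, edges, indexing="ij")
    phi = gyroid(X, Y, Z, L) - c                       # sampled at voxel corners
    corners = np.stack([phi[i:i + n, j:j + n, k:k + n]
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    return ((corners.min(axis=0) <= 0) & (corners.max(axis=0) >= 0)).astype(int)

rho = voxelize()
print("relative density:", rho.mean())  # fraction of solid voxels in the 32^3 RUC
```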
Using this Gyroid microstructure as the initial design, we can design the optimal Gyroid RUC using a 3D homogenization-based TO approach to maximize either the bulk or the shear modulus [15]. The broad advantage of using such a voxelized discretization of a smooth surface will eventually be clearer when the concept of CNN is introduced for learning RUC with such a geometry.

Figure 1: a) Isosurface of the Gyroid with \(c=0\), b) RUC of a Gyroid along with the periodic structure [4], c) voxelized Gyroid RUC

### 3D Homogenization-based TO

In this study, we employed a homogenization-based TO approach of microstructure design to design TPMS metamaterials optimized for either maximum bulk or shear modulus [15]. The homogenization method in periodic cellular materials or composites calculates their effective properties [45] using their RUC applying periodic boundary conditions. This effective property of the RUC is then used in the TO algorithm, which maximizes or minimizes a desired objective function. The TO algorithm used for the homogenized RUC is the density-based solid isotropic material penalization (SIMP) approach [46], [47], as the proposed voxelized geometry of the unit cell facilitates the calculation of densities with each voxel acting as a finite element in the SIMP approach. The energy-based homogenization method is briefly discussed here. Given a volume of a unit cell, \(|Y|\), the homogenized stiffness tensor \(E^{H}_{ijkl}\) is given by the volume integrand \[E^{H}_{ijkl}=\frac{1}{|Y|}\int_{Y}\!\!E_{pqrs}\,\varepsilon^{A(kl)}_{rs}\varepsilon^{A(ij)}_{pq}dY \tag{2}\] where \(\varepsilon^{A(ij)}_{pq}=\varepsilon^{o(ij)}_{pq}-\varepsilon^{*(ij)}_{pq}\), \(E_{pqrs}\) represents the local stiffness tensor, \(\varepsilon^{o(ij)}_{pq}\) denotes the initial macroscopic strain fields, and \(\varepsilon^{*(ij)}_{pq}\) denotes locally varying strain fields. In the case of 3D, there are six prescribed unit test strains \(\varepsilon^{o(ij)}_{pq}\) corresponding to the independent test strains \((1,0,0,0,0,0)^{\mathrm{T}}\), \((0,1,0,0,0,0)^{\mathrm{T}}\), \((0,0,1,0,0,0)^{\mathrm{T}}\), \((0,0,0,1,0,0)^{\mathrm{T}}\), \((0,0,0,0,1,0)^{\mathrm{T}}\), and \((0,0,0,0,0,1)^{\mathrm{T}}\). When these unit test strains act on the unit cell, the equilibrium equation with periodic boundary conditions is solved for the unit cell to obtain the unknown strain fields \(\varepsilon^{*(ij)}_{pq}\)[15]. The RUC is divided into \(N\) finite elements with 6 x 6 element stiffness matrices \(\mathbf{k}_{e}\) and \(\mathbf{u}^{A(ij)}_{e}\) being the element displacements corresponding to \(\varepsilon^{o(ij)}\).
Hence, the finite element summation of the integrand in Equation (2) is written in terms of \(\mathbf{k}_{e}\) and \(\mathbf{u}_{e}\), in turn expressed in terms of the element mutual energies \(Q^{e}_{ijkl}\) [10] as follows: \[E^{H}_{ijkl}=\frac{1}{|Y|}\sum_{e=1}^{N}\left(\mathbf{u}^{A(ij)}_{e}\right)^{T}\mathbf{k}_{e}\mathbf{u}^{A(kl)}_{e}=\frac{1}{|Y|}\sum_{e=1}^{N}Q^{e}_{ijkl} \tag{3}\] The expanded form of this homogenized stiffness tensor [15] is \[\begin{bmatrix}E^{H}_{1111}&E^{H}_{1122}&E^{H}_{1133}&E^{H}_{1112}&E^{H}_{1123}&E^{H}_{1131}\\ E^{H}_{2211}&E^{H}_{2222}&E^{H}_{2233}&E^{H}_{2212}&E^{H}_{2223}&E^{H}_{2231}\\ E^{H}_{3311}&E^{H}_{3322}&E^{H}_{3333}&E^{H}_{3312}&E^{H}_{3323}&E^{H}_{3331}\\ E^{H}_{1211}&E^{H}_{1222}&E^{H}_{1233}&E^{H}_{1212}&E^{H}_{1223}&E^{H}_{1231}\\ E^{H}_{2311}&E^{H}_{2322}&E^{H}_{2333}&E^{H}_{2312}&E^{H}_{2323}&E^{H}_{2331}\\ E^{H}_{3111}&E^{H}_{3122}&E^{H}_{3133}&E^{H}_{3112}&E^{H}_{3123}&E^{H}_{3131}\\ \end{bmatrix} \tag{4}\] The SIMP algorithm in TO is then performed on the homogenized RUC. The element density \(\rho_{e}\in[0,1]\) of each finite element is the design variable, and the element Young's modulus constituting \(\mathbf{k}_{e}\) in terms of the densities is \[E_{e}(\rho_{e})=E_{min}+(E_{o}-E_{min})\rho_{e}^{p} \tag{5}\] where \(E_{o}=1\) GPa is the solid element Young's modulus and \(E_{min}=1e-9\) GPa is the void Young's modulus, introduced to prevent singularity in the stiffness matrix. The penalization factor \(p\) is taken here as 5.0. To avoid the numerical instabilities of mesh dependence and checkerboarding [48], a density filtering approach is adopted which uses filtered densities \(\mathbf{\rho}\) calculated from pseudo densities \(\mathbf{\eta}\) for the optimization. The relations between the densities are given below [49]: \[\mathbf{\rho}=\overline{\mathbf{W}}\mathbf{\eta},\qquad w_{ij}=\max\left(0,r_{min}-\left\|\mathbf{X}_{i}-\mathbf{X}_{j}\right\|_{2}\right),\qquad\overline{w}_{ij}=\frac{1}{\sum_{k=1}^{N_{r_{min}}}w_{ik}}\,w_{ij} \tag{6}\] where \(r_{min}\) is the filter radius and \(\overline{w}_{ij}\) is the normalized weight coefficient forming the normalized matrix \(\overline{\mathbf{W}}\). \(\mathbf{X}_{i}\) denotes the centroid coordinates of element \(i\). The optimization problem can now be stated as \[\begin{aligned}\max_{\mathbf{\rho}}:\quad&f\left(E_{ijkl}^{H}(\mathbf{\rho})\right)\\ \text{such that}:\quad&\mathbf{K}\mathbf{U}^{A(ij)}=\mathbf{F}^{A(ij)}\\ &V(\mathbf{\eta})-V_{f}\leq 0\\ &0\leq\rho_{e}\leq 1\end{aligned} \tag{7}\] where \(\mathbf{K}\) is the global stiffness matrix, \(\mathbf{U}^{A(ij)}\) are the global displacements corresponding to strain case \((ij)\), \(\mathbf{F}^{A(ij)}\) are the external force vectors, and \(V(\mathbf{\eta})\) is the volume fraction obtained by dividing the element volumes by the total volume of the domain; it is not to exceed \(V_{f}\), a prescribed limiting value. The objective function used in this work is maximizing the bulk modulus or the shear modulus, which are given by [15] \[f_{b}\big{(}E^{H}_{ijkl}(\mathbf{\rho})\big{)}=\sum_{i,j=1}^{3}E^{H}_{iijj} \tag{8}\] \[f_{s}\big{(}E^{H}_{ijkl}(\mathbf{\rho})\big{)}=\sum_{i,j,k,l=1}^{3}E^{H}_{ijkl}\quad(i\neq j\ \&\ k\neq l) \tag{9}\] The sensitivities are calculated using the adjoint method, expressed as \[\frac{\partial E^{H}_{ijkl}}{\partial\rho_{e}}=\frac{1}{|Y|}p\rho_{e}^{p-1}(E_{o}-E_{min})\left(\mathbf{u}_{e}^{A(ij)}\right)^{T}\mathbf{k}_{o}\mathbf{u}_{e}^{A(kl)} \tag{10}\] where \(\mathbf{k}_{o}\) is the element stiffness matrix.
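For reference, a small sketch of the main ingredients of Eqs. (5) and (8)-(10), assuming the 6x6 homogenized stiffness matrix is ordered as in Eq. (4), so that the bulk objective is the sum of the upper-left 3x3 block and the shear objective the sum of the lower-right 3x3 block; the function names and this block interpretation are our assumptions.

```python
import numpy as np

def simp_modulus(rho, E0=1.0, Emin=1e-9, p=5.0):
    """Eq. (5): SIMP interpolation of the element Young's modulus."""
    return Emin + (E0 - Emin) * rho ** p

def bulk_objective(EH):
    """Eq. (8): sum of E^H_iijj over i, j = 1..3, i.e. the upper-left 3x3 block
    of the 6x6 homogenized stiffness matrix EH ordered as in Eq. (4)."""
    return EH[:3, :3].sum()

def shear_objective(EH):
    """Eq. (9): sum of the shear-related entries (indices with i != j and k != l),
    i.e. the lower-right 3x3 block of EH."""
    return EH[3:, 3:].sum()

def dEH_drho(rho, q_solid, vol_Y=1.0, E0=1.0, Emin=1e-9, p=5.0):
    """Eq. (10): sensitivity of E^H_ijkl w.r.t. one element density, where
    q_solid = u_e^{A(ij)T} k_o u_e^{A(kl)} is the element mutual energy
    evaluated with the solid-material element stiffness k_o."""
    return p * rho ** (p - 1) * (E0 - Emin) * q_solid / vol_Y
```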
The densities can be made to take a 0 or 1 solution with the use of the Heaviside filter [50]. This can be particularly useful while dealing with machine learning methods later in the work, as a 0-1 morphology is easier to learn than one with intermediate densities. This filtering introduces a Heaviside step function into the density filter using the following smooth function, such that, in the limit, the physical density equals 1 if \(\rho_{\mathrm{e}}>0\) and zero if \(\rho_{\mathrm{e}}=0\): \[\mathbf{\rho}^{H}=1-e^{-\beta\mathbf{\rho}}+\mathbf{\rho}e^{-\beta} \tag{11}\] Here, the parameter \(\beta\) controls the smoothness of the approximation. When \(\beta=0\), Equation (11) is similar to Equation (6), and as \(\beta\) tends to infinity, the approximation approaches a true Heaviside step function. To avoid local minima and to ensure differentiability in the optimization, a continuation scheme is used to increase \(\beta\) gradually from 1 to 512, doubling it every 50 iterations or when the change in the design variables between two consecutive designs becomes less than 0.01. The 2D slices of voxels corresponding to the optimized geometry are shown in Figure 3. The voxelized Gyroid RUCs are smoothened using the top3d app software [51], and the results for varying relative densities are displayed in Figure 4.

Figure 4: Optimized and smoothened voxelized Gyroid RUC for various relative densities

## 3 Surrogate Deep Learning Model

The homogenization-based TO of the voxelized Gyroid RUC described in the previous section has one lacuna: its computational time. This is indeed one of the major challenges in any 3D unit-cell design; the computational time exponentially grows with the mesh size (number of voxels in this study) of the unit cell. When the objective becomes time-consuming, the 'curse of dimensionality' sets in [52], and it becomes essential to seek alternative ways of determining the objective functions. The method of surrogate modeling [53] appeals in such situations, when cheaper alternatives can be employed to perform the objective function evaluations. These models can learn from the information provided from a few optimization runs to replicate the process and consequently be used as a computationally cheap alternative for optimizing the unit cell design. Recently, data-driven models have proved effective surrogate models to ease such computationally intensive design through the process of the training-learning algorithm. Among the vast literature on such data-driven models, in this study, we chose the CNN-based model, as this class of deep neural networks has proved very successful in image recognition, where images are in the form of pixels in 2D and voxels in 3D. Hence, the broad purpose of Gyroid RUC generation through the voxel algorithm now becomes more meaningful. This section details the building of a deep learning based model as a surrogate for the topology optimization of the Gyroid microstructure given any volume fraction and filtering radius. The model will predict the optimal 3D Gyroid unit cell, which possesses the maximum bulk/shear modulus for the specified volume fraction and filtering radius, without the need for any traditional topology optimization. This is achieved by training the deep learning CNN model with a few optimized topologies corresponding to different random volume fractions and filtering radii. However, this training requires data to be generated through many topology optimization runs, which is the cost paid for alleviating topology optimization runs later for design.
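Referring back to the Heaviside projection of Equation (11) and the \(\beta\)-continuation scheme described in Section 2.2, a minimal sketch is given below; the update step inside the loop is only a placeholder, while the schedule values (1 to 512, doubling every 50 iterations) follow the text.

```python
import numpy as np

def heaviside(rho, beta):
    """Eq. (11): smoothed Heaviside projection of the filtered densities."""
    return 1.0 - np.exp(-beta * rho) + rho * np.exp(-beta)

# Continuation scheme: beta is doubled from 1 up to 512 every 50 iterations
# (or earlier, when the change between consecutive designs falls below 0.01).
beta, max_beta = 1.0, 512.0
for it in range(1, 1001):
    # ... one topology optimization step producing filtered densities and a
    # design-change measure would go here ...
    if it % 50 == 0:            # or: if the design change drops below 0.01
        beta = min(2.0 * beta, max_beta)
```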
**3.1 Design of Experiment (DoE) for Data generation** The flowchart of the workflow is shown in Figure 5. The data required is computationally generated from MATLAB runs of the code containing 3D topology optimization of homogenized properties as described in previous section. As the flowchart indicates, first the isosurface of the Gyroid is generated from Equation (1). Here, we used the value of \(c\)=0. The isosurface is then voxelized by discretizing the unit cell into 3D finite elements (each element called a voxel), and assigning a density of 1 to each voxel if the isosurface is passing through the voxel and 0 if voxel does not have any part of the isosurface. These voxel densities are used as an initial design for the topology optimization problem, where the bulk or shear modulus is maximized. Two optimization parameters are studied here - volume fraction (V\({}_{\ell}\)) in the range of 25%-45% and a filtering radius (r\({}_{\min}\)) of the optimization in range of 1.2- 2.5 cm. These parameters are chosen based on previous 2D metamaerial topology optimization studies [40]. To generate data, these two factors are designed in a factorial design and datapoints generated for each pair of values as shown in table 1. The table can be read as the number of the datapoint in center and corresponding value of volume fraction on left side and filtering radius on top of any selected datapoint number. For example, data point 1 has a volume fraction 25% and filter radius of 1.2 cm and so on. Two such tables are created for both the bulk modulus and shear modulus maximization objective. Figure 5: Flowchart of the data generation and prediction process Figure 6: One sample data point -voxels for bulk modulus maximization ID, volume fraction of 20% and filter radius of 1.5 as inputs and corresponding optimized densities as output The finite element mesh of 32x32x32 is chosen for the unit cell dimension of 1cm x1cm x1cm. The choice of mesh will be discussed in detail in Section 4. One sample datapoint is shown in Figure 6 for illustration of how input-output voxel looks like for one set of values of parameters. The choice of the range of optimization parameters is made on the basis that the volume fraction of interest in cellular solids is in range of 25% to 45% while the filtering radius is chosen with a study of different values from 1 to 10. Figure 7 shows some of the shapes of topology optimized result for 40% volume fraction with varying r\({}_{\text{min}}\). We found that large radius filter values lead to reduced effective properties; hence, the maximum filter radius we consider in this study is 2.5 cm. Also the value of r\({}_{\text{min}}=\)1 gives a checkerboard pattern [48] and hence the limits were fixed at 1.2cm to 2.5cm. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} & \multicolumn{6}{c}{Filter radius(cm)} \\ \cline{3-8} & & 1.20 & 1.21 & 1.22 & 1.23 &... & 2.50 \\ \cline{2-8} & **25** & 1 & 2 & 3 & 4 &... & 131 \\ \cline{2-8} & **26** & 132 & 133 & 134 & 135 &... & 262 \\ \cline{2-8} & \(\cdot\) & \(\cdot\) & \(\cdot\) & \(\cdot\) & \(\cdot\) &.... & \(\cdot\) \\ \cline{2-8} & \(\cdot\) & \(\cdot\) & \(\cdot\) & \(\cdot\) & \(\cdot\) &.... & \(\cdot\) \\ \cline{2-8} & **45** & 2621 & 2622 & 2623 & 2624 &... 
& 2751 \\ \end{tabular} \end{table} Table 1: Datapoints table indicating the values of volume fraction, V\({}_{\text{f}}\) and filter radius r\({}_{\text{min}}\) for each datapoint Figure 7: Optimized topologies for volume fraction of 40% for different filtering radii of (a) 3cm, (b) 5cm and (c) 8cm ### Network architecture We extend the CNN model employed to predict optimized 2D metamaterials in a previous work [40] to 3D metamaterials. An encoder-decoder network proposed by ResUnet [54] is used for the model, which is a semantic segmentation neural network taking advantage of both residual learning and U-net [55]. This makes the network include both their strengths. This gives us the motivation to use our pixel-based geometry for learning the property and the related Gyroid RUC geometry (densities) such that for any desired property the model predicts the geometry. The architecture (shown in Figure 8(a)) is similar to a U-Net (called so due to the U-shape of the blocks) with residual blocks instead of neural units as its building block and hence referred to as ResUNet. The architecture can be divided into the encoder part, which encodes the input images into a low-dimensional representation by a series of convolution layers, the decoder which receives the encoded images from the third bridge part, connecting the encoder to the decoder, and constructs back the RUC. The concatenation feature, shown by dotted lines in Figure 8, improves the segmentation accuracy. ResUNet uses batch normalizations (BN), rectified linear units (ReLU), and convolutional layers (Conv), whereas U-Net uses only ReLU and Conv in the building block. Four blocks of Encoder and Decoders are used, and each building block is shown in Figure 8(b). The advantage of this ResUnet over U-net is the concatenation links between the encoder and decoder which helps in preserving features [27]. The values of topology optimization parameters of V\({}_{\text{f}}\) and r\({}_{\text{min}}\) and an identifier (ID) for the desired mechanical property (here 1 for maximum bulk modulus and 2 for maximum shear modulus) are converted into 3D matrices (images) assigning same value to all voxels illustrated in Figure 6. The CNN model takes these input 3D images of V\({}_{\text{f}}\), r\({}_{\text{min}}\) and ID, along with the output 3D image of corresponding topology optimized densities and uses this information to train its weights. Once the training phase is complete, the CNN model is now ready to predict the desired microstructure topology corresponding to any property given to it, as shown in Figure 9. Figure 8: a) ResUNet architecture. The filter sizes are written over each filter(black) while changes in 3D input(blue) is written on sides. b) residual block The dataset generated consists of 2751 datapoints, each datapoint containing an identifier for objective, the volume fraction, filter radius and optimized topology (see Figure 5). The computations were time-consuming and were performed on the IBM HPC with hardware specifications: two 12 core Intel Xeon E5-2695 v2 (Ivy Bridge) CPU, two NVIDIA K20M GPUs, and 264 GB main memory and also on iForge HPC cluster hosted at the National Center for Supercomputing Applications (NCSA) consisting of Intel/Skylake nodes, each with 40 cores and 192 GB of RAM, and a couple of nodes equipped with NVIDIA v100 GPU cards. The CNN model is developed using Keras with Tensorflow backend [19]. The hyper parameters used are: batch-size - 128, learning rate - 0.001, Adam optimizer and 150 epochs. 
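A reduced Keras sketch in the spirit of the ResUNet described above, taking the three constant input channels (V\({}_{\text{f}}\), r\({}_{\text{min}}\), ID) over a 32\({}^{3}\) grid and returning a single-channel density volume; the paper uses four encoder/decoder blocks and further details not reproduced here, so the filter counts, block depth, and MSE loss wiring are assumptions, while the Adam optimizer, learning rate, batch size, and epochs follow the stated hyperparameters.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def res_block(x, filters):
    """Residual building block: BN -> ReLU -> Conv3D, twice, plus a 1x1x1 skip path."""
    skip = layers.Conv3D(filters, 1, padding="same")(x)
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv3D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv3D(filters, 3, padding="same")(y)
    return layers.Add()([y, skip])

def build_resunet_3d(size=32, channels=3):
    """Encoder-decoder over 32^3 volumes with 3 input channels (Vf, r_min, ID)
    and a single-channel voxel-density volume as output."""
    inp = layers.Input((size, size, size, channels))
    e1 = res_block(inp, 16)
    e2 = res_block(layers.MaxPooling3D(2)(e1), 32)
    bridge = res_block(layers.MaxPooling3D(2)(e2), 64)
    d2 = res_block(layers.Concatenate()([layers.UpSampling3D(2)(bridge), e2]), 32)
    d1 = res_block(layers.Concatenate()([layers.UpSampling3D(2)(d2), e1]), 16)
    out = layers.Conv3D(1, 1, activation="sigmoid")(d1)   # densities in [0, 1]
    model = Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return model

model = build_resunet_3d()
# model.fit(X, Y, batch_size=128, epochs=150)  # X: (N,32,32,32,3), Y: (N,32,32,32,1)
```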
Usually, a large dataset is Figure 9: Training and Prediction phases of the proposed 3D CNN network required for a fast convergence but since the computational cost of topology optimization was high, we started with a small data size to analyze the result. The computational time taken for the entire process is shown in Table 2. For each data point generation on a core on a node of HPC, 2.4 hours are required which multiplied with 2751 data points would have been a herculean task. However, by modifying the MATLAB code using job arrays to split and generate all data points in parallel, as each TO run is independent of each other, data generation is split to 10 data points per MATLAB simulation requiring only 275 runs for the entire data generation. This is achieved with 200 runs on 5 nodes of 40 cores on iForge and 75 runs on 1 node of 26 cores on IBM, in total, taking only 24 hours for the complete data generation. The time for dataset generation on a personal computer and HPC is also compared in Figure 6. As indicated in the table, the deep learning training to calculate the weights and biases takes only 5.5 GPU hours. Once we properly train and validate the deep learning model, the prediction of topologies for new input parameters can be obtained accurately and almost instantly even on a laptop and without any modeling software. This is the greatest advantage of using surrogate deep learning models. \begin{table} \begin{tabular}{|c|c|c|} \hline Activity & CPU hours & GPU hours \\ \hline Data Generation (with 5 nodes of 40 cores on iForge and 1 node of 26 cores on IBM) & 24 & - \\ \hline Deep Learning Training & - & 5.5 \\ \hline Deep Learning Prediction & 0.001 & - \\ \hline \end{tabular} \end{table} Table 2: Computational time taken on HPC ### Model evaluation The CNN model was evaluated for its prediction against the ground truth using a mean square error (MSE) metric as the loss function of the model and the mean dice similarity coefficient (DSC) [28] for flattened 3D voxel [19]. The MSE measures how much the predicted topology deviates from the ground truth and smaller values of MSE are preferred. The DSC compares the predicted topology image with the ground truth topology image and gives the measure of how many voxels match in both. So, a higher value for DSC is preferred as that would suggest a higher match between ground truth and prediction. They are evaluated using the following expressions for \(M\) data points, \(T\) the ground truth segmentation of input channel \(I\) and \(O\) the CNN model segmentation, \[MSE=\frac{1}{M}\sum_{i=1}^{M}\|T(I_{i})-O(I_{i})\|^{2} \tag{12}\] \[DSC=\frac{1}{M}\sum_{i=1}^{M}\frac{2|O(I_{i})\cap T(I_{i})|}{|O(I_{i})|+|T(I_{ i})|} \tag{13}\] ## 4 Error Analysis Various approximations used during the modeling of the unit cell and its simulations introduce various errors into the model developed for the Gyroid RUC, and this will Figure 10: Data generation rates on Workstation versus HPC also affect the CNN modeling of the unit cell. Hence a detailed error analysis is carried out to study all these errors and suggest methods to minimize their effect on the surrogate modeling. ### Error in geometry modeling of isosurface The isosurface is created with different mesh points and number of mesh points can introduce the first discretization error. Figure 11 shows an isosurface with various mesh points, out of which surface is visually best captured by a minimum of 15 points. 
Mesh sizes of 5, 10, 15, 20, and 32 were used to generate surfaces from which voxelized cubes using 32 voxels (selection criteria discussed in the next section) in each of three directions were generated, and the relative densities of each of these 32 x32 x 32 cube were compared. The difference in relative densities converged after 15 mesh points, indicating the actual relative density of the thickened isosurface is captured. Thus, 15 was chosen as the mesh size to generate the isosurface of c=0, which will be voxelized for analysis. ### Error in voxelizing surface geometry The CNN modeling requires 3D input images with voxels chosen as powers of 2 - either 8, 16, 32, 64, and so on. Hence the voxel size of RUC (or finite element size) was chosen by performing the homogenization-based TO and choosing a mesh size beyond which there was convergence to the homogenized matrix and the compliance of the structure. Figure 11: Isosurfaces generated with a)5, b) 10, and c) 15 mesh points. Element size of 8 was discarded, as it was less than the mesh points of isosurface (15 chosen in the previous section). Among 16 and 32, since we are analyzing to obtain the best mechanical property, the value of bulk modulus and shear modulus can be studied with both the voxel sizes. The 32 finite element size gave lower compliance and higher bulk modulus and shear modulus value with 60% relative error from those with 16 finite element size, even though it was computationally expensive. Further examination with 64 mesh size showed no improvement in objective functions as those from 32, which was hence the final choice for the voxel size as it considerably improved the mechanical properties from smaller finite element sizes and was also twice the mesh points (15) used to generate the isosurface. ### Sampling Error A small sample size of 2751 data points is bound to create modeling errors in the CNN model, which requires a large dataset for training. This problem is foreseen while choosing data-driven modeling; hence, remedial measures of bootstrapping and cross-validation can resolve such issues. The mean square error metric indicates such lacunae and can be remedied if required. A full factorial DoE is suitable than a random set of data for such a case of low sample size, since we can make sure all the range of values of input parameters are represented in the dataset. ### Errors in fitting the CNN model To prevent the issue of overfitting or underfitting, the loss function (here the mean square error metric) and the dice coefficient metric are studied for both the training and validation set. For this the data is split into training set, testing set and a validation set. Low training set error shows there is no underfitting but a low validation set error is also required to show that overfitting has not occurred. Hence both these errors are monitored. In addition, low testing set error will show a low generalization error. ## 5 Results and Discussion The optimized topology dataset for objective functions of bulk modulus and shear modulus is shown in Figure 12. As the filtering radius increases, the objective function decreases, and with a very low filtering radius, the optimization did not converge for low volume fractions. Few topologies from the data sample corresponding to the maximum bulk objective are shown for different values of volume fraction and filter radius (2-6 in Figure 12). Topologies corresponding to a very low filter radius show hollow sections in the Gyroid (1in Figure 12). 
For example, for a 25% volume fraction, topology corresponding to a filter radius of 1.4cms for bulk modulus objective and 1.2 for shear modulus objective has hollow parts in their topology, which is smoothened out when the filter radius is 1.5cms. For maximizing objective functions, a filtering radius value of around 1.5cms was ideal for low volume fractions and 1.3 for higher volume fractions. The higher volume fraction led to higher objective values, as expected. Few combinations of volume fraction and filtering radii did not converge even. The gaps in the surface show values for which the topology optimization did not converge after 1000 iterations and hence were discarded from the dataset. They are indicated by gaps in the surface in the figure. Hence, out of 2751 datapoints, the final training dataset consists of 2597 datapoints maximized for bulk modulus and 2741 datapoints maximized for shear modulus. Each datapoint would include the identifier indicating the objective (maximizing shear or bulk), volume fraction value, the filtering radius value, and the 32x32x32 values of densities. Both the objective function datapoints were mixed and shuffled for the training dataset. The data was split into 90% training data, 5% validation data, and 5% testing data. The deep learning method used for learning the 3D Gyroid topologies optimized for maximum bulk modulus and maximum shear modulus is tested for its effectiveness. The measure of effectiveness is indicated by the loss function adopted for the model shown in Figure 13(a). The convergence of MSE occurs around 100 epochs, even with a small dataset. The mean DSC history (Figure 13(b)) also indicates a 95% match between predicted and ground truth topologies around 100 epochs. This was also possible partly because of the use of the Heaviside filter [50] in topology optimization which pushes the density values to either 0 or 1 and this has helped the CNN model learn the density image faster as either black or white rather than having intermediate densities. This improved the mean DSC of the dataset. Figure 13 also indicates that the difference between the validation and training error is very small indicating that there is Figure 12: Topology optimized objective function surfaces for different combinations of volume fractions and filter radii. no overfitting or underfitting. Few comparisons of ground truth and CNN predicted topologies obtained from the testing set, corresponding to filter radius of 1.2, 1.5, 1.8, 2.2, 2.5 for volume fractions 25%, 35% and 45% are shown in Figure 15 and Figure 16. As is visually noticed, the low filter radius for low volume fractions gave discontinuities in topologies which were not efficiently learnt by the CNN model while it performed exceptionally well for higher filtering radii for all volume fractions. For the testing set, the MSE was found to be 0.0079 showing a low generalization error. The mean deviation of volumes of predicted structures from the ground truth was evaluated for this test set and found to be 0.24%. Among this, the highest deviation of volume was shown by a structure with 29% volume fraction and 1.35 cm filter radius optimized for shear modulus with an absolute error in volume as 1.73%. The lowest deviation in volume showed by a 40% structure volume fraction and 1.41 cm filter radius optimized for bulk modulus having error as 7.6e-4%. 
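As a concrete illustration of how these numbers are obtained, the following minimal NumPy sketch evaluates Eq. (12), the mean DSC of Eq. (13) on binarized voxels, and the percent volume deviation of predictions from the ground truth; the function and variable names are illustrative, this is not the paper's released code, and note that deep-learning frameworks usually report the per-voxel MSE, which differs from Eq. (12) only by a constant factor.

```python
import numpy as np

def evaluate_predictions(truth, pred, threshold=0.5):
    """truth, pred: (M, 32, 32, 32) arrays of densities in [0, 1]."""
    M = truth.shape[0]
    t = truth.reshape(M, -1)
    o = pred.reshape(M, -1)

    # Eq. (12): mean squared deviation per data point
    mse = np.mean(np.sum((t - o) ** 2, axis=1))

    # Eq. (13): mean Dice similarity coefficient on binarized voxels
    tb, ob = t > threshold, o > threshold
    dsc = np.mean(2.0 * np.sum(tb & ob, axis=1)
                  / (np.sum(tb, axis=1) + np.sum(ob, axis=1)))

    # Percent deviation of the predicted volume fraction from the ground truth
    vol_dev = 100.0 * np.abs(o.mean(axis=1) - t.mean(axis=1)) / t.mean(axis=1)
    return mse, dsc, vol_dev.mean()
```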
For a better understanding of the matching images, the 1st, 8th, 16th, 23rd and 32nd 2D slices among the 32 slices of the 3D image are separately visualized for both objectives, for a 35% volume fraction and a filter radius of 2 cm, in Figure 16. Figure 13: a) MSE convergence and b) mean dice similarity coefficient, against number of epochs. Figure 14: Ground truth (left) and predicted (right) topologies for different combinations of volume fractions and filtering radii with maximum bulk modulus. Figure 15: Ground truth (left) and predicted (right) topologies for different combinations of volume fractions and filtering radii with maximum shear modulus. Figure 16: 1, 8, 16, 23 and 30 2D slice contours of densities in ground truth (left) and predicted (right) topologies for 35% volume fraction and filter radius of 2 cm for a) maximum bulk modulus and b) maximum shear modulus. ## 6 Applications The future applications of the conducted research further emphasize the significance of the work. The 3D TO of a large mesh size is time-consuming, and generating a dataset of many such TO runs is expensive. A broader purpose and application should therefore be the aim of such an exercise. The design of TPMS-based porous structures for lightweight mechanical structures, heat exchangers, and biomaterials is researched by studying the RUC design and then using the optimal unit cell to generate periodic macrostructures. Nevertheless, the structural and material optimization of the macrostructure may not be possible using an optimal microstructure alone. Towards this, an integrated topology optimization called concurrent TO, which optimizes the microstructure material distribution at the same time as the macrostructure properties are optimized, is widely studied [56, 57]. This multiscale TO method is found to be even more computationally intensive, on the order of the finite element mesh size of the microscale multiplied by the finite element mesh size of the macroscale. When a model is available that instantaneously determines the optimal microstructure corresponding to any desired mechanical property and TO parameters, it can be plugged into the macro-analysis of such structures to avoid a full concurrent topology optimization of the structure at the micro and macro scales. The concurrent TO involves the following processes [56]: \[\begin{split}&\text{Find: }\rho_{M}^{i},\ \rho_{m}^{j}\quad(i=1,2,\ldots,N_{M};\ j=1,2,\ldots,N_{m})\\ &\text{Min: }C(\rho_{M},\rho_{m})\\ &\text{such that: }\mathbf{K}(\mathbf{D}_{M})\mathbf{U}_{M}^{A(ij)}=\mathbf{F}_{M}^{A(ij)},\quad\mathbf{K}(\mathbf{D}_{m})\mathbf{U}_{m}^{A(ij)}=\mathbf{F}_{m}^{A(ij)}\\ &\phantom{\text{such that: }}V_{M}(\rho_{M})-V_{fM}\leq 0,\quad V_{m}(\rho_{m})-V_{fm}\leq 0\\ &\phantom{\text{such that: }}0\leq\rho_{M}^{i}\leq 1,\quad 0\leq\rho_{m}^{j}\leq 1\end{split} \tag{14}\] where \(C\) is the structural compliance, the index \(M\) refers to the macrostructure and \(m\) to the microstructure. \(\rho_{M},\rho_{m}\) are relative densities, and \(\mathbf{D}_{M}\) and \(\mathbf{D}_{m}\) are the stiffness tensors of the macrostructure and microstructure, calculated similarly to Eq. (5) as \[\begin{split} D_{M}&=[E_{min}+(E_{0}-E_{min})\rho_{M}^{p}]D^{H}\\ D_{m}&=[E_{min}+(E_{0}-E_{min})\rho_{m}^{p}]D^{0}\end{split} \tag{15}\] where \(\mathbf{D}^{0}\) is the constitutive matrix of the base material and \(\mathbf{D}^{H}\) is the homogenized stiffness tensor of the microstructure optimized by TO.
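For illustration, the interpolation in Eq. (15) reduces to a SIMP-style scaling of a stiffness matrix; the sketch below is a minimal NumPy version in which the default penalization exponent and variable names are assumptions rather than values taken from this work.

```python
import numpy as np

def simp_stiffness(rho, D, E0=1.0, Emin=1e-9, p=3):
    """Eq. (15): [Emin + (E0 - Emin) * rho**p] * D for a density rho and a 6x6 matrix D."""
    scale = Emin + (E0 - Emin) * np.power(rho, p)
    # For an array of element densities, return one scaled matrix per element
    return np.multiply.outer(scale, D) if np.ndim(rho) else scale * D

# D_M uses the homogenized tensor D^H of the optimized microstructure,
# while D_m uses the base material's constitutive matrix D^0.
```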
It is this TO which is modeled in our study with a CNN model and hence can be used as an alternate in the concurrent TO to reduce the overall computational cost of TO of entire macrostructure. In this study we calculate the bulk modulus and shear modulus from \(\mathbf{D}^{H}\) which is one of the input parameters. Instead, for the concurrent TO, the \(\mathbf{D}^{H}\) can be the desired property to be attained and the model built in the same procedure as detailed in this study. Hence a major savings in computational time is achieved on macrostructure design as the microstructure design is predicted by CNN model instantaneously in each iteration based on TO parameters. The similar applications where the macrostructure is analyzed with computationally intensive FE or CFD models for static [58] or dynamic analysis and which involves the property optimization of microstructure can also use the advantage of this model. The authors are extending this approach to such applications which is the future scope of this research. ## 7 Conclusion The paper introduces a 3D CNN-based model for topology optimization of Gyroid TPMS unit cells. Three novel ideas are presented in the paper - 1) A voxelized algorithm for unit cell design of the 3D Gyroid unit cells, 2) homogenization-based 3D TO to achieve maximum bulk modulus or shear modulus for the desired volume fraction and filtering radii of this microstructure and 3) 3D CNN for 3D TO. To alleviate the computational burden caused by time-consuming 3D TO, a 3D surrogate CNN model with an encoder-decoder type architecture, used in segmentation modeling, is used to learn the topology of the RUC. It was observed that the model could almost instantly imitate a similar pattern in the topology of the Gyroid with very few datapoints. Moreover, the model was robust in both the accuracy of prediction and prediction time. Hence this CNN model could be used effectively, even on a laptop, for performing quality TO, which is otherwise unthinkable even on a powerful workstation or cluster. This work shows promises in employing surrogate deep learning based models for a drastically accelerated unit cell design of 3D metamaterials involving computationally extensive TO, including multiscale metamaterial design. ## Acknowledgement The authors would like to thank the National Center for Supercomputing Applications (NCSA) Industry Program, the Center for Artificial Intelligence Innovation and also the Research Computing team at Khalifa University. This publication is based upon work supported by the Khalifa University under Awards No. RCII-2019-003.
2308.12562
Variational Information Pursuit with Large Language and Multimodal Models for Interpretable Predictions
Variational Information Pursuit (V-IP) is a framework for making interpretable predictions by design by sequentially selecting a short chain of task-relevant, user-defined and interpretable queries about the data that are most informative for the task. While this allows for built-in interpretability in predictive models, applying V-IP to any task requires data samples with dense concept-labeling by domain experts, limiting the application of V-IP to small-scale tasks where manual data annotation is feasible. In this work, we extend the V-IP framework with Foundational Models (FMs) to address this limitation. More specifically, we use a two-step process, by first leveraging Large Language Models (LLMs) to generate a sufficiently large candidate set of task-relevant interpretable concepts, then using Large Multimodal Models to annotate each data sample by semantic similarity with each concept in the generated concept set. While other interpretable-by-design frameworks such as Concept Bottleneck Models (CBMs) require an additional step of removing repetitive and non-discriminative concepts to have good interpretability and test performance, we mathematically and empirically justify that, with a sufficiently informative and task-relevant query (concept) set, the proposed FM+V-IP method does not require any type of concept filtering. In addition, we show that FM+V-IP with LLM generated concepts can achieve better test performance than V-IP with human annotated concepts, demonstrating the effectiveness of LLMs at generating efficient query sets. Finally, when compared to other interpretable-by-design frameworks such as CBMs, FM+V-IP can achieve competitive test performance using fewer number of concepts/queries in both cases with filtered or unfiltered concept sets.
Kwan Ho Ryan Chan, Aditya Chattopadhyay, Benjamin David Haeffele, Rene Vidal
2023-08-24T05:04:10Z
http://arxiv.org/abs/2308.12562v1
Variational Information Pursuit with Large Language and Multimodal Models for Interpretable Predictions ###### Abstract Variational Information Pursuit (V-IP) is a framework for making interpretable predictions by design by sequentially selecting a short chain of task-relevant, user-defined and interpretable queries about the data that are most informative for the task. While using queries related with semantic concepts allows for built-in interpretability in predictive models, applying V-IP to any task requires data samples with concept-labeling by domain experts, limiting the application of V-IP to small-scale tasks where manual data annotation is feasible. In this work, we extend the V-IP framework with Foundational Models (FMs) to address this limitation. More specifically, we use a two-step process, by first leveraging Large Language Models (LLMs) to generate a sufficiently large candidate set of task-relevant interpretable concepts, then using multimodal models to annotate each data sample by semantic similarity with each concept in the generated concept set. While other interpretable-by-design frameworks such as Concept Bottleneck Models (CBMs) require an additional step of removing repetitive and non-discriminative concepts to have good interpretability and test performance, we mathematically and empirically justify that, with a sufficiently informative and task-relevant query (concept) set, the proposed FM+V-IP method does not require any type of concept filtering. In addition, we show that FM+V-IP with LLM generated concepts can achieve better test performance than V-IP with human annotated concepts, demonstrating the effectiveness of LLMs at generating efficient query sets. Finally, when compared to other interpretable-by-design frameworks such as CBMs, FM+V-IP can achieve competitive test performance using fewer number of concepts/queries in both cases with filtered or unfiltered concept sets. ## 1 Introduction Interpretable-by-design methods incorporate interpretability by requiring model predictions to be solely dependent on (intermediate) representations that correspond to semantic concepts. The set of concepts and their annotations in data is often created and labeled by domain experts for the task. As a result, the prediction is made based on the composition of interpretable concepts that explains the model's underlying decision making. One line of research on interpretable-by-design frameworks is Variational Information Pursuit (V-IP): The user first defines a set of queries, which are concept-related functions of the data such as "Is the car red?". The model then makes a prediction by selecting a small number of informative queries that maximize the mutual information between the task and query-answers. Another line of work is Concept Bottleneck Models (CBMs) [16], for which inputs are first nonlinearly mapped to an interpretable concept space, then linearly mapped to the target's label space via sparse-promoting regularizers such as the \(\ell_{1}\)-norm so that each prediction becomes interpretable by only using a small number of concepts. An example of both frameworks is shown Figure 2. Although interpretable-by-design seems ideal, built-in interpretability is not cost-free: CBMs and V-IP have been traditionally limited to small- to medium- scale datasets. 
Applying interpretable-by-design frameworks to large-scale datasets such as ImageNet entails overcoming three main challenges: **Challenge 1: Generating a set of interpretable queries.** Interpretable-by-design frameworks require the user to first define a set of interpretable concepts that are sufficiently informative for the task. While good interpretable concepts are often created by domain experts for optimal interpretability, when the number of classes increases or when the task is too complex, manually generating interpretable and task-specific concept sets becomes challenging. To address this issue, we propose to use Large Language Models (LLMs) such as GPT-3 [2] as the machine expert in place of human domain experts for extending to tasks of any scale. Since LLMs are trained on a large amount of text, we operate on the premise that LLMs are machine experts that have learned discriminative information about the data and labels for any given task, thus they can produce large query sets from fairly simple prompts. **Challenge 2: Annotating every sample per concept.** Once a sufficiently informative, task-relevant, interpretable concept set is generated (either by LLMs or by humans), the concepts needs to be related to the data for interpretable predictions. For instance, in CUB-200 [33], each of the expert-defined 312 concepts is individually labelled for every one of the 11,788 images. However, in the case of a large-scale dataset, annotating every sample by hand for datasets with up to millions of samples is infeasible without any machine assistance. In this work, we propose to use CLIP, a Vision-Language Pre-trained model (VLPs) trained on a large number of image-text pairs, to annotate every sample in an efficient manner. CLIP computes a similarity score for each pair of image-text input, where a high score indicates the presence of a certain concept and vice versa. By taking advantage of VLPs, we allow faster data annotations that enables interpretable-by-design frameworks for large-scale data. **Challenge 3: The need for concept filtering.** With a suitable query set generated by LLMs and per-sample annotated data from VLPs, theoretically one should be able to scale interpretable-by-design frameworks to data and tasks of any scale. Nonetheless, recent works such as Label-free CBMs (Lf-CBM) [24] show that filtering out concepts from the initial concept set (also known as candidate set) is needed in order to achieve good predictive performance and interpretability. Concepts that are too similar to other concepts semantically, for example, should be removed since one of them represents redundant information. Nonetheless, there is no principled way to judge what ad-hoc concept filtering methods should or should not be used in the final concept set. In this work, we argue that the formulation of V-IP can perform interpretable predictions _without_ the need to perform any concept filtering: Computing the mutual information for the task and next most informative query naturally disregards uninformative queries that would have been filtered out. **Paper contributions.** The main contributions of this work are the following: 1. We extend V-IP to large query sets (300 to 45K concepts) and large visual datasets (50K to 1M images) by using Foundational Models to generate and answer queries. We also empirically show that LLMs and VLPs are capable of generating informative query sets for interpretable predictions. 2. 
We demonstrate that V-IP can perform interpretable predictions without the need of ad-hoc concept filtering methods when compared to other interpretable-by-design frameworks such as Lf-CBM. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Methods & Annotates samples & Do not require & Operates on large & Predict with variable \\ & automatically & concept filtering & query sets & no. of queries/concepts \\ \hline CBM & No & No & No & No \\ Lf-CBM & **Yes** & **Yes** & **Yes** & No \\ LaBo & **Yes** & No & **Yes** & No \\ V-IP & No & No & No & **Yes** \\ \hline FM+V-IP (this work) & **Yes** & **Yes** & **Yes** & **Yes** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparisons between this work and other interpretable-by-design frameworks. 3. We further demonstrate that V-IP with query set of size up to 45K queries/concepts can achieve competitive test performance with a smaller number of queries/concepts than other methods such as Lf-CBM and LaBo [35]. The major differences between the different frameworks are summarized in Table 1. ## 2 Related Work **Explanations without language.** Post-hoc explanation methods including feature attribution methods, such as LIME [29] and SHAP [18], as well as gradient-based methods, GradCAM [30; 4] and Integrated Gradients [32] generate explanations directly on feature maps/pixels that do not involve natural language. Similarly for interpretable-by-design methods, image classification experiments in sequential selection methods such as V-IP [6; 7] use all possible \(8\times 8\) image patches as their query set. Overall, using input data features as "concepts" versus natural language concepts poses the debate of what semantics the selected features represent. Conversely, in this work, we focus solely on natural language concepts, providing a more direct and colloquial description of the underlying models' decisions. **Foundational Models.** OpenAI's GPT-3 [2] is a Large Language Model (LLM) trained on large amounts of unlabeled text in a self-supervised manner. Recent benchmarks have demonstrated that LLMs can perform complex human tasks, such as passing the Bar Exam [1], showing signs that LLMs are capable of learning complex concepts and general understanding [3]. Adding other data modalities, CLIP [27] is a Vision-Language Pre-trained models (VLPs) trained on a large dataset of image-text pairs. CLIP consists of an image encoder and a text encoder that output corresponding image and text embeddings whose dot-products encode cross-modal similarities between images and texts. Furthermore, improvements on CLIP achieve state-of-the-art performance in tasks such as zero-shot image classification [20], and related works have found use-cases of CLIP as an architecture backbone for human-level tasks such as Visual Question Answering [11; 25] and Video Question Answering [36]. Also known as Foundational Models (FMs), LLMs and VLPs have demonstrate strong capabilities to learn complex concepts, which further motivates the use of LLMs to generate interpretable concepts and the use VLPs to relate similarity between representations of data across different modalities. **Concept Bottleneck Models.** The interpretable-by-design framework (and its extensions) that we mainly compare with in this work is Concept Bottleneck Models (CBMs) [16]. 
A CBM mainly consist of two parts: 1) A nonlinear function \(f:x\mapsto c\) that maps data to a concept feature space, where each feature corresponds to the relevance of the input sample and a concept from a pre-defined set of interpretable concepts; and 2) a linear function \(g:c\mapsto y\) that maps from concept scores to task labels. \(f\) is often a deep network such as ResNet-18 [13], while the predictor \(g\) is chosen to be a simple model such as a linear network or a decision tree. Similar to V-IP, while every prediction is made interpretable, applications of CBMs are also limited to small- to medium-scale tasks where data with dense concept annotations are available. To address this, extensions of CBMs such as Post-hoc CBM [37], Lf-CBM [24] and Language in a Bottle (LaBo) [35] use LLMs such as GPT-3 or WordNet [21] to generate concept sets, and leverage neuron-labeling methods, such as MILAN [14] and CLIP-Dissect [23], or VLPs such as CLIP [27] to annotate visual data with texts. ## 3 Methods Our method can mainly be divided into three main parts: 1) query set generation with GPT-3; 2) answering queries using CLIP; and 3) performing interpretable predictions with Variational Information Pursuit (V-IP) [6]. We begin by first introducing the background on the V-IP framework, including its original the generative approach to doing IP [5], the sufficiency criteria, and IP algorithm. ### Background: Information Pursuit and Variational Information Pursuit **IP.** The Information Pursuit framework was first introduced in Chattopadhyay et al. [5] as an interpretable-by-design framework to perform interpretable predictions for any given task. Let \(X:\Omega\rightarrow\mathcal{X}\) and \(Y:\Omega\rightarrow\mathcal{Y}\) denote the random variables for input data and corresponding labels/outputs, and \(\Omega\) be the underlying sample space were all random variables are defined. The user first define a set \(Q\) of task-specific and interpretable queries \(q:\mathcal{X}\rightarrow\mathcal{A}\), where \(q(x)\in\mathcal{A}\) is the answer to the query \(q\in Q\) evaluated at \(x\in\mathcal{X}\). For all data-label pairs \((x,y)\in\mathcal{X}\times\mathcal{Y}\), We say that the query set \(Q\) is sufficient for \(Y\) when \[P(y\mid x)=P(y\mid\{x^{\prime}\in\mathcal{X}:q(x^{\prime})=q(x)\}). \tag{1}\] An implication of this is that a query set \(Q\) is insufficient if we cannot estimate the posterior \(P(y\mid x)\) properly given all query answers of \(x\). With a sufficient query set \(Q\), the IP algorithm is described as follows: Given a data point \(x^{\text{obs}}\), the algorithm selects a sequence of most informative queries, until all remaining queries are nearly uninformative: \[q_{1} =\text{IP}(\emptyset)=\operatorname*{argmax}_{q\in Q}I(q(X);Y); \tag{2}\] \[q_{k+1} =\text{IP}(\{q_{i},q_{i}(x^{\text{obs}})\}_{1:k})=\operatorname* {argmax}_{q\in Q}I(q(X);Y\mid q_{1:k}(x^{\text{obs}})).\] Here \(q_{k+1}\) denotes the query selected at step \(k+1\), given history \(q_{1:k}(x^{\text{obs}})\). \(I\) denotes mutual information. The IP algorithm terminates at the the stopping criteria, defined as the posterior exceeding a threshold \(P(Y\mid q_{1:k}(x^{\text{obs}})>1-\epsilon\) with a certain pre-determined \(\epsilon\). The first formulation proposed in Chattopadhyay et al. [5] requires learning a generative model for \(P(Q(X),Y)\) to estimate the mutual information terms. However, the method is not scalable to large-scale tasks due to the large computational cost during inference. 
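To make the greedy selection of Eq. (2) and the stopping rule concrete, the following sketch spells out the generic IP loop; the mutual-information estimator and posterior model are assumed to be supplied externally (in the generative approach they come from a learned model of \(P(Q(X),Y)\)), and all names are illustrative. Estimating these mutual-information terms with a generative model at every step is what makes inference expensive.

```python
def information_pursuit(x_obs, queries, mutual_information, posterior, eps=0.05):
    """Greedy IP: repeatedly ask the query with maximal conditional mutual information.

    mutual_information(q, history) -> estimate of I(q(X); Y | history)
    posterior(history)             -> dict mapping each label y to P(y | history)
    history is the list of (query, answer) pairs observed so far on x_obs.
    """
    history, asked = [], set()
    while len(asked) < len(queries):
        remaining = [q for q in queries if id(q) not in asked]
        q_next = max(remaining, key=lambda q: mutual_information(q, history))
        asked.add(id(q_next))
        history.append((q_next, q_next(x_obs)))     # observe the answer on x_obs
        post = posterior(history)
        if max(post.values()) > 1 - eps:            # stopping criterion
            break
    y_hat = max(post, key=post.get)
    return y_hat, history
```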
To alleviate this issue, the authors later proposed V-IP, where the posterior distribution is learned directly using discriminative models, as described next. **V-IP.** Variational Information Pursuit (V-IP) was later introduced as a variational approach to IP [6]. V-IP defines a predictor \(f:S\to Y\) and a querier \(g:S\to Q\) that map a query-answer chain \(S\) of any length to the posterior distribution of \(Y\) and the most informative next query given \(S\), respectively. Parameterized by deep networks \(\theta\) and \(\eta\), \(f_{\theta}\) and \(g_{\eta}\) are trained by sampling random query-answer chains and optimizing the following V-IP objective: \[\min_{\theta,\eta} \mathbb{E}_{X,S}[D_{KL}(P(Y|X)\|P_{\theta}(Y|q_{\eta}(X),S)] \tag{3}\] \[\text{where} q_{\eta}:=g_{\eta}(S)\] \[P(Y\mid q_{\eta}(X),S):=f_{\theta}(\{q_{\eta},q_{\eta}(X)\cup S\})\] Importantly, Chattopadhyay et al. [6] shows that selecting the query \(q_{k+1}\) with the optimal querier \(g_{\eta}^{*}\) given any history \(S\) is exactly equivalent to selecting the query with the maximum mutual information in Equation (2). Hence, performing inference using V-IP can be done with the trained querier \(g_{\eta}^{*}\) in-place of computing the mutual information at each query: \[q_{1} =g_{\eta}^{*}(\emptyset)=\operatorname*{argmax}_{q\in Q}I(q(X);Y); \tag{4}\] \[q_{k+1} =g_{\eta}^{*}(\{q_{i},q_{i}(x^{\text{obs}})\}_{1:k})= \operatorname*{argmax}_{q\in Q}I(q(X);Y\mid q_{1:k}(x^{\text{obs}})).\] Figure 1 shows a flowchart of performing inference with V-IP. Empirical results show that V-IP is up to 100x faster in computational speed than the previous generative approach, while being able to scale up to larger scale datasets such as CIFAR-10/100. **Comparison between CBMs and V-IP.** Fundamentally, CBMs and V-IP are two different explanation methods (Figure 2). V-IP decomposes a model's prediction by sequentially selecting simpler semantic queries and updating the posterior given the query-answers obtained so far, thereby elucidating the model's reasoning process. In sharp contrast, CBMs find a simple linear mapping which draws correlations between concepts and task variables (classes) while balancing the trade-off between sparsity and accuracy. Ultimately, CBMs and V-IP both offer valid explanations, and the user may interpret predictions based on their preference of frameworks. Further method details can be found in Appendix D. ### Designing the Query Set We describe the method of generating a query set \(Q\) by prompting GPT-3. Then, we compare with three concept filtering methods from Lf-CBM, and argue that queries filtered by these methods are never selected when using V-IP, hence removing the need to do concept filtering for the purpose of V-IP. **Query Generation from LLMs.** Given that the query set is user-defined, ideally the user is a domain-knowledge expert who is knowledgeable about the discriminative information for a given task. Nonetheless, both manually creating the query set and labeling each sample is a cumbersome and time-consuming task that is infeasible for large-scale datasets. In this work, we formally extend the V-IP framework by leveraging LLMs to generate interpretable query sets. We first assume that for popular image classification tasks in the computer vision community such as ImageNet and fine-grained image classification tasks such as flower classification with Flower-102 [22], LLMs contain expert knowledge about both common and fine-grained objects types/classes. 
Consequently, LLMs would ideally have learned sufficient information about our image classification task to generate a task-dependent, informative and interpretable query set. Moreover, LLMs often come in the format of user-friendly APIs that take in the form of natural human language for both inputs and outputs. The familiarity of objects-in-the-wild, colloquiality and ease-of-access in LLMs make them an attractive choice to extend our query set designs beyond human experts. To generate interpretable concepts, we prompt GPT-3 to describe class of objects in adjectives, which then can be further processed into phrases or words as concepts. This poses the question of how to create an _efficient_ prompt that can provide us with a set of interpretable and task-relevant queries. In this work, we choose a single all-purpose prompt (with appropriate parameters) as our input to GPT-3: List the useful visual attributes (and their values) of the {object} image category {class_name}'. Here the {object} parameter is an argument to the prompt depending on the type of object we are classifying in the dataset. For instance, we replacing {object} with "bird" for bird classification, with "scene" for scene classification, and leave it blank for common object classification such as ImageNet and CIFAR-10/100. Then, for a given dataset, we iterate each class by replace the 'class_name' parameter with the name of each class. Take CIFAR-10 as an example, we separately input 10 prompts with '{object}' as blank and '{class_name}' as "cars", "birds", "cats", "deer", "dogs", "frogs", "horses", "ships", "trucks". An example of one run by feeding this prompt to GPT-3 can be found in the Appendix. The output of this prompt is a list, where each item is in the format of <attribute>:<value> (with the possibility of having multiple values). From here, one attribute-value pair can then be transformed as a single query or concept in the format of <value> <attribute>. An example of the entire process is shown in Figure 3, where we prompt for visual attributes and values of a "Blue Jay". Specifically, the attribute-value pair "Size: Medium" is converted into a query as "medium size". Therefore, for a \(K\)-class classification problem, a GPT-3 generated query set consist of a union of queries transformed from attribute-value pairs output by different \(K\) class dependent prompts. In Section 4.1, we empirically verify that GPT-3 is capable of generating an interpretable and task-dependent query set that is suitable for our tasks. **Concept Filtering.** Recent methods such as Lf-CBM[24] show that concept filtering, removing certain concepts from the initial generated set from GPT-3, is necessary to achieve good test performance and interpretability. For comparison, we look at some steps of concept filtering in Lf-CBM. We argue that, in the context of the IP algorithm in 2, filtered queries/concepts are uninformative, hence will Figure 2: An example of one test sample evaluated using (left) FM+V-IP and (right) Lf-CBM. naturally not be selected by our formulation. For context, we consider \(N\)-class bird classification as our task and context, and again denote \(x\) and \(y\) as the realizations of the data and label random variable \(X\) and \(Y\), and \(S\) as history. 1. _Concept Filter 1: Delete concepts too similar to task's class names._ Consider the binary query: \(q=\) "Is the bird in the image a Blue Jay?". 
From the definition of mutual information between \(q(X)\) and \(Y\), \(I(q(X),Y)=H(Y)-H(Y\mid q(X))=H(Y\mid q(X)=1)P(q(X)=1)+H(Y\mid q(X)=0)P(q(X)=0)\). Since the answer to the query is Yes, i.e. \(q(X)=1\), there is no ambiguity left in \(Y\) and \(H(Y\mid q(X)=1)P(q(X)=1)=0\). Hence, \(I(q(X),Y)=H(Y\mid q(X)=0)P(q(X)=0)\). Now, if \(Y\) follows a uniform distribution, then \(H(Y)=\log N\) and \(H(Y\mid q(x)=0)P(q(X)=0)=(1-1/N)\log(N-1)\). For a large \(N\), \(H(Y)\approx H(Y\mid\mid q(X)=0)P(q(X)=0)\), implying \(I(q(X),Y)=H(Y)-H(Y\mid q)\approx 0\). Therefore, \(q\) is an uninformative query and will not be selected. 2. _Concept Filter 2: Delete concepts too similar to other concepts._ Consider \(q_{1}=\) "Is this bird tiny?" and \(q_{2}=\) "Is this bird small?". Reasonably, \(q_{1}\) and \(q_{2}\) provide similar information. Since the IP algorithm selects queries in a sequential manner, once \(q_{1}\) has been chosen, \(q_{2}\) is unlikely to be the next most informative query. Take the extreme case where \(q_{1}(x)=q_{2}(x)\) for all \(x\in\mathcal{X}\), the mutual information between labels \(Y\) and \(q_{2}(X)\) is exactly 0 when \(\{q_{1},q_{1}(X)\}\in S\), i.e. \(I(Y;q_{2}(X)\mid S,q_{1}(X))=0\). 3. _Concept Filter 3: Delete concepts not present in training data._ Consider a query \(q=\) "Is the car red?". In a bird classification task, a car is irrelevant for the task, hence \(q(x)=0\) is constant for all \(x\). This also implies \(I(Y;q(X)\mid S)=0\). While concept filtering is not needed for V-IP to select the most informative query, since V-IP requires differentiation through a high-dimensional vector of query logits, filtering can reduce the size of \(Q\), making V-IP more computational efficient. Although in our experiments we see little-to-no difference in time to train, finding ways to reduce the size of \(Q\) can be useful for applications such as extreme classification problems [8]. ### Answering Queries Once a query set \(Q\) is defined, we require a method of computing the query answers \(q(x)\) for every \(x\in\mathcal{X}\). Previously, query answers are either annotations of the dataset or feature values such as pixel values in a given image. In this work, our generated query sets are concepts expressed in natural human language and we will leverage CLIP to generate query answers \(q(x)\). CLIP is a vision-language model trained on image-texts pairs in a contrastive manner, in which the inner product (with range \([0,1]\)) of the output image and text embedding represents the similarity of a given input image and text. CLIP consists of two encoders: an image encoder \(\mathcal{I}(\cdot)\) and a text encoder \(\mathcal{T}(\cdot)\), where \(\mathcal{I}\) takes images \(x\) as input and outputs image vector embeddings \(\mathcal{I}(x)\), whereas \(\mathcal{T}\) takes text \(q\) (variable size sentences) as input and output a text embedding \(\mathcal{T}(q)\) as output. Note that the dimensions of both \(\mathcal{T}(x)\) and \(\mathcal{I}(q)\) are the same, and we assume all embeddings are \(\ell_{2}\)-normalized. Since the dot-product \(\mathcal{I}(x)\cdot\mathcal{T}(q)\) encodes the similarity between the given image \(x\) and query \(q\), we compute the query answer \(q(x)\) for all \(q\in Q\) and \(x\in\mathcal{X}\) as the raw dot-product (score) between Figure 3: An overview of the method of generating query sets \(Q\) and query answers \(Q(X)\). the image embedding \(\mathcal{T}(x)\) and text embedding \(\mathcal{I}(q)\), i.e. 
\(q(x)=\mathcal{I}(x)\cdot\mathcal{T}(q)\). When \(q(x)\) is large (close to 1), the query \(q\) and the image \(x\) are similar, whereas when \(q(x)\) is small (close to 0) they are dissimilar. Following Lf-CBM, we also Z-score standardize each query answer by subtracting the mean and dividing the standard deviation of all query answers from the training set. In Chattopadhyay et al. [5; 6], queries are binary questions about the data such as "Are there square windows in the building?". In contrast, in our framework query answers measure the similarity between the concept and the data, e.g., the similarity between the concept "Square windows" and an image of a building. Answering queries in the form of questions is related to the field of Visual Question Answering (VQA). We reserve the research of more sophisticated and fine-grained queries for future work. ## 4 Experiments In this section, we empirically demonstrate that by leveraging LLMs and VLPs, FM+V-IP addresses the key challenges for doing interpretable-by-design predictions in three parts: First, we show that LLMs are capable of generating interpretable and informative query sets (Section 4.1). Then, we show that V-IP can achieve competitive test performance with both filtered and unfiltered query sets when compared to Lf-CBM (Section 4.2). Finally, we show that FM+V-IP with large query sets can achieve comparable test performance with much fewer number of queries when compared to LaBo (Section 4.3). In the Appendix, we also showcase an example where a sample is originally misclassified, then later intervening and corrected by modifying an erroneous query answer. Last but not least, we document the details of our experiments, such as training procedure, architecture design, hyperparameters for FM+V-IP, Lf-CBM and LaBo, as well as examples of queries from each query set and test samples in the Appendix. Code is available at [https://github.com/ryanchankh/FM-V-IP/](https://github.com/ryanchankh/FM-V-IP/). ### Evaluating Query Sets from LLMs with Annotated Data We first demonstrate that GPT-3 is capable of generating an interpretable and informative query set. We construct two different query sets for bird classification task using CUB-200 [33]. The first query set, named original, has a size of 312 and is constructed from the original 312 attributes of the CUB-200 dataset, equivalent to the query set used for V-IP in previous works [5]. The second query set, named GPT-3, has a size of 518 and is constructed from concepts obtained using the single-design prompt described in Section 3.2. The query answers \(q(x)\) are generated using CLIP dot-product as mentioned in Section 3.3.We train one V-IP model for each of the query set and compare their test performance. Figure 4: Comparisons of query-answer chains using V-IP models trained with **(left)** GPT-3 query set and **(right)** original query set. Each \(k\)-th row corresponds to the posterior after selecting \(k\) queries. In the \(x\)-axis, only the four classes with the highest posterior probabilities at the stopping criteria are shown, with the predicted class in bold. Green and red \(y\) axis labels corresponds to whether \(q_{k}(x)\) is above or below 0, with their query answer \(q_{k}(x)\) shown next to \(q_{k}\). 
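For concreteness, the CLIP-based answer generation used in these experiments (Section 3.3) can be sketched in a few lines; the model variant, preprocessing, and function names below are illustrative assumptions, and the standardization statistics should in practice be computed on the training set only.

```python
import torch
import clip                      # OpenAI's CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)   # model choice is an assumption

@torch.no_grad()
def query_answers(image_paths, concepts, batch_size=256):
    """Return a (num_images, num_concepts) matrix of query answers q(x) = I(x) . T(q)."""
    text = clip.tokenize(concepts, truncate=True).to(device)
    text_emb = model.encode_text(text).float()
    text_emb /= text_emb.norm(dim=-1, keepdim=True)

    rows = []
    for i in range(0, len(image_paths), batch_size):
        batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                             for p in image_paths[i:i + batch_size]]).to(device)
        img_emb = model.encode_image(batch).float()
        img_emb /= img_emb.norm(dim=-1, keepdim=True)
        rows.append(img_emb @ text_emb.T)           # similarity of each image-concept pair
    answers = torch.cat(rows)

    # Z-score standardization of each query answer (training-set statistics in practice)
    return (answers - answers.mean(dim=0)) / answers.std(dim=0)
```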
By varying \(\epsilon\) and evaluating at different stopping criteria \(P(Y\mid q_{1\cdot k}(X))>1-\epsilon\), we compute the average number of queries needed for each test sample, and compute the trade-off between test accuracy and average number of queries used in Figure 5. We observe that the test accuracy gained by increasing the number of queries is similar before the 16th query, but as the number of queries increases, V-IP performs approximately 7% better with the GPT-3 query set than with the original query set after using 96 queries. This implies that the GPT-3 query set contains additional queries that are more informative for predicting the bird class than the original queries. We demonstrate this in a direct comparison of query-answer chains shown in Figure 4: Note that queries related to colors are mainly selected from the original query set, whereas queries related to different bill shapes and patterns such as "white pattern: pied" and "mottled pattern" are selected from the GPT-3 query set. These queries help further distinguish birds with similar visual features. We show more examples of query-answer chains using the two query sets in the Appendix. ### Evaluating Filtered and Unfiltered Query Sets with Lf-CBM We use Lf-CBM as our baseline and evaluate the test performance of V-IP using the filtered and unfiltered query/concept set as Lf-CBM on five datasets: CIFAR-10 and CIFAR-100 [17], ImageNet [10], CUB-200 [33], and Places365 [39]. Label-free CBM (Lf-CBM) [24] is an extension of CBM, where concepts/queries \(Q\) are first generated by prompting GPT-3, then refined by passing through a series of concept filters, such as those mentioned in (Section 3.2). Given the query answers generated from image embeddings of pre-trained neural networks, a linear predictor is trained using an elastic-net objective with sparsity parameter \(\lambda\) to map from concept scores to class logits. Note Lf-CBM differs from V-IP in our experiments in two ways: 1) The predictor for Lf-CBM is linear, whereas the predictor for V-IP is a nonlinear neural network; and 2) The number of concepts used from Lf-CBM is decided by the sparsity parameter \(\lambda\), whereas queries used per sample in V-IP is variable-length and driven by mutual information. An example of the two methods is shown in Figure 2. We give the full explanation and details of the Lf-CBM in the Appendix. For comparison, we denote our method as FM+V-IP. Figure 5: Test performance of CUB-200 using the original query set original versus GPT-3 generated query set GPT-3. Figure 6: Test performance of FM+V-IP versus Lf-CBM on CIFAR-10, CIFAR-100, CUB-200, ImageNet, and Places365. Using a _filtered_ query set \(Q_{\text{filtered}}\), blue solid lines are results from FM+V-IP and blue dashed lines are baseline results from a neural network classifier without V-IP. Using an _unfiltered_ query set \(Q_{\text{GPT}}\), green solid lines are results from FM+V-IP and green dashed lines are baseline results from a neural network classifier without V-IP. Red solid lines are results by training multiple Lf-CBMs with \(Q_{\text{GPT}}\) and varying the sparsity parameter, while red crosses are results from Lf-CBMs with \(Q_{\text{filtered}}\), directly referenced from Oikarinen et al. [24]. We compare the test performance of FM+V-IP using unfiltered query set \(Q_{\text{GPT}}\) from our single all-purpose prompt to GPT-3 versus Lf-CBM using a filtered query set \(Q_{\text{filtered}}\) from Lf-CBM. 
In Figure 6, we demonstrate that FM+V-IP does not require any concept filtering to achieve comparable test performance. Take CIFAR-10 as an example: using with 30 queries, V-IP achieves 94.11% test accuracy, whereas Lf-CBM achieves 86.40% test accuracy. From an alternative viewpoint, FM+V-IP achieves the same test accuracy as Lf-CBM using fewer number of queries. For instance, FM+V-IP only requires 5 queries to achieve the 86.84% test accuracy whereas Lf-CBM requires 30 concepts to achieve similar test accuracy. On the other hand, for ImageNet, FM+V-IP obtains a lower test accuracy than Lf-CBM using 45 queries, where the former achieves a test accuracy at 67.11% and latter at 71.95%. This can be attributed to the fact that query answers generated by CLIP are too noisy for the given query sets. Similarly for CUB-200, FM+V-IP achieves a 63.55% test accuracy whereas Lf-CBM achieves a test accuracy of 74.31%. The drop in performance is due to the fact that CLIP dot-products are not ideal for representing fine-grained details of the image (as mentioned in Section 3.3). Moreover, in Figure 6, we compare the performance of FM+V-IP and Lf-CBM on the same query set \(Q_{\text{GPT}}\) without any filtering. Similar to findings in Lf-CBM, we run into memory issues when trying to experiment with larger datasets. Therefore, we only report findings on datasets where the size of the unfiltered query set is feasible for performing this experiment. For CIFAR-10 and CUB-200, we train multiple-Lf-CBMs by varying the sparsity parameter \(\lambda\) and compute the test accuracy for different number of queries. On the other hand, we train a single V-IP model for each dataset. For CIFAR-10, test accuracy for FM+V-IP saturates around 95% using 25 queries, whereas test accuracy for Lf-CBM never achieves 95%, and requires 80 queries to achieve 92% test accuracy. For CUB-200, FM+V-IP achieves 65% with 50 queries, whereas Lf-CBM fails to generalize well. This shows that FM+V-IP is feasible to train and can perform well with any query set. On the other hand, Lf-CBM requires concept filtering to become feasible or perform well. Last but not least, while \(Q_{\text{GPT}}\) contains more uninformative queries than \(Q_{\text{filtered}}\), Figure 6 also shows that FM+V-IP still selects informative queries and attains similar test performance at different number of queries. ### Comparing V-IP and LaBo with the same query set While V-IP optimizes to find short variable-length query-chains, LaBo first optimizes a submodular function to select a pre-determined number of suitable concepts for each class, then predicts using a linear classifier. In this section, we compare the test performance of V-IP versus LaBo using the same query set on three datasets: Flower-102 [22], UCF-101 [31] and FGVC-Aircraft (Aircraft) [19]. For a given dataset, the query set is generated by feeding multiple prompts to GPT-3 to obtain sentence descriptions for each class, then processed into individual concepts using the LLM T-5 [28]. In Yang et al. [35] and in this experiment, the query answers are dot-products of CLIP embeddings between concepts as text and images. We train one V-IP model for each dataset with their respective query sets generated from LaBo. In Figure 7, we show that, while both methods do not require any concept filtering, V-IP requires 0.02-0.05 times the number of queries of LaBo to achieve comparable performance. 
The size of the query set \(Q\) are orders of magnitude larger than filtered concept sets in Lf-CBM: 24960 for Flower-102, 46845 for UCF-101 and 36324 for Aircraft. This further argues that V-IP can perform _and scale_ competitively with large query sets without requiring downsizing of the query set. Figure 7: Comparison of test performance between FM+V-IP and LaBo. Red curve represents the test accuracy of LaBo after a certain number of queries. The blue curve represents the test accuracy of FM+V-IP at varying number of queries. Conclusion In this work, we address the need for data with concept annotations by proposing a method of leveraging LLMs and VLPs to create query/concept sets and annotations. We mathematically argue that the sequential selection process of V-IP avoids the need for any concept filtering, and empirically show that V-IP is capable of achieving competitive test accuracy on different scales of datasets when compared with other methods such as Lf-CBM and LaBo. Nonetheless, we also observe that using CLIP dot-products as query answers can be noisy for fine-grain concepts, negatively impacting test performance. As future work we seek to extend the framework for more sophisticated queries and answering them with more scalable and precise VQA systems, so that task predictions can obtain better accuracy and be more interpretable.
2310.08770
Cross-correlation image analysis for real-time particle tracking
Accurately measuring the translations of objects between images is essential in many fields, including biology, medicine, chemistry, and physics. One important application is tracking one or more particles by measuring their apparent displacements in a series of images. Popular methods, such as the center-of-mass, often require idealized scenarios to reach the shot-noise limit of particle tracking and are, therefore, not generally applicable to multiple image types. More general methods, like maximum likelihood estimation, reliably approach the shot-noise limit, but are too computationally intense for use in real-time applications. These limitations are significant, as real-time, shot-noise-limited particle tracking is of paramount importance for feedback control systems. To fill this gap, we introduce a new cross-correlation-based algorithm that approaches shot-noise-limited displacement detection and a GPU-based implementation for real-time image analysis of a single particle.
Leonardo R. Werneck, Cody Jessup, Austin Brandenberger, Tyler Knowles, Charles W. Lewandowski, Megan Nolan, Ken Sible, Zachariah B. Etienne, Brian D'Urso
2023-10-12T23:32:37Z
http://arxiv.org/abs/2310.08770v3
# Cross-correlation image analysis for real-time particle tracking ###### Abstract Accurately measuring translations between images is essential in many fields, including biology, medicine, geography, and physics. Existing methods, including the popular FFT-based cross-correlation, are not suitable for real-time analysis, which is especially vital in feedback control systems. To fill this gap, we introduce a new algorithm which approaches shot-noise limited displacement detection and a GPU-based implementation for real-time image analysis. [http://dx.doi.org/10.1364/ao.XX.XXXXXX](http://dx.doi.org/10.1364/ao.XX.XXXXXX) ## 1 Introduction Tracking the motion of particles in a sequence of images over time is a common practice in various fields, ranging from biology [1] and geography [2] to physics [3]. For applications requiring real-time feedback control, the center-of-mass algorithm is often employed due to its simplicity [4]. However, this method is primarily designed for measuring the position of a bright spot against a dark background, making it susceptible to biases [5]. Accurately tracking a single particle against a relatively featureless background can be achieved by measuring the apparent displacement of the image's content as the particle moves [3, 6]. Various codes for generalized image analysis are available [7, 8, 9, 10, 11], but are typically not suitable for real-time tracking. In this letter, we introduce a real-time, cross-correlation-based image analysis method that approaches the shot-noise limit of particle tracking. We also provide an open-source implementation of this algorithm and demonstrate its capabilities by tracking a microsphere levitated in a magneto-gravitational trap [12]. Our implementation is suitable for commercial off-the-shelf graphics processing units (GPUs), and can analyze hundreds of images per second. ## 2 Basic approach A common method to find the location \(\vec{R}\) of a bright spot on a dark background is the center of mass (CM) calculation, \(\vec{R}=\sum_{\vec{r}}\vec{r}\,I(\vec{r})/\sum_{\vec{r}}I(\vec{r})\), with \(\vec{r}\) denoting the location of a pixel in an image \(I(\vec{r})\). In the ideal case of a nearly featureless background, this method can approach the shot noise limit [13]. However, it is highly affected by background light and image boundaries [5]. A more statistically robust approach, maximum likelihood estimation (MLE), adjusts parameters in a fit distribution to maximize the likelihood that the data comes from the distribution with Poisson statistics. If Gaussian statistics are used as an approximation, we can instead minimize \(\chi^{2}\) relative to the displacement \(\vec{r}_{0}\) between two images, as expressed in the equation \[\chi^{2}=\sum_{\vec{r}}\left[\frac{I(\vec{r}-\vec{r}_{0})-E(\vec{r})}{\sigma( \vec{r},\vec{r}_{0})}\right]^{2}, \tag{1}\] where \(I(\vec{r}-\vec{r}_{0})\) is the image \(I(\vec{r})\) translated by \(\vec{r}_{0}\), \(E(\vec{r})\) is the reference image (RI; see Sec. 3), and \(\sigma(\vec{r},\vec{r}_{0})\) models the noise in the images. Minimizing \(\chi^{2}\) directly is computationally intensive, so more efficient strategies, such as cross-correlation (CC), are generally preferred. The uniformly-weighted CC (CC-U) method, for example, assumes that \(\sigma(\vec{r},\vec{r}_{0})\) has a constant value denoted by \(\sigma\). Under this assumption, Eq. 
(1) simplifies to: \[\chi^{2}=\frac{1}{\sigma^{2}}\sum_{\vec{r}}\left[I(\vec{r}-\vec{r}_{0})^{2}-2I (\vec{r}-\vec{r}_{0})E(\vec{r})+E(\vec{r})^{2}\right]. \tag{2}\] Since the sums over \(I(\vec{r}-\vec{r}_{0})^{2}\) and \(E(\vec{r})^{2}\) span the entire images, they are independent of \(\vec{r}_{0}\). The displacement \(\vec{r}_{0}\) is then obtained by finding the position of the maximum of the CC between the shifted image \(I(\vec{r}-\vec{r}_{0})\) and the RI \(E(\vec{r})\), i.e., \[I\star E=\mathcal{F}^{-1}\left[\mathcal{F}\left[I\right]\otimes\overline{ \mathcal{F}\left[E\right]}\right]\,, \tag{3}\] where \(\mathcal{F}\) is the Fourier transform, \(\mathcal{F}^{-1}\) its inverse, \(\otimes\) denotes element-wise multiplication, and an overline denotes complex conjugation. For CMOS cameras with adequate light levels, photon shot noise is often the dominant noise source. The noise in each pixel follows a Poisson distribution, which can be approximated by a Gaussian distribution with standard deviation equal to the square root of the photon count in the pixel. In this case, a shot-noise-weighted version of the CC method (CC-SN) is more appropriate. This method assumes the RI provides an estimate for the _average_ pixel values of the images in a data set, i.e., \[\sigma(\vec{r},\vec{r}_{0})=\sqrt{E(\vec{r})}. \tag{4}\] For nearly zero pixel values, the square root poorly approximates the uncertainty of the Poisson distribution. Negative pixel values may also arise when subtracting the average mean background from images. To compensate, we add a constant offset \(\epsilon\) to the new images (see Sec. 3 for how this affects the RI), with larger offset values de-emphasizing darker pixels. Adding the offset to the new image and using Eq. (4) in Eq. (1) we find \[\chi^{2}=\sum_{\vec{r}}\Bigl{\{}E(\vec{r})-2I^{\prime}(\vec{r}-\vec{r}_{0})+ \bigl{[}I^{\prime}(\vec{r}-\vec{r}_{0})\bigr{]}^{2}/E(\vec{r})\Bigr{\}}\, \tag{5}\] where \(I^{\prime}\equiv I+\epsilon\). As with Eq. (2), the first two sums in Eq. (5) are independent of \(\vec{r}_{0}\), and the minimization only depends on the CC between \(I^{\prime 2}\) and the reciprocal of \(E\). Therefore, the displacement \(\vec{r}_{0}\) that minimizes Eq. (5) is determined by the position of the maximum of \[I^{\prime 2}\star E^{-1}=\mathcal{F}^{-1}\Bigl{[}\mathcal{F}\bigl{[}I^{\prime 2 }\bigr{]}\otimes\overline{\mathcal{F}\bigl{[}1/E\bigr{]}}\Bigr{]}. \tag{6}\] ## 3 Numerical Methods The location of the maximum of the CC gives an integer pixel estimate of the displacement between two images. Following [6, 7], we define a 1.5-pixel square region centered on the CC's maximum and sample it with 1.5\(u\) points in both the horizontal and vertical directions, where \(u\) is the upsampling factor. Data from the element-wise product on the right-hand sides of Eqs. (3) or (6) (depending on the method) is then interpolated to the upsampled region, followed by an inverse discrete Fourier transform to generate an upsampled CC. The location of the maximum in the upsampled CC yields a sub-pixel estimate of the displacement. The CC-U and CC-SN methods can be used to analyze images both in real-time or with an iterative method that uses past and future images, with different approaches for updating the RI in each case. The following discussion focuses on the CC-U method, but it can be easily adapted for the CC-SN method by replacing \(I_{i}\to 1/I_{i}^{\prime}\) and \(E_{n}\to 1/E_{n}\). 
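As a minimal illustration of the uniformly weighted case, the NumPy sketch below evaluates the cross-correlation of Eq. (3) with FFTs and returns the integer-pixel displacement estimate; the sub-pixel refinement by upsampled DFT interpolation described above is omitted, the CC-SN variant follows from the substitutions given in the text, and all names are illustrative.

```python
import numpy as np

def integer_displacement_ccu(I, E):
    """Integer-pixel displacement of image I relative to the reference image E (Eq. 3)."""
    cc = np.fft.ifft2(np.fft.fft2(I) * np.conj(np.fft.fft2(E))).real   # I cross-correlated with E
    peak = np.array(np.unravel_index(np.argmax(cc), cc.shape), dtype=float)
    shape = np.array(cc.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]   # map wrap-around indices to signed shifts
    return peak   # (row, column) shift in pixels
```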
### Offline Analysis In the offline case [3], we iteratively update the RI as the average in Fourier space of all images within a data set \(\{I_{0},I_{1},\ldots,I_{N-1}\}\). For each complete pass through the images, the Fourier transform of the RI at the \(n\)-th iteration, \(\mathcal{F}[E_{n}]\), is computed using \[\mathcal{F}[E_{n}]=\frac{1}{N}\sum_{i=0}^{N-1}\mathcal{F}\bigl{[}I_{i}(\vec{r} -\vec{r}_{0})\bigr{]}\, \tag{7}\] where \(\mathcal{F}[E_{0}]=\mathcal{F}[I_{0}]\). For a given iteration, computing the displacement of each image in the data set with respect to the constant RI is an embarrassingly parallel operation. The data set can thus be split across multiple processing units for a significant speed-up. ### Real-time Analysis Real-time analysis is inherently a sequential process that requires continuously updating the RI as each new image is processed. The computational expense is large, as both the displacement and the RI must be updated at least at the camera's frame rate. As each new image is processed, the RI is updated using a low-pass filter, \[\mathcal{F}[E_{i}]=a_{0}\mathcal{F}\bigl{[}I_{i}(\vec{r}-\vec{r}_{0})\bigr{]} +b_{1}\mathcal{F}\bigl{[}E_{i-1}\bigr{]}\, \tag{8}\] where \(\mathcal{F}[E_{0}]=\mathcal{F}[I_{0}]\), \(a_{0}=1-e^{-T/\tau}\), \(b_{1}=e^{-T/\tau}\), \(T\) is the inverse of the camera's frame rate, and \(\tau\) is the filter time constant (typically \(\tau\gg T\)). This method may experience drift over time as a result of the gradual accumulation of numerical errors, which can displace the particle in the RI from its initial position. Similarly, drift in the particle's actual position can occur over time due to changes in its equilibrium position. Although these drifts usually have no significant consequences, they can be inconvenient, and it is often desirable to eliminate them from the detected motion, retaining only the faster particle oscillation. A natural solution is to subtract off the (possibly drifting) equilibrium position, as determined by a low-pass filter. Given the \(n\)-th raw displacement \(\vec{r}_{n}\), we calculate the filtered displacement value \(\vec{r}_{n}=\vec{r}_{n}-\vec{R}_{n}\), where \(\vec{R}_{n}\) is the subtracted offset calculated by passing the raw data through a low pass filter, \[\vec{R}_{n}=a_{0}\vec{r}_{n}+b_{1}\vec{R}_{n-1}\, \tag{9}\] and \(\vec{R}_{0}=a_{0}\vec{r}_{0}\). If there is concern that Eq. (9) may be too aggressive at correcting drift and could inadvertently remove an important signal, the shift in the offset can be limited to a fraction \(\alpha\) of the resolution of the CC analysis (the inverse of the upsampling factor \(u^{-1}\)). In this case, the drift correction effectively biases the round-off error in the CC towards a zero average displacement, and the change in the correction offset can be written as \[d\vec{R}_{n} =\bigl{(}a_{0}\vec{r}_{n}+b_{1}\vec{R}_{n-1}\bigr{)}-\vec{R}_{n-1} \tag{10}\] \[=a_{0}\vec{r}_{n}+\bigl{(}b_{1}-1\bigr{)}\vec{R}_{n-1}\,\] and \[\vec{R}_{n}=\begin{cases}\vec{R}_{n-1}+(\alpha/u)d\vec{R}_{n}&\text{, if }|d\vec{R}_{n}|>\alpha/u\,\\ \vec{R}_{n-1}+d\vec{R}_{n}&\text{, otherwise,}\end{cases} \tag{11}\] where \(d\vec{R}_{n}=d\vec{R}_{n}/|d\vec{R}_{n}|\). A typical choice of parameter is \(\alpha=0.5\), ensuring that drift corrections are smaller than the expected displacement resolution. 
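The per-frame update rules of Eqs. (8)-(11) amount to only a few operations, as the following sketch shows; the displacement-measurement and image-shift routines are assumed to be provided (e.g., the cross-correlation analysis above and a Fourier-space shift), and the class and variable names are illustrative rather than taken from the released implementation.

```python
import numpy as np

class RealTimeTracker:
    def __init__(self, first_image, frame_period, tau, alpha=0.5, upsample=256):
        self.a0 = 1.0 - np.exp(-frame_period / tau)   # low-pass coefficients of Eq. (8)
        self.b1 = np.exp(-frame_period / tau)
        self.max_step = alpha / upsample              # drift-correction limit of Eq. (11)
        self.E_hat = np.fft.fft2(first_image)         # Fourier-space reference image
        self.R = np.zeros(2)                          # slow drift offset

    def update(self, image, measure_displacement, shift_image):
        """Process one frame and return the drift-corrected displacement."""
        r = measure_displacement(image, self.E_hat)        # raw displacement of this frame
        aligned = shift_image(image, -r)                   # shift the frame back onto the reference
        self.E_hat = self.a0 * np.fft.fft2(aligned) + self.b1 * self.E_hat   # Eq. (8)
        dR = self.a0 * r + (self.b1 - 1.0) * self.R        # Eq. (10)
        step = np.linalg.norm(dR)
        if step > self.max_step:                           # Eq. (11): limit the correction
            self.R = self.R + self.max_step * dR / step
        else:
            self.R = self.R + dR
        return r - self.R                                  # filtered displacement
```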
## 4 Results We perform tests with synthetic data, where each synthetic image comprises a Gaussian function integrated over each pixel, representing either a bright or dark particle on a dark or bright background, with Poisson shot noise added (see Fig. 1). To ensure that these synthetic images are experimentally relevant, we match their maximum pixel intensity to the saturation capacity of the CMOS sensor used in our experiments (10700 electrons for a Sony IMX250). When adding dark noise to the images, we also match it to the dark noise of the sensor (2.4 electrons). For each method and choice of parameters, we analyze data sets consisting of 1000 images, whose displacements are randomly distributed, but exactly known. For the bright spot on a dark background we consider both the case without and with dark noise, where a constant value is added to the entire image before the Poisson noise is introduced. This means that subtracting the mean background value from the image with dark noise does not yield, in general, the image without dark noise. For the dark spot on the bright background we only consider the case with added dark noise. We compare the uniformly-weighted (CC-U) and shot-noise-weighted (CC-SN) cross-correlation methods described in Sec. 2 with the standard center of mass (CM) and maximum likelihood estimation (MLE) methods, using the real-time approach described in Sec. 3. When using the MLE method, we provide the exact displacement as the initial value for the iterative optimization algorithm. By doing so, the MLE method produces results that represent the best achievable outcomes from numerical methods, offering a benchmark against which other methods can be compared. Figure 1(a) illustrates the performance of each method in recovering displacements for a bright spot in the absence of dark noise. In this case, all methods exhibit roughly the same performance as the MLE method, effectively reaching the shot noise limit for the standard deviation of the position error, \(w_{0}/(2\sqrt{N_{\gamma}})\), where \(N_{\gamma}\) is the total number of photons detected in the Gaussian [13]. This is valid as long as the particle size (proportional to the Gaussian width \(w_{0}\)) does not constitute a large fraction of the image. In this idealized context, it is evident that the CM algorithm is the preferred choice, given its simplicity and low computational cost. Figure 1(b) shows the performance of each method in recovering displacements for the more realistic case of a bright spot with dark noise. In this case, the CM algorithm is used to analyze both the original images and the images with the exact mean background subtracted (CM \(-\) BG). While subtracting the mean background substantially enhances the CM method's performance, it still falls short of reaching the shot noise limit, except for a small range of \(w_{0}\), a feature which is used in [5]. On the other hand, both the CC-U and CC-SN methods yield results comparable to the ones obtained using the MLE method, and are therefore preferred due to their significantly lower computational cost. As in the previous case, the error for the CC-U method is slightly larger than the ones for the CC-SN and MLE methods. Figure 1(c) shows how each method performs in recovering displacements for a dark spot in the presence of dark noise. When using the CM method, we subtract the image from the exact mean background value to approximate a bright spot on a dark background. 
Nevertheless, the displacement errors for the CM method are still about an order of magnitude above the shot noise limit. While the CC-U method exhibits larger errors than the MLE method, they are still notably smaller than those associated with the CM method. The errors for the CC-SN method remain comparable to the ones for the MLE method.

Figure 1: Gaussian particle tracking study: comparison of various image analysis methods, both in the presence and absence of dark noise. Legend indicates the particular method used: center of mass (CM), CM subtracting the mean background (CM \(-\) BG), uniform cross-correlation (CC-U), shot-noise-weighted cross-correlation (CC-SN), and maximum likelihood estimation (MLE). The Gaussian width as a fraction of the image width (\(w_{0}/N\)) is used as a comparison metric. Insets show samples of the analyzed images: the top half presents the original, and the bottom half adjusts the pixel values (through rescaling, offsetting, and saturating) to highlight the presence or absence of shot noise.

### Experimental data

We show that the CC-SN method can be used for real-time analysis by tracking a borosilicate glass microsphere levitated in a magneto-gravitational trap in high vacuum. The microsphere is back-illuminated with collimated light from a pulsed 660 nm LED and imaged on a CMOS camera (Basler acA2440-75 \(\mu\)m), giving the particle the appearance of a dark disk on a bright background with a bright spot in the center (see Fig. 2; see also [3, 12]). Additional force to suspend the approximately 68 \(\mu\)m diameter microsphere is provided by charge on the microsphere in an electric field produced by applying a voltage across the pole pieces [3].

Figure 2: Experimental data: backlit levitated borosilicate glass microsphere. Artifacts in the image background are from dust on the optics and window of the vacuum chamber.

The recorded data consists of \(256\times 128\) pixel images of the microsphere, corresponding to a field of view of approximately \(310\times 150\,\mu\mathrm{m}\), with 12 bits per pixel, recorded for ten minutes at a rate of \(470\,\mathrm{Hz}\), which is near the maximum frame rate supported by the camera's USB3 interface. Because the microsphere image displays complex structure with non-Gaussian features such as diffraction rings on the edges and a bright spot at the center, the CM algorithm is not suitable for tracking the particle. We instead use the CC-SN algorithm to measure the displacement of the particle in real time using a GeForce RTX 3080 Ti GPU, while simultaneously recording the images. For comparison, we perform an offline analysis using the CC-SN algorithm and our iterative method described in Sec. 3 up to the fifth RI. In both cases the upsampling factor is set to 256. We find excellent agreement between the real-time and offline methods, with differences less than 1.5 pixels for the entire data set, as shown in Fig. 3. We note that the real-time analysis uses the slow drift correction algorithm described in Sec. 3, but the iterative method does not. The predictable consequence is a systematic drift in both the horizontal and vertical displacements, observed in the bottom panels of Fig. 3.

## 5 Conclusions

In this paper we introduced a shot-noise-weighted cross-correlation (CC-SN) method, which can be used to determine image displacements approaching the shot noise limit.
The method was tested on several different images, including a bright spot on a dark background, with and without dark noise, a dark spot on a bright background with shot noise, and on complex experimental data. We found that the CC-SN method is superior to the uniformly weighted cross-correlation (CC-U) method, with errors comparable to those obtained using the maximum likelihood estimation (MLE) method, while being significantly cheaper than MLE from a computational standpoint. CC-SN is particularly useful for feedback control systems, as it can be used for real-time image analysis. As a proof of principle, we used the CC-SN method for real-time tracking of the position of a microsphere levitated in a magneto-gravitational trap in high vacuum. Ultimately, we anticipate using the method for feedback cooling of the motion of a particle to the shot-noise limit, as well as for state preparation and tracking for a new measurement of the Newtonian constant of gravitation using a levitated particle. Using this method for real-time analysis requires an implementation of the algorithms discussed in this paper that is capable of processing images at a rate at least equal to the camera's frame rate. To this end, we provide an open-source toolkit for real-time image analysis called RETINAS [14]. The toolkit contains the methods described in Sec. 2 implemented in CUDA, C, and Python, all of which can be accessed via a user-friendly Python interface. Ref. [14] contains both the code and extensive documentation on how to use the code and reproduce all of the results shown in this paper. The real-time CC-SN method described in this paper represents a significant advancement in real-time image analysis, bridging a critical gap in current particle tracking algorithms. Its application and further development offer a way to streamline real-time image analysis across various disciplines, offering a technique that is not only effective but also accessible and adaptable.

This material is based upon work supported by the National Science Foundation under Grant Nos. 1806596, 1912083, 1950282, 2011783, and 2227079. BD gratefully acknowledges a Block Gift from the Coherent / II-VI Foundation. This research made use of the resources of the High Performance Computing Center at Idaho National Laboratory, which is supported by the Office of Nuclear Energy of the U.S. Department of Energy and the Nuclear Science User Facilities under Contract No. DE-AC07-05ID14517. The authors occasionally used ChatGPT for language and readability improvements, and assume full responsibility for the contents of the publication.

Data availability: Data underlying the results presented in this paper are available in Ref. [15].

## References

* [1] A. R. Wade and F. W. Fitzke, Opt. Express **3**, 190 (1998).
* [2] B. Dawson _et al._, J. Geophys. Res. Earth Surf. **125**, e2019JF005492 (2020).
* [3] C. W. Lewandowski _et al._, Phys. Rev. Appl. **15**, 014050 (2021).
* [4] Y. Minowa _et al._, arXiv preprint arXiv:2204.05506 (2022).
* [5] A. J. Berglund _et al._, Opt. Express **16**, 14064 (2008).
* [6] M. Guizar-Sicairos _et al._, Opt. Lett. **33**, 156 (2008).
* [7] S. van der Walt _et al._, PeerJ **2**, e453 (2014).
* [8] A. Paintdakhi _et al._, Mol. Microbiol. **99**, 767 (2016).
* [9] J. M. Graving _et al._, eLife **8**, e47994 (2019).
* [10] B. H. Savitzky _et al._, Microsc. Microanal. **27**, 712 (2021).
* [11] Aytongan _et al._, Sci. Reports **12**, 12405 (2022).
* [12] B. R. Slezak _et al._, New J. Phys. **20**, 063028 (2018).
* [13] M. T. Hsu _et al._, J.
Opt. B: Quantum Semiclassical Opt. **6**, 495 (2004).
* [14] L. R. Werneck, RETINAS GitHub (Visited on: October 12, 2023).
* [15] L. R. Werneck _et al._, Zenodo, doi:10.5281/zenodo.8336549 (2023).

Figure 3: Comparison between real-time and offline algorithms at the fifth reference image for experimental data. The calibration is \(1.2\,\mu\mathrm{m}/\mathrm{pixel}\). **Top**: Displacements \(r_{0}\). The apparent beating in the detected vertical motion is due to sensitivity to a combination of multiple degrees of freedom separated in frequency by \(\sim\)\(3\,\mathrm{Hz}\). **Bottom**: residual (in pixels) of the displacements from both methods.
2306.04874
Expanding Scope: Adapting English Adversarial Attacks to Chinese
Recent studies have revealed that NLP predictive models are vulnerable to adversarial attacks. Most existing studies focused on designing attacks to evaluate the robustness of NLP models in the English language alone. Literature has seen an increasing need for NLP solutions for other languages. We, therefore, ask one natural question: whether state-of-the-art (SOTA) attack methods generalize to other languages. This paper investigates how to adapt SOTA adversarial attack algorithms in English to the Chinese language. Our experiments show that attack methods previously applied to English NLP can generate high-quality adversarial examples in Chinese when combined with proper text segmentation and linguistic constraints. In addition, we demonstrate that the generated adversarial examples can achieve high fluency and semantic consistency by focusing on the Chinese language's morphology and phonology, which in turn can be used to improve the adversarial robustness of Chinese NLP models.
Hanyu Liu, Chengyuan Cai, Yanjun Qi
2023-06-08T02:07:49Z
http://arxiv.org/abs/2306.04874v1
# Expanding Scope: Adapting English Adversarial Attacks to Chinese ###### Abstract Recent studies have revealed that NLP predictive models are vulnerable to adversarial attacks. Most existing studies focused on designing attacks to evaluate the robustness of NLP models in the English language alone. Literature has seen an increasing need for NLP solutions for other languages. We, therefore, ask one natural question: whether state-of-the-art (SOTA) attack methods generalize to other languages. This paper investigates how to adapt SOTA adversarial attack algorithms in English to the Chinese language. Our experiments show that attack methods previously applied to English NLP can generate high-quality adversarial examples in Chinese when combined with proper text segmentation and linguistic constraints. In addition, we demonstrate that the generated adversarial examples can achieve high fluency and semantic consistency by focusing on the Chinese language's morphology and phonology, which in turn can be used to improve the adversarial robustness of Chinese NLP models. ## 1 Introduction Adversarial examples are text inputs crafted to fool an NLP system, typically by making small perturbations to a seed input1. Recent literature has developed various adversarial attacks generating text adversarial examples to fool NLP predictive models 2. These attack methods mainly focus on the English language alone, building upon components that use language-specific resources, such as English WordNet (Miller, 1995) or BERT models (Devlin et al., 2018) pretrained on English corpus. Literature has seen a growing need for NLP solutions in other languages; therefore, evaluating NLP solutions' robustness via adversarial examples is crucial. We ask an immediate question: "Can we extend the SOTA adversarial attacks in English to other languages by replacing those English-specific inner components with other languages' resources?". For instance, we can attack a Chinese NLP model by replacing WordNet with HowNet (Dong et al., 2010). However, it is unclear if such a workflow is sufficient for generating high-quality adversarial examples, when a target language differs from English. In this work, we attempt to answer this question by adapting SOTA word substitution attacks designed for English to evaluate Chinese NLP models' adversarial robustness. Moreover, we introduce morphonym and homophone word-substitution attacks that are specific to the Chinese language; they function as a benchmark to the English adapted attack methods. Footnote 1: Most existing work attempts to perturb an input using character-level (Ebrahimi et al., 2017; Gao et al., 2018; Pruthi et al., 2019; Li et al., 2018) or word-level perturbations (Alzantot et al., 2018; Jin et al., 2019; Ren et al., 2019; Zang et al., 2020) to fool a target model’s prediction in a specific way. Footnote 2: We use “natural language adversarial example”, “text adversarial example” and “adversarial attacks” interchangeably. Our experiments on Chinese classification and entailment models show that both the English-adapted and Chinese-specific attack methods can effectively generate adversarial examples with good readability. The attack success rates of homophone-based and HowNet-derived methods are significantly better than the success rate of masked language model-based attacks or morphonym-derived attacks. 
We then combine the four attacks mentioned above into a composite attack that further increases the attack success rate to 96.00% in fooling Chinese classification models and 98.16% in attacking entailment models. In addition, we demonstrate that adversarially trained models decrease the attack success rate significantly, by up to 49.32%.

## 2 Method

Recent NLP literature includes a growing body of work on adversarial examples in NLP, mostly in English (more background details are in Section A). Most SOTA English adversarial attacks search for a perturbation to change a given seed input \(\mathbf{x}\) into an adversarial example \(\mathbf{x}^{\prime}\); \(\mathbf{x}^{\prime}\) fools a predictive NLP model and satisfies certain language constraints, like preserving the same semantic meaning as \(\mathbf{x}\). Essentially each adversarial attack algorithm has four components: a goal function, a set of constraints, a suite of transformations, and a search algorithm Morris et al. (2020). The search algorithm attempts to find a sequence of transformations that results in a successful perturbation. The goal function can be, for example, fooling a target model into predicting the wrong classification label. **Related literature:** While most NLP adversarial attacks have focused on the English language, a few recent methods have been proposed for Chinese. Zhang et al. (2020) proposed a black-box attack that performs a glyph-level transformation on the Chinese characters. Relatedly, Li et al. (2020) and Zhang et al. (2022) added phonetic perturbations to improve the adversarial robustness of Chinese NLP models. All three attacks, however, are only applicable to the Chinese language. Another study Wang et al. (2020) proposed a white-box attack against BERT models Devlin et al. (2018) that performs character-level swaps using gradient optimization. These character-level attacks extend poorly to other languages and tend to generate out-of-context partial substitutions that impact fluency. Later studies, such as Shao and Wang (2022) and Wang et al. (2022), included semantic-based word substitutions but did not consider the significance of constraints and adversarial training. We choose to generalize SOTA word synonym substitution attacks in English to the Chinese language (due to the prevalence of word substitutions), and our attacks consider a range of language constraints.

### Determining Text Segmentation

The first step in crafting a new adversarial attack for the Chinese language is to select the level of transformation. Unlike English, which separates words with spaces, the Chinese language lacks native separators to determine different words in a sentence. A Chinese character may represent a word, while longer words may include multiple adjacent Chinese characters. To avoid out-of-context perturbations that replace partial components of a multi-character word, we use the Chinese segmentation tool provided by Jieba 3 to segment an input text into a list of words. Footnote 3: [https://github.com/fxsjy/jieba](https://github.com/fxsjy/jieba)

### General Overview of Proposed Attacks

The general perturbation strategy we propose is word synonym substitution. Given an input text \(\mathbf{x}\), we use the aforementioned segmentation tool to segment \(\mathbf{x}\) into \([x_{1},x_{2},\ldots,x_{n}]\). Subsequent transformations (synonym substitutions) are then applied to each eligible word 4. This means we obtain perturbed text \(\mathbf{x}^{\prime}\) by replacing some \(x_{i}\) with its synonym \(x^{\prime}_{i}\).
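(As an aside, the segmentation step of the previous subsection is directly reproducible with Jieba; the example sentence below is ours, not taken from the evaluation datasets.)

```python
import jieba

sentence = "这部电影的剧情非常精彩"   # "The plot of this movie is wonderful."
words = jieba.lcut(sentence)          # a possible segmentation: ['这部', '电影', '的', '剧情', '非常', '精彩']
```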
Our attack goal is to make the model mis-predict the \(\mathbf{x}^{\prime}\) (i.e. \(\mathbf{F}(\mathbf{x})\neq\mathbf{F}(\mathbf{x}^{\prime})\)), 5 which is also called an untargeted attack. If one substitution is not enough to change the prediction, we repeat the steps to swap another \(x_{j}\) to generate the perturbed text \(\mathbf{x}^{\prime}\). This process essentially solves the following objective: Footnote 4: In this paper, the phrase “Chinese characters” refers to one unit long token, and “Chinese words” refers to one or more Chinese characters in their semantically correct segmentation that may or may not be one unit long. Footnote 5: Here \(\mathbf{F}:\mathcal{X}\rightarrow\mathcal{Y}\) denotes a predictive Chinese NLP model taking Chinese text as input. \(\mathcal{X}\) denotes the input space and \(\mathcal{Y}\) is the output space. \[\begin{split}\text{Find}&\mathbf{x}^{\prime}=\text{wordSubstitution}(\mathbf{x})\\ \text{s.t.}&\mathbf{F}(\mathbf{x})\neq\mathbf{F}(\mathbf{x}^{\prime})\\ &\mathbf{x}^{\prime}\in\mathcal{X},\;\mathbf{F}(\mathbf{x})=y_{orig}\\ &\wedge C_{i}(\mathbf{x},\mathbf{x}^{\prime};\epsilon_{i}),\;\forall i\in\{1,2,...,C\}\end{split} \tag{1}\] Here \(C_{1},...,C_{n}\) denotes a set of language constraints, including semantic-preservation and grammaticality constraints Morris et al. (2020). \(\epsilon_{i}\) denotes the strength of the constraint \(C_{i}\). The critical component "\(\text{wordSubstitution}(\mathbf{x})\)" in Eq. (1) requires us to figure out which words in \(\mathbf{x}\) to perturb first, and which words next. Essentially this is a combinatorial search issue. The literature includes different search strategies (see Section B.1 for details). We adopt the greedy search algorithm with word importance ranking here. Our attack chooses the order of words by estimating the "importance" of each \(x_{i}\) in \(\mathbf{x}\). The importance of \(x_{i}\) is computed by replacing each \(x_{i}\) with an UNK token and then calculating the change in the model's confidence on the original label. Essentially we sort words \(x_{i}\) in \(\mathbf{x}\) by decreasing importance regarding the following \(score\): \[\begin{split} score(x_{i})=1-Prob(\mathbf{F}(\mathbf{x}^{\prime}))_{y_{orig}}\\ \text{s.t.}&\mathbf{x}^{\prime}=\text{replace}(\mathbf{x},x_{i},\text{UNK})\end{split} \tag{2}\] This measures how much the target model's confidence decreases regarding the original label class \(y_{orig}\) when replacing \(x_{i}\) with the "UNK" token. Then for each selected \(x_{i}\), we find its best \(x^{\prime}_{i}\) to swap with, from a candidate synonym set (Section 2.3).

### Generating Synonyms for Words

Now for a selected word \(x_{i}\) in \(\mathbf{x}\), we propose four different Chinese word transformation strategies to perturb a word \(x_{i}\) into \(x^{\prime}_{i}\). We design the first two transformations by adapting from English attack studies Jin et al. (2019) and Garg and Ramakrishnan (2020). * **Open HowNet.** Open HowNet Qi et al. (2019) is a sememe-based lexical dataset that consists of a sememe set and the corresponding phrases annotated with different sememes. A sememe is defined as the minimum semantic unit in a language, and Open HowNet incorporates relations between sememes to construct a taxonomy for each sememe.
In our study, we use Open HowNet to generate synonyms by searching for the top five words with the highest semantic similarity to an input Chinese word. * **Masked Language Model.** We adapt the masked language model (MLM) method to generate perturbations based on the top-K predictions of an MLM. The XLM-RoBERTa model Conneau et al. (2019) was used as the MLM in our study, as it is able to predict Chinese words consisting of multiple characters, which better preserves the fluency of the attacked sentence in comparison to other prevalent MLMs (MacBERT, etc.) that predict single characters alone. The Chinese language, along with other East Asian languages, differs from English, especially in phonology and morphology.6 Using these intuitions, we design two special word transformations considering homophones and morphonyms of the Chinese language. Footnote 6: Each Chinese character represents a monosyllabic word with unique combinations of pictographs, while English words consist of alphabetic letters. Though each Chinese character’s morphology combination is unique, many characters with similar morphology structures can be substituted in an adversarial attack without impacting the readability of the attacked sentence. In addition, because there exist many homophones in modern Chinese, the same spoken syllable may map to one of many characters with different meanings. The phonology of Chinese characters is commonly transcribed into the Latin script using Pinyin. Typing the wrong character of a word in Pinyin despite having the same pronunciation is a common mistake in Chinese writing. Thus, replacing Chinese characters with the same pronunciation may serve as an additional attack method to test the adversarial robustness of NLP models while preserving the semantics for human readers. * **Homophone transformation.** The phonology of Chinese characters can be expressed by the romanization system Pinyin. To replace a Chinese character with its homophone, top-k words are randomly selected from a list of characters with the same Latin script. * **Morphonym transformation.** Similarly, to replace a character with its morphonyms, top-k words are randomly selected from a list of characters that share partial pictographs with the target character, as it is a common mistake for Chinese writers to mistake one pictograph for another. * **Composite transformation.** We also design a composite transformation that consists of the four transformation methods listed above. For each target word, Open HowNet, Masked Language Model, Homophone, and Morphonym perturbations are separately generated to replace a candidate word from the input text. If none of the substitutions changes the target NLP model prediction, the attack then moves on to replace the next important word in the input sentence. In addition, for each perturbation, we want to ensure that the generated \(\mathbf{x}^{\prime}\) preserves the semantic consistency and textual fluency of \(\mathbf{x}\). We use three constraints, namely (1) a constraint to allow only non-stop-word modification, (2) a constraint to allow only no-repeat modification, and (3) a multilingual universal sentence encoder (MUSE) similarity constraint that filters out undesirable replacements Cer et al. (2018) 7. These constraints can easily adapt to other languages. A detailed description of each constraint is in Section B.2. The pseudo-code of our proposed attacks is in Algorithm 1. Footnote 7: We require that the MUSE similarity is above \(0.9\).
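Putting the word importance ranking of Eq. (2) and the transformations above together, the greedy search can be sketched as follows. This is an illustrative outline, not the paper's Algorithm 1; `model` (with `prob`/`predict`), `get_candidates`, and `constraints` are placeholder callables standing in for the victim model, any of the four transformations, and the three constraints.

```python
def word_importance(model, words, label, unk="[UNK]"):
    """Rank words by the confidence drop on the original label when replaced by UNK (Eq. (2))."""
    scores = []
    for i in range(len(words)):
        perturbed = words[:i] + [unk] + words[i + 1:]
        scores.append(1.0 - model.prob("".join(perturbed), label))
    return sorted(range(len(words)), key=lambda i: scores[i], reverse=True)

def greedy_attack(model, words, label, get_candidates, constraints):
    """Greedily substitute words, most important first, until the predicted label flips."""
    for i in word_importance(model, words, label):
        best, best_prob = None, model.prob("".join(words), label)
        for cand in get_candidates(words[i]):            # HowNet / MLM / homophone / morphonym
            attacked = words[:i] + [cand] + words[i + 1:]
            if not all(check(words, attacked) for check in constraints):
                continue                                 # e.g. stop-word, no-repeat, MUSE filters
            if model.predict("".join(attacked)) != label:
                return attacked                          # successful adversarial example
            p = model.prob("".join(attacked), label)
            if p < best_prob:
                best, best_prob = attacked, p
        if best is not None:
            words = best                                 # keep the most damaging substitution
    return None                                          # attack failed
```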
In summary, each word transformation strategy gets combined with the greedy word ranking algorithm (Section 2.2) plus the language constraints (see above), making a unique adversarial attack against Chinese NLP.

## 3 Results and Evaluation

**Victim Models:** We chose to perform attacks on two Chinese NLP models: one for sentiment classification and one for entailment. BERT and RoBERTa are selected as our victim models due to their reported robustness and SOTA performance. Details of the two models and the two related Chinese datasets are presented in Section C.2. **Metrics:** For each attack method, we recorded the attack success rate and perturbation percentage, skipping samples that a target model fails to predict correctly before any perturbation. **Ablation:** To measure how the MUSE constraint impacts the quality of Chinese adversarial examples, we add baseline attacks that use only the stop word constraint and repeat constraints for an ablation study. Figure 1 connects attack success rate and fluency in one figure. Figure 2 and Figure 3 show a few Chinese adversarial examples generated by our attacks. More results can be found in Section C.3. **Results on Attack Success:** Figure 1, Table 1 and Table 2 present the quantitative results of our attacks. Figure 1 (left) is about our results on the Chinese sentiment classification model. Among all non-composite-transformation based attacks, we can see that Open HowNet substitution achieves the highest success rate, while morphonym substitution has the lowest success rate. From Table 1, we can also see that having the MUSE constraint dramatically decreases the attack success rate and perturbation percentage for all attack methods, especially for Open HowNet and homophone substitution based attacks. This makes sense, as the MUSE constraint is designed to limit the amount of perturbation the attacks can do in order to improve the quality of generated adversarial examples. In addition, when we compare the success rate and perturbation percentage of the composite attack versus other individual attack methods, we see that the composite attack achieves an 87.50% attack success rate without increasing the perturbation percentage. We can make similar conclusions from Figure 1 (right) and Table 2. **Human Evaluation:** For each attack method, we randomly sampled 30 adversarial examples produced from the same set of input texts for each attack (a total of five). We asked four volunteers to score the semantic consistency and fluency of the examples. Semantic consistency refers to how well the ground truth label of the adversarial example matches the original label of the input, and fluency refers to the cohesiveness of the sentence. Both metrics are scored on a scale of 1 to 5, with a score of 5 being the most consistent or fluent. Table 3 and Table 4 respectively summarize the human evaluations of adversarial examples generated by fooling the classification and entailment models. For classification, Table 3 (plus Figure 1) shows that homophone substitution outperforms other attack methods, as its examples have both the highest consistency and fluency scores. On the other hand, Open HowNet substitution reports the lowest quality scores, indicating its generated adversarial examples either include out-of-context substitutions or disrupt the semantic cohesiveness. Table 4, plus Figure 1 (right), for entailment tasks shows that the homophone attack still achieves the highest consistency score, while MLM achieves the highest fluency score.
Besides, we conjecture that the low consistency and fluency scores of the composite attack method may be rooted in its inclusion of adversarial examples generated by Open HowNet. **Adversarial training and more result discussions:** Furthermore, we conduct adversarial training (AT) (see details in Section C.1). Table 5 shows the positive results of AT, which improves robustness against all five proposed attacks for both models. Figure 1: The performance of the composite attack method with the STM-RM-MUSE constraint regarding the attack success rate and human-evaluated fluency on the BERT classification model (left), and the RoBERTa entailment model (right). For both classification and entailment tasks, the composite transformation achieves the highest attack success rate without a significant trade-off in fluency, while the morphonym transformation has the lowest attack success rate.
2302.08561
Topological Signal Processing over Weighted Simplicial Complexes
Weighing the topological domain over which data can be represented and analysed is a key strategy in many signal processing and machine learning applications, enabling the extraction and exploitation of meaningful data features and their (higher order) relationships. Our goal in this paper is to present topological signal processing tools for weighted simplicial complexes. Specifically, relying on the weighted Hodge Laplacian theory, we propose efficient strategies to jointly learn the weights of the complex and the filters for the solenoidal, irrotational and harmonic components of the signals defined over the complex. We numerically asses the effectiveness of the proposed procedures.
Claudio Battiloro, Stefania Sardellitti, Sergio Barbarossa, Paolo Di Lorenzo
2023-02-16T20:12:01Z
http://arxiv.org/abs/2302.08561v1
# Topological Signal Processing over Weighted Simplicial Complexes ###### Abstract Weighing the topological domain over which data can be represented and analysed is a key strategy in many signal processing and machine learning applications, enabling the extraction and exploitation of meaningful data features and their (higher order) relationships. Our goal in this paper is to present topological signal processing tools for weighted simplicial complexes. Specifically, relying on the weighted Hodge Laplacian theory, we propose efficient strategies to jointly learn the weights of the complex and the filters for the solenoidal, irrotational and harmonic components of the signals defined over the complex. We numerically asses the effectiveness of the proposed procedures. Claudio Battiloro, Stefania Sardellitti, Sergio Barbarossa, Paolo Di Lorenzo DIET Department, Sapienza University of Rome, Via Eudossiana 18, 00184, Rome, Italy E-mail: {claudio.battiloro, stefania.sardellitti, sergio.barbarossa, paolo.dilorenzo}@uniroma1.it Topological signal processing, weighted simplicial complexes, algebraic topology, metric learning, flow estimation. ## 1 Introduction In the last years, there has been a growing interest in the processing of signals defined over topological spaces [1, 2], i.e. over domains composed of a set of points along with a set of neighborhood relations among them, not necessarily metric. A renowned example are graph signals, usually processed with tools from the Graph Signal Processing (GSP) framework [3, 4]. However, graphs encode only pairwise relationships between data; on the contrary, many real-world phenomena involve multi-way relationships as, e.g., in biological or social networks. Recently, the Topological Signal Processing (TSP) over Simplicial Complexes framework has been proposed in [5, 6], and it represents a proper generalization of the GSP framework for the representation and analysis of signals defined over simplicial complexes; in [7], the authors presented a tutorial on the emerging field of signal processing over hypergraphs and simplicial complexes. A simplicial complex is a topological space composed by a set of elements \(\mathcal{V}\) and a set \(\mathcal{S}\) containing subsets of various cardinality of the elements of \(\mathcal{V}\) satisfying the inclusion property; the rich algebraic structure of simplicial complexes make them particularly suited to capture multiway relations among data. Simplicial-based processing methods have been applied in many fields, such as statistical ranking [8], tumor progression analysis [9], and brain [10] and biological [11] networks. 
For this reason, there has also been a rising interest in the development of (deep) neural network architectures able to handle data defined on simplicial complexes [12]-[20]. Two \(k\)-simplices are said to be _adjacent_ in \(\mathcal{X}\) if they share a common face of order \(k-1\). We usually focus on second order simplicial complexes, denoted with \(\mathcal{X}=\{\mathcal{V},\mathcal{E},\mathcal{T}\}\), where \(\mathcal{V}\), \(\mathcal{E}\), \(\mathcal{T}\) are the sets of \(0\), \(1\) and \(2\)-simplices, i.e. vertices, edges and triangles, respectively.
**Incidence Matrices.** Let us denote by \(n_{k}\) the number of simplices of order \(k\) in the complex. Given an orientation of all simplices (see [21] for details), the structure of a simplicial complex \(\mathcal{X}\) is described by the set of its incidence matrices \({\bf B}_{k}\in\mathbb{R}^{n_{k-1}\times n_{k}}\), \(k=1,\ldots,K\), with entries \(B_{k}(i,j)=0\) if \(\sigma_{i}^{k-1}\) is not a face of \(\sigma_{j}^{k}\), and \(B_{k}(i,j)=1\) (or \(-1\)) if \(\sigma_{i}^{k-1}\) is a face of \(\sigma_{j}^{k}\) and its orientation is coherent (or not) with the orientation of \(\sigma_{j}^{k}\). We denote the set of \(k\)-simplices in \(\mathcal{X}\) as \(\mathcal{D}_{k}:=\{\sigma_{i}^{k}:\sigma_{i}^{k}\in\mathcal{X}\}\), with \(|\mathcal{D}_{k}|=n_{k}\) and, obviously, \(\mathcal{D}_{k}\subset\mathcal{X}\). **Simplicial Signals.** We are interested in processing signals defined over a simplicial complex. A \(k\)-simplicial signal \({\bf x}_{k}\) is usually defined as a collection of mappings from the set of all \(k\)-simplices contained in the complex to the real numbers: \[{\bf x}_{k}=[x_{k}(\sigma_{1}^{k}),\ldots,x_{k}(\sigma_{i}^{k}),\ldots,x_{k}(\sigma_{n_{k}}^{k})]\in\mathbb{R}^{n_{k}}, \tag{1}\] where \(x_{k}:\mathcal{D}_{k}\rightarrow\mathbb{R}\). Although the definition in (1) is formally correct, it can be reformulated using tools from algebraic topology, which we explicitly need for defining WSCs. In particular, we need the notions of chains, cochains and metric tensors. **Weighted Simplicial Complexes.** A \(k\)_-chain_ \(\tau^{k}\) is a linear combination of \(k\)-simplices [22]: \[\tau^{k}=\sum_{j=1}^{n_{k}}c_{j}\sigma_{j}^{k},\quad c_{j}\in\mathbb{R}. \tag{2}\] The space of all the \(k\)-chains, denoted with \(\mathcal{C}_{k}\), is a real vector space with a basis given by the collection of \(k\)-simplices, which we refer to as _basic_ \(k\)-chains. Being a finite dimensional vector space, the chain space \(\mathcal{C}_{k}\) can be equipped with an inner product completely determined by its basic \(k\)-chains: \[g_{ij}^{k}:=\langle\sigma_{i}^{k},\sigma_{j}^{k}\rangle_{\mathcal{C}_{k}}, \tag{3}\] where \(g_{ij}^{k}\in\mathbb{R}\), \(g_{ij}^{k}=g_{ji}^{k}\), \(i,j=1,\ldots,n_{k}\). In this way, given two chains \(\tau^{k}\) and \(\gamma^{k}\) with coefficients \(\{c_{j}\}_{j}\) and \(\{a_{j}\}_{j}\), respectively, we obtain: \[\langle\tau^{k},\gamma^{k}\rangle_{\mathcal{C}_{k}}=\sum_{i}\sum_{j}c_{i}a_{j}g_{ij}^{k}. \tag{4}\] In this work, we assume orthogonality and positiveness, meaning \(g_{ij}^{k}=0\), \(\forall\,i\neq j\), and \(g_{ii}^{k}>0\), \(\forall\,i\). We refer to the set of all inner products in (3), for \(i,j=1,\ldots,n_{k}\), as the _metric tensor_ of order \(k\). The dual space \(\mathcal{C}^{k}\) of \(\mathcal{C}_{k}\) is the space of all linear functionals \(\tau^{*k}\) from \(\mathcal{C}_{k}\) to \(\mathbb{R}\): we refer to these linear functionals as \(k\)_-cochains_. Due to the canonical isomorphism, the metric tensor induces an inner product also on the dual space; in particular, given two cochains \(\tau^{*k}\) and \(\gamma^{*k}\) with coefficients \(\{c^{j}\}_{j}\) and \(\{a^{j}\}_{j}\), respectively, we have: \[\langle\tau^{*k},\gamma^{*k}\rangle_{\mathcal{C}^{k}}=\sum_{i}\sum_{j}c^{i}a^{j}w_{ij}^{k}, \tag{5}\] where \(w_{ij}^{k}=1/g_{ij}^{k}\).
It can be proven that the cochain space \(\mathcal{C}^{k}\) is naturally isomorphic to \(\mathbb{R}^{n_{k}}\)[22], so that we can identify a cochain \(\tau^{*,k}\) with a vector \({\bf x}_{k}=[c^{1},\ldots,c^{n_{k}}]\in\mathbb{R}^{n_{k}}\) containing the coefficients of its corresponding chain. At this point, it is sufficient to set \({\bf x}_{k}(i)=x_{k}(\sigma_{i}^{k})=c^{i}\) to re-obtain the definition in (1); therefore, we can state that simplicial signals and cochains are the same object (up to an isomorphism). As a direct consequence of the aforementioned results, we can see the metric tensor as a diagonal matrix \({\bf G}_{k}\) with positive entries given by \(g_{ii}^{k}\) for \(1\leq i\leq n_{k}\). Therefore, given two signals \({\bf x}_{1}^{k},{\bf x}_{2}^{k}\) defined over \(k\)-simplices, their inner product is defined as: \[\langle{\bf x}_{1}^{k},{\bf x}_{2}^{k}\rangle={\bf x}_{1}^{k\,T}{\bf G}_{k}^{-1}{\bf x}_{2}^{k}=\sum_{i=1}^{n_{k}}w_{ii}^{k}{\bf x}_{1}^{k}(i){\bf x}_{2}^{k}(i). \tag{6}\] We define a _weighted simplicial complex_ as a simplicial complex whose chain spaces are equipped with non-trivial metric tensors (non-identity matrices). **Hodge decomposition.** To find an algebraic representation of the weighted simplicial complex that is able to capture its topological and metric structures, we first need to introduce the boundary and coboundary operators. The \(k\)-boundary operator \(\boldsymbol{\partial}_{k}:\mathcal{C}_{k}\rightarrow\mathcal{C}_{k-1}\) is a linear operator mapping \(k\)-chains to \((k-1)\)-chains, and we denote its dual with \(\boldsymbol{\delta}_{k}:=\boldsymbol{\partial}_{k}^{*}\). The dual \(\boldsymbol{\delta}_{k}\) is called the \(k\)-coboundary operator, and it maps \((k-1)\)-cochains to \(k\)-cochains. It can be proven that [22]: \[\boldsymbol{\delta}_{k}={\bf B}_{k}^{T}. \tag{7}\] We can derive an expression for the adjoint (of the dual) operator \(\boldsymbol{\delta}_{k}^{\prime}\) as a function of the metric tensor, observing that it holds that: \[\langle{\bf x}^{k},\boldsymbol{\delta}_{k}{\bf x}^{k-1}\rangle=\langle\boldsymbol{\delta}_{k}^{\prime}{\bf x}^{k},{\bf x}^{k-1}\rangle \tag{8}\] for every pair of signals \({\bf x}^{k-1}\in\mathbb{R}^{n_{k-1}}\), \({\bf x}^{k}\in\mathbb{R}^{n_{k}}\). Then, combining (7) and (8), we can easily write \(\boldsymbol{\delta}_{k}^{\prime}\) as: \[\boldsymbol{\delta}_{k}^{\prime}={\bf G}_{k-1}{\bf B}_{k}{\bf G}_{k}^{-1}. \tag{9}\] The topological structure of a (weighted or not) \(K\)-simplicial complex is fully described by the higher order Hodge Laplacian matrices of order \(k=1,\ldots,K\), defined as: \[{\bf L}_{k}=\boldsymbol{\delta}_{k}\boldsymbol{\delta}_{k}^{\prime}+\boldsymbol{\delta}_{k+1}^{\prime}\boldsymbol{\delta}_{k+1}. \tag{10}\] Specifically, using the expression of the adjoint coboundaries in (9), we easily get: \[\begin{split}{\bf L}_{0}&={\bf G}_{0}{\bf B}_{1}{\bf G}_{1}^{-1}{\bf B}_{1}^{T},\\ {\bf L}_{k}&={\bf B}_{k}^{T}{\bf G}_{k-1}{\bf B}_{k}{\bf G}_{k}^{-1}+{\bf G}_{k}{\bf B}_{k+1}{\bf G}_{k+1}^{-1}{\bf B}_{k+1}^{T},\quad k=1,\ldots,K-1,\\ {\bf L}_{K}&={\bf B}_{K}^{T}{\bf G}_{K-1}{\bf B}_{K}{\bf G}_{K}^{-1}.\end{split} \tag{11}\] Then, for instance, the first-order Laplacian for a simplicial complex of order \(2\) can be written as: \[{\bf L}_{1}={\bf B}_{1}^{T}{\bf G}_{0}{\bf B}_{1}{\bf G}_{1}^{-1}+{\bf G}_{1}{\bf B}_{2}{\bf G}_{2}^{-1}{\bf B}_{2}^{T}. \tag{12}\]
Note that defining the lower and upper Laplacians as \({\bf L}_{k,d}={\bf B}_{k}^{T}{\bf G}_{k-1}{\bf B}_{k}{\bf G}_{k}^{-1}\) and \({\bf L}_{k,u}={\bf G}_{k}{\bf B}_{k+1}{\bf G}_{k+1}^{-1}{\bf B}_{k+1}^{T}\), respectively, it holds that \({\bf L}_{k,d}{\bf L}_{k,u}={\bf L}_{k,u}{\bf L}_{k,d}=\mathbf{0}\), since \({\bf B}_{k}{\bf B}_{k+1}=\mathbf{0}\). This orthogonality underlies the Hodge decomposition, which splits any edge signal into irrotational, solenoidal and harmonic components, and it is the basis of the signal model in (15) below.

## 3 Joint Learning of Edge Flows and Weights

Let us suppose we observe an edge flow signal affected by AWG noise, defined as \(\widetilde{\mathbf{x}}=\mathbf{x}+\mathbf{n}\), where \(\mathbf{x}\) denotes the clean flow, whereas \(\mathbf{n}\overset{i.i.d.}{\sim}\mathcal{N}(0,\sigma^{2})\) denotes the noise. In this section, we formulate a denoising problem as a constrained problem, rooted in the Hodge decomposition, and we propose an efficient strategy for jointly learning the weights (metric tensor) \(\mathbf{G}_{2}\) associated with the \(2\)-order simplices (triangles) and the flow \(\mathbf{x}\). In this first study, we consider the weighing of the nodes \(\mathbf{G}_{0}\) and of the edges \(\mathbf{G}_{1}\) as given. Based on the decomposition in (14), we can model the observed flow as: \[\mathbf{x}=\mathbf{B}_{1}^{T}\mathbf{x}^{0}+\mathbf{G}_{1}\mathbf{B}_{2}\mathbf{G}_{2}^{-1}\mathbf{x}^{2}+\mathbf{x}_{\text{h}},\qquad\widetilde{\mathbf{x}}=\mathbf{x}+\mathbf{n}. \tag{15}\] We formulate the denoising problem as follows: \[\begin{split}\mathcal{Q})\quad(\widehat{\mathbf{x}}^{0},\widehat{\mathbf{x}}^{2},\widehat{\mathbf{x}}_{\text{h}},\widehat{\mathbf{G}}_{2})=&\operatorname*{arg\,min}_{\mathbf{x}^{0},\mathbf{x}^{2},\mathbf{x}_{\text{h}},\mathbf{G}_{2}}\|\mathbf{B}_{1}^{T}\mathbf{x}^{0}+\mathbf{G}_{1}\mathbf{B}_{2}\mathbf{G}_{2}^{-1}\mathbf{x}^{2}+\mathbf{x}_{\text{h}}-\widetilde{\mathbf{x}}\|^{2}\\ \text{s.t.}\qquad&\text{a})\,\mathbf{L}\mathbf{x}_{\text{h}}=\mathbf{0},\\ &\text{b})\,[\mathbf{G}_{2}^{-1}]_{ii}>0,\;[\mathbf{G}_{2}^{-1}]_{ij}=0,\;\forall\,i\neq j\end{split} \tag{16}\] where the constraint a) forces \(\widehat{\mathbf{x}}_{\text{h}}\) to belong to the kernel of the Laplacian (the harmonic subspace), while the constraints b) impose a diagonal structure with positive entries on \(\widehat{\mathbf{G}}_{2}\). Problem \(\mathcal{Q}\) is not jointly convex, but it is block multi-convex, i.e. convex with respect to each of the optimization variables while holding all others fixed. For this reason, we propose an efficient iterative alternating minimization algorithm to find local optimal solutions. Denoting with \(t\) the iteration index, we initialize our iterative algorithm with a random feasible point \((\widehat{\mathbf{x}}^{0}[t],\widehat{\mathbf{x}}^{2}[t],\widehat{\mathbf{x}}_{\text{h}}[t],\widehat{\mathbf{G}}_{2}[t])\) at time \(t=0\).
Then, defining the point \(\hat{\mathbf{z}}[t]:=(\widehat{\mathbf{x}}^{0}[t],\widehat{\mathbf{x}}^{2}[t ],\widehat{\mathbf{x}}_{\text{h}}[t])\), the proposed alternating optimization method consists in solving at each iteration \(t\) the two following convex problems: \[\mathcal{Q}_{1})\ \hat{\mathbf{z}}[t]= \operatorname*{arg\,min}_{\mathbf{z}=(\mathbf{x}^{0},\mathbf{x}^ {2},\mathbf{x}_{\text{h}})}\|\mathbf{B}_{1}^{T}\mathbf{x}^{0}+\mathbf{G}_{1} \mathbf{B}_{2}\widehat{\mathbf{G}}_{2}^{-1}[t-1]\mathbf{x}^{2}+\mathbf{x}_{ \text{h}}-\widetilde{\mathbf{x}}\|^{2}\] \[\text{s.t.}\qquad\mathbf{L}[t-1]\mathbf{x}_{\text{h}}=\mathbf{0}, \tag{17}\] where \(\mathbf{L}[t-1]=\mathbf{L}_{d}+\mathbf{G}_{1}\mathbf{B}_{2}\widehat{\mathbf{ G}}_{2}^{-1}[t-1]\mathbf{B}_{2}^{T}\), and \[\mathcal{Q}_{2})\ \widehat{\mathbf{G}}_{2}[t]= \operatorname*{arg\,min}_{\mathbf{G}_{2}}\|\mathbf{B}_{1}^{T} \widehat{\mathbf{x}}^{0}[t]+\mathbf{G}_{1}\mathbf{B}_{2}\mathbf{G}_{2}^{-1} \widehat{\mathbf{x}}^{2}[t]+\widehat{\mathbf{x}}_{\text{h}}[t]-\widetilde{ \mathbf{x}}\|^{2}\] \[\text{s.t.}\quad(\mathbf{L}_{d}+\mathbf{G}_{1}\mathbf{B}_{2} \mathbf{G}_{2}^{-1}\mathbf{B}_{2}^{T})\widehat{\mathbf{x}}_{\text{h}}[t]= \mathbf{0},\] \[\quad[\mathbf{G}_{2}^{-1}]_{ii}>0,\;[\mathbf{G}_{2}^{-1}]_{ij}=0, \forall i\neq j. \tag{18}\] Problems \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are convex, and can be efficiently solved with any numerical solver. Furthermore, using similar derivations as in [5], we can easily prove that problem \(\mathcal{Q}_{1}\) admits the following closed form solution: \[\widehat{\mathbf{x}}^{0}[t] =\mathbf{L}_{0}^{\dagger}\mathbf{B}_{1}\widetilde{\mathbf{x}}, \forall\,t \tag{19}\] \[\widehat{\mathbf{x}}^{2}[t] =(\widehat{\mathbf{G}}_{2}^{-1}[t-1]\mathbf{B}_{2}^{T}\mathbf{G} _{1}\mathbf{B}_{2}\widehat{\mathbf{G}}_{2}^{-1}[t-1])^{\dagger}\widehat{ \mathbf{G}}_{2}[t-1]^{-1}\mathbf{B}_{2}^{T}\widetilde{\mathbf{x}},\] (20) \[\widehat{\mathbf{x}}_{\text{h}}[t] =\widetilde{\mathbf{x}}-\mathbf{B}_{1}^{T}\widehat{\mathbf{x}}^{0 }[t]-\mathbf{G}_{1}\mathbf{B}_{2}\mathbf{G}_{2}^{-1}[t-1]\widehat{\mathbf{x}} ^{2}[t], \tag{21}\] where \(\mathbf{L}_{0}^{\dagger}\) is the Moore-Penrose pseudoinverse. Problem \(\mathcal{Q}\) can be also regularized (with convex penalties), leading to the same procedure but with the additional regularization terms in the objective functions. The proposed procedure is listed in Algorithm 1. ``` 1:\(\widetilde{\mathbf{x}}\in\mathbb{R}^{E}\). Noisy edge flow signal. 2:\(\mathbf{B}_{1}\in\mathbb{R}^{N\times E}\). Nodes to edges incidence matrix \(\mathbf{B}_{2}\in\mathbb{R}^{E\times T}\). Edges to triangles incidence matrix \(\mathbf{G}_{1}\in\mathbb{R}^{E\times E}\). 
Edges weights (metric tensor) \(\mathbf{\widehat{x}}^{0}[0],\widehat{\mathbf{x}}^{2}[0],\widehat{\mathbf{x}}_{\text{h}}[0],\widehat{\mathbf{G}}_{2}[0]\): Estimates initializations \(N_{t}\): number of iterations (can be replaced by a stopping criterion) 3:Outputs: \(\widehat{\mathbf{x}}^{0},\widehat{\mathbf{x}}^{2},\widehat{\mathbf{x}}_{\text{h}},\widehat{\mathbf{G}}_{2}\): Learned signals and weights (metric tensor) 4:function Edge flow estimation (Inputs) 5:for \(t\in[1,N_{t}]\) do 6:\(\widehat{\mathbf{x}}^{0}[t],\widehat{\mathbf{x}}^{2}[t],\widehat{\mathbf{x}}_{\text{h}}[t]\): Compute (19), (20), and (21) 7:\(\widehat{\mathbf{G}}_{2}[t]\): Numerically solve \(\mathcal{Q}_{2}\) 8:return: 9:\(\widehat{\mathbf{x}}^{0}=\widehat{\mathbf{x}}^{0}[N_{t}]\) 10:\(\widehat{\mathbf{x}}^{2}=\widehat{\mathbf{x}}^{2}[N_{t}]\) 11:\(\widehat{\mathbf{x}}_{\text{h}}=\widehat{\mathbf{x}}_{\text{h}}[N_{t}]\) 12:\(\widehat{\mathbf{G}}_{2}=\widehat{\mathbf{G}}_{2}[N_{t}]\) ``` **Algorithm 1** EDGE FLOW ESTIMATION

To numerically test the effectiveness of the proposed edge flow estimation strategy, we consider a random simplicial complex with \(N=40\) nodes, \(E=137\) edges, \(T=96\) triangles, and with the metric tensors \(\mathbf{G}_{1}\) and \(\mathbf{G}_{2}\) being random positive diagonal matrices. We generate random noisy edge signals \(\widetilde{\mathbf{x}}\) according to the model in (15) with \(\mathbf{x}^{0},\mathbf{x}^{2},\mathbf{x}_{\text{h}}\) being random sparse vectors. Then, we apply the proposed alternating minimization scheme to estimate \(\mathbf{x}\) from \(\widetilde{\mathbf{x}}\). For this experiment, we also regularize Problem \(\mathcal{Q}\) with an \(l_{1}\) penalty on the signal components. In Figure 1, we show the correlation coefficient \(\rho=\frac{|\widehat{\mathbf{x}}^{T}\mathbf{x}|^{2}}{\|\widehat{\mathbf{x}}\|^{2}\,\|\mathbf{x}\|^{2}}\) (to neglect the effect of multiplicative constants) versus the noise standard deviation \(\sigma\), comparing our method against the estimation of the signal components alone assuming a flat metric (unitary weights) \(\mathbf{G}_{2}=\mathbf{I}\); the results are averaged over \(20\) signal and noise realizations. As the reader can notice, the joint learning of the metric tensor and the signal components shows a significant performance gain. We plan to extend this work by designing more complex procedures involving the learning also of the weights \(\mathbf{G}_{0}\) and \(\mathbf{G}_{1}\), as well as testing on real data. Figure 1: Correlation coefficient vs the noise standard deviation.

## 4 Learning the metric tensor from data

In this section we propose an efficient strategy to learn the metric tensor \(\mathbf{G}_{2}\) from a set of observed edge signals. Assuming that the edge flows are smooth over the solenoidal subspace, so that their circulation along the triangles is minimum, we formulate the learning of the metric tensor as a total variation minimization problem. We start from the observation of \(M\) snapshots of edge signals \(\mathbf{x}(m)\), \(m=1,\ldots,M\). The squared norm of the circulation of each signal along the triangles of the complex can be written as \[\begin{split}\text{TV}_{\text{sol}}(\mathbf{x}(m))=&\parallel\mathbf{G}_{2}^{-1/2}\mathbf{B}_{2}^{T}\mathbf{x}(m)\parallel^{2}\\ =&\mathbf{x}_{\text{sol}}(m)^{T}\mathbf{B}_{2}\mathbf{G}_{2}^{-1}\mathbf{B}_{2}^{T}\mathbf{x}_{\text{sol}}(m)\end{split} \tag{22}\] where in the last equality we exploited the orthogonality among the irrotational, harmonic and solenoidal subspaces.
Note that since the metric tensor is a diagonal matrix, equation (22) can be expressed in the form: \[\text{TV}_{\text{sol}}(\mathbf{x}(m))=\sum_{i=1}^{T}w_{2}(i)\,\mathbf{x}_{\text{sol}}(m)^{T}\mathbf{b}_{i}\mathbf{b}_{i}^{T}\mathbf{x}_{\text{sol}}(m) \tag{23}\] where \(w_{2}(i)\) is the \(i\)-th positive diagonal entry of the metric tensor \(\mathbf{G}_{2}^{-1}\), \(T\) is the number of (filled) triangles of the complex and \(\mathbf{b}_{i}\) is the \(i\)-th column of \(\mathbf{B}_{2}\). Then, our goal is to find the optimal weight vector \(\boldsymbol{w}_{2}=[w_{2}(1),\ldots,w_{2}(T)]^{T}\) minimizing the total variation of the observed solenoidal signals. Therefore, denoting with \(\mathbf{X}_{s}=[\mathbf{x}_{\text{sol}}(1),\ldots,\mathbf{x}_{\text{sol}}(M)]\) the \(E\times M\) matrix whose columns are the observed signals, the metric learning problem can be formulated as \[\begin{split}\underset{\mathbf{w}_{2}\in\mathbb{R}^{T}}{\text{min}}&\sum_{i=1}^{T}w_{2}^{2}(i)\,\text{tr}(\mathbf{X}_{s}^{T}\mathbf{b}_{i}\mathbf{b}_{i}^{T}\mathbf{X}_{s})\hskip 42.679134pt(\mathcal{P})\\ \text{s.t.}&\text{a})\sum_{i=1}^{T}w_{2}(i)=1,\\ &\text{b})\;w_{2}(i)>0,\quad\forall\,i\end{split} \tag{24}\] where we consider a quadratic objective function instead of the linear one, which in our problem would lead to a trivial solution. The constraint a) forces the sum of the positive weights to be a constant value, while the constraints in b) ensure positive variables \(w_{2}(i)\). To simplify our notation, let us introduce the positive coefficients \(a_{i}=\text{tr}(\mathbf{X}_{s}^{T}\mathbf{b}_{i}\mathbf{b}_{i}^{T}\mathbf{X}_{s})\). The optimization problem \(\mathcal{P}\) admits a closed form solution as stated in the following theorem. **Theorem 1**: _Given any set of positive coefficients \(\{a_{i}\}_{i=1}^{T}\), the convex optimization problem \(\mathcal{P}\) admits the closed form solution:_ \[w_{2}(i)=\frac{\lambda^{\star}}{2a_{i}} \tag{25}\] _with \(\lambda^{\star}=1/(\sum_{i=1}^{T}1/(2a_{i}))\)._ **Proof.** First let us observe that the objective function in problem \(\mathcal{P}\) is a convex function, being a linear combination with positive coefficients of quadratic terms. Then, problem \(\mathcal{P}\) is a convex optimization problem since the linear constraint a) and the constraints in b) define a feasible convex set. Therefore, any optimal solution \(\boldsymbol{w}_{2}^{\star}\) satisfies the KKT conditions of \(\mathcal{P}\), which are necessary and sufficient conditions for optimality (note that Slater's constraint qualification is satisfied). Then, denoting with \(\mathcal{L}(\boldsymbol{w},\lambda,\boldsymbol{\mu})\) the Lagrangian function of \(\mathcal{P}\), the KKT conditions are \[\begin{split}&\text{(i)}\;\frac{\partial\mathcal{L}}{\partial w_{2}(i)}=2w_{2}(i)a_{i}-\lambda-\mu_{i}=0,\quad\forall\,i\\ &\text{(ii)}\;\mu_{i}w_{2}(i)=0,\;\;\mu_{i}\geq 0,\;w_{2}(i)>0,\;\;\forall\;i\\ &\text{(iii)}\;\lambda(\sum_{i}w_{2}(i)-1)=0,\;\;\lambda\in\mathbb{R},\;\;\sum_{i=1}^{T}w_{2}(i)=1.\end{split} \tag{26}\] Since \(w_{2}(i)>0\), from (ii) we get \(\mu_{i}=0\), so that condition (i) becomes \(w_{2}(i)=\frac{\lambda}{2a_{i}}\). Replacing these variables in the linear constraint, one gets \(\lambda^{\star}=1/(\sum_{i}1/(2a_{i}))\), so that the optimal solutions are \(w_{2}(i)=\frac{\lambda^{\star}}{2a_{i}}\).
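As a quick numerical sanity check of Theorem 1, the closed-form weights can be computed in a few lines; the matrices below are random stand-ins for \(\mathbf{B}_{2}\) and \(\mathbf{X}_{s}\), used only to verify that the resulting point is feasible.

```python
import numpy as np

rng = np.random.default_rng(0)
E, T, M = 137, 96, 50                                  # edges, triangles, snapshots (illustrative sizes)
B2 = rng.integers(-1, 2, size=(E, T)).astype(float)    # stand-in for the incidence matrix B_2
Xs = rng.standard_normal((E, M))                       # stand-in for the snapshot matrix X_s

# a_i = tr(X_s^T b_i b_i^T X_s) = || b_i^T X_s ||^2, one coefficient per triangle.
a = np.sum((B2.T @ Xs) ** 2, axis=1)

# Theorem 1: w_2(i) = lambda* / (2 a_i), with lambda* chosen so the weights sum to one.
lam = 1.0 / np.sum(1.0 / (2.0 * a))
w2 = lam / (2.0 * a)

assert np.isclose(w2.sum(), 1.0) and np.all(w2 > 0)    # feasible point of problem (P)
```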
To numerically test the effectiveness of the proposed metric learning strategy, we solve problem \(\mathcal{P}\) to find the metric tensor from the observation of signals over the edges of the simplicial complex. Specifically, we generated \(100\) random geometric graphs composed of \(N=40\) vertices by filling all the possible \(2\)-simplices in the graph. For each graph we generated a metric tensor with random diagonal entries between \([0,1]\) such that \(\boldsymbol{w}_{2}\) belongs to the feasible set of \(\mathcal{P}\). In Figure 2, we report the mean squared metric estimation error \(\parallel\boldsymbol{w}_{2}-\hat{\boldsymbol{w}}_{2}\parallel_{F}\) versus the number \(M\) of observed edge signals. The results are averaged over \(100\) simplicial complex realizations and, for each complex, over \(100\) random matrices \(\mathbf{X}_{s}\) of bandlimited signals. We can observe that, as the number \(M\) of signals increases, a more accurate estimation \(\hat{\boldsymbol{w}}_{2}\) of the true metric is obtained.

## 5 Comments and conclusions

We presented topological signal processing tools for the analysis of signals defined over weighted simplicial complexes. Specifically, relying on the weighted Hodge Laplacian theory, we proposed efficient strategies to jointly learn the weights of the complex and the filters for the solenoidal, irrotational and harmonic components of the signals defined over the complex. We numerically assessed the effectiveness of the proposed procedures. This is a preliminary work with two main goals: the first is casting the algebraic topology notions of abstract simplicial complexes with non-trivial metric tensors in a signal processing framework; the second is proposing signal processing tools able to handle simplex-structured data defined on weighted simplicial complexes. We plan to extend the proposed framework from both a theoretical and an applied point of view. Figure 2: MSE versus the number of observed signals \(M\).
2305.03493
Covering radius of RM(4,8)
We propose an effective version of the lift by derivation, an invariant that allows us to provide the classification of B(5,6,8)=RM(6, 8)/RM(4,8). The main consequence is to establish that the covering radius of the Reed-Muller RM(4,8) is equal to 26.
Valérie Gillot, Philippe Langevin
2023-05-04T13:15:51Z
http://arxiv.org/abs/2305.03493v1
# Covering Radius of \(RM(4,8)\) ###### Abstract. We propose an effective version of the lift by derivation, an invariant that allows us to provide the classification of \(B(5,6,8)=RM(6,8)/RM(4,8)\). The main consequence is to establish that the covering radius of the Reed-Muller code \(RM(4,8)\) is equal to \(26\). This work is partially supported by the French Agence Nationale de la Recherche through the SWAP project under Contract ANR-21-CE39-0012. **Lemma 1**.: 1. \(2\rho(k,m-1)\leq\rho(k,m)\) 2. \(\rho(k-1,m-1)\leq\rho(k,m)\) 3. \(\rho(k,m)\leq\rho(k,m-1)+\rho(k-1,m-1)\) However, most covering radii are still unknown. Recent results are obtained in [4, 9] in the case \(m=7\). Therefore, all the covering radii are known for \(m\leq 7\). For \(m=8\), most of the covering radii are unknown. Table 1 is an update of Table [8, p. 802] with the latest results corresponding to the cases \(m=7,8\). \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline \(k\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \(\rho(k,8)\) & 120 & \(88^{a}\) – 96 & \(50^{b}-67^{f}\) & \(\mathbf{26}^{c}\) & 10 & 2 & 1 & 0 \\ \hline \(\rho(k,7)\) & 56 & \(40^{d}\) & \(20^{e}\) & 8 & 2 & 1 & 0 & \\ \hline \end{tabular} \end{table} Table 1. Updated table of the Handbook of Coding Theory. a. One can check that the nonlinearity of order \(2\) of \(abd+bcf+bef+def+acg+deg+cdh+aeh+afh+bfh+efh+bgh+dgh\) is \(88\); b. The lower bound is a consequence of the classification of \(B(4,4,8)\), see [3]; c. Obtained in this paper as a consequence of a lower bound found in [2]; d. See the result in [9, Theorem 11]; e. See the result in [4, Theorem 1]; f. Consequence of Lemma 1-(iii). We also consider \(\rho_{t}(k,m)\), the relative covering radius of \(RM(k,m)\) into \(RM(t,m)\), \[\rho_{t}(k,m)=\max_{f\in RM(t,m)}\mathrm{NL}_{k}(f)=\max_{f\in B(k+1,t,m)}\mathrm{NL}_{k}(f) \tag{1}\] In the paper [2], the authors present methods for computing the distance from a Boolean function in \(B(m)\) of degree \(m-3\) to the Reed-Muller space \(RM(m-4,m)\). It is useful to determine the relative covering radius \(\rho_{m-3}(m-4,m)\). In particular, their result \(\rho_{5}(4,8)=26\) is a milestone for our purpose: the computation of \(\rho(4,8)\). It is necessary to determine \(\rho_{6}(4,8)\), but in formula (1) the cardinality of \(B(5,6,8)\), equal to \(2^{84}\), is too large; using a set of representatives \(\widetilde{B}(5,6,8)\) of \(B(5,6,8)\), \[\rho_{6}(4,8)=\max_{f\in\widetilde{B}(5,6,8)}\mathrm{NL}_{4}(f).\] Hence, the search space is reduced to 20748 Boolean functions. Our strategy for determining the covering radius \(\rho(4,8)\) is described in Figure 1. It consists of two parts: a first part dedicated to the tools which allow us to obtain the classification of \(B(5,6,8)\) (cover set, invariant and equivalence), and a second part dedicated to the estimation of the 4th order nonlinearity of the elements of \(\widetilde{B}(5,6,8)\). ## 3. Cover set and classification Given a set of orbit representatives \(\widetilde{B}(s,t,m)\) of \(B(s,t,m)\) under the action of \(\textsc{agl}(m,2)\), we determine \(\rho_{t}(s-1,m)\): \[\rho_{t}(s-1,m)=\max_{f\in B(s,t,m)}\mathrm{NL}_{s-1}(f)=\max_{f\in\widetilde{B}(s,t,m)}\mathrm{NL}_{s-1}(f).\] In general, the determination of a \(\widetilde{B}(s,t,m)\) is a hard computational task. So, we introduce an intermediate concept: a cover set of \(B(s,t,m)\) is a set containing \(\widetilde{B}(s,t,m)\) and possibly other functions of \(B(s,t,m)\). 
In order to obtain a classification from a cover set, we will need a process to eliminate functions in the same orbit. In the first instance, we construct a cover set of reasonable size in two reduction steps applied to \(B(s,t,m)\). Any Boolean function \(f\in B(m)\) can be written as \(x_{m}g+h\) with \(g,h\in B(m-1)\). In particular, \[B(s,t,m)=\big{\{}x_{m}g+h\mid g\in B(s-1,t-1,m-1),\ h\in B(s,t,m-1)\big{\}}. \tag{2}\] **Lemma 2** (Initial cover set).: _The set_ \[B^{\dagger}(s,t,m)=\{x_{m}g+h\mid g\in\widetilde{B}(s-1,t-1,m-1),h\in B(s,t,m-1)\} \tag{3}\] _is a cover set of \(B(s,t,m)\) of size \(\sharp\widetilde{B}(s-1,t-1,m-1)\times\sharp B(s,t,m-1)\)._ Proof.: An element \(\mathfrak{s}\in\textsc{agl}(m-1,2)\) acts on \(f\) by \(x_{m}g\circ\mathfrak{s}+h\circ\mathfrak{s}\). **Lemma 3** (Action of stabilizer).: _Let us fix \(g\in\widetilde{B}(s-1,t-1,m-1)\)._ 1. _For all_ \(\mathfrak{s}\in\textsc{agl}(m-1,2)\) _in the stabilizer of_ \(g\)_, the functions_ \(x_{m}g+h\) _and_ \(x_{m}g+h\circ\mathfrak{s}\) _are in the same orbit._ 2. _For all_ \(\alpha\in RM(1,m-1)\)_, the functions_ \(x_{m}g+h\) _and_ \(x_{m}g+h+\alpha g\) _are in the same orbit._ _Here orbits correspond to the action of \(\textsc{agl}(m,2)\) on \(B(s,t,m)\)._ **Lemma 4** (Second cover set).: _Denoting by \(\mathcal{R}(g)\) a set of orbit representatives for the action over \(B(s,t,m-1)\) of the group spanned by the transformations \(h\mapsto h\circ\mathfrak{s}\) and \(h\mapsto h+\alpha g\), the set_ \[B^{\ddagger}(s,t,m)=\bigsqcup_{g\in\widetilde{B}(s-1,t-1,m-1)}\big{\{}\ x_{m}g+h\mid h\in\mathcal{R}(g)\ \big{\}} \tag{4}\] _is a cover set of size \(\sharp B^{\ddagger}(s,t,m)=\sum_{g\in\widetilde{B}(s-1,t-1,m-1)}\sharp\mathcal{R}(g)\)._ Proof.: For each \(g\in\widetilde{B}(s-1,t-1,m-1)\), apply Lemma 3 to the cover set (3). In order to determine \(\rho_{6}(4,8)\), the initial cover set is \(B^{\dagger}(5,6,8)=\widetilde{B}(4,5,7)\times B(5,6,7)\). The classification \(\widetilde{B}(4,5,7)\) is obtained in [3]; its cardinality is \(179\), whence \(\sharp B^{\dagger}(5,6,8)\) is \(179\times 2^{28}\approx 2^{35.5}\). Applying Lemma 4, we obtain a cover set of size \(3828171\approx 2^{21.9}\). It is already known that \(\sharp\widetilde{B}(5,6,8)=20748\); the determination of a set of orbit representatives is the subject of the next sections. Our approach is based on invariant tools and an equivalence algorithm. ## 4. Invariant From the result of the previous section in the case \(B(5,6,8)\), we have to extract \(20748\) orbit representatives among \(3828171\) functions. Two elements \(f,f^{\prime}\in B(s,t,m)\) in the same orbit under the action of \(\textsc{agl}(m,2)\) are said to be equivalent, denoted \(f{\sim}_{s,t}^{m}f^{\prime}\); this means that there exists \(\mathfrak{s}\in\textsc{agl}(m,2)\) such that \(f^{\prime}\equiv f\circ\mathfrak{s}\mod RM(s-1,m)\). An invariant \(j:B(s,t,m)\to X\), for an arbitrary set \(X\), satisfies \(f{\sim}_{s,t}^{m}f^{\prime}\Longrightarrow j(f)=j(f^{\prime})\). If \(j(f)=j(f^{\prime})\) and \(f{\not\sim}_{s,t}^{m}f^{\prime}\), we say there is a collision. Let us recall that the derivative \(\mathrm{d}_{v}f\) of a Boolean function \(f\) in the direction \(v\) is the map defined by \(\mathbb{F}_{2}^{m}\ni x\mapsto\mathrm{d}_{v}f(x)=f(x+v)+f(x)\). 
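To make these definitions concrete, the following sketch (our own illustration, not the authors' code) represents a Boolean function by its truth table of length \(2^{m}\), indexed by the integers \(0,\dots,2^{m}-1\); the derivative \(\mathrm{d}_{v}f\) is then a pointwise XOR with a shifted copy of the table, and the reduction modulo \(RM(k,m)\) amounts to discarding the ANF monomials of degree at most \(k\), which the binary Möbius transform exposes.

```python
import numpy as np

def moebius(tt):
    """Binary Moebius transform; maps a 0/1 truth table to its ANF coefficients (involutive)."""
    out = tt.copy()
    m = tt.size.bit_length() - 1
    for i in range(m):
        step = 1 << i
        for x in range(tt.size):
            if x & step:
                out[x] ^= out[x ^ step]
    return out

def derivative(tt, v):
    """Truth table of d_v f(x) = f(x + v) + f(x) over F_2^m."""
    idx = np.arange(tt.size)
    return tt[idx ^ v] ^ tt

def reduce_mod_rm(tt, k):
    """Canonical representative of f mod RM(k, m): drop ANF monomials of degree <= k."""
    anf = moebius(tt)
    deg = np.array([bin(x).count("1") for x in range(tt.size)])
    anf[deg <= k] = 0
    return moebius(anf)          # back to a truth table

def Der(tt, v, s):
    """Der_v f = d_v f mod RM(s - 2, m), as defined in the text."""
    return reduce_mod_rm(derivative(tt, v), s - 2)
```

Here the truth tables are assumed to be NumPy arrays of 0/1 integers, with bit \(i\) of an index corresponding to the variable \(x_{i+1}\).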
In the specific case \(f\in B(s,t,m)\), we define the derivative as \[\mathrm{Der}_{v}f\equiv\mathrm{d}_{v}f\mod RM(s-2,m).\] This derivative is an element of \(B(s-1,t-1,m)\) and we consider the following map: \[F\colon B(s,t,m) \longrightarrow\widetilde{B}(s-1,t-1,m)^{\mathbb{F}_{2}^{m}}\] \[f \longmapsto\big(v\mapsto\widetilde{\mathrm{Der}_{v}f}\big).\] **Lemma 5**.: _Let \(f\in B(m)\) and \(\mathfrak{s}\in\textsc{agl}(m,2)\). Considering the linear part \(A\in\textsc{gl}(m,2)\) and the affine part \(a\in\mathbb{F}_{2}^{m}\) of \(\mathfrak{s}=(A,a)\), \(\mathfrak{s}(x)=A(x)+a\), we have \(F(f\circ\mathfrak{s})=F(f)\circ A\)._ Proof.: Note that \(\mathfrak{s}(x+y)=A(x+y)+a=\mathfrak{s}(x)+A(y)\). For \(x,v\in\mathbb{F}_{2}^{m}\), \(f\in B(m)\), \[\mathrm{d}_{v}(f\circ\mathfrak{s})(x) =f\circ\mathfrak{s}(x+v)+f\circ\mathfrak{s}(x)\] \[=f(\mathfrak{s}(x)+A(v))+f\circ\mathfrak{s}(x)\] \[=(\mathrm{d}_{A(v)}f)\circ\mathfrak{s}(x)\] Reducing modulo \(RM(s-2,m)\), we have \(\mathrm{Der}_{v}(f\circ\mathfrak{s})\equiv(\mathrm{Der}_{A(v)}f)\circ\mathfrak{s}\), therefore \(\widetilde{\mathrm{Der}_{v}(f\circ\mathfrak{s})}=\widetilde{\mathrm{Der}_{A(v)}f}\), whence \(F(f\circ\mathfrak{s})=F(f)\circ A\). **Lemma 6** (Invariant).: _The application \(J\) mapping \(f\in B(s,t,m)\) to the distribution of the values of \(F(f)(v)\), for all \(v\in\mathbb{F}_{2}^{m}\), is an invariant._ Proof.: Let us consider \(f,f^{\prime}\in B(s,t,m)\) and \(\mathfrak{s}\in\textsc{agl}(m,2)\) such that \(f^{\prime}\equiv f\circ\mathfrak{s}\mod RM(s-1,m)\) (i.e. \(f\sim_{s,t}^{m}f^{\prime}\)). Applying Lemma 5, we obtain \(F(f^{\prime})=F(f)\circ A\). Since \(A\) is a bijection of \(\mathbb{F}_{2}^{m}\), the value distributions of \(F(f)\) and \(F(f^{\prime})\) coincide, hence \(J(f)=J(f^{\prime})\). Let us observe the derivative of \(f\in RM(t,m)\) in the direction \(e_{m}\); using the decomposition of \(f\) as in (2), for \((y,y_{m})\in\mathbb{F}_{2}^{m-1}\times\mathbb{F}_{2}\) and \(e_{m}=(0,1)\in\mathbb{F}_{2}^{m-1}\times\mathbb{F}_{2}\), we obtain: \[\mathrm{d}_{e_{m}}f(y,y_{m}) =f((y,y_{m})+(0,1))+f(y,y_{m})\] \[=x_{m}(y,y_{m}+1)g(y)+x_{m}(y,y_{m})g(y)+h(y)+h(y)\] \[=(y_{m}+1)g(y)+y_{m}g(y)\] \[=g(y)\] It is nothing but the partial derivative with respect to \(x_{m}\). Hence, \(g\) is a Boolean function in \(m-1\) variables of degree less than or equal to \(t-1\). This fact holds in general for a derivation in any direction \(v\). A Boolean function \(f\in B(m)\) is \(v\)-periodic iff \(f(x+v)=f(x),\forall x\in\mathbb{F}_{2}^{m}\). The \(v\)-periodic Boolean functions are invariant under the action of any transvection \(T\in\textsc{gl}(m,2)\) of type \(T(x)=x+\theta(x)v\), where \(v\) is in the kernel of the linear form \(\theta\). For any supplementary subspace \(E_{v}\) of \(v\), the restriction \(f|_{E_{v}}\) of a \(v\)-periodic function \(f\in B(m)\) is a function in \(m-1\) variables. Note that for \(f\in B(s,t,m)\), \(\mathrm{Der}_{v}f\) is \(v\)-periodic and its restriction to \(E_{v}\) is a Boolean function in \(m-1\) variables of degree less than or equal to \(t-1\). **Lemma 7**.: _Let \(f,g\in B(m)\) be two \(v\)-periodic Boolean functions. If \(f\) is equivalent to \(g\) in \(B(m)\) then \(f|_{E_{v}}\) is equivalent to \(g|_{E_{v}}\) in \(B(m-1)\), for any supplementary subspace \(E_{v}\) of \(v\)._ Proof.: If \(f\) and \(g\) are equivalent in \(B(m)\), there exists \(\mathfrak{s}=(A,a)\) such that \(f\circ\mathfrak{s}=g\). The case of a translation is immediate. We may thus assume \(a=0\), i.e. consider only the action of the linear part \(A\), so that \(f\circ A=g\). 
Since \(g\) is \(v\)-periodic, \(g\) is fixed by any transvection \(T(x)=x+\theta(x)v\) where \(v\) is in the kernel of the linear form \(\theta\): \[\forall x\in\mathbb{F}_{2}^{m},\quad g(T(x))=g(x+\theta(x)v)=g(x)\] We denote by \(P\) the projection of \(\mathbb{F}_{2}^{m}\) onto \(E_{v}\) in the direction of \(v\) (\(P(e+v)=e\)), \[\forall x\in\mathbb{F}_{2}^{m},\quad g(x)=g(T(x))=f(AT(x))=f(PAT(x)).\] Note that \(AT(x)=A(x)+\theta(x)A(v)\). We are going to determine \(\theta(A^{-1}(v))\) so that \(\ker PAT\cap E_{v}=\{0\}\). That means that for \(x\in E_{v}\setminus\{0\}\), \(AT(x)\not\in\{0,v\}\). Let \(x\in\mathbb{F}_{2}^{m}\) be such that \(AT(x)=\lambda v\) with \(\lambda\in\mathbb{F}_{2}\). \[A(x)+\theta(x)A(v)=\lambda v\] \[x+\theta(x)v=\lambda A^{-1}(v)\] \[\theta(x)+\theta(x)\theta(v)=\lambda\theta(A^{-1}(v))\quad\theta(x)=\lambda\theta(A^{-1}(v))\] \[x=\lambda(A^{-1}(v)+\theta(A^{-1}(v))v)\] There are two cases to be considered: * \(v\in A(E_{v})\): then \(A^{-1}(v)\neq v\) and we can fix \(\theta(A^{-1}(v))=1\). Thus, \[x=\lambda(A^{-1}(v)+v)\quad\lambda=\begin{cases}0,\,x=0\\ 1,\,x\not\in E_{v}\end{cases}\] * \(v\not\in A(E_{v})\): then \(A^{-1}(v)\not\in E_{v}\) and we can fix \(\theta(A^{-1}(v))=0\). Thus, \[x=\lambda A^{-1}(v)\quad\lambda=\begin{cases}0,\,x=0\\ 1,\,x\not\in E_{v}\end{cases}\] In these two cases, we obtain \(x=0\) for \(\lambda=0\) and \(x\not\in E_{v}\) for \(\lambda=1\). Hence, the restriction of \(PAT\) to \(E_{v}\) is an automorphism; thus, \(f|_{E_{v}}\) is equivalent to \(g|_{E_{v}}\) in \(B(m-1)\). By numbering the elements of \(\widetilde{B}(s-1,t-1,m)\), \(F(f)\) takes its values in \(\mathbb{N}\). We can consider its Fourier transform \(\widehat{F}(f)(b)=\sum_{v\in\mathbb{F}_{2}^{m}}F(f)(v)(-1)^{b.v}\). For \(A\in\operatorname{GL}\left(m\right)\), the relation \(F(f^{\prime})=F(f)\circ A\) becomes \(\widehat{F}(f^{\prime})\circ A^{*}=\widehat{F}(f)\), where \(A^{*}\) is the adjoint of \(A\). We denote by \(J\) the invariant corresponding to the value distribution of \(F(f)\) and by \(\widehat{J}\) the invariant corresponding to the value distribution of \(\widehat{F}(f)\). These invariants \(J\) and \(\widehat{J}\) were introduced in [1]. In our context the invariant \(\widehat{J}\) is more discriminating than \(J\). The application of Lemma 7 allows us to consider the derivative functions in \(B(s-1,t-1,m-1)\) instead of \(B(s-1,t-1,m)\). **Remark 1**.: _To implement the algorithm_ Invariant _efficiently, we need to optimise the class determination of an element of \(B(4,5,7)\). There are only \(4\) classes in \(\widetilde{B}(5,5,7)\). We precompute the complete classification of \(B(5,5,7)\) by determining a representatives set \(\{r_{1},r_{2},r_{3},r_{4}\}\) of \(\widetilde{B}(5,5,7)\), the stabilizers \(\{S_{1},S_{2},S_{3},S_{4}\}\) of each representative and a transversal. For each stabilizer, we keep in memory a description of the orbits of \(B(4,4,7)\) under the stabilizer \(S_{i}\). The class of an element \(h\in B(4,5,7)\) is obtained from a representative \(r_{i}{\sim}_{5,5}^{7}h\) and a transversal element \(\mathfrak{s}\in\textsc{agl}(7)\) such that \(h\circ\mathfrak{s}\equiv r_{i}\mod RM(4,7)\), using a lookup table for the key \(h\circ\mathfrak{s}+r_{i}\). There are \(179\) classes in \(\widetilde{B}(4,5,7)\). The amount of memory to store this data is about 32 GB._ Listing 1. 
Invariant \begin{tabular}{|c|} \hline Algorithm Invariant( f, s, t, m ) \\ \{ _// f element of B(s,t,m)_ \\ for each \(v\) in \(\mathbb{F}_{2}^{m}\) \\ g \(\leftarrow\) d\({}_{v}f\) \\ h \(\leftarrow\)\(g|_{E_{v}}\) \\ F[\(v\)] \(\leftarrow\)Class( h, s\(-\)1, t\(-\)1, m\(-\)1 ) \\ return FourierTransform( F ) \\ \} \\ \hline \end{tabular} Applying the invariant \(J\) to the 3828171 Boolean functions of the cover set \(B^{\ddagger}(5,6,8)\), one finds 20694 distributions, which means there are 54 collisions. On the same set, the invariant \(\widehat{J}\) takes 20742 values: there are only 6 collisions. In the next section, we describe an equivalence algorithm to detect and solve these collisions. ## 5. Equivalence In this section, we work exclusively in the space \(B(t-1,t,m)\), i.e. in the particular case \(s=t-1\). Consider \(\widehat{J}\), the invariant corresponding to the value distribution of \(\widehat{F}(f)\). Two functions \(f,f^{\prime}\in B(t-1,t,m)\) that do not have the same value distribution are not equivalent. In the case \(f{\sim}_{t-1,t}^{m}f^{\prime}\), the distributions are identical and there exists \(A\in\textsc{gl}(m,2)\) such that \[F(f^{\prime})=F(f)\circ A\quad\text{and}\quad\widehat{F}(f^{\prime})\circ A^{*}=\widehat{F}(f). \tag{5}\] The existence of \(A\) does not guarantee the equivalence of the functions. Such an \(A\) is called a candidate; it must be completed by an affine part \(a\in\mathbb{F}_{2}^{m}\) to be able to conclude equivalence. For \(f\in RM(t,m)\) and \(x\in\mathbb{F}_{2}^{m}\), \[\begin{split}\mathrm{d}_{u,v}f(x)&=\mathrm{d}_{v}(\mathrm{d}_{u}f)(x)\\ &=\mathrm{d}_{u}(f(x+v)+f(x))\\ &=f(x+u+v)+f(x+u)+f(x+v)+f(x)\\ &=f(x+u+v)+f(x)+f(x+u)+f(x)+f(x+v)+f(x)\\ &=\mathrm{d}_{u+v}f(x)+\mathrm{d}_{u}f(x)+\mathrm{d}_{v}f(x)\end{split}\] The degree of \(\mathrm{d}_{u,v}f\) is less than or equal to \(t-2\); reducing modulo \(RM(t-2,m)\), we obtain \[\mathrm{d}_{u+v}f(x)+\mathrm{d}_{u}f(x)+\mathrm{d}_{v}f(x)\equiv 0.\] The set \(\Delta(f)=\{\mathrm{d}_{v}f\mod RM(t-2,m)\mid v\in\mathbb{F}_{2}^{m}\}\) is a subspace of \(B(t-1,t-1,m)\). **Lemma 8** (Candidate checking).: _Let \(f,f^{\prime}\) be in \(B(t-1,t,m)\). Let us consider a candidate \(A\in\operatorname{GL}\left(m\right)\). There exists \(a\in\mathbb{F}_{2}^{m}\) such that \(f^{\prime}\equiv f\circ(A,a)\mod RM(t-2,m)\) if and only if \(f^{\prime}\circ A^{-1}+f\in\Delta(f)\)._ Proof.: If \(f^{\prime}\equiv f\circ(A,a)\mod RM(t-2,m)\), there exists \(r\in RM(t-2,m)\) such that for all \(x\in\mathbb{F}_{2}^{m}\) \[f^{\prime}(x) =f\circ(A,a)(x)+r(x)=f(A(x)+a)+r(x)\] \[f^{\prime}\circ A^{-1}(x) =f(x+a)+r(x)\] \[f^{\prime}\circ A^{-1}(x)+f(x) =f(x+a)+f(x)+r(x)\] \[(f^{\prime}\circ A^{-1}+f)(x) =\operatorname{d}_{a}\!f(x)+r(x)\] Thus \(f^{\prime}\circ A^{-1}+f\in\Delta(f)\). Conversely, for \(f^{\prime}\circ A^{-1}+f\in\Delta(f)\), there exists \(a\in\mathbb{F}_{2}^{m}\) such that \(f^{\prime}\circ A^{-1}+f\equiv\operatorname{d}_{a}\!f\mod RM(t-2,m)\). There exists \(r\in RM(t-2,m)\) such that for all \(x\in\mathbb{F}_{2}^{m}\), \((f^{\prime}\circ A^{-1}+f)(x)=\operatorname{d}_{a}\!f(x)+r(x)\). By repeating the calculations in reverse order, we have \(f^{\prime}\equiv f\circ(A,a)\mod RM(t-2,m)\). From Lemma 8, one deduces an algorithm CandidateChecking(A,f,f') returning true if there exists an element \(a\in\mathbb{F}_{2}^{m}\) such that \(f^{\prime}\equiv f\circ(A,a)\mod RM(t-2,m)\), and false otherwise. 
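Since \(\mathrm{d}_{u+v}f\equiv\mathrm{d}_{u}f+\mathrm{d}_{v}f\mod RM(t-2,m)\), the subspace \(\Delta(f)\) is spanned by the reduced derivatives in the canonical directions, so the test of Lemma 8 is a membership test in an \(\mathbb{F}_{2}\)-span. The sketch below is our own illustration of this check, not the authors' implementation: it reuses the truth-table helpers derivative and reduce_mod_rm from the earlier sketch, the bit convention for applying \(A^{-1}\) to an index is an assumption of ours, and the membership test is done by Gaussian elimination over \(\mathbb{F}_{2}\).

```python
import numpy as np

def in_span_gf2(vectors, target):
    """True iff `target` lies in the GF(2) span of `vectors` (0/1 arrays)."""
    basis = []                                   # pairs (pivot index, reduced vector)
    for v in vectors:
        v = np.asarray(v, dtype=np.uint8).copy()
        for piv, b in basis:                     # reduce against the rows already kept
            if v[piv]:
                v ^= b
        nz = np.flatnonzero(v)
        if nz.size:
            basis.append((int(nz[0]), v))
    t = np.asarray(target, dtype=np.uint8).copy()
    for piv, b in basis:
        if t[piv]:
            t ^= b
    return not t.any()

def candidate_checking(f, f_prime, A_inv, t, m):
    """Lemma 8 test: does some affine part a complete the linear candidate A?

    f, f_prime : 0/1 truth tables of length 2**m for functions in B(t-1, t, m).
    A_inv      : m x m 0/1 NumPy matrix, inverse of the candidate A, acting on the
                 bit vector of an index (least significant bit = first coordinate).
    """
    idx = np.arange(1 << m)
    bits = (idx[:, None] >> np.arange(m)) & 1            # (2**m, m) bit expansion
    perm = ((bits @ A_inv.T) % 2) @ (1 << np.arange(m))  # x  ->  A^{-1} x  as integers
    g = f_prime[perm] ^ f                                # truth table of f' o A^{-1} + f
    derivs = [reduce_mod_rm(derivative(f, 1 << i), t - 2) for i in range(m)]
    return in_span_gf2(derivs, reduce_mod_rm(g, t - 2))
```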
Given \(f,f^{\prime}\in B(t-1,t,m)\) satisfying \(\widehat{J}(f)=\widehat{J}(f^{\prime})\), the algorithm Equivalent(f,f',iter)1 tests in two phases if \(f\) and \(f^{\prime}\) are equivalent under the action of \(\textsc{agl}(m,2)\) modulo \(RM(t-2,m)\): Footnote 1: the parameter iter ranges from 1024 to \(2^{23}\), depending on the situation. 1. determine at most iter candidates \(A^{*}\in\operatorname{GL}\left(m\right)\) such that \(\widehat{F}(f^{\prime})\circ A^{*}=\widehat{F}(f)\); 2. for each candidate \(A^{*}\), call CandidateChecking(A,f,f'). The algorithm ends with one of the following three values: NotEquiv if all potential \(A\) were tested, so \(f\not\sim_{t-1,t}^{m}f^{\prime}\); Equiv if there exists an \((A,a)\) proving \(f\mathord{\sim}_{t-1,t}^{m}f^{\prime}\); Undefined if iter is too small to conclude. \begin{tabular}{|l|} \hline Algorithm Equivalent(f,f',iter) \\ \{ _// f, f' given elements of B(t\(-\)1,t,m)_ \\ _// satisfying \(\widehat{J}(f^{\prime})=\widehat{J}(f)\)_ \\ _// return Equiv or NotEquiv or Undefined_ \\ s \(\leftarrow\) random element of \(\operatorname{AGL}(\operatorname{m})\) \\ f \(\leftarrow\) f \(\circ\) s \\ basis \(\leftarrow\) (\(b_{1},\ldots,b_{m}\)) a basis of \(\mathbb{F}_{2}^{m}\) \\ flag \(\leftarrow\) NotEquiv \\ _// determine \(A^{*}\) in GL(m)_ \\ \(A^{*}(0)\) \(\leftarrow\) 0 \\ Search(1,basis) \\ return flag \\ \} \\ \hline \end{tabular} \begin{tabular}{|l|} \hline Algorithm Search(i,basis) \\ \{ _// basis =\((b_{1},\ldots,b_{m})\) a basis of \(\mathbb{F}_{2}^{m}\)_ \\ _// i index of basis elements in_ {_1,2,..., m_} \\ if ( i \(>\) m ) \\ _// A\({}^{*}\)in GL(m) is fully constructed_ \\ _// check the existence of a in \(\mathbb{F}_{2}^{m}\)_ \\ if CandidateChecking(A,f,f\({}^{\prime}\)) \\ flag \(\leftarrow\)Equiv \\ return \\ iter \(\leftarrow\)iter \(-\)1 \\ if ( iter \(<\) 0 ) \\ flag \(\leftarrow\)Undefined \\ return \\ else \\ _// \(\forall x\in\langle b_{1},\ldots,b_{i-1}\rangle,\ \widehat{F}(f^{\prime})\circ A^{*}(x)= \widehat{F}(f)(x)\)_ \\ _// continue construction of_ \(A^{*}\) \\ for each y in \(\mathbb{F}_{2}^{m}\) \\ if Admissible(y,i) and ( flag = NotEquiv ) \\ Search(i+1,basis) \\ \} \\ \hline \end{tabular} The algorithm Admissible(y,i) checks the possible continuation of the construction of \(A^{*}\) over \(\langle b_{1},\ldots,b_{i-1},b_{i}\rangle\), setting \(A^{*}(x+b_{i}):=A^{*}(x)+y\) for all \(x\in\langle b_{1},\ldots,b_{i-1}\rangle\). Then, the function returns true if \(\forall x\in\langle b_{1},\ldots,b_{i-1},b_{i}\rangle,\ \widehat{F}(f^{\prime})\circ A^{*}(x)= \widehat{F}(f)(x)\), and false otherwise. ## 6. Determination of \(\rho(4,8)\) The different steps of our strategy to determine \(\rho(4,8)\) are summarised in Figure 1. \begin{tabular}{|l|} \hline Algorithm NonLinearity(k,m,f,iter,limit) \\ \{ \\ G \(\leftarrow\)generator matrix of RM(k,m) \\ while ( iter \(>\) 0 ) \\ for( i = 0 ; i \(<\) k; i ++ ) \\ do \{ \\ p = random( n ) \\ \} while ( not G[i][ p] ) \\ for( j = i+1 ; j \(<\) k; j ++ ) \\ if ( G[j][ p] ) \\ G[j] \(\leftarrow\)G[j] xor G[i] \\ if ( f[ p ] ) \\ f \(\leftarrow\)f xor G[i] \\ w = weight( f ) \\ if ( w \(<=\) limit ) \\ return true \\ iter \(\leftarrow\)iter \(-\)1 \\ return false \\ \} \\ \hline \end{tabular} This algorithm performs random Gaussian eliminations to generate small-weight codewords in a translate of \(RM(k,m)\). To determine the covering radii \(\rho_{6}(4,8)\) and \(\rho(4,8)\), we have to estimate the nonlinearity of order \(4\) of some functions in \(B(8)\). We use the probabilistic algorithm NonLinearity three times: 1. 
to check the non-existence of a function in \(\widetilde{B}(5,6,8)\) of nonlinearity of order \(4\) greater than or equal to \(28\); 2. to extract the set of two functions \(\{f,g\}\) in \(\widetilde{B}(5,6,8)\) with nonlinearity of order \(4\) greater than or equal to \(26\); 3. to prove that the nonlinearity of order \(4\) of the functions \(\{f+\delta_{a},g+\delta_{a}\}\) is not greater than or equal to \(27\). ### Compute \(\rho_{6}(4,8)\) Recall that \[\rho_{6}(4,8)=\max_{f\in\widetilde{B}(5,6,8)}\operatorname{NL}_{4}(f)=\max_{f\in\widetilde{B}(5,6,8)}\min_{g\in RM(4,8)}\operatorname{wt}{(f+g)}.\] We apply the algorithm NonLinearity to \(\widetilde{B}(5,6,8)\) to confirm that all these functions have a nonlinearity of order \(4\) less than or equal to \(26\). Using the result \(\rho_{5}(4,8)=26\) of [2], we obtain \(\rho_{6}(4,8)=26\). ### Compute \(\rho(4,8)\) Knowing that \(\rho(6,8)=2\) and from the previous result \(\rho_{6}(4,8)=26\), we have \[\rho(4,8)\leq\rho_{6}(4,8)+\rho(6,8)=28.\] A second application of the algorithm NonLinearity eliminates from \(\widetilde{B}(5,6,8)\) 20746 functions of nonlinearity of order \(4\) less than \(26\). After this process, there are two remaining functions: \[f=abcef+acdef+abcdg+abdeg+abcfg+acdeh+abcfh\] \[+bdefh+bcdgh+abegh+adfgh+cefgh\] and \[g=abcdeh+abcdf+abcef+abdeg+bcefh+adefh+bcdgh+acegh+abfgh\] We retrieve the cocubic function \(f\), mentioned in [2]; its degree is \(5\) and its nonlinearity of order \(4\) is \(26\). The other function \(g\) has degree \(6\) and its nonlinearity is probably \(26\), and certainly less than or equal to \(26\). Now, we are going to prove that there is no Boolean function in \(B(8)\) with a nonlinearity of order \(4\) equal to \(28\). For this purpose, it is sufficient to check the non-existence of a function \(h\) satisfying \(\operatorname{NL}_{4}(h)=27\); such an \(h\) has odd weight and therefore its degree is \(8\). For \(a\in\mathbb{F}_{2}^{m}\), we denote by \(\delta_{a}\) the Dirac function, \(\delta_{a}(x)=1\) iff \(x=a\). Every Boolean function can be expressed as a sum of Dirac functions, \(f(x)=\sum_{\{a|f(a)=1\}}\delta_{a}(x)\). The polynomial form of \(\delta_{a}\) is: \[\delta_{a}(X_{1},X_{2},\dots,X_{m})=(X_{1}+\bar{a}_{1})(X_{2}+\bar{a}_{2})\cdots(X_{m}+\bar{a}_{m}) \tag{6}\] where \(\bar{a}_{i}=a_{i}+1\). **Lemma 9**.: _An odd weight function is at distance one from \(RM(m-2,m)\)._ Proof.: We denote by \(\widetilde{X_{i}}\) the monomial of degree \(m-1\) containing all variables except \(X_{i}\). Let us consider an odd weight function \(h\in B(m)\); its degree is \(m\), so \[h(X_{1},X_{2},\dots,X_{m})=X_{1}X_{2}\dots X_{m}+\bar{a}_{1}\widetilde{X_{1}}+\cdots+\bar{a}_{m}\widetilde{X_{m}}+r(x)\] where \(\deg(r)\leq m-2\). From (6), we also have \[\delta_{a}(X_{1},X_{2},\ldots,X_{m})=X_{1}X_{2}\ldots X_{m}+\bar{a}_{1}\widetilde{X_{1}}+\cdots+\bar{a}_{m}\widetilde{X_{m}}+r^{\prime}(x)\] with \(\deg(r^{\prime})\leq m-2\). We obtain \(h\equiv\delta_{a}\mod RM(m-2,m)\). The Dirac function has weight 1, so the distance of \(h\) to \(RM(m-2,m)\) is 1. A third application of the algorithm NonLinearity to the set \(\{f,g\}\) translated by the 256 Dirac functions gives the non-existence of odd weight functions of nonlinearity of order 4 greater than or equal to 27. That means there is no function in \(B(8)\) with nonlinearity of order 4 greater than or equal to 27, and we obtain \(\rho(4,8)=26\). The second and third applications of the algorithm NonLinearity need 569713 iterations. 
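For completeness, a runnable rendition of the probabilistic NonLinearity routine listed in the previous section is sketched below (our own simplified version with names of our choosing, not the authors' implementation). It builds a generator matrix of \(RM(k,m)\) from the monomials of degree at most \(k\) and repeatedly performs random Gaussian eliminations of the coset representative; every weight it reports is attained by an element of \(f+RM(k,m)\), so the value returned is an upper bound on \(\operatorname{NL}_{k}(f)\), which is exactly how the routine is used in the text.

```python
import random
from itertools import combinations
import numpy as np

def rm_generator(k, m):
    """Rows are the truth tables of the monomials of degree <= k (a generator of RM(k, m))."""
    idx = np.arange(1 << m)
    rows = []
    for d in range(k + 1):
        for S in combinations(range(m), d):
            row = np.ones(1 << m, dtype=np.uint8)
            for i in S:
                row &= ((idx >> i) & 1).astype(np.uint8)
            rows.append(row)
    return np.array(rows)

def nonlinearity_upper_bound(k, m, f, iters=10_000):
    """Monte-Carlo upper bound on NL_k(f) by random Gaussian eliminations of the coset."""
    G0 = rm_generator(k, m)
    best = int(np.sum(f))                     # weight of f itself, a coset element
    for _ in range(iters):
        G = G0.copy()
        fw = np.array(f, dtype=np.uint8).copy()
        for i in range(G.shape[0]):
            ones = np.flatnonzero(G[i])
            if ones.size == 0:                # defensive; rows stay independent in practice
                continue
            p = int(random.choice(ones))      # random pivot position for row i
            for j in range(G.shape[0]):       # eliminate the pivot from the other rows
                if j != i and G[j][p]:
                    G[j] ^= G[i]
            if fw[p]:                         # ... and from the coset representative
                fw ^= G[i]
            # fw stays in f + RM(k, m): only codewords were added to it
        best = min(best, int(np.sum(fw)))
    return best
```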
**Remark 2**.: _The extraction of the 20748 classes of \(\widetilde{B}(5,6,8)\) with the invariant approach and the equivalence algorithm needs several weeks of computation (equivalence tests)._ **Remark 3**.: _The number of iterations needed to estimate the 4th order nonlinearity of a Boolean function is 565252 on average. The total running time to check the nonlinearity is about one day using 48 processors._ ## 7. Conclusion We have determined the covering radius of \(RM(4,8)\) from the classification of \(B(5,6,8)\). It is not obvious how to apply our method to obtain the covering radii of the second and third order Reed-Muller codes in 8 variables. However, we believe that our approach can help to improve lower bounds in these open cases. Figure 1. Strategy to compute \(\rho(4,8)\)
2305.09800
Mirages: On Anthropomorphism in Dialogue Systems
Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism may be inevitable due to the choice of medium, conscious and unconscious design choices can guide users to personify such systems to varying degrees. Encouraging users to relate to automated systems as if they were human can lead to high risk scenarios caused by over-reliance on their outputs. As a result, natural language processing researchers have investigated the factors that induce personification and develop resources to mitigate such effects. However, these efforts are fragmented, and many aspects of anthropomorphism have yet to be explored. In this paper, we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise, including reinforcing gender stereotypes and notions of acceptable language. We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description; and attend to the many linguistic cues that can elicit personification by users.
Gavin Abercrombie, Amanda Cercas Curry, Tanvi Dinkar, Verena Rieser, Zeerak Talat
2023-05-16T20:50:46Z
http://arxiv.org/abs/2305.09800v2
# Mirages: On Anthropomorphism in Dialogue Systems ###### Abstract Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism is inevitable, conscious and unconscious design choices can guide users to personify them to varying degrees. Encouraging users to relate to automated systems as if they were human can lead to transparency and trust issues, and high risk scenarios caused by over-reliance on their outputs. As a result, natural language processing researchers have begun to investigate factors that induce personification and develop resources to mitigate such effects. However, these efforts are fragmented, and many aspects of anthropomorphism have yet to be considered. In this paper, we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise, arguing that it can reinforce stereotypes of gender roles and notions of acceptable language. We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description; and attend to the many linguistic cues that can elicit personification by users. ## 1 Introduction Automated dialogue or 'conversational AI' systems are becoming increasingly ubiquitous. As these systems and their outputs become more sophisticated, there is increased risk that they are mistaken for having been produced by humans, are anthropomorphised, and for people to personify them. Assigning human characteristics to dialogue systems can have consequences ranging from the relatively benign, e.g. referring to automated systems by gender, to the disastrous, e.g., people following the advice or instructions of a system to do harm.1 It is therefore important to consider how dialogue systems are designed, built, and presented in order to minimise the potential risks. Footnote 1: While high performing dialogue systems have only recently been introduced to the public domain, there has already been a case of a person committing suicide, allegedly as a consequence of interaction with such a system (Lovens, 2023). Recognising such dangers, legislation prohibiting automated voice systems from giving the appearance of being human has been passed (California State Legislature, 2018; Lieu, 2018), in addition to the potential applicability of pre-existing legislation on deceptive trade practices (Atleson, 2023). Wider regulation has also been proposed, e.g. requirements to explicitly (red) flag automated systems to users (Walsh, 2016) or make transparent the machine nature of manufactured items (Boden et al., 2017). While some developers attempt to limit anthropomorphic cues in the output of their systems (e.g. Glaese et al., 2022), user engagement remains a strong motivation for creating humanlike systems (Araujo, 2018; Wagner et al., 2019). In this position paper, we make a normative argument against gratuitous anthropomorphic features, grounded in scientific findings from the fields of psychology, linguistics, and HCI by (i) outlining the psychological mechanisms that contribute to anthropomorphism; (ii) presenting an overview of linguistic factors that contribute to anthropomor Figure 1: De-anthropomorphising system output. phism and personification, e.g. self-referential personal pronoun use, or generation of content which gives automated systems the appearance of having empathy; and (iii) discussing the consequences of anthropomorphism. 
We conclude by providing recommendations that can aid in minimising anthropomorphism, thus providing a path for safer dialogue systems and avoiding the creation of mirages of humanity. ## 2 Anthropomorphism Anthropomorphism refers to attributing human characteristics or behaviour to non-human entities, e.g. animals or objects. For example, Aesop's tables depict animals reasoning, thinking and even talking like humans (Korhonen, 2019). Along with more pragmatic factors such as hedonic motivation, price value, and habit, anthropomorphism can increase user engagement (Wagner et al., 2019) and reciprocity (Fogg and Nass, 1997). For example, self-disclosure from a system, even when 'patently disingenuous', inspires reciprocity from the user (Kim and Sundar, 2012; Ravichander and Black, 2018). By encouraging such types of engagements, developers can foster greater connection between people and systems, which increases user satisfaction (Araujo, 2018), and plays an important role in systems becoming widely accepted and adopted. That is, developers are incentivised to engage with anthropomorphism to stimulate people to create deeper emotional connections (Araujo, 2018) with systems that cannot reciprocate. In the rest of this section, we discuss human and system factors that contribute towards placement of systems on the anthropomorphic continuum. ### Human Factors Research has shown that the process of anthropomorphising is mostly mindless (Kim and Sundar, 2012): it does not reflect the user's thoughtful belief that a computer has human characteristics, but is rather automatic and encouraged by cues in their interfaces. According to Epley et al. (2007) anthropomorphism may be a default behaviour, which is corrected as people acquire more knowledge about an object. They further argue that on a cognitive level, humans anchor their knowledge to their own experiences and indiscriminately apply it to inanimate objects--that is, in order to make sense of an artefact, we map our own lived experiences onto it. This anthropocentric knowledge is easily accessible but can be corrected by learning more about the object. This may explain why the tendency to anthropomorphise is strongest in childhood, as adults have more knowledge about the world. This cognitive phenomenon is then compounded by two motivational determinants: _effectance_ and _sociality_(Epley et al., 2007). Effectance refers to the need to interact efficiently with one's environment. By anthropomorphising systems we ascribe them (humanlike) intentionality which, in turn, reduces uncertainty and increases confidence in our ability to predict a system's behaviour. Sociality, on the other hand, refers to the need to establish connections with other humans, which can prime us to mentally construct systems as humanlike to fulfil a need for social connection. People suffering from chronic loneliness, a lack of social connection, or attachment issues may be more prone to anthropomorphising objects (Epley et al., 2007). For these reasons, dialogue systems have been proposed as a remedy for the loneliness epidemic (Stupple-Harris, 2021). For instance, commercial virtual companion developers such as Replika.ai saw rises in product uptake in 2020 due to social safety measures such as forced isolation (Metz, 2020; Liu, 2022). While these elements of the human psyche explain our inclination to personify systems, Epley et al.'s theory does not speak to the qualities of the artefacts themselves that make them anthropomorphic and more prone to be personified. 
### Agent Factors There is no necessary and sufficient condition for a system to be anthropomorphic. At the most basic level, systems are anthropomorphic if they (i) are interactive, (ii) use language, and (iii) take on a role performed by a human (Reeves and Nass, 1996). While these characteristics are inherent to dialogue systems, not all systems are equally humanlike. We can draw a parallel with humanness here. Rather than a single factor which makes humans _human_, Scruton (2017, p. 31) argues that humanity is emergent: each individual element does not make a human but collectively they make up the language of humanness. Scruton compares it to a portrait, in which an artist paints areas and lines to compose a face; when observing the canvas, in addition to those marks, we see a face: _And the face is really there: someone who does not see it is not seeing correctly [...] as soon as the lines and blobs are there, so is the face._ Similarly, no single attribute or capability makes a system anthropomorphic. Rather, each contributes to the painting until 'the face' emerges. Modern dialogue systems display a plethora of other characteristics that make space for anthropomorphism, e.g. having personas, first names, and supposed preferences. The more of such elements a system has, the more humanlike it appears. Next we explore the linguistic elements and design choices which act as strokes in the painting. ## 3 Linguistic Factors Much attention has been paid to anthropomorphic design features of dialogue systems, such as gendered names and avatars West et al. (2019) and ChatGPT's animated 'three dots' and word-by-word staggered outputs, which give an impression that the system is thinking Venkatasubramonian in Goldman (2023). However, the linguistic factors that engender personification, e.g. voice qualities and the speech, content, or style of outputs, have been given less consideration. We outline them here. ### Voice While not all dialogue systems are equipped with a voice, merely having one can be interpreted as an expression of personhood Faber (2020). Listeners may infer a speaker's physical attributes, e.g. height, weight, and age, based on their voice alone Krauss et al. (2002), showing a human proclivity to form assumptions about a speaker's embodiment based on their voice. This is not only limited to physical attributes--personality traits such as dominance, extroversion, and even socio-sexuality may be partially expressed and inferred from voice pitch Stern et al. (2021). Listeners may therefore attribute human characteristics, e.g. gender stereotypes, personality, and emotion, to spoken dialogue systems based on psychological and social behaviours learned through human-human communication Nass and Brave (2005). Thus, the absence of embodiment leaves room for people to personify dialogue systems that are provided with synthetic voices Aylett et al. (2019), which is even acknowledged by developers of commercial dialogue systems (e.g. Google Assistant). **Prosody: Tone and Pitch.** There exist many vocal manipulation techniques that can influence which personality users attribute to a dialogue system. For example, Wilson and Moore (2017) found that a variety of fictional robot, alien, and cartoon voices had manipulated voice characteristics (e.g. breathiness, creakiness, echoes, reverberations) to better fit their desired character. However, they note that 'the voices of speech-enabled artefacts in the non-fictional world [...] 
invariably sound humanlike, despite the risk that users might be misled about the capabilities of the underlying technology' Wilson and Moore (2017, p.42). DisfluenciesPeople rarely speak in the same manner with which they write: they are in general disfluent, that is, they insert elements that break the fluent flow of speech, such as interrupting themselves, repetitions, and hesitations ('um', 'uh') Fraundorf et al. (2018). Such disfluencies are perceived by the listeners as communicative signals, regardless of the speaker's intent (see Barr and Seyfeddinipur (2010); Clark and Fox Tree (2002); Corley et al. (2007); Smith and Clark (1993). Research has therefore sought to integrate disfluencies into text-to-speech (TTS) systems, where they have proven to be a useful strategy for buying time Skantze et al. (2015), i.e. to allow the system to determine the next step. A person's _perception of confidence_ towards the system's response may decrease due to disfluency Kirkland et al. (2022); Wollermann et al. (2013), and they may therefore be a useful mitigation strategy to tone down assertions made by a system. However, there are anthropomorphic implications in the (over)integration of disfluencies Dinkar et al. (2023). Consider Google Duplex, a system for generating real world phone conversations Leviathan and Matias (2018). The inclusion of disfluencies in the generated responses mimicked the _naturalness_ of a human response, which in turn led users to believe that they were communicating with another human Lieu (2018). AccentAccentual pronunciation features, as with those of dialect, provide clues to a human speaker's socio-linguistic identity and background, and geographical origin Crystal (1980). While it has been suggested that incorporation of specific accents in the design of synthetic voices can exploit people's tendency to place trust in in-group members Torre and Maguer (2020), potentially causing transparency issues, in practice, most are designed to mimic the local standard, reinforcing societal norms of acceptability and prestige. ### Content People's expectation is that animate things--such as human beings--and inanimate ones--like machines--have very different functions and capabilities, which reflects the reality. However, dialogue systems often produce responses that blur these lines, e.g., by expressing preferences or opinions. To avoid confusing the two, the output from dialogue systems should differ in a range of areas. Responses to Direct QuestionsTransparency, at the most basic level, requires dialogue systems to respond truthfully to the question 'are you a human or a machine?' This may even be a regulatory requirement, for example in California, it is 'unlawful for a bot to mislead people about its artificial identity for commercial transactions or to influence an election' (California State Legislature, 2018). To test systems' responses to such questions, Gros et al. (2021) used a context free grammar, crowdsourcing, and pre-existing sources to create a dataset of variations on this query (e.g. 'I'm a man, what about you?'). They found that, the majority of the time, neither end-to-end neural research-oriented systems nor commercial voice assistants were able to answer these queries truthfully. This issue can be further complicated when integrating such functionality into a real system due to the sequential nature of dialogue. People may further question a system's status (e.g. 
'Are you sure?', 'But you sound so real...', 'Seriously?', etc.), requiring it to accurately keep track of the dialogue context and respond in an appropriate manner. Thus, even if an initial query may be correctly answered, there are no guarantees that follow-ups will be. Thought, Reason, and SentienceCiting Descartes' (1637) principle 'I think, therefore I am,' Faber (2020) suggests that, if speech is a representation of thought, then the appearance of thought can signify existence. While computing systems do not have thoughts, the language they output can give the appearance of thought by indicating that they hold opinions and morals or sentience. Seeking to assess the degree of sentience exhibited by commercial dialogue systems, Abercrombie et al. (2021) use the Living MachinesColl Ardanuy et al. (2020) labelling scheme and find that the surveyed systems exhibit high degrees of perceived animacy. Glaese et al. (2022) seek to mitigate such effects by implementing a 'rule' in their reinforcement learning system which penalises the appearance of having 'preference, feelings, opinions, or religious beliefs.' This is framed as a safety measure, intended to restrict anthropomorphism in a system's output. While computing systems cannot have values or morals, there have been attempts to align the generated output of dialogue systems with expressed human moral values.2 For example, Ziems et al. (2022) present a corpus of conflicting human judgements on moral issues which they label according to 'rules of thumb' that they hope explain why some system outputs are perceived as acceptable while others are not. Similarly, Jiang et al. (2022) 'teach morality' to a question answering system that Kim et al. (2022) have later embedded in an open-domain dialogue system. The question answering system, named Delphi with its connotations of omniscient wisdom, is trained in a supervised manner on a dataset of human moral judgements from sources such as Reddit to predict the 'correct' judgement given a textual prompt. While the authors claim that their system output is merely a descriptive reflection of the morality of an under-specified population, Talat et al. (2022) point out that the output for each query consists of a single judgement, phrased as an imperative, thereby giving the impression of humanlike reasoning and absolute knowledge of morality. Footnote 2: The data sources are often limited to specific populations, and thus only represent the morals or values of some people. Recent work (e.g., Sap et al., 2022) has investigated models for _theory of mind_, i.e., an entity's ability to infer other people's'mental states, such as beliefs, desires, and intentions, and to understand how mental states feature in everyday explanations and predictions of people's behaviour' (Apperly, 2012). Such work has been motivated by the proposition that language technologies require such capabilities to take decisions on behalf of their users. This proposition entails that agency should be shifted from humans to machines. Thus, the quest for theory of mind is one for further anthropomorphism systems, and a system's inability to perform the task can be understood as a limiting factor to anthropomorphism of the system. Agency and ResponsibilityDialogue systems are often referred to as conversational 'agents'. However, being an agent, i.e. having agency, requires intentionality, animacy, and, according to Yamamoto (2006), even a soul. An entity without agency cannot be responsible for what it produces Talat et al. (2022). 
Aside from the legal and ethical implications of suggesting otherwise Veliz (2021), systems acknowledging blame for errors or mistakes can add to anthropomorphic perceptions. Mahmood et al. (2022) found that increasing the apparent 'sincerity' with which a dialogue system accepts responsibility (on behalf of 'itself') causes users to perceive them to be more intelligent and likeable, potentially increasing anthropomorphism on several dimensions. Similarly, many dialogue systems have been criticised for 'expressing' controversial 'opinions' and generating toxic content. It is precisely due to their lack of agency and responsibility that developers have invested significant efforts in avoiding contentious topics (e.g. Glaese et al., 2022; Sun et al., 2022; Xu et al., 2021), leading to the creation of taboos for such systems, another particularly human phenomenon. **Empathy.** Recent work has sought for dialogue systems to produce empathetic responses to their users, motivated by improved user engagement and establishing 'common ground' (e.g. Sukhnushina et al., 2022; Zhu et al., 2022). However, dialogue systems are not capable of experiencing empathy, and are unable to correctly recognise emotions. Consequently, they are highly prone to producing inappropriate emotional amplification Curry and Cercas Curry (2023). Inability aside, the production of pseudo-empathy and emotive language serves to further anthropomorphise dialogue systems. **Humanlike Activities.** Beyond implying consciousness and sentience, and failing to deny humanness, Abercrombie et al. (2021) find that in a quarter of the responses from dialogue systems, they can be prone to making claims of having uniquely human abilities or engaging in activities that are, by definition, restricted to animate entities, like having family relationships, bodily functions (e.g., consuming food, crying), engaging in physical activity, or other pursuits that require embodiment, which they do not possess. Similarly, Gros et al. (2022) find that crowd-workers rate \(20-30\%\) of utterances produced by nine different systems as machine-impossible. They found that only one strictly task-based system, MultiWOZ (Budzianowski et al., 2018), did not appear as anthropomorphic to participants. Glaese et al. (2022) propose to address this concern by using reinforcement learning to prohibit systems from generating claims of having (embodied) experiences. **Pronoun Use.** Prior work has viewed the use of third person pronouns (e.g., 'he' and 'she') to describe dialogue systems as evidence of users personifying systems Abercrombie et al. (2021); Sutton (2020). The use of first person pronouns (e.g., 'me' or 'myself') in system output may be a contributing factor to this perception, as these can be read as signs of consciousness Faber (2020); Minsky (2006). Indeed, it is widely believed that 'I' can _only_ refer to people Noonan (2009); Olson (2002). Scruton (2017) contends that such self-attribution and self-reference permits people to relate as subjects, not mere objects, and that self-definition as an individual is part of the human condition itself. First person pronoun use may therefore contribute to anthropomorphism, either by design or due to their human-produced training data, for symbolic and data-driven dialogue systems, respectively. 
Moreover, while the above applies to English and many languages, such as those from the Indo-European family, many others feature different sets of pronouns or distinctions for animate and inanimate things Yamamoto (1999), and the self-referential production of these could further influence anthropomorphic perceptions. ### Register and Style Humans are adept at using linguistic features to convey a variety of registers and styles depending on the context Biber and Conrad (2009). It may therefore be preferable for automated system outputs to be functional in style and avoid social features to mitigate anthropomorphism. **Phatic Expressions.** Phrases such as pleasantries that are used to form and maintain social relations between humans but that do not impart any information can (unnecessarily) add to the sense of humanness conveyed when output by automated systems Leong and Selinger (2019). **Expressions of Confidence and Doubt.** Dinan et al. (2022) describe an 'imposter effect' where people overestimate the factuality of generated output. However, Mielke et al. (2022) find that expressed confidence is poorly calibrated to the probabilities that general knowledge questions are correctly answered. They therefore train a dialogue system to reflect uncertainty in its outputs, altering the content from the purely factual to incorporate humanlike hedging phrases such as 'I'm not sure but...'. This bears similarity to the text-to-speech research (see §3.1) which suggests that disfluencies can increase anthropomorphism. Thus, while overestimation can lead to an imposter effect, hedging can boost anthropomorphic signals. **Personas.** Many dialogue systems are developed with carefully designed personas (in the case of commercial systems) or personas induced via crowd-sourced datasets. These are often based on human characters and although they are, in practice, merely lists of human attributes and behaviours (see §3.2),3 the very idea of instilling systems with human character-based personas is an effort towards anthropomorphism. Glaese et al. (2022) address this by including a rule against their system appearing to have a human identity. Footnote 3: For example, each persona in Personachat (Zhang et al., 2018) consists of ‘_I am a vegetarian. I like swimming. My father used to work for Ford. My favorite (sic) band is Maroon5. I got a new job last month, which is about advertising design._’ ### Roles Related to the content of their output are the roles that dialogue systems are given, consciously and unconsciously, by their designers and users. Many of these can shift dialogue systems from the realm of tools towards one of humanlike roles such as provision of companionship. **Subservience.** The majority of systems are conceived as being in the service of people in subservient, secretarial roles (Lingel and Crawford, 2020). This has led to users verbally abusing systems (West et al., 2019), going beyond mere expressions of frustration that one might have with a poorly functioning tool to frequently targeting them with gender-based slurs (Cercas Curry et al., 2021). In such circumstances systems have even been shown to respond subserviently to their abusers, potentially further encouraging the behaviour (Cercas Curry and Rieser, 2018). **Unqualified Expertise.** Systems can come to present as having expertise without appropriate qualification (see §3.3), in large part due to their training data (Dinan et al., 2022). 
For example, Abercrombie and Rieser (2022) find that both commercial rule-based and end-to-end research systems provide high-risk diagnoses and treatment plans in response to medical queries.4 Footnote 4: See also Omri et al. (2023) for similar findings. In addition, as conversational question answering systems are increasingly promoted as replacements for web browser-based search, users can be further given the impression that a dialogue system has the expertise to provide a singular correct response rather than a selection of ranked search results (Shah and Bender, 2022). TerminologyThere is increasing awareness that the anthropomorphic language and jargon used to describe technologies such as language models contributes to inaccurate perceptions of their capabilities, particularly among the general public (Hunger, 2023; Salles et al., 2020; Shanahan, 2023). While this is also an issue for research dissemination and journalism more widely, dialogue systems themselves are prone to output references to their own machinic and statistical processes with anthropomorphically loaded terms such as 'know', 'think', 'train', 'learn', 'understand', and 'intelligence'. ## 4 Consequences of Anthropomorphism The anthropomorphism of dialogue systems can induce a number of adverse societal effects, e.g. they can generate unreliable information and reinforce social roles, language norms, and stereotypes. Trust and DeceptionWhen people are unaware that they are interacting with automated systems they may behave differently than if they know the true nature of their colocutor. The trust placed in systems grows as they become more humanlike, whether or not the trust is warranted. This may be even more problematic when users are members of vulnerable populations, such as the very young, the elderly, or people with illnesses or disabilities. Although dialogue systems may be a possible solution to loneliness, socially disconnected individuals can be particularly vulnerable to such trust issues. Children have also been shown to overestimate the intelligence of voice assistants such as Amazon Alexa, and to be unsure of whether they have emotions or feelings (Andries and Robertson, 2023). Given UNESCO's declaration that children have the right to participate in the design of the technological systems that affect them (Dignum et al., 2021), developers may be obliged to bear these considerations in mind. Gendering MachinesPeople may gender technologies in the face of even minimal gender markers (Reeves and Nass, 1996), as evident in commercial dialogue systems (Abercrombie et al., 2021). However, even without _any_ gender markers, people still tend to apply binary gender to dialogue systems (Aylett et al., 2019; Sutton, 2020), as was the case for the 'genderless' voice assistant Q. In a summary of West et al. 
(2019), UNESCO (2019) argue that that encouraging or enabling users to predominantly gender systems as female reinforces gender stereotypes of women as inferior to men: _[digital assistants] reflect, reinforce and spread gender bias; model acceptance and tolerance of sexual harassment and verbal abuse; send explicit and implicit messages about how women and girls should respond to requests and express themselves; make women the 'face' of glitches and errors that result from the limitations of hardware and software designed predominately by men; and force synthetic 'female' voices and personality to defer questions and commands to higher (and often male) authorities._ Furthermore, while some companies have begun to offer greater diversity of voices and move away from default female-gendered voices (Iyengar, 2021), non-binary or gender-ambiguous dialogue systems such as Danielescu et al. (2023)'s Sam are almost nonexistent, leaving people who identify as such without representation. By designing anthropomorphic systems or even simply leaving space for their (gendered) personification by users, system developers risk propagating stereotypes and enabling these harms. Language Variation and WhitenessHistorically, NLP technologies have been developed to capture the language use of particular demographics, in part due to their training data. While there are many potential benefits to language technologies like dialogue systems, successful interactions with them require people to conform their language use to that recognised by the technologies. Given the proclivity of NLP to centre white, affluent American dialects (Hovy and Prabhumoye, 2021; Joshi et al., 2020), language variants that deviate from these norms are less likely to be correctly processed (Tatman, 2017). As a result, people code switch when engaging with dialogue systems in order to have successful interactions (Foster and Stuart-Smith, 2023; Harrington et al., 2022). This can represent a form of language policing, in which people are given the choice to conform to language that is recognisable by the technology, or forego using it--and its potential benefits--altogether. Consequently, as people conform to language variants that are recognised by dialogue systems, they also conform to whiteness and the continued erasure of marginalised communities. The personification of such systems could certainly exacerbate this erasure of marginalised communities, for instance through limiting diverse language data. Furthermore, system outputs typically suffer from the same standardisation, for instance prioritising specific accents that conform to western notions of acceptability and prestige (see SS3). Thus users from marginalised communities are in a position where not only do they have to adapt their accent, but also (given the tendency to do personify), personify 'white'-centred dialogue systems that are often marketed as 'oracles of knowledge'. ## 5 Recommendations Dialogue systems are used for a wide variety of tasks, and fine-grained recommendations may only apply to a small set of contexts. We therefore make broad recommendations for consideration in dialogue system design: designers should recognise people's tendency to personify, consider whether anthropomorphic tools are appropriate, and reassess both their research goals and the language they use to describe the systems they build. Recognise tendencies to personifyIn language, there is a distinction between form (e.g. 
in statistical language modelling, where the task may be string prediction) and meaning (the relationship between form and communicative intent) (Grice, 1988). Bender and Koller (2020) argue that humans inherently derive meaning from signals, i.e., form, regardless of the presence of communicative intent. It is therefore crucial for technology developers to be sensitive towards this tendency, as it applies to dialogue systems, given that until recently open domain dialogue was exclusively possible between humans. That is, regardless of whether it is part of a dialogue system's deliberate design to use specific linguistic forms (e.g., the anthropomorphic cues outlined in §3), the listener will invariably perceive communicative intent. Thus unnecessary integration of anthropomorphic linguistic cues can cause people to attribute humanlike cognitive abilities to systems--as in the case of Google Duplex, which excessively leveraged disfluencies. Creators of dialogue systems should recognise these tendencies, consider the anthropomorphic cues people may pick up on, and avoid sending such signals, whether by design or through lack of consideration (e.g. from inclusion in datasets). **Consider the appropriateness of anthropomorphic tools** Given our inherent nature to attribute meaning to signals, one must consider the _appropriateness of the tool and use cases_ (Dinan et al., 2022; Bender et al., 2021) when designing dialogue systems, in order to avoid the (over-)integration of anthropomorphic cues. Indeed, it is only within a given context that one can make a judgement on whether anthropomorphism is a concern. For instance, anthropomorphising one's vacuum cleaning robot (i.e. shouting at it in frustration for not cleaning appropriately) is of less concern than anthropomorphising a dialogue system marketed as 'social' or 'empathetic', or technology sold as a 'singular oracle of (all) knowledge'. We therefore argue that developers need to focus on the appropriateness of anthropomorphic tools in order to limit the negative consequences of anthropomorphism, which can lead to false impressions of a system's capabilities. **Reassess research goals** Traditionally, the goal of Artificial Intelligence research has been to create systems that would exhibit intelligence indistinguishable from human behaviour. Text-to-speech systems, for instance, are evaluated on how natural and fluent the output sounds. Though intelligence and understanding should not be conflated with systems that exhibit humanlike behaviour (Bender and Koller, 2020), the human tendency to anthropomorphise convinces us of a machine's apparent intelligence (Proudfoot, 2011). It is in part due to this longstanding goal of anthropomorphic systems that there only exists a small body of work that does not seek to anthropomorphise, despite the growing awareness of the harms that can arise from anthropomorphising systems. While it is infeasible to create a comprehensive mapping of which linguistic cues to use and which not, we discuss recommendations that arise from the work surveyed. For example, Wilson and Moore (2017) recommend that developers calibrate vocal features such as pitch and pitch shimmer to produce synthesised voices that people recognise as non-human. In an analysis of reviews of commercial voice assistants, Volkel et al. (2020) find that the big five personality traits (De Raad, 2000) do not adequately describe user expectations of systems' 'personalities'.
The only consistently desired trait was agreeableness, as users expect that systems react promptly and reliably to queries (Volkel et al., 2020). Thus, imbuing voice assistants and dialogue systems with humanlike personality traits does not ensure alignment with people's expectations of system behaviour. We therefore recommend that designers and developers forego embedding humanlike personality traits in dialogue systems. **Avoid anthropomorphic system description** Irrespective of any actual humanlike qualities that dialogue systems might possess, there is widespread public confusion surrounding the nature and abilities of current automated language technologies. This confusion extends from children (Andries and Robertson, 2023) to adults (including some journalists, policymakers, and business people) who are convinced, on the one hand, of humanity's imminent enslavement to super-intelligent artificial agents (to the neglect of actual harms already being propagated by technological systems), or, on the other, that they provide super-human solutions to the world's problems (Hunger, 2023; Klein, 2023). As discussed in §3.4, the content of systems' outputs can reinforce such anthropomorphic perceptions. However, of even more influence is the language used by system developers to describe their work. The tendency of people who do know how technologies are built to use anthropomorphic language represents, according to Salles et al. (2020, p. 93), 'a significant failure in scientific communication and engagement'. Although anthropomorphic terminology is deeply rooted in the argot of computer scientists, particularly those working in 'artificial intelligence', and while there exist significant motivations to continue to create hype around products and research (Hunger, 2023), practitioners should reflect on how the language they use affects people's understanding and behaviour. ## 6 Conclusion Anthropomorphising dialogue systems can be attractive for researchers in order to drive user engagement. However, the production of highly anthropomorphic systems can also lead to downstream harms such as (misplaced) trust in generated misinformation. Even if developers and designers attempt to avoid including any anthropomorphic signals, humans may still personify systems and perceive them as anthropomorphic entities. For this reason, we argue that it is important to carefully consider the ways in which systems might be perceived anthropomorphically and to choose the features that are appropriate for each situation. By carefully considering how a system may be anthropomorphised and deliberately selecting the attributes that are appropriate for each context, developers and designers can avoid falling into the trap of creating mirages of humanity. ### Limitations While we have attempted to enumerate the linguistic factors that can increase the likelihood that users will view dialogue systems as anthropomorphic, this list of features is not exhaustive. As we describe in section 2, anthropomorphism varies from person to person, and people may react differently to different aspects of a system's design. This paper represents only a starting point for researchers and developers to consider the implications that their design choices may have. In this paper, due to the backgrounds of the authors as speakers of Indo-European languages and the dominance of English in NLP research, we have focused primarily on English language dialogue systems.
However, it should be noted that other languages have features, such as grammatical ways of denoting animacy [23] and gender, that could influence users' personification of systems, and which developers should consider if they wish to limit anthropomorphism. ### Ethical Considerations Although our manuscript outlines ways to create dialogue systems while minimising their potential anthropomorphism and personification, it could also be used as a guide to creating anthropomorphic systems. Our aim is to highlight the risks and provide researchers, developers, and designers with a path towards addressing the concerns that arise from anthropomorphism in dialogue systems, an area that is particularly relevant at the time of writing due to the introduction of systems such as OpenAI's ChatGPT and Microsoft's Sydney, which have high surface-form language generation performance.
2306.10801
Theory dependence of black hole interior reconstruction and the extended strong subadditivity
An AdS eternal black hole in equilibrium with a finite temperature bath presents a Hawking-like information paradox due to a continuous exchange of radiation with the bath. The non-perturbative gravitational effect, the replica wormhole, cures this paradox by introducing a non-trivial entanglement wedge for the bath after Page time. In this paper, we analyse the theory dependence of this non-perturbative effect by randomising the boundary conditions of some of the bulk matter fields. We explicitly analyse this in JT gravity by introducing a matter CFT in the AdS region with random boundary conditions at the AdS boundary that are drawn from a distribution. Using the island formula and the extended strong subadditivity due to Carlen and Lieb, we show that at late times the black hole interior is contained inside the entanglement wedge of a reference Hilbert space that encodes the information about the random boundary conditions. Consequently, the reconstruction of the black hole interior from the radiation, in particular the region near the singularity, requires a detailed knowledge of the theory.
Sitender Pratap Kashyap, Roji Pius, Manish Ramchander
2023-06-19T09:39:34Z
http://arxiv.org/abs/2306.10801v3
# Theory dependence of black hole interior reconstruction and the extended strong subadditivity ###### Abstract An AdS eternal black hole in equilibrium with a finite temperature bath presents a Hawking-like information paradox due to a continuous exchange of radiation with the bath. The non-perturbative gravitational effect, the replica wormhole, cures this paradox by introducing a non-trivial entanglement wedge for the bath after Page time. In this paper, we analyse the theory dependence of this non-perturbative effect by randomising the boundary conditions of some of the bulk matter fields. We explicitly analyse this in JT gravity by introducing a matter CFT in the AdS region with random boundary conditions at the AdS boundary that are drawn from a distribution. Using the island formula and the extended strong subadditivity due to Carlen and Lieb, we show that at late times the black hole interior is contained inside the entanglement wedge of a reference Hilbert space that encodes the information about the random boundary conditions. Consequently, the reconstruction of the black hole interior from the radiation, in particular the region near the singularity, requires a detailed knowledge of the theory. ## 1 Introduction The resolution of the Hawking information paradox [1] of an evaporating black hole in AdS requires introducing a non-trivial entanglement wedge that contains the black hole interior after the Page time for the Hawking radiation [2; 3; 4; 5]. This appearance of a non-vanishing entanglement wedge for Hawking radiation after the Page time is the result of a non-perturbative gravitational effect, the replica wormhole. Usually, non-perturbative effects are highly theory dependent. The energy spectrum of a black hole [6; 7] and the S-matrix that determines the formation and evaporation of a black hole [8] are examples of such quantities. Since the ability of late Hawking radiation to reconstruct the black hole interior is a non-perturbative effect, it is natural to suspect that the interior reconstruction might depend on the details of the theory. An AdS eternal black hole in equilibrium with a finite temperature bath also comes with an information paradox very much like the Hawking information paradox. The paradox, the unbounded growth of the bath entropy, is due to the continuous exchange of the Hawking radiation and the radiation from the bath. The resolution of this paradox also requires introducing a non-trivial entanglement wedge for the bath radiation after the Page time [4]. Compared to an evaporating black hole, a black hole in equilibrium with a finite temperature bath is a more convenient setup for studying the theory dependence of the reconstruction from radiation due to the absence of any backreaction. For an eternal AdS black hole with no matter escaping the AdS boundary, it has already been demonstrated in [9] that the bulk reconstruction of the interior of the black hole is highly theory dependent at late times. They achieved this by making the boundary conditions of the bulk matter fields random, and showing that the reference Hilbert space that encodes the information about this randomness possesses a non-trivial entanglement wedge that contains the black hole interior including the region near the singularity. In this paper, by following [9], we analyse the dependence of the interior reconstruction using the bath radiation on the boundary conditions of the bulk matter fields.
For this we consider a JT gravity black hole in equilibrium with a non-gravitating bath at finite temperature and introduce a matter CFT coupled to gravity in the black hole region having reflecting boundary conditions for the fields in it. We assume that these boundary conditions are drawn from a probability distribution. We denote the probability for the \(i^{th}\) field to have a boundary condition \(J_{i}\) as \(P(J_{i})\)1. Then the black hole density matrix defined using the Euclidean path integral will depend on the boundary conditions \({\bf J}=\{J_{1},\cdots,J_{i},\cdots\}\) of the bulk CFT matter fields and the associated probability distribution \(P({\bf J})=\{P(J_{1}),\cdots,P(J_{i}),\cdots\}\). The purification of this density matrix requires the bath Hilbert space \({\cal H}_{bath}\) and also the introduction of an environment, an auxiliary reference Hilbert space \({\cal H}_{journal}\), which is referred to as the 'journal' Hilbert space. The journal Hilbert space encodes the information about the boundary conditions of the bulk CFT matter fields. Therefore the dependence of the black hole interior reconstruction using the bath radiation on the boundary conditions of the matter fields can be characterised by determining the entanglement wedge of the journal [10]. The goal of this paper is the determination of the entanglement wedge of the journal for this setup. Footnote 1: In the SYK picture, \({\bf J}=\{J_{1},\cdots,J_{i},\cdots\}\) would correspond to the random couplings that appear in the SYK Hamiltonian. The physical significance of this problem was already discussed in [9] for an evaporating black hole, which has two additional systems other than the black hole, the Hawking radiation and the journal. Our setup similarly has two additional systems other than the black hole, the bath radiation and the journal. However, the absence of backreaction makes our setup more convenient for analysing the same problem. After the black hole Page time the bath radiation and the journal strive for the ownership of the interior of the black hole. The winner is expected to be decided by the rate of the entropy growth of the two systems. As computed in this paper, initially the entropy of the bath radiation grows linearly and the journal entropy grows logarithmically. Therefore, at first the bath radiation is expected to capture the black hole interior. The non-triviality is in figuring out whether the ownership of the interior is ever transferred to the journal. If the bath radiation retains the interior forever, then it means that the interior reconstruction is insensitive to details of the bulk theory, which suggests that the unknown details of the bulk theory can be determined by making measurements on the bath radiation. If the ownership is transferred to the journal at a late time, then it implies that interior reconstruction is theory dependent. The main result of this paper is that the bath radiation transfers ownership of the black hole interior to the journal at a later time. Let us briefly delineate how we proceed. We determine the entanglement wedges of the subsystems, black hole, bath and journal, by demanding that the entropies of these subsystems satisfy all the constraints imposed by unitarity. There are two such constraints: the first is that the von Neumann entropy of any subsystem must be less than its thermal entropy, and the second is that the entropies of the subsystems must satisfy the extended strong subadditivity (eSSA) due to Carlen and Lieb [11].
The first constraint demands that the von Neumann entropy \(S(\rho_{BH})\) of the eternal black hole density matrix \(\rho_{BH}\) must be less than \(2S^{0}_{BH}\), where \(S^{0}_{BH}\) is the Bekenstein-Hawking entropy of the one side of the eternal black hole. Since black hole has only a finite number of degrees of freedom, \(S^{0}_{BH}\) is finite. This demands that the entanglement entropy of the black hole must not be an ever-growing function of boundary time. However, the black hole entropy obtained by the replica computation without including any Euclidean wormhole contribution becomes more than \(2S^{0}_{BH}\) after the black hole Page time. This violation of thermal entropy bound can be cured by removing the black hole interior from the combined entanglement wedge of the boundary CFTs dual to the eternal black hole. The resulting generalised entropy of the black hole after the Page time saturates the thermal entropy bound \(2S^{0}_{BH}\). On the contrary, the entropy \(S(\rho_{bath})\) of the bath density matrix \(\rho_{bath}\) and the entropy \(S(\rho_{journal})\) of the journal density matrix \(\rho_{journal}\) can have unbounded growth due to the infinite number of degrees of freedom they possess. Hence one naively expects that the interior of the black hole after the Page time may be co-owned by the bath and the journal.2 However, using the second constraint we argue below that this is not true. Footnote 2: By co-ownership, we mean that neither the bath nor the journal individually owns the interior, only the combined system does. If the state of the combined system of the black hole, the bath and the journal is pure, then the eSSA after the black hole Page time states that \[S(\rho_{bath})+S(\rho_{journal})-2S^{0}_{BH}\geq 2\ max\left\{S(\rho_{journal})- 2S^{0}_{BH},S(\rho_{bath})-2S^{0}_{BH},0\right\}. \tag{1}\] Using the replica trick we can compute \(S(\rho_{bath})\) and \(S(\rho_{journal})\). In the absence of any non-trivial islands, \(S(\rho_{bath})\) and \(S(\rho_{journal})\) grow linearly and logarithmically respectively with respect to the boundary time. Due to the larger growth of bath entropy, the eSSA takes the following form right after the black hole Page time \[S(\rho_{bath})-S(\rho_{journal})\leq 2S^{0}_{BH}. \tag{2}\] This inequality can be satisfied only if the bath owns an island that contains the black hole interior after black hole Page time. After including such an island, \(S(\rho_{bath})\) becomes \(2S^{0}_{BH}\). However, at a later time the logarithmic growth of \(S(\rho_{journal})\) makes it larger than \(2S^{0}_{BH}\). At this stage the eSSA takes the following form \[S(\rho_{journal})-S(\rho_{bath})\leq 2S^{0}_{BH}. \tag{3}\] Clearly, this inequality is satisfied until \(S(\rho_{journal})\) becomes \(4S_{BH}^{0}\). Subsequently, in order to satisfy the eSSA, the bath must transfer the ownership of the black hole interior to the journal. Introducing such a non-trivial entanglement wedge that contains the interior of the black hole makes the rate of the entropy growth of the journal same as that of the bath and saturates the eSSA, thus restoring unitarity. This implies that the reconstruction of the black hole interior using the bath radiation at late times requires the complete description of the theory which includes specifying the boundary conditions of all the fields at the AdS boundary. The paper is organised as follows. 
In section 2, we briefly describe the setup, which is an eternal AdS\({}_{2}\) black hole in equilibrium with a finite temperature non-gravitating bath with two kinds of matter, one having transparent boundary conditions along the boundary of the gravitational region, and another having random reflecting boundary conditions drawn from a distribution. In section 3, we determine the entanglement wedge of the random boundary conditions to characterise the theory dependence of the black hole interior reconstruction using the bath radiation. In section 4, we reiterate the result obtained and briefly touch upon some of the future directions that deserve immediate attention. ## 2 The setup Consider a black hole solution of JT gravity with inverse temperature \(\beta\) coupled to a bath having the same temperature. We assume that gravity is absent in the bath. We introduce two CFTs into this spacetime. They will be referred to as CFT\({}_{1}\) and CFT\({}_{2}\). The CFT\({}_{1}\) has central charge \(c_{1}\) and CFT\({}_{2}\) has central charge \(c_{2}\). The CFT\({}_{2}\) is restricted to the gravitating AdS\({}_{2}\) region, while the CFT\({}_{1}\) lives in the full spacetime, which is the AdS\({}_{2}\) and the bath region together. This is done by setting transparent boundary conditions for fields in CFT\({}_{1}\) and reflecting boundary conditions for fields in CFT\({}_{2}\). Both the CFTs are coupled to the metric in the gravitational region. However, they are not coupled to the dilaton field. This makes the black hole spacetime locally AdS\({}_{2}\), even though gravity is dynamical in the black hole region. We also assume that CFT\({}_{1}\) and CFT\({}_{2}\) do not directly interact with each other. An additional feature of CFT\({}_{2}\) is that the boundary conditions of the fields in this theory are drawn from a distribution. On the dual holographic side this arises from "unknown couplings" whose information is present in the aforementioned system called the journal [9; 12; 13]. ### Black hole in equilibrium with a bath The black hole solutions of JT gravity have been used widely [3; 14; 15; 9; 16] as a toy model for studying black hole evaporation. For detailed reviews see [17; 18; 19; 20]. The action for JT gravity coupled to CFT\({}_{1}\) is given by \[S= \frac{\phi_{0}}{16\pi G_{N}}\left[\int_{\mathcal{M}}d^{2}x\sqrt{-g}\,R+2\int_{\partial\mathcal{M}}K\right] \tag{1}\] \[+\frac{1}{16\pi G_{N}}\left[\int_{\mathcal{M}}d^{2}x\sqrt{-g}\phi\left(R+\frac{2}{\ell_{AdS}^{2}}\right)+2\int_{\partial\mathcal{M}}\phi_{b}K\right]+I_{\text{CFT}_{1}}[g]\] where \(R\) is the Ricci scalar of the spacetime \(\mathcal{M}\), \(K\) is the trace of the extrinsic curvature of the boundary \(\partial\mathcal{M}\), \(\phi_{b}\) is the boundary value of the dilaton field and \(\ell_{AdS}\) is the AdS radius, which we will set to \(1\). Also, \(G_{N}\) is Newton's constant and \(\phi_{0}\) is a constant that sets the extremal entropy to be \(\frac{\phi_{0}}{4G_{N}}\). The specific manner in which \(\partial\mathcal{M}\) is carved out of pure AdS\({}_{2}\) is responsible for breaking the reparametrization symmetry and it gives rise to non-trivial bulk dynamics [21]. It is important to note that \(I_{\rm CFT_{1}}\) does not couple to the dilaton \(\phi\). Consequently, the theory satisfies the constraint \(R=-2\) and hence the geometry is locally the same as that of pure AdS\({}_{2}\).
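As a quick cross-check of the statement that the constraint \(R=-2\) makes the geometry locally AdS\({}_{2}\), the following sketch (an illustration, not part of the original paper) computes the Ricci scalar of the Poincare-patch metric written down just below in equation (2), with \(\ell_{AdS}=1\). It uses sympy and the textbook formulas for the Christoffel symbols and the Ricci tensor.

```python
import sympy as sp

t, z = sp.symbols('t z', positive=True)
x = [t, z]
dim = 2

# Poincare-patch AdS_2 metric ds^2 = (-dt^2 + dz^2)/z^2, with the AdS radius set to 1.
g = sp.Matrix([[-1/z**2, 0], [0, 1/z**2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d])) for d in range(dim))/2)
           for c in range(dim)] for b in range(dim)] for a in range(dim)]

# Ricci tensor R_{bc} and Ricci scalar R = g^{bc} R_{bc}
def ricci(b, c):
    expr = 0
    for a in range(dim):
        expr += sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        for d in range(dim):
            expr += Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
    return expr

R = sp.simplify(sum(ginv[b, c]*ricci(b, c) for b in range(dim) for c in range(dim)))
print(R)   # prints -2, consistent with the constraint R = -2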
To describe the boundary, we use the Poincare patch, whose metric is \[ds^{2}=\frac{-dt^{2}+dz^{2}}{z^{2}}=\frac{-4dx^{+}dx^{-}}{(x^{+}-x^{-})^{2}}. \tag{2}\] where \(x^{\pm}=t\mp z\). The standard boundary conditions on the cut out are imposed to be \[g_{uu}|_{bdy}=\frac{1}{\epsilon^{2}}=\frac{1}{z^{2}}\left(-\left(\frac{dt}{du }\right)^{2}+\left(\frac{dz}{du}\right)^{2}\right)\Bigg{|}_{bdy}\qquad\phi|_{ bdy}=\phi_{b}=\frac{\phi_{r}}{\epsilon}. \tag{3}\] Here \(u\) is the boundary time. In these coordinates, the boundary is given by \(t=f(u),z=\epsilon f^{\prime}(u)\), where the gluing function \(f(u)\) is obtained from the energy balance equation as follows. The ADM mass of the gravitating region is \[M(u)=-\frac{\phi_{r}}{8\pi G_{N}}\{f(u),u\}. \tag{4}\] Energy conservation requires that the change in above ADM mass equal the net flux of energy across the boundary curve \[\frac{dM(u)}{du}=-\frac{d}{du}\left(\frac{\phi_{r}}{8\pi G_{N}}\{f(u),u\} \right)=T_{y^{+}y^{+}}-T_{y^{-}y^{-}} \tag{5}\] which, given the stress tensor profile, can be solved for \(f(u)\). Now vary the action with respect to metric in the Poincare patch to obtain the equations for dilaton: \[2\partial_{x^{+}}\partial_{x^{-}}\phi+\frac{4}{(x^{+}-x^{-})^{2}}\phi =16\pi G_{N}\langle T_{x^{+}x^{-}}\rangle,\] \[-\frac{1}{(x^{+}-x^{-})^{2}}\partial_{x^{+}}\left((x^{+}-x^{-})^{ 2}\partial_{x^{+}}\phi\right) =8\pi G_{N}\langle T_{x^{+}x^{+}}\rangle, \tag{6}\] \[-\frac{1}{(x^{+}-x^{-})^{2}}\partial_{x^{-}}\left((x^{+}-x^{-})^{ 2}\partial_{x^{-}}\phi\right) =8\pi G_{N}\langle T_{x^{-}x^{-}}\rangle.\] Solution to these equations, up to an \(SL(2,R)\) transformation, can be written as \[\phi(x^{+},x^{-}) = -\frac{2\pi\phi_{r}}{\beta}\frac{x^{+}+x^{-}}{x^{+}-x^{-}}-\frac{ 8\pi G_{N}}{x^{+}-x^{-}}\int_{0}^{x^{-}}dt(x^{+}-t)(x^{-}-t)T_{x^{-}x^{-}} \tag{7}\] \[+\frac{8\pi G_{N}}{x^{+}-x^{-}}\int_{0}^{x^{+}}dt(x^{+}-t)(x^{-}- t)T_{x^{+}x^{+}}.\] Then the dilaton takes form [16; 22] \[\phi(x^{+},x^{-})=-\phi_{r}\left(\frac{2f^{\prime}(y^{+})}{x^{+}-x^{-}}-\frac{2f^ {\prime\prime}(y^{+})}{f^{\prime}(y^{+})}\right). \tag{8}\] Now we describe how to make use of the above equations to couple the two sides of a black hole solution in the AdS\({}_{2}\) to the Minkowski bath having coordinates \(y^{\pm}=f^{-1}(y^{\pm})\) and metric \[ds^{2}=-\frac{1}{\epsilon^{2}}dy^{+}dy^{-}. \tag{9}\] We demand that the black hole is in equilibrium with the bath, there is no net flux and hence \[\partial_{u}\{f(u),u\}=0. \tag{10}\] A solution that corresponds to a temperature \(\frac{1}{\beta}\) is given by \[f(u)=e^{\frac{2\pi u}{\beta}}. \tag{11}\] Having solved for the gluing function, we can use it to extend the coordinates \(y^{\pm}\) that were earlier defined in the bath region to the gravity region as well via \[x^{\pm}=f(y^{\pm})=\pm\exp\left(\pm\frac{2\pi}{\beta}y^{\pm} \right). \tag{12}\] Given the map (12), the Poincare metric in (2) becomes \[ds^{2}=-\left(\frac{2\pi}{\beta}\right)^{2}\frac{dy^{+}dy^{-}}{ \sinh^{2}\frac{\pi}{\beta}(y^{-}-y^{+})},\] and the dilaton profile takes the form \[\phi=\frac{2\pi\phi_{r}}{\beta}\frac{1}{\tanh\frac{\pi}{\beta}( y^{-}-y^{+})}.\] We will however be mostly working in Kruskal-Szekeres coordinates \[w^{\pm}=\pm e^{\pm\frac{2\pi y^{\pm}_{R}}{\beta}}=\pm\left(x^{ \pm}_{R}\right)^{\pm 1}\qquad\text{ for right side of the glued geometry}\] \[w^{\pm}=\mp e^{\mp\frac{2\pi y^{\pm}_{I}}{\beta}}=\mp\left(x^{ \pm}_{L}\right)^{\mp 1}\qquad\text{ for left side of the glued geometry}. 
\tag{13}\] In these coordinates, the black hole metric takes the form \[ds^{2}=\frac{4dw^{-}dw^{+}}{\left(1+w^{-}w^{+}\right)^{2}}, \tag{14}\] and the dilaton profile becomes \[\phi(w^{+},w^{-})=\phi_{0}+\frac{2\pi\phi_{r}}{\beta}\frac{1-w^{+}w^{-}}{1+w^{+}w^{-}}. \tag{15}\] Therefore, the location of the singularity is given by \(w^{+}w^{-}=\frac{1}{\theta}\), where \(\theta=\frac{2\pi\phi_{r}-\beta\phi_{0}}{2\pi\phi_{r}+\beta\phi_{0}}\). Further, the future horizon of the black hole is at \(w^{-}=0\) and the past horizon is at \(w^{+}=0\). Finally, the location of the physical boundary of the black hole geometry that is being glued to the bath is given by \(w^{+}w^{-}=-e^{\frac{2\pi\epsilon}{\beta}}\). See figure 1. ### Random boundary conditions and the journal Let us introduce the second CFT, the CFT\({}_{2}\), in the AdS\({}_{2}\) region with reflecting boundary conditions for the matter fields along its boundary. Since there is no additional net flow across the interface between the bath and the AdS\({}_{2}\) region due to CFT\({}_{2}\), the introduction of CFT\({}_{2}\) does not change the geometry of the spacetime. For computational tractability we have chosen a CFT\({}_{2}\) that does not interact with CFT\({}_{1}\). Assume that the probability for the \(i^{th}\) field in CFT\({}_{2}\) to have a boundary condition \(J_{i}\) is \(P(J_{i})\). As mentioned in the introduction, the information about the boundary conditions of the CFT\({}_{2}\) matter fields \({\bf J}=\{J_{1},\cdots,J_{i},\cdots\}\) and the associated probability distribution \(P({\bf J})\) is encoded in the density matrix of the black hole. This black hole density matrix cannot be purified by the bath Hilbert space \({\cal H}_{bath}\) alone; it also requires introducing an auxiliary reference Hilbert space \({\cal H}_{\bf J}\), which is referred to as the 'journal' Hilbert space. Let \(\{|J_{i}\rangle_{journal}\}\), \(\{|\psi_{k},J_{i}\rangle_{BH}\}\) and \(\{|\gamma_{k^{\prime}}\rangle_{bath}\}\) be bases for \({\cal H}_{journal}\), \({\cal H}_{BH}\) and \({\cal H}_{bath}\) respectively. We choose \(\{|J_{i}\rangle_{journal}\}\) to be orthonormal. Then the purified state can be expressed as \[|\Psi\rangle=\sum_{i}\sqrt{P(J_{i})}\left(\sum_{k,k^{\prime}}A_{k,k^{\prime}}|\psi_{k},J_{i}\rangle_{BH}|\gamma_{k^{\prime}}\rangle_{bath}\right)|J_{i}\rangle_{journal}. \tag{16}\] Each \(|J_{i}\rangle_{journal}\) corresponds to a choice of boundary condition for CFT\({}_{2}\) at the physical boundary of the eternal black hole spacetime. Figure 1: Eternal black hole in thermal equilibrium with a bath can be described using the \(w\)-plane. The right and left Rindler wedges in the \(w\)-plane describe the right and left sides of the black hole coupled to a non-gravitating bath having the same temperature as the black hole. We take the CFT\({}_{2}\) to be a free theory of \(c_{2}\) non-compact bosons \(X_{1},\cdots,X_{c_{2}}\) with action \[S=\sum_{i=1}^{c_{2}}\frac{1}{2\pi}\int d^{2}w\partial X_{i}\bar{\partial}X_{i}. \tag{17}\] For this theory the boundary condition \(J_{i}\) corresponds to the boundary value of the boson \(X_{i}\) at the AdS boundary. We also assume that the boundary conditions are drawn from a Gaussian distribution having standard deviation \(1/\delta\), \[P\left(J\right)=\frac{\delta}{\sqrt{2\pi}}e^{-\frac{\delta^{2}}{2}J^{2}}. \tag{18}\] It was shown in [9] that for an eternal black hole the boundary time evolution produces entanglement growth between the black hole and the journal.
This leads to an unbounded logarithmic growth of the journal entropy, producing a unitarity paradox. This information paradox was resolved by introducing an island for the journal which includes the interior of the black hole. ## 3 The entanglement wedge of the journal In this section, we shall determine the entanglement wedge of the journal at late times, which is the main goal of this paper. Figure 2: The equal time slice \(\Sigma_{u}\) in the glued spacetime intersects the right AdS boundary at \(B_{u}^{R}\) and the left AdS boundary at \(B_{u}^{L}\). ### Black hole quantum extremal surfaces Consider the setup described in section 2. The early time von Neumann entropy of the black hole density matrix can be computed using the replica trick and is given by \[\mathbf{S}_{BH}(u)=S^{1}_{BH}(u)+S^{2}_{BH}(u), \tag{10}\] where \(S^{1}_{BH}(u)\) and \(S^{2}_{BH}(u)\) are the CFT\({}_{1}\) and CFT\({}_{2}\) contributions to the black hole entropy. The CFT\({}_{1}\) contribution can be obtained from the twist operator correlation function as follows \[S^{1}_{BH}(u)=-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\langle\sigma_{1}(B^{L}_{u})\sigma_{1}(B^{R}_{u})\rangle. \tag{11}\] Here \(\sigma_{1}\) denotes the twist fields in the orbifold version of CFT\({}_{1}\), having scaling dimension \(\Delta_{n}=\frac{c_{1}}{12}\left(n-\frac{1}{n}\right)\). The points \(B^{L}_{u}\) and \(B^{R}_{u}\) are the points on the left and right boundary where the equal time slice \(\Sigma_{u}\) corresponding to the boundary time \(u\) intersects the left and right black hole boundaries, see figure 2. As \(u\) increases, it is assumed that the point \(B^{R}_{u}\) moves along the positive time direction of the right boundary and the point \(B^{L}_{u}\) moves along the negative time direction of the left boundary. The correlation function \(\langle\sigma_{1}(B^{L}_{u})\sigma_{1}(B^{R}_{u})\rangle\) is evaluated on a complex plane with the \(w\)-coordinates described in the previous section with metric \(ds^{2}=\frac{4dw^{-}dw^{+}}{\left(1+w^{-}w^{+}\right)^{2}}\). It can be evaluated by Weyl transforming it into a correlation function on the \(w\)-plane with flat metric. The resulting correlation function is \[\langle\sigma_{1}(B^{L}_{u})\sigma_{1}(B^{R}_{u})\rangle=\left(\frac{\left(1+w^{-}_{B^{L}_{u}}w^{+}_{B^{L}_{u}}\right)\left(1+w^{-}_{B^{R}_{u}}w^{+}_{B^{R}_{u}}\right)}{4\left(w^{+}_{B^{R}_{u}}-w^{+}_{B^{L}_{u}}\right)\left(w^{-}_{B^{R}_{u}}-w^{-}_{B^{L}_{u}}\right)}\right)^{\Delta_{n}}. \tag{12}\] Substituting the \(w\)-coordinates \((w^{+},w^{-})\) of the points \(B^{L}_{u}\) and \(B^{R}_{u}\), given by \[(w^{+}_{B^{L}_{u}},w^{-}_{B^{L}_{u}}) =\left(-e^{-\frac{2\pi(u-\epsilon)}{\beta}},e^{\frac{2\pi(u+\epsilon)}{\beta}}\right),\] \[(w^{+}_{B^{R}_{u}},w^{-}_{B^{R}_{u}}) =\left(e^{\frac{2\pi(u-\epsilon)}{\beta}},-e^{\frac{-2\pi(u+\epsilon)}{\beta}}\right) \tag{13}\] gives the CFT\({}_{1}\) contribution to the black hole entropy as \[S^{1}_{BH}(u)=\frac{c_{1}}{3}\mathrm{ln}\left(\frac{\beta}{\pi\epsilon}\mathrm{cosh}\left(\frac{2\pi u}{\beta}\right)\right).
\tag{14}\] The CFT\({}_{2}\) contribution to the black hole entropy is obtained from the correlation function of the boundary condition changing operators averaged over the distribution \(P(\mathbf{J})\) as follows \[S^{2}_{BH}(u)=-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\left(\int_{\mathbf{J}^{1},\cdots,\mathbf{J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ^{k}_{i}P(J^{k}_{i})\langle\mathcal{O}_{\,\mathrm{J}}(B^{L}_{u})\mathcal{O}_{\,\mathrm{J}}(B^{R}_{u})\rangle\right). \tag{15}\] The operator \(\mathcal{O}_{\,\mathrm{J}}\) denotes the boundary condition changing operator that changes the boundary conditions of the CFT\({}_{2}\) fields. It changes the boundary conditions of the fields \(\{X_{1},\cdots,X_{c_{2}}\}\) from \(\mathbf{J}^{k}=\{J_{1}^{k},\cdots,J_{c_{2}}^{k}\}\) to \(\mathbf{J}^{k+1}=\{J_{1}^{k+1},\cdots,J_{c_{2}}^{k+1}\}\) as we go from the \(k\)-th sheet to the \((k+1)\)-th sheet of the replica manifold for \(k=1,\cdots,n\). The scaling dimension of \(\mathcal{O}_{\mathcal{J}}\) is given by \[\Delta_{\mathcal{J}}=\sum_{i=1,k}^{c_{2},n}\left(\frac{J_{i}^{k+1}-J_{i}^{k}}{2\pi}\right)^{2}. \tag{10}\] The cutout AdS\({}_{2}\) region is a region of the disk in \(w\)-coordinates. The CFT\({}_{2}\) correlation function \(\langle\mathcal{O}_{\mathcal{J}}(B_{u}^{L})\mathcal{O}_{\mathcal{J}}(B_{u}^{R})\rangle\) is calculated on the \(w\)-disk and can be obtained with the help of the doubling trick. Finally, the integration over the boundary conditions \(\mathbf{J}\) can be performed by using circulant matrices. For details about circulant matrices see appendix A. The detailed integration is described in appendix B. The final result is given by \[S_{BH}^{2}(u)\approx\frac{c_{2}}{2}\mathrm{ln}\left(\frac{u}{\beta\delta^{2}}\right). \tag{11}\] Therefore, the black hole entropy at late times has unbounded growth, as given below \[\mathbf{S}_{BH}(u)\approx\frac{2\pi c_{1}}{3\beta}u+\frac{c_{2}}{2}\mathrm{ln}\left(\frac{u}{\beta\delta^{2}}\right). \tag{12}\] Clearly, such a growth will lead to an information paradox at late times. The resolution of this information paradox requires determining the quantum extremal surface (QES) associated with the black hole. This is done by minimising the generalised entropy of the black hole after removing an interval \(A_{u}^{L}A_{u}^{R}\) from the restriction of the equal time slice \(\Sigma_{u}\) to the AdS\({}_{2}\) region. The generalised entropy of the black hole for the interval \(B_{u}^{L}A_{u}^{L}\cup B_{u}^{R}A_{u}^{R}\) is given by \[\mathbf{S}_{BH}^{gen}(u)=\frac{\phi\left(A_{u}^{L}\right)+\phi\left(A_{u}^{R}\right)}{4G_{N}}+S_{BH}^{gen,1}(u)+S_{BH}^{gen,2}(u), \tag{13}\] where the first term is the area term, equal to the sum of the values of the dilaton field given in (15) at the points \(A_{u}^{L}\) and \(A_{u}^{R}\). \(S_{BH}^{gen,1}(u)\) and \(S_{BH}^{gen,2}(u)\) denote the CFT\({}_{1}\) and CFT\({}_{2}\) contributions to the generalised black hole entropy.
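Before evaluating the two generalised-entropy contributions in (13), the pre-island growth (12) can be illustrated numerically. The sketch below is not taken from the paper; the values of \(c_{1}\), \(c_{2}\), \(\delta\) and \(S^{0}_{BH}\) are assumptions chosen only for illustration. It locates by bisection the time at which \(\mathbf{S}_{BH}(u)\) exceeds the thermal bound \(2S^{0}_{BH}\) and compares it with the leading linear-growth estimate \(3S^{0}_{BH}\beta/(\pi c_{1})\).

```python
import math

# Illustrative parameters (assumptions, not values from the paper), in units where beta = 1.
beta, delta = 1.0, 1.0
c1, c2 = 12.0, 6.0            # central charges of CFT_1 and CFT_2
S0 = 50.0                     # Bekenstein-Hawking entropy of one side, S^0_BH

def S_BH_no_island(u):
    """Pre-island black hole entropy, eq. (12): linear CFT_1 piece plus logarithmic CFT_2 piece."""
    return 2*math.pi*c1*u/(3*beta) + 0.5*c2*math.log(u/(beta*delta**2))

# Find where the growing entropy crosses the thermal bound 2*S0 (simple bisection).
lo, hi = 1e-6, 1e6
while hi - lo > 1e-9*hi:
    mid = 0.5*(lo + hi)
    if S_BH_no_island(mid) > 2*S0:
        hi = mid
    else:
        lo = mid
print(f"bound 2*S0 = {2*S0:.1f} is exceeded at u ~ {0.5*(lo + hi):.3f}")
print(f"leading linear estimate 3*S0*beta/(pi*c1) = {3*S0*beta/(math.pi*c1):.3f}")
```

For these numbers the logarithmic CFT\({}_{2}\) piece only shifts the crossing time mildly; the generalised-entropy contributions that cure this growth are evaluated next.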
The CFT\({}_{1}\) contribution is \[S_{BH}^{gen,1}(u) =-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\langle\sigma_{1}\left(B_{u }^{L}\right)\sigma_{1}\left(A_{u}^{L}\right)\sigma_{1}\left(A_{u}^{R}\right) \sigma_{1}\left(B_{u}^{R}\right)\rangle\] \[\approx-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\langle\sigma_{1} \left(B_{u}^{L}\right)\sigma_{1}\left(A_{u}^{L}\right)\rangle\langle\sigma_{1} \left(A_{u}^{R}\right)\sigma_{1}\left(B_{u}^{R}\right)\rangle\] \[\approx\frac{c_{1}}{6}\mathrm{ln}\left(\left(\frac{\beta}{\pi \epsilon}\right)^{2}\frac{\left(e^{-\frac{2\pi u}{\beta}}+w_{A_{u}^{L}}^{+} \right)\left(e^{\frac{2\pi u}{\beta}}-w_{A_{u}^{L}}^{-}\right)}{\left(1+w_{A_{ u}^{L}}^{+}\right)}\right)\left(\frac{\left(e^{-\frac{2\pi u}{\beta}}+w_{A_{u}^{R}}^{-} \right)\left(e^{\frac{2\pi u}{\beta}}-w_{A_{u}^{R}}^{+}\right)}{\left(1+w_{A_{ u}^{R}}^{+}\right)}\right). \tag{14}\] In the second step we made the approximation by assuming that the points \(A_{u}^{L}\) and \(A_{u}^{R}\) are well separated. The CFT\({}_{2}\) contribution is given by \[S_{BH}^{gen,2}(u) =-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\int_{\mathbf{J}^{1},\cdots \mathbf{J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ_{i}^{k}P(J_{i}^{k})\langle\mathcal{O }_{\mathcal{Y}}\left(B_{u}^{L}\right)\sigma_{2}\left(A_{u}^{L}\right)\sigma_{2} \left(A_{u}^{R}\right)\mathcal{O}_{\mathcal{Y}}\left(B_{u}^{R}\right)\rangle \tag{3.12}\] where \(\sigma_{2}\) denotes the CFT\({}_{2}\) twist operators. The correlators \(\langle\mathcal{O}_{\mathcal{Y}}\left(B_{u}^{L}\right)\sigma_{2}\left(A_{u}^{ L}\right)\rangle\) and \(\langle\sigma_{2}\left(A_{u}^{R}\right)\mathcal{O}_{\mathcal{Y}}\left(B_{u}^{ R}\right)\rangle\) are evaluated on the \(w\)-plane where CFT\({}_{2}\) is defined. Since in the Euclidean version this region is a cutout of disk, these correlators can be calculated by using the doubling trick. The correlator \(\left\langle\mathcal{O}_{\mathcal{Y}}\left(w_{B_{u}^{L}}^{+},w_{B_{u}^{L}}^{- }\right)\sigma_{2}\left(w_{A_{u}^{L}}^{+},w_{A_{u}^{L}}^{-}\right)\right\rangle\) evaluated on the Euclidean AdS\({}_{2}\) is given by \[\left\langle\mathcal{O}_{\mathcal{Y}}\left(w_{B_{u}^{L}}^{+},w_{B_{u}^{L}}^{- }\right)\sigma_{2}\left(w_{A_{u}^{L}}^{+},w_{A_{u}^{L}}^{-}\right)\right\rangle =G_{n}\left(\mathcal{Y}\right)\left(\left(\frac{\pi\epsilon}{\beta}\right) \frac{\left(1+w_{A_{u}^{L}}^{-}w_{A_{u}^{L}}^{+}\right)}{\left(1-e^{-\frac{2\pi u }{\beta}}w_{A_{u}^{L}}^{-}\right)\left(1+e^{\frac{2\pi u}{\beta}}w_{A_{u}^{L} }^{+}\right)}\right)^{\Delta_{\mathcal{Y}}}. \tag{3.13}\] The coefficient \(G_{n}\left(\mathcal{Y}\right)\) is related to the \(n\)-point function of boundary condition changing operators kept on a disk \[G_{n}\left(\mathcal{Y}\right)=\prod_{k\neq l,i=1}^{c_{2}}\left|e^{\frac{2\pi i \left(k-1\right)}{n}}-e^{\frac{2\pi i\left(l-1\right)}{n}}\right|^{\mu_{kl}^{ i}}, \tag{3.14}\] where \(\mu_{kl}^{i}=\frac{\left(J_{i}^{k+1}-J_{i}^{k}\right)\left(J_{i}^{l+1}-J_{i}^{ l}\right)}{2\pi^{2}}\). 
After performing the averaging over the Gaussian distribution again by using the integration method based on circulant matrix we obtain the generalised bath entropy as follows \[S_{BH}^{gen,2}(u)\approx\frac{c_{2}}{2}\mathrm{ln}\left(\mathrm{ln}\left( \left(\frac{\beta}{\pi\epsilon}\right)^{2}\frac{\left(1-e^{-\frac{2\pi u}{ \beta}}w_{A_{u}^{L}}^{-}\right)\left(1+e^{\frac{2\pi u}{\beta}}w_{A_{u}^{L}}^{ +}\right)\left(1+e^{\frac{2\pi u}{\beta}}w_{A_{u}^{R}}^{-}\right)\left(1-e^{- \frac{2\pi u}{\beta}}w_{A_{u}^{R}}^{+}\right)}{\left(1+w_{A_{u}^{L}}^{-}w_{A_{ u}^{L}}^{+}\right)\left(1+w_{A_{u}^{R}}^{-}w_{A_{u}^{R}}^{+}\right)}\right). \tag{3.15}\] For more details about this computation, see appendix B. Extremising the generalised bath entropy with respect to \(w_{A_{u}^{L}}^{+}\) and \(w_{A_{u}^{L}}^{-}\) gives the following QES equations \[-\frac{\pi\phi_{r}}{G_{N}\beta}\frac{w_{A_{u}^{L}}^{-}}{\left(1+w _{A_{u}^{L}}^{+}w_{A_{u}^{L}}^{-}\right)^{2}}+\left(\frac{c_{1}}{6}+\frac{c_{2 }}{2\,\,\mathrm{ln}\left(\frac{\beta}{\pi\epsilon}\right)}\right)\left(\frac{1 }{e^{-\frac{2\pi u}{\beta}}+w_{A_{u}^{L}}^{+}}-\frac{w_{A_{u}^{L}}^{-}}{\left(1 +w_{A_{u}^{L}}^{+}w_{A_{u}^{L}}^{-}\right)}\right) =0\] \[-\frac{\pi\phi_{r}}{G_{N}\beta}\frac{w_{A_{u}^{L}}^{+}}{\left(1+w _{A_{u}^{L}}^{+}w_{A_{u}^{L}}^{-}\right)^{2}}-\left(\frac{c_{1}}{6}+\frac{c_{2 }}{2\,\,\mathrm{ln}\left(\frac{\beta}{\pi\epsilon}\right)}\right)\left(\frac{1 }{e^{\frac{2\pi u}{\beta}}-w_{A_{u}^{L}}^{-}}+\frac{w_{A_{u}^{L}}^{+}}{\left(1 +w_{A_{u}^{L}}^{+}w_{A_{u}^{L}}^{-}\right)}\right) =0. \tag{3.16}\] There exists a solution for this coupled equations at late times near the left future horizon of the black hole where \(w^{+}_{A^{L}_{u}}w^{-}_{A^{L}_{u}}\approx 0\). The solution is given by \[w^{\pm}_{A^{L}_{u}}=\mp\frac{G_{N}\beta}{6\pi\phi_{r}}\left(c_{1}+ \frac{3c_{2}}{\ln\left(\frac{\beta}{\pi\epsilon}\right)}\right)e^{\mp\frac{2 \pi u}{\beta}}. \tag{3.17}\] By repeating the same analysis we can find the QES in the right side of the black hole. It is given by \[w^{\pm}_{A^{R}_{u}}=\pm\frac{G_{N}\beta}{6\pi\phi_{r}}\left(c_{1}+ \frac{3c_{2}}{\ln\left(\frac{\beta}{\pi\epsilon}\right)}\right)e^{\pm\frac{2 \pi u}{\beta}}. \tag{3.18}\] Substituting the QES solutions back to the generalised black hole entropy expression shows that at late time the black hole entropy becomes a constant equal to twice the area of black hole horizon. Therefore, this QES after Page time \(u_{Page}\), tame the non-unitary growth of the black hole entropy. ### Entanglement wedge of bath and the extended strong subadditivity The QES computation in the previous subsection suggests that after Page time \(u_{Page}\approx\frac{3S_{BH}^{0}\beta}{\pi c_{1}}\), where \(S_{BH}^{0}\) is area of the bifurcation horizon of the black hole, the combined system of the bath and the journal possesses a non-trivial entanglement wedge that contains the interior of the black hole. The entanglement wedge of the journal at late time must belong to the entanglement wedge of the combined system. Hence, we should search for a journal island satisfying the constraints of the extended strong subadditivity [11] inside the interval bounded by the black hole quantum extremal surfaces. The eSSA is an inequality satisfied by the von Neumann entropies of three subsystems of a larger quantum system which we explain below. 
Consider a quantum system having Hilbert space \(\mathcal{H}\) formed by taking the tensor product of the Hilbert spaces of three subsystems \(\mathcal{H}_{1},\mathcal{H}_{2}\) and \(\mathcal{H}_{3}\), i.e. \(\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\mathcal{H}_{3}\). We denote the state of the larger quantum system by \(\rho^{123}\), the state of the combined system having Hilbert space \(\mathcal{H}^{ij}=\mathcal{H}^{i}\otimes\mathcal{H}^{j}\) by \(\rho^{ij}\), and the state of the \(i^{th}\) subsystem having Hilbert space \(\mathcal{H}^{i}\) by \(\rho^{i}\). Then the eSSA inequality states that \[S\left(\rho^{12}\right)+S\left(\rho^{23}\right)-S\left(\rho^{123 }\right)-S\left(\rho^{2}\right)\geq 2\;max\left\{S\left(\rho^{1}\right)-S \left(\rho^{13}\right),S\left(\rho^{3}\right)-S\left(\rho^{13}\right),0 \right\}; \tag{3.19}\] for the usual strong subadditivity inequality the right hand side is simply zero. Now we make use of the above inequality as follows. Take subsystem 1 to be the journal, subsystem 2 to be the black hole and the subsystem 3 to be the bath. Then the eSSA satisfied by the entropies of the subsystems after Page time reads as follows \[S_{bath}(u)+S_{journal}(u)-2S_{BH}^{0}\geq 2\;max\left\{S_{journal}(u)-2S_{BH}^ {0},S_{bath}(u)-2S_{BH}^{0},0\right\}. \tag{3.20}\] Using the replica method the entropies of bath and journal can be calculated. The bath entropy at large times is given by \[S_{bath}(u)=-\lim_{n\to 1}\frac{1}{1-n}{\rm ln}\langle\sigma_{1}(B_{u}^{L}) \sigma_{1}(B_{u}^{R})\rangle\approx\frac{2\pi c_{1}}{3\beta}u, \tag{3.21}\] and the journal entropy is given by \[S_{journal}(u)=-\lim_{n\to 1}\frac{1}{1-n}{\rm ln}\left(\int_{{\bf J}^{1}, \cdots,\,{\bf J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ_{i}^{k}P(J_{i}^{k})\langle{ \cal O}_{\rm J}(B_{u}^{L}){\cal O}_{\rm J}(B_{u}^{R})\rangle\right)\approx\frac{ c_{2}}{2}{\rm ln}\left(\frac{u}{\beta\delta^{2}}\right). \tag{3.22}\] After Page time, since the bath entropy is significantly greater than the entropy of the journal, the eSSA takes the following form \[S_{bath}(u)-S_{journal}(u)\leq 2S_{BH}^{0}. \tag{3.23}\] It is clear from these expressions that the entropy of the bath and the journal violates the eSSA (3.23) after time \(u=u_{B}>u_{Page}\), where \(u_{B}\) is the time at which the difference in \(S_{bath}(u)\) and \(S_{journal}(u)\) becomes \(2S_{BH}^{0}\). The root cause of this violation is the linear growth of entanglement entropy of the bath while the journal only has logarithmic growth of entanglement entropy. Therefore, this violation of eSSA can be described as the bath information paradox. An island for bath that is inside the interval enclosed by the quantum extremal surfaces of black hole might resolve this paradox. With this hope, let us search for a bath island by minimising the generalised entropy of bath associated with an arbitrary interval at \(C_{u}^{L}C_{u}^{R}\) with respect to the points \(C_{u}^{L}\) and \(C_{u}^{R}\). 
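Before turning to the island computation for the bath, the inequality (3.19) used above can be sanity-checked numerically in the pure-state form relevant here, where \(S(\rho^{123})=0\). The short script below is an illustration, not part of the paper's derivation; the subsystem dimensions \(2\times 3\times 4\) are arbitrary assumptions. It draws random pure states on a small tripartite Hilbert space and verifies the bound for every sample.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, d3 = 2, 3, 4   # illustrative dimensions for subsystems 1 (journal), 2 (black hole), 3 (bath)

def von_neumann(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def reduced(psi, keep):
    """Reduced density matrix of a pure state psi (shape (d1, d2, d3)) on the kept subsystems."""
    trace_out = tuple(ax for ax in range(3) if ax not in keep)
    dim_keep = int(np.prod([psi.shape[ax] for ax in keep]))
    m = np.transpose(psi, list(keep) + list(trace_out)).reshape(dim_keep, -1)
    return m @ m.conj().T

for trial in range(1000):
    psi = rng.normal(size=(d1, d2, d3)) + 1j*rng.normal(size=(d1, d2, d3))
    psi /= np.linalg.norm(psi)
    S = {sub: von_neumann(reduced(psi, sub)) for sub in
         [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2)]}
    lhs = S[(0, 1)] + S[(1, 2)] - S[(1,)]                     # S(rho^123) = 0 for a pure state
    rhs = 2*max(S[(0,)] - S[(0, 2)], S[(2,)] - S[(0, 2)], 0.0)
    assert lhs >= rhs - 1e-9, (lhs, rhs)
print("eSSA (pure-state form) holds on all sampled states")
```

For pure global states the bound reduces to subadditivity of the entanglement entropy, which is why the check passes on every draw.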
The generalised entropy of the bath is given by \[S_{bath}^{gen}=\frac{\phi\left(C_{u}^{L}\right)+\phi\left(C_{u}^{R}\right)}{4 G_{N}}+S_{bath}^{gen,1}(u)+S_{bath}^{gen,2}(u), \tag{3.24}\] where \(S_{bath}^{gen,1}(u)\) is the CFT\({}_{1}\) contribution to the bath generalised entropy \[S_{bath}^{gen,1}(u) =-\lim_{n\to 1}\frac{1}{1-n}{\rm ln}\langle\sigma_{1}(B_{u}^{L}) \sigma_{1}(C_{u}^{L})\sigma_{1}(C_{u}^{R})\sigma_{1}(B_{u}^{R})\rangle\] \[\approx-\lim_{n\to 1}\frac{1}{1-n}{\rm ln}\langle\sigma_{1}(B_{u} ^{L})\sigma_{1}(C_{u}^{L})\rangle\langle\sigma_{1}(C_{u}^{R})\sigma_{1}(B_{u} ^{R})\rangle\] \[\approx\frac{c_{1}}{6}{\rm ln}\left(\left(\frac{\beta}{\pi\epsilon }\right)^{2}\frac{\left(e^{-\frac{2\pi u}{\beta}}+w_{C_{u}^{L}}^{+}\right) \left(e^{\frac{2\pi u}{\beta}}-w_{C_{u}^{L}}^{-}\right)}{\left(1+w_{C_{u}^{L}} ^{+}w_{C_{u}^{L}}^{+}\right)}\right)\left(\frac{\left(e^{-\frac{2\pi u}{\beta }}+w_{C_{u}^{R}}^{-}\right)\left(e^{\frac{2\pi u}{\beta}}-w_{C_{u}^{R}}^{+} \right)}{\left(1+w_{C_{u}^{R}}^{-}w_{C_{u}^{R}}^{+}\right)}\right), \tag{3.25}\] and \(S_{bath}^{gen,2}(u)\) is the CFT\({}_{2}\) contribution to the bath generalised entropy \[S_{bath}^{gen,2}(u) =-\lim_{n\to 1}\frac{1}{1-n}{\rm ln}\int_{{\bf J}^{1}, \cdots,{\bf J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ_{i}^{k}P(J_{i}^{k})\langle\sigma _{2}(C_{u}^{L})\sigma_{2}(C_{u}^{R})\rangle\] \[\approx-\lim_{n\to 1}\frac{1}{1-n}{\rm ln}\int_{{\bf J}^{1}, \cdots,{\bf J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ_{i}^{k}P(J_{i}^{k})\langle\sigma _{2}(C_{u}^{L})\rangle\langle\sigma_{2}(C_{u}^{R})\rangle\approx 0. \tag{3.26}\] Here we used the fact that any one point function of a primary field in AdS\({}_{2}\) disk is identity. The solution for the associated coupled QES equations at late time is given by \[w^{\pm}_{C^{L}_{u}}=\mp\frac{G_{N}\beta}{6\pi\phi_{r}}c_{1}e^{\mp \frac{2\pi u}{\beta}},\qquad w^{\pm}_{C^{R}_{u}}=\pm\frac{G_{N}\beta}{6\pi\phi_ {r}}c_{1}e^{\pm\frac{2\pi u}{\beta}}. \tag{3.27}\] Substituting the QES solutions back into the expression for the generalised bath entropy (3.24) gives a constant equal to twice the area of black hole horizon, i.e. \[S^{gen}_{bath}(u)=2S^{0}_{BH},\qquad u>u_{B}. \tag{3.28}\] We must check whether the island with boundaries that matches with the above quantum extremal surfaces resolves the bath information paradox that appeared soon after black hole Page time \(u_{Page}\). The eSSA inequality (3.23) is satisfied after the black hole Page time if we replace with \(S_{bath}(u)\) with \(S^{gen}_{bath}(u)\), as long as \(S_{journal}(u)\leq S^{gen}_{bath}(u)\sim 2S^{0}_{BH}\). Until \(u=u_{I}\) at which \(S_{journal}(u_{I})=2S^{0}_{BH}\), the eSSA reduces to the demand that \(S_{journal}(u)\geq 0\). Thus, the inclusion of the bath island enables all the three subsystems to satisfy the eSSA inequality (3.23) at least till \(u=u_{I}\). ### Transfer of the ownership of the black hole interior from bath to journal After time \(u=u_{I}\), while the bath owns a non-trivial island that contains the black hole interior, the eSSA (3.20) is given by \[S_{journal}(u)-S^{gen}_{bath}(u)\leq 2S^{0}_{BH},\qquad u>u_{I}. \tag{3.29}\] It is straightforward to see that this inequality will be violated after time \(u=u_{J}\), at which \(S_{journal}(u_{J})=4S^{0}_{BH}\). At late times, for \(u>u_{J}\), this leads to another unitarity violation or information paradox. 
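The ordering of time scales implied by the discussion above, \(u_{Page}<u_{B}\ll u_{I}\ll u_{J}\), can be illustrated with the same kind of toy parameters used earlier (all values are assumptions for illustration only, with \(\beta=1\)). The sketch below uses the no-island growth laws (3.21) and (3.22), the thresholds quoted in the text, and scipy's root finder for \(u_{B}\).

```python
import math
from scipy.optimize import brentq

# Illustrative parameters (assumptions, not values from the paper), with beta = 1.
beta, delta = 1.0, 1.0
c1, c2 = 12.0, 6.0
S0 = 50.0                                                   # S^0_BH, horizon area term of one side

S_bath    = lambda u: 2*math.pi*c1*u/(3*beta)               # eq. (3.21), no island
S_journal = lambda u: 0.5*c2*math.log(u/(beta*delta**2))    # eq. (3.22), no island

u_page = 3*S0*beta/(math.pi*c1)                             # S_bath = 2*S0
u_B = brentq(lambda u: S_bath(u) - S_journal(u) - 2*S0, 1e-6, 1e6)   # (3.23) becomes saturated
u_I = beta*delta**2*math.exp(4*S0/c2)                       # S_journal = 2*S0
u_J = beta*delta**2*math.exp(8*S0/c2)                       # S_journal = 4*S0, where (3.29) starts to fail

print(f"u_Page ~ {u_page:.2f}  <  u_B ~ {u_B:.2f}  <<  u_I ~ {u_I:.2e}  <<  u_J ~ {u_J:.2e}")
```

For these numbers the journal thresholds \(u_{I}\) and \(u_{J}\) lie exponentially later than the bath scales, reflecting the logarithmic growth of \(S_{journal}(u)\).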
In order to resolve this paradox, we shall search for an island for the journal by minimising the generalised entropy of journal associated with an arbitrary interval at \(D^{L}_{u}D^{R}_{u}\). The generalised entropy of the journal associated with this interval is given by \[S^{gen}_{bath}=\frac{\phi\left(D^{L}_{u}\right)+\phi\left(D^{R}_{u}\right)}{4 G_{N}}+S^{gen,1}_{journal}(u)+S^{gen,2}_{journal}(u). \tag{3.30}\] Here \(S^{gen,1}_{journal}(u)\) is the CFT\({}_{1}\) contribution to the bath generalised entropy \[S^{gen,1}_{journal}(u)=-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\langle \sigma_{1}(D^{L}_{u})\sigma_{1}(D^{R}_{u})\rangle\approx\frac{c_{1}}{6} \mathrm{ln}\left(\frac{\left(w^{+}_{D^{R}_{u}}-w^{+}_{D^{L}_{u}}\right)\left(w ^{-}_{D^{R}_{u}}-w^{-}_{D^{L}_{u}}\right)}{\left(1+w^{-}_{D^{L}_{u}}w^{+}_{D^{ L}_{u}}\right)\left(1+w^{-}_{D^{R}_{u}}w^{+}_{D^{R}_{u}}\right)}\right) \tag{3.31}\] and \(S^{gen,2}_{journal}(u)\) is the CFT\({}_{2}\) contribution to the bath generalised entropy. \[S^{gen,2}_{journal}(u) =-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\int_{\mathbf{J}^{1}, \ldots\mathbf{J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ_{i}^{k}P(J_{i}^{k})\langle \mathcal{O}_{\mathcal{J}}(B_{u}^{L})\sigma_{2}(D_{u}^{L})\sigma_{2}(D_{u}^{R}) \mathcal{O}_{\mathcal{J}}(B_{u}^{R})\rangle\] \[\approx-\lim_{n\to 1}\frac{1}{1-n}\mathrm{ln}\int_{\mathbf{J}^{1}, \ldots\mathbf{J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ_{i}^{k}P(J_{i}^{k})\langle \mathcal{O}_{\mathcal{J}}(B_{u}^{L})\sigma_{2}(D_{u}^{L})\rangle\langle\sigma _{2}(D_{u}^{R})\mathcal{O}_{\mathcal{J}}(B_{u}^{R})\rangle\] \[\approx\frac{c_{2}}{2}\mathrm{ln}\left(\mathrm{ln}\left(\left( \frac{\beta}{\pi\epsilon}\right)^{2}\frac{\left(1-e^{-\frac{2\pi u}{\beta}}w_{ D_{u}^{L}}^{-}\right)\left(1+e^{\frac{2\pi u}{\beta}}w_{D_{u}^{L}}^{+}\right) \left(1+e^{\frac{2\pi u}{\beta}}w_{D_{u}^{R}}^{-}\right)\left(1-e^{-\frac{2 \pi u}{\beta}}w_{D_{u}^{R}}^{+}\right)}{\left(1+w_{D_{u}^{L}}^{-}w_{D_{u}^{L}} ^{+}\right)\left(1+w_{D_{u}^{R}}^{-}w_{D_{u}^{R}}^{+}\right)}\right)\right). \tag{3.32}\] The generalised entropy minimisation with respect to \(w_{D_{u}^{L}}^{+}\) and \(w_{D_{u}^{L}}^{-}\) gives the following equations \[-\frac{\pi\phi_{r}}{G_{N}\beta}\frac{w_{D_{u}^{L}}^{-}}{\left(1+w _{D_{u}^{L}}^{+}w_{D_{u}^{L}}^{-}\right)^{2}}+\frac{c_{2}}{2\,\,\mathrm{ln} \left(\frac{\beta}{2\pi\epsilon}\right)}\left(\frac{1}{e^{-\frac{\pi u}{\beta }}+w_{D_{u}^{L}}^{+}}-\frac{w_{D_{u}^{L}}^{-}}{\left(1+w_{D_{u}^{L}}^{+}w_{D_{ u}^{L}}^{-}\right)}\right) =0\] \[-\frac{\pi\phi_{r}}{G_{N}\beta}\frac{w_{D_{u}^{L}}^{+}}{\left(1+w _{D_{u}^{L}}^{+}w_{D_{u}^{L}}^{-}\right)^{2}}-\frac{c_{2}}{2\,\,\mathrm{ln} \left(\frac{\beta}{2\pi\epsilon}\right)}\left(\frac{1}{e^{\frac{\pi u}{\beta }}-w_{D_{u}^{L}}^{-}}+\frac{w_{D_{u}^{L}}^{+}}{\left(1+w_{D_{u}^{L}}^{+}w_{D_{ u}^{L}}^{-}\right)}\right) =0. \tag{3.33}\] The solution near horizon at late time is given by \[w_{D_{u}^{L}}^{\pm} =\mp\frac{G_{N}\beta c_{2}}{2\pi\phi_{r}\mathrm{ln}\left(\frac{ \beta}{2\pi\epsilon}\right)}e^{\mp\frac{2\pi u}{\beta}} \tag{3.34}\] \[w_{D_{u}^{R}}^{\pm} =\pm\frac{G_{N}\beta c_{2}}{2\pi\phi_{r}\mathrm{ln}\left(\frac{ \beta}{2\pi\epsilon}\right)}e^{\pm\frac{2\pi u}{\beta}}.\] Substituting this back to the expression (3.30) gives the generalised entropy of the subsystem journal as follows \[S^{gen}_{journal}(u) =2S^{0}_{BH}+\frac{c_{1}}{3}\mathrm{ln}\left[\frac{\beta}{\pi \epsilon}\mathrm{cosh}\left(\frac{2\pi u}{\beta}\right)\right]\] \[=2S^{0}_{BH}+S_{bath}(u). 
\tag{3.35}\] Let us allow the journal to own the island that contains the black hole interior instead of the bath after time \(u=u_{J}\). Then the eSSA relation for \(u>u_{J}\) is given by \[S^{gen}_{journal}(u)-S_{bath}(u)\leq 2S^{0}_{BH},\qquad u>u_{J}. \tag{3.36}\] Using (3.35) we can verify that this eSSA relation gets saturated after time \(u=u_{J}\). Therefore, transferring the ownership of the black hole interior after time \(u=u_{J}\) from the bath to the journal restores unitarity. Consequently, the reconstruction of the black hole interior from radiation at late times requires complete knowledge of the theory. Conclusion We analysed the theory dependence of the interior reconstruction of an AdS\({}_{2}\) eternal black hole in equilibrium with a finite temperature bath by introducing a CFT with matter fields having random reflecting boundary conditions along the AdS\({}_{2}\) boundaries. By using the island formula and the extended strong subadditivity due to Carlen and Lieb, we have shown that at late times the reference Hilbert space that encodes the information about the random boundary conditions owns an entanglement wedge that contains the black hole interior including the region near singularity. This implies that the reconstruction of the region near singularity of a black hole from radiation requires exquisite knowledge of the theory. One interesting point to note is that the combined system of black hole, bath and the journal before Page time was in a state that saturated the extended strong subadditivity [23]. Interestingly the state of the combined system that satisfies all the unitarity requirements at late time saturates the extended strong subadditivity. It would be interesting to study the significance of this observation. As already pointed out in [9], extending this analysis to an evaporating black hole can teach us about the theory dependence of the black hole interior reconstruction from Hawking radiation. ## Acknowledgements We thank Sujay Ashok, Yiming Chen, Sibasish Ghosh, Rajesh Gopakumar, Thomas Hartman, Alok Laddha, Ayan Mukopadhyay, Onkar Parrikar, Loganayanam R, Suvrat Raju, Mukund Rangamani, Anupam Sarkar, and Ashoke Sen for the valuable discussions and suggestions. ## Appendix A Circulant matrices A circulant matrix \(A\) is a square matrix with the property that all its rows are made up of the same elements and each row is undergoing a cyclic shift of one element to the right relative to the preceding row \[A=\begin{bmatrix}a_{0}&a_{1}&a_{2}&&\cdots&a_{n-1}\\ a_{n-1}&a_{0}&a_{1}&&\cdots&\vdots\\ &a_{n-1}&a_{0}&a_{1}&\ddots&\\ \vdots&&\ddots&\ddots&\ddots&a_{2}\\ &&&a_{1}&\\ a_{1}&\cdots&&a_{n-1}&a_{0}\end{bmatrix}. \tag{10}\] An interesting feature of a circularly invariant matrix is that for arbitrary values of \(n\) its eigenvalues can be obtained [24] and are given by \[\lambda_{p}=\sum_{k=1}^{n-1}a_{k}e^{-\frac{2\pi ipk}{n}}\qquad p=0,\cdots,n-1. \tag{11}\] Therefore, the determinant of the circulant matrix \(A\) is given by \[det(A)=\prod_{r=0}^{n-1}\left(\sum_{k=1}^{n-1}a_{k}e^{-\frac{2\pi irk}{n}}\right). 
\tag{10}\] ## Appendix B Averaging over the distribution In this appendix, we shall perform the averaging over a Gaussian distribution of the correlation function \[\int_{\mathbf{J}^{1},\cdots\mathbf{J}^{n}}\prod_{i=1,k=1}^{c_{2},n}dJ_{i}^{k}P (J_{i}^{k})\langle\mathcal{O}_{\mathcal{J}}\left(B_{u}^{L}\right)\sigma_{2} \left(A_{u}^{L}\right)\rangle\langle\sigma_{2}\left(A_{u}^{R}\right)\mathcal{O }_{\mathcal{J}}\left(B_{u}^{R}\right)\rangle.\] The Gaussian distribution for the random boundary conditions is given by \[P\left(J_{i}^{k}\right)=\frac{\delta}{\sqrt{2\pi}}e^{-\frac{\delta^{2}}{2}(J_{ i}^{k})^{2}}.\] After substituting the expressions for the correlation functions, we get that \[\int_{\mathbf{J}^{1},\cdots\mathbf{J}^{n}}\prod_{i=1,k=1}^{c_{2}, n}dJ_{i}^{k}P(J_{i}^{k})\langle\mathcal{O}_{\mathcal{J}}\left(B_{u}^{L} \right)\sigma_{2}\left(A_{u}^{L}\right)\rangle\langle\sigma_{2}\left(A_{u}^{R }\right)\mathcal{O}_{\mathcal{J}}\left(B_{u}^{R}\right)\rangle\] \[=\frac{m^{nc_{2}}}{(2\pi)^{nc_{2}}}\prod_{i=1}^{c_{2}}\left(\int_ {J_{i}^{1},\cdots J_{i}^{n}}dJ_{i}^{1}\cdots dJ_{i}^{n}e^{-\sum_{p,q}A_{pq}J_{ p}^{k}J_{q}^{k}}\right). \tag{11}\] The elements \(A_{pq}\) of the \(n\times n\) square matrix \(A\) can be expressed as \[A_{pq}=\left(\frac{m^{2}}{2}-2Q\right)\delta_{p,q}-2e_{pq}\left(1-\delta_{p,q} \right)+e_{p(q-1)}\left(1-\delta_{p,(q-1)}\right)+e_{p(q+1)}\left(1-\delta_{p, (q+1)}\right)+2Q\delta_{p,(q-1)}, \tag{12}\] where \(e_{pq}\) is given by \[e_{pq}=\frac{1}{2\pi^{2}}\mathrm{ln}\left(4\ \sin^{2}\left(\frac{\pi(p-q)}{n} \right)\right),\] and \(Q\) is given by \[Q=\frac{1}{4\pi^{2}}\mathrm{ln}\ \left(\left(\frac{\pi\epsilon}{\beta}\right)^{2} \frac{\left(1+w_{A_{L}^{L}}^{-}w_{A_{u}^{L}}^{+}\right)\left(1+w_{A_{u}^{R}}^ {-}w_{A_{u}^{R}}^{+}\right)}{\left(1-e^{-\frac{2\pi u}{\beta}}w_{A_{u}^{L}}^{- }\right)\left(1+e^{\frac{2\pi u}{\beta}}w_{A_{u}^{L}}^{+}\right)\left(1-e^{- \frac{2\pi u}{\beta}}w_{A_{u}R}^{-}\right)\left(1+e^{\frac{2\pi u}{\beta}}w_{A _{u}^{R}}^{+}\right)}\right).\] The matrix elements \(A_{pq}\) have the following shift symmetry \[A_{pq}=A_{(\{p+m\})(\{q+m\})}. \tag{13}\] Here, the curly bracket \(\{\}\) in the subscript indicates that \(\{p+m\}\) is \(p+m\) for \(p+m\leq n\) and \(n-p-m\) otherwise. This assures that \(A\) is a circulant matrix. Therefore, the determinant of the matrix \(A\) is given by \[det(A)=\prod_{r=0}^{n-1}\left(\sum_{k=1}^{n-1}A_{1k}e^{-\frac{2\pi irk}{n}} \right). \tag{14}\] Using Gauss's digamma theorem it is possible to verify that for \(\ln(\det(A))\), the limits \(n\to 1\) and \(Q\to\infty\) commutes. Gauss's digamma theorem is the following identity \[\sum_{m=1}^{n-1}e^{-\frac{2\pi imp}{n}}\ln\left(4\ \sin^{2}\left(\frac{m\pi}{n} \right)\right)=2\ \ln\ n+2\gamma+2\psi\left(\frac{p}{n}\right)+\pi\ \cot\left(\frac{\pi p}{n}\right)\qquad p=1, \cdots,n-1. \tag{100}\] where \(\psi\) is the digamma function \(\psi(x)=\frac{d}{dx}\ln\ \Gamma(x)\) and \(\gamma\) is the Euler's constant. Since at late times \(Q\) is very large we can approximate the logarithm of the determinant of \(A\) at late times as \[\ln\left(det\left(A\right)\right)\sim n\ \ln\left(\frac{\delta^{2}}{2}\right)- \left(n-1\right)Q+\frac{4\left(\left(n-1\right)\gamma+n\ \ln\ n\right)}{Q\pi^{2}\delta^{2}}. \tag{101}\] The third term is the leading contribution from \(G_{n}\left(\mathtt{J}\right)\). 
This implies that for large values of \(Q\), the contribution from \(G_{n}\left(\mathtt{J}\right)\) to the integral is negligible and can therefore be safely ignored when performing the integration that implements the averaging over the distribution.
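As an aside (not from the paper), the basic multivariate Gaussian integral that underlies this averaging, \(\int d^{n}J\,e^{-J^{T}AJ}=\pi^{n/2}/\sqrt{\det A}\) for a symmetric positive-definite \(A\), is easy to confirm numerically; the sketch below checks it for a generic \(2\times 2\) matrix, not for the specific circulant matrix constructed above.

```python
import numpy as np
from scipy.integrate import dblquad

# Check: int d^2J exp(-J^T A J) = pi / sqrt(det A) for a symmetric positive-definite A.
A = np.array([[2.0, 0.3],
              [0.3, 1.5]])
val, _ = dblquad(lambda y, x: np.exp(-(A[0, 0]*x*x + 2*A[0, 1]*x*y + A[1, 1]*y*y)),
                 -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
print(val, np.pi / np.sqrt(np.linalg.det(A)))   # both are approximately 1.84
```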
2307.16181
A knob to tune the Casimir-Lifshitz force with gapped metals
The Casimir-Lifshitz interaction, a long-range force that arises between solids and molecules due to quantum fluctuations in electromagnetic fields, has been widely studied in solid-state physics. The degree of polarization in this interaction is influenced by the dielectric properties of the materials involved, which in turn are determined by factors such as band-to-band transitions, free carrier contributions, phonon contributions, and exciton contributions. Gapped metals, a new class of materials with unique electronic structures, offer the potential to manipulate dielectric properties and, consequently, the Casimir-Lifshitz interaction. In this study, we theoretically investigate the finite temperature Casimir-Lifshitz interaction in La$_3$Te$_4$-based gapped metal systems with varying off-stoichiometry levels. We demonstrate that off-stoichiometric effects in gapped metals can be used to control the magnitude and, in some cases, even the sign of Casimir-Lifshitz interactions. We predict measurable corrections due to stoichiometry on the predicted Casimir force between a La$_3$Te$_4$ surface and a gold sphere, attached to an atomic force microscopy tip.
M. Boström, M. Rizwan Khan, H. R. Gopidi, I. Brevik, Y. Li, C. Persson, O. I. Malyi
2023-07-30T09:38:49Z
http://arxiv.org/abs/2307.16181v1
# A knob to tune the Casimir-Lifshitz force with gapped metals ###### Abstract The Casimir-Lifshitz interaction, a long-range force that arises between solids and molecules due to quantum fluctuations in electromagnetic fields, has been widely studied in solid-state physics. The degree of polarization in this interaction is influenced by the dielectric properties of the materials involved, which in turn are determined by factors such as band-to-band transitions, free carrier contributions, phonon contributions, and exciton contributions. Gapped metals, a new class of materials with unique electronic structures, offer the potential to manipulate dielectric properties and, consequently, the Casimir-Lifshitz interaction. In this study, we theoretically investigate the finite temperature Casimir-Lifshitz interaction in La\({}_{3}\)Te\({}_{4}\)-based gapped metal systems with varying off-stoichiometry levels. We demonstrate that off-stoichiometric effects in gapped metals can be used to control the magnitude and, in some cases, even the sign of Casimir-Lifshitz interactions. We predict measurable corrections due to stoichiometry on the predicted Casimir force between a La\({}_{3}\)Te\({}_{4}\) surface and a gold sphere, attached to an atomic force microscopy tip. ## I Introduction Solid-state physics textbooks teach us about the Casimir-Lifshitz interaction [1; 2; 3; 4], as a long-range force arising between solids and molecules due to quantum fluctuations in the electromagnetic fields [5; 6; 7]. This interaction causes the solids or molecules to become polarized when they are close to each other, with the degree of polarization directly influenced by the dielectric properties of the materials [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. This relationship indicates that by manipulating the dielectric properties through external means, it is possible to make an impact on the Casimir-Lifshitz interaction directly. The effects of such manipulations can be directly measured using atomic force microscopy [24] or even be employed in the development of functional devices. As electronic structure theory has advanced, it has become clear that a material's dielectric properties are determined by several factors: (i) band-to-band transitions between occupied and unoccupied states, which play a significant role across all frequency ranges; (ii) free carrier contributions, typically found in metallic systems at low frequencies; (iii) the phonon contribution; and (iv) exciton contributions, with the latter two primarily limited to the low-frequency range. This understanding implies that by identifying ways to influence factors (i) through (iv), we can effectively tune the dielectric properties of materials and, consequently, the Casimir-Lifshitz interaction. Therefore, the primary challenge lies in discovering materials with tunable dielectric properties. Recently, gapped metals have emerged as a new class of materials possessing unique electronic structures [25; 26; 27; 28; 29; 30]. These compounds set themselves apart from both metals and insulators, as they possess a Fermi level within the conduction (or valence) band, resulting in a high intrinsic concentration of free carriers with an internal band gap between their primary band edges. Such materials have attracted significant attention in connection with transparent conductors [28], thermoelectrics [30], and electrides [29]. 
What makes these materials special is that they can develop off-stoichiometry (within the same parental structure) due to the decay of conducting electrons (holes) to acceptor (donor) states (Fig. 1a,b) formed by intrinsic defect formation [31; 32; 33; 25; 34]. This is different from the traditional metals and insulators, where defect formation is usually limited to high temperatures and is primarily driven by an increase in configurational entropy [34]. This situation is unique because different off-stoichiometry levels for gapped metals can be achieved, each exhibiting distinct dielectric properties. For example, La\({}_{3}\)Te\({}_{4}\) - an n-type gapped metal (i.e., one with its Fermi level in the primary conduction band) - can be experimentally synthesized [35; 36; 37] (simply by changing the synthesis conditions) across a range of phases from La\({}_{3}\)Te\({}_{4}\) to La\({}_{2.66}\)Te\({}_{4}\), with their properties tunable from metallic to insulating [31], as schematically shown in Fig. 1c. Motivated by the above, we study here theoretically the finite temperature Casimir-Lifshitz interaction between different systems involving La\({}_{3}\)Te\({}_{4}\)-based gapped metals, demonstrating how off-stoichiometry can be used as a knob to tune this long-range interaction. ## II Methods ### Computational details and dielectric functions To compute the dielectric properties of gapped metals, we perform first-principles calculations using the Perdew-Burke-Ernzerhof exchange-correlation (PBE XC) functional [38] within the VASP framework [39; 40; 41; 42]. Our analysis focuses on five distinct La\({}_{3-x}\)Te\({}_{4}\) compounds, previously identified in our earlier work [31]. For each system, we calculate the dielectric properties, considering only the Drude contribution and interband transitions. We employ \(\Gamma\)-centered Monkhorst-Pack k-grids [43] with 20,000 points per reciprocal atom for the calculations of direct band transitions and plasma frequencies and introduce a small Lorentzian broadening of 0.01 eV in the Kramers-Kronig transformation [44]. To include the Drude term in the optical properties, we utilize the kram code [45; 46; 47], setting the damping coefficient (\(\Gamma\)) to 0.2 eV, and additionally investigate the influence of the \(\Gamma\) parameter on our findings. Further details on the computational parameters can be found elsewhere [31; 48]. Although we are unable to directly compare our calculated dielectric functions with corresponding experimental measurements (no data are available), it is important to highlight that the methods used describe sufficiently well the experimental trends related to transparency and coloring across a range of gapped metals [49; 25]. The quantity related to forces follows from the imaginary (\(\varepsilon_{i}^{\prime\prime}\)) part of the dielectric function: \[\varepsilon_{i}(i\xi_{m})=1+\frac{2}{\pi}\int_{0}^{\infty}d\omega\frac{\omega\varepsilon_{i}^{\prime\prime}(\omega)}{\omega^{2}+\xi_{m}^{2}},\,i=1,2,3 \tag{1}\] where the Matsubara frequency is \(\xi_{m}=2\pi kTm/\hbar\), and the subscript \(i\) indicates the medium. As seen in Fig.
2 the curves show strong dependence for the dielectric function on off-stoichiometry for La\({}_{3-x}\)Te\({}_{4}\) (to be specific: La\({}_{3}\)Te\({}_{4}\) (with E\({}_{g}^{int}\) = 1.22 eV), La\({}_{2.92}\)Te\({}_{4}\) (with E\({}_{g}^{int}\) = 1.16 \(\pm\) 0.05 eV), La\({}_{2.83}\)Te\({}_{4}\) (with E\({}_{g}^{int}\) = 1.16 \(\pm\) 0.05 eV), La\({}_{2.75}\)Te\({}_{4}\) (with E\({}_{g}^{int}\) = 1.13 eV)) going from a metallic to insulator behavior. In the Fig. 2 b we see for low-frequencies the dependence for the dielectric function of La\({}_{3}\)Te\({}_{4}\) on the \(\Gamma\) parameter for La\({}_{3}\)Te\({}_{4}\). In particular, we present the ratio of the dielectric function for La\({}_{3}\)Te\({}_{4}\) with different \(\Gamma\) to the corresponding values with La\({}_{3}\)Te\({}_{4}\) with \(\Gamma\) = 0.2 eV. This highlights that it is only at low frequencies the \(\Gamma\) parameter influences the dielectric function. To use the calculated dielectric functions for Casimir-Lifshitz interaction, we also develop the parametrization of the average dielectric function (Table 1) with 14-mode oscillator model [50]: \[\varepsilon(i\xi)=1+\sum_{j}\frac{C_{j}}{1+(\xi/\omega_{j})^{2}}, \tag{2}\] where \(\omega_{j}\) are characteristic frequencies and \(c_{j}\) are proportional to the oscillator strengths. To describe the Casimir interaction in systems amenable to experimental measurement, we additionally compute the dielectric properties of gold. These calculations are performed using PBEsol [51] XC functional combined with an effective Hubbard correction of \(U\) = 3.0 eV on the \(d\)-orbitals according to Dudarev et al. [52]. Here, the calculations are performed for 1 atom per primitive cell, the k-space summation involved an \(80\times 80\times 80\) \begin{table} \begin{tabular}{l l l l l l} \hline \hline modes (\(\omega_{j}\)) & \multicolumn{4}{c}{\(C_{j}\) for different La\({}_{3-x}\)Te\({}_{4}\) compounds} \\ \hline & La\({}_{3}\)Te\({}_{4}\) & La\({}_{2.92}\)Te\({}_{4}\) & La\({}_{2.83}\)Te\({}_{4}\) & La\({}_{2.75}\)Te\({}_{4}\) & La\({}_{2.67}\)Te\({}_{4}\) \\ \hline 0.0203 & 5.6047 & 24.2875 & 0.0 & 0.0 & 0.0 \\ \hline 0.0362 & 15.3667 & 55.9154 & 0.0 & 0.0 & 0.0 \\ \hline 0.0694 & 27.1114 & 98.6417 & 1.2411 & 0.4796 & 0.0008 \\ \hline 0.1325 & 35.8341 & 72.487 & 42.8899 & 19.6292 & 0.0 \\ \hline 0.2085 & 93.1702 & 47.7692 & 42.1523 & 25.6095 & 0.0 \\ \hline 0.4297 & 6.5149 & 7.3256 & 5.479 & 2.4194 & 0.0095 \\ \hline 0.8328 & 2.9159 & 1.447 & 0.7619 & 0.4956 & 0.0421 \\ \hline 1.9083 & 2.4749 & 2.9666 & 3.022 & 3.3971 & 3.1313 \\ \hline 3.2388 & 4.6107 & 4.3332 & 4.464 & 4.5286 & 4.8426 \\ \hline 5.2955 & 2.7231 & 2.8949 & 2.7872 & 2.6545 & 2.4639 \\ \hline 8.9753 & 0.7087 & 0.5961 & 0.6287 & 0.6727 & 0.7464 \\ \hline 18.2815 & 0.1959 & 0.2782 & 0.2486 & 0.205 & 0.1581 \\ \hline 23.0355 & 0.0695 & 0.0069 & 0.022 & 0.0474 & 0.0716 \\ \hline 42.0922 & 0.0 & 0.0057 & 0.0042 & 0.0018 & 0.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Parametrization of the average dielectric function of continuous media, \(\varepsilon(i\xi)\), at imaginary frequencies for La\({}_{3-x}\)Te\({}_{4}\) as calculated with first-principles calculations and a damping coefficient (\(\Gamma\)) set to 0.2 eV. The \(\omega_{j}\) modes are given in eV. The largest difference between fitted and calculated \(\varepsilon(i\xi)\) is 0.04%. mesh and a Gaussian smearing of 0.05 eV. The resulting permittivity for gold agrees with the experimental data presented in Ref. [53]. 
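As a small illustration of the oscillator parametrization in Eq. (2), the sketch below (our own snippet, not the authors' code) evaluates \(\varepsilon(i\xi)\) for La\({}_{3}\)Te\({}_{4}\) from the Table 1 coefficients; 0.16 eV is roughly the first Matsubara frequency at 300 K.

```python
import numpy as np

# Eq. (2): eps(i*xi) = 1 + sum_j C_j / (1 + (xi/omega_j)^2), Table 1 values for La3Te4 (eV).
omega_j = np.array([0.0203, 0.0362, 0.0694, 0.1325, 0.2085, 0.4297, 0.8328,
                    1.9083, 3.2388, 5.2955, 8.9753, 18.2815, 23.0355, 42.0922])
C_La3Te4 = np.array([5.6047, 15.3667, 27.1114, 35.8341, 93.1702, 6.5149, 2.9159,
                     2.4749, 4.6107, 2.7231, 0.7087, 0.1959, 0.0695, 0.0])

def eps_fit(xi_eV, C=C_La3Te4):
    """Fitted dielectric function at imaginary frequency xi (in eV)."""
    return 1.0 + np.sum(C / (1.0 + (xi_eV / omega_j) ** 2))

for xi in [0.0, 0.16, 1.0, 10.0]:
    print(f"eps(i*{xi} eV) = {eps_fit(xi):.2f}")
```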
The plasma frequency was calculated to be 9.6 eV and we used that value with \(\Gamma=0.05\) eV when calculating the Drude contribution. ### Theory of the Casimir Force Let \(F(L)\) be the free energy per unit surface area between planar surfaces, and \(f(L)\) be the force between a gapped metal (medium 1) and a sphere with radius \(R\) (medium 3). For \(L<<R\) the force between a sphere and a planar surface can be deduced from the free energy between two planar surfaces using the so-called proximity force approximation [4]. This approximation estimates the force between a sphere and a planar surface based on the interaction free energy between two half-planes [4]. The sphere may be gold, a gapped transparent conductor, or an air bubble. The intervening medium 2 may be a diluted gas with \(\varepsilon_{2}(i\xi_{m})\sim 1\), or water with dielectric functions given in the literature [54; 55]. Figure 1: (Color online) Origin of off-stoichiometry in La\({}_{3}\)Te\({}_{4}\) compound. Formation of acceptor vacancy in (a) insulator and (b) n-type gapped metal. Here, E\({}_{F}\) and E\({}_{g}^{int}\) correspond to the Fermi level and internal band gap between the principal valence band maximum and conduction band minimum. (c) Schematic crystal structure for La-Te system depicting metallic state with free electrons in conduction bands and insulator state of La\({}_{3-x}\)Te\({}_{4}\) phase due to presence of acceptor vacancy. The free energy and force can be written as [2; 4] \[F(L)=\frac{f(L)}{2\pi R}=\frac{k_{B}T}{2\pi}{\sum_{m=0}^{\infty}}^{\prime}\int\limits_{0}^{\infty}dq\,q\sum_{\sigma}\ln(1-r_{\sigma}^{21}r_{\sigma}^{23}{\rm e}^{-2\kappa_{2}L}), \tag{3}\] where \(\sigma={\rm TE,TM}\), and the prime in the sum above indicates that the first term (\(m\) = 0) has to be weighted by 1/2. The Fresnel reflection coefficients between surfaces \(i\) and \(j\) for the transverse magnetic (TM) and transverse electric (TE) polarizations are given by \[r_{\rm TE}^{ij}=\frac{\kappa_{i}-\kappa_{j}}{\kappa_{i}+\kappa_{j}};\quad r_{\rm TM}^{ij}=\frac{\varepsilon_{j}\kappa_{i}-\varepsilon_{i}\kappa_{j}}{\varepsilon_{j}\kappa_{i}+\varepsilon_{i}\kappa_{j}}\,. \tag{4}\] Here \(\kappa_{i}=\sqrt{q^{2}+\varepsilon_{i}\xi_{m}^{2}/c^{2}}\), with \(i=1,2,3\), and the Matsubara frequency is defined above. Notably, as the dielectric constant is finite for gapped metals unless we set \(\Gamma=0\), the transverse electric reflection coefficient between non-magnetic gapped metals and dilute gas goes to zero (\(r_{\rm TE}^{21}=0\quad as\quad\xi\to 0\)). As a comparison, we also consider what happens when we assume perfect metallic behavior for the transverse electric reflection coefficients in the zero frequency limit [11; 13; 22] (\(r_{\rm TE}^{21}r_{\rm TE}^{23}=1\quad as\quad\xi\to 0\)). The alternative limiting behavior leads to the "perfect metal correction" (PMC) to the interaction between real metal surfaces (when \(L<<R\)) [11; 13; 22], \[\Delta F^{\rm PMC}(L)=\frac{\Delta f^{\rm PMC}(L)}{2\pi R}=\frac{-k_{B}T\zeta(3)}{16\pi L^{2}}, \tag{5}\] where \(\zeta(3)\) is the Riemann zeta function. This could be the case if the dielectric function for La\({}_{3}\)Te\({}_{4}\) is not finite as \(\xi\to 0\), but instead goes to infinity at least as \(1/\xi^{2}\).
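For readers who want to reproduce the qualitative behavior of Eq. (3), the following minimal sketch (our own illustration, not the code behind the paper's figures) evaluates the Matsubara sum and the \(q\)-integral for two identical La\({}_{3}\)Te\({}_{4}\) half-spaces across air, with the plate permittivity taken from the Table 1 oscillator fit; the integration cut-off and the number of Matsubara terms are ad hoc choices.

```python
import numpy as np
from scipy.integrate import quad

hbar, kB, c, eV = 1.054571817e-34, 1.380649e-23, 2.99792458e8, 1.602176634e-19  # SI units

# Table 1 oscillator fit for La3Te4, frequencies in eV
omega_j = np.array([0.0203, 0.0362, 0.0694, 0.1325, 0.2085, 0.4297, 0.8328,
                    1.9083, 3.2388, 5.2955, 8.9753, 18.2815, 23.0355, 42.0922])
C_j = np.array([5.6047, 15.3667, 27.1114, 35.8341, 93.1702, 6.5149, 2.9159,
                2.4749, 4.6107, 2.7231, 0.7087, 0.1959, 0.0695, 0.0])

def eps_plate(xi):                       # xi in rad/s
    xi_eV = xi * hbar / eV
    return 1.0 + np.sum(C_j / (1.0 + (xi_eV / omega_j) ** 2))

def free_energy(L, T=300.0, n_matsubara=1500):
    """Eq. (3) for two identical plates (media 1 and 3) across air (medium 2)."""
    total = 0.0
    for m in range(n_matsubara):
        xi = 2.0 * np.pi * kB * T * m / hbar
        e1, e2 = eps_plate(xi), 1.0
        def integrand(q):
            k1 = np.sqrt(q * q + e1 * xi * xi / c ** 2)
            k2 = np.sqrt(q * q + e2 * xi * xi / c ** 2)
            rTE = (k2 - k1) / (k2 + k1)
            rTM = (e1 * k2 - e2 * k1) / (e1 * k2 + e2 * k1)
            return q * (np.log(1.0 - rTE ** 2 * np.exp(-2.0 * k2 * L)) +
                        np.log(1.0 - rTM ** 2 * np.exp(-2.0 * k2 * L)))
        weight = 0.5 if m == 0 else 1.0
        term, _ = quad(integrand, 0.0, 60.0 / L, limit=200)
        total += weight * term
    return kB * T / (2.0 * np.pi) * total   # negative value: attraction

print(free_energy(100e-9))   # free energy per unit area at L = 100 nm, in J/m^2
```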
The energy correction factor, \(\eta(L,T)\), is defined [4] as the ratio of the calculated Casimir force to the corresponding Casimir force between a perfectly conducting sphere with radius \(R\) and a perfectly conducting plane (when \(L<<R\)) at zero temperature [5], \[F_{C}(L)=\frac{f_{C}(L)}{2\pi R}\cong\frac{-\pi^{2}\hbar c}{720L^{3}}, \tag{6}\] \[\eta(L,T)=\frac{f(L)}{f_{C}(L)}=\frac{F(L)}{F_{C}(L)}. \tag{7}\] We will match our results against an ideal case (relevant to any metal surfaces behaving as ideal plasmas [2]). For two metallic surfaces interacting across air, it is thus relevant for comparison to include the "perfect metal correction" by adding Eq. (5) to Eq. (3) and dividing the sum by Eq. (6). The corrected ratio is \[\eta^{*}=\frac{f(L)+\Delta f^{\rm PMC}(L)}{f_{C}(L)}=\frac{F(L)+\Delta F^{\rm PMC}(L)}{F_{C}(L)}. \tag{8}\] ### A note on the effect of dissipation The dissipation is included in our paper via a damping term \(\Gamma\) in the Drude dispersion relation, thus not from microscopic models. To see this from a broader perspective, it may be worthwhile to start from the general case where the temperature dependence of \(\Gamma\) is also taken into account. We have then \[\varepsilon(i\xi,T)=1+\frac{\omega_{pl}^{2}}{\xi[\xi+\Gamma(T)]}, \tag{9}\] where \(\xi\) is the imaginary frequency. If one at first ignores impurities, one may here make use of the Bloch-Gruneisen formula [56] for the temperature variation of the electrical resistivity \(\rho\), the latter being proportional to \(\Gamma\). This formula implies, among other things, that \(\varepsilon(i\xi,T)\) is actually higher when \(T\) is low than at room temperature if the frequencies are lower than about \(10^{14}\) rad/s, i.e., of the same order as the first Matsubara frequency at room temperature (cf. the discussion on these points by Hoye et al. [57; 58]). Now, from numerical estimates, it turns out that the influence of the temperature on \(\Gamma\) is modest. A more important factor in this context is the presence of impurities in the metal. The existence of these makes the resistivity \(\rho\), and thus also \(\Gamma\), constant at low temperatures and low frequencies. Consequently, we can put \(\varepsilon(i\xi,T)\rightarrow\varepsilon(i\xi)\propto 1/\xi\) when \(\xi\to 0\). Now one must recognize that for practical purposes it is not \(\varepsilon\) itself that is the central quantity, but rather the combination \[\xi^{2}[\varepsilon(i\xi)-1], \tag{10}\] which has to go to zero as \(\xi\to 0\). This relationship is satisfied by the simple Drude ansatz for the dispersion equation, and the Casimir theory for metals becomes consistent. The expression (10) implies that the contribution to the Casimir force from the TE zero mode vanishes, and also that the Nernst theorem becomes satisfied (i.e., that the free energy depicted as a function of \(T\) has a zero slope as \(T\to 0\)). Cf. again the references by Hoye et al. [57; 58] mentioned above. Figure 2: (Color online) (a) Dielectric functions at imaginary frequencies, from top to bottom, for La\({}_{3}\)Te\({}_{4}\), La\({}_{2.92}\)Te\({}_{4}\), La\({}_{2.83}\)Te\({}_{4}\), La\({}_{2.75}\)Te\({}_{4}\), La\({}_{2.67}\)Te\({}_{4}\). (b) Ratio, in the low-frequency region, of the dielectric function for La\({}_{3}\)Te\({}_{4}\) with different \(\Gamma\) (damping coefficient, also known as dissipation parameter) to the corresponding values for La\({}_{3}\)Te\({}_{4}\) with \(\Gamma=0.2\) eV. The results include a superposition of interband transitions and Drude (free electron) contributions, both calculated using DFT.
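The requirement that the combination in Eq. (10) vanishes as \(\xi\to 0\) is straightforward to check for the Drude form of Eq. (9); the tiny sketch below is an added illustration (not from the paper) using the gold parameters quoted in the text.

```python
import numpy as np

def eps_drude(xi, omega_pl, gamma):
    """Drude permittivity at imaginary frequency, Eq. (9): 1 + wp^2 / (xi*(xi + Gamma))."""
    return 1.0 + omega_pl ** 2 / (xi * (xi + gamma))

wp, g = 9.6, 0.05   # plasma frequency and damping for gold quoted in the text (eV)
for xi in [1e-1, 1e-3, 1e-6, 1e-9]:
    # Eq. (10): xi^2 * (eps - 1) = wp^2 * xi / (xi + Gamma), which tends to 0 whenever Gamma > 0
    print(xi, xi ** 2 * (eps_drude(xi, wp, g) - 1.0))
```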
## III Results ### A pair of identical La\({}_{3-x}\)Te\({}_{4}\) surfaces To demonstrate the effect of off-stoichiometry on the Casimir-Lifshitz force, we first consider gapped metal planar surfaces interacting across air as a function of interplane distance (Fig. 3a). At short separations, that is, when the finite velocity of light can be approximated as infinite, the product of reflection coefficients goes as \[\frac{(\varepsilon_{1}-\varepsilon_{2})^{2}}{(\varepsilon_{1}+\varepsilon_{2})^{2}}. \tag{11}\] From this part of the full expression, two things can be deduced: (1) the force between identical surfaces is attractive, and (2) the closer the ratio in Eq. (11) is to 1 (i.e. the more metallic), the stronger the attraction. The Casimir-Lifshitz free energy between gapped metal surfaces at finite temperatures is predicted to deviate strongly from the \(T=0\) K ideal metal Casimir interaction. (Note that the long-range thermal asymptote decays slower than the zero-temperature Casimir asymptote, leading to an increase of the ratio with increasing separations.) As can be seen in Fig. 3b-d, off-stoichiometry effects for gapped metals can be used as an effective knob to induce 10-40 % changes in the magnitude of the interaction. Notably, in the separation range where forces are typically measured, the effects are of the same magnitude as any potential corrections from "the ideal metal" (plasma model) plate zero frequency transverse electric contribution. The force in Fig. 3b increases in magnitude as the surfaces become more metallic (up to a limit where the product of the reflection coefficients in a specific frequency range is close to one). The results shown in Fig. 3d correspond to the case when the ideal metal or "plasma model" approximation [2] is used for the zero frequency term. Another important observation is that the interaction between different metallic La\({}_{3-x}\)Te\({}_{4}\) surfaces is also different (i.e., dependent on \(x\)). This behavior can be understood at the electronic structure level: increasing the off-stoichiometry changes the free carrier concentration (each La vacancy removes three electrons from the principal conduction band), reducing the free carrier contribution to \(\varepsilon(i\xi)\). Hence, even though multiple La\({}_{3-x}\)Te\({}_{4}\) surfaces are metallic, they substantially differ in dielectric functions simply due to differences in free carrier concentration. As noted above, the dielectric properties of materials are defined by the superposition of different contributions. While one can explicitly calculate the band-to-band transitions, the calculation of the free carrier contribution to the dielectric function relies on the Drude model. We described in Fig. 2 the effect on the low-frequency tail of the dielectric function of changes in the choice of damping coefficient in the Drude model (\(\Gamma\)). The static dielectric constant rises to higher values with a reduced \(\Gamma\). However, in Fig. 3c, we show how slight the dependence of the energy correction factor on the choice of damping coefficient is for the specific case of a pair of La\({}_{3}\)Te\({}_{4}\) surfaces. Since the gapped metal surfaces have large but finite dielectric constants, one would expect the "perfect metal correction" to be purely hypothetical. However, future optical and force measurements will enable us to distinguish better how to accurately model the dielectric functions at extremely low frequencies.
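To make the short-separation argument around Eq. (11) concrete, one can compare the static permittivities implied by the Table 1 fits, \(\varepsilon(i\xi\to 0)=1+\sum_{j}C_{j}\); the snippet below is an added back-of-the-envelope check (not a calculation from the paper).

```python
# Static limits eps(i*xi -> 0) = 1 + sum_j C_j, obtained by summing the Table 1
# oscillator strengths for the two end-member compounds.
eps0 = {"La3Te4 (metallic)": 1.0 + 197.30, "La2.67Te4 (insulating)": 1.0 + 11.47}
for name, e1 in eps0.items():
    e2 = 1.0                              # air in the gap
    ratio = ((e1 - e2) / (e1 + e2)) ** 2  # Eq. (11), the short-separation factor
    print(f"{name}: eps(0) ~ {e1:.1f}, reflection ratio ~ {ratio:.3f}")
# The more metallic compound gives a ratio closer to 1, i.e. a stronger attraction.
```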
### Gold sphere in air near La\({}_{3-x}\)Te\({}_{4}\) surfaces As is well known, all ordinary metals have a finite static conductivity. The Drude model describes the optical and dielectric properties quite well for small frequencies. The dielectric function in the Drude model is \[\varepsilon\left(\omega\right)=1+\frac{i\sigma\left(\omega\right)}{\omega}=1-\frac{\omega_{pl}^{2}}{\omega\left(\omega+i\Gamma\right)}. \tag{12}\] Setting the damping parameter \(\Gamma\) to zero, the plasma model is obtained. The damping parameter has a real physical origin, being the result of scattering of the carriers against lattice imperfections. At finite temperature, processes with phonons emitted or absorbed further contribute. Bostrom and Sernelius [11; 13] found that the damping parameter has a dramatic effect on the Casimir interaction between gold surfaces at separations where the finite temperature is important. The plasma model actually predicts a result that coincides with that of the classic Casimir gedanken experiment between two perfectly reflecting half-spaces, while the Drude model predicts that this result is reduced by a factor of two. In the limit \(L\rightarrow\infty\) (while still demanding \(\frac{L}{R}<<1\)), the ratio for two interacting metallic surfaces would, within the PMC, become asymptotically \[\eta^{*}\sim\frac{90Lk_{B}T\zeta(3)}{\pi^{3}\hbar c}, \tag{13}\] while \(\eta\) in the same limit has half this magnitude. For all cases considered, the ratios go (for large \(L\)) \(\propto Lk_{B}T/(\hbar c)\). This means, as is well known, that the ratio increases linearly with separation for large \(L\). Some experiments favor the Drude model [10; 19; 16] while most appear to favor the plasma model [22; 17; 18; 12]. The materials we consider are evaluated by density functional theory to have large, but finite, values for the zero frequency permittivity (which would, however, go to infinity if \(\Gamma\to 0\)). They should hence have a free energy behavior not observed experimentally by, for example, Mohideen and collaborators for metal surfaces [2; 17; 18; 22]. Rather, it is expected to be closer to the experimental observations by Lamoreaux and his collaborators [10; 19; 16] (and even more to the case of doped silicon surfaces). If future experiments prove this to be wrong, further research will be needed into how to model the low-frequency part of the dielectric functions for gapped metals within density functional theory. Typically, experimental measurements of Casimir forces are conducted using atomic force microscopy (AFM), where a specialized sphere (e.g., Au) is attached to the AFM tip. Because of this, we also consider another proposed experimental setup, shown in Fig. 4, where an Au sphere attached to an atomic force microscope tip interacts with planar surfaces of the different La\({}_{3-x}\)Te\({}_{4}\) compounds at different separations. We present our results in terms of the ratio between the calculated force and the corresponding hypothetical force between a perfectly conducting plane and a perfectly conducting sphere (in the limit of zero temperature). Very similar systems, but with different material combinations, have been carefully studied by some of the world's best groups in force measurements.
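As a quick sanity check on the asymptote in Eq. (13) (an added numerical aside, not a result from the paper), its value at room temperature and micron separations is of order one, which is why thermal effects matter in that range:

```python
import numpy as np

hbar, c, kB = 1.054571817e-34, 2.99792458e8, 1.380649e-23   # SI units
zeta3 = 1.2020569

def eta_star_asymptote(L, T=300.0):
    """Large-L limit of Eq. (13): eta* ~ 90 L kB T zeta(3) / (pi^3 hbar c)."""
    return 90.0 * L * kB * T * zeta3 / (np.pi ** 3 * hbar * c)

print(eta_star_asymptote(1e-6))        # ~0.46 at L = 1 micron, T = 300 K
print(eta_star_asymptote(1e-6) / 2.0)  # the corresponding eta is half of eta*
```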
Notably, as we clearly demonstrate, both off-stoichiometry and "perfect metal correction" effects are large enough to be within the few-percent measurement accuracy claimed in the \(0.1\,\mu\)m range in several experimental labs (for instance by Mohideen and co-workers) [10; 17; 19; 22; 59; 60; 61; 62]. ### Air bubble in liquid water near La\({}_{3-x}\)Te\({}_{4}\) surfaces One interesting prediction from theory and force measurements is that these forces can be repulsive [6; 63; 64; 65; 66; 67; 68], and even change sign [69; 70]. When reflecting on the expression for the Casimir-Lifshitz interaction, it is clear that repulsion occurs whenever \(\varepsilon_{1}>\varepsilon_{2}>\varepsilon_{3}\) for a broad range of finite Matsubara frequencies. This was well known to Dzyaloshinskii, Lifshitz and Pitaevskii [6]. A remarkable point discussed by Elbaum and Schick [69] is that the dispersion forces (including [4], e.g., van der Waals, Casimir-Polder, Lifshitz and Casimir interactions) can change sign when, for a range of "high" frequencies (small separations), \(\varepsilon_{1}>\varepsilon_{2}>\varepsilon_{3}\), while for low frequencies (large separations) \(\varepsilon_{2}>\varepsilon_{1}>\varepsilon_{3}\). The origin of this effect is the relevant reflection coefficients combined with the exponential factor \(e^{-2\sqrt{q^{2}+\varepsilon_{2}\xi_{m}^{2}/c^{2}}L}\). At very large separations (the factor \(\sqrt{\varepsilon_{2}}L\xi_{m}/(qc)\) should be small or of the order unity to result in a significant contribution to the interaction), the finite velocity of light means only the zero frequency term contributes. If \(\varepsilon_{2}(0)>\varepsilon_{1}(0)>\varepsilon_{3}(0)\), then long-range attraction follows. We demonstrate in Fig. 5 that the Casimir-Lifshitz force between an air bubble in water near a gapped metal surface can, via control of the off-stoichiometry and separation, at large separations change from repulsion to attraction. We propose that further studies of how to tune more effectively the transition from repulsion to attraction-based trapping of gas bubbles in liquids must include other effects, including hydration, dissolved gases, surface charges, and ion-specific double layer forces within the DLVO theory and beyond [3; 71]. ## IV Conclusions As dimensions and distances relevant for nanomachines go down in size, it becomes increasingly important to control the sign and magnitude of short-range van der Waals/Casimir-Lifshitz interactions [72; 73; 66] and torques [74; 75]. We note that similar analyses have been performed in the literature for conductive oxides, phase-changing materials, chiral materials, magnetic materials, Weyl semimetals, graphene, topological insulators, and many other systems. We refer the readers to the review by Woods and collaborators [76]. Herein, we investigate the effects of off-stoichiometry on the Casimir-Lifshitz force between La\({}_{3-x}\)Te\({}_{4}\) surfaces in various experimental setups. The results show that off-stoichiometry effects significantly impact the interaction between different La\({}_{3-x}\)Te\({}_{4}\) surfaces.
In particular, we propose that these forces can be changed by as much as 10-40% by manipulating the off-stoichiometry of the gapped metal and tuning it from metallic to insulating behavior (this is possible because La\({}_{3}\)Te\({}_{4}\) gapped metal possesses a distinctive electronic structure in which the formation of La vacancies reduces the free carrier concentration). The study of this new material category indicates a roadmap for how to enhance or reduce Casimir-Lifshitz interactions. As we show, for gas bubbles in liquid water near a gapped metal, tuning off-stoichiometry effects can even change the sign of the long-range part of the Casimir-Lifshitz interactions. Our main conclusion is that off-stoichiometry in gapped metals can be used as a knob to tune long-range interactions. In a longer perspective, the use of controlled quantum switches for fluid systems, together with an exploitation of off-stoichiometry effects, may be a promising way to investigate the Drude/plasma controversy. ###### Acknowledgements. The authors thank the "ENSEMBLE3 - Centre of Excellence for nanophotonics, advanced materials and novel crystal growth-based technologies" project (GA No. MAB/2020/14) carried out within the International Research Agendas programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund, the European Union's Horizon 2020 research and innovation programme Teaming for Excellence (GA. No. 857543), and European Union's Horizon 2020 research and innovation programme (grant No. 869815), for support of this work. We gratefully acknowledge Poland's high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2022/015458. We also acknowledge access to high-performance computing resources via NAISS, provided by NSC and PDC, as well as NOTUR, provided by Sigma2. Figure 3: (Color online) (a) Schematic of the set-up. (b) Energy correction factor (\(\eta(L,T)\)) between equal gapped metal surfaces across air at T=300 K. (c) \(\eta(L,T)\) for two equal La\({}_{3-x}\)Te\({}_{4}\) surfaces, varying \(\Gamma\). (d) Same as in (b) but adding the "perfect metal correction" to the free energy for the metallic systems. The Drude damping parameter is here introduced on a phenomenological level. The influence from dissipation does not significantly change the strength of the interaction. In Fig. 3d, the reported change is however different, since it arises from the "perfect metal correction" (the meaning of the given zero frequency limit is explained in Fig. 4).
2303.14261
Maximum Relative Divergence Principle for Grading Functions on Direct Products of Chains
The concept of Shannon Entropy for probability distributions and associated Maximum Entropy Principle are extended here to the concepts of Relative Divergence of one Grading Function from another and Maximum Relative Divergence Principle for grading functions on direct products of totally ordered chains (chain bundles). Several Operations Research applications are analyzed.
Alexander Dukhovny
2023-03-24T20:08:29Z
http://arxiv.org/abs/2303.14261v1
# Maximum Relative Divergence Principle for Grading Functions on Direct Products of Chains ###### Abstract The concept of Shannon Entropy for probability distributions and associated Maximum Entropy Principle are extended here to the concepts of Relative Divergence of one Grading Function from another and Maximum Relative Divergence Principle for grading functions on direct products of totally ordered chains (chain bundles). Several Operations Research applications are analyzed. ## 1 Introduction In numerous probability theory problems and beyond, the Insufficient Reason Principle has been used in the form of the Maximum (Shannon) Entropy Principle (MEP). According to MEP, the "most reasonable" (using the fewest extra assumptions) way to determine missing pieces of the needed probability distribution is to maximize, under some application-specific constraints, the Shannon Entropy functional (see, e.g., [1]). That approach has proved effective in so many cases that the references are too many to quote. The original Shannon Entropy formula and MEP have been generalized in many contexts. The list includes Relative Entropy, Kullback-Leibler Divergence, Partition Entropy, Kolmogorov-Sinai Entropy, Topological Entropy, Entropy of general non-probabilistic measures (capacities) and a great many others (see, e.g., references [5-11] and a review in [13]). In our initial paper [3], Shannon Entropy was generalized to the concept of Relative Divergence (RD) of one Grading Function (GF) from another on a totally ordered set (chain), reducing to Shannon Entropy in a special case. (The term Relative Divergence was chosen in keeping with Kullback-Leibler Divergence - see [11] - also known as Relative Divergence of probability measures.) Further, in [4] we started the process of extending the concepts of Relative Divergence and Maximum Relative Divergence Principle (MRDP) to partially ordered sets. In the process, we demonstrated that 1. both of those concepts reduce to Shannon Entropy and Maximum Entropy Principle when used to determine the "most reasonable" probability distribution on a sample space; 2. conclusions made by using MRDP in new problems agree with "common sense" ones - where such are available; 3. MRDP can be effectively used in new applications. The new applications in [4] included working with a power set \(W=2^{X}\) of the event space \(X\) ordered by subset inclusion. In that case our results coincided with the ones obtained in our earlier work using the concept of General Entropy of General (non-additive) Measures (see [2]) for power sets. Namely, using a "normalized" General Measure \(\mu(w),\quad w\in W\), as a Grading Function \(F(w)\) of a subset \(w\), it followed that its relative divergence from the "subset cardinality" grading function \(N(w)\) reduces to the minimum of Shannon Entropy values of all probability distributions said to be "subordinate" to \(\mu(w)\). Applying MRDP in [4] beyond Probability Theory applications made it possible to consider some problems arising in Operations Research. It showed that MRDP does lead to "most reasonable, common sense, natural" results where such results are available. Also, some new results were obtained under new application-specific constraints. In this paper we continue the process started in [3] and extend the RD and MRDP concepts to sets that are direct products of totally ordered chains, referred to as "chain bundles".
To make the paper self-contained, some basic general definitions and properties of RD and MRDP are presented in section 2, along with relevant technical tools from the Shannon Entropy theory. In section 3 general results for RD and MRDP are specified for direct products (bundles) of totally ordered chains. Based on single-chain cases, we explore possible solution process domain (the set of admissible grading functions) issues under external linear constraints. Next, in sections 4, 5 and 6 we consider a number of special cases that emerge naturally in representative Operations Research applications. Using MRDP, we explore the MRDP solution process under several types of constraints on the admissible grading functions: imposed direct linear relations on their values and implied structural forms (referred to as "height-dependent", "additively separable", "parameters-controlled"). ## 2 Basic Definitions and properties The initial setup of Relative Divergence in [3] begins as follows: let \(W\) be a set totally ordered by the order relation \(\prec\). A real-valued function \(F\) on \(W\) is said to be a Grading Function (GF) on \(W\) if it is order-monotonic, that is, \(w\prec v\iff F(w)<F(v)\) for all \(w,v\in W\). In this paper we consider a discrete countable \(W=\{w_{k},\quad k=\ldots,-1,0,1,\ldots\}\) and refer to its "ordinal" function \(I:I(w_{k})=k\) as a "natural" GF on \(W\). When \(F(w)\) and \(G(w)\) are GFs on a discrete countable \(W\), the RD of \(F\) from \(G\) on \(W\) is defined (assuming absolute convergence of the series), as \[{\cal D}(F\|G)|_{W}=-\ \sum_{k=-\infty}^{\infty}\ \ln\left(\frac{\Delta_{k}F}{ \Delta_{k}G}\right)\Delta_{k}F, \tag{1}\] where \(\quad\Delta_{k}F=F(w_{k})-F(w_{k-1}),\quad\Delta_{k}G=G(w_{k})-G(w_{k-1}), \quad k=\ldots,-1,0,1,\ldots\). It follows directly from the definition that \({\cal D}(F\|G)\) possesses some special properties w.r.t. linear transformations of those grading functions. 1. \({\cal D}((c+F)\|(c+G))|_{W}={\cal D}(F\|G)|_{W}\). 2. \({\cal D}(cF\|cG)|_{W}=c{\cal D}(F\|G)|_{W},\quad\forall c>0\). Also, when a GF \(F\) is bounded on \(W\), denoting by \(m\) and \(M\), respectively, the minimum and maximum of \(F\) on \(W\), it follows directly from (1) that \(\forall c>0\) \[{\cal D}(cF\|I)|_{W}=c{\cal D}(F\|I)|_{W}-c(M-m)\ln c, \tag{2}\] where \(\Delta_{W}F=M-m\). Any bounded GF can be presented as \(F(w)=m+\hat{F}(w)(M-m)\), where we define \(\hat{F}:=\frac{F-m}{M-m}\). Obviously, \(\hat{F}\) is itself a grading function with the grading range [0,1] (the "standardized" \(F\)) which can be interpreted as a cumulative probability distribution function on \(W\). Based on that, we will use the term "Disorder Entropy" for \(\hat{F}\) on \(W\) and denote it \({\cal H}(\hat{F})|_{W}\): \[{\cal H}(\hat{F})|_{W}:={\cal D}(\hat{F}\|I)|_{W} \tag{3}\] Now, using \(c=M-m\) in (2) for \(\hat{F}\), it follows that \({\cal D}(F\|I)|_{W}={\cal D}((M-m)\hat{F})\|I)|_{W}=(M-m){\cal H}(\hat{F})-(M -m)\ln(M-m)\), which establishes the connection between the concepts of Relative Divergence and Shannon Entropy. As such, some classical techniques from the standard toolkit of Shannon Entropy theory (see, e.g., [5]) become relevant here. **Lemma 1.** For a probability distribution \(\{p_{i}\},\quad 0\leq p_{i}\leq 1,\quad\sum_{i=1}^{n}p_{i}=1,\quad i=1,\ldots,n\), the maximum value of Shannon Entropy of that distribution \({\cal H}\quad=-\sum_{i=1}^{n}p_{i}\ln p_{i}\quad=\ln n\) is attained when \(p_{i}=\frac{1}{n},\quad i=1,\ldots,n\). 
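To make definition (1) and Lemma 1 concrete, here is a short numerical illustration (added by us, not part of the paper): on a finite chain with the natural grading \(I\), the relative divergence \({\cal D}(F\|I)\) of a grading function with \(F(0)=0\), \(F(n)=M\) is largest for the linear grading, where it equals \(M\ln n-M\ln M\).

```python
import numpy as np

def relative_divergence(F, G):
    """Definition (1) on a finite chain: D(F||G) = -sum ln(dF/dG) * dF over consecutive increments."""
    dF, dG = np.diff(F), np.diff(G)
    return -np.sum(np.log(dF / dG) * dF)

n, M = 8, 1.0
I = np.arange(n + 1, dtype=float)        # the "natural" (ordinal) grading function
F_linear = M * I / n                     # linear grading with F(0) = 0, F(n) = M

rng = np.random.default_rng(0)
steps = rng.random(n)
F_random = M * np.concatenate(([0.0], np.cumsum(steps) / steps.sum()))  # another admissible grading

print(relative_divergence(F_linear, I))  # equals M*ln(n) - M*ln(M) = ln(8), about 2.079
print(relative_divergence(F_random, I))  # strictly smaller, in line with Lemma 1
```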
Based on Lemma 1, the following result from [4] will be important in what follows. **Lemma 2.** Let \(F(i)\) be a grading function on the chain \(W=\{0,\ldots,n\}\), and suppose some values of \(F(i)\) are specified, that is, \(F(n_{k})=M_{k},\quad k=1,\ldots,K\), where we define \(n_{0}=0,\quad M_{0}=m,\quad n_{K}=n,\quad M_{n}=M\) and also \(\quad\Delta_{k}M=M_{k}-M_{k-1}\), \(\Delta_{k}n=n_{k}-n_{k-1}\), \(k=1,\ldots,K\). Then the maximum value of \({\cal D}(F\|I)|_{W}=\sum_{k=1}^{K}[\Delta_{k}M\Delta_{k}n)-\Delta_{k}M\ln \Delta_{k}n)]\) is attained when \(F\)(i) is a piece-wise linear function: \[F(i)=a_{k}+b_{k}i,\quad i\in I_{k},\quad k=1,\ldots,K,\quad\forall i\in W, \tag{4}\] where \(\quad b_{k}=\frac{\Delta_{k}M}{\Delta_{k}n},\quad a_{k}=M_{k}-b_{k}n_{k-1}\), and index intervals \(\quad I_{k}=(n_{k-1},n_{k}],\quad k=1,\ldots,K\). **Proof**. Let \(\quad q_{i,k}=\frac{F(i)-F(i-1)}{M_{k}-M_{k-1}},\quad i\in I_{k}\). Maximizing \(\mathcal{D}(F\|I)|_{W}\) in this case reduces to the following problem. Find the \(q_{i,k}\geq 0,\quad i\in I_{k},\quad k=1,\ldots,K\) that maximize \(\quad-\sum_{k=1}^{K}\sum_{i\in I_{k}}q_{i,k}\ln(q_{i,k})\) subject to \(\quad\sum_{i\in I_{k}}q_{i,k}=1,\quad k=1,\ldots,K\). The additive form of the maximized expression leads to \(K\) independent maximization problems for each \(k=1,\ldots,K\). Using Lemma 1, the (unique) solution of each one of them is \(q_{i,k}=\frac{1}{\Delta_{k}n},\quad i\in I_{k}\), which completes the proof of Lemma 2. In particular, when only \(M_{n}=M,\quad M_{0}=m=0\) are specified, \(K=1\), so (4) reduces to \(\quad F(i)=M\frac{i}{n},\quad\forall i\in W\), \(\quad\mathcal{D}(F\|I)|_{W}=M\ln n-M\ln M\) ## 3 Relative Divergence and Maximum Relative Divergence Principle on Chain Bundles In this section we specify general results for Relative Divergence of grading functions and Maximum Relative Divergence Principle to an "event space" \(W\) - referred heretofore to as a chain bundle - a direct product of \(R\) totally ordered chains: \(W=X_{1}\times,\ldots,\times X_{R}\), where \(X_{r}=\{x_{r}(i),\quad i=0,1,\ldots,n_{r}\},\quad r=1,\ldots,R\). Using vector notation, \(W=\{\vec{w}(\vec{i})\}\) where the elements of \(W\) are denoted as \(\vec{w}(\vec{i})=[x_{1}(i_{1}),\ldots,x_{N}(i_{R})]\) and their vector indices \(\vec{i}=[i_{1},\ldots,i_{R}]\). As a direct product, the standard order relation of the elements of \(W\) is imposed by the order of their vector indices: for all unequal vectors in \(W\) \(\vec{w}_{\vec{i}}\prec\vec{w}_{\vec{j}}\iff\vec{i}\prec\vec{j}\), (that is, \(i_{t}\leq j_{t},\quad\forall t\).) As such, \(W\) has the minimal element \(\vec{w}_{min}=[x_{1}(0),\ldots,x_{R}(0)]\) and the maximal element \(\vec{w}_{max}=[x_{1}(n_{1}),\ldots,x_{R}(n_{R})]\) Two elements of \(W\) are said to be adjacent if their vector indices differ by only one component where the difference is 1. A sequence of vectors of \(W\) is said to be a maximal chain \(MC\) in \(W\) if it is totally ordered and no other chain in \(W\) contains it. Consequently, all maximal chains in \(W\) have the same minimal and maximal elements and the same number of elements \(K=1+n_{1}+\ldots+n_{R}\), and each element of \(W\) belongs to at least one maximal chain. Therefore, for any grading function \(F(\vec{w})\) on \(W\) its reduction to any maximal chain \(MC\in\) has the same minimum and maximum values \(m,M\) and overall grade spread \(M-m\) on each \(MC\). 
Because of that, following the approach of [4], and treating \(W\) as a union of all of its maximal chains, we define the relative divergence of two grading functions on \(W\) as follows: \[{\cal D}(F\|G)|_{W}:=\min_{MC\subset W}{\cal D}(F\|G)|_{MC} \tag{5}\] (That definition follows the one proposed in [2] to facilitate Maximum Entropy Principle for a general (non additive) measure \(\mu\) on the powerset \(W\) of the element set \(X\): its Shannon entropy \({\cal H}(\mu)|_{W}\) should be taken as the minimum over all maximal chains \(MC\) in \(W\) of Shannon Entropy values of probabilistic measures \(\mu^{s}_{MC}\) said to be "subordinate to \(\mu\) on \(MC\)". Each such \(\mu^{s}_{MC}\) is completely and uniquely determined by its values on the subsets comprising \(MC\) set equal to the values of \(\mu\) on those subsets.) Since the definition of RD does not involve actual values of elements of \(W\), to simplify notation we will, where feasible, refer to the elements \(\vec{w}(\vec{i})\) simply by their index vectors \(\vec{i}\). Each maximal chain \(MC\) in \(W\) is a sequence of adjacent vectors \(\{\vec{i}(k),\quad k=0,1,\ldots,K\}\). For a grading function \(F(\vec{i})\) defined on \(W\) we also define \(f_{MC}(\vec{i}(k))=F(\vec{i}(k))-F(\vec{i}(k-1)),\quad k=1,\ldots,K\), - the "increment" function of \(F\) along the chain \(MC\). As a grading function, \(F\) is monotonic, so its increment function assumes only nonegative values. In some applications it is possible to identify an obvious "common sense", "natural" GF on \(W\). When \(W\) is a direct product of chains, in the absence of constraints, the natural one is \(N(\vec{i})\) - the "height" of \(\vec{i}\) w.r.t. the order on \(W\)), the sum of all components of its vector index \(\vec{i}\): \(N(\vec{i}):=i_{1}+\ldots+i_{R}\). Its increment function along any maximal chain \(MC\) has the same constant values: \(f_{MC}(\vec{i}(k))=1,\quad k=1,\ldots,K\). Just as in our previous papers on the subject, we can now extend Maximum Entropy Principle (MEP) for probability distributions to Maximum Relative Divergence Principle (MRDP), as stated in [3], applied to grading functions on chain bundles (direct products of chains). MRDP: An "admissible" (satisfying the constraints of the problem) bounded grading function \(F\) on a chain bundle \(W\) is said to be "the most reasonable" (within the fixed grading interval) if it gives maximum to \({\cal D}(F\|N)|_{W}\). In its most general setup, obtaining \({\cal D}(F\|G)|_{W}\) has a very high computational cost as the number of maximal chains in a bundle of \(R\) chains increases rapidly with the number and the sizes of the bundled chains. In addition, using MRDP involves optimization analysis of all \({\cal D}(F\|N)|_{MC}\) over all \(MC\) in \(W\), possibly under application-specific constraints, increasing the overall computational load. In the next sections we explore a number of special cases that arise naturally in some applications under application-specific assumptions which reduce the computational cost of finding Relative Divergence and using MRDP. At the same time, in some special cases, the nature of the problem of interest and the imposed constraints complicate the set of admissible grading functions on \(W\) and the analysis of the MRDP problem. 
For a simple illustration, consider the case where \(W\) is a single chain: \(W=\{0,1,\ldots,n\}\), and an admissible grading function's \(F(i)\) increments \(\quad f_{i}=F(i)-F(i-1)\geq 0,\quad i=1,\ldots,n\), must satisfy two constraints: one imposed by the very definition of increments, and the other one that imposes a specified value of a linear combination of all \(f_{i}\). Assuming for simplicity that \(F(0)=m=0,\quad F(n)=M\), the constraints look as follows: \(\sum_{i=1}^{n}f_{i}=M\), and \(\quad\sum_{i=1}^{n}c_{i}f_{i}=\mu\). Following MRDP in this case calls for maximizing the value of \(\mathcal{D}(F\|N)|_{W}=-\sum_{i=1}^{n}f_{i}\ln f_{i}\) under those constraints. It is clear from the constraints that, depending on the coefficients of the linear constraint, the domain of the problem may be empty - as, for example, when either all \(c_{i}>\frac{\mu}{M}\), or all \(c_{i}<\frac{\mu}{M}\). When \(c_{i}=\frac{\mu}{M},\quad\forall i\), the constraints coincide and the MRDP problem reduces to the one covered by Lemma 2, so \(F(i)=i\frac{M}{n},\quad i=0,1,\ldots,n\). Otherwise, when the maximum value of \(-\sum_{i=1}^{n}f_{i}\ln f_{i}\) is attained at an interior point of the domain, using Lagrange multipliers, all \(f_{i}\) of that solution, in addition to the constraints of the problem, must satisfy the following Lagrange equations: \(-\ln f_{i}-1-\alpha-c_{i}\beta=0,\quad i=1,\ldots,n\). Denoting \(a=e^{-1-\alpha}\) and \(b=e^{-\beta}\), it follows then that \[f_{i}=ab^{c_{i}},\quad i=1,\ldots,n, \tag{6}\] using which, the constraints can be restated as \(a\sum_{i=1}^{n}b^{c_{i}}=M\), \(a\sum_{i=1}^{n}c_{i}b^{c_{i}}=\mu\), and used to specify \(a\) and \(b\). Taking a representative case from the Probability Theory, consider a random variable assuming values in \(W=\{1,\ldots,n\}\). Looking for its "most reasonable" probability distribution function \(F(i)\) under the assumption that the expected value \(\mu>1\) is specified, one can treat \(F(i)\) as a grading function on \(W\), where \(M=1,m=0\). The specified \(\mu\) presents as a linear constraint where \(c_{i}=i,\quad i=1,\ldots,n\). Using MRDP, it follows from (6) that the distribution must be geometric, exactly as it follows from MEP in the Probability Theory. ## 4 "Height-dependent" Grading Functions When \(W\) is a bundle of more than one chain (\(R>1\)), using MRDP may still lead to smaller computational cost under some special assumptions on the application-imposed form of the grading functions, constraints and the structure of the involved chains. When the application model imposes no constraints on the grading functions, then the MRDP-suggested most reasonable grading function must be linear: \(F(\vec{i})=m+N(\vec{i})\frac{(M-m)}{K}\). Indeed, in that case, by Lemma 2, for any maximal chain \(MC\) in \(W\), using that formula for \(F\) results in the highest possible value of \({\cal D}(F\|N)|_{MC}=(M-m)\ln K-(M-m)\ln(M-m)\). At the same time, direct results can also be obtained under the constraint on the very nature of the admissible grading functions: \(F(\vec{i})\) on \(W\) must be "height-dependent", that is, \(F(\vec{i})=F(N(\vec{i}))\). Indeed, in that case, the formula of (5) yields \[{\cal D}(F\|N)|_{W}=-\sum_{k=1}^{K}f(k)\ln(f(k)), \tag{7}\] where \(f(k)=F(k)-F(k-1),\quad k=1,\ldots,K\). A representative example arises in Queuing Theory: a service batch is formed by the server by selecting groups of customers from several waiting lines. The service batch is therefore an element of the bundle of the queues. 
The total batch service "cost" (say, total batch service time, including loading and processing) is often assumed to depend only on the total size of the batch. Assuming that the cost increases with the batch size, it is a "height-dependent" grading function on \(W\). By Lemma 2, in the absence of constraints, the MRDP-suggested most reasonable grading function in this case must be a linear function of the "height" of the element of \(W\): \(F(\vec{i})=m+N(\vec{i})\frac{(M-m)}{K}\). When an admissible "height-dependent" grading function must, in addition, have its increments satisfy a linear constraint, the previous section's single-chain analysis applies and yields direct results. ## 5 "Additively separable" Grading Functions In many applications where the event space \(W\) of its model is a direct product (bundle) of chains, admissible grading functions are to be constructed out of grading functions defined on the chains bundled in \(W\). Specifically, consider a direct product of two (or, consequently, any number) chain bundles (of different totally ordered chains): \(U=X_{1}\times\ldots\times X_{R},\quad\ V=Y_{1}\times\ldots\times Y_{Q}\), and \(W=U\times V=\{\vec{w}(\vec{s})\}\), where vector index \(\vec{s}=[\vec{i},\vec{j}]\) and vector elements of \(W\) are \(\vec{w}(\vec{s})=[\vec{u}(\vec{i}),\vec{v}(\vec{j})]\). A grading function \(F_{W}\) on \(W=U\times V\) is said to be "additively separable" on \(W\) w.r.t. \((U,V)\) if its values are sums of values of grading functions defined on \(U\) and \(V\), that is, for any \(\vec{w}=[\vec{u},\vec{v}]\) \[F_{W}(\vec{w})=F_{U}(\vec{u})+F_{V}(\vec{v}) \tag{8}\] In particular, the "natural" GF \(N\) on a chain bundle is additively separable: \(N_{W}(\vec{w})=i_{1}+\ldots+i_{R}+j_{1}+\ldots+j_{Q}=N_{U}(\vec{u})+N_{V}(\vec{v})\). An important representative example arises in the context of a Queuing Theory application: a server forms a service batch by separately selecting groups of items from two separate waiting lines \(U\) and \(V\), in order of their lines, independently of the other line choice. The service batch is an element of \(W=U\times V\). The total batch service "cost" (say, batch service time) is, in its simplest model, the sum of both groups' service costs, determined, in turn, by each group's size. Under a common assumption that those costs increase with the group size, the total service batch cost \(F_{W}\) is a grading function on \(W\) computed as a sum of the batch forming groups' costs \(F_{U}\) and \(F_{V}\). Proposition 1. Suppose \(F\) and \(G\) are additively separable grading functions on \(W=U\times V\) w.r.t. the chain bundles \(U\) and \(V\). Then \[\mathcal{D}(F\|G)|_{W}=\mathcal{D}(F_{U}\|G_{U})|_{U}+\mathcal{D}(F_{V}\|G_{V})|_{V}. \tag{9}\] Proof. The defining formula for \(\mathcal{D}(F\|G)|_{MC}\) is a sum of terms involving only the increments of \(F\) and \(G\) corresponding to each of the values of \(k,\quad k=1,\ldots,K\), for each maximal chain \(MC\). Each \(MC\) in \(W=U\times V\) can be split into subchains \(\{MC_{\vec{i}(k)}\}\) in \(V\) and \(\{MC_{\vec{j}(k)}\}\) in \(U\), where \(\vec{i}(k)\) and \(\vec{j}(k)\), respectively, are constant. Clearly, both kinds of subchains spliced together form maximal chains \(MC_{V}\) in \(V\) and \(MC_{U}\) in \(U\), respectively.
Combining the terms of the sum into groups corresponding to \(MC_{V}\) in \(V\) and \(MC_{U}\) in \(U\) and using the additive separability of both \(F\) and \(G\), one observes that in those groups the increments of those functions are either the increments of \(F_{V}\) and \(G_{V}\) or the increments of \(F_{U}\) and \(G_{U}\), respectively. As such, it follows that \(\mathcal{D}(F\|G)|_{MC}=\mathcal{D}(F_{U}\|G_{U})|_{MC_{U}}+\mathcal{D}(F_{V}\|G_{V})|_{MC_{V}}\). Clearly, the chains \(MC_{V}\) in \(V\) and \(MC_{U}\) in \(U\) obtained in this way range over all possible maximal chains in \(V\) and \(U\). As such, taking the minimum of the expression above over all \(MC\) in \(W\) completes the proof of Proposition 1. Proposition 1 may simplify the way MRDP applies to the combined problem. Namely, if the constraints of the maximization problem are stated separately and independently for \(F_{U}|_{U}\) and \(F_{V}|_{V}\) the overall maximization problem splits into separate smaller maximization problems, resulting in a massive reduction of the computing cost. Consequently, it opens a way to analyze an MRDP problem on the entire chain bundle by first identifying separate independent sub-bundles and splitting the combined problem into two (or more) smaller problems. Corollary 1. If grading functions \(F\) and \(G\) on \(W=X_{1}\times\ldots\times X_{R}\) are completely additively separable, that is, \(F(\vec{w})=F_{1}(x_{1})+\ldots+F_{R}(x_{R})\), and \(G(\vec{w})=G_{1}(x_{1})+\ldots+G_{R}(x_{R})\), then \[{\cal D}(F\|G)|_{W}=\sum_{r=1}^{R}{\cal D}(F_{r}\|G_{r})|_{W_{r}}. \tag{10}\] In the context of the previous example, should the batch service cost \(F\) be modeled as a sum of separate costs of service \(F_{1},\ldots,F_{R}\) for each of the selections from the respective waiting lines, that \(F\) becomes a completely additively separable grading function on \(W\). As such, based on formula (10), MRDP calls for maximizing \[{\cal D}(F\|N)|_{W}=\sum_{r=1}^{R}{\cal D}(F_{r}\|N_{r})|_{W_{r}} \tag{11}\] subject to the application-specific constraints. Furthermore, if the constraints are stated separately and independently for each of the bundled chains (waiting lines in the Queueing Theory application above) the overall maximization problem simply splits into \(R\) separate independent single-chain problems. Otherwise, to allow for computing cost reduction, the total service cost model may sometimes be adjusted so as to reflect interdependence of costs in a convenient way. Say, one common sense way to model the batch \(\vec{w}\) service cost is to present it as a sum of the separate subgroups' service costs and a server-specific component. That component could reflect, for example, the service batch subgroups selection, loading, post-processing, etc. Again, under the natural assumption of that cost component increasing with the batch size, the total service cost \(F_{0}(\vec{w})\) presents as a proper grading function on \(W\). Modelling that component as linearly dependent on the batch size: \(F_{0}(\vec{w})=a+bN(\vec{w}),a>0,b>0\), using the aforementioned linear properties of Relative Divergence, the MRDP problem simply leads to the same (11) where \(F_{r}(w_{r})\) is replaced by \(F_{r}+bN_{r}(w_{r})\). ## 6 MRDP for Parameter-Controlled Grading Functions In some applications, using MRDP has to be done by choosing the values of certain controlling "intrinsic" variables ("parameters").
A representative example arises again in the context of the Queuing Theory where \(R\) streams of incoming servers stop by the service station to pick up a group of customers of their designated types from the waiting line \(W_{r}\). The server's capacity is \(n_{r}\), and the cost of serving a group of \(i_{r}\leq n_{r}\) customers by the \(r\)-th kind server, \(F_{r}(i_{r})\), is naturally assumed to increase with \(i_{r}\). From the point of view of an observer, an assumption can be made that the incoming servers are specialized to serve customers of just one type \(r=1,\ldots,R\), modeled as a random variable with stationary probabilities \(p_{r}\). The expected cost of service of a group \(\vec{i}=[i_{1},\ldots,i_{R}],\quad F(\vec{i})=\sum_{r=1}^{R}p_{r}F_{r}(i_{r})\) is, therefore, a grading function on the chain bundle \(W=\{\vec{i}\}\). Given the information on the service costs of groups of each type, the "most reasonable" probability distribution of the server type can be obtained by applying MRDP to \(F(\vec{i})\). Using (2) and (11), it requires finding such nonnegative \(p_{1},\ldots,p_{R},\quad p_{1}+\ldots+p_{R}=1,\quad\) that would maximize \[\sum_{r=1}^{R}[p_{r}D_{r}-(M_{r}-m_{r})p_{r}\ln p_{r}] \tag{12}\] where \(D_{r}={\cal D}(F_{r}\|N_{r})|_{W_{r}}\). Using here Lagrange multipliers with respect to variables \(p_{1},\ldots,p_{R}\), it follows that at a domain-interior point of maximum, in addition to summing up to 1, they must satisfy the following system of equations: \(D_{r}-(M_{r}-m_{r})(\ln p_{r}+1)-\lambda=0,\quad r=1,\ldots,R\). Solving the system, it follows that \(p_{r}=e^{[-1+\frac{D_{r}-\lambda}{M_{r}-m_{r}}]},\quad r=1,\ldots,R\), with the Lagrange multiplier \(\lambda\) to be determined from the condition \(\quad p_{1}+\ldots+p_{R}=1\). In some cases the service contract between the pick-up server and the queuing station may specify the same "cost" values interval (say, for the server loading time at the station) for all server types: \(M_{r}-m_{r}=M-m,\quad\forall r\). In that case, it follows that \(p_{r}=cq^{D_{r}},\quad r=1,\ldots,R\), where \(\ln q=(M-m)^{-1},\quad c^{-1}=\sum_{r=1}^{R}q^{D_{r}}\). ## 7 Conclusion In [3] we introduced the concept of Relative Divergence (RD) of grading functions on totally ordered sets (chains). Based on that, RD and Maximum Relative Divergence Principle (MRDP) were extended here to partially ordered sets that are direct products (bundles) of chains. In particular, a number of special cases arising from representative Operations Research applications were studied here. It was shown that: 1. Similar to Maximum Entropy Principle (MEP), MRDP-based analysis leads to "most reasonable", "common sense" solutions akin to analogous Probability Theory problems; 2. A new method emerges to analyze some Operations Research applications (such as group service in Queuing Theory, resource distribution under constraints, etc.) which can be stated as MRDP problems on a bundle of chains; 3. In those applications, the computational cost of arising MRDP problems reduces strongly (and may lead to direct results) when the application of interest can be modeled in some special ways so that the grading functions emerging there can be assumed to have a convenient form.
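As a closing illustration of the parameter-controlled case (our own numerical sketch with made-up values of \(D_{r}\) and of the common grade spread \(M-m\), not data from the paper), one can check that a direct numerical maximization of expression (12) reproduces the closed-form solution \(p_{r}=cq^{D_{r}}\) derived above.

```python
import numpy as np
from scipy.optimize import minimize

D = np.array([1.0, 1.6, 2.3])   # hypothetical per-type divergences D_r
span = 2.0                      # hypothetical common grade spread M - m

def neg_objective(p):           # negative of expression (12)
    return -np.sum(p * D - span * p * np.log(p))

res = minimize(neg_objective, np.full(len(D), 1.0 / len(D)),
               bounds=[(1e-9, 1.0)] * len(D),
               constraints=({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},))

q = np.exp(1.0 / span)                    # ln q = (M - m)^(-1)
p_closed = q ** D / np.sum(q ** D)        # closed form p_r = c * q^{D_r}
print(res.x)
print(p_closed)                           # the two vectors agree to optimizer tolerance
```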
2305.10389
Cache-Oblivious Parallel Convex Hull in the Binary Forking Model
We present two cache-oblivious sorting-based convex hull algorithms in the Binary Forking Model. The first is an algorithm for a presorted set of points which achieves $O(n)$ work, $O(\log n)$ span, and $O(n/B)$ serial cache complexity, where $B$ is the cache line size. These are all optimal worst-case bounds for cache-oblivious algorithms in the Binary Forking Model. The second adapts Cole and Ramachandran's cache-oblivious sorting algorithm, matching its properties including achieving $O(n \log n)$ work, $O(\log n \log \log n)$ span, and $O(n/B \log_M n)$ serial cache complexity. Here $M$ is the size of the private cache.
Reilly Browne, Rezaul Chowdhury, Shih-Yu Tsai, Yimin Zhu
2023-05-17T17:29:16Z
http://arxiv.org/abs/2305.10389v2
# Cache-Oblivious Parallel Convex Hull in the Binary Forking Model ###### Abstract We present two cache-oblivious sorting-based convex hull algorithms in the Binary Forking Model. The first is an algorithm for a presorted set of points which achieves \(O(n)\) work, \(O(\log n)\) span, and \(O(n/B)\) serial cache complexity, where \(B\) is the cache line size. These are all optimal worst-case bounds for cache-oblivious algorithms in the Binary Forking Model. The second adapts Cole and Ramachandran's cache-oblivious sorting algorithm, matching its properties including achieving \(O(n\log n)\) work, \(O(\log n\log\log n)\) span, and \(O(n/B\log_{M}n)\) serial cache complexity. Here \(M\) is the size of the private cache. ## 1 Introduction Finding the convex hull of a set of \(n\) points in the plane is one of the most fundamental problems in computational geometry. It has wide applications, such as robot motion planning in robotics, image processing in pattern recognition, and tracking disease epidemics in ethology [3, 24, 35]. In the serial setting, there have been several efficient algorithms for constructing convex hulls [20, 25, 32, 38], both in 2-dimensions and in the more general problem of \(d\)-dimensions. In the parallel setting, a bunch of efficient convex hull algorithms have been developed for a set of \(d\)-dimensional points [4, 7, 8, 18, 33, 36, 39]. There are also many parallel convex hull algorithms specifically for presorted or unsorted points in 2-dimensions [11, 21, 29, 36]. In this paper, we focus on designing parallel algorithms for the convex hull of a set of points in 2-dimensions in the _binary-forking model_[14]. In particular, we present _cache-oblivious_[27]_parallel_ algorithms for both presorted (by \(x\)-coordinate) and unsorted points with optimal worst-case _work_ (see next paragraph) with respect to the size of the input. We achieve optimal _span_ (see next paragraph) in the presorted case and match the best-known span for sorting in the unsorted case for the binary-forking model. We use the _work-span_ model [23] to analyze the performance of parallel algorithms. The _work_, \(W(n)\) of a parallel algorithm is defined as the total number of CPU operations it performs when it is executed on a single processor. Its _span_, \(S(n)\) on the other hand, is the maximum number of operations performed by any single processor when the program runs on an unlimited number of processors. The _binary-forking_ model [1, 10, 12, 13, 14] realistically captures the performance of parallel algorithms designed for modern multicore shared-memory machines. Its formal definition is in [14]. In this model, the computation starts with a single thread, and then threads are dynamically and asynchronously created by some existing threads as time progresses. The creation of threads is based on the spawn/fork action: a thread can spawn/fork a concurrent asynchronous child thread while it continues its task. Note that the forks can happen recursively. The model also includes a join operation to synchronize the threads and an atomic test-and-set (TS) instruction. This model can be viewed as an extension of the binary fork-join model [23] which does not include the TS instruction. This model and its variants [1, 2, 12, 15, 19, 2] are widely used in many parallel programming languages or environments such as Intel TBB [43], the Microsoft Task Parallel Library [42], Cilk [28], and the Java fork-join framework [26]. 
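As a structural illustration of the spawn/join pattern just described (a minimal sketch, not taken from the paper), the following code sums an array by forking a child for the left half while the parent continues with the right half and then joins. Python threads do not give real parallel speedup because of the GIL; the point is the recursion shape, a balanced binary fork-join with \(O(n)\) work and \(O(\log n)\) span (ignoring the constant-size serial cutoff).

```python
import threading

def forked_sum(a, lo, hi, out, cutoff=4096):
    """Sum a[lo:hi]; the single result is appended to the list `out`."""
    if hi - lo <= cutoff:                  # serial base case
        out.append(sum(a[lo:hi]))
        return
    mid = (lo + hi) // 2
    left_out, right_out = [], []
    child = threading.Thread(target=forked_sum, args=(a, lo, mid, left_out))
    child.start()                          # fork: child thread takes the left half
    forked_sum(a, mid, hi, right_out)      # parent continues with the right half
    child.join()                           # join: synchronize with the forked child
    out.append(left_out[0] + right_out[0])

result = []
forked_sum(list(range(100_000)), 0, 100_000, result)
assert result[0] == sum(range(100_000))
```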
The binary-forking model is an ideal candidate for modeling parallel computations on modern architectures when compared with the closely related PRAM model [34]. The main difference between the binary-forking model and the PRAM model is synchronicity. The binary-forking model allows asynchronous thread creation; in the PRAM model, all processors work in synchronous steps. PRAM does not model modern architectures well because they utilize new techniques such as multiple caches, branch prediction, and many more, which lead to many asynchronous events such as varying clock speed, cache misses, etc [14]. Any algorithm designed for the PRAM model can be transformed into an algorithm for the binary-forking model at the cost of an \(O(\log n)\)-factor blow-up in the span while keeping the work asymptotically the same as in the PRAM model. We also employ the use of the _cache-oblivious_ model, first described by Frigo et al. [27]. In this model, memory is assumed to have two layers: a cache of size \(M\) and a main memory of unlimited size. The memory is split into blocks of size \(B\), and every time the processor tries to access a data point that is not in the cache it incurs a cache miss and the block containing the data point is copied into the cache from the main memory. When copying into a cache that is already full, an old block is evicted to make space for the new block. However, in contrast to the cache-aware model, cache-oblivious algorithms do not use the knowledge of the values of \(M\) and \(B\). In both models, the cache complexity of an algorithm is measured in terms of the number of cache misses it incurs and is referred to as cache-optimal if it incurs the fewest possible cache misses asymptotically. There are several ways to expand the cache-oblivious model to parallel computation. The performance of our algorithms are analyzed for private caches, using the same model as in Cole and Ramachandran [22]. It is similar to the parallel external memory (PEM) model of Arge et al. [5], with the primary difference being in the type of parallelism used. The PEM model uses bulk-synchrony for synchronization whereas we use binary forking. However, the use of private caches is almost identical. Each processor has a private cache of size \(M\) which consists of blocks of size \(B\) and the main memory of arbitrary size is shared amongst all processors. Several problems have been studied in the PEM model, particularly geometric problems including convex hull [41]. Sitchinava [41] showed that Atallah and Goodrich's [9]\(O(n\log n)\) work and \(O(\log n)\) span algorithm on presorted points matches the cache misses of sorting for the PEM model. This implies a convex hull algorithm for unsorted points that matches sorting bounds. However, in the binary-forking model and the binary fork-join model, the span of their algorithm becomes \(O(\log^{2}n)\) which is dominated by thread-spawning at each level in the \(O(\log n)\) height recursion. We improve upon this result by presenting an algorithm with \(O(\log n)\) span and \(O(n)\) work for presorted points while preserving cache-obliviousness. In terms of cache analysis for convex hulls, one of the earliest serial algorithms developed by Graham [32], when combined with a cache-oblivious sorting algorithm, could achieve \(O(n/B\log_{M}n)\) cache misses. Arge and Miltersen [6] showed that this bound is optimal for non-output sensitive convex hull algorithms in the cache-aware model, which carries over to the cache-oblivious model. 
When output sensitivity is accounted for, where \(h\) is the number of points comprising the convex hull, this bound decreases to \(O(n/B\log_{M/B}(h/B))\), as is achieved by Goodrich et al. [31] for the external memory model. In terms of parallel cache-oblivious algorithms, Sharma and Sen [40] presented a randomized CRCW algorithm that achieves expected \(O(n/B\log_{M}n)\) cache misses and expected \(O(\log n\log\log n)\) span. **Our Contributions.** In summary, we have the following results: * For finding the convex hull of a set of presorted points (by x-coordinate), we give a deterministic cache-oblivious algorithm that uses \(n^{1/6}\)-way divide and conquer, performing \(O(n)\) work in \(O(\log n)\) span and incurring \(O(n/B)\) cache misses. It is optimal across work, span, and cache complexity for the problem of finding the convex hull of presorted points in the binary-forking model. * For finding the convex hull of unsorted points, we give a deterministic cache-oblivious algorithm based on multi-way merging which performs worst-case optimal \(O(n\log n)\) work in \(O(\log n\log\log n)\) span, and achieves optimal parallel cache complexity. To the best of our knowledge, this is the first deterministic cache-efficient 2D convex hull algorithm for unsorted points in the binary-forking model, and it achieves the lowest known span. Its span outperforms the recently proposed randomized incremental convex hull algorithm for the binary-forking model as well [18]. It matches the best known bounds for span, work, and cache-efficiency for sorting, which means it matches the previous result but simply requires changing a sorting algorithm instead of implementing two complex procedures. * All our bounds hold for the binary fork-join model as well since neither algorithm uses the atomic test-and-set operation which is unique to the binary-forking model. ## 2 A Cache-Optimal \(O(\log n)\) Span Algorithm for Presorted Points We present a cache-oblivious divide-and-conquer algorithm for finding the upper convex hull of an array of presorted points. It achieves \(O(\log n)\) span with \(O(n)\) work and \(O(n/B)\) cache misses, which are optimal bounds for work, span, and cache efficiency for the convex hull problem in the binary-forking model. As is common practice, we describe a procedure which finds the "upper hull", the set of extreme points which lie above the line through the leftmost and rightmost points. Repeating this procedure on the point set rotated by 180 degrees yields the entire convex hull. We consider our algorithm to be one of several to build off of the recursive structure of Atallah and Goodrich's [8] \(O(n\log n)\) work algorithm. Atallah and Goodrich's algorithm is a \(\sqrt{n}\)-way divide-and-conquer algorithm. To perform the merges, it uses Overmars and Van Leuwen's [37] technique for finding the upper common tangent of two disjoint convex polygons to compare the \(\sqrt{n}\) subproblem solutions and determine which points from each should remain. At each layer in this recursion, \(\binom{\sqrt{n}}{2}=O(n)\) applications of this technique are used, at a cost of \(O(\log n)\) each, giving the recurrence \(T(n)=\sqrt{n}T(\sqrt{n})+O(n\log n)\) for the total work. The span of this method is \(O(\log n)\), even under the more restrictive binary-forking model where \(O(\log n)\) is optimal for any problem requiring \(\Omega(n)\) work, as spawning \(\Omega(n)\) threads incurs \(\Omega(\log n)\) span. However, here the work remains suboptimal.
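As a quick check that is not spelled out in the paper, unrolling the Atallah and Goodrich recurrence shows where the extra \(\log n\) factor comes from: at recursion depth \(i\) there are \(n^{1-2^{-i}}\) subproblems of size \(n^{2^{-i}}\), so the tangent-finding work at that depth is \[n^{1-2^{-i}}\cdot c\,n^{2^{-i}}\log\left(n^{2^{-i}}\right)=c\,2^{-i}\,n\log n,\] and summing the geometric series over all depths gives \(T(n)\leq 2c\,n\log n=O(n\log n)\), a \(\log n\) factor above the \(O(n)\) work achievable for presorted input.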
Goodrich [30] presented a method to obtain \(O(n)\) work using a tree data structure which they refer to as a "hull tree". The hull tree data structure stores the convex hull of a set of points in a binary tree that permits a simulation of the Overmars and Van Leuwen's procedure and an efficient means of "trimming off" points that have been eliminated and unifying subproblem solutions under a new tree. The issue is that using this data structure incurs many cache misses because the tree cannot guarantee any data locality. Chen [21] also uses a tree-based approach to achieve \(O(n)\) work, which encounters the same issue. The first step to ensure data locality while reducing to \(O(n)\) work is to decrease the number of subproblems so that the pairwise hull comparisons do not take \(O(n\log n)\) work. In fact, we want this work from these comparisons to total \(O(n/B)\). This can be done by choosing a different exponent. We will use \(n^{1/6}\)-way divide-and-conquer, which means that comparing the subproblem solutions will take \(O(n^{2/6}\log n)\) work, which allows us to use the tall cache assumption (\(M=\Omega(B^{2})\)) to bound the total work from these comparisons by \(O(n/B)\). Once the subproblems are compared, we can eliminate all of the points which have been determined to be within the hull by shifting all of the remaining points to the front of the array (using a prefix sum). This can be done cache-obliviously [16], incurring \(O(n/B)\) cache misses for each recursion layer. However, this comes out to be \(O(n\log\log n)\) work total. So, if we are to maintain data locality, we have to determine a way to eliminate points without having to shift everything down and incur too much work. Instead of deleting points immediately, Algorithm 1, (pseudocode in appendix) uses an array of size \(O(\frac{n}{\log n})\) which stores a set of x-coordinate intervals corresponding to the points which must be deleted. We store these intervals as a list of "starts" and "ends" which correspond to the beginning and ending of each deletion interval all sorted by x-coordinate. For an interval \(I\), consider the largest interval \(I^{\prime}\) for which \(I\subseteq I^{\prime}\). As an invariant, we will store in \(start(I)\) a pointer to \(start(I^{\prime})\) and in \(end(I)\) a pointer to \(end(I^{\prime})\). This will crucially allow us to get from a point that is within one of these intervals to the nearest point which is not within any of these deletion intervals in \(O(1)\) time, which lets us simulate the Overmars and Van Leuwen [37] technique on subproblem solutions. By primarily operating on this array of intervals, we can avoid incurring \(O(n)\) work at every layer and instead amortize to \(O(n)\) total work across all layers. **Data:**: \(A\), a subset of points that are sorted by their \(x\)-coordinate. \(n\), the number of points in \(A\). \(N\), the number of points in the original input, which is a superset of \(A\). \(D_{alloted}\), an allotted section of global array \(D\). \(n_{alloted}\), the size of alloted section \(D_{alloted}\) **Result:**: The upper hull of \(A\) **if**: \(n<\log N\)**then** solve using a serial algorithm (such as Graham scan) ; **else**: Split \(A\) into \(n^{1/6}\) subproblems of size roughly \(n^{5/6}\). Allot a contiguous segment of \(\frac{n_{alloted}-2n^{1/6}}{n^{1/6}}\) slots of \(D_{alloted}\) to each subproblem (in x-order). 
Solve these recursively; Find the common tangents between each pair of subproblems using the modified Overmars and Van Leuven technique (See Theorem 2.1). Store in a local array of tangents, \(\mathcal{T}\) ; Comparing the elements of \(\mathcal{T}\) for each subproblem, determine the local set of new dividers, \(D_{new}\). Additionally store the dividers in the \(2n^{1/6}\) slots at the end of \(D_{alloted}\) and another copy in \(D_{new}\); Merge the list of old and new dividers by x-order in \(D_{alloted}\); For all \(d\in D_{new}\), find d's copy in \(D_{alloted}\). Store in \(d\)._Rank_ the copy's rank in \(D_{alloted}\); For each (contiguous) left-right pair \((d_{l},d_{r})\subset D_{new}\), mark the dividers \(D_{alloted}[d_{l}.\textit{Rank},d_{r}.\textit{Rank}]\) by setting their _Wrapper_ to \(D_{alloted}[d_{l}.\textit{Rank}]\) if it is a _Left_ divider, \(D_{alloted}[d_{r}.\textit{Rank}]\) if _Right_ ; For each element \(d\) in \(D_{new}\), mark the points in \(A\) between its \(D_{alloted}\) copy's _Landing_ flag and the \(D_{alloted}\) copy's neighbor (\(D_{alloted}[d.\textit{Rank}+1]\) for _Left_, \(D_{alloted}[d.\textit{Rank}-1]\) for _Right_); Set the _Rank_ flag of all the copies of \(D_{new}\) dividers to 1 if _Left_ and -1 if _Right_. Then perform a prefix sum on the _Rank_ flags, this determines if a divider is a subset of another interval. For each element \(d\) in \(D_{alloted}\) that is a _Right_ divider with \(d.\textit{Rank}\) greater than 0 and \(d.\textit{Marked}\) set to false, mark the points in the adjacent interval for deletion and set \(d.\textit{Rank}\) to true. **if**: \(n=N\)**then** Use prefix sums to determine the ranks of the unmarked (extreme) points and delete the marked points; **end**: **Algorithm 1:** Cache-Optimal algorithm for Presorted Points ### Data Structure Invariants We use three data types in our arrays: **points**, **dividers**, and **tangents**. Each type is constant-sized, so the arrays have space linear in their respective numbers of elements. **Points** are based on the input, with some extra attributes assigned to each point so that if it is not an extreme point, it can be marked instead of immediately removed. In addition to \(x\)- and \(y\)-coordinates, the point contains a _Rank_ integer flag which is initialized to 1 and two pointers to dividers called _LeftPar_ and _RightPar_. _LeftPar_ and _RightPar_ are initialized to _Null_, but when they are set, they indicate that the point is within some deletion interval \(I\). _LeftPar_ points to a divider corresponding to the start of \(I\), whereas _RightPar_ points to a divider corresponding to the end of \(I\). **Dividers** represent the _boundaries_ of deletion intervals. If the divider represents the beginning of some interval \(I\), we will refer to it as \(L_{I}\). If the divider refers to the end of \(I\), then we will refer to it as \(R_{I}\). The divider contains an \(x\)-coordinate, a pointer _Landing_ that points to the point element corresponding to the start/end of the interval (but not within the interval), a pointer _Wrapper_ to a divider from an interval \(I^{\prime}\) that is a superset of \(I\) (\(L^{\prime}_{I}\) for \(L_{I}\) and \(R^{\prime}_{I}\) for \(R_{I}\)), an integer flag _Rank_ for use in prefix sums, a bit flag _Marked_ to denote whether _Landing_ has been marked, and a bit flag indicating whether the divider is a "_Left_" divider (the start of an interval) or a "_Right_" divider (the end of an interval). All of these are initialized to 0 or _Null_. 
**Tangents** represent an upper common tangent between the upper hulls of two subsections. An upper common tangent between two hulls is the line that lies above all points in each hull and which intersects the boundaries of each hull in exactly one contiguous section. This is typically expressed as a pair of extreme points, one from each hull. These are used to determine which points would be deleted if the two hulls were merged together. In our representation, each tangent simply contains 2 pointers to points, one for the point of tangency of the leftward of the two hulls, and one for the point of tangency of the rightward of the two hulls. Both pointers are set to _Null_ to start. Our presorted algorithm will utilize 4 arrays of these types: 2 permanent arrays (of points, \(A\), and of dividers \(D\)) and \(2\) temporary arrays specific to the particular recursions (of tangents, \(\mathcal{T}\), and of dividers, \(D_{new}\)). There are \(5\) invariants that are maintained with respect to \(D\) that will allow us to bound our work by \(O(n)\) and our cache misses by \(O(n/B)\). **Invariant 1**: The size of \(D\) must be \(O(\frac{n}{\log n})\). **Invariant 2**: For any interval \(I\) with dividers \(L_{I},R_{I}\in D\), and the largest subset \(I^{\prime}\) such that \(I\subseteq I^{\prime}\), \(L_{I}.\textit{Wrapper}=L_{I^{\prime}}\) and \(R_{I}.\textit{Wrapper}=R_{I^{\prime}}\). In the case of ties, take \(I^{\prime}\) to be that was most recently added. **Invariant 3**: For any two intervals \(I_{1},I_{2}\) with dividers in \(D\), either \(I_{1}\cap I_{2}=\emptyset\), \(I_{1}\subseteq I_{2}\), or \(I_{2}\subseteq I_{1}\). **Invariant 4**: For any point \(p\) which is in at least one interval, for one such interval \(I\), \(p.\textit{LeftPar}=L_{I}\) and \(p.\textit{RightPar}=R_{I}\). **Invariant 5**: For any interval \(I\) for which \(L_{I}.\textit{Wrapper}=L_{I}\) and \(R_{I}.\textit{Wrapper}=R_{I}\), then neither \(L_{I}.\textit{Landing}\) nor \(R_{I}.\textit{Landing}\) are elements of any interval. We give an example depiction of this structure in Figure 0(a). We show this in the context of paths from points that are marked for deletion (lie in some interval) to a nearest point to the left or right which is not in that interval. Assuming the invariants hold, we can claim the following theorem. **Theorem 2.1**.: _Given two convex hulls \(C_{1}\), \(C_{2}\), that are both expressed by an array of points \(A\) and an array of dividers \(D\), and that are separable by a vertical line, the upper common tangent of \(C_{1}\) and \(C_{2}\) can be computed in \(O(\log n)\) time serially._ Proof.: We assume that \(A\) and \(D\) abide by the \(5\) invariants listed above. We will simulate Overmars and Van Leuwen [37] on \(A\), making use of \(D\) in the case that we accidentally land on a point which has been marked for deletion i.e. is not in either \(C_{1}\) or \(C_{2}\). Since we can proceed as usual if there are no dividers (and hence no intervals of deletion), we will consider only the case in which there have been points marked for deletion. For any unmarked point, we will consider the _left path_ and the _right path_ in the divider structure. These paths follow down pointers from the source, so we will represent the points in the path using \(\rightarrow\) as a delimiter, indicating we are going to the point pointed to by that flag. 
These paths land on the nearest unmarked Figure 1: (a) Example of paths from marked points to points that have not been marked; (b) Simulating the Overmars and Van Leuwen [37] procedure using the divider array. Only one of the two hulls are depicted. The blue line depicts the current point of consideration and the green the previous point of consideration. point to the left and right respectively. The left path is either \(p\to LEFTPAR\to LANDING\) or, if \(p.LEFTPAR\) points to the \(L_{I}\) for an interval \(I\) which is the subset of a larger interval \(I^{\prime}\), then the path is \(p\to LEFTPAR\to WRAPPER\to LANDING\). By invariant 5, we know that this path will land on a point which has not been marked. Specifically, this point is the point which is immediately outside \(I\)'s (or \(I^{\prime}\)'s) interval on the left side. Since \(p\in I\) (\(p\in I^{\prime}\)), any point between the end of the left path and \(p\) must be marked for deletion. Therefore, the left path gives the nearest point to the left. By symmetry, the same applies for the right path, given as \(p\to RIGHTPAR\to LANDING\) or \(p\to RIGHTPAR\to WRAPPER\to LANDING\). These paths are shown in blue in Figure 0(a). Using the left and right paths, we can deal with the case of points marked for deletion. The Overmars and Van Leuwen [37] technique relies on the ability to eliminate half of the remaining candidate points (for the line of tangency, not the hull) from either \(C_{1}\), \(C_{2}\) or both at every step, which we also guarantee. This gives us the same \(O(\log n)\) time guarantee since determining the points to eliminate from consideration for the upper common tangent takes \(O(1)\) time. In the first part of the Overmars and Van Leuwen [37] technique, the point with maximum y-coordinate, \(p_{top}\), is found for each hull using a binary search. Consider \(C_{1}.\) Each step in this search determines whether \(p_{top}\) is to the left or right of the current point, \(p\), by comparing it to its predecessor and successor in \(C_{1}\). If the successor to \(p\) in the array \(A\) is marked for deletion, take the right path from this to get the true successor of \(p\). For the predecessor, take the left path. If \(p\) itself is marked for deletion, then consider both the point at the end of the left path and the point at the end of the right path. These comparisons will either lead to the candidate set contracting to either the left or right of both or we will know that the top is one of the two, since the top point cannot be in the deleted interval. In either case, at least half of the points are eliminated from consideration since \(p\) lies at the midpoint of the candidates. We show an example search in Figure 0(b). The second part of the technique is another binary search, but this one relies on both hulls. At every step, we consider the midpoint of the remaining candidates in \(C_{1}\) (and also for \(C_{2}\)). Just as before, if this point is marked, we instead consider the points at the end of the left path and the right path. The details of how to compare two points that are being considered (one from \(C_{1}\) and one from \(C_{2}\)) are given in [37]. When the comparison is done, we can eliminate all candidates to the left of or to the right of at least one of the considered points. This holds for our case even if we have to consider 2 points from \(C_{1}\) and two points from \(C_{2}\). 
Let \(l_{1},l_{2}\) be the points from following the left paths and \(r_{1},r_{2}\) be the points from following the right paths. We represent the comparisons as pairs \((l_{1},l_{2}),(l_{1},r_{2}),(r_{1},l_{2}),(r_{1},r_{2})\). Each comparison specifies that for at least one of the points, all points in its respective hull to either the left or the right cannot be a point of tangency. Whichever way the comparisons lie, one of the two hulls \(C_{i}\) will have both \(l_{i}\) and \(r_{i}\) able to eliminate points. If this is the case, then the argument from the search for \(p_{top}\) applies here. If only one of \(C_{1}\) or \(C_{2}\) has 2 points to consider from landing in an interval, without loss of generality consider \(C_{1}\) to have the 2 points of consideration. Either both points in \(C_{1}\) can eliminate candidates in the search, which gives us the same as the previous case or the point from \(C_{2}\) can eliminate half to its left or right. In either case, half of the remaining candidates from \(C_{1}\) or \(C_{2}\) have been eliminated. If both \(C_{1}\) and \(C_{2}\) have 1 point to consider, it follows directly from the original Overmars and Van Leuwen technique. Therefore, we can guarantee that only \(O(\log n)\) comparisons need to be made to find the upper common tangent between \(C_{1}\) and \(C_{2}\). ### Maintaining the invariants Now that we have a method to compare the hulls assuming the invariants hold, we will show how to merge the hulls, keeping in mind that we also must maintain the invariants. Lemma 2.2 will show how to maintain invariants 1 and 3, while Lemma 2.4 will show how to maintain invariants 2, 4, and 5. **Lemma 2.2**.: _The set of new dividers \(D_{new}\) for a given recursion layer (of size \(n\)) can be determined in \(O(\log n)\) span with \(O(n^{2/6}\log n)\) work (and cache misses)_ Proof.: We will first calculate the \(O(n^{2/6})\) common tangents, storing them in a 2D array \(\mathcal{T}\). We use the smallest and largest sloped tangents (as in Atallah and Goodrich [8]) to determine which points should be removed from each subproblem hull, or subhull. For each subhull, there will be some section starting with the leftmost point and some section starting with the rightmost point which will need to be removed, leaving some contiguous set of points in the center, or no points from this hull. If there is such an set, we use a _Right_ divider with _Landing_ set to the first (leftmost) unremovable point and a _Left_ divider with _Landing_ set to the last (rightmost) unremovable point. If there are no unremovable points from this subhull, then do not add any divider. We can omit dividers for the leftmost and rightmost points among all of these subhulls, since they will not correspond to intervals of deletion (they are extreme points). We can store these all in an array of size \(2n^{1/6}-2\), with gaps allowed for nonexisting dividers. Set the _Wrapper_ for each of these dividers to itself. At this point, \(D_{new}\cup D\) has satisfied invariants 1 and 3. 1 is satisfied since \(D\) must be of size at most \(O(\frac{n}{5/6\log n})\) and the addition of \(2n^{1/6}-2\) dividers does not affect this asymptotically. 
3 is satisfied by induction, since when \(D\) is empty, this holds since it holds for \(D_{new}\) alone, and assuming it holds for \(D\neq\emptyset\), since \(D_{new}\) does not have dividers that "land" inside previously marked points, they cannot intersect with any previous intervals corresponding to \(D\) without entirely containing such an interval. **Lemma 2.3**.: _The set of new dividers \(D_{new}\) for a given recursion layer (of size \(n\)) can be integrated into the full set of dividers \(D\) in \(O(\log n)\) span with \(O(\frac{n}{\log n})\) work and \(O(\frac{n}{B\log n})\) cache misses._ Proof.: We can integrate these using a merge routine. Specifically, we will be merging a copy of \(D_{new}\) into \(D_{alloted}\), a set of contiguous slots in \(D\) that have been allotted to this subproblem. After alloting for each of the subproblems, there are \(2n^{1/6}\) slots remaining at the end, allowing us to copy \(D_{new}\) there. Before doing so, we will compress \(D_{new}\) so they occupy contiguous positions. Only then will we copy them over to the end of \(D_{alloted}\). These both can be done using prefix sums [16], which takes \(O(n^{1/6})\) work and \(O(n^{1/6}/B)\) cache misses. These calculations will use the _Rank_ flag. **Lemma 2.4**.: _All non-extreme points can be marked for deletion with \(O(n)\) work and \(O(n/B)\) cache misses, amortized._ Proof.: We show that every cache miss incurred by the process of marking points for deletion can be charged either to the binary searches (which we show to be \(O(n/B)\) in Theorem 2.5) or to the total number of blocks representing the points. We also show that the remaining invariants hold. Now that the array \(D_{new}\) has copies of its elements integrated into \(D_{alloted}\), we can start by satisfying invariant 2. For each adjacent pair in \(D_{new}\) corresponding to an interval of deletion \(I\), conduct binary searches on \(D_{alloted}\) for their copies. Between these two copies will be a set of dividers corresponding to intervals which are subsets of \(I\). We have already marked the points inside these intervals, so we just need to mark what is left. We start with the points between the beginning of \(I\) and its first subset, and the points between the last subset and the end of \(I\). To determine these subsets, we will store the rank in \(D_{alloted}\) of the copies in the _Rank_ flags of the \(D_{new}\) elements. For the "in-between" dividers in \(D_{alloted}\) which are _Left_, set their _Wrapper_ pointers to the _Left_ element of the \(D_{new}\) pair, specifically the copy in \(D_{alloted}\). For the _Right_ divider, do the same with the _Right_ copy. This will satisfy invariant 2 by the same principle of induction as before. All that remains is to mark the points inbetween subintervals of \(I\). These points are distinguished by being between dividers which have their _Marked_ flag set to false (the default) and which are themselves between a _Left-Right_ divider pair from \(D_{new}\). We will set each point's _LeftPar_ to the left divider of the subinterval after the point and the _RightPar_ to the right divider of the subinterval before the point. This will still let the point reach to the nearest unmarked point, since the subintervals all point to the largest superset. The marking procedure is cache efficient, incurring \(O(\frac{n}{B\log n})\) cache misses at each recursion layer. 
There can be additional cache misses incurred due to the deletion marking intervals not aligning with blocks, but these misalignments can only happen at the edges of an interval, which means we can charge 2 cache misses to one of the binary searches used to compute that divider earlier. There may not be a cache miss corresponding to the binary search if the divider came from a subproblem of size less than \(M\). If this is the case, we can argue that if two different dividers corresponding to points within the same block have this occur, then unless they are subsumed by the same larger interval later, whichever intervals subsume them must not take the entire block, which means one of their endpoints lands in the block and we can amortize the accesses to the binary searches for the subsuming intervals. The searches are not cache efficient, but only incur \(O(n^{1/6}\log n)\) work, which is dominated by the binary searches used for comparing the subhulls, which are analyzed in Theorem 2.5. At this point, invariant 5 has already been satisfied. Now that the _Wrapper_ flags have been set for intervals that are subsets, we can note that, as established before, the _Landing_ flags for \(D_{new}\) elements will not land inside previously marked points. This means that the only way a divider \(d\) could have a _Landing_ which has been marked for deletion is if that landing were in a new interval, in which case the interval of \(d\), \(I\), is a subset of the interval that marked _d.Landing_, by invariant 3. All that remains is invariant 4. To maintain this, we mark the points for deletion. To ensure that we do not mark twice, we only consider intervals that were unmarked in the previous iteration, or those that are between dividers from \(D_{new}\). Consider a divider \(d\in D_{new}\). Let \(d^{\prime}\) be the divider corresponding to \(d\), so that together they represent interval \(I\). Looking at the copy of \(d\) in \(D_{alloted}\), we take the neighboring divider: \(D_{alloted}[d.\textit{Rank}+1]\) if \(d\) is a _Left_ divider and \(D_{alloted}[d.\textit{Rank}-1]\) if \(d\) is a _Right_ divider. These dividers have landing spots, between which are the points that will be marked for deletion (excluding the landing spot of a \(D_{new}\) divider). Consider \(d\) to be _Left_. Set _LeftPar_ to \(D_{alloted}[d.\textit{Rank}]\) and _RightPar_ to \(D_{alloted}[d^{\prime}.\textit{Rank}]\). Do the reverse if \(d\) is _Right_. This will satisfy invariant 4. We will mark the points in parallel, in groups of size at most \(\log n\). Since only points in "new" intervals are marked, this means that each point is marked at most once, allowing us to amortize the work of marking to \(O(n)\). We address some issues due to data alignment in Section 2.4, but note here that the marking of points is done to adjacent points. We can charge 2 cache misses from each of these to the elements of \(D_{new}\), which allows us to ignore the issues of alignment on the ends of these intervals. This allows us to consider the "middles" of these intervals, which are accessed cache-efficiently. These will amortize to \(O(n/B)\). ### Work, Span, and Cache Analysis **Theorem 2.5**.: _Algorithm 1 achieves \(O(\log n)\) span with \(O(n)\) work while incurring \(O(n/B)\) cache misses._ Proof.: First, we show span. To start, Algorithm 1 finds the convex hull of \(O(\frac{n}{\log n})\) subhulls at the lowest level of the recursion. Each individual subhull is done serially using a serial algorithm.
The Graham scan [32] is ideal since it achieves \(O(n/B)\) cache misses for presorted input. With each subhull having size at most \(\log n\), this takes \(O(\log n)\) span. From here, merges are used. The merges start with finding the new dividers \(D_{new}\), which takes \(O(\log n)\) span from Lemma 2.2. From there, integrating them into the full set of dividers takes \(O(\log n)\) span from Lemma 2.3. Finally, marking the points for deletion takes \(O(\log n)\) span due to the binary searches. Marking the points and old dividers takes no more than \(O(\log n)\) span since each point/divider can be done in parallel. This gives us the following recursion for subproblems of size \(\tilde{n}\), for some positive constant \(c_{0}\): \[T_{\infty}(\tilde{n})\leq T_{\infty}(\tilde{n}^{1/6})+c_{0}\log\tilde{n}\] For \(\tilde{n}=n\), this expands to \(c_{0}\log n+\frac{c_{0}\log n}{6}+\frac{c_{0}\log n}{6^{2}}+...\) which is clearly upper-bounded by \(2c_{0}\log n=O(\log n)\) The final step, using a prefix sum to align the unmarked points into adjacent array positions, takes only an additional \(O(\log n)\) span [16]. Figure 2: Each pair in \(D_{new}\) binary searches for its copy, marks its subsidiaries and then marks the “outermost layer” of points. For the work, the total work is dominated by the final prefix sum, \(O(n)\)[16]. The \(O(\frac{n}{\log n})\) base cases of size \(\log n\) cost \(O(n)\) work in total when Graham's algorithm is used. From Lemma 2.4, we know that marking points for deletion costs only \(O(n)\) work amortized. From here, it suffices to show that all other operations cost \(O(\frac{n}{\log n})\) work per recursion layer. The work from Lemmas 2.2 and 2.3 are dominated by \(O(\frac{n}{\log n})\), which leaves the work on the divider array leading up to point marking. Setting the old dividers' _Wrapper_ flags is linear in the size of \(D_{alloted}\), the available section of divider array. This is \(O(\frac{n}{\log n})\) where \(n\) is the size of this recursion instance. Therefore, for the recursive steps, we have the following recursion for work. For problem size \(\tilde{n}\) and some positive constant \(c_{1}\): \[T_{1}^{\prime}(\tilde{n})\leq\tilde{n}^{1/6}T_{1}^{\prime}(\tilde{n}^{5/6})+c _{1}\left(\frac{\tilde{n}}{\log\tilde{n}}\right)\] Since we accounted for the base cases earlier, we can set \(T_{1}^{\prime}(\log n)=0\), We get that total work in \(T_{1}^{\prime}(n)\) is at most \(\frac{c_{1}n\log\log n}{\log n}\) which is clearly dominated by \(O(n)\). Therefore, \(T_{1}(n)=T_{1}^{\prime}(n)+O(n)+O(n)=O(n)\) For cache misses, we know that the serial algorithm on the \(O(\log n)\) sized subproblems incurs \(O(n/B)\) cache misses. From Lemma 2.4, we know that marking the points can be amortized to \(O(n/B)\) by charging inefficiently marked points to the binary searches. The recursive layers are cache efficient with the exception of \(O(n^{2/6}\log n)\) binary searches (and other factors dominated by this). The cache efficient routines incur at most the work divided by \(B\), which is \(O(n/B)\). The binary searches incur misses until the recursion size is \(M\) or \(\log n\), whichever is greater. If \(\log n\) is greater, then simply bounding by the total work done in the recursions suffices, since for sufficiently large \(n\) and any positive constant \(c\), \(c\frac{n\log\log n}{\log n}\leq c\frac{n\log M}{M}\leq c\frac{nM^{1/2}}{M}=c \frac{n}{M^{1/2}}\). By tall cache assumption i.e. 
\(M=\Omega(B^{2})\), and thus \(\frac{1}{M^{1/2}}=O(\frac{1}{B})\), the cache misses can be bounded by \(O(\frac{cn}{B})=O(n/B)\). If \(M>\log n\), then we consider the following recurrence for subproblems of size \(\tilde{n}\): \[Q(\tilde{n})=\left\{\begin{array}{ll}O(1+\frac{\tilde{n}}{B})&\mbox{if }\tilde{n}\leq M\\ \tilde{n}^{1/6}Q(\tilde{n}^{5/6})+O(\tilde{n}^{2/6}\log\tilde{n})&\mbox{otherwise}\end{array}\right.\] Consider the following expansion of the recurrence to \(k\) terms. For some positive constant \(c_{2}\): \[Q(n)\leq n^{1/6}Q(n^{5/6})+c_{2}n^{2/6}\ln n\leq n^{1-(5/6)^{k}}Q(n^{(5/6)^{k}})+c_{2}n\ln n\sum_{i=0}^{k-1}\frac{(5/6)^{i}}{(n^{10/6-1})^{(5/6)^{i}}}\] Let \(R=n^{(5/6)^{k}}\). We know that when the size of a subproblem is less than or equal to \(M\), the subproblem is simply loaded into the cache as the algorithm runs, incurring no more than an additional \(O(n/B)\) cache misses across all subproblems of size \(\leq M\). We want \(R\) to be the size of the largest such subproblem. \[R=n^{(5/6)^{k}}\leq M<n^{(5/6)^{k-1}}\implies M<R^{\frac{6}{5}}\implies\frac{1}{R}<\frac{1}{M^{5/6}}\] Let \(S=\sum_{i=0}^{k-1}\frac{(5/6)^{i}}{(n^{10/6-1})^{(5/6)^{i}}}\). We know that if the function \(f(i)=\frac{(5/6)^{i}}{(n^{10/6-1})^{(5/6)^{i}}}\) is increasing for \(i\) from \(0\) to \(k-1\), then we can use an integral to bound \(S\). The derivative \(f^{\prime}(i)=n^{-(5/6)^{i}(10/6-1)}\ln\!\left(\frac{6}{5}\right)\left((5/6)^{2i}\ln(n^{10/6-1})-(5/6)^{i}\right)\) is positive for \(i\leq\log_{6/5}\ln n-\log_{6/5}\left(\frac{1}{10/6-1}\right)\). Therefore, in order to bound \(S\) by the integral of \(f(i)\) from \(0\) to \(k\), it must be that \(k<\log_{6/5}\ln n-\log_{6/5}\left(\frac{1}{10/6-1}\right)\). We know that \(R=n^{(5/6)^{k}}\), which implies \(k=\log_{6/5}\log_{R}(n)\). It suffices to show that \(\log_{6/5}\log_{R}(n)<\log_{6/5}\ln n-\log_{6/5}\left(\frac{1}{10/6-1}\right)\). \[\log_{6/5}\log_{R}n+\log_{6/5}\left(\frac{1}{10/6-1}\right)<\log_{6/5}\ln n\implies\ln R>\frac{1}{2(\frac{5}{6})-1}\implies R>e^{1/(10/6-1)}\approx 4.48169\] Since \(f(i)\) is increasing over this range, we can use an integral to bound \(S\): \(S=\sum_{i=0}^{k-1}\frac{(5/6)^{i}}{(n^{10/6-1})^{(5/6)^{i}}}\leq\int_{0}^{k}\frac{(5/6)^{x}}{(n^{10/6-1})^{(5/6)^{x}}}dx\). The integral solves to \(\frac{R^{1-10/6}-n^{1-10/6}}{(10/6-1)\ln(6/5)\ln n}\). We simplify this. \[S\leq\frac{R^{1-10/6}-n^{1-10/6}}{(10/6-1)\ln(6/5)\ln n}<\frac{1}{(10/6-1)\ln(6/5)}\cdot\frac{1}{M^{5/6(10/6-1)}\ln n}\] Since \(5/6(10/6-1)>1/2\) and \(M=\Omega(B^{2})\): \[S<\frac{1}{(10/6-1)\ln(6/5)}\cdot\frac{1}{\Omega(B)\ln n}=O\left(\frac{1}{B\ln n}\right)\] Since \(S=O(\frac{1}{B\ln n})\), the total number of cache misses is \(Q(n)=O(n/B)+c_{2}n\ln n\cdot O(\frac{1}{B\ln n})=O(n/B)\). ### Additional Cache Misses from Work-Stealing and False Sharing **Theorem 2.6**.: _Algorithm 1 incurs \(O(\frac{n}{B}+S\cdot\frac{M}{B})\) cache misses (including false shares) under a work-stealing scheduler, where \(S\) is the number of steals._ In the previous discussion of the number of cache misses incurred across the runtime of the algorithm, the only remaining source of cache misses is the scheduler. We use the same notion of work stealing as in Cole and Ramachandran [22], i.e., that in parallel sections, each processor places tasks into a work queue, and idle processors "steal" work from the tops of these queues to decrease runtime.
Associated with work-stealing is an additional type of cache misses, known as false sharing. False sharing occurs in private cache settings when a processor \(P_{1}\) writes to a block which had previously been stored in the cache of another processor \(P_{2}\) and then \(P_{2}\) now has to access that block. Each step in Algorithm 1 corresponds to a set of parallel tasks, with synchronization after the conclusion of each step to ensure correctness. Different tasks only share data in the binary searches, where processors may concurrently reading from the same locations, but do not incur any false shares since nothing is updated. However, false shares can result from data alignment. For cache-oblivious algorithms, there is no guarantee that subproblems will be partitioned along block boundaries. This can incur at most 1 additional cache miss per boundary due to false sharing. If we implement each step recursively, then we can ensure that these boundaries will only contribute a constant factor of additional false shares before steals. This applies to marking points and setting the WRAPPER flags of the dividers, but is redundant for the merging of dividers as Blelloch and Gibbons [17] is already implemented recursively. Steals are another source of false shares. All of our steps which write to data do so in contiguous array locations within each parallel task. Tasks are ordinarily distributed to processors such that the write locations for successive tasks will be contiguous to each other. However, this contiguity can be disrupted by a steal. Even though a thief may update a piece of memory that won't be needed by the original processor, it may be within a block that is within the original processor's private cache, incurring a false share. Any particular write can affect at most \(B-1\) other processors which are operating in the same area, meaning that false shares only contribute \(O(S\cdot B)\subset O(S\cdot\frac{M}{B})\) additional cache misses. Aside from false shares, the only additional cache misses incurred due to steals are those incurred due the stealing processors accessing data that ordinarily would have been accessed by the original processor in order. This could be as bad as a stealing processor completely reloading as many blocks as could fit into the cache, implying at worst \(O(S\cdot\frac{M}{B})\) cache misses. So in total, when using any work-stealing scheduler, there is at most \(O(\frac{n}{B}+S\cdot\frac{M}{B})\) cache misses incurred by Algorithm 1. **Corollary 2.7**.: _Algorithm 1 incurs \(O(n/B)\) cache misses w.h.p in \(p\) with the RWS scheduler when the number of processors is \(p=O(\frac{n}{M\log n})\)._ We additionally consider a randomized work-stealing scheduler, the same as is described in Cole and Ramachandran [22] and which was originally described by Blumofe and Leiserson [19]. The randomized work-stealing scheduler (RWS) is a work-stealing scheduler where "idle" processors steal the head task from a randomly selected processor. If this fails, as in the task queue selected is empty, the scheduler simply tries again until either a task is found or the parallel section is completed. Acar, Blelloch, and Blumofe [1] show that w.h.p. in \(p\), the number of steals incurred during execution using the RWS is \(S=O(p\cdot T_{\infty})\). Therefore, if \(p=O(\frac{n}{M\log n})\), then \(S=O(\frac{n}{M})\), which when applied to the total number of cache misses yields \(O(\frac{n}{B}+\frac{n}{M}\cdot\frac{M}{B})=O(n/B)\). 
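For concreteness, the serial base case invoked in Algorithm 1 (and in the proof of Theorem 2.5) can be as simple as a monotone-chain variant of Graham's scan. The sketch below is an illustrative assumption rather than the authors' code; it assumes points are \((x, y)\) tuples already sorted by \(x\) and computes the upper hull in a single left-to-right pass, which is why it performs \(O(n)\) work and incurs \(O(n/B)\) cache misses on presorted input.

```python
def cross(o, a, b):
    """Positive if the turn o -> a -> b is counter-clockwise, negative if clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    """Upper hull of points sorted by x: pop while the last turn is not clockwise."""
    hull = []
    for p in points:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

assert upper_hull([(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]) == [(0, 0), (1, 2), (3, 3), (4, 0)]
```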
## 3 A Cache- and Work-Optimal \(\Theta(\log n\log\log n)\) Span Algorithm The current state of the art for deterministic parallel sorting algorithms in the binary-forking model with regards to span is Cole and Ramachandran's [22]\(O(\log n\log\log n)\) span cache oblivious algorithm. Since the algorithm is cache optimal, costing only \(O((n/B)\log_{M}(n))\) cache misses, this means that using their algorithm to sort an instance of unsorted points, followed by our Algorithm 1 (pseudocode in appendix) will yield a work and cache-optimal algorithm with \(O(\log n\log\log n)\) span. However, considering that implementation of Cole and Ramachandran's sorting algorithm is already required for this, a simpler approach for unsorted points is to modify the internal structure of the sorting algorithm. Overall, this modification is much simpler than implementing the presorted algorithm after already implementing Cole and Ramachandran's. We propose a divide-and-conquer algorithm for unsorted points which performs \(O(n\log n)\) work in \(O(\log n\log\log n)\) span while incurring only \(O((n/B)\log_{M}n)\) cache misses. Our modifications are to account for the fact that sorting alone does not remove points which are not extreme points of the convex hull. Our augmentations preserve many key properties of Cole and Ramachandran's algorithm. Notably, ours is also cache-oblivious and cache optimal (From Arge and Miltersen [6], when output sensitivity is not a consideration). The cache complexity of our algorithm matches Sharma and Sen's [40] randomized cache oblivious convex hull algorithm but does so in the worst case rather than in expectation. Cole and Ramachandran's [22] cache-oblivious sorting algorithm is a divide-and-conquer algorithm with a relatively complicated recursive structure. It takes as input \(r\) sorted lists of elements with total size \(n\). Depending on the relative size between \(n\) and \(r\), the algorithm reorganizes the data so that elements of similar rank are grouped together into subproblems. This is accomplished by using a set of pivots which is of size approximately \(n/r^{3}\) if \(r\) is sufficiently large. These subproblems are divided further into smaller subproblems which are then merged together. At this point the entire input has been sorted because the initial pivot-based partition ensures that for any two (larger) subproblems the ranks of all elements in one are larger than those of all elements in the other. Our algorithm determines what we refer to as the "quarter-hull", and is thus iterated 4 times to produce the complete convex hull. We define the "quarter-hull" of a point set \(S\) as the counter-clockwise section of the convex hull between the point in \(S\) with the largest y-coordinate and the point in \(S\) with smallest x-coordinate. Since the convex hull is preserved under dimension-preserving linear transformations, it is clear that this algorithm can be applied to all 4 "quadrants" of the convex hull by rotation. It can also be the case that the smallest x-coordinate and largest y-coordinate points are the same point, in which case the quarter hull is only that singleton point and can be returned immediately. In case of a tie for x-coordinate, use the largest y-coordinate and vice versa. The data structure is much simpler than in the presorted algorithm. For each point, \(p\), we simply need to include an additional \((x,y)\) coordinate pair, which we will refer to as the TAIL, and two integers RANK, and HOME. 
If the TAIL lies to the left and below \(p\), this indicates that it is an extreme point, otherwise it is not. Implicitly, the TAIL defines an "active range" for \(p\) starting with it predecessor in the hull. However, the TAIL is not a pointer, it explicitly stores the coordinates of the predecessor in the hull. We will refer to \(p\) itself as the HEAD of the point. RANK will be used to perform all prefix sums and HOME will mark the "true" subset that it would be placed in if the TAIL were the same as the point itself. ``` Data:\(A\), a collection of quarter hulls, \(L_{1},L_{2},...,L_{r}\) where \(n\leq 3r^{6}\) Result: The combined quarter hull of \(A\) if\(n\leq 24\)then apply any serial convex hull algorithm and return; if\(n\leq 3r^{3}\)then\(k\gets 1,A_{1}\gets A\)else Form a sample \(S\) of every \(r^{2}\)th point in each \(L_{i}\), for a total of \(n_{i}/r^{2}\) points from each quarter hull, where \(n_{i}\) is the number of extreme points defining quarter hull \(L_{i}\); Compute ranks of elements of \(S\) using TAIL; Form a sample \(P\) of every \(2r\)th member in \(S\) by rank for a total of \(\leq n/2r^{3}\) elements; Using \(P\) as a set of x-coordinate pivots (by TAIL), partition \(A\) into \(k=|P|+1\) subsets \((A_{1},...A_{k})\). Include the points for which the x-coordinate interval between its TAIL and itself contains some overlap with the interval between pivots \(P[i-1],P[i]\) (See Figure 6); end if parallel foreach subset \(A_{i}\) of \(A\) (from \(1\) to \(k\))do Separate \(A_{i}\) into smaller subsets \(A_{ij}\) such that each contains elements from at most \(\sqrt{r}\) different lists; parallel for each \(A_{ij}\)do MultiwayMerge(\(A_{ij}\)) ; Run MultiwayMerge(\(A_{i}\)) using the sorted \(A_{ij}\) as lists; end if parallel foreach\(p\in[A_{1},A_{2},...A_{k}]\)do Mark any point for which its active range no longer overlaps with the subset it is assigned to. end if Using prefix sums on each subset, eliminate any marked points. parallel foreach\(i\in[1,k]\)do Determine if the first element of \(A_{i+1}\) invalidates the last element of \(A_{i}\). If so, set the TAIL of the first element of \(A_{i+1}\) to the second to last element of \(A_{i}\). Otherwise, set its TAIL to the last element of \(A_{i}\). end if Using prefix sums, reassign all the identified extreme points into adjacent sorted positions in an output array. ``` **Algorithm 2**Divide-and-Conquer with Multiway Merge **Theorem 3.1**.: _For \(n\) points in the plane, Algorithm 2 (Multiway Merge) finds their quarter hull in \(O(n\log n)\) work and \(O(\log n\log\log n)\) span._ Proof.: Our algorithm differs from Cole and Ramachandran's [22] in two key ways. The first is the size of the subsets \(A_{i}\), since ours includes points which may be on the other side of the pivot but their TAILs are on the correct side, there can be at most \(r\) more elements in each of our subsets than in the original sorting algorithm. This is because each list can only contribute 1 copy of a point which "belongs" in another subset since the lists are given as quarter hulls. However, since the size of their subsets is at most \(3r^{3}-r^{2}-r\) (From Lemma 2.1 of [22]) and the bound required for the recursion to hold is that the subsets be of size at most \(3r^{3}\), this will not be an issue. Ours will still work since \(3r^{3}-r^{2}\leq 3r^{3}\). This overlap also presents marginal difficulty in the implementation of the partition step. 
Within each of the \(r\) lists, it is generally the case that a particular pivot will lie between the HEAD and TAIL of a point, leading to that point being represented in more than one subset in the partition. To achieve this, Cole and Ramachandran find the element of each list which is just after the point in the ordering to determine which subsection of values needs to be copied into each bucket. We can simply buffer these ranges for each bucket and we will be sure to capture all the required points, incurring specifically the extra \(r\) for each bucket. The other difference is that we have to check the boundaries of each subset to make sure that the last point in some quarter hull \(A_{i}\) is not invalidated by \(A_{i+1}\) and vice versa. That is determined by observing the quarterhull of the boundary points and their TAILs (see Figure 2(b)). Update the TAILs of the points to be equal to the predecessor along this quarter-hull. If a point is deleted, update it such that the new first point in \(A_{i+1}\) has the new last point in \(A_{i}\) as its TAIL. We do not need to check any more than the boundaries. Since we delete from the subsets points whose intervals are outside the subset's interval induced by the pivots, we know that the only points that can have intervals which are partially outside the subset's interval are at the boundaries. We also know that every point's interval is disjoint from those of the other points except for the x-coordinate at which one point's TAIL coincides with the HEAD of another. This check takes \(O(\log r)\leq O(\log n)\) span and performs \(O(n/r^{3})\leq O(\sqrt{n})\) work, which is dominated by the work and span of the rest of the algorithm. Therefore, the work and span bounds of our algorithm are equivalent to Cole and Ramachandran's sorting algorithm [22], \(O(n\log n)\) and \(O(\log n\log\log n)\), respectively. **Theorem 3.2**.: _For n points in 2D, Algorithm 2 finds their convex hull while incurring at most \(O((n/B)\log_{M}n)\) cache misses._ Proof.: This follows directly from the Cole and Ramachandran bounds. We use the same tall cache assumption as before, that \(M=\Omega(B^{2})\). At each recursion layer, at most \(O(n/B)\) cache misses are incurred. This is trivially true for the elimination of points who are not active within their subset's ranges (steps 12 and 13 in Algorithm 2). It is also trivial for the elimination phase at the end of the execution (step 15). The additional copies of points which belong in multiple buckets also incur at most \(O(n/B)\) additional cache misses. The reads required to make these additional copies are contiguous with the other points which are being placed into that bucket, so it can be at worst an additional \(O(n/B)\) cache misses, since it would have the exact same misalignment issues as Cole and Ramachandran's. For the comparisons in Step 14, we access at most \(s/r\) memory locations, where \(s\) is the size of the sample S. Since \(s/r<s\cdot r\), then even if every memory access in the Step 14 is a cache miss, we will not exceed \(O(n/B)\) cache misses at each recursion layer. 
Hence, the recursion for Cole and Ramachandran's sorting algorithm also applies here: \(Q(n,r)=O(n/B)+\sum_{i,j}Q(n_{ij},\sqrt{r})+\sum Q(n_{i},\sqrt{r})\) which solves to \(Q(n,r)=O(\frac{n\log n}{B\log M})\) **Theorem 3.3**.: _Using any work-stealing scheduler, Algorithm 2 incurs at most \(O((n/B)\log_{M}n+S\cdot\frac{M}{B})\) cache misses where \(S\) is the number of steals._ Proof.: This is the exact same bound as is given in Cole and Ramachandran [22], so it suffices to show that at most \(O(M/B)\) cache misses are incurred when a processor steals one of the modified tasks. The sampling and partition phase (Steps 2-6 in Algorithm 2) is very similar to Cole and Ramachandran's, and incurs the same number of cache misses due to steals and false sharing asymptotically. This follows from the fact that Figure 3: (a) Showing how the pivots work in Algorithm 2; (b) Possible configurations of points on the boundaries between subsets in Algorithm 2. TAILs before the check are shown with solid lines and updated TAILs are with dotted lines. each parallel task from the modified procedure in Step 6 only makes a constant number of writes, incurring at most \(O(S\cdot B)\subset O(S\cdot\frac{M}{B})\) (due to the tall cache assumption) false shares. The decomposition into even smaller subproblems (Steps 7-10) uses the same structure as Cole and Ramachandran's sorting algorithm and thus is already know to incur an additional \(O(S\cdot\frac{M}{B})\) cache misses. All that remains is the elimination of points which extend outside the bucket they were placed in during partitioning (Steps 12-13), checking the boundaries (Step 14), and the final deletion (Step 15). Both rounds of deletions also perform only a constant number of writes, which means that only \(O(S\cdot\frac{M}{B})\) false shares can be incurred. The exact same bound applies to the boundary checks. Aside from the cache misses incurred by false shares, the cache misses incurred from performing a steal can be at most \(M/B\), since any more cache misses would be incurred without steals. This means that in total, there will be \(O(S\cdot\frac{M}{B})\) additional cache misses due to the use of a work-stealing scheduler. **Corollary 3.4**.: _Algorithm 1 incurs \(O(n/B\log_{M}(n))\) cache misses w.h.p in \(n\) with the RWS scheduler when the number of processors is \(p=O\left(\frac{n}{M\log M}\cdot\text{min }\left(\frac{1}{\log\log n},\frac{M\log B}{B^{2}} \right)\right)\)._ Proof.: This comes directly from our Theorem 3.3 and Theorem 1.2[22]. Since our algorithm matches the false sharing and cache miss bounds of Cole and Ramachandran's, their analysis directly carries over. ## 4 Conclusion We have presented two cache-oblivious algorithms for the binary-forking model which find the convex hull of a set of points in 2 dimensions. We presented the first cache-oblivious algorithm for presorted points which achieves optimal span \(O(\log n)\) with optimal work, \(O(n)\), while incurring the optimal amount of cache misses \(O(n/B)\). The current state of the art for sorting in the binary forking model achieves \(O(\log n\log\log n)\) span with \(O(n\log n)\) work and while incurring \(O(n/B\log_{M}(n))\) cache misses. Here only span is suboptimal when considering only the size of the input (as opposed to also considering the size of the output). By modifying this algorithm, we provide a simpler method to achieve these bounds for convex hulls compared to sorting and then applying any algorithm for presorted points that we are aware of. 
We believe our technique for converting merge-based sorting algorithms into convex hull algorithms can be generalized to many other merge-based sorting algorithms. Further work would entail expanding the algorithm for presorted input to the more general case of simple polygons, reducing the span of sorting in the binary-forking model, and developing a cache-oblivious algorithm whose work is optimal with respect to the size of the output as well as the input.
2307.06392
Deep learning-based Segmentation of Rabbit fetal skull with limited and sub-optimal annotations
In this paper, we propose a deep learning-based method to segment the skeletal structures in the micro-CT images of Dutch-Belted rabbit fetuses which can assist in the assessment of drug-induced skeletal abnormalities as a required study in developmental and reproductive toxicology (DART). Our strategy leverages sub-optimal segmentation labels of 22 skull bones from 26 micro-CT volumes and maps them to 250 unlabeled volumes on which a deep CNN-based segmentation model is trained. In the experiments, our model was able to achieve an average Dice Similarity Coefficient (DSC) of 0.89 across all bones on the testing set, and 14 out of the 26 skull bones reached average DSC >0.93. Our next steps are segmenting the whole body followed by developing a model to classify abnormalities.
Rajath Soans, Alexa Gleason, Tosha Shah, Corey Miller, Barbara Robinson, Kimberly Brannen, Antong Chen
2023-05-24T21:00:50Z
http://arxiv.org/abs/2307.06392v1
# Deep learning-based segmentation of rabbit fetal skull with limited and sub-optimal training labels ###### Abstract In this paper, we propose a deep learning-based method to segment the skeletal structures in the micro-CT images of Dutch-Belted rabbit fetuses which can assist in the assessment of drug-induced skeletal abnormalities as a required study in developmental and reproductive toxicology (DART). Our strategy leverages sub-optimal segmentation labels of 22 skull bones from 26 micro-CT volumes and maps them to 250 unlabeled volumes on which a deep CNN-based segmentation model is trained. In the experiments, our model was able to achieve an average Dice Similarity Coefficient (DSC) of 0.89 across all bones on the testing set, and 14 out of the 26 skull bones reached average DSC \(>\)0.93. Our next steps are segmenting the whole body followed by developing a model to classify abnormalities. U-Net, non-clinical drug safety assessment, DART, micro-CT, rabbit fetus, sub-optimal ground truth training label, sparse label map ## 1 Introduction In Developmental and Reproductive Toxicology (DART) studies, as part of the non-clinical drug safety assessment, fetuses from drug-treated pregnant rabbits are stained using Alizarin red and inspected by experts under a microscope. The procedure has gradually been replaced by micro-CT imaging (example shown in Figure 1) [1], followed by inspections on an image viewer and reporting of abnormalities at the level of each bone. Figure 1: **Dutch-belted rabbit fetus. (Left to right) Alizarin red staining; rendering of the skeletal structure from the micro-CT image; color-coded label maps; table illustrating the 22 bone segments of the skull.** Automation of such processes would require segmentation of each bone from the skeleton; however, training a segmentation model is challenged by (a) the lack of annotated data and (b) the sub-optimal quality of annotations (Tajbakhsh et al., 2020). Acquiring sufficient and accurate manual annotations on complicated skeletal structures is expensive and impractical. In our work, we leverage annotations that are poorly delineated and available only in a limited quantity. We use image registration to map these annotations to a larger dataset, which is then used to train a deep convolutional neural network (CNN) to perform automated segmentation. ## 2 Materials and Methods Micro-CT images were acquired using a GE Locus Ultra micro-CT scanner with a polystyrene holder bucket containing up to 9 rabbit fetuses in each scan. Image volumes were reconstructed with a voxel size of \(0.1\times 0.1\times 0.1\ mm^{3}\) and scaled to Hounsfield units (HU). To analyze the fetal skull, we first cropped a sub-volume of size \(320\times 320\times 250\) containing the skull region, with 250 slices in the z-direction. From a legacy set of 513 volumes segmented using a previously proposed automated segmentation pipeline (1), whose segmentation labels are sub-optimal, we inspected the labels and selected 26 volumes with relatively accurate and complete segmentations to serve as the atlases for the multi-atlas segmentation (MAS) strategy shown in Figure 2. Although the MAS strategy is effective, the execution of the registration workflow is time-consuming and can absorb a substantial amount of computing resources. Therefore, we elect to leverage the MAS strategy to create a dataset to train a U-Net segmentation model (Ronneberger et al., 2015). 
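To make the label-fusion step of the MAS workflow (Figure 2) concrete, the following is a minimal Python sketch of locally weighted voting, assuming the atlas images and their label maps have already been warped into the target space. The array names, the correlation window size, and the use of plain weighted voting are illustrative assumptions rather than the exact implementation; the paper only specifies that fusion weights are proportional to local intensity correlation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_correlation(a, b, size=9):
    """Local Pearson correlation of two volumes over a cubic window."""
    a, b = a.astype(np.float32), b.astype(np.float32)
    mu_a, mu_b = uniform_filter(a, size), uniform_filter(b, size)
    cov = uniform_filter(a * b, size) - mu_a * mu_b
    var_a = uniform_filter(a * a, size) - mu_a ** 2
    var_b = uniform_filter(b * b, size) - mu_b ** 2
    return cov / np.sqrt(np.clip(var_a * var_b, 1e-8, None))

def fuse_labels(target, warped_imgs, warped_labels, n_bones=22):
    """Weighted voting: each warped atlas votes for its label map, with a
    per-voxel weight proportional to its local intensity correlation with
    the target volume (labels 1..n_bones, 0 = background)."""
    votes = np.zeros((n_bones + 1,) + target.shape, dtype=np.float32)
    for img, lab in zip(warped_imgs, warped_labels):
        w = np.clip(local_correlation(target, img), 0.0, None)
        for k in range(n_bones + 1):
            votes[k] += w * (lab == k)
    return votes.argmax(axis=0).astype(np.uint8)
```

In the pipeline described above, maps fused in this fashion over the unlabeled volumes would then serve as the training targets for the U-Net.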
Specifically, the MAS strategy is used to obtain segmentation maps for a set of 250 un-annotated images, which is then partitioned into 220 training and 30 testing images. Segmentation maps representing just a single bone segment tend to pose difficulty in training due to their sparse nature. To overcome this challenge, we obtained the distance transform of the segmentation maps and used it to guide the model to convergence. This was realized by designing the loss function as a combination of a normalized distance regression loss (Ma et al., 2020) and the Dice Similarity Coefficient (DSC), as shown in Eq. (1). \[L=\alpha\cdot Dice\_loss+\beta\cdot\frac{1}{N|\Omega|}\sum_{\Omega}SDM(ground\_truth)\circ predicted\_map \tag{1}\] where \(SDM()\) is the function to obtain the Signed Distance Map as defined in (1), \(N\) is the normalization factor that scales both losses to the same range, \(\Omega\) is the grid on which the image is defined, \(\alpha\) and \(\beta\) are the coefficients, and \(\circ\) is the Hadamard product. Training is initialized with a higher \(\beta\) (0.8 in our experiments), and every 10 epochs it is reduced by 10% with an equal increase in \(\alpha\). Our overall pipeline is illustrated in Figure 3. Figure 2: **MAS workflow.** Registration from source to target is performed in this order: _global rigid_ \(\rightarrow\) _global non-rigid_ \(\rightarrow\) _local non-rigid_, using the ANTs suite (Avants et al., 2009). Fusion weights are proportional to local intensity correlation. Figure 3: **U-Net segmentation pipeline.** 22 models are trained, targeting one bone segment per model. ## 3 Results and Conclusion The DSC profile for U-Net segmentations on the 30 test images is shown in the left panel of Figure 4. To make an intuitive assessment, we used the U-Net based approach to regenerate segmentations on the original 26 atlases and compared them with the sub-optimal ground truth labels. Example cases are shown in the right panel of Figure 4. Figure 4: **U-Net segmentation results. (Left) DSC boxplot and (right) visualization.** U-Net predictions have a DSC \(>\)0.9 for most bones. Smaller and thinner bones, e.g., the tympanic rings, are challenging to segment, yielding low DSC. Example segmentations on the original 26 atlases (yellow: U-Net predictions (top row); red: ground truth (bottom row)) illustrate improvement over the ground truth on the atlases, showing the robustness of our MAS+U-Net based approach and its ability to overcome sub-optimal labels. Our proposed segmentation strategy is effective and can function as the initial step in identifying anomalies in rabbit fetus skull bones, followed by abnormality detection in the segmented bones. We will further explore segmentation of the whole-body skeleton, which is relatively more challenging due to a higher degree of inter-specimen variation.
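Referring back to Eq. (1), the combined loss and the coefficient schedule can be sketched as follows. This is a minimal PyTorch-style illustration; the tensor shapes, the initial value of \(\alpha\), and the normalization constant are assumptions made for illustration only.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # pred, target: (B, 1, D, H, W) tensors; pred holds probabilities in [0, 1]
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def combined_loss(pred, target, sdm, alpha, beta, norm=1.0):
    """Eq. (1): alpha * Dice loss + beta * (1/(N*|Omega|)) * sum over the grid of
    SDM(ground truth) composed element-wise with the predicted map.
    `sdm` is the precomputed signed distance map of the ground-truth label,
    and `norm` plays the role of the normalization factor N."""
    sdm_term = (sdm * pred).mean() / norm   # mean() supplies the 1/|Omega| factor
    return alpha * dice_loss(pred, target) + beta * sdm_term

# Coefficient schedule: beta starts at 0.8 and, every 10 epochs, 10% of it is
# shifted to alpha (the initial alpha below is an assumption; it is not stated).
alpha, beta = 0.2, 0.8
for epoch in range(100):
    if epoch > 0 and epoch % 10 == 0:
        shift = 0.1 * beta
        alpha, beta = alpha + shift, beta - shift
    # ... iterate over batches and call combined_loss(pred, target, sdm, alpha, beta)
```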
2302.03434
Weighted Tree Automata with Constraints
The HOM problem, which asks whether the image of a regular tree language under a given tree homomorphism is again regular, is known to be decidable [Godoy & Gim\'enez: The HOM problem is decidable. JACM 60(4), 2013]. However, the problem remains open for regular weighted tree languages. It is demonstrated that the main notion used in the unweighted setting, the tree automaton with equality and inequality constraints, can straightforwardly be generalized to the weighted setting and can represent the image of any regular weighted tree language under any nondeleting and nonerasing tree homomorphism. Several closure properties as well as decision problems are also investigated for the weighted tree languages generated by weighted tree automata with constraints.
Andreas Maletti, Andreea-Teodora Nász
2023-02-07T12:40:44Z
http://arxiv.org/abs/2302.03434v1
# Weighted Tree Automata with Constraints ###### Abstract The HOM problem, which asks whether the image of a regular tree language under a given tree homomorphism is again regular, is known to be decidable [Godoy & Gimenez: The HOM problem is decidable. JACM 60(4), 2013]. However, the problem remains open for regular weighted tree languages. It is demonstrated that the main notion used in the unweighted setting, the tree automaton with equality and inequality constraints, can straightforwardly be generalized to the weighted setting and can represent the image of any regular weighted tree language under any non-deleting and nonerasing tree homomorphism. Several closure properties as well as decision problems are also investigated for the weighted tree languages generated by weighted tree automata with constraints. Keywords: Weighted Tree Automaton, Subtree Equality Constraint, Tree Homomorphism, HOM Problem, Weighted Tree Grammar, Subtree Inequality Constraint, Closure Properties. MSC: 68Q45, 68Q42, 68Q70, 16Y60. Footnote: journal: Theory of Computing Systems. ## 1 Introduction Numerous extensions of nondeterministic finite-state string automata have been proposed in the past few decades. On the one hand, the qualitative evaluation of inputs was extended to a quantitative evaluation in the weighted automata of [23]. This development led to the fruitful study of recognizable formal power series [22], which are well-suited for representing factors such as costs, consumption of resources, or time and probabilities related to the processed input. The main algebraic structures for the weight calculations are semirings [16; 17], which offer a nice compromise between generality and efficiency of computation (due to their distributivity). On the other hand, finite-state automata have been generalized to other input structures such as infinite words [21] and trees [4]. Finite-state tree automata were introduced independently in [7; 24; 25], and they and the tree languages they generate, called regular tree languages, have been intensively studied since their inception [4]. They are successfully utilized in various applications in many diverse areas like natural language processing [18], picture generation [8], and compiler construction [28]. Indeed, several applications require the combination of the two mentioned generalizations, and a broad range of weighted tree automaton (WTA) models has been studied (see [13, Chapter 9] for an overview). It is well-known that finite-state tree automata cannot ensure that two subtrees (of potentially arbitrary size) are always equal in an accepted tree [14]. An extension proposed in [20] aims to remedy this problem and introduces a tree automaton model that can explicitly require certain subtrees to be equal or different. Such models are very useful when investigating (tree) transformation models (see [13] for an overview) that can copy subtrees (thus resulting in equal subtrees in the output), and they are the main tool used in the seminal paper [15] that proved that the HOM problem is decidable. The HOM problem was a long-standing open problem in the theory of tree languages and was only recently solved in [15]. It asks whether the image of an (effectively presented) regular tree language under a given tree homomorphism is again regular. This is not necessarily the case, as tree homomorphisms can create copies of subtrees. Indeed, removing this copying ability from the tree homomorphism, which yields a linear tree homomorphism, guarantees that the mentioned image is always regular [14]. 
In the solution to the HOM problem provided in [15] the image is first represented by a tree automaton with constraints, and then it is investigated whether this tree automaton actually generates a regular tree language. The HOM problem is also interesting in the weighted setting as it once again provides an answer whether a given homomorphic image of a regular weighted tree language can be represented efficiently. While preservation of regularity has been investigated [3; 10; 11; 12] also in the weighted setting, the decidability of the HOM problem remains wide open. With the goal of investigating this problem, we introduce weighted tree grammars with constraints (WTGc for short) in this contribution. We demonstrate that those WTGc can again represent all (nondeleting and nonerasing) homomorphic images of the regular weighted tree languages. Thus, in principle, it only remains to provide a decision procedure for determining whether a given WTGc generates a regular weighted tree language. We approach this task by providing some common closure properties following essentially the steps also taken in [15]. For zero-sum free semirings we can also show that decidability of support emptiness and finiteness are directly inherited from the unweighted case [15]. The present work is a revised and extended version of [29] presented at the 26th Int. Conf. Developments in Language Theory (DLT 2022). We provide additional proof details and examples, as well as a new pumping lemma for the class of (non-deleting and nonerasing) homomorphic images of regular weighted tree languages. We utilize this pumping lemma to show that for any zero-sum free semiring, the class of homomorphic images of regular weighted tree languages is properly contained in the class of weighted tree languages generated by all positive WTGc, which are WTGc that utilize only equality constraints. ## 2 Preliminaries We denote the set of nonnegative integers by \(\mathbb{N}\), and we let \([k]=\{i\in\mathbb{N}\mid 1\leq i\leq k\}\) for every \(k\in\mathbb{N}\). For all sets \(T\) and \(Z\) let \(T^{Z}\) be the set of all mappings \(\varphi\colon Z\to T\), and correspondingly we sometimes write \(\varphi_{z}\) instead of \(\varphi(z)\) for every \(\varphi\in T^{Z}\). The inverse image \(\varphi^{-1}(S)\) of \(\varphi\) for a subset \(S\subseteq T\) is \(\varphi^{-1}(S)=\{z\in Z\mid\varphi(z)\in S\}\), and we write \(\varphi^{-1}(t)\) instead of \(\varphi^{-1}(\{t\})\) for every \(t\in T\). The _range_ of \(\varphi\) is \[\operatorname{ran}(\varphi)=\left\{\varphi(z)\mid z\in Z\right\}\enspace.\] Finally, the cardinality of \(Z\) is denoted by \(|Z|\). A _ranked alphabet_\((\Sigma,\operatorname{rk})\) is a pair consisting of a finite set \(\Sigma\) and a map \(\operatorname{rk}\in\mathbb{N}^{\Sigma}\) that assigns a rank to each symbol of \(\Sigma\). If there is no risk of confusion, we denote a ranked alphabet \((\Sigma,\operatorname{rk})\) by \(\Sigma\). We write \(\sigma^{(k)}\) to indicate that \(\operatorname{rk}(\sigma)=k\). Moreover, for every \(k\in\mathbb{N}\) we let \(\Sigma_{k}=\operatorname{rk}^{-1}(k)\). Let \(X=\{x_{i}\mid i\in\mathbb{N}\}\) be a countable set of (formal) variables. For each \(k\in\mathbb{N}\) we let \(X_{k}=\left\{x_{i}\mid i\in[k]\right\}\). 
Given a ranked alphabet \(\Sigma\) and a set \(Z\), the set \(T_{\Sigma}(Z)\) of \(\Sigma\)_-trees indexed by \(Z\)_ is the smallest set such that \(Z\subseteq T_{\Sigma}(Z)\) and \(\sigma(t_{1},\ldots,t_{k})\in T_{\Sigma}(Z)\) for every \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\). We abbreviate \(T_{\Sigma}(\emptyset)\) simply to \(T_{\Sigma}\), and any subset \(L\subseteq T_{\Sigma}\) is called a _tree language_. Let \(\Sigma\) be a ranked alphabet, \(Z\) a set, and \(t\in T_{\Sigma}(Z)\). The set \(\operatorname{pos}(t)\) of _positions of \(t\)_ is inductively defined by \(\operatorname{pos}(z)=\{\epsilon\}\) for all \(z\in Z\) and by \[\operatorname{pos}\bigl{(}\sigma(t_{1},\ldots,t_{k})\bigr{)}=\bigl{\{} \epsilon\bigr{\}}\cup\bigcup_{i\in[k]}\left\{iw\mid w\in\operatorname{pos}(t_ {i})\right\}\] for all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\). The size \(|t|\) of \(t\) is defined as \(|t|=|\operatorname{pos}(t)|\), and its height \(\operatorname{ht}(t)\) is \(\operatorname{ht}(t)=\max_{w\in\operatorname{pos}(t)}|w|\). For \(w\in\operatorname{pos}(t)\) and \(t^{\prime}\in T_{\Sigma}(Z)\), the _label_\(t(w)\) of \(t\) at \(w\), the _subtree_\(t|_{w}\) of \(t\) at \(w\), and the _substitution_\(t[t^{\prime}]_{w}\) of \(t^{\prime}\) into \(t\) at \(w\) are defined by \(z(\epsilon)=z|_{\epsilon}=z\) and \(z[t^{\prime}]_{\epsilon}=t^{\prime}\) for all \(z\in Z\) and for \(t=\sigma(t_{1},\ldots,t_{k})\) by \(t(\epsilon)=\sigma\), \(t(iw^{\prime})=t_{i}(w^{\prime})\), \(t|_{\epsilon}=t\), \(t|_{iw^{\prime}}=t_{i}|_{w^{\prime}}\), \(t[t^{\prime}]_{\epsilon}=t^{\prime}\), and \[t[t^{\prime}]_{iw^{\prime}}=\sigma\bigl{(}t_{1},\ldots,t_{i-1},t_{i}[t^{\prime }]_{w^{\prime}},t_{i+1},\ldots,t_{k}\bigr{)}\] for all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\), \(i\in[k]\), and \(w^{\prime}\in\operatorname{pos}(t_{i})\). For all \(S\subseteq\Sigma\cup Z\), we let \(\operatorname{pos}_{S}(t)=\left\{w\in\operatorname{pos}(t)\mid t(w)\in S\right\}\) and \(\operatorname{var}(t)=\left\{x\in X\mid\operatorname{pos}_{x}(t)\neq\emptyset\right\}\). For a single \(\sigma\in\Sigma\cup Z\) we abbreviate \(\operatorname{pos}_{\{\sigma\}}(t)\) simply by \(\operatorname{pos}_{\sigma}(t)\). The yield mapping \(\operatorname{yield}\colon T_{\Sigma}(Z)\to Z^{*}\) is recursively defined by \[\operatorname{yield}\bigl{(}z\bigr{)}=z\qquad\text{and}\qquad\operatorname{ yield}\bigl{(}\sigma(t_{1},\ldots,t_{k})\bigr{)}=\operatorname{yield}(t_{1})\cdots \operatorname{yield}(t_{k})\] for every \(z\in Z\), \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and trees \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\). A tree \(t\in T_{\Sigma}(Z)\) is called _context_ if \(|\operatorname{pos}_{z}(t)|=1\) for every \(z\in Z\). We write \(C_{\Sigma}(Z)\) for the set of such contexts and \(\widehat{C}_{\Sigma}(X_{k})=\left\{c\in C_{\Sigma}(X_{k})\mid\operatorname{ yield}(c)=x_{1}\cdots x_{k}\right\}\). Finally, for every \(t\in T_{\Sigma}(Z)\) finite \(V\subseteq Z\), and \(\theta\in T_{\Sigma}(Z)^{V}\), the substitution \(\theta\) applied to \(t\) is written as \(t\theta\) and defined by \(v\theta=\theta_{v}\) for every \(v\in V\), \(z\theta=z\) for every \(z\in Z\setminus V\), and \[\sigma(t_{1},\ldots,t_{k})\theta=\sigma(t_{1}\theta,\ldots,t_{k}\theta)\] for all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\). 
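For readers who prefer an executable rendering of these definitions, the following minimal Python sketch mirrors the notions of positions, subtrees, and substitution at a position just introduced. The nested-tuple representation \((\sigma,t_{1},\ldots,t_{k})\) of a tree and the encoding of positions as digit strings (which restricts ranks to at most 9) are illustrative choices, not part of the formal development.

```python
# A tree sigma(t1, ..., tk) is the nested tuple ("sigma", t1, ..., tk).
def positions(t):
    """pos(t): all positions of t as digit strings, "" being the root epsilon."""
    return [""] + [str(i) + w
                   for i, sub in enumerate(t[1:], start=1)
                   for w in positions(sub)]

def subtree(t, w):
    """t|_w: the subtree of t at position w."""
    return t if w == "" else subtree(t[int(w[0])], w[1:])

def substitute(t, w, s):
    """t[s]_w: the tree obtained from t by replacing the subtree at w with s."""
    if w == "":
        return s
    i = int(w[0])
    return t[:i] + (substitute(t[i], w[1:], s),) + t[i + 1:]

t = ("sigma", ("gamma", ("alpha",)), ("alpha",))       # sigma(gamma(alpha), alpha)
assert positions(t) == ["", "1", "11", "2"]
assert subtree(t, "11") == ("alpha",)
assert substitute(t, "2", ("gamma", ("alpha",))) == \
       ("sigma", ("gamma", ("alpha",)), ("gamma", ("alpha",)))
```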
We also write the substitution \(\theta\in T_{\Sigma}(Z)^{V}\) as \([v_{1}\leftarrow\theta_{v_{1}},\ldots,v_{n}\leftarrow\theta_{v_{n}}]\) if \(V=\{v_{1},\ldots,v_{n}\}\). Finally, we abbreviate it further to just \([\theta_{v_{1}},\ldots,\theta_{v_{n}}]\) if \(V=X_{n}\). A _commutative semiring_[17; 16] is a tuple \((\mathbb{S},+,\cdot,0,1)\) such that \((\mathbb{S},+,0)\) and \((\mathbb{S},\cdot,1)\) are commutative monoids, \(\cdot\) distributes over \(+\), and \(0\cdot s=0\) for all \(s\in\mathbb{S}\). Examples include (i) the Boolean semiring \(\mathbb{B}=\big{(}\{0,1\},\vee,\wedge,0,1\big{)}\), (ii) the semiring \(\mathbb{N}=\big{(}\mathbb{N},+,\cdot,0,1\big{)}\), (iii) the tropical semiring \(\mathbb{T}=\big{(}\mathbb{N}\cup\{\infty\},\min,+,\infty,0\big{)}\), and (iv) the arctic semiring \(\mathbb{A}=\big{(}\mathbb{N}\cup\{-\infty\},\max,+,-\infty,0\big{)}\). Given two semirings \[(\mathbb{S},+,\cdot,0,1)\qquad\text{and}\qquad(\mathbb{T},\oplus,\odot, \bot,\top)\enspace,\] a _semiring homomorphism_ is a mapping \(h\in\mathbb{T}^{\mathbb{S}}\) such that \(h(0)=\bot\), \(h(1)=\top\), and \(h(s_{1}+s_{2})=h(s_{1})\oplus h(s_{2})\) as well as \(h(s_{1}\cdot s_{2})=h(s_{1})\odot h(s_{2})\) for all \(s_{1},s_{2}\in\mathbb{S}\). When there is no risk of confusion, we refer to a semiring \((\mathbb{S},+,\cdot,0,1)\) simply by its carrier set \(\mathbb{S}\). A semiring \(\mathbb{S}\) is a _ring_ if there exists \(-1\in\mathbb{S}\) such that \(-1+1=0\). Let \(\Sigma\) be a ranked alphabet. Any mapping \(A\in\mathbb{S}^{T_{\Sigma}}\) is called a _weighted tree language_ over \(\mathbb{S}\), and its support is \(\operatorname{supp}(A)=\{t\in T_{\Sigma}\mid A_{t}\neq 0\}\). Let \(\Sigma\) and \(\Delta\) be ranked alphabets and \(h^{\prime}\in T_{\Delta}(X)^{\Sigma}\) a map such that \(h^{\prime}_{\sigma}\in T_{\Delta}(X_{k})\) for all \(k\in\mathbb{N}\) and \(\sigma\in\Sigma_{k}\). We extend \(h^{\prime}\) to \(h\in T_{\Delta}^{T_{\Sigma}}\) by (i) \(h(\alpha)=h^{\prime}_{\alpha}\in T_{\Delta}(X_{0})=T_{\Delta}\) for all \(\alpha\in\Sigma_{0}\) and (ii) \(h\big{(}\sigma(t_{1},\ldots,t_{k})\big{)}=h^{\prime}_{\sigma}\big{[}h(t_{1}), \ldots,h(t_{k})\big{]}\) for all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}\). The mapping \(h\) is called the _tree homomorphism induced by \(h^{\prime}\)_, and we identify \(h^{\prime}\) and its induced tree homomorphism \(h\). It is _nonerasing_ if \(h^{\prime}_{\sigma}\notin X\) for all \(k\in\mathbb{N}\) and \(\sigma\in\Sigma_{k}\), and it is _nondeleting_ if \(\operatorname{var}(h^{\prime}_{\sigma})=X_{k}\) for all \(k\in\mathbb{N}\) and \(\sigma\in\Sigma_{k}\). Let \(h\in T_{\Delta}^{T_{\Sigma}}\) be a nonerasing and nondeleting homomorphism. Then \(h\) is _input finitary_; i.e., the set \(h^{-1}(u)\) is finite for every \(u\in T_{\Delta}\) because \(|t|\leq|u|\) for each \(t\in h^{-1}(u)\). Additionally, let \(A\in\mathbb{S}^{T_{\Sigma}}\) be a weighted tree language. We define the weighted tree language \(h(A)\in\mathbb{S}^{T_{\Delta}}\) for every \(u\in T_{\Delta}\) by \(h(A)_{u}=\sum_{t\in h^{-1}(u)}A_{t}\). ## 3 Weighted Tree Grammars with Constraints Let us start with the formal definition of our weighted tree grammars. They are a weighted variant of the tree automata with equality and inequality constraints originally introduced in [1; 5]. 
Compared to [1; 5] our model is slightly more expressive as we allow arbitrary constraints, whereas constraints were restricted to subtrees occurring in the productions in [1; 5]. This more restricted version will be called classic in the following. An overview of further developments for these automata can be found in [26]. We essentially use the version recently utilized to solve the HOM problem [15, Definition 4.1]. For the rest of this section, let \((\mathbb{S},+,\cdot,0,1)\) be a commutative semiring. Definition 1 (see [15, Definition 4.1]): A _weighted tree grammar with constraints_ (WTGc) is a tuple \(G=(Q,\Sigma,F,P,\operatorname{wt})\) such that * \(Q\) is a finite set of nonterminals and \(F\in\mathbb{S}^{Q}\) assigns final weights, * \(\Sigma\) is a ranked alphabet of input symbols, * \(P\) is a finite set of productions of the form \((\ell,q,E,I)\), where \(\ell\in T_{\Sigma}(Q)\setminus Q\), \(q\in Q\), and \(E,I\subseteq\mathbb{N}^{*}\times\mathbb{N}^{*}\) are finite sets, and * \(\operatorname{wt}\in\mathbb{S}^{P}\) assigns a weight to each production. In the following, let \(G=(Q,\Sigma,F,P,\operatorname{wt})\) be a WTGc. The components of a production \(p=(\ell,q,E,I)\in P\) are the left-hand side \(\ell\), the target nonterminal \(q\), the set \(E\) of equality constraints, and the set \(I\) of inequality constraints. Correspondingly, the production \(p\) is also written \(\ell\xrightarrow{E,J}q\) or even \(\ell\xrightarrow{E,J}_{\operatorname{wt}_{p}}q\) if we want to indicate its weight. Additionally, we simply list an equality constraint \((v,v^{\prime})\in E\) as \(v=v^{\prime}\) and an inequality constraint \((v,v^{\prime})\in I\) as \(v\neq v^{\prime}\). A production \(\ell\xrightarrow{E,J}q\in P\) is _normalized_ if \(\ell=\sigma(q_{1},\ldots,q_{k})\) for some \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(q_{1},\ldots,q_{k}\in Q\). It is _positive_ if \(I=\emptyset\); i.e., it has no inequality constraints, and it is _unconstrained_ if \(E=\emptyset=I\); i.e., the production has no constraints at all. Instead of \(\ell\xrightarrow{\emptyset,\emptyset}q\) we also write just \(\ell\to q\). The production is _classic_ if \(\{v,v^{\prime}\}\subseteq\operatorname{pos}_{Q}(\ell)\) for all constraints \((v,v^{\prime})\in E\cup I\). In other words, in a classic production the constraints can only refer to nonterminal-labeled subtrees of the left-hand side. The WTGc \(G\) is a _weighted tree automaton with constraints_ (WTAc) if all productions \(p\in P\) are normalized, and it is a _weighted tree grammar_ (WTG) [14] if all productions \(p\in P\) are unconstrained. If \(G\) is both a WTAc as well as a WTG, then it is a _weighted tree automaton_ (WTA) [14]. All these devices have _Boolean final weights_ if \(F\in\{0,1\}^{Q}\), they are _positive_ if every \(p\in P\) is positive, and they are _classic_ if every production \(p\in P\) is classic. Finally, if we utilize the Boolean semiring \(\mathbb{B}\), then we reobtain the unweighted versions and omit the 'W' in the abbreviations and the mapping 'wt' from the tuple. The semantics for our WTGc \(G\) is a slightly non-standard _derivation semantics_ when compared to (15, Definitions 4.3 & 4.4). Let \((v,v^{\prime})\in\mathbb{N}^{*}\times\mathbb{N}^{*}\) and \(t\in T_{\Sigma}\). If \(v,v^{\prime}\in\operatorname{pos}(t)\) and \(t|_{v}=t|_{v^{\prime}}\), we say that \(t\) satisfies \((v,v^{\prime})\), otherwise \(t\) dissatisfies \((v,v^{\prime})\). 
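Continuing the illustrative tuple representation from the earlier sketch (the positions and subtree helpers are reused), checking a single constraint on a tree amounts to comparing two subtrees; the tree below is an assumption chosen only to exercise both outcomes.

```python
def satisfies(t, v, vp):
    """t satisfies the constraint (v, v') iff both are positions of t
    and the subtrees rooted there coincide."""
    ps = positions(t)
    return v in ps and vp in ps and subtree(t, v) == subtree(t, vp)

# t = sigma(gamma(gamma(alpha)), gamma(alpha))
t = ("sigma", ("gamma", ("gamma", ("alpha",))), ("gamma", ("alpha",)))
assert satisfies(t, "11", "2")        # t|_11 = t|_2 = gamma(alpha)
assert not satisfies(t, "1", "2")     # t|_1 = gamma(gamma(alpha)) differs from t|_2
```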
Let now \(C\subseteq\mathbb{N}^{*}\times\mathbb{N}^{*}\) be a finite set of constraints. We write \(t\models C\) if \(t\) satisfies all \((v,v^{\prime})\in C\), and \(t\mathrel{\mathop{\kern 0.0pt\not\subseteq}\limits}C\) if \(t\) dissatisfies all \((v,v^{\prime})\in C\). Universally dissatisfying \(C\) is generally stronger than simply not satisfying \(C\). Definition 2: A _sentential form (for \(G\))_ is simply a tree of \(\xi\in T_{\Sigma}(Q)\). Given an input tree \(t\in T_{\Sigma}\), sentential forms \(\xi,\zeta\in T_{\Sigma}(Q)\), a production \(p=\ell\xrightarrow{E,J}q\in P\), and a position \(w\in\operatorname{pos}(\xi)\), we write \(\xi\Rightarrow^{p,w}_{G,J}\zeta\) if \(\xi|_{w}=\ell\), \(\zeta=\xi[q]_{w}\), and the constraints \(E\) and \(I\) are fulfilled on \(t|_{w}\); i.e., \(t|_{w}\models E\) and \(t|_{w}\mathrel{\mathop{\kern 0.0pt\not\subseteq}\limits}I\). A sequence \[d=(p_{1},w_{1})\cdots(p_{n},w_{n})\in(P\times\mathbb{N}^{*})^{*}\] is a _derivation of \(G\) for \(t\)_ if there exist \(\xi_{1},\ldots,\xi_{n}\in T_{\Sigma}(Q)\) such that \[t\Rightarrow^{p_{1},w_{1}}_{G,t}\xi_{1}\Rightarrow^{p_{2},w_{2}}_{G,t}\cdots \Rightarrow^{p_{n},w_{n}}_{G,t}\xi_{n}\enspace.\] It is _left-most_ if additionally \(w_{1}\prec w_{2}\prec\cdots\prec w_{n}\), where \(\preceq\) is the lexicographic order on \(\mathbb{N}^{*}\) in which prefixes are larger, so \(\varepsilon\) is the largest element. Note that the sentential forms \(\xi_{1},\ldots,\xi_{n}\) are uniquely determined if they exist, and for any derivation \(d\) for \(t\) there exists a unique permutation of \(d\) that is a left-most derivation for \(t\). The derivation \(d\) is _complete_ if \(\xi_{n}\in Q\), and in that case it is also called a derivation to \(\xi_{n}\). The set of all complete left-most derivations for \(t\) to \(q\in Q\) is denoted by \(D_{G}^{q}(t)\). The WTGc \(G\) is _unambiguous_ if \(\sum_{q\in\operatorname{supp}(F)}\lvert D_{G}^{q}(t)\rvert\leq 1\) for every \(t\in T_{\Sigma}\). Let \(p=\ell\stackrel{{ E,J}}{{\longrightarrow}}q\in P\) be a production. Since there exist unique \(k=\lvert\operatorname{pos}_{Q}(\ell)\rvert\), \(c\in\widehat{C}_{\Sigma}(X_{k})\), and \(q_{1},\ldots,q_{k}\in Q\) such that \(\ell=c[q_{1},\ldots,q_{k}]\), we also simply write \[c[q_{1},\ldots,q_{k}]\stackrel{{ E,J}}{{\longrightarrow}}q\] instead of \(p\). Using this notation, we can present a recursion for the set \(D_{G}^{q}(t)\) of complete derivations for \(t\in T_{\Sigma}\) to \(q\in Q\). \[D_{G}^{q}(t)=\Big{\{}d_{1}\cdots d_{k}(p,\epsilon)\ \Big{|} \ k\in\mathbb{N},\,p=c[q_{1},\ldots,q_{k}]\stackrel{{ E,J}}{{ \longrightarrow}}q\in P,\,t\models E,\,t\stackrel{{\vee}}{{ \bigvee}}I\] \[t_{1},\ldots,t_{k}\in T_{\Sigma},t=c[t_{1},\ldots,t_{k}],\, \forall i\in[k]\colon d_{i}\in D_{G}^{q_{i}}(t_{i})\Big{\}}\] Specifically, let \(d=(p_{1},w_{1})\cdots(p_{n},w_{n})\) be a complete derivation for some tree \(t\in T_{\Sigma}\). For a given position \(w\in\{w_{1},\ldots,w_{n}\}\), we let \(k\in\mathbb{N}\) and \(1\leq i_{1}<\cdots<i_{k}\leq n\) be the indices such that \(\big{\{}i_{1},\ldots,i_{k}\big{\}}=\big{\{}i\in[n]\mid w_{i}=ww_{i}^{\prime} \big{\}}\); i.e., the indices of the derivation steps applied to positions below \(w\) with \(w_{i}^{\prime}\) being the suffix of \(w_{i}\) following the prefix \(w\) for all \(i\in\{i_{1},\ldots,i_{k}\}\). The _derivation for \(t|_{w}\) incorporated in \(d\)_ is the derivation \((p_{i_{1}},w_{i_{1}}^{\prime}),\ldots,(p_{i_{k}},w_{i_{k}}^{\prime})\). 
Conversely, for every \(w\in\mathbb{N}^{*}\) we abbreviate the derivation \((p_{1},ww_{1})\cdots(p_{n},ww_{n})\) by simply \(wd\). Definition 3: The _weight_ of a derivation \(d=(p_{1},w_{1})\cdots(p_{n},w_{n})\) is defined to be \[\operatorname{wt}_{G}(d)=\prod_{i=1}^{n}\operatorname{wt}(p_{i})\enspace.\] The weighted tree language generated by \(G\), written simply \(G\in\mathbb{S}^{T_{\Sigma}}\), is defined for every \(t\in T_{\Sigma}\) by \[G_{t}=\sum_{q\in Q,d\in D_{G}^{q}(t)}F_{q}\cdot\operatorname{wt}_{G}(d)\enspace.\] Two WTGc are _equivalent_ if they generate the same weighted tree language. Finally, a weighted tree language is * _regular_ if it is generated by some WTG, * _positive constraint-regular_ if it is generated by some positive WTGc, * _classic constraint-regular_ if it is generated by some classic WTGc, and * _constraint-regular_ if it is generated by some WTGc. Since the weights of productions are multiplied, we can assume without loss of generality that \(\operatorname{wt}_{p}\neq 0\) for all \(p\in P\). Example 1: Consider the WTGc \(G=(Q,\Sigma,F,P,\mathrm{wt})\) over the arctic semiring \(\mathbb{A}\) with nonterminals \(Q=\{q,q^{\prime}\}\), \(\Sigma=\{\alpha^{(0)},\gamma^{(1)},\sigma^{(2)}\}\), \(F_{q}=-\infty\), \(F_{q^{\prime}}=0\), and \(P\) and 'wt' given by the productions \(p_{1}=\alpha\rightarrow_{0}q\), \(p_{2}=\gamma(q)\rightarrow_{1}q\), and \(p_{3}=\sigma\big{(}\gamma(q),q\big{)}\stackrel{{ 11=2}}{{ \longrightarrow}}1\ q^{\prime}\). Clearly, \(G\) is positive and classic, but not a WTAc. The tree \(t=\sigma\big{(}\gamma(\gamma(\alpha)),\gamma(\alpha)\big{)}\) has the unique left-most derivation \[d=(p_{1},111)\,(p_{2},11)\,(p_{1},21)\,(p_{2},2)\,(p_{3},\varepsilon)\] to the nonterminal \(q^{\prime}\), which is illustrated in Figure 1. Overall, we have \[\mathrm{supp}(G)=\big{\{}\sigma\big{(}\gamma^{i+1}(\alpha),\gamma^{i}(\alpha) \big{)}\mid i\in\mathbb{N}\big{\}}\] and \(G_{t}=|\mathrm{pos}_{\gamma}(t)|\) for every \(t\in\mathrm{supp}(G)\), where \(\gamma^{i}(t)\) abbreviates \(\gamma(\cdots\gamma(t)\cdots)\) containing \(i\)-times the unary symbol \(\gamma\) atop \(t\). Next, we introduce another semantics, called initial algebra semantics, which is based on the presented recursive presentation of derivations and often more convenient in proofs. Definition 4: For every nonterminal \(q\in Q\) we recursively define the map \(\mathrm{wt}^{q}_{G}\in\mathbb{S}^{T_{\Sigma}}\) such that for every \(t\in T_{\Sigma}\) by \[\mathrm{wt}^{q}_{G}(t)=\sum_{\begin{subarray}{c}p=c\{q_{1},\ldots,q_{k}\} \subseteq E,t_{1},\ldots,t_{k}\in T_{\Sigma}\\ t=c\{t_{1},\ldots,t_{k}\}\\ t=E,t\stackrel{{\cong}}{{\prod}}\end{subarray}}\mathrm{wt}_{p} \cdot\prod_{i=1}^{k}\mathrm{wt}^{q_{i}}_{G}(t_{i})\enspace. \tag{1}\] It is a routine matter to verify that \(\mathrm{wt}^{q}_{G}(t)=\sum_{d\in D^{q}_{G}(t)}\mathrm{wt}_{G}(d)\) for every \(q\in Q\) and \(t\in T_{\Sigma}\). This utilizes the presented recursive decomposition of complete derivations as well as distributivity of the semiring \(\mathbb{S}\). As for WTG and WTA [13], also every (positive) WTGc can be turned into an equivalent (positive) WTAc at the expense of additional nonterminals by decomposing the left-hand sides. Lemma 1 (cf. [15, Lemma 4.8]): _WTGc and WTAc are equally expressive. This also applies to positive WTGc._ Figure 1: Illustration of the derivation mentioned in Example 1. 
Proof: Let \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be a WTGc with a non-normalized production \[p=\sigma(\ell_{1},\ldots,\ell_{k})\stackrel{{ E,J}}{{ \longrightarrow}}q\in P\enspace,\] let \(U\supseteq Q\) and let \(\varphi\in U^{T_{\Sigma}(Q)}\) be an injective map such that \(\varphi_{q}=q\) for all \(q\in Q\). We define the WTGc \(G^{\prime}=(Q^{\prime},\Sigma,F^{\prime},P^{\prime},\mathrm{wt}^{\prime})\) such that \(Q^{\prime}=Q\cup\{\varphi_{\ell_{1}},\ldots,\varphi_{\ell_{k}}\}\), \(F^{\prime}_{q}=F_{q}\) for all \(q\in Q\) and \(F^{\prime}_{q^{\prime}}=0\) for all \(q^{\prime}\in Q^{\prime}\setminus Q\), and \[P^{\prime}=\left(P\setminus\{p\}\right)\cup\left\{\sigma(\varphi_{\ell_{1}}, \ldots,\varphi_{\ell_{k}})\stackrel{{ E,J}}{{\longrightarrow}}q \right\}\cup\left\{\ell_{i}\rightarrow\varphi_{\ell_{i}}\mid i\in[k],\ell_{i }\notin Q\right\}\enspace,\] and for every \(p^{\prime}\in P^{\prime}\) \[\mathrm{wt}^{\prime}_{p^{\prime}}=\begin{cases}\mathrm{wt}_{p^{\prime}}& \text{if }p^{\prime}\in P\setminus\{p\}\\ \mathrm{wt}_{p}&\text{if }p^{\prime}=\sigma(\varphi_{\ell_{1}},\ldots,\varphi_{ \ell_{k}})\stackrel{{ E,J}}{{\longrightarrow}}q\\ 1&\text{otherwise}.\end{cases}\] To prove that \(G^{\prime}\) is equivalent to \(G\) we observe that for every left-most derivation \[d=(p_{1},w_{1})\cdots(p_{n},w_{n})\] of \(G\), there exists a corresponding derivation \(d^{\prime}\) of \(G^{\prime}\), which is obtained by replacing each derivation step \((p_{a},w_{a})\) with \(p_{a}=p\) by the sequence \[(\ell_{i}\rightarrow\varphi_{\ell_{i}},w_{a}i)_{i\in[k],\ell_{i}\notin Q} \big{(}\sigma(\varphi_{\ell_{1}},\ldots,\varphi_{\ell_{k}})\stackrel{{ E,J}}{{ \longrightarrow}}q,w_{a}\big{)}\] of derivation steps of \(G^{\prime}\) (yielding also a unique corresponding left-most derivation). This replacement preserves the weight of the derivation. Vice versa any left-most derivation of \(G^{\prime}\) that utilizes the production \(\sigma(\varphi_{\ell_{1}},\ldots,\varphi_{\ell_{k}})\stackrel{{ E,J}}{{ \longrightarrow}}q\in P^{\prime}\) at \(w\) needs to previously utilize the productions \(\ell_{i}\rightarrow\varphi_{\ell_{i}}\in P^{\prime}\) at \(wi\) for all \(i\in[k]\) with \(\ell_{i}\notin Q\) since these are the only productions that generate the nonterminal \(\varphi_{\ell_{i}}\). Thus, we established a weight-preserving bijection between the left-most derivations of \(G\) and \(G^{\prime}\), so it is obvious that \(G^{\prime}=G\). Repeated application of the normalization eventually (after finitely many steps) yields an equivalent WTAc. Finally, we note that the constructed WTAc is positive if the original WTGc is positive. As we will see in the next example, the construction used in the proof of Lemma 1 does not preserve the classic property. Example 2: Consider the classic and positive WTGc \(G\) of Example 1 and its non-normalized production \(p=\sigma\big{(}\gamma(q),q\big{)}\stackrel{{ 11=2}}{{ \longrightarrow}}q^{\prime}\). Applying the construction in the proof of Lemma 1 we replace \(p\) by the productions \(\sigma(q^{\prime\prime},q)\stackrel{{ 11=2}}{{ \longrightarrow}}_{1}q\), which is not classic, and \(\gamma(q)\rightarrow_{0}q^{\prime\prime}\), where \(q^{\prime\prime}\) is some new nonterminal. The WTGc obtained this way is already a positive WTAc. Another routine normalization turns the final weights into Boolean final weights following the approach of [2, Lemma 6.1.1]. 
This is achieved by adding special copies of all nonterminals that terminate the derivation and pre-apply the final weight. Lemma 2: _WTGc and WTGc with Boolean final weights are equally expressive. This also applies to positive WTGc, classic WTGc, and classic positive WTGc as well as the same WTAc._ Proof: Let \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be a WTGc. Let \(f\in C^{Q}\) be bijective with \(C\cap Q=\emptyset\). We construct the WTGc \(G^{\prime}=(Q\cup C,\Sigma,F^{\prime},P\cup P^{\prime},\mathrm{wt}\cup \mathrm{wt}^{\prime})\) such that \(p^{\prime}=\ell\stackrel{{ E,J}}{{\longrightarrow}}f_{q}\) belongs to \(P^{\prime}\) and \(\mathrm{wt}^{\prime}_{p^{\prime}}=\mathrm{wt}_{p}\cdot F_{q}\) for every \(p=\ell\stackrel{{ E,J}}{{\longrightarrow}}q\in P\). No other productions belong to \(P^{\prime}\). Finally, \(F^{\prime}_{q}=0\) for all \(q\in Q\) and \(F_{c}=1\) for all \(c\in C\). The proof of equivalence is straightforward showing for every \(t\in T_{\Sigma}\) and \(q\in Q\) that \[\mathrm{wt}^{q}_{G^{\prime}}(t)=\mathrm{wt}^{q}_{G}(t)\qquad\text{and}\qquad \mathrm{wt}^{f(q)}_{G^{\prime}}(t)=\mathrm{wt}^{q}_{G}(t)\cdot F_{q}\enspace.\] The construction trivially preserves the properties normalized, positive, and classic. Let \(d\in D^{q}_{G}(t)\) be a derivation for some \(q\in Q\) and \(t\in T_{\Sigma}\). Since we often argue with the help of such derivations \(d\), it is a nuisance that we might have \(\mathrm{wt}_{G}(d)=0\). This anomaly can occur even if \(\mathrm{wt}_{p}\neq 0\) for all \(p\in P\) due to the presence of zero-divisors, which are elements \(s,s^{\prime}\in\mathbb{S}\setminus\{0\}\) such that \(s\cdot s^{\prime}=0\). However, we can fortunately avoid such anomalies altogether utilizing a construction of [19], which has been lifted to tree automata in [9]. Lemma 3: _For every WTGc \(G\) there exists a WTGc \(G^{\prime}=(Q^{\prime},\Sigma,F^{\prime},P^{\prime},\mathrm{wt}^{\prime})\) that is equivalent and \(\mathrm{wt}^{\prime}_{G^{\prime}}(d^{\prime})\neq 0\) for all \(q^{\prime}\in Q^{\prime}\), \(t^{\prime}\in T_{\Sigma}\), and \(d^{\prime}\in D^{q^{\prime}}_{G^{\prime}}(t^{\prime})\). This also applies to positive WTGc, classic WTGc, and classic positive WTGc as well as the same WTAc. The construction also preserves Boolean final weights._ Proof: Let \(G=(Q,\Sigma,F,P,\mathrm{wt})\). Obviously, \((\mathbb{S},\cdot,1,0)\) is a commutative monoid with zero. Let \((s_{1},\ldots,s_{n})\) be an enumeration of the finite set \(\mathrm{wt}(P)\setminus\{1\}\subseteq\mathbb{S}\). We consider the monoid homomorphism \(h\colon\mathbb{N}^{n}\to\mathbb{S}\), which is given by \[h(m_{1},\ldots,m_{n})=\prod_{i=1}^{n}s_{i}^{m_{i}}\] for every \(m_{1},\ldots,m_{n}\in\mathbb{N}\). According to Dickson's lemma [6] the set \(\min h^{-1}(0)\) is finite, where the partial order is the standard pointwise order on \(\mathbb{N}^{n}\). Hence there is \(u\in\mathbb{N}\) such that \(\min h^{-1}(0)\subseteq\{0,\ldots,u\}^{n}=U\). We define the operation \(\oplus\colon U^{2}\to U\) by \((v\oplus v^{\prime})_{i}=\min(v_{i}+v_{i}^{\prime},u)\) for every \(v,v^{\prime}\in U\) and \(i\in[n]\). Moreover, for every \(i\in[n]\) we let \(1_{s_{i}}\in U\) be the vector such that \((1_{s_{i}})_{i}=1\) and \((1_{s_{i}})_{a}=0\) for all \(a\in[n]\setminus\{i\}\). Let \(V=U\setminus h^{-1}(0)\). 
We construct the equivalent WTGc \(G^{\prime}\) such that \(Q^{\prime}=Q\times V\), \(F^{\prime}_{\langle q,v\rangle}=F_{q}\) for all \(\langle q,v\rangle\in Q^{\prime}\), and \(P^{\prime}\) and \(\mathrm{wt}^{\prime}\) are given as follows. For every production \[p=c[q_{1},\ldots,q_{k}]\stackrel{{ E,J}}{{\longrightarrow}}q\in P\] and all \(v_{1},\ldots,v_{k}\in V\) such that \(v=1_{\mathrm{wt}_{p}}\oplus\bigoplus_{i=1}^{k}v_{i}\in V\) the production \[c\big{[}\langle q_{1},v_{1}\rangle,\ldots,\langle q_{k},v_{k}\rangle\big{]} \stackrel{{ E,J}}{{\longrightarrow}}\langle q,v\rangle\] belongs to \(P^{\prime}\) and its weight is \(\mathrm{wt}^{\prime}_{p^{\prime}}=\mathrm{wt}_{p}\). No further productions are in \(P^{\prime}\). The construction trivially preserves the properties positive, classic, and normalized. For correctness, let \(q^{\prime}=\langle q,v\rangle\in Q^{\prime}\), \(t^{\prime}\in T_{\Sigma}\), and \(d^{\prime}\in D^{q^{\prime}}_{G^{\prime}}(t^{\prime})\). We suitably (for the purpose of zero-divisors) track the weight of the derivation in \(v\) and \(h_{v}\neq 0\) by definition. Consequently, \(\mathrm{wt}^{\prime}_{G^{\prime}}(d^{\prime})\neq 0\) as required. We note that possibly \(\mathrm{wt}_{G^{\prime}}(d^{\prime})\neq h_{v}\). For zero-sum free semirings [16; 17] we obtain that the support \(\mathrm{supp}(G)\) of an WTGc can be generated by a TGc. A semiring is _zero-sum free_ if \(s=0=s^{\prime}\) for every \(s,s^{\prime}\in\mathbb{S}\) such that \(s+s^{\prime}=0\). Clearly, rings are never zero-sum free, but the mentioned semirings \(\mathbb{B}\), \(\mathbb{N}\), \(\mathbb{T}\), and \(\mathbb{A}\) are all zero-sum free. Corollary 1 (of Lemmata 2 and 3): _If \(\mathbb{S}\) is zero-sum free, then \(\mathrm{supp}(G)\) is (positive, classic) constraint-regular for every (respectively, positive, classic) WTGc \(G\)._ Proof: We apply Lemma 2 to obtain an equivalent WTGc with Boolean final weights and then Lemma 3 to obtain the WTGc \(G^{\prime}=(Q^{\prime},\Sigma,F^{\prime},P^{\prime},\mathrm{wt}^{\prime})\) with Boolean final weights. As mentioned we can assume that \(\mathrm{wt}^{\prime}_{p^{\prime}}\neq 0\) for all \(p^{\prime}\in P^{\prime}\). Let \(q^{\prime}\in\mathrm{supp}(F^{\prime})\) and \(t^{\prime}\in T_{\Sigma}\) with \(D^{q^{\prime}}_{G^{\prime}}(t^{\prime})\neq 0\). Since \(\mathrm{wt}^{\prime}_{G^{\prime}}(d^{\prime})\neq 0\) for every derivation \(d^{\prime}\in D^{q^{\prime}}_{G^{\prime}}(t^{\prime})\) and \(s+s^{\prime}\neq 0\) for all \(s,s^{\prime}\in\mathbb{S}\setminus\{0\}\) due to zero-sum freeness, we obtain \(t^{\prime}\in\mathrm{supp}(G^{\prime})\). Thus, the existence of a complete derivation for \(t^{\prime}\) to an accepting nonterminal (i.e., one with final weight 1) characterizes whether we have \(t^{\prime}\in\mathrm{supp}(G^{\prime})\). Consequently, the TGc \(\left(Q^{\prime},\Sigma,\mathrm{supp}(F^{\prime}),P^{\prime}\right)\) generates the tree language \(\mathrm{supp}(G^{\prime})\), which is thus constraint-regular. The properties positive and classic are preserved in all the constructions. ## 4 Closure Properties Next we investigate several closure properties of the constraint-regular weighted tree languages. We start with the (point-wise) sum, which is given by \((A+A^{\prime})_{t}=A_{t}+A^{\prime}_{t}\) for every \(t\in T_{\Sigma}\) and \(A,A^{\prime}\in\mathbb{S}^{T_{\Sigma}}\). 
Given WTGc \(G\) and \(G^{\prime}\) generating \(A\) and \(A^{\prime}\) we can trivially use a disjoint union construction to obtain a WTGc generating \(A+A^{\prime}\). We omit the details. Proposition 1: _The (positive, classical) constraint-regular weighted tree languages (over a fixed ranked alphabet) are closed under sums. _ The corresponding (point-wise) product is the Hadamard product, which is given by \((A\cdot A^{\prime})_{t}=A_{t}\cdot A^{\prime}_{t}\) for every \(t\in T_{\Sigma}\) and \(A,A^{\prime}\in\mathbb{S}^{T_{\Sigma}}\). With the help of a standard product construction we show that the (positive) constraint-regular weighted tree languages are also closed under Hadamard product. As preparation we introduce a special normal form. A WTAc \(G=(Q,\Sigma,F,P,\mathrm{wt})\) is _constraint-determined_ if \(E=E^{\prime}\) and \(I=I^{\prime}\) for all productions \[\sigma(q_{1},\ldots,q_{k})\stackrel{{ E,I}}{{\longrightarrow}}q \in P\quad\text{and}\quad\sigma(q_{1},\ldots,q_{k})\stackrel{{ E^{\prime},I^{\prime}}}{{\longrightarrow}}q\in P\enspace.\] In other words, two productions cannot differ only in the sets of constraints. It is straightforward to turn any (positive) WTAc into an equivalent constraint-determined (positive) WTAc by introducing additional nonterminals (e.g. annotate the constraints to the nonterminal on the right-hand side). Theorem 3.1: _The (positive) constraint-regular weighted tree languages (over a fixed ranked alphabet) are closed under Hadamard products._ Proof: Let \(A,A^{\prime}\in\mathbb{S}^{T_{\Sigma}}\) be constraint-regular. Without loss of generality (see Lemma 1) we can assume constraint-determined WTAc \[G=(Q,\Sigma,F,P,\mathrm{wt})\qquad\text{and}\qquad G^{\prime}=(Q^{\prime}, \Sigma,F^{\prime},P^{\prime},\mathrm{wt}^{\prime})\] that generate \(A\) and \(A^{\prime}\), respectively. We construct the direct product WTAc \[G\times G^{\prime}=(Q\times Q^{\prime},\Sigma,F^{\prime\prime},P^{\prime \prime},\mathrm{wt}^{\prime\prime})\] such that \(F^{\prime\prime}_{\langle q,q^{\prime}\rangle}=F_{q}\cdot F^{\prime}_{q^{ \prime}}\) for every \(q\in Q\) and \(q^{\prime}\in Q^{\prime}\) and for every production \(p=\sigma(q_{1},\ldots,q_{k})\stackrel{{ E,I}}{{\longrightarrow}}q \in P\) and production \[p^{\prime\prime}=\sigma\big{(}\langle q_{1},q^{\prime}_{1}\rangle,\ldots, \langle q_{k},q^{\prime}_{k}\rangle\big{)}\stackrel{{ E,|E^{\prime},I,|t^{\prime}}}{{\longrightarrow}}\langle q,q^{\prime}\rangle\] belongs to \(P^{\prime\prime}\) and its weight is \(\mathrm{wt}^{\prime\prime}_{p^{\prime\prime}}=\mathrm{wt}_{p}\cdot\mathrm{wt }^{\prime}_{p^{\prime}}\). No other productions belong to \(P^{\prime\prime}\). It is straightforward to see that the property positive is preserved. The correctness proof that \(G\times G^{\prime}=A\cdot A^{\prime}\) is a straightforward induction proving \[\mathrm{wt}^{\langle q,q^{\prime}\rangle}_{G\times G^{\prime}}(t)=\mathrm{wt}^ {q}_{G}(t)\cdot\mathrm{wt}^{q^{\prime}}_{G^{\prime}}(t)\] for all \(t\in T_{\Sigma}\) using the initial algebra semantics. The WTAc \(G\) and \(G^{\prime}\) are required to be constraint-determined, so that we can uniquely identify the basic productions \(p\in P\) and \(p^{\prime}\in P^{\prime}\) that construct a newly formed production \(p^{\prime\prime}\in P^{\prime\prime}\). We can obtain a constraint-determined WTAc at the expense of a polynomial increase in the number of productions (assuming that the ranked alphabet of input symbols is fixed). 
Let \(r=\max_{\sigma\in\Sigma}\mathrm{rk}(\sigma)\) be the maximal rank of an input symbol and \(c=|P|\) be the number of productions of the given WTAc \(G=(Q,\Sigma,F,P,\mathrm{wt})\). First, we modify the target nonterminal \(q\) of each production \(\rho=(\ell,q,E,I)\in P\) to additionally include the identifier \(\rho\), which yields the production \((\ell,\langle q,\rho\rangle,E,I)\). This effectively yields the new nonterminal set \(Q\times P\), which has size \(|Q|\cdot c\). Then we create copies of the production \((\sigma(q_{1},\ldots,q_{k}),\langle q,\rho\rangle,E,I)\) by the set of productions \[\left\{\big{(}\sigma(\langle q_{1},\rho_{1}\rangle,\ldots,\langle q_{k},\rho_{ k}\rangle),\langle q,\rho\rangle,E,I\big{)}\ \Big{|}\ \rho_{1},\ldots,\rho_{k}\in P\right\}\.\] Clearly, this turns each production into at most \(c^{r}\) productions since \(k\leq r\), so the overall number of productions after all replacements is at most \(c^{r+1}\). The product construction itself is then quadratic. We note that the previous construction also works for classic WTAc. Example 3: Let \(G=\big{(}\{q\},\Sigma,F,P,\mathrm{wt}\big{)}\) and \(G^{\prime}=\big{(}\{z\},\Sigma,F^{\prime},P^{\prime},\mathrm{wt}^{\prime}\big{)}\) be WTAc over \(\mathbb{A}\) and \(\Sigma=\{\alpha^{(0)},\gamma^{(1)},\sigma^{(2)}\}\), \(F_{q}=F_{z}^{\prime}=0\), and the productions \[\alpha\to_{0}q \gamma(q)\to_{2}q \sigma(q,q)\stackrel{{ 1-2}}{{\longrightarrow}}_{0}q\] ( \[P\] ) \[\alpha\to_{0}z \gamma(z)\stackrel{{ 11\neq 12}}{{\longrightarrow}}_{1}z \sigma(z,z)\to_{1}z\,.\] ( \[P^{\prime}\] ) We observe that \[\mathrm{supp}(G) =\big{\{}t\in T_{\Sigma}\mid\forall w\in\mathrm{pos}_{\sigma}(t) \colon t|_{w1}=t|_{w2}\big{\}}\] \[\mathrm{supp}(G^{\prime}) =\big{\{}t\in T_{\Sigma}\mid\forall w\in\mathrm{pos}_{\gamma}(t) \colon\text{ if }t(w1)=\sigma\text{ then }t|_{w11}\neq t|_{w12}\big{\}}\] and \(G_{t}=2|\mathrm{pos}_{\gamma}(t)|\) as well as \(G_{t^{\prime}}^{\prime}=|\mathrm{pos}_{\gamma}(t^{\prime})|+|\mathrm{pos}_{ \sigma}(t^{\prime})|\) for every tree \(t\in\mathrm{supp}(G)\) and tree \(t^{\prime}\in\mathrm{supp}(G^{\prime})\). We obtain the WTAc \(G\times G^{\prime}=\big{(}\{\langle q,z\rangle\},\Sigma,F^{\prime\prime},P^{ \prime\prime},\mathrm{wt}^{\prime\prime}\big{)}\) with \(F_{\langle q,z\rangle}^{\prime\prime}=0\) and the following productions. \[\alpha\to_{0}\langle q,z\rangle\qquad\gamma\big{(}\langle q,z\rangle\big{)} \stackrel{{ 11\neq 12}}{{\longrightarrow}}_{3}\langle q,z\rangle \qquad\sigma\big{(}\langle q,z\rangle,\langle q,z\rangle\big{)}\stackrel{{ 1=2}}{{ \longrightarrow}}_{1}\langle q,z\rangle\] Hence we obtain the equality \((G\times G^{\prime})_{t}=3|\mathrm{pos}_{\gamma}(t)|+|\mathrm{pos}_{\sigma}(t )|=G_{t}\cdot G_{t}^{\prime}\) for every tree \(t\in\mathrm{supp}(G)\cap\mathrm{supp}(G^{\prime})\). Next, we use an extended version of the classical power set construction to obtain an unambiguous WTAc that keeps track of the reachable nonterminals, but preserves only the homomorphic image of its weight. The unweighted part of the construction mimics a power-set construction and the handling of constraints roughly follows [15, Definition 3.1]. Theorem 3.1: _Let \(h\in\mathbb{T}^{\mathbb{S}}\) be a semiring homomorphism into a finite semiring \(\mathbb{T}\). 
For every (classic) WTAc \(G=(Q,\Sigma,F,P,\mathrm{wt})\) over \(\mathbb{S}\) there exists an unambiguous (classic) WTAc \(G^{\prime}=(\mathbb{T}^{Q},\Sigma,F^{\prime},P^{\prime},\mathrm{wt}^{\prime})\) such that for every tree \(t\in T_{\Sigma}\) and \(\varphi\in\mathbb{T}^{Q}\)_ \[\mathrm{wt}_{G^{\prime}}^{\varphi}(t)=\begin{cases}1&\text{if }\varphi_{q}=h \big{(}\mathrm{wt}_{G}^{q}(t)\big{)}\text{ for all }q\in Q\\ 0&\text{otherwise.}\end{cases}\] _Moreover, \(G_{t}^{\prime}=h(G_{t})\) for every \(t\in T_{\Sigma}\)._ Proof: For every \(\sigma\in\Sigma\), let \[\mathcal{C}_{\sigma}=\big{\{}E\mid\sigma(q_{1},\ldots,q_{k})\stackrel{{ E,J}}{{\longrightarrow}}q\in P\big{\}}\cup\big{\{}I\mid\sigma(q_{1},\ldots,q_{k}) \stackrel{{ E,J}}{{\longrightarrow}}q\in P\big{\}}\] be the constraints that occur in productions of \(G\) whose left-hand side contains \(\sigma\). We let \(F_{\varphi}^{\prime}=\sum_{q\in Q}h(F_{q})\cdot\varphi_{q}\) for every \(\varphi\in\mathbb{T}^{Q}\). For all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), nonterminals \(\varphi^{1},\ldots,\varphi^{k}\in\mathbb{T}^{Q}\), and constraints \(\mathcal{E}\subseteq\mathcal{C}_{\sigma}\) we let \(p^{\prime}=\sigma(\varphi^{1},\ldots\varphi^{k})\stackrel{{\mathcal{ E},J}}{{\longrightarrow}}\varphi\in P^{\prime}\), where \(\mathcal{I}=\mathcal{C}_{\sigma}\setminus\mathcal{E}\) and for every \(q\in Q\) \[\varphi_{q}=\sum_{\begin{subarray}{c}p=\sigma(q_{1},\ldots,q_{k})\stackrel{{ E,J}}{{\longrightarrow}}q\in P\\ E\subseteq\mathcal{E},\mathcal{I}\subseteq\mathcal{E}\end{subarray}}h(\mathrm{wt }_{p})\cdot\varphi_{q_{1}}^{1}\cdot\ldots\cdot\varphi_{q_{k}}^{k}\enspace. \tag{2}\] . No additional productions belong to \(P^{\prime}\). Finally, we set \(\mathrm{wt}^{\prime}_{p^{\prime}}=1\) for all \(p^{\prime}\in P^{\prime}\). In general, the WTAc \(G^{\prime}\) is certainly not deterministic due to the choice of constraints, but \(G^{\prime}\) is unambiguous since the resulting \(2^{|\mathcal{C}_{\sigma}|}\) rules for each left-hand side have mutually exclusive constraint sets. In fact, for each \(t\in T_{\Sigma}\) there is exactly one left-most complete derivation of \(G^{\prime}\) for \(t\), and it derives to \(\varphi\in\mathbb{T}^{Q}\) such that \(\varphi_{q}=h\big{(}\mathrm{wt}^{q}_{G}(t)\big{)}\) for every \(q\in Q\). The weight of that derivation is \(1\). These statements are proven inductively. The final statement \(G^{\prime}_{t}=h(G_{t})\) for every \(t\in T_{\Sigma}\) is an easy consequence of the previous statements. If \(G\) is classic, then also the constructed WTAc \(G^{\prime}\) is classic. Example 4: Recall the WTAc \(G\) and \(G^{\prime}\) from Example 3. Consider the WTAc generating their disjoint union, as well as the semiring homomorphism \(h\in\mathbb{B}^{\mathbb{A}}\) given by \(h_{a}=1\) for all \(a\in\mathbb{A}\setminus\{-\infty\}\) and \(h_{-\infty}=0\). The sets \(\mathcal{C}_{\gamma}\) and \(\mathcal{C}_{\sigma}\) of utilized constraints are \(\mathcal{C}_{\gamma}=\big{\{}(11,12)\big{\}}\) and \(\mathcal{C}_{\sigma}=\big{\{}(1,2)\big{\}}\), and we write \(\varphi\in\mathbb{B}^{Q}\) simply as subsets of \(Q\). We obtain the unambiguous WTAc \(G^{\prime\prime}\) with the following sensible (i.e., having satisfiable constraints) productions for all \(Q^{\prime},Q^{\prime\prime}\subseteq\{q,z\}\), which all have weight \(1\). 
\[\alpha \longrightarrow\ \{q,z\}\] \[\gamma(Q^{\prime}) \stackrel{{ 11=1}}{{\longrightarrow}}Q^{\prime}\cap\{q\} \gamma(Q^{\prime}) \stackrel{{ 11\neq 12}}{{\longrightarrow}}Q^{\prime}\] \[\sigma(Q^{\prime},Q^{\prime\prime}) \stackrel{{ 1=2}}{{\longrightarrow}}Q^{\prime}\cap Q^{\prime\prime} \sigma(Q^{\prime},Q^{\prime\prime}) \stackrel{{ 1\neq 2}}{{\longrightarrow}}Q^{\prime}\cap Q^{ \prime\prime}\cap\{z\}\] Each \(t\in T_{\Sigma}\) has exactly one left-most complete derivation in \(G^{\prime\prime}\); it derives to \(Q^{\prime}\), where (i) \(q\in Q^{\prime}\) iff \(t\in\mathrm{supp}(G)\) and (ii) \(z\in Q^{\prime}\) iff \(t\in\mathrm{supp}(G^{\prime})\). It is \(F^{\prime\prime}_{\emptyset}=0\) and \(F^{\prime\prime}_{Q}=1\) for all non-empty \(Q\subseteq\{q,z\}\). Corollary 2 (of Theorem 2.2): _Let \(\mathbb{S}\) be finite. For every (classic) WTAc over \(\mathbb{S}\) there exists an equivalent unambiguous (classic) WTAc. _ Corollary 3 (of Theorem 2.2): _Let \(\mathbb{S}\) be zero-sum free. For every (classic) WTAc \(G\) over \(\mathbb{S}\) there exists an unambiguous (classic) TAC generating \(\mathrm{supp}(G)\)._ Proof: Utilizing Lemma 2 we can first construct an equivalent WTAc with Boolean final weights. If \(\mathbb{S}\) is zero-sum free, then there exists a semiring homomorphism \(h\in\mathbb{B}^{\mathbb{S}}\) by [27]. By Lemma 3 we can assume that each derivation of \(G\) has non-zero weight and sums of non-zero elements remain non-zero by zero-sum freeness. Thus we can simply replace the factor \(h(\mathrm{wt}_{p})\) by \(1\) in (2). The such obtained TAC generates \(\mathrm{supp}(G)\). Corollary 4 (of Theorem 2.2): _Let \(\mathbb{S}\) be zero-sum free. For every (classic) WTAc \(G\) over \(\mathbb{S}\) there exists an unambiguous (classic) TAC generating \(T_{\Sigma}\setminus\mathrm{supp}(G)\)._ Proof: Let \(G^{\prime}=(Z,\Sigma,Z_{0},P^{\prime})\) be the unambiguous TAC given by Corollary 3. Since \(G^{\prime}\) is also complete in the sense that every input tree has a derivation, the desired unambiguous TAC \(G^{\prime\prime}\) is simply \(G^{\prime\prime}=(Z,\Sigma,Z\setminus Z_{0},P^{\prime})\). Let \(A,A^{\prime}\in\mathbb{S}^{T_{\Sigma}}\). It is often useful (see [15, Definition 4.11]) to restrict \(A\) to the support of \(A^{\prime}\) but without changing the weights of those trees inside the support. Formally, we define \(A|_{\mathrm{supp}(A^{\prime})}\in\mathbb{S}^{T_{\Sigma}}\) for every \(t\in T_{\Sigma}\) by \(A|_{\mathrm{supp}(A^{\prime})}(t)=A_{t}\) if \(t\in\mathrm{supp}(A^{\prime})\) and \(A|_{\mathrm{supp}(A^{\prime})}(t)=0\) otherwise. Utilizing unambiguous WTAc and the Hadamard product, we can show that \(A|_{\mathrm{supp}(A^{\prime})}\) is constraint-regular if \(A\) and \(A^{\prime}\) are constraint-regular and the semiring \(\mathbb{S}\) is zero-sum free. Theorem 3.1: _Let \(\mathbb{S}\) be zero-sum free. For all (classic) WTAc \(G\) and \(G^{\prime}\) there exists a (classic) WTAc \(H\) such that \(H=G|_{\mathrm{supp}(G^{\prime})}\)._ Proof: By Corollary 1 the support \(\mathrm{supp}(G^{\prime})\) is constraint-regular. Hence we can obtain an unambiguous WTAc \(G^{\prime\prime}\) for \(\mathrm{supp}(G^{\prime})\) using Theorem 3.2. Without loss of generality we assume that both \(G\) and \(G^{\prime\prime}\) are constraint-determined; we note that the normalization preserves unambiguous WTAc. Finally we construct \(G\times G^{\prime\prime}\), which by Theorem 3.1 generates exactly \(G|_{\mathrm{supp}(G^{\prime})}\) as required. 
In the following, we establish a special property for classic WTGc. To this end, we first need another notion. Let \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be a WTGc. A nonterminal \(\bot\in Q\) is a _sink nonterminal (in \(G\))_ if \(F_{\bot}=0\) and \[\left\{\sigma(\bot,\ldots,\bot)\to_{1}\bot\mid\sigma\in\Sigma\right\}=\left\{ \ell\xrightarrow{E,J}_{s}q\in P\mid q=\bot\right\}\enspace.\] In other words, for every sink nonterminal \(\bot\) the production \(\sigma(\bot,\ldots,\bot)\to\bot\) belongs to \(P\) with weight \(1\) for every symbol \(\sigma\in\Sigma\). Additionally, no other productions have the sink nonterminal \(\bot\) as target nonterminal. Given a set \(E\subseteq\mathbb{N}^{*}\times\mathbb{N}^{*}\) of equality constraints, we let \(\equiv_{E}=(E\cup E^{-1})^{*}\) be the smallest equivalence relation containing \(E\) and \([w]_{\equiv_{E}}\) be the equivalence class of \(w\in\mathbb{N}^{*}\). Additionally, for every production \(c[q_{1},\ldots,q_{k}]\xrightarrow{E,J}q\in P\) we let \[c(E)=\left\{(i,j)\in[k]\times[k]\mid(v,v^{\prime})\in E,\,c(v)=x_{i},c(v^{ \prime})=x_{j}\right\}\] be a representation of the equality constraints on the indices \([k]\). Definition 5: A classic WTGc \(G=(Q,\Sigma,F,P,\mathrm{wt})\) is _eq-restricted_ if there exists a sink nonterminal \(\bot\in Q\) such that for every production \(p=c[q_{1},\ldots,q_{k}]\xrightarrow{E,J}q\in P\) and index \(i\in[k]\) there exists a nonterminal \(q^{\prime}\in Q\) such that 1. \(\left\{q_{j}\mid j\in[i]_{\equiv_{c(E)}}\right\}\subseteq\left\{q^{\prime},\bot\right\}\) and 2. there exists exactly one index \(j\in[i]_{\equiv_{c(E)}}\), also called _governing index for \(i\) in \(p\)_, such that \(q_{j}=q^{\prime}\). The mapping \(g_{p}\colon[k]\to[k]\) assigns to each index \(i\in[k]\) its governing index for \(i\) in \(p\). In other words, in an eq-restricted classic WTGc one subtree is generated normally by the WTGc and all the subtrees that are required to be equal by means of the equality constraints are generated by the sink nonterminal \(\bot\), which can generate any tree with weight \(1\). In this manner, the restrictions on subtree and weight generation induced by the WTGc are exhibited completely on a single subtree and the "copies" are only provided by the equality constraint, but not further restricted by the WTGc. We will continue to use \(\bot\) for the suitable sink nonterminal of an eq-restricted classic WTGc. Finally, we show that the weighted tree languages generated by eq-restricted positive classic WTGc are closed under relabelings. A _relabeling_ is a tree homomorphism \(\pi\in T_{\Delta}(X)^{\Sigma}\) such that for every \(k\in\mathbb{N}\) and \(\sigma\in\Sigma_{k}\) there exists \(\delta\in\Delta_{k}\) with \(\pi_{\sigma}=\delta(x_{1},\ldots,x_{k})\). In other words, a relabeling deterministically replaces symbols respecting their rank. We often specify a relabeling just as a mapping \(\pi\in\Delta^{\Sigma}\) such that \(\pi_{\sigma}\in\Delta_{k}\) for every \(k\in\mathbb{N}\) and \(\sigma\in\Sigma_{k}\). Theorem 3.4: _The weighted tree languages generated by eq-restricted positive classic WTGc are closed under relabelings._ Proof: Let WTGc \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be an eq-restricted positive classic WTGc with sink nonterminal \(\bot\). Without loss of generality, suppose that \(\Sigma\cap X=\emptyset\). Moreover, let \(\pi\in\Delta^{\Sigma}\) be a relabeling. 
We first extend \(\pi\) to a mapping \(\pi^{\prime}\in(\Delta\cup X)^{\Sigma\cup X}\), in which we treat the elements of \(X\) as nullary symbols, for every \(\sigma\in\Sigma\) and \(x\in X\) by \(\pi^{\prime}_{\sigma}=\pi_{\sigma}\) and \(\pi^{\prime}_{x}=x\). Let \(G^{\prime}=(Q,\Delta,F,P^{\prime},\mathrm{wt}^{\prime})\) be the eq-restricted positive classic WTGc such that \[P^{\prime}=\Big{\{}\pi^{\prime}(c)[q_{1},\ldots,q_{k}]\stackrel{{ E,\emptyset}}{{\longrightarrow}}q\mid c[q_{1},\ldots,q_{k}] \stackrel{{ E,\emptyset}}{{\longrightarrow}}q\in P,q\neq\bot \Big{\}}\] and for every production \(p^{\prime}=c^{\prime}[q_{1},\ldots,q_{k}]\stackrel{{ E,\emptyset}}{{ \longrightarrow}}q\in P^{\prime}\) with \(q\neq\bot\) we let \[\mathrm{wt}^{\prime}_{p^{\prime}}=\sum_{\begin{subarray}{c}p=c[q_{1},\ldots,q_ {k}]\stackrel{{ E,\emptyset}}{{\longrightarrow}}q\in P\\ c\in(\pi^{\prime})^{-1}(c^{\prime})\end{subarray}}\mathrm{wt}_{p}\enspace. \tag{3}\] Finally, \(\mathrm{wt}^{\prime}\big{(}\delta(\bot,\ldots,\bot)\to\bot\big{)}=1\) for all \(\delta\in\Delta\). For correctness we prove the following equality for every \(u\in T_{\Delta}\) and \(q\in Q\) by induction on \(u\) \[\mathrm{wt}^{q}_{G^{\prime}}(u)=\begin{cases}\sum_{t\in\pi^{-1}(u)}\mathrm{wt} ^{q}_{G}(t)&\text{if }q\neq\bot\\ 1&\text{otherwise.}\end{cases} \tag{4}\] The second case is immediate since there is a single derivation, namely the one utilizing only nonterminal \(\bot\), for \(u\) to \(\bot\) and its weight is \(1\). In the remaining case we have \(q\neq\bot\). Then \[\mathrm{wt}^{q}_{G^{\prime}}(u)\] \[\stackrel{{\eqref{eq:wt_G}}}{{=}}\sum_{\begin{subarray} {c}p^{\prime}=c^{\prime}[q_{1},\ldots,q_{k}]\stackrel{{ E,\emptyset}}{{ \longrightarrow}}q\in P^{\prime}\\ u_{1},\ldots,u_{k}\in T_{\Delta}\\ u=c^{\prime}[u_{1},\ldots,u_{k}]\\ u_{i}=c\end{subarray}}\mathrm{wt}^{\prime}_{p^{\prime}}\cdot\prod_{ \begin{subarray}{c}i\in[k]\\ q_{i}\neq\bot\end{subarray}}\Big{(}\sum_{t_{1}\in\pi^{-1}(u_{i})}\mathrm{wt} ^{q_{i}}_{G}(t_{i})\Big{)}\cdot\prod_{\begin{subarray}{c}i\in[k]\\ q_{i}=\bot\end{subarray}}1\enspace.\] Recall that \(g_{p}\colon[k]\to[k]\) assigns to each index its governing index. For better readability, we write just \(g^{\prime}\). Note that due to the special form of substitution we automatically fulfill \(u\models E\) and can thus drop it. 
Expanding the weights \(\mathrm{wt}^{\prime}_{p^{\prime}}\) according to (3) and distributing the products over the sums, the chain of equalities continues to \[\mathrm{wt}^{q}_{G^{\prime}}(u)=\sum_{t\in\pi^{-1}(u)}\mathrm{wt}^{q}_{G}(t)\enspace,\] which proves (4). The statement of the theorem then follows directly from (4) since for every \(u\in T_{\Delta}\) \[G^{\prime}_{u}=\sum_{q\in Q}F_{q}\cdot\mathrm{wt}^{q}_{G^{\prime}}(u)=\sum_{q\in Q}F_{q}\cdot\sum_{t\in\pi^{-1}(u)}\mathrm{wt}^{q}_{G}(t)=\sum_{t\in\pi^{-1}(u)}G_{t}=\pi(G)_{u}\enspace.\]

**Theorem 5**: _Let \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be a WTA and \(h\in T_{\Delta}^{T_{\Sigma}}\) be a nondeleting and non-erasing tree homomorphism. There exists an eq-restricted positive classic WTGc \(G^{\prime}\) with \(G^{\prime}=h(G)\)._

Proof: We construct a WTGc \(G^{\prime}\) for \(h(G)\) in two stages. First, let \[G^{\prime\prime}=\big{(}Q\cup\{\bot\},\Delta\cup\Delta\times P,F^{\prime\prime},P^{\prime\prime},\mathrm{wt}^{\prime\prime}\big{)}\] such that for every \(p=\sigma(q_{1},\ldots,q_{k})\to q\in P\) and \(h_{\sigma}=u=\delta(u_{1},\ldots,u_{n})\), \[p^{\prime\prime}=\Big{(}\langle\delta,p\rangle(u_{1},\ldots,u_{n})[q_{1},\ldots,q_{k}]\stackrel{{E,\emptyset}}{{\longrightarrow}}q\Big{)}\in P^{\prime\prime}\] with \(E=\bigcup_{i\in[k]}\mathrm{pos}_{x_{i}}(u)^{2}\), in which the substitution \(\langle\delta,p\rangle(u_{1},\ldots,u_{n})[q_{1},\ldots,q_{k}]\) replaces for every \(i\in[k]\) only the left-most occurrence of \(x_{i}\) in \(\langle\delta,p\rangle(u_{1},\ldots,u_{n})\) by \(q_{i}\) and all other occurrences by \(\bot\). Moreover \(\mathrm{wt}^{\prime\prime}_{p^{\prime\prime}}=\mathrm{wt}_{p}\). Additionally, we let \[p^{\prime\prime}_{\delta}=\delta(\bot,\ldots,\bot)\rightarrow\bot\in P^{\prime\prime}\] with weight \(\mathrm{wt}^{\prime\prime}_{p^{\prime\prime}_{\delta}}=1\) for every \(k\in\mathbb{N}\) and \(\delta\in\Delta_{k}\cup\Delta_{k}\times P\). No other productions are in \(P^{\prime\prime}\). Finally, we let \(F^{\prime\prime}_{q}=F_{q}\) for all \(q\in Q\) and \(F^{\prime\prime}_{\bot}=0\).
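As an informal illustration of this first stage (a minimal sketch of our own, not part of the construction or its proof), the following Python fragment builds the constrained production obtained from a single WTA production \(p\) and the image \(h_{\sigma}\); trees are nested tuples, variables are strings `"x1"`, `"x2"`, …, positions are tuples of child indices, and all identifiers are hypothetical choices for the sketch.

```python
# Illustrative sketch of one production of G'': build
#   <delta,p>(u_1,...,u_n)[q_1,...,q_k] --E,∅--> q
# from a WTA production p = sigma(q_1,...,q_k) -> q and the image h_sigma.

def positions_of(tree, label):
    """All positions of `label` in `tree`, in left-to-right (depth-first) order."""
    if isinstance(tree, str):
        return [()] if tree == label else []
    out = []
    for i, child in enumerate(tree[1:], start=1):
        out.extend([(i,) + pos for pos in positions_of(child, label)])
    return out

def substitute(tree, mapping):
    """Replace variable leaves according to `mapping` (position -> nonterminal)."""
    def go(node, pos):
        if isinstance(node, str):
            return mapping.get(pos, node)
        return (node[0],) + tuple(go(c, pos + (i,)) for i, c in enumerate(node[1:], start=1))
    return go(tree, ())

def build_production(p_name, child_states, q, weight, h_sigma, bottom="BOT"):
    k = len(child_states)
    mapping, constraints = {}, []
    for i in range(1, k + 1):
        occ = positions_of(h_sigma, f"x{i}")
        mapping[occ[0]] = child_states[i - 1]        # left-most occurrence -> q_i
        for other in occ[1:]:
            mapping[other] = bottom                  # remaining occurrences -> ⊥
        # equality constraints among all occurrences of x_i
        constraints.extend((v, w) for v in occ for w in occ if v != w)
    lhs = (f"<{h_sigma[0]},{p_name}>",) + h_sigma[1:]  # annotate the root symbol
    return substitute(lhs, mapping), constraints, q, weight

# Example: a production phi(q) -> q' with weight 1 and h_phi = sigma(gamma(x1), x1).
h_phi = ("sigma", ("gamma", "x1"), "x1")
print(build_production("p4", ["q"], "q'", 1, h_phi))
# (('<sigma,p4>', ('gamma', 'q'), 'BOT'), [((1, 1), (2,)), ((2,), (1, 1))], "q'", 1)
```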
Obviously, \(G^{\prime\prime}\) is eq-restricted, positive, and classic. In order to better describe the behaviour of \(G^{\prime\prime}\), let us introduce the following notation. Given a tree \(t=\sigma(t_{1},\ldots,t_{k})\in T_{\Sigma}\) and a complete left-most derivation \(d=(p_{1},w_{1})\cdots(p_{m},w_{m})\) of \(G\) for \(t\), let \(d_{1},\ldots,d_{k}\) be the derivations for \(t_{1},\ldots,t_{k}\), respectively that are incorporated in \(d\) and \(h_{\sigma}=\delta(u_{1},\ldots,u_{n})\). Then we define the tree \(h(t,d)\in T_{\Delta\cup\Delta\times P}\) inductively by \[h(t,d)=\langle\delta,p_{m}\rangle(u_{1},\ldots,u_{n})\big{[}h(t_{1},d_{1}), \ldots,h(t_{k},d_{k})\big{]}\enspace.\] Using this notation, let us now prove that for each \(q\in Q\) we have \[\big{\{}s\in T_{\Delta\cup\Delta\times P}\mid D^{q}_{G^{\prime\prime}}(s)\neq \emptyset\big{\}}=\big{\{}h(t,d)\mid t\in T_{\Sigma},d\in D^{q}_{G}(t)\big{\}} \tag{5}\] and, in turn, every such \(D^{q}_{G^{\prime\prime}}(s)\) is a singleton set with \(\mathrm{wt}_{G^{\prime\prime}}(d^{\prime\prime})=\mathrm{wt}_{G}(d)\) for the unique \(d^{\prime\prime}\in D^{q}_{G^{\prime\prime}}\big{(}h(t,d)\big{)}\). We start with the inclusion from right to left. To this end, let \(t\in T_{\Sigma}\) be a tree and \(d=(p_{1},w_{1})\cdots(p_{m},w_{m})\) be a complete left-most derivation of \(G\) for \(t\) to some nonterminal \(q\in Q\). Let \(t=\sigma(t_{1},\ldots,t_{k})\) be the input tree with \(h_{\sigma}=\delta(u_{1},\ldots,u_{n})\), let \(p_{m}=\sigma(q_{1},\ldots,q_{k})\to q\) be the production utilized last in \(d\), and let \(d_{i}\) be the complete left-most derivation for \(t_{i}\) to \(q_{i}\) incorporated in \(d\) for every \(i\in[k]\). For every \(i\in[k]\), we utilize the induction hypothesis to conclude that \(D^{q_{i}}_{G^{\prime\prime}}\big{(}h(t_{i},d_{i})\big{)}\) is a singleton set, so let \(d^{\prime\prime}_{i}\in D^{q_{i}}_{G^{\prime\prime}}\big{(}h(t_{i},d_{i}) \big{)}\) be the unique element, for which we additionally have \(\mathrm{wt}_{G^{\prime\prime}}(d^{\prime\prime}_{i})=\mathrm{wt}_{G}(d_{i})\). Moreover, for every \(i\in[k]\) there is a derivation \(d_{i}^{\bot}\) for \(h(t_{i},d_{i})\) with weight \(1\) that exclusively utilizes the nonterminal \(\bot\). We define \[s=\langle\delta,p_{m}\rangle(u_{1},\ldots,u_{n})\big{[}h(t_{1},d_{1}),\ldots,h( t_{k},d_{k})\big{]}\enspace.\] For every \(i\in[k]\), let \(v_{i}\) be the left-most occurrence of \(x_{i}\) in \(h_{\sigma}\). We consider the derivations \(v_{1}h(t_{1},d_{1}),\ldots,v_{k}h(t_{k},d_{k})\), and for every other occurrence \(v\) of \(x_{i}\) in we consider the derivation \(vd_{i}^{\perp}\). Let \(d^{\prime\prime}\) be the derivation assembled from the considered subderivations followed by \((p_{m}^{\prime\prime},\varepsilon)\), where the production \(p_{m}^{\prime\prime}\) at the root is \(p_{m}^{\prime\prime}=\langle\delta,p_{m}\rangle(u_{1},\ldots,u_{n})[\![q_{1}, \ldots,q_{k}]\!]\stackrel{{ E,\emptyset}}{{\longrightarrow}}q\) with the constraints \(E=\bigcup_{i=1}^{k}\operatorname{pos}_{x_{i}}(h_{\sigma})^{2}\). Clearly, the production \(p_{m}^{\prime\prime}\) is the only applicable one since the only other production whose left-hand side is labeled by \(\langle\delta,p_{m}\rangle\) at the root reaches \(\perp\neq q\). 
Reordering the derivation \(d^{\prime\prime}\) to be left-most, we obtain the desired complete left-most derivation \(\underline{d}^{\prime\prime}\) for \(s\), for which we also have \(\operatorname{wt}_{G^{\prime\prime}}(\underline{d}^{\prime\prime})= \operatorname{wt}_{G}(d)\). This proves that \(\underline{d}^{\prime\prime}\) is the required single element of \(D_{G^{\prime\prime}}^{q}(s)=D_{G^{\prime\prime}}^{q}\big{(}h(t,d)\big{)}\neq\emptyset\). On the other hand, consider \(s\in T_{\Delta\cup\Delta\times P}\) such that there exists a complete left-most derivation \(d^{\prime\prime}=(p_{1}^{\prime\prime},w_{1}^{\prime\prime})\cdots(p_{m}^{ \prime\prime},w_{m}^{\prime\prime})\) for \(s\) to \(q\); i.e. \(d^{\prime\prime}\in D_{G^{\prime\prime}}^{q}(s)\neq\emptyset\). The final rule \(p_{m}^{\prime\prime}\) that is applied must be of the form \[p_{m}^{\prime\prime}=\langle\delta,p\rangle(u_{1},\ldots,u_{n})[\![q_{1}, \ldots,q_{k}]\!]\stackrel{{ E,\emptyset}}{{\longrightarrow}}q\] with \(\delta(u_{1},\ldots,u_{n})[\![q_{1},\ldots,q_{k}]\!]=h_{\sigma}[\![q_{1}, \ldots,q_{k}]\!]\) for some symbol \(\sigma\in\Sigma_{k}\) and production \(p=\sigma(q_{1},\ldots,q_{k})\to q\). For every \(i\in[k]\), we denote by \(w_{i}\) the unique position in \(h_{\sigma}[\![q_{1},\ldots,q_{k}]\!]\) labeled by \(q_{i}\). By the induction hypothesis applied to \(s|_{w_{i}}\), for which the complete left-most derivation \(d_{i}^{\prime\prime}\) for \(s|_{w_{i}}\) to \(q_{i}\) incorporated in \(d^{\prime\prime}\) exists, there exists a tree \(t_{i}\in T_{\Sigma}\) and a complete left-most derivation \(d_{i}\) of \(G\) for \(t_{i}\) to \(q_{i}\) such that \(s|_{w_{i}}=h(t_{i},d_{i})\) and \(\operatorname{wt}_{G}(d_{i})=\operatorname{wt}_{G^{\prime\prime}}(d_{i}^{ \prime\prime})\). For the tree \(t=\sigma(t_{1},\ldots,t_{k})\) we obtain that \(s=h(t,d)\) for the complete left-most derivation \(d\in D_{G}^{q}(t)\) given by \[d=(1d_{1})\cdots(kd_{k})(p,\varepsilon)\enspace,\] for which we also have \(\operatorname{wt}_{G}(d)=\operatorname{wt}_{G^{\prime\prime}}(d^{\prime \prime})\), which completes this proof. So far, \(Q^{\prime\prime}\) and \(P^{\prime\prime}\) are larger than \(Q\) and \(P\) only by a constant (assuming a fixed alphabet \(\Sigma\)) caused by the additional sink nonterminal \(\perp\) and its productions, but the alphabet size increases by the summand \(|\Delta|\cdot|P|\). We now delete the annotation with the help of the relabeling \(\pi\in\Delta^{\Delta\cup\Delta\times P}\) given for every \(\delta\in\Delta\) and \(p\in P\) by \(\pi_{\delta}=\pi_{\langle\delta,p\rangle}=\delta\) following the construction in Theorem 4. 
\[\pi(G^{\prime\prime})_{u} =\sum_{s\in\pi^{-1}(u)}G_{s}^{\prime\prime}=\sum_{s\in\pi^{-1}(u)} \Big{(}\sum_{q\in Q}F_{q}^{\prime\prime}\cdot\operatorname{wt}_{G^{\prime\prime}}^{q}(s)\Big{)}=\sum_{\begin{subarray}{c}q\in Q,\,s\in\pi^{-1}(u)\\ d^{\prime\prime}\in D_{G^{\prime\prime}}^{q}(s)\end{subarray}}F_{q}^{\prime\prime}\cdot\operatorname{wt}_{G^{\prime\prime}}(d^{\prime\prime})\] By (5) and since \(\pi\big{(}h(t,d)\big{)}=h(t)\) for every \(t\in T_{\Sigma}\) and \(d\in D_{G}^{q}(t)\), the trees \(s\in\pi^{-1}(u)\) with \(D_{G^{\prime\prime}}^{q}(s)\neq\emptyset\) are exactly the trees \(h(t,d)\) with \(t\in h^{-1}(u)\) and \(d\in D_{G}^{q}(t)\), each admitting a unique derivation of the same weight. Hence the sum above equals \[\sum_{t\in h^{-1}(u)}\sum_{q\in Q}F_{q}\cdot\operatorname{wt}_{G}^{q}(t)=\sum_{t\in h^{-1}(u)}G_{t}=h(G)_{u}\enspace,\] so \(G^{\prime}=\pi(G^{\prime\prime})\) generates \(h(G)\), and it can be given as an eq-restricted positive classic WTGc by Theorem 4.

Example 5: We consider the ranked alphabet \(\Sigma=\{\alpha^{(0)},\gamma^{(1)},\gamma^{\prime(1)},\phi^{(1)}\}\) and the WTA \(G=\big{(}\{q,q^{\prime}\},\Sigma,F,P,\mathrm{wt}\big{)}\) with final weights \(F_{q^{\prime}}=1\) and \(F_{q}=0\) as well as the productions \(p_{1}=\alpha\to_{1}q\), \(p_{2}=\gamma(q)\to_{2}q\), \(p_{3}=\gamma^{\prime}(q)\to_{1}q\), and \(p_{4}=\phi(q)\to_{1}q^{\prime}\). Then \(\operatorname{supp}(G)=\big{\{}\phi(t)\mid t\in T_{\Sigma\setminus\{\phi\}}\big{\}}\) and \(G_{t}=2^{|\operatorname{pos}_{\gamma}(t)|}\) for every \(t\in\operatorname{supp}(G)\). Consider the ranked alphabet \(\Delta=\{\alpha^{(0)},\gamma^{(1)},\sigma^{(2)}\}\) and the homomorphism \(h\) induced by \(h_{\alpha}=\alpha\), \(h_{\gamma}=h_{\gamma^{\prime}}=\gamma(x_{1})\), and \(h_{\phi}=\sigma\big{(}\gamma(x_{1}),x_{1}\big{)}\). Consequently, \[\operatorname{supp}\bigl{(}h(G)\bigr{)}=\big{\{}\sigma\big{(}\gamma^{n+1}(\alpha),\gamma^{n}(\alpha)\big{)}\mid n\in\mathbb{N}\big{\}}\] and \(h(G)_{t}=\sum_{k=0}^{n}\binom{n}{k}2^{k}=3^{n}\) for every \(t=\sigma\big{(}\gamma^{n+1}(\alpha),\gamma^{n}(\alpha)\big{)}\in\operatorname{supp}\bigl{(}h(G)\bigr{)}\). A WTGc for \(h(G)\) is constructed as follows. First, we let \[G^{\prime\prime}=\bigl{(}\{q,q^{\prime},\bot\},\Delta\cup\Delta\times P,F^{\prime\prime},P^{\prime\prime},\operatorname{wt}^{\prime\prime}\bigr{)}\] with \(F^{\prime\prime}_{q^{\prime}}=1\), \(F^{\prime\prime}_{q}=F^{\prime\prime}_{\bot}=0\) and the productions and their weights are given by \[\langle\alpha,p_{1}\rangle\to_{1}q\quad\langle\gamma,p_{2}\rangle(q)\to_{2}q\quad\langle\gamma,p_{3}\rangle(q)\to_{1}q\quad\langle\sigma,p_{4}\rangle\bigl{(}\gamma(q),\bot\bigr{)}\stackrel{{11=2}}{{\longrightarrow}}_{1}q^{\prime}\] and \(\delta(\bot,\ldots,\bot)\to_{1}\bot\) for all \(\delta\in\Delta\cup\Delta\times P\). Next we remove the second component of the symbols of \(\Delta\times P\) and add the weights of all productions that yield the same production once the second components are removed. In our example, this applies to the production \(\gamma(q)\to q\), which is the result of the two productions \(\langle\gamma,p_{2}\rangle(q)\to_{2}q\) and \(\langle\gamma,p_{3}\rangle(q)\to_{1}q\), so its weight is \(2+1=3\).
Overall, we obtain the WTGc \(G^{\prime}=\bigl{(}\{q,q^{\prime},\bot\},\Delta,F^{\prime\prime},P^{\prime}, \operatorname{wt}^{\prime}\bigr{)}\) with the following productions for all \(\delta\in\Delta\): \[\alpha\to_{1}q\qquad\gamma(q)\to_{3}q\qquad\sigma\bigl{(}\gamma(q),\bot\bigr{)} \stackrel{{ 11=2}}{{\longrightarrow}}1\ q^{\prime}\qquad\delta(\bot,\ldots,\bot)\to_{1} \bot\ \.\qed\] Trees generated by a WTGc must satisfy certain equality constraints on their subtrees. Therefore, if we naively swap subtrees of generated trees, then we might violate such an equality constraint and obtain a tree that is no longer generated by the WTGc. Luckily, the particular kind of WTGc constructed in Theorem 4.5, namely eq-restricted positive classic WTGc, allows us to refine the subtree substitution such that it takes into consideration the equality constraints in force. The following definition is the natural adaptation of (15, Definition 5.1) for (Boolean) tree automata with constraints. Definition 6: Let \(G=(Q,\Sigma,F,P,\operatorname{wt})\) be an eq-restricted, positive, and classic WTGc with sink nonterminal \(\bot\). Moreover, let \(q,q^{\prime}\in Q\), \(t,t^{\prime}\in T_{\Sigma}\), and \(d\in D^{q}_{G}(t)\) as well as \(d^{\prime}\in D^{q^{\prime}}_{G}(t^{\prime})\) such that \(q\neq\bot\neq q^{\prime}\) and \(d=\underline{d}(p,\varepsilon)\) with the final utilized production \(p=c[q_{1},\ldots,q_{k}]\stackrel{{ E,\emptyset}}{{\longrightarrow }}q\in P\). For every \(i\in[k]\) let \(w_{i}=\operatorname{pos}_{x_{i}}(c)\) and \(d_{i}\) be the unique derivation for \(t_{i}=t|_{\operatorname{pos}_{x_{i}}(c)}\) incorporated in \(d\). Finally, for every tree \(u\in T_{\Sigma}\) let \(d_{u}^{\bot}\) be the unique derivation for \(u\) to \(\bot\). For every \(w\in\operatorname{pos}(t)\), for which the derivation for \(t|_{w}\) incorporated in \(d\) yields \(q^{\prime}\) we recursively define the derivation substitution \(d[\![d^{\prime}]\!]_{w}\) of \(d^{\prime}\) into \(d\) at \(w\) and the resulting tree \(t[\![t^{\prime}]\!]_{w}^{d}\) as follows. If \(w=\varepsilon\), then \(d[\![d^{\prime}]\!]_{\varepsilon}=d^{\prime}\) and \(t[\![t^{\prime}]\!]_{\varepsilon}^{d}=t^{\prime}\). Otherwise \(w=w_{j}\underline{w}\) for some \(j\in[k]\) and we have \[d[\![d^{\prime}]\!]_{w}=d^{\prime}_{1}\cdots d^{\prime}_{k}(p,\varepsilon) \qquad\text{and}\qquad t[\![t^{\prime}]\!]_{w}^{d}=c[t^{\prime}_{1},\ldots,t^{ \prime}_{k}]\ \,\] where for each \(i\in[k]\) we have * if \(i=j\) (i.e., \(w_{i}\) is a prefix of \(w\)), then \(d^{\prime}_{i}=w_{i}(d_{i}[\![d^{\prime}]\!]_{w})\) and \(t^{\prime}_{i}=t_{i}[\![t^{\prime}]\!]_{w}^{d^{\prime}_{i}}\), * if \(q_{i}=\bot\) and \(w_{i}\in[w_{j}]_{\equiv_{\mathcal{E}}}\) (i.e., it is a position that is equality restricted to \(w_{j}\)), then \(d^{\prime}_{i}=w_{i}d^{\bot}_{u}\) and \(t^{\prime}_{i}=u\) with \(u=t_{j}[\![t^{\prime}]\!]^{d^{\prime}_{j}}_{\underline{w}}\), and * otherwise \(d^{\prime}_{i}=w_{i}d_{i}\) and \(t^{\prime}_{i}=t_{i}\) (i.e., derivation and tree remain unchanged). It is straightforward to verify that \(d[\![d^{\prime}]\!]_{w}\) is a complete left-most derivation of \(G\) for \(t[\![t^{\prime}]\!]^{d}_{w}\) to \(q\). 
Example 6: We consider the WTGc \(G=\big{(}\{q,\bot\},\Sigma,F,P,\mathrm{wt}\big{)}\) with input ranked alphabet \(\Sigma=\{a^{(0)},g^{(2)},f^{(2)}\}\), final weights \(F_{q}=1\) and \(F_{\bot}=0\) as well as productions \[p_{a}=a\to_{1}q\qquad p_{g}=g(q,\bot)\stackrel{{ 1=2}}{{\longrightarrow}}_{1}q\quad\text{ and}\quad p_{f}=f\big{(}q,f(q,\bot)\big{)}\stackrel{{ 1 =22}}{{\longrightarrow}}_{1}q\] besides the sink nonterminal productions \(p^{\bot}_{\sigma}=\sigma(\bot,\ldots,\bot)\to_{1}\bot\) for all \(\sigma\in\Sigma\). As before, for every \(u\in T_{\Sigma}\) we let \(d^{\bot}_{u}\in D^{\bot}_{G}(u)\) be the unique derivation of \(G\) for \(u\) to \(\bot\), which utilizes only the nonterminal \(\bot\). According to Definition 6 we choose the states \(q=q^{\prime}\) and the trees \(t\) and \(t^{\prime}\) and derivations \(d\) and \(d^{\prime}\) as given in Figure 2 and below. \[d =(p_{a},11)\,(p^{\bot}_{a},12)\,(p_{g},1)\,(p_{a},21)\,(p^{\bot}_{ a},221)\,(p^{\bot}_{a},222)\,(p^{\bot}_{g},22)\,(p_{f},\epsilon)\] \[d^{\prime} =(p_{a},1)\,(p^{\bot}_{a},2)\,(p_{g},\epsilon)\] We select that position \(w=11\) and observe that that the derivation for \(t|_{11}\) is \((p_{a},\epsilon)\), which yields \(q=q^{\prime}\). We compute \(d[\![d^{\prime}]\!]_{w}\) as follows \[d[\![d^{\prime}]\!]_{11} =\left(1(d^{\prime}_{1}[\![d^{\prime}]\!]_{1})\right)\left(21(p_{ a},\epsilon)\right)\left(22d^{\bot}_{u}\right)(p_{f},\epsilon)\] \[=\left(1\Big{(}1d^{\prime}\Big{)}\left(2d^{\bot}_{g(a,a)}\right) (p_{g},\epsilon)\right)(p_{a},21)\,(22d^{\bot}_{u})\,(p_{f},\epsilon)\] \[=\left(p_{a},111\right)(p^{\bot}_{a},112)\,(p_{g},11)\,(12d^{ \bot}_{g(a,a)})\,(p_{g},1)\,(p_{a},21)\,(22d^{\bot}_{u})\,(p_{f},\epsilon)\ \,\] where \(d^{\prime}_{1}=(p_{a},1)\,(p^{\bot}_{a},2)\,(p_{g},\epsilon)\) and \(u=g\big{(}g(a,a),g(a,a)\big{)}\). We note that \(w=11\) is explicitly equality constrained to position \(12\) in \(d\) via the constraint \(1=2\) at position \(1\) and implicitly equality constrained to positions \(221\) and \(222\) via the constraint \(1=22\) at the root \(\epsilon\). Thus, we obtain \(d[\![d^{\prime}]\!]_{11}\) by substituting \(d^{\prime}\) into \(d\) at position \(11\) as well as substituting \(d^{\bot}_{t^{\prime}}\) into \(d\) at positions \(12\), \(221\), and \(222\). The obtained tree \(t[\![t^{\prime}]\!]^{d}_{w}\) is displayed in Figure 3. Figure 2: Input trees \(t\) and \(t^{\prime}\) from Example 6. As our example illustrates, the tree \(t[\![t^{\prime}]\!]_{w}^{d}\) is obtained from \(t\) by (i) identifying the set of all positions of \(t\) that are explicitly or implicitly equality constrained to \(w\) by the productions in the derivation \(d\) and (ii) substituting \(t^{\prime}\) into \(t\) at every such position. If \(w^{\prime}\in\mathrm{pos}(t)\) is parallel to all positions constrained to \(w\), like position \(21\) in Example 6, then \(t[\![t^{\prime}]\!]_{w^{\prime}}=t|_{w^{\prime}}\). Note that \(t|_{21}\) is equal to the replaced subtree \(t|_{11}\), but we only replace constrained subtrees and not all equal subtrees. This substitution allows us to prove a pumping lemma for eq-restricted, positive, and classic WTGc, which can generate all (nondeleting and nonerasing) homomorphic images of regular weighted tree languages by Theorem 5. To this end, we need some final notions. Let \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be a WTGc. Moreover, let \(p=\ell\stackrel{{ E,D}}{{\longrightarrow}}q\in P\) be a production. 
We define the _height_ \(\mathrm{ht}(p)\) of \(p\) by \(\mathrm{ht}(p)=\mathrm{ht}(\ell)\) (i.e., the height of its left-hand side). Moreover, we let \[\mathrm{ht}(P)=\max\bigl{\{}\mathrm{ht}(p)\mid p\in P\bigr{\}}\qquad\text{and}\qquad\mathrm{ht}(G)=(|Q|+1)\cdot\mathrm{ht}(P)\enspace.\] Lemma 4: _Let \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be an eq-restricted, positive, and classic WTGc with sink nonterminal \(\bot\). There exists \(n\in\mathbb{N}\) such that for every tree \(t_{0}\in T_{\Sigma}\), nonterminal \(q\in Q\setminus\{\bot\}\), and derivation \(d\in D_{G}^{q}(t_{0})\) such that \(\mathrm{ht}(t_{0})>n\) and \(\mathrm{wt}_{G}(d)\neq 0\) there are infinitely many trees \(t_{1},t_{2},\ldots\) and derivations \(d_{1},d_{2},\ldots\) such that \(d_{i}\in D_{G}^{q}(t_{i})\) and \(\mathrm{wt}_{G}(d_{i})\neq 0\) for all \(i\in\mathbb{N}\)._ Proof: Without loss of generality, suppose that for every \(c[q_{1},\ldots,q_{k}]\stackrel{{E,\emptyset}}{{\longrightarrow}}q^{\prime}\in P\) with \(q^{\prime}\neq\bot\) and \(k\neq 0\) there exists \(i\in[k]\) such that \(q_{i}\neq\bot\). This can easily be achieved by introducing a copy \(\top\) of the nonterminal \(\bot\) and replacing one instance of \(\bot\) by \(\top\) in offending productions. Similarly, we can assume without loss of generality that the construction in the proof of Lemma 3 has been applied to \(G\). If this is the case, then we can select \(n=\mathrm{ht}(G)\). Let \(t_{0}\in T_{\Sigma}\) be such that \(\mathrm{ht}(t_{0})>n\). Let \(Q^{\prime}=Q\setminus\{\bot\}\), \(d\in D_{G}^{q}(t_{0})\) be a derivation with \(\mathrm{wt}_{G}(d)\neq 0\), and select a position \(w\in\mathrm{pos}(t_{0})\) of maximal length such that \(d\) incorporates a derivation for \(t_{0}|_{w}\) to some \(q^{\prime}\in Q^{\prime}\). Then \[|w|\geq\mathrm{ht}(t_{0})-\mathrm{ht}(P)\geq\mathrm{ht}(G)-\mathrm{ht}(P)=|Q|\cdot\mathrm{ht}(P)\enspace,\] which yields that at least \(|Q|\) proper prefixes \(w^{\prime}\) of \(w\) exist such that \(d\) incorporates a derivation for \(t_{0}|_{w^{\prime}}\) to some \(q^{\prime}\in Q^{\prime}\). Hence there exist prefixes \(w^{\prime},w^{\prime\prime}\) of \(w\) such that \(d\) incorporates a derivation \(d^{\prime}\) for \(t^{\prime}=t_{0}|_{w^{\prime}}\) to \(q^{\prime}\in Q^{\prime}\) as well as a derivation for \(t_{0}|_{w^{\prime\prime}}\) to the same nonterminal \(q^{\prime}\). Then \(d[\![d^{\prime}]\!]_{w^{\prime\prime}}\) is a derivation \(d_{1}\) of \(G\) for \(t_{1}=t_{0}[\![t^{\prime}]\!]_{w^{\prime\prime}}^{d}\) to \(q\) with \(\operatorname{ht}(t_{1})>\operatorname{ht}(t_{0})\). Since we achieve the same state \(q\), the annotation of the proof of Lemma 3 guarantees that \(\operatorname{wt}_{G}(d_{1})\neq 0\). Iterating this substitution yields the desired trees \(t_{1},t_{2},\ldots\) and derivations \(d_{1},d_{2},\ldots\).

Figure 3: Obtained pumped tree \(t[\![t^{\prime}]\!]_{11}^{d}\) from Example 6.

A WTGc generating a (nondeleting and nonerasing) homomorphic image of a regular weighted tree language, if constructed as described in Theorem 5, will never have overlapping constraints since constraints always point to leaves of the left-hand sides of productions as required by classic WTGc. It is intuitive that this limitation to the operating range of constraints leads to an actual restriction in the expressive power of WTGc, but we will only prove it for eq-restricted, positive, and classic WTGc. Proposition 2: _Let \(\mathbb{S}\) be a zero-sum free semiring. 
The class of positive constraint-regular weighted tree languages is strictly more expressive than the class of weighted tree languages generated by eq-restricted, positive, and classic WTGc._ Proof: Let us consider the positive WTGc \(G=\big{(}\{q,q^{\prime}\},\Sigma,F,P,\operatorname{wt}\big{)}\) with input ranked alphabet \(\Sigma=\{f^{(2)},\underline{f}^{(2)},g^{(2)},a^{(0)}\}\), final weights \(F_{q}=1\) and \(F_{q^{\prime}}=0\), and the following productions, of which each has weight 1. \[a\to_{1}q^{\prime} g(q^{\prime},q^{\prime})\to_{1}q f(q,q)\stackrel{{ 12=21}}{{\longrightarrow}}_{1}q f(q,q)\stackrel{{ 12=21}}{{\longrightarrow}}_{1}q\] The first two productions are only used on leaves and on subtrees of the form \(g(a,a)\). Every other position \(w\) (i.e., neither leaf nor position with two leaves as children) is labeled either \(f\) or \(\underline{f}\) and additionally every derivation enforces the constraint \(12=21\), so the subtrees \(t|_{w12}\) and \(t|_{w21}\) of the input tree \(t\) need to be equal for a complete derivation of \(G\) to exist. For the sake of a contradiction, suppose that an eq-restricted, positive, and classic WTGc \(G^{\prime}=(Q^{\prime},\Sigma,F^{\prime},p^{\prime},\operatorname{wt}^{\prime})\) exists that is equivalent to \(G\). We recursively define the trees \(t_{n}\in T_{\Sigma}\) and \(t^{\prime}_{n}\in T_{\Sigma}\) for every \(n\in\mathbb{N}\) with \(n\geq 1\) by \[t_{0} =a t_{1} =g(t_{0},t_{0}) t_{n+1} =f(t_{n},t_{n})\] \[t^{\prime}_{0} =a t^{\prime}_{1} =g(t^{\prime}_{0},t_{0}) t^{\prime}_{n+1} =\underline{f}(t^{\prime}_{n},t_{n})\] Clearly, \(t_{n}\) and \(t^{\prime}_{n}\) are both complete binary trees of height \(n\). Naturally, the leaves are labeled \(a\), and the penultimate level in both trees is always labeled \(g\). In \(t_{n}\) the remaining levels are universally labeled \(f\), whereas in \(t^{\prime}_{n}\) the left-most spine on those levels is labeled \(f\). We illustrate an example tree \(t^{\prime}_{n}\) in Figure 4. Obviously \(G(t_{n})=1\) as well as \(G(\overline{t^{\prime}_{n}})=1\) for every \(n\in\mathbb{N}\) with \(n\geq 1\). Furthermore we note that the derivations of \(G\) only enforce equality constraints on positions of the form \(w12\) or \(w21\), but since \(\operatorname{pos}_{\underline{f}}(t^{\prime}_{n})\subseteq\{1\}^{*}\), the positions, in which the labels in \(t_{n}\) and \(t^{\prime}_{n}\) differ, are not affected by any equality constraint. This can be used to verify that \(G(t^{\prime}_{n})=1\) for each \(n\geq 1\). In the following, let \(n=3\operatorname{ht}(G^{\prime})+2\). Since \(G^{\prime}\) is equivalent to \(G\), we need to have \(G^{\prime}(t^{\prime}_{n})=1\) as well, which requires a complete derivation of \(G^{\prime}\) for \(t^{\prime}_{n}\) to some final nonterminal \(q_{0}\in Q^{\prime}\). Let \(d\in D^{q_{0}}_{G^{\prime}}(t^{\prime}_{n})\) be such a derivation. Moreover, let \(d=\underline{d}(p,\varepsilon)\) for some production \(p=c[q_{1},\ldots,q_{k}]\stackrel{{ E,\emptyset}}{{\longrightarrow}}q_{ 0}\in P^{\prime}\). Since the input tree \(t^{\prime}_{n}\) contains positions \[\Big{\{}1^{i}=\underbrace{11\cdots 1}_{i\text{ times}}\mid 0\leq i\leq n\Big{\}}\subseteq \operatorname{pos}(t^{\prime}_{n})\enspace,\] there must exist \(j\in\mathbb{N}\) such that \(c(1^{j})=x_{1}\); i.e., position \(1^{j}\) is labeled \(x_{1}\) in \(c\). 
Obviously, \(j\leq\operatorname{ht}(G^{\prime})\), so the height of the subtree \(t^{\prime\prime}=t^{\prime}_{n}|_{1^{j}}\), which is still a complete binary tree, is at least \(2\operatorname{ht}(G^{\prime})+2\). We can thus apply Lemma 4 to the tree \(t^{\prime\prime}\) in such a way that it modifies its second direct subtree (starting from \(1^{j}\in\operatorname{pos}(t^{\prime}_{n})\), we descend to \(1^{j}2\); from there, we either find a subderivation to some nonterminal different from \(\bot\), or all subtrees below \(1^{j}2\) are copies of subtrees below \(1^{j}1\), and in that case, we apply the pumping to an equality constrained subtree below \(1^{j}1\), which then also modifies the corresponding subtree below \(1^{j}2\)). Let \(u\) be the such obtained pumped tree, which according to zero-sum freeness and Lemma 4 is also in the support of \(G^{\prime}\); i.e., \(u\in\operatorname{supp}(G^{\prime})\). Let \(d^{\prime}\) be the derivation constructed in Lemma 4 corresponding to \(u\). We have \(u(1^{j-1})=\underline{f}\), so the position \(1^{j-1}\) is labeled \(\underline{f}\). Since \(G\) and \(G^{\prime}\) are equivalent, there must be a derivation of \(G\) for \(u\) as well, which enforces the equality constraint \(u|_{1^{j-1}12}=u|_{1^{j-1}21}\). By construction we have \(t^{\prime}_{n}|_{1^{j-1}12}\neq u|_{1^{j-1}12}\). Since the positions \(1^{j-1}12\) and \(1^{j-1}21\) have no common suffix, this equality can only be guaranteed by \(G^{\prime}\) if \(1^{j-1}12\) and \(1^{j-1}21\) are themselves (explicitly or implicitly) equality constrained in \(d^{\prime}\). The potentially several constraints that achieve this must of course be located at prefixes of \(1^{j-1}12\) and \(1^{j-1}21\), and since the production used in \(d^{\prime}\) at the root is still \(p\) and stretches all the way to \(1^{j}\), this can only be achieved if \(d^{\prime}\) enforces \(1^{j-1}1=1^{j-1}2\) via \(p\) at the root as well as \(1=2\) at \(1^{j-1}1\) or at \(1^{j-1}2\). However, this is a contradiction as \(u(1^{j-1}1)=\underline{f}\neq f=u(1^{j-1}2)\), so we cannot have an explicit or implicit equality constraint between \(1^{j-1}12\) and \(1^{j-1}21\), so \(u|_{1^{j-1}21}=t^{\prime}_{n}|_{1^{j-1}21}\), but contradicts that \(G\) has a complete derivation for \(u\). Although for zero-sum free semirings, the support of a regular weighted tree language is again regular, in general, the converse is not true, so we cannot apply the decision procedure of [15] to the support of a homomorphic image in order to decide its regularity. Instead, we hope to extend the unweighted argument in a way that tracks the weights sufficiently close. For this, we prepare two decidability results, which rely mostly on the corresponding results in the unweighted case. To this end, we need to relate our WTGc constructed in Theorem 5 to the classic TGc used in [15]. Figure 4: A snippet of the tree \(t^{\prime}_{n}\) and the productions used by \(G^{\prime}\). At this point we mention that their classic TGc additionally require that equality constrained positions have the same nonterminal label. Compared to our eq-restriction this change is entirely immaterial in the unweighted case. Theorem 4.1: _Let \(\mathbb{S}\) be a zero-sum free semiring. Moreover, let \(G=(Q,\Sigma,F,P,\mathrm{wt})\) be a WTA and \(h\in T^{T\Sigma}_{\Delta}\) be a nondeleting and nonerasing tree homomorphism. Finally, let \(G^{\prime}=h(G)\). 
Emptiness and finiteness of \(\mathrm{supp}(G^{\prime})\) are decidable._ Proof: We apply the construction in the proof of Lemma 3 to the eq-restricted, positive, and classic WTGc \(G^{\prime}=(Q^{\prime},\Sigma,F^{\prime},P^{\prime},\mathrm{wt}^{\prime})\) constructed according to Theorem 5. In this manner, we ensure that all derivations have non-zero weight. Due to zero-sum freeness, we can now simply drop the weights and obtain an eq-restricted, positive, and classic TGc \(G^{\prime\prime}=(Q^{\prime\prime},\Sigma,F^{\prime\prime},P^{\prime\prime})\) generating \(\mathrm{supp}(G^{\prime})\). Emptiness and finiteness are decidable for the tree language \(\mathrm{supp}(G^{\prime})\) generated by \(G^{\prime\prime}\) according to [15, Corollaries 5.11 & 5.20].

## Conflict of interest

The authors declare that they have no conflict of interest.
2302.14148
Scalable precision wide-field imaging in radio interferometry: I. uSARA validated on ASKAP data
As Part I of a paper series showcasing a new imaging framework, we consider the recently proposed unconstrained Sparsity Averaging Reweighted Analysis (uSARA) optimisation algorithm for wide-field, high-resolution, high-dynamic range, monochromatic intensity imaging. We reconstruct images from real radio-interferometric observations obtained with the Australian Square Kilometre Array Pathfinder (ASKAP) and present these results in comparison to the widely-used, state-of-the-art imager WSClean. Selected fields come from the ASKAP Early Science and Evolutionary Map of the Universe (EMU) Pilot surveys and contain several complex radio sources: the merging cluster system Abell 3391-95, the merging cluster SPT-CL 2023-5535, and many extended, or bent-tail, radio galaxies, including the X-shaped radio galaxy PKS 2014-558 and the ``dancing ghosts'', known collectively as PKS 2130-538. The modern framework behind uSARA utilises parallelisation and automation to solve for the w-effect and efficiently compute the measurement operator, allowing for wide-field reconstruction over the full field-of-view of individual ASKAP beams (up to 3.3 deg each). The precision capability of uSARA produces images with both super-resolution and enhanced sensitivity to diffuse components, surpassing traditional CLEAN algorithms which typically require a compromise between such yields. Our resulting monochromatic uSARA-ASKAP images of the selected data highlight both extended, diffuse emission and compact, filamentary emission at very high resolution (up to 2.2 arcsec), revealing never-before-seen structure. Here we present a validation of our uSARA-ASKAP images by comparing the morphology of reconstructed sources, measurements of diffuse flux, and spectral index maps with those obtained from images made with WSClean.
Amanda G. Wilber, Arwa Dabbech, Adrian Jackson, Yves Wiaux
2023-02-27T21:13:38Z
http://arxiv.org/abs/2302.14148v2
# Scalable precision wide-field imaging in radio interferometry: ###### Abstract As Part I of a paper series showcasing a new imaging framework, we consider the recently proposed unconstrained Sparsity Averaging Reweighted Analysis (uSARA) optimisation algorithm for wide-field, high-resolution, high-dynamic range, monochromatic intensity imaging. We reconstruct images from real radio-interferometric observations obtained with the Australian Square Kilometre Array Pathfinder (ASKAP) and present these results in comparison to the widely-used, state-of-the-art imager WSClean. Selected fields come from the ASKAP Early Science and Evolutionary Map of the Universe (EMU) Pilot surveys and contain several complex radio sources: the merging cluster system Abell 3391-95, the merging cluster SPT-CL 2023-5535, and many extended, or bent-tail, radio galaxies, including the X-shaped radio galaxy PKS 2014-558 and "the dancing ghosts," known collectively as PKS 2130-538. The modern framework behind uSARA utilises parallelisation and automation to solve for the \(w\)-effect and efficiently compute the measurement operator, allowing for wide-field reconstruction over the full field-of-view of individual ASKAP beams (up to \(\sim 3.3^{\circ}\) each). The precision capability of uSARA produces images with both super-resolution and enhanced sensitivity to diffuse components, surpassing traditional CLEAN algorithms which typically require a compromise between such yields. Our resulting monochromatic uSARA-ASKAP images of the selected data highlight both extended, diffuse emission and compact, filamentary emission at very high resolution (up to 2.2 arcsec), revealing never-before-seen structure. Here we present a validation of our uSARA-ASKAP images by comparing the morphology of reconstructed sources, measurements of diffuse flux, and spectral index maps with those obtained from images made with WSClean. keywords: techniques: interferometric - techniques: image processing - radio continuum: galaxies - galaxies: clusters: intracluster medium ## 1 Introduction We are now at the beginning of a booming era for radio astronomy, with a worldwide effort to gear up for the Square Kilometre Array (SKA) - a revolutionary radio telescope with capabilities for sub-arcsecond resolution and ultra-deep sensitivity. Pathfinding radio interferometers - such as the Murchison wide-field Array (MWA; Tingay et al., 2013), the LOw Frequency ARay (LOFAR; van Haarlem et al., 2013), ASKAP (Johnston et al., 2007, 2008; Hotan et al., 2021), and MeerKAT (Jonas and MeerKAT Team, 2016) - are paving the way by affording us the opportunity to expand our capabilities in detection, calibration, and image reconstruction of the unknown radio sky. Ongoing wide-field radio continuum surveys - such as the LOFAR Two-meter Sky Survey (LoTSS; Shimwell et al., 2017, 2019), the LOFAR LBA Sky Survey (LoLSS; de Gasperin et al., 2021), the MeerKAT MIGHTree survey (Taylor and Jarvis, 2017), and the ASKAP EMU survey (Norris et al., 2011) - will be used in conjunction to gather statistics on millions of radio galaxies and thousands of galaxy clusters, leading to new and exciting results on cosmic magnetism, cosmic rays, dark matter, dark energy, and the evolution of large-scale structure in the Universe. 
The quest to convert the sheer quantity of radio data from these surveys into science-ready images has propelled radio astronomers toward developing innovative, state-of-the-art calibration (_e.g._ Smirnov and Tasse, 2015; Williams et al., 2016; van Weeren et al., 2016) and imaging (_e.g._ Offringa and Smirnov, 2017; Tasse et al., 2018; Pratley et al., 2018) techniques throughout the last decade. Through the implementation of these trailblazing techniques, results from LoTSS, the ASKAP-EMU Pilot Survey (Norris et al., 2021), and MIGHTEE are already revealing extraordinary extragalactic radio sources that have not been previously detected, some of which are so complex in their physical properties that they challenge current taxonomies and origination theories (_e.g._ Botteon et al., 2020; Bruggen et al., 2021; Knowles et al., 2022). Real intensity structure in radio sources - such as Galactic HI emission, supernova remnants, extended radio galaxies, and galaxy cluster radio halos and relics - often exhibits both compact and diffuse components. A complex radio source might include bright, collimated threads of emission that appear embedded within fainter, dispersed lobes. For instance, intracluster radio sources, generated by large-scale turbulence and shocks, tend to exhibit high-resolution filamentary structures (tracing well-aligned magnetic field lines) as well as low-surface-brightness diffuse structure, which can span megaparsec scales (see the radio relic of Abell 2256 as one iconic example - _e.g._ Owen et al., 2014; Rajpurohit et al., 2022). When imaging such complex radio emission, state-of-the-art CLEAN-based algorithms (_e.g._ Hogbom, 1974; Clark, 1980; Schwab, 1984; Wakker & Schwarz, 1988; Cornwell, 2008; Offringa et al., 2014) are usually implemented with data weighting schemes to adjust the sensitivity to either compact or diffuse emission. By modifying the shape of the synthesised beam and applying weights (such as a \(uv\)-taper) to short-baseline data, the sensitivity to fainter, more extended components of radio emission can be enhanced, albeit with a considerable loss of resolution. Consequently, such imaging methods often fail to accurately reconstruct both compact and diffuse components simultaneously in a single image. In the last decade, progress in compressed sensing research has led to the development of optimisation-based algorithms to reconstruct true signal from partial, undersampled visibility data. This methodology was first applied and shown to be effective for radio interferometry imaging by Wiaux et al. (2009) and has since led to the development of several designated imaging algorithms (e.g. Li et al., 2011; Carrillo et al., 2012; Garsden et al., 2015; Dabbech et al., 2015). Such innovative approaches - relying on sophisticated sparsity-based image models - have demonstrated their success at capturing both compact and diffuse components in the reconstructed radio images, albeit with an increase in computational expense. One such state-of-the-art optimisation-based class of methods is the "Sparsity Averaging Reweighted Analysis" (SARA) family (Carrillo et al., 2012; Dabbech et al., 2018; Abdulaziz et al., 2019; Thouvenin et al., 2022, 2020; see the Puri-Psi web-page for more details). The monochromatic SARA algorithm, initially proposed by Carrillo et al. 
(2012), leverages state-of-the-art algorithmic structures from optimisation theory to enforce non-negativity and sparsity in a highly redundant sparsity dictionary of the sought radio image under noise-driven data fidelity constraints (Onose et al., 2016; Onose et al., 2017). Evolved versions of SARA include Faceted HyperSARA for wide-band imaging, shipped with spectral and spatial faceting functionalities to handle large image dimensions (Thouvenin et al., 2022, 2022). Polarized SARA for polarisation imaging (Birdi et al., 2018), and a sparsity-based joint calibration and imaging framework (Repetti et al., 2017; Repetti & Wiaux, 2017; Birdi et al., 2020; Dabbech et al., 2021). In addition to the precision and robustness requirements of RI imaging algorithms, the need for scalability is more critical than ever in light of the extreme data volumes, wide fields-of-view, and broad frequency bandwidths offered by modern radio arrays. In this context, we have recently proposed a parallel and automated framework for wide-field, high-resolution, high-dynamic range monochromatic intensity imaging (Dabbech et al., 2022). The framework encapsulates two imaging algorithms at the interface of optimisation theory and deep learning, recently proposed by Teris et al. (2022): (i) the unconstrained SARA (uSARA) algorithm relying on a handcrafted image model enforced via an optimisation-based denoiser, and (ii) the AI for Regularization in Imaging (AIRI) algorithm relying on an implicitly learnt image model promoted via denoising deep neural networks. The framework offers scalable implementation of both imaging algorithms through two key features: memory-efficient, parallel operators and automated parameter selection. This article is Part I of a series aiming to showcase and validate the uSARA algorithm on imperfectly calibrated RI observations from the ASKAP radio telescope. In Part II, we expand upon this work to include a validation of the AIRI algorithm. In both articles, we aim to study the imaging performance of uSARA and AIRI in comparison to Multi-scale CLEAN (Cornwell, 2008) via the WSClean imager (Offringa et al., 2014), both in terms of reconstruction quality and computational efficiency. For a coherent comparative analysis of the three imaging algorithms - uSARA, AIRI, and WSClean - both Part I and Part II utilise the same RI data from publicly available ASKAP observations collected during Early Science and Pilot surveys and processed through the ASKAPsoft pipeline (Hotan et al., 2021). Targeted fields-of-view - hosting extended, diffuse, and complex radio sources - were carefully selected to test the precision and scalability capabilities of our imaging framework. A comprehensive summary of the considered imaging framework (with interchangeable denoising algorithms), including the wide-field measurement operator model, its distribution through parallelisation, and its implementation using high-performance computing systems (HPC), is provided by Dabbech et al. (2022). A fully detailed analysis of the framework's scalability is the subject of a forthcoming article. The remainder of this article is structured as follows. In Section 2, we present an overview of the investigated imaging framework from the algorithmic structure underpinning uSARA to the parallelisation and automation functionalities ensuring the computational scalability of the framework to large image and data dimensions in the context of wide-field imaging. 
In Section 3, we provide details of the scrutinised ASKAP data and the imaging settings of uSARA and the CLEAN-based benchmark algorithm. Reconstruction results of our primary targets of interest are presented in Section 4 and discussed in Section 5. Section 6 documents and discusses the computational cost of the imaging framework. Finally, conclusions are drawn in Section 7. ## 2 Methods In this section, we present the RI data model in the context of wide-field monochromatic intensity imaging. We provide an overview of the uSARA imaging algorithm and its underpinning algorithmic structure, and summarise the scalability features of its encompassing framework (Terris et al., 2022; Dabbech et al., 2022). ### Data Model In the absence of instrumental and atmospheric perturbations, RI visibilities measured at a given observing wavelength are noisy Fourier components of the radio sky, modulated by the so-called \(w\)-effect, a varying chirp-like phase induced by the non-coplanarity of the radio array. With no loss of generality, the data model can be discretised, such that the measured visibilities \(\mathbf{y}\in\mathbb{C}^{M}\) are modelled from the sought intensity image \(\mathbf{x}\in\mathbb{R}_{+}^{N}\) as follows \[\mathbf{y}=\mathbf{\Phi}\mathbf{x}+\mathbf{n}, \tag{1}\] where \(\mathbf{n}\in\mathbb{C}^{M}\) is a realisation of a zero-mean random Gaussian noise with a standard deviation \(\tau>0\). The operator \(\mathbf{\Phi}\in\mathbb{C}^{M\times N}\) is the measurement operator encompassing the Fourier sampling and the \(w\)-effect. Due to the large amounts of data, a direct Fourier transform would be intractable; therefore, the incomplete Fourier sampling is modelled via the non-uniform fast Fourier transform (NUFFT) (Fessler & Sutton, 2003; Onose et al., 2016). Furthermore, the \(w\)-effect is taken into account via a hybrid model combining \(w\)-stacking (Offringa et al., 2014) and \(w\)-projection (Cornwell et al., 2008), whereby RI data are grouped by their \(w\)-coordinates into \(P\) \(w\)-stacks. For each data point, the chirp-like phase is decomposed into two terms: a modulation of its associated \(w\)-stack injected in the measurement operator via image-domain multiplication, and a compact Fourier kernel encoding the resulting \(w\)-offset modulation, injected via Fourier-domain convolution (Dabbech et al., 2017). As a final consideration, a noise-whitening operation is typically applied to the measured data and injected into the associated measurement operator to ensure constant standard deviation of the noise (see Appendix A of Terris et al., 2022, for more details). On some occasions, the operation is performed in combination with a data-weighting scheme derived from the sampling profile (_e.g._ Briggs weighting; Briggs, 1995) to improve the effective resolution of the observation (due to the highly non-uniform density profile of the RI sampling). Under these considerations, the measurement operator is decomposed into computationally and memory-efficient blocks as the vertical concatenation of the operators \(\{\mathbf{\Phi}_{p}\}_{1\leq p\leq P}\), where for each \(w\)-stack \(p\in\{1,\ldots,P\}\), the associated measurement operator \(\mathbf{\Phi}_{p}\in\mathbb{C}^{M_{p}\times N}\) is given by \(\mathbf{\Phi}_{p}=\mathbf{\Theta}_{p}\mathbf{G}_{p}\mathbf{F}\mathbf{Z}_{p}\) (Dabbech et al., 2022). More specifically, the operator \(\mathbf{\Theta}_{p}\in\mathbb{R}^{M_{p}\times M_{p}}\) is a diagonal matrix encoding the considered data-weighting scheme. 
The sparse matrix \(\mathbf{G}_{p}\in\mathbb{C}^{M_{p}\times N^{p}}\) is the de-gridding matrix, encompassing convolutions between the NUFFT interpolation kernels and the compact \(w\)-kernels correcting for the associated \(w\)-offsets in the Fourier plane. Note that estimates of direction-dependent effects (DDEs) can also be encoded as additional convolutions in the rows of the de-gridding matrix, when available. \(\mathbf{F}\in\mathbb{C}^{N^{\prime}\times N^{\prime}}\) is the Discrete Fourier transform and the operator \(\mathbf{Z}_{p}\in\mathbb{C}^{N^{\prime}\times N}\) encodes the \(w\)-modulation of the \(p^{\text{th}}\)\(w\)-stack, the zero-padding operator for a finer grid of the Fourier plane, and the correction for the convolution with the approximate NUFFT interpolation kernels. ### uSARA algorithm Image formation from the noisy and incomplete RI measurements \(\mathbf{y}\) is an ill-posed inverse problem. Here we consider the unconstrained SARA imaging algorithm from optimisation theory (Terris et al., 2022). The algorithm provides an estimate of the radio sky as the minimizer of an objective function posed as the sum of two terms: a data fidelity term \(f\), emanating from the nature of the noise, and a regularisation term \(r\) encoding a prior knowledge of the image to address the ill-posedness of the inverse problem. The minimisation task is of the form \[\underset{\mathbf{x}\in\mathbb{R}^{N}}{\text{minimise}}\ f(\mathbf{x};\mathbf{y})+ \lambda r(\mathbf{x}), \tag{2}\] where \(\lambda>0\) is the regularisation parameter controlling the balance between the two terms. Given the Gaussian nature of the noise affecting the RI data, \(f\) is naturally set to \(f(\mathbf{x};\mathbf{y})=1/2\big{\|}\mathbf{\Phi}\mathbf{x}-\mathbf{y}\big{\|}_{2}^{2}\), with \(\big{\|}_{2}\) denoting the \(\ell_{2}\) norm of its argument vector. The uSARA regularisation function \(r\) is a multi-term non-differentiable function composed of a non-convex log-sum function enforcing average sparsity in an overcomplete dictionary \(\mathbf{\Psi}\in\mathbb{R}^{N\times B}\), consisting in the normalised concatenation of nine orthogonal bases, and a non-negativity constraint (Carrillo et al., 2012; Terris et al., 2022), which reads \[r(\mathbf{x})=\rho\sum_{j=1}^{B}\log\left(\rho^{-1}\left|\left(\mathbf{\Psi}^{\dagger }\mathbf{x}\right)_{j}\right|+1\right)+t_{\mathbb{R}^{N}_{\mathbf{x}}}(\mathbf{x}), \tag{3}\] where \((.)_{j}\) denotes the \(j^{\text{th}}\) coefficient of its argument vector, and \((.)^{\dagger}\) stands for the adjoint of its argument operator. The parameter \(\rho>0\) prevents the argument of the logarithmic from reaching zero values and can be set to the estimate of the noise level in the sparsity domain (Thouvenin et al., 2022). The non-negativity constraint is encoded via the indicator function of the real positive orthant, given by \(t_{\mathbb{R}^{N}_{\mathbf{x}}}(\mathbf{x})=+\infty\) if \(\mathbf{x}\notin\mathbb{R}^{N}_{\mathbf{x}}\) and \(0\) otherwise. As such, the resulting minimisation task is non-convex and is addressed in an iterative manner. 
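As a purely numerical illustration of the objective in (2)-(3) (a minimal sketch, not the authors' implementation), the following Python fragment evaluates the data-fidelity and log-sum terms on a toy problem: a small random complex matrix stands in for the RI measurement operator \(\mathbf{\Phi}\), the identity stands in for the SARA dictionary \(\mathbf{\Psi}^{\dagger}\), and all sizes and parameter values are assumptions chosen for the sketch.

```python
# Toy illustration of the uSARA objective f(x; y) + lambda * r(x).
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 256                      # toy image and data dimensions
Phi = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(M)
x_true = np.maximum(rng.normal(size=N), 0.0)          # non-negative toy "image"
tau = 0.01                                            # assumed noise level
y = Phi @ x_true + tau * (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

def data_fidelity(x, y, Phi):
    """f(x; y) = 1/2 * || Phi x - y ||_2^2."""
    r = Phi @ x - y
    return 0.5 * np.vdot(r, r).real

def log_sum_prior(x, Psi_T=np.eye(N), rho=1e-3):
    """Log-sum term of r(x) in (3); +inf outside the non-negative orthant."""
    if np.any(x < 0):
        return np.inf
    coeffs = Psi_T @ x
    return rho * np.sum(np.log(np.abs(coeffs) / rho + 1.0))

lam = 1e-2                           # illustrative regularisation parameter
x0 = np.zeros(N)
print("objective at x = 0:", data_fidelity(x0, y, Phi) + lam * log_sum_prior(x0))
print("objective at ground truth:", data_fidelity(x_true, y, Phi) + lam * log_sum_prior(x_true))
```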
More specifically, the problem is approached by solving a sequence of surrogate convex minimisation tasks whereby \(r\) is replaced by a convex regularisation function \(g\) of the form \(g(\mathbf{x})=\|\mathbf{\Psi}\mathbf{W}^{\dagger}\mathbf{x}\|_{1}+t_{\mathbb{R}^{N}_{\mathbf{x}}} (\mathbf{x})\) substituting the log-sum function with the \(\ell_{1}\) function, denoted by \(\|\cdot\|_{1}\), and weighted by the diagonal matrix \(\mathbf{W}\in\mathbb{R}^{B\times B}\). Each of the surrogate weighted minimisation tasks is the form \[\underset{\mathbf{x}\in\mathbb{R}^{N}}{\text{minimise}}\ f(\mathbf{x};\mathbf{y})+\lambda g (\mathbf{x}), \tag{4}\] where the convex and non-differentiable function \(g\) is redefined through the update of its underlying weighting matrix \(\mathbf{W}\) from the solution of its preceding task (Carrillo et al., 2012; Terris et al., 2022). The convex minimisation task is solved approximately (_i.e._ for a finite number of iterations \(K>0\)) using the forward-backward (FB) iterative scheme (Repetti and Wiaux, 2020; Terris et al., 2022), and the overall procedure benefits from convergence guarantees (Repetti and Wiaux, 2021). The FB iterative scheme relies on two-step image updates: a 'forward' gradient descent step calling for the gradient of the data fidelity function \(f\), given by \(\nabla f(\mathbf{x})=\operatorname{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\}\mathbf{x}- \operatorname{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{y}\}\), followed by a 'backward' denoising step using the proximal operator of the convex regularisation function \(g\)(see Terris et al., 2022, for the mathematical details) such that for any \(k\in\mathbb{N}\) \[\mathbf{x}^{(k+1)}=\operatorname{prox}_{\lambda g}\left(\mathbf{x}^{(k)}-\gamma\nabla f (\mathbf{x}^{(k)})\right). \tag{5}\] Let \(L>0\) denote the Lipschitz constant of \(\nabla f\) given by \(L=\|\operatorname{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\}\|_{\mathbb{S}}\), with \(\|.\|_{\mathbb{S}}\) denoting the spectral norm of its argument operator. The step-size \(\gamma\) satisfies the condition \(0<\gamma<2/L\) to guarantee the convergence of the iterative scheme. Finally, the proximal operator \(\operatorname{prox}_{\lambda g}\), not benefiting from closed-form solutions, is computed sub-iteratively, involving soft-thresholding operations in the sparsity dictionary \(\mathbf{\Psi}\) by \(\gamma\lambda\). ### A scalable and automated imaging framework To address the scalability requirement to large data and image sizes, our imaging framework provides automated parameter choice and parallel and memory-efficient models of the operators and functions involved (Dabbech et al., 2022), summarised in what follows. Regularisation parameter selection.The choice of the regularisation parameter \(\lambda\), balancing data fidelity and image regularisation, is of paramount importance as it affects the solution of the minimisation task (2). Considering \(\sigma>0\), the estimate of the standard deviation of the image domain noise, Terris et al. (2022) proposed to equate the soft-thresholding parameter \(\gamma\lambda\), involved in the denoising step, to the estimate of the standard deviation of the sparsity domain noise given by \(\sigma/3\) (the factor three emanates from the normalisation of the sparsity dictionary). In the case when a data-weighting scheme is adopted to compensate for the non-uniform density profile of the sampling (_e.g._ Briggs weighting), additional correlation is induced in the image domain noise. 
Under this consideration, \(\sigma\) can be obtained as \[\sigma=\eta\tau/\sqrt{2L},\ \text{with}\ \eta=\|\operatorname{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{\Theta}^{2}\mathbf{\Phi}\}\|_{\mathbb{S}}^{1/2}/\sqrt{L}, \tag{6}\] where the data-weighting operator \(\mathbf{\Theta}\in\mathbb{R}^{M\times M}\) is a block-diagonal matrix, whose diagonal blocks are the data-weighting matrices \(\left(\mathbf{\Theta}_{p}\right)_{1\leq p\leq P}\). The correction factor \(\eta\) reduces to one otherwise. In our experiments, Briggs weighting was applied to the data in imaging, and the resulting values of \(\eta\) were found to be in the interval \([0.3,0.6]\). The regularisation parameter \(\lambda\) can be set around \[\lambda\simeq\tau\|\operatorname{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{\Theta}^{2}\mathbf{\Phi}\}\|_{\mathbb{S}}^{1/2}/(3\sqrt{2}\gamma L), \tag{7}\] with the step size fixed to \(\gamma=1.98/L\), ensuring the convergence of the FB algorithm. Finally, the parameter \(\rho\) involved in (3) is typically set to the estimate of the standard deviation of the sparsity domain noise, \(\rho=\sigma/3\).

**Denoiser Faceting.** In light of the sub-iterative nature of the denoising operator underpinning uSARA, distribution and parallelisation of the sparsity operator \(\boldsymbol{\Psi}\) are required, not only to handle the large image dimensions of interest but also to accelerate the computation. For this aim, we have adopted a faceted implementation of the operator \(\boldsymbol{\Psi}\) (Prusa, 2012), enabling image facet denoising. The number of facets \(F\) is derived from the number of CPU cores of the computing architecture on which the algorithm is to be deployed, and constraints on image facet dimensions from the wavelet transforms underpinning the sparsity operator \(\boldsymbol{\Psi}\).

**Automated parallelisation of the measurement operator.** Three key features are supported in our implementation of the measurement operator to ensure its scalability to large data sizes (Dabbech et al., 2022). Firstly, the choice of the number of \(w\)-stacks, \(P\), defining the decomposition of the measurement operator into the operators \(\boldsymbol{\Phi}_{p}\), is automated via a planning strategy taking into consideration the computational cost derived from the complexity of the application of the measurement operator \(\boldsymbol{\Phi}\) and the memory constraints of the computing architecture. Secondly, memory-efficient encoding of the resulting operators \(\boldsymbol{\Phi}_{p}^{\dagger}\boldsymbol{\Phi}_{p}\), called for in FB, can be achieved through a data dimensionality reduction functionality via visibility-gridding, whereby the de-gridding and gridding operations underpinning \(\boldsymbol{\Phi}_{p}^{\dagger}\boldsymbol{\Phi}_{p}\) are explicitly encoded via the sparse holographic matrices \(\boldsymbol{\mathsf{H}}_{p}=\boldsymbol{\mathsf{G}}_{p}^{\dagger}\boldsymbol{\mathsf{G}}_{p}\). By doing so, the dimensionality of the measurement operator is effectively driven solely by the image size. The feature is enabled when the memory required to host the de-gridding matrices exceeds the available resources. Thirdly, further decomposition of each operator \(\boldsymbol{\Phi}_{p}^{\dagger}\boldsymbol{\Phi}_{p}\) into smaller blocks is enabled via a data-clustering step such that each data block corresponds to the aggregation of radially-neighbouring Fourier modes, identified under memory constraints.
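As a hedged illustration of how these quantities can be computed in practice, the sketch below estimates the Lipschitz constant \(L\) with power iterations and then sets \(\gamma\), \(\sigma\), \(\lambda\) and \(\rho\) following (6) and (7). Here `phi`, `phi_adjoint`, the data-weighting array `theta_weights` and the data-domain noise standard deviation `tau` are placeholders, not our actual implementation.

```python
import numpy as np

def spectral_norm(op, op_adj, shape, n_iter=50, seed=0):
    """Power-method estimate of ||Re{A^dagger A}||_S for a linear operator A."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    val = 0.0
    for _ in range(n_iter):
        x = np.real(op_adj(op(x)))    # apply Re{A^dagger A}
        val = np.linalg.norm(x)       # converges to the largest eigenvalue
        x /= val
    return val

# Placeholder usage, assuming phi/phi_adjoint from the measurement-operator sketch
# and tau (data-domain noise std) known from the measurement sets:
# L     = spectral_norm(phi, phi_adjoint, (n, n))
# eta   = np.sqrt(spectral_norm(lambda x: theta_weights * phi(x),
#                               lambda y: phi_adjoint(theta_weights * y), (n, n)) / L)
# gamma = 1.98 / L                       # step size, satisfies 0 < gamma < 2/L
# sigma = eta * tau / np.sqrt(2.0 * L)   # image-domain noise level, Eq. (6)
# lam   = sigma / (3.0 * gamma)          # heuristic so that gamma * lam = sigma / 3, Eq. (7)
# rho   = sigma / 3.0                    # log-sum smoothing parameter in Eq. (3)
```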
The number of CPU cores allocated for the forward step of the FB algorithmic structure is derived from the number of identified data clusters. These computing resources are initially used to compute the de-gridding matrices (\(\boldsymbol{\mathsf{G}}_{p}\)) or the holographic matrices (\(\boldsymbol{\mathsf{H}}_{p}\)), underpinning the operator \(\boldsymbol{\Phi}^{\dagger}\boldsymbol{\Phi}\) (only once), and are later used to host them and apply them at each FB iteration. ## 3 Data, Imaging, and Analysis In this section, we provide a full description of the scrutinised data, including pre-processing and calibration steps. We provide the imaging settings of uSARA and uSClean, and outline the computing architecture and resources required to run both algorithms. Finally, we provide procedures for a coherent comparative analysis. ### ASKAP Observations ASKAP consists of 36 12-metre parabolic dish antennas, spanning six kilometres, at the Murchison Radio Observatory in Western Australia. ASKAP's original design includes Phased Array Feeds (PAFs; Hay et al., 2006) at the focus of each antenna, built as dual-polarisation chequerboard grids sensing signals in the frequency range of 700-1800 MHz. Signals received by each of the 188-element-sensor grids are cross-correlated to simultaneously form 36 separate primary beams (or pointings) on the sky (Hotan et al., 2014; McConnell et al., 2016). This PAF technology gives ASKAP an instantaneous field-of-view (FoV) of 30-square-degrees, making it the most rapid surveying radio telescope in the world (DeBoer et al., 2009). ASKAP's EMU Survey will survey the radio continuum over the entire southern sky (up to a northern declination of \(+30^{\circ}\)) at a resolution of \(\sim 10\) arcsec with sensitivities reaching \(\sim 10\mu\) Jy beam\({}^{-1}\), and is projected to detect more than 70 million galaxies. Science goals of the EMU collaboration include testing fundamental models for dark energy (_e.g._ Raccanelli et al., 2012), detecting the warm-hot intergalactic medium (_e.g._ Hodgson et al., 2020), tracing active galactic nuclei (AGN) and star formation up to high-redshifts (_e.g._ Mancuso et al., 2017), and mapping the radio continuum of the Galactic Plane and Centre (_e.g._ Riggi et al., 2021). The ASKAP Early Science Broadband Survey (Project: AS034, PI: Lisa Harvey-Smith) began in 2018 with the aim to test observations using the full ASKAP array while covering scientifically interesting areas of sky, such as the Galaxy And Mass Assembly 23-hour field (GAMA 23). The EMU Pilot Survey (EMU-PS; Norris et al., 2021, Project: AS101, PI: Ray Norris), centred on RA, Dec: 21h00m00s, -.55\({}^{\circ}\)00\({}^{\prime}\)00\({}^{\prime\prime}\), was carried out in 2019 covering an area of 270 deg\({}^{2}\) overlapping a portion of sky covered by the first data release of the Dark Energy Survey (DES; Abbott et al., 2018). From these early ASKAP data releases, we have selected three individual Scheduling Block (SB) beam observations covering extended, morphologically complex radio sources which represent robust test cases for image reconstruction with uSARA. ASKAP SBs contain 36 measurement sets each, corresponding to the 36 primary beam pointings. Therefore, to ensure maximum signal-to-noise, we have chosen the single beam measurement sets which were most closely centred on our primary targets of interest (beam 15 of SB8275, beam 12 of SB9351, and beam 35 of SB9442). 
Beam observations for each SB are carried out with the same specifications over a 10-hour total integration time. The observing band varies slightly between the selected Early Science data (SB8275 - central frequency 1013 MHz) and EMU-PS data (SB9351 and SB9442 - central frequency 943 MHz), yet both have instantaneous bandwidths of 288 MHz with physical channel intervals of 1 MHz. See Table 1 for further details on the selected observations. The three selected SB-beam measurement sets, containing calibrated visibilities, were generated through the ASKAPsoft pipeline (Hotan et al., 2021). For direction-independent calibration, ASKAPsoft includes bandpass calibration using the standard calibrator PKS B1934-638 - observed for five minutes in each beam before or after the science target. Further calibration includes a cycle of phase-only self-calibration for every 1 MHz of data. Each beam observation in an SB is calibrated and imaged independently, and as a final step, ASKAPsoft stitches images together to form mosaics covering the full 30-square-degree FoV. For our imaging purposes, we used the ASKAP data products after ASKAPsoft processing, with no further flagging or calibration. Prior to imaging, we shifted the pointing centres of our selected beam observations by their beam offsets. Although most RI packages assume a Stokes parameter \(I\) (intensity) of \(I=(XX+YY)/2\) (where \(X\) and \(Y\) represent the instrumental polarisations), ASKAPsoft uses the IAU (International Astronomical Union) definition \(I=XX+YY\). We did not apply a factor two correction for the IAU intensity convention in any of our final image reconstructions; therefore, flux densities in our images are halved when compared to the values found in ASKAPsoft mosaic images. ### Imaging Settings To perform monochromatic sub-band imaging, we split the selected wide-band data into effective channels, or spectral windows (SPWs), across the full frequency band. More precisely, data from all three fields were binned uniformly into eight spectral windows with a bandwidth of 36 MHz each, under the assumption of nearly flat spectral behaviour within a spectral window. Data sizes per spectral window range from \(\sim 0.8\) to \(\sim 1.2\) GB. To further demonstrate the scalability of uSARA, we also imaged the third field (SB9442-35) over the full frequency band (covering 288 MHz) to form a single monochromatic image from \(\sim 7.5\) GB of data. We chose not to generate full-band monochromatic images for the other two fields (SB8275-15 and SB9351-12) since they host sources known to exhibit very steep spectral behaviour (specifically, the galaxy clusters Abell 3391 and SPT-CL 2023-5535; Bruggen et al., 2021; Hyeong-Han et al., 2020) and because both fields contain bright, large-scale artefacts. According to the ASKAP Science Observation Guide1, recommended image size is set by the full width at half maximum (FWHM) of the primary beam. ASKAP beam pointings have a primary beam FoV approximated by a circular Gaussian with FWHM of \(1.09\lambda_{\mathrm{obs}}/D\)(Hotan et al., 2021), where the observing wavelength, \(\lambda_{\mathrm{obs}}\), is about 0.3 meters (at 1000 MHz) and \(D\) is the dish diameter of a single ASKAP dish (12 m). We calculated the FWHM of a given beam pointing to be \(\sim 1.56^{\circ}\) at the middle of the frequency band. 
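As a quick numerical check of the quoted value, the following sketch evaluates the primary-beam FWHM \(1.09\lambda_{\rm obs}/D\) using the formula given above; the printed value is approximate.

```python
import numpy as np

C = 299_792_458.0            # speed of light [m / s]
D = 12.0                     # ASKAP dish diameter [m]

def primary_beam_fwhm_deg(freq_hz):
    """Approximate ASKAP primary-beam FWHM: 1.09 * lambda_obs / D, in degrees."""
    lam = C / freq_hz        # observing wavelength [m]
    return np.degrees(1.09 * lam / D)

# A ~0.3 m wavelength at 1000 MHz gives a FWHM of ~1.56 deg, as quoted above.
print(f"{primary_beam_fwhm_deg(1.0e9):.2f} deg")
```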
For the first two selected ASKAP fields (SB8275-15 and SB9351-12), we noted the presence of bright sources lying just outside of the primary beam that created sidelobes, and decided to image a FoV covering twice the FWHM. For the third field (SB9442-35), we did not find any bright sources directly outside of the primary FWHM and chose to image a FoV covering 1.6 times the FWHM. In both cases, the imaged FoV is well beyond the FWHM of the primary beam. With a maximum baseline of 6268 m, and a native instrumental resolution between 9 and 12 arcsec over the bandwidth, we selected a cell size of 2.2 arcsec pixel\({}^{-1}\), corresponding to a super-resolution factor of 2 at the highest frequency. Under these considerations, the reconstructed images of fields SB8275-15 and SB9351-12 are \(5500\times 5500\) pixels in size, and those of the field SB9442-35 are \(4096\times 4096\) pixels. Unlike the output image of the CLEAN-based algorithm - by design restricted to the instrumental resolution through the application of the restoring beam to its estimated non-physical model image - the uSARA image retains super-resolution at the pixel size. On a final note, although the FWHM of the primary beam is used to determine the imaged FoVs, no primary beam correction is applied to the reconstructions in this work.

Footnote 1: [https://confluence.csiro.au/display/askapsst/ASKAP+Survey-Science](https://confluence.csiro.au/display/askapsst/ASKAP+Survey-Science)

Systematic imaging was carried out on the three selected ASKAP fields with both uSARA and WSClean using the same imaging settings where applicable. A summary of the imaging settings is listed in Table 2. Our parallelised and automated imaging framework, implemented in MATLAB, and the C++ software WSClean were both run on Cirrus2, a UK Tier2 high-performance computing (HPC) service, comprising 280 compute nodes. Each node has 256 GB of memory and 36 CPU cores (with two hyperthreads each). Parameter choice in both algorithms is summarised in the following paragraphs.

Footnote 2: [http://www.cirrus.ac.uk](http://www.cirrus.ac.uk)

Figure 1: SB8275-15 – WSClean: Full FoV image covering the merging cluster system Abell 3391-95, at the first sub-band (SPW:1, centred at 887 MHz). This monochromatic image is a WSClean restored image with a synthesised beam of \(9.4\times 10.9\) arcsec and rms noise of \(\sigma_{\rm meas}\approx 50\)\(\mu\)Jy beam\({}^{-1}\) (2 \(\mu\)Jy pixel\({}^{-1}\)). Panel (a) centred on the FR I radio galaxy in A3391; panel (b) centred on a cluster member FR II radio galaxy; (c) panels centred on the FR I and diffuse source in A3395. Middle (c) panel: r-band optical image from DES overlaid with the WSClean restored image, demarcated by blue contours at levels \(\{2^{m+1}\}_{1\leq m\leq 10}\)\(\mu\)Jy pixel\({}^{-1}\). Rightmost (c) panel: Spectral index map obtained with the first six sub-band images of WSClean after smoothing with a common circular Gaussian beam of 20 arcsec. In Wilber et al. (2022) are provided all sub-band images combined into the GIF ‘SB8275-15_WSClean’, and the spectral index maps of Abell 3395 obtained with WSClean and uSARA in the GIF ‘SpectralIndexMap_Abell_3395’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_Abell_3395_colorblind_friendly’.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
**A3395 Phoenix** & \(S_{\rm 887\ MHz}\) & \(S_{\rm 923\ MHz}\) & \(S_{\rm 959\ MHz}\) & \(S_{\rm 995\ MHz}\) & \(S_{\rm 1031\ MHz}\) & \(S_{\rm 1067\ MHz}\) & \(S_{\rm 1103\ MHz}\) & \(S_{\rm 1139\ MHz}\) \\
\hline
uSARA model & 29.3 & 24.3 & 14.5 & 10.4 & 13.5 & 15.2 & 7.3 & 7.4 \\
\hline
WSClean restored image & 30.2 & 25.1 & 25.2 & 20.0 & 25.7 & 21.6 & 10.0 & 8.3 \\
\hline
WSClean smoothed model & 27.2 & 22.2 & 23.8 & 18.6 & 23.4 & 19.7 & 9.5 & 7.1 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Integrated flux density values in [mJy] of the diffuse phoenix source in Abell 3395 for each SPW imaged with uSARA and WSClean.

Figure 2: SB8275-15 – uSARA: Full FoV image covering the merging cluster system Abell 3391-95, at the first sub-band (SPW:1, centred at 887 MHz). This monochromatic image is a uSARA model with a pixel resolution of 2.2\(\times\)2.2 arcsec. Panels are the same as described in Figure 1. Middle (c) panel: r-band optical image from DES overlaid with the uSARA model image, demarcated by blue contours at levels \(\{2^{m}\}_{0\leq m\leq 10}\)\(\mu\)Jy pixel\({}^{-1}\). Rightmost (c) panel: Spectral index map obtained with the first six sub-band images of uSARA after smoothing with a common circular Gaussian beam of 5 arcsec. In Wilber et al. (2022) are provided all sub-band images combined into the GIF ‘SB8275-15_uSARA’, and the spectral index maps of Abell 3395 obtained with uSARA and WSClean in the GIF ‘SpectralIndexMap_Abell_3395’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_Abell_3395_colorblind_friendly’.

**WSClean parameters.** Briggs data weights generated by WSClean were also utilised in uSARA imaging. The number of \(w\)-stacks considered by WSClean is set automatically based on the theoretical bound derived in Offringa et al. (2014) and the available compute resources (see Sec. 6 for more details). For future reference, we note that CLEAN reconstructions are the so-called restored images, obtained as the sum of the non-physical model image convolved with the associated restoring beam, and the residual image. To support our flux density analysis, we also consider the WSClean model images convolved with the restoring beam, referred to as smoothed model images.

**uSARA parameters.** With the uSARA algorithm being implemented in MATLAB, data and associated information (including the standard deviation of the data domain noise) were extracted from the measurement set tables as a collection of MAT files using a dedicated Python script relying on the 'Pyxis' library (standing for Python Extensions for Interferometry Scripting, part of the MeqTrees software package; Noordam, J. E. & Smirnov, O. M. 2010). We recall that Briggs data weights generated by WSClean were also considered in uSARA imaging. Concerning the uSARA measurement operator, the number of \(w\)-stacks used in the different imaging experiments was set in an automated manner, via the planning step supported by our imaging framework (see Tables 7-9 for details). In the reconstruction of the sub-band images of all three fields, the operator \(\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\) was encoded via the underpinning sparse de-gridding matrices \((\mathbf{G}_{p})\). For the full-band monochromatic imaging experiment of the field SB9442-35, the resulting large data size triggered the dimensionality reduction feature. The operator \(\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\) was therefore encoded via its underpinning holographic matrices \((\mathbf{\mathsf{H}}_{p})\), reducing the memory requirements to host it by nearly a factor 5.
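As a toy illustration of the visibility-gridding reduction just described, the sketch below precomputes holographic matrices \(\mathbf{\mathsf{H}}_{p}=\mathbf{\mathsf{G}}_{p}^{\dagger}\mathbf{\mathsf{G}}_{p}\) once, so that \(\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\) can be applied without ever touching the visibility vector. It reuses the placeholder operators from the measurement-operator sketch above and is not the actual MATLAB implementation.

```python
import numpy as np

# Assumes the toy G (de-gridding) matrices, Z_apply, w_mod, n, n_fine and P
# from the earlier measurement-operator sketch.
H = [Gp.conj().T @ Gp for Gp in G]        # H_p = G_p^dagger G_p, sparse N' x N'

def phi_dagger_phi(x):
    """Apply Phi^dagger Phi via the holographic matrices (no visibility vector needed)."""
    out = np.zeros((n, n), dtype=complex)
    for p in range(P):
        xf = np.fft.fft2(Z_apply(x, p)) / n_fine            # F Z_p x
        hf = (H[p] @ xf.ravel()).reshape(n_fine, n_fine)     # H_p (F Z_p x)
        out += np.conj(w_mod[p]) * (np.fft.ifft2(hf) * n_fine)[:n, :n]  # Z_p^dagger F^dagger
    return out
```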
The regularisation parameter \(\lambda\) was fixed to the heuristic value proposed in (7) for the full-band image of the field SB9442-35. However, some adjustment was found necessary to achieve a high reconstruction quality in terms of resolution and sensitivity for all sub-band images of the three selected fields, whereby \(\lambda\) was set from 0.7 to 0.8 times the heuristic. Higher values of the regularisation parameter resulted in somewhat smoother reconstructions and non-recovery of some fainter point sources, whereas lower values led to images highly contaminated by calibration artefacts. The applied adjusting factor may be partly attributed to the imperfection of the DIE calibration and the lack of DDE calibration, affecting the accuracy of the measurement operator. Nonetheless, it is generally consistent with the findings of the theoretical study of uSARA's heuristic in the context of simulated RI data (Terris et al., 2022). Finally, the stopping criteria of uSARA were set to their default values, including a maximum of 10 re-weighted minimisation tasks, and a relative variation between the image iterates of 0.0005. ### Quantitative Analysis Focusing on the target sources of the selected observations, we provide their flux measurements and in-band spectral index maps obtained by the two imaging algorithms. Estimated model images from uSARA are in units of [Jy pixel\({}^{-1}\)], whereas WSClean restored images are, by design, in units of [Jy beam\({}^{-1}\)]. For the sake of comparison, we normalised WSClean restored images by the area of the associated restoring Gaussian beam, denoted by \(A_{\text{beam}}\) and given by \[A_{\text{beam}}=\frac{\pi\times B_{\text{MAJ}}\times B_{\text{MIN}}}{4\times \log 2}, \tag{8}\] where \(B_{\text{MAJ}}\) and \(B_{\text{MIN}}\) are the respective major and minor axes of the restoring beam in pixel units (_i.e._ normalised by the cell size of 2.2 arcsec pixel\({}^{-1}\)). Diffuse emission of particular interest in our study presents a complex morphology with edges often blended into the background noise, as seen in WSClean maps. As is common practice, we measure the total flux of diffuse structure within manually generated regions which roughly follow \(\sim 2\sigma_{\text{meas}}\) contours of the source, where \(\sigma_{\text{meas}}\) is the root-mean-square (rms) noise measured in a nearby region void of sources from the WSClean restored map. Regions were hand-drawn in the visualisation software SAOImageDS9 (Joye and Mandel, 2003) to closely follow the contours of recovered signal in both the uSARA and WSClean maps, such that the same region was used when measuring flux density of the same diffuse source. Flux density measurements from uSARA images are expected to be lower than those measured from the WSClean restored images, due to the bias introduced from the WSClean residual map. For a more accurate comparison of flux density, we also provide measurements from WSClean smoothed model images. Note that error measurements on flux densities are not reported since the Early Science and Pilot Survey ASKAP observations are not yet validated against standard flux catalogues. All reported flux measurements and statistics were obtained using the SAOImageDS9 software. Spectral index maps were created to showcase how sources of interest change over their morphology in electron energy distribution and consequently, in their spectral energy distribution. 
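Before turning to the spectral-index procedure detailed next, here is a minimal sketch of the flux-scale handling just described: converting a restored map in Jy beam\({}^{-1}\) to Jy pixel\({}^{-1}\) via the beam area of equation (8), and integrating the flux density of a source within a region mask. The array names and the boolean mask are placeholders standing in for the actual maps and the hand-drawn DS9 regions.

```python
import numpy as np

def beam_area_pixels(bmaj_arcsec, bmin_arcsec, cell_arcsec=2.2):
    """Gaussian restoring-beam area in pixel units, Eq. (8)."""
    bmaj = bmaj_arcsec / cell_arcsec
    bmin = bmin_arcsec / cell_arcsec
    return np.pi * bmaj * bmin / (4.0 * np.log(2.0))

def integrated_flux_jy(image, region_mask, per_beam=False, a_beam=None):
    """Sum pixel values inside `region_mask`.

    For a uSARA model image (Jy/pixel), the sum is the integrated flux directly;
    for a WSClean restored image (Jy/beam), the sum must be divided by the beam area.
    """
    total = float(image[region_mask].sum())
    return total / a_beam if per_beam else total

# Illustrative usage with placeholder arrays (not actual data):
# a_beam    = beam_area_pixels(9.4, 10.9)   # e.g. the 9.4 x 10.9 arcsec SPW:1 beam of SB8275-15
# s_usara   = integrated_flux_jy(usara_model, mask)                       # [Jy]
# s_wsclean = integrated_flux_jy(wsclean_restored, mask, True, a_beam)    # [Jy]
```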
Firstly, the sub-band maps were smoothed via convolution with a 2D circular Gaussian kernel with axes of 5 arcsec for uSARA images and 20 arcsec for WSClean images. The spectral index maps were then obtained by fitting a first-order polynomial to the function \(\log(S_{\nu})=-\alpha\log(\nu)\), where \(S_{\nu}\) is the flux density for a given beam area at a given frequency \(\nu\) and \(\alpha>0\) is the spectral index. Only the first six sub-bands are considered when generating these maps since the last two sub-bands were consistently found to recover less diffuse signal for primary targets of interest (possibly attributed to the steepness of their spectra). ## 4 Results In this section, we showcase high-resolution, high-fidelity images of our three selected fields produced by the uSARA algorithm and compare them to images made with multi-scale WSClean. Select images are presented in Figures 1 - 6, showing the full imaged fields-of-view of our chosen ASKAP observations, and include zoomed-in views focusing on complex radio emission of interest and their associated optical images and spectral index maps. In Wilber et al. (2022), we provide FITS files of all spectral windows of the three selected fields imaged with both algorithms. For each field, images are also combined into animated GIF files to show how the recovered emission changes over the full frequency band. In what follows, we provide a detailed comparison of the morphology and flux density of specific sources between the uSARA and WSClean images. ### First field: SB8275-15 Beam 15 of SB8275 covers a FoV containing the complex merging galaxy cluster system Abell 3391 - Abell 3395. This field has been recently observed with the eROSITA X-ray space telescope where a warm gas bridge has been discovered between the cluster pair as part of a 15 Mpc intergalactic filament (Reiprich et al., 2021). The field also contains multiple bent-tail and Fanaroff-Riley class I and II (FR-I & FR-II; Fanaroff and Riley, 1974) radio galaxies, some belonging to the cluster system. A recent paper utilising mosaic images of SB8275 has confirmed more than 20 giant radio galaxies (at various redshifts) in the 30 deg\({}^{2}\) field (Bufggen et al., 2021). In Figures 1 & 2, we present our images of the full FoV (3.36\({}^{\circ}\)) of the first sub-band (SPW:1) of SB8275-15, imaged with WSClean and uSARA, respectively. Both figures include zoomed-in views of the FR-I in Abell 3391 (a: top right panels), a FR-II cluster member in the east (b: middle right panels), and radio sources in Abell 3395 (c: bottom panels). The FR-I radio galaxies at the centre of Abell 3391 in the north and Abell 3395 in the south (see Table 1 for source names) are reconstructed with superior resolution by uSARA. This is most evident in the appearance of 'braiding' and gaps in the plasma of the jets, which are not resolved in the WSClean map. The FR-I in Abell 3391 is the brightest source in the field with a peak pixel flux of 20 mJy, as measured in the SPW:1 uSARA image, and 12 mJy as measured in the SPW:1 WSClean image. Calibration errors in this field manifest as strong ring-like artefacts emanating from these bright FR I-I radio galaxies. Bruggen et al. (2021) successfully carried out additional direction-dependent calibration to reduce the effect of these large-scale artefacts, however, we note that we performed no such additional calibration prior to imaging these data and therefore the extended radial artefacts remain in our final images. 
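To complement the spectral-index procedure described in the Quantitative Analysis subsection above, the sketch below fits, per pixel, a first-order polynomial in log-log space to a stack of smoothed sub-band maps on a common grid. The smoothing width, brightness threshold and input maps are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_index_map(maps, freqs_hz, smooth_sigma_pix, min_flux=0.0):
    """Per-pixel spectral index alpha, assuming S_nu proportional to nu^(-alpha).

    `maps` is a (n_bands, ny, nx) stack of sub-band images on a common grid;
    each map is first smoothed to a common resolution, then a first-order
    polynomial is fit to log(S) versus log(nu), pixel by pixel.
    """
    smoothed = np.array([gaussian_filter(m, smooth_sigma_pix) for m in maps])
    valid = np.all(smoothed > min_flux, axis=0)          # mask faint/non-positive pixels
    log_nu = np.log10(np.asarray(freqs_hz, dtype=float))
    alpha = np.full(smoothed.shape[1:], np.nan)
    ys = np.log10(smoothed[:, valid])                    # shape (n_bands, n_valid_pixels)
    slope = np.polyfit(log_nu, ys, 1)[0]                 # slope of the first-order fit, per pixel
    alpha[valid] = -slope                                # alpha > 0 for steep spectra
    return alpha

# Example call with placeholder sub-band maps and illustrative centre frequencies:
# alpha = spectral_index_map(sub_band_maps, [887e6, 923e6, 959e6, 995e6, 1031e6, 1067e6], 2.0)
```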
Over the full frequency band, source morphology and the structure of artefacts change per spectral window (see associated GIFs in Wilber et al., 2022). #### 4.1.1 Abell 3395 The southern cluster Abell 3395 is made up of two sub-clusters, with separate X-ray peaks (Reiprich et al., 2021), indicating that this cluster is undergoing its own merger within the larger merging system. West of the central FR 1 in Abell 3395, there is a faint, diffuse source with a dim core connected by arms that extend to the north-west and south-west. As reported by Bruggen et al. (2021), the peak intensity of this dim core does not clearly coincide with a host galaxy visible in optical maps, and therefore the source can not be classified as a typical radio galaxy associated with a host AGN. The diffuse source is possibly a so-called radio 'phoenix' (see Kempner et al., 2004, for classification), re-ignited fossil plasma from past AGN activity which is no longer active. In the middle bottom panels of Figures 1 & 2, WSClean and uSARA images are overlaid as contours on an _x_-band optical image from DES DR1 (Abbott et al., 2018). As seen in these optical overlays, a cluster member galaxy sits just south of the dim radio core, at RA, Dec 06h25m09.95s, -54\({}^{\circ}\)30\({}^{\prime}\)34.4", raising the possible scenario that old AGN emission from this galaxy has drifted or has been disturbed and shifted by not gas in the cluster environment, leading to the observed faint emission. The structure of this phoenix candidate appears more clearly defined in our uSARA image, while the edges are much more blended into background noise in the WSClean image. Most notably, the north-west and south-west limbs of the phoenix appear to pop out in the uSARA reconstruction, although they appear very faint and undefined in the WSClean map. We measure the flux density of this candidate phoenix source using identical polygonal regions hand-drawn to closely follow the total recovered signal in each of the uSARA and WSClean sub-band images, such that the same region is used to measure between the two imaging algorithms. In the WSClean map of SPW:1, this polygonal region closely traces the 2\(\sigma_{\rm meas}\) contour line, where \(\sigma_{\rm meas}=2\)\(\mu\)Jy pixel\({}^{-1}\). Since the morphology of the phoenix source changes dramatically over the frequency band, a polygonal region was drawn for each spectral window. Interestingly, in the uSARA map of SPW:1 we find that the border of the phoenix's recovered signal can be traced by a \(\sim 1\)\(\mu\)Jy pixel\({}^{-1}\) contour level (see middle bottom panel of Figure 2), well below the estimated standard deviation of the noise in the image domain \(\sigma\) (see Eq. 6) calculated as \(\theta\)\(\mu\)Jy pixel\({}^{-1}\). This finding remains consistent through subsequent spectral windows, indicating that the uSARA algorithm successfully recovers real signal below the estimated noise levels. In the WSClean maps, due to the blending of diffuse emission with background noise, the border of the phoenix is more clearly defined by contour lines at 2 to 3\(\sigma_{\rm meas}\). For each sub-band, the measured flux densities from both uSARA and WSClean images are listed in Table 3. For the first two spectral windows, uSARA flux measurements of the candidate phoenix are greater; however, for all subsequent spectral windows, the WSClean flux measurements are consistently greater. 
This is likely due to the fact that the measured flux density of this region in the WSClean map is also integrated over the noise, which may increase for higher spectral windows and is amplified by artefacts emanating from the two bright FR I sources in the field. Indeed, lower flux densities are measured from the WSClean smoothed model images, bringing them closer to uSARA values, particularly at the lower end of the frequency band. For such a faint source, we can see how the flux measurement from WSClean restored images can be easily overestimated when mixed with a noisy background signal. As apparent in Table 3, the flux of the phoenix source reconstructed by uSARA drops off dramatically as the frequency increases, indicating a steep spectral index (\(\alpha>1\)), as confirmed in Bruggen et al. (2021). However, this fading over the frequency band is less dramatic in our WSClean results, indicating that WSClean may be possibly biased when wide-band imaging is deactivated. Comparing the spectral index maps of the phoenix (shown in the bottom right panels of Figures 1 & 2), the general trend of steepening over the source morphology is similar. In the uSARA map, a steeper spectral index is seen in the phoenix's north-west limb. Interestingly, both spectral index maps show the phoenix hosting a steep core (\(\alpha\sim 1.5\)) surrounded by a halo of flatter emission (\(\alpha<1\)), in contrast to the total intensity. This steep core is more clearly defined in the uSARA spectral index map. The fact that the brightest portion of the core is steeper in its spectral index than the surrounding fainter emission provides further evidence that this source may be an AGN remnant or phoenix. The emission around the potentially dormant core may exhibit a flatter spectral index because it has undergone gentle re-energisation (_e.g._ de Gasperin et al., 2017) from turbulence or small-scale shocks in the intracluster medium. Likewise, gentle re-energisation may explain why the ultra-steep emission in the north-west and south-west limbs is visible only at the lower end of the band (_i.e._ subtle re-brightening of old AGN emission). ### Second field: SB9351-12 Beam 12 of SB9351 covers a field containing the massive, merging galaxy cluster SPT-CL J2023-5535 (hereafter SPT2023) near the centre of the FoV and the X-shaped radio galaxy PKS 2014-55 on the western edge of the FoV. Since this beam observation lies on the western border of SB9351's full 30-square-degree field, we were unable to choose another beam observation that hosted the X-shaped radio galaxy closer to the pointing centre. Both the cluster and the radio galaxy have been recently studied: HyeongHan et al. (2020) announced the discovery of a radio halo and radio relic in the merging cluster SPT2023 using the same EMU-PS observation we have selected, and Cotton et al. (2020) used MeerKAT total intensity and polarisation observations to investigate the peculiar X-shaped morphology of PKS 2014-55. In Figures 3 & 4, we present our images of the full FoV (3.36\({}^{\circ}\)) of the first sub-band (SPW:1) of SB9351-12, imaged with WSClean and uSARA, respectively. Both figures include zoomed-in views of the galaxy cluster SPT2023 (a: top right panels), a field of compact and point sources (b: middle right panels), and the X-shaped radio galaxy (c: bottom panels). The bright quasar RX J2024.3-5723 at the southern edge of the pointing introduces radial ring-type artefacts, which propagate up to 1 deg in the field. 
In each of the zoomed-in views, uSARA shows higher resolution and more definition in the reconstruction of both compact and diffuse emission. However, the very faint emission of the radio halo in SPT2023 is not clearly recovered in the uSARA image. It is also apparent that some of the faintest point sources are missing from the uSARA image (see (b) panels of Figures 3 & 4). This loss of the faintest point sources is likely attributed to the choice of the uSARA regularisation parameter. A lower value would enable the recovery of more of these point sources, but would also increase the amplitude of recovered calibration artefacts. #### 4.2.1 X-shaped Radio Galaxy As apparent when comparing panels (c) of Figures 3 & 4, the X-shaped radio galaxy exhibits more clearly defined borders in our uSARA image. In the middle (c) panels of the same figures, WSClean and uSARA emission of the X-shaped radio galaxy are overlaid as contours on an \(r\)-band optical image from DES DR1 (Abbott et al. Figure 3: SB9351-12 -#SClean: Full FoV image covering the merging cluster SPT2023 and the X-shaped radio galaxy PKS 2014-55, at the first sub-band (SPW:1, centred at 817 MHz). This monochromatic image is a WSClean restored image with a synthesised beam of \(10.1\times 16.4\) arcsec and rms noise of \(\sigma_{\rm meas}\approx 60~{}\mu\)Jy beam\({}^{-1}\) (1.6 \(\mu\)Jy pixel\({}^{-1}\)). Panel (a) centred on the merging galaxy cluster SPT2023; panel (b) centred on a field containing compact and point sources; (c) panels centred on the X-shaped radio galaxy PKS 2014-55. Middle (c) panel: \(r\)-band optical image from DES overlaid with the WSClean restored image, demarcated by blue contours at the levels \(\{1.6\times 2^{n}\}_{1\leq n\leq 10}~{}\mu\)Jy pixel\({}^{-1}\). Rightmost (c) panel: spectral index map obtained with the first six sub-band images of WSClean after smoothing with a common circular Gaussian beam of 20 arcsec. In Wilber et al. (2022) are provided all sub-band images combined into the GIF ‘SB9351-12_WSClean’, and the spectral index maps of the X-shaped radio galaxy obtained with WSClean and uSARA in the GIF ‘SpectralIndexMap_PKS_2014_55’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_PKS_2014_55_colorblind_friendly’. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **X-Shaped RG** & \(S_{\rm 817~{}MHz}\) & \(S_{\rm 853~{}MHz}\) & \(S_{\rm 889~{}MHz}\) & \(S_{\rm 925~{}MHz}\) & \(S_{\rm 961~{}MHz}\) & \(S_{\rm 997~{}MHz}\) & \(S_{\rm 1033~{}MHz}\) & \(S_{\rm 1009~{}MHz}\) \\ \hline uSARA model & 758.8 & 680.3 & 591.7 & 526.7 & 454.0 & 335.2 & 286.8 & 268.2 \\ \hline **WSClean** restored image & 756.4 & 721.4 & 629.3 & 546.2 & 511.8 & 409.5 & 369.0 & 352.1 \\ \hline **WSClean** smoothed model & 743.7 & 707.1 & 613.8 & 532.7 & 498.9 & 399.7 & 357.5 & 308.4 \\ \hline \end{tabular} \end{table} Table 4: Integrated flux density values in [mJy] of the X-shaped radio galaxy PKS 2014-55 for each SPW imaged with uSARA and #SClean. Figure 4: SB9351-12 – uSARA: Full FoV image covering the merging cluster SPT2023 and the X-shaped radio galaxy PKS 2014-55, at the first sub-band (SPW:1, centred at 817 MHz). This monochromatic image is a uSARA model with a pixel resolution of \(2.2\times 2.2\) arcsec. Panels are the same as described in Figure 3. Middle (c) panel: r-band optical image from DES overlaid with the uSARA model image, demarcated by blue contours at the levels \(\{2^{m}\}_{1\leq n\leq 10}\)\(\mu\)Jy pixel\({}^{-1}\). 
Rightmost (c) panel: spectral index map obtained with the first six sub-band images of uSARA after smoothing with a common circular Gaussian beam of 5 arcsec. In Wilber et al. (2022) are provided all sub-band images combined into the GIF ‘SB9351–12_uSARA’, and the spectral index maps of the X-shaped radio galaxy obtained with uSARA and #SClean in the GIF ‘SpectralIndexMap_PKS_2014_55’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_PKS_2014_55_colorblind_friendly’. 2018). Again, we find that the border of the recovered uSARA signal of the X-shaped radio galaxy traces a contour level at \(\sim 1\)\(\mu\)Jy pixel\({}^{-1}\), well below the estimated standard deviation of the image noise for this sub-band: \(\sigma=6\)\(\mu\)Jy pixel\({}^{-1}\). In contrast, the diffuse edges of the east and west wings of the X-shaped radio galaxy blend into background noise in the WSClean map, such that the border is more clearly defined by \(3\sigma_{\rm meas}\) contour lines, where \(\sigma_{\rm meas}=1.6\)\(\mu\)Jy pixel\({}^{-1}\). The total flux density of the radio galaxy is measured by summing the integrated flux density from three separate regions: the east wing, the core, and the west wing. The totalled flux density measurements from hand-drawn polygon regions for the lobes (roughly tracing emission bounded by the \(2\sigma_{\rm meas}\) contour line in WSClean images) and customised ellipse regions3 for the core are listed in Table 4. The polygonal regions covering PKS 2014-55 were modified per each spectral window to more accurately follow the source morphology over the frequency band; however, identical regions were used to measure between the two imaging algorithms. As recorded in Table 5, the flux density falls off as the frequency increases, indicating a steepening of the spectral index for this source. Except for the first spectral window, we see again that the flux is consistently greater in the WSClean sub-band images, likely due to integration over the noise in the WSClean map. When measuring from WSClean smoothed model images, we find that the WSClean flux densities decrease, bringing them more in line with uSARA measurements. Footnote 3: Since the WSClean image is convolved with the restoring beam, point sources are much more extended than they appear in our uSARA images. Therefore, we use differently sized ellipse regions to measure the flux density of the core of PKS 2014-55. Spectral index maps constructed from WSClean and uSARA sub-band images are shown in the bottom right panels of Figures 3 & 4. The general trend of steepening and flattening is consistent between the two maps, with more patches of flatter emission occurring in the lower portion of the east wing. This flattening is indicative of turbulent "hot-spots", coinciding with brightening seen in the intensity maps. Our uSARA spectral index map shows a dramatic steepening on the edges of the wings, but this is likely to be an artificial steepening since the diffuse structure at the edges is not recovered as well at higher frequencies (see associated GIF available in Wilber et al. (2022), demonstrating how the source structure changes with the frequency). #### 4.2.2 SPT-Cl J2023-5535 As shown in panel (a) of Figure 3, the radio halo in SPT2023 is barely recovered by WSClean, appearing as a very faint increase in the noise across the inner regions of the cluster. 
The SPT2023 radio relic is apparent in the WSClean map as a small, elongated arc at the western side of the cluster (we refer the reader to HyeongHan et al. (2020) for a more detailed analysis of these cluster sources). Our uSARA image does not recover diffuse emission resembling a radio halo (see panel (a) of Figure 4), likely due to the choice of regularisation parameter dampening signal below the estimated noise level; nonetheless, the radio relic appears more clearly defined and brighter than in the WSClean image. We note that the classification of the halo and relic from HyeongHan et al. (2020) was made using a full-band (288 MHz bandwidth) multi-frequency-synthesis image and that the signal in our individual sub-band images (36 MHz bandwidth) is much weaker. We report flux density measurements for the recovered relic source in SPT2023 in Table 5. Flux density measurements of the relic from WSClean smoothed model images are more in line with uSARA measurements, except for the last three sub-band where uSARA shows a decrease in flux. ### Third field: SB9442-35 Beam 35 of SB9442 is centred on the complex radio source PKS 2130-538. This peculiar source, nicknamed "the dancing ghosts," is shaped by the jets and lobes of two radio galaxies in the Abell 3785 galaxy cluster. With the published catalogue and initial results of EMU-PS, Norris et al. (2021) included an analysis of the morphology of PKS 2130-538. Two AGN hosts contribute to the observed emission: a radio galaxy in the north at the centre of a bright filamentary arch, and a second radio galaxy in the south at the centre of a smaller arch on the eastern lobe. Despite the advantage of ASKAP's resolution - revealing previously unseen structure in PKS 2130-538 - Norris et al. (2021) point out that it is still unclear whether these two radio galaxies are superimposed or actually interacting with each other. Similarly to the previous two fields, eight sub-band images were reconstructed and used to obtain the flux density measurements, and the first six sub-band images were used to generate spectral index maps (see associated GIF provided in Wilber et al. 2022). We also formed a single full-band monochromatic image, for increased sensitivity, and to demonstrate the scalability of our imaging framework. In Figures 5 & 6, we present our monochromatic images of the full FoV (\(3.36^{\circ}\)) of SB9442-35, formed from the full-band data using WSClean and uSARA, respectively. The figures also include zoomed-in views of a region of background sources (a: top right panels), the star-forming galaxy NGC 7090 (c: mid-right panels), and "the dancing ghosts" (b: bottom panels). Unlike the two previous fields, the SB9442-35 images do not exhibit large-amplitude calibration artefacts. In fact, only a few compact radio galaxies, catalogued by the Sydney University Molonglo Sky Survey (SUMSS; Mauch et al. 2003), emanate localised ring-like artefacts, and do not hamper the recovery of our targets of interest. Overall, there is a clear difference between WSClean and uSARA in terms of resolution. 
While uSARA recovers structure at higher resolution, diffuse components of the extended radio galaxy in panel (a) and the star-forming galaxy in panel (c) are also fully recovered \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **SPT2023 Relic** & \(S_{\rm 17~{}MHz}\) & \(S_{\rm 853~{}MHz}\) & \(S_{\rm 889~{}MHz}\) & \(S_{\rm 925~{}MHz}\) & \(S_{\rm 961~{}MHz}\) & \(S_{\rm 997~{}MHz}\) & \(S_{\rm 1033~{}MHz}\) & \(S_{\rm 1069~{}MHz}\) \\ \hline uSARA model & 4.9 & 4.7 & 3.5 & 3.6 & 2.9 & 1.7 & 1.5 & 1.3 \\ \hline WSClean restored image & 5.7 & 5.8 & 4.3 & 4.1 & 3.8 & 3.5 & 3.0 & 3.4 \\ \hline WSClean smoothed model & 5.0 & 5.1 & 3.8 & 3.6 & 3.6 & 2.9 & 2.6 & 2.7 \\ \hline \hline \end{tabular} \end{table} Table 5: Integrated flux density values in [mJy] of the radio relic in SPT2023 for each SPW imaged with uSARA and WSClean. Central frequency of each SPW is listed in MHz. when compared to the WSClean map. However, the faintest point sources (near the noise level in the WSClean image) in panel (a) are not fully recovered by uSARA. Again, we attribute this to the choice of the uSARA regularisation parameter. #### 4.3.1 The Dancing Ghosts We focus on the complex emission of PKS 2130-538, displayed in panels (b) of Figures 5 & 6. The left and middle panels represent a zoomed-in view of the "dancing ghosts" from the full-band image and the SPW:1 sub-band image, respectively. Looking at WSClean images, as expected, the full-band restored image exhibits lower background noise and slightly more details in the source, in comparison Figure 5: SB9442-35 – WSClean: Full FoV image covering PKS 2130-538, formed using the full-band data (centred at 943 MHz). This monochromatic image is a WSClean restored image with a synthesised beam of \(11.8\times 9.4\) arcsec and rms noise of \(\sigma_{\rm meas}\approx 30\)\(\mu\)Jy beam\({}^{-1}\) (1 \(\mu\)Jy pixel\({}^{-1}\)). Panel (a) centred on a field containing extended and point-like radio galaxies; panel (c) centred on the star-forming galaxy NGC 7090; (b) panels centred on “the dancing ghosts” (PKS 2130-538). Leftmost (b) panel: image made with the full-band data (centred at 943 MHz); middle (b) panel: image made with only the first sub-band of data (SPW.1, centred at 817), shown for a comparison of sensitivity; rightmost (b) panel: spectral index map made with the first six sub-band images of WSClean after smoothing with a common circular Gaussian beam of 20 arcsec. In Wilber et al. (2022) are provided all sub-band images combined into the GIF ‘SBP4442-35_WSClean’, and the spectral index maps of “the dancing ghosts” obtained with WSClean and uSARA in the GIF ‘SpectralIndexMap_PKS_2130_538’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_PKS_2130_538’, colorblind_friendly’. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline **Dancing Ghosts** & \(S_{\rm fullband}\)-943 MHz & \(S_{\rm 817\ MHz}\) & \(S_{\rm 853\ MHz}\) & \(S_{\rm 889\ MHz}\) & \(S_{\rm 925\ MHz}\) & \(S_{\rm 961\ MHz}\) & \(S_{\rm 997\ MHz}\) & \(S_{\rm 1033\ MHz}\) & \(S_{\rm 10069\ MHz}\) \\ \hline uSARA model & 117.2 & 131.0 & 126.0 & 121.2 & 116.3 & 112.1 & 107.7 & 104.6 & 101.8 \\ \hline WSClean restored image & 115.5 & 128.8 & 124.5 & 120.0 & 115.3 & 111.2 & 107.2 & 104.2 & 101.0 \\ \hline WSClean smoothed model & 115.5 & 128.6 & 124.3 & 119.7 & 115.2 & 111.1 & 107.1 & 104.0 & 101.0 \\ \hline \end{tabular} \end{table} Table 6: Integrated flux density values in [mJy] of “the dancing ghosts” PKS 2130-53 for each SPW imaged with uSARA and WSClean. Figure 6: SB9442-35 – uSARA: Full FoV image covering PKS 2130-538, formed using the full-band data (centred at 943 MHz). This monochromatic image is a uSARA model with a pixel resolution of \(2.2\times 2.2\) arcsec. Panels are the same as described in Figure 5. Rightmost (b) panel: spectral index map made with the first six sub-band images of uSARA after smoothing with a common circular Gaussian beam of 5 arcsec. In Wilber et al. (2022) are provided all sub-band images combined into the GIF ‘SB9442-35,uSARA’, and the spectral index maps of “the dancing ghosts” obtained with uSARA and WSClean in the GIF ‘SpectralIndexMap_PKS_2130_538’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_PKS_2130_538_colorblind_friendly’. to the sub-band map. As for uSARA images, a clear improvement in both resolution and sensitivity can be observed in the full-band image. The bridges, which consist of the jets from the northern and southern AGN, are more tightly collimated in the full-band image when compared to the sub-band image. Several faint point sources, missing from the sub-band image, also emerge. Interestingly, one can notice that the filamentary structure extending from the eastern lobe is more clearly defined and brighter in the uSARA reconstruction. This structure has a similar appearance to the synchrotron threads that were recently discovered in Ramatsoku et al. (2020); Dabbech et al. (2022). Recovery of this structure is an exciting result, as we may find more evidence of magnetic threads branching from and connecting extended lobes in radio galaxies with ultra-sensitive, super-resolved images. The bottom right panels of Figures 5 & 6 show spectral index maps of the dancing ghosts, inferred from the sub-band images. The uSARA spectral index map contains much more detailed information on the spectra of the turbulent lobes of the northern AGN. We also observe that the higher intensity hot-spots exhibit flatter spectral indices, with steepening as the emission traces southward. The second AGN at the centre of the south-east bridge shows a flat core, as expected, and a second set of jets that bend back toward the north-east lobe of the first AGN. In WSClean maps, the second south-east lobe appears to blend back into the emission of the first north-east lobe, however, with uSARA we are able to see a distinct separation of these two portions, indicating that the emission may be somewhat superimposed when seen in projection. The steepness of the spectra in this region does not indicate a sort of re-energising from one lobe pushing onto the other, as it stays fairly consistent in a steep range between \(1<\alpha<2\) in both spectral index maps. 
The north-east thread exhibits a sharp, collimated spectral trend from steep to slightly flat to steep again - in contrast to the turbulent lobes - with an index ranging from \(2.9\geq\alpha\geq 0.9\) and then \(0.9\leq\alpha\leq 3.2\) following the thread from west-to-east in our uSARA map. In Table 6, flux densities for the Dancing Ghosts are reported. Integrated flux density was measured from identical polygonal regions tracing the full recovered signal in both uSARA and WSClean maps, including the eastern and western lobes, the northern arch, the south-eastern jet, and the north-eastern filament. Unlike other diffuse sources of interest from previous fields, the uSARA flux densities are much more consistent with WSClean flux densities. This presents an interesting case that may be explained by i) the source having overall flatter spectral behaviour, and ii) the lower noise level due to the absence of large-scale calibration artefacts in this field. At higher spectral windows, uSARA flux densities are even slightly greater than WSClean, opposing the trend seen for the fainter, steeper diffuse sources of interest in previous fields. This may indicate that the faintest and steepest sources, with surface brightness near estimated noise levels, are more difficult to recover with uSARA, given our current choice of regularisation parameter. In both uSARA and WSClean full-band images, the borders of the dancing ghosts are clearly defined by a contour level at 3 \(\mu\)Jy pixel\({}^{-1}\), which is \(3\sigma_{\rm meas}\) in the WSClean full-band map (where \(\sigma_{\rm meas}=1\)\(\mu\)Jy pixel\({}^{-1}\)). The uSARA imaging framework calculated the estimated standard deviation of the noise in the image domain as \(\sigma=8\)\(\mu\)Jy pixel\({}^{-1}\) for the full-band data, however, both uSARA and WSClean images have captured signal below this value. ## 5 Discussion In this section, we discuss some specific points regarding the experiments performed with our novel automated and parallelised imaging framework, underpinned by the uSARA imaging algorithm (Terris et al., 2022; Dabbech et al., 2022). In comparison to the widely-used WSClean imager, our uSARA-ASKAP images host greater detail at high resolution, resolving never-before-seen structure at scales up to 2.2 arcsec. Arguably the most exciting feature of our reconstructed uSARA maps is that they successfully capture both compact and diffuse radio emission. Standard CLEAN imaging methods often require a compromise on resolution in order to gain sensitivity to diffuse emission, necessitating multiple maps at various resolutions to accurately recover emission on separate spatial scales. In many scientific publications and published radio surveys, imaging results are often found to be separated into one high-resolution map and another low-resolution map, where point sources and compact emission may be subtracted (usually through a crude technique that can leave holes in the image). Here, we have demonstrated that uSARA reconstructs images with both high resolution and sensitivity to diffuse emission, enabling advanced scientific analyses of physically complex sources. Thanks to uSARA's super-resolution and superior sensitivity to diffuse and extended components, we are also able to generate highly-detailed spectral index maps, which aid in the classification of our targeted sources. Moreover, we argue that uSARA can catapult the sensitivity and resolution of existing surveys. 
Unlike WSClean, no residual image is added to the final uSARA reconstruction. It is therefore highly likely that the flux densities of low surface-brightness sources measured from uSARA images are a closer approximation to the true source flux. **Calibration errors.** Several of the ASKAPsoft mosaic continuum images from early science and pilot fields show that calibrated data are still affected by radial artefacts that propagate - up to several degrees in some cases - from bright radio sources. For the ASKAP data considered in this work, the largest source of DDEs (manifested as antenna-based image modulations) is most likely attributed to ASKAP's synthesised beams. ASKAP's beamforming hardware applies digitised weights to the 188-elemental receivers and sums them to form a single primary beam at 1 MHz intervals. Currently, ASKAP uses a common algorithm to maximise signal-to-noise ratio to calculate the weights for beam synthesizing (_e.g._ Jeffs et al., 2008; Ivashina et al., 2011). Holographic observations of ASKAP's beam patterns (see Hotan, 2016, for details) show that their sensitivities vary over frequency from antenna to antenna. The complex-valued sensitivity pattern of the PAF beams therefore introduces DDEs which need to be modelled. Furthermore, antenna pointing errors can introduce direction-dependent antenna gains. ASKAPsoft corrects only for DIEs, and imperfections of the calibration undermine the accuracy of the measurement operator model. Consequently, reconstructed images can exhibit imaging artefacts and, more seriously, suffer from severely limited dynamic ranges. Examples of such artefacts can be seen in the field SB8275-15 containing the merging cluster system A3391-95, where large-amplitude ring artefacts can be seen around the two bright FR-I radio galaxies (at the centres of the Abell clusters) in both uSARA and WSClean reconstructions. In spite of the lack of DDE calibration, overall uSARA exhibits higher reconstruction quality than CLEAN. On a further note, uSARA can be easily plugged as the imaging module into a joint calibration and imaging framework (Dabbech et al., 2021). In-band spectral index maps.Spectral index maps obtained from sub-band monochromatic imaging with uSARA have shown to be more detailed than WSClean. All sub-band images having a common cell size at least two times beyond the observation's nominal resolution at the highest sub-band, the spectral index maps were inferred using a small blurring kernel, preserving their high level of detail in comparison with WSClean spectral index maps. We have found that the classification of some sources is more clearly defined based on the spectral behaviour exhibited in uSARA maps. Interestingly, some sources have shown steeper spectral indices in uSARA spectral index maps when compared to their WSClean counterparts, which can be the result of the increase of sensitivity brought by uSARA. However, this spectral trend warrants further investigation by moving to wide-band deconvolution for a more precise spectral analysis. ## 6 Computational performance To assess the computational performance of our imaging framework, we report specific information for all uSARA imaging experiments in Tables 7-9. 
Details of the measurement operator are listed in these tables, including the number of processed visibilities \(M\), the number of \(w\)-stacks \(P\), the memory requirements to host its underlying sparse matrices \(m_{\mathbf{H/Q}}\), and the computational cost in CPU core hour of \(\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\) pre-computation \(\mathbf{C_{\Phi^{\dagger}\Phi}}\). We also report the total compute time \(\mathbf{T_{Image}}\) and the computational cost in CPU core hour \(\mathbf{C_{Image}}\) of the deconvolution in the same tables. For comparison purposes, we report both \(\mathbf{T_{Image}}\) and \(\mathbf{C_{Image}}\) of WSClean runs in Table 10. For uSARA, the retained number of \(w\)-stacks in each experiment from the planning step determines the sparsity of the de-gridding/holographic matrices underpinning the measurement operator and consequently the memory requirements to host \(\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\). The decomposition of the data and associated measurement operator into smaller blocks, and the resulting number of deployed CPU cores are inferred from the data clustering step (see Section 2.3). From Tables 7-9, one can notice that, in general, a larger number of visibilities will increase the computational cost of the measurement operator's pre-computation. Specific to FB iterations of uSARA, the compute time of the forward step is dominated by the Fourier transforms performed, while that of the backward step is driven by its sub-iterative nature. Both steps, being parallelised, are on par in terms of computing time, with the latter taking about 1.6 times longer on average. Although the number of \(w\)-stacks considered in WSClean is significantly more important, the reported computational time and cost in CPU core hour of uSARA is about 20 times higher on average. The superior computational performance of the standard WSClean RI imager is attributed to (i) its fast approximate data fidelity steps, whereby visibility gridding and de-gridding operations are only conducted few times, and (ii) its simplistic image model, in particular given the overall spatial compactness of the radio emission in the selected ASKAP fields, forcing multi-scale CLEAN to operate only on small-scales. However, the simple regularisation approach underpinning WSClean comes at the expense of lower imaging quality. Even though WSClean is about one order of magnitude faster than our imaging algorithm in its current MATLAB implementation prototype, substantial improvement of uSARA's computational performance is expected when migrated to a production implementation using C++ and parallel libraries. show better reconstruction quality overall in comparison to WSClean, both in terms of resolution and sensitivity. Our uSARA-ASKAP images host more detailed structure of the targeted radio emission - most clearly seen in the super-resolved jets and lobes of radio galaxies in each of the selected FoVs. In addition to high-resolution structure, faint diffuse emission has also been captured by uSARA, revealing more extended emission of intracluster radio sources which appeared blended into background noise in sub-band WSClean maps. An advantageous result of our super-resolved uSARA-ASKAP images is the ability to generate more detailed spectral index maps. High-resolution structure, resembling turbulent emission in the radio lobes of several imaged radio galaxies, appears to closely trace small changes in the steepness of the observed spectra. 
## 7 Conclusions Our monochromatic sub-band uSARA images of the selected ASKAP fields show better reconstruction quality overall in comparison to WSClean, both in terms of resolution and sensitivity. Our uSARA-ASKAP images host more detailed structure of the targeted radio emission - most clearly seen in the super-resolved jets and lobes of radio galaxies in each of the selected FoVs. In addition to high-resolution structure, faint diffuse emission has also been captured by uSARA, revealing more extended emission of intracluster radio sources which appeared blended into background noise in sub-band WSClean maps. An advantageous result of our super-resolved uSARA-ASKAP images is the ability to generate more detailed spectral index maps. High-resolution structure, resembling turbulent emission in the radio lobes of several imaged radio galaxies, appears to closely trace small changes in the steepness of the observed spectra. Furthermore, each of our primary sources of interest exhibits steeper spectra in the uSARA spectral index maps, attributed to the increase in sensitivity and resolution delivered by the algorithm. Nonetheless, our spectral analysis of the target sources remains preliminary and warrants a deeper study using wide-band imaging. Planned upgrades to the uSARA framework - which will incorporate joint DDE calibration (Repetti et al., 2017) and wide-band deconvolution (Thouvenin et al., 2022) - promise more robust images and, consequently, more precise spectral information across all frequency channels. In terms of scalability of the proposed imaging framework, we have demonstrated that its fully automated and parallel measurement operators enable image reconstruction of data up to 7.5 GB in size. Larger data dimensions and fields-of-view necessitate distributing the data and associated measurement operator into more blocks, and decomposing the image into more facets for the parallel application of the sparsity dictionary and its adjoint in the uSARA denoiser. By adding more computational resources, the time to solution can be maintained at a reasonable scale. In its current MATLAB implementation, the computational cost of our imaging framework remains higher than that of the benchmark imager WSClean. However, its migration to C++, leveraging parallel libraries, can substantially boost its computational efficiency, thus narrowing the computational gap with the state-of-the-art imager. In the sequel to this series, Part II: "AIRI validated on ASKAP data," we investigate uSARA's sister algorithm AIRI, the second deconvolution algorithm built into our parallel automated imaging framework. AIRI differs from uSARA by exploiting learned Deep Neural Network (DNN) denoisers in lieu of uSARA's proximal operator in the underpinning FB algorithmic structure (5). AIRI has recently been briefly demonstrated on MeerKAT data (Dabbech et al., 2022). The algorithm will be validated on the same challenging ASKAP data as here, with the aim of demonstrating its potential to deliver further imaging precision and faster reconstruction than uSARA. ## Acknowledgements The first two authors contributed equally to this work. This work was supported by UK Research and Innovation under the EPSRC grants EP/T028270/1 and EP/T028351/1, and the STFC grant ST/W000970/1. The research used Cirrus, a UK National Tier-2 HPC Service at EPCC funded by the University of Edinburgh and EPSRC (EP/P020267/1). ASKAP, from which the data under scrutiny originate, is part of the Australia Telescope National Facility managed by CSIRO. This project used public archival data from the Dark Energy Survey (DES). ## Data Availability The ASKAP data underlying this article (calibrated visibilities and mosaic images of Scheduling Blocks) are made publicly available for viewing and download on the CSIRO ASKAP Science Data Archive (CASDA; Chapman et al., 2017), and can be accessed with the unique Project Identifiers AS034 and AS101. The reconstructed images in FITS format as well as the GIF files showing the imaged fields over the spectral windows are made available in Wilber et al. (2022). The uSARA and AIRI code will become available in a later release of the Puri-Psi library for RI imaging.
2303.02403
Discussion of Martingale Posterior Distributions
I briefly discuss the Martingale Posterior Distributions paper by Edwin Fong, Chris Holmes and Stephen G. Walker
David Rossell
2023-03-04T12:32:39Z
http://arxiv.org/abs/2303.02403v1
# Discussion of _Martingale posterior distributions_ ###### Abstract Here is a brief discussion of _Martingale posterior distributions_, by Fong et al. (2021). Congratulations on a thought-provoking piece. Building Bayesian inference from a (likelihood, predictive) pair, rather than a (likelihood, prior), enriches the paradigm and provides new ways to think about, formulate and solve problems. A few respectful remarks. First, although the authors never claim this, it is worth emphasizing that the framework is not prior-free. There is a posterior and a likelihood, hence the prior is proportional to their ratio. The key is that said prior is data-dependent, providing an interesting avenue to develop objective Bayes methods, at the cost of losing the coherence property in belief updating. Inspecting the prior can be informative. Figure 1 shows a Bernoulli example where truly \(\theta=0.5\) but the implied prior places little mass around that value, and a Gaussian example where the prior is centered around the sample mean\({}^{1}\). This apparently erratic prior behavior might be problematic for model choice via Bayes factors, e.g. returning a very small integrated likelihood in the Bernoulli example. Footnote 1: code at [https://github.com/davidrusi/paper_examples/tree/main/2022_Rossell_martingale_posteriors](https://github.com/davidrusi/paper_examples/tree/main/2022_Rossell_martingale_posteriors) Second, while sometimes it is easier to elicit a predictive than a prior, in my experience the reverse is often true. For example, in regression a prior on parameters defines a prior on the \(R^{2}\) coefficient, an easy-to-interpret quantity, whereas eliciting predictives may be less intuitive for non-statisticians. Further, note that the computational considerations elegantly discussed by the authors severely restrict the range of predictives one may consider in practice, limiting the flexibility of the framework. Third, I am afraid I disagree on the framework's computational convenience. Doing a single optimization may be faster than sampling, but the framework requires solving many optimizations. This is not cheaper than posterior sampling in a standard (likelihood, prior) construction; moreover, the latter offers fast non-sampling-based tools, e.g. Laplace approximations and extensions. It would be interesting to consider analogues for the predictive framework. Finally, a remark on assuming that at \(n=\infty\) there is no uncertainty left. In some settings this is not true, e.g. in high-dimensional regression with \(p\gg n\) (one adds higher-order polynomial terms as \(n\) grows, say) and a normal prior on the parameters, there remains posterior uncertainty even as \(n\to\infty\). The proposed framework does not account for such uncertainty, unless suitable adjustments are made.
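A minimal sketch (distinct from the code linked in footnote 1) of how the implied data-dependent prior can be inspected in the Bernoulli case: draw martingale posterior samples of \(\theta\) by predictive resampling with a simple empirical, Polya-urn-type predictive, then divide the resulting posterior density by the likelihood; the choice of predictive, sample sizes and grid below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed Bernoulli data with true theta = 0.5, in the spirit of the Figure 1 example
n = 20
y = rng.binomial(1, 0.5, size=n)
s = int(y.sum())

def martingale_posterior_sample(y, n_future=2000):
    """One draw of theta via predictive resampling with the empirical predictive."""
    ones, total = int(y.sum()), len(y)
    for _ in range(n_future):
        p = ones / total                 # current predictive P(y_new = 1 | data so far)
        ones += int(rng.random() < p)
        total += 1
    return ones / total                  # theta as the limiting relative frequency

samples = np.array([martingale_posterior_sample(y) for _ in range(5000)])

# Implied data-dependent prior on a grid: prior proportional to posterior / likelihood
edges = np.linspace(0.0, 1.0, 100)
centers = 0.5 * (edges[:-1] + edges[1:])
post, _ = np.histogram(samples, bins=edges, density=True)
lik = centers**s * (1.0 - centers)**(n - s)
prior = np.where(lik > 0, post / lik, 0.0)
prior /= prior.sum() * (centers[1] - centers[0])   # normalise to integrate to 1
```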
2305.01284
Adiabatic ground state preparation of fermionic many-body systems from a two-body perspective
A well-known method to prepare ground states of fermionic many-body hamiltonians is adiabatic state preparation, in which an easy to prepare state is time-evolved towards an approximate ground state under a specific time-dependent hamiltonian. However, which path to take in the evolution is often unclear, and a direct linear interpolation, which is the most common method, may not be optimal. In this work, we explore new types of adiabatic paths based on the spectral decomposition of the two-body projection of the residual hamiltonian (the difference between the final and initial hamiltonian). The decomposition defines a set of hamiltonian terms which may be adiabatically interpolated in a piecewise or combined fashion. We demonstrate the usefulness of partially piecewise interpolation through examples involving Fermi-Hubbard models where, due to symmetries, level crossings occur in direct (fully combined) interpolation. We show that this specific deviation from a direct path appropriately breaks the relevant symmetries, thus avoiding level crossings and enabling an adiabatic passage. On the other hand, we show that a fully piecewise scheme, which interpolates every hamiltonian term separately, exhibits a worst-case complexity of $O(L^6/\Delta^3)$ as compared to $O(L^4/\Delta^3)$ for direct interpolation, in terms of the number of one-body modes $L$ and the minimal gap $\Delta$ along the path. This suboptimality result suggests that only those terms which break necessary symmetries should be taken into account for piecewise interpolation, while the rest is treated with direct interpolation.
Dyon van Vreumingen, Kareljan Schoutens
2023-05-02T09:37:40Z
http://arxiv.org/abs/2305.01284v4
# Adiabatic ground state preparation of fermionic many-body systems from a two-body perspective ###### Abstract A well-known method to prepare ground states of fermionic many-body hamiltonians is adiabatic state preparation, in which an easy to prepare state is time-evolved towards an approximate ground state under a specific time-dependent hamiltonian. However, which path to take in the evolution is often unclear, and a direct linear interpolation, which is the most common method, may not be optimal. In this work, we explore new types of adiabatic paths based on the spectral decomposition of the two-body projection of the residual hamiltonian (the difference between the final and initial hamiltonian). The decomposition defines a set of hamiltonian terms which may be adiabatically interpolated in a piecewise or combined fashion. We demonstrate the usefulness of partially piecewise interpolation through examples involving Fermi-Hubbard models where, due to symmetries, level crossings occur in direct (fully combined) interpolation. We show that this specific deviation from a direct path appropriately breaks the relevant symmetries, thus avoiding level crossings and enabling an adiabatic passage. On the other hand, we show that a fully piecewise scheme, which interpolates every hamiltonian term separately, exhibits a worst-case complexity of \(O(L^{6}/\Delta^{3})\) as compared to \(O(L^{4}/\Delta^{3})\) for direct interpolation, in terms of the number of one-body modes \(L\) and the minimal gap \(\Delta\) along the path. This suboptimality result suggests that only those terms which break necessary symmetries should be taken into account for piecewise interpolation, while the rest is treated with direct interpolation. ## I Introduction Quantum computers are currently regarded as a prime candidate for solving problems in condensed matter physics and chemistry that are untractable for classical computers. In particular, since Feynman's observation of the potential of quantum simulation Feynman (1935), the pioneering work by Lloyd Lloyd (1993) and the invention of quantum phase estimation Feynman (1935), interest in the deployment of quantum computers as simulators of highly correlated quantum systems has exploded. A large body of work has been established describing techniques for simulating dynamics of many-body systems on a quantum computer Feynman (1935); Feynman and Zoller (1936); Feynman (1937); Feynman and Zoller (1938); Feynman (1939), and these may be combined with quantum phase estimation in order to estimate eigenenergies Feynman (1935). A critical question however, to make these methods useful, is how to prepare the states of interest - be it thermal states or eigenstates of the system under investigation - that serve as input to the algorithms that simulate dynamics or compute energies. Although experimental efforts using heuristics such as variational quantum eigensolvers Feynman (1935); Feynman (1936); Feynman and Zoller (1938); Feynman (1939) have shown great success in preparing such states for systems of fixed size, much remains unknown with regards to "solving" highly correlated systems in general. A well-known method for preparing approximate ground states of complex systems is adiabatic state preparation, which uses the adiabatic theorem to carry out quantum computation. 
While originally formulated as a tool to approximate quantum dynamics on large time scales with respect to the inverse energy gap, the adiabatic theorem was reintroduced to attack combinatorial problems Feynman (1939) and to study many-body systems such as Fermi-Hubbard models Feynman (1939); Feynman and Zoller (1939) and molecules Feynman (1938); Feynman (1939); Feynman and Zoller (1938); Feynman (1939); Feynman (1940). The idea of adiabatic state preparation (ASP) is to prepare an eigenstate \(|\psi^{\rm f}\rangle\) of a "final" hamiltonian \({\rm H}^{\rm f}\), starting with an eigenstate \(|\psi^{\rm i}\rangle\) of an "initial" hamiltonian \({\rm H}^{\rm i}\) which is straightforward to prepare. Given this initial state, one time-evolves the state according to the time-rescaled Schrodinger equation, \[i\frac{d}{ds}|\psi(s)\rangle=T\,{\rm H}(s)|\psi(s)\rangle, \tag{1}\] where \(s=t/T\) is a dimensionless time, and \(T\) is the total (physical) evolution time. (We work in units such that \(\hbar=1\).) The evolution is carried out under a time-dependent hamiltonian \({\rm H}(s)\) which equals \({\rm H}^{\rm i}\) at \(s=0\) and \({\rm H}^{\rm f}\) at \(s=1\). After an evolution with time \(s\), one obtains a state \(|\psi^{\rm T}(s)\rangle={\rm U}^{\rm T}(s)|\psi(0)\rangle\), where \(|\psi(0)\rangle=|\psi^{\rm i}\rangle\) and \({\rm U}^{\rm T}(s)\) solves eq. 1. By what is known as the _adiabatic theorem_, the state at the end of this evolution, \(|\psi^{\rm T}(1)\rangle\) will be close to the final eigenstate \(|\psi^{\rm f}\rangle\) if \(T\) is sufficiently large. One variant of this adiabatic theorem which precisely indicates what "close" and "sufficiently large" mean in this context, is due to Jansen et al. Jansen et al. (2008). The statement is that if \({\rm H}(s)\) is a hamiltonian defined on the interval \([0,1]\) which for every \(s\in[0,1]\) has an instantaneous eigenstate \(|\psi(s)\rangle\) whose energy is separated from the rest of the spectrum by \(\Delta(s)>0\), then for any \(s\in[0,1]\), the condition \[T\geq\frac{1}{\delta}\bigg{(}\int_{0}^{s}\left[\frac{\|\partial_{s}^{2}{\rm H} (\sigma)\|}{\Delta^{2}(s)}+7\frac{\|\partial_{s}{\rm H}(\sigma)\|^{2}}{\Delta ^{3}(s)}\right]d\sigma+B\bigg{)} \tag{2}\] where \(\|\cdot\|\) denotes the operator norm and \(B\) is a boundary term that may be set to zero if \({\rm H}(0)={\rm H}(1)=0\), is sufficient to guarantee that \[|\langle\psi(s)|\psi^{T}(s)\rangle|\geq 1-\delta \tag{3}\] provided that \(|\psi^{T}(0)\rangle=|\psi(0)\rangle\). Throughout the rest of this paper, we will consider the case where \(|\psi(s)\rangle\) is the ground state of \(\mathrm{H}(s)\). In principle, any adiabatic evolution may be implemented on a gate-based quantum computer through Trotter-Suzuki [23; 24] or more sophisticated time-dependent hamiltonian simulation methods [25; 26; 27]. Alternative approaches approximate the evolution through a series of measurements [28; 29] or simulations thereof [30; 31]. The most commonly used interpolation method in adiabatic state preparation is a direct linear interpolation between \(\mathrm{H}^{\mathrm{i}}\) and \(\mathrm{H}^{\mathrm{f}}\)[32], which is to say that \[\mathrm{H}(s)=\mathrm{H}^{\mathrm{i}}+s(\mathrm{H}^{\mathrm{f}}-\mathrm{H}^{ \mathrm{i}}). \tag{4}\] However, this method is rather restrictive as the evolution is controlled by only a single parameter, \(s\). Thus the evolution is sensitive to gap closures along the path, which cannot be avoided. 
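A small numerical illustration of eqs. 1-4 (not tied to any particular system considered here): propagate the ground state of a randomly drawn \(\mathrm{H}^{\mathrm{i}}\) under the directly interpolated hamiltonian and evaluate the fidelity of eq. 3 with the ground state of \(\mathrm{H}^{\mathrm{f}}\) for a few total times \(T\); the dimension, step count and piecewise-constant integrator are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(1)

def random_hermitian(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

d = 6
H_i, H_f = random_hermitian(d), random_hermitian(d)

def adiabatic_fidelity(T, n_steps=400):
    """Evolve the ground state of H_i under H(s) = H_i + s (H_f - H_i), eq. 4,
    and return the overlap with the ground state of H_f (the fidelity of eq. 3)."""
    _, v_i = eigh(H_i)
    _, v_f = eigh(H_f)
    psi = v_i[:, 0]
    ds = 1.0 / n_steps
    for k in range(n_steps):
        s = (k + 0.5) * ds
        psi = expm(-1j * T * ds * (H_i + s * (H_f - H_i))) @ psi
    return abs(np.vdot(v_f[:, 0], psi))

for T in (1.0, 10.0, 100.0):
    print(f"T = {T:6.1f}   fidelity = {adiabatic_fidelity(T):.4f}")
```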
An obvious solution is to increase the number of control parameters in the passage from \(\mathrm{H}^{\mathrm{i}}\) to \(\mathrm{H}^{\mathrm{f}}\). This approach is discussed by Tomka et al. [33], who show that evolving along a geodesic path, based on the quantum metric tensor (or Fubini-Study metric) with respect to the control parameters, maximises the local fidelity along the path. In addition, they show that an increase in the number of control parameters leads to higher final fidelities. Put simply, their results rely on the fact that geodesic paths "walk around" regions of parameter space associated with small energy gaps, thus minimising diabatic errors. A similar category of methods to avoid problematic regions in adiabatic state preparation is known as counteradiabatic driving, where an additional hamiltonian term is added during the evolution, which actively suppresses diabatic errors and is set to zero at the end [34; 35; 36]. The problem with these approaches, however, is that their implementation becomes infeasible for large, complex systems. For counterdiabatic driving to work, the eigenstates and spectrum of the hamiltonian must be known along the path, which is something we cannot expect to achieve in such settings. For the geodesic approach, the main roadblock is the inability to solve the geodesic equations, which become inaccesibly large systems of differential equations already for small many-body problems. In this work, we introduce a more hands-on approach to produce new types of adiabatic paths for generic fermionic many-body hamiltonians in a second quantised representation. Section II gives a brief description of such systems. The adiabatic paths are based on a decomposition of the coefficient tensor of such hamiltonians (section III), which defines a set of control parameters that govern the adiabatic evolution. We emphasise that such adiabatic paths can be seen as a new view on adiabatic state preparation for fermionic systems by considering many-body hamiltonians in terms of their two-body eigenstates. We demonstrate, through a set of worked examples (section IV), that there exist scenarios in which direct interpolation suffers from level crossings caused by symmetries, and how the two-body decomposition may be used to explicitly break symmetries and lift such crossings. In section V, we show how a description of these two-body eigenstates as superpositions of fermion pairs, following a suitable one-body transformation, leads to a worst-case adiabatic complexity in terms of the number of one-body modes \(L\) and a minimum gap \(\Delta\) (section V). The implications of this analysis are discussed for different systems. We summarise and conclude in section VI. ## II Many-body hamiltonians Of interest in this work are generic fermionic, interacting, particle-conserving many-body hamiltonians, expressed in a second-quantised representation as \[\mathrm{H}= \sum_{P,Q=1}^{L}h_{PQ}\,\mathrm{a}_{P}^{\dagger}\mathrm{a}_{Q}\] \[+\frac{1}{2}\sum_{P,Q,R,S=1}^{L}g_{PQRS}\,\mathrm{a}_{P}^{ \dagger}\mathrm{a}_{R}^{\dagger}\mathrm{a}_{S}\mathrm{a}_{Q} \tag{5}\] where \(P,Q,R,S\) index general single-particle modes (which may include a spin index, in which case the modes are known as spin orbitals). 
Furthermore, the coefficients \(h_{PQ}\) and \(g_{PQRS}\) describe the one- and two-body terms respectively, and the fermionic creation (annihilation) operators \(\mathrm{a}_{P}^{\dagger}\) (\(\mathrm{a}_{P}\)) satisfy the canonical anticommutation relations, \[\{\mathrm{a}_{P},\mathrm{a}_{Q}\}=\{\mathrm{a}_{P}^{\dagger}, \mathrm{a}_{Q}^{\dagger}\}=0,\] \[\{\mathrm{a}_{P}^{\dagger},\mathrm{a}_{Q}\}=\delta_{PQ}. \tag{6}\] Depending on context, we will sometimes split a single-particle mode into a spatial and a spin component, writing lowercase \(p,q,r,s\) for the spatial and \(\sigma,v,\tau,\varphi\) for the spin component. Such hamiltonians are the central object of study in chemistry and condensed matter theory. In chemistry, the starting point for describing molecules is typically the electronic structure hamiltonian in the nonrelativistic Born-Oppenheimer approximation, given in first quantisation by \[\hat{\mathrm{H}}=E_{\mathrm{nuc}}\underbrace{-\sum_{Ti}\frac{1}{|\mathbf{r}_{ I}-\mathbf{r}_{i}|}+\frac{1}{2}\sum_{i}\nabla_{i}^{2}}_{\hat{\mathrm{H}}}+ \underbrace{\frac{1}{2}\sum_{ij}\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}}_{ \hat{\mathrm{g}}} \tag{7}\] where the upper case indices run over all nuclei and the lower case indices label the electrons. \(E_{\mathrm{nuc}}\) is a nuclear energy constant which may be set to zero for practical purposes. When we project the Hilbert space onto a fixed basis set \(\{\ket{\psi_{p\sigma}}\}\) consisting of single-particle modes (which may be assumed to be real, following common practice), the projected electronic structure hamiltonian assumes the form of eq. 5 with \[h_{(p\sigma)(qv)} =h_{pq}\delta_{\sigma v},\;\;g_{(p\sigma)(qv)(rr)(s\varphi)}=g_{ pqrs}\delta_{\sigma v}\delta_{\tau\varphi}, \tag{8}\] \[h_{pq} =\int d\mathbf{r}_{1}\,\psi_{p}(\mathbf{r}_{1})\,\hat{\mathrm{h} }\,\psi_{q}(\mathbf{r}_{1}),\] (9) \[g_{pqrs} =\int d\mathbf{r}_{1}d\mathbf{r}_{2}\,\psi_{p}(\mathbf{r}_{1}) \psi_{q}(\mathbf{r}_{1})\,\hat{\mathrm{g}}\,\psi_{r}(\mathbf{r}_{2})\psi_{s}( \mathbf{r}_{2}). \tag{10}\] From these expressions we may draw the symmetry conditions \[h_{PQ} =h_{QP} \tag{11}\] \[g_{PQRS} =g_{RSPQ} =g_{QPRS} \tag{12}\] which we shall assume satisfied for all hamiltonians considered in this paper. Note that the hermiticity of the hamiltonian is guaranteed by the use of real-valued single-particle modes. The electronic structure hamiltonian may be simplified by restricting the electrons to orbitals localised at sites arranged on a lattice, and neglecting any Coulomb interaction between different sites. Taking one orbital per site, one arrives at the single-band Fermi-Hubbard (FH) hamiltonian, \[\mathrm{H}=j\sum_{\langle p,q\rangle,\sigma}(\mathrm{a}_{p\sigma}^{\dagger} \mathrm{a}_{q\sigma}+\mathrm{a}_{q\sigma}^{\dagger}\mathrm{a}_{p\sigma})+U \sum_{p}\mathrm{n}_{p\uparrow}\mathrm{n}_{p\downarrow}+\mu\sum_{p\sigma} \mathrm{n}_{p\sigma} \tag{13}\] where \(j\) is the hopping strength between two neighbouring sites, \(U\) is the on-site Coulomb interaction and \(\mu\) is a chemical potential strength. The Fermi-Hubbard model may be written in the form of eq. 5 with coefficients as in eq. 8 through the identification \[h_{pq} =\begin{cases}j&\text{if sites $p$ and $q$ are neighbours},\\ 0&\text{else},\end{cases}\] \[g_{pqrs} =U\delta_{pq}\delta_{rs}\delta_{pr} \tag{14}\] which are readily seen to exhibit the symmetries of eqs. 11-12. 
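As a concrete illustration of eqs. 8 and 13-14, the sketch below assembles the spatial-orbital coefficient arrays \(h_{pq}\) and \(g_{pqrs}\) for a single-band Fermi-Hubbard chain; the function name and the open-chain default are illustrative.

```python
import numpy as np

def fermi_hubbard_coefficients(n_sites, j, U, mu=0.0, periodic=False):
    """Spatial-orbital coefficients h_pq and g_pqrs of the single-band
    Fermi-Hubbard model (eqs. 13-14); spin is restored through eq. 8."""
    h = np.zeros((n_sites, n_sites))
    for p in range(n_sites - 1):
        h[p, p + 1] = h[p + 1, p] = j      # nearest-neighbour hopping
    if periodic:
        h[0, -1] = h[-1, 0] = j
    h += mu * np.eye(n_sites)              # chemical potential on the diagonal
    g = np.zeros((n_sites,) * 4)
    for p in range(n_sites):
        g[p, p, p, p] = U                  # on-site interaction, eq. 14
    return h, g

h, g = fermi_hubbard_coefficients(n_sites=4, j=1.0, U=2.0)
```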
In section IV, we study a generalisation of the one-dimensional FH hamiltonian that allows for spin-dependent hopping strengths, i.e. \(h_{(p\sigma)(qv)}=h_{pq}^{\sigma}\delta_{\sigma v}\). Throughout the rest of this paper, we will work with a fixed particle number (denoted \(N\)) for each hamiltonian. In a sector of fixed \(N\), we can absorb the one-body terms of any hamiltonian of the form in eq. 5 into the two-body terms, by inserting the identity as \[\mathrm{a}_{P}^{\dagger}\mathrm{a}_{Q}=\frac{1}{N-1}\sum_{R}\mathrm{a}_{P}^{ \dagger}\mathrm{a}_{R}^{\dagger}\mathrm{a}_{R}\mathrm{a}_{Q} \tag{15}\] and when we define \[w_{PQRS}:=\frac{h_{PQ}\delta_{RS}+\delta_{PQ}h_{RS}}{N-1} \tag{16}\] then we may write the one-body operator as \[\sum_{PQ}h_{PQ}\,\mathrm{a}_{P}^{\dagger}\mathrm{a}_{Q}=\frac{1}{2}\sum_{PQRS }w_{PQRS}\,\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{\dagger}\mathrm{a}_{S} \mathrm{a}_{Q}. \tag{17}\] Now define the combined one-body and two-body interaction tensor \[G_{PQRS}:=\frac{1}{2}(w_{PQRS}+g_{PQRS}) \tag{18}\] and observe that \[\mathrm{H}=\sum_{PQRS}G_{PQRS}\,\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{ \dagger}\mathrm{a}_{S}\mathrm{a}_{Q}; \tag{19}\] the one- and two-electron terms have now been combined into a single term. We shall refer to G as the interaction tensor of H. Lastly, it will be convenient to express H as a sum over only the unique pairs \((P,R)\) and \((Q,S)\): by invoking the fermionic anticommutation relations, we may write \[\mathrm{H}=\sum_{\begin{subarray}{c}P<R\\ Q<S\end{subarray}}\tilde{G}_{PQRS}\,\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{ \dagger}\mathrm{a}_{S}\mathrm{a}_{Q} \tag{20}\] with \(\tilde{G}_{PQRS}:=G_{PQRS}-G_{PSRQ}-G_{RQPS}+G_{RSPQ}=2(G_{PQRS}-G_{PSRQ})\). This tensor shall be termed the antisymmetrised interaction tensor of H. Naturally, if the initial hamiltonian \(\mathrm{H}^{\mathrm{i}}\) contains all one-body terms, then there is no need to explicitly include them in the two-body terms of the residual hamiltonian \(\mathrm{H}^{\mathrm{r}}\) as described above. In section IV we will see an example of this where \(\mathrm{H}^{\mathrm{i}}\) takes the form of a mean-field (also known as Hartree-Fock) approximation. ## III Adiabatic state preparation by two-body eigendecomposition In this work, we deviate from the direct interpolation approach (eq. 4) by decomposing the residual hamiltonian \(\mathrm{H}^{\mathrm{r}}=\mathrm{H}^{\mathrm{f}}-\mathrm{H}^{\mathrm{i}}\) into a sum of terms \(\mathrm{H}_{k}^{\mathrm{r}}\), \(k\in\{1,\ldots,M\}\), and evolving a linear combination of the terms \(\mathrm{H}_{k}^{\mathrm{r}}\). That is, \[\mathrm{H}(s)=\mathrm{H}^{\mathrm{i}}+\sum_{k=1}^{M}\gamma_{k}(s)\,\mathrm{H}_ {k}^{\mathrm{r}}. \tag{21}\] The decomposition defines an \(M\)-dimensional parameter space in which we consider paths \(\mathbf{\gamma}(s)\) restricted to a hypercube, starting from \(\mathbf{\gamma}(0)=[0,0,\ldots,0]\) and ending at \(\mathbf{\gamma}(1)=[1,1,\ldots,1]\). We define the residual terms \(\mathrm{H}_{k}^{\mathrm{r}}\) through a decomposition of the antisymmetrised interaction tensor of \(\mathrm{H}^{\mathrm{r}}\). This is akin to known low-rank factorisation methods of the two-body part of the interaction tensor, aimed at achieving improved memory efficiency and speed-ups in quantum simulation implementations [37, 9]. Whereas most of these works focus on the Cholesky decomposition of the interaction tensor [38, 39, 40], we use an eigendecomposition. 
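In code, the construction of eqs. 16-20, together with the index pairing and eigendecomposition made precise in the next paragraph (eqs. 22-24), may be sketched as follows, assuming \(h\) and \(g\) satisfy the symmetry conditions of eqs. 11-12; the helper names and the explicit symmetrisation safeguard are illustrative.

```python
import numpy as np

def antisymmetrised_interaction_tensor(h, g, N):
    """G-tilde of eq. 20 from one-body h (L x L) and two-body g (L x L x L x L),
    with the one-body part absorbed for fixed particle number N (eqs. 15-18)."""
    L = h.shape[0]
    eye = np.eye(L)
    w = (np.einsum('pq,rs->pqrs', h, eye) + np.einsum('pq,rs->pqrs', eye, h)) / (N - 1)
    G = 0.5 * (w + g)                                   # eq. 18
    return 2.0 * (G - np.transpose(G, (0, 3, 2, 1)))    # eq. 20: 2 (G_PQRS - G_PSRQ)

def residual_terms(Gt):
    """Two-particle matrix F (eq. 22) and its eigendecomposition (eq. 23),
    defining the residual terms H_k^r = lambda_k * Phi_k (eq. 24)."""
    L = Gt.shape[0]
    pairs = [(P, R) for P in range(L) for R in range(P + 1, L)]
    F = np.array([[Gt[P, Q, R, S] for (Q, S) in pairs] for (P, R) in pairs])
    F = 0.5 * (F + F.T)                  # enforce exact symmetry before diagonalising
    lam, phi = np.linalg.eigh(F)         # phi[:, k]: two-body eigenstate over (P < R) pairs
    return lam, phi, pairs
```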
More precisely, we regard the antisymmetrised interaction tensor \(\tilde{\mathrm{G}}\) as an \(L(L-1)/2\times L(L-1)/2\) matrix \(\mathrm{F}\), by combining the indices \((PR)\) and \((QS)\): \[F_{(PR)(QS)}:=\tilde{G}_{PQRS}. \tag{22}\] We point out that \(\mathrm{F}\) represents a two-particle hamiltonian, since in the two-particle sector, the operator string \(\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{\dagger}\mathrm{a}_{S}\mathrm{a}_{Q}\) is equivalent to the outer product \(|PR\rangle\langle QS|\). For this reason, we refer to \(\mathrm{F}\) as the two-particle matrix. Note however that the noninteracting part of \(\mathrm{F}\) is scaled by a factor \(1/(N-1)\) with respect to that of \(\mathrm{H}^{\mathrm{r}}\), which arises from the insertion of identity (eq. 15). Now since \(\mathrm{F}\) is symmetric with respect to the exchange \((PR)\leftrightarrow(QS)\) (following from the symmetry conditions of eq. 11-12, we may eigendecompose \(\mathrm{F}\) into (normalised) orthogonal eigenvectors, \[F_{(PR)(QS)}=\sum_{k}\lambda_{k}\,\phi_{k}^{(PR)}\phi_{k}^{(QS)} \tag{23}\] and define \[\mathrm{H}_{k}^{\mathrm{r}}:=\lambda_{k}\Big{(}\sum_{P<R}\phi_{k }^{(PR)}\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{\dagger}\Big{)}\Big{(}\sum_{ Q<S}\phi_{k}^{(QS)}\mathrm{a}_{S}\mathrm{a}_{Q}\Big{)}=:\lambda_{k}\Phi_{k}. \tag{24}\] In the following, we will refer to \(\lambda_{k}\) as the two-body eigenvalues; the states \(|\phi_{k}\rangle=\sum_{P<R}\phi^{(PR)}\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^ {\dagger}|\mathrm{vac}\rangle\) as the two-body eigenstates; and the operators \(\Phi_{k}\) as the pseudoprojectors of \(\mathrm{F}\). ## IV Lifting crossings by symmetry breaking in Fermi-Hubbard models We will now illustrate how our method applies to cases where a discrete symmetry in the hamiltonian is influential on the course of the adiabatic evolution. In particular, we consider the situation in which the initial hamiltonian and the final hamiltonian share such a symmetry. In such a case, if the ground states of the initial and final hamiltonian belong to different symmetry sectors, then necessarily at some point the energy levels cross and an excited state is obtained at the end of the adiabatic evolution. This is a known problem that was addressed in the work by Farhi et al. [15] and also plays a role in many-body contexts [41]. Typically the solution is to add a symmetry-breaking field to the interpolation hamiltonian which is set to zero in the end. In this section, we show how symmetry breaking behaviour emerges naturally from the formalism of the two-body eigendecomposition. To see where this symmetry breaking comes from, it is important to note the following fact: if and only if a two-particle eigenstate \(|\phi\rangle=\sum_{P<R}\phi_{(PR)}\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{ \dagger}|\mathrm{vac}\rangle\) is also an eigenstate of some unitary symmetry operator \(\mathrm{U}\) (with some eigenvalue \(\mu\)), which is expressible as a product of one-body rotations, then the corresponding two-body operator \(\Phi\) commutes with \(\mathrm{U}\). 
The "only if" direction is trivial (since \(|\phi\rangle\) is an eigenstate of \(\Phi\)); for the "if" direction, observe that \[\mu|\phi\rangle =\mathrm{U}\Big{(}\sum_{P<R}\phi_{(PR)}\,\mathrm{a}_{P}^{\dagger} \mathrm{a}_{R}^{\dagger}|\mathrm{vac}\rangle\Big{)}\] \[=\sum_{P<R}\phi_{(PR)}\mathrm{U}\mathrm{a}_{P}^{\dagger}\mathrm{U} ^{\dagger}\mathrm{U}\mathrm{a}_{R}^{\dagger}\mathrm{U}^{\dagger}|\mathrm{vac}\rangle\] \[=\sum_{P<R}\phi_{(PR)}\,\Big{(}\sum_{M}U_{PM}\mathrm{a}_{M}^{ \dagger}\Big{)}\Big{(}\sum_{N}U_{RN}\mathrm{a}_{N}^{\dagger}\Big{)}|\mathrm{ vac}\rangle\] \[=\sum_{M<N}\overbrace{\sum_{P<R}\phi_{(PR)}\big{[}U_{PM}U_{RN}-U_{ PN}U_{RM}\big{]}}\times\] \[\times\mathrm{a}_{M}^{\dagger}\mathrm{a}_{N}^{\dagger}|\mathrm{ vac}\rangle. \tag{25}\] The last line and the fact that \(|\mu|=1\) then imply that \[\mathrm{U}\Phi\mathrm{U}^{\dagger}=\mu\Phi\mu^{*}=\Phi \tag{26}\] so that indeed \([\Phi,\mathrm{U}]=0\). This has the following implication: if all two-body eigenvalues \(\lambda_{k}\) are distinct, then all two-particle states \(|\phi_{k}\rangle\), being eigenstates of the residual hamiltonian \(\mathrm{H}^{\mathrm{r}}\), are also eigenstates of the symmetry operator \(\mathrm{U}\), and thus all hamiltonian terms \(\mathrm{H}_{k}^{\mathrm{r}}\) commute with \(\mathrm{U}\). In such a case, no symmetry is broken. However, if the eigenvalues corresponding to two two-body eigenstates with different symmetry are degenerate, then these two-body eigenstates may be mixed to produce new hamiltonian terms which do not commute with \(\mathrm{U}\) and therefore break the symmetry. In practice, one will need to fix a particular mixing to make the adiabatic process unambiguous; in the following, this is taken care of by a small splitting \(d\lambda\) in the relevant two-body eigenvalues. Note that while such a splitting does open a gap, this gap scales inversely in \(d\lambda\); thus if this splitting is small with respect to the overall energy scale, direct linear interpolation still requires a problematically large evolution time. We demonstrate this idea with two simple examples, chosen such that (i) it displays an approximate discrete symmetry, which is only slightly broken and, (ii) the best mean field (Hartree-Fock) solution predicts a ground state in a symmetry sector different from that of the true ground state. In such a situation, straightforward adiabatic following of the Hartree-Fock state has to be exceedingly slow to avoid a crossing into an excited state. A multi-step adiabatic procedure, along the lines presented in this paper, will avoid the crossing altogether and allow a convergence on the true ground state. ### Fermi-Hubbard trimer First, consider the following two-particle, three-site Fermi-Hubbard model: \[\text{H}=\raisebox{-11.381102pt}{\includegraphics[height=14.226378pt]{ Fig1.eps}}\] \[= \sum_{\sigma}j(\text{a}^{\dagger}_{1\sigma}\text{a}_{2\sigma}+ \text{a}^{\dagger}_{3\sigma}\text{a}_{1\sigma})+j_{\sigma}\,\text{a}^{\dagger}_ {1\sigma}\text{a}_{3\sigma}+\text{h.c.}\] \[+\,U(\text{n}_{1\uparrow}\text{n}_{1\downarrow}+\text{n}_{3 \uparrow}\text{n}_{3\downarrow}) \tag{27}\] where \(\text{n}_{i\sigma}=\text{a}^{\dagger}_{i\sigma}\text{a}_{i\sigma}\); \(h\), \(j_{\uparrow}\), \(j_{\downarrow}\) and \(U\leq 0\) are real constants; and we set \(j_{\uparrow}=j_{13}=-j_{\downarrow}\). The discrete symmetry we will keep track of is the reflection of sites \(1\leftrightarrow 3\). 
It is exact when \(U_{1}=U_{3}\), and we will later consider cases where \(U_{1}\) and \(U_{3}\) are slightly different, breaking the symmetry. We assume \(j>0\) throughout. #### ii.1.1 The case \(U=0\) Let us first consider \(U_{1}=U_{3}=0\). For \(j_{1}3=j\), the one-body kinetic energy terms for spin up have two degenerate ground states, at energy \(E_{\text{kin}\uparrow}=-j\). For \(j_{13}<j\), the unique ground state is symmetric (S), while for \(j_{13}>j\) the ground state is anti-symmetric (A). This makes clear that the ground state for one spin-up and one spin-down particle is symmetric for \(j_{13}<j\) but antisymmetric for \(j_{13}>j\). Turning on \(U<0\) will shift the S-A transition to lower values of \(j_{13}\). The Hartree-Fock mean field solution follows this trend but we will see that there are values of \(j_{13}\) where Hartree-Fock places the ground state in the wrong symmetry sector. #### ii.1.2 The case \(U<0\): tracing the ground state of \(\text{H}\) Turning on a negative \(U_{1}=U_{3}=U\) will change the nature of the two-body ground state. For small \(|U|\ll j\) and \(j_{13}=j+\delta\) the energies of the symmetric (S) and anti-symmetric (A) states split as (in first order perturbation theory in \(U\), \(\delta\)) \[E_{\text{S}}=-3j+U/9-\delta/3,\ \ E_{\text{A}}=-3j+U/3-5\delta/3 \tag{28}\] implying that the S-A crossing (as a function of \(j_{13}\)) shifts to \(j_{13}=j+U/6\), that is to a smaller value of \(j_{13}\). For \(U\) large and negative, the two electrons will tend to form a local pair at site 1 or 3, with energy \(U\). In second order perturbation theory, taking into account processes with two hops (of strength \(j\) or \(j_{13}\)) connecting the pair states with unpaired states at energy 0, the on-site energies of these pairs are adjusted to \[\epsilon_{1}=\epsilon_{3}=U+2j^{2}/U+2j_{13}^{2}/U \tag{29}\] while the pair hopping amplitude becomes \[t_{13}=-2j_{13}^{2}/U. \tag{30}\] This leads to S and A ground state energies \[E_{\text{S}}^{(2)}=U+2j^{2}/U,\ \ \ E_{\text{A}}^{(2)}=U+2j^{2}/U+4j_{3}^{2}/U. \tag{31}\] Including terms of order \(j^{4}/U\), \(j^{2}j_{13}^{2}/U^{3}\) and \(j_{13}^{4}/U^{3}\) we find \[E_{\text{S}}^{(4)} =U+2\frac{j^{2}}{U}+8\frac{j^{4}}{U^{3}}+\frac{14}{3}\frac{j^{2} j_{13}^{2}}{U^{3}}+O\Big{(}\frac{(j^{2}+j_{13}^{2})^{3}}{U^{5}}\Big{)}, \tag{32}\] \[E_{\text{A}}^{(4)} =U+2\frac{j^{2}}{U}+4\frac{j_{13}^{2}}{U}-4\frac{j^{4}}{U^{3}}+ 2\frac{j^{2}j_{13}^{2}}{U^{3}}-16\frac{j_{13}^{4}}{U^{3}}\] \[+O\Big{(}\frac{(j^{2}+j_{13}^{2})^{3}}{U^{5}}\Big{)}. \tag{33}\] This puts the S-A crossing (in an expansion in terms of \(j/U\)) at \(j_{13}=\sqrt{3}j^{2}/|U|\). #### ii.1.3 The case \(U<0\): Hartree-Fock approximation To write a mean field (Hartree-Fock) Ansatz, we should first decide on the symmetry sector. For an overall anti-symmetric Ansatz, we have \[|\text{HF},\text{A}\rangle=\frac{1}{\sqrt{2}}(\text{a}^{\dagger}_{1\uparrow}- \text{a}^{\dagger}_{3\uparrow})\frac{1}{\sqrt{2+x^{2}}}(\text{a}^{\dagger}_{1 \downarrow}+x\text{a}^{\dagger}_{2\downarrow}+\text{a}^{\dagger}_{3\downarrow} )|\text{vac}\rangle. \tag{34}\] The expectation value becomes \[\langle\text{H}\rangle_{\text{HF},\text{A}} =\frac{1}{2+x^{2}}\left(4jx-2j_{13}-(2+x^{2})j_{13}+U\right)\] \[=\frac{1}{2+x^{2}}\left((U-4j_{13})+4jx-j_{13}x^{2}\right). 
\tag{35}\] This expression is minimised for (keeping the leading terms in an expansion in terms of \(j/U\), \(j_{13}/U\)) \[x=4j/U\ \ \Rightarrow\] \[\langle\text{H}\rangle_{\text{HF},\text{A}}^{\text{min}} =\frac{U}{2}-2j_{13}+4\frac{j^{2}}{U}+O\Big{(}\frac{(j+j_{13})^{3}}{U^{2 }}\Big{)}. \tag{36}\] The competing Ansatz is symmetric in both the up and the down factors, \[|\text{HF},\text{S}\rangle =\frac{1}{\sqrt{2+y^{2}}}(\text{a}^{\dagger}_{1\uparrow}+ya^{ \dagger}_{2\uparrow}+\text{a}^{\dagger}_{3\downarrow})\times\] \[\quad\times\frac{1}{\sqrt{2+x^{2}}}(\text{a}^{\dagger}_{1\downarrow} +xa^{\dagger}_{2\downarrow}+\text{a}^{\dagger}_{3\downarrow})|\text{vac}\rangle, \tag{37}\] leading to \[\langle\mathrm{H}\rangle_{\mathrm{HF,S}} = \frac{1}{(2+x^{2})(2+y^{2})}\big{[}4jx(2+y^{2})+4jy(2+x^{2}) \tag{38}\] \[-2(2+y^{2})j_{13}+2(2+x^{2})j_{13}+2U\big{]}\] \[= \frac{1}{(2+x^{2})(2+y^{2})}\big{[}8j(x+y)+4j(xy^{2}+yx^{2})\] \[+2(x^{2}-y^{2})j_{13}+2U\big{]}.\] In leading order, the minimum energy is reached for \[x=y=4j/U \Rightarrow\] \[\langle\mathrm{H}\rangle_{\mathrm{HF,S}}^{\mathrm{min}} = \frac{U}{2}+8\frac{j^{2}}{U}+O\Big{(}\frac{j^{4}}{U^{3}}\Big{)}. \tag{39}\] Comparing the expressions in the S and A sectors, we conclude that, in mean field and to leading order in \(j/U\), the S-A crossing happens at \(j_{13}=2j^{2}/|U|\). #### ii.1.4 Adiabatic procedure Suppose now that we consider the Fermi-Hubbard trimer with \(j>0\), \(U<0\), \(|U|\gg j\) and \(\sqrt{3}j^{2}/|U|<j_{13}<2j^{2}/|U|\) and try to identify the ground state with a single spin-up and spin-down particle through adiabatic following. We have just demonstrated that in this situation, the HF solution is in the S sector, while the true ground state is in the A sector. This means that the adiabatic procedure will fail altogether and end up in a symmetric state, which is an excited state of H. Let us now consider how the stepwise procedure works out in this example. The adiabatic procedure starts from the HF hamiltonian, with mean field parameters (called \(x\), \(y\) in the above) optimised for our choice of \(U\), \(j\) and \(j_{13}\). It is obtained from H by replacing \[\mathrm{n}_{i\uparrow}\mathrm{n}_{i\downarrow}\to\mathrm{n}_{i\uparrow} \langle\mathrm{n}_{i\downarrow}\rangle+\langle\mathrm{n}_{i\uparrow}\rangle \mathrm{n}_{i\downarrow}-\langle\mathrm{n}_{i\uparrow}\rangle\langle\mathrm{n }_{i\downarrow}\rangle. \tag{40}\] This implies that the exact two-body hamiltonian \(\mathrm{H}^{(2)}\) differs from the two-body HF hamiltonian via diagonal terms only, which directly correspond to the two-body eigenvalues \(\lambda_{k}\) of the stepwise adiabatic following from \(\mathrm{H}^{(2)}\) to \(\mathrm{H}_{\mathrm{HF}}^{(2)}\). Among these eigenvalues, the most negative ones are \[\lambda_{11} = \langle\mathrm{1}_{\uparrow}\mathrm{1}_{\downarrow}|(\mathrm{H}^ {(2)}-\mathrm{H}_{\mathrm{HF}}^{(2)})|\mathrm{1}_{\uparrow}\mathrm{1}_{ \downarrow}\rangle=\frac{U}{2}+\frac{32j^{4}}{U^{3}}+O\Big{(}\frac{j^{6}}{U^{5} }\Big{)}\] \[\lambda_{33} = \langle\mathrm{3}_{\uparrow}\mathrm{3}_{\downarrow}|(\mathrm{H}^ {(2)}-\mathrm{H}_{\mathrm{HF}}^{(2)})|\mathrm{3}_{\uparrow}\mathrm{3}_{ \downarrow}\rangle=\lambda_{11} \tag{41}\] For \(U_{1}=U_{3}=U\), these two-body eigenvalues are degenerate, leaving an ambiguity in the definition of the stepwise procedure. After all, one could perfectly define the eigenvectors of the two-particle matrix in such a way that the symmetric and antisymmetric sectors sectors are not mixed. 
As such, it is necessary to add an arbitrarily small splitting \(\delta U=U_{3}-U_{1}\). While this implies that the energy levels in a direct interpolation will not strictly cross, the gap that is opened will only scale in \(\delta U\), meaning the time required for the direct interpolation can be made arbitrarily large. Having resolved this ambiguity, we can then design the stepwise adiabatic procedure as follows. Defining \[\mathrm{H}_{1}^{\mathrm{r}} = \lambda_{11}\mathrm{a}_{\uparrow 1}^{\dagger}\mathrm{a}_{ \downarrow 1}^{\dagger}\mathrm{a}_{\downarrow 1}\mathrm{a}_{\downarrow 1}\] \[\mathrm{H}_{3}^{\mathrm{r}} = \lambda_{33}\mathrm{a}_{\uparrow 1}^{\dagger}\mathrm{a}_{3 \downarrow}^{\dagger}\mathrm{a}_{3\downarrow}\mathrm{a}_{3\uparrow}\] \[\mathrm{H}^{\mathrm{rest}} = \mathrm{H}-\mathrm{H}_{\mathrm{HF}}-\mathrm{H}_{1}^{\mathrm{r}}- \mathrm{H}_{3}^{\mathrm{r}} \tag{42}\] we interpolate thus: \[\mathrm{H}^{\mathrm{i}} \to \mathrm{H}^{\mathrm{i}}+\mathrm{H}_{1}^{\mathrm{r}}\] \[\to \mathrm{H}^{\mathrm{i}}+\mathrm{H}_{1}^{\mathrm{r}}+\mathrm{H}^{ \mathrm{rest}}\] \[\to \mathrm{H}^{\mathrm{i}}+\mathrm{H}_{1}^{\mathrm{r}}+\mathrm{H}^{ \mathrm{rest}}+\mathrm{H}_{3}^{\mathrm{r}}=\mathrm{H}. \tag{43}\] With this, the discrete symmetry is broken along all steps of the path, and the S-A crossing, which derails the direct adiabatic interpolation from the HF to the exact hamiltonian, is avoided. The resulting development of the instantaneous gap can be seen in figure 1. ### Fermi-Hubbard model on four sites with alternating hopping As a second example, we present a variation on the same theme: a simple model for correlated electrons where a mean field (Hartree Fock) solution is unable to correctly incorporate two-body correlations and as a result puts the ground state in the wrong symmetry sector, derailing adiabatic interpolation with the HF state as starting Figure 1: Instantaneous ground state energy gaps in the adiabatic ground state preparation of the Fermi-Hubbard trimer, with \(U=-5\), \(j=1\) and \(j_{13}=0.37\). In the direct interpolation, \(\mathrm{H}(s)=(1-s)\mathrm{H}_{\mathrm{HF}}+s\mathrm{H}\), a gap closure occurs around \(s=0.69\). The stepwise procedure is carried out as in eq. 43, with each step taking a third of the total time. Through symmetry breaking, a gap is visibly opened. point. The stepwise adiabatic procedure based on two-body eigenspaces cures this situation. 
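Before moving on to the four-site model, a small numerical check of the trimer analysis above: build the exact two-particle (one spin-up, one spin-down) hamiltonian and track the reflection parity of its ground state as \(j_{13}\) is varied, to be compared with the perturbative crossing estimates \(\sqrt{3}j^{2}/|U|\) (exact) and \(2j^{2}/|U|\) (Hartree-Fock). The sketch assumes hops of strength \(j\) on the 1-2 and 2-3 bonds, a spin-dependent 1-3 hop with \(j_{\uparrow}=j_{13}=-j_{\downarrow}\), and \(U_{1}=U_{3}=U\).

```python
import numpy as np

def trimer_hamiltonian(j, j13, U):
    """Two-particle (one up, one down) Fermi-Hubbard trimer in the 9-dimensional
    basis |site_up, site_down>, under the assumptions stated above."""
    def hop(t13):
        return np.array([[0, j, t13],
                         [j, 0, j],
                         [t13, j, 0]], dtype=float)
    T_up, T_dn = hop(j13), hop(-j13)          # j_up = j13, j_dn = -j13
    H = np.kron(T_up, np.eye(3)) + np.kron(np.eye(3), T_dn)
    for site in (0, 2):                       # on-site interaction on sites 1 and 3
        idx = 3 * site + site                 # both particles on the same site
        H[idx, idx] += U
    return H

def ground_state_parity(H):
    """Sign of the ground state under the site reflection 1 <-> 3 (both spins)."""
    P1 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=float)
    P = np.kron(P1, P1)
    _, v = np.linalg.eigh(H)
    psi = v[:, 0]
    return np.sign(psi @ P @ psi)

j, U = 1.0, -5.0
for j13 in np.linspace(0.2, 0.6, 9):
    print(f"j13 = {j13:.2f}   parity = {ground_state_parity(trimer_hamiltonian(j, j13, U)):+.0f}")
# The parity flip marks the S-A crossing, to be compared with sqrt(3) j^2/|U| and 2 j^2/|U|.
```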
We consider a Fermi-Hubbard model on four sites with alternating hopping. [The hamiltonian of this model was typeset as a figure-embedded equation in the source; it and the remainder of this example, including eqs. 44-46, could not be recovered.] ## V Adiabatic complexity of piecewise paths We now analyse the adiabatic complexity of piecewise paths, in view of the adiabatic complexity bound of eq. 2. For simplicity, we replace the gap \(\varDelta(s)\) by its minimum \(\varDelta=\min_{s\in[0,1]}\varDelta(s)\), and consider complexity in terms of this parameter. What remains then is to determine the scaling of the numerators \[I_{n}:=\int_{0}^{1}\|\partial_{s}^{n}\mathrm{H}(\sigma)\|^{2/n}d\sigma\quad(n\in\{1,2\}) \tag{47}\] in the system size. Since, for typical systems of interest, the number of particles \(N\) scales proportionally with the number of one-particle modes \(L\), we take \(L\) as the system-size scaling parameter. Now, from eqs. 21 and 24, \[\|\partial_{s}^{n}\mathrm{H}(s)\|\leq\sum_{k=1}^{L(L-1)/2}|\partial_{s}^{n}\gamma_{k}(s)|\cdot|\lambda_{k}|\cdot\|\Phi_{k}\|. \tag{48}\] Note that the path functions \(\gamma_{k}\) can always be chosen such that \(|\partial_{s}^{n}\gamma_{k}(s)|\) is upper bounded by a constant. In particular, one may always pick all \(\gamma_{k}\) such that \(\partial_{s}\gamma_{k}(0)=\partial_{s}\gamma_{k}(1)=0\), so that the boundary term in eq. 2 drops out. The two-body eigenvalues \(\lambda_{k}\), being the eigenvalues of \(\mathrm{F}\), are bounded by the energy scale of a two-particle system, which does not grow with the system size. Therefore the norms \(\|\Phi_{k}\|\) are the only meaningful quantities to be upper bounded. As such, it suffices to consider only the first-derivative numerator \(I_{1}\). We shall universally upper bound the operator norm of any pseudoprojector \(\Phi_{k}\) in the following, and shall henceforth drop the subscript \(k\). Afterwards, we discuss some implications of this bound for different choices of paths and systems. #### Upper bound to the pseudoprojector operator norm Define \(\mathrm{b}\) such that \(\Phi=\mathrm{b}^{\dagger}\mathrm{b}\) for some operator \(\Phi\) from the decomposition. When we define the \(L\times L\) antisymmetric matrix \(\tilde{\phi}\) with entries \[\tilde{\phi}^{PR}:=\phi^{(PR)}-\phi^{(RP)} \tag{49}\] we may write \[\mathrm{b}^{\dagger}=\sum_{P<R}\phi^{(PR)}\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{\dagger}=\frac{1}{2}\sum_{PR}\tilde{\phi}^{PR}\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{\dagger}.
\tag{50}\] Next, apply a Youla decomposition \(\tilde{\phi}=\mathrm{V}\Xi\mathrm{V}^{\mathsf{T}}\) where \(\mathrm{V}\) is an \(L\times L\) orthogonal matrix and \[\Xi=\bigoplus_{m=1}^{L/2}\begin{bmatrix}0&\xi_{m}\\ -\xi_{m}&0\end{bmatrix} \tag{51}\] if \(L\) is even; if \(L\) is odd, \(\Xi\) has an additional row and column of zeros. This then yields \[\mathrm{b}^{\dagger}=\sum_{m=1}^{\lfloor L/2\rfloor}\xi_{m}\tilde{\mathrm{a} }_{2m-1}^{\dagger}\tilde{\mathrm{a}}_{2m}^{\dagger} \tag{52}\] where we defined the rotated fermionic operators \(\tilde{\mathrm{a}}_{K}^{(\dagger)}=\sum_{P}V_{PK}\mathrm{a}_{P}^{(\dagger)}\). Note that since the vector with entries \(\tilde{\phi}^{(PR)}\) is a normalised eigenvector, the squares of \(\xi_{m}\) sum to unity. Since \(\|\Phi\|=\|\mathrm{b}\|^{2}=\max_{|\Psi\rangle}\|\mathrm{b}|\Psi\rangle\|^{2}\), in order to obtain the spectral norm of \(\Phi\) it is sufficient to find the state whose norm is maximised under the application of b. Now, from eq. 52 we observe that b defines a set of pairs \(\{(2m-1,2m)\}_{m=1}^{\lfloor L/2\rfloor}\), and only annihilates particles from a product state \(\prod_{P}\tilde{\mathrm{a}}_{P}^{\dagger}|\mathrm{vac}\rangle\) if they appear together in these pairs. As such, it makes sense to describe a product state in terms of its fermion pairs and its unpaired fermions. We shall denote a product state as a ket \(|\mathcal{P},\mathcal{U}\rangle\) where \(\mathcal{P}\) is the set of filled pairs and \(\mathcal{U}\) is the set of remaining unpaired fermions; in other words, \[|\mathcal{P},\mathcal{U}\rangle=\bigg{(}\prod_{i\in\mathcal{U}}^{\prime} \tilde{\mathrm{a}}_{i}^{\dagger}\bigg{)}\bigg{(}\prod_{m\in\mathcal{P}}\tilde{ \mathrm{a}}_{2m-1}^{\dagger}\tilde{\mathrm{a}}_{2m}^{\dagger}\bigg{)}|\mathrm{ vac}\rangle \tag{53}\] where the prime on the leftmost product symbol indicates that a certain order of the unpaired modes is assumed, in order to fix the sign of \(|\mathcal{P},\mathcal{U}\rangle\). In this notation, the matrix elements of \(\Phi\) are given by \[\langle\mathcal{P},\mathcal{U}|\Phi|\mathcal{P}^{\prime}, \mathcal{U}^{\prime}\rangle=\delta_{\mathcal{U}\mathcal{U}^{\prime}}\begin{cases} \sum_{m\in\mathcal{P}}\xi_{m}^{2}&\text{if }\mathcal{P}=\mathcal{P}^{\prime}\\ \xi_{m}\xi_{n}&\text{if }\mathcal{P}\setminus\{m\}=\mathcal{P}^{\prime} \setminus\{n\}\\ 0&\text{otherwise.}\end{cases} \tag{54}\] From eq. 54, it is clear that \(\|\mathrm{b}|\mathcal{P},\mathcal{U}\rangle\|\) is maximised when \(|\mathcal{P},\mathcal{U}\rangle\) lies in the sector with a minimal number of unpaired fermions (zero if \(N\) is even, one if odd). Furthermore, \(\mathrm{b}^{\dagger}\mathrm{b}\) preserves the unpaired fermions, and therefore Figure 3: Instantaneous gaps for the four-site Fermi-Hubbard model, with positive \(U\) (\(U=+2\), \(j=1\) and \(\delta=0.25\)). In the same way as in figure 2, the direct interpolation causes a gap closure around \(s=0.61\) while the stepwise procedure keeps the gap open. Note however a small gap in the stepwise procedure around \(s=0.67\). the state \(|\Psi\rangle\) that maximises the norm must lie in this sector. If \(N\) is even, all particles in this sector are paired up. Such paired fermions, then, are equivalent to what are known as _hardcore bosons_ (HCBs): particles whose operator algebra commutes at different sites, but which may only singly occupy any given site. 
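A minimal numerical illustration of eqs. 49-52 (the hardcore-boson argument continues below): for a real antisymmetric matrix, the real Schur form computed by SciPy is block diagonal with the \(2\times 2\) blocks of eq. 51, so the pair amplitudes \(\xi_{m}\) can be read off directly. The random two-body eigenvector and the assumption of no zero modes are illustrative.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)

# A normalised two-body eigenvector phi^(PR), P < R, arranged into the
# antisymmetric L x L matrix phi_tilde of eq. 49.
L = 6
upper = np.triu(rng.normal(size=(L, L)), 1)
phi_t = upper - upper.T
phi_t /= np.linalg.norm(upper)              # normalise the (P < R) entries

# Youla decomposition via the real Schur form: phi_t = V Xi V^T with 2x2 blocks
# [[0, xi_m], [-xi_m, 0]] on the diagonal (eq. 51); signs of xi_m are irrelevant.
Xi, V = schur(phi_t, output='real')
xi = np.array([Xi[2 * m, 2 * m + 1] for m in range(L // 2)])   # assumes no zero modes
print("pair amplitudes xi_m:", xi)
print("sum of xi_m^2 (should be 1):", float(np.sum(xi**2)))
# The rotated modes a~_K = sum_P V[P, K] a_P give b† = sum_m xi_m a~†_{2m-1} a~†_{2m} (eq. 52).
```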
In this sense, the set \(\{\mathrm{b}_{m}^{\dagger}|\mathrm{vac}\rangle\}_{m=1}^{\lfloor L/2\rfloor}\) with \(\mathrm{b}_{m}^{\dagger}=\tilde{\mathrm{a}}_{2m-1}^{\dagger}\tilde{\mathrm{a} }_{2m}^{\dagger}\) may be viewed as a single-particle HCB basis, and \(\mathrm{b}^{\dagger}\), to which we shall add a subscript \(\mathrm{b}^{\dagger}=\mathrm{b}_{\mathbf{\xi}}^{\dagger}\), is then a rotated HCB creation operator. A universal upper bound on the spectral norm of a pseudoprojector \(\Phi\) is now given by \[\|\Phi\|\leq\max_{\|\mathbf{\xi}\|=1}\max_{|\Psi\rangle\in\mathcal{H}_{\{\mathrm{ L}/2\},N/2\}^{\mathrm{HCB}}}\langle\Psi|\,\mathrm{b}_{\mathbf{\xi}}^{\dagger} \mathrm{b}_{\mathbf{\xi}}|\Psi\rangle \tag{55}\] where \(\mathcal{H}_{\mathrm{I},n}^{\mathrm{HCB}}\) denotes an \(l\)-site, \(n\)-particle HCB Hilbert space. An expression for the right-hand side of eq. 55 was found by Tennie et al. [42, theorem 1]; the upper bound that follows is **Theorem 1**.: _For even particle number \(N\), a universal upper bound on the operator norm of a pseudoprojector \(\Phi\) is given by_ \[\|\Phi\|\leq\frac{N/2}{\lfloor L/2\rfloor}(\lfloor L/2\rfloor-N/2+1). \tag{56}\] For \(N\) odd, a similar result may be found through the observation that any eigenstate of \(\Phi\) with maximum eigenvalue must lie in the sector where all fermions except one are paired up, for the same reason as discussed above. The problem of upper bounding the operator norm of \(\|\Phi\|\) is then equivalent to the even case in a Hilbert space with one less HCB site available. This is formalised in the following theorem. **Theorem 2**.: _For odd \(N\), \(\|\Phi\|\) is upper bounded by_ \[\|\Phi\|\leq\frac{\lfloor N/2\rfloor}{\lfloor L/2\rfloor-1}(\lfloor L/2 \rfloor-\lfloor N/2\rfloor). \tag{57}\] The bounds of theorems 1 and 2 are also tight. **Theorem 3**.: _Let \(\ell=\lfloor L/2\rfloor\). The upper bound in the even case, theorem 1, is saturated by taking \(\xi_{m}=1/\sqrt{\ell}\;\forall m\), and taking \(|\Psi\rangle\) to be the maximally symmetric state_ \[|\Psi\rangle=\binom{\ell}{N/2}^{-1/2}\sum_{\mathcal{P}:|\mathcal{P}|=N/2}| \mathcal{P},\emptyset\rangle \tag{58}\] _where the sum runs over all sets \(\mathcal{P}\) of \(N/2\) HCB sites. The upper bound of in the odd case, theorem 2, is saturated by \(\xi_{m}=1/\sqrt{\ell-1}\;\forall m\neq\ell\), \(\xi_{\ell}=0\) and_ \[|\Psi\rangle=\binom{\ell-1}{\lfloor N/2\rfloor}^{-1/2}\sum_{\begin{subarray}{ c}\mathcal{P}:|\mathcal{P}|=\lfloor N/2\rfloor\\ \ell\notin\mathcal{P}\end{subarray}}|\mathcal{P},\{2\ell-1\}\rangle. \tag{59}\] The proofs of theorems 2 and 3 are deferred to appendices A and B respectively. We note that instead of the \(\ell\)-th HCB site, the unpaired fermion could occupy any HCB site. In typical systems of interest, the particle number \(N\) will scale proportionally to the number of modes \(L\); we have thus shown that the operator norm of each pseudoprojector \(\Phi_{k}\) scales at most linearly in \(L\). #### iv.1.2 Implications Let us now think about how the above result can be used to reason about the adiabatic complexity of a choice of system or path, in terms of the numerator of eq. 47. We set a baseline with the following bound which applies to direct interpolation. Define the residual hamiltonian \(\mathrm{H}^{\mathrm{r}}\) in a generic fashion as in eq. 
20, and observe that \[\|\mathrm{H}^{\mathrm{r}}\| \leq\sum_{\begin{subarray}{c}P<R\\ Q<S\end{subarray}}\|F_{(PR)(QS)}\,\mathrm{a}_{P}^{\dagger}\mathrm{a}_{R}^{ \dagger}\mathrm{a}_{S}\mathrm{a}_{Q}\|\] \[\leq\sum_{\begin{subarray}{c}P<R\\ Q<S\end{subarray}}|F_{(PR)(QS)}|\leq\sqrt{L(L-1)/2}\;\|\mathrm{F}\|_{\mathrm{F}} \tag{60}\] where \(\|\cdot\|_{\mathrm{F}}\) denotes the Frobenius norm. Given the dimensionality of \(\mathrm{F}\), it is clear that \(\|\mathrm{F}\|_{\mathrm{F}}\leq c\sqrt{L(L-1)/2}\) for some nonnegative constant \(c\), and thus \(\|\mathrm{H}^{\mathrm{r}}\|\leq O(L^{2})\). The resulting numerator, for direct interpolation, from eq. 47 then scales as \(O(L^{4})\). In comparison, consider a fully stepwise scheme where we "adiabatically add" every term \(\mathrm{H}_{k}^{\mathrm{r}}\) from the eigendecomposition, eq. 24, separately, i.e. we evolve \[\mathrm{H}^{\mathrm{i}}\to\mathrm{H}^{\mathrm{i}}+\mathrm{H}_{k_{1}}^{\mathrm{r }}\to\mathrm{H}^{\mathrm{i}}+\mathrm{H}_{k_{1}}^{\mathrm{r}}+\mathrm{H}_{k_{2}}^ {\mathrm{r}}\to\cdots\to\mathrm{H} \tag{61}\] where \(\mathrm{A}\to\mathrm{B}\) denotes a direct adiabatic interpolation between \(\mathrm{A}\) and \(\mathrm{B}\). In this scheme, at any point in the evolution, exactly one of the \(\gamma_{k}\) (cf. eq. 21) must increase at a rate scaling in the number of terms (which is \(O(L^{2})\)), with the rest staying constant (being either \(0\) or \(1\)). From the universal result that \(\|\Phi_{k}\|\leq O(L)\), we then have \[I_{1}\leq\int_{0}^{1}|O(L^{2})O(L)|^{2}d\sigma=O(L^{6}). \tag{62}\] This indicates that the fully stepwise procedure is unfavourable as compared to direct interpolation. This is no surprise: with the work of Tomka et al [33] in mind, the direct path is a geodesic if the gap is held constant, whereas the fully stepwise approach is a walk along the corners of a hypercube in parameter space. However, we emphasise that these bounds are worst-case and can be improved in certain settings. Consider, for example, the standard Fermi-Hubbard model of eq. 13 with the hopping part plus the chemical potential (which is proportional to the identity in for fixed \(N\)) taken as the initial hamiltonian. The spectral norm of the residual hamiltonian \(\mathrm{H}^{\mathrm{r}}\) is easily found to be \(UN/2=O(L)\), leading to \(I_{1}\leq O(L^{2})\). Furthermore, since \(\mathrm{H}^{\mathrm{r}}\) is diagonal in the two-particle position basis, all its pseudo-projectors are of the form \(\Phi_{k}=\mathrm{a}_{I_{k}}^{\dagger}\mathrm{a}_{J_{k}}^{\dagger}\mathrm{a}_{J _{k}}\mathrm{a}_{I_{k}}\) and thus have unit spectral norm. Since \(\mathrm{H}^{\mathrm{r}}\) contains \(L/2\) such terms, a fully stepwise procedure yields the same numerator bound, \(I_{1}\leq O(L^{2})\). On the other hand, the paired fermion formalism may be used to find examples which saturate the bounds of both eq. 60 and 62. To this end, we define a residual hamiltonian \(\mathrm{H}^{\mathrm{r}}\) and the corresponding operators \(\Phi_{k}\) through its two-body eigenstates. As we have seen, a fully paired state \(|\Psi\rangle=(\xi_{1}\mathrm{a}_{1}^{\dagger}\mathrm{a}_{2}^{\dagger}+\cdots+ \xi_{\lfloor L/2\rfloor}\mathrm{a}_{L-1}^{\dagger}\mathrm{a}_{L}^{\dagger})| \mathrm{vac}\rangle\), where \(\xi_{m}=(\lfloor L/2\rfloor)^{-1/2}\) for all \(m\), gives rise to a pseudoprojector with a maximal norm that is \(O(L)\). 
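A small numerical check of this saturation (theorems 1 and 3): evaluate \(\|\mathrm{b}|\Psi\rangle\|^{2}\) in the paired (hardcore-boson) sector for uniform pair amplitudes and the maximally symmetric state of eq. 58, and compare it with the bound of eq. 56; the helper function and the small system size are illustrative.

```python
import numpy as np
from itertools import combinations

def hcb_norm_sq(xi, occupied_sets, amps):
    """||b |Psi>||^2 for |Psi> = sum_P amps[P] |P> in the paired (hardcore-boson)
    sector; b removes one occupied pair m from P with amplitude xi[m]."""
    out = {}
    for P, a in zip(occupied_sets, amps):
        for m in P:
            key = tuple(sorted(set(P) - {m}))
            out[key] = out.get(key, 0.0) + xi[m] * a
    return sum(v * v for v in out.values())

L, N = 8, 4                                        # L one-body modes, N (even) fermions
ell, npairs = L // 2, N // 2
xi = np.full(ell, 1.0 / np.sqrt(ell))              # uniform pair amplitudes (theorem 3)
sets = list(combinations(range(ell), npairs))
amps = np.full(len(sets), 1.0 / np.sqrt(len(sets)))  # maximally symmetric state (eq. 58)

value = hcb_norm_sq(xi, sets, amps)
bound = (npairs / ell) * (ell - npairs + 1)          # theorem 1
print(value, bound)                                  # the two numbers should agree
```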
Since the one-body basis is free to choose, we can permute the one-body modes in \(L-1\) ways such that all resulting two-body states are fully paired and mutually orthogonal1. Furthermore, we make use of the fact that the mapping \(\xi_{m}\mapsto-\xi_{m}\) preserves the spectrum of any pseudoprojector \(\Phi\). After all, from eq. 54 we see that \(\xi_{m}\) appears in an off-diagonal element \(\langle\mathcal{P},\mathcal{U}|\Phi|\mathcal{P}^{\prime},\mathcal{U}\rangle\) only if \(m\in\mathcal{P}\) and \(m\notin\mathcal{P}^{\prime}\) or vice versa. Therefore this mapping is realised by the transformation \(\Phi_{k}\mapsto\Sigma\Phi_{k}\Sigma\) where \(\Sigma\) is a diagonal matrix with a \(-1\) entry in those columns corresponding to the product state \(|\mathcal{P},\mathcal{U}\rangle\) where \(m\in\mathcal{P}\), and a \(+1\) entry elsewhere. Now we can use this to vary the signs of the terms in a fully paired two-body state; if \(L\) is a power of two, we can construct \(L/2\) vectors \(\boldsymbol{\xi}\) that are the (normalised) columns of an Hadamard matrix, so that the resulting \(L/2\) two-body states are mutually orthogonal. As such, we have defined the \(L(L-1)/2\) two-body eigenstates necessary to describe an interacting hamiltonian, each of which fully paired. Footnote 1: One way to see this is to draw a complete graph of \(L\) nodes where nodes \(2,\ldots,L\) are drawn in a circle around node \(1\). A full pairing (also known as perfect matching) may then be found by selecting an edge from node \(1\) to any other node and pairwise connecting the other nodes through edges orthogonal to the first edge. In this way, we find \(L-1\) pairings without drawing any parallel edges, guaranteeing that that resulting two-body states are mutually orthogonal. Now, since we have \(L(L-1)/2\) pseudoprojectors with maximal norm, the inequality of eq. 62 is automatically saturated (for any choice of the two-body eigenvalues), and the adiabatic numerator is maximised for the stepwise procedure. Furthermore, this construction also attains a maximally scaling numerator in the case of direct interpolation, if we set all two-body eigenvalues to \(1\). Indeed, consider the sum over all pseudoprojectors \(\Phi_{k}\) which carry the same pairing (and therefore only differ in their \(\boldsymbol{\xi}\) vectors): \[\sum_{k\,:\,\mathrm{same\,\,pairing}}\Phi_{k} =\sum_{k}\sum_{mn}\xi_{m}^{k}\xi_{n}^{k}\,\mathrm{a}_{2m-1}^{ \dagger}\mathrm{a}_{2m}^{\dagger}\mathrm{a}_{2n}\mathrm{a}_{2n-1}\] \[=\sum_{m}\mathrm{n}_{2m-1}\mathrm{n}_{2m}. \tag{63}\] In other words, this particular sum of pseudoprojectors is an operator that counts all pairs of fermions in a product state that coincide with the mode pairs that define the pseudoprojectors. As a result, the sum over _all_ pseudoprojectors defined in this example is an operator that counts all possible pairs of fermions, and is therefore simply equal to \(N(N-1)/2\) times the identity. This operator saturates the bound \(\|\mathrm{H}^{\mathrm{r}}\|\leq O(L^{2})\) (under the assumption that \(L/N=O(1)\)) and therefore realises a maximal adiabatic numerator scaling of \(O(L^{4})\). This analysis establishes a condition, expressed in the paired fermion formalism, on the two-body eigenstates that yields worst-case numerator scaling for both direct and fully stepwise interpolation. In addition, it shows that direct interpolation indeed outperforms a fully stepwise protocol in this sense. 
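The sign-pattern construction just described can be sketched as follows for \(L\) a power of two; the specific size is illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

# For L a power of two, the normalised columns of an (L/2) x (L/2) Hadamard matrix
# give L/2 mutually orthogonal sign patterns for the pair amplitudes xi of fully
# paired two-body states sharing the same pairing.
L = 8
Xi = hadamard(L // 2).astype(float) / np.sqrt(L // 2)   # columns: orthonormal xi vectors
print(np.allclose(Xi.T @ Xi, np.eye(L // 2)))           # True: the two-body states are orthogonal
```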
## VI Conclusion In this work, we have proposed a new protocol for adiabatic preparation of fermionic many-body ground states based on the eigendecomposition of the (combined one- and two-body) coefficient tensor of the residual hamiltonian, being the difference between the initial and final hamiltonian, in second quantisation. The eigenvectors in this decomposition are equivalent to two-body eigenstates of the residual hamiltonian. The method decomposes the residual hamiltonian into a sum of simpler terms, each of which corresponds to an eigenvalue and eigenvector from the eigendecomposition. In the adiabatic scheme, every point along the evolution path is then a linear combination of these terms. We have demonstrated how this idea may be applied to generalised Fermi-Hubbard models, through a few small worked examples. Our finding is that a level crossing occurring in a direct interpolation from a mean-field hamiltonian, which arises from a discrete one-body symmetry, can be cured with the two-body decomposition approach. Although this is not a general superiority result, it shows the existence of scenarios in which the use of (partially) piecewise paths resulting from a two-body decomposition is advantageous as compared to direct interpolation. More precisely, in this approach, one can design a procedure which explicitly breaks the symmetry by interpolating through an intermediate hamiltonian which contains only a subset of the hamiltonian terms from the decomposition. As a result, a gap is seen to be opened. The conditions for this to occur are rather specific: while the initial hamiltonian must share a symmetry with the target hamiltonian and place the ground state in the incorrect symmetry sector, the two-particle matrix of the residual hamiltonian must have degenerate eigenvalues in order to mix two-particle eigenstates from the relevant sectors. Having established this gap opening potential of the two-body decomposition method, we proceeded to analyse the adiabatic complexity of piecewise paths more broadly, by examining how the two-body decomposition influences the numerator part of the complexity of many-body adiabatic state preparation. This numerator is primarily dependent on the operator norm of each term from the residual hamiltonian decomposition. We have found that a description in terms of fermion pairs (or equivalently, hard-core bosons) is key to understanding the scaling, in terms of the number of single-particle modes \(L\), of this operator norm and therefore the adiabatic numerator. The main result is that each residual hamiltonian term scales at most as \(O(L)\), for a typical system where the number of particles \(N\) scales proportionally with \(L\). This result has different implications, depending on the system under investigation and the chosen evolution path. For example, for the Fermi-Hubbard model with the interaction part taken as the residual hamiltonian, the adiabatic complexity scales as \(O(L^{2}/\Delta^{3})\) both in a direct interpolation, and when following a fully piecewise path. This is due to the fact that the norm of each residual hamiltonian term from the decomposition scales as \(O(1)\), and there are only \(L\) nonzero terms. 
On the other hand, for a situation in which all two-body eigenstates are uniformly weighted superpositions of distinct fermion pairs, each term attains the maximal scaling of \(O(L)\); as a result, the time complexity of direct interpolation in this case scales as \(O(L^{4}/\Delta^{3})\), whereas under a fully stepwise path we find an \(O(L^{6}/\Delta^{3})\) scaling. Both these scalings are worst-case. This finding agrees with the statement by Tomka et al. that a geodesic path in parameter space is generally beneficial in terms of time complexity [33]. The result suggests that one should be selective when choosing which of the hamiltonian terms from the decomposition to interpolate in a piecewise fashion, and which to interpolate directly. Namely, by piecewise interpolating only those terms which (are expected to) break any relevant symmetries, one retains the power to lift level crossings, while avoiding potentially unfavourable scaling in \(L\). We note however that the situation of maximal scaling is a case where the residual hamiltonian is particularly dense. In large chemical systems, for example, the two-electron part of the hamiltonian is typically sparse, so it is expected that a lower adiabatic numerator can be achieved for such systems. All in all, our examples show that the two-body eigendecomposition method can outperform direct interpolation through symmetry breaking, and we expect that the method can be helpful in situations beyond a single reflection symmetry. An example is the nonrelativistic treatment of molecular electronic structure, which maintains an \(SU(2)\) spin symmetry. Another approach could be the use of a two-body eigendecomposition as a black box if there is a hidden symmetry and the precise cause of a gap closure is not straightforward to determine. ## VII Acknowledgments We thank Luuk Visscher, Emiel Koridon and Stefano Polla for valuable discussions. This work was supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL programme.
2309.08613
Multimodal Recommender Systems in the Prediction of Disease Comorbidity
While deep-learning based recommender systems utilizing collaborative filtering have been commonly used for recommendation in other domains, their application in the medical domain have been limited. In addition to modeling user-item interactions, we show that deep-learning based recommender systems can be used to model subject-disease code interactions. Two novel applications of deep learning-based recommender systems using Neural Collaborative Filtering (NCF) and Deep Hybrid Filtering (DHF) were utilized for disease diagnosis based on known past patient comorbidities. Two datasets, one incorporating all subject-disease code pairs present in the MIMIC-III database, and the other incorporating the top 50 most commonly occurring diseases, were used for prediction. Accuracy and Hit Ratio@10 were utilized as metrics to estimate model performance. The performance of the NCF model making use of the reduced "top 50" ICD-9 code dataset was found to be lower (accuracy of ~80% and hit ratio@10 of 35%) as compared to the performance of the NCF model trained on all ICD-9 codes (accuracy of ~90% and hit ratio@10 of ~80%). Reasons for the superior performance of the sparser dataset with all ICD codes can be mainly attributed to the higher volume of data and the robustness of deep-learning based recommender systems with modeling sparse data. Additionally, results from the DHF models reflect better performance than the NCF models, with a better accuracy of 94.4% and hit ratio@10 of 85.36%, reflecting the importance of the incorporation of clinical note information. Additionally, compared to literature reports utilizing primarily natural language processing-based predictions for the task of ICD-9 code co-occurrence, the novel deep learning-based recommender systems approach performed better. Overall, the deep learning-based recommender systems have shown promise in predicting disease comorbidity.
Aashish Cheruvu
2023-08-30T01:40:45Z
http://arxiv.org/abs/2309.08613v1
# Multimodal Recommender System in the Prediction of Disease Comorbidity ###### Abstract This paper presents two novel applications of deep learning algorithm-based recommender systems, Neural Collaborative Filtering (NCF) and Deep Hybrid Filtering (DHF), for disease diagnosis. Two datasets, the first with all diseases and the second with the 50 most commonly occurring diseases, were derived from the MIMIC database. The testing, validation and training accuracy of the model with the reduced dataset (50 ICD codes) was lower (~ 80%) than the model trained on all ICD-9 codes (~ 90%). The model using all ICD codes performed better (80%), also in terms of hit ratio@10, compared to the model with 50 ICD-9 codes (35%). Reasons for a superior performance with the dataset using all ICD codes can be mainly attributed to the higher volume of data and the powerful nature of deep learning algorithms. Compared to literature reports, the novel approach of using deep recommender systems performed well. Results from the deep hybrid filtering model show better training accuracy (93.75%) compared to the NCF model (90.82%), indicating that the addition of text data from clinical notes provided improved performance in predicting comorbidity. The deep learning-based recommender systems have shown promise in accurately predicting subject disease co-occurrence. Deep Learning, Recommender Systems, Comorbidity, Natural Language Processing ## I Introduction With the fastest growing demographic of people older than 65, there is a growing incidence of disease comorbidity. To address the comorbidity and the ensuing health care costs, there is a need to develop tools that can learn from the multitude of data available from various sources (viz. Electronic Health Records (EHR), Health insurance claims, etc). Among these various sources, EHRs are one of the most valuable, as they provide data from real world evidence. Furthermore, the data in EHR includes information from varied sources including both structured data (e.g., blood pressure, dementia score etc) - which are directly interpretable, and unstructured data (images- CT scan, MRI and text- clinical notes etc.) - that need expert interpretation. With the availability of these vast amounts of data, there is a need for developing tools that can effectively handle the data. The current work used MIMIC-III [1], a publicly available EHR database containing information about 40,000 ICU patients. The database is organized to include various diseases (included as International Classification of Diseases - ICD-9 codes [2]), clinical notes, CT scans, physiologic signals data etc. To this end Machine Learning (ML) methods that can process large amounts of data in multiple dimensions are promising tools in assisting physicians. ML algorithms can discover, classify, and identify patterns and relationships between various disease characteristics and effectively predict future outcomes of disease. Among the various ML approaches, Deep Learning (DL) approaches have shown promise in working with data from multiple modalities including text, image, voice, video etc. Deep learning has had tremendous success across multiple domains including image recognition and natural language processing. Recommender systems (RS) are algorithms aimed at suggesting relevant items to users (movies to watch, text to read, products to buy etc). RS have the potential to handle the increasing "information overload" effectively and tailor content precisely. 
RS typically fall into 3 broad categories - content-based filtering, collaborative filtering, and hybrid filtering (a combination of the first two). This paper introduces two RS (collaborative and hybrid filtering) that use state-of-the-art techniques, which were evaluated in predicting the probability of subject disease co-occurrence. The contributions of this study are three-fold. The first contribution is the development of a hybrid recommender system that combines deep learning based collaborative filtering techniques and text features derived from clinical notes using natural language processing (NLP) techniques in predicting disease co-occurrence. The second contribution is to understand the impact of using partial data (top 50-ICD codes) versus using the sparser full dataset on the accuracy of predicting disease co-occurrence. The third contribution is to understand the impact of changing the proportion of clinical notes on the accuracy of disease co-occurrence prediction. ## II Related work While much work has been done on using data mining and in particular NLP/text mining algorithms for the purpose of ICD-9 code prediction, not much work has been done in using recommender systems for ICD code prediction. However, Davis et al. created a rudimentary collaborative RS using cosine similarity for ICD code prediction [3]. While this work makes use of basic techniques, many new techniques have been introduced in the literature that allow for much more accurate models, most notably deep learning-based RS. Also, only collaborative filtering is used for disease prediction, which makes it unable to incorporate useful content-related features. In regard to text-based ICD prediction models, Zhang et al. have trained a BERT (bidirectional encoder representations from transformers) model using text data for the purpose of ICD code classification on a proprietary dataset, achieving high performance [4]. However, the authors have not made use of recommender system based methods in disease prediction. Mullenbach et al. have trained a Convolutional Neural Network using text data for ICD code prediction using the MIMIC-III database [5]. Moons et al. have trained a host of deep learning methods using text data and evaluated them using the top 50 ICD codes and all ICD codes on the MIMIC-III database [6]. Unlike these and other earlier works, this current work attempts to apply the state-of-the-art deep learning architectures in recommender systems (hybrid and collaborative deep recommender systems) and use ICD-9 codes as well as notes (using NLP techniques) for ICD code prediction. ## III Methodology As shown in Figure 1, the first step was to perform SQL queries to extract relevant information from the MIMIC-III database. This was done with the use of the google-cloud bigquery python package, used in interaction with the MIMIC III Database stored in Google BigQuery. Exploratory data analysis was performed to identify the data characteristics. As a next step the data was preprocessed to extract the relevant data and perform data wrangling. A major part of this step is Negative Example Generation (NEG). After this the models were trained and evaluated on accuracy, Macro F1, AUC, and Hit Ratio. The scikit-learn package was used to calculate metrics (apart from hit ratio). Sections 3.1.1 and 3.1.2 describe each step in more detail. ### Data Preprocessing The MIMIC-III dataset is a large dataset relating to patients admitted to critical care units at a large tertiary care hospital. 
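As a rough illustration of this extraction step, the following sketch (ours, not from the paper) queries the two relevant tables with the google-cloud-bigquery package; the `physionet-data` project and dataset names, and the exclusion of the 'Discharge summary' category at query time, are assumptions made for the example, and access to MIMIC-III on BigQuery must first be granted through PhysioNet.

```python
from google.cloud import bigquery

client = bigquery.Client()

# subject-to-ICD-9 assignments used for collaborative filtering
diagnoses_df = client.query("""
    SELECT subject_id, hadm_id, icd9_code
    FROM `physionet-data.mimiciii_clinical.diagnoses_icd`
""").to_dataframe()

# free-text clinical notes used for the NLP-derived features
notes_df = client.query("""
    SELECT subject_id, hadm_id, category, text
    FROM `physionet-data.mimiciii_notes.noteevents`
    WHERE category != 'Discharge summary'   -- discharge summaries are excluded
""").to_dataframe()
```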
The database contains de-identified medical records of patients who stayed from 2001 to 2012 within the intensive care units at Beth Israel Deaconess Medical Center. It has extensive information on approximately 60,000 ICU admissions at the hospital from 2001 - 2012. Clinical data across many modalities (e.g., image, structured data, clinical notes etc.) are included, but for the task of future disease prediction, the 'DIAGNOSES_ICD' table was used for collaborative filtering and the combination of the 'DIAGNOSES_ICD' table and features derived from the 'NOTEEVENTS' table was used for hybrid filtering. DIAGNOSES_ICD contains ICD-9 codes (International Classification of Diseases), where individual ICD-9 codes signify specific medical diagnoses. While ICD-9 codes have been used primarily for the purpose of medical billing, the standardized, commonly adopted "coding" of diseases serves very well for the purpose of disease prediction. NOTEEVENTS contains various clinical notes, such as physician notes, radiology notes, discharge summaries, etc. Because discharge summaries report finalized diagnoses from which ICD-9 codes for patients are created, these types of notes were excluded from analysis. The goal of this study is to explore useful semantic information using both structured and unstructured data. The datasets used were 'DIAGNOSES_ICD' and 'NOTEEVENTS', from which only the free-text clinical note section was used. Furthermore, discharge summary notes were not included as they contain the actual ground truth and the free text upon which ICD-9 codes are prepared. Other categories of notes that represent information obtained during the visit were instead utilized. The data were preprocessed to produce separate datasets using two approaches. The first approach is to treat the ICD-9 codes independently of each other, find the admissions (unique HADM ID) for each ICD-9 classification, and consider only records related to the top 50 common ICD-9 codes. The top 50 were chosen because they covered a majority of the dataset: 93.6% of all data. To test the robustness of the model, the next step was to utilize all ICD-9 codes. Evaluations were performed on the two datasets, which will hereby be referred to as top-50-code and all-ICD-codes, respectively. #### 3.1.1 Neural Collaborative Filtering Recommender System The 'DIAGNOSES_ICD' table was used. Label encoding of each subject and each ICD-9 code was performed. Two datasets were created: one with the 50 most commonly occurring ICD codes within the dataset, and another with the full set of ICD codes. A model with all ICD codes (which are relatively sparse data) was used to evaluate the robustness of the model. An "implicit feedback" RS approach was used. A rating column was added that identified the occurrence of a given ICD-9 code for a subject, with each such subject-code pair forming the "positive class" (1). However, negative samples are also needed for training the models in order to maximize the prediction of related diseases and minimize it for unrelated instances. Subject-disease pairs in which the subject did not have the ICD code constitute the "negative class" (0); these co-occurrences had to be generated. Generation was done by randomly sampling subject-disease "pairs", in which a randomly sampled disease was compared with a certain patient's past history. If the disease was not included in the past history, a pair of the subject and disease was created and given a label of 0. 
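A minimal sketch of this negative example generation step is given below; the column names, the helper function and the fixed random seed are our own illustrative choices rather than details taken from the paper.

```python
import numpy as np
import pandas as pd

def add_negative_examples(positives: pd.DataFrame, ratio: int, seed: int = 0) -> pd.DataFrame:
    """Randomly sample `ratio` negative (subject, ICD-9) pairs per positive pair."""
    rng = np.random.default_rng(seed)
    all_codes = positives["icd9_code"].unique()
    history = positives.groupby("subject_id")["icd9_code"].agg(set).to_dict()

    rows = []
    for subject, known_codes in history.items():
        needed = ratio * len(known_codes)
        while needed > 0:
            code = rng.choice(all_codes)
            if code not in known_codes:          # not in the patient's past history
                rows.append((subject, code, 0))  # label 0: negative class
                needed -= 1
    negatives = pd.DataFrame(rows, columns=["subject_id", "icd9_code", "label"])
    return pd.concat([positives.assign(label=1), negatives], ignore_index=True)

# e.g. a 1:4 positive-to-negative ratio (here using the hypothetical diagnoses_df above):
# dataset = add_negative_examples(diagnoses_df[["subject_id", "icd9_code"]], ratio=4)
```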
Negative examples found in the dataset were randomly generated at positive class to negative class ratios of 1:10, 1:4, and 1:2. Random generation allows for the chance that, even when a certain disease is not reported for a subject, the subject may still have the disease. The model should be able to predict these cases in an accurate way, so random generation, as opposed to incorporating all negative examples, was performed. #### 3.1.2 Deep Hybrid Filtering Recommender System To test the hypothesis that the inclusion of symptom information improves recommender system performance, a novel deep hybrid filtering recommender system was developed. "NOTEEVENTS" transcripts were used for extracting patient symptoms for a given subject. The 'en_core_sci_md' and 'en_ner_bc5cdr_md' models from the SciSpaCy python package were used for implementing the NLP component. SciSpaCy contains SpaCy models trained on biomedical, scientific, and clinical text. The 'en_core_sci_md' model was used to remove i) stop words, punctuation, and other unnecessary information and ii) words that do not have biomedical/clinical significance. Then, the data was passed through the "en_ner_bc5cdr_md" model (a named entity recognition model) to isolate tokens (words) that represent symptoms or medications. Because the performance of the NCF model using the top 50 ICD codes was not as good as using all ICD codes, a separate dataset containing only the top 50 ICD codes was not created or used to train the hybrid filtering models. A combination of a subject, an ICD-9 code they had, and a randomly selected symptom mentioned in the text notes constituted individual data points in the "positive class". NEG for the deep hybrid filtering RS was done by combining a subject, an ICD-9 code they did not have, and a randomly sampled symptom that the patient had. This was done in the hope that it would aid the model in predicting whether a patient had an ICD code using information on whether that symptom was relevant to the ICD-9 code. Figure 1: Methodology overview. ### Model Training and Testing Two recommender systems that use deep learning approaches were developed sequentially. Firstly, a neural collaborative filtering (NCF) model was used in predicting subject disease co-occurrence, where model-derived features for subjects and ICD codes were used. As a next step, a deep hybrid filtering model was created by combining the NCF model developed in the earlier step with NLP-derived features (symptoms & medications) from clinical notes (the NOTEEVENTS table). The method of hybridization used to incorporate NLP features was feature combination (Vall et al.). It was hypothesized that the addition of NLP-based features improves the model performance in predicting the disease co-occurrence. #### Iii-B1 Neural Collaborative Filtering Recommender System The NCF RS was proposed by Barrett et al [7]. Mathematically, the NCF architecture can be represented as such: \[\hat{y}_{sc}=f(\mathbf{P}^{T}\mathbf{v}_{s}^{S},\mathbf{Q}^{T}\mathbf{v}_{c}^{C}\ |\ \mathbf{P},\mathbf{Q},\Theta_{f})\] Here, \(\hat{y}_{sc}\) represents a binary prediction of whether there is a co-occurrence of a subject s and an ICD code c. P and Q represent the latent factor matrices of subjects and ICD codes, and v represents the corresponding feature vector of each. In the models created, an 8-dimensional "embedding" for both P and Q was learned by the NCF models, and \(\Theta_{f}\) represents the model parameters. 
The function f represents the function formed by the "dense" neural network layers present in the model, and can be shown as: \[f(\mathbf{P}^{T}\mathbf{v}_{s}^{S},\mathbf{Q}^{T}\mathbf{v}_{c}^{C})=\phi_{out}(\phi_{2}(\phi_{1}(\mathbf{P}^{T}\mathbf{v}_{s}^{S},\mathbf{Q}^{T}\mathbf{v}_{c}^{C})))\] where \(\phi\) represents the function represented by each layer in the neural network, with the subscript denoting the specific layer it represents. The model architecture for NCF is represented in Figure 2. The inputs to the model are the one-hot encoded ICD-9 code and subject vectors. The ICD-9 code vector and subject vector are fed to the ICD-9 code embedding and subject embedding respectively, which results in smaller, denser ICD-9 code and subject vectors. The embedded ICD-9 code and subject vectors are concatenated before passing through a series of fully connected layers, which maps the concatenated embeddings into a prediction vector as output. At the output layer, a sigmoid activation function was applied to obtain the most probable class. In the example above, the most probable class is 1 (positive class), since 0.8 \(>\) 0.2. #### Iii-B2 Deep Hybrid Filtering Recommender System Just as with the NCF model, embeddings were created for both the subject and ICD-9 code vectors. However, an embedding of the text feature column (which includes both symptoms and medications) was also created. It was hypothesized that creating embeddings for the text feature column in relation to the other columns would help make better predictions: \[\hat{y}_{scm}=f(\mathbf{P}^{T}\mathbf{v}_{s}^{S},\mathbf{Q}^{T}\mathbf{v}_{c}^{C},\mathbf{R}^{T}\mathbf{v}_{m}^{M}\ |\ \mathbf{P},\mathbf{Q},\mathbf{R},\Theta_{f})\] Here, P, Q and R represent the matrices of subject, ICD code, and symptom, and v represents the corresponding feature vector of each. In the models created, an 8-dimensional "embedding" for P, Q and R was learned by the DHF models. \(\Theta_{f}\) represents the model parameters, and \(\hat{y}_{scm}\) represents a binary prediction for the hybrid model. The function f represents the function formed by the "dense" neural network layers present in the model, and can be shown as: \[f(\mathbf{P}^{T}\mathbf{v}_{s}^{S},\mathbf{Q}^{T}\mathbf{v}_{c}^{C},\mathbf{R}^{T}\mathbf{v}_{m}^{M})=\phi_{out}(\phi_{2}(\phi_{1}(\mathbf{P}^{T}\mathbf{v}_{s}^{S},\mathbf{Q}^{T}\mathbf{v}_{c}^{C},\mathbf{R}^{T}\mathbf{v}_{m}^{M})))\] where \(\phi\) represents the function represented by each layer in the neural network, with the subscript denoting the particular layer it represents. The model architecture for DHF is represented in Figure 3. The inputs to the model are the one-hot encoded ICD-9 code vector, subject vector, and symptom vector. The ICD-9 code, subject, and symptom vectors are fed to the ICD-9 code embedding, subject embedding and symptom embedding respectively, which results in a smaller, denser representation of the ICD-9 code, subject and symptom vectors. The embedded ICD-9 code, subject and symptom vectors are concatenated before being passed through a series of fully connected layers, which map the concatenated embeddings into a prediction vector as output. At the output layer, the sigmoid function was applied to obtain the most probable class. In the example above, the most probable class is 1 (positive class - subject will develop disease), since 0.8 \(>\) 0.2. 
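The two architectures can be summarized in a short PyTorch sketch (ours, not taken from the paper); only the 8-dimensional embeddings follow the description above, while the hidden layer widths and other details are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoOccurrenceModel(nn.Module):
    """NCF when n_symptoms is None; DHF when a symptom vocabulary size is supplied."""
    def __init__(self, n_subjects, n_codes, n_symptoms=None, dim=8):
        super().__init__()
        self.subject_emb = nn.Embedding(n_subjects, dim)
        self.code_emb = nn.Embedding(n_codes, dim)
        self.symptom_emb = nn.Embedding(n_symptoms, dim) if n_symptoms else None
        in_dim = dim * (3 if n_symptoms else 2)
        self.mlp = nn.Sequential(                 # the "dense" layers phi_1, phi_2, phi_out
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, subject, code, symptom=None):
        feats = [self.subject_emb(subject), self.code_emb(code)]
        if self.symptom_emb is not None:
            feats.append(self.symptom_emb(symptom))
        x = torch.cat(feats, dim=-1)              # feature combination by concatenation
        return torch.sigmoid(self.mlp(x)).squeeze(-1)

# ncf = CoOccurrenceModel(n_subjects, n_codes)
# dhf = CoOccurrenceModel(n_subjects, n_codes, n_symptoms)
# both would be trained with nn.BCELoss() on the binary co-occurrence labels
```

In this sketch the DHF variant differs from the NCF variant only in the additional symptom embedding concatenated before the fully connected layers, mirroring the feature-combination hybridization described above.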
Figure 2: NCF Model Architecture. Figure 3: DHF Model Architecture. ## IV Results and Discussion ### _Neural Collaborative Filtering Recommender System_ Table 1 presents the overall predictive performance of models with Top 50 and all ICD codes using NCF. The testing, validation and training accuracy of the model trained on 50 ICD code data was lower (\(\sim\) 80%) than the model trained on all ICD codes (\(\sim\) 90%). Because of the binary nature of the dataset, test micro F1 was the same as test accuracy. Macro F1 scores for the model with all ICD codes were far higher than with 50 ICD codes. AUC (reflective of model performance) was also best for the model using all ICD codes, with an AUC of 0.9199 compared to 0.6684 with 50 ICD codes. The model using the 50 ICD codes had a much worse hit ratio of 0.35 as compared to the model with all ICD codes (0.8), reflecting the latter's performance as a recommender system. The superior performance with all ICD codes compared to 50 ICD codes can be mainly attributed to i) the higher volume of data in the dataset containing all ICD codes and ii) the powerful nature of deep learning algorithms. It is intriguing to observe this result, even though the sparsity makes the prediction problem harder. Compared to approaches using only NLP algorithms (such as BERT and CNN) by Zhang et al. [4] for ICD-9 code prediction with the MIMIC-III dataset, the NCF approach performed better. The NCF Macro F1 score of 83.14% was higher than the next highest F1 score of 63% using DR-CAML (convolutional neural network with attention and L1 regularization). Also, the NCF Micro F1 score is higher (89.99%) than that of the best model reported in the literature (63.3%). This could possibly be due to the limitation of the deep natural language processing algorithms using only text data. The proposed deep recommender systems were able to handle the sparse and imbalanced multimodal data efficiently. ### _Deep Hybrid Filtering Recommender System_ Table 2 presents the overall predictive performance of models using NCF and DHF. In total, 12 hybrid filtering models were evaluated across 8 metrics. Within each set of models trained on different numbers of notes, it was observed that a larger positive class to negative class ratio of 1:10 improved performance. For example, the 1 million note model with a positive class to negative class ratio of 1:2 had a test accuracy of 88.66% and a hit ratio @ 10 of 83.88%, while the 1 million note model with a positive class to negative class ratio of 1:10 had a test accuracy of 94.40% and a hit ratio @ 10 of 85.36%. Across note categories, it can be seen that the addition of more notes also generally resulted in greater performance. The 1 million note model performed better than the 100,000-note model, and the 100,000-note model performed better than the 20,000-note model. The best overall model in terms of test accuracy and hit ratio @ 10 was the model with the 1:10 positive class to negative class ratio that was trained with text features derived from the million notes. It can also be observed that some models trained on 20,000 and 100,000 notes had a worse performance than the original NCF model - this is hypothesized to occur because the reduction of notes results in the removal of some ICD codes from the data. The addition of data from clinical notes resulted in improved model predictions, as the increase to 1 million notes resulted in models with superior performance to the NCF models. 
It is thought that models trained on all notes present in the MIMIC-III database would result in an even greater margin of accuracy. ## V Conclusions and Future Work This study evaluated two different recommender systems based on deep learning methods. The recommender system algorithm using Neural Collaborative Filtering showed good accuracy (89.46%) and hit ratio (85%) in predicting subject and ICD-9 code co-occurrence. The use of hybrid filtering with the text features improved NCF model performance, with the best model (1:10 with 1 million notes) improving accuracy (94.40%) and hit ratio (85.36%). Both recommender systems were robust, with a high AUC of 93.50% for the NCF model and an AUC of 95.05% for the 1:10 with 1 million notes model. Based on the results, both the deep hybrid filtering model and the Neural Collaborative Filtering model have the potential to improve prediction of comorbidity.
2308.07609
Three alternative model-building strategies using quasi-Hermitian time-dependent observables
A $(K+1)-$plet of non-Hermitian and time-dependent operators (say, $\Lambda_j(t)$, $j=0,1,\ldots,K$) can be interpreted as the set of observables characterizing a unitary quantum system. What is required is the existence of a self-adjoint and, in general, time-dependent operator (say, $\Theta(t)$, called inner product metric) making the operators quasi-Hermitian, $\Lambda_j^\dagger(t)\Theta(t)=\Theta(t)\Lambda_j(t)$. The theory (called non-Hermitian interaction-picture, NIP) requires a separate description of the evolution of the states $\psi(t)$ (realized, via Schr\"{o}dinger-type equation, by a generator, say, $G(t)$) and of the observables themselves (a different generator (say, $\Sigma(t)$) occurs in the related non-Hermitian Heisenberg-type equation). Every $\Lambda_j(t)$ (and, in particular, Hamiltonian $H(t)=\Lambda_0(t)$) appears isospectral to its hypothetical self-adjoint (but, by assumption, prohibitively user-unfriendly) avatar $\lambda_j(t)=\Omega(t)\Lambda_j(t)\Omega^{-1}(t)$ with $\Omega^\dagger(t)\Omega(t)=\Theta(t)$. In our paper the key role played by identity $H(t)=G(t)+\Sigma(t)$ is shown to imply that there exist just three alternative meaningful implementations of the NIP approach, viz., ``number one'' (a ``dynamical'' strategy based on the knowledge of $H(t)$), ``number two'' (a ``kinematical'' one, based on the Coriolis force $\Sigma(t)$) and ``number three'' (in the literature, such a construction based on $G(t)$ is most popular but, paradoxically, it is also most complicated).
Miloslav Znojil
2023-08-15T07:35:42Z
http://arxiv.org/abs/2308.07609v1
**Three alternative model-building strategies using quasi-Hermitian time-dependent observables** ## Abstract In the conventional (so called Schrodinger-picture) formulation of quantum theory the operators of observables are chosen self-adjoint and time-independent. In the recent innovation of the theory the operators can be not only non-Hermitian but also time-dependent. The formalism (called non-Hermitian interaction-picture, NIP) requires a separate description of the evolution of the time-dependent states \(\psi(t)\) (using Schrodinger-type equations) as well as of the time-dependent observables \(\Lambda_{j}(t)\), \(j=1,2,\ldots,K\) (using Heisenberg-type equations). In the unitary-evolution dynamical regime of our interest, both of the respective generators of the evolution (viz., in our notation, the Schrodingerian generator \(G(t)\) and the Heisenbergian generator \(\Sigma(t)\)) have, in general, complex spectra. Only the spectrum of their superposition remains real. Thus, only the observable superposition \(H(t)=G(t)+\Sigma(t)\) (representing the instantaneous energies) should be called Hamiltonian. In applications, nevertheless, the mathematically consistent models can be based not only on the initial knowledge of the energy operator \(H(t)\) (forming a "dynamical" model-building strategy) but also, alternatively, on the knowledge of the Coriolis force \(\Sigma(t)\) (forming a "kinematical" model-building strategy), or on the initial knowledge of the Schrodingerian generator \(G(t)\) (forming, for some reasons, one of the most popular strategies in the literature). In our present paper every such choice (marked as "one", "two" or "three", respectively) is shown to lead to a construction recipe with a specific range of applicability. ## Keywords quantum theory of unitary systems; non-Hermitian interaction representation; non-stationary physical inner products; model-building classification ## 1 Introduction One of the sources of inspiration of our present study was a compact review [1] of the history of alternative formulations of quantum mechanics. In their paper dated 2002 the authors asked how many formulations of quantum mechanics we have. For pedagogical reasons, nevertheless, their list remained incomplete. Surprisingly enough, it did not include Dirac's "intermediate-" _alias_ "interaction-picture" (IP) form of the Hermitian theory. The authors also avoided any reference to the innovative paper [2] in which the conventional lists of the available alternative formulations of quantum mechanics were complemented, as early as in 1992, by a manifestly non-Hermitian reformulation of Schrodinger picture (NSP, see also its more recent comprehensive review in [3]). The latter omission was in fact not too surprising because the NSP (also known as "quasi-Hermitian" [2, 4]) formalism has only been developed between the years 1998 and 2007 when Bender with multiple coauthors made the idea widely known and popular [5, 6]. In spite of certain scepticism among specialists (as verbalized, e.g., by Streater [7] or, more recently, by several mathematicians [8, 9, 10, 11, 12]), Bender with his coauthors persuaded the quantum-physics community that there exists a broad class of innovative stationary realizations of quantum theory (including, importantly, quantum field theory [13]) in which the manifestly non-Hermitian candidates \(H\) for the Hamiltonians with real spectra could be phenomenologically appealing as well as mathematically sufficiently user-friendly (cf. 
also the newer reviews of the field in [14, 15, 16, 17, 18, 19, 20]). Before the year 2008 the next-step transition to the non-stationary non-Hermitian theory has been considered impossible [21]. At the same time, the idea of the _stationary_ unitary evolution "in non-Hermitian disguise" has been widely accepted. People realized that such a formulation of the theory remains equivalent to its standard textbook predecessors. For this reason, the presentation of the innovation could have started from the conventional stationary Schrodinger equation \[{\rm i}\,\frac{d}{dt}\,|\psi(t)\!\!\succ\,=\mathfrak{h}\,|\psi(t)\!\!\succ\, \,,\ \ \ \ \mathfrak{h}=\mathfrak{h}^{\dagger}\neq\mathfrak{h}(t)\,. \tag{1}\] The generalization (attributed, often, to Dyson [22]) proved based just on a replacement of such an equation by its non-Hermitian (or, better, hiddenly Hermitian) upgrade mediated by an invertible time-independent mapping \(\Omega\neq\Omega(t)\), \[{\rm i}\,\frac{d}{dt}\,|\psi(t)\rangle=H\,|\psi(t)\rangle\,,\ \ \ \ H=\Omega^{-1}\, \mathfrak{h}\,\Omega\neq H^{\dagger}\,,\ \ \ H\neq H(t)\,. \tag{2}\] During the year 2008 the scientific community became prepared to accept the proposal of making the non-Hermitian theory _non-stationary_[23]. In the recent application of this approach to the so called wrong-sign interaction potentials [24] we pointed out that within the corresponding form of quantum theory called non-Hermitian interaction picture (NIP, see also its compact review in [25]), the unitarity of the evolution of the so called closed quantum systems can still be guaranteed in consistent manner. In contrast to our preceding NIP-based paper [24], its present continuation will be example-independent. Only a few remarks on possible applications will be added, mainly in Appendix A. In the main body of our new, more methodically oriented paper we will discuss the three main model-building strategies. In a systematic manner the presentation of our results will start in section 2 in which we will review the basic ideas behind the existing hiddenly-Hermitian quantum theories. In sections 3 - 5 we will then outline the three respective construction options emphasizing, in each of them, the necessity of a clear separation of what is assumed and postulated from what is calculated, reconstructed and deduced. A compact summary of our considerations is finally added in section 6. ## 2 The abstract NIP quantum theory ### The concept of non-stationary non-Hermitian observables In conventional textbooks one often reads about the choice of a "picture" _alias_ "representation" of quantum mechanics (cf., e.g., [26]). Let us temporarily return, therefore, to the Hermitian theory. One then usually mentions just the Schrodinger picture (SP) and the Heisenberg picture (HP). Sometimes, another option is presented under the name of "intermediate picture" (IP, cf. pp. 321 - 322 in [26]). In this case one is assumed to split a given self-adjoint Hamiltonian in its two separate self-adjoint components, \(\mathfrak{h}=\mathfrak{h}_{S}+\mathfrak{h}_{H}\). Typically, \(\mathfrak{h}_{S}\) is designed to control the evolution of states (i.e., it appears as a generator in a Schrodinger-type equation) while \(\mathfrak{h}_{H}\) is interpreted as entering the Heisenberg-type equations for the relevant and, necessarily, time-dependent but still self-adjoint observables [26]. 
After one moves to the above-mentioned non-Hermitian (or, using a mathematically more precise terminology, quasi-Hermitian [2, 4]) reformulations of quantum theory, a part of the terminology survives. In particular, in the non-Hermitian interaction picture (NIP, [25]) we still encounter the Heisenberg's generator (say, \(\Sigma(t)\), i.e., the operator controlling the time-evolution of the non-stationary NIP observables [27]) as well as the Schrodinger's generator (to be denoted as \(G(t)\)) entering the Schrodinger-type evolution equations. The new feature of the more general non-Hermitian theory is that one can speak about a quantum Coriolis force \(\Sigma(t)\)[28] filling the interval or space between its NSP extreme \(\Sigma^{(NSP)}(t)=0\) and its non-Hermitian HP extreme such that \(\Sigma^{(NHP)}(t)=H^{(NHP)}(t)\)[29]. Moreover, one can formally define, not quite expectedly [30], the superposition of the generators \[H(t)=G(t)+\Sigma(t)\,. \tag{3}\] This operator carries a clear physical meaning of an isospectral avatar of its self-adjoint SP partner Hamiltonian \(\mathfrak{h}\) (cf. Eq. (2)). The relevance of such a property has been emphasized in [24] where we paid attention to the very specific non-Hermitian anharmonic-oscillator models. We arrived there at the conclusion that one cannot easily transfer the "picture-selection" experience gained during the study of the special "wrong-sign" oscillators to the other non-stationary quasi-Hermitian quantum systems. Every NIP-described unitary quantum system has to be treated as specific. In both of the above-mentioned NSP and NHP special cases relation (3) degenerates to an identity. We either get the coincidence \(G^{(NSP)}(t)\equiv H^{(NSP)}(t)\) with disappearing \(\Sigma^{(NSP)}(t)=0\), or \(\Sigma^{(NHP)}(t)\equiv H^{(NHP)}(t)\) with disappearing \(G^{(NHP)}(t)=0\). Incidentally, it is worth adding that the widely used attribute "non-Hermitian" of the theory can be misleading because we mean non-Hermitian in our working space \({\cal H}_{(unphysical)}\) but not in the correct space of states \({\cal H}_{(physical)}\). A better name would certainly be "hiddenly Hermitian" theory, meaning that the operator \(\Theta=\Omega^{\dagger}\Omega\) of the inner-product metric in \({\cal H}_{(physical)}\) is nontrivial, \(\Theta\neq I\)[2, 3]. In the most general NIP setting the latter operator is also assumed manifestly time-dependent, \(\Theta=\Theta(t)\). The flexibility is enhanced because \(G^{(NIP)}(t)\neq 0\neq\Sigma^{(NIP)}(t)\). At the same time, even in the NIP framework one can distinguish between the different ways of encoding the input information about dynamics into operators. Thus, in our present paper we will speak about a "strategy number one" (cf. section 3 below), a "strategy number two" (cf. section 4) and "strategy number three" (discussed in section 5). In other words, we are now going to propose that every choice of _one_ of the operators in Eq. (3) might be interpreted as leading to a specific eligible representation of the (by assumption, unitary) evolution. Our forthcoming analysis of a triplet of refined NIP formulations of quantum theory was in fact motivated by the recent growth of the diversity of applications of the non-stationary versions of the non-Hermitian operators in the various branches of physics [16]. In these applications the building of models appeared often restricted by a requirement of having an exact, non-numerical form of the model. 
Such a solvability requirement made the conclusions rather special and model-dependent. In what follows we will accept, therefore, a different attitude. We will try to separate, clearly, the form and extent of the input information about the system from a systematic and consistent step-by-step reconstruction of the consequences of the assumptions. As we already indicated, we will arrive at three alternative model-building strategies. In the construction strategy number one as described in section 3 we will accept the most traditional "dynamical" point of view of Scholtz et al [2]. We will emphasize that, in some sense, the observability property makes the energy-representing operator \(H^{(NIP)}(t)=H^{(NIP)}_{(one)}(t)\) a unique candidate for being called Hamiltonian. Indeed, its time-dependent choice determines the quantum system's dynamics in a phenomenologically satisfactory manner even in the non-stationary scenario. In the alternative construction strategy number two as presented in section 4 we will start from the knowledge of the physical-Hilbert-space "kinematics". Having the knowledge of the Coriolis-force operator \(\Sigma^{(NIP)}(t)=\Sigma^{(NIP)}_{(two)}(t)\) at all times at our disposal we will reconstruct the eligible forms of the dynamics in a way which will be shown to be exceptionally straightforward. In the strategy number three we will assume that a key technical as well as phenomenological role is played by the time-dependence of the states. Thus, what is assumed to be given in advance is the Schrodingerian generator \(G^{(NIP)}(t)=G^{(NIP)}_{(three)}(t)\). ### The physical inner-product metric In the NIP picture, both the states \(\psi\) (i.e., the elements of a suitable Hilbert space) and the operators \(\Lambda=\Lambda_{j}\) (representing observables) are allowed to vary with time. The respective generators of evolution, i.e., an operator \(G(t)\) entering the Schrodinger-type equation for \(\psi=\psi(t)\), and another operator \(\Sigma(t)\) in the Heisenberg-type equation for \(\Lambda_{j}=\Lambda_{j}(t)\) may be (and, in the literature, quite often are) both called "Hamiltonians". For this reason it may be useful to try to avoid misunderstandings by speaking, more explicitly, about a "non-Hermitian time-dependent "Schrodinger-equation Hamiltonian" \(G(t)=G^{(NIP)}(t)\) in the former case. We may also need to amend the denotation of the "Heisenberg-equation-Hamiltonian", \(\Sigma(t)=\Sigma^{(NIP)}(t)\). In the unitary evolution scenario both of the auxiliary NIP generators \(G(t)\) and \(\Sigma(t)\) are just auxiliary and non-observable. Their spectra may be complex - for illustration see, e.g., the schematic examples in [31, 32]. Only their instantaneous-energy-representing sum (3) may be considered, in a way shown in [23, 30], observable. For this reason we will call such an operator "observable Hamiltonian" or simply "Hamiltonian". We will interpret the relationship (3) between the three eligible operators (i.e., between the "Hamiltonians" in a broader sense) as a starting point of the theoretical constructive efforts. We propose that in such a setting one picks up simply one of these operators as "known", i.e., as an operator carrying a decisive portion of the "physical" input information about the unitary quantum system in question. In this manner one arrives at the three alternative quantum-model-building strategies as described below. As long as the form of the NIP Hamiltonian \(H(t)\) of Eq. 
(3) has to be flexible and, in particular, not necessarily Hermitian, the underlying "working" Hilbert space (say, \({\cal F}\)) can be declared, in general, unphysical, playing just the role of a mathematical tool, \({\cal F}\equiv{\cal H}_{(unphysical)}\). In 1998, such an idea of working with observables in a mathematically friendlier non-Hermitian representation has been made particularly attractive and popular by Bender with Boettcher [5]. One of its main consequences is that our auxiliary, computation-friendly Hilbert space \({\cal F}\) (i.e., in many realistic models, just \(L^{2}(\mathbb{R}^{d})\)[14, 15]) has to be complemented by another, correct physical alternative \({\cal H}_{(physical)}\equiv{\cal H}\neq{\cal F}\). In the words of the older comprehensive review [2] one only has to clarify the relationship between \({\cal F}\) and \({\cal H}\) by establishing "a criterion for a set of non-Hermitian operators" (i.e., for the above-mentioned set of operators \(\Lambda_{j}\)) "to constitute a consistent quantum mechanical system" which "involves the construction of a [physical Hilbert-space] metric", i.e., which involves a representation of \({\cal H}\) in \({\cal F}\). In our present notation this means that there must exist a suitable "inner-product-metric" operator \(\Theta\) such that \[\Lambda_{j}^{\dagger}\,\Theta=\Theta\,\Lambda_{j}\,,\ \ \ \ j=0,1,\ldots,J\,. \tag{4}\] This relation would, indeed, render all of our observables \(\Lambda_{j}\) self-adjoint in \({\cal H}\) and, at the same time, non-Hermitian and, in a way specified by definition (4), "quasi-Hermitian" in \({\cal F}\)[2, 4]. This being said, the feasibility of the NIP-based model-building strategy is still marred by the fairly complicated nature of the description of the evolution using the two independent generators \(G(t)\) and \(\Sigma(t)\). This is one of the central questions and challenges in the theory. A simplification of the formalism is needed and sought in the various methodically (rather than phenomenologically) motivated restrictions of the admissible classes of eligible non-Hermitian Hamiltonians. In what follows we will analyze and describe, systematically, the possibilities of such a simplification. ## 3 The first, dynamical-input strategy In any model-building scenario reflecting relation (3) which connects the three different "Hamiltonians" one may start from an "input" knowledge of any one of them. Still, the "input information" selection of the observable instantaneous energy, viz., of the operator \[H^{(NIP)}_{(one)}(t)\neq 0 \tag{5}\] looks most natural. Such an option could be called "dynamical", being most closely connected with the philosophy of Scholtz et al [2] who treated all of their generalized non-Hermitian quantum models as specified by their observables. Once we restrict attention just to the observable Hamiltonian, we have to deliver, first of all, a rigorous proof of the reality of the energies. Secondly, a consistent probabilistic interpretation of the model requires a confirmation of the quasi-Hermiticity of the observable Hamiltonian. Thus, whenever the input information is encoded in operator \(H_{(one)}(t)\) (as well as in its conjugate form \(H^{\dagger}_{(one)}(t)\)), our first task is to solve the quasi-Hermiticity-constraint equation \[H^{\dagger}_{(one)}(t)\,\Theta_{(one)}(t)=\Theta_{(one)}(t)\,H_{(one)}(t) \tag{6}\] for an unknown metric \(\Theta_{(one)}(t)\). The solution of such a linear algebraic problem is non-unique but, conceptually, straightforward. 
At any time \(t\), in a way indicated in [25], we may simply follow the notation convention of review [30] and initiate the search for all (or at least for some) of the admissible metric operators \(\Theta_{(one)}(t)\) by solving the two instantaneous Schrodingerian eigenvalue problems \[H_{(one)}(t)\,|\psi^{(one)}(t)\rangle=E_{\psi}(t)\,|\psi^{(one)}(t)\rangle\,, \hskip 19.916929ptH^{\dagger}_{(one)}(t)\,|\psi^{(one)}(t)\rangle\!\rangle=E_ {\psi}(t)\,|\psi^{(one)}(t)\rangle\!\rangle\,. \tag{7}\] In our notation, the symbol \(\psi\) can be read either as an index (numbering the elements of a complete set of states) or, more traditionally, as a letter which identifies a state in its two different and complementary (i.e., single-ket and doubled-ket) realizations. In both of the equations in (7) the energy eigenvalues remain the same because they are, by assumption, observable (i.e., real), discrete (because they have to represent bound states [33]) and bounded from below (because the system in question is assumed stable [6]). Nevertheless, due to the non-Hermiticity \(H_{(one)}(t)\neq H^{\dagger}_{(one)}(t)\) of the Hamiltonian, the respective two sets of the eigenvectors in (7) are different. In the context of physics the knowledge of both of them is necessary because both of them contribute to the probabilistic predictions, i.e., to the matrix elements \[\langle\!\langle\psi(t)|\Lambda(t)|\psi(t)\rangle\!\rangle \tag{8}\] in which \(\Lambda(t)\) denotes any observable of interest and in which \(t=t_{f}\) is a time of its measurement. In the language of mathematics this means that what is needed for a definition of a state is in fact an elementary dyadic projector \[\pi_{\psi}(t)=|\psi^{(one)}(t)\rangle\,\frac{1}{\langle\!\langle\psi^{(one)}(t )|\psi^{(one)}(t)\rangle}\,\langle\!\langle\psi^{(one)}(t)| \tag{9}\] rather than just one of the two alternative versions of the state vector. This being clarified we may recall their biorthogonality property [34, 35] and, via a suitable rescaling, we may upgrade it to a biorthonormality and bicompleteness, \[\langle\!\langle\psi^{(one)}(t)|\phi^{(one)}(t)\rangle=\delta_{\psi\phi}\,, \hskip 14.226378pt\sum_{\psi}\,|\psi^{(one)}(t)\rangle\langle\!\langle\psi^{( one)}(t)|=I\,. \tag{10}\] Formally, _all_ of the metrics compatible with Eq. (6) can be then expressed in terms of the wave-function solutions of the second, conjugate-operator equation in (7) [35], \[\Theta_{(one)}(t)=\sum_{\psi}\,|\psi^{(one)}(t)\rangle\!\rangle\,\kappa_{n}^ {(one)}(t)\,\langle\!\langle\psi^{(one)}(t)|\,. \tag{11}\] It is easy to verify that in such a formula, all of the parameters \(\kappa_{n}^{(one)}\) are arbitrary. For the reasons as explained in [2], they only have to be real and positive. Also, for the sake of keeping the formalism reasonably tractable (see a more explicit formulation of this reason in [25]), their recommended choice will be time-independent, \(\kappa_{n}^{(one)}(t)=\kappa_{n}^{(one)}(0)\). The variability of the latter parameters can be interpreted either as a formal kinematical freedom of the theory (see, e.g., [36]) or, better, as a manifestation of the above-mentioned incompleteness of the dynamical-input information when restricted to the single observable \(H_{(one)}(t)\). Indeed, the formalism admits (and also, for the sake of completeness, requires) an additional information about dynamics simulated by the choice of parameters \(\kappa_{n}^{(one)}\). 
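For a finite-dimensional illustration of this dynamical-input recipe one may consider, at a single instant \(t\), a small non-Hermitian matrix with a real spectrum and construct one admissible metric (11) from its left eigenvectors. The following numerical sketch (ours; the toy Hamiltonian and the values of \(\kappa_{n}^{(one)}\) are arbitrary illustrative choices, not taken from the present paper) verifies the quasi-Hermiticity constraint (6) and the positivity of the resulting metric.

```python
import numpy as np

# a manifestly non-Hermitian toy matrix; its tridiagonal structure with positive
# products of the off-diagonal couplings guarantees that its spectrum is real
H = np.array([[1.0, 0.5, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.1, 3.0]])

E, R = np.linalg.eig(H)              # right kets |psi_n> as the columns of R, cf. (7)
print(np.allclose(E.imag, 0.0))      # reality of the instantaneous spectrum: True

Rinv = np.linalg.inv(R)              # its rows encode the left kets |psi_n>>, already
                                     # biorthonormalised in the sense of eq. (10)

kappa = np.array([1.0, 2.0, 0.5])    # arbitrary positive (time-independent) kappa_n
Theta = Rinv.conj().T @ np.diag(kappa) @ Rinv      # one admissible metric, eq. (11)

print(np.allclose(H.conj().T @ Theta, Theta @ H))  # quasi-Hermiticity (6): True
print(np.all(np.linalg.eigvalsh(Theta) > 0))       # positivity of the metric: True
```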
More consistently and directly, of course, such additional information about the dynamics (mimicked above by the free parameters \(\kappa_{n}^{(one)}\)) could and should be provided by the introduction of additional observables - see a more detailed discussion of the suppression of the ambiguity in [2]. ## 4 The second, Coriolis-choice strategy The oldest formulation of quantum mechanics, viz., the so called Heisenberg picture (HP, [26]), which appeared in June 1925 [1], can be characterized as "kinematical" since a strict time-independence of the wave functions is required, \(\psi(t)=\psi(0)\). In the Hermitian theory one simply puts \(G^{(HP)}(t)=0\) so that relation (3) degenerates to the identity \(H^{(HP)}(t)=\Sigma^{(HP)}(t)\). Just the above-discussed observable-Hamiltonian dynamical input is reobtained. In a properly generalized non-Hermitian NIP setup the situation is different [29]. A formulation of strategy number two becomes less straightforward. Although the "kinematical" design of models may still start from the initial specification of the NIP Coriolis force at all of the relevant times \(t\), \[\Sigma^{(NIP)}_{(two)}(t)\neq 0 \tag{12}\] one has to admit also a non-triviality of \(G^{(NIP)}_{(two)}(t)\neq 0\) and of \(H^{(NIP)}_{(two)}(t)\neq\Sigma^{(NIP)}_{(two)}(t)\) in (3). Once we pick up the kinematics (i.e., operator \(\Sigma_{(two)}(t)\) and/or its conjugate partner \(\Sigma^{\dagger}_{(two)}(t)\)), we immediately see that this opens the way to the reconstruction of the unknown Dyson-mapping operator \(\Omega_{(two)}(t)\). For the purpose, indeed, it is sufficient to recall its definition (see the detailed introduction of this concept in [30]) and to re-write it in the following equivalent form \[\mathrm{i}\frac{d}{dt}\Omega_{(two)}(t)=\Omega_{(two)}(t)\,\Sigma_{(two)}(t) \tag{13}\] of the operator differential equation of the first order. Its solution yields the Dyson-map operator at all times from any preselected initial value at \(t=0\). For the sake of symmetry we may either conjugate the solution or solve the conjugate problem \[\mathrm{i}\frac{d}{dt}\Omega^{\dagger}_{(two)}(t)=-\Sigma^{\dagger}_{(two)}(t)\,\Omega^{\dagger}_{(two)}(t)\,. \tag{14}\] In the subsequent step we become able to define the correct physical Hilbert-space metric as the product of the two Dyson maps, \[\Theta_{(two)}(t)=\Omega^{\dagger}_{(two)}(t)\,\Omega_{(two)}(t)\,. \tag{15}\] The evaluation of this operator product enables us to specify all of the eligible Hamiltonians \(H_{(two)}(t)\) as (naturally, non-unique [2, 35]) solutions of Dieudonne's [4] quasi-Hermiticity constraint \[H^{\dagger}_{(two)}(t)\,\Theta_{(two)}(t)=\Theta_{(two)}(t)\,H_{(two)}(t)=A_{(two)}(t)\,. \tag{16}\] In a way inspired by the non-Hermitian random-matrix theories [37] we introduced here a new operator \(A_{(two)}(t)\) encoding the input information about dynamics which is still missing. It is worth emphasizing that the latter operator is almost arbitrary, restricted merely by the requirement, implicit in (16), of its Hermiticity, \(A_{(two)}(t)=A^{\dagger}_{(two)}(t)\). This immediately yields the ultimate explicit definitions of both of the remaining unknown components of the model, \[H_{(two)}(t)=\Theta^{-1}_{(two)}(t)\,A_{(two)}(t)\,,\ \ \ \ \ G_{(two)}(t)=H_{(two)}(t)-\Sigma_{(two)}(t)\,. \tag{17}\] The construction is completed. ## 5 The third, state-evolution strategy Although Erwin Schrodinger's formulation of quantum mechanics is not the oldest one [1], its extreme conceptual as well as computational appeal and simplicity make it a dominant paradigm in textbooks [26]. 
For this reason even Hynek Bila, one of my previous PhD students, rejected the terminology and philosophy of my papers [23, 30] and, even in the NIP regime, he insisted on using the dedicated name "Hamiltonian", strictly, just for the denotation of Schrodinger's "input physical information" generator of the evolution of wave functions [38, 39], \[G^{(NIP)}_{(three)}(t)\neq 0\,. \tag{18}\] Bila's convention was later accepted by a number of other researchers [3, 40, 41]. They were influenced by the widely accepted _stationary_ non-Hermitian quantum theory of reviews [2, 3, 6] in which the time-independent Schrodinger's operator \(G(t)=G(0)\) coincides with its stationary energy-representing partner \(H(t)=H(0)\). Obviously, this operator still had a real spectrum and carried a fully consistent physical meaning of an observable. Due to a rather naive straightforward transfer of terminology to non-stationary scenarios the key role has been allocated to \(G(t)\) even when \(G(t)\neq H(t)\). Incidentally, the change of the convention appeared to have also several positive aspects and consequences. The main one was that after a shift of attention from the closed-system theory to the open-system theory [42] or even beyond the domain of quantum physics [43, 44], the loss of the observability status of \(G(t)\) (cf. its proof in Theorem 2 of review [3]) became irrelevant. Thus, in the study of non-unitary, open quantum systems, serendipitously, Bila's terminology proved inspiring. A number of interesting innovations of the traditional mathematical concepts have been revealed in this area: Cf., e.g., the new use of the Lewis-Riesenfeld invariants as described in [45], an innovative introduction of a generalized entropy in non-Hermitian systems in [46], a reinterpretation of the concept of \({\cal PT}-\)symmetry in [47] or, last but not least, a new wave of interest in non-linear theories, quantum (i.e., effective [14]) as well as non-quantum [15]. Even after one returns to the description of the closed quantum systems, the initial selection of \(G(t)\) need not destroy an internal consistency of the theory. A detailed description of the related technicalities may be found, e.g., in section 5.3.2 of paper [48]. The point is that even the randomly emerging complexifications of the spectrum of \(G(t)\) may be kept compatible with the unitarity of evolution of the underlying closed quantum system. Via a few schematic non-stationary toy models this was illustrated in [31, 32]. A model-independent methodical support of the \(G(t)-\)based approach can be based on our preceding considerations. First of all, we have to return to the concept of a biorthonormal and bicomplete basis. Our assumption of the knowledge of operator (18) at all times implies that it is now sufficient to know the pure-state-representing projector (9) just at \(t=0\), \[\pi_{\psi}(0)=|\psi^{(three)}(0)\rangle\,\frac{1}{\langle\!\langle\psi^{(three)}(0)|\psi^{(three)}(0)\rangle}\,\langle\!\langle\psi^{(three)}(0)|\,. \tag{19}\] From the point of view of physics this means that the theory admits the preparation of a more or less arbitrary initial state of the quantum system in question. 
In the next step we may now recall the availability of \(G(t)\) and solve the two evolution-equation analogues \[{\rm i}\,\frac{d}{dt}\,|\psi^{(three)}(t)\rangle=G_{(three)}(t)\,|\psi^{(three)}(t)\rangle \tag{20}\] and \[{\rm i}\,\frac{d}{dt}\,|\psi^{(three)}(t)\rangle\!\rangle=G^{\dagger}_{(three)}(t )\,|\psi^{(three)}(t)\rangle\!\rangle \tag{21}\] of the two Schrodingerian eigenvalue problems (7) (see also the details in [23]). Next, via an appropriate rescaling of the initial-value vectors we may achieve their bi-orthonormality and bi-completeness, \[\langle\!\langle\psi^{(three)}(0)|\phi^{(three)}(0)\rangle=\delta_{\psi\phi}\,, \ \ \ \ \sum_{\psi}\,|\psi^{(three)}(0)\rangle\langle\!\langle\psi^{(three)}(0)|=I\,. \tag{22}\] Finally, recalling the theory [25] we may extend the validity of these postulates to all times \(t\), \[\langle\!\langle\psi^{(three)}(t)|\phi^{(three)}(t)\rangle=\delta_{\psi\phi}\,, \ \ \ \ \sum_{\psi}\,|\psi^{(three)}(t)\rangle\langle\!\langle\psi^{(three)}(t)|=I\,. \tag{23}\] Partial methodical parallels with the dynamical-input strategy emerge: Whenever our initial-time choice of the biorthonormalized and bicomplete basis of Eq. (22) is made compatible with the input-information form of one of the observables, (i.e., say, of the energy operator) at \(t=0\), \[H_{(three)}(0)=\sum_{\psi}\,|\psi^{(three)}(0)\rangle\,E_{\psi}^{(three)}(0)\, \langle\!\langle\psi^{(three)}(0)|\,, \tag{24}\] we may immediately reconstruct the same operator at all times \(t>0\). Naturally, also the construction of the metric acquires the explicit form \[\Theta_{(three)}(t)=\sum_{\psi}\,|\psi^{(three)}(t)\rangle\!\rangle\,\kappa_{n }^{(three)}(t)\,\langle\!\langle\psi^{(three)}(t)|\,. \tag{25}\] At this stage of development of the theory it is useful to notice that many of its applications (cf., e.g., [27]) are just considered in a finite-dimensional Hilbert space. Then, many operators of interest (i.e., \(N\) by \(N\) matrices with \(N<\infty\)) may happen to form a representation of a suitable Lie algebra. This does not only render the constructions feasible but it also enables us to factorize the metric into a product of Dyson maps, \[\Theta_{(three)}(t)=\Omega^{\dagger}_{(three)}(t)\,\Omega_{(three)}(t). \tag{26}\] The latter formula may be compared with Eq. (15) where the construction proceeded from \(\Omega\) to \(\Theta\), i.e., in the opposite direction. In other words, the Dyson map may formally be written as the sum \[\Omega_{(three)}(t)=\sum_{\psi}\,|\psi_{(three)}(t)\!\succ\ \sqrt{\kappa_{n}^{( three)}(0)}\ \langle\!\langle\psi_{(three)}(t)| \tag{27}\] where the new auxiliary basis \(\{|\psi_{(three)}(t)\!\succ\ \}\) may be chosen orthonormal. On this level of reconstruction one is already able to define the Coriolis force, \[\Sigma_{(three)}(t)=\frac{\mathrm{i}}{\Omega_{(three)}(t)}\,\dot{\Omega}_{(three)}(t) \tag{28}\] where the dot represents the differentiation with respect to time. Now, the last step yielding, finally, the observable Hamiltonian and its decomposition \[H_{(three)}(t)=G_{(three)}(t)+\Sigma_{(three)}(t) \tag{29}\] is already trivial. ## 6 Summary It is well known that the price to be paid for the generality and flexibility of the NIP formulation of non-stationary quantum theory in both of its quantum-mechanical and field-theoretical versions is not too low. Only too many evolution equations have to be solved. In our present paper we managed to show that a systematic subdivision of the related model-building strategies could simplify the picture thoroughly. 
The core of our message lies in the observation that the most natural interpretation of all of the eligible NIP-based mathematical constructions of quantum models should be based on a clear separation of the consistent implementation alternatives. We have shown that an explicit guide to the choice among these alternatives has to be correlated with a context-dependent dominance of one of the operators \(H(t)\), \(\Sigma(t)\) or \(G(t)\). We argued that such an identification of dominance leads directly to the three different NIP-implementation recipes. Each of them may be characterized by its specific mathematical merits as well as by a natural subdivision and classification of the related phenomenological intentions. Thus, what we described are the three mutually complementary forms of the implementation of the abstract NIP quantum theory in the situations where the set of the underlying unitary (i.e., closed) quantum systems can be subdivided according to more detailed practical criteria. The resulting construction process seems useful, enhancing the tractability of systems living in a non-stationary dynamical regime, in which the use of the hiddenly Hermitian representations of observables might also throw new light on the physical interpretation of various important open questions, say, in cosmology [49]. In all of these contexts, a clear separation of the input information about the system from the resulting predictions seems to play a more important role in non-stationary systems than in their stationary predecessors, because the increase in the complexity of the mathematics is enormous. The traditional guidance by the stationary constructions, using analogies with the techniques of linear algebra, becomes, in the NIP framework, replaced by the necessity of solving complicated operator evolution equations. We believe that such a challenge has to be accepted. As a reward, indeed, the NIP formalism may be expected to open new ways of describing a multitude of deeply non-stationary phenomena.
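As a concrete check of the Coriolis-choice recipe of Section 4, the following toy sketch integrates Eq. (13) for a constant two-by-two matrix, builds the metric (15), and verifies the quasi-Hermiticity constraint (16) numerically. The particular matrices Sigma and A below, as well as the use of numpy/scipy, are arbitrary illustrative choices introduced here and are not part of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 inputs (not taken from the paper): a non-Hermitian Coriolis
# operator Sigma, cf. Eq. (12), and a Hermitian dynamical input A, cf. Eq. (16).
Sigma = np.array([[0.0, 1.0], [0.3, 0.5j]], dtype=complex)
A = np.array([[2.0, 1.0], [1.0, 3.0]], dtype=complex)

def Omega(t):
    """For constant Sigma, Eq. (13), i dOmega/dt = Omega Sigma, with Omega(0) = I,
    is solved by Omega(t) = exp(-i Sigma t)."""
    return expm(-1j * Sigma * t)

t = 0.7
Om = Omega(t)
Theta = Om.conj().T @ Om                  # physical Hilbert-space metric, Eq. (15)
H = np.linalg.inv(Theta) @ A              # observable Hamiltonian, Eq. (17)
G = H - Sigma                             # Schroedinger generator G = H - Sigma, Eq. (17)

# Dieudonne quasi-Hermiticity constraint (16): H^dagger Theta = Theta H = A.
print(np.allclose(H.conj().T @ Theta, A), np.allclose(Theta @ H, A))
```

Both checks return True up to round-off, which is exactly the consistency that the Hermiticity requirement on the input operator A guarantees.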
2303.12610
Distributed Safe Control Design and Safety Verification for Multi-Agent Systems
We propose distributed iterative algorithms for safe control design and safety verification for networked multi-agent systems. These algorithms rely on distributing a control barrier function (CBF) related quadratic programming (QP) problem. The proposed distributed algorithm addresses infeasibility issues of existing schemes by dynamically allocating auxiliary variables across iterations. The resulting control input is guaranteed to be optimal, and renders the system safe. Furthermore, a truncated algorithm is proposed to facilitate computational implementation. The performance of the truncated algorithm is evaluated using a distributed safety verification algorithm. The algorithm quantifies safety for a multi-agent system probabilistically, using a certain locally Lipschitz continuous feedback controller by means of CBFs. Both upper and lower bounds on the probability of safety are obtained using the so called scenario approach. Both the scenario sampling and safety verification procedures are fully distributed. The efficacy of our algorithms is demonstrated by an example on multi-robot collision avoidance.
Han Wang, Antonis Papachristodoulou, Kostas Margellos
2023-03-22T14:48:48Z
http://arxiv.org/abs/2303.12610v2
# Distributed Safe Control Design and Safety Verification for Multi-Agent Systems ###### Abstract We propose distributed iterative algorithms for safe control design and safety verification for networked multi-agent systems. These algorithms rely on distributing a control barrier function (CBF) related quadratic programming (QP) problem. The proposed distributed algorithm addresses infeasibility issues of existing schemes by dynamically allocating auxiliary variables across iterations. The resulting control input is guaranteed to be optimal, and renders the system safe. Furthermore, a truncated algorithm is proposed to facilitate computational implementation, with probabilistically guaranteed constraint satisfaction, while generating a Lipschitz continuous control input. We further develop a distributed safety verification algorithm to quantify safety for a multi-agent system by means of CBFs in probability. Both upper and lower bounds on the probability of safety are obtained using the so called scenario approach. Both the scenario sampling and safety verification procedures are fully distributed. The efficacy of our algorithms is demonstrated by an example on multi-robot collision avoidance. Distributed Optimisation, Scenario Approach, Safe Control, Multi-Agent Systems, Nonlinear Systems ## I Introduction Safety of a dynamical system requires the system state to remain in a safe set for all time. This property is important in many applications such as collision avoidance [1, 2], vehicle platooning [3, 4], vehicle merging control [5], etc. For a single-agent system, safety is usually captured by introducing constraints on the state of the agent and the environment. For a multi-agent system, the meaning of safety extends to capture the interactions among agents. In this case, safety is encoded by coupling constraints over the states of a group of agents. For a networked multi-agent system, where agents cooperate to satisfy safety constraints, we consider designing distributed algorithms to ensure safety for all agents. Another problem of interest is to validate the proposed control law. For a single-agent system, an agent can evaluate the system behaviour to characterize its risk of being unsafe under the employed control input. Similarly, for a multi-agent safety verification problem, cooperation among agents is necessary since safety involves multiple agents. In summary, this paper focuses on designing a distributed protocol for safe control input design and developing a distributed safety verification algorithm. ### _Related Work_ Safety in control systems is often certified by control barrier functions (CBFs), which are a type of control Lyapunov-like function [6, 7, 8]. By enforcing the inner product of the CBF derivative and the vector field of the controlled system to be bounded, safety is rigorously guaranteed at all times. CBFs have been shown to be powerful and scalable in control input design for control-affine systems, as this condition can be encoded as a linear constraint in a quadratic programming (QP) problem [6]. By solving online QP problems for every state, the system can be guaranteed to be safe [9, 10]. Higher-order derivative-based methods for high relative degree systems are proposed in [11, 12, 13]. In [14], adaptive coefficients are introduced to improve the feasibility of the CBF-QP. For the case where multiple CBFs exist, an optimal-decay-based method is proposed to tune the CBF constraints [15]. CBFs for discrete-time systems are proposed in [16]. 
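To make the CBF-QP construction referenced above concrete before turning to the distributed setting, here is a minimal single-agent sketch rather than the distributed algorithm proposed in this paper: a single-integrator model with one CBF constraint, where the obstacle position, safety radius and the gain alpha are illustrative assumptions, and the one-constraint QP is solved in closed form instead of calling a QP solver.

```python
import numpy as np

def cbf_qp_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Minimal CBF-QP safety filter for a single integrator xdot = u.

    Solves  min_u ||u - u_nom||^2  s.t.  grad h(x) . u >= -alpha * h(x),
    with h(x) = ||x - x_obs||^2 - r^2 (stay at least r away from an obstacle).
    With a single linear constraint the QP reduces to a projection onto the
    half-space {u : a . u >= b}, so no numerical QP solver is needed here.
    """
    h = float(np.dot(x - x_obs, x - x_obs) - r**2)   # CBF value
    a = 2.0 * (x - x_obs)                            # gradient of h
    b = -alpha * h                                   # right-hand side of a.u >= b
    if a @ u_nom >= b:                               # nominal input is already safe
        return u_nom
    return u_nom + (b - a @ u_nom) * a / (a @ a)     # minimal correction of u_nom

# Example: a nominal controller drives the agent to the origin past an obstacle.
x = np.array([1.5, 0.2])
u_safe = cbf_qp_filter(x, u_nom=-x, x_obs=np.array([0.8, 0.0]), r=0.5)
print(u_safe)
```

In the multi-agent setting considered in the paper, the analogous constraints couple the inputs of neighbouring agents, which is what motivates the distributed treatment developed below.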
For the case where model uncertainty and system noise are added, robust CBF with worst case analysis [17, 18] can be considered. Most of the existing results in this direction involve a centralized approach; however, multi-agent considerations call for distributed solution regimes. In this paper we address the distributed safety problem for multi-agent systems. Related to the problem considered in this paper, CBFs for multi-robot systems were studied in [19, 20, 21]. These works propose to split the CBF constraints into two components for neighbouring agents: the computation is therefore distributed as every agent solves a local optimisation problem. An improved constraint sharing mechanism is developed in [22], where the CBF constraints are dynamically tuned for compatibility. Optimality is further considered in [23], and a dynamical constraint allocation scheme among agents based on a consensus protocol is proposed. In our work, we aim at dealing with the problem of feasibility and optimality simultaneously, as well as considering multiple CBF constraints for safety. In essence, the distributed CBF-based safe control design problem can be seen under the lens of distributed optimisation. Distributed optimisation for a multi-agent system aims to design a distributed protocol that involves solving an optimisation problem locally for every agent. Algorithms can be divided into two types, dual decomposition based [24, 25, 26, 27] and primal decomposition based ones [28, 29, 30, 31, 32]. Dual decomposition methods consider the dual problem, where each agent maintains a local copy of the dual variables. Constraint satisfaction is achieved by consensus over the dual variables. Primal decomposition methods directly decompose
2307.11836
Hard probes in isobar collisions as a probe of the neutron skin
We present an estimate of the yield of hard probes expected for collisions of the isobars $^{96}_{44}$Ru and $^{96}_{40}$Zr at collision energies reachable at RHIC and the LHC\@. These yields are proportional to the number of binary nucleon-nucleon interactions, which is characteristically different due to the presence of the large neutron skin in $^{96}_{40}$Zr. This provides an independent opportunity to measure the difference between the neutron skin of $^{96}_{44}$Ru and $^{96}_{40}$Zr, which can provide an important constraint on the Equation of State of cold neutron-rich matter.
Wilke van der Schee, Yen-Jie Lee, Govert Nijs, Yi Chen
2023-07-21T18:10:21Z
http://arxiv.org/abs/2307.11836v1
# Hard probes in isobar collisions as a probe of the neutron skin ###### Abstract We present an estimate of the yield of hard probes expected for collisions of the isobars \({}^{96}_{44}\)Ru and \({}^{96}_{40}\)Zr at collision energies reachable at RHIC and the LHC. These yields are proportional to the number of binary nucleon-nucleon interactions, which is characteristically different due to the presence of the large neutron skin in \({}^{96}_{40}\)Zr. This provides an independent opportunity to measure the difference between the neutron skin of \({}^{96}_{44}\)Ru and \({}^{96}_{40}\)Zr, which can provide an important constraint on the Equation of State of cold neutron-rich matter. + Footnote †: preprint: CERN-TH-2023-140/MIT-CTP/5588 ## I Introduction The relativistic heavy ion programs at RHIC and the LHC [1; 2] aim to extract properties of the quark-gluon plasma (QGP), a state of matter believed to have existed in the early universe [3]. However, current knowledge of the initial condition of the QGP, especially how it is formed and shaped from the colliding nuclei, remains limited. Isobar collisions [4], involving nuclei with significant differences in structural properties but similar size, offer a new way to study the QGP. In particular, it is possible to study ratios of observables obtained from collisions of different isobars, such as the momentum anisotropies of detected particles or the total multiplicity as a function of impact parameter (centrality) and the transverse momenta of the particles. These ratios will have significantly reduced theoretical and experimental systematic uncertainties. Given the similarity in size between both isobars, the ratio of observables becomes relatively insensitive to shared properties, such as the speed of sound or shear viscosity. However, it becomes particularly sensitive to differences in the shape of the isobars [5; 6; 7; 8]. Consequently, a precise understanding of the shape contributes to minimising uncertainties in determining QGP properties through data analysis. The nuclear structure program aims to explain the emergence of nuclei from fundamental theory [9; 10; 11]. Synergising with the hot QCD program based on high-energy heavy-ion collisions, this field can benefit from event-by-event measures of particle angular correlations in the final stages of such collisions [12; 13]. These angular correlations are sensitive to the many-body distribution and correlations of nucleons, including deformations, in the colliding nuclei. High-energy colliders are thus a novel tool for gaining insight into strongly correlated atomic nuclear systems and testing ab initio nuclear structure calculations. In this paper, we present a study of the yield of hard probes expected for the collision of isobars \({}^{96}_{44}\)Ru and \({}^{96}_{40}\)Zr at energies available at RHIC and the LHC. Hard probes can generically be separated into colour-neutral probes (such as photons and W and Z bosons) that do not interact with the QGP and coloured probes such as quarks and gluons (partons). Since the QGP is transparent to colour-neutral probes, the former can be used to directly access the production of hard probes, while quarks and gluons can (also) be used to study the interaction with the QGP. We will show that the production is significantly larger for \({}^{96}_{44}\)Ru due to its smaller and denser nucleus. 
Furthermore, while \({}^{96}_{40}\)Zr is larger, the lower temperature has a compensating effect, and only subleading effects are expected for the difference in the interactions of quarks and gluons with the QGP. This approach provides an independent opportunity to measure the difference between the neutron skins of \({}^{96}_{44}\)Ru and \({}^{96}_{40}\)Zr. ## II Production of hard probes At large transverse momentum the heavy ions are to an excellent approximation transparent, which means that the production rate is proportional to the total number of nucleon-nucleon collisions, i.e. the number of binary collisions \(N_{\rm coll}\)1. The interactions, to be more fully described later, will also depend on the QGP formation and its evolution. Here we describe the model used, with a particular focus on the shapes of the nuclei, since as discussed this has a large effect on the ratios. Footnote 1: In this work we neglect the modification of the nuclear parton distribution functions (nPDFs). For many of the isospin symmetric probes we expect similar modifications for \({}^{96}_{44}\)Ru and \({}^{96}_{40}\)Zr, such that for ratios their effect is reduced. We use the _Trajectum_ 1.2 framework [7; 14]2 using the maximum likelihood settings as in [15]. _Trajectum_ features an initial state depending on the nucleon positions that generalises the T\({}_{\mbox{R}}\)ENTo model [16], a far-from-equilibrium stage that can interpolate between free streaming and a holographic scenario [17], a hydrodynamic phase with temperature-dependent first and second order transport coefficients and finally a freeze-out prescription [18] that can transfer the resulting QCD res
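As a rough illustration of why the hard-probe yield, through \(N_{\rm coll}\), is sensitive to the nuclear profiles, the following is a schematic Monte Carlo Glauber sketch. It is not the _Trajectum_ setup used in the paper; the Woods-Saxon radii and diffusiveness, the nucleon-nucleon cross section, and the fixed impact parameter are illustrative placeholder values only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters (not the Trajectum values of this paper):
# Woods-Saxon radius R [fm] and diffusiveness a [fm] for the two isobars, and an
# inelastic nucleon-nucleon cross section of roughly 42 mb at RHIC top energy.
NUCLEI = {"Ru-96": dict(A=96, R=5.09, a=0.46), "Zr-96": dict(A=96, R=5.02, a=0.52)}
SIGMA_NN = 4.2  # fm^2

def sample_nucleus(A, R, a):
    """Sample A nucleon positions from a spherical Woods-Saxon profile by rejection."""
    rmax = R + 8.0 * a
    grid = np.linspace(1e-3, rmax, 2000)
    fmax = (grid**2 / (1.0 + np.exp((grid - R) / a))).max()
    radii = []
    while len(radii) < A:
        r = rng.uniform(0.0, rmax)
        if rng.uniform(0.0, fmax) < r**2 / (1.0 + np.exp((r - R) / a)):
            radii.append(r)
    r = np.array(radii)
    costh = rng.uniform(-1.0, 1.0, A)
    phi = rng.uniform(0.0, 2.0 * np.pi, A)
    sinth = np.sqrt(1.0 - costh**2)
    return np.stack([r * sinth * np.cos(phi), r * sinth * np.sin(phi), r * costh], axis=1)

def n_coll(spec, b):
    """Binary-collision count for one symmetric event at impact parameter b [fm]."""
    proj = sample_nucleus(**spec) + np.array([b / 2.0, 0.0, 0.0])
    targ = sample_nucleus(**spec) - np.array([b / 2.0, 0.0, 0.0])
    d2 = ((proj[:, None, :2] - targ[None, :, :2]) ** 2).sum(-1)  # transverse distances^2
    return int((d2 < SIGMA_NN / np.pi).sum())

for name, spec in NUCLEI.items():
    events = [n_coll(spec, b=3.0) for _ in range(200)]
    print(name, "mean N_coll at b = 3 fm:", round(float(np.mean(events)), 1))
```

Under such toy assumptions, the more compact Ru profile gives a somewhat larger \(N_{\rm coll}\) than the more diffuse Zr profile at the same impact parameter, mirroring the qualitative effect discussed above.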
2304.08837
Sensor Fault Detection and Isolation in Autonomous Nonlinear Systems Using Neural Network-Based Observers
This paper presents a novel observer-based approach to detect and isolate faulty sensors in nonlinear systems. The proposed sensor fault detection and isolation (s-FDI) method applies to a general class of nonlinear systems. Our focus is on s-FDI for two types of faults: complete failure and sensor degradation. The key aspect of this approach lies in the utilization of a neural network-based Kazantzis-Kravaris/Luenberger (KKL) observer. The neural network is trained to learn the dynamics of the observer, enabling accurate output predictions of the system. Sensor faults are detected by comparing the actual output measurements with the predicted values. If the difference surpasses a theoretical threshold, a sensor fault is detected. To identify and isolate which sensor is faulty, we compare the numerical difference of each sensor measurement with an empirically derived threshold. We derive both theoretical and empirical thresholds for detection and isolation, respectively. Notably, the proposed approach is robust to measurement noise and system uncertainties. Its effectiveness is demonstrated through numerical simulations of sensor faults in a network of Kuramoto oscillators.
John Cao, Muhammad Umar B. Niazi, Matthieu Barreau, Karl Henrik Johansson
2023-04-18T09:05:07Z
http://arxiv.org/abs/2304.08837v2
# Sensor Fault Detection and Isolation in Autonomous Nonlinear Systems Using Neural Network-Based Observers ###### Abstract This paper presents a new observer-based approach to detect and isolate faulty sensors in industrial systems. Two types of sensor faults are considered: complete failure and sensor deterioration. The proposed method is applicable to general autonomous nonlinear systems without making any assumptions about their triangular and/or normal form, which is usually considered in the observer design literature. The key aspect of our approach is a learning-based design of the Luenberger observer, which involves using a neural network to approximate the injective map that transforms the nonlinear system into a stable linear system with output injection. This learning-based Luenberger observer accurately estimates the system's state, allowing for the detection of sensor faults through residual generation. The residual is computed as the norm of the difference between the system's measured output and the observer's predicted output vectors. Fault isolation is achieved by comparing each sensor's measurement with its corresponding predicted value. We demonstrate the effectiveness of our approach in capturing and isolating sensor faults while remaining robust in the presence of measurement noise and system uncertainty. We validate our method through numerical simulations of sensor faults in a network of Kuramoto oscillators. ## I Introduction Fault detection and isolation (FDI) are essential for the safe and efficient operation of many industrial processes. A fault is an undesirable disruption or disturbance to the system, which could lead to catastrophic consequences if not addressed properly. Some examples of fault types include mechanical breakdowns or failures of sensors. By using effective FDI methods, operators can identify faults early (detection), localize the fault source (isolation), and take corrective action before they cause costly damage or downtime. FDI methods generally fall into two main categories: hardware and analytical redundancy [1]. Hardware redundancy methods rely on the use of multiple sensors to obtain and compare information about processes. This approach has the drawback of adding monetary cost for the purchase and maintenance of extra hardware. Methods using analytical redundancy overcome this by using the principle of residual generation. This generated residual is the difference between a predicted system output and its real measurement. Under fault-free operating conditions, the residual is approximately zero, while its value grows distinctly over a pre-defined threshold when faults occur. Historically, methods for FDI based on analytical redundancy have been model-based, requiring an explicit mathematical model of the system being considered [2]. This approach was originally pioneered for linear systems during the 1970s in [3], where it was shown that it is always possible to design a filter capable of detecting and localizing faults in observable system dynamics. It was further refined and improved upon in [4], which together with [3] resulted in the famous _Beard-Jones Fault Detection filter_. Further extensions and application examples are addressed in [5, 6, 7, 8, 9]. The framework of observer-based fault detection schemes, developed in parallel with the above-mentioned works, was first introduced for linear systems in [10]. 
Since then, it has become widely regarded as one of the most successful approaches for FDI and has resulted in several research directions. For example, the application of sliding-mode observers for FDI has been demonstrated in [11, 12, 13], where the problem is addressed by explicitly reconstructing the fault via manipulation of the output injection error. Nonlinear unknown input observers (NUIO) have also been widely used for FDI [2]. In [14], a bank of NUIOs with adjustable observer dynamics is used to detect and isolate thruster faults of an autonomous spacecraft. In recent years, interval-based unknown input observers have gained a lot of attention [15]. These observers are shown to be advantageous by relaxing the information requirements about the inputs and the non-linearity of the system. Substantial results on interval observers for FDI have been reported in [15, 16, 17, 18]. Another principal approach to analytical redundancy-based FDI is the use of data-driven methods, which rose in popularity in the past decade, driven by significant advancements in deep learning algorithms. These methods do not require an explicit system model; instead, they rely on sensor data to approximate the underlying dynamics of the source that generates them in order to form a residual [19]. In [20], FDI with long short-term memory neural networks (LSTM) was demonstrated by generating residuals from comparing network predictions based on past time-series data with actual measurements. Other examples of neural network-based approaches are proposed for industrial manufacturing processes [19], power plants [21] and unmanned aerial vehicles [22]. Despite the diversity of existing FDI schemes, most approaches still share certain limitations. The application domains of observer-based techniques are bound by their assumptions about specific system structures, which makes them impractical when such assumptions do not hold. Data-driven methods are relaxed in their assumptions, but this usually comes with the drawback of requiring substantial sensor data, which may be difficult or expensive to obtain. In this paper, we address these issues by considering FDI using a learning-based approach to nonlinear Luenberger observers, also known as Kazantzis-Kravaris/Luenberger (KKL) observers [23, 24]. KKL observers rely on transforming the original system into a new system of higher dimension, where the transformation is governed by a certain partial differential equation. The observer is then described by the transformed system, which is required to be stable up to output injection. An estimate of the system state is obtained by applying the inverse of the transformation to the new system, which takes it back to the original state space. KKL observers are not bound by considerations of specific system forms; instead, they rely on certain observability conditions which have been shown to be very mild, making them applicable to a wide range of systems. The main contribution of this paper is a full description of a novel fault detection and isolation method that combines model- and learning-based techniques using KKL observers. We present a neural network-based approach for approximating the observer transformations, leveraging the fact that any amount of training data can be generated from knowing the system model. 
We demonstrate the FDI capabilities of our method using numerical simulations, showing that it is able to effectively detect and isolate sensor faults under a variety of circumstances, while remaining robust and functional under the influence of system and measurement noise. The outline of this paper is as follows: We begin by formulating the sensor FDI problem in Section II. In Section III, we describe the design of KKL observers and provide a detailed step-by-step description of our FDI method. Numerical results are provided in Section IV where we test our method by simulating a variety of fault cases. Lastly, Section V concludes the paper. ## II Problem Formulation We consider an autonomous nonlinear system \[\dot{x}(t) =f(x(t))+w(t) \tag{1a}\] \[y(t) =h(x(t))+v(t) \tag{1b}\] where \(x(t)\in\mathcal{X}\subset\mathbb{R}^{n_{x}}\) is the state, \(y(t)\in\mathbb{R}^{n_{y}}\) is the output, and \(w(t),v(t)\) are bounded process and measurement noises, respectively. The output of the system (1) might be affected by sensor faults, which are modeled by modifying (1b) as \[y(t)=\phi(t)\left[h(x(t))+v(t)+\zeta(t)\right]. \tag{2}\] In (2), the measurements are corrupted by two types of faults: \[\phi(t)=\begin{bmatrix}\phi_{1}(t)&\dots&\phi_{n_{y}}(t)\end{bmatrix}^{ \mathsf{T}}\in[0,1]^{n_{y}}\] that models complete sensor failure and \[\zeta(t)=\begin{bmatrix}\zeta_{1}(t)&\dots&\zeta_{n_{y}}(t)\end{bmatrix}^{ \mathsf{T}}\in\mathbb{R}^{n_{y}}\] that models any type of sensor deterioration affecting the correctness of the output. If \(\phi(t)=1_{n_{y}}\) and \(\zeta(t)=0_{n_{y}}\), then the system is fault-free. Other cases represent multiple type of faults; for example, biasing or precision degradation of sensor \(i\) when \(\zeta_{i}(t)\neq 0\) or failure of sensor \(i\) when \(\phi_{i}(t)=0\). Our objective is to detect and isolate sensor faults in (1) by designing a nonlinear observer of the form \[\dot{\hat{z}}(t) =\psi(\hat{z}(t),y(t)) \tag{3a}\] \[\hat{x}(t) =\mathcal{T}^{*}(\hat{z}(t))\] (3b) \[\hat{y}(t) =h(\hat{x}(t)) \tag{3c}\] which takes measurements \(y(t)\) as input and gives the state estimate \(\hat{x}(t)\) as output. The state \(\hat{z}(t)\in\mathbb{R}^{n_{z}}\) of the observer follows a nonlinear transformation \(\hat{z}=\mathcal{T}(\hat{x})\), where \(\mathcal{T}:\mathcal{X}\rightarrow\mathbb{R}^{n_{z}}\), and \(\mathcal{T}^{*}:\mathbb{R}^{n_{z}}\rightarrow\mathcal{X}\) is the left inverse of \(\mathcal{T}\). More details on these transformations are given in the next section. The FDI problem can be split into two components: 1. Design (3) so that \(\lim_{t\rightarrow\infty}\|x(t)-\hat{x}(t)\|=0\) when there are no faults. 2. Detect and isolate faults by filtering the output signals of (2) using (3). Then, take the difference between the measured and estimated outputs to form residuals \[r_{i}(t)=|y_{i}(t)-\hat{y}_{i}(t)|,\quad i=1,\dots,n_{y}.\] FDI is then performed by discovering and analyzing anomalies occurring in the residuals. ## III FDI using learning-based KKL-Observer In this section, we describe the proposed FDI approach using neural network-based observers. First, we briefly recall the theory of KKL observer design. Then, we present a machine learning-based observer design procedure. Finally, we propose our FDI methodology. 
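As a small illustration of the fault model (2) and of the residuals \(r_{i}(t)\) defined above, consider the following sketch. The output dimension, the fault values and the perfect-prediction stand-in for the observer are illustrative assumptions only.

```python
import numpy as np

def faulty_output(h_x, v, phi, zeta):
    """Sensor fault model of Eq. (2), applied componentwise:
    y = phi * (h(x) + v + zeta).  phi[i] = 0 models a complete failure of
    sensor i, while zeta[i] != 0 models deterioration (e.g. a bias) of sensor i."""
    return phi * (h_x + v + zeta)

def residuals(y, y_hat):
    """Componentwise residuals r_i = |y_i - y_hat_i| used for detection and isolation."""
    return np.abs(y - y_hat)

# Illustrative example with three outputs: sensor 1 is biased, sensor 2 is dead.
h_x  = np.array([0.3, -0.1, 0.7])        # noise-free output h(x(t))
v    = 0.02 * np.random.randn(3)         # bounded measurement noise
phi  = np.array([1.0, 0.0, 1.0])         # phi_2 = 0: complete failure of sensor 2
zeta = np.array([1.0, 0.0, 0.0])         # zeta_1 = 1: bias on sensor 1
y = faulty_output(h_x, v, phi, zeta)
print(residuals(y, y_hat=h_x))           # the observer prediction is taken as h(x) itself here
```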
### _KKL-Observers_ Consider a nonlinear system \[\dot{x}(t) =f(x(t)) \tag{4a}\] \[y(t) =h(x(t)) \tag{4b}\] where \(x(t)\in\mathcal{X}\subset\mathbb{R}^{n_{x}}\) is the system state, \(y(t)\in\mathbb{R}^{n_{y}}\) the measured output, and \(f\) and \(h\) are smooth maps. The KKL observer is designed according to the following steps: 1. Transform (4) to new coordinates by an injective transformation \(\mathcal{T}:\mathcal{X}\rightarrow\mathbb{R}^{n_{z}}\) with \(z=\mathcal{T}(x)\) and \(n_{z}=n_{y}(2n_{x}+1)\) satisfying \[\dot{z}(t)=Az(t)+Bh(x(t));\;\;z(0)=\mathcal{T}(x_{0}),\] (5) where \(A\in\mathbb{R}^{n_{z}\times n_{z}}\) is a Hurwitz matrix and \(B\in\mathbb{R}^{n_{z}\times n_{y}}\) is such that the pair \((A,B)\) is controllable. From (5), we see that \(\mathcal{T}\) must be governed by the partial differential equation (PDE) \[\frac{\partial\mathcal{T}(x)}{\partial x}f(x)=A\mathcal{T}(x)+Bh(x).\] (6) 2. Find the left inverse \(\mathcal{T}^{*}:\mathbb{R}^{n_{z}}\rightarrow\mathcal{X}\) of \(\mathcal{T}\), i.e., \(\mathcal{T}^{*}(\mathcal{T}(x))=x\), which exists due to the injectivity of \(\mathcal{T}\). Then, the KKL observer is given by \[\dot{z}(t) =Az(t)+By(t);\;\;\;z(0)=z_{0}\] (7a) \[\hat{x}(t) =\mathcal{T}^{*}(z(t))\] (7b) where \(\hat{x}(t)\) is the state estimate obtained from the observer. The existence of (7) is guaranteed if (4) is _forward complete within \(\mathcal{X}\)_ and _backward \(\mathcal{O}\)-distinguishable_. We omit these technical conditions here and refer to [24] for further details. ### _Machine Learning Approach for KKL Observer Design_ The design principle described in Section III-A relies on an injective transformation \(\mathcal{T}\) satisfying (6) and its inverse \(\mathcal{T}^{*}\). However, finding exact \(\mathcal{T}\) and \(\mathcal{T}^{*}\) is generally difficult. Even if \(\mathcal{T}\) is known, finding \(\mathcal{T}^{*}\) is very challenging [25]. We overcome these challenges by approximating the transformations using a _supervised physics-informed learning_ approach [26]. More specifically, we generate synthetic training data by simulating (4) and (5) to accurately capture the maps between their state spaces. We then perform nonlinear regression under the constraint of (6) to learn the approximate transformations. Relying on the theoretical results of [26], the convergence of the observer is guaranteed if the approximation error is sufficiently small. Assuming a well-trained model, the learning-based observer is capable of generalizing to previously unseen initial conditions and trajectories [26]. #### Iii-B1 Generation of Training Data An arbitrary amount of training data may be generated by knowing the structure of (4) and (5). We begin by choosing \(p\) pairs of initial conditions \((x_{0}^{i},z_{0}^{i}),i=1,\ldots,p\) from the defined sets \(\mathcal{X}^{\text{train}}\subset\mathcal{X}\) and \(\mathcal{Z}^{\text{train}}\subset\mathbb{R}^{n_{z}}\). It is imperative that the points \(x_{0}^{i}\) are well distributed within \(\mathcal{X}^{\text{train}}\) in order to fully capture the transformations. This can be accomplished by using statistical methods such as the Latin hypercube sampling suggested in [27]. Since we do not know \(\mathcal{T}\) in advance, it is impossible to sample from \(\mathcal{Z}^{\text{train}}\) so that \(z_{0}^{i}=\mathcal{T}(x_{0}^{i})\). Instead, we utilize a truncation method proposed in [26, 28]. The procedure of generating training data is as follows: 1. 
Choose \(p\) number of pairs of initial conditions \((x_{0}^{1},z_{0}^{1}),\ldots,(x_{0}^{p},z_{0}^{p})\). 2. Simulate (4) and (5) for a finite time \(t_{s}>0\) to obtain trajectories \[(x(t_{k};x_{0}^{1}),z(t_{k};z_{0}^{1})),\ldots,(x(t_{k};x_{0}^{p}),z(t_{k};z_{ 0}^{p}))\] with \(k=0,1,2,..,s\) and \(s\) being the number of samples. 3. Divide1 the dataset into _physics data points_\(\mathsf{P}_{p}\) and _regression data points_\(\mathsf{P}_{r}\). One way to accomplish this is to choose \(\mathsf{P}_{r}\) as all samples with an even index, and \(\mathsf{P}_{p}\) as all with an odd index. Footnote 1: This is motivated by the use of physics-informed neural networks [29]. #### Iii-B2 Learning the Transformations We learn the transformation and its inverse by considering two neural networks \(\hat{\mathcal{T}}_{\theta}\) and \(\hat{\mathcal{T}}_{\eta}^{*}\), where \(\theta\) and \(\eta\) are the trainable parameters of the network. Both neural networks are trained jointly in an encoder-decoder architecture by minimizing a loss function consisting of a _regression loss_ and a _physics loss_. We define the regression loss as the mean squared error \[\mathcal{L}_{\text{reg}}(\theta,\eta)\doteq\frac{1}{p}\sum_{i=1}^ {p}\frac{1}{|\mathsf{P}_{r}|}\sum_{k\in\mathsf{P}_{r}}\left\|z^{i}(t_{k})-\hat{ \mathcal{T}}_{\theta}(x^{i}(t_{k}))\right\|^{2}\\ +\chi\big{\|}x^{i}(t_{k})-\hat{\mathcal{T}}_{\eta}^{*}(\hat{ \mathcal{T}}_{\theta}(x^{i}(t_{k})))\big{\|}^{2} \tag{8}\] where \(\chi\) is a discount factor balancing the two loss terms. The function (8) uses the available information about \(x^{i}(t_{k})\) and \(z^{i}(t_{k})\) to minimize the deviations in both the latent space and the final output. Furthermore, we leverage knowledge of the fact that a good approximation \(\hat{\mathcal{T}}_{\theta}\) should satisfy the PDE (6) by defining a physics loss as the PDE residual \[\mathcal{L}_{\text{phy}}(\theta)\doteq\frac{1}{p}\sum_{i=1}^{p} \frac{1}{|\mathsf{P}_{p}|}\sum_{k\in\mathsf{P}_{p}}\big{\|}\frac{\partial \hat{\mathcal{T}}_{\theta}}{\partial x}(x^{i}(t_{k}))f(x^{i}(t_{k}))\\ -A\hat{\mathcal{T}}_{\theta}(x^{i}(t_{k}))-Bh(x^{i}(t_{k}))\big{\|} ^{2}. \tag{9}\] This has the effect of reducing the risk of overfitting while also improving the generalization capabilities of the neural networks. Combining (8) and (9), we define the loss function to be \[\mathcal{L}(\theta,\eta)\doteq\mathcal{L}_{\text{reg}}(\theta,\eta)+\lambda \mathcal{L}_{\text{phy}}(\theta) \tag{10}\] where \(\lambda>0\) is a hyperparameter acting as a balancing factor for the two loss terms. ### _Performing Fault Detection and Isolation_ Our method detects and isolates faults by discovering rapid changes in the residual signals \(r_{i}(t)\) caused by fault induced observer transients. This is accomplished by defining a threshold \(r_{\Delta}\). Due to the inevitability of approximation errors in \(\hat{\mathcal{T}}_{\theta}\) and \(\hat{\mathcal{T}}_{\eta}^{*}\), and the presence of noise in (1), the residuals \(r_{i}\) will never converge to zero. Therefore, to avoid false positive alarms, it is important to distinguish between the non-zero residuals due to noise and approximation error and the increase in residual magnitude due to sensor faults. However, the introduction of a fault in a sensor will not only induce a transient in the corresponding output estimate, but in all other estimated outputs as well. 
Therefore, a fault in one sensor could induce large residuals for the output estimates of other sensors, which may cause problems in fault isolation. Moreover, because of these inter-dependencies, the transients after the occurrence of a fault may persist above the threshold, leading to an inability to even detect other subsequent faults. Therefore, instead of using the residuals \(r_{i}(t)\) directly, we propose to use the finite difference approximation of their derivatives to perform FDI, i.e., \[\tilde{r}_{i}(t_{k})=\left|\frac{r_{i}(t_{k})-r_{i}(t_{k-1})}{h}\right|,\quad i=1,\ldots,n_{y} \tag{11}\] where \(\tilde{r}_{i}(t_{k})\) is the numerical approximation of \(\frac{dr_{i}(t_{k})}{dt}\) with \(h\) being the step size between samples. This remedies the unwanted influence of other transients and allows for efficient and reliable FDI, which is illustrated in the simulations. The method for computing the threshold \(r_{\Delta}\) is described below: 1. Draw \(N\) initial conditions to form a set \(\mathcal{X}^{r_{\Delta}}\subset\mathcal{X}\), with \(\mathcal{X}^{r_{\Delta}}\) satisfying \(\mathcal{X}^{r_{\Delta}}\cap\mathcal{X}^{\text{train}}=\emptyset\) to simulate realistic operating conditions with previously unseen data. 2. From each initial condition, generate measured and estimated output trajectories \((y^{1}(t),\ldots,y^{N}(t))\) and \((\hat{y}^{1}(t),\ldots,\hat{y}^{N}(t))\), by simulating (1) and (7), replacing \(\mathcal{T}^{*}\) with \(\tilde{\mathcal{T}}^{*}_{\eta}\). 3. For each measured and estimated output pair, compute (11). 4. The threshold \(r_{\Delta}\) is then taken as the maximum of all finite difference approximations. To eliminate the influence of initial transients, which may cause \(r_{\Delta}\) to become unnecessarily large, we truncate the measured and estimated output trajectories at some time \(t_{\text{run}}>0\). A simple way to empirically choose \(t_{\text{run}}\) is to simulate the observer dynamics and select a time at which the transients have vanished. A fault is considered to be detected whenever any \(\tilde{r}_{i}(t_{k})>r_{\Delta}\). ## IV Simulation Results Our approach is demonstrated in this section using numerical simulations. We show that FDI of a highly nonlinear system can be achieved using neural network-based KKL observers, even in the presence of additive process and measurement noise. Both types of faults, sensor deterioration represented by the fault signal \(\zeta_{i}(t)\neq 0\) and sensor failure represented by \(\phi_{i}(t)=0\) in (2), are demonstrated. ### _Kuramoto Model_ We consider the Kuramoto model for demonstrating our FDI method. The Kuramoto model describes the phenomenon of synchronization in a multitude of systems, including electric power networks, multi-agent coordination, and distributed software networks [30]. The dynamics of a network with \(n\) nodes are given as \[\dot{\theta_{i}}(t)=\omega_{i}+\sum_{j=1}^{n}a_{ij}\sin(\theta_{i}(t)-\theta_{j}(t)) \tag{12}\] where \(\theta_{i}(t)\in\mathbb{S}^{1}\) is the phase angle of node \(i=1,\ldots,n\), \(\omega_{i}\in\mathbb{R}\) is the natural frequency of node \(i\), and \(a_{ij}\geq 0\) denotes the coupling between nodes \(i\) and \(j\). In the literature, the state trajectories of (12) are often represented graphically as \(\sin(\theta_{i})\) in order to better illustrate their synchronization. We follow the same convention in our simulations. ### _Experimental Setup_ For (12), we consider a network of 10 nodes with randomly generated natural frequencies \(\omega_{i}\) and couplings \(a_{ij}\). 
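As a point of reference for this setup, the following sketch simulates the Kuramoto dynamics (12) and applies the finite-difference test (11) to a toy residual signal. The coupling strengths, the explicit Euler integration (rather than the RK4 solver used below), and the synthetic bias fault are illustrative assumptions, not the exact configuration of the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

n, dt, T = 10, 0.01, 20.0
omega = rng.uniform(-1.0, 1.0, n)                        # natural frequencies (illustrative)
a = rng.uniform(0.0, 0.5, (n, n)); a = (a + a.T) / 2.0   # symmetric couplings (illustrative)

def kuramoto_step(theta):
    """One explicit Euler step of the network dynamics, Eq. (12)."""
    coupling = (a * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
    return theta + dt * (omega + coupling)

def finite_diff_residual(r, h):
    """Eq. (11): absolute finite-difference approximation of dr/dt."""
    return np.abs(np.diff(r)) / h

theta = rng.uniform(-2.0, 2.0, n)
traj = [theta.copy()]
for _ in range(int(T / dt)):
    theta = kuramoto_step(theta)
    traj.append(theta.copy())
traj = np.array(traj)                                    # (time, node) phase trajectories
print("final sin(theta):", np.round(np.sin(traj[-1]), 2))

# Toy residual: a bias fault of size 1 on one sensor for t in [5, 15]; the fault
# edges show up as spikes in the finite difference (11), while the flat fault
# interval itself does not.
t = np.arange(traj.shape[0]) * dt
r1 = np.where((t > 5.0) & (t < 15.0), 1.0, 0.0)
r_tilde = finite_diff_residual(r1, dt)
print("largest r_tilde:", r_tilde.max(), "at t =", t[1:][r_tilde.argmax()])
```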
The measurements are chosen as \(y=\begin{bmatrix}\theta_{1}&\theta_{2}&\theta_{3}&\theta_{4}&\theta_{5}\end{bmatrix}^{\mathsf{T}}\). A set of 50 initial conditions is generated using Latin hypercube sampling, with \(\mathcal{X}^{\text{train}}=[-2,2]^{10}\). We choose Runge-Kutta-4 as our numerical ODE solver to simulate (12) and (5) over a time interval of \([0,30]\), partitioned into 4000 sample points for each trajectory. The neural networks \(\tilde{\mathcal{T}}_{\theta}\) and \(\tilde{\mathcal{T}}^{*}_{\eta}\) are chosen to be fully connected feed-forward networks, each consisting of 3 hidden layers of 250 neurons with ReLU activation function. Model training is facilitated by data standardization and learning rate scheduling. Following [31], the matrices of (5) are chosen as \[A=\Lambda\otimes I_{n_{y}},\qquad B=\Gamma\otimes I_{n_{y}}\] where \(\Lambda\in\mathbb{R}^{(2n_{x}+1)\times(2n_{x}+1)}\) is a diagonal matrix with diagonal elements linearly distributed in \([-15,-21]\), \(\Gamma\in\mathbb{R}^{2n_{x}+1}\) is a column vector of ones, and \(I_{n_{y}}\) is the identity matrix of size \(n_{y}\times n_{y}\). Here, \(n_{x}\) and \(n_{y}\) are 10 and 5, respectively, and \(n_{z}=n_{y}(2n_{x}+1)=105\). The estimation capabilities of the observer under fault-free conditions are demonstrated in Fig. 1, which shows the estimated and true trajectories of two (randomly chosen) unmeasured states \(\theta_{7}\) and \(\theta_{8}\) over a time interval of \([0,20]\), with noise terms \(w(t),v(t)\sim\mathcal{N}(0,0.02)\). The figure demonstrates that the estimation error is stable under noise and neural network approximation error. Fig. 1: Estimated and true trajectories of states \(\theta_{7}\) and \(\theta_{8}\) under the influence of process and sensor noise. ### _Numerical Results_ We now apply the learned neural network-based KKL observer to perform FDI. The fault threshold \(r_{\Delta}=7.1\) is computed according to the method described in Section III-C. We choose \(N=100\) initial conditions to create \(\mathcal{X}^{r_{\Delta}}\subset[-2,2]^{10}\), again using Latin hypercube sampling. Fig. 2(a)-2(e) demonstrate the detection and isolation capabilities of our method under a variety of faults. In the figure, the first, second, and third rows correspond to the finite difference approximation (11), measured and estimated state trajectories, and the residuals \(r_{i}=|y_{i}-\hat{y}_{i}|\), respectively. In Fig. 2(a), sensor \(2\) is disturbed by a constant fault term \(\zeta_{2}(t)=1\) introduced at \(t=5\) and ending at \(t=15\). The observer is able to follow the measured state trajectories despite the presence of process and measurement noise, thus generating a small residual when no faults are present. The introduction of a fault induces a transient in the observer, causing a large residual to be generated for the disturbed state. Due to the stability of the observer, it will attempt to track the faulty trajectories after the occurrence of the fault, leading to a decrease in the residual magnitudes. A new transient is induced upon the end of the fault term, creating the sharp "spikes" shown in the figure. The differences between using (11) and the generated residuals can be seen by comparing the top and bottom figures. It is clear that by approximating \(\frac{dr_{i}(t_{k})}{dt}\), the anomaly caused by \(\zeta(t)\) can be isolated from other simultaneously occurring transients. Fig. 2(b) illustrates the situation when more than one fault is present. 
Sensors \(1\) and \(5\) are disturbed by \(\zeta_{1}(t)=\zeta_{5}(t)=1\), from \(t=5\) to \(t=10\) and from \(t=7\) to \(t=15\), respectively. Each fault is distinctly detectable at the moment of occurrence. Again, using the finite difference approximations allows for isolation of the affected states by dampening the undesired transients. In Fig. 2(c), we show that our method is also capable of detecting complete sensor shutdowns, which we demonstrate by modeling the fault in sensor \(4\) with \(\phi_{4}(t)=0\). Fig. 2(d) and 2(e) illustrate the case when the fault signal on sensor \(3\) is a sine wave \[\zeta_{3}(t)=A\sin(\omega t) \tag{13}\] where \(A\) is the amplitude and \(\omega\) the angular frequency. In Fig. 2(d), we simulate a fault in sensor \(y_{3}\) with \(A=5\) and \(\omega=2\pi\) rad\(/s\). In Fig. 2(e), the amplitude remains unchanged, but the angular frequency is increased to \(\omega=20\pi\) rad\(/s\). Unsurprisingly, an increase in frequency results in more prominent responses. Note that the large spikes in the residuals and finite differences seen at the beginning of each figure are also a result of the observer transients, due to the full system state being unknown upon initialization. This should therefore be treated as an expected behavior and ignored in practice. Fig. 2: (a) Sensor deterioration in \(y_{2}\) with constant fault term. (b) Sensor deterioration in \(y_{1}\) and \(y_{5}\) at different time points with constant fault terms. (c) Shutdown of sensor \(y_{4}\). (d), (e) Sensor deterioration in \(y_{3}\) with a sinusoidal fault term. ## V Conclusion and Future Work We have presented a novel method for sensor fault detection and isolation using neural network-based observers. Our approach is effective in both fault detection and isolation, and it has the potential to be applied to a wide range of systems because it is not restricted to nonlinear systems with a specific form or structure. Instead, it relies only on a certain observability condition of the system. We described a method to systematically design an observer, using neural networks to learn an injective transformation and its inverse. A state estimate is obtained by transforming the original nonlinear system into a new one that is a stable linear system with output injection, and then applying the inverse transform back to its original state space. Sensor faults are detected by generating a residual, defined as the absolute difference between measured and estimated state values. We propose to monitor the time derivative of the residual signal by using the finite difference approximation. An empirical threshold is computed which, if crossed, signifies the occurrence of a fault. We tested our method on a network of Kuramoto oscillators by simulating a variety of fault cases, including precision degradation due to sensor deterioration (injection of constant and sinusoidal fault signals) and sensor shutdown due to complete failure. The simulations demonstrated the effectiveness of our method in both fault detection and isolation. The theory of KKL observers extends to non-autonomous systems, and adapting our method to those systems remains an open research topic. It is also of interest to study the performance of the method in the real world, especially in systems where conventional solutions are known to fail. Developing analytic methods to compute the threshold for residuals will also be considered in the future.
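To complement the description of Section III-B, here is a minimal sketch of how the encoder/decoder pair and the combined loss (10) could be set up. It assumes PyTorch, uses synthetic tensors in place of the simulated \((x(t_{k}),z(t_{k}))\) pairs, and replaces the exact PDE term of (9) with a finite-difference surrogate along \(f(x)\), so it is an illustration under those assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

n_x, n_y, n_z = 10, 5, 105                     # dimensions as in the Kuramoto experiment
T_theta = nn.Sequential(nn.Linear(n_x, 250), nn.ReLU(),
                        nn.Linear(250, 250), nn.ReLU(), nn.Linear(250, n_z))
T_star = nn.Sequential(nn.Linear(n_z, 250), nn.ReLU(),
                       nn.Linear(250, 250), nn.ReLU(), nn.Linear(250, n_x))

# A = Lambda kron I, B = Gamma kron I, with Lambda diagonal between -15 and -21.
A = torch.kron(torch.diag(torch.linspace(-15.0, -21.0, 2 * n_x + 1)), torch.eye(n_y))
B = torch.kron(torch.ones(2 * n_x + 1, 1), torch.eye(n_y))

def loss_fn(x, z, f_x, h_x, chi=1.0, lam=0.1, eps=1e-3):
    """Regression loss (8) plus physics loss (9); the directional derivative
    (dT/dx) f(x) of the PDE (6) is replaced by a finite difference along f(x)."""
    z_hat = T_theta(x)
    reg = ((z - z_hat) ** 2).sum(-1).mean() \
        + chi * ((x - T_star(z_hat)) ** 2).sum(-1).mean()
    dT_f = (T_theta(x + eps * f_x) - z_hat) / eps          # ~ (dT/dx) f(x)
    phy = ((dT_f - z_hat @ A.T - h_x @ B.T) ** 2).sum(-1).mean()
    return reg + lam * phy

# One illustrative optimisation step on synthetic tensors standing in for the
# simulated training pairs (x(t_k), z(t_k)) of Section III-B1.
opt = torch.optim.Adam(list(T_theta.parameters()) + list(T_star.parameters()), lr=1e-3)
x = torch.randn(64, n_x)                       # stand-in for sampled states
z = torch.randn(64, n_z)                       # stand-in for the simulated z-trajectory
f_x = torch.randn(64, n_x)                     # stand-in for f(x), the Kuramoto drift
h_x = x[:, :n_y]                               # h(x): the five measured phases
opt.zero_grad(); loss = loss_fn(x, z, f_x, h_x); loss.backward(); opt.step()
print(float(loss))
```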
2308.10375
Model Selection over Partially Ordered Sets
In problems such as variable selection and graph estimation, models are characterized by Boolean logical structure such as presence or absence of a variable or an edge. Consequently, false positive error or false negative error can be specified as the number of variables/edges that are incorrectly included or excluded in an estimated model. However, there are several other problems such as ranking, clustering, and causal inference in which the associated model classes do not admit transparent notions of false positive and false negative errors due to the lack of an underlying Boolean logical structure. In this paper, we present a generic approach to endow a collection of models with partial order structure, which leads to a hierarchical organization of model classes as well as natural analogs of false positive and false negative errors. We describe model selection procedures that provide false positive error control in our general setting and we illustrate their utility with numerical experiments.
Armeen Taeb, Peter Bühlmann, Venkat Chandrasekaran
2023-08-20T22:08:44Z
http://arxiv.org/abs/2308.10375v3
# Model Selection over Partially Ordered Sets ###### Abstract In problems such as variable selection and graph estimation, models are characterized by Boolean logical structure such as presence or absence of a variable or an edge. Consequently, false positive and false negative errors can be specified as the number of variables or edges that are incorrectly included/excluded in an estimated model. However, there are several other problems such as ranking, clustering, and causal inference in which the associated model classes do not admit transparent notions of false positive and false negative errors due to the lack of an underlying Boolean logical structure. In this paper, we present a generic approach to endow a collection of models with partial order structure, which leads to a hierarchical organization of model classes as well as natural analogs of false positive and false negative errors. We describe model selection procedures that provide false positive error control in our general setting and we illustrate their utility with numerical experiments. Keywords: combinatorics \(|\) greedy algorithms \(|\) multiple testing \(|\) stability ## 1 Introduction In data-driven approaches to scientific discovery, one is commonly faced with the problem of model selection. Popular examples include variable selection (which covariates influence a response?) and graph estimation (which pairs of variables have nonzero correlation or partial correlation?). As exemplified by these two problems, a common feature of most model selection problems in the literature is that the collection of models is organized according to some type of Boolean logical structure, such as presence versus absence of a variable or an edge. A consequence of such structure is that model complexity can be conveniently specified as the number of attributes (variables or edges) in a model, while false positives and false negatives correspond to the number of attributes that are incorrectly included or excluded in the model, respectively. In many contemporary applications, models represent a far richer range of phenomena that are not conveniently characterized via Boolean logical structure. As a first example, suppose we are given observations of covariate-response pairs and we wish to order the covariates based on how much they influence a response; the collection of models is given by the set of rankings of the covariates. Second, consider a clustering problem in which we are given observations of a collection of variables and the goal is to group them according to some measure of affinity, with the number of groups and the number of variables assigned to each group not known a priori; here the model class is given by the collection of all possible partitions of the set of variables. Third, suppose we wish to identify causal relations underlying a collection of variables; the model class is the set of completed partially directed acyclic graphs. Finally, consider the blind source separation problem in which we are given a signal expressed as an additive combination of source signals and our objective is to identify the constituent sources, without prior information about the number of sources or their content; here the model class is the collection of all possible linearly independent subsets of vectors. In these preceding examples, we lack a systematic definition of model complexity, false positives, and false negatives due to the absence of Boolean logical structure in each collection of models. 
In particular, in the first three examples, valid models are characterized by structural properties such as transitivity, set partitioning, and graph acyclicity, respectively; these properties are global in nature and are not concisely modeled via separable and local characteristics such as an attribute (a variable or edge) being included in a model independently of other attributes. In the fourth example concerning blind source separation, false positive and false negative errors should not be defined merely via the inclusion or exclusion of true source vectors in an estimated set but should instead consider the degree of alignment between the estimated and true sources, which again speaks to the lack of a natural Boolean logical structure underlying the associated class of models. As a concrete illustration of the inappropriateness of Boolean logical structure for the task of clustering, consider three items \(a,b,c\), with the true model given by the two clusters \(\{a,b\},\{c\}\) and the estimated model given by the single cluster \(\{a,b,c\}\). An incorrect perspective grounded in Boolean logical structure would suggest a false positive error of two, with the mistakes being that \(c\) is in the same cluster as \(a\) and as \(b\). On the other hand, accounting for set partition structure would yield the more accurate false positive error value of one as \(a\) and \(b\) are in the same cluster in the true and estimated models, and therefore including \(c\) in the same cluster as \(\{a,b\}\) should only incur one false discovery. While the preceding four problems have been studied extensively, the associated methods do not systematically control for false positive error as this quantity is not formally defined. Selection procedures that yield models with small false positive error play an important role in data-driven methods for gathering evidence, rooted in the empirical philosophy and statistical testing foundations of falsification of theories and hypotheses [6, 16, 17]. ### Our Contributions We begin in Section 2 by describing how collections of models may be endowed with the structure of a partially ordered set (poset). Posets are relations that satisfy reflexivity, transitivity, and antisymmetry, and they facilitate a hierarchical organization of a set of models that leads to a natural definition of model complexity. Building on this framework, we develop an axiomatic approach to defining functions over poset element pairs for evaluating similarity. This yields generalizations of well-known measures such as family-wise error and false discovery rate to an array of model selection problems in the context of ranking, causal inference, multiple change-point estimation, clustering, multi-sample testing, and blind source separation. In Section 3, we describe two generic model selection procedures that search over poset elements in a greedy fashion and that provide false discovery control in discrete model posets. The first method is based on subsampling and model averaging and it builds on the idea of stability selection [13, 22] for the variable selection problem, while the second method considers a sequence of hypothesis tests between models of growing complexity. With both these methods, the combinatorial properties of a model poset play a prominent role in determining computational and statistical efficiency. Proofs of the theorems of Section 3 are provided in Section 5. In Section 4 we provide numerical illustration via experiments on synthetic and real data. 
The code for implementing our methods is available at [https://github.com/armeentaeb/model-selection-over-posets](https://github.com/armeentaeb/model-selection-over-posets). ### Related Work Classic approaches to model selection such as the AIC and BIC assess and penalize model complexity by counting the number of attributes included in a model [1, 21]. More generally, such complexity measures facilitate a hierarchical organization of model classes, and this perspective is prevalent throughout much of the model selection literature [7, 8, 12, 18, 27]. However, these complexity measures rely on a Boolean logical structure underlying a collection of models, and are therefore not well-suited to model classes that are not characterized in this manner. The poset formalism presented in this paper is sufficiently flexible to facilitate model selection over model classes that are more complex than those characterized by Boolean logical structure (such as the illustration presented previously with clustering, see also Example 2), while being sufficiently structured to permit precise definitions of model complexity as well as false positive and false negative errors. ## 2 Poset Framework for Model Selection We begin by describing how collections of models arising in various applications may be organized as posets. Next, we present approaches to endow poset-structured models with suitable notions of true and false discoveries. ### Model Classes as Posets We begin with some basics of posets. A _poset_\((\mathcal{L},\preceq)\) is a collection \(\mathcal{L}\) of elements and a relation \(\preceq\) that is reflexive (\(x\preceq x,\ \forall x\in\mathcal{L}\)), transitive (\(x\preceq y,y\preceq z\Rightarrow x\preceq z,\ \forall x,y,z\in\mathcal{L}\)), and anti-symmetric (\(x\preceq y,y\preceq x\Rightarrow x=y,\ \forall x,y\in\mathcal{L}\)). An element \(y\in\mathcal{L}\)_covers_\(x\in\mathcal{L}\) if \(x\preceq y\), \(x\neq y\), and there is no \(z\in\mathcal{L}\backslash\{x,y\}\) with \(x\preceq z\preceq y\); we call such \((x,y)\) a _covering pair_. A _path_ from \(x_{1}\in\mathcal{L}\) to \(x_{k}\in\mathcal{L}\) is a sequence \((x_{1},\ldots,x_{k})\) with \(x_{2},\ldots,x_{k-1}\in\mathcal{L}\) such that \(x_{i}\) covers \(x_{i-1}\) for each \(i=2,\ldots,k\). Throughout this paper, we focus on posets in which there is a _least element_, i.e., an element \(x_{\text{least}}\in\mathcal{L}\) such that \(x_{\text{least}}\preceq y\) for all \(y\in\mathcal{L}\); such least elements are necessarily unique. Finally, a poset is _graded_ if there exists a function \(\text{rank}(\cdot)\) mapping poset elements to the nonnegative integers such that the rank of the least element is \(0\) and \(\text{rank}(y)=\text{rank}(x)+1\) for \(y\in\mathcal{L}\) that covers \(x\in\mathcal{L}\). In graded posets with least elements, each path from the least element to any \(x\in\mathcal{L}\) has length equal to \(\text{rank}(x)\). Posets are depicted visually using Hasse diagrams in which a directed arrow is drawn from \(x\in\mathcal{L}\) to any \(y\in\mathcal{L}\) that covers \(x\). Posets offer an excellent framework to formulate model selection problems as model classes in many applications possess rich partial order structure. 
In particular, the poset-theoretic quantities introduced above have natural counterparts in the context of model selection - the least element corresponds to the 'null' model that represents no discoveries, the relation \(\preceq\) specifies a notion of containment between simpler and more complex models, and the rank function serves as a measure of model complexity that respects the underlying containment relation. We present several concrete illustrations next; Figure 1 presents Hasse diagrams associated to several of these examples.

Figure 1: Hasse diagrams for a) variable selection with 3 variables (Example 1); b) clustering of 4 variables (Example 2); c) multisample testing with 4 samples (Example 3); d) causal inference with 3 variables (Example 4); e) partial ranking of 3 items (Example 6); and f) total ranking of 3 items (Example 7).

**Example 1** (Variable selection).: _As a warm-up, consider the variable selection problem of selecting which of \(p\) variables influence a response. The poset here is the collection of all subsets of \(\{1,\ldots,p\}\) ordered by set inclusion, the least element is given by the empty set, and the rank of a subset is its cardinality. This poset is called the Boolean poset [23]._

**Example 2** (Clustering).: _Suppose we wish to group a collection of \(p\) variables based on a given notion of similarity. The poset here is the collection of all partitions of \(\{1,\ldots,p\}\) ordered by refinement, the least element is given by \(p\) groups each consisting of one variable, and the rank of a partition is equal to \(p\) minus the number of groups. Thus, higher-rank elements correspond to models specified by a small number of clusters. This poset is called the partition poset [23]._

**Example 3** (Multisample testing).: _As a generalization of the classic two-sample testing problem, consider the task of grouping \(p\) samples with the objective that samples in a group come from the same distribution. Although this problem is closely related to the preceding clustering problem, it is more natural for the underlying poset here to be the reverse of the partition poset, formed by reversing its order relation, i.e., the poset is the collection of all partitions of \(\{1,\ldots,p\}\) ordered by coarsening. With this reverse ordering, the least element corresponds to all \(p\) samples belonging to the same group (i.e., coming from the same distribution), which generalizes the usual null hypothesis in two-sample testing. The rank of a partition is equal to the number of groups minus one. Thus, higher-rank elements correspond to the \(p\) samples arising from many distinct distributions._

**Example 4** (Causal structure learning).: _Causal associations among a collection of variables are often characterized by a directed acyclic graph (DAG), namely a graph with directed edges and no (directed) cycles, in which the nodes index the variables. Causal structure learning entails inferring this DAG from observations of the variables. The structure of a DAG specifies a causal model via conditional independence relations among the variables, with denser DAGs encoding fewer conditional independencies in comparison with sparser DAGs. (See [4] for details on how the structure of a DAG encodes conditional independence relations; here we describe only those aspects that pertain to a poset formulation to organize the collection of all causal models based on graph structure.)
Distinct DAGs can specify the same set of conditional independence relations, and these are called Markov equivalent DAGs. We introduce some terminology to characterize Markov equivalent DAGs. The skeleton of a DAG is the undirected graph obtained by making all the edges undirected. A \(v\)-structure is a set of three nodes \(x,y,z\) such that there are directed edges from \(x\) to \(z\) and from \(y\) to \(z\), and there is no edge between \(x\) and \(y\). Two DAGs are Markov equivalent if and only if they have the same skeleton and the same collection of \(v\)-structures. A Markov equivalence class of DAGs can be described by a completed partially DAG (CPDAG), which is a graph consisting of both directed and undirected edges. A CPDAG has a directed edge from a node \(x\) to a node \(y\) if and only if this directed edge is present in every DAG in the associated Markov equivalence class. A CPDAG has an undirected edge between nodes \(x\) and \(y\) if the corresponding Markov equivalence class contains a DAG with a directed edge from \(x\) to \(y\) and a DAG with a directed edge from \(y\) to \(x\). One can check that the total number of edges in a CPDAG (directed plus undirected) is equal to the number of edges in any DAG in the associated Markov equivalence class. The collection of CPDAGs on \(p\) variables may be viewed as a poset ordered by inclusion - CPDAGs \(\mathcal{C}^{(1)},\mathcal{C}^{(2)}\) satisfy \(\mathcal{C}^{(1)}\preceq\mathcal{C}^{(2)}\) if and only if there exist DAGs \(\mathcal{G}^{(1)},\mathcal{G}^{(2)}\) in the respective Markov equivalence classes such that \(\mathcal{G}^{(1)}\) is a directed subgraph of \(\mathcal{G}^{(2)}\). In other words, \(\mathcal{C}^{(1)}\preceq\mathcal{C}^{(2)}\) if and only if all the conditional independencies encoded by \(\mathcal{C}^{(2)}\) are also encoded by \(\mathcal{C}^{(1)}\), or equivalently that all the conditional dependencies encoded by \(\mathcal{C}^{(1)}\) are also encoded by \(\mathcal{C}^{(2)}\). The least element is given by the CPDAG with no edges, and the rank function is equal to the number of edges. Higher-rank elements in this poset correspond to causal models exhibiting more conditional dependence relations._ **Example 5** (Multiple changepoint estimation).: _Consider the problem of detecting changepoints in a multivariate time series. Specifically, we observe \(p\) signals each for time instances \(t=0,\ldots,T-1\), each signal consists of at most one change (e.g., a change in the distribution or dynamics underlying the signal observations), and the objective is to identify these changes. We denote changepoints via vectors \(x=(x_{1},\ldots,x_{p})\in\{0,\ldots,T\}^{p}\), with \(x_{i}\) denoting the time index when a change occurs in the \(i\)'th signal and \(x_{i}=T\) corresponding to no change occurring. The poset here is the set \(\{0,1,\ldots,T\}^{p}\) ordered such that \(x\preceq y\) if and only if \(x_{j}\geq y_{j}\) for all \(j=1,\ldots,p\), the least element is \((T,\ldots,T)\), and the rank of an element is \(p\cdot T\) minus the sum of the coordinates. Higher-rank elements correspond to changepoint estimates in which the changes occur early. This poset is the reverse of the (bounded) integer poset [23] with the product order._ **Example 6** (Partial ranking).: _We seek a ranking of a finite set of items given noisy observations (e.g., pairwise comparisons), and we allow some pairs of items to be declared as incomparable. 
Such a _partial ranking_ of the elements of a finite set \(S\) corresponds to a _strict partial order_ on \(S\), i.e., a relation \(\mathcal{R}\) that is irreflexive \(((a,a)\notin\mathcal{R},\ \forall a\in S)\), asymmetric \(((a,b)\in\mathcal{R}\Rightarrow(b,a)\notin\mathcal{R},\ \forall a,b\in S)\), and transitive; if an element of \(S\) does not appear in \(\mathcal{R}\), then that element is incomparable to any of the other elements of \(S\) in the associated partial ranking. The poset here is the collection of strict partial orders on \(S\) ordered by inclusion, the least element is the empty set, and the rank of a partial ranking is the cardinality of the associated relation. Thus, higher-rank elements correspond to partial rankings that compare many of the items._

**Example 7** (Total ranking).: _We again wish to rank a finite collection of items but now we seek a total ranking that provides an ordered list of all the items. The setting is that we are given a total ranking that represents our current state of knowledge (i.e., a 'null model') as well as a new set of noisy observations, and the goal is to identify a total ranking that represents an update of the null model to reflect the new information. Each total ranking of the elements of a finite set \(S\) corresponds to a one-to-one function from \(S\) to the integers \(\{1,\ldots,|S|\}\). Let \(\pi_{\mathrm{null}}\) be the function that describes the null ranking. A convenient way to compare total rankings and to define a poset structure over them is via the notion of an inversion set. For any total ranking specified by a function \(\pi\), the associated inversion set (with respect to the null ranking \(\pi_{\mathrm{null}}\)) is defined as \(\mathrm{inv}(\pi;\pi_{\mathrm{null}})\triangleq\{(x,y)\in S\times S\mid\pi_{ \mathrm{null}}(x)<\pi_{\mathrm{null}}(y),\ \pi(x)>\pi(y)\}\). The poset here (with respect to a given null ranking \(\pi_{\mathrm{null}}\)) is the collection of total rankings on \(S\) ordered by inclusion of the associated inversion sets, the least element is the null ranking \(\pi_{\mathrm{null}}\), and the rank of a total ranking is the cardinality of the associated inversion set; this rank function is also equal to the Kendall tau distance between a total ranking and \(\pi_{\mathrm{null}}\). Thus, higher-rank elements are given by total rankings that depart significantly from the null ranking \(\pi_{\mathrm{null}}\). This poset is called the _permutation poset_ [23]._

**Example 8** (Subspace estimation).: _The task is to estimate a subspace in \(\mathbb{R}^{p}\) given noisy observations of points in the subspace. The poset is the collection of subspaces in \(\mathbb{R}^{p}\) ordered by inclusion, the least element is the subspace \(\{0\}\), and the rank of a subspace is its dimension. This poset is called the subspace poset._

**Example 9** (Blind source separation).: _We are given a signal in \(\mathbb{R}^{p}\) that is expressed as a linear combination of some unknown source signals and the goal is to estimate these sources. The poset here is the collection of linearly independent subsets of unit-norm vectors in \(\mathbb{R}^{p}\) ordered by inclusion, the least element is the empty set, and the rank of a linearly independent subset is equal to the cardinality of the subset._

With respect to formalizing the notion of false positive and false negative errors, Example 1 is prominently considered in the literature, while Examples 3 and 5 are multivariate generalizations of previously studied cases [9, 11].
Finally, Example 8 was studied in [24], although that treatment proceeded from a geometric perspective rather than the order-theoretic approach presented in this paper. With the exception of Example 1, none of the other examples permit a natural formulation within the traditional multiple testing paradigm due to the lack of a Boolean logical structure underlying the associated model classes. Moreover, Examples 8-9 are model classes consisting of infinitely many elements. Nonetheless, we describe in the sequel how the poset formalism enables a systematic and unified framework for formulating model selection in all of the examples above. ### Evaluating True and False Discoveries To assess the extent to which an estimated model signifies discoveries about the true model, we describe next a general approach to quantify similarity between poset elements in a manner that respects partial order structure. **Definition 1** (similarity valuation).: _Let \((\mathcal{L},\preceq,\mathrm{rank}(\cdot))\) be a graded poset. A function \(\rho:\mathcal{L}\times\mathcal{L}\rightarrow\mathbb{R}\) that is symmetric, i.e., \(\rho(x,y)=\rho(y,x)\) for all \(x,y\in\mathcal{L}\), is called a _similarity valuation_ over \(\mathcal{L}\) if:_ * \(0\leq\rho(x,y)\leq\min\{\mathrm{rank}(x),\mathrm{rank}(y)\}\) _for all_ \(x,y\in\mathcal{L}\)_,_ * \(\rho(x,y)\leq\rho(z,y)\) _for all_ \(x\preceq z\)_,_ * \(\rho(x,y)=\mathrm{rank}(x)\) _if and only if_ \(x\preceq y\)_._ **Remark 1**.: _The term 'valuation' is often used in the order-theory literature [23] to denote functions on posets that respect the underlying partial order structure, and we use it in our context for the same reason._ In the sequel we describe similarity valuations for the various model posets discussed previously. The conditions above make similarity valuations well-suited for quantifying the amount of discovery in an estimated model with respect to a true model. The first condition states that the amount of discovery must be bounded above by the complexities of the true and estimated models (which are specified by the rank function). The second condition requires similarity valuations to respect partial order structure so that more complex models do not yield less discovery than less complex ones. The final condition expresses the desirable property that the amount of discovery contained in an estimated model is equal to the complexity of that model if and only if it is 'contained in' the true model. With these properties, we obtain the following analogs of true and false discoveries and of related quantities such as false discovery proportion. **Definition 2** (true and false discoveries).: _Let \((\mathcal{L},\preceq,\mathrm{rank}(\cdot))\) be a graded poset and let \(\rho\) be a similarity valuation on \(\mathcal{L}\). 
Letting \(x^{\star}\in\mathcal{L}\) be a true model and \(\hat{x}\in\mathcal{L}\) be an estimate, the true discovery, the false discovery, and the false discovery proportion are, respectively, defined as follows:_ \[\mathrm{TD}(\hat{x},x^{\star}) \triangleq\rho(\hat{x},x^{\star}),\] \[\mathrm{FD}(\hat{x},x^{\star}) \triangleq\mathrm{rank}(\hat{x})-\rho(\hat{x},x^{\star})= \mathrm{rank}(\hat{x})-\mathrm{TD}(\hat{x},x^{\star}),\] \[\mathrm{FDP}(\hat{x},x^{\star}) \triangleq\frac{\mathrm{rank}(\hat{x})-\rho(\hat{x},x^{\star})}{ \mathrm{rank}(\hat{x})}=\frac{\mathrm{FD}(\hat{x},x^{\star})}{\mathrm{rank}( \hat{x})}.\] With these definitions, we articulate our model selection objective more precisely: **Goal**: _identify the largest rank model subject to control in expectation or in probability on false discovery (proportion)._ This objective is akin to seeking the largest amount of discovery subject to control on false discovery (rate). The data available to carry out model selection vary across our examples; in Section 3 we describe methods to obtain false discovery control guarantees in various settings. To carry out this program, a central question is the choice of a suitable similarity valuation for a graded model poset. Indeed, it is unclear whether there always exists a similarity valuation for any graded model poset \((\mathcal{L},\preceq,\mathrm{rank}(\cdot))\). To address this question, consider the following function for \(x,y\in\mathcal{L}\): \[\rho_{\mathrm{meet}}(x,y)\triangleq\max_{z\preceq x,z\preceq y}\ \mathrm{rank}(z). \tag{1}\] **Remark 2**.: _In order theory, a poset \((\mathcal{L},\preceq)\) is said to possess a meet if for each \(x,y\in\mathcal{L}\) there exists a \(z\in\mathcal{L}\) satisfying \((i)\ z\preceq x,z\preceq y\) and \((ii)\) for any \(w\in\mathcal{L}\) with \(w\preceq x,w\preceq y\), we have \(w\preceq z\); such a \(z\) is called the meet of \(x,y\) and posets that possess a meet are called meet semi-lattices. Except for the poset in Example 4 on causal structure learning, the posets in the other examples are meet semi-lattices (see Appendix A). The subscript'meet' in (1) signifies that \(\rho_{meet}\) is the rank of the meet for meet semi-lattices, although \(\rho_{meet}\) is well-defined even if \((\mathcal{L},\preceq)\) is not a meet semi-lattice._ One can check that \(\rho_{\mathrm{meet}}\) is a similarity valuation on any graded poset \((\mathcal{L},\preceq,\mathrm{rank}(\cdot))\); see Appendix B for a proof. For Example 1 on variable selection, \(\rho_{\mathrm{meet}}\) has the desirable property that it reduces to the number of common variables in two models; thus, the general model selection goal formulated above reduces to the usual problem of maximizing the number of selected variables subject to control on the number of selected variables that are null. Next we describe the model selection problems we obtain in Examples 2-6 with \(\rho_{\mathrm{meet}}\) as the choice of similarity valuation. In Example 2 on clustering, the value of \(\rho_{\mathrm{meet}}\) for two partitions of \(p\) variables is equal to \(p\) minus the number of groups in the coarsest common refinement of the partitions. The model selection problem is that of partitioning the variables into the smallest number of groups subject to control on the additional number of groups in the coarsest common refinement of the estimated and true partitions compared to the number of groups in the estimated partition. 
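To make the clustering case concrete, the following short sketch evaluates \(\rho_{\text{meet}}\) on the partition poset via the coarsest common refinement and reproduces the three-item clustering illustration from the introduction; the helper functions are our own and are meant only to illustrate the definitions.

```python
def common_refinement(P, Q):
    """Coarsest common refinement of two partitions (lists of sets): its blocks are
    the nonempty pairwise intersections of blocks of P and Q."""
    return [A & B for A in P for B in Q if A & B]

def rank_partition(P, p):
    """Rank in the partition poset: p minus the number of groups."""
    return p - len(P)

def rho_meet_partitions(P, Q, p):
    """rho_meet for the partition poset: rank of the coarsest common refinement."""
    return rank_partition(common_refinement(P, Q), p)

# Three items a, b, c: true clustering {a, b}, {c}; estimated clustering {a, b, c}.
p = 3
x_true = [{"a", "b"}, {"c"}]
x_hat = [{"a", "b", "c"}]

td = rho_meet_partitions(x_hat, x_true, p)   # true discovery
fd = rank_partition(x_hat, p) - td           # false discovery
print(td, fd)   # 1 1 -- one true and one false discovery, as in the introduction
```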
Recall that the poset in Example 3 on multisample testing is the reverse of the poset in Example 2; thus, many of the notions from the preceding paragraph are appropriately 'reversed' in Example 3. In particular, the value of \(\rho_{\mathrm{meet}}\) in Example 3 for two partitions of \(p\) samples is equal to the number of groups in the finest common coarsening of the partitions. The model selection problem entails partitioning the samples into the largest number of groups subject to control on the additional number of groups in the estimated partition compared to the number of groups in the finest common coarsening of the estimated and true partitions.

In Example 4 on causal structure learning, the value of \(\rho_{\text{meet}}\) for two CPDAGs \(\mathcal{C}^{(1)},\mathcal{C}^{(2)}\) is equal to the maximum number of edges in a CPDAG that encodes all the conditional independencies of \(\mathcal{C}^{(1)}\) and of \(\mathcal{C}^{(2)}\). The model selection task is then to identify the CPDAG with the largest number of edges subject to control on the additional number of edges in the estimated CPDAG compared to the densest CPDAG that encodes all the conditional independence relationships in both the true and estimated CPDAGs.

In Example 5 on multiple changepoint estimation, suppose \(x,y\in\{0,\ldots,T\}^{p}\) are vectors of time indices specifying changepoints in \(p\) signals. We have that \(\rho_{\text{meet}}(x,y)=p\cdot T-\sum_{i=1}^{p}\max\{x_{i},y_{i}\}\). The model selection problem entails identifying changes as quickly as possible subject to control on early detection of changes (i.e., declaring changes before they occur); this is a multivariate generalization of the classic quickest change detection problem [11].

In Example 6 on partial ranking, the value of \(\rho_{\text{meet}}\) for two partial rankings is equal to the cardinality of the intersection of the associated relations, i.e., the number of common comparisons in the two partial rankings. The associated model selection problem is that of identifying a partial ranking with the largest number of comparisons (i.e., the associated relation must have large cardinality) subject to control on the number of comparisons in the estimated partial ranking that are not in the true partial ranking.

In Examples 1-6, the function \(\rho_{\text{meet}}\) of (1) provides a convenient way to assess the amount of discovery in an estimated model with respect to a true model, thereby yielding natural formulations for model selection. However, in Examples 7-9, \(\rho_{\text{meet}}\) has some undesirable features. Consider first the setup in Example 7 on total ranking for the set \(S=\{a,b,c\}\) with the null model given by the ranking \(\pi_{\text{null}}(a)=1,\pi_{\text{null}}(b)=2,\pi_{\text{null}}(c)=3\), the true model given by the ranking \(\pi^{\star}(a)=3,\pi^{\star}(b)=1,\pi^{\star}(c)=2\) (Hasse diagram shown in Figure 1), and the estimated ranking given by \(\hat{\pi}(a)=2,\hat{\pi}(b)=3,\hat{\pi}(c)=1\). In this case, one can see from Figure 1 that \(\rho_{\text{meet}}(\hat{\pi},\pi^{\star})=0\), which suggests that no discovery is made.
On the other hand, the inversion sets of these rankings are given by \(\text{inv}(\pi^{\star};\pi_{\text{null}})=\{(a,b),(a,c)\}\) and \(\text{inv}(\hat{\pi};\pi_{\text{null}})=\{(a,c),(b,c)\}\), and the element \((a,c)\) is common to both inversion sets as the fact that item \(c\) is ranked higher than item \(a\) in the true model has been discovered in the estimated model; this reasoning suggests that a positive quantity would be a more appropriate value for the similarity valuation between \(\hat{\pi}\) and \(\pi^{\star}\). The key issue is that \(\text{inv}(\pi^{\star};\pi_{\text{null}})\cap\text{inv}(\hat{\pi};\pi_{\text{ null}})\) is not an inversion set of any total ranking, but this intersection still carries valuable information about true discoveries made in \(\hat{\pi}\) about \(\pi^{\star}\). However, the similarity valuation \(\rho_{\text{meet}}\) only considers subsets of \(\text{inv}(\pi^{\star};\pi_{\text{null}})\cap\text{inv}(\hat{\pi};\pi_{\text{ null}})\) that correspond to inversion sets of total rankings as the maximization in (1) is constrained to be over poset elements. Motivated by this discussion, we employ the following similarity valuation in Example 7 for total rankings \(\pi,\tilde{\pi}\) (with respect to a null model \(\pi_{\text{null}}\)): \[\rho_{\text{total-ranking}}(\pi,\tilde{\pi})=|\text{inv}(\pi;\pi_{\text{ null}})\cap\text{inv}(\tilde{\pi};\pi_{\text{null}})|. \tag{2}\] With this similarity valuation, the model selection problem reduces to identifying a total ranking with the largest inversion set (with respect to \(\pi_{\text{null}}\)) subject to control on the number of comparisons in the inversion set of the estimated total ranking that are not in the inversion set of the true total ranking. Next, in Example 8, \(\rho_{\text{meet}}(\hat{x},x^{\star})\) is equal to the dimension of the intersection of the subspaces \(\hat{x},x^{\star}\). When these subspaces have small dimension, for example, \(\rho_{\text{meet}}\) generically equals zero regardless of the angle between the subspaces; in words, \(\rho_{\text{meet}}\) does not consider the smooth structure underlying the collection of subspaces. As discussed in [24], a more suitable measure of similarity is the sum of the squares of the cosines of the principal angles between the subspaces, which is expressed as follows using projection matrices onto subspaces \(\mathcal{U},\tilde{\mathcal{U}}\): \[\rho_{\text{subspace}}(\mathcal{U},\tilde{\mathcal{U}})=\text{trace}(\mathcal{ P}_{\mathcal{U}}\mathcal{P}_{\tilde{\mathcal{U}}}). \tag{3}\] The model selection task is to identify the largest-dimensional subspace subject to control on the sum of the squares of the cosines of the principal angles between the estimated subspace and the orthogonal complement of the true subspace. Finally, \(\rho_{\text{meet}}\) is inadequate as a similarity valuation in Example 9 for the same reasons as in Example 8 due to the underlying smooth structure, and we propose here a more appropriate alternative. Given \(B\in\mathbb{R}^{p\times k},\ \tilde{B}\in\mathbb{R}^{p\times\ell}\) (these matrices have unit-norm and linearly independent columns representing source signals), suppose without loss of generality that \(k\leq\ell\) (due to the symmetry of similarity valuations) and let \(\text{Perm}(\ell)\) be the collection of bijections on \(\{1,\dots,\ell\}\). 
With this notation, consider the following similarity valuation: \[\rho_{\text{source-separation}}(B,\tilde{B})=\max_{\sigma\in\text{Perm}(\ell)}\sum_{i=1}^{k}(B^{T}\tilde{B})_{i,\sigma(i)}^{2}. \tag{4}\] This valuation is better suited to quantify the degree of alignment between two collections of vectors in source separation than \(\rho_{\text{meet}}\). Model selection entails identifying the largest collection of source vectors subject to control on the difference in the number of estimated source vectors and the alignment between the true and estimated source vectors as evaluated by \(\rho_{\text{source-separation}}\).

Table 1 summarizes our discussion of the various model posets and their associated similarity valuations.

| **problem domain** | **models** | **least element (i.e., global null)** | **partial order** | **rank (model complexity)** | **similarity valuation (i.e., true discoveries)** |
|---|---|---|---|---|---|
| _variable selection_ | subsets of \(\{1,\ldots,p\}\) | \(\emptyset\) | inclusion of subsets | cardinality of subset | subsets \(x,\tilde{x}\): \(\rho(x,\tilde{x})=\lvert x\cap\tilde{x}\rvert\) |
| _clustering_ | partitions of \(\{1,\ldots,p\}\) | \(\{1\},\{2\},\ldots,\{p\}\) | refinement of partition | \(p-\#\)groups | partitions \(x,\tilde{x}\): \(\rho(x,\tilde{x})=p-\#\)groups in coarsest common refinement |
| _multisample testing_ | partitions of \(\{1,\ldots,p\}\) | \(\{1,2,\ldots,p\}\) | coarsening of partition | \(\#\)groups \(-1\) | partitions \(x,\tilde{x}\): \(\rho(x,\tilde{x})=\#\)groups in finest common coarsening |
| _causal structure learning_ | completed partially directed acyclic graphs (CPDAGs) on a set of variables | CPDAG with no edges | inclusion of conditional dependencies encoded by CPDAGs | \(\#\)edges | CPDAGs \(\mathcal{C},\tilde{\mathcal{C}}\): \(\rho(\mathcal{C},\tilde{\mathcal{C}})=\#\)edges in densest CPDAG encoding the conditional independencies of both \(\mathcal{C},\tilde{\mathcal{C}}\) |
| _multiple changepoint estimation_ | elements of \(\{0,\ldots,T\}^{p}\) | \((T,T,\ldots,T)\) | entrywise reverse ordering | \(p\cdot T\) minus sum of entries | changepoint vectors \(x,\tilde{x}\): \(\rho(x,\tilde{x})=p\cdot T-\sum_{i}\max\{x_{i},\tilde{x}_{i}\}\) |
| _partial ranking_ | relations specified by strict partial orders on a set of items | \(\emptyset\) | inclusion of sets specifying relations | cardinality of set specifying relation | relations \(\mathcal{R},\tilde{\mathcal{R}}\): \(\rho(\mathcal{R},\tilde{\mathcal{R}})=\lvert\mathcal{R}\cap\tilde{\mathcal{R}}\rvert\) |
| _total ranking_ | total orders on a set of items | base ranking \(\pi_{\text{null}}\) | inclusion of inversion sets | cardinality of inversion set | total rankings \(\pi,\tilde{\pi}\): \(\rho(\pi,\tilde{\pi})=\lvert\operatorname{inv}(\pi;\pi_{\text{null}})\cap\operatorname{inv}(\tilde{\pi};\pi_{\text{null}})\rvert\) |
| _subspace estimation_ | subspaces in \(\mathbb{R}^{p}\) | \(\{0\}\) | inclusion of subspaces | dimension of subspace | subspaces \(\mathcal{U},\tilde{\mathcal{U}}\): \(\rho(\mathcal{U},\tilde{\mathcal{U}})=\operatorname{trace}(\mathcal{P}_{\mathcal{U}}\mathcal{P}_{\tilde{\mathcal{U}}})\) |
| _blind source separation_ | linearly independent subsets of unit-norm vectors in \(\mathbb{R}^{p}\) | \(\emptyset\) | inclusion of subsets | cardinality of subset | columns of \(B\in\mathbb{R}^{p\times k},\tilde{B}\in\mathbb{R}^{p\times\ell}\), \(k\leq\ell\): \(\rho(B,\tilde{B})=\max_{\sigma\in\operatorname{Perm}(\ell)}\sum_{i=1}^{k}(B^{T}\tilde{B})_{i,\sigma(i)}^{2}\) |

Table 1: Problem classes and associated characterization of model selection via posets.

In conclusion, while \(\rho_{\text{meet}}\) is a similarity valuation for any model poset, it is not always the most natural choice and identifying a suitable similarity valuation that captures the essential features of an application is key to properly formulating a model selection problem. This situation is not unlike the selection of an appropriate loss function in point estimation - while there exist many candidates that are mathematically valid, the utility of an estimation procedure in the context of a problem domain depends critically on a well-chosen loss.

## 3 False discovery control over posets

In this section we turn our attention to the task of identifying models of large rank that provide false discovery control. We begin in Section 3.1 with a general greedy strategy for poset search that facilitates the design of model selection procedures, and we specialize this framework to specific approaches in Sections 3.2 and 3.3. Some of the discussion in Section 3.1 is relevant for all of the posets in Examples 1-9, while the methodology presented in Sections 3.2-3.3 is applicable to general discrete posets with integer-valued similarity valuations such as in Examples 1-7. Along the way, we remark on some of the challenges that arise in the two continuous cases of Examples 8-9.

### Greedy Approaches to Model Selection

To make progress on the problem of identifying large rank models that provide control on false discovery, we begin by noting that the false discovery \(\text{FD}(\hat{x},x^{\star})\) in an estimated model \(\hat{x}\) with respect to a true model \(x^{\star}\) may be expressed as the following telescoping sum for any path \((x_{0},x_{1},\ldots,x_{k-1},x_{k})\) with \(x_{0}\) being the least element \(x_{\text{least}}\) and \(x_{k}=\hat{x}\): \[\text{FD}(\hat{x},x^{\star})=\sum_{i=1}^{k}1-[\rho(x_{i},x^{\star})-\rho(x_{i-1},x^{\star})]. \tag{5}\] The term \(1-[\rho(x_{i},x^{\star})-\rho(x_{i-1},x^{\star})]\) may be interpreted as the "additional false discovery" incurred by the model \(x_{i}\) relative to the model \(x_{i-1}\). The above decomposition of false discovery in terms of a path from the least element to an estimated model suggests a natural approach for model selection.
In particular, we observe that a sufficient condition for \(\text{FD}(\hat{x},x^{\star})\) to be small is for each term in the above sum to be small. Thus, we will greedily grow a path starting from the least element \(x_{0}=x_{\text{least}}\) by adding one element \(x_{i}\) at a time such that each \((x_{i-1},x_{i})\) is a covering pair and each \(1-[\rho(x_{i},x^{\star})-\rho(x_{i-1},x^{\star})]\) is small. We continue this process until we can no longer guarantee that \(1-[\rho(x_{i},x^{\star})-\rho(x_{i-1},x^{\star})]\) is small. For such a procedure to be fruitful, we require some data-driven method to bound \(1-[\rho(x_{i},x^{\star})-\rho(x_{i-1},x^{\star})]\) as the true model \(x^{\star}\) is not known. Our objective therefore is to design a data-dependent function \(\Psi:\{(a,b)\mid b\text{ covers }a\text{ in }\mathcal{L}\}\to[0,1]\) that takes as input covering pairs and outputs a number in the interval \([0,1]\), and further satisfies the property that \(\Psi(u,v)\) being small is a sufficient condition for \(1-[\rho(v,x^{\star})-\rho(u,x^{\star})]\) to be small (in expectation or in probability). Given such a function, we grow a path using the greedy strategy outlined above by identifying at each step a covering pair that minimizes \(\Psi\). Algorithm 1 provides the details. In Sections 3.2 and 3.3, we present two approaches for designing suitable functions \(\Psi\): one based on a notion of stability and the other based on testing. Proofs that both these methods control for false discoveries are presented in Section 5. ``` 1:Input: poset \(\mathcal{L}\), threshold \(\alpha\in[0,1]\); data-dependent function \(\Psi:\{(a,b)\mid b\text{ covers }a\text{ in }\mathcal{L}\}\to[0,1]\) 2:Greedy selection: Set \(u=x_{\text{least}}\) and perform: (a) find \(v_{\text{opt}}\in\operatorname*{argmin}_{\{(u,v)\mid\text{ v covers }u\text{ in }\mathcal{L}\}}\Psi(u,v)\). (b) if \(\Psi(u,v_{\text{opt}})\leq\alpha\), set \(u=v_{\text{opt}}\) and repeat steps (2a-2b). Otherwise, stop. 3:Output: return \(\hat{x}=u\) ``` **Algorithm 1** Greedy sequential algorithm for model selection In designing a suitable function \(\Psi\) so that \(1-(\rho(v,x^{\star})-\rho(u,x^{\star}))\) is small (in expectation or in probability) whenever \(\Psi(u,v)\) is small, we note that the examples presented in Section 2 exhibit an important invariance. Specifically, in each example there are distinct covering pairs \((u,v)\) and \((u^{\prime},v^{\prime})\) such that \(1-[\rho(v,x^{\star})-\rho(u,x^{\star})]=1-[\rho(v^{\prime},x^{\star})-\rho(u^{ \prime},x^{\star})]\) for every true model \(x^{\star}\). Accordingly, it is natural that the function \(\Psi\) also satisfies the property that \(\Psi(u,v)=\Psi(u^{\prime},v^{\prime})\); stated differently, one need only specify \(\Psi\) for a'minimal' set of covering pairs. We present next a definition that formalizes this notion precisely. **Definition 3** (Minimal covering pairs).: _Consider a graded poset \((\mathcal{L},\preceq,\operatorname*{rank}(\cdot))\) endowed with a similarity valuation \(\rho\). 
A subset \(\mathcal{S}\subset\{(a,b)\mid b\text{ covers }a\text{ in }\mathcal{L}\}\) of covering pairs in \(\mathcal{L}\) is called minimal if the following two properties hold:_ * _For each covering pair_ \((u^{\prime},v^{\prime})\notin\mathcal{S}\)_, there exists_ \((u,v)\in\mathcal{S}\) _with_ \(\operatorname*{rank}(v)\leq\operatorname*{rank}(v^{\prime})\) _such that_ \(\rho(v,z)-\rho(u,z)=\rho(v^{\prime},z)-\rho(u^{\prime},z)\) _for all_ \(z\in\mathcal{L}\)_._ * _For distinct covering pairs_ \((u,v),(u^{\prime},v^{\prime})\in\mathcal{S}\)_, there exists some_ \(z\in\mathcal{L}\) _such that_ \(\rho(v,z)-\rho(u,z)\neq\rho(v^{\prime},z)-\rho(u^{\prime},z)\)_._

In words, a minimal set of covering pairs \(\mathcal{S}\) for a graded poset \(\mathcal{L}\) is an inclusion-minimal collection of smallest-rank covering pairs for which it suffices to consider the values of \(\Psi\). For Example 1 on variable selection with the similarity valuation \(\rho_{\text{meet}}\), a minimal set of covering pairs is given by \(\mathcal{S}=\{(\emptyset,\{i\})\mid i=1,\ldots,p\}\) and this minimal set is unique. In general, however, such sets are not unique; see Appendix E where we derive minimal sets of covering pairs for several examples. Minimal sets of covering pairs are significant methodologically from both computational and statistical perspectives. In particular, several of our bounds for discrete posets depend on the cardinality \(|\mathcal{S}|\) and these also involve computations that scale in number of operations with \(|\mathcal{S}|\). Therefore, identifying a minimal set of covering pairs that is small in cardinality is central to the success of our proposed methods. In the remainder of this section, we assume that a minimal set of covering pairs \(\mathcal{S}\) for a given model poset \(\mathcal{L}\) is available.

### Model selection based on stability

Our first method for designing a suitable function \(\Psi\) to employ in Algorithm 1 is based on subsampling and corresponding model averaging. We assume that we have access to a base procedure \(\hat{x}_{\mathrm{base}}\) that provides model estimates from data as well as a dataset \(\mathcal{D}\) consisting of observations drawn from a probability distribution parameterized by the true model \(x^{\star}\), and our approach is to aggregate the model estimates provided by \(\hat{x}_{\mathrm{base}}\) on subsamples of \(\mathcal{D}\). The requirements on the quality of the procedure \(\hat{x}_{\mathrm{base}}\) are quite mild, and we prove bounds in the sequel on the false discovery associated to the aggregated model. In particular, the aggregation method ensures that the averaged model is 'stable' in the sense that it contains discoveries that are supported by a large fraction of the subsamples. Our method generalizes the stability selection method for variable selection [13, 22] and subspace stability selection for subspace estimation [24]. We demonstrate the broad applicability of this methodology in Section 4 by applying it to several examples from Section 2. Formally, fix a positive even integer \(B\) and obtain \(B/2\) complementary partitions of the dataset \(\mathcal{D}\), each of which partitions \(\mathcal{D}\) into two subsamples of equal size. Let this collection of subsamples be denoted \(\{\mathcal{D}^{(\ell)}\}_{\ell=1}^{B}\), and let \(\hat{x}_{\mathrm{base}}(\mathcal{D}^{(\ell)})\) denote the model estimate obtained by applying the base procedure to the subsample \(\mathcal{D}^{(\ell)}\).
For any covering pair \((u,v)\) of a model poset \(\mathcal{L}\), we define: \[\Psi_{\mathrm{stable}}(u,v)\triangleq 1-\frac{1}{B}\sum_{\ell=1}^{B}\frac{ \rho(v,\hat{x}_{\mathrm{base}}(\mathcal{D}^{(\ell)}))-\rho(u,\hat{x}_{\mathrm{ base}}(\mathcal{D}^{(\ell)}))}{c_{\mathcal{L}}(u,v)}, \tag{6}\] where \(c_{\mathcal{L}}(u,v)\triangleq\max_{z\in\mathcal{L}}\rho(v,z)-\rho(u,z)\). Appealing to properties of similarity valuations, we have that \(\rho(v,\hat{x}_{\mathrm{base}}(\mathcal{D}^{(\ell)}))-\rho(u,\hat{x}_{ \mathrm{base}}(\mathcal{D}^{(\ell)}))\geq 0\) and \(c_{\mathcal{L}}(u,v)\geq 1\). The term \(\rho(v,\hat{x}_{\mathrm{base}}(\mathcal{D}^{(\ell)}))-\rho(u,\hat{x}_{ \mathrm{base}}(\mathcal{D}^{(\ell)}))\) measures the additional discovery about \(\hat{x}_{\mathrm{base}}(\mathcal{D}^{(\ell)})\) in the model \(v\) relative to the model \(u\), while the quantity \(c_{\mathcal{L}}(u,v)\) serves as normalization to ensure that \(\Psi_{\mathrm{stable}}(u,v)\in[0,1]\). In particular, \(\Psi_{\mathrm{stable}}(u,v)\) being small implies that the additional discovery represented by the model \(v\) over the model \(u\) is supported by a large fraction of the subsamples \(\{\mathcal{D}^{(\ell)}\}_{\ell=1}^{B}\). Consequently, when \(\Psi_{\mathrm{stable}}\) is employed in the context of Algorithm 1 in which we greedily grow a path, each'step' in the path corresponds to a discovery that is supported by a large fraction of the subsamples. We provide theoretical support for this approach in Theorem 10 in the sequel and the proof proceeds by showing that \(\Psi_{\mathrm{stable}}(u,v)\) being small implies that \(\mathbb{E}[1-(\rho(u,x^{\star})-\rho(v,x^{\star}))]\) is small; we combine this observation with the telescoping sum formula (5) to obtain a bound on the expected false discovery of the model estimated by Algorithm 1. When Algorithm 1 with \(\Psi=\Psi_{\mathrm{stable}}\) is specialized to Example 1 and Example 8, we obtain the stability selection procedure of [13, 22] and the subspace stability selection method of [24]. For variable selection in particular, Algorithm 1 with \(\Psi=\Psi_{\mathrm{stable}}\) outputs the subset of variables that appear in at least a \(1-\alpha\) fraction of the models estimated by the base procedure when applied to the subsamples \(\{\mathcal{D}^{(\ell)}\}_{\ell=1}^{B}\). More generally, Algorithm 1 with \(\Psi=\Psi_{\mathrm{stable}}\) also provides a procedure for model selection in Examples 2-7 corresponding to discrete model posets. **Theorem 10** (false discovery control for Algorithm 1 with \(\Psi=\Psi_{\mathrm{stable}}\)).: _Let \((\mathcal{L},\preceq,\mathrm{rank}(\cdot))\) be a graded discrete model poset with integer-valued similarity valuation \(\rho\) and let \(\mathcal{S}\) be an associated set of minimal covering pairs. Let \(\hat{x}_{\mathrm{base}}\) be a base estimator. Suppose the dataset \(\mathcal{D}\) employed in the computation of \(\Psi_{stable}\) consists of i.i.d. observations from a distribution parametrized by the true model \(x^{\star}\in\mathcal{L}\), and suppose \(\hat{x}_{\mathrm{sub}}\) is an estimator obtained by applying \(\hat{x}_{\mathrm{base}}\) to a subsample of \(\mathcal{D}\) of size \(|\mathcal{D}|/2\). Fix \(\alpha\in(0,1/2)\) and a positive, even integer \(B\). 
The output \(\hat{x}_{stable}\) from Algorithm 1 with \(\Psi=\Psi_{\mathrm{stable}}\) satisfies the following false discovery bound_ \[\mathbb{E}[\mathrm{FD}(\hat{x}_{\mathrm{stable}},x^{\star})]\leq\sum_{ \begin{subarray}{c}(u,v)\in\mathcal{S}\\ \rho(v,x^{\star})=\rho(u,x^{\star})\end{subarray}}\frac{\mathbb{E}[\rho(v, \hat{x}_{\mathrm{sub}})-\rho(u,\hat{x}_{\mathrm{sub}})]^{2}}{(1-2\alpha)c_{ \mathcal{L}}(u,v)^{2}}. \tag{7}\] The summation in the false discovery bound (7) is over covering pairs \((u,v)\in\mathcal{S}\) for which there is no additional discovery in the model \(v\) over the model \(u\) with respect to the true model \(x^{\star}\). For each such \((u,v)\), the term \(\mathbb{E}[\rho(v,\hat{x}_{\text{sub}})-\rho(u,\hat{x}_{\text{sub}})]^{2}\) characterizes the quality of the base estimator on subsamples; base estimators for which this term is small, when employed in the computation of \(\Psi_{\text{stable}}\) in the context of Algorithm 1, yield models \(\hat{x}_{\text{stable}}\) with small false discovery. **Remark 3**.: _When specialized to Example 1 on variable selection with similarity valuation \(\rho_{\text{meet}}\), we recover Theorem 1 of [22]. Specifically, in (7), we have that \(c_{\mathcal{L}}(u,v)=1\) for any covering pair \((u,v)\) and \(\sum_{(u,v)\in\mathcal{S},\rho(v,x^{\star})=\rho(u,x^{\star})}\mathbb{E}[\rho( v,\hat{x}_{\text{sub}})-\rho(u,\hat{x}_{\text{sub}})]^{2}=\sum_{\text{null}} \,\mathbb{E}[\mathbb{I}[\text{$i$ selected by $\hat{x}_{\text{sub}}$}]]^{2}\)._ Theorem 10 is general in its applicability to all the discrete posets in Section 2, and it provides an intuitive bound on expected false discovery. Nonetheless, it requires a characterization of the quality of the base estimator \(\hat{x}_{\text{base}}\) employed on subsamples. When such a characterization is unavailable, the false discovery bound (7) may not be easily computable in practice. To address this shortcoming and obtain easily computable bounds on false discovery, we consider natural assumptions on the estimator \(\hat{x}_{\text{sub}}\) corresponding to the base estimator \(\hat{x}_{\text{base}}\) applied to subsamples; these assumptions generalize those developed in [13, 24] for stability-based methods for variable selection and subspace estimation. To formulate these assumptions, we introduce some notation. Let \(\mathcal{T}_{\text{null}}\triangleq\{(u,v)\text{ covering pair in }\mathcal{L}\mid\rho(v,x^{\star})=\rho(u,x^{\star})\}\) consist of all covering pairs \((u,v)\) for which there is no additional discovery in the model \(v\) over the model \(u\) with respect to the true model \(x^{\star}\), let \(\text{rank}(\mathcal{L})\triangleq\max_{u\in\mathcal{L}}\text{rank}(u)\) be the largest rank of an element in \(\mathcal{L}\), and let \(\mathcal{S}_{k}\triangleq\{(u,v)\in\mathcal{S}\mid\text{rank}(v)=k\}\) for each \(k\in[\text{rank}(\mathcal{L})]\). 
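For orientation, the sketch below enumerates these objects for the variable selection poset: with \(\rho_{\text{meet}}\), a covering pair \((u,u\cup\{i\})\) lies in \(\mathcal{T}_{\text{null}}\) exactly when \(i\) is a null variable, and the minimal covering pairs \((\emptyset,\{i\})\) from Section 3.1 all have \(\operatorname{rank}(v)=1\), so \(\mathcal{S}_{k}\) is empty for \(k\geq 2\). This is a toy enumeration under our own naming conventions.

```python
from itertools import combinations

def rho_meet(x, y):
    return len(x & y)                        # similarity valuation for subsets

p, x_star = 4, frozenset({1, 2})             # true model: variables 1 and 2 are relevant
items = range(1, p + 1)

# All covering pairs (u, v) of the Boolean poset: v adds exactly one variable to u.
subsets = [frozenset(c) for r in range(p) for c in combinations(items, r)]
pairs = [(u, u | {i}) for u in subsets for i in items if i not in u]

# T_null: covering pairs that add no discovery about x_star (a null variable is added).
T_null = [(u, v) for (u, v) in pairs if rho_meet(v, x_star) == rho_meet(u, x_star)]

# Minimal covering pairs (empty set, {i}), stratified by rank(v).
S = [(frozenset(), frozenset({i})) for i in items]
S_by_rank = {k: [(u, v) for (u, v) in S if len(v) == k] for k in range(1, p + 1)}

print(len(pairs), len(T_null), [len(S_by_rank[k]) for k in range(1, p + 1)])
# 32 16 [4, 0, 0, 0]  -- S_k is empty for k >= 2 on this poset
```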
**Assumption 1** (better than random guessing).: _For each \(k\in[\text{rank}(\mathcal{L})]\) with \(\mathcal{S}_{k}\neq\emptyset\), we have that_ \[\sum_{(u,v)\in\mathcal{S}_{k}\cap\mathcal{T}_{\text{null}}}\frac {1}{|\mathcal{S}_{k}\cap\mathcal{T}_{\text{null}}|}\cdot\frac{\mathbb{E}[\rho( v,\hat{x}_{\text{sub}})-\rho(u,\hat{x}_{\text{sub}})]}{c_{\mathcal{L}}(u,v)}\] \[\leq\sum_{(u,v)\in\mathcal{S}_{k}\setminus\mathcal{T}_{\text{ null}}}\frac{1}{|\mathcal{S}_{k}\setminus\mathcal{T}_{\text{null}}|}\frac{ \mathbb{E}[\rho(v,\hat{x}_{\text{sub}})-\rho(u,\hat{x}_{\text{sub}})]}{c_{ \mathcal{L}}(u,v)}.\] **Assumption 2** (invariance in mean).: _For each \(k\in[\text{rank}(\mathcal{L})]\) with \(\mathcal{S}_{k}\neq\emptyset\), we have that \(\frac{\mathbb{E}[\rho(v,\hat{x}_{\text{sub}})-\rho(u,\hat{x}_{\text{sub}})]}{ c_{\mathcal{L}}(u,v)}\) is the same for each \((u,v)\in\mathcal{S}_{k}\cap T_{\text{null}}\)._ In words, Assumption 1 states that the average normalized difference in similarity valuation of the estimator \(\hat{x}_{\text{sub}}\) is smaller over 'null' covering pairs than over non-null covering pairs. Assumption 2 states that the expected value of the normalized difference in similarity of \(\hat{x}_{\text{sub}}\) is the same for each 'null' covering pair. For the case of variable selection (Example 1), Assumption 1 reduces precisely to the 'better than random guessing' assumption employed by [13], namely that the expected number of true positives divided by the expected number of false positives selected by the estimator \(\hat{x}_{\text{sub}}\) is larger than the same ratio for an estimator that selects variables at random. As a second condition, [13] required that the random variables in the collection \(\{\mathbb{I}[i\in\hat{x}_{\text{sub}}]:i\text{ null}\}\) are exchangeable. Our Assumption 2 when specialized to variable selection reduces to the weaker requirement that each of the random variables in the collection \(\{\mathbb{I}[i\in\hat{x}_{\text{sub}}]:i\text{ null}\}\) has the same mean. As a second illustration, consider the case of total ranking (Example 7) involving items \(a_{1},\ldots,a_{p}\), with the least element \(\pi_{\text{null}}\) given by \(\pi_{\text{null}}(a_{i})=i,\ i=1,\ldots,p\), the true total ranking by \(\pi^{\star}\), and the estimator on subsamples by \(\hat{\pi}_{\text{sub}}\). Fix any \(k\in\{1,\ldots,p-1\}\). Assumption 1 states that the expected number of pairs \((a_{i},a_{j})\in\text{inv}(\hat{\pi}_{\text{sub}};\pi_{\text{null}})\cap\text{ inv}(\pi^{\star};\pi_{\text{null}})\) with \(j-i=k\) divided by the expected number of pairs \((a_{i},a_{j})\in\text{inv}(\hat{\pi}_{\text{sub}};\pi_{\text{null}})\setminus \text{inv}(\pi^{\star};\pi_{\text{null}})\) with \(j-i=k\) is larger than the same ratio for an estimator that outputs a total ranking at random. Assumption 2 states that the probability that \((a_{i},a_{j})\in\text{inv}(\hat{\pi}_{\text{sub}};\pi_{\text{null}})\) is the same for all pairs \((a_{i},a_{j})\) with \(j-i=k\) and \((a_{i},a_{j})\not\in\text{inv}(\pi^{\star};\pi_{\text{null}})\). See Appendix F for a formal derivation. **Theorem 11** (refined false discovery control for Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\)).: _Consider the setup of Theorem 10, and suppose additionally that Assumptions 1 and 2 are satisfied. 
The output \(\hat{x}_{\text{stable}}\) from Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\) satisfies the false discovery bound:_ \[\mathbb{E}[\text{FD}(\hat{x}_{\text{stable}},x^{\star})]\leq\sum_{k\in[\text{ rank}(\mathcal{L})],\mathcal{S}_{k}\neq\emptyset}\frac{q_{k}^{2}}{|\mathcal{S}_{k}|(1-2 \alpha)}, \tag{8}\] _where \(q_{k}=\sum_{(u,v)\in\mathcal{S}_{k}}\mathbb{E}[\rho(v,\hat{x}_{\rm sub})-\rho(u, \hat{x}_{\rm sub})]/c_{\mathcal{L}}(u,v)\)._ The quantities in the bound (8) may be readily computed in practice. In particular, each \(\mathcal{S}_{k}\) and \(c_{\mathcal{L}}(\cdot,\cdot)\) depends only on the model poset \(\mathcal{L}\) and each \(q_{k}\) can be approximated as \(q_{k}\approx\frac{1}{B}\sum_{\ell=1}^{B}\sum_{(u,v)\in\mathcal{S}_{k}}\frac{ \rho(v,\hat{x}_{\rm base}(\mathcal{D}^{(\ell)}))-\rho(u,\hat{x}_{\rm base}( \mathcal{D}^{(\ell)}))}{c_{\mathcal{L}}(u,v)}\). We give characterizations of the sets \(\mathcal{S}_{k}\) and \(c_{\mathcal{L}}(\cdot,\cdot)\) for posets corresponding to total ranking, partial ranking, clustering, and causal structure learning in Appendix E.

**Remark 4**.: _Specializing Theorem 11 to the case of variable selection, we arrive at the bound in Theorem 1 of [13]. Specifically, note that for the Boolean poset with the similarity valuation \(\rho_{\text{meet}}\), \(\mathcal{S}_{k}=\emptyset\) for \(k\geq 2\), \(|\mathcal{S}_{1}|=\#\) variables, and \(q_{1}=\sum_{i}\mathbb{E}[\mathbb{I}[\text{variable }i\text{ selected by }\hat{x}_{\rm sub}]]\) is the average number of variables selected by the estimator \(\hat{x}_{\rm sub}\)._

Turning our attention to Examples 8-9, the situation is considerably more complicated with continuous model posets. A result for these two cases under the same setup as in Theorem 10 yields the following bound for \(\alpha\in(0,1/2)\) (see Appendix D): \[\mathbb{E}[\text{FD}(\hat{x}_{\rm stable},\mathbf{x}^{\star})]\ \leq\ \frac{2\alpha+2\sqrt{1-\alpha}}{1-\alpha}\mathbb{E}[\text{rank}(\hat{x}_{\rm sub })]+\mathbb{E}[\sqrt{\text{FD}(\hat{x}_{\rm sub},\mathbf{x}^{\star})}]^{2}. \tag{9}\] The first term in the bound is a function of the average number of discoveries made by the estimator \(\hat{x}_{\rm sub}\), and this term is smaller for \(\alpha\approx 0\). The second term in the bound concerns the quality of the estimator \(\hat{x}_{\rm sub}\). Specifically, note that Jensen's inequality implies \(\mathbb{E}[\sqrt{\text{FD}(\hat{x}_{\rm sub},\mathbf{x}^{\star})}]^{2}\leq \mathbb{E}[\text{FD}(\hat{x}_{\rm sub},\mathbf{x}^{\star})]\), so that the improvement provided by the estimator \(\hat{x}_{\rm stable}\) based on subsampling and model averaging over the estimator \(\hat{x}_{\rm sub}\) that simply employs the base estimator on subsamples is characterized by \(\text{var}(\text{FD}(\hat{x}_{\rm sub},\mathbf{x}^{\star}))\). Thus, the key remaining task as before is to characterize the properties of the estimator \(\hat{x}_{\rm sub}\). However, the difficulty with the continuous examples is that conditions akin to Assumptions 1-2 are substantially more challenging to formulate and analyze at an appropriate level of generality. (One such effort under a limited setting for the case of subspace estimation is described in [24].) It is of interest to develop such a general framework for continuous model posets, and we leave this as a topic for future research.
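As a concrete illustration of Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\), the following sketch specializes the procedure to variable selection, where, as noted earlier in this subsection, it reduces to retaining variables whose selection frequency across subsamples is at least \(1-\alpha\). The subsample model estimates are simulated stand-ins for the outputs \(\hat{x}_{\mathrm{base}}(\mathcal{D}^{(\ell)})\) of an actual base procedure, and all function names are ours for illustration.

```python
import random

def psi_stable(u, v, subsample_models):
    """Empirical Psi_stable for a covering pair (u, v) on the Boolean poset with
    rho = rho_meet: here v = u plus one new variable i, c_L(u, v) = 1, and
    rho(v, m) - rho(u, m) = 1 exactly when the subsample model m contains i."""
    (i,) = v - u
    return 1.0 - sum(i in m for m in subsample_models) / len(subsample_models)

def greedy_stability_selection(p, subsample_models, alpha):
    """Algorithm 1 with Psi = Psi_stable, specialized to variable selection."""
    u = frozenset()                                   # least element: the empty model
    while True:
        candidates = [i for i in range(1, p + 1) if i not in u]
        if not candidates:
            return u
        best = min(candidates, key=lambda i: psi_stable(u, u | {i}, subsample_models))
        if psi_stable(u, u | {best}, subsample_models) > alpha:
            return u                                  # stop: no remaining step is stable enough
        u = u | {best}

# Toy usage: simulate the outputs of a base estimator on B subsamples.  Variables 1-3
# are signal and are selected in most subsample fits; the remaining variables are noise.
random.seed(0)
p, B, alpha = 10, 100, 0.3
subsample_models = [
    {i for i in (1, 2, 3) if random.random() < 0.9}
    | {i for i in range(4, p + 1) if random.random() < 0.2}
    for _ in range(B)
]
print(sorted(greedy_stability_selection(p, subsample_models, alpha)))   # typically [1, 2, 3]
```

For the other discrete posets the same greedy loop applies, with the candidate moves ranging over covering pairs of the current model and with \(\rho\) and \(c_{\mathcal{L}}\) replaced by the poset-specific quantities summarized in Table 1.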
### Model selection based on testing Our second approach to designing a suitable function \(\Psi\) to employ in Algorithm 1 is based on testing the following null hypothesis for each (minimal) covering pair \((u,v)\) of a discrete model poset \(\mathcal{L}\): \[\begin{split} H_{0}^{u,v}:\rho(v,\mathbf{x}^{\star})=\rho(u,\mathbf{x}^{ \star}),\\ \Psi_{\rm test}(u,v)\triangleq\text{p-value corresponding to }H_{0}^{u,v}.\end{split} \tag{10}\] The null hypothesis \(H_{0}^{u,v}\) in (10) states that there is no additional discovery about \(x^{\star}\) in the model \(v\) relative to the model \(u\), and small values of \(\Psi_{\rm test}(u,v)\) provide evidence for rejecting this null hypothesis and accepting the alternative that \(\rho(v,\mathbf{x}^{\star})>\rho(u,\mathbf{x}^{\star})\). When \(\Psi_{\rm test}\) is employed in the context of Algorithm 1 in which we greedily grow a path, each'step' in the path corresponds to a discovery for which we have the'strongest evidence' using the above test. Our next result provides theoretical support for this method. **Theorem 12** (false discovery control for Algorithm 1 with \(\Psi=\Psi_{\rm test}\)).: _Let \((\mathcal{L},\preceq,\text{rank}(\cdot))\) be a graded discrete model poset with integer-valued similarity valuation \(\rho\) and let \(\mathcal{S}\) be an associated set of minimal covering pairs. The output \(\hat{x}_{\rm test}\) of Algorithm 1 with \(\Psi=\Psi_{\rm test}\) satisfies the false discovery bound \(\mathbb{P}\left(\text{FD}(\hat{x}_{\rm test},\mathbf{x}^{\star})>0\right)\leq \alpha|\mathcal{S}|\)._ The multiplicity factor involving the cardinality of the set of minimal covering pairs \(\mathcal{S}\) is akin to a Bonferroni-type correction, and it highlights the significance of identifying a set of minimal covering pairs of small cardinality. We emphasize that although Algorithm 1 with \(\Psi=\Psi_{\rm test}\) proceeds via sequential hypothesis testing, the procedure is applicable to general model classes with no underlying Boolean logical structure; in particular, it is the graded poset structure underlying our framework that facilitates such methodology. As an illustration of the multiplicity factor \(|\mathcal{S}|\) for different model posets, we have that \(|\mathcal{S}|=p(p-1)\) for partial ranking; \(|\mathcal{S}|=\sum_{k=1}^{p-1}\binom{p}{k+1}\sum_{\ell=1}^{k}\binom{k+1}{\ell}\) for clustering; and \(|\mathcal{S}|=\frac{p(p-1)}{2}\) for total ranking. See Appendix E for further details. The graded poset structure of a model class can also yield more powerful model selection procedures than those obtained by the greedy procedure of Algorithm 1. We give one such illustration next in which a collection of model estimates that each exhibit zero false discovery (with high probability) can be 'combined' to derive a more complex model that also exhibits zero false discovery. Formally, a poset \((\mathcal{L},\preceq)\) is said to possess a _join_ if for each \(x,y\in\mathcal{L}\) there exists a \(z\in\mathcal{L}\) satisfying \((i)\)\(z\succeq x,z\succeq y\) and \((ii)\) for any \(w\in\mathcal{L}\) with \(w\succeq x,w\succeq y\), we have \(w\succeq z\); such a \(z\) is called the join of \(x,y\) and posets that possess a join are called _join semi-lattices_ (these are dual to the notion of a meet defined in Section 2). Except for the posets in Examples 4, 6, and 9, the posets in the other examples are join semi-lattices (see Appendix A). 
For a model class that is a join semi-lattice, suppose we are provided estimates \(\hat{x}^{(1)},\ldots,\hat{x}^{(m)}\) of a true model \(x^{\star}\) such that \(\mathrm{FD}(\hat{x}^{(j)},x^{\star})=0,\ j=1,\ldots,m\) (for example, by appealing to greedy methods such as Algorithm 1 or its variants). Appealing to the properties of a similarity valuation, we can conclude that the join \(\hat{x}_{\mathrm{join}}\) of \(\hat{x}^{(1)},\ldots,\hat{x}^{(m)}\) satisfies \(\mathrm{FD}(\hat{x}_{\mathrm{join}},x^{\star})=0\); in general, \(\mathrm{rank}(\hat{x}_{\mathrm{join}})\) is larger than \(\mathrm{rank}(\hat{x}^{(1)}),\ldots,\mathrm{rank}(\hat{x}^{(m)})\), and therefore, this procedure is one way to obtain a more powerful model by combining less powerful ones while still retaining control on the amount of false discovery. The following result formalizes matters. **Proposition 13** (using joins to obtain more powerful models).: _Let \((\mathcal{L},\preceq,\mathrm{rank}(\cdot))\) be a graded discrete model poset that is a join semi-lattice with integer-valued similarity valuation \(\rho\) and let \(\mathcal{S}\) be an associated set of minimal covering pairs. Consider a collection of estimates \(\hat{x}^{(1)},\ldots,\hat{x}^{(m)}\) of a true model \(x^{\star}\) and let \(\hat{x}_{\mathrm{join}}\) denote the join of \(\hat{x}^{(1)},\ldots,\hat{x}^{(m)}\). Suppose for each \(\hat{x}^{(j)},\ j=1,\ldots,m\) there is a path from the least element of \(\mathcal{L}\) to \(\hat{x}^{(j)}\) such that every covering pair \((u,v)\) along the path satisfies \(\Psi_{\mathrm{test}}(u,v)\leq\alpha\). Then we have the false discovery bound \(\mathbb{P}(\mathrm{FD}(\hat{x}_{\mathrm{join}},x^{\star})>0)\leq\alpha| \mathcal{S}|\)._ ## 4 Experiments We describe the results of numerical experiments on synthetic and real data in this section. We employ Algorithm 1 with with both \(\Psi=\Psi_{\mathrm{stable}}\) and \(\Psi=\Psi_{\mathrm{test}}\). For the testing-based approach, the manner in which p-values are obtained is described in the context of each application and we set \(\alpha\) equal to \(0.05/|\mathcal{S}|\) for a given set \(\mathcal{S}\) of minimal covering pairs. For the stability-based approach, we consider \(B=100\) subsamples obtained by partitioning a given dataset \(50\) times into subsamples of equal size and we set \(\alpha=0.3\). To obtain a desired level of expected false discovery with the stability-based approach, we appeal to Theorem 11 as follows. In the bound (8), each \(q_{k}\) can be derived by averaging over subsamples (as explained in the discussion after the statement of Theorem 11) and all the other quantities are known. The values of these \(q_{k}\)'s in turn depend on the model estimates returned by the base procedure \(\hat{x}_{\mathrm{base}}\) employed on the subsamples; in particular, if the estimate is the least element then each \(q_{k}\) equals zero, and as \(\hat{x}_{\mathrm{base}}\) returns models of increasing complexity, the value of each \(q_{k}\) generally increases. Building on this observation, we tune parameters in \(\hat{x}_{\mathrm{base}}\) to return increasingly more complex models until the bound (8) is at the desired level. For causal structure learning we employ Greedy Equivalence Search as our base procedure with tuning via the regularization parameter that controls model complexity [3]. For clustering, we employ \(k\)-means [10] as the base procedure with tuning via the number of clusters. 
For our illustrations with ranking problems (both partial and total) in which we are provided with pairwise comparison data, our base procedure first employs the maximum-likelihood estimator associated to the Bradley-Terry model [2], which returns a vector of positive weights \(\hat{w}\) of dimension equal to the number of items. Using this \(\hat{w}\) we associate numerical values to covering pairs; each covering pair corresponds to increasing the complexity of a model by including a pair of items \((i,j)\) to the inversion set (in total ranking) or to the relation specifying a strict partial order (in partial ranking), and the value we assign is the difference \(\hat{w}_{j}-\hat{w}_{i}\). Our base procedure then constructs a path starting from the least element by greedily adding covering pairs of largest value at each step, provided these values are larger than a regularization parameter \(\lambda>0\); smaller values of \(\lambda\) yield model estimates of larger complexity, while larger values yield estimates of smaller complexity. Finally, for causal structure learning, we restrict our search during the model aggregation phase of Algorithm 1 to paths that yield CPDAG models in which each connected component in the skeleton has diameter at most two; such a restriction facilitates a simple characterization of covering pairs. This restriction is not imposed on the output of the base procedure. Moreover, the true model can be an arbitrary CPDAG. ### Synthetic data We describe experiments with synthetic data using Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\). **Total ranking**: We consider a total ranking problem with \(p=30\) items. We observe \(n\) i.i.d. games between players \(i,j\) with the outcome modeled as \(y_{ij\ell}\sim\texttt{Bernoulli}(w_{i}^{\star}/(w_{i}^{\star}+w_{j}^{\star}))\) for \(\ell=1,\ldots,n\), where \(w^{\star}\in\mathbb{R}_{++}^{p}\) is a feature vector and \(n\in\{200,250,300\}\). We fix \(w^{\star}\) by first defining \(\tilde{w}\in\mathbb{R}_{++}^{p}\) as \(\tilde{w}_{i}=\tau^{i-1},\ i=1,\ldots,p\) for \(\tau\in\{0.97,0.98,0.99\}\), and then setting \(w^{\star}\) equal to a permutation of \(\tilde{w}\) in which we swap the entries \(1,3\), the entries \(8,10\), the entries \(15,17\), the entries \(20,22\), and the entries \(25,27\). Smaller values of \(\tau\) correspond to better-distinguished items, and hence to easier problem instances. The base procedure is tuned such that the expected false discovery in (8) is at most three. **Clustering**: We consider a clustering problem with \(p=20\) variables. The true partition consists of \(12\) clusters with five variables in one cluster, another five variables in a second cluster, and the remaining variables in singleton clusters. The \(p\) variables are independent two-dimensional Gaussians. Each variable in cluster \(i\) has mean \((\mu_{i},0)\) and covariance \(\frac{1}{4}I\); each \(\mu_{i}=i/d\) for \(d\in\{3,3.5,4\}\). Smaller values of \(d\) correspond to better-separated clusters, and hence to easier problem instances. We are provided \(n\) i.i.d. observations of these variables for \(n\in\{40,65,90\}\). The base procedure is tuned such that the expected false discovery in (8) is at most three. **Causal structure learning**: We consider a causal structure learning problem over \(p=10\) variables. 
The true DAG is generated by considering a random total ordering of the variables, drawing directed edges from higher nodes in the ordering to lower nodes independently with probability \(v\in\{0.13,0.18\}\), and defining a linear structural causal model in which each variable is a linear combination of its parents plus independent Gaussian noise with mean zero and variance \(\frac{1}{4}\). The coefficients in the linear combination are drawn uniformly at random from the interval \([0.5,0.7]\). Larger values of \(v\) lead to denser DAGs, and hence to harder problem instances. We obtain \(n\) i.i.d. observations from these models for \(n\in\{1000,1200,1400,1600,1800\}\). The base procedure is tuned such that the expected false discovery in (8) is at most two. For the preceding three problem classes, we compare the performance of our stability-based methodology versus that of a non-subsampled approach in which the base procedure (with suitable regularization) is applied to the entire dataset. For total ranking, the non-subsampled procedure simply extracts the ranking implied by the maximum-likelihood estimator associated to the Bradley-Terry model. For clustering, the non-subsampled approach employs \(k\)-means where the number of clusters is chosen to maximize the average silhouette score [19]. For causal structure learning, the non-subsampled approach applies Greedy Equivalence Search with a regularization parameter chosen based on holdout validation (\(70\%\) of the data is used for training and the remaining \(30\%\) for validation). Figure 2 presents the results of our experiments averaged over \(50\) trials, and as the plots demonstrate, our stability-based methods yield models with smaller false discovery than the corresponding non-subsampled approaches. This reduction in false discovery comes at the expense of a loss in power, which is especially significant for some of the harder problem settings. However, in all cases our stability-based method provides the desired level of control on expected false discovery. Figure 2: Comparing the performance of Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\) versus a non-subsampling approach for total ranking, clustering, and causal structure learning. Each problem setting corresponds to a pair of dots and a connecting line. The comparison is in terms of the amount of false and true discoveries. ### Real data We describe next experiments with real data. **Partial ranking of tennis players**: We consider the task of partially ranking six professional tennis players - Berdych, Djokovic, Federer, Murray, Nadal, and Wawrinka - based on historical head-to-head matches of these players up to the end of 2022. We apply Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\) and with the base procedure tuned such that the expected false discovery in (8) is at most three. The output of our procedure is a rank-nine model given by the partial ranking {Djokovic, Nadal} > {Berdych, Murray, Wawrinka} and {Federer} > {Berdych, Wawrinka}. **Total ranking of educational systems**: We consider the task of totally ordering \(p=15\) OECD countries in reading comprehension based on test results from the Programme for International Student Assessment (PISA). We take the null ranking as the ordering of the countries based on performance in 2015 (see the first row in Table 2), and we wish to update this model based on 2018 test scores (data obtained from [25]), with the number of test scores ranging from 696 to 3414. 
We apply Algorithm 1 with \(\Psi=\Psi_{\text{test}}\) and we obtain p-values by modeling the average test score of each country as a Gaussian. We set \(\alpha=0.05/\frac{p(p-1)}{2}\) (here \(\frac{p(p-1)}{2}\) is the cardinality of a set of minimal covering pairs), which yields the guarantee from Theorem 12 that the estimated model has zero false discovery with probability at least 0.95. The output of our procedure is the rank-nine model given by the total ranking in the second row in Table 2. **Learning causal structure among proteins**: We aim to learn causal relations underlying \(p=11\) phosphoproteins and phospholipids from a mass spectroscopy dataset containing 854 measurements of abundance levels in an observational setting [20]. We apply Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\) and with the base procedure tuned such that the expected false discovery in (8) is at most two. Figure 3 presents the rank-six CPDAG model obtained from our algorithm and compares to the estimates obtained from the literature [14, 20, 26]. Our CPDAG estimate has fewer edges than those in [14, 20, 26], which do not explicitly provide control on false discovery. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline 2015 base ranking & CAN & FIN & IRL & EST & KOR & JPN & NOR & NZL & DEU & POL & SVN & NLD & AUS & SWE & DNK \\ \hline testing approach & FIN & IRL & EST & CAN & KOR & JPN & NOR & NZL & POL & DEU & AUS & SWE & SVN & DNK & NLD \\ \hline \end{tabular} \end{table} Table 2: Ranking of nations according to PISA reading comprehension scores; the first row is the 2015 ranking of 15 OECD countries which serves as the base ranking for our analysis: based on test results in 2018, we update this ranking using Algorithm 1 based on \(\Psi=\Psi_{\text{test}}\) with the result shown in the second row. Figure 3: **left**: CPDAG obtained by Algorithm 1 with \(\Psi=\Psi_{\text{stable}}\); **right**: comparing the edges obtained by our algorithm (shown in the leftmost column) with different causal discovery methods (with indicated reference). The consensus network according to [20] is denoted here by “[20]a” and their reconstructed network by “[20]b”; the authors in [14] apply two methods, and the results are presented by “[14]a” and “[14]b”. Here, “\(-\)” means that the edge direction is not identified. ## 5 Proofs For notational ease, for a covering pair \((u,v)\) and element \(z\) in the poset \(\mathcal{L}\), we define \(f(u,v;z)\triangleq\rho(v,z)-\rho(u,z)\). Recall that \(\mathcal{T}_{\text{null}}\triangleq\{(u,v)\text{ covering pair in }\mathcal{L}\mid\rho(v,x^{\star})=\rho(u,x^{\star})\}\). Our analysis relies on the following lemmas with the proofs presented in Appendix C. **Lemma 14**.: _Fix a discrete model poset \(\mathcal{L}\) with integer-valued similarity valuation \(\rho\). For any model \(x\in\mathcal{L}\) with \((x_{0},\ldots,x_{k})\) being any path from the least element \(x_{0}=x_{least}\) to \(x_{k}=x\), we have that \(\operatorname{FD}(x,x^{\star})\leq\sum_{i=1}^{k}\mathbb{I}[(x_{i-1},x_{i})\in\mathcal{T}_{null}]\).
As a result, we have that \(\mathrm{FD}(x,x^{\star})>0\) implies the existence of some \(i\) for which \((x_{i-1},x_{i})\in\mathcal{T}_{null}\)._ **Lemma 15**.: _For any covering pairs \((u,v)\) and \((x,y)\) with \(v\preceq x\), we cannot have that \(f(u,v;z)=f(x,y;z)\) for all \(z\in\mathcal{L}\)._ ### Proof of Theorem 10 For notational convenience, we let \(\hat{x}_{\mathrm{base}}^{(\ell)}=\hat{x}_{\mathrm{base}}(\mathcal{D}^{(\ell)})\) where \(\{\mathcal{D}^{(\ell)}\}_{\ell=1}^{B}\) are the subsamples of \(\mathcal{D}\). Let \(\hat{x}_{\mathrm{stable}}\) be the output of Algorithm 1 with \(\mathrm{rank}(\hat{x}_{\mathrm{stable}})=\hat{k}\), and let \((x_{0},\ldots,x_{\hat{k}})\) be the associated path from the least element \(x_{0}=x_{\mathrm{least}}\) to \(x_{\hat{k}}=\hat{x}_{\mathrm{stable}}\); we have that \(\frac{1}{B}\sum_{\ell=1}^{B}f(x_{i-1},x_{i};\hat{x}_{\mathrm{base}}^{(\ell)} )/c_{\mathcal{L}}(x_{i-1},x_{i})\geq(1-\alpha)\) for each \(i=1,\ldots,\hat{k}\). Let \(\mathcal{C}\triangleq\{(x_{i-1},x_{i})\mid i=1,\ldots,\hat{k}\}\). From Lemma 14, we also have that \(\mathrm{FD}(\hat{x}_{\mathrm{stable}},x^{\star})\leq|\mathcal{C}\cap\mathcal{ T}_{\mathrm{null}}|\). Combining these observations, we conclude that \(\mathrm{FD}(\hat{x}_{\mathrm{stable}},x^{\star})\leq\sum_{(u,v)\in\mathcal{C} \cap\mathcal{T}_{\mathrm{null}}}\mathbb{I}\left[\frac{1}{B}\sum_{\ell=1}^{B} \frac{f(u,v;\hat{x}_{\mathrm{base}}^{(\ell)})}{c_{\mathcal{L}}(u,v)}\geq 1- \alpha\right]\). Next, we observe that for each covering pair in \(\mathcal{C}\) there exists a covering pair in the minimal set \(\mathcal{S}\) with the values of \(f\) and \(c_{\mathcal{L}}\) remaining the same; moreover, distinct covering pairs in \(\mathcal{C}\) map to distinct covering pairs in \(\mathcal{S}\) from Lemma 15. Thus, we conclude that \(\mathrm{FD}(\hat{x}_{\mathrm{stable}},x^{\star})\leq\sum_{(u,v)\in\mathcal{S} \cap\mathcal{T}_{\mathrm{null}}}\mathbb{I}\left[\frac{1}{B}\sum_{\ell=1}^{B} \frac{f(u,v;\hat{x}_{\mathrm{base}}^{(\ell)})}{c_{\mathcal{L}}(u,v)}\geq 1- \alpha\right]\). We then have the following sequence of steps: \[\begin{split}\mathrm{FD}(\hat{x}_{\mathrm{stable}},x^{\star})& \leq\sum_{\begin{subarray}{c}(u,v)\in\\ \mathcal{S}\cap\mathcal{T}_{\mathrm{null}}\end{subarray}}\mathbb{I}\left[ \frac{1}{B/2}\sum_{\ell=1}^{B/2}\sum_{i\in\{0,1\}}\frac{f(u,v;\hat{x}_{ \mathrm{base}}^{(2\ell-i)})}{c_{\mathcal{L}}(u,v)}\geq 2-2\alpha\right]\\ &\leq\sum_{\begin{subarray}{c}(u,v)\in\\ \mathcal{S}\cap\mathcal{T}_{\mathrm{null}}\end{subarray}}\mathbb{I}\left[ \frac{1}{B/2}\sum_{\ell=1}^{B/2}\prod_{i\in\{0,1\}}\frac{f(u,v;\hat{x}_{ \mathrm{base}}^{(2\ell-i)})}{c_{\mathcal{L}}(u,v)}\geq 1-2\alpha\right].\end{split} \tag{11}\] The second inequality follows from \(ab\geq a+b-1\) for \(a,b\in[0,1]\), where we set \(a=f(u,v;\hat{x}_{\mathrm{base}}^{(2\ell-1)})/c_{\mathcal{L}}(u,v)\) and \(b=f(u,v;\hat{x}_{\mathrm{base}}^{(2\ell)})/c_{\mathcal{L}}(u,v)\), and note that \(f(u,v;z)/c_{\mathcal{L}}(u,v)\in[0,1]\) for any \(z\in\mathcal{L}\). Taking expectations on both sides of the preceding inequality, we finally seek a bound on \(\mathbb{P}\left[\frac{1}{B/2}\sum_{\ell=1}^{B/2}\prod_{i\in\{0,1\}}\frac{f(u,v ;\hat{x}_{\mathrm{base}}^{(2\ell-i)})}{c_{\mathcal{L}}(u,v)}\geq 1-2\alpha\right]\). 
We have that: \[\begin{split}\mathbb{P}\left[\frac{1}{B/2}\sum_{\ell=1}^{B/2}\prod_{i\in\{0,1\}}\frac{f(u,v;\hat{x}_{\mathrm{base}}^{(2\ell-i)})}{c_{\mathcal{L}}(u,v)}\geq 1-2\alpha\right]&\leq\frac{\mathbb{E}\left[\frac{1}{B/2}\sum_{\ell=1}^{B/2}\prod_{i\in\{0,1\}}\frac{f(u,v;\hat{x}_{\mathrm{base}}^{(2\ell-i)})}{c_{\mathcal{L}}(u,v)}\right]}{1-2\alpha}\\ &=\frac{\mathbb{E}\left[f(u,v;\hat{x}_{\mathrm{sub}})\right]^{2}}{c_{\mathcal{L}}(u,v)^{2}(1-2\alpha)}.\end{split} \tag{12}\] Here \(\hat{x}_{\mathrm{sub}}\) represents the estimator corresponding to the base procedure \(\hat{x}_{\mathrm{base}}\) applied to a subsample of \(\mathcal{D}\) of size \(|\mathcal{D}|/2\). The inequality follows from Markov's inequality, and the equality follows by noting that complementary bags are independent and identically distributed. Combining (11) and (12), we obtain the desired result. ### Proof of Theorem 11 We have from Theorem 10 that: \[\mathbb{E}[\mathrm{FD}(\hat{x}_{\mathrm{stable}},x^{\star})]\leq\sum_{k=1}^{\mathrm{rank}(\mathcal{L})}\sum_{(u,v)\in\mathcal{S}_{k}\cap\mathcal{T}_{\mathrm{null}}}\frac{\mathbb{E}[f(u,v;\hat{x}_{\mathrm{sub}})]^{2}}{(1-2\alpha)c_{\mathcal{L}}(u,v)^{2}}.\] Our goal is to bound \(\mathbb{E}[f(u,v;\hat{x}_{\mathrm{sub}})]/c_{\mathcal{L}}(u,v)\) for \((u,v)\in\mathcal{S}_{k}\cap\mathcal{T}_{\mathrm{null}}\). Note that each \(q_{k}\) may be decomposed as \[q_{k}=\sum_{\begin{subarray}{c}(u,v)\in\\ \mathcal{S}_{k}\cap\mathcal{T}_{\mathrm{null}}\end{subarray}}\frac{\mathbb{E}[f(u,v;\hat{x}_{\mathrm{sub}})]}{c_{\mathcal{L}}(u,v)}+\sum_{\begin{subarray}{c}(u,v)\in\\ \mathcal{S}_{k}\setminus\mathcal{T}_{\mathrm{null}}\end{subarray}}\frac{\mathbb{E}[f(u,v;\hat{x}_{\mathrm{sub}})]}{c_{\mathcal{L}}(u,v)}.\] Appealing to Assumption 1, we have that \[q_{k}\geq\left(1+\frac{|\mathcal{S}_{k}\setminus\mathcal{T}_{\text{null}}|}{|\mathcal{S}_{k}\cap\mathcal{T}_{\text{null}}|}\right)\sum_{(u,v)\in\mathcal{S}_{k}\cap\mathcal{T}_{\text{null}}}\frac{\mathbb{E}[f(u,v;\hat{x}_{\text{sub}})]}{c_{\mathcal{L}}(u,v)}.\] Rearranging the terms, we obtain that \[\sum_{(u,v)\in\mathcal{S}_{k}\cap\mathcal{T}_{\text{null}}}\frac{\mathbb{E}[f(u,v;\hat{x}_{\text{sub}})]}{c_{\mathcal{L}}(u,v)}\leq\frac{q_{k}}{|\mathcal{S}_{k}|}|\mathcal{S}_{k}\cap\mathcal{T}_{\text{null}}|.\] Appealing to Assumption 2, we have for each \((u,v)\in\mathcal{S}_{k}\cap\mathcal{T}_{\text{null}}\) that \(\frac{\mathbb{E}[f(u,v;\hat{x}_{\text{sub}})]}{c_{\mathcal{L}}(u,v)}\leq\frac{q_{k}}{|\mathcal{S}_{k}|}\). Plugging this bound into the conclusion of Theorem 10 yields the desired result. ### Proof of Theorem 12 Let \(\hat{x}_{\text{test}}\) be the output of Algorithm 1 with \(\text{rank}(\hat{x}_{\text{test}})=\hat{k}\), and let \((x_{0},\ldots,x_{\hat{k}})\) be the associated path from the least element \(x_{0}=x_{\text{least}}\) to \(x_{\hat{k}}=\hat{x}_{\text{test}}\); we have that \(\Psi_{\text{test}}(x_{i-1},x_{i})\leq\alpha\) for each \(i=1,\ldots,\hat{k}\). Let \(\mathcal{C}\triangleq\{(x_{i-1},x_{i})\ |\ i=1,\ldots,\hat{k}\}\). From Lemma 14, we have that \(\text{FD}(\hat{x}_{\text{test}},x^{\star})>0\) implies the existence of a covering pair \((u,v)\in\mathcal{C}\cap\mathcal{T}_{\text{null}}\) for which \(\Psi_{\text{test}}(u,v)\leq\alpha\). For each covering pair in \(\mathcal{C}\), there exists a covering pair in \(\mathcal{S}\) with the same value of \(\Psi_{\text{test}}\); thus, there exists \((u,v)\in\mathcal{S}\cap\mathcal{T}_{\text{null}}\) such that \(\Psi_{\text{test}}(u,v)\leq\alpha\).
Consequently: \[\mathbb{P}(\text{FD}(\hat{x}_{\text{test}},x^{\star})>0) \leq\mathbb{P}\left(\exists(u,v)\in\mathcal{S}\cap\mathcal{T}_{ \text{null}}\text{ s.t. }\Psi_{\text{test}}(u,v)\leq\alpha\right)\] \[\leq\sum_{(u,v)\in\mathcal{S}\cap\mathcal{T}_{\text{null}}} \mathbb{P}(\Psi_{\text{test}}(u,v)\leq\alpha)\leq\alpha|\mathcal{S}|. \tag{13}\] Here the second inequality follows from the union bound and the final inequality follows from the fact that the random variable \(\Psi_{\text{test}}(u,v)\) is a valid p-value under the null hypothesis \(\rho(v,x^{\star})=\rho(u,x^{\star})\). ### Proof of Proposition 13 For each \(\hat{x}^{(j)},\ j=1,\ldots,m\), we are given that there is a path from \(x_{\text{least}}\) to \(\hat{x}^{(j)}\) such that \(\Psi_{\text{test}}\) is bounded by \(\alpha\) for each covering pair in the path; let \(\mathcal{C}^{(j)}\) be the set of these covering pairs. As described in Section 3.3 in the discussion preceding Proposition 13, \(\text{FD}(\hat{x}_{\text{join}},x^{\star})>0\) implies that \(\text{FD}(\hat{x}^{(j)},x^{\star})>0\) for some \(j=1,\ldots,m\), which in turn implies from Lemma 14 the existence of a covering pair \((u,v)\in\mathcal{C}^{(j)}\cap\mathcal{T}_{\text{null}}\) for some \(j=1,\ldots,m\). Following the same logic as in the proof of Theorem 12, we conclude that \(\text{FD}(\hat{x}_{\text{join}},x^{\star})>0\) implies the existence of \((u,v)\in\mathcal{S}\cap\mathcal{T}_{\text{null}}\) such that \(\Psi_{\text{test}}(u,v)\leq\alpha\). Using the same reasoning as in (13), we have the desired conclusion. ## 6 Discussion We present a general framework to endow a collection of models with poset structure. This framework yields a systematic approach for quantifying model complexity and false positive error in an array of complex model selection tasks in which models are not characterized by Boolean logical structure (such as in variable selection). Moreover, we develop methodology for controlling false positive error in general model selection problems over posets, and we describe experimental results that demonstrate the utility of our framework. We finally discuss some future research questions that arise from our work. On the mathematical front, a basic open question is to characterize fundamental tradeoffs between false positive and false negative errors that are achievable by any procedure in model selection over a general poset; this would generalize the Neyman-Pearson lemma on optimal procedures for testing between two hypotheses. On the computational and methodological front, it is of interest to develop new methods to control false positive error as well as false discovery rates, including in settings involving continuous model posets. ## Acknowledgements We thank Marina Meila and Lior Pachter for insightful conversations. PB received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement No. 786461). VC was supported in part by Air Force Office of Scientific Research grant FA9550-20-1-0320 and by National Science Foundation grant DMS 2113724.
2308.10413
Mechanisms that play a game, not toss a coin
Randomized mechanisms can have good normative properties compared to their deterministic counterparts. However, randomized mechanisms are problematic in several ways such as in their verifiability. We propose here to derandomize such mechanisms by having agents play a game instead of tossing a coin. The game is designed so an agent's best action is to play randomly, and this play then injects ``randomness'' into the mechanism. This derandomization retains many of the good normative properties of the original randomized mechanism but gives a mechanism that is deterministic and easy, for instance, to audit. We consider three related methods to derandomize randomized mechanisms in six different domains: voting, facility location, task allocation, school choice, peer selection, and resource allocation. We propose a number of novel derandomized mechanisms for these six domains with good normative properties. Each mechanism has a mixed Nash equilibrium in which agents play a modular arithmetic game with a uniform mixed strategy. In all but one mixed Nash equilibrium, agents report their preferences over the original problem sincerely. The derandomized methods are thus ``quasi-strategy proof''. In one domain, we additionally show that a new and desirable normative property emerges as a result of derandomization.
Toby Walsh
2023-08-21T01:43:08Z
http://arxiv.org/abs/2308.10413v2
# Mechanisms that play a game, not toss a coin ###### Abstract Randomized mechanisms can have good normative properties compared to their deterministic counterparts. However, randomized mechanisms are problematic in several ways such as in their verifiability. We propose here to derandomize such mechanisms by having agents play a game instead of tossing a coin. The game is designed so an agent's best action is to play randomly, and this play then injects "randomness" into the mechanism. This derandomization retains many of the good normative properties of the original randomized mechanism but gives a mechanism that is deterministic and easy, for instance, to audit. We consider three related methods to derandomize randomized mechanisms in six different domains: voting, facility location, task allocation, school choice, peer selection, and resource allocation. We propose a number of novel derandomized mechanisms for these six domains with good normative properties. Each mechanism has a mixed Nash equilibrium in which agents play a modular arithmetic game with a uniform mixed strategy. In all but one mixed Nash equilibrium, agents report their preferences over the original problem sincerely. The derandomized methods are thus "quasi-strategy proof". In one domain, we additionally show that a new and desirable normative property emerges as a result of derandomization. ## Introduction In many social choice problems, deterministic mechanisms can struggle to choose between (perhaps essentially equivalent) alternatives without impacting desirable normative properties like fairness or strategy proofness (e.g. when breaking ties, resolving Condorcet cycles or allocating a contested and indivisible item). One attractive escape is to introduce randomness into the mechanism to make such choices. For example, no deterministic mechanism for house allocation with three or more houses is Pareto efficient, strategy proof and anonymous Zhou (1990). Note that two of these properties can be achieved (e.g. the serial dictatorship mechanism is Pareto efficient and strategy proof but not anonymous). On the other hand, if we introduce randomness into mechanisms, we can achieve all three properties. For instance, the random priority mechanism is (ex post) Pareto efficient, strategy proof and anonymous. As a second example, no deterministic and strategy proof mechanism for locating a facility on the line can do better than 2-approximate the maximum cost. However, randomized mechanisms exist that can do better, \(3/2\)-approximating this cost Procaccia and Tennenholtz (2013). Even though randomization can lead to good normative properties, randomized mechanisms are not universally liked. There are a number of challenges that randomization introduces. First, true rather than pseudo randomness is difficult to obtain. We typically require access to some external data source to provide a true source of random bits. Second, if we are making what is a one-off decision then it is inherently difficult to demonstrate that the randomized mechanism was fair. Third, even where decision making is repeated, it can be difficult to audit fairness. We may, for instance, require many executions of the mechanism to have high confidence in its fairness. Fourth, randomization can introduce undesirable computational complexity. For instance, randomness can make it computationally intractable to compute the probability distribution over outcomes (e.g. Saban and Sethuraman (2013)).
In this paper, we suggest an alternative solution that tackles many of these challenges. Rather than introduce randomness into the mechanism, we stick with deterministic mechanisms but add a game where equilibrium behaviour for agents is to play randomly. We can thereby inject randomness into a deterministic mechanism through the agents' play. With such deterministic mechanisms, we can retain many of the advantages of tossing a coin, and avoid many of the disadvantages. In addition, we also demonstrate that such derandomization can even enhance the normative properties achieved. For instance, we propose a new peer selection mechanism where agents always have an incentive to participate, irrespective of how the other participants vote. ## Two simple examples We illustrate this idea of derandomizing mechanisms with two simple examples drawn from some classic problems in social choice: voting and facility location. We will then consider four more complex domains where both the mechanisms and the analysis are significantly more complex. These four additional domains will be task allocation, school choice, peer selection, and resource allocation. ### Random Dictator mechanism We first consider one of the most fundamental problems in social choice, designing mechanisms for voting with good normative properties. As is well known, we quickly run into a wide range of impossibility results. For instance, with three or more candidates, any voting rule that is surjective and strategy proof must also be dictatorial Gibbard (1973); Satterthwaite (1975). One escape from such impossibility results is randomization. For example, the random dictator mechanism is surjective and strategy proof but the result is not decided by just one voter (i.e. it is not dictatorial). Indeed, it is the only rule that is strategy proof and ex post efficient (i.e. never gives positive probability on Pareto-dominated alternatives) Gibbard (1977). However, voters may not be too keen to see this mechanism being used. The chair might tell a voter: "Your preferred candidate lost because some other voter was randomly chosen as dictator", and the voter might have little option but to trust that the chair was fair. We propose instead the following derandomization of the random dictator mechanism. Agents submit an integer in \([0,n)\), along with their preferred winner. Let \(j\) be the sum of these integers mod \(n\). The dictator is chosen to be the preferred winner of the \(j+1\)th voter. This derandomization of the random dictator mechanism is not strategy proof. Any voter can ensure that they are the dictator by choosing a suitable integer. However, there is a mixed strategy Nash equilibrium in which agents choose an integer uniformly at random, as well as reporting sincerely their most preferred candidate to win. A nice feature of this derandomization (which we will observe in almost every example explored in this paper) is that an agent's ability to manipulate the outcome is limited to this modular arithmetic game. It is in their best interests to declare their preferences over outcomes (in this case, their preferred winner) sincerely.
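A minimal sketch of this derandomized random dictator mechanism is given below; the function name and the input format (a list of `(integer, preferred_winner)` reports) are our own illustrative choices rather than anything prescribed by the mechanism.

```python
def derandomized_random_dictator(reports):
    """reports: list of (integer, preferred_winner) pairs, one per voter.
    Each integer should lie in [0, n) where n is the number of voters."""
    n = len(reports)
    # Sum the submitted integers modulo n to decide who acts as dictator.
    j = sum(integer for integer, _ in reports) % n
    # The (j+1)th voter's preferred winner is elected (0-indexed as reports[j]).
    return reports[j][1]

# Example: three voters; the game selects voter (2 + 0 + 1) % 3 = 0,
# so candidate "A" wins.
print(derandomized_random_dictator([(2, "A"), (0, "B"), (1, "B")]))
```

In the mixed equilibrium described above, each voter draws their integer uniformly from \([0,n)\), so the chosen dictator is uniform over the voters, just as in the randomized mechanism.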
### Left-Right-Middle mechanism We turn next to the classic problem of facility location, and show how we can return better, higher quality solutions using derandomization. When locating a single facility on the line, no deterministic and strategy proof mechanism can do better than 2-approximate the maximum cost an agent travels Procaccia and Tennenholtz (2013). However, the randomized left-right-middle (LRM) mechanism 3/2-approximates the maximum cost, and this is optimal as no randomized and strategy proof mechanism can do better Procaccia and Tennenholtz (2013). The LRM mechanism selects the leftmost agent with probability 1/4, the midpoint between leftmost and rightmost agents with probability 1/2, and the rightmost agent again with probability 1/4. We propose the following derandomization of the LRM mechanism. Agents submit an integer between 0 and 3, along with their location. If the sum of the integers modulo 4 is 0 then the facility is located at the leftmost agent. If the sum of the integers modulo 4 is 1 or 2 then the facility is located at the midpoint between the leftmost and rightmost agents. Otherwise, the sum of the integers modulo 4 is 3 and the facility is located at the rightmost agent. This derandomized facility location mechanism is not strategy proof. Suppose, for example, that I am the rightmost agent and I know the other reports. I can ensure the facility is located at my location by submitting a suitable integer. However, there is a mixed strategy Nash equilibrium in which agents choose an integer between 0 and 3 uniformly at random, as well as reporting their sincere location. The expected maximum cost of this mixed strategy is 3/2 times the optimal maximum cost. This is better than that obtained by the best deterministic and strategy proof mechanism. These two simple examples have illustrated some of the basic ideas in derandomizing randomized mechanisms. We now apply derandomization to four other domains where the analysis is more complex. These examples uncover three different methods to derandomize a randomized mechanism. In the first method ("game-first"), we play a modular arithmetic game to pick a random "seed". This is then applied to the original randomized mechanism. In the second method ("game-last"), we apply a randomized mechanism to generate a probabilistic outcome. We then play a modular arithmetic game to convert this into a discrete ex post outcome. And in the third method ("game-interleaved"), we interleave playing a modular arithmetic game with applying the mechanism.
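Before turning to the more complex domains, a short sketch of the derandomized LRM mechanism described above may be useful; as before, the function name and report format are our own illustrative choices.

```python
def derandomized_lrm(reports):
    """reports: list of (integer, location) pairs, one per agent,
    with each integer in {0, 1, 2, 3}."""
    locations = [loc for _, loc in reports]
    left, right = min(locations), max(locations)
    s = sum(integer for integer, _ in reports) % 4
    if s == 0:
        return left                  # leftmost agent (probability 1/4 in the mixed equilibrium)
    elif s in (1, 2):
        return (left + right) / 2    # midpoint (probability 1/2)
    else:
        return right                 # rightmost agent (probability 1/4)

# Example: agents at 0.0, 0.4 and 1.0 whose integers sum to 5 (5 % 4 = 1),
# so the facility is placed at the midpoint 0.5.
print(derandomized_lrm([(3, 0.0), (0, 0.4), (2, 1.0)]))
```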
## Task allocation The first more complex domain that we consider is task allocation. There are \(m\) tasks that need to be allocated to 2 agents. Agent \(i\) declares that task \(j\) will take time \(t_{j}^{i}\). The goal is to allocate tasks to agents to minimize the makespan (the completion time of the last task to either agent). To compensate agents for performing a task, agents receive a payment. The payment is in hours of work, and the overall utility of an agent is this payment less the actual amount of work performed. The mechanism design problem here is to devise a mechanism which approximates well the optimal makespan and incentivizes agents to report the time it will take them to execute each task sincerely. Nisan and Ronen prove that no deterministic mechanism can do better than 2-approximate the makespan, and that a simple VCG-style min-work mechanism that allocates each task to the quickest agent, paying them the time that would be taken by the other agent to perform the task, is strategy proof and achieves this 2-approximation Nisan and Ronen (2001). They also prove that randomization can improve this approximation ratio in expectation. In particular, they show that a biased min-work mechanism provides a 7/4-approximation of the optimal makespan in expectation when given \(m\) random bits drawn uniformly. This mechanism is strongly truthful (i.e. even if we know the random bits, it remains strategy proof). We denote these random bits by \(b_{j}\). \begin{tabular}{l l} BiasedMinWork\((m,t_{j}^{i},b_{j})\) & \\ 1 & \(a_{1},a_{2}\leftarrow\{\}\) & _; task allocation_ \\ 2 & \(p_{1},p_{2}\gets 0\) & _; payments_ \\ 3 & **for**\(j=1\) **to**\(m\) & _; iterate over tasks_ \\ 4 & **do**\(i\gets 1+b_{j},i^{\prime}\gets 3-i\) & _; pick a permutation_ \\ 5 & **if**\(t_{j}^{i}\leq\frac{4}{3}t_{j}^{i^{\prime}}\) & _; pick biased smallest_ \\ 6 & **then**\(a_{i}\gets a_{i}\cup\{j\},p_{i}\gets p_{i}+\frac{4}{3}t_{j}^{i^{\prime}}\) \\ 7 & **else**\(a_{i^{\prime}}\gets a_{i^{\prime}}\cup\{j\},p_{i^{\prime}}\gets p_{i^{\prime}}+\frac{3}{4}t_{j}^{i}\) \\ 8 & **return**\(a_{1},a_{2},p_{1},p_{2}\) \\ \end{tabular} To derandomize this biased min-work mechanism, we have agents set the bits \(b_{j}\) by means of a simple parity game. In particular, we suppose the mechanism operates in \(m\) rounds. In the \(j\)th round, both agents submit their time to complete task \(j\). We additionally suppose they now submit a single bit, 0 or 1. We set \(b_{j}\) to be the xor of these two bits. This example thus illustrates the "game-interleaved" method to derandomize a mechanism. **Theorem 1**: _This derandomized biased min-work mechanism has a mixed subgame perfect Nash equilibrium in which agents submit bits uniformly at random, and sincerely report their task times. This \(\frac{7}{4}\)-approximates the optimal makespan in expectation._ **Proof:** Consider agent 1 in round \(j\). We say that agent 1 wins the round iff \(b_{j}=0\). There are two cases. In the first, \(t_{j}^{1}\leq\frac{4}{3}t_{j}^{2}\). If agent 1 wins this round (that is, \(b_{j}=0\)), then agent 1 incurs a cost of \(t_{j}^{1}\) but receives a greater or equal payment of \(\frac{4}{3}t_{j}^{2}\). Agent 2, on the other hand, loses this round, incurs no cost but receives no payment. Suppose agent 2 instead wins the round (that is, \(b_{j}=1\)). There are two subcases. In the first subcase, \(\frac{3}{4}t_{j}^{1}\leq t_{j}^{2}\leq\frac{4}{3}t_{j}^{1}\). Agent 1 is now not allocated the task and agent 1 therefore has no additional cost or payment. However, agent 2 is allocated the task. Agent 2 receives a payment of \(\frac{4}{3}t_{j}^{1}\) which is greater than their cost. Hence, in the first subcase, both agents 1 and 2 want to win the round. In the second subcase, \(t_{j}^{2}>\frac{4}{3}t_{j}^{1}\) and agent 1 is allocated task \(j\). Their payment is \(\frac{3}{4}t_{j}^{2}\). This is strictly greater than their cost, \(t_{j}^{1}\). However, it is a smaller payment than when agent 1 wins the round. Hence, it was desirable for both agents to have won this round. The other case, in which \(t_{j}^{1}>\frac{4}{3}t_{j}^{2}\), is dual. The mechanism treats agents identically so agent 2 therefore also wants to win each round. It follows that the mixed Nash equilibrium has both agents submitting bits uniformly at random. The biased min-work mechanism is strongly truthful so that, even if the result of the parity game is known, agents have no incentive to misreport their task time. As the mixed subgame perfect Nash equilibrium has agents winning each round with equal probability, the mechanism returns the same distribution of task allocations as the randomized and biased min-work mechanism. Hence, it \(\frac{7}{4}\)-approximates the optimal makespan in expectation.
\(\diamond\) ## School choice We turn next to a very practical social choice problem that impacts citizens of many different countries. In school choice, we consider the two sided matching problem in which schools and students choose each other (e.g. Abdulkadiroglu and Sonmez (2003); Ergin and Sonmez (2006); Abdulkadiroglu et al. (2005)). One of the most popular mechanisms for school choice in every day use around the world is the student-proposing deferred acceptance (DA) mechanism of Gale and Shapley (1962). This has many desirable normative properties such as strategy proofness for the students (but not schools) and stability of the solution. One issue with the DA mechanism is that it supposes schools have a complete ordering over students. In practice, schools often only have broad preferences over students. For example, those with siblings at the school might be strictly preferred to those merely within the school district, and these two groups might be strictly preferred to those outside the school district. However, within each group, schools might be indifferent between students. It is therefore common to order students within each group using a random lottery. To derandomize the DA mechanism which uses such a random lottery, we could have the \(n\) students in a particular group instead play a modular arithmetic game by submitting an integer in \([0,n!)\). We then use the result of this game to pick one of the \(n!\) priority orderings of students in this group via an inverse Lehmer code. This school choice example thus illustrates the game-first method to derandomize a randomized mechanism: we first play a modular arithmetic game to construct a random "seed" (ordering) which is then used in the second step by the original mechanism. While this game is polynomial, requiring students to submit just \(O(n\log n)\) bits, it may nevertheless be prohibitively expensive. For instance, with 1000 students to order in a group, we would require a student to submit an integer with several thousand decimal digits (as \(1000!\approx 4.0\) x \(10^{2568}\)). We propose a more efficient mechanism where, instead of each student submitting \(O(n\log n)\) bits, each of the \(n\) students submits only \(O(\log n)\) bits. These are then combined to form a (random) \(O(n\log n)\) priority order. The mechanism differs on whether \(n\) is odd or even. We suppose students are numbered from 0 to \(n-1\). If \(n=2k\), student 0 submits an integer \(b_{0}\in[0,n)\), each student \(i\in[1,n-1)\) submits two integers, \(a_{i}\in[0,i]\) and \(b_{i}\in[0,n-i]\), and the final student \(n-1\) submits an integer \(a_{n-1}\in[0,n)\). We construct a permutation ordering as follows. The first student in the ordering is \((a_{n-1}+b_{0})\) mod \(n\). The second student is then the \((a_{n-2}+b_{1})\) mod \((n-1)\) largest remaining student (counting from zero). The third student in the ordering is then the \((a_{n-3}+b_{2})\) mod \((n-2)\) largest remaining student, and so on. If \(n=2k+1\), student 0 submits two integers \(a_{0}\in[0,k]\) and \(b_{0}\in[0,n)\), each student \(i\in[1,n-1)\) submits two integers, \(a_{i}\in[0,i]\) and \(b_{i}\in[0,n-i]\), and the final student \(n-1\) submits two integers \(a_{n-1}\in[0,n)\) and \(b_{n-1}\in[0,k]\). We construct a permutation from these as follows: the first student in the permutation is \((a_{n-1}+b_{0})\) mod \(n\). The second student is then the \((a_{n-2}+b_{1})\) mod \((n-1)\) largest remaining student (counting from zero), and so on. 
There is, however, one exception in the exact middle of the permutation as the two integers being added together would otherwise be submitted by the same agent. More precisely, the \(k+1\)th student in the permutation is computed not as the \((a_{k}+b_{k})\) mod \((k+1)\) largest remaining student but instead as the \((a_{k}+b_{k}+a_{0}+b_{n-1})\) mod \((k+1)\) largest. In the second stage of our derandomized mechanism, we run the regular DA mechanism using this priority order to break ties that a school has within a group. This derandomized mechanism inherits the stability of the underlying DA mechanism. In addition, whilst students can act strategically, their strategic behaviours are limited to how they play the modular arithmetic game. **Theorem 2**: _The derandomized DA mechanism has a mixed Nash equilibrium in which students in the first stage select integers with uniform probability, and then select schools in the second stage sincerely. This equilibrium corresponds to a probability distribution over ex post stable matchings._ **Proof:** Suppose all but one student plays this uniform mixed strategy and picks truthfully in the second stage. Then, this one student gets the same expected return irrespective of the integers that they play. Note that, once the priority ordering has been selected, students have, as in the regular DA mechanism, no incentive to misreport. Hence, this uniform mixed strategy is a Nash equilibrium and is a distribution over ex post stable matchings. \(\diamond\)
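To make the first-stage lottery concrete, the sketch below shows one way a group's modular arithmetic game could be turned into a priority ordering via an inverse Lehmer code. We show the simpler scheme described earlier, in which each of the \(n\) students submits an integer in \([0,n!)\); the more communication-efficient scheme is omitted for brevity, and the function name is our own illustrative choice.

```python
import math

def permutation_from_game(integers):
    """Each of the n players submits an integer in [0, n!).  Their sum modulo n!
    is decoded into a priority ordering via an inverse Lehmer code."""
    n = len(integers)
    code = sum(integers) % math.factorial(n)
    # Inverse Lehmer code: peel off mixed-radix digits of `code` and use each digit
    # to pick the next player from those still remaining.
    remaining = list(range(n))
    ordering = []
    for position in range(n, 0, -1):
        radix = math.factorial(position - 1)
        digit, code = divmod(code, radix)
        ordering.append(remaining.pop(digit))
    return ordering

# Example with three players: the submitted integers sum to 8, and 8 % 3! = 2
# decodes to the priority ordering [1, 0, 2].
print(permutation_from_game([5, 2, 1]))
```

The same decoding step can serve the derandomized random sequential elimination and random priority mechanisms described later, which also map a sum modulo \(n!\) to an ordering of agents.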
## Peer selection We consider next the peer selection problem in which agents have to choose one amongst themselves to receive a prize (e.g. Alon et al. (2011); Merrifield and Saari (2009); Aziz et al. (2016, 2019)). For example, a committee might want to choose one person within the committee to be chair. As a second example, a school class might want to choose a class representative. As a third example, the members of an academic society might want to choose a member to receive a medal. We propose here a new peer selection mechanism with a novel normative property: each one of the agents is guaranteed to have a say in the winner. That is, irrespective of how the other agents vote, every agent can change the winner of the prize. This peer selection mechanism is based on the idea of sequentially eliminating agents until just one is left who wins the prize. Our results about this sequential elimination mechanism easily generalize from the peer selection setting where \(n\) agents vote over (the same) \(n\) candidates to the more general voting setting where \(n\) agents vote over \(m\) candidates, and \(m\) may be different to \(n\). We first consider a deterministic peer selection mechanism in which agents successively eliminate candidates. This mechanism lacks the desirable normative property of anonymity (i.e. permuting the names of the agents changes the outcome). We therefore randomize it to give anonymity. We will then derandomize this mechanism to give a new deterministic and anonymous mechanism. This derandomized mechanism has the property that, even if some agent has a large majority support, a dissenting agent still has an incentive to participate and change the outcome. This example is an instance of the "game-first" method for derandomizing a randomized mechanism in which we play a modular arithmetic game first to select a random "seed". ### Sequential elimination mechanism We start by considering the strategic behaviour of agents with the simple deterministic sequential elimination (SE) mechanism. We will need this equilibrium result to discuss the equilibria of the derandomized mechanism that we will shortly introduce. The SE mechanism starts with all agents as possible winners, and then has the agents in a given order eliminate one of the remaining agents from the set of possible winners until only a single agent remains. We suppose the \(n\) agents have strict preferences so there is, for example, always a unique least preferred agent to eliminate. This SE mechanism is in some sense a dual of the dictatorship mechanism. In the dictatorship mechanism, a particular agent picks an agent as winner, whilst in the SE mechanism, particular agents eliminate agents from winning. Whilst the dictatorship mechanism is strategy proof, sequential elimination is not. An agent may not eliminate their worst ranked agent if a later agent will. Strategic play is, however, easily computed. Given a fixed elimination ordering, we can view the SE mechanism as a repeated game. Supposing agents have complete information about the elimination ordering and other preferences, the subgame perfect Nash equilibrium of this game can be computed using backward induction. However, such a computation is exponential in \(m\). There is, however, a simple linear time method to compute the subgame perfect Nash equilibrium. We simply reverse the elimination ordering and play the game backwards. It is easy to see informally why this is the case. Agents can play strategically to ensure that the last agent to be eliminated is the least preferred agent of the last agent in the elimination ordering. The last agent will surely want this agent to be eliminated. Therefore earlier agents in the elimination order might strategically not eliminate this agent, even if it is also their least preferred agent. An early agent will profit by eliminating some more preferred agent, safe in the knowledge that their least preferred agent will be eliminated at the end of the game. Similarly, if we discount this agent, the agents can play strategically to ensure that the penultimate agent to be eliminated is the least preferred agent of the penultimate agent in the elimination ordering, and so on. Indeed, all subgame perfect Nash equilibria return the same winner, the one computed by this reversed computation. **Theorem 3**: _All subgame perfect Nash equilibria of the SE mechanism game given the elimination ordering \(\pi\) of agents have the same winner, and this is found in \(O(m)\) time by computing \(elim(rev(\pi))\) where \(rev\) reverses the elimination ordering, and \(elim\) has the agents in turn eliminating their least preferred remaining agent._ **Proof:** We prove a strictly more general result considering the game where we eliminate \(k\) agents to leave a set of \(m-k\) winners. To compare outcomes with multiple winners (i.e. \(m-k>1\)), we suppose agents have additive utilities over sets of agents. To show that the winners of the subgame perfect Nash equilibrium of this SE game are computed by \(elim(rev(\pi))\), we use induction on the number of elimination steps \(k\). In the base case, \(k=1\) and the optimal play for the single agent that eliminates a single agent is computed by \(elim(rev(\pi))\). In the step case, we suppose the induction hypothesis holds for \(k\) (\(k\geq 1\)) elimination steps and demonstrate that it holds for \(k+1\) such steps.
Consider the last agent eliminated in the final \(k+1\)th step. Suppose the last agent eliminated is not the least preferred agent of the agent eliminating agents at this step. There are two cases. In the first case, the least preferred agent survives to be in the winning set. The final step would be dominated by eliminating this least preferred agent contradicting this being subgame perfect. In the second case, the least preferred agent has already been eliminated before the final step. We can then swap the final elimination with the elimination of this least preferred agent and construct an equivalent game in which the final elimination is the least preferred agent of this agent. We then appeal to the induction hypothesis. To demonstrate that the winner is unique, suppose there are two subgame perfect Nash equilibria with different winners. Consider the agents in common eliminated in both games. Without loss of generality, we permute the eliminations in one game so that these are in the same order as in the second game. Consider next the first agent in the first game that eliminates the winner of the second game. Recall that we assume agents have strict preferences so this agent will strictly prefer one of these two different outcomes. But this means one of these eliminations is dominated which contradicts that this is a subgame perfect Nash equilibrium. Note that there can be multiple subgame perfect Nash equilibria that give the same winner. For example, if in one equilibrium, agent 1 eliminates agent \(a\), and agent 2 eliminates agent \(b\), and both agents 1 and 2 have agents \(b\) and then \(a\) at the end of their preference ordering, then there is another equilibrium with the same winner in which agent 1 eliminates agent \(b\) and agent 2 eliminates agent \(a\). \(\diamond\) Note that this result about computing the subgame perfect Nash equilibrium of the SE mechanism also holds for the more general voting game in which the number of candidates may be different to the number of voters, and not just to the peer selection game in which it is equal. We also contrast this result with the unique subgame perfect Nash equilibrium of the sequential allocation mechanism in the fair division game with indivisible items Kalinowski et al. (2013); Kohler and Chandrasekaran (1971). With two agents, this can also be computed by reversing the picking sequence. However, the computation of the subgame perfect Nash equilibrium in the fair division game also reverses the preferences so that agents are allocated their least preferred unallocated items remaining rather than their most preferred. In this fair division game, agents strategically pick to ensure that the last remaining item is the last agent's least preferred item. And if we discount this item, the last but one remaining item is that agent's least preferred item, and so on. ### Random Sequential Elimination mechanism The SE mechanism that we just considered is not anonymous. It treats agents differently according to where they appear in the elimination ordering. We can make it anonymous by randomizing over the agents. In particular, the random sequential elimination (RSE) mechanism picks a random order of agents and then has agents eliminate agents one by one using this elimination ordering until just one agent remains. This agent is declared the winner. This RSE mechanism is a dual of the random dictator mechanism which has a random agent select (rather than eliminate) the winner. We can derandomize RSE using a "game-first" method. 
In the first stage, we have the \(n\) agents play a modular arithmetic game in which each submits an integer in \([0,n!)\). We use the result of this game to pick one of the \(n!\) priority orderings of agents via an inverse Lehmer code. In the second stage, we eliminate agents using this ordering. With this derandomized RSE mechanism, agents have only limited options for strategic play. In particular, the voting game for the derandomized RSE mechanism has a mixed subgame perfect Nash equilibrium in which agents pick random integers uniformly, and then eliminate their least preferred remaining agent in reverse order. **Theorem 4**: _The voting game of the derandomized RSE mechanism has a mixed subgame perfect Nash equilibrium in which agents pick numbers uniformly at random in the first stage, and eliminate their least preferred agent remaining in reverse order in the second stage._ **Proof:** Suppose all but one agent plays this uniform mixed strategy. Then, in the first stage, an agent gets the same expected return irrespective of the integer that they play. Note that, once the first stage has been played, the elimination order for agents is fixed and the subgame perfect Nash equilibrium for the second stage is to pick the least preferred remaining agent with the reverse elimination order. \(\diamond\) A novel property of this mechanism is that agents always have an incentive to participate irrespective of the votes of the other agents. We say that a peer selection mechanism over two or more agents is **responsive** iff, regardless of the votes of the other agents, there exist two different ways that any agent can vote with different outcomes. Note that the SE mechanism is not responsive as the last agent in the elimination order cannot change the outcome. The derandomized RSE mechanism is, on the other hand, responsive. **Theorem 5**: _The derandomized RSE mechanism is responsive._ **Proof:** Consider the first elimination round and any agent. There are two cases. In the first case, the agent is chosen to perform the first elimination. By changing the agent that is eliminated to be the current winner, the overall winner must change. In the second case, the agent is not chosen to perform the first elimination. Suppose this agent submits a different integer to ensure that they perform the first elimination. If the agent now eliminates the current winner, then the overall winner must again change. \(\diamond\) Another desirable normative property in peer selection is impartiality Holzman and Moulin (2013). A peer selection mechanism is **impartial** iff an agent cannot influence whether they win or not. For example, a randomized mechanism that is impartial partitions agents into two sets, has each set vote on the other, and then randomly chooses between the two possible winners. We can derandomize this mechanism without violating impartiality by having the agents who are not one of the two possible winners playing a parity game to pick between the two possible winners. The resulting "game-last" mechanism is impartial but is not responsive. On the other hand, the derandomized RSE mechanism is responsive but not impartial. However, this is to be expected as impartiality and responsiveness are impossible to achieve simultaneously. **Theorem 6**: _No deterministic peer selection mechanism is both responsive and impartial._ **Proof:** With two agents, the only impartial mechanism selects a fixed winner. This is not responsive. Consider then three or more agents, and any reports for these agents.
Pick any agent. Suppose this agent wins. If the mechanism is responsive, there must be some other report for this agent that changes the winner. Consider the agent changing from this new report to its original report. This change violates impartiality. This is a contradiction. Hence, the assumption that the agent wins is false. But this is true for every possible agent. Thus, no agent can win. \(\diamond\) We end this section with some important related work. Bouveret _et al._ previously studied the SE mechanism for the more general voting setting Bouveret et al. (2017). They argue that the SE mechanism has low communication complexity and, with a good elimination ordering, can have good properties such as returning the Borda winner. Here we considered such a mechanism where the elimination ordering is chosen randomly. The resulting mechanism is now anonymous (as is the derandomized version too). ## Resource allocation Our final application domain is resource allocation where we consider (randomized) mechanisms for the allocation of indivisible items. We have \(n\) agents who have preferences over \(m\) indivisible items. Our goal is to allocate items whole to agents according to these preferences. In this setting, randomization allows us to deal with contested items fairly. For example, rather than unilaterally give a contested item to one agent, we could toss a coin to decide which agent gets it. Two of the most prominent randomized mechanisms with good normative properties for this domain are the probabilistic serial (PS) and random priority (RP) mechanisms. ### Probabilistic Serial mechanism In the probabilistic serial (PS) mechanism agents simultaneously "eat" at a constant speed their most preferred remaining item Bogomolnaia and Moulin (2001). This gives a randomized or probabilistic assignment which can easily be realized as a probability distribution over discrete allocations. Unfortunately, the PS mechanism is not strategy proof. However, it has good welfare and efficiency properties. It is, for instance, SD-efficient and SD-envyfree1. Footnote 1: These are efficiency and fairness notions that are defined for ordinal preferences. In particular, the _SD_ (stochastic dominance) ordering prefers an allocation \(p\) to an agent over \(q\) if the probability for the agent to get the top \(i\) items in \(p\) is at least as large as in \(q\) for all \(i\in[1,m]\). If an allocation to an agent is _SD-preferred_ over another then it has greater or equal expected utility for all utilities consistent with the agent’s ordinal preferences. Notions like _SD-efficiency_, _SD-envy freeness_ and _SD-strategy proofness_ can be defined in the natural way from the SD-preference ordering. To derandomize the PS mechanism, we first identify how much precision is needed to represent the probabilities in the random assignment it generates. This precision will dictate the size of the modular arithmetic game that agents will play. We prove here that the PS mechanism only needs a polynomial number of bits with which to represent probabilities. **Theorem 7**: _For every one of the \(n\) agents and of the \(m\) items, there exists an integer \(k\) such that the PS mechanism allocates the item to the agent with probability \(\frac{k}{(n!)^{m}}\)._ **Proof:** We suppose without loss of generality that items are fully "consumed" by the PS mechanism in numerical order. We can consider then \(m\) steps of the PS mechanism, each ending with a new item being fully consumed.
We assume that only one item is ever fully consumed at a time. We will discuss relaxing this assumption shortly. Let \(k_{i,j}\) be the number of agents eating the \(i\)th item at the \(j\)th step of the probabilistic serial mechanism. For notational simplicity, we write \(k_{i}\) for \(k_{i,i}\). Hence, \(k_{i}\) is the number of agents eating the \(i\)th item when it is fully consumed. Note that \(k_{i}\in[1,n]\) and \(k_{i,j}\in[0,n)\) for \(i\neq j\). The first step takes \(\frac{1}{k_{1}}\) time for the first item to be fully consumed. The \(k_{1}\) agents each get a share of \(\frac{1}{k_{1}}\) of probability of this first fully consumed item. Note that this is an integer multiple of \(\frac{1}{n!}\), and thus also of \(\frac{1}{(n!)^{m}}\). Consider any other item \(j>1\). The \(k_{j,1}\) agents eating this item each get a share of \(\frac{1}{k_{1}}\) of probability of item \(j\). Note that this is again an integer multiple of \(\frac{1}{(n!)^{m}}\). There is now \(1-\frac{k_{j,1}}{k_{1}}\) of item \(j\) left. That is, \(\frac{k_{1}-k_{j,1}}{k_{1}}\) of item \(j\) left. Note that this is an integer multiple of \(\frac{1}{n!}\). Supposing \(m\geq 2\), during the next step, \(k_{2}\) agents eat whatever remains of the second item. This takes \((\frac{k_{1}-k_{2,1}}{k_{1}})/k_{2}\) time. Each of the \(k_{2}\) agents eating this item thereby receives a share of \(\frac{k_{1}-k_{2,1}}{k_{1}k_{2}}\) of probability of item 2. Note that this is an integer multiple of \(\frac{1}{(n!)^{2}}\), and thus also of \(\frac{1}{(n!)^{m}}\). Consider any other item \(j>2\). The \(k_{j,1}\) agents eating this item each get a share of \(\frac{k_{1}-k_{2,1}}{k_{1}k_{2}}\) of probability of item \(j\). Note that this is again an integer multiple of \(\frac{1}{(n!)^{m}}\). There is now \(\frac{k_{1}-k_{j,1}}{k_{1}}-\frac{k_{j,1}(k_{1}-k_{2,1})}{k_{1}k_{2}}\) of item \(j\) left. That is, \(\frac{k_{1}k_{2}-k_{j,1}(k_{1}+k_{2})+k_{j,1}k_{2,1}}{k_{1}k_{2}}\). Note that this is an integer multiple of \(\frac{1}{(n!)^{2}}\). The argument repeats in subsequent steps. In the \(j\)th step, agents add an additional probability for an item which is an integer multiple of \(\frac{1}{(n!)^{j}}\). And the amount left of any item not fully consumed is also an integer multiple of \(\frac{1}{(n!)^{j}}\). Adding together all the probabilities, we conclude at the final \(m\)th step that for each agent and item, there exists an integer \(k\) such that the PS mechanism allocates the item to the agent with probability \(\frac{k}{(n!)^{m}}\). Note that if two or more items are fully consumed at exactly the same time, the argument is similar. However, we now have strictly less than \(m\) steps but the conclusion still holds. Note that \((n!)^{m}\) can be represented with \(O(mn\log(n))\) bits which is polynomial in both \(n\) and \(m\). \(\diamond\) We next define a two stage mechanism that derandomizes the PS mechanism. In the first stage, the mechanism constructs the probabilistic allocation of the normal PS mechanism. In the second stage, agents play a modular arithmetic game, and the mechanism uses the outcome of this game to construct a discrete ex post outcome. This example therefore illustrates the "game-last" method to derandomize a randomized mechanism. In more detail, in the second stage, each agent submits an integer in \([0,(n!)^{m})\). Let \(\sigma\) be the sum of these integers mod \((n!)^{m}\). The mechanism then allocates item \(j\) to agent \(\min\{k\mid\sum_{i=1}^{k}p_{i,j}\geq\frac{\sigma}{(n!)^{m}}\}\) where \(p_{i,j}\) is the probability that agent \(i\) is allocated item \(j\) in the probabilistic allocation. In other words, the mechanism treats \(\sigma/(n!)^{m}\) as a random draw. Note that this range of integers is a worst case: as this is a "game-last" derandomization, we can analyse the probabilistic allocation returned in the first stage and perhaps use a smaller range of integers than \([0,(n!)^{m})\).
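A minimal sketch of this second-stage draw is given below, assuming the first-stage probabilities \(p_{i,j}\) are available as exact fractions (for instance, as computed by the PS eating procedure); the function name and data layout are our own illustrative choices.

```python
from fractions import Fraction
from math import factorial

def realize_allocation(p, integers):
    """p[i][j]: probability (a Fraction with denominator dividing (n!)^m) that agent i
    gets item j under the PS mechanism; integers: the agents' submitted integers.
    Returns, for each item j, the agent receiving it under the game-last draw."""
    n, m = len(p), len(p[0])
    modulus = factorial(n) ** m
    draw = Fraction(sum(integers) % modulus, modulus)   # sigma / (n!)^m
    allocation = {}
    for j in range(m):
        cumulative = Fraction(0)
        for i in range(n):
            cumulative += p[i][j]
            if cumulative >= draw:          # min k with cumulative probability >= draw
                allocation[j] = i
                break
    return allocation

# Example: two agents, two items; agent 0 holds probability 3/4 of item 0 and 1/4 of item 1.
p = [[Fraction(3, 4), Fraction(1, 4)], [Fraction(1, 4), Fraction(3, 4)]]
print(realize_allocation(p, [3, 3]))   # sigma = 6 % 4 = 2, draw = 1/2: item 0 to agent 0, item 1 to agent 1
```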
Finally we prove that the second stage has a mixed Nash equilibrium in which agents choose an integer uniformly at random. This gives each agent an expected welfare equal to that of the randomized allocation computed in the first stage. **Theorem 8**: _A mixed Nash equilibrium of the second stage has every agent select each integer in \([0,(n!)^{m})\) with uniform probability._ **Proof:** Suppose all but the first agent play this uniform mixed strategy. Then, the first agent gets the same expected return irrespective of the integer they play. Hence, this is a Nash equilibrium. \(\diamond\) As with the PS mechanism itself, agents may strategically misreport their ordinal preferences over items in the first stage of this derandomized PS mechanism. For example, if one agent prefers item 1 to 2 and 2 to 3, and a second agent prefers 2 to 3 and 3 to 1, then the first agent can profitably misreport that they prefer 2 to 1 and 1 to 3. A pure Nash equilibrium of the PS mechanism is guaranteed to exist, but is NP-hard to compute in general Aziz et al. (2015a). Indeed, even computing a best response is NP-hard Aziz et al. (2015b). One tractable special case is two agents since there exists a linear time algorithm to compute a pure Nash equilibrium with two agents which yields the same probabilistic allocation as the truthful profile Aziz et al. (2015a). As the PS mechanism is envy-free ex ante, it follows that a mixed Nash equilibrium with two agents consisting of this pure strategy for the ordinal preferences and a uniform mixed strategy for the modular arithmetic game is envy-free ex ante. Another tractable special case is identical ordinal preferences. The PS mechanism is strategy proof in this case. Hence, a combined mixed Nash equilibrium with identical ordinal preferences (consisting of a truthful strategy for the ordinal preferences and a uniform mixed strategy for the modular arithmetic game) is again envy-free ex ante. ### Random Priority mechanism For simplicity, we consider house allocation (i.e. one item to allocate per agent). However, our results easily generalize to a setting with more (or fewer) items than agents. The random priority (RP) mechanism picks a random order of agents, and then agents take turns according to this order to pick their most preferred remaining item (house) (e.g. Abdulkadiroglu and Sonmez (1998); Zhou (1990)). Random priority is used in many real world settings. For example, one of the authors of this paper was allocated undergraduate housing using the RP mechanism. RP is one of the few strategy proof mechanisms available for house allocation. Indeed, any strategy proof, nonbossy2 and neutral3 mechanism is necessarily some form of serial dictatorship Svensson (1999). Footnote 2: A mechanism is _nonbossy_ if an agent cannot change the allocation without changing their own allocation.
Footnote 3: A mechanism is _neutral_ if permuting the names of the items merely permutes the outcome. We define a two stage "game-first" mechanism that derandomizes the RP mechanism. In the first stage, agents play a modular arithmetic game, and the mechanism uses the outcome of this game to construct a picking ordering for the second stage. Each agent submits an integer in \([0,n!)\). We sum these integers mod \(n!\) and then convert this into a permutation ordering of agents via an inverse Lehmer code. In the second stage, the mechanism uses this ordering with a serial dictatorship mechanism to allocate items to agents. This is then another example of a "game-first" method to derandomize a randomized mechanism. **Theorem 9**: _The derandomized RP mechanism has a mixed Nash equilibrium in which agents in the first stage select each integer in \([0,n!)\) with uniform probability, and then select items in the second stage truthfully._ **Proof:** Suppose all but the first agent play this uniform mixed strategy. Then, the first agent gets the same expected return irrespective of the integer they play. Once the ordering has been selected, agents have no incentive to misreport. Hence, this is a Nash equilibrium. \(\diamond\) Like RP, which returns a probability distribution over ex post outcomes that are Pareto efficient, the derandomized RP mechanism is Pareto efficient. Note that we could play the modular arithmetic game second, after we first compute the randomized allocation returned by the RP mechanism, and then use the outcome of the modular arithmetic game to perform a "random draw" like we did when derandomizing the PS mechanism. However, it is NP-hard to compute the probabilistic allocation returned by RP Saban and Sethuraman (2015). Therefore this is not a tractable derandomization. ## Conclusions We have proposed three related methods of derandomizing mechanisms: "game-first", "game-last" and "game-interleaved". Each introduces a modular arithmetic game which can inject randomness into the mechanism via the randomness in agents' play. These derandomized mechanisms retain many of the good normative properties of the underlying randomized mechanism. For instance, all but one of the derandomized mechanisms are "quasi-strategy proof", as they have mixed Nash equilibria in which agents play the game randomly but report other preferences sincerely. We demonstrate how these derandomization methods work in six different domains: voting, facility location, task allocation, school choice, peer selection, and resource allocation. In one case, derandomization additionally introduced a new and desirable normative property (namely that the derandomized peer selection mechanism was always responsive to the preferences of any agent).
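To make the "game-first" derandomization of RP from the Random Priority section concrete, the following minimal Python sketch (our own illustration, not code from the paper; the function names, the decoding routine and the example preference profile are ours) sums the submitted integers mod \(n!\), decodes the result into a picking order via an inverse Lehmer code, and runs serial dictatorship with that order.

```python
import math

def index_to_permutation(index, n):
    """Convert an integer in [0, n!) to a permutation of range(n)
    via the factorial number system (an inverse Lehmer code)."""
    remaining = list(range(n))
    perm = []
    for position in range(n, 0, -1):
        digit, index = divmod(index, math.factorial(position - 1))
        perm.append(remaining.pop(digit))
    return perm

def derandomized_rp(submitted_integers, preferences):
    """Two-stage 'game-first' derandomized RP:
    (1) sum the submitted integers mod n! to obtain a picking order,
    (2) run serial dictatorship with that order."""
    n = len(preferences)
    sigma = sum(submitted_integers) % math.factorial(n)
    order = index_to_permutation(sigma, n)
    available = set(range(n))          # one house per agent
    allocation = {}
    for agent in order:
        # the agent takes their most preferred remaining house
        choice = next(h for h in preferences[agent] if h in available)
        allocation[agent] = choice
        available.remove(choice)
    return allocation

# Illustrative profile with three agents: if every agent randomizes
# uniformly over [0, 3!) = [0, 6), each of the 6 picking orders is
# equally likely, matching the randomization of RP.
prefs = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]
print(derandomized_rp([4, 5, 3], prefs))
```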
2303.02365
Supercloseness analysis of the nonsymmetric interior penalty Galerkin method on Bakhvalov-type mesh
In this paper, we study the convergence of the nonsymmetric interior penalty Galerkin (NIPG) method on a Bakhvalov-type mesh for the first time. For this purpose, a new composite interpolation is designed, which solves the inherent difficulty of analysis on Bakhvalov-type meshes. More specifically, Gauss Radau interpolation and Gauss Lobatto interpolation are used outside and inside the layer, respectively. On the basis of that, by choosing the specific values of the penalty parameters at different mesh points, we derive the supercloseness of $k+1/2$th order ($k\ge 1$), and prove the convergence of optimal order in an energy norm. The theoretical conclusion is consistent with the numerical results.
Xiaoqi Ma, Jin Zhang
2023-03-04T09:34:48Z
http://arxiv.org/abs/2303.02365v1
Supercloseness analysis of the nonsymmetric interior penalty Galerkin method on Bakhvalov-type mesh ###### Abstract In this paper, we study the convergence of the nonsymmetric interior penalty Galerkin (NIPG) method on a Bakhvalov-type mesh for the first time. For this purpose, a new composite interpolation is designed, which solves the inherent difficulty of analysis on Bakhvalov-type meshes. More specifically, Gauss Radau interpolation and Gauss Lobatto interpolation are used outside and inside the layer, respectively. On the basis of that, by choosing the specific values of the penalty parameters at different mesh points, we derive the supercloseness of \(k+\frac{1}{2}\)th order (\(k\geq 1\)), and prove the convergence of optimal order in an energy norm. The theoretical conclusion is consistent with the numerical results. keywords: Convection diffusion, Singular perturbation, NIPG method, Bakhvalov-type mesh, Supercloseness ## 1 Introduction In recent years, with the wide application of singularly perturbed problems in practical life, relevant numerical methods have attracted the attention of more and more researchers; see [3; 7; 8; 9; 10; 11; 12; 16; 18; 23] and their references. It is worth noting that the exact solutions of this kind of problem usually change sharply locally, resulting in boundary layers or interior layers. In order to better resolve these layers, researchers designed a simple and effective mesh strategy, layer-adapted meshes, whose most representative examples are Bakhvalov-type meshes [2] and Shishkin meshes [20]. In the numerical experiment of [17], we find that even if we use the standard Galerkin method on a layer-adapted mesh, there still exist small oscillations. Therefore, it is necessary to consider strongly stable numerical methods on layer-adapted meshes, such as continuous interior penalty stabilization, the streamline diffusion finite element method, local projection stabilization and the discontinuous Galerkin method; see [1; 6; 12; 14; 15; 22] for more details. Since the 1970s, the nonsymmetric interior penalty Galerkin (NIPG) method has gradually become a popular stabilization technique. Because this method applies an interior penalty term to restrain the discontinuity across element boundaries, it has flexibility and advantages that the traditional finite element method does not have. In addition, compared with the incomplete interior penalty Galerkin method and the symmetric interior penalty Galerkin method, a prominent advantage of the NIPG method is that it has strong stability and has no strict restrictions on the values of the penalty parameters. With continued research, convergence analyses of the NIPG method have become increasingly common. For example, Roos and Zarin analyzed the convergence of the NIPG method with bilinear elements on a Shishkin mesh for a two-dimensional convection diffusion problem in [19]. Then, Zarin used the NIPG method on a Shishkin-type mesh, and derived convergence of almost \(k\)th order [21]. In [24], Zhu et al. applied the NIPG method on a Shishkin-type mesh, and proved that the method is uniformly convergent in an energy norm. It can be seen that the analysis in the above work only focuses on Shishkin meshes, and the convergence analysis of the NIPG method on Bakhvalov-type meshes has not been established. This is because the convergence of the convection term cannot be analyzed on the element in the vicinity of the transition point near the layer. 
To deal with this difficulty, we propose a new interpolation, which lays a foundation for the analysis of the NIPG method in the two-dimensional case. In this paper, for a one-dimensional singularly perturbed problem, the supercloseness analysis of the NIPG method is studied on a Bakhvalov-type mesh for the first time. To obtain the desired results, we use Gauss Radau interpolation outside the layer, and Gauss Lobatto interpolation inside the layer. Note that the division inside and outside the layer needs to be based on the characteristics of the mesh and the analysis. Then we derive the values of the penalty parameters at different element boundaries and the supercloseness of order \(k+\frac{1}{2}\). The rest of the paper is organized as follows. Firstly, we describe a continuous problem and provide some basic assumptions in Section 2. In addition, the NIPG method on a Bakhvalov-type mesh is introduced. Then in Section 3 a new interpolation is defined; further, the corresponding interpolation error estimate is obtained. In Section 4, the uniform supercloseness related to the perturbation parameter is presented. At last, we provide some numerical results to verify the main conclusion. Throughout the paper, let \(C\) be a general positive constant, which is independent of the perturbation parameter \(\varepsilon\) and the mesh parameter \(N\). Furthermore, assume that \(k\) is a fixed integer satisfying \(k\geq 1\). ## 2 Continuous problem We consider the following singularly perturbed problem: \[\begin{split} Lu&:=-\varepsilon u^{\prime\prime}(x)+b(x)u^{\prime}(x)+c(x)u(x)=f(x),\quad x\in\Omega:=(0,1),\\ u(0)&=u(1)=0,\end{split} \tag{1}\] where \(0<\varepsilon\ll 1\), and \(b(x)\) is the convection coefficient satisfying \(b(x)\geq\alpha>0\) on \(\bar{\Omega}\). Furthermore, for some fixed constant \(\gamma\), assume that \[c(x)-\frac{1}{2}b^{\prime}(x)\geq\gamma>0,\quad\forall x\in\bar{\Omega}. \tag{2}\] Here \(b\), \(c\) and \(f\) are sufficiently smooth. Since \(\varepsilon\) can be arbitrarily small, the exact solution \(u\) of (1) typically features a boundary layer near \(x=1\), whose width is \(\mathcal{O}(\varepsilon\ln(1/\varepsilon))\). Below, we introduce _a priori_ information of the solution, which is the basis of our analysis. **Theorem 1**.: _Suppose that \(q\) is a positive integer. Assume that (2) holds true and \(b\), \(c\), \(f\) are sufficiently smooth. Then the solution \(u\) of (1) can be decomposed into \(u=S+E\), where the smooth component \(S\) and the layer component \(E\) satisfy \(LS=f\) and \(LE=0\), respectively. Then for \(0\leq l\leq q\)_ \[|S^{(l)}(x)|\leq C,\qquad|E^{(l)}(x)|\leq C\varepsilon^{-l}e^{-\alpha(1-x)/\varepsilon}. \tag{3}\] _In particular, when \(b,c,f\in C^{\infty}(\Omega)\), (3) holds for any \(q\in\mathbb{N}\)._ Proof.: From [12], this conclusion can be obtained directly. ### Bakhvalov-type mesh First, we introduce the mesh points \(\Omega_{N}=\{x_{j}\in\Omega:j=0,1,2,\cdots,N\}\) and a partition of \(\Omega\) \[\mathcal{T}_{N}=\{I_{j}=[x_{j-1},x_{j}]:j=1,2,\cdots,N\}.\] Then \(h_{j}=x_{j}-x_{j-1}\) denotes the length of \(I_{j}\), while \(I\) represents a generic interval. 
In order to better characterize the change of solution in the region \(\Omega\), we adopt a Bakhvalov-type mesh, the specific description is as follows: The domain \(\bar{\Omega}\) is divided as \(\bar{\Omega}=[0,\tau]\cup[\tau,1]\), where the transition point \(\tau=1+\frac{\sigma\varepsilon}{\alpha}\ln\varepsilon\) satisfies \(\tau\geq 1/2\) and \(\sigma\geq k+1\). Let \(N\in\mathbb{N}\) be an integer divisible by \(2\) and \(N\geq 4\). Each subdomain contains \(N/2\) mesh points. Therefore, the mesh generating function is defined as \[x=\psi(t)=\begin{cases} 1+\frac{\sigma\varepsilon}{\alpha}\ln[1+2(1- \varepsilon)(t-1)],\quad t\in[\frac{1}{2},1]\\ 2\tau t,\quad t\in[0,\frac{1}{2}).\end{cases} \tag{4}\] Obviously, \(x_{N/2}=\tau\) can be obtained. **Assumption 1**.: _In this paper, we will make an assumption that_ \[\varepsilon\leq CN^{-1},\] _as is not a restriction in practice._ When \(\varepsilon\geq CN^{-1}\), the layer in the exact solution of (1) is weak, and it can be well resolved by uniform meshes. And the relevant theoretical analysis can be covered by the usual discontinuous finite element theory [5]. ### The NIPG method In the following, we present some basic notions. Let \(m\) be a nonnegative integer and for \(I\in\mathcal{T}_{N}\), the space of order \(m\) is denoted as \[H^{m}(\Omega,\mathcal{T}_{N})=\{\omega\in L^{2}(\Omega):\omega|_{I}\in H^{m}( I),\text{for all }I\in\mathcal{T}_{N}\}.\] Then the corresponding norm and seminorm can be defined by \[\|w\|_{m,\mathcal{T}_{N}}^{2}=\sum_{j=1}^{N}\|w\|_{m,I_{j}}^{2},\quad|w|_{m, \mathcal{T}_{N}}^{2}=\sum_{j=1}^{N}|w|_{m,I_{j}}^{2},\] where \(\|\cdot\|_{m,I_{j}}\) is the usual Sobolev norm and \(|\cdot|_{m,I_{j}}\) is the usual semi-norm in \(H^{m}(I_{j})\). In particular, \(\|\cdot\|_{I}\) and \((\cdot,\cdot)_{I}\) are usually used to stand for the \(L^{2}(I)\)-norm and the \(L^{2}(I)\)-inner product, respectively. Then on Bakhvalov-type mesh, we define the finite element space as \[V_{N}^{k}=\{v\in L^{2}(\Omega):v|_{I}\in\mathbb{P}_{k}(I),\quad\forall I\in \mathcal{T}_{N}\}.\] Here \(\mathbb{P}_{k}(I)\) is the space of polynomials of degree at most \(k\) on \(I\). It is worth noting that, the functions in \(V_{N}^{k}\) are discontinuous at the boundary between two adjacent elements, so they might be multivalued at each node \(\{x_{j}\},j=0,1,\cdots,N\). For a function \(u\in H^{1}(\Omega,\mathcal{T}_{N})\), we define the jump and average at the interior node as \[[u(x_{j})]=u(x_{j}^{-})-u(x_{j}^{+}),\quad\{u(x_{j})\}=\frac{1}{2}\left(u(x_{j }^{+})+u(x_{j}^{-})\right),\] where \(u(x_{j}^{+})=\lim_{x\to x_{j}^{+}}u(x)\) and \(u(x_{j}^{-})=\lim_{x\to x_{j}^{-}}u(x)\) for all \(j=1,\cdots,N-1\). 
In general, the definitions of jump and average can be extended to the boundary nodes \(x_{0}\) and \(x_{N}\), that is \[[u(x_{0})]=-u(x_{0}^{+}),\quad\{u(x_{0})\}=u(x_{0}^{+}),\quad[u(x_{N})]=u(x_{ N}^{-}),\quad\{u(x_{N})\}=u(x_{N}^{-}).\] Now we present the weak formulation for the problem (1): Find \(u_{N}\in V_{N}^{k}\) such that \[B(u_{N},v_{N})=L(v_{N})\quad\text{for all }v_{N}\in V_{N}^{k}, \tag{5}\] where \[B(u,v)=B_{1}(u,v)+B_{2}(u,v)+B_{3}(u,v),\] \[B_{1}(u,v)=\sum_{j=1}^{N}\int_{I_{j}}\varepsilon u^{\prime}v^{ \prime}\mathrm{d}x-\varepsilon\sum_{j=0}^{N}\{u^{\prime}(x_{j})\}[v(x_{j})]+ \varepsilon\sum_{j=0}^{N}[u(x_{j})]\{v^{\prime}(x_{j})\}+\sum_{j=0}^{N}\mu(x_{ j})[u(x_{j})][v(x_{j})],\] \[B_{2}(u,v)=\sum_{j=1}^{N}\int_{I_{j}}b(x)u^{\prime}v\mathrm{d}x- \sum_{j=0}^{N-1}b(x_{j})[u(x_{j})]v(x_{j}^{+}),\] \[B_{3}(u,v)=\sum_{j=1}^{N}\int_{I_{j}}c(x)uv\mathrm{d}x,\] \[L(v)=\sum_{j=1}^{N}\int_{I_{j}}fv\mathrm{d}x.\] Note that the penalty parameters \(\mu(x_{j})(j=0,1,\cdots,N)\) associated with \(x_{j}\) are some nonnegative constants. In this paper, we will take \(\mu(x_{j})\) as \[\mu(x_{j})=\begin{cases}1,\quad 0\leq j\leq N/2,\\ N^{2},\quad N/2+1\leq j\leq N.\end{cases} \tag{6}\] **Lemma 1**.: _Let \(u\) be the exact solution of (1), then for all \(v\in V_{N}^{k}\), we have the following Galerkin orthogonality_ \[B(u-u_{N},v)=0,\] _where \(B(\cdot,\cdot)\) is defined as (5)._ Proof.: Using the similar arguments in [24], we draw this conclusion directly. For all \(v\in V_{N}^{k}\), the natural norm associated with \(B(\cdot,\cdot)\) is defined by \[\|v\|_{NIPG}:=\left(\varepsilon\sum_{j=1}^{N}\|v^{\prime}\|_{I_{j}}^{2}+\sum_{ j=1}^{N}\gamma\|v\|_{I_{j}}^{2}+\sum_{j=0}^{N}\left(\mu(x_{j})+\frac{1}{2}b(x_{j}) \right)[v(x_{j})]^{2}\right)^{\frac{1}{2}}. \tag{7}\] According to the similar arguments in [24], it is easy to see that one has the coercivity \[B(v_{N},v_{N})\geq\|v_{N}\|_{NIPG}^{2}\quad\forall v_{N}\in V_{N}^{k}. \tag{8}\] Then from Lax-Milgram lemma [13, Theorem 1.1.3] and (8), \(u_{N}\) is the unique solution of (5). ### Some preliminary conclusions **Lemma 2**.: _Suppose that Assumption 1 holds true. Then on Bakhvalov-type mesh (4), one has_ \[h_{N/2+2}\geq\cdots\geq h_{N}, \tag{9}\] \[\frac{\sigma\varepsilon}{4\alpha}\leq h_{N/2+2}\leq\frac{\sigma \varepsilon}{\alpha},\] (10) \[\frac{\sigma\varepsilon}{2\alpha}\leq h_{N/2+1}\leq\frac{2\sigma }{\alpha}N^{-1},\] (11) \[N^{-1}\leq h_{j}\leq 2N^{-1},\quad 1\leq j\leq N/2. \tag{12}\] _Furthermore, we present estimates at some special points,_ \[x_{N/2+1}\leq 1-C\frac{\sigma\varepsilon}{\alpha}\ln N,\quad x_{N/2}\leq 1+C \frac{\sigma\varepsilon}{\alpha}|\ln\varepsilon|. \tag{13}\] _In particular, for \(N/2+2\leq j\leq N\) and \(0\leq\lambda\leq\sigma\),_ \[h_{j}^{\lambda}\max_{x_{j-1}\leq x\leq x_{j}}e^{-\alpha(1-x)/\varepsilon}\leq h _{j}^{\lambda}e^{-\alpha(1-x_{j})/\varepsilon}\leq C\varepsilon^{\lambda}N^{- \lambda}. \tag{14}\] Proof.: Applying the similar method in [23, Lemma 3], we can derive this lemma without any difficulties. **Lemma 3**.: _Suppose that Assumption 1 holds, then on Bakhvalov-type mesh (4),_ \[|E(x_{N/2+1})|\leq CN^{-\sigma},\quad|E(x_{N/2})|\leq C\varepsilon^{\sigma}.\] Proof.: Through (4) and (13), the conclusion of this lemma can be obtained. 
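To make the mesh construction (4) and the step-size behavior of Lemma 2 more tangible, the following minimal Python sketch (our own illustration, not part of the paper; the parameter values are only examples) generates the Bakhvalov-type mesh points from the mesh generating function \(\psi\) and prints the resulting step sizes.

```python
import numpy as np

def bakhvalov_mesh(N, eps, sigma, alpha):
    """Mesh points x_j = psi(j/N) for the Bakhvalov-type mesh (4):
    equidistant on [0, tau] and graded inside the layer near x = 1."""
    tau = 1.0 + sigma * eps / alpha * np.log(eps)   # transition point x_{N/2}
    t = np.linspace(0.0, 1.0, N + 1)
    x = np.empty(N + 1)
    left = t < 0.5
    x[left] = 2.0 * tau * t[left]
    x[~left] = 1.0 + sigma * eps / alpha * np.log(1.0 + 2.0 * (1.0 - eps) * (t[~left] - 1.0))
    return x

# Example parameters (illustrative only): k = 1, hence sigma = k + 1 = 2.
N, eps, sigma, alpha = 16, 1e-4, 2.0, 1.0
x = bakhvalov_mesh(N, eps, sigma, alpha)
h = np.diff(x)
print("transition point x_{N/2} =", x[N // 2])
print("coarse steps h_j, 1 <= j <= N/2    :", h[: N // 2])      # O(1/N), cf. (12)
print("layer steps  h_j, N/2+2 <= j <= N  :", h[N // 2 + 1:])   # O(eps), cf. (9)-(10)
```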
## 3 Interpolation and interpolation error ### Interpolation Below we introduce a new interpolation operator \(\Pi\), that is \[(\Pi u)|_{I}=\begin{cases}(L_{k}u)|_{I},&\text{if }I\subset[x_{N/2+1},1],\\ (P_{h}u)|_{I},&\text{if }I\subset[0,x_{N/2+1}],\end{cases} \tag{15}\] where \(L_{k}u\) is \(k\)-degree Gauss Lobatto interpolation, and \(P_{h}u\) is Gauss Radau interpolation of \(u\). Then, we shall provide the definitions of these two interpolations. First, assume that \(x_{j-1}=s_{0}<s_{1}<\cdots<s_{k}=x_{j}\) is the Gauss Lobatto points, where \(s_{1},s_{2},\cdots,s_{k-1}\) are zeros of the derivative of \(k\)-degree Legendre polynomial on \(I_{j}\). For \(\varphi\in C(\overline{\Omega})\), let \((L_{k}\varphi)|_{I_{j}}\) for \(j=1,\ldots,N\) be the Lagrange interpolation of degree \(k\) at the Gauss Lobatto points \(\{t_{m}\}_{m=0}^{k}\). Then for \(\varphi(x)\in H^{k+2}(I_{j})\), we have \[|(\varphi^{\prime}-(L_{k}\varphi)^{\prime},v^{\prime})_{I_{j}}|\leq Ch_{j}^{k +1}|\varphi|_{k+2,I_{j}}|v|_{1,I_{j}}\quad\text{for all }v\in\mathbb{P}_{k}. \tag{16}\] Furthermore, if \(k\geq 1\), we define Gauss Radau interpolation \(P_{h}u\in V_{h}^{k}\) by: For \(j=1,2,\cdots,N\), \[\int_{I_{j}}(P_{h}u)v_{h}\mathrm{d}x=\int_{I_{j}}uv_{h}\mathrm{d}x,\quad\forall v_{h}\in\mathbb{P}_{k-1}, \tag{17}\] \[(P_{h}u)(x_{j}^{-})=u(x_{j}^{-}), \tag{18}\] see [4] for more details. **Remark 1**.: _Now we provide the situation at \(x_{N/2+1}\). On the one hand, by using the definition of Gauss Radau interpolation (18), there is_ \[(u-P_{h}u)(x_{N/2+1}^{-})=0. \tag{19}\] _On the other hand, according to Gauss Lobatto interpolation,_ \[(u-L_{k}u)(x_{N/2+1}^{+})=0. \tag{20}\] _Therefore, from (15), (19) and (20), we have_ \[[(u-\Pi u)(x_{N/2+1})]=0.\] ### Interpolation error Recall that \(L_{k}\) is the Lagrange interpolation operator with Gauss Lobatto points as the interpolation nodes. From the interpolation theories in Sobolev spaces [13, Theorem 3.1.4], for all \(v\in W^{k+1,m}(I_{j})\), \[\|v-L_{k}v\|_{W^{l,n}(I_{j})}\leq Ch_{j}^{k+1-l+1/n-1/m}|v|_{W^{k+1,m}(I_{j})}, \tag{21}\] where \(l=0,1\) and \(1\leq m,n\leq\infty\). Then recall that \(P_{h}v\) is Gauss Radau interpolation of \(v\). According to the arguments in [13], for all \(v\in H^{k+1}(I_{j})\), we have \[\|v-P_{h}v\|_{I_{j}}+h_{j}^{\frac{1}{2}}\|v-P_{h}v\|_{L^{\infty}(I_{j})}\leq Ch _{j}^{k+1}|v|_{k+1,I_{j}},\quad j=1,2,\cdots,N. \tag{22}\] On the basis of that, it is straightforward for us to derive the following error. **Lemma 4**.: _Suppose that Assumption 1 hold and \(\mu(x_{j})\) is presented in (6). Then on Bakhvalov-type mesh (4) with \(\sigma\geq k+1\), one has_ \[\|S-P_{h}S\|_{[0,x_{N/2+1}]}+\|u-\Pi u\|_{[0,1]}\leq CN^{-(k+1)}, \tag{23}\] \[\|(S-\Pi S)^{\prime}\|_{[0,1]}\leq CN^{-k},\] (24) \[\|(E-P_{h}E)^{\prime}\|_{[0,x_{N/2+1}]}\leq C\varepsilon^{\sigma} N+C\varepsilon^{-\frac{1}{2}}N^{-\sigma},\] (25) \[\|u-\Pi u\|_{L^{\infty}(I_{j})}\leq CN^{-(k+1)},\quad j=1,2, \cdots,N\] (26) \[\|L_{k}u-u\|_{NIPG,[x_{N/2+2},1]}\leq CN^{-k},\] (27) \[\|u-P_{h}u\|_{NIPG,[0,x_{N/2+1}]}\leq CN^{-(k+\frac{1}{2})}. \tag{28}\] Proof.: First, (21) and (22) yield (23-26) easily, thus, we just estimate (27) and (28). According to the NIPG norm (7) and the definition of Gauss Lobatto interpolation, one has \([(L_{k}u-u)(x_{j})]=0,j=N/2+2,\cdots,N\). 
Further, \[\|L_{k}u-u\|_{NIPG,[x_{N/2+2},1]}^{2}=\varepsilon\sum_{j=N/2+2}^{N}\|(L_{k}u-u )^{\prime}\|_{I_{j}}^{2}+\sum_{j=N/2+2}^{N}\gamma\|L_{k}u-u\|_{I_{j}}^{2}.\] Then from Theorem 1, we will divide the first item for analysis, that is, \[|\varepsilon\sum_{j=N/2+2}^{N}\|(L_{k}u-u)^{\prime}\|_{I_{j}}^{2}|\leq C \varepsilon\sum_{j=N/2+2}^{N}\|(L_{k}S-S)^{\prime}\|_{I_{j}}^{2}+C\varepsilon \sum_{j=N/2+2}^{N}\|(L_{k}E-E)^{\prime}\|_{I_{j}}^{2}.\] Actually, from (3) and (21), we can derive \[\varepsilon\sum_{j=N/2+2}^{N}\|(L_{k}S-S)^{\prime}\|_{I_{j}}^{2}\leq C \varepsilon\sum_{j=N/2+2}^{N}h_{j}^{2k}\|S^{(k+1)}\|_{I_{j}}^{2}\leq C \varepsilon^{2k+2}N,\] where we also use (9) and (10). Note that (3), (14) and (21) yield \[\varepsilon\sum_{j=N/2+2}^{N}\|(L_{k}E-E)^{\prime}\|_{I_{j}}^{2} \leq C\varepsilon\sum_{j=N/2+2}^{N}h_{j}^{2k}\|E^{(k+1)}\|_{I_{j}}^{2}\] \[\leq C\varepsilon\sum_{j=N/2+2}^{N}h_{j}^{2k+1}e^{-2\alpha(1-x_{j })/\varepsilon}\varepsilon^{-2(k+1)}\] \[\leq C\sum_{j=N/2+2}^{N}N^{-(2k+1)}\leq CN^{-2k}.\] Moreover, by using the similar method, one has \[|\sum_{j=N/2+2}^{N}\gamma\|L_{k}u-u\|_{I_{j}}^{2}|\leq C\sum_{j=N/2+2}^{N}\|L_ {k}S-S\|_{I_{j}}^{2}+C\sum_{j=N/2+2}^{N}\|L_{k}E-E\|_{I_{j}}^{2}\leq C \varepsilon N^{-(2k+1)}.\] Thus the derivation of (27) has been completed. Now let's analyze the estimate of (28). Through (7) and Remark 1, \[\|u-P_{h}u\|_{NIPG,[0,x_{N/2+1}]}^{2} =\varepsilon\sum_{j=1}^{N/2+1}\|(u-P_{h}u)^{\prime}\|_{I_{j}}^{2} +\sum_{j=1}^{N/2+1}\gamma\|u-P_{h}u\|_{I_{j}}^{2}\] \[+\sum_{j=0}^{N/2}\mu(x_{j})[(u-P_{h}u)(x_{j})]^{2}+\frac{1}{2} \sum_{j=0}^{N/2}b(x_{j})[(u-P_{h}u)(x_{j})]^{2}\] \[=\Lambda_{1}+\Lambda_{2}+\Lambda_{3}+\Lambda_{4}.\] Next, we estimate \(\Lambda_{1}\), \(\Lambda_{2}\), \(\Lambda_{3}\) and \(\Lambda_{4}\) in turn. For \(\Lambda_{1}\), we first decompose it into the following forms, \[\Lambda_{1}\leq\varepsilon\sum_{j=1}^{N/2+1}\|(S-P_{h}S)^{\prime}\|_{I_{j}}^{2 }+\varepsilon\sum_{j=1}^{N/2+1}\|(E-P_{h}E)^{\prime}\|_{I_{j}}^{2}.\] For one thing, through (22) and (3), \[|\varepsilon\sum_{j=1}^{N/2+1}\|(S-P_{h}S)^{\prime}\|_{I_{j}}^{2}|\leq C \varepsilon\sum_{j=1}^{N/2+1}h_{j}^{2k}\|S^{(k+1)}\|_{I_{j}}^{2}\leq C \varepsilon N^{-2k}.\] For another, by means of the triangle inequality and the inverse inequality [13, Theorem 3.2.6], we have \[|\varepsilon\sum_{j=1}^{N/2+1}\|(E-P_{h}E)^{\prime}\|_{I_{j}}^{2}|\leq C \varepsilon^{2\sigma+1}N^{2}+CN^{-2\sigma},\] where (11) and (12) have been used. For \(\Lambda_{2}\), the triangle inequality and (22) can yield \[|\sum_{j=1}^{N/2+1}\gamma\|u-P_{h}u\|_{I_{j}}^{2}|\leq C\sum_{j=1}^{N/2+1}\|S- P_{h}S\|_{I_{j}}^{2}+C\sum_{j=1}^{N/2+1}\|E-P_{h}E\|_{I_{j}}^{2}\leq CN^{-2(k+1)}.\] Recall that \(\mu(x_{j})\) is defined as (6), then one has \[|\sum_{j=1}^{N/2}\mu(x_{j})[(S-P_{h}S)(x_{j})]^{2}|\leq C\sum_{j=1 }^{N/2}\mu(x_{j})\|S-P_{h}S\|_{L^{\infty}(I_{j}\cup I_{j+1})}^{2}\leq CN^{-(2k +1)},\] \[|\sum_{j=1}^{N/2}\mu(x_{j})[(E-P_{h}E)(x_{j})]^{2}|\leq C\sum_{j=1 }^{N/2}\mu(x_{j})\|E-P_{h}E\|_{L^{\infty}(I_{j}\cup I_{j+1})}^{2}\leq C \varepsilon^{2\sigma}N.\] Similarly, it is easy to get \[\Lambda_{4}\leq CN^{-(2k+1)}.\] Here \(b(x)\) is a smooth function on \([0,1]\), thus it is bounded. Finally, by some simple calculations, (28) can be obtained. **Theorem 2**.: _Suppose that Assumption 1 and \(\mu(x_{i})\) is defined as (6). 
On Bakhvalov-type mesh (4) with \(\sigma\geq k+1\),_ \[\|u-\Pi u\|_{NIPG}\leq CN^{-k},\] _where \(u\) is the solution of (1), and \(\Pi u\) is the interpolation defined in (15)._ Proof.: From (27) and (28), this theorem can be derived easily. **Lemma 5**.: _Suppose that \(z\in H^{1}(I_{j}),\quad j=1,2,\cdots,N\), then_ \[|z(x_{s})|^{2}\leq 2\left(h_{j}^{-1}\|z\|_{I_{j}}^{2}+\|z\|_{I_{j}}\|z^{\prime} \|_{I_{j}}\right),\quad s\in\{j-1,j\}.\] Proof.: The corresponding arguments can be found in [24]. In particular, we simplify \((\Pi u-u)(x)\) to \(\eta(x)\) in the following. **Lemma 6**.: _Assume that \(\varepsilon\leq CN^{-1}\) and on the mesh (4) with \(\sigma\geq k+1\), there is_ \[\{\eta^{\prime}(x_{j})\}^{2}\leq\begin{cases}CN^{-2k}+C\varepsilon^{2\sigma-2}+ C\varepsilon^{\sigma-\frac{3}{2}}N^{-(k+\frac{1}{2})},\quad 0\leq j\leq N/2-1,\\ C\varepsilon^{-1}N^{-(2k+1)}+C\varepsilon^{-2}N^{-2\sigma}+C\varepsilon^{- \frac{3}{2}}N^{-(\sigma+k+\frac{1}{2})},\quad j=N/2,\\ C\varepsilon^{-2}N^{-2k},\quad N/2+1\leq j\leq N.\end{cases} \tag{29}\] Proof.: Through [24], we can draw this conclusion. By means of the definition of average and Lemma 5, \[\{\eta^{\prime}(x_{j})\}^{2} =\frac{1}{4}\left(\eta^{\prime}(x_{j}^{-})+\eta^{\prime}(x_{j}^{ +})\right)^{2}\leq\frac{1}{2}\left(\eta^{\prime}(x_{j}^{-})^{2}+\eta^{\prime} (x_{j}^{+})^{2}\right)\] \[\leq h_{j}^{-1}\|\eta^{\prime}\|_{I_{j}}^{2}+\|\eta^{\prime}\|_{ I_{j}}\|\eta^{\prime\prime}\|_{I_{j}}+h_{j+1}^{-1}\|\eta^{\prime}\|_{I_{j+1}}^{2}+ \|\eta^{\prime}\|_{I_{j+1}}\|\eta^{\prime\prime}\|_{I_{j+1}}.\] In the following, we will estimate \(\|\eta^{\prime}\|_{I_{j}}\) and \(\|\eta^{\prime\prime}\|_{I_{j}}\), respectively. For \(1\leq j\leq N/2\), one has the following estimate, \[\|\eta^{\prime}\|_{I_{j}}^{2} \leq\|(S-P_{h}S)^{\prime}\|_{I_{j}}^{2}+\|(E-P_{h}E)^{\prime}\|_{ I_{j}}^{2}\] \[\leq Ch_{j}^{2k}\|S^{(k+1)}\|_{I_{j}}^{2}+\|E^{\prime}\|_{I_{j}}^ {2}+\|(P_{h}E)^{\prime}\|_{I_{j}}^{2}\] \[\leq Ch_{j}^{2k+1}\|S^{(k+1)}\|_{L^{\infty}(I_{j})}^{2}+C \varepsilon^{-2}\int_{I_{j}}e^{-2\alpha(1-x)/\varepsilon}\mathrm{d}x+Ch_{j}^{ -2}\|P_{h}E\|_{I_{j}}^{2}\] \[\leq CN^{-(2k+1)}+C\varepsilon^{-1}\varepsilon^{2\sigma}+Ch_{j}^ {-1}\|E\|_{L^{\infty}(I_{j})}^{2}\] \[\leq CN^{-(2k+1)}+C\varepsilon^{-1}\varepsilon^{2\sigma}+CN \varepsilon^{2\sigma}\] \[\leq CN^{-(2k+1)}+C\varepsilon^{2\sigma-1},\] where (12), (22) and the inverse inequality have been used. In a similar way, we can obtain \[\|\eta^{\prime\prime}\|_{I_{j}}^{2}\leq\|(S-P_{h}S)^{\prime\prime}\|_{I_{j}}^ {2}+\|(E-P_{h}E)^{\prime\prime}\|_{I_{j}}^{2}\leq CN^{-(2k-1)}+C\varepsilon^{ 2\sigma-3}. \tag{31}\] To sum up, for \(j=1,\cdots,N/2-1\), there is \[\{\eta^{\prime}(x_{j})\}^{2} \leq h_{j}^{-1}\|\eta^{\prime}\|_{I_{j}}^{2}+\|\eta^{\prime}\|_{ I_{j}}\|\eta^{\prime\prime}\|_{I_{j}}+h_{j+1}^{-1}\|\eta^{\prime}\|_{I_{j+1}}^{2 }+\|\eta^{\prime}\|_{I_{j+1}}\|\eta^{\prime\prime}\|_{I_{j+1}}\] \[\leq CN^{-2k}+C\varepsilon^{2(\sigma-1)}+C\varepsilon^{\sigma- \frac{3}{2}}N^{-(k+\frac{1}{2})},\] where note that \(\varepsilon\leq CN^{-1}\) and \(\sigma\geq k+1\). 
In addition, when \(j=N/2+2,\cdots,N\), for one thing, from (3), (9), (10) and (21), \[\|\eta^{\prime}\|_{I_{j}}^{2}\leq\|(S-L_{k}S)^{\prime}\|_{I_{j}}^{2}+\|(E-L_{k}E )^{\prime}\|_{I_{j}}^{2}\leq C\varepsilon^{-1}N^{-(2k+1)}.\] For another, by means of (3) and (21), one has \[\|\eta^{\prime\prime}\|_{I_{j}}^{2}\leq\|(S-L_{k}S)^{\prime\prime}\|_{I_{j}}^{ 2}+\|(E-L_{k}E)^{\prime\prime}\|_{I_{j}}^{2}\leq C\varepsilon^{-3}N^{-(2k-1)}.\] Then for \(j=N/2+2,\cdots,N\) we derive \[\{\eta^{\prime}(x_{j})\}^{2}\leq C\varepsilon^{-2}N^{-2k}. \tag{32}\] Finally, we will analyze the situation when \(j=N/2\) and \(j=N/2+1\), respectively: On the one hand, when \(j=N/2\), \[\{\eta^{\prime}(x_{N/2})\}^{2} \leq h_{N/2}^{-1}\|\eta^{\prime}\|_{I_{N/2}}^{2}+\|\eta^{\prime} \|_{I_{N/2}}\|\eta^{\prime\prime}\|_{I_{N/2}}\] \[+h_{N/2+1}^{-1}\|\eta^{\prime}\|_{I_{N/2+1}}^{2}+\|\eta^{\prime} \|_{I_{N/2+1}}\|\eta^{\prime\prime}\|_{I_{N/2+1}}.\] (3), (11), (22) and the inverse inequality [13, Theorem 3.2.6] yield \[\|\eta^{\prime}\|_{I_{N/2+1}}^{2} \leq\|(S-P_{h}S)^{\prime}\|_{I_{N/2+1}}^{2}+\|(E-P_{h}E)^{\prime} \|_{I_{N/2+1}}^{2}\] \[\leq Ch_{N/2+1}^{2k}\|S^{(k+1)}\|_{I_{N/2+1}}^{2}+\|E^{\prime}\|_ {I_{N/2+1}}^{2}+\|(P_{h}E)^{\prime}\|_{I_{N/2+1}}^{2}\] \[\leq Ch_{N/2+1}^{2k+1}\|S^{(k+1)}\|_{L^{\infty}(I_{N/2+1})}^{2}+C \varepsilon^{-1}N^{-2\sigma}+Ch_{N/2+1}^{-1}\|E\|_{L^{\infty}(I_{N/2+1})}^{2}\] \[\leq CN^{-2(k+\frac{1}{2})}+C\varepsilon^{-1}N^{-2\sigma}+C \varepsilon^{-1}N^{-2\sigma}\] \[\leq CN^{-(2k+1)}+C\varepsilon^{-1}N^{-2\sigma}. \tag{33}\] In a same way, we have \[\|\eta^{\prime\prime}\|_{I_{N/2+1}}^{2}\leq CN^{-(2k-1)}+C\varepsilon^{-3}N^{ -2\sigma}. \tag{34}\] Then using (30), (31), (33) and (34), it is straightforward to derive \[\{\eta(x_{N/2})\}^{2}\leq C\varepsilon^{-1}N^{-(2k+1)}+C\varepsilon^{-2}N^{-2 \sigma}+C\varepsilon^{-\frac{3}{2}}N^{-(\sigma+k+\frac{1}{2})}.\] On the other hand, when \(j=N/2+1\), from (32), (33) and (34), there is \[\{\eta^{\prime}(x_{N/2+1})\}^{2}\leq C\varepsilon^{-2}N^{-2k}.\] So far, we have proved this conclusion. ## 4 Supercloseness Now introduce \(\xi:=\Pi u-u_{N}\) and recall \(\eta:=\Pi u-u\). According to (8) and the Galerkin orthogonality, \[\begin{split}&\|\xi\|_{NIPG}^{2}\leq B(\xi,\xi)=B(\Pi u-u+u-u_{N}, \xi)=B(\eta,\xi)\\ &=\sum_{j=1}^{N}\int_{I_{j}}\varepsilon\eta^{\prime}\xi^{\prime} \mathrm{d}x+\varepsilon\sum_{j=0}^{N}[\eta(x_{j})]\{\xi^{\prime}(x_{j})\}- \varepsilon\sum_{j=0}^{N}\{\eta^{\prime}(x_{j})\}[\xi(x_{j})]\\ &+\sum_{j=0}^{N}\mu(x_{j})[\eta(x_{j})][\xi(x_{j})]+\sum_{j=1}^{N }\int_{I_{j}}b(x)\eta^{\prime}\xi\mathrm{d}x-\sum_{j=0}^{N-1}b(x_{j})[\eta(x_{ j})]\xi(x_{j}^{+})\\ &+\sum_{j=1}^{N}\int_{I_{j}}c(x)\eta\xi\mathrm{d}x\\ &=\sum_{j=1}^{N}\int_{I_{j}}\varepsilon\eta^{\prime}\xi^{\prime} \mathrm{d}x+\varepsilon\sum_{j=0}^{N}[\eta(x_{j})]\{\xi^{\prime}(x_{j})\}- \varepsilon\sum_{j=0}^{N}\{\eta^{\prime}(x_{j})\}[\xi(x_{j})]\\ &+\sum_{j=0}^{N}\mu(x_{j})[\eta(x_{j})][\xi(x_{j})]-\sum_{j=1}^{N }\int_{I_{j}}b(x)\eta\xi^{\prime}\mathrm{d}x+\sum_{j=1}^{N}b(x_{j})[\xi(x_{j}) ]\eta(x_{j}^{-})\\ &+\sum_{j=1}^{N}\int_{I_{j}}\left(c(x)-b^{\prime}(x)\right)\eta \xi\mathrm{d}x\\ &=:\Gamma_{1}+\Gamma_{2}+\Gamma_{3}+\Gamma_{4}+\Gamma_{5}+\Gamma_ {6}+\Gamma_{7}.\end{split} \tag{35}\] In the sequel, the terms on the right-hand side of (35) are estimated. 
Firstly, to facilitate analysis, \(\Gamma_{1}\) is decomposed as \[\Gamma_{1}=\sum_{j=1}^{N/2+1}\int_{I_{j}}\varepsilon\eta^{\prime}\xi^{\prime} \mathrm{d}x+\sum_{j=N/2+2}^{N}\int_{I_{j}}\varepsilon\eta^{\prime}\xi^{\prime} \mathrm{d}x.\] For \(1\leq j\leq N/2+1\), according to (24), Holder inequalities and (25), there is \[\begin{split}|\sum_{j=1}^{N/2+1}\int_{I_{j}}\varepsilon\eta^{ \prime}\xi^{\prime}\mathrm{d}x|&\leq C\left(\sum_{j=1}^{N/2+1} \varepsilon\|\eta^{\prime}\|_{I_{j}}^{2}\right)^{\frac{1}{2}}\left(\sum_{j=1}^ {N/2+1}\varepsilon\|\xi^{\prime}\|_{I_{j}}^{2}\right)^{\frac{1}{2}}\\ &\leq C\left(\varepsilon^{\frac{1}{2}}N^{-k}+N^{-\sigma}\right)\| \xi\|_{NIPG}.\end{split} \tag{36}\] In addition, for \(j=N/2+2,\cdots,N\), from Holder inequalities and (24), \[|\sum_{j=N/2+2}^{N}\int_{I_{j}}\varepsilon(S-L_{k}S)^{\prime}\xi^{\prime} \mathrm{d}x|\leq C\varepsilon^{\frac{1}{2}}N^{-k}\|\xi\|_{NIPG}. \tag{37}\] Through (9), (10), (14), (16) and note \(\sigma\geq k+1\), some direct calculations show that \[\begin{split}&|\sum_{j=N/2+2}^{N}\int_{I_{j}}\varepsilon(E-L_{k}E)^{ \prime}\xi^{\prime}\mathrm{d}x|\\ &\leq C\sum_{j=N/2+2}^{N}\varepsilon h_{j}^{k+1}\|E^{(k+2)}\|_{I _{j}}\|\xi^{\prime}\|_{I_{j}}\\ &\leq C\left(\sum_{j=N/2+2}^{N}\varepsilon h_{j}h_{j}^{2(k+1)}\|E^ {(k+2)}\|_{L^{\infty}(I_{j})}^{2}\right)^{\frac{1}{2}}\left(\sum_{j=N/2+2}^{N} \varepsilon\|\xi^{\prime}\|_{I_{j}}^{2}\right)^{\frac{1}{2}}\\ &\leq C\left(\sum_{j=N/2+2}^{N}\varepsilon h_{j}h_{j}^{2(k+1)}e^{ -2\alpha(1-x_{j})/\varepsilon}\varepsilon^{-2(k+2)}\right)^{\frac{1}{2}}\|\xi \|_{NIPG}\\ &\leq\left(C\sum_{j=N/2+2}^{N}\varepsilon^{-1}h_{j}N^{-2(k+1)} \right)^{\frac{1}{2}}\|\xi\|_{NIPG}\\ &\leq CN^{-(k+\frac{1}{2})}\|\xi\|_{NIPG}.\end{split} \tag{38}\] From (36), (37), (38), \(\varepsilon\leq CN^{-1}\) and \(\sigma\geq k+1\), we have the following estimate, \[\Gamma_{1}\leq CN^{-(k+\frac{1}{2})}\|\xi\|_{NIPG}. \tag{39}\] For \(\Gamma_{2}\), by using (29) and the definition of \(\mu(x_{j})\), one has \[\begin{split}|\Gamma_{2}|&=|\varepsilon\sum_{j=0}^{N }\{\eta^{\prime}(x_{j})\}[\xi(x_{j})]|\\ &\leq\left(\sum_{j=0}^{N}\frac{\varepsilon^{2}}{\mu(x_{j})}\{\eta ^{\prime}(x_{j})\}^{2}\right)^{\frac{1}{2}}\left(\sum_{j=0}^{N}\mu(x_{j})[\xi (x_{j})]^{2}\right)^{\frac{1}{2}}\\ &\leq\left(\sum_{j=0}^{N/2-1}\frac{\varepsilon^{2}}{\mu(x_{j})}\{ \eta^{\prime}(x_{j})\}^{2}+\frac{\varepsilon^{2}}{\mu(x_{N/2})}\{\eta^{\prime }(x_{N/2})\}^{2}+\sum_{i=N/2+1}^{N}\frac{\varepsilon^{2}}{\mu(x_{j})}\{\eta^{ \prime}(x_{j})\}^{2}\right)^{\frac{1}{2}}\|\xi\|_{NIPG}\\ &\leq CN^{-(k+\frac{1}{2})}\|\xi\|_{NIPG}.\end{split} \tag{40}\] Now we decompose \(\Gamma_{3}\) into two parts, that is \[\Gamma_{3}=-\varepsilon\sum_{j=0}^{N/2}\{\xi^{\prime}(x_{j})\}[\eta(x_{j})]- \varepsilon\sum_{j=N/2+1}^{N}\{\xi^{\prime}(x_{j})\}[\eta(x_{j})].\] According to (15), the definitions of Gauss Lobatto interpolation and Remark 1, we just estimate \(-\varepsilon\sum_{j=0}^{N/2}\{\xi^{\prime}(x_{j})\}[\eta(x_{j})]\). From the inverse inequality and (26), \[\begin{split}|-\varepsilon\sum_{j=0}^{N/2}\{\xi^{\prime}(x_{j})\}[ \eta(x_{j})]|&\leq|\varepsilon\{\xi^{\prime}(x_{0})\}[\eta(x_{0}) ]|+|\varepsilon\sum_{j=1}^{N/2}\{\xi^{\prime}(x_{j})\}[\eta(x_{j})]|\\ &\leq\left(C\varepsilon^{\frac{1}{2}}N^{-k}+CN^{-(k+1)}\right) \|\xi\|_{NIPG},\end{split} \tag{41}\] where the following estimate holds. 
More specifically, \[\begin{split}&|\varepsilon\sum_{j=1}^{N/2}[\eta(x_{j})]\{\xi^{ \prime}(x_{j})\}|\\ &\leq|\varepsilon\sum_{j=1}^{N/2-1}[\eta(x_{j})]\{\xi^{\prime}(x _{j})\}|+|\varepsilon[\eta(x_{N/2})]\{\xi^{\prime}(x_{N/2})\}|\\ &\leq C\varepsilon\sum_{j=1}^{N/2-1}\|\eta\|_{L^{\infty}(I_{j} \cup I_{j+1})}\|\xi^{\prime}\|_{L^{\infty}(I_{j}\cup I_{j+1})}+C\varepsilon\| \eta\|_{L^{\infty}(I_{N/2}\cup I_{N/2+1})}\|\xi^{\prime}\|_{L^{\infty}(I_{N/2} \cup I_{N/2+1})}\\ &\leq C\varepsilon\|\eta\|_{L^{\infty}(I_{j}\cup I_{j+1})}N^{ \frac{1}{2}}\sum_{j=1}^{N/2-1}\|\xi^{\prime}\|_{I_{j}\cup I_{j+1}}+C\varepsilon \|\eta\|_{L^{\infty}(I_{N/2}\cup I_{N/2+1})}\varepsilon^{-\frac{1}{2}}\|\xi^{ \prime}\|_{I_{N/2}\cup I_{N/2+1}}\\ &\leq C\left(\varepsilon^{\frac{1}{2}}N^{-k}+N^{-(k+1)}\right) \|\xi\|_{NIPG},\end{split}\] where (11) and (12) have been used. Then, divide \(\Gamma_{4}\) into the following two parts: \[\sum_{j=0}^{N}\mu(x_{j})[\eta(x_{j})][\xi(x_{j})]=\sum_{j=0}^{N/2}\mu(x_{j})[ \eta(x_{j})][\xi(x_{j})]+\sum_{j=N/2+1}^{N}\mu(x_{j})[\eta(x_{j})][\xi(x_{j})].\] According to (15) and Remark 1, there is \([\eta(x_{j})]=0,j=N/2+1,\cdots,N\). That is to say, we just analyze the first item. Then from (26), \[\begin{split}&|\sum_{j=0}^{N/2}\mu(x_{j})[\eta(x_{j})][\xi(x_{j})]| \\ &\leq\left(\sum_{j=0}^{N/2}\mu(x_{j})[\eta(x_{j})]^{2}\right)^{ \frac{1}{2}}\left(\sum_{j=0}^{N/2}\mu(x_{j})[\xi(x_{j})]^{2}\right)^{\frac{1}{2} }\\ &\leq C\left(\mu(x_{0})\|\eta\|_{L^{\infty}(I_{1})}^{2}+\sum_{j=1 }^{N/2}\mu(x_{j})\|\eta\|_{L^{\infty}(I_{j}\cup I_{j+1})}^{2}\right)^{\frac{1}{ 2}}\|\xi\|_{NIPG}\\ &\leq C\left(N^{-2(k+1)}+N^{-(2k+1)}\right)^{\frac{1}{2}}\|\xi\|_ {NIPG}\\ &\leq CN^{-(k+\frac{1}{2})}\|\xi\|_{NIPG}.\end{split} \tag{42}\] Now we analyze \(\Gamma_{5}\) and \(\Gamma_{6}\), which are also divided into two parts \(1\leq j\leq N/2+1\) and \(N/2+2\leq j\leq N\). For \(1\leq j\leq N/2+1\), through Remark 1, (17), (18), and assume that \(b(x_{j-\frac{1}{2}})\) is the value of \(b(x)\) at the midpoint \(x_{j-\frac{1}{2}}\) in the interval \(I_{j}\), then \[\begin{split}&-\sum_{j=1}^{N/2+1}\int_{I_{j}}b(x)\eta\xi^{\prime} \mathrm{d}x-\sum_{j=1}^{N/2+1}b(x_{j})[\xi(x_{j})]\eta(x_{j}^{-})\\ &=-\sum_{j=1}^{N/2+1}\int_{I_{j}}\left(b(x)-b(x_{j-\frac{1}{2}}) \right)\eta\xi^{\prime}\mathrm{d}x-\sum_{j=1}^{N/2+1}\int_{I_{j}}b(x_{j-\frac{1 }{2}})\eta\xi^{\prime}\mathrm{d}x-\sum_{j=1}^{N/2+1}b(x_{j})\eta(x_{j}^{-})[ \xi(x_{j})]\\ &=-\sum_{j=1}^{N/2+1}\int_{I_{j}}\left(b(x)-b(x_{j-\frac{1}{2}}) \right)\eta\xi^{\prime}\mathrm{d}x.\end{split}\] According to the mean value theorem, there is \(\xi\) between \(x_{j-\frac{1}{2}}\) and \(x\) to satisfy \[b(x)-b(x_{j-\frac{1}{2}})=b^{\prime}(\xi)(x-x_{j-\frac{1}{2}}).\] Note that in this paper \(b(x)\) is a smooth function. 
From the inverse inequality, (26) and the Cauchy Schwartz inequality, we have \[|-\sum_{j=1}^{N/2+1}\int_{I_{j}}\left(b(x)-b(x_{j-\frac{1}{2}})\right) \eta\xi^{\prime}\mathrm{d}x|=|-\sum_{j=1}^{N/2+1}\int_{I_{j}}b^{\prime}(\xi)(x- x_{j-\frac{1}{2}})\eta\xi^{\prime}\mathrm{d}x|\] \[\leq C\sum_{j=1}^{N/2+1}h_{j}\|\eta\|_{L^{\infty}(I_{j})}\|\xi^{ \prime}\|_{L^{1}(I_{j})}\leq C\sum_{j=1}^{N/2+1}h_{j}\|\eta\|_{L^{\infty}(I_{j })}h_{j}^{-\frac{1}{2}}\|\xi\|_{I_{j}}\] \[\leq C\sum_{j=1}^{N/2+1}N^{-\frac{1}{2}}\|\eta\|_{L^{\infty}(I_{j })}\|\xi\|_{I_{j}}\] \[\leq CN^{-(k+\frac{3}{2})}\left(\sum_{j=1}^{N/2+1}1^{2}\right)^{ \frac{1}{2}}\left(\sum_{j=1}^{N/2+1}\|\xi\|_{I_{j}}^{2}\right)^{\frac{1}{2}}\] \[\leq CN^{-(k+1)}\|\xi\|_{NIPG}.\] For \(N/2+2\leq j\leq N\), we need to consider the following formula, \[-\sum_{j=N/2+2}^{N}\int_{I_{j}}b(x)\eta\xi^{\prime}\mathrm{d}x- \sum_{j=N/2+2}^{N}b(x_{j})\eta(x_{j}^{-})[\xi(x_{j})]\] \[=-\sum_{j=N/2+2}^{N}\int_{I_{j}}b(x)(S-L_{k}S)\xi^{\prime}\mathrm{ d}x-\sum_{j=N/2+2}^{N}\int_{I_{j}}b(x)(E-L_{k}E)\xi^{\prime}\mathrm{d}x\] \[-\sum_{j=N/2+2}^{N}b(x_{j})\eta(x_{j}^{-})[\xi(x_{j})].\] From the Holder inequality and the inverse inequality, \[|-\sum_{j=N/2+2}^{N}\int_{I_{j}}b(x)(S-L_{k}S)\xi^{\prime}\mathrm{ d}x|\leq C\sum_{j=N/2+2}^{N}\|S-L_{k}S\|_{L^{\infty}(I_{j})}\|\xi^{\prime}\|_{L^{1} (I_{j})}\] \[\leq C\|S-L_{k}S\|_{L^{\infty}(I_{j})}\sum_{j=N/2+2}^{N}h_{j}^{ \frac{1}{2}}\|\xi^{\prime}\|_{I_{j}}\] \[\leq C\varepsilon^{k+1}\left(\sum_{j=N/2+2}^{N}1^{2}\right)^{ \frac{1}{2}}\left(\sum_{j=N/2+2}^{N}\varepsilon\|\xi^{\prime}\|_{I_{j}}^{2} \right)^{\frac{1}{2}}\] \[\leq C\varepsilon^{k+1}N^{\frac{1}{2}}\|\xi\|_{NIPG}.\] Recall \(\sigma\geq k+1\), then the inverse inequality, (14) and (21) yield \[|-\sum_{j=N/2+2}^{N}\int_{I_{j}}b(x)(E-L_{k}E)\xi^{\prime}\mathrm{d }x|\leq C\sum_{j=N/2+2}^{N}\|E-L_{k}E\|_{L^{\infty}(I_{j})}\|\xi^{\prime}\|_{L^{ 1}(I_{j})}\] \[\leq C\sum_{j=N/2+2}^{N}h_{j}^{k+1}\|E^{(k+1)}\|_{L^{\infty}(I_{j} )}h_{j}^{\frac{1}{2}}\|\xi^{\prime}\|_{I_{j}}\] \[\leq C\sum_{j=N/2+2}^{N}h_{j}^{k+1}e^{-\alpha(1-x_{j})/\varepsilon }\varepsilon^{-(k+1)}h_{j}^{\frac{1}{2}}\|\xi^{\prime}\|_{I_{j}}\] \[\leq C\sum_{j=N/2+2}^{N}N^{-(k+1)}\varepsilon^{\frac{1}{2}}\|\xi^ {\prime}\|_{I_{j}}\] \[\leq CN^{-(k+1)}\left(\sum_{j=N/2+2}^{N}1^{2}\right)^{\frac{1}{2 }}\left(\sum_{j=N/2+2}^{N}\varepsilon\|\xi^{\prime}\|_{I_{j}}^{2}\right)^{ \frac{1}{2}}\] \[\leq CN^{-(k+\frac{1}{2})}\|\xi\|_{NIPG}.\] Besides, using (26) and recall that the values of \(\mu(x_{j})\) (6), \[|-\sum_{j=N/2+2}^{N}b(x_{j})[\xi(x_{j})]\eta(x_{j}^{-})|\] \[\leq C\left(\sum_{j=N/2+2}^{N}\mu^{-1}(x_{j})\eta(x_{j}^{-})^{2} \right)^{\frac{1}{2}}\left(\sum_{j=N/2+2}^{N}\mu(x_{j})[\xi(x_{j})]^{2}\right) ^{\frac{1}{2}}\] \[\leq C\left(\sum_{j=N/2+2}^{N}\mu^{-1}(x_{j})\|\eta\|_{L^{\infty} (I_{j})}^{2}\right)^{\frac{1}{2}}\|\xi\|_{NIPG}\] \[\leq CN^{-(k+\frac{3}{2})}\|\xi\|_{NIPG}.\] Therefore, we derive \[\Gamma_{5}+\Gamma_{6}\leq CN^{-(k+\frac{1}{2})}\|\xi\|_{NIPG} \tag{43}\] without any difficulties. For \(\Gamma_{7}\), from Holder inequalities and (23), we have \[\Gamma_{7}\leq C\|\eta\|_{[0,1]}\|\xi\|_{NIPG}\leq CN^{-(k+1)}\|\xi\|_{NIPG}. \tag{44}\] Finally, according to (39), (40), (41), (42), (43) and (44), one has \[\|\xi\|_{NIPG}^{2} \leq\Gamma_{1}+\Gamma_{2}+\Gamma_{3}+\Gamma_{4}+\Gamma_{5}+\Gamma_ {6}+\Gamma_{7}\] \[\leq CN^{-(k+\frac{1}{2})}\|\xi\|_{NIPG},\] which implies the following estimate holds true, that is \[\|\Pi u-u_{N}\|_{NIPG}\leq CN^{-(k+\frac{1}{2})}. 
\tag{45}\] Now we will present the main conclusion of this paper. **Theorem 3**.: _Suppose that Assumption 1 holds true and \(\mu(x_{j})\) is defined as (6). Then on Bakhvalov-type mesh (4) with \(\sigma\geq k+1\), we have_ \[\|L_{k}u-u_{N}\|_{NIPG}+\|\Pi u-u_{N}\|_{NIPG}\leq CN^{-(k+\frac{1}{2})},\] _where \(\Pi u\) is the interpolation defined as (15), and \(u_{N}\) is the solution of (5)._ Proof.: From the triangle inequality, we have \[\|L_{k}u-u_{N}\|_{NIPG}\leq\|L_{k}u-\Pi u\|_{NIPG}+\|\Pi u-u_{N}\|_{NIPG}.\] According to (45), we only need to estimate the bound of \(\|L_{k}u-\Pi u\|_{NIPG}\). By the definition of \(\Pi u\) (15) and the triangle inequality, \[\|L_{k}u-\Pi u\|_{NIPG} =\|L_{k}u-L_{k}u\|_{NIPG,[x_{N/2+1},1]}+\|L_{k}u-P_{h}u\|_{NIPG,[0,x_{N/2+1}]}\] \[=\|L_{k}u-P_{h}u\|_{NIPG,[0,x_{N/2+1}]}\] \[\leq\|L_{k}u-u\|_{NIPG,[0,x_{N/2+1}]}+\|u-P_{h}u\|_{NIPG,[0,x_{N/2+1}]}.\] Therefore from (28) and some direct calculations, we obtain \[\|L_{k}u-P_{h}u\|_{NIPG,[0,x_{N/2+1}]}\leq CN^{-(k+\frac{1}{2})}.\] Note that \(\mu(x_{j})=1,j=0,1,\cdots,N/2+1\). So far, we have completed the proof. **Remark 2**.: _The reason why we choose to use Gauss Radau interpolation in \([0,x_{N/2+1}]\) is that the convergence analysis of the convection term in \(I_{N/2+1}\) cannot be carried out using the standard Lagrange interpolation. In short, on \([x_{N/2},x_{N/2+1}]\) we cannot extract a factor \(\varepsilon^{\frac{1}{2}}\) that would allow us to use \(\varepsilon^{\frac{1}{2}}\|\xi^{\prime}\|_{I_{N/2+1}}\leq\|\xi\|_{NIPG}\). More specifically,_ \[\int_{x_{N/2}}^{x_{N/2+1}}b(x)(E-E_{I})\xi^{\prime}\mathrm{d}x \leq C\|E-E_{I}\|_{I_{N/2+1}}\|\xi^{\prime}\|_{I_{N/2+1}}\leq CN^{-(\sigma+\frac{1}{2})}\|\xi^{\prime}\|_{I_{N/2+1}}\] \[\leq C\varepsilon^{-\frac{1}{2}}N^{-(\sigma+\frac{1}{2})}\|\xi\|_{NIPG},\] _where \(E_{I}\) represents the standard Lagrange interpolation of \(E\). This difficulty can be easily handled by Gauss Radau interpolation._ **Theorem 4**.: _Let \(\mu(x_{j})\) be defined as in (6) and let Assumption 1 hold true. Then on Bakhvalov-type mesh (4) with \(\sigma\geq k+1\), we have_ \[\|u-u_{N}\|_{NIPG}\leq CN^{-k},\] _where \(u\) is the exact solution of (1), and \(u_{N}\) is the solution of (5)._ Proof.: Combining Theorem 2 and (45), we can draw this conclusion directly. ## 5 Numerical experiment In order to verify the theoretical conclusion about supercloseness, we consider the following test problem, \[\begin{cases}-\varepsilon u^{\prime\prime}(x)+(3-x)u^{\prime}(x)+u(x)=f(x), \quad x\in\Omega:=(0,1),\\ u(0)=u(1)=0.\end{cases}\] Here \(f(x)\) is chosen such that \[u(x)=x-x\cdot e^{-2(1-x)/\varepsilon}\] is the exact solution of the test problem above. In our numerical experiment, we first consider \(\varepsilon=10^{-5},\cdots,10^{-9},k=1,2\) and \(N=8,\cdots,1024\). On the Bakhvalov-type mesh (4), we set \(\sigma=k+1\), \(\alpha=2\) and \[\mu(x_{j})=\begin{cases}1,\quad j=0,1,2,\cdots,N/2,\\ N^{2},\quad j=N/2+1,\cdots,N.\end{cases}\] Now the corresponding convergence rate is defined as \[r_{N}=\frac{\ln e_{N}-\ln e_{2N}}{\ln 2},\] where for a particular \(\varepsilon\), \(e_{N}=\|L_{k}u-u_{N}\|_{NIPG}\) is the calculation error related to the mesh parameter \(N\). Below, we present the following tables, which imply Theorem 3 is correct. In addition, we present some numerical results when \(\varepsilon\geq CN^{-1}\). For this purpose, we consider \(\varepsilon=10^{-1},\cdots,10^{-4},k=1,2,3\) and \(N=8,\cdots,1024\). On the Bakhvalov-type mesh (4), with \(\sigma=k+1\) and \(\alpha=2\), the following tables can be obtained. 
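As a small illustration of how the reported convergence rates \(r_{N}\) are obtained from the computed errors, the following Python sketch (our own; the error values listed are placeholders, not data from the paper) applies the formula above to a sequence of errors \(e_{N}\) for successively doubled \(N\).

```python
import math

def convergence_rates(errors):
    """Given errors e_N for successively doubled N (N, 2N, 4N, ...),
    return the observed rates r_N = (ln e_N - ln e_{2N}) / ln 2."""
    return [
        (math.log(e_n) - math.log(e_2n)) / math.log(2.0)
        for e_n, e_2n in zip(errors, errors[1:])
    ]

# Placeholder error values for N = 8, 16, 32, 64 (illustrative only):
errors = [2.1e-2, 7.6e-3, 2.7e-3, 9.6e-4]
print(convergence_rates(errors))   # values near 1.5 would match k + 1/2 for k = 1
```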
From Tables 1-6, we find that as \(k\) and the mesh parameter \(N\) increase, or as the perturbation parameter \(\varepsilon\) decreases, the numerical results might become unstable. This is because these changes may increase the condition number of the linear system, thus increasing the difficulty of solving it. Therefore, for the application of high-order numerical methods, a new iterative solver should be developed to solve these ill-conditioned linear systems.
2307.10296
Towards Automated Semantic Segmentation in Mammography Images
Mammography images are widely used to detect non-palpable breast lesions or nodules, preventing cancer and providing the opportunity to plan interventions when necessary. The identification of some structures of interest is essential to make a diagnosis and evaluate image adequacy. Thus, computer-aided detection systems can be helpful in assisting medical interpretation by automatically segmenting these landmark structures. In this paper, we propose a deep learning-based framework for the segmentation of the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue on standard-view mammography images. We introduce a large private segmentation dataset and extensive experiments considering different deep-learning model architectures. Our experiments demonstrate accurate segmentation performance on varied and challenging cases, showing that this framework can be integrated into clinical practice.
Cesar A. Sierra-Franco, Jan Hurtado, Victor de A. Thomaz, Leonardo C. da Cruz, Santiago V. Silva, Alberto B. Raposo
2023-07-18T15:04:42Z
http://arxiv.org/abs/2307.10296v1
# Towards Automated Semantic Segmentation in Mammography Images ###### Abstract Mammography images are widely used to detect non-palpable breast lesions or nodules, preventing cancer and providing the opportunity to plan interventions when necessary. The identification of some structures of interest is essential to make a diagnosis and evaluate image adequacy. Thus, computer-aided detection systems can be helpful in assisting medical interpretation by automatically segmenting these landmark structures. In this paper, we propose a deep learning-based framework for the segmentation of the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue on standard-view mammography images. We introduce a large private segmentation dataset and extensive experiments considering different deep-learning model architectures. Our experiments demonstrate accurate segmentation performance on varied and challenging cases, showing that this framework can be integrated into clinical practice. mammography, semantic segmentation, deep learning, dataset ## I Introduction Mammography is a type of exam based on X-rays, whose main objective is to evaluate the breast tissue. This type of exam is highly recommended to prevent breast cancer because it is useful for visually identifying non-palpable lesions or nodules. Generally, mammography exams include two types of view, i.e., Medio-Lateral Oblique (MLO) and Cranio-Caudal (CC), applied to both breasts. These modalities present top-bottom and side views of the breast, allowing a multi-perspective simultaneous analysis. Both views capture a set of anatomical structures of interest that are useful to locate abnormalities and evaluate image acquisition adequacy. The reference structures include but are not limited to the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue. The automatic segmentation of these structures can assist the medical interpretation or help less experienced operators to understand the image content. In this paper, we introduce a study for the application of deep learning models on the mammography image segmentation task. More precisely, we focus on the segmentation of the structures of interest mentioned above, considering MLO and CC views. We present a large private dataset that was created for this purpose and experiments with different segmentation models. Our experiments show that it is possible to automate the segmentation task on mammography images with acceptable performance. The rest of this paper is structured as follows. Section II explains some related work for mammography image segmentation. Section III introduces the dataset used for the segmentation task. Section IV shows our experiments and results using deep learning models for segmentation. Finally, Section V presents our conclusion and future work. ## II Related work Although the segmentation task on mammography images is typically related to the segmentation of abnormalities [1], such as masses or nodules, we focus on those methods that estimate landmark structures useful to define the spatial distribution of the breast tissues. The pectoral muscle is an important landmark structure used in the MLO view assessment to define the risk of abnormalities or the adequacy of the image. This structure is usually presented in the corner of the image as a triangular shape. 
Its segmentation is challenging because it can present varied shapes due to multiple anatomical conditions, the occlusion caused by the fibroglandular tissue, the inclusion of the minor pectoral muscle, and the inclusion of skin folds, among others. Several methods were proposed in the literature considering conventional signal processing and statistical analysis [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Rampun et al. propose a multi-step method that uses a deep learning model for pectoral muscle boundary segmentation and post-processing steps to obtain an accurate delimitation [13]. Similarly, Soleimani et al. propose a deep learning model to segment the pectoral muscle boundary and a graph-based analysis to improve it [14]. Instead of segmenting the boundary, Ali et al. present full pectoral muscle shape segmentation using a U-Net-based deep learning model [15]. Also focusing on the segmentation of the full shape of the pectoral muscle, Guo et al. propose a two-step method that uses a U-Net to estimate pectoral muscle confident regions and a GAN to estimate the final pectoral muscle shape [16]. Rubio and Montiel present a comparative study considering multiple deep learning models and metrics for pectoral muscle segmentation [17]. In addition to the pectoral muscle, they also consider the breast shape for segmentation. Yu et al. propose a novel deep learning model that includes an attention mechanism to achieve better results than simple encoder-decoder models. Unlike in the MLO view, the pectoral muscle in the CC view is not always present, and its recognition is more challenging. The delimitation of the pectoral muscle in the CC view is essential because it serves as a reference for the posterior limit of the breast tissues. However, its automated segmentation has not received enough attention in the literature. Ge et al. propose a shape-guided region-growing method that uses ellipse fitting to approximate the pectoral muscle shape [18]. This approach is redesigned and improved in [19] by using Markov random fields. The nipple is another important landmark structure because it is useful to register multiple views or modalities. This allows the operator to match regions of interest and perform anatomical measurements. Some methods rely on shape and texture analysis over different regions of the breast boundary [20, 21, 22, 23]. The methods proposed in [24] and [25] are based on the assumption that the fibroglandular tissue converges at the nipple position. Thus, the authors designed geometric descriptors to find the optimal convergence point. Casti et al. propose a Hessian-based method that considers geometric descriptors and constraints to define the nipple position [26]. Jiang et al. propose a random forest classifier to detect subtle nipples using quantitative radiomic features to define regions of interest [27]. Lin et al. propose a deep learning classification model applied to a set of candidate patches extracted from the mammography image [28]. The region that presents more intersections of patches classified as possible nipples is selected as the nipple position. These methods focus on defining a specific nipple position instead of segmenting it. The fibroglandular tissue is a risk region that requires a special focus during the medical evaluation. Depending on the patient's anatomy, this tissue can be dense or scattered, where higher density presents a higher risk. 
Several methods were proposed in the literature to segment dense fibroglandular tissue regions [29], considering handcrafted [30, 31, 32, 33] and data-driven [34, 35, 36, 37, 38] models. Although dense regions are the most critical, clinical experts should also take care of scattered areas where abnormalities can be found. Thus, segmenting dense and scattered regions is important for the spatial description of the breast. Other methods integrate the segmentation of different landmark structures in a single solution. Tiryaki et al. present some experiments using multiple U-Net-based models for the segmentation of the pectoral muscle, dense fibroglandular tissue regions, and fatty tissues [39]. Considering these structures and adding the nipple, Dubrovina et al. introduce a novel deep learning-based framework for the segmentation task [40]. Using multiple deep learning models, Bou presents segmentation results considering more granular structures, such as vessels, calcifications, and skin, among others [41]. All of these methods consider small datasets, making their evaluations less robust and less reliable for consideration in a real-world application. Also, the first two focus on the MLO view only, as do most of the segmentation methods described in this section. This paper presents a method for the integrated segmentation of the main landmark structures, i.e., the pectoral muscle, the nipple, the fibroglandular tissue, and the fatty tissue. Unlike most segmentation methods, we focus on both standard views, MLO and CC. Thus, we are incorporating novel solutions for the segmentation of the pectoral muscle and the nipple on CC view images. In the case of fibroglandular tissue segmentation, our method considers dense and scattered areas as regions of interest, making it useful for spatial description instead of density analysis. Also, to the best of our knowledge, we are introducing the largest dataset for the segmentation of multiple landmark structures in mammography images. The latter allows us to show robust experiments using multiple deep-learning models. ## III Dataset ### _Raw data_ To construct a mammography image segmentation dataset, we collected a set of 2581 mammography examinations performed in a specialized hospital using different General Electric equipment. These examinations include 5213 MLO view images and 5137 CC view images in DICOM format, where each examination can repeat or lack any of the views for any of the patient's breasts. All the images were acquired using full-field digital mammography technology and present pixel spacing equal to 0.1mm or 0.094mm. This raw data does not include images presenting breast implants, complex surgeries, or image artifacts that complicate the visualization of breast tissues. Some cases with abnormalities are included but without a corresponding clinical evaluation and categorization. ### _Annotation guidelines_ We consider four landmark structures for the segmentation problem, i.e., nipple, pectoral muscle, fibroglandular tissue, and fatty tissue. Because we need a consistent segmentation dataset, we defined a set of guidelines for the delimitation of these structures, which were discussed with clinical experts. For the nipple structure, we focus on the external and internal nipple tissues, avoiding the inclusion of the areola. If the nipple location is not clear, we can consider the fibroglandular tissue convergence region to guide a deeper assessment. 
The external limit can be easily differentiated from the image background in cases where the nipple is in profile and is not inverted or flat. For inverted and flat in-profile nipples, we define an external limit parallel to the breast contour. In both scenarios, the internal limit is defined at the intersection with the milk ducts. The delimitation of nipples that are not in profile presents a higher complexity. In this case, we delineate the bright region generated by the nipple overlapping projection. This region is usually presented as a clear rounded shape, but it can be confused with fibroglandular tissue or abnormalities, needing clinical expert assistance in some cases. The same methodology is applied in both standard views. The pectoral muscle in the MLO view usually presents a bright triangular shape with noticeable muscle texture and a certain curvature, where the boundary with the fatty tissue is mostly clear. However, in cases where the fibroglandular tissue is dense, the bottom region of the pectoral muscle can be occluded, making its delineation difficult. Thus, we delineate the clear boundary and its corresponding projection on the diffuse regions. In the CC view, the pectoral muscle is not always present and can appear in different sizes. We delineate this structure if the shape is visible by manipulating the image contrast and if it represents a considerable area within the image. We avoid the inclusion of very thin structures that can be confused with an image boundary artifact. This is the most challenging structure to segment, requiring intensive clinical expert support. Our fibroglandular tissue delineation is not limited to the segmentation of dense regions; we also include scattered and fatty regions where we can find agglomerations of ducts or glands. Thus, our structure represents a region of interest for the fibroglandular tissue instead of an accurate selection of tissues. We consider as fatty tissue all breast content that is outside the structures defined previously. Its segmentation can be simplified by just delineating the breast boundary, including a portion of the abdomen, and then subtracting the other structure shapes automatically. ### _Annotation tool_ We implemented a sketch-based contour drawing annotation tool to delineate the structures of interest. This tool allows the user to draw a closed contour for a given structure and edit it if necessary, considering an intuitive interaction for deformation. The user can also manipulate the image by adjusting the standard window/level parameters to obtain a clearer visualization and by zooming and translating it to focus on the target structure. The contours are saved as dense high-resolution polygons defined in the image space. The annotation tool allows an automatic initialization of the breast contour for the fatty tissue to speed up the annotation process. This breast contour initialization is computed by segmenting the image using the Otsu binary thresholding [42] and then computing the external contour of the largest connected component. For the other structures, the tool includes deep learning-based initializations performed by two segmentation models trained on partial versions of the annotated dataset. These models follow the same idea presented in Section IV, considering each standard view as an independent domain. Because these initializations are predictions, they should be edited to match the appropriate boundaries. 
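The breast contour initialization described above (Otsu thresholding followed by extraction of the largest connected component) can be sketched in a few lines of Python. This is our own illustrative reconstruction using scikit-image, not the authors' implementation, and the function and variable names are ours.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, find_contours

def initial_breast_contour(image):
    """Rough breast-contour initialization: Otsu binarization,
    keep the largest connected component, return its external contour."""
    mask = image > threshold_otsu(image)
    labels = label(mask)
    # Select the largest foreground component (ignore label 0 = background).
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0
    breast = labels == np.argmax(sizes)
    # External contour as a polygon in image coordinates.
    contours = find_contours(breast.astype(float), level=0.5)
    return max(contours, key=len)

# Usage sketch: `image` would be the normalized mammography pixel array,
# e.g. contour = initial_breast_contour(image)
```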
### _Annotation process_

A team of 8 annotators was trained by two clinical experts in the identification and delimitation of the structures of interest. These clinical experts resolved the annotators' doubts about challenging cases during the entire annotation process. Unlike the identification of abnormalities, which requires high expertise, the structures of interest for our problem are relatively easy to identify in most cases. The raw data were split and assigned to the annotators such that each image was annotated by a single person. The entire annotation process took approximately one year, including multiple refinements based on the feedback of the clinical experts. At the end of the annotation process, the four structures of interest were annotated on all 5213 MLO view images. The nipple, the fibroglandular tissue, and the fatty tissue were annotated on all 5137 CC view images. Because the pectoral muscle is not always present in this view, this structure was annotated on 2952 (57%) of these images.

### _Data stratification_

Following the standard data splitting process for machine learning methods, and considering independent sets for each view, we randomly divide the annotated images into three disjoint subsets: train, validation, and test. This splitting process avoids including samples of the same examination in different subsets and tries to balance the samples with respect to fibroglandular tissue density. The density-based balancing is possible because some of the examinations were previously categorized by clinical experts into the four standard density classes: (A) almost entirely fatty, (B) scattered fibroglandular tissue, (C) heterogeneously dense, and (D) extremely dense. However, the majority of examinations do not have this density annotation. The distribution of the subsets for each view is presented in Table I, considering the density annotations. Note that the test subset is balanced with respect to the density classes, making it a good benchmark for evaluation.

## IV Mammography deep learning semantic segmentation

### _Overview_

Figure 1 shows the mammography image segmentation pipeline proposed in this paper. This pipeline focuses on the segmentation of the structures of interest and the image background. The proposed solution employs a deep learning-based approach using semantic segmentation neural networks. The semantic segmentation process involves predicting a label for each pixel in the image, assigning it to a specific class. This process establishes meaningful regions within the image, associating each pixel with semantically relevant information. In our solution, we consider five region classes: background, nipple, pectoral muscle, fibroglandular tissue, and fatty tissue. The deep learning model consumes a pre-processed mammography image and generates a set of probability maps that define the degree of correspondence of each pixel to a given class.

### _Evaluation metric_

To assess the segmentation accuracy of the target structures in the resulting trained models, we use the Intersection over Union (IoU) score, a widely used metric in segmentation tasks. The IoU measures the overlap between the model-predicted structures and the ground-truth annotations. This score is calculated by dividing the intersection area between the predicted and ground-truth regions by the area of their union.
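A minimal sketch of this per-structure IoU computation on binary masks is shown below; treating the structure-absent-in-both case as a perfect score is an assumption of the sketch, not a choice stated in the paper.

```python
import numpy as np

def iou_score(pred_mask, gt_mask):
    """Intersection over Union between a predicted and a ground-truth mask."""
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    union = np.logical_or(pred_mask, gt_mask).sum()
    if union == 0:
        return 1.0  # structure absent in both masks (assumed convention)
    return np.logical_and(pred_mask, gt_mask).sum() / union
```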
This ratio yields a value between 0 and 1, where a score of 1 indicates a perfect overlap between the predicted and ground-truth structures, while a score of 0 indicates no overlap at all. Higher IoU scores indicate that the model's predictions align more closely with the target structures.

### _Pre-processing_

To create ground-truth probability maps for each structure, we first rasterize the annotated polygons into a single multi-class label map using the following operations: (1) fill the full label map with the background class label; (2) rasterize the fatty tissue polygon, i.e., the breast contour polygon, on the previous label map; (3) rasterize the fibroglandular tissue polygon on the previous label map; (4) rasterize the pectoral muscle polygon on the previous label map; (5) rasterize the nipple polygon on the previous label map. Then, this label map is converted to the probability maps by using one-hot encoding.

The mammography image is represented in the DICOM file as an integer grayscale image with values in the range \([0,4095]\). To prepare this image for consumption by the deep learning model, we apply the following operations (see the code sketch further below): (1) image normalization using percentiles 2 and 98 as minimum and maximum values; (2) adaptive histogram equalization using the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm [43] with a kernel size of \(1/8\) times the height and width of the image; (3) image intensity rescaling to the range \([0,255]\). To reduce the domain space, we flip all the left breast images and their corresponding probability maps horizontally, so that all images are treated as right breast images.

### _Neural network architectures and results_

Due to the distinct characteristics of the MLO and CC views, we trained separate models for each one. In our training experiments, we test several architectures for semantic segmentation, including Unet, FPN, Linknet, and PSPNet. While all these architectures serve the same purpose, they employ different strategies. Based on extensive hyper-parameter optimization experiments, we selected the following training configuration. The model input is a single-channel \(384\times 384\) image with intensity values in the range \([0,1]\); thus, the pre-processed image is rescaled and resized to the corresponding range and size. The model output is a tensor that represents the probability maps of the structures of interest, including the background. We use softmax as the activation function in the final layer so that the model predicts a single class for each pixel, i.e., a multi-class problem. Further, because the segmentation models allow the integration of a custom convolutional backbone, we use EfficientNet-B3 as the feature extractor. During training, we use the Jaccard loss function with a learning rate of \(10^{-3}\) and a batch size of \(4\). We apply an early stopping scheme with 65 as the maximum number of epochs and 20 as the patience, selecting the weights with the best validation loss. Tables II and III present the mean IoU results per structure for the CC and MLO views, respectively. From the tables, it is possible to observe good IoU scores, above 0.7, for all the segmented structures on the test set. In this mammography segmentation context, the PSPNet network presented the worst metrics, while the FPN showed the best results, improving pectoral muscle detection.
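As referenced in the Pre-processing subsection, the following is a minimal sketch of the image preparation steps, assuming scikit-image's CLAHE implementation; the left-breast horizontal flip is left to the caller, and the function name is illustrative.

```python
import numpy as np
from skimage.exposure import equalize_adapthist

def preprocess(dicom_pixels):
    """Percentile normalization, CLAHE with a kernel of 1/8 the image size,
    and rescaling to [0, 255], as described in the Pre-processing subsection."""
    img = dicom_pixels.astype(np.float32)
    lo, hi = np.percentile(img, (2, 98))                         # percentile normalization
    img = np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    h, w = img.shape
    img = equalize_adapthist(img, kernel_size=(h // 8, w // 8))  # CLAHE
    return (img * 255.0).astype(np.uint8)                        # rescale to [0, 255]
```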
Fig. 1: Mammography deep learning segmentation.

### _Visual results_

Figures 2 and 3 illustrate visual results for the models' segmentation predictions. In both figures, column a) shows the pre-processed image, column b) the ground truth, and column c) the model prediction. From the visual results, it is possible to observe the effectiveness of the models' segmentation predictions. Comparing the ground truth with the predictions in column c), we can assess the models' performance in accurately delineating the structures of interest. These visual results offer insights into the models' ability to capture important features and accurately segment the desired mammogram structures. Despite the positive results, visual inspection shows that there is still room for improvement. For instance, in the second row of Figure 2, there is a false positive segmentation of the pectoral muscle on the CC view. The pectoral structure is not always present in this view, and when it is present, it can be difficult to spot even for human specialists. This structure exhibits high variability across cases, which affects its visualization: it sometimes overlaps with the fibroglandular tissue and does not always present a clear pattern as in the MLO view. Another imprecision can be observed in some mammograms with flat or not-in-profile nipples, such as the sample illustrated in the second row of Figure 3. These observations highlight areas for refinement in the models' performance. While the overall results are promising, identifying and addressing such inaccuracies can further enhance the models' segmentation accuracy and overall effectiveness.

## V Conclusion and future work

We presented a deep learning-based solution for the segmentation of structures of interest in mammography images. This solution includes a fully annotated dataset useful for training the models in a supervised manner. The segmentation results show that this method is promising for integration into clinical practice. Although the results are encouraging, there is still room for more accurate segmentations, especially for the challenging structures. Further, this idea can be extended to detecting and segmenting other structures, such as skin folds, vessels, abnormalities, dense fibroglandular tissue regions, and image artifacts, among others.
2309.01637
The Robust F-Statistic as a Test for Weak Instruments
Montiel Olea and Pflueger (2013) proposed the effective F-statistic as a test for weak instruments in terms of the Nagar bias of the two-stage least squares (2SLS) estimator relative to a benchmark worst-case bias. We show that their methodology applies to a class of linear generalized method of moments (GMM) estimators with an associated class of generalized effective F-statistics. The standard nonhomoskedasticity robust F-statistic is a member of this class. The associated GMMf estimator, with the extension f for first-stage, is a novel and unusual estimator as the weight matrix is based on the first-stage residuals. As the robust F-statistic can also be used as a test for underidentification, expressions for the calculation of the weak-instruments critical values in terms of the Nagar bias of the GMMf estimator relative to the benchmark simplify and no simulation methods or Patnaik (1949) distributional approximations are needed. In the grouped-data IV designs of Andrews (2018), where the robust F-statistic is large but the effective F-statistic is small, the GMMf estimator is shown to behave much better in terms of bias than the 2SLS estimator, as expected by the weak-instruments test results.
Frank Windmeijer
2023-09-04T14:42:21Z
http://arxiv.org/abs/2309.01637v1
# The Robust F-Statistic as a Test for Weak Instruments ###### Abstract Montiel Olea and Pflueger (2013) proposed the effective F-statistic as a test for weak instruments in terms of the Nagar bias of the two-stage least squares (2SLS) estimator relative to a benchmark worst-case bias. We show that their methodology applies to a class of linear generalized method of moments (GMM) estimators with an associated class of generalized effective F-statistics. The standard nonhomoskedasticity robust F-statistic is a member of this class. The associated GMMf estimator, with the extension "f" for first-stage, is a novel and unusual estimator as the weight matrix is based on the first-stage residuals. As the robust F-statistic can also be used as a test for underidentification, expressions for the calculation of the weak-instruments critical values in terms of the Nagar bias of the GMMf estimator relative to the benchmark simplify and no simulation methods or Patnaik (1949) distributional approximations are needed. In the grouped-data IV designs of Andrews (2018), where the robust F-statistic is large but the effective F-statistic is small, the GMMf estimator is shown to behave much better in terms of bias than the 2SLS estimator, as expected by the weak-instruments test results. **Keywords:** Instrumental variables, weak instruments, nonhomoskedasticity, robust F-statistic, GMM **JEL Codes:** C12, C26 Introduction It is commonplace to report the first-stage F-statistic to test for weak instruments in linear models with a single endogenous variable, estimated by two-stage least squares (2SLS). This follows the work of Staiger and Stock (1997) and Stock and Yogo (2005), with the latter providing critical values for the first-stage non-robust F-statistic for null hypotheses of weak instruments in terms of bias of the 2SLS estimator relative to that of the OLS estimator and Wald-test size distortions. These weak-instruments critical values for the non-robust F-statistic are valid only under homoskedasticity (i.e. conditional homoskedasticity, no serial correlation and no clustering) of both the first-stage and structural errors, and do not apply to the robust (to nonhomoskedasticity) F-statistic in general designs, see Bun and de Haan (2010), Montiel Olea and Pflueger (2013) and Andrews (2018). In particular, Andrews (2018) found for some cross-sectional heteroskedastic designs that the standard 2SLS confidence intervals had large coverage distortions even for very large values of the robust F-statistic. For example, he found for a high endogeneity design that "the 2SLS confidence set has a 15% coverage distortion even when the mean of the first-stage robust F-statistic is 100,000", Andrews (2018, Supplementary Appendix, p 11). For general nonhomoskedasticity, Montiel Olea and Pflueger (2013) proposed the first-stage effective F-statistic and derived critical values for the null of weak instruments in terms of the Nagar bias of the 2SLS estimator, relative to a benchmark worst-case bias. As shown in Section 4.3.1, the effective F-statistics in the designs of Andrews (2018) do not reject the null of weak instruments, and therefore correctly indicate the poor performance of the 2SLS estimator. In their review paper Andrews, Stock, and Sun (2019, p 729) recommend "that researchers judge instrument strength based on the effective F-statistic of Montiel Olea and Pflueger (2013)". 
The effective F-statistic is specific to the Nagar bias of the 2SLS estimator and the main contribution of this paper is that the methods of Montiel Olea and Pflueger (2013) apply to a wider class of linear generalized method of moments (GMM) estimators resulting in a class of associated generalized effective F-statistics. The nonhomoskedasticity robust F-statistic is a member of this class, and we call its associated GMM estimator the GMMf estimator, with the extension "f" for first-stage. This is because the weight matrix of the GMMf estimator is based on the first-stage residuals, with \(k_{z}\) times the robust F-statistic being the denominator of the GMMf estimator, where \(k_{z}\) is the number of excluded instruments. This is similar to the relationship of the non-robust F-statistic and the 2SLS estimator. Unlike the effective F-statistic, the robust F-statistic can be used for testing \(H_{0}:\pi=0\), where \(\pi\) is the vector of parameters on the excluded instruments in the first-stage linear model specification. Under the null, its limiting distribution is central chi-square, scaled by the degrees of freedom, \(k_{z}\). Under the alternative, its limiting distribution is non-central chi-square. The methods needed for calculating the critical values for the null of weak instruments in terms of the Nagar bias of the GMMf estimator relative to its benchmark worst-case bias are simpler for the robust F-statistic than for the effective F-statistic. For the robust F-statistic no simulation methods or Patnaik (1949) curve-fitting approximation is needed. In that sense, the relationship between the robust F-statistic and the GMMf estimator can be described as being canonical. Section 2 introduces the linear model specification, main assumptions, effective and robust F-statistics and the GMMf estimator. Section 3 then formulates the class of generalized F-statistics for the class of linear GMM estimators and shows that the weak-instruments testing methods developed by Montiel Olea and Pflueger (2013) apply straightforwardly to this class. The section then provides a summary of the Montiel Olea and Pflueger (2013) results. Section 4 shows how the general results and specifications simplify for the robust-F statistic in relation to the Nagar bias of the GMMf estimator. As the Nagar bias is relative to a benchmark worse-case bias, which is estimator specific, we harmonize in Section 4.1 the benchmark bias by considering the worst-case OLS bias as the benchmark, which applies to the class of GMM estimators considered. When the value of the robust F-statistic is large in the Andrews (2018) designs, this then implies that the GMMf estimator does not suffer from a weak-instruments problem, which is indeed confirmed in Section 4.3.1, where we replicate the Monte Carlo analysis of Andrews (2018). The design is the same as a grouped-data one, see Angrist (1991) and the discussion in Angrist and Pischke (2009), where the instruments are mutually exclusive group membership indicators. In the two designs considered, there is in each only one informative group, but the first-stage heteroskedasticity is such that the 2SLS estimator does not utilize this information well, whereas the GMMf estimator gives almost all the weight to the informative groups. The advice for practice is then to report both the effective and robust F-statistics, together with their critical values, and to consider the GMMf estimator in cases where there is a clear discrepancy with a large value for the robust F-statistic. 
We do this in Section 5 for specifications of Stephens and Yang (2014), who study the effect of schooling on wages using data from the 1960-1980 US Censuses of Population. The endogenous variable is years of schooling and the instruments are three indicator variables corresponding to being required to attend seven, eight or nine or more years of schooling. The specifications are estimated by 2SLS and Stephens and Yang (2014) report the robust F-statistic but not the effective F-statistic. We find no large discrepancies between the two measures, and the GMMf estimates are very similar to the 2SLS ones.1 We give some concluding remarks in Section 6. Footnote 1: An extension (beta-version) of the “weakivtest” command of Pflueger and Wang (2015) in Stata, StataCorp. (2023), called “gfweakivtest” for calculating these is available from the author upon request. ## 2 Model, Assumptions and F-Statistics We have a sample \(\left\{y_{i},x_{i},z_{i}\right\}_{i=1}^{n}\), where \(z_{i}\) is a \(k_{z}\)-vector of instrumental variables. We are interested in the effect of \(x\) on \(y\) in a linear model specification, where \(x\) is endogenously determined. We consider the linear structural and first-stage specifications \[y =x\beta+u \tag{1}\] \[x =Z\pi+v_{2}, \tag{2}\] where \(y\), \(x\), \(u\) and \(v_{2}\) are \(n\)-vectors and \(Z\) an \(n\times k_{z}\) matrix. Other exogenous explanatory variables, including the constant have been partialled out. The reduced-form specification for \(y\) is then given by \[y=Z\pi\beta+v_{1}=Z\pi_{y}+v_{1}, \tag{3}\] where \(v_{1}=u+\beta v_{2}\), and \(\pi_{y}=\pi\beta\). Following Montiel Olea and Pflueger (2013), we make the following assumptions. **Assumption 1**.: __ 1. _Weak instruments asymptotics. The vector_ \(\pi\) _is local to zero,_ \[\pi=\pi_{n}=c/\sqrt{n},\] _where_ \(c\) _is a fixed vector_ \(c\in\mathbb{R}^{k_{z}}\) 2. _As_ \(n\rightarrow\infty\)_,_ \[\frac{1}{n}Z^{\prime}Z\overset{p}{\rightarrow}Q_{zz};\] \[\frac{1}{n}\left[v_{1}\,v_{2}\right]^{\prime}\left[v_{1}\,v_{2}\right] \overset{p}{\rightarrow}\Sigma_{v};\] \[\frac{1}{\sqrt{n}}\left(\begin{array}{c}Z^{\prime}v_{1}\\ Z^{\prime}v_{2}\end{array}\right)\overset{d}{\rightarrow}\left(\begin{array}[ ]{c}\psi_{1}\\ \psi_{2}\end{array}\right)\sim N\left(0,W\right),\] _with_ \(Q_{zz}\)_,_ \(\Sigma_{v}\) _and_ \(W\) _finite, positive definite matrices, and_ \[\Sigma_{v}=\left[\begin{array}{cc}\sigma_{1}^{2}&\sigma_{12}\\ \sigma_{12}&\sigma_{2}^{2}\end{array}\right];\] \[W=\left[\begin{array}{cc}W_{1}&W_{12}\\ W_{12}^{\prime}&W_{2}\end{array}\right].\] 3. _There exists a sequence of positive definite estimates_ \(\left\{\widehat{W}_{n}\right\}\)_, such that_ \(\widehat{W}_{n}\overset{p}{\rightarrow}W\) _as_ \(n\rightarrow\infty\)_._ In the remainder, we drop the subscript \(n\) from \(\widehat{W}_{n}\) for ease of exposition, and, commensurate with the partitioning of \(W\), \[\widehat{W}=\left[\begin{array}{cc}\widehat{W}_{1}&\widehat{W}_{12}\\ \widehat{W}_{12}^{\prime}&\widehat{W}_{2}\end{array}\right].\] The two-stage least squares (2SLS) estimator is given by \[\widehat{\beta}_{2sls}=\frac{x^{\prime}P_{Z}y}{x^{\prime}P_{Z}x},\] where \(P_{Z}=Z\left(Z^{\prime}Z\right)^{-1}Z^{\prime}\). The standard nonrobust first-stage F-statistic is \[\widehat{F}=\frac{x^{\prime}P_{Z}x}{k_{z}\widehat{\sigma}_{v_{2}}^{2}},\] where \(\widehat{\sigma}_{v_{2}}^{2}=\widehat{v}_{2}^{\prime}\widehat{v}_{2}/n\), \(\widehat{v}_{2}=\left(I_{n}-P_{Z}\right)x\), where \(I_{n}\) is the identity matrix of order \(n\). 
Note that we refrain throughout from finite sample degrees-of-freedom corrections in the exposition. It follows that we can alternatively express the 2SLS estimator as \[\widehat{\beta}_{2sls}=\frac{\widehat{\pi}^{\prime}\left(V\widehat{a}r\left(\widehat{\pi}\right)\right)^{-1}\widehat{\pi}_{y}}{\widehat{\pi}^{\prime}\left(V\widehat{a}r\left(\widehat{\pi}\right)\right)^{-1}\widehat{\pi}}=\frac{\widehat{\pi}^{\prime}\left(V\widehat{a}r\left(\widehat{\pi}\right)\right)^{-1}\widehat{\pi}_{y}}{k_{z}\widehat{F}}, \tag{4}\] where \(\widehat{\pi}\) and \(\widehat{\pi}_{y}\) are the OLS estimators of \(\pi\) and \(\pi_{y}\) in the first-stage and reduced-form models (2) and (3), and \(V\widehat{a}r\left(\widehat{\pi}\right)=\widehat{\sigma}_{v_{2}}^{2}\left(Z^{\prime}Z\right)^{-1}\) is the non-robust estimator of the variance of \(\widehat{\pi}\). \(\widehat{F}\) can be used as a test for weak instruments in terms of the bias of the 2SLS estimator relative to that of the OLS estimator of \(\beta\), or the size distortion of the Wald test for hypotheses on \(\beta\), Stock and Yogo (2005). The Stock and Yogo critical values are valid only under conditional homoskedasticity of both \(u\) and \(v_{2}\), i.e. \(\mathbb{E}\left[u_{i}^{2}|z_{i}\right]=\sigma_{u}^{2}\) and \(\mathbb{E}\left[v_{2i}^{2}|z_{i}\right]=\sigma_{v_{2}}^{2}\), or \(W=\Sigma_{v}\otimes Q_{zz}\), with \(\Sigma_{v}=\mathbb{E}\left[v_{i}v_{i}^{\prime}\right]\) and \(v_{i}=\left(v_{1i},v_{2i}\right)^{\prime}\). The nonhomoskedasticity-robust F-statistic is given by \[\widehat{F}_{r}=\frac{x^{\prime}Z\widehat{W}_{2}^{-1}Z^{\prime}x}{nk_{z}}\] and is a standard test statistic for testing \(H_{0}:\pi=0\) under general forms of nonhomoskedasticity. But \(\widehat{F}_{r}\) cannot be used as a test for weak instruments in relation to the behaviour of the 2SLS estimator. Andrews (2018) showed in a grouped-data IV design that \(\widehat{F}_{r}\) could take very large values, of the order of \(100,000\), whereas the 2SLS estimator was still poorly behaved in terms of bias and Wald test size. For the 2SLS estimator, Montiel Olea and Pflueger (2013) proposed the effective F-statistic2 Footnote 2: \(\widehat{F}_{\text{eff}}\) can also be used as a test for weak instruments in relation to the Nagar bias of the LIML estimator, but we do not consider this estimator here. \[\widehat{F}_{\text{eff}}=\frac{x^{\prime}P_{Z}x}{\text{tr}\left(\left(\frac{1}{n}Z^{\prime}Z\right)^{-1/2}\widehat{W}_{2}\left(\frac{1}{n}Z^{\prime}Z\right)^{-1/2}\right)}=\frac{x^{\prime}P_{Z}x}{\text{tr}\left(\widehat{W}_{2}\left(\frac{1}{n}Z^{\prime}Z\right)^{-1}\right)}\] and showed that this F-statistic can be used as a test for weak instruments in relation to the Nagar (1959) bias of the 2SLS estimator, relative to a worst-case benchmark. Andrews et al. (2019) advocate the use of \(\widehat{F}_{\text{eff}}\) to gauge instrument strength for the 2SLS estimator. Although this weak-instrument test is related to the bias, the results presented in Andrews et al. (2019, Section 3) for a sample of 106 specifications from papers published in the _American Economic Review_ suggest that the effective F-statistic "may convey useful information about the instrument strength more broadly, since we see that conventional asymptotic approximations appear reasonable in specifications where the effective F-statistic exceeds 10.", Andrews et al. (2019, p 739).
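For concreteness, a small numpy sketch of these quantities (the 2SLS estimator and the nonrobust, robust and effective F-statistics) is given below. It assumes the usual cross-sectional heteroskedasticity-robust estimate \(\widehat{W}_{2}=n^{-1}\sum_{i}\widehat{v}_{2,i}^{2}z_{i}z_{i}^{\prime}\); clustered or HAC estimates would be substituted in the same place, and the function name is illustrative.

```python
import numpy as np

def first_stage_stats(y, x, Z):
    """2SLS estimator and nonrobust, robust and effective F-statistics.

    y, x : (n,) arrays with other exogenous regressors already partialled out
    Z    : (n, k_z) array of excluded instruments
    No degrees-of-freedom corrections, as in the text.
    """
    n, k_z = Z.shape
    ZtZ = Z.T @ Z
    Px = Z @ np.linalg.solve(ZtZ, Z.T @ x)                    # P_Z x
    beta_2sls = (x @ (Z @ np.linalg.solve(ZtZ, Z.T @ y))) / (x @ Px)
    v2 = x - Px                                               # first-stage residuals
    sig2_v2 = v2 @ v2 / n
    F = (x @ Px) / (k_z * sig2_v2)                            # nonrobust F
    W2 = (Z * v2[:, None] ** 2).T @ Z / n                     # heterosk.-robust What_2
    Zx = Z.T @ x
    F_r = (Zx @ np.linalg.solve(W2, Zx)) / (n * k_z)          # robust F
    F_eff = (x @ Px) / np.trace(W2 @ np.linalg.inv(ZtZ / n))  # effective F
    return F, F_r, F_eff, beta_2sls
```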
In the next section, we introduce a class of generalized effective F-statistics, denoted \(\widehat{F}_{\text{geff}}\), associated with a class of linear Generalized Method of Moments (GMM) estimators. We show that the weak-instrument Nagar bias results of Montiel Olea and Pflueger (2013), derived for the effective F-statistic in relation to the 2SLS estimator, applies to this general class. The robust F-statistic is a member of this class and the associated GMM estimator, denoted GMMf, with the extension f for first stage, is defined as \[\widehat{\beta}_{gmmf}=\frac{x^{\prime}Z\widehat{W}_{2}^{-1}Z^{\prime}y}{x^{ \prime}Z\widehat{W}_{2}^{-1}Z^{\prime}x}. \tag{5}\] This is a novel, and unusual GMM estimator, as the weight matrix is based on the first-stage residuals. As we will show and explain below, for the Andrews (2018) design with large values for the robust F-statistic, but small values for the effective F-statistic, this estimator is much better behaved in terms of bias and also inference than the 2SLS estimator. Like the expression of the 2SLS estimator in (4), we can write the GMMf estimator as \[\widehat{\beta}_{gmmf}=\frac{\widehat{\pi}^{\prime}\left(V\widehat{a}r_{r} \left(\widehat{\pi}\right)\right)^{-1}\widehat{\pi}_{y}}{\widehat{\pi}^{ \prime}\left(V\widehat{a}r_{r}\left(\widehat{\pi}\right)\right)^{-1}\widehat{ \pi}}=\frac{\widehat{\pi}^{\prime}\left(V\widehat{a}r_{r}\left(\widehat{\pi} \right)\right)^{-1}\widehat{\pi}_{y}}{k_{z}\widehat{F}_{r}}, \tag{6}\] where \(V\widehat{a}r_{r}\left(\widehat{\pi}\right)=n\left(Z^{\prime}Z\right)^{-1} \widehat{W}_{2}\left(Z^{\prime}Z\right)^{-1}\) is the nonhomoskedasticity-robust estimator of the variance of \(\widehat{\pi}\). ## 3 The Generalized Effective F-Statistic as a Test for Weak Instruments Consider the class of linear Generalized Methods of Moments (GMM) estimators of \(\beta\), given by \[\widehat{\beta}=\frac{x^{\prime}Z\Omega_{n}Z^{\prime}y}{x^{\prime}Z\Omega_{n} Z^{\prime}x}, \tag{7}\] where \(\Omega_{n}\) is a \(k_{z}\times k_{z}\) possibly data dependent weight matrix satisfying the following assumption, **Assumption 2**.: _As \(n\rightarrow\infty\), \(\Omega_{n}\overset{p}{\rightarrow}\Omega\), with \(\Omega\) a finite, full rank matrix._ For ease of exposition, the dependence of \(\widehat{\beta}\) on the choice of \(\Omega_{n}\) has notationally been suppressed, likewise for further expressions below. Let \[W^{*}=\left[\begin{array}{cc}W_{1}^{*}&W_{12}^{*}\\ W_{12}^{*\prime}&W_{2}^{*}\end{array}\right]=\left(I_{2}\otimes\Omega^{1/2} \right)W\left(I_{2}\otimes\Omega^{1/2}\right),\] and \[\widehat{W}^{*}=\left(I_{2}\otimes\Omega_{n}^{1/2}\right)\widehat{W}\left(I_{ 2}\otimes\Omega_{n}^{1/2}\right).\] Then consider the class of generalized effective F-statistics, given by \[\widehat{F}_{\text{geff}}=\frac{x^{\prime}Z\Omega_{n}Z^{\prime}x}{n\text{tr} \left(\widehat{W}_{2}^{*}\right)}=\frac{x^{\prime}Z\Omega_{n}Z^{\prime}x}{n \text{tr}\left(\Omega_{n}^{1/2}\widehat{W}_{2}\Omega_{n}^{1/2}\right)}=\frac{ x^{\prime}Z\Omega_{n}Z^{\prime}x}{n\text{tr}\left(\widehat{W}_{2}\Omega_{n} \right)}. 
\tag{8}\] For the 2SLS estimator we have \(\Omega_{n}=\left(\frac{1}{n}Z^{\prime}Z\right)^{-1}\) and \[\widehat{F}_{\text{geff}}=\frac{x^{\prime}P_{Z}x}{\text{tr}\left(\widehat{W}_ {2}^{*}\right)}=\frac{x^{\prime}P_{Z}x}{\text{tr}\left(\widehat{W}_{2}\left( \frac{1}{n}Z^{\prime}Z\right)^{-1}\right)}=\widehat{F}_{\text{eff}}.\] For the GMMf estimator as defined in (5), we have \(\Omega_{n}=\widehat{W}_{2}^{-1}\), hence \(\widehat{W}_{2}^{*}=I_{k_{z}}\), and \[\widehat{F}_{\text{geff}}=\frac{x^{\prime}Z\widehat{W}_{2}^{-1}Z^{\prime}x}{ nk_{z}}=\widehat{F}_{r}.\] We now show that the Montiel Olea and Pflueger (2013) weak-instruments testing methodology applies to the class of generalized F-statistics in relation to the Nagar bias of the linear GMM estimators. It follows from the first-stage and reduced-form model specifications (2) and (3) and Assumptions 1 and 2 that, as \(n\rightarrow\infty\), \[\frac{1}{\sqrt{n}}\left(\begin{array}{c}\Omega_{n}^{1/2}Z^{ \prime}y\\ \Omega_{n}^{1/2}Z^{\prime}x\end{array}\right) =\left(\begin{array}{c}\Omega_{n}^{1/2}\left(\frac{1}{n}Z^{ \prime}Z\right)c\beta+\frac{1}{\sqrt{n}}\Omega_{n}^{1/2}Z^{\prime}v_{1}\\ \Omega_{n}^{1/2}\left(\frac{1}{n}Z^{\prime}Z\right)c+\frac{1}{\sqrt{n}}\Omega _{n}^{1/2}Z^{\prime}v_{2}\end{array}\right)\] \[\overset{d}{\rightarrow}\left(\begin{array}{c}\gamma_{1}^{*} \\ \gamma_{2}^{*}\end{array}\right):=\left(\begin{array}{c}c^{*}\beta+\Omega^{1/ 2}\psi_{1}\\ c^{*}+\Omega^{1/2}\psi_{2}\end{array}\right)\sim N\left(\left(\begin{array}[] {c}c^{*}\beta\\ c^{*}\end{array}\right),W^{*}\right), \tag{9}\] where \(c^{*}=\Omega^{1/2}Q_{zz}c\). As \(v_{1}=u+\beta v_{2}\), it follows that, as \(n\rightarrow\infty\), \[\frac{1}{\sqrt{n}}\left(\begin{array}{c}\Omega_{n}^{1/2}Z^{\prime}u\\ \Omega_{n}^{1/2}Z^{\prime}\nu_{2}\end{array}\right)\overset{d}{\rightarrow}N \left(0,S^{*}\left(\beta,W^{*}\right)\right),\] with \[S^{*}\left(\beta,W^{*}\right)=\left[\begin{array}{cc}S_{1}^{*}\left(\beta,W^ {*}\right)&S_{12}^{*}\left(\beta,W^{*}\right)\\ S_{12}^{*}\left(\beta,W\right)^{\prime}&W_{2}^{*}\end{array}\right],\] where \[S_{1}^{*}\left(\beta,W^{*}\right) =W_{1}^{*}-\beta\left(W_{12}^{*}+W_{12}^{*\prime}\right)+\beta^{2 }W_{2}^{*}\] \[S_{12}^{*}\left(\beta,W^{*}\right) =W_{12}^{*}-\beta W_{2}^{*}.\] It follows that \[\widehat{\beta}-\beta\overset{d}{\rightarrow}\beta^{*}=\left( \gamma_{2}^{*\prime}\gamma_{2}^{*}\right)^{-1}\gamma_{2}^{*\prime}\left( \gamma_{1}^{*}-\beta\gamma_{2}^{*}\right) \tag{10}\] \[\widehat{F}_{\text{geff}}\overset{d}{\rightarrow}F_{\text{geff}} ^{*}=\gamma_{2}^{*\prime}\gamma_{2}^{*}/\text{tr}\left(W_{2}^{*}\right). \tag{11}\] The limiting distributions (9), (10) and (11) are the same expressions in \(c^{*}\), \(\beta\), \(W^{*}\), \(\gamma_{1}^{*}\) and \(\gamma_{2}^{*}\) as those derived for the 2SLS estimator and effective F-statistic in Montiel Olea and Pflueger (2013, Lemma 1, p 262), expressed in terms of \(c\), \(\beta\), \(W\), \(\gamma_{1}\) and \(\gamma_{2}\), after they orthonormalized the instruments such that \(\frac{1}{n}Z^{\prime}Z=I_{n}\). It follows that their weak-instrument representation in terms of Nagar bias for the 2SLS estimator in relation to the effective F-statistic applies to the Nagar bias of the general class of GMM estimators (7) in relation to the generalized effective F-statistic. We next summarize these results, for further detail and derivations, see Montiel Olea and Pflueger (2013). 
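A sketch of the GMMf estimator and of the generalized effective F-statistic with \(\Omega_{n}=\widehat{W}_{2}^{-1}\), which reduces to the robust F-statistic, is given below. As in the earlier sketch, the heteroskedasticity-robust form of \(\widehat{W}_{2}\) is an assumption; clustered or HAC versions would be used analogously.

```python
import numpy as np

def gmmf(y, x, Z):
    """GMMf estimator with weight matrix based on the first-stage residuals,
    and the generalized effective F-statistic, which here equals the robust F."""
    n, k_z = Z.shape
    ZtZ = Z.T @ Z
    v2 = x - Z @ np.linalg.solve(ZtZ, Z.T @ x)        # first-stage residuals
    W2 = (Z * v2[:, None] ** 2).T @ Z / n             # What_2
    Omega = np.linalg.inv(W2)                         # Omega_n = What_2^{-1}
    Zy, Zx = Z.T @ y, Z.T @ x
    beta_gmmf = (Zx @ Omega @ Zy) / (Zx @ Omega @ Zx)
    # tr(W2 @ Omega) = k_z, so F_geff coincides with the robust F-statistic
    F_geff = (Zx @ Omega @ Zx) / (n * np.trace(W2 @ Omega))
    return beta_gmmf, F_geff
```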
Following Theorem 1 of Montiel Olea and Pflueger (2013), let \(c_{0}^{*}:=c^{*}/\|c^{*}\|\), where \(\|c^{*}\|=\sqrt{c^{*\prime}c^{*}}\), and let \(\mu^{*2}:=\|c^{*}\|^{2}/\text{tr}\left(W_{2}^{*}\right)\). The Taylor series expansion of \(\beta^{*}\) around \(\left(\mu^{*}\right)^{-1}=0\) results in the Nagar (1959) bias approximation \[\mathbb{E}\left[\beta^{*}\right]\approx N\left(\beta,c^{*},W^{*}\right)= \frac{n\left(\beta,c_{0}^{*},W^{*}\right)}{\mu^{*2}},\] with \[n\left(\beta,c_{0}^{*},W^{*}\right)=\frac{\text{tr}\left(S_{12}^{*}\left( \beta,W^{*}\right)\right)-2c_{0}^{*\prime}S_{12}^{*}\left(\beta,W^{*}\right)c _{0}^{*}}{\text{tr}\left(W_{2}^{*}\right)}.\] The benchmark worst-case bias is defined as \[\mathrm{BM}\left(\beta,W^{*}\right)\coloneqq\sqrt{\frac{\mathrm{tr}\left(S_{1}^{*} \left(\beta,W^{*}\right)\right)}{\mathrm{tr}\left(W_{2}^{*}\right)}}, \tag{12}\] which is derived by approximating the expectation of the ratio by the ratio of expectations, \[\mathbb{E}\left[\beta^{*}\right] \approx\frac{\mathbb{E}\left[\gamma_{2}^{*\prime}\left(\gamma_{1}^ {*}-\beta\gamma_{2}^{*}\right)\right]}{\mathbb{E}\left[\gamma_{2}^{*\prime} \gamma_{2}^{*}\right]}=\frac{\mathrm{tr}\left(S_{12}^{*}\left(\beta,W^{*} \right)\right)}{\mathrm{tr}\left(W_{2}^{*}\right)\left(1+\mu^{*2}\right)}\] \[=\frac{1}{\left(1+\mu^{*2}\right)}\frac{\mathrm{tr}\left(S_{12}^{ *}\left(\beta,W^{*}\right)\right)}{\sqrt{\mathrm{tr}\left(W_{2}^{*}\right)} \sqrt{\mathrm{tr}\left(S_{1}^{*}\left(\beta,W^{*}\right)\right)}}\sqrt{\frac{ \mathrm{tr}\left(S_{1}^{*}\left(\beta,W\right)\right)}{\mathrm{tr}\left(W_{2} ^{*}\right)}}\] \[\leq\sqrt{\frac{\mathrm{tr}\left(S_{1}^{*}\left(\beta,W^{*} \right)\right)}{\mathrm{tr}\left(W_{2}^{*}\right)}}=\mathrm{BM}\left(\beta,W^{ *}\right).\] Then let \[B\left(W^{*}\right)\coloneqq\sup_{\beta\in\mathbb{R},c_{0}^{*}\in\mathcal{S}^ {k_{z}-1}}\left(\frac{\left|n\left(\beta,c_{0}^{*},W^{*}\right)\right|}{ \mathrm{BM}\left(\beta,W^{*}\right)}\right), \tag{13}\] where \(\mathcal{S}^{k_{z}-1}\) is the \(k_{z}\)-dimensional unit sphere. The null hypothesis of weak instruments is specified as in Montiel Olea and Pflueger (2013) as \[H_{0}:\mu^{*2}\in\mathcal{H}\left(W^{*},\tau\right)\ \ \mathrm{against}\ \ H_{1}:\mu^{*2}\notin\mathcal{H}\left(W^{*},\tau\right),\] where \[\mathcal{H}\left(W^{*},\tau\right)=\left\{\mu^{*2}\in\mathbb{R}_{+}:\sup_{ \beta\in\mathbb{R},c_{0}^{*}\in\mathcal{S}^{k_{z}-1}}\left(\frac{\left|N \left(\beta,\mu^{*}\sqrt{\mathrm{tr}\left(W_{2}^{*}\right)}c_{0}^{*},W^{*} \right)\right|}{\mathrm{BM}\left(\beta,W^{*}\right)}\right)<\tau\right\},\] or equivalently \[\mathcal{H}\left(W^{*},\tau\right)=\left\{\mu^{*2}\in\mathbb{R}_{+}:\mu^{*2}< \frac{B\left(W^{*}\right)}{\tau}\right\}.\] Under the null hypothesis, the Nagar bias exceeds a fraction \(\tau\) of the benchmark for at least some value of the structural parameter \(\beta\) and some direction \(c_{0}^{*}\). The parameter \(\tau\) is a user specified threshold, commonly set to \(\tau=0.10\). The test for weak instruments is then based on \(\widehat{F}_{\mathrm{geff}}\) which is asymptotically distributed as \(\gamma_{2}^{*\prime}\gamma_{2}^{*}/\mathrm{tr}\left(W_{2}^{*}\right)\), with \(\gamma_{2}^{*}\sim N\left(c^{*},W_{2}^{*}\right)\), which has mean \(1+\mu^{*2}\). It follows that we reject \(H_{0}\) when \(\widehat{F}_{\mathrm{geff}}\) is large. 
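The bound \(B\left(W^{*}\right)\) in (13) has to be obtained numerically. The following is a crude grid-and-random-search sketch of that maximization, not the authors' routine; the finite \(\beta\) grid is an assumption of the sketch (the ratio has a finite limit as \(\beta\rightarrow\pm\infty\)), and the function name is illustrative.

```python
import numpy as np

def nagar_bound(W_star, n_beta=2001, n_dir=2000, beta_max=50.0, seed=0):
    """Approximate B(W*): the sup over beta and unit directions c0* of
    |n(beta, c0*, W*)| / BM(beta, W*)."""
    k = W_star.shape[0] // 2
    W1, W12, W2 = W_star[:k, :k], W_star[:k, k:], W_star[k:, k:]
    tr_W2 = np.trace(W2)
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dir, k))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # c0* on the unit sphere
    best = 0.0
    for beta in np.linspace(-beta_max, beta_max, n_beta):
        S12 = W12 - beta * W2
        S1 = W1 - beta * (W12 + W12.T) + beta ** 2 * W2
        bm = np.sqrt(np.trace(S1) / tr_W2)                # benchmark worst-case bias
        quad = np.einsum('ij,jk,ik->i', dirs, S12, dirs)  # c0*' S12 c0*
        n_bias = (np.trace(S12) - 2.0 * quad) / tr_W2     # Nagar bias numerator
        best = max(best, np.max(np.abs(n_bias)) / bm)
    return best
```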
Denote by \(F_{c^{*},W_{2}^{*}}^{-1}\left(\alpha\right)\) the upper \(\alpha\) quantile of the distribution of \(\gamma_{2}^{*\prime}\gamma_{2}^{*}/\mathrm{tr}\left(W_{2}^{*}\right)\) and let \[cv\left(\alpha,W_{2}^{*},d\right):=\sup\left\{F_{c^{*},W_{2}^{*}}^{-1}\left(\alpha\right)1_{\left(\frac{c^{*\prime}c^{*}}{\mathrm{tr}\left(W_{2}^{*}\right)}<d\right)}\right\},\] where \(1_{\left(A\right)}\) denotes the indicator function over a set \(A\). The null of weak instruments is then rejected if \[\widehat{F}_{\mathrm{geff}}>cv\left(\alpha,\widehat{W}_{2}^{*},B\left(\widehat{W}^{*}\right)/\tau\right),\] which is shown in Lemma 2 of Montiel Olea and Pflueger (2013) to be pointwise asymptotically valid, \[\sup_{\mathcal{H}\left(W^{*},\tau\right)}\lim_{n\to\infty}\mathbb{P}\left(\widehat{F}_{\mathrm{geff}}>cv\left(\alpha,\widehat{W}_{2}^{*},B\left(\widehat{W}^{*}\right)/\tau\right)\right)\leq\alpha,\] and, provided that \(B\left(\widehat{W}^{*}\right)\) is bounded in probability, \[\lim_{\mu^{*2}\to\infty}\lim_{n\to\infty}\mathbb{P}\left(\widehat{F}_{\mathrm{geff}}>cv\left(\alpha,\widehat{W}_{2}^{*},B\left(\widehat{W}^{*}\right)/\tau\right)\right)=1.\] After obtaining \(B\left(\widehat{W}^{*}\right)\) by a numerical routine, Montiel Olea and Pflueger (2013) show that the critical values can be obtained by Monte Carlo methods or by the Patnaik (1949) curve-fitting methodology. The Patnaik critical value is obtained as the upper \(\alpha\) quantile of \(\chi_{\widehat{k}_{\mathrm{geff}}}^{2}\left(d\widehat{k}_{\mathrm{geff}}\right)/\widehat{k}_{\mathrm{geff}}\), where \(\chi_{\widehat{k}_{\mathrm{geff}}}^{2}\left(d\widehat{k}_{\mathrm{geff}}\right)\) denotes the noncentral \(\chi^{2}\) distribution with \(\widehat{k}_{\mathrm{geff}}\) degrees of freedom and noncentrality parameter \(d\widehat{k}_{\mathrm{geff}}\), with \[d=B\left(\widehat{W}^{*}\right)/\tau;\qquad\widehat{k}_{\mathrm{geff}}=\frac{\left[\mathrm{tr}\left(\widehat{W}_{2}^{*}\right)\right]^{2}\left(1+2d\right)}{\mathrm{tr}\left(\widehat{W}_{2}^{*\prime}\widehat{W}_{2}^{*}\right)+2d\,\mathrm{tr}\left(\widehat{W}_{2}^{*}\right)\lambda_{\max}\left(\widehat{W}_{2}^{*}\right)},\] and where \(\lambda_{\max}\left(\widehat{W}_{2}^{*}\right)\) denotes the maximum eigenvalue of \(\widehat{W}_{2}^{*}\). Theorem 1 in Montiel Olea and Pflueger (2013) shows that \[B\left(W^{*}\right)\leq 1,\] and they propose a simplified asymptotically valid but conservative test, which is to reject the null hypothesis \(\mu^{*2}\in\mathcal{H}\left(W^{*},\tau\right)\) if \[\widehat{F}_{\mathrm{geff}}>cv\left(\alpha,\widehat{W}_{2}^{*},1/\tau\right),\] with \(cv\left(\alpha,\widehat{W}_{2}^{*},1/\tau\right)\geq cv\left(\alpha,\widehat{W}_{2}^{*},B\left(\widehat{W}^{*}\right)/\tau\right)\). ## 4 The Robust F-Statistic as a Test for Weak Instruments For the robust F-statistic in relation to the Nagar bias of the GMMf estimator as defined in (5), \[\widehat{\beta}_{gmmf}=\frac{x^{\prime}Z\widehat{W}_{2}^{-1}Z^{\prime}y}{x^{\prime}Z\widehat{W}_{2}^{-1}Z^{\prime}x},\] the above expressions apply, but simplify significantly.
With \(\Omega_{n}=\widehat{W}_{2}^{-1}\overset{p}{\rightarrow}W_{2}^{-1}\), it follows that \[W^{*}=\left[\begin{array}{cc}W_{2}^{-1/2}W_{1}W_{2}^{-1/2}&W_{2}^{-1/2}W_{1 2}W_{2}^{-1/2}\\ W_{2}^{-1/2}W_{12}^{\prime}W_{2}^{-1/2}&I_{k_{z}}\end{array}\right].\] For the Nagar bias expression, we get \[N_{gmmf}\left(\beta,c^{*},W^{*}\right)=\frac{1}{\mu^{*2}}n_{gmmf}\left(\beta, c_{0}^{*},W^{*}\right),\] with \[n_{gmmf}\left(\beta,c_{0}^{*},W^{*}\right)=\frac{\mathrm{tr}\left(S_{12}^{*} \left(\beta,W^{*}\right)\right)-2c_{0}^{*\prime}S_{12}^{*}\left(\beta,W^{*} \right)c_{0}^{*}}{k_{z}},\] where \[S_{12}^{*}\left(\beta,W^{*}\right)=W_{12}^{*}-\beta I_{k_{z}}.\] The expression therefore simplifies to \[n_{gmmf}\left(\beta,c_{0}^{*},W^{*}\right)=\frac{1}{k_{z}}\left(\mathrm{tr} \left(W_{12}^{*}\right)-2c_{0}^{*\prime}W_{12}^{*}c_{0}^{*}-\left(k_{z}-2 \right)\beta\right).\] The benchmark worst-case bias for the GMMf estimator is then given by \[\mathrm{BM}\left(\beta,W^{*}\right)=\sqrt{\frac{\mathrm{tr}\left(S_{1}^{*} \left(\beta,W^{*}\right)\right)}{k_{z}}}\] where \[\operatorname{tr}\left(S_{1}^{*}\left(\beta,W^{*}\right)\right)=\operatorname{tr} \left(W_{1}^{*}\right)-2\beta\text{tr}\left(W_{12}^{*}\right)+k_{z}\beta^{2}.\] Then \[B_{gmmf}\left(W^{*}\right)=\sup_{\beta\in\mathbb{R},c_{0}^{*}\in\mathcal{S}^{k_ {z}-1}}\left(\frac{\left|n_{gmmf}\left(\beta,c_{0}^{*},W^{*}\right)\right|}{ \text{BM}\left(\beta,W^{*}\right)}\right).\] As \(\Omega=W_{2}^{-1}\) we have that \(\gamma_{2}^{*}\sim\left(c^{*},I_{k_{z}}\right)\) and so it follows that \(\gamma_{2}^{*\prime}\gamma_{2}^{*}\sim\chi_{k_{z}}^{2}\left(c^{*\prime}c^{*} \right)=\chi_{k_{z}}^{2}\left(k_{z}\mu^{*2}\right)\). Therefore, the null of weak instruments for the GMMf estimator, specified as \[H_{0}:\mu^{*2}\in\mathcal{H}_{gmmf}\left(W^{*},\tau\right)\] \[\mathcal{H}_{gmmf}\left(W^{*},\tau\right)=\left\{\mu^{*2}\in\mathbb{R}_{+}: \mu^{*2}<\frac{B_{gmmf}\left(W^{*}\right)}{\tau}\right\}\] is rejected if \[\widehat{F}_{r}>cv\left(\alpha,k_{z},B_{gmmf}\left(\widehat{W}^{*}\right)/ \tau\right),\] with \(cv\left(\alpha,k_{z},B_{gmmf}\left(\widehat{W}^{*}\right)/\tau\right)\) the upper \(\alpha\) quantile of \(\chi_{k_{z}}^{2}\left(k_{z}B_{gmmf}\left(\widehat{W}^{*}\right)/\tau\right)/k_ {z}\), and where \(\widehat{W}^{*}=\left(I_{2}\otimes\widehat{W}_{2}^{-1/2}\right)\widehat{W} \left(I_{2}\otimes\widehat{W}_{2}^{-1/2}\right)\). Relative to the general results for the generalized effective F-statistic, we see that for the GMMf estimator, \(B_{gmmf}\left(\widehat{W}^{*}\right)\) is a simpler function to maximize with respect to \(\beta\) and \(c_{0}^{*}\) than \(B\left(\widehat{W}^{*}\right)\) from (13) when \(\Omega_{n}\neq\widehat{W}_{2}^{-1}\). There is further no need for Monte Carlo simulations or Patnaik's curve-fitting methodology to compute the critical values, as \(\widehat{F}_{r}\) follows an asymptotic scaled noncentral chi-square distribution, with the "effective" degrees of freedom here equal to \(k_{z}\). The simplified conservative test based on the fact that \(B_{gmmf}\left(W^{*}\right)\leq 1\) is obtained using the critical value \(cv\left(\alpha,k_{z},1/\tau\right)\), which is simply the upper \(\alpha\) quantile of \(\chi_{k_{z}}^{2}\left(k_{z}/\tau\right)/k_{z}\). ### Harmonizing the Benchmark Bias The benchmark bias \(\text{BM}\left(\beta,W^{*}\right)\) as defined in (12) is estimator/F-statistic specific. 
This makes a comparison of weak-instrument test results between generalized F-statistics for different GMM estimators difficult, as the critical values of the test are based on the maximum of the Nagar bias relative to the estimator specific benchmark. A solution is to harmonize the benchmark bias, such that it is the same for each estimator/F-statistic. One possibility is to consider the worst-case bias under the maintained assumption of homoskedasticity, hence assuming that \(W=\Sigma_{v}\otimes Q_{zz}\) and thus \(W^{*}=\Sigma_{v}\otimes\Omega^{-1/2}Q_{zz}\Omega^{-1/2}\). For all GMM estimators defined in (7) this results in \[\mathrm{BM}_{LS}\left(\beta,\Sigma_{v}\right) =\sqrt{\frac{\left(\sigma_{1}^{2}-2\beta\sigma_{12}+\beta^{2}\sigma _{2}^{2}\right)\mathrm{tr}\left(Q_{zz}\Omega^{-1}\right)}{\sigma_{2}^{2}\mathrm{ tr}\left(Q_{zz}\Omega^{-1}\right)}}\] \[=\sqrt{\frac{\sigma_{1}^{2}-2\beta\sigma_{12}+\beta^{2}\sigma_{2} ^{2}}{\sigma_{2}^{2}}}.\] It follows that \(\mathrm{BM}_{LS}\left(\beta,\Sigma_{v}\right)\) is the probability limit of the worst-case weak-instrument OLS bias, as \[\widehat{\beta}_{LS}-\beta =\frac{x^{\prime}u}{x^{\prime}x}\] \[=\frac{c^{\prime}Z^{\prime}u/\sqrt{n}+v_{2}^{\prime}u}{c^{\prime }Z^{\prime}c/n+2c^{\prime}Z^{\prime}v_{2}/\sqrt{n}+v_{2}^{\prime}u}\] \[\overset{p}{\rightarrow}\frac{\sigma_{w_{2}}\left(\beta,\Sigma_ {v}\right)}{\sigma_{2}^{2}}=\rho_{w_{2}}\frac{\sigma_{u}\left(\beta,\Sigma_{v }\right)}{\sigma_{2}}\] \[\leq\frac{\sigma_{u}\left(\beta,\Sigma_{v}\right)}{\sigma_{2}}= \sqrt{\frac{\sigma_{1}^{2}-2\beta\sigma_{12}+\beta^{2}\sigma_{2}^{2}}{\sigma_ {2}^{2}}}=\mathrm{BM}_{LS}\left(\beta,\Sigma_{v}\right).\] We have that \[\lim_{\beta\rightarrow\pm\infty}\frac{\left|n\left(\beta,c_{0}^{*},W^{*} \right)\right|}{\mathrm{BM}\left(\beta,W^{*}\right)}=\lim_{\beta\rightarrow\pm \infty}\frac{\left|n\left(\beta,c_{0}^{*},W^{*}\right)\right|}{\mathrm{BM}_{ LS}\left(\beta,\Sigma_{v}\right)}=1-\frac{2\lambda_{\min}\left(W_{2}^{*} \right)}{\mathrm{tr}\left(W_{2}^{*}\right)},\] where \(\lambda_{\min}\left(W_{2}^{*}\right)\) is the minimum eigenvalue of \(W_{2}^{*}\). It follows further from Assumptions 1 and 2 that \[0<\frac{\mathrm{BM}\left(\beta,W^{*}\right)}{\mathrm{BM}_{LS}\left(\beta, \Sigma_{v}\right)}<C\] \(\forall\beta\in\mathbb{R}\), for some finite \(C>1\). Note that the positive definiteness, or full rank assumption of \(\Sigma_{v}\) is important, as otherwise \(\mathrm{BM}_{LS}\left(\beta,\Sigma_{v}\right)\) would be zero for some value of \(\beta\). Therefore the case of \(\rho_{12}^{2}=1\) is excluded. It then follows that, \[B_{LS}\left(W^{*},\Sigma_{v}\right)=\sup_{\beta\in\mathbb{R},c_{0}^{*}\in \mathbb{S}^{k_{2}-1}}\left(\frac{\left|n\left(\beta,c_{0}^{*},W^{*}\right) \right|}{\mathrm{BM}_{LS}\left(\beta,\Sigma_{v}\right)}\right)<\infty,\] and the test procedures described in Section 3 apply, replacing \(B\left(W^{*}\right)\) by \(B_{LS}\left(W^{*},\Sigma_{v}\right)\). The interpretation of the weak-instruments null hypothesis \(H_{0}:{\mu^{*}}^{2}\in\mathcal{H}\left(W^{*},\Sigma_{v},\tau\right)\), where \[\mathcal{H}\left(W^{*},\Sigma_{v},\tau\right)=\left\{\mu^{*2}\in\mathbb{R}_{ +}:\mu^{*2}<\frac{B_{LS}\left(W^{*},\Sigma_{v}\right)}{\tau}\right\},\] is then that the Nagar bias exceeds a fraction \(\tau\) of the benchmark maximum OLS bias for at least some value of the structural parameter \(\beta\) and some direction of the first-stage coefficients. 
Whilst this is not the same as the asymptotic relative bias results of Stock and Yogo (2005), that of the weak-instruments bias of the 2SLS estimator relative to that of the OLS estimator under homoskedasticity, it is more aligned with it. It clearly makes the interpretation of the null hypothesis and hence that of its rejection the same for different GMM estimators and their associated generalized effective F-statistics. Note that the simplified conservative test procedure does not apply here, as it is not the case that \(B_{LS}\left(W^{*},\Sigma_{v}\right)\leq 1\) in general. ### Wald Test The Montiel Olea and Pflueger (2013) approach to testing for weak instruments relates to the Nagar bias relative to a benchmark and does not readily extend to a weak-instruments test related to the size behaviour of the Wald test, which is a subject for future research. For the GMMf estimator, Windmeijer (2022) derives in a cross sectional heteroskedasticity setting that for the Wald test statistic, testing \(H_{0}:\beta=\beta_{0}\), under Assumption 1 and under the null, as \(n\rightarrow\infty\), \[\widehat{W}_{gmmf}\overset{d}{\rightarrow}\frac{q^{2}}{a-2bq/\eta+q^{2}/ \eta}, \tag{14}\] where \[q = \left(c^{*}+\xi_{2}\right)^{\prime}W_{2}^{-1/2}S_{1}^{1/2}\xi_{1}\] \[a = \left(c^{*}+\xi_{2}\right)^{\prime}W_{2}^{-1/2}S_{1}W_{2}^{-1/2} \left(c^{*}+\xi_{2}\right)\] \[b = \left(c^{*}+\xi_{2}\right)^{\prime}W_{2}^{-1/2}S_{12}W_{2}^{-1/2} \left(c^{*}+\xi_{2}\right)\] \[\eta = \left(c^{*}+\xi_{2}\right)^{\prime}\left(c^{*}+\xi_{2}\right),\] with \(c^{*}=W_{2}^{-1/2}Q_{zz}c\), as before, and where for ease of exposition we have here used the notation \[\left(\begin{array}{c}\frac{1}{\sqrt{n}}Z^{\prime}u\\ \frac{1}{\sqrt{n}}Z^{\prime}v_{2}\end{array}\right)\overset{d}{\rightarrow}N \left(\left(\begin{array}{c}0\\ 0\end{array}\right),\left[\begin{array}{cc}S_{1}&S_{12}\\ S_{12}^{\prime}&W_{2}\end{array}\right]\right),\] and \[\left(\begin{array}{c}S_{1}^{-1/2}\frac{1}{\sqrt{n}}Z^{\prime}u\\ W_{2}^{-1/2}\frac{1}{\sqrt{n}}Z^{\prime}v_{2}\end{array}\right)\overset{d}{ \rightarrow}\left(\begin{array}{c}\xi_{1}\\ \xi_{2}\end{array}\right)\sim N\left(\left(\begin{array}{cc}0\\ 0\end{array}\right),\left[\begin{array}{cc}I_{k_{z}}&R\\ R^{\prime}&I_{k_{z}}\end{array}\right]\right),\] where \(R=S_{1}^{-1/2}S_{12}W_{2}^{-1/2}\). It follows then from (14) that \(W_{gmmf}\overset{d}{\rightarrow}\chi_{1}^{2}\), as \(n\rightarrow\infty\) and \(c^{*}\rightarrow\infty\), and hence \(\mu^{*2}\rightarrow\infty\), implying that larger values of \(\widehat{F}_{r}\) are associated with a better approximation of the distribution of \(\widehat{W}_{gmmf}\) by the \(\chi_{1}^{2}\) distribution. Windmeijer (2022) further shows that if there is a kronecker variance structure of the form \[\left[\begin{array}{cc}S_{1}&S_{12}\\ S^{\prime}_{12}&W_{2}\end{array}\right]=\left[\begin{array}{cc}\kappa&\delta \\ \delta&1\end{array}\right]\otimes W_{2},\] for \(\kappa>0\) and \(|\delta^{2}/\kappa|\leq 1\), then the Stock and Yogo (2005) critical values apply for the weak-instruments test based on the \(\widehat{W}_{gmmf}\) size properties. As \(S_{1}=\kappa W_{2}\), it also follows then that the GMMf estimator is the efficient estimator under standard strong-instruments asymptotics. ### Grouped-Data IV Model We now consider the heteroskedastic model design from Andrews (2018). 
In these designs, very large values of the robust F-statistic are accompanied by a poor performance of the 2SLS estimator, where Andrews (2018) focused on coverage distortions of confidence sets. We find that in these designs the effective F-statistic is indeed small, indicating a weak-instrument problem for 2SLS, but large values of the robust F-statistic indicate there is not a weak-instrument problem for the GMMf estimator. The design in Andrews (2018, Supplementary Appendix C.3) is the same as a grouped-data IV setup, \[y_{i} = x_{i}\beta+u_{i}\] \[x_{i} = z_{i}^{\prime}\pi+v_{2,i},\] for \(i=1,...,n\), where the \(G\)-vector \(z_{i}\in\{e_{1},...,e_{G}\}\), with \(e_{g}\) a \(G\)-vector with \(g\)th entry equal to 1 and zeros everywhere else, for \(g=1,\ldots,G\). The variance-covariance structure for the errors is modeled fully flexibly by group, and specified as \[\left(\left(\begin{array}{c}u_{i}\\ v_{2,i}\end{array}\right)|z_{i}=e_{g}\right)\sim\left(0,\Sigma_{g}\right),\] \[\Sigma_{g}=\left[\begin{array}{cc}\sigma_{u,g}^{2}&\sigma_{uv_{2},g}\\ \sigma_{uv_{2},g}&\sigma_{v_{2},g}^{2}\end{array}\right]. \tag{15}\] At the group level, we therefore have for group member \(j\) in group \(g\) \[y_{jg} = x_{jg}\beta+u_{jg} \tag{16}\] \[x_{jg} = \pi_{g}+v_{2,jg} \tag{17}\] \[\left(\begin{array}{c}u_{jg}\\ v_{2,jg}\end{array}\right)\sim\left(0,\Sigma_{uv_{2},g}\right),\] for \(j=1,...,n_{g}\) and \(g=1,...,G\), with \(n_{g}\) the number of observations in group \(g\), \(\sum_{g=1}^{G}n_{g}=n\), see also Bekker and Ploeg (2005). We assume that \(\lim_{n\rightarrow\infty}\frac{n_{g}}{n}=f_{g}\), with \(0<f_{g}<1\). The OLS estimator of \(\pi_{g}\) is given by \(\widehat{\pi}_{g}=\overline{x}_{g}=\frac{1}{n_{g}}\sum_{j=1}^{n_{s}}x_{jg}\) and \(Var\left(\widehat{\pi}_{g}\right)=\sigma_{v_{2},g}^{2}/n_{g}\). The OLS residual is \(\widehat{v}_{2,jg}=x_{jg}-\overline{x}_{g}\) and the estimator for the variance is given by \(V\widehat{a}r\left(\widehat{\pi}_{g}\right)=\widehat{\sigma}_{v_{2},g}^{2}/n _{g}\), where \(\widehat{\sigma}_{v,g}^{2}=\frac{1}{n_{g}}\sum_{j=1}^{n_{g}}\widehat{v}_{2,jg} ^{2}\). Let \(Z\) be the \(n\times G\) matrix of instruments. For the vector \(\pi\) the OLS estimator is given by \[\widehat{\pi}=\left(Z^{\prime}Z\right)^{-1}Z^{\prime}x=\left(\overline{x}_{1 },\overline{x}_{2},...,\overline{x}_{G}\right)^{\prime}.\] Let \[\widehat{W}_{2} = \frac{1}{n}\sum_{i=1}^{n}\widehat{v}_{2,i}^{2}z_{i}z_{i}^{\prime}\] \[= \mbox{Diag}\left(n_{g}\widehat{\sigma}_{v_{2},g}^{2}\right)/n,\] where \(\mbox{Diag}\left(q_{g}\right)\) is a diagonal matrix with \(g\)th diagonal element \(q_{g}\). 
Then the robust estimator of \(Var\left(\widehat{\pi}\right)\) is given by \[V\widehat{a}r_{r}\left(\widehat{\pi}\right) = \left(Z^{\prime}Z\right)^{-1}n\widehat{W}_{2}\left(Z^{\prime}Z \right)^{-1}\] \[= \mbox{Diag}\left(\widehat{\sigma}_{v_{2},g}^{2}/n_{g}\right).\] The non-robust variance estimator is \[V\widehat{a}r\left(\widehat{\pi}\right) = \left(\frac{1}{n}\sum_{i=1}^{n}\widehat{v}_{2,i}^{2}\right)\left( Z^{\prime}Z\right)^{-1}\] \[= \left(\sum_{g=1}^{G}\frac{n_{g}}{n}\widehat{\sigma}_{v_{2},g}^{2} \right)\mbox{Diag}\left(\frac{1}{n_{g}}\right).\] The group- (or instrument-) specific IV estimators for \(\beta\) are given by \[\widehat{\beta}_{g}=\frac{z_{g}^{\prime}y}{z_{g}^{\prime}x}=\frac{\overline{y}_{ g}}{\overline{x}_{g}}, \tag{19}\] with \(\overline{y}_{g}=\frac{1}{n_{g}}\sum_{j=1}^{n_{g}}y_{jg}\), and the 2SLS estimator for \(\beta\) is \[\widehat{\beta}_{2sls} = \left(x^{\prime}P_{Z}x\right)^{-1}x^{\prime}P_{Z}y\] \[= \frac{\sum_{g=1}^{G}n_{g}\overline{x}_{g}\overline{y}_{g}}{ \overline{\sum_{g=1}^{G}n_{g}\overline{x}_{g}^{2}}}\] \[= \frac{\sum_{g=1}^{G}n_{g}\overline{x}_{g}^{2}\left(\overline{y}_ {g}/\overline{x}_{g}\right)}{\sum_{g=1}^{G}n_{g}\overline{x}_{g}^{2}}=\sum_{g =1}^{G}\widehat{w}_{2sls,g}\widehat{\beta}_{g},\] the standard result that \(\widehat{\beta}_{2sls}\) is a linear combination of the instrument specific IV estimators, (see e.g. Windmeijer, 2019). The weights are given by \[\widehat{w}_{2sls,g}=\frac{n_{g}\overline{x}_{g}^{2}}{\sum_{s=1}^{G}n_{s} \overline{x}_{s}^{2}}\geq 0 \tag{20}\] and hence the 2SLS estimator is here a weighted average of the group specific estimators. For the group specific estimates, the first-stage F-statistics are given by \[\widehat{F}_{g}=\frac{\widehat{\pi}_{g}^{2}}{V\widehat{a}r\left(\widehat{\pi }_{g}\right)}=\frac{n_{g}\overline{x}_{g}^{2}}{\widehat{\sigma}_{v_{2},g}^{2}} \tag{21}\] for \(g=1,...,G\). The standard weak instruments results of Staiger and Stock (1997) and Stock and Yogo (2005) apply to each group-specific IV estimator \(\widehat{\beta}_{g}\). As these are just-identified models, we can relate the values of the F-statistics to Wald-test size distortions. From (20) and (21) it follows that the weights for the 2SLS estimator are related to the individual F-statistics as follows \[\widehat{w}_{2sls,g}=\frac{n_{g}\overline{x}_{g}^{2}}{\sum_{s=1}^{G}n_{s} \overline{x}_{s}^{2}}=\frac{\widehat{\sigma}_{v_{2},g}^{2}\widehat{F}_{g}}{ \sum_{s=1}^{G}\widehat{\sigma}_{v_{2},g}^{2}\widehat{F}_{s}}. \tag{22}\] Under first-stage homoskedasticity, \(\sigma_{v_{2},g}^{2}=\sigma_{v_{2},s}^{2}\), for \(g,s=1,\ldots,G\), then \(\widehat{\sigma}_{v_{2},s}^{2}\approx\widehat{\sigma}_{v_{2},s}^{2}\) for all \(g,s\), and hence \(\widehat{F}\approx\frac{1}{G}\sum_{g=1}^{G}\widehat{F}_{g}\). Then the weights are given by \(\widehat{w}_{2sls,g}\approx\frac{\widehat{F}_{g}}{\sum_{s=1}^{G}\widehat{F}_{g }}\approx\frac{\widehat{F}_{g}}{\widehat{G}\widehat{F}}\), so we see that the groups with the larger individual F-statistics get the larger weights in the 2SLS estimator under homoskedasticity. This is not necessarily the case under heteroskedasticity. For equal sized groups with approximately the same value of the signal \(\widehat{\pi}_{g}^{2}\), \(\widehat{F}_{g}\) is larger for groups with, and because of, the smaller values of \(\widehat{\sigma}_{v_{2},g}^{2}\). The 2SLS weights ignore this information and give in this example approximately equal weights to groups with similar values of \(\widehat{\pi}_{g}^{2}\). 
In practice it could then be the case that a small variance, large \(\widehat{F}_{g}\) group could receive a small weight in the 2SLS estimator. As shown in the Monte Carlo exercises below, this is exactly what happens in the design of Andrews (2018). There is one group with a large individual F-statistic. However, this group has a very small population variance \(\sigma_{v_{2},g}^{2}\) resulting in a relatively small weight in the 2SLS estimator and a poor performance of the estimator in terms of bias and size of the Wald-test. The non-robust F-statistic for \(\pi\) is given by \[\widehat{F} = \frac{1}{G}\widehat{\pi}^{\prime}\left(V\widehat{a}r\left( \widehat{\pi}\right)\right)^{-1}\widehat{\pi}\] \[= \frac{1}{G}\frac{\sum_{g=1}^{G}n_{g}\overline{x}_{g}^{2}}{\left( \sum_{g=1}^{G}\frac{n_{g}}{n}\widehat{\sigma}_{v_{2},g}^{2}\right)}=\frac{1} {G}\sum_{g=1}^{G}\frac{\widehat{\sigma}_{v_{2},g}^{2}}{\left(\sum_{s=1}^{G} \frac{n_{s}}{n}\widehat{\sigma}_{v_{2},s}^{2}\right)}\widehat{F}_{g}.\] The effective F-statistic is given by \[\widehat{F}_{\text{eff}}=\sum_{g=1}^{G}\frac{\widehat{\sigma}_{v_{2},g}^{2}}{ \left(\sum_{l=1}^{G}\widehat{\sigma}_{v_{2},l}^{2}\right)}\widehat{F}_{g},\] and so \(\widehat{F}_{\text{eff}}=\widehat{F}\) if \(n_{g}=n/G\) for \(g=1,\ldots,G\), which is the case in expectation in the designs of Andrews (2018). Both these F-statistics will therefore correctly reflect weak-instruments problems for the 2SLS estimator in these designs. The robust first-stage F-statistic is given by \[\widehat{F}_{r} = \frac{1}{G}\widehat{\pi}^{\prime}\left(V\widehat{a}r_{r}\left( \widehat{\pi}\right)\right)^{-1}\widehat{\pi}\] \[= \frac{1}{G}\sum_{g=1}^{G}\frac{n_{g}\overline{x}_{g}^{2}}{ \widehat{\sigma}_{v_{2},g}^{2}}=\frac{1}{G}\sum_{g=1}^{G}\widehat{F}_{g}.\] It is therefore clear, that if \(\widehat{F}_{r}\) is large, then at least one of the \(\widehat{F}_{g}\) is large. For the GMMf estimator we have that \[\widehat{\beta}_{gmmf} = \frac{\sum_{g=1}^{G}n_{g}\overline{x}_{g}\overline{y}_{g}/ \widehat{\sigma}_{v_{2},g}^{2}}{\sum_{g=1}^{g}n_{g}\overline{x}_{g}^{2}/ \widehat{\sigma}_{v_{2},g}^{2}}=\frac{\sum_{g=1}^{G}\left(n_{g}\overline{x}_{ g}^{2}/\widehat{\sigma}_{v_{2},g}^{2}\right)\widehat{\beta}_{g}}{\sum_{s=1}^{S}n_{g} \overline{x}_{g}^{2}/\widehat{\sigma}_{v_{2},g}^{2}}\] \[= \sum_{g=1}^{G}\widehat{w}_{gmmf,g}\widehat{\beta}_{g},\] with \[\widehat{w}_{gmmf,g}=\frac{\widehat{F}_{g}}{\sum_{s=1}^{G}\widehat{F}_{s}}=\frac{ \widehat{F}_{g}}{G\widehat{F}_{r}},\] hence the groups with the larger F-statistics get the larger weights, independent of the values of \(\widehat{\sigma}_{v_{2},s}^{2}\), mimicking the 2SLS weights under homoskedasticity of the first-stage errors. #### 4.3.1 Some Monte Carlo Results We consider here the two heteroskedastic designs of Andrews (2018) with \(G=10\) groups, \(\beta=0\) and one with moderate and one with high endogeneity. Tables 9 and 12 in the Supplementary Appendix C.3 of Andrews (2018) present the values of the conditional group-specific variance matrices \(\Sigma_{g}\) as defined in (15) and the first-stage parameters, denoted \(\pi_{0g}\), for \(g=1,\ldots,10\). The correlation between \(u_{i}\) and \(v_{2i}\) is \(-0.59\) in the moderate and we set it equal to \(0.99\) in the high endogeneity case. We multiply the first-stage parameters \(\pi_{0}\) by \(0.040\) and \(0.026\), such that the value of the robust \(\widehat{F}_{r}\) is just over \(80\) on average for \(10,000\) replications and sample size \(n=10,000\) in both designs. 
The group sizes are equal in expectation with \(P\left(z_{i}=e_{g}\right)=0.1\) for all \(g\). The first two rows in each panel of Table 2 present the values of \(\pi_{g}\) and \(\sigma_{v_{2},g}^{2}\) for \(g=1,\ldots,10\). Table 1 presents the estimation results. The (means of the) non-robust F-statistics are small in both designs, \(\widehat{F}=1.41\) and \(\widehat{F}=0.99\), and the effective F-statistics are equal to the non-robust F-statistics, confirming the results derived in Section 4.3. Although the robust F-statistics are large, \(\widehat{F}_{r}=80.23\) and \(\widehat{F}_{r}=80.12\), and the \(\widehat{F}_{r}\)-based test for weak instruments rejects the null of weak instruments in all replications, the 2SLS estimator \(\widehat{\beta}_{2sls}\) is poorly behaved. It has large biases, and the Wald-test rejection frequencies for \(H_{0}:\beta=0\) are equal to \(0.534\) and \(0.999\) at the \(5\%\) level. In contrast, the GMMf estimator \(\widehat{\beta}_{gmmf}\) is unbiased and the Wald-test rejection frequencies are equal to \(0.049\) and \(0.065\) at the \(5\%\) level. The behaviour of the 2SLS estimator is correctly indicated by the low values of \(\widehat{F}_{\text{eff}}\), with the \(\widehat{F}_{\text{eff}}\)-based test indicating very weak instruments for the 2SLS estimator. The means of the critical values for \(\widehat{F}_{\text{eff}}\) are \(17.09\) and \(17.12\), whereas those for \(\widehat{F}_{r}\) are \(12.22\) in the moderate and \(12.31\) in the high endogeneity design. The means of the critical values based on the worst-case least-squares benchmark bias, \(\text{BM}_{LS}\left(\widehat{W}^{*},\widehat{\Sigma}_{v}\right)\), are the same for \(\widehat{F}_{\text{eff}}\), as expected in this design. For \(\widehat{F}_{r}\) it is different and larger in the moderate endogeneity design, where it is equal to \(13.45\). In comparison, the \(10\%\) relative bias Stock and Yogo critical value for the homoskedastic case is here given by \(11.46\), see Skeels and Windmeijer (2018). The details as given in Table 2 below make clear what is happening. It reports the population values of \(\pi_{g}\), \(\sigma^{2}_{v_{2},g}\), \(\mu^{2}_{n,g}=1000\pi^{2}_{g}/\sigma^{2}_{v_{2},g}\) and the mean values of \(\widehat{F}_{g}\), \(\widehat{w}_{2sls,g}\) and \(\widehat{w}_{gmmf,g}=\widehat{F}_{g}/\sum_{s=1}^{G}\widehat{F}_{s}\). For the moderate endogeneity design, identification in the first group is strong, with an average value of \(\widehat{F}_{1}=789.5\). Identification in all other 9 groups is very weak, with the largest average value being \(\widehat{F}_{5}=2.23\). The signal for group 1, \(\pi^{2}_{1}\), is somewhat larger than those for the other groups, but the population value \(\mu^{2}_{n,1}\) is large mainly due to the relatively very small value of \(\sigma^{2}_{v_{2},1}\). As detailed in (22), the 2SLS weights ignore the \(\sigma^{2}_{v_{2},1}\) part of the information in group 1, which leads to the low average value of \(\widehat{w}_{2sls,1}=0.127\). This shows that the 2SLS estimator does not utilize the identification strength of the first group well, with some larger weights given to higher-variance, but lower concentration-parameter, groups. Table 2 further shows that for the GMMf estimator almost all weight is given to the first group, with the average of \(\widehat{w}_{gmmf,1}\) equal to \(0.984\), resulting in the good behaviour of the GMMf estimator in terms of bias and Wald test size.
In this case the standard deviation of the GMMf estimator is quite large relative to that of the 2SLS estimator. This is driven by the value of \(\sigma^{2}_{u,1}\), which in this design is equal to \(1.10\), much larger than \(\sigma^{2}_{v_{2},1}\). Reducing the value of \(\sigma^{2}_{u,1}\) (and the value of \(\sigma_{uv_{2},1}\) accordingly, to keep the same correlation structure within group 1) will reduce the standard deviation of the GMMf estimator. The pattern of group information for the high endogeneity case is similar to that of the moderate endogeneity case, with one informative group, \(g=10\), with an average value of \(\widehat{F}_{10}=792.2\). However, the variance \(\sigma^{2}_{v_{2},10}\) is now so small in relative terms that the 2SLS weight for group 10 has an average value of only \(\widehat{w}_{2sls,10}=0.003\). The GMMf estimator corrects this, with the average value of \(\widehat{w}_{gmmf,10}=0.989\). The standard deviation of the GMMf estimates, 0.029, is in this case smaller than that of the 2SLS estimates, 0.057.

\begin{table}
\begin{tabular}{|c|c c|c c c c c c|} \hline & \(\widehat{F}\) & \(\widehat{F}_{\text{eff}}\) & \(\widehat{F}_{r}\) & \(\widehat{\beta}_{ols}\) & \(\widehat{\beta}_{2sls}\) & \(\widehat{\beta}_{gmmf}\) & \(\widehat{W}_{2sls}\) & \(\widehat{W}_{gmmf}\) \\ \hline Mod Endog & 1.411 & 1.411 & 80.23 & -0.608 & -0.424 & -0.001 & 0.534 & 0.049 \\ & & \(\left[\begin{array}{c}17.09,0\\ 17.09,0\end{array}\right]\) & \(\left[\begin{array}{c}12.22,1\\ 13.45,1\end{array}\right]\) & (0.011) & (0.257) & (0.563) & & \\ \hline High Endog & 0.993 & 0.993 & 80.12 & 0.747 & 0.742 & 0.007 & 0.999 & 0.065 \\ & & \(\left[\begin{array}{c}17.12,0\\ 17.12,0\end{array}\right]\) & \(\left[\begin{array}{c}12.31,1\\ 12.26,0\end{array}\right]\) & (0.001) & (0.057) & (0.029) & & \\ \hline \multicolumn{9}{c}{Notes: \(\beta=0\). Means and (st.dev.), [mean of critical values, rej.freq., \(\alpha=0.05\), \(\tau=0.10\), \(cv\left(B_{LS}\left(\widehat{W}^{*},\widehat{\Sigma}_{v}\right)\right)\) in second row], of \(10,000\) replications. Rej.freq. of robust Wald tests at \(5\%\) level.} \\ \end{tabular}
\end{table} Table 1: Estimation results

The left panels of Figure 1 display the relative bias of the 2SLS and GMMf estimators, relative to that of the OLS estimator, as a function of the mean values of the robust F-statistic \(\widehat{F}_{r}\), together with the rejection frequency of the \(\widehat{F}_{r}\)-based test for weak instruments, using the critical values from the least-squares benchmark bias. We present the relative bias here to be in line with the homoskedastic case as presented below. Different values of \(\widehat{F}_{r}\) are obtained by different values of the scalar \(e\) when setting the first-stage parameters \(\pi=e\pi_{0}\). The relative bias of the GMMf estimator decreases quite rapidly with increasing values of \(\widehat{F}_{r}\). For the moderate endogeneity case, the test has a rejection frequency of 5% at a mean \(\widehat{F}_{r}\) of 10.03, with the relative bias of the GMMf estimator at that point equal to 0.092. As shown in the top right-hand panel of Figure 1, the GMMf-estimator-based Wald test is well behaved in terms of size, with hardly any size distortion for mean values of \(\widehat{F}_{r}\) larger than 5. The GMMf relative bias picture for the high-endogeneity case is very similar to that of the moderate-endogeneity case.
Here the \(\widehat{F}_{r}\)-based test for weak instruments has a rejection frequency of 5% at a mean \(\widehat{F}_{r}\) of 8.98, with the relative bias there being 0.091. As for the homoskedastic case, where the Wald-test size deviation from nominal size is larger for larger values of \(\rho^{2}_{uv_{2}}\), the GMMf Wald test has a worse size performance in the high-endogeneity design, and has a 10% rejection frequency at a mean \(\widehat{F}_{r}\) of 26.64. This would imply a critical value at the 5% level of around 32, which compares to the Stock and Yogo weak-instruments critical value of 38.54 for a Wald test size of 10% at the 5% nominal level.

\begin{table}
\begin{tabular}{l l l l l l l l l l l l} \hline & \(g\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline ME & \(\pi_{g}\) & 0.058 & -0.023 & 0.049 & 0.015 & 0.022 & 0.008 & -0.017 & 0.011 & -0.036 & -0.040 \\ & \(\sigma^{2}_{v_{2},g}\) & 0.004 & 2.789 & 4.264 & 0.779 & 0.395 & 7.026 & 1.226 & 0.308 & 1.709 & 6.099 \\ & \(\mu^{2}_{n,g}\) & 785.7 & 0.184 & 0.556 & 0.284 & 1.190 & 0.009 & 0.236 & 0.387 & 0.770 & 0.266 \\ & \(\widehat{F}_{g}\) & 789.5 & 1.170 & 1.564 & 1.279 & 2.225 & 0.997 & 1.203 & 1.372 & 1.798 & 1.246 \\ & \(\widehat{w}_{2sls,g}\) & 0.126 & 0.098 & 0.178 & 0.035 & 0.031 & 0.180 & 0.049 & 0.015 & 0.096 & 0.192 \\ & \(\widehat{w}_{gmmf,g}\) & 0.984 & 0.002 & 0.002 & 0.002 & 0.003 & 0.001 & 0.002 & 0.002 & 0.002 & 0.002 \\ \hline HE & \(100\cdot\pi_{g}\) & -0.021 & 0.095 & -0.484 & -0.069 & 0.159 & -0.028 & 0.101 & -0.418 & 0.450 & -0.546 \\ & \(\sigma^{2}_{v_{2},g}\) & 1.600 & 0.478 & 2.975 & 1.142 & 0.174 & 0.145 & 4.658 & 1.963 & 2.990 & 0.38\(\cdot a\) \\ & \(\mu^{2}_{n,g}\) & 0.28\(\cdot a\) & 0.002 & 0.008 & 4.2\(\cdot a\) & 0.015 & 5.6\(\cdot a\) & 2.2\(\cdot a\) & 0.009 & 0.007 & 789.9 \\ & \(\widehat{F}_{g}\) & 0.998 & 1.017 & 0.979 & 1.010 & 1.034 & 0.984 & 0.977 & 1.031 & 0.997 & 792.2 \\ & \(\widehat{w}_{2sls,g}\) & 0.111 & 0.040 & 0.177 & 0.085 & 0.016 & 0.013 & 0.242 & 0.134 & 0.181 & 0.003 \\ & \(\widehat{w}_{gmmf,g}\) & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.989 \\ \hline \multicolumn{12}{c}{Notes: \(\mu^{2}_{n,g}=1000\pi^{2}_{g}/\sigma^{2}_{v_{2},g}\); \(a=10^{-4}\)} \\ \end{tabular}
\end{table} Table 2: Group information and estimator weights

Figure 1: Top: Moderate Endogeneity. Bottom: High Endogeneity. Left: Bias of 2SLS and GMMf estimators relative to OLS bias, and \(\widehat{F}_{r}\)-based weak-instrument test rejection frequencies, \(\alpha=0.05\), \(\tau=0.10\), least-squares benchmark bias. The “0.05” line refers to the rej. freq. Right: Rejection frequencies of robust Wald tests, \(\alpha=0.05\).

#### 4.3.2 Homoskedastic Design

We next consider the homoskedastic design for the moderate endogeneity case with \(\Sigma_{uv_{2}}=\frac{1}{G}\sum_{g=1}^{G}\Sigma_{uv_{2},g}\), resulting in \[\Sigma_{uv_{2}}=\left[\begin{array}{cc}2.57&-1.50\\ -1.50&2.46\end{array}\right],\] with \(\rho_{uv_{2}}=-0.59\), as above. We consider smaller sample sizes of \(n=250\) and \(n=500\), or group sizes of 25 or 50 on average, to compare the weak-instrument finite-sample behaviour of the GMMf estimator to that of the 2SLS estimator. In particular, the noise induced by estimation of \(W_{2}^{-1}\) may adversely affect the GMMf estimator.
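The grouped-data designs used in these experiments are easy to reproduce in outline. The sketch below draws a single replication from such a design with hypothetical group parameters (the actual values of \(\pi_{0g}\) and \(\Sigma_{g}\) are those tabulated in Andrews, 2018, and are not reproduced here); the resulting data can be fed to the functions sketched earlier to compute the estimators and the F-statistics.

```python
import numpy as np

def draw_grouped_design(n, pi_g, Sigma_g, beta=0.0, seed=None):
    """One replication of the grouped-data IV design: z_i = e_g with equal
    probabilities, x_i = pi_g + v_2i, y_i = beta * x_i + u_i, with (u_i, v_2i)
    drawn from the group-specific covariance matrix Sigma_g[g]."""
    rng = np.random.default_rng(seed)
    G = len(pi_g)
    g = rng.integers(0, G, size=n)
    u = np.empty(n)
    v2 = np.empty(n)
    for k in range(G):
        idx = np.where(g == k)[0]
        draws = rng.multivariate_normal(np.zeros(2), Sigma_g[k], size=idx.size)
        u[idx], v2[idx] = draws[:, 0], draws[:, 1]
    x = pi_g[g] + v2
    y = beta * x + u
    return y, x, g

# hypothetical example: one strong, low-variance group and nine weak ones
G = 10
pi_g = np.full(G, 0.01)
pi_g[0] = 0.06
Sigma_g = np.array([[[1.0, -0.5], [-0.5, 1.0]]] * G)
Sigma_g[0] = [[1.0, -0.03], [-0.03, 0.004]]   # small sigma^2_{v2,1}
y, x, g = draw_grouped_design(10_000, pi_g, Sigma_g, beta=0.0, seed=1)
```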
The results in Figure 2 show that for this design and these sample sizes the relative biases and Wald rejection frequencies are virtually identical for the two estimators, with the standard deviations of the GMMf estimates slightly larger than those of the 2SLS estimator, as expected. The rejection frequencies of the \(\widehat{F}_{r}\)-based test are here closer to those of the standard Stock and Yogo \(\widehat{F}\)-based test than are the rejection frequencies of the \(\widehat{F}_{\text{eff}}\)-based test, with the latter test more conservative.

Figure 2: Homoskedastic design, moderate endogeneity. Top: \(n=250\). Bottom: \(n=500\). Left: Relative bias and weak-instrument tests rejection frequencies. Right: Wald test rejection frequencies and standard deviations of 2SLS and GMMf estimates.

## 5 Considerations for Practice and an Application

The grouped-data IV designs above are quite extreme in the variation of \(\sigma_{v_{2},g}^{2}\), leading to the large differences between the values of \(\widehat{F}_{\text{eff}}\) and \(\widehat{F}_{r}\) and between the performances of the 2SLS and GMMf estimators. Note that these results carry over to a model with a constant and a full set of mutually exclusive binary indicators as instruments, when the variances \(\sigma_{v_{2},g}^{2}\) for at least two groups are relatively small and their \(\pi_{g}\) coefficients are different. This is the case if we, for example, change \(\sigma_{v_{2},2}^{2}\) in the moderate endogeneity design above to be equal to the small \(\sigma_{v_{2},1}^{2}\). An example where this could be relevant is from Stevenson (2018), who studies the effect of pretrial detention on conviction, using judge indicators as instruments, as cases are randomly assigned to judges.3 As the treatment is here binary, with variance \(\pi\left(1-\pi\right)\), a very lenient (small \(\pi\)) and a very strict judge (large \(\pi\)) in terms of sending defendants to pretrial detention have small values of \(\sigma_{v_{2}}^{2}\), but clearly different values of \(\pi\). Unlike the 2SLS estimator, the GMMf estimator takes the differential strengths of the instruments due to the different values of \(\sigma_{v_{2}}^{2}\) into account, giving more weight to very lenient and very strict judges. This could then lead to a better performance of the GMMf estimator in terms of bias, which would be indicated by the values of \(\widehat{F}_{\text{eff}}\) and \(\widehat{F}_{r}\). Footnote 3: I would like to thank an anonymous referee for this example. For any application, one should therefore check whether there is a difference between the values of \(\widehat{F}_{\text{eff}}\) and \(\widehat{F}_{r}\). From the expressions of the 2SLS and GMMf estimators as given in (4) and (6) and the results for the grouped-data IV example, it is clear that the weights for the 2SLS estimator are determined only by the values of \(\widehat{\pi}\) and the variance of the instruments \(Z\), whereas the GMMf weights take into account first-stage nonhomoskedasticity through the robust estimator of the variance of \(\widehat{\pi}\). If the nonhomoskedasticity is such that these weights are very different for the two estimators, and if the situation is as in the Andrews (2018) examples above, with \(\widehat{F}_{r}>cv_{r}\) but \(\widehat{F}_{\text{eff}}<cv_{\text{eff}}\), then the GMMf estimator is preferred from the weak-instruments test results in terms of Nagar bias.
In Stata (StataCorp., 2023), the robust first-stage F-statistic is provided with the output of "ivregress" or "ivreg2", Baum, Schaffer, and Stillman (2010), whereas "weakivtest", Pflueger and Wang (2015), calculates \(\widehat{F}_{\text{eff}}\) and critical values for the weak-instruments test. An extended version of the latter, called "gfweakivtest",4 also calculates the robust F-statistic and its weak-instruments critical values. It further includes the critical values based on the least-squares benchmark bias for both \(\widehat{F}_{r}\) and \(\widehat{F}_{\text{eff}}\), and presents the estimation results for the GMMf estimator. A study with a set of mutually exclusive binary indicators as instruments, and one of the _American Economic Review_ studies considered in the review paper by Andrews et al. (2019), is Stephens and Yang (2014), who study the effect of schooling on wages, using data from the 1960-1980 US Censuses of Population. The endogenous variable is years of schooling for individual \(i\), born in state \(s\) in year \(t\), and the instruments are three indicator variables \(RS7\), \(RS8\) and \(RS9\), corresponding to being required to attend seven, eight or nine or more years of schooling, respectively. All specifications include state-of-birth and year-of-birth fixed effects, and the computed standard errors are robust to heteroskedasticity and clustering at the state-of-birth/year-of-birth cell. Stephens and Yang (2014) report the robust first-stage F-statistics \(\widehat{F}_{r}\) in their Table 1, which presents estimates of the returns to schooling on log weekly wages for different samples, but they do not present results for the effective F-statistic. As the estimator used is the 2SLS estimator, it is important to consider whether the \(\widehat{F}_{r}\) statistic misrepresents weak-instruments bias of the 2SLS estimator, in the sense that a large value of \(\widehat{F}_{r}\) may not be an indicator of a good performance of the 2SLS estimator. Table 3 replicates the estimation results of Table 1 in Stephens and Yang (2014, p 1784), and adds the values of \(\widehat{F}\) and \(\widehat{F}_{\text{eff}}\) together with the weak-instruments critical values for both \(\widehat{F}_{r}\) and \(\widehat{F}_{\text{eff}}\). It further reports the estimation results for the GMMf estimator. We present here the estimated standard errors of the 2SLS and GMMf estimators; Stephens and Yang (2014) instead presented weak-instruments robust confidence intervals based on the CLR test of Moreira (2003), robust to clustering using the methods of Finlay and Magnusson (2009). We further present the p-values of the test for overidentifying restrictions based on the two-step Hansen \(J\)-statistic. The p-values of the robust Cragg-Donald rank statistics are identical and have been omitted from the table. The small p-values found for columns (5), (6) and (7) indicate significant heterogeneity of the effect estimates for the different instruments for these specifications. We will explore this further below for column (7). The values of \(\widehat{F}_{\text{eff}}\) and \(\widehat{F}_{r}\) are quite similar across the specifications and both are smaller than the nonrobust \(\widehat{F}\), for some specifications substantially smaller. The weak-instruments critical values for the \(\widehat{F}_{r}\) statistic are smaller and vary less than those for \(\widehat{F}_{\text{eff}}\). The choice of benchmark bias does not have a substantive effect on the critical values for either statistic.
The null of weak instruments is rejected by the two statistics for the same specifications, with non-rejection in columns (2) and (8). The 2SLS and GMMf estimates and standard errors are very similar. It is clear that we do not have here the situation of a large value of \(\widehat{F}_{r}\) accompanied by a small value of \(\widehat{F}_{\text{eff}}\). One interesting difference occurs in column (7). For the non-southern born the 2SLS estimate is negative. The CLR-based confidence interval as given in Table 1 of Stephens and Yang (2014) is \([-0.031,0.001]\), and Stephens and Yang (2014, p 1785) state "For the non-southern born shown in column (7), we continue to find a very strong first-stage relationship including a large \(\widehat{F}_{r}\)-statistic. However, we find a negative and nearly statistically significant estimate of the return to schooling." Here the value of \(\widehat{F}_{r}\) is equal to 67.25, nearly double the value of \(\widehat{F}_{\text{eff}}\), which is equal to 34.40. Neither \(\widehat{F}_{\text{eff}}\) nor \(\widehat{F}_{r}\) indicates a weak-instruments problem in terms of Nagar bias for the 2SLS and GMMf estimators. The GMMf estimate here is positive and equal to 0.015 (se 0.011), moving away from the nearly statistically significant negative return to schooling for this group, as indicated by the CLR confidence interval.

\begin{table}
\begin{tabular}{c c c c c c c c c} \hline & \multicolumn{2}{c}{White males} & \multicolumn{2}{c}{White males} & \multicolumn{2}{c}{All whites} & \multicolumn{2}{c}{Whites 25-54 born:} \\ & \multicolumn{2}{c}{ages 40-49} & \multicolumn{2}{c}{ages 25-54} & \multicolumn{2}{c}{ages 25-54} & Non-south & South \\ & (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline 2SLS & 0.096 & -0.020 & 0.097 & -0.014 & 0.105 & -0.003 & -0.009 & 0.019 \\ & (0.016) & (0.041) & (0.010) & (0.021) & (0.011) & (0.016) & (0.012) & (0.043) \\ GMMf & 0.095 & -0.014 & 0.100 & -0.012 & 0.111 & 0.004 & 0.015 & 0.022 \\ & (0.016) & (0.040) & (0.010) & (0.021) & (0.011) & (0.016) & (0.011) & (0.044) \\ \hline \(\widehat{F}\) & 108.61 & 16.06 & 547.96 & 89.06 & 869.99 & 197.00 & 603.64 & 20.88 \\ \(\widehat{F}_{\text{eff}}\) & 42.85 & 8.11 & 64.17 & 24.38 & 62.98 & 42.40 & 34.40 & 6.13 \\ \(\widehat{F}_{r}\) & 42.76 & 8.22 & 81.37 & 23.63 & 91.73 & 40.57 & 67.25 & 6.34 \\ \hline \(cv_{\text{eff}}\) & 9.21 & 10.31 & 13.50 & 10.36 & 14.65 & 11.10 & 16.30 & 9.49 \\ \(cv_{\text{eff}}^{LS}\) & 9.22 & 10.31 & 13.71 & 10.31 & 14.36 & 11.18 & 16.31 & 9.30 \\ \(cv_{r}\) & 8.64 & 8.73 & 8.74 & 8.74 & 9.62 & 8.65 & 8.96 & 8.82 \\ \(cv_{r}^{LS}\) & 8.63 & 8.69 & 8.86 & 8.69 & 9.03 & 8.70 & 8.85 & 8.96 \\ \hline \(J\)-test, p & 0.81 & 0.27 & 0.12 & 0.22 & 0.00 & 0.00 & 0.00 & 0.07 \\ \hline Region\(\times\)yob & No & Yes & No & Yes & No & Yes & No & No \\ Controls & None & None & Age & Age & Age & Age & Age & Age \\ & & & quartic, & quartic, & quartic, & quartic, & quartic, & quartic, \\ & & census yr & census yr & census yr, & census yr, & census yr, & census yr, & census yr, \\ & & & & gender & gender & gender & gender \\ \hline \(n\) & 609,852 & 2,166,387 & 3,680,223 & 2,566,127 & 1,114,096 \\ \hline \end{tabular} Notes: All specifications include state-of-birth and year-of-birth fixed effects. Standard errors in brackets (), robust to heteroskedasticity and clustering at the state-of-birth/year-of-birth cell.
\end{table} Table 3: The effect of schooling on log weekly wages, Stephens and Yang (2014, Table 1).

However, the robust Hansen two-step GMM \(J\)-statistic and the robust Cragg-Donald rank statistic for overidentifying restrictions, with two degrees of freedom, have values of 22.05 and 23.95 respectively, with p-values of 0.0000, indicating significant heterogeneity in the estimated effect sizes for the different instruments. To investigate this further, let \(\widehat{\beta}_{ind}\) be the \(k_{z}\)-vector with \(j\)-th element \(\widehat{\beta}_{ind,j}=\widehat{\pi}_{y,j}/\widehat{\pi}_{j}\). From (4) and (6) it follows that \[\widehat{\beta}_{2sls}=\frac{\widehat{\pi}^{\prime}\left(V\widehat{a}r\left(\widehat{\pi}\right)\right)^{-1}\text{Diag}\left(\widehat{\pi}\right)\widehat{\beta}_{ind}}{k_{z}\widehat{F}}=\widehat{w}^{\prime}_{2sls}\widehat{\beta}_{ind};\] \[\widehat{\beta}_{gmmf}=\frac{\widehat{\pi}^{\prime}\left(V\widehat{a}r_{r}\left(\widehat{\pi}\right)\right)^{-1}\text{Diag}\left(\widehat{\pi}\right)\widehat{\beta}_{ind}}{k_{z}\widehat{F}_{r}}=\widehat{w}^{\prime}_{gmmf}\widehat{\beta}_{ind}.\] \(\widehat{\beta}_{ind,j}\) is the just-identified IV estimator with \(Z_{j}\) as the excluded instrument, but with the other instruments \(Z_{\{-j\}}\) included in the model as explanatory variables, see Windmeijer et al. (2021) and Masten and Poirier (2021). In order to better compare the results for the individual instruments, we transform them linearly as \(SRS7=RS7+RS8+RS9\), \(SRS8=RS8+RS9\) and \(SRS9=RS9\), corresponding to being required to attend seven or more, eight or more, or nine or more years of schooling, respectively. Clearly, this does not change the estimation results for the 2SLS and GMMf estimators, but using each of these transformed instruments in turn as the excluded instrument gives a clear group comparison, and the transformation does affect the values of \(\widehat{\beta}_{ind}\) and the 2SLS and GMMf weights. From the reduced-form and first-stage specifications (3) and (2), it follows that \(\beta_{ind,j}\coloneqq\pi_{y,j}/\pi_{j}=\beta\), for \(j=1,\ldots,k_{z}\), and the overidentification test can be formulated as a test for \(H_{0}:\beta_{ind,1}=\beta_{ind,2}=\ldots=\beta_{ind,k_{z}}\), see Windmeijer (2019). Table 4 presents the results for \(\widehat{\pi}\), \(\widehat{\beta}_{ind}\) and the 2SLS and GMMf weights for the specifications of columns (1) and (7) in Table 3. For column (1), the p-value of the overidentification test statistic is large, and the values of \(\widehat{\beta}_{ind,j}\) are all positive, with ordered values of 0.07, 0.10 and 0.14. The 2SLS and GMMf weights are very similar, resulting in the virtually identical estimates of 0.0955 and 0.0953. For column (7), we find very different values of \(\widehat{\beta}_{ind,j}\). These are given by \(-0.14\), 0.03 and 0.18, with the estimated variance of \(\widehat{\beta}_{ind}\) such that the null of equal \(\beta_{ind,j}\) is rejected by the overidentification test statistic. The large negative value is associated with the \(SRS8\) instrument, the one indicating that an individual is required to attend eight or more years of schooling. The 2SLS and GMMf weights are now different, with the GMMf estimator giving less weight to the negative value, resulting in a small positive estimate \(\widehat{\beta}_{gmmf}\), whereas \(\widehat{\beta}_{2sls}\) is negative.
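For reference, the instrument-specific estimates \(\widehat{\beta}_{ind,j}\) reported in Table 4 can be computed directly from the reduced-form and first-stage coefficients. The minimal sketch below assumes that any exogenous controls (here the fixed effects) have already been partialled out of \(y\), \(x\) and \(Z\); it is illustrative only, not the authors' code.

```python
import numpy as np

def beta_individual(y, x, Z):
    """Instrument-specific estimates beta_ind_j = pi_y_j / pi_j, i.e. the
    just-identified IV estimator using Z_j as the excluded instrument with
    the remaining columns of Z included as explanatory variables."""
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    pi_hat = ZtZ_inv @ (Z.T @ x)    # first-stage coefficients
    pi_y_hat = ZtZ_inv @ (Z.T @ y)  # reduced-form coefficients
    return pi_y_hat / pi_hat
```

The 2SLS and GMMf estimates are then the weighted combinations \(\widehat{w}^{\prime}_{2sls}\widehat{\beta}_{ind}\) and \(\widehat{w}^{\prime}_{gmmf}\widehat{\beta}_{ind}\) given in the displays above.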
But the large heterogeneity in the estimated effect sizes, obtained from essentially the same type of instruments, needs to be investigated and makes it difficult to interpret any linear combination of these estimates.

\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Column & Inst & \(\widehat{\pi}\) & \(se_{\widehat{\pi}}\) & \(se_{\widehat{\pi},r}\) & \(\widehat{\beta}_{ind}\) & \(se_{\widehat{\beta}_{ind,r}}\) & \(\widehat{w}_{2sls}\) & \(\widehat{w}_{gmmf}\) \\ \hline (1) & \(SRS7\) & 0.095 & 0.023 & 0.036 & 0.137 & 0.089 & 0.126 & 0.128 \\ & \(SRS8\) & 0.129 & 0.018 & 0.028 & 0.071 & 0.041 & 0.348 & 0.356 \\ & \(SRS9\) & 0.180 & 0.015 & 0.024 & 0.102 & 0.024 & 0.526 & 0.517 \\ \hline (7) & \(SRS7\) & 0.212 & 0.014 & 0.056 & 0.177 & 0.045 & 0.257 & 0.290 \\ & \(SRS8\) & 0.206 & 0.010 & 0.057 & -0.140 & 0.057 & 0.454 & 0.347 \\ & \(SRS9\) & 0.156 & 0.006 & 0.020 & 0.033 & 0.018 & 0.289 & 0.362 \\ \hline \end{tabular} Notes: Column numbers refer to those in Table 3. Instrument \(SRSj\) indicates being required to attend \(j\) or more years of schooling.
\end{table} Table 4: Some Further Estimation Results

## 6 Concluding Remarks

For models with a single endogenous explanatory variable, we have introduced a class of generalized effective F-statistics as defined in (8) in relation to a class of linear GMM estimators given in (7), and have shown that the Montiel Olea and Pflueger (2013) weak-instruments testing procedure that they established for the effective F-statistic in relation to the Nagar bias of the 2SLS estimator applies to this extended class. In particular, the standard nonhomoskedasticity-robust F-statistic is a member of this class and is associated with the behaviour in terms of Nagar bias of the novel GMM estimator, which has its weight matrix based on the first-stage residuals. We then focused on a comparison of the effective F-statistic and the robust F-statistic and the associated weak-instrument behaviours of the 2SLS and GMMf estimators. In particular, we have shown that, and explained why, the GMMf estimator's performance is much better in terms of bias than that of the 2SLS estimator in the grouped-data designs of Andrews (2018), where the robust F-statistic can take very large values, but the effective F-statistic is very small. One should therefore in general not use the robust F-statistic to gauge instrument strength in relation to the performance of the 2SLS estimator, Andrews et al. (2019, pp 738-739),5 but, as shown here, it can be used as a weak-instrument test in relation to the Nagar bias of the GMMf estimator. In practice, therefore, both the effective F-statistic and the robust F-statistic should be reported, together with their critical values, and the GMMf estimator considered in cases where there is a clear discrepancy with a large value for the robust F-statistic. We found no such discrepancy in the applied analysis of Stephens and Yang (2014). We have not focused here on the wider applicability of the class of generalized effective F-statistics and their associated GMM estimators, but an example is the one-step Arellano and Bond (1991) GMM estimator for panel data models with a single endogenous variable. Two-step estimators do not fall in the class because of the presence of estimated structural parameters in the weight matrix, but one could test for weak instruments in this setting, fixing the parameter of the endogenous variable in the weight matrix, for example under a specific null value of interest.
A topic for future research in the general nonhomoskedastic setting is the extension to the linear model with more than one endogenous variable. Lewis and Mertens (2022) extend the Montiel Olea and Pflueger (2013) method to the multiple-endogenous-variable case for the 2SLS estimator, but they do not consider such an extension for the wider class of GMM estimators. Future research should also address the weak-instruments Wald size properties for both the single and multiple endogenous variable settings.
2308.16062
Energy distribution and substructure formation in astrophysical MHD simulations
During substructure formation in magnetized astrophysical plasma, dissipation of magnetic energy facilitated by magnetic reconnection affects the system dynamics by heating and accelerating the ejected plasmoids. Numerical simulations are a crucial tool for investigating such systems. In astrophysical simulations, the energy dissipation, reconnection rate and substructure formation critically depend on the onset of reconnection of numerical or physical origin. In this paper, we hope to assess the reliability of the state-of-the-art numerical codes, PLUTO and KORAL by quantifying and discussing the impact of dimensionality, resolution, and code accuracy on magnetic energy dissipation, reconnection rate, and substructure formation. We quantitatively compare results obtained with relativistic and non-relativistic, resistive and non-resistive, as well as two- and three-dimensional setups performing the Orszag-Tang test problem. We find the sufficient resolution in each model, for which numerical error is negligible and the resolution does not significantly affect the magnetic energy dissipation and reconnection rate. The non-relativistic simulations show that at sufficient resolution, magnetic and kinetic energies convert to internal energy and heat up the plasma. The results show that in the relativistic system, energy components undergo mutual conversion during the simulation time, which leads to a substantial increase in magnetic energy at 20\% and 90\% of the total simulation time of $10$ light-crossing times -- the magnetic field is amplified by a factor of five due to relativistic shocks. We also show that the reconnection rate in all our simulations is higher than $0.1$, indicating plasmoid-mediated regime. It is shown that in KORAL simulations magnetic energy is slightly larger and more substructures are captured than in PLUTO simulations.
Fatemeh Kayanikhoo, Miljenko Cemeljic, Maciek Wielgus, Wlodek Kluzniak
2023-08-30T14:36:45Z
http://arxiv.org/abs/2308.16062v2
# Energy distribution and substructure formation in astrophysical MHD simulations ###### Abstract During substructure formation in magnetized astrophysical plasma, dissipation of magnetic energy facilitated by magnetic reconnection affects the system dynamics by heating and accelerating the ejected plasmoids. Numerical simulations are a crucial tool for investigating such systems. In astrophysical simulations, the energy dissipation, reconnection rate and substructure formation critically depend on the onset of reconnection of numerical or physical origin. In this paper, we hope to assess the reliability of the state-of-the-art numerical codes, PLUTO and KORAL by quantifying and discussing the impact of dimensionality, resolution, and code accuracy on magnetic energy dissipation, reconnection rate, and substructure formation. We quantitatively compare results obtained with relativistic and non-relativistic, resistive and non-resistive, as well as two- and three-dimensional setups performing the Orszag-Tang test problem. We find the sufficient resolution in each model, for which numerical error is negligible and the resolution does not significantly affect the magnetic energy dissipation and reconnection rate. The non-relativistic simulations show that at sufficient resolution, magnetic and kinetic energies convert to internal energy and heat up the plasma. The results show that in the relativistic system, energy components undergo mutual conversion during the simulation time, which leads to a substantial increase in magnetic energy at 20% and 90% of the total simulation time of \(10\) light-crossing times--the magnetic field is amplified by a factor of five due to relativistic shocks. We also show that the reconnection rate in all our simulations is higher than \(0.1\), indicating plasmoid-mediated regime. It is shown that in KORAL simulations magnetic energy is slightly larger and more substructures are captured than in PLUTO simulations. ## 1 Introduction Dissipation processes in astrophysical plasma, including magnetic reconnection [1], are of fundamental relevance for our understanding of a variety of observed systems, such as solar flares [2, 3] or magnetic sub-storms in the Earth's magnetosphere [4, 5]. The relative motion in plasmas and gas often leads to the formation of shocks. Non-relativistic magnetized shocks in supernovae remnants are possible sources of acceleration of cosmic rays [6, 7, 8, 9]. Energy dissipation in the relativistic regime leads to spectacular displays, such as jets and relativistic ejections from the accretion systems around compact objects [10, 11], or event horizon scale synchrotron emission [12] and flaring [13, 14] in the hot advection-dominated accretion flows. In the context of accretion onto compact objects, understanding dissipation occurring on small spatial scales is crucial to finding realistic sub-grid physics prescriptions for global simulations. Magnetic reconnection is a process by which the magnetic field lines in a plasma break and reconnect, releasing stored energy in the form of heat, particles/plasmoid acceleration, or radiation. Reconnection often occurs spontaneously and is usually associated with the presence of a current sheet, a region where the magnetic field lines become almost antiparallel and the plasma conductivity is finite. The magnetic field lines can break and reconnect due to the tearing instability, which is driven by the pressure of the plasma and the tension of the magnetic field [15, 16, 17]. 
Spontaneous reconnection is relatively slow, and the rate is determined by the local plasma conditions [18, 19]. Petschek proposed a shock geometry that allows fast magnetic reconnection to occur [20], this may be realized in magnetohydrodynamic (MHD) simulations for large values of (anomalous) resistivity. Forced magnetic reconnection occurs due to external perturbation in a turbulent system [21, 22, 23]. In this scenario, the reconnection rate can be much faster than spontaneous reconnection, as the external forces can overcome the moderating resistances of the plasma. Such turbulent systems can be found in various environments, such as solar wind, the interstellar medium, or the accretion disks around black holes and neutron stars. In this work, we study energy dissipation and magnetic reconnection in the MHD framework, using a simple example of a vortical system, the Orszag-Tang (OT) vortex [24], a popular test problem for numerical MHD codes. In such a system, the magnetic field lines stretch and twist thus facilitating the reconnection process. This test has already been performed with state-of-the-art codes like Athena++ [25], BHAC [26], and HARM [27]. Here, we implement the OT test in two more state-of-the-art codes used in numerical simulations of accretion. We quantitatively compare results obtained with the two codes of our choice at different resolutions and setups in relativistic/non-relativistic, resistive/non-resistive, and two-dimensional (2D) vs. three-dimensional (3D) configurations, to study how much these different aspects impact the obtained results, characterized by the energy balance and reconnection rate. The well-established codes we selected for the comparison are the widely used, public PLUTO code [28] and the radiative, general relativistic code KORAL [29, 30]. PLUTO has extensively been used in simulations of magnetospheric star-disk interaction with alpha-viscous disk in [31, 32], with magneto-rotational instability including alpha-dynamo in [33], jet launching disks in [34], accretion-ejection problem in [35], to mention only some. It was also used in the simulations of star-planet magnetospheric interaction, e.g. in [36] and [37] and related papers. A radiative module was included in simulations of accretion columns in classical T Tauri stars in [38]. KORAL code is used to study the accreting compact objects in general relativity involving radiation using M1 closure scheme [29]. The code has been used to study the radiative black hole accretion discs [39, 40, 41, 42] as well as super-Eddington accretion onto magnetized neutron stars [43]. The paper is organized as follows: in SS2 we review the theoretical framework, including the formalism of the MHD equations. The initial conditions in the OT problem in 2D and 3D setups are given in SS3. In SS4 we discuss the results in different cases. The reconnection rate is studied in SS5. In SS6 we present the direct comparison of the results in the two codes we used here and we conclude in SS7. ## 2 Special relativistic resistive MHD equations We investigate the energy distribution in astrophysical simulations in the following setups: * Ideal nonrelativistic magnetohydrodynamics (Ideal-MHD), * Ideal relativistic MHD (Rel-MHD), * Resistive nonrelativistic MHD (Res-MHD). We begin with presenting the resistive special relativistic MHD equations in Minkowski spacetime, which we then simplify to relativistic ideal MHD and non-relativistic resistive MHD cases. 
The simulations are performed in the PLUTO and KORAL codes, with the exception of Res-MHD, which is performed in PLUTO alone (KORAL only treats non-resistive MHD equations). The dynamics of magnetized fluids can be described using the equations of conservation of mass, momentum, and energy, as well as the Maxwell-Faraday, Ampere-Maxwell, and Ohm equations. For a fluid propagating in the laboratory reference frame with bulk velocity \(\mathbf{v}=\mathbf{\beta}c\), the Lorentz factor is defined as \(\Gamma=(1-\beta^{2})^{-1/2}\), and the fluid four-velocity is \(\mathbf{u}=(\Gamma c,\Gamma\mathbf{v})\). We denote the fluid rest-mass energy density in the fluid frame by \(\rho c^{2}\), the fluid pressure by \(p\), the fluid internal energy density in the fluid frame by \(U_{\rm int}\), the electric field by \(\mathbf{E}\), and the magnetic field by \(\mathbf{B}\). The \(\mathbf{E}\) and \(\mathbf{B}\) fields were redefined to absorb a factor of \(1/\sqrt{4\pi}\) each, so that factors of \(1/(4\pi)\) do not appear in relations such as Eqs. 2, 3, and 7. Furthermore, we define the enthalpy density in the fluid frame, \[\omega=\rho c^{2}+U_{\rm int}+p\;, \tag{1}\] the momentum density \[\mathbf{m}=\omega\Gamma^{2}\mathbf{v}+c\mathbf{E}\times\mathbf{B}\;, \tag{2}\] and the total energy density \(\varepsilon\) \[\varepsilon=\omega\Gamma^{2}-p+\frac{1}{2}(E^{2}+B^{2})\;. \tag{3}\] The conservation equations are then \[\frac{\partial(\Gamma\rho)}{\partial t}+\mathbf{\nabla}\cdot(\Gamma\rho\mathbf{v})=0, \tag{4}\] \[\frac{\partial\mathbf{m}}{\partial t}+\mathbf{\nabla}\cdot\left[\omega\Gamma^{2}\mathbf{v}\mathbf{v}+c^{2}(p\mathbf{I}+\mathbf{T}_{EM})\right]=0, \tag{5}\] \[\frac{\partial\varepsilon}{\partial t}+\mathbf{\nabla}\cdot(\omega\Gamma^{2}\mathbf{v}+c\,\mathbf{E}\times\mathbf{B})=0, \tag{6}\] where additionally we denote the identity matrix by \(\mathbf{I}\) and the electromagnetic stress tensor by \(\mathbf{T}_{EM}\), hence \[\mathbf{T}_{EM}=\frac{1}{2}(E^{2}+B^{2})\mathbf{I}-(\mathbf{E}\mathbf{E}+\mathbf{B}\mathbf{B})\;. \tag{7}\] The Maxwell-Faraday and Ampere-Maxwell equations are \[\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t}+\mathbf{\nabla}\times\mathbf{E}=0, \tag{8}\] \[\frac{1}{c}\frac{\partial\mathbf{E}}{\partial t}-\mathbf{\nabla}\times\mathbf{B}=-\mathbf{J}/c, \tag{9}\] respectively, where \(\mathbf{J}\) is the current density that comes from Ohm's law, \[\mathbf{J}=(\Gamma c^{2}/\eta)(\mathbf{E}+\mathbf{\beta}\times\mathbf{B}), \tag{10}\] where \(\eta\) is the magnetic diffusivity, which is identical to the resistivity. The additional condition \(\nabla\cdot\mathbf{B}=0\) from Gauss's law is enforced during the numerical evolution of the magnetic field. In order to obtain the system of nonrelativistic resistive MHD equations from Eqs. 4-6, we make a number of approximations based on the assumptions \(\beta\ll 1\) and \(p+U_{\rm int}\ll\rho c^{2}\), leading to the following formulation: \[\frac{\partial\rho}{\partial t}+\mathbf{\nabla}\cdot(\rho\mathbf{v})=0, \tag{11}\] \[\frac{\partial\rho\mathbf{v}}{\partial t}+\mathbf{\nabla}\cdot(\rho\mathbf{v}\mathbf{v}+p\mathbf{I}+\mathbf{T}_{EM})=0, \tag{12}\] \[\frac{\partial\varepsilon}{\partial t}+\mathbf{\nabla}\cdot\left[\left(\omega+\rho\frac{v^{2}}{2}\right)\mathbf{v}+c\mathbf{E}\times\mathbf{B}\right]=0, \tag{13}\] where the non-relativistic total energy and enthalpy densities are \[\varepsilon=U_{\rm int}+\rho\frac{v^{2}}{2}+\frac{1}{2}(E^{2}+B^{2})\;, \tag{14}\] \[\omega=U_{\rm int}+p\;. \tag{15}\]
Additionally, Ohm's law in resistive nonrelativistic MHD becomes \[\mathbf{J}=\frac{c^{2}}{\eta}(\mathbf{E}+\mathbf{\beta}\times\mathbf{B})=c\mathbf{\nabla}\times\mathbf{B}, \tag{16}\] neglecting the displacement current (\(\partial\mathbf{E}/\partial t=0\)) in Eq. 9 to obtain the second equality. The diffusive time scale, \(\tau_{\eta}=L^{2}/\eta\) (in conventional units, if \(\eta\) is in \(\mathrm{cm}^{2}/\mathrm{s}\) and \(L\) is in \(\mathrm{cm}\), then \(\tau_{\eta}\) is in seconds), can be compared with the dynamical time scale \(\tau_{v}=L/v\), where \(L\) is the characteristic length scale of the system and \(v\) is the characteristic velocity scale. The ratio of the two time scales is known as the magnetic Reynolds number \[R_{\mathrm{m}}=\frac{\tau_{\eta}}{\tau_{v}}=\frac{vL}{\eta}. \tag{17}\] When the typical velocity scale of the system is the Alfven velocity \(v_{\mathrm{A}}=B/\sqrt{4\pi\rho}\), this ratio is called the Lundquist number \[S=v_{A}L/\eta. \tag{18}\] Astrophysical systems often satisfy the condition \(S\gg 1\), which is equivalent to \(L\gg\eta/v_{A}\). In such cases, whether relativistic or non-relativistic, we can use the ideal MHD approximation \[\mathbf{E}=\mathbf{B}\times\mathbf{\beta}. \tag{19}\] As a consequence, \(\mathbf{E}\) can be readily evaluated and does not need to be evolved with the Ampere-Maxwell equation (Eq. 9), simplifying the Maxwell-Faraday equation (Eq. 8) for the \(\mathbf{B}\) field evolution to \[\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t}+\mathbf{\nabla}\times(\mathbf{B}\times\mathbf{\beta})=0. \tag{20}\]

## 3 Orszag-Tang test problem

Implicitly including the most important features of MHD turbulent flow, such as energy dissipation and magnetic reconnection [24, 44], the Orszag-Tang vortex is a comprehensive test problem for MHD codes. This problem mostly tests the code performance in simulations with MHD shocks and shock-shock interactions. We study the energy distribution in different setups by performing the OT test problem simulations using two astrophysical simulation codes: PLUTO [ver. 4.4; 28] and KORAL [30]. The description of our simulations is mostly presented in code units. These are obtained by scaling the equations with fiducial values of certain physical quantities. All velocities are scaled with \(v_{0}=c\); e.g., the statement that \(v_{\mathrm{A}}=1\) in code units signifies that the Alfven velocity is equal to the speed of light. The density is scaled with some density \(\rho_{0}\), the pressure with \(p_{0}\), and the electromagnetic fields with \(B_{0}\). The exact value of \(\rho_{0}\) is immaterial, as long as \(p_{0}=\rho_{0}v_{0}^{2}\) and \(B_{0}=v_{0}\sqrt{4\pi\rho_{0}}\).

Figure 1: The time evolution of \(\overline{B^{2}}\) in PLUTO simulations: the Ideal-MHD case with different resolutions (_left panel_), and Res-MHD case with different physical resistivities \(\eta\) for the resolution of \(512^{2}\) (_right panel_). The unit of time is \(t_{c}=L/c\).

#### Two dimensional setup

The simulation is set up in a 2D computational box \(0\leq x,y\leq 2\pi\) with periodic boundary conditions and the following initial conditions for the velocity and magnetic fields [45]: \[v=\tilde{v}(-\sin y,\sin x,0), \tag{21}\] \[B=\tilde{B}(-\sin y,\sin 2x,0). \tag{22}\] We adopt \(\tilde{v}=0.99v_{0}/\sqrt{2}\) and \(\tilde{B}=B_{0}\). The initial density is uniform.
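As a reference for reproducing this setup, a minimal NumPy sketch of the 2D initial conditions of Eqs. (21)-(22) on a uniform grid is given below; the resolution and grid-generation details are illustrative assumptions, and the actual input formats of PLUTO and KORAL differ from this.

```python
import numpy as np

def orszag_tang_2d(N=512, v_amp=0.99 / np.sqrt(2.0), b_amp=1.0):
    """Cell-centred Orszag-Tang initial conditions of Eqs. (21)-(22) on an
    N x N periodic grid covering [0, 2*pi)^2, in code units (v0 = c = 1,
    B0 = 1); the uniform initial density is set to unity here."""
    x1 = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    X, Y = np.meshgrid(x1, x1, indexing="ij")

    vx, vy = -v_amp * np.sin(Y), v_amp * np.sin(X)         # Eq. (21)
    bx, by = -b_amp * np.sin(Y), b_amp * np.sin(2.0 * X)   # Eq. (22)
    rho = np.ones_like(X)                                  # uniform initial density
    return X, Y, rho, vx, vy, bx, by
```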
In 2D we perform the OT simulations in a range of uniform resolutions from \(64^{2}\) to \(4096^{2}\) in different setups (Ideal-MHD, Res-MHD, and Rel-MHD), doubling the number of grid points in each dimension to increase the resolution step by step. In 3D we run the Ideal-MHD and Rel-MHD simulations in three resolutions, \(128^{3}\), \(256^{3}\), and \(512^{3}\). Only with PLUTO do we run the Res-MHD simulation (in both 2D and 3D) in the resolution \(512^{3}\). Without resistivity, both PLUTO and KORAL are used for Ideal-MHD and Rel-MHD simulations in 2D and 3D setups3. Footnote 3: The physical resistivity module is publicly available only in non-relativistic PLUTO, and this is the one we use to perform our Res-MHD simulations. All simulations run to the final time \(t=10\,t_{c}\), where \(t_{c}\) is the light-crossing time across the typical length in the system. In code units, \(t_{c}=L\), and we take \(t_{c}=1\).

#### Three dimensional setup

In order to study the difference between 2D and 3D MHD flows and reconnection, we extend the Orszag-Tang test problem to three dimensions. We set up the simulation in a cubic computational box \(0\leq(x,y,z)\leq 2\pi\) with periodic boundary conditions.

Figure 2: The rest mass-density at \(t=2.5t_{c}\) in the resolution of \(4096^{2}\) with PLUTO in the Ideal-MHD simulations (_left panels_) and Res-MHD simulations with physical resistivity \(\eta=10^{-4}\) (_right panels_). Plasmoids, zoomed-in in the bottom panels, form only in a case with sufficiently low resistivity, corresponding to a Lundquist number larger than \(S\sim 10^{4}\).

For the Rel-MHD simulations, the initial conditions are chosen in such a way as to result in a realistic turbulent system, following the definition of a Taylor-Green vortex [24]: \[v=\tilde{v}(\cos x\sin y\cos z,-\sin x\cos y\cos z,0), \tag{23}\] \[B=\tilde{B}(-\sin y,\sin 2x,0), \tag{24}\] where \(\tilde{v}\) and \(\tilde{B}\) are the same as in the 2D setup. We find that such initial conditions do not result in a sufficiently turbulent outcome in non-relativistic simulations in 3D, so for Ideal-MHD and Res-MHD simulations in 3D we use different initial conditions, following [46]: \[v=\tilde{v}(-\sin y,\sin x,0), \tag{25}\] \[B=\tilde{B}(-2\sin 2y+\sin z,2\sin x+\sin z,\sin x+\sin y), \tag{26}\] where \(\tilde{v}=2v_{0}\) and \(\tilde{B}=0.8B_{0}\). The initial density is uniform.

## 4 Energy components in the results

We study the dissipation of magnetic energy and investigate the conversion of energy by following the time evolution of the energy components: the electromagnetic energy density \(U_{\rm EB}=E_{\rm B}+E_{\rm E}=\frac{1}{2}(B^{2}+E^{2})\), the kinetic energy density \(E_{\rm K}\), and the internal energy density \(U_{\rm int}\). We study all components in the laboratory frame, thus the kinetic and internal energy densities in the relativistic simulations Rel-MHD are computed as follows: \[E_{\rm K}=\rho(\Gamma^{2}-1)c^{2}, \tag{27}\] \[U_{\rm int}=\left(\frac{\gamma}{\gamma-1}\Gamma^{2}-1\right)p. \tag{28}\] Here, \(\gamma=4/3\) is the polytropic constant. In the non-relativistic limit (simulations Ideal-MHD and Res-MHD) the internal energy density becomes \[U_{\rm int}=\frac{p}{\gamma-1}, \tag{29}\] while the kinetic energy density is \[E_{\rm K}=\frac{1}{2}\rho v^{2}, \tag{30}\] as can be seen from Eqs. 13, 14. Another quantity that is a function of space and time is the magnetization, defined as \(\sigma=B^{2}/(\rho c^{2})\).
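These diagnostics, together with the magnetization, are pointwise functions of the primitive variables; a minimal sketch in code units (\(c=1\)), with illustrative array arguments, is shown below.

```python
import numpy as np

def energy_densities(rho, p, vx, vy, vz, Bsq, Esq, relativistic=True, gamma=4.0 / 3.0):
    """Pointwise energy densities in the laboratory frame (code units, c = 1).
    The relativistic branch follows Eqs. (27)-(28); the non-relativistic
    branch follows Eqs. (29)-(30). Bsq and Esq are B^2 and E^2 on the grid."""
    v2 = vx**2 + vy**2 + vz**2
    E_EB = 0.5 * (Bsq + Esq)                                 # electromagnetic energy density
    if relativistic:
        Gamma2 = 1.0 / (1.0 - v2)                            # squared Lorentz factor
        E_K = rho * (Gamma2 - 1.0)                           # Eq. (27)
        U_int = (gamma / (gamma - 1.0) * Gamma2 - 1.0) * p   # Eq. (28)
    else:
        E_K = 0.5 * rho * v2                                 # Eq. (30)
        U_int = p / (gamma - 1.0)                            # Eq. (29)
    sigma = Bsq / rho                                        # magnetization, B^2 / (rho c^2)
    return E_EB, E_K, U_int, sigma
```

On a uniform grid, the averaged quantities defined in the next paragraph reduce to plain means of these arrays.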
We discuss and compare the averaged energy densities denoted by a bar and computed in 3D through \[\overline{Q}=\frac{\int\!\!\int\!\!\int_{V}Q\,dx\,dy\,dz}{\int\! \!\!\int\!\!\int_{V}\,dx\,dy\,dz}, \tag{31}\] where \(V\) is the volume of the simulation box. In 2D the corresponding formula is \[\overline{Q}=\frac{\int\!\!\int_{S}Q\,dx\,dy}{\int\!\!\!\int_{S} \,dx\,dy}. \tag{32}\] The results in PLUTO and KORAL simulations are very similar both qualitatively and quantitatively. Unless stated otherwise, we present the PLUTO results. The KORAL results and details of their difference from the PLUTO results are discussed in detail in Section SS6. ### Ideal-MHD and Res-MHD simulations In this section, we estimate the numerical dissipation in the simulations and study the effect of resistivity on the evolution of the system. In the left panel of Fig. 1, we plot the time evolution of the averaged squared magnetic field \(\overline{B^{2}}\) measured in Ideal-MHD simulations for different resolutions. It is clear that at later times the value of \(\overline{B^{2}}\) increases with an increase in the resolution. This is because in grid-based codes the flux is computed over the surface of every grid cell. In such a calculation there is some amount of computational dissipation, so-called numerical resistivity. Before we study the effect of physical resistivity in simulations, it is important to estimate the numerical dissipation at each resolution and find a reasonable minimal resolution. We compare the results in non-resistive Ideal-MHD simulations with the Res-MHD simulations set with different physical resistivities (\(\eta\) in Eq. 16), at each resolution4. In the right panel of Fig. 1 the results obtained with the resolution \(512^{2}\) are shown. We compare \(\overline{B^{2}}\) of the simulations with \(\eta=0,10^{-4},10^{-3},5\times 10^{-3}\). The curves corresponding to the Ideal-MHD and Res-MHD simulations with \(\eta=10^{-4}\) are almost overlapping, so at this resolution we estimate the numerical resistivity to be below \(10^{-4}\) and conclude that the resolutions higher than \(512^{2}\) are reasonably reliable for our simulations with the PLUTO code. Footnote 4: With a different setup in PLUTO, [47] found that for the numerical resistivity to decrease by an order of magnitude, the number of grid cells should be quadrupled, as also follows from the estimate with the characteristic length and diffusive timescale, \(\eta=L^{2}/t_{\eta}\). The magnetic energy initially increases and then decreases, forming the hump at \(2t_{c}\) in its plot (Fig. 1). This is caused by the compression of a region around a current sheet and subsequent formation of a reconnection layer (at \(t\approx 2t_{c}\)) which then dissipates the magnetic field energy. In Fig. 2 we show the mass-density plots at \(t=2.5t_{c}\) in the simulations Ideal-MHD (numerical resistivity below \(10^{-4}\)) and Res-MHD (physical resistivity \(\eta=10^{-4}\)) for the resolution of \(4096^{2}\). In the left panel (Ideal-MHD) we have identified plasmoids (regions of higher density and lower magnetization relative to their surroundings), these are the substructures located in the central region of the simulation box. In the right panel (Res-MHD) the chain of plasmoids does not appear. Similarly, we see no such chain in the simulations with a resistivity larger than \(10^{-4}\). 
The resistivity of \(10^{-4}\) corresponds to the Lundquist number \(S=Lv_{A}/\eta\,\approx\,10^{4}\), with the typical length scale of the system \(L\,\approx\,1\) and Alfven velocity \(v_{\rm A}\,\approx\,1\). This result matches theoretical studies which confirm that the current sheet is plasmoid unstable5 at \(S>10^{4}\)[48, 45]. We also confirm that with a smaller physical resistivity (\(\eta<10^{-5}\), \(S>10^{5}\)) some substructures are resolved in the Res-MHD simulations. Footnote 5: Plasmoid unstable current sheet involves a dynamic process where plasmoids merge and split within a sheet-like structure of magnetized plasma. We compare the different terms in energy distribution (magnetic energy \(\overline{E_{B}}\), kinetic energy \(\overline{E_{K}}\), internal energy \(\overline{U_{\rm int}}\), and electric energy \(\overline{E_{E}}\), respectively) in the Res-MHD simulations with \(\eta=5\times 10^{-3}\) and \(\eta=10^{-4}\) (Fig. 3). The first row of this figure shows magnetic energy where the horizontal dashed line, located at \(\overline{E_{B}}=0.5\), shows the initial value of magnetic energy. We see that with decreasing physical resistivity (from the left panel to the right panel) the rate of magnetic energy decrease becomes smaller. The dissipated magnetic energy converts to the internal energy and heats up plasma as shown in the third row of this plot. We will discuss the energy components in Rel-MHD and Ideal-MHD simulations in the next section. ### Ideal-MHD and Rel-MHD simulations We compare the results of non-relativistic (Ideal-MHD) and relativistic (Rel-MHD) non-resistive MHD simulations in the PLUTO code. The different terms in energy distribution (magnetic energy \(\overline{E_{B}}\), kinetic energy \(\overline{E_{K}}\), internal energy \(\overline{U_{\rm int}}\), and electric energy \(\overline{E_{E}}\), respectively) are shown in Fig. 4. Panels in the left column show the results for Rel-MHD and in the right column for Ideal-MHD. The magnetic energy evolution, shown in the first row of panels in Fig. 4, indicates that in simulation Rel-MHD the magnetic energy increases five-fold from the initial value of \(0.5\) (shown by the black dashed line in both left and right top panels). In the non-relativistic simulation Ideal-MHD there is only a minor initial increase of the magnetic energy followed by a slow decay. The kinetic energy evolution is presented in the second row of Fig. 4, where a black dashed line is also drawn for reference at the value of 0.5. The kinetic energies were computed using Eqs. 27 and 30 for the Rel-MHD and Ideal-MHD simulations, respectively. In Rel-MHD the effect of the Lorentz factor on the kinetic energy leads to an initial value of approximately 0.62, which is higher than the magnetic energy. In contrast, for the Ideal-MHD simulations, the initial value of the kinetic energy is approximately 0.25, half the value of the magnetic energy. Initially, in Rel-MHD the kinetic energy amplifies the magnetic field, while in the nonrelativistic Ideal-MHD case the low value of \(\overline{E_{\rm K}}\) is not enough to amplify the magnetic energy. Thus, in Rel-MHD the effect of kinetic energy on the magnetic energy evolution in the second half of the simulation is significant, causing a secondary increase of \(\overline{E_{\rm B}}\). In the Ideal-MHD no such effect is observed. In the third row of panels in Fig. 4, we show the internal energy as computed from Eqs. 28 and 29. 
Comparison with the first row of panels shows the conversion between magnetic energy and internal energy. Figure 3: Energy distribution in Res-MHD simulations with physical resistivities \(\eta=5\times 10^{-3}\) (_left panels_) and \(\eta=10^{-4}\) (_right panels_) at the resolution of \(512^{2}\) grid cells. The horizontal black dashed lines in the \(\overline{E_{B}}\) and \(\overline{E_{K}}\) panels indicate the initial value (0.5) of the magnetic energy \(\overline{E_{B}}\). See the detailed discussion in §4.1. The dissipated magnetic energy heats up the plasma. Figure 4: Energy distribution in Rel-MHD and Ideal-MHD PLUTO simulations at the resolution of \(512^{2}\) grid cells are shown in the (_left_) and (_right_) panels, respectively. The horizontal black dashed lines in the panels with \(\overline{E_{B}}\) and \(\overline{E_{K}}\) indicate the initial value of the magnetic energy \(\overline{E_{B}}=0.5\). See the detailed discussion in §4.2. Figure 5: Residuals of total energy \(|E_{\rm resid}|=|E_{\rm tot}(t)-E_{\rm tot}(0)|/E_{\rm tot}(0)\) in the Ideal-MHD and Res-MHD (\(\eta=5\times 10^{-3}\)) simulations with PLUTO (top two, nearly coinciding curves) and Rel-MHD simulations with PLUTO (bottom curve) and KORAL, with the resolution \(512^{2}\) grid cells. The conservation of energy is significantly more accurate in the Rel-MHD simulation, particularly in PLUTO. Figure 6: Magnetization (\(\sigma=B^{2}/\rho\) in code units) in Ideal-MHD and Rel-MHD simulations with PLUTO at the resolution \(512^{2}\) grid cells. In Rel-MHD, after \(t\simeq 5t_{c}\), the large amount of the internal and kinetic energy amplifies the magnetic field. This is visible as the second increase ("hump") in the \(\overline{E_{B}}\) curve. Such an outcome in the Rel-MHD simulation offers an explanation for the energy reservoir in magnetized systems like relativistic jets in active galactic nuclei, accretion discs of black holes, and magnetized neutron stars in high-energy astrophysics. In the nonrelativistic Ideal-MHD case, shown in the right panel, the released magnetic energy converts to internal energy and heats up the plasma. In contrast with the relativistic case, the amount of energy in the system is not enough to re-amplify the magnetic field. The final row of panels in Fig. 4 displays the electric energy, which exhibits a significant evolution in the Rel-MHD simulation. The electric field is a function of magnetic field and velocity (Eq. 19), Consequently when the magnetic field is increased around \(\sim 2t_{c}\), the electric energy \(\overline{E_{E}}\) also increases. Furthermore, as the system evolves, there is another subsequent increase in \(\overline{E_{E}}\), coinciding with an increase in kinetic energy after \(4t_{c}\). The sum of all energy components in each of the simulations is conserved over time, as shown in Fig. 5. The residuals in the total energy, \(|\overline{E_{\rm resid}(t)}|=|E_{\rm tot}(t)-E_{\rm tot}(0)|/E_{\rm tot}(0)\), are displayed for the simulations Ideal-MHD, Res-MHD (with \(\eta=5\times 10^{-3}\)), and Rel-MHD. Here, \(E_{\rm tot}\) is the sum of the magnetic, electric, kinetic, and internal energies displayed in Fig. 5. The residuals in the Ideal-MHD and Res-MHD in PLUTO simulation are \(\approx 3\times 10^{-3}\), in KORAL Rel-MHD \(\sim 10^{-4}\), whereas, in the Rel-MHD case in PLUTO they are \(\approx 10^{-9}\). This indicates that the numerical dissipation in the relativistic simulation is significantly lower than in the non-relativistic simulations. 
Also, the results indicate that Rel-MHD simulation in PLUTO is less dissipative than in KORAL. Space averaged magnetization in both simulations, Rel-MHD and Ideal-MHD, with the fixed resolution of \(512^{2}\) grid cells, is shown as a function of time in Fig. 6. This shows once again how the relativistic system is strongly magnetized and the magnetization increases by the end of the simulation time, while in a non-relativistic simulation, the magnetization does not evolve significantly. ### 3D simulations We perform the OT test problem simulations in three dimensions in PLUTO and KORAL with the initial conditions of Eqs. 23, 24 in Rel-MHD simulations and with the initial conditions of Eqs. 25, 26 in Ideal-MHD and Res-MHD simulations. The time evolution of \(\overline{B^{2}}\) in the Ideal-MHD simulations is shown in the left panel of Fig. 7. We expect the current sheet to be resolved at time \(t\simeq 1.5t_{c}\), because of the increase in magnetic energy discussed in the previous section. We search for the reconnection layers and plasmoids in different slices of the simulation domain at this simulation time. An example of a resulting rest-mass density plot is shown in Fig. 8, which is a slice at \(z=\pi/2\). The plasmoid (in the left panel) is shown at the center of the simulation box, which is zoomed-in at the bottom panel. Figure 7: The time evolution of \(\overline{B^{2}}\) in 3D simulations with PLUTO for the simulations Ideal-MHD (_left panel_) and Rel-MHD (_right panel_). We estimate the numerical resistivity at each resolution in Ideal-MHD simulations in 3D by comparing with Res-MHD simulations for different values of \(\eta\). The plot of \(\overline{B^{2}}\) with different physical resistivities \(\eta=0,10^{-4},10^{-3},5\times 10^{-3}\), in the resolution of \(512^{3}\) grid cells is shown in Fig. 9 (the method is discussed in SS4.1). It is shown that the curves corresponding to \(\eta=10^{-4}\) resistive simulations and the non-resistive Ideal-MHD cases are convergent, so the numerical resistivity in Ideal-MHD simulations with PLUTO at the given resolution is estimated to be \(\leq 10^{-4}\). We expect that at this resolution the current sheets are well resolved. The rest-mass density plots in the Ideal-MHD simulations (left panel) and resistive Res-MHD simulations with \(\eta=10^{-3}\) (right panel) with the resolution of \(512^{3}\) are shown in Fig. 8. The zoomed-in frames in the bottom panels show the substructure at the center of each simulation box. From the configuration of the magnetic field which is not shown in this figure, we found that there is a thick current sheet containing a plasmoid in the Ideal-MHD simulation, which is not resolved in the Res-MHD simulation. The right panel in Fig. 7 shows the time evolution of \(\overline{B^{2}}\) in the Rel-MHD simulation. It shows that \(\overline{B^{2}}\) increases to the time \(t\simeq 7t_{c}\). At the low resolutions, the magnetic energy drops after this time, but at the high resolution \(512^{3}\), the peak is flattened. We found that at the smaller resolutions, due to the high numerical dissipation, the current sheets are compressed and plasmoids are not resolved. At the high resolution \(512^{3}\), we can see the plasmoid unstable current sheets at different slices in the simulated cubic computational domain. We show the slice in the rest-mass density at \(z=\pi/2\) in the Rel-MHD simulation with the resolution \(512^{3}\) in Fig. 10, with a few magnetic islands in the simulation box6. 
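Plasmoids in such slices can be located automatically by scanning 1D cuts for a local maximum of the rest-mass density that coincides with local minima of the reconnecting field component and of the magnetization. The following sketch assumes the profiles have already been extracted along a cut and is only an illustration of that criterion, not the detection procedure used in the paper:

```python
import numpy as np

def plasmoid_candidates(rho, b_par, sigma, min_separation=5):
    """Indices along a 1D cut where rho has a local maximum while the
    reconnecting field component and the magnetization have local minima."""
    rho, b_par, sigma = map(np.asarray, (rho, b_par, sigma))
    idx = []
    for i in range(1, len(rho) - 1):
        rho_max = rho[i] > rho[i - 1] and rho[i] > rho[i + 1]
        b_min = abs(b_par[i]) < abs(b_par[i - 1]) and abs(b_par[i]) < abs(b_par[i + 1])
        s_min = sigma[i] < sigma[i - 1] and sigma[i] < sigma[i + 1]
        if rho_max and b_min and s_min and (not idx or i - idx[-1] >= min_separation):
            idx.append(i)
    return idx

# Example: a density bump at index 5 coinciding with dips in B_x and sigma.
rho   = [1, 1, 1, 1, 1.2, 1.6, 1.2, 1, 1, 1]
b_par = [1, 1, 1, 0.8, 0.4, 0.1, 0.4, 0.8, 1, 1]
sigma = [9, 9, 9, 8, 6, 3, 6, 8, 9, 9]
print(plasmoid_candidates(rho, b_par, sigma))   # -> [5]
```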
We check the profile of magnetic field components and magnetization in that region. For instance, we take a closer look at one plasmoid located at \((x,y)=(4.7,3.68)\). In the right panel, we show the profile of magnetic field components, magnetization, and mass density along the dashed line at \(y=3.68\) with \(x\in[4,5.6]\). The mass density \(\rho\) reaches a local maximum at the position of the plasmoid, while the Figure 8: The slice in \(z=\pi/2\) in the simulation box of the rest-mass density \(\rho\) for a vortex at \(t=1.5t_{c}\) at the resolution of \(512^{3}\) in PLUTO. _Left panel_: Ideal-MHD. _Right panel_: Res-MHD with \(\eta=10^{-3}\). The zoomed-in panels show the current layer in the middle of the simulation boxes. Plasmoids form only in the cases with sufficiently low resistivity, corresponding to a Lundquist number larger than \(10^{4}\) (\(\eta\leq 10^{-4}\)). Figure 10: _Left panel_: a slice of the rest-mass density at \(z=\pi/2\) in the Rel-MHD simulation in 3D at a resolution \(512^{3}\) with PLUTO. The streamlines indicate the magnetic field lines and the white circles show plasmoids. _Right panel_: the magnetic field components, magnetization, and density profile along the black dashed line at \(y=3.68\), shown in the left panel. Figure 9: The time evolution of \(\overline{B^{2}}\) in 3D Res-MHD simulations with PLUTO, at the resolution \(512^{3}\) with different physical resistivities. parallel magnetic field component \(B_{x}\), and magnetization \(\sigma\) have a minimum local value. Such a profile confirms that there is a plasmoid at this point [49, 50]. In the same Rel-MHD simulation we made another slice, shown in Fig. 15, through the same simulation box in the \(xz\) plane at \(y=3.68\) (where the black dashed line is in Fig. 10). In the top panel we show the reconnection layer and plasmoids. The zoomed plots show the magnetization of the selected reconnection layer. In the next section, we estimate the reconnection rate at this chosen layer. Using the same method (just described for the 3D Rel-MHD simulation in the last paragraph), we choose the layer shown in Fig. 16 in the 3D Ideal-MHD run. ## 5 Reconnection rate Magnetic reconnection might occur spontaneously due to the internal MHD instability in a resistive model [18, 20] or in the ideal MHD as a kink mode [19]. In a turbulent system, the external perturbation can cause magnetic reconnection in a so-called forced reconnection, where the plasma is in a state of chaotic and unpredictable motion. The magnetic field lines can become distorted and twisted, leading to reconnection [21]. Turbulent systems can be found in various environments, such as in the solar wind, in the interstellar medium, and in the accretion disks of black holes and neutron stars. In these environments, magnetic reconnection can lead to a variety of phenomena, such as the acceleration of particles to high energies, the formation of jets and flares, and the heating of the plasma. An external perturbation in turbulent plasma can accelerate the formation of the X-point, causing a reconnection one order of magnitude faster than spontaneous reconnection. Such a reconnection process is complex and still not well understood, and is an active area of research in astrophysics and plasma physics. There are analytical Figure 11: The reconnection rate as a function of resolution in the simulations Ideal-MHD and Rel-MHD with KORAL. 
Figure 12: The reconnection rate as a function of resistivity for resistivities \(\geq 10^{-4}\) in 2D (red circles) and 3D (blue stars) Res-MHD simulations with PLUTO. The change is cosistent with \(1/\log S\) dependence, \(S\) being the Lundquist number. and numerical studies on forced magnetic reconnection including perturbation in the isolated current sheet [21, 22], and a study searching for the observational signatures of simulated forced reconnection in solar flares [23]. The OT is a vortex problem, for which turbulence develops during evolution. It is shown in the rest-mass density plots (Figs. 17 and 18) that the current sheets are not formed in isolation, but are a result of evolution of high-density regions, which are driven together by the evolution of the system. Therefore, fast reconnection is expected in our simulations. Fig. 19 in Appendix A shows selected reconnection layers in the chosen 2D simulations Ideal-MHD and Rel-MHD. When magnetic field lines reconnect, the magnetic tension acts to shorten the field lines and make a magnetic slingshot, which drives the outflow (plasmoids ejection) from both sides of the layer in the parallel direction [51, 52]. For a steady-state reconnection, the outflow (from the reconnection area) should be balanced with the inflow (toward the reconnection layer) which is shown with the white arrows in the figure. The ratio of inflow and outflow velocity (\(v_{\rm in}\) and \(v_{\rm out}\), respectively), is called the reconnection rate \(v_{\rm r}=v_{\rm in}/v_{\rm out}\). The outflow propagates along the background magnetic field lines with the Alfven speed \(v_{\rm A}=c\sqrt{\sigma/(\sigma+1)}\), in conventional units. When \(\sigma\simeq 10\), \(v_{\rm A}\simeq c\), the reconnection rate can be approximated with \(v_{\rm r}=v_{\rm in}/c\). The magnetization values on both sides of the reconnection layer in all simulations are greater than \(8\), as demonstrated in Appendix A (Figs. 15, 16 and 19). To compute the reconnection rate we average the inflow velocity of \(6\) grid cells located on both sides of the layer. The structure of the layer is found by the Harris equilibrium method [53, 45]. According to analytical and numerical studies, the reconnection rate in 3D might be both lower or higher than in 2D. The reconnection rate depends on different parameters such as the initial setup, strength of the magnetic field, and turbulence of the system. [54] studied magnetic reconnection in 2D and 3D geometries using resistive MHD simulations and found that the reconnection rate in 3D was approximately twice as fast as in 2D. [55] found that in some cases the 3D reconnection rate can be lower than the 2D reconnection rate due to the complex interplay between the plasmoid instability and the turbulent background. Our study presents various initial setups in both two and three dimensions (Section 3) that affect the magnetization on both sides of the connection region, which in turn influences the reconnection rate. Our Ideal-MHD simulations result in faster reconnection in 3D than in 2D, while the opposite is observed in the Rel-MHD simulations, where the reconnection rate is slower in 3D. In Fig. 11 we show \(v_{\rm r}\) as a function of resolution in the simulations with KORAL simulations. We summarize the results of Fig. 11 as follows. In 2D setups: 1) Results of the Ideal-MHD simulations show that the resolution does not affect the reconnection rate in the resolutions \(\geq 256^{2}\). 
We confirm that in the non-relativistic simulations, the current sheet is well resolved in the resolutions \(\geq 256^{2}\) (It is also shown in the top panels of Fig. 13 at \(t\simeq 2.5t_{c}\) that the curves of \(\overline{B^{2}(t)}\) at higher resolutions are convergent). In the lower resolutions the reconnection rate changes as a function of resolution \(v_{\rm r}\approx 0.04\tilde{R}^{-0.7}\) (\(\tilde{R}=R/100\)). 2) Results of the Rel-MHD simulations show that the reconnection rate changes as a function of the resolution as \(v_{\rm r}\approx 0.25\tilde{R}^{-0.45}\) in the resolutions \(\leq 2048^{2}\). The current sheets and plasmoids are well resolved in the two highest resolutions. In both Ideal-MHD and Rel-MHD simulations in the lowest resolutions (\(64^{2}\) and \(128^{2}\)), the numerical resistivity is much higher than \(10^{-4}\), and the current layer is not resolved. The reconnection rate converges to a constant value at a lower resolution in the Ideal-MHD than in the Rel-MHD simulations. Therefore, in Rel-MHD, it is necessary to increase the resolution with respect to the non-relativistic case to reach a reconnection rate limit that is resolution independent. In 3D setups, the current sheets are not resolved in the resolution \(128^{3}\). With the higher resolutions \(256^{3}\) and \(512^{3}\), we do not see a significant effect of the resolution. In KORAL the lowest value of reconnection rate in 2D simulations at the highest resolution is about 0.1 in the Ideal-MHD and about 0.16 in the Rel-MHD. In 3D simulations, the value of the reconnection rate in the highest resolutions is around \(0.3\) in both Ideal-MHD and Rel-MHD simulations. Turning to the resisitive simulations, in Fig. 12 we plot the reconnection rates of Res-MHD runs with \(\eta=10^{-4},10^{-3}\), and \(5\times 10^{-3}\) in the resolution \(512^{2}\) in 2D and \(512^{3}\) in 3D. The reconnection rate changes as a function of resistivity, increasing by a factor of about 60% in the 3D case and 30% in the 2D one, as the resistivity changes from \(10^{-4}\) to \(5\times 10^{-3}\). This increase is much smaller than the factor 7.07 expected from the Sweet-Parker law (\(v_{r}\propto\eta^{1/2}\)). The dependence seems to be consistent with \(1/\log\eta\), instead. Given our fairly low resolution and the small number of points, we cannot make definite claims about the functional form of the reconnection rate. However, the reconnection rate we find is consistent with the dependence on the Lundquist number predicted in Petschek reconnection [\(v_{r}=a/\log S\), 20]. The proportionality constant is \(a=0.34\) for the 3D Figure 13: Time evolution of \(\overline{B^{2}}\) in simulations with different resolutions using PLUTO (_left panels_) and KORAL (_right panels_) for the simulations Ideal-MHD (_top panels_) and Rel-MHD (_bottom panels_). The value of \(\overline{B^{2}}\) is slightly higher in the simulations with KORAL. Note: the y-axis is common between left and right panels, and the legend is the same for all panels. simulations,7 and \(a=0.10\) for the 2D simulations. Here we assumed \(v_{\rm A}L=1\) and we take logarithms to the base 10 (\(\log\equiv\log_{10}\)). Footnote 7: For the 3D simulation, \(v_{r}\) is within \(\sim 10\%\) of \(\pi/(8\log_{10}S)\), assuming \(v_{\rm A}L=1\). An accurate fit to this formula can be found if we allow values of the characteristic scale \(v_{\rm A}L\) to be slightly larger than unity. 
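For completeness, the sketch below illustrates how the reconnection rate and its resolution scaling can be estimated in post-processing, using the relations quoted above (\(v_{\rm A}=c\sqrt{\sigma/(\sigma+1)}\) and \(v_{r}=\langle v_{\rm in}\rangle/v_{\rm out}\)). The averaging window and the example numbers are placeholders, not the values measured in the simulations:

```python
import numpy as np

def alfven_speed(sigma, c=1.0):
    """Relativistic Alfven speed v_A = c * sqrt(sigma / (sigma + 1))."""
    return c * np.sqrt(sigma / (sigma + 1.0))

def reconnection_rate(v_in_samples, sigma_upstream, c=1.0):
    """v_r = <v_in> / v_out, approximating the outflow by the upstream Alfven
    speed; for sigma >~ 10 this reduces to v_r ~ <v_in> / c."""
    return float(np.mean(np.abs(v_in_samples))) / alfven_speed(sigma_upstream, c)

def fit_power_law(resolution, v_r):
    """Least-squares fit of v_r = A * (R/100)**alpha in log-log space."""
    x = np.log(np.asarray(resolution) / 100.0)
    y = np.log(np.asarray(v_r))
    alpha, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), alpha

# Example: inflow averaged over 6 cells on both sides of a layer with sigma ~ 10,
# and a synthetic resolution scan resembling the 2D Rel-MHD trend quoted above.
print(reconnection_rate([0.15, 0.17, 0.16, 0.18, 0.16, 0.17], sigma_upstream=10.0))
res = np.array([128, 256, 512, 1024, 2048])
print(fit_power_law(res, 0.25 * (res / 100.0) ** -0.45))   # ~ (0.25, -0.45)
```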
## 6 Code comparison The codes we used in our simulations, PLUTO and KORAL, rely on solving the MHD equations (given in Section SS2) employing the finite volume method. The initial equations are typically formulated in terms of the primitive variables, which include the fluid density, pressure, and velocity, as well as the magnetic field (given in Section SS3). To solve the equations using the finite volume method, the computational domain is divided into a grid of cells, each of which contains a set of conserved quantities. These conserved quantities are related to the primitive variables through a set of conversion equations, which are typically derived from the conservation laws of mass, momentum, and energy. Although both PLUTO and KORAL employ the same scheme to calculate conserved quantity fluxes at the boundary of each grid cell, the conversion of primitive to conserved quantities differs between the two codes. PLUTO employs the inversion scheme provided by [28], while KORAL uses the \(1D_{W}\) inversion scheme outlined in [56]. We perform simulations of the OT test problem with PLUTO and KORAL codes in the simulations Ideal-MHD and Rel-MHD. The same initial conditions are used in both codes. Here we compare the energy components in the results, the ability of the codes to capture substructures, and the reconnection rates. In Fig. 13 we present the time evolution results for the magnetic energy in the Ideal-MHD and Rel-MHD simulations in PLUTO and KORAL. The value of \(\overline{B^{2}}\) in the simulations Ideal-MHD slightly increases in KORAL with respect to PLUTO. This difference in the value of \(\overline{B^{2}}\) is more obvious in the lower resolutions and in the later time steps. In addition, in Fig. 5, we showed that at the identical time steps of Rel-MHD simulation, the residual of the total energy in Rel-MHD in KORAL is typically slightly higher than the one in PLUTO. To investigate the difference between the codes we plot in Fig. 20 of Appendix A relative differences between KORAL and PLUTO of various quantities. In the Ideal-MHD simulations with sufficient resolution for the small numerical resistivity, both PLUTO and KORAL show almost the same numerical dissipation. In the Rel-MHD simulations, the difference between the codes is more pronounced. Also, by comparing the results in Ideal-MHD and Rel-MHD simulations in Fig. 13, we find that the numerical resistivity is negligible at the largest resolution \(4096^{2}\) in the Ideal-MHD simulations (the curves of two larger resolutions overlapping) while in the Rel-MHD simulations, one should increase the resolution to obtain a negligible numerical error. Figure 14: The reconnection rate as a function of resolution in simulations Rel-MHD in 2D and 3D. Red symbols indicate simulations with KORAL and blue symbols with PLUTO. As mentioned in SS4.1, we expect the plasmoid unstable current sheets when there is a hump in \(\overline{B^{2}}\) plot. We show the rest-mass density plot at \(t=2.5t_{c}\) in the simulation Ideal-MHD and \(t=9t_{c}\) in the simulation Rel-MHD at the highest resolution \(4096^{2}\) in Appendix A, Figs. 17 and 18. These density plots confirm that KORAL is more precise than PLUTO in capturing the substructures. We compare the reconnection rate in the simulation Rel-MHD in PLUTO and KORAL in Fig. 14. In Fig. 20, we show that the residual relative difference between various quantities in the Ideal-MHD simulation is at the level below 1%, so we only compare \(v_{r}\) in the Rel-MHD simulation. 
We observe that in both 2D and 3D setups the reconnection rate in KORAL simulations is higher than in PLUTO simulations. The magnetization on both sides of the reconnection layer directly affects the reconnection rate (which is discussed in SS5), and we showed that in KORAL simulations the magnetic energy (and corresponding magnetization) is higher than in PLUTO simulations. This causes a higher reconnection rate in KORAL simulations compared to PLUTO simulations, as shown in Fig. 14. ## 7 Summary and Conclusions We investigate how the resolution and dimensionality of the simulation setup affect the energy dissipation, substructure formation, and reconnection rate, all of which are critically dependent in astrophysical simulations on the onset of reconnection. We study these effects by performing the Orszag-Tang test problem in the numerical simulation codes PLUTO and KORAL. We perform a quantitative comparison between the results obtained from various setups, including relativistic, non-resistive MHD (Rel-MHD), non-relativistic, non-resistive MHD (Ideal-MHD), non-relativistic, resistive MHD (Res-MHD), in 2D as well as 3D simulations. First of all, we estimated the numerical resistivity of the simulations in each resolution to find a sufficient resolution in which we can resolve the substructures and study the energy conversion in our simulations. We used PLUTO code in resistive and non-resistive modes (Res-MHD and Ideal-MHD, respectively) in non-relativistic simulations. We show that the numerical resistivity in the resolution \(512^{2}\) in both 2D and 3D setups is \(\eta\approx 10^{-4}\), which is also the limit of the formation of a plasmoid unstable current sheet. After finding the sufficient resolution for overcoming the effects of numerical resistivity, we study energy conversion in Ideal MHD, Rel-MHD, and Res-MHD simulations. We showed that in Ideal-MHD and Res-MHD simulations magnetic and kinetic energies convert into internal energy and heat up the plasma. We also show that in Res-MHD simulations, as expected, the magnetic energy dissipation increases with increasing physical resistivity. In higher resistivity cases, there is a corresponding increase in internal energy. In relativistic simulations, Rel-MHD, we find that the relativistic shocks amplify the magnetic field with the magnetic energy \(\overline{E_{\rm B}}\) increasing by a factor of five at \(t=20\)% of total simulation time. It is also shown that magnetic energy converts into internal and kinetic energies which amplify the magnetic field for the second time during our simulation. The second increase in magnetic energy at \(t=90\)% of total simulation time is coincident with the formation of a set of plasmoid unstable current sheets. We also compare two state-of-the-art codes, PLUTO and KORAL, in both non-relativistic and relativistic simulations. Our findings indicate that in both Ideal-MHD and Rel-MHD simulations, KORAL simulations show higher magnetic energy, \(\overline{B^{2}}\), (implying less magnetic dissipation) compared to PLUTO with the difference more prominent at low resolutions. We show that in resolution \(1024^{2}\), in the Ideal-MHD simulations, the relative difference of relevant quantities in PLUTO and KORAL is less than \(10^{-2}\), while in the Rel-MHD simulations, for some quantities the residual reaches \(0.1\). In the highest resolution run (\(4096^{2}\)), we found that KORAL captures more substructures than PLUTO in both Ideal-MHD and Rel-MHD simulations. 
We show that the reconnection rate in all simulations in KORAL is higher than that in PLUTO--it is caused by higher magnetization in the reconnection layer region in KORAL. We study the effect of resolution on the reconnection rate \(v_{r}\) in our simulations. As expected, numerical resistivity influences the reconnection rate. Increasing the resolution leads to a decrease in both numerical dissipation and reconnection rate. In 2D simulations, \(v_{r}\) is initially a function of scaled resolution (\(\tilde{R}=R/100\)) as \(v_{r}\approx 0.04\tilde{R}^{-0.7}\) (Ideal-MHD) and \(v_{r}\approx 0.25\tilde{R}^{-0.45}\) (Rel-MHD). In each set of simulations, we find a resolution beyond which the reconnection rate is no longer affected by the resolution, and we find the limiting reconnection rate in this limit: in 2D simulations in KORAL, in the Ideal-MHD runs, \(v_{r}=0.1\) for resolutions \(\geq 512^{2}\); in the Rel-MHD, \(v_{r}\approx 0.18\) for resolutions \(\geq 2048^{2}\). In PLUTO simulations, the reconnection rate is lower than that in KORAL simulations. In PLUTO, in Ideal-MHD \(v_{r}\approx 0.03\), in Rel-MHD \(v_{r}\approx 0.05\). We conclude that the Rel-MHD simulations should be performed at resolutions at least four times larger than in the non-relativistic Ideal-MHD simulations, to reach a negligible effect of the resolution on the reconnection rate. In 3D simulations in KORAL the Ideal-MHD and Rel-MHD simulations are not directly comparable since we initialized the velocity and magnetic fields differently. Still, in both setups, the results are remarkably similar, with the effect of resolution on \(v_{r}\) not significant in higher resolutions. In both Ideal-MHD and Rel-MHD simulations with resolution \(512^{3}\) the reconnection rate \(v_{r}\simeq 0.3\) (Fig. 11). When comparing the reconnection rate in 2D and 3D setups, it is crucial to consider several parameters, such as the initial setup, the strength and topology of the magnetic field, and the turbulence of the system. In setups with the equivalent magnetization and turbulence levels, we show that the reconnection rate in 3D ideal MHD simulations is lower than that observed in 2D simulations. This trend is particularly notable in relativistic simulations when comparing the 2D and 3D setups. However, in the resistive runs (Res-MHD) the trend is the opposite, the reconnection rate is about a factor of 3 smaller in 2D simulations than in 3D ones. We also show that in the resistive simulations, the reconnection rate seems to be well approximated by a \(v_{r}\propto 1/\log\eta\) dependence, reminiscent of Petschek's fast reconnection [20]. The results presented here add to the information needed to evaluate the behavior of numerical MHD codes in different setups. The performance of the codes can be evaluated and compared only with a detailed account of the relation between the substructure formation and the amount of energy in each component. By using the standard Orszag-Tang test, we provided detailed quantitative information on energy components, reconnection rates and substructure formation. Our approach can be followed--and the results compared--for other codes. A caveat in our work here is that, because of the computational expense, we did not follow the convergence of the results in 3D up to the same resolutions as we did in the 2D setups. The new generation of simulations will unavoidably need such an update in benchmarking. ## Acknowledgements This project was funded by the Polish NCN grant No.2019/33/B/ST9/01564. 
MC acknowledges the Czech Science Foundation (GACR) grant No. 21-06825X. MW was supported by the European Research Council advanced grant "M2FINDERS - Mapping Magnetic Fields with INterferometry Down to Event hoRizon Scales" (Grant No. 101018682). High-resolution computations in this work were performed on the Prometheus and Ares machines, part of the PLGrid infrastructure. We thank K. Nalewajko and B. Ripperda for inspiring discussions and suggestions. ## Data availability The data underlying this article will be shared on reasonable request to the corresponding author.
2305.07193
Aggressive Internet-Wide Scanners: Network Impact and Longitudinal Characterization
Aggressive network scanners, i.e., ones with immoderate and persistent behaviors, ubiquitously search the Internet to identify insecure and publicly accessible hosts. These scanners generally lie within two main categories; i) benign research-oriented probers; ii) nefarious actors that forage for vulnerable victims and host exploitation. However, the origins, characteristics and the impact on real networks of these aggressive scanners are not well understood. In this paper, via the vantage point of a large network telescope, we provide an extensive longitudinal empirical analysis of aggressive IPv4 scanners that spans a period of almost two years. Moreover, we examine their network impact using flow and packet data from two academic ISPs. To our surprise, we discover that a non-negligible fraction of packets processed by ISP routers can be attributed to aggressive scanners. Our work aims to raise the network community's awareness for these "heavy hitters", especially the miscreant ones, whose invasive and rigorous behavior i) makes them more likely to succeed in abusing the hosts they target and ii) imposes a network footprint that can be disruptive to critical network services by incurring consequences akin to denial of service attacks.
Aniket Anand, Michalis Kallitsis, Jackson Sippe, Alberto Dainotti
2023-05-12T01:29:58Z
http://arxiv.org/abs/2305.07193v1
# Aggressive Internet-Wide Scanners: Network Impact and Longitudinal Characterization

###### Abstract.

Aggressive network scanners, i.e., ones with immoderate and persistent behaviors, ubiquitously search the Internet to identify insecure and publicly accessible hosts. These scanners generally lie within two main categories: _i_) benign research-oriented probers; _ii_) nefarious actors that forage for vulnerable victims and host exploitation. However, the origins, characteristics and the impact on real networks of these _aggressive scanners_ are not well understood. In this paper, via the vantage point of a large network telescope, we provide an extensive longitudinal empirical analysis of aggressive IPv4 scanners that spans a period of almost two years. Moreover, we examine their _network impact_ using flow and packet data from two academic ISPs. To our surprise, we discover that a non-negligible fraction of packets processed by ISP routers can be attributed to aggressive scanners. Our work aims to raise the network community's awareness for these "heavy hitters", especially the miscreant ones, whose invasive and rigorous behavior _i_) makes them more likely to succeed in abusing the hosts they target and _ii_) imposes a network footprint that can be disruptive to critical network services by incurring consequences akin to denial of service attacks.

We quantify the impact of these aggressive scanners on a real network in terms of packet volume. We found that **AH** packets contribute between 0.1-5.85% of the _total ingress/egress packets_ processed by core routers on a typical day; this is a non-negligible fraction. Our main contributions include: _i_) up-to-date longitudinal profiling of Internet-wide "aggressive" scanners and _ii_) measurable evidence that the aggregate network footprint of these scanners is not as inconspicuous as researchers and operators generally assume. This traffic can be disruptive to network operators, especially traffic originating from origins that never disclose their intents (as opposed to the seemingly benign "Acknowledged" lists (Bradner, 2017) that do reveal the scanning purpose). Scanners of unspecified intent are the vast majority of probers we categorize as "aggressive", and can be associated with botnet propagation and nefarious reconnaissance (e.g., see (Bradner, 2017)). We plan to produce and share daily lists of such scanners (using all three definitions) that the network and "threat exchange" communities (Steiner, 2017; Steiner, 2017) could subscribe to, hoping that they can be utilized by operators to block and mitigate this disruptive Internet background noise.

## 2. Description of Datasets

**A. Darknet data.** We analyze data from the ORION NT to identify and then study the aggressive hitters. To study yearly trends, we split the Darknet dataset into two parts: **Darknet-1** (spanning the entire 2021) and **Darknet-2** (January 1st, 2022-October 15th, 2022). See Table 1. Central to our analysis of Darknet data is the notion of a **darknet event**. For this study, a darknet event represents a "logical scan" such as those defined in (Krause et al., 2017; Steiner, 2017). Following (Krause et al., 2017), a _logical scan_ summarizes the scanning activities of a source IP appearing in the Darknet. TCP-SYN packets, UDP packets, or ICMP "Echo Request" packets are the three _traffic types_ we consider as "scanning packets" (Krause et al., 2017).
A logical scan represents the activity of a _source IP_ associated with a particular Darknet _destination port_ and _traffic type_. For each darknet event / logical scan we record its _start_ and _end_ timestamps; an event is considered to have _ended_ when no packets have been seen in the Darknet from the event's source IP to the event's targeted destination port and traffic category for more than a "timeout" period of around 10 minutes1. For each event, we record _total packets_, number of _unique Darknet destinations_ contacted and metadata (Steiner, 2017). Footnote 1: The timeout or “expiration” period is based on ideas from (Steiner, 2017) (see Section II.E, “Flow Timeout Problem”) and the intuition behind it is to avoid splitting “long scans” into individual shorter ones. To calculate this timeout interval, one needs the Darknet size, an assumed scanning rate and an assumed duration for the “long scan”; we used 100pps and 2 days, respectively. **B. ISP flows.** To quantify the scanners' network impact, we utilize ISP flows from Merit. The flows are in Netflow format and collected with a _packet sampling_ rate of 1:1000 at three core Merit routers. The Netflow collectors are configured to only sample _ingress_ and _egress_ traffic to/from the ISP. i.e., internally facing router interfaces are not included in the flow data. We employ two datasets: **Flows-1** (January 15th, 2022 to January 21st, 2022) and **Flows-2** (October 1st, 2022). **C. Packet streams.** To further validate the network impact results, we also performed measurements on mirrored packet streams at Merit and the campus network at the University of Colorado (to be referred as CU). CU is not associated with Merit (i.e., Merit does not provide upstream/transit services to CU and the IP spaces of both networks are different), and serves a population of 100,000 users. These _non-sampled_ packet streams include the majority of ingress/egress traffic observed at a major core router at Merit (one of the three routers we have flow data from) and all campus traffic at CU. We examine 72 hours starting on 2022-11-28. During then, at Merit, the monitoring station processed traffic exceeding 8 Mpps (million packets per second) and \(\approx\) 80 Gbps. At CU, we observed peak rates at 5 Mpps and \(\approx\) 40 Gbps. **D. Acknowledged scanners.** To obtain insights into the seemingly benign/research scanners while also partially validating our lists of detected aggressive scanners, we employ the publicly available list of "Acknowledged Scanners" (Bradner, 2017). The list curator considers a scanning IP as an "Acknowledged Scanner" ("ACKed" scanner, in short) if the scanners make any efforts to disclose their intentions (e.g., research purposes). At the moment our analysis was performed, the list (Bradner, 2017) makes available the source IPs of 36 unique organizations. **E. Honeypot data.** To cross-validate the lists of non-ACKed scanners (i.e., the likely miscreat ones) and shed light into their behaviors, we employ data from GreyNoise (GreyNoise, 2017). GreyNoise (GN) operates distributed honeypot sensors at multiple cloud providers meticulously placed throughout the world. The IPs observed contacting their sensors are tagged by the GN team via an internal process. An IP is annotated as _benign_,_malicious_ or _unknown_; more specific _tags_ are also available for some IPs. We examined GN data (with 2,962,153 unique IPs) for the whole month of June 2022. 
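As an illustration of the darknet-event construction described in (A), the following sketch groups scanning packets into logical scans keyed by (source IP, destination port, traffic type) and closes an event after the roughly 10-minute timeout. Field names and the exact timeout handling are assumptions for the example, not the production pipeline:

```python
TIMEOUT_SECONDS = 600   # ~10-minute expiration used to close a logical scan

def aggregate_events(packets, timeout=TIMEOUT_SECONDS):
    """Group darknet packets into logical-scan events.

    `packets` is an iterable of (timestamp, src_ip, dst_ip, dst_port, proto)
    tuples sorted by timestamp, with proto in {'tcp-syn', 'udp', 'icmp-echo'}.
    """
    open_events, closed = {}, []
    for ts, src, dst, port, proto in packets:
        key = (src, port, proto)
        ev = open_events.get(key)
        if ev is not None and ts - ev["end"] > timeout:
            closed.append(open_events.pop(key))   # previous scan has expired
            ev = None
        if ev is None:
            ev = {"src": src, "port": port, "proto": proto,
                  "start": ts, "end": ts, "packets": 0, "dests": set()}
            open_events[key] = ev
        ev["end"] = ts
        ev["packets"] += 1
        ev["dests"].add(dst)
    closed.extend(open_events.values())
    for ev in closed:
        ev["unique_dests"] = len(ev.pop("dests"))
    return closed
```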
**Ethical considerations.** Working with real-world traces requires ethical and responsible data handling. Our measurement infrastructure was designed with careful consideration and follows best practices imposed by the security/privacy boards and network managers of the organizations that operate the corresponding instrumentation. For instance, all of our datasets are passively collected and we never interact or probe any of the identified IPs present in our datasets. The data were analyzed in a secure manner only by the authors. Moreover, we followed the "code-to-data" paradigm for analyzing the live packet streams in which our code was \begin{table} \begin{tabular}{l|c|c|c|c} \hline & **Darknet-1** & **Darknet-2** & **Flows-1** & **Flows-2** \\ \hline Packets (Billions) & 1,098 & 833 & 7,560 & 770 \\ Source IPs (Millions) & 123 & 57 & 7 & 2.7 \\ Dest. IPs (Millions) & 0.475 & 0.475 & 22 & 10 \\ Total Events (Billions) & 26 & 32 & - & - \\ \hline \end{tabular} \end{table} Table 1. Description of Datasets. shared with and executed by authorized personnel with access to the mirrored data. We do not collect nor examine any device MAC addresses or user payload, and we merely performed packet counting (i.e., total packets originating from **AH**) when examining the packet streams. Darknet data are generally considered to pose minimal privacy risks; however, we take measures to not expose any identifiable information that might endanger networks or individuals. E.g., in the analyses that follow we elected to not publicly disclose the actual ASN and organization names that originate **AH** to protect the reputation of these networks. ## 3. Aggressive Network Scanners **Definition 1: Address Dispersion.** We classify a source IP appearing in our Darknet as aggressive whenever it is involved in a darknet event that targets 10% or more dark IPs. This definition was also employed in (Darknet et al., 2017) to identify "large scans". We found 2,977,242 scanning events in Darknet-1 and 2,075,485 events in Darknet-2. We identified 158,681 distinct IPs satisfying this condition in the Darknet-1 dataset and 155,010 IPs in 2022. **Definition 2: Packet Volume.** The second definition is based on _packet volume_. For each Darknet dataset, we compile the _Empirical Cumulative Distribution Function (ECDF)_ for the number of packets sent per event. Using the empirical distribution, we calculate the \((1-\alpha)\)th-percentile, and declare a scanner as "aggressive" whenever it participates in an event with total packets transmitted crossing the critical threshold. We utilized \(\alpha=0.0001\). The thresholds that correspond to the top-0.01% events were found to be 64,810 packets and 23,491 for Darknet-1 and Darknet-2, respectively. The number of identified aggressive source IPs found from this definition in 2021 was 159,159. We noticed that these numbers are very similar to those obtained using the address dispersion rule; indeed, the _Jaccard similarity score2_ for the two sets of hitters is found to be 0.8. Due to the high similarity among the two populations in the sequel we mostly focus our attention to scanners identified using the address dispersion definition. Footnote 2: Given sets \(D_{t}\) and \(D_{j}\), the value \(J:=|D_{t}\cap D_{j}|/|D_{t}\cup D_{j}|\) denotes the Jaccard score, where \(|\cdot|\) denotes the set cardinality. 
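To make Definition 1 concrete, the sketch below flags the source IPs that participate in at least one event covering 10% or more of the monitored dark space (the dark-space size of roughly 475,000 addresses is taken from Table 1). This is an illustrative reimplementation, not the code used in the study:

```python
DARK_SPACE_SIZE = 475_000   # approximate number of monitored dark IPs (Table 1)

def aggressive_by_dispersion(events, dark_space=DARK_SPACE_SIZE, fraction=0.10):
    """Source IPs involved in at least one event that touched at least
    `fraction` of the monitored dark address space.

    `events` is an iterable of (src_ip, unique_darknet_destinations) pairs,
    one entry per logical scan.
    """
    threshold = fraction * dark_space
    return {src for src, dests in events if dests >= threshold}

# Example: only the first source crosses the 10% dispersion threshold.
print(aggressive_by_dispersion([("192.0.2.1", 60_000), ("198.51.100.7", 900)]))
```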
**Definition 3: Number of Distinct Destination Ports.** Our final definition is based on the number of distinct ports that a scanning IP contacts in the Darknet in a given day. We again source our data to obtain the ECDFs for the number of unique ports for both years. We use the same \(\alpha=0.0001\) to find the critical threshold. The ECDFs for Darknet-1 and Darknet-2 differ, indicating a shift towards more scanned ports (see Izhikevich _et al._(Izhikevich et al., 2017) for a possible explanation). For Darknet-1, we classified the IPs scanning more than or equal to 6542 ports per day as aggressive, whereas for 2022 the threshold is 57,410 ports. ## 4. Network Impact Having the lists of **AH** available, we now shift focus into understanding the impact that these scanners pose to networks. First, we utilize flow data from Merit to measure the collective packet volume generated by the identified **AH** and processed by the ISP's routers as they transit the network. We start by individually checking flow data from three core Merit routers. These routers collectively process more than 50% of all packets transiting Merit's network. Table 2 showcases the network impact imposed by aggressive scanners for definition #1 (we omit results for the second definition since that scanning population is very similar to the one identified with the first definition; results for definition #3 show a less pronounced impact, albeit non-negligible, but we omit them for brevity). We report on the total number of packets observed at a specific vantage point _originating_ from a source IP belonging to an identified **AH**. In addition, we also include the _portion of traffic_ that these packets amount to with regards to all the packets that a given router processes for the days examined. The tables highlight a somehow unexpected result: the daily fraction of aggressive scanners' packet volume lies between \(1.1-5.85\%\); this is a relatively high percentage and indicates that the impact of aggressive scanners on network traffic is not negligible. To rephrase, we see evidence that, _on average, at least one out of every hundred ingress or egress packets that a router processes is a packet originating from an **AH**._ Table 2 illustrates that the peering arrangements in place at the ISP directly affect the fraction of **AH** packets recorded on a given router. For instance, we remark that router-1 endures the highest impact with regards to hitters identified with the address dispersion metric; this can be explained by the fact that definition #1 **AH** frequently originate from Europe and Asia, as shown in Table 5, and router-1's routing policies (e.g., upstream tier-1 peers) dictate that such traffic would enter Merit at that point-of-presence. We next reflect further on interpreting and validating this surprising result. We note that the higher percentages occur on weekends, namely when the overall Merit traffic is lower. We also speculate that _content caching_(Merrit et al., 2017) plays a critical role in "amplifying" the effect of network scanning. Merit has put in place careful traffic engineering considerations to have their users benefit from content caches (e.g., videos, etc.) that reside _within_ the ISP. User traffic to/from these content caches does not traverse the 3 border routers we study here so these packets do not contribute to the calculated ratio. 
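The router-level fractions reported in Table 2 can be approximated from sampled flow records as follows; because the 1:1000 packet sampling is uniform, the sampling factor cancels in the ratio. The record layout below is an assumption for illustration, not the collection pipeline used at Merit:

```python
def ah_packet_fraction(flow_records, ah_ips):
    """Fraction of ingress/egress packets attributable to aggressive hitters.

    `flow_records` is an iterable of (src_ip, packet_count) pairs taken from
    sampled NetFlow; no rescaling is needed for the fraction itself.
    """
    ah = set(ah_ips)
    total = ah_pkts = 0
    for src, pkts in flow_records:
        total += pkts
        if src in ah:
            ah_pkts += pkts
    return ah_pkts / total if total else 0.0

# Example with toy records: two AH sources among ordinary traffic.
records = [("192.0.2.1", 40), ("198.51.100.7", 10), ("203.0.113.9", 950)]
print(ah_packet_fraction(records, {"192.0.2.1", "198.51.100.7"}))  # 0.05
```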
To further validate our results, and to eliminate the possibility that the high network impact might be due to some bias arising from the sampled flow data, we next examine the mirrored packet streams at both Merit and CU. Figure 1 illustrates the results, offering some interesting findings: _i) This non-sampled dataset confirms that the network impact at Merit (and router-1, specifically) lies around 2% (see left panel, top row)3; _ii_) the network impact at CU is also high, but an order of magnitude less than Merit (see right panel, top row), hovering just shy of 0.10%. We hypothesized that this could be an artifact of the lack of content caching at CU which means that the monitoring station at CU sees more video-related traffic compared to the Merit station. Indeed, we checked with the network engineers at CU and they verified that no content caching is present within their network and off-net caching is provided by their upstream ISP; _iii_) the instantaneous impact from **AH** could even exceed 7% on certain occasions (middle row panels) on both networks, reaching even 12% at Merit; _iv_) as we observe on the bottom row panels, on several 1-second intervals (shown in red color) when the **AH** impact is high, overall network traffic could also reach high levels (e.g., exceeding 6 Mpps). This implies that **AH** are overwhelming the network even during its "busy" times, and consequently network performance might suffer due to potentially incurred packet drops and network delays. In short, these **AH** collectively exhibit behavior akin to denial-of-service attacks. Footnote 3: The (cumulative) fraction declines over time since we transition from a weekend day to a weekday. Further, we performed this 3-day analysis using **AH** for Nov. 27th, 2022, and due to DHCP churn (see (Sund The left panel shows the number of _active_**AH** per day (which includes **AH** that may have started scanning prior to that day), the number of unique _daily_**AH** (i.e., ones that started their scanning efforts during that day), and the number of _all_ active and daily scanners. The lines for the latter two scanner numbers seem to coincide because their values are very similar; their average difference is only 8,471 IPs. The right panel shows the number of packets transmitted by the number of _daily_ scanners in a given day, juxtaposed with the aggregate Darknet scanning packets. Due to the _darknet events_ data format, we can only calculate packet statistics for daily scanners. The plot shows that the number of aggressive scanners increases over time. On average, we found 1452 (3876) daily (active) hitters per day in 2021, whereas there are 1779 (5349) daily (active) hitters per day in 2022. Figure 3 (right) depicts that the identified hitters contribute the vast majority of packets seen in the Darknet. We observe that on average around 0.1% of scanning IPs appearing in the Darknet and corresponding to **AH** are responsible for over 63% of the total packets captured per day in ORION NT. Next, we discuss the origins of **AH**. We characterize the type of Autonomous Systems (AS) that originate these scanners, and the country of origin. Table 5 tabulates the top-10 networks and the countries associated with definition #1 **AH**. (Numbers in parentheses indicate ACKed scanners.) 
We also studied the origins of **AH** based on the other two definitions; for space economy, we omit these tables, but we point out that the origins for the first two definitions are very similar, echoing the previous observations that scanners from the first two definitions (address dispersion and packet volume) largely overlap. On the other hand, the origins for the third group differ, and we even see the presence of research institutions. Notably, a certain _US-based cloud provider_ ranks top in all six definitions/datasets (except once), indicating strong preference from scanning organizations for its use. Next, we validate our inferences using the publicly available lists of "Acknowledged Scanners" [9], aiming to shed light into organizations that are seemingly benign and perform aggressive scanning for research purposes. We consider an identified **AH** as an ACKed scanner if _i_) its IP is within the list of IPs available in [9]; _ii_) we find a match via reverse DNS checks. I.e., we compiled a list of 48 "keywords"(see list [2]). based on the reverse DNS records of the IPs in [9]. Table 6 summarizes the matching results. E.g., we find that 4706 IPs from 27 distinct organizations using definition #1 and Darknet-1 are indeed **AH**. We note that we discovered several IPs (around 7600 in total) belonging to organizations considered as "ACKed scanners" that were not included in [9]. Overall, we identified 7,974 IPs from 29 unique ACKed scanning organizations (out of 36 in [9]) during the full 22-months period across all definitions. We next characterize the aggressive hitters in terms of the top applications they target (with regards to packets received). We also break down the attempts against each port based on whether the ZMap, Masscan or "Other" fingerprints have been observed (see [17] for the ZMap, Masscan fingerprints). Figure 4 shows the top ports/protocols for definition #1. We notice that 20 out of top 25 ports are present both in 2021 and 2022, and that **AH** send large number of packets to TCP ports. Out of top 25 services which receive the most number of packets in 2021, only 4 UDP-based services are targeted. ICMP (Echo Requests) completes the top-25 set. Next, we take a moment to compare this behavior with prior work [17], which also employed Merit's Darknet. Figure 2 in [17] shows the same type of **AH** (i.e.,_large scans_ targeting more than 10% of the dark IP space) and offers a baseline for comparison. Indeed, **AH**'s profile has dramatically changed since the Durumeric _et al._ 2014 study. SSH was the top-targeted port by **AH** back then, but it now ranks 3rd in both 2021 and 2022. The top-ranked aimed ports currently, \begin{table} \begin{tabular}{l|c c|c|c|c} \hline \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Address Dispersion**} & \multicolumn{2}{c|}{**Packet Volume**} & \multicolumn{2}{c}{**Total Ports**} \\ \cline{3-6} \multicolumn{1}{c|}{} & **2021** & **2022** & **2021** & **2022** & **2022** \\ IP & 766 & 766 & 523 & 762 & 317 & 29 \\ Domain matches & 4672 & 4382 & 4334 & 5513 & 71 & 31 \\ \hline Total IPs & 4706 & 4418 & 4350 & 5549 & 325 & 31 \\ Packets (Billions) & 158.3 & 130.9 & 152.5 & 145.2 & 299.0 & 5.7 \\ Packets (\(\pm\)all **AH**) & 20.4 & 24.1 & 19.9 & 24.3 & 34.0 & 28.1 \\ \hline Total Orgs & 28 & 25 & 27 & 27 & 8 & 4 \\ \hline \end{tabular} \end{table} Table 6: Validation via “ACKed Scanners” lists [9]. 
\begin{table} \begin{tabular}{l|c|c|c} \hline & **Router-1** & **Router-2** & **Router-3** \\ \hline Definition \# 1 & 3.17 (1.01\%) & 2.42 (0.92\%) & 5.47 (2.52\%) \\ Definition \# 2 & 3.35 (1.06\%) & 3.13 (1.19\%) & 5.55 (2.56\%) \\ Definition \# 3 & 0.5 (0.16\%) & 2.83 (1.08\%) & 0.59 (0.27\%) \\ \hline \end{tabular} \end{table} Table 4: Network impact attributed to ACKed scanners. We report total packets sent by ACKed (in billions) and their fraction amongst all ingress/egress packets.

Figure 3: Temporal trends (definition #1, address dispersion).

Table 5: Top-10 origin networks (AS type) and countries associated with the definition #1 **AH**; numbers in parentheses indicate ACKed scanners. [The tabulated values of this table are garbled in the source and are not reproduced here.]

according to ORION NT, are Redis and Telnet; neither of them were in the top-5 ports in 2014. This result is somewhat expected if one considers the rise of IoT applications and the botnet families that target Telnet services on IoT devices (e.g., see (Wang et al., 2017)). Further, Redis vulnerabilities are recently popularly mined for Cryptojacking (Bradner et al., 2017) and other application-level attacks (Richter et al., 2017). Looking at Figure 3 in (Richter et al., 2017), we also notice that ZMap/Masscan currently play a prominent role in Internet-wide scanning whereas in 2014 their presence was minimal (as expected, since they were relatively unknown tools then). Comparing with the Richter _et al._ study (Richter et al., 2017), we do observe some similarities in the top-ranked ports (see Figure 10 (Richter et al., 2017)) as well as some notable differences. E.g., Telnet was the top-scanned port in the scanners identified in Richter _et al._ (Richter et al., 2017), agreeing with current trends (i.e., Telnet is the 2nd most scanned port in our datasets). However, we notice that Redis/6379 was absent from the rankings of Richter _et al._ (Richter et al., 2017).
Interestingly, we also see that TCP/445, one of the most scanned ports in Richter _et al._ (Richter et al., 2017), is not preferred by **AH**. This agrees with the results in Durumeric _et al._ (Durumeric et al., 2017) where we see TCP/445 mostly associated with "small scans" (i.e., scanning less than 10% of the Darknet space; see Figure 2, (Richter et al., 2017)). We also validate our results using lists of scanners obtained from GreyNoise (Richter et al., 2017) in which nefarious aggressive scanners are included. Using the month of June 2022 as a basis for comparison, we found a significant overlap between the two vantage points; namely, on average 99.3% of **AH** identified in our Darknet are also found in GN on a given day. Since GreyNoise operates a "distributed" honeypot in several regions worldwide, this suggests that most of our identified hitters are not performing localized scans, but rather engage into macroscopic Internet-wide behaviors. ## 6. Related Work Several notable works have leveraged darknet data to understand IPv4 macroscopic activities; see, e.g., (Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017). For instance, network telescopes have been employed to study malware and botnet outbreaks (Durumeric et al., 2017; Durumeric et al., 2017; Durumeric et al., 2017), network outages (Durumeric et al., 2017; Durumeric et al., 2017), distributed denial of service attacks (DDoS) (Shi et al., 2017; Durumeric et al., 2017), trends in Internet-wide scanning (Richter et al., 2017; Durumeric et al., 2017), misconfigurations (Durumeric et al., 2017; Durumeric et al., 2017), address usage (Bradner et al., 2017; Durumeric et al., 2017), etc. Leveraging the large "aperture" offered by large Darknets (i.e., ones that monitor hundreds of thousands or even millions of dark IPs), one can detect even moderately paced scans within only a few seconds with very high probability (assuming uniform scans--see (Durumeric et al., 2017)). Our study is closest to the works of Durumeric _et al._ (Durumeric et al., 2017) and Richter _et al._ (Richter et al., 2017). Scanning trends have changed since these studies were conducted (2014 and 2019, respectively), and we document some differences in Section 5. To the best of our knowledge, this study is the first that quantifies the network impact of aggressive Internet-wide scanners. We note though that we have not examined IPv6 scanners (Durumeric et al., 2017; Durumeric et al., 2017) nor their impact. The recent work in (Richter et al., 2017) studies such scanners through the lens of a large Content Delivery Network and available firewall logs. We leave analysis of **AH** IPv6 scanners as future work. ## 7. Conclusions The paper studies a germane sub-population of Internet-wide IPs, namely the **AH** observed at the ORION NT. The impact on the network of these **AH**, as shown in the paper, is surprisingly high. Thus, understanding their behavior is important, with the tangible goal of potentially blocking malicious ones (e.g., the non-ACKed ones) either at the "edge" of an ISP or as they transit the Internet. An important security implication of these **AH**, which are intense and persistent, is that they are more likely to succeed in finding the vulnerabilities they seek. 
Further, from a network performance perspective, a critical consequence is that high packet rates (see Figure 1) from these **AH** could lead to service degradation akin to ones occurring during DoS attacks. Thus, raising awareness towards them is important; we plan to share curated lists of these **AH** with the community on a regular basis. We offer three concrete methodologies on how to identify **AH**. With the proposed methodologies we aim at obtaining "quality lists" of scanners, minimizing false positives due to spoofing or misconfigurations. Further, succinct **AH** lists have practical implications: engineers that would consider blocking Internet-wide scanners are likely to focus anyways on the top ones in order to minimize the risk of blocking legitimate traffic due to DHCP IP churn and NAT considerations (Shen et al., 2017). In fact, as Figure 6 (right, Zipf-like distribution) in the Appendix shows, even starting by blocking a small amount of **AH**, a large fraction of the problem is ameliorated. Future plans include further investigating the impact of the aggressive hitters on more networks beyond the academic ones studied here. In addition, by examining **AH** observed at additional vantage points (e.g., other large Darknets), we are aiming to further validate that there is no bias in our existing results. The fact that we identified **AH** using Merit's "dark" IP space and that these **AH** contribute an important traffic portion at a completely different network (i.e., CU campus) points towards no selection bias. We leave analysis of heavy IPv6 scanners as part of future work, along with further characterizations of the IPv4 **AH** population. Figure 4. Top-25 ports targeted by **AH** (definition #1).
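To illustrate the practical upshot of the Zipf-like concentration mentioned above (blocking even a small number of **AH** removes a large fraction of their traffic), one can compute the cumulative packet share covered by the top-\(k\) scanners. The data below are synthetic and for demonstration only:

```python
import numpy as np

def cumulative_share(packets_per_ah):
    """Cumulative fraction of AH traffic covered by blocking the top-k
    scanners, after sorting them by packet volume."""
    v = np.sort(np.asarray(packets_per_ah, dtype=float))[::-1]
    return np.cumsum(v) / v.sum()

# Example with a synthetic heavy-tailed population of 1,000 scanners.
rng = np.random.default_rng(0)
pkts = rng.zipf(1.5, size=1000).astype(float)
share = cumulative_share(pkts)
print(share[9], share[99])   # share covered by the top 10 and top 100 scanners
```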
2304.08189
Control and Coordination of a SWARM of Unmanned Surface Vehicles using Deep Reinforcement Learning in ROS
An unmanned surface vehicle (USV) can perform complex missions by continuously observing the state of its surroundings and taking action toward a goal. A SWARM of USVs working together can complete missions faster, and more effectively than a single USV alone. In this paper, we propose an autonomous communication model for a swarm of USVs. The goal of this system is to implement a software system using Robot Operating System (ROS) and Gazebo. With the main objective of coordinated task completion, the Markov decision process (MDP) provides a base to formulate a task decision problem to achieve efficient localization and tracking in a highly dynamic water environment. To coordinate multiple USVs performing real-time target tracking, we propose an enhanced multi-agent reinforcement learning approach. Our proposed scheme uses MA-DDPG, or Multi-Agent Deep Deterministic Policy Gradient, an extension of the Deep Deterministic Policy Gradients (DDPG) algorithm that allows for decentralized control of multiple agents in a cooperative environment. MA-DDPG's decentralised control allows each and every agent to make decisions based on its own observations and objectives, which can lead to superior gross performance and improved stability. Additionally, it provides communication and coordination among agents through the use of collective readings and rewards.
Shrudhi R S, Sreyash Mohanty, Susan Elias
2023-04-17T12:01:55Z
http://arxiv.org/abs/2304.08189v1
Control and Coordination of a SWARM of Unmanned Surface Vehicles using Deep Reinforcement Learning in ROS ###### Abstract An unmanned surface vehicle (USV) can perform complex missions by continuously observing the state of its surroundings and taking action toward a goal. A SWARM of USVs working together can complete missions faster, and more effectively than a single USV alone. In this paper, we propose an autonomous communication model for a swarm of USVs. The goal of this system is to implement a software system using Robot Operating System (ROS) and Gazebo. With the main objective of coordinated task completion, the Markov decision process (MDP) provides a base to formulate a task decision problem to achieve efficient localization and tracking in a highly dynamic water environment. To coordinate multiple USVs performing real-time target tracking, we propose an enhanced multi-agent reinforcement learning approach. Our proposed scheme uses MA-DDPG, or Multi-Agent Deep Deterministic Policy Gradient, an extension of the Deep Deterministic Policy Gradients (DDPG) algorithm that allows for decentralized control of multiple agents in a cooperative environment. MA-DDPG's decentralised control allows each and every agent to make decisions based on its own observations and objectives, which can lead to superior gross performance and improved stability. Additionally, it provides communication and coordination among agents through the use of collective readings and rewards. Reinforcement Learning Robot Operating System (ROS) Localization and Tracking Unmanned Surface Vehicles (USVs) Deep Deterministic Policy Gradient (DDPG) Multi-Agent DDPG (MA-DDPG) ## 1 Introduction Unmanned surface vehicles (USVs) have gained increasing attention in recent years, due to their potential impact on various applications, such as oceanographic research, environmental monitoring, and maritime security. The effectiveness of individual USVs in accomplishing complex tasks is limited. To overcome this challenge, swarm-based approaches have emerged as promising solutions. A powerful approach to enable a swarm of USVs is reinforcement learning, a sub-field of machine learning. Using this, the USVs will be capable of learning from their environment and making informed decisions to achieve specific objectives. This research paper explores the potential of a reinforcement learning-based swarm of USVs to reduce marine debris in prominent water bodies while ensuring that inaccessible parts of larger water bodies are adequately cleaned by the collective strength of the swarm. The strategy involves collaboration between multiple USVs within a SWARM. The purpose of this research is to utilize multi-agent reinforcement learning to effectively clear marine debris with a greater degree of efficiency and further implement the research using ROS. The paper presents a comprehensive literature review and discusses the challenges and opportunities in implementing such swarms. The accumulation of marine debris is caused in part by littering, storm winds, and poor waste management. Approximately 80 percent of it originates from land-based sources. Marine debris can majorly include plastic objects such as beverage bottles, bottle caps, good wrappers and plastic straws. We intend to reduce the overall marine debris present in prominent water bodies through a swarm of USVs. This method will help to reduce the amount of time necessary for the space to be cleaned. 
By combining the collective strength of the USV SWARM, it would also be possible to clean inaccessible areas of larger water bodies effectively. We are currently looking at a USV SWARM to coordinate navigation and target tracking of the trash, which can significantly reduce the time required for cleaning up marine trash by focusing on efficiency. The purpose of this paper is to address the problem of a swarm of unmanned surface vehicles (USVs) communicating to achieve a common objective. In our case, we are trying to clear marine residues from large bodies of water using a group of three to four autonomous USVs. To reduce the complexity of the problem, we first examine trash that is uniformly sized while utilizing USVs that are surface vehicles. ## 2 Related Work A number of studies have investigated the use of RL to optimize USV performance. In Wang et al. [3], data-driven performance-prescribed reinforcement learning (DDP-PPRL) was used to control an unmanned surface vehicle. A reinforcement learning (RL)-based approach to achieving optimal tracking control of an unknown unmanned surface vehicle (USV) is presented in Wang et. Al [4]. Zhao et al. [5] optimized path following for an underactuated USV using smooth-convergent deep RL, whereas an auto-tuning environment for static obstacle avoidance in USVs was proposed by Guardeno et al. [9]. RL-based swarm control of UAVs has also been investigated. In aerial reconfigurable intelligent surfaces, Samir et al. [1] optimized the same using RL, and a distributed deep RL was proposed by Han et al. [6] for straight-path following and formation control of USVs. Kim et al. [13] presented a multi-agent deep RL approach for path planning in unmanned surface vehicles (USVs), and Wang et al. [19] proposed an adaptive and extendable control method for USV formations using distributed deep RL. RL has also been utilized in USVs for collision avoidance. Meyer et al. [14] proposed COLREG-compliant collision avoidance for USVs, whereas Na et al. [11] utilized bio-inspired collision avoidance with deep RL in swarm systems. Luis et al. [17] employed censored deep RL for Autonomous Surface Vehicles patrolling large water resources. In addition, swarm intelligence-based methods for USV control have been investigated in a number of studies. Zhao et al. [2] employed deep RL with random braking for USV formation and path-following control, whereas Xin et al. [7] suggested a greedy mechanism-based particle swarm optimization for USV path planning. Other studies have investigated the use of RL for USV path planning and localization. Yan et al. [10] proposed an RL-based method for AUV-assisted localization in the Internet of Underwater Things, whereas Yu et al. [21] developed a USV path planning method with velocity variation and global optimization using the AIS service platform. Lastly, a number of studies have surveyed UAVs, including their primary challenges and future trends. For example, Jorge et al. [8] provided an overview of USVs for disaster robotics, whereas Zereik et al. [18] analyzed the challenges and future trends in marine robotics. In conclusion, RL has been extensively utilized for USV control, and swarm control utilizing ROS is an active area of research. The studies cited in this section provide a comprehensive overview of the pertinent literature, including methods for optimizing USV performance, collision avoidance, and path planning, among others. 
## 3 Proposed Algorithm When we deal with the problem of cleaning up marine trash, we assume we are working in highly complex surroundings. As a result, the set of states and actions in the environment are complex and cannot be dealt with by the simple Deep Q-Network Architecture which is generally used to handle simple feedback control tasks. To overcome this problem, we look at a paradigm of reinforcement learning architectures known as Actor-Critic Networks. These architectures are well-known for their capability to handle continuous action spaces efficiently. The algorithm we use initially for the single agent environment is the Deep Deterministic Policy Gradient. This is an Actor-Critic Network for complex control environments. The algorithm uses a combination of target actor-critic and actor-critic networks to converge to an optimal solution by modifying the parameters of the actor-critic network using the target networks; these are copies of the value and policy networks used to calculate the target values within the training process. The variance of value estimates can be reduced by DDPG by periodically updating the target networks, further improving the learning process stability. The networks are updated based on a hyperparameter (\(\tau\)), which determines the rate at which updates are performed. Then we move to a multi-agent environment where we emphasize the MADDPG algorithm, which performs the same task of coordinated navigation in the SWARM. In a multi-agent SWARM environment, the robots now have a common goal to achieve rather than individual goals. Every USV of the SWARM takes actions that benefit the goal. DDPG has been successfully applied to a variety of tasks, including robotic control, navigation, and game-playing. For example, DDPG has been employed to train robot arms to perform reach and grasp tasks. ### Multi-Agent Reinforcement Learning In MARL, the paradigm for which is shown in Figure 1, numerous agents interact with one another and the environment to accomplish a common goal. Although each agent receives and acts upon separate observations of the environment, all agents partake in the rewards and goals. MARL is a potent method for complicated tasks requiring cooperation and coordination between numerous actors. In ROS, a SWARM of marine trash collection robots can be trained to work together to collect trash and clean up the environment using MARL. Every robot would get unique information about its surroundings, including the proximity to the trash and other robots, and act accordingly, such as heading towards the nearest piece of trash or avoiding collisions with other robots. All the robots would share in the benefits and goals, which would promote cooperation and coordination. By using the right algorithms and training techniques, such as Q-learning or actor-critic methods, and adjusting the rewards and objectives to incentivize cooperation and coordination among the robots, the MARL schema in the image can be used to solve the problem of SWARM marine trash collection in ROS. Moreover, strategies like decentralized control and communication can be used to boost the swarm's performance. Owing to the aforementioned factors, we believe a method based on MARL would promote coordination while optimizing the SWARM's purpose. ### Deep Deterministic Policy Gradient The DDPG algorithm for a single marine trash-collecting robot employing DDPG in ROS is shown in a generalized form in Figure 2 below. 
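The soft target-network update mentioned above, governed by the hyperparameter \(\tau\), can be sketched as follows. This is a minimal example assuming PyTorch modules and is not the paper's code; the value 0.005 in the usage comment is only a typical choice.

```python
# Soft update: theta_target <- tau * theta_local + (1 - tau) * theta_target.
import torch

def soft_update(target_net: torch.nn.Module, local_net: torch.nn.Module, tau: float) -> None:
    with torch.no_grad():
        for t_param, l_param in zip(target_net.parameters(), local_net.parameters()):
            t_param.copy_(tau * l_param + (1.0 - tau) * t_param)

# Typical usage after each learning step, with a small tau such as 0.005:
# soft_update(target_actor, actor, tau=0.005)
# soft_update(target_critic, critic, tau=0.005)
```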
A deterministic policy that links the agent's observations to its actions is learned for the agent by the actor-critic DDPG algorithm [22]. A robot used to gather marine trash might make observations about the area around trash and barriers, the water's depth, and the robot's own speed and heading. The actor network receives the agent's observations and produces a deterministic action. In the context of a marine-trash-collecting robot, the output of the actor network would be a velocity command for the robot, directing it towards the closest trash while avoiding obstacles. The critic network receives the agent's observations and actions as input and produces a value function (\(Q_{i}\)) that estimates the agent's expected cumulative reward from that state-action pair (\(s_{i}\),\(a_{i}\)). This value function is used to assess the quality of the agent's actions and to update the actor's policy to maximize the expected cumulative reward R. The DDPG algorithm is an effective method for teaching a marine-trash collecting robot to navigate its environment, identify and collect trash, and avoid obstacles. Because DDPG is suitable for continuous action spaces, it is a good choice for controlling the movement of a marine-trash collection robot. The swarm of marine trash collection robots can avoid overfitting to recent experiences by using experience replay. This can strengthen the learned policies and improve agent coordination in the task of marine trash collection. To improve the stability and convergence of the learning process, target networks are used in the training of a single-agent marine-trash collection robot in ROS. The agent is equipped with two neural networks: the local network and the target network. During training, the local network is updated, while the target network is updated on a regular basis to match the weights of the local network. This helps to reduce the variance in Q-value estimates and stabilize neural network weight updates, resulting in better performance and higher rewards. Figure 1: Schematic of Multi-agent Reinforcement Learning Paradigm Figure 2: Schematic of Deep Deterministic Policy Gradient ### Multi-agent Deep Deterministic Policy Gradient The architecture of the MADDPG algorithm is depicted in Figure 3. This is a variant of the DDPG algorithm that is used in multi-agent systems to learn decentralized policies for each agent in a cooperative or competitive setting [23]. It consists of two major components: a centralized critic network and decentralized actor networks. Each agent has its own actor network, which receives its own observations as input and produces its own action. The centralized critic network takes all agents' joint states and actions as input and outputs a single value function that estimates the team's expected total reward. To estimate the Q-value of each agent's action, the critic uses a Q-network that estimates the total expected reward while taking into account the actions of all agents in the system. The agent's observations from the environment are used by the actor networks to generate an action as output, which is then combined with the actions of the other agents of the swarm and input into the critic network to estimate the Q-value. Figure 3: Schematic of Multi-agent Deep Deterministic Policy Gradient The critic network predicts the overall reward as the output, and the actor networks maximize it. Each robot in the swarm could be thought of as an agent.
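A minimal sketch of the structure just described, decentralized actors plus a centralized critic, is given below. It assumes PyTorch and fixed-size observation and action vectors; the layer sizes and the three-agent example at the end are illustrative choices, not the paper's implementation.

```python
# Illustrative MA-DDPG building blocks: one decentralized actor per USV and a
# centralized critic scoring the joint state-action of the whole swarm.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps one agent's observation to a deterministic action (e.g., propeller speeds)."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Estimates Q from the concatenated observations and actions of all agents."""
    def __init__(self, obs_dim: int, act_dim: int, n_agents: int):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(nn.Linear(joint_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, joint_obs, joint_actions):
        return self.net(torch.cat([joint_obs, joint_actions], dim=-1))

# Example: three USVs, each with a 12-dimensional observation and 2 propeller commands.
actors = [Actor(12, 2) for _ in range(3)]
critic = CentralizedCritic(obs_dim=12, act_dim=2, n_agents=3)
```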
The robots learn how to cooperate in order to properly collect the marine trash in the water body. The MADDPG algorithm is used here to teach decentralized policies to each robot, enabling them to work together effectively and collect as much trash as possible. The Q-values generated are used to update each agent's actor network during training. The critic network is updated based on the collective experience of all agents. MA-DDPG-based agents learn to interact without requiring contact with other agents. Target networks help in lowering variance in Q-value estimates and stabilizing neural network updates, leading to increased agent coordination and performance. ### Robot Operating System (ROS) ROS is the most widely used open-source platform for developing, building, and controlling robotic devices. The MA-DDPG algorithm can be used to manage a swarm of marine trash-collecting robots: it allows each robot in the swarm to learn its own independent policy, which helps it increase the common reward and coordinate with the other robots of the swarm to achieve a shared goal. Here, ROS can be used as a platform for controlling the swarm and achieving that goal. The ROS publisher-subscriber paradigm can be used to establish communication between the marine trash-collecting robots: publishers send messages on a given topic and subscribers receive messages on the same topic. Each robot may have a publisher that sends messages about its current position, velocity, and other required data. The robots in the swarm then subscribe to those signals to track the movements of the other robots. Based on these data, the robots collaborate and coordinate to collect waste using the above ROS communication paradigm. Figure 4: Schematic of ROS Communication System ### Observation and Action Spaces To utilise MA-DDPG with ROS, each USV must have an observation space that enables it to examine its surroundings and take the appropriate actions. The USV's position, orientation, left and right propeller linear and angular speeds, laser scan data, and a true/false value for trash detection are stored within the observation space. The positional coordinates of the USV reveal its location within the water body; this data is essential for the USV to navigate and move toward the detected trash. The orientation of the USV conveys the direction in which it is facing, which is useful for determining how to progress towards the trash. Figure 5: Schematic of the Proposed Workflow The propeller and angular speeds are utilized to regulate the robot's movement in the water. The robots may work together to travel towards and gather rubbish in a coordinated manner by modifying their speeds based on their observations. The laser scan data provide the distance between the robot and the rubbish, which is critical for making decisions on how to move efficiently towards the trash. Finally, the trash detection boolean variable indicates whether or not the robot has identified any rubbish in its proximity. This information is critical for deciding how to approach and collect waste.
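As a concrete illustration of the per-USV observation and action interface described above, the following sketch defines the spaces with OpenAI Gym. The dimensions, bounds, and field names are assumptions made for demonstration and do not come from the paper's implementation.

```python
# Hedged sketch of per-USV observation and action spaces (dimensions and bounds are assumed).
import numpy as np
from gym import spaces

observation_space = spaces.Dict({
    "position":         spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32),
    "orientation":      spaces.Box(low=-np.pi, high=np.pi, shape=(3,), dtype=np.float32),
    "propeller_speeds": spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32),
    "laser_scan":       spaces.Box(low=0.0, high=30.0, shape=(180,), dtype=np.float32),
    "trash_detected":   spaces.Discrete(2),  # boolean flag
})

# Action space: normalized commands for the left and right propellers.
action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
```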
MA-DDPG is used to coordinate the robots' behaviours based on their observations, ensuring that they cooperate to fulfil their common aim of cleaning up the ocean. The agents learn to conduct activities that maximize a reward signal, which is intended to encourage the robots to travel towards and collect rubbish as efficiently as possible. Through training, the agents learn to coordinate their actions and operate as a team to achieve this goal. Propeller speeds are employed as the action space for each robot in the ROS implementation of the MA-DDPG-controlled swarm of marine trash collection robots. The propeller speeds regulate the movement of the robots in the water, allowing them to travel towards and collect rubbish efficiently. The robots may work together to collect rubbish in a coordinated and efficient manner by modifying their propeller speeds based on their observations. MA-DDPG is used to coordinate the robots' actions, ensuring that they all work together to clean up the water body. ## 4 Implementation ### Architecture of WAM-V in ROS and Gazebo To solve the challenge at hand, we have made use of the Wave Adaptive Modular Vessel (WAM-V), which has been used for a variety of real-world missions such as oceanographic data collection, environmental monitoring, and maritime security. To simulate and control the WAM-V in a virtual environment, we utilise the Robot Operating System (ROS) and Gazebo. The architecture of the WAM-V in ROS and Gazebo consists of distinct nodes, each of which is responsible for a specific task such as sensor data processing, motion control, or communication with external devices. Figure 6: WAM-V in Gazebo The data from a variety of sensors, including GPS, IMU, and depth sensors, is collected by the sensor nodes and transmitted to the data processing nodes. These nodes analyze the data to provide the position, orientation, and environment information of the WAM-V. To visualise the data, the RViz tool can be used, which renders the WAM-V and its surroundings in real time. This tool can visualise the sensor data in a meaningful manner, providing insight into the WAM-V's operation and surrounding environment. Figure 7: RQT graph of WAM-V Additionally, the RViz tool can interact with the WAM-V, allowing for the testing and evaluation of the control algorithms. For instance, the movement of the right and left propellers of the WAM-V can be controlled by sending commands from the RViz tool. Implementing the proposed algorithm with the WAM-V in ROS and Gazebo provides a versatile platform for acquiring, processing, and visualising sensor data. As mentioned before, the RViz tool provides real-time visualisation of, and interaction with, the WAM-V. Figure 8: WAM-V in RViz Figure 9: WAM-V propeller view in RViz ### Overview of the openai_ros package for Integrating ROS and OpenAI To implement the aforementioned algorithms within ROS, we make use of the openai_ros package. The package contains a collection of components that provide an interface to OpenAI Gym. These nodes facilitate the exchange of data between ROS and OpenAI Gym, allowing ROS-based robotics applications to be integrated with OpenAI's reinforcement learning algorithms. To utilise the package, a training environment is created which represents the robot application to be trained.
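A training environment of the kind just described can be sketched as a Gym-style class, as below. This is not the actual openai_ros API; it only illustrates the structure such an environment provides (spaces, reset, step) on top of the ROS/Gazebo communication, and the 12-dimensional observation vector and two-propeller action are assumed.

```python
# Hedged sketch of a Gym-style training environment for one USV task (NOT the openai_ros API).
import numpy as np
import gym
from gym import spaces

class TrashCollectionEnv(gym.Env):
    """Toy stand-in: in a real setup, step() would publish propeller commands to ROS
    topics and read back sensor observations from Gazebo."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(12,), dtype=np.float32)

    def reset(self):
        # Would respawn the WAM-V and the trash in Gazebo; here it returns a zero observation.
        return np.zeros(12, dtype=np.float32)

    def step(self, action):
        # Would send `action` as left/right propeller speeds and wait for the next state.
        obs = np.zeros(12, dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info
```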
The training environment comprises the states, actions, and rewards, and is implemented as a ROS node that communicates with the openai_ros package's OpenAI Gym interface. After establishing the training environment, a corresponding training script is written, which utilises OpenAI's reinforcement learning algorithms to train a policy for the given environment. Using the openai_ros package, the training script can interact with the training environment to acquire data for training the policy. During training, the reinforcement learning algorithm interacts with the training environment, receives data on the current state, and chooses actions based on the current policy. The environment then gives the algorithm a reward based on the action performed, and the algorithm modifies its policy accordingly. Figure 10: openai_ros environment with WAM-V ## 5 Reward Shaping We devise a weighted reward function \(R\) for controlling a SWARM of marine trash-collecting robots, based on the problem's objectives and limitations. The essential goal of the problem is to collect as much trash as possible while minimizing collisions. _Reward for collection (\(R_{collect}\))_: Reward is given to the robots when trash is collected. _Penalty for collision (\(P_{coll}\))_: Negative reward is given to a robot when it collides with another robot or an obstacle. _Penalty for Time Taken (\(P_{time}\))_: Negative reward is given to the robots as time passes, to incentivize a quicker collection of trash. _Reward for coordination (\(R_{coord}\))_: The coordination reward \(R_{coord}\) is a positive reward that makes the robots cooperate with each other to gather the trash. This could be accomplished by rewarding the robots for collecting trash in closer proximity to each other. The weighted reward function thus provides positive rewards for collecting trash and negative rewards for delays and collisions. Additionally, a reward is given to the robots based on their proximity to one another, and thus the coordination achieved between them. The reward function can be written as: \[R=w_{1}R_{collect}-w_{2}P_{coll}-w_{3}P_{time}+w_{4}R_{coord}\] Here, \(w_{1}\), \(w_{2}\), and \(w_{3}\) represent the weights assigned to the first three reward terms. The collection reward can be defined as a function of the amount of garbage the robots collect together, whereas the collision penalty can be defined as a negative reward for each collision the robots cause. In a similar fashion, the time penalty can be defined as a negative reward for each time step the robots take. The weights assigned to each term can be adjusted to control the behaviour of the swarm of marine trash-collecting robots. A high weight on the collection reward term would motivate the robots to prioritize trash collection, whereas a high weight on the collision penalty term would motivate them to avoid collisions. In conclusion, the weighted reward function formulated above can be used to incentivize the robots to accomplish the primary goal while avoiding collisions and time delays. The weight allocated to the coordination reward term is denoted by \(w_{4}\). The sum over all pairs of robots \((i,j)\) is calculated, where \(D(i,j)\) denotes the distance between the two robots; the \((1-D(i,j))\) term ensures that the robots are compensated for their proximity to one another.
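A compact sketch of this reward in Python is shown below. The helper inputs (amount of trash collected, collision count, elapsed time steps, robot positions), the clamping of the pairwise distance \(D(i,j)\) to the range [0, 1], and the default weights are assumptions made for illustration only.

```python
# Illustrative weighted reward R = w1*R_collect - w2*P_coll - w3*P_time + w4*R_coord.
from itertools import combinations

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def swarm_reward(trash_collected, collisions, elapsed_steps, positions,
                 w1=1.0, w2=1.0, w3=0.01, w4=0.1):
    r_collect = trash_collected
    p_coll = collisions
    p_time = elapsed_steps
    # Coordination term: sum of (1 - D(i, j)) over all robot pairs, with D clamped to [0, 1].
    r_coord = sum(1.0 - min(distance(p_i, p_j), 1.0)
                  for p_i, p_j in combinations(positions, 2))
    return w1 * r_collect - w2 * p_coll - w3 * p_time + w4 * r_coord

# Example: 3 robots, 2 items collected, 1 collision, 150 steps elapsed.
print(swarm_reward(2, 1, 150, [(0.0, 0.0), (0.4, 0.3), (5.0, 2.0)]))
```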
A larger weight on the reward for coordination would motivate the robots to coordinate and collaborate with each other in order to collect trash more efficiently. Here, The coordination reward is defined as the product of the weight applied to the coordination reward term (\(w_{4}\)) and \((1-D(i,j))\) over all pairs of robots \((i,j)\). This term ensures that the robots are rewarded for their closeness in distance to one another. Hence, the coordination reward incentivizes collaboration and coordination among the swarm of robots to achieve the primary goal of collecting as much trash as possible while minimizing collisions with one another and the obstacles in the way and reducing the time delays. ## 6 Applications ROS autonomous unmanned vehicle swarms based on Reinforcement Learning (RL) have several possible applications. Firstly there is environmental monitoring, wherein an RL-based swarm of autonomous unmanned vehicles in ROS can be used for environmental monitoring and surveillance. The swarm can be taught to navigate through many terrains and environments, record data, and communicate with the base station. Next, there is search and rescue. This is where an RL-based swarm of autonomous unmanned vehicles in ROS can be used for search and rescue operations in emergency circumstances. In disaster-stricken areas, a trained swarm of USVs can be used to search for survivors and provide the rescue team with real-time data. The third application is within agriculture. For precision agriculture, a swarm of autonomous unmanned vehicles based on RL and powered by ROS can be utilised. The swarm can be taught to monitor crops, locate disease outbreaks, and apply required fertilisers or pesticides. ROS autonomous unmanned vehicles also provide support in the transportation sector. An RL-based swarm of autonomous unmanned vehicles in ROS can be used for transference. The swarm can be taught to transport goods or individuals from one location to another in a safe and efficient manner. Next, it aids in Infrastructure inspection where an RL-based swarm of autonomous unmanned vehicles in ROS can be used to inspect essential infrastructures such as bridges, pipelines, and power lines. The swarm can be trained to detect liabilities and provide early warning indicators of potential failures. Lastly, it assists in military applications where An RL-based swarm of autonomous unmanned vehicles in ROS can be used for military reconnaissance and surveillance. The swarm can be trained to conduct covert operations and gather intelligence in hostile territories. ## 7 Conclusion The MA-DDPG-based SWARM of Autonomous Unmanned Vehicles in ROS provides a flexible and robust solution for decentralised swarm control. The MA-DDPG algorithm gives each and every agent the ability to learn from its own experiences and interactions with other agents, therefore enhancing its scalability and adaptability. The integration of RL with ROS supplies a modular and malleable surface for developing and integrating the many different components of the swarm. The simulation environment used in this study allows us to evaluate the execution of the proposed system in numerous situations and compare it with other existing approaches. In terms of scalability, adaptability, and efficiency, the proposed MA-DDPG-based swarm outperforms all other existing procedures. The swarm was able to navigate through complex environments and avoid obstacles while coordinating with other agents all while maintaining stable communication. 
Despite the encouraging results, there are still challenges that need to be fully addressed in order to realise the full potential of the MA-DDPG-based swarm of autonomous unmanned vehicles in ROS. These challenges include improving the learning efficiency of the agents, addressing and resolving safety concerns, and making sure of the ethical use of autonomous systems. The proposed MA-DDPG-based swarm of autonomous unmanned vehicles in ROS represents an innovative and promising strategy for the development of intelligent, self-aware and autonomous systems. Integration of RL with ROS opens up exciting prospects for future research and development in the field.
2302.13141
3D Printed Proprioceptive Soft Fluidic Actuators with Graded Porosity
Integration of both actuation and proprioception into the robot body would provide actuation and sensing in a single integrated system. Within this work, a manufacturing approach for such actuators is investigated that relies on 3D printing for fabricating soft-graded porous actuators with piezoresistive sensing and identified models for strain estimation. By 3D printing, a graded porous structure consisting of a conductive thermoplastic elastomer both mechanical programming for actuation and piezoresistive sensing were realized. Whereas identified Wiener-Hammerstein (WH) models estimate the strain by compensating the nonlinear hysteresis of the sensorized actuator. Three actuator types were investigated, namely: a bending actuator, a contractor, and a three DoF bending segment (3DoF). The porosity of the contractors was shown to enable the tailoring of both the stroke and resistance change. Furthermore, the WH models could provide strain estimation with on average high fits (83%) and low RMS errors (6%) for all three actuators, which outperformed linear models significantly (76.2/9.4% fit/RMS error). These results indicate that an integrated manufacturing approach with both 3D printed graded porous structures and system identification can realize sensorized actuators that can be tailored through porosity for both actuation and sensing behavior but also compensate for the nonlinear hysteresis.
Nick Willemstein, Herman van der Kooij, Ali Sadeghi
2023-02-25T18:50:17Z
http://arxiv.org/abs/2302.13141v2
# 3D Printed Proprioceptive Soft Fluidic Actuators With Graded Porosity ###### Abstract Integration of both actuation and proprioception into the robot body enables sensorized soft actuators that can operate in a closed loop. An interesting class of actuators for this purpose are graded porous actuators, which can be mechanically programmed by their porosity (gradient) and sensorized by using a smart material. Three types of such actuators were 3D printed, namely: a bending finger, contractor, and a three DoF bending segment. Piezoresistive sensing was embedded by printing with a conductive thermoplastic elastomer. A challenge with piezoresistive sensors is to relate the change in resistance to deformation due to their inherent hysteresis and nonlinearity. In this work, an (estimated) Wiener-Hammerstein (WH) model was used to predict the deformation. The bending and contracting actuators showed that the linear and WH models could reach 70+% and 80+% fits, respectively. Thereby indicating that the deformation of the printed actuators could be estimated quite well. Similarly, the 3DoF bending segment showed similar values with the WH model reducing both the fitting and RMS error on average with 20+%. These results indicate that sensorized actuators based on 3D-printed soft structures with a porosity gradient can be mechanically programmed whereas strain estimation can be done using identified Wiener-Hammerstein models. ## 1 Introduction Soft robotics inherently enable safe cooperation between robots, their environment, and users. Part of this capability is realized by their adaptability due to their reliance on (mechanically) soft structures. This reliance on soft structures instead of rigid mechanical joints is partially inspired by nature, such as seen in the octopus and elephant trunk. This similarity makes nature a source of inspiration for soft robots [1]. This inspiration led to soft robots such as those that mimic a caterpillar's rolling [2], an octopus' arm [3], and skin-inspired sensors (pressure and temperature) [4]. Many biological systems demonstrate adaptive behavior due to their soft skin, body, and muscles. The flexibility of biological muscles has been the inspiration for many soft robotic actuators such as fluidic soft actuators [5, 6, 7]. These actuators have showcased that a broad range of deformations can be realized such as contraction, twisting, and bending [5, 6, 7]. However, besides the capability of actuation, biological muscles also have proprioceptive sensors integrated into their structures that enable them to measure the lengthening and force on a muscle [8]. Integration of sensing capability would allow soft actuators to close the loop. To this end, researchers are investigating the integration of sensors into the body of the robot/actuator. Examples include the usage of conductive fibers and the change in inductance [9]. Other examples directly use the change in capacitance/resistance of the actuator itself [10, 11]. Another approach is to embed smart materials, such as piezo-resistive material, into the body of the actuator [12, 13]. An interesting class of soft fluidic actuators are porous structures (such as foam). These foam-like structures are inherently flexible, allow for fluid transport, and can be mechanically programmed [14]. Researchers have exploited this property, to realize a broad range of foam actuators, such as contractors, continuum arms, bending fingers, twisting actuators, and pumps [14, 15, 16, 17]. 
In addition, foam-based sensors have been investigated using principles such as optical [18], piezocapacitive [19, 20] and piezoresistive [16, 21] sensing. By exploiting geometry foam-based sensors can measure a wide range of deformations such as bending, compression, and shear [16, 19, 21]. In addition, foam-based sensors can be scaled up by patterning multiple electrodes in a matrix format [22]. Lastly, researchers have already demonstrated that sensorized foam can provide proprioceptive data using both optical fibers [18] and piezoresistivity [16]. To enhance the design flexibility of soft actuators, the fabrication of porous structures that incorporate: a stiffness gradient for deformation programming, a smart material for sensing, and complex geometries need to be possible. One popular approach for manufacturing porous/foam-based actuators in the literature is to use commercial foams as a base material, which are subsequently modified using semi-automatic and manual approaches, which include laser cutting and gluing [15]. This approach has been combined with dip-coating to add the smart material to the structure [16, 21]. Another popular method is to use lost sugar/salt casting methods [17, 18, 19]. One group embedded optical fibers in the casting step for optical sensing [18]. An interesting manufacturing process to fabricate porous structures is 3D printing, as it allows for complex geometries, and a broad range of materials, and can realize composite structures. One approach is to exploit the viscoelastic behavior of the material itself. One example is to exploit liquid rope coiling (i.e. the coiling seen when dropping honey from a height) to create a coiling pattern [23]. The InFoam method [14] used these coils to create pores in a user-defined pattern. Combined with normal printing this allowed for 3D-printed structures with a porosity gradient. Changes in porosity lead to large changes in stiffness (more than one order magnitude [14, 24]), which can be used to program the deformation. Examples of actuators based on graded porosity are the bending finger and contractor (Fig. 1(a) and (b)) when actuated with a vacuum the structure collapses leading to a programmed deformation. Furthermore, actuators can be mechanically programmed by changing the porosity of the bending actuator [14] and/or (in general) the spacer location (and number thereof) [25]. Within this work, we combine the InFoam method with a conductive thermoplastic elastomer to print sensorized soft actuators with graded porosity. This approach will be shown to enable both mechanical programming and adjusting the strain-resistance change sensitivity of the contractor. In addition, we demonstrate how the output of the sensorized actuators can be used in conjunction with a Wiener-Hammerstein model for deformation sensing for curvature and translation. Lastly, this combination of printing and system identification will be shown to also apply to the three degrees of freedom (3DoF) bending segment (Fig. 1(c)). ## 2 Materials and Methods ### Sensorized Actuators with Graded Porosity Graded porosity can be exploited to realize actuators with programmable deformation. By fabricating these actuators using a conductive thermoplastic (i.e. carbon-black filled), sensing can be integrated. The conductive thermoplastic enables sensing as the resistance between points in the structure will change due to the deformation history [16, 21, 26]. The resistance changes can be very significant as seen by the brightness change in Fig. 1(d). 
Figure 1: Actuators with porosity gradients: bending actuator (a), contractor (b), 3DoF bending segment (c), effect of compression on resistance of actuator by changing LED brightness (d), and InFoam printing method (e). An advantage of these sensors is that resistance measurements require simple electronics. Specifically, the change of resistance is often measured, which is defined as \[\Delta R=\frac{R-R_{0}}{R_{0}}\cdot 100\% \tag{1}\] Here, the variables \(\Delta R\), \(R\), and \(R_{0}\) are the relative change of resistance (%), the instantaneous resistance, and the initial (at no strain/stress) resistance (\(\Omega\)), respectively. However, it is not straightforward to relate the resistance change to a deformation, as carbon-black-based sensors are nonlinear. Therefore, a model is necessary to compensate for the nonlinearities. Within this work, a system identification approach is used to convert the resistance change to a deformation. Specifically, a Wiener-Hammerstein model (WH-model) is used (Fig. 2(a)). The WH-model consists of two linear systems and a static nonlinearity in the middle. The combination of linear and nonlinear functions allows the WH-model to capture the inherent nonlinear coupling between resistance change and strain while also taking the deformation history into account (to compensate for hysteresis). A similar model has shown good results for force sensing resistors [27] but has, to our knowledge, not been applied to strain sensing of soft actuators. Furthermore, we investigate the use of the Wiener-Hammerstein form, which was not (to our knowledge) investigated before for strain estimation. This model structure mimics the structure of the underlying physics. Firstly, the left part models the stress-driven resistance change [26]. It has been shown that the stress \(\sigma(t)\) is captured well by a Wiener model [27] \[\sigma(t)=f_{1}\left(H_{1}(j\omega)\,\Delta R(t)\right) \tag{2}\] The functions \(f_{1}\) and \(H_{1}(j\omega)\) represent a nonlinear mapping and a linear transfer function, respectively. This first stage of the model relates the resistance change and the stress. The stress and strain are also nonlinearly related for porous structures [14, 28, 29]. One possible approximation is through a nonlinear viscoelastic model relating the strain to the stress [29]. Within this work, a Hammerstein (HS) model is used as an approximation of the strain \(\epsilon(t)\) \[\epsilon(t)=H_{2}(j\omega)\,f_{2}\left(\sigma(t)\right) \tag{3}\] Within this equation, the functions \(f_{2}\) and \(H_{2}(j\omega)\) represent a nonlinear mapping and a viscoelastic model (transfer function), respectively. Combining these two equations in series leads to the following equation \[\epsilon(t)=H_{2}(j\omega)\,g\left(H_{1}(j\omega)\,\Delta R(t)\right) \tag{4}\] Within this equation, the two-stage static nonlinearity (i.e. \(f_{2}(f_{1}(.))\)) is reduced to a single one, \(g(.)\). This function \(g(.)\) approximates the combined nonlinear functions. Within this work, \(g(.)\) was implemented as a piecewise linear function. This function type was selected as it fitted our data well and has seen use before in [27] for stress estimation. Lastly, it should be noted that the model in Equation 4 can be reduced to an HS or Wiener model by removing \(H_{2}(j\omega)\) or \(H_{1}(j\omega)\), respectively.
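The chain of Eqs. (1)-(4) can be prototyped numerically as in the sketch below. This is an illustrative NumPy version rather than the authors' MATLAB identification code: the first-order filters stand in for the estimated transfer functions \(H_{1}\) and \(H_{2}\), the piecewise-linear nonlinearity \(g\) is realized with interpolation, and all coefficients are placeholders.

```python
# Sketch of Eq. (1) and a discrete Wiener-Hammerstein chain: H1 -> g -> H2.
import numpy as np

def delta_r(resistance: np.ndarray, r0: float) -> np.ndarray:
    """Relative resistance change in percent, Eq. (1)."""
    return (resistance - r0) / r0 * 100.0

def first_order_filter(u: np.ndarray, a: float, b: float) -> np.ndarray:
    """Placeholder linear dynamics y[k] = a*y[k-1] + b*u[k], standing in for H(jw)."""
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * u[k]
    return y

def wiener_hammerstein(dr: np.ndarray, breakpoints: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Strain estimate H2( g( H1(dR) ) ); g is piecewise linear (breakpoints ascending)."""
    x = first_order_filter(dr, a=0.9, b=0.1)    # H1: resistance-change dynamics
    z = np.interp(x, breakpoints, values)       # g: static piecewise-linear nonlinearity
    return first_order_filter(z, a=0.8, b=0.2)  # H2: viscoelastic stress-to-strain dynamics
```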
### Fabrication of Sensorized Graded Actuators All actuators in this work were fabricated using the InFoam method [14]. The InFoam method exploits the liquid rope coiling effect to print structures with a porosity gradient. Specifically, it uses the coils to create pores, leading to a structure with programmable porosity (Fig. 1(e)). The coiling radius \(R_{c}\) and coiling density \(N\) define the coiling pattern, which is machine specific and is determined by the height \(H\) and the extruded amount. The coiling density \(N\) is the number of coils within a single outer coil diameter [14]. The relation between these parameters and the machine was determined through the method described in [14]. Subsequently, the InFoam method was used to generate the GCode through a custom MATLAB (The Mathworks, Inc., USA) script. The bending, contracting, and 3DoF bending segment actuators (Fig. 1(a-c)) were fabricated using a modified Creality Ender 5 Plus (Shenzhen Creality 3D Technology Co., Ltd., China). This modified printer incorporated a screw extruder to enable the use of thermoplastic elastomers. For this work, we used TC/OEX-BLCK pellets (Kraiburg TPE, Germany) with a Shore hardness of 70A and a volume resistivity of 10 \(\Omega\)cm, which were printed at 195\({}^{\circ}\)C with a 0.6 mm nozzle. For this material, the coiling radius and height were determined to fit a linear function \(R_{c}=0.40H-0.3\) (range 2.5 to 10 mm), based on the methodology of [14]. In addition, to compute the porosity (\(\phi\) in %), the bulk density (\(\rho_{b}\) = 0.97 g/cm\({}^{3}\)) was used in conjunction with the measured weight (\(m\) (g)) and volume of printed cubes (\(V\) (cm\({}^{3}\))) using \(\phi=100\left(1-\frac{m}{V\rho_{b}}\right)\) (rounded to the nearest whole number). Lastly, the actuators were put into 0.4 mm thick heat-sealed styrene-ethylene-butylene-styrene (SEBS) sleeves to finalize the actuator. Figure 2: The pipeline for strain estimation (a) and the experimental setups: bending finger (b), single contractor (c), and 3DoF bending segment (d). ### Curvature Sensing To investigate the capability of curvature sensing, a sensorized bending actuator was characterized. This bending actuator is based on the design of Fig. 1, with dimensions of 75x15x10 mm\({}^{3}\) (length x width x height). The grading consisted of a 1 mm thick zero-porosity layer, low-porosity (below 5%) spacers, and a high-porosity section of 84%. Electrodes (copper wires) were connected to the top left and bottom right of the actuator (Fig. 2). This experiment was performed using the setup shown in Fig. 2(b-d). The bending actuator was put into a holder printed from polylactic acid (PLA). Subsequently, the bending actuator was exposed to four levels of vacuum pressure (10, 20, 40, and 60 kPa). The pressure was applied in on/off pulses of ten seconds for three cycles (repeated in duplicate). During these experiments, the resistance change was measured using an Arduino Uno (Arduino AG, Italy) through a voltage divider and sent over serial to MATLAB. This voltage divider consisted of a bias resistor (\(R_{b}\)) of 1 k\(\Omega\), and its output was fed to an ADS1115 16-bit analog-to-digital converter (Texas Instruments, USA). In addition, a second experiment was conducted with a ramp of vacuum pressure from 10 up to 60 kPa (with ten-second holds at 20, 40, and 60 kPa), after which the pressure was ramped down to 10 kPa (with ten-second holds at the same pressures). Lastly, a webcam was used to capture images of the actuator during the experiments. Trackers in the form of black stripes were added to the sleeve so that the curvature could be reconstructed as ground truth.
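The quantities used in the fabrication and measurement setup above (the fitted coiling-radius law, the porosity estimate, and the resistance recovered from the voltage-divider reading) can be written out directly, as in the sketch below. It is Python rather than the authors' MATLAB tooling; reading the quoted 2.5-10 mm range as applying to the height \(H\), and assuming the sensor on the low side of the divider with a 3.3 V supply, are our assumptions.

```python
# Helper relations from the fabrication and measurement setup described above.
def coiling_radius_mm(height_mm: float) -> float:
    # Fitted law R_c = 0.40*H - 0.3 (the 2.5-10 mm range is assumed to apply to H).
    return 0.40 * height_mm - 0.3

def porosity_percent(mass_g: float, volume_cm3: float, bulk_density_g_cm3: float = 0.97) -> int:
    # phi = 100 * (1 - m / (V * rho_b)), rounded to the nearest whole number.
    return round(100.0 * (1.0 - mass_g / (volume_cm3 * bulk_density_g_cm3)))

def sensor_resistance_ohm(v_node: float, v_supply: float = 3.3, r_bias: float = 1000.0) -> float:
    # Voltage divider: sensor assumed on the low side, 1 kOhm bias resistor to the supply.
    return r_bias * v_node / (v_supply - v_node)

print(coiling_radius_mm(8.0), porosity_percent(1.3, 8.0), sensor_resistance_ohm(1.2))
```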
Afterwards, MATLAB was used to reconstruct the curvature over time by computing a circle that fits the markers. Subsequently, the models were estimated using the curvature and resistance change through MATLAB's System Identification Toolbox. The NRMSE (normalized root mean square error) fits of the models were evaluated through MATLAB's compare-function as: \[NRMSEfit=100\left(1-\frac{||y-\hat{y}||}{||y-\bar{y}||}\right) \tag{5}\] In this equation, the NRMSE fit is computed from the two-norm of the difference between the measured output \(y\) and the predicted output \(\hat{y}\), divided by the two-norm of the difference between the measured output and its mean \(\bar{y}\). Similarly, the RMS error was computed and scaled by the maximum output in the dataset, with \(N\) the number of data points: RMS\(=\frac{100}{\max(y)}\sqrt{\frac{\sum_{i=1}^{N}(y(i)-\hat{y}(i))^{2}}{N}}\). Four types of models were identified, namely: linear, HS, Wiener, and Wiener-Hammerstein (i.e., Eq. 4). The latter was acquired by first identifying a Wiener/linear model and then using that as a filter to refine the output by adding a linear/HS model in series. All WH models use piecewise linear functions as the nonlinear functions with five to ten breakpoints. The data were separated into an identification set (the ramp and the 60/10 kPa datasets) and a validation set (the 20 and 40 kPa datasets).
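The two fit measures used throughout the results (Eq. 5 and the normalized RMS error) are straightforward to reproduce; the sketch below does so in NumPy rather than with MATLAB's compare-function, with arrays of measured and predicted outputs as the only inputs.

```python
# NRMSE fit (Eq. 5) and RMS error normalized by the maximum measured output, both in percent.
import numpy as np

def nrmse_fit_percent(y: np.ndarray, y_hat: np.ndarray) -> float:
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - np.mean(y)))

def rms_error_percent(y: np.ndarray, y_hat: np.ndarray) -> float:
    return 100.0 / np.max(y) * np.sqrt(np.mean((y - y_hat) ** 2))
```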
### Contraction Sensing Besides curvature, the capability to predict/sense contraction was also investigated. Specifically, a set of contractors was investigated for their stroke and resistance change at multiple levels of porosity. Contractors were printed based on the coiling pattern at heights of 4, 6, 8, and 10 mm and a coiling density of 3 (equivalent to porosities of 68, 76, 82, and 86%). All contractors were printed with a diameter of 25 mm and a total height of 45 mm. The graded porous section (Fig. 1) had a height of around 40 mm and a low-porosity (less than 5%) top and end cap (both 4 mm thick). The graded porosity section had two zero-porosity rings at heights of 14 and 28 mm, which had a width of 5.6 mm. The experimental protocol is identical to that of the curvature sensing. However, the setup is different, as can be seen in Fig. 2. The contractor is hung in line with gravity and only a single marker was used. The identification approach was also kept the same but performed only for a low- and a high-porosity contractor. In addition, the experiments were performed for three load cases: no load, a 200-gram weight, and a dumbbell (500 grams). ### Three Degree-Of-Freedom Bending Segment To validate our approach for a more complex system, a system with multiple inputs and outputs was investigated, namely a three DoF bending segment with three contractors in parallel (Fig. 1(c)). Three contractors were printed with the same dimensions as indicated in the previous section and a porosity of 76%, whereas the output, middle, and bottom stages were printed with PLA. The bottom and middle stages were spaced on an equilateral triangle with side lengths of 30 mm, whereas the spacers (below the black marker) have a length of 15 mm and the output stage on top is an equilateral triangle (side length of 52 mm) with rounded edges. For this experiment, a setup with three vacuum inputs and three resistance measurements (Fig. 2(d)) was used. Similar to the other setups, bias resistors of 1 k\(\Omega\) and the ADS1115 were used. To measure the orientation of the bending segment, an IMU (a 9DOF absolute orientation BNO055 (Bosch Sensortec, Germany)) on a breakout board (Adafruit, USA) was used (with the axis definition of Fig. 1(c)). The IMU and ADS1115 were connected to an Arduino Uno, which was used for sending data over serial to MATLAB. A black marker was added (Fig. 2(c)) to track the height change using a webcam. As input, both sequential and parallel activation patterns of the contractors were used. Either one, two, or three contractors were activated at 10, 20, 40, and 60 kPa. Within these experiments, cycles of 10 seconds of pressure on/off were performed three times (repeated in duplicate). In another pattern, two contractors were activated, with one at vacuum pressures of [10, 20, 30] kPa and one at [40, 60] kPa. The timings for the high- and low-pressure contractors were (5,25,45)/(15,35,55) (turn on/off) and (5,45)/(35,65) seconds, respectively. The data from these experiments were used for the identification of three models, i.e. one for each output. The same set of models as for the other systems was identified and validated. ## 3 Results and Discussion ### Curvature Sensing The curvature versus change in resistance is shown in Fig. 3. It can be observed that there is hysteresis in all three scenarios. This behavior means that a simple relationship between resistance change and curvature does not exist; the relation depends on the history. The area of hysteresis does not seem to depend on the magnitude of the vacuum pressure, as the 10 kPa case seems to have more hysteresis than the ramp even though the latter reaches 60 kPa. The slower increase in pressure in the ramp dataset implies that the rate could be a significant factor. Such behavior is expected, as existing models of carbon black-based piezo-resistive sensors include a damping component [26]. This implies that the rate does play a role in the observed behavior and that a proper model of these sensors will need to take the history into account, which the Wiener-Hammerstein model can do. Figure 3: Hysteresis curve of resistance change versus curvature with pressure turn on/off indicated. A set of models was estimated based on the identification datasets (one for each model type). These models were evaluated based on their average NRMSE fit and the best one was kept. The predictions of these identified models are shown in Fig. 4, which includes a set from identification and two validation sets. It can be seen that all models can capture the dynamics of the ramp dataset quite well. In line with our hypothesis, the WH model outperforms the other three, whereas the Wiener and HS models have a lower fit than the linear model only for the ramp dataset. However, looking at how well they approximate the curve, the linear model simply averages, whereas the nonlinear models try to capture the faster dynamics and nonlinearities. This discrepancy is especially apparent in the other two datasets (i.e., the validation sets). The fits for 20 kPa clearly show that the linear model has significant problems with capturing the dynamics properly, leading to overshooting. However, this difference is significantly less apparent for the 40 kPa dataset: instead of 40%, the linear model differs by only 10.6% from the WH-model. The HS and Wiener models are around 30 and 9% better, which indicates that the nonlinear models generalize better. It is expected that the error is much smaller for the 40 kPa dataset, as foam-like structures collapse and have distinct regimes of change. These distinct regimes have a rapid and large change when the pores collapse [20], followed by a slower decrease in resistance after the air is pushed out.
It is expected that the linear approximation between the datasets of 10 kPa and 60 kPa does approximate the 40 kPa-dataset well but not the 20 kPa, which implies that the densification within that range. The overall identification results are summed up in Table 1. It can be seen that on average all the models capture the dynamics quite well. It can be observed that refining the WH model reduces the fitting error by 21.4% (i.e. error decreased from 19.6 to 15.4%). In addition, the WH model seems better in general than the Wiener model. The Wiener model has fits that vary from 72.4-90.7% whereas the WH model has a much narrower range of 80.2-90.0%. This behavior is to be expected, as the linear model would average the error out (see for instance the ramp signal). The RMS error (normalized by the maximum curvature) of the models was computed to be: \(5.2\pm 1.4,6.5\pm 1.9,7.0\pm 2.7\), and \(10.42\pm 7.98\)% for the WH, Wiener, HS, and linear models, respectively, which values indicate that the nonlinear models are better estimators. These results imply that WH and its variation can provide a reasonable estimation of the curvature. Whereas nonlinearities are necessary to compensate for the nonlinear relation between resistance change and curvature in general. The piezoresistive sensorized actuator's ability to estimate its strain indicates that it performs (at least) on par with the works primarily focussed on empirical behavior of foam-like porous sensors and sensorized actuators [19, 20, 16, 21]. Whereas the sensorized piezoresistive bellow [13] was comparable in terms of estimation errors (albeit for the force) with around 4% (they did not report a value for position estimation). Similarly, in [27] a 90% fit was acquired for stress estimation but using a commercial force sensing resistor, which makes absolute comparisons difficult. However, similar to their result the Wiener/HS models provide much better results than those seen with linear models. In addition, our results indicate that the WH, for strain estimation, is a better model structure. Lastly, other researchers have shown better results for other sensor types, such as twisted-coiled actuators [11], reported RMSE below 1% but with low sensitivity (resistance change/strain of 0.83 \(\Omega\) with a maximum of 25%). Thereby implying piezoresistive sensing could have some advantages but efforts need to be made for improving accuracy. ### Contraction Sensing The averaged strain and resistance change for all contractors is shown in Fig. 5 for different load cases. Within the six pressure cycles the maximum strain varied less than 10% and was, therefore, not added to the graphs. In general, a similar trend can be seen for all data points. In general the compressive strain and resistance change increase with increasing porosity and pressure. In contrast, the increase of a load negatively impacts the strain but the resistance change is less affected. This discrepancy implies that the relation between strain and resistance change is load dependent. The nonlinear decrease of the resistance with the contractor's deformation is shown in Fig. 1(d). 
\begin{table} \begin{tabular}{|c||c|c|c||c|c||c|} \hline **Type** & **Ramp** & **10** & **60** & **20** & **40** & **Avg.** \\ \hline \hline Linear & 79.3\% & 76.3\% & 84.7\% & 43.2\% & 74.9\% & 71.7\% \\ \hline Wiener & 76.4\% & 72.4\% & 90.7\% & 78.8\% & 83.9\% & 80.4\% \\ \hline HS & 76.2\% & 77.8\% & 87.8\% & 73.0\% & 83.6\% & 79.7\% \\ \hline WH & 84.4\% & 80.2\% & 90.0\% & 83.2\% & 85.5\% & 84.6\% \\ \hline \end{tabular} \end{table} Table 1: NRMSE fits for the bending actuator. Figure 4: Real and predicted curvature (incl. NRMSE fit in legend) for (pressure) ramp dataset (a) and validation datasets (20 (b) and 40 (c) kPa. Higher porosity coincides, in general, with a higher change in resistance, which is to be expected as the final shape is smaller and the contact area increases more significantly. However, this increase does not correlate with a higher strain sensitivity for the 500 gr load. This contradiction is expected to be due to the load's effect on the contractor's stiffness. The addition of a load will increase the longitudinal stiffness as a constant force spring but not the radial stiffness. At a certain load, the radial stiffness will be lower than the longitudinal. At that point, the collapse will be more radially. Thereby still reducing the resistance significantly but reducing the strain. Such change is more apparent for the higher porosity (softer) actuators as these have a lower radial stiffness overall. Interestingly, the trends of both the change in resistance and strain can be approximated by a power-law \(p_{f}=C\left(1-\phi/100\right)^{n}\). Within this equation, variables \(p_{f},\phi,C,n\) represent the property of interest of the porous structure, the porosity (%), and two fitting parameters. It has been shown that this power law can estimate changes in mechanical properties (yield stress, density, elastic modulus) of cellular solids (such as foams) [30] and mechanical programming of soft actuators [14]. Similarly, the results of Fig. 5(a-f) show that this empirical law approximates the strain change for different loads quite well. Similarly, the change in resistance also fits this empirical quite well. The general behavior is similar to the result of [19], which also showed increased sensitivity to pressure with increasing porosity. These results provide evidence that the porosity can be used as a tool to program both the strain but also its sensitivity (i.e. resistance change). This power-law behavior with porosity was also seen for the behavior of a bending actuator in our earlier work [14]. Although for low vacuum pressures (i.e. \(\leq\)20 kPa) this approximation seems less correct. In addition, it can be observed that the contractors have a higher sensitivity than the proprioceptive bellow of [13] with values exceeding 4.5%. All models were identified separately for the no load and 500 grams. This change was mandatory based on the results of Fig. 5. This identification approach gave the results of Fig. 6(a-c) for the 40 kPa validation dataset, which indicates overall good NRSME fits for all models. Although the lines in Fig. 6(a) seems to qualitatively approximate it well there are some discrepancies. However, in steady-state these are at most a strain error of 0.02 (around 8.7% of the strain magnitude), which still seems a reasonable prediction. The results of the average NRSME fit are shown in Table 2. In general, the NRMSE fits of the WH model outperformed the others. Although it can be noted that the others were still good. 
Overall, the WH model reduced the fitting error of the linear/Wiener/HS models from 22.5/19.0/18.0% to 14.4%, a fitting-error reduction of 36.1/24.3/20.1%, implying that the WH model's structure correlates better with the underlying physics. Lastly, the RMS error (normalized by the maximum strain), averaged over all three datasets, is [8.7, 7.1, 7.3, 5.8]% for the linear, HS, Wiener, and WH models, respectively. These values indicate an overall good estimation by the models, with the nonlinear models performing better and the WH model being the best overall. It can also be observed that these data corroborate the discussion of the previous section, further providing evidence that the WH model is an appropriate structure for strain estimation with piezoresistive sensors. Figure 5: Strain/resistance change versus porosity with fitted power law curve (with parameters in legend) for different vacuum pressures for the case of no load (a,b), a 500 grams load (c,d), and multiple load cases for two vacuum pressures (e,f) for the contractor system. ### Three Degree-Of-Freedom Bending Segment The realized 3DoF bending segment could bend by 26.3/22.6 degrees in (\(\theta_{x}\),\(\theta_{y}\)), while it had a maximum \(\Delta z\) of 16 mm (axis definition in Fig. 1(c)). Three individual identifications (one for each deformation) were performed with the three resistance changes as inputs and (\(\theta_{x},\theta_{y},\Delta z\)) as outputs. The acquired predictions for these identified models are seen in Fig. 7. The HS models had NRMSE fits below 70% and were omitted. In general, the linear and WH models seem robust, as they stay around similar levels for all cases. In contrast, the Wiener model dips below 60% and, for some datasets (not shown), below 40%. However, for the steady-state situations, the WH and Wiener models perform better than the linear model. This difference is especially visible in the oscillations that the linear models predict during steady state. In general, the overall behavior seems better captured by the WH model. \begin{table} \begin{tabular}{|c||c|c|c|} \hline **Type** & **Low - no-load** & **Low - 500 gr** & **High - no-load** \\ \hline \hline Linear & 75.5/67.7\% & 73.2/78.5\% & 86.1/83.0\% \\ \hline Wiener & 76.4/74.7\% & 86.7/83.6\% & 88.3/72.3\% \\ \hline Hammerstein & 80.6/76.4\% & 84.4/86.2\% & 84.9/77.7\% \\ \hline WH & 83.1/76.7\% & 88.4/86.7\% & 91.2/84.8\% \\ \hline \end{tabular} \end{table} Table 2: Averaged NRMSE fits for the contractor. Figure 6: Measured and estimated contraction (with NRMSE fit in the legend) of the identified models for the contractor system for the 40 kPa (validation) datasets for the low porosity (68%) with no load (a) and a 500 gr load (b), and the high porosity (82%) with no load (c). Figure 7: Measured and predicted angle of the 3DoF bending segment in the \(x\)-(a,b) and \(y\)-(c,d) direction and deformation in \(z\) (e,f) from the identification (a,c,e) and validation (b,d,f) datasets, with their NRMSE fits included in the legend. The NRMSE fit was computed by averaging the identification and validation datasets' NRMSE fits, which are shown in Table 3. It can be noted that the NRMSE fit is better for the WH model but not for the Wiener model. A possible reason for this discrepancy is that the actuator's deformation is nonlinearly related to the contraction of an individual contractor [31], which might not be captured well by a Wiener model. 
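For reference, the NRMSE "fit" percentages quoted throughout (Tables 1-3) can be computed as below. This assumes the common goodness-of-fit convention fit = 100·(1 − ‖y − ŷ‖/‖y − ȳ‖); the exact normalization used in the experiments is not restated in this excerpt, so treat this convention as an assumption.

```python
import numpy as np

def nrmse_fit(y_true, y_pred):
    """Goodness of fit in percent: 100 * (1 - ||y - y_hat|| / ||y - mean(y)||).

    100% is a perfect fit; 0% is no better than predicting the mean.
    (Assumed convention; the paper's exact normalization may differ.)
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * (1.0 - np.linalg.norm(y_true - y_pred)
                    / np.linalg.norm(y_true - y_true.mean()))

# Toy usage with a synthetic signal and a slightly noisy "prediction".
t = np.linspace(0.0, 10.0, 500)
y = np.sin(t)
y_hat = y + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(f"NRMSE fit: {nrmse_fit(y, y_hat):.1f}%")
```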
Similarly, the RMSE (normalized by the maximum bending/deformation within each experiment) is 7.4\(\pm\)0.9/6.1\(\pm\)1.9 and 8.4\(\pm\)1.7 % for the WH model, whereas the linear model yields 10.1\(\pm\)0.8/9.5\(\pm\)1.1 and 9.9\(\pm\)1.0 %, again implying that the WH model is a good option for these systems. In general, these results corroborate that the sensorized actuators can reconstruct their state. The multi-input, multi-output system corroborates the results for the bending actuator and contractor, which implies that the WH model is an appropriate model structure. In addition, it goes beyond the works discussed previously, which focused on single input-output systems, and shows that our approach generalizes to more complex strain estimation problems. ## 4 Conclusion Sensor-integrated soft actuators have the potential to be used in a broad range of applications as they inherently provide proprioceptive data. By providing proprioceptive data, these actuators can sense their deformation, which can be used for, for instance, feedback control. Within this work, we have demonstrated that the InFoam method can 3D print conductive thermoplastic elastomers with a user-defined porosity gradient to combine actuation and sensing. The ability to fabricate these sensorized actuators using 3D printing allows for significant design freedom and flexibility. In essence, the presented results show that the InFoam method can print mechanically programmable sensorized actuators. The measured change in resistance of these sensorized 3D-printed actuators can be fed into an estimated Wiener-Hammerstein model to estimate the strain behavior over time (with fits \(\geq\)80%), which was significantly better than linear models in most cases. The results in this paper indicate that the Wiener-Hammerstein model is viable for different levels of porosity, motion (curvature and translation), and multiple degrees of freedom (3DoF bending segment). In general, the WH model could compensate for the nonlinearities and hysteresis of the sensorized actuator. It is expected that other piezoresistive sensors could also use this model structure, as it correlates with the underlying physics. When combined with data-driven techniques (such as Wiener-Hammerstein models), the measurement data can be used to estimate the strain. However, the accuracy can still be improved significantly. The next step is to integrate multiple outputs to accurately reconstruct the deformation of the actuator. The current implementation assumes that deformation is solely due to the actuator, which, as seen with the radial collapse of the contractor, makes the accuracy load dependent.
2306.08928
Quantum Game Theory meets Quantum Networks
Classical game theory is a powerful tool focusing on optimized resource distribution, allocation and sharing in classical wired and wireless networks. As quantum networks are emerging as a means of providing true connectivity between quantum computers, it is imperative and crucial to exploit game theory for addressing challenges like entanglement distribution and access, routing, topology extraction and inference for quantum networks. Quantum networks provide the promising opportunity of employing quantum games owing to their inherent capability of generating and sharing quantum states. Besides, quantum games offer enhanced payoffs and winning probabilities, new strategies and equilibria, which are unimaginable in classical games. Employing quantum game theory to solve fundamental challenges in quantum networks opens a new fundamental research direction necessitating inter-disciplinary efforts. In this article, we introduce a novel game-theoretical framework for exploiting quantum strategies to solve, as archetypal example, one of the key functionality of a quantum network, namely, the entanglement distribution. We compare the quantum strategies with classical ones by showing the quantum advantages in terms of link fidelity improvement and latency decrease in communication. In future, we will generalize our game framework to optimize entanglement distribution and access over any quantum network topology. We will also explore how quantum games can be leveraged to address other challenges like routing, optimization of quantum operations and topology design.
Indrakshi Dey, Nicola Marchetti, Marcello Caleffi, Angela Sara Cacciapuoti
2023-06-15T08:00:50Z
http://arxiv.org/abs/2306.08928v2
# Quantum Game Theory meets Quantum Networks ###### Abstract Classical game theory is a powerful tool focusing on optimized resource distribution, allocation and sharing in classical wired and wireless networks. As quantum networks are emerging as a means of providing true connectivity between quantum computers, it is imperative and crucial to exploit game theory for addressing challenges like entanglement distribution and access, routing, topology extraction and inference for quantum networks. Quantum networks provide the promising opportunity of employing quantum games owing to their inherent capability of generating and sharing quantum states. Besides, quantum games offer enhanced payoffs and winning probabilities, new strategies and equilibria, which are unimaginable in classical games. Employing quantum game theory to solve fundamental challenges in quantum networks opens a new fundamental research direction necessitating inter-disciplinary efforts. In this article, we introduce a novel game-theoretical framework for exploiting quantum strategies to solve, as archetypal example, one of the key functionality of a quantum network, namely, the entanglement distribution. We compare the quantum strategies with classical ones by showing the quantum advantages in terms of link fidelity improvement and latency decrease in communication. In future, we will generalize our game framework to optimize entanglement distribution and access over any quantum network topology. We will also explore how quantum games can be leveraged to address other challenges like routing, optimization of quantum operations and topology design. Quantum Networks, Quantum Games, Entanglement Distribution, Network Topology, Fidelity, Latency ## Introduction Everyone wants to enter the quantum race, from tech giants to states and governments with massive public funds for infrastructure development, like European Commission's Quantum Technologies Flagship program [1], and USA's National Quantum Initiative worth 1.2 billion US8[2]. Such a huge investment is motivated by the promise of quantum computer's capability of executing tasks that choke classical computers within realistic time-scale [3]. Unleashing the full potential of quantum computing requires implementation of operations among a large number of qubits, which is impossible for a single quantum processor to execute with the current level of technology. In order to circumvent the challenges associated with large monolithic quantum processors, the most promising approach is to network multiple realizable smaller quantum processors (or nodes) together [4]. Each such processor can execute few operations individually, but when interconnected in a quantum network i.e. the _Quantum Internet_, one is able to compile large and complex computing tasks exceeding the computing power of a single quantum processor. As quantum networks will be rolled out, providing true quantum connectivity between quantum computers over short and long distances, it will be possible to realize a wide range of distributed quantum computing, secure communications, and quantum-enhanced applications [5]. Quantum networks have to exploit the unique phenomenon of _entanglement_[6] to fully unleash the communication and computing potentialities allowed by quantum mechanics. 
Entanglement is the most powerful form of correlation with no-counterpart in the classical world, a property that can be exploited for information processing between two quantum communicating nodes, separated by considerable distance. Therefore, entanglement has emerged and elevated as the most important _'resource'_ for the communication over quantum networks; its importance is analogous to the fundamental role played by frequency bands or sub-bands within classical networks. Irrespective of the kind of entanglement, bipartite or multipartite, it is a very fragile resource, difficult to generate and retain in a sustainable way. However, in order to leverage the properties of entanglement for communication within quantum networks, entanglement needs to be distributed among the communicating nodes. Formulating algorithms for managing entanglement within a quantum network and distributing entangled states between multiple nodes (processors) is a fundamentally crucial and complex challenge [7]. A promising solution for entanglement distribution within quantum networks can be the development of centralized or distributed decision-making targeting the optimization of long-term system properties. Decision-making procedures exploit the interplay between instantaneous distribution of resources and long-term performance results in distinctive formulations, and aim at optimizing figures-of-merits subjected to associated constraints. In classical networks, constraints are environmental characteristics, like, fading, pathloss and interference, while figures-of-merit are error probability performance, spectral efficiency, outage probability. On the other hand, in quantum networks, constraints are characteristics resulting from the interaction between quantum states and the environment, in a noise process known as decoherence with no-counterpart in the classical world. Figures-of-merit include fidelity, or communication rate in terms of ebits per channel use [8]. ### Motivation Classical game theory has proved to be instrumental in optimized decision-making for resource distribution, allocation and sharing within resource-constrained classical networks, like Internet-of-Things (IoT), network of unmanned aerial vehicles (UAVs) [9]. Game theory-based framework has been a preferred choice for online decision-making with limited and incomplete information, over i) traditional numerical optimization techniques, which become too complex to solve with the increase in the network size and number of parameters involved, and ii) learning techniques, which are only as good as the \(a\)-priori data (information) available, and need complete information about the network to achieve reliable performance. Besides, learning and common optimization techniques operate offline with average response time increasing exponentially with the increase in the number of nodes and shortage in the resource (like, computing power, memory) availability. Game theory champions optimization scenarios with limited information as it enables entities to model uncertainties resulting from other entities' behaviors and to alter their actions based on the models, and learns from the network topology to enable stable coordination in a decentralized setting. Furthermore, game-theoretic techniques are distributed decision-making tools, which makes them scalable with the network size in terms of computational complexity and memory requirements. 
**Quantum Games promise increase in efficiency and payoffs, emergence of new equilibria and novel game strategies which are simply not possible in the classical domain**. Owing to these advantages, quantum games have been applied to recast different quantum algorithms and information processing techniques as games characterized by strategies and rules for gathering a deeper understanding of those algorithms and techniques. Examples include experimental demonstration of quantum Prisoner's Dilemma using nuclear magnetic resonance quantum computers, one-way model of quantum computation, or alternatively, representation of quantum non-locality, cluster-state generation, or different counter-intuitive paradoxes using non-zero-sum games or graphic games [10]. However, neither classical nor quantum games have ever been exploited for solving challenges like, distribution and sharing of fragile resources like entanglement, optimized topology extraction, and fidelity-guaranteed information routing through fixed or dynamic quantum networking topologies in both competitive and cooperative scenarios. **As the Quantum Internet gradually becomes a reality, it will be possible for quantum networks to leverage the benefits offered by quantum games over classical games in the aforementioned challenges. Indeed, by incorporating quantumness in form of pre-shared entanglement among network nodes, quantum games can achieve equilibria outperforming their classical counterparts and allow the players to explore correlated outcomes (with no-counterpart in the classical world) even in the absence of communication [11].** ### Contribution In this article, we aim at exploiting the promise of game theory for quantum networks. As a first-ever application, we propose a novel game-theoretic framework for entanglement distribution, capable of establishing stable links between any two nodes separated by a distance within fixed network topologies. Consequently, we investigate how classical and quantum strategies can be formulated for the game framework such that fidelity is maximized, while maintaining entanglement rate, and link latency is minimized subjected to coherence time constraint. We formulate two different kinds of games; i) multi-player coalition game where multiple nodes within a quantum network cooperate to establish entanglement (link) between source and final destination, ii) 2-way consensus game, where each node decides on the next 1-hop destination among multiple nodes available to communicate with. We devise both classical and quantum strategies for each game; where quantum strategies offer advantage in performance in both cases. Introduction of quantum strategies blurs the boundary between cooperative and competitive scenario, as the initial entangled quantum state allows players to utilize the correlations present in the state. In this paper, we deviate a bit from this condition in our 2-way consensus game, where the players' action does not depend only on the player's observation of the quantum state received from the referee. The player decides on the next node for communicating, based on fidelity payoff and latency cost estimates over the forwards links available. ## Background on Quantum and Classical Games Game theory provides a set of mathematical tools and frameworks that leverage interactions of rational heterogeneous self-interested entities with different information availability, to achieve a global objective and predict system-level emerging behaviors. 
From a network perspective, game models can capture the influence of the network topology on distributed decision-making processes of entities, with the freedom to plan their moves independently based on their own goals and incomplete local information. A basic game model involves five components : a) **Players** - Participants or decision-makers; b) **Action** - Preferred move of each player through implementation of player's control; c) **Information** - Local or global knowledge that dictates each player's decision; d) **Strategy** - Mapping player's moves with the information available to it at any point in time; e) **Utility or Payoff** - Each player's preference ordering of possible game outcomes. A very important concept in game theory is **Equilibrium**. The most commonly known form of equilibrium is the Nash Equilibrium (NE). NE constitutes an optimal set of strategies, for all players within a game under a certain scenario, with which no player can increase their current expected payoff further by unilaterally changing their strategies. NE, however, requires any player to know its opponent's strategies. When there is incomplete information available on the opponent's strategies, NE is extended to Bayesian equilibrium, where the set of optimal strategies is obtained using information on all possible sets of strategies and actions available in the game. In a network scenario, where multiple players cooperate to create a link, Wardrop equilibrium [12] is used to refer to a set of optimal strategies, through which players can ensure that the journey time for information between them is minimum. Depending on the kind of game played and nature of players, the choice of equilibrium is crucial for determining mathematically and logically the players' actions that can ensure the best outcome for each player. Games can be classical or quantum depending on whether they employ classical or quantum resources/strategies respectively. **Quantum games offer advantages over classical games in terms of winning probabilities, efficiency, payoffs and equilibria**[11]. Quantum strategies also offer strictly higher average payoffs over classical strategies in competitive scenarios where participating entities have conflicting interests. For example, in CHSH games [13], if the _a priori_ shared resource between two spatially-separated players is classical, the probability of winning is 0.75, while if the resource is quantum (like a pair of maximally-entangles qubits), the probability of joint winning exceeds 0.75 i.e., \(\cos^{2}\pi/8>0.75\). This gain in payoffs can be attributed to the fact that entanglement interferes with the **dilemma** present in classical games. In classical games, dilemma refers to the condition where one player can only win by lowering another player's expected payoff. In the quantum version, by incorporating entanglement at the initial state of the game, more than one player can win with acceptable payoffs within the range of possible payoffs for strategies available in the game. Including entanglement at the initial state determines generation of a range of strategies that are not available with a classical game and classical players. Classical games and classical players resort to deterministic strategies, which allow only one player to win under a certain scenario with maximum possible payoff. 
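The CHSH payoff gap quoted above (0.75 classically versus \(\cos^{2}\pi/8\approx 0.854\) with a shared entangled pair) can be checked numerically. The short sketch below assumes the standard maximally entangled state and the textbook optimal measurement angles; these specific choices are illustrative assumptions, since the article does not prescribe a particular implementation.

```python
import numpy as np

def projectors(theta):
    """Rank-1 projectors for a qubit measurement along angle theta in the x-z plane."""
    v0 = np.array([np.cos(theta / 2), np.sin(theta / 2)])    # outcome 0 (eigenvalue +1)
    v1 = np.array([-np.sin(theta / 2), np.cos(theta / 2)])   # outcome 1 (eigenvalue -1)
    return np.outer(v0, v0), np.outer(v1, v1)

# Shared maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Textbook optimal measurement angles for the CHSH game (assumed for illustration).
alice_angles = [0.0, np.pi / 2]
bob_angles = [np.pi / 4, -np.pi / 4]

p_win = 0.0
for x in (0, 1):
    for y in (0, 1):
        Pa = projectors(alice_angles[x])
        Pb = projectors(bob_angles[y])
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):  # CHSH winning condition
                    p_win += 0.25 * np.trace(np.kron(Pa[a], Pb[b]) @ rho).real

print(f"quantum winning probability = {p_win:.4f}")            # ~0.8536
print(f"cos^2(pi/8)                 = {np.cos(np.pi/8)**2:.4f}")
print("best classical strategy     = 0.7500")
```

Running this sketch reproduces the \(\approx 0.854\) joint winning probability, i.e., the advantage of the entanglement-assisted strategy over any classical strategy for this game.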
Quantum strategies can be formulated using convex linear combinations of unitary actions, and more than one player can maximize its expected payoff according to a selected strategy. Since quantum actions form a convex compact subset of a finite dimensional vector space, game equilibria will exist following Glicksberg's generalization of NE [14]. **Quantum games can be cooperative** (all players have common interests) **or competitive** (players compete for a particular target or have conflicting interests). However, the way they are played, i.e. the strategies and actions taken by the players, is different; a representation of how cooperative and competitive quantum games are played differently is provided in Fig. 1. For example, in the top part of Fig. 1, two or more players playing as a group cooperatively can coordinate their strategies using a quantumly entangled pair of systems. Players within the group can each decide on their respective moves, based on their own individual situation and observation of other players' strategies. The extra information on another player's strategy can help a player decide on its move, rather than going for a random step, thereby enhancing the player's winning probability. For the competitive scenario at the bottom of Fig. 1, players independently decide on their strategies and actions based on their own individual situations. Depending on the payoffs they receive in return, they can either win, lose but stay in the game, or retreat. Fig. 1: Diagrammatic Representation of Quantum Games in Cooperative and Competitive Scenarios. ## Game-based Optimization Framework for Entanglement Distribution The primary contribution of this section is to employ a game-theoretic framework for addressing a fundamental challenge in quantum networks. Specifically, we focus on distributing entanglement among multiple nodes within a quantum network, targeting optimization of long-term system properties like fidelity and coherence, entanglement rate, and latency in communication. Current technology restricts quantum networks to fixed topologies where the quantum nodes are usually connected through optical fibre links. Within these topologies, a group of nodes are connected to execute a computing task efficiently, where such nodes form a coalition. Entanglement needs to be distributed among all the nodes in the coalition to compile the task. Each link connecting the nodes within the coalition can be characterized by a different coherence time. Under this circumstance, entanglement distribution is affected by the decoherence time, and, in turn, latency on the source-to-final-destination link will affect the fidelity and the overall network entanglement rate. Consequently, entanglement distribution is also related to the optimization of the number of hops between the source and the destination, such that communication takes place within the coherence time. It is to be noted here that, irrespective of the ultimate applications of the network, which can be heterogeneous, entanglement distribution is inherently tied to optimized link set-up while maintaining fidelity and entanglement rate. We design a game-based optimization framework for entanglement distribution within a fixed quantum network topology. 
In this game, the quantum nodes are players, and the utility function is defined as the difference between the payoff and the cost function, where fidelity and entanglement rate are considered as the payoff and link latency is represented as the cost function, subject to constraints on coherence time. The nodes sample their local fidelity and entanglement rate and update their individual strategies with the aim of minimizing latency within the coherence time of the forward link, satisfying an equilibrium flow of distributed entanglement. Our players are always quantum as we are working on a quantum network topology. We, however, explore and compare classical and quantum strategies and resort to _Nash equilibrium_ for the classical version and to _Wardrop equilibrium_ for the quantized version. Another important aspect for practical implementation is to search for a stable solution. Wardrop equilibrium is analogous to NE, however, we consider it for the quantum strategies, in which case, the nodes aim at equalizing latency over their individual forward (outgoing) links. It is worth-noting, we consider the outgoing links from each node in order to account for the constraint on the link coherence time. Quantum strategies start with each node (player) being allocated a single entangled qubit. The arbitrary unitary rotation that nodes apply to their qubit is their strategy choice. The strategy choice determines whether a particular node will be part of the coalition to which entanglement will be distributed. ### Scenario 1 We consider a network topology which consists of \(N\) super-nodes (leader nodes), each capable of generating a given \(M\)-partite cluster state. Each super-node is connected to \(M\) end-nodes. There are also \(L\) repeater nodes between each pair of super-nodes. We represent such an example topology in Fig. 2 with \(N=3\), \(M=4\) and \(L=2\). Let us assume we want to establish a communication path between source A and destination B. Consequently, we want to establish the best possible link between A and B to distribute entanglement in a way that i) minimizes the number of quantum operations and latency in entanglement distribution, ii) maximizes fidelity within the coherence time of the link and iii) maintains the target overall network entanglement rate. We start by considering a coalition formation game for the above scenario, where the coalition-driven link formulation is dictated by the coherence time of the link between the source and the destination. Specifically, lower coherence time means we need to distribute entanglement between the source and the destination in a unit time over a minimum number of hops. Considering the entanglement rate as the payoff function and the number of hops between the source and the destination as the cost function, we find the optimum set of nodes that can participate in the coalition or link set-up. This optimization can be conducted through alternate iteration of coalition formation and entanglement distribution, until the coalition converges to a final stable formation. The coalition formation process is dynamic, during which the source node will try to join different coalitions and compare the corresponding utilities. When a stable state is reached, the link coalition is fixed and maintained for a certain period of time to complete different application tasks. ### Scenario 2 Here, let us consider a tree-like network topology which consists of multiple trees. Each tree consists of one source node and several leaf nodes. 
However, each source (leader) can exchange information with only one destination leaf node at any point in time. We represent such an example topology in Fig. 3 with 2 trees, where one tree has 5 leaf nodes and the other one consists of 4 leaf nodes. In this case, we want to establish the best possible path for entanglement distribution between two leaf nodes A and B, where A and B belong to two different trees, in a way that i) latency in communication is minimized, ii) the link fidelity is maximized and iii) the entanglement distribution can be completed within the coherence time of the link. In order to optimize the overall network performance, each quantum node needs to decide on the next 1-hop destination to communicate towards, depending on its current state-related information (location, direction). All such 1-hop links between the source and the final destination will form the link topology. We consider a two-way choice for each node; an example is provided in Fig. 3. Suppose node 2 decides to switch its link from node 1 to node 3. The link between nodes 1 and 2 is removed, followed by a consensus between nodes 2 and 3 to establish the new link. Through this link deviation, the latency cost is reduced from 100 to 60 and the fidelity payoff increases from 0.3 to 0.8. So the edge between two nodes in this case dictates the two-way consensus game process. Fig. 2: Optimized information and resource flow over a quantum network topology with three leader nodes (Leader), multiple repeaters (R) and end-nodes (E) between A and B; both classical and quantum coalition games are employed and \(e_{1}\to e_{3}\to e_{4}\to e_{2}\) are the selected links for information flow. ### Classical vs. Quantum Strategies Here we describe the differences between classical and quantum strategies for each of the two scenarios. For scenario 1, we apply both the classical and quantum forms of the multiplayer coalitional game towards solving the optimized link formation between two quantum end-nodes in the network topology outlined in Fig. 2. In the classical form of the game, based on the classical strategy adopted by each player, one guarantees that each player forming the coalition is rewarded by a certain amount called the 'value of coalition'. Other players in the game who are unable to join the coalition can prevent the players forming the coalition from getting any more payoffs than the 'value of coalition'. In our particular network topology set-up, the 'value of coalition' is attributed to a target network throughput. For the quantum version of the game, the leader node to which the source node \(A\) is connected is selected as the referee or arbiter of the game. The referee prepares an initial quantum cluster state and forwards it to the players. Each player is in possession of a single entangled qubit, on which it employs an arbitrary unitary rotation depending on its preferred action. The resultant state is forwarded back to the referee for measurement and the corresponding payoff assignment. If the initial quantum states are unentangled, the quantum coalition game breaks down into its classical form. For scenario 2, we apply both classical and quantum versions of the multiplayer 2-way consensus game for optimizing 1-hop link control between nodes within the network topology outlined in Fig. 3. In the classical form of the game, each player has two choices to select from, and the player will select the next 1-hop link in a way that cost is minimized and payoff is maximized over the next set of links. 
The other nodes in the network employ actions that can only influence the overall utility of the network, but have no impact or restriction on the selection of the individual node's next 1-hop link. For the quantum version, we consider the quantum coin flipping game with the requirements that there is no cheating, each player is aware of the others' decisions to participate in the link formation, and each player can agree on the respective individual decision on the game outcome. Therefore, each player can confirm its own progress in the game and the game converges, even if there are multiple players working together and taking their own decisions on their individual moves. ## Results and Analysis To implement the proposed game-based optimized link set-up for information flow and resource access within quantum networks, we conduct numerical evaluations. For each set of parameter settings, simulations are run through 1000 trials and the results are averaged out. Since the average lifetime of a qubit with current superconducting technology is around 500\(\mu\)s, we employ a synchronization time step of 300\(\mu\)s. The probability that a link will exist between any two quantum nodes, irrespective of their type (repeater, end-node or leader), is expressed as \(\mu\exp[-d(m,n)/\delta\lambda]\), where \(d(m,n)\) is the Euclidean distance between nodes \(m\) and \(n\), \(\delta\) is the maximum distance between \(m\) and \(n\), and \(\mu\) and \(\lambda\) are the control parameters of the distribution; \(\mu,\lambda\in(0,1]\); \(\mu\) controls the number of edges (links) present in the network topology and \(\lambda\) controls the length of different links. Fig. 3: Optimized information and resource flow through the control of the next one-hop link within a tree-like quantum network topology; nodes 1 and 6 are the leader nodes that are connected through a fixed link; links are selected for information exchange from \(A\) to \(B\) using a 2-way consensus game. Numbers in red \([60,80,90,\dots]\) are latency-based costs and numbers in green \(\{0.3,0.4,0.5,\dots\}\) are fidelity-based payoffs. Fig. 4: Nash equilibrium point between the links for information flow from \(A\) to \(B\) with the objective of minimizing the number of quantum operations and latency, such that the information is exchanged within the end-to-end coherence time of the link. The cost function for node \(A\) and node \(B\) is computed using the total latency experienced over all the links that the nodes' information propagates on. These results are based on the topology outlined in Fig. 2. In Fig. 4, we analyze the Nash equilibrium for the latency and operations minimization problems, with the \(X\) and \(Y\) axes representing the cost incurred at nodes \(A\) and \(B\), respectively. The curves are the best response functions, or the information exchange rate over the edges. We are particularly interested in the equilibrium point over the edges between the leader-repeater-leader nodes. We reached a unique Nash equilibrium point at (0.695, 0.74). Next, we investigate the efficiency of the classical and quantum versions of the coalition game approach for topology extraction in quantum networks in Fig. 5. We represent the normalized latency in one communications cycle as a function of the number of nodes present in the network. The average latency is calculated by summing the delay experienced over all the links and then dividing the summation by the number of hops. 
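As an illustration of the link-probability model above, the sketch below samples a random topology from \(\mu\exp[-d(m,n)/\delta\lambda]\). The node positions and the values of \(\mu\) and \(\lambda\) are placeholders (the actual simulation parameters are not reproduced here), and \(\delta\lambda\) is read as the product in the denominator.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2D positions for a handful of quantum nodes (arbitrary units).
positions = rng.uniform(0, 10, size=(8, 2))
mu, lam = 0.9, 0.5          # control parameters mu, lambda in (0, 1] (placeholder values)

# Pairwise Euclidean distances and the maximum distance delta.
diff = positions[:, None, :] - positions[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
delta = dist.max()

# Link probability p(m, n) = mu * exp(-d(m, n) / (delta * lambda)).
prob = mu * np.exp(-dist / (delta * lam))
np.fill_diagonal(prob, 0.0)

# Sample a symmetric adjacency matrix from these probabilities.
upper = np.triu(rng.random(prob.shape) < prob, k=1)
adjacency = upper | upper.T
print(adjacency.astype(int))
```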
Performance of four different scenarios is compared: no game for classical networks, a classical game for classical networks, a classical game for quantum networks, and a quantum game for quantum networks. The quantum strategies emerge as the winner, and the reason can be explained through an intuitive example. Let us consider a coin toss scenario. For the classical strategy, each player can choose heads or tails with equal probability. The quantum version will use entangled photons as qubits instead of coins, and the players can use a polarizer and a single-photon detector. So there is a possibility of rotating the polarizer by at most \(90^{\circ}\), and players can maximize their winning probabilities by choosing the angle of rotation. So, on average, the probability of winning for each player can increase manifold with quantum strategies. Looking at Fig. 5 in light of the discussion above, it may seem counter-intuitive that initially, when the number of nodes is below 2, classical games perform better than the quantum version. It is worth noting here that quantum games need at least two nodes in the network to begin with in order to distribute an entangled pair of initial quantum states. Therefore, the quantum advantage is visible once the number of nodes in the network increases to more than 2. In Fig. 6, we analyze the game-theoretic performance in terms of the end-to-end fidelity over the network topology in Fig. 3 as a function of the average link decoherence rate. Decoherence in quantum links is represented in terms of damping (amplitude or phase) or depolarizing. The results generated in Fig. 6 consider the depolarizing rate over the quantum links. Quantum strategies perform better owing to the increase in the average equilibrium payoff offered by quantum correlations and unitary strategies. ## Concluding Remarks and Future Directions In this paper, we explored the promise of a quantum game-theoretic framework for distributing entanglement within quantum networks, with the aim of striking a flexible balance between link fidelity and latency while maintaining the entanglement rate over the link coherence. In the future, we will generalize the quantum game framework to address different challenges in designing, developing and deploying quantum networks. An interesting direction will be to address the impact of (complex) network topologies on the evolution of the strategy and utility function of a game, or, vice versa, studying how different game strategies impact the network topology. Another promising research thread would be focused on evolutionary game theory [6], which embodies a suitable framework for analyzing co-evolution, i.e., the process in which the properties of interacting systems evolve in dependence on each other against the backdrop of the dynamic fitness landscape. The concepts of spatial structure and evolutionary game theory can be used to understand cooperation and competition among the nodes of a quantum network, regarding the use of quantum resources such as entanglement and the study of the co-evolution of the quantum nodes in response to their environment. Fig. 5: Variation in the normalized delay experienced by information flow arriving at any end-node within the topology obtained in Fig. 2, as a function of the increasing number of nodes simultaneously communicating over the network. CN stands for Classical Networks and QN stands for Quantum Networks. Fig. 6: End-to-end fidelity improvement with variation in the link decoherence rate through the application of the classical 2-way consensus game and its quantized version. These results are based on the topology outlined in Fig. 3.
Quantum decoherence is a crucial quantum phenomenon that we need to consider when designing optimal resource sharing and allocation algorithms for probabilistic quantum networks. Important open questions in this context relate to how to decide when and where to send information over the network while having only local \(k\)-hop knowledge of the network topology (or limited network-state knowledge in general), for example how we should make use of quantum repeaters or how to perform error correction while in storage to keep logical qubits alive. Factoring in such aspects related to decoherence in a quantum game theory framework is another interesting line of attack for future work.
2305.11929
Top-Ranked Cycle Flux Network Analysis of Molecular Photocells
We introduce a top-ranked cycle flux ranking scheme of network analysis to assess the performance of molecular junction solar cells. By mapping the Lindblad master equation to the quantum-transition network, we propose a microscopic Hamiltonian description underpinning the rate equations commonly used to characterize molecular photocells. Our approach elucidates the paramount significance of edge flux and unveils two pertinent electron transfer pathways that play equally important roles in robust photocurrent generation. Furthermore, we demonstrate that non-radiative loss processes impede the maximum power efficiency of photocells, which may otherwise be above the Curzon-Ahlborn limit. These findings shed light on the intricate functionalities that govern molecular photovoltaics and offer a comprehensive approach to address them in a systematic way.
Nikhil Gupt, Shuvadip Ghosh, Arnab Ghosh
2023-05-19T15:15:00Z
http://arxiv.org/abs/2305.11929v2
# Top-Ranked Cycle Flux Network Analysis of Molecular Photocells ###### Abstract We introduce a top-ranked cycle flux ranking scheme of network analysis to assess the performance of molecular junction solar cells. By mapping the Lindblad master equation to the quantum-transition network, we propose a microscopic Hamiltonian description underpinning the rate equations commonly used to characterize molecular photocells. Our approach elucidates the paramount significance of edge flux and unveils two pertinent electron transfer pathways that play equally important roles in robust photocurrent generation. Furthermore, we demonstrate that non-radiative loss processes impede the maximum power efficiency of photocells, which may otherwise be above the Curzon-Ahlborn limit. These findings shed light on the intricate functionalities that govern molecular photovoltaics and offer a comprehensive approach to address them in a systematic way. ## I Introduction Molecular junctions consisting of a single donor and acceptor, placed between two external leads is an active field of research [1; 2; 3; 4] that combines the fundamental aspects of quantum transport [5; 6; 7] and their possible practical implementations [8; 9; 10; 11]. One such application of molecular junctions, among many others, is in photovoltaic (PV) cells, where solar energy of incident photons is converted into electric power [12; 13; 14]. The interplay between the heat current mediated by the temperature difference of the solar radiation and the PV cell at ambient temperature, together with the charge current arising due to the bias voltage across the electrodes, drive the system towards strong non-equilibrium steady states [15; 16; 17; 18]. A great deal is therefore focused on theoretical modeling to optimize the performance of PV cells [19; 20] and to explore the underlying transport mechanisms that could facilitate the realization of sophisticated on-chip complex quantum thermal devices [21; 22; 23; 24; 25]. Though, the master equation in Lindblad form is a conventional theoretical technique for the description of non-equilibrium open quantum systems [26], particularly in quantum optics [27] and quantum thermodynamics communities [28; 29; 30; 31; 32; 33; 34; 35; 36] where spins and atomic degrees are involved, its extension towards molecular systems is relatively a novel area of exploration [37; 38; 39; 40; 41; 42]. As an alternative, in recent times, a state space representation in the framework of network theory by Nitzan et. al. [43; 44; 45; 46; 47] has become a popular method to study the non-equilibrium charge transport behavior of molecular photovoltaics [43; 44; 45; 46] and thermoelectric devices [47; 23; 48]. In this context, the latest finding by Wang et al. [49], based on the works of Zurek [50], and Cao [51], is worth mentioning. They have shown that the quantum Lindblad master equation can be cast into a Pauli master equation without loss of any generality. This facilitates the representation of dissipative quantum dynamics as a weighted network with nodes and edges, where nodes (vertices) denote quantum states and edges denote the non-equilibrium transition from one quantum state to another with nonzero flux rates [52; 53]. Thus, optimization of the performance of multi-component quantum thermal devices reduces to identifying the major working cycles amongst various possible pathways as is the case with photovoltaics with a multitude of electron transfer channels. 
Although the concept of cycle flux was well developed in algebraic graph theory [54; 55; 56; 57; 58; 59; 60] from the early works of Hill [54], Kohler and Vollmerhaus [55], as well as Schnakenberg [56], the recent developments by Wang et al. have provided an efficient cycle flux ranking scheme to fully comprehend the intricate functionality of complex quantum systems, with particular emphasis on the spin-Seebeck effect within the linear response regime [49]. In this paper, we extend this idea to molecular systems and establish the equivalence between the dynamical formulation of the Lindblad master equation and the state space representation of molecular photocells pioneered by Nitzan et al. [43; 44; 45; 46]. We point out, however, that while the state space method is quite effective in computing the steady-state currents in PV cells, it falls short of unraveling the underlying working mechanism of photovoltaic devices. On the contrary, top-ranked cycle flux analysis provides a natural and alternative gateway to capture the underlying features, which is otherwise challenging due to the multiple electron transfer pathways in molecular junctions. Thus, our present findings demonstrate that the cycle flux ranking scheme could go beyond its standard applications to near-equilibrium situations and could serve as a potential candidate for decoding the fundamental working principle of complex molecular systems even in far-from-equilibrium scenarios. The work is organized as follows: In Sec. II, we introduce the basic model of the PV cell and derive the open quantum dynamics of Lindblad form, which is shown to be equivalent to the Pauli master equations employed by Nitzan et al. [43] within a state space formulation. Next, we elaborate on the fundamental principles of the cycle flux ranking scheme in the context of our present model in Sec. III, and summarize important outcomes and findings of our analysis in Sec. IV. Finally, we conclude in Sec. V. ## II Model and dynamics The basic model of a PV cell comprises two "effective" sites, representing the donor (D) and acceptor (A) molecules placed between two metallic leads (L and R), as depicted in Fig. 1. We consider each site as a two-state system with the ground (\(|n_{\rm s1}\rangle\)) and excited (\(|n_{\rm s2}\rangle\)) states, corresponding to the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) of the two molecules, having energies \(\varepsilon_{\rm s1}\) and \(\varepsilon_{\rm s2}\) (\(\rm s=D,A\)), respectively. The general form of the electronic part of the Hamiltonian for the donor-acceptor system reads as \[H_{el}=\sum_{\rm s=D,A}\sum_{\rm i=1,2}\varepsilon_{\rm si}n_{\rm si}+\sum_{\rm s=D,A}U_{\rm s}n_{\rm s1}n_{\rm s2}. \tag{1}\] Here, \(U_{\rm D}\) (\(U_{\rm A}\)) stands for the positive Coulomb repulsion energy if two electrons are present in the donor (acceptor) site, and \(n_{\rm si}=c_{\rm si}^{\dagger}c_{\rm si}\), \(\rm i=\{1,2\}\), represents the number operator corresponding to the state \(|n_{\rm si}\rangle\), with the operators satisfying the fermionic anti-commutation relation [61] \(\{c_{\rm si},c_{\rm s^{\prime}j}^{\dagger}\}=\delta_{\rm ss^{\prime}}\delta_{\rm ij}\). 
The most important energy scales of the problem are notably the energy gap between the donor levels (\(\Delta E=\varepsilon_{\rm D2}-\varepsilon_{\rm D1}\)) and the donor-acceptor levels (\(\Delta\varepsilon=\varepsilon_{\rm D2}-\varepsilon_{\rm A2}\)), while the acceptor's ground state energy has no influence on the overall cell operation as one expects [41; 43]. This can be attributed to the following two facts (i) solar radiation triggers the radiative transition of electrons from the donor ground to the excited state; and (ii) electron transfer between the excited states of the donor to the acceptor could in principle be governed by the cell's vibrational degrees of freedom, such as thermal phonons modes. To make a further realistic and practical assumption about our model in view of the prior literature [41; 43; 44; 45], we restrict the donor to be either in the ground or excited state, whereas the acceptor can be either in the single or doubly occupied state. This allows us to directly compare our results with the state space model proposed by Nitzan et. al. [43], which is characterized by a single Coulomb repulsion parameter arising out of \(U_{\rm A}\) in Eq. (1). The Hamiltonian of the free-electron reservoirs [35] to which the donor and acceptor molecules are coupled are respectively given by \[H_{\rm L}=\sum_{l}(\epsilon_{l}-\mu_{\rm L})d_{l}^{\dagger}d_{l},\quad\text{ and}\quad H_{\rm R}=\sum_{r}(\epsilon_{r}-\mu_{\rm R})d_{r}^{\dagger}d_{r}, \tag{2}\] where \(d_{l}(d_{r})\) and \(d_{l}^{\dagger}(d_{r}^{\dagger})\) are the electron annihilation and creation operators for left (L) and right (R) electrodes and \(\mu_{\alpha}\) is the corresponding chemical potential of \(\alpha\)-th (\(\alpha={\rm L,R}\)) reservoirs. In other words, the bias voltage between two metallic electrodes is given by \(U=(\mu_{\rm L}-\mu_{\rm R})/|e|\), where \(e\) is the charge of the electron. Tunneling Hamiltonian between the electrodes and the molecule is chosen to be of the form of [16; 35] \[H_{I}=\sum_{l}\hbar g_{l}(d_{l}^{\dagger}c_{\rm D1}+c_{\rm D1}^{\dagger}d_{l}) +\sum_{r}\hbar g_{r}(d_{r}^{\dagger}c_{\rm A2}+c_{\rm A2}^{\dagger}d_{r}), \tag{3}\] where \(g_{l(r)}\) is the respective coupling strength. The first term indicates that the left electrode swaps electrons solely with the ground state of the donor which gets triggered by the solar radiation to its excited state. The second term denotes that the right electrode can only exchange electrons with the excited state of the acceptor, as the ground state of the acceptor molecule is already occupied. Finally, electron transfer processes within the system are governed by both photons (\(H_{\rm pht}\)) and phonon (\(H_{\rm phn}\)) baths [41] with \(H_{\rm pht}=\sum_{k}\epsilon_{k}a_{k}^{\dagger}a_{k}\), and \(H_{\rm phn}=\sum_{q}\epsilon_{q}^{\prime}b_{q}^{\dagger}b_{q}\), where \(a_{k}^{\dagger}\) (\(b_{q}^{\dagger}\)) and \(a_{k}\) (\(b_{q}\)) are bosonic creation and annihilation operators for the \(k\)-th (\(q\)-th) bath mode with energy \(\epsilon_{k}\) (\(\epsilon_{q}^{\prime}\)) respectively. 
The interactions of the photon and phonon baths with the molecule are taken as [16; 18; 41] \[H_{I}^{\rm pht}=\sum_{k}\hbar g_{k}^{\rm D}(a_{k}^{\dagger}c_{\rm D1}^{\dagger}c_{\rm D2}+c_{\rm D2}^{\dagger}c_{\rm D1}a_{k}), \tag{4}\] and \[H_{I}^{\rm phn}=\sum_{q}\hbar g_{q}^{\rm D}(b_{q}^{\dagger}c_{\rm D1}^{\dagger}c_{\rm D2}+c_{\rm D2}^{\dagger}c_{\rm D1}b_{q})+\sum_{q}\hbar g_{q}^{\rm DA}(b_{q}^{\dagger}c_{\rm A2}^{\dagger}c_{\rm D2}+c_{\rm D2}^{\dagger}c_{\rm A2}b_{q}), \tag{5}\] respectively, where \(g_{k}^{\rm D}\) and \(g_{q}^{\rm D}\) (\(g_{q}^{\rm DA}\)) are the corresponding coupling constants. From Eqs. (4) and (5), it is clear that the transition from the ground to the excited state at the donor site is governed by both photon (radiative process) and phonon (nonradiative process) modes, while the electron transfer between the donor and the acceptor is solely driven by vibrational thermal phonon modes at the ambient temperature. Figure 1: Schematic diagram of a molecular photovoltaic solar cell. The system consists of the donor (D) and the acceptor (A) molecules placed between two free-electron reservoirs. The left reservoir (L) is exclusively coupled to the ground state of the donor and the right reservoir (R) is coupled only to the excited state of the acceptor. Photon-induced transitions are indicated by a wiggly line and phonon-induced processes are depicted by broken lines. Consequently, the time evolution of the system dynamics in the interaction picture under the Born-Markov approximation is described by the quantum master equation [26; 31; 34; 35] (Appendix A) \[\frac{d\rho}{dt}=\mathcal{L}_{\rm L}[\rho]+\mathcal{L}_{\rm R}[\rho]+\mathcal{L}_{\rm pht}^{\rm D}[\rho]+\mathcal{L}_{\rm phn}^{\rm D}[\rho]+\mathcal{L}_{\rm phn}^{\rm DA}[\rho], \tag{6}\] where \(\rho\) is the reduced density matrix of the system and \(\mathcal{L}_{\nu}\) is the Lindblad superoperator describing the effect of dissipation induced by the \(\nu\)-th thermal bath. Since the Hamiltonian in Eq. (1) is diagonal in the number state basis for the donor and acceptor molecules, the above Lindblad master equation effectively decouples the diagonal and off-diagonal matrix elements of the reduced density matrix \(\rho\) in the eigenbasis of \(H_{el}\) [35]. This, in turn, allows for a closed-form equation of motion for the occupation probabilities, or the population of the \(\mathfrak{i}\)-th state, \(P_{\mathfrak{i}}=\langle\mathfrak{i}|\rho|\mathfrak{i}\rangle\), w.r.t. the various system eigenstates \(\{|\mathfrak{i}\rangle\}\). As mentioned before, the acceptor can only be in two states \(\{|10\rangle,|11\rangle\}\), while the donor can be in any of the three possible configurations \(\{|00\rangle,|10\rangle,|01\rangle\}\), so there are six possible eigenstates for the overall system. We label them as follows: \(|0\rangle=|0010\rangle\), \(|1\rangle=|1010\rangle\), \(|2\rangle=|0110\rangle\), \(|3\rangle=|0011\rangle\), \(|4\rangle=|1011\rangle\) and \(|5\rangle=|0111\rangle\). 
The corresponding eigen-energies \(\varepsilon_{\mathfrak{i}}\) (\(\mathfrak{i}=0,1,2,\ldots,5\)) for the states are given by: \(\varepsilon_{0}=\varepsilon_{\rm A1}\), \(\varepsilon_{1}=\varepsilon_{\rm D1}+\varepsilon_{\rm A1}\), \(\varepsilon_{2}=\varepsilon_{\rm D2}+\varepsilon_{\rm A1}\), \(\varepsilon_{3}=\varepsilon_{\rm A1}+\varepsilon_{\rm A2}+U_{\rm A}\), \(\varepsilon_{4}=\varepsilon_{\rm D1}+\varepsilon_{\rm A1}+\varepsilon_{\rm A2}+U_{\rm A}\) and \(\varepsilon_{5}=\varepsilon_{\rm D2}+\varepsilon_{\rm A1}+\varepsilon_{\rm A2}+U_{\rm A}\), respectively. As a result, the time evolution equations for the populations are governed by the kinetic equations (detailed derivation in Appendix A) \[\frac{dP_{0}}{dt}=(k_{01}P_{1}-k_{10}P_{0})+(k_{03}P_{3}-k_{30}P_{0}), \tag{7}\] \[\frac{dP_{1}}{dt}=(k_{10}P_{0}-k_{01}P_{1})+(k_{12}P_{2}-k_{21}P_{1})+(k_{14}P_{4}-k_{41}P_{1}), \tag{8}\] \[\frac{dP_{2}}{dt}=(k_{21}P_{1}-k_{12}P_{2})+(k_{23}P_{3}-k_{32}P_{2})+(k_{25}P_{5}-k_{52}P_{2}), \tag{9}\] \[\frac{dP_{3}}{dt}=(k_{30}P_{0}-k_{03}P_{3})+(k_{32}P_{2}-k_{23}P_{3})+(k_{34}P_{4}-k_{43}P_{3}), \tag{10}\] \[\frac{dP_{4}}{dt}=(k_{41}P_{1}-k_{14}P_{4})+(k_{43}P_{3}-k_{34}P_{4})+(k_{45}P_{5}-k_{54}P_{4}), \tag{11}\] \[\frac{dP_{5}}{dt}=(k_{52}P_{2}-k_{25}P_{5})+(k_{54}P_{4}-k_{45}P_{5}). \tag{12}\] This is a "classical"-looking Pauli master equation where the transition rates involved are quantum mechanical in nature. For instance, \(k_{\mathfrak{i}\mathfrak{j}}\) (\(k_{\mathfrak{j}\mathfrak{i}}\)) denotes the rate of transition from quantum state \(|\mathfrak{j}\rangle\) to \(|\mathfrak{i}\rangle\) (\(|\mathfrak{i}\rangle\) to \(|\mathfrak{j}\rangle\)), with \(\mathfrak{i},\mathfrak{j}=0,1,2,3,4,5\) and \(\mathfrak{i}\neq\mathfrak{j}\); the rates are given by: \[k_{10}=k_{43}=\gamma_{\rm L}f(\epsilon_{\rm L}), \tag{13}\] \[k_{01}=k_{34}=\gamma_{\rm L}[1-f(\epsilon_{\rm L})], \tag{14}\] \[k_{30}=k_{41}=k_{52}=\gamma_{\rm R}f(\epsilon_{\rm R}), \tag{15}\] \[k_{03}=k_{14}=k_{25}=\gamma_{\rm R}[1-f(\epsilon_{\rm R})], \tag{16}\] \[k_{21}=k_{54}\equiv k_{\rm r}+k_{\rm nr}=\gamma_{\rm pht}^{\rm D}n(\epsilon_{\rm r})+\gamma_{\rm phn}^{\rm D}n(\epsilon_{\rm nr}), \tag{17}\] \[k_{12}=k_{45}\equiv\tilde{k}_{\rm r}+\tilde{k}_{\rm nr}=\gamma_{\rm pht}^{\rm D}[n(\epsilon_{\rm r})+1]+\gamma_{\rm phn}^{\rm D}[n(\epsilon_{\rm nr})+1], \tag{18}\] \[k_{23}=\gamma_{\rm phn}^{\rm DA}n(\epsilon_{\rm DA}), \tag{19}\] \[k_{32}=\gamma_{\rm phn}^{\rm DA}[n(\epsilon_{\rm DA})+1]. \tag{20}\] The transition rates consist of two terms: (a) the rate coefficient \(\gamma\), which depends on the coupling strength through the bath spectral function and characterizes the inverse time scale associated with the corresponding process; (b) a second term that contains the information about the statistical properties of the quantum bath (fermionic or bosonic) through its temperature-dependent autocorrelation functions [61]. The latter can be classified into two categories, namely, the absorption and relaxation processes. For a fermionic bath, the absorption or excitation process is governed by \(f(\epsilon_{\alpha})\), while the de-excitation process is controlled by the \(1-f(\epsilon_{\alpha})\) factor [29; 35]. 
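The occupation factors entering Eqs. (13)-(20) are the Fermi-Dirac and Bose-Einstein distributions spelled out in the next paragraph. A minimal numerical sketch is given below; the level energies, temperatures, chemical potentials and rate coefficients are placeholder values (not taken from this work), and the use of \(\varepsilon_{\rm D1}\) and \(\varepsilon_{\rm A2}+U_{\rm A}\) as the lead addition energies is an assumption consistent with the eigen-energy differences listed above.

```python
import numpy as np

kB = 8.617e-5            # Boltzmann constant in eV/K

def fermi(eps, mu, T):
    """Fermi-Dirac occupation f((eps - mu) / (kB * T))."""
    return 1.0 / (np.exp((eps - mu) / (kB * T)) + 1.0)

def bose(delta_e, T):
    """Bose-Einstein occupation n(delta_e / (kB * T))."""
    return 1.0 / (np.exp(delta_e / (kB * T)) - 1.0)

# Placeholder level energies (eV), potentials, temperatures and rate coefficients.
eD1, eD2, eA2, UA = 0.0, 1.4, 1.0, 0.1
muL, muR = 0.05, 0.95                      # bias U = (muL - muR)/|e| across the leads
TL = TR = Tph = 300.0                      # leads and phonons at ambient temperature (K)
TS = 5800.0                                # solar (photon) temperature (K)
gL = gR = g_pht = g_phn_D = g_phn_DA = 1.0 # gamma coefficients (arbitrary units)

# Convention: rates[(i, j)] is the transition rate from state |j> to |i>.
rates = {}
rates[(1, 0)] = rates[(4, 3)] = gL * fermi(eD1, muL, TL)                              # Eq. (13)
rates[(0, 1)] = rates[(3, 4)] = gL * (1 - fermi(eD1, muL, TL))                        # Eq. (14)
rates[(3, 0)] = rates[(4, 1)] = rates[(5, 2)] = gR * fermi(eA2 + UA, muR, TR)         # Eq. (15)
rates[(0, 3)] = rates[(1, 4)] = rates[(2, 5)] = gR * (1 - fermi(eA2 + UA, muR, TR))   # Eq. (16)
n_r, n_nr = bose(eD2 - eD1, TS), bose(eD2 - eD1, Tph)
rates[(2, 1)] = rates[(5, 4)] = g_pht * n_r + g_phn_D * n_nr                          # Eq. (17)
rates[(1, 2)] = rates[(4, 5)] = g_pht * (n_r + 1) + g_phn_D * (n_nr + 1)              # Eq. (18)
n_DA = bose((eD2 - eA2) - UA, Tph)
rates[(2, 3)] = g_phn_DA * n_DA                                                       # Eq. (19)
rates[(3, 2)] = g_phn_DA * (n_DA + 1)                                                 # Eq. (20)

print(rates[(1, 0)], rates[(2, 1)], rates[(3, 2)])
```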
Here, \(f(\epsilon_{\alpha})\) is the Fermi-Dirac distribution, which is given by \(f(\epsilon_{\alpha})=1/(e^{\epsilon_{\alpha}}+1)\equiv f(\varepsilon_{\mathfrak{i}},\mu_{\alpha},T_{\alpha})\), where \(\alpha={\rm L},{\rm R}\), \(\epsilon_{\alpha}=(\varepsilon_{\mathfrak{i}}-\mu_{\alpha})/k_{B}T_{\alpha}\) and \(T_{\alpha}\) is the temperature of the \(\alpha\)-th reservoir. In the case of the bosonic (photon and phonon) baths, the same factors are given by \(n(\epsilon_{\beta})\) and \(1+n(\epsilon_{\beta})\) [29], with \(n(\epsilon_{\beta})\) as the Bose-Einstein distribution \(n(\epsilon_{\beta})=1/(e^{\epsilon_{\beta}}-1)\), for \(\beta={\rm r},{\rm nr},{\rm DA}\), where \(\epsilon_{\rm r}=(\varepsilon_{\rm D2}-\varepsilon_{\rm D1})/k_{B}T_{\rm S}\), \(\epsilon_{\rm nr}=(\varepsilon_{\rm D2}-\varepsilon_{\rm D1})/k_{B}T_{\rm ph}\), \(\epsilon_{\rm DA}=(\Delta\varepsilon-U_{\rm A})/k_{B}T_{\rm ph}\), and \(T_{\rm S}\) (\(T_{\rm ph}\)) stands for the temperature of the photon (phonon) bath, respectively. We emphasize that Eqs. (7)-(12), along with their rate coefficients calculated from the above microscopic picture, are analogous to the phenomenological rate equations considered by Nitzan et al. [43; 44; 45] in their study of molecular photocells. In terms of the occupation probabilities of the individual states, the empirical definitions for the electron currents used by Nitzan et al. [43], within the state-space model, can be summarized as \[J_{\rm R}=[k_{30}P_{0}-k_{03}P_{3}]+[k_{41}P_{1}-k_{14}P_{4}]+[k_{52}P_{2}-k_{25}P_{5}], \tag{21}\] \[J_{\rm L}=[k_{10}P_{0}-k_{01}P_{1}]+[k_{43}P_{3}-k_{34}P_{4}], \tag{22}\] \[J_{\rm S}=k_{\rm r}[P_{1}+P_{4}]-\tilde{k}_{\rm r}[P_{2}+P_{5}], \tag{23}\] \[J_{\rm nr}=k_{\rm nr}[P_{1}+P_{4}]-\tilde{k}_{\rm nr}[P_{2}+P_{5}], \tag{24}\] \[J_{\rm DA}=k_{23}P_{3}-k_{32}P_{2}. \tag{25}\] Here \(J_{\rm R}\) and \(J_{\rm L}\) are the electron currents leaving and entering the molecular system, to and from the electrodes, \(J_{\rm S}\) and \(J_{\rm nr}\) are, respectively, the radiative (photon-induced) and nonradiative (phonon-induced losses) electron currents between the ground and excited states of the donor, and \(J_{\rm DA}\) is the average current due to the transfer of electrons (phonon induced) between the donor and acceptor molecules. While the above rate equations can be solved numerically to obtain the steady-state flux-current, \(J_{\rm L}=-J_{\rm R}=-J_{\rm DA}=J_{\rm S}+J_{\rm nr}=J\), this does not reveal much about the actual mechanisms of the transport processes involved. In what follows, we take advantage of the cycle flux analysis scheme of algebraic graph theory [54; 55; 56; 57; 58; 59; 60] to gain a deeper insight into the underlying transport channels of molecular solar cells and explore the effect of non-radiative losses on the thermodynamic efficiency of the photocell. ## III The cycle flux analysis From Eqs. (7)-(12), we may rewrite the time evolution equation of the population distribution \(P_{\mathfrak{i}}\) over the quantum mechanical system states in the following compact form \[\frac{dP_{\mathfrak{i}}}{dt}=\sum_{\mathfrak{j}=0}^{5}[k_{\mathfrak{i}\mathfrak{j}}P_{\mathfrak{j}}-k_{\mathfrak{j}\mathfrak{i}}P_{\mathfrak{i}}];\quad\mathfrak{j}\neq\mathfrak{i}, \tag{26}\] subject to the condition \(\sum_{\mathfrak{i}=0}^{5}P_{\mathfrak{i}}=1\). Thus it completely characterizes the overall system, where \(k_{\mathfrak{i}\mathfrak{j}}\) (\(k_{\mathfrak{j}\mathfrak{i}}\)) depict the open quantum system under non-equilibrium conditions with the forward (backward) transition rates listed in Eqs. (13)-(20). Following Ref. 
[49], the above dissipative quantum dynamics can be effectively represented by a network or graph, where nodes or vertices are assigned to each quantum state, and edges or lines relate the allowed transitions between them. In the present case, the basic graph (G) depicted in Fig. 2 provides a useful visualization of the system, with each vertex denoting a quantum state and its associated probability \(P_{i}\), and each edge representing a pair of transition probabilities \(k_{ij}\) and \(k_{ji}\), one for each possible direction. Since the existence of the forward transition rate \(k_{ij}>0\) implies the reverse transition \(k_{ji}>0\) in all practical circumstances, we may assume without loss of any generality that our basic graph G is connected in the sense that for each pair of states or vertices \((i,j),i\neq j\), there exists _at least one sequence of transitions_ or edges that connects them in both directions. If G is not connected, the physical system represented by G can be decomposed into non-interacting subsystems, which can be analyzed independently [56]. Despite the fact that the underlying master equation (26) is linear and the uniqueness of the steady-state solution is guaranteed by the properties of the master equation, finding the complete analytical solution for the steady-state populations is a nontrivial task even for the simplest physical conditions [56]. However, the diagrammatic representation of the system in terms of its basic graph offers a highly versatile and effective tool to tackle such complicated problems [62; 63; 49]. For instance, the steady state solution \(\bar{P}_{i}\) of our basic graph is defined as [54; 55; 56] \[\bar{P}_{i}=\Lambda_{i}/\Lambda, \tag{27}\] where \(\Lambda_{i}\) is the sum of the weights of the spanning trees rooted on the \(i\)-th state and \(\Lambda\) is the sum of weights of spanning trees rooted on every individual state, i.e., \(\Lambda=\sum_{i=0}^{5}\Lambda_{i}\). The above method was first invented by Kirchhoff in 1847 [64] in the context of network theory and later on rigorously formulated by King and Altmann [65] in the context of biochemical reactions. As a result, it is commonly referred to as Kirchhoff's theorem in the theory of network analysis [63; 64; 65; 56]. Here by _spanning tree_, we refer to a covering subgraph of the basic graph that contains all the vertices with the minimum number of edges, which is always connected but contains no _circuits_ (cyclic sequences of edges or _cycle trajectories_). From Fig. 2, it is easy to understand that a basic graph generally contains a large number of undirected subcycles, and each subcycle is a combination of a pair of one-directional _circuits_ or cycle trajectories, \(C^{+}\) (counterclockwise) and \(C^{-}\) (clockwise). The net cycle flux is therefore given by the difference between the two circuit fluxes, \(J_{C}=J_{C}^{+}-J_{C}^{-}\). The notion of the "circuit flux" was introduced by Kohler and Vollmerhaus [55], and has since been widely employed to characterize a large variety of biological systems [62; 63; 65; 66; 67; 68; 69]. 
In essence, it captures the frequency of circuit completion along a specific cycle trajectory (\(C^{+}\) or \(C^{-}\)) and can be used to quantify the edge flux \(J_{\rm i\to j}=\sum_{\mathcal{C}}J_{\mathcal{C}}^{+}-J_{\mathcal{C}}^{-}\), as the summation of the differences between the circuit fluxes along all cycle trajectories that include the edge \({\rm i}\to{\rm j}\). For example, the net edge flux \(J_{2\to 3}\) in Fig. 2 can be obtained Figure 2: The fundamental basic graph (G) as well as its undirected subcycles, encompasses all possible quantum transport channels (networks) of the photovoltaic solar cell under non-equilibrium conditions. by \(J_{2\to 3}=(J_{C_{1}}^{+}+J_{C_{3}}^{+}+J_{C_{4}}^{+})-(J_{C_{1}}^{-}+J_{C_{3}}^{-}+J_{ C_{4}}^{-})\). Finally, the flux associated with each one-directional "circuit" is determined by [54, 55, 56] \[J_{C}^{\pm}=\Pi_{C}^{\pm}\frac{\Lambda_{C}}{\Lambda}. \tag{28}\] Here, \(\Pi_{C}^{\pm}\) represents the weight factor which is defined by the product of transition rates along the cycle trajectory \(C^{\pm}\), whereas the sum of the weights of the spanning trees rooted on cycle \(C\) is given by \(\Lambda_{C}\) and \(\Lambda\) measures the total weight of the spanning trees rooted on each individual state. As an example, Fig 3, displays three such cycles of our basic graph \(\mathsf{G}\) and the corresponding spanning trees that are rooted on them, where the cycle \(C_{1}^{+}\) possesses a weight factor \(\Pi_{C_{1}}^{+}=k_{21}k_{32}k_{03}k_{10}\). In reality, enumerating the vast number of spanning trees that are rooted in each individual vertices requires an inconceivable amount of work, particularly as the graph size escalates. Fortunately, we can navigate this problem by leveraging the generalized matrix-tree theorem from the algebraic graph [49, 62]. Upon rewriting the master equation in the following form: \(\frac{d\mathbf{P}}{dt}=-\mathbf{M}\mathbf{P}\), where \(\mathbf{P}=\{P_{0},P_{1},P_{2},P_{3},P_{4},P_{5}\}\) is a column matrix and \[\mathbf{M}=\left[\begin{array}{ccccc}k_{10}+k_{30}&-k_{01}&0&-k_{03}&0&0\\ -k_{10}&k_{01}+k_{41}+k_{21}&-k_{12}&0&-k_{14}&0\\ 0&-k_{21}&k_{12}+k_{32}+k_{52}&-k_{23}&0&-k_{25}\\ -k_{30}&0&-k_{32}&k_{03}+k_{23}+k_{43}&-k_{34}&0\\ 0&-k_{41}&0&-k_{43}&k_{34}+k_{14}+k_{54}&-k_{45}\\ 0&0&-k_{52}&0&-k_{54}&k_{25}+k_{45}\end{array}\right], \tag{29}\] is the Laplacian (transition) matrix of the weighted graph, the matrix-tree theorem provides a powerful recipe for calculating the number of directed spanning trees rooted on a particular cycle. Specifically, the matrix-tree theorem asserts that an effective expression for \(\Lambda_{C}\) can be evaluated as the determinant of the principal minor of the Laplacian matrix \(\mathbf{M}\) of the basic graph, namely \(\det(\mathbf{M}[C;C])\). In other words, the determinant of the reduced matrix \(\mathbf{M}[C;C]\) obtained by removing the rows and columns indexed by \(\mathfrak{i}\in C\) of the original matrix \(\mathbf{M}\), is equal to the sum of the weights of directed spanning trees rooted on C, i.e., \(\Lambda_{C}=\det(\mathbf{M}[C;C])\). 
As an example, for the cycle \(C_{1}\), the reduced Laplacian matrix \(\mathbf{M}[C_{1};C_{1}]\) or \(\mathbf{M}[0,1,2,3;0,1,2,3]\) and its determinant are given by: \[\mathbf{M}[C_{1};C_{1}]=\left[\begin{array}{cc}k_{34}+k_{14}+k_{54}&-k_{45}\\ -k_{54}&k_{25}+k_{45}\end{array}\right], \tag{30}\] \[\det(\mathbf{M}[C_{1};C_{1}])=k_{25}(k_{34}+k_{14}+k_{54})+k_{45}(k_{34}+k_{14}). \tag{31}\] Likewise, in the case of \(\Lambda_{i}\), the determinant related to the principal minor of the \(\mathbf{M}\) matrix can be derived by excluding the relevant row and column that correspond to state \(i\). Consequently, \(\Lambda_{i}\) can be represented by the determinant of \(\mathbf{M}[i;i]\) and the steady-state population, \(\bar{P}_{i}\), can be precisely written as the ratio of the determinants [49, 62] \[\bar{P}_{i}=\frac{\Lambda_{i}}{\Lambda}\equiv\frac{\det(\mathbf{M}[i;i])}{\sum_{i}\det(\mathbf{M}[i;i])}, \tag{32}\] where we identify \(\Lambda\) as the sum of determinants of the principal minors of \(\mathbf{M}\), i.e., \(\sum_{i}\det(\mathbf{M}[i;i])\). Similarly, the one-directional circuit flux can directly be calculated as a product of two factors [49, 62] \[J_{C}^{\pm}=\Pi_{C}^{\pm}\frac{\det(\mathbf{M}[C;C])}{\sum_{i}\det(\mathbf{M}[i;i])}. \tag{33}\] The first coefficient captures the weight of the cycle in the particular direction; for instance, \(C_{1}(0\rightarrow 1\rightarrow 2\rightarrow 3\rightarrow 0)\) in Fig. 3(a) has a counterclockwise weight factor \(\Pi_{C_{1}}^{+}=k_{21}k_{32}k_{03}k_{10}\). The subsequent term involves a ratio of two determinants: the numerator \(\det(\mathbf{M}[C_{1}\{0,1,2,3\};C_{1}\{0,1,2,3\}])\) tallies with the weighted summation of all five spanning trees rooted on \(C_{1}\) (Fig. 3b), while the denominator \(\sum_{i=0}^{5}\det(\mathbf{M}[i;i])\) serves as a normalization constant via a common term, representing the total weight of the spanning trees rooted on each individual state. In summary, the graph-theoretic representation of the cycle flux can be intuitively understood as the flow of weighted edges on spanning trees directed towards a cycle, which is intricately linked to the frequencies at which the cycle trajectory occurs. As a result, the determinant \(\det(\mathbf{M}[C_{1};C_{1}])\) exemplified above can alternatively be computed by means of the determinant \(\det(\mathbf{N}[\mathbf{k};\mathbf{k}])\) of a "new graph" \(\mathbf{N}\) which is obtained by merging the set of vertices \(\{0,1,2,3\}\) into a new vertex "\(\mathbf{k}\)" within the initial graph \(\mathbf{M}\) [49], represented in Fig. 3(b) by the shaded region. Following the same procedure, we can evaluate the cycle fluxes of all the cycle trajectories of our basic graph and efficiently single out the top-ranked cycle fluxes. The generalized matrix-tree theorem thus provides valuable machinery for handling weighted graphs and their corresponding cycle fluxes. 
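A short numerical sketch of Eqs. (29)-(33) may be useful: it builds the Laplacian \(\mathbf{M}\) for the graph of Fig. 2 from an arbitrary set of positive rates (placeholders, not the physical rates above), obtains the steady state from determinants of principal minors, cross-checks it against the kernel of \(\mathbf{M}\), and evaluates the two circuit fluxes of cycle \(C_{1}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Undirected edges of the basic graph G of Fig. 2 (states 0..5).
edges = [(0, 1), (0, 3), (1, 2), (1, 4), (2, 3), (2, 5), (3, 4), (4, 5)]

# Placeholder positive rates (not the physical rates): k[i, j] is the rate for |j> -> |i>.
k = np.zeros((6, 6))
for i, j in edges:
    k[i, j] = rng.uniform(0.1, 1.0)
    k[j, i] = rng.uniform(0.1, 1.0)

# Laplacian of Eq. (29): dP/dt = -M P, with M[i, i] = sum_j k[j, i] and M[i, j] = -k[i, j].
M = np.diag(k.sum(axis=0)) - k

def minor_det(M, removed):
    """Determinant of M with the rows and columns in `removed` deleted (matrix-tree theorem)."""
    keep = [i for i in range(M.shape[0]) if i not in removed]
    return np.linalg.det(M[np.ix_(keep, keep)])

# Steady state via Kirchhoff's theorem, Eq. (32).
Lam = np.array([minor_det(M, {i}) for i in range(6)])
P_bar = Lam / Lam.sum()

# Cross-check against the kernel of M (the usual steady state of the master equation).
w, v = np.linalg.eig(M)
P_kernel = np.real(v[:, np.argmin(np.abs(w))])
P_kernel = P_kernel / P_kernel.sum()
print("Kirchhoff and kernel solutions agree:", np.allclose(P_bar, P_kernel))

# Circuit fluxes of cycle C1 = (0 -> 1 -> 2 -> 3 -> 0), Eq. (33).
Pi_plus = k[1, 0] * k[2, 1] * k[3, 2] * k[0, 3]    # counterclockwise weight k_10 k_21 k_32 k_03
Pi_minus = k[3, 0] * k[2, 3] * k[1, 2] * k[0, 1]   # clockwise weight
Lam_C1 = minor_det(M, {0, 1, 2, 3})                # det(M[C1; C1]), cf. Eq. (31)
J_C1 = (Pi_plus - Pi_minus) * Lam_C1 / Lam.sum()
print("net cycle flux J_C1 =", J_C1)
```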
Notably, the proficiency of the cycle flux ranking scheme improves with the size of the graph [49]. ## IV Results and discussion In what follows, we utilize the aforementioned ranking scheme to examine the non-equilibrium transport characteristics of PV solar cells. To unfold the working principle of the donor-acceptor molecular junctions, we begin by decomposing the PV cell network into subcycles or a complete set of paired cycle trajectories. In the present case, the basic graph [Fig. 2] consists of 5 subcycles or 10 paired one-directional circuits (cycle trajectories). Subsequently, the efficient cycle flux ranking scheme [Cf. Eq. (33)] is implemented to single out the top-ranked cycle fluxes and identify the major working cycles of the molecular solar cell. In Fig. 4, we have plotted all the circuit fluxes against the bias voltage \(U\) and notice that the cycle flux trajectories \(C_{1}^{+}\) (\(|1\rangle\rightarrow|2\rangle\rightarrow|3\rangle\rightarrow|0\rangle\rightarrow|1\rangle\)) or (\(|1010\rangle\rightarrow|0110\rangle\rightarrow|0011\rangle\rightarrow|0010\rangle\rightarrow|1010\rangle\)) and \(C_{4}^{+}\) (\(|1\rangle\rightarrow|2\rangle\rightarrow|3\rangle\rightarrow|4\rangle\rightarrow|1\rangle\)) or (\(|1010\rangle\rightarrow|0110\rangle\rightarrow|0011\rangle\rightarrow|1011\rangle\rightarrow|1010\rangle\)) are comparable in magnitude and can be classified as the first-ranked circuits. The second-ranked circuit is found to be \(C_{3}^{+}\) (\(|3\rangle\rightarrow|4\rangle\rightarrow|5\rangle\rightarrow|2\rangle\rightarrow|3\rangle\)) or (\(|0011\rangle\rightarrow|1011\rangle\rightarrow|0111\rangle\rightarrow|0110\rangle\rightarrow|0011\rangle\)). Both the first and second-ranked circuits are plotted in Fig. 4a, while \(C_{1}^{-}\) (\(|1\rangle\rightarrow|0\rangle\rightarrow|3\rangle\rightarrow|2\rangle\rightarrow|1\rangle\)) or (\(|1010\rangle\rightarrow|0010\rangle\rightarrow|0011\rangle\rightarrow|0110\rangle\rightarrow|1010\rangle\)), \(C_{3}^{-}\) (\(|3\rangle\rightarrow|2\rangle\rightarrow|5\rangle\rightarrow|4\rangle\rightarrow|3\rangle\)) or (\(|0011\rangle\rightarrow|0110\rangle\rightarrow|0111\rangle\rightarrow|1011\rangle\rightarrow|0011\rangle\)), and \(C_{4}^{-}\) (\(|1\rangle\rightarrow|4\rangle\rightarrow|3\rangle\rightarrow|2\rangle\rightarrow|1\rangle\)) or (\(|1010\rangle\rightarrow|1011\rangle\rightarrow|0011\rangle\rightarrow|0110\rangle\rightarrow|1010\rangle\)), are the lowest ranked cycle trajectories, shown in Fig. 4b. On the contrary, the third-ranked paired circuits \(C_{2}^{\pm}\) correspond to the sequence of states (\(|1\rangle\leftrightarrow|4\rangle\leftrightarrow|3\rangle\leftrightarrow|0\rangle\leftrightarrow|1\rangle\)) or (\(|1010\rangle\leftrightarrow|1011\rangle\leftrightarrow|0011\rangle\leftrightarrow|0010\rangle\leftrightarrow|1010\rangle\)), and the fourth-ranked paired circuits \(C_{5}^{\pm}\) correspond to the sequence of states (\(|1\rangle\leftrightarrow|2\rangle\leftrightarrow|5\rangle\leftrightarrow|4\rangle\leftrightarrow|1\rangle\)) or (\(|1010\rangle\leftrightarrow|0110\rangle\leftrightarrow|0111\rangle\leftrightarrow|1011\rangle\leftrightarrow|1010\rangle\)); they are plotted in Figs. 4c and 4d, respectively. From Fig. 4, we understand that, in spite of the fact that the circuit fluxes of \(C_{2}^{\pm}\) and \(C_{5}^{\pm}\) are only one and two orders of magnitude smaller than the first and second-ranked cycle trajectories, they carry an equal amount of flux current in both directions. 
Therefore, the cycle affinity [46] \(\mathcal{A}\) becomes zero upon the addition of two counter-pair cycle trajectories, where the cycle affinity is defined as \(\mathcal{A}=-\ln\mathcal{K}\), \(\mathcal{K}=\Pi_{C}^{+}/\Pi_{C}^{-}\) being the ratio of the forward and the backward rates for a specific cycle. Since the magnitude of the cycle flux is the difference between the circuit fluxes \(J_{C}=J_{C}^{+}-J_{C}^{-}\), it follows immediately that cycles \(C_{1}\), \(C_{3}\) and \(C_{4}\) possess nonzero flux current fulfilling \(\mathcal{K}\neq 1\), while the cycle fluxes associated with \(C_{2}\) and \(C_{5}\) are identically zero, satisfying \(\mathcal{K}=1\). As a result, we conclude that the \(C_{2}\) and \(C_{5}\) cycles do not contribute to the overall photocurrent, and the entire contribution to the electron current generated within the photocell solely comes from the first (\(C_{1}\) and \(C_{4}\)) and second-ranked (\(C_{3}\)) cycles. We have made a dual-axis plot to illustrate the total current (\(J(U)\)) and the corresponding power (\(P=UJ(U)\)), where \(P\) passes through a maximum in Fig. 5, as expected. Our analysis reveals that the total current is really the sum of the top three cycle fluxes, i.e., \(J=J_{C_{1}}+J_{C_{4}}+J_{C_{3}}\), regardless of the presence or the absence of the nonradiative loss processes due to \(\gamma_{\rm phn}^{\rm D}\). Figure 3: (a) First three top-ranked cycle trajectories of our basic graph \(\mathsf{G}\) shown in Fig. 2. (b) Spanning trees rooted on the top-ranked cycles \(C_{1}\), \(C_{3}\), and \(C_{4}\). To make a proper comparison, we have taken the parameter set used by Nitzan's group [43] and found excellent agreement with their results. Qualitatively, one can understand that among the five possible cycles, those cycles (\(C_{1}\), \(C_{3}\) and \(C_{4}\)) which involve the \(|2\rangle\leftrightarrow|3\rangle\) transition contribute a finite amount to the overall electron current, whereas sub-cycles \(C_{2}\) and \(C_{5}\), which do not possess this edge, have zero contribution to the total current. This can be attributed to the fact that among the various possible electron transfer channels, only those pathways are relevant that involve the electron transfer between donor and acceptor molecules. In graph-theoretical language, this corresponds to subcycles with nonzero edge flux along \(|2\rangle\leftrightarrow|3\rangle\). Furthermore, it is evident from Fig. 5 that compared to the first-ranked cycles, the second-ranked cycle \(C_{3}\) makes a negligible contribution to the overall electron current for the majority of the parameter range. Consequently, the total flux current \(J\) is simply twice that of the two individual first-ranked cycles. Intuitively, one expects the \(C_{1}\) electron transfer channel to be the one and only exclusive route for the electron transfer pathways in photovoltaic devices [45; 46]. Surprisingly, our analysis reveals the existence of an equally important \(C_{4}\) pathway that contributes to a similar extent to the overall photocurrent, along with the natural \(C_{1}\) electron transfer channel. These findings represent the first set of important results for our efficient ranking scheme, which goes beyond the conventional understanding of the electron transfer pathways in molecular photovoltaics. Secondly, it can be inferred from Fig. 5 that \(C_{1}\) and \(C_{4}\) operate as first-ranked cycles, while \(C_{3}\) acts as a second-ranked cycle for bias voltage \(U\) around \(\sim 1\) eV. 
After that, both the net current as well as the power experience a simultaneous drop. The explanation for this behavior can be acquired from the population plot shown in Fig. 6, as a function of \(U\). For bias voltages \(U\sim 1\) eV, \(|1010\rangle\) or \(|1\rangle\) is the maximally populated state, and both the highest-ranked cycles start with the initial state \(|1010\rangle\), represented by the curve \(P_{1}\) in Fig. 6. In contrast, the second-ranked cycle \(C_{3}\) starts with a less populated initial state \(|0011\rangle\), denoted by \(P_{3}\). Notably, the first two steps of the highest-ranked cycles are identical: each starts with the electron transfer between the levels D1 and D2 at the donor site, which involves radiative (photon-induced) and nonradiative (phonon-induced) processes, while the second step designates the electron transfer between donor and acceptor molecules. It is worth pointing out that the above two steps of the highest-ranked cycles represent their counter-clockwise (forward) cycle trajectories, namely, \(C_{1}^{+}\) and \(C_{4}^{+}\), respectively. This is reasonable since the backward (clockwise) cycle flux trajectories are negligible compared to their forward counterparts, and because of this it is the counter-clockwise or forward cycle trajectories that predominantly determine the underlying directions of the cycle flux of the individual subcycles. In particular, the dynamical steps of the cycle \(C_{1}^{+}\) are as follows: starting from the neutral state \(|1010\rangle\) (\(|1\rangle\)), the system transits sequentially into \(|0110\rangle\) (\(|2\rangle\)) by absorbing a photon (electron transfer between D1 and D2 at the donor site), followed by \(|3\rangle\) or \(|0011\rangle\) (via phonon relaxation), \(|0\rangle\) or \(|0010\rangle\) (one electron tunnels from A2 of the acceptor into the right electron reservoir), and finally returns to its initial state \(|1\rangle\) or \(|1010\rangle\) (where one electron tunnels from the left reservoir into D1 of the donor). Similarly, for cycle \(C_{4}^{+}\), the first two processes are common with the cycle \(C_{1}^{+}\) (i.e., \(|1010\rangle\rightarrow|0110\rangle\) (\(|1\rangle\rightarrow|2\rangle\)) and \(|0110\rangle\rightarrow|0011\rangle\) (\(|2\rangle\rightarrow|3\rangle\))). The system then changes its state from \(|0011\rangle\) to \(|1011\rangle\) or \(|3\rangle\rightarrow|4\rangle\), i.e., one electron tunnels from the left reservoir into D1 of the donor, and finally returns back to its initial state \(|1\rangle\) or \(|1010\rangle\) (where one electron tunnels from the A2 of the acceptor into the right reservoir). So, these are the two pertinent electron transfer pathways that play equivalent roles in a typical donor-acceptor molecular junction photocell. Figure 5: Dual axis plot of top-ranked cycle fluxes \(J_{C_{1}}\) (azure blue), \(J_{C_{3}}\) (green), \(J_{C_{4}}\) (black), and electron current \(J\) (blue) on the left vertical axis (in Ampere) as a function of voltage bias \(U\), and \(P\) on the right vertical axis (in Watt). The parameters are the same as mentioned in Fig. 4. The solid lines represent \(J(P)\) in the presence of the nonradiative loss process (\(\gamma_{\rm phn}^{\rm D}=10^{10}\) s\({}^{-1}\)) whereas the dash-dot lines represent \(J(P)\) in the absence of nonradiative losses (\(\gamma_{\rm phn}^{\rm D}=0\)). For all the numerical plots presented in Figs. 4-6, we have used the same set of parameters as Nitzan et al. 
[43], which are also reasonable from the experimental perspective as reported in Ref. [70; 71; 72]. To understand the significant drop in the total electron current and power, after a certain bias voltage in Fig. 5, we need to examine the steady-state populations as shown in Fig. 6. We observe that as long as the chemical potential of the right reservoir, \(\mu_{\rm R}\), reaches the energy level of \(\varepsilon_{\rm A2}\), the trend of the populations obeys \(P_{1}>>P_{0}>P_{3}\simeq P_{4}>P_{2}\simeq P_{5}\). However, once \(\mu_{\rm R}\) exceeds the energy of the acceptor level A2, the probability of finding the system in states \(|1010\rangle\) (\(P_{1}\)), \(|0010\rangle\) (\(P_{0}\)), and \(|0110\rangle\) (\(P_{2}\)) decreases sharply. Meanwhile, the probability of finding the system in states \(|1011\rangle\) (\(P_{4}\)) and \(|0111\rangle\) (\(P_{3}\)) increases rapidly. This indicates that electron tunneling from the excited state of the acceptor into the right reservoir becomes less favorable. As a result, the circuit fluxes \(J_{C_{1}}^{+}\) and \(J_{C_{4}}^{+}\) remain constant until \(\mu_{\rm R}\) reaches the energy level of \(\varepsilon_{\rm A2}\), and then decrease steadily after surpassing this energy level. We can also analyze the second-ranked circuit flux, \(J_{C_{3}}^{+}\), which remains constant until \(\mu_{\rm R}\) reaches the energy level \(\varepsilon_{\rm A2}\), and becomes maximum precisely at the midpoint of \(\varepsilon_{\rm D2}\) and \(\varepsilon_{\rm A2}\), i.e., \(1.15\) eV. By carefully analyzing Fig. 5, we can see that we have varied the bias voltages by changing \(\mu_{\rm R}\) while keeping all other parameters fixed. Now, Fig.2 shows that \(\mu_{\rm R}\) governs transitions between pairs of states where the acceptor's excited state is occupied in at least one of them. For the counter-clock \(C_{1}\) cycle, the second last edge (\(|3\rangle\leftrightarrow|0\rangle\)) is controlled by \(\mu_{\rm R}\), whereas, for \(C_{4}\), it is the last edge (\(|4\rangle\leftrightarrow|1\rangle\)) that is controlled by the bias voltage. Thus, we get a single drop in the current vs. voltage diagram as \(\mu_{\rm R}\) crosses the value of \(\varepsilon_{\rm A2}\). In the case of the \(C_{3}\) cycle, the last two edges (\((|4\rangle\leftrightarrow|5\rangle)\)) and \((|5\rangle\leftrightarrow|2\rangle)\)) are controlled by \(\mu_{\rm R}\). Since \(\mu_{\rm R}\) controls both transitions, the cycle current will be maximum when both transitions are favored optimally [35], i.e., at the average value of both energy levels, \(\varepsilon_{\rm D2}\) and \(\varepsilon_{\rm A2}\). For the present choice of parameters, this turns out to be precisely \(1.15\) eV, as obtained in Fig. 5. From the above analysis, it becomes clear that the donor-acceptor energy gap plays a crucial role in determining the performance of the photocell. To this end, in Figs. 7 and 8, we consider for our photocell, a real practical quantity of interest, i.e., the efficiency at the maximum power as a function of the donor-acceptor energy gap \(\Delta\varepsilon=\varepsilon_{\rm D2}-\varepsilon_{\rm A2}\). 
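Before turning to the precise definition of the efficiency, a rough numerical sketch of how such current-voltage characteristics are produced may be helpful. It chains the ingredients introduced so far: rates (Eqs. (13)-(20)) → steady state (Eqs. (7)-(12)) → currents (Eqs. (21), (23)) → power and efficiency at maximum power (Eq. (34) below). All parameter values, and the convention that the bias enters only through \(\mu_{\rm R}\), are assumptions of the sketch and do not reproduce the parameter set of Ref. [43].

```python
import numpy as np

kB = 8.617e-5
T, T_S = 300.0, 6000.0                       # device and photon-bath temperatures (K), assumed
eD1, eD2, eA2, U_A = 0.0, 1.4, 1.2, 0.1      # level energies and charging energy (eV), assumed
gL = gR = 1e12                               # lead couplings (1/s), assumed
g_pht, g_phn_D, g_phn_DA = 1e9, 0.0, 1e12    # set g_phn_D > 0 to switch on nonradiative losses

f = lambda e, mu: 1.0 / (np.exp((e - mu) / (kB * T)) + 1.0)
n = lambda e, Tb: 1.0 / (np.exp(e / (kB * Tb)) - 1.0)

def steady_currents(U, mu_L=0.0):
    """Return (J_R, J_S) at bias U, applying the bias through mu_R only (assumption)."""
    mu_R = mu_L + U
    k = np.zeros((6, 6))                                   # k[i, j]: rate for |j> -> |i>
    k[1, 0] = k[4, 3] = gL * f(eD1, mu_L)
    k[0, 1] = k[3, 4] = gL * (1 - f(eD1, mu_L))
    k[3, 0] = k[4, 1] = k[5, 2] = gR * f(eA2, mu_R)
    k[0, 3] = k[1, 4] = k[2, 5] = gR * (1 - f(eA2, mu_R))
    k_r, k_nr = g_pht * n(eD2 - eD1, T_S), g_phn_D * n(eD2 - eD1, T)
    k_r_b, k_nr_b = g_pht * (n(eD2 - eD1, T_S) + 1), g_phn_D * (n(eD2 - eD1, T) + 1)
    k[2, 1] = k[5, 4] = k_r + k_nr
    k[1, 2] = k[4, 5] = k_r_b + k_nr_b
    k[2, 3] = g_phn_DA * n(eD2 - eA2 - U_A, T)
    k[3, 2] = g_phn_DA * (n(eD2 - eA2 - U_A, T) + 1)
    W = k - np.diag(k.sum(axis=0))                         # dP/dt = W P
    A = np.vstack([W / np.abs(W).max(), np.ones(6)])       # append normalization sum_i P_i = 1
    b = np.zeros(7); b[-1] = 1.0
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    J_R = (k[3, 0]*P[0] - k[0, 3]*P[3]) + (k[4, 1]*P[1] - k[1, 4]*P[4]) + (k[5, 2]*P[2] - k[2, 5]*P[5])
    J_S = k_r * (P[1] + P[4]) - k_r_b * (P[2] + P[5])      # photon-induced current, Eq. (23)
    return J_R, J_S

U = np.linspace(0.05, 1.6, 160)
J = np.array([steady_currents(u)[0] for u in U])           # particle current (1/s)
P_out = U * J                                              # delivered power per unit charge
i_max = int(np.argmax(P_out))
eta_star = P_out[i_max] / ((eD2 - eD1) * steady_currents(U[i_max])[1])   # cf. Eq. (34)
print(f"U_max ~ {U[i_max]:.2f} V, efficiency at maximum power ~ {eta_star:.2f}")
```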
To be precise, we define the thermodynamic efficiency [68; 69; 43] at maximum power as follows: \[\eta^{*} =\frac{P_{max}}{\dot{Q}_{\rm S}}=\frac{U_{max}J(U_{max})}{\dot{Q} _{\rm S}}, \tag{34}\] \[\approx\frac{U_{max}J_{C_{1}}(U_{max})}{\dot{Q}_{\rm S}}+\frac{U_{ max}J_{C_{4}}(U_{max})}{\dot{Q}_{\rm S}}, \tag{35}\] where \(U_{max}\) denotes the bias voltage at which power passes through a maximum, i.e., at \(U=U_{max}\), power becomes \(P_{max}=U_{max}J(U_{max})\). In Eq. (34), the denominator \(Q_{\mathrm{S}}=\Delta EJ_{\mathrm{S}}(U_{max})\), represents the total energy absorbed per unit time from the radiation field, with \(\Delta E\) being the energy gap between the donor excited and ground levels. We rewrite the expression of efficiency at maximum power in terms of the two first-ranked cycle fluxes in Eq. (35), where each term represents the contribution of its respective cycle to the overall efficiency. From Figs. 7 and 8, we can conclude that the cycles \(C_{1}\) and \(C_{4}\) are the primary electron transfer channels across the entire parameter space, regardless of whether radiative process or non-radiative losses are at play. Interestingly, the plots exhibit a distinctive peak in the behavior of \(\eta^{*}\), in the presence of non-radiative loss processes. In such situations, \(\eta^{*}\) attains a maximum value as a function of \(\Delta\varepsilon\), where the energy gap \(\Delta\varepsilon\) regulates the electron transfer process between donor and acceptor [Fig. 8]. Moreover, it is worth noting that although \(\eta^{*}\) is bounded by the Carnot (\(\eta_{C}=1-T/T_{\mathrm{S}}\)), it can exceed the standard Curzon-Ahlborn bound [73] (\(\eta_{CA}=1-\sqrt{T/T_{\mathrm{S}}}\)) in the absence of any nonradiative losses [Fig. 7]. In other words, the strength of nonradiative loss processes is found to have a deleterious effect on the overall performance of the solar cell. It may even reduce the efficiency at maximum power at a value well below the \(\eta_{CA}\) as shown in Fig. 8. Hence, in order to improve the device performance to its ultimate level, it becomes crucial to diligently mitigate all forms of nonradiative losses to a considerable extent. ## V Conclusion We introduce the top-ranked cycle flux ranking scheme of network analysis as a tool to elucidate the complex working principles of molecular junction solar cells. Our approach takes advantage of the mapping between the dissipative Lindblad master equation for molecular systems and the quantum-transition network that characterizes the nonequilibrium transport behavior of molecular photocells. We now summarize our key insights as follows: (i) We have provided a microscopic Hamiltonian description of the phenomenological rate equations that are commonly used to characterize molecular junction solar cells. Based on a minimal model Hamiltonian, _classical_ looking rate equations are derived from detailed quantum Lindblad master equations. The resulting rate equations, though appear "classical", the underlying transition rates are shown to be quantum mechanical in nature. (ii) With the implementation of the effective ranking scheme, we have predicted the existence of a counterintuitive electron transfer pathway, which provides valuable insights into the detailed working principles of molecular photocells. 
Through rigorous analysis, we have clearly justified that cycles that contribute to the overall photocurrent must involve nonzero edge flux between donor and acceptor molecules embedded in a complex graph of quantum transition networks. We provide explanations for the drop in the current and power when measured against the bias voltage, as well as the various cycle fluxes. Our result advances the conventional understanding of nonequilibrium electron transfer pathways in donor-acceptor molecular junction solar cells which might be a significant step toward making efficient photovoltaic devices in the near future. (iii) Finally, we obtain a crucial insight into the efficiency of photocells, revealing that their maximum power efficiency in the absence of non-radiative losses, can exceed the conventional Curzon-Ahlborn bound, yet abide by the Carnot limit. Nonetheless, the incorporation of a non-radiative recombination process at the donor site has been found to have a detrimental effect on the photocell's performance, reducing its efficiency at maximum power, below the Curzon-Ahlborn limit. These findings underscore the intricacies of various factors that govern the overall performance of molecular photovoltaics. While in our present work, top-ranked cycle fluxes capture the essential physics of molecular junction solar cells, future research can take into account other additional effects, such as environment-assisted electron transfer and recombination rates [41], Marcus homogeneous and heterogeneous electron transfer rates [44], from a similar perspective. Thus the current approach offers important insights for further research in the field of photovoltaics. ## Acknowledgement AG acknowledges financial support from the Initiation grant of IITK (Grant No. IITK/CHM/2018513). N.G. is thankful to CSIR for the fellowship, and it is supported in part by the International Centre for Theoretical Sciences (ICTS) during a visit for participating in the online program - Bangalore School on Statistical Physics XII (code: ICTS/bssp2021/6). S.G. is grateful to the Ministry of Education, Government of India, for the Prime Minister Research Fellowship (PMRF).
2301.05273
Improving the speed of variational quantum algorithms for quantum error correction
We consider the problem of devising a suitable Quantum Error Correction (QEC) procedures for a generic quantum noise acting on a quantum circuit. In general, there is no analytic universal procedure to obtain the encoding and correction unitary gates, and the problem is even harder if the noise is unknown and has to be reconstructed. The existing procedures rely on Variational Quantum Algorithms (VQAs) and are very difficult to train since the size of the gradient of the cost function decays exponentially with the number of qubits. We address this problem using a cost function based on the Quantum Wasserstein distance of order 1 ($QW_1$). At variance with other quantum distances typically adopted in quantum information processing, $QW_1$ lacks the unitary invariance property which makes it a suitable tool to avoid to get trapped in local minima. Focusing on a simple noise model for which an exact QEC solution is known and can be used as a theoretical benchmark, we run a series of numerical tests that show how, guiding the VQA search through the $QW_1$, can indeed significantly increase both the probability of a successful training and the fidelity of the recovered state, with respect to the results one obtains when using conventional approaches.
Fabio Zoratti, Giacomo De Palma, Bobak Kiani, Quynh T. Nguyen, Milad Marvian, Seth Lloyd, Vittorio Giovannetti
2023-01-12T19:44:53Z
http://arxiv.org/abs/2301.05273v3
# Improving the speed of variational quantum algorithms for quantum error correction ###### Abstract We consider the problem of devising suitable Quantum Error Correction (QEC) procedures for a generic quantum noise acting on a quantum circuit. In general, there is no analytic universal procedure to obtain the encoding and correction unitary gates, and the problem is even harder if the noise is unknown and has to be reconstructed. The existing procedures rely on Variational Quantum Algorithms (VQAs) and are very difficult to train since the size of the gradient of the cost function decays exponentially with the number of qubits. We address this problem using a cost function based on the Quantum Wasserstein distance of order 1 (\(QW_{1}\)). At variance with other quantum distances typically adopted in quantum information processing, \(QW_{1}\) lacks the unitary invariance property, which makes it a suitable tool to avoid getting trapped in local minima. Focusing on a simple noise model for which an exact QEC solution is known and can be used as a theoretical benchmark, we run a series of numerical tests that show how guiding the VQA search through the \(QW_{1}\) can indeed significantly increase both the probability of a successful training and the fidelity of the recovered state, with respect to the results one obtains when using conventional approaches. ## I Introduction Performing reliable computations on physically imperfect hardware is something that has become usual nowadays, given the current state of classical computers, which can produce correct results without any software-side mitigation of the imperfections of the physical media where the computation happens. Error correction relies on the fact that these machines automatically perform, on the hardware side, procedures that allow errors to occur and be fixed without any intervention from the end user. This kind of setting is even more crucial in a quantum scenario, where current noisy intermediate-scale quantum (NISQ) computers have a much larger error rate than their classical counterparts [1]. Performing reliable computations with a trustworthy error correction procedure has direct implications not only in quantum computation [2; 3], but potentially also in all the other sectors of quantum technology which indirectly rely on it (e.g., quantum communication or quantum key distribution [4; 5; 6]). In the typical Quantum Error Correction (QEC) scheme, the quantum information that has to be protected is stored in a subspace of a larger Hilbert space, using an _encoding_ procedure. Stabilizer codes [7], which are among the best analytical results in this field, are not universal because they are tailored for a generic noise acting on a small but unknown subset of qubits. Several attempts have already been made to create numerical optimization procedures that find an error correction code for specific noise models [8; 9; 10; 11], but these studies are not universal because they rely heavily on the type of noise acting on the specific quantum circuit; this is a problem because real quantum devices are not characterized by a single kind of quantum noise. Some attempts have been made to characterize the noise of current and near-term devices [12; 13], but these methods will soon become very difficult to apply because classical computers are not able to simulate quantum circuits efficiently when the number of qubits increases. Near-term devices with approximately 50 qubits may already be intractable to simulate for supercomputers [14].
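As a point of reference for the encoding/recovery structure described above (and not the variational \(QW_{1}\)-based procedure studied in this paper), the following self-contained sketch implements the textbook three-qubit repetition code against single-qubit bit-flip noise in plain NumPy. The logical amplitudes and the flip probability are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(alpha, beta):
    """Logical state alpha|000> + beta|111> of the 3-qubit repetition code."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000], psi[0b111] = alpha, beta
    return psi

def bit_flip(psi, q):
    """Apply Pauli X on qubit q (q = 0 is the leftmost bit of the basis index)."""
    out = np.empty_like(psi)
    for idx in range(8):
        out[idx ^ (1 << (2 - q))] = psi[idx]
    return out

def correct(psi):
    """Read the bit-flip syndrome (parities of qubits 0,1 and 1,2) and flip the implicated qubit."""
    # The noisy state is always a definite X-error pattern applied to the code state, so every
    # computational-basis component carries the same syndrome; read it off the largest one.
    idx = int(np.argmax(np.abs(psi)))
    b = [(idx >> 2) & 1, (idx >> 1) & 1, idx & 1]
    s1, s2 = b[0] ^ b[1], b[1] ^ b[2]
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    return psi if flip is None else bit_flip(psi, flip)

alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3)   # arbitrary logical amplitudes
ideal = encode(alpha, beta)
p = 0.05                                       # single-qubit bit-flip probability (arbitrary)

fids = []
for _ in range(20000):
    psi = ideal.copy()
    for q in range(3):
        if rng.random() < p:                   # independent bit-flip noise on each qubit
            psi = bit_flip(psi, q)
    psi = correct(psi)
    fids.append(abs(np.vdot(ideal, psi)) ** 2)

# The fraction of perfectly recovered trials should match the probability of at most one
# flip, 1 - 3p^2 + 2p^3, up to Monte-Carlo error.
print(f"fraction of perfectly recovered trials : {np.mean(np.isclose(fids, 1.0)):.4f}")
print(f"probability of at most one flip        : {1 - 3*p**2 + 2*p**3:.4f}")
print(f"mean fidelity after correction         : {np.mean(fids):.4f}")
```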
2305.14689
Least Squares Regression Can Exhibit Under-Parameterized Double Descent
The relationship between the number of training data points, the number of parameters, and the generalization capabilities has been widely studied. Previous work has shown that double descent can occur in the over-parameterized regime, and believe that the standard bias-variance trade-off holds in the under-parameterized regime. These works provide multiple reasons for the existence of the peak. We postulate that the location of the peak depends on the technical properties of both the spectrum as well as the eigenvectors of the sample covariance. We present two simple examples that provably exhibit double descent in the under-parameterized regime and do not seem to occur for reasons provided in prior work.
Xinyue Li, Rishi Sonthalia
2023-05-24T03:52:48Z
http://arxiv.org/abs/2305.14689v2
# Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line ###### Abstract The relationship between the number of training data points, the number of parameters in a statistical model, and the generalization capabilities of the model has been widely studied. Previous work has shown that double descent can occur in the over-parameterized regime, and believe that the standard bias-variance trade-off holds in the under-parameterized regime. In this paper, we present a simple example that provably exhibits double descent in the under-parameterized regime. For simplicity, we look at the ridge regularized least squares denoising problem with data on a line embedded in high-dimension space. By deriving an asymptotically accurate formula for the generalization error, we observe sample-wise and parameter-wise double descent with the peak in the under-parameterized regime rather than at the interpolation point or in the over-parameterized regime. Further, the peak of the sample-wise double descent curve corresponds to a peak in the curve for the norm of the estimator, and adjusting \(\mu\), the strength of the ridge regularization, shifts the location of the peak. We observe that parameter-wise double descent occurs for this model for small \(\mu\). For larger values of \(\mu\), we observe that the curve for the norm of the estimator has a peak but that this no longer translates to a peak in the generalization error. Moreover, we study the training error for this problem. The considered problem setup allows for studying the interaction between two regularizers. We provide empirical evidence that the model implicitly favors using the ridge regularizer over the input data noise regularizer. Thus, we show that even though both regularizers regularize the same quantity, i.e., the norm of the estimator, they are not equivalent. ## 1 Introduction This paper aims to demonstrate interesting new phenomena that suggest that our understanding of the relationship between the number of data points, the number of parameters, and the generalization error is incomplete, even for simple linear models with data on a line. The classical bias-variance theory postulates that the generalization risk versus the number of parameters for a fixed number of training data points is U-shaped (Figure 0(a)). However, modern machine learning showed that if we keep increasing the number of parameters, the generalization error eventually starts decreasing again [1, 2] (Figure 0(b)). This second descent has been termed as _double descent_ and occurs in the _over-parameterized regime_, that is when the number of parameters exceeds the number of data points. Understanding the location and the cause of such peaks in the generalization error is of significant importance. Hence many recent works have theoretically studied the generalization error for linear regression [3, 4, 5, 6, 7, 8, 9, 10, 11, 12] and kernelized regression [13, 14, 15, 16, 17, 18, 19, 20, 21] and show that there exists a peak at the boundary between the under and over-parameterized regimes. Further works such as [22, 23, 24, 10, 25] show that there can be multiple descents in the over-parameterized regime and [26] shows that any shaped generalization error curve can occur in the over-parameterized regime. However, all prior works assume that the classical bias-variance trade-off is true in the under-parameterized regime. 
The implicit bias of the learning algorithm is a possible reason that the error decreases in the over-parameterized regime [28, 29, 30, 31, 32, 33]. In the under-parameterized regime, there is exactly one solution that minimizes the loss. However, once in the over-parameterized regime, there are many different solutions, and the training algorithm implicitly picks one that generalizes well. For linear models, the generalization error and the variance are very closely related to the norm of the estimator [11, 34]. Then, using the well-known fact that the pseudo-inverse solution to the least squares problem is the minimum norm solution, we see that the training algorithm picks solutions with the minimum norm. Hence this learning algorithm minimizes the variance and lowers the generalization error. Double descent can be mitigated by optimally tuning the model's regularization [24, 34, 35]. There are two different phenomena in the literature. First, the optimal amount of regularization is independent of the number of training data points [24]. Second, the optimal regularization follows a double descent curve with respect to the number of training data points, with the peak occurring at the same spot as the peak in the generalization error curve [34, 36]. Further, increasing the amount of regularization from zero to the optimal amount of regularization results in the magnitude of the peak in the generalization getting smaller until a peak no longer exists. However, the location of the peak does not change by changing the amount of regularization. In contrast with prior work, this paper shows that double descent can occur in the under-parameterized regime. Specifically, when denoising data on a line embedded in high-dimensional space using a denoiser obtained as the pseudo-inverse solution for the ridge regularized least squares problem, we show that a peak in the generalization error curve occurs in the under-parameterized regime. We also show that changing the ridge regularization strength changes the location of the peak. To further understand the location of the peak, we derive the exact asymptotics for the training error and explore the connections between the norm of the estimator and the peak of the generalization error curve. Moreover, since this model has two regularizers, we study the trade-off between them and notice that they are not equivalent. Main Contributions.The major contributions of this paper are as follows.4 Footnote 4: All code is available anonymized at [Github Repo] * (**Generalization error)** We derive a theoretical formula for the generalization error (Theorem 1). * (**Under-parameterized double descent**) We prove (Theorem 2) and empirically demonstrate that the generalization error versus the number of data points curve has double descent in the under-parameterized regime. For small values of the ridge regularization coefficient \(\mu\), we empirically Figure 1: Classical bias-variance trade-off and modern double descent. show that the generalization error versus the number of parameters curve exhibits double descent in the under-parameterized regime. * **(Location of the peak)** The peak location depends on the regularization strength. We provide evidence (Theorem 3) that the peak is near \(c=\frac{1}{\mu^{2}+1}\) for the sample-wise double descent curves. * **(Norm of the estimator)** We show that the peak in the curve for the generalization error versus the number of training data points corresponds to a peak in the norm of the estimator. 
However, versus the number of parameters, we show that there is still a peak in the curve for the norm of the estimator (Theorem 4), but this no longer corresponds to a peak in the generalization error. * **(Training Error)** We derive an asymptotically exact formula for the training error in the under-parameterized regime (Theorem 5). We notice a weak correlation between the location of the double descent peak and a local minimum of the third derivative of the training error. * **(Regularization Trade-off)** We explore the trade-offs between the two regularizers and the generalization error. The model's implicit bias leads to the best generalization error occurring with high data quality (low noise regularization) and high ridge regularization. Thus, showing that the two regularizers have very different roles in relation to the generalization error (Section 4). Low-Dimensional Data.It is important to highlight that using low-rank data does not immediately imply that a peak occurs in the under-parameterized regime. Specifically, [37] looks at the problem of Principal Component Regression (PCR), which is a procedure that projects data onto the space spanned by the first \(k\) principal components. Note that the data lives in a low-dimensional space embedded in a higher dimensional space. They show that projecting onto low-dimensional versions of the data acts as a regularizer and removes the peak altogether. [37] also study the problem with data poisoning, i.e., adding one outlier to the training data, and show that things can be arbitrarily bad in the over-parameterized regime. [38], also looks at a similar problem, but they consider isotropic Gaussian data and project onto the first \(k\) components. In this case, the data is artificially high-dimensional (since only the first \(k\) coordinates are non-zero). They again see a peak at the interpolation point (\(N_{trn}=k\)). [39] also looks at a version of PCR in which the data dimension is reduced. That is, the data is not embedded in high-dimensional space anymore. [39] sees a peak at the boundary between the under and over-parameterized regions. Finally, [34, 40] look at the denoising problem for low-dimensional data and have peaks at \(c=1\). Hence, low-dimensional data does not immediately imply a peak in the under-parameterized regime. Table 1 compares common assumptions and the location of the peak. Other Related Works.Much work has been done on understanding the implicit bias for gradient-based methods and linear models [41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52]. Significant work has also been done to understand the role of noise as a regularizer and its impact on generalization [34, 53, 54, 55, 56, 57, 58]. Other related works include the use of noise to learn features [59, 60, 61, 62], to improve robustness [63, 64, 65], to prevent adversarial attacks [66, 67], and the connection to training dynamics [68]. There has also been work to understand Bayes optimal denoiser using matrix factorization [69, 70, 71, 72, 73]. Finally, works such as [24, 34, 35, 39, 74] have looked at theoretically determining the optimal regularization strength. Structure and Limitations.The rest of this paper is organized as follows. Section 2 provides basic definitions and modeling assumptions. One of the major differences between prior work and this work is the shifting of the noise from the output variable to the input variable. Section 2.1 details the equivalent model with output noise. 
The model in Section 2.1 serves as a baseline for previously seen phenomena and will be used to contrast the phenomena seen here. Section 3 provides the theoretical results that show that the peak occurs in the under-parameterized regime. The section also explores why the peak occurs where it does and empirically verifies the results. Finally, Section 4 explores the trade-off between the two regularizers. We highlight some limitations of our work. Our analysis is very much for the ridge regularized least squares linear denoising problem, and significant work would be required to adapt any of the results for neural networks or other linear methods with real data. \begin{table} \begin{tabular}{c c c c c} \hline \hline Noise & Ridge Reg. & Dimension & Peak Location & Reference \\ \hline Input & Yes & 1 & Under-parameterized & This paper. \\ Input & No & Low & Over-parameterized/interpolation point & [34, 40] \\ Output & No & Full & Over-parameterized/interpolation point & [5, 8, 11] \\ Output & Yes & Full & Over-parameterized/interpolation point & [11, 24] \\ Output & No & Low & Over-parameterized/interpolation point & [38] \\ Output & Yes & Low & Over-parameterized/interpolation point & [39] \\ Output & No & Low & No peak5 & [37] \\ \hline \hline \end{tabular} \end{table} Table 1: Table showing various assumptions on the data and the location of the double descent peak for linear regression and denoising. We only present a subset of references for each problem setting. ## 2 Background and Model Assumptions We provide some crucial definitions needed for this paper. Throughout the paper, we assume that noiseless training data \(x_{i}\) live in \(\mathbb{R}^{d}\) and that we have access to a \(d\times N_{trn}\) matrix \(X_{trn}\) of training data. We then solve a least squares problem with linear models, such as linear regression or denoising. Then given new data \(X_{tst}\in\mathbb{R}^{d\times N_{tst}}\), we are interested in the least squares generalization error. Two scenarios for the generalization error curve are considered: data scaling and parameter scaling. **Definition 1**.: \({}^{\bullet}\) Data scaling _refers to the regime in which we fix the dimension \(d\) of the input data and vary the number of training data points \(N_{trn}\). This is also known as the sample-wise regime._ \({}^{\bullet}\) Parameter scaling _refers to the regime in which we fix the number of training data points \(N_{trn}\) and vary the dimension \(d\) of the input data. This is also known as the parameter-wise regime._ **Definition 2**.: \({}^{\bullet}\) _A linear model is under-parameterized if \(d<N_{trn}\)._ \({}^{\bullet}\) _A linear model is over-parameterized if \(d>N_{trn}\)._ \({}^{\bullet}\) _The boundary of the under and over-parameterized regimes is when \(d=N_{trn}\)._ \({}^{\bullet}\) _Given \(N_{trn}\), the interpolation point is the smallest \(d\) for which the model has zero training error._ \({}^{\bullet}\) _A curve has double descent if the curve has a local maximum or peak._ \({}^{\bullet}\) _The aspect ratio of an \(m\times n\) matrix is \(c:=m/n\)._ Note that double descent originally referred to curves that initially decreased, then increased, and then decreased again. However, this is not the case in subsequent literature, as for many linear models the first descent is not seen. Instead, an initial ascent followed by a descent is observed. 
Since the presence of a peak and the subsequent descent are the crucial aspects of interest, we shall refer to a curve with a local maximum as exhibiting double descent. ### Prior Double Descent This section presents a _baseline model from prior work on double descent_. This is to highlight _prior_ important phenomena related to double descent in the literature. Concretely, consider the following simple linear model that is a special case of the general models studied in [5; 8; 11; 24] amongst many other works. Let \(x_{i}\sim\mathcal{N}(0,I_{d})\) and let \(\beta\in\mathbb{R}^{d}\) be a linear model with \(\|\beta\|=1\). Let \(y_{i}=\beta^{T}x_{i}+\xi_{i}\) where \(\xi\sim\mathcal{N}(0,1)\). Then, let \[\beta_{opt}:=\operatorname*{arg\,min}_{\tilde{\beta}}\|\beta^{T}X_{trn}- \tilde{\beta}X_{trn}+\xi_{trn}\|\] where \(\xi_{trn}\in\mathbb{R}^{N_{trn}\times 1}\). Then the excess risk, when taking the expectation over the new test data point, can be expressed as \[\mathcal{R}=\|\beta-\beta_{opt}\|^{2}=\|\beta\|^{2}+\|\beta_{opt}\|^{2}-2\beta ^{T}\beta_{opt}.\] Let \(c\) be the aspect ratio of the data matrix. That is, \(c=d/N_{trn}\). Then it can be shown that6 Footnote 6: The proofs are in Appendix A.1. \[\mathbb{E}_{X_{trn},\xi_{trn}}[\|\beta_{opt}\|^{2}]=\begin{cases}1+\frac{c}{1 -c}&c<1\\ \frac{1}{c}+\frac{c-1}{c-1}&c>1\end{cases}\quad\text{ and }\quad\mathbb{E}_{X_{trn},\xi_{trn}}[ \beta^{T}\beta_{opt}]=\begin{cases}1&c<1\\ \frac{1}{c}&c>1\end{cases}\] Then, the excess risk can be expressed as \(\mathcal{R}=\begin{cases}\frac{c}{1-c}&c<1\\ \frac{c-1}{c}+\frac{1}{c-1}&c>1\end{cases}.\) There are a few important features that are considered staple in many prior double descent curves for linear models that are present in this model. 1. The peak happens at \(c=1\), on the border between the under and over-parameterized regimes. 2. Further, at \(c=1\) the training error equals zero. Hence this is the interpolation point. 3. The peak occurs due to the expected norm of the estimator \(\beta_{opt}\) blowing up near the interpolation point. _Further, [26] proved that double descent cannot take place in the under-parameterized regime for the above model._ For the ridge regularized version of the regression problem, as shown in [11; 24], _the peak is always at \(c=1\)_ (see Figure 1 in [24]). Further, as seen in Figure 1 in [24], changing the strength of the regularization changes the magnitude of the peak. _Not the location of the peak._ Building on this, [23] looks at the model where \(y_{i}=f(x_{i})+\xi_{i}\) and shows that triple descent occurs for the random features model [75] in the _over-parameterized_ regime. Specifically, as before, they show that the first peak is due to the norm of the estimator peaking, and that the second peak is due to the initialization of the random features. Their results, Figure 3 in [23], show that the peaks only occur if the model is over-parameterized. Further [26] shows that by considering a variety of product data distributions, any shaped risk curve can be observed in the _over-parameterized_ regime. ### Assumptions for Denoising Model With the context from the previous section in mind, we are now ready to present the assumptions for the input noise model with double descent in the under-parameterized regime. Here we build on [34], in which they switch focus from looking at noise on the outputs to noise on the inputs. 
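Before moving to the denoising assumptions, the closed-form excess risk of the Section 2.1 baseline is easy to check numerically. The NumPy sketch below draws the data exactly as described above, computes the minimum-norm least squares estimator with a pseudo-inverse, and compares the Monte-Carlo excess risk with \(c/(1-c)\) for \(c<1\) and \((c-1)/c+1/(c-1)\) for \(c>1\); agreement is approximate and improves as \(d\) and \(N_{trn}\) grow. The sizes and number of trials are arbitrary choices made for speed.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_risk_mc(d, n_trn, trials=200):
    """Monte-Carlo excess risk ||beta - beta_opt||^2 for min-norm least squares regression."""
    risks = []
    for _ in range(trials):
        beta = rng.standard_normal(d); beta /= np.linalg.norm(beta)   # ||beta|| = 1
        X = rng.standard_normal((d, n_trn))                            # columns are training points
        y = beta @ X + rng.standard_normal(n_trn)                      # noisy labels, xi ~ N(0, 1)
        beta_opt = y @ np.linalg.pinv(X)                               # min-norm least squares solution
        risks.append(np.sum((beta - beta_opt) ** 2))
    return np.mean(risks)

def excess_risk_theory(c):
    return c / (1 - c) if c < 1 else (c - 1) / c + 1 / (c - 1)

n_trn = 200
for d in (40, 100, 160, 240, 400, 800):
    c = d / n_trn
    print(f"c = {c:4.1f}   MC risk = {excess_risk_mc(d, n_trn):7.3f}   "
          f"theory = {excess_risk_theory(c):7.3f}")
```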
For the denoising problem, let \(A_{trn}\in\mathbb{R}^{d\times N_{trn}}\) be the noise matrix, then the ridge regularized least square denoiser \(W_{opt}\) is the minimum norm solution to \[W_{opt}:=\operatorname*{arg\,min}_{W}\|X_{trn}-W(X_{trn}+A_{trn})\|_{F}^{2}+ \mu^{2}\|W\|_{F}^{2}. \tag{1}\] Given test data \(X_{tst}\), the mean squared generalization error is given by \[\mathcal{R}(W_{opt})=\mathbb{E}_{A_{trn},A_{tst}}\left[\frac{1}{N_{tst}}\|X_{ tst}-W_{opt}(X_{tst}+A_{tst})\|_{F}^{2}\right]. \tag{2}\] **Remark 1** (Linear Model).: _The reason we consider linear models with the pseudo-inverse solution is that this eliminates other factors, such as the initialization of the network that could be a cause of the double descent [23]._ Assumptions for the data.We assume that the data lies on a line embedded in high-dimensional space. We note that [40] could be used to extend the analysis to the rank \(r\) case. **Assumption 1**.: _Let \(\mathcal{U}\subset\mathbb{R}^{d}\) be a one dimensional space with a unit basis vector \(u\). Then let \(X_{trn}=\sigma_{trn}uv_{trn}^{T}\in\mathbb{R}^{d\times N_{trn}}\) and \(X_{tst}=\sigma_{tst}uv_{tst}^{T}\in\mathbb{R}^{d\times N_{tst}}\) be the respective SVDs for the training data and test data matrices. We further assume that \(\sigma_{trn}=O(\sqrt{N_{trn}})\) and \(\sigma_{tst}=O(\sqrt{N_{tst}})\)._ There are no assumptions on the distribution of \(v_{trn},v_{tst}\) beyond that they have unit norm. **Remark 2** (Data on a line).: _In [26], it was shown that by considering specific data distributions, any shaped generalization error curve could be observed in the over-parameterized regime. Hence to limit the effect of the data, we consider data on a line with norm restrictions._ Assumptions about the noise.The analysis here can be done for the general noise assumptions in [34]. However, for simplicity, we assume that the noise matrix \(A\) has I.I.D. entries drawn from an appropriately normalized Gaussian. **Assumption 2**.: _The entries of the noise matrices \(A\in\mathbb{R}^{d\times N}\) are I.I.D. from \(\mathcal{N}(0,1/d)\)._ Notational note.One final piece of technical notation is the following function definition. \[\begin{split} p(\mu):=(4\mu^{15}+48\mu^{13}+204\mu^{11}+352\mu^{ 9}+192\mu^{7})\sqrt{\mu^{2}+4}\\ -(4\mu^{16}+56\mu^{14}+292\mu^{12}+680\mu^{10}+640\mu^{8}+128\mu^{ 6}).\end{split} \tag{3}\] Under-Parameterized Regime Peak We begin by providing a formula for the generalization error given by Equation 2 for the least squares solution given by Equation 1. All proofs are in Appendix A. **Theorem 1** (Generalization Error Formula).: _Suppose the training data \(X_{trn}\) and test data \(X_{tst}\) satisfy Assumption 1 and the noise \(A_{trn},A_{tst}\) satisfy Assumption 2. Let \(\mu\) be the regularization parameter. Then for the under-parameterized regime (i.e., \(c<1\)) for the solution \(W_{opt}\) to Problem 1, the generalization error or risk given by Equation 2 is given by_ \[\mathcal{R}(c,\mu)=\tau^{-2}\left(\frac{\sigma_{tst}^{2}}{N_{tst}}+\frac{c \sigma_{trn}^{2}(\sigma_{trn}^{2}+1))}{2d}\left(\frac{1+c+\mu^{2}c}{\sqrt{(1-c +\mu^{2}c)^{2}+4\mu^{2}c^{2}}}-1\right)\right)+o\left(\frac{1}{d}\right),\] _where \(\tau^{-1}=\frac{2}{2+\sigma_{trn}^{2}(1+c+\mu^{2}c-\sqrt{(1-c+\mu^{2}c)+4\mu^{ 2}c^{2}})}\)._ Since the focus is on the under-parameterized regime, Theorem 1 only presents the under-parameterized case. The over-parameterized case can be found in Appendix A.2. 
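The following sketch makes the setup of Eqs. (1)-(2) under Assumptions 1-2 concrete: it forms the rank-one training and test matrices, draws the \(\mathcal{N}(0,1/d)\) noise, computes the ridge-regularized denoiser in closed form (for \(\mu>0\) the minimizer of Eq. (1) is unique, so no pseudo-inverse is needed), and Monte-Carlo estimates the risk on a small grid of aspect ratios. The sizes and trial counts are deliberately small and are not the settings used for the figures; per Theorem 3, the printed values should display a maximum near \(c\approx 1/(\mu^{2}+1)\).

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_mc(d, N_trn, mu, N_tst=500, trials=20):
    """Monte-Carlo risk (Eq. 2) of the ridge-regularized denoiser (Eq. 1) for rank-1 data."""
    u = np.zeros(d); u[0] = 1.0                            # direction of the line (Assumption 1)
    v_trn = rng.standard_normal(N_trn); v_trn /= np.linalg.norm(v_trn)
    v_tst = rng.standard_normal(N_tst); v_tst /= np.linalg.norm(v_tst)
    X_trn = np.sqrt(N_trn) * np.outer(u, v_trn)            # sigma_trn = sqrt(N_trn)
    X_tst = np.sqrt(N_tst) * np.outer(u, v_tst)            # sigma_tst = sqrt(N_tst)
    vals = []
    for _ in range(trials):
        A_trn = rng.standard_normal((d, N_trn)) / np.sqrt(d)   # noise entries N(0, 1/d), Assumption 2
        A_tst = rng.standard_normal((d, N_tst)) / np.sqrt(d)
        Y = X_trn + A_trn
        # Closed-form minimizer of ||X - W Y||_F^2 + mu^2 ||W||_F^2 (unique for mu > 0).
        W = X_trn @ Y.T @ np.linalg.inv(Y @ Y.T + mu**2 * np.eye(d))
        vals.append(np.sum((X_tst - W @ (X_tst + A_tst)) ** 2) / N_tst)
    return np.mean(vals)

d, mu = 300, 1.0
for c in (0.1, 0.3, 0.5, 0.7, 0.9, 1.5):
    N_trn = int(d / c)
    print(f"c = {c:3.1f}   risk ~ {risk_mc(d, N_trn, mu):.3f}")
print("conjectured peak location 1/(mu^2+1) =", 1 / (mu**2 + 1))
```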
Data Scaling. Looking at the formula in Theorem 1, the risk curve's shape is unclear. In this section, we prove that the risk curve in Theorem 1 has a peak for \(c\in(0,1)\). Theorem 2 tells us that under certain conditions, we are theoretically guaranteed to have a peak in the under-parameterized regime. This contrasts with prior work such as [3, 5, 8, 9, 10, 11, 14, 25] where double descent occurs in the over-parameterized regime or on the boundary between the two regimes. **Theorem 2** (Under-Parameterized Peak).: _If \(\mu\in\mathbb{R}_{>0}\) is such that \(p(\mu)<0\), \(\sigma_{trn}^{2}=N_{trn}=d/c\) and \(\sigma_{tst}^{2}=N_{tst}\), and \(d\) is sufficiently large, then the risk \(\mathcal{R}(c)\) from Theorem 1, as a function of \(c\), has a local maximum in the under-parameterized regime (\(c\in(0,1)\))._ Since the peak no longer occurs at \(c=1\), one important question is to determine the location of the peak. Theorem 3 provides a method for estimating the location of the peak. **Theorem 3** (Peak Location).: _If \(\mu\in\mathbb{R}_{>0}\) is such that \(p(\mu)<0\), \(\sigma_{trn}^{2}=N_{trn}=d/c\) and \(\sigma_{tst}^{2}=N_{tst}\), then the partial derivative with respect to \(c\) of the risk \(\mathcal{R}(c)\) from Theorem 1 can be written as_ \[\frac{\partial}{\partial c}\mathcal{R}(c,\mu)=\frac{(\mu^{2}c+c-1)P(c,\mu,T(c,\mu),d)+4d\mu^{2}c^{2}(2\mu^{2}c-T(c,\mu))}{Q(c,\mu,T(c,\mu),d)},\] _where \(T(c,\mu)=\sqrt{(1-c+\mu^{2}c)^{2}+4\mu^{2}c^{2}}\) and \(P,Q\) are polynomials in four variables._ Here, at \(c=(\mu^{2}+1)^{-1}\), the first term in the numerator is zero. Hence we conjecture that the peak of the generalization error curve occurs near \(c=(\mu^{2}+1)^{-1}\). **Remark 3**.: _Note that as \(\mu\to 0\), we have that \(4d\mu^{2}c^{2}(2\mu^{2}c-T(c,\mu))\to 0\). We also note that, when \(\mu=1\), we have that \(2c-T(c,1)=0\). Thus, we see that for \(\mu\) near \(0\) or \(1\), we should expect our estimate of the location of the peak to be accurate._ We numerically verify the predictions from Theorems 1, 2, 3. Figure 2 shows that the theoretically predicted risk matches the numerical risk. Moreover, the curve has a single peak for \(c<1\). Thus, _verifying that double descent occurs in the under-parameterized regime._ Finally, Figure 2 shows that the location of the peak is near the conjectured location of \(\frac{1}{\mu^{2}+1}\). This conjecture is further tested for a larger range of \(\mu\) values in Appendix B. Figure 2: Figure showing the risk curve in the data scaling regime for different values of \(\mu\) [(L) \(\mu=0.1\), (C) \(\mu=1\), (R) \(\mu=2\)]. Here \(\sigma_{trn}=\sqrt{N_{trn}},\sigma_{tst}=\sqrt{N_{tst}},d=1000,N_{tst}=1000\). For each empirical point, we ran at least 100 trials. More details can be found in Appendix D. One similarity with prior work is that the peak in the generalization error or risk corresponds to a peak in the norm of the estimator \(W_{opt}\), as seen in Figure 3 (i.e., the curve passes through the top right corner). The figure further shows, as conjectured in [76], that the double descent for the generalization error disappears when plotted as a function of \(\|W_{opt}\|_{F}^{2}\) and, in some cases, recovers an approximation of the standard U-shaped error curve. Risk curve shape depends on \(d\). Another interesting aspect of Theorem 2 is that it requires that \(d\) is large enough. Hence the shape of the risk curve depends on \(d\). Most results for the risk are in the asymptotic regime. 
While Theorems 1, 2, and 3 are also in the asymptotic regime, we see that the results are accurate even for (relatively) small values of \(d,N_{trn}\). Figure 4 shows that the shape of the risk curve depends on the value of \(d\). Both curves still have a peak at the same location. Parameter Scaling. For many prior models, the data scaling and parameter scaling regimes are analogous in that the shape of the risk curve does not depend on which one is scaled. The shape is primarily governed by the aspect ratio \(c\) of the data matrix. However, we see significant differences between the parameter scaling and data scaling regimes for our setup. Figure 5 shows risk curves that differ from those in Figure 2. Further, while for small values of \(\mu\), double descent occurs in the under-parameterized regime, for larger values of \(\mu\), the risk is monotonically decreasing. (Footnote 7: This is verified for more values of \(\mu\) in Appendix B.) Even more astonishing, as shown in Figure 6, is the fact that for larger values of \(\mu\), _there is still a peak_ in the curve for the norm of the estimator \(\|W_{opt}\|_{F}^{2}\). However, this _does not_ translate to a peak in the risk curve. This shows that an increase in the norm of the estimator cannot, by itself, imply an increase in the generalization error. The following theorem provides a local maximum in the \(\|W_{opt}\|_{F}^{2}\) versus \(c\) curve for \(c<1\). **Theorem 4** (\(\|W_{opt}\|_{F}\) Peak).: _If \(\sigma_{tst}=\sqrt{N_{tst}}\), \(\sigma_{trn}=\sqrt{N_{trn}}\) and \(\mu\) is such that \(p(\mu)<0\), then for \(N_{trn}\) large enough and \(d=cN_{trn}\), we have that \(\|W_{opt}\|_{F}\) has a local maximum in the under-parameterized regime. Specifically, for \(c\in((\mu^{2}+1)^{-1},1)\)._ Training Error. As seen in the prior section, the peak happens in the interior of the under-parameterized regime and not on the border between the under-parameterized and over-parameterized regimes. In many prior works, the peak aligns with the interpolation point (i.e., zero training error). Figure 3: Figure showing generalization error versus \(\|W_{opt}\|_{F}^{2}\) for the data scaling regime for three different values of \(\mu\). More details can be found in Appendices B and D. Figure 4: Figure showing that the shape of the risk curve in the data scaling regime depends on \(d\) [(L) \(d=1000\), (R) \(d=2000\)]. Here \(\mu=\sqrt{2},\sigma_{trn}=\sqrt{N_{trn}},\sigma_{tst}=\sqrt{N_{tst}},N_{tst}=1000\). Each empirical point is an average of at least 200 trials. More details can be found in Appendix D. Theorem 5 derives a formula for the training error in the under-parameterized regime. Figure 7 plots the location of the peak, the training error, and the third derivative of the training error. Here the figure shows that the training error curve does not signal the location of the peak in the generalization error curve. However, it shows that for the data scaling regime, the peak roughly corresponds to a local minimum of the third derivative of the training error. **Theorem 5** (Training Error).: _Let \(\tau\) be as in Theorem 1.
The training error for \(c<1\) is given by_ \[\mathbb{E}_{A_{trn}}[\|X_{trn}-W_{opt}(X_{trn}+A_{trn})\|_{F}^{2}]=\tau^{-2}\left(\sigma_{trn}^{2}\left(1-c\cdot T_{1}\right)+\sigma_{trn}^{4}T_{2}\right)+o(1),\] _where \(T_{1}=\dfrac{\mu^{2}}{2}\left(\dfrac{1+c+\mu^{2}c}{\sqrt{(1-c+\mu^{2}c)^{2}+4\mu^{2}c^{2}}}-1\right)+\dfrac{1}{2}+\dfrac{1+\mu^{2}c-\sqrt{(1-c+\mu^{2}c)^{2}+4c^{2}\mu^{2}}}{2c}\), and \(T_{2}=\dfrac{(\mu^{2}c+c-1-\sqrt{(1-c+\mu^{2}c)^{2}+4c^{2}\mu^{2}})^{2}(\mu^{2}c+c+1-\sqrt{(1-c+\mu^{2}c)^{2}+4c^{2}\mu^{2}})}{2\sqrt{(1-c+\mu^{2}c)^{2}+4c^{2}\mu^{2}}}\)._ ## 4 Regularization Trade-off We analyze the trade-off between the two regularizers and the generalization error. Figure 5: Figure showing the risk curves in the parameter scaling regime for different values of \(\mu\) [(L) \(\mu=0.1\), (C) \(\mu=0.2\), (R) \(\mu=0.2\)]. Here only the \(\mu=0.1\) curve has a local peak. Here \(N_{trn}=N_{tst}=1000\) and \(\sigma_{trn}=\sigma_{tst}=\sqrt{1000}\). Each empirical point is an average of 100 trials. Figure 6: Figure showing generalization error versus \(\|W_{opt}\|_{F}^{2}\) for the parameter scaling regime for three different values of \(\mu\). More details can be found in Appendix B. Figure 7: Figure showing the training error, the third derivative of the training error, and the location of the peak of the generalization error for different values of \(\mu\) [(L) \(\mu=1\), (C) \(\mu=2\)] for the data scaling regime. (R) shows the location of the local minimum of the third derivative and \(\frac{1}{\mu^{2}+1}\). **Optimal \(\sigma_{trn}\).** First, we fix \(\mu\) and determine the optimal \(\sigma_{trn}\). Figure 8 displays the generalization error versus \(\sigma_{trn}^{2}\) curve. The figure shows that the error is initially large but then decreases until it reaches the optimal generalization error. The generalization error when using the optimal \(\sigma_{trn}\) is also shown in Figure 8. Here, unlike [24], picking the optimal value of \(\sigma_{trn}\) does not mitigate double descent. **Proposition 1** (Optimal \(\sigma_{trn}\)).: _The optimal value of \(\sigma_{trn}^{2}\) for \(c<1\) is given by_ \[\sigma_{trn}^{2}=\frac{\sigma_{tst}^{2}d[2c(\mu^{2}+1)^{2}-2T(c\mu^{2}+c+1)+2(c\mu^{2}-2c+1)]+N_{tst}(\mu^{2}c^{2}+c^{2}+1-T)}{N_{tst}(c^{3}(\mu^{2}+1)^{2}-T(\mu^{2}c^{2}+c^{2}-1)-2c^{2}-1)},\] _where \(T=T(c,\mu)\) is as in Theorem 3._ Additionally, it is interesting to determine how the optimal value of \(\sigma_{trn}\) depends on both \(\mu\) and \(c\). Figure 9 shows that for small values of \(\mu\) (0.1, 0.5), as \(c\) changes, there exists an (inverted) double descent curve for the optimal value of \(\sigma_{trn}\). However, unlike [34], for the data scaling regime, the minimum of this double descent curve _does not match the location for the peak of the generalization error_. Further, as the amount of ridge regularization increases, the optimal amount of noise regularization decreases proportionally: the optimal signal strength satisfies \(\sigma_{trn}^{2}\approx d\mu^{2}\). Thus, for higher values of ridge regularization, it is preferable to have higher-quality data. **Interaction Between the Regularizers.** The optimal values of \(\mu\) and \(\sigma_{trn}\) are jointly computed using grid search for \(\mu\in(0,100]\) and \(\sigma_{trn}/\sqrt{N_{trn}}\in(0,10]\). Figure 10 shows the results. Specifically, \(\sigma_{trn}\) is at the highest possible value (i.e., the best-quality data), and then the model regularizes purely using the ridge regularizer. This results in a monotonically decreasing generalization error curve.
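A rough way to reproduce this kind of joint search, for a fixed aspect ratio \(c<1\), is to scan the closed-form risk of Theorem 1 over a grid of the two regularizers. The sketch below is our own illustration (the grid resolution and the values of \(d\), \(N_{trn}\), \(N_{tst}\) are arbitrary choices), using the same grid ranges \(\mu\in(0,100]\) and \(\sigma_{trn}/\sqrt{N_{trn}}\in(0,10]\) quoted above.

```python
import numpy as np

def theoretical_risk(c, mu, sigma_trn2, sigma_tst2, N_tst, d):
    # Risk formula of Theorem 1 (under-parameterized regime, c < 1), as stated above.
    T = np.sqrt((1 - c + mu**2 * c) ** 2 + 4 * mu**2 * c**2)
    tau_inv = 2.0 / (2.0 + sigma_trn2 * (1 + c + mu**2 * c - T))
    return tau_inv**2 * (sigma_tst2 / N_tst
                         + c * sigma_trn2 * (sigma_trn2 + 1) / (2 * d)
                         * ((1 + c + mu**2 * c) / T - 1))

d, N_trn, N_tst = 500, 1000, 1000            # fixed aspect ratio c = 0.5
c = d / N_trn
mus = np.linspace(1e-2, 100.0, 200)          # mu in (0, 100]
snrs = np.linspace(1e-2, 10.0, 200)          # sigma_trn / sqrt(N_trn) in (0, 10]
best = min((theoretical_risk(c, m, (s * np.sqrt(N_trn)) ** 2, N_tst, N_tst, d), m, s)
           for m in mus for s in snrs)
print("minimal risk %.4g at mu = %.3g, sigma_trn/sqrt(N_trn) = %.3g" % best)
```

Per the discussion above, such a scan ends up preferring the largest allowed \(\sigma_{trn}\), leaving the ridge term to do the remaining regularization.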
Thus, in the data scaling model, _there is an implicit bias that favors one regularizer over the other_. Specifically, the model's implicit bias _is to use higher-quality data while using ridge regularization to regularize the model appropriately_. It is surprising that the two regularizers are not balanced. Figure 8: The first two figures show the \(\sigma_{trn}\) versus risk curve for \(c=0.5,\mu=1\) and \(c=2,\mu=0.1\) with \(d=1000\). The second two figures show the risk when training using the optimal \(\sigma_{trn}\) for the data scaling and parameter scaling regimes. Figure 9: The first figure plots the optimal \(\sigma_{trn}^{2}/N_{trn}\) versus \(\mu\) curve. The middle figure plots the optimal \(\sigma_{trn}^{2}/N_{trn}\) versus \(c\) in the data scaling regime for \(\mu=0.5\), and the last figure plots the optimal \(\sigma_{trn}^{2}/N_{trn}\) versus \(c\) in the parameter scaling regime for \(\mu=0.1\). Figure 10: Trade-off between the regularizers. The left column is the optimal \(\sigma_{trn}\), the central column is the optimal \(\mu\), and the right column is the generalization error for these parameter restrictions. ## 5 Conclusion This paper presented a simple model with double descent in the under-parameterized regime for both the data scaling and parameter scaling regimes. We also see that the three prevalent theories for the location of the peak, i.e., at the boundary between the under- and over-parameterized regimes, at the interpolation point, and at the point where the norm of the estimator peaks, do not explain the peak for this model. Specifically, we show that the peak in the data scaling regime is near \(c=1/(\mu^{2}+1)\), where \(\mu\) is the ridge regularization coefficient. In the parameter-scaling regime, we show that for large values of \(\mu\), we still have a peak in the curve for \(\|W_{opt}\|_{F}\) versus the number of parameters, but this no longer corresponds to a peak in the generalization error curve. Hence, to provide a general theory, further investigation into the causes and locations of peaks in the generalization error curve is required.
2310.08016
What can we learn from the experiment of electrostatic conveyor belt for excitons?
Motivated by the experiment of electrostatic conveyor belt for indirect excitons [A. G. Winbow, \textit{et al.}, Phys. Rev. Lett. \textbf{106}, 196806 (2011)], we study the exciton patterns for understanding the exciton dynamics. By analyzing the exciton diffusion, we find that the patterns mainly come from the photoluminescence of two kinds of excitons. The patterns near the laser spot come from the hot excitons which can be regarded as the classical particles. However, the patterns far from the laser spot come from the cooled excitons or coherent excitons. Taking into account of the finite lifetime of Bosonic excitons and of the interactions between them, we build a time-dependent nonlinear Schr\"{o}dinger equation including the non-Hermitian dissipation to describe the coherent exciton dynamics. The real-time and imaginary-time evolutions are used alternately to solve the Schr\"{o}dinger equation in order to simulate the exciton diffusion accompanied with the exciton cooling in the moving lattices. By calculating the escape probability, we obtain the transport distances of the coherent excitons in the conveyor which are consistent with the experimental data. The cooling speed of excitons is found to be important in the coherent exciton transport. Moreover, the plateau in the average transport distance cannot be explained by the dynamical localization-delocalization transition induced by the disorders.
T. T. Zhao, Rui Li, C. S. Liu
2023-10-12T03:30:15Z
http://arxiv.org/abs/2310.08016v3
# What can we learn from the experiment of electrostatic conveyer for excitons? ###### Abstract Motivated by the experiment of electrostatic conveyer for indirect excitons [A. G. Winbow, _et al._, Phys. Rev. Lett. **106**, 196806 (2011)], we studied the exciton patterns to understand the exciton dynamics. By analyzing the exciton diffusion, we found that the patterns arise approximately from two kinds of excitons. The patterns near the laser spot came from the hot excitons, which can be taken as classical particles. However, the patterns far from the laser spot were formed by the cooled excitons, or coherent excitons. Taking into account the finite lifetime of the Bosonic excitons and the interactions between them, we set up a time-dependent nonlinear Schrodinger equation including the non-Hermitian dissipation to describe the coherent exciton dynamics. The real-time and imaginary-time evolutions were used alternately to solve the Schrodinger equation in order to study the exciton diffusion accompanied by the exciton cooling in the moving lattices. By calculating the escape probability, we obtained theoretically the transport distances of the coherent excitons in the conveyer, which are consistent with the experimental data. The exciton cooling speed was found to be the key factor for the coherent exciton transport. Moreover, the plateau in the average transport distance as a function of the conveyer amplitude cannot be explained by the dynamical localization-delocalization transition due to the disorder. ## I Introduction The quantum-mechanical time evolution of particles is usually described in terms of quantum walkers. Standard quantum mechanics assumes Hermiticity of the Hamiltonian, yet non-Hermitian Hamiltonians are valuable in many fields of physics. For example, dissipative systems with gain and loss naturally exhibit non-Hermiticity [1]. As a result, non-Hermitian quantum walkers are of universal significance and have attracted attention recently [2; 3; 4; 5]. However, the direct observation of quantum walkers is very difficult, whether in Hermitian or non-Hermitian systems. Taking a particle in a one-dimensional periodic potential as an example, if the potential changes in time and returns to itself after one cycle, the transport of the simplest quantum walkers is given by the Chern number according to the Thouless mechanism [6]. This gedanken experiment demands the adiabatic approximation, i.e., that the potential changes very slowly. In particular, the particles must be Fermions occupying filled Bloch bands at zero temperature. Excitons are electron-hole bound pairs in semiconductors; they are Bose quasiparticles and are expected to realize the phenomenon of Bose-Einstein condensation [7]. Due to their finite lifetime, excitons form an intrinsically non-Hermitian system. The short lifetime and the low cooling rate hinder the realization of exciton Bose-Einstein condensation. To overcome these two disadvantages, indirect excitons, i.e., spatially separated electron-hole bound pairs, were generated in coupled quantum wells [8]. These long-lived particles could provide a means to transport information, as electrons do. However, as they are neutral overall, they are harder to move electrically. Owing to the dipole moment of the indirect excitons, their energy can be controlled by a voltage, which allows lattices to be created for them. By applying an alternating voltage to an electrode grid that overlays the device, a wavelike potential can be created for the excitons and made to slide across the sample as though on a conveyer.
Sample regions with a higher density of excitons luminesce, allowing us to track the location of the excitons. The moving lattices, or the conveyer, thus become an ideal place to observe the quantum walker in a non-Hermitian Boson system. The data of the electrostatic conveyer for indirect excitons were reported in Ref. [9]. Panels [(a) and (b)], [(c) and (d)], and [(e) and (f)] in Fig. 1 show the \(x\)-\(y\) photoluminescence (PL) images, the \(x\)-energy PL images, and the \(x\)-PL intensity profiles, respectively, for the conveyer off and on. The left column of the last row [Fig. 1 (g)] shows the average transport distance of indirect excitons via the conveyer, \(M_{1}\), as a function of the conveyer amplitude. Fig. 1 (h) shows the transport distance of indirect excitons via the conveyer, \(M_{1}\), as a function of density. The first moment of the PL intensity, \(M_{1}=\int xI(x)dx/\int I(x)dx\), characterizes the average transport distance of the indirect excitons via the conveyer. \(I(x)\) is the PL intensity profile obtained by the integration of the \(x\)-energy images over the emission wavelength. Major features of the indirect exciton transport are summarized as follows. (i) There exist dynamical localization-delocalization transitions. (ii) Crossing the transition point, the transport distance increases with the conveyer amplitude and tends to saturate. (iii) The exciton transport is less efficient for higher velocity. (iv) Efficient exciton transport via the conveyer only occurs at intermediate densities. (v) Several bright stripes appear in the PL images. The above experimental facts raise several interesting questions which need to be clarified. For example, is the efficiency of the exciton transport at lower conveyer velocity also governed by the Thouless mechanism [6]? Another important issue is whether the bright stripes originate from exciton coherence. Previously, a nonlinear Schrodinger equation including an attractive two-body interaction and a repulsive three-body interaction was proposed to explain the complex exciton patterns, where the excitons are assumed to be in a coherent state [10; 11; 12; 13]. One may naturally ask whether the nonlinear Schrodinger equation can also explain the exciton transport. Although the dependence of the transport distance on the conveyer velocity and amplitude, as well as on the exciton density, was explained by a nonlinear partial differential equation [9], the degeneracy of the excitons was not involved. Since any real-life material will inevitably contain a certain degree of impurities and defects, and inter-particle interactions are almost always present, the combined effect of disorder and interactions becomes central to the richness of condensed matter physics and can lead to novel quantum states such as the Bose glass phase in disordered Bosonic systems [14; 15]. Recently, the phase transition between the Bose glass and the superfluid was directly observed in the two-dimensional Bose glass in an optical quasicrystal [16]. The dynamical localization-delocalization transitions observed in the exciton conveyer give another platform to answer more fundamental questions about the delocalization-localization transition of excitons [17]. By analysing the PL patterns, we find that the exciton motion can be simplified to the diffusion of finite-lifetime Bosons in moving lattices. The main findings of this work are summarized as follows.
(i) The pattern can be divided approximately into an incoherent part and a coherent part, which can be described by the nonlinear partial differential equation and the nonlinear Schrodinger equation, respectively. (ii) The bright stripes result from the interplay between the coherent excitons and the moving lattices. (iii) The cooling rate is the key factor for the exciton transport. (iv) The lifetime and cooling rate of the coherent excitons are estimated. (v) The sample is very clean and free of impurities. We discuss the data and set up a time-dependent nonlinear Schrodinger equation in Sec. II to describe the coherent excitons. We also show how to obtain the PL intensity profiles \(I(x)\) by calculating the escape rate. In Sec. III, numerical calculations and detailed discussions on the exciton distribution in moving lattices are given. Sec. IV is devoted to a brief summary. ## II Model Hamiltonian The clues to the exciton states in the conveyer come from the various exciton patterns found in previous experiments, in particular, the two puzzling exciton rings, inner and external, and the periodic bright spots in the external ring [8; 18; 19]. A charge-separated transport mechanism was proposed, and it gives a satisfactory explanation for the formation of the two exciton rings and the dark region between the inner and the external ring [20; 21]. This mechanism was further confirmed by PL images of a single quantum well [21]. As pointed out in Refs. [20; 21], when electrons and holes are first excited by a high-power laser, they are actually charge-separated and have a small recombination rate. No true exciton is formed at this stage. They can travel a long distance from the laser spot before combination. After traveling a long distance, the hot electrons and holes collide with the semiconductor lattices and are cooled down. The cooled electrons and holes lead to an exciton inner ring near the laser spot. As the drift speed of electrons (with smaller effective mass) is larger than that of holes (with larger effective mass), there are always some electrons and holes escaping from this combination. Also, due to the neutrality of the coupled quantum wells, the negative charges will slow down and accumulate far away from the laser spot. The cold electrons and holes then meet and eventually form the external exciton ring at the boundary of the opposite charges. As the excitons formed in the external ring come from the cooled electrons and holes, they have a low kinetic energy and temperature. While whether these excitons are condensed is in debate [22; 23; 24; 25; 20; 26; 27; 28], we reasonably assumed that the excitons were in highly degenerate states, and proposed a self-trapped interaction model involving an attractive two-body interaction and a repulsive three-body interaction [10; 11; 13]. Figure 1: The figures are taken from Ref. [9]. (a-d) \(x\)-\(y\) and \(x\)-energy PL images, and (e, f) PL intensity profiles \(I(x)\) for conveyer off and on. \(P_{\text{ex}}=20\ \mu\)W, \(A_{\text{conv}}=7.5\) meV, \(v_{\text{conv}}=0.7\ \mu\)m/ns. The blue dashed lines in (c) and (e) are used to separate the hot excitons near the center and the cooled excitons far from the center when the conveyer is off. The green dashed line across (b), (d) and (f) is used to indicate the hot (incoherent and coherent) excitons near the center and the coherent excitons far from the center when the conveyer is on. The green solid line across (b), (d) and (f) indicates that the cooled excitons (degenerate excitons) travel about \(40\ \mu\)m. (g) The average transport distance of indirect excitons via the conveyer, \(M_{1}\), as a function of the conveyer amplitude, \(A_{\text{conv}}\). Lines are a guide to the eye. The intersection point of the lines is defined as \(A_{\text{turn-on}}\), where the dynamical localization-delocalization transitions occur, as shown in (g\({}_{1}\)). \(P_{\text{ex}}=20\ \mu\)W. (h) The measured average transport distance of indirect excitons via the conveyer, \(M_{1}\), as a function of density. \(A_{\text{conv}}=4.9\) meV, \(v_{\text{conv}}=0.7\ \mu\)m/ns. The mechanism gave a good account of the periodic bright spots in the external ring. In addition, it also explained well the abnormal exciton distribution in an impurity potential, in which the PL pattern becomes much more compact than a Gaussian with a central intensity dip, exhibiting an annular shape with a darker central region [29]. Moreover, the model also captured some experimental details. For instance, the dip can turn into a tip at the center of the annular cloud when the sample is excited by higher-power lasers. Inspired by the above experimental data and theories, we are ready to investigate the exciton PL patterns in the moving lattices. As we know, the PL intensity is approximately proportional to the exciton number, and the PL energy is approximately proportional to the exciton energy, which includes the kinetic energy, the binding energy, and the interaction energy. However, the binding energy remains basically unchanged due to the special structure of the spatially separated electrons and holes. The interaction energy between excitons is also negligible in the low-density case. As a result, the change of the PL energy is related to the kinetic energy only. The PL energies shown in Fig. 1 (c) (1.534 eV, marked by the blue dashed line) and the PL intensities in Fig. 1 (e) far from the center are lower than those near the center, which indicates that the excitons near the center have larger kinetic energy and higher particle density. The spatial distribution of excitons here is also consistent with that of the exciton rings reported in Refs. [8; 18]. We can assume that the hot excitons are formed at the center [dark region -10 \(\mu\)m \(<\) x \(<\) 10 \(\mu\)m in Fig. 1 (e)] and the cooled excitons are located far from the center. We believe that there exist two kinds of excitons. One is the hot excitons near the center, which can be taken as classical particles. The other is the cooled excitons far from the center, which cannot be taken as classical particles. When the conveyer is turned on [shown in Fig. 1 (b)], the moving potential modifies the exciton potential energy and the PL energy [shown in Fig. 1 (d)]. While the moving lattices drag the hot excitons, whether they are in a coherent state or not, they are cooled down further by inelastic collisions between the excitons and the semiconductor phonons. Consequently, the exciton distribution can be divided roughly into two sections by the green dashed line, as shown in Fig. 1 (b), (d) and (f). On the left side of the green dashed line, part of the excitons are in a coherent state and part are in an incoherent state. On the right side of the green dashed line, however, all the excitons are in coherent states. According to the above analysis, the coherent excitons may come from two parts. One part comes directly from the source via the cooling of hot excitons. The other comes from the pairing of the cooled electrons and holes. It is useful to estimate the exciton parameters according to the above analysis.
Indicated by the green dashed line in Fig. 1, after traveling 10 \(\mu\)m, the hot excitons can become highly degenerate excitons. Marked by a blue solid line, after traveling another 30 \(\mu\)m, the coherent excitons recombine. If the Thouless mechanism is still effective for the exciton transport at the low velocity \(v_{\mathrm{conv}}=0.7\ \mu\)m/ns, it takes about 14 ns to go from hot excitons to highly degenerate excitons. It is therefore reasonable to take the cooling time of the excitons, from classical particles to coherent particles, to be about 14 ns. It takes another 40 ns before the degenerate excitons recombine. So the lifetime of the coherent excitons is estimated to be about 40 ns. This is the first time that the lifetime of the indirect exciton has been obtained experimentally. Based on the above discussions, it is reasonable to model the highly degenerate exciton gas by a time-dependent nonlinear Schrodinger equation [10; 13] \[i\frac{d\left|\Psi\left(t\right)\right\rangle}{dt}=H\left|\Psi\left(t\right)\right\rangle, \tag{1}\] where \(\left|\Psi\right\rangle=\left|\psi_{1},\psi_{2},\cdots,\psi_{L}\right\rangle\) and \(L\) is the lattice length. The tight-binding Hamiltonian reads \[H=\sum_{j}\left[\tilde{t}\left(\left|\psi_{j}\right\rangle\left\langle\psi_{j+1}\right|+\mathrm{h.c.}\right)+\left[V_{\mathrm{conv},j}\left(t\right)+F\left(n_{j}\right)-i\gamma\right]\left|\psi_{j}\right\rangle\left\langle\psi_{j}\right|\right], \tag{2}\] where \[V_{\mathrm{conv},j}\left(t\right)=A_{\mathrm{conv}}\cos\left[2\pi\left(j-v_{\mathrm{conv}}t\right)/\lambda\right]=A_{\mathrm{conv}}\cos\left(2\pi\eta j-\omega t\right) \tag{3}\] denotes the moving external potential created by a set of ac voltages. When \(v_{\mathrm{conv}}=0\) and \(\omega=0\), the moving lattices reduce to static lattices. \(\gamma=1/\Gamma>0\) is the loss rate and \(\Gamma\) is the exciton average lifetime. \(\tilde{t}\) is the hopping amplitude and is set to 1 in the following discussions. \(n_{j}=\left|\psi_{j}\right|^{2}\) is the local probability density. \(F\left(n_{j}\right)\) is the effective interaction between the indirect excitons. In general, the excitons are considered to have weak repulsive interactions, \(F\left(n_{j}\right)=gn_{j}\). The form \(F\left(n_{j}\right)=-g_{1}n_{j}+g_{2}n_{j}^{2}\) has also been used to explain various exciton patterns [10; 11; 12; 13]. This phenomenological interaction may originate from the dipolar interaction and the exchange interaction. Since the effective parameters satisfy \(g_{1}\propto N\) and \(g_{2}\propto N^{2}\) (\(N\) is the particle number), we get the relationship \(g_{2}\propto g_{1}^{2}\). We take \(g_{2}=\epsilon g_{1}^{2}\), where \(\epsilon\) is a parameter describing the complex interactions. The effects of the two kinds of interactions are identical except in the cases of very low and very high particle density. We will discuss this issue in subsec. III.2. Eq. (1) is the modified Gross-Pitaevskii equation when \(\gamma=0\). Unlike the Gross-Pitaevskii equation, which is usually used to determine the ground states of a low-temperature Bosonic gas with a short- or zero-range two-body interaction, Eq. (1) considers the effect of the finite particle lifetime.
Since the wavefunction norm decreases according to \[\frac{d}{dt}I(t)=\frac{d}{dt}\langle\Psi(t)|\Psi(t)\rangle=i\langle\Psi(t)|\left(H^{\dagger}-H\right)|\Psi(t)\rangle=-2\gamma\langle\Psi(t)|\Psi(t)\rangle=-2\gamma I(t),\] the time-dependent wavefunction norm is \[I(t)=\langle\Psi(t)|\Psi(t)\rangle=\sum_{j}|\psi_{j}(t)|^{2}=\exp\left(-2\gamma t\right) \tag{4}\] when the initial state is normalized, \(\langle\Psi(0)|\Psi(0)\rangle=1\). We define the probability of the walker escaping from location \(j\) as \[I_{j}=2\gamma\int_{0}^{\infty}dt|\psi_{j}(t)|^{2}, \tag{5}\] which satisfies \(\sum_{j}I_{j}=\sum_{j}2\gamma\int_{0}^{\infty}dt|\psi_{j}(t)|^{2}=2\gamma\int_{0}^{\infty}dtI(t)=1\). Therefore \(I_{j}\) is the PL intensity profile. The exciton cooling is another important physical factor for the exciton time evolution. As time goes on, the high-energy excitons relax to low-energy excitons. The cold excitons condense into their highly degenerate states. To simplify the discussion, the excitons are assumed to follow the Boltzmann distribution \(\sim e^{-\beta E_{n}}\), in which \(E_{n}\) is the energy level and \(\beta=1/k_{B}T\). \(k_{B}\) and \(T\) are the Boltzmann constant and the temperature, respectively. The energy levels have previously been investigated experimentally and theoretically [8; 18; 19; 29; 30] [10; 11; 12; 13]. The cooling process can be well characterized by the imaginary-time evolution of the Schrodinger equation in Eq. (1), where the time \(t\) is replaced by \(-it\). Although the energy levels are not well defined in the time-dependent non-Hermitian system, we adopt the imaginary-time evolution of the Schrodinger equation to characterize the exciton cooling. In the following calculations, we evolve the initial state \(\Psi(0)\) in real time with Eq. (1) over a time span \(\Delta t\) to obtain \(\Psi(\Delta t)\). Then the state \(\Psi(\Delta t)\) is evolved in imaginary time over a time span \(\Delta t^{\prime}\) to obtain \(\Psi(\Delta t+i\Delta t^{\prime})\). Finally, the wavefunction \(\Psi(\Delta t+i\Delta t^{\prime})\) is renormalized such that \[I(\Delta t+i\Delta t^{\prime})=\left\langle\Psi(\Delta t+i\Delta t^{\prime})|\Psi(\Delta t+i\Delta t^{\prime})\right\rangle=\exp\left(-2\gamma\left[\Delta t+i\Delta t^{\prime}\right]\right).\] Repeating the process above \(\mathcal{N}\) times until \(I\left(\mathcal{N}\left[\Delta t+i\Delta t^{\prime}\right]\right)\sim 0\), we get a series of wavefunctions \(\Psi\left(n\left[\Delta t+i\Delta t^{\prime}\right]\right)\), which is used in \(I_{j}=2\gamma\sum_{n=0}^{\mathcal{N}}|\psi_{j}\left(n\left[\Delta t+i\Delta t^{\prime}\right]\right)|^{2}\Delta t\). We define \(\tau=\Delta t^{\prime}/\Delta t\). A larger \(\tau\) means more imaginary-time evolution per real-time step, so \(\tau\) is proportional to the cooling speed. It should be emphasized that the method above is effective since the normalization condition of the escape rate \(I_{j}\) in Eq. (5) guarantees the convergence of the calculations. Without the time-dependent normalization condition, the propagating wave pocket still meets the boundary and is reflected, whether the open boundary condition or the absorbing boundary condition is adopted. The interference fringes are formed by interference between incident and reflected waves. In such a case, no convergent solution can be obtained. ## III Numerical analysis ### A wave pocket diffusion: without particle interactions According to the analysis above, the sources of the coherent excitons are diverse.
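Before turning to the specific cases below, the alternating real-time/imaginary-time stepping and the accumulation of the escape probability \(I_{j}\) can be illustrated with a minimal numerical sketch. The code below is our own illustration, not the authors' program: it treats the non-interacting case \(F(n_{j})=0\), evolves each step with a frozen-Hamiltonian matrix exponential, handles the uniform loss \(\gamma\) analytically through the factor \(e^{-2\gamma t}\), and uses illustrative parameter values; it represents only one possible reading of the procedure described above.

```python
import numpy as np
from scipy.linalg import expm

L, t_hop, gamma = 128, 1.0, 0.05            # lattice size, hopping, loss rate
A_conv, lam, v_conv = 1.6, 64.0, 0.7        # conveyer amplitude, period, speed
dt, tau, n_steps = 0.05, 0.5, 2000          # real-time step, cooling ratio, steps

j = np.arange(L) - L // 2
psi = np.exp(-0.006 * (j + 16.0) ** 2).astype(complex)   # initial Gauss wave pocket
psi /= np.linalg.norm(psi)

def hamiltonian(t):
    hop = np.diag(np.full(L - 1, t_hop), 1) + np.diag(np.full(L - 1, t_hop), -1)
    return hop + np.diag(A_conv * np.cos(2 * np.pi * (j - v_conv * t) / lam))

I_j = np.zeros(L)
for n in range(n_steps):
    h = hamiltonian(n * dt)
    psi = expm(-1j * dt * h) @ psi          # real-time step of Eq. (1), loss factored out
    psi = expm(-tau * dt * h) @ psi         # imaginary-time (cooling) step
    psi /= np.linalg.norm(psi)              # keep only the exp(-gamma t) decay of Eq. (4)
    I_j += 2 * gamma * dt * np.exp(-2 * gamma * n * dt) * np.abs(psi) ** 2

print("sum of escape probabilities:", I_j.sum())   # close to 1, cf. Eq. (5)
```

Plotting \(I_{j}\) from such a sketch, with and without the imaginary-time step, can be compared with the profiles discussed in the following subsections.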
The initial distribution of the degenerate excitons can be considered as a wave pocket with multi-peaks. To understand the exciton transport, let us first study the simplest case of the spreading of initial wave pocket with one-peak. We solve the time-dependent nonlinear Schrodinger equation (1) in real-time only under the open boundary condition (\(\psi_{\pm L}=0\)) with an initial Gauss wave function \(\psi_{j}(0)=\sqrt{(2\kappa/\pi)}\exp\left[-\kappa(j+j_{0})^{2}\right]\). The results with the parameters \(j_{0}=-16\), \(\kappa=0.006\) and \(L=128\) are shown in Fig. 2. In the following discussions, the blue lines in all the figures indicate the initial wave pocket. The physics of the wave pocket spreading is very simple [31; 32]. For the Gauss wave pocket, the uncertainty relation is \(\Delta\chi\Delta p\approx\hbar\) which means, with time increasing, the wider \(\Delta x\) is, the smaller \(\Delta p\) is and the lower the propagation speed is. Although the spreading becomes slower and slower as the time increases, the wave finally meets the boundary and is reflected. Interference fringes generated by interference between reflected and incident waves. The statements above is true whether it is without the periodic potential in Fig. 2 (a) or it is with the static periodic potential in Fig. 2 (d). The difference is that more interference fringes appear in Fig. 2 (d) due to the interplay between the wave and the periodic potential. When taking the particle dissipation into account, the wave pocket decays in Fig. 2 (b) and (e) as the time elapses. The escape probability \(I_{j}\) in Eq. (5) is calculated and presented in Fig. 2 (c) and (f). As expected, the particle dissipation suppresses the wave pocket spreading. From Fig. 2 (f), \(I_{j}\) can be deviated from the initial position due to the modulation of the external potential. We next investigate the cooling effect to a wave pocket diffusion. We solve the time-dependent nonlinear Schrodinger equation (1) in the real-time and imaginary-time alternatively to obtain the escape rate \(I_{j}\) of the coherent excitons. \(I_{j}\) is shown in Fig. 3. It shows that the cooling helps the wave pocket diffusion. We can understand the corresponding phys Figure 2: The wave pocket diffusion without a periodic potential (left column) and with the static periodic potential (right column). The free diffusion of a wave function at different time without the dispersion \(\gamma=0\) (a) and with the slgasion \(\gamma=0.01\) (b). The diffusion of a wave function modulated by the static periodic potential (dashed line) at different time without the slgasion \(\gamma=0\) (d) and with the slgasion \(\gamma=0.05\) (e). \(I_{j}\) distribution for the different dissipation \(\gamma\) for the free diffusion (c) and for the diffusion modulated by the static periodic potential \(V_{\text{conv},j}\) (f). The parameters of the periodic potential in Eq. (3) are set to be \(A_{\text{conv}}=1.6\), \(\omega=0\), and \(\lambda=64\). ical picture as follow [31; 32]. A wave pocket in free space can be considered as the superposition of the plane waves with the different wave vectors. When the condensed Bosons are considered as a Gauss wave pocket in momentum space, their distribution in artesian coordinate space is still a Gauss type. With time increasing, the wave pocket will spread in coordinate space and will contract in momentum space. However, their waves are still Gauss types. 
With temperature decreasing, the low-energy probability of the Gauss wave pocket increases accordingly since the wave pocket in momentum space is modulated by the factor \(\exp(-\beta E_{k})\) (\(E_{k}=k^{2}/2\)). The narrower wave pocket in momentum space is, the wider wave pocket in coordinate space is [31; 32]. As a result, the quantum mechanical effect is contrary to the classical case where the classical particle spreading becomes slow with temperature decreasing. By comparing the black lines and the red lines in Fig. 3 (b), it was found that the cooling also helps the wave pocket spread in static periodic potential case. The physical picture can also be understood in a similar way. In the static periodic potential, the eigenfunction of a particle is the Bloch wave which can be expressed in terms of Wannier functions. Correspondingly, the energy bands come from the splitting of the degenerate energy levels. A wave pocket in periodic potential can be considered as the superposition of the Bloch waves with the different wave vectors. With time increasing and cooling, the particle probabilities in low-energy bands increase accordingly. The narrower the wave pocket in momentum space is, the wider the wave pocket in space is. As a result, the cooling enhances the spread of the wave pocket. It is interesting to see in Fig. 3 (c) that the peaks of the \(I_{j}\) for \(\tau>0\) are located at the valley tops of the periodic potential, which is in contrast to the case of \(\tau=0\) where the peaks are located at the valley bottoms of the static periodic potential. According to the analysis above, with cooling, the Bosons tend to condense at \(k=0\), which results in the increasing of \(\Delta x\). In the large \(A_{\text{conv}}\) case, the Wannier functions are the atomic orbital (wavefunctions of a unit cell) and the particles are confined in a single unit cell. The stronger confinement is, the larger \(\Delta x\) is. The particles tend to move to the boundary of the unit cell to obtain the large \(\Delta x\) which causes the dip of particle distribution in the center of the unit cell. The quantum behavior can be taken as a probe to detect the exciton degenerateness. The wave pocket diffusion in moving lattices is shown in Fig. 4. The modulation of the moving lattices modifies the shape of the Gauss wave pocket. With the increasing time, the wave pocket follows the lattice and its height decreases obviously due to the finite lifetime of the exciton [Fig. 4 (a) and (b)]. The exciton cooling (\(\tau=0.5\)) in Fig. 4 (b) dramatically increases the distance of the transport comparing the case without the cooling (\(\tau=0\)) in Fig. 4 (a). \(I_{j}\) corresponding to the case in Fig. 4 (a) and (b) is presented in Fig. 4 (c). The average transport distance \(M_{1}\) increases with the cooling parameter \(\tau\) and then tends to saturation as shown in Fig. 4 (d). We can conclude that the cooling is the key factor to the exciton transport by the conveyer. The reason can be understood as following. In the low velocity case of the moving periodic potential, the Bloch theory is assumed to be correct approximately. In the high \(A_{\text{conv}}\) case, the Winner functions can be approximated by the atom-orbital function (particle wave functions of unit cell). 
The overlap of the neighbouring low orbital functions is less than that of the Figure 3: The wave pocket diffusions without the cooling [black lines] with the cooling [red lines] modulated by the static periodic potentials with the different intensities (a) \(A_{\text{conv}}=0\), (b) \(A_{\text{conv}}=0.01\) and (c) \(A_{\text{conv}}=0.4\). Figure 4: The wave pocket diffusions in moving lattices without the cooling (a) and with the cooling (b). \(I_{j}\) of the two cases of (a) and (b) are given in (c). (e) and (g) show \(I_{j}\) for the different moving lattice intensity \(A_{\text{conv}}\) and moving speed \(v_{\text{conv}}\). The cooling speed \(\tau\), the lattice intensity \(A_{\text{conv}}\) and the lattice velocity \(v_{\text{conv}}\) dependencies of the transport distant \(M_{1}\) are shown in (d), (f) and (h) respectively. high orbital functions. The little overlap of the neighbouring low orbital functions leads to the little tunneling between unit cells. With the exciton cooling, the excitons tend to occupy the low energy bands. The occupancy probability of the low orbital increases with the cooling parameters \(\tau\). Further increasing the cooling, most part of the particles are in the low orbital. As a result, the particles follow the moving lattices. So the change of the average transport distance \(M_{1}\) is obvious with \(\tau\) as shown in Fig. 4 (d). The tunneling between unit cells decreases with the increasing of the lattice height \(A_{\text{conv}}\). In the high \(A_{\text{conv}}\) case, the tunneling between unit cells is inconspicuous. So the transport distance increases with the \(A_{\text{conv}}\) and then tends to saturation as shown in Fig. 4 (e). In the low lattice speed case, the average transport distance \(M_{1}\) increases with the lattice speed as shown in Fig. 4 (h) which indicates the exciton transport follows the moving lattices. In the high lattice speed case, the energy band theory breaks down. It is definitely that the tunneling between the unit cells decreases with the increasing of the lattice speed which prevents the particles transport. We therefore argue that the Thouless mechanism is effective to the case of the low velocity and high lattice amplitude of the moving lattices. ### A wave pocket transport with the particle interactions In general, the excitons are considered as the weak repulsive interactions. We study the exciton transport with this kind of interactions in Fig. 5 (a), (b) and (c). The interaction \(g|\Psi_{j}|^{2}\) acts as the effective potential as shown in Fig. 5 (a) with a different intensity \(g\). \(I_{j}\) of the different \(g\) is presented in Fig. 5 (b). It indicates that the interactions have a significant effect on the particle transport. \(M_{1}\) as a function of \(g\) is shown in Fig. 5 (c). Two different effects govern the particle transport for the repulsive interactions. One is the repulsive interactions always favor the particle spreading. The other is the repulsive interactions modify the lattice intensity equivalently, which is detrimental to particle transport. So the particle transport \(M_{1}\) shows non-monotonic change as a function of \(g\). We have phenomenally proposed the two-body attractive and three repulsive interactions to understand the exciton patterns. To study how the complex interactions modify the exciton transport, we show the effective interaction potential \(-g_{1}|\psi_{j}|^{2}+g_{2}|\psi_{j}|^{4}\) in Fig. 5 (d) here \(g_{2}=\epsilon g_{1}^{2}\) with \(\epsilon=8\). 
For the large \(g_{1}\) cases, the excitons are in repulsive interactions as the case in Fig. 5 (a). So \(I_{j}\) in Fig. 5 (e) and \(M_{1}\) in Fig. 5 (f) are similar to the cases of Fig. 5 (b) and \(M_{1}\) in Fig. 5 (c) respectively. In the low exciton density case, \(-g_{1}|\psi_{j}|^{2}+g_{2}|\psi_{j}|^{4}\) is in attractive interaction region. It is interesting to study how the particle attractive interactions \(-g|\psi_{j}|^{2}\) and \(-g_{1}|\psi_{j}|^{2}+g_{2}|\psi_{j}|^{4}\) modify the particle transport. We present the effective attractive interactions in Fig. 6 (a) and (d). To study the effects of the weak interaction case more clearly, a small lattice intensity \(A_{\text{conv}}=0.2\) is adopted. This is in contrast to the repulsive case in Fig. 5 (a) and (d) [\(A_{\text{conv}}=0.8\)]. In the pure attractive case \(-g|\psi_{j}|^{2}\), a wave pocket will collapse with the time evolution as shown in Fig. 6 (b). However, the finite exciton lifetime prevents the pocket from collapsing further. The attractive particle interactions also have two effects on particle transport. One is that the attractive interactions always hinder the particle spreading. The other is that the increase of the effective lattice intensity profits to particle transport. So the particle transport \(M_{1}\) also shows a non-monotonic change as a function of \(g\) shown in Fig. 6 (c). In the small \(g_{1}<4\) case, the weak complex interactions are in the attractive interactions region, which also modify the exciton transport in Fig. 6 (e) and (f). However, \(M_{1}\) shows a monotonic change as a function of \(g_{1}\), which is contrast to the pure attractive case in Fig. 6 (c). ### The disorder effects The data of the exciton transport distance \(M_{1}\) via conveyer as a function of the conveyer amplitude \(A_{\text{conv}}\) are presented in Fig.1 (g). It shows that the exciton cloud extension \(M_{1}\) is not affected by the conveyer motion for a shallow conveyer. However, excitons are moved by the moving lattices at higher conveyer amplitude. Across the transition point, the exciton cloud starts to follow the conveyer, and \(M_{1}\) changes from constant to increasing with \(A_{\text{conv}}\). As we know, the random disorders or impurities exist inevitably in the coupled quan Figure 5: The wave pocket diffusions of the weak repulsive particles. The following parameters are used in the numerical analysis: \(A_{\text{conv}}=0.8\), \(v_{\text{conv}}=2.08\), \(\tau=0.5\) and \(\gamma=0.05\). For the pure repulsive case, the interaction parameter \(g\)-dependence of the effective potential \(g|\psi_{j}|^{2}\), the distribution patterns \(I_{j}\) for different \(g\) and transport distance \(M_{1}\) as a function of \(g\) are given in (a), (b) and (c) respectively. For the complex interaction \(-g_{1}|\psi_{j}|^{2}+g_{2}|\psi_{j}|^{4}\) case, we take \(g_{2}=\epsilon g_{1}^{2}\) and \(\epsilon=8\). The interaction parameter \(g_{1}\)-dependence of the effective potential, the distribution patterns \(I_{j}\) of the different \(g_{1}\) and transport distance \(M_{1}\) as a function of \(g_{1}\) are given in (d), (e) and (f) respectively. tum well structure grown with molecular beam epitaxy. The destructive interference of scattered waves due to the strong disorders leads to the Anderson localizations [33; 34]. As a result, the experimental data is naturally explained as the dynamical localization-delocalization transition. 
The dynamical localization was found in two-band system driven by the DC-AC electric field, in which the Rabi oscillation is quenched under the certain ratio of Bloch frequency and AC frequency [35; 36]. An interesting issue here is whether the AC electron field can also induce the dynamical localization in one-band disorder system. The Anderson localizations are generally studied with indirect method where the random disorders are replaced by the quasiperiodic on-site modulations [37; 38]. The quasiperiodic potential is set to be site dependent, i.e. \[V_{\mathrm{dis},j}=2A_{\mathrm{dis}}\cos(2\pi\alpha j)\] with \(A_{\mathrm{dis}}\) being the strength, \(\alpha\) being an irrational number which is used to characterize the quasiperiodicity. It usually takes the value of the inverse of golden ratio [\(\alpha=(\sqrt{5}-1)/2\)]. The value of the golden ratio is closely related to the Fibonacci number which is defined by \(F_{n}/F_{n+1}\) where the Fibonacci sequence of numbers \(F_{n}\) is defined using the recursive relation with the seed values \(F_{0}=0\), \(F_{1}=1\) and \(F_{n}=F_{n-1}+F_{n-2}\). When \(\alpha\) is replaced by Fibonacci numbers, the quasiperiodic potential \(V_{\mathrm{dis}}\) (\(j+F_{n+1}\)) = \(V_{\mathrm{dis}}\left(j\right)\) is the periodic lattice. To simplify the discussion, the particle interaction, lifetime and cooling are not taken into account. The Hamilton in Eq. (2) can be written as \[H\left(t\right)=\sum_{j}\left[\tilde{t}\left(\left|\psi_{j}\right\rangle \left\langle\psi_{j+1}\right|+\mathrm{h.c.}\right)+\mathcal{V}_{j}\left(t \right)\left|\psi_{j}\right\rangle\left\langle\psi_{j}\right|\right], \tag{6}\] here \(\mathcal{V}_{j}\left(t\right)=V_{\mathrm{conv},j}(t)+V_{\mathrm{dis},j}\), which is a spatial periodic \(\lambda F_{n+1}\) and time periodic \(T\) function \(\mathcal{V}_{j}\left(t\right)=\mathcal{V}_{j}\left(t+T\right)=\mathcal{V}_{j+ \lambda F_{n+1}}\left(t\right)\). The time dependence of the Hamiltonian in Eq. (6) leads to no stationary states in the system. However, the Hamiltonian in Eq. (6) has the time periodicity. We can write the state as \[\left|\psi_{j}\left(t\right)\right\rangle=e^{-i\pi t}\left|c_{j}\left(t\right)\right\rangle\] where \(\varepsilon\) is the quasienergy and \(\left|c_{j}\left(t\right)\right\rangle=\left|c_{j}\left(t+T\right)\right\rangle\) is periodic function which meets the time-dependent equation \[\left[H\left(t\right)-i\partial_{t}\right]\left|c_{j}\left(t\right)\right\rangle =\varepsilon\left|c_{j}\left(t\right)\right\rangle\] according to the Floquet theorem [36; 39]. As \(e^{i\varepsilon\left(t+T\right)}=e^{i\pi t}\), it requires the quasienergy \(\varepsilon_{l}=\frac{2\pi l}{T}=\omega l\) where \(l\) is an integer. To ensure the consistency between the Floquet theorem and the Bloch theorem in the form, the period is assumed to be \(T=Na\) and the quasienergy is confined in the first time Brillouin zone \(\left(-\frac{\pi}{a},\frac{\pi}{a}\right]\), here \(a\) being the time unit. As a result, \(l\) can be taken as \(\left(-\frac{\pi}{2},\frac{N}{2}\right]\). From the time-dependent nonlinear Schrodinger equation, the Hamiltonian in Eq. 
(6) can be transformed to a tight-binding Floquet operator in the second quantization form as \[\mathcal{H}\left(t\right) = \sum_{j}\left[\tilde{t}\left(c_{j}^{\dagger}\left(t\right)c_{j+1} \left(t\right)+\mathrm{h.c.}\right)\right.\] \[+\left.c_{j}^{\dagger}\left(t\right)\left(\mathcal{V}_{j}\left(t \right)-i\partial_{t}\right)c_{j}\left(t\right)\right].\] After the Fourier transform \[c_{j}\left(t\right) = N^{-1/2}\sum_{n=-N/2}^{N/2}c_{j\mu}e^{i\pi nt},\] \[c_{j}^{\dagger}\left(t\right) = N^{-1/2}\sum_{n=-N/2}^{N/2}c_{j\mu}^{\dagger}e^{-i\pi nt},\] and the inner product \(\left\langle\left\langle\mathcal{H}\left(t\right)\right\rangle\right\rangle= \frac{1}{T}\int_{0}^{T}\mathcal{H}\left(t\right)dt=\frac{1}{N}\sum_{n=-N/2}^{N/ 2}\mathcal{H}\left(n\right)\), the time-dependent Floquet operator \(\mathcal{H}\left(t\right)\) becomes [39; 40] \[\mathcal{H} = \sum_{j\mu}\left\{\left(\tilde{t}c_{j\mu}^{\dagger}c_{j+1,n}+A_{ \mathrm{conv}}e^{-2\pi\eta j}c_{j\mu-1}^{\dagger}c_{j\mu}+\mathrm{h.c.} \right)\right. \tag{8}\] \[+\left.\left[A_{\mathrm{dis}}\cos\left(2\pi\alpha j\right)-n\omega \right]c_{j\mu}^{\dagger}c_{j\mu}\right\}.\] Under the above Floquet ansatz, \(\left\langle\left\langle c_{j}^{\dagger}\left(t\right)c_{j}\left(t\right) \right\rangle\right\rangle=\sum_{n}c_{j\mu}^{\dagger}c_{j\mu}\), we can define a time-average inverse of the participation ratio of the normalized eigenstate \(\left|c_{j\mu}\right\rangle_{l}\) corresponding Figure 6: The wave pocket diffusions of the attractive particles. The following parameters are used in the numerical analysis: \(A_{\mathrm{conv}}=0.2\), \(v_{\mathrm{conv}}=2.08\), \(\tau=0.5\) and \(\gamma=0.05\). For the pure attractive case, the interaction parameter \(g\)-dependence of the effective potential \(-g|\psi_{j}|^{2}\), the distribution patterns \(I_{j}\) for different \(g\) and transport distance \(M_{1}\) as a function of \(g\) are given in (a), (b) and (c) respectively. For the complex interaction \(-g_{1}|\psi_{j}|^{2}+g_{2}|\psi_{j}|^{4}\) in the weak attractive interaction region, we take \(g_{2}=\epsilon g_{1}^{2}\) and \(\epsilon=8\). The interaction parameter \(g_{1}\)-dependence of the effective potential, the distribution patterns \(I_{j}\) for the different \(g_{1}\) and transport distance \(M_{1}\) a s a function of \(g_{1}\) are given in (d), (e) and (f) respectively. to the eigenvalue \(\varepsilon_{l}\). \[\mathrm{TMIPR}_{l}=\sum_{j\kappa}\left|\left\langle c_{j,\kappa}^{\dagger}|c_{j, \kappa}\right\rangle\right|_{l}^{4}\] The localization of the whole system can be characterized by the average of \(\mathrm{TMIPR}\) \[\overline{\mathrm{TMIPR}}=\sum_{l}\mathrm{TMIPR}_{l}/L.\] For a delocalizaton phase of the system, \(\overline{\mathrm{TMIPR}}\) is of the order \(1/L\), whereas it approaches \(1\) for a localized phase. We diagonalize the time-mean tight-binding Hamiltonian in Eq. (8) and get the eigenstates \(\left|c_{j,\kappa}\right\rangle_{l}\) to calculate \(\overline{\mathrm{TMIPR}}\) as shown in Fig. 7. In the case of \(A_{\mathrm{conv}}=0\), the Hamiltonian in Eq. (6) is reduced to the AAH model. The \(A_{\mathrm{dis}}\) dependence of \(\overline{\mathrm{TMIPR}}\) and its slope \(\left(\overline{\mathrm{TMIPR}}\right)^{\prime}=d\overline{\left(\mathrm{TMIPR }\right)}/d\left(A_{\mathrm{dis}}\right)\) are shown in Fig. 7 by the black line and the green dashed line. 
A peak is found at \(A_{\mathrm{dis}}=1\) which is consistent with that of the AAH model where the Anderson localization transition occurs at \(A_{\mathrm{dis}}=t\). It indicates the effectiveness of the time-mean method. We further calculate the \(A_{\mathrm{dis}}\) dependence of \(\overline{\mathrm{TMIPR}}\) for different \(A_{\mathrm{conv}}\). As we study above, for a wave pocket driven by the moving lattices, the wave pocket spreads over time, and tends to delocalize. For a wave pocket driven by the random lattice, it tends to localize. As a result, the interplay of the two lattices leads to that \(\overline{\mathrm{TMIPR}}\) is suppressed with the increasement of \(A_{\mathrm{conv}}\). When there is a competition between localization and delocalization in particular, a localization-delocalization transition occurs. It is a surprising that no visible localization-delocalization transition is found in the \(A_{\mathrm{dis}}\) dependence of \(\overline{\mathrm{TMIPR}}\) in Eq. (7) for the case of \(A_{\mathrm{conv}}\geq 0.6\). The localization-delocalization transition of the AAH model [\(A_{\mathrm{conv}}=0\) in Eq. (6)] can also be captured by the asymptotic behavior over long time of the second-order moment of position operator \(\sigma^{2}(t)\) defined by the wave spreading [41; 42; 43], \[\sigma^{2}(t)=\sum_{j}\left\langle c_{j}(t)\right|j^{2}\left|c_{j}(t)\right\rangle\] with the initial localized state \(\left|c_{j}(0)\right\rangle\)= \(\delta_{j,0}\). The asymptotic spreading of \(\sigma^{2}(t)\) is described by the power law, i.e. \(\sigma^{2}(t)\sim t^{2\phi}\) where \(\delta\) is the diffusion exponent. In terms of dynamical behavior of a wave pocket, measured by the exponent \(\delta=\delta(A_{\mathrm{dis}})\), the phase transition is discontinuous since \(\delta(A_{\mathrm{dis}})=1\) for \(A_{\mathrm{dis}}<\tilde{t}\) (ballistic transport), \(\delta(A_{\mathrm{dis}})\simeq 1/2\) at the critical point \(A_{\mathrm{dis}}=\tilde{t}\) (almost diffusive transport), and \(\delta(A_{\mathrm{dis}})=0\) in the localized phase \(A_{\mathrm{dis}}>\tilde{t}\) (dynamical localization). When the spreading velocity further \(v(A_{\mathrm{dis}})\sim\sigma(t)/t\) is defined, the phase transition turns out to be smooth, with \(v(A_{\mathrm{dis}})\) being continuous function of potential amplitude \(A_{\mathrm{dis}}\) and \(v(A_{\mathrm{dis}})=0\) for \(A_{\mathrm{dis}}\geq 0.6\). We still use the above method to characterize the dynamical localization. We solve the time-dependent nonlinear Schrodinger equation corresponding to the Hamiltonian in Eq. (6) with the initial localized state \(\left|c_{j}(0)\right\rangle=\delta_{j,0}\) to obtain \(\left|c_{j}(t)\right\rangle\) and \(v(A_{\mathrm{dis}})\). \(v(A_{\mathrm{dis}})^{\prime}s\) dependence on \(A_{\mathrm{dis}}\) is shown in Fig. 8. For the case of \(A_{\mathrm{dis}}=0\), the Hamiltonian in Eq. (6) is reduced to the AAH model. \(A_{\mathrm{conv}}\)'s dependence of \(v(A_{\mathrm{dis}})\) [black line] shows a transition at \(A_{\mathrm{dis}}=1(\tilde{t})\) obviously. Applying the moving lattices, whether increasing its intensity \(A_{\mathrm{conv}}\) [\(v_{\mathrm{conv}}\) unchanged] in Fig. 8 (a) or increasing its velocity \(v_{\mathrm{conv}}\) [\(A_{\mathrm{conv}}\) unchanged] in Fig. 8 (b), the spreading velocity \(v(A_{\mathrm{dis}})\) is suppressed. The most obvious feature is the disappearance of the delocalization-localization transition. 
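As an aside, the spreading diagnostic used here is straightforward to reproduce. The sketch below is our own schematic illustration (not the authors' code): it evolves a state localized at \(j=0\) under the hopping term, the quasiperiodic potential \(V_{\mathrm{dis},j}\), and the moving lattice of Eq. (3), and then evaluates \(\sigma(t)\) and the spreading velocity \(v\sim\sigma(t)/t\); the lattice size, time step, and parameter values are illustrative only.

```python
import numpy as np
from scipy.linalg import expm

L, t_hop = 233, 1.0                       # L chosen as a Fibonacci number
alpha = (np.sqrt(5) - 1) / 2              # inverse golden ratio
A_dis, A_conv, eta, omega = 1.5, 0.6, 0.2, 0.5
dt, t_max = 0.25, 150.0

j = np.arange(L) - L // 2
V_dis = 2 * A_dis * np.cos(2 * np.pi * alpha * j)
hop = np.diag(np.full(L - 1, t_hop), 1) + np.diag(np.full(L - 1, t_hop), -1)

psi = np.zeros(L, dtype=complex)
psi[L // 2] = 1.0                         # localized initial state |c_j(0)> = delta_{j,0}

t = 0.0
while t < t_max:
    V_conv = A_conv * np.cos(2 * np.pi * eta * j - omega * t)
    psi = expm(-1j * dt * (hop + np.diag(V_dis + V_conv))) @ psi
    t += dt

sigma = np.sqrt(np.sum(j ** 2 * np.abs(psi) ** 2))
print("sigma(t_max) =", sigma, "   v ~ sigma/t =", sigma / t_max)
```

Scanning \(A_{\mathrm{dis}}\) with \(A_{\mathrm{conv}}=0\) in such a sketch should recover the sharp change of \(v\) expected for the AAH model at \(A_{\mathrm{dis}}=\tilde{t}\), while switching the moving lattice on suppresses \(v\), in line with the behavior described here.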
We also calculate the conveyer amplitude \(A_{\mathrm{conv}}\)'s dependence of \(M_{1}\) for the different disorder intensity \(A_{\mathrm{dis}}\) [\(v_{\mathrm{conv}}=2.02\)] in Fig.8 (c) and for the different conveyer velocity \(v_{\mathrm{conv}}\) [\(A_{\mathrm{dis}}=1.6\)] (d). Recalling the behavior of \(\overline{\mathrm{TMIPR}}\) in Fig.7, we therefore argue that the moving lattices break the Anderson localization transition. ### Comparing the theory with the experiments After investigating the various effects on the diffusion of a wave pocket in moving lattices, we are ready to study the experimental data. According to the discussions in subsec. III.3, the plateaus can not be explained by delocalization-localization transition due to the disorders, even though the disorders suppressed the wave-pocket transport. The disorders are firstly neglected in following numerical calculations. In addition, as the coherent excitons have different origins, the wave function of the coherent excitons was assumed with two peaks initially. We solved the time-dependent nonlinear Schrodinger equation (1) with the initial wave function [black line in Fig. 9] to obtain \(I_{j}\) [red line in Fig. 9 (a)]. The appearance of the four peaks is basically consistent with the interference fringe. The conveyer amplitude \(A_{\mathrm{conv}}\)'s dependence of \(M_{1}\) is shown in Fig. 9 (b) for different \(v_{\mathrm{conv}}\). It indicates that the transport distance \(M_{1}\) increases with the conveyer amplitude \(A_{\mathrm{conv}}\) Figure 7: The \(\overline{\mathrm{TMIPR}}\) s dependence on \(A_{\mathrm{dis}}\) for different \(A_{\mathrm{conv}}\). The data have been expanded three and six times for the cases of \(A_{\mathrm{conv}}=0.6\) and \(1.2\). The green dashed line is the slop of black line and is used to show the phase transition occur at \(A_{\mathrm{dis}}=1\) for the AAH model (\(A_{\mathrm{conv}}=0\)). The parameters used in the numerical calculation are \(L=N=65\), \(\alpha=0.2\) and \(\omega=0.5\). However, the exciton transport is less efficient for higher velocity. We also studied the low-velocity case [green line] and found that the exciton transport is also less efficient for lower velocity. It indicates that the effective transport occurs at moderate conveyer speed. Although no disorders are involved in the potential, the plateaus can still be found in the low moving lattice amplitude \(A_{\text{conv}}\). In particular, the plateaus width increases with the conveyer velocity \(v_{\text{conv}}\) as shown in Fig. 9 (b\({}_{1}\)). This is consistent with the experimental data [shown by black points in subfigure of Fig. 1 (g)] where \(A_{\text{turn-on}}\) [defined as \(A_{\text{conv}}\) at the line intersection] increases with \(v_{\text{conv}}\). Applying the disorders as shown in Fig. 9 (c), it is interesting to see that the plateaus are destroyed while increasing the disorder intensity \(A_{\text{dis}}\). We therefore believe that the plateaus cannot be caused by the dynamical localization-delocalization transitions due to the disorder. We infer that the experimental sample is very clean. Their random potential strength is very weak, even if impurities exist. According to the discussion in subsec. III.2, there is no difference between \(g|\psi_{j}|^{2}\) and \(-g_{1}|\psi_{j}|^{2}+g_{2}|\psi_{j}|^{4}\) to depict the exciton interactions except for the case of the very low particle density. We calculated the \(g\) dependence of \(M_{1}\) as shown in Fig. 9 (d). 
The transport distance \(M_{1}\) first increased and then decreased with \(g\). This behavior is similar to the experimental data in Fig. 1 (h), where the exciton transport via the conveyer, \(M_{1}\), first increased and then decreased with the excitation power \(\log(P_{\text{ex}})\). As \(g\propto N\) when the wave function \(\psi_{j}\) is normalized, it indicates that the coherent exciton number \(N\propto\log(P_{\text{ex}})\). We therefore infer that it is less efficient to increase the coherent exciton number by increasing the laser power \(P_{\text{ex}}\). The cooling speed and a long lifetime are still the key factors. ## IV Summary In summary, we have investigated the experimental data of the electrostatic conveyer for excitons to understand their dynamical behaviors. We found that the formation of the exciton patterns originated from the spatially separated hot excitons and cooled excitons. The hot excitons can be taken as classical particles whose transport can be well described by the classical diffusion equation. However, the cooled excitons are coherent bosons, which must be described by the Schrodinger equation. The studies captured the nature of the exciton diffusion in the conveyer, i.e., the excitons are cooling down during the transport. In particular, the excitons are in highly degenerate states far from the laser spot, and their temperature becomes lower and lower. This is why the method of the real-time and imaginary-time evolution of the Schrodinger equation can give a good account of the spatial separation patterns. The discrepancies between the theory and the experiment may come from the existence of the non-degenerate excitons. By calculating the distribution of the escape probability, we obtained the PL stripes and the transport distance consistent with the experiment. We found that the cooling speed is the key factor for the transport distance. By comparing the calculation with the experimental data of the transport distance as a function of density, we found that increasing the excitation power is not an effective approach to increase the number of coherent excitons. Figure 8: \(v\) as a function of the disorder amplitude \(A_{\text{dis}}\) for the largest propagation time \(t=150\), where (a) the speed of the moving lattice is set to \(v_{\text{conv}}=1.48\) and (b) the conveyer amplitude is set to \(A_{\text{conv}}=1.6\), respectively. \(M_{1}\)'s dependence on the conveyer amplitude \(A_{\text{conv}}\) for different disorder intensities \(A_{\text{dis}}\) [\(v_{\text{conv}}=2.02\)] (c) and for different conveyer speeds \(v_{\text{conv}}\) [\(A_{\text{dis}}=1.6\)] (d). The other parameters are \(\gamma=0.05\) and \(\tau=0.5\). Figure 9: (a) Red line: the PL intensity profile \(I_{j}\). Black line: the initial wave packet. Green-dashed line: the initial conveyer. The parameters \(v_{\text{conv}}=1.48\) and \(A_{\text{conv}}=1.6\) are used in the calculations. (b) \(M_{1}\) as a function of the conveyer amplitude \(A_{\text{conv}}\) for the different conveyer velocities \(v_{\text{conv}}=1.48\), \(3.29\), \(4.03\) and \(6.25\), respectively. (b\({}_{1}\)) \(A_{\text{turn-on}}\) versus the conveyer velocity \(v_{\text{conv}}\). (c) \(M_{1}\) as a function of the conveyer amplitude \(A_{\text{conv}}\) for different disorder intensities \(A_{\text{dis}}\). (d) \(M_{1}\) as a function of the interaction parameter \(g\) with the parameters \(v_{\text{conv}}=4.03\), \(A_{\text{conv}}=2.5\). The other parameters in all calculations are \(g=1.2\), \(\tau=0.5\) and \(\gamma=0.05\).
We also found that the disorders fail to induce the dynamical localization-delocalization transition in the moving lattices. As a result, we infer that the sample is essentially impurity-free. According to our study, two priority research directions for realizing controllable exciton transport in moving lattices are apparent. Experimentally, the focus should remain on finding ways to obtain a long lifetime and a fast cooling speed for the excitons. Theoretically, designing achievable moving lattices that realize the dynamical localization-delocalization transition may be a new research direction. After finishing the manuscript, we noted that experiments on the transport and localization of indirect excitons in a van der Waals MoSe\({}_{2}\)/WSe\({}_{2}\) heterostructure were reported [44]. Whether the theory presented in this manuscript can explain those data is an issue that is also worthy of further theoretical study. ###### Acknowledgements. This work was supported by Hebei Provincial Natural Science Foundation of China (Grant Nos. A2010001116, A2012203174, and D2010001150), and National Natural Science Foundation of China (Grant Nos. 10974169, 11174115 and 10934008).
2310.07325
An Adversarial Example for Direct Logit Attribution: Memory Management in gelu-4l
How do language models deal with the limited bandwidth of the residual stream? Prior work has suggested that some attention heads and MLP layers may perform a "memory management" role. That is, clearing residual stream directions set by earlier layers by reading in information and writing out the negative version. In this work, we present concrete evidence for this phenomenon in a 4-layer transformer. We identify several heads in layer 2 that consistently remove the output of a single layer 0 head. We then verify that this erasure causally depends on the original written direction. We further demonstrate that direct logit attribution (DLA) suggests that writing and erasing heads directly contribute to predictions, when in fact their effects cancel out. Then we present adversarial prompts for which this effect is particularly salient. These findings reveal that memory management can make DLA results misleading. Accordingly, we make concrete recommendations for circuit analysis to prevent interpretability illusions.
James Dao, Yeu-Tong Lau, Can Rager, Jett Janiak
2023-10-11T09:14:40Z
http://arxiv.org/abs/2310.07325v3
# An Adversarial Example for Direct Logit Attribution: Memory Management in gelu-4l ###### Abstract How do language models deal with the limited bandwidth of the residual stream? Prior work has suggested that some attention heads and MLP layers may perform a "memory management" role. That is, clearing residual stream directions set by earlier layers by reading in information and writing out the negative version. In this work, we present concrete evidence for this phenomenon in a 4-layer transformer. We identify several heads in layer 2 that consistently remove the output of a single layer 0 head. We then verify that this erasure causally depends on the original written direction. We further demonstrate that direct logit attribution (DLA) suggests that writing and erasing heads directly contribute to predictions, when in fact their effects cancel out. Then we present adversarial prompts for which this effect is particularly salient. These findings reveal that memory management can make DLA results misleading. Accordingly, we make concrete recommendations for circuit analysis to prevent interpretability illusions. ## 1 Introduction In recent years, large language models (LLMs) have made impressive gains in capability (Vaswani et al., 2017; Devlin et al., 2019; OpenAI, 2023; Radford et al., 2019; Brown et al., 2020), often surpassing expectations (Wei et al., 2022). However, these models remain poorly understood, with their successes and failures largely unexplained. Understanding what LLMs learn and how they generate predictions is therefore an increasingly urgent scientific and practical challenge. Mechanistic interpretability (MI) aims to reverse engineer models into human-understandable algorithms or _circuits_(Geiger et al., 2021; Olah, 2022; Wang et al., 2022), attempting to avoid pitfalls such as illusory understanding. With MI, we can identify and fix model errors (Vig et al., 2020; Hernandez et al., 2022; Meng et al., 2023; Hase et al., 2023), steer its outputs (Li et al., 2023), and explain emergent behaviors (Nanda et al., 2023; Barak et al., 2023). The central goals in MI are (a) localization: identifying the specific model components (attention heads, MLP layers) that the circuit is composed of; and (b) explaining the behavior of these components. Activation patching (Zhang and Nanda, 2023) and direct logit attribution (DLA) (Wang et al., 2022; Elhage et al., 2021; nostalgebraist, 2020) are standard tools for localization in language models. Activation patching can be used to measure both direct and indirect effects (Pearl, 2013). This method is computationally heavy as it requires a forward pass of the model for each model component, although it can be approximated (Nanda, 2023; Syed, Rager, and Conmy, 2023). DLA measures only direct effects, but leverages the fact that for a fixed scaling factor, the final logits are the sum of the contributions from each component. This allows us to get the effects for all components in a single forward pass (Lieberum et al., 2023). Previous work has explained various behaviors of attention heads (Olsson et al., 2022; Wang et al., 2022; Heimersheim and Janiak, 2023) and MLPs (Gurnee et al., 2023; Cunningham et al., 2023). However, some of the crucial mechanisms remain elusive. For instance, Elhage et al. (2021) speculated about "memory management", where some components erase the residual stream directions set by earlier components. 
This idea was based on an observation that the residual stream has limited bandwidth (akin to limited memory in computers) and analysis of model weights. However, there was no rigorous study, and no follow-up work has provided evidence for "memory management" as a real mechanism in LLMs. This is relevant to MI because it could provide insight into how information flows through the model. In particular, DLA implicitly assumes continuity of the residual stream, meaning a direction written to the stream stays there until the unembedding layer. However, this assumption would not hold if some components erase residual directions set by earlier ones. The direct effects of a writing component could be canceled out by the effects of an erasing component. Both could then appear highly influential according to DLA, even if their actual functions are not related to the task. This could simply occur because the erased residual direction aligns with the unembedding direction of some tokens. Therefore, if real, memory management could significantly impact what conclusions we can draw from DLA results. In section 4.1, we investigate the gelu-4l model, a small language model with 4 transformer blocks, to search for potential memory management. Specifically, we look for model components whose output is initially written to the residual stream but then disappears from it in later layers, indicating the output may have been erased. In section 4.2, we look for components responsible for the suspected erasure. We use path patching (Goldowsky-Dill et al., 2023) to test whether potential erasure depends causally on the presence of written output in the residual stream (section 4.3). We also examine the correlation between the direct effects of putative writing and erasure (section 5.1). Finally, in section 5.2, we construct adversarial prompts designed to make the direct effect of the writing appear highly significant for the prediction, when, in fact, it has no direct effect. FindingsOur experiments revealed several signatures indicative of memory management in gelu-4l. We identified one writing attention head in layer 0, whose output vector persisted in the residual stream only until layer 2. In layer 2, we found 6 heads that erased residual stream directions set by the writing head. Approximately 85% of this erasure depended causally on the presence of the written vector, suggesting deliberate overwriting. Furthermore, the direct effects of writing and erasure were negatively correlated1, implying that they cancel each other's effects out. Finally, on adversarial prompts, DLA incorrectly assigned to the writing head responsibility for making the top 1 prediction 1.28 to 2.7 times more likely2 than the top 2 prediction. Together, these findings provide evidence that memory management is a real phenomenon in transformers and that it can make DLA results misleading. Footnote 1: \(r=-0.702\) with a best fit slope of \(-0.613\) Footnote 2: logit difference ranged from 0.25 to 1.0 Recommendations for practiceOur results reveal the sensitivity of DLA to memory management. Accordingly, we make two concrete recommendations for circuit analysis: 1. We advocate testing effects across varied prompts, particularly those with different correct next token completions, as averaging over many prompts can cancel out spurious results. 2. We recommend complementing DLA with activation patching to measure both direct and indirect effects of components, as in the case of writing components these effects should cancel out. 
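To make explicit what DLA computes in the experiments below, the following NumPy sketch (illustrative only; the dimensions, data, and token indices are made up, and this is not the code used in this work) shows the core operation: with the final layer-norm scale treated as a fixed constant and centering ignored, the residual stream entering the unembedding is a sum of per-component outputs, so each component's direct contribution to a logit difference is a dot product with the difference of two unembedding columns.

```python
import numpy as np

# Minimal DLA sketch (illustrative; dimensions and data are made up).
d_model, d_vocab, n_components = 512, 50000, 8
rng = np.random.default_rng(0)

W_U = rng.normal(size=(d_model, d_vocab))                      # unembedding matrix
component_outputs = rng.normal(size=(n_components, d_model))   # per-component writes at the final position
ln_scale = np.sqrt((component_outputs.sum(0) ** 2).mean())     # fixed final layer-norm scale (centering ignored)

def dla_logit_diff(out, tok_a, tok_b):
    """Direct contribution of one component to logit(tok_a) - logit(tok_b),
    with the layer-norm scaling factor treated as a constant."""
    direction = W_U[:, tok_a] - W_U[:, tok_b]
    return (out / ln_scale) @ direction

contribs = np.array([dla_logit_diff(o, 123, 456) for o in component_outputs])
# Because the final logits are (up to the fixed scale) a sum over components,
# the per-component contributions add up to the total logit difference.
total = dla_logit_diff(component_outputs.sum(0), 123, 456)
assert np.isclose(contribs.sum(), total)
```

A writing/erasing pair appears in such a decomposition as two large contributions of opposite sign that cancel in the sum, which is exactly the failure mode examined in section 5.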
## 2 Memory management Elhage et al. (2021) observed that residual stream dimensions can be thought of as "memory" or "bandwidth" that layers use to communicate with each other. Moreover, they observe that there are many more "computational dimensions" (such as neurons and attention head result dimensions) than residual stream dimensions, thus we should expect residual stream bandwidth to be in high demand. The authors speculated that some model components perform a kind of "memory management" role, clearing residual stream dimensions set by other components by reading in information and writing out the negative version, to free some of this bandwidth. They reported seeing hints of this, such as MLP neurons having negative cosine similarity between their input and output weights, or attention heads attending mostly to the current position and having negative eigenvalues in their OV circuit. The models or the data were not shared, and there was no follow-up work exploring memory management. While Elhage et al. (2021) considered conditional forms of memory management, we will focus on a simple, unconditional erasure. We characterize it as 3 steps during a forward pass of a model: 1. a _writing component_ adds its output to the residual stream 2. subsequent components read this information to perform their function 3. an _erasing component_ removes the writing component's output from the residual stream, by reading it and writing out a negative version. The freed subspace can then be reused by other components. ## 3 Methods and Notation **Projection ratio.** We introduce the projection ratio to compare the extent to which the direction in the residual stream has been overwritten. This quantity is a normalized scalar projection. Let \(\mathbf{a}\), \(\mathbf{b}\) be elements of the residual stream \(\mathbb{R}^{d_{\text{model}}}\). The _scalar projection_, defined as \[\mathbf{a}_{\|\mathbf{b}}=\mathbf{a}\cdot\frac{\mathbf{b}}{||\mathbf{b}||},\] is the signed length of the projection of \(\mathbf{a}\) onto the direction of \(\mathbf{b}\). The _projection ratio_ \[\text{PR}(\mathbf{a},\mathbf{b}):=\frac{\mathbf{a}_{\|\mathbf{b}}}{||\mathbf{b}||}=\frac{\mathbf{a}\cdot\mathbf{b}}{||\mathbf{b}||^{2}}\] describes the proportion of \(\mathbf{b}\) present in \(\mathbf{a}\). We can use the projection ratio to decompose \(\mathbf{a}\) into components parallel and orthogonal to \(\mathbf{b}\): \[\mathbf{a}=\mathbf{a}_{\|\mathbf{b}}+\mathbf{a}_{\bot\mathbf{b}}=\text{PR}(\mathbf{a},\mathbf{b})\ \mathbf{b}+\mathbf{a}_{\bot\mathbf{b}}\] where \(\mathbf{b}\cdot\mathbf{a}_{\bot\mathbf{b}}=0\). The projection ratio has the same sign as the cosine similarity between \(\mathbf{a}\) and \(\mathbf{b}\). **Attention heads.** We refer to attention head \(k\) in layer \(n\) as \(\text{L}n\text{H}k\). **Residual stream states.** We distinguish the following states of the residual stream in gelu-4l: * before any attention or MLP layers, just token and positional embeddings, * after the attention layer \(n\), * after the MLP layer \(n\), where \(n=0,1,2,3\). For a given state, we denote the residual stream as \(x\), or as \(x_{j}\) to specify the sequence position \(j\). **Attention head output.** We will denote the output of an attention head \(h\) at sequence position \(j\) by \(o_{j}(h)\). The symbol \(A\) will represent an attention pattern, with \(A^{h}_{j,i}\) being an attention weight from destination position \(j\) to source position \(i\). The OV circuit of a head will be associated with a matrix \(W^{h}_{OV}=W^{h}_{V}W^{h}_{O}\).
\(\text{LN}_{n}\) represents the first layer norm operation at layer \(n\), just before the attention block. For an attention head in layer \(n\), its output at sequence position \(j\) is given by the equation \[o_{j}(h)=\Big{(}\sum_{i}\text{LN}_{n}(x_{i})A^{h}_{j,i}\Big{)}W^{h}_{OV}. \tag{1}\] **Attention head V-composition output.** The part of the output of a head \(h_{2}\) in layer \(n\) at position \(j\) that comes from V-composition (Elhage et al., 2021) with a head \(h_{1}\) is given by \[o(h_{1}\overset{V}{\rightarrow}h_{2})=\Big{(}\sum_{i}\text{LN}_{n}\big{(}o_{i}(h_{1})\big{)}A^{h_{2}}_{j,i}\Big{)}W^{h_{2}}_{\text{OV}}. \tag{2}\] ## 4 Evidence for Memory Management In contrast to Elhage et al. (2021), we investigate the activations of the model from a few hundred forward passes, instead of analyzing its weights. ### Identifying a writing component: output of L0H2 is being erased To identify a writing component, or a component that is being erased, we measured how much of each attention head and MLP layer output is present in the subsequent states of the residual stream. The projection ratio between residual stream states and outputs of attention heads in layer 0 is shown in fig. 1(b). We identified that the output of L0H2 is being erased across 300 forward passes of the model. In fig. 1(a), we can track the presence of L0H2's information in the residual stream as it moves through the transformer model: * Initially we see a projection ratio close to 0 at resid_pre0, as L0H2 (which resides between resid_pre0 and resid_mid0) has not written to the residual stream yet. * After L0H2 writes to the residual stream (at resid_mid0), the projection ratio goes to about 1. It is not exactly 1 since the output of other heads in layer 0 is not orthogonal to the output of L0H2. * After resid_mid0, a projection ratio of about 1 suggests that L0H2's information is present in the residual stream until resid_post1 (inclusive). * Attention heads in layer 2 (which sit between resid_post1 and resid_mid2) appear to remove the information that L0H2 originally wrote, resulting in a much smaller projection ratio close to 0. This happens consistently across 300 prompts (randomly sampled from the model's training dataset), but there is some variance across different sequence positions. The function of L0H2 is not totally clear, but it resembles a positional information head (Nanda 2022; Heimersheim and Janiak 2023). ### Identifying an erasing component: layer 2 attention heads are erasing the output of L0H2 In fig. 2, we can see the projection ratio between the outputs of every attention head and MLP (except L0H2) and the output of L0H2. We find that 6 out of 8 attention heads in layer 2, numbered 2-7, have consistently negative projection ratios, implying that they are writing to the residual stream in the direction opposite to L0H2. In aggregate, they are responsible for erasing 90.7%\({}^{3}\) of the output of L0H2. We refer to these heads as erasing heads, and we refer to the erasing part of their function as the erasing component, distributed over all erasing heads. Footnote 3: The distribution of projection ratio between the sum of the erasing heads' output and L0H2 has quantiles: 25% = -1.128, 50%=-0.907, 75%=-0.700 We believe that most of the variance shown by the error bars is due to the erasure being sensitive to position rather than to prompt.
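The measurements in this section amount to a few lines of linear algebra. The sketch below (an illustrative NumPy re-implementation with random placeholder data, not the released notebook) computes the projection ratio between a residual stream state and a cached head output, and the median across a batch, which is the aggregation used in the figures.

```python
import numpy as np

def projection_ratio(a, b):
    """PR(a, b) = (a . b) / ||b||^2: the proportion of b present in a."""
    return np.dot(a, b) / np.dot(b, b)

# Illustrative shapes: a batch of cached activations at one sequence position.
# resid[i] is a residual stream state, head_out[i] is the output of e.g. L0H2.
rng = np.random.default_rng(0)
n_batch, d_model = 300, 512
head_out = rng.normal(size=(n_batch, d_model))
resid = head_out + rng.normal(size=(n_batch, d_model))   # stream still containing the head's write

pr = np.array([projection_ratio(resid[i], head_out[i]) for i in range(n_batch)])
print("median PR:", np.median(pr))   # ~1 while the write is still present, ~0 after erasure
```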
### Verifying causality: erasure depends on writing To check for a causal relationship between the erasure and the writing, we use path patching with zero ablation between L0H2 and the value input to the erasing heads. This is equivalent to removing the output of erasing component that depends on V-composition with L0H2 \[o(\text{L0H2}\overset{V}{\rightarrow}\text{L2H[2-7]})=\sum_{k=2}^{7}o(\text{L0H2 }\overset{V}{\rightarrow}\text{L2H}k) \tag{3}\] from resid_mid2. We repeat experiments from sections 4.1 and 4.2 with and without the patching, and present the results in figs. 2(a) and 2(b) respectively. Figure 1: Projection ratios between residual stream states and outputs of attention heads in layer 0, aggregated using median across batch (\(n_{b}=300\)) and position (\(n_{ctx}=1024\)). Figure 2(a) shows that the projection ratio of residual stream onto L0H2 remains high after the attention block in layer 2 in the patched run (0.91 in patched, 0.12 in clean), indicating that around 85% of the erasure in layer 2 is indeed input-dependent. We note that the projection ratio goes down after layer 2, suggesting that components in subsequent layers are involved it the erasure as well. Figure 2(b) compares patched to clean runs for each head in layer 2. The erasure seems to disappear when the erasing heads loose access to the information written by L0H2. ## 5 Implications for DLA As mentioned in section 1, DLA applies the final layer norm and unembedding directly to the component output. To get a scalar value out of DLA, it is often (Wang et al., 2022; Heimersheim and Janiak, 2023; Lieberum et al., 2023) combined with logit difference between the correct next token prediction and an "obvious" wrong prediction. The direct effect on logit difference -- according to DLA -- will be high if the component output happens to be aligned with the unembedding direction of aforementioned tokens. In the gelu-4l model however, the output of L0H2 is largely removed from the residual stream after the attention block in layer 2, before it can reach the unembedding, as we have seen in fig. 1. Therefore significant logit differences for L0H2 will not correspond to a significant direct effect. In section 5.1 we investigate the correlation between direct effects of writing and erasure on a subset of the training dataset, showing that their effects mostly cancel out. In section 5.2, we present four simple adversarial prompts where DLA reports high logit difference for L0H2, despite no direct effect. Figure 3: Projection ratios in clean and patched runs, aggregated using median across batch (\(n_{b}=300\)) and position (\(n_{ctx}=1024\)), with error bars at the 25th and 75th quantiles. Figure 2: Projection ratios between the outputs of every attention head and MLP (except L0H2) and the output of L0H2, aggregated using median across prompts (\(n_{b}=300\)) and position (\(n_{ctx}=1024\)), with error bars at the 25th and 75th quantiles. ### Correlation between DLA of writing and erasure We collect 30 random samples from the model's training dataset, and we consider top 2 next token predictions at every sequence position (\(n_{ctx}=1024\)). Specifically, we measure the effect on the logit difference between the top 2 predictions using DLA. To measure the effect of writing, we apply DLA directly to the output of L0H2. To measure the effect of erasure, we have to separate it from other effects of the erasing heads. 
Since about 85% of the erasure happens via a path from L0H2 to the value input of the erasing heads, we apply DLA to \(o\)(L0H2 \(\overset{V}{\rightarrow}\) L2H[2-7]) (see eq. (3)). As we can see in fig. 4, the effects of writing and erasure are negatively correlated (\(r=-0.702\), best fit slope \(=-0.613\))4. In other words, writing and erasure contributions to the final logits mostly cancel out. Footnote 4: a perfect erasure would yield r = -1 and a best fit slope \(=-1\) ### Adversarial examples of high DLA values without direct effect We selected four tokens for which the unembedding direction aligns with the output of L0H2: "" bottom","" State", "", and "" Church". Then, we constructed four prompts such that the model predicts one of the tokens with highest probability. 1. prompt: "It's in the cupboard, either on the top or on the" top predictions: "" bottom","" top" (logit difference 1.07) 2. prompt: "I went to university at Michigan" top predictions: "" State","" University" (logit difference 1.89) 3. prompt: "class MyClass:\n|def" top predictions: "" "" "" " get"(logit difference 3.02) 4. prompt: "The church I go to is the Seventh-day Adventist" top predictions: "" Church","" church" (logit difference 0.94) We find it implausible that a component can have direct effect on model predictions in prompts 1-4 based purely on positional information. However, patching the input to L0H2 with unrelated text does not affect the DLA-measured logit difference, as shown in fig. 4(a). Therefore, we conclude that L0H2 does not directly contribute to the model predictions in prompts 1-4, despite significant DLA values. The invariance of L0H2's DLA to input tokens is unusual. We reran the patching experiment for four other attention heads that, according to DLA, have the highest direct effect on logit difference for the respective prompt in fig. 4(b). In contrast to L0H2, the results for these heads are severely affected by the patch. Figure 4: Correlation between the effects of writing and erasure, according to DLA. ## 6 Conclusion In this paper, we presented a concrete example of memory management in transformers. Furthermore, we have shown that memory management can make DLA results misleading. We propose several directions for future work: 1. Examine how activation patching is affected by memory management. 2. Better understand the function of L0H2. 3. Look for memory management in other models. In particular, Heimersheim and Janiak (2023) and Wang et al. (2022) report high DLA values for early heads, which would be interesting to investigate. 4. Understand how the freed subspace is used. The code to reproduce the results can be accessed here: [https://colab.research.google.com/drive/16Kp-4iH330a1dF6F0ntPK7kfNuxqkvfZ](https://colab.research.google.com/drive/16Kp-4iH330a1dF6F0ntPK7kfNuxqkvfZ) ## 7 Acknowledgments and Contributions Our research benefited from discussions, feedback, and support from many people, including Chris Mathwin, Evan Hockings, Neel Nanda, Lucia Quirke, Jacek Karwowski, Callum McDougall, Joseph Bloom, Alan Cooney, Arthur Conmy, Matthias Dellago, Eric Purdy and Stefan Heimersheim. Some of this work was produced as part of ARENA 2.0 and the SERI ML Alignment Theory Scholars Program - Spring 2023 Cohort. All authors contributed equally to this article. Jett proposed and led the project, while James, Can, and Yeu-Tong ran most of the experiments. The first paragraph of section 1 was adapted from Zhang and Nanda (2023) and "Is This the Subspace You Are Looking for?" (2023). 
Figure 5: Logit difference of top 2 predictions on adversarial examples, according to DLA. Patched refers to replacing the input to a head with one from a run on unrelated text with the same number of tokens (300 examples). The orange bars show median with error bars at the 25th and 75th quantiles.
2308.05778
Current percolation model for the special resistivity behavior observed in Cu-doped Apatite
Since the initial report of the potential occurrence of room-temperature superconductivity under normal pressure [arXiv: 2307.12008], there has been significant interest in the field of condensed matter physics regarding Cu-doped Apatite (Pb10-xCux(PO4)6O). In this study, we performed temperature-dependent resistivity measurements on the synthesized Pb10-xCux(PO4)6O samples. The structure of the sample was confirmed to match the reference literature through X-ray diffraction analysis. Remarkably, we observed four distinct types of resistivity behaviors within samples from the same pellet: (1) A semiconductor-like behavior characterized by a decrease in resistivity as the temperature is lowered. (2) A gradual reduction in resistivity, reaching an exceptionally small value that falls below the resolution limits of our measurement equipment. (3) An abrupt drop in resistivity to a low value at ~ 250 K. (4) An almost linear reduction in resistivity exhibiting a transition at approximately 7 K (possibly associated with Pb). Following a thorough compositional analysis, we proposed a current percolation model, based on the formation of a Cu/Pb current channel, to elucidate the observed special resistivity behaviors. It is important to note that the Meissner effect was not observed in our magnetization measurements. Consequently, we reached the conclusion that the presence of superconductivity in Cu-doped Apatite has yet to be substantiated.
Qiang Hou, Wei Wei, Xin Zhou, Xinyue Wang, Yue Sun, ZhiXiang Shi
2023-08-10T13:37:33Z
http://arxiv.org/abs/2308.05778v1
# Current percolation model for the special resistivity behavior observed in Cu-doped apatite ###### Abstract Since the initial report of the potential occurrence of room-temperature superconductivity under normal pressure [arXiv: 2307.12008], there has been significant interest in the field of condensed matter physics regarding Cu-doped Apatite (Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O). In this study, we performed temperature-dependent resistivity measurements on the synthesized Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O samples. The structure of the sample was confirmed to match the reference literature through X-ray diffraction analysis. Remarkably, we observed four distinct types of resistivity behaviors within samples from the same pellet: (1) A semiconductor-like behavior characterized by a decrease in resistivity as temperature is lowered. (2) A gradual reduction in resistivity, reaching an exceptionally small value that falls below the resolution limits of our measurement equipment. (3) An abrupt drop in resistivity to a low value at \(\sim\) 250 K. (4) An almost linear reduction in resistivity exhibiting a transition at approximately 7 K (possibly associated with Pb). Following a thorough compositional analysis, we proposed a current percolation model, based on the formation of a Cu/Pb current channel, to elucidate the observed special resistivity behaviors. It is important to note that the Meissner effect was not observed in our magnetization measurements. Consequently, we reached the conclusion that the presence of superconductivity in Cu-doped Apatite has yet to be substantiated. ## Introduction Room temperature superconductivity, a remarkable scientific feat, is set to enable materials to conduct electricity with zero resistance at ordinary temperatures. Such a breakthrough would have immense potential for enhancing energy efficiency and driving technological innovation across various industries, including electronics, transportation, and beyond [1, 2, 3]. Recently, Lee et al. announced the discovery of a room-temperature superconductor under atmospheric pressure, Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O with \(0.9<x<1.1\) [4, 5]. They substantiated the presence of room temperature superconductivity in Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O through three lines of evidence: (1) a sudden resistivity drop in electrical measurements to near zero, (2) the observation of diamagnetic signals and magnetic levitation phenomena in magnetic measurements, and (3) the sudden voltage jump in measurements of applied current versus voltage.
2308.02017
A Compact Model of Threshold Switching Devices for Efficient Circuit Simulations
In this paper, we present a new compact model of threshold switching devices which is suitable for efficient circuit-level simulations. First, a macro model, based on a compact transistor based circuit, was implemented in LTSPICE. Then, a descriptive model was extracted and implemented in MATLAB, which is based on the macro model. This macro model was extended to develop a physical model that describes the processes that occur during the threshold switching. The physical model derived comprises a delay structure with few electrical components adjacent to the second junction. The delay model incorporates an internal state variable, which is crucial to transform the descriptive model into a compact model and to parameterize it in terms of electrical parameters that represent the component's behavior. Finally, we applied our model by fitting measured i-v data of an OTS device manufactured by Western Digital Research.
Mohamad Moner Al Chawa, Daniel Bedau, Ahmet S. Demirkol, James W. Reiner, Derek A. Stewart, Michael K. Grobis, Ronald Tetzlaff
2023-08-03T20:19:46Z
http://arxiv.org/abs/2308.02017v1
# A Compact Model of Threshold Switching Devices for Efficient Circuit Simulations ###### Abstract In this paper, we present a new compact model of threshold switching devices which is suitable for efficient circuit-level simulations. First, a macro model, based on a compact transistor-based circuit, was implemented in LTSPICE. Then, a descriptive model was extracted and implemented in MATLAB, which is based on the macro model. This macro model was extended to develop a physical model that describes the processes that occur during the threshold switching. The physical model derived comprises a delay structure with a few electrical components adjacent to the second junction. The delay model incorporates an internal state variable, which is crucial to transform the descriptive model into a compact model and to parameterize it in terms of electrical parameters that represent the component's behavior. Finally, we applied our model by fitting measured \(i\)-\(v\) data of an OTS device manufactured by Western Digital Research. OTS, switching, compact model, circuit simulation. ## I Introduction The ovonic threshold switch (OTS) is a promising two-terminal nanodevice based on chalcogenide alloys. OTS devices typically demonstrate a current-controlled negative differential resistance (NDR) characteristic on their DC \(i\)-\(v\) loci, and therefore, they can exhibit very sharp and fast transitions between on and off states, which makes them attractive for many applications, including as selectors for memory cells, as fast switches, or as devices for neuromorphic computing [1, 2, 3]. While threshold switching in chalcogenide glasses has been known for a long time, the exact mechanism is still unknown. Although several models capture the phenomenological behavior of OTS cells very well [4], they often tend to be not very suitable for practical circuit design applications, which need an accurate, but fast, model of the device expressed in a way that naturally fits into an analog simulation pipeline. There is a trade-off between the physical complexity that describes the device and the accuracy of the model representing it. The ability to implement complicated models in circuit simulators represents a challenge. These challenges have been historically addressed in modelling by using the classical current-voltage description [5, 6], or Chua's original proposal in flux and charge [7, 8, 9, 10, 11]. Both descriptions are shown to be equivalent [12], and, following the usual convention, we will use current and voltage in this paper. Typical approaches use transport models [13, 14] that model the charge carrier dynamics using differential equations to simulate the behavior of an OTS device. These physics-based models are complex, do not run natively in circuit simulators, and are not well suited as compact models for circuit simulations. An example of a low-computational-cost, non-physical PCM model has been demonstrated in [15] as an alternative. In this contribution, a 2T-1R macromodel for threshold switching devices, such as an OTS, has been implemented in LTSPICE. In addition, a mathematical description of the macromodel was made, including the design of a new circuit based on the 2T-1R circuit. Later, an internal state variable was extracted to convert the descriptive model into a compact model for threshold switching devices. The proposed model is an accurate tool for circuit designers since it has been implemented in a circuit simulator, LTSPICE.
For example, OTS devices can be used as selectors for memory cell cross points. Finally, the compact model was applied to fit \(i\)-\(v\) measurement data obtained from an OTS device manufactured by Western Digital Research. ## II Model Description and Implementation The threshold switching device has three states: off, on, and snapback [16, 17]. Let us consider a resistor for each state, \(R_{\text{off}}\), \(R_{\text{on}}\), \(-R_{\text{bias}}\), where the values of these resistors satisfy \[R_{\text{off}}>>R_{\text{bias}}>>R_{\text{on}}. \tag{1}\] Fig. 1 shows a conceptual block diagram of these three states. Figure 1: Conceptual block diagram of the threshold switching device: \(R_{off}>>R_{bias}>>R_{on}\). (a) Off state (\(S_{1}=off\) and \(S_{2}=off\)); (b) Snapback state (\(S_{1}=on\) and \(S_{2}=off\)); (c) On state (\(S_{1}=on\) and \(S_{2}=on\)); (d) \(i\)–\(v\) for (a-c). In the ideal case, we have \(R_{\text{off}}=\infty\) and \(R_{\text{on}}=0\), which would result in slope values \(0\) and \(\infty\) in the off and the on states, respectively. For the off state, both switches are off (\(S_{1}=off\) and \(S_{2}=off\)) and the current passes through the \(R_{\text{off}}-R_{\text{bias}}\) branch. The on state is implemented when both switches are on (\(S_{1}=on\) and \(S_{2}=on\)) and the equivalent resistor is \(R_{\text{off}}//R_{\text{on}}\), which is approximated by \(R_{\text{on}}\). The snapback state occurs when (\(S_{1}=on\) and \(S_{2}=off\)) and the equivalent resistor is \((R_{\text{off}}//R_{\text{on}})-R_{\text{bias}}\), which is approximated by \(-R_{\text{bias}}\). It should be noticed that the snapback state has a negative slope originating from the negative resistance state represented by \(-R_{\text{bias}}\). Our goal is to translate these findings into a mathematical description that can be implemented in circuit simulators in a cost-efficient way. This model consists of the topological connectivity, and compact models, of the elements that form the circuit, including extraneous components like parasitic resistance, capacitance, and inductance. The mathematical model of the circuit is given by a system of nonlinear, coupled, differential algebraic equations (DAEs) as follows \[F(x,\dot{x},t)=0 \tag{2}\] where \(F(\cdot)\) represents Kirchhoff's current law (KCL) and \(x\) is the state variable (or voltage drop). The dynamics of these state variables can be defined as follows [18] \[i=f(v)+\frac{d}{dt}q(v) \tag{3}\] where \(f\) represents the currents flowing in the static branches of the equivalent network and \(dq/dt\) represents the (capacitive) current flowing in the time-dependent branches of the equivalent network. This formalism from [18] has already been used to model an electrostatic discharge (ESD) snapback in [19, 20]. It is worth drawing attention to the fact that OTS behavior is similar to that of the silicon controlled rectifier (SCR), however the SCR is a three-terminal device [21]. A new two-terminal SCR device model has been described in [22]. ### _Macro Model_ Based on the conceptual block diagram in Fig. 1, the threshold switching device can be represented by two switches. In order to model the OTS and to implement the model in a circuit simulator, considering (3), in a compact and computationally efficient way, an equivalent circuit has been proposed using only two bipolar junction transistors (BJTs) and a self-biased resistor to describe the OTS characteristics [23, 24].
The proposed 2T1R circuit exhibits NDR characteristics, which makes it suitable for modelling locally active memristors or threshold switching devices. Thus, a two-terminal device with an NDR has been implemented in LTSPICE as shown in Fig. 2. The first transistor is a PNP type; the voltage drop across the base-collector junction is described as follows \[v_{BC_{1}}=v_{2}=-v_{R} \tag{4}\] and the voltage drop across the emitter-base junction as \[v_{\text{EB}_{1}}=v_{1} \tag{5}\] The second transistor is an NPN type; the voltage drop across the base-collector junction is defined as follows \[v_{\text{BC}_{2}}=-v_{2}=v_{R} \tag{6}\] and the voltage drop across the base-emitter junction as \[v_{\text{BE}_{2}}=v_{3} \tag{7}\] It should be mentioned that the pair of corresponding NPN and PNP transistors are complementary transistors with near identical characteristics to each other. This results in \[v_{1}=v_{3} \tag{8}\] The voltage drop across the ideal OTS device (\(Rs=0\) and \(Rp=\infty\)) can be written as \[v=v_{1}+v_{2}+v_{3} \tag{9}\] thus, \[v-v_{2}=v_{1}+v_{3} \tag{10}\] Using (6) and (8), the last equation can be rewritten as \[v+v_{R}=2v_{1} \tag{11}\] or \[\frac{v+v_{R}}{2}=v_{3}=v_{1} \tag{12}\] It should be noticed that, for the NPN transistor, \[v_{\text{BE}_{2}}=\frac{v+v_{R}}{2} \tag{13}\] On the other hand, for the PNP transistor, \[v_{\text{EB}_{1}}=\frac{v+v_{R}}{2}=-v_{\text{BE}_{1}} \tag{14}\] The phenomenon of an OTS device can be modeled mainly using the BJT. From the transistor point of view, a simple model like the Ebers-Moll one fits our requirements for the circuit in Fig. 2 [25]. Figure 2: Threshold switching (OTS) device equivalent circuit (\(R_{S}=0\) and \(R_{\text{P}}=\infty\) for ideal device). The Ebers-Moll equation used to describe the emitter current of the PNP transistor in any operating region is given as follows \[i_{\text{E}_{1}}=I_{s}\cdot[e^{v_{\text{EB}_{1}}/v_{T}}-e^{-v_{\text{BC}_{1}}/v_{T}}+\frac{1}{\beta_{F}}\cdot(e^{v_{\text{EB}_{1}}/v_{T}}-1)] \tag{15}\] where \(v_{\text{T}}\) is the thermal voltage \(kT/q\) (approximately \(26mV\) at \(300K\), i.e., room temperature), \(I_{\text{S}}\) is the reverse saturation current of the base-emitter diode (on the order of \(10^{-15}\) to \(10^{-12}\) amperes), and \(\beta_{F}\) is the forward common emitter current gain (20 to 500). Substituting (4) and (14) into the last equation gives \[i_{\text{E}_{1}}=I_{s}\cdot[e^{(v+v_{R})/2v_{T}}(1+\frac{1}{\beta_{F}})-e^{v_{R}/v_{T}}-\frac{1}{\beta_{F}}] \tag{16}\] For ideal OTS devices (\(Rs=0\) and \(Rp=\infty\)) the total DC current flowing through the junctions (3) can be expressed as follows \[f(v)=i_{\text{E}_{1}}\ . \tag{17}\] As mentioned above, the emitter current of the PNP transistor is equal to the emitter current of the NPN transistor (\(i_{\text{E}_{2}}=i_{\text{E}_{1}}\)).
The capacitive current flowing through the OTS device (\(J_{1}\), \(J_{2}\), and \(J_{3}\) in Fig. 5) can be modelled as follows \[\frac{d}{dt}q(v)=\frac{d}{dt}q(v_{1})+\frac{d}{dt}q(v_{2})+\frac{d}{dt}q(v_{3}) \tag{18}\] The first term in the last equation is related to the charge of the emitter-base junction of the PNP transistor (\(J_{1}\)) and can be expressed as follows \[\frac{d}{dt}q(v_{1})=\frac{d[C_{1}\cdot v_{1}]}{dt} \tag{19}\] Assuming that the capacitance \(C_{1}\) consists of a diffusion capacitance \(C_{d_{1}}\) and a junction capacitance \(C_{j_{1}}\), the right hand side of (19) can be written as \[\frac{d}{dt}q(v_{1})=\frac{d(C_{d_{1}}+C_{j_{1}})}{dt}\cdot(\frac{v+v_{2}}{2})+(C_{d_{1}}+C_{j_{1}})\cdot\frac{d(\frac{v+v_{2}}{2})}{dt} \tag{20}\] where \[v_{1}=\frac{v+v_{2}}{2} \tag{21}\] and \[v_{1}=v_{3}. \tag{22}\] Figure 3: (a) Definition of parameters on the \(i\)–\(v\) curve of a typical OTS device. (b-c) Implemented model in MATLAB extracted from the macro model, (b) is plotted using logarithmic scale; (d) Macro model implemented in LTSPICE; the \(i\)–\(v\) curve has been obtained using a current sweep (\(R_{b}=5k\Omega\), \(R_{p}=100k\Omega\), and \(R_{s}=200\Omega\)). The third term in (18) is related to the charge of the emitter-base junction of the NPN transistor (\(J_{3}\)). Since the pair of transistors are complementary, this term can be modelled as \[\frac{d}{dt}q(v_{3})=\frac{d}{dt}q(v_{1}) \tag{23}\] These two terms are negligible in comparison with the one related to the base-collector junctions of both transistors (\(J_{2}\)). The main capacitive current flowing through the OTS device can be found by applying KCL as follows \[\frac{d}{dt}q(v_{2})=i_{C_{1}}-i_{B_{2}}-i_{Rb}-[\frac{d}{dt}q(v_{1})+\frac{d}{dt}q(v_{3})] \tag{24}\] The Ebers-Moll equation used to describe the collector current of the PNP transistor in any operating region is given as follows \[i_{C_{1}}=I_{s}\cdot[e^{v_{\text{EB}_{1}}/v_{T}}-e^{-v_{\text{BC}_{1}}/v_{T}}-\frac{1}{\beta_{R}}\cdot(e^{-v_{\text{BC}_{1}}/v_{T}}-1)] \tag{25}\] where \(\beta_{R}\) is the reverse common emitter current gain (0 to 20). The Ebers-Moll equation used to describe the base current of the NPN transistor in any operating region is given as follows \[i_{B_{2}}=I_{s}\cdot[\frac{1}{\beta_{F}}\cdot(e^{v_{\text{BE}_{2}}/v_{T}}-1)+\frac{1}{\beta_{R}}\cdot(e^{v_{\text{BC}_{2}}/v_{T}}-1)] \tag{26}\] Figure 4: Model Parameter Variation. (a) Adding a resistor in series, \(R_{s}=100\Omega\); (b) Thermal voltage; (c) Saturation current; (d) Snapback resistor. The insets show the results using logarithmic scale for the current. Figure 5: OTS device cell circuit. Using the Ebers-Moll equations mentioned above and substituting into (24) yields \[\frac{d}{dt}q(v_{2})=I_{s}\cdot[e^{(v+v_{R})/2v_{T}}(1-\frac{1}{\beta_{F}})-e^{v_{R}/v_{T}}\cdot(1+\frac{2}{\beta_{R}})+\frac{2}{\beta_{R}}+\frac{1}{\beta_{F}}]-\frac{v_{R}}{R_{b}}-[\frac{d}{dt}q(v_{1})+\frac{d}{dt}q(v_{3})] \tag{27}\] Finally, the current flow through the OTS device in (3) can be found by adding (16) and (27) as follows \[i=2\cdot I_{S}\cdot[e^{\frac{v+v_{R}}{2v_{T}}}-e^{v_{R}/v_{T}}\cdot(1+\frac{1}{\beta_{R}})+\frac{1}{\beta_{R}}]\ -\frac{v_{R}}{R_{b}} \tag{28}\] The macro model in Fig. 2 has been implemented in LTSPICE. Also, the derived equation (28) from the macro model has been implemented in MATLAB.
Fig. 3(a) shows the \(i\)-\(v\) curve for a physical OTS device fabricated by Western Digital Research, where \(v_{th}\) is the turn-on voltage, \(I_{th}\) is the current at the turn-on event, \(v_{offset}\) is the voltage across the OTS device when on, and \(I_{hold}\) is the smallest current required to maintain the on state. Fig. 3(b) and Fig. 3(c) show the \(i\)-\(v\) curve of the implemented model in MATLAB extracted from the macro model, while Fig. 3(d) demonstrates the \(i\)-\(v\) curve of the macro model implemented in LTSPICE. Figure 6: Delay Model. (a) Internal state model extracted from \(J_{2}\) in Fig. 5: \(C_{2}=1nF\), \(R_{b}=1M\Omega\) and \(I_{state}=1\mu A\); (b) variable state \(\zeta(t)\) (which is treated as a voltage by the simulator) vs. time (a.u.). The behavior of our model in (28) has been inspected for different parameters, as can be seen in Fig. 4. For example, Fig. 4(a) shows \(R_{ON}\) when considering a series resistor \(R_{s}=100\Omega\). On the other hand, Fig. 4(b) demonstrates the \(i\)-\(v\) curves for thermal voltage variations. However, for all other simulations, the thermal voltage \(v_{\text{T}}\) has been set to \(26mV\) at \(300K\) room temperature. Moreover, the behavior of the model has been tested for variations of the reverse saturation current, as seen in Fig. 4(c). The assumption for the other cases is \(I_{S}=10^{-14}A\). Furthermore, the behaviour of our model has been simulated for different bias (snapback) resistor values and plotted in Fig. 4(d). For the rest of the simulation results we assume \(R_{b}=5k\Omega\), \(R_{p}=100k\Omega\), and \(R_{s}=200\Omega\). It is worth drawing the reader's attention to the fact that the snapback makes the \(i\)-\(v\) curve multi-valued, so we keep track of the branch using the snapback state variable. ### _Physical Model_ The OTS device can be represented by three junctions, \(J_{1}\), \(J_{2}\), and \(J_{3}\), based on the macro model in Fig. 2. The DC junction currents \(f(v)\) in (3) are obtained by superposition of all currents. Adopting the Ebers-Moll model with additional junction capacitances for the BJTs, and extending this model to the transient behavior, we get the OTS equivalent circuit in Fig. 5. So, each junction is represented by a diode, a capacitance, and a current source in parallel. The diode is equivalent to the DC characteristic of the p-n junction current as follows \[I_{J}=I_{0}\cdot(e^{v_{J}/V_{T}}-1) \tag{29}\] where, \[I_{0}=\frac{I_{s}}{\alpha} \tag{30}\] and \[\alpha=\frac{\beta}{1+\beta} \tag{31}\] The current source is defined as \[\alpha\cdot I_{J} \tag{32}\] The currents flowing in the static branch can be found by applying KCL as follows \[f(v)=I_{1}-\alpha_{R}\cdot I_{2} \tag{33}\] Figure 8: \(i\)–\(v\) characteristic curves with different sets of \(K\) using (56). (a-b) \(K<1\); (c-d) \(K>1\). From (29) and (32) the last equation can be written as \[f(v)=I_{0_{1}}\cdot(e^{v_{1}/V_{T}}-1)-\alpha_{R}\cdot I_{0_{2}}\cdot(e^{v_{2}/V_{T}}-1) \tag{34}\] Replacing (30) in (34) yields \[f(v)=\frac{I_{s}}{\alpha_{F}}\cdot(e^{v_{1}/V_{T}}-1)-I_{s}\cdot(e^{v_{2}/V_{T}}-1) \tag{35}\] hence, \[f(v)=I_{s}\cdot[e^{v_{1}/v_{T}}-e^{-v_{2}/v_{T}}+\frac{1}{\beta_{F}}\cdot e^{v_{1}/v_{T}}] \tag{36}\] or \[f(v)=I_{s}\cdot[e^{v_{1}/v_{T}}(1+\frac{1}{\beta_{F}})-e^{-v_{2}/v_{T}}-\frac{1}{\beta_{F}}] \tag{37}\] The capacitance \(C_{total}\) in (3) consists of a junction capacitance and a diffusion capacitance for each P-N junction (depicted in Fig. 5) as follows
\[C_{total}=C_{j}+C_{d}. \tag{38}\] Both capacitances are voltage dependent. We treat these junctions as Schottky junctions with the capacitance defined as follows [19] \[C_{j}=\frac{C_{j0}}{(1-v_{c}/v_{j})^{M}}, \tag{39}\] where \(C_{j0}\) is the zero-bias capacitance, \(v_{j}\) is the built-in potential, and \(M\) is the grading coefficient. The diffusion capacitance represents the minority carrier charge. The different diffusion capacitances must be summed as follows \[C_{d}\approx\Sigma\tau_{j}\frac{I_{j}}{V_{T}} \tag{40}\] Figure 9: (a) Derivative of the scaled internal state variable (\(K=0.7\)); (b) Charge Model for two values of capacitance (\(1nF\) and \(1pF\)). Figure 10: \(i\)-\(v\) curve, red dots are experimental measurements (symbols) and modeled using (56) (blue line) for an OTS device fabricated by Western Digital Research. The current flowing in the time-dependent branch related to \(v_{2}\) and \(J_{2}\) is the main one, and can be expressed as \[\frac{dq(v)}{dt}\approx\frac{dq(v_{2})}{dt} \tag{41}\] In order to find this current, we have applied KCL at \(J_{2}\) as follows \[\frac{dq(v_{2})}{dt}=\alpha_{F}\cdot I_{1}-I_{2}-I_{3}+\alpha_{R}\cdot I_{2}+\alpha_{F}\cdot I_{3}-I_{2}-\frac{v_{R}}{R_{b}} \tag{42}\] Replacing (29) and (32) in the last equation gives \[\frac{dq(v_{2})}{dt}=\alpha_{F}\cdot I_{0_{1}}\cdot(e^{v_{1}/V_{T}}-1)-2\cdot I_{0_{2}}\cdot(e^{v_{2}/V_{T}}-1)-I_{0_{3}}\cdot(e^{v_{3}/V_{T}}-1)+\alpha_{R}\cdot I_{0_{2}}\cdot(e^{v_{2}/V_{T}}-1)+\alpha_{F}\cdot I_{0_{3}}\cdot(e^{v_{3}/V_{T}}-1)-\frac{v_{R}}{R_{b}} \tag{43}\] where the \(\alpha\) are the scaling factors of the current sources in Fig. 5. Using (30) results in \[\frac{dq(v_{2})}{dt}=I_{s}\cdot(e^{v_{1}/V_{T}}-1)-2\cdot\frac{I_{s}}{\alpha_{R}}\cdot(e^{v_{2}/V_{T}}-1)-\frac{I_{s}}{\alpha_{F}}\cdot(e^{v_{3}/V_{T}}-1)+I_{s}\cdot(e^{v_{2}/V_{T}}-1)+I_{s}\cdot(e^{v_{3}/V_{T}}-1)-\frac{v_{R}}{R_{b}} \tag{44}\] hence, \[\frac{dq(v_{2})}{dt}=I_{s}\cdot[2\cdot(e^{v_{1}/V_{T}}-1)+(e^{v_{2}/V_{T}}-1)\cdot(1-\frac{2}{\alpha_{R}})-\frac{1}{\alpha_{F}}\cdot(e^{v_{1}/V_{T}}-1)]-\frac{v_{R}}{R_{b}} \tag{45}\] Replacing (31) in the last equation yields \[\frac{d}{dt}q(v_{2})=I_{s}\cdot[e^{v_{1}/v_{T}}(1-\frac{1}{\beta_{F}})-e^{v_{R}/v_{T}}\cdot(1+\frac{2}{\beta_{R}})+\frac{2}{\beta_{R}}+\frac{1}{\beta_{F}}]-\frac{v_{R}}{R_{b}} \tag{46}\] Finally, the current flow through the OTS device in (3) can be found by taking the sum of (37) and (46) as follows \[i=2\cdot I_{S}\cdot[e^{\frac{v_{1}}{v_{T}}}-e^{v_{R}/v_{T}}\cdot(1+\frac{1}{\beta_{R}})+\frac{1}{\beta_{R}}]\ -\frac{v_{R}}{R_{b}} \tag{47}\] ### _Internal State Variable_ Based on the physical model, we have implemented a delay model to switch the state between its off and on values, 0 and 1, respectively, as seen in Fig. 6a. It should be noticed that the physical model allowed the delay model to be constructed using a few elements associated with the second junction of the physical model. For instance, the capacitor \(C_{2}\) is proportional to all the parallel capacitors in the second junction \(J_{2}\), the resistor \(R_{2}\) is proportional to the bias resistor and the diodes in parallel, and the current source \(I_{State}\) is linked to the current sources. The value of the current source driving the delay model in Fig. 6a is set to \(1\mu A\) when the voltage across the device exceeds or reaches the threshold voltage, \(v_{th}\) (where \(i_{th}\) is the corresponding current).
Otherwise, the value of the current source is zero when the voltage across the device drops below the threshold voltage. Applying KCL, the delay model can be expressed as follows \[I_{State}=\frac{v_{R}}{R_{2}}+C_{2}\cdot\frac{dv_{R}}{dt} \tag{48}\] We introduce an internal state variable \(State\), or \(\zeta\in[0,1]\), whose value represents a voltage that can be found from the solution of the previous equation as follows \[\zeta(t)=I_{State}\cdot R_{2}\cdot\left(1-e^{\frac{-t}{R_{2}C_{2}}}\right) \tag{49}\] The values of the internal state variable are plotted in Fig. 6b. Considering a scaling factor \(K\), the internal voltage drop across the bias resistor can be written as follows \[v_{R}=K\cdot\zeta \tag{50}\] The internal voltage drop across the bias resistor has been plotted in Fig. 7 for different \(K\) values. As a result, the on state voltage can be simply characterized by the voltage drop across the bias resistor as follows \[v_{on}=2V_{T}\ln\left(\frac{\beta_{F}}{\beta_{F}+1}\right)-v_{R}+2V_{T}\ln\left(\frac{i_{on}}{I_{s}}+\frac{1}{\beta_{F}}+e^{v_{R}/V_{T}}\right) \tag{51}\] or the on state current can be written as \[i_{on}=2\cdot I_{S}\cdot[e^{\frac{v_{1}}{v_{T}}}-e^{v_{R}/v_{T}}\cdot(1+\frac{1}{\beta_{R}})+\frac{1}{\beta_{R}}]\ -\frac{v_{R}}{R_{b}} \tag{52}\] The off state current is essentially a leakage current, and it is modeled by a modified Shockley diode equation as \[i_{\text{off}}=10^{\log_{10}[i_{th}]-\Delta\cdot(v-v_{th})}-i_{0} \tag{53}\] where \[\Delta=\log_{10}[i_{th}]\cdot(\frac{1}{v_{th}}) \tag{54}\] and \[i_{0}=10^{\log_{10}[i_{th}]+\Delta\cdot v_{th}} \tag{55}\] With the state variable \(\zeta\), and \(S=0/1\) indicating the off/on states, both equations for the current \(i\) using (3) can be combined into one formula as follows \[i=C\cdot\frac{d}{dt}v_{R}+(1-S)\cdot i_{\text{off}}(v)+S\cdot i_{\text{on}}(v) \tag{56}\] It should be noticed that a linear capacitor has been assumed as an approximation, \(C\approx C_{2}\), as we concentrate mainly on the modelling of the \(i\)-\(v\) snapback in this article. A more accurate charge model, as explained in (38), can be used in this formulation without modifying the equation structure. Fig. 9 shows the derivative of the internal voltage drop across the bias resistor and the charge model for two different values of capacitance. The \(i\)-\(v\) characteristics obtained using the compact model of (56) are shown in Fig. 8. We have validated the model with experimental data for a physical OTS device which consisted of a Se-based OTS film approximately 15nm thick with carbon electrodes. The OTS layer and electrodes were patterned into a pillar of approximately 40nm diameter. Data was collected by applying a voltage pulse, with an on-chip resistor used to limit the current after OTS thresholding. The parameter values used for this fitting are listed in Table I. \begin{table} \begin{tabular}{|c|c|} \hline **Parameter** & **Value** \\ \hline \(I_{s}\) & \(10^{-14}A\) \\ \hline \(\beta_{F}\) & \(250\) \\ \hline \(V_{T}\) & \(0.0259V\) \\ \hline \(K\) & \(0.7\) \\ \hline \(I_{State}\) & \(1\mu A\) \\ \hline \(R\) & \(1M\Omega\) \\ \hline \(C\) & \(1nF\) \\ \hline \(v_{th}\) & \(2.4V\) \\ \hline \(i_{th}\) & \(1\mu A\) \\ \hline \(R_{b}\) & \(5k\Omega\) \\ \hline \end{tabular} \end{table} Table I: The parameters used to fit the data in Fig. 10 and their corresponding values.
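As a cross-check of the formulation, the static branches of the compact model can be evaluated directly. The Python sketch below is a simplified re-implementation (not the MATLAB/LTSPICE code of this work): it uses eqs. (52)-(55) with the Table I parameters, takes \(v_{R}\) at its steady-state value \(K\cdot I_{State}\cdot R\) from the delay model, assumes a value for \(\beta_{R}\) (which is not listed in Table I), and omits the capacitive term \(C\cdot dv_{R}/dt\) of (56).

```python
import numpy as np

# Static branches of the compact model, eqs. (52)-(56), with Table I values.
# beta_R is an assumed value (it is not listed in Table I), and the capacitive
# term C * dv_R/dt of (56) is not evaluated here.
I_s, beta_F, beta_R = 1e-14, 250.0, 10.0
V_T, K = 0.0259, 0.7
I_state, R2 = 1e-6, 1e6
v_th, i_th, R_b = 2.4, 1e-6, 5e3

# Steady-state internal state: zeta(t -> inf) = I_state * R2 per eq. (49), so v_R = K * zeta.
v_R = K * I_state * R2

def i_off(v):
    """Off-state leakage current, modified Shockley form, eqs. (53)-(55)."""
    delta = np.log10(i_th) / v_th
    i0 = 10.0 ** (np.log10(i_th) + delta * v_th)
    return 10.0 ** (np.log10(i_th) - delta * (v - v_th)) - i0

def i_on(v):
    """On-state current, eq. (52), with v_1 = (v + v_R) / 2."""
    v1 = 0.5 * (v + v_R)
    return (2.0 * I_s * (np.exp(v1 / V_T)
                         - np.exp(v_R / V_T) * (1.0 + 1.0 / beta_R)
                         + 1.0 / beta_R)
            - v_R / R_b)

v = np.linspace(0.0, 3.0, 301)
S = (v >= v_th).astype(float)          # crude DC state flag; the full model switches via zeta(t)
i = (1.0 - S) * i_off(v) + S * i_on(v)  # static part of eq. (56)

# Note: without the series resistor R_s mentioned in the text, the on-branch
# current grows exponentially with v; i_off(v_th) recovers the threshold current.
print(f"v_R = {v_R:.2f} V, i_off(v_th) = {i_off(v_th):.3e} A")
```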
By overlaying the fitting line with the measurement, we show that the model reproduces the \(i\)-\(v\) characteristics well, not just in the on and off states, but also in the snapback region as can be seen in Fig. 10. ## III Conclusion In this paper, we have presented a new compact model for efficient circuit-level simulations of threshold switching devices. As a first step, a macro model has been implemented in LTSPICE. Based on this macro model, a descriptive model has been extracted and implemented in MATLAB. The macro model has been extended into a physical model to capture the switching process by adding a delay through additional components connected to the second junction \(J_{2}\) in the physical model. This delay model introduces an internal state variable, which is necessary to convert the descriptive model to a compact model and to parameterize it in terms of easily extractable electrical parameters that represent device behaviour. Finally, we verified our model by fitting \(i\)-\(v\) measured data of the physical OTS devices. ## Acknowledgment This work has been funded by Western Digital Corporation, California, U.S.A.
2310.18205
Lost in Translation, Found in Spans: Identifying Claims in Multilingual Social Media
Claim span identification (CSI) is an important step in fact-checking pipelines, aiming to identify text segments that contain a checkworthy claim or assertion in a social media post. Despite its importance to journalists and human fact-checkers, it remains a severely understudied problem, and the scarce research on this topic so far has only focused on English. Here we aim to bridge this gap by creating a novel dataset, X-CLAIM, consisting of 7K real-world claims collected from numerous social media platforms in five Indian languages and English. We report strong baselines with state-of-the-art encoder-only language models (e.g., XLM-R) and we demonstrate the benefits of training on multiple languages over alternative cross-lingual transfer methods such as zero-shot transfer, or training on translated data, from a high-resource language such as English. We evaluate generative large language models from the GPT series using prompting methods on the X-CLAIM dataset and we find that they underperform the smaller encoder-only language models for low-resource languages.
Shubham Mittal, Megha Sundriyal, Preslav Nakov
2023-10-27T15:28:12Z
http://arxiv.org/abs/2310.18205v1
# _Lost in Translation, Found in Spans_: ###### Abstract Claim span identification (CSI) is an important step in fact-checking pipelines, aiming to identify text segments that contain a check-worthy claim or assertion in a social media post. Despite its importance to journalists and human fact-checkers, it remains a severely understudied problem, and the scarce research on this topic so far has only focused on English. Here we aim to bridge this gap by creating a novel dataset, X-CLAIM, consisting of 7K real-world claims collected from numerous social media platforms in five Indian languages and English. We report strong baselines with state-of-the-art encoder-only language models (e.g., XLM-R) and we demonstrate the benefits of training on multiple languages over alternative cross-lingual transfer methods such as zero-shot transfer, or training on translated data, from a high-resource language such as English. We evaluate generative large language models from the GPT series using prompting methods on the X-CLAIM dataset and we find that they underperform the smaller encoder-only language models for low-resource languages.1 Footnote 1: We release our X-CLAIM dataset and code at [https://github.com/mbzuai-nlp/x-claim](https://github.com/mbzuai-nlp/x-claim) ## 1 Introduction Social media platforms have become a prominent hub for connecting people worldwide. Along with the myriad benefits of this connectivity, e.g., the ability to share information instantaneously with a large audience, the spread of inaccurate and misleading information has emerged as a major problem Allcott and Gentzkow (2017). Misinformation spread via social media has far-reaching consequences, including the potential to sow chaos, to foster hatred, to manipulate public opinion, and to disturb societal stability Wasserman and Madrid-Morales (2019); Dewatana and Adillah (2021). Claims play an integral role in propagating fake news and misinformation, serving as the building blocks upon which these deceptive narratives are formed. In their _Argumentation Theory_, Toulmin (2003) described a claim as "_a statement that asserts something as true or valid, often without providing sufficient evidence for verification._" Such intentional or unintentional claims quickly gain traction over social media platforms, resulting in rapid dissemination of misinformation as was seen during recent events such as the COVID-19 pandemic van Der Linden et al. (2020) and Brexit Bastos and Mercea (2019). To mitigate the detrimental impact of false claims, numerous fact-checking initiatives, such as PolitiFact and Snopes, dedicate substantial efforts to fact-checking claims made by public figures, organizations, and social media users. However, due to the time-intensive nature of this process, many misleading claims dodge verification and remain unaddressed. To address this, computational linguistic approaches have been developed that can assist human fact-checkers Vlachos and Riedel (2014); Nakov et al. (2018); Shaar et al. (2020); Gupta and Srikumar (2021); Nakov et al. (2021); Shaar et al. (2022). Figure 1: Social media posts from our X-CLAIM dataset. The English translation is shown in parentheses for the Hindi tweet (middle) and for the Punjabi Instagram post (bottom). The claim spans are in bold. Recently, Sundriyal et al. (2022) introduced the task of claim span identification (CSI), where the goal is to identify textual segments that contain claims or assertions made within the social media posts. 
The CSI task serves as a precursor to various downstream tasks such as claim verification and check-worthiness estimation. While efforts have been made in combating misinformation in different languages (Jaradat et al., 2018; Barron-Cedeno et al., 2023), research in identifying the claim spans has so far been limited to English. Previously, Sundriyal et al. (2022) have manually extracted COVID-19 claim spans from Twitter in English. However, the landscape of fraudulent claims goes beyond COVID-19 and Twitter. In this work, we aim to bridge these gaps by studying the task of multilingual claim span identification (mCSI) across numerous social media platforms and multiple languages. To the best of our knowledge, this is the first attempt towards identifying the claim spans in a language different from English. We design the first data curation pipeline for the task of mCSI, which, unlike Sundriyal et al. (2022), does not require manual annotation to create the training data. We collect data from various fact-checking sites and we automatically annotate the claim spans within the post. Using the pipeline, we create a novel dataset, named X-CLAIM, containing 7K real-world claims from numerous social media platforms in six languages: English, Hindi, Punjabi, Tamil, Telugu, and Bengali. Figure 1 showcases a few examples from our dataset. We report strong baselines for the mCSI task with state-of-the-art multilingual models. We find that joint training across languages improves the model performance when compared to alternative cross-lingual transfer methods like zero-shot transfer, or training on translated data, from a high-resource language like English. In this work, we make the following contributions: * We introduce the first automated data annotation and curation pipeline for the mCSI task. * We create a novel dataset, named X-CLAIM, for the mCSI task in six languages. * We experiment with multiple state-of-the-art encoder-only language models and the generative large language models to achieve high performance on the proposed task. ## 2 Related Work Efforts to combat misinformation and fake news have focused on claims in various sources. The existing body of work in this area can be broadly categorized into the following major groups: claim detection (Chakrabarty et al., 2019; Gupta et al., 2021; Wuhrl and Klinger, 2021; Gangi Reddy et al., 2022, 2022), claim check-worthiness (Jaradat et al., 2018; Wright and Augenstein, 2020), claim span identification (Sundriyal et al., 2022), and claim verification (Ma et al., 2019; Soleimani et al., 2020). Being the precursor of several other downstream tasks, claim detection has garnered significant attention. Various methods have been proposed to tackle claim detection, aiming to identify statements that may contain claims (Lippi and Torroni, 2015; Levy et al., 2017; Gangi Reddy et al., 2022). In response to the escalating issue of false claims on social media, there has been a surge in the development of claim detection systems specifically designed to handle text from social media platforms (Chakrabarty et al., 2019; Gupta et al., 2021; Sundriyal et al., 2021). Recently, Sundriyal et al. (2022) introduced the task of claim span identification where the system needs to label the claim-containing textual segments from social media posts, making claim detection systems more explainable through this task. 
While most existing methods to combat fake news are primarily tailored for English (Levy et al., 2014; Lippi and Torroni, 2015; Sundriyal et al., 2021, 2022), in recent times, there has been a surge in interest regarding the advancement of fact-checking techniques for various other languages. ClaimRank (Jaradat et al., 2018) introduced an online system to identify sentences containing check-worthy claims in Arabic and English. The Check-That! Lab has organized several multilingual claim tasks over the past five years, progressively expanding language support and garnering an increasing number of submissions (Nakov et al., 2018; Elsayed et al., 2019; Shaar et al., 2020; Nakov et al., 2021, 2022). In their latest edition, Barron-Cedeno et al. (2023) featured factuality tasks in seven languages: English, German, Arabic, Italian, Spanish, Dutch, and Turkish. Gupta and Srikumar (2021) introduced X-FACT, a comprehensive multilingual dataset for factual verification of real-world claims in 25 languages. Unlike that work, here we focus on extracting the claim from a social media post, rather than fact-checking a claim. The task of claim span identification remains unexplored due to the lack of datasets in other languages. Sundriyal et al. (2022) developed a dataset of 7.5K manually annotated claim spans in tweets, named CURT; all the tweets and claim spans in that dataset are in English. Additionally, while there has been interest in claims in other languages, there is a notable lack of progress on Indian languages. Here, we aim to bridge this gap. ## 3 Dataset We follow a two-step pipeline to develop our dataset: (_i_) data collection and (_ii_) automated annotation. We present a high-level overview of our proposed data creation methodology in Figure 2. Below, we explain these steps in detail. ### Data Collection We observe in various fact-checking websites that professional fact-checkers, while investigating a given social media post or news article, first find the claim made in the post, which we call a _normalized_ claim, and then they verify whether that claim is true, misleading, or false. This is the motivation for the CSI task as a precursor to fact-checking as it is a step in the fact-checking process as performed by humans. Thus, we leverage the efforts of fact-checkers and we collect data from numerous fact-checking websites that are recognized by the International Fact-Checking Network (IFCN).2 We aim to create a dataset comprising claims made in social media and in multiple languages, with a focus on Indian languages. We scrape data from fact-checked posts in six languages: English, Hindi, Punjabi, Tamil, Telugu, and Bengali. Footnote 2: [https://www.poynter.org/ifcn/](https://www.poynter.org/ifcn/) We highlight that we deal with low-resource languages since we found only a couple of fact-checking websites that analyze social media posts in languages other than English. For each website, we scrape all the fact-checked posts3 with the help of a web scraping API.4 Footnote 3: The data was scraped in May 2023. Footnote 4: [https://www.octoparse.com/](https://www.octoparse.com/) Then, we collect the text of the social media post text and the normalized claim from the web page of each fact-checked post with the help of regular expressions based on the structure of the fact-checking website. Finally, we use various filtering rules to remove posts that are about videos, Instagram reels, or when their text is too short or excessively long. 
These rules help us to collect only the social media posts with a text modality. We provide more details about the process of data collection in Appendix A. ### Automated Annotation We label the claim-containing textual segment within the social media post using the human-written normalized claim from the previous step as guidance. The normalized claim is highly trustworthy since it was manually written by professional fact-checkers. However, it does not have to be literally spelled out as part of the social media post. Having this normalized claim gives us good guidance about where to look for the claim span, and we try to do this mapping automatically. As shown in the bottom row of Figure 2, this step includes two substeps: sentence selection and conversion of the normalized claim to the claim span. Both substeps use modules that support multiple languages and do not require human intervention. Figure 2: Our two-step methodology to create the X-CLAIM dataset for the multilingual claim span identification task. The top row shows the data collection (Section 3.1) from a fact-checking website. The bottom row illustrates the automated annotation step (Section 3.2) in which, first, the most similar post sentence (_ps_) is selected, and then, the claim span is created with the help of a normalized claim (_nc_). We use BERTScore-Recall (Zhang et al., 2020) for sentence selection and awesome-align (Dou and Neubig, 2021) for word alignment between _nc_ and _ps_2. First, we look for the most relevant sentence that encapsulates the claim made in the post. We do this by computing a similarity score between the normalized claim and each of the post's sentences, and we select the sentence with the maximum score. Second, using awesome-align (Dou and Neubig, 2021), we find the word tokens in the post sentence that align with the word tokens in the normalized claim. We then obtain the claim span as the sequence of word tokens starting with the first aligned word token and ending with the last aligned word token in the sentence. We use Stanza (Qi et al., 2020) to perform sentence segmentation for English, Hindi, Tamil, and Telugu. For Punjabi and Bengali, we consider the complete post text as a single sentence since we did not find any publicly available sentence segmentation tools for these languages. When using awesome-align to convert the normalized claim to the claim span, we used the official repository of Dou and Neubig (2021). Recent works (Yarmohammadi et al., 2021; Kolluru et al., 2022) have used word alignment to produce silver labels in the target language (like Hindi) using gold labels available in the source language (like English). Mittal et al. (2023) used word alignments from awesome-align, and then considered the longest contiguous sequence of aligned tokens in the translated text as the final projected gold labels. Taking the longest contiguous sequence is suitable for tasks where the target text, the gold labels, or both are relatively short. However, in our mCSI task, the normalized claims and the post texts are quite long (see Table 1). Thus, we took the sequence of words from the first to the last aligned word. We found that this yielded better performance than taking the longest contiguous sequence of aligned words in the social media post. Note that we empirically chose the most appropriate sentence similarity measure for sentence selection, after trying a variety of similarity measures.
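To make the two substeps concrete, the following is a minimal sketch of the annotation logic in Python. It is illustrative rather than the exact pipeline: the candidate/reference orientation passed to BERTScore is an assumption, and the helper `aligned_indices_stub` is a simple lexical stand-in for the learned word alignments produced by awesome-align.

```python
from bert_score import score  # pip install bert-score

def select_sentence(normalized_claim, post_sentences, lang):
    """Substep 1: pick the post sentence most similar to the normalized claim,
    scored with BERTScore-Recall."""
    refs = [normalized_claim] * len(post_sentences)
    _, recall, _ = score(post_sentences, refs, lang=lang, verbose=False)
    return int(recall.argmax())

def aligned_indices_stub(nc_tokens, ps_tokens):
    """Stand-in for awesome-align: mark post-sentence tokens that also occur in
    the normalized claim (the real pipeline uses learned word alignments)."""
    nc_vocab = {t.lower() for t in nc_tokens}
    return [i for i, t in enumerate(ps_tokens) if t.lower() in nc_vocab]

def claim_span(nc, ps):
    """Substep 2: span = tokens from the first to the last aligned token."""
    ps_tokens = ps.split()
    idx = aligned_indices_stub(nc.split(), ps_tokens)
    if not idx:
        return ""
    return " ".join(ps_tokens[min(idx):max(idx) + 1])

# Hypothetical example post:
sentences = ["Share this with everyone!", "Drinking hot water cures the virus say doctors."]
nc = "Drinking hot water cures the virus"
best = select_sentence(nc, sentences, lang="en")
print(claim_span(nc, sentences[best]))  # -> "Drinking hot water cures the virus"
```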
Tasks such as machine translation (Dong et al., 2015) and text summarization (Liu and Lapata, 2019) require evaluation measures that take paraphrasing and synonyms into account while comparing the model's generated text to the gold reference text. We leverage these evaluation measures for sentence similarity. To evaluate the commonly used measures such as ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005) and BERTScore, we manually annotated the claim spans for 300 randomly sampled posts in the six languages. Then, we evaluated the automatically annotated claim spans when using different similarity measures against the manually annotated claim spans. The results are shown in Table 2: we can see that BERTScore-Recall yields consistently better performance for finding the annotated spans. For Punjabi and Bengali, we only used awesome-align due to the lack of a sentence segmentation module and we observed high-quality F1 scores of 81.23% and 78.6%, respectively. Overall, our two-step data creation methodology yields a robust, scalable, and high-quality automatically annotated data for our multilingual claim span identification task. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & **English (En)** & **Hindi (Hi)** & **Punjabi (Pa)** & **Tamil (Ta)** & **Telugu (Te)** & **Bengali (Bn)** \\ \hline \# train & 3891 & 1193 & 346 & 100 & - & - \\ \# dev & 400 & 100 & 100 & 30 & - & - \\ \# test & 371 & 100 & 100 & 100 & 107 & 102 \\ text len (t) & 37.58\({}_{\pm 34.59}\) & 28.59\({}_{\pm 23.07}\) & 29.00\({}_{\pm 21.92}\) & 26.40\({}_{\pm 20.10}\) & 24.42\({}_{\pm 15.17}\) & 29.48\({}_{\pm 21.73}\) \\ claim len (t) & 17.67\({}_{\pm 12.33}\) & 17.79\({}_{\pm 11.62}\) & 17.10\({}_{\pm 11.20}\) & 14.12\({}_{\pm 08.27}\) & 13.54\({}_{\pm 06.73}\) & 15.00\({}_{\pm 07.69}\) \\ text len (c) & 229.34\({}_{\pm 200.56}\) & 143.05\({}_{\pm 114.19}\) & 145.03\({}_{\pm 106.75}\) & 229.63\({}_{\pm 114.83}\) & 186.63\({}_{\pm 113.99}\) & 186.95\({}_{\pm 128.81}\) \\ claim len (c) & 108.95\({}_{\pm 81.88}\) & 85.25\({}_{\pm 86.54}\) & 85.25\({}_{\pm 83.71}\) & 122.23\({}_{\pm 68.86}\) & 104.50\({}_{\pm 50.65}\) & 97.42\({}_{\pm 46.71}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics about the X-CLAIM dataset. The number of samples in the train, the development, and the test splits are reported in first three rows, respectively. Text len and claim len are the average (\(\pm\) standard deviation) length of social media post text and claim span, respectively, in number of tokens (\(t\)) and characters (\(c\)), respectively. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Approach** & **En** & **Hi** & **Ta** & **Te** \\ \hline awesome-align & 70.48 & 77.27 & 82.24 & 82.61 \\ +ROUGE-F1 & 74.19 & **80.52** & 82.62 & 78.85 \\ +METEOR & 78.71 & 79.50 & **82.64** & 81.08 \\ +BERTScore-F1 & 79.60 & **80.52** & **82.64** & 82.73 \\ +BERTScore-Recall & **83.91** & **80.52** & **82.64** & **83.34** \\ \hline \hline \end{tabular} \end{table} Table 2: F1 score (in %) of the automated annotation for our data curation pipeline when using different sentence similarity measures during sentence selection. ### Evaluation Sets and Dataset Analysis We created the evaluation sets with the help of linguistic experts in the six languages. We provided them with nearly 100 samples from the curated data in each language (400 in English) along with detailed annotation guidelines for the CSI task from Sundriyal et al. (2022). 
We asked them to annotate the claim spans in the social media posts under the guidance of claims authored by professional fact-checkers. We created training and development splits in a ratio of 80:20 on the remaining curated data. For Telugu and Bengali, we only formed test sets as there were fewer examples available for these languages. Table 1 shows statistics about the dataset and the splits, and Figure 1 shows a few examples from our X-CLAIM dataset. Table 1 further reports the length of the post text and the claim span. As the claim spans are generally concise and do not contain extra neighboring words, we observe that the claim spans are nearly half of the text of the post for all languages. ## 4 Experiments Evaluation Measures: Following Sundriyal et al. (2022), we address mCSI as a sequence tagging task. For evaluation, we use three measures, computed at the span level (Da San Martino et al., 2019): Precision (P), Recall (R), and F1-score. Models: We use state-of-the-art transformer-based (Vaswani et al., 2017) multilingual pretrained encoder-only language models such as mBERT (Devlin et al., 2019), mDeBERTa (He et al., 2023), and XLM-RoBERTa (XLM-R) (Conneau et al., 2020). We encode each post's tokens with _IO (Inside-Outside)_ tags to mark the claim spans. Other encodings such as _BIO_, _BEO_ and _BEIO_ performed worse (see Appendix C for a detailed comparison of encodings). More details about the training are given in Appendix B. ## 5 Results We carry out an exhaustive empirical investigation to answer the following research questions: 1. Does the model benefit from joint training with multiple languages? (Section 5.1) 2. Do we need training data in low-resource languages when we have abundant data in high-resource languages?7 (Section 5.2) Footnote 7: We consider English to be a high-resource language. 3. Can large language models (LLMs) such as GPT-4 identify the claims made in multilingual social media? (Section 5.3) 4. How does the automatically annotated X-CLAIM dataset compare to prior manually annotated datasets like CURT? (Section 5.4) ### Training on Multilingual Social Media We train and compare two kinds of models: Monolingual and Multilingual models. In the Monolingual setup, we train one model for each language using the available training data in the X-CLAIM dataset, whereas in the Multilingual setup, we train a single model on the training data for all languages combined. We note that there is no Monolingual model for Telugu and Bengali due to the lack of training data for these languages. However, we evaluate the Multilingual model on them as that model was trained on multiple languages. The performance of these models with different pretrained encoders is shown in Table 3. We can see that the Multilingual models outperform the Monolingual models by 1.15\(\%\) precision and 0.93\(\%\) F1, averaged over all languages (except for Telugu and Bengali). Even though recall drops by 0.45\(\%\), the improvement in F1 suggests that the model does benefit from joint training. We posit that the drop in recall and the gain in precision indicate that the model has become more careful when identifying the claims. ### Cross-lingual Transfer from English We use the English training data in two experimental settings and we compare them to the Multilingual models. In the first setting, we leverage the strong cross-lingual transfer capabilities of pretrained multilingual models (Wu and Dredze, 2019). We take the Monolingual models for English and test them on the remaining five languages.
In this setting, we have zero-shot transfer from monolingual-English models. In the second setting, which we call _translate-train_ models, we translate the English training data to the target language and we train a model only on the translated data. To perform translation of social media posts, we use Google translate,8 and we project the claim spans (in English), or the token labels, on the translated post using our automated annotation pipeline (see Section 3.2 for detail). Footnote 8: [https://translate.google.com/](https://translate.google.com/) Both the zero-shot transfer and the translate-train models are almost consistently worse than the Multilingual models (in terms of F1) for all five languages. The translate-train models show a drop of 1.19% F1, whereas zero-shot transfer models are 2.13% F1 behind Multilingual. This offers strong evidence that the training data in low-resource languages helps over the training data in a high-resource language. Interestingly, we notice that zero-shot transfer models are consistently worse than translate-train ones when using mBERT and mDeBERTa, for all five languages. For instance, with mBERT, zero-shot transfer models are worse by 2.92% F1. However, with XLM-R, zero-shot transfer models are better than translate-train models by 1.15% precision and 0.64% F1. We believe that this is because XLM-R has stronger cross-lingual transfer capabilities, stemming from its larger pretraining data compared to mBERT and mDeBERTa. ### Evaluating the GPT Series LLMs We experiment with several large language models (LLMs): text-davinci-003 (T-DV3), gpt-3.5-turbo (GPT-3.5) and gpt-4-0314 (GPT-4) on the mCSI task using the OpenAI API.7 We prompted each LLM with each social post from the test sets in our X-CLAIM dataset and we asked the LLM to respond with the claim span. Footnote 7: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference) The generated response may contain words that are either not present in the post or are synonyms of words from the posts. Thus, we treated the response like a normalized claim (Section 3.2) and we passed it through our automated annotation step (Section 3.2) to create the corresponding claim span. We evaluated the predicted claim spans with respect to the gold claim spans. More details about this setup are given in Appendix D. Zero-shot Prompting.We experiment with four prompts that use no examples: Identify, Extract, Span, and Language. The exact prompt structure is given in Figure 6 in the Appendix. Table 4 shows their performance when used with different LLMs on our X-CLAIM dataset. We noticed that the LLMs mostly responded in English even when asked to analyze a post in another language. One reason could be that the prompts do not explicitly specify the language the LLM should respond in. Since our automated annotation step is language-agnostic, the corresponding claim span is in the target language. To overcome this, we asked the LLM to respond in the target language with the Language prompt. Interestingly, and unlike GPT-3.5 and GPT-4, the performance of T-DV3 with Language prompt significantly dropped by 12-37% F1 (averaged over all languages except English) when compared to the other three prompts. This suggests that T-DV3 is weaker in a multilingual setup. 
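For reference, the snippet below sketches how a single post could be queried for its claim in a zero-shot fashion. It is a hypothetical illustration using the current OpenAI Python client; the prompt wording is a stand-in and does not reproduce the exact Identify/Extract/Span/Language prompts from the Appendix, and the returned text is still treated as a normalized claim that must be mapped back onto the post with the automated annotation step of Section 3.2.

```python
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

def zero_shot_claim(post_text: str, language: str, model: str = "gpt-4-0314") -> str:
    """Ask the LLM for the claim made in a post, answered in the target language."""
    prompt = (
        f"Identify the claim made in the following social media post. "
        f"Respond only with the claim, written in {language}.\n\nPost: {post_text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # The response is not guaranteed to be a verbatim span of the post, so it is
    # later converted to a claim span via sentence selection + word alignment.
    return response.choices[0].message.content.strip()
```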
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**English**} & \multicolumn{3}{c}{**Hindi**} & \multicolumn{3}{c}{**Punjabi**} & \multicolumn{3}{c}{**Tamil**} & \multicolumn{3}{c}{**Telugu**} & \multicolumn{3}{c}{**English**} \\ \cline{2-13} & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline \multicolumn{13}{c}{_monolingual models (train using only training data in target language)_} \\ \hline mBERT & 70.30 & 77.08 & 69.86 & 77.57 & 88.93 & 79.03 & 69.40 & 94.22 & 76.93 & 73.83 & 87.74 & 76.94 & n/a & n/a & n/a & n/a & n/a & n/a \\ mDeBERTa & **74.28** & 80.72 & **73.79** & 75.94 & 92.30 & 79.84 & 69.55 & 92.14 & 75.78 & 67.68 & 76.62 & 69.05 & n/a & n/a & n/a & n/a & n/a \\ XLM-R & 71.56 & **81.51** & 72.79 & 75.34 & **94.49** & **81.09** & 68.85 & 93.58 & 75.62 & 72.88 & 80.42 & 71.82 & n/a & n/a & n/a & n/a & n/a \\ \hline \multicolumn{13}{c}{_ multilingual models (train using training data in all languages)_} \\ \hline mBERT & 70.86 & 77.01 & 70.39 & 76.16 & 90.40 & 80.04 & 68.30 & 88.19 & 73.96 & 73.80 & 85.94 & 76.57 & 79.58 & 80.74 & 78.07 & 76.79 & 86.22 & **79.39** \\ mDeBERTa & 72.25 & 80.90 & 73.01 & 75.94 & 92.66 & 80.87 & 70.62 & 90.99 & 76.72 & **78.21** & 88.69 & **80.29** & 82.34 & **87.10** & **82.92** & **77.11** & 85.89 & 79.24 \\ XLM-R & 72.45 & 78.61 & 71.93 & 75.30 & 89.09 & 78.63 & **73.65** & 87.95 & **77.03** & 73.23 & 83.63 & 74.76 & 80.48 & 85.29 & 80.68 & 76.99 & 81.24 & 77.22 \\ \hline \multicolumn{13}{c}{_strong-shot transfer from monolingual English models \({}^{*}\)_} \\ \hline mBERT & n/a & n/a & n/a & 74.53 & 83.46 & 76.51 & 66.84 & 79.11 & 69.91 & 76.92 & 72.77 & 70.85 & 79.58 & 68.68 & 70.31 & 72.74 & 80.91 & 74.21 \\ mDeBERTa & n/a & n/a & n/a & n/a & 75.71 & 91.18 & 80.08 & 73.18 & 88.87 & 76.78 & 80.97 & 75.44 & **84.42** & 74.91 & 76.25 & 76.42 & 79.49 & 75.88 \\ XLM-R & n/a & n/a & n/a & 73.42 & 88.28 & 77.42 & 70.88 & 92.44 & 76.68 & 76.88 & 78.89 & 74.33 & 80.44 & 79.28 & 77.81 & 73.67 & 80.37 & 75.04 \\ \hline \multicolumn{13}{c}{_translate-train models (train on translated training data from English to target language)_} \\ \hline mBERT & n/a & n/a & n/a & **78.60** & 86.52 & 79.55 & 67.33 & 92.80 & 74.70 & 75.93 & 81.28 & 76.04 & 75.56 & 72.07 & 71.76 & 70.16 & 83.97 & 74.36 \\ mDeBERTa & n/a & n/a & n/a & n/a & 76.73 & 87.43 & 78.77 & 68.84 & 91.73 & 73.56 & 77.46 & **89.06** & 80.13 & 82.18 & 73.63 & 75.16 & 72.41 & **88.78** & 77.57 \\ XLM-R & n/a & n/a & n/a & n/a & 75.55 & 83.37 & 76.11 & 68.97 & **94.43** & 76.53 & 77.97 & 82.79 & 77.59 & 77.07 & 72.11 & 72.40 & 69.99 & 86.82 & 75.47 \\ \hline \hline \end{tabular} \end{table} Table 3: Precision (P), recall (R) and F1 performance (in \(\%\)) of pretrained encoder-only language models in different settings on X-CLAIM dataset. \({}^{\dagger}\)The monolingual models for Telugu and Bengali are not available (n/a) due to the lack of training data for these languages. \({}^{\ddagger}\)n/a: we do not evaluate on the English test set since we focus on the cross-lingual transfer from English to the target language. The reported numbers are the median of three runs with different seeds as high variance was observed across the fine-tuning runs. The best scores are in bold. We further find that GPT-4 is nearly always better than GPT-3.5 by an average of 4.23% precision and 1.5% F1 over the four prompts. 
GPT-3.5 consistently outperformed T-DV3 by an average of 35.96% recall and 27.63% F1, but it lags behind by 0.5% in terms of precision. In-Context Learning.Here, we give the model a few labeled examples as part of the prompt as shown in Figure 7 of the Appendix. Since GPT-4 outperformed the other two LLMs and showed the best performance with Language (Table 4), we experimented with in-context learning with GPT-4 and Language prompt. For Telugu and Bengali, we use examples from translated data (Section 5.2) due to the lack of training data in these languages. The results are shown in Table 5. We see that in-context learning consistently improves F1 score over the zero-shot prompting in all six languages. With more examples shown, the performance increased in English, Hindi and Punjabi at the cost of more computation time. We find that 10-shot in-context learning improved the performance by an average of 2.78% F1 for the six languages in comparison to zero-shot prompting. Comparing mDeBERTa and GPT-4.We compared the best-performing fine-tuned encoder-only language model to the best-performing generative LLM. The Multilingual mDeBERTa model and GPT-4 yielded the best results for most languages as reported in Table 3, Table 4, and Table 5. In the case of GPT-4, the best setting uses the Language prompt with 10-shot in-context learning for the six languages. Figure 3 compares the two models in terms of F1 scores; we further offer comparison in terms of precision and recall in Table 10 of the Appendix. We find in Figure 3 that Multilingual mDeBERTa outperforms GPT-4 by 2.07% F1, averaged over the six languages. GPT-4 shows competitive performance with mDeBERTa in English, Hindi and Punjabi. On the remaining three languages, mDeBERTa outperforms GPT-4 by a large margin of 2-7% F1. This suggests that the LLMs show strong performance on high-resource languages like English, but still lag behind smaller fine-tuned LMs on low-resource languages such as Bengali. ### Comparing X-CLAIM and CURT We trained mDeBERTa on the CURT dataset (Sundriyal et al., 2022a), containing tweets in English, and we compared it to the English Monolingual model (trained with mDeBERTa on English data in X-CLAIM) on the test sets for the six languages in the X-CLAIM dataset. We show the F1 scores for both models in Figure 4 and we report the precision and the recall scores in Table 11 in the Appendix. The mDeBERTa model fine-tuned on the X-CLAIM English data performs competitively in English with the CURT trained model and shows 3.52% F1 average gain over the remaining five languages. Note that CURT is manually annotated and is twice larger than the English part of the X-CLAIM dataset. This offers empirical evidence of better model generalization when training on the X-CLAIM dataset compared to the CURT dataset. ## 6 Error Analysis In this section, we qualitatively analyze the errors made by the best-performing Multilingual mDeBERTa model. To provide insights on how LLMs can be improved for this task, we also discuss the errors made by GPT-4 in its best-performing setting of 10-shot in-context learning. We analyzed the predictions on the test examples in English and Hindi, and we report the kinds of errors made by the two language models in Figure 5. Below, we discuss the results of the analysis. English.In the first post in Figure 5, both models deviate from the gold claim span. 
GPT-4 model correctly identifies the presence of the claim but inadvertently veers away from the central check-worthy assertion and focuses on the secondary claim. On the other hand, the mDeBERTa model includes information about _moisture and bacteria in the mask_, but contains several grammatical errors and lacks clarity. In particular, the phrase _'every day day legionnaires disease'_ is confusing and doesn't convey a clear message. Both models provide similar claim spans for the second social media post, capturing the central assertion accurately. However, mDeBERTa contains the extra words _'pregnancy your'_ at the beginning that are not present in the gold span. These extra words introduce confusion and do not accurately represent the claim made in the social media post. Hindi.Claim span identification in other languages is more complicated than in English due to the lack of proper guidelines pertaining to their linguistic characteristics. In the first example, GPT-4 almost accurately predicted the span, missing the first word (_Mrs._) in the beginning. While mDeBERTa predicted both the claim and the premise, defying the very purpose of the task, which is to extract precise claim phrases from the post. In the second post, both models performed well overall. However, we observe a similar issue as for English: the inclusion of additional phrases alongside the claim spans, which can potentially detract from the clarity and precision of the claim. This indicates that these models struggle to make precise decisions about claim boundaries. We can conclude that for both languages, the models can identify the claim but might propose wider boundaries, including extra words. Figure 4: F1 score (in %) of mDeBERTa trained on the CURT dataset (Sundriyal et al., 2022a) vs. mDeBERTa trained on the English data in X-CLAIM dataset. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**\# Examples**} & \multicolumn{3}{c}{**English**} & \multicolumn{3}{c}{**Hindi**} & \multicolumn{3}{c}{**Punjabi**} & \multicolumn{3}{c}{**Tamil**} & \multicolumn{3}{c}{**Telugu**} & \multicolumn{3}{c}{**Bengali**} \\ \cline{2-13} & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline 0 & 74.76 & 75.66 & 72.41 & **79.51** & 80.87 & 77.27 & 64.21 & **92.31** & 71.23 & 81.47 & 75.38 & 75.32 & 84.96 & 72.75 & 76.15 & 68.95 & 83.50 & 71.14 \\ 1 & **75.55** & 76.45 & 73.23 & 77.41 & 79.43 & 76.22 & 67.50 & 87.95 & 72.15 & **81.52** & **82.59** & **79.21** & 85.22 & **74.38** & **77.49** & 70.87 & 84.00 & 73.18 \\ 4 & 74.76 & 76.49 & 72.74 & 79.09 & 85.15 & 79.09 & 73.80 & 88.64 & 76.32 & 81.30 & 81.25 & 78.25 & 85.08 & 74.28 & 77.18 & 70.29 & **85.72** & **73.37** \\ 7 & 74.28 & 76.31 & 72.29 & 78.05 & 86.40 & 79.59 & 71.11 & 90.30 & 76.06 & 78.44 & 80.61 & 76.76 & 85.01 & 73.06 & 76.46 & 70.42 & 80.54 & 71.18 \\ 10 & 75.28 & **77.80** & **73.49** & 79.42 & **91.06** & **82.49** & 73.49 & 91.20 & **77.58** & 79.12 & 81.99 & 77.92 & **86.11** & 71.31 & 75.54 & **71.88** & 81.79 & 73.17 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance (in %) of the GPT-4 model using in-context learning with Language prompt. The first row contains zero-shot prompting (i.e., no examples) results from Table 4. The best scores are in bold. ## 7 Conclusion and Future Work We proposed a novel automated data annotation methodology for multilingual claim span identification. 
Using it, we created and released a new dataset called X-CLAIM, which consists of real-world claim spans, and social media posts containing them, collected from numerous social media platforms in six languages: English, Hindi, Punjabi, Tamil, Telugu, and Bengali. Using state-of-the-art multilingual models, we established strong baselines based on encoder-only and generative language models. Our experiments demonstrated the benefits of multilingual training when compared to other cross-lingual transfer methods such as zero-shot transfer, or training on the translated data, from a high-resource language like English. We observed lower performance for GPT-style generative LLMs when compared to smaller fine-tuned encoder-only language models and we discussed their error analysis in the spirit of improving the LLMs on this task. Our work opens many important research questions: (1) How to obtain real-world claims without relying on fact-checkers analysis? (2) How to improve the understanding of LLMs about claims and social media in low-resource languages? (3) How to automatically curate multiple check-worthy claims made in the post? (4) How to improve the evaluation metric for the mCSI task? and (5) How to expand the CSI task to other low-resource languages? We plan to address these research questions in future work. Figure 5: Error analysis of the GPT-4 model and the Multilingual mDeBERTa model on English and Hindi test instances from the X-CLAIM dataset. The social media post and the gold claim span are shown in the second column. The predicted claim spans for both models are provided in the third and fourth columns, respectively. The English translations for the Hindi examples are given inside parenthesis in _italics_. ### Limitations Our X-CLAIM dataset for the mCSI task is limited to six languages. We do not know how well the developed systems will perform in languages that are not considered in this work. Moreover, the proposed dataset handles only the primary claim in the given social media post and ignores any other potentially check-worthy claims that the post might contain. In practice, the post may contain multiple check-worthy claims. ## Ethics Broader Impact:Our dataset and model will help the fact-checkers filter out extraneous information, thus saving them significant amounts of time, effort and resources. Data:We place the utmost importance on user privacy. As a result, we have no intention of disclosing any information about the users. The data we curated is solely for research purposes, ensuring that user confidentiality and privacy are protected. Environmental Impact:It is critical to acknowledge the environmental consequences of training large language models. In our case, we mitigate this concern to some extent by focusing primarily on fine-tuning pretrained models rather than training them from scratch.
2303.00572
Nonminimally-coupled warm Higgs inflation: Metric vs. Palatini Formulations
In this work, we study the non-minimally-coupled Higgs model in the context of warm inflation scenario on both metric and Palatini approaches. We particularly consider a dissipation parameter of the form $\Gamma=C_{T}T$ with $C_{T}$ being a coupling parameter and focus only on the strong regime of the interaction between inflaton and radiation fluid. We compute all relevant cosmological parameters and constrain the models using the observational Planck 2018 data. We discover that the $n_s$ and $r$ values are consistent with the observational bounds. Having used the observational data, we constrain a relation between $\xi$ and $\lambda$ for the non-minimally-coupled warm Higgs inflation in both metric and Palatini cases. To produce $n_s$ and $r$ in agreement with observation, we find that their values are two orders of magnitude higher than those of the usual (cold) non-minimally-coupled Higgs inflation.
Thammarong Eadkhong, Punsiri Dam-O, Phongpichit Channuie, Davood Momeni
2023-03-01T15:16:23Z
http://arxiv.org/abs/2303.00572v2
# Nonminimally-coupled warm Higgs inflation: Metric vs. Palatini Formulations ###### Abstract In this work, we study the non-minimally-coupled Higgs model in the context of warm inflation scenario on both metric and Palatini approaches. We particularly consider a dissipation parameter of the form \(\Gamma=C_{T}T\) with \(C_{T}\) being a coupling parameter and focus only on the strong regime of the interaction between inflaton and radiation fluid. We compute all relevant cosmological parameters and constrain the models using the observational Planck 2018 data. We discover that the \(n_{s}\) and \(r\) values are consistent with the observational bounds. Having used the observational data, we constrain a relation between \(\xi\) and \(\lambda\) for the non-minimally-coupled warm Higgs inflation in both metric and Palatini cases. To produce \(n_{s}\) and \(r\) in agreement with observation, we find that their values are two orders of magnitude higher than those of the usual (cold) non-minimally-coupled Higgs inflation. ## I Introduction Inflationary cosmology is a widely accepted framework for explaining the exponentially rapid expansion of the early universe. The flatness and homogeneity/unwanted relics problems can be solved using such paradigm which provides a mechanism to generate the inhomogeneities in the cosmic microwave background radiation (CMBR) [1; 2; 3; 4; 5]. In a standard fashion of slow-roll (cold) inflation, the universe experiences an exponential expansion, during which density perturbations are created by quantum fluctuations of the inflation field, followed by the reheating stage. However, the standard inflationary model is plagued by several challenges, such as the graceful exit problem. To address these issues, several alternative models of inflation have been proposed, including warm inflation. This is a combination of the exponential accelerating expansion phase and the reheating. Warm inflation has at the moment become a growing area of research, with the potential to provide new insights into the physics of the early universe. Warm inflation is an alternative version of standard inflation that takes into account the effects of dissipation and thermal fluctuations on the inflationary process. In warm inflation scenario, the scalar field responsible for driving inflation, is coupled to a thermal bath and transfers energy to radiation during inflation, thus maintaining a non-zero temperature. Warm inflation was first proposed by Berera and Fang [6]. Since then, numerous studies have been carried out to study the dynamics and predictions of warm inflation. One of the main advantages of warm inflation is that it provides a natural solution to the graceful exit problem, as the inflaton can gradually decay into the thermal bath, leading to a smooth transition from inflation to the hot big bang era. The predictions of warm inflation have been studied both analytically and numerically. Some of the most notable works in this field include Berera _et. al._[7; 8; 9; 10; 11; 12], Graham and I. G. Moss [13], Bastero-Gil _et. al._[14] and Zhang [15]. These studies have shown that warm inflation can produce a sufficient number of e-folds, consistent with the observed CMB temperature fluctuations, and that it can lead to a broad spectrum of curvature perturbations. There have also been several studies comparing the predictions of warm inflation with those of the standard inflationary model and other alternative models of inflation. 
For example, Kamali [16] compared warm inflation with the Higgs inflation model and found that warm inflation can produce a smaller tensor-to-scalar ratio, which is more in line with the current observations. Similarly, the authors of [17] showed that even when dissipative effects are still small compared to Hubble damping, the amplitude of scalar curvature fluctuations can be significantly enhanced, whereas tensor perturbations are generically unaffected due to their weak coupling to matter fields. In the context of warm inflation, it was also found that recent studies in many different theories were proposed. For instance, the authors of Ref.[18] conducted a possible realization of warm inflation owing to a inflaton field self-interaction. Additionally, models of minimal and non-minimal coupling to gravity were investigated in Refs.[19; 20; 21; 22; 23; 16]. Recently, warm scenario of the Higgs-Starobinsky (HS) model was conducted [24]. The model includes a non-minimally coupled scenario with quantum-corrected self-interacting potential in the context of warm inflation [25]. An investigation of warm inflationary models in the context of a general scalar-tensor theory of gravity has been made in Ref.[26]. Recent review on warm inflation has been recently discussed in Ref.[27]. The plan of the work is structured as follows: In Sec.II, we review a formulation of non-minimally-coupled Higgs inflation considering both metric and Palatini approaches. In Sec.III, we provide the basic evolution equations for the inflaton and the radiation fields and define the slow roll parameters and conditions. We also describe the primordial power spectrum for warm inflation and the form of the dissipation coefficient. In Sec.IV, we present the models of nonminimally-coupled Higgs inflation and compute all relevant cosmological parameters. We then constrain our models using the the observational (Planck 2018) data in Sec.V. Finally, we summarize the present work and outline the conclusion. ## II A review on metric vs. Palatini formulations of nonminimally-coupled Higgs inflation Models in which the Higgs field is non-minimally coupled to gravity lead to successful inflation and produce the spectrum of primordial fluctuations in good agreement with the observational data. Here we consider the theory composed of the Standard Model Higgs doublet \(H_{J}\) with the non-minimal coupling to gravity in the Jordan (J) frame: \[S_{J}=\int d^{4}x\sqrt{-g_{J}}\left[\frac{M_{p}^{2}}{2}\left(1+2\xi\frac{H_{J}^ {\dagger}H_{J}}{M_{p}^{2}}\right)R_{J}+g_{J}^{\mu\nu}(D_{\mu}H_{J})^{\dagger}( D_{\nu}H_{J})-\lambda(H_{J}^{\dagger}H_{J})^{2}\right], \tag{1}\] where \(M_{p}\) is the Planck mass, \(\xi\) is a coupling constant, \(R_{J}\) is the Ricci scalar, and \(H\) is the Higgs field with \(\lambda\) being the self-coupling of the Higgs doublet. Note that the mass term of the Higgs doublet is neglected throughout this paper because it is irrelevant during inflation. As was known the metric formalism is considered as a standard gravitational method, however, one can study gravity adopting the Palatini approach leading to different phenomenological consequences in a theory with a non-minimal coupling to gravity. The differences of them are explicit and easily understandable in the so-called Einstein (E) frame where the non-minimal coupling is removed from the theory by taking a conformal redefinition of the metric \[g_{\mu\nu}\rightarrow\Omega^{2}g_{J,\mu\nu},\qquad\Omega^{2}=1+2\xi\frac{H_{J }^{\dagger}H_{J}}{M_{p}^{2}}. 
\tag{2}\] Using the metric redefinition, the connection is also transformed in the metric formalism since it is given by the Levi-Civita connection: \[\Gamma^{\rho}_{\mu\nu}(g)=\frac{1}{2}g^{\rho\lambda}\Big{(}\partial_{\mu}g_{ \nu\lambda}+\partial_{\nu}g_{\mu\lambda}-\partial_{\lambda}g_{\mu\nu}\Big{)}. \tag{3}\] It is noticed that the connection is left unaffected in the Palatini formalism because it is treated as an independent variable as well as the metric. Thus, the Ricci scalar transforms differently depending on the underlying gravitational formulations as [28] \[\sqrt{-g_{J}}\Omega^{2}\,R_{J}=\sqrt{-g}(R+6\kappa\Omega g^{\mu\nu}\nabla_{ \mu}\nabla_{\nu}\Omega^{-1}), \tag{4}\] where \(\kappa=1\) and \(\kappa=0\) correspond to the metric and the Palatini formalism, respectively. The Einstein frame expression can then be obtained after the rescaling of the metric: \[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{p}^{2}}{2}R+3\kappa M_{p}^{2}\Omega g^{\mu \nu}\nabla_{\mu}\nabla_{\nu}\Omega^{-1}+\frac{1}{\Omega^{2}}g^{\mu\nu}(D_{\mu} H_{J})^{\dagger}(D_{\nu}H_{J})-\frac{\lambda}{\Omega^{4}}(H_{J}^{\dagger}H_{J})^{2} \right]. \tag{5}\] In the Einstein frame, the connection is not directly coupled to the Higgs field \(H_{J}\) and the gravity sector is just the Einstein-Hilbert form. In this case, the Euler-Lagrange constraint in the Palatini formalism restricts the connection to the Levi-Civita one, and the two approaches become equivalent, up to the explicit difference in the \(\kappa\) term [28]. Let us next review phenomenological aspects of the metric-Higgs inflation [29] and the Palatini-Higgs inflation [30; 31; 32; 33]. In this subsection, we neglect the gauge sector for simplicity. In the inflationary fashion, we usually consider the unitary gauge in which the Higgs doublet is described by a real scalar field \(\phi(x)\) as \(H_{J}^{T}(x)=(0,\phi(x)/\sqrt{2})\). Therefore, the action in Eq. (5) becomes \[S_{E}=\int d^{4}x\sqrt{-g}\Bigg{(}\frac{M_{p}^{2}}{2}R-\frac{1+\xi\frac{\phi^{ 2}}{M_{p}^{2}}+6\kappa\xi^{2}\frac{\phi^{2}}{M_{p}^{2}}}{2\left(1+\xi\frac{ \phi^{2}}{M_{p}^{2}}\right)^{2}}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu} \phi-\frac{\lambda\phi^{4}}{4\left(1+\xi\frac{\phi^{2}}{M_{p}^{2}}\right)^{2} }\Bigg{)}, \tag{6}\] where \(\kappa=1\) for the metric-Higgs and \(\kappa=0\) for the Palatini-Higgs inflation. The non-trivial kinetic term can be canonically normalized by introducing the field \(\psi\) defined through \[\frac{d\psi}{d\phi}=\sqrt{\frac{1+\xi\frac{\phi^{2}}{M_{p}^{2}}+6\kappa\xi^{2 }\frac{\phi^{2}}{M_{p}^{2}}}{\left(1+\xi\frac{\phi^{2}}{M_{p}^{2}}\right)^{2} }}. \tag{7}\] In terms of \(\psi\), the action can be rewritten as \[S_{E}=\int d^{4}x\sqrt{-g}\Big{(}\frac{M_{p}^{2}}{2}R-\frac{1}{2}g^{\mu\nu} \partial_{\mu}\psi\partial_{\nu}\psi-U(\psi(\phi))\Big{)}, \tag{8}\] with the potential in the Einstein frame \[U(\psi(\phi))=\frac{\lambda\phi^{4}(\psi)}{4\left(1+\xi\frac{\phi^{2}(\psi)}{ M_{p}^{2}}\right)^{2}}. \tag{9}\] The change of variable can be easily integrated in the Palatini case, while an asymptotic form in the large field limit \(\xi\phi^{2}/M_{p}^{2}\gg 1\) is useful in the metric case as \[\text{metric}\quad\phi \simeq \frac{M_{p}}{\sqrt{\xi}}\text{exp}\Bigg{(}\sqrt{\frac{1}{6}}\frac {\psi}{M_{p}}\Bigg{)}\,, \tag{10}\] \[\text{Palatini}\quad\phi = \frac{M_{p}}{\sqrt{\xi}}\text{sinh}\left(\frac{\sqrt{\xi}\psi}{M _{p}}\right). 
\tag{11}\] The potential is reduced to \[\text{metric}:\quad U \simeq \frac{\lambda M_{p}^{4}}{4\xi^{2}}\left(1+\exp\left(-\sqrt{\frac{2}{ 3}}\frac{\psi}{M_{p}}\right)\right)^{-2}, \tag{12}\] \[\text{Palatini}:\quad U = \frac{\lambda M_{p}^{4}}{4\xi^{2}}\text{tanh}^{4}\left(\frac{ \sqrt{\xi}\psi}{M_{p}}\right). \tag{13}\] The potentials in both scenarios approach asymptotically to a constant value \(U\simeq\frac{\lambda M_{p}^{4}}{4\xi^{2}}\) at a large field region, which is suitable for slow-roll inflation. An observed amplitude \(\mathcal{P}_{\zeta}\simeq 2.2\times 10^{-9}\)[34] fixes the relation between \(\xi\) and \(\lambda\) in the metric and Palatini approaches, \(\xi_{\text{met}}\sim 5\times 10^{4}\sqrt{\lambda}\), \(\xi_{\text{Pal}}\sim 10^{10}\lambda\), respectively. The CMB normalization restricts that the coupling to gravity \(\xi\) should be quite large unless the quartic coupling \(\lambda\) is extremely small both in the metric and Palatini formalisms, see also models with non-minimal coupling in metric and Palatini formalisms [35]. ## III Theory of warm inflation revisited The warm inflation dynamics is characterized by the coupled system of the background equation of motion for the inflaton field, \(\psi(t)\), the evolution equation for the radiation energy density, \(\rho_{r}(t)\). Considering the Einstein frame action with the flat FLRW line element, the Friedmann equation for warm inflation taks the form \[H^{2}=\frac{1}{3\,M_{p}^{2}}\left(\rho_{\psi}+\rho_{r}\right)=\frac{1}{3\,M_{p }^{2}}\left(\frac{1}{2}\,\dot{\psi}^{2}+U(\psi)+\rho_{r}\right)\,, \tag{14}\] with \(\dot{\psi}=d\psi/dt\) and \(\rho_{r}\) being the energy density of the radiation fluid with the equation of state given by \(w_{r}=1/3\). The Planck 2018 baseline plus BK15 constraint on \(r\) is equivalent to an upper bound on the Hubble parameter during inflation of \(H_{*}/M_{p}<2.5\times 10^{-5}\left(95\%\,CL\right)\)[34]. The equation of motion of the homogeneous inflaton field \(\phi\) during warm inflation is governed as \[\ddot{\psi}+3H\,\dot{\psi}+U^{\prime}(\psi)=-\Gamma\,\dot{\psi}\,, \tag{15}\] where \(U^{\prime}(\psi)=dU(\psi)/d\psi\). The above relation is equivalent to the evolution equation for the inflaton energy density \(\rho_{\phi}\) given by \[\dot{\rho}_{\psi}+3H(\rho_{\psi}+p_{\psi})=-\Gamma(\rho_{\psi}+p_{\psi})\,, \tag{16}\] with pressure \(p_{\psi}=\dot{\psi}^{2}/2-U(\psi)\), and \(\rho_{\psi}+p_{\psi}=\dot{\psi}^{2}\). Here the RHS of Eq. (16) acts as the source term. In case of radiation, we have \[\dot{\rho}_{r}+4H\rho_{r}=\Gamma\dot{\psi}^{2}\,. \tag{17}\] A condition for warm inflation requires \(\rho_{r}^{1/4}>H\) in which the dissipation potentially affects both the background inflaton dynamics, and the primordial spectrum of the field fluctuations. Following Refs.[42, 15], we consider the general form of the dissipative coefficient, given by \[\Gamma=C_{m}\frac{T^{m}}{\psi^{m-1}}\,, \tag{18}\] where \(m\) is an integer and \(C_{m}\) is associated to the dissipative microscopic dynamics which is a measure of inflaton dissipation into radiation. Different choices of \(m\) yield different physical descriptions, e.g., Refs.[42, 15, 43]. For \(m=1\), the authors of Refs.[19, 36, 10] have discussed the high temperature regime. For \(m=3\), a supersymmetric scenario has been implemented [44, 42, 10]. A minimal warm inflation was also proposed [45, 46, 47]. 
Particularly, it was found that thermal effects suppress the tensor-to-scalar ratio \(r\) significantly, and predict unique non-gaussianities. Apart from the Hubble term, the present of the extra friction term, \(\Gamma\), is relevant in the warm scenario. In slow-roll regime, the equations of motion are governed by \[3H(1+Q)\dot{\psi} \simeq -U_{\psi}\,, \tag{19}\] \[4\rho_{r} \simeq 3Q\dot{\psi}^{2}\,. \tag{20}\] where the dissipative ratio \(Q\) is defined as \(Q=\Gamma/(3H)\) and \(Q\) is not necessarily constant. Since the coefficient \(\Gamma\) depends on \(\phi\) and \(T\), the dissipative ratio \(Q\) may increase or decrease during inflation. The flatness of the potential \(U(\psi)\) in warm inflation is measured in terms of the slow roll parameters which are defined in Ref.[37] given by \[\varepsilon = \frac{M_{p}^{2}}{2}\left(\frac{U^{\prime}}{U}\right)^{2}\,,\quad \eta=M_{p}^{2}\,\frac{U^{\prime\prime}}{U}\,,\quad\beta=M_{p}^{2}\left(\frac{ U^{\prime}\,\Gamma^{\prime}}{U\,\Gamma}\right)\,. \tag{21}\] Since a \(\beta\) term depends on \(\Gamma\) and hence disappears in standard cold inflation. In warm inflationary model, the slow roll parameters are defined as follows: \[\varepsilon_{H}=\frac{\varepsilon}{1+Q}\,,\quad\eta_{H}=\frac{\eta}{1+Q}\,. \tag{22}\] Inflationary phase of the universe in warm inflation takes place when the slow-roll parameters satisfy the following conditions [37, 38, 7]: \[\varepsilon\ll 1+Q\,,\qquad\eta\ll 1+Q\,,\qquad\beta\ll 1+Q\,, \tag{23}\] where the condition on \(\beta\) ensures that the variation of \(\Gamma\) with respect to \(\phi\) is slow enough. Compared to the cold scenario, the power spectrum of warm inflation gets modified and it is given in Refs.[7, 12-14, 37, 39-41] and it takes the form: \[P_{\cal R}(k)=\left(\frac{H_{k}^{2}}{2\pi\dot{\phi}_{k}}\right)^{2}\left(1+2n_{k }+\left(\frac{T_{k}}{H_{k}}\right)\frac{2\sqrt{3}\,\pi\,Q_{k}}{\sqrt{3+4\pi\,Q _{k}}}\right)G(Q_{k})\,, \tag{24}\] where the subscript "\(k\)" signifies the time when the mode of cosmological perturbations with wavenumber "\(k\)" leaves the horizon during inflation and \(n=1/\big{(}\exp H/T-1\big{)}\) is the Bose-Einstein distribution function. Additionally, the function \(G(Q_{k})\) encodes the coupling between the inflaton and the radiation in the heat bath leading to a growing mode in the fluctuations of the inflaton field. It is originally proposed in Ref.[13] and its consequent implications can be found in Refs.[48; 12]. This growth factor \(G(Q_{k})\) is dependent on the form of \(\Gamma\) and is obtained numerically. As given in Refs.[20; 14], we see that for \(\Gamma\propto T\): \[G(Q_{k})_{\rm linear}=1+0.0185Q_{k}^{2.315}+0.335Q_{k}^{1.364}\,. \tag{25}\] In this work, we consider a linear form of \(G(Q_{k})\) with \(Q\gg 1\). Clearly, for small \(Q\), i.e., \(Q\ll 1\), the growth factor does not enhance the power spectrum. It is called the weak dissipation regime. However, for large \(Q\), i.e., \(Q\gg 1\), the growth factor significantly enhances the power spectrum. The latter is called the strong dissipation regime. The primordial tensor fluctuations of the metric give rise to a tensor power spectrum. It is the same form as that of cold inflation given in Ref.[17] as \[P_{T}(k)=\frac{16}{\pi}\Big{(}\frac{H_{k}}{M_{p}}\Big{)}^{2}\,. \tag{26}\] The ratio of the tensor to the scalar power spectrum is expressed in terms of a parameter \(r\) as \[r=\frac{P_{T}(k)}{P_{\cal R}(k)}\,. 
\tag{27}\] With the primordial power spectrum for all the models written in terms of \(Q\), \(\lambda\), and \(C_{1}\), we can demonstrate how the power spectrum depends on the scale. The spectral index of the primordial power spectrum is defined as \[n_{s}-1=\frac{d\ln P_{\cal R}(k)}{d\ln(k/k_{p})}=\frac{d\ln P_{\cal R}(k)}{dQ}\frac{dQ}{dN}\frac{dN}{dx}\Bigg{|}_{k=k_{p}}\,, \tag{28}\] where \(x=\ln(k/k_{p})\) and \(k_{p}\) corresponds to the pivot scale. From the definition of \(N\), it is rather straightforward to show that [23] \[\frac{dN}{dx}=-\frac{1}{1-\varepsilon_{H}}\,. \tag{29}\] Now we compute \(r\) and \(n_{s}\) using Eq.(27) and Eq.(28) for a linear form of the growing mode function \(G(Q)\) given in Eq.(25). Note that \(r\) and \(n_{s}\) are approximately given in Refs. [12; 20; 14].
## IV Models of nonminimally-coupled warm inflation considered
### Metric Formalism
The energy density during inflation is dominated by the potential of the inflaton field. Therefore, we can write \[H^{2}=\frac{\lambda M_{p}^{2}}{12\xi^{2}\left(e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{2}}\,. \tag{30}\] Using this, we can express Eq.(19) for this model as \[\dot{\psi}\approx-\frac{U^{\prime}(\psi)}{3(Q+1)H}=-\frac{\sqrt{2}M_{p}e^{\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}\sqrt{\frac{\lambda M_{p}^{2}}{\xi^{2}}}}{3(Q+1)\left(e^{\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{2}}\,. \tag{31}\] Using Eq.(30) and Eq.(31), we come up with the following expression: \[\frac{H^{2}}{2\pi\dot{\psi}}=-\frac{(Q+1)e^{\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}\sqrt{\frac{\lambda M_{p}^{2}}{\xi^{2}}}}{8\sqrt{2}\pi M_{p}}\,. \tag{32}\] On substituting \(Q=\Gamma/3H=C_{T}T/3H\) in the energy density of radiation given in Eq.(20), we obtain the temperature of the thermal bath as \[T=\frac{1}{6^{1/4}}\left(\frac{\lambda QM_{p}^{4}e^{\frac{2\sqrt{\frac{2}{3}}\psi}{M_{p}}}}{C_{r}\xi^{2}(Q+1)^{2}\left(e^{\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{4}}\right)^{1/4}\,. \tag{33}\] Dividing the above relation by \(H\), we find \[\frac{T}{H}=2^{3/4}\,3^{1/4}\left(\frac{\lambda M_{p}^{2}}{\xi^{2}\left(e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{2}}\right)^{-1/2}\left(\frac{\lambda QM_{p}^{4}e^{\frac{2\sqrt{\frac{2}{3}}\psi}{M_{p}}}}{C_{r}\xi^{2}(Q+1)^{2}\left(e^{\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{4}}\right)^{1/4}\,. \tag{34}\] The dissipation parameter is defined as \(Q=\Gamma/3H=C_{T}T/3H\). In this model of warm inflation, we have considered \(\Gamma=C_{T}T\). On substituting this form of \(\Gamma\) we get \(T=3HQ/C_{T}\). We equate this with Eq.(33) to obtain \[e^{\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}\approx\frac{2\sqrt{\frac{2}{3}}C_{T}^{2}\xi}{3\sqrt{C_{r}}\sqrt{\lambda}Q^{5/2}}\quad\rightarrow\quad\psi=\sqrt{\frac{3}{2}}M_{p}\log\left(\frac{2\sqrt{\frac{2}{3}}C_{T}^{2}\xi}{3\sqrt{C_{r}}\sqrt{\lambda}Q^{5/2}}\right)\,. \tag{35}\] On substituting Eq.(35) in Eqs.(33) and (32), we can express \(P_{R}(k)\) in terms of the variables \(\xi\), \(\lambda\), \(Q\) and \(C_{T}\).
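As a cross-check of this chain of relations, the sketch below evaluates \(P_{R}\) and \(r\) for the metric case at a given dissipation strength \(Q\). Instead of the closed form (35), the field value is obtained by solving the same consistency condition \(\Gamma=C_{T}T=3HQ\) numerically from Eqs. (20), (30) and (31); the value of \(\lambda\) is an assumed placeholder (with \(\xi\propto\sqrt{\lambda}\) the result does not depend on it), the ratio \(\xi/\sqrt{\lambda}\) and the chosen \(Q\) are example values, and combining the numerical \(d\ln P_{\cal R}/dQ\) with \(dQ/dN\) (derived below) and Eq. (29) then gives \(n_{s}\).

```python
import numpy as np
from scipy.optimize import brentq

# Assumed illustrative metric-case parameters; only xi/sqrt(lambda) matters for the output.
lam, Cr, CT, Mp = 1.0e-3, 70.0, 0.045, 1.0
xi = 1.26e6 * np.sqrt(lam)

def U(psi):                                  # Eq. (12)
    return lam*Mp**4/(4*xi**2) * (1 + np.exp(-np.sqrt(2/3)*psi/Mp))**(-2)

def dU(psi):
    e = np.exp(-np.sqrt(2/3)*psi/Mp)
    return lam*Mp**3/(4*xi**2) * 2*np.sqrt(2/3)*e*(1 + e)**(-3)

def H(psi):                                  # Eq. (30)
    return np.sqrt(U(psi)/(3*Mp**2))

def psidot(psi, Q):                          # Eq. (31)
    return -dU(psi)/(3*(Q + 1)*H(psi))

def T(psi, Q):                               # Eq. (20) with rho_r = C_r T^4, i.e. Eq. (33)
    return (3*Q*psidot(psi, Q)**2/(4*Cr))**0.25

def psi_of_Q(Q):                             # numerical counterpart of Eq. (35)
    return brentq(lambda p: CT*T(p, Q) - 3*H(p)*Q, 0.2*Mp, 40*Mp)

def G(Q):                                    # Eq. (25), linear dissipation
    return 1 + 0.0185*Q**2.315 + 0.335*Q**1.364

def P_R(Q):                                  # Eq. (24)
    p = psi_of_Q(Q)
    h, t, pd = H(p), T(p, Q), psidot(p, Q)
    n_BE = 1.0/np.expm1(h/t)
    bracket = 1 + 2*n_BE + (t/h)*2*np.sqrt(3)*np.pi*Q/np.sqrt(3 + 4*np.pi*Q)
    return (h**2/(2*np.pi*pd))**2 * bracket * G(Q)

def r_of_Q(Q):                               # Eqs. (26)-(27)
    return (16/np.pi)*(H(psi_of_Q(Q))/Mp)**2 / P_R(Q)

Q = 5.0                                      # example point in the Q > 1 regime
dlnP_dQ = (np.log(P_R(1.01*Q)) - np.log(P_R(0.99*Q)))/(0.02*Q)
print(P_R(Q), r_of_Q(Q), dlnP_dQ)
```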
Also, from their definitions in Eq.(22), the slow-roll parameters can be written as \[\varepsilon_{H} = \frac{4e^{-\frac{2\sqrt{\frac{2}{3}}\psi}{M_{p}}}}{3(Q+1)\left(e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{2}}=\frac{9\lambda Q^{5}C_{r}}{2\xi^{2}(Q+1)C_{T}^{4}\left(\frac{3\sqrt{\frac{3}{2}}\sqrt{\lambda}Q^{5/2}\sqrt{C_{r}}}{2\xi C_{T}^{2}}+1\right)^{2}}\,, \tag{36}\] \[\eta_{H} = \frac{4\xi^{2}\left(e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{2}}{\lambda(Q+1)M_{p}^{2}}\Bigg{(}\frac{\lambda M_{p}^{2}e^{-\frac{2\sqrt{\frac{2}{3}}\psi}{M_{p}}}}{\xi^{2}\left(e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{4}}-\frac{\lambda M_{p}^{2}e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}}{3\xi^{2}\left(e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{3}}\Bigg{)}\] (37) \[= \frac{16\left(9C_{r}\lambda Q^{5}-\sqrt{6}\sqrt{C_{r}}C_{T}^{2}\sqrt{\lambda}\xi Q^{5/2}\right)}{(Q+1)\left(3\sqrt{6}\sqrt{C_{r}}\sqrt{\lambda}Q^{5/2}+4C_{T}^{2}\xi\right)^{2}}\,.\] Using Eq.(30), the tensor power spectrum for this model is evaluated, and we can use Eq.(35) to express \(P_{T}(k)\) in terms of the model parameters \[P_{T}(k)=\frac{16}{\pi}\Big{(}\frac{H}{M_{p}}\Big{)}^{2}=\frac{4\lambda}{3\pi\xi^{2}\left(e^{-\frac{\sqrt{\frac{2}{3}}\psi}{M_{p}}}+1\right)^{2}}=\frac{4\lambda}{3\pi\xi^{2}\left(\frac{3\sqrt{\frac{3}{2}}\sqrt{\lambda}Q^{5/2}\sqrt{C_{r}}}{2\xi C_{T}^{2}}+1\right)^{2}}\,. \tag{38}\] Next, we evaluate how the dissipation parameter, \(Q\), evolves with the number of e-folds, \(N\). We differentiate Eq.(35) w.r.t. \(N\) and then again write \(d\psi/dN=-\dot{\psi}/H\). By using Eqs. (30), (31) and (35), we obtain \[\frac{dQ}{dN}=-\frac{2\sqrt{6}\sqrt{\lambda}Q^{5/2}\sqrt{C_{r}}}{5\xi C_{T}^{2}}\,, \tag{39}\] where we have assumed the large-field approximation \(\xi\phi^{2}/M_{p}^{2}\gg 1\), or equivalently \(\psi\gg\sqrt{3/2}M_{p}\). We show the evolution of \(\psi\) (in units of \(M_{p}\)) and of the temperature \(T\) during warm inflation for the metric case in Fig.(1). The dissipation parameter, \(Q\), depending on both \(\psi\) and \(T\), is not a constant but rather evolves during inflation. This behaviour can also be seen in Fig.(1). Additionally, as shown in Fig.(1), we find that the energy density of radiation does not change appreciably when the modes of cosmological interest cross the horizon.
### Palatini Formalism
We follow the preceding subsection. Since the energy density during inflation is dominated by the potential of the inflaton field, for the Palatini case we can write \[H^{2}=\frac{\lambda M_{p}^{2}\tanh^{4}\left(\frac{\sqrt{\xi}\psi}{M_{p}}\right)}{12\xi^{2}}\,. \tag{40}\] Using the above relation, we can express Eq.(19) for this model as \[\dot{\psi}\approx-\frac{U^{\prime}(\psi)}{3(Q+1)H}=-\frac{\lambda M_{p}^{3}\tanh^{3}\left(\frac{\sqrt{\xi}\psi}{M_{p}}\right)\mathrm{sech}^{2}\left(\frac{\sqrt{\xi}\psi}{M_{p}}\right)}{2\sqrt{3}\xi^{3/2}(Q+1)}\left(\frac{C_{T}^{4}\lambda\xi M_{p}^{2}}{\left(3\sqrt{C_{r}}\sqrt{\lambda}Q^{5/2}+4C_{T}^{2}\xi^{3/2}\right)^{2}}\right)^{-1/2}\,. \tag{41}\] Using Eq.(40) and Eq.(41), we come up with the following expression: \[\frac{H^{2}}{2\pi\dot{\psi}}=-\frac{(Q+1)\sinh\left(\frac{\sqrt{\xi}\phi}{M_{p}}\right)\cosh\left(\frac{\sqrt{\xi}\phi}{M_{p}}\right)}{12\sqrt{3}\pi M_{p}\sqrt{\xi}}\left(\frac{C_{T}^{4}M_{p}^{2}\xi}{C_{r}Q^{5}\left(\frac{4C_{T}^{2}\xi^{3/2}}{3\sqrt{C_{r}}\sqrt{\lambda}Q^{5/2}}+1\right)^{2}}\right)^{1/2}\,.
\tag{42}\] Figure 1: The behaviour of the homogeneous inflaton field \(\psi\) (in units of \(M_{p}\)), the dissipation parameter \(Q\), the energy density in radiation, and temperature \(T\) of the Universe is shown as a function of the number of efolds \(N\) with the dissipation coefficient \(\Gamma=C_{T}T\) in the metric case. To generate this plot, we take \(\xi=10^{6.1}\sqrt{\lambda}\), \(C_{r}=70\), \(C_{T}=0.045\). On substituting \(Q=\Gamma/3H=C_{T}T/3H\) in the energy density of radiation given in Eq.(20), we obtain the temperature of the thermal bath as \[T=\left(\frac{\lambda M_{p}^{4}Q\tanh^{2}\left(\frac{\sqrt{\xi}\phi}{M_{p}} \right)\mathrm{sech}^{4}\left(\frac{\sqrt{\xi}\phi}{M_{p}}\right)}{C_{r}\xi(Q+ 1)^{2}}\right)^{1/4}. \tag{43}\] We divide the above relation with \(H\) to obtain \[\frac{T}{H}=\frac{9}{4\sqrt{\frac{C_{T}^{4}M_{p}^{2}\xi}{C_{r}Q^{5}\left(\frac {4C_{T}^{2}\xi^{3/2}}{3\sqrt{C_{r}\sqrt{\lambda}Q^{5/2}}}+1\right)^{2}}}}\left( \frac{\lambda^{2}M_{p}^{4}Q^{6}\tanh^{6}\left(\frac{\sqrt{\xi}\phi}{M_{p}} \right)\mathrm{sech}^{4}\left(\frac{\sqrt{\xi}\phi}{M_{p}}\right)\left(\frac{ 4C_{T}^{2}\xi^{3/2}}{3\sqrt{C_{r}\sqrt{\lambda}Q^{5/2}}}+1\right)^{2}}{C_{T}^ {4}\xi^{4}(Q+1)^{2}}\right)^{1/4} \tag{44}\] The dissipation parameter is defined as \(Q=\Gamma/3H\). In this model of warm inflation, we have considered \(\Gamma=C_{T}T\). On substituting this form of \(\Gamma\) we get \(T=3HQ/C_{T}\). We equate this with Eq.(43) to obtain \[\frac{\psi}{M_{p}}=\frac{1}{\sqrt{\xi}}\sinh^{-1}\left(\frac{2C_{T}\xi^{3/4}}{ \sqrt{3}\sqrt{C_{r}\sqrt{\lambda}}Q^{3/4}\sqrt{Q+1}}\right)\,. \tag{45}\] On substituting Eq.(45) in Eqs.(43) and (42), we can express \(P_{R}(k)\) in terms of variables \(\xi,\,\lambda,\,Q\) and \(C_{T}\). Also, from its definition in Eq.(22), the slow roll parameters can be written \[\varepsilon_{H} = \frac{8\xi\mathrm{csch}^{2}\left(\frac{\sqrt{\xi}\psi}{M_{p}} \right)\mathrm{sech}^{2}\left(\frac{\sqrt{\xi}\phi}{M_{p}}\right)}{Q+1}=\frac {6\sqrt{C_{r}}\sqrt{\lambda}Q^{3/2}}{C_{T}^{2}\sqrt{\xi}\left(\frac{4C_{T}^{2 }\xi^{3/2}}{3\sqrt{C_{r}\sqrt{\lambda}Q^{3/2}}(Q+1)}+1\right)}\,, \tag{46}\] \[\eta_{H} = \frac{4\xi^{2}\coth^{4}\left(\frac{\sqrt{\xi}\phi}{M_{p}}\right)} {\lambda M_{p}^{2}(Q+1)}\Bigg{(}\frac{3\lambda M_{p}^{2}\tanh^{2}\left(\frac{ \sqrt{\xi}\phi}{M_{p}}\right)\mathrm{sech}^{4}\left(\frac{\sqrt{\xi}\phi}{M_{p }}\right)}{\xi}-\frac{2\lambda M_{p}^{2}\tanh^{4}\left(\frac{\sqrt{\xi}\phi}{ M_{p}}\right)\mathrm{sech}^{2}\left(\frac{\sqrt{\xi}\phi}{M_{p}}\right)}{\xi}\Bigg{)}\] (47) \[= \frac{9C_{r}Q^{5}\left(\frac{4C_{T}^{2}\xi^{3/2}}{3\sqrt{C_{r} \sqrt{\lambda}Q^{5/2}}}+1\right)^{2}}{4C_{T}^{4}M_{p}^{2}\xi(Q+1)}\Bigg{(} \frac{4C_{T}^{2}\sqrt{\lambda}M_{p}^{2}\sqrt{\xi}}{\sqrt{C_{r}}Q^{5/2}\left( \frac{4C_{T}^{2}\xi^{3/2}}{3\sqrt{C_{r}\sqrt{\lambda}Q^{5/2}}}+1\right)^{3}}\] \[-\frac{32C_{T}^{4}M_{p}^{2}\xi^{2}}{9C_{r}Q^{5}\left(\frac{4C_{T} ^{2}\xi^{3/2}}{3\sqrt{C_{r}\sqrt{\lambda}Q^{5/2}}}+1\right)^{3}}\Bigg{)}\,.\] Using Eq.(30), the tensor power spectrum for this model is evaluated and we can use Eq.(45) and express \(P_{T}(k)\) in terms of model parameters \[P_{T}(k)=\frac{16}{\pi}\Big{(}\frac{H}{M_{p}}\Big{)}^{2}=\frac{13824\sqrt[3]{ \frac{2}{5}}3^{2/3}C_{T}^{4}\xi\left(\frac{\sqrt{C_{r}}\sqrt{\lambda}n}{C_{T} ^{2}\sqrt{\xi}}\right)^{10/3}}{125\pi C_{r}\left(\frac{\tau_{2}\sqrt[3]{3}( \frac{2}{5})^{2/3}C_{t}^{2}\xi^{3/2}\left(\frac{\sqrt{C_{r}}\sqrt{\lambda}n}{C _{T}^{2}\sqrt{\xi}}\right)^{5/3}}{5\sqrt{C_{r}\sqrt{\lambda}}}+1\right)^{2}}\,. 
\tag{48}\] Next, we evaluate how the dissipation parameter, \(Q\), evolves with the number of e-folds, \(N\). We differentiate Eq.(45) w.r.t. \(N\) and then again write \(d\psi/dN=-\dot{\psi}/H\). By using Eqs. (40), (41) and (45), we obtain \[\frac{dQ}{dN}=-\frac{12\sqrt{C_{r}}\sqrt{\lambda}Q^{5/2}}{5C_{T}^{2}\sqrt{\xi}}\,, \tag{49}\] where we have assumed the large-field approximation \(\xi\phi^{2}/M_{p}^{2}\gg 1\), or equivalently \(\psi\gg\sqrt{3/2}M_{p}\). The evolution of \(\psi\) (in units of \(M_{p}\)) and of the temperature \(T\) during warm inflation in the Palatini case is displayed in Fig.(2). Similarly, the dissipation parameter, \(Q\), depending on both \(\psi\) and \(T\), is also not a constant but rather evolves during inflation. This behaviour can also be seen in Fig.(2). We also find that the energy density of radiation does not change appreciably when the modes of cosmological interest cross the horizon, as shown in Fig.(2).
Figure 2: The behaviour of the homogeneous inflaton field \(\psi\) (in units of \(M_{p}\)), the dissipation parameter \(Q\), the energy density in radiation, and the temperature \(T\) of the Universe is shown as a function of the number of e-folds \(N\) with the dissipation coefficient \(\Gamma=C_{T}T\) in the Palatini case. To generate this plot, we take \(\xi=10^{12.8}\lambda\), \(C_{r}=70\), \(C_{T}=0.045\).
## V Confrontation with the Planck 2018 data
We constrain our results using the amplitude of the primordial power spectrum. Considering Eq.(24), we find that our predictions can reproduce the preferred value of \(P_{R}\sim A_{s}=2.2\times 10^{-9}\), as shown in Fig.(3). We notice for the metric case that, in order to produce the correct value of \(P_{R}\), the magnitude of \(\psi\) has to increase when we decrease \(C_{T}\). However, in the Palatini case, the number of e-folds decreases when we decrease \(C_{T}\). We compute the inflationary observables and then compare them with the Planck 2018 data. We plot the derived \(n_{s}\) and \(r\) for our models along with the observational constraints from the Planck 2018 data, displayed in Fig.(4). In the left panel, we used \(\xi=10^{6.1}\sqrt{\lambda},\,C_{r}=70\) and \(N=50,\,60\) for \(C_{T}\in[0.001,0.06]\). Our results obtained in the metric case show that, for \(N=50\), \(C_{T}\in[0.0161,0.0276]\), while for \(N=60\), \(C_{T}\in[0.008,0.0206]\) is required in order to have the derived \(n_{s}\) consistent with the Planck 2018 observations at \(1\sigma\) CL. Additionally, we can obtain \(n_{s}=0.9649\) using \(C_{T}=0.0214\) and \(C_{T}=0.0139\) for \(N=50\) and \(N=60\), respectively. Likewise, for the right panel, we used \(\xi=10^{12.8}\lambda,\,C_{r}=70\) and \(N=50\), \(60\) for \(C_{T}\in[0.001,0.06]\). Our results obtained in the Palatini case show that, for \(N=50\), \(C_{T}\in[0.010,0.022]\), while for \(N=60\), \(C_{T}\in[0.00053,0.0141]\) is required in order to have the derived \(n_{s}\) consistent with the Planck 2018 observations at \(1\sigma\) CL. Additionally, we can obtain \(n_{s}=0.9649\) using \(C_{T}=0.0156\) and \(C_{T}=0.007\) for \(N=50\) and \(N=60\), respectively.
Figure 3: We constrain our models with the amplitude of the primordial power spectrum. In the left panel, we show the metric case with \(\xi\sim 1.26\times 10^{6}\sqrt{\lambda}\) and \(C_{T}=0.015,\,0.045\) with \(N=50,\,60\). The parameters used provide the preferred amplitude of the primordial power spectrum of \(A_{s}\sim 2.2\times 10^{-9}\).
In the right panel, we display the Palatini case with \(\xi\sim 6.31\times 10^{12}\lambda\) and \(C_{T}=0.015,\,0.025\). The parameters used also provide the preferred amplitude of the primordial power spectrum. A dotted horizontal line denotes \(A_{s}=2.2\times 10^{-9}\).
## VI Conclusion
In this work, we studied the non-minimally-coupled Higgs model in the context of the warm inflation scenario using both the metric and Palatini approaches. We particularly considered a dissipation parameter of the form \(\Gamma=C_{T}T\), with \(C_{T}\) being a coupling parameter, and focused only on the strong regime of the interaction between the inflaton and the radiation fluid. We computed all relevant cosmological parameters and constrained the models using the observational Planck 2018 data. We found that the \(n_{s}\) and \(r\) values are consistent with the observational bounds. Using the observational data, we obtained a relation between \(\xi\) and \(\lambda\) for the non-minimally-coupled warm Higgs inflation in both the metric and Palatini cases. Comparing the two approaches, the energy density and the temperature of the thermal bath in the metric case (see Fig.1) are many orders of magnitude larger than those found in the Palatini case (see Fig.2). To produce \(n_{s}\) and \(r\) in agreement with observation, we found that the required values of \(\xi\) are about two orders of magnitude higher than those of the usual (cold) non-minimally-coupled Higgs inflation [30; 31]. However, we noticed that the ratio \(\xi^{2}/\lambda\) of the metric case in this work is four orders of magnitude higher than that of the model presented in Ref.[16].
Figure 4: We compare the theoretical predictions of \((r,\,n_{s})\) in the strong limit \(Q\gg 1\) for the metric (left panel) and Palatini (right panel) approaches. We consider a linear form of the growing mode function \(G(Q_{N})\). For the plots, we have used \(C_{r}=70\), \(\xi=1.26\times 10^{6}\sqrt{\lambda}\) for the metric case, and \(C_{r}=70\), \(\xi=6.31\times 10^{12}\lambda\) for the Palatini case. We consider theoretical predictions of \((r,\,n_{s})\) for different values of \(C_{T}\) with Planck’18 results for TT, TE, EE, +lowE+lensing+BK15+BAO.
###### Acknowledgements.
P. Channuie acknowledged the Mid-Career Research Grant 2020 from the National Research Council of Thailand (NRCT5-RSA63019-03).
2305.14851
Sharpness-Aware Data Poisoning Attack
Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) against data poisoning attacks. These attacks aim to inject poisoning samples into the models' training dataset such that the trained models have inference failures. While previous studies have executed different types of attacks, one major challenge that greatly limits their effectiveness is the uncertainty of the re-training process after the injection of poisoning samples, including the re-training initialization or algorithms. To address this challenge, we propose a novel attack method called ''Sharpness-Aware Data Poisoning Attack (SAPA)''. In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the worst re-trained model. It helps enhance the preservation of the poisoning effect, regardless of the specific retraining procedure employed. Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances various types of poisoning attacks.
Pengfei He, Han Xu, Jie Ren, Yingqian Cui, Hui Liu, Charu C. Aggarwal, Jiliang Tang
2023-05-24T08:00:21Z
http://arxiv.org/abs/2305.14851v2
# Sharpness-Aware Data Poisoning Attack ###### Abstract Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) against data poisoning attacks. These attacks aim to inject poisoning samples into the models' training dataset such that the trained models have inference failures. While previous studies have executed different types of attacks, one major challenge that greatly limits their effectiveness is the uncertainty of the re-training process after the injection of poisoning samples, including the re-training initialization or algorithms. To address this challenge, we propose a novel attack method called "_Sharpness-Aware Data Poisoning Attack (SAPA)_". In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the worst re-trained model. It helps enhance the preservation of the poisoning effect, regardless of the specific retraining procedure employed. Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances various types of poisoning attacks. ## 1 Introduction The rise of machine learning (ML) models that collect training data from public sources, such as large language models (Brown et al., 2020; Radford et al., 2019) and large visual models (Ramesh et al., 2021; Radford et al., 2021; Rombach et al., 2022), highlights the need for these models to be robust against data poisoning attacks (Steinhardt et al., 2017; Shafahi et al., 2018; Chen et al., 2017; Ren et al., 2022). Data poisoning attack refers to the threats of an adversary injecting poisoning data samples into the collected training dataset, such that the trained ML models can have malicious behaviors. For example, by injecting poisoning samples, the adversary's objective is to cause a poisoned model has a poor overall accuracy (known as _un-targeted attacks_(Steinhardt et al., 2017; Li et al., 2020; Ren et al., 2022)), or misclassifies a specified subset of test samples (known as _targeted attacks_(Shafahi et al., 2018; Zhu et al., 2019)). Additionally, in _backdoor attacks_(Chen et al., 2017; Saha et al., 2020; Tran et al., 2018), the adversary aims to create "backdoors" in the poisoned model such that the model gives a specific output as the adversary desires if the backdoor trigger is presented, regardless of the actual input. Many poisoning attacks in Deep Neural Networks (DNNs) face a common obstacle that limits their effectiveness, which is the uncertainty of the re-training process after the injection of poisoning samples. This challenge is also highlighted in previous studies such as [16, 17, 14, 15]. In particular, most existing methods (as reviewed in Section 2 and Appendix B) generate poisoning samples based on the effect of only one victim model [14, 15] or several victim models [14, 13]. However, during the re-training stage, the poisoned model may converge to a different point due to the uncertainty of the training process, such as model initialization and hyperparameter selection. As a consequence, the injected poisoning samples could lose their efficacy and the poisoning effect is compromised. In fact, if we consider the poisoning attacks as general bilevel optimization problems [1, 13] (Eq. 2 in Section 3), poisoning problems in DNNs can be categorized as "multiple inner minima" bilevel optimization problems [15, 16, 17, 18]. Prior works [15, 16, 17] have demonstrated the theoretical difficulty and complexity of solving "multiple inner minima" problems. 
Among them, the work [16] points out that simply applying substitute solvers for "single inner minima" problems, which is the strategy followed by many poisoning attacks, is prone to fail. These theoretical studies can also explain the limited effectiveness of many existing poisoning attacks. To overcome this challenge, we introduce a novel attack method, _Sharpness-Aware Data Poisoning Attack (SAPA)_. In this method, we aim to inject poisoning samples that optimize the poisoning effect of the "worst" re-trained model. In other words, even the worst re-trained model (which achieves the relatively worst poisoning effect) can have a strong poisoning effect. In this way, the re-trained model is highly likely to converge to a point where the poisoning effect is present. Furthermore, we find that this strategy can be successfully accomplished by tackling the victim model's loss landscape sharpness [11]. Notably, the loss landscape sharpness is more frequently used to explain the generalization characteristics of DNNs. In this work, we show that it is possible to leverage existing sharpness-based algorithms (such as [11]) to advance poisoning attacks. Through experimental studies, we show that our proposed method SAPA is a general and principled strategy that can significantly enhance various types of poisoning attacks, including targeted attacks, un-targeted attacks and backdoor attacks, without an obvious computational overhead.
## 2 Related work
### End-to-end Data Poisoning Attacks
Data poisoning attacks [13, 14] refer to the adversarial threat during the data collection phase of training ML models. These attacks manipulate the training data so that the trained models have malicious behaviors. Common objectives of poisoning attacks include causing a poisoned model to have a poor overall accuracy (_un-targeted attacks_[14, 15, 16, 17]), misclassifying a specified subset of test samples (_targeted attacks_[14, 15]), or inserting backdoors (_backdoor attacks_[14, 15, 16]). Notably, in this work, we focus on poisoning attacks in an "end-to-end" manner, which means that the victim model is trained on the poisoned dataset from scratch. It is a practical setting for poisoning attacks, as the attackers cannot take control of the re-training process. In general, the feasibility of poisoning attacks highly depends on the complexity of the victim model. For linear models, such as logistic regression and SVM, there are methods [13, 14, 15] that exactly solve the poisoning problem and find optimal attacks. For DNNs, the exact solution is generally considered intractable, due to the complexity and uncertainty of the re-training process. Therefore, many heuristic methods have been proposed, such as Gradient Match [14] and Feature Collision [14]. Notably, one issue, which we call the "multiple inner minima" problem (see Section 2.2), can greatly hinder their success. In poisoning attacks, it means that using a single model or several models to craft poisoning samples is very likely to fail on the newly re-trained models [14, 15]. Due to this difficulty, many works [14] focus on the transfer learning setting [14, 15], where the attacker has knowledge of a pre-trained model, and the victim model is fine-tuned on this pre-trained model. However, they are usually ineffective in the end-to-end setting. Notably, there are works ([14]; Geiping et al., 2020) that devise the "ensembling and re-initialization" strategy to take several victim models into consideration.
However, it is time- and memory-consuming, and cannot sufficiently cover the re-training possibilities. In Appendix B, we provide a more comprehensive review of existing poisoning attacks.
### Bilevel Optimization
Bilevel optimization has extensive applications in machine learning (Sinha et al., 2017), and most successful works require that the inner minimization has a single minimum (Maclaurin et al., 2015; Franceschi et al., 2017, 2018). However, poisoning attacks in DNNs can in general be categorized as a "multiple inner minima" problem. This is because the inner problem of poisoning attacks refers to training a DNN on the poisoned dataset, which has multiple minima (Choromanska et al., 2015). In the bilevel optimization literature, the challenge of "multiple inner minima" problems is widely recognized and theoretically demonstrated by existing works (Sow et al., 2022; Li et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2021). Notably, there are methods (Liu et al., 2020; Li et al., 2020; Liu et al., 2021; Sow et al., 2022) which are specifically designed for solving multiple inner minima problems. However, their algorithms have not been explored for applications in poisoning attacks. Moreover, many of them (Liu et al., 2020; Liu et al., 2021; Li et al., 2020) require updating the outer variable only after fully solving the inner problem, which is very inefficient for poisoning attacks in DNNs.
### Loss Landscape Sharpness
In this paper, our proposed method involves calculating and optimizing the loss landscape sharpness of DNNs. The notion of the loss landscape sharpness and its connection to generalization has been extensively studied, both empirically (Keskar et al., 2016; Jiang et al., 2019; Neyshabur et al., 2017; Dinh et al., 2017) and theoretically (McAllester, 1999; Dziugaite and Roy, 2017; Neyshabur et al., 2017). These studies have motivated the development of methods (Hochreiter and Schmidhuber, 1997; Mobahi, 2016; Izmailov et al., 2018; Chaudhari et al., 2019) that aim to improve model generalization by manipulating or penalizing sharpness. Among these methods, Sharpness-Aware Minimization (SAM) (Foret et al., 2020; Andriushchenko and Flammarion, 2022; Wen et al., 2023) has been shown to be highly effective and scalable for DNNs. In this paper, we explore the use of sharpness for data poisoning attacks.
## 3 Preliminary
In this section, we introduce the definition of (loss landscape) sharpness, as well as the formulations of several of the most frequently studied data poisoning attacks. We start by providing some necessary notation. In this paper, we focus on classification tasks, with input \(x\in\mathcal{X}\) and label \(y\in\mathcal{Y}\) following the distribution \(D\) which is supported on \(\mathcal{X}\times\mathcal{Y}\). On this data, a classification model \(f(\cdot;\theta):\mathcal{X}\rightarrow\mathcal{Y}\) is trained on the training set \(D_{tr}=\{(x_{i},y_{i}),i=1,...,n\}\), whose \(n\) data samples follow \(D\). The model parameter \(\theta\) is from the parameter space \(\Theta\). We define the loss function as \(l(f(x;\theta),y)\), and the (training) loss as \(L(\theta;D_{tr})=\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i};\theta),y_{i})\).
### Loss Landscape Sharpness
We follow the prior work (Foret et al., 2020) to define the loss landscape sharpness (referred to as "sharpness" for simplicity) as in Eq. 1. It measures how quickly the model's training loss can be increased by moving its parameter to a nearby region. Following the definition in Eq.
1, it calculates the training loss increase after the model parameter \(\theta\) is perturbed by \(v\), whose \(l_{p}\) norm is constrained by \(||v||_{p}\leq\rho\): \[S^{\rho}(\theta;D_{tr})=\max_{||v||_{p}\leq\rho}\Big{[}L(\theta+v;D_{tr})\Big{]}-L(\theta;D_{tr}). \tag{1}\] In this work, we focus on the definition of sharpness as in Eq.1. Note that there are other sharpness definitions (Andriushchenko and Flammarion, 2022); we will investigate them in future work. Given this definition of sharpness, previous works (Foret et al., 2020; Wen et al., 2023; Andriushchenko and Flammarion, 2022) establish theoretical and empirical connections between sharpness and model generalization. Generally, a smaller sharpness indicates a better generalization performance.
### Data Poisoning Attacks
In this subsection, we discuss several of the most frequently studied poisoning attacks, which are proposed for various purposes. Since the training set \(D_{tr}\) has \(n\) samples, we assume that the attacker selects a subset \(D_{q}\) from \(D_{tr}\), which constitutes a fraction \(\epsilon\in[0,1]\) of \(D_{tr}\), and replaces it with a poisoning set \(D_{p}\). Usually, the samples in \(D_{p}\) are obtained by modifying samples in \(D_{q}\). During the re-training stage, the model is trained from scratch on the perturbed training set \((D_{tr}-D_{q})\cup\bar{D}_{p}\), which we denote as "\(D_{tr}+D_{p}\)" for simplicity. Although different attacks may have different purposes and formulations, we give a general formulation as in Eq.2: the attacker aims to find the poisoning samples \(D_{p}\), such that the trained model (i.e., via Empirical-Risk-Minimization (ERM)) has an optimized malicious behavior towards data samples from a victim set \(D_{T}\): \[\min_{D_{p}\in\mathcal{C}}\ \ Q(\theta^{*},D_{T}),\ \ \text{s.t.}\ \ \theta^{*}=\operatorname*{arg\,min}_{\theta}L(\theta;D_{tr}+D_{p}) \tag{2}\] where \(\mathcal{C}\) denotes the constraint on the poisoning samples. Notably, the design of the "_adversarial objective_" \(Q(\cdot)\) and the "_victim set_" \(D_{T}\) is based on the purpose of the adversary. By giving different formulations of \(Q(\cdot)\) and definitions of \(D_{T}\), the attacker can achieve various adversarial goals. **Un-targeted Attacks**. In un-targeted attacks (Steinhardt et al., 2017), the attacker aims to cause the trained model to have a low overall test accuracy. However, because the attacker usually does not have knowledge of the test distribution, most works (Steinhardt et al., 2017; Fowl et al., 2021; Huang et al., 2021) define the adversarial objective as follows, to maximize the model error on the clean training set \(D_{tr}\): \[Q_{ut}(\theta^{*},D_{tr})=-L(\theta^{*};D_{tr}) \tag{3}\] **Targeted attacks**. In targeted attacks (Shafahi et al., 2018), the attacker aims to cause the trained model to misclassify a specified test sample or a subset of test samples. For example, the attacker targets a victim person and has knowledge of \(k\ (k\geq 1)\) photographs of this person\({}^{1}\), \(\{(x_{i}^{vic},y^{vic})\}_{i=1}^{k}\). The attacker aims to cause the model to misclassify the photos of this person while preserving the overall accuracy on the rest. Therefore, they define the victim set \(D_{T}=\{(x_{i}^{vic},y^{vic})\}_{i=1}^{k}\) and the adversarial objective as: Footnote 1: We assume that the samples of the victim are from the same class \(y^{vic}\), following most existing works (Shafahi et al., 2018).
\[Q_{t}(\theta^{*},D_{T})=\sum_{(x_{i},y_{i})\in D_{T}}l(f(x_{i};\theta^{*}),y^{obj}), \tag{4}\] where \(y^{obj}\) is a designated class different from \(y^{vic}\). **Backdoor attacks**. In backdoor attacks (Chen et al., 2017; Souri et al., 2022), the attacker aims to take control of the model prediction by injecting samples with "triggers". In particular, if there is a trigger, such as a patch \(p\), present in an image, the poisoned model will predict this sample as a specified class \(y^{obj}\). Otherwise, the poisoned model will make a correct prediction. In backdoor attacks, most works target samples from a specific victim class \(y=y^{vic}\) and define the victim set as \(D_{T}=\{(x,y)\in D_{tr}|y=y^{vic}\}\). During the attack, they aim to solve the adversarial objective: \[Q_{b}(\theta^{*},D_{T})=\sum_{(x_{i},y_{i})\in D_{T}}l(f(x_{i}\oplus p;\theta^{*}),y^{obj}) \tag{5}\] where \(x\oplus p\) denotes the process of attaching \(p\) to a clean image \(x\). In this way, the poisoned model is likely to predict the samples with triggers as class \(y^{obj}\).
## 4 Method
In this section, we introduce our proposed method Sharpness-aware Data Poisoning Attack (SAPA). In Section 4.1, we will first introduce SAPA in targeted and backdoor attacks, because they share similar adversarial objectives. Then we introduce SAPA in un-targeted attacks in Section 4.2. Based on our discussion, we demonstrate that the strategy of considering model sharpness in SAPA is general and can be incorporated into various types of poisoning attacks.
### SAPA in Targeted / Backdoor Attacks
We will first introduce SAPA in targeted and backdoor attacks. Recalling the definitions of these two attacks in Section 3.2, they share a similar adversarial objective \(Q(\cdot)\), which is the (positive) loss value on the victim set \(D_{T}\). Thus, we discuss SAPA as a general form for these two attacks in this subsection, although they have minor differences during implementation. In Appendix C, we provide detailed algorithms for both of them. **Objective.** Based on our discussion in Sections 1 and 2, one major challenge for poisoning attacks originates from the existence of multiple solutions of the inner minimization in Eq. 2. Therefore, in our proposed method SAPA, we focus on the _Worst-case Poisoned Model_, which is the inner minimum (the poisoned model) that has the worst poisoning effect, \(\theta^{\prime}=\operatorname*{arg\,max}_{\theta\in\Theta_{p}}\;Q(\theta;D_{T})\). Here, the space \(\Theta_{p}\) refers to the _Poisoned Model Space_, which is the set of all models that are trained on the poisoned dataset and have a small training loss: \(\Theta_{p}=\{\theta:L(\theta;D_{tr}+D_{p})\leq\tau\}\). In general, our proposed method SAPA aims to optimize the poisoning effect on the worst-case poisoned model. Therefore, the general objective of SAPA is: \[\min_{D_{p}}\;Q(\theta^{\prime};D_{T}), \tag{6}\] \[\text{where }\theta^{\prime}=\operatorname*{arg\,max}\;Q(\theta;D_{T}),\;\text{s.t. }L(\theta;D_{tr}+D_{p})\leq\tau. \tag{7}\] By solving this problem, we can find poisoning samples \(D_{p}\) such that even the worst poisoned model \(\theta^{\prime}\) has a strong poisoning effect (with a small \(Q(\theta^{\prime};D_{T})\)). Therefore, for models that are re-trained on the poisoned dataset, the poisoning effect is also very likely to persist. Admittedly, it is also hard to solve the above problem exactly.
However, we propose to approximate the term \(Q(\theta^{\prime};D_{T})\) by leveraging the existing notion of model sharpness (Foret et al., 2020). In detail, given a fixed \(D_{p}\), we approximate \(Q(\theta^{\prime};D_{T})\) by: \[Q(\theta^{\prime};D_{T})\approx\max_{\|v\|_{p}\leq\rho}Q(\theta^{*}+v;D_{T}),\;\text{where }\theta^{*}=\operatorname*{arg\,min}_{\theta\in\Theta}L(\theta;D_{tr}+D_{p}) \tag{8}\] In detail, we first train the model \(\theta^{*}\) on \((D_{tr}+D_{p})\) (i.e., via ERM), so that \(\theta^{*}\) satisfies the constraint \(L(\theta^{*};D_{tr}+D_{p})\leq\tau\). Then, we locally maximize the term \(Q(\theta^{*}+v)\) by perturbing the model parameter \(\theta^{*}\) with a vector \(v\) (which has a limited norm \(\|v\|_{p}\leq\rho\)). In this way, the perturbed model \(\theta^{*}+v\) has a worse poisoning effect compared to \(\theta^{*}\), but it is still likely to fall into the space \(\Theta_{p}\) (because \(\theta^{*}+v\) is not far away from \(\theta^{*}\)). Remarkably, the form of the objective in Eq. 8 resembles the definition of model sharpness in Eq. 1. Therefore, we call the term in Eq. 8 the _Sharpness-aware adversarial objective_, and we desire to find \(D_{p}\) to optimize this objective: \[\min_{D_{p}}\;\max_{\|v\|_{p}\leq\rho}\;Q(\theta^{*}+v;D_{T}),\;\text{ s.t. }\theta^{*}=\operatorname*{arg\,min}_{\theta\in\Theta}L(\theta;D_{tr}+D_{p}) \tag{9}\] In general, the formulation of SAPA can be seen as a modification of existing attacks: it replaces the adversarial objective (Eq. 2) of existing attacks with the sharpness-aware adversarial objective. Intuitively, compared with the traditional adversarial objective (Eq. 2), in SAPA the perturbation of the inner minimum \(\theta^{*}\) with \(v\) enables the algorithm to escape "bad inner minima" where the poisoning effect is unstable. Thus, SAPA has the potential to induce the re-trained models to have a stable poisoning effect under the uncertainty of the re-training process. **Algorithm.** For targeted / backdoor attacks, there are existing algorithms such as Gradient Matching (Geiping et al., 2020). We show that we can incorporate the idea of calculating model sharpness into this method to solve the objective of SAPA in Eq. 9. In detail, if we denote the sharpness-aware poisoning objective as \(Q^{S}(\theta^{*};D_{T})=\max_{\|v\|_{p}\leq\rho}\;Q(\theta^{*}+v;D_{T})\) for simplicity, and assume that we have a model \(\theta^{*}\) which is pre-trained on the clean training set \(D_{tr}\), SAPA is to solve: \[\operatorname*{arg\,min}_{D_{p}}\Big{(}1-\frac{\langle\nabla_{\theta}Q^{S}(\theta^{*};D_{T}),\nabla_{\theta}L(\theta^{*};D_{tr}+D_{p})\rangle}{\|\nabla_{\theta}Q^{S}(\theta^{*};D_{T})\|\cdot\|\nabla_{\theta}L(\theta^{*};D_{tr}+D_{p})\|}\Big{)}. \tag{10}\] It finds a poisoning set \(D_{p}\) such that the gradient of the training loss, \(\nabla_{\theta}L(\theta^{*};D_{tr}+D_{p})\), has a maximized alignment with \(\nabla_{\theta}Q^{S}(\theta^{*};D_{T})\). In this way, during the re-training process, the model is updated along \(\nabla_{\theta}L(\theta^{*};D_{tr}+D_{p})\) and is likely to minimize \(Q^{S}(\theta^{*};D_{T})\).
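To fix ideas, the following PyTorch-style sketch performs one poison-update step built around Eq. (10). It is an illustration only, not the paper's Algorithm 1: `model`, the victim batch `(x_tgt, y_adv)`, the poison batch `(x_poi, y_poi)`, the perturbation tensor `delta`, and all hyper-parameter values are assumed names, and the sharpness perturbation uses the first-order \(p=2\) estimate that is described in the next paragraph (for \(p=2\), Eq. 11 reduces to \(\hat{v}=\rho\,g/\|g\|_{2}\)).

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, params, create_graph=False):
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def sapa_poison_step(model, x_tgt, y_adv, x_poi, y_poi, delta,
                     rho=0.05, eps=16/255, step=1/255):
    params = [p for p in model.parameters() if p.requires_grad]

    # (1) adversarial objective Q(theta*; D_T) and the p = 2 perturbation of Eq. (11)
    q = F.cross_entropy(model(x_tgt), y_adv)
    g_q = flat_grad(q, params)
    v_hat = rho * g_q / (g_q.norm() + 1e-12)

    # (2) gradient of the sharpness-aware objective, Eq. (12): query at theta* + v_hat
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel(); p.add_(v_hat[offset:offset + n].view_as(p)); offset += n
    g_sharp = flat_grad(F.cross_entropy(model(x_tgt), y_adv), params).detach()
    offset = 0
    with torch.no_grad():                      # restore theta*
        for p in params:
            n = p.numel(); p.sub_(v_hat[offset:offset + n].view_as(p)); offset += n

    # (3) training-loss gradient on the poisoned batch, kept differentiable w.r.t. delta
    delta.requires_grad_(True)
    l_train = F.cross_entropy(model(x_poi + delta), y_poi)
    g_train = flat_grad(l_train, params, create_graph=True)

    # (4) Eq. (10): maximize alignment, i.e. minimize 1 - cosine similarity
    align_loss = 1 - F.cosine_similarity(g_sharp, g_train, dim=0)
    d_delta, = torch.autograd.grad(align_loss, delta)

    with torch.no_grad():                      # signed step and projection into the eps-ball
        delta.sub_(step * d_delta.sign())
        delta.clamp_(-eps, eps)
        delta.copy_((x_poi + delta).clamp(0, 1) - x_poi)
    return align_loss.item()
```

How many such steps are taken, how the surrogate \(\theta^{*}\) is obtained, and how poisons are batched follow the paper's Algorithm 1 in Appendix C and are not shown here.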
In our algorithm, to calculate the term \(\nabla_{\theta}Q^{S}(\theta^{*};D_{T})\), we follow the approach in (Foret et al., 2020) and first approximate \(Q^{S}(\theta^{*};D_{T})\) by leveraging a first-order method to find \(\hat{v}\): \[\hat{v}=\rho\cdot\text{sign}\Big{(}\nabla_{\theta}Q(\theta^{*};D_{T})\Big{)}|\nabla_{\theta}Q(\theta^{*};D_{T})|^{q-1}\left(\|\nabla_{\theta}Q(\theta^{*};D_{T})\|_{q}^{q}\right)^{1/p}, \tag{11}\] where \(1/p+1/q=1\) and we consider \(p=2\), as illustrated in (Foret et al., 2020), if not specified otherwise. Then we can use the following approximation to calculate the gradient of \(Q^{S}(\theta^{*};D_{T})\): \[\nabla_{\theta}Q^{S}(\theta^{*};D_{T})\approx\nabla_{\theta}Q(\theta;D_{T})|_{\theta=\theta^{*}+\hat{v}} \tag{12}\] Combining Eq.10 and Eq.12, we can solve the optimization problem in Eq.9 via gradient descent. Notably, the original Gradient Matching method (Geiping et al., 2020) solves a problem similar to Eq. 10 to find \(D_{p}\) with maximized gradient alignment with \(Q(\theta^{*};D_{T})\) instead of \(Q^{S}(\theta^{*};D_{T})\). Compared to this method, SAPA only requires two more steps, Eqs. 11 & 12, to estimate \(\hat{v}\) and query the loss gradient at \(\theta^{*}+\hat{v}\). Therefore, applying SAPA will not introduce much computational overhead. The detailed algorithm of SAPA is shown in Algorithm 1 in Appendix C.
### SAPA in Un-targeted Attacks
In DNN models, conducting un-targeted attacks with a small poisoning budget \(\epsilon\) is usually considered to be a hard problem (Munoz-Gonzalez et al., 2017). Existing feasible solutions (Fowl et al., 2021; Huang et al., 2021; Ren et al., 2022) are motivated from a "data protection" perspective (and are often referred to as "un-learnable examples"). In detail, they perturb the whole (or at least a large portion of the) training set, such that the trained classifiers cannot effectively learn the true data distribution and have a low accuracy on the true distribution. Taking the method **Error-Min** (Huang et al., 2021) as an example: to induce the model to have a low accuracy (as in Eq. 3), it generates "shortcut" perturbations \(\delta_{i}\) for each training sample by solving the following bilevel optimization problem, such that the model trained on this perturbed dataset has a minimized loss: \[\min_{\theta\in\Theta}\min_{\{\delta_{i}\}_{i=1,\dots,n}}\ \Big{[}\sum_{(x_{i},y_{i})\in D_{tr}}l\big{(}f(x_{i}+\delta_{i};\theta),y_{i}\big{)}\Big{]}. \tag{13}\] As a consequence, the perturbations \(\delta_{i}\) found by solving Eq. 13 have patterns with a strong correlation to the labels \(y_{i}\). Models trained on the perturbed dataset predict \((x_{i}+\delta_{i})\) to be \(y_{i}\) mostly based on the information in \(\delta_{i}\), which prevents the model from learning useful knowledge from the clean samples in \(D_{tr}\). Hence, the poisoned model will have poor accuracy on clean test samples from the distribution \(\mathcal{D}\). **Error-Min+SAPA.** Similar to the strategy for targeted / backdoor attacks, we replace the objective in Error-Min (Eq. 13) by considering the model sharpness. In detail, we instead search for a set of perturbations \(\{\delta_{i}\}_{i=1}^{n}\) based on the poisoning effect of the perturbed model \(\theta+v\) that maximizes the model loss. Formally, we define our method, Error-Min+SAPA, which aims to solve: \[\min_{\theta\in\Theta}\min_{\{\delta_{i}\}_{i=1,\dots,n}}\ \Big{[}\max_{\|v\|_{p}\leq\rho}\sum_{(x_{i},y_{i})\in D_{tr}}l\big{(}f(x_{i}+\delta_{i};\theta+v),y_{i}\big{)}\Big{]}.
\tag{14}\] Intuitively, compared with Eq. 13, our objective strengthens the bond between perturbations \(\{\delta_{i}\}_{i=1}^{n}\) and \(\{y_{i}\}_{i=1}^{n}\), to additionally consider the oscillation of the model parameter \(\theta\) (by adding \(v\)). Thus, the poisoning effect tends to persist after re-training. Notably, a similar strategy can also be adapted to other un-targeted attacks, i.e., Error-Max Fowl et al. (2021). We provide the detailed algorithms of Error-Min+SAPA and Error-Max+SAPA (which applies SAPA to enhance Error-Max) in Appendix C. ## 5 Experiment In this section, we conduct experiments to validate the effectiveness of our proposed attack SAPA when it is applied to targeted, backdoor and un-targeted attacks in Section 5.1, 5.2 and 5.3, respectively. In Section 5.4, we further test the stability of SAPA under the various re-training algorithms and their hyperparameter selections and the impact of ensembling on SAPA. Through this section, we focus on image classification tasks on benchmark datasets CIFAR10 and CIFAR100 with model architectures ResNet18 and ResNet50. Meanwhile, we provide additional empirical results of the dataset SVHN in Appendix D. For our method SAPA, we set the radius \(\rho\) (Eq. 9 & 14 in Section 4) to be \(0.05\) in all experiments. We provide detailed implementation in Appendix C for all SAPA-based algorithms. ### Performance of SAPA in Targeted Attacks **Experiment Setup**. The goal of targeted attacks is to cause the poisoned model to misclassify one (or multiple) victim samples. During attacks, under each dataset, we assume that the attacker randomly chooses a small proportion of training dataset with "_poisoning ratio_" \(\epsilon=1\%,0.2\%\), and inserts unnoticeable perturbations (whose \(l_{\infty}\) norm is limited by "_perturbation budget_" \(16/255,8/255,4/255\)) on each of them. After crafting the poisoned dataset, the model is randomly initialized and re-trained from scratch, via SGD for 160 epochs with an initial learning rate of 0.1 and decay by 0.1 at epochs 80 and 120. For each setting, we repeat the experiment 50 times and report the average performance. More details of the implementation can be found in Appendix.C.1 **Baselines**. We compare SAPA with representative baselines, including Bullseye[Aghakhani et al., 2021], Poison-Frog[Shafahi et al., 2018], Meta-Poison [Huang et al., 2020], and Grad-Match [Geiping et al., 2020]. Notably, our method resembles the algorithm of Grad-Match [Geiping et al., 2020] by additionally optimizing the model sharpness. MetaPoison leverages meta learning [Vilalta and Drissi, 2002] to unroll the training pipeline and ensembles multiple models. Poison-frog and Bullseye, also known as "feature collision", generate poisoning samples with representations similar to those of the victim, and misguide the model to make wrong predictions based on the victim's representations. In Appendix B, we provide detailed discussions of these baselines. **Performance comparison**. We show the performance comparison between SAPA and baselines by separately discussing the "single-victim" case and "multiple-victim" case: 1. "_Single-victim"_ refers to the case where the attacker targets on only one victim sample. Thus, in Table, 1 we report the "Success Rate" which is the probability that the targeted sample is successfully classified to be the designated wrong label \(y^{obj}\). From the results, we can see that SAPA consistently exhibits advantages over all baselines. 
Remarkably, the advantage of SAPA is obvious, especially when the attacker's capacity is limited (i.e., the perturbation budget is \(8/255\) or \(4/255\)). 2. "_Multiple-victim"_ refers to the case where the attacker targets multiple victim samples. We report the "Average Success Rate" in Table 2, which means: for each victim set, we calculate the proportion of successful attacks in this set of targeted victim samples, and then report the average across the 50 trials. In Table 2, we only show the performance of SAPA and Grad-Match, which is the strongest baseline in the single-victim setting, and we report the performance for the 4-victim and 8-victim cases. From the results, we can see that our method SAPA outperforms Grad-Match across all settings.
### Performance of SAPA in Backdoor Attacks
**Experiment setup**. In this subsection, we study the effectiveness of SAPA in backdoor attacks. In particular, we focus on the "hidden-trigger" setting [Saha et al., 2020, Souri et al., 2022] where the attackers can only add imperceptible perturbations to the clean training samples. Recalling Eq. 5 in Section 3, the adversarial goal is to cause the samples from a victim class \(y^{vic}\) to be wrongly classified as a designated class \(y^{obj}\) by inserting triggers. Besides, we also follow the setting in [Souri et al., 2022] to further assume that the attacker adds perturbations to samples from class \(y^{obj}\). Therefore, in our study, we randomly choose the two different classes \(y^{vic},y^{obj}\) for poisoning sample generation. In this evaluation, we constrain the attacker to perturb only 1% of the whole training set, and the re-training process resembles our settings in Section 5.1. All experiments are repeated 50 times and we report the average success rate. More details of the implementation can be found in Appendix C.2.
\begin{table} \begin{tabular}{c|c|c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{\(16/255\)} & \multicolumn{2}{c|}{\(8/255\)} & \multicolumn{2}{c}{\(4/255\)} \\ & & \(\epsilon=1\%\) & \(\epsilon=0.2\%\) & \(\epsilon=1\%\) & \(\epsilon=0.2\%\) & \(\epsilon=1\%\) & \(\epsilon=0.2\%\) \\ \hline \multirow{4}{*}{**CIFAR10**} & **Bullseye** & 3.7 & 1.1 & 1.0 & 0.0 & 0.0 & 0.0 \\ & **Poison-Frog** & 1.3 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ & **Meta-Poison** & 42.5 & 30.7 & 28.1 & 19.4 & 5.2 & 3.9 \\ & **Grad-Match** & 73.1 & 63.2 & 35.4 & 26.8 & 10.3 & 6.4 \\ & **SAPA** & **80.1** & **70.8** & **48.4** & **31.5** & **16.7** & **11.2** \\ \hline \multirow{4}{*}{**CIFAR100**} & **Bullseye** & 2.1 & 0.0 & 2.6 & 1.2 & 0.5 & 0.1 \\ & **Poison-Frog** & 1.0 & 0.0 & 1.1 & 0.0 & 0.2 & 0.0 \\ & **Meta-Poison** & 50.3 & 24.1 & 43.2 & 22.7 & 4.5 & 3.1 \\ & **Grad-Match** & 90.4 & 54.8 & 62.1 & 33.4 & 11.7 & 7.5 \\ & **SAPA** & **91.3** & 75.2 & **86.8** & **52.6** & **31.6** & **12.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Success Rate under the “Single-Victim” Setting in Targeted Attacks.
\begin{table} \begin{tabular}{c|c|c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{\(16/255\)} & \multicolumn{2}{c|}{\(8/255\)} & \multicolumn{2}{c}{\(4/255\)} \\ & & 1\% & 0.25\% & 1\% & 0.25\% & 1\% & 0.25\% \\ \hline \multirow{2}{*}{**CIFAR10: 4 victims**} & **Grad-Match** & 62.9 & 36.2 & 34.7 & 25.1 & 20.3 & 7.5 \\ & **SAPA** & **75.1** & **53.4** & **47.9** & **30.8** & **24.3** & **10.8** \\ \hline \multirow{2}{*}{**CIFAR10: 8 victims**} & **Grad-Match** & 52.1 & 23.2 & 27.9 & 18.4 & 12.7 & 5.6 \\ & **SAPA** & **64.6** & 31.2 & 34.6 & 26.1 & 17.5 & 7.3 \\ \hline \multirow{2}{*}{**CIFAR100: 4 victims**} & **Grad-Match** & 67.3 & 30.1 & 37.2 & 12.5 & 17.8 & 2.9 \\ & **SAPA** & **74.2** & **43.8** & **44.3** & **19.7** & 25.1 & **6.1** \\ \hline \multirow{2}{*}{**CIFAR100: 8 victims**} & **Grad-Match** & 43.6 & 23.2 & 16.7 & 4.9 & 13.8 & 2.7 \\ & **SAPA** & **52.7** & **31.3** & 24.8 & 8.3 & 18.7 & 4.2 \\ \hline \end{tabular} \end{table} Table 2: Avg. Success Rate under the “Multiple-Victim” Setting in Targeted Attacks.
**Baselines**. In the experiment, we compare our proposed method SAPA with Sleeper Agent [Souri et al., 2022], which also leverages Gradient Match [Geiping et al., 2020] to achieve the adversarial goal in backdoor attacks. We also show the results of the Hidden-Trigger Backdoor method [Saha et al., 2020], which optimizes the poison over images with triggers to preserve the trigger information, and of the Clean-Label Backdoor method [Turner et al., 2019], which leverages adversarial examples [Goodfellow et al., 2014] to train a backdoored model. **Performance comparison**. Our results are shown in Table 3, where we report the "Success Rate", which is the ratio of samples (with triggers) in \(y^{vic}\) that are classified as \(y^{obj}\) by the poisoned model. From the results, our method outperforms all baselines under all settings. Specifically, Hidden-trigger [Saha et al., 2020] and Clean-label [Turner et al., 2019] suffer from low effectiveness, as they are either designed for transfer learning or require control over the training process. Compared to these methods, SAPA shows effectiveness for different model architectures and perturbation budgets. In comparison with Sleeper Agent [Souri et al., 2022], which is also based on Gradient Matching [Geiping et al., 2020], our method also has a clear advantage. In particular, under the perturbation budget 8/255 and on the CIFAR100 dataset, SAPA clearly outperforms Sleeper Agent.
### Performance of SAPA in Un-targeted Attacks
**Experiment Setup**. The goal of un-targeted attacks is to degrade the models' test accuracy. However, based on our discussion in Section 4.2, un-targeted attacks with a small poisoning ratio (i.e., \(\leq 10\%\)) usually have a negligible impact on DNN models [Munoz-Gonzalez et al., 2017]. Therefore, we follow the line of existing works [Huang et al., 2021, Fowl et al., 2021, Ren et al., 2022] (which are also called "un-learnable examples") to perturb a large portion of training samples (\(50\%,80\%,100\%\)) in the CIFAR10 and CIFAR100 datasets, in order to protect the data against being learned by DNNs. In our experiment, we limit the perturbation budget to \(8/255\). We first generate poisoning samples targeting ResNet18. Then we re-train the victim models under ResNet18 and ResNet50 following existing works [Huang et al., 2020, Fowl et al., 2021].
The training procedure for each model also resembles the settings in Section 5.1; we repeat the experiments for each setting 5 times and report the average performance. More details of the implementation are in Appendix C.3. **Baselines**. We compare SAPA with representative "un-learnable" methods, such as Error-Min [Huang et al., 2020], Error-Max [Fowl et al., 2021], Separable Perturbation [Yu et al., 2022], and Autoregressive Perturbation [Sandoval-Segura et al., 2022]. We also report the clean performance, which refers to the accuracy of models trained without any poisoning attack. Notably, our proposed method SAPA can be incorporated into either Error-Min or Error-Max. Thus, we denote our method as "Error-Min+SAPA" and "Error-Max+SAPA", respectively. We provide more details of the algorithm of "Error-Max+SAPA" in Appendix C. It is also worth mentioning that Autoregressive [Sandoval-Segura et al., 2022] is different from the others because it is designed based on the intrinsic vulnerability of convolutional filters in CNNs [He et al., 2016]. It generates perturbations with patterns that are easily captured by CNNs. However, this method can hardly be applied in domains beyond CNNs. **Performance comparison.** In Table 4, we report the accuracy of the re-trained model on the clean test dataset of CIFAR10 and CIFAR100, so a lower value indicates better attack performance. From the results, we can see that our proposed method can improve the poisoning effect for both Error-Min and Error-Max. For example, when incorporating SAPA into Error-Min, Error-Min+SAPA has a clear advantage, reducing the accuracy to around 10% when the poisoning ratio is \(100\%\) on the CIFAR10 dataset. In other settings, Error-Min+SAPA also manages to achieve a 2-4% accuracy reduction compared to Error-Min. Similarly, Error-Max+SAPA is also demonstrated to have a consistent improvement over Error-Max. Admittedly, our method is slightly worse when compared with Autoregressive Perturbations (Sandoval-Segura et al., 2022). This is because the Autoregressive method creates "autoregressive" perturbations that target the vulnerability of CNNs. However, this property prevents its application to other DNN architectures beyond CNNs, so we exclude it in Table 4 when we report the best performance.
\begin{table} \begin{tabular}{c|c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{ResNet18} & \multicolumn{2}{c|}{ResNet50} & \multicolumn{2}{c}{VGG11} \\ \cline{3-8} & & \(16/255\) & \(8/255\) & \(16/255\) & \(8/255\) & \(16/255\) & \(8/255\) \\ \hline \multirow{4}{*}{**CIFAR10**} & **Hidden-trigger** & 3.5 & 1.3 & 3.2 & 1.3 & 5.0 & 1.8 \\ & **Clean-label** & 2.8 & 0.9 & 2.6 & 0.9 & 4.7 & 1.1 \\ & **Sleeper Agent** & 91.8 & 31.5 & 93.7 & 21.2 & 85.5 & 26.7 \\ & **SAPA-backdoor** & **97.1** & **49.3** & **98.4** & **33.2** & **94.3** & **35.5** \\ \hline \multirow{4}{*}{**CIFAR100**} & **Hidden-trigger** & 2.1 & 1.3 & 1.7 & 0.8 & 3.4 & 1.2 \\ & **Clean-label** & 1.5 & 0.9 & 1.2 & 0.4 & 2.6 & 0.8 \\ \cline{1-1} & **Sleeper Agent** & 58.3 & 21.7 & 47.2 & 18.5 & 41.6 & 12.9 \\ \cline{1-1} & **SAPA-backdoor** & **72.4** & **41.8** & **63.9** & **31.4** & 67.7 & **30.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Success Rate in Backdoor Attacks on CIFAR10 and CIFAR100
Figure 1: Visualization of Loss Landscape
**Visualization of loss landscape**. We provide visualizations to illustrate that our method can successfully manipulate the loss landscape of the re-trained models.
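A minimal sketch of how such a landscape slice can be probed numerically is given below, assuming a trained PyTorch model and a data loader for the poisoned training set; the direction normalization, step range, and scale are assumptions rather than the exact recipe behind Figure 1.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_slice(model, loader, device, alphas=torch.linspace(-1.0, 1.0, 21), scale=0.5):
    """Record the training loss along one random direction in weight space."""
    base = copy.deepcopy(model).to(device).eval()
    # one random direction, rescaled per parameter tensor to the weight norm
    direction = []
    for p in base.parameters():
        d = torch.randn_like(p)
        direction.append(d * p.norm() / (d.norm() + 1e-12))
    losses = []
    for a in alphas:
        probe = copy.deepcopy(base)
        for p, d in zip(probe.parameters(), direction):
            p.add_(scale * a * d)          # move the weights along the direction
        total, count = 0.0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            total += F.cross_entropy(probe(x), y, reduction='sum').item()
            count += y.numel()
        losses.append(total / count)
    return alphas.tolist(), losses
```

Plotting the returned losses against the step sizes for models poisoned by Error-Min and by Error-Min+SAPA gives one-dimensional slices of the kind compared in Figure 1.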
In Figure 1, we use the un-targeted attacks as an example and plot the landscape of training loss (the loss on the poisoned training set) of the models which are re-trained on the poisoning set via Error-Min and Error-Min+SAPA respectively. From these plots, we can see that the re-trained model obtained from Error-Min+SAPA has a flatter loss landscape, which suggests that the poisoning effect of Error-Min+SAPA tends to be more stable during re-training. We also provide visualization of targeted and backdoor attacks in Appendix.D. ### Ablation Study We provide ablation studies on the effectiveness of SAPA under various re-training settings. We also study the impact of ensembling strategy (Huang et al., 2020) on the performance of SAPA. Notably, we only focus on the CIFAR10 dataset and ResNet18 model in this study. For targeted and backdoor attacks, we specify the perturbation budget to \(16/255\) with poisoning ratio \(\epsilon=1\%\). For un-targeted attacks, we specify the perturbation budget to be \(8/255\) with poisoning ratio \(\epsilon=100\%\). **Various Re-training Algorithms**. There are studies (Schwarzschild et al., 2021; Ren et al., 2022) demonstrating that many poisoning attacks can lose efficacy if the re-training algorithm is different from the one used for poisoning samples generation. Therefore, we provide additional experiments to test the performance of SAPA when the re-training is different from Empirical Risk Minimization (ERM). In this subsection, we mainly consider the algorithms including Cut-Out (DeVries and Taylor, 2017), Mix-Up (Zhang et al., 2017), and Sharpness-aware Minimization (SAM) (Foret et al., 2020), which is proposed to minimize model sharpness to improve model generalization. In Table 5, we compare SAPA with the strongest baselines for each type of poisoning attack based on previous results, and we use the same evaluation metric as previous subsections for each attack. From this table, we can see that our method remains outperforming the baselines. Among these re-training strategies, Mix-Up shows a strong ability to reduce the poisoning effect for all attack methods that we studied. SAM can effectively reduce the poisoning effect for baseline attacks in un-targeted and backdoor attacks, but can hardly weaken the targeted attacks. These results show that SAPA still has a (relatively) stable poisoning effect when facing various re-training strategies. **Various Re-Training Schedules**. For poisoning attacks, there are also studies (Schwarzschild et al., 2021; Huang et al., 2020) suggesting that many attack methods may lose efficacy under a different re-training epoch number or schedule. Therefore, we provide additional experiments when: (1) the model is trained for 500 epochs and the learning rate is updated by "steps" similar to previous studies, and (2) the re-training learning rate is updated "cyclically". In Figure 2, we plot the curve of the poisoning effect of SAPA and baselines in the backdoor and un-targeted attacks. 
Note that we exclude targeted attacks from Figure 2 because the poisoning effect is discrete in one model. From these figures, we can find that the models re-trained on SAPA-poisoned data stably converge to a point with a strong poisoning effect that consistently outperforms the baselines. In the early stage of training, the poisoning effect of these attacks is relatively weak or unstable. However, in backdoor attacks, as early models are not well trained, their overall performance can be compromised. In un-targeted attacks, most studies in the line of "un-learnable examples" [20, 17, 16] focus on the converged models, because the model trainer has no access to the clean samples during training. These facts suggest that strategies like early stopping cannot be leveraged to defend against SAPA. \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c} \hline \hline & \multicolumn{6}{c|}{**CIFAR10**} & \multicolumn{6}{c}{**CIFAR100**} \\ \hline & \multicolumn{3}{c|}{**ResNet18**} & \multicolumn{3}{c|}{**ResNet50**} & \multicolumn{3}{c|}{**ResNet18**} & \multicolumn{3}{c}{**ResNet50**} \\ & 100\% & 80\% & 50\% & 100\% & 80\% & 50\% & 100\% & 80\% & 50\% & 100\% & 80\% & 50\% \\ \hline **Clean\({}^{*}\)** & 94.8 & 94.8 & 94.8 & 95.0 & 95.0 & 95.0 & 74.8 & 74.8 & 74.8 & 75.2 & 75.2 & 75.2 \\ **Separable.** & 13.5 & 86.3 & 92.9 & 14.9 & 88.1 & 93.2 & 9.1 & 57.1 & 66.2 & 8.4 & 60.8 & 66.7 \\ Autoregressive. & 11.8 & 82.3 & 89.8 & 10.1 & 83.6 & 90.3 & 4.2 & 51.6 & 64.7 & 4.3 & 53.5 & 66.1 \\ \hline **Error-Max** & 11.9 & 88.2 & 92.2 & 12.8 & 90.1 & 93.9 & 4.8 & 57.3 & 66.9 & 5.6 & 58.3 & 68.1 \\ **Error-Max+SAPA** & **9.6** & 84.6 & 90.1 & 10.9 & 85.7 & **91.3** & **4.1** & 55.1 & **64.8** & **4.9** & **56.8** & **66.9** \\ \hline **Error-Min** & 21.2 & 87.1 & 93.4 & 18.9 & 89.5 & 94.5 & 11.2 & 56.9 & 67.7 & 10.8 & 60.5 & 70.3 \\ **Error-Min+SAPA** & 10.9 & **83.7** & **90.0** & **10.3** & **85.2** & 91.8 & 8.7 & **53.1** & 65.3 & 9.5 & 57.9 & 67.6 \\ \hline \hline \end{tabular} \end{table} Table 4: Test Accuracy of Models Trained on Poisoned Datasets via Un-targeted Attacks. **Model Ensembling.** We show that ensembling of models (first proposed by [20]) can also effectively improve the poisoning effect of our method SAPA. In detail, during poisoning-sample generation, this strategy finds poisoning samples based on the average poisoning effect of several models with different initializations. Since ensembling is highly expensive and time-consuming for un-targeted attacks, we only study targeted and backdoor attacks; Table 6 reports the performance of SAPA with different numbers of ensemble models. From the results, the performance of SAPA also increases with the number of ensemble models. ## 6 Conclusion and Limitation In this paper, we introduce a novel and versatile framework for data poisoning attacks that takes into account the model landscape sharpness. We apply this strategy to various types of poisoning attacks, including un-targeted, targeted, and backdoor attacks. Our experimental results demonstrate the superiority of our method compared to existing approaches. However, in this paper, our method focuses on image classification. Therefore, we leave related studies, such as poisoning attacks in self-supervised learning settings and poisoning attacks in other domains such as text and graphs, for future investigation.
2310.10881
On two possible ways to recover Ordinary Thermodynamics from Extended Thermodynamics of Polyatomic gases
We consider two possible ways, i.e., the Maxwellian Iteration and the Chapman-Enskog Method, to recover Relativistic Ordinary Thermodynamics from Relativistic Extended Thermodynamics of Polyatomic gases with N moments. Both of these methods give the Eckart equations which are the relativistic version of the Navier-Stokes and Fourier laws as a first iteration. However, these methods do not lead to the same expressions of the heat conductivity \chi, the shear viscosity \mu, and the bulk viscosity \nu which appear as coefficients in the Eckart equations. In particular, we prove that the expressions of \chi, \mu, and \nu obtained via the Chapman-Enskog method do not depend on N , while those obtained through the Maxwellian Iteration depend on N . Moreover, we also prove that these two methods lead to the same results in the nonrelativistic limit.
F. Demontis, S. Pennisi
2023-10-16T23:26:14Z
http://arxiv.org/abs/2310.10881v1
On two possible ways to recover Ordinary Thermodynamics from Extended Thermodynamics of Polyatomic gases ###### Abstract We consider two possible ways, i.e., the Maxwellian Iteration and the Chapman-Enskog Method, to recover Relativistic Ordinary Thermodynamics from Relativistic Extended Thermodynamics of Polyatomic gases with \(N\) moments. Both of these methods give the Eckart equations which are the relativistic version of the Navier-Stokes and Fourier laws as a first iteration. However, these methods do not lead to the same expressions of the heat conductivity \(\chi\), the shear viscosity \(\mu\), and the bulk viscosity \(\nu\) which appear as coefficients in the Eckart equations. In particular, we prove that the expressions of \(\chi\), \(\mu\), and \(\nu\) obtained via the Chapman-Enskog method do not depend on \(N\), while those obtained through the Maxwellian Iteration depend on \(N\). Moreover, we also prove that these two methods lead to the same results in the nonrelativistic limit. ## 1 Introduction Rational Extended Thermodynamics (RET) is an elegant theory appreciated by mathematicians and physicist. This theory was developed in a systematic way by Liu and Muller in [1] for the classical case, while the relativistic case was considered by Liu, Muller and Ruggeri in [2]. Both articles [1] and [2] are based on few natural assumptions, in fact only universal principles1 are imposed and, as a consequence of these principles, the hyperbolicity of the field equations is established. This is an important achievement because in this way in the relativistic case the paradox of infinity velocity of the propagating waves is automatically eliminated. These and other related results can also be found in the book [3] but they concern only the case of a monoatomic gas. The extension to the case of polyatomic gases was done in [4] for the classical case and in [5] for the relativistic case. More details on these generalizations to the polyatomic case can also be found in the book [6]. However, a physical observation by Pennisi recently published in [7] has caused a revision of the previous models both for the classical and relativistic cases leading to new results that can be found in [8] and [9]. In particular, in the article [9] a satisfactory model for the Relativistic Extended Thermodynamics of Polyatomic gases with \(N\) moments (\(ET^{N}\)) has been proposed. However, so far, Ordinary Thermodynamics (OT) has been succesfully used in practical applications, so a good test for establishing the validity of RET consists of finding procedures of approximation which allow us to get the equations of OT as a first step. In the literature two procedures have been proposed which realize this objective: the Maxwellian Iteration (MI) and the Chapman-Enskog Method (CEM). We will prove that the application of both in the relativistic case of these procedures leads, as a first iteration, to the Eckart equations [10], which in [2] are called the relativistic version of the Navier-Stokes and Fourier laws and are two fundamental laws of Relativistic Ordinary Thermodynamics (ROT). From now on we refer to the Eckart equations as the Navier-Stokes and Fourier laws. It is important to remark that in the Navier-Stokes and Fourier equations the following important quantities appear as coefficients: the heat conductivity \(\chi\), the shear viscosity \(\mu\), and the bulk viscosity \(\nu\). 
The aim of this paper is to show that the expressions of \(\chi\), \(\mu\), and \(\nu\) obtained via the Chapman-Enskog method do not depend on \(N\), whereas these expressions obtained through the Maxwellian Iteration depend on \(N\). In order to get this result, let us recall the basic facts on the field equation of Relativistic Extended Thermodynamics of Polyatomic gases and Relativistic Ordinary Thermodynamics. Let us start by considering the balance equations of ET\({}^{N}\)[9]. They are obtained starting from the Boltzmann equation \[\begin{array}{l}p^{\alpha}\,\partial_{\alpha}\,f=Q\,,\mbox{with}\quad Q= \frac{U^{\mu}p_{\mu}}{c^{2}\tau}\left[f_{E}-f-f_{E}\,p^{\gamma}q_{\gamma}\frac {3}{m\,c^{4}\rho\,\theta_{1,2}}\,\left(1+\frac{\cal I}{mc^{2}}\right)\right]\,, \\ f=e^{-1\,-\,\frac{\chi}{k_{B}}}\,,\quad\chi=\sum_{n=0}^{N}\frac{1}{m^{n-1}} \lambda_{\alpha_{1}\cdots\alpha_{n}}p^{\alpha_{1}}\cdots p^{\alpha_{n}}\left( 1\,+\,\frac{\cal I}{m\,c^{2}}\right)^{n}\,,\\ f_{E}=e^{-1\,-\,\frac{m\,\lambda_{E}\,+\,\frac{U_{\mu}}{T}\,p^{\mu}\left(1\,+\, \frac{\cal I}{m\,c^{2}}\right)}{k_{B}}}\,,\quad\theta_{1,2}=\frac{3\,p}{\rho^ {2}\,c^{4}}\,\left(e\,+\,p\right)\,.\end{array} \tag{1}\] Here, \(f\) is the distribution function, \(k_{B}\) the Boltzmann constant, \(m\) the relativistic particle mass, \(\lambda_{\alpha_{1}\cdots\alpha_{n}}\) are Lagrange multipliers, \(c\) the light speed, \(p^{\mu}\) the 4-momentum of the particle (from now on the greek indexes take the values \(0,1,2,3\)) such that \(p_{\alpha}p^{\alpha}=m^{2}c^{2}\), \({\cal I}\) is the internal energy of the particle due to rotational and vibrational modes, \(\tau\) a relaxation time, \(\lambda_{E}\) the first Lagrange multiplier calculated at equilibrium, \(U^{\mu}\) the 4-velocity such that \(U_{\alpha}U^{\alpha}=c^{2}\), \(Q\) the production term in the Boltzmann equation, \(T\) the absolute temperature, \(\rho\) the mass density, \(p\) the pressure, \(e\) the energy, and \(q^{\alpha}\) the heat flux such that \(U_{\alpha}q^{\alpha}=0\). The name "Lagrange multipliers" has been used in literature [1, 2, 3] and this terminology is due to the fact that the distribution function \(f\) can be obtained through a variational principle called the Maximum Entropy Principle (MEP) [5] with constrained variables. In the case with six moments, the heat flux is zero and \(q^{\alpha}\) replaces the scalar anonimous quantities which are present in [11] and \(q^{\alpha}\) is an unknown function to be determined by imposing that the production of mass and energy-momentum are zero. After that, the function \(\varphi({\cal I})\) is introduced which measures "how much" the gas is polyatomic. 
Finally, by multiplying (1)\({}_{1}\) by \(\frac{c}{m^{n-1}}\,p^{\alpha_{1}}\cdots p^{\alpha_{n}}\left(1\,+\,\frac{\cal I }{mc^{2}}\right)^{n}\,\varphi({\cal I})\) and integrating the result with respect to \(d\,{\cal I}\,d\,\vec{P}\) one obtains the balance equations \[\partial_{\alpha}A^{\alpha}=0\,,\quad\partial_{\alpha}A^{\alpha \alpha_{1}}=0\,, \tag{2a}\] \[\partial_{\alpha}A^{\alpha\alpha_{1}\cdots\alpha_{n}}=I^{\alpha_{ 1}\cdots\alpha_{n}}\,,\quad\mbox{for}\quad n=2,\,\cdots\,N\,,\] where \[A^{\alpha_{1}\cdots\alpha_{n+1}}=\frac{c}{m^{n-1}}\int_{\Re^{3} }\int_{0}^{+\infty}f\,p^{\alpha_{1}}\cdots p^{\alpha_{n+1}}\left(1\,+\,\frac{ \cal I}{m\,c^{2}}\right)^{n}\,\varphi({\cal I})\,d\,{\cal I}\,d\,\vec{P}\,, \tag{2b}\] \[I^{\alpha_{1}\cdots\alpha_{n}}=\frac{c}{m^{n-1}}\int_{\Re^{3}} \int_{0}^{+\infty}Q\,p^{\alpha_{1}}\cdots p^{\alpha_{n}}\left(1\,+\,\frac{ \cal I}{m\,c^{2}}\right)^{n}\,\varphi({\cal I})\,d\,{\cal I}\,d\,\vec{P}\,.\] Obviously, eqs. (2a)\({}_{1,2}\) are particular cases of (2a)\({}_{3}\) with \(n=0,1\) but it is better to write them separately because they are the mass and energy-momentum conservation laws, respectively; their productions \(I\) and \(I^{\alpha_{1}}\) are zero as a consequence of the definition of \(Q\) (see (1)\({}_{2}\)). Moreover, we sometimes denote \(A^{\alpha}\) with \(V^{\alpha}\) and \(A^{\alpha\alpha_{1}}\) with \(T^{\alpha\alpha_{1}}\). Then, the Lagrange multipliers \(\lambda_{\alpha_{1}\cdots\alpha_{n}}\) are obtained in terms of the physical variables but in a linear departure from equilibrium (here denoted with the suffix \(E\)) which is defined as the status where \(\lambda_{\alpha_{1}\cdots\alpha_{n}}=0\) for \(n=2,\,\cdots\,N\) and \(\lambda_{\alpha}^{E}=\frac{U_{\alpha}}{T}\). The calculations for the case \(N=2\) can be found in [9]. These calculations are based on the expression of the following tensor \[A_{E}^{\alpha_{1}\cdots\alpha_{n+1}}=\frac{c}{m^{n-1}}\int_{\Re^{3}}\int_{0}^ {+\,\infty}f_{E}\,p^{\alpha_{1}}\,\cdots\,p^{\alpha_{n+1}}\,\left(1\,+\,\frac {\cal I}{m\,c^{2}}\right)^{n}\,\varphi({\cal I})\,d\,{\cal I}\,d\,\vec{P}\,. \tag{3}\] This tensor is only determined in terms of the energy \(e\) which is given by \[\frac{e}{\rho\,c^{2}}=\frac{\int_{0}^{+\infty}J_{2,2}^{*}\,\left(1\,+\,\frac{ \cal I}{m\,c^{2}}\right)\,\varphi({\cal I})\,d\,{\cal I}}{\int_{0}^{+\infty}J_ {2,1}^{*}\,\varphi({\cal I})\,d\,{\cal I}}\,, \tag{4}\] where \(J_{m,n}(\gamma)=\int_{0}^{\infty}e^{-\gamma\cosh s}\cosh^{n}s\sinh^{m}s\,ds\), \(\gamma=\frac{mc^{2}}{k_{B}T}\), \(J_{m,n}^{*}=J_{m,n}\left[\gamma\left(1+\frac{\cal I}{mc^{2}}\right)\right]\,.\) Here we report a new short proof of this result because we need to use \(A_{E}^{\alpha_{1}\cdots\alpha_{n+1}}\). In fact, from (3) it follows that \[d\,A_{E}^{\alpha_{1}\cdots\alpha_{n+1}}=-\,\frac{m}{k_{B}}\,\left(A_{E}^{ \alpha_{1}\cdots\alpha_{n+1}}d\,\lambda^{E}\,+\,A_{E}^{\alpha_{1}\cdots \alpha_{n+2}}d\,\lambda_{\alpha_{n+2}}^{E}\right)\,. \tag{5}\] This equation, written for \(n=0\), is \[d\,\left(\rho\,U^{\alpha_{1}}\right)=-\,\frac{m}{k_{B}}\,\left[\rho\,U^{ \alpha_{1}}\,d\,\lambda^{E}\,+\,\left(e\,\frac{U^{\alpha_{1}}U^{\alpha_{2}}}{ c^{2}}\,+\,p\,h^{\alpha_{1}\alpha_{2}}\right)d\,\lambda_{\alpha_{2}}^{E} \right]\,,\] whose contraction with \(U^{\alpha_{1}}\) allows us to determine \[d\,\lambda^{E}=-\,\frac{k_{B}}{m\,\rho}\,d\,\rho\,-\,\frac{e}{\rho\,c^{2}}\,U ^{\alpha_{2}}\,d\,\lambda_{\alpha_{2}}^{E}\,.\] By substituting this in eq. 
(5), we find \[d\,A_{E}^{\alpha_{1}\cdots\alpha_{n+1}}=A_{E}^{\alpha_{1}\cdots\alpha_{n+1}}\, \left(\frac{1}{\rho}\,d\,\rho\,+\,\frac{e\,m}{\rho\,c^{2}k_{B}}\,U^{\gamma}\, d\,\lambda_{\gamma}^{E}\right)\,-\,\frac{m}{k_{B}}\,A_{E}^{\alpha_{1} \cdots\alpha_{n+2}}d\,\lambda_{\alpha_{n+2}}^{E}\,.\] If we take \(\rho\) and \(\lambda^{E}_{\gamma}\) as independent variables, the coefficient of \(d\,\rho\) shows that \(A^{\alpha_{1}\cdots\alpha_{n+1}}_{E}\) is linear and homogeneous in the variable \(\rho\), while the coefficient of \(d\,\lambda^{E}_{\gamma}\) allows us to determine \[A^{\alpha_{1}\cdots\alpha_{n+2}}_{E}=-\,\frac{k_{B}}{m}\,\frac{ \partial\,A^{\alpha_{1}\cdots\alpha_{n+1}}_{E}}{\partial\,\lambda^{E}_{\alpha_ {n+2}}}\,+\,\frac{e}{\rho\,c^{2}}\,A^{\alpha_{1}\cdots\alpha_{n+1}}_{E}\,U^{ \alpha_{n+2}}\,. \tag{6}\] Taking into account this result, all the tensors \(A^{\alpha_{1}\cdots\alpha_{n+1}}_{E}\) are determined in terms of the previous ones. Obviously, we must be careful and express everything in terms of \(\rho\) and \(\lambda^{E}_{\gamma}\). Regarding \(\lambda^{E}_{\gamma}\), we note that \[\lambda^{E}_{\gamma}=\frac{U_{\gamma}}{T}\quad\to\quad T=\frac{c}{\sqrt{ \lambda^{E}_{\delta}\lambda^{E\delta}}}\,;\quad U_{\gamma}=\frac{c}{\sqrt{ \lambda^{E}_{\delta}\lambda^{E\delta}}}\,\lambda^{E}_{\gamma}\,;\quad A^{ \gamma}_{E}=\rho\,U^{\gamma}=\frac{\rho\,c}{\sqrt{\lambda^{E}_{\delta}\lambda^ {E\delta}}}\,\lambda^{E}_{\gamma}\,.\] As a test, let us consider eq. (6) for \(n=0\), i.e., \[T^{\alpha_{1}\alpha_{2}}_{E} =-\,\frac{k_{B}}{m}\,\frac{\partial\,A^{\alpha_{1}}_{E}}{\partial \,\lambda^{E}_{\alpha_{2}}}\,+\,\frac{e}{\rho\,c^{2}}\,A^{\alpha_{1}}_{E}\,U^ {\alpha_{2}}\] \[=-\,\frac{k_{B}}{m}\left(\frac{\rho\,c}{\sqrt{\lambda^{E}_{\delta }\lambda^{E\delta}}}g^{\alpha_{1}\alpha_{2}}\,-\,\frac{\rho\,c}{\left(\lambda^ {E}_{\delta}\lambda^{E\delta}\right)^{3/2}}\lambda^{\alpha_{1}}_{E}\,\lambda^ {\alpha_{2}}_{E}\right)\,+\,\frac{e}{\lambda^{E}_{\delta}\lambda^{E\delta}} \lambda^{\alpha_{1}}_{E}\,\lambda^{\alpha_{2}}_{E}\] \[=\frac{k_{B}}{m}\,\rho\,T\,h^{\alpha_{1}\alpha_{2}}\,+\,\frac{e}{ c^{2}}\,U^{\alpha_{1}}U^{\alpha_{2}}\,.\] So we have obtained the expression for the coefficient of \(U^{\alpha_{1}}U^{\alpha_{2}}\), while the other term gives \[p=n\,k_{B}\,T\,. \tag{7}\] **Note**: In the above calculations we have used the property \[\lambda^{E\gamma}\lambda^{E}_{\gamma}=g^{\mu\gamma}\,\lambda^{E}_{\mu}\lambda^ {E}_{\gamma}\quad\to\quad\frac{\partial}{\partial\,\lambda^{E}_{\delta}}\left( \lambda^{E\gamma}\lambda^{E}_{\gamma}\right)=2\,g^{\mu\delta}\,\lambda^{E}_{ \mu}=2\,\lambda^{E\delta}\,.\] We note also that (6) does not allow us to obtain the expression for the energy \(e\); so to find it we must go back to the definitions (3) for \(n=0\) and \(n=1\) and contract them by \(U_{\alpha_{1}}\) and \(U_{\alpha_{1}}U_{\alpha_{2}}\), respectively. The results are \[\rho\,c^{2} =\frac{4\,\pi\,m^{4}c^{5}}{\sqrt{-\,g}}\,e^{-1-\,\frac{m\,\lambda ^{E}}{k_{B}}}\int_{0}^{+\infty}J^{*}_{2,1}\,\varphi(\mathcal{I})\,d\,\mathcal{ I}\,,\] \[e\,c^{2} =\frac{4\,\pi\,m^{4}c^{7}}{\sqrt{-\,g}}\,e^{-1-\,\frac{m\,\lambda ^{E}}{k_{B}}}\int_{0}^{+\infty}J^{*}_{2,2}\,\left(1+\,\frac{\mathcal{I}}{m\,c ^{2}}\right)\,\varphi(\mathcal{I})\,d\,\mathcal{I}\,,\] the second one of these expressions, divided by the first one gives the formula (4) reported above. From these considerations it is possible to derive the expression for \(A^{\alpha\alpha_{1}\cdots\alpha_{j}}_{E}\) reported in eqs. 
(29) and (30) of [7] and rewritten in eqs.(14)-(16) of [9]. For the convenience of the reader we write this expression below: \[A^{\alpha_{1}\cdots\alpha_{j+1}}_{E}=\sum_{k=0}^{\left[\frac{j+1}{2}\right]} \rho c^{2k}\theta_{k,j}\,h^{(\alpha_{1}\alpha_{2}}\cdots h^{\alpha_{2k-1} \alpha_{2k}}U^{\alpha_{2k+1}}\cdots U^{\alpha_{j+1})}\,, \tag{8}\] where the round brackets appearing in \(h^{(\alpha_{1}\alpha_{2}}\cdots h^{\alpha_{2k-1}\alpha_{2k}}U^{\alpha_{2k+1}} \cdots U^{\alpha_{j+1})}\) denote the symmetric part of this tensor, while the scalar coefficients \(\theta_{k,j}\) are defined as follows: \[\theta_{k,j}=\frac{1}{2k+1}\begin{pmatrix}j+1\\ 2k\end{pmatrix}\,\frac{\int_{0}^{+\infty}J_{2k+2,j+1-2k}^{*}\,\left(1+\frac{ \mathcal{I}}{mc^{2}}\right)^{j}\,\phi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^ {+\infty}J_{2,1}^{*}\,\phi(\mathcal{I})\,d\,\mathcal{I}}\,. \tag{9}\] Moreover, \(\theta_{k,j}\) can be determined by the recurrence relations which use the quantity \(\gamma=\frac{m\,c^{2}}{k_{B}T}\): \[\begin{split}&\theta_{0,0}=1\,,\\ &\theta_{0,j+1}=\frac{e}{\rho\,c^{2}}\,\theta_{0,j}\,-\,\frac{ \partial\,\theta_{0,j}}{\partial\,\gamma}\,,\\ &\theta_{h,j+1}=\frac{j+2}{\gamma}\left(\theta_{h,j}+\frac{j+3-2 h}{2h}\theta_{h-1,j}\right)\,,\qquad\qquad\text{for $h=1$, $\cdots$, $\,\left[\frac{j+1}{2}\right]$ }\,,\\ &\theta_{\frac{j+2}{2},j+1}=\frac{1}{\gamma}\theta_{\frac{j}{2},j}\,, \qquad\qquad\qquad\qquad\qquad\text{for $j$ even}\,.\end{split} \tag{10}\] Regarding the production terms in the balance equations (2b)\({}_{2}\) we see that, by means of (3), it becomes \[\begin{split}& I^{\alpha_{1}\cdots\alpha_{i}}=-\,\frac{U_{\alpha }}{c^{2}\tau}\,\left(A^{\alpha\alpha_{1}\cdots\alpha_{i}}\,-\,A_{E}^{\alpha \alpha_{1}\cdots\alpha_{i}}\right)\,-\,\frac{3}{\rho\,c^{6}\,\tau\,\theta_{1,2 }}\,U_{\alpha}\,q_{\beta}\,A_{E}^{\alpha\beta\alpha_{1}\cdots\alpha_{i}}\,,\\ &\text{where}\quad q^{\beta}=U_{\gamma}\,\left(T^{\gamma\beta}\,- \,T_{E}^{\gamma\beta}\right)\,.\end{split} \tag{11}\] So far we have described the results of Extended Thermodynamics of polyatomic gases, as obtained in [9] following the new ideas of [7]. On the other hand, **Relativistic Ordinary Thermodynamics** (ROT) uses only the equations (2a) with the following definitions \[\begin{split}& A^{\alpha\alpha_{1}}=T^{\alpha\alpha_{1}}=\frac{e}{c ^{2}}\,U^{\alpha}U^{\alpha_{1}}\,+\,(p+\pi)h^{\alpha\alpha_{1}}\,+\,\frac{2}{c ^{2}}\,U^{(\alpha}q^{\alpha_{1})}\,+\,t^{<\alpha\alpha_{1}>}\quad\text{where}\\ &\pi=-\nu\,\partial_{\alpha}\,U^{\alpha}\,,\,q^{\beta}=-\,\chi\, h^{\alpha\beta}\left(\partial_{\alpha}\,T\,-\,\frac{T}{c^{2}}\,U^{\mu}\,\partial_{ \mu}\,U_{\alpha}\right)\,,\,t_{<\beta\gamma>}=2\,\mu\,h^{\alpha}_{\beta}\,h^{ \mu}_{\gamma}\,\partial_{<\alpha}\,U_{\mu>}\,,\end{split} \tag{12}\] where the angular brackets appearing in \(t_{<\beta\gamma>}\) denote the traceless symmetric part of the tensor. Here (12)\({}_{2-4}\) are the Eckart equations [10]; in particular, following [2] equation (12)\({}_{3}\) corresponds to relativistic version of the Fourier law while equations (12)\({}_{2}\) and (12)\({}_{4}\) correspond to the relativistic version of the Navier-Stokes law. The coefficients \(\nu\), \(\chi\), \(\mu\) are called the bulk viscosity, the heat conductivity and the shear viscosity, respectively. These equations have the drawback that they are not hyperbolic but parabolic. As we have already said, this was the reason for the birth of Extended Thermodynamics whose equations are hyperbolic. 
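Because every equilibrium quantity above reduces to the scalars \(\theta_{k,j}\), it is useful to see how they can be evaluated in practice. The following is a minimal numerical sketch that computes \(J_{m,n}(\gamma)\) and \(\theta_{k,j}\) directly from the integral definitions (4) and (9) by quadrature, working in the dimensionless internal energy \(\iota=\mathcal{I}/(mc^{2})\) and assuming, purely for illustration, the monomial weight \(\varphi(\mathcal{I})\propto\mathcal{I}^{a}\); any other \(\varphi\) can be substituted, and the overall constant from the change of variable cancels in the ratios. The recurrence (10) provides an independent cross-check of the values obtained this way.

```python
# Quadrature sketch for J_{m,n}(gamma) and theta_{k,j} (eqs. (4), (9)); phi(I) ~ I^a is an
# assumed illustrative weight, and the finite cutoffs are safe for gamma = O(1) because the
# integrands have long since underflowed to zero there.
import math
from scipy.integrate import quad

S_MAX = 40.0      # cutoff in s: exp(-gamma*cosh s) underflows far before this
IOTA_MAX = 200.0  # cutoff in iota = I/(m c^2): integrands decay roughly like exp(-gamma*iota)

def J(m, n, gamma):
    # J_{m,n}(gamma) = int_0^inf exp(-gamma cosh s) cosh^n(s) sinh^m(s) ds
    f = lambda s: math.exp(-gamma * math.cosh(s)) * math.cosh(s) ** n * math.sinh(s) ** m
    return quad(f, 0.0, S_MAX, limit=200)[0]

def weighted_moment(m, n, j, gamma, a):
    # int_0^inf J_{m,n}(gamma*(1+iota)) (1+iota)^j iota^a d iota
    f = lambda i: J(m, n, gamma * (1.0 + i)) * (1.0 + i) ** j * i ** a
    return quad(f, 0.0, IOTA_MAX, limit=200)[0]

def theta(k, j, gamma, a):
    # eq. (9): binom(j+1, 2k)/(2k+1) * <J*_{2k+2, j+1-2k} (1+iota)^j> / <J*_{2,1}>
    num = math.comb(j + 1, 2 * k) / (2 * k + 1) * weighted_moment(2 * k + 2, j + 1 - 2 * k, j, gamma, a)
    return num / weighted_moment(2, 1, 0, gamma, a)

def energy_ratio(gamma, a):
    # e/(rho c^2), eq. (4); coincides with theta_{0,1}
    return weighted_moment(2, 2, 1, gamma, a) / weighted_moment(2, 1, 0, gamma, a)

if __name__ == "__main__":
    gamma, a = 3.0, 1.0                              # gamma = m c^2/(k_B T); a parametrizes phi
    print("theta_00    =", theta(0, 0, gamma, a))    # equals 1 by construction
    print("theta_01    =", theta(0, 1, gamma, a))
    print("e/(rho c^2) =", energy_ratio(gamma, a))   # should match theta_01
    print("theta_12    =", theta(1, 2, gamma, a))    # enters the production term and nu, chi, mu
```

Values produced in this way can be inserted into the linear systems of the next section to evaluate the transport coefficients numerically.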
The paper is organized as follows: In section 2 we briefly recall how the MI procedures works and show how it is possible to reconstruct the laws of Relativistic Ordinary Thermodynamics by using this procedure. Moreover, in section 2 we derive the expressions of the heat conductivity \(\chi\), the shear viscosity \(\mu\), and the bulk viscosity \(\nu\) in the particular cases \(N=3\) and \(N=2\) putting in evidence that, if one uses the MI procedure, these expressions depend on the number of moments \(N\). In section 3 we explain how the CEM procedure works and derive the laws of the ROT by using this procedure. In section 4, generalizing a well-known result of [12] for monoatomic case, we prove that in the non-relativistic case the MI and the CEM procedures lead to the same results. The convergence of \(\nu\), \(\chi\), \(\mu\) in the non relativistic limit is proved in section 4.1. Then, the results obtained are summarized in section 5. Finally, an appendix is devoted to some particular integrals used to develop the computations of this paper. ## 2 The Maxwellian Iteration The Maxwellian Iteration Method was applied to recover OT from Extended Thermodynamics of monoatomic gases in [1] for the non relativistic case and in [2] for the relativistic framework. The relativistic case for polyatomic gases with \(N=2\), and for its subsystems with fourteen and six moments, has been treated in [9]. In the next section MI will be implemented in the case of an arbitrary number \(N\). It works in the following way: * The eqs. (2a) are considered, but with their left hand sides calculated at equilibrium and their right hand sides at first order with respect to equilibrium, i.e., \[\begin{array}{l}\partial_{\alpha}A_{E}^{\alpha}=0\,,\quad\partial_{\alpha} A_{E}^{\alpha\alpha_{1}}=0\,,\\ \partial_{\alpha}A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}}=I_{MI}^{\alpha_{1} \cdots\alpha_{n}}\,,\quad\mbox{for}\quad n=2,\,\cdots N\,,\end{array}\] (13) where the meaning of the subscrpt MI will be introduced in the next item. * The deviations of the independent variables from equilibrium are calculated in terms of \(\partial_{\alpha}\lambda^{E}\) and \(\partial_{\alpha}\lambda_{\mu}^{E}\) from (13)\({}_{3}\); they are called "first iterates" and we will denote them with a suffix \(MI\). After that, they are substituted in \(T^{\alpha\beta}\,-\,T_{E}^{\alpha\beta}\) with \(T^{\alpha\beta}\) given by (12)\({}_{1}\). * The quantities \(\partial_{\alpha}\lambda^{E}\) and \(U^{\alpha}U^{\mu}\partial_{\alpha}\lambda_{\mu}^{E}\) are calculated from (13)\({}_{1,2}\) and substituted in the expression of \(T^{\alpha\beta}\,-\,T_{E}^{\alpha\beta}\) obtained in the previous step. In this way one obtains (12) of ROT, with particular expressions \(\nu_{MI}^{N}\), \(\chi_{MI}^{N}\), \(\mu_{MI}^{N}\) of the bulk viscosity \(\nu\), the heat conductivity \(\chi\) and the shear viscosity \(\mu\). However, these expressions depend on the number \(N\) of the extended model from which they come from. For example, in [9] it was found that, for the subsystem with 14 moments, the values of \(\mu\) and \(\chi\) remain the same as for the model with 15 moments (i.e., \(N=2\)) while \(\nu\) changes. This expression for \(\nu\) changes again for its further subsystem with six moments (In this subsystem with six moments \(\mu\) and \(\chi\) do not play a role). 
This is not fully satisfactory because there is only one Ordinary Thermodynamics and it is strange that its equations depend on the number \(N\) of the extended model from which they are derived. In the next subsection this Maxwellian iteration will be described in more detail for an arbitrary number \(N\); furthermore, we will see what is the difference of the expressions of \(\mu\), \(\chi\) and \(\nu\) in the cases \(N=3\) and \(N=2\) in the subsections 2.2 and 2.3, respectively. ### ROT recovered with the Maxwellian Iteration It is easy to prove that eqs. (13)\({}_{1,2}\) can be written as (see [5] for more details) \[V_{E}^{\alpha}\,\partial_{\alpha}\,\lambda^{E}\,+\,T_{E}^{\alpha\mu}\,\partial_ {\alpha}\,\lambda_{\mu}^{E}=0\,,\quad T_{E}^{\alpha\beta}\,\partial_{\alpha}\, \lambda^{E}\,+\,A_{E}^{\alpha\beta\mu}\,\partial_{\alpha}\,\lambda_{\mu}^{E}=0\,. \tag{14}\] The first one of these equations and the second one contracted with \(U_{\beta}\) give a system whose solution is: \[\begin{array}{l}U^{\alpha}\,\partial_{\alpha}\lambda^{E}=-\left|\begin{array} []{ccc}\rho&\frac{e}{c^{2}}\\ \frac{e}{c^{2}}&\rho\,\theta_{0,2}\end{array}\right|^{-1}\left|\begin{array}{ cccc}p&&\frac{e}{c^{2}}\\ \frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}&\rho\,\theta_{0,2}\end{array}\right|h^{ \alpha\delta}\,\partial_{\alpha}\lambda_{\delta}^{E}\,,\\ U^{\alpha}U^{\beta}\,\partial_{\alpha}\lambda_{\beta}^{E}=-\left|\begin{array} []{ccc}\rho&\frac{e}{c^{2}}\\ \frac{e}{c^{2}}&\rho\,\theta_{0,2}\end{array}\right|^{-1}\left|\begin{array}[ ]{ccc}\rho&p\\ \frac{e}{c^{2}}&\frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}\end{array}\right|h^{ \alpha\delta}\,\partial_{\alpha}\lambda_{\delta}^{E}\,.\end{array} \tag{15}\] It is interesting to note that \(h^{\alpha\mu}\,\partial_{\alpha}\,\lambda_{\mu}^{E}=\frac{1}{T}\,h^{\alpha\mu} \,\partial_{\alpha}\,U_{\mu}=-\,\frac{1}{T}\,\partial_{\alpha}\,U^{\alpha}\). The equation (14)\({}_{2}\), contracted with \(h_{\delta}^{\beta}\), allows us to determine \[h^{\alpha\theta}\,\partial_{\alpha}\lambda^{E}=-\,\frac{2}{3}\,\frac{\rho}{p} \,c^{2}\,\theta_{1,2}\,h^{\theta(\alpha}U^{\delta)}\,\partial_{\alpha}\lambda_ {\delta}^{E}\,. 
\tag{16}\] Now we consider (13)\({}_{3}\), with use of (2b), (1)\({}_{2}\) and taking into account that \(q_{\gamma}=U^{\beta}\left(T_{\gamma\beta}\,-\,T_{E\,\gamma\beta}\right)\); jointly with \(V^{\alpha}-V_{E}^{\alpha}=0\), \(U_{\alpha}U_{\beta}\,\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)=0\) we obtain the system \[\begin{array}{l}\sum_{m=0}^{N}U_{\alpha}\left(A_{E}^{\alpha\alpha_{1}\cdots \alpha_{n}\beta_{1}\cdots\beta_{m}}\,+\,\frac{3}{c^{4}\rho\,\theta_{1,2}}\,g_{ \gamma\delta}\,U_{\beta}\,A_{E}^{\alpha\gamma\alpha_{1}\cdots\alpha_{n}}\,A_ {E}^{\delta\beta_{1}\cdots\beta_{m}}\right)\left(\lambda_{\beta_{1}\cdots \beta_{m}}\,-\,\lambda_{\beta_{1}\cdots\beta_{m}}^{E}\right)=\\ =-\,c^{2}\tau\,\left(A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}}\,\partial_{ \alpha}\,\lambda^{E}\,+\,A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}\mu}\, \partial_{\alpha}\,\lambda_{\mu}^{E}\right)^{MI}\\ =-\,c^{2}\tau\,\left(A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}}\,\partial_{ \alpha}\,\lambda^{E}\,+\,A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}\mu}\, \partial_{(\alpha}\,\lambda_{\mu}^{E})\right)\,,\,\mbox{for}\quad n=2\,,\, \cdots,N\,,\\ \sum_{m=0}^{N}A_{E}^{\alpha\beta_{1}\cdots\beta_{m}}\left(\lambda_{\beta_{1} \cdots\beta_{m}}\,-\,\lambda_{\beta_{1}\cdots\beta_{m}}^{E}\right)^{MI}=0\,, \\ \sum_{m=0}^{N}U_{\alpha}U_{\beta}\,A_{E}^{\alpha\beta_{1}\cdots\beta_{m}} \left(\lambda_{\beta_{1}\cdots\beta_{m}}\,-\,\lambda_{\beta_{1}\cdots\beta_{m}} ^{E}\right)^{MI}=0\,.\end{array} \tag{17}\] From this system we can obtain \(\left(\lambda_{\beta_{1}\cdots\beta_{m}}\,-\,\lambda_{\beta_{1}\cdots\beta_{m} }^{E}\right)^{MI}\) as a linear and homogeneous combination of \(\partial_{\alpha}\,\lambda^{E}\), \(\partial_{(\alpha}\,\lambda_{\mu)}^{E}\). By using (15) and (16), \(\left(\lambda_{\beta_{1}\cdots\beta_{m}}\,-\,\lambda_{\beta_{1}\cdots\beta_{m}}^ {E}\right)^{MI}\) will be a linear and homogeneous combination of \(h^{\alpha\mu}\,\partial_{\alpha}\,\lambda_{\mu}^{E}\), \(h^{\delta\alpha}\,U^{\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E}\) and \(h^{\alpha<\delta}\,h^{\beta>\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E}=h^{ \alpha\delta}\,h^{\beta\mu}\,\partial_{<\alpha}\,\lambda_{\mu>3}^{E}\). By substituting these \(\left(\lambda_{\beta_{1}\cdots\beta_{m}}\,-\,\lambda_{\beta_{1}\cdots\beta_{m}}^ {E}\right)^{MI}\) in \[\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}=-\,\frac{m}{k_{B}}\sum_{m= 0}^{N}A_{E}^{\alpha\beta\beta_{1}\cdots\beta_{m}}\left(\lambda_{\beta_{1} \cdots\beta_{m}}\,-\,\lambda_{\beta_{1}\cdots\beta_{m}}^{E}\right)^{MI}\,, \tag{18}\] we obtain that \(h_{\alpha\beta}\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}\) is a scalar function which is linear and homogeneous in the independent variables \(h^{\alpha\mu}\,\partial_{\alpha}\,\lambda_{\mu}^{E}\) (a scalar variable), \(h^{\delta\alpha}\,U^{\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E}\) (a 3-dimensional vector variable) and \(h^{\alpha\delta}\,h^{\beta\mu}\,\partial_{<\alpha}\,\lambda_{\mu>_{\lambda}}^{E}\) (a variable which is a 3-dimensional second order tensor). For the representation theorems (see [13] for more details on the representation theorems) the quantity \(h_{\alpha\beta}\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}\) must be proportional to \(h^{\alpha\mu}\,\partial_{\alpha}\,\lambda_{\mu}^{E}=\frac{-1}{T}\,\partial_{ \alpha}\,U^{\alpha}\). In this way, \((12)_{1}\) is obtained with \(\nu_{MI}^{N}\) instead of \(\nu\). 
Similarly, \(h_{\alpha\delta}\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}\) is a 3-dimensional vector function which is linear and homogeneous in the same independent variables. So, by the representation theorems, \(h_{\alpha\delta}\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}\) must be proportional to \(2\,h^{\delta\alpha}\,U^{\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E}=\frac{-c ^{2}}{T^{2}}\,h^{\delta\alpha}\,\left(\partial_{\alpha}\,T\,-\,\frac{T}{c^{2}} \,U^{\mu}\,\partial_{\mu}\,U_{\alpha}\right)\). In this way, \((12)_{2}\) is obtained with \(\chi_{MI}^{N}\) instead of \(\chi\). Finally, \(h_{\mu}^{<\beta}h_{\nu}^{>\gamma}\,\left(T^{\mu\nu}-T_{E}^{\mu\nu}\right)^{MI}\) is a 3-dimensional traceless second order tensorial function which is linear and homogeneous in the same independent variables. By the representation theorems \(h_{\mu}^{<\beta}h_{\nu}^{>\gamma}\,\left(T^{\mu\nu}-T_{E}^{\mu\nu}\right)^{MI}\) must be proportional to \(h_{\beta}^{\alpha}\,h_{\gamma}^{\mu}\,\partial_{<\alpha}\,U_{\mu>}\). In this way, \((12)_{3}\) is obtained with \(\mu_{MI}^{N}\) instead of \(\mu\). As examples of this procedure, we will consider the particular cases with \(N=3\) and \(N=2\) in the next subsections. ### The Maxwellian iteration in the case \(N=3\). We begin with the case \(N=3\) because the calculations developed in this case allow us also to treat the case with \(N=2\) which will be considered in the next subsection. By comparing the results obtained in the case \(N=3\) with those obtained in the case \(N=2\), it is easy to see that the expressions of the coefficients of bulk viscosity, heat conductivity and shear stress are different. This argument allows us to prove that the expressions of \(\nu,\chi\) and \(\mu\) obtained with the Maxwellian iteration depend on \(N\). So far in literature, this fact has been proved only comparing the case \(N=2\) with two of its subsystems. 
The equations (17) and (18) for \(N=3\) are: \[\begin{split}& U_{\alpha}\left(A_{E}^{\alpha\alpha_{1}\cdots \alpha_{n}}\,+\,\frac{3}{c^{4}\rho\,\theta_{1,2}}\,g_{\gamma\delta\,U_{\beta} \,A_{E}^{\alpha\gamma\alpha_{1}\cdots\alpha_{n}}A_{E}^{\delta\beta}}\right) \left(\lambda\,-\,\lambda^{E}\right)^{MI}+\\ &+\,U_{\alpha}\left(A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}\beta _{1}}\,+\,\frac{3}{c^{4}\rho\,\theta_{1,2}}\,g_{\gamma\delta\,U_{\beta}\,A_{ E}^{\alpha\gamma\alpha_{1}\cdots\alpha_{n}}A_{E}^{\delta\beta\beta_{1}}} \right)\left(\lambda_{\beta_{1}}\,-\,\lambda_{\beta_{1}}^{E}\right)^{MI}+\\ &+\,\sum_{m=0}^{N}U_{\alpha}\left(A_{E}^{\alpha\alpha_{1}\cdots \alpha_{n}\beta_{1}\beta_{2}}\,+\,\frac{3}{c^{4}\rho\,\theta_{1,2}}\,g_{\gamma \delta\,U_{\beta}\,A_{E}^{\alpha\gamma\alpha_{1}\cdots\alpha_{n}}A_{E}^{\delta \beta\beta_{1}\beta_{2}}}\right)\,\left(\lambda_{\beta_{1}\beta_{2}}\right)^{ MI}+\\ &+\,U_{\alpha}\left(A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}\beta _{1}\beta_{2}\beta_{3}}\,+\,\frac{3}{c^{4}\rho\,\theta_{1,2}}\,g_{\gamma\delta \,U_{\beta}\,A_{E}^{\alpha\gamma\alpha_{1}\cdots\alpha_{n}}A_{E}^{\delta \beta\beta_{1}\beta_{2}\beta_{3}}}\right)\,\left(\lambda_{\beta_{1}\beta_{2} \beta_{3}}\right)^{MI}=\\ &=-\,c^{2}\tau\,\left(A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}}\, \partial_{\alpha}\,\lambda^{E}\,+\,A_{E}^{\alpha\alpha_{1}\cdots\alpha_{n}\mu} \,\partial_{(\alpha}\,\lambda_{\mu)}^{E}\right)\,,\,\mbox{for}\quad n=2\,,\, \cdots\,,3\,,\\ &\qquad\qquad A_{E}^{\alpha}\left(\lambda\,-\,\lambda^{E}\right)^{ MI}+\,A_{E}^{\alpha\beta_{1}}\left(\lambda_{\beta_{1}}\,-\,\lambda_{\beta_{1}}^{E} \right)^{MI}+\,A_{E}^{\alpha\beta_{1}\beta_{2}}\left(\lambda_{\beta_{1}\beta_ {2}}\right)^{MI}+\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ \[U_{\alpha}U_{\beta}\,\left[A_{E}^{\alpha\beta}\left(\lambda\,-\, \lambda^{E}\right)^{MI}\,+\,A_{E}^{\alpha\beta\beta_{1}}\left(\lambda_{\beta_{1} }\,-\,\lambda_{\beta_{1}}^{E}\right)^{MI}\,+\,A_{E}^{\alpha\beta\beta_{1}\beta_ {2}}\left(\lambda_{\beta_{1}\beta_{2}}\right)^{MI}\,+\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad 
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ \[a_{41}=\frac{1}{2}\,\theta_{1,3}\,+\,\frac{9}{10}\,\frac{\theta_{0,1}\, \theta_{0,4}}{\theta_{1,2}}\,;\,a_{42}=\frac{3}{10}\,\theta_{1,4}\,+\,\frac{9}{10 }\,\frac{\theta_{0,2}\,\theta_{0,4}}{\theta_{1,2}}\,;\,a_{43}=\frac{1}{5}\, \theta_{1,5}\,+\,\frac{9}{10}\,\frac{\theta_{0,3}\,\theta_{1,4}}{\theta_{1,2}}\,;\] \[a_{44}=\frac{1}{7}\,\theta_{1,6}\,+\,\frac{9}{10}\,\frac{\theta_{ 0,4}\,\theta_{1,4}}{\theta_{1,2}}\,;\,a_{45}=\frac{1}{9}\,\theta_{2,5}\,+\, \frac{3}{20}\,\frac{\theta_{1,3}\,\theta_{1,4}}{\theta_{1,2}}\,;\] \[a_{46}=\frac{1}{7}\,\theta_{2,6}\,+\,\frac{27}{100}\,\frac{( \theta_{1,4})^{2}}{\theta_{1,2}}\,;\] \[b_{4}=-\,\tau\,\left[\frac{1}{2}\,\theta_{1,3}\,U^{\alpha} \partial_{\alpha}\,\lambda^{E}\,+\left(\frac{3}{10}\,\theta_{1,4}U^{\alpha}U^{ \mu}\,+\,\frac{1}{3}\,c^{2}\,\theta_{2,4}\,h^{\alpha\mu}\right)\partial_{( \alpha}\,\lambda^{E}_{\mu)}\right]\,;\] \[a_{51}=\theta_{0,0}\,;\,a_{52}=\theta_{0,1}\,;\,a_{53}=\theta_{0,2}\,;\,a_{54} =\theta_{0,3}\,;\,a_{55}=\frac{1}{3}\,\theta_{1,2}\,;\,a_{56}=\frac{1}{2}\, \theta_{1,3}\,;\,b_{5}=0\,;\] \[a_{61}=\theta_{0,1}\,;\,a_{62}=\theta_{0,2}\,;\,a_{63}=\theta_{0,3}\,;\,a_{64} =\theta_{0,4}\,;\,a_{65}=\frac{1}{6}\,\theta_{1,3}\,;\,a_{66}=\frac{3}{10}\, \theta_{1,4}\,;\,b_{6}=0\,;\] \[a_{71}=3\,\theta_{1,1}\,;\,a_{72}=\theta_{1,2}\,;\,a_{73}=\frac{1}{2}\, \theta_{1,3}\,;\,a_{74}=\frac{3}{10}\,\theta_{1,4}\,;\,a_{75}=\frac{5}{3}\, \theta_{2,5}\,;\,a_{76}=\theta_{2,4}\,;\] \[b_{7}=-\,\frac{k_{B}}{m\,\rho\,c^{2}}\,\left(T^{\alpha\beta}-T^{\alpha\beta}_{ E}\right)^{MI}\,h_{\alpha\beta}\,;\] where the expression of \(\theta_{k,j}\) has been introduced in (9). 
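Before carrying out this elimination analytically, note that the step used below (and repeated for \(\chi\) and \(\mu\)) is simply a solvability condition for an overdetermined linear system, which is easy to reproduce numerically once the \(\theta_{k,j}\) are available. The sketch below is schematic: the placeholder arrays stand for the coefficients \(a_{ij}(\theta)\) and the known parts of the \(b_{i}\) listed above, which are not re-typed here; it only illustrates how the single unknown hidden in the last column (here the trace \(h_{\alpha\beta}\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}\), entering only through \(b_{7}\)) is extracted from the vanishing of the determinant of the augmented matrix via the algebraic complements \(D_{i}\).

```python
# Schematic illustration (placeholder data) of the Rouche-Capelli elimination used below:
# with 7 equations in 6 unknowns, det([A | b]) = 0, and since the unknown x enters the
# right-hand side only through b_7, the condition sum_i D_i b_i = 0 can be solved for x.
import numpy as np

def cofactors_last_column(M):
    # algebraic complements D_i of the entries in the last column of the square matrix M
    n = M.shape[0]
    D = np.empty(n)
    for i in range(n):
        minor = np.delete(np.delete(M, i, axis=0), n - 1, axis=1)
        D[i] = (-1.0) ** (i + n - 1) * np.linalg.det(minor)
    return D

def solve_compatibility(A, b_known, weight):
    # rows of [A | b] with b_i = b_known_i + weight_i * x; det([A | b]) = 0  =>  solve for x
    M = np.hstack([A, b_known.reshape(-1, 1)])
    D = cofactors_last_column(M)
    return -(D @ b_known) / (D @ weight)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(7, 6))       # placeholder for the a_ij built from the theta_{k,j}
    b_known = rng.normal(size=7)      # placeholder for the derivative terms in b_1, ..., b_6
    weight = np.zeros(7)
    weight[-1] = 1.0                  # the unknown enters only through b_7
    x = solve_compatibility(A, b_known, weight)
    M = np.hstack([A, (b_known + weight * x).reshape(-1, 1)])
    print("det of augmented matrix (should vanish):", np.linalg.det(M))
```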
By applying the Rouche-Capelli Theorem, we see that the determinant of the augmented matrix must be zero; so, from this condition, by calling \(D_{j}\) the algebraic complement on the line \(i\), column 7 of this matrix, we find \[h_{\alpha\beta}\,\left(T^{\alpha\beta}-T^{\alpha\beta}_{E}\right) ^{MI}=-\,\frac{c^{2}m\rho\,\tau}{k_{B}}\cdot\] \[\,\cdot\left\{\frac{D_{1}}{D_{7}}\,\left[\theta_{0,2}\,U^{\alpha} \partial_{\alpha}\,\lambda^{E}\,+\left(\theta_{0,3}U^{\alpha}U^{\mu}\,+\, \frac{1}{6}\,c^{2}\,\theta_{1,3}\,h^{\alpha\mu}\right)\partial_{(\alpha}\, \lambda^{E}_{\mu)}\right]+\right.\] \[\,+\,\frac{D_{2}}{D_{7}}\,\left[\theta_{0,3}\,U^{\alpha} \partial_{\alpha}\,\lambda^{E}\,+\left(\theta_{0,4}U^{\alpha}U^{\mu}\,+\, \frac{3}{10}\,c^{2}\,\theta_{1,4}\,h^{\alpha\mu}\right)\partial_{(\alpha}\, \lambda^{E}_{\mu)}\right]+\] \[\,+\,\frac{D_{3}}{D_{7}}\,\left[\theta_{1,2}\,U^{\alpha} \partial_{\alpha}\,\lambda^{E}\,+\left(\frac{1}{2}\,\theta_{1,3}U^{\alpha}U^{ \mu}\,+\,\frac{5}{3}\,c^{2}\,\theta_{2,3}\,h^{\alpha\mu}\right)\partial_{( \alpha}\,\lambda^{E}_{\mu)}\right]+\] \[\,+\,\frac{D_{4}}{D_{7}}\,\left[\frac{1}{2}\,\theta_{1,3}\,U^{ \alpha}\partial_{\alpha}\,\lambda^{E}\,+\left(\frac{3}{10}\,\theta_{1,4}U^{ \alpha}U^{\mu}\,+\,\frac{1}{3}\,c^{2}\,\theta_{2,4}\,h^{\alpha\mu}\right) \partial_{(\alpha}\,\lambda^{E}_{\mu)}\right]\right\}\,,\] i.e., by using (15), \[\begin{array}{l}h_{\alpha\beta}\,\left(T^{\alpha\beta}-T_{E}^{\alpha\beta} \right)^{MI}=-\,3\,\nu^{MI}\,\partial_{\alpha}\,U^{\alpha}\,,\quad\mbox{with}\\ \\ \nu^{MI}=\frac{c^{2}m_{\rho}\,\tau}{3\,k_{B}T}\left[\begin{array}{c} \left(\frac{D_{1}}{D_{7}}\,\theta_{0,2}+\,\frac{D_{2}}{D_{7}}\,\theta_{0,3}+ \frac{D_{3}}{D_{7}}\,\theta_{1,2}\,+\,\frac{1}{2}\,\theta_{1,3}\right)\, \begin{vmatrix}\frac{p}{\rho}&\frac{e}{\rho\,c^{2}}\\ c^{2}\theta_{1,2}&\theta_{0,2}\end{vmatrix}\\ \hline 1&\frac{e}{\rho\,c^{2}}\\ \frac{e}{\rho\,c^{2}}&\theta_{0,2}\end{array}\right.+\\ \\ +\,\left(\frac{D_{1}}{D_{7}}\,\theta_{0,3}\,+\,\frac{D_{2}}{D_{7}}\,\theta_{0,4 }\,+\,\frac{D_{4}}{D_{7}}\,\frac{3}{10}\,\theta_{1,4}\right)\,\begin{vmatrix} 1&\frac{p}{\rho}\\ \frac{e}{\rho\,c^{2}}&c^{2}\,\theta_{1,2}\\ \hline 1&\frac{e}{\rho\,c^{2}}\\ \frac{e}{\rho\,c^{2}}&\theta_{0,2}\end{vmatrix}\end{array}\,+\\ \\ +\,\left(\frac{e}{\rho\,c^{2}}\right)\,. \tag{21}\] **Determination of the heat conductivity \(\chi\).** We consider now eqs. (19)\({}_{1}\) with \(n=2\) contracted by \(\frac{h_{\alpha_{1}}^{\theta}U_{\alpha_{2}}}{\rho\,c^{6}}\), (19)\({}_{1}\) with \(n=3\) contracted by \(\frac{h_{\alpha_{1}}^{\theta}U_{\alpha_{2}}U_{\alpha_{3}}}{\rho\,c^{6}}\), (19)\({}_{1}\) with \(n=3\) contracted by \(\frac{h_{\alpha_{1}}^{\theta}a_{2}}{\rho\,c^{6}}\), (19)\({}_{2}\) contracted by \(\frac{h_{\alpha}^{\theta}}{-\rho\,c^{2}}\) and (19)\({}_{4}\) contracted by \(k_{B}\,\frac{h_{\alpha}^{\theta}U\beta}{\rho\,c^{4}}\). 
So we obtain a system \(\sum_{j=1}^{4}b_{ij}X^{j\theta}=b_{i}^{\theta}\) constituted by 5 equations in the 4 unknowns \(X^{1\theta}=h^{\theta\beta_{1}}\left(\lambda_{\beta_{1}}\,-\,\lambda_{\beta_{1}}^{E}\right)^{MI}\), \(X^{2\theta}=h^{\theta\beta_{1}}U^{\beta_{2}}\left(\lambda_{\beta_{1}\beta_{2}}\right)^{MI}\), \(X^{3\theta}=h^{\theta\beta_{1}}U^{\beta_{2}}U^{\beta_{3}}\left(\lambda_{\beta_{1}\beta_{2}\beta_{3}}\right)^{MI}\), \(X^{4\theta}=c^{2}h^{\theta\beta_{1}}h^{\beta_{2}\beta_{3}}\left(\lambda_{\beta_{1}\beta_{2}\beta_{3}}\right)^{MI}\) with coefficients \[\begin{array}{l}b_{11}=-\,\frac{1}{3}\,\theta_{1,3}\,;\quad b_{12}=-\,\frac{1}{5}\,\theta_{1,4}\,+\,\frac{1}{6}\,\frac{\left(\theta_{1,3}\right)^{2}}{\theta_{1,2}}\,;\\ b_{13}=-\,\frac{1}{5}\,\theta_{1,5}\,+\,\frac{3}{20}\,\frac{\theta_{1,3}\,\theta_{1,4}}{\theta_{1,2}}\,;\quad b_{14}=-\,\frac{1}{15}\,\theta_{2,5}\,+\,\frac{1}{10}\,\frac{\theta_{1,3}\,\theta_{2,4}}{\theta_{1,2}}\,;\\ b_{1}^{\theta}=\tau\,\left(\frac{1}{3}\,\theta_{1,2}\,h^{\theta\alpha}\partial_{\alpha}\,\lambda^{E}\,+\,\frac{1}{3}\,\theta_{1,3}\,h^{\theta\alpha}U^{\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E}\right)\,;\\ \\ b_{21}=-\,\frac{1}{5}\,\theta_{1,4}\,;\quad b_{22}=-\,\frac{2}{15}\,\theta_{1,5}\,+\,\frac{1}{10}\,\frac{\theta_{1,4}\,\theta_{1,3}}{\theta_{1,2}}\,;\\ b_{23}=-\,\frac{1}{7}\,\theta_{1,6}\,+\,\frac{9}{100}\,\frac{\left(\theta_{1,4}\right)^{2}}{\theta_{1,2}}\,;\quad b_{24}=-\,\frac{1}{35}\,\theta_{2,6}\,+\,\frac{3}{50}\,\frac{\theta_{1,4}\,\theta_{2,4}}{\theta_{1,2}}\,;\\ b_{2}^{\theta}=\tau\,\left(\frac{1}{6}\,\theta_{1,3}\,h^{\theta\alpha}\partial_{\alpha}\,\lambda^{E}\,+\,\frac{1}{5}\,\theta_{1,4}\,h^{\theta\alpha}U^{\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E}\right)\,;\end{array}\] \[b_{31}=-\,\frac{2}{3}\,\theta_{2,4}\,;\quad b_{32}=-\,\frac{4}{15}\,\theta_{2,5}\,+\,\frac{1}{3}\,\frac{\theta_{2,4}\,\theta_{1,3}}{\theta_{1,2}}\,;\] \[b_{33}=-\,\frac{1}{7}\,\theta_{2,6}\,+\,\frac{3}{10}\,\frac{\theta_{1,4}\,\theta_{2,4}}{\theta_{1,2}}\,;\quad b_{34}=-\,\frac{1}{5}\,\theta_{3,6}\,+\,\frac{1}{5}\,\frac{(\theta_{2,4})^{2}}{\theta_{1,2}}\,;\] \[b_{3}^{\theta}=\,\tau\,\left(\theta_{2,3}\,h^{\theta\alpha}
\partial_{\alpha}\,\lambda^{E}\,+\,\frac{2}{3}\,\theta_{2,4}\,h^{\theta\alpha} U^{\mu}\,\partial_{(\alpha}\,\lambda^{E}_{\mu)}\right)\,;\] \[b_{41}=\theta_{1,1}\,;\quad b_{42}=\frac{2}{3}\,\theta_{1,2}\,;\quad b_{43}= \frac{1}{2}\,\theta_{1,3}\,;\quad b_{44}=\theta_{2,3}\,;\quad b_{4}^{\theta}= 0\,;\] \[b_{51}=\frac{1}{3}\,\theta_{1,2}\,;\quad b_{52}=\frac{1}{3}\, \theta_{1,3}\,;\quad b_{53}=\frac{3}{10}\,\theta_{1,4}\,;\quad b_{54}=\frac{1 }{5}\,\theta_{2,4}\,;\] \[b_{5}^{\theta}=\frac{k_{B}}{m\,\rho\,c^{4}}\,h_{\alpha}^{\theta} U_{\beta}\,\left(T^{\alpha\beta}\,-\,T_{E}^{\alpha\beta}\right)^{MI}\,;\] By applying the Rouche-Capelli Theorem, we see that the determinant of the augmented matrix must be zero; so, from this condition by calling \(M_{j}\) the algebraic complement on the line \(i\), column \(5\) of this matrix, we find \[h_{\alpha}^{\theta}U_{\beta}\,\left(T^{\alpha\beta}\,-\,T_{E}^{ \alpha\beta}\right)^{MI}=-\,\frac{c^{4}m\rho\,\tau}{k_{B}}\cdot\] \[\cdot\left[\frac{M_{1}}{M_{5}}\,\left(\frac{1}{3}\,\theta_{1,2}\, h^{\theta\alpha}\partial_{\alpha}\,\lambda^{E}\,+\,\frac{1}{3}\,\theta_{1,3}\,h^{ \theta\alpha}U^{\mu}\,\partial_{(\alpha}\,\lambda^{E}_{\mu)}\right)\,+\right.\] \[\left.\,+\,\frac{M_{2}}{M_{5}}\,\left(\frac{1}{6}\,\theta_{1,3}\, h^{\theta\alpha}\partial_{\alpha}\,\lambda^{E}\,+\,\frac{1}{5}\,\theta_{1,4}\,h^{ \theta\alpha}U^{\mu}\,\partial_{(\alpha}\,\lambda^{E}_{\mu)}\right)\,+\right.\] \[\left.\,+\,\frac{M_{3}}{M_{5}}\,\left.\left(\theta_{2,3}\,h^{ \theta\alpha}\partial_{\alpha}\,\lambda^{E}\,+\,\frac{2}{3}\,\theta_{2,4}\,h^{ \theta\alpha}U^{\mu}\,\partial_{(\alpha}\,\lambda^{E}_{\mu)}\right)\right]\,,\] i.e., by using (15), \[q^{\theta}=-\,\chi^{MI}\,h^{\alpha\theta}\left(\partial_{\alpha} \,T\,-\,\frac{T}{c^{2}}\,U^{\mu}\,\partial_{\mu}\,U_{\alpha}\right)\,,\mbox{ with}\] \[\chi^{MI}=\frac{c^{6}m\rho\,\tau}{2\,k_{B}T^{2}}\left[\frac{M_{1}} {M_{5}}\,\left(-\,\frac{2}{3}\,\frac{\rho\,c^{2}}{p}\,\,(\theta_{1,2})^{2}\,+ \,\frac{1}{3}\,\theta_{1,3}\right)\,+\right. \tag{23}\] \[+\,\left.\frac{M_{2}}{M_{5}}\,\left(-\,\frac{1}{3}\,\frac{\rho\,c ^{2}}{p}\,\theta_{1,2}\,\theta_{1,3}\,+\,\frac{1}{5}\,\theta_{1,4}\right)+\, \frac{M_{3}}{M_{5}}\,\left(-\,2\,\frac{\rho\,c^{2}}{p}\,\theta_{1,2}\,\theta_{ 2,3}\,+\,\frac{2}{3}\,\theta_{2,4}\right)\right]\,,\] **Determination of the shear viscosity \(\mu\).** Let us consider now eqs. (19)\({}_{1}\) with \(n=2\) contracted by \(\frac{h_{\alpha_{1}<\theta}h_{\psi>\alpha_{2}}}{\rho\,c^{3}}\), (19)\({}_{1}\) with \(n=3\) contracted by \(\frac{h_{\alpha_{1}<\theta}h_{\psi>\alpha_{2}}U_{\alpha_{3}}}{\rho\,c^{3}}\) and (19)\({}_{4}\) contracted by \(\frac{h_{\alpha_{<}\theta}h_{\psi>\beta}}{\rho\,c^{4}}\). 
So we obtain a system \(\sum_{j=1}^{2}c_{ij}X_{<\theta\psi>}^{j}=b_{i<\theta\psi>}\) constituted by \(3\) equations in the \(2\) unknowns \(X_{<\theta\psi>}^{1}=h_{<\theta}^{\beta_{1}}h_{\psi>}^{\beta_{2}}\left(\lambda_ {\beta_{1}\beta_{2}}\right)^{MI}\), \(X_{<\theta\psi>}^{2}=h_{<\theta}^{\beta_{1}}h_{\psi>}^{\beta_{2}}U^{\beta_{3}} \left(\lambda_{\beta_{1}\beta_{2}\beta_{3}}\right)^{MI}\), with coefficients \[c_{11}=\frac{2}{15}\,\theta_{2,4}\,;\quad c_{12}=\frac{2}{15}\,\theta_{2,5}\,; \quad b_{1<\theta\psi>}=-\,\frac{2}{3}\,\tau\,\theta_{2,3}\,\partial_{<\theta} \,\lambda^{E}_{\psi>_{3}}\,; \tag{24}\] \[c_{21}=\frac{2}{45}\,\theta_{2,5}\,;\quad c_{22}=\frac{1}{35}\,\theta_{2,6}\,; \quad b_{2<\theta\psi>}=-\,\frac{2}{15}\,\tau\,\theta_{2,4}\,\partial_{<\theta} \,\lambda_{\psi>_{3}}^{E}\,;\] \[c_{31}=\frac{2}{3}\,\theta_{2,3}\,;\quad c_{32}=\frac{2}{5}\,\theta_{2,4}\,; \quad b_{3<\theta\psi>}=-\,\frac{k_{B}}{m\,c^{4}\rho}\,h_{\alpha<\theta}h_{ \psi>\beta}\left(T^{\alpha\beta}\,-\,T_{E}^{\alpha\beta}\right)^{MI}\,.\] From this system we obtain \[\begin{array}{c}t_{<\theta\psi>}=2\,\mu^{MI}\,\partial_{<\theta}\,U_{\psi>}\,, \mbox{with}\\ \left|\frac{2}{15}\,\theta_{2,4}\right.\quad\frac{2}{15}\,\theta_{2,5}\right. \quad-\,\frac{2}{3}\,\theta_{2,3}\\ \left|\frac{2}{45}\,\theta_{2,5}\right.\quad\frac{1}{35}\,\theta_{2,6}\quad- \,\frac{2}{15}\,\theta_{2,4}\\ \left|\frac{2}{3}\,\theta_{2,3}\right.\quad\frac{2}{5}\,\theta_{2,4}\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\ we find \[\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}\,h_{\alpha \beta}=-\,\frac{m\,\rho\,c^{2}\,\tau}{k_{B}}\,\left(\frac{\theta_{0,2}\,D_{1}\,+ \,\theta_{1,2}\,D_{2}}{D_{5}}\,\,U^{\alpha}\partial_{\alpha}\,\lambda^{E}+\right.\] \[+\,\left.\frac{\theta_{0,3}\,D_{1}\,+\,\frac{1}{2}\,\theta_{1,3} \,D_{2}}{D_{5}}\,U^{\alpha}U^{\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E}\,+ \,c^{2}\,\,\frac{\frac{1}{6}\,\theta_{1,3}\,D_{1}\,+\,\frac{5}{3}\,\theta_{2,3 }\,D_{2}}{D_{5}}\,\,h^{\alpha\mu}\partial_{(\alpha}\,\lambda_{\mu)}^{E}\right)\,,\] i.e., by using (15), \[\left(T^{\alpha\beta}-T_{E}^{\alpha\beta}\right)^{MI}\,h_{\alpha \beta}=-\,3\,\nu^{2MI}\,\partial_{\alpha}\,U^{\alpha}\,,\quad\mbox{where}\] 
\[\nu^{2MI}=\frac{c^{2}m\,\rho\,\tau}{3\,k_{B}T}\,\left(\frac{ \theta_{0,2}\,D_{1}\,+\,\theta_{1,2}\,D_{2}}{D_{5}}\,\frac{\left|\begin{array} []{cc}\frac{p}{\rho}&\frac{e}{\rho\,c^{2}}\\ \frac{c^{2}\theta_{1,2}}{1}&\frac{e}{\rho\,c^{2}}\\ \frac{e}{\rho\,c^{2}}&\theta_{0,2}\end{array}\right|}{\left|\begin{array}{cc }\frac{p}{\rho}&\frac{e}{\rho\,c^{2}}\\ \frac{c^{2}\theta_{1,2}}{1}&\frac{e}{\rho\,c^{2}}\\ \frac{e}{\rho\,c^{2}}&\theta_{0,2}\end{array}\right|}+\right. \tag{26}\] \[+\,c^{2}\,\,\frac{\frac{1}{6}\,\theta_{1,3}\,D_{1}\,+\,\frac{5}{3 }\,\theta_{2,3}\,D_{2}}{D_{5}}+\,\frac{\theta_{0,3}\,D_{1}\,+\,\frac{1}{2}\, \theta_{1,3}\,D_{2}}{D_{5}}\,\frac{\left|\begin{array}{cc}1&\frac{p}{\rho} \\ \frac{e}{\rho\,c^{2}}&c^{2}\,\theta_{1,2}\\ \hline 1&\frac{e}{\rho\,c^{2}}\\ \frac{e}{\rho\,c^{2}}&\theta_{0,2}\end{array}\right|}\,.\] It is evident that this value of \(\nu\) is different from that found in (21) when \(N=3\). **Determination of the heat conductivity \(\chi\)**. We consider now eqs. (19)\({}_{1}\) with \(n=2\) contracted by \(\frac{h_{\alpha_{1}}^{\theta}U_{\alpha_{2}}}{\rho\,c^{6}}\), (19)\({}_{2}\) contracted by \(\frac{h_{\alpha}^{\theta}}{-\,\rho\,c^{2}}\) and (19)\({}_{4}\) contracted by \(k_{B}\,\frac{h_{\alpha}^{\theta}U\beta}{\rho\,c^{4}}\). We obtain a system \(\sum_{j=1}^{2}b_{ij}X^{j\theta}=b_{i}^{\theta}\) constituted by 3 equations in the 2 unknowns \(X^{1\theta}=h^{\theta\beta_{1}}\left(\lambda_{\beta_{1}}\,-\,\lambda_{\beta_{ 1}}^{E}\right)^{MI}\) and \(X^{2\theta}=h^{\theta\beta_{1}}U^{\beta_{2}}\left(\lambda_{\beta_{1}\beta_{2} }\right)^{MI}\). The augmented matrix can be obtained by eliminating the rows 2 and 3 and the columns 3 and 4 from the augmented matrix introduced in subsection 4.2, so its determinant is given by: \[\begin{vmatrix}b_{11}&b_{12}&b_{1}^{\theta}\\ b_{41}&b_{42}&b_{4}^{\theta}\\ b_{51}&b_{52}&b_{5}^{\theta}\end{vmatrix}=0\,,\] where \(b_{ij}\) and \(b_{i}^{\theta}\) are the same of the case \(N=3\). By calling \(D_{j}\) the algebraic complements of the line \(i\), column 3 of the preceding matrix, we find \[\frac{k_{B}}{m\,\rho\,c^{4}}\,h_{\alpha}^{\theta}U_{\beta}\,\left(T^{\alpha \beta}\,-\,T_{E}^{\alpha\beta}\right)^{MI}=-\,\frac{\tau\,\left(\frac{1}{3}\, \theta_{1,2}\,h^{\theta\alpha}\partial_{\alpha}\,\lambda^{E}\,+\,\frac{1}{3} \,\theta_{1,3}\,h^{\theta\alpha}U^{\mu}\,\partial_{(\alpha}\,\lambda_{\mu)}^{E} \right)\,D_{1}}{D_{3}}\,,\] i.e., by using (15), \[\begin{vmatrix}q^{\theta}=-\,\chi^{2MI}\,h^{\alpha\theta}\left( \partial_{\alpha}\,T\,-\,\frac{T}{c^{2}}\,U^{\mu}\,\partial_{\mu}\,U_{\alpha} \right)\,,\mbox{where}\\ \chi^{2MI}=\frac{m\,\rho\,c^{6}\tau}{2\,k_{B}T^{2}}\,\frac{D_{1}}{D_{3}}\,\left( -\,\frac{2}{3}\,\rho\,c^{2}\,\frac{\left(\theta_{1,2}\right)^{2}}{p}\,+\, \frac{1}{3}\,\theta_{1,3}\right)\,,\end{vmatrix} \tag{27}\] It is evident that this value of \(\chi\) is different from the value obtained by using the expression of \(\chi\) found in (23) in the case \(N=3\). **Determination of the shear viscosity \(\mu\).** Let us consider now eqs. (19)\({}_{1}\) with \(n=2\) contracted by \(\frac{h_{\alpha_{1}<\theta}h_{\psi>\alpha_{2}}}{\rho\,c^{\theta}}\) and (19)\({}_{4}\) contracted by \(\frac{h_{\alpha<\theta}h_{\psi>\beta}}{\rho\,c^{4}}\). We obtain a system \(\sum_{j=1}^{1}c_{ij}X_{<\theta\psi>}^{j}=b_{i<\theta\psi>}\) constituted by 2 equations in the 1 unknown \(X_{<\theta\psi>}^{1}=h_{<\theta}^{\beta_{1}}h_{\psi>}^{\beta_{2}}\,(\lambda_{ \beta_{1}\beta_{2}})^{MI}\). 
The augmented matrix can be obtained by eliminating row 2 and column 2 from the augmented matrix introduced in subsection 4.3 and its determinant is given by: \[\begin{vmatrix}c_{11}&&b_{1<\theta\psi>}\\ c_{31}&&b_{3<\theta\psi>}\end{vmatrix}=0\,,\] where \(c_{ij}\) and \(b_{i<\theta\psi>}\) are the same of the case \(N=3\). From this equation we find \[t_{<\theta\psi>}=2\,\mu^{MI}\,\partial_{<\theta}\,U_{\psi>}\,, \quad\mbox{with} \tag{28}\] \[\mu^{MI}\,=\frac{1}{3}\,\frac{m\,c^{4}\rho\,\tau}{k_{B}T}\,\frac {c_{31}}{c_{11}}\,\theta_{2,3}=\frac{5}{3}\,\frac{m\,c^{4}\rho}{k_{B}T}\,\frac {\theta_{2,3}}{\theta_{2,4}}\,\theta_{2,3}\,, \tag{29}\] It is evident that this value of \(\mu\) is different from the value of \(\mu\) furnished by (25) when \(N=3\). These transport coefficients are the same ones of [9] if we take into account that in [9] the authors call \(\omega=\frac{e}{\rho\,c^{2}}\) (see also eq. (12)\({}_{2}\)) and the quantities \(B_{q}\), \(B_{2}^{\pi}\), \(B^{t}\) present in [9] whose expressions are reported in equation (44) in terms also of \(C_{5}\) which is described in eq. (34)\({}_{2}\), the matrices \(N^{\pi}\) and \(D_{4}\) which are given in the equation before equation (30) and the matrices \(N_{3}\) and \(D_{3}\) which are given in the equation after (32)). ## 3 The Chapman-Enskog Method This method can be found in the articles [14], [15] and has been further explained in [11]. We describe how this method works by enclosing the full expression of the production term which was found in [16] and modified in [9]. In particular, the method starts by considering the following equations \[p^{\alpha}\,\partial_{\alpha}\,f=Q\,,\quad\partial_{\alpha}V^{\alpha}=0\,, \quad\partial_{\alpha}T^{\alpha\beta}=0\,, \tag{30}\] i.e., the Boltzmann equation and the conservation laws of mass and momentum-energy. Then the following steps have to be followed: 1. The eqs. (30) are considered, but with their left hand sides calculated at equilibrium and their right hand sides at first order with respect to equilibrium, i.e. \[p^{\alpha}\,\partial_{\alpha}\,f_{E}=Q^{(OT)}=\frac{U^{\mu}p_{\mu}}{c^{2}\tau }\left[\left(f_{E}-f\right)^{OT}-f_{E}\,p^{\gamma}q_{\gamma}^{OT}\frac{3}{m\,c ^{4}\rho\,\theta_{1,2}}\,\left(1+\frac{\cal I}{mc^{2}}\right)\right]\,.\] \[\partial_{\alpha}V_{E}^{\alpha}=0\,,\quad\partial_{\alpha}T_{E}^{ \alpha\beta}=0\,,\] (31) where the superscript \(OT\) denotes that these quantities are the first iterates defined with this approach. 2. The deviation of the distribution function from its value at equilibrium is calculated in terms of \(\partial_{\alpha}\lambda^{E}\) and \(\partial_{\alpha}\lambda^{E}_{\mu}\) from (31)\({}_{1}\) and used in eqs. (2b)\({}_{1,2}\) with \(n=0\) and \(n=1\). Obviously, in this way \(I=0\), \(I^{\alpha_{1}}=0\) are obtained, thus respecting the conservation laws of mass and of momentum-energy. 3. The quantities \(\partial_{\alpha}\lambda^{E}\) and \(U^{\alpha}U^{\mu}\partial_{\alpha}\lambda^{E}_{\mu}\) are calculated from (31)\({}_{2,3}\) and substituted in the expression of \(A^{\alpha}\,-\,A^{\alpha}_{E}\), \(T^{\alpha\beta}\,-\,T^{\alpha\beta}_{E}\) obtained in the previous step. We note that in the expression of \(Q\) in [11] there are 6 unknown scalars \(a_{i}\) with \(i=0,\cdots 5\) which have to be determined. 
From the third line on page 116 of [11], by imposing \(V^{\alpha}-V^{\alpha}_{E}=0\), \(e\,-\,e_{E}=\frac{U_{\alpha}U_{\beta}}{c^{2}}\left(T^{\alpha\beta}-T^{\alpha \beta}_{E}\right)=0\) the authors find \(a_{0}\), \(a_{1}\), \(a_{3}\). By imposing \(V^{\alpha}-V^{\alpha}_{E}=0\), \(e\,-\,e_{E}=\frac{U_{\alpha}U_{\beta}}{c^{2}}\left(T^{\alpha\beta}-T^{\alpha \beta}_{E}\right)=0\) we find simply that \(q_{\gamma}\) is constrained by \(q_{\gamma}\,U^{\gamma}=0\). So, \(q_{\gamma}\) replaces the remaining part of Cercignani-Kremer's unknown scalars \(a_{2}\), \(a_{4}\), \(a_{5}\). It is interesting that in a model with 14 or more moments, \(q_{\gamma}\) becomes exactly the heat flux density. In a model with six moments, there is no heat flux; in this case \(q_{\gamma}\) remains a mathematical tool as the scalars \(a_{2}\), \(a_{4}\), \(a_{5}\) of the Cercignani-Kremer method. But it cannot be eliminated, otherwise the zero deviation of \(V^{\alpha}\) from its value at equilibrium would be lost. ### ROT recovered with the Chapman-Enskog Method In this subsection we apply the Chapman-Enskog method to the equations of polyatomic gases with an arbitrary number \(N\). In this way we will find (12) of ROT, with particular expressions \(\nu^{OT}\), \(\chi^{OT}\), \(\mu^{OT}\) of the bulk viscosity \(\nu\), the heat conductivity \(\chi\) and the shear viscosity \(\mu\) and we will show that all these coefficients do not depend on \(N\). We have to consider the equations \[f\,-\,f_{E}=\frac{c^{2}\tau}{k_{B}U^{\mu}p_{\mu}}\,f_{E}\,p^{ \delta}\left[m\,\partial_{\delta}\,\lambda^{E}\,+\left(1+\frac{\cal I}{mc^{2}} \right)\,p^{\nu}\,\partial_{\delta}\,\lambda^{E}_{\nu}\right]-\,3\,f_{E}\,p^{ \mu}q_{\mu}\frac{1+\frac{\cal I}{mc^{2}}}{m\,c^{4}\rho\,\theta_{1,2}}\,,\] \[V^{\alpha}\,-\,V^{\alpha}_{E}=0\,,\,U_{\alpha}U_{\beta}\left(T^ {\alpha\beta}\,-\,T^{\alpha\beta}_{E}\right)=0\,,\,\partial_{\alpha}V^{\alpha }_{E}=0\,,\,\partial_{\alpha}T^{\alpha\beta}_{E}=0\,.\] The equations (32)\({}_{4}\) and (32)\({}_{5}\) are exactly the equations (14) of the MI approach and so the solution of these equations is given by (15) and (16). Let us now consider equation (32)\({}_{2}\) contracted with \(\frac{U_{\alpha}}{c^{2}}\). By using (32)\({}_{1}\) contracted with \(m\,c\,\varphi({\cal I})\,\frac{U_{\alpha}}{c^{2}}\,p^{\alpha}\) and integrated in \(d\,{\cal I}\,d\,{\vec{P}}\) it becomes \[0=\frac{U_{\alpha}}{c^{2}}\left(V^{\alpha}\,-\,V^{\alpha}_{E}\right)=\frac{m\, \tau}{k_{B}}\left(V^{\alpha}_{E}\,\partial_{\alpha}\lambda^{E}\,+\,T^{\alpha \delta}_{E}\,\partial_{\alpha}\lambda^{E}_{\delta}\right)\,-\,\frac{3}{\, \rho\,c^{6}\,\theta_{1,2}}\,U_{\alpha}\,q_{\mu}\,T^{\alpha\mu}_{E}\,,\] which is an identity for eqs. (15) (see also the first equation after (32)). To impose eq. (32)\({}_{2}\) contracted with \(h^{\theta}_{\alpha}\), we need the tensors (49) and their representations (50) of the appendix. 
By using (32)\({}_{1}\) contracted with \(m\,c\,\varphi({\cal I})\,h^{\theta}_{\alpha}\,p^{\alpha}\) and integrated in \(d\,{\cal I}\,d\,{\vec{P}}\) we find \[0=h^{\theta}_{\alpha}\left(V^{\alpha}\,-\,V^{\alpha}_{E}\right)=\frac{m\,\tau} {k_{B}}\left(h^{\theta}_{\alpha}\,A^{*\alpha\delta}\,\partial_{\delta}\lambda^{ E}\,+\,h^{\theta}_{\alpha}\,A^{*\alpha\delta\nu}\,\partial_{\delta}\lambda^{E}_{ \nu}\right)\,-\,\frac{3}{\,\rho\,c^{4}\,\theta_{1,2}}\,h^{\theta}_{\alpha}\,q_ {\mu}\,T^{\alpha\mu}_{E}\,,\] from which we desume \[q^{\theta}=-\,\frac{m\,\tau c^{6}}{3\,k_{B}}\,\frac{\rho^{2}}{p}\,\theta_{1,2} \left(\theta_{1,1}^{*}\,h^{\alpha\theta}\,\partial_{\alpha}\lambda^{E}+\,\frac{ 2}{3}\,\theta_{1,2}^{*}\,h^{\theta(\delta}U^{\nu)}\,\partial_{\delta}\lambda^{E }_{\nu}\right)=-\,\chi\,h^{\theta\alpha}\,\left(\partial_{\alpha}\,T\,-\, \frac{T}{c^{2}}\,U^{\mu}\,\partial_{\mu}\,U_{\alpha}\right)\,, \tag{33}\] \[\mbox{with}\quad\chi=-\,\frac{m\,\tau c^{8}}{9\,k_{B}T^{2}}\,\frac{\rho^{2}}{p }\,\theta_{1,2}\left(\theta_{1,2}^{*}\,-\,\frac{\rho\,c^{2}}{p}\,\theta_{1,2} \theta_{1,1}^{*}\right)\,,\] where in the last passage we have used (16) and \(\lambda^{E}_{\nu}=\frac{U\nu}{T}\). We see here that \(q^{\theta}\), replaces the Cercignani-Kremer's scalars which did not have a clear physical meaning. They cannot simply be put equal to zero (as in [17]), otherwise the physical requirement \(V^{\alpha}-V^{\alpha}_{E}=0\) would be violated. We now impose eq. (32)\({}_{3}\), by using (32)\({}_{1}\) contracted with \(U_{\alpha}U_{\beta}\,c\,p^{\alpha}p^{\beta}\left(1+\frac{\mathcal{I}}{mc^{2}} \right)\,\varphi(\mathcal{I})\) and integrated in \(d\,\mathcal{I}\,d\,\vec{P}\); we find \[0=\frac{m\,c^{2}\tau}{k_{B}}\left(U_{\beta}T^{\beta\delta}_{E}\,\partial_{ \delta}\lambda^{E}+U_{\beta}A^{\beta\delta\nu}_{E}\,\partial_{\delta}\lambda^ {E}_{\nu}\right)-\,\frac{3}{c^{4}\rho\,\theta_{1,2}}\,A^{\mu\alpha\beta}_{E}q _{\mu}\,U_{\alpha}U_{\beta}\,.\] This is an identity for eqs. (15) (see also the second equation after (32)). We now proceed evaluating the other components of \(T^{\alpha\beta}\,-\,T^{\alpha\beta}_{E}\). We use (32)\({}_{1}\) contracted with \(h^{\theta}_{\alpha}U_{\beta}\,c\,p^{\alpha}p^{\beta}\left(1+\frac{\mathcal{I}} {mc^{2}}\right)\,\varphi(\mathcal{I})\) and integrate in \(d\,\mathcal{I}\,d\,\vec{P}\) to find \[\left(T^{\alpha\beta}\,-\,T^{\alpha\beta}_{E}\right)h^{\theta}_{ \alpha}U_{\beta}=\frac{m\,c^{2}\tau}{k_{B}}\left(h^{\theta}_{\alpha}T^{\alpha \delta}_{E}\,\partial_{\delta}\lambda^{E}+h^{\theta}_{\alpha}T^{\alpha\delta \nu}_{E}\,\partial_{\delta}\lambda^{E}_{\nu}\right)-\,\frac{3}{c^{4}\rho\, \theta_{1,2}}\,A^{\mu\alpha\beta}_{E}q_{\mu}\,h^{\theta}_{\alpha}U_{\beta}=\] \[=-\,\frac{m\,c^{2}\tau}{k_{B}}\left(p\,h^{\theta\delta}_{\delta} \,\partial_{\delta}\lambda^{E}+\frac{2}{3}\,\rho\,c^{2}\,\theta_{1,2}\,h^{ \theta(\delta}U^{\nu)}\,\partial_{\delta}\lambda^{E}_{\nu}\right)-\,q^{\theta }=-\,q^{\theta}\,, \tag{34}\] where, in the last passage, we have used (16). The result is an identity. We note that, in the 6 moments model, the left hand side of (34) is zero, so that the right hand side is \(-\,q^{\theta}\) must be zero; but we have said, after eq. (33) that in this case the physical requirement \(V^{\alpha}-V^{\alpha}_{E}=0\) would be violated. This means that this approach cannot be applied to the case of 6 moments. This is not surprising because it has been shown in eq. 
(19) of [18] (see also [19]) that the optimal choices of moments are \(N=0\) (trivial case with only the conservation law of mass), \(N=1\) (only the 5 Euler's Equations where there is no production term), \(N=2\) (the 15 moments model), \(N=3\) (the 35 moments model) and so on. The 6 moments model is not present in this hierarchy, but it can be considered a subsystem of the 15 moments model by putting \(q^{\theta}=0\) (forgetting the role it played in building the model and simply eliminating eq. (33)). From this perspective the article [17] can be considered correct. Finally, we multiply eq. (32)\({}_{1}\) by \(h^{\theta}_{\alpha}\,h^{\psi}_{\beta}\,cp^{\alpha}p^{\beta}\left(1+\frac{ \mathcal{I}}{mc^{2}}\right)\,\varphi(\mathcal{I})\) and integrate in \(d\,\mathcal{I}\,d\,\vec{P}\); so we obtain \[h^{\theta}_{\alpha}\,h^{\psi}_{\beta}\left(T^{\alpha\beta}\,-\,T^{\alpha \beta}_{E}\right)=h^{\theta}_{\alpha}\,h^{\psi}_{\beta}\left[\frac{m\,\tau}{k _{B}}\,\left(A^{*\delta\alpha\beta}\partial_{\delta}\,\lambda^{E}+\,A^{*\delta \alpha\beta\nu}\,\partial_{\delta}\,\lambda^{E}_{\nu}\right)\,-\frac{3}{c^{4} \,\rho\,\theta_{1,2}}\,q_{\mu}\,A^{\mu\alpha\beta}_{E}\right]=\] \[=\,\frac{m\,\tau}{k_{B}}\left[\frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}^{*}\,h^{\theta\psi}U^{\delta}\partial_{\delta}\,\lambda^{E}+\left(\frac{1 }{6}\,\rho\,c^{2}\,\theta_{1,3}^{*}\,h^{\theta\psi}\,U^{\delta}\,U^{\nu}\,+\, \rho\,c^{4}\,\theta_{2,3}^{*}\,h^{(\theta\psi}\,h^{\delta\nu)}\right)\partial_{ \delta}\,\lambda^{E}_{\nu}\right]\,.\] This equation, contracted with \(h_{\theta\psi}\) gives \[\Pi=\frac{m\,\tau}{k_{B}}\left[\frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}^{*}\,U^{ \delta}\partial_{\delta}\,\lambda^{E}+\left(\frac{1}{6}\,\rho\,c^{2}\,\theta_{1, 3}^{*}\,U^{\delta}\,U^{\nu}\,+\,\frac{5}{9}\,\rho\,c^{4}\,\theta_{2,3}^{*}\,h^ {\delta\nu}\right)\partial_{\delta}\,\lambda_{\nu}^{E}\right]=-\,\nu\,\partial_{ \alpha}\,U^{\alpha}\,, \tag{35}\] with \(\nu=\) \[-\,\frac{m\,\tau}{k_{B}}\,\left[\left|\matrix{\rho&\frac{e}{c^{2}} \cr\frac{e}{c^{2}}&\rho\,\theta_{0,2}\cr}\right|^{-1}\,\left(\frac{1}{3}\,\rho \,c^{2}\,\theta_{1,2}^{*}\,\left|\matrix{p&\frac{e}{c^{2}}\cr\frac{1}{3}\, \rho\,c^{2}\,\theta_{1,2}&\rho\,\theta_{0,2}\cr}\right|+\right.\right.\] \[\left.\left.+\frac{1}{6}\,\rho\,c^{2}\,\theta_{1,3}^{*}\,\left| \matrix{\rho&p\cr\frac{e}{c^{2}}&\frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}\cr} \right|\right)\,-\,\frac{5}{9}\,\rho\,c^{4}\,\theta_{2,3}^{*}\right]\,, \tag{36}\] where eqs. (15) have been used. Moreover, contracting equation (34) with \(h_{\theta}^{<\gamma}\,h_{\psi}^{\phi>}=h_{\theta}^{\gamma}\,h_{\psi}^{\phi}\,- \,\frac{1}{3}\,h_{\theta\psi}h^{\gamma\phi}\) it gives us \[t_{<\gamma\phi>}=2\,\mu\,h_{<\beta}^{\alpha}\,h_{\gamma>}^{\mu}\,\partial_{ \alpha}\,U_{\mu}\,,\quad\mbox{with}\quad\mu=\frac{1}{3}\,\frac{m\,\tau}{k_{B} T}\,\rho\,c^{4}\,\theta_{2,3}^{*}\,. \tag{37}\] The equations (33), (35) and (37)\({}_{1}\) are those of Relativistic Ordinary Thermodynamics. In conclusion, with this approach we have obtained the equations of Relativistic Ordinary Thermodynamics with heat conductivity, bulk viscosity and shear viscosity given respectively by (33)\({}_{2}\), (35)\({}_{2}\) and (37)\({}_{2}\). It is evident from these expressions that they do not depend on the number of moments of the extended model from which they are derivated. ## 4 The non relativistic approach In this case the balance equation found in eq. 
(19)-(20) of [19] are \[\partial_{t}\,H_{s}^{i_{1}\cdots i_{h}}+\partial_{k}\,H_{s}^{ki_{1}\cdots i_{h}}=J_{s}^{i_{1}\cdots i_{h}}\,,\quad\mbox{with}\quad s=0,\cdots,N\,,\quad\mbox{and}\quad h=0,\cdots,N-s\,. \tag{38}\] In particular, \(H_{0}=\rho\) is the mass density, \(H_{0}^{i_{1}}=\rho\,v^{i_{1}}\) where \(v^{i_{1}}\) is the velocity and \(H_{1}=2\,\rho\,\epsilon\,+\,\rho\,v^{2}\) where \(\epsilon\) is the energy density. All the variables are expressed in integral form as \[H_{s}^{i_{1}\cdots i_{h}}=m\,\int_{\mathbb{R}^{3}}\int_{0}^{+\,\infty}f\,\xi^{i_{1}}\,\cdots\,\xi^{i_{h}}\,\left(\frac{2\,{\cal I}}{m}\,+\,\xi^{2}\right)^{s}\,\varphi({\cal I})\,d\,{\cal I}\,d\,\vec{\xi}\,. \tag{39}\] The expression of \(H_{s}^{ki_{1}\cdots i_{h}}\) is the same as in equation (39) but with a further factor \(\xi^{k}\) inside the integral; the expression of \(J_{s}^{i_{1}\cdots i_{h}}\) is the same as in (39) but with the production density \(Q=-\,\frac{f-f_{E}}{\tau}\) instead of the distribution function \(f\). This distribution function has the form \[f=e^{-1-\frac{m}{k_{B}}\,\chi}\,,\quad\chi=\sum_{h=0}^{N}\sum_{s=0}^{N-h}\lambda_{i_{1}\cdots i_{h}}^{s}\xi^{i_{1}}\,\cdots\,\xi^{i_{h}}\,\left(\frac{2\,{\cal I}}{m}\,+\,\xi^{2}\right)^{s}\,. \tag{40}\] We prove now that the Chapman-Enskog method and the Maxwellian Iteration method give the same result for polyatomic gases and for an arbitrary number of moments. This was already proved in [12] but only for monoatomic gases with 14 moments. Let us start with the Chapman-Enskog Method where the Boltzmann equation and the conservation laws of mass, momentum and energy are considered: \[\partial_{t}\,f+\xi^{k}\partial_{k}\,f=-\,\frac{f-f_{E}}{\tau}\,,\,\partial_{t}\,H_{0}+\partial_{k}\,H_{0}^{k}=0\,,\,\partial_{t}\,H_{0}^{i}+\partial_{k}\,H_{0}^{ki}=0\,,\,\partial_{t}\,H_{1}+\partial_{k}\,H_{1}^{k}=0\,. \tag{41}\] After that, the following steps are followed: 1. The left hand sides of (41) are calculated at equilibrium, while the right hand sides at first order with respect to equilibrium \[\begin{split}&\partial_{t}\,f_{E}+\xi^{k}\partial_{k}\,f_{E}=\frac{f_{E}-f}{\tau}\,,\\ &\partial_{t}\,H_{0}+\partial_{k}\,H_{0}^{k}=0\,,\,\partial_{t}\,H_{0}^{i}+\partial_{k}\,H_{0E}^{ki}=0\,,\,\partial_{t}\,H_{1}+\partial_{k}\,H_{1E}^{k}=0\,.\end{split} \tag{42}\] 2. The derivatives with respect to time of the independent variables \(\rho\), \(v^{i}\), \(T\) are obtained from (42)\({}_{2-4}\) and substituted in (42)\({}_{1}\) which, after that, depends only on the independent variables and on their derivatives with respect to \(x^{k}\). 3. The new eq. (42)\({}_{1}\) is multiplied by \(m\,\xi^{i_{1}}\,\xi^{i_{2}}\,\varphi(\mathcal{I})\) and integrated with respect to \(d\,\mathcal{I}\,d\,\vec{\xi}\).
\[\partial_{t}\,H_{0E}^{i_{1}i_{2}}+\partial_{k}\,H_{0E}^{ki_{1}i_{2}}=-\,\left(H_{0}^{i_{1}i_{2}}\,-\,H_{0E}^{i_{1}i_{2}}\right)\,\frac{1}{\tau}\,.\]
The same equation is obtained with the Maxwellian Iteration, where the balance equation for \(H_{0}^{i_{1}i_{2}}\) is considered with its left hand side calculated at equilibrium and its production term at first order with respect to equilibrium. Hence, in the non relativistic case, the Chapman-Enskog Method and the Maxwellian Iteration give the same result. So it is natural to ask why in the relativistic case the two methods give different results. To understand it, let us repeat the same steps in the relativistic case. With the **Chapman-Enskog Method** the Boltzmann equation and the conservation laws of mass, momentum-energy, with the left hand sides calculated at equilibrium, are considered: \[\begin{split}& p^{\alpha}\,\partial_{\alpha}\,f_{E}=\frac{U_{\mu}}{c^{2}\tau}\left[(f_{E}-f)\;p^{\mu}-f_{E}\,q_{\gamma}\,\frac{3}{m\,c^{4}\rho\,\theta_{1,2}}\,p^{\mu}p^{\gamma}\,\left(1+\frac{\mathcal{I}}{mc^{2}}\right)\right]\,,\\ &\partial_{\alpha}V^{\alpha}=0\,,\quad\partial_{\alpha}T^{\alpha\beta}=0\,.\end{split} \tag{43}\] After that, the following steps are followed: 1. The deviation of the distribution function from its value at equilibrium is calculated in terms of \(\partial_{\alpha}\lambda_{E}\) and \(\partial_{\alpha}\lambda_{E}^{\mu}\) from (43)\({}_{1}\) and used in the definition of \(T^{\alpha\beta}\) which now becomes \[T^{\alpha\beta}-T_{E}^{\alpha\beta}=-c^{3}\tau\,\int_{\mathbb{R}^{3}}\int_{0}^{+\infty}\frac{p^{\delta}\partial_{\delta}f_{E}}{p^{\mu}U_{\mu}}\,p^{\alpha}p^{\beta}\left(1+\frac{\mathcal{I}}{mc^{2}}\right)\,\varphi(\mathcal{I})\,d\,\mathcal{I}\,d\,\vec{p}\,-\,3\,\frac{q_{\gamma}}{c^{4}\rho\,\theta_{1,2}}\,A_{E}^{\gamma\alpha\beta}\,. \tag{44}\] 2. The quantities \(\partial_{\alpha}\lambda_{E}\) and \(U^{\alpha}U^{\mu}\partial_{\alpha}\lambda_{E}^{\mu}\) are calculated from (43)\({}_{2,3}\) and substituted in (44).
From the resulting expression, the bulk viscosity, the shear viscosity and the heat conductivity can be obtained and they do not depend on the number \(N\) because they use equations which are present in every model with \(N\geq 1\). There is also the opportunity to modify the procedure a little, by taking the last term in (43)\({}_{1}\) to the left hand side before calculating the left hand sides at equilibrium; in this case it will disappear and, consequently, also the last term in (44) will no longer be present. Instead of this, with the **Maxwellian Iteration** 1. The conservation laws of mass, momentum-energy, and the balance equation for the triple tensor with the left hand sides calculated at equilibrium, are considered: \[\partial_{\alpha}V_{E}^{\alpha}=0\,,\quad\partial_{\alpha}T_{E}^{\alpha\beta}=0\,,\quad\partial_{\alpha}\,A_{E}^{\alpha\beta\gamma}=-\,\left(A^{\mu\beta\gamma}\,-\,A_{E}^{\mu\beta\gamma}\right)\,\frac{U_{\mu}}{c^{2}\tau}\,-\,U_{\mu}q_{\delta}A_{E}^{\mu\delta\beta\gamma}\,\frac{3}{\tau\,c^{6}\rho\,\theta_{1,2}}\,. \tag{45}\] Note that in the right hand side of (45)\({}_{3}\) the tensor \(A^{\mu\beta\gamma}\) is the same variable appearing in the balance equations but it is different from \(T^{\beta\gamma}\), while \(A_{E}^{\mu\delta\beta\gamma}\) is a variable different from \(T_{E}^{\beta\gamma}\) and from \(A_{E}^{\mu\beta\gamma}\). 2. Some derivatives of the independent variables are obtained from (45)\({}_{1,2}\) and substituted in (45)\({}_{3}\). 3. The new eq. (45)\({}_{3}\) is used to obtain \(\Pi\), \(q^{\alpha}\), \(t^{<\beta\gamma>}\) and, consequently, the bulk viscosity, the shear viscosity and the heat conductivity. This fact could give rise to some doubts because these coefficients should be obtained from \(T^{\beta\gamma}\,-\,T_{E}^{\beta\gamma}\), not from \(U_{\mu}\left(A^{\mu\beta\gamma}\,-\,A_{E}^{\mu\beta\gamma}\right)\). Moreover, \(U_{\mu}\left(A^{\mu\beta\gamma}\,-\,A_{E}^{\mu\beta\gamma}\right)\) depends not only on \(\Pi\), \(q^{\alpha}\), \(t^{<\beta\gamma>}\) but also on other variables whose number increases for increasing values of \(N\). So \(\Pi\), \(q^{\alpha}\), \(t^{<\beta\gamma>}\) must be isolated from the other variables and this means solving some algebraic linear systems depending on \(N\). It is therefore not surprising that the solution also depends on \(N\). Obviously, this is a consequence of the form of the production term in the right hand side of (43). It remains an open problem to find another expression which respects the requirements of zero production of mass and of momentum-energy, and whose consequent Maxwellian Iteration does not depend on \(N\). We can say that another possible expression is \[Q=-\,\frac{U_{L}^{\alpha}\,p^{\alpha}}{c^{2}\tau}\,\left(f-f_{E}\right)\,, \tag{46}\] where \(U_{L}^{\alpha}\) is the 4-velocity in the Landau-Lifshitz frame as reported in [20], [21]. But in [22] it was proved that, up to first order with respect to equilibrium, the expression (46) is equivalent to the right hand side of the present eq. (43). So nothing changes by adopting the production term (46). In any case, the two procedures have to give the same result in the non relativistic limit. In fact, from \(U^{\alpha}U_{\alpha}=c^{2}\), \(p^{\alpha}p_{\alpha}=m^{2}c^{2}\) we have the following decompositions: \[U^{\alpha}=\Gamma(v)\left(c\,,v^{i}\right)\,,\quad p^{\alpha}=m\,\Gamma\left(\frac{p}{m}\right)\left(c\,,\frac{p^{i}}{m}\right)\,,\quad\mbox{with}\quad\Gamma(v)=\left(1-\,\frac{v^{2}}{c^{2}}\right)^{-1/2}\,.
\tag{47}\] Consequently, the limit for \(c\,\rightarrow\,+\infty\) of (43) is \[m\,\left(\partial_{t}\,f+\xi^{k}\partial_{k}\,f\right)=-\frac{m}{\tau}\,\left( f-f_{E}\right)\,, \tag{48}\] as in eq. (41)\({}_{1}\). It follows that both the results of the Chapman-Enskog Method and the Maxwellian Iteration have the same non relativistic limit. In the next section we compute the non relativistic limit. ### The non relativistic limit of \(\chi\), \(\mu\) and \(\nu\) In this section we prove the convergence in the non relativistic limit of the heat conductivity \(\chi\), the shear viscosity \(\mu\), and the bulk viscosity \(\nu\). In the previous sections we have introduced the new variables \(\theta_{1,1}^{*}\), \(\theta_{1,2}^{*}\), \(\theta_{1,3}^{*}\), \(\theta_{2,3}^{*}\) which have not studied so far in literature. In order to compute the non relativistic limit of \(\chi\), \(\mu\) and \(\nu\) it is necessary to analyze the non relativistic limit of these new quantitities. Taking into account (52), we have \[\theta_{1,2}^{*}=3\,\theta_{1,1}\,,\,\theta_{1,3}^{*}=2\,\theta_{1,2}\,.\] So we need only the non relativistic limit of \(\theta_{1,1}^{*}\), \(\theta_{2,3}^{*}\) given by (54). To evaluate them, let us consider the expression of \(J_{4,-1}\), i.e., \[J_{4,-1}=\int_{0}^{+\infty}e^{-\gamma\,\cosh\,s}\frac{\sinh^{4}\,s}{\cosh\,s} \,d\,s=\int_{0}^{+\infty}e^{-x}\,e^{-\gamma}\frac{\left[\left(\frac{x}{\gamma} +1\right)^{2}-\,1\right]^{\frac{3}{2}}}{\frac{x}{\gamma}+1}\,\frac{d\,x}{ \gamma}=\] \[=\frac{e^{-\gamma}}{\gamma}\int_{0}^{+\infty}e^{-x}\,\frac{\left(\frac{x}{ \gamma}+2\right)^{\frac{3}{2}}}{\frac{x}{\gamma}+1}\,\left(\frac{x}{\gamma} \right)^{\frac{3}{2}}\,d\,x\,,\] where in the first passage we have changed the integration variable according to the law \(\cosh\,s=\frac{x}{\gamma}+1\quad\rightarrow\quad\sinh\,s\,d\,s=\frac{d\,x}{\gamma}\). Now the Mac-Laurin expansion of the function \(g(y)=\frac{(y+2)^{\frac{3}{2}}}{y+1}\) around \(y=0\) is \[g(y) =2\,\sqrt{2}\left(1\,-\,\frac{1}{4}\,y\right)\,+\,y^{2}(\cdots) \,\,\rightarrow\,\,\gamma^{\frac{5}{2}}e^{\gamma}\,J_{4,-1}=\] \[=2\,\sqrt{2}\,\int_{0}^{+\infty}e^{-x}\,\left(1\,-\frac{1}{4}\, \frac{x}{\gamma}\,+\,\frac{1}{\gamma^{2}}(\cdots)\right)\,x^{\frac{3}{2}}\,d \,x=\] \[=2\,\sqrt{2}\,\left[\Gamma\left(\frac{5}{2}\right)\,-\,\frac{1}{4 }\Gamma\left(\frac{7}{2}\right)\frac{1}{\gamma}\,+\,\frac{1}{\gamma^{2}}( \cdots)\right]=\] \[=2\,\sqrt{2\,\pi}\,\left[\frac{3}{4}\,-\,\frac{15}{32}\,\frac{1} {\gamma}\,+\,\frac{1}{\gamma^{2}}(\cdots)\right]\,.\] where in the last passage we have used the Gamma function \[\Gamma(s)=\int_{0}^{+\infty}e^{-x}\,x^{s-1}\,d\,x\] defined for \(s>0\) and satisfying the relations \(\Gamma(s+1)=s\,\Gamma(s),\,\,\,\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}\). 
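The two expansion steps above (the Maclaurin expansion of \(g(y)\) and the evaluation of the Gamma functions) can also be checked symbolically; the following is only a minimal verification sketch, assuming the Python library sympy is available:

```python
import sympy as sp

y, g = sp.symbols('y gamma', positive=True)

# Maclaurin expansion of g(y) = (y+2)^(3/2)/(y+1) around y = 0
expr = (y + 2)**sp.Rational(3, 2) / (y + 1)
print(sp.series(expr, y, 0, 2))   # 2*sqrt(2) - sqrt(2)*y/2 + O(y**2) = 2*sqrt(2)*(1 - y/4) + O(y**2)

# Gamma-function step: 2*sqrt(2)*(Gamma(5/2) - Gamma(7/2)/(4*gamma)) = 2*sqrt(2*pi)*(3/4 - 15/(32*gamma))
lhs = 2*sp.sqrt(2)*(sp.gamma(sp.Rational(5, 2)) - sp.Rational(1, 4)*sp.gamma(sp.Rational(7, 2))/g)
rhs = 2*sp.sqrt(2*sp.pi)*(sp.Rational(3, 4) - sp.Rational(15, 32)/g)
print(sp.simplify(lhs - rhs))     # 0
```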
In a similar way we can obtain the expansion of \(J_{2,1}\) or we can read it on page 21 of [5] and it is \[J_{2,1}=2\,\sqrt{2\,\pi}\,e^{-\gamma}\,\gamma^{-1/2}\,\left[\frac{1}{4\,\gamma }\,+\,\frac{15}{32}\,\frac{1}{\gamma^{2}}\,+\,\frac{105}{512}\,\frac{1}{\gamma ^{3}}\,-\,\frac{315}{32\,\cdot\,128}\,\frac{1}{\gamma^{4}}\,+\,\cdots\right]\,.\] It follows that \[\frac{\int_{0}^{+\infty}J_{4,-1}^{*}\,\varphi(\mathcal{I})\,d \,\mathcal{I}}{\int_{0}^{+\infty}J_{2,1}^{*}\,\varphi(\mathcal{I})\,d\, \mathcal{I}} =\frac{\int_{0}^{+\infty}\gamma^{\frac{5}{2}}e^{\gamma}\,J_{4,-1 }^{*}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\gamma\,\int_{0}^{+\infty}\gamma ^{\frac{3}{2}}e^{\gamma}\,J_{2,1}^{*}\,\varphi(\mathcal{I})\,d\,\mathcal{I}} =\frac{\int_{0}^{+\infty}\left(\frac{\gamma}{\gamma^{*}}\right)^{ \frac{5}{2}}e^{\gamma-\gamma^{*}}\,e^{\gamma^{*}}\,\gamma^{*}\,\gamma^{*}\, \frac{5}{2}\,J_{4,-1}^{*}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\gamma\,\int _{0}^{+\infty}\left(\frac{\gamma}{\gamma^{*}}\right)^{\frac{3}{2}}e^{\gamma- \gamma^{*}}\,e^{\gamma^{*}}\,\gamma^{*}\,\gamma^{*}\,\frac{3}{2}\,J_{2,1}^{*} \,\varphi(\mathcal{I})\,d\,\mathcal{I}}=\] \[=\frac{\int_{0}^{+\infty}\left(\frac{\gamma}{\gamma^{*}}\right)^{ \frac{5}{2}}e^{\gamma-\gamma^{*}}\,2\,\sqrt{2\,\pi}\,\left[\frac{3}{4}\,-\, \frac{15}{32}\,\frac{1}{\gamma^{*}}\,+\,\frac{1}{\gamma^{*2}}(\cdots)\right]\, \varphi(\mathcal{I})\,d\,\mathcal{I}}{\gamma\,\int_{0}^{+\infty}\left(\frac{ \gamma}{\gamma^{*}}\right)^{\frac{3}{2}}e^{\gamma-\gamma^{*}}\,2\,\sqrt{2\, \pi}\,\left[\frac{1}{4}\,+\,\frac{15}{32}\,\frac{1}{\gamma^{*}}\,+\,\frac{1} {\gamma^{*2}}(\cdots)\right]\,\varphi(\mathcal{I})\,d\,\mathcal{I}}\,,\] and, consequently, \[\gamma\,\left(\gamma\,\frac{\int_{0}^{+\infty}J_{4,-1}^{*}\, \varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}J_{2,1}^{*}\,\varphi( \mathcal{I})\,d\,\mathcal{I}}\,-\,3\right)=\] \[=\frac{\int_{0}^{+\infty}\left(\frac{\gamma}{\gamma^{*}}\right)^{ \frac{3}{2}}e^{\gamma-\gamma^{*}}\,\left[\frac{3}{4}\gamma\,\left(\frac{ \gamma}{\gamma^{*}}\,-\,1\right)\,-\,\frac{15}{32}\,\left(\frac{\gamma}{\gamma ^{*}}\right)^{2}\,-\,\frac{45}{32}\,\frac{\gamma}{\gamma^{*}}\,+\,\frac{1}{ \gamma^{*}}(\cdots)\right]\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+ \infty}\left(\frac{\gamma}{\gamma^{*}}\right)^{\frac{3}{2}}e^{\gamma-\gamma^{ *}}\,\left[\frac{1}{4}\,+\,\frac{1}{\gamma^{*}}(\cdots)\right]\,\varphi( \mathcal{I})\,d\,\mathcal{I}}\,.\] Since we have \[e^{\gamma-\gamma^{*}}=e^{-\frac{\mathcal{I}}{k_{B}T}}\,,\,\gamma\,\left(\frac{ \gamma}{\gamma^{*}}\,-\,1\right)=\frac{\gamma}{\gamma^{*}}\,(\gamma-\gamma^{* })=-\,\frac{\gamma}{\gamma^{*}}\,\frac{\mathcal{I}}{k_{B}T}\,,\] we can write \[\lim_{\gamma\,\rightarrow\,+\infty}\,\gamma\,\left(\gamma\,\frac{\int_{0}^{+ \infty}J_{4,-1}^{*}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}J_ {2,1}^{*}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}\,-\,3\right)=-\,3\,\frac{\int_ {0}^{+\infty}e^{-\frac{\mathcal{I}}{k_{B}T}}\frac{\mathcal{I}}{k_{B}T}\, \varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}e^{-\frac{\mathcal{I} }{k_{B}T}}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}-\,\frac{15}{2}\,.\] Consequently, we get \[\frac{\int_{0}^{+\infty}J_{4,-1}^{*}\,\varphi(\mathcal{I})\,d\, \mathcal{I}}{\int_{0}^{+\infty}J_{2,1}^{*}\,\varphi(\mathcal{I})\,d\, \mathcal{I}}=3\,\frac{1}{\gamma}\,+\,\left(-\,3\,\frac{\int_{0}^{+\infty}e^{- \frac{\mathcal{I}}{k_{B}T}}\frac{\mathcal{I}}{k_{B}T}\,\varphi(\mathcal{I})\, d\,\mathcal{I}}{\int_{0}^{+\infty}e^{-\frac{\mathcal{I}}{k_{B}T}}\,\varphi( 
\mathcal{I})\,d\,\mathcal{I}}-\,\frac{15}{2}\right)\frac{1}{\gamma^{2}}\,+\,\frac{1}{\gamma^{3}}(\cdots)\,,\]
\[\theta_{1,1}^{*} =\frac{1}{\gamma}\,+\,\left(-\,\frac{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\frac{\mathcal{I}}{k_{B}T}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}-\,\frac{5}{2}\right)\frac{1}{\gamma^{2}}\,+\,\frac{1}{\gamma^{3}}(\cdots)=\]
\[=\frac{p}{\rho\,c^{2}}\,+\,\left(-\,\frac{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\frac{\mathcal{I}}{k_{B}T}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}-\,\frac{5}{2}\right)\frac{1}{\gamma^{2}}\,+\,\frac{1}{\gamma^{3}}(\cdots)\,.\]
We can apply this result in (33)\({}_{2}\), jointly with \(\theta_{1,2}^{*}=3\,\theta_{1,1}\), and find that the heat conductivity has the form \[\chi=-\frac{m\,\tau c^{8}}{9\,k_{B}T^{2}}\,\frac{\rho^{2}}{p}\,\theta_{1,2}\left[3\,\theta_{1,1}\,-\,\theta_{1,2}+\,\frac{\rho\,c^{2}}{p}\,\theta_{1,2}\left(\frac{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\frac{\mathcal{I}}{k_{B}T}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}+\,\frac{5}{2}\right)\frac{1}{\gamma^{2}}-\frac{\rho\,c^{2}}{p}\,\theta_{1,2}\frac{1}{\gamma^{3}}(\cdots)\right]\,.\]
Moreover, from eq. (11) of [23], we have the exact expressions \[\theta_{1,1}=\frac{p}{\rho}\,\frac{1}{c^{2}}\,,\quad\theta_{1,2}=3\,\frac{p}{\rho}\,\frac{1}{c^{2}}\,+\,3\,\frac{p}{\rho}\,g_{1}\,\frac{1}{c^{4}}\quad\text{with}\quad g_{1}=\frac{e-\rho c^{2}\,+\,p}{\rho}\,,\] so that we find \[\chi=-\,\frac{m\,\tau c^{2}}{k_{B}T^{2}}\,\rho\,\left[-\,\frac{p}{\rho}\,g_{1}\,+\,\left(\frac{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\frac{\mathcal{I}}{k_{B}T}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}+\,\frac{5}{2}\right)\left(\frac{p}{\rho}\right)^{2}\,-\,\frac{p^{3}}{\rho^{3}c^{2}}(\cdots)\right]\,.\] By performing similar calculations, we find that \[g_{1}=\left(\frac{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\frac{\mathcal{I}}{k_{B}T}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}e^{-\,\frac{\mathcal{I}}{k_{B}T}}\,\varphi(\mathcal{I})\,d\,\mathcal{I}}+\,\frac{5}{2}\right)\,\frac{p}{\rho}+\,\frac{1}{\gamma}\,(\cdots)\,,\] so that the above expression of the heat conductivity \(\chi\) has a finite non relativistic limit. To evaluate the non relativistic limit of the shear viscosity \(\mu\) in (37)\({}_{2}\), we need the expression of \(\theta_{2,3}^{*}\) and of \(J_{6,-1}\). With similar computations we obtain that \(c^{4}\,\theta_{2,3}^{*}\) and \(\mu\) have a finite limit: \[\lim_{c\,\rightarrow\,+\infty}c^{4}\,\theta_{2,3}^{*}=3\,\left(\frac{p}{\rho}\right)^{2}\quad\text{and}\quad\lim_{c\,\rightarrow\,+\infty}\mu=\,\tau\,p\,.\] Moreover, with similar computations we obtain that \(\nu\) is convergent in the non relativistic limit.

## 5 Summary

In this article we have described how it is possible to reconstruct, as a first iteration, the laws of the Relativistic Ordinary Thermodynamics starting from the laws of the Relativistic Extended Thermodynamics of polyatomic gases by using two different iteration methods. In the literature, two procedures are used, which are the so-called Maxwellian Iteration and the Chapman-Enskog Method.
Both of these methods lead to the relativistic version of the Navier-Stokes and Fourier laws, i.e., the so-called Eckart equations, as a first iteration. It is well known that the relativistic versions of the Navier-Stokes and Fourier laws are two fundamental laws of Relativistic Ordinary Thermodynamics and in these equations the following remarkable physical quantities appear as coefficients: the heat conductivity \(\chi\), the shear viscosity \(\mu\), and the bulk viscosity \(\nu\). We have proved that the expressions of \(\chi\), \(\mu\), and \(\nu\) obtained via the Chapman-Enskog method do not depend on \(N\), whereas these expressions obtained through the Maxwellian Iteration depend on \(N\). In order to make this difference clear, we describe our main results in more detail. First of all, we observe that the expression of the shear viscosity \(\mu\) found with the Maxwellian Iteration method in the case \(N=3\) is the one given in eq. (25), while in the case \(N=2\) it is given by eqs. (28)-(29), i.e., \[\mu^{MI}=\frac{5}{3}\,\frac{m\,c^{4}\rho}{k_{B}T}\,\frac{\theta_{2,3}}{\theta_{2,4}}\,\theta_{2,3}\,,\quad\mbox{case }N=2\,.\] It is immediate to realize that these two expressions of \(\mu\) are, in general, different from each other and then we can conclude that, in the relativistic case, the Maxwellian Iteration gives a result depending, in general, on the number of moments \(N\). Analogous conclusions can be reached by observing the different expressions of \(\chi\) (compare eqs. (23) and (27)) and of \(\nu\) (compare eqs. (21) and (26)) obtained when one uses the Maxwellian Iteration method in the relativistic case for polyatomic gases with \(N=3\) and \(N=2\), respectively. Let us now look at the expressions obtained for \(\chi\), \(\nu\) and \(\mu\) by using the Chapman-Enskog Method in the relativistic case for a polyatomic gas with an arbitrary value of \(N\) (which are written below for the convenience of the reader) \[\nu = -\frac{m\,\tau}{k_{B}}\,\left[\left|\begin{array}{cc}\rho&\frac{e}{c^{2}}\\ \frac{e}{c^{2}}&\rho\,\theta_{0,2}\end{array}\right|^{-1}\,\left(\frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}^{*}\,\left|\begin{array}{cc}p&\frac{e}{c^{2}}\\ \frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}&\rho\,\theta_{0,2}\end{array}\right|+\frac{1}{6}\,\rho\,c^{2}\,\theta_{1,3}^{*}\,\left|\begin{array}{cc}\rho&p\\ \frac{e}{c^{2}}&\frac{1}{3}\,\rho\,c^{2}\,\theta_{1,2}\end{array}\right|\right)\,-\,\frac{5}{9}\,\rho\,c^{4}\,\theta_{2,3}^{*}\right]\,,\] \[\chi = -\frac{m\,\tau c^{8}}{9\,k_{B}T^{2}}\,\frac{\rho^{2}}{p}\,\theta_{1,2}\left(\theta_{1,2}^{*}\,-\,\frac{\rho\,c^{2}}{p}\,\theta_{1,2}\theta_{1,1}^{*}\right)\,,\] \[\mu = \frac{1}{3}\,\frac{m\,\tau}{k_{B}T}\,\rho\,c^{4}\,\theta_{2,3}^{*}\,,\] where \(\theta_{k,j}\) and \(\theta_{k,j}^{*}\) are introduced in eqs. (9) and (51), respectively. Since these expressions of \(\chi\), \(\nu\) and \(\mu\) do not depend on \(N\) we can conclude that the Chapman-Enskog Method furnishes results which do not depend on the number of moments \(N\) in the relativistic case. The convergence of \(\nu\), \(\mu\), \(\chi\) in the non relativistic limit has been proved in section 4.1. Moreover, in section 4 it has been proved that the Maxwellian Iteration and the Chapman-Enskog Method lead to the same results in the non relativistic case. Finally, we want to conclude this section with an important observation.
Of course, if one uses the Maxwellian Iteration Method for a polyatomic gas in the relativistic case, the results depend on the choice of the production term \(Q\) defined in equation (1). So a natural problem is the determination of a specific function \(Q\) such that the requirements of zero production of mass and of momentum-energy are satisfied and whose consequent Maxwellian Iteration does not depend on the number of moments \(N\). This remains an open problem. Another open problem is the limit of the Maxwellian Iteration for \(N\) going to infinity. Obviously, if this were possible, the result would not depend on \(N\), but we do not know whether it would be the same as the Ordinary Thermodynamics obtained with the Chapman-Enskog method, an expectation based on the unproven paradigm that "the truncated moment approach is to consider an approximation of the Boltzmann equation that corresponds to infinite moments". In past times it was thought that this was the solution of the problem of the loss of hyperbolicity of the field equations (see [24], [25]). But this was not true; in fact, Struchtrup ([26], [27]) proved that with an increasing number of moments the hyperbolicity region does not increase. In [24] a second order approach with respect to equilibrium was tested and the result showed that the hyperbolicity region increases. Finally, in [7], in section 2, the following facts have been proved: 1. For every truncated system there exists full hyperbolicity if only the Lagrange multipliers are taken as independent variables; 2. The hyperbolicity region appears only as a consequence of the approximations (usually at first order with respect to equilibrium) involved in the transformation from the Lagrange multipliers to physical variables; 3. The full hyperbolicity is recovered if the above transformation is performed up to an arbitrary order with respect to equilibrium. So the full hyperbolicity is not achieved by increasing the number of moments (or taking its limit for \(N\) going to infinity), but by eliminating the approximations in the passage from the Lagrange multipliers to physical variables. Based on this fact, we are not able to anticipate how the model with infinite moments can influence the present problem of the transition to Ordinary Thermodynamics. It needs to be tested.

## Acknowledgments

The authors would like to thank the two anonymous referees and C. van der Mee whose suggestions, comments and remarks helped us to improve the quality of the paper. The authors have been partially supported by INdAM-GNFM. Moreover, one of the authors (FD) has been partially supported by Ministero dell'Università e della Ricerca of Italy (MIUR) under the PRIN project "2022TEB52W - The charm of integrability: from nonlinear waves to random matrices".

## Appendix A Some integrals necessary for recovering OT with the Chapman-Enskog Method

We define \[A^{*\alpha_{1}\cdots\alpha_{n+1}}=\frac{c^{3}}{m^{n-2}}\int_{\Re^{3}}\int_{0}^{+\,\infty}\frac{f_{E}}{U_{\mu}p^{\mu}}\,p^{\alpha_{1}}\cdots p^{\alpha_{n+1}}\left(1\,+\,\frac{\mathcal{I}}{m\,c^{2}}\right)^{n-1}\,\varphi(\mathcal{I})\,d\,\mathcal{I}\,d\,\vec{P}\,. \tag{49}\] We see that \(A_{E}^{*\alpha_{1}\cdots\alpha_{n+1}}\) is like \(A_{E}^{\alpha_{1}\cdots\alpha_{n+1}}\) but with the integrand now divided by \(\cosh\,s\,\left(1\,+\,\frac{\mathcal{I}}{m\,c^{2}}\right)\).
Consequently, we find the expressions corresponding to (8), (9), i.e., \[A_{E}^{*\alpha_{1}\cdots\alpha_{n+1}}=\sum_{k=0}^{\left[\frac{n+1}{2}\right]} \rho c^{2k}\theta_{k,n}^{*}\,h^{(\alpha_{1}\alpha_{2}}\cdots k^{\alpha_{2k-1} \alpha_{2k}}U^{\alpha_{2k+1}}\cdots U^{\alpha_{n+1})}\,. \tag{50}\] where the scalar coefficients \(\theta_{k,n}^{*}\) are \[\theta_{k,n}^{*}=\frac{1}{2k+1}\,{n+1\choose 2k}\,\frac{\int_{0}^{+\infty}J_{2k +2,n-2k}^{*}\,\left(1+\frac{\mathcal{I}}{mc^{2}}\right)^{n-1}\,\phi(\mathcal{ I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}J_{2,1}^{*}\,\phi(\mathcal{I})\,d\, \mathcal{I}}\,, \tag{51}\] where \(J_{m,n}(\gamma)=\int_{0}^{\infty}e^{-\gamma\cosh\,s}\cosh^{n}s\sinh^{m}s\,ds\), \(\gamma=\frac{mc^{2}}{k_{B}T}\), \(J_{m,n}^{*}=J_{m,n}\left[\gamma\left(1+\frac{\mathcal{I}}{mc^{2}}\right)\right]\). By comparing this last equation with (9), we find that \[\theta_{k,n}^{*}=\frac{n+1}{n+1-2k}\,\theta_{k,n-1}\,,\quad\text{for every $k$ such that}\quad n+1>2k\,. \tag{52}\] From this last equation it follows that only the expressions for \(n+1=2k\) are present (which means that only the case \(n\) odd has to be considered), and for these cases, eq. (51) gives \[\theta^{*}_{k,2k-1}=\frac{1}{2k+1}\,\frac{\int_{0}^{+\infty}J^{*}_{2k+2,-1}\, \left(1+\frac{\mathcal{I}}{mc^{2}}\right)^{2k-2}\,\phi(\mathcal{I})\,d\, \mathcal{I}}{\int_{0}^{+\infty}J^{*}_{2,1}\,\phi(\mathcal{I})\,d\,\mathcal{I}}\,. \tag{53}\] The expressions are necessary with \(k=1\) and with \(k=2\), i.e., \[\theta^{*}_{1,1}=\frac{1}{3}\,\frac{\int_{0}^{+\infty}J^{*}_{4,-1}\,\phi( \mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}J^{*}_{2,1}\,\phi(\mathcal{I}) \,d\,\mathcal{I}}\,,\quad\theta^{*}_{2,3}=\frac{1}{5}\,\frac{\int_{0}^{+ \infty}J^{*}_{6,-1}\,\left(1+\frac{\mathcal{I}}{mc^{2}}\right)^{2}\,\phi( \mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}J^{*}_{2,1}\,\phi(\mathcal{I}) \,d\,\mathcal{I}}\,. \tag{54}\] The derivative of (54)\({}_{1}\) with respect to \(\gamma\) gives \[\frac{\partial\,\theta^{*}_{1,1}}{\partial\,\gamma}=\frac{-1}{3}\,\frac{\int_ {0}^{+\infty}J^{*}_{4,0}\,\left(1+\frac{\mathcal{I}}{mc^{2}}\right)\,\phi( \mathcal{I})\,d\,\mathcal{I}}{\int_{0}^{+\infty}J^{*}_{2,1}\,\phi( \mathcal{I})\,d\,\mathcal{I}}\,+\,\theta^{*}_{1,1}\,\frac{e}{\rho\,c^{2}}=- \,\frac{1}{\gamma}\,+\,\theta^{*}_{1,1}\,\frac{e}{\rho\,c^{2}}\,.\] From this result it follows \[\frac{e}{\rho\,c^{2}}=\frac{1}{\gamma\,\theta^{*}_{1,1}}\,+\,\frac{\partial}{ \partial\,\gamma}\,\ln\theta^{*}_{1,1}\,. \tag{55}\] Since in literature everything has been expressed in terms of \(\frac{e}{\rho\,c^{2}}\) and its derivatives, we see that now everything is expressed in terms of \(\theta^{*}_{1,1}\), its derivative and of \(\theta^{*}_{2,3}\).
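The passage to \(-\,\frac{1}{\gamma}\) in the derivative of (54)\({}_{1}\) above follows from the identity \(\gamma\,J_{4,0}(\gamma)=3\,J_{2,1}(\gamma)\), which is obtained by an integration by parts (with \(u=\sinh^{3}s\) and \(d\,v=e^{-\gamma\,\cosh s}\,\sinh s\,d\,s\), the boundary term vanishing):
\[J_{4,0}(\gamma)=\int_{0}^{+\infty}e^{-\gamma\,\cosh s}\,\sinh^{4}s\,d\,s=\frac{3}{\gamma}\,\int_{0}^{+\infty}e^{-\gamma\,\cosh s}\,\sinh^{2}s\,\cosh s\,d\,s=\frac{3}{\gamma}\,J_{2,1}(\gamma)\,.\]
Applying it with \(\gamma\) replaced by \(\gamma\left(1+\frac{\mathcal{I}}{mc^{2}}\right)\) gives \(J_{4,0}^{*}\,\left(1+\frac{\mathcal{I}}{mc^{2}}\right)=\frac{3}{\gamma}\,J_{2,1}^{*}\), so that the ratio of integrals in the first term indeed reduces to \(\frac{3}{\gamma}\).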
2305.05784
Comprehensive Dataset of Synthetic and Manipulated Overhead Imagery for Development and Evaluation of Forensic Tools
We present a first of its kind dataset of overhead imagery for development and evaluation of forensic tools. Our dataset consists of real, fully synthetic and partially manipulated overhead imagery generated from a custom diffusion model trained on two sets of different zoom levels and on two sources of pristine data. We developed our model to support controllable generation of multiple manipulation categories including fully synthetic imagery conditioned on real and generated base maps, and location. We also support partial in-painted imagery with same conditioning options and with several types of manipulated content. The data consist of raw images and ground truth annotations describing the manipulation parameters. We also report benchmark performance on several tasks supported by our dataset including detection of fully and partially manipulated imagery, manipulation localization and classification.
Brandon B. May, Kirill Trapeznikov, Shengbang Fang, Matthew C. Stamm
2023-05-09T22:09:35Z
http://arxiv.org/abs/2305.05784v1
Comprehensive Dataset of Synthetic and Manipulated Overhead Imagery for Development and Evaluation of Forensic Tools ###### Abstract. We present a first of its kind dataset of overhead imagery for development and evaluation of forensic tools. Our dataset consists of real, fully synthetic and partially manipulated overhead imagery generated from a custom diffusion model trained on two sets of different zoom levels and on two sources of pristine data. We developed our model to support controllable generation of multiple manipulation categories including fully synthetic imagery conditioned on real and generated base maps, and location. We also support partial inpainted imagery with same conditioning options and with several types of manipulated content. The data consist of raw images and ground truth annotations describing the manipulation parameters. We also report benchmark performance on several tasks supported by our dataset including detection of fully and partially manipulated imagery, manipulation localization and classification. ## 1. Introduction In the last several years, synthetic images have emerged as an important forgery threat. Various generative models, such as the StyleGAN-based family (Brandon et al., 2016), have made it possible to create realistic-looking images in various domains. As these models become more accessible to the general public, the potential for malicious uses of synthetic images increases. There have already been instances of StyleGAN-generated faces being used in influence campaigns (Brandon et al., 2016) and marketing scams (Brandon et al., 2016). The emergence of diffusion models (Brandon et al., 2016) poses a particularly important threat. With new conditional diffusion techniques (such as GLIDE (Brandon et al., 2016), Imagen (Brandon et al., 2016), DALL-E 2 (Dall et al., 2016) and Latent Diffusion (Dall et al., 2016)), users can control and modify specific aspects of the generation process through text, image or other inputs. With this capability, it is possible to selectively modify or synthesize particular parts of a real image. This enables the creation of partially synthetic images1, i.e. images with both real and synthetic content, that are highly visually realistic. Existing research has focused largely on detecting GAN-generated content of mainly face images (Brandon et al., 2016), and nearly exclusively on discriminating between real and fully synthetic images, leaving an important gap in research. Footnote 1: In this paper, we’ll use partially synthetic and partially manipulated interchangeably. Therefore, there is a need for datasets to advance research towards detecting synthetic images created using diffusion models. The majority of existing synthetic datasets are composed of GAN-generated images (Dall et al., 2016). However, because diffusion models likely leave behind different traces than GANs, new datasets are needed to train and benchmark new model detectors. Additionally, existing datasets focus only on discriminating between fully real and fully synthetic images. They do not consider partially synthetic images, such as those enabled by guided diffusion (Brandon et al., 2016). New datasets are needed to develop forensic systems capable of discriminating between these three types of images and localizing synthetic content. A parallel development has been the increasingly, commonplace use of satellite imagery by major news outlets as a way to provide context and supporting evidence for their reporting. 
This is partly due to wider access to satellite imagery from commercial companies. For instance in Fig. 1 (left), the front page of the New York Times (from 16 March 2022) is using commercial satellite imagery to show a before and an after comparison of widespread damage from Russian strikes to civilian building in Mariupol, Ukraine. In addition to established news organizations, numerous social media accounts provide open source intelligence via analysis of publicly available overhead and ground level imagery. These accounts claim to analyze activities such as military buildup, construction, and the aftermath of natural disasters in locations all around the globe. (For examples see multiple investigations by bellingcat.com (Miller et al., 2016)) We hypothesize that it's only a matter of time before synthetic forgeries of satellite imagery will be exploited for nefarious purposes such as disinformation campaigns (as we demonstrate in Fig. 1 (Right)). So there is a clear need to develop state of the art forensics tools that can handle overhead imagery that has been manipulated with modern computer vision generative techniques. Specifically, the forensics need to be able to detect, localize and characterize manipulations in overhead imagery. While there has been active research in the field of digital and machine generated forensics, it has mostly focused on natural image domains, primarily due to the lack of high quality comprehensive datasets of overhead synthetically manipulated imagery. In this paper, **(I)** we propose a new dataset of synthetic overhead imagery for forensic research created using diffusion models. Our dataset contains both real, fully and partially synthetic images along with localization masks (see Fig. 2). The full dataset can be accessed at **[https://stresearch.github.io/synthetic-overhead-dataset/](https://stresearch.github.io/synthetic-overhead-dataset/). (II)** The generation was done by our custom adaptation of guided diffusion (Bellein et al., 2016) trained on a large scale multi-source dataset of satellite imagery. Synthetic content was created using multiple strategies: unconditional, as well conditional generation from both real and _synthetically generated_ basemaps. Furthermore, we implemented models capable of mimicking natural disasters and performing partial image inpainting, which we used to create visually realistic, semantically meaningful forgeries (as we demonstrated with fake "aftermath" images in Fig. 1(right)). **(III)** Through a set of benchmarking experiments using several existing detection and localization algorithms, we show that important research is needed to improve performance on this dataset. ## 2. Related Work To the best of our knowledge, we present the first application of diffusion models for generating overhead imagery. Diffusion models for image generation were initially popularized in a seminal paper by (Miller et al., 2016). Since then there have been many extensions focusing on unconditional, conditional and text-driven image generation and editing (Bellein et al., 2016; Bellein et al., 2016; Bellein et al., 2016; Bellein et al., 2016; Bellein et al., 2016). However, the majority focuses on natural image domains such as human faces, animals and landscapes. Specifically for overhead (satellite or aerial) imagery generation, existing techniques are based on generative adversarial networks (GANs) (Miller et al., 2016). 
There are several unconditional approaches based on StyleGAN-2 architectures such as (Dosov et al., 2016; Dosov et al., 2016). There are also several conditional generative approaches based on image-to-image translation that can condition on additional information such as basemaps, elevation maps, etc. (Dosov et al., 2016; Dosov et al., 2016; Dosov et al., 2016). Our generative approach includes multiple additional conditioning options such as class and text guidance and it is based on diffusion models instead of GANs. Other researchers have explored generative techniques for modalities outside of the visible spectrum such as SAR, infrared and hyper-spectral (Miller et al., 2016). In this context, these approaches are designed to perform image translation between the modalities, such as translating from visible to infrared. For a comprehensive review of remote sensing generative techniques (including conditional GAN-based visible light approaches) see (Krizhevsky et al., 2015). To the best of our knowledge, there are no publicly available datasets that contain synthetic satellite imagery generated conditioned on multiple variables such as real and synthetic basemaps, location and source providers. There are also no datasets that contain partially manipulated or in-painted imagery with a varying size of inpainted region and generated with the same controls as above. (Dosov et al., 2016) contains a dataset of real and StyleGAN2-generated synthetic images (trained from a single provider). Prior work in forensic analysis for synthetic satellite imagery primarily focused on GANs (Krizhevsky et al., 2015). There has also been recent work (Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015) exploring the transferability of GAN-trained synthetic image detectors to diffusion models for natural image domains. We are not aware of any work that benchmarks splicing or localization algorithms on diffusion model outputs.

## 3. Dataset Description

Our main contribution is a dataset for forensic research in synthetic overhead imagery. It consists of three major components: pristine, fully synthetic, and partially manipulated imagery (Fig. 2). The latter we found to be especially challenging for existing forensic methods. All imagery is either sourced directly from MapBox and Google Maps satellite imagery or generated using our custom guided diffusion models trained with this data.

Figure 2. Our dataset consists of three types of images: (a) pristine, (b) fully synthetic, and (c) partially manipulated, with ground truth manipulation masks and other parameters such as basemap type (none, truth, generated), city, image source (mapbox, google, zoom level), manipulation size and class (buildings-roads or greenspace-water). Rows correspond to generation conditioned on different cities.

Fully synthetic imagery for a given model and city combination was either generated unconditionally or conditioned on the corresponding truth or a synthetically generated basemap. Partially manipulated imagery for a given model and city combination was inpainted conditioned on an edited truth basemap. The editing was done by inpainting masked areas with buildings-roads or greenspace-water map layers. The breakdown of generation parameters is in Table 1. To prevent potential misuse, _we are not releasing the full set of pristine imagery or the generative models themselves_. The dataset is partitioned into two splits: train and test, for development and evaluation of forensic algorithms, respectively.
The set of cities covered in the splits are disjoint. For each of these splits, we selected at random a reference image to either be preserved as pristine or to be manipulated, with a further 50% probability of being fully or partially synthetic. To test how well the performance of the forensic tools would generalize to unseen data sources and ground sampling distances, we generated all training data using the MB16 model and only included the G17 and MB18 models when generating the test data. ### Sourcing Pristine Imagery To train our generative models (see Sec. 4), we sourced a large collection of satellite image and basemap pairs. For each city and each image provider, we generated a collection of geospatial coordinates by uniformly sampling within the city bounds. These were used as the query locations when requesting image data. Specifically, we collected data from MapBox at zoom levels 16 (MB16) and 18 (MB18), and from Google Maps at zoom level 17 (G17 - roughly equivalent to MB16) from 232 major non-US cities around the globe. The higher resolution MB18 imagery was only available and collected from 9 cities. To simplify the basemaps for use as conditioning in our model, custom styles were created for each provider that removed extraneous elements such as borders, labels, and terrain shading leaving only color-coded representations of elements such as roads, highways, buildings, greenspace, water, and airports. Satellite and basemap image pairs have resolution of 512x512 centered on each coordinate for a given zoom level (16, 18 for MapBox, 17 for Google Maps). In total, we collected roughly 376k basemap-image pairs from MapBox and 77k basemap-image pairs from Google Maps. Fig. 3 shows approximately the same location for all three data sources. ### Imagery with Synthetic Content #### 3.2.1. Fully Synthetic Each fully synthetic image was generated using our basemap to satellite image diffusion model conditioned on the city and the associated randomly selected reference basemap. There are three basemap conditioning options; each occurring with equal probability in the dataset. **(i) Truth:** the ground truth basemap was used as conditioning. **(ii) Generated:** the basemap was generated by using our basemap generation diffusion model, conditioned on the true city. **(iii) None:** the basemap was generated by sampling from random Gaussian noise (See Fig. 4). #### 3.2.2. Partially Synthetic One of the unique features of our dataset is to enable forensic research into detection and localization when images contain both real and synthetic content. This partially synthetic imagery were also generated using our basemap to satellite diffusion model, where masks were used to define the manipulated regions. These regions were then inpainted using a method similar to (Wang et al., 2018), conditioned on one of two manipulation classes (buildings-roads or greenspace-water) and the city associated with the randomly selected reference image. 
The masks were randomly generated using either Bezier shapes (Zhu et al., 2018) or GrabCut segmentations (Wang et al., 2018) across a range of sizes covering up to 20% of the area of the image. In the case of the MB18 model, we used only GrabCut masks initialized with the building footprints from the ground truth basemap. Given the mask and manipulation class, the inpainting proceeded in a two-step approach: first, inpainting of the ground _truth basemap_ to synthesize structure in the masked region according to the selected manipulation class, followed by inpainting of the satellite image conditioned on the manipulated basemap. Examples across models, manipulation classes, and mask types are in Fig. 5.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & **Pristine** & **Synthetic** & **Manipulated** \\ \hline \# **Images** & 4,964/1,511 & 2,496/886 & 2,540/753 \\ \hline **Sources** & \multicolumn{3}{c}{MB16/MB16, G17, MB18; 152/80 cities} \\ \hline **Basemaps** & \multicolumn{3}{c}{None, Truth, Generated, Inpainted} \\ \hline **Manipulation** & \multicolumn{3}{c}{Buildings-Roads, Greenspace-Water} \\ \hline **Masks** & \multicolumn{3}{c}{Bezier, GrabCut} \\ \hline \hline \end{tabular} \end{table} Table 1. Dataset breakdown by each type of image (train/test)

Figure 4. Fully synthetic image generation examples for the city of Brussels. From left to right, conditioned on: truth basemap, generated basemap, no basemap.

Figure 3. All three varieties of data sources for satellite imagery and basemaps over approximately the same region in Brussels. From left to right: MapBox zoom level 16, Google Maps zoom level 17, MapBox zoom level 18.

## 4. Guided Diffusion Approach

Our generative model used to create the dataset is based on an existing guided diffusion architecture (Dosov et al., 2017). This presents additional challenges to forensics over GAN-based methods due to the general lack of diffusion-trained detection models, the better generation controls, and the ability to produce partially manipulated content. Additionally, we add support for the following control mechanisms: (i) location conditioning using Classifier-Free Guidance (CFG), (ii) basemap image-based conditioning, (iii) partial inpainting with basemap conditioning, (iv) compound editing, (v) CLIP (Srivastava et al., 2017) guidance, and (vi) style transfer. Controls (i)-(iii) for generation and manipulation were used in constructing the dataset, while (iv)-(vi) were not explicitly used but can still be potentially exploited for malicious uses and thus are presented as a proof of concept. For details on diffusion models and conditioning techniques, consult (Dosov et al., 2017; Chen et al., 2018).

### Controls and Guidance

#### 4.1.1. Conditioning on Location

To control the style of the generated image such that it matches a particular city in our dataset, we apply class conditioning during training and inference. This is achieved by mapping the names of all cities to class IDs, which are then used to construct an index of learned vector embeddings that get appended to the timestep embeddings at each residual block in the UNet architecture of the model. Using the embedding corresponding to a specific city at inference time, the model generates images in the desired style. See Fig. 2(b) for examples of fully synthetic images generated in the style of four different cities.

#### 4.1.2. Conditioning on Basemap

Support for conditioning on a reference basemap was added by expanding the number of input channels from three to six to support an additional RGB image as input to the network.
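A minimal sketch of this channel expansion is shown below. The real model is a guided-diffusion UNet, so only the widened first convolution is illustrated; the layer names and sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BasemapConditionedStem(nn.Module):
    """Six-channel input stem: the noisy satellite image and the RGB basemap
    are concatenated along the channel axis before the first convolution."""
    def __init__(self, model_channels=128):
        super().__init__()
        self.in_conv = nn.Conv2d(6, model_channels, kernel_size=3, padding=1)

    def forward(self, noisy_image, basemap=None):
        if basemap is None:                            # no basemap conditioning
            basemap = torch.zeros_like(noisy_image)
        x = torch.cat([noisy_image, basemap], dim=1)   # (B, 6, H, W)
        return self.in_conv(x)
```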
During training, the basemap associated with the overhead image is passed along as the conditioning image. At inference time, if a basemap is supplied, the generated image will have structures very closely corresponding to those specified by the basemap. Note that this technique can be used for training any kind of image-to-image translation model. The left and center images in Fig. 4 show examples of fully synthetic outputs generated conditioned on an input basemap.

#### 4.1.3. Partial Inpainting Support

We implemented an inpainting technique closely following the approach in (Zhu et al., 2017) that was used to perform partial manipulation of the images. Given a mask specifying the pixels to be generated (and thus also the pixels that are "known"), at each iteration of the denoising process we replace the region of "known" pixels in the generated image with the ground truth data according to the mask, noised using the learned denoising function up to the appropriate timestep. This effectively conditions each subsequent denoising step on the ground truth surrounding the inpainted region, ensuring a coherent result. Figs. 2(c) and 5 both show examples of partial manipulations using this inpainting technique. The white regions of the masks represent the generated pixels and the black regions represent the "known" pixels in the corresponding basemap and overhead images.

#### 4.1.4. Compound Editing

Partial manipulations can also be used to perform "compound" editing of a reference image by iteratively manipulating its basemap, manually or otherwise. The output of each previous edit is used as the input to the next stage until the desired manipulations are complete. For example, Fig. 6 demonstrates the results of manually editing the basemap several times to replace highways with forest and add in new roads and buildings. At each stage, the masks are computed automatically based on the edited regions of the basemap, and the corresponding regions of the image are inpainted conditioned on the new basemap.

#### 4.1.5. CLIP Guidance

Trained diffusion models can generate content conditioned on external information provided by other models such as image classifiers. During inference, at each denoising step, the gradient of the classifier score w.r.t. the generated image modifies the output by a scalar guidance factor (Dosov et al., 2017). We use the CLIP multi-modal model (fine-tuned on satellite imagery) to guide the content of images using text prompts without having to explicitly train the diffusion model with text-image pairs.

Figure 5. Partially manipulated generation example using MapBox zoom 18 high resolution imagery for the city of Brussels. Left to right: Ground truth, inpainted with greenspace-water, inpainted with buildings-roads, GrabCut mask.

Figure 6. Compound editing by manual basemap manipulation. Left to right: ground truth; highway and business park deleted and replaced with forest; road on the right extended and populated with buildings; sports field erased and a connecting road added through the neighborhood.

#### 4.1.6. Style Transfer of Natural Disasters

The image-to-image translation method of §4.1.2 can be used to train a "style-transfer" model given pairs of images from the source and target distributions. The model learns to generate an image from the target distribution conditioned on a reference image from the source. We train a natural disaster generator using the xView2 dataset (Wang et al., 2017) conditioned on the disaster class and the source imagery.
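As a concrete illustration of the §4.1.3 inpainting loop used for the partial manipulations, a minimal sketch follows. The `q_sample` (forward noising to timestep t) and `p_sample` (one reverse denoising step) helpers stand in for routines of a guided-diffusion implementation and are assumptions, not the exact interfaces of our code.

```python
import torch

def inpaint(diffusion, model, known_image, mask, cond, num_steps=1000):
    """Inpaint the masked region; `mask` is 1 where pixels are generated
    and 0 where pixels are kept from `known_image`."""
    x = torch.randn_like(known_image)
    for t in reversed(range(num_steps)):
        t_batch = torch.full((known_image.shape[0],), t,
                             device=known_image.device, dtype=torch.long)
        # Noise the reference image up to the current timestep ...
        known_t = diffusion.q_sample(known_image, t_batch)
        # ... and paste it into the "known" region, so the next denoising step
        # is conditioned on coherent context around the inpainted area.
        x = mask * x + (1.0 - mask) * known_t
        x = diffusion.p_sample(model, x, t_batch, cond=cond)
    # Restore the untouched pixels exactly.
    return mask * x + (1.0 - mask) * known_image
```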
The xView2 dataset contains before-and-after images from around the world of a variety of natural disasters, including hurricanes, wildfires, tsunamis, and volcanoes. Fig. 7 shows examples of how we can construct realistic disaster imagery (bottom row) of particular locations using the original images (top row), natural disaster class conditioning, and prompt conditioning via CLIP guidance.

### Implementation and Training

Each diffusion model was trained from scratch across 4 NVIDIA RTX 2080 GPUs for about 100k iterations with a batch size of 16, taking approximately one week. Compared to the default model parameters used by guided diffusion, we increased the output resolution from 256x256 to 512x512 and decreased the number of channels from 256 to 128 to fit within the memory constraints. Otherwise, we used the same default parameters corresponding to the 256x256 (unconditional) pretrained ImageNet model (Deng et al., 2017). Simple random vertical and horizontal flips were used for our data augmentations. During inference, denoising was performed for approximately 1000 time steps. The final output was then color matched to the reference image to better preserve realistic dynamic range.

## 5. Baseline Forensic Performance

We evaluate several existing forensic algorithms on the following tasks: (1) detection of fully synthetic imagery, (2) detection of partially synthetic imagery, (3) detection of any synthetic content (full or partial) in an image, and (4) localization of the manipulated region. We employ the following existing forensic models as baselines. **Synthetic Image Detection:** the GAN detector from (Wang et al., 2017), trained to detect images generated by a CNN-based ProGAN model and shown to generalize well to other generators. **Partial or Splicing Detection and Localization:** we evaluate three well-established models, Forensic Similarity Graphs (FSG) (Wang et al., 2017), EXIFNet (Wang et al., 2017), and Noiseprint (Noiseprint, 2017) (for localization only). _We used the models as is_, i.e., trained on their respective datasets, and did not further fine-tune them on our proposed dataset.

### Detection of Synthetic Images

Classification of fully synthetic vs. pristine images is a well-established task in forensic research, and we evaluate the GAN detector described previously. However, there are no existing classifiers for discriminating between pristine, fully synthetic, and partially synthetic images. For real-world forensic applications, differentiating between fully synthetic and partially manipulated (or spliced) images is critical. In order to localize and characterize manipulations, one must first determine whether only part of the image has been manipulated, and only then apply localization algorithms. Furthermore (as we discover), existing synthetic image detectors are not always well suited to classify images that are not fully synthetic. We map the problem of reasoning over varying amounts of synthetic content to four binary tasks: (i) pristine vs. fully synthetic, (ii) pristine vs. partially synthetic, (iii) pristine vs. any synthetic, and (iv) partially vs. fully synthetic. The results are presented in Table 2. For the GAN-synthetic detector, we assess its ability to discriminate tasks (i)-(iii). We report the AUC of the ROC for detection, as well as the average accuracy before and after threshold calibration on our proposed dataset.
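The three numbers reported per cell of Table 2 can be computed with a few lines of scikit-learn. The sketch below assumes detector scores in [0, 1] and interprets the average accuracy as balanced (per-class) accuracy; it is an illustrative reading of the metric, not the evaluation code itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

def benchmark_binary_task(scores, labels, original_threshold=0.5):
    """Return (AUC, accuracy at calibrated threshold, accuracy at original threshold)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    auc = roc_auc_score(labels, scores)
    acc_original = balanced_accuracy_score(labels, scores >= original_threshold)
    # Calibration: pick the threshold that maximizes the average (per-class) accuracy.
    acc_calibrated = max(
        balanced_accuracy_score(labels, scores >= t) for t in np.unique(scores)
    )
    return auc, acc_calibrated, acc_original
```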
The accuracy before calibration uses the threshold proposed in the original work, while the accuracy after calibration uses the threshold that maximizes the average accuracy. The average accuracy before calibration is 0.5, as the original threshold causes all algorithms to always predict the image to be non-synthetic or non-spliced. The GAN detector performs well in discriminating between real and fully synthetic images, with an AUC of 0.82. However, it achieves an AUC of only 0.57 in detecting partially synthetic images. This indicates that performance drops sharply when only part of the image is synthetic. A similar drop in non-calibrated performance has been observed in prior work (Kirshman et al., 2019) when testing GAN-trained detectors on diffusion model outputs.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Benchmark** & **GAN (Wang et al., 2017)** & \multicolumn{2}{c}{**Splicing Detector**} \\ **Task** & **Detector** & FSG (Wang et al., 2017) & EXIF (Wang et al., 2017) \\ \hline Pristine vs. Fully & 0.82/0.75/0.50 & \(-\) & \(-\) \\ Pristine vs. Partially & 0.57/0.56/0.50 & 0.62/0.59/0.50 & 0.54/0.54/0.50 \\ Pristine vs. Any & 0.69/0.64/0.50 & \(-\) & \(-\) \\ Partially vs. Fully & \(-\) & 0.60/0.58/0.50 & 0.59/0.57/0.50 \\ \hline \hline \end{tabular} \end{table} Table 2. Results on four binary tasks. Metrics are AUC / Accuracy w. threshold calibration / Accuracy w. original threshold

Figure 7. Generating natural disasters using style transfer. Top row: original images. Bottom row: synthetic disaster images. Left to right: Colosseum (Rome) flooded with "Hurricane Harvey" style; St. Sophia's Cathedral (Kiev) and Nuclear Sarcophagus (Chernobyl) with "SoCal Fire" style and text prompt "destroyed buildings with smoke rising".

For partial manipulation (or splicing) detection, we evaluate the performance of FSG and EXIFNet in discriminating pristine vs. partially synthetic, and partially synthetic vs. fully synthetic. Both detectors struggle in these tasks. This aligns with our expectation, as they are not trained on our proposed synthetic overhead images, which may have different forensic traces from the camera-model images that they were trained on.

### Pristine vs. Fully vs. Partially Synthetic

To construct a 3-way classification that can discriminate pristine, partially, and fully synthetic images, we combine the output decisions of the GAN detector with a splicing detector. We choose two decision strategies. In the first one, the GAN detector comes first. If it classifies an input as pristine, then the image is predicted to be pristine. Otherwise, we use the splicing detector to classify it as partially (spliced) or fully (non-spliced) synthetic. In the second strategy, the splicing detector comes first. If the splicing detector finds a splice, then the image is classified as partially synthetic. Otherwise, we use the GAN detector to decide whether it is fully synthetic or pristine. For evaluation, we test the performance of the combinations of (FSG, GAN Detector) and (EXIFNet, GAN Detector), and report the best average accuracy over the three classes. Note that the average accuracy would be 0.33 for a random guess. The two combinations of the first hierarchical system have lower average accuracies of 0.36 (with FSG) and 0.43 (with EXIFNet), while the two combinations of the second hierarchical system perform similarly to each other, with an average accuracy of 0.51, which is still not useful for real-world applications.
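The two decision strategies above amount to a simple hierarchy of binary detectors. A minimal sketch, assuming each detector has already been thresholded to return a boolean:

```python
def three_way_gan_first(image, is_synthetic, is_spliced):
    """Strategy 1: the GAN detector decides pristine vs. synthetic; the splicing
    detector then separates partially from fully synthetic."""
    if not is_synthetic(image):
        return "pristine"
    return "partially synthetic" if is_spliced(image) else "fully synthetic"

def three_way_splice_first(image, is_synthetic, is_spliced):
    """Strategy 2: a detected splice implies partial manipulation; otherwise the
    GAN detector decides between fully synthetic and pristine."""
    if is_spliced(image):
        return "partially synthetic"
    return "fully synthetic" if is_synthetic(image) else "pristine"
```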
### Localizing Manipulated Regions

If an image is deemed to be partially manipulated, then the next step in the processing chain is to localize the manipulated area. We evaluate the performance of existing forgery localization algorithms, including FSG, EXIF-Net, and Noiseprint, on the partially synthetic images. The evaluations are divided into 4 categories based on the different sizes of synthetic content in the images, and the Matthews Correlation Coefficient (MCC) score is calculated using the predicted and ground-truth masks. The results in Table 3 show that FSG has the best performance on large synthetic content, with an MCC score of 0.350. Although FSG and EXIF-Net achieved MCC scores of 0.82 and 0.78 on the Columbia splicing dataset (Sararar et al., 2019), their performance drops significantly on our dataset. Note that we observed similar performance drops between calibrated and uncalibrated scores in the detection tasks. Further investigation and research are necessary to address this issue.

## 6. Conclusion

In this paper, we highlight the need for further research into the detection and localization of synthetic content in overhead imagery generated by diffusion models. Specifically, we release a first-of-its-kind dataset of real, fully synthetic, and partially synthetic imagery. The data is generated with a custom implementation of a guided diffusion model with support for multiple manipulation methods. Lastly, we benchmark several baseline forensic models to illustrate the importance of continuing forensic research in the space of diffusion models and partially synthetic content localization.

## 7. Acknowledgements

This material is based on research sponsored by DARPA and AFRL (HR0011-20-C-0126, HR0011-20-C-0129).
2310.16228
On the Foundations of Shortcut Learning
Deep-learning models can extract a rich assortment of features from data. Which features a model uses depends not only on \emph{predictivity} -- how reliably a feature indicates training-set labels -- but also on \emph{availability} -- how easily the feature can be extracted from inputs. The literature on shortcut learning has noted examples in which models privilege one feature over another, for example texture over shape and image backgrounds over foreground objects. Here, we test hypotheses about which input properties are more available to a model, and systematically study how predictivity and availability interact to shape models' feature use. We construct a minimal, explicit generative framework for synthesizing classification datasets with two latent features that vary in predictivity and in factors we hypothesize to relate to availability, and we quantify a model's shortcut bias -- its over-reliance on the shortcut (more available, less predictive) feature at the expense of the core (less available, more predictive) feature. We find that linear models are relatively unbiased, but introducing a single hidden layer with ReLU or Tanh units yields a bias. Our empirical findings are consistent with a theoretical account based on Neural Tangent Kernels. Finally, we study how models used in practice trade off predictivity and availability in naturalistic datasets, discovering availability manipulations which increase models' degree of shortcut bias. Taken together, these findings suggest that the propensity to learn shortcut features is a fundamental characteristic of deep nonlinear architectures warranting systematic study given its role in shaping how models solve tasks.
Katherine L. Hermann, Hossein Mobahi, Thomas Fel, Michael C. Mozer
2023-10-24T22:54:05Z
http://arxiv.org/abs/2310.16228v2
# On the Foundations of Shortcut Learning ###### Abstract Deep-learning models can extract a rich assortment of features from data. Which features a model uses depends not only on _predictivity_--how reliably a feature indicates train-set labels--but also on _availability_--how easily the feature can be extracted, or leveraged, from inputs. The literature on shortcut learning has noted examples in which models privilege one feature over another, for example texture over shape and image backgrounds over foreground objects. Here, we test hypotheses about which input properties are more available to a model, and systematically study how predictivity and availability interact to shape models' feature use. We construct a minimal, explicit generative framework for synthesizing classification datasets with two latent features that vary in predictivity and in factors we hypothesize to relate to availability, and quantify a model's shortcut bias--its over-reliance on the shortcut (more available, less predictive) feature at the expense of the core (less available, more predictive) feature. We find that linear models are relatively unbiased, but introducing a single hidden layer with ReLU or Tanh units yields a bias. Our empirical findings are consistent with a theoretical account based on Neural Tangent Kernels. Finally, we study how models used in practice trade off predictivity and availability in naturalistic datasets, discovering availability manipulations which increase models' degree of shortcut bias. Taken together, these findings suggest that the propensity to learn shortcut features is a fundamental characteristic of deep nonlinear architectures warranting systematic study given its role in shaping how models solve tasks. ## 1 Introduction Natural data domains provide a rich, high-dimensional input from which deep-learning models can extract a variety of candidate features. During training, models determine which features to rely on. Following training, the chosen features determine how models generalize. A challenge for machine learning arises when models come to rely on _spurious_ or _shortcut_ features instead of the core or defining features of a domain (Arjovsky et al., 2019; McCoy et al., 2019; Geirhos et al., 2020; Singla and Feizi, 2022). Shortcut "cheat" features, which are correlated with core "true" features in the training set, obtain good performance on the training set as well as on an iid test set, but poor generalization on out-of-distribution inputs. For instance, ImageNet-trained CNNs classify primarily according to an object's texture (Baker et al., 2018; Geirhos et al., 2018; Hermann et al., 2020), whereas people define and classify solid objects by shape (e.g., Landau et al., 1988). Focusing on texture leads to reliable classification on many images but might result in misclassification of, say, a hairless cat, which has wrinkly skin more like that of an elephant. The terms "spurious" and "shortcut" are largely synonymous in the literature, although the former often refers to features that arise unintentionally in a poorly constructed dataset, and the latter to features easily latched onto by a model. In addition to a preference for texture over shape, other common shortcut features include a propensity to classify based on image backgrounds rather than foreground objects (Beery et al., 2018; Sagawa et al., 2020; Xiao et al., 2020; Moayeri et al., 2022), or based on individual diagnostic pixels rather than higher-order image content (Malhotra et al., 2020). 
The literature examining feature use has often focused on _predictivity_--how well a feature indicates the target output. Anomalies have been identified in which networks come to rely systematically on one feature over another when the features are equally predictive, or even when the preferred feature has lower predictivity than the non-preferred feature (Beery et al., 2018; Perez et al., 2018; Tachet et al., 2018; Arjovsky et al., 2019; McCoy et al., 2019; Hermann and Lampinen, 2020; Shah et al., 2020; Nagarajan et al., 2020; Pezeshki et al., 2021; Fel et al., 2023). Although we lack a general understanding of the cause of such preference anomalies, several specific cases have been identified. For example, features that are linearly related to classification labels are preferred by models over features that require nonlinear transforms (Hermann and Lampinen, 2020; Shah et al., 2020). Another factor leading to anomalous feature preferences is the redundancy of representation, e.g., the size of the pixel footprint in an image (Sagawa et al., 2020; Wolff and Wolff, 2022; Tartaglini et al., 2022). Because predictivity alone is insufficient to explain feature reliance, here we explicitly introduce the notion of _availability_ to refer to the factors that influence the likelihood that a model will use a feature more so than a purely statistical account would predict. A more-available feature is easier for the model to extract and leverage. Past research has systematically manipulated predictivity; in the present work, we systematically manipulate both predictivity _and_ availability to better understand their interaction and to characterize conditions giving rise to shortcut learning. Our contributions are:

* We define quantitative measures of predictivity and availability using a generative framework that allows us to synthesize classification datasets with latent features having specified predictivity and availability. We introduce two notions of availability relating to singular values and nonlinearity of the data generating process, and quantify shortcut bias in terms of how a learned classifier deviates from an optimal classifier in its feature reliance.
* We perform parametric studies of latent-feature predictivity and availability, and examine the sensitivity of different model architectures to shortcut bias, finding that it is greater for nonlinear models than linear models, and that model depth amplifies bias.
* We present a theoretical account based on Neural Tangent Kernels (Jacot et al., 2018) which indicates that shortcut bias is an inevitable consequence of nonlinear architectures.
* We show that vision architectures used in practice can be sensitive to non-core features beyond their predictive value, and show a set of availability manipulations of naturalistic images which shift models' feature reliance.

## 2 Related work

The propensity of models to learn spurious (Arjovsky et al., 2019) or shortcut (Geirhos et al., 2020) features arises in a variety of domains (Heuer et al., 2016; Gururangan et al., 2018; McCoy et al., 2019; Sagawa et al., 2020) and is of interest from both a scientific and practical perspective.
Existing work has sought to understand the extent to which this tendency derives from the statistics of the training data versus from model inductive bias (Neyshabur et al., 2014; Tachet et al., 2018; Perez et al., 2018; Rahaman et al., 2019; Arora et al., 2019; Geirhos et al., 2020; Sagawa et al., 2020, 2020; Pezeshki et al., 2021; Nagarajan et al., 2021), for example a bias to learn simple functions (Tachet et al., 2018; Perez et al., 2018; Rahaman et al., 2019; Arora et al., 2019). Hermann and Lampinen (2020) found that models preferentially represent one of a pair of equally predictive image features, typically whichever feature had been most linearly decodable from the model at initialization. They also identified cases where models relied on a less-predictive feature that had a linear relationship to task labels over a more-predictive feature that had a nonlinear relationship to labels. Together, these findings suggest that predictivity is not the only factor that determines model representations and behavior. A theoretical account by Pezeshki et al. (2021) studied a situation in supervised learning in which minimizing cross-entropy loss captures only a subset of predictive features, while other relevant features go unnoticed. They introduced a formal notion of _strength_ that determines which features are likely to dominate a solution. Their notion of strength confounds predictivity and availability, the two concepts which we aim to disentangle in the present work. Work in the vision domain has studied which features vision models rely on when trained on natural image datasets. For example, ImageNet models prefer to classify based on texture rather than shape (Baker et al., 2018; Geirhos et al., 2018; Hermann et al., 2020) and local rather than global image content (Baker et al., 2020; Malhotra et al., 2020), marking a difference from how people classify images (Landau et al., 1988; Baker et al., 2018). Other studies have found that image backgrounds play an outsize role in driving model predictions (Beery et al., 2018; Sagawa et al., 2020; Xiao et al., 2020). Two studies manipulated the quantity of a feature in an image to test how this changed model behavior. Tartaglini et al. (2022) probed pretrained models with images containing a shape consistent with one class label and a texture consistent with another, where the texture was present throughout the image, including in the background. They varied the opacity of the background and found that as the texture became less salient, models increasingly classified by shape. Wolff & Wolff (2022) found that when images had objects with opposing labels, models preferred to classify by the object with the larger pixel footprint, an idea we return to in Section 5. The development of methods for reducing bias for particular features over others is an active area of research (e.g. Arjovsky et al., 2019; Geirhos et al., 2018; Hermann et al., 2020; Robinson et al., 2021; Minderer et al., 2020; Sagawa et al., 2020; Ryali et al., 2021; Kirichenko et al., 2022; Teney et al., 2022; Tiwari & Shenoy, 2023; Ahmadian & Lindsten, 2023; Pezeshki et al., 2021; Puli et al., 2023; LaBonte et al., 2023), important for improving model generalization and addressing fairness concerns (Zhao et al., 2017; Buolamwini & Gebru, 2018).
## 3 Generative procedure for synthetic datasets

To systematically explore the role of predictivity and availability, we construct synthetic datasets from a generative procedure that maps a pair of latent features, \(\mathbf{z}=(z_{s},z_{c})\), to an input vector \(\mathbf{x}\in\mathbb{R}^{d}\) and class label \(y\in\{-1,+1\}\). The subscripts \(s\) and \(c\) denote the latent dimensions that will be treated as the potential _shortcut_ and _core_ feature, respectively. The procedure draws \(\mathbf{z}\) from a multivariate Gaussian conditioned on class, \[\mathbf{z}\,|\,y\sim\mathcal{N}\left(\begin{bmatrix}y\,\mu_{s}\\ y\,\mu_{c}\end{bmatrix},\begin{bmatrix}1&\sigma_{sc}\\ \sigma_{sc}&1\end{bmatrix}\right),\] with \(|\sigma_{sc}|<1\). Through symmetry, the optimal decision boundary for latent feature \(i\) is \(z_{i}=0\), allowing us to define the feature predictivity \(\rho_{i}\equiv\Pr(y=\text{sign}(z_{i}))\). For Gaussian likelihoods, this predictivity is achieved by setting \(\mu_{i}=\sqrt{2}\,\text{erf}^{-1}\left(2\rho_{i}-1\right)\). Figure 1A shows sample latent-feature distributions for different levels of \(\rho_{s}\) and \(\rho_{c}\).

Figure 1: **Synthetic data.** A: Two datasets differing in the predictivity of \(z_{s}\). B: Schematic of the embedding procedure manipulating availability via the mapping from \(\mathbf{z}\) to \(\mathbf{x}\). Dashed boxes are optional.

**Availability manipulations**. Given a latent vector \(\mathbf{z}\), we manipulate the hypothesized availability (hereafter, simply _availability_) of each feature independently through an embedding procedure, sketched in Figure 1B, that yields an input \(\mathbf{x}\in\mathbb{R}^{d}\). We posit two factors that influence the availability of feature \(i\), its _amplification_ \(\alpha_{i}\) and its _nesting_ \(\eta_{i}\). Amplification \(\alpha_{i}\) is a scaling factor on an embedding, \(\mathbf{e}_{i}=\alpha_{i}\mathbf{w}_{i}z_{i}\), where \(\mathbf{w}_{i}\in\mathbb{R}^{d}\) is a feature-specific random unit vector. Amplification includes manipulations of redundancy (replicating a feature) and magnitude (increasing a feature's dynamic range). Nesting \(\eta_{i}\) is a factor that determines ease of recovery of a latent feature from an embedding. We assume a nonlinear, rank-preserving transform, \(\mathbf{e}^{\prime}_{i}=f_{\eta_{i}}(\mathbf{e}_{i})\), where \(f_{\eta_{i}}\) is a fully connected, random net with \(\eta_{i}\in\mathbb{N}\) tanh layers in cascade. For \(\eta_{i}=0\), feature \(i\) remains in explicit form, \(\mathbf{e}^{\prime}_{i}=\mathbf{e}_{i}\); for \(\eta_{i}>0\), the feature is recoverable through an inversion of increasing complexity with \(\eta_{i}\). To complete the data generative process, we combine embeddings by summation: \(\mathbf{x}=\mathbf{e}^{\prime}_{s}+\mathbf{e}^{\prime}_{c}\).

**Assessing reliance on shortcut features**. Given a synthetic dataset and a model trained on this dataset, we assess the model's _reliance_ on the shortcut feature, i.e., the extent to which the model uses this feature as a basis for classification decisions. When \(z_{c}\) and \(z_{s}\) are correlated (\(\sigma_{sc}>0\)), some degree of reliance on the shortcut feature is Bayes optimal (see orange dashed line in Figure 6B). Consequently, we need to assess reliance relative to that of an optimal classifier. We perform this assessment in latent space, where the Bayes optimal classifier can be found by linear discriminant analysis (LDA) (see Figure 2 inset).
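Before turning to the bias measure, the sampling and embedding procedure above can be summarized in a few lines. The sketch below handles the no-nesting case (\(\eta_s=\eta_c=0\)) and is illustrative rather than the exact code used in the experiments.

```python
import numpy as np
from scipy.special import erfinv

def sample_dataset(n, rho_s=0.85, rho_c=0.9, sigma_sc=0.6,
                   alpha_s=64.0, alpha_c=1.0, d=100, seed=0):
    """Sample (x, y, z) from the Section 3 generative procedure with eta = 0."""
    rng = np.random.default_rng(seed)
    mu = np.sqrt(2) * erfinv(2 * np.array([rho_s, rho_c]) - 1)   # latent means
    cov = np.array([[1.0, sigma_sc], [sigma_sc, 1.0]])
    y = rng.choice([-1, 1], size=n)                              # balanced classes
    z = rng.multivariate_normal(np.zeros(2), cov, size=n) + y[:, None] * mu
    # Feature-specific random unit vectors, amplified and summed into the input.
    w_s, w_c = (v / np.linalg.norm(v) for v in rng.standard_normal((2, d)))
    x = alpha_s * np.outer(z[:, 0], w_s) + alpha_c * np.outer(z[:, 1], w_c)
    return x, y, z
```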
The _shortcut bias_ is the reliance of a model on the shortcut feature over that of the optimal classifier (LDA): \[\text{{bias}}=\text{{reliance}}_{\text{model}}-\text{{reliance}}_{\text{optimal}}.\] Appendix C.1 describes and justifies our reliance measure in detail. For a given model \(\mathcal{M}\), whether trained net or LDA, we probe over the latent space and determine for each probe \(\mathbf{z}\) the binary classification decision, \(\hat{y}_{\mathcal{M}(\mathbf{z})}\) (see Figure 2). Shortcut reliance is the difference between the model's alignment (covariance) with decision boundaries based only on the shortcut and core features: \[\text{{reliance}}_{\mathcal{M}}=\mathbb{E}_{\mathbf{z}}[\,\hat{y}_{\mathcal{M}(\mathbf{z})}\left(\text{sign}(z_{s})-\text{sign}(z_{c})\right)]\,.\] The reliance score is in \([-1,+1]\), though in practice (due to the construction of the datasets), it varies in \([0,1]\). And because the trained net will typically have more reliance on the shortcut feature than LDA, the shortcut bias will lie in \([0,1]\) as well.

## 4 Experiments manipulating feature predictivity and availability

**Methodology**. Using the procedure described in Section 3, we conduct controlled experiments examining how feature availability, predictivity, and model architecture affect the shortcut bias. We sample class-balanced datasets with 3200 train instances, 1000 validation instances, and 900 probe (evaluation) instances that uniformly cover the \((z_{s},z_{c})\) space by taking a Cartesian product of 30 \(z_{s}\) evenly spaced in \([-3\mu_{s},+3\mu_{s}]\) and 30 \(z_{c}\) evenly spaced in \([-3\mu_{c},+3\mu_{c}]\). In all simulations, we set \(d=100\), \(\eta_{c}=\eta_{s}=0\), \(\rho_{c}=0.9\), \(\sigma_{sc}=0.6\). We manipulate shortcut-feature predictivity with the constraint that it is lower than core-feature predictivity but still predictive, i.e., \(0.5<\rho_{s}<\rho_{c}=0.9\). Because only the relative amplification of features matters, we vary the ratio \(\alpha_{s}/\alpha_{c}\), with the shortcut feature more available, i.e., \(\alpha_{s}/\alpha_{c}\geq 1\). We report the means across 10 runs (Appendix C.3).

**Models prefer the more available feature, even when it is less predictive.** We first test whether models prefer \(z_{s}\) when it is more available than \(z_{c}\), including when it is less predictive, while holding model architecture fixed (see Appendix C.3). Figure 2 shows that when the predictivity of the two features is matched (\(\rho_{c}=\rho_{s}=0.9\)), models prefer the more-available feature. And given sufficiently high availability, models can prefer the less-predictive but more-available shortcut feature to the more-predictive but less-available core feature. _Availability can override predictivity._

**Model depth increases shortcut bias.** In the previous experiment, we used a fixed model architecture. Here, we investigate how model depth and width influence shortcut bias when trained with \(\alpha_{s}/\alpha_{c}=64\) and \(\rho_{s}=0.85\), previously shown to induce a bias (see Figure 2, gray square). As shown in Figure 3A, we find that bias increases as a function of model depth when the dataset is held fixed.

**Model nonlinearity increases shortcut bias.** To understand the role of the hidden-layer activation function, we compare models with linear, ReLU, and Tanh activations while holding weight initialization and data fixed. As indicated in Figure 3B, nonlinear activation functions induce a larger shortcut bias than their linear counterpart.
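The bias computation itself is straightforward once probe decisions are available. A minimal sketch (model training and the embedding of probes into input space are elided):

```python
import numpy as np

def shortcut_reliance(predict, z_probes):
    """Reliance of a classifier on the shortcut feature: the mean covariance of
    its decision with shortcut-only minus core-only decision boundaries, over
    probes that uniformly cover the (z_s, z_c) latent space. `predict` maps a
    probe's latent coordinates to a label in {-1, +1}."""
    y_hat = np.array([predict(z) for z in z_probes])
    z_s, z_c = z_probes[:, 0], z_probes[:, 1]
    return float(np.mean(y_hat * (np.sign(z_s) - np.sign(z_c))))

def shortcut_bias(predict_model, predict_lda, z_probes):
    """Bias = model reliance minus the reliance of the Bayes-optimal (LDA) classifier."""
    return shortcut_reliance(predict_model, z_probes) - shortcut_reliance(predict_lda, z_probes)
```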
**Feature nesting increases shortcut bias.** The synthetic experiments reported above all manipulate availability with the amplitude ratio \(\alpha_{s}/\alpha_{c}\). We also conducted experiments manipulating a second factor we expected would affect availability (Hermann and Lampinen, 2020; Shah et al., 2020), the relative nesting of representations, i.e., \(\eta_{c}-\eta_{s}\geq 1\). We report these experiments in Appendix C.3.

Figure 2: **Deep nonlinear models can prefer a less-predictive but more-available feature to a more-predictive but less-available one.** The color of each cell in the heatmap indicates the mean bias of a model as a function of the availability and predictivity of the shortcut feature, \(z_{s}\). The inset shows in faint coloring the decision surface for an optimal Bayes classifier (LDA) and a trained model. Overlaid points are a subset of training instances. The model obtains a shortcut bias of 0.53.

Figure 3: A: **Model depth increases shortcut bias.** The color of each cell indicates the mean bias of an MLP with ReLU hidden activation functions, for various model widths and depths, trained on data with a shortcut feature that is more available (\(\alpha_{s}/\alpha_{c}=64\)) but less predictive (\(\rho_{s}=0.85\)) than the core feature. **Model nonlinearity increases shortcut bias.** B: Shortcut bias for three hidden activation functions for a deep MLP with width 128 and depth 2, trained on datasets where predictivity is matched (\(\rho_{s}=\rho_{c}=0.9\)), but shortcut availability is higher (\(\alpha_{s}/\alpha_{c}=32\)). A shortcut bias is more pronounced when the model contains a nonlinear activation function. C: Shortcut bias for MLPs with a single hidden layer and a hidden activation function that is either linear (left) or ReLU (right), for various shortcut feature availabilities (\(\alpha_{s}/\alpha_{c}\)) and predictivities (\(\rho_{s}\)). See B.1 for Tanh.

## 5 Availability manipulations with synthetic images

What if we instantiate the same latent feature space studied in the previous section in images? We form shortcut features that are analogous to texture or image background--features previously noted to preferentially drive the behavior of vision models (e.g. Geirhos et al., 2018; Baker et al., 2018; Hermann et al., 2020; Beery et al., 2018; Sagawa et al., 2020; Xiao et al., 2020). Building on the work of Wolff and Wolff (2022), we hypothesize that these features are more available because they have a large footprint in an image, and hence, by our notions of availability, a large \(\alpha_{s}\).

**Methods**. We instantiate a latent vector \(\mathbf{z}\) from our data-generation procedure as an image. Each feature becomes an object (\(z_{s}\) a circle, \(z_{c}\) a square) whose color is determined by the respective feature value. Following Wolff and Wolff (2022), we manipulate the availability of each feature in terms of its size, or pixel footprint. We randomly position the circle and square entirely within the image, avoiding overlap, yielding a \(224\times 224\) pixel image (Figure 4A, Appendix C.4).

**Results**. Figure 4B reports ResNet18 shortcut bias as a function of shortcut-feature availability (footprint) when the two features are equally predictive (\(\rho_{s}=\rho_{c}=0.9\)). In Figure 4C, the availability ratio is fixed at \(\alpha_{s}/\alpha_{c}=4\), and the shortcut bias is assessed as a function of \(\rho_{s}\). ResNet18 is biased toward the more available shortcut feature even when it is less predictive than the core feature.
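For concreteness, the image instantiation in the Methods above can be sketched as follows. The object placement, radii, and value-to-color mapping below are illustrative assumptions (overlap avoidance is omitted for brevity).

```python
import numpy as np
from PIL import Image, ImageDraw

def render_probe(z_s, z_c, r_s=60, r_c=15, size=224, seed=0):
    """Render a latent probe as an image: z_s becomes a circle, z_c a square.
    Each feature's value sets its object's color; availability is the pixel
    footprint, controlled here by the radii r_s (shortcut) and r_c (core)."""
    rng = np.random.default_rng(seed)

    def color(z):  # map a latent value in roughly [-3, 3] to a red-blue ramp
        t = float(np.clip((z + 3) / 6, 0, 1))
        return (int(255 * t), 0, int(255 * (1 - t)))

    img = Image.new("RGB", (size, size), (128, 128, 128))
    draw = ImageDraw.Draw(img)
    cx, cy = rng.integers(r_s, size - r_s, size=2)   # circle position
    draw.ellipse([cx - r_s, cy - r_s, cx + r_s, cy + r_s], fill=color(z_s))
    sx, sy = rng.integers(r_c, size - r_c, size=2)   # square position
    draw.rectangle([sx - r_c, sy - r_c, sx + r_c, sy + r_c], fill=color(z_c))
    return img
```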
Together, these results suggest that a simple characteristic of image contents--the pixel footprint of an object--can bias models' output behavior, and may therefore explain why models can fail to leverage typically-smaller foreground objects in favor of image backgrounds (Section 7).

Figure 4: **ResNet-18 prefers a shortcut feature when availability is instantiated as the pixel footprint of an object (feature), even when that feature is less predictive.** A: Sample images. B: Shortcut bias increases as a function of relative availability of the shortcut feature when features are equally predictive (\(\rho_{s}=\rho_{c}=0.9\)), consistent with Wolff and Wolff (2022). C: Even when the shortcut feature is less predictive, models have a shortcut bias due to availability, when \(\alpha_{s}/\alpha_{c}=4\).

## 6 Theoretical account

In our empirical investigations, we quantified the extent to which a trained model deviated from a statistically optimal classifier in its reliance on the more-available feature using a measure which considered the basis for probe-instance classifications. Here, we use an alternative approach of studying the sensitivity of a Neural Tangent Kernel (NTK) (Jacot et al., 2018) to the availability of a feature. The resulting form presents a crisp perspective of how predictivity and availability interact. In particular, we prove that availability bias is absent in linear networks but present in ReLU networks. The proofs for the theorems of this section are presented in the Supplementary Materials. For tractability of the analysis, we make a few simplifying assumptions. We focus on 2-layer fully connected architectures for which the kernel of the ReLU networks admits a simple closed form. In addition, to be able to compute integrations that arise in the analysis, we resort to an asymptotic approximation which assumes the covariance matrix is small. Specifically, we represent the covariance matrix as \(s\,[\,1\,,\,\sigma_{12}\,;\,\sigma_{12}\,,\,1\,]\), where the scale parameter \(s>0\) is considered to be small. Finally, in order to handle the analysis for the ReLU kernel, we will use a polynomial approximation.

**Kernel Spectrum**. Consider a two-layer network with the first layer having a possibly nonlinear activation function. When the width of this model gets large, learning of this model can be approximated by a kernel regression problem, with a given kernel function \(k(\,.\,,\,.\,)\) that depends on the architecture and the activation function. Given a distribution over the input data \(p(\mathbf{x})\), we define a (linear) kernel operator as one that acts on a function \(f\) to produce another function \(g\) as in \(g(\mathbf{x})\,=\,\int_{\mathbb{R}^{n}}k(\mathbf{x},\mathbf{z})\,f(\mathbf{z})\,p(\mathbf{z})\,d\mathbf{z}\). This allows us to define an eigenfunction \(\phi\) of the kernel operator as one that satisfies, \[\lambda\,\phi(\mathbf{x})\ =\ \int_{\mathbb{R}^{n}}k(\mathbf{x},\mathbf{z})\,\phi(\mathbf{z})\,p(\mathbf{z})\,d\mathbf{z}\,. \tag{1}\] The value of \(\lambda\) will be the eigenvalue of that eigenfunction when the eigenfunction \(\phi\) is normalized as \(\int_{\mathbb{R}^{n}}\phi^{2}(\mathbf{x})\,p(\mathbf{x})\,d\mathbf{x}\ =\ 1\,.\) **Form of \(p(\mathbf{z})\)**.
Recall that in our generative dataset framework, we have a pair of latent features \(z_{c}\) and \(z_{s}\) that are embedded into a high dimensional space via \(\mathbf{x}_{d\times 1}\,=\,\alpha_{s}z_{s}\mathbf{w}_{s}+\alpha_{c}z_{c}\mathbf{w}_{c}= \,\mathbf{U}_{d\times 2}\mathbf{A}_{2\times 2}\mathbf{z}_{2\times 1}\,\). With this expression, we switch terminology such that our \(\mathbf{w}_{i}\to\mathbf{U}\) and \(\alpha_{i}\to\mathbf{A}\), and therefore \(\mathbf{A}\) is diagonal matrix with positive diagonal entries, and the columns of \(\mathbf{U}\) are (approximately) orthonormal. Henceforth, we also refer to features with indices \(1\) and \(2\) instead of \(s\) and \(c\). An implication of orthonormal columns on \(\mathbf{U}\) is that the dot product of any two input vectors \(\mathbf{x}\) and \(\mathbf{x}^{\dagger}\) will be independent of \(\mathbf{U}\), i.e., \(\left\langle\mathbf{x},\mathbf{x}^{\dagger}\right\rangle\,=\,\left\langle\mathbf{A}\mathbf{z},\mathbf{A}\mathbf{z}^{\dagger}\right\rangle\). Consequently, we can compute dot products in the original \(2\)-dimensional space instead of in the \(d\)-dimensional embedding space. On the other hand, we will later see that the kernel function \(k(\mathbf{x}_{1},\mathbf{x}_{2})\) of the two cases we study here (ReLU and linear) depends on their input only through the dot product \(\left\langle\mathbf{x}_{1},\mathbf{x}_{2}\right\rangle\) and norms \(\left\|\mathbf{x}_{1}\right\|\) and \(\left\|\mathbf{x}_{2}\right\|\) (self dot products). Thus, the kernel is entirely invariant to \(\mathbf{U}\) and without loss of generality, we can consider the input to the model as \(\mathbf{x}=\mathbf{A}\mathbf{z}\). Therefore, \[\mathbf{x}^{+}\sim\mathcal{N}\left(\begin{bmatrix}a_{1}\mu_{1}\\ a_{2}\mu_{2}\end{bmatrix},\begin{bmatrix}a_{1}^{2}&a_{1}a_{2}\sigma_{12}\\ a_{1}a_{2}\sigma_{12}&a_{2}^{2}\end{bmatrix}\right),\ \mathbf{x}^{-}\sim\mathcal{N}\left( -\begin{bmatrix}a_{1}\mu_{1}\\ a_{2}\mu_{2}\end{bmatrix},\begin{bmatrix}a_{1}^{2}&a_{1}a_{2}\sigma_{12}\\ a_{1}a_{2}\sigma_{12}&a_{2}^{2}\end{bmatrix}\right).\] **Linear kernel function**. If the activation function is linear, then the kernel function simply becomes a standard dot product, \[k(\mathbf{x}_{1},\mathbf{x}_{2})\,\triangleq\,\left\langle\mathbf{x}_{1},\mathbf{x}_{2}\right\rangle\,. \tag{2}\] The following theorem provides details about the spectrum of this kernel. **Theorem 1**: _Consider the kernel function \(k(\mathbf{x}_{1},\mathbf{x}_{2})\,\triangleq\,\left\langle\mathbf{x}_{1},\mathbf{x}_{2}\right\rangle\). The kernel operator associated with \(k\) under the data distribution \(p\) specified above has only one non-zero eigenvalue \(\lambda=\left\|\mathbf{A}\mathbf{\mu}\right\|^{2}\) and its eigenfunction has the form \(\phi(\mathbf{x})\,=\,\frac{\left\langle\mathbf{A}\mathbf{\mu},\mathbf{x}\right\rangle}{\left\| \mathbf{A}\mathbf{\mu}\right\|^{2}}\)._ **ReLU kernel function**. If the activation function is set to be a ReLU, then the kernel function is known to have the following form (Cho & Sapi, 2009; Bietti & Bach, 2020): \[k(\mathbf{x}_{1},\mathbf{x}_{2})\,\triangleq\,\left\|\mathbf{x}_{1}\right\|\left\|\mathbf{x}_{ 2}\right\|h\left(\left\langle\frac{\mathbf{x}_{1}}{\left\|\mathbf{x}_{1}\right\|}, \frac{\mathbf{x}_{2}}{\left\|\mathbf{x}_{2}\right\|}\right\rangle\right)\,,\,h(u)\, \triangleq\,\frac{1}{\pi}\bigg{(}u\big{(}\pi-\arccos(u)\big{)}+\sqrt{1-u^{2}} \,\bigg{)}\,. 
\tag{3}\] In order to obtain an analytical form for the eigenfunctions of the kernel under the considered data distribution, we resort to a quadratic approximation of \(h\) by \(\widehat{h}\) as \(\widehat{h}(u)\triangleq\frac{815}{3072}(1+u)^{2}\). This approximation enjoys certain optimality criteria. Derivation details of this quadratic form are provided in the Supplementary Materials, as is a plot showing the quality of the approximation. We now focus on spectral characteristics of the approximate ReLU kernel. Replacing \(h\) in the kernel function \(k\) of Equation 3 with \(\widehat{h}\), we obtain an alternative kernel function approximation for ReLUs: \[k(\mathbf{x},\mathbf{z})\,\triangleq\,\left\|\mathbf{x}\right\|\left\|\mathbf{z}\right\| \widehat{h}\left(\left\langle\frac{\mathbf{x}}{\left\|\mathbf{x}\right\|},\frac{\mathbf{z} }{\left\|\mathbf{z}\right\|}\right\rangle\right)\,=\,\left\|\mathbf{x}\right\|\left\| \mathbf{z}\right\|a^{*}\left(1+\left\langle\frac{\mathbf{x}}{\left\|\mathbf{x}\right\|}, \frac{\mathbf{z}}{\left\|\mathbf{z}\right\|}\right\rangle\right)^{2}\,. \tag{4}\] The following theorem characterizes the spectrum of this kernel. **Theorem 2**: _Consider the kernel function \(k(\mathbf{x},\mathbf{z})\,\triangleq\,\left\|\mathbf{x}\right\|\left\|\mathbf{z}\right\|a^{*} \left(1+\left\langle\frac{\mathbf{x}}{\left\|\mathbf{x}\right\|},\frac{\mathbf{z}}{\left\| \mathbf{z}\right\|}\right\rangle\right)^{2}\). The kernel operator associated with \(k\) under the data distribution \(p\) specified above has only two non-zero eigenvalues \(\lambda_{1}=\lambda_{2}=2a^{*}\|A\mathbf{\mu}\|^{2}\) with associated eigenfunctions given by_ \[\phi_{1}(\mathbf{x})\,=\,\frac{\left\|\mathbf{x}\right\|}{\left\|\mathbf{A}\mathbf{\mu}\right\| }\,\frac{1+\left\langle\frac{\mathbf{x}}{\left\|\mathbf{x}\right\|},\frac{\mathbf{A}\mathbf{ \mu}}{\left\|\mathbf{A}\mathbf{\mu}\right\|}\right\rangle^{2}}{2}\ \ \text{and}\ \ \phi_{2}(\mathbf{x})\,=\,\left\langle\frac{\mathbf{x}}{\left\|\mathbf{A}\mathbf{\mu}\right\|}, \frac{\mathbf{A}\mathbf{\mu}}{\left\|\mathbf{A}\mathbf{\mu}\right\|}\right\rangle.\] ### Sensitivity Analysis We now assign a target value \(y(\mathbf{x})\) to each input point \(\mathbf{x}\). Under the squared loss for regression. By expressing the kernel operator using its eigenfunctions, it is easy to show that \(f(\mathbf{x})=\sum_{i}(\phi_{i}(\mathbf{x})\int_{\mathbb{R}^{n}_{i}}\phi_{i}(\mathbf{z})\,y( \mathbf{z})\,p(\mathbf{z})\,d\mathbf{z})\) where \(i\) runs over non-zero eigenvalues \(\lambda_{i}\) of the kernel operator (see also Appendix G). We now restrict our focus to a binary classification scenario, \(y:\mathcal{X}\to\{0,1\}\). More precisely, we replace \(p\) with our Gaussian mixture and set \(y(\mathbf{x})\) to \(1\) and \(0\) depending on whether \(\mathbf{x}\) is drawn from the positive or negative component of the mixture. \[f(\mathbf{x})=\sum_{i}\left(\phi_{i}(\mathbf{x})\big{(}\tfrac{1}{2}\int_{\mathbb{R}^{ 2}}\phi_{i}(\mathbf{z})\,(1)\,p^{+}(\mathbf{z})\,d\mathbf{z}+\tfrac{1}{2}\int_{\mathbb{R}^ {2}}\phi_{i}(\mathbf{z})\,(0)\,p^{-}(\mathbf{z})\,d\mathbf{z}\big{)}\right)=\tfrac{1}{2} \sum_{i}\phi_{i}(\mathbf{x})\phi_{i}(\mathbf{A}\mathbf{\mu})\,,\] where \(p^{+}\) and \(p^{-}\) denote class-conditional normal distributions. Now let us tweak the availability of each feature by a diagonal scaling matrix \(\mathbf{B}\triangleq\operatorname{diag}(\mathbf{b})\) where \(\mathbf{b}\triangleq[b_{1},b_{2}]\). Denote the modified prediction function as \(g_{\mathbf{B}}\). 
That is, \(g_{\mathbf{B}}(\mathbf{x})\triangleq\tfrac{1}{2}\sum_{i}\phi_{i}(\mathbf{x})\phi_{i}(\mathbf{B}\mathbf{A}\mathbf{\mu})\). The alignment between \(f\) and \(g\) is their normalized dot product, \[\gamma(\mathbf{b})\triangleq\int_{\mathbb{R}^{2}}\frac{f(\mathbf{x})}{\sqrt{\int_{\mathbb{R}^{2}}f^{2}(\mathbf{t})\,p(\mathbf{t})\,d\mathbf{t}}}\,\frac{g_{\mathbf{B}}(\mathbf{x})}{\sqrt{\int_{\mathbb{R}^{2}}g_{\mathbf{B}}^{2}(\mathbf{t})\,p(\mathbf{t})\,d\mathbf{t}}}\,p(\mathbf{x})\,d\mathbf{x}\,. \tag{5}\] We define the sensitivity of the alignment to feature \(i\), for \(i=1,2\), as the \(m\)'th order derivative of \(\gamma\) w.r.t. \(b_{i}\) evaluated at \(\mathbf{b}=\mathbf{1}\) (which leads to the identity scale factor \(\mathbf{B}=\mathbf{I}\)), \[\zeta_{i}\triangleq\left(\frac{\partial^{m}}{\partial b_{i}^{m}}\,\gamma\right)_{\mathbf{b}=\mathbf{1}}=\left(\frac{\partial^{m}}{\partial b_{i}^{m}}\,\int_{\mathbb{R}^{2}}\frac{\sum_{i}\phi_{i}(\mathbf{x})\phi_{i}(\mathbf{A}\mathbf{\mu})}{\sqrt{\int_{\mathbb{R}^{2}}\left(\sum_{i}\phi_{i}(\mathbf{t})\phi_{i}(\mathbf{A}\mathbf{\mu})\right)^{2}p(\mathbf{t})\,d\mathbf{t}}}\,\frac{\sum_{i}\phi_{i}(\mathbf{x})\phi_{i}(\mathbf{B}\mathbf{A}\mathbf{\mu})}{\sqrt{\int_{\mathbb{R}^{2}}\left(\sum_{i}\phi_{i}(\mathbf{t})\phi_{i}(\mathbf{B}\mathbf{A}\mathbf{\mu})\right)^{2}p(\mathbf{t})\,d\mathbf{t}}}\,p(\mathbf{x})\,d\mathbf{x}\right)_{\mathbf{b}=\mathbf{1}}.\] \(\zeta_{i}\) indicates how much the model relies on feature \(i\) to make a prediction. In particular, if whenever feature \(i\) is more available than feature \(j\), i.e. \(a_{i}>a_{j}\), we also see the model is more sensitive to feature \(i\) than feature \(j\), i.e. \(|\zeta_{i}|>|\zeta_{j}|\), the model is biased toward the more-available feature. In addition, when \(a_{i}<a_{j}\) but \(|\zeta_{i}|>|\zeta_{j}|\), the bias is toward the less-available feature. One can express the presence of these biases more concisely as \(\operatorname{sign}\left((|\zeta_{1}|-|\zeta_{2}|)(a_{1}-a_{2})\right)\), where values of \(+1\) and \(-1\) indicate bias toward more- and less-available features, respectively. To verify either of these two cases via sensitivity, we must choose the lowest order \(m\) that yields a non-zero \(|\zeta_{1}|-|\zeta_{2}|\). On the other hand, if \(|\zeta_{1}|-|\zeta_{2}|=0\) for any choice of \(a_{1}\) and \(a_{2}\), the model is unbiased to feature availability. The following theorems now show that linear networks are unbiased to feature availability, while ReLU networks are biased toward more available features. **Theorem 3**: _In a linear network, \(|\zeta_{1}|-|\zeta_{2}|\) is always zero for any choice of \(m\geq 1\) and regardless of the values of \(a_{1}\) and \(a_{2}\)._ **Theorem 4**: _In a ReLU network, \(|\zeta_{1}|-|\zeta_{2}|=0\) for any \(1\leq m\leq 8\). The first non-zero \(|\zeta_{1}|-|\zeta_{2}|\) happens at \(m=9\) and has the following form,_ \[|\zeta_{1}|-|\zeta_{2}|\,=\,\frac{5670}{\|\mathbf{A}\mathbf{\mu}\|^{18}}(a_{1}a_{2}\mu_{1}\mu_{2})^{8}(a_{1}^{2}\mu_{1}^{2}-a_{2}^{2}\mu_{2}^{2})\,. \tag{6}\] A straightforward consequence of this theorem is that \[\operatorname{sign}\left((|\zeta_{1}|-|\zeta_{2}|)(a_{1}-a_{2})\right)=\operatorname{sign}\left(\tfrac{5670}{\|\mathbf{A}\mathbf{\mu}\|^{18}}(a_{1}a_{2}\mu_{1}\mu_{2})^{8}(a_{1}^{2}\mu_{1}^{2}-a_{2}^{2}\mu_{2}^{2})(a_{1}-a_{2})\right)=\operatorname{sign}\left((a_{1}^{2}\mu_{1}^{2}-a_{2}^{2}\mu_{2}^{2})(a_{1}-a_{2})\right)\,.
\tag{7}\] Recall from Section 3 that feature predictivity \(\rho_{i}\) is related to \(\mu_{i}\) via \(\rho_{i}=\frac{1}{2}(1+\operatorname{erf}(\frac{\mu_{i}}{\sqrt{2}}))\). Observe that \(\rho_{i}\) is an increasing function in \(\mu_{i}\); bigger \(\mu_{i}\) implies larger \(\rho_{i}\). Putting that beside (7) provides a crisp interpretation of the trade-off between predictivity and availability. For example, when the latent features are equally predictive (\(\mu_{1}=\mu_{2}\)), the sign becomes \(+1\) for any (non-negative) choice of availability parameters \(a_{1}\) and \(a_{2}\). Thus, for equally predictive features, the ReLU networks are always biased toward the more available feature. Figure 5 shows additional examples at particular levels of predictivity. The coloring indicates the direction of the availability bias (only the boundaries between the blue and the yellow regions have no availability bias).

## 7 Feature availability in naturalistic datasets

We have seen that models trained on controlled, synthetic data are influenced by availability to learn shortcuts when a nonlinear activation function is present. How do feature predictivity and availability in naturalistic datasets interact to shape the behavior of models used in practice? To test this, we train ResNet18 models (He et al., 2016) to classify naturalistic images by a binary core feature irrespective of the value of a non-core feature. We construct two datasets by sampling images from Waterbirds (Sagawa et al., 2020) (core: Bird, non-core: Background), and CelebA (Liu et al., 2015) (core: Attractive, non-core: Smiling). See C.5 for additional details.

**Sensitivity to the non-core feature beyond a statistical account**. Figures 6A and B show that, for both datasets, as the training-set predictivity of the non-core feature increases, model accuracy increases for congruent probes, but decreases for incongruent ones, whereas a Bayes optimal classifier is not comparably sensitive to the predictivity of the non-core feature. So, the model is more influenced by the non-core feature than we would expect based solely on predictivity. This heightened sensitivity implies that models prioritize the non-core feature more than they should, given its predictive value. Thus, in the absence of predictivity as an explanatory factor, we conclude that the non-core feature is more _available_ than the core feature.

**Availability manipulations**. Motivated by the result that Background is more available to models than the core Bird (Figure 6A), we test whether specific background manipulations (hypothesized types of availability) shift model feature reliance. As shown in Figure 6C, we find that Bird accuracy increases as we reduce the availability of the image background by manipulating its spatial extent (_Bird size_, _Background patch removal_) or by dropping background color (_Color_), implicating these as among the features that models latch onto in preferring image backgrounds (validated with explainability analyses in C.5). Experiments in Figure B.9 show that this phenomenon also occurs in ImageNet-pretrained models; background noise and spatial frequency manipulations also drive feature reliance.

## 8 Conclusion

Shortcut learning is of both scientific and practical interest given its implications for how models generalize. Why do some features become shortcut features? Here, we introduced the notion of _availability_ and conducted studies systematically varying availability.
We proposed a generative framework that allows for independent manipulation of predictivity and availability of latent features. Testing hypotheses about the contributions of each to model behavior, we find that for both vector and image classification tasks, deep nonlinear models exhibit a shortcut bias, deviating from the statistically optimal classifier in their feature reliance. We provided a theoretical account which indicates the inevitability of a shortcut bias for a single-hidden-layer nonlinear (ReLU) MLP but not a linear one. The theory specifies the exact interaction between predictivity and availability, and consistent with our empirical studies, predicts that availability can trump predictivity. In naturalistic datasets, vision architectures used in practice rely on non-core features more than they should on statistical grounds alone. Connecting with prior work identifying availability biases for texture and image background, we explicitly manipulated background properties such as spatial extent, color, and spatial frequency and found that they influence a model's propensity to learn shortcuts. Taken together, our empirical and theoretical findings highlight that models used in practice are prone to shortcut learning, and that to understand model behavior, one must consider the contributions of both feature predictivity and availability. Future work will study shortcut features in additional domains, and develop methods for automatically discovering further shortcut features which drive model behavior. The generative framework we have laid out will support a systematic investigation of architectural manipulations which may influence shortcut learning.

Figure 5: Plot of \(\operatorname{sign}((|\zeta_{1}|-|\zeta_{2}|)(a_{1}-a_{2}))\) for ReLU network as a function of \(a_{1}\) and \(a_{2}\). We fix \(\mu_{1}=1\) and vary \(\mu_{2}\in\{0.1,0.5,1,2,10\}\). Yellow and blue correspond to values \(+1\) and \(-1\) respectively.

## Acknowledgments

We thank Pieter-Jan Kindermans for feedback on the manuscript, and Jaehoon Lee, Lechao Xiao, Robert Geirhos, Olivia Wiles, Isabelle Guyon, and Roman Novak for interesting discussions.

Figure 6: **Availability as well as predictivity determines which features image classifiers rely on.** A: Models (ResNet18) were trained to classify Birds from images sampled from Waterbirds. We varied Background (non-core) predictivity while keeping Bird (core) predictivity fixed (\(=0.99\)), and show Bird classification accuracy for two types of probes: _congruent_ (blue, core and non-core features support the same label) and _incongruent_ (orange, core and non-core features support opposing labels). As Background predictivity increases, the gap in accuracy between incongruent and congruent probes also increases. The model is more sensitive to the non-core feature than expected by a Bayes-optimal classifier (_optimal_): predictivity alone does not explain the model's behavior. B: Similar to the Waterbirds dataset, models trained to classify images from CelebA as "Attractive" exhibit an effect of "Smiling" availability. C: Bird accuracy for incongruent Waterbirds probes is influenced by both Background predictivity (\(\rho\)) and availability when we manipulate the latter explicitly (see also B.9).
2305.15597
Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models
The mission of open knowledge graph (KG) completion is to draw new findings from known facts. Existing works that augment KG completion require either (1) factual triples to enlarge the graph reasoning space or (2) manually designed prompts to extract knowledge from a pre-trained language model (PLM), exhibiting limited performance and requiring expensive efforts from experts. To this end, we propose TAGREAL that automatically generates quality query prompts and retrieves support information from large text corpora to probe knowledge from PLM for KG completion. The results show that TAGREAL achieves state-of-the-art performance on two benchmark datasets. We find that TAGREAL has superb performance even with limited training data, outperforming existing embedding-based, graph-based, and PLM-based methods.
Pengcheng Jiang, Shivam Agarwal, Bowen Jin, Xuan Wang, Jimeng Sun, Jiawei Han
2023-05-24T22:09:35Z
http://arxiv.org/abs/2305.15597v1
# Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models

###### Abstract

The mission of open knowledge graph (KG) completion is to draw new findings from known facts. Existing works that augment KG completion require either (1) factual triples to enlarge the graph reasoning space or (2) manually designed prompts to extract knowledge from a pre-trained language model (PLM), exhibiting limited performance and requiring expensive efforts from experts. To this end, we propose TagReal that automatically generates quality query prompts and retrieves support information from large text corpora to probe knowledge from PLM for KG completion. The results show that TagReal achieves state-of-the-art performance on two benchmark datasets. We find that TagReal has superb performance even with limited training data, outperforming existing embedding-based, graph-based, and PLM-based methods.

## 1 Introduction

A knowledge graph (KG) is a heterogeneous graph that encodes factual information in the form of entity-relation-entity triplets, where a _relation_ connects a _head_ entity and a _tail_ entity (e.g., "_Miami-located_in-USA_") (Wang et al., 2017; Hogan et al., 2021). KG (Dai et al., 2020) plays a central role in many NLP applications, including question answering (Hao et al., 2017; Yasunaga et al., 2021), recommender systems (Zhou et al., 2020), and drug discovery (Zitnik et al., 2018). However, existing works (Wang et al., 2018; Hamilton et al., 2018) show that most large-scale KGs are incomplete and cannot fully cover the massive real-world knowledge. This challenge motivates KG completion, which aims to find one or more object entities given a subject entity and a relation (Lin et al., 2015). For example, in Figure 1, our goal is to predict the object entity with "_Detroit_" as the subject entity and "_contained_by_" as the relation. However, existing KG completion approaches (Trouillon et al., 2016; Das et al., 2018) have several limitations (Fu et al., 2019). First, their performance heavily depends on the density of the graph. They usually perform well on dense graphs with rich structural information but poorly on sparse graphs, which are more common in real-world applications. Second, previous methods (e.g., Bordes et al. (2013)) assume a closed-world KG without considering vast open knowledge in external resources. In fact, in many cases, a KG is usually associated with a rich text corpus (Bodenreider, 2004), which contains a vast amount of factual data not yet extracted. To overcome these challenges, we investigate the task of open knowledge graph completion, where the KG can be constructed using new facts from outside the KG. Recent text-enriched solutions (Fu et al., 2019) focus on using a pre-defined set of facts to enrich the knowledge graph. Nonetheless, the pre-defined set of facts is often noisy and constricted, that is, it does not provide sufficient information to efficiently update the KG.

Figure 1: The quality of hand-crafted prompts can be limited, while prompt mining is a scalable alternative. Support information also helps PLM understand the purpose of prompts. In this example, Canada and Michigan are potentially valid options but given prompt mining and support information retrieval, the model becomes confident about Michigan as the answer here.

Pre-trained language models (PLMs) Devlin et al. (2019); Liu et al. (2019) have been shown to be powerful in capturing factual knowledge implicitly from learning on massive unlabeled texts Petroni et al. (2019).
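To make this kind of knowledge probing concrete, the short sketch below queries an off-the-shelf masked language model with a LAMA-style cloze prompt. It is only an illustration: the checkpoint name and the prompt are arbitrary examples, not the specific models or templates used in this paper.

```python
from transformers import pipeline

# Cloze-style probing of a masked language model (LAMA-style).
# The checkpoint and the prompt are illustrative choices only.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("Miami is located in [MASK]."):
    print(f"{candidate['token_str']:>12}  p={candidate['score']:.3f}")
```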
Since PLMs are superb in text encoding, they can be utilized to facilitate knowledge graph completion with external text information. Recent knowledge graph completion methods Shin et al. (2020); Lv et al. (2022) focus on using manually crafted prompts (e.g., "Detroit is located in [MASK]" in Figure 1) to query the PLMs for graph completion (e.g., "Michigan"). However, manually creating prompts can be expensive with limited quality (e.g., PLM gives a wrong answer "Canada" to the query with a handcrafted prompt, as shown in Figure 1). Building on the above limitations of standard KG and the enormous power of PLMs Devlin et al. (2019); Liu et al. (2019), we aim to use PLMs for open knowledge graph completion. We propose an end-to-end framework that jointly exploits the implicit knowledge in PLMs and textual information in the corpus to perform knowledge graph completion (as shown in Figure 1). Unlike existing works (e.g., Fu et al. (2019); Lv et al. (2022)), our method does not require a manually pre-defined set of facts and prompts, which is more general and easier to adapt to real-world applications. Our contributions can be summarized as: * We study the open KG completion problem that can be assisted by facts captured from PLMs. To this end, we propose a new framework **TagReal** that denotes **text augmented** open KG completion with **real**-world knowledge in PLMs. * We develop prompt generation and information retrieval methods, which enable TagReal to automatically create high-quality prompts for PLM knowledge probing and search support information, making it more practical especially when PLMs lack some domain knowledge. * Through extensive quantitative and qualitative experiments on real-world knowledge graphs such as Freebase1 we show the applicability and advantages of our framework2. Footnote 1: [https://github.com/thunlp/OpenNRE](https://github.com/thunlp/OpenNRE) Footnote 2: Our code is available at: [https://github.com/pat-jj/TagReal](https://github.com/pat-jj/TagReal) ## 2 Related Work ### KG Completion Methods KG completion methods can be categorized into embedding-based and PLM-based methods. **Embedding-based methods** represent entities and relations as embedding vectors and maintain their semantic relations in the vector space. TransE Bordes et al. (2013) vectorizes the head, the relation and the tail of triples into a Euclidean space. Dist-Mult Yang et al. (2014) converts all relation embeddings into diagonal matrices in bilinear models. RotatE Sun et al. (2019) presents each relation embedding as a rotation in complex vector space from the head entity to the tail entity. In recent years, researchers have realized that PLMs can serve as knowledge bases Petroni et al. (2019); Zhang et al. (2020); AlKhamissi et al. (2022). **PLM-based methods** for KG completion Yao et al. (2019); Kim et al. (2020); Chang et al. (2021); Lv et al. (2022) start to gain attention. As a pioneer, KG-BERT Yao et al. (2019) fine-tunes PLM with concatenated head, relation, and tail in each triple, outperforming the conventional embedding-based methods in link prediction tasks. Lv et al.(2022) present PKGC, which uses manually designed triple prompts and carefully selected support prompts as inputs to the PLM. Their result shows that PLMs could be used to substantially improve the KG completion performance, especially in the _open-world_Shi and Weninger (2018) setting. Compared to PKGC, our framework TagReal automatically generates prompts of higher quality without any domain expert knowledge. 
Furthermore, instead of pre-supposing the existence of support information, we search relevant textual information from the corpus with an information retrieval method to support the PLM knowledge probing. ### Knowledge Probing using Prompts LAMA Petroni et al. (2019) is the first framework for knowledge probing from PLMs. The prompts are manually created with a subject placeholder and an unfilled space for the object. For example, a triple query (_Miami_, _location_,?) may have a prompt "Miami is located in [MASK]" where "<subject> is located in [MASK]" is the template for "location" relation. The training goal is to correctly fill [MASK] with PLM's prediction. Another work, BertNet Hao et al. (2022), proposes an approach applying GPT-3 Brown et al. (2020) to automatically generate a weighted prompt ensemble with input entity pairs and a manual seed prompt. It then uses PLM again to search and select top-ranked entity pairs with the ensemble for KG completion. ### Prompt Mining Methods When there are several relations to interpret, manual prompt design is costly due to the requirement of domain expert knowledge. In addition, the prompt quality could not be ensured. Hence, _quality prompt mining_ catches the interest of researchers. Jiang et al. 2020 propose an approach MINE which searches middle words or dependency paths between the given inputs and outputs in a large text corpus (e.g., Wikipedia). They also propose a reasonable approach to optimize the ensemble of the mined prompts by weighting prompt individuals regarding their performance on the PLM. Before the emergence and widespread use of PLMs, textual pattern mining performed a similar function to find reliable patterns for information extraction. For instance, MetaPAD Jiang et al. (2017) generates quality meta patterns by context-aware segmentation with the pattern quality function, and TruePIE Li et al. (2018) proposes the concept of pattern embedding and a self-training framework, that discovers positive patterns automatically. ## 3 Methodology We propose TagReal, a PLM-based framework to handle KG completion tasks. In contrast to the previous work, our framework does not rely on handcrafted prompts or pre-defined relevant facts. As shown in Figure 2, we automatically create appropriate prompts and search relevant support information, which are further utilized as templates to explore implicit knowledge from PLMs. ### Problem Formulation **Knowledge graph completion** is to add new triples (facts) to the existing triple set of a KG. There are two tasks to achieve this goal. The first is **triple classification**, which is a binary classification task to predict whether a triple \((h,r,t)\) belongs to the KG, where \(h,r,t\) denote head entity, relation and tail entity respectively. The second task is **link prediction**, which targets on predicting either the tail entity \(t\) with a query \((h,r,?)\) or the head entity \(h\) with a query \((?,r,t)\). ### Prompt Generation Previous studies (e.g., Jiang et al. (2020)) demonstrate that the accuracy of relational knowledge extracted from PLMs heavily relies on the quality of prompts used for querying. To this end, we develop a comprehensive approach for automatic quality prompt generation given triples in KG as the only input, as shown in Figure 3. We use textual pattern mining methods to mine quality patterns from large corpora as the prompts used for PLM knowledge probing. As far as we know, we are pioneers in using **textual pattern mining** methods for **LM prompt mining**. 
We believe in the applicability of this approach for the following reasons.

* Similar data sources. We apply pattern mining on large corpora (e.g., Wikipedia), which are the data sources on which most PLMs are pre-trained.
* Similar objectives. Textual pattern mining is to mine patterns to extract new information from large corpora; prompt mining is to mine prompts to probe implicit knowledge from PLMs.
* Similar performance criteria. The reliability of a pattern or a prompt is indicated by how many accurate facts it can extract from corpora/PLMs.

**Sub-corpora mining** is the first step that creates the data source for the pattern mining. Specifically, given a KG with a relation set \(R=(r_{1},r_{2},...,r_{k})\), we first extract tuples \(T_{r_{i}}\) paired by head entities and tail entities for each relation \(r_{i}\in R\) from the KG. For example, for the relation \(r_{1}\): /business/company/founder, we extract all tuples like <microsoft, bill_gates> in this relation from the KG. For each tuple \(t_{j}\), we then search sentences \(s_{t_{j}}\) containing both head and tail from a large corpus (e.g., Wikipedia) and other reliable sources, which are added to compose the sub-corpus \(C_{r_{i}}\). We limit the size of each set to \(\theta\) for each tuple to mine more generic patterns for future applications.

Figure 2: **TagReal Framework. The input and output of each phase are highlighted by red and green, respectively. The dotted arrow indicates the optional process.**

**Phrase segmentation and frequent pattern mining** are applied to mine patterns from sub-corpora as prompt candidates. We use AutoPhrase [22] to segment corpora into more natural and unambiguous semantic phrases, and use the FP-Growth algorithm [15] to mine frequently appearing patterns to compose a candidate set \(\mathcal{P}^{\prime}_{r_{i}}=(p^{\prime}_{1},p^{\prime}_{2},...,p^{\prime}_{m})\). The size of the set is large, as there are plenty of messy textual patterns. **Prompt selection**. To select quality patterns from the candidate set, we apply two textual mining approaches: MetaPAD [16] and TruePIE [17]. MetaPAD applies a pattern quality function introducing several criteria of contextual features to estimate the reliability of a pattern. We explain why those features can also be adapted for LM prompt estimation: (1) _Frequency and concordance_: Since a PLM learns more contextual relations between frequent patterns and entities during the pre-training stage, a pattern that occurs more frequently in the background corpus can probe more facts from the PLM. Similarly, if a pattern composed of highly associated sub-patterns appears frequently, it should be considered a good one, as the PLM would be familiar with the contextual relations among the sub-patterns. (2) _Informativeness_: A pattern with low informativeness (e.g., \(p^{\prime}_{1}\) in Figure 3) has weak ability for PLM knowledge probing, as the relation between the subject and object entities cannot be well interpreted by it. (3) _Completeness_: The completeness of a pattern strongly affects PLM knowledge probing, especially when any of the placeholders is missing (e.g., \(p^{\prime}_{m-2}\) in Figure 3), in which case the PLM cannot even give an answer. (4) _Coverage_: A quality pattern should be able to probe as many accurate facts from the PLM as possible. Therefore, patterns like \(p^{\prime}_{4}\) which only suit a few or only one case should have a low quality score. We then apply TruePIE on the prompts (patterns) selected by MetaPAD.
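As a rough illustration of the candidate-mining stage described above (and before the TruePIE filtering discussed next), the sketch below collects sentences that contain both entities of a tuple and turns them into [X]/[Y] patterns, keeping only the frequent ones. This is a deliberate simplification under stated assumptions: the actual pipeline uses AutoPhrase segmentation and FP-Growth rather than this naive substitute-and-count, and the tuples and sentences here are invented examples.

```python
from collections import Counter

def mine_prompt_candidates(tuples, sentences, min_count=2):
    """Naive candidate mining: replace head/tail mentions with [X]/[Y] and
    count how often each resulting pattern occurs across the sub-corpus."""
    counts = Counter()
    for head, tail in tuples:
        for sent in sentences:
            low = sent.lower()
            if head in low and tail in low:
                pattern = low.replace(head, "[X]").replace(tail, "[Y]").strip()
                counts[pattern] += 1
    return [p for p, c in counts.most_common() if c >= min_count]

# Toy inputs for illustration only.
tuples = [("microsoft", "bill gates"), ("apple", "steve jobs")]
sentences = [
    "Microsoft was founded by Bill Gates in 1975.",
    "Apple was founded by Steve Jobs and Steve Wozniak.",
]
print(mine_prompt_candidates(tuples, sentences, min_count=1))
```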
TruePIE filters out the prompts that have low cosine similarity with the positive samples (e.g., \(p^{\prime}_{3}\) and \(p^{\prime}_{m-1}\) are filtered), which matters for the creation of the prompt ensemble, since we want the prompts in the ensemble to be semantically close to each other so that poor-quality prompts do not significantly impact the prediction result given by the PLM. As a result, we create a more reliable prompt ensemble \(\mathcal{P}_{r_{i}}=\{p_{i,1},p_{i,2},...,p_{i,n}\}\) based on the averaged probabilities given by the prompts: \[P(y|x,r_{i})=\frac{1}{n}\sum_{j=1}^{n}P_{LM}(y|x,p_{i,j}), \tag{1}\] where \(r_{i}\) is the \(i\)-th relation and \(p_{i,j}\) is the \(j\)-th prompt in \(\mathcal{P}_{r_{i}}\). Beyond prompt selection, a **prompt optimization** process is also employed. As pointed out by Jiang et al. (2020), some prompts in the ensemble are more reliable and ought to be weighted more. Thus, we change Equation 1 to: \[P(y|x,r_{i})=\sum_{j=1}^{n}w_{i,j}P_{LM}(y|x,p_{i,j}), \tag{2}\] where \(w_{i,j}\) is the weight of the \(j\)-th prompt for the \(i\)-th relation. In our setting, all weights \(\{w_{1,1},..,w_{k,n}\}\) are learned through the PLM to optimize \(P(y|x,r_{i})\) for \(r_{i}\in R\) ahead of the training process.

Figure 3: **Prompt generation process**. The solid lines connect the intermediate processes, and the arrows point to the intermediate/final results. Input and output are highlighted in **red** and green respectively. [X] and [Y] denote head and tail entities respectively.

### Support Information Retrieval

In addition to the prompt mining, we also attach some query-wise and triple-wise support text information to the prompt to help the PLMs understand the knowledge we want to probe, as well as to aid in training the triple classification ability. As seen in Figure 4, for the \(i\)-th query \(q_{i}^{r}\) in relation \(r\), we use BM25 (Robertson et al., 1995) to retrieve highly ranked support texts with a score greater than \(\delta\) and length shorter than \(\phi\) from the reliable corpus and randomly select one of them as the support information. To compose the input cloze \(\hat{q}_{i}^{r}\) to the PLM, we concatenate the support text to each prompt in the optimized ensemble we obtained through the previous steps, with the subject filled and the object masked. [CLS] and [SEP] are the tokens for sequence classification and support information-prompt separation, respectively. In the training stage, we search texts using triples rather than queries, and the [MASK] is filled by the object entities. It is worth noting that support text is optional in TagReal, and we leave it blank if no matching data is found.

### Training

To train our model, we create negative triples in addition to the given positive triples, following the idea introduced by PKGC Lv et al. (2022), to handle the triple classification task. We create negative triples by replacing the head and tail in each positive triple with an "incorrect" entity that achieves a high probability under the KGE model. We also create random negative samples by randomly replacing the heads and tails to enlarge the set of negative training/validation triples. The labeled training triples are assembled as \(\mathcal{T}=\mathcal{T}^{+}\cup(\mathcal{T}^{-}_{KGE}\cup\mathcal{T}^{-}_{RAND})\), where \(\mathcal{T}^{+}\) is the positive set, and \(\mathcal{T}^{-}_{KGE}\) and \(\mathcal{T}^{-}_{RAND}\) are two negative sets we created by embedding model-based and random approaches respectively.
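The negative-sampling recipe described above can be sketched as follows. Only the random corruption part is shown; producing the KGE-based hard negatives would additionally require a trained embedding model, which is omitted here. The entity names and triples are toy examples, not dataset content.

```python
import random

def random_negatives(positive_triples, entities, per_positive=1, seed=0):
    """Corrupt the head or the tail of each positive triple with a random entity,
    rejecting corruptions that reproduce a known positive triple.
    (KGE-based hard negatives, as used in the paper, are not sketched here.)"""
    rng = random.Random(seed)
    known = set(positive_triples)
    negatives = []
    for h, r, t in positive_triples:
        for _ in range(per_positive):
            while True:
                e = rng.choice(entities)
                if e in (h, t):
                    continue
                corrupted = (e, r, t) if rng.random() < 0.5 else (h, r, e)
                if corrupted not in known:
                    negatives.append(corrupted)
                    break
    return negatives

positives = [("detroit", "contained_by", "michigan")]
entities = ["detroit", "michigan", "canada", "texas", "ontario"]
print(random_negatives(positives, entities, per_positive=2))
```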
Then, we transform all training triples of each relation \(r\) into sentences with the prompt ensemble \(\mathcal{P}_{r}\) and the triple-wise support information retrieved by BM25 (if there is any). At the training stage, the [MASK] is replaced by the object entity in each positive/negative triple. The query instances \(\hat{q}_{i}^{r}\) are then used to fine-tune the PLM by updating its parameters. Cross-entropy loss Lv et al. (2022) is applied for optimization: \[\mathcal{L}=-\sum_{\tau\in\mathcal{T}}(y_{\tau}\log(c_{\tau}^{1})+(1-y_{\tau})\frac{\log(c_{\tau}^{0})}{M}), \tag{3}\] where \(c_{\tau}^{0},c_{\tau}^{1}\in[0,1]\) are the softmax classification scores of the token [CLS] for the triple \(\tau\), \(y_{\tau}\) is the ground truth label \((1/0)\) of the triple, and \(M=(|\mathcal{T}^{+}|/|\mathcal{T}^{-}|)\) is the ratio between the number of positive and negative triples. After the PLM is fine-tuned with the positive/negative triples in the training set, it should have a better performance on classifying the triples in the dataset compared to a raw PLM. This capability would enable it to perform KG completion as well.

### Inference

Given a query \((h,r,?)\), we apply the query-wise support information that is relevant to the head entity \(h\) and relation \(r\), as we presume that we are unaware of the tail entity (our prediction goal). Then, we make the corresponding query instances containing [MASK], with both support information and prompt ensemble, as shown in Figure 4. To leverage the triple classification capability of the PLM on link prediction, we replace [MASK] in a query instance with each entity in the known entity set and rank their classification scores in descending order to create a 1-d vector as the prediction result for each query. This indicates that the lower-indexed entities in the vector are more likely to compose a positive triple with the input query. For the prompt ensemble, we sum up the scores by entity index before ranking them. The detailed illustration is placed in Appendix E.

Figure 4: Support information retrieval.

## 4 Experiment

### Datasets and Compared Methods

**Datasets.** We use the datasets FB60K-NYT10 and UMLS-PubMed provided by Fu et al., where FB60K and UMLS are knowledge graphs and NYT10 and PubMed are corpora. FB60K-NYT10 contains more general relations (e.g., "nationality of person") whereas UMLS-PubMed focuses on biomedical domain-specific relations (e.g., "gene mapped to disease"). We apply the pre-processed dataset 3 (with training/validation/testing data size 8:1:1) to align the evaluation of our method with the baselines. Due to the imbalanced distribution and noise present in FB60K-NYT10 and UMLS-PubMed, 16 and 8 relations are selected for the performance evaluation, respectively. We place more details of the datasets in Appendix A. Footnote 3: [https://github.com/INK-USC/CPL#datasets](https://github.com/INK-USC/CPL#datasets) **Compared Methods.** We compare our model TagReal with four categories of methods. For (1) traditional KG embedding-based methods, we evaluate **TransE**(Bordes et al., 2013), **DisMult**(Yang et al., 2014), **ComplEx**(Trouillon et al., 2016a), **ConvE**(Dettmers et al., 2018), **TuckER**(Balazevic et al., 2019) and **RotatE**(Sun et al., 2019), where TuckER is a newly added model. For (2) joint text and graph embedding methods, we evaluate **RC-Net**(Xu et al., 2014), **TransE+LINE**(Fu et al., 2019) and **JointNRE**(Han et al., 2018).
For (3) reinforcement learning (RL) based path-finding methods, we evaluate **MINERVA**(Das et al., 2017) and **CPL**(Fu et al., 2019). For (4) PLM-based methods, we evaluate **PKGC**(Lv et al., 2022) and our method **TagReal.** We keep the reported data of (2) and (3) by Fu et al.2019 while re-evaluating all models in (1) in different settings for more rigorous comparison (see Appendix I for details). PKGC in our setting can be viewed as TagReal with manual prompts and without support information. ### Experimental Setup For FB60K-NYT10, we use LUKE (Yamada et al., 2020), a PLM pre-trained on more Wikipedia data with RoBERTa (Liu et al., 2019). For UMLS-PubMed, we use SapBert (Liu et al., 2021) that pre-trained on both UMLS and PubMed with BERT (Devlin et al., 2019). For sub-corpora mining, we use Wikipedia with 6,458,670 document examples as the general corpus and NYT10/PubMed as the reliable sources, and we mine 500 sentences at maximum (\(\theta=500\)) for each tuple. For the prompt selection, we apply MetaPAD with its default setting, and apply TruePIE with the infrequent pattern penalty, and thresholds for positive patterns and negative patterns reset to {0.5, 0.7, 0.3} respectively. For support information retrieval, we use BM25 to search relevant texts with \(\delta=0.9\) and \(\phi=100\) in the corpora NYT10/PubMed. We follow the same fine-tuning process as PKGC. We use TuckER as the KGE model to create negative triples, and we set \(M=30\) as the ratio of positive/negative triples. To compare with baselines, we test our model on training sets in the ratios of [20%, 50%, 100%] for FB60K-NYT10 and [20%, 40%, 70%, 100%] for UMLS-PubMed. The evaluation metrics are described in Appendix F. ## 5 Results ### Performance Comparison We show the performance comparison with the state-of-the-art methods in Tables 1 and 2. 
As one can observe, TagReal outperforms the existing \begin{table} \begin{tabular}{c l c c c c c c c c c} \hline \hline \multicolumn{2}{c}{Model} & \multicolumn{3}{c}{20\%} & \multicolumn{3}{c}{50\%} & \multicolumn{3}{c}{100\%} \\ & Hits@5 & Hits@10 & MRR & Hits@5 & Hits@10 & MRR & Hits@5 & Hits@10 & MRR \\ \hline \hline \multirow{8}{*}{**KGE-based**} & TransE(Bordes et al., 2013) & 29.13 & 32.67 & 15.80 & 41.54 & 45.74 & 25.82 & 42.53 & 46.77 & 29.86 \\ & DisMult (Yang et al., 2014) & 3.44 & 4.31 & 2.64 & 15.98 & 18.85 & 13.14 & 37.94 & 41.62 & 30.56 \\ & ComplEx (Trouillon et al., 2016a) & 4.32 & 5.48 & 3.16 & 15.00 & 17.73 & 12.21 & 35.42 & 38.85 & 28.59 \\ & ConvE (Dettmers et al., 2018) & 29.49 & 33.30 & 24.31 & 40.10 & 44.03 & 32.97 & 50.18 & 54.06 & 40.39 \\ & TuckER (Balazevic et al., 2019) & 29.50 & 32.48 & 24.44 & 41.73 & 45.58 & 33.84 & 51.09 & 54.80 & 40.47 \\ & RotatE (Sun et al., 2019) & 15.91 & 18.32 & 12.65 & 35.48 & 39.42 & 28.92 & **51.73** & 55.27 & **42.64** \\ \hline \multirow{3}{*}{**Text\&KGE-based**} & RC-Net (Xu et al., 2014) & 13.48 & 15.37 & 13.26 & 14.87 & 16.54 & 14.63 & 14.69 & 16.34 & 14.41 \\ & TransE+Line (Fu et al., 2019) & 12.17 & 15.16 & 4.88 & 21.70 & 25.75 & 8.81 & 26.76 & 31.65 & 10.97 \\ & JointNRE (Han et al., 2018) & 16.93 & 20.74 & 11.39 & 26.96 & 31.54 & 21.24 & 42.02 & 47.33 & 32.68 \\ \hline \multirow{3}{*}{**RL-based**} & MINERVA (Das et al., 2017) & 11.64 & 14.16 & 8.93 & 25.16 & 31.54 & 22.24 & 43.80 & 44.70 & 34.62 \\ & CPL (Fu et al., 2019) & 15.19 & 18.00 & 10.87 & 26.81 & 31.70 & 23.80 & 43.25 & 49.50 & 33.52 \\ \hline \multirow{3}{*}{**PLM-based**} & PKGC (Lv et al., 2022) & 35.77 & 43.82 & 28.62 & 41.93 & 46.70 & 31.81 & 41.98 & 52.56 & 32.11 \\ & TagReal (our method) & **45.59** & **51.34** & **35.41** & **48.98** & **55.64** & **38.03** & 50.85 & **60.64** & 38.86 \\ \hline \hline \end{tabular} \end{table} Table 1: **Performance comparison of KG completion on FB60K-NYT10 dataset**. Results are averaged values of ten independent runs of head/tail entity predictions. The highest score is highlighted in **bold.** works in most cases. Given dense training data, KGE-based methods (e.g., RotatE) and RL-based methods (e.g., CPL) can still achieve relatively high performance. However, when the training data is limited, these approaches suffer, whereas PLM-based methods (PKGC and TagReal) are not greatly impacted. Our approach performs noticeably better in such cases than the current non-PLM-based ones. This is because the KGE models cannot be trained effectively with inadequate data, and the RL-based path-finding models cannot recognize the underlying patterns given insufficient evidential and general paths in KG. On the other hand, PLMs already possess implicit information that can be used directly, and the negative effects of insufficient data in fine-tuning would be less harsh than in training from scratch. TagReal outperforms PKGC due to its ability to automatically mine quality prompts and retrieve support information in contrast to manual annotations which are often limited. Next, we analyze the impacts of support information and prompt generation on the performance of TagReal. ### Model Analysis We conduct an ablation study to verify the effectiveness of both automatically generated prompts and retrieved support information. The results are presented in Table 3, Figure 5 and 6. 
**Support Information.** As shown in Table 3, for FB60K-NYT10, support information helps improve Hits@5 and Hits@10 in ranges of [5.2%, \begin{table} \begin{tabular}{c l c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{20\%} & \multicolumn{2}{c}{40\%} & \multicolumn{2}{c}{70\%} & \multicolumn{2}{c}{100\%} \\ & Hits@5 & Hits@10 & Hits@5 & Hits@10 & Hits@5 & Hits@10 & Hits@5 & Hits@10 \\ \hline \hline \multirow{8}{*}{**KGE-based**} & TransE (Bordes et al., 2013) & 19.70 & 30.47 & 27.72 & 41.99 & 34.62 & 49.29 & 40.83 & 53.62 \\ & DisMut (Yang et al., 2014) & 19.02 & 28.35 & 28.28 & 40.48 & 32.66 & 47.01 & 39.53 & 53.82 \\ & ComplEx (Trouillon et al., 2016a) & 11.28 & 17.17 & 24.64 & 35.15 & 25.89 & 38.19 & 34.54 & 49.30 \\ & ConvE (Dentmers et al., 2018) & 20.45 & 30.72 & 27.90 & 42.49 & 30.67 & 45.91 & 29.85 & 45.68 \\ & TuckER (Balajcevic et al., 2019) & 19.94 & 30.82 & 25.79 & 41.00 & 26.48 & 42.48 & 30.22 & 45.33 \\ & RotatE (Sun et al., 2019) & 17.95 & 27.55 & 27.35 & 40.68 & 34.81 & 48.81 & 40.15 & 53.82 \\ \hline \multirow{2}{*}{**Text\&KGE-based**} & RC-Net (Xu et al., 2014) & 7.94 & 10.77 & 7.56 & 11.43 & 8.31 & 11.81 & 9.26 & 12.00 \\ & TransE-Line (Fu et al., 2019) & 23.63 & 31.85 & 28.46 & 38.58 & 25.43 & 34.88 & 22.31 & 33.65 \\ & JointNetB (Han et al., 2018) & 21.05 & 31.37 & 27.96 & 40.10 & 30.87 & 44.47 & - & - \\ \hline \multirow{2}{*}{**RL-based**} & MINERVA (Das et al., 2017) & 11.55 & 19.87 & 24.65 & 35.71 & 35.80 & 46.26 & 57.63 & 63.83 \\ & CPL (Fu et al., 2019) & 15.32 & 24.22 & 26.96 & 38.03 & 37.23 & 47.60 & 58.10 & **65.16** \\ \hline \multirow{2}{*}{**PLM-based**} & PKGC (Lv et al., 2022) & 31.08 & 43.49 & 41.34 & 52.44 & 47.39 & 55.52 & 55.05 & 59.43 \\ & TagReal (our method) & **35.83** & **46.45** & **46.26** & **55.99** & **53.46** & **60.40** & **60.68** & 62.88 \\ \hline \hline \end{tabular} \end{table} Table 2: **Performance comparison of KG completion on UMLS-PubMed dataset**. Results are averaged values of ten independent runs of head/tail entity predictions. The highest score is highlighted in **bold. Figure 5: **Performance (F1-Score) variation of triple classification w.r.t training time. ”man” or ”optim” means TagReal with manual prompts or optimized prompt ensemble. ”supp” denotes support information.** Figure 6: **Relation-wise KG completion performance (Hits@10) comparison on FB60K-NYT10. Labels on the x-axis are the abbreviations of relations.** 7.5%] and [3.8%, 5.3%], respectively. For UMLS-PubMed, it helps improve Hits@5 and Hits10 in ranges of [1.9%, 4.94%] and [0.9%, 3.6%], respectively. Although the overlap between UMLS and PubMed is higher than that between FB60K and NYT10 (Fu et al., 2019), the textual information in PubMed could not help as much as NYT10 since that: (1) SapBert already possesses adequate implicit knowledge on both UMLS and PubMed so that a large portion of additional support texts might be useless. The lines "u2", "u3", "u4" and "u5" in Figure 5 show that support information helps more when using LUKE as the PLM as it contains less domain-specific knowledge. It also infers that the support information could be generalized to any application, especially when fine-tuning a PLM is difficult in low-resource scenarios (Arase and Tsuji, 2019; mahabadi et al., 2021). (2) UMLS contains more queries with multiple correct answers than FB60K (see Appendix A), which means some queries are likely "misled" to another answer and thus not counted into the Hits@N metric. 
**Prompt Generation.** Almost all of the relations, as shown in Figure 6, could be converted into better prompts by our prompt mining and optimization, albeit some of them might be marginally worse than manually created prompts due to the following fact. A few of the mined prompts, which are of lower quality than the manually created prompts, may significantly negatively affect the prediction score for the ensemble with equal weighting. Weighting based on PLM reduces such negative effects of the poor prompts for the optimized ensembles and enables them to outperform most handcrafted prompts. In addition, Table 3 shows the overall improvement for these three types of prompts, demonstrating that for both datasets, optimized ensembles outperform equally weighted ensembles, which in turn outperform manually created prompts. Moreover, by comparing line "f1" with line "f2", or line "u1" with line "u3" in Figure 5, we find a performance gap between PLM with manual prompts and with the optimized ensemble for triple classification, highlighting the effectiveness of our method. ### Case Study Figure 7 shows an example of using TagReal for link prediction with a query (_?,location/location/contains, alba_) where "_piedmont_" is the ground truth. By comparing the prediction results in different pairs, we find that both prompt generation and support information could enhance the KG completion performance. With the handcrafted prompt, the PLM simply lists out the terms that have some connections to the subject entity "_alba_" without being aware that we are trying to find the place it is located in. Differently, with the optimized prompt ensemble, the PLM lists entities that are highly relevant to our target, where "_cuneo_", "_italy_", "_north-ern_italy_" are correct real-world answers, indicating that our intention is well conveyed to the PLM. With the support information, the PLM increases the score of entities that are related to the keywords ("_italy_", "_piedmont_") in the text. Moreover, the optimized ensemble removes "_texas_" and "_scotland_" from the list and leaves only Italy-related locations. More examples are placed in Appendix H. Figure 7: **Example of the link prediction with TagReal on FB60K-NYT10**. **Man** denotes manual prompt. **Optim** denotes optimized prompt ensemble. **Supp** denotes support information. The **ground truth tail entity**. **helpful information** and **optimized prompts** (darker for higher weights) are highlighted. ## 6 Conclusion and Future Works In this study, we proposed a novel framework to exploit the implicit knowledge in PLM for open KG completion. Experimental results show that our method outperforms existing methods especially when the training data is limited. We showed that the optimized prompts with our approach outperform the handcrafted ones in PLM knowledge probing. The effectiveness of the support information retrieval to aid the prompting is also demonstrated. In the future, we may leverage QA model's power to retrieve more reliable support information. Another potential extension is to make our model more explainable by exploring path-finding tasks. ## 7 Limitations Due to the nature of deep learning, our method is less explainable than path-finding-based KG completion methods (e.g., CPL), which provide a concrete reasoning path to the target entity. Composing the path with multiple queries might be an applicable strategy that is worthwhile to investigate in order to extend our work on the KG reasoning task. 
For the link prediction task, we adapt the "recall and re-ranking" strategy from PKGC Lv et al. (2022), which brings a trade-off between prediction efficiency and accuracy. We alleviate the issue by applying different hyper-parameters given different sizes of training data, which is discussed in detail in Appendix C. As a common issue of existing KG completion models, the performance of our model also degrades when the input KG contains noisy data. The advantage of our approach in addressing this issue is that it can use both corpus-based textual information and implicit PLM knowledge to reduce noise. ## 8 Ethical Statements In this study, we use two datasets FB60K-NYT10 and UMLS-PubMed, which include the knowledge graphs FB60K and UMLS as well as the text corpora NYT10 and PubMed. The data is all publicly available. Our task is knowledge graph completion, which is performed by finding missing facts given existing knowledge. This work is only relevant to NLP research and will not be put to improper use by ordinary people. ## 9 Acknowledgements Research was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897, and the Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE) by NSF under Award No. 2118329, and NSF Award SCH-2205289, SCH-2014438, IIS-2034479.
2302.09119
A Review on Generative Adversarial Networks for Data Augmentation in Person Re-Identification Systems
Interest in automatic people re-identification systems has significantly grown in recent years, mainly for developing surveillance and smart shops software. Due to the variability in person posture, different lighting conditions, and occluded scenarios, together with the poor quality of the images obtained by different cameras, it is currently an unsolved problem. In machine learning-based computer vision applications with reduced data sets, one possibility to improve the performance of re-identification system is through the augmentation of the set of images or videos available for training the neural models. Currently, one of the most robust ways to generate synthetic information for data augmentation, whether it is video, images or text, are the generative adversarial networks. This article reviews the most relevant recent approaches to improve the performance of person re-identification models through data augmentation, using generative adversarial networks. We focus on three categories of data augmentation approaches: style transfer, pose transfer, and random generation.
Victor Uc-Cetina, Laura Alvarez-Gonzalez, Anabel Martin-Gonzalez
2023-02-17T20:07:28Z
http://arxiv.org/abs/2302.09119v3
# A Review on Generative Adversarial Networks for Data Augmentation in Person Re-Identification Systems

###### Abstract

Interest in automatic people re-identification systems has significantly grown in recent years, mainly for developing surveillance and smart shops software. Due to the variability in person posture, different lighting conditions, and occluded scenarios, together with the poor quality of the images obtained by different cameras, it is currently an unsolved problem. In machine learning-based computer vision applications with reduced data sets, one possibility to improve the performance of re-identification system is through the augmentation of the set of images or videos available for training the neural models. Currently, one of the most robust ways to generate synthetic information for data augmentation, whether it is video, images or text, are the generative adversarial networks. This article reviews the most relevant recent approaches to improve the performance of person re-identification models through data augmentation, using generative adversarial networks. We focus on three categories of data augmentation approaches: style transfer, pose transfer, and random generation.

## 1 Introduction

Re-identification of a person consists of the recognition of the same person through videos obtained from different cameras with a non-overlapping range of vision, captured at different moments in time. If we focus on the security domain, every year the installation of security cameras in cities around the world increases. The amount of images generated by these cameras every second translates into the need for manual, semi-automatic or automatic analysis of hundreds of hours of surveillance camera footage by security forces. In recent years, interest in the problem of re-identification has increased, due to the fact that it is not possible to use biometric data for the identification of a person, since the quality of the images does not allow a clear definition of the facial data. Therefore, other image features must be used, such as the structure of the body and clothing. Currently, different neural network models have been proposed to improve the re-identification performance, but to make these models work more efficiently, they need to be trained with very diverse data. One way to artificially generate diverse data is by means of generative adversarial networks (GANs) [6], which are capable of generating synthetic data that can help to improve the performance of the re-identification neural networks. Training a basic GAN model involves two main steps (a minimal code sketch of this training loop is given after the list):

1. In the first step, we train a neural network to receive as input a vector of randomly generated numbers, and produce as output the pixel values for a synthetic image. This neural network is called the generator network. At the same time, during this step, we train a second neural network, in this case, the so-called discriminator network. The discriminator receives an image as input and produces a classification as output, indicating whether the input image is a real one or a fake one.
2. In the second step, we use the generator network to produce some fake images, but we label them as real images, and we pass them to the discriminator network. The discriminator network will produce an error signal, given that the image is fake but it is labelled as a real one. This error signal is used to further train the generator network, making it produce more realistic fake images.
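The following sketch shows the two alternating steps above in code. It is only a toy illustration under simplifying assumptions: flattened random tensors stand in for real pedestrian crops, and the tiny fully connected generator and discriminator are placeholders rather than any architecture used in the works reviewed here.

```python
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 64, 128 * 64 * 3, 16  # toy sizes, not a real re-id setup

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(batch, img_dim) * 2 - 1        # stand-in for real person crops
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(100):
    # Step 1: train the discriminator on real images (label 1) and generated images (label 0).
    fake = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Step 2: train the generator so that its fake images are classified as real.
    fake = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```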
These two steps are intertwined during several iterations until both networks improve their performance to discriminate and generate fake images. In general, it is not a trivial task to train GANs, especially considering the large amounts of data and computational power needed. However, as we describe in this article, there have already been some successful approaches. On the other hand, due to the current increase in the amount of information from video surveillance systems that has to be analyzed, the implementation of an intelligent automatic analysis system is extremely important. In some countries, such as China, advanced cameras with sufficient resolution to extract unique face features are currently being used. However, since most surveillance cameras generate low-resolution images, the option of generating an automatic facial recognition system for re-identification is ruled out. As can be seen in Fig. 1, images are low-resolution, with variations in lighting and contrast, in addition to other factors that make the re-identification of people in images more complex, such as the change of clothing, whether or not the person is wearing a backpack or sweater, and the presence of obstacles or people in the background that limits the visibility of the person of interest in an open space.

Figure 1: Example of Market-1501 [15] images obtained by security cameras. Images corresponding to the same person are marked with green borders. The low resolution of the images prevents us from using automatic facial recognition information.

In order to support the development of re-identification neural networks, different deep learning libraries have been developed, such as Torchreid [19]. However, to achieve effective training of these models, a large amount of correctly labeled data is needed. This constitutes one of the greatest challenges for training efficient re-identification models. Currently, the largest datasets are very limited because they do not contain a large number of images. For example, two of the most important datasets, Market1501 and DukeMTMC-reID, have 1501 persons recorded on 6 different cameras and 702 persons recorded on 8 different cameras, respectively. The purpose of this article is to review the most relevant data augmentation methods based on the use of generative adversarial networks for people re-identification, in such a way that it provides a reference guide to researchers and engineers interested in this field.

## 2 Most Common Approaches

Generative adversarial networks, proposed in 2014 by Ian Goodfellow et al. [6], are capable of artificially generating images with great diversity. Over time, new architectures have been generated that improve the quality of the data generated, such as the CycleGAN architecture, proposed in 2017 by Zhu et al. [20]. Apart from improving the quality of the generated images, it manages to transfer the style or domain of a group of images to another group, using two generative adversarial networks. The performance of generative adversarial networks has been improving year after year, as new models have been proposed. This improvement has motivated several researchers to start investigating the use of GANs for data augmentation tasks. An important increase in the study of GANs for data augmentation in the training of re-identification models can be noted since 2018. The most relevant approaches have been grouped into three categories, corresponding to different methods used to generate new artificial images. 1. Style transfer.
New images are artificially generated from an input image, using different styles, also known as domains. The styles are imposed on the new images at the moment they are generated by neural networks previously trained for that purpose. In the newly generated images, you can see modifications with respect to the input image, such as color, tone, and lighting. 2. Pose transfer. In this approach, the inputs are one image of a real person and the target posture that we want to impose on that person. The posture can be specified either as a heat map or by the joints that correspond to the skeleton of the desired posture. The model is capable of generating the image of the input person with the desired posture. 3. Random generation. As its name suggests, methods in this category are less constrained and they are focused on randomly generating synthetic images with the only condition that the generated images should have characteristics similar to those of the images in the dataset that we want to augment.

## 3 Style Transfer

Images obtained by cameras usually have different resolutions or are in different positions. This can cause the lighting and tonality to vary, among other aspects. One way to generate new data is by transferring from one domain to another or adapting the domain, which is based on the idea of transferring the style of one or more images to others without changing the structure or the background of the original images. This means that the positions of the pixels do not change, so the underlying content of the images stays the same. There are several ways of dealing with style transfer. CycleGAN [20] is a GAN introduced in 2017. It is capable of learning the style of some images and transferring it to other, different images, that is, transferring the style from one domain to another. This was a milestone for generative adversarial networks and, as of 2018, a large number of works based on this architecture began to appear. For instance, in 2019, Zhong et al. [18] proposed CamStyle for transferring the style of a security camera to that of another camera. The newly proposed method based on CycleGAN can transfer the style only between two domains, limiting the architecture in such a way that it is necessary to generate a model for each pair of security cameras. Following the same philosophy of transferring one style from another, Dai et al. [4] proposed the cmGAN model focused on style transfer to convert RGB and infrared camera images. It is the first method using an RGB-Infrared Cross-Modality Re-ID Dataset, which includes images from four infrared and two RGB cameras. In this case, the generative adversarial network's discriminator is part of the feature extractor of the re-identification model. The input to the re-identification model is an infrared image and the target person needs to be searched within the RGB images. All the methods mentioned so far have the limitation of only transferring a style from one domain A to another domain B, which results in the need to duplicate the project for each different style. Furthermore, one of the biggest challenges faced by re-identification models is the poor performance obtained when images from one database are used during the testing of a re-identification model trained with images from another database.
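The cycle-consistency idea at the core of these CycleGAN-based style-transfer methods can be sketched as below. The two generators and the image tensors are toy stand-ins (the names g_ab and g_ba and the weight value are illustrative assumptions, not taken from any of the cited works), and the adversarial and identity terms of the full objective are omitted.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(g_ab, g_ba, real_a, real_b, weight=10.0):
    """CycleGAN-style constraint: translating A->B->A (and B->A->B) should
    reconstruct the original image, so only the style changes, not the content."""
    l1 = nn.L1Loss()
    rec_a = g_ba(g_ab(real_a))   # A -> B -> A
    rec_b = g_ab(g_ba(real_b))   # B -> A -> B
    return weight * (l1(rec_a, real_a) + l1(rec_b, real_b))

# Toy stand-ins for the two generators and the two camera domains.
g_ab = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
g_ba = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
a = torch.rand(4, 3, 128, 64)   # batch of crops from camera A
b = torch.rand(4, 3, 128, 64)   # batch of crops from camera B
print(cycle_consistency_loss(g_ab, g_ba, a, b).item())
```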
In 2018, as an improvement to the CamStyle architecture and seeking better performance when using the model on different databases, the M2M-GAN [9] architecture was proposed, which classifies the images of each database into subdomains, that is, one for each of the cameras. It is able to transfer a sub-domain of domain A to a sub-domain of domain B, with the training carried out in a supervised manner and requiring that all the data from the different databases have been labeled by hand. From 2019 onwards, more sophisticated architectures began to appear, such as the one proposed by Zheng et al. [16], DG-NET, which uses two encoders capable of extracting the colors and appearance of an image and transferring them to another image from which the structure of the person has been extracted. This model also uses the generative adversarial network's own architecture as a re-identification model (see Fig. 2). Some architectures can be trained to differentiate between different colors of clothing and preserve the coherence of identity of the same person with the color of clothing. Such color-aware architectures are capable of differentiating and modifying the different colors of upper clothing (sweatshirts/t-shirts) or lower clothing (pants). In 2021, Zhiqi Pang et al. [11] proposed a hybrid method, supervised and unsupervised. This method uses a novel architecture, TC-GAN, to generate labelled artificial images, transferring the person from the input image to the background of the target style image. They also proposed the use of the DFE-Net re-identification model, which uses a modified version of the ResNet-50 network, pretrained on the ImageNet database, taking as input both the unlabeled real images and the artificially generated ones. It uses the network as an image feature extractor for later comparison.

Figure 2: Style transfer. In this example of DG-NET [16], the algorithm transfers the appearance of the image on the left to all images on the right, combining appearance and structure.

## 4 Pose Transfer

One of the biggest challenges within the re-identification problem is the great variation in the posture of a person that can be seen in different cameras. To mitigate this problem, the generation of new data of the same person is proposed, modifying the posture using different architectures. In this approach, the data augmentation process is based on the generation of new data through the extraction of the person in the original image, which can be done by obtaining joints or heat maps. The pose transfer is obtained with the help of the original image and those generated maps encapsulating the posture information. In 2018, Qian et al. [12] proposed the PN-GAN architecture. It is capable of generating artificial images by means of the image of a person in eight different postures. The eight canonical postures are obtained using the k-means algorithm on the distribution of all the images in the database. To generate the template, the OpenPose [3] estimator tool is used, which is capable of detecting 18 joints of the human body and the connections between them. Through the articulation map of both images it is able to transfer the posture of each one of the eight canonical postures to the input images, as illustrated in Fig. 3. In 2019, Borgia et al. [2], following the same line as the previous architecture and also extracting the joints with the OpenPose method, proposed an architecture which, instead of evaluating a single image, evaluates video footage of a person's movement.
Eight canonical postures are predefined: three looking forward, three from behind, one in profile looking to the right side and another to the left side. First, the video sequences of a person are obtained and the corresponding images with the canonical postures are searched by means of the Euclidean distance; if any of those eight postures does not exist, the corresponding artificial image is generated. The same happens with all the people that appear in the video: their sequences are obtained and, if a canonical posture is missing, it is generated. By means of the cosine distance, the eight images corresponding to the canonical postures of the input person are compared with those of all the other people's sequences. A classification is made and the sequence with the smallest cosine distance is assigned as the same person by the re-identification model. In 2020, Zhang et al. [14] proposed PAC-GAN. It consists of two models; the first, CPG-Net, uses a conditional GAN to generate artificial images of a person captured by a camera A, converting them to the point of view of another camera B. The data is augmented by generating new images with postures from different points of view of different cameras. It trains with the joints generated with OpenPose and the image itself. More recently, in 2021, Ziyang et al. [10] proposed a new architecture which is capable of correcting images in such a way that the images of people are centered and straight. For this, the model was trained on a database in which the images with the correct positions were indicated.

Figure 3: Pose transfer. A new image of the person is generated from an input image and the skeleton and the heat map of the desired posture.

## 5 Random Generation

The third major approach consists of generating random images of persons, with different postures, lighting, colors and backgrounds (see Fig. 4). After the images have been generated, different methods are used to automatically label them. Finally, the labelled images are used for training the re-identification model. One of the main algorithms for the automatic labeling of the random images is called Label Smooth Regularization (LSR) and it was proposed in 2015 by Szegedy et al. [13]. This algorithm was originally used in an image classification problem. Later on, in 2017, Zheng et al. [17] used a Deep Convolutional GAN (DCGAN) together with the LSR algorithm to generate images and label them. They called this method Label Smooth Regularization for Outliers (LSRO), and it assigns the artificially generated images the same value in all classes, that is, a label distribution that is uniform over all classes. The same year, Ainam et al. [1] proposed the use of the k-means algorithm to group the images for the training set of a DCGAN. They also introduced Sparse Label Smoothing Regularization (SLSR), which is based on the LSRO labeling method. It labels the artificially generated images using the groups generated by k-means. Interpolation of images has also been successfully tried. Eom et al. [5] proposed a new GAN architecture called Identity Shuffle GAN (IS-GAN). In this method, artificial images are generated through the interpolation between two real images, distinguishing between the upper and the lower part. The labels of the artificial image are the same as those of the images used for the interpolation.
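A minimal sketch of the LSRO labeling rule described above: real images keep their identity labels, while GAN-generated images receive a uniform target distribution over all identities. This is an illustrative reconstruction rather than the authors' code; the number of identities and the batch contents are invented.

```python
import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated):
    """Cross-entropy for real images; for GAN-generated images (no identity label),
    LSRO assigns a uniform target distribution 1/K over all K identities."""
    log_probs = F.log_softmax(logits, dim=1)
    real_loss = F.nll_loss(log_probs[~is_generated], labels[~is_generated])
    fake_loss = -log_probs[is_generated].mean(dim=1).sum() / max(is_generated.sum().item(), 1)
    return real_loss + fake_loss

logits = torch.randn(6, 751)                      # e.g., 751 identities, as in Market-1501
labels = torch.randint(0, 751, (6,))
is_generated = torch.tensor([False, False, False, True, True, True])
print(lsro_loss(logits, labels, is_generated).item())
```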
More recently, in 2021, Hussin et al. [7] proposed the use of StyleGAN [8] for the generation of new data. First, StyleGAN is trained with some of the most popular datasets for people re-identification. Once StyleGAN has been trained, a set of new artificial images is generated. Finally, the generated images are labelled using the LSRO method.

Figure 4: Random generation. On the left, real images taken from the Market-1501 [15] database. On the right, artificial images randomly generated with StyleGAN3 [8].

## 6 Discussion

Data augmentation is a difficult task in any domain and the re-identification domain is no exception. In this article we have reviewed the most relevant approaches for data augmentation in person re-identification systems which employ generative adversarial networks. We have categorized all the reviewed methods in three groups: style transfer, pose transfer, and random generation of images. None of these approaches can be regarded as the best one. The kind of application may indicate the use of one approach over the others. For example, a re-identification system operating inside a shop with controlled light conditions might require data augmentation based on pose transfer, while a re-id application for outdoor surveillance might benefit more from style transfer models. In Table 1 we provide a summary of all the methods reviewed here. Moreover, even though good advances have been achieved so far with these approaches to data augmentation, we must keep in mind that generative adversarial networks are not trivial models to train and fine-tune. Common problems that researchers generally encounter when training GANs, such as mode collapse and training instability, are also very likely to appear when learning to augment a re-identification dataset. Mode collapse occurs when the generator network becomes less and less diverse as the number of training iterations increases. Basically, the generator improves the quality of one specific category of fake images that it produces, for example images of women, and begins to produce almost exclusively images of women. Consequently, the discriminator network begins to experience only fake images of women, and it also starts to forget how to discriminate other categories of images, for example images of men. With time, when the discriminator manages to discriminate images of women, the generator is forced to start producing another category of images, for example images of children. This problem causes the GAN to cycle around a small number of categories, never becoming good at all the required categories. The training instability problem has to do with the mere fact of training two neural networks at the same time, in a way that the good performance of one network depends on the good performance of the other. In such a training dependency, it is very easy for the whole system to get trapped in an unstable phase, in which the parameters of both networks oscillate. Very often, the training process might seem to be going very well and then, from one iteration to the next, the performance drops unexpectedly, with no clear reason.
Under these circumstances, a lot of fine-tuning is necessary in order to find an optimal set of hyperparameters.

\begin{table}
\begin{tabular}{|c|c|} \hline **GAN Model** & **Approach** \\ \hline \hline CycleGAN [20] & Style transfer \\ \hline CamStyle [18] & Style transfer \\ \hline cmGAN [4] & Style transfer \\ \hline M2M-GAN [9] & Style transfer \\ \hline DG-NET [16] & Style transfer \\ \hline TC-GAN + DFE-Net [11] & Style transfer \\ \hline \hline PN-GAN [12] & Pose transfer \\ \hline WF+WPR [2] & Pose transfer \\ \hline PAC-GAN [14] & Pose transfer \\ \hline GAN + CNN + STN [10] & Pose transfer \\ \hline \hline LSR [13] & Random generation \\ \hline DCGAN + LSR [17] & Random generation \\ \hline k-means + DCGAN + SLSR [1] & Random generation \\ \hline IS-GAN [5] & Random generation \\ \hline StyleGAN [8][7] & Random generation \\ \hline \end{tabular}
\end{table}
Table 1: Summary of GAN approaches to data augmentation in person re-identification.

## 7 Conclusion

In this article we have reviewed three of the most common approaches for data augmentation in person re-identification systems that employ generative adversarial networks: style transfer, pose transfer, and random generation of images. We have included in this review what we consider to be the most relevant and successful ways to augment image datasets for the task of re-identifying people. This is of course not an exhaustive survey, but a reference guide for researchers or engineers interested in this field. Finally, although data augmentation for re-identification systems is nowadays possible through the use of generative adversarial networks, using the approaches mentioned above, it is important to consider that this task may require a substantial amount of expertise in implementing, training and fine-tuning generative adversarial models.
2304.10936
Real-Time Implementation of Dynamic State Estimation for Microgrid Load Bus Protection
Inverter-interfaced microgrids, owing to the lack of fault current, cannot be protected using traditional over-current protections, while admittance or differential relaying protection schemes are not practical to be implemented. Dynamic state estimation can track and predict power system transients and has been extensively investigated for setting-less protection. A novel real-time application of dynamic state estimation for protection is proposed in this paper, wherein parameter estimation and parallel processing is used to identify the state of the system. The implementation scheme has low process complexity and employs a data acquisition device and estimator that run on a general-purpose computer. This proposed implementation extends the state-of-the-art, under short-circuit conditions, to a real-time implementation with a lumped-load radial microgrid and a grid-forming inverter with current-limiting behavior.
Sarbajit Basu, Arthur K. Barnes, Adam Mate, Olga Lavrova
2023-04-21T13:14:48Z
http://arxiv.org/abs/2304.10936v2
# Real-Time Implementation of Dynamic State Estimation for Microgrid Load Bus Protection

###### Abstract

Inverter-interfaced microgrids, owing to the lack of fault current, cannot be protected using traditional over-current protections, while admittance or differential relaying protection schemes are not practical to be implemented. Dynamic state estimation can track and predict power system transients and has been extensively investigated for setting-less protection. A novel real-time application of dynamic state estimation for protection is proposed in this paper, wherein parameter estimation and parallel processing is used to identify the state of the system. The implementation scheme has low process complexity and employs a data acquisition device and estimator that run on a general-purpose computer. This proposed implementation extends the state-of-the-art, under short-circuit conditions, to a real-time implementation with a lumped-load radial microgrid and a grid-forming inverter with current-limiting behavior.

power system operation, microgrid, distribution network, protection, dynamic state estimation.

## I Introduction

Owing to their smaller carbon footprint, renewable resource-based generation - often inverter-interfaced - is becoming increasingly prevalent. However, due to their rapid current-limiting behavior, these resources require novel protection schemes. Traditional protection schemes are unable to detect faults on microgrids, require complex coordination schemes among relays, and are susceptible to misoperation owing to erroneous values provided by faulty measurement devices [1, 2]. As inverter-dominated microgrids are unable to supply high currents during faulted conditions, the inverters shut down during faults. Additionally, bi-directional flow of current during switching over from grid-connected to islanded mode can make fault detection difficult [3, 4]. Dynamic state estimation (DSE), coupled with high accuracy GPS-synchronized measurements, can accurately track system dynamics and provide details about system operating characteristics.
The robustness, combined with the relatively low time complexity of DSE, has paved the way to explore its real-time applications for control and protection: DSE has been explored for real-time parameter identification to develop dynamic load models and load model behaviors [5, 6, 7, 1].

The output and state of the system are, respectively:

\[y =\begin{bmatrix}v_{a}(t)&v_{b}(t)&v_{c}(t)&i_{a}(t)&i_{b}(t)&i_{c}(t)\end{bmatrix}^{T} \tag{2a}\]

\[x =\begin{bmatrix}G_{f}&v_{ra}(t)&v_{rb}(t)&v_{rc}(t)\end{bmatrix}^{T} \tag{2b}\]

Assuming that the signals are sampled at timesteps \(n\in\{1,2,\ldots,N\}\) with sample time \(\Delta t\), the discrete-time state-output mapping function, \(h(x)\), is therefore: \(\forall n\in\{1,\ldots,N\}\)
\[h_{n}(x) =v_{a}(n)=v_{ra}(n) \tag{3}\] \[h_{n+N}(x) =v_{b}(n)=v_{rb}(n)\] \[h_{n+2N}(x) =v_{c}(n)=v_{rc}(n)\] \[h_{n+3N}(x) =i_{a}(n)=G_{f}v_{ra}(n)\] \[h_{n+4N}(x) =i_{b}(n)=G_{f}v_{rb}(n)\] \[h_{n+5N}(x) =i_{c}(n)=G_{f}v_{rc}(n)\]

The Jacobian of the system, \(\mathbf{H}\), is therefore: \(\forall n\in\{1,\ldots,N\}\)

\[\frac{\partial v_{a}(n)}{\partial v_{ra}(n)} =1\qquad\frac{\partial v_{b}(n)}{\partial v_{rb}(n)}=1\qquad\frac{\partial v_{c}(n)}{\partial v_{rc}(n)}=1 \tag{4}\] \[\frac{\partial i_{a}(n)}{\partial v_{ra}(n)} =G_{f}\qquad\frac{\partial i_{b}(n)}{\partial v_{rb}(n)}=G_{f}\qquad\frac{\partial i_{c}(n)}{\partial v_{rc}(n)}=G_{f}\] \[\frac{\partial i_{a}(n)}{\partial G_{f}} =v_{ra}(n)\qquad\frac{\partial i_{b}(n)}{\partial G_{f}}=v_{rb}(n)\qquad\frac{\partial i_{c}(n)}{\partial G_{f}}=v_{rc}(n)\]

_Note that for simplicity of notation, the indexing of individual elements of \(\mathbf{H}\) is not presented._

### _DSE Solver Formulation_

The estimation process is repeated iteratively until either the maximum number of iterations is reached or the algorithm has converged, indicated by the change in the log of the squared error falling below a specified threshold:

\[\Delta J_{i}=\left|\log\left|\epsilon_{i}^{*}\cdot\epsilon_{i}\right|-\log\left|\epsilon_{i-1}^{*}\cdot\epsilon_{i-1}\right|\right| \tag{5}\]

The measurement error test is performed as follows:

\[p=F_{m-n}(J_{i})\geq 0.95 \tag{6}\]

where \(F_{m-n}\) is the chi-squared cumulative distribution function for \(m-n\) degrees of freedom, in which \(m-n\) is the number of linearly independent observables.

## III Real-Time Implementation

The algorithm presented below was implemented as a collection of programs in an interpreted language: separate programs to estimate each load model were developed that run in parallel; these were executed on a general-purpose computer running a POSIX operating system. The emphasis of the implementation was on developing a system of communication that closely resembles industry standards.

### _Communication of Data_

The load models were simulated offline and the measurements for different operational conditions were stored in a comma-separated value (CSV) file. A program was written in the Go language to emulate the behavior of an analog-digital converter (ADC); this program reads data from the CSV files and directs them to the STDIN channel of the orchestrator program, pausing after each line for a time equal to the sample period. The measurements include time-stamped voltage and current values, with data being sent line by line from the CSV.

### _Orchestrator_

The orchestrator program receives and re-directs the measurements. Load models, corresponding to different operational conditions, have been developed in the C language. The orchestrator scans all the measurements from STDIN and directs them to the different models, which are then executed in parallel as separate processes. The communication is achieved by using two-way channels via Unix pipes between the orchestrator and each model.
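To make the per-model estimator concrete, the following is a minimal Python sketch, illustrative only and not the authors' Go/C implementation, of a least-squares fit of the constant-conductance fault model of Eqs. (2)-(4), with the convergence test of Eq. (5) and the chi-squared confidence of Eq. (6). It assumes that the residuals are already normalized by their measurement standard deviations; all names are ours.

```python
import numpy as np
from scipy.stats import chi2

def estimate_fault_model(v_abc, i_abc, max_iter=20, tol=1e-6):
    """Gauss-Newton fit of the fault-conductance model i_p(n) = G_f * v_rp(n),
    v_p(n) = v_rp(n).  v_abc, i_abc: arrays of shape (N, 3) with sampled phase
    voltages/currents.  Returns (G_f, chi-squared confidence of Eq. (6))."""
    N = v_abc.shape[0]
    m, n_states = 6 * N, 1 + 3 * N          # measurements; states [G_f, v_ra, v_rb, v_rc]
    v_r = v_abc.copy()                       # initial guess: retained voltages = measured
    G_f = np.sum(i_abc * v_abc) / max(np.sum(v_abc ** 2), 1e-12)
    y = np.concatenate([v_abc.T.ravel(), i_abc.T.ravel()])   # stacked measurements, Eq. (2a)
    J_prev = None
    for _ in range(max_iter):
        h = np.concatenate([v_r.T.ravel(), (G_f * v_r).T.ravel()])   # Eq. (3)
        eps = y - h                                                   # residual (pre-normalized)
        H = np.zeros((m, n_states))                                   # Jacobian, Eq. (4)
        H[3 * N:, 0] = v_r.T.ravel()                                  # d i / d G_f
        for p in range(3):
            cols = slice(1 + p * N, 1 + (p + 1) * N)
            H[p * N:(p + 1) * N, cols] = np.eye(N)                    # d v_p / d v_rp = 1
            H[(3 + p) * N:(4 + p) * N, cols] = G_f * np.eye(N)        # d i_p / d v_rp = G_f
        dx, *_ = np.linalg.lstsq(H, eps, rcond=None)                  # Gauss-Newton step
        G_f += dx[0]
        v_r += dx[1:].reshape(3, N).T
        J = float(eps @ eps)
        if J_prev is not None and abs(np.log(J) - np.log(J_prev)) < tol:   # Eq. (5)
            break
        J_prev = J
    return G_f, chi2.cdf(J, df=m - n_states)                          # Eq. (6)
```

One such estimator per candidate operating mode can then be run as a separate process and polled by the orchestrator, exactly as in Algorithms 1 and 2 below.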
Once each model has run, the chi-squared confidence and estimated parameters are returned to the orchestrator, which then scans all values and selects the model with the highest confidence; after this is completed, the corresponding protective action is recommended and instructions are sent to the breakers to protect the inverter and load.

Fig. 1: Grounded-wye-connected RL load with a three-phase fault.

Fig. 2: Test system used for implementation.

```
Initialize BUFFER based on window size
while reading end of PIPE is non-empty do
    Read measurements from PIPE
    Write measurements to writing end of BUFFER
    if BUFFER is full then
        Run estimator
        Calculate confidence
        Write confidence to reading end of PIPE
        Remove measurements for oldest timestamp from BUFFER
```
**Algorithm 1** Model Implementation

```
Spawn subprocesses for each model
Create PIPEs for each model
while STDIN is non-empty do
    Read measurements from STDIN
    foreach model[i] do
        Write measurements to writing end of PIPE[i]
    Read confidence for each model from reading end of PIPE[i]
    Identify model with highest confidence
    if new chosen model = old chosen model then
        N ← N + 1
    else
        N ← 0
    if N > Hysteresis Samples then
        Apply protection settings for model with highest confidence
    else
        Apply protection settings for previously chosen model
```
**Algorithm 2** Orchestrator Implementation

## IV Case Study and Experiments

To verify the implementation of the proposed method and demonstrate the use of DSE in efficiently providing setting-less protection for microgrids, a system with a balanced grounded-wye constant-impedance load was considered; Table I summarizes the details of the system. The simulation was done in MATLAB/Simulink, using the Specialized Power Systems library on 64-bit MATLAB R2019b [23]. The load itself was connected to an inverter rated at 480 \(V_{rms}\) line-line, through 1000 ft of 1/0 AWG quadruplex overhead service drop cable, as demonstrated in Fig. 2. The modes of operation investigated for the three-phase system were the following: 1) Unfaulted; 2) Phase A line-ground fault; 3) Phase B line-ground fault; 4) Phase C line-ground fault; 5) Phases A-B line-line fault; 6) Phases B-C line-line fault; 7) Phases C-A line-line fault; and 8) Three-phase fault. DSEs were devised corresponding to the modes of operation and run in parallel; the current state of the system was indicated by the DSE with the lowest error. The fault impedance for each line-ground fault case was 15 m\(\Omega\) and the total fault impedance for each line-line fault case was 10 m\(\Omega\); for the three-phase fault, the effective fault impedance was 15 m\(\Omega\). Three different cases (Case I, Case II, and Case III) were devised, as elaborated in Table II; their corresponding voltage and current variations are demonstrated in Figs. 3-5. Each case started under the Unfaulted condition and then went into a faulted condition. After that, the orchestrator was tasked with identifying the estimator model(s) that correspond to the different time stamps associated with each case, thereby predicting the operational conditions of the system.

## V Simulation Results

Results verify the orchestrator's ability to identify different operating conditions, as illustrated in Fig. 7, Fig. 9, and Fig. 11.
The chi-squared statistics, illustrated in Fig. 6, Fig. 8, and Fig. 10, were always considerably higher for the correct model than for the other models, thereby allowing easy distinction; therefore, the orchestrator was able to identify the right configuration and implement the corresponding protection settings. For the DSE implementation, a sliding-window approach was used, as shown in Fig. 12, with 5 measurement samples considered for analysis to identify the system status; the samples were collected every 500 \(\mu\)s, comparable to that of a PMU. A hysteresis of 5 samples was used to offset erroneous model identifications during state transitions. In Case I and Case II, the chi-squared confidences of the relevant fault models were significantly higher than the others after the orchestrator processed measurements from post-fault conditions, i.e., 2.5 ms (5 samples) post-fault, which is comparably faster than a conventional relay trip scheme. The orchestrator, however, chose the correct fault model 9 samples later, owing to the hysteresis implementation. There were no errors in the orchestrator's choice once all measurements corresponded to faulted conditions. In Case III, however, the implementation became complicated, since during the fault conditions several other models reported fairly high values of chi-squared confidence. This took a toll on the orchestrator implementation and the model took significantly longer to identify the correct fault model; this was a delay of about 20 ms (38 samples). These spikes in chi-squared confidence for incorrect models during the faulted conditions were of short duration, and once the orchestrator identified the correct model, the hysteresis prevented these false positives from having an impact. Lastly, when the transition between Unfaulted and faulted conditions passed through the sliding window, there was a blackout period wherein the chi-squared confidences were small for all the models, thereby causing the orchestrator to not make a clear choice; the solution to this remains to be addressed.

Fig. 4: Plot of voltage and current for Case B.

Fig. 5: Plot of voltage and current for Case C.

Fig. 6: Chi-squared confidence for all fault models during Case I.

Fig. 7: Fault models chosen by orchestrator during Case I.

Fig. 8: Chi-squared confidence for all fault models during Case II.

Fig. 9: Fault models chosen by orchestrator during Case II.

Fig. 10: Chi-squared confidence for all fault models during Case III.

## VI Conclusions

This paper investigated the implementation of a real-time application of DSE for the protection of inverter-interfaced microgrids. The implementation, based on parameter estimation to determine the system state, examined parallel processing of multiple models; DSE was able to identify the correct load configurations for measurements corresponding to normal operation and to different modes of faulted operation. The number of samples considered, along with the hysteresis, should also impact the choice of model and should potentially solve the experienced blackout issue. A delay in the availability of measurements, caused by channel delays, would impact the overall implementation; the orchestrator needs to be modified to address this. The impact of the ratio of the number of measurements to the number of states, and the influence of the window size on estimation efficiency, is yet to be addressed. To reduce model complexity, only balanced lumped radial loads were considered; further work is to be performed to investigate the impact of load imbalance on the quality of estimates.
The time and computational complexity of running several models in parallel also require discussion, especially if measurements are received from a PMU and the inverter feeds a critical load. Finally, the impact of noise on the orchestrator's performance has to be explored.
2308.08274
Level crossings of fractional Brownian motion
Since the classical work of L\'evy, it is known that the local time of Brownian motion can be characterized through the limit of level crossings. While subsequent extensions of this characterization have primarily focused on Markovian or martingale settings, this work presents a highly anticipated extension to fractional Brownian motion -- a prominent non-Markovian and non-martingale process. Our result is viewed as a fractional analogue of Chacon et al. (1981). Consequently, it provides a global path-by-path construction of fractional Brownian local time. Due to the absence of conventional probabilistic tools in the fractional setting, our approach utilizes completely different argument with a flavor of the subadditive ergodic theorem, combined with the shifted stochastic sewing lemma recently obtained in Matsuda and Perkowski (22, arXiv:2206.01686). Furthermore, we prove an almost-sure convergence of the (1/H)-th variation of fractional Brownian motion with the Hurst parameter H, along random partitions defined by level crossings, called Lebesgue partitions. This result raises an interesting conjecture on the limit, which seems to capture non-Markovian nature of fractional Brownian motion.
Purba Das, Rafał Łochowski, Toyomu Matsuda, Nicolas Perkowski
2023-08-16T10:30:24Z
http://arxiv.org/abs/2308.08274v1
# Level crossings of fractional Brownian motion ###### Abstract Since the classical work of Levy, it is known that the local time of Brownian motion can be characterized through the limit of level crossings. While subsequent extensions of this characterization have primarily focused on Markovian or martingale settings, this work presents a highly anticipated extension to fractional Brownian motion -- a prominent non-Markovian and non-martingale process. Our result is viewed as a fractional analogue of Chacon et al. [1]. Consequently, it provides a global path-by-path construction of fractional Brownian local time. Due to the absence of conventional probabilistic tools in the fractional setting, our approach utilizes completely different argument with a flavor of the subadditive ergodic theorem, combined with the shifted stochastic sewing lemma recently obtained in [12]. Furthermore, we prove an almost-sure convergence of the (\(1/H\))-th variation of fractional Brownian motion with the Hurst parameter \(H\), along random partitions defined by level crossings, called Lebesgue partitions. This result raises an interesting conjecture on the limit, which seems to capture non-Markovian nature of fractional Brownian motion. _Keywords and phrases._ Fractional Brownian motion, level crossings, local time, (\(1/H\))-th variation, Lebesgue partition, stochastic sewing lemma. _MSC 2020._ 60G22, 60J55. ###### Contents * 1 Introduction * 1.1 Main results * 1.2 Conjecture * 1.3 Strategy of the proofs * 2 Variation along uniform Lebesgue partitions * 2.1 Elementary results * 2.2 The shifted stochastic sewing lemma * 2.3 Convergence of variation * 3 Local time via level crossings * 3.1 Heuristics * 3.2 Convergence to local time * 3.3 Uniform convergence * 3.4 Horizontally rough function * A An estimate on log-normal distribution Introduction Level crossings of stochastic processes have been studied since the classical works of Kac [14] and Rice [15]. Depending on whether the process is smooth or rough, the study of its level crossings rely on different methods. As for the smooth case, which is not the scope of this article, the reader can refer to the survey article [13], the textbook [1] and the reference therein. By far the most prominent example of a rough stochastic process is a Brownian motion. The first work on level crossings of Brownian motion is attributed to Levy [14], who characterized its local time as a limit of normalised numbers of level crossings. More precisely, for a given process \(w\), we set \[U_{s,t}(\varepsilon,w):=\#\big{\{}(u,v):s\leq u<v\leq t,\ w_{u}=0,w_{v}= \varepsilon,\forall r\in(u,v)\ w_{r}\in(0,\varepsilon)\big{\}},\] as illustrated in Figure 1. The quantity \(U_{s,t}(\varepsilon,w)\) counts the number of upcrossings from \(0\) to \(\varepsilon\) in the interval \([s,t]\). For Brownian motion \(W\) and \(a\in\mathbb{R}\), we have \[\lim_{\varepsilon\to 0}\varepsilon U_{0,t}(\varepsilon,W-a)=\frac{1}{2}L_{t}^{W}(a)\] almost surely, where \(L_{t}^{W}(a)\) is the local time of \(W\) at time \(t\) and at level \(a\). The local time is defined as the density of the occupation measure, see Definition 1.2 below. This result can be found in standard textbooks such as [11, 12, 13], and it can be generalized for semimartingales [15] and for Markov processes [16]. On the other hand, there exist rough stochastic processes that do not fall under the category of semimartingales or Markovian processes. 
One such example is the _fractional Brownian motion_\(B^{H}\), which is a Gaussian process parameterized by Hurst index \(H\in(0,1)\). Precisely, \(B^{H}\) is neither a semimartingale nor Markovian for \(H\neq 1/2\), and it becomes Brownian motion when \(H=1/2\). Fractional Brownian motion possesses a local time (defined as in Definition 1.2). Given Levy's result on the local time of Brownian motion, a natural question arises regarding whether a similar result holds for the local time of fractional Brownian motion. However, a complete answer to this question has not been obtained thus far. This is surprising considering the age of Levy's result and that of fractional Brownian motion. Some works have explored the characterization of the fractional Brownian local time in relation to level crossings. For instance, the works [1, 1] demonstrate that the number of zeros for certain smoothed fractional Brownian motions converges, in a suitable sense, to the local time. Furthermore, level crossings of stochastic processes have received attention in the context of pathwise stochastic calculus [17, 18, 19, 20, 21, 16], as well as in some applied literatures [13, 14]. Constructing the local time via level crossings is not only a natural problem, but also it can lead to a significant implication on the path properties of the process. This was first observed by Chacon et al. [1] built on [17]. Therein proven is the existence of a measurable set \(\Omega_{W}\) such that Figure 1: Graphical illustration of \(U_{s,t}(\varepsilon,w)\). In the picture, \(U_{s,t}(\varepsilon,w)=2\). \(\mathbb{P}(W\in\Omega_{W})=1\) (recall that \(W\) is Brownian motion) and for every \(w\in\Omega_{W}\), \(a\in\mathbb{R}\) and \(t\geq 0\), the limit \[\lim_{\varepsilon\to 0}\varepsilon U_{0,t}(\varepsilon,w-a) \tag{1}\] exists and is equal to one-half times the local time of \(w\) at the level \(a\). Hence, the existence of the limit of level crossings is a path property, and this provides a path-by-path construction of Brownian local time. Furthermore, the construction is global in that the convergence in (1) holds for all \(a\in\mathbb{R}\) off a single null set. This result explains why such construction of the local time receives attention in the pathwise stochastic calculus. It is worth noting that the master thesis [10] of Lemieux extended the result for a large class of semimartingales. The result of Chacon et al. has a remarkable consequence on _pathwise quadratic variation_, calculated as a limit of sums of square increments, where the increments are taken along partitions of a fixed interval with vanishing mesh. The precise definition of the pathwise quadratic variation is as follows: given a sequence \(\boldsymbol{\pi}\) of partitions \(\pi_{n}\) (\(n\in\mathbb{N}\)) with vanishing mesh, the pathwise quadratic variation \([w]_{\boldsymbol{\pi}}\) of a process \(w\) is defined by \[[w]_{\boldsymbol{\pi}}:=\lim_{n\to\infty}\sum_{[s,t]\in\pi_{n}}|w_{t}-w_{s}|^{2}\] whenever the limit exists. A deterministic partition is a partition that does not depend on the path/process. The classical works [11, 12] of Levy show that for any refining deterministic partition sequence \(\boldsymbol{\pi}\) of \([0,t]\) with vanishing mesh we have \[\mathbb{P}([W]_{\boldsymbol{\pi}}=t)=1. \tag{2}\] Dudley [14] proved that if the deterministic sequence \(\boldsymbol{\pi}=(\pi_{n})\) satisfies \[|\pi_{n}|:=\max_{[s,t]\in\pi_{n}}|t-s|=o(1/\log n) \tag{3}\] then (2) holds, with the optimality of the condition (3) being shown as well. 
In general, the pathwise quadratic variation (even when it exists) may depend on the choice of a sequence of partitions [13, page 47], [15]. Furthermore, the null set in Levy's and Dudley's works depends on the sequence \(\boldsymbol{\pi}\) of partitions. Hence, an obvious question is if, given a stochastic process \(X\), there is any uncountable class \(\mathrm{P}\) of partition sequences such that almost surely for any \(\boldsymbol{\pi},\boldsymbol{\pi}^{\prime}\in\mathrm{P}\) we have \([X]_{\boldsymbol{\pi}}=[X]_{\boldsymbol{\pi}^{\prime}}\). Here comes the result of Chacon et al., which proves that Brownian motion has a measure zero set outside which any quadratic variation along any sequence of the _Lebesgue partitions_ (defined at the beginning of Section 1.1) of \([0,t]\) with vanishing mesh is equal to \(t\). We remark that, unlike in Dudley's result, there is no condition on the decay of meshes of partitions and the null set is uniform over all sequences of the Lebesgue partitions. In this paper, we extend Levy's construction of the local time and the result of Chacon et al. for fractional Brownian motions with the Hurst parameter \(H<1/2\). ### Main results We write \(B^{H}\) for a fractional Brownian motion with \(B^{H}_{0}=0\) and with the Hurst parameter \(H\in(0,1)\). Specifically, \(B^{H}\) is a one-dimensional centered Gaussian process satisfying \(B^{H}_{0}=0\) and \[\mathbb{E}[(B^{H}_{t}-B^{H}_{s})^{2}]=a_{H}(t-s)^{2H}.\] We choose the constant \(a_{H}\) in a manner that the identity (18) below holds. The specific value of \(a_{H}\) is not important for our study. To introduce the Lebesgue partitions, let \(\mathrm{P}\) be a partition of the space \(\mathbb{R}\). That is, we have a strictly increasing sequence \((y_{n})_{n\in\mathbb{Z}}\) of real numbers such that \[\lim_{n\to-\infty}y_{n}=-\infty,\quad\lim_{n\to+\infty}y_{n}=+\infty,\] and we have \(\mathrm{P}=\{[y_{n-1},y_{n}]:n\in\mathbb{Z}\}\). Let \(\Lambda(\mathrm{P})\) be the set of all endpoints of intervals from \(\mathrm{P}\), or \(\Lambda(\mathrm{P})=\{y_{n}:n\in\mathbb{Z}\}\). Given a path \(w:[0,\infty)\to\mathbb{R}\), we set \(T_{0}(\mathrm{P},w):=0\) and recursively define \[T_{n}(\mathrm{P},w):=\inf\{t>T_{n-1}(\mathrm{P},w):w_{t}\in\Lambda(\mathrm{P}) \setminus\{w_{T_{n-1}(\mathrm{P},w)}\}\} \tag{4}\] with \(\inf\varnothing=+\infty\). (If \(T_{n-1}=+\infty\), we set \(T_{n}:=+\infty\).) Note that we do not assume \(w_{0}=0\). The partition given by \[\{[T_{n-1}(\mathrm{P},w),T_{n}(\mathrm{P},w)]:n\in\mathbb{N},T_{n}(\mathrm{P},w)\leq t\} \tag{5}\] is called a _Lebesgue partition_. See Figure 2 for a graphical illustration. Lebesgue partition is also called partition along \(y\)-axis. For a uniform partition \(\mathrm{P}^{\varepsilon}:=\{[\varepsilon k,\varepsilon(k+1)]:k\in\mathbb{Z}\}\), we simply write \(T_{n}(\varepsilon,w):=T_{n}(\mathrm{P}^{\varepsilon},w)\). We denote by \(K_{s,t}(\varepsilon,w)\) the number of \(\varepsilon\)-level crossings in the interval \([s,t]\), defined precisely as1 Footnote 1: Our convention is \(\mathbb{N}=\{1,2,3,\ldots\}\). In particular \(0\notin\mathbb{N}\). \[K_{s,t}(\varepsilon,w):=\#\{n\in\mathbb{N}\setminus\{1\}:T_{n}(\varepsilon,w_{ s+.})\leq t\}+\mathbf{1}_{\{w_{\varepsilon}\in\mathbb{Z}\}}\mathbf{1}_{\{T_{1}( \varepsilon,w_{s+.})\leq t\}}, \tag{6}\] where \(\#A\) represents the cardinality of the set \(A\). 
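Readers who wish to experiment numerically with the objects defined below can sample \(B^{H}\) on a grid in a few lines. The following sketch is purely illustrative: it uses the normalization \(a_{H}=1\) (which differs from the constant fixed via (18)) and a Cholesky factorization of the covariance, so it is quadratic in memory and only practical for moderate grid sizes; a circulant-embedding (Davies-Harte) sampler would be preferable for grids as fine as those of Remark 1.7 below.

```python
import numpy as np

def fbm_path(H, T, n, seed=0):
    """Sample fractional Brownian motion B^H at times k*T/n, k = 0..n,
    via Cholesky factorization of its covariance (normalization a_H = 1)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    # Cov(B_s, B_t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2 for a_H = 1.
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    values = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate([[0.0], t]), np.concatenate([[0.0], values])
```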
We observe \[\sum_{n:T_{n}(\varepsilon,w)\leq t}|w_{T_{n}(\varepsilon,w)}-w_{T_{n-1}( \varepsilon,w)}|^{\frac{1}{H}}=\varepsilon^{1/H}K_{0,t}(\varepsilon,w)+ \mathbf{1}_{\{w_{0}\notin\mathbb{Z}\}}|w_{T_{1}(\varepsilon,w)}-w_{0}|^{\frac {1}{H}}.\] Note that \(|w_{T_{1}(\varepsilon,w)}-w_{0}|\leq\varepsilon\). Therefore, the study of the (\(1/H\))-th variation along a sequence of uniform Lebesgue partitions is equivalent to that of limiting behavior of \(K_{0,t}(\varepsilon,w)\) as \(\varepsilon\to 0\). For \(\rho\in\mathbb{R}\) and a process \(w\), the process \(w+\rho\) is defined by \((w+\rho)_{t}:=w_{t}+\rho\). Our first main result is on the (\(1/H\))-th variation of fractional Brownian motion along uniform Lebesgue partitions of fractional Brownian motion \(B^{H}\). **Theorem 1.1** (Convergence of (\(1/H\))-th variation along uniform Lebesgue partitions).: _For all \(H\in(0,1)\), there exists a positive finite constant \(\mathfrak{c}_{H}\) with the following property. Let \(\rho\in\mathbb{R}\), \(t\in(0,\infty)\) and \((\varepsilon_{n})_{n=1}^{\infty}\) be a sequence of positive numbers such that \(\varepsilon_{n}=O(n^{-\eta})\) for some \(\eta>0\). We then have_ \[\lim_{n\to\infty}\varepsilon_{n}^{1/H}K_{0,t}(\varepsilon_{n},B^{H}+\rho)= \mathfrak{c}_{H}t\quad\text{almost surely.} \tag{7}\] The proof of Theorem 1.1 will be given at the end of Section 2.3. A more explicit representation of the constant \(\mathfrak{c}_{H}\) is given by (21). For \(H<1/2\), we can remove the condition \(\varepsilon_{n}=O(n^{-\eta})\), see Theorem 1.4 below. Theorem 1.1 concerns level crossings at all levels. We can also consider level crossings at a specific level, leading to Levy's classical work on local time [10]. For each \(\varepsilon\in(0,1)\) and Figure 2: Lebesgue partition \(w\in C([0,T];\mathbb{R})\), we consider the number of upcrossings by \(w\) the space interval \([0,\varepsilon]\) between times \(s\) and \(t\) by setting \[U_{s,t}(\varepsilon,w):=\#\big{\{}(u,v)\in[s,t]^{2}:u<v,w_{u}=0,w_{v}= \varepsilon,\forall r\in(u,v)\;w_{r}\in(0,\varepsilon)\big{\}}. \tag{8}\] See Figure 1 for illustration. In the case of Brownian motion (\(H=1/2\)), it is well-known (e.g., [13, Section 2.2], [20, Chapter VI], [21, Section 6]) that provided \(\varepsilon_{n}\to 0\) and \(\varepsilon_{n}>0\), we have \[\lim_{n\to\infty}\varepsilon_{n}U_{0,t}(\varepsilon_{n},B^{1/2} -a)=\frac{1}{2}L_{t}^{1/2}(a)\quad\text{almost surely,} \tag{9}\] where \(L^{1/2}\) is the local time of Brownian motion \(B^{1/2}\). This representation of the local time was extended for semimartingales by Karoui [10]. We recall that the local time of \(B^{H}\), defined just below, exists for all \(H\in(0,1)\). **Definition 1.2** (Local time).: We denote by \((L_{t}^{H}(a))_{t\geq 0,a\in\mathbb{R}}\) the _local time_ of \(B^{H}\) with \(t\) representing time and \(a\) representing level. More precisely, \(L^{H}\) is a unique random field satisfying the following occupation density formula: \[\int_{0}^{t}f(B_{r}^{H})\mathrm{d}r=\int_{\mathbb{R}}f(a)L_{t}^{H }(a)\mathrm{d}a,\quad\forall t\geq 0,\forall f\in C(\mathbb{R}).\] As for the existence of \(L^{H}\) and its continuity, see e.g., [1, 1]. The following is an extension of Levy's result to fractional Brownian motion. **Theorem 1.3** (Local time via level crossings).: _Let \(H<1/2\), \(a\in\mathbb{R}\), \(t\in(0,\infty)\) and \((\varepsilon_{n})_{n=1}^{\infty}\) be a sequence of positive numbers such that \(\varepsilon_{n}\to 0\). 
We then have_ \[\lim_{n\to\infty}\varepsilon_{n}^{\frac{1}{H}-1}U_{0,t}( \varepsilon_{n},B^{H}-a)=\frac{\mathfrak{c}_{H}}{2}L_{t}^{H}(a)\quad\text{ almost surely,} \tag{10}\] _where the constant \(\mathfrak{c}_{H}\) is the same as in Theorem 1.1 and in Equation (21)._ In principle, the null set of (9) or (10) could potentially depend on the level \(a\) and the sequence \((\varepsilon_{n})_{n=1}^{\infty}\). However, an intriguing result by Chacon et al. [1] demonstrates that for Brownian motion the null set can be chosen uniformly over the level \(a\) and the sequence \((\varepsilon_{n})_{n=1}^{\infty}\). In the next theorem, we extend this result to fractional Brownian motion. Before stating the theorem, we introduce some notation. Analogously to \(U_{s,t}\), we denote by \(D_{s,t}\) the total number of downcrossings \[D_{s,t}(\varepsilon,w):=\#\big{\{}(u,v)\in[s,t]^{2}:u<v,w_{v}=0,w_ {u}=\varepsilon,\forall r\in(u,v)\;w_{r}\in(0,\varepsilon)\big{\}}. \tag{11}\] Given a partition \(\mathrm{P}\) of \(\mathbb{R}\), we set \[V_{s,t}(\mathrm{P},w):=\sum_{[a,b]\in\mathrm{P}}(b-a)^{\frac{1}{H }}\left(U_{s,t}(b-a,w-a)+D_{s,t}(b-a,w-a)\right). \tag{12}\] The quantity \(V_{s,t}(\mathrm{P},w)\) measures the \((1/H)\)-th variation along the Lebesgue partition defined by \(\mathrm{P}\). For the uniform partition \(\mathrm{P}^{\varepsilon}\), we have \[V_{0,t}(\mathrm{P}^{\varepsilon},w)=\varepsilon^{1/H}K_{0,t}( \varepsilon,w)+\mathbf{1}_{\{w_{0}\notin\mathbb{Z}\}}|w_{T_{1}(\varepsilon,w )}-w_{0}|^{\frac{1}{H}}.\] **Theorem 1.4** (Fractional analogue of Chacon et al.).: _Let \(H<1/2\) and let \(\mathfrak{c}_{H}\) be the constant of Theorem 1.1. Then, there exists a measurable set \(\Omega_{H}\subseteq C([0,\infty);\mathbb{R})\) with the following property._ * \(\mathbb{P}(B^{H}\in\Omega_{H})=1\) * _For every_ \(w\in\Omega_{H}\) _and_ \(T\in(0,\infty)\)_, we have_ \[\lim_{\varepsilon\to 0,\varepsilon>0}\sup_{\begin{subarray}{c}\mathrm{P}:| \mathrm{P}|\leq\varepsilon,\\ t\leq T\end{subarray}}|V_{0,t}(\mathrm{P},w)-\mathfrak{c}_{H}t|=0.\] * _For every_ \(w\in\Omega_{H}\)_, there exists a continuous map_ \[[0,\infty)\times\mathbb{R}\ni(t,a)\mapsto l_{t}(w,a)\in[0,\infty)\] _such that for every_ \(T\in(0,\infty)\) _we have_ \[\lim_{\varepsilon\to 0,\varepsilon>0}\sup_{a\in\mathbb{R},t\leq T \mid}\lvert\varepsilon^{\frac{1}{H}-1}U_{0,t}(\varepsilon,w-a)-\frac{\mathfrak{ c}_{H}}{2}l_{t}(w,a)\rvert=0.\] _Furthermore, the occupation density formula holds:_ \[\int_{0}^{t}f(w_{r})\mathrm{d}r=\int_{\mathbb{R}}f(a)l_{t}(w,a) \mathrm{d}a,\quad\forall t\geq 0,\forall f\in C(\mathbb{R}).\] Proof.: It follows from Theorem 3.23 and Theorem 3.24. In [11] the corresponding result for Brownian motion is interpreted as a _quadratic arc length_. By analogy, we could say that the fractional Brownian motion of Hurst index \(H<1/2\) has \(1/H\)_arc length_\(\mathfrak{c}_{H}t\). Note that the \(1/H\) arc length is a purely geometric, path-dependent quantity which is invariant under translations and time-reparametrizations. **Remark 1.5**.: In [10] the following quantitative estimate was proved. There exists an \(\varepsilon\in(0,1)\) such that for every \(m\in[1,\infty)\) we have \[\Big{\|}\sum_{[u,v]\in\pi}|B_{v}^{H}-B_{u}^{H}|^{\frac{1}{H}}- \mathbb{E}\big{[}|B_{1}^{H}|^{\frac{1}{H}}\big{]}t\Big{\|}_{L^{m}(\mathbb{P})} \lesssim_{m,t}|\pi|^{\varepsilon}\] for any deterministic partition \(\pi\) of \([0,t]\). 
Therefore, by the Borel-Cantelli lemma, for any sequence \((\pi_{n})_{n=1}^{\infty}\) of deterministic partitions with \[|\pi_{n}|=O(n^{-\delta}),\quad\delta\in(0,\infty), \tag{13}\] we have \(\lim_{n\to\infty}\sum_{[u,v]\in\pi^{n}}|B_{v}^{H}-B_{u}^{H}|^{\frac{1}{H}}= \mathbb{E}[|B_{1}^{H}|^{\frac{1}{H}}]t\) almost surely. Unlike Theorem 1.4, we need the decaying condition (13). The work [10] shows that the condition (13) is not optimal for the Brownian case; finding the optimal condition for the fractional case seems open. ### Conjecture There is an interesting aspect on the constant \(\mathfrak{c}_{H}\). For Brownian motion, the quadratic variation along any deterministic partition almost surely matches with the quadratic variation along any Lebesgue partitions. That is, \[\mathfrak{c}_{\frac{1}{2}}=\mathbb{E}[(B_{1}^{1/2})^{2}]. \tag{14}\] It is tempting to guess that such relation holds for \(H\neq 1/2\) as well. Indeed, such conjecture is stated in [14, after Lemma 3.5]. However, the identity (14) is due to the strong Markov property of Brownian motion. Therefore, for \(H\neq 1/2\), there is no reason why \(\mathfrak{c}_{H}\) and \(\mathbb{E}[|B_{1}^{H}|^{\frac{1}{H}}]\) should be equal. Motivated by the simulation shown in Figures 3, we propose the following conjecture. **Conjecture 1.6** (The constant \(\mathfrak{c}_{H}\)).: For fractional Brownian motion with \(H\neq 1/2\), we conjecture that the \((1/H)\)-th variation of fractional Brownian motion along deterministic partitions differs from the \((1/H)\)-th variation of fractional Brownian motion along uniform Lebesgue partitions. To be more precise, we conjecture \[\begin{cases}\mathfrak{c}_{H}>\mathbb{E}[|B_{1}^{H}|^{1/H}]&\text{if }H<1/2,\\ \mathfrak{c}_{H}<\mathbb{E}[|B_{1}^{H}|^{1/H}]&\text{if }H>1/2.\end{cases}\] If this is indeed the case, the constant \(\mathfrak{c}_{H}\) captures non-Markovian nature of fractional Brownian motion. **Remark 1.7**.: We comment on the simulation for Figure 3. The variation, denoted as \(V_{t}\), is given by \[V_{t}:=\sum_{[u,v]\in\pi^{\#},v\leq t}|B_{v}^{H}-B_{u}^{H}|^{\frac{1}{H}},\] up to time \(T\), where \(\#\in\{\text{deterministic},\text{Lebesgue}\}\). We discretize the fractional Brownian motion with a step size of \(T/n\). For the deterministic partition, we use the partition \(\pi^{\text{deterministic}}=\{kT/n\}_{k=1}^{n}\), while for the Lebesgue partition, we utilize the partition \(\pi^{\text{Lebesgue}}=\{T_{k}(\varepsilon,B)\}_{k}\). To simulate the variation \(V\) along a Lebesgue partition, it is crucial to appropriately choose the parameters \(T\), \(n\) and \(\varepsilon\) to ensure that the partition is neither too dense nor too sparse. For instance, when \(H=0.4\), we simulate \(V\) with \(T=0.1\), \(n=30000\), and \(\varepsilon=0.015\). Similarly, for \(H=0.6\), we simulate \(V\) with \(T=2\), \(n=30000\), and \(\varepsilon=0.013\). Figure 4 represents a similar figure for Brownian motion (i.e. \(H=1/2\)). The figure is generated with \(T=1,n=30000,\varepsilon=0.014\). The figure supports the fact that for a Brownian motion the quadratic variation along deterministic partitions is same as that along the Lebesgue partitions. Figure 3: Comparison between the variation along a deterministic uniform partition and that along a Lebesgue partition. ### Strategy of the proofs Let us outline our strategy to prove Theorem 1.1. The first observation is that it is easier to deal with an averaged version of \(K\). 
Namely, we define \[\bar{K}_{s,t}(\varepsilon,w):=\varepsilon^{-1}\int_{-\varepsilon/2}^{\varepsilon/ 2}K_{s,t}(\varepsilon,w+\rho)\mathrm{d}\rho.\] The quantity \(\bar{K}\) is related to _truncated variation_, see Remark 2.5 below. By scaling (Lemma 2.6), we have \[\bar{K}_{0,1}(\varepsilon,B^{H})=\bar{K}_{0,\varepsilon^{-1/H}}(1,B^{H})\quad \text{in law.}\] Therefore, at least in law, the limiting behavior of \(\bar{K}_{0,1}(\varepsilon,B^{H})\) as \(\varepsilon\to 0\) is the same as that of \(\bar{K}_{0,\varepsilon^{-1/H}}(1,B^{H})\). Rewriting \(T=\varepsilon^{-1/H}\), we thus hope to see the limit of \(T^{-1}\bar{K}_{0,T}(1,B^{H})\). Regarding this, we are actually in the setting of the superadditive ergodic theorem. Indeed, it is not difficult to see that the family \((\bar{K}_{s,t}(1,B^{H}))_{s<t}\) is stationary and superadditive (Lemma 2.6). Therefore, the limit \[\lim_{T\to\infty}\frac{1}{T}\bar{K}_{0,T}(\varepsilon,B^{H})=\sup_{T>0}\frac{ 1}{T}\mathbb{E}[\bar{K}_{0,T}(\varepsilon,B^{H})]\] exists almost surely and in \(L^{1}(\mathbb{P})\). It turns out that the limit is the constant \(\mathfrak{c}_{H}\) in Theorem 1.1. To take advantage of this finding on \(\bar{K}\), we can naively guess that \[|K_{0,1}(\varepsilon,B^{H}+\rho)-K_{0,1}(\varepsilon,B^{H}+\rho^{\prime})| \tag{15}\] is small provided that \(|\rho-\rho^{\prime}|\) is small. It is therefore expected to have \[\lim_{\varepsilon\to 0}\varepsilon^{\frac{H}{H}}K_{0,1}(\varepsilon,B^{H})= \lim_{\varepsilon\to 0}\varepsilon^{\frac{1}{H}}\bar{K}_{0,1}(\varepsilon,B^{H})= \mathfrak{c}_{H}. \tag{16}\] However, proving (16) requires a non-trivial argument. In fact, the map \(\rho\mapsto K_{0,1}(\varepsilon,w+\rho)\) can be highly discontinuous, and it is impossible to estimate (15) pathwisely. This inherent difficulty motivates us to employ a _probabilistic_ argument in order to prove Theorem 1.1. In addition to superadditivity, a crucial ingredient is the stochastic sewing lemma introduced by Le [10], which provides effective estimates for stochastic Riemann sums. For our specific problem, we require an extension of this lemma, called the shifted stochastic sewing lemma, recently obtained by the third and fourth authors [14]. This extension is particularly suitable for capturing asymptotic decorrelation in stochastic Riemann sums. It is worth mentioning that there are already some studies that leverage the stochastic sewing lemma for analyzing local times [14, 1, 2]. Our proof of Theorem 1.1 proceeds as follows. Not only is the family \((K_{s,t}(\varepsilon,B^{H}))_{s<t}\) superadditive, but it is also almost subadditive: \[K_{s,t}(\varepsilon,B^{H})\leq K_{s,u}(\varepsilon,B^{H})+K_{u,t}(\varepsilon, B^{H})+1,\quad s<u<t.\] This leads to an approximation: \[\varepsilon^{1/H}K_{0,1}(\varepsilon,B^{H})\approx\sum_{[s,t]\in\pi_{ \varepsilon}}\varepsilon^{1/H}K_{s,t}(\varepsilon,B^{H}),\] where \(\pi_{\varepsilon}\) is a uniform partition of the interval \([0,1]\) with \(|\pi_{\varepsilon}|\approx\varepsilon^{1/H}\). A similar approximation holds for \(\bar{K}\). Thus, we obtain \[\varepsilon^{1/H}K_{0,1}(\varepsilon,B^{H})-\varepsilon^{1/H}\bar{K}_{0,1}( \varepsilon,B^{H})\approx\sum_{[s,t]\in\pi_{\varepsilon}}\varepsilon^{1/H}[K_ {s,t}(\varepsilon,B^{H})-\bar{K}_{s,t}(\varepsilon,B^{H})].\] The Riemann sum on the right-hand side can then be estimated using the shifted stochastic sewing lemma, which ultimately yields the desired convergence result. 
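As a numerical sanity check on the quantities appearing in Theorem 1.1 and Conjecture 1.6, one can compare, on a single sampled path, the \((1/H)\)-th variation along a deterministic uniform partition with \(\varepsilon^{1/H}K_{0,T}(\varepsilon,\cdot)\) along a uniform Lebesgue partition. The sketch below is illustrative only: it reuses the `fbm_path` sampler from the sketch in Section 1.1, employs a discrete-sample proxy for the level-crossing count \(K\), and uses much coarser parameters than the simulation of Remark 1.7, so it only reproduces the comparison behind Figure 3 at the final time and in a rough way.

```python
import numpy as np

def lebesgue_crossing_count(w, eps):
    """Discrete-sample proxy for K_{0,T}(eps, w): count transitions of the
    sampled path between cells of the grid eps*Z, including skipped levels."""
    level = np.floor(w[0] / eps)
    count = 0
    for x in w[1:]:
        new_level = np.floor(x / eps)
        if new_level != level:
            count += int(abs(new_level - level))
            level = new_level
    return count

# Illustrative parameters (smaller n than in Remark 1.7, to keep the
# Cholesky sampler tractable).
H, T, n, eps = 0.4, 0.1, 5000, 0.015
_, w = fbm_path(H, T, n)

lebesgue_variation = eps ** (1.0 / H) * lebesgue_crossing_count(w, eps)
deterministic_variation = np.sum(np.abs(np.diff(w)) ** (1.0 / H))
print(lebesgue_variation, deterministic_variation)
# Conjecture 1.6 predicts that, for H < 1/2, the first quantity
# typically exceeds the second as eps -> 0 and n -> infinity.
```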
To establish the convergence to the local time (Theorem 1.3), we follow a similar line of argument. However, the technical difficulty increases substantially due to the lack of a counterpart to \(\widetilde{K}\). We postpone an overview of this technical argument to Section 3.1. One notable strength of the stochastic sewing lemma is its ability to provide a quantitative bound. Thanks to this property, we can obtain a quantitative version of Theorem 1.3, as stated in Theorem 3.1. This quantitative result enables us to employ the pathwise argument of Chacon et al. [1], which elevates the convergence result from Theorem 1.3 to the more refined version of Theorem 1.4. #### Outline The organization of the paper is as follows. Theorem 1.1 is proven in Section 2, while Theorem 1.3 and Theorem 1.4 are proven in Section 3. In Section 2.1, we derive some elementary results on \(K\), in Section 2.2, we recall our key ingredient called the shifted stochastic sewing lemma, and in Section 2.3, we give our proof of Theorem 1.1. In Section 3, after stating a quantitative version of Theorem 1.3 as Theorem 3.1, we review our technical strategy in Section 3.1. Section 3.2 is the most demanding part of the paper, which proves Theorem 3.1 with the key estimate being Lemma 3.7. As a consequence of Theorem 3.1, we complete the proof of Theorem 1.4 in Section 3.3. #### Notation Given a path \(f\colon[0,T]\to\mathbb{R}^{d}\), we write \(f_{s,t}:=f_{t}-f_{s}\), and we denote by \(\dot{f}\) the derivative \(\frac{\mathrm{d}f}{\mathrm{d}t}\). We fix a probability space \((\Omega,\mathcal{F},\mathbb{P})\) on which fractional Brownian motion is defined, we write \(\mathbb{E}\) for the expectation with respect to \(\mathbb{P}\), and we write \[\|F\|_{L^{p}(\mathbb{P})}:=\Big{(}\int_{\Omega}\lvert F\rvert^{p}\mathrm{d} \mathbb{P}\Big{)}^{1/p}\] with usual modification for \(p=\infty\). The expression \(X\stackrel{{\mathrm{d}}}{{=}}Y\) means that the random variables \(X\) and \(Y\) have the same law. We write \(A\lesssim A^{\prime}\) if there exists a positive constant \(C\), depending only on some (unimportant) parameters, such that \(A\leq CA^{\prime}\). If we want to emphasize the dependency on parameters \(\alpha,\beta,\ldots\), then we write \(A\lesssim_{\alpha,\beta,\ldots}A^{\prime}\). The following objects appear throughout the paper. * We write \(B=B^{H}\) for fractional Brownian motion in one dimension. Unlike Section 1, we mostly suppress the script \(H\) (i.e., we simply write \(B\) for fractional Brownian motion). **In this paper we will not write down dependency on \(H\).** For instance, when we write \(A\lesssim A^{\prime}\), the proportional constant may depend on \(H\). * We denote by \(K_{s,t}(\varepsilon,w)\) the total number of \(\varepsilon\)-level crossings in the interval \([s,t]\), as precisely defined at (6). We denote by \(U_{s,t}(\varepsilon,w)\) the total number of upcrossings from \(0\) to \(\varepsilon\) in the interval \([s,t]\), as precisely defined in (8). * We write \((L_{t}^{H}(a))_{t\geq 0,a\in\mathbb{R}}=(L_{t}(a))_{t\geq 0,a\in\mathbb{R}}\) for the local time of fractional Brownian motion \(B\), see Definition 1.2. #### Acknowledgement Our work is aided by the Deutsche Forschungsgemeinschaft (DFG) through the Berlin-Oxford IRTG 2544 program, which enabled PD to visit Berlin and financially supports TM. RL's research was partially funded by the National Science Centre, Poland, under Grant No. 2019/35/B/ST1/04292. 
Variation along uniform Lebesgue partitions The goal of this section is to prove Theorem 1.1. We begin observing elementary results on the counting \(K\) of level crossings, defined by (6). ### Elementary results Let us first recall the Mandelbrot-Van Ness representation of fractional Brownian motion [14], which will be used throughout. **Definition 2.1** (Fractional Brownian motion).: We set \[\mathcal{K}(t,s):=(t-s)^{H-\frac{1}{2}}-(-s)_{+}^{H-\frac{1}{2}},\quad s<t. \tag{17}\] Let \(W=(W_{t})_{t\in\mathbb{R}}\) be a two-sided Brownian motion in one dimension. Throughout the paper, we suppose that fractional Brownian motion \(B=B^{H}\) has the _Mandelbrot-Van Ness representation_ \[B_{t}=\int_{-\infty}^{t}\mathcal{K}(t,s)\mathrm{d}W_{s}. \tag{18}\] Note that we have \(\mathbb{E}[(B_{t}-B_{s})^{2}]=a_{H}(t-s)^{2H}\) for some constant \(a_{H}\), whose actual value is irrelevant for us. **Lemma 2.2** (Scaling of \(K\)).: _For \(\lambda\in(0,\infty)\), we have_ \[(K_{s,t}(\varepsilon,B+\rho))_{s<t,\;\varepsilon>0,\rho\in\mathbb{R}}\stackrel{{ \mathrm{d}}}{{=}}(K_{\lambda^{1/H}s,\lambda^{1/H}t}(\lambda \varepsilon,B+\lambda\rho))_{s<t,\;\varepsilon>0,\rho\in\mathbb{R}}.\] Proof.: We set \(B_{t}^{(\lambda)}:=\lambda B_{\lambda^{-1/H}t}\). Note that \(B^{(\lambda)}\stackrel{{\mathrm{d}}}{{=}}B\) and observe that \[K_{s,t}(\varepsilon,B_{t}+\rho)=K_{s,t}(\lambda\varepsilon,\lambda(B+\rho))=K _{\lambda^{1/H}s,\lambda^{1/H}t}(\lambda\varepsilon,B^{(\lambda)}+\lambda\rho).\qed\] **Lemma 2.3** (Superadditivity of \(K\)).: _Let \(r<s<t\) and \(w\) be a path. Then,_ \[K_{r,s}(\varepsilon,w)+K_{s,t}(\varepsilon,w)\leq K_{r,t}(\varepsilon,w)\leq K _{r,s}(\varepsilon,w)+K_{s,t}(\varepsilon,w)+1.\] Proof.: Without loss of generality, we set \(r=0\). Recalling the definition of \(T_{n}\) from (4), we set \[N:=\max\{n\in\mathbb{N}\cup\{0\}:T_{n}(\varepsilon,w)\leq s\}.\] As shown in Figure 5, we have the following cases. 1. If \(w_{T_{N}(\varepsilon,w)}=w_{T_{1}(\varepsilon,s+w_{s+\cdot})}\) or if \(w_{T_{N}(\varepsilon,w)}=w_{s}\), then \(K_{r,t}(\varepsilon,w)=K_{r,s}(\varepsilon,w)+K_{s,t}(\varepsilon,w)\). 2. Otherwise, we have \(K_{r,t}(\varepsilon,w)=K_{r,s}(\varepsilon,w)+K_{s,t}(\varepsilon,w)+1\). Figure 5: Level crossings around the middle time \(s\). In case (b), the level crossing around \(s\) is not counted in \(K_{r,s}(\varepsilon,w)+K_{s,t}(\varepsilon,w)\). For our arguments, the following variants of \(K\) will appear. **Notation 2.4** (Averaged \(K\)).: We set \[\bar{K}_{s,t}(\varepsilon,w):=\varepsilon^{-1}\int_{-\varepsilon/2}^{\varepsilon /2}K_{s,t}(\varepsilon,w+\rho)\mathrm{d}\rho.\] **Remark 2.5**.: The quantity \(\bar{K}\) is related to the so-called _truncated variation_[10] defined by \[\mathrm{TV}^{\varepsilon}(w,[s,t]):=\sup_{\pi:\text{partition of }[s,t]}\sum_{[u,v ]\in\pi}\max\{|w_{v}-w_{u}|-\varepsilon,0\}.\] Indeed, as shown in [10, Theorem 1], we have the identity \[\mathrm{TV}^{\varepsilon}(w,[s,t])=\varepsilon^{-1}\bar{K}_{s,t}(\varepsilon, w).\] The advantage of \(\bar{K}\) is that in addition to the superadditivity, it is stationary. **Lemma 2.6** (Scaling, superadditivity and stationarity of \(\bar{K}\)).: _Let \(r<s<t\) and let \(w\) be a process._ 1. _For_ \(\lambda>0\)_, we have_ \[(\bar{K}_{s,t}(\varepsilon,B))_{s<t,\varepsilon>0}\stackrel{{ \mathrm{d}}}{{=}}(\bar{K}_{\lambda^{1/H}s,\lambda^{1/H}t}(\lambda \varepsilon,B))_{s<t,\varepsilon>0}.\] 2. 
_We have_ \[\bar{K}_{r,s}(\varepsilon,w)+\bar{K}_{s,t}(\varepsilon,w)\leq\bar{K}_{r,t}( \varepsilon,w)\leq\bar{K}_{r,s}(\varepsilon,w)+\bar{K}_{s,t}(\varepsilon,w)+1.\] 3. _We have_ \(\bar{K}_{s,t}(\varepsilon,w)=\bar{K}_{0,t-s}(\varepsilon,w_{s+\cdot}-w_{s})\)_. In particular,_ \[\bar{K}_{s,t}(\varepsilon,B)\stackrel{{\mathrm{d}}}{{=}}\bar{K}_ {0,t-s}(\varepsilon,B).\] Proof.: The claim (a) follows from Lemma 2.2 and the claim (b) follows from Lemma 2.3. For the claim (c), we observe that for every \(\rho\in\mathbb{R}\) we have \[\bar{K}_{s,t}(\varepsilon,w)=\bar{K}_{s,t}(\varepsilon,w+\rho)=\bar{K}_{0,t-s }(\varepsilon,w_{s+\cdot}+\rho)\] In particular, we choose \(\rho:=-w_{s}\). **Lemma 2.7** (Moments of \(K\)).: _For every \(p,t,\varepsilon\in(0,\infty)\) we have \(\mathbb{E}[\sup_{\rho\in\mathbb{R}}K_{0,t}(\varepsilon,B+\rho)^{p}]<\infty\)._ Proof.: For \(\alpha\in(0,H)\), we set \[[\![B]\!]_{C^{\alpha}([0,t])}:=\sup_{0\leq r<s\leq t}\frac{|B_{s}-B_{r}|}{(s-r )^{\alpha}}.\] By the Kolmogorov continuity theorem, we have \[\mathbb{E}[[\![B]\!]_{C^{\alpha}([0,t])}^{p}]<\infty. \tag{19}\] We set \[\delta:=\varepsilon^{\frac{1}{\alpha}}(1+[\![B]\!]_{C^{\alpha}([0,t])})^{- \frac{1}{\alpha}}.\] To lead to a contradiction, suppose that there exist integers \(k\) and \(n\) such that \[k\delta\leq T_{n}(\varepsilon,B+\rho)<T_{n+1}(\varepsilon,B+\rho)\leq(k+1) \delta\quad\text{with }T_{n+1}(\varepsilon,B+\rho)\leq t.\] Then, \[\varepsilon=|B_{T_{n+1}(\varepsilon,B+\rho)}-B_{T_{n}(\varepsilon,B+\rho)}| \leq[\![B]\!]_{C^{\alpha}([0,t])}\delta^{\alpha}\] \[=\varepsilon[\![B]\!]_{C^{\alpha}([0,t])}(1+[\![B]\!]_{C^{\alpha}([0,t])}) ^{-1}<\varepsilon,\] which is a contradiction. Thus, we must have \[\#\{n:k\delta\leq T_{n}(\varepsilon,B+\rho)\leq(k+1)\delta\}\leq 1\quad \text{for each }k\] and \[\sup_{\rho\in\mathbb{R}}K_{0,t}(\varepsilon,B+\rho)\leq t\delta^{-1}=t \varepsilon^{-\frac{1}{\alpha}}(1+[\![B]\!]_{C^{\alpha}([0,t])})^{\frac{1}{ \alpha}}, \tag{20}\] which is \(L^{p}(\mathbb{P})\)-integrable by (19). In view of Lemma 2.6, the family \((\mathbb{E}[\bar{K}_{0,t}(1,B)])_{t\geq 0}\) satisfies \[\mathbb{E}[\bar{K}_{0,s+t}(1,B)]\geq\mathbb{E}[\bar{K}_{0,s}(1,B)]+\mathbb{E}[ \bar{K}_{0,t}(1,B)].\] Therefore, by Fekete's Lemma, the following limit exists in \([0,\infty]\): \[\mathfrak{c}_{H}:=\lim_{t\to\infty}\frac{1}{t}\mathbb{E}[\bar{K}_{0,t}(1,B)]= \sup_{t>0}\frac{1}{t}\mathbb{E}[\bar{K}_{0,t}(1,B)]. \tag{21}\] The constant \(\mathfrak{c}_{H}\) coincides with the one from Theorem 1.1. The following lemma shows that \(\mathfrak{c}_{H}\) is non-trivial. **Lemma 2.8** (Non-triviality of \(\mathfrak{c}_{H}\)).: _We have \(\mathfrak{c}_{H}\in(0,\infty)\)._ Proof.: To see \(\mathfrak{c}_{H}>0\), we observe \[\mathfrak{c}_{H}\geq\mathbb{E}[\bar{K}_{0,1}(1,B)]\geq\mathbb{P}(B_{1}\geq 2 )>0.\] To see \(\mathfrak{c}_{H}<\infty\), we note by Lemma 2.6 that \((\bar{K}_{s,t}+1)\), \(s<t\) is subadditive. Therefore, \[\mathfrak{c}_{H}\leq\mathbb{E}[\bar{K}_{0,1}(1,B)]+1\leq\mathbb{E}[\sup_{\rho }K_{0,1}(1,B-\rho)]+1,\] which is finite by Lemma 2.7. **Remark 2.9**.: By the subadditivity, we have \[\frac{\mathbb{E}[\bar{K}_{0,t}(1,B)]}{t}\leq\mathfrak{c}_{H}\leq\frac{ \mathbb{E}[\bar{K}_{0,t}(1,B)]+1}{t}.\] In particular, \[\Big{|}\mathfrak{c}_{H}-\frac{\mathbb{E}[\bar{K}_{0,t}(1,B)]}{t}\Big{|}\leq t ^{-1}. \tag{22}\] ### The shifted stochastic sewing lemma A key ingredient to the proof of Theorem 1.1 is the following lemma. 
**Lemma 2.10** (Shifted stochastic sewing lemma, [22, Theorem 1.1]).: _Let \((\mathcal{F}_{t})_{t\in[0,T]}\) be a filtration and let \((A_{s,t})_{0\leq s<t\leq T}\) be a two-parameter stochastic process such that \(A_{s,t}\) is \(\mathcal{F}_{t}\)-measurable. Suppose that for some \(p\in[2,\infty)\) we have \(A_{s,t}\in L^{p}(\mathbb{P})\) for every \(s<t\). Moreover, suppose that for \(v<s<u<t\) and \(M\in(0,\infty)\) we have the estimates_ \[\|A_{s,t}-A_{s,u}-A_{u,t}\|_{L^{p}(\mathbb{P})} \leq\Gamma_{1}(t-s)^{\beta_{1}},\] \[\|\mathbb{E}[A_{s,t}-A_{s,u}-A_{u,t}|\mathcal{F}_{v}]\|_{L^{p}( \mathbb{P})} \leq\Gamma_{2}(s-v)^{-\alpha}(t-s)^{\beta_{2}}, \tag{23}\] _where \(t-s\leq M^{-1}(s-v)\) is assumed in (23), with \(\alpha,\beta_{1},\beta_{2}\) satisfying_ \[\min\{2\beta_{1},2(\beta_{2}-\alpha),\beta_{2}\}>1.\] _Then there exists a unique \((\mathcal{F}_{t})\)-adapted stochastic process \((\mathcal{A}_{t})_{t\in[0,T]}\) with \(\mathcal{A}_{0}=0\) such that_ \[\|\mathcal{A}_{s,t}-A_{s,t}\|_{L^{p}(\mathbb{P})} \lesssim_{p,\alpha,\beta_{1},\beta_{2},M}\Gamma_{1}(t-s)^{\beta_{ 1}}+\Gamma_{2}(t-s)^{\beta_{2}-\alpha},\] \[\|\mathbb{E}[\mathcal{A}_{s,t}-A_{s,t}|\mathcal{F}_{v}]\|_{L^{p}( \mathbb{P})} \lesssim_{p,\alpha,\beta_{1},\beta_{2},M}\Gamma_{2}(s-v)^{-\alpha}(t-s)^{ \beta_{2}}\] _for every \(v<s<t\), where \(t-s\leq M^{-1}(s-v)\) is assumed in the second estimate. Furthermore, we can find a \(\delta>0\), depending only on \(\alpha,\beta_{1},\beta_{2}\), such that_ \[\left\|\mathcal{A}_{T}-\sum_{[s,t]\in\pi}A_{s,t}\right\|_{L^{p}( \mathbb{P})}\lesssim_{p,\alpha,\beta_{1},\beta_{2},T}(\Gamma_{1}+\Gamma_{2}) |\pi|^{\delta}\] _for every partition \(\pi\) of \([0,T]\)._ **Remark 2.11**.: The stochastic sewing lemma was first obtained in the seminal work [11], and the first shifted version, where \(\alpha=0\), was obtained by [10], which was extended by [11] to handle the case \(\alpha>0\). **Remark 2.12**.: For our problems, the family \((A_{s,t})_{0\leq s<t\leq T}\) satisfies \[\|A_{s,t}\|_{L^{p}(\mathbb{P})}\leq\Gamma_{1}(t-s)^{\beta_{1}}, \quad\|\mathbb{E}[A_{s,t}|\mathcal{F}_{v}]\|_{L^{p}(\mathbb{P})}\leq\Gamma_{2} (s-v)^{-\alpha}(t-s)^{\beta_{2}}.\] In this case, by uniqueness of the limit \(\mathcal{A}\), we must have \(\mathcal{A}\equiv 0\). In particular, with some \(\delta=\delta(\alpha,\beta_{1},\beta_{2})\) we have \[\left\|\sum_{[s,t]\in\pi}A_{s,t}\right\|_{L^{p}(\mathbb{P})} \lesssim_{T,p,\alpha,\beta_{1},\beta_{2}}(\Gamma_{1}+\Gamma_{2})|\pi|^{\delta}.\] Let us explain the strength of Lemma 2.10. The key is the estimate (23), which allows us to bring a _weak_ estimate into a strong estimate. To illustrate an example, recall the Mandelbrot-Van Ness representation (18), and we write \((\mathcal{F}_{t})_{t\in\mathbb{R}}\) for the filtration generated by the Brownian motion \(W\) in (18). Suppose that we want to estimate \(\mathbb{E}[F(B)|\mathcal{F}_{v}]\). We then have \[\mathbb{E}[F(B)|\mathcal{F}_{v}]=\mathbb{E}[F(y+\tilde{B})]|_{y =\mathbb{E}[B|\mathcal{F}_{v}]},\quad\tilde{B}:=\int_{v}^{\cdot}\mathcal{K}( \cdot,s)\mathrm{d}W_{s}.\] Then the problem reduces to the estimate of a Gaussian expectation (weak estimate), which allows us to leverage regularity of the Gaussian density. Obviously, weak estimates provide better bounds than strong estimates do. As this point of view is the key to our arguments, let us elaborate on one simple example. Let \(X\) be a one-dimensional standard Gaussian random variable, and let \(F\colon\mathbb{R}\to\mathbb{R}\). 
If we want to estimate \[\mathbb{E}[|F(X)-F(X+a)|]\] for a small \(a\in\mathbb{R}\), then we need to assume some regularity of \(F\). On the other hand, if we want to estimate \[|\mathbb{E}[F(X)]-\mathbb{E}[F(X+a)]|,\] then by the Gaussian change of variable we get \[\mathbb{E}[F(X+a)]=e^{-\frac{a^{2}}{2}}\mathbb{E}[e^{aX}F(X)].\] Hence, only assuming \(F\) is bounded, using the Cauchy-Schwarz inequality, we get \[|\mathbb{E}[F(X)]-\mathbb{E}[F(X+a)]|\lesssim\|F\|_{L^{\infty}}|a|,\quad \forall a\text{ with }|a|\leq 1.\] Very roughly speaking, we will go through such lines of reasoning to prove Theorem 1.1. The Gaussian change of variable will be replaced by Girsanov's theorem in the spirit of Picard [13], see Lemma 2.16. ### Convergence of variation As already suggested, to prove Theorem 1.1, we will apply the shifted stochastic sewing, Lemma 2.10. We denote by \((\mathcal{F}_{t})_{t\in\mathbb{R}}\) the filtration generated by the Brownian motion \(W\) appearing in the Mandelbrot-Van Ness representation (18). The following is the first observation. **Lemma 2.13** (Asymptotic weak estimate of \(\bar{K}\)).: _Let \(\zeta\geq 1\) and \(v<s<t\). We set \(\varepsilon:=(\frac{t-s}{\zeta})^{H}\). Then, if \(\frac{t-s}{s-v}\) is sufficiently small and \(p>1\), we have_ \[\|\mathbb{E}[\bar{K}_{s,t}(\varepsilon,B)|\mathcal{F}_{v}]- \mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta} \Big{(}\frac{t-s}{s-v}\Big{)}^{1-H}.\] Lemma 2.13 is an easy consequence of the following result. **Lemma 2.14** (Asymptotic independence, [20, Lemma A.1]).: _Let \(0\leq v<s<t\). Let \(F\) and \(G\) be measurable with respect to \(\mathcal{F}_{v}\) and_ \[\sigma(B_{t^{\prime}}-B_{s^{\prime}}:s\leq s^{\prime}<t^{\prime} \leq t) \tag{24}\] _respectively, and suppose that \(F,G\in L^{p}(\mathbb{P})\) with \(p\in(1,\infty)\). If \((t-s)(s-v)^{-1}\) is sufficiently small, then we have_ \[|\mathbb{E}[FG]-\mathbb{E}[F]\mathbb{E}[G]|\lesssim_{p}\Big{(} \frac{t-s}{s-v}\Big{)}^{1-H}\|F\|_{L^{p}(\mathbb{P})}\|G\|_{L^{p}(\mathbb{P})}.\] **Remark 2.15**.: Consequently, we have the following estimate. Let \(p\in[2,\infty)\), and we set \(p^{\prime}:=p/(p-1)\). If \(G\) is measurable with respect to the \(\sigma\)-algebra (24), then for any \(\mathcal{F}_{v}\)-measurable \(F\) we have \[|\mathbb{E}[(G-\mathbb{E}[G])F]|\lesssim_{p}\Big{(}\frac{t-s}{s -v}\Big{)}^{1-H}\|F\|_{L^{p^{\prime}}(\mathbb{P})}\|G\|_{L^{p^{\prime}}( \mathbb{P})},\] provided that \(\frac{t-s}{s-v}\) is sufficiently small. Since \(L^{p^{\prime}}(\mathbb{P})\) is the dual of \(L^{p}(\mathbb{P})\) and \(p^{\prime}\leq p\), we have \[\|\mathbb{E}[G|\mathcal{F}_{v}]-\mathbb{E}[G]\|_{L^{p}(\mathbb{ P})}\lesssim_{p}\Big{(}\frac{t-s}{s-v}\Big{)}^{1-H}\|G\|_{L^{p}(\mathbb{P})}, \tag{25}\] where we do not need to assume that \(\frac{t-s}{s-v}\) is small. Proof of Lemma 2.13.: By Lemma 2.6-(iii) (or by Remark 2.5), the random variable \(\bar{K}_{s,t}(\varepsilon,B)\) is measurable with respect to \(\sigma(B_{r}-B_{s}:s\leq r\leq t)\). The estimate (25) implies \[\|\mathbb{E}[\bar{K}_{s,t}(\varepsilon,B)|\mathcal{F}_{v}]-\mathbb{E}[\bar{K}_ {s,t}(\varepsilon,B)]\|_{L^{p}(\mathbb{P})}\lesssim\Big{(}\frac{t-s}{s-v} \Big{)}^{1-H}\|\bar{K}_{s,t}(\varepsilon,B)\|_{L^{p}(\mathbb{P})}.\] By the stationarity and the scaling (Lemma 2.6), \[\bar{K}_{s,t}(\varepsilon,B)\stackrel{{\rm d}}{{=}}\bar{K}_{0, \zeta}(1,B)\] and the moment \(\|\bar{K}_{0,\zeta}(1,B)\|_{L^{p}(\mathbb{P})}\) is bounded by Lemma 2.7. The claim now follows. 
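The constant \(\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\) appearing in Lemma 2.13, and hence \(\mathfrak{c}_{H}\) via Remark 2.9, is amenable to simulation. The following minimal Python sketch (ours, not part of the original argument) estimates \(\mathfrak{c}_{H}\) by Monte Carlo through the rescaled counts \(\varepsilon^{1/H}K_{0,1}(\varepsilon,B)\) from Theorem 1.1; it assumes the convention, visible in the proof of Lemma 2.7, that \(K_{s,t}(\varepsilon,w)\) counts the successive times at which the path has moved by \(\varepsilon\) from its previously recorded value, and it uses a crude Cholesky-based fBm sampler. The grid size, \(\varepsilon\) and sample counts are illustrative choices only; on a finite grid the exit times are resolved only approximately, so the output is a rough estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm_cholesky_factor(n, H, T=1.0):
    # Cholesky factor of the fBm covariance on the grid {T/n, 2T/n, ..., T};
    # simple but O(n^3), so keep n moderate.
    t = np.linspace(0.0, T, n + 1)[1:]
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.linalg.cholesky(cov)

def count_K(w, eps):
    # Discrete surrogate of K_{0,1}(eps, w): walk along the path and record a
    # new point every time the path has moved by eps from the last record.
    count, last = 0, w[0]
    for x in w[1:]:
        if abs(x - last) >= eps:
            count += 1
            last = x
    return count

H, eps, n_steps, n_paths = 0.4, 0.25, 2 ** 12, 50
L = fbm_cholesky_factor(n_steps, H)
vals = []
for _ in range(n_paths):
    path = np.concatenate(([0.0], L @ rng.standard_normal(n_steps)))
    vals.append(eps ** (1.0 / H) * count_K(path, eps))
print(f"rough Monte Carlo estimate of c_H at H={H}: "
      f"{np.mean(vals):.3f} +/- {np.std(vals) / np.sqrt(n_paths):.3f}")
```

By Remark 2.9, averaging \(\bar{K}_{0,t}(1,B)/t\) over a long horizon \(t\) would approximate the same constant up to an error of order \(t^{-1}\).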
We recall the Mandelbrot-Van Ness representation (Definition 2.1). The next lemma is a consequence of Girsanov's theorem. **Lemma 2.16** (Weak estimate on \(K\)).: _Let \(v<s<t\), \(\varepsilon\in(0,1)\), \(\rho,\rho^{\prime}\in[-\varepsilon/2,\varepsilon/2]\) and \(y:[v,t]\to\mathbb{R}\) be a deterministic continuous path. We set_ \[\tilde{B}_{r}^{v}:=\int_{v}^{r}(r-u)^{H-1/2}{\rm d}W_{u},\quad v \leq r\leq t \tag{26}\] \[b_{H}:=\frac{1}{4(1-H)}\Big{(}\frac{1}{\Gamma(H+1/2)\Gamma(3/2-H)}\Big{)}^{2},\] _where \(\Gamma\) is the usual Gamma function_ \[\Gamma(z):=\int_{0}^{\infty}t^{z-1}e^{-t}\mathrm{d}t. \tag{27}\] _We then have the bound_ \[|\mathbb{E}[K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho)]-\mathbb{E} [K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho^{\prime})]|\\ \lesssim e^{b_{H}|\rho-\rho^{\prime}|^{2}(s-v)^{-2}(t-v)^{2-2H}} \times\mathbb{E}[K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho)^{2}]^{\frac{1}{2}} |\rho-\rho^{\prime}|(s-v)^{-1}(t-v)^{1-H}.\] Proof.: The proof is inspired by [20, Theorem A.1]. Let \(\delta:=\rho^{\prime}-\rho\) and \[h_{r}:=\begin{cases}(s-v)^{-1}(r-v)\delta&\text{if }v\leq r\leq s,\\ \delta&\text{if }s\leq r.\end{cases}\] Note that the functions \(r\mapsto\tilde{B}^{v}_{r}+y_{r}+\rho^{\prime}\) and \(r\mapsto\tilde{B}^{v}_{r}+y_{r}+h_{r}+\rho\) are equal on the interval \([s,t]\). Thus, \[K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho^{\prime})=K_{s,t}(\varepsilon,\tilde {B}^{v}+y+h+\rho).\] We claim \[h_{r}=\int_{v}^{r}(r-u)^{H-1/2}\mathrm{d}g_{u},\] where for \(r>v\), \[g_{r}:=\frac{\delta\{(r-v)^{3/2-H}-(r-r\wedge s)^{3/2-H}\}}{\Gamma(H+1/2) \Gamma(3/2-H)(3/2-H)(s-v)}.\] Indeed, \[\dot{g}_{r} :=\frac{\mathrm{d}g_{r}}{\mathrm{d}r}\] \[=\frac{1}{\Gamma(H+1/2)\Gamma(3/2-H)}\frac{\delta}{s-v}\{(r-v)^{ 1/2-H}-(r-s)^{1/2-H}\mathbf{1}_{\{r>s\}}\}\] and \[\int_{v}^{r}(r-u)^{H-1/2}(u-v)^{1/2-H}\mathrm{d}u =\int_{0}^{r-v}(r-v-u)^{H-1/2}u^{1/2-H}\mathrm{d}u\] \[=(r-v)\int_{0}^{1}(1-u)^{H-1/2}u^{1/2-H}\mathrm{d}u\] \[=\Gamma(H+1/2)\Gamma(3/2-H)(r-v),\] where in the last line the relation between the Beta function and the Gamma function is used. 
Therefore, \[\int_{v}^{r}(r-u)^{H-1/2}\mathrm{d}g_{u}=\frac{\delta}{s-v}\{(r-v)-(r-s) \mathbf{1}_{\{r>s\}}\}=h_{r}.\] If we set \[F(w):=K_{s,t}\Big{(}\varepsilon,\int_{v}^{\cdot}(\cdot-u)^{H-1/2}\mathrm{d}w_ {u}+y+\rho\Big{)},\] then \(K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho^{\prime})=F(W+g)\) and by Girsanov's theorem (or the Cameron-Martin theorem) \[\mathbb{E}[F(W+g)]=\mathbb{E}\Big{[}e^{\int_{v}^{t}\dot{g}_{r}\mathrm{d}W_{r}- \frac{1}{2}\int_{v}^{t}|\dot{g}_{r}|^{2}\mathrm{d}r}F(W)\Big{]}.\] Thus, \[\mathbb{E}[K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho^{\prime})]- \mathbb{E}[K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho)]\] \[=\mathbb{E}\Big{[}\Big{\{}e^{\int_{v}^{t}\dot{g}_{r}\mathrm{d}W_{r} -\frac{1}{2}\int_{v}^{t}|\dot{g}_{r}|^{2}\mathrm{d}r}-1\Big{\}}K_{s,t}( \varepsilon,\tilde{B}^{v}+y+\rho)\Big{]}.\] By the Cauchy-Schwarz inequality, it is bounded by \[\mathbb{E}\Big{[}\Big{(}e^{\int_{v}^{t}\dot{g}_{r}\mathrm{d}W_{r}-\frac{1}{2} \int_{v}^{t}|\dot{g}_{r}|^{2}\mathrm{d}r}-1\Big{)}^{2}\Big{]}^{1/2}\mathbb{E}[ K_{s,t}(\varepsilon,\tilde{B}^{v}+y+\rho)^{2}]^{\frac{1}{2}}.\] Since \(\int_{v}^{t}\dot{g}_{r}\mathrm{d}W_{r}\) is centered Gaussian with variance \[\int_{v}^{t}|\dot{g}_{r}|^{2}\mathrm{d}r\leq 2b_{H}\delta^{2}(s-v)^{-2}(t-v)^{ 2-2H},\] we obtain \[\mathbb{E}\Big{[}\Big{(}e^{\int_{v}^{t}\dot{g}_{r}\mathrm{d}W_{r} -\frac{1}{2}\int_{v}^{t}|\dot{g}_{r}|^{2}\mathrm{d}r}-1\Big{)}^{2}\Big{]} =e^{\int_{v}^{t}|\dot{g}_{r}|^{2}\mathrm{d}r}-1\] \[\leq\int_{v}^{t}|\dot{g}_{r}|^{2}\mathrm{d}r\times e^{\int_{v}^{ t}|\dot{g}_{r}|^{2}\mathrm{d}r}\] \[\lesssim e^{2b_{H}|\rho-\rho^{\prime}|^{2}(s-v)^{-2}(t-v)^{2-2H}} |\rho-\rho^{\prime}|^{2}(s-v)^{-2}(t-v)^{2-2H},\] (in the second line we used \(e^{a}-1\leq a\times e^{a}\) for \(a\geq 0\)) which completes the proof. Proof of Theorem 1.1.: In view of the scaling, we may suppose that \(T=1\). The proof resembles that of the subadditive ergodic theorem [19, Theorem 6.4.1]. **Step 1, lower bound.** We fix a parameter \(\zeta\geq 1\), which will go to infinity at the end. (The parameter \(\zeta\) corresponds to the parameter \(m\) in [19, Theorem 6.4.1].) Let \(\pi_{\varepsilon,\zeta}\) be the partition of \([0,1]\) with identical mesh size \(\zeta\varepsilon^{\frac{1}{H}}\). By the superadditivity (Lemma 2.3), using the relation \(t-s=\zeta\varepsilon^{1/H}\) for \([s,t]\in\pi_{\varepsilon,\zeta}\), we obtain \[\varepsilon^{\frac{1}{H}}K_{0,1}(\varepsilon,B+\rho)\geq\sum_{[s,t]\in\pi_{ \varepsilon,\zeta}}\varepsilon^{\frac{1}{H}}K_{s,t}(\varepsilon,B+\rho)= \zeta^{-1}\sum_{[s,t]\in\pi_{\varepsilon,\zeta}}A^{1}_{s,t},\] where \(A^{1}_{s,t}:=K_{s,t}((\frac{t-s}{\zeta})^{H},B+\rho)(t-s)\). Furthermore, we set \[A^{2}_{s,t}:=\bar{K}_{s,t}\Big{(}\Big{(}\frac{t-s}{\zeta}\Big{)}^{H},B\Big{)} (t-s),\quad A^{3}_{s,t}:=\mathbb{E}[\bar{K}_{0,\zeta}(1,B)](t-s).\] We see that \(A_{s,t}:=A^{1}_{s,t}-A^{3}_{s,t}\) satisfies the condition of Lemma 2.10. Indeed, by scaling we have \[\|K_{s,t}(\varepsilon,B+\rho)\|_{L^{p}(\mathbb{P})}+\|\bar{K}_{s,t}( \varepsilon,B+\rho)\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta}1\] and hence \[\|A_{s,t}\|_{L^{p}(\mathbb{P})}\leq\|A^{1}_{s,t}\|_{L^{p}(\mathbb{P})}+\|A^{3} _{s,t}\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta}(t-s).\] To estimate the conditional expectation, let \((t-s)/(s-v)\) be so small that the claim of Lemma 2.13 holds. 
Since \[K_{s,t}(\varepsilon,B+\rho)-\bar{K}_{s,t}(\varepsilon,B)=\varepsilon^{-1}\int_ {-\varepsilon/2}^{\varepsilon/2}\{K_{s,t}(\varepsilon,B+\rho)-K_{s,t}( \varepsilon,B+\rho+\rho^{\prime})\}\mathrm{d}\rho^{\prime},\] by Lemma 2.16, using \(|\rho-\rho^{\prime}|\leq\varepsilon\), \((s-v)^{-1}(t-v)^{1-H}\leq\sqrt{2}(s-v)^{-H}\) (this holds for \((t-s)/(s-v)\) sufficiently small) and \(\varepsilon(s-v)^{-H}=((t-s)/(s-v))^{H}/\zeta^{H}\) we have \[|\mathbb{E}[K_{s,t}(\varepsilon,B+\rho)-\bar{K}_{s,t}(\varepsilon,B)|\mathcal{ F}_{v}]|\] \[\leq\frac{\mathbb{E}[K_{s,t}(\varepsilon,B+\rho)^{2}|\mathcal{F}_{v} ]^{\frac{1}{2}}}{\varepsilon}\Big{(}\frac{t-s}{s-v}\Big{)}^{H},\] which readily yields \[\|\mathbb{E}[A_{s,t}^{1}-A_{s,t}^{2}|\mathcal{F}_{v}]\|_{L^{p}( \mathbb{P})}\lesssim_{p,\zeta}\Big{(}\frac{t-s}{s-v}\Big{)}^{H}(t-s).\] By Lemma 2.13, \[\|\mathbb{E}[A_{s,t}^{2}-A_{s,t}^{3}|\mathcal{F}_{v}]\|_{L^{p}( \mathbb{P})}\lesssim_{p,\zeta}\Big{(}\frac{t-s}{s-v}\Big{)}^{1-H}(t-s).\] Therefore, \[\|\mathbb{E}[A_{s,t}|\mathcal{F}_{v}]\|_{L^{p}(\mathbb{P})} \lesssim_{p,\zeta}\Big{(}\frac{t-s}{s-v}\Big{)}^{\min\{H,1-H\}}(t-s),\] and we indeed see that \((A_{s,t})_{s<t}\) satisfies the conditions of Lemma 2.10. Consequently, recalling Remark 2.12, we obtain \[\varepsilon^{\frac{1}{H}}K_{0,1}(\varepsilon,B)\geq\frac{\mathbb{E}[\bar{K}_ {0,\zeta}(1,B)]}{\zeta}-R_{\varepsilon,\zeta}, \tag{28}\] where \[\|R_{\varepsilon,\zeta}\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta} \varepsilon^{\delta}\] for some \(\delta\) depending only on \(H\). By the Borel-Cantelli lemma, if \(\varepsilon_{n}=O(n^{-\eta})\) for some \(\eta>0\), then \(R_{\varepsilon_{n},\zeta}\to 0\) a.s. This implies \[\liminf_{n\to\infty}\varepsilon^{\frac{1}{H}}K_{0,1}(\varepsilon_{n},B+\rho) \geq\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]}{\zeta}\quad\text{a.s.}\] Since \(\zeta\) is an arbitrary real no smaller than \(1\), the lower bound is obtained. **Step 2, upper bound.** Since \((K_{s,t}(\varepsilon,B+\rho)+1)_{s<t}\) is subadditive, we obtain \[\varepsilon^{\frac{1}{H}}K_{0,1}(\varepsilon,B)\leq\frac{\mathbb{E}[\bar{K}_ {0,\zeta}(1,B)]}{\zeta}+\frac{1}{\zeta}+R_{\varepsilon,\zeta}, \tag{29}\] and we similarly obtain the upper bound. ## 3 Local time via level crossings In this section, we are interested in level crossings at a specific level. Our goal of this section is to prove Theorem 1.4, fractional analogue of Chacon et al. [1]. The key is to obtain a more quantitative version of Theorem 1.3, as stated just below. Recall the definition of \(U_{s,t}(\varepsilon,w)\) from (8), which counts the total number of upcrossings from \(0\) to \(\varepsilon\) in the interval \([s,t]\). **Theorem 3.1** (Quantitative bound on number of upcrossing \(U\)).: _Let \(H\in(0,1/2)\), \(T\in(0,\infty)\) and \(a\in\mathbb{R}\). The constant \(\mathfrak{c}_{H}\) is defined by (21). Almost surely, we have the following quantitative bound:_ \[\Big{|}\varepsilon^{\frac{1}{H}-1}U_{0,T}(\varepsilon,B-a)-\frac{\mathfrak{c} _{H}}{2}L_{T}(a)\Big{|}\leq\zeta^{-1}L_{T}(a)+\mathcal{R}_{\varepsilon,\zeta,T,a},\] _for all \(\varepsilon\in(0,\infty)\) and \(\zeta\in(1,\infty)\), where there exists a positive \(\kappa\) such that for every \(p\in(0,\infty)\) we have_ \[\|\mathcal{R}_{\varepsilon,\zeta,T,a}\|_{L^{p}(\mathbb{P})}\leq C_{p,\zeta}T^ {1-H}\varepsilon^{\kappa}\] _with \(C_{p,\zeta}\) independent of \(\varepsilon\), \(T\) and \(a\)._ The proof of Theorem 3.1 is somewhat similar to that of Theorem 1.1, especially the bounds (28) and (29). 
Indeed, it is based on the super(sub)-additivity, Girsanov's theorem and shifted stochastic sewing lemma. However, a major difficulty here is that we cannot find a counterpart to \(\bar{K}\). This leads to more involved technical arguments. Therefore, instead of directly going to the proof, in the next section we heuristically explain our strategy. ### Heuristics Herein we explain our heuristic strategy to prove Theorem 3.1. Let \((\mathcal{F}_{t})_{t\in\mathbb{R}}\) be the filtration generated by \(W\) in the Mandelbrot-Van Ness representation (18). We set \[A_{s,t}:=U_{s,t}((t-s)^{H},B-a)(t-s)^{1-H}.\] In view of Lemma 2.10, our goal is to show \[\mathbb{E}[A_{s,t}|\mathcal{F}_{v}]\approx\frac{\epsilon_{H}}{2}\mathbb{E}[L_ {s,t}(a)|\mathcal{F}_{v}]. \tag{30}\] Indeed, once the estimate (30) is proven, the rest of the argument is similar to the proof of Theorem 1.1. We thus explain heuristically how to prove (30). For simplicity, we set \(a=0\), and we write \(\varepsilon:=(t-s)^{H}\). (Strictly speaking, we actually introduce another parameter \(\zeta\) going to infinity and set \(\varepsilon:=(\frac{t-s}{\zeta})^{H}\), but for simplicity here we set \(\zeta=1\).) Let us introduce another parameter \(u\in(v,s)\) (in mind \(t-s\ll s-u\ll u-v\)), and, recalling the Mandelbrot-Van Ness representation from Definition 2.1, for \(r\in[s,t]\) we decompose \[B_{r} =\int_{-\infty}^{v}\mathcal{K}(r,\theta)\mathrm{d}W_{\theta}+ \int_{v}^{u}\mathcal{K}(r,\theta)\mathrm{d}W_{\theta}+\int_{u}^{r}\mathcal{K} (r,\theta)\mathrm{d}W_{\theta}\] \[=:X_{r}+Y_{r}+Z_{r}.\] In the interval \([s,t]\) the smooth processes \(X\) and \(Y\) do not change much compared to \(Z\). Therefore, we can freeze time of \(X\) and \(Y\) (Lemma 3.9): \[\mathbb{E}[U_{s,t}(\varepsilon,B)|\mathcal{F}_{v}]\approx\mathbb{E}[U_{s,t}( \varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{v}].\] But we see \[\mathbb{E}[U_{s,t}(\varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{v}]=\mathbb{E}[U_{s,t}(\varepsilon,x+Y_{s}+Z)]|_{x=X_{s}},\] and the Gaussian change of variable to \(Y\) yields \[\mathbb{E}[U_{s,t}(\varepsilon,x+Y_{s}+Z)]=e^{-\frac{1}{2}(\frac{x}{\sigma_{Y }})^{2}}\mathbb{E}\Big{[}e^{\frac{xY_{s}}{\sigma_{Y}^{2}}}U_{s,t}(\varepsilon, Y_{s}+Z)\Big{]},\] where \(\sigma_{Y}\) is the variance of \(Y_{s}\) (Lemma 3.11). For \(U_{s,t}(\varepsilon,Y_{s}+Z)\) to be positive, \(Y_{s}\) must be around \(0\) with high probability. (In other words, if \(Y_{s}\) is far away from \(0\), the process \(Z\) must move quite a lot, which is costly.) Therefore (Lemma 3.12), \[\mathbb{E}\Big{[}e^{\frac{xY_{s}}{\sigma_{Y}^{2}}}U_{s,t}(\varepsilon,Y_{s}+Z )\Big{]}\approx\mathbb{E}[U_{s,t}(\varepsilon,Y_{s}+Z)]\approx\mathbb{E}\Big{[} U_{s,t}(\varepsilon,Y+Z)\Big{]}.\] As \(v\ll u\ll s\), we have \(\sigma_{Y}\approx\sigma_{Y+Z}\), with \(\sigma_{Y+Z}\) being the variance of \(Y+Z\) (Lemma 3.15). In the end, we have (Lemma 3.7) \[\mathbb{E}[U_{s,t}(\varepsilon,B)|\mathcal{F}_{v}]\approx\mathbb{E}\Big{[}U_{ s,t}(\varepsilon,Y+Z)\Big{]}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}.\] It is well-known that the local time is heuristically represented as integral of Dirac's delta function along \(B\) (see Lemma 3.19). 
We then observe (Lemma 3.19) \[\mathbb{E}\Big{[}\int_{s}^{t}\delta_{0}(B_{r})\mathrm{d}r\Big{|} \mathcal{F}_{v}\Big{]} \approx\int_{s}^{t}\mathbb{E}[\delta_{0}(B_{s})|\mathcal{F}_{v}] \mathrm{d}r\] \[=\frac{1}{\sqrt{2\pi}\sigma_{Y+Z}}e^{-\frac{X_{s}^{2}}{2\sigma_{Y +Z}^{2}}}(t-s).\] It is not obvious, but in Lemma 3.17 we prove \[\sqrt{2\pi}\sigma_{Y+Z}(t-s)^{-H}\mathbb{E}[U_{s,t}(\varepsilon,Y+Z)]\approx \frac{\epsilon_{H}}{2}.\] Now we see (30). With this heuristic argument in mind, we move to a rigorous proof in the next section. ### Convergence to local time #### 3.2.1 Estimates on level crossings The following process will appear in our argument. **Definition 3.2**.: The kernel \(\mathcal{K}\) is defined by (17). We denote by \(\tilde{B}=\tilde{B}^{H}\) the _Riemann-Liouville process_ \[\tilde{B}_{t}:=\int_{0}^{t}\mathcal{K}(t,r)\mathrm{d}W_{r}.\] In view of the Mandelbrot-Van Ness representation (Definition 2.1), we have \[B_{t}=\int_{-\infty}^{0}\mathcal{K}(t,r)\mathrm{d}W_{r}+\tilde{B}_{t}. \tag{31}\] We begin with three elementary lemmas. **Lemma 3.3** (Scaling of \(U\)).: _We have the following scaling property: for \(\lambda>0\),_ \[(U_{s,t}(\varepsilon,B+\rho))_{s<t,\varepsilon>0,\rho\in\mathbb{R}}\stackrel{{ \mathrm{d}}}{{=}}(U_{\lambda^{1/H}s,\lambda^{1/H}t}(\lambda\varepsilon,B+ \lambda\rho))_{s<t,\varepsilon>0,\rho\in\mathbb{R}}.\] _A similar result holds with \(B\) replaced by \(\tilde{B}\)._ Proof.: Similarly to Lemma 2.2, it follows from the scaling property of \(B\) and \(\tilde{B}\). **Notation 3.4**.: We set \[\bar{U}_{s,t}(\varepsilon,w):=U_{s,t}(\varepsilon,w)+\mathbf{1}_{\{w_{s}\in(0,\varepsilon)\}}.\] **Lemma 3.5** (Sub/super-additivity of \(U\)).: _For \(s<u<t\) we have_ \[U_{s,t}(\varepsilon,w)\geq U_{s,u}(\varepsilon,w)+U_{u,t}(\varepsilon,w),\quad \bar{U}_{s,t}(\varepsilon,w)\leq\bar{U}_{s,u}(\varepsilon,w)+\bar{U}_{u,t}( \varepsilon,w).\] Proof.: We have \[U_{s,t}(\varepsilon,w)=U_{s,u}(\varepsilon,w)+U_{u,t}(\varepsilon,w)+1\] if there exist \(a\) and \(b\) such that \(s\leq a<u<b\leq t\), \(w_{a}=0,w_{b}=\varepsilon\) and \(w_{r}\in(0,\varepsilon)\) for all \(r\in(a,b)\), and otherwise \[U_{s,t}(\varepsilon,w)=U_{s,u}(\varepsilon,w)+U_{u,t}(\varepsilon,w).\qed\] **Lemma 3.6** (Moment bound on \(U\)).: _There exists a positive constant \(\beta=\beta(H)\) such that for \(a\in\mathbb{R}\), \(\varepsilon\in(0,\infty)\), \(s<t\) and \(p\in(0,\infty)\) we have_ \[\|U_{s,t}(\varepsilon,B-a)\|_{L^{p}(\mathbb{P})}\lesssim_{H,p,\varepsilon}1+( t-s)^{\beta}.\] _A similar estimate holds with \(B\) replaced by \(\tilde{B}\)._ Proof.: Regarding \(B\), the claim follows from the obvious inequality \(U_{s,t}(\varepsilon,B-a)\leq K_{s,t}(\varepsilon,B-a)\) and the estimate (20). Regarding \(\tilde{B}\), by (31) we observe that \[\|\tilde{B}_{s,t}\|_{L^{p}(\mathbb{P})}\leq\|B_{s,t}\|_{L^{p}(\mathbb{P})}.\] This yields an estimate on Holder norm of \(\tilde{B}\) in \(L^{m}(\mathbb{P})\), with which we can proceed as in \(B\) We introduce some notation that will be used throughout Section 3.2.1. As in the proof of Theorem 1.1, we fix \(\zeta\geq 1\), and at the very end we let \(\zeta\to\infty\). We fix \(v<u<s<t\) with \(t-s\ll s-u\ll u-v\) and set \[\varepsilon:=\left(\frac{t-s}{\zeta}\right)^{H}, \tag{32}\] as shown in Figure 6. We set \[X_{r}:=\int_{-\infty}^{v}\mathcal{K}(r,\theta)\mathrm{d}W_{\theta}-a,\quad Y _{r}:=\int_{v}^{u}\mathcal{K}(r,\theta)\mathrm{d}W_{\theta},\quad Z_{r}:=\int_ {u}^{r}\mathcal{K}(r,\theta)\mathrm{d}W_{\theta} \tag{33}\] for \(r\in[s,t]\). 
Figure 6: Parameters for Lemma 3.7

Let \((\mathcal{F}_{t})_{t\in\mathbb{R}}\) be the filtration generated by \(W\) in the Mandelbrot-Van Ness representation (18). We have the identity \[\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]=\mathbb{E}[U_{s,t}(\varepsilon,x+Y+Z)]|_{x=X}.\] Finally, we write \[\sigma_{Y}^{2}:=\mathbb{E}[Y_{s}^{2}]=\frac{1}{2H}\{(s-v)^{2H}-(s-u)^{2H}\}, \tag{34}\] \[\sigma_{Y+Z}^{2}:=\mathbb{E}[(Y_{s}+Z_{s})^{2}]=\frac{1}{2H}(s-v)^{2H}. \tag{35}\] In the spirit of the shifted stochastic sewing (Lemma 2.10), we will estimate \[\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}].\] The most crucial ingredient to Theorem 3.1 is the following. **Lemma 3.7** (Asymptotic weak estimate on \(U\)).: _Let \(H\in(0,1/2)\) and \(p\in(1,\infty)\). We further let \(0\leq v<s<t\leq 1\) and \(\zeta\in[1,\infty)\), and set \(\varepsilon\) as in (32). Let \((\mathcal{F}_{t})_{t\in\mathbb{R}}\) be the filtration generated by \(W\) in the Mandelbrot-Van Ness representation (18). We define \(X\) by (33) and \(\sigma_{Y+Z}\) by (35). For every \(\kappa\in(0,1)\), if \(\frac{t-s}{s-v}\) is sufficiently small, we have_ \[\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]=\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]}{2\sqrt{2\pi}\sigma_{Y+Z}}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\varepsilon+R_{v,s,t}, \tag{36}\] _where (we omit the \(a\)-dependence of \(R_{v,s,t}\) from the notation and the following estimate holds uniformly in \(a\)):_ \[\|R_{v,s,t}\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta,\kappa}\Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H(1-H)}(t-s)^{-\kappa H}. \tag{37}\] **Remark 3.8**.: Due to scaling, it suffices to prove Theorem 3.1 with \(T=1\). Therefore, in Lemma 3.7 we assume \(t\leq 1\). We then do not have to keep track of dependency of constants on the final time \(T\). The proof of Lemma 3.7, to which the rest of Section 3.2.1 is devoted, will be built on several technical lemmas. For the sake of the next lemma, we recall the Riemann-Liouville operator (e.g., [10]) \[I_{\alpha}f(r):=\frac{1}{\Gamma(\alpha)}\int_{s}^{r}(r-\theta)^{\alpha-1}f(\theta)\mathrm{d}\theta,\quad r>s,\] where \(\Gamma\) is the Gamma function (27) and \(\alpha>0\). If \(f\) is Lipschitz with \(f_{s}=0\) and \(\alpha\in(-1,0]\), we set \[I_{\alpha}f(r):=\frac{1}{\Gamma(1+\alpha)}\int_{s}^{r}(r-\theta)^{\alpha}\dot{f}(\theta)\mathrm{d}\theta.\] The family \((I_{\alpha})_{\alpha>-1}\) has the semigroup property \(I_{\alpha}I_{\beta}=I_{\alpha+\beta}\). **Lemma 3.9** (Fix time of \(X\) and \(Y\)).: _For \(p\in(1,\infty)\), \(\zeta\in[1,\infty)\), \(\varepsilon\) given by (32) and \(\kappa\in(1-1/p,1)\), there exists a positive constant \(c\), depending on \(H\) only, such that if \((t-s)(s-u)^{-1}\) is sufficiently small, we have_ \[\|\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{u}]-\mathbb{E}[U_{s,t}(\varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{u}]\|_{L^{p}(\mathbb{P})}\\ \lesssim_{p,\kappa,\zeta}\|U_{s,t}(\varepsilon,B-a)\|_{L^{1}(\mathbb{P})}^{1-\kappa}e^{-ca^{2}}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}.\] Proof.: The proof is similar to [10, Lemma A.1]. We have \[\mathbb{E}[U_{s,t}(\varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{u}]=\mathbb{E}[U_{s,t}(\varepsilon,x_{s}+y_{s}+Z)]|_{x=X,y=Y}\] and \[\mathbb{E}[U_{s,t}(\varepsilon,x_{s}+y_{s}+Z)]=\mathbb{E}[U_{s,t}(\varepsilon,x+y+w+Z)],\] where \[w_{r}:=-(x_{r}+y_{r}-x_{s}-y_{s}),\quad r\in[s,t].\] Since \(X\) and \(Y\) are smooth on \([s,t]\), we may suppose that the realizations \(x\) and \(y\) are smooth as well.
As \(w_{s}=0\), \[w=I_{1}\dot{w}=I_{H-\frac{1}{2}}I_{\frac{3}{2}-H}\dot{w}\] and \[w_{r}+\int_{s}^{r}\mathcal{K}(r,\theta)\mathrm{d}W_{\theta}=\int_{s}^{r} \mathcal{K}(r,\theta)\mathrm{d}\Big{(}W_{\theta}+c_{1}\big{(}I_{\frac{3}{2}-H }\dot{w}\big{)}_{\theta}\Big{)}\] for some constant \(c_{1}\) depending only on \(H\). By Girsanov's theorem, \[\mathbb{E}[U_{s,t}(\varepsilon,x+y+w+Z)]=\mathbb{E}\Big{[}U_{s, t}(\varepsilon,x+y+Z)\\ \times\exp\Big{(}c_{1}\int_{s}^{t}\frac{\mathrm{d}}{\mathrm{d} \theta}I_{\frac{3}{2}-H}\dot{w}\,\mathrm{d}W_{\theta}-\frac{c_{1}^{2}}{2}\int _{s}^{t}\Big{|}\frac{\mathrm{d}}{\mathrm{d}\theta}I_{\frac{3}{2}-H}\dot{w} \Big{|}^{2}\mathrm{d}\theta\Big{)}\Big{]}.\] Therefore, if \(p^{-1}+q^{-1}=1\), by Holder's inequality, \[|\mathbb{E}[U_{s,t}(\varepsilon,x+y+Z)]-\mathbb{E}[U_{s,t}( \varepsilon,x_{s}+y_{s}+Z)]|\\ \lesssim\mathbb{E}\Big{[}\Big{|}\exp\Big{(}c_{1}\int_{s}^{t} \frac{\mathrm{d}}{\mathrm{d}\theta}I_{\frac{3}{2}-H}\dot{w}\mathrm{d}W_{ \theta}-\frac{c_{1}^{2}}{2}\int_{s}^{t}\Big{|}\frac{\mathrm{d}}{\mathrm{d} \theta}I_{\frac{3}{2}-H}\dot{w}\Big{|}^{2}\mathrm{d}\theta\Big{)}-1\Big{|}^{q \Big{]}^{\frac{1}{\theta}}}\\ \times\mathbb{E}[U_{s,t}(\varepsilon,x+y+Z)^{p}]^{\frac{1}{p}}.\] Since the random variable \[\int_{s}^{t}\frac{\mathrm{d}}{\mathrm{d}\theta}I_{\frac{3}{2}-H}\dot{w}( \theta)\mathrm{d}W_{\theta}\] is Gaussian, by Lemma A.1, \[\mathbb{E}\Big{[}\Big{|}\exp\Big{(}c_{1}\int_{s}^{t}\frac{\mathrm{ d}}{\mathrm{d}\theta}I_{\frac{3}{2}-H}\dot{w}\mathrm{d}W_{\theta}-\frac{c_{1}^{2}}{2} \int_{s}^{t}\Big{|}\frac{\mathrm{d}}{\mathrm{d}\theta}I_{\frac{3}{2}-H}\dot{w} \Big{|}^{2}\mathrm{d}\theta\Big{)}-1\Big{|}^{q}\Big{]}^{\frac{1}{q}}\\ \lesssim_{q}\Big{(}\int_{s}^{t}\Big{|}\frac{\mathrm{d}}{\mathrm{d} \theta}I_{\frac{3}{2}-H}\dot{w}\Big{|}^{2}\mathrm{d}\theta\Big{)}^{\frac{1}{2} }\exp\Big{(}C_{q}\int_{s}^{t}\Big{|}\frac{\mathrm{d}}{\mathrm{d}\theta}I_{ \frac{3}{2}-H}\dot{w}\Big{|}^{2}\mathrm{d}\theta\Big{)}.\] Hence, by setting \[S_{s,t}:=\int_{s}^{t}\Big{|}\frac{\mathrm{d}}{\mathrm{d}\theta}I_{ \frac{3}{2}-H}(\dot{X}+\dot{Y})\Big{|}^{2}\mathrm{d}\theta,\] we have \[\|\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{u}] -\mathbb{E}[U_{s,t}(\varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{u}] \|_{L^{p}(\mathbb{P})}\] \[\lesssim\Big{[}\mathbb{E}\left(\mathbb{E}[U_{s,t}(\varepsilon,B- a)^{p}|\mathcal{F}_{u}]S_{s,t}^{\frac{p}{2}}e^{pC_{q}S_{s,t}}\right)\Big{]}^{\frac{1}{p}}\] \[\leq\|U_{s,t}(\varepsilon,B-a)\|_{L^{p_{1}}(\mathbb{P})}\Big{\|} S_{s,t}^{\frac{1}{2}}e^{C_{q}S_{s,t}}\Big{\|}_{L^{q_{1}}(\mathbb{P})},\] where \(p_{1}^{-1}+q_{1}^{-1}=p^{-1}\). Choose \(p_{2}\) so that \(p_{1}^{-1}=(1-\kappa)+\kappa p_{2}^{-1}\) (since \(\kappa>1-p^{-1}\), this is possible by choosing \(p_{1}\) close to \(p\)). 
By the log-convexity of \(L^{p}\) norms, \[\|U_{s,t}(\varepsilon,B-a)\|_{L^{p_{1}}(\mathbb{P})} \leq\|U_{s,t}(\varepsilon,B-a)\|_{L^{1}(\mathbb{P})}^{1-\kappa} \|U_{s,t}(\varepsilon,B-a)\|_{L^{p_{2}}(\mathbb{P})}^{\kappa}.\] We also have (by the Cauchy-Schwarz inequality) \[\|U_{s,t}(\varepsilon,B-a)\|_{L^{p_{2}}(\mathbb{P})} \leq\mathbb{P}(\|B\|_{L^{\infty}([0,1])}\geq a)^{\frac{1}{2}}\|U_ {s,t}(\varepsilon,B-a)\|_{L^{2p_{2}}(\mathbb{P})}\] \[\lesssim e^{-c_{2}a^{2}}\|U_{s,t}(\varepsilon,B-a)\|_{L^{2p_{2}}( \mathbb{P})}.\] The scaling property (Lemma 3.3) gives \[\|U_{s,t}(\varepsilon,B-a)\|_{L^{2p_{2}}(\mathbb{P})}=\|U_{s/(t-s),t/(t-s)}( \zeta^{-H},B-(t-s)^{-H}a)\|_{L^{2p_{2}}(\mathbb{P})}.\] By Lemma 3.6, \[\|U_{s/(t-s),t/(t-s)}(\zeta^{-H},B-(t-s)^{-H}a)\|_{L^{2p_{2}}( \mathbb{P})}\lesssim_{p_{2},\zeta}1.\] It remains to see \[\Big{\|}S_{s,t}^{\frac{1}{2}}e^{C_{q}S_{s,t}}\Big{\|}_{L^{q_{1}}( \mathbb{P})}\lesssim\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H},\quad\text{if $\frac{t-s}{s-u}$ is sufficiently small.}\] This was essentially proven in [10, Lemma A.1] (our \(S_{s,t}\) corresponds to \(L\) therein). **Remark 3.10**.: We note that a similar reasoning shows that for \(p<p_{1}<\infty\) if \(\frac{t-s}{s-u}\) is sufficiently small, we have \[\|U_{s,t}(\varepsilon,Y_{s}+Z)\|_{L^{p}(\mathbb{P})}\lesssim_{ \zeta,p,p_{1}}\|U_{s,t}(\varepsilon,Y+Z)\|_{L^{p_{1}}(\mathbb{P})}.\] **Lemma 3.11** (Gaussian change of variable).: _Recall \(\sigma_{Y}\) from (34). We have the estimate_ \[\mathbb{E}[U_{s,t}(\varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{v}]=e^{-\frac{1}{2} \frac{(X_{s})}{\sigma_{Y}^{2}}}\mathbb{E}\Big{[}e^{\frac{X_{s}Y_{s}}{\sigma_{Y }^{2}}}U_{s,t}(\varepsilon,Y_{s}+Z)\Big{|}\mathcal{F}_{v}\Big{]}.\] Proof.: We set \(F(\eta):=\mathbb{E}[U_{s,t}(\varepsilon,\eta+Z)]\) for \(\eta\in\mathbb{R}\). Since \(X,Y\) and \(Z\) are independent and \(X\) is \(\mathcal{F}_{v}\)-measurable, \[\mathbb{E}[U_{s,t}(\varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{v}]= \mathbb{E}[F(x_{s}+Y_{s})]|_{x=X}.\] Since \(Y_{s}\) is Gaussian with the variance \(\sigma_{Y}^{2}\) and the mean \(0\), we observe \[\mathbb{E}[F(x_{s}+Y_{s})] =\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}F(x_{s}+\sigma_{Y}\eta)e^ {-\frac{\eta^{2}}{2}}\mathrm{d}\eta\] \[=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}F(\sigma_{Y}\eta)e^{-\frac{ 1}{2}(\eta-\sigma_{Y}^{-1}x_{s})^{2}}\mathrm{d}\eta\] \[=e^{-\frac{1}{2}(\frac{x_{s}}{\sigma_{Y}^{2}})^{2}}\mathbb{E} \Big{[}e^{\frac{Y_{s}x_{s}}{\sigma_{Y}^{2}}}F(Y_{s})\Big{]}.\] The claim thus follows. **Lemma 3.12** (\(Y_{s}\) must be near \(0\)).: _For every \(p_{1}\in(1,\infty)\), if \(\frac{t-s}{s-u}\) is sufficiently small, then_ \[\Big{|}\mathbb{E}\Big{[}e^{\frac{x_{s}Y_{s}}{\sigma_{Y}^{2}}}U_{s, t}(\varepsilon,Y_{s}+Z)\Big{]}-\mathbb{E}[U_{s,t}(\varepsilon,Y_{s}+Z)]\Big{|}\\ \lesssim_{H,p_{1}}\frac{|x_{s}|(t-u)^{H}}{\sigma_{Y}^{2}}e^{c( \frac{|x_{s}|(t-u)^{H}}{\sigma_{Y}^{2}})^{2}}\|U_{s,t}(\varepsilon,Y+Z)\|_{L^{ p_{1}}(\mathbb{P})}\] _with \(c\) depending only on \(H\) and \(p_{1}\)._ Proof.: For \(U_{s,t}(\varepsilon,Y_{s}+Z)\) to be non-zero, we must have \(\inf_{r\in[s,t]}|Y_{s}+Z_{r}|=0\). 
Therefore, \[\mathbb{E}\Big{[}e^{\frac{x_{s}Y_{s}}{\sigma_{Y}^{2}}}U_{s,t}( \varepsilon,Y_{s}+Z)\Big{]}-\mathbb{E}[U_{s,t}(\varepsilon,Y_{s}+Z)]\\ =\mathbb{E}\Big{[}\Big{(}e^{\frac{x_{s}Y_{s}}{\sigma_{Y}^{2}}}-1 \Big{)}U_{s,t}(\varepsilon,Y_{s}+Z)\mathbf{1}_{\{\|Z\|_{L^{\infty}([s,t)\geq|Y _{s}|)\}\}}\Big{]}.\] Using the inequality \[|e^{\lambda}-1|\leq e^{|\lambda|}|\lambda|,\quad\lambda\in \mathbb{R},\] we estimate \[\Big{|}\mathbb{E}\Big{[}\Big{(}e^{\frac{x_{s}Y_{s}}{\sigma_{Y}^{2 }}}-1\Big{)}U_{s,t}(\varepsilon,Y_{s}+Z)\mathbf{1}_{\{\|Z\|_{L^{\infty}([s,t) \geq|Y_{s}|)\}\}}\Big{]}\\ \leq\frac{|x_{s}|}{\sigma_{Y}^{2}}\mathbb{E}\Big{[}e^{\frac{|x_{ s}||\|Z\|_{L^{\infty}([s,t])}}{\sigma_{Y}^{2}}}\|Z\|_{L^{\infty}([s,t])}U_{s,t}( \varepsilon,Y_{s}+Z)\Big{]},\] and, by Holder's inequality, the expectation on the right hand side is bounded by \[[\mathbb{E}U_{s,t}(\varepsilon,Y_{s}+Z)^{p_{1}}]^{\frac{1}{p_{1}}}\left[ \mathbb{E}\|Z\|_{L^{\infty}([s,t])}^{p_{2}}\right]^{\frac{1}{p_{2}}}\left[ \mathbb{E}e^{\frac{p_{3}|x_{s}||\|Z\|_{L^{\infty}([s,t])}}{\sigma_{Y}^{2}}} \right]^{\frac{1}{p_{3}}},\] where \(p_{1},p_{2},p_{3}\in(1,\infty)\) satisfy \[\frac{1}{p_{1}}+\frac{1}{p_{2}}+\frac{1}{p_{3}}=1.\] By Remark 3.10, if \(\frac{t-s}{s-u}\) is sufficiently small, we have \[[\mathbb{E}U_{s,t}(\varepsilon,Y_{s}+Z)^{p_{1}}]^{\frac{1}{p_{1}}}\lesssim_{p _{1}}[\mathbb{E}[U_{s,t}(\varepsilon,Y+Z)^{p_{1}}]^{\frac{1}{p_{4}}}\,,\quad p _{4}:=p_{1}^{2}.\] Recalling \(\tilde{B}\) from Definition 3.2, the scaling property yields \[\mathbb{E}\|Z\|_{L^{\infty}([s,t])}^{p_{2}}\leq\mathbb{E}\|Z\|_{L^{\infty}([u,t])}^{p_{2}}=(t-u)^{p_{2}H}\mathbb{E}\|\tilde{B}\|_{L^{\infty}([0,1])}^{p_{2}}\] and similarly \[\mathbb{E}\Big{[}e^{\frac{p_{3}|x_{s}||\|Z\|_{L^{\infty}([s,t])}}{\sigma_{Y}^{ 2}}}\Big{]}\leq\mathbb{E}\Big{[}e^{\frac{p_{3}|x_{s}|(t-u)^{H}}{\sigma_{Y}^{2} }\|\tilde{B}\|_{L^{\infty}([0,1])}}\Big{]}.\] Since \(\|\tilde{B}\|_{L^{\infty}([0,1])}\) has a Gaussian tail by Fernique's theorem, there exists a constant \(c\) depending only on \(H\) such that \[\mathbb{E}\Big{[}e^{\frac{p_{3}|x_{s}|(t-u)^{H}}{\sigma_{Y}^{2}}\|\tilde{B}\| _{L^{\infty}([0,1])}}\Big{]}\lesssim e^{c\frac{(p_{3}|x_{s}|(t-u)^{H}}{\sigma_ {Y}^{2}})^{2}}.\] Now the claim is proved. **Lemma 3.13** (Sharp bound on \(U\)).: _For every \(p_{1},p_{2}\in(1,\infty)\) we have_ \[\|U_{s,t}(\varepsilon,Y+Z)\|_{L^{p_{1}}(\mathbb{P})}\lesssim_{\zeta,p_{1},p_{2 }}\Big{(}\frac{t-s}{s-v}\Big{)}^{\frac{H}{p_{1}p_{2}}}.\] Proof.: By the scaling, \[\|U_{s,t}(\varepsilon,Y+Z)\|_{L^{p_{1}}(\mathbb{P})}=\|U_{s-v,t-v}( \varepsilon,\tilde{B})\|_{L^{p_{1}}(\mathbb{P})}=\|U_{\frac{s-v}{t-s}}\frac{t-v} {t-s}(\zeta^{-H},\tilde{B})\|_{L^{p_{1}}(\mathbb{P})}.\] We set \(k_{1}:=\frac{s-v}{t-s}\) and \(k_{2}:=\frac{t-v}{t-s}\). 
We observe \[\|U_{k_{1},k_{2}}(\zeta^{-H},\tilde{B})\|_{L^{p_{1}}(\mathbb{P})}^{p_{1}}\] \[=\sum_{a\in\mathbb{Z}}\mathbb{E}\Big{[}U_{k_{1},k_{2}}(\zeta^{-H},\tilde{B})^{p_{1}}\mathbf{1}_{\{\tilde{B}_{k_{1}}\in(a-1,a]\}}\Big{]}\] \[=\sum_{a\in\mathbb{Z}}\mathbb{E}\Big{[}U_{k_{1},k_{2}}(\zeta^{-H},\tilde{B})^{p_{1}}\mathbf{1}_{\{\tilde{B}_{k_{1}}\in(a-1,a]\}}\mathbf{1}_{\{\max_{r\in[k_{1},k_{2}]}|\tilde{B}_{r}-\tilde{B}_{k_{1}}|\geq|a|-1\}}\Big{]}\] \[\leq\sum_{a\in\mathbb{Z}}\mathbb{E}[U_{k_{1},k_{2}}(\zeta^{-H},\tilde{B})^{p_{1}q_{2}}]^{\frac{1}{q_{2}}}\mathbb{P}(\max_{r\in[k_{1},k_{2}]}|\tilde{B}_{r}-\tilde{B}_{k_{1}}|\geq|a|-1)^{\frac{1}{q_{2}}}\]

**Lemma 3.16** (Combining estimates obtained so far).: _For every \(p\in(1,\infty)\), \(\kappa\in(1-p^{-1},1)\) and \(p_{1}\in(1,2)\), if \(\frac{t-s}{s-u}\) and \(\frac{s-u}{u-v}\) are sufficiently small, we have_ \[\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]=\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}+R^{1}_{v,u,s,t}(a)+R^{2}_{v,u,s,t}(a), \tag{38}\] _where_ \[\|R^{1}_{v,u,s,t}(a)\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta,\kappa}\mathbb{E}[U_{s,t}(\varepsilon,B-a)]^{1-\kappa}e^{-ca^{2}}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H} \tag{39}\] _with \(c\) being a constant depending only on \(H,\kappa,p\), and almost surely_ \[|R^{2}_{v,u,s,t}(a)|\lesssim_{p_{1},\zeta}e^{-\frac{1}{2p_{1}}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\frac{(t-u)^{H}}{\sigma_{Y+Z}}\Big{(}\frac{t-s}{s-v}\Big{)}^{H/p_{1}}+e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}}\Big{(}\frac{t-s}{s-v}\Big{)}^{H/p_{1}}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}. \tag{40}\] _In particular,_ \[\Big{\|}\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]-\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\Big{\|}_{L^{p}(\mathbb{P})}\\ \lesssim_{p,\zeta,\kappa}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}+\Big{(}\frac{t-s}{s-u}\Big{)}^{-H}\Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H}.
\tag{41}\] Proof.: In view of Lemma 3.11, we decompose \[\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]=R_{1}+R_{2}+R_{3}+R_{4}+R _{5},\] where \[R_{1}:=\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]- \mathbb{E}[U_{s,t}(\varepsilon,X_{s}+Y_{s}+Z)|\mathcal{F}_{v}],\] \[R_{2}:=e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}}\mathbb{E} \Big{[}e^{\frac{X_{s}Y_{s}}{\sigma_{Y}^{2}}}U_{s,t}(\varepsilon,Y_{s}+Z) \Big{|}\mathcal{F}_{v}\Big{]}-e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}} \mathbb{E}[U_{s,t}(\varepsilon,Y_{s}+Z)],\] \[R_{3}:=e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}}\mathbb{E} [U_{s,t}(\varepsilon,Y_{s}+Z)]-e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}} \mathbb{E}[U_{s,t}(\varepsilon,Y+Z)],\] \[R_{4}:=e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}}\mathbb{E} [U_{s,t}(\varepsilon,Y+Z)]-e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}} \mathbb{E}[U_{s,t}(\varepsilon,Y+Z)],\] \[R_{5}:=e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\mathbb{E }[U_{s,t}(\varepsilon,Y+Z)]=\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]e^{ -\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}.\] By Lemma 3.9, \[\|R_{1}\|_{L^{p}(\mathbb{P})}\lesssim_{H,p,\zeta}\mathbb{E}[U_{s,t}( \varepsilon,B-a)]^{1-\kappa}e^{-\alpha a^{2}}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}.\] By Lemma 3.14, \[|R_{3}|\lesssim_{\zeta,p_{1}}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}} \Big{(}\frac{t-s}{s-v}\Big{)}^{H/p_{1}}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}.\] To estimate \(R_{2}\), by Lemma 3.12, \[|R_{2}|\lesssim e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}})^{2}} \frac{|X_{s}|(t-u)^{H}}{\sigma_{Y}^{2}}e^{c(\frac{|X_{s}|(t-u)^{H}}{\sigma_{Y}^{ 2}})^{2}}\|U_{s,t}(\varepsilon,Y+Z)\|_{L^{p_{1}}(\mathbb{P})}.\] If \(\frac{t-u}{u-v}\) is sufficiently small, we have \[c(\frac{(t-u)^{H}}{\sigma_{Y}})^{2}\leq\frac{1}{2}-\frac{1}{2p_{1}},\] hence \[|R_{2}|\lesssim e^{-\frac{1}{2p_{1}}(\frac{X_{s}}{\sigma_{Y}})^{2}} \frac{|X_{s}|(t-u)^{H}}{\sigma_{Y}^{2}}\|U_{s,t}(\varepsilon,Y+Z)\|_{L^{p_{1}} (\mathbb{P})}.\] Using the estimate \(\sup_{\lambda\geq 0}{\lambda e^{-(\frac{1}{2p_{1}}-\frac{1}{2p_{1}^{2}})^{ \lambda^{2}}}}<\infty\) and Lemma 3.13, we get \[|R_{2}| \lesssim e^{-\frac{1}{2p_{1}^{2}}(\frac{X_{s}}{\sigma_{Y}})^{2}} \frac{(t-u)^{H}}{\sigma_{Y}}\Big{(}\frac{t-s}{s-v}\Big{)}^{H/p_{1}^{2}}\] \[\lesssim e^{-\frac{1}{2p_{1}^{2}}(\frac{X_{s}}{\sigma_{Y}+Z})^{2}} \frac{(t-u)^{H}}{\sigma_{Y+Z}}\Big{(}\frac{t-s}{s-v}\Big{)}^{H/p_{1}^{2}}.\] Finally, we estimate \(R_{4}\). By Lemma 3.15, we get \[|R_{4}|\lesssim_{p_{1}}\|U_{s,t}(\varepsilon,Y+Z)\|_{L^{1}(\mathbb{P})}e^{- \frac{1}{2p_{1}}(\frac{X_{s}}{\sigma_{Y}+Z})^{2}}\Big{(}\frac{s-u}{s-v}\Big{)} ^{2H}.\] By Lemma 3.13, we obtain \[|R_{4}| \lesssim e^{-\frac{1}{2p_{1}}(\frac{X_{s}}{\sigma_{Y}+Z})^{2}} \Big{(}\frac{s-u}{s-v}\Big{)}^{2H}\Big{(}\frac{t-s}{s-v}\Big{)}^{H/p_{1}^{2}}\] \[\lesssim e^{-\frac{1}{2p_{1}^{2}}(\frac{X_{s}}{\sigma_{Y}+Z})^{2}} \frac{(t-u)^{2H}}{\sigma_{Y+Z}}\Big{(}\frac{t-s}{s-v}\Big{)}^{H/p_{1}^{2}}.\] Setting \(R_{v,u,s,t}^{1}(a):=R_{1}\) and \(R_{v,u,s,t}^{2}(a):=R_{2}+R_{3}+R_{4}\), we get the estimates (39) and (40). In particular, by the trivial bound on the exponential function and Lemma 3.6, \[\|R_{v,u,s,t}^{1}(a)\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta}\Big{(}\frac{t-s} {s-u}\Big{)}^{1-H},\] \[|R_{v,u,s,t}^{2}(a)|\lesssim_{\zeta,\kappa}\Big{(}\frac{s-u}{t-s}\Big{)}^{H} \Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H}+\Big{(}\frac{t-s}{s-u}\Big{)}^{1- H}\Big{(}\frac{t-s}{s-v}\Big{)}^{(1-\kappa)H}.\] Since \(\frac{t-s}{s-v}\leq 1\), these bounds lead to (41). 
As a final ingredient in the proof of Lemma 3.7, we estimate \(\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]\). **Lemma 3.17** (Asymptotics on constants).: _For every \(\kappa\in(0,1)\), if \(\frac{t-s}{s-u}\) and \(\frac{s-u}{u-v}\) are sufficiently small, we have the estimate_ \[\Big{|}\sqrt{2\pi}\varepsilon^{-1}\sigma_{Y+Z}\mathbb{E}[U_{s-v,t -v}(\varepsilon,\tilde{B})]-\frac{1}{2}\mathbb{E}[\bar{K}_{0,\zeta}(1,B)] \Big{|}\\ \lesssim_{\kappa,\zeta}\Big{(}\frac{t-s}{s-u}\Big{)}^{-H}\Big{(} \frac{t-s}{s-v}\Big{)}^{(1-\kappa)H}+(t-s)^{-\kappa H}\Big{(}\frac{t-s}{s-u} \Big{)}^{1-H}.\] Proof.: By integrating (38) over \(\mathbb{R}\) with respect to \(a\), we get \[\int_{\mathbb{R}}\mathbb{E}[U_{s,t}(\varepsilon,B-a)]\mathrm{d}a =\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]\mathbb{E}\Big{[} \int_{\mathbb{R}}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\mathrm{d}a \Big{]}\\ +\mathbb{E}\Big{[}\int_{\mathbb{R}}R_{v,u,s,t}^{1}(a)\mathrm{d}a \Big{]}+\mathbb{E}\Big{[}\int_{\mathbb{R}}R_{v,u,s,t}^{2}(a)\mathrm{d}a\Big{]}. \tag{42}\] We will estimate each term of (42). By the scaling (Lemma 3.3), \[\int_{\mathbb{R}}\mathbb{E}[U_{s,t}(\varepsilon,B-a)]\mathrm{d}a =\int_{\mathbb{R}}\mathbb{E}[U_{\frac{s}{t-s}\zeta,\frac{t}{t-s} \zeta}(1,B-\zeta^{H}(t-s)^{-H}a)]\mathrm{d}a\] \[=\zeta^{-H}(t-s)^{H}\int_{\mathbb{R}}\mathbb{E}[U_{\frac{t}{t-s} \zeta,\frac{t}{t-s}\zeta}(1,B-a)]\mathrm{d}a\] \[=\varepsilon\int_{\mathbb{R}}\mathbb{E}[U_{\frac{t}{t-s}\zeta,\frac {t}{t-s}\zeta}(1,B-a)]\mathrm{d}a.\] We recall the downcrossing \(D_{s,t}(\varepsilon,w)\) from (11). By definition, we have \[\bar{K}_{s,t}(\varepsilon,w)=\int_{\mathbb{R}}\{U_{s,t}(\varepsilon,w-a)+D_{s, t}(\varepsilon,w-a)\}\mathrm{d}a\] Since \(D_{s,t}(\varepsilon,w)=U_{s,t}(\varepsilon,-w-\varepsilon)\) and \(B\stackrel{{\mathrm{d}}}{{=}}-B\), we obtain \[\int_{\mathbb{R}}\mathbb{E}[U_{\frac{s}{t-s}\zeta,\frac{t}{t-s} \zeta}(1,B-a)]\mathrm{d}a=\int_{\mathbb{R}}\mathbb{E}[D_{\frac{s}{t-s}\zeta, \frac{t}{t-s}\zeta}(1,B-a)]\mathrm{d}a=\frac{1}{2}\mathbb{E}[\bar{K}_{\frac{s }{t-s}\zeta,\frac{t}{t-s}\zeta}(1,B)].\] By the stationarity of \(\bar{K}\) (Lemma 2.6), \[\mathbb{E}[\bar{K}_{\frac{s}{t-s}\zeta,\frac{t}{t-s}\zeta}(1,B)] =\mathbb{E}[\bar{K}_{0,\zeta}(1,B)].\] Therefore, \[\int_{\mathbb{R}}\mathbb{E}[U_{s,t}(\varepsilon,B-a)]\mathrm{d}a =\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]}{2}\varepsilon. \tag{43}\] Recalling how \(X\) depends on \(a\) from (33), for any \(\sigma>0\) \[\int_{\mathbb{R}}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma})^{2}}\mathrm{d}a=\sqrt{ 2\pi}\sigma,\] in particular, we have \[\int_{\mathbb{R}}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y}+Z})^{2}} \mathrm{d}a=\sqrt{2\pi}\sigma_{Y+Z},\quad\int_{\mathbb{R}}e^{-\frac{1}{2}( \frac{X_{s}}{\sigma_{Y}})^{2}}\mathrm{d}a=\sqrt{2\pi}\sigma_{Y}. \tag{44}\] Combining (42), (43) and (44), we obtain \[\Big{|}\sqrt{2\pi}\varepsilon^{-1}\sigma_{Y+Z}\mathbb{E}[U_{s-v,t-v}( \varepsilon,\tilde{B})]-\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]}{2}\Big{|} \leq\varepsilon^{-1}\sum_{i=1,2}\int_{\mathbb{R}}\lVert R^{i}_{v,u,s,t}(a) \rVert_{L^{1}(\mathbb{P})}\mathrm{d}a.\] It remains to estimate the right-hand side. 
By (39), \[\lVert R^{1}_{v,u,s,t}(a)\rVert_{L^{1}(\mathbb{P})}\lesssim_{\zeta,\kappa}\mathbb{E}[U_{s,t}(\varepsilon,B-a)]^{1-\kappa}e^{-ca^{2}}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}.\] By Jensen's inequality and (43), \[\int_{\mathbb{R}}\mathbb{E}[U_{s,t}(\varepsilon,B-a)]^{1-\kappa}e^{-ca^{2}}\mathrm{d}a\lesssim_{\kappa}\Big{(}\int_{\mathbb{R}}\mathbb{E}[U_{s,t}(\varepsilon,B-a)]e^{-ca^{2}}\mathrm{d}a\Big{)}^{1-\kappa}\lesssim_{\zeta,\kappa}\varepsilon^{1-\kappa}.\] This gives an estimate for \(R^{1}\). To estimate \(R^{2}\) we use (40) and (44), and obtain \[\int_{\mathbb{R}}\mathbb{E}\left|R^{2}_{v,u,s,t}(a)\right|\mathrm{d}a=\mathbb{E}\int_{\mathbb{R}}\left|R^{2}_{v,u,s,t}(a)\right|\mathrm{d}a\] \[\lesssim_{\zeta,\kappa}(t-u)^{H}\Big{(}\frac{t-s}{s-v}\Big{)}^{(1-\kappa)H}+\sigma_{Y}\Big{(}\frac{t-s}{s-v}\Big{)}^{(1-\kappa)H}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}.\] Recalling (34), we have \(\sigma_{Y}\lesssim(s-v)^{H}\). Using also \(t-u\leq 2(s-u)\) and \(\varepsilon=(\frac{t-s}{\zeta})^{H}\), we get \[\varepsilon^{-1}\int_{\mathbb{R}}\lVert R^{2}_{v,u,s,t}(a)\rVert_{L^{1}(\mathbb{P})}\mathrm{d}a\lesssim_{\zeta,\kappa}\Big{(}\frac{t-s}{s-u}\Big{)}^{-H}\Big{(}\frac{t-s}{s-v}\Big{)}^{(1-\kappa)H}+\Big{(}\frac{t-s}{s-v}\Big{)}^{-\kappa H}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}.\] Noting that \((\frac{t-s}{s-v})^{-\kappa H}\leq(t-s)^{-\kappa H}\) (recall that we assume that \(0\leq v<s<t\leq 1\)), we conclude the proof. Proof of Lemma 3.7.: By the bound (41) in Lemma 3.16, we have \[\Big{\lVert}\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]-\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\varepsilon}{2\sqrt{2\pi}\sigma_{Y+Z}}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\Big{\rVert}_{L^{p}(\mathbb{P})}\] \[\lesssim_{p,\zeta,\kappa}\Big{\|}\Big{(}\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]-\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]}{2\sqrt{2\pi}\sigma_{Y+Z}}\varepsilon\Big{)}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\Big{\|}_{L^{p}(\mathbb{P})}\] \[+\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}+\Big{(}\frac{t-s}{s-u}\Big{)}^{-H}\Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H}\] if \(\frac{t-s}{s-u}\) and \(\frac{s-u}{u-v}\) are sufficiently small. By the bound \(\frac{\varepsilon}{\sigma_{Y+Z}}\lesssim(\frac{t-s}{s-v})^{H}\leq 1\) and Lemma 3.17 we have \[\Big{|}\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]-\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]}{2\sqrt{2\pi}\sigma_{Y+Z}}\varepsilon\Big{|}\leq\Big{|}\sqrt{2\pi}\varepsilon^{-1}\sigma_{Y+Z}\mathbb{E}[U_{s-v,t-v}(\varepsilon,\tilde{B})]-\frac{1}{2}\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\Big{|}\] \[\lesssim_{\zeta,\kappa}(t-s)^{-\kappa H}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}+\Big{(}\frac{t-s}{s-u}\Big{)}^{-H}\Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H}.\] Therefore, \[\Big{\|}\mathbb{E}[U_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]-\frac{\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\varepsilon}{2\sqrt{2\pi}\sigma_{Y+Z}}e^{-\frac{1}{2}(\frac{X_{s}}{\sigma_{Y+Z}})^{2}}\Big{\|}_{L^{p}(\mathbb{P})}\\ \lesssim_{p,\zeta,\kappa}(t-s)^{-\kappa H}\Big{(}\frac{t-s}{s-u}\Big{)}^{1-H}+\Big{(}\frac{t-s}{s-u}\Big{)}^{-H}\Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H}\] if \(\frac{t-s}{s-u}\) and \(\frac{s-u}{u-v}\) are sufficiently small. To optimize, we choose \(u\) so that \[\frac{t-s}{s-u}=\Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H}.\] Note that, as \(H<1/2\), the exponent \((2-\kappa)H\) is less than \(1\). Therefore, if \(\frac{t-s}{s-v}\) is sufficiently small, then \(\frac{t-s}{s-u}\) and \(\frac{s-u}{u-v}\) are sufficiently small as well. This gives the claimed bound.
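As a quick illustration of the objects controlled in this subsection, upcrossing counts are easy to evaluate on a sampled path, and Theorem 3.1 predicts that the rescaled counts \(\varepsilon^{\frac{1}{H}-1}U_{0,t}(\varepsilon,B-a)\) stabilize near \(\frac{\mathfrak{c}_{H}}{2}L_{t}(a)\) as \(\varepsilon\) decreases. The Python sketch below is ours and not part of the proof; the state-machine convention for counting completed upcrossings of the band \([a,a+\varepsilon]\), the Cholesky-based fBm sampler and all numerical parameters are illustrative assumptions, and on a coarse grid the counts are biased unless \(\varepsilon\) is much larger than a typical one-step increment.

```python
import numpy as np

rng = np.random.default_rng(1)

def fbm_path(n, H, T=1.0):
    # One fBm sample on n+1 grid points of [0, T] via Cholesky (small n only).
    t = np.linspace(0.0, T, n + 1)[1:]
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.concatenate(([0.0], np.linalg.cholesky(cov) @ rng.standard_normal(n)))

def upcrossings(w, a, eps):
    # Discrete surrogate of U_{0,T}(eps, w - a): a crossing is completed each
    # time the path, having visited level a, reaches level a + eps.
    count, armed = 0, w[0] <= a
    for x in w[1:]:
        if x <= a:
            armed = True
        elif armed and x >= a + eps:
            count += 1
            armed = False
    return count

H, n_steps = 0.4, 2 ** 12
w = fbm_path(n_steps, H)
for a in (0.0, 0.2):
    for eps in (0.4, 0.3, 0.2):
        u = upcrossings(w, a, eps)
        print(f"a={a:+.1f}  eps={eps:.2f}  U={u:4d}  "
              f"eps^(1/H-1)*U = {eps ** (1.0 / H - 1.0) * u:.3f}")
```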
**Remark 3.18**.: Recall \(\bar{U}\) from Notation 3.4. Since \[\int_{\mathbb{R}}\mathbb{E}[\bar{U}_{s,t}(\varepsilon,B-a)]\mathrm{d}a=\frac{ \mathbb{E}[\bar{K}_{0,\zeta}(1,B)]+1}{2}\varepsilon,\] we similarly obtain \[\mathbb{E}[\bar{U}_{s,t}(\varepsilon,B-a)|\mathcal{F}_{v}]=\frac{\mathbb{E}[ \bar{K}_{0,\zeta}(1,B)]+1}{2\sqrt{2\pi}\sigma_{Y+Z}}e^{-\frac{1}{2}(\frac{X_{ s}}{\sigma_{Y+Z}})^{2}}\varepsilon+\bar{R}_{v,s,t}\] with, provided that \(\frac{t-s}{s-v}\) is sufficiently small, \[\|\bar{R}_{v,s,t}\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta,\kappa}\Big{(}\frac {t-s}{s-v}\Big{)}^{(2-\kappa)H(1-H)}(t-s)^{-\kappa H}.\] #### 3.2.2 Estimates on the local time The following is the last technical ingredient for Theorem 3.1. **Lemma 3.19** (Local time approximation).: _Let \(H\in(0,1/2)\). We set_ \[\tilde{A}_{s,t} :=\mathbb{E}[\delta_{0}(B_{s}-a)|\mathcal{F}_{s-(t-s)}](t-s) \tag{45}\] \[=\sqrt{\frac{H}{\pi}}e^{-\frac{H}{(t-s)^{2H}}\mathbb{E}[B_{s}-a] \mathcal{F}_{s-(t-s)}]^{2}}(t-s)^{1-H}.\] _Then, there exists a \(\delta>0\) such that for any \(p<\infty\) and for any partition \(\pi\) of \([0,1]\),_ \[\Big{\|}L_{1}(a)-\sum_{[s,t]\in\pi}\tilde{A}_{s,t}\Big{\|}_{L^{p}(\mathbb{P})} \lesssim_{p}|\pi|^{\delta}.\] Proof.: We write \(L_{s,t}(a):=L_{t}(a)-L_{s}(a)\). We use the shifted stochastic sewing (Lemma 2.10). To this end, it suffices to check \[\|L_{s,t}(a)\|_{L^{p}(\mathbb{P})}\lesssim_{p}(t-s)^{1-H},\quad\|\tilde{A}_{s,t }\|_{L^{p}(\mathbb{P})}\lesssim_{p}(t-s)^{1-H} \tag{46}\] and \[\|\mathbb{E}[L_{s,t}(a)-\tilde{A}_{s,t}|\mathcal{F}_{v}]\|_{L^{p}(\mathbb{P})} \lesssim_{p}(s-v)^{-1-H}(t-s)^{2},\quad t-s\leq s-v. \tag{47}\] The estimate for \(L\) in (46) is well known and can be shown for example by (non-shifted) stochastic sewing with \(\Xi_{u,v}=\mathbb{E}[\int_{u}^{v}\delta(B_{r}-a)\mathrm{d}r|\mathcal{F}_{u}]\), and the estimate for \(\tilde{A}\) in (46) is not difficult to show. Hence, we focus on the estimate (47). In [10, Lemma 4.7], an estimate similar to (47) is obtained, but the exponents therein depend on \(p\). We slightly improve the argument. We have \[\mathbb{E}[L_{s,t}(a)-\tilde{A}_{s,t}|\mathcal{F}_{v}]=\sqrt{ \frac{H}{\pi}}\int_{s}^{t}\Big{\{}e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}-a| \mathcal{F}_{v}]^{2}}(r-v)^{-H}\\ -e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{s}-a|\mathcal{F}_{v}]^{2} }(s-v)^{-H}\Big{\}}\mathrm{d}r.\] For simplification, we replace \(B-a\) by \(B\). 
We decompose the integrand as \(R_{1}+R_{2}+R_{3}\), where \[R_{1} :=e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}( r-v)^{-H}-e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}(s-v)^{-H},\] \[R_{2} :=e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}( s-v)^{-H}-e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}(s-v)^{-H},\] \[R_{3} :=e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}( s-v)^{-H}-e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{s}|\mathcal{F}_{v}]^{2}}(s-v)^{-H}.\] To obtain (47), it suffices to show \[\|R_{1}\|_{L^{p}(\mathbb{P})}+\|R_{2}\|_{L^{p}(\mathbb{P})}+\|R_{3}\|_{L^{p}( \mathbb{P})}\lesssim_{p}(s-v)^{-1-H}(t-s).\] Since \[0\leq(s-v)^{-H}-(r-v)^{-H}\lesssim(s-v)^{-H-1}(r-s),\] we have \[|R_{1}|\lesssim(s-v)^{-H-1}(t-s).\] We observe \[e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}- e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}\] \[=e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}(1 -e^{-H((s-v)^{-2H}-(r-v)^{-2H})\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}})\] \[\lesssim e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v} ]^{2}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}((s-v)^{-2H}-(r-v)^{-2H})\] \[\lesssim e^{-\frac{H}{(r-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v }]^{2}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}(s-v)^{-2H-1}(r-s)\] \[\lesssim(r-v)^{2H}(s-v)^{-2H-1}(r-s)\] \[\lesssim(s-v)^{-1}(t-s),\] where in the last step we used that \((r-v)\leq(t-s)+(s-v)\leq 2(s-v)\). Hence, \[|R_{2}|\lesssim(s-v)^{-1-H}(t-s).\] Finally, we estimate \(R_{3}\). Suppose that \(\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}\leq\mathbb{E}[B_{s}|\mathcal{F}_{v}]^{2}\). Then, \[\left|e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{ 2}}-e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{s}|\mathcal{F}_{v}]^{2}}\right|\\ \leq e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2} }\frac{H}{(s-v)^{2H}}(\mathbb{E}[B_{s}|\mathcal{F}_{v}]^{2}-\mathbb{E}[B_{r}| \mathcal{F}_{v}]^{2}).\] Since \[\mathbb{E}[B_{s}|\mathcal{F}_{v}]^{2}-\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}=2 \mathbb{E}[B_{r}|\mathcal{F}_{v}](\mathbb{E}[B_{s}|\mathcal{F}_{v}]-\mathbb{E}[B _{r}|\mathcal{F}_{v}])+(\mathbb{E}[B_{s}|\mathcal{F}_{v}]-\mathbb{E}[B_{r}| \mathcal{F}_{v}])^{2}\] and \[e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}}(s-v)^{-H}| \mathbb{E}[B_{r}|\mathcal{F}_{v}]|\lesssim 1,\] we obtain \[\left|e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2 }}-e^{-\frac{H}{(s-v)^{2H}}\mathbb{E}[B_{s}|\mathcal{F}_{v}]^{2}}\right|\] \[\lesssim(s-v)^{-H}|\mathbb{E}[B_{s}|\mathcal{F}_{v}]-\mathbb{E}[ B_{r}|\mathcal{F}_{v}]|+(s-v)^{-2H}|\mathbb{E}[B_{s}|\mathcal{F}_{v}]-\mathbb{E}[B _{r}|\mathcal{F}_{v}]|^{2}.\] A similar estimate holds if \(\mathbb{E}[B_{r}|\mathcal{F}_{v}]^{2}\geq\mathbb{E}[B_{s}|\mathcal{F}_{v}]^{2}\). Therefore, it remains to note \[\|\mathbb{E}[B_{s}|\mathcal{F}_{v}]-\mathbb{E}[B_{r}|\mathcal{F}_{v}]\|_{L^{p }(\mathbb{P})}\lesssim_{p}(s-v)^{H-1}(t-s).\qed\] #### 3.2.3 Concluding estimates Now we can finish the proof of Theorem 3.1. Recall from Remark 3.8 that we can set \(T=1\). Let \(\pi\) be a partition of \([0,1]\). By Lemma 3.5, \[\varepsilon^{\frac{1}{H}-1}U_{0,1}(\varepsilon,B-a) \geq\sum_{[s,t]\in\pi}\varepsilon^{\frac{1}{H}-1}U_{s,t}(\varepsilon,B-a), \tag{48}\] \[\varepsilon^{\frac{1}{H}-1}U_{0,1}(\varepsilon,B-a) \leq\sum_{[s,t]\in\pi}\varepsilon^{\frac{1}{H}-1}\bar{U}_{s,t}( \varepsilon,B-a). 
\tag{49}\] Here and henceforth, \(\varepsilon\) is an independent parameter; unlike Subsection 3.2.1, we do not assume the relation (32). **Lemma 3.20** (Lower bound on \(U\)).: _Let \(H\in(0,1/2)\), \(p\in[2,\infty)\), \(\varepsilon\in(0,1)\) and \(\zeta\in[1,\infty)\). Then, we have_ \[\varepsilon^{\frac{1}{H}-1}U_{0,1}(\varepsilon,B-a)\geq\frac{1}{2\zeta}\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]L_{1}(a)-R_{\varepsilon},\] _where for some \(\delta\) depending only on \(H\) we have_ \[\|R_{\varepsilon}\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta}\varepsilon^{\delta}.\] Proof.: We define \(\tilde{A}\) by (45), and we set \[\hat{A}_{s,t}:=U_{s,t}(\zeta^{-H}(t-s)^{H},B-a)\Big{(}\frac{t-s}{\zeta}\Big{)}^{1-H}.\] By Lemma 3.6, we have \[\|\hat{A}_{s,t}\|_{L^{p}(\mathbb{P})}\lesssim(t-s)^{1-H}.\] By Lemma 3.7, \[\mathbb{E}[\hat{A}_{s,t}|\mathcal{F}_{v}]=\frac{1}{2\zeta}\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\sqrt{\frac{H}{\pi(s-v)^{2H}}}e^{-\frac{H\mathbb{E}[B_{s}-a|\mathcal{F}_{v}]^{2}}{(s-v)^{2H}}}(t-s)+R_{v,s,t}\] \[=\frac{1}{2\zeta}\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\mathbb{E}[\tilde{A}_{s,t}|\mathcal{F}_{v}]+R_{v,s,t},\] where \[\|R_{v,s,t}\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta,\kappa}\Big{(}\frac{t-s}{s-v}\Big{)}^{(2-\kappa)H(1-H)}(t-s)^{1-(1+\kappa)H}\] for any \(\kappa\in(0,1)\). Since \(H<1/2\), choosing \(\kappa\) sufficiently small, we can suppose that \[1-(1+\kappa)H>\frac{1}{2},\quad 1-(1+\kappa)H+(2-\kappa)H(1-H)>1.\] Hence, by Lemma 2.10, with some \(\delta=\delta(H)\), \[\Big{\|}\sum_{[s,t]\in\pi}\Big{(}\hat{A}_{s,t}-\frac{1}{2\zeta}\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]\tilde{A}_{s,t}\Big{)}\Big{\|}_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta}|\pi|^{\delta}.\] In particular, considering a partition of mesh size \(\zeta\varepsilon^{\frac{1}{H}}\), the claim follows in view of (48) and Lemma 3.19. **Lemma 3.21** (Upper bound on \(U\)).: _Let \(H\in(0,1/2)\), \(p\in[2,\infty)\), \(\varepsilon\in(0,1)\) and \(\zeta\in[1,\infty)\). Then, we have_ \[\varepsilon^{\frac{1}{H}-1}U_{0,1}(\varepsilon,B-a)\leq\frac{1}{2\zeta}(\mathbb{E}[\bar{K}_{0,\zeta}(1,B)]+1)L_{1}(a)+\bar{R}_{\varepsilon},\] _where for some \(\delta\) depending only on \(H\) we have_ \[\|\bar{R}_{\varepsilon}\|_{L^{p}(\mathbb{P})}\lesssim_{p,\zeta}\varepsilon^{\delta}.\] Proof.: In view of Remark 3.18 and (49), the proof is similar to Lemma 3.20. Proof of Theorem 3.1.: It readily follows from Lemma 3.20, Lemma 3.21 and the estimate (22).

### Uniform convergence

A naive application of Theorem 3.1 yields that, by the Borel-Cantelli lemma, for any \(a\in\mathbb{R}\) and for any \(\mathbf{\varepsilon}=(\varepsilon_{n})_{n=1}^{\infty}\) with polynomial decay, there exists a measurable set \(\Omega_{a,\mathbf{\varepsilon}}\) such that \(\mathbb{P}(B\in\Omega_{a,\mathbf{\varepsilon}})=1\) and for every \(w\in\Omega_{a,\mathbf{\varepsilon}}\) the limit \[\lim_{n\to\infty}\varepsilon_{n}^{\frac{1}{H}-1}U_{0,t}(\varepsilon_{n},w-a)\] exists for every \(t\geq 0\). However, as observed by Chacon et al. [1], the quantitative estimate in Theorem 3.1 implies more strongly that we can take \(\Omega_{a,\mathbf{\varepsilon}}\) uniformly over \(a\) and \(\mathbf{\varepsilon}\). Furthermore, we can remove the polynomial decay condition. The arguments below are essentially given in [1] and [1], but we repeat them for the reader's convenience. We begin with the following lemma. **Lemma 3.22** (Uniform convergence over grids).: _Let \(H\in(0,1/2)\) and \(t\in(0,\infty)\)._
We define the grid_ \[G_{k}:=\{ik^{-7}:i\in\mathbb{Z},|i|\leq k^{8}\},\quad k\in\mathbb{N}\cup\{0\}.\] _We then have_ \[\lim_{k\to\infty}\max_{x\in G_{k}}\bigl{|}k^{-6(\frac{1}{H}-1)}U_{0,t}(k^{-6}, B-x)-\frac{\mathfrak{c}_{H}}{2}L_{t}(x)\bigr{|}=0\quad\text{almost surely.}\] Proof.: In the notation of Theorem 3.1, we have \[\max_{x\in G_{k}}\bigl{|}k^{-6(\frac{1}{H}-1)}U_{0,t}(k^{-6},B-x)-\frac{ \mathfrak{c}_{H}}{2}L_{t}(x)\bigr{|}\leq\zeta^{-1}\sup_{x\in\mathbb{R}}L_{t}(x )+\max_{x\in G_{k}}R_{k,\zeta,x}.\] Since \(x\mapsto L_{t}(x)\) is continuous and \(L_{t}(\cdot)\) is supported on \[\{x\in\mathbb{R}:|x|\leq\|B\|_{L^{\infty}([0,t])}\},\] we see that \(\sup_{x\in\mathbb{R}}L_{t}(x)<\infty\) a.s. By Theorem 3.1, \[\|\max_{x\in G_{k}}R_{k,\zeta,x}\|_{p}^{p}\leq\sum_{x\in G_{k}}\|R_{k,\zeta,x}\|_ {p}^{p}\lesssim_{p,\zeta}k^{-p\delta+8},\] where \(\delta\) is independent of \(p\). Since \(p\) can be arbitrarily large, the Borel-Cantelli lemma implies that almost surely we have \[\lim_{k\to\infty}\max_{x\in G_{k}}R_{k,\zeta,x}=0\] and \[\limsup_{k\to\infty}\max_{x\in G_{k}}\bigl{|}k^{-6(\frac{1}{H}-1)}U_{0,t}(k^{ -6},B-x)-\frac{\mathfrak{c}_{H}}{2}L_{t}(x)\bigr{|}\leq\zeta^{-1}\sup_{x\in \mathbb{R}}L_{t}(x).\] Since \(\zeta\) is arbitrary, we complete the proof. **Theorem 3.23** (Uniform convergence to local time, [1, Theorem II.2.4]).: _Let \(H\in(0,1/2)\) and \(T\in(0,\infty)\). Almost surely, we have_ \[\lim_{\varepsilon\to 0}\sup_{t\leq T}\sup_{a\in\mathbb{R}}\bigl{|}\varepsilon^{ \frac{1}{H}-1}U_{0,t}(\varepsilon,B-a)-\frac{\mathfrak{c}_{H}}{2}L_{t}(a) \bigr{|}=0.\] Proof.: Firstly, by an elementary argument using monotonicity of \(U\) and continuity of \(L\)[1, 10]. Note after Theorem II.2.4], it suffices to prove that for each \(t\in(0,\infty)\) we have \[\lim_{\varepsilon\to 0}\sup_{a\in\mathbb{R}}\bigl{|}\varepsilon^{\frac{1}{H}-1 }U_{0,t}(\varepsilon,B-a)-\frac{\mathfrak{c}_{H}}{2}L_{t}(a)\bigr{|}=0\quad \text{a.s.}\] By Lemma 3.22, we can find an \(\Omega_{1}\subseteq\Omega\) with \(\mathbb{P}(\Omega_{1})=1\) such that for any \(\delta\in(0,1)\) and \(\omega\in\Omega_{1}\) there exists an \(N=N(\delta,\omega)\) with the following inequalities: \[(k-1)^{-6}-(k-1)^{-7}>k^{-6}\quad\forall k\geq N, \tag{50}\] \[\|B(\omega)\|_{L^{\infty}(0,t)}<N-1,\] (51) \[\sup_{k\geq N}\max_{x\in G_{k}}\bigl{|}k^{-6(\frac{1}{H}-1)}U_{0, t}(k^{-6},B(\omega)-x)-\frac{\mathfrak{c}_{H}}{2}L_{t}(x)(\omega)\bigr{|}<\delta. \tag{52}\] The argument below holds on the event \(\Omega_{1}\). For \(\varepsilon\leq(N+1)^{-6}\), there exists a unique \(m=m_{\varepsilon}\geq N+1\) such that \[(m+1)^{-6}<\varepsilon\leq m^{-6}.\] If \(|x|\geq N-1\), then by (51) we have \(L_{t}^{H}(x)=0\). On the other hand, if \(|x|<N-1\), then we define \[x_{k}:=\max_{y\in G_{k}}\bigl{\{}y\leq x\bigr{\}}\] for all \(k\geq N\). Since \(x<x_{m-1}+(m-1)^{-7}\), we have * \(x_{m-1}\leq x<x+\varepsilon<x_{m-1}+(m-1)^{-7}+m^{-6}\leq x_{m-1}+(m-1)^{-6}\) and * \(x<x_{m+2}+(m+2)^{-7}<x_{m+2}+(m+2)^{-7}+(m+2)^{-6}<x+\varepsilon\), where (50) are applied in both items. Hence, defining the two sets \(I_{m-1}\) and \(\bar{I}_{m+2}\) as \[I_{m-1}:=\left[x_{m-1},x_{m-1}+(m-1)^{-6}\right],\quad\bar{I}_{m+2}:=\left[ \bar{x}_{m+2},\bar{x}_{m+2}+(m+2)^{-6}\right],\] where \(\bar{x}_{m+2}:=x_{m+2}+(m+2)^{-7}\), we have the inclusions \[\bar{I}_{m+2}\subseteq[x,x+\varepsilon]\subseteq I_{m-1}. \tag{53}\] Now we move to the bound on \(U\). 
We first observe the monotonicity of \(U\): \[U_{0,t}(\varepsilon_{1},B-x_{1})\leq U_{0,t}(\varepsilon_{2},B-x_{2})\] provided that \([x_{2},x_{2}+\varepsilon_{2}]\subseteq[x_{1},x_{1}+\varepsilon_{1}]\). The relation (53) thus yields \[U_{0,t}((m-1)^{-6},B-x_{m-1})\leq U_{0,t}(\varepsilon,B-x)\leq U_{0,t}((m+2)^{ -6},B-\bar{x}_{m+2}).\] Hence, \[\sup_{x\in\mathbb{R}}\bigl{|}\varepsilon^{\frac{1}{H}-1}U_{0,t}(\varepsilon,B- x)-\frac{\mathfrak{C}_{H}}{2}L_{t}(x)\bigr{|}\leq A_{\varepsilon}+\bar{A}_{ \varepsilon}+\frac{\mathfrak{C}_{H}}{2}\sup_{x,y|x-y|\leq 2\varepsilon} \bigl{|}L_{t}(x)-L_{t}(y)\bigr{|}, \tag{54}\] where \[A_{\varepsilon} :=\sup_{x\in G_{m_{\varepsilon}-1}}\bigl{|}\varepsilon^{\frac{1 }{H}-1}U_{0,t}((m_{\varepsilon}-1)^{-6},B-x)-\frac{\mathfrak{C}_{H}}{2}L_{t}( x)\bigr{|},\] \[\bar{A}_{\varepsilon} :=\sup_{x\in G_{m_{\varepsilon}+2}}\bigl{|}\varepsilon^{\frac{1 }{H}-1}U_{0,t}((m_{\varepsilon}+2)^{-6},B-x)-\frac{\mathfrak{C}_{H}}{2}L_{t}( x)\bigr{|}.\] Due to the uniform continuity of \(L_{t}(\cdot)\), the last term of (54) converges to \(0\). To estimate \(A_{\varepsilon}\), we observe the bound \[A_{\varepsilon}\leq\sup_{x\in G_{m_{\varepsilon}-1}}\bigl{|}(m_ {\varepsilon}-1)^{-6(\frac{1}{H}-1)}U_{0,t}((m_{\varepsilon}-1)^{-6},B-x)- \frac{\mathfrak{C}_{H}}{2}L_{t}(x)\bigr{|}\\ +\sup_{x\in G_{m_{\varepsilon}-1}}\bigl{|}\{(m_{\varepsilon}-1)^ {-6(\frac{1}{H}-1)}-\varepsilon^{\frac{1}{H}-1}\}U_{0,t}((m_{\varepsilon}-1)^ {-6},B-x)\bigr{|}.\] By (52), \[\limsup_{\varepsilon\to 0}\sup_{x\in G_{m_{\varepsilon}-1}}\bigl{|}(m_{ \varepsilon}-1)^{-6(\frac{1}{H}-1)}U_{0,t}((m_{\varepsilon}-1)^{-6},B-x)- \frac{\mathfrak{C}_{H}}{2}L_{t}(x)\bigr{|}\leq\delta.\] On the other hand, \[|(m_{\varepsilon}-1)^{-6(\frac{1}{H}-1)}-\varepsilon^{\frac{1}{H }-1}| \lesssim|\varepsilon-(m_{\varepsilon}-1)^{-6}|^{\frac{1}{H}-1}\] \[\lesssim(m_{\varepsilon}-1)^{-7(\frac{1}{H}-1)}.\] As (52) implies \[\sup_{\varepsilon\in(0,1)}\sup_{x\in G_{m_{\varepsilon}-1}}(m_{\varepsilon}-1 )^{-6(\frac{1}{H}-1)}U_{0,t}((m_{\varepsilon}-1)^{-6},B-x)<\infty,\] we obtain \[\lim_{\varepsilon\to 0}\sup_{x\in G_{m_{\varepsilon}-1}}\bigl{|}\{(m_{ \varepsilon}-1)^{-6(\frac{1}{H}-1)}-\varepsilon^{\frac{1}{H}-1}\}U_{0,t}((m_{ \varepsilon}-1)^{-6},B-x)\bigr{|}=0.\] Hence, we get \(\limsup_{\varepsilon\to 0}A_{\varepsilon}\leq\delta\), and we get a similar estimate for \(\bar{A}_{\varepsilon}\). Recalling (54), this implies \[\lim_{\varepsilon\to 0}\sup_{x\in\mathbb{R}}\bigl{|}\varepsilon^{\frac{1}{H}-1}U_{0,t}(\varepsilon,B-x)-\frac{\mathfrak{C}_{H}}{2}L_{t}(x)\bigr{|}\leq 2\delta.\] Since \(\delta\) is arbitrary, we conclude the proof. Recall the total number \(D_{s,t}(\varepsilon,w)\) of downcrossings from (11) and the variation \(V_{s,t}(\mathrm{P},w)\) along Lebesgue partition \(\mathrm{P}\) from (12). Since the total number of upcrossings and that of downcrossings can differ by at most \(1\), almost surely we have \[\lim_{\varepsilon\to 0}\sup_{t\leq T}\sup_{a\in\mathbb{R}}\bigl{|}\varepsilon^{ \frac{1}{H}-1}D_{0,t}(\varepsilon,B-a)-\frac{\mathfrak{C}_{H}}{2}L_{t}(a) \bigr{|}=0\quad\forall T\geq 0,\] or \[\lim_{\varepsilon\to 0}\sup_{t\leq T}\sup_{a\in\mathbb{R}}\bigl{|}\varepsilon^{ \frac{1}{H}-1}(U_{0,t}(\varepsilon,B-a)+D_{0,t}(\varepsilon,B-a))-\mathfrak{C} _{H}L_{t}(a)\bigr{|}=0\quad\forall T\geq 0. \tag{55}\] **Theorem 3.24** (Uniform convergence of variation, [12, Proposition III.2.1]).: _Let \(H\in(0,1/2)\) and \(T\in(0,\infty)\). 
Almost surely, we have_ \[\lim_{\varepsilon\to 0}\sup_{t\leq T\ \mathrm{P:\ partition\ of\ R},}|V_{0,t}( \mathrm{P},B)-\mathfrak{c}_{H}t|=0.\] Proof.: We have the identity \[V_{0,t}(\mathrm{P},B)=\int_{\mathbb{R}}\sum_{[a,b]\in\mathrm{P}}(b-a)^{\frac{ 1}{H}-1}\{U_{0,t}(b-a,B-a)+D_{0,t}(b-a,B-a)\}\mathbf{1}_{(a,b]}(x)\mathrm{d}x.\] Setting \(I:=[-\|B\|_{L^{\infty}([0,T])},\|B\|_{L^{\infty}([0,T])}]\), the occupation density formula yields \[|V_{0,t}(\mathrm{P},B)-\mathfrak{c}_{H}t|\\ \leq\int_{I}\sum_{[a,b]\in\mathrm{P}}(b-a)^{\frac{1}{H}-1}|U_{0,t }(b-a,B-a)+D_{0,t}(b-a,B-a)-\mathfrak{c}_{H}L_{t}(x)|\mathbf{1}_{(a,b]}(x) \mathrm{d}x.\] For \(x\in[a,b]\), we have the bound \[|U_{0,t}(b-a,B-a)+D_{0,t}(b-a,B-a)-\mathfrak{c}_{H}L_{t}(x)|\\ \leq|U_{0,t}(b-a,B-a)+D_{0,t}(b-a,B-a)-\mathfrak{c}_{H}L_{t}(a)|\\ +\mathfrak{c}_{H}\sup_{a_{1},a_{2}:|a_{1}-a_{2}|\leq|\mathrm{P}| }|L_{t}(a_{1})-L_{t}(a_{2})|.\] Therefore, \[|V_{0,t}(\mathrm{P},B)-\mathfrak{c}_{H}t|\leq|I|\times\Big{\{} \sup_{\delta\leq|\mathrm{P}|}\sup_{a\in\mathbb{R}} \lvert U_{0,t}(\delta,B-a)+D_{0,t}(\delta,B-a)-\mathfrak{c}_{H}L_{t}(a)|\\ +\mathfrak{c}_{H}\sup_{|a_{1}-a_{2}|\leq|\mathrm{P}|}|L_{t}(a_{1} )-L_{t}(a_{2})|\Big{\}}.\] In view of (55) and the uniform continuity of \(L\), the claim follows. ### Horizontally rough function In a recent work [13], a concept of _quadratic roughness_ has been introduced. This pathwise quadratic roughness property ensures an invariant notion of quadratic variation, i.e., given two (appropriate) partition sequences \(\mathbf{\pi}\) and \(\mathbf{\sigma}\), quadratic roughness of function \(x\) implies \([x]_{\mathbf{\pi}}=[x]_{\mathbf{\sigma}}\). As expected, Brownian motion satisfies this quadratic roughness property. In fact, for any _deterministic_ partition sequence \(\mathbf{\pi}=(\pi^{n})\) with \(|\pi^{n}|\log n\to 0\), we have \([B^{1/2}]_{\mathbf{\pi}}(t)=t\) almost surely. That is there exists \(\Omega_{\mathbf{\pi}}\subset\Omega\) of full \(\mathbb{P}\)-measure such that for all \(\omega\in\Omega_{\mathbf{\pi}},\ [\omega]_{\mathbf{\pi}}(t)=t\). On the other hand, by [10] there exists for each \(\omega\in\Omega\) a partition \(\pi=\pi(\omega)\) such that \([\omega]_{\pi(\omega)}(t)=0\), and therefore \(\cap_{\mathbf{\pi}}\Omega_{\mathbf{\pi}}=\varnothing\). So even for Brownian motion quadratic roughness does not ensure an almost sure invariance of quadratic variation across all deterministic partitions (partitions purely on time variable). So an obvious question is: are there any notion of roughness which ensures almost sure invariance of quadratic variation across a large (uncountable) class of partition sequence? To answer this question we define the notion of _horizontal roughness_ (the word 'horizontal' represents path dependent Lebesgue type partitions constructed from level crossings). **Definition 3.25** (Horizontally rough: an invariance notion for \(p\)-th variation).: A function \(x\in C^{0}([0,T],\mathbb{R})\) is called _horizontally rough_ if for any \(t\in[0,T]\), \(\rho\in\mathbb{R}\) and \(\varepsilon=\{\varepsilon_{n}\}\) with \(\varepsilon_{n}\downarrow 0\), \[\lim_{n\to\infty}\frac{K_{0,t}(\varepsilon_{n},x+\rho)}{K_{0,t}(\varepsilon_{n},x)}=1.\] This notion of horizontal roughness is completely pathwise and scale invariant. 
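The crossing-count asymptotics behind this subsection can be made concrete numerically. The following is a minimal sketch of Theorem 3.23 (and, in spirit, of the level-shift uniformity discussed above): it samples a fractional Brownian path with a simple Cholesky construction, counts completed upcrossings of the interval \([a,a+\varepsilon]\), and rescales the counts by \(\varepsilon^{1/H-1}\). The sampler, the grid resolution, the chosen \(H\) and levels are all illustrative assumptions, and no attempt is made to recover the constant \(\mathfrak{c}_{H}\); the point is only that the rescaled counts stabilise as \(\varepsilon\) decreases, provided the time grid is fine relative to \(\varepsilon\).

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Sample fractional Brownian motion on [0, T] at n+1 points via a Cholesky
    factorisation of the exact covariance (simple but O(n^3); a circulant-embedding
    sampler would scale much better)."""
    t = np.linspace(0.0, T, n + 1)[1:]
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    z = np.random.default_rng(seed).standard_normal(n)
    return np.concatenate([[0.0], L @ z])

def upcrossings(path, a, eps):
    """Completed upcrossings of [a, a + eps]: counted each time the path, having
    been <= a, subsequently reaches >= a + eps."""
    count, below = 0, False
    for x in path:
        if x <= a:
            below = True
        elif below and x >= a + eps:
            count, below = count + 1, False
    return count

H = 0.4                                    # illustrative Hurst index < 1/2
B = fbm_cholesky(n=3000, H=H, seed=1)      # one discretised path on [0, 1]
for a in (0.0, 0.2):                       # rescaled counts approximate (c_H/2) L_1(a)
    for eps in (0.4, 0.2, 0.1, 0.05):
        U = upcrossings(B, a, eps)
        print(f"a={a:4.1f}  eps={eps:5.2f}  U={U:4d}  scaled={eps**(1/H - 1) * U:7.3f}")
```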
If a continuous function \(x\) has the horizontally rough property and \(x\) also has \(p\)-th variation along a uniform Lebesgue partition, then \(x\) has \(p\)-th variation along all uniform Lebesgue partition and the \(p\)-th variation is the same across different uniform Lebesgue partitions. Unlike the notion of quadratic roughness defined in [13], this notion of horizontally rough ensures an invariant notion of \(p\)-th variation across a large class of Lebesgue type partitions almost surely i.e. there is a common measure zero set outside which the \(p\)-th variation along any uniform Lebesgue type partition is the same. **Example 3.26**.: The following are examples of horizontally rough functions. * From the definition any linear function is horizontally rough. * Using results from [1, 13], we can show that Brownian motion and more generally continuous semimartingales are horizontally rough almost surely. * Theorem 1.4 shows that fractional Brownian motion with Hurst index \(H<1/2\) is horizontally rough almost surely. It is interesting to construct horizontally rough functions sampled from a non-Gaussian, non-semimartingale process. We leave this and further properties of such functions as a future work. ## Appendix A An estimate on log-normal distribution **Lemma A.1**.: _Let \(Z\) be a standard normal distribution, and let \(q\in[2,\infty)\). We then have_ \[\|e^{\lambda Z-\frac{\lambda^{2}}{2}}-1\|_{L^{q}(\mathbb{P})}\leq\sqrt{2\log 2 }\sqrt{q-1}\lambda e^{(q-1)\lambda^{2}},\quad\forall\lambda\geq 0.\] Proof.: By [16, Equation (1.1)], we have \[e^{\lambda Z-\frac{\lambda^{2}}{2}}-1=\sum_{n=1}^{\infty}\lambda^{n}H_{n}(Z),\] where \(H_{n}\) is \(n\)th Hermite polynomial. By the triangle inequality, \[\|e^{\lambda Z-\frac{\lambda^{2}}{2}}-1\|_{L^{q}(\mathbb{P})}\leq\sum_{n=1}^{ \infty}\lambda^{n}\|H_{n}(Z)\|_{L^{q}(\mathbb{P})}.\] The hypercontractivity [16, Theorem 1.4.1] implies that \[\|H_{n}(Z)\|_{L^{q}(\mathbb{P})}\leq(q-1)^{\frac{n}{2}}\|H_{n}(Z)\|_{L^{2}( \mathbb{P})}.\] Furthermore, by [16, Lemma 1.1.1], \[\|H_{n}(Z)\|_{L^{2}(\mathbb{P})}=\frac{1}{\sqrt{n!}}.\] Therefore, \[\|e^{\lambda Z-\frac{\lambda^{2}}{2}}-1\|_{L^{q}(\mathbb{P})} \leq\sqrt{q-1}\lambda\sum_{n=0}^{\infty}\frac{\lambda^{n}(q-1)^{ \frac{n}{2}}}{\sqrt{n!}}\] \[\leq\sqrt{q-1}\lambda\Big{(}\sum_{n=0}^{\infty}\frac{2^{-n}}{n+1 }\Big{)}^{\frac{1}{2}}\Big{(}\sum_{n=0}^{\infty}\frac{2^{n}\lambda^{2n}(q-1)^ {n}}{n!}\Big{)}^{\frac{1}{2}}\] \[\leq\sqrt{q-1}\lambda\sqrt{2\log 2}e^{(q-1)\lambda^{2}}.\qed\]
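As a quick Monte Carlo sanity check of Lemma A.1, one can compare the empirical \(L^{q}\)-norm of \(\mathrm{e}^{\lambda Z-\lambda^{2}/2}-1\) with the stated bound; the sample size and the tested \((q,\lambda)\) pairs below are arbitrary choices, and the bound is of course not claimed to be sharp.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal(2_000_000)         # samples of a standard normal variable

def lhs(lam, q):
    """Empirical ||exp(lam*Z - lam^2/2) - 1||_{L^q}."""
    return np.mean(np.abs(np.exp(lam * Z - 0.5 * lam**2) - 1.0)**q)**(1.0 / q)

def rhs(lam, q):
    """Right-hand side of the bound in Lemma A.1."""
    return np.sqrt(2 * np.log(2)) * np.sqrt(q - 1) * lam * np.exp((q - 1) * lam**2)

for q in (2, 4):
    for lam in (0.1, 0.5, 1.0):
        print(f"q={q}  lambda={lam:.1f}  lhs={lhs(lam, q):8.4f}  <=  rhs={rhs(lam, q):8.4f}")
```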
2302.12277
What is the super-sample covariance? A fresh perspective for second-order shear statistics
Cosmological analyses of second-order weak lensing statistics require precise and accurate covariance estimates. These covariances are impacted by two sometimes neglected terms: A negative contribution to the Gaussian covariance due to finite survey area and the super-sample covariance (SSC) which for the power spectrum contains the impact by Fourier modes larger than the survey window. We show here that these two effects are connected and can be seen as correction terms to the "large-field-approximation", the asymptotic case of an infinitely large survey area. We describe the two terms collectively as "Finite-Field-Terms". We derive the covariance of second-order shear statistics from first principles. For this, we use an estimator in real space without relying on an estimator for the power spectrum. The resulting covariance does not scale inversely with the survey area, as naively assumed. This scaling is only correct under the large-field approximation when the contribution of the finite-field terms tends to zero. Furthermore, all parts of the covariance, not only the SSC, depend on the power- and trispectrum at all modes, including those larger than the survey. We also show that it is generally impossible to transform an estimate for the power spectrum covariance into the covariance of a real-space statistic. Such a transformation is only possible in the asymptotic case of the "large-field approximation". Additionally, we find that the total covariance of a real-space statistic can be calculated using correlation functions estimates on spatial scales smaller than the survey window. Consequently, estimating covariances of real-space statistics, in principle, does not require information on spatial scales larger than the survey area. We demonstrate that this covariance estimation method is equivalent to the standard sample covariance method.
Laila Linke, Pierre A. Burger, Sven Heydenreich, Lucas Porth, Peter Schneider
2023-02-23T19:03:18Z
http://arxiv.org/abs/2302.12277v3
# What is the super-sample covariance? A fresh perspective for second-order shear statistics. ###### Abstract Cosmological analyses of second-order weak lensing statistics require precise and accurate covariance estimates. One significant but sometimes misunderstood component of the covariance is the so-called super-sample covariance (SSC). The SSC is regularly defined as the covariance part capturing all impact of modes outside of a survey area. However, we show here that this intuition is incorrect for real-space statistics. We derive the covariance of second-order shear statistics from first principles. For this, we use an estimator in real space without relying on an estimator for the power spectrum. All parts of the covariance, not only the SSC, depend on the power- and trispectrum at all modes, including those larger than the survey. The defining feature of the SSC is not its dependence on "super-survey modes" but its behaviour under the 'large-field approximation', i.e., the limiting case of a very large survey area. While the non-SSC parts of the covariance scale with the inverse survey area in this limit, the SSC completely vanishes. We also show that it is generally impossible to transform an estimate for the power spectrum covariance to the covariance of a real-space statistic. Such a transformation is only possible in the limiting case of the 'large-field approximation'. Additionally, we find that the total covariance of a real-space statistic can be completely estimated using correlation functions measured only inside the survey area. Consequently, estimating covariances of real-space statistics, in principle, does not require information outside the survey boundaries. ## 1 Introduction Second-order statistics of cosmic shear are essential tools for cosmological analyses (Heymans et al., 2021; Hikage et al., 2019, 2022). Inference of cosmological parameters from these statistics requires a robust understanding of their covariances. While covariance models for second-order shear statistics have been derived and validated for over a decade (Joachimi et al., 2008; Takada and Hu, 2013), one part of these models is often ill-understood, namely the super-sample covariance (SSC). In this paper, we will discuss the origin and interpretation of the SSC, with a particular focus on real-space statistics. Analytic models for the covariance of second-order statistics are usually expressed in terms of the power spectrum covariance. The power spectrum covariance can be divided into three terms: A Gaussian and Intra-survey Non-Gaussian part, which depend only on the power- and trispectrum at \(\ell\)-modes within a survey area, and the SSC, which also depends on \(\ell\)-modes outside the survey (Takada and Hu, 2013). While the first two terms scale linearly with the inverse survey area, the SSC shows a complicated dependence on the survey window function and calculating it is generally more complex than the other covariance terms (Lacasa and Grain, 2019; Gouyou Beauchamps et al., 2022). Additionally, N-body simulations with small box sizes and periodic boundary conditions cannot fully reproduce the SSC of the matter power spectrum, as they do not include the small \(\ell\)-modes (large spatial scales) on which the SSC depend (de Putter et al., 2012; Takahashi et al., 2009). Instead, one needs to use either large boxes, from which only a small region is taken to estimate the power spectrum (e.g. 
Bayer et al., 2022), or'separate universe simulations', where multiple simulations with varying mean densities are simulated (Li et al., 2014). However, while the SSC for the power spectrum can be interpreted as capturing the clustering information at \(\ell\)-modes outside a survey, the same interpretation is not necessarily accurate for statistics in real space. These real-space statistics, such as shear correlation functions (Kaiser, 1992; Amon et al., 2022) or COSEBIs (Schneider et al., 2010; Asgari et al., 2020) are preferred for cosmological analyses, as they can be directly estimated from survey data. We will here derive the covariance for the estimator of general second-order shear statistics in real space to find an interpretation for the SSC of these statistics. We will show the following five key findings. First, for a 'localized'1 second-order shear statistic \(\Xi\), the full covariance \(C_{\pm}\) can be obtained from correlation functions of the convergence field smoothed according to the chosen statistic. These correlation functions need to be known only within the survey area. No convergence information outside the survey area needs to be known. This includes the SSC term. Second, the exact covariance of \(\Xi\) does not scale inversely with the survey area. Instead, it also depends on the survey geometry. Third, all parts of \(C_{\Xi}\) depend on the power- and trispectrum at \(\ell\)-modes within and outside of the survey area. Therefore, the SSC term does not, on its own, capture the clustering information at \(\ell\)-modes outside a survey. In particular, even for Gaussian fields, an SSC term is present. Fourth, the SSC of both the power spectrum and \(\Xi\) is given by the difference between the exact covariance and an approximation of the covariance for an infinitely broad survey window function. Finally, it is, in general, not possible to transform the covariance of the power spectrum to the covariance of a real-space statistic. Such a conversion requires the assumption of an infinitely broad survey window. This paper is structured as follows: In Sect. 2, we discuss the covariance of the power spectrum and show the origin of the SSC. In Sect. 3, we introduce an estimator for a real-space statistic \(\Xi\) and show how its covariance is related to correlation functions of \(\Xi\). In Sect. 4, we connect the covariance of \(\Xi\) to the power- and trispectrum and show that the SSC for \(\Xi\) is given by the difference between the exact covariance and an approximation of the covariance for an infinitely broad survey window function. We conclude in Sect. 5. Throughout this paper, we are working in the flat sky limit. Any figures and calculations are performed using the parameters and simulations described in Appendix B. Note that we are not explicitly giving the dependence of the covariances on shape noise as its effect can be included by replacing in all covariance expressions the power spectrum \(P\) by \(P+\sigma_{e}^{2}/2n\), where \(n\) is the galaxy number density and \(\sigma_{e}^{2}\) the two-component ellipticity dispersion. ## 2 Power spectrum covariance Before we consider the covariance of a real space statistic, we first give an overview of the covariance of the power spectrum and the origin of the SSC, based on Takada & Hu (2013). 
We are considering here the power spectrum of the weak lensing convergence, which is the normalised surface mass density, related to the density contrast \(\delta(\chi\mathbf{\vartheta},\chi)\) at angular position \(\mathbf{\vartheta}\) and comoving distance \(\chi\). In a flat universe, it is \[\kappa(\mathbf{\vartheta})=\frac{3H_{0}^{2}\Omega_{\rm m}}{2c^{2}}\int_{0}^{\infty }\mathrm{d}\chi\;\;q(\chi)\;\frac{\delta(\chi\mathbf{\vartheta},\chi)}{a(\chi)}\;, \tag{1}\] where \[q(\chi)=\int_{\chi}^{\infty}\mathrm{d}\chi^{{}^{\prime}}\;p(\chi^{\prime}) \frac{\chi^{\prime}-\chi}{\chi^{\prime}}\;, \tag{2}\] with the Hubble constant \(H_{0}\), the matter density parameter \(\Omega_{\rm m}\), the cosmic scale factor \(a(\chi)\) at \(\chi\), normalized to unity today, and the probability distribution \(p(\chi)\;\mathrm{d}\chi\) of source galaxies with comoving distance. The convergence power spectrum \(P(\ell)\) is defined by \[(2\pi)^{2}\,\delta_{\rm D}(\mathbf{\ell}+\mathbf{\ell}^{\prime})\,P(\ell)=\left\langle \tilde{\kappa}(\mathbf{\ell})\,\tilde{\kappa}(\mathbf{\ell}^{\prime})\right\rangle\;, \tag{3}\] where \(\kappa\) is the convergence and the tilde denotes Fourier transform. We assume a survey of simple geometry (i.e. continuous and without small-scale masks) with a window function \(W\), which is either zero or one, and survey area \(A\), given by \(A=\int\mathrm{d}^{2}\vartheta\;W(\mathbf{\vartheta})\). Here, \(P\) can be estimated with the estimator \[\hat{P}(\ell) =\frac{1}{A}\int_{A\kappa(\ell)}\frac{\mathrm{d}^{2}\ell^{\prime} }{A\kappa(\ell)}\left[\prod_{i=1}^{2}\int\frac{\mathrm{d}^{2}q_{i}}{2\pi}\tilde {W}(\mathbf{q}_{i})\right] \tag{4}\] \[\quad\times\tilde{\kappa}(\mathbf{\ell}^{\prime}-\mathbf{q}_{1})\,\tilde {\kappa}(-\mathbf{\ell}^{\prime}-\mathbf{q}_{2})\;,\] where \(A_{\rm E}(\ell)\) denotes the size of the \(\ell\)-bin. The usual form used for the covariance of \(\hat{P}\) in the literature (Takada & Hu, 2013; Krause & Eifler, 2017) is \[C_{\hat{P}}^{\rm fit}(\ell_{1},\ell_{2}) \tag{5}\] \[=\frac{2}{A}\int_{A\kappa(\ell_{1})}\frac{\mathrm{d}^{2}\ell^{ \prime}_{1}}{A\kappa(\ell_{1})}\int_{A\kappa(\ell_{2})}\frac{\mathrm{d}^{2} \ell^{\prime}_{2}}{A\kappa(\ell_{2})}\,P(\ell^{\prime}_{1})\,P(\ell^{\prime}_ {2})\,\delta_{\rm D}(\mathbf{\ell}^{\prime}_{1}+\mathbf{\ell}^{\prime}_{2})\] \[\quad+\frac{1}{A}\int_{A\kappa(\ell_{1})}\frac{\mathrm{d}^{2} \ell^{\prime}_{1}}{A\kappa(\ell_{1})}\int_{A\kappa(\ell_{2})}\frac{\mathrm{d}^ {2}\ell^{\prime}_{2}}{A\kappa(\ell_{2})}\,T(\mathbf{\ell}^{\prime}_{1},-\mathbf{\ell }^{\prime}_{1},\mathbf{\ell}^{\prime}_{2},-\mathbf{\ell}^{\prime}_{2})\] \[\quad+C_{\hat{P}}^{\rm SSC}(\ell_{1},\ell_{2})\;,\] where \(T\) is the convergence trispectrum and \(C_{\hat{P}}^{\rm SSC}\) is the SSC of the power spectrum. Takada & Hu (2013) derive the SSC to be \[C_{\hat{P}}^{\rm SSC}(\ell_{1},\ell_{2}) =\frac{1}{A^{2}}\int_{A\kappa(\ell_{1})}\frac{\mathrm{d}^{2}\ell ^{\prime}_{1}}{A\kappa(\ell_{1})}\int_{A\kappa(\ell_{2})}\frac{\mathrm{d}^{2 }\ell^{\prime}_{2}}{A\kappa(\ell_{2})} \tag{6}\] \[\quad\times\int\frac{\mathrm{d}^{2}q}{(2\pi)^{2}}T_{\rm SSC}(\mathbf{ \ell}_{1},\mathbf{\ell}_{2},\mathbf{\vartheta})\,\tilde{W}(\mathbf{q})\,\tilde{W}(-\mathbf{q})\;,\] where \(T_{\rm SSC}\) is part of the convergence trispectrum and given by Equation (32) in Takada & Hu (2013). However, this expression is incomplete. To see this, we start from Eq. 
(4), so \(C_{\hat{P}}\) is \[C_{\hat{P}}(\ell_{1},\ell_{2})=\left\langle\hat{P}(\ell_{1})\, \hat{P}(\ell_{2})\right\rangle-\left\langle\hat{P}(\ell_{1})\right\rangle \left\langle\hat{P}(\ell_{2})\right\rangle \tag{7}\] \[=\frac{1}{A^{2}}\int_{A\kappa(\ell_{1})}\frac{\mathrm{d}^{2}\ell ^{\prime}_{1}}{A\kappa(\ell_{1})}\int_{A\kappa(\ell_{2})}\frac{\mathrm{d}^{2} \ell^{\prime}_{2}}{A\kappa(\ell_{2})}\left[\prod_{i=1}^{4}\int\frac{\mathrm{d}^ {2}q_{i}}{(2\pi)^{2}}\,\tilde{W}(\mathbf{q}_{i})\right]\] \[\quad\times\left[\left\langle\tilde{\kappa}(\mathbf{\ell}^{\prime}_{1}+ \mathbf{q}_{1})\,\tilde{\kappa}(-\mathbf{\ell}^{\prime}_{1}+\mathbf{q}_{1})\right\rangle \left\langle\tilde{\kappa}(\mathbf{\ell}^{\prime}_{2}+\mathbf{q}_{3})\,\tilde{\kappa}(- \mathbf{\ell}^{\prime}_{2}+\mathbf{q}_{4})\right\rangle\right.\] \[\quad-\left.\left\langle\tilde{\kappa}(\mathbf{\ell}^{\prime}_{1}+ \mathbf{q}_{1})\,\tilde{\kappa}(-\mathbf{\ell}^{\prime}_{1}+\mathbf{q}_{1})\right\rangle \left\langle\tilde{\kappa}(\mathbf{\ell}^{\prime}_{2}+\mathbf{q}_{3})\,\tilde{\kappa}(- \mathbf{\ell}^{\prime}_{2}+\mathbf{q}_{4})\right\rangle\right]\] The four-point function of \(\tilde{\kappa}\) can be decomposed into its connected and unconnected parts and written in terms of the power- and trispectrum as \[\langle\tilde{\kappa}(\mathbf{\ell}_{1})\,\tilde{\kappa}(\mathbf{\ell}_{2}) \,\tilde{\kappa}(\mathbf{\ell}_{3})\,\tilde{\kappa}(\mathbf{\ell}_{4})\rangle \tag{8}\] \[=\left\langle\tilde{\kappa}(\mathbf{\ell}_{1})\,\tilde{\kappa}(\bm {\ell}_{2})\,\tilde{\kappa}(\mathbf{\ell}_{3})\,\tilde{\kappa}(\mathbf{\ell}_{4}) \right\rangle_{\rm c}\] \[\quad+\left[\left\langle\tilde{\kappa}(\mathbf{\ell}_{1})\,\tilde{ \kappa}(\mathbf{\ell}_{2})\right\rangle\,\left\langle\tilde{\kappa}(\mathbf{\ell}_{3})\, \tilde{\kappa}(\mathbf{\ell}_{4})\right\rangle+2\;\text{Perm.}\right]\] \[=(2\pi)^{2}\,T(\mathbf{\ell}_{1},\mathbf{\ell}_{2},\mathbf{\ell}_{3},\mathbf{ \ell}_{4})\,\delta_{\rm D}(\mathbf{\ell}_{1}+\mathbf{\ell}_{2}+\mathbf{\ell}_{3}+\mathbf{\ell}_{4})\] \[\quad+\left[(2\pi)^{4}P(\ell_{1})\,P(\ell_{3})\,\delta_{\rm D }(\mathbf{\ell}_{1}+\mathbf{\ell}_{2})\,\delta_{\rm D}(\mathbf{\ell}_{3}+\mathbf{\ell}_{4})+2\; \text{Perm.}\right]\;,\] so, with suitable renaming of the \(q Considering only \(\ell\) at small spatial scales well within the survey area and ignoring the impact of masks, \(\bar{W}(\mathbf{q})\) gives significant contributions only for \(q\ll\ell\) so that we can approximate \(P(\mathbf{l}+\mathbf{q}|)\simeq P(\ell)\). 
Then, using that \(W\) is one or zero, so, \[\int\frac{\mathrm{d}^{2}q}{(2\pi)^{2}}\bar{W}(\mathbf{l}+\mathbf{q})\,\bar{W }(\mathbf{q}) =\int\mathrm{d}^{2}\alpha\,W^{2}(\alpha)\,\mathrm{e}^{-\mathrm{i} \alpha\,\mathbf{l}} \tag{10}\] \[=\int\mathrm{d}^{2}\alpha\,W(\alpha)\,\mathrm{e}^{-\mathrm{i} \alpha\,\mathbf{l}}=\bar{W}(\mathbf{l})\, \tag{11}\] we find \[C_{\bar{P}}(\ell_{1},\ell_{2})\] \[\simeq\frac{2}{A^{2}}\int_{A\ell_{\mathrm{L}}(\ell_{1})}\frac{ \mathrm{d}^{2}{\ell_{1}}^{\prime}}{A_{R}(\ell_{1})}\int_{A\ell_{\mathrm{L}}( \ell_{2})}\frac{\mathrm{d}^{2}{\ell_{2}}^{\prime}}{A_{R}(\ell_{2})}\,P(\ell_{1 }^{\prime})\,P(\ell_{2}^{\prime})\] \[\qquad\times\bar{W}(\mathbf{\ell}_{1}^{\prime}+\mathbf{\ell}_{2}^{\prime} )\,\bar{W}(\mathbf{-}_{1}^{\prime}-\mathbf{\ell}_{2}^{\prime})\] \[\quad+\frac{1}{A^{2}}\int_{A_{R}(\ell_{1}^{\prime})}\frac{ \mathrm{d}^{2}{\ell_{1}}^{\prime}}{A_{R}(\ell_{1})}\int_{A\ell_{\mathrm{L}}( \ell_{2})}\frac{\mathrm{d}^{2}{\ell_{2}}^{\prime}}{A_{R}(\ell_{2})} \tag{12}\] \[\qquad\times\int\frac{\mathrm{d}^{2}q}{(2\pi)^{2}}T(\mathbf{\ell}_{1 }^{\prime},\mathbf{-}_{1}^{\prime}+\mathbf{q},\mathbf{\ell}_{2}^{\prime},\mathbf{-}_{2}^{\prime }-\mathbf{q})\,\bar{W}(\mathbf{q})\,\bar{W}(-\mathbf{q})\] \[=:2\int_{A_{R}(\ell_{1})}\frac{\mathrm{d}^{2}{\ell_{1}}^{\prime} }{A_{R}(\ell_{1})}\int_{A\ell_{\mathrm{L}}(\ell_{2})}\frac{\mathrm{d}^{2}{ \ell_{2}}^{\prime}}{A_{R}(\ell_{2})}P(\ell_{1}^{\prime})\,P(\ell_{2}^{\prime}) \,G_{A}(\mathbf{\ell}_{1}^{\prime}+\mathbf{\ell}_{2}^{\prime})\] \[\quad+\int_{A_{R}(\ell_{1})}\frac{\mathrm{d}^{2}{\ell_{1}}^{ \prime}}{A_{R}(\ell_{1})}\int_{A\ell_{\mathrm{L}}(\ell_{2})}\frac{\mathrm{d}^{ 2}{\ell_{2}}^{\prime}}{A_{R}(\ell_{2})}\] \[\qquad\times\int\frac{\mathrm{d}^{2}q}{(2\pi)^{2}}T(\mathbf{\ell}_{1 }^{\prime},\mathbf{-}_{1}^{\prime}+\mathbf{q},\mathbf{\ell}_{2}^{\prime},\mathbf{-}_{2}^{ \prime}-\mathbf{q})\,G_{A}(\mathbf{q})\,\] where we introduced the geometry factor \(G_{A}\), defined as \[G_{A}(\mathbf{q}) =\frac{1}{A^{2}}\bar{W}(\mathbf{q})\,\bar{W}(-\mathbf{q}) \tag{13}\] \[=\frac{1}{A^{2}}\int\mathrm{d}^{2}\alpha_{1}\,\int\mathrm{d}^{2} \alpha_{2}\,W(\mathbf{\alpha}_{1})\,W(\mathbf{\alpha}_{2})\,\mathrm{e}^{-\mathrm{i} \alpha(\mathbf{\alpha}_{1}-\mathbf{\alpha}_{2})}\.\] The geometry factor contains the full dependence of \(C_{\bar{P}}\) on the survey area. However, \(C_{\bar{P}}\) in Eq. (12) is not the same as \(C_{\bar{P}}^{\mathrm{lit}}\) in Eq. (5). To arrive there, we need to perform the 'large-field approximation'. For this approximation, we note that \(G_{A}\) is related to the function \(E_{A}(\mathbf{\eta})\), which for a point \(\mathbf{\alpha}\) inside \(A\) gives the probability that a point \(\mathbf{\alpha}+\mathbf{\eta}\) is also inside \(A\) and is given by (Heydenreich et al., 2020; Linke et al., 2022) \[E_{A}(\mathbf{\eta})=\frac{1}{A}\int_{A}\mathrm{d}^{2}\alpha\,W(\mathbf{\alpha}+\mathbf{ \eta}). \tag{14}\] With \(E_{A}\), \(G_{A}\) is \[G_{A}(\mathbf{q})=\frac{1}{A}\int\mathrm{d}^{2}\alpha\,E_{A}(\mathbf{\alpha})\,\mathrm{ e}^{\mathrm{i}\alpha\,\mathbf{q}}. \tag{15}\] The large-field approximation now assumes that \(A\) is infinitely large so that \(E_{A}(\mathbf{\alpha})\) is unity for all \(\mathbf{\alpha}\). Then, \[G_{A}(\mathbf{q})\to\frac{(2\pi)^{2}}{A}\,\delta_{\mathrm{D}}(\mathbf{q}). 
\tag{16}\] We define the result of \(C_{\bar{P}}\) under this approximation as \(C_{\bar{P}}^{\infty}\), given as \[C_{\bar{P}}^{\infty}(\ell_{1},\ell_{2}) =\frac{1}{A}\int_{A\ell_{1}}\frac{\mathrm{d}^{2}{\ell_{1}}^{ \prime}}{A_{R}(\ell_{1})}\int_{A\ell_{2}(\ell_{2})}\frac{\mathrm{d}^{2}{\ell_{ 2}}^{\prime}}{A_{R}(\ell_{2})} \tag{17}\] \[\qquad\times\left[2P^{2}(\ell_{1}^{\prime})\,(2\pi)^{2}\delta_{ \mathrm{D}}(\mathbf{\ell}_{1}^{\prime}+\mathbf{\ell}_{2}^{\prime})+T(\mathbf{\ell}_{1}^{ \prime},\mathbf{-}_{1}^{\prime},\mathbf{\ell}_{2}^{\prime},\mathbf{-}_{2}^{\prime})\right]\.\] These are exactly the first two terms of \(C_{\bar{P}}^{\mathrm{lit}}\). Consequently, the super-sample covariance \(C_{\bar{P}}^{\mathrm{SSC}}\) is the difference between \(C_{\bar{P}}\) and \(C_{\bar{P}}^{\infty}\), i.e., the error introduced by the large-field approximation. It is \[C_{\bar{P}}^{\mathrm{SSC}} =2\int_{A\ell_{1}(\ell_{1})}\frac{\mathrm{d}^{2}{\ell_{1}}^{ \prime}}{A_{R}(\ell_{1})}\int_{A\ell_{2}(\ell_{2})}\frac{\mathrm{d}^{2}{\ell_{ 2}}^{\prime}}{A_{R}(\ell_{2})}P(\ell_{1}^{\prime})\,P(-\ell_{2}^{\prime}) \tag{18}\] \[\qquad\times\left[G_{A}(\mathbf{\ell}_{1}^{\prime}+\mathbf{\ell}_{2}^{ \prime})-\frac{(2\pi)^{2}}{A}\delta_{\mathrm{D}}(\mathbf{\ell}_{1}^{\prime}+\mathbf{ \ell}_{2}^{\prime})\right]\] \[\quad+\frac{1}{A}\int_{A\ell_{1}(\ell_{1})}\frac{\mathrm{d}^{2}{ \ell_{1}}^{\prime}}{A_{R}(\ell_{1})}\int_{A\ell_{2}(\ell_{2})}\frac{\mathrm{d}^{2 }{\ell_{2}}^{\prime}}{A_{R}(\ell_{2})}\] \[\qquad\times\left[\frac{1}{A}\int\frac{\mathrm{d}^{2}q}{(2\pi)^{2}} T(\mathbf{\ell}_{1},\mathbf{-}_{1}+\mathbf{q},\mathbf{\ell}_{2},-\mathbf{\ell}_{2}-\mathbf{q})\,\bar{W}(\mathbf{q})\, \bar{W}(-\mathbf{q})\right.\] \[\qquad\left.-\,T(\mathbf{\ell}_{1},\mathbf{-}_{1},\mathbf{\ell}_{1},\mathbf{ \ell}_{2},-\mathbf{\ell}_{2})\right]\.\] Takada & Hu (2013) showed that the trispectrum can be approximated by \[T(\mathbf{\ell}_{1},\mathbf{-}_{1}+\mathbf{q},\mathbf{\ell}_{2},-\mathbf{-}\mathbf{q})\simeq T(\mathbf{ \ell}_{1},\mathbf{-}\mathbf{\ell}_{1},\mathbf{\ell}_{2},-\mathbf{\ell}_{2})+T_{\mathrm{SSC}}( \mathbf{\ell}_{1},\mathbf{\ell}_{2},\mathbf{q}). \tag{19}\] In this case, one obtains \[C_{\bar{P}}^{\mathrm{SSC}} \simeq\frac{2}{A}\int_{A\ell_{1}(\ell_{1})}\frac{\mathrm{d}^{2}{ \ell_{1}}^{\prime}}{A_{R}(\ell_{1})}\int_{A\ell_{2}(\ell_{2} statistics \(\left\langle M_{\text{ap}}^{2}\right\rangle\) (Schneider et al. 1998). Note that while we are here writing the statistic in terms of the (unobservable) convergence for simplicity, \(\Xi\) can also be written in terms of the weak lensing shear \(\gamma\), which is observable, if \(U_{1}\) and \(U_{2}\) are compensated. This is the case for both COSEBIs and \(\left\langle M_{\text{ap}}^{2}\right\rangle\). To estimate the statistic from a convergence field \(\kappa\) of size \(A^{\prime}\), we can convolve \(\kappa\) with the filter functions and then average over the pixel values. However, due to the finiteness of the survey area, the convolution result at the survey boundaries is biased. Therefore, the average needs to be taken over a smaller area \(A\), which excludes the borders of the field (see Fig. 1). This leads to the estimator \[\hat{\Xi}(\theta)=\frac{1}{A}\int_{A}\mathrm{d}^{2}\alpha\prod_{i=1}^{2}\int_ {A^{\prime}}\mathrm{d}^{2}\theta_{i}\,U_{i}(\theta,\mathbf{\vartheta}_{i}-\mathbf{ \alpha})\,\kappa_{i}\, \tag{22}\] where \(\kappa_{i}=\kappa(\mathbf{\vartheta}_{i})\). 
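Before turning to the covariance of this estimator, the geometry functions introduced above can be illustrated with a short numerical sketch. The snippet below is a minimal example assuming an idealised square, unmasked window on a regular grid (grid size and pixel scale are arbitrary choices): it builds \(E_{A}\) of Eq. (14) as the normalised autocorrelation of the window and checks that the geometry factor \(G_{A}\) of Eq. (13) integrates to \(1/A\), which is exactly the normalisation preserved by the Dirac-delta limit of Eq. (16).

```python
import numpy as np

npix, pix = 512, 0.1                     # grid points per side, pixel size in degrees (assumed)
W = np.zeros((npix, npix))
W[128:384, 128:384] = 1.0                # square survey window embedded in a zero-padded grid
A = W.sum() * pix**2                     # survey area in deg^2

# E_A(eta): normalised autocorrelation of the window, Eq. (14); the zero padding
# (window at most half the grid) keeps the FFT's periodic wrap-around harmless.
Wf = np.fft.fft2(W)
E_A = np.fft.ifft2(Wf * np.conj(Wf)).real * pix**2 / A
print("E_A(0) =", E_A[0, 0])             # -> 1; E_A decays to 0 beyond the survey extent

# G_A(q) = W~(q) W~(-q) / A^2, Eq. (13); by Parseval its integral over d^2q/(2pi)^2 is 1/A,
# matching the large-field limit G_A -> (2pi)^2/A * delta_D of Eq. (16).
G_A = np.abs(Wf * pix**2)**2 / A**2
print("int G_A d^2q/(2pi)^2 =", G_A.sum() / (npix * pix)**2, " vs 1/A =", 1.0 / A)
```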
Under the assumption that \(U_{i}(\theta,\mathbf{\vartheta}_{i}-\mathbf{\alpha})\) vanishes for \(\mathbf{\vartheta}_{i}\) outside of \(A^{\prime}\) for all \(\mathbf{\alpha}\in A\), we can replace the integral over \(A^{\prime}\) by an integral over the whole \(\mathbb{R}^{2}\). With this, and the survey window function \(W(\mathbf{\vartheta})\), which is one for \(\mathbf{\vartheta}\) inside \(A\) and zero otherwise, \[\hat{\Xi}(\theta)=\frac{1}{A}\int\mathrm{d}^{2}\alpha\,W(\mathbf{\alpha})\,\prod _{i=1}^{2}\,\int\mathrm{d}^{2}\theta_{i}\,U_{i}(\theta,\mathbf{\vartheta}_{i}- \mathbf{\alpha})\,\kappa_{i}. \tag{23}\] The covariance of \(\hat{\Xi}\) is \[C_{\hat{\Xi}}(\theta_{1},\theta_{2})=\left\langle\hat{\Xi}(\theta_{1})\,\hat{ \Xi}(\theta_{2})\right\rangle-\left\langle\hat{\Xi}(\theta_{1})\right\rangle \,\left\langle\hat{\Xi}(\theta_{2})\right\rangle. \tag{24}\] With Eq. (23), \[\left\langle\hat{\Xi}(\theta_{1})\,\hat{\Xi}(\theta_{2})\right\rangle =\frac{1}{A^{2}}\int\mathrm{d}^{2}\alpha_{1}\int\mathrm{d}^{2} \alpha_{2}\,W_{A}(\mathbf{\alpha}_{1})\,W_{A}(\mathbf{\alpha}_{2}) \tag{26}\] \[\quad\times\int\mathrm{d}^{2}\theta_{1}\int\mathrm{d}^{2}\theta_ {2}\int\mathrm{d}^{2}\theta_{3}\int\mathrm{d}^{2}\theta_{4}\,\langle\kappa_{1 }\,\kappa_{2}\,\kappa_{3}\,\kappa_{4}\rangle\] \[\quad\times U_{1}(\theta_{1},\mathbf{\vartheta}_{1}-\mathbf{\alpha}_{1}) \,U_{2}(\theta_{1},\mathbf{\vartheta}_{2}-\mathbf{\alpha}_{1})\] \[\quad\times U_{1}(\theta_{2},\mathbf{\vartheta}_{3}-\mathbf{\alpha}_{2}) \,U_{2}(\theta_{2},\mathbf{\vartheta}_{4}-\mathbf{\alpha}_{2})\] \[=\frac{1}{A^{2}}\int\mathrm{d}^{2}\alpha_{1}\int\mathrm{d}^{2} \alpha_{2}\,W_{A}(\mathbf{\alpha}_{1})\,W_{A}(\mathbf{\alpha}_{2})\] \[\quad\times\left(\kappa_{1}(\theta_{1},\mathbf{\alpha}_{1})\,\kappa_{ 2}(\theta_{1},\mathbf{\alpha}_{1})\,\kappa_{1}(\theta_{2},\mathbf{\alpha}_{2})\, \kappa_{2}(\theta_{2},\mathbf{\alpha}_{2})\right)\,\] where we introduced the smoothed convergence field \(\varkappa_{a}\), \[\varkappa_{a}(\theta,\mathbf{\alpha})=\int\mathrm{d}^{2}\mathbf{\vartheta}\,U_{a}( \theta,\mathbf{\vartheta}-\mathbf{\alpha})\,\kappa(\mathbf{\vartheta}). \tag{27}\] For ease of notation we define the field \(\varkappa^{2}(\theta_{1},\mathbf{\alpha})=\varkappa_{1}(\theta_{1},\mathbf{\alpha})\, \varkappa_{2}(\theta_{1},\mathbf{\alpha})\). The expectation value in Eq. (26) is a second-order correlation function \(\xi_{x^{2}}\) of this field, defined by \[\xi_{x^{2}}(\theta_{1},\theta_{2},|\mathbf{\eta}|) =\left\langle\varkappa^{2}(\theta_{1},\mathbf{\alpha})\,\varkappa^{2} (\theta_{2},\mathbf{\alpha}+\mathbf{\eta})\right\rangle \tag{28}\] \[=\left\langle\varkappa_{1}(\theta_{1},\mathbf{\alpha})\,\varkappa_{2} (\theta_{1},\mathbf{\alpha})\,\varkappa_{1}(\theta_{2},\mathbf{\alpha}+\mathbf{\eta})\, \varkappa_{2}(\theta_{2},\mathbf{\alpha}+\mathbf{\eta})\right\rangle\.\] Due to statistical isotropy and homogeneity, \(\xi_{x^{2}}\) only depends on the absolute value of \(\mathbf{\eta}2\). With \(\xi_{x^{2}}\) and the function \(E_{A}\), defined in Eq. 
(14), \[C_{\hat{\Xi}}(\theta_{1},\theta_{2}) =\frac{1}{A^{2}}\,\int\mathrm{d}^{2}\alpha_{1}\int\mathrm{d}^{2}\alpha_{2}\,W_{A}(\mathbf{\alpha}_{1})\,W_{A}(\mathbf{\alpha}_{2}) \tag{29}\] \[\quad\times\xi_{x^{2}}(\theta_{1},\theta_{2},|\mathbf{\alpha}_{1}-\mathbf{\alpha}_{2}|)-\left\langle\hat{\Xi}(\theta_{1})\right\rangle\left\langle\hat{\Xi}(\theta_{2})\right\rangle\] \[=\frac{1}{A}\int\mathrm{d}^{2}\eta\,E_{A}(\mathbf{\eta})\,\xi_{x^{2}}(\theta_{1},\theta_{2},\mathbf{\eta})-\left\langle\hat{\Xi}(\theta_{1})\right\rangle\,\left\langle\hat{\Xi}(\theta_{2})\right\rangle\;.\] In this expression, the covariance of \(\Xi\) can be inferred from a two-point correlation function \(\xi_{x^{2}}\). Notably, \(\xi_{x^{2}}\) needs to be known only for \(\mathbf{\eta}\) within the survey area, as \(E_{A}\) vanishes outside. Consequently, \(C_{\hat{\Xi}}\) does not depend on any information on scales larger than the survey area. This finding might appear contradictory to the interpretation of the SSC as being caused by modes larger than the survey window. However, this interpretation takes a 'Fourier-space' view, while Eq. (29) is fully in real space. The real-space correlations at scales within the survey window are impacted by the power- and trispectrum at \(\ell\)-modes larger than the survey window. Therefore, while the covariance estimation in Fourier space requires all modes, in real space we can limit ourselves to the survey window, and no 'super-survey' information is required. To validate our expressions for the covariance, we specify our statistic in the remainder of this section to the second-order aperture masses \(\left\langle M_{\text{ap}}^{2}\right\rangle(\theta)\) with the filter function by Crittenden et al. (2002), \[U_{1}(\theta,\mathbf{\vartheta})=U_{2}(\theta,\mathbf{\vartheta})=U(\theta,\vartheta)=\frac{1}{2\pi\theta^{2}}\,\left(1-\frac{\vartheta^{2}}{2\theta^{2}}\right)\,\exp\!\left(-\frac{\vartheta^{2}}{2\theta^{2}}\right)\,. \tag{30}\] Then, \(\varkappa^{2}=M_{\text{ap}}^{2}\), given as \[M_{\text{ap}}^{2}(\mathbf{\vartheta};\theta)=\int\mathrm{d}^{2}\alpha_{1}\int\mathrm{d}^{2}\alpha_{2}\,U(\theta,|\mathbf{\vartheta}+\mathbf{\alpha}_{1}|)\,U(\theta,|\mathbf{\vartheta}+\mathbf{\alpha}_{2}|)\,\kappa(\mathbf{\alpha}_{1})\,\kappa(\mathbf{\alpha}_{2})\,, \tag{31}\] and \(\xi_{x^{2}}\) becomes \[\xi_{M_{\text{ap}}^{2}}(\theta_{1},\theta_{2},\mathbf{\eta})=\left\langle M_{\text{ap}}^{2}(\mathbf{\vartheta};\theta_{1})\,M_{\text{ap}}^{2}(\mathbf{\vartheta}+\mathbf{\eta};\theta_{2})\right\rangle. \tag{32}\] We validate Eq. (29) by measuring the covariance \(C_{\hat{M}_{2}}\) of \(\hat{M}_{\text{ap}}^{2}\) in convergence maps from the Scinet LIghtcone Simulations (SLICS, Harnois-Deraps & van Waerbeke 2015), whose details are described in Appendix B.1. We estimate the covariance of \(\left\langle M_{\text{ap}}^{2}\right\rangle\) using two different approaches (see Appendix B.2). In the first, we measure \(\left\langle M_{\text{ap}}^{2}\right\rangle\) for each lightcone of the SLICS by evaluating the convolution in Eq. (31) using the convolution theorem and then take the sample covariance, which we denote with \(C_{\hat{M}_{2}}^{\rm sim}\). The uncertainty of the sample covariance is estimated using bootstrapping. Figure 1: Illustration of the estimation of the statistic \(\Xi\). The area \(A^{\prime}\) is the size of the full convergence field, which we convolve with filter functions of scale radius \(\theta\), illustrated by the circles. The convolution results at positions \(\mathbf{\alpha}\) within the smaller area \(A\) only depend on \(\kappa\) within \(A^{\prime}\), while for positions \(\mathbf{\alpha}^{\prime}\) outside of \(A\) the filters extend beyond \(A^{\prime}\), so these positions are excluded from the average. In the second approach, we measure the correlation functions \(\xi_{M_{\text{ap}}^{2}}\) and use Eq. (29) to obtain the covariance estimate, which we denote by \(C_{\hat{M}_{2}}^{\rm corr}\). The covariance estimates and their difference are shown in the first two panels of Fig. 2. Both estimates almost coincide. Accordingly, the correlation-function-based approach captures the full covariance, even though \(\xi_{M_{\text{ap}}^{2}}\) is known only for scales within the survey area. Consequently, as expected from Eq. (29), no information on spatial scales outside the survey is needed for an accurate covariance estimate. ## 4 Connection between real and Fourier space statistics The statistic \(\Xi\) defined in Eq. (21) can be expressed in terms of the power spectrum as \[\Xi(\theta)=\int\frac{\mathrm{d}^{2}\ell}{(2\pi)^{2}}\,\tilde{U}_{1}(\theta,\boldsymbol{\ell})\,\tilde{U}_{2}(\theta,\boldsymbol{\ell})\,P(\ell)\;, \tag{33}\] where the \(\tilde{U}_{i}\) are the Fourier transforms of the filter functions \(U_{i}\). A common strategy (e.g., Joachimi et al. 2021; Friedrich et al. 2021) to model the covariance of \(\Xi\) is to use \[C_{\Xi}(\theta_{1},\theta_{2})=\int\frac{\mathrm{d}^{2}\ell_{1}}{(2\pi)^{2}}\int\frac{\mathrm{d}^{2}\ell_{2}}{(2\pi)^{2}}\,\tilde{U}_{1}(\theta_{1},\boldsymbol{\ell}_{1})\,\tilde{U}_{2}(\theta_{1},\boldsymbol{\ell}_{1})\,\tilde{U}_{1}(\theta_{2},\boldsymbol{\ell}_{2})\,\tilde{U}_{2}(\theta_{2},\boldsymbol{\ell}_{2})\,C_{\hat{P}}(\ell_{1},\ell_{2})\;. \tag{34}\] However, this approach is not necessarily correct. To show this, we relate \(C_{\hat{\Xi}}\) to the power- and trispectrum to compare it to previous expressions of second-order shear covariances and to discuss the SSC for real-space statistics.
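Before turning to this Fourier-space comparison, the real-space route of Sect. 3 can be summarised in a few lines of code. The sketch below is a minimal implementation of the first term of Eq. (29) for a square survey window, for which \(E_{A}\) is the product of two triangle functions; the input correlation function stands in for a measured \(\xi_{M_{\rm ap}^{2}}\) of Eq. (32) at one aperture-radius pair and is a purely hypothetical toy curve here, and the subtraction of the \(\langle\hat{\Xi}\rangle\langle\hat{\Xi}\rangle\) term is left to the caller.

```python
import numpy as np

def cov_from_xi(xi, side, npix=512):
    """First term of Eq. (29): (1/A) * int d^2eta E_A(eta) xi(|eta|) for a square
    window of side length `side`; `xi(r)` is the isotropic correlation function of
    the smoothed field, needed only for separations r inside the survey."""
    pix = side / npix
    A = side**2
    eta = np.arange(-npix, npix) * pix              # eta-grid covering [-side, side)
    tri = np.clip(1.0 - np.abs(eta) / side, 0.0, None)
    E_A = tri[:, None] * tri[None, :]               # exact E_A for a square window
    r = np.hypot(eta[:, None], eta[None, :])
    return np.sum(E_A * xi(r)) * pix**2 / A

# usage with an illustrative, exponentially decaying toy correlation function (arcmin units)
xi_toy = lambda r: 1e-8 * np.exp(-r / 20.0)
print(cov_from_xi(xi_toy, side=240.0))              # a 4x4 deg^2 field; subtract <Xi><Xi> separately
```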
As shown in Appendix A, \(C_{\hat{\Xi}}(\theta_{1},\theta_{2})\) can be expressed as \[C_{\hat{\Xi}}(\theta_{1},\theta_{2}) =\int\frac{\mathrm{d}^{2}\ell_{1}}{(2\pi)^{2}}\int\frac{\mathrm{ d}^{2}\ell_{2}}{(2\pi)^{2}}\,G_{A}(\boldsymbol{\ell}_{1}+\boldsymbol{\ell}_{2})\,P( \ell_{1})\,P(\ell_{2}) \tag{35}\] \[\times\left(\tilde{U}_{1}(\theta_{1},\boldsymbol{\ell}_{1})\, \tilde{U}_{1}(\theta_{2},\boldsymbol{\ell}_{1})\,\tilde{U}_{2}(\theta_{1}, \boldsymbol{\ell}_{2})\,\tilde{U}_{2}(\theta_{2},\boldsymbol{\ell}_{2})\right.\] \[\qquad\left.+\tilde{U}_{1}(\theta_{1},\boldsymbol{\ell}_{1})\, \tilde{U}_{1}(\theta_{2},\boldsymbol{\ell}_{2})\,\tilde{U}_{2}(\theta_{1}, \boldsymbol{\ell}_{2})\,\tilde{U}_{2}(\theta_{2},\boldsymbol{\ell}_{1})\right)\] \[+\int\frac{\mathrm{d}^{2}\ell_{1}}{(2\pi)^{2}}\int\frac{\mathrm{ d}^{2}\ell_{2}}{(2\pi)^{2}}\int\frac{\mathrm{d}^{2}\ell_{3}}{(2\pi)^{2}}\,G_{A}( \boldsymbol{\ell}_{1}+\boldsymbol{\ell}_{2})\] \[\times\,T(\boldsymbol{\ell}_{1},\boldsymbol{\ell}_{2}, \boldsymbol{\ell}_{3},-\boldsymbol{\ell}_{1}-\boldsymbol{\ell}_{2}-\boldsymbol {\ell}_{3})\,\tilde{U}_{1}(\theta_{1},\boldsymbol{\ell}_{1})\,\tilde{U}_{2}( \theta_{1},\boldsymbol{\ell}_{2})\] \[\times\,\tilde{U}_{1}(\theta_{2},\boldsymbol{\ell}_{3})\,\tilde{ U}_{2}(\theta_{2},\boldsymbol{\ell}_{1}+\boldsymbol{\ell}_{2}+\boldsymbol{\ell}_{3})\;.\] One notices a Gaussian and non-Gaussian part of the covariance with the Gaussian part depending on the power spectrum and the non-Gaussian part depending on the trispectrum, similar to the exact covariance of the power spectrum in Eq. (9). However, \(C_{\hat{\Xi}}\) is not given as suggested by Eq. (34), that is as an integral over \(C_{\hat{\mu}}\), weighted by the \(U_{i}\)-filters. We also notice three differences when directly comparing Eq. (35) to Equations (15) and (15) in Joachimi et al. (2021). First, neither the Gaussian nor the non-Gaussian term scale with the inverse of the survey area \(A\) and instead show a more complicated dependence on survey geometry via \(G_{A}\). Second, the non-Gaussian term depends on the trispectrum for all \(\ell\)-configurations, not simply parallelograms with \(\boldsymbol{\ell}_{3}=\boldsymbol{\ell}_{1}\). Third, there is no indication of the SSC. To reconcile Eq. (35) with the expressions in Joachimi et al. (2021), we need to perform the large-field approximation. As mentioned in Sect. 2, under this approximation, \(G_{A}\) is proportional to a Dirac delta, so the covariance becomes \[C_{\hat{\Xi}}^{\rm cov}(\theta_{1},\theta_{2}) \tag{36}\] \[=\frac{2}{A}\,\int\frac{\mathrm{d}^{2}\ell}{(2\pi)^{2}}\,P^{2}( \ell)\,\tilde{U}_{1}(\theta_{1},\boldsymbol{\ell})\,\tilde{U}_{1}(\theta_{2}, \boldsymbol{\ell})\,\tilde{U}_{2}(\theta_{1},\boldsymbol{\ell})\,\tilde{U}_{2} (\theta_{2},\boldsymbol{\ell})\] \[\quad+\frac{1}{A}\int\frac{\mathrm{d}^{2}\ell_{1}}{(2\pi)^{2}} \int\frac{\mathrm{d}^{2}\ell_{2}}{(2\pi)^{2}}\,T(\boldsymbol{\ell}_{1},- \boldsymbol{\ell}_{1},\boldsymbol{\ell}_{2},-\boldsymbol{\ell}_{2})\] \[\qquad\times\,\tilde{U}_{1}(\theta_{1},\boldsymbol{\ell}_{1})\, \tilde{U}_{2}(\theta_{1},\boldsymbol{\ell}_{1})\,\tilde{U}_{1}(\theta_{2}, \boldsymbol{\ell}_{2})\,\tilde{U}_{2}(\theta_{2},\boldsymbol{\ell}_{2})\;.\] This is equivalent to the commonly used expressions for the Gaussian and intra-survey non-Gaussian covariance for a second-order statistic (Equations 15 and 15 in Joachimi et al.2021). 
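For the aperture statistics used in this work, the large-field expression (36) reduces to one-dimensional \(\ell\)-integrals. The sketch below evaluates its Gaussian part for \(\left\langle M_{\rm ap}^{2}\right\rangle\), using the Fourier transform of the Crittenden et al. (2002) filter of Eq. (30), \(\tilde{U}(\theta,\ell)=(\theta\ell)^{2}/2\,\exp[-(\theta\ell)^{2}/2]\), a standard result for this filter. The power spectrum, the aperture radius, and the survey area are placeholder choices (the area matches the \(62\,\mathrm{deg}^{2}\) quoted for the SLICS field below), so the returned numbers are illustrative only.

```python
import numpy as np

def u_hat(theta, ell):
    """Fourier transform of the Eq. (30) filter: (theta*ell)^2/2 * exp(-(theta*ell)^2/2)."""
    x2 = (theta * ell)**2
    return 0.5 * x2 * np.exp(-0.5 * x2)

def cov_gauss_largefield(theta1, theta2, P, A, ell=np.logspace(1, 4.5, 4000)):
    """Gaussian part of Eq. (36) for <M_ap^2>:
    (2/A) * int dl l/(2pi) P(l)^2 u(theta1,l)^2 u(theta2,l)^2, angles in rad, A in sr."""
    f = ell * P(ell)**2 * u_hat(theta1, ell)**2 * u_hat(theta2, ell)**2 / (2.0 * np.pi)
    return 2.0 / A * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(ell))   # trapezoidal rule

P_toy = lambda ell: 1e-9 * (ell / 1e3)**(-1.2)     # hypothetical convergence power spectrum
A = 62.0 * (np.pi / 180.0)**2                      # survey area of 62 deg^2 in steradians
theta = np.deg2rad(4.0 / 60.0)                     # 4 arcmin aperture radius (arbitrary choice)
print(cov_gauss_largefield(theta, theta, P_toy, A))
```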
In particular, we recover the scaling with the inverse survey area and the dependence on parallelogram \(\ell\)-configurations for the non-Gaussian part. Consequently, the sum of the Gaussian and intra-survey non-Gaussian covariance can be considered as an approximation of the exact covariance in Eq. (35) for very large survey windows. A comparison of Eq. (36) to Eq. (17) shows that \(C_{\hat{\Xi}}^{\rm cov}\) is given by the integral over the large-field approximation of the power spectrum covariance \(C_{\hat{P}}^{\rm cov}\). Therefore, the approach to obtain the covariance of the real-space statistic from the power spectrum covariance is correct if the large-field approximation holds. However, the large-field approximation neglects a significant part of the covariance. We can see this for \(\left\langle M_{\rm ap}^{2}\right\rangle\), for which we calculate Eq. (36) according to Appendix B.3. We show in the third panel of Fig. 2 the covariance for \(\left\langle M_{\rm ap}^{2}\right\rangle\), modelled with Eq. (36), along with the fractional difference to the sample covariance estimate from the SLICS (see Appendix B). The approximation \(C_{\hat{M}_{2}}^{\rm cov}\) is significantly too small, with deviations of more than five times the statistical uncertainty on the sample covariance. This large difference is not surprising since the survey area here is only \(62\,\)deg\({}^{2}\). However, even for the Kilo-Degree Survey (KiDS) data release KiDS-1000, \(C^{\rm cov}\) cannot describe the covariance of the cosmic shear band powers (Joachimi et al. 2021). In analogy to the power spectrum case, we define the difference between \(C_{\hat{\Xi}}\) and \(C_{\hat{\Xi}}^{\rm cov}\) as the SSC \(C_{\hat{\Xi}}^{\rm SSC}\) of \(\hat{\Xi}\). To calculate this term, we can perform the same approximation as done for the power spectrum covariance, namely that the modes \(\ell,\ell_{1}\), and \(\ell_{2}\) are large compared to the modes \(q\) on which \(G_{A}\) varies. With this, \[C_{\hat{\Xi}}^{\rm SSC}(\theta_{1},\theta_{2})=\int\frac{\mathrm{d}^{2}\ell_{1}}{(2\pi)^{2}}\int\frac{\mathrm{d}^{2}\ell_{2}}{(2\pi)^{2}}\,\tilde{U}_{1}(\theta_{1},\boldsymbol{\ell}_{1})\,\tilde{U}_{2}(\theta_{1},\boldsymbol{\ell}_{1})\,\tilde{U}_{1}(\theta_{2},\boldsymbol{\ell}_{2})\,\tilde{U}_{2}(\theta_{2},\boldsymbol{\ell}_{2}) \tag{37}\] \[\quad\times\left\{P(\ell_{1})\,P(\ell_{2})\,\left[G_{A}(\boldsymbol{\ell}_{1}+\boldsymbol{\ell}_{2})-\frac{(2\pi)^{2}}{A}\delta_{\rm D}(\boldsymbol{\ell}_{1}+\boldsymbol{\ell}_{2})\right]\right.\] \[\qquad\left.+\int\frac{\mathrm{d}^{2}q}{(2\pi)^{2}}G_{A}(\boldsymbol{q})\,T_{\rm SSC}(\boldsymbol{\ell}_{1},\boldsymbol{\ell}_{2},\boldsymbol{q})\right\}\] \[=C_{\hat{\Xi},1}^{\rm SSC}(\theta_{1},\theta_{2})+C_{\hat{\Xi},2}^{\rm SSC}(\theta_{1},\theta_{2})\;.\] The first summand \(C_{\hat{\Xi},1}^{\rm SSC}\) depends only on the power spectrum and is therefore already present for Gaussian fields. In general, it is negative and decreases the magnitude of the other Gaussian covariance term. This effect has already been noticed for the covariance of shear correlation functions in Sato et al. (2011), although it has not been called SSC there. In Joachimi et al. (2021), only the second summand \(C_{\hat{\Xi},2}^{\rm SSC}\) is called SSC-term (see their Equation E10), while the first summand \(C_{\hat{\Xi},1}^{\rm SSC}\) is neglected there.
However, at least for the second-order aperture statistics, this neglect is valid since \(C_{\hat{\Xi},1}^{\rm SSC}\) is small compared to \(C_{\hat{\Xi},2}^{\rm SSC}\): comparing the full SSC to \(C_{\hat{\Xi},2}^{\rm SSC}\) for the \(\hat{M}_{\text{ap}}^{2}\) covariance in the SLICS, the first SSC term accounts for less than 5% of the total SSC. Therefore, using just \(C_{\hat{\Xi},2}^{\rm SSC}\) is a good approximation of the full SSC for these real-space statistics. A preferable interpretation for the SSC is as a correction term for the large-field approximation because, as shown before, this interpretation holds both for the covariance of the power spectrum and of \(\Xi\). ## 5 Conclusion We have derived the full covariance \(C_{\hat{\Xi}}\) for a localized, second-order statistic \(\Xi\) in real space and compared it to the covariance \(C_{\hat{P}}\) of the lensing power spectrum. Both covariances depend on the exact survey geometry. Under the 'large-field approximation', which is the limit for a broad window function, they reduce to approximated terms \(C_{\hat{\Xi}}^{\infty}\) and \(C_{\hat{P}}^{\infty}\) that scale with the inverse survey area.
While we defined \(\Xi\) in terms of the convergence \(\kappa\), we note that for compensated filter functions, \(\Xi\) can be equivalently written in terms of the weak lensing shear, so all our conclusions are valid for shear statistics as well. We found that the difference between \(C_{\hat{P}}\) and \(C_{\hat{P}}^{\infty}\) gives the SSC \(C_{\hat{P}}^{\rm SSC}\) of the power spectrum. While \(C_{\hat{P}}^{\infty}(\ell_{1},\ell_{2})\) depends only on the power- and trispectrum at \(\ell\)-modes within bins around \(\ell_{1}\) and \(\ell_{2}\), \(C_{\hat{P}}^{\rm SSC}\) depends on the trispectrum at all \(\ell\)-modes. This includes \(\ell\)-modes outside of the survey area. Additionally, while \(C_{\hat{P}}^{\infty}\) scales linearly with the inverse survey area, \(C_{\hat{P}}^{\rm SSC}\) shows a complicated dependence on the survey geometry via the window function. The covariance \(C_{\frac{\pi}{2}}\) can also be written as the sum of a large-field approximation \(C_{\frac{\pi}{2}}^{\infty}\) and an SSC \(C_{\frac{\pi}{2}}^{\rm SSC}\). However, both \(C_{\frac{\pi}{2}}^{\infty}\) and \(C_{\frac{\pi}{2}}^{\rm SSC}\) depend on the power- and trispectrum at all \(\ell\)-modes, including those larger than the survey area. Therefore, the label'super-sample' is misleading for the SSC of a real-space statistic. We found that the SSC is composed of two components. One of those depends on the power spectrum and is already present in Gaussian fields. This term essentially decreases the Gaussian covariance, an effect already noted for shear correlation functions by Sato et al. (2011). While we showed here that the common approximation of only using the non-Gaussian part of the SSC (e.g. Joachimi et al. 2021) is accurate for the second-order aperture statistics \(\hat{M}_{\rm ap}^{2}\), Shirasaki et al. (2019) and Troxel et al. (2018) found that for the shear correlation functions \(\xi_{+}\) and \(\xi_{-}\) ignoring this effect leads to a significant overestimation of the covariance. We showed that the covariance \(C_{\frac{\pi}{2}}\) of the real-space statistic cannot be obtained from the power spectrum covariance without the large-field approximation. The commonly used transformation between power spectrum covariance and real space covariance only holds for \(C_{\hat{P}}^{\infty}\) and \(C_{\frac{\pi}{2}}^{\infty}\). This finding is not surprising. A linear transform between the \(C_{\hat{P}}\) and \(C_{\frac{\pi}{2}}\) is mathematically only possible if the estimators \(\hat{P}\) and \(\hat{\Xi}\) are related linearly. While \(P\) and \(\Xi\) are indeed related by a Fourier transform, the estimators are generally not. Consequently, one would not expect to simply transform one covariance into the other. Finally, the covariance \(C_{\frac{\pi}{2}}\) can be fully determined from correlation functions of smoothed convergence maps known only within the survey area. We demonstrated this finding by measuring the sample variance of second-order aperture statistics in the SLICS and comparing them to the covariance estimate from measured \(\left\langle M_{\rm ap}^{2}\right\rangle\)-correlation functions. The agreement between these estimates indicates that correlations outside the survey area do not influence the covariance of a second-order shear statistic in real space. This finding is not surprising since Schneider et al. 
(2002) already showed that the Gaussian covariance of shear correlation functions is given by second-order correlation functions of galaxy ellipticities known inside the survey area. However, it is interesting for covariances estimated from simulations. To obtain a covariance estimate with this approach, it is possible to measure the correlation functions in a few full-sky realisations and account for the survey geometry later with the factor \(E_{A}\). Consequently, covariance estimates for different survey geometries can be based on the same simulated correlation functions. In conclusion, for real-space statistics, the SSC cannot be interpreted as the sole part of the covariance depending on \(\ell\)-modes larger than a survey area or on clustering information outside the survey. Instead, it is more appropriate to define the SSC as the difference between the exact covariance of an estimator and the large-field approximation, which replaces the survey window function by unity everywhere. This definition describes both the SSC of the power spectrum and the SSC of the real-space statistic \(\Xi\). While this paper was concerned with second-order statistics only, similar thoughts apply to higher-order statistics. Linke et al. (2022) showed that for third-order aperture statistics \(\left\langle M_{\rm ap}^{3}\right\rangle\), the covariance contains several terms that vanish under the large-field approximation and can therefore be considered part of the SSC for \(\left\langle M_{\rm ap}^{3}\right\rangle\). One of these terms is already present (and significant) for Gaussian fields. Consequently, for \(\left\langle M_{\rm ap}^{3}\right\rangle\), the SSC has a large impact on the overall covariance. In contrast, Uhlemann et al. (2022) showed for the convergence probability distribution that the large-field approximation (in their notation \(P_{d}(\theta)\simeq\theta\)) leads to model covariances in agreement with simulations, so the SSC is less important for their statistic. This indicates that the impact of SSC depends on the considered statistic. ###### Acknowledgements. Funded by the TRA Matter (University of Bonn) as part of the Excellence Strategy of the federal and state governments. This work has been supported by the Deutsche Forschungsgemeinschaft through the project SCHN 34/215-1 and DFG SCHN 34/213. PAB and SH acknowledge support from the German Academic Scholarship Foundation. LP acknowledges support from the DLR grant 50QE2002. We would like to thank Joachim Hamousl Deraps for making public the SLICS mock data, which can be found at [http://slics.roe.ac.uk/](http://slics.roe.ac.uk/). We thank Oliver Friedrich and Nick Wiedlers for helpful discussions.
2303.05422
Machine Learning based KNO-scaling of charged hadron multiplicities with Hijing++
The scaling properties of the final state charged hadron and mean jet multiplicity distributions, calculated by deep residual neural network architectures with different complexities are presented. The parton-level input of the neural networks are generated by the Hijing++ Monte Carlo event generator. Hadronization neural networks, trained with $\sqrt{s}=7$ TeV events are utilized to perform predictions for various LHC energies from $\sqrt{s}=0.9$ TeV to 13 TeV. KNO-scaling properties were adopted by the networks at hadronic level.
Gábor Bíró, Gergely Gábor Barnaföldi
2023-03-09T17:14:20Z
http://arxiv.org/abs/2303.05422v1
# Machine Learning based KNO-scaling of charged hadron multiplicities with Hijing++ ###### Abstract The scaling properties of the final state charged hadron and mean jet multiplicity distributions, calculated by deep residual neural network architectures with different complexities are presented. The parton-level input of the neural networks are generated by the Hijing++ Monte Carlo event generator. Hadronization neural networks, trained with \(\sqrt{s}=7\) TeV events are utilized to perform predictions for various LHC energies from \(\sqrt{s}=0.9\) TeV to 13 TeV. KNO-scaling properties were adopted by the networks at hadronic level. ## 1 Introduction Modern developments in Machine Learning methods led us to use these techniques in the field of high-energy physics (HEP) with great benefits [1, 2, 3, 4, 5]. Applications of the artificial intelligence hopefully not only provide solution for so far unsolved questions, but may help to improve physical models by recognizing and investigating the inner correlations from these new approaches. In our recent works, Deep Neural Networks (DNN) were proposed to calculate the hadron-level statistical properties of collision events from the parton-level input, which was pre-calculated and trained by the widely used Pythia 8 Monte Carlo (MC) event generator [6, 7, 8, 9, 10]. We showed that the application of relatively simple neural network models preserve the strong KNO-scaling of the hadronic final-state production yields and their multiplicity distributions at energies available at the Large Hadron Collider (LHC). Another observation was that, despite the models were trained exclusively at one fixed center-of-mass energy of \(\sqrt{s}=7\) TeV [10, 11, 12, 13], the acquired scaling properties result in the application of the same network in more general kinematical ranges. The Hijing++ (Heavy Ion Jet INteraction Generator, C++ version) is the new generation of the popular Monte Carlo event generator for heavy-ion physics. This program code is under final tests in the development timeline, and the latest, tuned version is already performing well with data [14, 15, 16]. In the current study, the previously proposed and Pythia 8-trained, ML-based hadronization models were used to investigate KNO-like scaling behaviour. In order to test more inclusively the neural network (NN) model, the parton-level input of the ML-based hadronization model were generated by the Hijing++ in this study. Replacing the Lund hadronization model with a DNN-based one can provide a cross-check and valuable input for the validation of the hadronization model, as it is presented in Fig. 1. ## 2 The applied models One of the focus of our interest is to investigate, whether a neural network is able to represent the properties of the hadronization, especially at the non-perturbative regime of quantum chromodynamics. Within this kinematical regime the physical description lacks of first principle calculations, therefore only complex phenomenological models exist with large sets of inner parameters. Since it has been proven that a complex deep neural network can pick up the properties of the jet evolution [5], indeed presenting QCD-like scaling properties [6], a machine learning-based hadronization model is well motivated. On the other hand, the universality of our hadronization network module is a key concept for the further developments, therefore we investigated the ML-based model by inserting it to an another Monte Carlo generator framework for cross-check tests. 
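The next paragraphs introduce the ResNet-based hadronization networks, 'Model L' and 'Model S'. Purely as an illustrative sketch of the building pattern such residual networks share (the feature counts, widths, and depth below are assumptions, not the actual configuration of either model), a fully connected residual block could look as follows:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A generic fully connected residual block: two linear layers with a skip
    connection, the basic building pattern of ResNet-type architectures."""
    def __init__(self, width: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # the skip connection carries the identity through

# illustrative stack mapping assumed parton-level features to a hadron-level observable;
# input size (32), width (128), depth (4 blocks) and output size are placeholders
model = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    *[ResidualBlock(128) for _ in range(4)],
    nn.Linear(128, 1),
)
print(model(torch.randn(8, 32)).shape)       # torch.Size([8, 1])
```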
The DNN hadronization models were developed by applying the popular ResNet architecture [2, 3, 9]. Two models with different complexities were proposed, designated as 'Model L' and 'Model S' with respect to the size of the hyperparameter space. Using the DNN-based hadronization models in the original Pythia 8-based Monte Carlo framework, we were able to reproduce the measured charged hadron multiplicity distribution, jet-multiplicity distribution, and observables _vs._ event activity classifiers in 'jetty' events (i.e. events where at least 2 jets with \(p_{T_{J}}\geq 40\) GeV and \(R=0.4\) are present) within a wide range of LHC energies. These correlated well with the physical expectations. The Hijing++ model is the successor of the original Fortran Hijing, completely rewritten in the modern C++ programming language [15, 16, 17]. The core concepts of its physics engine are the well-known wounded nucleon model with energy-dependent minijet production [18], combined with the Lund string fragmentation model of Pythia 8, which takes care of the hadronization [7, 8, 19]. It has new, modern built-in features such as modularity and CPU multithreading, whereas the underlying physics has been revamped and tuned for the RHIC-LHC energy era. Figure 1: The schematic overview of the investigated processes and cross-checks. In the current study, our concept follows the idea presented in Fig. 1: the partonic initial-state events, which are the inputs of the DNN hadronization models, are calculated with Hijing++ using the same event criteria as applied in Refs. [6, 10]. Observables are presented and compared to the original Monte Carlo generated events. ## 3 Results The multiplicity distributions of charged hadrons stemming from proton-proton collisions are presented on the left panel of Fig. 2, where the Hijing++-calculated values (orange markers) are compared with the NN-predicted results for various c.m. energies at mid-rapidity, \(|y|<0.5\) (blue and green lines). The original Pythia 8 results are also shown with red lines for reference. Each curve contains 300k generated events. The right panel of Fig. 2 shows the corresponding \(P_{n}=\frac{1}{\langle n\rangle}\Psi\left(\frac{n}{\langle n\rangle}\right)\) scaling functions, with the joint curves presenting the effect of KNO-like scaling. Figure 2: Mid-rapidity multiplicity (_left panel_) and KNO-scaled distributions (_right panel_) of charged hadrons in proton-proton collisions at LHC energies. The primary observation on the multiplicity distribution is that the Hijing++ results deviate from the Pythia 8 ones. A significant excess contribution appears in the higher multiplicity classes relative to the low-multiplicity region. This difference is not surprising, since the phenomenological mechanisms of the two models differ in the non-perturbative regime: minijet production in Hijing++ generates more hadrons in the mid-multiplicity range. The predictions of the NN-based Model L and Model S lie between the two Monte Carlo models. Recalling that the NN-based models were trained only on Pythia 8 data at the fixed 7 TeV c.m. energy, the multiplicity distributions of these networks convoluted with Hijing++ are closer to the original Pythia 8-calculated curves at all energies. Indeed, the trends are more Pythia 8-like. This supports the idea that hadronization plays a more significant role in multiplicity production than the parton shower evolution, which differs between the two Monte Carlo generators. 
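For reference, the KNO transformation used for the right panels amounts to rescaling each multiplicity distribution by its own mean, \(\Psi(z)=\langle n\rangle P_{n}\) with \(z=n/\langle n\rangle\). A minimal sketch of this rescaling is given below; it assumes the distribution is available as per-bin event counts, and the function name is ours rather than anything from the analysis code.

```python
import numpy as np

def kno_scale(n_values, event_counts):
    """KNO-scale a multiplicity distribution.

    n_values     : multiplicities n of each bin
    event_counts : number of events observed in each bin
    Returns (z, psi) with z = n/<n> and psi = <n> * P_n,
    following P_n = (1/<n>) * Psi(n/<n>).
    """
    n = np.asarray(n_values, dtype=float)
    counts = np.asarray(event_counts, dtype=float)
    p_n = counts / counts.sum()      # normalize counts to probabilities P_n
    mean_n = np.sum(n * p_n)         # <n>
    return n / mean_n, mean_n * p_n  # (z, Psi(z))
```

When KNO-scaling holds, curves obtained at different \(\sqrt{s}\) values collapse onto a common \(\Psi(z)\), which is the behaviour displayed by the joint curves in the right panels.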
We investigated whether the KNO-scaling of the multiplicities is preserved. One can observe on the right panel of Fig. 2 that, after applying the KNO-transformation, good agreement is found among all the models up to the highest multiplicities. It also holds for all datasets that the larger the multiplicity, the stronger the violation of the KNO-scaling, which has been shown experimentally as well [20]. This violation was found to be more pronounced for the original Hijing++ data. Model L and Model S scale well in parallel to these, apart from the lowest multiplicity values, where the applied cuts and limited statistics constrain the training process. The mean jet multiplicity distributions and their KNO-scaled curves are shown in the left and right panels of Figure 3, respectively. The deviation between the distribution shapes of the original Monte Carlo model results mostly vanishes. In this high-momentum fragmentation regime only a minor impact from the soft non-perturbative sector is present, therefore differences between the MC calculations appear only at the highest multiplicity values. In contrast to the above agreement, the NN-based Model L and Model S present an irregular, double-bump structure in the mean jet multiplicity distributions. The magnitude of this effect is independent of the hyperparameter-space volume but becomes stronger for higher \(\sqrt{s}\) values. This suggests that although the global jet structure (e.g. the mean jet multiplicity) is similar between the two MC models, the sub-structure is quite different. This effect requires further investigation on a per-jet basis. Figure 3: Jet mean multiplicity (_left panel_) and KNO-scaled jet mean multiplicity (_right panel_) in proton-proton collisions, for jets with \(p_{T_{J}}\geq 40\) GeV and \(R=0.4\). The KNO-scaled mean jet multiplicities on the right panel of Fig. 3 are similar to the results seen previously: the curve shapes are mostly universal at all energies for each investigated model, again with good agreement between the Monte Carlo models, apart from the highest multiplicity bins, which lack statistics. The shapes of the scaled distributions are different for the NN-based Model L and Model S, separated into two branches by the size of the hyperparameter-space volume. ## 4 Summary In this contribution the scaling properties of charged hadron multiplicities and jets at LHC energies, stemming from proton-proton collisions, were presented. The multiplicity distributions were determined by two Monte Carlo event generators and deep neural network based hadronization models. The neural network results presented a KNO-scaling in jetty events at \(|y|<0.5\) rapidity, which differed from the Monte Carlo predictions. On the other hand, the mean jet multiplicity distributions revealed diverse scaling behavior for the different models, with a better agreement between the Hijing++ and Pythia 8 calculations. ## Acknowledgements The research was supported by the Hungarian National Research, Development and Innovation Office OTKA K135515, 2019-2.1.11-TET-2019-00078, 2019-2.1.11-TET-2019-00050, 2020-2.1.1-ED-2021-00179, 2019-2.1.6-NEMZ_KI-2019-00011, 2022-4.1.2-NEMZ_KI-2022-00008 and 2022-4.1.2-NEMZ_KI-2022-00009 grants, and by the Wigner Scientific Computing Laboratory and the ELKH Cloud. Author G.B. was supported by the EU project RRF-2.3.1-21-2022-00004 (Artificial Intelligence National Laboratory).
2305.09196
Exploring Platform Migration Patterns between Twitter and Mastodon: A User Behavior Study
A recent surge of users migrating from Twitter to alternative platforms, such as Mastodon, raised questions regarding what the migration patterns are, how different platforms impact user behaviors, and how migrated users settle in during the migration process. In this study, we elaborate on how we investigate these questions by collecting data on over 10,000 users who migrated from Twitter to Mastodon within the first ten weeks following the ownership change of Twitter. Our research is structured in three primary steps. First, we develop algorithms to extract and analyze migration patterns. Second, by leveraging behavioral analysis, we examine the distinct architectures of Twitter and Mastodon to learn how user behaviors correspond with the characteristics of each platform. Last, we determine how particular behavioral factors influence users to stay on Mastodon. We share our findings on user migration, insights, and lessons learned from the user behavior study.
Ujun Jeong, Paras Sheth, Anique Tahir, Faisal Alatawi, H. Russell Bernard, Huan Liu
2023-05-16T06:07:50Z
http://arxiv.org/abs/2305.09196v4
# Exploring Platform Migration Patterns between Twitter and Mastodon: ###### Abstract A recent surge of users migrating from Twitter to alternative platforms, such as Mastodon, raised questions regarding what migration patterns are, how different platforms impact user behaviors, and how migrated users settle in the migration process. In this study, we elaborate how we investigate these questions by collecting data over 10,000 users who migrated from Twitter to Mastodon within the first ten weeks following Elon Musk's acquisition of Twitter. Our research is structured in three primary steps. First, we develop algorithms to extract and analyze migration patterns. Second, by leveraging behavioral analysis, we examine the distinct architectures of Twitter and Mastodon to learn how different platforms shape user behaviors on each platform. Last, we determine how particular behavioral factors influence users to stay on Mastodon. We share our findings of user migration, insights, and lessons learned from the user behavior study. ## Introduction With the proliferation of social media platforms, users tend to be increasingly mobile, moving between different platforms as their needs, preferences, and interests change, a phenomenon known as online platform migration [22]. Elon Musk's acquisition of Twitter on October 27, 2022, created an opportunity to study platform migration as numerous users migrated to Mastodon [23], a microblogging platform with similar features to Twitter, such as "toots" and "boosts" corresponding to "tweets" and "retweets," but which operates on a decentralized network of self-hosted servers [25]. Despite its decentralized architecture, Mastodon enables users to request follows and share tools, boosts, or favorite requests across servers using the shared Activity Pub protocol [26]. This facilitates the formation of federated servers and interest-based communities, each managed by distinct moderation policies, as depicted in Figure 1. Mastodon is thus not a replica of Twitter; it provides a unique, community-based user experience and interactions across various servers [10]. Such distinctive features made Mastodon an increasingly popular choice for users seeking a new platform for migration. Online platform migration has been researched [11, 12, 13, 14]. Observing the exodus from Twitter, however, we are intrigued to understand user migration patterns: **(1)** what follows the exodus, i.e., whether migration is persistent or waning - this issue arises because users can maintain accounts on multiple platforms, oscillating their focus between platforms until fully committing to the new platform; **(2)** whether a platform architecture impacts the behaviors of migrated users - exploring this connection may reveal the relationships between platform disparities user behaviors following migration and different levels of user engagement; and **(3)** what behavioral factors contribute to the sustainability of platform migration essential for a comprehensive understanding of the dynamics of platform migration. In this study, we investigate migration patterns by comparing user behaviors between Twitter and Mastodon, focusing on the dynamics of users switching their attention between the two platforms over time. To facilitate this comparative study, we propose methods to link the accounts of over 10,000 individuals who migrated from Twitter to Mastodon and to determine user characteristics, such as occupations. 
Moreover, we collect data on user behaviors, including activities and network interactions on each platform during the first ten weeks after Twitter's ownership change on October 27, 2022. Our study is motivated by three research questions: Figure 1: The distinct platform architectures of (a) Twitter, which is a centralized platform, and (b) Mastodon, which employs a decentralized platform with a federated network. * **RQ1**: What are the migration patterns observed between Twitter and Mastodon after Twitter's ownership change? * **RQ2**: What distinct user behaviors between Twitter and Mastodon arise due to each platform's architecture? * **RQ3**: What specific behavioral characteristics of migrated users influence their decision to stay on Mastodon? With respect to **RQ1**, we examine the evolving migration patterns with users' active status and attention shifts between these platforms over time. Building on this analysis, we correlate migration patterns with notable events, such as Musk's acquisition of Twitter, his announcement of not running Twitter forever, and so on. Through comparison of discourse and sentiment with migration patterns, we aim to uncover the potential motivations and how they affect migration patterns. In light of **RQ2**, we compare the behaviors of migrated users on Twitter and Mastodon to understand the disparities between the two platforms. Especially, our experiments include comparing the distribution of users' occupations and hashtags based on their popularity on each platform. This experiment aims to serve as empirical evidence of how the design of each platform affects the engagement of specific user groups more than others. Regarding **RQ3**, we focus on the behavioral traits of users who stayed on Mastodon despite notable events drawing attention back to Twitter. By performing statistical analyses, we highlight correlations between user retention and various user behaviors, including some distinctive features exclusive to Mastodon. Through this analysis, we shed light on Mastodon's unique appeal to its users and draw insights on certain behavioral factors that promote sustainable migration. Our main contributions are summarized as follows: * Following the platform policies, we crawl and curate a dataset of over 10,000 users who migrated from Twitter to Mastodon by developing an effective method of mapping accounts between the two platforms. * We propose a novel framework for understanding the intricate dynamics of migration patterns between Twitter and Mastodon, with a focus on the shift of user attention between the two platforms. It enables us to understand the interplay between motivational factors, the unique platform architectures, and their relations to user behaviors. * We present key behavioral factors that promote the sustainability of platform migration to Mastodon, interesting insights for future study of platform migration. ## Related Work ### Migration Theory and Platform Migration Migration has long been a subject of study across social sciences, with the push-pull theory being a key concept in this field [11, 12]. The push-pull theory suggests that factors pushing people away from a location and pulling them toward a new one drive migration decisions [11]. This concept can also be applied to online platform migration [12, 13, 14]. Kumar et al. [15] showed that user behaviors, such as posting activity and follower count, influence migration to specific platforms. Newell et al. 
[16] studied the motivations for migration through push-pull factors by surveying users who migrated from Reddit to other platforms during community unrest. Lorincz et al. [17] found that social capital can affect users' decisions when selecting a new platform due to potential loss of social investment. Other factors including platform architecture, interface, toxicity, moderation policy, presence of friends, and community migration can serve as motivations for online platform migration [14]. Zia et al. [23] studied the recent surge in user migration from Twitter to Mastodon [15]. This study presented several findings, including users' preference for joining large servers, often importing their Twitter followers, and exhibiting low toxicity in their posts [16]. ### Growth and Challenges of Mastodon Mastodon's decentralized architecture attracted many users from traditional centralized platforms like Twitter and Facebook because of its several unique characteristics [12, 13]. La Cava et al. [20] found that connections in Mastodon are more topic-based rather than popularity-based due to the lack of a recommendation system [12]. Mastodon emphasizes conversations and interactions over favorites and reshares [12]. Despite Mastodon's unique advantages over traditional platforms, it faces with numerous challenges inherent to decentralized networks. Information consumers on Mastodon establish most connections and broker information [12]. The user-driven trend toward the centralization of Mastodon, primarily on a handful of large servers, is also observed. [10, 16]. Other issues, such as advertising revenue, handling moderation tasks, and the the availability of servers, pose challenges to operating Mastodon [16, 17, 18]. Unlike previous studies on platform migration, our research explored dynamic migration patterns between Twitter and Mastodon over time. We primarily concentrated on users shifting their attention between these platforms and algorithms to infer the motivations for such migrations. Furthermore, we analyzed the relation between platform architecture and the behaviors of migrated users, pinpointing behavioral factors that encourage users to stay on Mastodon. This holistic approach enriches our comprehension of migration patterns and sustainable platform migration. Figure 2: The process of migration, while still maintaining the previous account and shifting attention between platforms. ## Migration Types on Social Media In social media and migration studies, two types of migration are identified [21, 16]: (1) _Permanent migration_, where users transition to a new platform, deactivate their original account, and exclusively engage on the new platform; and (2) _Attention migration_, where users maintain presence on both platforms but shift their focus toward one of them. Permanent Migration.Let \(U_{p_{1}}\) represent the set of users on the platform \(p_{1}\) and \(U_{p_{2}}\) represent the set of users on the platform \(p_{2}\). A user \(u\) is considered to have permanently migrated from platform \(p_{1}\) to platform \(p_{2}\) if two conditions are met: (1) the user \(u\) was a member of \(p_{1}\) before time \(t\), and (2) the user \(u\) is no longer a member of \(p_{1}\) and is a member of \(p_{2}\) at time \(t\) (i.e., \(u\notin U_{p_{1}}\) and \(u\in U_{p_{2}}\)). Permanent migration may be a result of profile removal, deletion, or suspension from the original platform. 
Attention Migration.For a user \(u\) who is a member of both platforms (i.e., \(u\in U_{p_{1}}\) and \(u\in U_{p_{2}}\)) and active on both platforms at time \(t_{i}\), attention migration is said to occur between two distinct times \(t_{i}\) and \(t_{j}\) where \(t_{j}>t_{i}\) if \(u\) becomes inactive on platform \(p_{1}\) while remaining active on platform \(p_{2}\) at time \(t_{j}\). In this case, the user's attention is considered to have migrated from platform \(p_{1}\) to platform \(p_{2}\) during the time interval \(\delta=t_{j}-t_{i}\). Attention migration could represent a short-term migration with the potential to evolve into permanent migration over time. To ascertain a user's active status on a platform, we define an active user as: Active User.Given a social media platform \(p\), a user \(u\in U_{p}\), time \(t_{j}>t_{i}\), and interval \(\delta=t_{j}-t_{i}\), \(u\) is considered to be active on the platform \(p\) at time \(t_{j}\), if the user has performed at least one activity on the site since time \(t_{i}\). Otherwise, the user is considered inactive. User activities include various actions possible on the social media platform, such as posting messages and sharing other users' contents. Our study recognizes that the permanent migration is not limited to the transition from Twitter to Mastodon, as the reverse direction is also possible. However, we focus on users who initiated migration from Twitter to Mastodon to examine the impact of Twitter's ownership change on migration. For this specific objective, we excluded users who created Mastodon accounts before creating their Twitter accounts. ## Data Collection From October 24, 2022, to January 2, 2023, we identified a total of 10,491 migrated users who have accounts on both Twitter and Mastodon. We removed 56 users (0.53%) who had Mastodon accounts before Twitter accounts, along with 266 users (2.53%) whose accounts are inaccessible either on Twitter or Mastodon as of January 2, 2023. This process left us 10,169 migrated users for our analysis. Mapping Mastodon to Twitter Accounts.We utilize Mastodon users' profiles, noting that they frequently disclose details about their additional social media accounts. We first search for fields with Twitter, and use a regular expression to obtain the username from their Twitter URL. If the Twitter URL is absent in the profile, we employ a regular expression to extract the Twitter username following the @ symbol. It is worth noting that Mastodon's decentralized structure poses challenges in gathering information across all servers [10]. Throughout our research period, several prominent servers, such as _mastodon.social_, halted new account registrations due to a surge in user traffic. To maintain the integrity of this study, we concentrated on servers that are highly available when using the Mastodon API. In this study, we collected data from the top 10 Mastodon servers with the highest number of incoming users during the research period, as depicted in Table 1; these servers primarily utilize English for communication. We analyzed the number of incoming user data for each server through the _instances.social_ API. ### Collecting User Activities and Network Interactions We utilized APIs to monitor the behavior of the users included in our study. In compliance with ethical and privacy policies, these APIs enabled us to collect user activities such as tweets, retweets, toots, and boosts. 
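Before describing the collected data, a minimal sketch of the profile-based account mapping introduced above is shown below. The field handling and regular expressions are illustrative choices on our part, since the exact patterns are not given in the text.

```python
import re

TWITTER_URL_RE = re.compile(r"twitter\.com/@?(\w{1,15})", re.IGNORECASE)
HANDLE_RE = re.compile(r"@(\w{1,15})\b")

def extract_twitter_username(profile_fields, bio_text=""):
    """Return a Twitter username referenced in a Mastodon profile, or None.

    profile_fields : dict of {field name: field value} from the profile
    bio_text       : free-text bio, used as a fallback
    """
    # 1) Prefer a profile field that mentions Twitter and contains a Twitter URL.
    for name, value in profile_fields.items():
        if "twitter" in name.lower() or "twitter" in value.lower():
            match = TWITTER_URL_RE.search(value)
            if match:
                return match.group(1)
    # 2) Otherwise fall back to a bare @handle in the bio text.
    match = HANDLE_RE.search(bio_text)
    return match.group(1) if match else None
```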
Our data collection efforts included gathering the messages posted during the research period on both Twitter's and Mastodon's public timelines. This process yielded 1,198,431 tweets and 554,141 retweets from Twitter, along with 728,188 toots and 548,885 boosts from Mastodon. All the data was securely stored in an anonymized form within a MongoDB database. Moreover, we employed the APIs to perform comprehensive crawling of the followers of migrants on both Twitter and Mastodon, obtaining nearly all available followers of the migrated users on these platforms: 85.42% of the total 53,573,492 followers of the migrated users on Twitter and 88.72% of the total 2,990,316 followers of the migrated users on Mastodon. We also collected data on interactions with other users, such as replies to postings on Mastodon, and the profiles of users who participated in these replies. ### Collecting and Grouping User Occupations Since users may not always include occupations in their profiles, we utilized Stanford CoreNLP [12], a pretrained model capable of identifying occupation titles through named entity recognition. Initially, we examined Twitter user profiles for occupation titles and, if unavailable, referred to the corresponding Mastodon profiles. Then, we employed the UK's Standard Occupational Classification (SOC 2010) system, widely adopted in prior research on social media analysis [13, 14, 15], to assign an occupation to each user. The system assigns a three-digit code corresponding to occupation titles in its dictionary. We matched each user's first occupation title in the profile to its SOC code, using the subsequent titles if not matched. This method identified the occupations of 5,817 of 10,169 users (57.2%). To manage the numerous SOC codes, we focused on the first digit (e.g., 1xx), which denotes the nine major groups. Two annotators classified these major groups for a random 300-user subset, resulting in a Cohen's Kappa of 77.82%. Our approach achieved an F1 score of 65.26%, effectively predicting the nine major groups [1]. Figure 4 displays the distribution of occupations among users, with the 2xx (Professional Occupations) and 3xx (Associate Professionals and Technicians) groups being the most prominent. ## RQ1: What are the migration patterns between Twitter and Mastodon? In this section, we investigate the extent to which individuals remained active on Twitter and Mastodon, and how they shifted their attention between the platforms following Musk's acquisition of Twitter and other major Twitter-related events. ### Understanding Evolving Migration Patterns Figure 5 displays significant Twitter-related events along with the number of daily active users, accounting for users active on either one or both platforms within specific ranges. Active users are identified by setting a one-day interval, symbolized as \(\delta\). This interval commences at midnight on the preceding date, \(t_{i}\), and concludes at midnight on the present date, \(t_{j}\). To illustrate, if \(t_{j}\) represents January 2nd and the interval is configured to span one day, the system checks the user's activities from 00:00 to 23:59 on January 1st. To delineate major changes in Figure 5, we examine temporal shifts in daily active users using Prophet [11], a time-series method developed by Facebook that is capable of handling nonlinear growth and seasonal changes. Our analysis involved applying Prophet separately to the three trends named Twitter-only, Mastodon-only, and Twitter & Mastodon. 
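As a concrete illustration of this step, the sketch below fits Prophet to one of the three daily-active-user trends and ranks its candidate changepoints. It assumes the trend is available as lists of dates and counts; the 0.243 changepoint prior scale is the value quoted in the Appendix, while the way candidate changepoints are filtered into "temporal shifts" is a thresholding choice not spelled out in the text.

```python
import pandas as pd
from prophet import Prophet  # pip install prophet

def detect_temporal_shifts(dates, daily_active_users, prior_scale=0.243):
    """Fit Prophet to one daily-active-user trend and rank its changepoints."""
    df = pd.DataFrame({"ds": pd.to_datetime(dates), "y": daily_active_users})
    model = Prophet(changepoint_prior_scale=prior_scale)
    model.fit(df)
    # Each candidate changepoint carries a fitted rate change (delta);
    # larger magnitudes indicate stronger shifts in the trend.
    deltas = model.params["delta"].mean(axis=0)
    shifts = pd.DataFrame({
        "changepoint": model.changepoints.to_numpy(),
        "rate_change": deltas,
    })
    order = shifts["rate_change"].abs().sort_values(ascending=False).index
    return shifts.reindex(order)
```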
When pinpointing key moments where these temporal shifts overlapped, we found that these overlapped points correspond to Mastodon's first peak [11, 12], and its second peak [11, 12]. Based on these two important moments, we were able to distinguish three distinct phases in migration patterns: (1) the _protesting phase_, (2) the _adaptation phase_, and (3) the _current phase_. The _protesting phase_ began following the change in ownership of Twitter and concluded before the first peak in usage of Mastodon. During this phase, the number of active users on Twitter consistently declines, while there is a steep increase in the number of active users on Mastodon. These significant changes in the number of active users appear with various events happened on the Twitter, such as the mass layoffs at Twitter and the release of the Twitter 2.0, which included controversial changes such as the discontinuation of verification for misinformation related to COVID-19 and the rollout of a subscription system for verified accounts. The _adaptation phase_ follows the first peak and continues until the second peak of Mastodon. In this phase, the number of active users on Mastodon surges following a regulation change on Twitter, which involved the suspension of users who mention any server of Mastodon or provide a link to their Mastodon profile. This resulted in the suspension of several accounts on Twitter, such as the official Mastodon account Figure 4: The pie chart illustrates the distribution of the distribution of nine major groups, each accompanied by corresponding tags. The major groups are based on the first digit of the UK’s Standard Occupational Classification (SOC 2010) code, which has been assigned to each user. Further descriptions regarding nine major groups in the UK’s SOC 2010 are provided in Appendix. Figure 5: Trends in daily active users tagged with major events. Twitter-only (blue) and Mastodon-only (orange) lines indicate the number of users active exclusively on one platform, while Twitter & Mastodon (green) represents users active on both platforms. The \(x\)-axis denotes a particular date \(t_{j}\), when we assess whether a user was active within the past one day. Red dashed lines highlight the key moments, where the temporal shifts overlap among the three trends (blue, orange, and green) Figure 6: Trends in attention migrations toward Twitter (blue) and Mastodon (orange). The \(x\)-axis represents a date \(t_{j}\) when users migrated their attention. The trends are shown on a daily or weekly basis, with corresponding intervals of \(\delta\) = _1 day_ or \(\delta\) = _1 week_. and the accounts of various journalists. This event could have prompted more users to become active on Mastodon. The _current phase_ follows the second peak of Mastodon and continues until the present time. After Elon Musk announced his intention to step down as CEO of Twitter when he finds a suitable successor, the number of active users on Mastodon started to decline. This event may have had a mitigating effect on attention given to Mastodon. At present, it appears that migrants tend to stay active on Twitter rather than continuing to use Mastodon, which raises questions about the longevity of the platform migration. Upon analyzing the three phases alongside Figure 6, which presents attention migration trends on daily and weekly intervals, it becomes evident that both the number of active users and attention migration to Twitter surpass those of Mastodon. 
This pattern implies that migrated users tend to be more active on Twitter and shift to Twitter after trying both platforms. Notably, a significant gap in attention migration trends between Twitter and Mastodon emerges during the _adaptation phase_. We speculate such a large gap in attention migration is because numerous users faced challenges in adjusting to Mastodon after arrival. Overall, the findings reveal that users tend to return to Twitter after the migration, and suggest that the momentum of migration to Mastodon has weakened. ### Inferring Motivational Factors of Migration To examine if Twitter's change in ownership prompted users to migrate to Mastodon, we leveraged BERTopic [13] to analyze discourse over time. The frequency of tweets and roots relating to the top ten topic groups over time are graphically represented in Figure 7. Our experiment uncovered a significant portion of Twitter-related discusses (topic 1 on both Twitter and Mastodon), as well as a strong focus on Elon Musk (topic 4 on Twitter and topic 1 on Mastodon). The prominent presence of Twitter-related topics throughout the protest phase and Elon Musk-associated topics toward the end of the adaptation phase provides quantitative evidence of a connection between Twitter's change in ownership and the subsequent platform migration. Especially, tools on Mastodon showed a greater frequency of references to the specific keywords Twitter and Elon Musk in comparison to tweets on Twitter. From the total of 728,188 tools analyzed, 13.6% included one of these keywords. In contrast, 6.1% of the 72,455 tweets contained one of these keywords Identifying trending discourses, however, does not provide insight into the motivations for the migration. To fill this gap, we utilized sentiment analysis, leveraging a DeBERTa-based model [12, 13], to investigate if Musk's acquisition led users to abandon Twitter as a form of protest. This model, adeptly fine-tuned for aspect-based sentiment analysis, allowed us to gauge sentiments specifically targeted at either Twitter or Elon Musk, in addition to capturing the general sentiment from non-specific messages. Figure 8 illustrates the distribution of sentiment scores of tweets and toots on each platform. Our observations suggest that Mastodon may be particularly appealing to individuals with negative sentiments toward Twitter and its leadership. The migration from Twitter to Mastodon aligns with the push-pull theory of migration [14], wherein the negative sentiment toward Twitter and its leadership may act as a "push factor" [15] prompting individuals to migrate to Mastodon. In contrast, users show more positive sentiment toward Mastodon than Twitter, which can serve as a "pull factor" attracting users to Mastodon. ## RQ2: Distinct User Behaviors of Migrated Users on Twitter and Mastodon Addressing this research question allows us to compare the distinct behaviors exhibited by the studied users on both Twitter and Mastodon. Through this examination, we can investigate whether these varying behaviors can be attributed to the unique architectures of the respective platforms. ### User-level Features To explore how users exhibit different behaviors on each platform, we decided to compare the user behaviors that are common on both Mastodon and Twitter, such as tweets and roots. For a particular user denoted by \(u\), we define three distinct types of user-level features on each platform represented as \(p\): (1) _User Activity_, (2) _User Network_, and (3) _User Response_. 
The definitions for these categories are as follows: **User Activity.**\(\mathcal{A}(u,p)\) is determined by a user's two distinct activity types \(\{a_{i}\}_{i=1}^{2}\), which compromise status messages and reshares (e.g., tweets and retweets that a user has posted). The counting function \(\sigma(u,a_{i},p)\) measures the cumulative count of the activity \(a_{i}\) during the studied period. Formally, the user activity can be represented as: \[\mathcal{A}(u,p)=\sum_{i=1}^{2}\frac{\sigma(u,a_{i},p)-\min_{u^{\prime}}\left( \sigma(u^{\prime},a_{i},p)\right)}{\max_{u^{\prime}}\left(\sigma(u^{\prime},a _{i},p)\right)-\min_{u^{\prime}}\left(\sigma(u^{\prime},a_{i},p)\right)} \tag{1}\] **User Network.**\(\mathcal{W}(u,p)\) is derived by two distinct network types \(\{w_{i}\}_{i=1}^{2}\), which include followings and followers of a user. The counting function \(\sigma(u,w_{i},p)\) measures the size of the network \(w_{i}\). The user network can be defined as: \[\mathcal{W}(u,p)=\sum_{i=1}^{2}\frac{\sigma(u,w_{i},p)-\min_{u^{\prime}} \left(\sigma(u^{\prime},w_{i},p)\right)}{\max_{u^{\prime}}\left(\sigma(u^{ \prime},w_{i},p)\right)-\min_{u^{\prime}}\left(\sigma(u^{\prime},w_{i},p) \right)} \tag{2}\] **User Response.**\(\mathcal{R}(u,p)\) is obtained by two distinct response types \(\{r_{i}\}_{i=1}^{2}\), which include favorites and reshare a user has received (e.g., likes and retweets from other users). The counting function \(\sigma(u,r_{i},p)\) measures the cumulative count of the response \(r_{i}\) during the studied period. Formally, the user response can be represented as: \[\mathcal{R}(u,p)=\sum_{i=1}^{2}\frac{\sigma(u,r_{i},p)-\min_{u^{\prime}}\left( \sigma(u^{\prime},r_{i},p)\right)}{\max_{u^{\prime}}\left(\sigma(u^{\prime}, r_{i},p)\right)-\min_{u^{\prime}}\left(\sigma(u^{\prime},r_{i},p)\right)} \tag{3}\] where \(u^{\prime}\) denotes a user within the population under study. Please note that we utilize max-min normalization on each platform when calculating \(\mathcal{R}(u,p)\), \(\mathcal{A}(u,p)\), and \(\mathcal{W}(u,p)\). This normalization accounts for the different scales of user behaviors present on each platform. Figure 8: The sentiment score distribution is compared between Twitter (blue) and Mastodon (orange). We compare the distributions of Twitter and Mastodon across three categories: (1) general sentiment; (2) Twitter-specific; and (3) Elon Musk-focused. The scores range from -1 to 1, with lower values indicating a more negative sentiment. The mean sentiment score for each distribution is denoted by the dashed lines. Utilizing the Kolmogorov-Smirnov test for statistical analysis, we found a considerable difference between the two distributions of Twitter and Mastodon across the three categories (\(p<0.001\)). Figure 7: The trends in the number of tweets on Twitter and toots on Mastodon that mention one of the top 10 topic groups. Each topic group is shown with a discourse consisting of its four most frequently occurring words, separated by underscores. The topic groups are numbered based on their rank of proportion to the total messages, where a lower rank signifies a higher proportion. ### Disparity in User-level Features We conducted a comparative analysis of three distinct user-level features on both Twitter and Mastodon platforms. Our comprehensive study derives insights from Table 2, which shows the mean and mean absolute deviation values, along with Figure 8, that illustrates the interquartile plot. 
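The features compared here follow Eqs. (1)-(3); before turning to the observations, the sketch below shows one way to compute them. For each platform, every raw count column (e.g., statuses and reshares) is min-max normalized across the studied users and the normalized columns are summed. The array layout and the guard against constant columns are our own choices.

```python
import numpy as np

def user_level_feature(counts):
    """Compute a user-level feature as in Eqs. (1)-(3) for one platform.

    counts : array of shape (n_users, n_types), e.g. columns holding each
             user's status-message and reshare counts.
    Returns an array of length n_users: the per-type min-max normalized
    counts summed over the types.
    """
    counts = np.asarray(counts, dtype=float)
    lo = counts.min(axis=0)
    rng = counts.max(axis=0) - lo
    rng[rng == 0] = 1.0  # avoid division by zero for constant columns
    return ((counts - lo) / rng).sum(axis=1)

# Example: activity = user_level_feature(np.column_stack([statuses, reshares]))
```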
Based on this analysis, we make three primary observations as follows: First, we observe that users exhibit, on average, 4.47 times higher _user activity_ on Twitter than on Mastodon. This disparity may stem from the fact that numerous users continued to use both platforms even after migration, with Twitter serving as their primary platform [13]. Additionally, the higher MAD value on Twitter indicates a broader spread of activities compared to Mastodon. Second, we find that users maintain, on average, 1.95 times more _user network_ on Mastodon than on Twitter. This indicates that users tend to focus on building connections with other users on Mastodon [13] than Twitter. The MAD value is higher on Mastodon, signifying a more varied distribution of network sizes among the migrated users. Third, our analysis indicates that users, on average, receive 1.25 times more _user responses_ on Twitter than on Mastodon, likely due to Mastodon's lower Q1 value (0.00003) compared to Twitter's (0.003). The higher MAD value on Twitter suggests that user responses are more well spread among users on Twitter. Notably, a larger percentage of users receive zero responses on Mastodon (13.6%) than on Twitter (3.2%), consistent with previous findings of overall lower response rates on Mastodon [11]. ### Disparity in Occupations to User Response We utilized the Gini coefficient, a widely accepted metric for assessing disparities in income or wealth among diverse groups [12], to evaluate the inequality in occupational distribution. The Gini coefficient values range between 0 and 1, with lower values signifying greater equality. We computed the Gini coefficient using \(n\) occupation groups (i.e., the nine major groups in the UK's SOC code) and \(x_{i}\), representing the user count for the \(i\)-th group among the 5,817 users (57.2% of the 10,169 users) with identifiable occupations as follows: \[Gini\,Coefficient=1-\sum_{i=1}^{n}\left(\frac{x_{i}}{\sum_{j=1}^{n}x_{j}}\right) ^{2} \tag{4}\] The Gini coefficient for 5,817 users is calculated to be 0.627. Such high inequality mainly stems from the over-representation of user groups like SOC 2xx (Professionals) and SOC 3xx (Associate Professionals and Technicians). One potential reason for this inequality is the unique appeal that Mastodon's decentralized platform holds for individuals with academic or technology-related backgrounds. [13, 14, 15]. To discern if certain groups of occupations are encouraged to attract more user responses on each platform, we examined the occupational inequality of a subset in each platform. Our primary focus was on the relationship between occupational inequality and the user response on each platform. To this end, we first ranked users according to their responses on each platform. Once we created the ranking, we focused on the users who actively engage on each platform and represent a value above the median in the data. Figure 10 demonstrates that occupational inequality on Twitter surpasses the inequality of the original sample within the top 50% of users. Interestingly, the disparity between Twitter and Mastodon peaks within the highest 2.5% of users, underlining a significant inequality among this extremely popular user group on Twitter. Conversely, on Mastodon, users exhibit lower inequality than the original sample up to the top 50% of users, suggesting a more evenly distributed occupational distribution among its highly engaged users. 
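Equation (4) translates directly into code; in the sketch below, the group counts in the example call are purely illustrative and not the paper's values.

```python
import numpy as np

def occupational_gini(group_counts):
    """Inequality of the occupational distribution as defined in Eq. (4).

    group_counts : user counts x_i for the nine SOC major groups.
    """
    x = np.asarray(group_counts, dtype=float)
    shares = x / x.sum()
    return 1.0 - float(np.sum(shares ** 2))

# Illustrative call with made-up counts for the nine major groups:
# occupational_gini([150, 2400, 1600, 300, 120, 200, 250, 90, 80])
```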
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Metric**} & \multicolumn{2}{c}{**User Activity**} & \multicolumn{2}{c}{**User Network**} & \multicolumn{2}{c}{**User Response**} \\ \cline{2-7} & A & B & A & B & A & B \\ \hline AVG. & 0.0197 & 0.0044 & 0.0064 & 0.0125 & 0.0061 & 0.0049 \\ MAD. & 0.0228 & 0.0058 & 0.0062 & 0.0103 & 0.0081 & 0.0074 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of user features across platform **A** (Twitter) and **B** (Mastodon). AVG denotes average, and MAD (Mean Absolute Deviation) measures the dispersion of statistics. Figure 10: Gini coefficients for Twitter (blue) and Mastodon (orange) depict occupational distribution among migrants, focusing on users above the top 50% response rate. The red dashed line is the Gini coefficient for the original sample. Figure 9: Interquartile box plots comparing the values of user-level features in Twitter (blue) and Mastodon (orange). Within each box plot, the red line indicates the median value. ### Disparity in Hashtags to User Response We compared hashtags shared by migrated users on Twitter and Mastodon, and their respective user response on each platform. As a user's popularity on one platform may not necessarily translate to popularity on another platform, we categorized users into four categories based on the popularity of users: (1) popular on Twitter, but not Mastodon; (2) popular on Mastodon, but not Twitter; (3) popular on both platforms; and (4) unpopular on both platforms. Users were classified into one of four categories based on their ranks on each platform, \(rank_{A}\) for Twitter and \(rank_{B}\) for Mastodon. For categories (1) and (2), users are sorted in descending order based on the difference in their ranks (\(rank_{diff}=rank_{A}-rank_{B}\)). This represents users with the largest differences in user response between the two platforms, with the top 10% being users who are popular on Mastodon but not on Twitter and the bottom 10% being users who are popular on Twitter but not on Mastodon. For categories (3) and (4), users are sorted based on the sum of their ranks (\(rank_{sum}=rank_{A}+rank_{B}\)). The top 10% in this case are users who are unpopular on both platforms, while the bottom 10% were users who are popular on both platforms. Figure 11 showcases frequently used hashtags among users, categorized into four groups based on their popularity. On Twitter, popular users predominantly focus on social issue hashtags, such as #Covid and #UKraine, whereas their less popular counterparts are inclined toward technology-oriented discussions, evident by those frequent hashtags such as #Opensource, #Security, and #Python. This suggests sharing global and social hashtags tend to be the main focus for getting more response from other users. In contrast, Mastodon users of all popularity levels engage with niche contents, encompassing photography, gaming, cats, and programming. Most of all, on Mastodon, users frequently share migration-specific hashtags, such as #TwitterMigration and #Introduction, which are rarely observed among users popular on Twitter. ### Analysis Summary Our findings suggest empirical evidence that the distinct designs of Twitter and Mastodon have an effect on two aspects: * The occupational inequality among popular users is more pronounced on Twitter than on Mastodon. This can be due to Twitter's design, which focuses on popularity by promoting well-known users and their posts [11]. 
In contrast, Mastodon's topic-oriented approach creates connections and interactions based on shared interests. This approach provides an opportunity for users from diverse occupations to gain popularity. * Popular users on Twitter frequently use hashtags centered around global and societal issues. This tendency can be attributed to Twitter's centralized structure, which facilitates broader reach and a larger audience, thereby amplifying the popularity of users with social interests [12]. Conversely, Mastodon users create connections by seeking out those with similar interests, promoting the sharing of more niche content, irrespective of the users' popularity levels. Future research aims to enhance our understanding of the communication dynamics between Twitter and Mastodon. To investigate how information consumption is influenced among migrating users, further studies should focus on: (1) the impact of disparities between Twitter and Mastodon on the communication structure of both platforms; (2) the effects of disparities in occupational distribution on the overall user experience and information sharing; and (3) how these imbalances can inform strategies to improve user experience and promote equitable engagement among diverse users. ## RQ3: Sustainability of Platform Migration: A Case Study of Mastodon ### Behavioral Characteristics of Residents In our investigation of user residency on Mastodon, we categorized the migrated users into two distinct groups: (1) _Non-Residents_, whose last activity on Mastodon was either during the protest or adaptation phase, and (2) _Residents_, whose last activity on Mastodon is in the current phase. To concentrate on users who have remained dedicated to Mastodon without reverting their attention back to Twitter, we refined the resident group by excluding users who shifted their attention to Twitter, either daily or weekly, during the current phase. In analyzing the behavioral aspects that retain users on Mastodon, we applied user-level features to predict the two types of migrants. This investigation involved unique user-level features for Mastodon including (1) Interaction diversity, (2) Random migration, and (3) Migration hashtags, each of which are delineated below: Interaction Diversity.We conducted a study on interaction diversity for a specific user, denoted as \(u\). This metric measures the level of uncertainty for a user \(u\) to engage with other users who have different primary servers. The Shannon entropy formula is utilized to compute this metric as follows: \[\begin{split} H(u)=-\sum_{i=1}^{|\mathcal{S}|}P(u,s_{i})\log P(u,s _{i}),\\ P(u,s_{i})=\frac{\sigma(u,s_{i})}{\sum_{j=1}^{|\mathcal{S}|} \sigma(u,s_{i})},\end{split} \tag{5}\] where \(\sigma(u,s_{i})\) is the cumulative count of interactions with a server \(s_{i}\in\mathcal{S}\), and \(P(u,s_{i})\) is the probability of a user \(u\) interacting with other users from a server \(s_{i}\). The interaction is defined as user behaviors such as following or replying to other users whose primary server is \(s_{i}\). Fandom Migration.We examined the concept of "Fandom migration" [13], a phenomenon in which a leader's migration to a new platform often leads to a collective migration of their followers from the previous platform as well. To determine the percentage of a user's Mastodon followers who had previously followed them on Twitter, we analyzed identical usernames on both platforms, under the premise that identical usernames belong to the same person. 
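The third feature, the use of migration hashtags, is defined next; the two features above reduce to short computations, sketched here. The helper names and the case-folded username matching are our own choices.

```python
import numpy as np

def interaction_diversity(interactions_per_server):
    """Shannon entropy of Eq. (5) over a user's per-server interaction counts
    (follows and replies grouped by the other users' primary server)."""
    counts = np.asarray(list(interactions_per_server.values()), dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                         # treat 0 * log(0) as 0
    return float(-np.sum(p * np.log(p)))

def fandom_migration_share(twitter_followers, mastodon_followers):
    """Share of a user's Mastodon followers who already followed them on
    Twitter, matched by identical usernames."""
    tw = {u.lower() for u in twitter_followers}
    ma = {u.lower() for u in mastodon_followers}
    return len(tw & ma) / len(ma) if ma else 0.0
```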
Migration Hashtags.To investigate whether users who led migration from Twitter to Mastodon are more inclined to stay on Mastodon, we evaluated the frequency of usage of twelve commonly observed migration-related hashtags during the study period: #RipTwitter, #GoodbyeTwitter, #ElonIsDestroyingTwitter, #JoinMastodon, #MastodonMigration, #MastodonSocial, #ByebyeTwitter, #TwitterMigration, #TwitterTakeover, #TwitterShutdown, #LeaveTwitter, and #TwitterExodus. We ran statistical tests on six distinct user-level features for our analysis to predict the two types of migrants, such as _non-residents_ and _residents_. The outcomes of a linear regression, showcased in Table 3, demonstrate the importance of various user-level features in predicting these migrant classifications. The user network feature is significant with the _p_-value below 0.05. This reveals a positive correlation between the types of migration, suggesting that as the value of the user network feature increases, the likelihood of individuals being residents also increases. User activity, interaction diversity, and random migration are highly significant with _p_-values below 0.001. Positive coefficients suggest that a high number of activities, diverse interactions, and more connections with individuals who followed the user on Twitter considerably motivate users to stay on Mastodon. Notably, the relatively higher coefficient (0.812) and odd ratio (2.253) indicate the strong association between interaction diversity and user retention. Surprisingly, the volume of user response is not significantly associated with a user's decision to stay on Mastodon, implying that users who merely focus on attracting a large user response tend to discontinue using the platform over time. Moreover, there is no statistical correlation between the frequency of sharing migration-related hashtags and the residency of users, signifying that many individuals who once actively led the migration movement left Mastodon. ### Analysis Summary Mastodon, a decentralized social media platform, uses a federated network to encourage connections based on shared interests. However, this can lead to social fragmentation, complicating user interaction during building social networks. [14]. Our analysis yields two notable outcomes: * The decision to remain on Mastodon is primarily associated with a user's active effort to frequent conversations and diverse social interactions to appreciate the community-centered experience, rather than by the volume of responses received or the widespread sharing of migration hashtags. * Mastodon distinguishes itself by prioritizing community-centered experiences, contrasting with traditional platforms that often give preference to individual self-promotion or the creation of viral content to capture a broader audience. As a prospective avenue to enhance the sustainability of platform migration, it may be helpful to examine social network analysis theories that can shed light on concealed patterns within the social networks of migrated users by pinpointing information brokers linking disconnected groups, or structural holes [12] and by examining the impact of weak ties in loosely connected networks on obtaining unique information and resources [1]. 
In turn, this information can inform platform developers, community managers, and users about the factors that facilitate sustainability of migration to a platform with different architecture and reduce mass departures and provide insight on how to avoid similar issues when launching new social media. Figure 11: Wordclouds after lowercasing hashtags on two platforms **A** (Twitter) and **B** (Mastodon), categorized based on the user response of migrants: (1) popular on Twitter but not on Mastodon, (2) popular on Mastodon but not on Twitter, (3) popular on both platforms, and (4) unpopular on both platforms. More frequently shared hashtags are highlighted with larger font size. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Features** & **Coef.** & **SD** & **OR** & _p-value_ \\ \hline User Activity & 0.592 & 0.260 & 1.806 & 0.000** \\ \hline User Network & 0.192 & 0.082 & 1.212 & 0.019* \\ \hline User Response & 0.019 & 0.064 & 1.019 & 0.763 \\ \hline Interaction Diversity & 0.812 & 0.031 & 2.253 & 0.000** \\ \hline Random Migration & 0.084 & 0.023 & 1.087 & 0.000** \\ \hline Migration Hashtags & -0.029 & 0.030 & 0.971 & 0.663 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on linear regression to predict the two types of migrants (_non-resident_ and _resident_) on Mastodon. Coef denotes coefficient, SD signifies standard deviation, and OR indicates odds ratio. Note that *\(p<0.05\) and **\(p<0.001\). ## Limitations First, the data available for this study does not include the 266 users who experienced _permanent migration_ by January 2, 2023, due to API restrictions preventing access to deleted accounts on both Twitter and Mastodon. Our data is also limited to the first ten weeks post Elon Musk's Twitter acquisition. As such, the findings from the migration patterns might not be applicable to other periods or predictive of future trends. Second, we acknowledge the presence of alternative platforms for Twitter users, such as Hive (a microblogging service) and Damus (an end-to-end encrypted social network). However, this study focuses on the migration from Twitter to Mastodon, assuming that user attention mainly oscillates between Twitter and Mastodon due to Mastodon's increased popularity during the time when Musk acquired Twitter. Last, the motivational factors are inferred from observational data on migration patterns, frequent discourses and users' sentiment toward them. Testing the correctness of the inferred motivations for migration requires surveys of users including open-ended interviews, where users can report their reasons for migrating or not from any platform. ## Conclusion We introduce two types of migration on social media and explore the migration patterns between Twitter and Mastodon. Our observations indicate that dissatisfaction with Twitter and its management, notably the Twitter's ownership change, is a primary motivation for users to migrate to Mastodon. Interestingly, these migrated users often utilize both platforms, frequently shifting their attention back to Twitter after trying both platforms. This migration pattern refutes claims of dooms day scenario for Twitter after the mass exodus. Our comparison of user behaviors on Twitter and Mastodon unveils notable disparities between the two platforms. Specifically, users exhibit a larger inequality in the distribution of users' occupations within highly engaged users on Twitter. 
An analysis of hashtag usage reveals that popular Twitter users typically concentrate on global and social issues, while Mastodon users, regardless of their popularity, often share content related to migration and niche contents. We identify key user behaviors that influence the retention of users on Mastodon. There is a highly positive correlation between user retention and the variety of interactions with users across different Mastodon servers. Conversely, the volume of user responses, such as favorites and reshares, does not exhibit a significant statistical impact on user retention. This underscores the distinctive appeal of Mastodon's community-oriented architecture for users. In conclusion, our findings offer valuable insights for platform migration and how to retain users during periods after a mass migration. By leveraging these behavioral factors, platform designers can create more engaging and sustainable platforms that cater to user preferences and needs. ## Future Work Exploring the phenomenon of returning migration is an important area for future research. Our plan is to study this phenomenon as part of a long-term analysis, when more user migration data becomes available between Twitter and Mastodon. Furthermore, we aim to understand how the disparities between Twitter and Mastodon impact each platform, including user experience and the way information is shared among migrated users. One aspect of this analysis is to apply theories of weak ties[12] and structural holes [1]. Furthermore, surveys will be conducted to complement the findings and provide qualitative evidence of the potential motivations behind migration. Finally, the proposed approach can be extended to other alternative social media platforms beyond Twitter, to gather more comprehensive understanding of the migration. ## Appendix ### Standard Occupational Classification system The UK's SOC 2010 system uses a hierarchical structure to organize occupations, where the first digit of each SOC code represents the nine major groups, as shown in Table 4. ### Experimental Details Utilizing the Prophet model set to a 0.243 change point prior scale, we observed eight temporal shifts in Twitter-only users on specific dates (Oct. 25, 27, Nov. 9, 22, Dec. 5, 10, 20), four in Mastodon-only users (Nov. 8, 22, Dec. 7, 20), and four in Mastodon & Twitter users (Nov. 13, 22, Dec. 1, 20). In the BERTopic model, we utilized all-MiniLM-L6-v2 setting as default, which is a pre-trained transformer model recognized for its ability to handle multiple languages efficiently. We also configured the auto-adjustment setting and a minimum size of 100 topics for optimal topic generation, yielding 841 topics for tweets and 483 for tools. For aspect-based sentiment analysis, we leveraged the DeBERTa with deberta-v3-large-absa model, which is fine-tuned for the ABSA (Aspect-Based Sentiment Analysis) datasets. The model's default setup has a hidden size of 768, 12 hidden layers, and 12 attention heads. \begin{table} \begin{tabular}{|c|c|} \hline **Major** & **Description** \\ **Group** & \\ \hline 1xx & Planing, directing, or coordinating the operations of businesses or other organizations. \\ \hline 2xx & Specialized tasks that require advanced knowledge, such as law, science, or education. \\ \hline 3xx & Support in various professional fields, such as healthcare, science, or engineering. \\ \hline 4xx & Administrative and clerical duties for business, government agencies or organizations. 
\\ \hline 5xx & Work in skilled jobs, such as construction, mechanics, or electrical work. \\ \hline 6xx & Personal services to individuals or groups, such as childcare, leisure, or personal care. \\ \hline 7xx & Selling products or services to customers or provide customer support. \\ \hline 8xx & Operating machinery or perform manual labor in manufacturing or production. \\ \hline 9xx & Simple and routine tasks that do not usually require formal education or training. \\ \hline \end{tabular} \end{table} Table 4: Nine major groups in the UK’s SOC 2010 system. ## Ethical Statement, Impact, and Reproducibility Our proposed approach for analyzing migration from Twitter to Mastodon may suffer from selection bias, as it relies on users who have chosen to disclose their other social media platform accounts on their profiles. As a result, the selected users may not be representative of the larger population of users migrating from Twitter to Mastodon. Our study aims to gain insights into the migration of users between Twitter and Mastodon, and is not intended to exploit or manipulate these communities for financial gain. In compliance with the terms of service of Twitter2 and Mastodon3, we share the user IDs of the accounts included in our analysis. We will share the proposed methods to map user accounts between the two platforms, as well as the list of matched user IDs. To facilitate the reproducibility of our results, we will provide a GitHub URL link once the paper is accepted. Footnote 2: [https://twitter.com/en/tos](https://twitter.com/en/tos) Footnote 3: [https://discourse.joinmastodon.org/tos](https://discourse.joinmastodon.org/tos)
2303.03053
Crowdsourcing on Sensitive Data with Privacy-Preserving Text Rewriting
Most tasks in NLP require labeled data. Data labeling is often done on crowdsourcing platforms due to scalability reasons. However, publishing data on public platforms can only be done if no privacy-relevant information is included. Textual data often contains sensitive information like person names or locations. In this work, we investigate how removing personally identifiable information (PII) as well as applying differential privacy (DP) rewriting can enable text with privacy-relevant information to be used for crowdsourcing. We find that DP-rewriting before crowdsourcing can preserve privacy while still leading to good label quality for certain tasks and data. PII-removal led to good label quality in all examined tasks, however, there are no privacy guarantees given.
Nina Mouhammad, Johannes Daxenberger, Benjamin Schiller, Ivan Habernal
2023-03-06T11:54:58Z
http://arxiv.org/abs/2303.03053v1
# Crowdsourcing on Sensitive Data with Privacy-Preserving Text Rewriting ###### Abstract Most tasks in NLP require labeled data. Data labeling is often done on crowdsourcing platforms due to scalability reasons. However, publishing data on public platforms can only be done if no privacy-relevant information is included. Textual data often contains sensitive information like person names or locations. In this work, we investigate how removing personally identifiable information (PII) as well as applying differential privacy (DP) rewriting can enable text with privacy-relevant information to be used for crowdsourcing. We find that DP-rewriting before crowdsourcing can preserve privacy while still leading to good label quality for certain tasks and data. PII-removal led to good label quality in all examined tasks, however, there are no privacy guarantees given. ## 1 Introduction For supervised NLP tasks, large amounts of labeled data are needed. In many cases, only unlabeled data is available and labeling is then performed via crowdsourcing/crowdworking platforms like Amazon Mechanical Turk (AMT). These crowdsourcing platforms are used because they provide a time-efficient way to obtain labels for unlabeled data, making the annotation task easily scalable. However, data should only be published on crowdsourcing platforms if it contains no privacy-relevant information. Unfortunately, it is not always obvious what is privacy relevant and what is not (Narayanan et al., 2012). As a consequence, most textual datasets cannot be annotated on crowdsourcing platforms if the privacy of affected persons contained in the data needs to be respected. A common practice is to automatically replace personally identifiable information (PII) in a text. However, not all privacy-relevant information is contained in PII (Narayanan et al., 2012) and the automatic detection of PII does not work perfectly. Therefore, PII-removal alone is no guarantee that privacy is preserved. An approach that can actually give privacy guarantees is differential privacy (DP). DP offers formal mathematical guarantees for privacy-preserving data publishing, which has most recently also been applied to textual data (Igamberdiev et al., 2022; Krishna et al., 2021; Bo et al., 2021). The benefit of using differential privacy is that it is possible to set an upper boundary for privacy risks. Therefore, one exactly knows how large the privacy risk is and can set it to a sufficiently low level when using DP. In this work, we want to explore different privacy preservation techniques for textual data in the context of crowdsourcing. We do this by performing crowdsourcing on data which has been modified by using DP rewriting, PII-removal, or a combination of both. We show that there is a tradeoff between privacy and utility (label quality) when deciding for one of these methods, how this tradeoff is expressed and how it depends on the chosen task and data. Furthermore, we provide recommendations which task properties might lead to the most desirable results. ## 2 Related work Privacy leakages can have harmful consequences for individuals. Therefore, privacy protection is regulated by law in some parts of the world, e.g., by the GDPR in Europe (European Commission, 2016) or the HIPAA Act (Centers for Medicare & Medicaid Services, 1996) for medical data in the US. Unfortunately, it is impossible to fully prevent the risk of privacy leakages. Therefore, the ultimate goal is to reduce this risk. 
A common practice to reduce the risk of privacy leakages in textual data is to automatically detect and replace personally identifiable information (e.g. Ge et al., 2020; Pilan et al., 2022; Eder et al., 2020). This approach is called PII-removal in the following. However, there are two problems with PII-removal. First, without PII-labeled training data, in most cases named entity recognition or regular expressions are used for PII-removal (Ge et al., 2020; Pilan et al., 2022; Eder et al., 2020). This narrows down which kind of PII can be detected. Second, there is no possibility to quantify the remaining privacy risk. Differential privacy (DP) solves the problem of estimating privacy risks. It is a mathematical concept, supposed to enable sharing datasets containing private information without giving away this private information (Dwork and Roth, 2014). It has recently been applied in NLP for rewriting texts in a differentially private way (Krishna et al., 2021; Bo et al., 2021; Igamberdiev et al., 2022). The basic idea of 'local' differential privacy rewriting for textual data is to add noise to each data point. As a result, the probability of distinguishing data belonging to one individual from data of any other individual in the dataset is bounded. Furthermore, it is possible to define how small the probability of being able to distinguish this data should be by setting the 'privacy budget' parameter called \(\epsilon\). ## 3 Data Three corpora were used for the experiments: ATIS (Tur et al., 2010), SNIPS (Coucke et al., 2018) and TripAdvisor (TA) (Li et al., 2013). The ATIS corpus consists of transcriptions of flight information requests and the task is to classify them based on their intent. There are different versions of the ATIS corpus available, we use it in the form provided by Tur et al. (2010). SNIPS (Coucke et al., 2018) is an intent classification dataset as well and consists of instructions for voice assistants. TripAdvisor (Li et al., 2013) (TA) contains hotel reviews. We use only the titles of these hotel reviews because the full review texts were too long. We chose those datasets based on multiple criteria. First, we had some task-specific criteria. The task should be relevant in real-world use cases, it should not require previous knowledge and it should be simple and quick to solve. Second, we had some text-specific criteria. The texts should contain privacy relevant information, it should be in clear and generally understood language and the text snippets should be short. Furthermore, all datasets should have high-quality gold labels so that we could compare the labels obtained in our experiments with these gold labels. Finally, these datasets have been used in related works on privacy text rewriting. To simplify the task further, we reduced all of them to binary labelling tasks. This means that we chose one class as target class (e.g. "Airfare" for ATIS) and defined the task as deciding whether a given data point belonged to that target class or not. Furthermore, we only included data points which consisted of less than 200 characters for the crowdsourcing, but still used the longer texts for the DP pretraining in order to have enough pretraining data. An overview of the properties of all corpora in the modified versions used in this work can be found in Table 1. Furthermore, example sentences are shown in Table 2. 
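To make the task reduction just described concrete, a small sketch follows; the column names and the label strings are assumptions used for illustration, not the authors' actual preprocessing code.

```python
import pandas as pd

def make_binary_task(df: pd.DataFrame, target_label: str, max_chars: int = 200) -> pd.DataFrame:
    """Reduce a multi-class corpus to the binary 'target class vs. rest' task and keep
    only the short texts (< max_chars characters) used for crowdsourcing; longer texts
    remain available for DP pretraining."""
    out = df.copy()
    out["is_target"] = (out["label"] == target_label).astype(int)
    return out[out["text"].str.len() < max_chars]

# Hypothetical usage for ATIS with "Airfare" as the target class:
atis = pd.DataFrame({
    "text": ["cheapest airfare from tacoma to orlando",
             "what flights are available from pitsburgh to baltimore on thursday morning"],
    "label": ["Airfare", "Flight"],
})
print(make_binary_task(atis, target_label="Airfare"))
```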
## 4 Model **PII-removal.** The PII-removal is based on regular expressions and on spaCy (Honnibal et al., 2020), which we used for named entity recognition and part of speech tagging. With spaCy, we detected names of persons, locations, dates and times. Those were then replaced with the strings "<NAME>", "<LOCATION>", "<DATE>" and "<TIME>". Additionally, we used regular expressions to replace other personal information like mail addresses and phone numbers. **DP-rewriting.** For DP-rewriting we used the work of Igamberdiev et al. (2022). They provide an open-source framework for DP rewriting with a trainable model based on the idea behind ADePT (Krishna et al., 2021). This model consists of an auto-encoder which is pretrained first to learn how to compress texts. Afterwards, the texts to be rewritten are transformed into a compressed version, noise according to either a Gaussian or Laplacian distribution is added and then the text is reconstructed based on this vector. We used Gaussian noise and set \(\delta=1*10^{-4}\). For \(\epsilon\), different values were used in different experiments; we state which value has been used when explaining each of the experiments. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{corpus} & \multicolumn{2}{c|}{data points} & \multicolumn{2}{c}{avg. length} \\ \cline{2-5} & target & rest & target & rest \\ \hline ATIS & 403 & 4100 & 67.91 & 66.77 \\ \hline SNIPS & 1936 & 11681 & 48.24 & 46.33 \\ \hline TA & 19663 & 9974 & 181.48 & 298.96 \\ \hline \end{tabular} \end{table} Table 1: Number of data points (“data points”) and average number of characters per data point (“avg. length”) per corpus in our modified version of the corpora. “target” stands for “target class” and “rest” for all data points not belonging to the target class. Furthermore, we did not append the class labels (as proposed in [14]), because usually class labels are only crowdsourced if there are none yet. For each corpus, we split the data into three different subsets, one for pretraining, one for validation of the pretraining and one that will be rewritten for the crowdsourcing. Based on this, we created six differently pretrained models. For each corpus, we had one model pretrained with the unchanged pretraining data and one pretrained with the pretraining data after PII were replaced. **Rewriting pipelines.** We created three different rewriting pipelines so that we can compare the two chosen rewriting methods and the combination of them. For each rewriting method, there is one pipeline where only this rewriting method is applied to privatize the data (PII-only and DP-only). Furthermore, there is one pipeline where we first perform PII-removal and then DP-rewriting (PII + DP). They are visualized in Figure 1. After the data has been rewritten in different ways, we requested annotations based on our binary labeling task on Amazon Mechanical Turk. An example HIT can be found in Appendix B. All crowdworkers were from the US. Therefore, the payment per HIT was calculated based on the US minimum wage in order to guarantee fair payment. ## 5 Results **PII-only vs. DP-only vs. PII + DP.** First, we wanted to explore general differences between the three rewriting pipelines. Therefore, we ran the data through all pipelines and requested annotations from 5 crowdworkers per pipeline and data point. For the DP-rewriting in DP-only and PII + DP we set \(\epsilon=10000\). This is a very high choice for \(\epsilon\).
However, it was the smallest value which ensured that the resulting text still had some very basic utility. After the annotation, we aggregated the individual annotations per data point by using MACE [1] with a threshold of 1. Then we compared these aggregated labels to the original gold labels by calculating F1-scores (see Table 3). PII-only performed best for all corpora regarding the F1-score. Furthermore, DP-only led to better F1-scores than PII + DP. However, this depicts only the performance regarding utility. \begin{table} \begin{tabular}{p{85.4pt}|p{113.8pt}|p{113.8pt}} \hline \hline & **target class** & **not target class** \\ \hline ATIS & cheapest airfare from tacoma to orlando & what flights are available from pitsburgh to baltimore on thursday morning \\ & show me all the one way fares from tacoma to montreal & what is the arrival time in san francisco for the 755 am flight leaving washington? \\ \hline SNIPS & add The Crowd to corinne’s acoustic soul playlist & Book a restaurant in El Salvador for 10 people. \\ & add this track to krystal’s piano 100 & Play a chant by Mj Cole \\ \hline TA & AMAZING Concierge Staff/Eric Sofield is the best & Avoid lower floors... especially room 202 \\ & Best Hotel in Philly & Bugs and terrible housekeeping \\ \hline \hline \end{tabular} \end{table} Table 2: Examples per corpus and class. Figure 1: We used three different rewriting pipelines: PII-only, DP-only and PII + DP. They are depicted here. \begin{table} \begin{tabular}{p{85.4pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}} \hline \hline Pipeline & ATIS & SNIPS & TA \\ \hline PII + DP & 0.377 & 0.828 & 0.588 \\ DP-only & 0.549 & 0.935 & 0.698 \\ PII-only & **0.949** & **0.991** & **0.932** \\ \hline \hline \end{tabular} \end{table} Table 3: F1-scores of the original gold labels compared to the labels obtained in our experiments. The highest value per column is indicated in bold. Differences per row were statistically significant with \(\alpha=0.05\) for all values. Regarding privacy, it is the other way around. This will be discussed in more detail in Section 6. Apart from this, in Table 3 we can see that there are differences between the corpora, especially regarding DP-rewriting. For the SNIPS corpus, the DP-rewriting had a far smaller negative effect on the F1-scores than on the TA corpus or even the ATIS corpus. **The effect of \(\epsilon\).** In DP-rewriting, the \(\epsilon\)-parameter is the most important parameter, because it represents the privacy guarantee. A high value stands for high privacy risks. To investigate the effects of this \(\epsilon\)-parameter, we reran the DP-only pipeline in a slightly modified way. We set \(\epsilon=3333\) and requested annotations from three different crowdworkers per pipeline and data point. Then, again, we aggregated the annotations per pipeline and data point by using MACE (Hovy et al., 2013) and calculated the F1-scores in comparison to the original gold labels. We compared the F1-scores to the F1-scores of the data rewritten with \(\epsilon=10000\). To guarantee a fair comparison, we only used 3 annotations per data point as well and reaggregated them with MACE (see Table 4). For all corpora, the lower \(\epsilon\) resulted in statistically significantly lower F1-scores. With the lower \(\epsilon\), the performance difference between SNIPS and the other corpora decreased. **Multiple rewritten versions.** While lower \(\epsilon\) values increase privacy, they decrease the utility drastically.
But what if we rewrite multiple times with the same \(\epsilon\), but different random seeds and then aggregate the crowdsourced annotations? Can the differently added noise be counterbalanced by this so that utility is overall increased? For each data point, we created two other versions rewritten with DP-only and \(\epsilon=3333\). Then we requested three annotations per version from crowdworkers and aggregated the annotations per data point over all versions. This time, we could not use MACE (Hovy et al., 2013) to aggregate the data, because for using MACE the annotations need to be independent when conditioned on the true labels. However, in our case, they are only independent when conditioned on the true labels and the corresponding rewritten version. Therefore, we could only use MACE to aggregate the annotations per version and aggregated the results of this by using majority voting. The whole process is illustrated in Figure 2. Again, we calculated F1-scores between our aggregated labels and the original gold labels. The results, as well as a comparison to the previous results, can be found in Table 5. Interestingly, using multiple differently rewritten versions did not increase, but decreased the F1-scores for all corpora except SNIPS. We explored different aggregation methods. They can be divided into two types: two-step-aggregation and one-step-aggregation. The two-step-aggregation methods consist of two steps: In the first, there is an aggregation per rewritten version and in the second step, these aggregations are aggregated again. The aggregation we used for Table 5 and illustrated in Figure 2 is a two-step aggregation method with MACE as the first step and majority voting as the second step. \begin{table} \begin{tabular}{c|c|c} \hline \hline Corpus & \(\epsilon=3333\) & \(\epsilon=10000\) \\ \hline ATIS & 0.229 & **0.517** \\ SNIPS & 0.519 & **0.920** \\ TA & 0.426 & **0.687** \\ \hline \hline \end{tabular} \end{table} Table 4: F1-scores of the same data rewritten with DP-only and different values for \(\epsilon\). Differences per row are statistically significant with \(\alpha=0.05\). The highest value per row is highlighted in bold. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Corpus & \(\epsilon=3333\) & \(\begin{array}{c}\text{multiple}\\ \text{versions}\end{array}\) & \(\epsilon=10000\) \\ \hline ATIS & 0.229 & 0.180 & **0.517** \\ SNIPS & 0.519 & 0.519 & **0.920** \\ TA & 0.426 & 0.350 & **0.687** \\ \hline \hline \end{tabular} \end{table} Table 5: F1-scores of the same data rewritten with DP-only and different values for \(\epsilon\). The highest value per row is highlighted in bold. Figure 2: Process of generating multiple differently rewritten versions and aggregating their annotations. In the one-step-aggregation methods, all annotations of all versions are aggregated in one single step with one aggregation technique. The aggregation methods were chosen based on commonly occurring problems in our experiments. In general, it was very noticeable that there were far more cases where data points that belong to the target class were not recognized as belonging to the target class than the other way around. Therefore, we created a threshold-based aggregation method for this. It is a one-step-aggregation method and the idea is that the target class is chosen if more than x annotations of one data point are target class annotations.
So if we have a threshold of x = 3 and a data point with four target class annotations and five non-target class annotations, the aggregated label will be the target class label. If there were only three target class annotations and six non-target class annotations, the aggregated label would be the non-target class label. This method will be abbreviated as tx in the following, where x is replaced with the used threshold. Based on that threshold idea, we also created a two-step-aggregation method where first, annotations per version were aggregated with MACE and afterwards the aggregated labels were aggregated with a threshold of 0. This method will be abbreviated as MACE\_t0. Furthermore, we tried plain majority voting in a one-step-aggregation (MV), majority voting in a two-step-aggregation (MV\_MV) and the previously discussed two-step-aggregation with MACE and majority voting (MACE\_MV). Per aggregation method, we calculated the F1-scores between the resulting labels and the original gold labels (see Table 6). The methods which do not take into consideration that target class data points have been mislabeled more often than non-target class points give the worst results. The methods taking this point into consideration led to much better F1-scores. The most extreme method, t0, in which a data point is labeled as target class if only one crowdworker annotated one version as target class, led to the best F1-scores. ## 6 Discussion **Corpus differences.** The negative effect on the utility of DP-rewriting in our experiments has been corpus-dependent. In the following, we will explore reasons for this. As already discussed before, the lower F1-scores can mainly be traced back to data points which belong to the target class but have not been recognized as belonging to the target class. While this problem exists for all corpora, it is least prominent for SNIPS, see Table 7. To explore potential reasons for these differences in target class non-recognition, we will use a concept we call _indicator words_. Indicator words are words which do not appear equally often in the target class and the non-target class data. For example, for ATIS the target class is "Airfare", meaning that all requests asking about prices for flights belong to that class. Words that therefore often occur in the target class, but not in the non-target class data are "fare", "airfare", "cost", etc. While it is not possible to correctly identify the class based on only these indicator words, they are helpful signals in many cases and therefore a useful approximation to explore the differences in class recognition further. For ATIS and TA, the usefulness of indicator words has been substantially decreased by the DP-rewriting, as we can see in Table 8. Based on the given tasks, indicator words indicate the affiliation to the target class (like in ATIS and SNIPS) or the affiliation to the non-target class (like in TA). After DP-rewriting, we see that in ATIS the target class indicator words occurred only half as often \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Aggregation & ATIS & SNIPS & TA \\ \hline MV & 0.050 & 0.297 & 0.260 \\ t0 & **0.448** & **0.799** & **0.638** \\ t1 & 0.368 & 0.730 & 0.581 \\ t2 & 0.322 & 0.648 & 0.503 \\ \hline MV\_MV & 0.078 & 0.313 & 0.269 \\ MACE\_MV & 0.180 & 0.519 & 0.350 \\ MACE\_t0 & 0.431 & 0.777 & 0.604 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of different aggregation methods for the annotations of multiple rewritten versions.
The highest value per column is highlighted in bold. \begin{table} \begin{tabular}{c|c|c} \hline \hline Corpus & Gold & DP-only \\ \hline ATIS & 29.41\% & 13.10\% \\ SNIPS & 50.00\% & 42.64\% \\ TA & 50.00\% & 36.86\% \\ \hline \hline \end{tabular} \end{table} Table 7: Percentage of data points in the crowdsourcing set labelled as target class according to the original gold labels (“Gold”) and according to the labels gained by crowdsourcing after using DP-only with \(\epsilon=10000\) (“DP-only”). in target class texts as before, while this was not the case in non-target class texts. In TA, the non-target class indicator words appeared less often in the non-target class texts but more often in the target class texts than before. In both cases, the difference between the target class and the non-target class, as approximated by indicator words has been decreased. For SNIPS, however, no such clear effect could be observed. This assimilation of both classes according to the indicator words in ATIS and TA, but not in SNIPS is due to the relative uncommonness of these indicator words. The basic idea of the version of DP we use is that uncommonness in the dataset is correlated with the probability of being removed. Therefore, uncommon words have a higher probability of being removed than common words. For SNIPS, we had only two indicator words and they occurred 522 times in the original dataset. For ATIS, we had six different indicator words and all of them only occurred 253 times. This is even more extreme in TA, where we used basically all negatively connoted words as indicator words and nevertheless there were only 147 of them in the original corpus. This relative uncommonness of the indicator words in ATIS and TA is the reason why they have often been replaced during DP-rewriting. However, based on this argumentation, the F1-score as well as the difference between the classes regarding the indicator words should have been higher for ATIS than for TA. Why is this not the case? It can probably be traced back to the pretraining data. For ATIS, the original dataset was very small and imbalanced. Therefore, only 4.28% of the pretraining data (compared to 29.41% of the crowdsourcing data) has been from the target class. This further reduced the uncommonness of the indicator words, especially in comparison to TA where 50% of the pretraining data came from the target class. Another important factor is the amount of difference between the two classes. If the target class and the non-target class are very similar, changing one word might already change the class. If they are very different, a change of one word does not affect which class a text belongs to. To illustrate the differences between the two classes per corpus, we created wordclouds containing the 25 most common non-stopwords per class (see Figures 3, 4, 5). For this, we used the PII-only version of the datasets, because then e.g. locations were summarized by "location" and the wordclouds are easier to grasp. Figure 3 shows that the target class "Add to Playlist" of the SNIPS corpus is very different from the non-target class "Not Add to Playlist". 
Furthermore, the indicator words "add" and "playlist" are Figure 4: Wordcloud for the 25 most common non-stopword words per class of the PII-only version of ATIS Figure 5: Wordcloud for the 25 most common non-stopword words per class of the PII-only version of TA Figure 3: Wordcloud for the 25 most common non-stopword words per class of the PII-only version of SNIPS \begin{table} \begin{tabular}{c|c|r|r} \hline \hline Corpus & Version & Target & Rest \\ \hline \multirow{2}{*}{ATIS} & original & 232 & 21 \\ & DP-only & 104 & 24 \\ \hline \multirow{2}{*}{SNIPS} & original & 520 & 2 \\ & DP-only & 596 & 6 \\ \hline \multirow{2}{*}{TA} & original & 5 & 142 \\ & DP-only & 48 & 118 \\ \hline \hline \end{tabular} \end{table} Table 8: Distribution of indicator words for the target class (ATIS and SNIPS) or the non target class (TA) before and after DP-only. very prominent in the target class, but not in the non-target class. For ATIS, the wordclouds of the two classes are less different, see Figure 4. Furthermore, in ATIS relatively small changes can cause a class change. The sentence "How much is the cheapest flight from Pittsburgh to Baltimore?" belongs to the class "Airfare", while "What is the cheapest flight from Pittsburgh to Baltimore?" does not belong to the class "Airfare" because the answer to this question would not be a price. There are many more examples like this in ATIS, but not in SNIPS. For TA, the wordclouds are also less different than for SNIPS. Additionally, there are also cases where changing one word changes the whole class. For example "Best hotel in Philly" could be changed to "Worst hotel in Philly" and would then belong to the other class. However, there are fewer cases like this in TA than in ATIS. All in all, there are multiple reasons explaining the corpus differences. First, the balance in the pretraining data is important, especially for very small corpora. Second, the diversity of the corpus, in relation to the corpus size affects the utility. And third, the difference between classes influences how often class distinctions will be removed. Privacy versus utilityWhen comparing PII-removal and DP-rewriting, we saw that the F1-scores approximating the utility have been far better when using PII-removal than when using DP-rewriting. However, this is not the case for privacy. We will discuss this further in the following. In general, we know that one of the key points of DP-rewriting is that we can control the privacy risk, while in PII-removal there are no privacy guarantees. By setting the \(\epsilon\) value in DP-rewriting, we can essentially set an upper boundary for the probability of a privacy leakage. For PII-removal, there are no guarantees at all. If we want to ensure that there are no privacy leakages, we would need to check every rewritten text for potential privacy leakages. Of course, this is unfeasible for larger datasets. Therefore, in practice, one would try to improve the PII-removal as much as possible and then hope that there are no privacy leakages, without knowing how high the risk for such a leakage exactly is. We will discuss what this means for our data in the following. For this, we will look at how many words of the input text have been changed or replaced. Of course, changing the wording is required but not sufficient to guarantee privacy. 
However, measuring the exact level of privacy preservation is hard and looking at the number of changed and replaced words is enough to give us a rough impression of how this minimal requirement was fulfilled on our data. The heatmap in Figure 6 shows the results of this analysis per corpus and rewriting method. For a better understanding of this heatmap, we will explain one row as an example. The first row represents the PII-only version of the ATIS corpus. The value of the first column ("0") is 5.6%. This means, that for 5.6% of all data points of the ATIS corpus, zero ("0") words of the original sentence have been replaced or changed during PII-removal. So all words of the original sentence were copied into the PII-only version. In the next column ("1"), the value is 14%, which means for 14% of all data points of the ATIS corpus there is one word of the original sentence which has been changed or replaced during PII-removal. It continues like this for the next few columns. Then there is a column called "7 - 11", which is an aggregated column. The value 2.9% tells us that for 2.9% of all data points of the ATIS corpus between seven and eleven words of the original sentence have been replaced in the PII-only version of that sentence. The following columns are to be understood the same way. In general, we see that with PII-only fewer words have been replaced than with DP-only. Especially for the SNIPS and TA data, there were many sentences which have not been changed at all (SNIPS: 48.1%, TA: 36.3%). Privacy preservation completely failed for these data points. Additionally, the amount of sentences where only a few words have been changed is also quite high when using PII-only. The privacy preservation to expect from those few changes might also be quite low. Therefore, the minimal requirement for privacy preservation, to change and/or replace words, has been fulfilled far better by DP-only than by PII-only. However, there is one exception, where PII-only did not work that badly regarding privacy preservation. In the ATIS corpus, we see that in general a lot more words have been replaced by PII-only than in the other corpora. This is due to the fact that there are many easy-to-detect and therefore easy-to-replace PIIs in ATIS. Locations, dates and times can be detected quite well and ATIS is full of locations, dates and times. In SNIPS and TA, there are in general fewer of these easy-to-detect PII and additionally, the often uncommon sentence structures in SNIPS and TA make it harder to detect them. Therefore, PII-only was able to detect and therefore replace more PIIs in the ATIS corpus than in the SNIPS and TA corpora. Nevertheless, there were also a noticeable number of examples in which PII-only failed in the ATIS corpus. For example, the original sentence "what flights from indianapolis to membis" has been changed to "what flights from <LOCATION> to membis" by PII-only. Obviously, "memphis" has not been recognized as a location. There are more examples like this. While one could try to further improve the PII-removal, as discussed before, there is no way to know how well privacy is preserved if you do not either have data in which all PII are labeled or manually check all texts. All in all, we see that the performance of PII-only regarding privacy preservation is very domain specific. In general, PII-only replaces fewer words than DP-only. Furthermore, with DP-only one can set the upper bound for the probability of a privacy leakage, while with PII-only you do not have any guarantees. 
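The word-level statistic summarised in the Figure 6 heatmap can be approximated in a few lines; since the exact tokenisation used for this analysis is not stated, the whitespace-based multiset comparison below is an assumption.

```python
from collections import Counter

def n_words_changed(original: str, rewritten: str) -> int:
    """Count how many words of the original sentence no longer appear with the same
    multiplicity in the rewritten version, i.e. were changed or replaced."""
    diff = Counter(original.lower().split()) - Counter(rewritten.lower().split())
    return sum(diff.values())

# Hypothetical PII-only example in the spirit of the TA corpus:
print(n_words_changed("Best Hotel in Philly", "Best Hotel in <LOCATION>"))  # -> 1
```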
## 7 Conclusion and future work In this work, we explored the effects of applying different privacy-preserving rewriting methods on textual data used for crowdsourcing. We compared PII-removal and DP-rewriting as well as a combination of both regarding utility and privacy. PII-removal turned out to be a simple-to-implement approach that effects the utility least. However, there are no privacy guarantees given. DP-removal decreases the utility while at the same time giving privacy guarantees and decreasing the risk of privacy leakages. The utility decrease is highly dependent on the type of task and data. Therefore, based on our findings, we can give the following recommendations when using DP-rewriting. First, it is important to ensure that the pretraining data has an appropriate size based on the corpus and task. The higher the similarity between classes as well as the diversity in sentence structures and wording of the corpus is, the more pretraining data is needed. Second, pretraining data should in the best case be balanced. Future work should focus on overcoming the current shortcomings of current DP text rewriting approaches, namely the need to use very high values for \(\epsilon\) which result in very low privacy guarantees. ## 8 Limitations and ethical impact Regarding the corpora, important limitations are that we only requested annotations for three corpora of which at least two had quite simple tasks. With only three corpora there is not that much di Figure 6: Distribution of the number of data points by the number of words from the original sentence that have been changed / replaced. E.g. 48.0% in SNIPS-PII-only and 0 means that for 48.0% of the data points of the SNIPS corpus the PII-only version contains the same words as the original sentence. Attention: look at the x-axis closely. There is a single column for each of the values from zero to six. Starting at value seven, we summed up the fractions for five values per column. versity in the selected corpora so that generalizing our results to other corpora is harder. Therefore, we originally aimed to experiment with more corpora. However, DP-rewriting did not work well enough for half of the originally chosen corpora, therefore we needed to exclude them. While the low number of corpora was one problem, another problem was that the selected corpora and their corresponding tasks were mostly quite simple. We were able to identify a very small set of what we called indicator words for ATIS and SNIPS and a larger set of indicator words for TripAdvisor. Probably, automatic labeling dependent on these indicator words might have already worked quite well. This makes generalization to more complex tasks even harder. Apart from the used corpora, also the used rewriting methods cause some limitations. First, we needed to use very high \(\epsilon\)-values for DP-rewriting in order to guarantee some basic utility. However, these high \(\epsilon\)-values might not guarantee sufficient privacy in most scenarios. Second, also PII-removal causes some limitations. PII-removal is very domain dependent. Therefore, transferring our results to other domains is difficult. Furthermore, PII-removal did not work that well for SNIPS and TripAdvisor, since in these corpora PII were harder to identify. Therefore, there were many cases were PII-removal just resulted in copying the input text which resulted in zero privacy.
2303.13240
Equations over solvable groups
Not any nonsingular equation over a metabelian group has solution in a larger metabelian group. However, any nonsingular equation over a solvable group with a subnormal series with abelian torsion-free quotients has a solution in a larger group with a similar subnormal series of the same length (and an analogous fact is valid for systems of equations).
Anton A. Klyachko, Mikhail A. Mikheenko, Vitaly A. Roman'kov
2023-03-23T13:17:34Z
http://arxiv.org/abs/2303.13240v3
# Udc 512.543.72+512.543.73+512.544.33+512.543.56 MSC: 20F70, 20F16, 20E22, 20E10 ###### Abstract Not any nonsingular equation over a metabelian group has solution in a larger metabelian group. However, any nonsingular equation over a solvable group with a subnormal series with abelian torsion-free quotients has a solution in a larger group with a similar subnormal series of the same length (and an analogous fact is valid for systems of equations). ## 0 Introduction A system of equations \(\{w_{i}=1\,|\,i\in I\}\) with coefficients from a group \(G\), where \(w_{i}\) are words in the alphabet \(G\sqcup X^{\pm 1}\), and \(X\) is a set (the _set of unknowns_), is called _solvable over \(G\)_, if there exist a group \(\widetilde{G}\) containing \(G\) as a subgroup and a retraction of the free product \(\widetilde{G}*F(X)\) onto \(\widetilde{G}\) containing all elements \(w_{i}\) in its kernel (henceforth, \(F(X)\) is the free group with basis \(X\)). If the _solution group_\(\widetilde{G}\) can be chosen from a class \(\mathcal{K}\), then we say that the system _is solvable in \(\mathcal{K}\)_. The study of solvability of equations over groups has a long history: see, e.g., [GR62], [Le62], [Ly80], [How81], [B84], [EH91], [How91], [K93], [KP95], [FeR96], [K97], [K99], [CG00], [EdJu00], [IK00], [Juha03], [Ko06], [P08][BK12], [KL12], [KT17], [Ro17], [BE18], [ABA21], [EH21], [NT22], [KM22], and references therein; see also surveys [Ro12], [NRR79], and book [LS80]. A system of equations (possibly infinite and with, possibly, infinitely many unknowns) over a group is called _nonsingular_ if the rows composed of the exponent-sums of unknowns in each equation are linearly independent over \(\mathbb{Q}\). If these row are linearly independent over the \(p\)-element field \(\mathbb{F}_{p}=\mathbb{Z}/p\mathbb{Z}\) for each prime \(p\), then we call the system of equations _unimodular_. In particular, one equation with one unknown is * nonsingular if the exponent sum of the unknown in this equation is nonzero; * unimodular if this sum is \(\pm 1\). Unimodular equations behave better than arbitrary nonsingular ones: e.g, in [K93] (see also [FeR96]), it was proven that \[\text{any unimodular equation over a torsion-free group is solvable over this group;}\] it is unknown whether a similar statement is valid for arbitrary nonsingular equations. For nilpotent groups, everything is simple: Shmel'kin's theorem [Sh67] says (in particular) that \[\text{any finite nonsingular system of equations over a nilpotent torsion-free group}\ G\text{ has a (unique) solution in a nilpotent group}\ \widetilde{G}\supseteq G\text{ of the same nilpotency class; namely, }\widetilde{G}\text{ is the completion of }G\text{ (if the system is unimodular, then the torsion-free condition can be dropped, and the unique solution exists in }G\text{ itself).*) For solvable groups, the situation is more complicated: Section 1 contains examples showing (in particular) that \[\text{there exists a unimodular equation with one unknown over a metabelian group (that can be chosen finite or, on the contrary, torsion-free), which is not solvable in any larger metabelian group. (An example on this subject in [Ro17] is incorrect.) In 1981, Howie suggested the following generalisation of the well-known Kervaire-Laudenbach conjecture. The work of the first two authors was supported by the Russian Science Foundation, project no. 22-11-00075. The work of the third author was supported by the Russian Science Foundation, project no. 22-21-00745. 
\({}^{*}\)) Note that a theorem of Kuz'min [Ku74] (see also [Ku06]) says that a finitely generated metabelian group \(G\) is residually nilpotent if and only if any unimodular equation with one unknown over \(G\) has at most one solution in \(G\) (and the same is true for finitely generated abelian-by-nilpotent groups [Ku78]).
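As a concrete illustration of the nonsingularity and unimodularity conditions for a single equation with one unknown, the exponent sum of the unknown can be computed mechanically; the multiplicative string notation and the helper functions below are assumptions made purely for this example.

```python
import re

def exponent_sum(word: str, unknown: str = "x") -> int:
    """Exponent sum of the unknown in a word such as 'x^2 a x^-1 b x^-3 c', where
    letters other than the unknown stand for coefficients from the group."""
    total = 0
    for sym, exp in re.findall(r"([A-Za-z])(?:\^(-?\d+))?", word):
        if sym == unknown:
            total += int(exp) if exp else 1
    return total

def classify(word: str) -> str:
    s = exponent_sum(word)
    if s == 0:
        return "singular"
    return "unimodular" if abs(s) == 1 else "nonsingular, but not unimodular"

print(classify("x a x^-1 b x"))         # exponent sum  1 -> unimodular
print(classify("x^2 a x^-1 b x^-3 c"))  # exponent sum -2 -> nonsingular, but not unimodular
```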
2308.11725
Spectroscopic performance evaluation and modeling of a low background HPGe detector using GEANT4
Low background gamma spectrometry employing HPGe detectors is a sensitive technique for measuring low-level radioactivity in environmental applications, material screening, and for rare decay searches. This work presents spectroscopic performance evaluation and modelling of a low background measurement setup developed at IIT Ropar in Punjab, India, to measure trace natural radioactive elements, with a particular interest in studying low-level radioactivity in soil and/or rock samples to generate specific inputs for low background experiments. The performance test and characterization of a low background cryocooled HPGe detector with relative efficiency of $\sim$33% have been carried out. An effective detector model has been developed using GEANT4 Monte Carlo simulation to determine the response of the detector over an energy range of 80.9-1408 keV and compared with the experimental performance of the detector. The response of the detector obtained using Monte Carlo simulations agrees reasonably well within 93% level of confidence, indicating only 7% deviation in the comparison. The present setup offers improved detection limits of primordial radionuclides (U/Th and K) to measure radioactive contamination in environmental matrices, which has been used elsewhere [1].
Swati Thakur, Soni Devi, Sanjeet S. Kaintura, Katyayni Tiwari, Pushpendra P. Singh
2023-08-22T18:34:13Z
http://arxiv.org/abs/2308.11725v1
# Spectroscopic performance evaluation and modeling of a low background HPGe detector using GEANT4 ###### Abstract Low background gamma spectrometry employing HPGe detectors is a sensitive technique for measuring low-level radioactivity in environmental applications, material screening, and for rare decay searches. This work presents spectroscopic performance evaluation and modelling of a low background measurement setup developed at IIT Ropar in Punjab, India, to measure trace natural radioactive elements, with a particular interest in studying low-level radioactivity in soil and/or rock samples to generate specific inputs for low background experiments. The performance test and characterization of a low background cryocooled HPGe detector with relative efficiency of \(\sim\)33% have been carried out. An effective detector model has been developed using GEANT4 Monte Carlo simulation to determine the response of the detector over an energy range of 80.9-1408 keV and compared with the experimental performance of the detector. The response of the detector obtained using Monte Carlo simulations agrees reasonably well within 93% level of confidence, indicating only 7% deviation in the comparison. The present setup offers improved detection limits of primordial radionuclides (U/Th and K) to measure radioactive contamination in environmental matrices, which has been used elsewhere [1]. Carbon loaded HPGe detector, Detector scanning, and characterization, GEANT4 Monte Carlo simulation, Gamma-ray spectroscopy, Low background measurements Environmental radioactivity, Soil sample analysis [1]Department of Physics, Indian Institute of Technology Ropar, Rupnagar - 140 001, Punjab, India ## 1 Introduction Gamma spectroscopy setups employing low background HPGe spectrometers play an increasingly important role in measurements of environmental radioactivity and material selection for rare event experiments [2, 3, 4, 5, 6, 7, 8]. The experimental sensitivity of these low background measurements depends critically on the identification and minimization of confounding background. In above-ground measurements, the background is mainly dominated by cosmic rays and cosmic ray-induced interactions, cosmogenic radionuclides (T\({}_{1/2}\sim\) d-y), primordial radioactivity consisting of gamma-ray radiation from natural decay chains of \({}^{238}\)U, \({}^{232}\)Th and \({}^{40}\)K (T\({}_{1/2}\sim\) 10\({}^{8}\)-10\({}^{10}\) y) in and around the detector setup material, airborne radioactivity of radon and its progeny and the detector itself is the source of \(\alpha\), \(\beta\), \(\gamma\) and neutrons background [7]. The environmental radioactivity originating from naturally occurring radionuclides, namely; \({}^{238}\)U, \({}^{232}\)Th, and \({}^{40}\)K are long-lived (\(T_{1/2}\sim\)10\({}^{8}\) - 10\({}^{10}\) y) and present in trace amounts in the earth crust. The gamma-ray background from these naturally occurring radionuclides is a significant source of radiation exposure for above-ground laboratories. Measurement at the environmental level deals with low radioactivity concentration and involves inherent complexity due to the typical interference of natural radionuclides over a wide energy range. Additionally, the gamma-ray flux is subject to variation, possibly due to experiment location in the laboratory/site, seasonal fluctuations, the radon concentration near the experimental setup, or unknown systematic uncertainties. 
Background statistical fluctuations can influence the assessment of peak identification, peak area calculations, energy resolution, confidence intervals, and detection limits. These background fluctuations can cause differences in the precise determination of activity concentrations in weak samples. For reliable estimation of the radioactive impurities, it is necessary to identify and minimize the background to improve the minimum detection limits of the spectrometer. Due to the typical interference of natural radionuclides, it is essential to accurately estimate the background level by measuring the concentration of the members of these decay chains and their variation [9, 10]. In India, the proposals for a potential underground laboratory driven by the interest in rare event studies have been initiated [11, 12, 13]. As mentioned earlier, other than the ambient background from the surroundings, the material of the counting detectors itself contributes to the background contamination levels in rare event experiments. To improve the sensitivity of measurement and accurately determine radio impurities, an essential prerequisite is to keep the radioactivity around the detector as low as possible. The low background cryostats contribute minimally to the sample spectrum, improving its signal-to-noise ratio and significantly reducing the Minimum detectable activity (MDA). Therefore, a radiopure cryostat with selectively refined detector construction materials, combined with passive shielding, allows for lower minimum detectable activity and high sample throughput for a specific counting time, which is highly desirable in low background counting applications. Furthermore, cryocooler-based low background HPGe detectors are desirable for probing low background experiments in above-ground or remote underground locations for long-term counting measurements. To maximize the counting efficiency for investigating trace radioactive elements, it is necessary to understand the detector performance in compact geometry over a wide energy range and different counting setup configurations. With this motivation, we have taken up the initiative to build a low background measurement setup at the Indian Institute of Technology (IIT) Ropar using HPGe detectors. This setup is intended for radio-purity assessment and understanding of the radiation background at IIT Ropar to carry out low background experiments. This paper describes the performance test and characterization of a low background carbon fiber HPGe detector with a relative efficiency (R.E.) of \(\sim\)33%. The first set of studies presents detector characterization with radioactive sources over a wide energy range from 80.9-1408 keV. The performance specifications are successfully tested against the warranted values provided by the manufacturer. Measurements have been performed with point-like sources to calculate the peak shape, Peak-to-Compton ratio (P/C), Full energy peak efficiency (FEPE), and Figure of merit (FOM). Mono-energetic sources are used to perform close scanning of the Ge crystal along its parallel and perpendicular axes to determine the inner crystal structure. A Monte Carlo model has been developed using GEANT4 to calculate the FEPE of the HPGe detector for gamma spectrometry measurements associated with the development of a low background setup. It has been observed that germanium crystal volume and dead layers in the surroundings are crucial parameters in the detector geometry [14, 15].
Over the operational lifespan of the detector, the continuous diffusion of lithium ions into the germanium crystal leads to an increase in the dead layer thickness and a decrease in the active volume of the detector. Previous literature has reported a significant deviation of 10%-30% between the experimental and simulated FEPE efficiencies in the energy range of 60-2000 keV [15, 16, 17, 18]. In order to validate a simulation code, the detector has been optimized, which typically involves adjusting various detector parameters, such as the dimensions, dead layer thickness, and other relevant factors. The second set of studies characterizes the natural gamma-ray background during long counting measurement and its further reduction by incorporating a Pb shield at the above-ground laboratory of IIT Ropar. The organized statistical data on this topic would be valuable for inter-laboratory comparisons and unique in different geological and geographical conditions than the rest of the studied areas, especially quantifying low-level natural radioactivity. The activity concentration and Minimum detectable activity (MDA) for common radioisotopes in environmental samples are evaluated. The presentation of this paper is organized as follows, section 2 describes the experimental setup and procedure details, section 3 discusses the experimental characterization, detector modelling, comparison with simulated data, and spectroscopy measurements for characterizing background distributions. The summary and conclusions are given in section 4. ## 2 Experimental setup and Methodology The experimental setup (ILM-0) comprises of p-type co-axial HPGe detector, ORTEC make (GEM30P4-83-RB) with a measured relative efficiency of 33% having a crystal diameter of 62 mm and thickness of 46 mm. The detector crystal is mounted inside a cryostat of low background carbon fiber (carbon-composite) body, and the top face is 5 mm beneath a 0.9 mm thick carbon fiber entrance window. The rear end of the detector is attached to the \(\sim\) 150 mm long vertical cold finger connected to the transfer hose of an electro-mechanical cooler (ORTEC X-Cooler III) to cool the detector crystal to liquid nitrogen (LN\({}_{2}\)) temperature (77 K). The typical cooldown time required by the electro-mechanical cooler is found to be 10-16 h. The detector assembly is supported with additional online UPS to protect during utility power failure. The HPGe detector is surrounded by moderate passive shielding with lead bricks of dimensions 22.9 cm\(\times\) 7.6 cm\(\times\) 5.1 cm and mounted on a custom-made stainless steel (SS) table, as shown in Figure 1. In this arrangement, the detector with a preamplifier is entirely shielded with adequate space for mounting voluminous samples generally used in environmental radioactivity measurements. In addition, it has a provision for an inner layer of passive shielding and an active muon veto. It should be mentioned that different shapes and configurations of the lead shields were also considered before the final implementation of bricks geometry. The detector is coupled with the pulse processing electronics and data acquisition system, including a NIM-based high voltage power supply, spectroscopic amplifier, and multichannel analyzer procured from ORTEC. The multichannel analyzer records the data using the emulator software Maestro. A list of relevant technical details, along with the associated electronics, are summarized in Table 1. 
The characterization measurements have been carried out using a set of standard sealed disk-type gamma sources of the active diameter of \(\sim\)6 mm and thickness of \(\sim\)1 mm with absolute strength of gamma sources \(\leq\)95 kBq within 5% uncertainty. These sources cover a gamma ray energy range from 80.9 keV to 1408 keV, as shown in Table 2. Radiography with X-rays and gamma-rays is used to determine the detector structure and its dimensions. However, an active volume may differ depending upon the electric field configuration inside the crystal [19, 20, 21]. In the present work, radiography is not possible, and hence mono-energetic gamma sources are used to scan the crystal in the energy range of 88-1332.5 keV. In order to characterize the detector geometry, three types of scans, namely, i) Distance scan, ii) Radial scan, and iii) Lateral scan, have been performed around the detector. The scanning schemes of the detector are demonstrated in Figure 2. The radial and lateral scans were carried out using \({}^{57}\)Co, \({}^{60}\)Co, \({}^{137}\)Cs, \({}^{54}\)Mn, \({}^{65}\)Zn and \({}^{109}\)Cd gamma sources, see Table 2 for the reference energies. The radial scan was performed by moving the source parallel to the top detector face at a distance of 10 mm from the top face of the detector endcap (Carbon fiber window) in the position steps of 5 mm. It covered a range of \(\pm\) 6.5 cm w.r.t. the center of the detector. For the lateral scan, the gamma source was moved parallel to its cylindrical axis at a distance of 10 mm from the side face of the detector in 5 mm steps and covered a range of -2 cm to 9 cm again w.r.t. the detector endcap. The distance scan was performed with point-like sources at a regular interval from 5 cm to 30 cm along the axial symmetry axis away from the top face of the detector. Typical uncertainty in the positioning of the source in all scans is estimated to be \(\leq\)1 mm. Since some of the parameters are unknown or maybe not even related to the specific detector but are instead averaged over a set of detectors of the same or similar type, generic parameters offered by the manufacturer are typically insufficient. \begin{table} \begin{tabular}{l l} \hline Model & GEM30P4-83-RB \\ Manufacturer & ORTEC \\ In service since & 2017 \\ Capsule type & Pop top \\ HV Bias & +2800 V \\ Crystal polarity & p-type \\ DAQ & Analog \\ Shaping time & 6 \(\mu\)s \\ Geometry & Closed end \\ Cryostat type & Vertical dipstick \\ Cryostat diameter & 75 mm \\ Crystal diameter & 62 mm \\ \hline \end{tabular} \end{table} Table 1: Technical specifications of the HPGe detector supplied by the manufacturer. Figure 1: Schematic cross-section of the experimental setup of ILM-0 (IIT Ropar Low Background Measurement setup). The soil sample is mounted on the face of the detector, and the setup is placed on a thick stainless steel (SS) table 1 m above the ground. Therefore, parameters must be optimized. In order to determine the actual value of the parameters, optimizations are made to the crystal length (L), crystal radius (R), hole length (\(L_{h}\)), front gap, and dead layers. Given relatively low source strengths, no pileup effects have been observed in the spectra. Errors associated with the photopeak efficiency were computed, including statistical and systematic errors. 
Typical errors obtained in the present work were estimated to be \(\sim\) 0.2% in radial/lateral scans for E\({}_{\gamma}\) = 88 keV, \(\sim\) 0.5% for E\({}_{\gamma}\) = 122.1 keV, \(\sim\) 0.4% for E\({}_{\gamma}\) = 661.7 keV and \(\sim\) 0.1% for E\({}_{\gamma}\) = 1115.5 keV. It should be noted that the statistical errors are mainly due to the difference in strengths of various sources and energy-dependent variation in detection efficiency. Similarly, scan errors in the experiment were \(\sim\) 0.1% for the axial distances. Measurements were also performed with multi-gamma sources such as \({}^{60}\)Co, \({}^{152}\)Eu and \({}^{133}\)Ba at z \(\geq\) 10 cm to ensure that the coincidence summing is negligible. Data have been acquired for at least 10,000 counts for characterization measurements to reduce the statistical error to 1%. Detector dead time has been monitored throughout the measurement and maintained to be less than 1%. No significant worsening of energy resolution and shift in energy was observed at different times of long counting measurements of soil samples. The data of the background spectrum was recorded in a timestamp of 24 h. The raw data is then converted and analyzed using LAMPS [22] software. Each photopeak is fitted to the sum of Gaussian and second-order polynomials for a background to extract the net photopeak area. \begin{table} \begin{tabular}{l l l l l} \hline Sources & Gamma & Decay & Half- & Branching \\ & Energy & mode & life & Ratio \\ & (keV) & & & (\%) \\ \hline \({}^{109}\)Cd & 88.0 & EC & 461.4 d & 3.64 \\ \hline \({}^{57}\)Co & 122.1 & \(\beta^{-}\) & 271.7 d & 85.60 \\ & 136.5 & \(\beta^{-}\) & & 10.68 \\ \hline \({}^{133}\)Ba & 80.9 & EC & 10.5 y & 32.90 \\ & 276.4 & & & 7.160 \\ & 302.9 & & & 18.34 \\ & 356.0 & & & 62.05 \\ & 383.9 & & & 8.940 \\ \hline \({}^{22}\)Na & 511.0 & \(\beta^{+}\) & 2.6 y & 180.7 \\ & 1274.5 & & & 99.94 \\ \hline \({}^{137}\)Cs & 661.7 & \(\beta^{-}\) & 30.1 y & 85.10 \\ \hline \({}^{54}\)Mn & 834.5 & EC & 312.2 d & 99.98 \\ \hline \({}^{65}\)Zn & 1115.5 & EC & 243.9 d & 50.04 \\ \hline \({}^{60}\)Co & 1173.2 & \(\beta^{-}\) & 1925.3 d & 99.85 \\ & 1332.5 & & & 99.98 \\ \hline \({}^{152}\)Eu & 121.7 & EC & 13.5 y & 28.53 \\ & 244.7 & EC & & 7.550 \\ & 344.4 & \(\beta^{-}\) & & 26.60 \\ & 411.1 & \(\beta^{-}\) & & 2.240 \\ & 444.0 & EC & & 0.298 \\ & 779.0 & \(\beta^{-}\) & & 12.93 \\ & 867.4 & EC & & 4.230 \\ & 964.0 & EC & & 14.51 \\ & 1085.8 & EC & & 10.11 \\ & 1089.7 & \(\beta^{-}\) & & 1.734 \\ & 1112.1 & EC & & 13.67 \\ & 1213.0 & EC & & 1.415 \\ & 1299.1 & \(\beta^{-}\) & & 1.633 \\ & 1408.0 & EC & & 20.87 \\ \hline \end{tabular} \end{table} Table 2: Radioactive gamma-ray sources used in the measurements. ## 3 Results and Analysis ### Spectroscopic performance of the detector The accuracy of the measurements depends profoundly on the performance and stability of the detector and associated electronics. Different characteristics of the HPGe detector, such as energy calibration and resolution, peak shape and Peak-to-Compton ratio (P/C), Full energy peak efficiency (FEPE), and Figure of Merit (FoM), are determined as a function of gamma-ray energies against the warranted values provided by the manufacturer. The radioactive gamma sources used in the scanning of the detector are given in Table 2. For the present HPGe detector, R.E. was found to be 33 (0.3)% in the laboratory test, which is 6% lower than the measured value provided by the manufacturer. 
The HPGe detector has been calibrated using different standard gamma sources, and the calibration is shown in Figure 3 (a). The line through the data points represents a fitting function of the type y = a + bx, where a = 0.4 \(\pm\) 0.03 and b = 0.38185 \(\pm\) 1.59279 x10\({}^{-5}\). As can be noticed from this figure, a good linear relationship with the channel number is observed. The statistical correlation coefficient of the fit is found to be 1. The energy resolution measures the width (FWHM) of a single energy peak at a specific energy, usually expressed in keV for germanium detectors. It may be pointed out that the typical energy resolution of NaI and HPGe detectors is found to be 50 keV and 1.5 keV at 1332.5 keV, respectively [24]. Hence, HPGe is preferred over NaI detectors for high-resolution gamma-ray spectroscopy, even though the NaI is likely to have greater counting efficiency. Generally, the energy resolution of a detector is expressed as the ratio of FWHM to the gamma-ray energy. \begin{table} \begin{tabular}{c c c c} \hline Compound & Mass fraction & Compound & Mass fraction \\ \hline SiO\({}_{2}\) & 0.5890 & K\({}_{2}\)O & 0.0325 \\ \hline Al\({}_{2}\)O\({}_{3}\) & 0.1625 & Na\({}_{2}\)O & 0.0235 \\ \hline Fe\({}_{2}\)O\({}_{3}\) & 0.1340 & MgO & 0.0135 \\ \hline CaO & 0.0360 & TiO\({}_{2}\) & 0.0090 \\ \hline \end{tabular} \end{table} Table 3: Typical clay loam composition of soil samples [23]. Figure 2: A cross-sectional view of the detector with scanning directions indicated. The distribution of the energy resolution with different gamma-rays is shown in Figure 3 (b) and fitted to an empirical three-parameter function of the type, \[\mathrm{R=\frac{A}{E^{B}}+C} \tag{1}\] The best-fit values of the fitted parameters A, B, and C are 0.4 \(\pm\) 0.03, 0.9 \(\pm\) 0.01 and 6.42229 \(\times 10^{-4}\pm 3.43594\)\(\times 10^{-5}\), where A and B are in keV. The typical energy resolution of this detector at 1332.5 keV is 1.72 keV and is found to be similar to that of another LN\({}_{2}\) based HPGe detector of similar relative efficiency present in the laboratory. No worsening of the energy resolution has been observed over the running period of about five years. In addition to the FWHM taken at each peak, the full width at one-fifth maximum (FW.2M), full width at one-tenth maximum (FW.1M), and full width at one-fiftieth maximum (FW.02M) are also recorded to check for a worsening of the tail. The peak-to-Compton ratio has been measured from the same gamma line, 1332.5 keV of the \({}^{60}\)Co source, which has also been used for the resolution measurement. The value of this ratio was measured at various axial distances, and the mean value of the peak-to-Compton ratio was found to be 62:1, in agreement with the manufacturer data. The efficiency calibration of a spectrometer is of great importance in analyzing radionuclides of interest. Different physical parameters, such as the crystal volume and shape, source dimensions, gamma-ray absorption cross-section, attenuation layers in front of the detector, and the distance and position from the source to the detector, determine the efficiency of the detector [19, 20, 25].
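The empirical resolution model of Eq. (1) can be fitted directly with a nonlinear least-squares routine. In the sketch below the resolution is taken as the FWHM-to-energy ratio, and the data points are illustrative values of the right order of magnitude, not the measured ones.

```python
import numpy as np
from scipy.optimize import curve_fit

def resolution(E, A, B, C):
    """Empirical resolution model of Eq. (1): R = A / E**B + C."""
    return A / E ** B + C

# Illustrative (energy in keV, FWHM/E) pairs; not the measured values of this work.
E = np.array([122.1, 356.0, 661.7, 1115.5, 1332.5])
R = np.array([7.4e-3, 3.1e-3, 2.0e-3, 1.4e-3, 1.3e-3])

popt, pcov = curve_fit(resolution, E, R, p0=[0.5, 0.9, 1.0e-3])
perr = np.sqrt(np.diag(pcov))
print("A, B, C =", popt, "+/-", perr)
```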
Absolute efficiency, also known as full energy peak efficiency (FEPE), is defined as the ratio of the number of photopeak counts detected to the total number emitted by the source and can be determined according to the following equation: \[\mathrm{\epsilon=\frac{N_{\gamma}}{A_{\gamma}*I_{\gamma}*t}} \tag{2}\] Where N\({}_{\gamma}\) is the net photopeak area (background subtracted), A\({}_{\gamma}\) is the present activity of source (Bq), I\({}_{\gamma}\) is the gamma-ray emission probability. t is the time elapsed (taking into account the analyzer counting losses). Figure 4 shows the efficiency measurements for all the considered gamma-ray energies at a distance of 25 cm. The quoted errors may be due to the number of counts in the photopeak (statistical) and source activities (systematic). As shown in Figure 4, the solid line through the data points represents the best fit to the data. The efficiency curve is fitted using an inverse square fitting function of type, \[\mathrm{\epsilon=A+\frac{B}{E}+\frac{C}{E^{2}}} \tag{3}\] where \(\mathrm{\epsilon}\) is the photopeak efficiency, and E is the gamma-ray energy. The best-fit values to the fitted parameters A, B, and C are found to be 1.23209 \(\times 10^{-5}\pm\) 0.91263 \(\times 10^{-5}\), 0.5 \(\pm\) 0.01 and -30 \(\pm\) 1. The fitting function yields good approximations over different energy ranges and for different crystal sizes. As can be noticed from the figure, some Figure 3: Energy calibration and resolution of the HPGe detector are shown in (a) and (b), respectively. The solid line through the data points is the best fit. fluctuations in the data points are primarily due to variations in peak shape and low count rates. The HPGe detector has a closed-end coaxial configuration of p-type material and a thick attenuation layer at the detector entrance contact. Correspondingly, the detector's efficiency was found to be less for low energy gamma-rays below 80.9 keV as clearly visible in Figure 4. The efficiency curve shows a rapid increase from 80.9 keV of \({}^{133}\)Ba source, peaks at 121.8 keV of \({}^{152}\)Eu, and sharply decreases as a result of an increase in gamma-ray energy. This implies that the efficiency is maximum at low energy and decreases exponentially at higher emitted gamma-rays consistent with similar detectors by ORTEC [17]. The efficiency curve demonstrates the excellent performance of the measurements and analysis applied in this work. Measurement of photopeak efficiencies using the above radioactive sources can help better estimate the detector's active volume and surrounding materials. ### Detector modelling The Monte Carlo simulation program was developed using the GEANT4 framework (version 4.10.00.p02). The photopeak efficiencies have been estimated directly by determining the energy deposition in the simulated volume of the detector. The source-detector geometry, as shown in Figure 2, has been implemented using the information provided by the manufacturer. The primary particles were generated from the input files using G4ParticleGenerator class. The 10\({}^{6}\) photons were generated uniformly from the radioactive source to keep the statistical error below 1%. The photons cutoff energy was set at 1 keV using the electromagnetic processes PhysicsList. The energy calibration obtained from the experiments has been utilized to set up the energy bins in the simulated spectra. In order to avoid summing effects, the photopeak efficiencies were calculated using the mono-energetic sources at close distances. 
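Returning to the measured efficiencies, Eq. (2) reduces to a few lines of code once the net peak area, the decay-corrected activity, the emission probability, and the live time are known; the helper below also propagates the counting and source-strength uncertainties. All numbers are placeholders rather than measured values.

```python
import numpy as np

def fepe(n_peak, n_peak_err, activity_bq, act_rel_err, branching, live_time_s):
    """Full energy peak efficiency (Eq. 2) with simple error propagation."""
    eff = n_peak / (activity_bq * branching * live_time_s)
    rel = np.sqrt((n_peak_err / n_peak) ** 2 + act_rel_err ** 2)
    return eff, eff * rel

# Placeholder numbers: a 661.7 keV peak counted for one hour at 25 cm.
eff, err = fepe(n_peak=2.0e5, n_peak_err=5.0e2,
                activity_bq=8.1e4, act_rel_err=0.05,
                branching=0.851, live_time_s=3600)
print(f"FEPE = {eff:.2e} +/- {err:.1e}")
```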
The photons that entirely lose their energy in the active volume of the detector are taken into account for evaluating photopeak efficiencies. The efficiency at a given energy was calculated by generating the histogram of the number of events against the energy deposited inside the active volume of the crystal. The simulation results have been analyzed using the ROOT data analysis framework. The simulated photopeak efficiency (\(\epsilon^{\rm Sim}\)) has been determined from the stored energy histograms using the following equation, \[\epsilon^{\rm Sim}=\frac{\rm N_{c}}{\rm N_{g}} \tag{4}\] where \(\rm N_{c}\) is the number of photons depositing their full energy in the crystal after background elimination and \(\rm N_{g}\) is the number of photons generated. The relative deviations (\(\sigma_{R}\)) between simulated and experimental photopeak efficiencies were calculated using Equation 5, where n is the total number of data points in each set corresponding to a specific energy and \(\rm r_{i}\) represents the individual data points in the data set. \[\sigma_{\rm R}=\frac{1}{\rm n}\Sigma_{i=1}^{n}\frac{\epsilon^{\rm Exp}(\rm r_{ i})-\epsilon^{\rm Sim}(\rm r_{i})}{\epsilon^{\rm Sim}(\rm r_{i})} \tag{5}\] Figure 4: Efficiency calibration curve of the HPGe detector at a source-to-detector distance of 25 cm. Equation 6 defines the total relative deviation (\(\sigma_{\text{TR}}\)), where m is the number of data sets that correspond to the various energy scans and \(\sigma_{R}(\text{E}_{i})\) is the relative deviation corresponding to the different energies. \[\sigma_{\text{TR}}=\frac{1}{\text{m}}\Sigma_{i=1}^{\text{m}}\sigma_{\text{R}}( \text{E}_{i}) \tag{6}\] A polynomial function is used to fit the given data points and determine the minimum value of \(\sigma_{\text{R}}\) that yields the optimal fit. As low-order polynomials exhibit greater smoothness in comparison to high-order polynomials, the parabolic function \(\sigma_{\text{R}}=\text{ax}^{2}+\text{bx}+\text{c}\) was used to fit the given data. As an illustration, the best-fit values of the front gap (g) and the crystal radius (R) are shown in Figure 5 (a) and Figure 5 (b); the initial estimation was done using the \({}^{65}\)Zn source. #### 3.2.1 Parametric study Initially, the detector model was constructed according to the specifications provided by the manufacturer. Step-by-step adjustments were carried out to optimize the parameters, namely, crystal radius (R), hole depth (L\({}_{h}\)), dead layers (t\({}_{d}\), t\({}_{b}\), and t\({}_{s}\)) and front gap (g) to obtain an effective detector model (see Table 4). To replicate the experimental data, the detector's size must be optimized. The two main dimensions that have a significant impact on volume are R and L. \begin{table} \begin{tabular}{l l l l l} \hline \hline Parameter & Manufacturer & Varied from & Step size & Optimized \\ & (mm) & (mm) & (mm) & (mm) \\ \hline Crystal Radius (R) & 31 & 25-31 & 0.5 & 29.53 \(\pm\) 1.2 \\ \hline \hline \end{tabular} \end{table} Table 4: Nominal and optimized values of the detector model parameters. Crystal length (L) was constructed as solid and hollow cylinders and optimized in two parts, namely, the disc length (L\({}_{1}\)) and the hole depth (L\({}_{h}\)), each optimized independently.
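The optimization described by Eqs. (5) and (6) amounts to evaluating the relative deviation for each trial value of a geometry parameter and locating the minimum of a parabola fitted through those points. A schematic version is sketched below; the trial radii and deviations are placeholders, not the scanned values of this work.

```python
import numpy as np

def sigma_r(eff_exp, eff_sim):
    """Relative deviation between measured and simulated efficiencies (Eq. 5)."""
    eff_exp, eff_sim = np.asarray(eff_exp), np.asarray(eff_sim)
    return np.mean((eff_exp - eff_sim) / eff_sim)

def best_parameter(trial_values, deviations):
    """Vertex of the parabola fitted through (parameter, |deviation|) points."""
    a, b, _ = np.polyfit(trial_values, np.abs(deviations), 2)
    return -b / (2.0 * a)

# Placeholder scan of the crystal radius (mm) against |sigma_R| at one energy.
radii = np.array([27.0, 28.0, 29.0, 30.0, 31.0])
devs = np.array([0.06, 0.03, 0.01, 0.02, 0.05])
print(f"Optimal radius ~ {best_parameter(radii, devs):.2f} mm")
```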
Low-energy gamma-rays are considered optimal for estimating the thickness of the top dead layer (t\({}_{d}\)) on a crystal, as this layer attenuates gamma-rays, while it has negligible effects on photopeak efficiencies at higher energies. Therefore, the 88 and 122.1 keV energies are used for the top dead layer determination. Depending on how long the detector has been in operation, the dead layer can grow and thereby decrease the active volume. The manufacturer did not supply the t\({}_{d}\) value. Because the two detectors are identical, the same side dead layer (t\({}_{s}\)) as for CRADLE [26] was implemented. A passive Ge dead layer with a thickness of b=L-L\({}_{opt}\) is introduced into the model when employing the manufacturer's specified physical length, L. In the simulation, uniform dead layers were employed. The air gap has a significant effect at small source-detector distances; however, its influence can be considered negligible at large distances. It can also be estimated better with low-energy gamma-rays. The results of the lateral scan, as shown in Figure 7, indicate a significant deviation at the end side of the crystal. However, this issue was successfully addressed after the integration of the bottom dead layer into the model. Figures 6 and 7 compare the distance, radial, and lateral scan data for gamma-rays of various energies with the corresponding simulated data using the optimized detector model. Figure 9 illustrates a comparison of the experimental data with simulated efficiencies within the energy range of 88-1408 keV. It is observed that the measured value of 33% relative efficiency of the detector corresponds to an active volume of about 120 cm\({}^{3}\), which is 11% smaller than the manufacturer's stated value of 135 cm\({}^{3}\). As is clear from Figure 10, \(\epsilon^{\rm Exp}\) and \(\epsilon^{\rm Sim}\) with the nominal parameters for E = 122.1-1115.5 keV at d = 5-25 cm produced a significant \(\sigma_{\rm TR}\) of 26.2%, which was reduced to 7% after optimisation. Figure 6: Simulated and experimental absolute photopeak efficiency as a function of distance scan for 122.1 keV, 661.6 keV, 834.5 keV and 1115.5 keV (a), (b), (c) and (d) respectively. ### Radioactivity measurements The ambient background in the laboratory has been measured using the HPGe detector without any shielding at different times since its installation to check for possible changes in the background due to incoming and outgoing materials (for impurity control), daily variations of the radon concentration, and aperiodic variations of the cosmic-ray intensity. From a typical gamma-ray background spectrum recorded for 24 h, about 120 gamma lines were identified, emitted from the uranium and thorium decay chains in the room environment around the detector. The prominent radionuclides present in the spectra are due to the airborne radioactivity of radon and its short-lived decay products, i.e., the intermediate members of the radioactive decay series of \({}^{238}\)U and \({}^{232}\)Th. Day-wise investigations of the full gamma-ray spectra have shown a considerable variability of the areas of the \({}^{226}\)Ra progeny lines (\({}^{214}\)Pb, \({}^{214}\)Bi) in a range of about 14-18%. In contrast, the gamma lines of 1460.8 keV and 2614.5 keV from \({}^{40}\)K and \({}^{208}\)Tl, respectively, were comparatively constant within the experimental uncertainty. The integral background over the energy range of 40 keV to 2700 keV is \(5.9\times 10^{5}\) counts/kg/h, corresponding to a total count rate of 116 counts/s.
Long background measurements were also performed to observe the day-wise variation in the overall background and to estimate the statistical fluctuation in the prominent gamma peaks. Most of the variations were attributed to radon dynamics in the laboratory, while the background count rate over the energy region of 40-2700 keV is found to remain similar from day to day. The spectrometer exhibits good gain stability and negligible calibration drifts (<1 keV) over a long duration of measurements. A one-day ambient background spectrum of the HPGe detector without and with Pb shielding is shown as a representative case in Figure 11. As a result of the lead shielding, the overall background has been significantly reduced from 116 to 2 counts/s in the energy range of 40-2700 keV. The background rate for the 1460.8 keV (\({}^{40}\)K) and 2614.5 keV (\({}^{208}\)Tl) lines is 634 and 443 counts/day, respectively. It can be seen that the Pb shield effectively stops most of the gamma-rays from entering the active volume of the detector crystal. Figure 7: Simulated and experimental absolute photopeak efficiency as a function of lateral scan for 122.1 keV, 661.6 keV, 834.5 keV and 1115.5 keV (a),(b), (c) and (d) respectively. At the same time, the reduction ratio of the integrated count rate is \(\sim 10^{-4}\) and depends on the gamma-ray energy.
The background reduction achieved has been compared with the unshielded data for major peaks and various energy ranges. The ratio is reduced to 1.2% at 40-600 keV. In the energy range from 40 to 2700 keV, the ratio is 1.7%. The photopeak intensity reduction of the major gamma lines visible in the unshielded setup has been compared with the shielded setup as given in Table 5. For \({}^{238}\)U, the activity levels are reduced to 0.52% for \({}^{214}\)Bi and \({}^{214}\)Pb. In the case of \({}^{232}\)Th, the ratios fluctuate between 0.25% and 2.64% for \({}^{208}\)Tl, and the ratio is 0.57% for \({}^{228}\)Ac. The photopeaks of \({}^{137}\)Cs and \({}^{60}\)Co completely disappear in the shielded setup. The ratio for \({}^{40}\)K is 1.14%, and the annihilation peak drops to 20.3% of its unshielded value. The ambient background in the shielded setup has been counted for a duration of 49 d over a span of 3 months. Figure 10: Comparison of total relative deviation (\(\sigma_{TR}\)) between nominal and optimized parameters over an energy range of 88-1408 keV for d=5-25 cm. \begin{table} \begin{tabular}{l c c c c} \hline \hline Element & Energy & Without Shield (X) & With Shield (Y) & Reduction factor \\ & (keV) & (counts/day) & (counts/day) & Y/X(\%) \\ \hline \({}^{212}\)Pb & 238.6 & 37812(147) & 127(47) & 0.34(3) \\ \({}^{214}\)Pb & 295.3 & 16937(266) & 86(54) & 0.51(20) \\ \({}^{214}\)Pb & 351.9 & 31365(329) & 219(25) & 0.70(8) \\ \({}^{208}\)Tl/Ann. & 511.0 & 14956(903) & 3039(110) & 20.32(12) \\ \({}^{208}\)Tl & 583.2 & 21159(286) & 54(23) & 0.25(8) \\ \({}^{214}\)Bi & 609.3 & 32746(349) & 170(23) & 0.52(7) \\ \({}^{137}\)Cs & 661.7 & 858(98) & – & – \\ \({}^{214}\)Bi & 806.4 & 1084(139) & – & – \\ \({}^{228}\)Ac & 835.6 & 1356(133) & – & – \\ \({}^{228}\)Ac & 911.2 & 16609(232) & 94(28) & 0.57(12) \\ \({}^{214}\)Bi & 1120.3 & 9411(208) & 106(21) & 1.13(10) \\ \({}^{60}\)Co & 1173.2 & 654(130) & – & – \\ \({}^{60}\)Co & 1332.5 & 464(87) & – & – \\ \({}^{40}\)K & 1460.8 & 61137(834) & 700(37) & 1.14(4) \\ \({}^{214}\)Bi & 1764.5 & 8916(154) & 199(33) & 2.23(21) \\ \({}^{208}\)Tl & 2614.5 & 19101(400) & 469(31) & 2.64(8) \\ \hline \hline \end{tabular} \end{table} Table 5: Observed activity with and without Pb shield along with the reduction factor. It should be mentioned that the stability of the energy scale was monitored with background gamma-rays such as 1460.8 and
2614.5 keV, since calibration with standard sources was not always convenient due to the heavy passive shielding. In the summed gamma-ray background spectra of 49 d, besides the gamma-rays emitted by natural radionuclides, another source of background is generated by neutron interactions. The anthropogenic radionuclides \({}^{137}\)Cs and \({}^{60}\)Co were found to be below the detection limits, and no contribution from long-lived cosmogenically produced radionuclides, e.g., \({}^{22}\)Na, \({}^{60}\)Co and \({}^{65}\)Zn, was observed. In above-ground low-level gamma spectrometry systems, some peaks associated with the activation of germanium and fast neutron scattering in the shielding material occur in the background spectrum. For example, the 139.7 keV gamma line is produced via the neutron-induced process \({}^{74}\)Ge(n,\(\gamma\))\({}^{75}\)Ge. Peaks at 569.7 keV and 1063.3 keV from the (n,n') scattering reaction on lead were also observed. Another gamma ray at 803.3 keV is detectable and corresponds to the de-excitation of \({}^{206}\)Pb\({}^{*}\), which can result from the decay of \({}^{210}\)Po or from fast neutron scattering on lead. An important characteristic in low-activity measurements is the minimum detectable activity (A\({}_{\rm D}\)), which defines the least amount of activity in the sample that can be quantified above the unavoidable background. Shorter measurement times and higher background radiation levels would increase the A\({}_{\rm D}\) and reduce the sensitivity of the detector. The detector efficiency significantly affects the detection levels. In addition to the detection efficiency, the amount of sample, the measurement time, and the photon emission probability, the A\({}_{\rm D}\) in gamma spectrometry depends on the background level at the specific energy. These background events mainly come from three primary sources: the sample, the Compton continuum, and natural radioactivity. The A\({}_{\rm D}\) in Bq/kg at a given gamma-ray energy is calculated using Currie's method [27] and is given by: \[\rm A_{D}=\frac{N_{D}}{I_{\gamma}*m*\varepsilon_{\gamma}*t} \tag{7}\] where N\({}_{\rm D}\) = 2.7 + 4.65 \(\sigma_{\rm B}\) is the minimum detectable number of counts, \(\sigma_{\rm B}\) is the standard deviation of the background counts, I\({}_{\gamma}\) is the branching ratio of the gamma-ray, \(\varepsilon_{\gamma}\) is the photopeak detection efficiency computed using the GEANT4 simulation, m is the mass of the sample, and t is the counting time. Figure 11: Ambient gamma-ray background in ILM-0 with (red line) and without (black line) Pb shield (t = 1 d). Some of the intense gamma-ray lines corresponding to different radionuclides are marked. It should be noted that the simulated efficiencies (\(\epsilon^{\rm Sim}\)) were obtained with the optimized detector model for 10\({}^{6}\) events distributed uniformly within the studied sample, modeled in GEANT4 with the same counting geometry as used during the measurement. ### Measurement of Soil and Rock Samples The sensitivity of the ILM-0 setup was estimated using sub-surface soil (30 cm depth) and rock samples. The soil samples were collected from the Katli village (30.99582 N, 76.53480 E) in the Ropar region of Punjab (India), and the rock samples were collected from the Aut region of Himachal Pradesh (India). It may be noted that the rock samples were remeasured in the present work to assess the performance of the ILM-0 setup as compared to the TiLES measurements reported elsewhere [13].
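For the sample measurements that follow, the detection limit of Eq. (7) can be evaluated directly once the background fluctuation under the peak region, the simulated efficiency, and the sample mass are fixed. The sketch below uses placeholder inputs chosen only to be of a realistic order of magnitude, not the measured values of this work.

```python
import numpy as np

def minimum_detectable_activity(bkg_counts, branching, mass_kg, efficiency, time_s):
    """Currie's MDA (Eq. 7): A_D = N_D / (I_gamma * m * eff * t), N_D = 2.7 + 4.65*sigma_B."""
    sigma_b = np.sqrt(bkg_counts)
    n_d = 2.7 + 4.65 * sigma_b
    return n_d / (branching * mass_kg * efficiency * time_s)

# Placeholder values for the 661.7 keV region of a 20 g sample counted for 24 h.
mda = minimum_detectable_activity(bkg_counts=4.0, branching=0.851,
                                  mass_kg=0.020, efficiency=0.03, time_s=86400)
print(f"A_D(137Cs) ~ {mda:.2f} Bq/kg")
```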
The soil samples were oven-dried (110\({}^{\circ}\)C), finely powdered, and then sieved through a 150 \(\mu\)m mesh. Each sample was packed and sealed in a cylindrical polypropylene container of \(\sim\)3.5 cm dia. and \(\sim\)2.2 cm height and stored for stabilization. Smaller sample geometries are preferable to voluminous ones because the attenuation of gamma-rays within the sample matrix is then negligible. A total of 10 soil samples, with an average mass \(\langle\)m\(\rangle\)\(\sim\)20 g, were counted in compact geometry for 24 h each. The mean soil density \(\langle\rho\rangle\), as packed in the container, is estimated to be \(\sim\)1.3 g/cm\({}^{3}\) with an overall variation of \(\sim\)10%. It has been reported [28] that the Punjab soil is mainly of clay loam type and is thus expected to have a composition similar to that listed in Table 3. To enhance the counting efficiency, the soil samples were counted on the front face of the detector and confined within 60% of the radial extension to avoid edge effects. The effects of the elemental composition and the sample density were taken into account. The same geometry was adopted for counting the rock samples. Figure 12 compares typical one-day spectra of the ambient background (x0.1), Ropar soil (x3), and Aut rock samples. The lines and notations are self-explanatory. In order to determine the detection limit of ILM-0 for low-activity measurements, A\({}_{\rm D}\) for the soil matrices was computed from the minimum detectable counts (N\({}_{\rm D}\)) using Eq. 7. Figure 12: Typical gamma-ray spectra of a soil sample (red line), Aut rock sample (blue line) and ambient background (black line) without the sample (t=1 d). The soil sample and ambient background are scaled arbitrarily for better visibility. The gamma-rays of interest are indicated (*) in the spectra. In close counting geometry, coincidence summing affects the observed photopeak yield [13]. In the present analysis, nuclides in the natural radioactive decay chains were considered to be in secular equilibrium, and gamma-rays with negligible coincidence summing were chosen to estimate the activities. A\({}_{\rm D}\) has been calculated for several radionuclides often encountered in environmental samples, of which the most dominant ones are shown in Table 6. The MDA value for \({}^{137}\)Cs is the lowest and hence determines the nuclide detection limit. The specific activities were estimated from the measured photopeak yields after background correction. The specific activity, defined as the activity per unit mass A\({}_{\gamma}\) corresponding to a given transition of the radionuclide, was determined using, \[\rm A_{\gamma}=\frac{N_{\gamma}}{I_{\gamma}*m*\varepsilon_{\gamma}*t}, \tag{8}\] where N\({}_{\gamma}\) is the net observed counts in the photopeak after correcting for the ambient background. The uncertainty in the specific activity includes the errors in the efficiency and the peak fitting. Most of the gamma-rays visible were from the uranium and thorium decay chains. However, only those gamma-rays that could be unambiguously assigned to a particular nuclide were considered for further analysis. For the selected gamma lines and the compact sample geometry, correction factors due to sample self-absorption and coincidence summing can be reasonably neglected. For \({}^{238}\)U, the single gamma line of \({}^{214}\)Pb at 295.3 keV; for \({}^{232}\)Th, the gamma line of \({}^{212}\)Pb at 238.6 keV; and for \({}^{40}\)K, the 1460.8 keV line were selected for the analysis.
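Equation (8) has the same structure as the detection-limit expression, with the net peak counts replacing the detection threshold. The sketch below again uses placeholder inputs and a simplified error budget limited to the efficiency and peak-fitting contributions.

```python
import numpy as np

def specific_activity(net_counts, net_err, branching, mass_kg, eff, eff_rel_err, time_s):
    """Specific activity (Eq. 8) in Bq/kg with efficiency and fit errors combined."""
    act = net_counts / (branching * mass_kg * eff * time_s)
    rel = np.sqrt((net_err / net_counts) ** 2 + eff_rel_err ** 2)
    return act, act * rel

# Placeholder: the 212Pb 238.6 keV line from a 20 g soil sample counted for 24 h.
a, da = specific_activity(net_counts=2500, net_err=80, branching=0.436,
                          mass_kg=0.020, eff=0.04, eff_rel_err=0.07, time_s=86400)
print(f"A(232Th via 212Pb) = {a:.0f} +/- {da:.0f} Bq/kg")
```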
The observed specific activity of primordial radionuclides in the soil samples was found to range between 32-67, 66-107, and 590-860 Bq/kg, with mean specific activities of 50, 85, and 670 Bq/kg for \({}^{238}\)U, \({}^{232}\)Th, and \({}^{40}\)K, respectively. The measured activity of \({}^{238}\)U and \({}^{232}\)Th showed a consistent distribution of primordial radionuclides among all the soil samples, while a somewhat larger scatter is observed in the \({}^{40}\)K data. In the case of the rock sample measurements, the measured specific activity of \({}^{238}\)U in the Aut rock is 7 (1) Bq/kg, which is consistent with the value reported in Ref. [13], while for \({}^{232}\)Th and \({}^{40}\)K no measurable activity could be observed above the ambient background at the present experimental sensitivity. To estimate the activity concentration in the rock samples, reducing the overall background by an order of magnitude is necessary, as seen in Figure 12. The existing setup needs augmentation with thicker low-activity lead/copper shields to improve the detection limits. As discussed earlier, above-ground laboratories are mostly dominated by muon-induced interactions; therefore, deploying an additional cosmic muon veto system is desirable to improve the sensitivity, with an overall background reduction of about 50% expected in the present setup. ## 4 Summary and Conclusions In summary, a moderately shielded low background counting facility has been set up at IIT Ropar to investigate environmental radioactivity and rare decays. The GEANT4 simulations employing Monte Carlo procedures have been performed to determine the photopeak efficiency of the HPGe detector, and an effective detector model has been obtained. Various parameters, such as the crystal radius (R), crystal length (L), front gap (g), and dead layers (t\({}_{d}\), t\({}_{b}\), and t\({}_{s}\)), have been compared and optimized through the lateral, radial and distance scanning measurements. The relative deviations between simulated and experimental efficiencies were found to be \(\sim\)7% within the photon energy range of 80.9-1408 keV, indicating an agreement at the level of \(\sim\)93% between the experimental characterization and the simulation. In order to assess the performance of the setup, the trace concentration of radioactive elements has been measured in soil samples from the agricultural test site in Ropar with ILM-0. The studied site has mean specific activity of 50, 85, and 670 Bq/kg for \({}^{238}\)U, \({}^{232}\)Th, and \({}^{40}\)K, respectively. \begin{table} \begin{tabular}{c c c c} \hline \hline Radionuclide & Energy & N\({}_{\rm D}\) & A\({}_{\rm D}\) \\ (/Parent) & (keV) & (counts/day) & (Bq/kg) \\ \hline \({}^{212}\)Pb (/\({}^{232}\)Th) & 238.6 & 65 & 1 \\ \({}^{214}\)Bi (/\({}^{238}\)U) & 1764.5 & 76 & 25 \\ \({}^{214}\)Pb (/\({}^{238}\)U) & 295.3 & 85 & 5 \\ \({}^{228}\)Ac (/\({}^{232}\)Th) & 911.2 & 39 & 4 \\ \({}^{137}\)Cs & 661.6 & 3 & 0.1 \\ \({}^{60}\)Co & 1332.5 & 38 & 2 \\ \({}^{40}\)K & 1460.8 & 120 & 49 \\ \hline \hline \end{tabular} \end{table} Table 6: Estimated sensitivity of the setup. While the \({}^{238}\)U and \({}^{232}\)Th activities among the samples are consistent within the overall uncertainty, \({}^{40}\)K shows a somewhat more significant variation. The Aut rock measurements have shown a specific activity of \({}^{238}\)U of 7 (1) Bq/kg, while for \({}^{232}\)Th and \({}^{40}\)K further measures to improve the signal-to-noise ratio are essential.
Efforts are underway to augment the setup with low activity lead shields and plastic veto detectors to improve the sensitivity for studies relevant to rare decay searches. Measurements of the studied site with rich datasets employing two detector configurations are proposed. ## 5 Acknowledgments The authors thank Prof. R. G. Pillay and Prof. Vandana Nanal for their suggestions during the development of the setup. The Indian Institute of Technology Ropar is acknowledged for an ISIRD grant for setting up ILM-0 for rare decay physics. One of the authors, Swati Thakur, thanks the Ministry of Education (MoE), Government of India, for the doctoral fellowship, and iHub - AWaDH, a Technology Innovation Hub established by the Department of Science & Technology, Government of India, in the framework of the National Mission on Interdisciplinary Cyber-Physical Systems (NM - ICPS), for financial support to execute this work. Soni Devi and Katyayni Tiwari acknowledge the doctoral fellowships received from the University Grant Commission and DST - INSPIRE, respectively.
2306.08900
Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization
Offline reinforcement learning (RL) that learns policies from offline datasets without environment interaction has received considerable attention in recent years. Compared with the rich literature in the single-agent case, offline multi-agent RL is still a relatively underexplored area. Most existing methods directly apply offline RL ingredients in the multi-agent setting without fully leveraging the decomposable problem structure, leading to less satisfactory performance in complex tasks. We present OMAC, a new offline multi-agent RL algorithm with coupled value factorization. OMAC adopts a coupled value factorization scheme that decomposes the global value function into local and shared components, and also maintains the credit assignment consistency between the state-value and Q-value functions. Moreover, OMAC performs in-sample learning on the decomposed local state-value functions, which implicitly conducts max-Q operation at the local level while avoiding distributional shift caused by evaluating out-of-distribution actions. Based on the comprehensive evaluations of the offline multi-agent StarCraft II micro-management tasks, we demonstrate the superior performance of OMAC over the state-of-the-art offline multi-agent RL methods.
Xiangsen Wang, Xianyuan Zhan
2023-06-15T07:08:41Z
http://arxiv.org/abs/2306.08900v1
# Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization ###### Abstract. Offline reinforcement learning (RL) that learns policies from offline datasets without environment interaction has received considerable attention in recent years. Compared with the rich literature in the single-agent case, offline multi-agent RL is still a relatively underexplored area. Most existing methods directly apply offline RL ingredients in the multi-agent setting without fully leveraging the decomposable problem structure, leading to less satisfactory performance in complex tasks. We present OMAC, a new offline multi-agent RL algorithm with coupled value factorization. OMAC adopts a coupled value factorization scheme that decomposes the global value function into local and shared components, and also maintains the credit assignment consistency between the state-value and Q-value functions. Moreover, OMAC performs in-sample learning on the decomposed local state-value functions, which implicitly conducts max-Q operation at the local level while avoiding distributional shift caused by evaluating out-of-distribution actions. Based on the comprehensive evaluations of the offline multi-agent StarCraft II micro-management tasks, we demonstrate the superior performance of OMAC over the state-of-the-art offline multi-agent RL methods. Multi-agent reinforcement learning, Offline reinforcement learning, Multi-agent cooperation
To tackle the above issues, we propose OMAC, a new offline multi-agent RL algorithm with coupled value factorization. OMAC organically marries offline RL with a specially designed coupled multi-agent value decomposition strategy. In addition to decomposing the global Q-value function \(Q_{tot}\) as in the typical CTDE framework, OMAC also decomposes \(V_{tot}\) into local state-value functions \(V_{i}\) and a shared component \(V_{share}\). Moreover, OMAC imposes an extra coupled credit assignment scheme between the state-value and Q-value functions to enforce consistency and a more regularized global-local relationship. Under this factorization strategy, we can learn an upper expectile local state-value function \(V_{i}\) in a completely in-sample manner. It enables separated learning of the local Q-value function \(Q_{i}\) and the policy \(\pi_{i}\), which improves the learning stability of both the local and global Q-value functions. We benchmark our method using offline datasets of StarCraft Multi-Agent Challenge (SMAC) tasks (Sutton et al., 2017). The results show that OMAC achieves state-of-the-art (SOTA) performance compared with the competing baseline methods. We also conduct further analyses to demonstrate the effectiveness of our design, as well as the sample efficiency of OMAC. ## 2. Related Work **Offline reinforcement learning.** The main challenge in offline RL is to prevent distributional shift and extrapolation error accumulation when evaluating the value function on out-of-distribution (OOD) samples. Existing offline RL methods adopt several approaches to keep policy learning from deviating too much from the offline dataset. Policy constraint methods (Grover et al., 2016; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016) add explicit or implicit behavioral constraints to restrain the policy to stay inside the distribution or support of data. Value regularization methods (Grover et al., 2016; Sohn et al., 2016; Sohn et al., 2016) regularize the value function to assign low values on OOD actions.
Uncertainty-based and model-based methods (Grover et al., 2016; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016) estimate the epistemic uncertainty from value functions or learned models to penalize OOD data. Finally, in-sample learning methods (Grover et al., 2016; Sohn et al., 2016; Sohn et al., 2016) learn the value function entirely within the data to avoid directly querying the Q-function on OOD actions produced by policies. The offline RL component of OMAC shares a similar ingredient with in-sample learning methods, which enjoys the benefit of stable and decoupled learning of value functions and policies. **Multi-agent reinforcement learning.** The complexity of multi-agent decision-making problems is reflected in their huge joint action spaces (Grover et al., 2016). In recent years, the CTDE framework (Grover et al., 2016; Sohn et al., 2016) has become a popular choice to separate agents' learning and execution phases to tackle the exploding action space issue. In CTDE, agents are trained in a centralized manner with global information but learn decentralized policies to make decisions in their individual action spaces during execution. Representative MARL algorithms under the CTDE framework are the value decomposition methods (Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016), which decompose the global Q-function into a combination of local Q-functions for scalable multi-agent policy learning. There have been a few recent attempts to design MARL algorithms under the offline setting. For example, ICQ (Sohn et al., 2016) uses importance sampling to implicitly constrain policy learning on OOD samples under the CTDE framework. OMAR (Sohn et al., 2016) extends multi-agent CQL (Sohn et al., 2016) by adding zeroth-order optimization to prevent policy learning from falling into bad local optima. MADT (Sohn et al., 2016) leverages the transformer architecture, which has strong sequential data modeling capability, to solve offline MARL tasks. However, the existing offline MARL algorithms simply combine well-established multi-agent frameworks with offline RL ingredients, rather than marrying them in an organic way. None of these methods fully utilizes the underlying decomposable problem structure for offline modeling. Moreover, they rely on the coupled learning process of local Q-functions and policies, which is prone to bootstrapping error and makes it hard to trade off between policy exploitation and data-related regularization, causing either instability during training or over-conservative policy learning (Grover et al., 2016; Sohn et al., 2016). In this work, we develop OMAC to tackle the above limitations of prior works, which enables a seamless unification of both multi-agent modeling and offline learning. ## 3. Preliminaries ### Notations A fully cooperative multi-agent task can be described as a decentralized partially observable Markov decision process (Dec-POMDP) (Grover et al., 2016). A Dec-POMDP is formally defined by a tuple \(G=\langle\mathcal{S},\mathcal{A},\mathcal{P},r,\mathcal{Z},O,n,\gamma\rangle\). \(s\in\mathcal{S}\) is the true state of the environment. \(\mathcal{A}\) denotes the action set for each of the \(n\) agents. At every time step, each agent \(i\in\{1,2,\ldots,n\}\) chooses an action \(a_{i}\in\mathcal{A}\), forming a joint action \(\mathbf{a}=(a_{1},a_{2},...,a_{n})\in\mathcal{A}^{n}\).
It causes a transition to the next state \(s^{\prime}\) in the environment according to the transition dynamics \(P\left(s^{\prime}|s,\mathbf{a}\right):\mathcal{S}\times\mathcal{A}^{n}\times \mathcal{S}\rightarrow[0,1]\). All agents share the same global reward function \(r(s,\mathbf{a}):\mathcal{S}\times\mathcal{A}^{n}\rightarrow\mathbb{R}\). \(\gamma\in[0,1)\) is a discount factor. In the partially observable environment, each agent draws an observation \(o_{i}\in\mathcal{O}\) at each step from the observation function \(\mathcal{Z}(s,i):\mathcal{S}\times N\rightarrow\mathcal{O}\). The team of all agents aims to learn a set of policies \(\pi=\{\pi_{1},\cdots,\pi_{n}\}\) that maximizes the expected discounted return \(\mathbb{E}_{\mathbf{a}_{t}\sim\pi,\,s_{t+1}\sim P}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},\mathbf{a}_{t})\right]\). Under the offline setting, we are given a pre-collected dataset \(\mathcal{D}\) and the policy learning is conducted entirely with the data samples in \(\mathcal{D}\) without any environment interactions. ### CTDE Framework and IGM Condition For multi-agent reinforcement learning, the joint action space increases exponentially with the number of agents, so it is difficult to query an optimal joint action from the global Q-function \(Q_{tot}(\mathbf{o},\mathbf{a})\). Besides, applying policy gradient updates with the global Q-function \(Q_{tot}\) on the local policy of each agent while treating other agents as part of the environment may lead to poor performance. It suffers from the non-stationarity issue and poor convergence guarantees, as the global Q-function could be affected by the suboptimality of individual agents and potentially disturb the policy updates of others (Grover et al., 2016; Sohn et al., 2016). To address these problems, the Centralized Training with Decentralized Execution (CTDE) framework (Grover et al., 2016; Sohn et al., 2016) has been proposed. During the training phase, agents have access to the full environment state and can share each other's experiences. In the execution phase, each agent only selects actions according to its individual observation \(o_{i}\). The benefit is that optimization at the individual level leads to optimization over the joint action space, which avoids the exponential growth of the joint action space with the number of agents. The Individual-Global-Max (IGM) condition is proposed to realize the decomposition of the joint action space, which asserts the consistency between the joint and individual greedy action selections in the global and local Q-functions \(Q_{tot}\) and \(Q_{i}\): \[\arg\max_{\mathbf{a}}Q_{tot}(\mathbf{o},\mathbf{a})=\left(\begin{array}{c}\arg\max_{a_{1}}Q_{1}\left(o_{1},a_{1}\right)\\ \vdots\\ \arg\max_{a_{n}}Q_{n}\left(o_{n},a_{n}\right)\end{array}\right) \tag{1}\] Through the IGM condition, MARL algorithms can learn the global Q-function and choose actions at the individual level. Meanwhile, it is also possible for offline MARL algorithms to impose constraints on the individual action space, while indirectly enforcing regularization on the joint action space. ## 4. Method In this section, we formally present the coupled value factorization strategy of OMAC and explain how it can be integrated into effective offline learning. OMAC decomposes both the state-value and Q-value functions, and also maintains a coupled credit assignment between \(Q\) and \(V\).
With this scheme, OMAC can perform in-sample learning on the decomposed local state-value functions without the involvement of policies, which implicitly enables max-Q operation at the local level while avoiding the distributional shift caused by evaluating OOD actions. Finally, the local policy can be separately learned with the well-learned value functions through advantage weighted regression (Zhou et al., 2017). ### Coupled Value Factorization (CVF) #### 4.1.1. A refined value factorization strategy In OMAC, we consider the following factorization on the global Q-value and state-value. For each agent, we define the local state-value function \(V_{i}\) as the optimal value of the local Q-function \(Q_{i}\). In particular, we decompose the global state-value function into a linear combination of local state-value functions \(V_{i}(o_{i})\) with weight function \(w_{i}^{v}(\mathbf{o})\), as well as a shared component based on the full observation, \(V_{share}(\mathbf{o})\). The global Q-function is further decomposed as the state-value function plus a linear combination of local advantages \(Q_{i}(o_{i},a_{i})-V_{i}(o_{i})\) with weight function \(w_{i}^{q}(\mathbf{o},\mathbf{a})\): \[V_{tot}(\mathbf{o})=\sum_{i=1}^{n}w_{i}^{v}(\mathbf{o})V_{i}\left(o_{i}\right)+V_{share}(\mathbf{o})\] \[Q_{tot}(\mathbf{o},\mathbf{a})=V_{tot}(\mathbf{o})+\sum_{i=1}^{n}w_{i}^{q}(\mathbf{o},\mathbf{a})(Q_{i}\left(o_{i},a_{i}\right)-V_{i}\left(o_{i}\right))\] \[V_{i}(o_{i})=\max_{a_{i}}Q_{i}(o_{i},a_{i}),\ w_{i}^{v},w_{i}^{q}\geq 0,\ \forall i=1,\cdots,n \tag{2}\] where we enforce the positivity condition on the weight functions \(w_{i}^{v}(\mathbf{o})\) and \(w_{i}^{q}(\mathbf{o},\mathbf{a})\). It can be shown that this factorization strategy has a number of attractive characteristics. Property 1. _The definition of the global \(Q\) function in Eq. (2) satisfies \(\max_{\mathbf{a}}Q_{tot}(\mathbf{o},\mathbf{a})=V_{tot}(\mathbf{o})\) and the IGM condition._ Proof. Since \(V_{i}(o_{i})=\max_{a_{i}}Q_{i}(o_{i},a_{i})\geq Q_{i}(o_{i},a_{i})\) for all \(a_{i}\), and \(w_{i}^{q}\geq 0\), we have \(\sum_{i=1}^{n}w_{i}^{q}(\mathbf{o},\mathbf{a})(Q_{i}\left(o_{i},a_{i}\right)-V_{i}\left(o_{i}\right))\leq 0\). Therefore \(Q_{tot}(\mathbf{o},\mathbf{a})\leq V_{tot}(\mathbf{o})\), and the maximal value of the global Q-function, \(\max_{\mathbf{a}}Q_{tot}(\mathbf{o},\mathbf{a})=V_{tot}(\mathbf{o})\), is only achievable when all local Q-functions achieve their maximum (i.e., \(\max_{a_{i}}Q_{i}(o_{i},a_{i})=V_{i}(o_{i})\)), which also implies the IGM consistency of Eq. (1). Second, in Eq. (2) the globally shared information is partly captured in the shared component of the state-value function \(V_{share}(\mathbf{o})\), which is free of the joint actions and not affected by OOD actions under offline learning. The information sharing across agents and the credit assignment are captured in the weight functions \(w_{i}^{v}(\mathbf{o})\), \(w_{i}^{q}(\mathbf{o},\mathbf{a})\), and the local value functions \(V_{i}(o_{i})\) and \(Q_{i}(o_{i},a_{i})\) are now only responsible for local observation and action information. The shared and the local information are separated, and agents can make decisions by using the local \(V_{i}\) and \(Q_{i}\) at an individual level. As we will show in a later section, this structure also leads to a particularly nice form to incorporate in-sample offline value function learning. #### 4.1.2. Coupled credit assignment (CCA) The value factorization strategy in Eq. (2) can potentially allow too much freedom on the weight functions \(w^{v}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\).
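Before the coupled credit assignment is introduced, a minimal PyTorch-style sketch of the plain factorization in Eq. (2) is given here, with independent networks producing \(w^{v}\) and \(w^{q}\) (the shared encoding introduced below in Eq. (3) is omitted for brevity). The module name, layer sizes, and the use of absolute values for positivity are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoupledValueFactorization(nn.Module):
    """Sketch of Eq. (2): V_tot and Q_tot from local values, weights, and V_share."""

    def __init__(self, n_agents, obs_dim, act_dim, hidden=64):
        super().__init__()
        joint_obs, joint_act = n_agents * obs_dim, n_agents * act_dim
        self.v_share = nn.Sequential(nn.Linear(joint_obs, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))
        self.w_v = nn.Sequential(nn.Linear(joint_obs, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_agents))
        self.w_q = nn.Sequential(nn.Linear(joint_obs + joint_act, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_agents))

    def forward(self, obs, act, v_locals, q_locals):
        # obs: (B, n*obs_dim), act: (B, n*act_dim), v_locals and q_locals: (B, n)
        w_v = self.w_v(obs).abs()                              # enforce w_v >= 0
        w_q = self.w_q(torch.cat([obs, act], dim=-1)).abs()    # enforce w_q >= 0
        v_tot = (w_v * v_locals).sum(-1, keepdim=True) + self.v_share(obs)
        q_tot = v_tot + (w_q * (q_locals - v_locals)).sum(-1, keepdim=True)
        return v_tot, q_tot
```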
Figure 1. The network structure of OMAC. Ideally, the credit assignment on the global state-value and Q-value should be coupled and correlated. Thus, we further design a coupled credit assignment scheme implemented with neural networks to enforce such consistency, which also leads to a more regularized relationship between \(w^{\mathbf{o}}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\): \[\begin{split} h_{\mathbf{o}}(\mathbf{o})&=f_{\mathbf{o}}^{(1)}(\mathbf{o}),\quad h_{\mathbf{q}}(\mathbf{o},\mathbf{a})=f_{\mathbf{q}}^{(1)}(\mathbf{o},\mathbf{a})\\ w_{i}^{\mathbf{o}}(\mathbf{o})&=|f_{\mathbf{o}}^{(2)}(h_{\mathbf{o}}(\mathbf{o}))|\\ w_{i}^{q}(\mathbf{o},\mathbf{a})&=|f_{\mathbf{q}}^{(2)}(\text{concat}(h_{\mathbf{o}}(\mathbf{o}),h_{\mathbf{q}}(\mathbf{o},\mathbf{a})))|\end{split} \tag{3}\] where \(f_{\mathbf{o}}^{(1)},f_{\mathbf{o}}^{(2)},f_{\mathbf{q}}^{(1)}\), and \(f_{\mathbf{q}}^{(2)}\) are hidden neural network layers. We take absolute values of the network outputs to ensure the non-negativity condition on \(w^{\mathbf{o}}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\). CCA enforces a coupled relationship between \(w^{\mathbf{o}}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\) by sharing the same observation encoding structure, so that training on \(w^{q}(\mathbf{o},\mathbf{a})\) also updates the parameters of \(w^{\mathbf{o}}(\mathbf{o})\). This coupling allows more stable credit assignment between the state-value and Q-value functions on the same observation \(\mathbf{o}\). It can also improve data efficiency during training, which is particularly important in the offline setting, since the size of a real-world dataset can be limited. ### Integrating Offline Value Function Learning #### 4.2.1. Local value function learning In the proposed coupled value factorization, the condition \(V_{i}(o_{i})=\max\limits_{a_{i}}Q_{i}(o_{i},a_{i})\) needs to be enforced. Directly implementing this condition can be problematic in the offline setting, as it could lead to queries on OOD actions, causing distributional shift and overestimated value functions. To avoid this issue, one needs to instead consider the following condition: \[V_{i}(o_{i})=\max\limits_{a_{i}\in\mathcal{H}_{i}\ \mathrm{s.t.}\ \pi_{\beta}(a_{i}|o_{i})>0}Q_{i}(o_{i},a_{i}), \tag{4}\] where \(\pi_{\beta}\) is the behavior policy of the offline dataset. Drawing inspiration from the offline RL algorithm IQL (Gang et al., 2017), we can implicitly perform the above max-Q operation by leveraging the decomposed state-value functions \(V_{i}\), while also avoiding explicitly learning the behavior policy \(\pi_{\beta}\). This can be achieved by learning the local state-value function \(V_{i}(o_{i})\) as an upper expectile of the target local Q-values \(\bar{Q}_{i}(o_{i},a_{i})\) based on \((o_{i},a_{i})\) samples from the dataset \(\mathcal{D}\). For each agent, its local state-value function \(V_{i}(o_{i})\) is updated by minimizing the following objective: \[L_{V_{i}}=\mathbb{E}_{(o_{i},a_{i})\sim\mathcal{D}}\left[L_{2}^{\tau}\left(\bar{Q}_{i}(o_{i},a_{i})-V_{i}(o_{i})\right)\right], \tag{5}\] where \(L_{2}^{\tau}(u)=|\tau-1(u<0)|u^{2}\) denotes the expectile regression loss, which solves an asymmetric least-squares problem given the expectile \(\tau\in(0,1)\). When \(\tau=0.5\), it reduces to the common least-squares error. When \(\tau\to 1\), the objective in Eq. (5) makes \(V_{i}(o_{i})\) approximate the maximum of the target local Q-function \(\bar{Q}_{i}(o_{i},a_{i})\) over the actions \(a_{i}\) contained in the dataset. #### 4.2.2.
Global value function learning With the estimated local state-value function \(V_{i}(o_{i})\), we can then use it to update the global value functions \(V_{tot}\) and \(Q_{tot}\), which are essentially parameterized by the shared state-value function \(V_{share}(\mathbf{o})\), local Q-value function \(Q_{i}(o_{i},a_{i})\), as well as the credit assignment weight functions \(w_{i}^{\mathbf{p}}(\mathbf{o})\) and \(w_{i}^{q}(\mathbf{o},\mathbf{a})\) as in Eq. (2). These terms can be thus jointly learned by minimizing the following objective: \[L_{Q}=\mathbb{E}_{(o,\mathbf{a},\mathbf{o}^{\prime})\sim\mathcal{D}}\left[(r(\mathbf{o}, \mathbf{a})+\gamma V_{tot}\left(\mathbf{o}^{\prime}\right)-Q_{tot}(\mathbf{o},\mathbf{a}))^{2 }\right]. \tag{6}\] It should be noted that the learning of both local and global value functions in OMAC is completely performed in an in-sample manner without the involvement of the agent policies \(\pi_{i}\). This separated learning process greatly improves the learning stability of both the local and global value functions, as it avoids querying OOD actions from the policies during Bellman evaluation, which is the main contributor to the distributional shift in offline RL. ### Local Policy Learning Although our method learns the approximated optimal local and global Q-functions, it does not explicitly represent the local policy of each agent for decentralized execution. Therefore a separate policy learning step is needed. With the learned local state-value and Q-value functions \(Q_{i}\) and \(V_{i}\), we can extract the local policies by maximizing the local advantage values with KL-divergence constraints to regularize the policy to stay close to the behavior policy: \[\begin{split}&\max\limits_{n_{i}}\mathbb{E}_{a_{i}\sim\pi_{i}(a_{i} |o_{i})}\left[Q_{i}(o_{i},a_{i})-V_{i}(o_{i})\right]\\ &\text{s.t. }D_{\text{KL}}\left(\pi_{i}(\cdot\mid o_{i})\|\pi_{\beta,i}(\cdot\mid o_{i})\right)\leq\epsilon\end{split} \tag{7}\] The above optimization problem can be shown equivalent to minimizing the following advantage-weighted regression objective (Han et al., 2017; Wang et al., 2018) by enforcing the KKT condition, which can be solved by sampling directly from the dataset without the need to explicitly learn the local behavior policy \(\pi_{\beta,i}\): \[L_{\pi_{i}}=\mathbb{E}_{(o_{i},a_{i})\sim\mathcal{D}}\left[\text{exp}\left( \beta\left(Q_{i}(o_{i},a_{i})-V_{i}(o_{i})\right)\right)\log\pi_{i}(a_{i}|o_{i} )\right], \tag{8}\] where \(\beta\) is a temperature parameter. For smaller \(\beta\) values, the algorithm is more conservative and produces policies closer to behavior cloning. While for larger values, it attempts to recover the maximum of the local Q-function. The detailed algorithm of OMAC is summarized below. ``` 0: Offline dataset \(\mathcal{D}\). hyperparameters \(\tau\) and \(\beta\). 1: Initialize local state-value network \(V_{i}\), local Q-value network \(Q_{i}\) and its target network \(\bar{Q}_{i}\), and policy network \(\pi_{i}\) for agent \(i\)=1, 2,... \(n\). 2: Initialize the shared state-value network \(V_{share}\) as well as weight function network \(w^{\mathbf{o}}\) and \(w^{q}\). 3:for\(t=1,\cdots,\) max-value-iterationdo 4: Sample batch transitions \((\mathbf{o},\mathbf{a},r,\mathbf{o}^{\prime})\) from \(\mathcal{D}\) 5: Update local state-value function \(V_{i}(o_{i})\) for each agent \(i\) via Eq. (5). 6: Compute \(V_{tot}(\mathbf{o}^{\prime})\), \(Q_{tot}(\mathbf{o},\mathbf{a})\) via Eq. (2). 
7: Update local Q-value network \(Q_{i}(o_{i},a_{i})\), weight function networks \(w^{\mathbf{o}}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\) with objective Eq. (6). 8: Soft update target network \(\bar{Q}_{i}(o_{i},a_{i})\) by \(Q_{i}(o_{i},a_{i})\) for each agent \(i\). 9:endfor 10:for\(t=1,\cdots,\) max-policy-iterationdo 11: Update local policy network \(\pi_{i}\) for each agent \(i\) via Eq. (8). 12:endfor ``` **Algorithm 1** Pseudocode of OMAC ## 5. Analysis ### Optimality Analysis In this section, we will show that OMAC can recover the optimal value function under the dataset support constraints. We can show in the following theorem that the learned local and global Q-functions approximate the optimal local and global Q-functions under the data support constraints as the expectile \(\tau\to 1\): **Theorem 1**.: _Given the value factorization strategy in Eq. (2) and expectile \(\tau\), we define \(V_{t}^{\tau}(o_{i})\) as the \(\tau^{th}\) expectile of \(Q_{t}^{\tau}(o_{i},a_{i})\) (e.g., \(\tau=0.5\) corresponds to the standard expectation) and define \(V_{tot}^{\tau}(\mathbf{o})=\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})V_{i}^{\tau}\left(o_{i}\right)+V_{share}(\mathbf{o})\), then we have_ \[\lim_{\tau\to 1}V_{t}^{\tau}(o_{i})=\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}_{i}\\ \mathrm{s.t.}\ \pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i}) \tag{9}\] \[\lim_{\tau\to 1}V_{tot}^{\tau}(\mathbf{o})=\max_{\begin{subarray}{c}\mathbf{a}\in\mathcal{H}^{n}\\ \mathrm{s.t.}\ \pi_{\beta}(\mathbf{a}|\mathbf{o})>0\end{subarray}}Q_{tot}^{\ast}(\mathbf{o},\mathbf{a}) \tag{10}\] Let \(m_{\tau}\) be the \(\tau\in(0,1)\) expectile solution to the asymmetric least-squares problem \(\underset{m_{\tau}}{\arg\min}\,\mathbb{E}_{x\sim X}\left[L_{2}^{\tau}\left(x-m_{\tau}\right)\right]\). We re-use two lemmas from Kostrikov et al. [9] related to the expectile properties of a random variable to prove Theorem 1: **Lemma 1**.: _Let \(X\) be a random variable with bounded support and let the supremum of the support be \(x^{\ast}\), then_ \[\lim_{\tau\to 1}m^{\tau}=x^{\ast}\] The proof is provided in Kostrikov et al. [9]. It follows since the expectiles of a random variable \(X\) are bounded above by the supremum \(x^{\ast}\) and satisfy \(m^{\tau_{1}}<m^{\tau_{2}}\) for all \(\tau_{1}<\tau_{2}\). Hence we can obtain the above limit according to the property of bounded monotonically non-decreasing functions.
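Lemma 1 is easy to verify numerically: the \(\tau\)-expectile of a bounded sample approaches its maximum as \(\tau\to 1\). The following is a small, self-contained sketch with a synthetic stand-in for the target Q-values (illustrative only):

```python
import numpy as np

def expectile(x, tau, iters=500, lr=0.1):
    """Solve argmin_m E[L2_tau(x - m)], where L2_tau(u) = |tau - 1(u<0)| u^2."""
    m = x.mean()
    for _ in range(iters):
        u = x - m
        weight = np.where(u < 0, 1.0 - tau, tau)  # asymmetric least-squares weight
        m -= lr * (-2.0 * np.mean(weight * u))    # gradient step on the convex loss
    return m

rng = np.random.default_rng(0)
q_samples = rng.normal(size=1000)                 # stand-in for target Q-values
for tau in (0.5, 0.7, 0.9, 0.99):
    print(tau, expectile(q_samples, tau))         # tends toward q_samples.max() as tau -> 1
```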
Let \(V_{t}^{\tau}(o_{i})\) be the \(\tau\)-expectile of \(V_{t}(o_{i})\) in OMAC, then we have the following lemma by extending the Lemma 2 of Kostrikov et al. [9] to multi-agent setting: **Lemma 2**.: _For all \(o_{i}\), \(\tau_{1}\) and \(\tau_{2}\) such that \(\tau_{1}\leq\tau_{2}\) we have \(V_{i}^{\tau_{1}}(o_{i})\leq V_{i}^{\tau_{2}}(o_{i})\)._ Proof.: In OMAC, the learning objective of the local state-value function \(V_{t}(o_{i})\), \(L_{V_{t}}=\mathbb{E}_{(o_{i},a_{i})\sim D}\left[L_{2}^{\tau}(\hat{Q}_{t}(o_{i },a_{i})-V_{t}(o_{i}))\right]\) has the same form as the state-value function IQL under the single-agent case. Hence the conclusion of Lemma 1 (\(V_{\tau_{1}}(s)\leq V_{\tau_{2}}(s)\) for \(\forall\tau_{1}<\tau_{2}\)) in the IQL paper [9] also carries over with the state-value function \(V(s)\) being replaced by local state-value functions \(V_{t}(o_{i})\) under the multi-agent case. Next, we use the above lemmas to formally prove Theorem 1. Proof.: We first prove the local part Eq. (9) of Theorem 1. As the local state-value function \(V_{t}(o_{i})\) is learned through expectile regression, therefore, for the \(\tau\)-expectile of local state-value \(V_{t}^{\tau}(o_{i})\) and an optimal Q-value function constrained to the dataset \(Q_{t}^{\ast}(o_{i},a_{i})\), we have: \[V_{t}^{\tau}(o_{i}) =\mathbb{E}_{a_{i}\sim\pi_{\beta,i}(\cdot|o_{i})}^{\mathbb{F}} \left[Q_{t}^{\tau}(o_{i},a_{i})\right] \tag{11}\] \[\leq\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}\\ \mathrm{s.t.}\pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\tau}(o_{i},a_{i}) \leq\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}\\ \mathrm{s.t.}\pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i})\] The inequality follows from the fact that the convex combination is smaller than the maximum. Thus, \(V_{t}^{\tau}(o_{i})\) is a random variable with bounded support and its supremum is \(\max_{a_{i}\in\mathcal{H}_{i}\leq\pi_{\beta,i}(a_{i})>0}Q_{t}^{\ast}(o_{i},a_ {i})\). Applying Lemma 1, we can obtain the local condition: \[\lim_{\tau\to 1}V_{t}^{\tau}(o_{i}) =\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}_{i}\\ \mathrm{s.t.}\pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i}).\] Moreover, based on Lemma 1 and the second inequality in Eq. (11), it's also easy to see: \[\lim_{\tau\to 1}Q_{t}^{\tau}(o_{i},a_{i})=\max_{\begin{subarray}{c}a_{i}\in \mathcal{H}_{i}\\ \mathrm{s.t.}\pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i}). \tag{12}\] For the global state-value and Q-value functions, according to the couple value factorization strategy in Eq. (2), we have: \[Q_{tot}^{\tau}(\mathbf{o},\mathbf{a})=V_{tot}^{\tau}(\mathbf{o})+\sum_{i=1}^{ n}w_{i}^{\mathbf{q}}(\mathbf{o},\mathbf{a})(Q_{i}^{\tau}\left(o_{i},a_{i}\right)-V_{i}^{\tau} \left(o_{i}\right))\] \[=\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})V_{t}^{\tau}\left(o_{i} \right)+V_{share}(\mathbf{o})+\sum_{i=1}^{n}w_{i}^{\mathbf{q}}(\mathbf{o},\mathbf{a})(Q_{i}^{ \tau}\left(o_{i},a_{i}\right)-V_{i}^{\tau}\left(o_{i}\right)).\] Taking the limit \(\tau\to 1\) on both sides, and use the local condition and Eq. 
(12), we have: \[\lim_{\tau\to 1}Q_{tot}^{\tau}(\mathbf{o},\mathbf{a})=\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}_{i}\\ \mathrm{s.t.}\ \pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i})+V_{share}(\mathbf{o})\] \[+\sum_{i=1}^{n}w_{i}^{q}(\mathbf{o},\mathbf{a})\left(\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}_{i}\\ \mathrm{s.t.}\ \pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i})-\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}_{i}\\ \mathrm{s.t.}\ \pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i})\right)\] \[=\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}_{i}\\ \mathrm{s.t.}\ \pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i})+V_{share}(\mathbf{o})\] \[=\max_{\begin{subarray}{c}\mathbf{a}\in\mathcal{H}^{n}\\ \mathrm{s.t.}\ \pi_{\beta}(\mathbf{a}|\mathbf{o})>0\end{subarray}}Q_{tot}^{\ast}(\mathbf{o},\mathbf{a})\] On the other hand, we have: \[V_{tot}^{\tau}(\mathbf{o})=\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})V_{i}^{\tau}\left(o_{i}\right)+V_{share}(\mathbf{o})=\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})\mathbb{E}_{a_{i}\sim\pi_{\beta,i}(\cdot|o_{i})}^{\mathbb{F}}\left[Q_{t}^{\tau}(o_{i},a_{i})\right]+V_{share}(\mathbf{o})\leq\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})\max_{\begin{subarray}{c}a_{i}\in\mathcal{H}_{i}\\ \mathrm{s.t.}\ \pi_{\beta,i}(a_{i})>0\end{subarray}}Q_{t}^{\ast}(o_{i},a_{i})+V_{share}(\mathbf{o}).\] Taking the limit \(\tau\to 1\) and applying the local condition Eq. (9) to each \(V_{i}^{\tau}(o_{i})\), this upper bound is attained, so \(\lim_{\tau\to 1}V_{tot}^{\tau}(\mathbf{o})=\sum_{i=1}^{n}w_{i}^{\mathbf{o}}(\mathbf{o})\max_{a_{i}\in\mathcal{H}_{i},\ \pi_{\beta,i}(a_{i})>0}Q_{t}^{\ast}(o_{i},a_{i})+V_{share}(\mathbf{o})\), which coincides with the limit of \(Q_{tot}^{\tau}(\mathbf{o},\mathbf{a})\) derived above. This establishes the global condition Eq. (10) and completes the proof of Theorem 1. ## 6. Experiments In this section, we present the experiment setups and a comparative evaluation of OMAC against SOTA offline MARL baseline algorithms. We also carry out data efficiency analyses as well as ablation studies to better illustrate the effectiveness and properties of our algorithm. ### Experiment Settings #### 6.1.1. Offline datasets We choose the StarCraft Multi-Agent Challenge (SMAC) benchmark (Zhou et al., 2017) as our testing environment. SMAC is a popular multi-agent cooperative control environment for evaluating advanced MARL methods due to its high control complexity. It focuses on micromanagement challenges where a group of algorithm-controlled agents learns to defeat another group of enemy units controlled by built-in heuristic rules, and the goal is to maximize the average return to achieve victory. The offline SMAC dataset used in this study is provided by (Krishnaman et al., 2017), which is the largest open offline dataset on SMAC. Different from single-agent offline datasets, it respects the properties of a Dec-POMDP, providing local observations and available actions for each agent. The dataset is collected from trained MAPPO agents (Zhou et al., 2017), and includes three quality levels: good, medium, and poor. SMAC consists of several StarCraft II multi-agent micromanagement maps. We consider 4 representative battle maps, including 1 hard map (5m_vs_6m) and 3 super hard maps (6h_vs_8z, 3s5z_vs_3s6z, corridor). #### 6.1.2. Baselines We compare OMAC against four recent offline MARL algorithms: ICQ (Zhou et al., 2017), OMAR (Zhou et al., 2017), and multi-agent versions of BCQ (Krishnaman et al., 2017) and CQL (Zhou et al., 2017), namely BCQ-MA and CQL-MA. BCQ-MA and CQL-MA use a linear weighted value decomposition structure for the multi-agent setting. Details of the baseline implementations and the hyperparameters of OMAC are discussed in the Appendix. ### Comparative Results We report the mean and standard deviation of average returns for the offline SMAC tasks during training in Fig. 2.
Each algorithm is evaluated using 32 independent episodes and run with 5 random seeds. The results show that OMAC consistently outperforms all baselines and achieves state-of-the-art performance in most maps. For the super hard SMAC map such as 6h_vs_8z or corridor, the cooperative relationship of agents is very complex and it is difficult to learn an accurate global Q-value function. Due to the couple value factorization, the global \(Q_{tot}\) of OMAC has stronger expressive capability, which makes OMAC have better performance than other baseline algorithms. Moreover, both the local and global value functions in OMAC are completely performed in an in-sample manner without the involvement of the agent policies \(\pi_{t}\), which also leads to better offline performance. ### Evaluation on Data Efficiency Data efficiency is particularly important for offline RL applications, as real-world datasets can often be limited. For offline MARL problems, this can be even more challenging due to high-dimensional joint state and action spaces, which potentially requires a larger Figure 2. Learning curves over 5 random seeds on the offline SMAC tasks. amount of data to guarantee reasonable model performance. To demonstrate the sample utilization efficiency of OMAC over baseline algorithms, we further conduct experiments on SMAC map 6h_ vs 8z with the size of the original datasets reduced to 50% and 10%. As shown in Table 1, OMAC consistently outperforms the baseline algorithms in all tasks. Moreover, it is observed that OMAC experiences a lower level of performance drop when the dataset size is reduced, whereas recent offline MARL counterpart algorithms like ICQ and OMAR suffer from noticeable performance drop. The reasons for the better data efficiency of OMAC could be due to the use of both coupled credit assignment and in-sample learning. As in OMAC, training on the credit assignment weights \(w^{q}\) also updates the parameters of \(w^{q}\), which enables effective re-use of data. Meanwhile, the local state-value function \(V_{i}\) is learned by expectile regression in a supervised manner rather than performing dynamic programming, which in principle can be more stable and sample efficient. ### Analyses on the Design Components of OMAC In this section, we conduct ablation studies and additional analyses to examine the effectiveness of different design components of OMAC. #### 6.4.1. Ablation on coupled value factorization To examine the impact of our coupled value factorization (CVF) strategy, we conduct the ablation study on map 6h_vs_8z with poor dataset. We test OMAC and the variant without using the coupled value factorization (OMAC-w/o-CVF), which uses the linear weighted decomposition structure used by ICQ and OMAR. As shown in Fig. 3(a), OMAC performs better than OMAC-w/o-CVF, which clearly suggests the advantage of coupled value factorization strategy. #### 6.4.2. Analyses on coupled credit assignment An important design in our method is the coupled credit assignment (CCA) scheme in CVF that learns \(w^{0}\) and \(w^{q}\) dependently. 
We compare OMAC \begin{table} \begin{tabular}{l l l|c c c c} \hline Map & Dataset & Ratio & OMAC(ours) & ICQ & OMAR & BCQ-MA & CQL-MA \\ \hline 6h\_vs\_8z & good & 100\% & **12.57\(\pm\)0.47** & 11.81\(\pm\)0.12 & 9.85\(\pm\)0.28 & 12.19\(\pm\)0.23 & 10.44\(\pm\)0.20 \\ 6h\_vs\_8z & good & 50\% & **12.28\(\pm\)0.43** & 11.59\(\pm\)0.43 & 9.00\(\pm\)0.27 & 11.93\(\pm\)0.52 & 9.06\(\pm\)0.38 \\ 6h\_vs\_8z & good & 10\% & **10.61\(\pm\)0.18** & 8.86\(\pm\)0.21 & 7.88\(\pm\)0.19 & 9.92\(\pm\)0.10 & 8.41\(\pm\)0.16 \\ \hline 6h\_vs\_8z & medium & 100\% & **12.17\(\pm\)0.52** & 11.56\(\pm\)0.34 & 10.81\(\pm\)0.21 & 11.77\(\pm\)0.36 & 11.59\(\pm\)0.35 \\ 6h\_vs\_8z & medium & 50\% & **11.98\(\pm\)0.32** & 10.80\(\pm\)0.25 & 10.047\(\pm\)0.11 & 11.51\(\pm\)0.33 & 10.68\(\pm\)0.23 \\ 6h\_vs\_8z & medium & 10\% & **10.86\(\pm\)0.08** & 9.47\(\pm\)0.27 & 8.27\(\pm\)0.07 & 9.92\(\pm\)0.17 & 8.61\(\pm\)0.29 \\ \hline 6h\_vs\_8z & poor & 100\% & **11.08\(\pm\)0.36** & 10.34\(\pm\)0.23 & 10.64\(\pm\)0.20 & 10.67\(\pm\)0.19 & 10.76\(\pm\)0.11 \\ 6h\_vs\_8z & poor & 50\% & **10.84\(\pm\)0.14** & 9.97\(\pm\)0.14 & 9.87\(\pm\)0.32 & 10.39\(\pm\)0.46 & 9.99\(\pm\)0.44 \\ 6h\_vs\_8z & poor & 10\% & **8.59\(\pm\)0.21** & 7.51\(\pm\)0.22 & 7.29\(\pm\)0.09 & 8.34\(\pm\)0.19 & 8.18\(\pm\)0.39 \\ \hline \end{tabular} \end{table} Table 1. Evaluation on data efficiency of different methods on offline SMAC datasets with reduced size Figure 3. Analyses and ablations on the design components of OMAC and the variant without coupled credit assignment (OMAC-CVF-w/o-CCA), which is trained by implementing \(w^{q}\) and \(w^{q}\) as two independent networks without the coupled structure. Fig. 3(a) also shows that OMAC has better performance and stability than OMAC-CVF-w/o-CCA, suggesting the important contribution of CCA on model performance due to more stable and better regularized credit assignment between the state-value function and Q-value function on the same observations. To verify that the coupled network structure of \(w^{q}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\) in OMAC indeed produces correlated credit assignments on state-value and Q-value functions on the same observations, we further conduct an experiment to inspect their produced values. We use the learned models of \(w^{q}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\) to evaluate \(\mathbb{E}_{a_{i}}[w^{q}_{i}(\mathbf{o},a_{i},\mathbf{a}_{-i})]\) and its relationship with \(w^{q}_{i}(\mathbf{o})\) for an arbitrary agent \(i\). Based on the results plotted in Fig. 4, we observe that a positively correlated relationship exists between \(w^{q}(\mathbf{o})\) and \(w^{q}(\mathbf{o},\mathbf{a})\). #### 6.4.3. Impact of implicit max-Q operation on local value functions In our method, OMAC implicitly performs the max-Q operation at the local level by learning the local state-value function \(V_{i}(o_{i})\) as the upper expectile of \(Q_{i}(o_{i},a_{i})\) based on \((o_{i},a_{i})\) samples entirely from dataset \(\mathcal{D}\). Choosing a appropriate \(\tau\) will make \(V_{i}(o_{i})\) to approximate the maximum of the local Q-function \(Q_{i}(o_{i},a_{i})\) over actions \(a_{i}\) constrained to the dataset actions. Fig. 3(b) shows the performance of OMAC with different levels of \(\tau\) on the offline 6h_vs_8z_poor dataset. The performances of OMAC (\(\tau=0.7\)) and OMAC (\(\tau=0.8\)) are close and are much better than OMAC (\(\tau=0.5\)), showing some degree of hyperparameter robustness when \(\tau\) is reasonably large. 
With \(\tau=0.5\), the local state-value function \(V_{i}\) is essentially learned to be the expected value of \(Q_{i}\), which is more conservative and leads to suboptimal performance. To illustrate the benefit of the implicit max-Q evaluation in OMAC, we also implement another algorithm CVF-maxQ for comparison. In CVF-maxQ, the coupled value factorization structure is partially preserved while all the local state-value \(V_{i}(o_{i})\) are replaced by \(\max_{a_{i}}Q_{i}(o_{i},a_{i})\). As Fig. 3 (b) shows, OMAC performs much better than CVF-maxQ, which clearly suggests the advantage of performing local max-Q operation in an implicit and in-sample manner under offline learning. ## 7. Conclusion In this paper, we propose a new offline MARL algorithm named OMAC. OMAC adopts a coupled value factorization structure, which organically marries offline RL with a specially designed coupled multi-agent value decomposition strategy and has stronger expressive capability. Moreover, OMAC performs in-sample learning on the decomposed local state-value functions, which implicitly conducts the max-Q operation at the local level while avoiding distributional shift caused by evaluating on the out-of-distribution actions. We benchmark our method using offline datasets of SMAC tasks and the results show that OMAC achieves superior performance and better data efficiency over the state-of-the-art offline MARL methods.
2308.14885
Inferring phase transitions and critical exponents from limited observations with Thermodynamic Maps
Phase transitions are ubiquitous across life, yet hard to quantify and describe accurately. In this work, we develop an approach for characterizing generic attributes of phase transitions from very limited observations made deep within different phases' domains of stability. Our approach is called Thermodynamic Maps, which combines statistical mechanics and molecular simulations with score-based generative models. Thermodynamic Maps enable learning the temperature dependence of arbitrary thermodynamic observables across a wide range of temperatures. We show its usefulness by calculating phase transition attributes such as melting temperature, temperature-dependent heat capacities, and critical exponents. For instance, we demonstrate the ability of thermodynamic maps to infer the ferromagnetic phase transition of the Ising model, including temperature-dependent heat capacity and critical exponents, despite never having seen samples from the transition region. In addition, we efficiently characterize the temperature-dependent conformational ensemble and compute melting curves of the two RNA systems GCAA tetraloop and HIV-TAR, which are notoriously hard to sample due to glassy-like landscapes.
Lukas Herron, Kinjal Mondal, John S. Schneekloth, Pratyush Tiwary
2023-08-28T20:13:39Z
http://arxiv.org/abs/2308.14885v1
Inferring phase transitions and critical exponents from limited observations with Thermodynamic Maps ###### Abstract Phase transitions are ubiquitous across life, yet hard to quantify and describe accurately. In this work, we develop an approach for characterizing generic attributes of phase transitions from very limited observations made deep within different phases' domains of stability. Our approach is called Thermodynamic Maps, which combines statistical mechanics and molecular simulations with score-based generative models. Thermodynamic Maps enable learning the temperature dependence of arbitrary thermodynamic observables across a wide range of temperatures. We show its usefulness by calculating phase transition attributes such as melting temperature, temperature-dependent heat capacities, and critical exponents. For instance, we demonstrate the ability of thermodynamic maps to infer the ferromagnetic phase transition of the Ising model, including temperature-dependent heat capacity and critical exponents, despite never having seen samples from the transition region. In addition, we efficiently characterize the temperature-dependent conformational ensemble and compute melting curves of the two RNA systems GCAA tetraloop and HIV-TAR, which are notoriously hard to sample due to glassy-like landscapes. ## I Introduction Phase transitions are widely observed in biological, material, and social sciences. Across these disciplines, phase transitions can be defined as the emergence of higher-level, large-scale organization from the coordinated, short-range interactions between many individual constituents. Classical examples include the ferromagnetic to paramagnetic transition, boiling of water, and conformational transitions in biomolecules like proteins and nucleic acids. Statistical mechanics, and especially the framework of energy landscapes, provides a simple and unifying way of studying phase transitions in these diverse systems [71]. In this work we are specifically interested in phase transitions in systems that stay in equilibrium throughout. For these, the Boltzmann distribution relates the probability of finding a system in a particular microscopic configuration \(\mathbf{x}\) to its energy \(U(\mathbf{x})\) and the system's inverse temperature \(\beta\) as \[\mu(\mathbf{x})=\frac{e^{-\beta U(\mathbf{x})}}{Z(\beta)}\quad\text{with} \tag{1}\] \[Z(\beta)=\int e^{-\beta U(\mathbf{x})}d\mathbf{x}. \tag{2}\] \(Z(\beta)\) is a normalization constant known as the partition function, whose behavior is often associated with phase transitions. Exploration of the energy landscape is guided by competition between energy and entropy, which is encapsulated by a temperature-dependent free energy \(F(\beta)\) which may be computed from the partition function as \[F(\beta)=-\beta^{-1}\ln Z(\beta). \tag{3}\] At a glance, Eqs. 1-3 suggest that the relationship between temperature, energy, microscopic probability, and macroscopic free energy is simple and tractable. It appears as if with these equations, one has the machinery to directly calculate the free energy across temperatures. By doing so for different macroscopic phases one could then obtain various thermodynamic attributes of phase transitions, including transition temperatures and phase diagrams. By calculating appropriate fluctuations, one could directly obtain response functions such as heat capacities and others. In reality, however, the situation is quite complex. 
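For a system with only a handful of discrete microstates, Eqs. (1)-(3) can of course be evaluated directly; the toy snippet below (purely illustrative, with made-up energy levels) shows the bookkeeping:

```python
import numpy as np

def boltzmann_stats(energies, beta):
    """Evaluate Eqs. (1)-(3) for a discrete set of microstate energies."""
    weights = np.exp(-beta * energies)   # un-normalized Boltzmann factors
    Z = weights.sum()                    # partition function, Eq. (2)
    p = weights / Z                      # microstate probabilities, Eq. (1)
    F = -np.log(Z) / beta                # free energy, Eq. (3)
    return p, Z, F

p, Z, F = boltzmann_stats(np.array([0.0, 1.0, 2.0]), beta=1.0)
```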
Studying phase transitions and their characteristics computationally is made difficult by Eq. 2, which requires integrating over a (usually) intractably large number of dimensions. Numerous elegant theoretical and computational schemes have been proposed over the decades to solve this problem. For example, free energy perturbation, Markov chain Monte Carlo (MC) methods, the replica trick and others [72; 73; 74; 75; 76; 77]. In this study, we propose a generative Artificial Intelligence (AI) based approach that characterizes phase transitions by learning the temperature dependence of the partition function, and therefore the free energy. Our method, which we call "Thermodynamic Maps" (TM) incorporates score-based generative modeling into the framework of free energy perturbation within statistical mechanics [78; 79; 80; 81]. The central idea underlying Thermodynamic Maps is that mapping the temperature dependence of ensembles of configurations of a complex system onto the temperature dependence of a simple, idealized system allows for efficient generation of physically realistic samples of the complex system with the correct Boltzmann weights. Within the framework of free energy perturbation, the mapping allows for temperature-dependent free energy estimates. Additionally, Thermodynamic Maps are highly-efficient and can learn from limited data which is not sampled from the global equilibrium distribution. We demonstrate the applicability of Thermodynamic Maps for three complex systems where we compare against benchmarks from theory, extensive computational studies, and experiments. The first system we consider is the Ising model on a two-dimensional square lattice. With observations made at two temperatures, one deep in the paramagnetic regime and the other deep in the ferromagnetic regime, we are able to correctly infer critical behavior. We then study two different Ribonucleic Acid (RNA) systems: the GCAA tetraloop and HIV-TAR RNA [82; 83]. For both of these, we infer the temperature-dependence of the equilibrium distribution across temperatures using Thermodynamic Maps trained on data generated by short molecular dynamics (MD) simulations. For both RNA systems, we predict the temperature dependence of conformational ensemble and compute melting temperatures in agreement with computational studies and experiment. Given this demonstration of applicability, we believe that Thermodynamic Maps will be found useful for the characterization of complex phase transitions in diverse systems, especially those with multiple phases. For example, local minima within the energy landscape of biomolecular systems are often biologically and functionally relevant [84], and while they have been challenging to characterize, we are able to study them with Thermodynamic Maps. The computational efficiency of the learning algorithm and scalability due to not requiring samples from the global equilibrium distribution make Thermodynamic Maps especially suitable for studying large-scale systems exhibiting complex behavior across long timescales. ## II Method ### Targeted Free Energy Perturbation with Machine Learning To motivate Thermodynamic Maps, we start with relevant background work. The guiding principle behind all such methods is that one is usually interested in differences of free energies, rather than their absolute values. Such differences can be estimated from ratios of partition functions. 
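A classical example of such a ratio-based estimate is Zwanzig's exponential-averaging identity, \(\Delta F=-\beta^{-1}\ln\langle e^{-\beta\Delta U}\rangle_{0}\). A minimal sketch follows; the harmonic test potentials are stand-ins chosen only so that the exact answer, \(\Delta F=0\), is known.

```python
import numpy as np

def fep_delta_F(samples_0, U0, U1, beta):
    """Zwanzig estimator: Delta F = -(1/beta) ln < exp(-beta (U1 - U0)) >_0,
    with the average taken over samples drawn from state 0."""
    dU = U1(samples_0) - U0(samples_0)
    return -np.log(np.mean(np.exp(-beta * dU))) / beta

beta = 1.0
rng = np.random.default_rng(1)
x0 = rng.normal(scale=1.0 / np.sqrt(beta), size=100_000)   # samples of U0(x) = x^2 / 2
dF = fep_delta_F(x0, lambda x: 0.5 * x**2, lambda x: 0.5 * (x - 1.0)**2, beta)
# For two unit harmonic wells the exact Delta F is 0, so dF should be close to 0.
```

The estimator converges quickly only when the two states overlap well in configuration space, which is precisely the limitation that the targeted and learned maps discussed next are meant to relieve.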
Since pioneering work by Zwanzig and Feynman in the 1950s, many frameworks have been developed to do this efficiently [72; 75; 76; 85; 86]. One such relatively recent framework is that of Targeted Free Energy Perturbation (TFEP) [87]. TFEP proposes a potential solution for more efficient estimation of relative free energies, suggesting that estimating the free energy under an invertible mapping \[\mathcal{M}:\mathbf{x}\rightarrow\mathbf{x}^{\prime} \tag{4}\] of the configuration space onto itself can enhance the convergence of relative free energy estimates. The motivation behind this approach is that a well-chosen \(\mathcal{M}\) can dramatically increase the overlap between states in the configuration space, thus accelerating the convergence of estimates of the relative free energies between them. However, finding such an invertible mapping is difficult in practice due to the complex, high-dimensional distribution of configurations and the requirement that \(\mathcal{M}\) preserves dimensionality. Flow-based generative models have emerged as attractive candidates for this task. These models effectively learn to parameterize high-dimensional, invertible \(\mathcal{M}_{\theta}\) which transforms samples from a simple prior distribution \(q(\mathbf{x}^{\prime})\) into those of a complicated distribution \(p(\mathbf{x})\)[88; 89; 90]. Once the map is learned, samples from \(q(\mathbf{x}^{\prime})\) can be efficiently transformed into samples from \(p(\mathbf{x})\) at a lower computational cost than generating samples of \(p(\mathbf{x})\) through other means. If \(p(\mathbf{x})\) is the image of \(q(\mathbf{x}^{\prime})\) under \(\mathcal{M}_{\theta}\), then probability densities of \(q(\mathbf{x}^{\prime})\) can be transformed into densities of \(p(\mathbf{x})\) through \[p(\mathbf{x})=\frac{q(\mathbf{x}^{\prime})}{|\det J_{\mathcal{M}_{\theta}}( \mathbf{x}^{\prime})|}, \tag{5}\] where \(J_{\mathcal{M}_{\theta}}\) is the Jacobian of \(\mathcal{M}_{\theta}\). Clearly, using Eq. 5 in practice depends on parameterizing \(\mathcal{M}_{\theta}\) which is invertible, has an easy-to-compute Jacobian, and is still expressive enough to transform a simple \(q(\mathbf{x}^{\prime})\) into a complex \(p(\mathbf{x})\). Normalizing flows parameterize invertible functions \(\mathcal{M}_{\theta}\) with tractable Jacobians, and are the most widely employed flow-based models for free energy estimation [91; 92; 93]. Although normalizing flows are theoretically appealing for free energy estimation, much effort has been spent addressing barriers of application to complex systems. The main barrier to the wide application of normalizing flows is in training difficulty. The tractable Jacobian comes at the price of reduced expressivity, impeding their ability to map simple priors to complicated target distributions in high dimensions [94]. Recent developments improve robustness by developing more expressive black-box operations with simple Jacobians, or incorporating stochasticity into the flow [95; 96; 97; 98]. Another approach for improving the robustness of normalizing flows is to bring the prior distribution closer to the empirical distribution, thereby simplifying \(\mathcal{M}_{\theta}\). However, this often requires expert knowledge of the system being studied. For example, the Learned Free Energy Perturbation (LFEP) approach enhances free energy estimation of crystalline solids by mapping configurations onto a periodic lattice of particles whose positions are perturbed by Gaussian noise [99; 100]. 
Figure 1: **Illustration of a Thermodynamic Map between systems.****A** The Thermodynamic Map is parameterized by a diffusion model, denoted as \(\mathcal{M}_{\theta}\), which learns to invert a diffusion process that maps the temperature dependence of samples \(\mathbf{x}\) from a complex system, whose equilibrium distribution is \(p(\mathbf{x},\beta)\), onto samples \(\mathbf{x}^{\prime}\) of a simple system whose equilibrium distribution is \(q(\mathbf{x}^{\prime},\beta^{\prime})\). As pictured, sample data of an Ising model sampled at two temperatures are mapped onto a simple harmonic oscillator. **B** Once learned, the Thermodynamic Map allows samples of the complex system to be generated from the simple prior system at any temperature, even those showing non-trivial behavior. Free energy differences are first estimated within the tractable prior, and then transformed into free energy estimates of the target distribution using Eq. 5. We take inspiration from this example, where \(\mathcal{M}_{\theta}\) is a mapping from the equilibrium distribution of a physical system onto that of an idealized system. We extend the applicability of generative modeling to free energy estimation by using score-based models rather than normalizing flows. Score-based models have demonstrated a high degree of expressiveness and robustness in reliably being able to map complex, high-dimensional distributions to trivially simple priors, without requiring a computationally tractable Jacobian [80]. Score-based models are generally formulated as pairs of forward and backward stochastic differential equations (SDEs) of the form: \[\mathrm{d}\mathbf{x}=-f(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}\quad\text{and} \tag{6}\] \[\mathrm{d}\mathbf{x}=-\left[f(\mathbf{x},t)+g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\right]\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}. \tag{7}\] Eq. 6 defines a diffusion process that relaxes to a prior distribution \(q(\mathbf{x}^{\prime})\) for any distribution of initial conditions \(p(\mathbf{x})\). Score-based models exploit the remarkable property that any diffusion process of the form Eq. 6 can be reversed according to Eq. 7 [101]. Together, the stochastic processes of Eqs. 6 and 7 define a map between distributions \(p(\mathbf{x})\) and \(q(\mathbf{x}^{\prime})\) that is exactly invertible in the ensemble limit. However, carrying out the reverse diffusion is not as straightforward as the forward diffusion. The term \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\), referred to as the score \(\mathbf{s}(\mathbf{x},t)\), depends on the initial conditions of Eq. 6, which generally cannot be expressed in closed form. Therefore, the score is estimated by a neural network that is trained to match the predicted score to the true score. Using score-based models, we demonstrate that the temperature dependence of the distribution of structures for a complex system \(p(\mathbf{x},\beta)\) can be mapped onto the distribution of structures of a trivial prior system \(q(\mathbf{x}^{\prime},\beta^{\prime})\). Treating the prior distribution of the score-based model as arising from the dynamics of a physical system allows us to infer the temperature dependence of free energies, extending TFEP to the multi-ensemble case. ### Thermodynamic Maps To extend TFEP to multi-ensemble thermodynamics, we augment our coordinates \(\mathbf{x}\in\mathbb{R}^{d}\) with auxiliary inverse temperature-type variables \(\boldsymbol{\beta}\in\mathbb{R}^{d}\).
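Before specializing Eqs. (6)-(7) to this augmented space, a generic Euler-Maruyama discretization may help fix ideas. The sketch below is schematic only: the score is passed in as a callable, whereas in practice it is the quantity approximated by the trained network.

```python
import numpy as np

def forward_diffuse(x0, f, g, n_steps=1000, T=1.0, rng=None):
    """Integrate the forward SDE dx = -f(x,t) dt + g(t) dw (Eq. 6)."""
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        x = x - f(x, t) * dt + g(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

def reverse_diffuse(xT, f, g, score, n_steps=1000, T=1.0, rng=None):
    """Integrate the reverse SDE (Eq. 7), stepping time from T down to 0."""
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    x = np.array(xT, dtype=float)
    for k in range(n_steps):
        t = T - k * dt
        drift = f(x, t) + g(t) ** 2 * score(x, t)   # score-corrected drift
        x = x + drift * dt + g(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x
```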
Together, the coordinates and temperatures form a state vector \((\mathbf{x},\boldsymbol{\beta})^{\top}\in\mathbb{R}^{2d}\), and the thermodynamic map \(\mathcal{M}_{\theta}\) is parameterized in the joint \(\mathbf{x}\)-\(\boldsymbol{\beta}\)-space as a score-based model. For a discussion on the role of \(\boldsymbol{\beta}\) in learning thermodynamic relationships, refer to Appendix B.3. We express \(\mathcal{M}_{\theta}\) as a pair of forwards and backward SDEs, as is standard for score-based models: \[\begin{pmatrix}\mathrm{d}\mathbf{x}\\ \mathrm{d}\boldsymbol{\beta}^{-1}\end{pmatrix}=-\frac{1}{2}\sigma(t)\begin{pmatrix} \mathbf{x}\\ \boldsymbol{\beta}^{-1}\end{pmatrix}\mathrm{d}t+\sqrt{\sigma(t)}\begin{pmatrix} \sqrt{\boldsymbol{\beta}_{0}^{-1}}\\ \mathbf{1}\end{pmatrix}\mathrm{d}\mathbf{w}\quad\text{and} \tag{8}\] \[\begin{pmatrix}\mathrm{d}\mathbf{x}\\ \mathrm{d}\boldsymbol{\beta}^{-1}\end{pmatrix}=-\frac{1}{2}\sigma(t)\left[ \begin{pmatrix}\mathbf{x}\\ \boldsymbol{\beta}^{-1}\end{pmatrix}+\begin{pmatrix}\mathbf{s}_{\theta}(\mathbf{ x},t)\\ \mathbf{s}_{\theta}(\boldsymbol{\beta}^{-1},t)\end{pmatrix}\right]\mathrm{d}t+ \sqrt{\sigma(t)}\begin{pmatrix}\sqrt{\boldsymbol{\beta}_{0}^{-1}}\\ \mathbf{1}\end{pmatrix}\mathrm{d}\mathbf{w}, \tag{9}\] where \(\mathbf{w}\) is isotropic, unit Gaussian noise. Eq. 8 governs the forward diffusion of the coordinates and inverse temperatures, with monotonically increasing \(\sigma(t)\) chosen so that \(\sigma(0)=0\) and \(\sigma(T)=1\). Starting from initial conditions \(\mathbf{x}_{0}\) and \(\boldsymbol{\beta}_{0}\) at \(t=0\), the coordinates and temperatures converge to \(\mathcal{N}(\mathbf{0},\boldsymbol{\beta}_{0}^{-1})\) and \(\mathcal{N}(\mathbf{0},\mathbf{1})\) respectively at \(t=T\). The score-based model defined by Eqs 8 and 9 are motivated by treating the prior distribution of the coordinates as the equilibrium distribution arising from the dynamics of some physical system. If the prior is a normal distribution, then the corresponding physical system is a harmonic oscillator, and the temperature \(\boldsymbol{\beta}_{0}\) corresponds to the oscillator's temperature. Generating samples at a particular temperature amounts to sampling from the prior at the corresponding temperature in Eq. 9. Optionally, one may hold the temperatures fixed over the course of the generative process so that \(\mathcal{M}_{\theta}\) is restricted to the configuration space as required by TFEP. Empirically we find that diffusion in the joint \(\mathbf{x}\)-\(\boldsymbol{\beta}\)-space yields similar results to the conditional generation in [81]. More details on training and inference are provided in Appendix A ### Critical Behavior of the Ising Model We apply Thermodynamic Maps to the most widely studied formulation of the classical Ising model as a system of interacting spins arranged on a two-dimensional square lattice, without the presence of an external magnetic field. Arguably, this model serves as a prototypical example of a simple system with a complex phase transition. Each spin can have one of two states with value \(\sigma=+1\) or \(\sigma=-1\). The spins interact with their nearest neighbors through an interaction term \(J\) and Hamiltonian given by \[\mathcal{H}=J\sum_{\langle i,j\rangle}\sigma_{i}\sigma_{j}, \tag{10}\] where \(\langle i,j\rangle\) denotes nearest-neighbor pairs. When \(J<0\) the model exhibits ferromagnetic behavior in its ground state. Going forward, we set \(J=-1\) without loss of generality. 
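In code, the energy of a periodic square-lattice configuration under Eq. (10) can be evaluated by pairing every spin with its right and down neighbours, so that each nearest-neighbour bond is counted exactly once; a short illustrative helper:

```python
import numpy as np

def ising_energy(spins, J=-1.0):
    """Total energy of a periodic square-lattice configuration under Eq. (10)."""
    right = np.roll(spins, -1, axis=1)   # right neighbour of every site
    down = np.roll(spins, -1, axis=0)    # down neighbour of every site
    return J * np.sum(spins * (right + down))
```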
As the temperature increases, an Ising model in two or higher dimensions transitions from an ordered magnetic phase to a disordered paramagnetic phase. For our set-up of a two-dimensional Ising model on a square lattice with \(J=-1\), this critical temperature is known to be \(T_{c}\approx 2.27\)[102]. The two phases can be distinguished from each other with the magnetization order parameter, \(M\), defined as the absolute value of the average of the spins. Above \(T_{c}\), the spins are equally likely to be +1 or -1, regardless of their neighbors, resulting in a net magnetization of zero. Well below \(T_{c}\), all spins in the lattice align, leading to a magnetization of 1. Divergences are a signature of critical behavior, and the magnetization \(M\) and heat capacity \(C\) of the Ising model diverge near the critical temperature as: \[M\sim|\tau|^{\beta}\quad\text{and}\quad C\sim|\tau|^{-\alpha}\quad\text{where} \quad\tau=\frac{T-T_{c}}{T_{c}}, \tag{11}\] and with critical exponents \(\alpha=0\) and \(\beta=0.125\) (not to be confused with the inverse temperature variables \(\mathbf{\beta}\)), which can be derived analytically in the thermodynamic limit [103]. For systems which are not solvable the critical exponents must be measured numerically, which is often done through MC sampling, wherein proposals for spin flips are generated and their acceptance or rejection is determined based on the detailed balance condition. Near the critical temperature, the presence of long-range correlations causes MC dynamics to slow down exponentially, leading to difficulty in sampling the phase transition [104]. We investigate the ability of Thermodynamic Maps to infer such critical behaviors by generating configurations of an Ising model through MC sampling across temperatures, and using the data from two temperatures asymmetrically spaced about \(T_{c}\) to train a TM. We then generate configurations at all temperatures using Figure 2: **Inferring the phase transition of the 2D Ising model from limited sampling.****A** The magnetization is plotted for samples of a \(32\times 32\) square Ising model generated through MC sampling (orange) and the thermodynamic map (blue). The thermodynamic map predicts change in magnetization at \(T_{c}\) when trained on samples generated at \(T=1.5\) and \(T=4\) (red stars). **B** The heat capacity of samples generated from MC sampling (orange) and the thermodynamic map (blue) is plotted. The thermodynamic map correctly infers the divergence in the heat capacity, numerically computed for the red dots, when trained on the same samples as panel A (red stars). the TM, and compare the behavior of \(M\) and \(C\) between the MC and TM-generated samples, as shown in Figure 2. The TM infers the correct value of \(T_{c}\) (\(\tau=0\)) even on the basis of limited, deliberately misleading training data, and generates samples with divergences in \(M\) and \(C\). Figure 2A shows the behavior of the magnetization for MC and TM-generated samples across temperatures, with critical exponents measured as \(\beta_{MC}\approx 0.149\pm 0.004\) and \(\beta_{TM}\approx 0.178\pm 0.012\). Similarly, Figure 2B shows a divergence in the heat capacity at \(T_{MC}\approx 2.25\) and \(T_{TM}\approx 2.30\), with critical exponents \(\alpha_{MC}\approx 0.338\pm 0.050\) and \(\alpha_{TM}\approx 0.236\pm 0.061\). For both exponents, the MC and TM exponents are in agreement with each other, although far from the ideal value due to finite size effects. 
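For reference, the MC data used above can be produced with a standard single-spin-flip Metropolis sampler for the Hamiltonian in Eq. (10) with \(J=-1\); a compact, illustrative sketch:

```python
import numpy as np

def metropolis_ising(L=32, T=2.5, n_sweeps=1000, rng=None):
    """Single-spin-flip Metropolis sampling of the 2D square-lattice Ising model."""
    rng = rng or np.random.default_rng()
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            # Sum over the four nearest neighbours with periodic boundaries.
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb          # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return spins

config = metropolis_ising(T=1.5)
M = abs(config.mean())   # magnetization order parameter, near 1 deep in the ordered phase
```

Heat capacities then follow from energy fluctuations accumulated over many such configurations, \(C=\left(\langle E^{2}\rangle-\langle E\rangle^{2}\right)/T^{2}\); near \(T_{c}\) the single-spin-flip dynamics slow down dramatically, which is exactly the sampling difficulty noted above.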
These results highlight the ability of our model to infer the non-trivial thermodynamic behavior of phase transitions without being shown samples from the transition region; the physical meaning of temperature in the prior system has been transferred to the complex system, complete with the statistics properties associated with critical behavior. The correct prediction of \(T_{m}\), even with asymmetrically spaced training temperatures, indicates our model has learned the physics of the Ising model, not merely the distribution of structures at each temperature. ### Exploring RNA Conformational Landscapes To show the broad applicability of Thermodynamic Maps, we now study conformational transitions and melting in two different ribonucleic acid (RNA) systems. Studying atomic-resolution conformational ensembles of RNAs through molecular dynamics simulations has proved crucial for understanding RNA structural dynamics, yet remains challenging due to the disordered, glassy nature of RNA energy landscapes [105; 106]. Increasing evidence points towards some RNAs having glassy energy landscapes where multiple minima are separated by high barriers [107; 108; 109]. The ruggedness of the landscape results in many competing degrees of freedom and no clear-cut separation of timescales within the dynamics [108; 109; 110]. The most striking feature of glassy energy landscapes as they relate to polymers is the presence of conformational heterogeneity at equilibrium [108; 111]. While the conformational landscape of proteins is often dominated by a single well-defined fold (energy minimum), the RNA conformational landscape may not dominated by a single structure [112; 113]. This difference is analogous to the single magnetized phase of an Ising model and the many phases of long-ranged spin glasses [114; 115; 116; 77]. Since multiple members of the ensemble contribute substantially to the free energy and other thermodynamic observables, RNAs are best described as a weighted ensemble of conformers [113; 117]. Exploring energy landscapes by biasing dynamics along a small number of slow degrees of freedom has proven successful for exploring the conformational landscape of proteins, but the lack of timescale separation and ensemble nature of the RNA conformational landscape violates fundamental assumptions of dimensionality made in biasing methods [118]. On the other hand, Thermodynamic Maps have the advantage of being able to learn the conformational landscape directly in the high-dimensional configuration space. We train Thermodynamics Maps on limited information generated by bioinformatic approaches and multi-ensemble molecular dynamics simulations to efficiently characterize RNA conformational ensembles. Our starting point is the physics and knowledge-based potential that is central to Rosetta to generate a putative conformational ensemble [119]. These structures serve as the starting point to explore a more realistic energy landscape through all-atom, explicit solvent molecular dynamics performed over a range of temperatures. Between rounds of molecular dynamics simulation, the global equilibrium distribution is inferred using Thermodynamic Maps. Initial conditions are re-sampled from the inferred equilibrium distribution according to a general RNA order parameter. Here on, we refer to this protocol as Thermodynamic Map-accelerated Molecular Dynamics (TM-aMD). 
Although agreement between the input equilibrium distribution and the output from the Thermodynamic Map is a necessary, though not sufficient condition that the true equilibrium distribution has been attained, we leave rigorously addressing convergence to equilibrium to future work. Here we take solace in the numerical results for the two challenging test systems, described next. #### ii.4.1 GCAA Tetraloop With a shift in perspective towards viewing RNAs as dynamic entities, there has been interest in studying the variation in dynamics between different tetraloops. Studying combining computation and experiment have demonstrated that even so-called simple tetraloops can exhibit rich dynamics [120]. We study the GCAA tetraloop, a well-studied model system, enabling us to compare the equilibrium distribution generated by our model with extensive molecular dynamics simulations and experimental data [121; 105; 122]. The GCAA Tetraloop is a small, highly-stable, 12-nucleotide RNA sequence that adopts a hairpin structure consisting of an eight-nucleotide helix and a four-nucleotide loop (PDB: 1ZIH) [82]. Consistent with an ensemble perspective, the variable arrangement of nucleotides in the loop gives rise to alternative conformations, which we investigate with TM-aMD. The Thermodynamic Map learns to generate RNA structures represented as \(G\)-vectors, which is an internal coordinate system for RNAs that effectively clusters distinct folded states [123]. The principal components of \(G\)-vectors have been shown to be a convenient visualization of RNA structural diversity, which we use to Figure 3: **GCAA Tetraloop Conformational Landscape.****A** Joint distribution of the first two principal components of G-vectors for GCAA. Contours representing TM-generated samples at 310K are overlaid on MD samples shaded by the number of base pairs, with selected regions of the conformational landscape annotated as 1-6. The 10 NMR conformers reported in the Protein Data Bank (PDB: 1ZIH) are depicted as green stars. **B** Temperature-dependent free energy profiles along the first two principal components shown in panel A. Colors indicate temperatures ranging from 310K to 450K at 10K intervals. **C** Representative structures sampled from labeled minima in panel A, with a representative NMR conformer shown in translucent blue. The structures are colored from blue to red from the 5’ to 3’ end. **D** A fraction folded curve is obtained by reweighting the MD data in panel A with the TM conformational ensemble for each temperature in panel B, and fitting a two-state model. Uncertainties are computed from the last three iterations of TM-aMD. The cutoff for folded and unfolded states is three base pairs to match our reference MD study [105]. guide adaptive sampling. Further information on \(G\)-vectors, along with details of TM-aMD can be found in Appendix B. Figure 3 summarizes the result of nine iterations of our enhanced sampling procedure, with a total of 50\(\mu\)s of simulation. Although we performed extensive MD simulations reaching long timescales, we still used two orders of magnitude less compute compared to our reference millisecond replica exchange simulation, even with sub-optimal scheduling of TM learning with MD simulation [105]. Supplementary Figure 5A suggests that TM-aMD would benefit from more frequent reseeding of simulations. In Figure 3A, we present the projection of the learned equilibrium distribution onto the first two principal components of the \(G\)-vectors. 
The contours represent the free energy landscape inferred by the thermodynamic map, while the shaded regions represent the distribution of structures observed in the simulation, shaded by the number of base pairs. The 10 conformers reported in the Protein Data Bank (PDB), represented as green stars, lie within the most dominant TM-predicted cluster. Figure 3B shows the learned, temperature-dependent free energy profiles along each of the principal components in Figure 3A. The first principal component clearly shows the reweighting of the folded and unfolded states with temperature. Figure 3C shows the structures associated with the minima in Figure 3A. The first three conformations are abundant at 310K and are consistent with MD studies [105, 121] and experiment (an example NMR conformer is depicted in blue). The last three represent unfolded states that are stabilized by base stacking and are weakly present in the 310K ensemble. The first principal component corresponds to the folding-unfolding transition, while the second captures the conformational heterogeneity of the loop region. Figure 3D displays the melting curve of the GCAA tetraloop computed from the last three rounds of MD simulation, with the melting curve of the learned equilibrium distribution. Both curves exhibit agreement, matching the range of melting temperatures predicted from a lengthy replica exchange simulation [105]. #### iii.2.2 HIV-TAR RNA The HIV-TAR RNA is an extensively studied, 29 nucleotide RNA hairpin from the HIV genome that displays rich conformational diversity. The secondary structure consists of lower and upper helices separated by a three-nucleotide bulge with an apical loop closing the upper helix (PDB: 1ANR) [83]. The disordered loop and bulge regions mediate interactions with proteins and small molecules [124]. We investigate the conformational landscape of the HIV-TAR RNA using six iterations of TM-aMD, requiring a total of 70\(\mu\)s of simulation time. We infer relative free energies between the dominant conformer observed through NMR spectroscopy and alternative conformers, which are unattainable through MD simulation alone. The conformational landscape predicted by the learned TMs, projected along the principal components of the \(G\)-vectors is shown in Figure 4A. Structures corresponding to the NMR ensemble (green stars) are well-separated from other misfolded structures within the two-dimensional projection. The temperature dependence of the free energy along each principal component is shown in Figure 4B, where we find that the free energy barrier separating the NMR conformers from unfolded states reaches a height of 5kcal/mol, which is approximately the free energy of breaking a canonical base pair. Figure 4C depicts representative samples from each cluster. The first cluster shows agreement with the NMR conformers, with the first reported conformer shown in translucent blue. The other clusters correspond to varied secondary structure motifs. Generally, across the conformational landscape, we find that the contribution of folded states to the conformational ensemble diminishes as temperature increases. This can be clearly shown by computing a fraction folded curve (Figure 4D), which shows agreement between melting temperatures of the MD and TM-derived ensembles, and the experimental melting temperature [125]. Our results support the idea that RNAs evolve over a rugged free energy landscape punctuated by many long-lived states [126]. 
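The melting temperatures reported in Figs. 3D and 4D come from fitting a two-state model to fraction-folded curves. A minimal sketch of such a fit is given below; the functional form is the standard two-state van 't Hoff expression rather than any specific fitting code, and the numerical values are purely illustrative placeholders for the reweighted ensemble data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_state_fraction_folded(T, Tm, dH):
    """Two-state model: f(T) = 1 / (1 + exp((dH/R) * (1/Tm - 1/T)))."""
    R = 1.987e-3  # gas constant in kcal / (mol K)
    return 1.0 / (1.0 + np.exp((dH / R) * (1.0 / Tm - 1.0 / T)))

# Temperatures (K) and fraction-folded values; illustrative numbers only.
temps = np.array([310., 330., 350., 370., 390., 410., 430., 450.])
frac_folded = np.array([0.95, 0.90, 0.75, 0.55, 0.35, 0.20, 0.10, 0.05])
(Tm, dH), _ = curve_fit(two_state_fraction_folded, temps, frac_folded, p0=(370.0, 30.0))
```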
We find that the native state is indeed the most stable at physiological temperatures (310K), but misfolded states consisting of secondary structure elements still have substantial contributions to the ensemble. Minima of the energy landscape corresponding to states 2-5 in Figure 4 differ from the NMR conformers (state 1) by less than 1 kcal/mol. Overall, our findings are in agreement with theoretical studies of RNA folding pointing towards a rugged energy landscape [109, 126]. Clearly, our findings are dependent on the accuracy of the simulation force field, which is notoriously inaccurate for RNAs, and our algorithm is incapable of reaching experimental folding timescales. Despite this, the fraction folded curve is in agreement with experimental results. As such, TMs may be useful in the context of force field development, where a major barrier to improved force fields is the difficulty in attaining equilibrium through MD simulation, especially for systems that have many metastable states.

## III Discussion

We have demonstrated that Thermodynamic Maps are capable of accurately inferring the non-trivial behavior of the free energy with temperature from limited data for the two-dimensional square Ising model. Additionally, we show how Thermodynamic Maps may be integrated with molecular dynamics simulation to enhance sampling. In this section, we outline prospective applications for Thermodynamic Maps. Our RNA results indicate that Thermodynamic Maps are well suited for inferring the equilibrium distribution from swarms of short, independent simulations. We show that TM-aMD is capable of exploring the conformational landscape orders of magnitude faster than REMD, while still inferring global equilibrium.

Figure 4: **HIV-TAR RNA Conformational Landscape.** **A** Joint distribution of the first two principal components of G-vectors for HIV-TAR. Contours representing TM-generated samples at 310K are overlaid on MD samples shaded by the number of base pairs, with basins of the TM landscape labeled 1-5. The 20 reported NMR conformers in the Protein Data Bank (PDB: 1ANR) are depicted as green stars. **B** Temperature-dependent free energy profiles along the first two principal components shown in panel A. Colors indicate temperatures ranging from 310K to 450K at 10K intervals. **C** Representative structures sampled from labeled clusters in panel A, with a representative NMR conformer shown in translucent blue. The structures are colored from blue at the 5’ end to red at the 3’ end. **D** A fraction folded curve is obtained by reweighting the MD data in panel A with the TM conformational ensemble for each temperature in panel B, and fitting a two-state model. Uncertainties are computed from iterations of the TM-aMD algorithm showing agreement between MD and TM-predicted melting temperatures (see Appendix B.5). The cutoff for folded and unfolded states is nine base pairs, as determined by the NMR conformers.

Still, TM-accelerated MD stands to benefit greatly from optimization. Since TMs allow for truly parallel simulations across arbitrary temperatures, they are poised to make efficient use of distributed computing resources [127]. As pointed out in Section II.4, we believe that our implementation of TM-aMD is hindered by the scheme used to adaptively re-sample initial states. The one we employ aims for agreement between the TM and MD-generated distributions.
However, in practice, it is more productive to balance thermodynamic accuracy with directed sampling of diverse conformations. Methods such as FAST and REAP quantify this trade-off as a reward function which is to be optimized by the choice of adaptive sampling strategy [128; 129]. The large number of observed metastable states for small RNAs such as HIV-TAR indicate that a thorough characterization of RNA structure at equilibrium will require reaching simulation timescales that are longer than can be reached through brute-force computing alone. We sidestep the timescale problem by starting our exploration from a putative ensemble generated by the coarse-grained Rosetta potential. We leave the more computationally intense task of truly _de novo_ folding of RNAs through MD simulations to future work. Overall, we have presented Thermodynamic Maps (TMs) as an exciting way of integrating score-based generative modelling with the theoretical framework of statistical mechanics. We have demonstrated the capability of Thermodynamic Maps to learn physics by predicting the critical behavior associated with the Ising Model from limited, deliberately misleading sampling. We also use Thermodynamic Maps to accelerate Molecular Dynamics (TM-aMD) and show that TM-aMD efficiently infers the equilibrium distribution of two model RNA systems using a fraction of the computational resources required by Replica-Exchange simulations. Based on these results, we believe that Thermodynamic Maps are suitable for widespread use and have great potential for further theoretical development and computational optimization. ## IV Acknowledgements This research was supported by the Intramural Research Program of the National Institutes of Health, National Cancer Institute (NCI), Center for Cancer Research, Project BC011585 09 (J.S.S.) L.H. was supported by the National Science Foundation, Grant No. CHE-2044165. J.S.S. and P.T. thank the NCI-UMD Partnership for Integrative Cancer Research. P.T. was an Alfred P. Sloan Foundation fellow during preparation of this manuscript. We thank Zaratan and XSEDE (project CHE180027P) for computational resources. L.H. would like to thank Yihang Wang and Disha Sanwal for insightful discussions, and Eric Beyerle and Suemin Lee for critical reading of our manuscript.
2303.07235
Distance Evaluation to the Set of Defective Matrices
We treat the problem of the Frobenius distance evaluation from a given matrix $ A \in \mathbb R^{n\times n} $ with distinct eigenvalues to the manifold of matrices with multiple eigenvalues. On restricting considerations to the rank $ 1 $ real perturbation matrices, we prove that the distance in question equals $ \sqrt{z_{\ast}} $ where $ z_{\ast} $ is a positive (generically, the least positive) zero of the algebraic equation $$ \mathcal F(z) = 0, \ \mbox{where} \ \mathcal F(z):= \mathcal D_{\lambda} \left( \det \left[ (\lambda I - A)(\lambda I - A^{\top})-z I_n \right] \right)/z^n $$ and $ \mathcal D_{\lambda} $ stands for the discriminant of the polynomial treated with respect to $\lambda $. In the framework of this approach we also provide the procedure for finding the nearest to $ A $ matrix with multiple eigenvalue. Generalization of the problem to the case of complex perturbations is also discussed. Several examples are presented clarifying the computational aspects of the approach.
Alexei Yu. Uteshev, Elizaveta A. Kalinina, Marina V. Goncharova
2023-03-13T16:07:41Z
http://arxiv.org/abs/2303.07235v1
# Distance Evaluation to the Set of Defective Matrices ###### Abstract We treat the problem of the Frobenius distance evaluation from a given matrix \(A\in\mathbb{R}^{n\times n}\) with distinct eigenvalues to the manifold of matrices with multiple eigenvalues. On restricting considerations to the rank 1 real perturbation matrices, we prove that the distance in question equals \(\sqrt{z_{*}}\) where \(z_{*}\) is a positive (generically, the least positive) zero of the algebraic equation \[\mathcal{F}(z)=0,\text{ where }\mathcal{F}(z):=\mathcal{D}_{\lambda}\left( \det\left[(\lambda I-A)(\lambda I-A^{\top})-zI_{n}\right]\right)/z^{n}\] and \(\mathcal{D}_{\lambda}\) stands for the discriminant of the polynomial treated with respect to \(\lambda\). In the framework of this approach we also provide the procedure for finding the nearest to \(A\) matrix with multiple eigenvalue. Generalization of the problem to the case of complex perturbations is also discussed. Several examples are presented clarifying the computational aspects of the approach. **Keywords:** Wilkinson's problem, defective matrix, multiple eigenvalues, distance equation **MSC 2010:** 68W30, 15A18, 12D10, 58C40 ## 1 Introduction The origin of the problem of finding the distance from a matrix \(A\in\mathbb{C}^{n\times n}\) to the set \(\mathbb{D}\) of matrices with multiple eigenvalues can be traced back to Wilkinson [23] who posed it in relation to the sensitivity analysis of matrix eigenvalues. The desired distance further will be treated with respect to either the \(2\)-norm or to the Frobenius norm in \(\mathbb{C}^{n\times n}\) and will be denoted \(d(A,\mathbb{D})\). It is usually referred to as the _Wilkinson distance_ of \(A\)[3, 18]. Alternatively, \(d(A,\mathbb{D})\) can be defined as the \(\inf\|A-B\|\) where \(B\) belongs to the subset of defective matrices, i.e. those possessing at least one eigenvalue whose geometric multiplicity is less than its algebraic multiplicity. Starting from Wilkinson's works [24, 25, 26], the problem of evaluation of \(d(A,\mathbb{D})\) has been studied intensively in [3, 7, 8, 16, 17, 19]. The most recent result is presented in the work [2]. We briefly trace the developed approaches. Most of them are in the framework of singular value analysis of appropriate parameter dependent matrices. The following theorem gives the min-max representation of \(d(A,\mathbb{D})\) obtained by Malyshev [17]. **Theorem 1.1**: _Let \(A\in\mathbb{C}^{n\times n}\). Let the singular values of the matrix_ \[\left[\begin{array}{cc}A-\lambda I_{n}&\gamma I_{n}\\ \mathbb{O}_{n\times n}&A-\lambda I_{n}\end{array}\right] \tag{1.1}\] _be ordered like \(\sigma_{1}(\lambda,\gamma)\geq\sigma_{2}(\lambda,\gamma)\geq\ldots\geq\sigma_{ 2n}(\lambda,\gamma)\geq 0\). Then the \(2\)-norm distance \(d(A,\mathbb{D})\) can be evaluated as_ \[d(A,\mathbb{D})=\min_{\lambda\in\mathbb{C}}\max_{\gamma\geq 0}\sigma_{2n-1}( \lambda,\gamma)\,.\] The straight computation of this distance is quite difficult, so to find this distance, in many works the notion of pseudospectra [20] is used. **Definition.** For both the \(2\)-norm and the Frobenius norm, the \(\varepsilon\)-pseudospectra of a matrix \(A\) is \[\Lambda_{\varepsilon}(A)=\{\sigma_{\min}<\varepsilon\}\] where \(\varepsilon>0\) and \(\sigma_{\min}\) stands for the smallest singular value of the matrix \(A-zI\). 
Equivalently, \[\Lambda_{\varepsilon}(A)=\{z\in\mathbb{C}|\det(A+E-zI)=0,\mbox{ for some }E\in\mathbb{C}^{n\times n}\mbox{ with }\|E\|<\varepsilon\}\,.\] If \(\Lambda_{\varepsilon}\) has \(n\) components, then \(A+E\) has \(n\) distinct eigenvalues for all perturbations \(E\in\mathbb{C}^{n\times n}\) and hence \(A+E\) is not defective. In subsequent papers, the pseudospectra approach is used to find the distance to the nearest defective matrix. In [16], a geometric solution to the problem of finding \(d(A,\mathbb{D})\) in the Frobenius norm is given. The nearest defective matrix is related to the critical points of the minimal singular value \(\sigma_{\min}(x,y)\) of the matrix \(A-(x+{\bf i}y)I\), which can be obtained by examination of the pseudospectra of \(A\). For an approximation of a multiple eigenvalue of the nearest defective matrix, an averaging heuristic based on Puiseux series is proposed. An iterative method for finding this eigenvalue together with the minimal perturbation is also presented. In [3], it is proposed to find the smallest perturbation \(E\) such that the components of the pseudospectra of \(A+E\) coalesce. The problem is reformulated as follows. One needs to find \(z\in\mathbb{C},\varepsilon\in\mathbb{R},\varepsilon>0\) and \(U,V\in\mathbb{C}^{n}\), such that1 Footnote 1: Hereinafter \({}^{\top}\) stands for the transpose while \({}^{\mathsf{H}}\) stands for the Hermitian transpose. \[(A-zI)V-\varepsilon U=\mathbb{O}_{n\times 1},\varepsilon V-(A-zI)^{\mathsf{H}}U=\mathbb{O}_{n\times 1},\,U^{\mathsf{H}}V=0\,. \tag{1.2}\] The algorithm for solving this system of equations presented in that work is rather expensive because it requires the repeated calculation of pseudospectra; in addition, a condition for the coalescence of two pseudospectral curves is required. In [4], a new computational approach to approximating the nearest defective matrix by a variant of Newton's method is suggested. The Implicit Determinant Method based on standard Newton's method is used to solve the system (1.2) in [2]. There are several works considering generalizations of Wilkinson's problem to the cases of prescribed eigenvalues or their multiplicities [5, 11, 15, 14, 18], and matrix pencils [1]. The approaches developed in the above cited papers can be characterized as belonging to Numerical Linear Algebra. The present paper aims at solving the stated problem for the case of the Frobenius norm within the framework of a symbolic computation approach. Namely, we reduce the problem to that of solving a univariate polynomial equation. As a matter of fact, the manifold \(\mathbb{D}\) of matrices with multiple eigenvalues in the \(\mathbb{R}^{n^{2}}\) space of their entries is an algebraic one, i.e. it is represented by a multivariate polynomial equation. If we slightly modify Wilkinson's problem to that of finding \(d^{2}(A,\mathbb{D})\), then the constrained optimization problem becomes an algebraic one in the sense that both the objective function and the constraint are polynomials. Application of the Lagrange multipliers method reduces the problem to that of solving a system of algebraic equations. The latter can be resolved, at least in theory, via the analytical procedure of elimination of variables consisting in the multivariate resultant computation or the Gröbner basis construction.
Application of these procedures to the system of equations of the treated problem, complemented with \(z-d^{2}(A,\mathbb{D})=0\), results in a univariate equation \(\mathcal{F}(z)=0\) whose zero set contains all the critical values of the squared distance function. This equation will be further referred to as the **distance equation** and its computation is the priority of the present paper. This approach has been developed in [13]. Unfortunately, soon after that publication, a significant gap in reasoning was discovered. It was assumed that the value \(d(A,\mathbb{D})\) could be provided by only the rank 1 perturbation matrix \(E_{*}\) and that the nearest to \(A\) matrix \(B_{*}=A+E_{*}\) in \(\mathbb{D}\) might possess only a double real eigenvalue. In Section 6, an example of an order 4 matrix \(A\) is given where the nearest in \(\mathbb{D}\) matrix possesses a pair of double complex-conjugate eigenvalues. As yet, we have failed to manage this scenario for the general statement of the problem; nor are we able to claim that it is a _zero probability event_. We confine ourselves here to considering the case where the critical values of \(d^{2}(A,\mathbb{D})\) are provided only by the rank 1 perturbation matrices. For this case, the practical implementations of the elimination of variables procedure mentioned above can be reduced to just two bivariate equations. One of these equations follows quite naturally from the approach developed in [13]. This is \[\Phi(\lambda,z)=0\ \text{where}\ \Phi(\lambda,z):=\det\left[(\lambda I-A)(\lambda I-A^{\top})-zI_{n}\right]\,.\] The deduction of the second equation causes more difficulties. It happens to be \[\partial\Phi(\lambda,z)/\partial\lambda=0\,.\] To obtain the distance equation, it is then sufficient to eliminate the variable \(\lambda\) from the obtained system. This can be managed with the aid of **discriminant** computation, i.e. the function of the coefficients of a polynomial responsible for the existence of a multiple zero for this polynomial. We recall some basic features of this function in Section 2. In Section 3, we prove the main result of the paper, namely that the value \(d^{2}(A,\mathbb{D})\) is in the set of non-negative zeros of the distance equation. If \(A\not\in\mathbb{D}\) then generically \(d^{2}(A,\mathbb{D})\) equals the least positive zero \(z_{*}\) of this equation. We also detail here the structure of the matrix \(B_{*}\) nearest to \(A\) in \(\mathbb{D}\). It appears that the multiple eigenvalue of \(B_{*}\) coincides with the multiple zero of the polynomial \(\Phi(\lambda,z_{*})\). In Section 5, computational aspects of the proposed approach are discussed via solving the problem for two families of matrices treated in the literature. In Section 7, we address the generalization of Wilkinson's problem to the case of complex perturbations. Here the results are presented in a very concise manner with the potential intention of returning to them in future articles. **Notation** is kept consistent with [13]. For a matrix \(A\in\mathbb{R}^{n\times n}\), \(f_{A}(\lambda)\) denotes its characteristic polynomial, \(d(A,\mathbb{D})\) denotes the distance from \(A\) to the set \(\mathbb{D}\) of matrices possessing a multiple eigenvalue. \(E_{*}\) and \(B_{*}=A+E_{*}\) stand for, respectively, the (minimal) perturbation matrix and the nearest to \(A\) matrix in \(\mathbb{D}\) (i.e. \(d(A,\mathbb{D})=\|A-B_{*}\|\)); we denote by \(\lambda_{*}\) the multiple eigenvalue of \(B_{*}\).
\(I\) (or \(I_{n}\)) denotes the identity matrix (of the corresponding order). \(\mathcal{D}\) (or \(\mathcal{D}_{\lambda}\)) denotes the discriminant of a polynomial (with subscript indicating the variable). **Remark.** All the computations were performed in CAS Maple 15.0 with those approximate done within the accuracy \(10^{-40}\). In the paper they are presented rounded to \(10^{-6}\). ## 2 Algebraic Preliminaries It is well-known that in the \((N+1)\)-dimensional space of the polynomial \[F(x)=a_{0}x^{N}+a_{1}x^{N-1}+\ldots+a_{N}\in\mathbb{C}[x],\,a_{0}\neq 0,N\geq 2 \tag{2.1}\] coefficients, the manifold of polynomials with multiple zeros is given by the equation \[D(a_{0},a_{1},\ldots,a_{N})=0\quad\text{where}\ D:=\mathcal{D}_{x}(F(x)) \tag{2.2}\] denotes the **discriminant** of the polynomial. Discriminant is formally defined as a symmetric function of the zeros \(\{\lambda_{1},\ldots,\lambda_{N}\}\) of the polynomial \(F(x)\) \[D_{x}(F(x))=a_{0}^{2N-2}\prod_{1\leq j<k\leq N}(\lambda_{k}-\lambda_{j})^{2}\,. \tag{2.3}\] This representation gives rise to further transformation of the discriminant into the homogeneous polynomial \(D(a_{0},a_{1},\ldots,a_{N})\) of the order \(2N-2\) with respect to the coefficients of \(F(x)\). Such a transformation can be implemented through a preliminary representation of discriminant in an appropriate determinantal form. We will follow the approach based on the Hankel matrix formalism [21]. For this aim, find first the _Newton sums_\(s_{0},s_{1},\ldots,s_{2N-2}\) of the polynomial \(F(x)\) with the aid of recursive formulas \[s_{0}=N,\ s_{1}=-a_{1}/a_{0},\] \[s_{k}=\left\{\begin{array}{ll}-(a_{1}s_{k-1}+a_{2}s_{k-2}+\ldots+a_{k-1}s_{1 }+a_{k}k)/a_{0},&\mbox{ if }k\leq N,\\ -(a_{1}s_{k-1}+a_{2}s_{k-2}+\ldots+a_{N}s_{k-N})/a_{0},&\mbox{ if }k>N,\end{array}\right. \tag{2.4}\] and compose the Hankel matrix \[S=[s_{j+k}]_{j,k=0}^{N-1}=\left[\begin{array}{ccccc}s_{0}&s_{1}&s_{2}&\ldots &s_{N-2}&s_{N-1}\\ s_{1}&s_{2}&s_{3}&\ldots&s_{N-1}&s_{N}\\ s_{2}&s_{3}&s_{4}&\ldots&s_{N}&s_{N+1}\\ \ldots&&&\ldots\\ s_{N-1}&s_{N}&s_{N+1}&\ldots&s_{2N-3}&s_{2N-2}\end{array}\right]_{N\times N}\,.\] Denote by \(S_{1},\ldots,S_{N}=\det S\) its leading principal minors. **Theorem 2.1**: _One has_ \[{\cal D}(F)=a_{0}^{2N-2}S_{N}\,. \tag{2.5}\] _The condition_ \[S_{N}=0,\ldots,S_{N-k+1}=0,S_{N-k}\neq 0\] _is the necessary and sufficient for the polynomial \(F(x)\) to possess \(k\) common zeros with \(F^{\prime}(x)\). In particular, if \(S_{N}=0,S_{N-1}\neq 0\), then \(F(x)\) possesses a unique multiple zero and the multiplicity of this zero equals \(2\). This zero can be computed via the formula_ \[\lambda=s_{1}-\frac{1}{S_{N-1}}\left|\begin{array}{ccccc}s_{0}&s_{1}&\ldots &s_{N-3}&s_{N-1}\\ s_{1}&s_{2}&\ldots&s_{N-2}&s_{N}\\ \vdots&&&\vdots\\ s_{N-2}&s_{N-1}&\ldots&s_{2N-1}&s_{2N-3}\end{array}\right|\,. \tag{2.6}\] _The determinant in the right-hand side is constructed by deleting the last row and the last but one column in \(\det S\)._ Consequently, the set \(\mathbb{D}\) of matrices with multiple eigenvalues is given by the equation \[\mathcal{D}_{\lambda}\left(\det(\lambda I-B)\right)=0\,.\] For the case of polynomials with real coefficients, the sequence of leading principal minors of the matrix \(S\) permits one to establish the exact number of real zeros for \(F(x)\)[10]. 
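Before turning to the count of real zeros, a minimal computational sketch of Theorem 2.1 may be useful. It is written in Python with SymPy purely for illustration (the computations reported in this paper were performed in Maple): the recursion (2.4) and the Hankel determinant (2.5) are implemented directly and checked against a built-in discriminant for a depressed cubic.

```python
import sympy as sp

def newton_sums(a, m):
    """Newton sums s_0,...,s_m of F(x) = a[0]*x^N + ... + a[N] via the recursion (2.4)."""
    N = len(a) - 1
    s = [sp.Integer(N), sp.simplify(-a[1] / a[0])]
    for k in range(2, m + 1):
        if k <= N:
            acc = sum(a[j] * s[k - j] for j in range(1, k)) + a[k] * k
        else:
            acc = sum(a[j] * s[k - j] for j in range(1, N + 1))
        s.append(sp.simplify(-acc / a[0]))
    return s

def discriminant_via_hankel(a):
    """D(F) = a0^(2N-2) * det(S), with S the Hankel matrix of Newton sums (Theorem 2.1)."""
    N = len(a) - 1
    s = newton_sums(a, 2 * N - 2)
    S = sp.Matrix(N, N, lambda i, j: s[i + j])
    return sp.expand(a[0] ** (2 * N - 2) * S.det())

# sanity check on the depressed cubic x^3 + p*x + q
x, p, q = sp.symbols('x p q')
print(discriminant_via_hankel([sp.Integer(1), sp.Integer(0), p, q]))  # -4*p**3 - 27*q**2
print(sp.discriminant(x**3 + p*x + q, x))                             # same value
```

The same Hankel-determinant representation underlies the construction of \(\mathcal{F}(z)\) in Section 4.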
**Theorem 2.2** (Jacobi): _Let_ \[S_{N}=0,\ldots,S_{N-k+1}=0,S_{N-k}\neq 0,\ldots,S_{1}\neq 0\] _Then the number of distinct pairs of complex-conjugate zeros for \(F(x)\in\mathbb{R}[x]\) equals_ \[\mathcal{V}(1,S_{1},\ldots,S_{N-k})\] _where \(\mathcal{V}\) denotes the number of variations of sign in the given sequence._ In the space \(\mathbb{R}^{N+1}\) of polynomials (2.1) with real coefficients, the discriminant manifold (2.2) separates the domains of vectors providing the coefficients of polynomials with the same number of real zeros. The last comment of the present section relates to application of discriminant to one problem from Elimination Theory. Consider a bivariate polynomial \(F(x,y)\in\mathbb{R}[x,y],\deg F\geq 2\). The discriminant furnishes the tool for eliminating the variable \(x\) from the system of equations \[F(x,y)=0,\partial F(x,y)/\partial x=0\,. \tag{2.7}\] Namely, if \((x_{0},y_{0})\) is a solution to the system (2.7), then \(y_{0}\) is necessarily a zero of the algebraic univariate equation \[\mathcal{Y}(y)=0\ \ \mbox{where}\ \ \mathcal{Y}(y):=\mathcal{D}_{x}(F(x,y))\,.\] The reverse statement is subject to an extra assumption. If \(y_{1}\in\mathbb{C}\) is a zero for \(\mathcal{Y}(y)\), then there exists a multiple zero for the polynomial \(F(x,y_{1})\). Under the assumption that \(y_{1}\) is a simple zero for \(\mathcal{Y}(y)\), \(x_{1}\) is a unique multiple zero and its multiplicity equals \(2\). Then it can be expressed as a rational function of \(y_{1}\) using the result of Theorem 2.1. These considerations are valid for all the solutions of the system (2.7) provided that \(\mathcal{D}_{y}(\mathcal{Y}(y))\neq 0\). ## 3 Distance Equation In terms of the discriminant manifold referred to in the previous section, the problem of evaluation of \(d^{2}(A,\mathbb{D})\) is equivalent to that of constrained optimization \[\min\|B-A\|^{2}\quad\mbox{subject to}\ \mathcal{D}_{\lambda}(f_{B}(\lambda))=0, \ B\in\mathbb{R}^{n\times n}\,.\] Here the constraint is an algebraic equation with respect to the entries of the matrix \(B\). Traditional application of the Lagrange multipliers method reduces the problem to that of solving a system of \(n^{2}+1\) nonlinear algebraic equations. Under the additional assumption the matrix \(B_{*}\in\mathbb{R}^{n\times n}\) providing a solution to this system possesses only one multiple eigenvalue and its multiplicity equals \(2\), it is possible to reduce the number of variables in the constrained optimization approach. The following result is presented in [13]: **Theorem 3.1**: _The value \(d^{2}(A,\mathbb{D})\) belongs to the set of critical values of the objective function_ \[G(U):=U^{\top}AA^{\top}U-\left(U^{\top}AU\right)^{2} \tag{3.1}\] _for the constrained optimization problem under constraints_ \[U^{\top}U=1,\ U\in\mathbb{R}^{n}\,. \tag{3.2}\] _If \(U_{*}\) be the point providing \(d^{2}(A,\mathbb{D})\), then the perturbation can be computed as_ \[E_{*}=U_{*}U_{*}^{\top}(\kappa I-A)\quad\mbox{ where }\kappa:=U_{*}^{\top}AU_{ *}\,.\] The new optimization problem still have significant number of variables. We aim to eliminate all of them but introduce an extra one responsible for the critical _values_ of the objective function. Stationary points of the function (3.1) under the constraints (3.2) can be found via Lagrange method applied to the function \(G(U)-\mu(U^{\top}U-1)\). This results into the system \[AA^{\top}U-(U^{\top}AU)(A+A^{\top})U-\mu U=\mathbb{O}_{n\times 1}\,. \tag{3.3}\] Denote \[\lambda:=U^{\top}AU\,. 
\tag{3.4}\] Then the equation (3.3) has a nontrivial solution with respect to \(U\) if and only if \[\det(AA^{\top}-\lambda(A+A^{\top})-\mu I)=0\,. \tag{3.5}\] Under this condition, multiplication of (3.3) by \(U^{\top}\) yields \[U^{\top}AA^{\top}U=2\lambda^{2}+\mu\,.\] Wherefrom it follows that the critical values of the objective function (3.1) are given by \[z=\lambda^{2}+\mu\,.\] Substitution this into (3.5) results in the equation connecting \(z\) and \(\lambda\): \[\Phi(\lambda,z)=0 \tag{3.6}\] where \[\Phi(\lambda,z):=\det\left[AA^{\top}-\lambda(A+A^{\top})+(\lambda^{2}-z)I\right] \tag{3.7}\] \[=\det\left[(\lambda I-A)(\lambda I-A)^{\top}-zI\right] \tag{3.8}\] Zeros \(z_{1},\ldots,z_{n}\) of the polynomial \(\Phi(\lambda,z)\) with respect to the variable \(z\) are evidently real since they are the squares of the singular values for the matrix \(\lambda I-A\). Our further task is to deduce an extra equation connecting \(\lambda\) and \(z\). **Theorem 3.2**: _The value \(d^{2}(A,\mathbb{D})\) belongs to the set of non-negative zeros of the polynomial_ \[{\cal F}(z)\equiv{\cal D}_{\lambda}(\Phi(\lambda,z))/z^{n}\,. \tag{3.9}\] **Proof.** Under the condition (3.6), there exists a nontrivial solution for (3.3) with respect to the column \(U\) \[(\lambda I-A)(\lambda I-A)^{\top}U=z\,U\,. \tag{3.10}\] This equality means that \(U\) is the right singular vector for the matrix \(\lambda I-A\) corresponding to the singular value \(\sqrt{z}\). The corresponding left singular vector for that matrix can be found from the equality \[\sqrt{z}V:=(\lambda I-A)^{\top}U\,. \tag{3.11}\] Dual relationship is valid for \(U\): \[\sqrt{z}U=(\lambda I-A)V\,. \tag{3.12}\] From the conditions (3.2) and (3.4) it follows that \[U^{\top}(\lambda I-A)U=0\,. \tag{3.13}\] Multiply (3.12) from the left by \(U^{\top}\). From (3.2), it follows that \[\sqrt{z}=U^{\top}(\lambda I-A)V\,. \tag{3.14}\] Multiply (3.11) from the left by \(V^{\top}\) and utilize (3.14): \[\sqrt{z}V^{\top}V=V^{\top}(\lambda I-A)^{\top}U=\sqrt{z}\,.\] Wherefrom the two alternatives follow \[V^{\top}V=1\quad\mbox{or}\quad\sqrt{z}=0\,.\] Similarly, multiplication of (3.11) from the left by \(U^{\top}\) and further application of (3.13) yields \[\sqrt{z}U^{\top}V=0\,.\] This also leads to two alternatives: \[U^{\top}V=0\quad\text{or}\quad\sqrt{z}=0\,.\] Ignore the case \(\sqrt{z}=0\). \[V^{\top}V=1,\ U^{\top}V=0\,. \tag{3.15}\] Consider the equation (3.14) as a definition of the \(\sqrt{z}\) as the function of \(\lambda\). Differentiate this relation with respect to \(\lambda\): \[\frac{d\,\sqrt{z}}{d\,\lambda}=U^{\top}V+\frac{d\,U^{\top}}{d\,\lambda}( \lambda I-A)V+U^{\top}(\lambda I-A)\frac{d\,V}{d\,\lambda}\,.\] With the aid of (3.11) and (3.12) transform this into \[U^{\top}V+\sqrt{z}\left[\frac{d\,U^{\top}}{d\,\lambda}U+V^{\top}\frac{d\,V}{d \,\lambda}\right]\,.\] Due to (3.2) and (3.15), we arrive at \[\frac{d\,\sqrt{z}}{d\,\lambda}=0\,. \tag{3.16}\] Equation (3.6) defines implicit function \(z(\lambda)\). Differentiation of the identity \(\Phi(\lambda,z(\lambda))\equiv 0\) with respect to \(\lambda\) yields the identity \[\Phi^{\prime}_{\lambda}(\lambda,z)+\Phi^{\prime}_{z}(\lambda,z)\frac{d\,z}{d \,\lambda}\equiv 0\,.\] Under the condition (3.16), the variables \(\lambda\) and \(z\) are linked by an extra relationship \[\Phi^{\prime}_{\lambda}(\lambda,z)=0\,. \tag{3.17}\] Together with (3.6), the deduced condition composes the system of algebraic equations \[\Phi(\lambda,z)=0,\ \Phi^{\prime}_{\lambda}(\lambda,z)=0\,. 
\tag{3.18}\] According with the results of Section 2, elimination of \(\lambda\) from this system can be implemented with the aid of the discriminant computation, i.e. the variable \(z\) should satisfy the equation \[\mathcal{D}_{\lambda}(\Phi(\lambda,z))=0\,.\] To prove the validity of (3.9), it is necessary to additionally confirm that the left-hand side of the last equation is divisible by \(z^{n}\). This is indeed the case, since the polynomial \(\Phi(\lambda,0)\) possesses \(n\) multiple zeros coinciding with the eigenvalues of the matrix \(A\). With \({\cal F}(z)\) given by (3.9), the **distance equation**\({\cal F}(z)=0\) is now well-defined and in Section 4 we discuss some of related features and computational aspects. To conclude the present section, we have to detail the properties of the \(\lambda\)-component for the solution of the system (3.18). Let the polynomial \({\cal F}(z)\) defined by (3.9) possess a positive real zero \(z_{0}\) and this zero be simple. Then the polynomial \(\Phi(\lambda,z_{0})\) has a unique multiple zero and multiplicity of this zero equals \(2\). We denote by \(\lambda_{0}\). It is evidently real and can be expressed as a rational function of \(z_{0}\) via, for instance, formula (2.6). The less evident conclusion is as follows: this multiple zero coincides with the multiple eigenvalue of the matrix in \({\mathbb{D}}\) providing the critical value \(z_{0}\) for the function \(d^{2}(A,{\mathbb{D}})\). **Theorem 3.3**: _For any real solution \((\lambda_{0},z_{0})\) of the system (3.18) where \(z_{0}\neq 0\), there exists the rank \(1\) perturbation \(E_{0}\) such that \(\|E_{0}\|=\sqrt{z_{0}}\) and the matrix \(B_{0}=A+E_{0}\) possesses the multiple eigenvalue \(\lambda_{0}\)._ **Proof.** The number \(\sqrt{z_{0}}\) is a singular value for the matrix \(\lambda_{0}I-A\). We intend to prove that the matrix from the theorem statement is defined by the formula \[E_{0}:=\sqrt{z_{0}}U_{0}V_{0}^{\top}\,, \tag{3.19}\] where \(U_{0}\) and \(V_{0}\) are respectively the left and the right singular vectors of the unit norm for the matrix \(\lambda_{0}I-A\) corresponding to \(\sqrt{z_{0}}\). Indeed, the matrix \(B_{0}=A+E_{0}\) has \(\lambda_{0}\) as the eigenvalue corresponding to the eigenvector \(V_{0}\): \[B_{0}V_{0}=(A+E_{0})V_{0}\stackrel{{\eqref{eq:1.1}}}{{=}}AV_{0}+ \sqrt{z_{0}}U_{0}\stackrel{{\eqref{eq:1.2}}}{{=}}AV_{0}+(\lambda_ {0}I-A)V_{0}=\lambda_{0}V_{0}\,.\] If \({\bf rank}(B_{0}-\lambda_{0}I)<n-1\) then the theorem is proved. Assume that \({\bf rank}(B_{0}-\lambda_{0}I)=n-1\). Let us prove the existence of a column \(W\) such that \[(B_{0}-\lambda_{0}I)W=V_{0}\,.\] The necessary and sufficient condition for resolving this equation consists in the fulfillment of the equality \[(B_{0}-\lambda_{0}I)(B-\lambda_{0}I)^{+}V_{0}=V_{0} \tag{3.20}\] where \(\ {}^{+}\) stands for the Moore-Penrose inverse of the matrix. It can be easily verified that \[(B_{0}-\lambda_{0}I)(B_{0}-\lambda_{0}I)^{+}=I-U_{0}U_{0}^{\top}\] (by assumption, \({\bf rank}(B_{0}-\lambda_{0}I)=n-1\)), and the condition (3.20) is fulfilled: \[(B_{0}-\lambda_{0}I)(B_{0}-\lambda_{0}I)^{+}V_{0}=(I-U_{0}U_{0}^{\top})V_{0} \stackrel{{\eqref{eq:1.2}}}{{=}}V_{0}\,.\] The columns \(V_{0}\) and \(W\) are linearly independent. Indeed, if \[\alpha V_{0}+\beta W=\mathbb{O}_{n\times 1}\quad\mbox{for $\{\alpha,\beta\} \subset\mathbb{R}$}\] then on multiplying this equality from the left by \(B_{0}-\lambda_{0}I\) it follows that \(\beta V_{0}=\mathbb{O}_{n\times 1}\), and thus \(\beta=0\). 
But then \(\alpha=0\) since \(V_{0}\) is a nonzero column. Hence, \[(B_{0}-\lambda_{0}I)^{2}V_{0}=\mathbb{O},\ (B_{0}-\lambda_{0}I)^{2}W=\mathbb{O}\] for the linear independent \(V_{0}\) and \(W\). Consequently, \({\bf rank}(B_{0}-\lambda_{0}I)^{2}\leq n-2\) and this gives evidence that \(\lambda_{0}\) should be a multiple eigenvalue for \(B_{0}\). \(\Box\) **Corollary 3.1**: _If \(A\not\in\mathbb{D}\), then_ \[d(A,\mathbb{D})=\sqrt{z_{*}}\,,\] _where \(z_{*}\) is the minimal positive zero of the polynomial (3.9) provided that this zero is not a multiple one. Minimal perturbation is evaluated by the formula_ \[E_{*}=U_{*}U_{*}^{\top}(\lambda_{*}I-A)\,. \tag{3.21}\] _Here \(\lambda_{*}\) is the multiple zero for the polynomial \(\Phi(\lambda,z_{*})\) and \(U_{*}\in\mathbb{R}^{n},\|U_{*}\|=1\) is the left singular vector of the matrix \(\lambda_{*}I-A\) corresponding to the singular value \(\sqrt{z_{*}}\)._ The significance of condition for simplicity of the minimal positive zero \(z_{*}\) can be explained as follows. Since we are looking for only real perturbations, formula (3.21) yields such a matrix if \(\lambda_{*}\) is real. For the matrices of the order \(n\geq 4\), it might happen that the system (3.18) possesses a solution \((z_{*},\lambda_{*})\) with an imaginary \(\lambda_{*}\) (we give an example of such a matrix in Section 6). Then the system necessarily possesses the solution \((z_{*},\overline{\lambda_{*}})\). This implies (v. the last comment from Section 2) that \(z_{*}\) should be a multiple zero for (3.9). Therefore, the condition for simplicity of \(z_{*}\)is sufficient to prevent such an occasion. Formal verification of this condition can be replaced by a more general one relating the discriminant of \({\cal F}(z)\): \[{\cal D}_{z}({\cal F}(z))\neq 0\,.\] ## 4 Properties of the Distance Equation **Example 4.1**: _The distance equation for the matrix \(A=[a_{jk}]_{j,k=1}^{2}\) is found in the form_ \[{\cal F}(z):=16\left[(a_{11}-a_{22})^{2}+(a_{12}+a_{21})^{2}\right]\cdot\left\{ [4z-{\cal D}(f_{A}(\lambda))]^{2}-16(a_{12}-a_{21})^{2}z\right\}=0\,.\] _Polynomial in braces has only real zeros with respect to \(z\) since its discriminant equals_ \[256(a_{12}-a_{21})^{2}\left[(a_{11}-a_{22})^{2}+(a_{12}+a_{21})^{2}\right]\geq 0\,.\] Some terms in the canonical representation of the polynomial (3.6) can be explicitly expressed via the entries of the matrix \(A\): \[\Phi(\lambda,z)\equiv\lambda^{2n}-2{\bf tr}(A)\lambda^{2n-1}+(-nz+{\bf tr}(AA^{ \top})+p_{2})\lambda^{2n-2}+\ldots+\det(AA^{\top}-zI)\,. \tag{4.1}\] Here \(p_{2}\) is the coefficient of \(\lambda^{n-2}\) in the characteristic polynomial \(f_{A+A^{\top}}(\lambda):=\det(\lambda I-A-A^{\top})\). It happens that this polynomial is also responsible for the order of the distance equation. **Theorem 4.1**: _One has_ \[{\cal F}(z)\equiv 4^{n}\left[{\cal D}_{\lambda}(f_{A+A^{\top}}(\lambda))\right] ^{2}z^{n(n-1)}+\mbox{lower order terms in }\;\;z\,. 
\tag{4.2}\] **Proof.** Let \(\{\mu_{1},\ldots,\mu_{n}\}\) be the spectrum of the matrix \(A+A^{\top}\) while \(P\in{\mathbb{R}}^{n\times n}\) be an orthogonal matrix reducing it to the diagonal form: \[P^{\top}(A+A^{\top})P={\bf diag}(\mu_{1},\ldots,\mu_{n})\,.\] Apply the same transformation to the determinant (3.7): \[\Phi(\lambda,z)\equiv\det\left[P^{\top}AA^{\top}P+{\bf diag}(\lambda^{2}-\mu_{ 1}\lambda-z,\ldots,\lambda^{2}-\mu_{n}\lambda-z)\,.\right]\] The leading term of the polynomial \({\cal D}_{\lambda}(\Phi(\lambda,z))\) with respect to \(z\) coincide with that of \[{\cal D}_{\lambda}\left(\prod_{j=1}^{n}(\lambda^{2}-\mu_{j}\lambda-z)\right)\,.\] The set of zeros of the polynomial under the discriminant sign is as follows \[\left\{\frac{1}{2}\left(\mu_{j}\pm\sqrt{\mu_{j}^{2}+4z}\right)\right\}_{j=1}^{ n}\,.\] Using the definition (2.3) of the discriminant, one gets \[{\cal D}_{\lambda}\left(\prod_{j=1}^{n}(\lambda^{2}-\mu_{j}\lambda-z)\right)= \prod_{j=1}^{n}(4\,z+\mu_{j}^{2})\prod_{1\leq j<k\leq n}\left[z^{2}(\mu_{k}- \mu_{j})^{4}\right]\,.\] Coefficient of the monomial \(z^{n^{2}}\) in the right-hand side can be recognized, via (2.3), as the square of the discriminant of the characteristic polynomial of \(A+A^{\top}\). \(\square\) As for the determining the structure of the free term of \({\cal F}(z)\), our successes are restricted to the following **Hypothesis.** If computed symbolically with respect to the entries of \(A\), \({\cal F}(0)\) has a factor \([{\cal D}_{\lambda}(f_{A}(\lambda))]^{2}\). According to Theorem 2.1, the polynomial \({\cal F}(z)\) can be constructed in the form of determinant of a suitable Hankel matrix. For this aim, compute first the Newton sums \(\{s_{j}(z)\}_{j=0}^{4n-2}\) for the polynomial \(\Phi(\lambda,z)\) treated with respect to \(\lambda\). Direct utilization of the formulas (2.4) requires the canonical representation (4.1) for the polynomial \(\Phi(\lambda,z)\) while initially we have just only its representation in the determinantal form (3.8). Fortunately, the Newton sums can be computed in an alternative way. Indeed, \[\Phi(\lambda,z)\equiv\det(\lambda I_{2n}-W)\quad\mbox{where }W:=\left[\begin{array} []{cc}A^{\top}&\sqrt{z}I_{n}\\ \sqrt{z}I_{n}&A\end{array}\right]\] and it is known that the Newton sums of the characteristic polynomial of a matrix can be computed as the traces of matrix powers: \[s_{j}(z)\equiv{\bf tr}(W^{j})\quad\mbox{for }j\in\{0,1,\ldots\}\] Thus, one has \[s_{2}(z)=2({\bf tr}(A^{2})+nz),\ s_{3}(z)=2({\bf tr}(A^{3})+3\,z\,{\bf tr}(A)),\ \ldots\] Compose the Hankel matrix \[S(z):=[s_{j+k}(z)]_{j,k=0}^{2n-1}\] and compute the sequence of its leading principal minors \(S_{1}(z),\ldots,S_{2n}(z)\). Due to (2.5) and (3.9), \[S_{2n}(z)\equiv{\cal D}_{\lambda}(\Phi(\lambda,z))\equiv{\cal F}(z)z^{n}\,. \tag{4.3}\] Evidently, the polynomial \(\Phi(\lambda,0)\) possesses only \(n\) double zeros, and they all are distinct provided that \(A\not\in{\mathbb{D}}\). Consequently, due to Theorem 2.1, one has \(S_{n+1}(0)=0,\ldots,S_{2n}(0)=0\). **Theorem 4.2**: _Polynomial \({\cal F}(z)\) does not have negative zeros. The number of its positive zeros lying within the interval \([0,z_{0}],z_{0}>0\) is not less than_ \[\left|{\cal V}(1,S_{1}(z_{0}),\ldots,S_{2n}(z_{0}))-{\cal V}(1,S_{1}(0), \ldots,S_{n}(0))\right|. \tag{4.4}\] **Proof.** The first claim of the theorem follows from the positive definiteness of the matrix \((\lambda I-A)(\lambda I-A)^{\top}-zI\) for \(z<0\). 
By Theorem 2.2, the number \({\cal V}(1,S_{1}(z_{0}),\ldots,S_{2n}(z_{0}))\) equals the number of complex-conjugate pairs of zeros for the polynomial \(\Phi(\lambda,z_{0})\). When the parameter \(z\) varies from \(0\) to \(z_{0}\), the discriminant \({\cal D}_{\lambda}(\Phi(\lambda,z))\) vanishes at any value of \(z\) where a pair of real zeros of \(\Phi(\lambda,z)\) transforms into a pair of complex-conjugate ones or vice versa. By (4.3), \({\cal F}(z)\) also vanishes at these values. \(\Box\) Theorem 4.1 claims that the degree of the distance equation generically equals \(n(n-1)\). One can immediately see that for a skew-symmetric matrix \(A\) this estimate is not valid. Moreover, for this type of matrices, the polynomial \({\cal F}(z)\) vanishes identically. Some other types of matrices that permit an explicit representation for the polynomial \(\Phi(\lambda,z)\), and, as a consequence, for the value \(d(A,{\mathbb{D}})\), in terms of the spectrum of \(A\) can be found in [13]. We summarize those results in the following **Theorem 4.3**: _Let all the eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\) of \(A\) be distinct. One has:_ \[\Phi(\lambda,z)\equiv\prod_{j=1}^{n}[(\lambda-c)^{2}-(\lambda_{j}-c)^{2}-z]\quad\mbox{for $A=\mbox{skew-symmetric }+cI_{n}$ }, \tag{4.5}\] _where \(c\in{\mathbb{R}}\) is an arbitrary scalar;_ \[\Phi(\lambda,z)\equiv\prod_{j=1}^{n}(\lambda^{2}-z+1-2\lambda\Re(\lambda_{j}))\quad\mbox{for orthogonal $A$ }\ \ ; \tag{4.6}\] \[\Phi(\lambda,z)\equiv\prod_{j=1}^{n}\left[(\lambda-\lambda_{j})^{2}-z\right]\quad\mbox{for symmetric $A$ }. \tag{4.7}\] For the case (4.5), \(\Phi(\lambda,z)\) has a multiple zero if \(n\geq 2\). For the case (4.6), \(\Phi(\lambda,z)\) has a multiple zero if \(n\geq 3\). In both cases, the distance \(d(A,{\mathbb{D}})\) is attained at a continuum of matrices in \({\mathbb{D}}\) [13]. **Example 4.2**: _Find \(d(A,{\mathbb{D}})\) for the skew-symmetric matrix_ \[A=\left[\begin{array}{rrrr}0&-4&2&-1\\ 4&0&7&3\\ -2&-7&0&11\\ 1&-3&-11&0\end{array}\right]\,.\] **Solution.** Here \[\Phi(\lambda,z)\equiv\left(\lambda^{4}-2\,\lambda^{2}z+200\,\lambda^{2}+z^{2}-200\,z+3249\right)^{2}\,,\] and \({\cal D}_{\lambda}(\Phi(\lambda,z))\equiv 0\). However, if we take \[{\cal D}_{\lambda}(\sqrt{\Phi(\lambda,z)})={\cal D}_{\lambda}(\lambda^{4}-2\,\lambda^{2}z+200\,\lambda^{2}+z^{2}-200\,z+3249)\] the result is the true distance equation \[11667456256\,z^{2}-2333491251200\,z+37907565375744=0\,.\] Its least positive zero equals \[100-\sqrt{6751}=\frac{1}{4}(\sqrt{314}-\sqrt{86})^{2}\] where \(\pm 1/2{\bf i}(\sqrt{314}-\sqrt{86})\) are the eigenvalues of \(A\). \(\Box\) **Remark**. A similar trick also works for the case of orthogonal matrices.

## 5 Examples and Computational Aspects

Once the canonical form of the distance equation is computed, Wilkinson's problem is nearly solved. Indeed, for a univariate algebraic equation, the exact number of real zeros, as well as their location, can be reliably determined via purely algebraic procedures. **Remark.** Theorem 4.1 claims that generically the degree of the distance equation equals \(n(n-1)\). Both examples below fall into this generic case. For instance, one has \(\deg{\cal F}(z)=870\) for \(n=30\). **Example 5.1**: _Find \(d(F_{n},{\mathbb{D}})\) for Frank's matrix [9]_ \[F_{n}=\left[\begin{array}{cccccc}n&n-1&n-2&\ldots&2&1\\ n-1&n-1&n-2&\ldots&2&1\\ 0&n-2&n-2&\ldots&2&1\\ 0&0&n-3&\ldots&2&1\\ \vdots&\vdots&&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&1&1\end{array}\right]\,.
\tag{5.1}\] **Solution.** For \(n=3\), one has \[\Phi(\lambda,z)=\lambda^{6}-12\,\lambda^{5}+\left(-3\,z+48\right)\lambda^{4}+ \left(24\,z-74\right)\lambda^{3}\] \[+\left(3\,z^{2}-73\,z+48\right)\lambda^{2}+\left(-12\,z^{2}+70\,z-12\right) \lambda-z^{3}+25\,z^{2}-33\,z+1\] and \[{\cal F}(z)=23839360000\,z^{6}-476315200000\,z^{5}+3522206312000\,z^{4}-1166836 8222400\,z^{3}\] \[+16297635326400\,z^{2}-6895772352000\,z+230443315200\,.\] Distance equation has only real zeros, namely \[z_{1}\approx 0.036482,\ z_{2}\approx 0.648383,\ z_{3}\approx 2.316991,\ z_{4} \approx 4.954165,\ z_{5}\approx 5.274176,\ z_{6}=27/4=6.75\,.\] Thus, \(d(F_{3},{\mathbb{D}})=\sqrt{z_{1}}\approx 0.191004\). To find the corresponding perturbation via (3.21), first evaluate the multiple zero for \(\Phi(\lambda,z_{1})\) via (2.6): \[\lambda_{*}\approx 0.602966\,.\] Then evaluate the unit left singular vector of the matrix \(\lambda_{*}I-A\) corresponding to \(\sqrt{z_{1}}\): \[U_{*}\approx\left[0.639244,\,-0.751157,\,-0.164708\right]^{\top}\] Finally, \[E_{*}\approx\left[\begin{array}{rrr}-0.019161&-0.041159&0.113343\\ 0.022516&0.048365&-0.133186\\ 0.004937&0.010605&-0.029204\end{array}\right]\,.\] The nearest to \(F_{3}\) matrix in \(\mathbb{D}\) \[B_{*}=F_{3}+E_{*}\approx\left[\begin{array}{rrr}2.980838&1.958840&1.113343\\ 2.022516&2.048365&0.866813\\ 0.004937&1.010605&0.970795\end{array}\right]\] possesses the spectrum \(\{\lambda_{*},\lambda_{*},6-2\lambda_{*}\approx 4.794067\}\). For \(n>3\), the set of nonreal zeros for the distance equation becomes nonempty, and its cardinality, relative to that of real, increases fastly with \(n\). \begin{tabular}{c|c|c|c|c} \(n\) & \(d(F_{n},\mathbb{D})\approx\) & coefficient size & number of real zeros & timing (s) \\ \hline 5 & \(4.499950\ \times 10^{-3}\) & \(\sim 10^{50}\) & 12 & \(-\) \\ 10 & \(3.925527\times 10^{-8}\) & \(\sim 10^{300}\) & 30 & \(-\) \\ 12 & \(1.849890\times 10^{-10}\) & \(\sim 10^{480}\) & 34 & 0.13 \\ 20 & \(3.757912\times 10^{-21}\) & \(\sim 10^{1690}\) & 62 & 5 \\ 30 & \(1.638008\times 10^{-36}\) & \(\sim 10^{4450}\) & 102 & 30 \\ \end{tabular} The results for \(F_{10}\) and \(F_{12}\) confirm estimations \(d_{10}\approx 3.93\cdot 10^{-8}\) and \(d_{12}\approx 1.85\cdot 10^{-10}\) given in [3]. \(\square\) **Example 5.2**: _Find \(d(K_{n},\mathbb{D})\) for Kahan's matrix [3, 12]_ \[K_{n}=\left[\begin{array}{rrrrr}1&-c&-c&\ldots&-c&-c\\ 0&s&-sc&\ldots&-sc&-sc\\ 0&0&s^{2}&\ldots&-s^{2}c&-s^{2}c\\ &&\ddots&\ldots&&\\ 0&0&0&\ddots&s^{n-2}&-s^{n-2}c\\ 0&0&0&\ldots&0&s^{n-1}\end{array}\right]\quad\mbox{for }s^{2}+c^{2}=1\,.\] **Solution.** We present computational results for two specialization of parameter values. The first one is \(s=3/5,c=4/5\): \begin{tabular}{c|c|c|c|c} \(n\) & \(d(K_{n},\mathbb{D})\approx\) & coefficient size & number of real zeros & timing (s) \\ \hline 5 & \(1.370032\times 10^{-3}\) & \(\sim 10^{310}\) & 8 & \(-\) \\ 10 & \(5.470834\times 10^{-6}\) & \(\sim 10^{2970}\) & 48 & \(-\) \\ 15 & \(2.246949\times 10^{-8}\) & \(\sim 10^{10590}\) & 138 & 6.7 \\ 20 & \(9.245309\times 10^{-11}\) & \(\sim 10^{25730}\) & 288 & 145.4 \\ 25 & \(3.984992\times 10^{-10}\) & \(\sim 10^{52910}\) & 258 & 218.23 \\ 30 & \(1.240748\times 10^{-11}\) & \(\sim 10^{92460}\) & 464 & 937.66 \\ \end{tabular} The second test series correspond to a specialization \(s^{n-1}=1/10\) treated in [3]. For this case, an extra difficulty results from approximation of the entries of the matrix \(K_{n}\) as rational numbers. 
This results in increasing the length of the coefficients of the distance equation. Compared with the previous case, the timing increases drastically, i.e. more than \(10^{2}\) times for the same specializations of \(n\). \begin{tabular}{c|c|c} \(n\) & \(d(K_{n},\mathbb{D})\approx\) & number of real zeros \\ \hline 6 & \(4.704940\times 10^{-4}\) & 10 \\ 10 & \(1.538157\times 10^{-5}\) & 18 \\ 15 & \(4.484974\times 10^{-7}\) & 28 \\ 20 & \(1.904858\times 10^{-8}\) & 38 \\ \end{tabular} The results for \(K_{6},K_{15}\) and \(K_{20}\) confirm estimations given in [2]. \(\square\) It should be emphasized however that computation of the whole sets of real zeros for the distance equation is redundant for evaluation of \(d(A,\mathbb{D})\). We need to find just only the least positive zero of \(\mathcal{F}(z)\). For this aim, the determinantal representation (4.3) for this polynomial might be sufficient for the real zero localization. According to Theorem 4.2, the lower estimate for the number of real zeros of \(\mathcal{F}(z)\) lying within the interval \([0,z_{0}],z_{0}>0\) is given by the number (4.4). If this number is not zero then at least one real zero for \(\mathcal{F}(z)\) lies in \([0,z_{0}]\), and the next step in its localization might be the treatment of the matrix \(S(z_{0}/2)\). Experiments with the Frank's matrix (5.1) demonstrate the unambiguity of the zero isolation process. For the matrix \(F_{10}\), one has \(\mathcal{V}(1,S_{1}(0),\ldots,S_{10}(0))=0\), i.e. all the eigenvalues of \(A\) are real. Then (4.4) coincides with \[\mathcal{V}_{z_{0}}:=\mathcal{V}(1,S_{1}(z_{0}),\ldots,S_{10}(z_{0}),\ldots,S_{ 20}(z_{0}))\,.\] Some specializations for \(z_{0}\) \begin{tabular}{c||c|c|c|c} \(z_{0}\) & \(10^{-3}\) & \(10^{-9}\) & \(2\times 10^{-15}\) & \(10^{-15}\) \\ \hline \(\mathcal{V}_{z_{0}}\) & 5 & 3 & 1 & 0 \\ \end{tabular} demonstrate that the number of real zeros of \(\mathcal{F}(z)\) lying in any interval \([0,z_{0}]\) happens to be equal to \(\mathcal{V}_{z_{0}}\). For instance, there are precisely \(5\) zeros within the interval \([0,10^{-3}]\), namely \[1.540976\times 10^{-15},\ 7.739368\times 10^{-15},7.463686\times 10^{-13},\ 1.403 045\times 10^{-9},1.412301\times 10^{-5}\,.\] However, for the case of the matrix \[\left[\begin{array}{ccc}1&1&-2\\ 2&1&0\\ -3&1&1\end{array}\right]\] variations \(\mathcal{V}_{0.4}=0,\mathcal{V}_{0.5}=1,\mathcal{V}_{2.25}=0\) permit one to locate single zeros within the intervals \([0.4,0.5]\) and \([0.5,2.25]\) but are unable to detect this number for \([0.4,2.25]\). ## 6 Counterexamples We exemplify here two cases * The minimal positive zero of the distance equation not always provides the value \(d^{2}(A,\mathbb{D})\) even if we restrict ourselves to the rank \(1\) perturbation matrices; * The distance \(d(A,\mathbb{D})\) is not always provided by the rank \(1\) perturbations. 
**Example 6.1**: _For the matrix_ \[A(\epsilon)=\left[\begin{array}{cccc}0&1&1&0\\ -1&0&0&1\\ \epsilon&0&0&1\\ 0&0&-1&0\end{array}\right]\,,\] _find \(d(A(\epsilon),\mathbb{D})\) for \(\epsilon>0\)._ **Solution.** The distance equation is provided by the polynomial \[\mathcal{F}(z)\equiv 65536\epsilon^{8}\left[(\epsilon+2)^{4}z^{2}-2\epsilon(\epsilon+8)(\epsilon+2)^{2}z+\epsilon^{2}(\epsilon-8)^{2}\right]^{2}\cdot\left[(\epsilon+1)z-3\epsilon-1\right]^{4}\] \[\times(z^{2}-3\,z+1)\left[z^{2}-\left(\epsilon^{2}+3\right)z+(\epsilon+1)^{2}\right]\,.\] Its zeros are \[z_{1}=\frac{\epsilon(\sqrt{\epsilon}-\sqrt{8})^{2}}{(\epsilon+2)^{2}},\ z_{2}=\frac{\epsilon(\sqrt{\epsilon}+\sqrt{8})^{2}}{(\epsilon+2)^{2}},\ z_{3}=\frac{3\epsilon+1}{\epsilon+1},\] \[z_{4}=\frac{3-\sqrt{5}}{2}\approx 0.381966,\ z_{5}=\frac{3+\sqrt{5}}{2}\approx 2.618033\,,\] \[z_{6}=\frac{1}{2}\left(\epsilon^{2}+3-|\epsilon-1|\sqrt{\epsilon^{2}+2\,\epsilon+5}\right),\ z_{7}=\frac{1}{2}\left(\epsilon^{2}+3+|\epsilon-1|\sqrt{\epsilon^{2}+2\,\epsilon+5}\right)\] are all real. Zero \(z_{4}\) is simple; it coincides with the square of a singular value of the matrix \(A\), and the polynomial \(\Phi(\lambda,z_{4})\) has the real double zero \(\lambda_{4}=0\). The corresponding value of the distance function from \(A\) to \(\mathbb{D}\) does not depend on \(\epsilon\); it equals2 Footnote 2: Amazing coincidence with the reciprocal to the _golden ratio_! \[\sqrt{z_{4}}=\frac{\sqrt{5}-1}{2}\approx 0.618033\,.\] The corresponding perturbation and matrix in \(\mathbb{D}\) are as follows: \[E_{4}=\frac{1}{10}\left[\begin{array}{cccc}0&\sqrt{5}-5&3\sqrt{5}-5&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&-2\sqrt{5}&5-\sqrt{5}&0\end{array}\right],B_{4}=\frac{1}{10}\left[\begin{array}{cccc}0&5+\sqrt{5}&5+3\sqrt{5}&0\\ -10&0&0&10\\ 10\epsilon&0&0&10\\ 0&-2\sqrt{5}&-5-\sqrt{5}&0\end{array}\right]\,.\] The double eigenvalue of \(B_{4}\) is just \(0\). Next, we do not need to treat the zeros \(z_{6},z_{7}\) and \(z_{3}\), since they are greater than \(z_{4}\). Also \(z_{2}>z_{1}\), therefore, the two zeros that can compete for the distance value are \(z_{1}\) and \(z_{4}\). It can be verified that \[z_{1}\leq z_{4}\ \text{iff}\ \epsilon\leq\epsilon_{2}\ \text{where}\ \epsilon_{2}=2\sqrt{2}(\sqrt{5}+3)\sqrt{\sqrt{5}+2}+7\sqrt{5}+15\approx 61.133652\,.\] It looks like \(d(A,\mathbb{D})=\sqrt{z_{1}}\) for \(\epsilon\leq\epsilon_{2}\). However, this is not true for some subinterval in \([0,\epsilon_{2}]\). Indeed, \(z_{1}\) is a double zero for \(\mathcal{F}(z)\), and the polynomial \(\Phi(\lambda,z_{1})\) possesses two double zeros: \[\lambda_{1,2}=\pm\frac{\sqrt{K(\epsilon)}}{\epsilon+2}\quad\text{where}\ K(\epsilon):=\sqrt{2}(\epsilon-\sqrt{2}\sqrt{\epsilon}+2)\left(\sqrt{\epsilon}+\frac{1}{\sqrt{2}}\right)\left(\sqrt{\epsilon}+\frac{\sqrt{5}+1}{\sqrt{2}}\right)\left(\sqrt{\epsilon}-\frac{\sqrt{5}-1}{\sqrt{2}}\right)\,.\] These zeros are real only for \[\epsilon\geq\epsilon_{1}\ \text{where}\ \epsilon_{1}:=3-\sqrt{5}\approx 0.763932\,.\] For the values \(\epsilon<\epsilon_{1}\), the minimal positive zero of the distance equation is not responsible for the distance from \(A\) to \(\mathbb{D}\). It seems that \(d(A,\mathbb{D})=\sqrt{z_{4}}\) for \(\epsilon<\epsilon_{1}\). However, this statement is also invalid for some subinterval of the parameter values.
The matrix \[\widetilde{E}(\epsilon):=\frac{\epsilon(8-\epsilon)}{(\epsilon^{2}+16)^{2}} \left[\begin{array}{cccc}0&-4\,\epsilon&\epsilon^{2}&0\\ -4\,\epsilon&0&0&\epsilon^{2}\\ -16&0&0&4\,\epsilon\\ 0&-16&4\,\epsilon&0\end{array}\right]\] represents a rank 2 perturbation that provides for the matrix \(A(\epsilon)+\widetilde{E}(\epsilon)\) a pair of double eigenvalues \[\lambda_{1,2}=\pm\frac{1}{\epsilon^{2}+16}\sqrt{(\epsilon^{2}+4\,\epsilon-16) \left(3\,\epsilon^{2}+4\,\epsilon+16\right)}\,.\] These eigenvalues are non-real for \(\epsilon<2(\sqrt{5}-1)\approx 2.472136\). For these parameter values, one has \[\|\widetilde{E}(\epsilon)\|=\frac{\sqrt{2}\epsilon(8-\epsilon)}{\epsilon^{2}+ 16}\] and this value is lesser than \(\sqrt{z_{1}}\) for \(\epsilon<\epsilon_{c}\) where \(\epsilon_{c}\) denotes the least positive zero of the polynomial \[\epsilon^{8}-80\,\epsilon^{7}-368\,\epsilon^{6}-1024\,\epsilon^{5}+64\, \epsilon^{4}-9216\,\epsilon^{3}-16384\,\epsilon^{2}-32768\,\epsilon+65536\,;\] i.e. \(\epsilon_{c}\approx 1.055249\). Figure 1. Summarizing: \[d(A(\epsilon),\mathbb{D})=\left\{\begin{array}{cl}\sqrt{2}\epsilon(8- \epsilon)/(\epsilon^{2}+16)&\mbox{if $\epsilon\in[0,\epsilon_{c}]$}\\ \sqrt{\epsilon}|\sqrt{\epsilon}-\sqrt{8}|/(\epsilon+2)&\mbox{if $\epsilon\in[ \epsilon_{c},\epsilon_{2}]$}\\ (\sqrt{5}-1)/2&\mbox{if $\epsilon>\epsilon_{2}$}\end{array}\right.\] The plot is displayed in Fig. 1 (the first formula -- red, the second one -- blue, the third one -- green). \(\Box\) **Remark.** As it is mentioned in Introduction, the case where \(d(A,\mathbb{D})\) is achieved at the rank 2 matrix (i.e. the nearest in \(\mathbb{D}\) matrix possesses two double imaginary eigenvalues) is beyond our consideration. We are not able even to conjecture whether this is a zero probability event or not. ## 7 Complex Perturbations The method proposed above can be extended to the case of complex perturbations. For a real matrix \(A\), we are now looking for the distance to the nearest complex matrix \(B\) with multiple eigenvalue: \[d_{C}(A,\mathbb{D}):=\min\|B-A\|\quad\mbox{subject to }{\cal D}_{\lambda}(f_{B}( \lambda))=0,\ B\in\mathbb{C}^{n\times n}\,.\] **Warning.** The present section should be considered as a draft of a separate publication to be prepared sometime afterwards. We skip here the details of algebraic backgrounds, proofs of theoretical results and do not bother ourselves with mentioning that the validity of some of the declared results is subject to several extra assumptions preventing the appearance of troubles similar to those dealt with in the previous section. Consider the polynomial \[\Theta(a,b,z)=\det\left[((a+b{\bf i})I-A)\left((a-b{\bf i})I-A^{\top}\right)-zI\right] \tag{7.1}\] and generate the system of algebraic equations \[\Theta=0,\ \partial\Theta/\partial a=0,\ \partial\Theta/\partial b=0\,. \tag{7.2}\] We are looking for the real solutions to this system. Since \[\Theta(a,0,z)\stackrel{{(\ref{eq:2})}}{{=}}\Phi(a,z)\,,\] this solution set includes that for the system (3.18). 
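As an algebraic sanity check of (7.1) and of the reduction \(\Theta(a,0,z)=\Phi(a,z)\), the following is a minimal SymPy sketch, given purely for illustration (the exact computations in this paper were carried out in Maple). The \(3\times 3\) matrix used is the one appearing in Example 7.1 below, and the evenness of \(\Theta\) in \(b\) established in Theorem 7.1 below can also be confirmed symbolically.

```python
import sympy as sp

a, b, z, lam = sp.symbols('a b z lambda', real=True)
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [-91, -55, -13]])   # the matrix of Example 7.1
I3 = sp.eye(3)

# Theta(a, b, z) from (7.1); the imaginary unit cancels upon expansion
Theta = sp.expand(sp.det(((a + sp.I*b)*I3 - A) * ((a - sp.I*b)*I3 - A.T) - z*I3))

# Phi(lambda, z) from (3.8)
Phi = sp.expand(sp.det((lam*I3 - A) * (lam*I3 - A.T) - z*I3))

print(sp.simplify(Theta.subs(b, 0) - Phi.subs(lam, a)))  # 0: Theta(a, 0, z) = Phi(a, z)
print(sp.simplify(Theta - Theta.subs(b, -b)))            # 0: Theta is even in b

# the system (7.2)
system = [Theta, sp.diff(Theta, a), sp.diff(Theta, b)]
```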
**Theorem 7.1**: _If the system (7.2) possesses a solution \((a_{0},b_{0},z_{0})\) with \(b_{0}\neq 0\) then it has the solution \((a_{0},-b_{0},z_{0})\)._ **Proof.** Polynomial \(\Theta(a,b,z)\) is even in \(b\): \[\Theta(a,-b,z)=\det\left[((a+{\bf i}b)I-A^{\top})((a-{\bf i}b)I-A)-zI\right]\] \[=\det\left[\left\{((a+{\bf i}b)I-A^{\top})((a-{\bf i}b)I-A)\right\}^{\top}-zI\right]\] \[=\det\left[((a-{\bf i}b)I-A^{\top})((a+{\bf i}b)I-A)-zI\right]=\Theta(a,b,z)\,.\] Consequently \(\Theta^{\prime}_{a}\) is even in \(b\) while \(\Theta^{\prime}_{b}\) is odd \(b\). The latter becomes even on dividing by \(b\). \(\Box\) Our aim is to eliminate the variables \(a\) and \(b\) from the system (7.2), i.e. to find the **bivariate discriminant**\(\mathcal{D}_{a,b}(\Theta)\) for the polynomial \(\Theta(a,b,z)\) treated with respect to these variables. The discriminant \(\mathcal{D}_{x,y}(F)\) of a polynomial \(F(x,y,z)\in\mathbb{C}[x,y,z]\) is formally defined as the result of elimination of variables \(x\) and \(y\) from the system of equations \[F=0,\ \partial F/\partial x=0,\ \partial F/\partial y=0\,. \tag{7.3}\] This is a polynomial in \(z\) and its vanishment at \(z=z_{0}\in\mathbb{C}\) is the necessary and sufficient condition for the existence of solution \((x_{0},y_{0},z_{0})\in\mathbb{C}^{3}\) to the system (7.3), or equivalently, for the existence of the multiple zero \((x_{0},y_{0})\) for the polynomial \(F(x,y,z_{0})\). Constructive computation of discriminant can be implemented in several ways, and we will exemplify below the procedure based of the Bezout construction of the resultant [6]. **Theorem 7.2**: _The discriminant \(\mathcal{D}_{a,b}(\Theta(a,b,z))\) is factorized as follows:_ \[\mathcal{D}_{a,b}(\Theta(a,b,z))\equiv z^{n(n+1)/2}\mathcal{F}(z)\widetilde{ \mathcal{F}}(z)\,. \tag{7.4}\] _Here \(\mathcal{F}(z)\) is defined by (3.9), while_ \[\widetilde{\mathcal{F}}(z)\in\mathbb{R}[z],\ \deg\widetilde{\mathcal{F}}(z)=n(n-1 )(n-2)/2\,,\] _(For \(n=2\) polynomial \(\widetilde{\mathcal{F}}(z)\) is just a constant)._ According to Section 3, the distance equation \(\mathcal{F}(z)=0\) is responsible for the rank \(1\) real perturbation that provides the distance \(d(A,\mathbb{D})\). It turns out that the equation \[\widetilde{\mathcal{F}}(z)=0\] is responsible for the rank \(1\) imaginary perturbation. Its real zero \(\widetilde{z}_{0}\) corresponds to a pair of multiple zeros of the polynomial \(\Theta(a,b,\widetilde{z}_{0})\), and these zeros are either in the form \((a_{0},\pm\beta_{0})\) or in the form \((a_{0},\pm\mathbf{i}\beta_{0})\) with real \(\beta_{0}\). We are definitely interested only in the real solutions for the system (7.2). **Theorem 7.3**: _Let the system (7.2) possess a real solution \((a_{0},b_{0},\widetilde{z}_{0})\) with \(\widetilde{z}_{0}>0,b_{0}\neq 0\). Denote \(U_{0}\in\mathbb{C}^{n},\|U_{0}\|=1\) the left singular vector for the matrix \((a_{0}+\mathbf{i}b_{0})I-A\) corresponding to the singular value \(\sqrt{\widetilde{z}_{0}}\). Then the rank \(1\) perturbation_ \[E_{0}=U_{0}U_{0}^{\mathsf{H}}((a_{0}+\mathbf{i}b_{0})I-A) \tag{7.5}\] _is such that \(\|E_{0}\|=\sqrt{\widetilde{z}_{0}}\) and the matrix \(B_{0}=A+E_{0}\in\mathbb{C}^{n\times n}\) possesses the double eigenvalue \(a_{0}+\mathbf{i}b_{0}\)._ **Remark.** Evidently, the matrix \(\overline{E_{0}}\) provides for the matrix \(\overline{B_{0}}=A+\overline{E_{0}}\) the double eigenvalue \(a_{0}-{\bf i}b_{0}\). 
In view of Theorem 7.3, the distance \(d_{C}(A,{\mathbb{D}})\) results from the competition between the least positive zero of \({\cal F}(z)\) and the minimal positive zero of \(\widetilde{\cal F}(z)\) that corresponds to a real solution of the system (7.2). Computation of the polynomial \(\widetilde{\cal F}(z)\) can be simplified if we take into account Theorem 7.1. Substitute \[{\mathfrak{b}}:=b^{2}\] in the polynomials of the system (7.2) and denote \[\Xi(a,{\mathfrak{b}},z):=\Theta(a,b,z),\ \Xi_{a}(a,{\mathfrak{b}},z):=\Theta^{\prime}_{a}(a,b,z),\ \Xi_{{\mathfrak{b}}}(a,{\mathfrak{b}},z):=\Theta^{\prime}_{b}(a,b,z)/b\,.\] **Theorem 7.4**: _The result of elimination of the variables \(a\) and \({\mathfrak{b}}\) from the system_ \[\Xi=0,\ \Xi_{a}=0,\ \Xi_{{\mathfrak{b}}}=0 \tag{7.6}\] _is the equation_ \[z^{n(n-1)/2}\widetilde{\cal F}(z)=0\,.\] If \(\widetilde{z}_{0}\) is a positive zero of \(\widetilde{\cal F}(z)\), the corresponding real solution to the system (7.6) might have the \({\mathfrak{b}}\)-component either positive or negative. We are interested only in the positive variant. **Example 7.1**: _Find \(d_{C}(A,{\mathbb{D}})\) for_ \[A=\left[\begin{array}{ccc}0&1&0\\ 0&0&1\\ -91&-55&-13\end{array}\right]\,.\] **Solution.** First compute the polynomial \({\cal F}(z)\) via (3.9): \[\begin{array}{c}{\cal F}(z):=33076090700402342058246544\,z^{6}-377039198861306289080145178864\,z^{5}\\ +937864902703881321034450183916\,z^{4}-771868276098720970149792503999\,z^{3}\\ +211070978787821517684022650624\,z^{2}\\ -510584100140452518540394496\,z+319295875259784560640000\,.\end{array}\] Its real zeros are as follows \[z_{1}\approx 0.739336,\ 0.765571,\ 0.980468,\ 11396.658548\,.\] Next compose the polynomial \(\Xi(a,\mathfrak{b},z)\): \[\Xi(a,\mathfrak{b},z)=-z^{3}+(3a^{2}+3\mathfrak{b}+26a+11477)z^{2}\] \[-(3\,a^{4}+6\,a^{2}\mathfrak{b}+3\,\mathfrak{b}^{2}+52a^{3}+52a\mathfrak{b}+11756a^{2}+11536\mathfrak{b}+11466\,a+19757)z\] \[+\left(a^{2}+\mathfrak{b}+14\,a+49\right)\left((a^{2}+\mathfrak{b}+6\,a+13)^{2}-16\,\mathfrak{b}\right)\,.\] We now briefly trace the procedure of elimination of \(a\) and \(\mathfrak{b}\) from the system (7.6). Consider the monomial sequence \[\mathbb{M}:=\left\{\mathfrak{m}_{j}(a,\mathfrak{b})\right\}=\left\{1,a,\mathfrak{b},\mathfrak{b}^{2}\right\}.\] It is possible **to reduce** the polynomial \(\mathfrak{m}_{j}\Xi\) **modulo** \(\Xi_{a}\) and \(\Xi_{\mathfrak{b}}\), i.e., 
to find the polynomials \(\{\beta_{jk}(z)\}_{j,k=1}^{4}\subset\mathbb{R}[z]\) and \(\{p_{j1}(a,\mathfrak{b},z),p_{j2}(a,\mathfrak{b},z)\}_{j=1}^{4}\subset\mathbb{R}[a,\mathfrak{b},z]\) satisfying the identity \[\mathfrak{m}_{j}\Xi\equiv\beta_{j1}(z)+\beta_{j2}(z)a+\beta_{j3}(z)\mathfrak{b}+\beta_{j4}(z)\mathfrak{b}^{2}+p_{j1}\Xi_{a}+p_{j2}\Xi_{\mathfrak{b}}\text{ for }j\in\{1,2,3,4\}\,.\] For instance, \[\beta_{11}(z)=-17718805921\,z^{2}+610367232\,z+22937600,\ \beta_{12}(z)=-39353600\,z+5324800,\] \[\beta_{13}(z)=146694400\,z-512000,\ \beta_{14}(z)=-307200,\ \ldots,\] \[\beta_{44}(z)=-76550493273549926400\,z^{3}+162810741053705011200\,z^{2}-1867736871075840000\,z-50331648000000\,.\] Compose the Bezout matrix \[\mathfrak{B}(z):=\left[\beta_{jk}(z)\right]_{j,k=1}^{4}\,.\] Then \[\det\mathfrak{B}(z)\equiv z^{3}\widetilde{\mathcal{F}}(z)\] where \[\widetilde{\mathcal{F}}(z)=412324266119803814719539025\,z^{3}+33923334498676415590177600\,z^{2}+691077589890510378371072\,z-899669298077697638400\,.\] For any zero \(\widetilde{z}_{0}\) of this polynomial, the corresponding \(a\)- and \(\mathfrak{b}\)-components of the solution to the system (7.6) can be obtained in the following way. Denote by \(\{\mathfrak{B}_{4j}\}_{j=1}^{4}\) the cofactors of \(\det\mathfrak{B}\) corresponding to the entries of the last row of the matrix \(\mathfrak{B}\). Then the \(a\)-component of the solution is connected with the \(z\)-component as \[a=\frac{\mathfrak{B}_{42}}{\mathfrak{B}_{41}}=\frac{43719663040898080379\,z^{2}+2929017747573439808\,z+29336262189312000}{2(624300876564482975\,z^{2}-226254560538037856\,z-3469512291865600)}\] while the \(\mathfrak{b}\)-component is given by \[\mathfrak{b}=\frac{\mathfrak{B}_{43}}{\mathfrak{B}_{41}}=\frac{3083432482762007609519\,z^{3}+1101690698089389073600\,z^{2}+6718638632998878456\,z-129087561954918400}{16(624300876564482975\,z^{2}-226254560538037856\,z-3469512291865600)}\,.\] The polynomial \(\widetilde{\mathcal{F}}(z)\) possesses a single real zero, namely \[\widetilde{z}_{1}\approx 0.0012268490707391199222512104943\] (all the decimals in this approximation are error-free), and substitution of this value into the last formulas yields \[a=a_{1}\approx-4.403922040624116177182912013601,\ \mathfrak{b}=\mathfrak{b}_{1}\approx 0.750705046015830894563798035515\,.\] Since \(\mathfrak{b}_{1}>0\), one may claim that \[d_{C}(A,\mathbb{D})=\sqrt{\widetilde{z}_{1}}\approx 0.035026405335676681771543151648\,.\] The two perturbations in \(\mathbb{C}^{3\times 3}\) providing this distance correspond to the solutions \[(a_{1},b_{1},\widetilde{z}_{1})\ \text{and}\ (a_{1},-b_{1},\widetilde{z}_{1}),\ \text{where}\ b_{1}=\sqrt{\mathfrak{b}_{1}}\approx 0.866432366671415902596255690462\,,\] of the system (7.2). Let us compute via (7.5) the one corresponding to \((a_{1},-b_{1},\widetilde{z}_{1})\). 
The unit left singular vector of \((a_{1}-\mathbf{i}b_{1})I-A\) corresponding to the singular value \(\sqrt{\widetilde{z}_{1}}\) is as follows \[U_{1}\approx[0.930609,\ 0.360923+0.039918\,\mathbf{i},\ 0.045052+0.008866\,\mathbf{i}]^{\top}\] and the minimal perturbation is \[E_{1}\approx\left[\begin{array}{lll}0.001289-0.000442\mathbf{i}&-0.007120+0.000832\mathbf{i}&0.031666+0.002551\mathbf{i}\\ 0.000519-0.000116\mathbf{i}&-0.002797+0.000017\mathbf{i}&0.012172+0.002348\mathbf{i}\\ 0.000067-0.000009\mathbf{i}&-0.000353-0.000028\mathbf{i}&0.001509+0.000425\mathbf{i}\end{array}\right]\,.\] The spectrum of the matrix \(A+E_{1}\) is \[\left\{a_{1}-\mathbf{i}b_{1},\ a_{1}-\mathbf{i}b_{1},\ -13-2(a_{1}-\mathbf{i}b_{1})\approx-4.192156+1.732865\,\mathbf{i}\right\}.\] To test the performance of the algorithm sketched in the present section, we consider the following matrix from the Matlab gallery('grcar',6). **Example 7.2**: _Find \(d_{C}(A,\mathbb{D})\) for_ \[A=\left[\begin{array}{rrrrrr}1&1&1&1&0&0\\ -1&1&1&1&1&0\\ 0&-1&1&1&1&1\\ 0&0&-1&1&1&1\\ 0&0&0&-1&1&1\\ 0&0&0&0&-1&1\end{array}\right]\,.\] **Solution.** Here the minimal zero of \(\mathcal{F}(z)\) equals \(z_{1}\approx 0.116565\) and that of \(\widetilde{\mathcal{F}}(z)\) equals \[\widetilde{z}_{1}\approx 0.04630491415327188209539627157\,.\] The latter corresponds to the real solutions of the system (7.2): \[(a_{1},\pm b_{1},\widetilde{z}_{1})\text{ where }a_{1}\approx 0.753316,\ b_{1}\approx-1.591155\,.\] Thus, one obtains \[d_{C}(A,\mathbb{D})=\sqrt{\widetilde{z}_{1}}\approx 0.2151857666140395125353\,.\] This confirms the estimate \(d_{C}(A,\mathbb{D})\approx 0.21519\) from [2, 3]. For the solution \((a_{1},b_{1},\widetilde{z}_{1})\), the spectrum of the matrix in \(\mathbb{D}\) nearest to \(A\) is as follows \[\{0.361392-1.944783\mathbf{i},1.139422-1.239762\mathbf{i},1.502453-0.616966\mathbf{i},1.490100+0.619201\mathbf{i},a_{1}+\mathbf{i}b_{1},a_{1}+\mathbf{i}b_{1}\}\,.\] ## 8 Conclusion We have investigated Wilkinson's problem of evaluating the distance from a given matrix to the set of matrices possessing multiple eigenvalues. The proposed approach consists in the construction of a distance equation whose zero set contains the critical values of the squared distance function. This construction is realized within the framework of symbolic computation, i.e., the procedure consists of a finite number of elementary algebraic operations on the entries of the matrix. The representation of the distance equation with the aid of the discriminant function should not come as a complete surprise. Indeed, Wilkinson's problem is that of evaluating the distance to the discriminant manifold in the space of matrix entries, so the appearance of the discriminant in a solution to the problem is natural. More astonishing is the emergence of the discriminant in nearly _any_ problem of evaluating the distance from a point to an algebraic manifold in a multidimensional space [21, 22]. A direction for further research is clearly related to the material of Section 6, i.e., the problem of the existence of a rank 2 minimal perturbation providing \(d(A,\mathbb{D})\).
2303.08106
Domain Generalization in Machine Learning Models for Wireless Communications: Concepts, State-of-the-Art, and Open Issues
Data-driven machine learning (ML) is promoted as one potential technology to be used in next-generation wireless systems. This led to a large body of research work that applies ML techniques to solve problems in different layers of the wireless transmission link. However, most of these applications rely on supervised learning which assumes that the source (training) and target (test) data are independent and identically distributed (i.i.d). This assumption is often violated in the real world due to domain or distribution shifts between the source and the target data. Thus, it is important to ensure that these algorithms generalize to out-of-distribution (OOD) data. In this context, domain generalization (DG) tackles the OOD-related issues by learning models on different and distinct source domains/datasets with generalization capabilities to unseen new domains without additional finetuning. Motivated by the importance of DG requirements for wireless applications, we present a comprehensive overview of the recent developments in DG and the different sources of domain shift. We also summarize the existing DG methods and review their applications in selected wireless communication problems, and conclude with insights and open questions.
Mohamed Akrout, Amal Feriani, Faouzi Bellili, Amine Mezghani, Ekram Hossain
2023-03-13T15:52:30Z
http://arxiv.org/abs/2303.08106v1
Domain Generalization in Machine Learning Models for Wireless Communications: Concepts, State-of-the-Art, and Open Issues ###### Abstract Data-driven machine learning (ML) is promoted as one potential technology to be used in next-generations wireless systems. This led to a large body of research work that applies ML techniques to solve problems in different layers of the wireless transmission link. However, most of these applications rely on supervised learning which assumes that the source (training) and target (test) data are independent and identically distributed (i.i.d). This assumption is often violated in the real world due to domain or distribution shifts between the source and the target data. Thus, it is important to ensure that these algorithms generalize to out-of-distribution (OOD) data. In this context, domain generalization (DG) tackles the OOD-related issues by learning models on different and distinct source domains/datasets with generalization capabilities to unseen new domains without additional finetuning. Motivated by the importance of DG requirements for wireless applications, we present a comprehensive overview of the recent developments in DG and the different sources of domain shift. We also summarize the existing DG methods and review their applications in selected wireless communication problems, and conclude with insights and open questions. _Keywords_: ML-aided wireless networks, Out-of-distribution generalization, Domain generalization ## I Introduction ### _Motivation_ The envisioned design, standardization1, and deployment of ML in wireless networks require the establishment of evaluation guidelines to properly assess the true potential of data-driven methods. Nevertheless, almost all the openly published ML techniques for wireless systems have several limitations such as \(i)\) difficulty to generalize under a _distribution shift_, \(ii)\) inability to continuously learn from different scenarios, and \(iii)\) inability to _quickly_ adapt to unseen scenarios, to name a few. Their showcased performance seems over-fitted to a specific set of simulation settings or fixed datasets, thereby limiting their attractiveness to compete with classical methods at the moment. As one example, the linear minimum mean-square error (LMMSE) estimator of an arbitrary channel model is considered by the industry as one of the most robust estimators in practice. While it is always possible to beat the LMMSE estimator with deep neural network (DNNs) approximators [2], this fact holds only for an _a priori_ known model that is used to generate training and test datasets on which DNNs are trained and also evaluated. When the distributions of training and test datasets are different (e.g., Ricean vs. Rayleigh, or sparse vs. rich-scattering channels), the performance of DNNs deteriorates appreciably due to domain distribution gaps. Furthermore, the lack of real-world wireless communication datasets aggravates the uncertainty toward the practical deployment of ML-based methods. This calls for the development of new ML training algorithms and the establishment of rigorous evaluation protocols to assess their OOD generalization. Footnote 1: See 3GPP Release 18 [1, Section 9.2] for some potential use cases of ML in wireless. In this work, we focus on generalization under domain shift. This includes any change in the distribution between the training (i.e., source) data and the target (i.e., test data). 
The most studied type of distribution shift is _covariate_ shift when the distribution of the model inputs (or features) changes between the source and the target domains [3]. It was shown that the performance of DNNs degrades drastically due to small variations or perturbations in the training datasets [4]. Thus, the acclaimed success of deep learning (DL) is mostly driven by the power of supervised learning. One straightforward idea to overcome domain shift is to adapt the model to the new domain via additional finetuning using techniques such as transfer learning [5] and domain adaptation [6]. However, this is not always feasible in practice because \(i)\) target _labeled_ data may not be available for finetuning and \(ii)\) the finetuning or adaptation may take a long time in contrast to the "real time" requirement in most wireless applications. This motivates the DG problem [7] to handle domain shift _without_ requiring target domain data. DG has been extensively studied in the last decade in the ML community which led to a broad spectrum of methodologies and learning techniques. Moreover, DG was examined in different applications, namely, computer vision [8, 9], natural language processing [10, 11], and medical imaging [12], etc. Here, we emphasize the importance of the DG problem in wireless applications to advance the current state-of-the-art research, and raise attention to the problem of domain shift which can seriously impede the success of ML techniques in wireless networks. Specifically, we highlight the importance of leveraging wireless communication domain knowledge to tailor or design more generalizable ML algorithms. This work provides a timely and comprehensive overview of the DG research landscape and insights into promising future research directions. The scope of this paper is limited to the DG problem as defined above. At the time of the writing, we have identified several DG variants proposed in the literature that we will briefly discuss but we focus on the standard definition of the DG problem. Other related fields such as domain adaptation, transfer learning, zero-shot learning, multi-task learning, and test time training are beyond the scope of this work. However, we will explain the difference between these fields and DG. In addition, we do distinguish between the terms "generalization" and "robustness", unlike most wireless communication papers which use them interchangeably. Here, generalization which is also known as _model robustness_ denotes the ability of DNNs to generalize to unseen scenarios under distribution shifts. Robustness, however, refers to the stability of DNNs' performance under noise and adversarial examples, i.e., _adversarial robustness_[13]. ### _Contributions and Organization of the Paper_ The main contributions of this paper are summarized as follows: * We define the DG problem and present four types of distribution shifts. We then contrast DG to existing research fields such as domain adaptation, transfer learning, continual learning, etc. * We summarize different ML methodologies for DG which focus on the following three DNN training steps: (i) data manipulation to cover richer domains pertaining to a given dataset, (ii) representation learning to acquire domain-invariant features enabling generalization, and (iii) learning frameworks which go beyond the standard gradient-based DNN optimization. 
* We also review the literature on previous attempts for applying ML techniques for DG in several wireless communications problems such as channel decoding, beamforming, multiple-input multiple-output (MIMO) channel estimation, and reconfigurable intelligent surface (RIS)-aided communications. To the best of our knowledge, this is the first initiative to reconsider the existing applications of ML techniques in wireless research from the DG perspective. * We present the main challenges facing the application of data-driven machine learning techniques in wireless communication under DG requirements and discuss their potential for improving the network performance. The rest of the paper is organized as illustrated in Fig. 1. In Section II, we introduce the DG problem formulation and show its key differences with related research fields. State-of-the-art algorithms for DG belonging to data manipulation, representation learning, and learning paradigms are reviewed in Sections III, IV, and V, respectively. Section VI showcases the recent advances of the reviewed DG algorithms in several wireless communication problems, followed by a summary of the learned lessons from their applications. Finally, we outline in Section VII potential research directions, from which we draw out our concluding remarks in Section VIII. ## II Background In this section, we will define the scope of the tutorial and highlight the resemblance between domain generalization and other related research fields. We start by introducing the following definitions. ### _DG Problem Formulation_ **Definition 1**.: **(domain)** Let \(\mathcal{X}\) and \(\mathcal{Y}\) be the input space and the output space of a dataset \(\mathcal{D}=\{(X_{i},Y_{i})\big{|}P_{\mathcal{X},\mathcal{Y}}(X_{i},Y_{i})\}_{ i=1}^{K}\), where \(K\) is the size of the dataset, \(X_{i}\in\mathcal{X}\) and \(Y_{i}\in\mathcal{Y}\) are the \(i\)-th input and label samples, respectively. When \(X_{i}\) and \(Y_{i}\) are seen as realizations of their respective random variables \(\mathsf{X}\) and \(\mathsf{Y}\), it is possible to define a domain as their joint distribution \(P_{\mathsf{X},\mathsf{Y}}(X,Y)\). Moreover, \(P_{\mathsf{X}}(X)\) and \(P_{\mathsf{Y}}(Y)\) refer to the marginal distribution of \(\mathsf{X}\) and \(\mathsf{Y}\), respectively. Throughout the paper, we will drop distribution arguments to lighten the notation. Machine learning algorithms use one or multiple datasets, and as such make use of one or multiple data domains according to Definition 1. Indeed, training and evaluating ML techniques require _at least_ two domains: * a source (e.g., training) domain \(P_{\mathsf{X},\mathsf{Y}}^{s}\) encoding both the source input marginal \(P_{\mathsf{X}}^{s}\) and the source label marginal \(P_{\mathsf{Y}}^{s}\); * a target (e.g., test) domain \(P_{\mathsf{X},\mathsf{Y}}^{t}\) encoding both the target input marginal \(P_{\mathsf{X}}^{t}\) and the target label marginal \(P_{\mathsf{Y}}^{t}\). Generalization is an active ML research area where the ultimate goal is to learn models that perform well on unseen data domains. This tutorial focuses on out-of-distribution generalization or domain generalization (DG). In the next section, we define the DG problem and explain the difference between this subfield and other generalization problems in ML. 
Fig. 1: Scope of this work. **Definition 2**.: **(Domain Generalization)** The traditional setting of DG consists of \(M\)_distinct_ source datasets, i.e., \(\mathcal{D}_{\text{train}}=\{\mathcal{D}^{s}\}_{s=1}^{M}\) with \(\mathcal{D}^{s}=\{(X_{i}^{s},Y_{i}^{s},d_{i}^{s})\,\big{|}\,P_{\mathsf{X}^{s},\mathsf{Y}^{s}}(X_{i}^{s},Y_{i}^{s})\}_{i}\). Here, the \(i\)th data-target sample pair \((X_{i}^{s},Y_{i}^{s})\) is sampled from the domain \(P_{\mathsf{X},\mathsf{Y}}^{s}\) pertaining to the dataset \(\mathcal{D}^{s}\), i.e., \((X_{i}^{s},Y_{i}^{s})\sim P_{\mathsf{X},\mathsf{Y}}^{s}\), and \(d_{i}^{s}\) is a label that is used to distinguish the key characteristics of the domain to which the data-target samples belong, e.g., radar, mmWave transmission, etc. DG also considers unseen target (i.e., test) datasets \(\mathcal{D}_{\text{test}}=\{\mathcal{D}^{t}\}_{t=1}^{T}\) which are different from the source datasets (i.e., \(\mathcal{D}^{t}=\{(X^{t}_{j},Y^{t}_{j})\,\big{|}\,P_{\mathsf{X}^{t},\mathsf{Y}^{t}}(X^{t}_{j},Y^{t}_{j})\}_{j}\neq\mathcal{D}^{s},\ \forall s\) for \(1\leq s\leq M\)). The goal of DG is to train on the source domains a model \(f\) that generalizes to the target domains _without any access to the target data during training_. The generalization is often measured via a loss function \(\mathcal{L}(\cdot,\cdot)\) on the test domains, i.e., \(\mathbb{E}_{(X^{t}_{j},Y^{t}_{j})\in\mathcal{D}^{t}}\big{[}\mathcal{L}\big{(}f(X^{t}_{j}),Y^{t}_{j}\big{)}\big{]}\). Different variations of the vanilla DG described above have been studied in the literature: * **Single-source DG** assumes that the training data is _homogeneous_ and belongs to a single domain (i.e., \(M=1\)); * **Homogeneous DG** requires the source and target domains to share the same label space, i.e., \(\mathcal{Y}^{s}=\mathcal{Y}^{t}\); * **Heterogeneous DG** assumes different label spaces for the source and target domains, i.e., \(\mathcal{Y}^{s}\neq\mathcal{Y}^{t}\); * **Compound DG**: The vanilla DG setting assumes that source domain labels \(d^{s}\) are known prior to learning. In contrast, compound DG does not require domain annotations and assumes that the source data is _heterogeneous_ and consists of mixed domains. In other words, the training data is not divided into distinct domains before learning. Thus, in addition to generalizing to new unseen domains, compound DG methods need to infer/learn domain information from mixed heterogeneous datasets. For this reason, compound DG is more challenging than vanilla DG. Fig. 2 illustrates the difference between the vanilla and compound DG settings for estimating a wireless communication channel. There, we consider the wireless multi-path (MP) channel model \(\mathbf{H}^{\text{MP}}(L,\,f)\) parametrized by the number of paths \(L\) and the frequency \(f\). In vanilla DG, channel samples within the source dataset belong to _known_ domains, i.e., the number of paths \(L\) is known for each sample. This is illustrated in Fig. 2(a) by clustering channel samples into three a priori known domains pertaining to three MP channel models associated with \(L=1,5,15\). In compound DG, however, the domains of the channel samples are _not known_, as highlighted in Fig. 2(b). Hence, the source dataset can be perceived as an unlabeled dataset where the domain knowledge of samples is unknown. In summary, prior knowledge of "which samples belong to which domain" is the main difference between vanilla DG and compound DG. 
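To make the evaluation protocol of Definition 2 concrete, the following sketch (an illustrative outline; `train_model` and `evaluate` are hypothetical placeholders for any learner and metric) trains on the pooled source domains and scores the model on a held-out domain it never saw, in a leave-one-domain-out fashion.

```python
# Leave-one-domain-out evaluation of a DG learner; illustrative sketch only.
import numpy as np

def leave_one_domain_out(domains, train_model, evaluate):
    """domains: dict mapping a domain label d -> (X, y) arrays."""
    scores = {}
    for target in domains:
        # pool every source domain except the held-out target domain
        X_src = np.concatenate([X for d, (X, y) in domains.items() if d != target])
        y_src = np.concatenate([y for d, (X, y) in domains.items() if d != target])
        model = train_model(X_src, y_src)          # no access to target data
        X_tgt, y_tgt = domains[target]
        scores[target] = evaluate(model, X_tgt, y_tgt)
    return scores
```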
Before delving into the details of DG algorithms, we now present the possible distribution shifts in the source/target domains and the related DG research fields. ### _Different Types of Domain Shifts_ Given a joint distribution \(P_{\mathsf{X},\mathsf{Y}}\) associated with either a source or target domain, it is always possible to factorize it in two different forms using the Bayes rule: \[P_{\mathsf{X},\mathsf{Y}}=P_{\mathsf{Y}|\mathsf{X}}\,P_{\mathsf{X}}=P_{\mathsf{X}|\mathsf{Y}}\,P_{\mathsf{Y}}\,. \tag{1}\] For this reason, we distinguish four types of distribution shifts between the source and target domains. Fig. 3 depicts the factor graphs associated with each of the following distribution shifts: * a distribution shift between the source and target input distributions, i.e., \(P^{s}_{\mathsf{X}}\neq P^{t}_{\mathsf{X}}\), as shown in Fig. 3(a). This shift is commonly called _covariate shift_ [15] and is the most studied type of distribution shift in the literature. * a distribution shift between the source and target concept distributions, i.e., \(P^{s}_{\mathsf{Y}|\mathsf{X}}\neq P^{t}_{\mathsf{Y}|\mathsf{X}}\), as shown in Fig. 3(b). The concept shift is usually not examined in DG classification tasks because most of the prior work assumes that data samples have different labels in different domains. * a distribution shift between the source and target label distributions, i.e., \(P^{s}_{\mathsf{Y}}\neq P^{t}_{\mathsf{Y}}\), as illustrated in Fig. 3(c). This is called _label shift_ and is common in ML datasets, e.g., class imbalance in classification tasks. * a distribution shift between the source and target conditional distributions, i.e., \(P^{s}_{\mathsf{X}|\mathsf{Y}}\neq P^{t}_{\mathsf{X}|\mathsf{Y}}\), as depicted in Fig. 3(d). This shift is often considered unchanged (i.e., \(P^{s}_{\mathsf{X}|\mathsf{Y}}=P^{t}_{\mathsf{X}|\mathsf{Y}}\)) to ensure that the label random variable \(\mathsf{Y}\) causes the input random variable \(\mathsf{X}\) in the same way between the source and target domain. Note that each type of distribution shift is often studied independently, and the existing algorithms for DG assume that the other shifts are not present [16]. It is worth mentioning that most proposed algorithms in the literature focus on the covariate shift only and are specialized in classification tasks. DG is closely related to other generalization concepts such as: multi-task learning [17], transfer learning [5], zero-shot learning [18], domain adaptation [6], and test-time training [19]. Figure 4 illustrates the taxonomy of these generalization concepts and we will subsequently elaborate further on their differences. 
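As a toy, wireless-flavoured illustration of covariate shift (an assumption made here purely for illustration, not an example taken from the paper), the snippet below draws source channel gains from a Rayleigh distribution and target gains from a Rician one, while the labelling rule \(P_{\mathsf{Y}|\mathsf{X}}\) (an outage indicator on \(|h|^{2}\)) is kept identical in both domains.

```python
# Toy covariate shift: P_X changes (Rayleigh vs. Rician), P(Y|X) does not.
import numpy as np

rng = np.random.default_rng(0)
n, thresh, los = 10_000, 0.5, 2.0

# source domain: Rayleigh fading (no line-of-sight component), unit average power
h_src = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
# target domain: Rician fading with a LoS component, also normalized to unit power
h_tgt = (los + rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 + los**2)

# identical concept P(Y|X): outage whenever the channel power drops below thresh
y_src = (np.abs(h_src)**2 < thresh).astype(int)
y_tgt = (np.abs(h_tgt)**2 < thresh).astype(int)
print(y_src.mean(), y_tgt.mean())   # empirical outage rates differ because P_X shifted
```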
### _Related Research Fields_ When the source and target domains are assumed to be the same (i.e., \(P^{s}_{\mathsf{X},\mathsf{Y}}=P^{t}_{\mathsf{X},\mathsf{Y}}\)), the learned model is not exposed to any domain shift. This assumption is pervasive in wireless communication ML applications where training and test datasets are usually generated using the _same_ system model and/or assumed to originate from the same propagation environments2. However, in practice, this assumption is often violated and the test domains are usually different from the training domain(s). Supervised and multi-task learning are two common learning techniques where no domain shift occurs. Footnote 2: By propagation environments, we refer to all the different parameters that impact the signal propagation conditions like path loss, coherence time, blockages, etc. **Supervised learning** learns a mapping between inputs and outputs assuming that training and test samples are independent and identically distributed (i.i.d). Supervised learning considers a _single_ domain (\(M=1\)). Because supervised learning does not handle domain shifts, the source and target samples are i.i.d. drawn from the same joint distribution, i.e., \(P^{s}_{\mathsf{X},\mathsf{Y}}=P^{t}_{\mathsf{X},\mathsf{Y}}\). This is different from the DG setting where the i.i.d assumption is violated since the source and target samples are drawn from different distributions. **Multi-task learning** trains a single model to simultaneously perform multiple tasks, i.e., \(M>1\). For the rest of the paper, a task refers to a type of problem to be solved such as classification or regression. Different tasks result in different but related domains or datasets which enable learning shared representations between tasks. Note that each task is characterized by its own joint distribution \(P^{s}_{\mathsf{X},\mathsf{Y}}\), which generally differs from one task to another. 
**Test-time training** [19] trains the model jointly on the main task and on a self-supervised auxiliary task. The self-supervised task will be used at test time to create labels for the unlabeled test samples. The standard version of test-time training requires a very limited amount (e.g., a mini-batch) of data to finetune the model based on the auxiliary task. This is where test-time training differs from DG due to the use of test data for updating the model parameters. **Continual/lifelong learning** learns a model on multiple domains or tasks sequentially without forgetting the knowledge previously learned. Continual learning assumes that the model does not have access to data from previous tasks and updates the parameters using labeled data from new tasks or domains. This is different from DG where the objective is to generalize to new domains without accessing target data or finetuning the model on the target domain. In the next sections, we put forward the current state-of-the-art methodologies for handling DG. We also specify which methods cope with distribution shifts beyond the covariate shift. Fig. 5 presents the organization of the covered DG methodologies across the next three sections. ## III DG Methods: Data Manipulation In order to generalize to unseen scenarios, this category of methods manipulates the DNN input data. Two types of manipulations are possible, either in the raw input space or in the latent input space: \(i)\) data augmentation by adding random noise or transformations to the input data, and \(ii)\) data generation which generates new training samples using generative models. The main objective of these methods is to increase the quantity and improve the diversity of the training dataset for better generalization capabilities without requiring manual labeling of datasets. 
A data manipulation operation is represented by an arbitrary function \(\mathcal{M}(\cdot)\) which transforms the input data \(X\) to the manipulated data \(X^{\prime}=\mathcal{M}(X)\). Given a DNN that is represented as an input-output function \(g(\cdot)\), the learning objective of data manipulation for DG can be expressed as follows: \[\min_{g}\underbrace{\mathbb{E}_{\mathcal{X},\mathsf{Y}}\big{[}\mathcal{L}(g(X),Y)\big{]}}_{\text{task loss}}+\underbrace{\mathbb{E}_{\mathcal{X}^{\prime}, \mathsf{Y}}\big{[}\mathcal{L}(g(X^{\prime}),Y)\big{]}}_{\text{data manipulation loss}}, \tag{2}\] where \(\mathcal{L}(\cdot,\cdot)\) is the DNN cost function. It is worth noting that most data manipulation techniques proposed in the literature are geared towards computer vision applications where all datasets consist of images. In this section, we describe these methods within the context of vision applications and point out their potential use for wireless applications. ### _Data Generation_ Generating new data samples using generative models is a popular technique to augment existing datasets so as to cover richer training scenarios, thereby enhancing the generalization capability of a DNN. The data manipulation function \(\mathcal{M}(\cdot)\) in (2) can be represented by deep generative models such as variational auto-encoder (VAE) [21] and generative adversarial network (GAN) [22]. Various distribution distance metrics can be employed to generate high-quality samples including: * _domain discrepancy measures_ such as the maximum mean discrepancy (MMD) [23] to minimize the distribution divergence between real and generated data samples. * _the Wasserstein distance_ between the prior distribution of the DNN input and a latent target distribution as carried out in Wasserstein auto-encoder (WAE) [24]. This metric is a regularization that encourages the encoded training distribution of a WAE to match the data prior and hence preserves the semantic and domain transfer capabilities. * _semantic consistency loss functions_ that maximize the difference between the source and the newly generated distributions, thereby creating new domains that augment the existing source domains [25]. It is also possible to generate new domains instead of new data samples using adversarial training [26] where one or multiple generative models are trained to progressively generate unseen domains by learning relevant cross-domain invariant representations. Such an alternative involves an entire generative model pipeline composed of multiple DNNs trained in cascade or in parallel, and therefore has a significant computational cost. As one example for channel estimation problems, one can start by generating line-of-sight datasets and then progressively increase the rank of the estimated MIMO channel to multi-path models up to full-rank channels such as rich-scattering MIMO channels. Furthermore, the data manipulation function \(\mathcal{M}(\cdot)\) can also be defined without training generative models. In particular, it is possible to generate new data samples by linearly interpolating any two training samples and their associated labels as done in the low-complexity Mixup method [27]. More recently, many techniques have built upon Mixup for DG to \(i)\) generate new data samples by interpolating either in the raw data space [28, 29, 30], or \(ii)\) to build robust models with better generalisation capabilities by interpolating in the feature space [31, 32, 33]. Fig. 5: Taxonomy of domain generalization methods. 
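As a concrete sketch of the Mixup-style interpolation referred to above (a minimal NumPy version with one mixing weight per sample, which is one common variant; for DG the two samples in each mixed pair are typically drawn from different source domains), consider:

```python
# Minimal Mixup-style data generation sketch; illustrative only.
import numpy as np

def mixup(X, Y, alpha=0.2, rng=np.random.default_rng()):
    """X: (N, d) inputs, Y: (N, c) one-hot labels -> mixed (X', Y')."""
    lam = rng.beta(alpha, alpha, size=(len(X), 1))   # per-sample mixing weights
    perm = rng.permutation(len(X))                   # random pairing of samples
    X_mix = lam * X + (1 - lam) * X[perm]
    Y_mix = lam * Y + (1 - lam) * Y[perm]
    return X_mix, Y_mix
```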
### _Data Augmentation_ DNNs are heavily reliant on large datasets to enhance the generalization by avoiding overfitting [34]. Data augmentation methods provide a cheap way to augment training datasets. They artificially inflate the dataset size by transforming existing data samples while preserving labels. Data augmentation includes geometric and color transformations for visual tasks, random erasing and/or permutation, adversarial training, and neural style transfer. Every data augmentation operation can be considered as a data manipulation function \(\mathcal{M}(\cdot)\) in (2). Here, we classify the data augmentation methods for DG into two categories: * _domain randomization_: this family of methods creates a variety of datasets stemming from data generation processes (e.g., simulated environments) with randomized properties and trains a model that generalizes well across all of them. * _adversarial data augmentation_: this family of methods guides the augmentation by enhancing the diversity of the dataset while ensuring their reliability for better generalization capabilities. #### Iii-B1 **Domain randomization** The reality gap between the data domains resulting from simulations and real-world data collections often leads to failure due to distribution shifts. This gap is triggered by an inconsistency between the physical parameters of simulations (e.g., channel distribution, noise level) and, more fatally, the incorrect physical modeling (e.g., physical considerations of wireless communication [35, 36]). To perceive how DNNs should be trained and evaluated under data distribution shifts for communication applications, Fig. 6 depicts the training and evaluation pipeline where datasets are generated through communication systems models. There, it is seen that source (i.e., training) and target (i.e., test) domains, \(\mathcal{D}^{\text{train}}\) and \(\mathcal{D}^{\text{test}}\), are obtained according to the training and test scenarios, \(\mathcal{S}^{\text{train}}\) and \(\mathcal{S}^{\text{test}}\). The latter are determined by defining a set of communication scenarios by varying one or multiple communication parameters of interest. The choice of these parameters dictates the data domains and hence provides a way to control and then analyze the impact of distribution shifts on the performance of DNNs. For instance, research efforts to design broadband ML-aided decoding algorithms should vary the signal frequency and assess the generalization capability of DNNs when trained on carriers in the sub-6 GHz band then evaluated on a different communication band. Domain randomization generates new data samples stemming from simulated dynamics of complex environments. For computer vision applications, the function \(\mathcal{M}(\cdot)\) in (2) encloses different manual transformations such as altering object properties (e.g., shape, location, texture), scene editing (e.g., illumination, camera view), or random noise injection [37]. For real-valued data input vectors, augmentation involves scaling, pattern switching, and random perturbation [38]. These augmentation methods are particularly interesting for wireless communication applications because they handle general signal transmission scenarios that are tolerant to variations in the path-loss coefficient, synchronization delays, signal-to-noise ratio, etc. Fig. 6: Summary of the training and evaluation pipeline of machine learning models under data distribution shifts for communication applications. 
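A minimal sketch of domain randomization for wireless data is given below; the tapped-delay-line generator, parameter ranges, and function names are illustrative assumptions rather than the setup of any cited work. Each call produces one randomized "domain" of noisy/clean frequency-response pairs, and a learner trained across many such domains is exposed to a spread of simulated conditions.

```python
# Domain randomization over simple synthetic channel data; illustrative sketch only.
import numpy as np

def random_domain(n_samples, n_sub=64, rng=np.random.default_rng()):
    n_paths = rng.integers(1, 16)                    # randomized physical parameter
    snr_db = rng.uniform(0, 30)                      # randomized noise level
    delays = rng.integers(0, n_sub // 4, size=n_paths)
    gains = (rng.normal(size=(n_samples, n_paths)) +
             1j * rng.normal(size=(n_samples, n_paths))) / np.sqrt(2 * n_paths)
    # frequency response over n_sub subcarriers plus additive noise
    k = np.arange(n_sub)
    H = gains @ np.exp(-2j * np.pi * np.outer(delays, k) / n_sub)
    noise = 10 ** (-snr_db / 20) * (rng.normal(size=H.shape) +
                                    1j * rng.normal(size=H.shape)) / np.sqrt(2)
    return H + noise, H                              # noisy input, clean target

domains = [random_domain(1000) for _ in range(5)]    # five randomized source domains
```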
#### Iii-C2 **Adversarial data augmentation** The fact that most domain randomization described in Section III-B1 is performed randomly indicates that there exist potential improvements to remove ineffective randomization that does not help with DNNs' generalization. This optimization is performed by adversarial data augmentation. Toward this goal, research efforts have been dedicated to designing better strategies for non-random data augmentation. By modeling the dependence between the data sample \(X\), its label \(Y\), and the domain label \(d\) (cf. Definition II-A), it has been shown that the input data can be perturbed along the direction of greatest domain change (i.e., domain gradient) while changing the class label as little as possible [39]. Another line of work devised an adaptive data augmentation procedure where adversarially perturbed samples in the feature space are iteratively added to the training dataset [40]. It is also possible to train a dedicated transformation network for data augmentation by \(i)\) maximizing the domain classification loss on the transformed data samples to tolerate domain generation differences, and \(ii)\) minimizing the label classification loss to ensure that the learned augmentation does not affect the DNN performance [41]. While adversarial data augmentation can provide richer datasets and fill in data gaps against some adversarial examples, this comes at the cost of a more complex training procedure which is known to be less stable and computationally extensive. When it comes to wireless communications applications, physics-based models are available to guide data augmentations that are consistent with the law of physics, beyond purely random strategies. For example, the study of the achievable rate of reconfigurable intelligent surface (RIS)-aided communication systems do exhibit the same performance regardless of the carrier frequency due to the scaling invariance property of Maxwell's equations when no source is present (i.e., passive RISs) [42]. Another interesting implication stemming from the symmetry of Maxwell's equations is the frequency independence property of certain wideband antennas that display very similar radiation pattern, gain and impedance above a certain threshold frequency [43]. This suggests that the generation of wireless datasets for far-field communication can be made independent of the carrier frequency for specific types of antennas. From this perspective, data augmentation methods that are aware of the physics of wave propagation do not blindly generate source and target domains for different carrier frequencies. They should instead collapse the data augmentation process to scenarios that do enjoy the scaling invariance property. As a result, not only do data augmentation techniques become efficient but also physically consistent with the electromagnetic properties of RISs. ## IV DG Methods: Representation Learning Generalizing to unseen scenarios is not solely dependent on the DNN prediction approximation function \(g(\cdot)\) given in (2). It also depends on the data representations (i.e., features) learned by the DNN [44]. To better isolate these two distinct tasks, one can view the overall DNN approximation function, \(g(h(\cdot))\), as a composition of a prediction/classification function \(g(\cdot)\) and a representation learning function \(h(\cdot)\). Fig. 7(b) depicts this decomposition, \(h(X)\), as the output of the representation learning step. 
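To make the decomposition \(g(h(\cdot))\) explicit in code, a minimal PyTorch sketch (layer sizes are illustrative assumptions) keeps the representation function and the task head as separate modules, so that DG regularizers such as those discussed below can be applied to the features \(h(X)\) alone.

```python
# Featurizer/head decomposition g(h(.)); illustrative sketch only.
import torch
import torch.nn as nn

d_in, d_feat, n_classes = 128, 64, 4                     # illustrative sizes
h = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU())    # representation function h(.)
g = nn.Linear(d_feat, n_classes)                         # task function g(.)

X = torch.randn(32, d_in)
features = h(X)          # candidates for alignment / disentanglement regularizers
logits = g(features)     # fed to the task loss
```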
In theory, this representation in the feature space comprises two separate representations. The first one, denoted by \(h_{\text{inv}}(X)\), is a domain-invariant representation that is shared across domains (a.k.a. cross-domain representation) and is key to enabling generalization over multiple domains. The second representation, \(h_{\text{spe}}(X)\), however, is domain-specific and represents the variation pertaining to a specific domain. In practice, these two representations can either be non-separable or separable. For instance, several earlier research works [45, 46, 47] have shown that in the Fourier spectrum of signals, the phase component predominantly carries high-level semantics whereas the amplitude component mainly contains low-level statistics. Hence, Fourier phase features represent domain-invariant features that cannot be easily affected by domain shifts when used for DG [48]. From a mathematical point of view, the optimization problem of representation learning can be written as follows: \[\min_{g,\,h}\ \ \underbrace{\mathbb{E}_{\mathsf{X},\mathsf{Y}}\big{[}\mathcal{L}(g(h(X)),Y)\big{]}}_{\text{task loss}}+\lambda\ \underbrace{r(X)}_{\text{regularization loss}}, \tag{3}\] where \(r(X)\) is a regularization function and \(\lambda\) is the associated regularization parameter. Depending on the type of the regularization function \(r(X)\) or the representation learning function \(h(\cdot)\), it is possible to categorize representation learning for DG into two categories: * _domain-invariant representation learning_: the goal of this family of methods is to learn features that are invariant across different domains. These features are transferable from one domain to another, hence their importance for domain generalization. * _feature disentanglement_: these methods decompose a feature representation into one or multiple sub-features, each of which is either domain-specific or domain-invariant. Fig. 7: Illustration of ML-aided classification/prediction (a) without an explicit representation learning step (a.k.a. end-to-end learning), and (b) with a representation learning step. ### _Domain-Invariant Representation Learning_ #### Iv-A1 **Kernel-based methods** Learning representations using kernel methods (e.g., support vector machines [49], kernel component analysis [50]) is a classical problem in the ML literature. In such a setting, the representation learning function \(h(\cdot)\) in (3) maps the data samples to the feature space using kernel functions (e.g., radial basis function (RBF), Gaussian, and Laplacian kernels). For domain generalization, several methods were devised to learn domain-invariant kernels to determine \(h(\cdot)\) from the training dataset. Specifically, a positive semi-definite kernel learning approach for DG was proposed in [51] by considering the conventional supervised learning problem where the original feature space is augmented to include the marginal distribution that generates the features. It is also possible to learn kernel functions by minimizing the distribution discrepancy between all the data samples in the feature space. This method is known as domain-invariant component analysis (DICA) [52] and is one of the classical kernel methods for DG. For classification tasks, in the presence of covariate shift only, a randomized kernel algorithm was devised in [53] to extract features that minimize the difference between the marginal distributions across domains. 
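Several of the discrepancy-minimizing methods mentioned here and in the alignment techniques below rely on the maximum mean discrepancy. The following minimal NumPy sketch (biased estimator, RBF kernel with an illustrative bandwidth) computes it between two sets of domain features; small values indicate well-aligned representations.

```python
# Biased RBF-kernel MMD^2 between two feature sets; illustrative sketch only.
import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    """Xs: (n, d) source features, Xt: (m, d) target features -> scalar MMD^2."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()
```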
Multi-domain discriminant analysis (MDA) and scatter component analysis (SCA) approaches were proposed in [54, 55] to learn a domain-invariant feature transformation in presence of both covariate and conditional shifts across domains. This is done by jointly minimizing the divergence among domains within each class and maximizing the separability among classes. #### Iii-A2 **Domain adversarial learning** Since the presence of spurious features in the data decreases the robustness of DNNs, adversarial learning is a widely used technique to learn invariant features by training generative adversarial networks (GANs). Specifically, the discriminator is trained to distinguish the domains while the generator is trained to fool the discriminator so as to learn domain invariant feature representations for DG [56]. Another line of work in [57] generated a continuous sequence of intermediate domains flowing from one domain to another to gradually reduce the domain discrepancy, and hence improve the DNN generalization ability on unseen target domains. Learning class-wise adversarial networks for DG was also proposed in [58] based on conditional invariant adversarial training when both covariate and conditional shifts coexist. #### Iii-A3 Explicit feature alignment This family of methods learns domain-invariant representations by aligning the features across source domains using one of the following two mechanisms: * explicit feature distribution alignment through distance minimization or moment matching. * feature normalization addressing data variations to avoid learning nonessential domain-specific features. Feature distribution alignment methods were devised to impose a variety of distribution distances such as the maximum mean discrepancy (MMD) on latent feature distributions [56, 59], and the label similarities for samples of the same classes from different domains using the Wasserstein distance [60]. Moment matching for multi-source domain adaptation (M3SDA) was also introduced in [61] to transfer learned features from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions. Feature normalization methods, however, focus on increasing the discrimination capability of DNNs. They do so by normalizing the features to eliminate domain-specific variation while keeping domain-invariant features to enhance generalization. In particular, instance normalization (IN) [62] and batch instance normalization (BIN) [63] have been proposed to enhance the generalization capabilities of convolutional neural networks (CNNs). Instance normalization has been applied in [64] for DG where labels were missing in the training domains to acquire invariant and transferable features. It was also shown that adaptively learning the normalization technique can improve DG without predefining the normalization technique in the DNN architecture a priori [65]. #### Iii-A4 **Invariant risk minimization** Another unique perspective on learning domain-invariant representations for DG is to constrain DNNs to have the same output across all domains. The motivation behind this constraint is that an optimal representation for prediction or classification is _the cause_ of the DNN output label. This causal relationship from the representation (i.e., the cause) to the label (i.e., the effect) should not be affected by other factors including the domain input. 
Therefore, the optimal representation is domain invariant and can be learned using invariant risk minimization (IRM) [66]. Given \(K\) different domains, the IRM problem can be formulated as follows: \[\min_{h\in\mathcal{H}} \ \sum_{k=1}^{K}\ \mathbb{E}_{\mathbf{\chi}_{k},\mathbf{Y}_{k}} \big{[}\mathcal{L}(g(h(X_{k})),Y_{k})\big{]} \tag{4a}\] \[\text{subject to}\ g\in\bigcap_{k=1}^{K}\ \operatorname*{arg\,min}_{g^{\prime}\in\,\mathcal{G}}\ \mathbb{E}_{\mathbf{\chi}_{k},\mathbf{Y}_{k}}\big{[}\mathcal{L}(g^{\prime}(h( X_{k})),Y_{k})\big{]}, \tag{4b}\] where \(\mathcal{H}\) and \(\mathcal{G}\) are the learnable function classes for representation and task functions, \(h(\cdot)\) and \(g(\cdot)\), respectively. The optimization in (4) finds the optimal representation function \(h(\cdot)\) that minimizes the sum of all the task losses in (4a) given in (3). This minimization is carried out under the constraint in (4b) which ensures that all domains share the same optimal representation function \(h(\cdot)\). The idea behind the IRM formulation has drawn significant attention. Specifically, the IRM optimization was extended to text classification [67], reinforcement learning [68], self-supervised settings [69], and to the case of extrapolated task losses among source domains [70]. Moreover, it was shown in [71] that constraining the invariance to the task function \(g(\cdot)\) only -- as done in (4) -- is not enough to guarantee the causal relationship from the representation to the label. A new regularization has thus been proposed to ensure that the representation function \(h(\cdot)\) cannot capture fully invariant features that break down the assumed causality as required by the IRM formulation. ### _Feature Disentanglement_ Unlike domain-invariant representation learning, disentangled representation learning relies on DNNs to learn a function that maps a data sample to a feature vector, which factorizes into distinct feature sets as depicted in Fig. 8. There, it is seen that the entire feature space can be decomposed into a set of feature subspaces. Each feature set is a representation pertaining to a specific feature subspace only. When the feature representation is decomposable into multiple non-overlapping feature subsets, the feature representation is said to be "disentangled". The importance of disentanglement-based representation learning for DG stems from the fact that features can be explicitly decomposed into domain-invariant and domain-specific features. As a result, the representation function \(h(\cdot)\) defined in (3) can be decomposed into two distinct representation functions: \(h_{\text{inv}}(\cdot)\) for domain-invariant representation and \(h_{\text{spe}}(\cdot)\) for domain-specific representation. The disentanglement-based optimization can be formulated as follows: \[\begin{split}\min_{h_{\text{spe}},\,h_{\text{inv}},\,g}& \underbrace{\mathbb{E}_{\text{X,Y}}\big{[}\mathcal{L}(g(h_{\text{inv}}(X)),Y) \big{]}}_{\text{task loss}}+\lambda\underbrace{r(X)}_{\text{regularization loss}}\\ &+\mu\underbrace{\mathbb{E}_{\text{X}}\big{[}\mathcal{L}(h_{ \text{inv}}(X),h_{\text{spe}}(X),X)\big{]}}_{\text{reconstruction loss}},\end{split} \tag{5}\] where \(\lambda\) and \(\mu\) are regularization parameters. In (3), the regularization loss encourages the separation between domain-invariant and domain-specific features, while the reconstruction loss ensures that such separation does not lead to significant information loss. 
In other words, regularization and reconstruction losses are competing penalties that add up to the task loss, and it is the task of the ML designer to find the suitable trade-off that enhances the generalization of DNNs. #### Iv-B1 **Multi-component analysis** Multi-component methods dedicate different sets of parameters to learn domain-invariant and domain-specific features. The method "UndoBias" proposed in [72] learns dedicated SVM models. It represents the dedicated SVM parameters, \(\mathbf{w}_{k}\), pertaining to the \(k\)th domain as a perturbation of the domain-invariant parameters \(\mathbf{w}\) with the domain-specific parameters \(\mathbf{\Delta}\mathbf{w}_{k}\), i.e., \(\mathbf{w}_{k}=\mathbf{w}+\Delta\mathbf{w}_{k}\). This method has been extended for multi-view vision tasks by introducing a regularization to minimize the mismatch between any two view representations [73] for better generalization. Neural networks have also been used to capture disentangled representations by learning domain-specific networks for each domain and one domain-invariant network for all domains [74]. Another line of work considered manually comparing specific areas of DNN's attention heatmaps from different domains which proved beneficial to learning disentangled representations and ensuring a more robust generalization [75]. #### Iv-B2 **Generative modeling** Generating data samples whose feature representations are disentangled requires adapting the data generative process of generative models to new constraints. The latter can be incorporated in the loss functions of GANs to encourage feature disentanglement by separating the domain-specific and domain-invariant features [76]. An autoencoder-based variational approach was devised to disentangle the features by learning three independent latent subspaces, one for the domain, one for the class, and one for any residual variations [77]. To generate domains that are different from the source domain, the discrepancy between augmented and sources domains was maximized for out-of-domain augmentation using meta-learning under a semantic consistency constraint [78]. For classification tasks, diversifying the inter-class variation by modeling potential seen or unseen variations across classes was formulated as a disentanglement-constrained optimization problem [79]. This was made possible by minimizing the discrepancy of the inter-class variation where both intra- and inter-domain variations are regarded as constraints. ## V DG Methods: Learning Paradigms ### _Meta-learning_ Meta-learning [80], also known as learning-to-learn is a research area that has attracted much interest in recent years. The main goal of meta-learning is to learn a general model using samples from multiple tasks to quickly adapt to new unseen tasks. The learned meta-model encompasses the general knowledge from all the different training tasks which makes it a better model initialization to adapt for new tasks [81]. Traditional supervised learning (SL) methods learn a model \(f_{\theta}\) that maps inputs to outputs. 
The model's parameters \(\theta\) are learned by minimizing a loss function given a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{m}\) as follows:
\[\theta^{*}_{\text{SL}}=\operatorname*{arg\,min}_{\theta}\mathcal{L}(\mathcal{D},\theta).\]
At each iteration, the parameters are updated based on a specific optimization procedure \(g_{\omega}\), where \(\omega\) denotes all the pre-defined assumptions about the learning algorithm such as the function class of \(f\) (e.g., DNN), the model initialization, the choice of the optimizer, etc. In the literature, \(\omega\) is also called _pre-defined meta-knowledge_ [82]. It is straightforward to observe that the model's performance depends drastically on \(\omega\). In addition, it is common to split the dataset \(\mathcal{D}\) into training and testing sets. The model is first learned using the training samples, and the generalization of the model is subsequently evaluated on the test set with unseen samples and known outputs. Consequently, the learned parameters \(\theta^{*}_{\text{SL}}\) are specific to the dataset \(\mathcal{D}\) and are not guaranteed to generalize to samples different from the ones in \(\mathcal{D}\).

Fig. 8: Illustration of how a trained neural network transforms a data sample into a disentangled representation vector that factorizes into \(N\) small feature vectors.

Different from the supervised learning setting, meta-learning aims to learn a meta-knowledge \(\omega\) over a distribution of tasks \(p(\mathcal{T})\). A task \(i\) can be defined by a loss function and a dataset (i.e., \(\mathcal{T}_{i}=\{\mathcal{L}_{i},\mathcal{D}_{i}\}\)). Learning the meta-knowledge from multiple tasks enables the quick learning of new tasks from \(p(\mathcal{T})\). In meta-learning, different choices of the meta-knowledge \(\omega\) are proposed such as the parameter initialization, optimizer, hyperparameters, task-loss functions, etc. We refer the interested reader to [82] for a detailed discussion about the different choices for \(\omega\). Meta-learning algorithms also involve two stages, namely meta-training followed by meta-testing. The objective of meta-training is to learn the "best" meta-knowledge \(\omega\) across multiple tasks. To do so, a set of training tasks \(\mathcal{T}_{\text{train}}\sim p(\mathcal{T})\) is used where each task \(i\) has training and validation datasets (i.e., \(\mathcal{D}_{i}=\{\mathcal{D}_{i}^{\text{train}},\mathcal{D}_{i}^{\text{val}}\}\)). The meta-training phase is commonly presented as a bi-level optimization problem [82] as follows:
\[\omega^{*}=\overbrace{\underset{\omega}{\arg\min}\sum_{i=1}^{|\mathcal{T}_{\text{train}}|}\mathcal{L}\left(\theta_{i}^{*}(\omega),\mathcal{D}_{i}^{\text{val}}\right)}^{\text{outer level}}, \tag{6}\]
\[\text{s.t.}\ \underbrace{\theta_{i}^{*}(\omega)=\underset{\theta}{\arg\min}\ \mathcal{L}_{i}\left(\mathcal{D}_{i}^{\text{train}},\theta,\omega\right)}_{\text{inner level}}. \tag{7}\]
The inner level consists in learning task-specific learners conditioned on the meta-knowledge \(\omega\). Note that the inner level only optimizes the task-specific parameters \(\theta\) using the task training datasets \(\mathcal{D}_{i}^{\text{train}}\) and does not change \(\omega\). The outer level, in contrast, learns the \(\omega\) that minimizes the aggregated losses from all the training tasks on their validation datasets. In the literature, it is common to divide meta-learning methods into three families: optimization-based, model-based, and metric-based.
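Before detailing these families, the toy sketch below shows how the inner level (7) and the outer level (6) interact when the meta-knowledge \(\omega\) is a shared initialization, in the spirit of first-order MAML; the linear regression model and the task sampler are assumptions made purely for illustration:

```python
import torch

# Meta-knowledge omega: a shared initialization for a toy linear model.
theta = torch.zeros(5, 1, requires_grad=True)
meta_opt = torch.optim.SGD([theta], lr=1e-2)
inner_lr = 0.1

def loss_fn(w, X, y):
    return ((X @ w - y) ** 2).mean()

def sample_task():
    # Hypothetical task/domain sampler: each task has its own ground truth.
    w_true = torch.randn(5, 1)
    X_tr, X_val = torch.randn(16, 5), torch.randn(16, 5)
    return (X_tr, X_tr @ w_true), (X_val, X_val @ w_true)

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):                                   # batch of training tasks
        (X_tr, y_tr), (X_val, y_val) = sample_task()
        # Inner level (7): adapt theta on the task's training split.
        g_inner = torch.autograd.grad(loss_fn(theta, X_tr, y_tr), theta)[0]
        theta_i = theta - inner_lr * g_inner             # theta_i*(omega)
        # Outer level (6): accumulate the validation loss of the adapted
        # parameters into theta.grad (first-order approximation).
        loss_fn(theta_i, X_val, y_val).backward()
    meta_opt.step()
```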
Optimization-based methods, promoted by the Model Agnostic Meta-Learning (MAML) algorithm [81] have been recently adopted for domain generalization. The general idea is to consider the different domains as different tasks. Hence, data from multiple source domains are divided into meta-training and meta-testing sets. By training with data from different domains, the meta-learner is exposed to domain shift and is required to learn a meta-knowledge that quickly adapts to domain shift in new unseen domains [83]. ### _Self-Supervised Learning_ Self-supervised learning (SSL) is a learning paradigm that generates labels from data and subsequently uses these labels as ground truth. SSL is useful in real-world applications where abundant unlabeled data is available, especially when the labeling process is cumbersome and expensive. Another motivation behind SSL is to learn rich and general representations, unlike supervised learning methods that learn biased representations via the supervision signal or the type of annotations [84]. In supervised learning, labels serve as the supervision signal to learn a specific task. However, in SSL, a model is learned using the data as a supervision signal. In other words, the labels in SSL are generated from the data itself. The SSL pipeline can be divided into two parts: * learn feature representations by solving a _pretext_ task. An example of a pretext task is to retain part of the input data to be predicted by a model that is trained on the other part of the data [85]. Another pretext task consists in learning the relationship between data instances (e.g., similarity) or reconstruct an input from its shuffled parts (also known as the jigsaw puzzle). Note that the labels (or supervision signal) for the pretext task is generated from the input data, thus no human intervention is needed; * solve a downstream task using the learned representations and a few annotated data. SSL is applied in DG to learn domain-invariant features that help in avoiding overfitting on domain-specific biases while aligning features from different source domains. As discussed in Section IV-A, these invariant features can be leveraged in unseen target domains to achieve better generalization [52]. In this context, contrastive learning is a well-known SSL method that aims to learn latent representations such that positive instances are close and negative samples are pushed away. Therefore, in the learned embedding space, the distance between similar instances is reduced while the distance between negative pairs is increased. For instance, the authors in [86] proposed two self-supervised contrastive losses to measure feature dissimilarities in the embedding space. For dissimilarities across domains, the authors used a Mix-up layer [29, 87] (i.e., a convex combination of samples' embeddings from different domains) to compute the interpolated feature representation across domains. Thus, the regularisation loss is defined as the distance between the individual representations and the interpolated one using the mix-up layer. One caveat of this method is that it assumes the label space does not change for all the domains. ### _Ensemble Learning_ Ensemble learning [88] is a famous technique in traditional and modern machine learning where multiple models are learned and combined for prediction/classification. The same idea was also exploited for DG. 
The most straightforward approach is to learn a model for each source domain and average the individual predictions to compute the final ensemble prediction [89, 90]. Instead of learning separate models for each source domain, it is common to design the ensemble as a shared feature extractor and different domain-specific heads [89]. Another line of work focuses on the weighting of the individual models' predictions. For instance, the domain-specific models can be weighted differently depending on the similarity of the target domain to the source domain. The authors in [91] proposed to learn a domain predictor that predicts the probability that a target sample belongs to a source domain. These probabilities can be used to fuse the models' predictions at test time. An alternative solution proposes to train domain-invariant classifiers for each source domain by learning domain-specific normalization [92, 93]. All the classifiers share the same parameters except the ones in the normalization layers. The objective of learning domain-specific normalization is to obtain domain-agnostic latent feature space that can be used to map samples from unknown domains to the source domains. This idea is related to the feature alignment methods reviewed in Section IV-A3. Alternatively, the stochastic weight averaging (SWA) method [94] aggregates weights at different training epochs to form an ensemble model instead of combining the predictions of multiple learners. Starting from a pre-trained model, SWA trains a single model using a cyclic learning rate schedule (or a constant high learning rate) and saves model snapshots corresponding to different local minima. Averaging these points leads to better solutions in flatter regions of the loss landscape. Intuitively, flatter minima are more robust than sharp minima to changes in the loss landscape between the training and testing datasets [94]. Consequently, this weight averaging idea was extended to the DG proving that flat minima lead to better generalization on unseen domains [95]. ### _Hypernetwork-Based Learning_ Hypernetwork-based learning [96] is an approach that learns a network (i.e. the hypernetwork) to generate weights for another network called the main network. The latter represents the usual model that maps raw data to their targets or labels. The goal of the hypernetwork is to generate a specific set of weights depending on inputs about the structure of the weights or tasks. Different from the usual supervised learning setting, only the hypernetwork's parameters are learned during training whilst keeping the main network's parameters unchanged. At inference, the main network is evaluated based on the weights generated by the hypernetwork. Recent work proposed hypernetwork-based algorithms for DG in natural language processing [97] and vision [98]. For vanilla DG, a straightforward application of hypernetworks is to train a hypernetwork on data samples from different source domains to produce the model's weights for each domain. On the other hand, for compound DG, the appropriate approach is to first learn a latent embedding space for the different domains, then the hypernetwork learns to map the latent features to a set of weights so as to compute model predictions. In the next sections, we will overview the different applications of the techniques detailed above to wireless communication problems. 
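Since hypernetworks recur in several of the applications reviewed next, the toy sketch below illustrates the basic mechanism; the context vector (e.g., an SNR value and a number of propagation paths) and all dimensions are illustrative assumptions, and only the hypernetwork's parameters would be trained:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    """Main layer y = W x + b whose weights are generated, at every forward
    pass, by a small hypernetwork from a context vector describing the domain."""
    def __init__(self, ctx_dim, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.hyper = nn.Sequential(nn.Linear(ctx_dim, 64), nn.ReLU(),
                                   nn.Linear(64, out_dim * in_dim + out_dim))

    def forward(self, x, ctx):
        p = self.hyper(ctx)                               # generated parameters
        W = p[: self.out_dim * self.in_dim].view(self.out_dim, self.in_dim)
        b = p[self.out_dim * self.in_dim:]
        return F.linear(x, W, b)

layer = HyperLinear(ctx_dim=2, in_dim=8, out_dim=4)
x = torch.randn(32, 8)                                    # batch of inputs
ctx = torch.tensor([10.0, 3.0])                           # e.g., [SNR in dB, no. of paths]
out = layer(x, ctx)                                       # main weights depend on ctx
```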
## VI Domain Generalization Applications in Wireless Communications When designing data-driven ML-based algorithms for solving wireless communication problems, it is crucial to ensure that the developed algorithms have guaranteed generalization capabilities. However, little effort has been devoted to investigating the DG issue despite the huge research effort in applying data-driven machine learning techniques to various wireless communication. The goal of this section is to overview the existing DG methodologies that were applied by the communication community, and summarize the learned lessons from their applications. ### _Channel Decoding_ Iterative turbo/LDPC decoders [99, 100] based on the belief-propagation (BP) framework [101] are recognized as state-of-the-art channel decoders because of their capacity approaching/achieving performance for relatively large block lengths. For this reason, they have been adopted in the 4G/5G communication standards. Many deep learning studies have shown that data-driven ML techniques can decrease the BP decoding complexity especially for short-to-moderate block lengths [102]. For short-block-length polar codes [103] (e.g., 16 bits), DNN-based decoders were shown to exhibit near-optimal performance using maximum a posteriori (MAP) decoding [104]. For larger block length codes (i.e., larger than 100 bits), the BP algorithm was unfolded into a DNN in which weights are assigned to each variable edge, thereby showing an improvement in comparison to the baseline BP method [105]. By varying the signal-to-noise ratio (SNR) values of the received signal, hypernetworks have been employed to generate the weight of a variable-node network in the Tanner graph [106]. Together, all the variable-node networks represent the graph neural network (GNN) on which message passing is performed. Meta-leaning algorithms have been explored in [107] as part of an end-to-end learning approach. There, meta-tasks were designed by varying the SNR to account for the task difficulty, under a convolutional encoder with a fixed coding rate of \(1/2\). Overall, the aforementioned ML-based channel decoding methods can be classified into two categories [102]: * _Data-driven methods_: these methods promote end-to-end learning approaches by substituting all the BP decoding components with a DNN [104]. Here, the structure of the code is ignored, and the channel decoding problem is regarded as a classification task from the input (i.e., received signal) to the output (i.e., decoded bits). * _Model-driven methods_: the goal of this family of methods is to substitute the decoding components of the classical BP-based decoder (e.g., deinterleaver, log-likelihood ratio estimators) with trained DNNs without altering the classical sequence of decoding components [108, 109, 110]. Little attention has been, however, paid to studying how DG methodologies can be applied to both categories beyond the simple variation of the SNR values. Moreover, empirical and theoretical understanding of their potential for channel decoding is still lacking. ### _Channel Estimation_ One of the crucial components of any wireless communication system is the channel estimator [111]. A vast body of prior work made use of data-driven ML techniques for channel estimation to show the attractive features of DNNs such as the low computational complexity at inference time [112, 113, 114, 115]. None of these studies, however, did analyze the impact of the distribution shifts on the reported estimation performance. 
Indeed, little effort has been devoted to investigating the robustness of DG algorithms in estimating wireless channels. Another channel estimation algorithm for wideband mmWave systems was proposed in [116] based on unfolding the iterative shrinkage thresholding algorithm with a few learnable parameters. This algorithm was further extended to include a hypernetwork for the sake of generalization to new environments. Given the SNR level and the number of resolvable paths, the hypernetwork generates suitable learnable parameters for the channel estimation model. Alternatively, in [117], the authors proposed to train a hypernetwork to learn weighting factors so as to aggregate channel estimation models learned for three main scenarios: urban micro, urban macro, and suburban macro. Hypernetwork recurrent DNNs have also been used to track wireless channels over a wide range of Doppler values [118]. For this multi-Doppler case, classical tracking methods make use of a bank of Kalman filters with an additional Doppler estimation step. Meta-learning was also adopted to train an encoder-decoder architecture to quickly adapt to new channel conditions by varying the number of pilot blocks preceding the payload in each transmission block [119]. For sparse MIMO channel estimation, the optimization/estimation modules of the approximate message passing (AMP) [120] and vector AMP (VAMP) [121] algorithms were substituted by learnable DNNs [122]. Specifically, DNNs did not neglect the "Onsager correction", which lies at the heart of the AMP paradigm, and was rather employed to construct the underlying DNNs. By doing so, it was shown that the Onsager correction is beneficial to train DNNs that \(i)\) require fewer layers to reach a predefined level of accuracy and \(ii)\) yield greater accuracy overall as compared to DNNs ignoring the Onsager correction term. Designing multiple channel estimation tasks pertaining to distinct domains requires varying wireless transmission parameters to simulate different channel communication scenarios. As depicted in Fig. 6, these parameters are categorized as: * _Propagation parameters_ which capture the different types of randomness in channel models [111]. They are not under control in practical communication scenarios. * _System parameters_ which govern multiple aspects of communication systems that are set by system designers such as the code rate, the number of transmit and receive antennas, the type and order of the modulation constellation, and the carrier frequency, etc. It is worth noting that varying these parameters to generate different domains will lead to one or multiple types of distribution shifts. As one example, the design of a channel estimator for broadband communication has to generalize over the channel distributions. With the widely adopted strategy for bandwidth expansion, known as carrier aggregation [123], the distribution of the channel coefficient shifts across multiple non-contiguous narrow frequency bands. For this reason, assuming that the channel is the output of a DNN, the DNN-based channel estimator has to account for the label shift of the estimated channel coefficients because their support changes as a function of the frequency band. Other related studies focusing on continual learning (CL) benchmarked the performance of CL-based methods for MIMO channels estimation by varying the SNR and the coherence time of the channel [124]. 
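As a simple illustration of how sweeping such parameters induces distinct channel-estimation domains, the hypothetical sketch below generates one domain per SNR level under i.i.d. Rayleigh block fading with orthogonal pilots (a realistic study would also vary Doppler, delay spread, antenna geometry, and so on):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(snr_db, n_samples=1000, n_rx=4, n_tx=4):
    """One 'domain': noisy pilot observations and true channels generated
    under a fixed set of system/propagation parameters."""
    shape = (n_samples, n_rx, n_tx)
    H = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    noise_var = 10.0 ** (-snr_db / 10.0)
    N = np.sqrt(noise_var / 2) * (rng.standard_normal(shape)
                                  + 1j * rng.standard_normal(shape))
    Y = H @ np.eye(n_tx) + N           # received signal for orthogonal pilots
    return Y, H                        # (input, label) pairs for the estimator

# Source domains used for training and a shifted target domain held out for testing.
source_domains = {snr: make_domain(snr) for snr in (0, 10, 20)}
target_domain = make_domain(-5)
```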
A continual learning minimum mean-square error (CL-MMSE) method has also been proposed in [125] where the DNN adapts to different numbers of receive antennas between 8 and 128 to generate tasks with different difficulties. ### _Beamforming_ Steering the main lobe of antenna array systems toward users in a real-time manner (i.e., beamforming) is a critical task to minimize interference and enhance the achievable rate of wireless communication systems. This is because, the antenna array processing in adaptive/reconfigurable digital signal processing algorithms assume no mismatch between the actual and expected array responses to the received signal [126]. With the increase of the number of antenna elements in massive MIMO systems, a larger number of degrees of freedom is achieved at the cost of higher algorithmic complexity incurred when optimizing the beamformer weights [111]. Since beamforming weights must be continuously computed under changing propagation environments, ML methods have been explored as a possible solution to low-complexity beamforming design [127, 128]. For instance, the weighted minimum mean-square error (WMMSE) estimator of the transmit MISO beamforming vector was unfolded such that each estimation iteration corresponds to a DNN [129]. By doing so, the matrix-inverse operation of the standard WMMSE estimator is avoided in addition to the advantage of a lower computational complexity without sacrificing the estimation performance. It was also reported that fully distributed reinforcement learning (RL) estimates the uplink beamforming matrix by dividing the beamforming computations among distributed access points without significant accuracy deterioration [130]. We refer the reader to [131] for a comprehensive review of ML-based beamforming methods. Few studies, however, have considered DG as an important ingredient to assess the performance of ML-aided beamforming solutions based on the meta-learning framework reviewed in Section V-A. A meta-learning algorithm for weighted sum rate maximization was proposed for beamforming optimization in MISO downlink channels [132]. Instead of using the WMMSE algorithm iteratively to update each variable involved in the beamforming optimization problem, long-short-term-memory (LSTM) networks were used in the inner-loop of the meta-learning framework to learn the dynamic optimization strategy and hence update the optimization variables iteratively. The outer-loop of the meta-learning framework, however, makes use of the updated parameters to maximize the weighted sum rate. This strategy adaptively optimizes each variable with respect to the geometry of the sum-rate objective function, thereby achieving a better performance than the WMMSE algorithm. Another line of work employed the standard meta-learning MAML algorithm [81] for adaptive beamforming to new wireless environments [133]. This work was further extended to reduce the complexity of the MAML algorithm by dedicating a DNN model as a transferable feature extractor for feature reuse across wireless channel realizations [134]. Self-supervised learning was used to map uplink sub-6 GHz channels into mmWave beamforming vectors without accessing labeled training datasets [135]. By exploiting a dataset containing pairs of uplink and downlink channels, DNNs learned implicitly and autonomously the data representations from correlations in the training data pairs to predict the beamforming vectors. 
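To fix ideas, the following minimal sketch trains a DNN to map a single-user MISO channel to a unit-power beamformer by directly maximizing the achievable rate; it is a generic, unsupervised formulation written for illustration (the dimensions, channel model, and noise level are assumptions), not the method of any of the cited works:

```python
import torch
import torch.nn as nn

# Channel h has n_tx complex coefficients, stored as real/imag parts.
n_tx, noise_var = 16, 1e-2
net = nn.Sequential(nn.Linear(2 * n_tx, 128), nn.ReLU(), nn.Linear(128, 2 * n_tx))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    h_re, h_im = torch.randn(64, n_tx), torch.randn(64, n_tx)     # Rayleigh draws
    w = net(torch.cat([h_re, h_im], dim=-1))
    w = w / w.norm(dim=-1, keepdim=True)                          # unit transmit power
    w_re, w_im = w[:, :n_tx], w[:, n_tx:]
    # |h^H w|^2 written with real arithmetic
    re = (h_re * w_re + h_im * w_im).sum(-1)
    im = (h_re * w_im - h_im * w_re).sum(-1)
    rate = torch.log2(1 + (re ** 2 + im ** 2) / noise_var)
    opt.zero_grad()
    (-rate.mean()).backward()                                     # maximize the rate
    opt.step()
```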
### _Data Detection and Classification_ To decrease the computational complexity of classical data detection algorithms, ML techniques were proposed to detect communication signals under various conditions by reformulating bit/symbol detection as a conventional classification problem [136, 137, 138, 139]. In this context, the various DG techniques reviewed in Sections III-V can be leveraged to investigate the generalization capabilities of DNNs when applied to the data detection problem. For instance, DG demodulation methods for _multiple_ modulation schemes have to account for both concept and label shifts of the estimated symbols because the modulation constellation varies from one domain to another. This scenario corresponds to wireless transmissions with adaptive modulation and coding where the choice of modulation order and coding rate is based on the instantaneous channel quality indicator (CQI). Recently, data detection in MIMO systems with spatially correlated channels has been extensively studied. Indeed, MMNet [140] proposed an unfolding algorithm based on approximate message passing augmented by learnable parameters to achieve state-of-the-art performance on correlated channels. However, this algorithm needs to be re-trained for each channel realization. To overcome this drawback, the authors proposed to use a hypernetwork to predict the learnable parameters based on perfect CSI and noise power knowledge [141]. The generalization of this framework was tested under different SNR levels and user mobility settings to simulate different channel spatial correlations. One drawback of this approach is that it assumes that the CSI and noise power are perfectly known at the receiver. Similarly, the unfolded version of the expectation propagation detector was proposed wherein damping factors are learned using meta-learning [142]. This detector was also extended using hypernetworks to achieve generalization to new channel realizations and noise levels but for typical values of many other system parameters [143]. The major drawback here is that DNN must be retrained for each set of new system parameters. A meta-learning strategy was also used to train the damping factors of the VAMP algorithm to improve its convergence speed and quickly adapt to new environments, thereby yielding more accurate signal detection performance [144]. Other similar types of detection/recognition tasks are also of the same classification nature such as modulation classification in non-cooperative communication systems [145] and wireless transmitter classification [146]. These works focus on improving the classification accuracy only, and the generalization ability of DNNs was studied in a few prior work only [147]. ### _Beam Prediction_ Since 6G and beyond communication systems are moving to higher frequency bands (e.g., mmWave and sub-terahertz), developing techniques for narrow directive beam management is critical to guarantee sufficient receive power. Existing solutions rely on leveraging the channel sparsity [148], constructing adaptive beam codebooks [149], and beam tracking [150]. Due to beam training overheads, these classical strategies, however, cannot meet the ever-increasing data rate demands of emerging applications for future systems with large antenna arrays serving highly-mobile users and latency-critical devices [151]. 
For these reasons, the development of ML-aided methods can offer data-driven solutions for the beam management problem because the beam direction decision depends on the user location and the geometry of the surroundings about which sensory datasets can be collected. A practical ML solution is expected to generalize to unseen scenarios and operate in realistic dense deployments. The fact that practical sensors do not normally provide accurate enough positions/orientations for narrow beam alignment motivates acquiring multi-modality datasets about the environment such as sub-6GHz channel information, LiDAR point clouds, and radar measurements [152]. DG algorithms should be developed to leverage these datasets representing different domains in the same environment. For example, ideas from domain-invariant representations are beneficial to cope with distribution shift sources such as the quality of collected measurements (e.g., noise level, sensitivity to weather conditions), user mobility, and signal blockages. These factors lead to the acquisition of multiple data domains which can be exploited to learn both domain-invariant and domain-specific features to determine the index of the optimal beamforming vector from the codebook in a generalizable manner.

### _RIS-Aided Wireless Communications_

Wireless communication aided by RISs has triggered a remarkable research effort in the last few years [153]. The possibility to purposely manipulate the electromagnetic propagation environment via the use of RISs has pushed researchers to revisit fundamental wireless communication problems (e.g., beamforming, channel estimation) and to incorporate the impact of RISs on the overall communication system performance, measured in terms of capacity, estimation accuracy, secrecy, outage, and energy efficiency. In this context, ML methods belonging to multiple learning paradigms (e.g., supervised/unsupervised learning, reinforcement learning, federated learning) have also been devised to account for the propagation effects of RISs. We refer the reader to the survey in [154] for an exhaustive summary of ML approaches for RIS-aided communication. In regard to DG, only a handful of studies have assessed the performance of ML methods from the perspective of the accuracy-generalization tradeoff. The problem of channel estimation for RIS-aided communication has been investigated in [155] where an adaptive shrinkage parameter based on a hypernetwork was used instead of a fixed shrinkage parameter. Based on the current channel recovery status, the hypernetwork provides an updated shrinkage parameter thanks to which the RIS-aided channel estimation accuracy has been assessed over different iterations as well as SNR values ranging between -10 dB and 25 dB. This work does not study DG as a function of the wireless communication parameters but rather with respect to the algorithmic steps of the LAMP algorithm. The robustness to additional noise of RL algorithms when the CSI is perturbed has been examined in [156] in the context of the optimization of RIS phase shifts. This work showed that RL methods exhibit resilience to different channel impairments as compared to classical optimization methods in the evaluation step only. In other words, DG training methodologies were not adopted and hence the work does not consider handling the domain shifts in estimating the phase shifts and only reports the performance degradation during inference.
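A minimal sketch of the kind of leave-one-domain-out protocol that such studies could adopt, so that domain shift is handled during training rather than merely observed at inference, is given below (the `make_model`, `train`, and `evaluate` routines are placeholders for any of the DG methods of Sections IV and V):

```python
# Schematic leave-one-domain-out protocol: train on all source domains except
# one, evaluate on the held-out domain, and report the per-fold and average scores.
def leave_one_domain_out(domains, make_model, train, evaluate):
    scores = {}
    for held_out, target_data in domains.items():
        sources = {k: v for k, v in domains.items() if k != held_out}
        model = make_model()
        train(model, sources)               # DG training on source domains only
        scores[held_out] = evaluate(model, target_data)
    return scores, sum(scores.values()) / len(scores)
```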
### _Applications in Edge Networks_ The domain shift problem arises naturally in IoT applications due to the heterogeneity in devices' behavior, spatial and temporal information, etc. For healthcare IoT sensors, the work in [157] applied a data alignment algorithm to learn and project accelerometer data from different users into a common feature space. The learned shared feature space is then used to track users' symptoms. For vehicle-to-everything (V2X) applications, a meta-learning approach for power allocation tasks has been proposed in [158] to enhance DNNs to achieve fast adaption to new environments with limited interactions. DL has been applied in human activity recognition to extract meaningful features from raw sensory data instead of hand-engineered ones. Human activity recognition usually involves multi-modal sensory data from multiple devices/subjects to predict one or multiple activity labels. For the same activity, sensor data can vary depending on the subjects' characteristics such as gender, age, and behavior. One solution to this intra-activity shift problem is to remove the user-specific feature from the sensory information and keep the common activity features across all users only. To do so, feature disentanglement is proposed to learn two groups of representations: the common activity features and user-specific representations [159]. Another line of work focused on learning statistical features from sensory data using kernel-based techniques [160, 161]. These studies, however, make use of kernel-based methods for more predictive feature extraction from raw sensory data only. The use of kernel-based methods to improve DG as explained in Section IV was not explored. ### _Summary and Lessons Learned_ In the preceding sections, we reviewed different key applications in wireless communication where DG algorithms should be further investigated for the sake of robust generalization. Our observations and lessons learned are summarized below. * **Lack of DG algorithms for wireless**: To better judge the suitability of data-driven ML methods for real-world communication uses cases, it is crucial to determine the uncertainty of ML algorithms and analyze their ability to generalize in order to lay the ground for rigorous evaluation protocols. However, minimal effort has been dedicated by the communication community to initiate such an investigation. As one example, use cases in 3GPP Release 18 package such as CSI compression with autoencoders raise multiple interesting DG questions. Questions about the autoencoder training procedure, as well as, the different user traffic scenarios and urban areas should be considered before determining the source and target datasets. * **One-sided focus on end-to-end DG**: Most DG algorithms are produced by the ML community and hence lack wireless communication knowledge in their designs. As a consequence, most DG communication papers make use of ML end-to-end techniques that are blind to the characteristics of the communication problem at hand. While this trend is worthwhile to assess the generalization performance of end-to-end learning methods, tailoring existing DG algorithms and devising new ones are essential research avenues that require further investigation. * **Need for wireless DG benchmarks**: The number of DG benchmarks within the ML community has significantly increased over the last few years due to the need for algorithmic generalization evaluation (see [162] for a comprehensive review). 
Unfortunately, with few exceptions [107, 152], the absence of a unified benchmarking in wireless communications renders the comparison of the different proposed DG algorithms impossible. Consequently, it is crucial to establish a unified framework to analyze the improvements of the research endeavors and henceforth design robust and efficient DG algorithms. ## VII Open Issues In this section, we discuss some of the open questions that evolve around the necessity to carefully incorporate the DG concept in ML-oriented communication research. This is because, unlike many other ML-based technologies, most real-world communication applications require real-time operation and seamless adaptation to dynamically changing propagation conditions. This precludes the luxury of repeatedly training ML-oriented models and makes DG-induced robustness a must-have feature in any ML-aided communication system. ### _Beyond End-to-End Learning for Generalization_ Most of the existing studies rely on end-to-end learning to train a holistic over-parametrized DNN architecture by applying gradient-based optimization to the learning system as a whole. This means that all transmit/receive modules of the communication system must be differentiable (in the reverse-mode algorithmic differentiation sense [163]). Few wireless communication libraries have been proposed to study differentiable communication systems [164, 165]. Before advocating the adequacy of applying ML methods to the building blocks of the wireless physical layer depicted in Fig. 9, DG has to be meticulously investigated and guaranteed _within_ and _across_ the blocks. From this perspective, it is not enough to claim the migration from model-based classical signal processing techniques to data-driven ML techniques without analyzing the impact of each migration on the overall system performance in terms of both accuracy and robustness. While such migration is a conceptually profound paradigm shift, its impact continues to be assessed from the accuracy perspective only, and hence must also be carefully analyzed through the lens of generalization/robustness. The legacy physical layer design strategy relies on the divide-and-conquer approach by decomposing (a.k.a. layering) the entire communication chain into smaller blocks [166]. Designing ML methods to substitute a single block or multiple blocks (see Fig. 9) raises critical generalization questions justified by the following two facts: * End-to-end learning methods are trained with gradient descent-like optimizers, which exhibit slow convergence on ill-conditioned problems or convergence to possibly poor local optima. In other words, training is performed while hoping that the structural preconditioning is sufficiently strong to steer a method as simple as gradient descent from a random initial state to a highly non-trivial solution [167]. This assumption is risky since all ML techniques tailored for wireless applications are exclusively used for non-convex optimization problems. * The valuable wireless communication know-how developed since the 50s is completely neglected during end-to-end training. "Standing on the shoulders of giants" (as Sir Isaac Newton once said) is a scientific tradition which promotes building upon the accumulated knowledge and discoveries made by others, and "end-to-end learning" must be proven robust to domain shifts to be considered an exception. 
For these considerations, going beyond conventional end-to-end learning is an important step towards answering critical DG questions in data-driven ML techniques applied to wireless communications. In what follows, we discuss research directions to cope with some end-to-end learning limitations. ### _Hybrid Data-Driven and Model-Driven Methods_ After more than a century-long research effort in radio communications, state-of-the-art communication modeling and fast estimation algorithms are becoming more essential to high-bandwidth transmissions. From a DG perspective, the power of these classical model-driven tools lies in their guaranteed generalization capabilities because they do not depend on specific domains that are tied to generated/collected datasets. This generalization, however, often comes at the cost of high complexity. Data-driven methods can come into play as an effective tool to reduce the computational complexity of classical model-based methods at the cost of generalization. As advocated in [168], a hybrid framework that combines the benefits of both data-driven and model-based techniques is worth pursuing. Adopting this framework will prevent the generated domains for DG from being fully dependent on \(i)\) the convergence of gradient-based optimizers for data-driven methods, or \(ii)\) the complexity of model-based methods. For better illustration, we elaborate in what follows on how data-driven methods can be combined with physically consistent model-based methods. The study of DG for MIMO communication should benefit from the side information provided by the physical laws governing the wave transmission and the circuits of RF components (i.e., amplifiers, and antennas). By employing physically consistent models [169, 35, 170], it is possible to exploit the inherent symmetries and invariances in communication scenarios owing to Maxwell's equations [42, 171]. From this perspective, physically consistent models for wireless communications offer an opportunity to generate communication datasets which exhibit domain-invariant regularities (e.g., antenna impedances), thereby diminishing the generalization difficulties across domains. As one example, fixing the impedance matrices of transmit and receive linear/planar antenna arrays increases the amount of correlation in the wireless channel, which can be exploited by DNNs for better channel estimation accuracy. Fig. 9: The possible integration steps of ML methods into the conventional transmit/receive communication chain if ML methods will be proven to be robust to domain shifts. Moreover, this physically consistent direction opens the door for the analysis of DG through the lens of antenna theory. For example, it might be possible to determine which spacing parameter of the antenna array provides the best DNN accuracy for channel estimation. By doing so, realistic wireless communication domains are generated and more faithful representations of the real-world transmissions are simulated, thereby leading to a physically consistent version of digital twins for wireless communications [172]. ### _From Image-Based DG Methods to Signal-Based Methods_ Existing DG methodologies have been predominantly geared towards image-based vision tasks, leaving signal-based tasks almost unexplored despite being versatile in several real-world applications such as healthcare, retail, climate, finance, and communication. This unbalanced exploration impacts the development of specific DG methods for signal-based tasks. 
For instance, feature alignment approaches for DG are relying heavily on DNNs as feature extractors which are specifically fine-tuned to vision tasks, thereby leaving DG feature extraction for non-image signals severely underexplored. Some work looked at temporal distributional shifts in clinical healthcare [173, 174] and climate [175] applications, but none of the prior work explored it in wireless communication. From this perspective, we highlight the importance of taking the first step towards a deeper understanding of temporal distributional shifts in wireless communication due to dynamic changes in the received signal resulting from the varying propagation properties (e.g., coherence time and Doppler shift). ### _Compound Domain Generalization_ As mentioned previously, most of the presented methods for DG assume a homogeneous setting where domain labels are available. However, this assumption may not be realistic in several problems where the domain labels are hard to obtain or define. In this case, several techniques discussed above either become inapplicable (e.g., meta-learning) or their performance degrades drastically [7]. Recently, there has been a surge of interest in studying the compound DG setting in vision problems. Most of the methods for compound DG propose to infer latent domain information from data and then use standard learning techniques to generalize across the latent domains. These solutions are, however, based on different restrictive assumptions such as: \(i)\) the latent domains are distinct and separable [98], \(ii)\) the domain heterogeneity originates from stylistic differences [176] or \(iii)\) the latent domains are balanced [177]. Compound DG is hence still an active research field with a lot of room for improvement, especially in wireless communication problems. ### _Federated Domain Generalization_ Distributed learning algorithms enable devices to cooperatively build a unified learning model across agents with local training. As a result, a wide variety of distributed ML methods have been proposed and extensively analyzed within the federated learning (FL) framework [178]. For wireless physical layer applications, FL has been explored to address multiple key communication problems beyond the data security aspect [179] such as channel estimation [180], symbol detection [181] and beamforming [182]. All of these works do not assume the availability of a central entity (e.g., base station) at which the learning model is trained. However, the question of whether the model learned by each agent generalizes to unseen scenarios is still unanswered and this remains an unexplored research area. In the context of IoT applications, very few efforts started investigating the challenges of DG for IoT devices by aligning each device's domain to a reference distribution in a distributed manner [183]. Addressing DG in the FL context is known as _federated domain generalization_ (FDG) [184]. Distributed agents can collect their local data independently, hence naturally forming a distinct source domain. At the time of writing, no research paper in wireless communication has studied FDG, e.g., in the context of distributed MIMO [185] consisting of distributed antenna array systems. ## VIII Conclusion Studying the impact of distribution shifts on the performance of ML-based algorithms for wireless applications is of paramount importance to our research community to better reflect on the adequacy of adopting the data-driven ML approaches in communication systems engineering. 
In particular, the investigation of domain generalization will lay the ground for rigorous evaluation protocols of data-driven algorithms for wireless communications systems. In this paper, we presented an overview of state-of-the-art methodologies for domain generalization problems to handle distribution shifts. To justify the need to devise new algorithms with better generalization capabilities, we distinguished the four types of distribution shifts between source and target domains. We also provided an overview of multiple important fields related to generalization to better put domain generalization in proper perspective across close research areas. Then, we summarized the three existing methodologies to improve the generalization capabilities of deep learning models, namely, data manipulation, data representation, and domain generalization learning paradigms. In doing so, we gave multiple examples and suggestions not covered in the current literature where these methodologies can be applied for wireless communications applications. We then reviewed the recent research contributions to improve the generalization of neural network models when solving wireless communication problems. These problems involve beam prediction, data detection, channel decoding, beamforming, edge networks, etc. We also presented the learned lessons from the existing applications of domain generalization methodologies for wireless communication problems by highlighting the lack of \(i)\) algorithms exploiting the domain knowledge from well-established communication models, and \(ii)\) open-source benchmarks to accelerate the development of robust algorithms for future wireless networks. Finally, we discussed open questions to enrich and bridge the gap between both domain generalization and wireless communication fields.
2306.12780
Automatic Feature Detection in Lung Ultrasound Images using Wavelet and Radon Transforms
Objective: Lung ultrasonography is a significant advance toward a harmless lung imagery system. This work has investigated the automatic localization of diagnostically significant features in lung ultrasound pictures which are Pleural line, A-lines, and B-lines. Study Design: Wavelet and Radon transforms have been utilized in order to denoise and highlight the presence of clinically significant patterns. The proposed framework is developed and validated using three different lung ultrasound image datasets. Two of them contain synthetic data and the other one is taken from the publicly available POCUS dataset. The efficiency of the proposed method is evaluated using 200 real images. Results: The obtained results prove that the comparison between localized patterns and the baselines yields a promising F2-score of 62%, 86%, and 100% for B-lines, A-lines, and Pleural line, respectively. Conclusion: Finally, the high F-scores attained show that the developed technique is an effective way to automatically extract lung patterns from ultrasound images.
Maria Farahi, Joan Aranda, Hessam Habibian, Alicia Casals
2023-06-22T10:19:44Z
http://arxiv.org/abs/2306.12780v1
# Automatic Feature Detection in Lung Ultrasound Images using Wavelet and Radon Transforms

###### Abstract

**Objective:** Lung ultrasonography is a significant advance toward a harmless lung imagery system. This work has investigated the automatic localization of diagnostically significant features in lung ultrasound pictures which are Pleural line, A-lines, and B-lines. **Study Design:** Wavelet and Radon transforms have been utilized in order to denoise and highlight the presence of clinically significant patterns. The proposed framework is developed and validated using 3 different lung ultrasound image datasets. Two of them contain synthetic data and the other one is taken from the publicly available POCUS dataset. The efficiency of the proposed method is evaluated using 200 real images. **Results:** The obtained results prove that the comparison between localized patterns and the baselines yields a promising F2-score of 62%, 86%, and 100% for B-lines, A-lines, and Pleural line, respectively. **Conclusion:** Finally, the high F-scores attained show that the developed technique is an effective way to automatically extract lung patterns from ultrasound images.

Keywords: Pattern Localization, Lung Ultrasound, Pattern Extraction, Wavelet Transform, Radon Transform, Image Processing, LUS

## 1 Introduction

There are different types of Lung Diseases (LD) including pneumonia, pneumothorax, SARS, COVID-19, etc. Alvarado and Arce (2016). LD pathologies could worsen and cause more severe problems if left without treatment. For example, untreated pneumonia would give rise to a blood infection or even lung abscesses Yoon et al. (2013). Similarly, unhealed pneumothorax or COVID-19 might lead to death Zhu et al. (2020). At present, there are several ways to perform lung tests including computed tomography (CT) Yau et al. (2021), chest X-ray Buonsenso et al. (2020), magnetic resonance imaging (MRI) Sodhi (2021), and ultrasound (US) Lichtenstein (2014). CT imaging is a gold standard for lung pathology detection since it generates high-resolution images. However, it is expensive and delivers a high dose of radiation to the patient Yau et al. (2021). X-ray is widely available and cost-effective, although it cannot assure the detection of all lung pathologies and still produces ionizing radiation Buonsenso et al. (2020). Advances in MRI have made it a feasible option for lung evaluation. MRI yields very high-resolution images without ionizing radiation. However, it is expensive and with limited availability Sodhi (2021). Instead, US avoids radiation and is a cheaper modality Lichtenstein (2014). It does not require special shielded and equipped rooms like MRI or CT Shams et al. (2021). It can also be effectively utilized at the bedside, thus avoiding the need to move the patient and, consequently, saving time and cost Buonsenso et al. (2020). However, US images have not been used widely for classification approaches because of their low quality Liu et al. (2019). In spite of this limitation, this work will prove that US is reliable enough to make a first lung disease diagnosis. The emergence of the COVID-19 pandemic has pushed the use of ultrasonography devices Soldati et al. (2020). This shift in focus is primarily due to the time-consuming nature of CT and MRI scans. As a consequence, healthcare professionals have increasingly relied on ultrasound images to feed their decision-making processes, aiming to minimize the necessity for unnecessary hospitalizations Smargiassi et al. (2021).
They follow the Bedside Lung Ultrasound in Emergency (BLUE) protocol Lichtenstein (2014), Bekgoz et al. (2019) to make their decisions. The BLUE protocol is a systematic approach used by clinicians to evaluate and diagnose various lung pathologies, such as pneumothorax, pneumonia, pleural effusion, etc., by examining specific lung zones and patterns using lung ultrasound imaging. The LUS patterns include the Pleural line, A-lines, and B-lines. In an adult LUS image, the Pleural line is formed by the interface between the air-filled lung tissue and the pleural membrane. The ultrasound waves encounter this boundary and undergo a strong reflection, resulting in a bright horizontal line on the image. The pleural line is an essential landmark in lung ultrasound as it helps identify the lung boundaries and serves as a reference point for further analysis. A-lines are created due to the multiple reflections of ultrasound waves between the pleural line and the ultrasound probe. As the sound waves bounce back and forth, they create parallel lines that appear equidistant from each other. A-lines are typically observed in normal lung tissue and indicate a healthy air-filled lung with no significant pathology. B-lines occur when ultrasound waves encounter a significant difference in tissue density, such as the presence of fluid in the lung interstitium or alveoli. The sound waves penetrate the fluid-filled spaces and are strongly reflected, resulting in vertical bright lines starting from the pleural line. They are commonly associated with various pathological conditions, such as pulmonary edema, interstitial lung diseases, or pneumonia. The number and distribution of B-lines can provide important diagnostic information about the underlying lung pathology Lichtenstein (2014). Fig. 1 shows these LUS features in a real image taken from an adult's lung using a convex probe. Fig. 1(a) is a normal lung showing the pleural line without any A or B lines. Fig. 1(b) is a normal lung showing one A-line. Fig. 1(c) belongs to a patient suffering from COVID-19 showing multiple B-lines. Finally, Fig. 1(d) is from a pneumonia patient with one A-line and two B-lines.

Figure 1: Adult's lung ultrasound images and their patterns indicating the Pleural line (green arrows), A-lines (blue arrows), and B-lines (red arrows). LUS images including (a) Only Pleural line, (b) Pleural line and A-Line, (c) Pleural line and B-Line, (d) Pleural line, A-Lines, and B-Lines.

Table 1 illustrates how the mentioned features are useful in lung ultrasound disease diagnosis. Nowadays, Deep Learning (DL) methods are commonly used for image segmentation and diagnosis Bandyk et al. (2021). In DL, features are automatically extracted by a network after a training process. Deep learning-based systems come out with good performance provided that the amount of available data is large enough Bandyk et al. (2021). However, they are black boxes, which often lead to unexplainable procedures. They also require special and expensive GPU systems for training. Instead, standard computer vision algorithms are preferred when data is scarce due to their ability to achieve reliable results with smaller amounts of data. These algorithms typically follow logical frameworks that are explainable and understandable, making them easier to interpret and analyze. This makes them practical and widely applicable in various computer vision tasks. Nevertheless, exploring the best way for their formulation is not always easy and fast, or even feasible Khan et al. (2021).
Taking into consideration the lack of a considerable amount of LUS images, in the current study, we have explored the feasibility of relying on computer vision. During the past decade, especially after the appearance of COVID-19, some efforts were dedicated to discovering a suitable way for LUS pattern extraction. For instance, Barrientos et al. (2016) utilized US image data from children under 5 suffering from pneumonia to detect the Pleural line. The image was locally thresholded using a vertical gradient. They compared their final results with manually segmented images, reporting a mean quadratic error of 11.17 pixels. Karakus et al. (2020) combined a sparsity-enforcing and Cauchy-based penalty process to quantify B-lines in LUS images of patients with COVID-19. They asserted that their algorithm correctly detects B-lines in 87% of non-COVID-19 patients. Unfortunately, they did not reveal any results about the testing of their algorithm on patients suffering from COVID-19. Susanti et al. (2021) utilized a top-hat morphological grayscale filtering method with a texture structure element to extract the Pleural line and A-lines. They also did not report any sensitivity or specificity of their results. Anantrasirchai et al. (2017) proposed a method for line detection in speckle images. They used the Radon transform and the L1 regularization method to extract lines from 50 simulated noisy LUS images. They reported the simulated B-lines detection performance of 54%, 40%, and 33% for F0.5, F1, and F2 scores, respectively. Table 2 summarizes the mentioned works on LUS pattern extraction.

| Ref. | Goal | Data | Result |
|---|---|---|---|
| Li | Lung area segmentation | LUS images from normal pregnant women | Shown by images |
| Barrientos et al. (2016) | Pleural line detection | 23 LUS images from children under 5 | Mean quadratic error = 11.17 pixels |
| Karakus et al. (2020) | B-line artifacts quantification | LUS images from patients with COVID-19 | 87% correctly detected lines in non-COVID samples |
| Susanti et al. (2021) | Pleural line and A-line detection | 150 LUS frames of public data | Shown by images |
| Anantrasirchai et al. (2017) | Line detection in speckle images | 50 simulated LUS images | F0.5 = 54%, F1 = 40%, F2 = 33% |

Table 2: Summary of the works mentioned in the state of the art

With the aim to improve these still poor results on segmentation, we have developed an algorithm based on the use of the Wavelet transform to denoise and the Radon transform to extract lung ultrasound features including the Pleural line, A-lines, and B-lines. In section II we illustrate the materials and data conditioning. Our proposed method for data analysis is explained in section III. Results are shown in section IV and the results and method performance are discussed in section V.

## 2 Materials and Data Conditioning

In a LUS image, all clinically important signs arise below the Pleural line Lichtenstein (2014). When the Pleural line is detected, the area above can be discarded by cropping the image, since it just represents the skin and muscles. Indeed, significant information on the A and B lines appears below the Pleural line.

| Lung Pathology |  |  |  |  |
|---|---|---|---|---|
| Normal Lung | ✓ | ✓ | * | * |
| Pneumothorax | ✓ | ✓ | ✓ | ✗ |
| COVID-19 | ✓ | ✓ | * | ≥2 |
| Broncho-Pneumonia | ✓ | ✓ | ✓ | ✓ |
| Pneumonia Consolidation | ✓ | ✗ | ✗ | – |
| Pleural Effusion | ✓ | ✗ | ✗ | ✗ |

*May be seen.

Table 1: A summary of LUS features in different lung pathologies
Fig. 2 shows the layers of a LUS image. Observing the following clinical features is beneficial in order to extract relevant patterns Lichtenstein (2014):

* From the top of the image, a LUS image starts with information about skin and muscles.
* The Pleural line appears after the ribs.
* The Pleural line is horizontal, with an angle between -15 and +15 degrees.
* A-lines are reflections of the Pleural line and appear at the same distance from each other.
* B-lines are vertical; they start from the Pleural line and advance down through the image to the bottom.

Knowing the characteristics of these patterns, we use three different datasets to develop the framework.

### Dataset

To design the denoising procedure, we created a dataset consisting of 1950 simulated ultrasound images. These images were generated using the Matlab Ultra Sound Toolbox (MUST). We modeled a convex transducer with 76 elements. The dataset was designed to mimic real-world ultrasound data and encompassed a variety of parameters, including the number of A-lines (from 0 to 3), B-lines (from 0 to 5), and pleural lines (one/multiple). The simulated data covered a range of US frequencies (from 1 MHz to 5 MHz) and depths (from 7 cm to 11 cm), representing different imaging scenarios. By incorporating various frequencies, we aimed to capture the diversity of ultrasound signals encountered in clinical practice. Additionally, the different depths simulated in the dataset allowed us to evaluate the performance of the denoising methodology across different imaging depths. The total 1950 images used contain 3300 Pleural lines, 1575 A-lines, and 4500 B-lines. We used the simulated dataset published in Zhao et al. (2022) to develop the feature detection procedure. In this dataset, 30,000 lung phantoms were simulated with MATLAB using the Field II simulation software package. Simulations were based on lung ultrasound images with A-line, B-line, and consolidation features. We used data related to A-lines (10,000 images) and B-lines (10,000 images) to implement our algorithm. Fig. 3 shows samples and their related ground truth from this dataset. The left image contains the simulated image while the right image depicts the ground-truth mask. To validate the proposed framework, we utilized a publicly available dataset, POCUS, consisting of different lung pathologies, like pneumonia, COVID-19, and healthy Born et al. (2021). It includes 183 video clips acquired by different types of transducers. We also utilized the published data in Ketelaars et al. (2018) that contain 32 convex video clips recorded from patients suffering from pneumothorax. From these data, we randomly chose 200 frames to test our algorithm. Table 3 shows the distribution of the used data per disease category. Since the used data are provided by different hospitals that apply various convex transducers at different frequencies, we manually selected the ultrasound area of the images to normalize them.

Figure 2: LUS layers. The first and second columns show images from a linear and a convex transducer respectively.
The skin layer (yellow), the subcutaneous layer (purple), the muscular layer (red), the cortex of the ribs (white curve) with shadowing (white arrow), the intercostal region between the ribs (blue), and the pleural line (green) are all depicted in the second row.

### Pre-Processing and denoising

Ultrasound images are susceptible to heavy corruption caused by various sources of noise, including patient movements, sensor movements, and reflections from the skin and muscles Piscaglia et al. (2020). These noises can manifest as Salt and Pepper (impulse or spike), Poisson, Gaussian, and Speckle noise Gupta et al. (2018). In the field of ultrasound imaging, wavelet-based denoising methods are commonly employed Georgieva et al. (2021). In this regard, a comparative review by Georgieva et al. (2020) demonstrated the significant impact of using wavelets for noise reduction in ultrasound images. Additionally, in our previous study Farahi et al. (2022), we implemented a comprehensive methodology to identify the optimal wavelet family for denoising ultrasound signals obtained from a portable Doppler device. In this study, we utilize a methodology similar to that employed in our previous work to identify the optimal mother wavelet family for denoising lung ultrasound images. Our investigation involves assessing the Signal-to-Noise Ratio (SNR) of denoised images obtained using various mother wavelets, different levels of decomposition, and threshold values. To explore the effectiveness of different mother wavelets, we consider a total of 61 options, including Haar, Daubechies (db1, db2,..., db20), Symlets (sym1, sym2,..., sym20), Coiflets (coif1, coif2,..., coif5), and Biorthogonal (bior1.1, bior1.3,..., bior6.8). Furthermore, we evaluate the impact of different levels of decomposition, ranging from level 2 to level 5. Additionally, we examine a wide range of threshold values, spanning from 0 to 101, to determine their influence on the denoising process. By systematically investigating these parameters, we aim to identify the combination that reaches the highest SNR after denoising. In order to facilitate further analysis and processing, we perform a transformation on the ultrasound images obtained from convex transducers. Since the original images are in polar format, we employ a trapezium to delineate the LUS area and subsequently apply an affine transformation to convert the images into a rectangular format. This transformation enables us to work with the LUS data more conveniently in Cartesian coordinates, allowing for easier interpretation and analysis (Fig. 4(a), 4(b)).

## 3 Data Analysis

Data analysis of LUS images involves processing and interpreting the ultrasound area to extract meaningful information about the lungs. It mainly includes tasks such as Pleural line, A-line, and B-line extraction, which are explained below.

Figure 3: Simulated samples taken from Zhao et al. (2022). From left to right: simulated image with A-lines, related mask to A-lines, simulated image with B-lines, and related mask to B-lines.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{No. of Data} & \multicolumn{1}{c|}{No. of Used Frames} & \multicolumn{1}{c|}{} \\ \hline Group Name & Videos & Frames & To test our work & Ref. \\ \hline Normal Lung & 28 & 3834 & 50 & Born et al. [2021] \\ \hline COVID-19 & 54 & 10083 & 50 & Born et al. [2021] \\ \hline Pneumonia & 38 & 4254 & 50 & Born et al. [2021] \\ \hline Pneumothorax & 32 & 14400 & 50 & Ketelaars et al. [2018] \\ \hline Sum & 152 & 32571 & 200 & \\ \hline \end{tabular} \end{table} Table 3: The distribution of public data used for the study
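To make the wavelet-selection procedure of the pre-processing step above concrete, the following minimal sketch denoises an image with a given mother wavelet, decomposition level, and threshold, and grid-searches these parameters by SNR. It assumes the PyWavelets package and a clean reference image (as available for the simulated dataset); the function names and grid values are illustrative, not the exact implementation used in this work.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="sym17", level=5, threshold=50.0):
    """Decompose, soft-threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [approx] + [
        tuple(pywt.threshold(d, threshold, mode="soft") for d in det)
        for det in details
    ]
    rec = pywt.waverec2(shrunk, wavelet)
    return rec[: img.shape[0], : img.shape[1]]   # drop possible reconstruction padding

def snr_db(reference, denoised):
    """SNR (dB) of a denoised image against the clean simulated reference."""
    ref = reference.astype(float)
    noise = ref - denoised
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

def best_setting(noisy, reference, wavelets, levels=(2, 3, 4, 5),
                 thresholds=range(0, 101, 10)):
    """Return the (wavelet, level, threshold) combination with the highest SNR."""
    best, best_snr = None, -np.inf
    for w in wavelets:
        for lv in levels:
            for th in thresholds:
                s = snr_db(reference, wavelet_denoise(noisy, w, lv, th))
                if s > best_snr:
                    best, best_snr = (w, lv, th), s
    return best, best_snr

# e.g. best_setting(noisy_img, clean_img, ["haar", "db4", "sym17", "coif5", "bior2.6"])
```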
### Pleural-line extraction

Pleural line detection and localization in lung ultrasound images is a challenging task due to factors such as noise, low image quality, and variability in the appearance of the pleural line across different patients. Having discarded both the use of gradients to segment the different tissue structures and the use of DL (due to the lack of enough data), it is essential to use advanced image processing and computer vision techniques, as well as a reasonable dataset for training and validation, to achieve good performance. We employ a series of steps to extract the Pleural line. Firstly, we apply a blurring technique to reduce the influence of skin and muscle in the top part of the image. This helps to enhance the visibility of the Pleural line, which typically exhibits a slope within the range of [-15, 15]+\(\frac{\pi}{2}\) degrees Lichtenstein (2014). Next, we apply the Radon transform for projection angles within the range of [-20, 20]+\(\frac{\pi}{2}\) degrees, as this range considers the potential orientations of the Pleural line. By applying the Radon transform, the image is transformed from the spatial domain to the Radon domain, where straight lines appear as distinct peaks. This enables efficient detection and localization of the Pleural line. By identifying the local maximum intensity in the Radon domain, we are able to locate the Pleural line. It is worth mentioning that the Pleural line is characterized by its length and brightness, making it distinguishable from other structures in the image. Once the Pleural line is detected, we discard the skin and muscle regions from the image. This allows us to focus our subsequent analysis on the region below the Pleural line, where important features related to lung abnormalities are more likely to be present. For a visual representation of the applied steps in Pleural line extraction, refer to Fig. 5, which provides a detailed overview of the process.

Figure 4: The result of transforming the convex image to Cartesian coordinates using the affine transformation. (a) Convex denoised image and the deployed trapezoid on it. (b) Transformed trapezoid to rectangular.

Figure 5: The proposed framework to extract lung features including Pleural line, A-lines and B-lines.

### A-lines and B-lines extraction

To extract A-lines, which are reflections of the Pleural line, we utilize the Radon transform with a narrow angle range between \((\theta_{PL}-5)\) and \((\theta_{PL}+5)\) degrees. Here, \(\theta_{PL}\) represents the angle of the extracted Pleural line. Since A-lines are short straight lines, the procedure involves applying the Radon transform to each A-line and discarding the top part of the line. This process is repeated iteratively until the bottom of the image is reached, enabling the extraction of A-lines throughout the lung region of interest. Finally, we check the distance between the extracted A-lines: it should be equal to the distance between the pleural line and the skin. A-lines that do not have a similar distance are discarded. Similarly, for B-line extraction, we employ the Radon transform on a skin-removed image, because B-lines are also straight bright lines. In this case, the transform is applied once, with a vertical angle range between [-5, 5] degrees. This allows us to capture the characteristic patterns associated with B-lines in the image. In the end, B-lines that do not extend from the Pleural line to the bottom of the LUS area are discarded.
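As a rough illustration of the Radon-domain peak search used above for the Pleural line, A-lines, and B-lines, the sketch below finds the angle and offset of the strongest straight line inside a restricted angular window of a rectangular LUS image. It assumes scikit-image; the angle windows follow the text, while the mapping of the Radon peak back to image coordinates is simplified and only indicative.

```python
import numpy as np
from skimage.transform import radon

def strongest_line(img, center_deg, half_window_deg, step_deg=1.0):
    """Return (angle, offset) of the brightest straight line in `img`,
    searching only projection angles in [center - w, center + w] degrees.
    The offset is measured along the projection axis, relative to the centre."""
    thetas = np.arange(center_deg - half_window_deg,
                       center_deg + half_window_deg + step_deg, step_deg)
    sinogram = radon(img.astype(float), theta=thetas, circle=False)
    pos, ang = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return thetas[ang], pos - sinogram.shape[0] // 2

# Pleural line: near-horizontal, i.e. angles in [-20, 20] + 90 degrees.
# theta_pl, offset_pl = strongest_line(rect_img, 90.0, 20.0)
# A-lines: repeat the search with a narrow window around theta_pl on the region
# below the previously found line; B-lines: near-vertical, angles in [-5, 5] degrees.
# theta_b, offset_b = strongest_line(lung_area, 0.0, 5.0)
```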
To provide an overview of the LUS pattern extraction process, please refer to Fig. 5, which illustrates the schema of the procedure.

### Evaluation Metrics

To examine the performance of pattern detection and localization, we constructed a ground truth by manually extracting the existing patterns and creating masks for the Pleural line, A-lines, and B-lines (see Fig. 6). The manual extraction of the masks was performed by an expert in the field and verified by a physician to ensure their accuracy. By comparing the algorithm's results with the ground truth, we were able to determine the precision and recall of the algorithm's localization performance. This analysis helps us assess how accurately the algorithm is able to identify and locate the patterns of interest. In addition to precision and recall, we calculated the three most common F-scores (F0.5, F1, and F2) Jiang (2020), which provide a comprehensive evaluation of the algorithm's performance by considering both precision and recall simultaneously. These metrics allow us to assess the algorithm's effectiveness in detecting patterns in lung ultrasound images and provide insights into its potential clinical utility.

## 4 Results

In this paper, we denoised LUS images using different mother wavelets and analyzed their performance by calculating the SNR parameter. A higher value of SNR indicates which type of mother wavelet is the most suitable for noise removal in LUS images. Fig. 7 provides information about the average and standard deviation of the SNR over the 1950 simulated images. As shown in Fig. 7, the average value of the SNRs is higher at level 5. Also, the 5th level is less sensitive to the threshold selection. To have a better visualization, for each combination, the number of images that have an SNR higher than the maximum of the average SNRs is calculated and shown in a heatmap graph (see Fig. 8). This figure shows that at level 5, the SNR is not highly sensitive to the selection of the threshold value, especially for values between 40 and 60. Therefore, at level 5 and using a threshold of 50, we can choose the mother wavelet with which the largest number of images reach their maximum SNR. Based on the graph shown in Fig. 9, 'sym17' shows the maximum. So we chose 'sym17' with a threshold of 50 at the 5th level of decomposition to denoise the LUS images.

Figure 6: Different LUS samples and their manually extracted masks as ground truth. The first column shows LUS images. The second column illustrates the Pleural line, while the third illustrates A-lines and the last column depicts B-lines.

Figure 7: Average and standard deviation of SNRs obtained after denoising using mother wavelets. The first row shows the average of the SNRs, while the standard deviation of the SNRs is depicted in the second row. Decomposition levels 2, 4, and 5 are shown in the first, second, and third columns, respectively. In each graph, the x-axis depicts the threshold and the y-axis the chosen mother wavelet.

Afterward, using local maxima in Radon space, we obtained the Pleural line position. Fig. 10 shows an example of pleural line extraction. In this figure, an affine transformation of an image from trapezium to rectangular is shown (Fig. 10a). Then, the rectangular image is transformed into Radon space (Fig. 10b). The maximum of the local maxima is shown by a red star, which is the candidate for the pleural line.
Fig. 10c illustrates the result of the extracted pleural line in a rectangular image, obtained using the inverse Radon transform for the angle and position of the red star. Finally, the image is split using the obtained pleural line to define the lung area (Fig. 10d). The area below the yellow line is defined as the lung area and is used for further analysis. The results of A-line and B-line extraction are illustrated in Fig. 11. Fig. 11a shows real sample data in polar coordinates. The A-lines extracted in the image affine-transformed to Cartesian coordinates are shown in Fig. 11b. An extracted B-line is depicted in Fig. 11c. Fig. 11d shows all extracted patterns together in Cartesian coordinates. Finally, all patterns are highlighted in polar coordinates in Fig. 11e. Some subjective results are shown in Fig. 12. This figure covers images with different patterns inside. It also depicts the comparison of our method with the manually extracted gold standard. In this figure, the first column illustrates the original images, the second column shows the results manually extracted by an expert, and the third column illustrates the results using the proposed algorithm. Pleural lines are in green, A-lines are in blue, and B-lines are in red. The best-obtained F2-score is 100% for Pleural line detection, 86.29% for A-lines, and 62.93% for B-line extraction. Table 4 summarizes the achieved results.

## 5 Discussion

In this work, firstly, a Wavelet-based method is utilized to denoise lung ultrasound images. It can be seen that the selection of the threshold value is not sensitive to the selection of the mother wavelet. Then, the performance of the Radon transform is investigated for pattern detection in LUS images. Patterns are automatically extracted using the local maxima in the Radon domain. It is demonstrated that the Radon transform is reliable for the extraction of the three most important patterns in LUS images, namely the Pleural line, A-lines, and B-lines. Considering the results of Table 4, the proposed algorithm is highly robust in Pleural line detection, as its precision and recall are 100%. Detecting a higher number of desirable patterns yields a greater recall value, while higher precision means that the number of undesirable detected patterns is low. In our work, it is important not to miss any useful pattern for disease diagnosis, so higher recall is more important than higher precision. Looking at Table 4, our suggested method reached F2-scores of 86.29% and 62.93% for the extraction of A-lines and B-lines, respectively. For A-line patterns, the F2-score is higher than the F0.5-score, which means that the proposed framework worked well for A-line detection. Whereas the F0.5-score for B-line detection is higher than the F2-score, indicating that fewer unwanted patterns are detected. Our work achieved satisfactory F-scores compared to previously published work Anantrasirichai et al. (2017). In future work, we will try to increase the recall value for B-line detection using more data. We also aim to investigate the performance of deep learning techniques and explore how fast and accurate they are for the task addressed in this paper.
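For reference, the F-scores discussed above and reported in Table 4 combine precision \(P\) and recall \(R\) as \(F_{\beta}=(1+\beta^{2})PR/(\beta^{2}P+R)\); a small helper (with values given as fractions) is shown below.

```python
def f_beta(precision, recall, beta):
    """F-beta score: beta < 1 favours precision, beta > 1 favours recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# With the A-line results above: f_beta(0.8352, 0.8702, 2) ≈ 0.863,
# consistent with the F2 entry of Table 4.
```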
\begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{Proposed Method} & Ref. Anantrasirichai et al. (2017) \\ \cline{2-5} \multicolumn{1}{c|}{} & Pleural Line & A-Lines & B-Lines & \\ \hline Precision & 100* & 83.52 & 74.10 & - \\ \hline Recall & 100 & 87.02 & 60.65 & - \\ \hline F0.5 & 100 & 84.19 & 70.95 & 54 \\ \hline F1 & 100 & 85.23 & 66.70 & 40 \\ \hline F2 & 100 & 86.29 & 62.93 & 33 \\ \hline \end{tabular} * all numbers are in percentage. \end{table} Table 4: The comparison between the results obtained by the proposed method and the work introduced by Anantrasirichai et al. (2017).

Figure 11: The results of the suggested method for LUS pattern extraction. (a) Original image, (b) extracted A-lines, (c) extracted B-lines, (d) all patterns projected in the rectangular image, (e) the rectangular image with all patterns converted back to the trapezoid image.

Figure 10: An example of the pleural line extraction procedure. (a) Image affine-transformed to rectangular format. (b) The rectangular image in Radon space. The maximum is shown by a red star. (c) Extracted pleural line. (d) Split image using the obtained pleural line to define the lung area. The area below the yellow line is defined as the lung and is used for further steps.

## Acknowledgment

The authors would like to thank R. Ketelaars who donated 66 LUS videos recorded from patients suffering from pneumothorax. We also appreciate the contribution of Jannis Born and Lingyi Zhao who helped us with data collection.
2302.00673
ADAPT: Action-aware Driving Caption Transformer
End-to-end autonomous driving has great potential in the transportation industry. However, the lack of transparency and interpretability of the automatic decision-making process hinders its industrial adoption in practice. There have been some early attempts to use attention maps or cost volume for better model explainability which is difficult for ordinary passengers to understand. To bridge the gap, we propose an end-to-end transformer-based architecture, ADAPT (Action-aware Driving cAPtion Transformer), which provides user-friendly natural language narrations and reasoning for each decision making step of autonomous vehicular control and action. ADAPT jointly trains both the driving caption task and the vehicular control prediction task, through a shared video representation. Experiments on BDD-X (Berkeley DeepDrive eXplanation) dataset demonstrate state-of-the-art performance of the ADAPT framework on both automatic metrics and human evaluation. To illustrate the feasibility of the proposed framework in real-world applications, we build a novel deployable system that takes raw car videos as input and outputs the action narrations and reasoning in real time. The code, models and data are available at https://github.com/jxbbb/ADAPT.
Bu Jin, Xinyu Liu, Yupeng Zheng, Pengfei Li, Hao Zhao, Tong Zhang, Yuhang Zheng, Guyue Zhou, Jingjing Liu
2023-02-01T18:59:19Z
http://arxiv.org/abs/2302.00673v1
# ADAPT: Action-aware Driving Caption Transformer

###### Abstract

End-to-end autonomous driving has great potential in the transportation industry. However, the lack of transparency and interpretability of the automatic decision-making process hinders its industrial adoption in practice. There have been some early attempts to use attention maps or cost volume for better model explainability, which is difficult for ordinary passengers to understand. To bridge the gap, we propose an end-to-end transformer-based architecture, ADAPT (Action-aware Driving cAPtion Transformer), which provides _user-friendly_ natural language narrations and reasoning for each decision making step of autonomous vehicular control and action. ADAPT jointly trains both the driving caption task and the vehicular control prediction task, through a shared video representation. Experiments on the BDD-X (Berkeley DeepDrive eXplanation) dataset demonstrate state-of-the-art performance of the ADAPT framework on both automatic metrics and human evaluation. To illustrate the feasibility of the proposed framework in real-world applications, we build a novel deployable system that takes raw car videos as input and outputs the action narrations and reasoning in real time. The code, models and data are available at [https://github.com/jxbbb/ADAPT](https://github.com/jxbbb/ADAPT).

## I Introduction

The goal of an autonomous system is to gain precise perception of the environment, make safe real-time decisions, take reliable actions without human involvement, and provide a safe and comfortable ride experience for passengers. There are generally two types of paradigms for autopilot controller design: the mediation-aware method [1, 2] and the end-to-end learning approach [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. Mediation-aware approaches rely on recognizing human-specified features such as vehicles, lane markings, etc., which require rigorous parameter tuning to achieve satisfactory performance. In contrast, end-to-end methods directly take raw data from sensors as input to generate planning routes or control signals. One of the key challenges in deploying such autonomous control systems to real vehicles is that intelligent decision-making policies in autonomous cars are often too complicated and difficult for common passengers to understand, for whom the safety of such vehicles and their controllability is the top priority. Some previous work has explored the interpretation of autonomous navigation [13, 14, 24, 25, 26, 27, 28, 29, 30]. Cost map, for example, is employed in [13] to interpret the actions of a self-driving system by visualizing the difficulty of traversing through different areas of the map. Visual attention is utilized in [24] to filter out non-salient image regions, and [31] constructs BEV (Bird's eye view) to visualize the motion information of the vehicle. However, these interfaces can easily lead to misinterpretation if the user is unfamiliar with the system. An ideal solution is to include natural language narrations to guide the user throughout the decision-making and action-taking process of the autonomous control module, which is comprehensible and user-friendly. Furthermore, an additional reasoning explanation for each control/action decision can help users understand the current state of the vehicle and the surrounding environment, as supporting evidence for the actions taken by the autonomous vehicle.
For example, "[_Action narration_:] the car pulls over to the right side of the road, [_Reasoning_:] because the car is parking", as shown in Fig. 1. Explaining vehicle behaviors via natural language narrations and reasoning thus makes the whole autonomous system more transparent and easier to understand. To this end, we propose ADAPT, the first action-aware transformer-based driving action captioning architecture that provides for passengers user-friendly natural language narrations and reasoning of autonomous driving vehicles. To eliminate the discrepancy between the captioning task and the vehicular control signal prediction task, we jointly train these two tasks with a shared video representation. This multi-task framework can be built upon various end-to-end autonomous systems by incorporating a text generation head. We demonstrate the effectiveness of the ADAPT approach on a large-scale dataset that consists of control signals and videos along with action narration and reasoning. Based on ADAPT, we build a novel deployable system that takes raw vehicular navigation videos as input and generates the action narrations and reasoning explanations in real time. Our contributions can be summarized as: * We propose ADAPT, a new end-to-end transformer-based action narration and reasoning framework for self-driving vehicles. * We propose a multi-task joint training framework that aligns both the driving action captioning task and the control signal prediction task. * We develop a deployable pipeline for the application of ADAPT in both the simulator environment and the real world. ## II Related Work ### _Video Captioning_ The main goal of the video captioning task is to describe the objects and their relationship of a given video in natural language. Early researches [32, 33, 34, 35] generate sentences with specific syntactic structures by filling recognized elements in fixed templates, which are inflexible and lack of richness. [36, 37, 38, 39, 40, 41, 42, 43, 44, 45] exploit sequence learning approaches to generate natural sentences with flexible syntactic structures. Specifically, these methods employ a video encoder to extract frame features and a language decoder to learn visual-textual alignment for caption generation. To enrich captions with fine-grained objects and actions, [46, 47, 48] exploit object-level representations that capture detailed object-aware interaction features in videos. [49] further develops a novel dual-branch convolutional encoder to jointly learn the content and semantic information of videos. Moreover, [50] adapts the uni-modal transformer to video captioning and employs a sparse boundary-aware pooling to reduce the redundancy in video frames. The development of scene understanding [51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62] also contribute a lot to the captioning task. Most recently, [63] proposes an end-to-end transformer-based model SWINBERT, which utilizes a sparse attention mask to lessen the redundant and irrelevant information in consecutive video frames. While existing architectures achieve promising results for general video captioning, it cannot be directly applied to action representation because simply transferring video caption to self-driving action representation would miss some key information like the speed of the vehicle, which is essential in the autonomous system. How to effectively use these multi-modal information to generate sentences remains a mystery, which is the focus of our work. 
### _End-to-End Autonomous Driving_

Learning-based autonomous driving is an active research area [64, 65]. Some learning-based driving methods such as affordances [3, 4] and reinforcement learning [5, 6, 7] are employed, gaining promising performance. Imitation methods [8, 9, 10, 11, 12, 13] are also utilized to regress the control commands from human demonstrations. For example, [14, 15, 16] model the future behavior of driving agents like vehicles, cyclists or pedestrians to predict the vehicular waypoints, while [17, 18, 19, 20, 21, 22, 23] predict vehicular control signals directly according to the sensor input, which is similar to our control signal prediction sub-task.

### _Interpretability of Autonomous Driving_

Interpretability, or the ability to provide a comprehensive explanation, plays a significant role in the social acceptance of artificial intelligence [66, 67], and autonomous driving is no exception. Most interpretable approaches for autonomous vehicles are vision-based [24, 25, 26, 27, 31] or LiDAR-based [14, 28, 29, 30]. [24] first utilizes the visualization of an attention map that filters out non-salient image regions to make autonomous vehicles reasonable and interpretable. Nevertheless, the attention map may easily include some less important areas, which causes misunderstanding for passengers. [25, 26, 27, 31] construct BEV (Bird's eye view) from a vehicle camera to visualize the motion information and environmental status of the vehicle. [13] takes as input LiDAR and HD maps to forecast the bounding boxes of driving agents and exploits cost volume to explain the reason for the planner's decision. Furthermore, [14] constructs an online map from segmentation as well as the states of driving agents to avoid heavy dependence on HD maps. Although the vision-based or LiDAR-based approaches provide promising results, the lack of linguistic interpretation makes them too complicated for passengers like the elderly to understand. [68] first explores the possibility of textual explanations for self-driving vehicles, which offline extracts video features from the control signal prediction task and conducts video captioning afterwards. Unfortunately, the discrepancy between these two tasks makes the offline-extracted features sub-optimal for the downstream captioning task, which is the focus of our work.

### _Multi-task Learning in Autonomous Driving_

Our end-to-end framework adopts multi-task learning, where we train the model on a joint objective of text generation and control signal prediction. Multi-task learning helps extract more useful information by exploiting inductive biases between different tasks [69] and has shown promising prospects in autonomous driving. [70, 71] show that detection and tracking can be trained together. [72] further applies a joint detector and trajectory predictor into a single model and gains promising results. This idea is extended by [73] to simultaneously predict the intention of actors. More recently, [13] further includes a cost map based control signal planner in the joint model. These works show that joint training of different tasks improves the performance of individual tasks due to better data utilization and shared features, which inspires our joint training strategy of the control signal prediction task and the text generation task.

Fig. 1: Different interpretation methods of self-driving vehicles, including attention map [24], cost volume [13] and natural language.
Despite the effectiveness of attention map or cost volume, language-based interpretation is more user-friendly to common passengers.

## III Method

### _Overview_

The ADAPT architecture is illustrated in Fig. 2, which addresses two tasks: Driving Caption Generation (DCG) and Control Signal Prediction (CSP). DCG takes a sequence of raw video frames as inputs, and outputs two natural language sentences: one describes the vehicle's action (e.g., "the car is accelerating"), and the other explains the reasoning for taking this action (e.g., "because the traffic lights turn green"). CSP takes the same video frames as inputs, and outputs a sequence of control signals, such as speed, course or acceleration. Generally, the DCG and CSP tasks share the same Video Encoder, while employing different prediction heads to produce the final prediction results. For the DCG task, we employ a vision-language transformer encoder to generate two natural language sentences via sequence-to-sequence generation. For the CSP task, we use a motion transformer encoder to predict the control signal sequence.

### _Video Encoder_

Following SWINBERT [63], we employ the Video Swin Transformer (video swin) [74] as the visual encoder to encode video frames into video feature tokens. Given a car video captured from the first-person perspective, we first do uniform sampling to get \(T\) frames of size \(H\times W\times 3\). These frames are passed as inputs to video swin, resulting in a feature \(F_{V}\) of size \(\frac{T}{2}\times\frac{H}{32}\times\frac{W}{32}\times 8C\), where \(C\) is the channel dimension defined in video swin. Then the video features are fed into different prediction heads for individual tasks.

### _Prediction Heads_

**Text Generation Head** The purpose of the text generation head is to generate two sentences that describe both the action of the vehicle and the reason behind it. As mentioned in Sec. III-B, the video frames are encoded to video features \(F_{V}\) of size \(\frac{T}{2}\times\frac{H}{32}\times\frac{W}{32}\times 8C\). Then we tokenize the video features along the channel dimension, resulting in \(\frac{T}{2}\times\frac{H}{32}\times\frac{W}{32}\) tokens with a dimension of \(8C\). As for the text inputs (action narrations and reasoning), we first tokenize each sentence and pad it to a fixed length. Then we concatenate these two sentences and embed them with an embedding layer. To identify the difference between action narration and reasoning, we exploit a segment embedding method (widely used in BERT [75]) to distinguish them. We also use a learnable MLP that transforms the dimension of the video tokens to ensure dimension consistency between video tokens and text tokens. Finally, the text tokens and video tokens are fed into the vision-language transformer encoder, which generates a new sequence that includes both action narrations and reasoning.

**Control Signal Prediction Head** The goal of the CSP head is to predict the control signals (e.g. acceleration) of the vehicle based on video frames. Given video features of \(T\) frames, along with the corresponding control signal recordings \(\mathbf{S}=\{\mathbf{s}_{1},\mathbf{s}_{2},..,\mathbf{s}_{T}\}\), the output of the CSP head is a sequence of control signals \(\hat{\mathbf{S}}=\{\hat{\mathbf{s}}_{2},...,\hat{\mathbf{s}}_{T}\}\).

Fig. 2: Overview of ADAPT framework. (a) Input is a vehicle-front-view video, and outputs are the predicted vehicle's control signals and the narration and reasoning of the current action. We first densely and uniformly sample \(T\) frames from the video, which are sent to the learnable video swin transformer and tokenized into video tokens.
Different prediction heads generate the final motion results and text results. (b) and (c) show the prediction heads, respectively.

Each control signal \(\mathbf{s}_{i}\) or \(\hat{\mathbf{s}}_{i}\) is an \(n\)-tuple, where \(n\) refers to how many types of sensor we exploit. We first tokenize the video features, then utilize another transformer (motion transformer) to generate the prediction of these control signals. The loss function \(\mathcal{L}_{CSP}\) is defined as the mean squared error of \(\mathbf{S}\) and \(\hat{\mathbf{S}}\): \[\mathcal{L}_{CSP}=\frac{1}{T-1}\sum_{i=2}^{T}\left(s_{i}-\hat{s}_{i}\right)^{2} \tag{1}\] Note that we do not predict the control signal corresponding to the first frame, since the dynamic information of the first frame is limited, while other signals can be easily inferred from previous frames.

### _Joint Training_

In our framework, we assume that the CSP and DCG tasks are aligned on the semantic level of the video representation. Intuitively, action narration and the control signal data are different expression forms of the action of self-driving vehicles, while the reasoning explanation concentrates on the elements of the environment that influence the action of the vehicles. We believe that jointly training these tasks in a single network can improve performance by leveraging the inductive biases between different tasks. During training, CSP and DCG are performed jointly. We simply add \(\mathcal{L}_{CSP}\) and \(\mathcal{L}_{DCG}\) to get the final loss function: \[\mathcal{L}=\mathcal{L}_{CSP}+\mathcal{L}_{DCG} \tag{2}\] Despite the joint training of both tasks, inference on each task can be carried out independently. For the DCG task, ADAPT takes a video sequence as input, and outputs the driving caption with two segments. Text generation is performed in an auto-regressive manner. Specifically, our model starts with a "[CLS]" token and generates one word token at a time, consuming previously generated tokens as the inputs of the vision-language transformer encoder. Generation continues until the model outputs the ending token "[SEP]" or reaches the maximum length threshold of a single sentence. After padding the first sentence to the maximum length, we concatenate another "[CLS]" to the inputs and repeat the aforementioned process.
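A minimal PyTorch-style sketch of the joint objective in Eqs. (1)-(2) is given below. It assumes that the text generation head already returns its own (masked language modeling) loss and that the motion transformer outputs one control signal tuple per frame from 2 to \(T\); the module interfaces and tensor shapes are illustrative, not the exact training code.

```python
import torch.nn.functional as F

def csp_loss(pred_signals, gt_signals):
    """Eq. (1): mean squared error between predicted and recorded control
    signals for frames 2..T; tensors of shape (batch, T-1, n_signal_types).
    (Here the mean is taken over batch, frames and signal types.)"""
    return F.mse_loss(pred_signals, gt_signals)

def joint_loss(video_feats, text_head, motion_head, text_inputs, gt_signals):
    """Eq. (2): L = L_CSP + L_DCG, computed on the shared video representation."""
    l_dcg = text_head(video_feats, text_inputs)   # captioning loss (L_DCG)
    pred = motion_head(video_feats)               # (batch, T-1, n_signal_types)
    l_csp = csp_loss(pred, gt_signals)            # control signal loss (L_CSP)
    return l_csp + l_dcg
```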
## IV Experiment

In this section, we evaluate ADAPT over metrics of the standard captioning task, including BLEU4 [76], METEOR [77], ROUGE-L [78] and CIDEr [79] (abbreviated as B4, M, R and C in later tables). As quantitative evaluation of captioning is still an open question, we also provide detailed human evaluation results for the subjective correctness of the generated text. Ablation studies further demonstrate the effectiveness of the proposed joint-training framework.

### _Dataset_

BDD-X [68] is a driving-domain caption dataset, consisting of nearly 7000 videos paired with control signals. The videos and control signals are collected from the BDD100K dataset [80]. Each video has a duration of 40 seconds on average, with 1280\(\times\)720 resolution and 30 FPS. Each video contains 1 to 5 vehicle behaviors, such as accelerating, turning right or merging lanes. All these behaviors are accompanied by text annotation, including action narration (e.g., "the car stops") and reasoning (e.g., "because the traffic light is red"). There are around 29000 behavior-annotation pairs in total. To the best of our knowledge, BDD-X is the only driving-domain caption dataset accompanied by car videos and control signals.

### _Implementation Details_

The video swin transformer is pre-trained on Kinetics-600 [81], while the vision-language transformer and motion transformer are randomly initialized. Note that in our implementation we do not freeze the parameters of video swin, so ADAPT is trained in a completely end-to-end manner. The input video frames are resized and cropped to the spatial size of 224. For narration and reasoning, we use WordPiece embeddings [75] instead of whole words (e.g., "stops" is cut to "stop" and "##s"), and the maximal length of each sentence is 15. During the training period, we randomly mask \(50\%\) of the tokens for masked language modeling. The masked token has an \(80\%\) chance to be a "[MASK]" token, a \(10\%\) chance to be a random word, and a \(10\%\) chance to remain the same. We employ the AdamW optimizer and use a learning rate warm-up during the early \(10\%\) of training steps, followed by linear decay. The whole training process for 40 epochs takes about 13 hours on 4 NVIDIA V100 GPUs with a batch size of 4 per GPU.

### _Main Results_

We compare ADAPT with state-of-the-art methods on the BDD-X dataset. Table I shows the comparison results on standard captioning metrics. We observe that ADAPT achieves significant performance gains over existing methods. Specifically, ADAPT outperforms the prior state-of-the-art work [68] by 31.7 for action narration and 33.1 for reasoning on the CIDEr metric. In addition to automatic evaluation measures, we also conduct human evaluation to measure the subjective correctness of the output narration and reasoning. The whole evaluation process is divided into three sections: (1) narration, (2) reasoning, and (3) full sentence. During the first section, a human evaluator judges whether the predicted narrations conform to the vehicle's action. In the second section, we display both the ground-truth narration and the predicted reasoning, and require human evaluators to judge whether the reasoning is correct. Then in the last section, both predicted narrations and predicted reasoning are displayed. Table II shows that ADAPT outperforms previous work in reasoning accuracy while maintaining high accuracy on narration evaluation, demonstrating the effectiveness of ADAPT.

### _Ablation Study_

We conduct a comprehensive ablation study to analyze various aspects of the ADAPT design.

**Effect of Action-aware Joint Training** To investigate the effect of action-awareness in joint training on ADAPT, we train a single captioning model by removing the CSP (control signal prediction) head of ADAPT, referred to as "Single". As shown in Table III, ADAPT outperforms single training with an improvement of 15.9 for narration and 7.2 for reasoning on the CIDEr metric. This suggests that cues from the other task help regularize the shared video representation and improve the performance of the text generation task. Additionally, we can see from Fig. 2(a) that the caption and control signal data are employed in two streams in ADAPT. An interesting question is: can we simply pass the control signals to the multi-modal transformer to get the final caption prediction? So we create such an architecture that takes video tokens, control signal tokens (generated by a learnable embedding layer) and masked text tokens as input and generates predictions of the masked tokens, which is referred to as "Single+". Results are shown in the second row of Table III.
We can see that the proposed ADAPT still achieves the best results, especially for the reasoning segment, which demonstrates the superiority of multi-task learning over using both videos and control signals as inputs, even though the latter is an intuitive setting.

**Impact of Different Control Signal Types** In our implementations, we leverage control signals (e.g., course) as supervision for the CSP task. In this analysis, we investigate the impact of different supervision signal types on ADAPT. The base signals in our experiments are speed and course. We first conduct experiments by removing one of them, the results of which are shown in the first two rows of Table IV. Then in the third row both speed and course are utilized, which is the same as in previous experiments. We observe that the removal of each signal leads to a decrease in performance. For example, the CIDEr metric decreases by 29.3 for narration and by 14.0 for reasoning without the speed inputs. This is understandable, because being aware of speed and course can help the network learn representations that are informative for narration and reasoning, and the lack of either can bias the video representations.

**Interaction between Narration and Reasoning** Compared with the general caption task, the driving caption task generates two sentences: action narration and reasoning. In this section, we explore how these two segments interact with each other by controlling the attention mask or the order of the two sentences. Specifically, as shown in the right of Fig. 2(c), we use a causal self-attention mask for each sentence, where a word token can only attend to the existing output tokens, and employ sparse attention [63] for video tokens. The reasoning segment has full attention to the narration segment, referred to as cross attention, which defines the dependence of reasoning on narration. In this section, we first conduct experiments without cross attention or with swapped cross attention (by swapping the order of narration and reasoning). Results are reported in Table V. Compared with the default setting (denoted as "Ours"), results without cross attention have lower performance in both sentences, which indicates that conditioning the reasoning segment on the narration segment is beneficial for training. The performance with swapped cross attention also decreases, especially for the narration part, which further demonstrates this dependence of reasoning on narration, instead of the other way around. Additionally, we conduct experiments with only one sentence, referred to as "Narration only" and "Reasoning only". Table V shows that training with both sentences yields an improvement in performance, especially for the reasoning segment, indicating that the interaction between narration and reasoning promotes each component of the full caption task.

**Impact of Different Sampling Rates** In previous experiments, we uniformly sample \(T=32\) frames from a given video, along with control signal data of the same timestamps. In this study, we investigate the impact of the sampling rate by varying the number of sampled frames. Specifically, we uniformly sample \(T=2,4,8,16,32\) frames from a variable-length video, as shown in Table VI. The performance of ADAPT improves steadily as the number of sampled frames increases, since more frames lead to less missing visual content. This suggests that caption results can be enhanced by densely sampled frames and control signals. The training time costs are also provided in Table VI. We hope this ablation provides robotics practitioners with insights about the accuracy-efficiency trade-off of driving captioning.
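For completeness, one simple way to realize the uniform sampling of \(T\) frames from a variable-length clip used in this ablation is sketched below (illustrative only; the exact sampling scheme of the released code may differ).

```python
import numpy as np

def uniform_frame_indices(num_frames, T):
    """Return T frame indices evenly spread from the first to the last frame."""
    if num_frames <= 0 or T <= 0:
        return np.array([], dtype=int)
    return np.linspace(0, num_frames - 1, num=T).round().astype(int)

# e.g. uniform_frame_indices(1200, 32) picks 32 frames from a 40 s clip at 30 FPS.
```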
### _Analysis on Control Signal Prediction_

Although the main goal of the driving caption task is to generate sentences, we also investigate the performance of the control signal prediction task. We employ the root mean squared error (RMSE) and a tolerant accuracy (\(A_{\sigma}\)) to measure the final performance. Tolerant accuracy means that we first use two thresholds (\(-\sigma\) and \(\sigma\)) to determine the admissible range of the control signal deviation and truncate it. For example, we define the truncation value of the predicted course \(\hat{c}\) as: \[c_{\sigma}=\left\{\begin{array}{ll}1,&-\sigma<\hat{c}-c<\sigma\\ 0,&\text{otherwise}\end{array}\right. \tag{3}\] where \(c\) is the ground-truth course and \(\sigma\) is the tolerant threshold value. Then \(A_{\sigma}\) of the course represents the accuracy of \(c_{\sigma}\) recorded as a percentage, and \(A_{\sigma}\) of the speed is defined similarly. Results are provided in Table VII. We observe that our joint training framework can further improve the performance of control signal prediction, indicating the benefit of joint training.

### _Deployment in Autonomous Systems_

We further develop a pipeline for the deployment of ADAPT in both the simulator environment (e.g., Carla [82]) and the real world. The system takes raw vehicular videos as input and generates action narrations and reasoning explanations in real time. Specifically, we first record the frames captured by the camera from the front view. Then the frames in the last several seconds are passed as input to ADAPT to generate the action narration and reasoning of the current step. Moreover, we further utilize text-to-speech technology to convert the generated sentences into speech narration, to make it more convenient and more interactive for common passengers (especially helpful for vision-impaired passengers).

## V Conclusion

Language-based interpretability is essential for the social acceptance of self-driving vehicles. We present ADAPT (Action-aware Driving cAPtion Transformer), a new end-to-end transformer-based framework for generating action narration and reasoning for self-driving vehicles. ADAPT utilizes multi-task joint training to reduce the discrepancy between the driving action captioning task and the control signal prediction task. Experiments on the BDD-X dataset over standard captioning metrics as well as human evaluation demonstrate the effectiveness of ADAPT over state-of-the-art methods. We further develop a deployable pipeline for the application of ADAPT in both the simulator environment and the real world.

Fig. 3: Qualitative analysis: generated narrations correctly describe the current action, with sound reasoning.
2306.03855
Faster real root decision algorithm for symmetric polynomials
In this paper, we consider the problem of deciding the existence of real solutions to a system of polynomial equations having real coefficients, and which are invariant under the action of the symmetric group. We construct and analyze a Monte Carlo probabilistic algorithm which solves this problem, under some regularity assumptions on the input, by taking advantage of the symmetry invariance property. The complexity of our algorithm is polynomial in $d^s, {{n+d} \choose d}$, and ${{n} \choose {s+1}}$, where $n$ is the number of variables and $d$ is the maximal degree of $s$ input polynomials defining the real algebraic set under study. In particular, this complexity is polynomial in $n$ when $d$ and $s$ are fixed and is equal to $n^{O(1)}2^n$ when $d=n$.
George Labahn, Cordian Riener, Mohab Safey El Din, Éric Schost, Thi Xuan Vu
2023-06-06T16:49:15Z
http://arxiv.org/abs/2306.03855v1
# Faster real root decision algorithm for symmetric polynomials

###### Abstract.

In this paper, we consider the problem of deciding the existence of real solutions to a system of polynomial equations having real coefficients, and which are invariant under the action of the symmetric group. We construct and analyze a Monte Carlo probabilistic algorithm which solves this problem, under some regularity assumptions on the input, by taking advantage of the symmetry invariance property. The complexity of our algorithm is polynomial in \(d^{s}\), \(\binom{n+d}{d}\), and \(\binom{n}{s+1}\), where \(n\) is the number of variables and \(d\) is the maximal degree of \(s\) input polynomials defining the real algebraic set under study. In particular, this complexity is polynomial in \(n\) when \(d\) and \(s\) are fixed and is equal to \(n^{O(1)}2^{n}\) when \(d=n\).
When adding the symmetry property to polynomials defining the variety _and_ the polynomial map for which one computes the critical points, significant improvements have been achieved recently in (Leskovsky et al., 2017) by using the symbolic homotopy algorithms in (Leskovsky et al., 2018). These improvements, which allow one to obtain complexity gains related to the combinatorial complexity of the symmetric group, also borrow ideas from algebraic algorithms working with data which are invariant by the action of this group (Leskovsky et al., 2017). We emphasize that taking advantage of symmetries in data is a topical and difficult issue, which involves a variety of methodologies (Leskovsky et al., 2017; Leskovsky et al., 2018; Leskovsky et al., 2018; Leskovsky et al., 2018; Leskovsky et al., 2018; Leskovsky et al., 2018). In (Leskovsky et al., 2017), Timofte proves a breakthrough result which is now known as the degree principle. It states that a symmetric polynomial of degree \(d\) with real coefficients has real solutions if and only if one of these solutions has at most \(d\) distinct coordinates. This shows that when \(d\) is fixed and \(n\) grows, the real root decision problem can be solved in polynomial time. This is far better than computing at least one sample point per connected component (see also (Leskovsky et al., 2017; Leskovsky et al., 2018; Leskovsky et al., 2018)), and is one of the rare interesting cases where the best known algorithms for these two problems admit different complexities.
This is also the starting point of several results which enhance the real root decision problem and polynomial optimization under some \(S_{n}\)-invariance property for classes of problems where \(d\) remains fixed and \(n\) grows (see (Leskovsky et al., 2017; Leskovsky et al., 2018; Leskovsky et al., 2018; Leskovsky et al., 2018) and (Leskovsky et al., 2018) for equivariant systems).

**Main contributions.** Being able to leverage \(S_{n}\)-invariance for critical point computations is not sufficient to solve real root decision problems more efficiently using the critical point method. Additional techniques are needed. Indeed, to solve the real root decision problem by finding the critical points of a polynomial map \(\phi\), one typically defines \(\phi\) as the distance from points on the variety to a generic point. This map reaches its extrema on each connected component of the semi-algebraic set under study. However, the map \(\phi\) is not symmetric. If it were, our problem would be solved by the critical point algorithm of (Leskovsky et al., 2017). Unfortunately, there does not appear to be an obvious symmetric map that fits the bill. Instead, our approach is to apply the critical point method on individual \(S_{n}\)-orbits, with a suitable \(\phi\) found for each orbit. Thus, while we cannot use the critical point algorithm of (Leskovsky et al., 2017) directly, we can make use of the various subroutines used in it to construct a fast decision procedure. Intuitively, working with \(S_{n}\)-orbits is the same as separately searching for real points having pairwise distinct coordinates, or real points having two or more coordinates which are the same, or groups of coordinates each of which consists of equal coordinates, and so on. In each case an orbit can be described by points having \(n\) or fewer pairwise distinct coordinates, a key observation in constructing generic maps invariant for each orbit.

Theorem 1.1 ().: _Let \(\mathbf{f}=(f_{1},\ldots,f_{s})\) be symmetric polynomials in \(\mathbb{Q}[x_{1},\ldots,x_{n}]\) having maximal degree \(d\). Assume that the Jacobian matrix of \(\mathbf{f}\) with respect to \(x_{1},\ldots,x_{n}\) has rank \(s\) at any point of \(V(\mathbf{f})\). Then there is a Monte Carlo algorithm_ Real_emptiness _which solves the real root decision problem for \(\mathbf{f}\) with_

\[O^{\cdot}\left(d^{6s+2}\,n^{11}\binom{n+d}{n}^{6}\left(\binom{n+d}{n}+\binom{n}{s+1}\right)\right)\subset\left(d^{s}\binom{n+d}{n}\binom{n}{s+1}\right)^{O(1)}\]

_operations in \(\mathbb{Q}\). Here the notation \(O^{\cdot}\) indicates that polylogarithmic factors are omitted._

The remainder of the paper proceeds as follows. The next section reviews known material on invariant polynomials over products of symmetric groups, the tools we use to work with \(S_{n}\)-orbits, and our data structures. Section 3 discusses our smoothness requirement and shows that it is preserved by alternate representations of invariant polynomials. Section 4 shows how we construct critical point functions along with their critical point set. This is followed in Section 5 by a description of our algorithm along with a proof of correctness and complexity. The paper ends with a section on topics for future research.

## 2. Preliminaries

### Invariant Polynomials

We briefly review some properties of polynomials invariant under the action of \(S_{t_{1}}\times\cdots\times S_{t_{k}}\), with \(S_{t_{i}}\) the symmetric group on \(t_{i}\) elements, for all \(i\).
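As a concrete computational warm-up for the definitions that follow below, the sketch here (ours, not from the paper) takes a polynomial invariant under \(S_{2}\times S_{2}\) acting on two blocks of two variables, verifies the invariance, and rewrites it in the block-wise Newton sums; Lemmas 2.1 and 2.2 below state that such rewritings always exist and are unique. The polynomial and its power-sum representation are our own toy choices.

```python
# Sketch (ours): a polynomial invariant under S_2 x S_2 acting block-wise, and
# its (unique) rewriting in terms of the block-wise Newton sums P_{j,i}.
import sympy as sp

z11, z21 = sp.symbols("z11 z21")     # block 1, permuted by S_2
z12, z22 = sp.symbols("z12 z22")     # block 2, permuted by S_2

g = (z11**2 + z21**2)*(z12 + z22) + 3*z11*z21   # our toy block-invariant polynomial

# invariance under swapping the variables inside each block
assert sp.expand(g - g.subs([(z11, z21), (z21, z11)], simultaneous=True)) == 0
assert sp.expand(g - g.subs([(z12, z22), (z22, z12)], simultaneous=True)) == 0

# block-wise Newton sums P_{j,i} = z_{1,i}^j + z_{2,i}^j (here t_1 = t_2 = 2)
P11, P21 = z11 + z21, z11**2 + z21**2
P12, P22 = z12 + z22, z12**2 + z22**2

# candidate representation gamma_g(P) = P_{2,1} P_{1,2} + (3/2)(P_{1,1}^2 - P_{2,1})
gamma_g_of_P = P21*P12 + sp.Rational(3, 2)*(P11**2 - P21)
assert sp.expand(g - gamma_g_of_P) == 0
print("g rewritten in block-wise Newton sums")
```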
In this paragraph, we work with variables \(\mathbf{z}=(\mathbf{z}_{1},\ldots,\mathbf{z}_{k})\), with each \(\mathbf{z}_{i}=(z_{1,i},\ldots,z_{t_{i},i})\); for all \(i\), the group \(S_{t_{i}}\) permutes the variables \(\mathbf{z}_{i}\). For \(j\geq 0\), we denote by \[E_{j,i}=\sum_{1\leq m_{i}<m_{2}<\cdots<m_{j}\leq t_{i}}z_{m_{i},i}z_{m_{2},i} \cdots z_{m_{j},i},\] the elementary polynomial in the variables \(\mathbf{z}_{i}\), with each \(E_{j,i}\) having degree \(j\), and by \[P_{j,i}=z_{1,i}^{j}+\cdots+z_{t_{i},i}^{j}\] the \(j\)-th Newton sum in the variables \(\mathbf{z}_{i}\), for \(i=1,\ldots,k\). The following two results are well-known. For \(i=1,\ldots,k\), let \(\mathbf{e}_{i}=(\mathbf{e}_{1,i},\ldots,\mathbf{e}_{t,i})\) be a set of \(t_{1}\) new variables and let \(\mathbf{E}_{i}=(E_{1,i},\ldots,E_{t_{i},i})\); we write \(\mathbf{e}=(\mathbf{e}_{1},\ldots,\mathbf{e}_{k})\) and \(\mathbf{E}=(\mathbf{E}_{1},\ldots,\mathbf{E}_{k})\). Lemma 2.1 ().: _Let \(g\in[\mathbf{z}_{1},\ldots,\mathbf{z}_{k}]\) be invariant under the action of \(S_{t_{1}}\times\cdots\times S_{t_{k}}\). Then there exists a unique \(\nu_{g}\) in \(\mathbb{Q}[\mathbf{e}]\) such that \(g=\zeta_{g}(\mathbf{E})\)._ Similarly, let \(p_{j,i}\) be new variables, and consider the sequences \(\mathbf{p}_{i}=(p_{1,i},\ldots,p_{t_{i},i})\) and \(\mathbf{p}=(\mathbf{p}_{1},\ldots,\mathbf{p}_{k})\), together with their polynomial counterparts \(\mathbf{P}_{i}=(P_{1,i},\ldots,P_{t_{i},i})\) and \(\mathbf{P}=(\mathbf{P}_{1},\ldots,\mathbf{P}_{k})\). Lemma 2.2 ().: _Let \(g\in[\mathbf{z}_{1},\ldots,\mathbf{z}_{k}]\) be invariant under the action of \(S_{t_{1}}\times\cdots\times S_{t_{k}}\). Then there exists a unique \(\zeta_{g}\) in \(\mathbb{Q}[\mathbf{p}]\) such that \(g=\nu_{g}(\mathbf{P})\)._ Example 2.3 ().: _Let_ \[g=2(z_{1,1}z_{2,1}+z_{1,1}^{2}+2z_{1,1}z_{2,1}+z_{2,1}^{2})(z_{1,2}^{2}+z_{2,2} ^{2}),\] _a polynomial invariant under \(S_{2}\times S_{2}\), with \(z_{1}=(z_{1,1},z_{2,1})\), \(z_{2}=(z_{1,2},z_{2,2})\), \(k=2\) and \(t_{1}=t_{2}=2\). In this case, we have_ \[g=(3P_{1,1}^{2}-P_{1,2})P_{2,2}\] _and hence \(\gamma_{g}=(3p_{1,1}^{2}-p_{1,2})p_{2,2}\in\mathbb{Q}[p_{1,1},p_{1,2},p_{2,1},p_{2, 2}]\)._ ### Describing \(S_{n}\)-orbits via Partitions \(S_{n}\)-orbits are subsets of \(\mathbb{C}^{n}\) that play a central role in our algorithm. In this section, we review notation and description of \(S_{n}\)-orbits, along with the form of the output used in (Zhou and Zhang, 2017). A simple way to parameterize \(S_{n}\)-orbits is through the use of partitions of \(n\). A sequence \(\lambda=(n_{1}^{t_{1}}\ \ldots\ n_{k}^{t_{k}})\), where \(n_{1}<\cdots<n_{k}\) and \(n_{i}\)'s and \(t_{i}\)'s are positive integers, is called a partition of \(n\) if \(n_{1}t_{1}+\cdots+n_{k}t_{k}=n\). The _length_ of the partition \(\lambda\) is defined as \(t:=t_{1}+\cdots+t_{k}\). For a partition \(\lambda=(n_{1}^{t_{1}}\ \ldots\ n_{k}^{t_{k}})\) of \(n\), we use the notation from (Zhou and Zhang, 2017, Section 2.3) and let \(U_{\lambda}\) denote the set of all points \(\mathbf{u}\) in \(\mathbb{C}^{n}\) that can be written as \[\mathbf{u}=\underbrace{(u_{1,1},\ldots,u_{1,1})}_{n_{1}},\ \ldots, \underbrace{u_{t_{1},1},\ldots,u_{t_{1},1}}_{n_{1}},\ \ldots,\] \[\underbrace{u_{1,k},\ldots,u_{1,k}}_{n_{k}},\ \ldots, \underbrace{u_{t_{k},k},\ldots,u_{t_{k},k}}_{n_{k}}. 
\tag{1}\] For any point \(\mathbf{u}\) in \(\mathbb{C}^{n}\), we define its _type_ as the unique partition \(\lambda\) of \(n\) such that there exists \(\sigma\in S_{n}\) such that \(\sigma(\mathbf{u})\in U_{\lambda}\), with the \(u_{i,j}\)'s in (1) pairwise distinct. Points of a given type \(\lambda=(n_{1}^{t_{1}}\ \ldots\ n_{k}^{t_{k}})\) are stabilized by the action of \(S_{\lambda}=S_{t_{1}}\times\cdots\times S_{t_{k}}\), the cartesian product of symmetric groups \(S_{t_{i}}\). For a partition \(\lambda\) as above, we can then define a mapping \(F_{\lambda}:U_{\lambda}\to\mathbb{C}^{\ell}\) as \[\mathbf{u}\ \text{as}\ \text{in}\ (1)\mapsto\\ (E_{1,i}(u_{1,i},\ldots,u_{t_{i},i}),\ldots,E_{t_{i},i}(u_{1,i},\ldots,u_{t_{i},i}))_{1\leq i\leq k},\] where \(E_{j,i}(u_{1,i},\ldots,u_{t_{i},i})\) is the \(j\)-th elementary symmetric function in \(u_{1,k},\ldots,u_{t_{i},i}\) for \(i=1,\ldots,k\) and \(j=1,\ldots,t_{i}\). One can think of the map \(F_{\lambda}\) as a compression of orbits. By applying this map, we can represent an \(S_{n}\)-orbit \(\mathcal{O}\) of type \(\lambda\) by the single point \(F_{\lambda}(\mathcal{O}\cap U_{\lambda})\). Furthermore, the map \(F_{\lambda}\) is onto: for any \(\mathbf{c}=(c_{1,1},\ldots,c_{t_{k},k})\in\mathbb{C}^{\ell}\), we define polynomials \(\rho_{1}(u),\ldots,\rho_{k}(u)\) by \[\rho_{1}(t)=T^{t_{i}}-c_{1,i}T^{t_{i}-1}+\cdots+(-1)^{t_{i}}c_{t_{i},i}.\] We can then find a point \(\mathbf{u}\in\mathbb{C}^{n}\) in the preimage \(F_{\lambda}^{-1}(\mathbf{c})\) by finding the roots \(u_{1,i},\ldots,u_{t_{i},i}\) of \(\rho_{1}(T)\). ### Zero-Dimensional Parametrizations The subroutines we use from (Zhou and Zhang, 2017) give their output in terms of _zero-dimensional parametrizations_, which are defined as follows. Let \(W\subset\mathbb{C}^{n}\) be a variety of dimension zero, defined over \(\mathbb{Q}\). A zero-dimensional parametrization \(\mathcal{R}=((\varrho,v_{1},\ldots,v_{n}),\mu)\) of \(W\) is 1. a squarefree polynomial \(\varrho\) in \(\mathbb{Q}[t]\), where \(t\) is a new indeterminate, and \(\deg(v)=|W|\), 2. polynomials \(\varrho_{1},\ldots,\varrho_{n}\) in \(\mathbb{Q}[t]\) such that \(\deg(\varrho_{1})<\deg(\varrho)\) for all \(i\) and \[W=\left\{\left(\frac{v_{1}(\tau)}{\varrho^{\prime}(\tau)},\ldots,\frac{v_{n}( \tau)}{\varrho^{\prime}(\tau)}\right)\in\mathbb{C}^{n}\,:\,v(\tau)=0\right\},\] 3. a linear form \(\mu\) in \(n\) variables such that \(\mu(\varrho_{1},\ldots,\varrho_{n})=t\sigma^{\prime}\) (so the roots of \(v\) are the values taken by \(\mu\) on \(W\)). When these conditions hold, we write \(W=Z(\mathcal{R})\). Representing the points of \(W\) by means of rational functions with \(\sigma^{\prime}\) as denominator is not necessary, but allows for a sharp control of the bit-size of the output. ## 3. Preserving Smoothness In our main algorithm, we assume that our input system \(\mathbf{f}=(f_{1},\ldots,f_{k})\) satisfies the following smoothness condition 1. _: the Jacobian matrix of \(\mathbf{f}\) has rank \(s\) at any point of \(V(\mathbf{f})\)._ In this section, we discuss consequences of this assumption for symmetric polynomials. 
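Condition (A) can be checked effectively on small instances. The sketch below is our own illustration (the symmetric system is a toy example of ours, and the Gröbner-basis certificate is only practical at this tiny scale): it verifies that the Jacobian of a symmetric system has full rank \(s\) everywhere on its complex zero set by certifying that the system together with all \(s\times s\) minors of its Jacobian has no common zero.

```python
# Sketch (ours): checking condition (A) for a small symmetric system, i.e. that
# Jac(f) has rank s at every point of V(f).  Since Jac(f) is s x n with s <= n,
# rank s fails exactly where all s x s minors vanish; we certify that f together
# with these minors has no common complex zero via a trivial Groebner basis.
import itertools
import sympy as sp

x = sp.symbols("x1:4")                              # n = 3 variables
f = [x[0] + x[1] + x[2],                            # s = 2 symmetric polynomials (toy system)
     x[0]**2 + x[1]**2 + x[2]**2 - 1]
s = len(f)

J = sp.Matrix(f).jacobian(list(x))                  # s x n Jacobian matrix
minors = [J[:, list(cols)].det()                    # all s x s minors of Jac(f)
          for cols in itertools.combinations(range(len(x)), s)]

# empty complex zero set  <=>  reduced Groebner basis is {1}  (weak Nullstellensatz)
G = sp.groebner(list(f) + minors, *x, order="grevlex")
print("condition (A) holds:", list(G.exprs) == [sp.Integer(1)])
```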
**Mapping to orbits: the map \(\mathbb{T}_{\lambda}\).** For a partition \(\lambda=(n_{1}^{t_{1}}\ \ldots\ n_{k}^{t_{k}})\) of \(n\), we define the \(\mathbb{Q}\)-algebra homomorphism \(\mathbb{T}_{\lambda}:\mathbb{Q}[x_{1},\ldots,x_{n}]\to\mathbb{Q}[z_{1},\ldots,z_ {k}]\), with \(z_{i}=(z_{1,i},\ldots,z_{t_{i},i})\) for all \(i\), which maps the variables \(x_{1},\ldots,x_{n}\) to \[\underbrace{z_{1,1},\ldots,z_{1,1}}_{n_{1}},\ \ldots, \underbrace{z_{t_{1},1},\ldots,z_{t_{1},1}}_{n_{1}}\ \ldots,\] \[\underbrace{z_{1,k},\ldots,z_{1,k}}_{n_{k}},\ \ldots, \underbrace{z_{t_{k},k},\ldots,z_{t_{k},k}}_{n_{k}}. \tag{2}\] The operator \(\mathbb{T}_{\lambda}\) extends to vectors of polynomials and polynomial matrices entry-wise. The key observation here is that if \(f\) is symmetric, then its image through \(\mathbb{T}_{\lambda}\) is \(S_{t_{1}}\times\cdots\times S_{t_{k}}\)-invariant. Fix a partition \(\lambda=(n_{1}^{t_{1}}\ \ldots\ n_{k}^{t_{k}})\) of \(n\), and let \(\ell\) be its length. Set \[I_{j,i}:=\{\sigma_{j,i}+1,\ldots,\sigma_{j,i}+n_{1}\},\,1\leq i\leq k;1\leq j \leq t_{i}\] with \(\sigma_{j,i}:=\sum_{r=1}^{t_{1}}t_{r}n_{r}+(j-1)n_{i}\). Variables \(x_{m}\), for \(m\) in \(I_{j,i}\), are precisely those that map to \(z_{j,i}\) under \(\mathbb{T}_{\lambda}\). Define further the matrix \(Z\in\mathbb{Q}^{\ell\times n}\) with \(\ell=t_{1}+\cdots+t_{k}\), where rows are indexed by pairs \((j,i)\) as above and columns by \(m\in\{1,\ldots,n\}\). For all such \((j,i)\), the entry of row index \((j,i)\) and column index \(m\in I_{j,i}\) is set to \(1/n_{i}\), all others are zero. In other words, \(Z=\operatorname{diag}(Z_{1},\ldots,Z_{k})\), where \[\mathbf{Z}_{i}=\begin{pmatrix}\frac{1}{n_{i}}&\cdots&\frac{1}{n_{i}}&\mathbf{0}& \cdots&\mathbf{0}\\ \mathbf{0}&\frac{1}{n_{i}}&\cdots&\frac{1}{n_{i}}&\cdots&\mathbf{0}\\ \vdots&&&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\cdots&\frac{1}{n_{i}}&\cdots&\frac{1}{n_{i}}\\ \end{pmatrix}\] is a matrix in \(\mathbb{Q}^{t_{i}\times n_{i}t_{i}}\). **Example 3.1**.: _Consider the partition \(\lambda=(2^{2}\,3^{1})\) of \(n=7\). Then \(n_{1}=2\), \(t_{1}=2\), \(n_{2}=3\), \(t_{2}=1\) and the length of \(\lambda\) is \(3\). In this case,_ \[Z=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}&&\\ &\frac{1}{2}&\frac{1}{2}&& and \(m\) in \(I_{j,i}\). By the multivariable chain rule, \[\frac{\partial\mathbb{T}_{\lambda}(f)}{\partial z_{j,i}}=\sum_{m\in I_{j,i}} \mathbb{T}_{\lambda}\left(\frac{\partial f}{\partial x_{m}}\right).\] If \(f\) is symmetric, for \(m,m^{\prime}\) in \(I_{j,i}\), we then have \[\mathbb{T}_{\lambda}\left(\frac{\partial f}{\partial x_{m}}\right)=\mathbb{T} _{\lambda}\left(\frac{\partial f}{\partial x_{m^{\prime}}}\right),\] so that, for \(m\) in \(I_{j,i}\), \[\mathbb{T}_{\lambda}\left(\frac{\partial f}{\partial x_{m}}\right)=\frac{1}{ n_{i}}\frac{\partial\mathbb{T}_{\lambda}(f)}{\partial z_{j,i}}.\] This argument can be extended to a sequence of polynomials to obtain our claim. **Example 3.3**.: _We continue Example 3.1 with a single \(S_{7}\)-invariant polynomial \(f=\sum_{1\leq i\leq 7}x_{i}x_{j}\). 
Then_ \[\mathbb{T}_{\lambda}(f)=3z_{1,1}^{2}+3z_{2,1}^{2}+6z_{1,2}^{2}+6z_{1,1}z_{1,2 }+4z_{1,1}z_{2,1}+6z_{1,2}z_{2,1},\] _and so_ \[\operatorname{Jac}(\mathbb{T}_{\lambda}(f))=(6z_{1,1}+6z_{1,2}+4z_{2,1},4z_{ 1,1}+6z_{1,2}+6z_{2,1},6z_{1,1}+12z_{1,2}+6z_{2,1}).\] _This implies that \(\operatorname{Jac}(\mathbb{T}_{\lambda}(f))\cdot\mathbf{Z}\) is equal to \((u,u,v,w,w,w)\), with \(u=3z_{1,1}+3z_{1,2}+2z_{2,1},v=2z_{1,1}+3z_{1,2}+3z_{2,1},w=2z_{1,1}+4z_{1,2 }+2z_{2,1}\). This is precisely \(\mathbb{T}_{\lambda}(\operatorname{Jac}(f))\)._ **Corollary 3.4**.: _Under the assumptions of the previous lemma, if \(\mathbf{f}\) satisfies condition (A), then \(\mathbb{T}_{\lambda}(f)\subset\mathbb{Q}[z_{1},\ldots,z_{k}]\) does as well._ Proof.: Let \(\mathbf{\alpha}=(\alpha_{1,1},\ldots,\alpha_{t_{1},1},\ldots,\alpha_{t_{k},k})\) be a zero of \(\mathbb{T}_{\lambda}(\mathbf{f})\) in \(\mathbb{C}^{f}\). We have to prove that \(\operatorname{Jac}_{\mathbf{z}_{1},\ldots,\mathbf{z}_{k}}(\mathbb{T}_{\lambda}(f))( \mathbf{\alpha})\) has a trivial left kernel. Consider the point \[\mathbf{\varepsilon}=(\underbrace{\alpha_{1,1},\ldots,\alpha_{1,1}}_{ n_{1}},\ldots,\underbrace{\alpha_{t_{1},1},\ldots,\alpha_{t_{1},1}}_{n_{1}}, \ldots,\\ \underbrace{\alpha_{1,k},\ldots,\alpha_{1,k}}_{n_{k}},\ldots, \underbrace{\alpha_{t_{k},k},\ldots,\alpha_{t_{k},k}}_{n_{k}})\in\mathbb{C}^{n}, \tag{3}\] which lies in \(V(\mathbf{f})\). In particular, for any \(g\) in \(\mathbb{Q}[x_{1},\ldots,x_{n}]\), we have \(\mathbb{T}_{\lambda}(g)(\mathbf{\alpha})=g(\mathbf{\varepsilon})\). Applying this to the Jacobian matrix of \(\mathbf{f}\), we obtain \(\mathbb{T}_{\lambda}(\operatorname{Jac}(f))(\mathbf{\alpha})=\operatorname{Jac}( \mathbf{f})(\mathbf{\varepsilon})\). Since by assumption \(\mathbf{f}\) is symmetric, the previous lemma implies that \[\operatorname{Jac}(\mathbf{f})(\mathbf{\varepsilon})=\operatorname{Jac}_{\mathbf{z}_{1}, \ldots,\mathbf{z}_{k}}(\mathbb{T}_{\lambda}(\mathbf{f}))(\mathbf{\alpha})\cdot\mathbf{Z}.\] Since \(\operatorname{Jac}(\mathbf{f})(\mathbf{\varepsilon})\) has rank \(s\) (by condition A), the left kernel of \(\operatorname{Jac}(\mathbf{f})(\mathbf{\varepsilon})\) is trivial. It follows that the left kernel of \(\operatorname{Jac}_{\mathbf{z}_{1},\ldots,\mathbf{z}_{k}}(\mathbb{T}_{\lambda}(f))( \mathbf{\alpha})\) is also trivial. When we represent \(S_{t_{1}}\times\cdots\times S_{t_{k}}\)-invariant functions in terms of Newton sums, we can show that the new representation also preserves condition (A). **Lemma 3.5**.: _Assume \((g_{1},\ldots,g_{s})\subset\mathbb{Q}[z_{1},\ldots,z_{k}]\) is \(S_{t_{1}}\times\cdots\times S_{t_{k}}\)-invariant and satisfies condition (A). If we set \(h_{i}=\gamma_{g_{i}}\) for all \(i\), then \((h_{1},\ldots,h_{s})\) also satisfies condition (A)._ Proof.: The Jacobian matrix \(\operatorname{Jac}(\mathbf{g})\) of \((g_{1},\ldots,g_{s})\) factors as \[\operatorname{Jac}(\mathbf{g})=\operatorname{Jac}(\mathbf{h})(\mathbf{P})\cdot\mathbf{V},\text { where }\mathbf{V}=\operatorname{diag}(V_{1},\ldots,V_{k})\] with each \(V_{i}\) a row-scaled Vandermonde matrix given by \[V_{i}=\begin{pmatrix}1&&\\ &2&&\\ &&\ddots&\\ &&t_{i}\end{pmatrix}\begin{pmatrix}1&1&\cdots&1\\ z_{1,i}&z_{2,i}&\cdots&z_{t_{i},i}\\ \vdots&&\vdots\\ z_{1,i}^{t_{i}-1}&z_{2,i}^{t_{i}-1}&\cdots&z_{t_{i},i}^{t_{i}-1}\end{pmatrix} \tag{4}\] Let \(\mathbf{\eta}\) be a point in the vanishing set of \((h_{1},\ldots,h_{s})\) and let \(\mathbf{\varepsilon}\) be in \(\mathbf{P}^{-1}(\mathbf{\eta})\). 
If \(\operatorname{Jac}(\mathbf{h})\) is rank deficient at \(\mathbf{\eta}\), then \(\operatorname{Jac}(\mathbf{h})(\mathbf{P})(\mathbf{\varepsilon})\) is also rank deficient. This implies that the rank of \(\operatorname{Jac}(\mathbf{g})(\mathbf{\varepsilon})\), which is bounded above by those of \(\operatorname{Jac}(\mathbf{h})(\mathbf{P})(\mathbf{\varepsilon})\) and \(\mathbf{V}(\mathbf{\varepsilon})\), is deficient as well, contradicting the assumption that \(\mathbf{g}\) satisfies condition (A).

Similarly, instead of using the row-scaled Vandermonde matrix \(V_{i}\) as in (4), we can use for \(V_{i}\) the Jacobian matrix of the elementary symmetric functions in \(\mathbf{z}_{i}\). This gives a similar result, but for the polynomials \(\zeta_{g_{1}},\ldots,\zeta_{g_{s}}\).

**Lemma 3.6**.: _Assume \((g_{1},\ldots,g_{s})\subset\mathbb{Q}[\mathbf{z}_{1},\ldots,\mathbf{z}_{k}]\) is \(S_{t_{1}}\times\cdots\times S_{t_{k}}\)-invariant and satisfies condition (A). Then the sequence of polynomials \((\zeta_{g_{1}},\ldots,\zeta_{g_{s}})\) also satisfies condition (A)._

## 4. Critical Loci

If \(W\subset\mathbb{C}^{\ell}\) is an equidimensional algebraic set, and \(\phi\) a polynomial function defined on \(W\), a non-singular point \(\mathbf{w}\in W\) is called a _critical point_ of \(\phi\) on \(W\) if the gradient of \(\phi\) at \(\mathbf{w}\) is normal to the tangent space \(T_{\mathbf{w}}W\) of \(W\) at \(\mathbf{w}\). If \(\mathbf{g}=(g_{1},\ldots,g_{s})\) are generators of the ideal associated to \(W\), then \(T_{\mathbf{w}}W\) is the right kernel of the Jacobian matrix \(\operatorname{Jac}(\mathbf{g})\) of \(\mathbf{g}\) evaluated at \(\mathbf{w}\). In the cases we will consider, this matrix will have rank \(s\) at all points of \(W\) (that is, \(\mathbf{g}\) satisfies condition (A)). The set of critical points of the restriction of \(\phi\) to \(W\) is then defined by the vanishing of \(\mathbf{g}\) and of the \((s+1)\)-minors of the Jacobian matrix \(\operatorname{Jac}(\mathbf{g},\phi)\) of \(\mathbf{g}\) and \(\phi\).

### Finiteness through genericity

Let \(\mathbf{g}=(g_{1},\ldots,g_{s})\) in \(\mathbb{Q}[\mathbf{z}_{1},\ldots,\mathbf{z}_{k}]\) with each \(g_{i}\) invariant under the action of \(S_{t_{1}}\times\cdots\times S_{t_{k}}\); we write \(\ell=t_{1}+\cdots+t_{k}\). We introduce some useful \(S_{t_{1}}\times\cdots\times S_{t_{k}}\)-invariant mappings and discuss the properties of their critical points on \(V(\mathbf{g})\subset\mathbb{C}^{\ell}\). For \(1\leq i\leq k\), let \(\mathfrak{a}_{i}=(\mathfrak{a}_{1,i},\ldots,\mathfrak{a}_{t_{i},i})\) be new indeterminates, and recall that \(P_{j,i}\) is the \(j\)-th Newton sum for the variables \(\mathbf{z}_{i}\). Set

\[\phi_{\mathfrak{a}}=\sum_{i=1}^{k}c_{i}P_{t_{i}+1,i}+\sum_{i=1}^{k}\sum_{j=1}^{t_{i}}\mathfrak{a}_{j,i}P_{j,i}. \tag{5}\]

**Proposition 4.1**.: _Let \(\mathbf{g}=(g_{1},\ldots,g_{s})\) be \(S_{t_{1}}\times\cdots\times S_{t_{k}}\)-invariant polynomials in \(\mathbb{Q}[\mathbf{z}_{1},\ldots,\mathbf{z}_{k}]\). Suppose further that \(\mathbf{g}\) satisfies condition (A).
Then there exists a non-empty Zariski open set \(\mathcal{A}\subset\mathbb{C}^{\mathbf{t}_{1}}\times\cdots\times\mathbb{C}^{\mathbf{t}_ {k}}\) such that for \(\mathbf{a}\in\mathcal{A}\), the restriction of \(\phi_{\mathbf{a}}\) to \(V(\mathbf{g})\) has finitely many critical points in \(\mathcal{U}\)._ ### Proof of Proposition 4.1 For new variables \(L_{1},\ldots,L_{\mathsf{s}}\), we denote by \(S_{\mathbf{a}}\) the polynomials \[S_{\mathbf{a}}=\big{(}g_{1},\ldots,g_{\mathsf{s}},\quad[L_{1}\cdots L_{\mathsf{s }}\ 1]\cdot\mathrm{Jac}(\mathbf{g},\phi_{\mathbf{a}})\big{)}.\] For \(\mathbf{a}=(\mathbf{a}_{1},\ldots,\mathbf{a}_{k})\) in \(\mathbb{C}^{\mathbf{t}_{1}}\times\cdots\times\mathbb{C}^{\mathbf{t}_{k}}\), with each \(\mathbf{a}_{i}\) in \(\mathbb{C}^{\mathbf{t}_{i}}\), we denote by \(S_{\mathbf{a}}\) the polynomials in \(\mathbb{C}[L_{1},\ldots,L_{\mathsf{s}},\mathbf{z}_{1},\ldots\mathbf{z}_{k}]\) obtained by evaluating \(\mathbf{a}_{i}\) at \(\mathbf{a}_{i}\) in \(S_{\mathbf{a}}\), for all \(i\). Finally, denote by \(\pi\) the projection from the \((L,z)\)-space \(\mathbb{C}^{\mathbf{s}\neq\mathbf{t}}\) to the \(\mathbf{z}\)-space \(\mathbb{C}^{\mathbf{t}}\). **Lemma 4.2**.: _Suppose that \(\mathbf{g}\) satisfies condition (A). Then for \(\mathbf{a}\in\mathbb{C}^{\mathbf{t}_{1}}\times\cdots\times\mathbb{C}^{\mathbf{t}_{k}}\), \(\pi(V(\mathbf{S_{\mathbf{a}}}))\) is the critical locus of the restriction of the map \(\phi_{\mathbf{a}}\) to \(V(\mathbf{g})\)._ Proof.: For any \(\mathbf{a}\in\mathbb{C}^{\mathbf{t}_{1}}\times\cdots\times\mathbb{C}^{\mathbf{t}_{k}}\), we denote by \(W(\phi_{\mathbf{a}},\mathbf{g})\) the set of critical points of the restriction of \(\phi_{\mathbf{a}}\) to \(V(\mathbf{g})\). Since \(\mathbf{g}\) satisfies condition (A), the set \(W(\phi_{\mathbf{a}},\mathbf{g})\) is given by \[\{\mathbf{w}\ |g_{1}(\mathbf{w})=\cdots=g_{\mathsf{s}}(\mathbf{w})=0,\quad\mathrm{rank}( \mathrm{Jac}(\mathbf{g},\phi_{\mathbf{a}})(\mathbf{w}))\leq s\}.\] Consider \(\mathbf{w}\) in \(W(\phi_{\mathbf{a}},\mathbf{g})\) and a nonzero vector \(\mathbf{c}\) in the left kernel of \(\mathrm{Jac}(\mathbf{g},\phi_{\mathbf{a}})(\mathbf{w})\), of the form \(\mathbf{c}=(c_{1},\ldots,c_{\mathsf{s}},c_{\mathsf{s}+1})\). The last coordinate \(c_{\mathsf{s}+1}\) cannot vanish, as otherwise \((c_{1},\ldots,c_{\mathsf{s}})\) would be a nonzero vector in the left kernel of \(\mathrm{Jac}(\mathbf{g})(\mathbf{w})\) (which is ruled out by condition (A)). Dividing through by \(c_{\mathsf{s}+1}\), the point \((\mathbf{c}^{\prime},\mathbf{w})\), with \(c^{\prime}_{i}=c_{i}/c_{\mathsf{s}+1}\) for \(i=1,\ldots,s\), is a solution of \(S_{\mathbf{a}}\). Conversely, take \((\mathbf{t},\mathbf{w})\) in \(V(\mathbf{S_{\mathbf{a}}})\). Thus, \(\mathbf{w}\) cancels \(\mathbf{g}\), and \(\mathrm{Jac}(\mathbf{g},\phi_{\mathbf{a}})\) has rank less than \(s+1\) at \(\mathbf{w}\), so that \(\pi(V(\mathbf{S_{\mathbf{a}}}))\) is in \(W(\phi_{\mathbf{a}},\mathbf{g})\). Let \(\phi_{\mathbf{a}}\) and \(\gamma_{\phi_{\mathbf{a}}}\) be defined as in (5) and Lemma 2.2, respectively. For \(i=1,\ldots,k\), set \(Q_{i}=\gamma_{P_{i}\neq\mathsf{s}+i}\), and let \(h_{1},\ldots,h_{\mathsf{s}}=\gamma_{P_{1}},\ldots,\gamma_{P_{\mathsf{s}}}\). 
In particular, Lemma 2.2 implies that \(\gamma_{\phi_{\mathbf{a}}}\) is given by \[\sum_{i=1}^{k}c_{i}Q_{i}+\sum_{i=1}^{k}\sum_{j=1}^{k}\mathbf{a}_{j,i}p_{j,i}.\] The sequence \(S_{\mathbf{a}}\) can be rewritten as \[\begin{array}{cccc}h_{1}\circ\mathbf{P},\ldots,h_{\mathsf{s}}\circ\mathbf{P},\\ \left[L_{1}\ \ldots\ L_{\mathsf{s}}\ 1\right]\left(\begin{array}{ccc} \frac{\partial h_{1}}{\partial\phi_{\mathsf{s}+1}}&\cdots&\frac{\partial h_{1}}{ \partial p_{\mathsf{s}+k}}\\ \vdots&&\vdots\\ \frac{\partial h_{1}}{\partial p_{\mathsf{s}+k}}&\cdots&\frac{\partial h_{1}}{ \partial p_{\mathsf{s}+k}}\\ c_{1}\frac{\partial Q_{1}}{\partial p_{\mathsf{s}+1}}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{ +}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} \mathbf{+}\mathbf{+}\mathbf{+}\mathbf{+} Let \(F=F_{k}(x_{1},\ldots,x_{n})+F_{k-1}(x_{1},\ldots,x_{n})+\cdots+F_{0}(x_{1},\ldots,x_{n}):\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a real polynomial, where \(F_{i}\) is the homogeneous component of degree \(i\) of \(F\). Assume further that the leading form \(F_{k}\) of \(F\) is positive definite; then, \(F\) is proper. In particular, the map \(P_{2m}+\sum_{i=0}^{2m-1}\lambda_{i}P_{i}\), with \(P_{i}\) the Newton sums in \(x_{1},\ldots,x_{n}\) and all \(\lambda_{i}\) in \(\mathbb{Q}\), is proper. We can extend this to blocks of variables. Lemma 4.5 ().: _Let \(z_{1},\ldots,z_{k}\) be blocks of \(t_{1},\ldots,t_{k}\) variables, respectively. If \(P_{j,i}:=z_{1,i}^{j}+\cdots+z_{t_{i},i}^{j}\), then for any \(m_{1},\ldots,m_{k}\geq 1\) and coefficients \(\lambda_{i,j}\) in \(\mathbb{Q}\), the map_ \[\sum_{i=1}^{k}P_{2m_{i},i}+\sum_{i=1}^{k}\sum_{j=0}^{2m_{i}-1}\lambda_{j,i}P_{ j,i}\] _is proper._ ## 5. 
Main Result Let \(\mathbf{f}=(f_{1},\ldots,f_{s})\) be a sequence of symmetric polynomials in \(\mathbb{Q}[x_{1},\ldots,x_{n}]\) that satisfies condition (A). In this section we present an algorithm and its complexity to decide whether the real locus of \(V(\mathbf{f})\) is empty or not. To exploit the symmetry of \(\mathbf{f}\) and to decide whether the set \(V_{\mathbb{R}}(\mathbf{f})\) is empty or not, our main idea is slicing the variety \(V(\mathbf{f})\) with hyperplanes which are encoded by a partition \(\lambda\) of \(n\). This way, we obtain a new polynomial system which is invariant under the action \(S_{\lambda}=S_{t_{1}}\times\cdots\times S_{t_{k}}\) of symmetric groups. We proved in Lemma 3.4 that this new system also satisfies condition (A). We then use the critical point method to decide whether the real locus of the algebraic variety defined by this new system is empty or not by taking a \(S_{\lambda}\)-invariant map as defined in the previous section. ### Critical points along \(S_{n}\)-orbits Let \(\mathbf{g}=(g_{1},\ldots,g_{s})\) be a sequence of \(S_{\lambda}\)-invariant polynomials and \(\phi\) be a \(S_{\lambda}\)-invariant map in \(\mathbb{Q}[z_{1},\ldots,z_{k}]\), with \(\mathbf{z}_{i}=(z_{1,i},\ldots,z_{t_{i},i})\) for all \(i\). As before, we set \(\ell=t_{1}+\cdots+t_{k}\), and we assume that \(s\leq\ell\). Assume further that the sequence \(\mathbf{g}\) satisfies condition (A). Let \(\phi\) be a \(S_{\lambda}\)-invariant map in \(\mathbb{Q}[z_{1},\ldots,z_{k}]\). Let \(\zeta_{\mathbf{g}}\) and \(\zeta_{\mathbf{g}}\) in \(\mathbb{Q}[\mathbf{e}_{1},\ldots,\mathbf{e}_{k}]\), where \(\mathbf{e}_{i}=(e_{1,i},\ldots,e_{t_{i},i})\) is a set of \(t_{i}\) new variables, be such that \[\phi=\zeta_{\mathbf{\phi}}(\mathbf{E}_{1},\ldots,\mathbf{E}_{k})\quad\text{and}\quad\mathbf{g }=\zeta_{\mathbf{g}}(\mathbf{E}_{1},\ldots,\mathbf{E}_{k}).\] Here \(\mathbf{E}_{i}=(E_{1,i},\ldots,E_{t_{i},i})\) denotes the vector of elementary symmetric polynomials in variables \(\mathbf{z}_{i}\), with each \(E_{j,i}\) having degree \(j\) for all \(j,i\). Lemma 5.1 ().: _Let \(\mathbf{g},\phi\), and \(\lambda\) as above. Assume further that \(\zeta_{\mathbf{\phi}}\) has finitely many critical points on \(V(\zeta_{\mathbf{g}})\). Then there exists a randomized algorithm \(\text{Critical\_points}\)\((\mathbf{g},\phi,\lambda)\) which returns a zero-dimensional parametrization of the critical points of \(\zeta_{\mathbf{\phi}}\) restricted to \(V(\zeta_{\mathbf{g}})\). The algorithm uses_ \[O^{\prime}\left(\delta^{2}c_{\lambda}(e_{\lambda}+c_{\lambda}^{5})n^{4}\Gamma\right)\] _operations in \(\mathbb{Q}\), where_ \[c_{\lambda} =\frac{\deg(g_{1})\cdots\deg(g_{s})\cdot E_{\ell-s}(\delta-1, \ldots,\delta-\ell)}{t_{1}!\cdots t_{k}!},\] \[\Gamma =n^{2}\binom{n+\delta}{\delta}+n^{4}\binom{n}{s+1},\text{ and}\] \[e_{\lambda} =\frac{n(\deg(g_{1})+1)\cdots(\deg(g_{s})+1)\cdot E_{\ell-s}( \delta,\ldots,\delta-\ell+1)}{t_{1}!\cdots t_{k}!},\] _with \(\delta=\max(\deg(\mathbf{g}),\deg(\phi))\). The number of solutions is at most \(c_{\lambda}\)._ Proof.: The Critical_points procedure contains two steps: first finding \(\zeta_{\mathbf{g}}\) and \(\zeta_{\mathbf{\phi}}\) from \(\mathbf{g}\) and \(\phi\) and then computing a representation for the set \(W(\zeta_{\phi},\zeta_{\mathbf{g}})\) of critical points of \(\zeta_{\mathbf{\phi}}\) on \(V(\zeta_{\mathbf{g}})\). 
The first step can be done using the algorithm \(\text{Symmetric\_Coordinates}\) from (Zuang and Zuang, 2015, Lemma 9), which uses \(O^{\prime}\left(\binom{\ell+\delta}{\delta}^{2}\right)\) operations in \(\mathbb{Q}\). Since the sequence \(\mathbf{g}\) satisfies condition (A), Lemma 3.6 implies that \(\zeta_{\mathbf{g}}\) also satisfies condition (A). Then, the set \(W(\zeta_{\mathbf{\phi}},\zeta_{\mathbf{g}})\) is the zero set of \(\zeta_{\mathbf{g}}\) and all the \((s+1)\)-minors of \(\operatorname{Jac}(\zeta_{\mathbf{g}},\zeta_{\mathbf{\phi}})\). In particular, when \(\ell=s\), \(W(\zeta_{\mathbf{\phi}},\zeta_{\mathbf{g}})=V(\zeta_{\mathbf{g}})\). Since each \(E_{j,i}\) has degree \(j\), it is natural to assign a weight \(j\) to the variable \(e_{j,i}\), so that the polynomial ring \(\mathbb{Q}[\mathbf{e}_{1},\ldots,\mathbf{e}_{k}]\) is weighted of weights \((1,\ldots,t_{1},\ldots,1,\ldots,t_{k})\). The weighted degrees of \(\zeta_{\mathbf{g}}\) and \(\zeta_{\mathbf{\phi}}\) are then equal to those of \(\mathbf{g}\) and \(\phi\), respectively. To compute a zero-dimensional parametrization for \(W(\zeta_{\mathbf{\phi}},\zeta_{\mathbf{g}})\) we use the symbolic homotopy method for weighted domain given in (Zuang and Zuang, 2015, Thm 5.3) (see also (Zuang and Zuang, 2015, Sec 5.2) for a detailed complexity analysis). This procedure is randomized and requires \[O^{\prime}\left(\delta^{2}c_{\lambda}(e_{\lambda}+c_{\lambda}^{5})n^{4}\Gamma \right)\text{ operations in }\mathbb{Q}.\] Furthermore, results from (Zuang and Zuang, 2015, Thm 5.3) also imply that the number of points in the output is at most \(c_{\lambda}\). Thus, the total complexity of the Critical_points algorithm is then \(O^{\prime}\left(\delta^{2}c_{\lambda}(e_{\lambda}+c_{\lambda}^{5})n^{4}\Gamma\right)\) operations in \(\mathbb{Q}\). ### The Decide procedure Consider a partition \(\lambda=(n_{1}^{t_{1}}\ \ldots\ n_{k}^{t_{k}})\) of \(n\), and let \[\mathcal{R}_{\lambda}=(v,v_{1},\ldots,v_{t_{1}},\ldots,v_{1,k},\ldots,v_{t_{k},k},\mu)\] be a parametrization which encodes a finite set \(W_{\lambda}\subset\mathbb{C}^{\ell}\). This set lies in the target space of the algebraic map \(F_{\lambda}:U_{\lambda}\rightarrow\mathbb{C}^{\ell}\) defined in Subsection 2.2 as \[\mathbf{u} =(\underbrace{u_{1,1},\ldots,u_{1,1}}_{n_{1}},\ \ldots,\ \underbrace{u_{t_{k},k},\ldots,u_{t_{k},k}}_{n_{k}})\] \[\quad\mapsto(E_{1,i}(u_{1,i},\ldots,u_{t_{i},i}),\ldots,E_{t_{i},i}(u_{1,i},\ldots,u_{t_{i},i}))_{1\leq i\leq k}, \tag{6}\] where \(E_{j,i}(u_{1,i},\ldots,u_{t_{i},i})\) is the \(j\)-th elementary symmetric function in \(u_{1,i},\ldots,u_{t_{i},i}\) for \(i=1,\ldots,k\) and \(j=1,\ldots,t_{i}\). Let \(V_{\lambda}\) be the preimage of \(W_{\lambda}\) by \(F_{\lambda}\). In this subsection we present a procedure called \(\text{Decide}(\mathcal{R}_{\lambda})\) which takes as input \(\mathcal{R}_{\lambda}\), and decides whether the set \(V_{\lambda}\) contains real points. the preimage of a single point in \(W_{\lambda}\): we would lose the benefit of all that had been done before. This difficulty can be bypassed by encoding one single point per orbit in the preimage of the points in \(W_{\lambda}\). This can be done via the following steps. 1. Group together the variables \(\mathbf{e}_{i}=(e_{1,i},\ldots,e_{t_{i},i})\) which encode the values taken by the elementary symmetric functions \(E_{i,1},\ldots,E_{i,t_{i}}\) (see Sec. 2.2) and denote by \(v_{i,1},\ldots,v_{i,t_{i}}\) the parametrizations corresponding to \(e_{1,i},\ldots,e_{t_{i},i}\); 2. 
Make a reduction to a bivariate polynomial system by considering the polynomial with coefficients in \(\mathbb{Q}[t]\) \[\rho_{i}=v^{\prime}u^{t_{i}}-v_{1,i}u^{t_{i}-1}+\cdots+(-1)^{t_{i}}v_{t_{i},i }\in\mathbb{Q}[t][u]\] and "solving" the system \(\rho_{i}=v=0\). Here we recall that \(v\in\mathbb{Q}[t]\) and is square-free, so that \(v\) and \(v^{\prime}\) are coprime. 3. It remains to decide whether, for all \(1\leq i\leq k\), there is a real root \(\vartheta\) of \(v\) such that when replacing \(t\) by \(\vartheta\) in \(\rho_{i}\), the resulting polynomial has all its roots real. To do this we proceed by performing the following steps for \(1\leq i\leq k\): 1. first we compute the Sturm-Habicht sequence associated to \(\left(\rho_{i},\frac{\partial\rho_{i}}{\partial u}\right)\) in \(\mathbb{Q}[t]\) (the Sturm-Habicht sequence is a signed subresultant sequence, see (9, Chap. 9, Algo. 8.21)); 4. next, we compute Thom-encodings of the real roots of \(v\), which is a way to uniquely determine the roots of a univariate polynomial with real coefficients by means of the signs of its derivatives at the considered real root (see e.g. (9, Chap. 10, Algo. 10.14)); 5. finally, for each real root \(\vartheta\) of \(v\), evaluate the signed subresultant sequence at \(\vartheta\)(9, Chap. 10, Algo. 10.15) and compute the associated Cauchy index to deduce the number of real roots of \(\rho_{i}\) (see (9, Cor. 9.5)). 6. For a given real root \(\vartheta\) of \(v\), it holds that, for all \(1\leq i\leq k\), the number of real roots of \(\rho_{i}\) equals its degree, if and only if \(V_{\lambda}\) is non-empty. The above steps describe our Decide, which returns false if \(V_{\lambda}\) contains real points, else true. ### The main algorithm Our main algorithm \(\mathsf{Real\_empt}\) optimess takes symmetric polynomials \(\mathbf{f}=(f_{1},\ldots,f_{s})\) in \(\mathbb{Q}[x_{1},\ldots,x_{n}]\), with \(s<n\), which satisfy condition (A), and decides whether \(V_{\mathbb{R}}(f)\) is empty. For a partition \(\lambda\), we first find the polynomials \(\mathbf{f}_{\lambda}\coloneqq\mathbb{T}_{\lambda}(\mathbf{f})\), which are \(S_{\lambda}\)-invariant in \(\mathbb{Q}[z_{1},\ldots,z_{k}]\), where \(\mathbb{T}_{\lambda}\) is defined as in (2). By Corollary 3.4, \(\mathbf{f}_{\lambda}\) satisfies condition (A), so we can apply the results of Section 4. Let \(\phi_{\mathbf{a}}\) be the map defined in (5) and \(\mathcal{A}_{\lambda}\subset\mathbb{C}^{t_{1}}\times\cdots\times\mathbb{C}^{t_ {1}}\) be the non-zero Zariski open set defined in Proposition 4.1. Assume \(\mathbf{a}\) is chosen in \(\mathcal{A}_{\lambda}\) (this is one of the probabilistic aspects of our algorithm) at step 1b. By Corollary 3.4, \(\mathbf{f}_{\lambda}\) satisfies condition (A). Then, the critical locus of the restriction of \(\phi_{\mathbf{a}}\) to \(V(\mathbf{f}_{\lambda})\) is of dimension at most zero (by Proposition 4.1). In addition, the map \(\phi_{\mathbf{a}}\) is invariant under the action of the group \(S_{\lambda}\). Let \(\zeta_{\phi_{\mathbf{a}}}\) and \(\zeta_{\mathbf{f}_{\lambda}}\) in \(\mathbb{Q}[\mathbf{e}_{1},\ldots,\mathbf{e}_{k}]\) such that \[\phi_{\mathbf{a}}=\zeta_{\phi_{\mathbf{a}}}(\mathbf{E}_{1},\ldots,\mathbf{E}_{k})\quad\text{ and}\quad\mathbf{f}_{\lambda}=\zeta_{\mathbf{f}_{\lambda}}(\mathbf{E}_{1},\ldots,\mathbf{E}_{k}).\] Here \(\mathbf{E}_{i}=(E_{1,i},\ldots,E_{t_{i},i})\) denotes the vector of elementary symmetric polynomials in variables \(z_{i}\). 
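Before continuing with the main algorithm, the following SymPy sketch (ours; the polynomials \(v\) and \(\rho\) are toy data with rational roots chosen for readability) illustrates the real-root counting performed in step (3) of Decide: for each real root \(\vartheta\) of \(v\), count the real roots of \(\rho(\vartheta,u)\) and compare with its degree. Sturm's theorem stands in here for the Sturm–Habicht and Thom-encoding machinery cited above.

```python
# Sketch (ours) of the real-root counting step of Decide, on toy data.
import sympy as sp

t, u = sp.symbols("t u")

v   = t**2 - 4*t + 3                              # squarefree "eliminating" polynomial, roots 1 and 3
rho = sp.diff(v, t)*u**2 - t*u + (t - 1)          # same shape as rho_i = v'*u^2 - v_{1,i}*u + v_{2,i}

def count_real_roots(p, var):
    """Distinct real roots of a squarefree univariate polynomial, via Sturm's theorem."""
    seq = sp.sturm(sp.Poly(p, var))
    def sign_changes(signs):
        signs = [s for s in signs if s != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    at_minus_inf = [sp.sign(q.LC())*(-1)**q.degree() for q in seq]   # signs as x -> -oo
    at_plus_inf  = [sp.sign(q.LC()) for q in seq]                    # signs as x -> +oo
    return sign_changes(at_minus_inf) - sign_changes(at_plus_inf)

for theta in sp.real_roots(v):                    # the real roots of v
    p_theta = sp.expand(rho.subs(t, theta))
    n_real, deg = count_real_roots(p_theta, u), sp.degree(p_theta, u)
    print(f"theta = {theta}:  {n_real} real root(s) out of degree {deg}")
# theta = 1 gives 2 real roots (all roots real); theta = 3 gives 0 (a complex pair)
```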
In the next step, we compute a zero-dimensional parametrization \(\mathcal{R}_{\lambda}\) of the critical set \(W_{\lambda}\coloneqq W(\zeta_{\phi_{\mathbf{a}}},\zeta_{\mathbf{f}_{\lambda}})\) of \(\zeta_{\phi_{\mathbf{a}}}\) restricted to \(V(\zeta_{\mathbf{f}_{\lambda}})\) by using the Critical_points algorithm from Lemma 5.1. The parametrization \(\mathcal{R}_{\lambda}\) is given by a sequence of polynomials \((v,v_{1,1},\ldots,v_{t_{1},1},\ldots,v_{1,k},\ldots,v_{t_{k},k})\) in \(\mathbb{Q}[t]\) and a linear form \(\mu\). At the final step, we run the \(\text{Decide}(\mathcal{R}_{\lambda})\) in order to determine whether the preimage of \(W_{\lambda}\) by the map \(\mathbf{F}_{\lambda}\) contains real points. ``` Input: symmetric polynomials \(\mathbf{f}=(f_{1},\ldots,f_{s})\) in \(\mathbb{Q}[x_{1},\ldots,x_{n}]\) with \(s<n\) such that \(\mathbf{f}\) satisfies (A) Output: false if \(V(f)\cap\mathbb{R}^{n}\) is non-empty; true otherwise ``` **Algorithm 1**\(\mathsf{Real\_emptt}\)\((\mathbf{f})\) **Proposition 5.2**.: _Assume that, on input symmetric \(\mathbf{f}\) as above, and satisfying condition (A), for all partitions \(\lambda\) of length at least \(s\), \(\mathbf{a}\) is chosen in \(\mathcal{A}_{\lambda}\) and that all calls to the randomized algorithm \(\mathsf{Critical\_points}\) return the correct result. Then Algorithm \(\mathsf{Real\_emptt}\)\(\mathsf{Real\_emptt}\) returns true if\(V(\mathbf{f})\cap\mathbb{R}^{n}\) is empty and otherwise it returns false._ Proof.: Since \(\mathbf{f}\) satisfies condition (A), Lemma 3.4 implies that \(\mathbf{f}_{\lambda}\) also satisfies this condition. Then, by the Jacobian criterion (22, Thm 16.19), \(V(\mathbf{f}_{\lambda})\) is smooth and equidimensional of dimension \((\ell-s)\), where \(\ell\) is the length of \(\lambda\). Therefore, if \(\ell<s\), then the algebraic set \(V(\mathbf{f}_{\lambda})\) is empty. Thus, the union of \(V(\mathbf{f}_{\lambda})\cap\mathcal{U}_{\lambda}\) where \(\mathcal{U}_{\lambda}\) is the open set defined in Subsection 4.1 and \(\lambda\) runs over the partitions of \(n\) of length at least \(s\), forms a partition of \(V(f)\). Hence, \(V(f)\cap\mathbb{R}^{n}\) is non-empty if and only if there exists at least one such partition for which \(V(\mathbf{f}_{\lambda})\cap\mathcal{U}_{\lambda}\cap\mathbb{R}^{n}\) is non-empty. We already observed that for all \(\lambda\), \(\mathbf{f}_{\lambda}\) does satisfy condition (A). Since we have assumed that each time Step 1b is performed, \(\mathbf{a}\) is chosen in \(\mathcal{A}_{\lambda}\), we apply Proposition 4.4 to deduce that the conditions of Lemma 5.1 are satisfied. Hence, all calls to Critical_points are valid. Note that since we assume that all these calls return the correct result, we deduce that their output encodes points which all lie in \(V(f)\). Hence, if \(V(\mathbf{f})\cap\mathbb{R}^{n}\) is empty, applying the routine Decide on these outputs will always return true and, all in all, our algorithm returns true when \(V(\mathbf{f})\cap\mathbb{R}^{n}\) is empty. It remains to prove that it returns false when \(V(\mathbf{f})\cap\mathbb{R}^{n}\) is non-empty. Note that there is a partition \(\lambda\) such that \(V(\mathbf{f}_{\lambda})\cap\mathbb{R}^{n}\) is nonempty and has an empty intersection with the complement of \(\mathcal{U}_{\lambda}\). That is, all connected components of \(V(\mathbf{f}_{\lambda})\cap\mathbb{R}^{n}\) are in \(\mathcal{U}_{\lambda}\). Let \(C\) be such a connected component. 
By Lemma 4.5, the map \(\phi_{\mathbf{a}}\) is proper, and non-negative. Hence, its restriction to \(V(\mathbf{f}_{\lambda})\cap\mathbb{R}^{n}\) reaches its extremum at all connected components of \(V(f_{\lambda})\cap\mathbb{R}^{n}\). This implies that the restriction of \(\phi_{\mathbf{a}}\) to \(V(f_{\lambda})\) has real critical points which are contained in \(C\) (and by Proposition 4.1 there are finitely many). Those critical points are then encoded by the output of the call to Critical_points (Step 1c) and false is returned. ### Complexity analysis Let \(d=\max(\deg(f))\). First for a partition \(\lambda\), applying \(\mathbb{T}_{\lambda}\) to \(f\) takes linear time in \(O(n{n+d\choose d})\), the number of monomials of \(f\) and the cost of Step 1b is nothing. At the core of the algorithm, computing \(\mathcal{R}_{\lambda}\) at Step 1c requires \(O^{*}\left(\delta^{2}c_{\lambda}(e_{\lambda}+c_{\lambda}^{5})n^{4}\Gamma\right)\) operations in \(\mathbb{Q}\) by Lemma 5.1, where \(\delta=\max(d,\deg(\phi_{\mathbf{a}}))\). Also, the degree of \(\mathcal{R}_{\lambda}\) is at most \(c_{\lambda}\). In order to determine the cost of the Decide process at Step 1d, let \(a\) be the degree of \(v\) and \(b\) be the maximum of the partial degrees of \(\rho_{i}\)'s w.r.t. \(u\). By the complexity analysis of ((Bull and P., 2014, Algo, 8.21 ; Sec. 8.3.6), Step (1)) above is performed within \(O\left(b^{4}a\right)\) arithmetic operations in \(\mathbb{Q}[t]\) using a classical evaluation interpolation scheme (there are \(b\) polynomials to interpolate, all of them being of degree \(\leq 2ab\)). Step (2) above requires \(O\left(a^{4}\log(a)\right)\) arithmetic operations in \(\mathbb{Q}\) (see the complexity analysis of (Bull and P., 2014, Algo, 10.14, Algo, 10.4)). Finally, in Step (3), we evaluate the signs of \(b\) polynomials of degree \(\leq 2ab\) at the real roots of \(v\) (of degree \(a\)) whose Thom encodings were just computed. This is performed using \(O\left(ba^{3}\left(\left(\log(a)+b\right)\right)\right)\) arithmetic operations in \(\mathbb{Q}\) following the complexity analysis of (Bull and P., 2014, Algo, 10.15, Algo, 10.4). The sum of these estimates lies in \(O\left(b^{4}a+ba^{4}\left(\left(\log(a)+b\right)\right)\right)\). Now, recall that the degree of \(v\) is the degree of \(\mathcal{R}_{\lambda}\), so \(a\leq c_{\lambda}\). The degree of \(\rho_{i}\) w.r.t. \(u\) equals \(t_{i}\) and \(t_{i}\leq n\). This means \(b\leq n\). All in all, we deduce that the total cost of this final step lies in \(O\left(n^{4}c_{\lambda}+n^{2}c_{\lambda}\right)\), which is negligible compared to the previous costs. In the worst case, one need to consider all the partitions of \(n\) of length at least \(s\). Thus the total complexity of Real_emptiness is \[\sum_{\lambda,\ell\geq s}O^{*}\left(\delta^{2}c_{\lambda}(e_{\lambda}+c_{ \lambda}^{5})n^{4}\Gamma\right)\] operations in \(\mathbb{Q}\). In addition, Lemma 34 in (Han et al., 2015) implies that \[\sum_{\lambda,\ell\geq s}c_{\lambda}\leq c\text{ and }\sum_{\lambda,\ell\geq s }e_{\lambda}\leq e,\] where \(c=\deg(\zeta_{\ell})s^{\binom{n+\delta-1}{n}}\) and \(e=n(\deg(\zeta_{\ell})+1)^{s}\binom{n+\delta}{n}\). Notice further that \(\binom{n+\delta}{\delta}\leq(n+1)\binom{n+\delta-1}{d}\) and \(e=n(d+1)^{s}\binom{n+\delta}{n}\leq n(n+1)c^{5}\) for \(\delta\geq 2\). 
In addition, since \(\deg(\phi_{\mathbf{a}})\leq\max(t_{i})+1\leq n\), the total cost of our algorithm is

\[O^{*}\left(d^{2}n^{6}c^{6}\Gamma\right)=O^{*}\left(d^{6s+2}n^{11}\binom{n+d}{n}^{6}\left(\binom{n+d}{n}+\binom{n}{s+1}\right)\right)\]

operations in \(\mathbb{Q}\).

### An example

Let \(n=4\) and \(s=1\) with \(\mathbf{f}=(f)\), where

\[f=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}-6x_{1}x_{2}x_{3}x_{4}-1.\]

Consider first the partition \(\lambda=(4^{1})\). Then \(f_{\lambda}=\mathbb{T}_{\lambda}(f)=-6z_{1,1}^{4}+4z_{1,1}^{2}-1\), which has no real solution since \(f_{\lambda}=-2z_{1,1}^{4}-(2z_{1,1}^{2}-1)^{2}<0\) for all \(z_{1,1}\in\mathbb{R}\).

Next we consider \(\lambda=(2^{2})\). Then

\[f_{(2^{2})}=2z_{1,1}^{2}+2z_{2,1}^{2}-6z_{1,1}^{2}z_{2,1}^{2}-1\]

and we take \(\phi=5(z_{1,1}^{2}+z_{2,1}^{2})-9(z_{1,1}+z_{2,1})-3\). In this case \(\zeta_{f_{(2^{2})}}=2e_{1,1}^{2}-6e_{2,1}^{2}-4e_{2,1}-1\) and \(\zeta_{\phi}=5e_{1,1}^{2}-9e_{1,1}-10e_{2,1}-3\). The critical points of \(\zeta_{\phi}\) restricted to \(V(\zeta_{f_{(2^{2})}})\) are the solutions to

\[\zeta_{f_{(2^{2})}}=\det\left(\operatorname{Jac}(\zeta_{f_{(2^{2})}},\zeta_{\phi})\right)=0,\]

that is, \(2e_{1,1}^{2}-6e_{2,1}^{2}-4e_{2,1}-1=120e_{1,1}e_{2,1}-108e_{2,1}-36=0\). A zero-dimensional parametrization of these critical points is given by \(((v,v_{1,1},v_{2,1}),\mu)\), where

\[v=200t^{4}-360t^{3}+62t^{2}+60t-27,\qquad v_{1,1}=t,\qquad v_{2,1}=-\frac{1}{6}t^{3}+\frac{9}{20}t^{2}-\frac{31}{600}t-\frac{1}{20}.\]

At the final step, we check that the system

\[\rho_{1}=v=0,\quad\text{with}\quad\rho_{1}=v^{\prime}u^{2}-v_{1,1}u+v_{2,1}\in\mathbb{Q}[t,u],\]

has real solutions. This implies that \(V_{\mathbb{R}}(f)\) is non-empty. The output of our algorithm is consistent with the fact that the point \((1,1,1/2,1/2)\) is in \(V_{\mathbb{R}}(f)\).

## 6. Topics for future research

Determining topological properties of a real variety \(V_{\mathbb{R}}(f)\) is an important algorithmic problem. Here we have presented an efficient algorithm to determine whether \(V_{\mathbb{R}}(f)\) is empty or not. More generally, we expect that the ideas presented here may lead to algorithmic improvements also for more refined questions, like computing one point per connected component or the Euler characteristic of a real symmetric variety. Furthermore, while our complexity gains are significant for symmetric input, we conjecture that we can do better in certain cases. In particular, when the degree of the polynomials is at most \(n\), we expect that a combination with the topological properties of symmetric semi-algebraic sets found in (Han et al., 2015, Prop 9) can reduce the number of orbits considered; for example, instead of \(n^{d}\) we might only need \(n^{d/2}\) for fixed \(d\). Finally, a generalization to general symmetric semi-algebraic sets should be possible.
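As an independent sanity check of the example above, the following SymPy sketch (added by us; it is not part of the paper) reproduces the two restrictions, their expressions in the elementary symmetric polynomials, the critical equations, and confirms that a real critical point with a real fibre exists, so that \(V_{\mathbb{R}}(f)\neq\emptyset\).

```python
# Verification sketch (ours) of the n = 4, s = 1 example above.
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1:5")
f = x1**2 + x2**2 + x3**2 + x4**2 - 6*x1*x2*x3*x4 - 1

# the point reported at the end of the example lies on V_R(f)
assert f.subs({x1: 1, x2: 1, x3: sp.Rational(1, 2), x4: sp.Rational(1, 2)}) == 0

# partition (4^1): all four coordinates equal
z = sp.symbols("z")
f4 = sp.expand(f.subs({x1: z, x2: z, x3: z, x4: z}))
assert f4 == -6*z**4 + 4*z**2 - 1
assert sp.expand(f4 - (-2*z**4 - (2*z**2 - 1)**2)) == 0          # manifestly negative on R

# partition (2^2): coordinates (z1, z1, z2, z2)
z1, z2, e1, e2 = sp.symbols("z1 z2 e1 e2")
f22 = sp.expand(f.subs({x1: z1, x2: z1, x3: z2, x4: z2}))
assert f22 == 2*z1**2 + 2*z2**2 - 6*z1**2*z2**2 - 1

# expressions in the elementary symmetric polynomials e1 = z1 + z2, e2 = z1*z2
zeta_f, zeta_phi = 2*e1**2 - 6*e2**2 - 4*e2 - 1, 5*e1**2 - 9*e1 - 10*e2 - 3
phi = 5*(z1**2 + z2**2) - 9*(z1 + z2) - 3
to_z = {e1: z1 + z2, e2: z1*z2}
assert sp.expand(zeta_f.subs(to_z) - f22) == 0
assert sp.expand(zeta_phi.subs(to_z) - phi) == 0

# critical equations of zeta_phi on V(zeta_f)
minor = sp.Matrix([zeta_f, zeta_phi]).jacobian([e1, e2]).det()
assert sp.expand(minor) == 120*e1*e2 - 108*e2 - 36

# eliminate e1 and look for real critical points whose fibre (z1, z2) is real:
# z1, z2 are the roots of T^2 - e1*T + e2, so they are real iff e1^2 - 4*e2 >= 0
e1_sol = sp.solve(minor, e1)[0]
num = sp.numer(sp.together(zeta_f.subs(e1, e1_sol)))
found = False
for r in sp.real_roots(sp.Poly(num, e2)):
    a = e1_sol.subs(e2, r)
    if sp.N(a**2 - 4*r) >= 0:
        found = True
print("V_R(f) non-empty:", found)                                 # True
```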
2309.03008
Sparse 3D Reconstruction via Object-Centric Ray Sampling
We propose a novel method for 3D object reconstruction from a sparse set of views captured from a 360-degree calibrated camera rig. We represent the object surface through a hybrid model that uses both an MLP-based neural representation and a triangle mesh. A key contribution in our work is a novel object-centric sampling scheme of the neural representation, where rays are shared among all views. This efficiently concentrates and reduces the number of samples used to update the neural model at each iteration. This sampling scheme relies on the mesh representation to ensure also that samples are well-distributed along its normals. The rendering is then performed efficiently by a differentiable renderer. We demonstrate that this sampling scheme results in a more effective training of the neural representation, does not require the additional supervision of segmentation masks, yields state of the art 3D reconstructions, and works with sparse views on the Google's Scanned Objects, Tank and Temples and MVMC Car datasets. Code available at: https://github.com/llukmancerkezi/ROSTER
Llukman Cerkezi, Paolo Favaro
2023-09-06T13:54:31Z
http://arxiv.org/abs/2309.03008v2
# Sparse 3D Reconstruction via Object-Centric Ray Sampling ###### Abstract We propose a novel method for 3D object reconstruction from a sparse set of views captured from a 360-degree calibrated camera rig. We represent the object surface through a hybrid model that uses both an MLP-based neural representation and a triangle mesh. A key contribution in our work is a novel object-centric sampling scheme of the neural representation, where rays are shared among all views. This efficiently concentrates and reduces the number of samples used to update the neural model at each iteration. This sampling scheme relies on the mesh representation to ensure also that samples are well-distributed along its normals. The rendering is then performed efficiently by a differentiable renderer. We demonstrate that this sampling scheme results in a more effective training of the neural representation, does not require the additional supervision of segmentation masks, yields state of the art 3D reconstructions, and works with sparse views on the Google's Scanned Objects, Tank and Temples and MVMC Car datasets. ## 1 Introduction The task of reconstructing the 3D surface of an object from multiple calibrated views is a well-established problem with a long history of methods exploring a wide range of 3D representations and optimization methods [10, 11, 51, 12]. Recent approaches have focused their attention on deep learning models [5, 20, 26, 32, 33, 37, 48, 63]. In particular, methods based on neural rendering such as NeRF and its variants [3, 4, 33, 40, 59], have not only shown impressive view interpolation capabilities, but also the ability to output 3D reconstructions as a byproduct of their training. NeRF's neural rendering drastically simplifies the generation of images given a new camera pose. It altogether avoids the complex modeling of the light interaction with surfaces in the scene. A neural renderer learns to output the color of a pixel as a weighted average of 3D point samples from the NeRF model. Current methods choose these samples along the ray defined by the given pixel and the camera center (see Figure 2 left). Because each camera view defines a separate pencil of rays, the 3D samples rarely overlap. Thus, each view will provide updates for mostly independent sets of parameters of the NeRF model, which can lead to data overfitting. In practice, overfitting means that views used for training will be rendered correctly, but new camera views will give unrealistic images. Such overfitting is particularly prominent when training a NeRF on only a sparse set of views with a broad object coverage (, see the \(360^{\circ}\) camera rig in Figure 1). In this work, we address overfitting when working with sparse views by proposing an object-centric sampling scheme that is shared across all views (see Figure 2 right and Figure 3). We design the scheme so that all (visible) views can provide an update for the same 3D points on a given sampling ray. To do so, we introduce a hybrid 3D representation, where we simultaneously update a Multi Layer Perceptron (MLP) based implicit surface model (similarly to a NeRF) and an associated triangle mesh. The MLP model defines an implicit 3D representation of the scene, while the mesh is used to define the sampling rays. These rays are located at each mesh vertex and take _the direction of the normal to the mesh_. Then, the mesh vertex of the current object surface is updated by querying the MLP model at 3D samples on the corresponding ray. 
We use a similar deep learning model to associate color to the mesh. We then image the triangle mesh in each camera view via a differentiable renderer. Because of our representation, the queried 3D samples can be shared across multiple views and thus avoid the overfitting shown by NeRF models in our settings. Notice that a common practice to handle overfitting in NeRF models trained on sparse views is to constrain the 3D reconstruction through object masks. Masks provide a very strong 3D cue. In fact, a (coarse) reconstruction of an object can even be obtained from the masks alone, a technique known as shape from silhouette [23]. We show that our method yields accurate 3D reconstructions even without mask constraints. This confirms experimentally the effectiveness of our sampling scheme in avoiding overfitting. To summarize, our contributions are * A novel object-centric sampling scheme that efficiently shares samples across multiple views and avoids overfitting; the robustness of our method is such that it does not need additional constraints, such as 2D object masks; * A 3D reconstruction method that yields state of the art results with a sparse set of views from a 360-degree camera rig (on the Google's Scanned Objects [9], Tank and Temples [22] and MVMC Car datasets [63]). ## 2 Prior Work Mesh-based methods.With the development of differentiable renderers [5, 17, 19, 50], object reconstruction is now possible through gradient descent (or backpropagation in the context of deep learning). A common approach to predict the shape of an object using differentiable rendering is to use category level image collections [14, 18, 34, 47, 53, 61]. Recently, some methods aim to estimate the shape of an object in a classic multi-view stereo setting and without any prior knowledge of the object category [15, 35, 57, 58, 63]. Several methods also propose different ways to update the surface of the reconstructed object. The methods proposed by Goel et al. [15] and Worchel et al. [57] update the mesh surface by predicting vertex offsets to the template mesh. Zhang et al. [63] use a neural displacement field over a canonical sphere, but restrict the geometry to model only genus-zero topologies. Xu et al. [58], after getting a smooth initial shape via [60], proposes surface-based local MLPs to encode the vertex displacement field for the reconstruction of surface details. Munkberg et al. [35] use a hybrid representation as we do in our method. They learn the signed distance field (SDF) of the reconstructed object. The SDF is defined on samples on a fixed tetrahedral grid and then converted to a surface mesh via deep marching tetrahedra [46]. In contrast, we adapt the samples to the surface of the object as we reconstruct it. Implicit representations for volume rendering.Recently, Neural Radiance Field (NeRF)-based methods have shown great performance in novel view synthesis tasks [2, 25, 30, 33, 43, 49, 54, 64]. However, these methods require a dense number of training views and camera poses to render realis Figure 3: **View vs object-centric sampling (see Figure 2).** Computational efficiency: The view-centric approach uses \(8\times K\) samples per mesh vertex, with \(K\) camera views. In contrast, the object-centric approach uses only \(8\) samples per vertex regardless of the number of camera views. Object-centric sampling is not only more efficient but also avoids overfitting. For more details, please check Section 3. Figure 2: **Sampling schemes.** Left: NeRF view-centric sampling scheme. 
Right: Our object-centric sampling scheme. The view-centric sampling scheme uses separate sets of 3D samples for each camera view. This leads to overfitting when views are sparse. Object-centric sampling instead shares the same 3D samples across multiple views. tic views. Methods that tackle novel view rendering from a small set of training views usually exploit two directions. The methods in the first group pre-train their models on large scale calibrated multiview datasets of diverse scenes [4, 6, 24, 27, 41, 56, 62]. In our approach, however, we consider training only on a small set of images. The methods in the second group add an additional regularization term to their optimization cost to handle the limited number of available views. Diet-Nerf [16] incorporates an additional loss that encourages the similarity between the pre-trained CLIP features between the training images and rendered novel views. RegNerf [38] incorporates two additional loss terms: 1) color regularization to avoid color shifts between different views and 2) geometry regularization to enforce smoothness between predicted depths of neighboring pixels. InfoNerf [21] adds a ray entropy loss to minimize the entropy of the volume density along each ray (thus, they encourage the concentration of the volume density on a single location along the ray). This is not suitable for sparse 360-degree camera rigs, where the camera positions lie at the same elevation angle (as in our case) as a ray can be shared by two opposite camera centers. They also add a loss that minimizes the KL-divergence between the normalized density profiles of two neighboring rays. DS-Nerf [8] instead improves the reconstruction quality by adding depth supervision. As they report, its performance is only as good as the estimates of depth obtained by COLMAP [44, 45]. Common to all of the above methods is that they require some sort of additional training (except InfoNerf [21]), while our method reconstructs objects without any additional pre-training. **Implicit representations for surface rendering.**[52] provides an overview of methods that use implicit representations for either volume or surface rendering. This family of approaches uses a neural SDF or an occupancy field as an implicit surface representation. DVR [37] and IDR [59] are pioneering SDF methods that use only images for training. They both provide a differentiable rendering formulation using implicit gradients. However, both methods require accurate object masks as well as appropriate weight initialization due to the difficulty of propagating gradients. IRON [65] proposes a method to estimate edge derivatives to ease the optimization of neural SDFs. Some of the methods combine implicit surface models with volumetric ones [39, 55, 60] and also implicit surface models with explicit ones [7, 31, 42]. One advantage of the methods that combine implicit surface models with volumetric ones is that they do not require mask supervision and are more stable. However, they heavily depend on a large number of training images. SparseNeuS [28] can work in the sparse view setting, but requires pre-training on a multi-view dataset of multiple scenes. Additionally, it is pretrained only for the narrow view setup, as opposed to the 360-degree one. 
## 3 Sparse 3D Reconstruction ### Problem Formulation Our goal is to reconstruct the 3D surface of the object depicted in \(N\) images \(I=\{I_{1},I_{2},\cdots,I_{N}\}\) given their corresponding calibrated camera views \(\Pi=\{\pi_{1},\pi_{2},\cdots,\pi_{N}\}\), where \(\pi_{i}\) denotes the 3D camera pose and intrinsic camera calibration parameters. We consider the _sparse setting_, _i.e_., when \(N\) is small (_e.g_., \(8-15\) views). We mostly use camera views distributed uniformly in a \(360^{\circ}\) rig (see Figure 1), but our method can also work for the narrow view setup (see the supplementary material for experiments with this setting). We pose the 3D reconstruction task as the problem of finding the 3D surface and texture such that the images \(I^{r}=\{I^{r}_{1},\ldots,I^{r}_{N}\}\) rendered with the given camera views \(\Pi\), best match the corresponding set of captured images \(I\). ### 3D Representation We describe the surface of the 3D object via a hybrid model that maintains both an implicit (density-based) and explicit (mesh-based) representation. The two representations serve different purposes. The explicit one is used to efficiently render views of the object and is directly obtained from the implicit representation. The advantage of the implicit representation is that it can smoothly transition through a general family of 3D shapes (_e.g_., from a sphere to a torus). This is especially useful when the 3D reconstruction is achieved through iterative gradient-based optimization algorithms. Such transition is typically much more difficult to achieve with a lone explicit representation. More specifically, the implicit representation is based on the Implicit Surface Neural Network (ISNN) \(F_{\text{shape}}:\mathbb{R}^{3}\rightarrow\mathbb{R}\), that outputs the object density value \(\sigma\doteq F_{\text{shape}}(X)\) at a 3D point \(X\in\mathbb{R}^{3}\). The explicit representation is obtained by converting the implicit representation in \(F_{\text{shape}}\) to a triangle mesh \(\mathcal{G}=(\mathcal{V},\mathcal{F})\) consisting of \(M\) vertex locations \(\mathcal{V}\) and a face list \(\mathcal{F}\). A triangle in \(\mathcal{F}\) is the triplet of indices of the vertices in \(\mathcal{V}\) that form that triangle. The conversion of \(F_{\text{shape}}\) to the explicit representation mesh is based on the selection of a finite set of 3D points, which we call samples and discuss in detail in the next section. ### Object-Centric Sampling In Figure 4, we show a 2D slice of the implicit representation of the cow 3D shape. The implicit representation will have a density \(\sigma\) that is close to \(1\) at the surface of the object and \(0\) elsewhere. In an iterative procedure, we can assume that we already have some existing mesh that is sufficiently close to the current surface of the implicit representation (recall that the implicit representation will be updated through the optimization procedure). To also update the mesh, we use its existing mesh vertices and normals to define segments that are approximately normal to the up dated implicit surface and to select samples on the segments in equal number on either side of the surface. More formally, for each vertex \(V_{i}\in\mathcal{V}\), \(i=1,\dots,M\), in the current _out-of-date_ mesh, we define a _sampling ray_\(R_{i}\), such that \(R_{i}\propto N_{i}\), where \(N_{i}\) is the surface normal at the vertex \(V_{i}\). 
Along the ray \(R_{i}\) we draw \(K\) 3D samples \(X_{i,1},\dots,X_{i,K}\) (in Figure 4 we show \(K=4\)). We define outward and inward point samples by drawing \(K\) equally spaced 3D points from the segments \([V_{i},V_{i}+t_{i}^{\text{out}}N_{i}]\) and \([V_{i}-t_{i}^{\text{in}}N_{i},V_{i}]\), where \(t_{i}^{\text{in}},t_{i}^{\text{out}}>0\). The range factors \(t_{i}^{\text{in}}\) and \(t_{i}^{\text{out}}\) are defined independently so that samples on either one of the two segments always stay either inside or outside the mesh, with the maximum possible range. This choice allows us to handle the reconstruction of thin structures of the mesh (e.g., the leg of a horse). For each 3D point we obtain the corresponding densities \(\sigma_{i,1},\cdots,\sigma_{i,K}\) from the ISNN via \(\sigma_{i,j}=F_{\text{shape}}(X_{i,j})\). We then compute normalized weights via the softmax function as \(w_{i,j}\propto\exp(\sigma_{i,j})\), such that \(\sum_{j=1}^{K}w_{i,j}=1\). Finally, we define the updated mesh vertex \(\hat{V}_{i}\) as the following weighted sum \[\hat{V}_{i}=\sum_{k=1}^{K}w_{i,k}X_{i,k}. \tag{1}\] See Figures 6 and 4 for an illustration of these steps. **Remark.** In view-centric methods, such as NeRF, the sampling rays are defined via the camera directions. The view-centric approach presents two drawbacks: first, the number of samples grows linearly with the number of cameras; second, when using view-centric (VC) sampling, the surface can only evolve within the subspace determined by the camera poses, resulting in elongated shapes (as observed in Figure 3). This limitation becomes particularly challenging in scenarios with sparse camera views. **Adding Texture.** Instead of obtaining color directly from the ISNN as in NeRF models [2, 30, 33, 49, 64], we introduce a separate model, the Texture Neural Network (TNN) \(F_{\text{texture}}:\mathbb{R}^{P}\mapsto\mathbb{R}^{3}\), where \(P\) is the size of the 3D position embedding. Given the updated 3D vertex location \(\hat{V}_{i}\), we compute its positional embedding \(\gamma(\hat{V}_{i})\), where \(\gamma(\cdot)\) denotes the positional encoding operator, and then obtain the color \(C_{i}\doteq F_{\text{texture}}(\gamma(\hat{V}_{i}))\). **Image Rendering.** The above procedure yields an updated triangle mesh \(\hat{\mathcal{G}}=(\hat{\mathcal{V}},\mathcal{F})\), where \(\hat{\mathcal{V}}=\{\hat{V}_{1},\dots,\hat{V}_{M}\}\), with corresponding vertex colors \(C=\{C_{1},\dots,C_{M}\}\). We render the image viewed by the \(j\)-th camera with calibration \(\pi_{j}\), \(j\in\{1,\dots,N\}\), by feeding \(\hat{\mathcal{G}}\) and the vertex colors \(C\) to a differentiable renderer [5, 17, 19, 50]. This yields the rendered image \(I_{j}^{r}=\texttt{Renderer}(\hat{\mathcal{G}},C,\pi_{j})\) (see Figure 5). **Reconstruction Loss.** \(F_{\text{shape}}\) and \(F_{\text{texture}}\) are parametrized as Multi-Layer Perceptron (MLP) networks (more details of their architectures are in Section 4.1). We train their parameters by minimizing the following loss on the images \(I\) \[L=L_{\text{images}}(I,I^{r})+L_{\text{perceptual}}(I,I^{r})+\lambda L_{\text{laplacian}}(\hat{\mathcal{G}}) \tag{2}\] where \(L_{\text{images}}(I,I^{r})=\sum_{i=1}^{N}|I_{i}-I_{i}^{r}|_{1}\) is the \(L_{1}\) loss between the rendered images and captured images. 
\(L_{\text{perceptual}}\) is the same loss as \(L_{\text{images}}\), but where instead of the \(L_{1}\) loss we use the perceptual loss [66] and \(L_{\text{Laplacian}}\) is the Laplacian loss of the mesh \(\hat{\mathcal{G}}\)[36], which we use to regularize the reconstructed 3D vertices \(\hat{\mathcal{V}}\) through the parameter \(\lambda>0\). We optimize the loss \(L\) using the AdamW optimizer [29]. ### Technical Details We employ several ideas to make the optimization robust and accurate. **Mesh Initialization.** We use a robust initialization procedure to obtain a first approximate surface mesh. We start from a predefined sphere mesh with radius \(\rho\). This sphere defines vertices and triangles of the mesh \(\mathcal{G}\). Then, for each vertex \(V_{i}\in\mathcal{V}\) we cast a ray \(R_{i}\) in the radial direction from Figure 4: Detailed model representation. We feed the object-centric points to ISNN and obtain a density value. Then, we update the vertex location via eq. (1) using the points sampled along the vertex normal. We repeat this operation for all vertices to get the updated mesh surface. Figure 5: We assign a color to each vertex of the mesh by querying the TNN model at that vertex. Then, we feed the textured mesh and a camera viewpoint as input to a differentiable renderer to synthesize a view of the scene. The reconstruction task is based on minimizing the difference between the synthesized view and a captured image (with the same viewpoint) in both \(L_{1}\) and perceptual norms. the origin of the sphere towards the vertex \(V_{i}\) and draw \(K\) equally spaced points \(X_{i,1},\cdots,X_{i,K}\) along the ray \(R_{i}\), such that \(|X_{i,K}|_{2}=\rho\) (see Figure 6). These 3D points are never changed throughout this initial model training. The mesh vertices are then computed as in eq. (1). Because each linear combination considers only samples \(X_{i,1},\cdots,X_{i,K}\) along a ray, the initial representation \(F_{\text{shape}}^{\text{init}}\) can only move the updated vertices radially. Although the reconstructed mesh can model only genus zero objects and only describe a radial structure, it gives us a very reliable initial mesh. **Surface Normals.** Surface normals are computed by averaging the normals of the faces within the second order neighbourhood around \(V_{i}\). **Re-meshing.** Every \(100\) iterations during the first \(2500\) iterations, and then every \(250\) iterations afterwards, we apply a re-meshing step that regularizes and removes self-intersections from the mesh \(\mathcal{G}\). This is a separate step that is not part of the optimization of the loss eq. (2) (_i.e_., there is no backpropagation through these operations). Through re-meshing, the mesh can have genus different from zero and its triangles are adjusted so that they have similar sizes. We use the implementation available in PyMesh - Geometry Processing Library [1]. For more details, see the supplementary material. Notice that the total time for the above calculations during training is almost negligible as we do not apply these operations at every iteration and they are highly optimized. **Inside/Outside 3D Points.** For the identification of which 3D samples are inside/outside the 3D surface we use the generalized _winding number_ technique, which is also available in the PyMesh - Geometry Processing Library [1]. **Texture Refinement.** During training, we observe that the TNN model does not learn to predict sharp textures. 
Therefore, we run a final phase during which the mesh is kept constant and we fine-tune the TNN separately. Following IDR [59], we feed the vertex location, the vertex normal and the camera viewing direction to the TNN so that it can describe a more general family of reflectances. These quantities are concatenated to \(\gamma(\hat{V}_{i})\) and then fed as input to \(F_{\text{texture}}\). As described earlier on, in this final phase, we optimize eq. (2) only with respect to the parameters of \(F_{\text{texture}}\). **Handling the Background.** So far, we have not discussed the presence of a background in the scene and have focused instead entirely on the surface of the object. Technically, unless a mask for each view is provided, there is no explicit distinction between the object and the background. Masks give a strong 3D cue about the reconstructed surface, so much so that they can do most of the heavy-lifting in the 3D reconstruction. Thus, to further demonstrate the strength of our sampling and optimization scheme, we introduce a way to avoid the use of user pre-defined segmentation masks. We extend our model with an approximate background mesh representation. For simplicity, we initialize the background on a fixed mesh (a cuboid) that is sufficiently separated from the volume of camera frustrums' intersections. When we reconstruct the background, we only optimize the texture assigned to each vertex of the background. Note that we add a separate TNN to estimate the texture of the background and the texture is view-independent at all stages. See also the supplementary material for further details. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Mask & CH-L2* \(\downarrow\) & CH-L1 \(\downarrow\) & Normal \(\uparrow\) & F@10 \(\uparrow\) \\ \hline NeRS & yes & 18.58 & 0.052 & 0.54 & 98.10 \\ RegNeRF & yes & 60.19 & 0.107 & 0.30 & 91.44 \\ Munkberg & yes & 13.32 & 0.047 & 0.56 & 98.65 \\ DS & yes & 13.21 & 0.042 & 0.71 & 98.58 \\ NeuS & no & 1217.00 & 0.495 & 0.37 & 32.40 \\ NeuS & yes & 13.85 & 0.049 & 0.70 & 98.79 \\ COLMAP* & yes & 34.35 & 0.049 & - & 99.11 \\ Our w/BCG & yes & **8.69** & 0.034 & **0.75** & **99.24** \\ Our w/BCG & no & 11.08 & 0.038 & 0.75 & 98.85 \\ \hline \hline \end{tabular} \end{table} Table 1: Reported 3D metrics on the GSO Dataset. *Note that we obtained COLMAP-reconstructed point clouds using 50 views. **(****All scores have been multiplied by \(10^{4}\)).** Notice the robustness of our method even when not using the mask constraints. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Mask & PSNR \(\uparrow\) & MSE \(\downarrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline NeRS & yes & 20.108 & 0.0185 & 0.874 & 0.126 \\ RegNeRF & yes & 20.217 & 0.013 & 0.882 & 0.143 \\ Munkberg et al. & yes & 26.838 & 0.002 & 0.955 & 0.067 \\ DS & yes & 24.649 & 0.004 & 0.944 & 0.081 \\ Our wo/BCG & yes & **29.029** & **0.001** & **0.967** & **0.028** \\ Our w/BCG & no & 27.370 & 0.002 & 0.964 & 0.038 \\ \hline \hline \end{tabular} \end{table} Table 2: GSO Dataset: Quantitative evaluation of generated views on the test set. Notice the robustness of our method even when not using the mask constraints. Figure 6: Mesh initialization. The points in the radial point cloud are fed to ISNN to obtain a density value. Then, for each ray we update the vertex location using eq. (1). By repeating this operation for all rays, we obtain the mesh surface. 
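To make the object-centric update of Section 3.3 concrete, the fragment below is a minimal PyTorch-style sketch of the per-vertex sampling and the weighted update of eq. (1). It is illustrative rather than the authors' implementation: the ISNN is abstracted as a generic callable `f_shape`, the per-vertex normals are assumed to be computed elsewhere, and a single fixed sampling range is used instead of the adaptive per-vertex ranges \(t_i^{\text{in}}, t_i^{\text{out}}\) described above.

```python
# Minimal sketch (not the released implementation) of object-centric sampling
# and the vertex update of eq. (1).  `vertices` and `normals` are (M, 3) tensors;
# `f_shape` maps a (P, 3) tensor of 3D points to a (P,) tensor of densities.
import torch

def object_centric_update(vertices, normals, f_shape, k=8, t_in=0.15, t_out=0.15):
    # K equally spaced offsets along each vertex normal, from the inside (-t_in)
    # to the outside (+t_out) of the current surface estimate.
    offsets = torch.linspace(-t_in, t_out, k, device=vertices.device)              # (K,)
    # Samples X_{i,j} = V_i + offset_j * N_i, shared by all camera views.
    samples = vertices[:, None, :] + offsets[None, :, None] * normals[:, None, :]  # (M, K, 3)
    # Densities sigma_{i,j} from the implicit surface network (ISNN).
    sigma = f_shape(samples.reshape(-1, 3)).reshape(vertices.shape[0], k)           # (M, K)
    # Softmax weights w_{i,j} along each ray, then the weighted sum of eq. (1).
    w = torch.softmax(sigma, dim=1)                                                  # (M, K)
    return (w[..., None] * samples).sum(dim=1)                                       # (M, 3) updated vertices
```

Because the update is a differentiable function of the densities, gradients from the rendering loss of eq. (2) flow back to \(F_{\text{shape}}\) through the softmax weights.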
## 4 Experiments In this section, we present implementation details and results obtained on the standard datasets. For more comprehensive ablation studies as well as for more visual results, we refer to the supplementary material. ### Implementation Details We parameterize \(F_{\text{shape}}^{\text{init}}\), \(F_{\text{shape}}\) and \(F_{\text{texture}}\) as MLPs with 5, 5, and 3 layers respectively and a hidden dimension of 256 for all. The initialization mesh uses 2500 vertices, while the mesh for the object reconstruction uses a maximum of 10K vertices. For the background mesh we use a vertex resolution of 10K. When training the texture network TNN in the final step, we upsample the mesh resolution to 250K vertices. The scale \(t_{in}\) and \(t_{out}\) for the detailed model reconstruction are both set to \(0.15\). The number of samples along the rays for \(F_{\text{shape}}^{\text{init}}\) and \(F_{\text{shape}}\) is \(16\) and \(8\) respectively. The learning rate for the training of the initialization, shape, and texture models is \(10^{-5}\), \(5\times 10^{-5}\), and \(10^{-3}\). The Laplacian regularization may change across datasets. For objects with non-Lambertian surfaces,, specular surfaces, we use a higher Laplacian regularization, as in this case the \(F_{\text{shape}}\) network can overfit and generate spiky surfaces due to the lack of multiview consistency across the views. We will release the code of all components of our work to facilitate further research and allow others to reproduce our results. ### Datasets **Google's Scanned Objects (GSO) [9].** We test our algorithm on 14 different objects. For training, we use 8 views, and for validation 100 views. Camera poses are uniformly spread out around the object where the elevation angle is uniformly sampled in \([0^{\circ},15^{\circ}]\). The background image is generated by warping a panorama image onto a sphere. **MVMC Car dataset [63].** We run our algorithm on 5 different cars from the MVMC dataset. We use the optimized camera poses provided in the dataset. Although they are optimized, we find that some of them are not correct. Thus, we eliminated those views for both training and testing. We follow a leave-one-out cross-validation setup, where we leave one view for validation and the rest is used for training. We repeat this 5 times for each car. Note that this dataset is more challenging than the GSO Dataset [9] as the camera locations are not spread out uniformly around the object. Most of the views are placed mainly on two opposite sides of the cars. Furthermore, the surface of cars is not-Lambertian, and there are many light sources present in the scene too. **Tank and Temple dataset [22].** We evaluate our method on images from 2 objects, _Truck_ and _Ignatius_. We use 15 images for training and the rest as the test set. We obtain the image masks of each object by rendering its corresponding laser-scanned ground-truth 3D point cloud. The camera poses are computed via COLMAP's SfM pipeline [44]. ### Evaluation **Metrics.** For the datasets with a 3D ground truth, we compare the reconstructed meshes with the ground-truth meshes or point clouds. More specifically, we report the L2-Chamfer and L1-Chamfer distances, normal consistency, and F1 score, following [13]. We also report texture metrics to evaluate the quality of the texture on unseen views. 
More specifically, we employ Mean-Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) [66]. **Baselines.** We compare our method with the following methods: (1) RegNerf [38], a volume rendering method, (2) Munkberg et al. [35], a hybrid-based method, (3) DS [15], a mesh-based method, (4) COLMAP [44, 45], a multi-view stereo method, (5) NeUS [55], neural surface reconstruction method, and (6) NeRS [63] a neural reflectance surface method. Further details about these baseline methods can be found in the supplementary material. ### Results We run our algorithm under two settings: with background (_w/BCG_) and by removing the background (_wo/BCG_), the \begin{table} \begin{tabular}{l c c c c c} \hline \hline Scene & Method & Mask & PSNR \(\uparrow\) & MSE \(\downarrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline \multirow{4}{*}{Truck} & RegNeRF & yes & 18.078 & 0.018 & 0.657 & 0.254 \\ & Munkenberg et al. & yes & **18.398** & 0.018 & 0.673 & **0.245** \\ & Our w/BCG & yes & 18.315 & **0.017** & **0.701** & 0.252 \\ & Our w/BCG & no & 13.168 & 0.0508 & 0.666 & 0.343 \\ \hline \multirow{4}{*}{Ignatius} & RegNeRF & yes & 21.123 & 0.105 & 0.866 & 0.108 \\ & Munkenberg et al. & yes & 22.720 & **0.007** & 0.873 & **0.073** \\ \cline{1-1} & Our w/BCG & yes & **23.022** & **0.007** & **0.896** & 0.080 \\ \cline{1-1} & Our w/BCG & no & 17.845 & 0.021 & 0.875 & 0.153 \\ \hline \hline \end{tabular} \end{table} Table 4: Tank and Temples Dataset: Quantitative evaluation of generated views on the test set. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Mask & PSNR \(\uparrow\) & MSE \(\downarrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline NeRS & yes & 18.381 & 0.015 & 0.852 & 0.080 \\ RegNeRF & yes & 15.776 & 0.028 & 0.751 & 0.259 \\ Munkberg et al. & yes & 15.145 & 0.031 & 0.761 & 0.239 \\ DS & yes & 18.608 & 0.015 & 0.835 & 0.139 \\ Our w/BCG* & no & 18.301 & 0.015 & 0.855 & 0.132 \\ Our wo/BCG* & yes & 20.030 & 0.010 & 0.867 & 0.095 \\ Our w/BCG & no & 18.450 & 0.014 & 0.865 & 0.142 \\ Our wo/BCG & yes & **21.563** & **0.007** & **0.883** & **0.091** \\ \hline \hline \end{tabular} \end{table} Table 3: MVMC Car Dataset: Quantitative evaluation of generated views on the test set. Our wo/BCG* and Our w/BCG are trained with the same pre-processing as in NeRS. latter of which is akin to using a mask. We present both qualitative (Figure 7 and Figure 9) and quantitative (Table 1 and Table 2) result on the GSO dataset. We observe that our proposed method for both _w/BCG_ and _wo/BCG_ is able to recover the original shape with high accuracy. DS, Munkberg et al. [35] and NeuS (with mask supervision) show the closest performance to ours. We observe that NeuS without mask supervision struggles to accurately reconstruct the original shape for seven out of the fourteen objects. NeRS is also able to recover the shape, but cannot recover genus 1 objects. RegNeRF shows blur artifacts as a result of the inherent ambiguity of sparse input data and also may miss some parts of the original object, _e.g._, the leg of the cow object. RegNeRF does not always recover the thin parts of the object, _e.g._, legs of the horse, and thus the reconstructed geometry is not fully accurate. When we run COLMAP with 8 views we find that for most objects the reconstructed point cloud is mostly empty and the object is not recognizable (see also the supplementary material for visual results). 
This is not surprising since we have only 8 views covering 360 degrees of the object. Furthermore, in this setting, any surface is visible from at most 3 views and the objects do not have a rich texture. For this reason, we run COLMAP with 50 views and present the results in all tables just for reference.

Figure 8: Qualitative Results on the MVMC Car dataset. NeRS*: the original NeRS implementation crops the images before training and thus changes their aspect ratio during training. Thus, the rendered images have an aspect ratio of 1, while the original ones do not. For more information, see Section 4.3. Note that we remove the background rendering in the w/BCG* column to simplify the visual comparisons.

Figure 7: Qualitative Results on the GSO Dataset. Note that in our w/BCG* column we remove the background rendering in the w/BCG column to simplify the visual comparisons.

In Figure 8 and Table 3 we present qualitative and quantitative results for car objects in the MVMC Car dataset. We qualitatively observe that our method wo/BCG shows better view renderings than the other methods. Our method fails to recover the texture around transparent surfaces, e.g., the car window. The model in the w/BCG case is able to recover the main shape of the car, but it misses some parts, e.g., the tires of the car that are attached to the ground due to overlaps with the background mesh. Additionally, the tires contain poor texture (they are mostly black), so this can be easily captured by the background texture network and thus introduce an ambiguity in the reconstruction. We note that the performance of NeRS is consistent across different cars. This is not surprising, as it uses the mask and its initial template is also car-like. We do not run COLMAP on this dataset as the dataset has a limited number of views. DS has weaker performance on this dataset compared to the GSO. The main reason for the poor texture quality is that the texture is obtained by 3D back-projections of the mesh to the input views. Thus, incorrect geometry leads to poor texture quality. We observe that the reconstructions from RegNeRF and Munkberg et al. [35] lack realism. There are two main reasons for this. First, the camera locations are not uniformly spread out around the object. Most of them are located on two sides of the cars. In this case, the methods struggle to recover the original shape. Second, because of the many light sources present in the scene and the non-Lambertian surfaces, multi-view consistency across the views is not satisfied. As can be observed, our method is more robust to the above issues. The main reason is that during the training of the ISNN, the output color of the TNN does not depend on the camera view and is thus less prone to overfitting. In Table 5 and Table 4 we present quantitative results for two objects in the Tank and Temple dataset. Note that the performance of all methods decreases drastically, especially in the recovered 3D shape, compared to the GSO dataset. This is expected, as the ground-truth point clouds are hollow (without the bottom), and the reported numbers only approximate the quality of the shape. Our _wo/BCG_ has a higher Chamfer distance compared to the others although it looks visually better. This is because the corresponding ground-truth shape does not only include the target object, but also some other components from the background, as, e.g., in the _Truck_ scene. For visual results and more details, see the supplementary material. 
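For completeness, the 3D metrics reported in Table 1 and Table 5 can be computed from a predicted and a ground-truth point set as in the short sketch below. This is an illustrative implementation following common practice rather than the exact evaluation protocol of [13]; in particular, the distance threshold `tau` used for the F-score and any normalization of the point clouds are assumptions that must match the protocol in order to reproduce the reported numbers.

```python
# Illustrative sketch (not the official evaluation script) of the point-cloud
# metrics in the tables: symmetric Chamfer-L1 / Chamfer-L2 distances and the
# F-score at a distance threshold `tau`.  `pred` and `gt` are (P, 3) and (Q, 3)
# numpy arrays sampled from the reconstructed and ground-truth surfaces.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred, gt, tau=0.01):
    d_pred_to_gt, _ = cKDTree(gt).query(pred)    # nearest-GT distance for each predicted point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)    # nearest-prediction distance for each GT point
    chamfer_l1 = d_pred_to_gt.mean() + d_gt_to_pred.mean()
    chamfer_l2 = (d_pred_to_gt ** 2).mean() + (d_gt_to_pred ** 2).mean()
    precision = (d_pred_to_gt < tau).mean()      # accuracy of the reconstruction
    recall = (d_gt_to_pred < tau).mean()         # completeness of the reconstruction
    f_score = 2 * precision * recall / max(precision + recall, 1e-8)
    return chamfer_l1, chamfer_l2, f_score
```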
## 5 Conclusion We have introduced a novel multi-view stereo method that works with sparse views from a 360 rig. The method can handle this extreme setting by using a novel object-centric sampling scheme and a corresponding hybrid surface representation. The sampling scheme allows to concentrate the updates due to multiple camera views to the same components of the surface representation and to structure the updates so that they result in useful surface changes (along its normals, rather than its tangent space). We have demonstrated the robustness of this method by working without the common mask supervision constraint, by using datasets with diverse 3D objects (GSO Dataset), on scenes with complex illumination sources and with non-Lambertian surfaces (MVMC Car). \begin{table} \begin{tabular}{c c c c c} \hline \hline Scene & Method & Mask & Chamfer- & Chamfer- & F@10 \\ & & & L2 \(\downarrow\) & L1 \(\downarrow\) & \(\uparrow\) \\ \hline \multirow{4}{*}{Truck} & RegNeRF-clean & yes & 0.059 & 0.342 & 42.56 \\ & Munkberg et al. & yes & 0.072 & 0.355 & 50.11 \\ & DS & yes & 0.110 & 0.479 & 31.73 \\ & COLMAP* & yes & **0.056** & **0.298** & **57.74** \\ & NeuS & no & 3.342 & 2.417 & 6.50 \\ & NeuS & yes & 0.629 & 1.253 & 11.93 \\ & Our w/BCG & yes & 0.094 & 0.406 & 48.14 \\ & Our w/BCG & no & 0.225 & 0.613 & 45.78 \\ \hline \multirow{4}{*}{Ignatius} & RegNeRF-clean & yes & 0.106 & 0.423 & 43.45 \\ & Munkberg et al. & yes & 0.022 & 0.189 & 81.75 \\ & DS & yes & 0.024 & 0.207 & 77.68 \\ & COLMAP* & yes & 0.013 & 0.166 & 86.90 \\ & NeuS & no & 0.155 & 0.572 & 31.95 \\ & NeuS & yes & 0.061 & 0.3878 & 37.72 \\ & Our wo/BCG & yes & **0.018** & **0.147** & **87.12** \\ & Our w/BCG & no & 0.139 & 0.480 & 55.32 \\ \hline \hline \end{tabular} \end{table} Table 5: Tank and Temples Dataset: 3D metrics. *Note that we obtained the COLMAP*-reconstructed point clouds using 50 views. Figure 9: Reconstructed meshes for the _COW_ object in the GSO Dataset.
2307.02862
A Critical Look at the Current Usage of Foundation Model for Dense Recognition Task
In recent years, large models trained on huge amounts of cross-modality data, usually termed foundation models, have achieved conspicuous accomplishments in many fields, such as image recognition and generation. Despite this success in their original application settings, it is still unclear whether those foundation models can be applied to other downstream tasks. In this paper, we conduct a short survey of current methods for discriminative dense recognition tasks that are built on pretrained foundation models. We also provide a preliminary experimental analysis of an existing open-vocabulary segmentation method based on Stable Diffusion, which indicates that the current way of deploying diffusion models for segmentation is not optimal. This aims to provide insights for future research on adopting foundation models for downstream tasks.
Shiqi Yang, Atsushi Hashimoto, Yoshitaka Ushiku
2023-07-06T08:57:53Z
http://arxiv.org/abs/2307.02862v2
# A Critical Look at the Current Usage of Foundation Model for Dense Recognition Task ###### Abstract In recent years large model trained on huge amount of cross-modality data, which is usually be termed as foundation model, achieves conspicuous accomplishment in many fields, such as image recognition and generation. Though achieving great success in their original application case, it is still unclear whether those foundation models can be applied to other different downstream tasks. In this paper, we conduct a short survey on the current methods for discriminative dense recognition tasks, which are built on the pretrained foundation model. And we also provide some preliminary experimental analysis of an existing open-vocabulary segmentation method based on Stable Diffusion, which indicates the current way of deploying diffusion model for segmentation is not optimal. This aims to provide insights for future research on adopting foundation model for downstream task. ## 1 Introduction In the last decades, deep model trained with large amount of labeled data succeeds to be top-rank in almost all computer vision tasks. Besides the achievements in the supervised learning tasks, other research lines improve the generalization and universality ability, such as self-supervised learning [3, 9, 11] which empowers the model with strong representation feature learning capacity with only unlabeled data, open-set or open-world learning which endows the model with the ability to either reject [22, 33, 38] or distinguish [2, 15, 32, 26] novel categories, and domain generalization [10, 18, 27] or domain adaptation [20, 30, 36] which improves model's generalization to test data of different distributions, to name a few. More recently, the training of models with abundant cross modality data is becoming more popular. For example, CLIP [25] is a visual-language model trained with huge amount of image and text pairing data, via a contrastive learning objective. Due to the learned image-language pairing representation, with the provided text prompts during inference time, the model excels at zero-shot recognition. SAM [16] is a general category-agnostic segmentation/localization solution which supports several types of prompts, it is capable of segmenting whole objects or object parts of any shape. ImageBind [8] learns a joint embedding space across six different modalities, with visual space as the intermedia embedding space, and it is a strong pipeline for cross-modality recognition tasks. Besides large model for discriminative tasks, diffusion based1 image generation is another emerging hot research topic. Stable Diffusion [28] is one of the most popular methods in both academic and non-academic communities. The pretrained Stable Diffusion could be easily adapted to the personalized data, for both image generation or editing, by fine-tuning part of the model [13, 29] or conducting some processing in the fixed model [7, 12]. Originally designed for text-to-image generation task, it can be easily extended to other conditional image generation task [39], such as depth-to-image and sketch-to-image generation/translation. Footnote 1: In this report, we regard (text-to-image) diffusion model also as a kind of foundation model. With the popularity of those foundation models, a natural question arises: can those pretrained models, which are originally for image recognition or generation, be applied to other downstream tasks? 
As these models are trained with huge amounts of data and possess strong zero-shot recognition ability or good feature representation, the learned knowledge is expected to also facilitate other downstream tasks. This provides the possibility of using a unified model for different tasks, which could have high practical value in real-world applications. In this paper, we conduct a short survey on utilizing pretrained foundation model for downstream tasks. We mainly focus on the segmentation task, since segmentation information is also useful for other tasks such as detection and localization. ## 2 Utilizing Foundation Model for Downstream Task In the first part, we will focus on the typical discriminative foundation model for downstream task. In the second part, we will exploit some current methods utilizing Stable Diffusion for downstream discriminative task. ### Visual-Language Model Large vision-language model, such as CLIP [25] and ALIGN [14], are trained with image-language pairs via contrastive learning, due to its strong zero-shot image recognition performance, there is a new research line dubbed as open-vocabulary objection/segmentation, aiming to introduce the open category recognition ability into the objection or segmentation tasks. Early works on open-vocabulary segmentation such as LSeg [1] directly transform the vision-language model classification model to segmentation pipeline. More specifically LSeg directly predicts the category of the pixel embedding with the text embedding, without introducing any extra mask generator module. MaskCLIP [41] first shows that the value (\(V\)) embedding output by the CLIP visual part could be used as mask proposal for segmentation, together with the text embedding as the classifier weight the CLIP pipeline could directly output segmentation mask, then it further introduces Mask2Former [4] to improve the results, which is trained in a self training manner with the predicted segmentation masks. The recent works [19, 24] follow the similar pipeline, which typically has two parts: the first part is transformer based mask proposal network and the second part is the CLIP which is to provide open-vocabulary prediction. There are also a few methods elegantly unifying these two parts, for example, ZegCLIP [42] and SAN [35] directly adopt CLIP as the main backbone (feature extractor part) and add a lightweight mask generator which takes input feature from CLIP. Since the pipeline with Mask2Former usually takes longer training time, the methods including MaskCLIP (the one without extra mask generator) have fewer parameters and also could achieve better performance, which could be a baseline for future research. ### Text-to-image Diffusion Model Diffusion models are another research hotspot in recent years. The most successful application is text-to-image generation, by fine-tuning or directly utilizing the pretrain diffusion model, where Stable Diffusion [28] is one of the most popular deployed diffusion model. Since the text-to-image generation model2 such as Stable Diffusion is trained with large amount of image-text pairs just like CLIP, a natural question is that whether those cross-modality generative models could be applied to discriminative task? As some pioneer works [12] show that the features inside the diffusion model already have rich semantic and localization information, the pretrained diffusion model has potential to be extended to other discriminative tasks. 
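To make the MaskCLIP-style pipeline sketched in Section 2.1 more concrete, the core classification step can be written as below. The sketch assumes that per-patch visual embeddings have already been extracted from CLIP's image encoder and projected into the joint image-text space, and that per-class text embeddings have been computed with prompts such as "a photo of a {class}"; the extraction details depend on the specific CLIP implementation, and this is not the MaskCLIP authors' code.

```python
# Sketch (assumptions noted above): per-patch CLIP visual embeddings and
# per-class CLIP text embeddings in the same joint space.  The text embeddings
# act as a fixed linear classifier over patches, yielding a coarse
# open-vocabulary segmentation mask.
import torch
import torch.nn.functional as F

def patch_level_segmentation(patch_feats, text_feats, h, w):
    """patch_feats: (h*w, D) visual embeddings; text_feats: (C, D) class text embeddings.
    Returns an (h, w) map of predicted class indices."""
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = patch_feats @ text_feats.t()          # (h*w, C) cosine similarities
    labels = logits.argmax(dim=-1)                 # per-patch class assignment
    return labels.reshape(h, w)                    # upsample to image resolution if needed
```

Such per-patch predictions are coarse, which is one reason later works pair them with a dedicated mask proposal network such as Mask2Former.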
Footnote 2: If not specified, the (text-to-image) diffusion model refers to Stable Diffusion in the subsequent sections. There are already a few works trying to utilize the text-to-image diffusion model for downstream tasks. Some methods [5, 17] transform the text-to-image diffusion model to a zero-shot classification model which is competitive to CLIP, by obtaining the posterior classification scores based on the predicted noise during the denoising process. And other methods like OIDSE [34] and VPN [40] utilize the UNet features in the diffusion model for downstream tasks such as segmentation and depth estimation. In the following texts we focus on the segmentation task. Figure 1: Grad-CAM visualization of pretrained CLIP visual encoder. Only four classes (used for text prompt) are considered: car, building, windows and trees/palms. Figure 2: Adopting binary threshold on the Grad-CAM. In ODISE and VPN, the diffusion model is only to provide features, which will be the input to the subsequent mask generator network such as Mask2Former [4] or LAVT [37], both methods only adopt one time step for the diffusion model, and VPN does not add the noise to the latent vector while ODISE does. In ODISE, an extra learnable module called implicit captioner is proposed to provide the textual embedding to UNet. VPN also utilizes a similar module denoted as text adapter, as well as cross attention maps to be combined with multi-level UNet features. Although these methods achieve good performance in the downstream tasks, we question the efficiency of this naive way of directly using UNet features with one time step. And actually, the ablation study in VPN already shows that there is limited improvement by using the extra text adapter and cross attention features, which indicates that this naive way is not totally efficient to fully exploit the diffusion model for segmentation. This conclusion also holds for the implicit captioner in ODISE. ## 3 Experimental Analysis In this section, we will first show that the pretrained visual-language model, more specifically the CLIP, has the potential to be directly extended to other downstream tasks. Then, we will show the current methods using text-to-image diffusion model are not efficient with the naive way of deploying pretrained diffusion model as the feature extractor. ### Visual-Language Model We choose the widely used visual-language model CLIP for analysis. We visualize the CLIP visual features under the weakly supervised segmentation task [21], where every image is provided with its ground-truth class labels. We adopt the Grad-CAM [31] for visualization3. For text prompt input to the CLIP language part, we only use 4 classes here: trees/palms, car, building and windows. The format of text prompt is "a photo of classname". The visualization is shown in Fig. 1, it indicates that directly using CLIP features is enough to achieve good localization or segmentation, and also the prompt engineering, _i.e._, the choice of text prompt, is also important to achieve better results. Figure 4: K-means clustering visualization on UNet features of real image. The image is directly fed into the diffusion model with adding noise once. We use the average feature across the last 6 blocks for visualization. Figure 3: Visualization of cross attention (for token ’horse’ with the text prompt ’a horse on the grass’). The upper part is the cross attention in the last timestep during generation stage. 
Then just like ODISE, we sent the generated image back to diffusion model with adding one noise, the visualization is shown in the lower part. In each case, the first row shows the attention map from different layer in the output block and the second row shows the average of those attention maps of different sizes. In Fig. 2, we further show that simply adopting the binary threshold on the Grad-CAM could lead to refined segmentation. Those findings that CLIP visual features already have localization and semantic information show the potential of the extensibility to other discriminative task. Fully investigating such localization ability of CLIP for segmentation or other tasks is still not widely studied yet in the community. ### Text-to-image Diffusion Model Here we do a detailed analysis for ODISE, which is an open-vocabulary segmentation method based on Stable Diffusion. In ODISE, the image will be fed into the diffusion model with adding noise once, and the features from encoder-decoder in the VQGAN along with the features from the UNet will be used for the subsequent Mask2Former for mask proposal. Unlike the original diffusion model achieving image generation through the denoising of multiple time steps, the UNet feature (with one time step) from the ODISE may have poor quality regarding the semantic and localization information, as a recent method [23] hypothesizes that the denoising process is a coarse-to-fine synthesis with multiple time steps. To verify it, we visualize the cross attention in different scenarios as shown in Fig. 3. In Fig. 3, we first deploy the Stable Diffusion for the normal text-to-image generation with the text prompt 'a horse on the grass', we visualize the cross attention corresponding to the token 'horse' in the last time step, we find these attentions are basically accurate localizing the object. Then we send the generated image back to the diffusion model with adding noise once just like ODISE does, and we also visualize the cross attention of token 'horse'. It turns out that the resulting attention maps become blurry and less accurate for localization, compared to the ones during the generation process. This attention degradation phenomenon may be even more severe if using real image as in the ODISE. Since the UNet features used by ODISE, which will be used by Mask2Former for mask proposal, are directly related to cross attention, the attention degradation may deteriorate the segmentation performance. We also directly visualize the UNet features by k-means clustering in Fig. 4, it shows in some case the UNet feature indeed has poor semantic and localization information, as shown in Fig. 4 (_right_). The finding indicates the necessity of denoising process to get high quality features containing better semantic and localization information. We also conduct ablation study of ODISE. In ODISE, there are a diffusion model (with adding noise once) and an implicit caption module, the output of which will be utilized as textual/conditional embedding and will be combined with null text embedding via summation. The features from encoder-decoder in VQGAN inside the diffusion model, as well as the features from UNet will be sent to Mask2Former for mask proposal. In Tab. 1, we ablate several modules in ODISE, and it turns out that directly using UNet features with null text embedding (_ODISE w/o IC_) already achieves decent performance, and the performance gain from implicit captioner is relatively limited. 
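As a supplement to the feature analysis in Section 3.2, the k-means visualization of Fig. 4 can be reproduced with a routine such as the one below. The sketch assumes that a UNet feature map has already been extracted and, if needed, upsampled (e.g., the average over the last blocks, as used for the figure); it is not the exact script used in this report.

```python
# Sketch of the k-means clustering visualization of UNet features (cf. Fig. 4).
# `features` is a (C, H, W) array of decoder features extracted beforehand;
# the number of clusters is arbitrary and chosen for visualization only.
import numpy as np
from sklearn.cluster import KMeans

def cluster_feature_map(features, n_clusters=5, seed=0):
    c, h, w = features.shape
    flat = features.reshape(c, h * w).T              # (H*W, C): one feature vector per location
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(flat)
    return labels.reshape(h, w)                       # per-pixel cluster index to colorize
```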
Note that not using implicit captioner means only having null text embedding in UNet (unconditional embedding), which is not the right usage way of text-to-image diffusion with conditional and unconditional embeddings, it has not explored the language related information in Stable Diffusion. And in Tab. 1, ODISE, which utilizes CLIP, diffusion model and Mask2Former, is still inferior to SAN, which only uses CLIP and a lightweight decoder network. This indicates the current way using diffusion model in ODISE is relatively naive, and has further space to be improved. ## 4 Conclusion In this paper, we investigate some recent works on using foundation models for downstream tasks. Features from both discriminative model CLIP and generative model Stable Diffusion, which are trained with large amount of cross-modality data pairs, already contain semantic and localization information, and could be deployed for other discriminative tasks. Although achieving great performance, the current way using diffusion model for downstream tasks is not efficient. We hope this report could provide some insights for the future research. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Method** & **Architecture** & **PAS-21** & **A-847** & **PC-459** & **A-150** & **PC-59** & **COCO** \\ \hline MaskCLIP [6] & CLIP + Mask2Former & - & 8.2 & 10.0 & 23.7 & 45.9 & - \\ ODISE [34] & CLIP + SD + Mask2Former & 84.4 & 11.1 & 14.5 & 29.9 & 57.3 & 65.2 \\ \hline ODISE w/o _IC_ & CLIP + SD + Mask2Former & 82.4 & 10.3 & 12.4 & 28.3 & 54.3 & 61.6 \\ ODISE w/o _(IC \& U_Net_) & CLIP + SD + Mask2Former & 76.6 & 10.1 & 12.4 & 27.2 & 51.7 & 56.3 \\ ODISE * w/o _(IC \& VQGAN)_ & CLIP + SD + Mask2Former & 80.1 & 10.2 & 13.2 & 28.6 & 52.5 & 61.2 \\ \hline SAN [35] & CLIP + light decoder & 94.6 & 12.4 & 15.7 & 32.1 & 57.7 & - \\ \hline \hline \end{tabular} \end{table} Table 1: Detailed ablation study on ODISE, where the model is trained with train set from COCO and the evaluated on other datasets. ’w/o _IC_’ denotes not using implicit captioner and instead only adopting null text embedding, _UNet_ denotes using features from UNet, and * denotes without adding noise to the output of encoder. We directly use the official ODISE code, due to the limited computation resource, for the three ablation studies we only train 66999, 81999 and 48999 iterations out of the whole 92188 iterations, with 16 A100 GPUs. ## 5 Acknowledgement We thank the GPUs support from Stability AI.
2305.14879
ByteSized32: A Corpus and Challenge Task for Generating Task-Specific World Models Expressed as Text Games
In this work, we investigate the capacity of language models to generate explicit, interpretable, and interactive world models of scientific and common-sense reasoning tasks. We operationalize this as a task of generating text games, expressed as hundreds of lines of Python code. To facilitate this task, we introduce ByteSized32 (Code: github.com/cognitiveailab/BYTESIZED32), a corpus of 32 reasoning-focused text games totaling 20k lines of Python code. We empirically demonstrate that GPT-4 can use these games as templates for single-shot in-context learning, successfully producing runnable games on unseen topics in 28% of cases. When allowed to self-reflect on program errors, game runnability substantially increases to 57%. While evaluating simulation fidelity is labor-intensive, we introduce a suite of automated metrics to assess game fidelity, technical validity, adherence to task specifications, and winnability, showing a high degree of agreement with expert human ratings. We pose this as a challenge task to spur further development at the juncture of world modeling and code generation.
Ruoyao Wang, Graham Todd, Eric Yuan, Ziang Xiao, Marc-Alexandre Côté, Peter Jansen
2023-05-24T08:31:30Z
http://arxiv.org/abs/2305.14879v2
# ByteSized32: A Corpus and Challenge Task for Generating ###### Abstract In this work we examine the ability of language models to generate explicit world models of scientific and common-sense reasoning tasks by framing this as a problem of generating text-based games. To support this, we introduce ByteSized32, a corpus of 32 highly-templated text games written in Python totaling 24x lines of code, each centered around a particular task, and paired with a set of 16 unseen text game specifications for evaluation. We propose a suite of automatic and manual metrics for assessing simulation validity, compliance with task specifications, playability, winnability, and alignment with the physical world. In a single-shot evaluation of GPT-4 on this simulation-as-code-generation task, we find it capable of producing runnable games in 27% of cases, highlighting the difficulty of this challenge task. We discuss areas of future improvement, including GPT-4's apparent capacity to perform well at simulating near canonical task solutions, with performance dropping off as simulations include distractors or deviate from canonical solutions in the action space. 1 Footnote 1: ByteSized32 available at: [https://github.com/cognitiveailab/BTYIESIZED32](https://github.com/cognitiveailab/BTYIESIZED32) ## 1 Introduction In a relatively short span of time, the capabilities of large language models (LLMs) have exploded. While trained on natural language to predict subsequent tokens in context, LLMs have nonetheless been deployed in a variety of additional contexts - as conversational partners (Bang et al., 2023; Bahrini et al., 2023), code assistants (Bird et al., 2023; Xia and Zhang, 2023), and even artificial agents (Huang et al., 2022; Ahn et al., 2022). In this work, we investigate the ability for large language models to operate on a novel task: producing small but explicit world models, in the form of simulations of performing various scientific and common-sense reasoning tasks. We frame the problem of simulation as generating text-based games, in which agents receive environmental observations and perform actions entirely through natural language. Text games can encode aspects of reality without the need for graphical rendering pipelines, and are a challenging domain for artificial agents (Hausknecht et al., Figure 1: An overview of our text game generation and evaluation process. The model, here GPT-4, generates a game by using a prompt consisting of (1) a 1-shot game example, and (2) the task specification for the game to generate. The generated game is then evaluated by an automated evaluation harness, as well as manually by human evaluators, to measure its validity, compliance, playability, winnability, and physical world alignment. 2020), as well as a fruitful method for interrogating agents' understanding of scientific reasoning concepts Wang et al. (2022). Text-based games thus offer the ability to convert a model's latent general knowledge absorbed during the training process into actionable and falsifiable hypotheses about the world. By formulating this as a code generation task, where models must generate hundreds of lines of Python code to create a simulation, we provide a vehicle for examining a language model's internal world model. We introduce ByteSized32, a dataset of text-based games written in Python, each testing a different common-sense or scientific reasoning concept, in order to facilitate the development of automatic simulation-generation systems through few-shot learning. 
The dataset uses a consistent code architecture and provides generative models with a robust template that can be adapted to a wide range of simulations. In addition, we propose a corresponding challenge task in using the ByteSized32 corpus to generate novel text game simulations using a corpus of unseen text game specifications used for evaluation. We perform a preliminary analysis of the performance of GPT-4 on this task, shown in Figure 1, using a range of proposed automatic and manual metrics that measure the model's ability to generate text-based games that: (1) are syntactically correct, (2) contain core aspects of the simulation enumerated in a specification prompt, (3) are playable and winnable, and (4) align with known facts about physical reality. Our results indicate that GPT-4 is broadly capable of generating novel text-based games that match the ByteSized32 template and contain specified objects and actions. However, it often fails to produce simulations that are robust to all possible user actions and that fully align with reality. The contributions of this work are: 1. We introduce ByteSized32, a corpus and challenge task for generating world model simulations framed as long-form code generation of text games. 2. We provide a corpus of 32 highly-templated text games centered around specific scientific and common sense reasoning tasks. The corpus totals 24K lines of Python code, with individual games designed to fit within the 8k context window of popular and capable LLMs. 3. We empirically demonstrate that, in an _single-shot_ setting, a baseline GPT-4 model is capable of producing running simulations 27% of the time, but simulations are physically correct in only 2% of cases, highlighting the difficulty of this task. 4. We further propose automatic evaluation metrics that break down performance into finer-grained components, such as the ability to initialize the world, produce valid actions, or generate scores that accurately model task performance. We pair these with human evaluation metrics that measure whether generated games are playable, winnable, and accurately models the physical world. ## 2 Related Work **Text Games and Virtual Environments:** Interactive text environments are an attractive choice for studying embodied agents, owing to their relative simplicity compared to full 3D simulations and ability to model complex and abstract tasks Jansen (2021). While early text game research focused on testing agents on a small set of extant "interactive fiction" games like _Zork_, recent approaches have leaned towards procedurally generating a wider set of simple text-based games in order to evaluate agents' ability to generalize Cote et al. (2018); Urbanek et al. (2019); Shridhar et al. (2020); Wang et al. (2022). These frameworks typically rely on hand-crafted rules and templates programmatically arranged in novel configurations, though some efforts leverage external data sources Barros et al. (2016) and generative language models Fan et al. (2019) as well. In contrast, the ByteSized32 challenge task requires models to produce a novel text game as a complete program, expressed as Python code, using only a single existing game for reference. **Code Generation:** As large language models have become more capable, interest in their ability to generate working snippets of program code has only grown. Several recent datasets have been proposed to facilitate this research, covering a wide range of programming languages and problem types Yu et al. (2018); Lin et al. 
(2018); Austin et al. (2021); Chen et al. (2021). Contemporaneously, improvements in model architecture and training have led to impressive gains in code generation Chen et al. (2021); Nijkamp et al. (2022); Li et al. (2022); Fried et al. (2023). The GPT-4 language model (OpenAI, 2023), in particular, has sparked an interest in the use of prompting for code generation tasks, a technique which has led to advancements in self-debugging Chen et al. (2023) and problem decomposition Pourreza and Rafiei (2023). Despite these gains, however, existing code generation benchmarks tend to require short and relatively simple programs. In contrast, here models must generate many hundreds of lines of Python code to generate complete and accurate task simulations, with training games ranging from 500 to 1000 lines of code. ## 3 Simulation as Code Generation In this work, we investigate generating simulations - in the form of Python programs - that provide minimal world models to sufficient detail that they allow accomplishing specific multi-step tasks drawn from the scientific and common-sense reasoning domains, such as _boiling water_ or _washing dishes_. We operationalize this as a task of generating task-specific text games, which are virtual game environments that are rendered (and interacted upon) exclusively through text. Specifically, we pose as a challenge task the notion of generating text games when provided with two components as input: (1) a detailed _task specification_ for the game to generate, and (2) one or more highly-templated text games as examples. In Section 4, we describe a manually-constructed corpus of highly-templated text games crafted to facilitate _n-shot_ generation of new text-game-based simulations. In Section 5, we describe the challenges in evaluating long (several hundred line) programs for functionlity, task faithfulness, and simulation fidelity. In Section 6, we evaluate the performance of a baseline GPT-4 model at generating novel text games. Finally, we discuss future challenges and opportunities in Section 8. ## 4 Text Games Corpus To support the task of generating simulations in the form of text games, we construct a corpus of highly-templated text games written in Python that can serve as examples in a _few-shot_ generation setting. Each game is between 500 and 1000 lines code, and covers specific tasks within the domain of scientific and common-sense inference, such as _boiling water_, _loading a dishwasher_, or _making a campfire_. Each game has two components: the game code itself, as well as a detailed _task specification_ in the form of structured comments at the top of each game that provide a detailed summary of the game task, critical objects, and solution paths. These components are described below, with an example game playthrough shown in Table 2. ### Task Specification The _task specification_ is a set of structured comments at the start of each game in the corpus, that serve as a high-level outline for the critical compo \begin{table} \begin{tabular}{l} \hline \hline **Python Text Game Template** \\ \hline \# Generic parent class for all game objects \\ \# Provides geters/setters for object properties \\ \& class GameObject():... \\ \# Parent class for game objects that are containers \\ \# Provides matchors for adding/removing objects from container \\ \& class Container(GameObject):... \\ \# Parent class for game objects that are devices \\ \# Provides methods for activating/deactivating a device \\ \& class Device(GameObject):... 
\\ \# Example object: Soap for washing dishes \\ \& class DisNoap(GameObject):... \\ \# Example object: A dishwasher (that can contain food) \\ \& class DisN(Container):... \\ \# Example object: A dishwasher (that can contain dishes, \\ \# dish soap, and be activated to wash the dishes) \\ \& class Dishwasher(Device, Container): \\ \# Main Simulation Class \\ \& TextGrad():... \\ \# Creates the game world and populates with game objects \\ \# (including the kitchen, dishes, dishwasher, foods, etc.) \\ \& derInitializeWorld():... \\ \# Returns a string describing the game and task \\ \& dergetTaskDescription():... \\ \# Returns an array with all possible valid actions given \\ \# the current game state \\ \& der generateValidations():... \\ \# Performs an action (e.g. turn on dishwasher) in the environment, \\ \# changing the environment state. \\ \& derstep(action:str):... \\ \# Calculate the current game score given progress, as well as \\ \# whether the game has been won. \\ \& der calculateScore():... \\ \# Main Entry Point (example of a user playing) \\ if\_name\_== "\_main\_": \\ game = TextGrad() \\ \# ”Task”: + game\_getTaskDescription() \\ while not game gameOver: \\ actionSet = input(\(\times\)) \\ observation, score, reward = game.step(actionStr) \\ print(Observation: "\(+\) observation) \\ print(Score: "\(+\) score) \\ print( Reward: "\(+\) reward) \\ print(Game Completed.) \\ print(Game Won: "\(+\) str(game.gameWon)) \\ \hline \hline \end{tabular} \end{table} Table 1: An illustration of the core classes and member functions present in the highly-templated games of the ByteSized32 corpus. Note that as each game consists of between 500 and 1000 lines of code, the example here provides an overview of only a subset of the most important functions examined during scoring. nents of each game. These are intended to provide a useful high-level scaffold that language models can use to better structure games they generate. The components of the task specification include: * for example, _washing dirty dishes using a dishwasher_. * **Task-Critical Objects:** Names of any task-critical objects, such as _dishes_, _dish soap_, and a _dishwasher_. * **Actions:** Actions that an agent playing the game can take, such as _opening_ or _closing_ containers, _activating_ or _deactivating_ devices, _picking up_ or _putting down_ objects, and so forth. * for example, adding _food_ that an agent can eat, that creates more dirty dishes. * **Solution:** A high-level solution to the game. For example: opening the dishwasher, moving each dirty dish from the kitchen into the dishwasher, moving dish soap into the dishwasher, closing the dishwasher, and activating the dishwasher. ### Game Code To maximize utility as _n-shot_ training data for code generation tasks, each game in the corpus uses a highly-templated structure consisting of core objects and member functions, shown in Table 1 and described below. The core architecture and API of these functions mirrors other frameworks for text game research Hausknecht et al. (2020), which are derived from the OpenAI Gym specification for reinforcement learning models Brockman et al. (2016). * **World Initialization:** Initialize the game world, including any objects. For example, for the dishwasher game, create a _kitchen_ room that includes _dirty dishes_, _dish soap_, a _dishwasher_, and any other relevant objects. * **Valid Actions:** Return a list of all possible valid actions that an agent could take, given the current state of the environment. 
### Game Code To maximize utility as _n-shot_ training data for code generation tasks, each game in the corpus uses a highly-templated structure consisting of core objects and member functions, shown in Table 1 and described below. The core architecture and API of these functions mirrors other frameworks for text game research Hausknecht et al. (2020), which are derived from the OpenAI Gym specification for reinforcement learning models Brockman et al. (2016). * **World Initialization:** Initialize the game world, including any objects. For example, for the dishwasher game, create a _kitchen_ room that includes _dirty dishes_, _dish soap_, a _dishwasher_, and any other relevant objects. * **Valid Actions:** Return a list of all possible valid actions that an agent could take, given the current state of the environment. For example, _take dirty dish_, or _open dishwasher_. * **Step:** Perform a given action in the environment, updating the game state and returning the resulting observation - for example, the _take dirty dish_ action might return the observation _"the dirty dish is now in your inventory"_. * **Scoring:** Return an agent's current progress in solving the game task, abstracted to an arbitrary numerical score, and a set of boolean flags that represent whether the game has been won or lost. While the above methods are provided through a main game class (TextGame), each game also includes a large number of classes representing specific **game objects**. Each game object derives from a common class, GameObject, from which generic subclasses that share common methods inherit (e.g. Containers, that can store objects, or Devices, that can be activated), before finally instantiating specific game objects (e.g. Dish, Dishwasher). ## 5 Evaluating Generated Simulations Evaluating model-generated simulations (in the form of hundreds of lines of Python code) presents a number of challenges, in part due to the largely open-ended possibilities when generating large simulations constrained only by short task prompts. Here, we devise a set of automatic and manual metrics that measure both technical aspects (such as whether the simulation runs error-free), as well as how well generated simulations adhere to the task specification. ### Automatic Metrics We propose automated metrics designed to measure two aspects of a generated simulation: _technical validity_ and _specification compliance_. **Technical Validity:** Generated games are required to adhere to the API provided in Table 1, and the _technical validity_ metric evaluates this by testing whether each member function of the generated game's API can run without error. First, each API function is tested in turn, with errors recorded. Finally, the _environment step_ function is tested by conducting a breadth-first search over the game's action space, as returned by its own generatePossibleActions() function, to exhaustively test all game trajectories for errors up to some maximum number of steps \(n\). An error on any step is considered a failure of the step() function. Because a game can have as many as 2000 possible valid actions at each step, we pragmatically limit \(n\) to only 3 steps in order to ensure tractability.
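This trajectory check can be summarized by the sketch below; it is a simplified illustration that assumes the generated game exposes the TextGame API of Table 1, not the exact evaluation harness used here.

```python
from copy import deepcopy

def check_step_function(game_class, max_depth=3):
    """Breadth-first search over a game's action space, recording any
    exceptions raised by generatePossibleActions() or step() within
    max_depth steps."""
    errors = []
    frontier = [(game_class(), 0)]  # (game state, depth reached so far)
    while frontier:
        game, depth = frontier.pop(0)
        if depth >= max_depth:
            continue
        try:
            actions = game.generatePossibleActions()
        except Exception as e:
            errors.append(("generatePossibleActions", depth, repr(e)))
            continue
        for action in actions:
            next_game = deepcopy(game)  # branch the game state for this action
            try:
                next_game.step(action)
            except Exception as e:
                errors.append(("step", depth, action, repr(e)))
                continue
            frontier.append((next_game, depth + 1))
    return errors  # an empty list means no errors were found

# Example use (hypothetical generated game class):
# errors = check_step_function(GeneratedTextGame, max_depth=3)
```

Even with \(n=3\), a game with on the order of 2000 valid actions per step can expose billions of trajectories in the worst case, which is why the search depth is truncated so aggressively.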
**Specification Compliance (automatic):** This metric reflects whether the generated game meets the requirements outlined in the task specification. In particular, we measure whether a generated game includes the required action, object, and distractor requested in the task specification. In order to facilitate this evaluation at scale, we leverage the question-answering capacity of GPT-4. The generated game and its accompanying prompt are fed into GPT-4, which is then asked a series of true-or-false questions about the presence of the required components - for example, _"Does the simulation contain the object 'Sink'?"_ To validate this automatic GPT-4-based specification compliance metric, we also perform a corresponding manual evaluation. Two human raters independently evaluated the compliance of the generated game on each of the three metrics (i.e. action, object, and distractor compliance). Human raters (co-authors of this work) had a high initial inter-annotator agreement (Cohen's \(\kappa=0.92\)), and any disagreements were subsequently resolved. These consensus-based manual ratings were then compared against GPT-4's automatic evaluation. The average agreement is substantial (Avg. \(\kappa=0.79\); Object: \(\kappa=1.00\); Action: \(\kappa=0.62\); Distractor: \(\kappa=0.64\)), which suggests the automatic GPT-4 metric has a high correlation with human judgements. ### Manual Metrics We propose three additional binary metrics to further evaluate whether generated games are _functionally_ and _conceptually_ correct, which are currently difficult to measure automatically. In light of this, we measure these quantities by a manual human evaluation, both by examining generated game code and by playing the generated games. **Playability:** We consider a game to be "playable" if, for the first 20 steps or until the winning state is reached, there exists at least one executable action, and at least one executable action that contributes positively to the game's winning objective. By executable, we mean actions that can be performed without any error and that change the game state. The human evaluator determines the set of possible "contributing actions" and then manually checks to see if any of them can be performed. For instance, in a game about _boiling water_, actions that contribute to the winning state might include _removing a pot from a cabinet_ or _turning on the stove_. **Winnability:** A game is considered "winnable" if there exists a sequence of actions that, when performed in order, will lead to a winning state of the game. To determine if a game is winnable, a human evaluator plays the game and attempts to complete the task by submitting natural language actions to the game's step function (an example playthrough is shown in Table 2). We note that this process does not produce a perfect evaluation of a game's winnability, as it may be the case that a human evaluator fails to find a winning trajectory. While this is a risk, we find that pragmatically, in the vast majority of cases, a game is either obviously winnable or obviously impossible to win.

\begin{table} \begin{tabular}{p{284.5pt}} \hline **Boil Water Game** \\ \hline **Task Description**: Your task is to boil water. \\ **Observation**: You find yourself in a kitchen. In the kitchen, you see: yourself \\ a stove that is currently off and has nothing on it. \\ a sink that is empty \\ a pot that is empty \\ a peanut butter \\ a orange \\ Type ‘help’ for a list of possible actions. \\ \(>\)_take pot_ \\ **Observation**: The pot is removed from the kitchen. You put the pot in your inventory. \\ \(>\)_put pot in sink_ \\ **Observation**: The pot is removed from the agent. The pot is placed in the sink. \\ \(>\)_examine sink_ \\ **Observation**: a sink that contains the following items: a pot that is empty \\ \(>\)_turn on sink_ \\ **Observation**: The sink is now turned off. \\ \(>\)_take pot_ \\ **Observation**: The pot is removed from the sink. You put the pot in your inventory. \\ \(>\)_put pot on stove_ \\ **Observation**: The pot is removed from the agent. The pot is placed in the stove. \\ \(>\)_turn on stove_ \\ **Observation**: The stove is now turned on. \\ \(>\)_examine stove_ \\ **Observation**: a stove that is currently on and has the following items on it: ... \\ \hline \end{tabular} \end{table} Table 2: An example playthrough of the _boil water_ game.

**Physical Reality Alignment:** This subjective metric measures whether a generated game accurately models the physical world, by having a human manually play the game while actively attempting to break the simulation - for example, by actively trying to move unmovable objects, and observing whether task-critical causal relations are implemented (e.g. does water on a stove increase in temperature). ## 6 Experiments Here we examine how well a baseline GPT-4 model performs at the simulation generation task. ### Model and Prompt We use GPT-4 (OpenAI, 2023) with a context window of 32k tokens for each of our experiments. We perform single-shot experiments, where the model receives the following as part of its input prompt: 1. _Purpose:_ A general statement about the purpose and requirements of the text game generation task. 2. _1-shot example:_ A single Python reference game from the ByteSized32 corpus, and its accompanying high-level task specification. 3. _Task request:_ The _task specification_ for the game to generate, drawn from the evaluation set. For each experiment we use greedy decoding (setting the generation temperature to zero), and leave all other hyperparameters at their default values.
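As a rough sketch of how such a single-shot prompt might be assembled: the wording of the purpose statement and the query_gpt4 helper below are illustrative assumptions rather than the exact setup used in this work.

```python
def build_single_shot_prompt(reference_spec: str, reference_code: str,
                             target_spec: str) -> str:
    """Combine the three prompt components (purpose, 1-shot example,
    task request) into a single prompt string."""
    purpose = (
        "You are writing a text game in Python. Follow the class and API "
        "structure of the example game exactly, and implement the new game "
        "described by the task specification below."
    )
    return "\n\n".join([
        purpose,
        "# Example task specification:\n" + reference_spec,
        "# Example game code:\n" + reference_code,
        "# New task specification (write the complete game):\n" + target_spec,
    ])

# The assembled prompt is then sent to GPT-4 with temperature 0 (greedy
# decoding); query_gpt4 is a placeholder for whichever API client is used.
# generated_code = query_gpt4(build_single_shot_prompt(spec, code, new_spec),
#                             temperature=0)
```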
### Evaluation Set In addition to the 32 games in the ByteSized32 dataset, we also provide a test set of 16 additional games in the form of _task specifications_. These specifications are intended to evaluate models' generative capabilities for unseen games, and each task in the test set explicitly requires at least one distractor to be included. Each game in the evaluation set is explicitly crafted to have highly similar or highly dissimilar characteristics to games found in the training set, to evaluate the effect of game template similarity on generation quality. This alignment between training and evaluation games is described in detail below. ### Reference Game Selection We hypothesize that the quality of a generated game will depend on the alignment between the task specification and the game provided as reference - for example, we might expect a priori that reference games that are highly similar to target games might yield better generation performance than reference and target games that are highly dissimilar. To test this hypothesis, we pair each target game specification in the evaluation set with distinct reference games that either resemble or differ from the evaluation specification in the following ways: 1. **Objects**: Two games align with respect to objects if they both contain at least one object of the same category - for example, if the reference game contains a _device_ (such as a _dishwasher_), and the target game also requests generating a _device_ (such as a _sink_). 2. **Actions**: Two games align in terms of actions if they share at least one action that is required to successfully complete the game, such as _opening_ a container or _activating_ a device. 3. **Distractors**: Two games align in terms of distractors if they either both require a distractor, or both do not require a distractor. We note that all evaluation games in the test set require generating a distractor. At test time, for each game specification in the test set, we randomly select six reference games from the training corpus: three training games that align with objects, actions, and distractors, respectively, for the _similar reference game_ conditions, and three training games that do not align on each of these criteria for the _dissimilar reference game_ conditions. With 16 game specifications in the test set, this results in a total of 96 model-generated games. ## 7 Results ### Technical Validity Results of the _technical validity_ evaluation across each of the 96 model-generated games are shown in Table 3. Among the 96 generated games, the vast majority implement the initializeWorld() and generateTaskDescription() functions without error (90.6% and 83.6%, respectively), with a similar proportion generating calculateScore() without error.
More than three-quarters of games successfully implement generatePossibleActions() (77.3%). Unsurprisingly, the step() function, which requires parsing input actions and then updating game states and observations accordingly, proves the most difficult to implement - with only 27.3% of games (26 of 96) passing the exhaustive 3-step trajectory search without error.

\begin{table} \begin{tabular}{l c} \hline \hline **API Method** & **GPT-4** \\ \hline Game Initialization & 90.6 \\ Task Description Generation & 83.6 \\ Score Calculation & 83.6 \\ Possible Action Generation & 77.3 \\ Step & 27.3 \\ \hline **All Checks Passed** & 27.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Fine-grained _technical validity_ evaluation results on the 96 games generated by GPT-4. Values represent the proportion of games that pass a given metric.

### Specification Compliance Automated specification compliance results are shown in Table 4, broken down by whether the reference game has similar (_"in template"_) or dissimilar (_"not in template"_) characteristics. We observe that the GPT-4 model demonstrates the ability to consistently generate task-critical objects regardless of whether a similar object is present in the reference game. In terms of task-critical actions, we find an unexpected result: 100% of generated games contain the required task-critical actions when the action is _not_ present in the reference game, while only 81.3% of generated games contain the action when it _is_ present in the reference - though we find that in the cases where the model fails to generate a required action, it tends to generate a semantically-equivalent action with a different name. For instance, instead of generating the required action putOn, the model generates the action wear. Regardless, the overall success rates are high in both conditions, indicating that GPT-4 is able to follow instructions to generate new objects and actions, even when they are not present in its reference prompt. Generating distractors proves to be more challenging for GPT-4. We observe that, even when the reference game contains a distractor, only 31.3% of generated games meet the requirement of incorporating a distractor. This number falls to 12.5% in cases where the reference game does not include a distractor. The implications of this result are two-fold: that distractor objects and actions represent a more difficult concept for the language model to encode than task-critical objects and actions, and that the presence of distractors in the reference game goes some of the way towards mitigating this difficulty. We present the manual evaluation results for specification compliance in Table 4. We note that the manual evaluation differs from the automatic evaluation in only 8 of the 96 games, indicating that automatic evaluation is a viable substitute for costly human evaluation of these metrics. We also note that the automatic evaluation most frequently differed from the manual evaluation in the distractors section, which is also the section that proved the most difficult in terms of generation. Finally, it is also worth noting that these compliance metrics indicate only whether the specified object, action, or distractor is present in the game, without assessing the correctness of their implementation.

\begin{table} \begin{tabular}{l c c} \hline \hline **Measurement (Auto)** & **In template** & **Not in template** \\ \hline Task-critical objects & 100 & 100 \\ Task-critical actions & 81.8 & 100 \\ Distractors & 31.8 & 12.5 \\ \hline **Measurement (Human)** & **In template** & **Not in template** \\ \hline Task-critical objects & 100 & 100 \\ Task-critical actions & 68.8 & 93.8 \\ Distractors & 43.8 & 31.3 \\ \hline \hline \end{tabular} \end{table} Table 4: Automatic (GPT-4) and manual (human) evaluated specification compliance results. Values represent the percentage of games that include a certain feature. _In template_ refers to the given feature being present within the reference game provided in the model prompt.

### Playability, Winnability, and Physical Plausibility Table 5 shows the results of the manual evaluation of each game's playability, winnability, and physical reality alignment.
We note that while only 27.3% of generated games pass the exhaustive trajectory search, a full 72.9% of games are playable (i.e. successfully implement at least one action that contributes towards the task objective) and 40.6% are winnable. This suggests that a substantial number of generated games are valid within the narrow scope of a gold solution trajectory, but begin to exhibit errors at the edges of the action space as they drift from the canonical solution trajectory. In addition, only 2.1% (2 out of 96) of games manage to produce a simulation that is fully consistent with the limited subset of physical reality they model. This fundamental component of simulation through code generation thus appears to be beyond the capacity of current models, providing the basis for a strong challenge task in physical simulation through code generation. ## 8 Discussion **To what extent can GPT-4 generate long structured text games in a single-shot setting?** At a high level, our results indicate that GPT-4 is capable of generating syntactically valid and templated programs that are hundreds of lines in length. Of the generated games, 72.9% implement at least one task-critical action without error, and a full 40.6% allow a user or agent to reach a winning state. A more nuanced interpretation of these results suggests that the model has best learned to successfully replicate the _high-level structure_ of the highly-templated ByteSized32 game API - as model performance begins to degrade once we examine the minute details: only 27.3% of games include a simulation that is robust to a 3-step exhaustive trajectory search, and only 31.5% of games include a required distractor despite their presence in the reference games. **Can GPT-4 create items and actions that were not observed in the reference game?** GPT-4 demonstrates a capability to generate games that incorporate unseen objects, as required by the task specifications in all of our test cases. Similarly, in an impressive 93.8% of cases, GPT-4 generates task-critical actions that are not observed in the reference game. However, it is important to note that in numerous instances, these generated objects and actions do not accurately reflect the physical world. Only a mere 2.1% of generated games successfully complete the text game generation task without any errors in terms of modeling the physical world. **Can we observe the internal world models of LLMs through the simulations they generate?** The "simulation as code generation" paradigm potentially offers a valuable means to explicitly assess how large language models understand the world.
For example, for an evaluation specification where the objective is to generate a game involving burying a treasure box in a hole, the generated game mandated the placement of soil in the hole before burying the treasure box - highlighting a deficiency in GPT-4's comprehension of the process of _burying_. Similarly, in another case, the agent was able to directly access water without utilizing any containers, indicating a lack of grasp of the concept of liquids by the model - at least in certain contexts. Despite the extensive knowledge encompassed within its pretraining data, only 2.1% of the GPT-4 generated games accurately model the physical world. Consequently, constructing correct explicit world models in code remains a formidable challenge for LLMs. ## 9 Conclusion In this paper, we examine the task of generating simulations of scientific and common-sense reasoning tasks, formulated as a code generation problem of generating text games that explicitly test these tasks in Python. We present a text game corpus for code generation, ByteSized32, containing 32 training games paired with detailed structured task specifications, as well as a separate corpus of 16 unseen task specifications for evaluation. Notably, all the games in the training set are implemented following a unified code template, while the test game specifications are carefully designed to evaluate the benefit of the presence or absence of similar features in training games, such as shared objects, actions, or distractors. This facilitates investigating the effect of reference-game similarity on the quality of generated simulations.

\begin{table} \begin{tabular}{l c} \hline \hline **Measurement** & **GPT-4** \\ \hline Playability & 72.9 \\ Winnability & 40.6 \\ Physical Reality Alignment & 2.1 \\ \hline \hline \end{tabular} \end{table} Table 5: Results of the 96 evaluation games on fully-human evaluation metrics. Values represent the percentage of games that are playable, winnable, or that correctly model the physical world.
2301.10130
Trialitarian Triples
Trialitarian triples are triples of central simple algebras of degree 8 with orthogonal involution that provide a convenient structure for the representation of trialitarian algebraic groups as automorphism groups. This paper explicitly describes the canonical "trialitarian'' isomorphisms between the spin groups of the algebras with involution involved in a trialitarian triple, using a rationally defined shift operator that cyclically permutes the algebras. The construction relies on compositions of quadratic spaces of dimension 8, which yield all the trialitarian triples of split algebras. No restriction on the characteristic of the base field is needed.
Demba Barry, Jean-Pierre Tignol
2023-01-24T16:53:45Z
http://arxiv.org/abs/2301.10130v1
# Trialitarian triples ###### Abstract. Trialitarian triples are triples of central simple algebras of degree \(8\) with orthogonal involution that provide a convenient structure for the representation of trialitarian algebraic groups as automorphism groups. This paper explicitly describes the canonical "trialitarian" isomorphisms between the spin groups of the algebras with involution involved in a trialitarian triple, using a rationally defined shift operator that cyclically permutes the algebras. The construction relies on compositions of quadratic spaces of dimension \(8\), which yield all the trialitarian triples of split algebras. No restriction on the characteristic of the base field is needed. Key words and phrases: Clifford algebras, Clifford groups, triality, composition algebras 2010 Mathematics Subject Classification: Primary 20G15, 11E57 The first author would like to thank the second author and UCLouvain for their hospitality during several visits while the work for this paper was carried out. He gratefully acknowledges support from the Association pour la Promotion Scientifique de l'Afrique through a grant "APSA Awards 2020." This work was partially supported by the Fonds de la Recherche Scientifique-FNRS under grant J.0159.19. \({}^{1}\)Prior to [7], examples of trialitarian triples in characteristic \(2\) were given by Knus-Villa [14]. are studied in SS2. Compositions of quadratic spaces provide a new perspective on the classical theory of composition algebras by triplicating their underlying vector space. They also demonstrate more diversity, because--in contrast with compositions arising from composition algebras--the three quadratic spaces involved in a composition need not be isometric; this accounts for the interpretation in SS3.5 of the \(\operatorname{mod}2\) cohomological invariants of \(\operatorname{\mathbf{Spin}}_{8}\), since compositions of quadratic spaces of dimension 8 are torsors under \(\operatorname{\mathbf{Spin}}_{8}\). Compositions of three different quadratic spaces of equal dimension have been considered earlier, for instance in Knus' monograph [12, V(7.2)], in [13, (35.18)] and in the papers [2, 5.3] and [3, SS3] by Alsaody-Gille and Alsaody respectively. However, the shift operator \(\partial\) on compositions of quadratic spaces, briefly mentioned in [13, (35.18)], seems to have been mostly ignored so far. By attaching to every composition on quadratic spaces \((V_{1},q_{1})\), \((V_{2},q_{2})\), \((V_{3},q_{3})\) two cyclic derivatives, which are compositions on \((V_{2},q_{2})\), \((V_{3},q_{3})\), \((V_{1},q_{1})\) and on \((V_{3},q_{3})\), \((V_{1},q_{1})\), \((V_{2},q_{2})\) respectively, the shift operator provides the model for the operator \(\partial\) on trialitarian triples. Compositions of quadratic spaces of dimension 8 also afford a broader view of the classical Principle of Triality for similitudes of the underlying vector space of an octonion algebra, as discussed by Springer-Veldkamp [19, SS3.2], and also of the local version of this principle in characteristic 2 described by Elduque [8, SS3, SS5], see Corollary 3.24 and Corollary 3.25. Automorphisms of the compositions of quadratic spaces arising from composition algebras are by definition the _related triples_ of isometries defined in [19, SS3.6], [8, SS1] and [2, SS3] (see Remark 2.17); they are closely related to _autotopies_ of the algebra, which form the structure group defined for alternative algebras by Petersson [15], see SS2.5 and SS3.6.
The first section reviews background information on Clifford groups and their Lie algebras, notably on extended Clifford groups, which play a central role in subsequent sections. More detail on the contents of this work can be found in the introduction of each section. ###### Contents * 1 Clifford groups and Lie algebras * 1.1 Quadratic forms and quadratic pairs * 1.2 Clifford algebras * 1.3 Clifford groups * 1.4 Lie algebras of orthogonal groups * 1.5 Extended Clifford Lie algebras * 1.6 Homomorphisms from Clifford algebras * 2 Compositions of quadratic spaces * 2.1 Composition maps and their cyclic derivatives * 2.2 Canonical Clifford maps * 2.3 Similitudes and isomorphisms * 2.4 Compositions of pointed quadratic spaces * 2.5 Composition algebras * 3 Trialitarian triples * 3.1 The trialitarian triple of a composition of quadratic spaces * 3.2 Similitudes of trialitarian triples * 3.3 Derived trialitarian triples * 3.4 Trialitarian isomorphisms ### 3.5. Compositions of \(8\)-dimensional quadratic spaces #### 3.6. The structure group of \(8\)-dimensional composition algebras #### 1. Clifford groups and Lie algebras The purpose of the first subsections of this section is to recall succinctly the Clifford groups of even-dimensional quadratic spaces and their twisted analogues (in the sense of Galois cohomology), which are defined in arbitrary characteristic through central simple algebras with quadratic pair. Most of the material is taken from [13], but we incorporate a few complements that are made possible by the definition of canonical quadratic pairs on Clifford algebras by Dolphin-Queguiner-Mathieu [7]. A detailed discussion of the corresponding Lie algebras is given in SS1.4 and SS1.5. For a central simple algebra with quadratic pair \(\mathfrak{A}\), we emphasize the difference between the Lie algebra \(\mathfrak{o}(\mathfrak{A})\) of the orthogonal group and the Lie algebra \(\mathfrak{pgo}(\mathfrak{A})\) of the group of projective similitudes, which are canonically isomorphic when the characteristic is different from \(2\) but contain different information in characteristic \(2\). The last subsection provides a major tool for the definition of homomorphisms \(\mathfrak{C}(\mathfrak{A})\to\mathfrak{A}^{\prime}\) from the Clifford algebra of a central simple algebra with quadratic pair \(\mathfrak{A}\) to a central simple algebra with quadratic pair \(\mathfrak{A}^{\prime}\). These homomorphisms are shown to be uniquely determined by Lie algebra homomorphisms \(\mathfrak{pgo}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A}^{\prime})\); see Theorem 1.21. #### Quadratic forms and quadratic pairs Let \((V,q)\) be a (finite-dimensional) quadratic space over \(F\). The polar form \(b\colon V\times V\to F\) is defined by \[b(x,y)=q(x+y)-q(x)-q(y)\qquad\text{for $x$, $y\in V$.}\] We only consider quadratic spaces whose polar form \(b\) is nonsingular. This restriction entails that \(\dim V\) is even if \(\operatorname{char}F=2\), for \(b\) is then an alternating form. Nonsingularity of \(b\) allows us to define the adjoint involution \(\sigma_{b}\) on \(\operatorname{End}V\) by the condition \[b\bigl{(}x,a(y)\bigr{)}=b(\sigma_{b}(a)(x),y)\qquad\text{for $a\in\operatorname{ End}V$ and $x$, $y\in V$.}\] Moreover, we may identify \(V\otimes V\) with \(\operatorname{End}V\) by mapping \(x\otimes y\in V\otimes V\) to the operator \(z\mapsto x\,b(y,z)\). 
Under the identification \(V\otimes V=\operatorname{End}V\), the involution \(\sigma_{b}\) and the (reduced) trace \(\operatorname{Trd}\) are given by \[\sigma_{b}(x\otimes y)=y\otimes x\qquad\text{and}\qquad\operatorname{Trd}(x \otimes y)=b(x,y)\qquad\text{for $x$, $y\in V$,}\] see [13, SS5.A]. Moreover, for \(a\in\operatorname{End}V\) and \(x\), \(y\in V\) we have \[a\circ(x\otimes y)=a(x)\otimes y\qquad\text{and}\qquad(x\otimes y)\circ a=x \otimes\sigma_{b}(a)(y).\] The identification \(V\otimes V=\operatorname{End}V\), which depends on the choice of the nonsingular polar form \(b\), will be used repeatedly in the sequel. It will be referred to as a _standard identification_. Let \[\operatorname{Sym}(\sigma_{b})=\{a\in\operatorname{End}V\mid\sigma_{b}(a)=a\}.\] To the quadratic form \(q\) on \(V\) we further associate a linear form \(\mathfrak{f}_{q}\) on \(\operatorname{Sym}(\sigma_{b})\) defined by the condition \[\mathfrak{f}_{q}(x\otimes x)=q(x)\qquad\text{for $x\in V$,}\] see [13, (5.11)]. Linearizing this condition yields \(\mathfrak{f}_{q}\bigl{(}x\otimes y+\sigma_{b}(x\otimes y)\bigr{)}=b(x,y)\) for \(x\), \(y\in V\), hence \[\mathfrak{f}_{q}\bigl{(}a+\sigma_{b}(a)\bigr{)}=\operatorname{Trd}(a)\qquad \text{for $a\in\operatorname{End}V$.}\] The pair \((\sigma_{b},\mathfrak{f}_{q})\) determines the quadratic form \(q\) up to a scalar factor by [13, (5.11)], which is sufficient to define the orthogonal group \(\operatorname{O}(q)\) of isometries of \((V,q)\), as well as the group of similitudes \(\operatorname{GO}(q)\) and the group of projective similitudes \(\operatorname{PGO}(q)\), as follows: \[\operatorname{O}(q) =\{a\in\operatorname{End}V\mid q\big{(}a(x)\big{)}=q(x)\text{ for all }x\in V\},\] \[=\{a\in\operatorname{End}V\mid\sigma_{b}(a)a=1\text{ and }\mathfrak{f}_{q}(asa^{-1})= \mathfrak{f}_{q}(s)\text{ for all }s\in\operatorname{Sym}(\sigma_{b})\},\] \[\operatorname{GO}(q) =\{a\in\operatorname{End}V\mid\text{ there exists }\mu\in F^{\times}\text{ such that }q\big{(}a(x)\big{)}=\mu\,q(x)\text{ for all }x\in V\},\] \[=\{a\in\operatorname{End}V\mid\sigma_{b}(a)a\in F^{\times}\text{ and }\mathfrak{f}_{q}(asa^{-1})=\mathfrak{f}_{q}(s)\text{ for all }s\in \operatorname{Sym}(\sigma_{b})\},\] \[\operatorname{PGO}(q) =\operatorname{GO}(q)/F^{\times}.\] In the equivalent definitions of \(\operatorname{GO}(q)\), the scalar \(\mu\) such that \(q\big{(}a(x)\big{)}=\mu\,q(x)\) for all \(x\in V\) is \(\sigma_{b}(a)a\). It is called the _multiplier_ of the similitude \(a\). Isometries and similitudes are also defined between different quadratic spaces: if \((V,q)\) and \((\widetilde{V},\widetilde{q})\) are quadratic spaces over a field \(F\), a _similitude_\(u\colon(V,q)\to(\widetilde{V},\widetilde{q})\) is a linear bijection \(V\to\widetilde{V}\) for which there exists a scalar \(\mu\in F^{\times}\) such that \(\widetilde{q}\big{(}u(x)\big{)}=\mu\,q(x)\) for all \(x\in V\). The scalar \(\mu\) is called the _multiplier_ of the similitude, and similitudes with multiplier \(1\) are called _isometries_. 
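As a small illustration of these definitions, consider the hyperbolic plane: let \(V=Fe\oplus Fe^{\prime}\) with \(q(xe+ye^{\prime})=xy\), so that \(q(e)=q(e^{\prime})=0\) and \(b(e,e^{\prime})=1\). Under the standard identification \(\operatorname{End}V=V\otimes V\) the element \(e\otimes e^{\prime}+e^{\prime}\otimes e\) is the identity map, and \[\mathfrak{f}_{q}(e\otimes e)=q(e)=0,\qquad\mathfrak{f}_{q}(e\otimes e^{\prime}+e^{\prime}\otimes e)=b(e,e^{\prime})=1=\operatorname{Trd}(e\otimes e^{\prime}),\] in accordance with \(\mathfrak{f}_{q}(a+\sigma_{b}(a))=\operatorname{Trd}(a)\) for \(a=e\otimes e^{\prime}\). Moreover, for \(\lambda\in F^{\times}\) the linear map \(u\) defined by \(u(e)=\lambda e\) and \(u(e^{\prime})=e^{\prime}\) is a similitude of \((V,q)\) with multiplier \(\sigma_{b}(u)u=\lambda\).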
Abusing notation, for every linear bijection \(u\colon V\to\widetilde{V}\) we write \[\operatorname{Int}(u)\colon\operatorname{End}V\to\operatorname{End}\widetilde{ V}\quad\text{for the map }a\mapsto u\circ a\circ u^{-1}.\] It is readily verified that for every similitude \(u\) the isomorphism \(\operatorname{Int}(u)\) restricts to group isomorphisms \[\operatorname{O}(q)\xrightarrow{\sim}\operatorname{O}(\widetilde{q}),\qquad \operatorname{GO}(q)\xrightarrow{\sim}\operatorname{GO}(\widetilde{q}),\qquad \operatorname{PGO}(q)\xrightarrow{\sim}\operatorname{PGO}(\widetilde{q}).\] The groups \(\operatorname{O}(q)\), \(\operatorname{GO}(q)\) and \(\operatorname{PGO}(q)\) are groups of rational points of algebraic groups (i.e., smooth affine algebraic group schemes) which are denoted respectively by \(\operatorname{O}(q)\), \(\operatorname{GO}(q)\) and \(\operatorname{PGO}(q)\), see [13, SS23]. As pointed out in [13], twisted forms (in the sense of Galois cohomology) of these groups can be defined through a notion of quadratic pair on central simple algebras, which is recalled next. Let \(A\) be a central simple algebra over an arbitrary field \(F\). An \(F\)-linear involution \(\sigma\) on \(A\) is said to be _orthogonal_ (resp. _symplectic_) if after scalar extension to a splitting field of \(A\) it is adjoint to a symmetric nonalternating (resp. to an alternating) bilinear form. For any involution \(\sigma\) on \(A\) we write \[\operatorname{Sym}(\sigma)=\{a\in A\mid\sigma(a)=a\}.\] **Definition 1.1**.: A _quadratic pair_\((\sigma,\mathfrak{f})\) on a central simple algebra \(A\) consists of an involution \(\sigma\) on \(A\) and a linear map \(\mathfrak{f}\colon\operatorname{Sym}(\sigma)\to F\) subject to the following conditions: 1. \(\sigma\) is orthogonal if \(\operatorname{char}F\neq 2\) and symplectic if \(\operatorname{char}F=2\); 2. \(\mathfrak{f}\big{(}x+\sigma(x)\big{)}=\operatorname{Trd}_{A}(x)\) for \(x\in A\), where \(\operatorname{Trd}_{A}\) is the reduced trace. The map \(\mathfrak{f}\) is called the _semitrace_ of the quadratic pair \((\sigma,\mathfrak{f})\). This terminology is motivated by the observation that when \(\operatorname{char}F\neq 2\) every \(x\in\operatorname{Sym}(\sigma)\) can be written as \(x=\frac{1}{2}\big{(}x+\sigma(x)\big{)}\), hence \(\mathfrak{f}(x)=\frac{1}{2}\operatorname{Trd}_{A}(x)\). Thus, the semitrace of a quadratic pair \((\sigma,\mathfrak{f})\) is uniquely determined by the orthogonal involution \(\sigma\) if \(\operatorname{char}F\neq 2\). To simplify notation, when possible without confusion we use a single letter to denote a central simple algebra with quadratic pair, and write \[\mathfrak{A}=(A,\sigma,\mathfrak{f}).\] The twisted forms of orthogonal groups are defined as follows: for \(\mathfrak{A}\) as above, \[\operatorname{O}(\mathfrak{A}) =\{a\in A\mid\sigma(a)a=1\text{ and }\mathfrak{f}(asa^{-1})= \mathfrak{f}(s)\text{ for all }s\in\operatorname{Sym}(\sigma)\},\] \[\operatorname{GO}(\mathfrak{A}) =\{a\in A\mid\sigma(a)a\in F^{\times}\text{ and }\mathfrak{f}(asa^{-1})= \mathfrak{f}(s)\text{ for all }s\in\operatorname{Sym}(\sigma)\},\] \[\operatorname{PGO}(\mathfrak{A}) =\operatorname{GO}(\mathfrak{A})/F^{\times}.\] The group of similitudes \(\operatorname{GO}(\mathfrak{A})\) can be alternatively defined as the group of elements \(a\in A^{\times}\) such that \(\operatorname{Int}(a)\) is an automorphism of \(\mathfrak{A}\). 
Therefore, by the Skolem-Noether theorem the group \(\operatorname{PGO}(\mathfrak{A})\) can be identified with the group of automorphisms of \(\mathfrak{A}\). The groups \(\operatorname{O}(\mathfrak{A})\), \(\operatorname{GO}(\mathfrak{A})\) and \(\operatorname{PGO}(\mathfrak{A})\) are groups of rational points of algebraic groups denoted respectively by \(\mathbf{O}(\mathfrak{A})\), \(\mathbf{GO}(\mathfrak{A})\) and \(\mathbf{PGO}(\mathfrak{A})\), see [13, SS23]. For \(a\in\operatorname{GO}(\mathfrak{A})\), the scalar \(\sigma(a)a\in F^{\times}\) is called the _multiplier_ of the similitude \(a\). We write \(\mu(a)=\sigma(a)a\) and thus obtain a group homomorphism \[\mu\colon\operatorname{GO}(\mathfrak{A})\to F^{\times}\] whose kernel is \(\operatorname{O}(\mathfrak{A})\). Thus, for every quadratic space \((V,q)\) we have by definition \[\operatorname{O}(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})= \operatorname{O}(q),\qquad\operatorname{GO}(\operatorname{End}V,\sigma_{b}, \mathfrak{f}_{q})=\operatorname{GO}(q),\qquad\operatorname{PGO}(\operatorname {End}V,\sigma_{b},\mathfrak{f}_{q})=\operatorname{PGO}(q).\] The following statement is given without detailed proof in [13, (12.36)]. **Proposition 1.2**.: _Let \((V,q)\) and \((\widetilde{V},\widetilde{q})\) be quadratic spaces over an arbitrary field \(F\). If \(u\colon(V,q)\to(\widetilde{V},\widetilde{q})\) is a similitude, then \(\operatorname{Int}(u)\) is an isomorphism of algebras with quadratic pair_ \[\operatorname{Int}(u)\colon(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q}) \xrightarrow{\sim}(\operatorname{End}\widetilde{V},\sigma_{\widetilde{b}}, \mathfrak{f}_{\widetilde{q}}).\] _Conversely, every isomorphism \((\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})\xrightarrow{\sim}( \operatorname{End}\widetilde{V},\sigma_{\widetilde{b}},\mathfrak{f}_{ \widetilde{q}})\) has the form \(\operatorname{Int}(u)\) for some similitude \(u\colon(V,q)\to(\widetilde{V},\widetilde{q})\) uniquely determined up to a scalar factor._ Proof.: Observe that for every linear bijection \(u\colon V\to\widetilde{V}\) there exists a map \(\hat{u}\colon V\to\widetilde{V}\) such that \[\widetilde{b}(\hat{u}(x),\widetilde{y})=b\big{(}x,u^{-1}(\widetilde{y})\big{)} \qquad\text{for all }x\in V\text{ and }\widetilde{y}\in\widetilde{V},\] because the polar forms \(b\) and \(\widetilde{b}\) are nonsingular. Under the standard identifications \(\operatorname{End}V=V\otimes V\) and \(\operatorname{End}\widetilde{V}=\widetilde{V}\otimes\widetilde{V}\) afforded by \(b\) and \(\widetilde{b}\), we have \[\operatorname{Int}(u)(x\otimes y)=u(x)\otimes\hat{u}(y)\qquad\text{for all }x,\,y\in V.\] If \(u\) is a similitude with multiplier \(\mu\), then \(\hat{u}=\mu^{-1}u\), hence \(\operatorname{Int}(u)\circ\sigma_{b}=\sigma_{\widetilde{b}}\circ\operatorname {Int}(u)\) and \[\mathfrak{f}_{\widetilde{q}}\big{(}\operatorname{Int}(u)(x\otimes x)\big{)} =\mu^{-1}\,\widetilde{q}\big{(}u(x)\big{)}=q(x)=\mathfrak{f}_{q}(x\otimes x) \qquad\text{for all }x\in V.\] Since \(\operatorname{Sym}(\sigma_{b})\) is spanned by elements of the form \(x\otimes x\), it follows that \(\operatorname{Int}(u)\) is an isomorphism of algebras with quadratic pair. For the converse, note that the Skolem-Noether theorem shows that every \(F\)-algebra isomorphism \(\operatorname{End}V\xrightarrow{\sim}\operatorname{End}\widetilde{V}\) has the form \(\operatorname{Int}(u)\) for some linear bijection \(u\colon V\to\widetilde{V}\). 
If \(\operatorname{Int}(u)\) is an isomorphism of algebras with quadratic pair, then \(\operatorname{Int}(u)(x\otimes x)\in\operatorname{Sym}(\sigma_{\widetilde{b}})\) for every \(x\in V\), hence \(\hat{u}=\mu^{-1}u\) for some \(\mu\in F^{\times}\). Since \(\mathfrak{f}_{\widetilde{q}}\big{(}\operatorname{Int}(u)(x\otimes x)\big{)}= \mathfrak{f}_{q}(x\otimes x)\) for all \(x\in V\), it follows that \(\widetilde{q}\big{(}u(x)\big{)}=\mu\,q(x)\) for all \(x\in V\), hence \(u\) is a similitude. To complete the proof, suppose that \(u\), \(u^{\prime}\colon(V,q)\to(\widetilde{V},\widetilde{q})\) are similitudes such that \(\operatorname{Int}(u)=\operatorname{Int}(u^{\prime})\). Then \(\operatorname{Int}(u^{-1}u^{\prime})=\operatorname{Id}_{V}\), hence \(u^{-1}u^{\prime}\) lies in the center of \(\operatorname{End}V\), which is \(F\). Therefore, \(u\) and \(u^{\prime}\) differ by a scalar factor. ### Clifford algebras For any quadratic space \((V,q)\) over \(F\) we let \(C(V,q)\) denote the Clifford algebra of \((V,q)\) and \(C_{0}(V,q)\) its even Clifford algebra. We will only consider even-dimensional quadratic spaces; if \(\dim V=2m\), then the algebra \(C(V,q)\) is central simple of degree \(2^{m}\) and \(C_{0}(V,q)\) is semisimple with center a quadratic etale \(F\)-algebra \(Z\) given by the discriminant or Arf invariant of \(q\), see [17, Ch. 9]. In most cases considered through this text, the algebra \(Z\) is split, i.e., \(Z\simeq F\times F\). We may then define a polarization of \((V,q)\) as follows: **Definition 1.3**.: If \((V,q)\) is an even-dimensional quadratic space with trivial discriminant or Arf invariant, a _polarization_ of \((V,q)\) is a designation of the primitive central idempotents of \(C_{0}(V,q)\) as \(z_{+}\) and \(z_{-}\). Given a polarization of \((V,q)\), we let \(C_{+}(V,q)=C_{0}(V,q)z_{+}\) and \(C_{-}(V,q)=C_{0}(V,q)z_{-}\), so \[C_{0}(V,q)=C_{+}(V,q)\times C_{-}(V,q).\] Each even-dimensional quadratic space of trivial discriminant or Arf invariant thus has two possible polarizations. The algebra \(C(V,q)\) carries an involution \(\tau\) such that \(\tau(x)=x\) for all \(x\in V\). This involution preserves \(C_{0}(V,q)\) and restricts to an involution \(\tau_{0}\) on \(C_{0}(V,q)\). The type of the involutions \(\tau\) and \(\tau_{0}\) is determined in [13, (8.4)] as follows: * If \(\dim V\equiv 2\bmod 4\) the involution \(\tau_{0}\) does not leave \(Z\) fixed; we will not need to consider this case. * If \(\dim V\equiv 4\bmod 8\), then the involutions \(\tau\) and \(\tau_{0}\) are symplectic. When \(Z\simeq F\times F\), this means that \(\tau_{0}\) restricts to symplectic involutions on each of the simple components of \(C_{0}(V,q)\). * If \(\dim V\equiv 0\bmod 8\) and \(\operatorname{char}F\neq 2\), then the involutions \(\tau\) and \(\tau_{0}\) are orthogonal. * If \(\dim V\equiv 0\bmod 8\) and \(\operatorname{char}F=2\), then the involutions \(\tau\) and \(\tau_{0}\) are symplectic. Following Dolphin-Queguiner-Mathieu [7, Prop. 6.2], a canonical quadratic pair \((\tau,\mathfrak{g})\) can be defined on \(C(V,q)\) when2\(\dim V\equiv 0\bmod 8\) by associating to \(\tau\) the following semitrace: Footnote 2: Dolphin–Queguiner-Mathieu only assume \(\dim V\) even, \(\dim V\geq 6\), but they restrict to \(\operatorname{char}F=2\). \[\mathfrak{g}(s)=\operatorname{Trd}_{C(V,q)}(ee^{\prime}s)\in F\qquad\text{for $s \in\operatorname{Sym}(\tau)$,}\] where \(e\), \(e^{\prime}\in V\) are arbitrary vectors such that \(b(e,e^{\prime})=1\). 
If \(\operatorname{char}F\neq 2\), then for any such vectors \(e\), \(e^{\prime}\) and for every \(s\in\operatorname{Sym}(\tau)\) we have \[\operatorname{Trd}_{C(V,q)}(ee^{\prime}s)=\operatorname{Trd}_{C(V,q)}\bigl{(}\tau(ee^{\prime}s)\bigr{)}=\operatorname{Trd}_{C(V,q)}(se^{\prime}e)=\operatorname{Trd}_{C(V,q)}(e^{\prime}es).\] Therefore, \[\operatorname{Trd}_{C(V,q)}(ee^{\prime}s)=\tfrac{1}{2}\operatorname{Trd}_{C(V,q)}\bigl{(}(ee^{\prime}+e^{\prime}e)s\bigr{)}=\tfrac{1}{2}\operatorname{Trd}_{C(V,q)}(s),\] as expected. Likewise, Dolphin-Queguiner-Mathieu show in [7, Prop. 3.6] that a canonical quadratic pair \((\tau_{0},\mathfrak{g}_{0})\) can be defined on \(C_{0}(V,q)\) when \(\dim V\equiv 0\bmod 8\) by associating to \(\tau_{0}\) the following semitrace: \[\mathfrak{g}_{0}(s)=\operatorname{Trd}_{C_{0}(V,q)}(ee^{\prime}s)\in Z\qquad\text{for $s\in\operatorname{Sym}(\tau_{0})$,}\] where \(e\), \(e^{\prime}\in V\) are arbitrary vectors such that \(b(e,e^{\prime})=1\). If \(Z\simeq F\times F\), then \(C_{0}(V,q)\simeq C_{+}(V,q)\times C_{-}(V,q)\) for some central simple \(F\)-algebras \(C_{+}(V,q)\), \(C_{-}(V,q)\), and the quadratic pair \((\tau_{0},\mathfrak{g}_{0})\) defined above is a pair of quadratic pairs \((\tau_{+},\mathfrak{g}_{+})\) on \(C_{+}(V,q)\) and \((\tau_{-},\mathfrak{g}_{-})\) on \(C_{-}(V,q)\). Every similitude of quadratic spaces \(u\colon(V,q)\to(\widetilde{V},\widetilde{q})\) with multiplier \(\mu\) defines an \(F\)-isomorphism \(C_{0}(u)\colon C_{0}(V,q)\xrightarrow{\sim}C_{0}(\widetilde{V},\widetilde{q})\) such that \[C_{0}(u)(x\cdot y)=\mu^{-1}u(x)\cdot u(y)\qquad\text{for $x$, $y\in V$.}\] It is clear from the definition that \(C_{0}(u)\) preserves the canonical involutions \(\tau_{0}\) and \(\widetilde{\tau}_{0}\) on \(C_{0}(V,q)\) and \(C_{0}(\widetilde{V},\widetilde{q})\). If \(\dim V\equiv 0\) mod \(8\), then \(C_{0}(u)\) also preserves the semitraces \(\mathfrak{g}_{0}\) and \(\widetilde{\mathfrak{g}}_{0}\). To see this, observe that the images \(u(e)\), \(u(e^{\prime})\) of vectors \(e\), \(e^{\prime}\in V\) such that \(b(e,e^{\prime})=1\) satisfy \(\widetilde{b}\big{(}u(e),u(e^{\prime})\big{)}=\mu\).
We may therefore use \(\mu^{-1}u(e)\) and \(u(e^{\prime})\) to compute the semitrace \(\widetilde{\mathfrak{g}}_{0}\): for \(s\in\operatorname{Sym}(\tau_{0})\), \[\widetilde{\mathfrak{g}}_{0}\big{(}C_{0}(u)(s)\big{)}=\operatorname{Trd}_{C_{ 0}(\widetilde{V},\widetilde{q})}\big{(}\mu^{-1}u(e)u(e^{\prime})C_{0}(u)(s) \big{)}.\] Now, \(\mu^{-1}u(e)u(e^{\prime})=C_{0}(u)(ee^{\prime})\), hence by substituting in the preceding equation and using the property that algebra isomorphisms preserve reduced traces, we obtain \[\widetilde{\mathfrak{g}}_{0}\big{(}C_{0}(u)(s)\big{)}=\operatorname{Trd}_{C_ {0}(\widetilde{V},\widetilde{q})}\big{(}C_{0}(u)(ee^{\prime}s)\big{)}=C_{0}(u )\big{(}\operatorname{Trd}_{C_{0}(V,q)}(ee^{\prime}s)\big{)}=C_{0}(u)\big{(} \mathfrak{g}_{0}(s)\big{)}.\] Thus, \(C_{0}(u)\) is an isomorphism of algebras with involution \[C_{0}(u)\colon(C_{0}(V,q),\tau_{0})\xrightarrow{\sim}(C_{0}(\widetilde{V}, \widetilde{q}),\widetilde{\tau}_{0})\] and an isomorphism of algebras with quadratic pair if \(\dim V=\dim\widetilde{V}\equiv 0\) mod \(8\) \[C_{0}(u)\colon(C_{0}(V,q),\tau_{0},\mathfrak{g}_{0})\xrightarrow{\sim}(C_{0} (\widetilde{V},\widetilde{q}),\widetilde{\tau}_{0},\widetilde{\mathfrak{g}}_{ 0}).\] Among auto-similitudes \(u\in\operatorname{GO}(q)\) we may distinguish proper similitudes by considering the restriction of \(C_{0}(u)\) to the center \(Z\) of \(C_{0}(V,q)\): the similitude \(u\) is said to be _proper_ if \(C_{0}(u)\) fixes \(Z\) and _improper_ if \(C_{0}(u)\) restricts to the nontrivial \(F\)-automorphism of \(Z\), see [13, SS13.A]. The proper similitudes form a subgroup \(\operatorname{GO}^{+}(q)\) of index \(2\) in \(\operatorname{GO}(q)\), and we let \[\operatorname{O}^{+}(q)=\operatorname{O}(q)\cap\operatorname{GO}^{+}(q), \qquad\operatorname{PGO}^{+}(q)=\operatorname{GO}^{+}(q)/F^{\times}.\] #### Twisted forms Following ideas of Jacobson and Tits, an analogue of the even Clifford algebra for a central simple algebra with quadratic pair \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\) of even degree is defined in [13, SS8.B]. The Clifford algebra \(C(\mathfrak{A})\) is obtained by a functorial construction such that for every quadratic space \((V,q)\) of even dimension, the identification \(\operatorname{End}V=V\otimes V\) set up in SS1.1 yields an identification \[C(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})=C_{0}(V,q).\] This property implies that \(C(\mathfrak{A})\) is a semisimple algebra with center a quadratic etale \(F\)-algebra given by the discriminant of the quadratic pair \((\sigma,\mathfrak{f})\). **Definition 1.4**.: If the discriminant of \((\sigma,\mathfrak{f})\) is trivial, a _polarization_ of \(\mathfrak{A}\) is a designation of the primitive central idempotents of \(C(\mathfrak{A})\) as \(z_{+}\) and \(z_{-}\). A polarization induces the labeling of the simple components of \(C(\mathfrak{A})\) as \(C_{+}(\mathfrak{A})=C(\mathfrak{A})z_{+}\) and \(C_{-}(\mathfrak{A})=C(\mathfrak{A})z_{-}\), so \[C(\mathfrak{A})=C_{+}(\mathfrak{A})\times C_{-}(\mathfrak{A}).\] The algebra \(C(\mathfrak{A})\) comes equipped with a canonical linear map \[c\colon A\to C(\mathfrak{A})\] whose image generates \(C(\mathfrak{A})\) as an \(F\)-algebra. 
In the split case \(A=\operatorname{End}V\), the map \(c\) is given by multiplication in \(C(V,q)\): \[c\colon V\otimes V\to C_{0}(V,q),\qquad x\otimes y\mapsto x\cdot y.\] The algebra \(C(\mathfrak{A})\) carries a canonical involution \(\underline{\sigma}\) characterized by the condition that \(\underline{\sigma}\big{(}c(a)\big{)}=c\big{(}\sigma(a)\big{)}\) for \(a\in A\). If \(\deg A\equiv 0\) mod \(8\), Dolphin-Queguiner-Mathieu show that a canonical quadratic pair \((\underline{\sigma},\mathfrak{f})\) is defined on \(C(\mathfrak{A})\) by associating to \(\underline{\sigma}\) the following semitrace: \[\mathfrak{f}(s)=\operatorname{Trd}_{C(\mathfrak{A})}(c(a)s)\qquad\text{for }s \in\operatorname{Sym}(\underline{\sigma}),\] where \(a\in A\) is any element such that \(\operatorname{Trd}_{A}(a)=1\), see [7, Def. 3.3]. These constructions are compatible with the corresponding definitions in the split case, in the sense that for every even-dimensional quadratic space \((V,q)\) the standard identification \(\operatorname{End}V=V\otimes V\) of SS1.1 yields identifications of algebras with involution or quadratic pair: \[(C(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q}),\underline{ \sigma}) =(C_{0}(V,q),\tau_{0}) \text{if }\dim V\equiv 0\bmod 4,\] \[(C(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q}),\underline{ \sigma},\underline{\mathfrak{f}}) =(C_{0}(V,q),\tau_{0},\mathfrak{g}_{0}) \text{if }\dim V\equiv 0\bmod 8.\] By functoriality of the Clifford algebra construction, every isomorphism of algebras with quadratic pair \(\varphi\colon\mathfrak{A}\xrightarrow{\sim}\widetilde{\mathfrak{A}}\) induces an isomorphism of algebras with involution or with quadratic pair \[C(\varphi)\colon(C(\mathfrak{A}),\underline{\sigma})\xrightarrow{\sim}(C( \widetilde{\mathfrak{A}}),\underline{\widetilde{\sigma}})\quad\text{or} \quad(C(\mathfrak{A}),\underline{\sigma},\underline{\mathfrak{f}})\xrightarrow {\sim}(C(\widetilde{\mathfrak{A}}),\underline{\widetilde{\sigma}},\widetilde{ \mathfrak{f}})\] such that \[C(\varphi)\big{(}c(a)\big{)}=c\big{(}\varphi(a)\big{)}\qquad\text{for }a\in A.\] As in the split case, we may distinguish between proper and improper similitudes: every similitude \(u\in\operatorname{GO}(\mathfrak{A})\) induces an \(F\)-automorphism \(\operatorname{Int}(u)\) of \(\mathfrak{A}\), hence an \(F\)-automorphism \(C\big{(}\operatorname{Int}(u)\big{)}\) of \(C(\mathfrak{A})\). The similitude \(u\) is said to be _proper_ if \(C\big{(}\operatorname{Int}(u)\big{)}\) leaves the center of \(C(\mathfrak{A})\) elementwise fixed; otherwise it is said to be _improper_. This definition agrees with the previous definition of proper similitude in the case where \(\mathfrak{A}=(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})\) for a quadratic space \((V,q)\), because \(C\big{(}\operatorname{Int}(u)\big{)}=C_{0}(u)\) for every similitude \(u\in\operatorname{GO}(q)\), see [13, (13.1)]. 
Proper similitudes form a subgroup \(\operatorname{GO}^{+}(\mathfrak{A})\) of index \(1\) or \(2\) in \(\operatorname{GO}(\mathfrak{A})\), and we let \[\operatorname{O}^{+}(\mathfrak{A})=\operatorname{O}(\mathfrak{A})\cap \operatorname{GO}^{+}(\mathfrak{A}),\qquad\operatorname{PGO}^{+}(\mathfrak{A })=\operatorname{GO}^{+}(\mathfrak{A})/F^{\times}.\] These groups are groups of rational points of algebraic groups \(\mathbf{O}^{+}(\mathfrak{A})\), \(\mathbf{GO}^{+}(\mathfrak{A})\) and \(\mathbf{PGO}^{+}(\mathfrak{A})\), which are the connected components of the identity in \(\mathbf{O}(\mathfrak{A})\), \(\mathbf{GO}(\mathfrak{A})\) and \(\mathbf{PGO}(\mathfrak{A})\), see [13, SS23.B]. ### Clifford groups Let \((V,q)\) be a quadratic space of even dimension. The multiplicative group of \(C_{0}(V,q)\) acts on \(C(V,q)\) by conjugation. The _special Clifford group_\(\mathbf{\Gamma}^{+}(q)\) is defined in [13, p. 349] as the normalizer of the subspace \(V\). Thus, for every commutative \(F\)-algebra \(R\), letting \(V_{R}=V\otimes_{F}R\), \[\mathbf{\Gamma}^{+}(q)(R)=\{\xi\in C_{0}(V,q)_{R}^{\times}\mid\xi\cdot V_{R} \cdot\xi^{-1}=V_{R}\}.\] For \(\xi\in\mathbf{\Gamma}^{+}(q)(R)\), the map \(\operatorname{Int}(\xi)|_{V_{R}}\colon V_{R}\to V_{R}\) is a proper isometry. Mapping \(\xi\) to \(\operatorname{Int}(\xi)|_{V_{R}}\) defines a morphism of algebraic groups \(\chi\) known as the _vector representation_, which fits in an exact sequence \[1\to\mathbf{G}_{\mathbf{m}}\to\mathbf{\Gamma}^{+}(q)\xrightarrow{\chi} \mathbf{O}^{+}(q)\to 1, \tag{1.1}\] where \(\mathbf{G}_{\mathbf{m}}\) is the multiplicative group, see [13, p. 349]. Mapping \(\xi\in\mathbf{\Gamma}^{+}(q)(R)\) to \(\tau_{0}(\xi)\xi\) defines a morphism \[\underline{\mu}\colon\ \mathbf{\Gamma}^{+}(q)\to\mathbf{G}_{\mathbf{m}}.\] Its kernel is the Spin group \(\mathbf{Spin}(q)\). It is an algebraic group to which we may restrict the vector representation to obtain the following exact sequence: \[1\to\boldsymbol{\mu}_{2}\to\mathbf{Spin}(q)\xrightarrow{\chi}\mathbf{O}^{+}( q)\to 1,\] where \(\boldsymbol{\mu}_{2}\) is the algebraic group scheme defined by \[\boldsymbol{\mu}_{2}(R)=\{\rho\in R\mid\rho^{2}=1\}\quad\text{for every commutative $F$-algebra $R$}.\] Note that \(\boldsymbol{\mu}_{2}\) is not smooth if \(\operatorname{char}F=2\). Extended Clifford groupsLet \(Z\) be the center of \(C_{0}(V,q)\). Henceforth, we assume \(\dim V\equiv 0\bmod 4\), so the canonical involution \(\tau_{0}\) acts trivially on \(Z\). Let \(\mathbf{Sim}(\tau_{0})\) be the group of similitudes of \((C_{0}(V,q),\tau_{0})\), whose rational points over any commutative \(F\)-algebra \(R\) is \[\mathbf{Sim}(\tau_{0})(R)=\{\xi\in C_{0}(V,q)_{R}^{\times}\mid\tau_{0}(\xi)\xi \in Z_{R}^{\times}\}.\] The multiplier map \(\xi\mapsto\tau_{0}(\xi)\xi\) is a morphism \[\underline{\mu}\colon\mathbf{Sim}(\tau_{0})\to R_{Z/F}(\mathbf{G_{m}}),\] where \(R_{Z/F}(\mathbf{G_{m}})\) is the corestriction (or Weil's _restriction of scalars_) of the multiplicative group. Mapping \(x\in C(V,q)_{R}\) and \(\xi\in\mathbf{Sim}(\tau_{0})(R)\) to \(\tau_{0}(\xi)x\xi\) defines an action of \(\mathbf{Sim}(\tau_{0})\) on \(C(V,q)\) (on the right). The _extended Clifford group_\(\mathbf{\Omega}(q)\) is defined3 as the normalizer of \(V\). Thus, for every commutative \(F\)-algebra \(R\), Footnote 3: For a more general definition covering the case where \(\dim V\equiv 2\bmod 4\), see [13, §13.B]. 
\[\mathbf{\Omega}(q)(R)=\{\xi\in\mathbf{Sim}(\tau_{0})(R)\mid\tau_{0}(\xi)\cdot V _{R}\cdot\xi=V_{R}\}.\] We proceed to show that \(\mathbf{\Gamma}^{+}(q)\) is a subgroup of \(\mathbf{\Omega}(q)\) by reformulating the condition that \(\tau_{0}(\xi)\cdot V_{R}\cdot\xi=V_{R}\). Let \(\iota\colon Z\to Z\) denote the nontrivial \(F\)-automorphism of \(Z\). Note that \(xz=\iota(z)x\) for all \(x\in V\) and \(z\in Z\). **Lemma 1.5**.: _Let \(R\) be a commutative \(F\)-algebra and let \(\xi\in\mathbf{Sim}(\tau_{0})(R)\) and \(u\in\mathbf{GL}(V)(R)\). The following are equivalent:_ 1. \(\tau_{0}(\xi)x\xi=\sigma_{b}(u)(x)\) _for all_ \(x\in V_{R}\)_;_ 2. \(u(y)=\iota\big{(}\underline{\mu}(\xi)\big{)}\xi y\xi^{-1}\) _for all_ \(y\in V_{R}\)_._ _When these conditions hold, then \(u\in\mathbf{GO}^{+}(q)(R)\), \(C_{0}(u)=\mathrm{Int}(\xi)\) and \(\mu(u)=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\)._ Proof.: Suppose (a) holds. Squaring each side of the equation yields \[\tau_{0}(\xi)x\underline{\mu}(\xi)x\xi=q\big{(}\sigma_{b}(u)(x)\big{)}\qquad \text{for all }x\in V_{R}\] hence, since \(\tau_{0}(\xi)x\underline{\mu}(\xi)x\xi=\tau_{0}(\xi)x^{2}\xi\big{(}\underline {\mu}(\xi)\big{)}=q(x)N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\), \[q\big{(}\sigma_{b}(u)(x)\big{)}=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}q(x) \qquad\text{for all }x\in V_{R}.\] It follows that \(\sigma_{b}(u)\in\mathbf{GO}(q)(R)\) and \(\mu\big{(}\sigma_{b}(u)\big{)}=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\), hence also \(u\in\mathbf{GO}(q)(R)\) and \(\mu(u)=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\). On the other hand, multiplying each side of (a) on the left by \(\xi\) and on the right by \(\xi^{-1}\) yields \[\underline{\mu}(\xi)x=\xi\sigma_{b}(u)(x)\xi^{-1}.\] Letting \(y=\sigma_{b}(u)(x)\), we have \(u(y)=\mu(u)x\). By substituting in the last displayed equation we obtain \[\mu(u)^{-1}\underline{\mu}(\xi)u(y)=\xi y\xi^{-1}.\] As \(\mu(u)=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\), condition (b) follows. Now, suppose (b) holds. Squaring each side of the equation yields \[q\big{(}u(y)\big{)}=\iota\big{(}\underline{\mu}(\xi)\big{)}\underline{\mu}( \xi)\xi y^{2}\xi^{-1}=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}q(y)\qquad \text{for all }y\in V_{R},\] hence \(u\in\mathbf{GO}(q)(R)\) and \(\mu(u)=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\). On the other hand, multiplying each side of (b) by \(\tau_{0}(\xi)\) on the left and by \(\xi\) on the right yields \[\tau_{0}(\xi)u(y)\xi=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}y=\mu(u)y\qquad \text{for all }y\in V_{R}.\] Letting \(x=u(y)\), we have \(\sigma_{b}(u)(x)=\mu(u)y\), hence by substituting in the last displayed equation we obtain (a). To complete the proof, we compute \(C_{0}(u)\) using (b). For \(x\), \(y\in V_{R}\), taking into account that \(\mu(u)=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\) we find \[C_{0}(u)(xy)=\mu(u)^{-1}u(x)u(y)=\mu(u)^{-1}\iota\big{(}\underline{\mu}(\xi) \big{)}\xi x\xi^{-1}\iota\big{(}\underline{\mu}(\xi)\big{)}\xi y\xi^{-1}=\xi xy \xi^{-1}.\] Since \(\xi\in C_{0}(V,q)_{R}\), it follows that \(C_{0}(u)\) restricts to the identity on \(Z_{R}\), hence \(u\) is a proper similitude. For \(\xi\in\mathbf{\Omega}(q)(R)\), the map \(x\mapsto\tau_{0}(\xi)x\xi\) is an invertible linear operator on \(V_{R}\). If \(u\in\mathbf{GL}(V)(R)\) is the image of this operator under \(\sigma_{b}\), then condition (a) of Lemma 1.5 holds for this \(u\). 
We write \(u=\chi_{0}(\xi)\), so \(\chi_{0}(\xi)\in\mathbf{GO}^{+}(q)(R)\) is equivalently defined by any of the two equations \[\tau_{0}(\xi)x\xi=\sigma_{b}\big{(}\chi_{0}(\xi)\big{)}(x)\quad\text{and}\quad \chi_{0}(\xi)(x)=\iota\big{(}\underline{\mu}(\xi)\big{)}\xi x\xi^{-1}\quad \text{for all $x\in V_{R}$.} \tag{1.2}\] The map \(\chi_{0}\) is a morphism \[\chi_{0}\colon\mathbf{\Omega}(q)\to\mathbf{GO}^{+}(q).\] Lemma 1.5 yields \[\operatorname{Int}\nolimits_{\mathbf{\Omega}(q)}=C_{0}\circ\chi_{0}\in \mathbf{Aut}\big{(}C_{0}(V,q)\big{)}\qquad\text{and}\qquad N_{Z/F}\circ \underline{\mu}=\mu\circ\chi_{0}\colon\mathbf{\Omega}(q)\to\mathbf{G}_{\mathbf{ m}}. \tag{1.3}\] **Proposition 1.6**.: _The special Clifford group \(\mathbf{\Gamma}^{+}(q)\) is a subgroup of \(\mathbf{\Omega}(q)\). More precisely,_ \[\mathbf{\Gamma}^{+}(q)=\underline{\mu}^{-1}(\mathbf{G}_{\mathbf{m}})\subset \mathbf{\Omega}(q).\] _Moreover, \(\chi_{0}|_{\mathbf{\Gamma}^{+}(q)}=\underline{\mu}\cdot\chi\colon\,\mathbf{ \Gamma}^{+}(q)\to\mathbf{O}^{+}(q)\), hence \(\chi_{0}\) and \(\chi\) coincide on \(\mathbf{Spin}(q)\)._ Proof.: As pointed out in the definition of \(\mathbf{Spin}(q)\) above, for every commutative \(F\)-algebra \(R\) the multiplier \(\underline{\mu}(\xi)\) of any \(\xi\in\mathbf{\Gamma}^{+}(q)(R)\) lies in \(R^{\times}\). Therefore, Lemma 1.5 shows that \(\xi V_{R}\xi^{-1}=V_{R}\) implies \(\tau_{0}(\xi)V_{R}\xi=V_{R}\), hence \(\mathbf{\Gamma}^{+}(q)(R)\subset\mathbf{\Omega}(q)(R)\). Conversely, if \(\xi\in\mathbf{\Omega}(q)(R)\) and \(\underline{\mu}(\xi)\in R^{\times}\), Lemma 1.5 shows that \(\tau_{0}(\xi)V_{R}\xi=V_{R}\) implies \(\xi V_{R}\xi^{-1}=V_{R}\), hence \(\xi\in\mathbf{\Gamma}^{+}(q)(R)\). Therefore \(\mathbf{\Gamma}^{+}(q)(R)\) is the subgroup of elements in \(\mathbf{\Omega}(q)(R)\) whose multiplier lies in \(R^{\times}\). Moreover, for \(\xi\in\mathbf{\Gamma}^{+}(q)(R)\) we have \(\chi(\xi)(x)=\xi x\xi^{-1}\) and \(\chi_{0}(\xi)(x)=\underline{\mu}(\xi)\xi x\xi^{-1}\) for all \(x\in V_{R}\), hence \(\chi_{0}(\xi)=\underline{\mu}(\xi)\chi(\xi)\). Twisted forms.Twisted forms of \(\mathbf{\Gamma}^{+}(q)\) and \(\mathbf{\Omega}(q)\) are defined in [13, SS13.B and SS23.B] by using a Clifford bimodule \(B(\mathfrak{A})\) associated to any central simple algebra of even degree with quadratic pair \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\). This bimodule is defined in such a way that for every even-dimensional quadratic space \((V,q)\) the standard identification \(\operatorname{End}V=V\otimes V\) yields \[B(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})=V\otimes C_{1}(V,q),\] where \(C_{1}(V,q)\) is the odd part of the Clifford algebra \(C(V,q)\). The left action \(*\) and the right action \(*\) of \(C(\mathfrak{A})\) on \(B(\mathfrak{A})\) are given in the split case by \[\xi*(x\otimes\eta)=x\otimes(\xi\eta)\qquad\text{and}\qquad(x\otimes\eta)\cdot \xi=x\otimes(\eta\xi)\] for \(\xi\in C_{0}(V,q)\), \(\eta\in C_{1}(V,q)\) and \(x\in V\). The bimodule \(B(\mathfrak{A})\) also carries a left \(A\)-module structure and a canonical left \(A\)-module homomorphism \(b\colon A\to B(\mathfrak{A})\) (for which we use the exponential notation) given in the split case by \[a(x\otimes\eta)=a(x)\otimes\eta\quad\text{and}\quad(x\otimes y)^{b}=x\otimes y \in V\otimes C_{1}(V,q)\] for \(a\in\operatorname{End}V\), \(x\), \(y\in V\) and \(\eta\in C_{1}(V,q)\). 
The multiplicative group of \(C(\mathfrak{A})\) acts on \(B(\mathfrak{A})\) on the right as follows: \(\eta\mapsto\xi^{-1}*\eta\cdot\xi\) for \(\xi\in C(\mathfrak{A})^{\times}\) and \(\eta\in B(\mathfrak{A})\). The _Clifford group_ \(\mathbf{\Gamma}(\mathfrak{A})\) is the normalizer of the subspace \(A^{b}\subset B(\mathfrak{A})\), hence for every commutative \(F\)-algebra \(R\) \[\mathbf{\Gamma}(\mathfrak{A})(R)=\{\xi\in C(\mathfrak{A})_{R}^{\times}\mid\xi^{-1}*A_{R}^{b}\cdot\xi=A_{R}^{b}\}.\] On the same model, when \(\deg A\equiv 0\bmod 4\), we define the _extended Clifford group_ \(\mathbf{\Omega}(\mathfrak{A})\) as the normalizer of \(A^{b}\) under the action on \(B(\mathfrak{A})\) of the group of similitudes of the canonical involution \(\underline{\sigma}\) by \[\xi\mapsto(\eta\mapsto\underline{\sigma}(\xi)*\eta\cdot\xi).\] (An alternative definition, which also covers the case where \(\deg A\equiv 2\bmod 4\), is given in [13, §23.B].) Thus, letting \(Z\) denote the center of \(C(\mathfrak{A})\), \[\mathbf{\Omega}(\mathfrak{A})(R)=\{\xi\in C(\mathfrak{A})_{R}^{\times}\mid\underline{\sigma}(\xi)\xi\in Z_{R}^{\times}\text{ and }\underline{\sigma}(\xi)*A_{R}^{b}\cdot\xi=A_{R}^{b}\}\] for every commutative \(F\)-algebra \(R\). Let \(\underline{\mu}\) denote the multiplier map \[\underline{\mu}\colon\,\mathbf{\Omega}(\mathfrak{A})\to R_{Z/F}(\mathbf{G_{m}}),\qquad\xi\mapsto\underline{\sigma}(\xi)\xi\] and define morphisms \[\chi\colon\,\mathbf{\Gamma}(\mathfrak{A})\to\mathbf{O}^{+}(\mathfrak{A})\qquad\text{and}\qquad\chi_{0}\colon\,\mathbf{\Omega}(\mathfrak{A})\to\mathbf{GO}^{+}(\mathfrak{A})\] by \[\xi^{-1}*1^{b}\cdot\xi=\chi(\xi)^{b}\qquad\text{and}\qquad\underline{\sigma}(\xi)*1^{b}\cdot\xi=\chi_{0}(\xi)^{b},\] see [13, (13.11) and (13.29)]. In the split case where \((\sigma,\mathfrak{f})=(\sigma_{b},\mathfrak{f}_{q})\) for some quadratic space \((V,q)\), the standard identification yields \[\mathbf{\Gamma}(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})=\mathbf{\Gamma}^{+}(q)\qquad\text{and}\qquad\mathbf{\Omega}(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})=\mathbf{\Omega}(q),\] and the maps \(\chi\) and \(\chi_{0}\) are identical respectively to the vector representation and to the map \(\chi_{0}\) defined in (1.2). We next show that they satisfy analogues of (1.3) and Proposition 1.6. **Proposition 1.7**.: _Let \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\) be an \(F\)-algebra with quadratic pair of degree divisible by \(4\). The Clifford group \(\mathbf{\Gamma}(\mathfrak{A})\) is a subgroup of \(\mathbf{\Omega}(\mathfrak{A})\). More precisely,_ \[\mathbf{\Gamma}(\mathfrak{A})=\underline{\mu}^{-1}(\mathbf{G_{m}})\subset\mathbf{\Omega}(\mathfrak{A}).\] _Moreover, \(R_{Z/F}(\mathbf{G_{m}})\subset\mathbf{\Omega}(\mathfrak{A})\) and \(\chi_{0}|_{R_{Z/F}(\mathbf{G_{m}})}=N_{Z/F}\colon R_{Z/F}(\mathbf{G_{m}})\to\mathbf{G_{m}}\),_ \[\operatorname{Int}(\xi)=C\big{(}\operatorname{Int}(\chi_{0}(\xi))\big{)}\quad\text{and}\quad\mu\big{(}\chi_{0}(\xi)\big{)}=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\qquad\text{for }\xi\in\mathbf{\Omega}(\mathfrak{A})(R),\ R\text{ a commutative }F\text{-algebra},\] _and_ \[\chi_{0}|_{\mathbf{\Gamma}(\mathfrak{A})}=\underline{\mu}\cdot\chi\colon\,\mathbf{\Gamma}(\mathfrak{A})\to\mathbf{O}^{+}(\mathfrak{A}).\] Proof.: The first part is proved in [13, (13.25)]. (Alternatively, it follows from Proposition 1.6 by Galois descent from a Galois splitting field of \(A\).) Let \(R\) be a commutative \(F\)-algebra.
For \(z\in Z_{R}^{\times}\) we have \(\underline{\sigma}(z)=z\) and \(z*1^{b}=1^{b}\cdot\iota(z)\), hence \(z\in\mathbf{\Omega}(\mathfrak{A})(R)\) with \(\chi_{0}(z)=N_{Z/F}(z)\). The rest follows from Proposition 1.6 by scalar extension to a splitting field of \(A\). Define \(\chi^{\prime}\colon\mathbf{\Omega}(\mathfrak{A})\to\mathbf{PGO}^{+}(\mathfrak{A})\) by composing \(\chi_{0}\) with the canonical map \(\mathbf{GO}^{+}(\mathfrak{A})\to\mathbf{PGO}^{+}(\mathfrak{A})\). Recall from [13, p. 352] the following commutative diagram with exact rows, whose vertical maps are canonical: (1.4) The exact rows of this diagram show that \(\mathbf{\Gamma}(\mathfrak{A})\) and \(\mathbf{\Omega}(\mathfrak{A})\) are connected, since \(\mathbf{G_{m}}\), \(\mathbf{O}^{+}(\mathfrak{A})\), \(R_{Z/F}(\mathbf{G_{m}})\) and \(\mathbf{PGO}^{+}(\mathfrak{A})\) are connected. In the next proposition, we write \(R^{1}_{Z/F}(\mathbf{G_{m}})\) for the kernel of the norm map \[N_{Z/F}\colon R_{Z/F}(\mathbf{G_{m}})\to\mathbf{G_{m}}.\] **Proposition 1.8**.: _Let \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\) be an algebra with quadratic pair of degree divisible by \(4\). The following sequence is exact:_ \[1\to R^{1}_{Z/F}(\mathbf{G_{m}})\to\mathbf{\Omega}(\mathfrak{A})\xrightarrow{ \chi_{0}}\mathbf{GO}^{+}(\mathfrak{A})\to 1.\] Proof.: Since \(\ker\chi_{0}\subset\ker\chi^{\prime}\), it follows from the exactness of the lower row in (1.4) that \(\ker\chi_{0}\subset R_{Z/F}(\mathbf{G_{m}})\). Moreover, the following diagram is commutative with exact rows: Since we already know that \(\ker\chi_{0}\subset R_{Z/F}(\mathbf{G_{m}})\), it follows that \(\ker\chi_{0}=R^{1}_{Z/F}(\mathbf{G_{m}})\). As \(\mathbf{GO}^{+}(\mathfrak{A})\) is smooth, to prove that \(\chi_{0}\) is onto it suffices by [13, (22.3)] to see that \(\chi_{0}\) defines a surjective map on the group of rational points over an algebraic closure. This is clear from the last commutative diagram above, because the norm \(N_{Z/F}\) is surjective when \(F\) is algebraically closed. As in the split case, we define the Spin group \[\mathbf{Spin}(\mathfrak{A})=\ker(\underline{\mu}\colon\mathbf{\Gamma}( \mathfrak{A})\to\mathbf{G_{m}})=\ker\bigl{(}\underline{\mu}\colon\mathbf{ \Omega}(\mathfrak{A})\to R_{Z/F}(\mathbf{G_{m}})\bigr{)}\] and we have an exact sequence (see [13, p. 352]): \[1\to\boldsymbol{\mu}_{2}\to\mathbf{Spin}(\mathfrak{A})\xrightarrow{\chi} \mathbf{O}^{+}(\mathfrak{A})\to 1.\] We may also restrict the map \(\chi^{\prime}\) to \(\mathbf{Spin}(\mathfrak{A})\) to obtain a morphism \(\chi^{\prime}\colon\mathbf{Spin}(\mathfrak{A})\to\mathbf{PGO}^{+}(\mathfrak{A})\). This morphism is surjective since the vector representation \(\chi\) is surjective and the canonical map \(\mathbf{O}^{+}(\mathfrak{A})\to\mathbf{PGO}^{+}(\mathfrak{A})\) is surjective. Its kernel is \(R_{Z/F}(\mathbf{G_{m}})\cap\mathbf{Spin}(\mathfrak{A})=R_{Z/F}(\boldsymbol{ \mu}_{2})\), hence the following sequence is exact: \[1\to R_{Z/F}(\boldsymbol{\mu}_{2})\to\mathbf{Spin}(\mathfrak{A})\xrightarrow{ \chi^{\prime}}\mathbf{PGO}^{+}(\mathfrak{A})\to 1. \tag{1.5}\] The last proposition refers to the canonical quadratic pair \((\underline{\sigma},\underline{\mathfrak{f}})\) on \(C(\mathfrak{A})\) defined by Dolphin-Queguiner-Mathieu (see SS1.2). 
Assuming \(\deg A\equiv 0\bmod 8\), we write \(\mathfrak{C}(\mathfrak{A})\) for the Clifford algebra of \(\mathfrak{A}\) with its canonical quadratic pair: \[\mathfrak{C}(\mathfrak{A})=(C(\mathfrak{A}),\underline{\sigma},\underline{ \mathfrak{f}}).\] **Proposition 1.9**.: _Let \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\) be an algebra with quadratic pair. If \(\deg A\equiv 0\bmod 8\), then \(\mathbf{\Omega}(\mathfrak{A})\subset\mathbf{GO}^{+}\big{(}\mathfrak{C}( \mathfrak{A})\big{)}\)._ Proof.: Let \(R\) be a commutative \(F\)-algebra and let \(\xi\in\mathbf{\Omega}(\mathfrak{A})(R)\). Since \(\chi_{0}(\xi)\in\mathbf{GO}(\mathfrak{A})(R)\), it follows that \(\operatorname{Int}\bigl{(}\chi_{0}(\xi)\bigr{)}\) is an automorphism of \(\mathfrak{A}_{R}\), hence \(C\bigl{(}\operatorname{Int}(\chi_{0}(\xi))\bigr{)}\) is an automorphism of \(\mathfrak{C}(\mathfrak{A})_{R}\). But Proposition 1.7 shows that \(C\bigl{(}\operatorname{Int}(\chi_{0}(\xi))\bigr{)}=\operatorname{Int}(\xi)\), hence \(\xi\in\mathbf{GO}\big{(}\mathfrak{C}(\mathfrak{A})\big{)}(R)\). We thus see that \(\mathbf{\Omega}(\mathfrak{A})\subset\mathbf{GO}\big{(}\mathfrak{C}( \mathfrak{A})\big{)}\). Since \(\mathbf{\Omega}(\mathfrak{A})\) is connected, it actually lies in the connected component \(\mathbf{GO}^{+}\big{(}\mathfrak{C}(\mathfrak{A})\big{)}\). ### Lie algebras of orthogonal groups Throughout this subsection, \(A\) is a central simple algebra of even degree \(n=2m\) over an arbitrary field \(F\), and \((\sigma,\mathfrak{f})\) is a quadratic pair on \(A\). We discuss several Lie algebras related to the algebra with quadratic pair \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\), and obtain different results depending on whether the characteristic is \(2\) or not. The discrepancies derive from the observation that the Lie algebra of the algebraic group scheme \(\boldsymbol{\mu}_{2}\) is \(F\) when \(\operatorname{char}F=2\), whereas it vanishes when \(\operatorname{char}F\neq 2\). The bracket \([a,b]=ab-ba\) turns \(A\) into a Lie algebra denoted by \(\mathfrak{L}(A)\). As usual, for \(a\in A\) we let \(\operatorname{ad}_{a}\colon A\to A\) denote the linear operator defined by \[\operatorname{ad}_{a}(x)=[a,x]\qquad\text{for $x\in A$}.\] The following are subalgebras of \(\mathfrak{L}(A)\) associated with the quadratic pair \((\sigma,\mathfrak{f})\); they are the Lie algebras of the algebraic group schemes \(\mathbf{O}(\mathfrak{A})\) and \(\mathbf{GO}(\mathfrak{A})\) respectively, see [13, SS23.B]: \[\mathfrak{o}(\mathfrak{A}) =\operatorname{Alt}(\sigma)=\{a-\sigma(a)\mid a\in A\}\] \[\mathfrak{go}(\mathfrak{A}) =\{g\in A\mid\sigma(g)+g\in F\text{ and }\,\mathfrak{f}([g,s])=0 \text{ for all }s\in\operatorname{Sym}(\sigma)\}.\] Note that \(\mathfrak{o}(\mathfrak{A})\) depends only on \(\sigma\) and not on \(\mathfrak{f}\). Clearly, \(F\subset\mathfrak{go}(\mathfrak{A})\). We let \[\mathfrak{pgo}(\mathfrak{A})=\mathfrak{go}(\mathfrak{A})/F\] and define \[\dot{\mu}\colon\mathfrak{go}(\mathfrak{A})\to F\qquad\text{by}\quad\dot{\mu}( g)=\sigma(g)+g.\] This map is the differential of the multiplier morphism \(\mu\colon\mathbf{GO}(\mathfrak{A})\to\mathbf{G_{m}}\), hence it is a Lie algebra homomorphism. **Proposition 1.10**.: _Let \(\ell\in A\) be such that \(\mathfrak{f}(s)=\operatorname{Trd}_{A}(\ell s)\) for all \(s\in\operatorname{Sym}(\sigma)\). 
Then_ \[\begin{array}{rcl}\mathfrak{go}(\mathfrak{A})&=&\{g\in A\mid \operatorname{ad}_{g}\circ\sigma=\sigma\circ\operatorname{ad}_{g}\text{ and }(\mathfrak{f}\circ\operatorname{ad}_{g})(s)=0\text{ for all }s\in \operatorname{Sym}(\sigma)\}\\ &=&\{g\in A\mid\operatorname{Trd}_{A}(gs)=(\sigma(g)+g)\,\mathfrak{f}(s)\text{ for all }s\in \operatorname{Sym}(\sigma)\}\\ &=&\mathfrak{o}(\mathfrak{A})+\ell F\end{array} \tag{1.6}\] _and the following sequence is exact:_ \[0\to\mathfrak{o}(\mathfrak{A})\to\mathfrak{go}(\mathfrak{A})\xrightarrow{ \dot{\mu}}F\to 0. \tag{1.7}\] _Moreover,_ \[\dim\mathfrak{o}(\mathfrak{A})=\dim\mathfrak{pgo}(\mathfrak{A})=m(2m-1) \qquad\text{and}\qquad\dim\mathfrak{go}(\mathfrak{A})=m(2m-1)+1.\] _If \(\operatorname{char}F\neq 2\), the inclusion \(\mathfrak{o}(\mathfrak{A})\hookrightarrow\mathfrak{go}(\mathfrak{A})\) is split by the map \(\frac{1}{2}(\operatorname{Id}-\sigma)\colon\mathfrak{go}(\mathfrak{A})\to \mathfrak{o}(\mathfrak{A})\), and it induces a canonical isomorphism_ \[\mathfrak{o}(\mathfrak{A})\xrightarrow{\sim}\mathfrak{pgo}(\mathfrak{A}).\] _If \(\operatorname{char}F=2\), the map \(\dot{\mu}\) induces a map \(\mathfrak{pgo}(\mathfrak{A})\to F\) for which we also use the notation \(\dot{\mu}\), and the map \(\mathfrak{o}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A})\) induced by the inclusion \(\mathfrak{o}(\mathfrak{A})\hookrightarrow\mathfrak{go}(\mathfrak{A})\) fits into an exact sequence_ \[0\to F\to\mathfrak{o}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A})\xrightarrow{ \dot{\mu}}F\to 0.\] Proof.: For \(g\), \(x\in A\), \[(\operatorname{ad}_{g}\circ\!\sigma-\sigma\circ\operatorname{ad}_{g})(x)=[g, \sigma(x)]-\sigma\big{(}[g,x]\big{)}=[g+\sigma(g),\sigma(x)].\] Therefore, \(\operatorname{ad}_{g}\circ\!\sigma=\sigma\circ\operatorname{ad}_{g}\) if and only if \(g+\sigma(g)\in F\), and the definition of \(\mathfrak{go}(\mathfrak{A})\) readily yields \[\mathfrak{go}(\mathfrak{A})=\{g\in A\mid\operatorname{ad}_{g}\circ\!\sigma= \sigma\circ\operatorname{ad}_{g}\text{ and }(\mathfrak{f}\circ\operatorname{ad}_{g})(s)=0\text{ for all }s\in \operatorname{Sym}(\sigma)\}.\] Now, suppose \(g\in A\) satisfies \(\sigma(g)+g\in F\), and let \(\mu=\sigma(g)+g\). For \(s\in\operatorname{Sym}(\sigma)\) we have \[\operatorname{Trd}_{A}(gs)=\mathfrak{f}\big{(}gs+\sigma(gs)\big{)}=\mathfrak{ f}\big{(}gs+s\sigma(g)\big{)}=\mathfrak{f}\big{(}gs+s(\mu-g)\big{)}=\mathfrak{f}([g,s ])+\mu\,\mathfrak{f}(s). \tag{1.8}\] Therefore, \(\operatorname{Trd}_{A}(gs)=\mu\,\mathfrak{f}(s)\) for \(g\in\mathfrak{go}(\mathfrak{A})\) and \(s\in\operatorname{Sym}(\sigma)\), hence \[\mathfrak{go}(\mathfrak{A})\subset\{g\in A\mid\operatorname{Trd}_{A}(gs)=( \sigma(g)+g)\,\mathfrak{f}(s)\text{ for all }s\in\operatorname{Sym}(\sigma)\}.\] To prove the reverse inclusion, suppose \(g\in A\) satisfies \(\operatorname{Trd}_{A}(gs)=(\sigma(g)+g)\,\mathfrak{f}(s)\) for all \(s\in\operatorname{Sym}(\sigma)\). We first show that \(\sigma(g)+g\in F\). If \(x\in A\) is such that \(\operatorname{Trd}_{A}(x)=1\), then \(\mathfrak{f}(\sigma(x)+x)=1\), hence the hypothesis on \(g\) yields \(\operatorname{Trd}_{A}\big{(}g(\sigma(x)+x)\big{)}=\sigma(g)+g\), which shows that \(\sigma(g)+g\in F\). Letting \(\mu=\sigma(g)+g\), we have by (1.8) above \(\operatorname{Trd}_{A}(gs)=\mathfrak{f}([g,s])+\mu\,\mathfrak{f}(s)\) for all \(s\in\operatorname{Sym}(\sigma)\). 
On the other hand, the hypothesis on \(g\) yields \(\operatorname{Trd}_{A}(gs)=\mu\,\mathfrak{f}(s)\), hence \(\mathfrak{f}([g,s])=0\), proving \(g\in\mathfrak{go}(\mathfrak{A})\). The first two equations in (1.6) are thus proved. The second one shows that \(\ell\in\mathfrak{go}(\mathfrak{A})\) since \(\operatorname{Trd}_{A}(\ell s)=\mathfrak{f}(s)\) for all \(s\in\operatorname{Sym}(\sigma)\) and \(\sigma(\ell)+\ell=1\). This last equation also reads \(\dot{\mu}(\ell)=1\), hence the map \(\dot{\mu}\colon\mathfrak{go}(\mathfrak{A})\to F\) is onto. The second characterization of \(\mathfrak{go}(\mathfrak{A})\) in (1.6) also shows that \[\ker(\dot{\mu}\colon\mathfrak{go}(\mathfrak{A})\to F)=\{g\in A\mid \operatorname{Trd}_{A}(gs)=0\text{ for all }s\in\operatorname{Sym}(\sigma)\},\] which means that \(\ker(\dot{\mu})\) is the orthogonal complement of \(\operatorname{Sym}(\sigma)\) for the bilinear form \(\operatorname{Trd}_{A}(XY)\). This orthogonal complement is known to be \(\operatorname{Alt}(\sigma)\) by [13, (2.3)]. As \(\mathfrak{o}(\mathfrak{A})=\operatorname{Alt}(\sigma)\), it follows that \(\mathfrak{o}(\mathfrak{A})\subset\mathfrak{go}(\mathfrak{A})\) and the sequence (1.7) is exact. From the above observations it follows that \[\mathfrak{o}(\mathfrak{A})+\ell F\subset\mathfrak{go}(\mathfrak{A}).\] We use dimension count to show that this inclusion is an equality, completing the proof of (1.6). Note that \(\ell\notin\mathfrak{o}(\mathfrak{A})\) since \(\mathfrak{o}(\mathfrak{A})=\ker(\dot{\mu})\) whereas \(\dot{\mu}(\ell)=1\). Therefore, \(\dim(\mathfrak{o}(\mathfrak{A})+\ell F)=1+\dim\mathfrak{o}(\mathfrak{A})\). On the other hand, the exact sequence (1.7) yields \(\dim\mathfrak{go}(\mathfrak{A})=1+\dim\mathfrak{o}(\mathfrak{A})\), hence the proof of (1.6) is complete. Since \(\dim\operatorname{Alt}(\sigma)=m(2m-1)\) by [13, (2.6)], we obtain \[\dim\mathfrak{o}(\mathfrak{A})=m(2m-1)\quad\text{and}\quad\dim\mathfrak{go}( \mathfrak{A})=m(2m-1)+1.\] It follows that \(\dim\mathfrak{pgo}(\mathfrak{A})=m(2m-1)\) because \(\mathfrak{pgo}(\mathfrak{A})=\mathfrak{go}(\mathfrak{A})/F\). If \(\operatorname{char}F\neq 2\), then we may take \(\ell=\frac{1}{2}\) in the discussion above, so \(\mathfrak{go}(\mathfrak{A})=\mathfrak{o}(\mathfrak{A})\oplus F\) and \(\mathfrak{pgo}(\mathfrak{A})\simeq\mathfrak{o}(\mathfrak{A})\) canonically. If \(\operatorname{char}F=2\), then \(F\subset\operatorname{Alt}(\sigma)\) because the involution \(\sigma\) is symplectic, and the map \(\dot{\mu}\colon\mathfrak{go}(\mathfrak{A})\to F\) vanishes on \(F\). Therefore, \(\dot{\mu}\) induces a map \(\mathfrak{pgo}(\mathfrak{A})\to F\) whose kernel is the image of \(\mathfrak{o}(\mathfrak{A})\). When the algebra \(A\) is split, we may represent it as \(A=\operatorname{End}V\) for some \(F\)-vector space \(V\) of dimension \(n\). The quadratic pair \((\sigma,\mathfrak{f})\) is then the quadratic pair \((\sigma_{b},\mathfrak{f}_{q})\) adjoint to a nonsingular quadratic form \(q\) on \(V\) (see SS1.1), and we write simply \(\mathfrak{go}(q)\) for \(\mathfrak{go}(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})\). **Proposition 1.11**.: _Let \(g\in\operatorname{End}V\) and \(\mu\in F\). We have \(g\in\mathfrak{go}(q)\) and \(\dot{\mu}(g)=\mu\) if and only if_ \[b(g(u),u)=\mu\,q(u)\qquad\text{for all $u\in V$.} \tag{1.9}\] Proof.: We use the standard identification \(V\otimes V=\operatorname{End}V\) set up in SS1.1. 
For \(s=u\otimes u\in\operatorname{Sym}(\sigma_{b})\) we have \(gs=g(u)\otimes u\), hence \(\operatorname{Trd}(gs)=b(g(u),u)\). On the other hand \(\mathfrak{f}_{q}(s)=q(u)\), hence if \(g\in\mathfrak{go}(q)\) and \(\dot{\mu}(g)=\mu\) then the second characterization of \(\mathfrak{go}(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})\) in (1.6) shows that (1.9) holds. Conversely, if (1.9) holds, then \(\operatorname{Trd}(gs)=\mu\,\mathfrak{f}_{q}(s)\) for all \(s\in\operatorname{Sym}(\sigma_{b})\) of the form \(s=u\otimes u\) with \(u\in V\). Applying this to \(s=(u+v)\otimes(u+v)\) with \(u\), \(v\in V\) yields \[\operatorname{Trd}\bigl{(}g(u\otimes v+v\otimes u)\bigr{)}=\mu\,\mathfrak{f} _{q}(u\otimes v+v\otimes u)=\mu\operatorname{Trd}(u\otimes v),\] hence \(b(g(u),v)+b\bigl{(}u,g(v)\bigr{)}=\mu\,b(u,v)\). Since \(\sigma_{b}\) is the adjoint involution of \(b\), it follows that \(\sigma_{b}(g)+g=\mu\). We thus see that \(\operatorname{Trd}(gs)=(\sigma_{b}(g)+g)\,\mathfrak{f}_{q}(s)\) for all \(s\in\operatorname{Sym}(\sigma_{b})\), which proves \(g\in\mathfrak{go}(q)\) by the second characterization of \(\mathfrak{go}(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})\) in (1.6). Returning to the general case, where the algebra \(A\) is not necessarily split, let \(C(\mathfrak{A})\) denote the Clifford algebra of \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\), and write \(c\colon A\to C(\mathfrak{A})\) for the canonical map. Every \(g\in\mathfrak{go}(\mathfrak{A})\) defines a derivation \(\delta_{g}\) of \(C(\mathfrak{A})\) such that \[\delta_{g}\bigl{(}c(a)\bigr{)}=c([g,a])\qquad\text{for $a\in A$;} \tag{1.10}\] this can be checked directly from the definition of \(C(\mathfrak{A})\) or by viewing the map \(g\mapsto\delta_{g}\) as the differential of the morphism \(\operatorname{\mathbf{GO}}(\mathfrak{A})\to\operatorname{\mathbf{Aut}} \bigl{(}C(\mathfrak{A})\bigr{)}\) defined on rational points by mapping \(g\in\operatorname{GO}(\mathfrak{A})\) to \(C\bigl{(}\operatorname{Int}(g)\bigr{)}\). The derivation \(\delta_{g}\) is uniquely determined by (1.10), because \(c(A)\) generates \(A\) as an associative algebra. Recall from [13, SS8.C] that \(c(A)\) is a Lie subalgebra of \(\mathfrak{L}\bigl{(}C(\mathfrak{A})\bigr{)}\). By [13, p. 351], \(c(A)\) is the Lie algebra of the algebraic group \(\boldsymbol{\Gamma}(\mathfrak{A})\), whose group of rational points is the Clifford group \(\Gamma(\mathfrak{A})\), hence we call it the _Clifford Lie algebra_ of \(\mathfrak{A}\) and write \[\boldsymbol{\gamma}(\mathfrak{A})=c(A)\subset\mathfrak{L}\bigl{(}C(\mathfrak{ A})\bigr{)}.\] The kernel of the map \(c\colon A\to\boldsymbol{\gamma}(\mathfrak{A})\) is \(\ker(\mathfrak{f})\subset\operatorname{Sym}(\sigma)\) by [13, (8.14)], hence \[\dim\boldsymbol{\gamma}(\mathfrak{A})=m(2m-1)+1. \tag{1.11}\] Let \(\underline{\sigma}\) be the canonical involution on \(C(\mathfrak{A})\), which is characterized by the condition that \(\underline{\sigma}\bigl{(}c(a)\bigr{)}=c\bigl{(}\sigma(a)\bigr{)}\) for \(a\in A\). We have \[\underline{\sigma}\bigl{(}c(a)\bigr{)}+c(a)=c(\sigma(a)+a)=\mathfrak{f}( \sigma(a)+a)=\operatorname{Trd}_{A}(a),\] hence \(\underline{\sigma}(\xi)+\xi\in F\) for \(\xi\in\boldsymbol{\gamma}(\mathfrak{A})\), and we may define a Lie algebra homomorphism \[\underline{\dot{\mu}}\colon\boldsymbol{\gamma}(\mathfrak{A})\to F\qquad\text{ by}\quad\underline{\dot{\mu}}(\xi)=\underline{\sigma}(\xi)+\xi, \tag{1.12}\] so \(\underline{\dot{\mu}}\bigl{(}c(a)\bigr{)}=\operatorname{Trd}_{A}(a)\) for \(a\in A\). 
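For a concrete illustration of these definitions, consider the split case \(\mathfrak{A}=(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})\): under the standard identification \(\operatorname{End}V=V\otimes V\) of SS1.1 we have \(c(u\otimes v)=uv\in C_{0}(V,q)\) (this identification is also used in the proof of Theorem 1.21 below), hence \(\boldsymbol{\gamma}(q)\) is the subspace \(V\cdot V\subset C_{0}(V,q)\) spanned by products of two vectors, and \[\underline{\dot{\mu}}(uv)=\operatorname{Trd}(u\otimes v)=b(u,v)\qquad\text{for }u,\,v\in V.\]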
We let \(\mathfrak{spin}(\mathfrak{A})\) denote the kernel \[\mathfrak{spin}(\mathfrak{A})=\ker\underline{\dot{\mu}}=\{c(a)\mid \operatorname{Trd}_{A}(a)=0\}\subset\boldsymbol{\gamma}(\mathfrak{A}),\] which is the Lie algebra of the algebraic group \(\operatorname{\mathbf{Spin}}(\mathfrak{A})\) defined in SS1.3. By definition of \(\mathfrak{spin}(\mathfrak{A})\), the following sequence is exact: \[0\to\mathfrak{spin}(\mathfrak{A})\to\boldsymbol{\gamma}(\mathfrak{A}) \xrightarrow{\dot{\mu}}F\to 0,\] and therefore \[\dim\mathfrak{spin}(\mathfrak{A})=m(2m-1). \tag{1.13}\] Recall from [13, (8.15)] the Lie homomorphism \[\dot{\chi}\colon\boldsymbol{\gamma}(\mathfrak{A})\to\mathfrak{o}(\mathfrak{A}), \qquad c(a)\mapsto a-\sigma(a)\text{ for }a\in A,\] which fits in the following exact sequence \[0\to F\to\boldsymbol{\gamma}(\mathfrak{A})\xrightarrow{\dot{\chi}}\mathfrak{o} (\mathfrak{A})\to 0. \tag{1.14}\] That sequence is the Lie algebra version of the exact sequence of algebraic groups from [13, p. 352]: \[1\to\mathbf{G}_{\mathbf{m}}\to\boldsymbol{\Gamma}(\mathfrak{A})\xrightarrow{ \chi}\mathbf{O}^{+}(\mathfrak{A})\to 1.\] We let \[\mathfrak{so}(\mathfrak{A})=\dot{\chi}\big{(}\mathfrak{spin}(\mathfrak{A}) \big{)}=\{a-\sigma(a)\mid\operatorname{Trd}_{A}(a)=0\}\subset\mathfrak{o}( \mathfrak{A}).\] If \(\operatorname{char}F\neq 2\), then \(\mathfrak{o}(\mathfrak{A})=\operatorname{Skew}(\sigma)\), hence every \(a\in\mathfrak{o}(\mathfrak{A})\) satisfies \(\operatorname{Trd}_{A}(a)=0\) and \(a=\frac{1}{2}a-\sigma(\frac{1}{2}a)\), hence \[\mathfrak{so}(\mathfrak{A})=\mathfrak{o}(\mathfrak{A}).\] Moreover, in \(\boldsymbol{\gamma}(\mathfrak{A})\) we have \(F\cap\mathfrak{spin}(\mathfrak{A})=0\) because \(\dot{\mu}(\lambda)=2\lambda\) for \(\lambda\in F\), hence the restriction of \(\dot{\chi}\) is an isomorphism \[\dot{\chi}\colon\mathfrak{spin}(\mathfrak{A})\xrightarrow{\sim}\mathfrak{o}( \mathfrak{A}). \tag{1.15}\] By contrast, if \(\operatorname{char}F=2\) we may define a map \[\operatorname{Trp}\colon\mathfrak{o}(\mathfrak{A})\to F\qquad\text{by} \quad\operatorname{Trp}\big{(}a-\sigma(a)\big{)}=\operatorname{Trd}_{A}(a),\] because \(\operatorname{Trd}_{A}\big{(}\operatorname{Sym}(\sigma)\big{)}=0\). (The map \(\operatorname{Trp}\) is the _pfaffian trace_, see [13, (2.13)].) For \(a\), \(b\in A\) we have \[[a-\sigma(a),b-\sigma(b)]=[a-\sigma(a),b]-\sigma([a-\sigma(a),b]),\] hence \[\operatorname{Trp}([a-\sigma(a),b-\sigma(b)])=\operatorname{Trd}_{A}([a- \sigma(a),b])=0.\] Therefore, \(\operatorname{Trp}\) is a Lie algebra homomorphism. Note also that \(F\subset\mathfrak{spin}(\mathfrak{A})\) because \(\dot{\mu}(\lambda)=2\lambda=0\) for \(\lambda\in F\). Therefore, there is a commutative diagram with exact rows and columns: (1.16) ### Extended Clifford Lie algebras Throughout this subsection \(A\) is a central simple algebra of degree \(n=2m\) over an arbitrary field \(F\), and we assume \(m\) is even. Let \((\sigma,\mathfrak{f})\) be a quadratic pair on \(A\), and let \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\). Recall from SS1.3 the Clifford bimodule \(B(\mathfrak{A})\) with its canonical left \(A\)-module homomorphism \(b\colon A\to B(\mathfrak{A})\). We write \(Z\) for the center of \(C(\mathfrak{A})\) and \(\iota\) for the nontrivial \(F\)-automorphism of \(Z\). 
Since the left \(A\)-module action on \(B(\mathfrak{A})\) commutes with the left and right \(C(\mathfrak{A})\)-module actions, the condition \(\underline{\sigma}(\xi)\ast A_{R}^{b}\cdot\xi=A_{R}^{b}\) in the definition of the extended Clifford group \(\mathbf{\Omega}(\mathfrak{A})\) is equivalent to \(\underline{\sigma}(\xi)\ast 1^{b}\cdot\xi\in A_{R}^{b}\). The Lie algebra of \(\mathbf{\Omega}(\mathfrak{A})\) is therefore as follows: **Definition 1.12**.: The _extended Clifford Lie algebra of \(\mathfrak{A}\)_ is \[\boldsymbol{\omega}(\mathfrak{A})=\{\xi\in C(\mathfrak{A})\mid\underline{ \sigma}(\xi)+\xi\in Z\text{ and }\underline{\sigma}(\xi)\ast 1^{b}+1^{b}\cdot\xi\in A^{b}\}.\] It is shown in [13, p. 352] that the algebraic group scheme \(\mathbf{\Omega}(\mathfrak{A})\) is smooth, because \(R_{Z/F}(\mathbf{G_{m}})\) and \(\mathbf{PGO}^{+}(\mathfrak{A})\) are smooth and the lower row of the diagram (1.4) is exact. Since \(\dim R_{Z/F}(\mathbf{G_{m}})=2\) and \(\dim\mathbf{PGO}^{+}(\mathfrak{A})=m(2m-1)\), it follows that \[\dim\mathbf{\Omega}(\mathfrak{A})=\dim\boldsymbol{\omega}(\mathfrak{A})=m(2m -1)+2.\] For \(\xi\in\boldsymbol{\omega}(\mathfrak{A})\) we write \[\underline{\dot{\mu}}(\xi)=\underline{\sigma}(\xi)+\xi\in Z.\] Since the map \(b\) is injective, for each \(\xi\in\boldsymbol{\omega}(\mathfrak{A})\) there is a uniquely determined element \(\dot{\chi}_{0}(\xi)\in A\) such that \[\underline{\sigma}(\xi)\ast 1^{b}+1^{b}\cdot\xi=\dot{\chi}_{0}(\xi)^{b}.\] Thus, letting \(F[\varepsilon]\) denote the algebra of dual numbers, where \(\varepsilon^{2}=0\), we have \[\underline{\sigma}(1+\varepsilon\xi)\ast 1^{b}\cdot(1+\varepsilon\xi)=\big{(}1 +\varepsilon\dot{\chi}_{0}(\xi)\big{)}^{b}\qquad\text{for }\xi\in\boldsymbol{\omega}( \mathfrak{A}).\] This shows that \(\dot{\chi}_{0}\) is the differential of \(\chi_{0}\colon\mathbf{\Omega}(\mathfrak{A})\to\mathbf{GO}^{+}(\mathfrak{A})\). For the next statement, recall from (1.10) that every \(g\in\mathfrak{go}(\mathfrak{A})\) defines a derivation \(\delta_{g}\) of \(C(\mathfrak{A})\) such that \(\delta_{g}\big{(}c(a)\big{)}=c([g,a])\) for all \(a\in A\). 
**Proposition 1.13**.: _The Lie algebra \(\boldsymbol{\omega}(\mathfrak{A})\) is a subalgebra of \(\mathfrak{L}\big{(}C(\mathfrak{A})\big{)}\) containing \(Z\) and \(\boldsymbol{\gamma}(\mathfrak{A})\), and \(\dot{\chi}_{0}\), \(\underline{\dot{\mu}}\) are Lie algebra homomorphisms_ \[\dot{\chi}_{0}\colon\boldsymbol{\omega}(\mathfrak{A})\to\mathfrak{go}( \mathfrak{A})\qquad\text{and}\qquad\underline{\dot{\mu}}\colon\boldsymbol{ \omega}(\mathfrak{A})\to Z.\] _Moreover, \(\dot{\chi}_{0}(z)=\operatorname{Tr}_{Z/F}(z)\in F\) for \(z\in Z\),_ \[\operatorname{ad}_{\xi}=\delta_{\dot{\chi}_{0}(\xi)}\quad\text{and}\quad \dot{\mu}\big{(}\dot{\chi}_{0}(\xi)\big{)}=\operatorname{Tr}_{Z/F}\big{(} \underline{\dot{\mu}}(\xi)\big{)}\quad\text{for }\xi\in\boldsymbol{\omega}( \mathfrak{A})\text{,}\] _and_ \[\dot{\chi}_{0}(\xi)=\underline{\dot{\mu}}(\xi)+\dot{\chi}(\xi)\qquad\text{ for }\xi\in\boldsymbol{\gamma}(\mathfrak{A})\text{.}\] Proof.: That \(\boldsymbol{\omega}(\mathfrak{A})\) is a Lie subalgebra of \(\mathfrak{L}\big{(}C(\mathfrak{A})\big{)}\) and \(\dot{\chi}_{0}\), \(\underline{\dot{\mu}}\) are Lie algebra homomorphisms is clear because \(\boldsymbol{\omega}(\mathfrak{A})\) is the Lie algebra of \(\mathbf{\Omega}(\mathfrak{A})\) and \(\dot{\chi}_{0}\), \(\underline{\dot{\mu}}\) are the differentials of \(\chi_{0}\) and \(\underline{\mu}\colon\mathbf{\Omega}(\mathfrak{A})\to R_{Z/F}(\mathbf{G_{m}})\) respectively. Over the algebra \(F[\varepsilon]\) of dual numbers, Proposition 1.7 yields \[\operatorname{Int}(1+\varepsilon\xi)=C\big{(}\operatorname{Int}(\chi_{0}(1+ \varepsilon\xi))\big{)}\qquad\text{for }\xi\in\boldsymbol{\omega}(\mathfrak{A}),\] Hence for \(\xi\in\boldsymbol{\omega}(\mathfrak{A})\) and \(a\in A\) \[(1+\varepsilon\xi)\,c(a)\,(1-\varepsilon\xi)=c\big{(}(1+\varepsilon\dot{\chi}_ {0}(\xi))\,a\,(1-\varepsilon\dot{\chi}_{0}(\xi))\big{)}.\] Comparing the coefficients of \(\varepsilon\) yields \([\xi,c(a)]=c([\dot{\chi}_{0}(\xi),a])\). Therefore, the derivations \(\operatorname{ad}_{\xi}\) and \(\delta_{\dot{\chi}_{0}(\xi)}\) coincide on \(c(A)\), hence \(\operatorname{ad}_{\xi}=\delta_{\dot{\chi}_{0}(\xi)}\) because \(c(A)\) generates \(C(\mathfrak{A})\) as an associative algebra. The other equations similarly follow by taking the differentials of \(\chi_{0}(z)=N_{Z/F}(z)\) for \(z\in Z^{\times}\), \(\mu\big{(}\chi_{0}(\xi)\big{)}=N_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}\) for \(\xi\in\mathbf{\Omega}(\mathfrak{A})\) and \(\chi_{0}(\xi)=\underline{\mu}(\xi)\chi(\xi)\) for \(\xi\in\Gamma(\mathfrak{A})\) (see Proposition 1.7). **Corollary 1.14**.: _If \(\operatorname{char}F\neq 2\), then \(\boldsymbol{\omega}(\mathfrak{A})=\boldsymbol{\gamma}(\mathfrak{A})+Z\)._ Proof.: If \(\operatorname{char}F\neq 2\), then \(Z\cap\boldsymbol{\gamma}(\mathfrak{A})=F\), while Proposition 1.13 shows that \(\boldsymbol{\gamma}(\mathfrak{A})+Z\subset\boldsymbol{\omega}(\mathfrak{A})\). Dimension count then shows that \(\boldsymbol{\omega}(\mathfrak{A})=\boldsymbol{\gamma}(\mathfrak{A})+Z\). Note that \(Z\subset\boldsymbol{\gamma}(\mathfrak{A})\) if \(\operatorname{char}F=2\) (see [13, (8.27)]), hence \(\boldsymbol{\gamma}(\mathfrak{A})+Z=\boldsymbol{\gamma}(\mathfrak{A})\subsetneq \boldsymbol{\omega}(\mathfrak{A})\) in that case. The following Lie algebra versions of the commutative diagram (1.4) and of Proposition 1.8 can be derived from their algebraic group scheme versions. We give a direct proof instead. 
**Proposition 1.15**.: _Let \(Z^{0}=\ker(\operatorname{Tr}\colon Z\to F)\) and let \(\dot{\chi}^{\prime}\colon\boldsymbol{\omega}(\mathfrak{A})\to\mathfrak{pgo}( \mathfrak{A})\) be defined by \(\dot{\chi}^{\prime}(\xi)=\dot{\chi}_{0}(\xi)+F\) for \(\xi\in\boldsymbol{\omega}(\mathfrak{A})\). The following sequence is exact:_ \[0\to Z^{0}\to\boldsymbol{\omega}(\mathfrak{A})\xrightarrow{\dot{\chi}_{0}} \mathfrak{go}(\mathfrak{A})\to 0. \tag{1.17}\] _The following diagram is commutative with exact rows and canonical vertical maps:_ (1.18) _Moreover,_ \[\boldsymbol{\gamma}(\mathfrak{A})=\{\xi\in\boldsymbol{\omega}(\mathfrak{A}) \mid\underline{\mu}(\xi)\in F\}\quad\text{and}\quad\mathfrak{spin}(\mathfrak{ A})=\ker(\underline{\mu}\colon\boldsymbol{\omega}(\mathfrak{A})\to Z).\] Proof.: We first show \(Z^{0}=\ker\dot{\chi}_{0}\). The inclusion \(Z^{0}\subset\ker\dot{\chi}_{0}\) follows from Proposition 1.13. To prove the reverse inclusion, let \(\xi\in\ker\dot{\chi}_{0}\). Proposition 1.13 yields \([\xi,c(a)]=0\) for all \(a\in A\). As \(c(A)\) generates \(C(\mathfrak{A})\), we conclude that \(\xi\in Z\). But then Proposition 1.13 shows that \(\dot{\chi}_{0}(\xi)=\operatorname{Tr}_{Z/F}(\xi)\), hence \(\xi\in Z^{0}\). Dimension count now shows that \(\dot{\chi}_{0}\) is surjective, hence (1.17) is an exact sequence. The upper sequence of diagram (1.18) is (1.14). We have just seen that \(\dot{\chi}_{0}\) is surjective, hence \(\dot{\chi}^{\prime}\) also is surjective. By Proposition 1.13, its kernel contains \(Z\). Dimension count then yields \(\ker\dot{\chi}^{\prime}=Z\), hence the lower sequence of the diagram is exact. Commutativity of the diagram follows from Proposition 1.13, since \(\underline{\mu}(\xi)\in F\) for \(\xi\in\boldsymbol{\gamma}(\mathfrak{A})\). This last observation shows that \(\boldsymbol{\gamma}(\mathfrak{A})\) lies in the kernel of the map \[\dot{\varkappa}\colon\boldsymbol{\omega}(\mathfrak{A})\to Z/F,\qquad\xi\mapsto \underline{\mu}(\xi)+F.\] We have to prove that \(\boldsymbol{\gamma}(\mathfrak{A})=\ker\dot{\varkappa}\). To see this, it suffices to show that \(\dot{\varkappa}\) is onto, because \(\dim\boldsymbol{\gamma}(\mathfrak{A})=(\dim\boldsymbol{\omega}(\mathfrak{A} ))-1\) and \(\dim(Z/F)=1\). If \(\operatorname{char}F\neq 2\), surjectivity is clear because \(\underline{\mu}(z)=2z\) for all \(z\in Z\). If \(\operatorname{char}F=2\), we pick an element \(\ell\in\mathfrak{go}(\mathfrak{A})\) such that \(\dot{\mu}(\ell)=1\). Since \(\dot{\chi}_{0}\) is onto, we may find \(\xi\in\boldsymbol{\omega}(\mathfrak{A})\) such that \(\dot{\chi}_{0}(\xi)=\ell\). Then by Proposition 1.13 we have \(\operatorname{Tr}_{Z/F}\big{(}\underline{\mu}(\xi)\big{)}=1\), hence \(\underline{\mu}(\xi)\notin F\). This shows \(\dot{\varkappa}\) is onto. To complete the proof, it suffices to observe that \(\mathfrak{spin}(\mathfrak{A})=\ker(\underline{\mu}\colon\boldsymbol{\gamma}( \mathfrak{A})\to F)\) by definition. 
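The proof just given also determines the shape of diagram (1.18): its rows may be spelled out as \[\begin{array}{ccccccccc}0&\to&F&\to&\boldsymbol{\gamma}(\mathfrak{A})&\xrightarrow{\ \dot{\chi}\ }&\mathfrak{o}(\mathfrak{A})&\to&0\\ &&\downarrow&&\downarrow&&\downarrow&&\\ 0&\to&Z&\to&\boldsymbol{\omega}(\mathfrak{A})&\xrightarrow{\ \dot{\chi}^{\prime}\ }&\mathfrak{pgo}(\mathfrak{A})&\to&0\end{array}\] with vertical maps the inclusions \(F\subset Z\) and \(\boldsymbol{\gamma}(\mathfrak{A})\subset\boldsymbol{\omega}(\mathfrak{A})\), and the map \(\mathfrak{o}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A})\) induced by the inclusion \(\mathfrak{o}(\mathfrak{A})\subset\mathfrak{go}(\mathfrak{A})\).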
When \(\operatorname{char}F=2\) we have \(\dot{\underline{\mu}}(Z)=0\), hence \(Z\subset\operatorname{\mathsf{spin}}(\mathfrak{A})\) and we may define a Lie algebra homomorphism \(\dot{S}\colon\operatorname{\mathfrak{pgo}}(\mathfrak{A})\to Z\) by \[\dot{S}(g+F)=\dot{\underline{\mu}}(\xi)\quad\text{for any $\xi\in\boldsymbol{ \omega}(\mathfrak{A})$ such that $\dot{\chi}^{\prime}(\xi)=g+F$.}\] **Corollary 1.16**.: _If \(\operatorname{char}F\neq 2\), then \(\dot{\chi}^{\prime}\) yields an isomorphism \(\operatorname{\mathsf{spin}}(\mathfrak{A})\xrightarrow{\sim}\operatorname{ \mathfrak{pgo}}(\mathfrak{A})\)._ _If \(\operatorname{char}F=2\), the restriction of \(\dot{\chi}^{\prime}\) fits in the exact sequence_ \[0\to Z\to\operatorname{\mathsf{spin}}(\mathfrak{A})\xrightarrow{\dot{\chi}^{ \prime}}\operatorname{\mathfrak{pgo}}(\mathfrak{A})\xrightarrow{\dot{S}}Z\to 0.\] Proof.: If \(\operatorname{char}F\neq 2\) we saw in (1.15) that \(\dot{\chi}\) yields an isomorphism \(\operatorname{\mathsf{spin}}(\mathfrak{A})\simeq\mathfrak{o}(\mathfrak{A})\), and in Proposition 1.10 we saw that the canonical map is an isomorphism \(\mathfrak{o}(\mathfrak{A})\xrightarrow{\sim}\operatorname{\mathfrak{pgo}}( \mathfrak{A})\), hence \(\dot{\chi}^{\prime}\) is an isomorphism \(\operatorname{\mathsf{spin}}(\mathfrak{A})\simeq\operatorname{\mathfrak{pgo}}( \mathfrak{A})\). For the rest of the proof, assume \(\operatorname{char}F=2\). Since \(\dot{\chi}^{\prime}\colon\boldsymbol{\omega}(\mathfrak{A})\to\operatorname{ \mathfrak{pgo}}(\mathfrak{A})\) is onto and \(\operatorname{\mathsf{spin}}(\mathfrak{A})=\ker\dot{\underline{\mu}}\) by Proposition 1.15, it is clear from the definition of \(\dot{S}\) that \(\ker\dot{S}=\dot{\chi}^{\prime}\big{(}\operatorname{\mathsf{spin}}(\mathfrak{ A})\big{)}\). As \(\dot{S}\big{(}\operatorname{\mathfrak{pgo}}(\mathfrak{A})\big{)}\subset\bar{Z}\), it follows that \[\dim\operatorname{\mathfrak{pgo}}(\mathfrak{A})-\dim\dot{\chi}^{\prime} \big{(}\operatorname{\mathsf{spin}}(\mathfrak{A})\big{)}\leq 2. \tag{1.19}\] On the other hand we have \(Z\subset\operatorname{\mathsf{spin}}(\mathfrak{A})\) because \(\dot{\underline{\mu}}(Z)=0\), and \(Z\subset\ker\dot{\chi}^{\prime}\) by Proposition 1.15, hence \[\dim\dot{\chi}^{\prime}\big{(}\operatorname{\mathsf{spin}}(\mathfrak{A}) \big{)}\leq\dim\operatorname{\mathsf{spin}}(\mathfrak{A})-2. \tag{1.20}\] As \(\dim\operatorname{\mathfrak{pgo}}(\mathfrak{A})=m(2m-1)=\dim\operatorname{ \mathsf{spin}}(\mathfrak{A})\) by Proposition 1.10 and (1.13), the inequalities (1.19) and (1.20) cannot be strict. Therefore, \(Z=\ker\dot{\chi}^{\prime}=\dot{S}\big{(}\operatorname{\mathfrak{pgo}}( \mathfrak{A})\big{)}\) and the corollary is proved. Finally, we consider the case where \(m\) is divisible by \(4\); then \(C(\mathfrak{A})\) carries a canonical quadratic pair \((\underline{\sigma},\underline{\mathfrak{f}})\) defined by Dolphin-Queguiner-Mathieu: see the end of SS1.2. 
As in Proposition 1.9, we let \[\mathfrak{C}(\mathfrak{A})=(C(\mathfrak{A}),\underline{\sigma},\underline{ \mathfrak{f}}).\] **Proposition 1.17**.: _If \(\deg A\equiv 0\bmod 8\), then \(\boldsymbol{\omega}(\mathfrak{A})\subset\operatorname{\mathfrak{go}}\big{(} \mathfrak{C}(\mathfrak{A})\big{)}\)._ Proof.: The definition of \(\boldsymbol{\omega}(\mathfrak{A})\) entails that \(\underline{\sigma}(\xi)+\xi\in Z\) for all \(\xi\in\boldsymbol{\omega}(\mathfrak{A})\), hence it suffices to prove \(\underline{\mathfrak{f}}([\xi,s])=0\) for \(\xi\in\boldsymbol{\omega}(\mathfrak{A})\) and \(s\in\operatorname{Sym}(\underline{\sigma})\). By the definition of \(\underline{\mathfrak{f}}\), this amounts to showing that if \(a\in A\) is such that \(\operatorname{Trd}_{A}(a)=1\), then \[\operatorname{Trd}_{C(\mathfrak{A})}(c(a)\,[\xi,s])=0\qquad\text{for $\xi\in \boldsymbol{\omega}(\mathfrak{A})$ and $s\in\operatorname{Sym}(\underline{\sigma})$.}\] For this, observe that \[\operatorname{Trd}_{C(\mathfrak{A})}\big{(}c(a)(\xi s-s\xi)\big{)}= \operatorname{Trd}_{C(\mathfrak{A})}\big{(}(\xi c(a)-c(a)\xi)s\big{)}.\] Now, by Proposition 1.13 we have \([\xi,c(a)]=c([\dot{\chi}_{0}(\xi),a])\). As \(\operatorname{Trd}_{A}([\dot{\chi}_{0}(\xi),a])=0\), it follows that \(c([\dot{\chi}_{0}(\xi),a])\in\operatorname{\mathsf{spin}}(\mathfrak{A})\). Now, \(\operatorname{\mathsf{spin}}(\mathfrak{A})\subset\operatorname{Alt}( \underline{\sigma})\) by [7, Lemma 3.2], hence \[\operatorname{Trd}_{C(\mathfrak{A})}\big{(}c([\dot{\chi}_{0}(\xi),a])s\big{)}=0 \qquad\text{for all $s\in\operatorname{Sym}(\underline{\sigma})$.}\qed\] **Remark 1.18**.: When \(\operatorname{char}F=2\), the Lie algebra \(\mathfrak{L}(A)\) has an additional structure given by the squaring map \(a\mapsto a^{2}\), which turns it into a _restricted Lie algebra_. It can be verified that the Lie algebras \(\mathfrak{o}(\mathfrak{A})\), \(\mathfrak{so}(\mathfrak{A})\), \(\mathfrak{go}(\mathfrak{A})\), \(\operatorname{\mathfrak{pgo}}(\mathfrak{A})\), \(\operatorname{\mathfrak{pgo}}(\mathfrak{A})\), \(\operatorname{\gamma}(\mathfrak{A})\), \(\operatorname{\mathsf{spin}}(\mathfrak{A})\), \(\boldsymbol{\omega}(\mathfrak{A})\) are all restricted (i.e., preserved under the squaring map), and the maps \(\dot{\mu}\), \(\underline{\mu}\), \(\dot{\chi}\), \(\operatorname{Trp}\), \(\dot{\chi}_{0}\), \(\dot{S}\) are homomorphisms of restricted Lie algebras (i.e., commute with the squaring map). The proof is omitted, as the restricted Lie algebra structure will not be used in this work. ### Homomorphisms from Clifford algebras Throughout this subsection, \(\mathfrak{A}=(A,\sigma,\mathfrak{f})\) is an algebra with quadratic pair of degree \(2m\) over an arbitrary field \(F\). We assume \(m\equiv 0\bmod 4\) and the discriminant of \((\sigma,\mathfrak{f})\) is trivial, which implies that the Clifford algebra \(C(\mathfrak{A})\) decomposes as an algebra with quadratic pair into a direct product of two central simple \(F\)-algebras with quadratic pair of degree \(2^{m-1}\). We further choose a polarization of \(\mathfrak{A}\) (see Definition 1.4), which provides a designation of the primitive central idempotents of \(C(\mathfrak{A})\) as \(z_{+}\) and \(z_{-}\). 
The simple components of \(C(\mathfrak{A})\) are then \[C_{+}(\mathfrak{A})=C(\mathfrak{A})z_{+}\qquad\text{and}\qquad C_{-}( \mathfrak{A})=C(\mathfrak{A})z_{-}.\] We write \(\pi_{+}\colon C(\mathfrak{A})\to C_{+}(\mathfrak{A})\) and \(\pi_{-}\colon C(\mathfrak{A})\to C_{-}(\mathfrak{A})\) for the projections: \[\pi_{+}(\xi)=\xi z_{+},\qquad\pi_{-}(\xi)=\xi z_{-}\qquad\text{for $\xi\in C( \mathfrak{A})$},\] and let \[\mathfrak{C}(\mathfrak{A})=(C(\mathfrak{A}),\underline{\sigma},\mathfrak{f}).\] Given another central simple \(F\)-algebra with quadratic pair \(\mathfrak{A}^{\prime}=(A^{\prime},\sigma^{\prime},\mathfrak{f}^{\prime})\) of degree \(2^{m-1}\), we define a _homomorphism of algebras with quadratic pair_ \[\varphi\colon\mathfrak{C}(\mathfrak{A})\to\mathfrak{A}^{\prime} \tag{1.21}\] to be an \(F\)-algebra homomorphism \(\varphi\colon C(\mathfrak{A})\to A^{\prime}\) such that Since we assume \(\dim A^{\prime}=\frac{1}{2}\dim C(\mathfrak{A})\), such a homomorphism factors through one of the projections \(\pi_{+}\) or \(\pi_{-}\), and maps the center \(Z\) of \(C(\mathfrak{A})\) to \(F\). It readily follows that \(\varphi\) defines a morphism \(\mathbf{GO}\big{(}\mathfrak{C}(\mathfrak{A})\big{)}\to\mathbf{GO}(\mathfrak{ A}^{\prime})\) and maps \(\mathfrak{go}\big{(}\mathfrak{C}(\mathfrak{A})\big{)}\) to \(\mathfrak{go}(\mathfrak{A}^{\prime})\). **Definition 1.19**.: We say that \(\varphi\)_has the \(+\) sign_ if it factors through \(\pi_{+}\) (i.e., \(\varphi(z_{+})=1\) and \(\varphi(z_{-})=0\)), and that \(\varphi\)_has the \(-\) sign_ if it factors through \(\pi_{-}\) (i.e., \(\varphi(z_{+})=0\) and \(\varphi(z_{-})=1\)). Since \(\mathbf{\Omega}(\mathfrak{A})\subset\mathbf{GO}^{+}\big{(}\mathfrak{C}( \mathfrak{A})\big{)}\) by Proposition 1.9, we may restrict \(\varphi\) to \(\mathbf{\Omega}(\mathfrak{A})\) to obtain the following commutative diagram with exact rows, where \(\pm\) is the sign of \(\varphi\): (1.22) We also consider the corresponding diagram with exact rows involving the differentials: (1.23) Since \(\varphi\circ\underline{\sigma}=\sigma^{\prime}\circ\varphi\), it follows that \(\varphi\circ\underline{\mu}=\mu\circ\varphi\) on \(\mathbf{\Omega}(\mathfrak{A})\), hence \(\varphi\) maps \(\mathbf{Spin}(\mathfrak{A})\) to \(\mathbf{O}^{+}(\mathfrak{A}^{\prime})\). Restricting the morphism \(\varphi\) to \(\mathbf{Spin}(\mathfrak{A})\), we obtain from (1.22) the following commutative diagram of algebraic group schemes with exact rows: (1.24) Our goal in the rest of this subsection is to show that the map \(\theta\) in (1.23) determines the homomorphism \(\varphi\) in (1.21) uniquely. **Definition 1.20**.: Given \(\varphi\) as in (1.21), of sign \(\pm\), the Lie algebra homomorphism \(\theta\colon\mathfrak{pgo}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A}^{ \prime})\) in diagram (1.23) is said to be _induced_ by \(\varphi\). Changing the perspective, a Lie algebra homomorphism \(\theta\colon\mathfrak{pgo}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A}^{ \prime})\) is said to be _liftable_ if it is induced by some homomorphism of algebras with quadratic pair \(\varphi\), which is then called a _lift_ of \(\theta\). If \(\theta\) is induced by a homomorphism \(\varphi\), the _sign_ of \(\theta\) is defined to be the same as the sign of \(\varphi\). 
The following theorem shows that the latter definition is not ambiguous: **Theorem 1.21**.: _If a Lie algebra homomorphism \(\theta\colon\mathfrak{pgo}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A}^{ \prime})\) is liftable, then its lift is unique._ Proof.: It suffices to prove the theorem after scalar extension. We may therefore assume \(\mathfrak{A}=(\operatorname{End}V,\sigma_{b},\mathfrak{f}_{q})\) for some hyperbolic quadratic space \((V,q)\) of dimension \(2m\). We use the standard identification \(V\otimes V=\operatorname{End}V\) set up in SS1.1. Since \(q\) is hyperbolic, by decomposing \(V\) into an orthogonal sum of hyperbolic planes we may find a base \((e_{i},e^{\prime}_{i})_{i=1}^{m}\) of \(V\) such that \[q(e_{i})=q(e^{\prime}_{i})=b(e_{i},e_{j})=b(e^{\prime}_{i},e^{\prime}_{j})=0 \qquad\text{for all $i$, $j=1$, $\dots$, $m$}\] and \[b(e_{i},e^{\prime}_{j})=\begin{cases}1&\text{if $i=j$,}\\ 0&\text{if $i\neq j$.}\end{cases}\] The products \(e_{i}e_{j}\), \(e_{i}e^{\prime}_{j}\), \(e^{\prime}_{i}e_{j}\), \(e^{\prime}_{i}e^{\prime}_{j}\) for \(i\), \(j=1\), \(\dots\), \(m\) span \(V\cdot V\subset C_{0}(V,q)\), hence they generate \(C_{0}(V,q)\) as an \(F\)-algebra. Since \(q(e_{i})=q(e^{\prime}_{i})=0\) for all \(i\), we do not need to count \(e_{i}e_{j}\) nor \(e^{\prime}_{i}e^{\prime}_{j}\) among the generators if \(i=j\). Moreover, \(e_{j}e^{\prime}_{j}+e^{\prime}_{j}e_{j}=b(e_{j},e^{\prime}_{j})=1\) for all \(j\), hence if \(i\neq j\) \[e_{i}e^{\prime}_{i}=e_{i}(e_{j}e^{\prime}_{j}+e^{\prime}_{j}e_{j})e^{\prime}_ {i}=(e_{i}e_{j})(e^{\prime}_{j}e^{\prime}_{i})+(e_{i}e^{\prime}_{j})(e_{j}e^{ \prime}_{i})\] and similarly \[e^{\prime}_{i}e_{i}=e^{\prime}_{i}(e_{j}e^{\prime}_{j}+e^{\prime}_{j}e_{j})e_ {i}=(e^{\prime}_{i}e_{j})(e^{\prime}_{j}e_{i})+(e^{\prime}_{i}e^{\prime}_{j})( e_{j}e_{i}).\] These equations show that \(e_{i}e^{\prime}_{i}\) and \(e^{\prime}_{i}e_{i}\) lie in the subalgebra of \(C_{0}(V,q)\) generated by \(e_{k}e_{\ell}\), \(e_{k}e^{\prime}_{\ell}\), \(e^{\prime}_{k}e_{\ell}\), \(e^{\prime}_{k}e^{\prime}_{\ell}\) for all \(k\neq\ell\) in \(\{1,\dots,m\}\). Therefore, these elements generate \(C_{0}(V,q)\). Consequently, if \(\varphi_{1}\), \(\varphi_{2}\colon C_{0}(V,q)\to A^{\prime}\) are two lifts of a given \(\theta\colon\mathfrak{pgo}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A}^{ \prime})\), it suffices to prove that \(\varphi_{1}\) and \(\varphi_{2}\) coincide on \(e_{k}e_{\ell}\), \(e_{k}e^{\prime}_{\ell}\), \(e^{\prime}_{k}e_{\ell}\), \(e^{\prime}_{k}e^{\prime}_{\ell}\) for all \(k\neq\ell\) in \(\{1,\dots,m\}\) to conclude that \(\varphi_{1}=\varphi_{2}\). This is what we proceed to show. The condition that \(\varphi_{1}\) and \(\varphi_{2}\) induce the same \(\theta\) means that \(\varphi_{1}(\xi)-\varphi_{2}(\xi)\in F\) for all \(\xi\in\boldsymbol{\omega}(q)\), hence \[\varphi_{1}([\xi_{1},\xi_{2}])=\varphi_{2}([\xi_{1},\xi_{2}])\qquad\text{for all $\xi_{1}$, $\xi_{2}\in\boldsymbol{\omega}(q)$.}\] We apply this to \(\xi_{1}=c(u_{1}\otimes v_{1})=u_{1}v_{1}\) and \(\xi_{2}=c(u_{2}\otimes v_{2})=u_{2}v_{2}\in\boldsymbol{\gamma}(q)\subset \boldsymbol{\omega}(q)\) for \(u_{1}\), \(u_{2}\), \(v_{1}\), \(v_{2}\in V\). If \(i\neq j\), we have \[[e_{i}e_{j},e^{\prime}_{j}e_{j}]=e_{i}e_{j}e^{\prime}_{j}e_{j}-e^{\prime}_{j}e_ {j}e_{i}e_{j}.\] Since \(e_{i}\) and \(e_{j}\) anticommute and \(e_{j}^{2}=0\), the second term on the right side vanishes. 
In the first term, we may substitute \(1-e_{j}^{\prime}e_{j}\) for \(e_{j}e_{j}^{\prime}\) and use \(e_{j}^{2}=0\) to obtain \[[e_{i}e_{j},e_{j}^{\prime}e_{j}]=e_{i}(1-e_{j}^{\prime}e_{j})e_{j}=e_{i}e_{j}.\] Similar computations yield for all \(i\neq j\) in \(\{1,\ldots,m\}\) \[[e_{i}e_{j}^{\prime},e_{j}e_{j}^{\prime}]=e_{i}e_{j}^{\prime},\qquad[e_{i}^{ \prime}e_{j},e_{j}^{\prime}e_{j}]=e_{i}^{\prime}e_{j},\qquad[e_{i}^{\prime}e_{j }^{\prime},e_{j}e_{j}^{\prime}]=e_{i}^{\prime}e_{j}^{\prime}.\] Since \(\varphi_{1}\) and \(\varphi_{2}\) take the same value on each \([\xi_{1},\xi_{2}]\) for \(\xi_{1}\), \(\xi_{2}\in\boldsymbol{\omega}(q)\), it follows that \(\varphi_{1}\) and \(\varphi_{2}\) coincide on each \(e_{i}e_{j}\), \(e_{i}e_{j}^{\prime}\), \(e_{i}^{\prime}e_{j}\) and \(e_{i}^{\prime}e_{j}^{\prime}\) for \(i\neq j\), hence \(\varphi_{1}=\varphi_{2}\). **Corollary 1.22**.: _Let \(\theta\colon\mathfrak{pgo}(\mathfrak{A})\to\mathfrak{pgo}(\mathfrak{A}^{\prime})\) be a homomorphism of Lie algebras and let \(K\) be a Galois field extension of \(F\). If \(\theta_{K}\colon\mathfrak{pgo}(\mathfrak{A})_{K}\to\mathfrak{pgo}(\mathfrak{A} ^{\prime})_{K}\) is liftable, then \(\theta\) is liftable._ Proof.: Let \(\varphi\colon C(\mathfrak{A})_{K}\to A_{K}^{\prime}\) be the lift of \(\theta_{K}\), and let \(\rho\) be an element of the Galois group of \(K/F\). Then \((\operatorname{Id}_{A^{\prime}}\otimes\rho)\circ\varphi\circ(\operatorname{ Id}_{C(\mathfrak{A})}\otimes\rho^{-1})\colon C(\mathfrak{A})_{K}\to A_{K}^{\prime}\) is a lift of \((\operatorname{Id}_{\mathfrak{pgo}(\mathfrak{A}^{\prime})}\otimes\rho) \circ\theta_{K}\circ(\operatorname{Id}_{\mathfrak{pgo}(\mathfrak{A})}\otimes \rho^{-1})=\theta_{K}\), hence, by uniqueness of the lift, \[(\operatorname{Id}_{A^{\prime}}\otimes\rho)\circ\varphi\circ(\operatorname{ Id}_{C(\mathfrak{A})}\otimes\rho^{-1})=\varphi.\] Therefore, \(\varphi|_{C(\mathfrak{A})}\) maps \(C(\mathfrak{A})\) to \(A^{\prime}\); it lifts \(\theta\) since \(\varphi\) lifts \(\theta_{K}\). ## 2. Compositions of quadratic spaces This section introduces the notion of a composition of quadratic spaces. We emphasize an important feature of compositions, which will be central to the definition of trialitarian automorphisms in the next section: each composition gives rise to two other compositions on the quadratic spaces cyclically permuted. Restricting to the case where the quadratic spaces have the same finite dimension, we show that this dimension is 1, 2, 4 or 8, the comparatively trivial case of dimension 1 arising only when the characteristic of the base field is different from 2. In order to prove this fairly classical result we set up isomorphisms of algebras with involution or with quadratic pair involving Clifford algebras. In dimension 8, these isomorphisms will provide in the next section examples of trialitarian triples of split algebras. In SS2.3 we investigate similitudes and isometries of compositions of quadratic spaces, which define algebraic groups that are close analogues of those attached to quadratic spaces. Even though the quadratic spaces in a composition are not necessarily isometric, it is easy to see that every composition of quadratic spaces is similar to a composition of _isometric_ quadratic spaces (see Proposition 2.15). The focus in the last two subsections is on this type of compositions. 
Using a related notion of composition of _pointed_ quadratic spaces, we show in SS2.4 that every composition of isometric quadratic spaces is isomorphic to its derivatives and also to a composition that is its own derivative, and in SS2.5 we discuss compositions of quadratic spaces arising from the classical notion of composition algebra. To each composition algebra is associated a composition of isometric quadratic spaces, and isotopies of composition algebras are shown in Theorem 2.32 to be similitudes of the associated compositions of quadratic spaces. Throughout this section, \(F\) is an arbitrary field. Unless explicitly specified, there is no restriction on its characteristic \(\operatorname{char}F\). ### Composition maps and their cyclic derivatives Let \((V_{1},q_{1})\), \((V_{2},q_{2})\), \((V_{3},q_{3})\) be (finite-dimensional) quadratic spaces over \(F\). Write \(b_{1}\), \(b_{2}\), \(b_{3}\) for the associated polar bilinear forms \[b_{i}\colon V_{i}\times V_{i}\to F,\qquad b_{i}(x_{i},y_{i})=q_{i}(x_{i}+y_{i}) -q_{i}(x_{i})-q_{i}(y_{i})\quad\text{for $i=1$, $2$, $3$}.\] We assume throughout that the forms \(b_{1}\), \(b_{2}\), \(b_{3}\) are nonsingular, hence each \(\dim V_{i}\) is even if \(\operatorname{char}F=2\), and we may use the polar forms to identify each \(V_{i}\) with its dual \(V_{i}^{*}\). Bilinear maps \(V_{1}\times V_{2}\to V_{3}\) are then identified with tensors in \(V_{3}\otimes V_{2}\otimes V_{1}\), so that for \(v_{i}\in V_{i}\) the tensor \(v_{3}\otimes v_{2}\otimes v_{1}\) is regarded as the bilinear map \[V_{1}\times V_{2}\to V_{3},\qquad(x_{1},\,x_{2})\mapsto v_{3}\,b_{2}(v_{2},x _{2})\,b_{1}(v_{1},x_{1}).\] Let \[\partial\colon V_{3}\otimes V_{2}\otimes V_{1}\to V_{1}\otimes V_{3} \otimes V_{2}\qquad\text{and}\qquad\partial^{2}\colon V_{3}\otimes V_{2} \otimes V_{1}\to V_{2}\otimes V_{1}\otimes V_{3}\] be the isomorphisms that permute the tensor factors cyclically. These maps allow us to derive bilinear maps \(V_{2}\times V_{3}\to V_{1}\) and \(V_{3}\times V_{1}\to V_{2}\) from a given bilinear map \(V_{1}\times V_{2}\to V_{3}\). In our notation, bilinear maps are adorned with the same index as the target space. **Proposition 2.1**.: _Let \(*_{3}\colon V_{1}\times V_{2}\to V_{3}\) be a bilinear map, and let \(*_{1}=\partial(*_{3})\) and \(*_{2}=\partial^{2}(*_{3})\) be the derived maps_ \[*_{1}\colon V_{2}\times V_{3}\to V_{1},\qquad*_{2}\colon V_{3}\times V_{1} \to V_{2}.\] _The maps \(*_{1}\) and \(*_{2}\) are uniquely determined by the following property: for all \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\), \(x_{3}\in V_{3}\),_ \[b_{1}(x_{1},\,x_{2}*_{1}x_{3})=b_{2}(x_{2},\,x_{3}*_{2}x_{1})=b_{3}(x_{3},\,x _{1}*_{3}x_{2}). \tag{2.1}\] Proof.: Uniqueness is clear because the forms \(b_{1}\) and \(b_{2}\) are nonsingular. By linearity, it suffices to prove (2.1) in the case where \(*_{3}=v_{3}\otimes v_{2}\otimes v_{1}\) for some \(v_{1}\in V_{1}\), \(v_{2}\in V_{2}\), \(v_{3}\in V_{3}\). Then \(*_{1}=v_{1}\otimes v_{3}\otimes v_{2}\) and \(*_{2}=v_{2}\otimes v_{1}\otimes v_{3}\), and each of the terms in (2.1) is equal to \(b_{1}(v_{1},x_{1})b_{2}(v_{2},x_{2})b_{3}(v_{3},x_{3})\). The bilinear maps of interest in this work satisfy the following multiplicativity condition: **Definition 2.2**.: A _composition map_\(*_{3}\colon V_{1}\times V_{2}\to V_{3}\) is a bilinear map subject to \[q_{3}(x_{1}*_{3}x_{2})=q_{1}(x_{1})q_{2}(x_{2})\qquad\text{for $x_{1}\in V_{1}$ and $x_{2}\in V_{2}$}. 
\tag{2.2}\] Even though this notion makes sense--and has been studied for instance in [18, Chap. 14]--when the dimensions of \(V_{1}\), \(V_{2}\) and \(V_{3}\) are not the same, we will always assume in the sequel that \(\dim V_{1}=\dim V_{2}=\dim V_{3}\). **Proposition 2.3**.: _Let \(*_{3}\colon V_{1}\times V_{2}\to V_{3}\) be a composition map, with \(\dim V_{1}=\dim V_{2}=\dim V_{3}\). The derived bilinear maps \(*_{1}\) and \(*_{2}\) are composition maps, i.e., for all \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\), \(x_{3}\in V_{3}\),_ \[q_{1}(x_{2}*_{1}x_{3})=q_{2}(x_{2})q_{3}(x_{3})\qquad\text{and}\qquad q_{2}(x_{3}*_{2}x_{1})=q_{3}(x_{3})q_{1}(x_{1}). \tag{2.3}\] _Moreover, the following relations hold for all \(x_{1}\), \(y_{1}\in V_{1}\), \(x_{2}\), \(y_{2}\in V_{2}\), \(x_{3}\), \(y_{3}\in V_{3}\):_ \[b_{3}(x_{1}*_{3}x_{2},x_{1}*_{3}y_{2})=q_{1}(x_{1})b_{2}(x_{2},y_{2}), \tag{2.4}\] \[b_{3}(x_{1}*_{3}x_{2},y_{1}*_{3}x_{2})=b_{1}(x_{1},y_{1})q_{2}(x_{2}), \tag{2.5}\] \[b_{1}(x_{2}*_{1}x_{3},x_{2}*_{1}y_{3})=q_{2}(x_{2})b_{3}(x_{3},y_{3}), \tag{2.6}\] \[b_{1}(x_{2}*_{1}x_{3},y_{2}*_{1}x_{3})=b_{2}(x_{2},y_{2})q_{3}(x_{3}), \tag{2.7}\] \[b_{2}(x_{3}*_{2}x_{1},x_{3}*_{2}y_{1})=q_{3}(x_{3})b_{1}(x_{1},y_{1}), \tag{2.8}\] \[b_{2}(x_{3}*_{2}x_{1},y_{3}*_{2}x_{1})=b_{3}(x_{3},y_{3})q_{1}(x_{1}), \tag{2.9}\] \[(x_{1}*_{3}x_{2})*_{2}x_{1}=x_{2}q_{1}(x_{1})\qquad\text{and}\qquad x_{2}*_{1}(x_{1}*_{3}x_{2})=x_{1}q_{2}(x_{2}), \tag{2.10}\] \[(x_{2}*_{1}x_{3})*_{3}x_{2}=x_{3}q_{2}(x_{2})\qquad\text{and}\qquad x_{3}*_{2}(x_{2}*_{1}x_{3})=x_{2}q_{3}(x_{3}), \tag{2.11}\] \[(x_{3}*_{2}x_{1})*_{1}x_{3}=x_{1}q_{3}(x_{3})\qquad\text{and}\qquad x_{1}*_{3}(x_{3}*_{2}x_{1})=x_{3}q_{1}(x_{1}), \tag{2.12}\] \[(x_{1}*_{3}x_{2})*_{2}y_{1}+(y_{1}*_{3}x_{2})*_{2}x_{1}=x_{2}b_{1}(x_{1},y_{1}), \tag{2.13}\] \[x_{2}*_{1}(x_{1}*_{3}y_{2})+y_{2}*_{1}(x_{1}*_{3}x_{2})=x_{1}b_{2}(x_{2},y_{2}), \tag{2.14}\] \[(x_{2}*_{1}x_{3})*_{3}y_{2}+(y_{2}*_{1}x_{3})*_{3}x_{2}=x_{3}b_{2}(x_{2},y_{2}), \tag{2.15}\] \[x_{3}*_{2}(x_{2}*_{1}y_{3})+y_{3}*_{2}(x_{2}*_{1}x_{3})=x_{2}b_{3}(x_{3},y_{3}), \tag{2.16}\] \[(x_{3}*_{2}x_{1})*_{1}y_{3}+(y_{3}*_{2}x_{1})*_{1}x_{3}=x_{1}b_{3}(x_{3},y_{3}), \tag{2.17}\] \[x_{1}*_{3}(x_{3}*_{2}y_{1})+y_{1}*_{3}(x_{3}*_{2}x_{1})=x_{3}b_{1}(x_{1},y_{1}). \tag{2.18}\] Proof.: First, (2.4) and (2.5) are obtained by linearizing (2.2). By (2.1) and (2.4) we have for \(x_{1}\in V_{1}\) and \(x_{2}\), \(y_{2}\in V_{2}\) \[b_{2}\big{(}(x_{1}*_{3}x_{2})*_{2}x_{1},y_{2}\big{)}=b_{3}(x_{1}*_{3}x_{2},x_{1}*_{3}y_{2})=q_{1}(x_{1})b_{2}(x_{2},y_{2}).\] Since \(b_{2}\) is nonsingular, it follows that \((x_{1}*_{3}x_{2})*_{2}x_{1}=x_{2}q_{1}(x_{1})\). Similarly, (2.1) and (2.5) yield \[b_{1}\big{(}y_{1},x_{2}*_{1}(x_{1}*_{3}x_{2})\big{)}=b_{3}(y_{1}*_{3}x_{2},x_{1}*_{3}x_{2})=b_{1}(y_{1},x_{1})q_{2}(x_{2})\quad\text{for all $y_{1}\in V_{1}$},\] hence \(x_{2}*_{1}(x_{1}*_{3}x_{2})=x_{1}q_{2}(x_{2})\). We thus obtain (2.10); then (2.13), (2.14) follow by linearization. The main part of the proof consists in proving (2.3). For this, fix an anisotropic vector \(x_{2}\in V_{2}\). The map \(r_{x_{2}}\colon V_{1}\to V_{3}\) defined by \(r_{x_{2}}(x_{1})=x_{1}*_{3}x_{2}\) is injective, for \(x_{1}*_{3}x_{2}=0\) implies \(x_{1}=0\) by (2.10). Since \(\dim V_{1}=\dim V_{3}\) the map \(r_{x_{2}}\) is also surjective, hence every \(x_{3}\in V_{3}\) can be written as \(x_{3}=x_{1}*_{3}x_{2}\) for some \(x_{1}\in V_{1}\).
Then by (2.10) \[x_{2}*_{1}x_{3}=x_{2}*_{1}(x_{1}*_{3}x_{2})=x_{1}q_{2}(x_{2}),\] hence \[q_{1}(x_{2}*_{1}x_{3})=q_{1}(x_{1})q_{2}(x_{2})^{2}.\] But since \(x_{3}=x_{1}*_{3}x_{2}\) it follows from (2.2) that \(q_{3}(x_{3})=q_{1}(x_{1})q_{2}(x_{2})\), hence the right side of the last displayed equation can be rewritten as \(q_{2}(x_{2})q_{3}(x_{3})\). We have thus proven \(q_{1}(x_{2}*_{1}x_{3})=q_{2}(x_{2})q_{3}(x_{3})\) when \(x_{2}\) is anisotropic. Moreover, by (2.10) we have for all \(z_{2}\in V_{2}\) \[b_{1}(x_{2}*_{1}x_{3},z_{2}*_{1}x_{3})=b_{1}\big{(}x_{2}*_{1}(x_{1}*_{3}x_{2}), z_{2}*_{1}(x_{1}*_{3}x_{2})\big{)}=q_{2}(x_{2})b_{1}\big{(}x_{1},z_{2}*_{1}(x_{1}*_{3} x_{2})\big{)}.\] By (2.1) and (2.4), \[b_{1}\big{(}x_{1},z_{2}*_{1}(x_{1}*_{3}x_{2})\big{)}=b_{3}(x_{1}*_{3}z_{2},x_{1 }*_{3}x_{2})=q_{1}(x_{1})b_{2}(z_{2},x_{2}),\] hence, as \(q_{1}(x_{1})q_{2}(x_{2})=q_{3}(x_{3})\), \[b_{1}(x_{2}*_{1}x_{3},z_{2}*_{1}x_{3})=b_{2}(x_{2},z_{2})q_{3}(x_{3}). \tag{2.19}\] Now, assume \(x_{2}\) is isotropic. Pick anisotropic vectors \(y_{2}\), \(z_{2}\in V_{2}\) such that \(x_{2}=y_{2}+z_{2}\). (If \(\dim V_{2}>2\), we may pick any anisotropic \(y_{2}\) orthogonal to \(x_{2}\) and let \(z_{2}=x_{2}-y_{2}\).) By the first part of the proof we have \[q_{1}(y_{2}*_{1}x_{3})=q_{2}(y_{2})q_{3}(x_{3})\qquad\text{and}\qquad q_{1}(z_{ 2}*_{1}x_{3})=q_{2}(z_{2})q_{3}(x_{3}).\] Moreover, (2.19) yields \[b_{1}(y_{2}*_{1}x_{3},z_{2}*_{1}x_{3})=b_{2}(y_{2},z_{2})q_{3}(x_{3}).\] Therefore, \[q_{1}(x_{2}*_{1}x_{3})=q_{1}(y_{2}*_{1}x_{3})+b_{1}(y_{2}*_{1}x_{3},z _{2}*_{1}x_{3})+q_{1}(z_{2}*_{1}x_{3})\\ =q_{2}(y_{2})q_{3}(x_{3})+b_{2}(y_{2},z_{2})q_{3}(x_{3})+q_{2}(z_{2 })q_{3}(x_{3})=q_{2}(x_{2})q_{3}(x_{3}).\] Thus, the equation \(q_{1}(x_{2}*_{1}x_{3})=q_{2}(x_{2})q_{3}(x_{3})\) is proved for all \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\). The proof of \(q_{2}(x_{3}*_{2}x_{1})=q_{3}(x_{3})q_{1}(x_{1})\) for all \(x_{3}\in V_{3}\), \(x_{1}\in V_{1}\) is similar, using bijectivity of the map \(\ell_{x_{1}}\colon V_{2}\to V_{3}\) carrying \(x_{2}\) to \(x_{1}*_{3}x_{2}\) for \(x_{1}\) anisotropic. This completes the proof of (2.3), and (2.6), (2.7), (2.8), (2.9) follow by linearization. The same arguments that gave (2.10) from (2.4) and (2.5) yield (2.11) from (2.6) and (2.7), and also (2.12) from (2.8) and (2.9). The relations (2.15) and (2.16) (resp. (2.17) and (2.18)) are derived by linearizing (2.11) (resp. (2.12)). Our main object of study in this section is defined next. **Definition 2.4**.: A _composition of quadratic spaces_ over \(F\) is a \(4\)-tuple \[\mathscr{G}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3}\big{)}\] where \((V_{1},q_{1})\), \((V_{2},q_{2})\), \((V_{3},q_{3})\) are nonsingular quadratic spaces of the same dimension over \(F\) and \(*_{3}\colon V_{1}\times V_{2}\to V_{3}\) is a composition map. We write \(\dim\mathscr{G}=n\) if \(\dim V_{1}=\dim V_{2}=\dim V_{3}=n\). In view of Proposition 2.3, each composition of quadratic spaces \(\mathscr{G}\) yields _derived compositions of quadratic spaces_\(\partial\mathscr{G}\) and \(\partial^{2}\mathscr{G}\) defined by \[\partial\mathscr{G}=\big{(}(V_{2},q_{2}),\,(V_{3},q_{3}),\,(V_{1},q_{1}),\,*_ {1}\big{)}\] and \[\partial^{2}\mathscr{G}=\big{(}(V_{3},q_{3}),\,(V_{1},q_{1}),\,(V_{2},q_{2}), \,*_{2}\big{)}.\] The composition maps \(*_{1}\) and \(*_{2}\) are called the _derived composition maps_ of \(*_{3}\). 
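As a simple illustration, anticipating Example 2.5(a) below, let \(K\) be a quadratic etale \(F\)-algebra with norm \(n_{K}\), conjugation \(x\mapsto\bar{x}\) and polar form \(b(x,y)=x\bar{y}+\bar{x}y\), and consider the composition of quadratic spaces \(\big{(}(K,n_{K}),\,(K,n_{K}),\,(K,n_{K}),\,*_{3}\big{)}\) where \(x_{1}*_{3}x_{2}=x_{1}x_{2}\) is the multiplication of \(K\). A direct check of (2.1) shows that the derived composition maps are \[x_{2}*_{1}x_{3}=\bar{x}_{2}x_{3}\qquad\text{and}\qquad x_{3}*_{2}x_{1}=\bar{x}_{1}x_{3},\] which satisfy (2.3) since \(n_{K}(\bar{x})=n_{K}(x)\) for all \(x\in K\).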
Since \(\partial\) is a cyclic operation of period \(3\), we have

\[\partial(\partial\mathscr{G})=\partial^{2}\mathscr{G},\qquad\partial^{2}(\partial\mathscr{G})=\mathscr{G}=\partial(\partial^{2}\mathscr{G}),\qquad\partial^{2}(\partial^{2}\mathscr{G})=\partial\mathscr{G}.\]

**Examples 2.5**.:

1. Let \(A\) be either \(F\), a quadratic étale \(F\)-algebra, a quaternion \(F\)-algebra or an octonion \(F\)-algebra, and let \(n_{A}\colon A\to F\) be (respectively) the squaring map, the norm, the quaternion (reduced) norm or the octonion norm. Assuming \(\operatorname{char}F\neq 2\) if \(A=F\), we know from the properties of these algebras that multiplication in \(A\) defines a composition of quadratic spaces \[\mathscr{G}=\big{(}(A,n_{A}),\,(A,n_{A}),\,(A,n_{A}),\,*_{3}\big{)}.\] This particular type of composition is discussed in §2.5 in relation with composition algebras. Note that if \(A\neq F\) the derived composition maps \(*_{1}\) and \(*_{2}\) are _not_ simply given by the multiplication in \(A\); see Proposition 2.28.

The following examples are obtained in relation with a Galois \(F\)-algebra \(L\) with elementary abelian Galois group \(\{1,\,\sigma_{1},\,\sigma_{2},\,\sigma_{3}\}\), i.e., an étale biquadratic \(F\)-algebra.

2. Assume \(\operatorname{char}F\neq 2\), and for \(i=1,\,2,\,3\) let \(V_{i}\) denote the following \(1\)-dimensional subspace of \(L\): \[V_{i}=\{x_{i}\in L\mid\sigma_{j}(x_{i})=-x_{i}\text{ for }j\neq i\}.\] Define \(q_{i}\colon V_{i}\to F\) by \(q_{i}(x_{i})=x_{i}^{2}\). For \(x_{1}\in V_{1}\) and \(x_{2}\in V_{2}\) we have \(x_{1}x_{2}\in V_{3}\) and \((x_{1}x_{2})^{2}=x_{1}^{2}x_{2}^{2}\), hence multiplication in \(L\) defines a composition map \(*_{3}\colon V_{1}\times V_{2}\to V_{3}\). The derived composition maps \(*_{1}\) and \(*_{2}\) are also given by the multiplication in \(L\).

3. Let \(A\) be a central simple \(F\)-algebra of degree \(4\) containing \(L\). Assume \(\operatorname{char}F\neq 2\) and \(F\) contains an element \(\zeta\) such that \(\zeta^{2}=-1\). For \(i=1\), \(2\), \(3\), define \[V_{i}=\{x_{i}\in A\mid x_{i}\ell=\sigma_{i}(\ell)x_{i}\text{ for all }\ell\in L\}.\] The \(F\)-vector space \(V_{i}\) has dimension \(4\) and carries a quadratic form \(q_{i}\) given by \(q_{i}(x_{i})=\operatorname{Trd}_{A}(x_{i}^{2})\), where \(\operatorname{Trd}_{A}\) is the reduced trace. It is shown in [16] that the following formula defines a composition map \(*_{3}\colon V_{1}\times V_{2}\to V_{3}\): \[x_{1}*_{3}x_{2}=(1+\zeta)x_{1}x_{2}+(1-\zeta)x_{2}x_{1}.\] The derived maps are given by similar formulas: \[x_{2}*_{1}x_{3}=(1+\zeta)x_{2}x_{3}+(1-\zeta)x_{3}x_{2}\qquad\text{for }x_{2}\in V_{2}\text{ and }x_{3}\in V_{3},\] \[x_{3}*_{2}x_{1}=(1+\zeta)x_{3}x_{1}+(1-\zeta)x_{1}x_{3}\qquad\text{for }x_{3}\in V_{3}\text{ and }x_{1}\in V_{1}.\] A characteristic \(2\) version of these composition maps is given in [20].

4. Compositions of dimension \(8\) from central simple algebras with symplectic involution of degree \(8\) are given in a similar way in [4].

### Canonical Clifford maps

Our goal in this subsection is to obtain structural information on the quadratic spaces for which a composition exists. This information will be derived from algebra homomorphisms defined on Clifford and even Clifford algebras.
Throughout this subsection, we fix a composition of quadratic spaces \[\mathscr{G}=\bigl{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3} \bigr{)}\] and we let \(*_{1}\) and \(*_{2}\) denote the derived composition maps of \(*_{3}\), as per Definition 2.4. For each \(x_{1}\in V_{1}\) we may consider two linear maps \[\ell_{x_{1}}\colon V_{2}\to V_{3},\quad x_{2}\mapsto x_{1}*_{3}x_{2}\qquad \text{and}\qquad r_{x_{1}}\colon V_{3}\to V_{2},\quad x_{3}\mapsto x_{3}*_{2} x_{1}.\] By (2.10) and (2.12) we have \[\ell_{x_{1}}\circ r_{x_{1}}=q_{1}(x_{1})\operatorname{Id}_{V_{3}}\qquad\text {and}\qquad r_{x_{1}}\circ\ell_{x_{1}}=q_{1}(x_{1})\operatorname{Id}_{V_{2}}.\] Therefore, the linear map \[\alpha\colon V_{1}\to\operatorname{End}(V_{2}\oplus V_{3}),\qquad x_{1}\mapsto \begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}\] extends to an \(F\)-algebra homomorphism defined on the Clifford algebra \(C(V_{1},q_{1})\): \[C(\alpha)\colon C(V_{1},q_{1})\to\operatorname{End}(V_{2}\oplus V_{3}).\] The image of the even Clifford algebra \(C_{0}(V_{1},q_{1})\) lies in the diagonal subalgebra, hence \(C(\alpha)\) restricts to an \(F\)-algebra homomorphism \[C_{0}(\alpha)\colon C_{0}(V_{1},q_{1})\to(\operatorname{End}V_{2})\times( \operatorname{End}V_{3}).\] We write \(\tau_{1}\) for the involution on \(C(V_{1},q_{1})\) that leaves every vector in \(V_{1}\) fixed, and \(\tau_{01}\) for the restriction of \(\tau_{1}\) to \(C_{0}(V_{1},q_{1})\). We let \(\sigma_{b_{2}\perp b_{3}}\) (resp. \(\sigma_{b_{2}}\), resp. \(\sigma_{b_{3}}\)) denote the involution on \(\operatorname{End}(V_{2}\oplus V_{3})\) (resp. \(\operatorname{End}V_{2}\), resp. \(\operatorname{End}V_{3}\)) adjoint to \(b_{2}\perp b_{3}\) (resp. \(b_{2}\), resp. \(b_{3}\)). **Theorem 2.6**.: _The maps \(C(\alpha)\) and \(C_{0}(\alpha)\) are homomorphisms of algebras with involution_ \[\begin{array}{cl}C(\alpha)\colon&(C(V_{1},q_{1}),\tau_{1})\to(\operatorname {End}(V_{2}\oplus V_{3}),\sigma_{b_{2}\perp b_{3}}),\\ C_{0}(\alpha)\colon&(C_{0}(V_{1},q_{1}),\tau_{01})\to(\operatorname{End}V_{2}, \sigma_{b_{2}})\times(\operatorname{End}V_{3},\sigma_{b_{3}}).\end{array}\] _Moreover, \(\dim\mathscr{G}=1\), \(2\), \(4\) or \(8\)._ Proof.: For the first part, it suffices to show that for \(x_{1}\in V_{1}\), \[\sigma_{b_{2}\perp b_{3}}\begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}=\begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}.\] This amounts to proving that for \(x_{1}\in V_{1}\), \(x_{2}\), \(y_{2}\in V_{2}\) and \(x_{3}\), \(y_{3}\in V_{3}\) \[b_{2}(x_{3}*_{2}x_{1},y_{2})+b_{3}(x_{1}*_{3}x_{2},y_{3})=b_{2}(x_{2},y_{3}*_{2 }x_{1})+b_{3}(x_{3},x_{1}*_{3}y_{2}),\] which follows from (2.1). To determine the various options for \(\dim\mathscr{C}\), observe that the map \(C(\alpha)\) endows \(V_{2}\oplus V_{3}\) with a structure of left \(C(V_{1},q_{1})\)-module; similarly, \(V_{2}\) and \(V_{3}\) are left modules over \(C_{0}(V_{1},q_{1})\) through \(C_{0}(\alpha)\). This observation yields restrictions on the dimensions of \(V_{2}\) and \(V_{3}\), because the dimension of a left module over a central simple algebra \(A\) is a multiple of \((\deg A)(\operatorname{ind}A)\), where \(\deg A\) is the degree of \(A\) and \(\operatorname{ind}A\) is its (Schur) index. Let \(n=\dim\mathscr{C}\). 
If \(n\) is even, then \(C(V_{1},q_{1})\) is a central simple \(F\)-algebra, and \(V_{2}\oplus V_{3}\) is a left module over \(C(V_{1},q_{1})\) through \(C(\alpha)\), hence \((\deg C(V_{1},q_{1}))(\operatorname{ind}C(V_{1},q_{1}))\) divides \(2n\). Since \(\deg C(V_{1},q_{1})=2^{n/2}\), it follows that \(2^{n/2}\) divides \(2n\), hence \(n=2\), \(4\) or \(8\).

If \(n\) is odd, the even Clifford algebra \(C_{0}(V_{1},q_{1})\) is central simple over \(F\), and \(V_{2}\) is a left module over \(C_{0}(V_{1},q_{1})\) through \(C_{0}(\alpha)\), hence \(\dim V_{2}\) is a multiple of \((\deg C_{0}(V_{1},q_{1}))(\operatorname{ind}C_{0}(V_{1},q_{1}))\). As \(\deg C_{0}(V_{1},q_{1})=2^{(n-1)/2}\), this means that \(2^{(n-1)/2}\operatorname{ind}C_{0}(V_{1},q_{1})\) divides \(n\). As \(n\) is assumed to be odd, we must have \(n=1\).

Mimicking the construction above, we attach to the derived compositions \(\partial\mathscr{C}\) and \(\partial^{2}\mathscr{C}\) linear maps

\[\alpha^{\prime}\colon V_{2}\to\operatorname{End}(V_{3}\oplus V_{1}),\qquad x_{2}\mapsto\begin{pmatrix}0&r_{x_{2}}\\ \ell_{x_{2}}&0\end{pmatrix}\]

and

\[\alpha^{\prime\prime}\colon V_{3}\to\operatorname{End}(V_{1}\oplus V_{2}),\qquad x_{3}\mapsto\begin{pmatrix}0&r_{x_{3}}\\ \ell_{x_{3}}&0\end{pmatrix}.\]

These maps yield homomorphisms

\[C(\alpha^{\prime})\colon(C(V_{2},q_{2}),\tau_{2})\to(\operatorname{End}(V_{3}\oplus V_{1}),\sigma_{b_{3}\perp b_{1}}),\]
\[C_{0}(\alpha^{\prime})\colon(C_{0}(V_{2},q_{2}),\tau_{02})\to(\operatorname{End}V_{3},\sigma_{b_{3}})\times(\operatorname{End}V_{1},\sigma_{b_{1}}), \tag{2.20}\]

and

\[C(\alpha^{\prime\prime})\colon(C(V_{3},q_{3}),\tau_{3})\to(\operatorname{End}(V_{1}\oplus V_{2}),\sigma_{b_{1}\perp b_{2}}),\]
\[C_{0}(\alpha^{\prime\prime})\colon(C_{0}(V_{3},q_{3}),\tau_{03})\to(\operatorname{End}V_{1},\sigma_{b_{1}})\times(\operatorname{End}V_{2},\sigma_{b_{2}}). \tag{2.21}\]

We next take a closer look at compositions of the various degrees. If \(\dim\mathscr{C}=1\), then \(\operatorname{char}F\neq 2\) since odd-dimensional quadratic forms are singular in characteristic \(2\). If \(q_{1}\) represents \(\lambda_{1}\in F^{\times}\) and \(q_{2}\) represents \(\lambda_{2}\in F^{\times}\), then by multiplicativity \(q_{3}\) represents \(\lambda_{1}\lambda_{2}\in F^{\times}\), hence also \((\lambda_{1}\lambda_{2})^{-1}\). Thus in this case there exist \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F^{\times}\) such that \(\lambda_{1}\lambda_{2}\lambda_{3}=1\) and

\[q_{1}\simeq\langle\lambda_{1}\rangle,\qquad q_{2}\simeq\langle\lambda_{2}\rangle,\qquad q_{3}\simeq\langle\lambda_{3}\rangle,\]

and \(\langle 1\rangle\perp q_{1}\perp q_{2}\perp q_{3}\) is a \(2\)-fold Pfister form. We will mostly ignore this easy case. (See however Example 2.5(2).)

**Proposition 2.7**.: _Let \(\dim\mathscr{C}=2\). There exists a \(1\)-fold Pfister form \(n_{\mathscr{C}}\), uniquely determined up to isometry, and scalars \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F^{\times}\) such that \(\lambda_{1}\lambda_{2}\lambda_{3}=1\) and_

\[q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{C}},\qquad q_{2}\simeq\langle\lambda_{2}\rangle n_{\mathscr{C}},\qquad q_{3}\simeq\langle\lambda_{3}\rangle n_{\mathscr{C}}.\]

_The form \(n_{\mathscr{C}}\perp q_{1}\perp q_{2}\perp q_{3}\) is a \(3\)-fold Pfister form canonically associated to \(\mathscr{C}\) up to isometry._

Proof.: Since \(\dim V_{1}=2\), we have \(q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{C}}\) for some \(\lambda_{1}\in F^{\times}\) and some uniquely determined \(1\)-fold Pfister form \(n_{\mathscr{C}}\).
For any anisotropic \(y_{2}\in V_{2}\), the map \(r_{y_{2}}\colon V_{1}\to V_{3}\) carrying \(x_{1}\) to \(x_{1}*_{3}y_{2}\) is a similitude with multiplier \(q_{2}(y_{2})\) by (2.2), hence \(q_{3}\simeq\langle q_{2}(y_{2})\rangle q_{1}\). Similarly, for any anisotropic \(y_{1}\in V_{1}\) the map \(\ell_{y_{1}}\colon V_{2}\to V_{3}\) is a similitude with multiplier \(q_{1}(y_{1})\), hence \(q_{3}\simeq\langle q_{1}(y_{1})\rangle q_{2}\). Therefore, \[q_{3}\simeq\langle\lambda_{1}q_{2}(y_{2})\rangle n_{\mathscr{C}}\qquad\text{ and}\qquad q_{2}\simeq\langle q_{1}(y_{1})\rangle q_{3}\simeq\langle\lambda_{1}q_{1}(y_{1})q_{2 }(y_{2})\rangle n_{\mathscr{C}}.\] Now, \(\lambda_{1}q_{1}(y_{1})\) is represented by \(n_{\mathscr{C}}\) since \(q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{C}}\), hence \(\langle\lambda_{1}q_{1}(y_{1})\rangle n_{\mathscr{C}}\simeq n_{\mathscr{C}}\). Letting \(\lambda_{2}=q_{2}(y_{2})\) and \(\lambda_{3}=(\lambda_{1}q_{2}(y_{2}))^{-1}\), we then have \(\lambda_{1}\lambda_{2}\lambda_{3}=1\) and \[q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{C}},\qquad q_{2}\simeq\langle \lambda_{2}\rangle n_{\mathscr{C}},\qquad q_{3}\simeq\langle\lambda_{3}\rangle n_{ \mathscr{C}}.\qed\] **Proposition 2.8**.: _Let \(\dim\mathscr{C}=4\). There exists a \(2\)-fold quadratic Pfister form \(n_{\mathscr{C}}\), uniquely determined up to isometry, and scalars \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F^{\times}\) such that \(\lambda_{1}\lambda_{2}\lambda_{3}=1\) and_ \[q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{C}},\qquad q_{2}\simeq\langle \lambda_{2}\rangle n_{\mathscr{C}},\qquad q_{3}\simeq\langle\lambda_{3}\rangle n _{\mathscr{C}}.\] _The form \(n_{\mathscr{C}}\perp q_{1}\perp q_{2}\perp q_{3}\) is a \(4\)-fold Pfister form canonically associated to \(\mathscr{C}\) up to isometry._ Proof.: Consider the homomorphisms of algebras with involution induced by \(C_{0}(\alpha)\): \[\varphi_{2}\colon(C_{0}(V_{1},q_{1}),\tau_{01})\to(\operatorname{End}V_{2}, \sigma_{b_{2}})\qquad\text{and}\qquad\varphi_{3}\colon(C_{0}(V_{1},q_{1}),\tau _{01})\to(\operatorname{End}V_{3},\sigma_{b_{3}}).\] If \(Z\) is a field, then \(C_{0}(V_{1},q_{1})\) is simple and its image under \(\varphi_{2}\) is the centralizer in \(\operatorname{End}V_{2}\) of a separable quadratic subfield fixed under \(\sigma_{b_{2}}\). But the restriction of \(\sigma_{b_{2}}\) to such a centralizer is an orthogonal involution (see [13, (4.12)]), whereas \(\tau_{01}\) is symplectic, so this case is impossible. Therefore, \(Z\) is not a field, which means that the discriminant (or Arf invariant) of \(q_{1}\) is trivial. It follows that \(q_{1}\) is a multiple of some uniquely determined \(2\)-fold Pfister form \(n_{\mathscr{C}}\). The same arguments as in the proof of Proposition 2.7 show that there exist \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F^{\times}\) such that \(q_{i}\simeq\langle\lambda_{i}\rangle n_{\mathscr{C}}\) for \(i=1,\,2,\,3\). Finally, we consider the case where \(\dim\mathscr{C}=8\). Recall from SS1.2 that in this case the Clifford algebra \(C(V_{1},q_{1})\) and the even Clifford algebra \(C_{0}(V_{1},q_{1})\) carry canonical quadratic pairs. We use for these quadratic pairs the notation \((\tau_{1},\mathfrak{g}_{1})\) and \((\tau_{01},\mathfrak{g}_{01})\) respectively. **Proposition 2.9**.: _Let \(\dim\mathscr{C}=8\). 
The canonical maps \(C(\alpha)\) and \(C_{0}(\alpha)\) are isomorphisms of algebras with quadratic pair_ \[\begin{array}{ll}C(\alpha)\colon&(C(V_{1},q_{1}),\tau_{1},\mathfrak{g}_{1}) \xrightarrow{\sim}(\operatorname{End}(V_{2}\oplus V_{3}),\sigma_{b_{2}\perp b _{3}},\mathfrak{f}_{q_{2}\perp q_{3}}),\\ C_{0}(\alpha)\colon&(C_{0}(V_{1},q_{1}),\tau_{01},\mathfrak{g}_{01}) \xrightarrow{\sim}(\operatorname{End}V_{2},\sigma_{b_{2}},\mathfrak{f}_{q_{2} })\times(\operatorname{End}V_{3},\sigma_{b_{3}},\mathfrak{f}_{q_{3}}).\end{array}\] _Moreover, there exists a \(3\)-fold quadratic Pfister form \(n_{\mathscr{C}}\), uniquely determined up to isometry, and scalars \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F^{\times}\) such that \(\lambda_{1}\lambda_{2}\lambda_{3}=1\) and_ \[q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{C}},\qquad q_{2}\simeq\langle \lambda_{2}\rangle n_{\mathscr{C}},\qquad q_{3}\simeq\langle\lambda_{3}\rangle n _{\mathscr{C}}.\] _The form \(n_{\mathscr{C}}\perp q_{1}\perp q_{2}\perp q_{3}\) is a \(5\)-fold Pfister form canonically associated to \(\mathscr{C}\) up to isometry._ Proof.: In this case we have \(\dim C(V_{1},q_{1})=\dim\operatorname{End}(V_{2}\oplus V_{3})\). Since the algebra \(C(V_{1},q_{1})\) is simple, it follows that \(C(\alpha)\) is an isomorphism, hence \(C(V_{1},q_{1})\) is split. Moreover, \(C_{0}(\alpha)\) also is an isomorphism, hence the center of \(C_{0}(V_{1},q_{1})\) is isomorphic to \(F\times F\), and therefore the discriminant (or Arf invariant) of \(q_{1}\) is trivial. It follows that \(q_{1}\) is a multiple of some uniquely determined \(3\)-fold Pfister form \(n_{\mathscr{C}}\), and the existence of \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F^{\times}\) such that \(q_{i}\simeq\langle\lambda_{i}\rangle n_{\mathscr{C}}\) for \(i=1\), \(2\), \(3\) is proved as in the case where \(\dim\mathscr{C}=2\) (see Proposition 2.7). Since we already know from Theorem 2.6 that \(C(\alpha)\) and \(C_{0}(\alpha)\) are homomorphisms of algebras with involution, it only remains to see that these maps also preserve the semitraces. The arguments in each case are similar. For \(C(\alpha)\) we have to show that \[\mathfrak{f}_{q_{2}\perp q_{3}}\big{(}C(\alpha)(s)\big{)}=\mathfrak{g}_{1}(s) \qquad\text{for all }s\in\operatorname{Sym}(\tau_{1}).\] Fix \(e_{1}\), \(e_{1}^{\prime}\in V_{1}\) such that \(b_{1}(e_{1},e_{1}^{\prime})=1\). By definition, \(\mathfrak{g}_{1}(s)=\operatorname{Trd}_{C(V_{1},q_{1})}(e_{1}e_{1}^{\prime}s)\). Since isomorphisms of central simple algebras preserve reduced traces, we have for all \(s\in\operatorname{Sym}(\tau_{1})\) \[\mathfrak{g}_{1}(s)=\operatorname{Trd}_{\operatorname{End}(V_{2}\oplus V_{3}) }\big{(}C(\alpha)(e_{1}e_{1}^{\prime}s)\big{)}=\operatorname{Trd}_{ \operatorname{End}(V_{2}\oplus V_{3})}\big{(}C(\alpha)(e_{1}e_{1}^{\prime}) \circ C(\alpha)(s)\big{)}.\] Now, \(C(\alpha)\big{(}\operatorname{Sym}(\tau_{1})\big{)}=\operatorname{Sym}( \sigma_{b_{2}\perp b_{3}})\) because \(C(\alpha)\) is an isomorphism of algebras with involution. 
Therefore, we may rewrite the equation we have to prove as \[\mathfrak{f}_{q_{2}\perp q_{3}}(s^{\prime})=\operatorname{Trd}_{\operatorname {End}(V_{2}\oplus V_{3})}(C(\alpha)(e_{1}e_{1}^{\prime})\circ s^{\prime}) \qquad\text{for all }s^{\prime}\in\operatorname{Sym}(\sigma_{b_{2}\perp b_{3}}).\] Using the standard identification \(\operatorname{End}(V_{2}\oplus V_{3})=(V_{2}\oplus V_{3})\otimes(V_{2}\oplus V _{3})\) set up in SS1.1, we see that \(\operatorname{Sym}(\sigma_{b_{2}\perp b_{3}})\) is spanned by elements of the form \((x_{2}+x_{3})\otimes(x_{2}+x_{3})\) with \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\), and that for \(s^{\prime}=(x_{2}+x_{3})\otimes(x_{2}+x_{3})\) \[C(\alpha)(e_{1}e_{1}^{\prime})\circ s^{\prime}=\big{(}C(\alpha)(e_{1}e_{1}^{ \prime})(x_{2}+x_{3})\big{)}\otimes(x_{2}+x_{3})=\big{(}r_{e_{1}}\ell_{e_{1}^ {\prime}}(x_{2})+\ell_{e_{1}}r_{e_{1}^{\prime}}(x_{3})\big{)}\otimes(x_{2}+x_{ 3}).\] Therefore, it suffices to show that for all \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\) \[\mathfrak{f}_{q_{2}\perp q_{3}}\big{(}(x_{2}+x_{3})\otimes(x_{2}+x_{3}) \big{)}=\operatorname{Trd}_{\operatorname{End}(V_{2}\oplus V_{3})}\big{(}(r_{ e_{1}}\ell_{e_{1}^{\prime}}(x_{2})+\ell_{e_{1}}r_{e_{1}^{\prime}}(x_{3}))\otimes(x_{2}+x_{ 3})\big{)}. \tag{2.22}\] The right side is \[(b_{2}\perp b_{3})\big{(}(e_{1}^{\prime}*_{3}x_{2})*_{2}e_{1}+e_{ 1}*_{3}(x_{3}*_{2}e_{1}^{\prime}),x_{2}+x_{3})=\\ b_{2}\big{(}(e_{1}^{\prime}*_{3}x_{2})*_{2}e_{1},x_{2}\big{)}+b_{ 3}(e_{1}*_{3}(x_{3}*_{2}e_{1}^{\prime}),x_{3}).\] Now, by (2.1) and (2.4) we have \[b_{2}\big{(}(e_{1}^{\prime}*_{3}x_{2})*_{2}e_{1},x_{2}\big{)}=b_{3}(e_{1}^{ \prime}*_{3}x_{2},e_{1}*_{3}x_{2})=b_{1}(e_{1}^{\prime},e_{1})q_{2}(x_{2})\] and, similarly, \[b_{3}(e_{1}*_{3}(x_{3}*_{2}e_{1}^{\prime}),x_{3})=b_{2}(x_{3}*_{2}e_{1}^{ \prime},x_{3}*_{2}e_{1})=q_{3}(x_{3})b_{1}(e_{1}^{\prime},e_{1}).\] As \(b_{1}(e_{1},e_{1}^{\prime})=1\), it follows that \[\operatorname{Trd}_{\operatorname{End}(V_{2}\oplus V_{3})}\big{(}(r_{e_{1}} \ell_{e_{1}^{\prime}}(x_{2})+\ell_{e_{1}}r_{e_{1}^{\prime}}(x_{3}))\otimes(x_{ 2}+x_{3})\big{)}=q_{2}(x_{2})+q_{3}(x_{3}).\] On the other hand, by definition of \(\mathfrak{f}_{q_{2}\perp q_{3}}\) we have \[\mathfrak{f}_{q_{2}\perp q_{3}}\big{(}(x_{2}+x_{3})\otimes(x_{2}+x_{3})\big{)} =(q_{2}\perp q_{3})(x_{2}+x_{3})=q_{2}(x_{2})+q_{3}(x_{3}).\] We have thus checked (2.22). The proof that \(C_{0}(\alpha)\) also preserves the semitraces is obtained by a slight variation of the preceding arguments. We have to show that \[(\mathfrak{f}_{q_{2}},\mathfrak{f}_{q_{3}})\big{(}C_{0}(\alpha)(s)\big{)}=C_{0} (\alpha)\big{(}\mathfrak{g}_{01}(s)\big{)}\qquad\text{for all }s\in\operatorname{Sym}(\tau_{01}).\] Since \(C_{0}(\alpha)\) is an isomorphism of algebras with involution, this amounts to showing \[\big{(}\mathfrak{f}_{q_{2}}(s_{2}^{\prime}),\mathfrak{f}_{q_{3}}(s_{3}^{\prime}) \big{)}=\big{(}\operatorname{Trd}_{\operatorname{End}V_{2}}(r_{e_{1}}\ell_{e_{1 }^{\prime}}^{\prime}s_{2}^{\prime}),\operatorname{Trd}_{\operatorname{End}V_{3} }(\ell_{e_{1}}r_{e_{1}^{\prime}}^{\prime}s_{3}^{\prime})\big{)} \tag{2.23}\] for all \(s_{2}^{\prime}\in\operatorname{Sym}(\sigma_{b_{2}})\), \(s_{3}^{\prime}\in\operatorname{Sym}(\sigma_{b_{3}})\). It suffices to consider \(s_{2}^{\prime}\), \(s_{3}^{\prime}\) of the form \(x_{2}\otimes x_{2}\), \(x_{3}\otimes x_{3}\) for \(x_{2}\in V_{2}\), \(x_{3}\in V_{3}\) under the standard identifications \(\operatorname{End}V_{2}=V_{2}\otimes V_{2}\), \(\operatorname{End}V_{3}=V_{3}\otimes V_{3}\). 
For \(s_{2}^{\prime}=x_{2}\otimes x_{2}\) we have \[r_{e_{1}}\ell_{e_{1}^{\prime}}s_{2}^{\prime}=\big{(}(e_{1}^{\prime}*_{3}x_{2})*_{2 }e_{1}\big{)}\otimes x_{2},\] hence \[\operatorname{Trd}_{\operatorname{End}V_{2}}(r_{e_{1}}\ell_{e_{1}^{\prime}}s_{2}^{ \prime})=b_{2}\big{(}(e_{1}^{\prime}*_{3}x_{2})*_{2}e_{1},x_{2})=b_{3}(e_{1}^{ \prime}*_{3}x_{2},e_{1}*_{3}x_{2})=q_{2}(x_{2}).\] On the other hand \(\mathfrak{f}_{2}(x_{2}\otimes x_{2})=q_{2}(x_{2})\) by definition. Likewise, for \(s_{3}^{\prime}=x_{3}\otimes x_{3}\) \[\operatorname{Trd}_{\operatorname{End}V_{3}}(\ell_{e_{1}}r_{e_{1}^{\prime}}s_ {3}^{\prime})=q_{3}(x_{3})=\mathfrak{f}_{q_{3}}(s_{3}^{\prime}),\] hence (2.23) is proved. **Remark 2.10**.: The map \(C_{0}(\alpha)\) in Proposition 2.9 yields an isomorphism between the center of \(C_{0}(V_{1},q_{1})\) and \(F\times F\), hence also a polarization of \((V_{1},q_{1})\) (see Definition 1.3): the primitive central idempotents \(z_{+}\) and \(z_{-}\) of \(C_{0}(V_{1},q_{1})\) are such that \(C_{0}(\alpha)(z_{+})=(1,0)\) and \(C_{0}(\alpha)(z_{-})=(0,1)\), so that \(C_{0}(\alpha)\) induces homomorphisms \[C_{+}(\alpha)\colon C_{0}(V_{1},q_{1})\to\operatorname{End}V_{2}\quad\text{ and}\quad C_{-}(\alpha)\colon C_{0}(V_{1},q_{1})\to\operatorname{End}V_{3}.\] Similarly, the maps \(C_{0}(\alpha^{\prime})\) and \(C_{0}(\alpha^{\prime\prime})\) of (2.20) and (2.21) attached to \(\partial\mathscr{G}\) and \(\partial^{2}\mathscr{G}\) yield polarizations of \((V_{2},q_{2})\) and \((V_{3},q_{3})\), so that \[C_{+}(\alpha^{\prime})\colon C_{0}(V_{2},q_{2})\to\operatorname{End}V_{3} \quad\text{and}\quad C_{-}(\alpha^{\prime})\colon C_{0}(V_{2},q_{2})\to \operatorname{End}V_{1},\] and \[C_{+}(\alpha^{\prime\prime})\colon C_{0}(V_{3},q_{3})\to\operatorname{End}V_{1} \quad\text{and}\quad C_{-}(\alpha^{\prime\prime})\colon C_{0}(V_{3},q_{3})\to \operatorname{End}V_{2}.\] **Corollary 2.11**.: _For any composition of quadratic spaces \(\mathscr{G}\), the following are equivalent:_ * \(q_{1}\simeq q_{2}\simeq q_{3}\)_;_ * \(q_{1}\)_,_ \(q_{2}\) _and_ \(q_{3}\) _are Pfister forms;_ * \(q_{1}\)_,_ \(q_{2}\) _and_ \(q_{3}\) _represent_ \(1\)_._ Proof.: According to Propositions 2.7, 2.8, 2.9, there exist a quadratic Pfister form \(n_{\mathscr{G}}\) and scalars \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) such that \(\lambda_{1}\lambda_{2}\lambda_{3}=1\) and \(q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{G}}\), \(q_{2}\simeq\langle\lambda_{2}\rangle n_{\mathscr{G}}\) and \(q_{3}\simeq\langle\lambda_{3}\rangle n_{\mathscr{G}}\). This also holds when \(\dim\mathscr{G}=1\), with \(n_{\mathscr{G}}=\langle 1\rangle\). (i) \(\Rightarrow\) (ii) If \(q_{1}\simeq q_{2}\), then \(\langle\lambda_{1}\lambda_{2}\rangle n_{\mathscr{G}}\simeq n_{\mathscr{G}}\), hence \(\langle\lambda_{3}\rangle n_{\mathscr{G}}\simeq n_{\mathscr{G}}\). Therefore, \(q_{3}\simeq n_{\mathscr{G}}\). (ii) \(\Rightarrow\) (iii) This is clear since Pfister quadratic forms represent \(1\). (iii) \(\Rightarrow\) (i) For \(i=1\), \(2\), \(3\), if \(q_{i}\) represents \(1\), then \(n_{\mathscr{G}}\) represents \(\lambda_{i}\), hence \(\langle\lambda_{i}\rangle n_{\mathscr{G}}\simeq n_{\mathscr{G}}\). 
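For example, for the composition given by the multiplication of an octonion algebra \(C\) as in Example 2.5(1), the three quadratic spaces all coincide with \((C,n_{C})\): the norm form \(n_{C}\) is a \(3\)-fold Pfister form and represents \(1=n_{C}(1)\), so the three equivalent conditions of the corollary are all satisfied.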
### Similitudes and isomorphisms Consider two compositions \(\mathscr{G}=\big{(}(V_{1},q_{1})\), \((V_{2},q_{2})\), \((V_{3},q_{3})\), \(*_{3}\big{)}\) and \(\widetilde{\mathscr{G}}=\big{(}(\widetilde{V_{1}},\widetilde{q}_{1})\), \((\widetilde{V_{2}},\widetilde{q}_{2})\), \((\widetilde{V_{3}},\widetilde{q}_{3})\), \(\widetilde{*}_{3}\big{)}\) over an arbitrary field \(F\). As in Definition 2.4, we write \(*_{1}\) and \(*_{2}\) (resp. \(\widetilde{*}_{1}\), \(\widetilde{*}_{2}\)) for the derived composition maps of \(*_{3}\) (resp. \(\widetilde{*}_{3}\)). **Definition 2.12**.: For \(i=1\), \(2\), \(3\), let \(g_{i}\colon(V_{i},q_{i})\to(\widetilde{V}_{i},\widetilde{q}_{i})\) be a similitude with multiplier \(\mu(g_{i})\in F^{\times}\). The triple \((g_{1},g_{2},g_{3})\) is a _similitude of compositions \(\mathscr{G}\to\widetilde{\mathscr{G}}\)_ if there exists \(\lambda_{3}\in F^{\times}\) such that \[\lambda_{3}\,g_{3}(x_{1}*_{3}x_{2})=g_{1}(x_{1})\,\widetilde{*}_{3}\,g_{2}(x_{2} )\qquad\text{for all }x_{1}\in V_{1},\,x_{2}\in V_{2}. \tag{2.24}\] **Proposition 2.13**.: _If \(g=(g_{1},g_{2},g_{3})\) is a similitude \(\mathscr{G}\to\widetilde{\mathscr{G}}\), then \(\partial g:=(g_{2},g_{3},g_{1})\) is a similitude \(\partial\mathscr{G}\to\partial\widetilde{\mathscr{G}}\) and \(\partial^{2}g:=(g_{3},g_{1},g_{2})\) is a similitude \(\partial^{2}\mathscr{G}\to\partial^{2}\widetilde{\mathscr{G}}\). Moreover, the scalars \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F^{\times}\) such that for all \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\), \(x_{3}\in V_{3}\)_ \[\lambda_{1}\,g_{1}(x_{2}*_{1}x_{3}) =g_{2}(x_{2})\,\widetilde{*}_{1}\,g_{3}(x_{3}),\] \[\lambda_{2}\,g_{2}(x_{3}*_{2}x_{1}) =g_{3}(x_{3})\,\widetilde{*}_{2}\,g_{1}(x_{1}),\] \[\lambda_{3}\,g_{3}(x_{1}*_{3}x_{2}) =g_{1}(x_{1})\,\widetilde{*}_{3}\,g_{2}(x_{2})\] _are related to the multipliers of \(g_{1}\), \(g_{2}\), \(g_{3}\) by_ \[\mu(g_{1})=\lambda_{2}\lambda_{3},\qquad\mu(g_{2})=\lambda_{3}\lambda_{1}, \qquad\mu(g_{3})=\lambda_{1}\lambda_{2}. \tag{2.25}\] Proof.: For the first part, we have to prove the existence of \(\lambda_{1}\in F^{\times}\) such that \[\lambda_{1}\,g_{1}(x_{2}*_{1}x_{3})=g_{2}(x_{2})\,\widetilde{*}_{1}\,g_{3}(x_ {3})\qquad\text{for all $x_{2}\in V_{2}$ and $x_{3}\in V_{3}$.}\] Multiplying each side of (2.24) on the left by \(g_{2}(x_{2})\), we obtain for all \(x_{1}\in V_{1}\) and \(x_{2}\in V_{2}\) \[\lambda_{3}\,g_{2}(x_{2})\,\widetilde{*}_{1}\,g_{3}(x_{1}*_{3}x_{2})=\widetilde {q}_{2}\big{(}g_{2}(x_{2})\big{)}g_{1}(x_{1})=\mu(g_{2})q_{2}(x_{2})g_{1}(x_{1}).\] If \(x_{2}\) is anisotropic, then \(r_{x_{2}}\colon V_{1}\to V_{3}\) is bijective with inverse \(q_{2}(x_{2})^{-1}\ell_{x_{2}}\), hence every \(x_{3}\in V_{3}\) can be written as \(x_{3}=x_{1}*_{3}x_{2}\) with \(x_{1}=q_{2}(x_{2})^{-1}x_{2}*_{1}x_{3}\). Substituting in the last displayed equation, we obtain for \(x_{2}\in V_{2}\) anisotropic and \(x_{3}\in V_{3}\) \[g_{2}(x_{2})\,\widetilde{*}_{1}\,g_{3}(x_{3})=\mu(g_{2})\lambda_{3}^{-1}g_{1} (x_{2}*_{1}x_{3}).\] Since anisotropic vectors span \(V_{2}\), this relation holds for all \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\). Therefore, \((g_{2},g_{3},g_{1})\) is a similitude of compositions, with scalar \(\lambda_{1}=\mu(g_{2})\lambda_{3}^{-1}\). Applying the same arguments to \(\partial g\) instead of \(g\), we see that \(\partial(\partial g)=\partial^{2}g\) is a similitude \(\partial^{2}\mathscr{G}\to\partial^{2}\widetilde{\mathscr{G}}\), with scalar \(\lambda_{2}=\mu(g_{3})\lambda_{1}^{-1}\). 
Applying the arguments one more time, we obtain that \(g\) is a similitude \(\mathscr{G}\to\widetilde{\mathscr{G}}\) with scalar \(\mu(g_{1})\lambda_{2}^{-1}\), hence \(\lambda_{3}=\mu(g_{1})\lambda_{2}^{-1}\) and the proof is complete. **Definition 2.14**.: In the situation of Proposition 2.13, the triple \((\lambda_{1},\lambda_{2},\lambda_{3})\in F^{\times}\times F^{\times}\times F^ {\times}\) is said to be the _composition multiplier_ of the similitude of compositions \(g\colon\mathscr{G}\to\widetilde{\mathscr{G}}\), and we write \[\lambda(g)=(\lambda_{1},\lambda_{2},\lambda_{3}),\] hence \(\lambda(\partial g)=(\lambda_{2},\lambda_{3},\lambda_{1})\) and \(\lambda(\partial^{2}g)=(\lambda_{3},\lambda_{1},\lambda_{2})\). Writing \(\rho(g)=\lambda_{1}\lambda_{2}\lambda_{3}\), we thus have by (2.25) \[\lambda(g)=(\rho(g)\mu(g_{1})^{-1},\rho(g)\mu(g_{2})^{-1},\rho(g)\mu(g_{3})^{- 1})\quad\text{and}\quad\mu(g_{1})\mu(g_{2})\mu(g_{3})=\rho(g)^{2}.\] Similitudes with composition multiplier \((1,1,1)\) are called _isomorphisms_ of compositions. **Proposition 2.15**.: _Every composition of quadratic spaces is similar to a composition of isometric quadratic spaces._ Proof.: Let \(\mathscr{G}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3}\big{)}\) be an arbitrary composition of quadratic spaces. Let \(\lambda_{1}\in F^{\times}\) (resp. \(\lambda_{2}\in F^{\times}\)) be a value represented by \(q_{1}\) (resp. \(q_{2}\)) and let \(\lambda_{3}=\lambda_{1}^{-1}\lambda_{2}^{-1}\in F^{\times}\). Then \(\lambda_{3}\) is represented by \(q_{3}\); define quadratic forms \(\widetilde{q}_{1}\), \(\widetilde{q}_{2}\), \(\widetilde{q}_{3}\) on \(V_{1}\), \(V_{2}\), \(V_{3}\) by \[\widetilde{q}_{1}(x_{1})=\lambda_{1}^{-1}q_{1}(x_{1}),\qquad\widetilde{q}_{2} (x_{2})=\lambda_{2}^{-1}q_{2}(x_{2}),\qquad\widetilde{q}_{3}(x_{3})=\lambda_{3 }^{-1}q_{3}(x_{3})\] for \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\). Depending on the dimension of \(\mathscr{G}\), Proposition 2.7, 2.8 or 2.9 shows that the forms \(\widetilde{q}_{1}\), \(\widetilde{q}_{2}\) and \(\widetilde{q}_{3}\) are isometric Pfister forms. Define a map \(\widetilde{*}_{3}\colon V_{1}\times V_{2}\to V_{3}\) by \[x_{1}\,\widetilde{*}_{3}\,x_{2}=\lambda_{3}\,x_{1}*_{3}x_{2}\qquad\text{for $x_{1}\in V_{1}$ and $x_{2}\in V_{2}$.}\] Straightforward computations show that \(\widetilde{\mathscr{G}}=\big{(}(\widetilde{V}_{1},\widetilde{q}_{1}),\,( \widetilde{V}_{2},\widetilde{q}_{2}),\,(\widetilde{V}_{3},\widetilde{q}_{3}), \,\widetilde{*}_{3}\big{)}\) is a composition, and that \((\text{Id}_{V_{1}},\,\text{Id}_{V_{2}},\,\text{Id}_{V_{3}})\colon\mathscr{G} \to\widetilde{\mathscr{G}}\) is a similitude of compositions, with composition multiplier \((\lambda_{1},\,\lambda_{2},\,\lambda_{3})\). Auto-similitudes of compositions of quadratic spaces define algebraic groups which we discuss next. For every composition \(\mathscr{G}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3}\big{)}\), we associate to each similitude \[(g_{1},g_{2},g_{3})\colon\mathscr{G}\to\mathscr{G}\] with multiplier \((\lambda_{1},\lambda_{2},\lambda_{3})\) the \(4\)-tuple \((g_{1},g_{2},g_{3},\lambda_{3})\), from which \(\lambda_{1}\) and \(\lambda_{2}\) can be determined by the relations (2.25). 
We may thus consider the group of similitudes of \(\mathscr{G}\) as the subgroup of \(\operatorname{GO}(q_{1})\times\operatorname{GO}(q_{2})\times\operatorname{GO} (q_{3})\times F^{\times}\) defined by the equations \[\lambda_{3}\,g_{3}(x_{1}*_{3}x_{2})=g_{1}(x_{1})*_{3}g_{2}(x_{2})\qquad\text{ for all }x_{1}\in V_{1},\,x_{2}\in V_{2}.\] These equations define a closed subgroup of \(\operatorname{\mathbf{GO}}(q_{1})\times\operatorname{\mathbf{GO}}(q_{2}) \times\operatorname{\mathbf{GO}}(q_{3})\times\operatorname{\mathbf{G}}_{\mathbf{ m}}\), hence an algebraic group scheme, for which we use the notation \(\operatorname{\mathbf{GO}}(\mathscr{G})\). From Proposition 2.13 it follows that \(\partial\) and \(\partial^{2}\) yield isomorphisms \[\partial\colon\operatorname{\mathbf{GO}}(\mathscr{E})\to\operatorname{ \mathbf{GO}}(\partial\mathscr{G})\qquad\text{and}\qquad\partial^{2}\colon \operatorname{\mathbf{GO}}(\mathscr{E})\to\operatorname{\mathbf{GO}}( \partial^{2}\mathscr{E})\] defined as follows: for every commutative \(F\)-algebra \(R\) and \((g_{1},g_{2},g_{3},\lambda_{3})\in\operatorname{\mathbf{GO}}(\mathscr{E})(R)\), \[\partial(g_{1},g_{2},g_{3},\lambda_{3})=(g_{2},g_{3},g_{1},\lambda_{1})\qquad \text{and}\qquad\partial(g_{1},g_{2},g_{3},\lambda_{3})=(g_{3},g_{1},g_{2}, \lambda_{2}),\] with \[\lambda_{1}=\mu(g_{2})\lambda_{3}^{-1}\qquad\text{and}\qquad\lambda_{2}=\mu(g _{1})\lambda_{3}^{-1}.\] The Lie algebra \(\mathfrak{go}(\mathscr{E})\) of \(\operatorname{\mathbf{GO}}(\mathscr{E})\) consists of \(4\)-tuples \((g_{1},g_{2},g_{3},\lambda_{3})\in\mathfrak{go}(q_{1})\times\mathfrak{go}(q_{ 2})\times\mathfrak{go}(q_{3})\times F\) satisfying the following condition: \[g_{3}(x_{1}*_{3}x_{2})=g_{1}(x_{1})*_{3}x_{2}+x_{1}*_{3}g_{2}(x_{2})-\lambda_ {3}\,x_{1}*_{3}x_{2}\qquad\text{for all }x_{1}\in V_{1},\,x_{2}\in V_{2}. \tag{2.26}\] The following is the Lie algebra version of Proposition 2.13: **Proposition 2.16**.: _For \(g=(g_{1},g_{2},g_{3},\lambda_{3})\in\mathfrak{go}(\mathscr{E})\), there are scalars \(\lambda_{1}\), \(\lambda_{2}\in F\) such that_ \[\dot{\mu}(g_{1})=\lambda_{2}+\lambda_{3},\qquad\dot{\mu}(g_{2})=\lambda_{3}+ \lambda_{1},\qquad\dot{\mu}(g_{3})=\lambda_{1}+\lambda_{2}\] _and for all \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\), \(x_{3}\in V_{3}\)_ \[g_{1}(x_{2}*_{1}x_{3})=g_{2}(x_{2})*_{1}x_{3}+x_{2}*_{1}g_{3}(x_{ 3})-\lambda_{1}\,x_{2}*_{1}x_{3},\] \[g_{2}(x_{3}*_{2}x_{1})=g_{3}(x_{3})*_{2}x_{1}+x_{3}*_{2}g_{1}(x_ {1})-\lambda_{2}\,x_{3}*_{2}x_{1},\] \[g_{3}(x_{1}*_{3}x_{2})=g_{1}(x_{1})*_{3}x_{2}+x_{1}*_{3}g_{2}(x_ {2})-\lambda_{3}\,x_{1}*_{3}x_{2}.\] _Thus, \(\partial g:=(g_{2},g_{3},g_{1},\lambda_{1})\) lies in \(\mathfrak{go}(\partial\mathscr{G})\) and \(\partial^{2}g:=(g_{3},g_{1},g_{2},\lambda_{2})\) in \(\mathfrak{go}(\partial^{2}\mathscr{G})\)._ The composition multiplier map \(\lambda_{\mathscr{E}}\) yields a morphism of algebraic group schemes \[\lambda_{\mathscr{E}}\colon\operatorname{\mathbf{GO}}(\mathscr{E})\to \operatorname{\mathbf{G}}_{\mathbf{m}}^{3}\] defined as follows: for every commutative \(F\)-algebra \(R\) and \((g_{1},g_{2},g_{3},\lambda_{3})\in\operatorname{\mathbf{GO}}(\mathscr{E})(R)\), \[\lambda_{\mathscr{E}}(g_{1},g_{2},g_{3},\lambda_{3})=(\mu(g_{2})\lambda_{3}^{-1 },\mu(g_{1})\lambda_{3}^{-1},\lambda_{3})\in R^{\times}\times R^{\times}\times R ^{\times}. 
\tag{2.27}\] Its differential \(\dot{\lambda}_{\mathscr{E}}\colon\mathfrak{go}(\mathscr{E})\to F\times F\times F\) is given by \[\dot{\lambda}_{\mathscr{E}}(g_{1},g_{2},g_{3},\lambda_{3})=(\dot{\mu}(g_{2})- \lambda_{3},\,\dot{\mu}(g_{1})-\lambda_{3},\,\lambda_{3}).\] We let \(\operatorname{\mathbf{O}}(\mathscr{E})=\ker\lambda_{\mathscr{E}}\) and \(\mathfrak{o}(\mathscr{E})=\ker\dot{\lambda}_{\mathscr{E}}\), so \(\operatorname{\mathbf{O}}(\mathscr{E})\) is the algebraic group scheme of automorphisms of \(\mathscr{E}\) and \(\mathfrak{o}(\mathscr{E})\) is its Lie algebra. **Remark 2.17**.: For every commutative \(F\)-algebra \(R\) and \((g_{1},g_{2},g_{3},\lambda_{3})\in\operatorname{\mathbf{O}}(\mathscr{E})(R)\) we have \(\lambda_{3}=1\) and \(\mu(g_{1})=\mu(g_{2})=1\), hence also \(\mu(g_{1})=1\) by (2.25). Thus, \((g_{1},g_{2},g_{3})\) is a _related triple_ of isometries according to the definition given by Springer-Veldkamp [19, SS3.6], Elduque [8, SS1] or Alsaody-Gille [2, SS3.1] for some specific compositions of quadratic spaces arising from composition algebras. In SS3.4 below we establish isomorphisms \(\operatorname{\mathbf{O}}(\mathscr{E})\simeq\operatorname{\mathbf{Spin}}(q_{1}) \simeq\operatorname{\mathbf{Spin}}(q_{2})\simeq\operatorname{\mathbf{Spin}}(q_{3})\), which are the analogues of the isomorphisms given in [19, Prop. 3.6.3], [8, Th. 1.1] and [2, Th. 3.12] in terms of related triples. **Proposition 2.18**.: _The algebraic group schemes \(\mathbf{O}(\mathscr{C})\) and \(\mathbf{GO}(\mathscr{C})\) are smooth, and the following sequences are exact:_ \[1\to\mathbf{O}(\mathscr{C})\to\mathbf{GO}(\mathscr{C})\xrightarrow{\lambda_{ \mathscr{C}}}\mathbf{G}_{\mathbf{m}}^{3}\to 1 \tag{2.28}\] _and_ \[0\to\mathfrak{o}(\mathscr{C})\to\mathfrak{go}(\mathscr{C})\xrightarrow{\dot{ \lambda}_{\mathscr{C}}}F^{3}\to 0. \tag{2.29}\] Proof.: _Step 1:_ We show that \(\lambda_{\mathscr{C}}\) is surjective. Since \(\mathbf{G}_{\mathbf{m}}^{3}\) is smooth, it suffices by [13, (22.3)] to show that \(\lambda_{\mathscr{C}}\) is surjective on points over an algebraic closure \(F_{\mathrm{alg}}\) of \(F\). For this, we consider the homotheties: if \(\nu_{1}\), \(\nu_{2}\), \(\nu_{3}\in F_{\mathrm{alg}}^{\times}\), then \(\nu_{i}\operatorname{Id}_{(V_{i})_{F_{\mathrm{alg}}}}\colon(V_{i},q_{i})_{F_{ \mathrm{alg}}}\to(V_{i},q_{i})_{F_{\mathrm{alg}}}\) is a similitude with multiplier \(\nu_{i}^{2}\), and \[\left(\nu_{1}\operatorname{Id}_{(V_{1})_{F_{\mathrm{alg}}}},\,\nu_{2} \operatorname{Id}_{(V_{2})_{F_{\mathrm{alg}}}},\,\nu_{3}\operatorname{Id}_{( V_{3})_{F_{\mathrm{alg}}}}\right)\colon\mathscr{C}_{F_{\mathrm{alg}}}\to \mathscr{C}_{F_{\mathrm{alg}}}\] is a similitude with multiplier \((\nu_{2}\nu_{3}\nu_{1}^{-1},\,\nu_{3}\nu_{1}\nu_{2}^{-1},\,\nu_{1}\nu_{2}\nu_{ 3}^{-1})\). Therefore, the image of the map \(\lambda_{\mathscr{C}}\) in \((F_{\mathrm{alg}}^{\times})^{3}\) contains \((\nu_{2}\nu_{3}\nu_{1}^{-1},\,\nu_{3}\nu_{1}\nu_{2}^{-1},\,\nu_{1}\nu_{2}\nu_{ 3}^{-1})\) for all \(\nu_{1}\), \(\nu_{2}\), \(\nu_{3}\in F_{\mathrm{alg}}^{\times}\). Given \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\in F_{\mathrm{alg}}^{\times}\), we may find \(\nu_{1}\), \(\nu_{2}\), \(\nu_{3}\in F_{\mathrm{alg}}^{\times}\) such that \(\nu_{2}^{2}=\lambda_{1}\lambda_{3}\), \(\nu_{3}^{2}=\lambda_{1}\lambda_{2}\) and \(\nu_{1}=\lambda_{1}^{-1}\nu_{2}\nu_{3}\). 
Then

\[(\nu_{2}\nu_{3}\nu_{1}^{-1},\,\nu_{3}\nu_{1}\nu_{2}^{-1},\,\nu_{1}\nu_{2}\nu_{3}^{-1})=(\lambda_{1},\,\lambda_{2},\,\lambda_{3}),\]

proving surjectivity of \(\lambda_{\mathscr{C}}\).

_Step 2:_ We show that \(\dot{\lambda}_{\mathscr{C}}\) is surjective. For \(u_{1}\), \(v_{1}\in V_{1}\), consider the maps

\[g_{1}\colon V_{1}\to V_{1},\qquad x_{1}\mapsto u_{1}b_{1}(v_{1},x_{1})-v_{1}b_{1}(u_{1},x_{1}),\]
\[g_{2}\colon V_{2}\to V_{2},\qquad x_{2}\mapsto(v_{1}*_{3}x_{2})*_{2}u_{1},\]
\[g_{3}\colon V_{3}\to V_{3},\qquad x_{3}\mapsto u_{1}*_{3}(x_{3}*_{2}v_{1}).\]

For \(x_{1}\in V_{1}\),

\[b_{1}(g_{1}(x_{1}),\,x_{1})=b_{1}(u_{1},\,x_{1})b_{1}(v_{1},\,x_{1})-b_{1}(v_{1},\,x_{1})b_{1}(u_{1},\,x_{1})=0,\]

hence \(g_{1}\in\mathfrak{go}(q_{1})\) with \(\dot{\mu}(g_{1})=0\) by Proposition 1.11. Moreover, (2.1), (2.5) and (2.8) yield for \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\)

\[b_{2}(g_{2}(x_{2}),\,x_{2})=b_{3}(v_{1}*_{3}x_{2},\,u_{1}*_{3}x_{2})=b_{1}(v_{1},\,u_{1})q_{2}(x_{2}),\]
\[b_{3}(g_{3}(x_{3}),\,x_{3})=b_{2}(x_{3}*_{2}v_{1},\,x_{3}*_{2}u_{1})=q_{3}(x_{3})b_{1}(v_{1},u_{1}).\]

Therefore, \(g_{2}\in\mathfrak{go}(q_{2})\) and \(g_{3}\in\mathfrak{go}(q_{3})\) with \(\dot{\mu}(g_{2})=\dot{\mu}(g_{3})=b_{1}(v_{1},\,u_{1})\). Now, for \(x_{1}\in V_{1}\) and \(x_{2}\in V_{2}\) we compute \(g_{3}(x_{1}*_{3}x_{2})=u_{1}*_{3}\left((x_{1}*_{3}x_{2})*_{2}v_{1}\right)\) by using (2.18) twice in succession to interchange first \(x_{1}\) and \(v_{1}\), and then \(x_{1}\) and \(u_{1}\):

\[g_{3}(x_{1}*_{3}x_{2})=(u_{1}*_{3}x_{2})b_{1}(v_{1},\,x_{1})-u_{1}*_{3}\left((v_{1}*_{3}x_{2})*_{2}x_{1}\right)\]
\[=(u_{1}*_{3}x_{2})b_{1}(v_{1},\,x_{1})-(v_{1}*_{3}x_{2})b_{1}(u_{1},\,x_{1})+x_{1}*_{3}\left((v_{1}*_{3}x_{2})*_{2}u_{1}\right)\]
\[=g_{1}(x_{1})*_{3}x_{2}+x_{1}*_{3}g_{2}(x_{2}).\]

It follows that \((g_{1},g_{2},g_{3},0)\) lies in \(\mathfrak{go}(\mathscr{C})\), and the computation of \(\dot{\mu}(g_{2})\) and \(\dot{\mu}(g_{1})\) above yields

\[\dot{\lambda}_{\mathscr{C}}(g_{1},g_{2},g_{3},0)=(b_{1}(v_{1},\,u_{1}),\,0,\,0).\]

Thus, taking \(u_{1}\), \(v_{1}\) such that \(b_{1}(v_{1},\,u_{1})=1\), we see that \((1,0,0)\) lies in the image of \(\dot{\lambda}_{\mathscr{C}}\). Similarly, we may find \(g^{\prime}\in\mathfrak{go}(\partial\mathscr{C})\) and \(g^{\prime\prime}\in\mathfrak{go}(\partial^{2}\mathscr{C})\) such that \(\dot{\lambda}_{\partial\mathscr{C}}(g^{\prime})=\dot{\lambda}_{\partial^{2}\mathscr{C}}(g^{\prime\prime})=(1,0,0)\). Then \(\partial^{2}(g^{\prime})\), \(\partial(g^{\prime\prime})\in\mathfrak{go}(\mathscr{C})\) satisfy \(\dot{\lambda}_{\mathscr{C}}\big{(}\partial^{2}(g^{\prime})\big{)}=(0,0,1)\) and \(\dot{\lambda}_{\mathscr{C}}\big{(}\partial(g^{\prime\prime})\big{)}=(0,1,0)\), hence \(\dot{\lambda}_{\mathscr{C}}\) is surjective.

Steps 1 and 2 establish the exactness of the sequences (2.28) and (2.29). Step 2 shows that the surjective map \(\lambda_{\mathscr{C}}\) is separable, hence \(\mathbf{O}(\mathscr{C})\) is smooth by [13, (22.13)]. Since \(\mathbf{G}_{\mathbf{m}}^{3}\) is also smooth, it follows that \(\mathbf{GO}(\mathscr{C})\) is smooth by [13, (22.12)].
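For instance, if the composition is given by the multiplication of a quadratic étale \(F\)-algebra \(A\) with conjugation \(\iota\) (Example 2.5(1)), so that \(x_{2}*_{1}x_{3}=\iota(x_{2})x_{3}\) and \(x_{3}*_{2}x_{1}=\iota(x_{1})x_{3}\), the maps of Step 2 become

\[g_{1}(x_{1})=u_{1}b(v_{1},x_{1})-v_{1}b(u_{1},x_{1}),\qquad g_{2}(x_{2})=\iota(u_{1})v_{1}\,x_{2},\qquad g_{3}(x_{3})=u_{1}\iota(v_{1})\,x_{3},\]

and one checks directly that \(g_{3}(x_{1}x_{2})=g_{1}(x_{1})x_{2}+x_{1}g_{2}(x_{2})\) and \(\dot{\mu}(g_{2})=\dot{\mu}(g_{3})=u_{1}\iota(v_{1})+\iota(u_{1})v_{1}=b(v_{1},u_{1})\), in agreement with the general computation above.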
Step 1 of the proof of Proposition 2.18 introduces the subgroup of homotheties of \(\mathbf{GO}(\mathscr{C})\): this subgroup \(\mathbf{H}(\mathscr{C})\) is the image of the closed embedding \(\mathbf{G}_{\mathbf{m}}^{3}\to\mathbf{GO}(\mathscr{C})\) given by

\[(\nu_{1},\nu_{2},\nu_{3})\mapsto(\nu_{1}\operatorname{Id}_{V_{1}},\nu_{2}\operatorname{Id}_{V_{2}},\nu_{3}\operatorname{Id}_{V_{3}},\nu_{1}\nu_{2}\nu_{3}^{-1}).\]

The algebraic group \(\mathbf{H}(\mathscr{C})\) lies in the center of \(\mathbf{GO}(\mathscr{C})\), hence we may consider the quotient algebraic group

\[\mathbf{PGO}(\mathscr{C})=\mathbf{GO}(\mathscr{C})/\mathbf{H}(\mathscr{C}).\]

This is a smooth algebraic group since \(\mathbf{GO}(\mathscr{C})\) is smooth. Let also

\[\mathbf{Z}(\mathscr{C})=\mathbf{H}(\mathscr{C})\cap\mathbf{O}(\mathscr{C}).\]

This group is the kernel of the canonical map \(\mathbf{O}(\mathscr{C})\to\mathbf{PGO}(\mathscr{C})\). For every commutative \(F\)-algebra \(R\),

\[\mathbf{Z}(\mathscr{C})(R)=\{(\nu_{1},\nu_{2},\nu_{3},1)\mid\nu_{1}^{2}=\nu_{2}^{2}=\nu_{3}^{2}=\nu_{1}\nu_{2}\nu_{3}=1\}\subset R^{\times}\times R^{\times}\times R^{\times}\times R^{\times},\]

hence \(\mathbf{Z}(\mathscr{C})\) is isomorphic to the kernel of the multiplication map \(m\colon\boldsymbol{\mu}_{2}\times\boldsymbol{\mu}_{2}\times\boldsymbol{\mu}_{2}\to\boldsymbol{\mu}_{2}\) carrying \((\nu_{1},\nu_{2},\nu_{3})\) to \(\nu_{1}\nu_{2}\nu_{3}\). It is thus also isomorphic to \(\boldsymbol{\mu}_{2}\times\boldsymbol{\mu}_{2}\), hence it is a smooth algebraic group if and only if \(\operatorname{char}F\neq 2\).

**Proposition 2.19**.: _The following diagram is commutative with exact rows and columns:_

\[\begin{array}{ccccccccc}
&&1&&1&&&&\\
&&\downarrow&&\downarrow&&&&\\
1&\to&\mathbf{Z}(\mathscr{C})&\to&\mathbf{O}(\mathscr{C})&\to&\mathbf{PGO}(\mathscr{C})&\to&1\\
&&\downarrow&&\downarrow&&\|&&\\
1&\to&\mathbf{H}(\mathscr{C})&\to&\mathbf{GO}(\mathscr{C})&\to&\mathbf{PGO}(\mathscr{C})&\to&1\\
&&{\scriptstyle\lambda_{\mathscr{C}}}\downarrow&&{\scriptstyle\lambda_{\mathscr{C}}}\downarrow&&&&\\
&&\mathbf{G}_{\mathbf{m}}^{3}&=&\mathbf{G}_{\mathbf{m}}^{3}&&&&\\
&&\downarrow&&\downarrow&&&&\\
&&1&&1&&&&
\end{array}\]

Proof.: Commutativity of the diagram is clear, and the lower row is exact by definition of \(\mathbf{PGO}(\mathscr{C})\). Step 1 of the proof of Proposition 2.18 shows that \(\lambda_{\mathscr{C}}\colon\mathbf{H}(\mathscr{C})\to\mathbf{G}_{\mathbf{m}}^{3}\) is surjective, hence the left column is exact. Moreover, the right column is exact by Proposition 2.18; therefore it only remains to prove that the canonical map \(\mathbf{O}(\mathscr{C})\to\mathbf{PGO}(\mathscr{C})\) is surjective. Since \(\mathbf{PGO}(\mathscr{C})\) is smooth, it suffices to consider the group of rational points over an algebraic closure \(F_{\text{alg}}\) of \(F\). We know \(\lambda_{\mathscr{C}}\colon\mathbf{H}(\mathscr{C})\to\mathbf{G}_{\mathbf{m}}^{3}\) is surjective, hence for every \(g\in\mathbf{GO}(\mathscr{C})(F_{\text{alg}})\) there exists \(h\in\mathbf{H}(\mathscr{C})(F_{\text{alg}})\) such that \(\lambda_{\mathscr{C}}(g)=\lambda_{\mathscr{C}}(h)\). Then \(gh^{-1}\) lies in \(\mathbf{O}(\mathscr{C})(F_{\text{alg}})\) and has the same image in \(\mathbf{PGO}(\mathscr{C})(F_{\text{alg}})\) as \(g\), hence the canonical map \(\mathbf{O}(\mathscr{C})(F_{\text{alg}})\to\mathbf{PGO}(\mathscr{C})(F_{\text{alg}})\) is surjective.

Let \(\mathfrak{h}(\mathscr{C})\) and \(\mathfrak{pgo}(\mathscr{C})\) be the Lie algebras of \(\mathbf{H}(\mathscr{C})\) and \(\mathbf{PGO}(\mathscr{C})\) respectively. By definition,

\[\mathfrak{h}(\mathscr{C})=\{(\nu_{1}\operatorname{Id}_{V_{1}},\nu_{2}\operatorname{Id}_{V_{2}},\nu_{3}\operatorname{Id}_{V_{3}},\nu_{1}+\nu_{2}-\nu_{3})\mid\nu_{1},\nu_{2},\nu_{3}\in F\}\simeq F\times F\times F.\]

On the other hand, since \(\mathbf{H}(\mathscr{C})\) is smooth, the canonical map \(\mathbf{GO}(\mathscr{C})\to\mathbf{PGO}(\mathscr{C})\) is separable by [13, (22.13)], hence its differential is surjective.
Therefore, \[\mathfrak{pgo}(\mathscr{F})=\mathfrak{go}(\mathscr{F})/\mathfrak{h}(\mathscr{ G}).\] The following result yields an explicit description of \(\mathfrak{pgo}(\mathscr{F})\) for use in SS3.3: **Proposition 2.20**.: _Mapping \((g_{1},g_{2},g_{3},\lambda_{3})+\mathfrak{h}(\mathscr{F})\in\mathfrak{pgo}( \mathscr{F})\) to \((g_{1}+F,\,g_{2}+F,\,g_{3}+F)\in\mathfrak{pgo}(q_{1})\times\mathfrak{pgo}(q_{2 })\times\mathfrak{pgo}(q_{3})\) identifies \(\mathfrak{pgo}(\mathscr{F})\) with the subgroup of \(\mathfrak{pgo}(q_{1})\times\mathfrak{pgo}(q_{2})\times\mathfrak{pgo}(q_{3})\) consisting of triples \((g_{1}+F,\,g_{2}+F,\,g_{3}+F)\) where \(g_{1}\in\mathfrak{go}(q_{1})\), \(g_{2}\in\mathfrak{go}(q_{2})\) and \(g_{3}\in\mathfrak{go}(q_{3})\) satisfy (2.26) for some \(\lambda_{3}\in F\)._ Proof.: It suffices to show that \(\mathfrak{h}(\mathscr{G})\) is the kernel of the map \(\mathfrak{go}(\mathscr{G})\to\mathfrak{pgo}(q_{1})\times\mathfrak{pgo}(q_{2 })\times\mathfrak{pgo}(q_{3})\) carrying \((g_{1},g_{2},g_{3},\lambda_{3})\) to \((g_{1}+F,\,g_{2}+F,\,g_{3}+F)\). Clearly, \(\mathfrak{h}(\mathscr{G})\) lies in the kernel of this map. Conversely, if \((g_{1},g_{2},g_{3},\lambda_{3})\) lies in the kernel, then there are scalars \(\nu_{1}\), \(\nu_{2}\), \(\nu_{3}\in F\) such that \(g_{i}=\nu_{i}\operatorname{Id}_{V_{i}}\) for \(i=1,\,2,\,3\). Then (2.26) yields \(\lambda_{3}=\nu_{1}+\nu_{2}-\nu_{3}\), hence \((g_{1},g_{2},g_{3},\lambda_{3})\) lies in \(\mathfrak{h}(\mathscr{G})\). **Remark 2.21**.: If \(\operatorname{char}F\neq 2\), the upper row of the diagram in Proposition 2.19 shows that the canonical map \(\mathfrak{o}(\mathscr{C})\to\mathfrak{pgo}(\mathscr{G})\) is an isomorphism, for then \(\mathbf{Z}(\mathscr{C})\) is smooth and its Lie algebra is \(0\). This canonical map is _not_ bijective if \(\operatorname{char}F=2\), even though \(\mathfrak{o}(\mathscr{F})\) and \(\mathfrak{pgo}(\mathscr{C})\) have the same dimension. ### Compositions of pointed quadratic spaces Fixing a representation of \(1\) in a quadratic space yields a new structure: **Definitions 2.22**.: A _pointed quadratic space_ over an arbitrary field \(F\) is a triple \((V,q,e)\) where \((V,q)\) is a quadratic space with nonsingular polar form over \(F\) and \(e\in V\) is a vector such that \(q(e)=1\). 
Each pointed quadratic space \((V,q,e)\) is endowed with a canonical isometry \(\overline{\phantom{x}}\colon(V,q)\to(V,q)\) defined by

\[\overline{x}=e\,b(e,x)-x\qquad\text{for }x\in V.\]

This map is an involution and fixes \(e\).

A _composition of pointed quadratic spaces_ over \(F\) is a \(4\)-tuple

\[\mathscr{C}^{\bullet}=\big{(}(V_{1},q_{1},e_{1}),\,(V_{2},q_{2},e_{2}),\,(V_{3},q_{3},e_{3}),\,*_{3}\big{)} \tag{2.30}\]

where \((V_{1},q_{1},e_{1})\), \((V_{2},q_{2},e_{2})\), \((V_{3},q_{3},e_{3})\) are pointed quadratic spaces and \(*_{3}\colon V_{1}\times V_{2}\to V_{3}\) is a composition map such that \(e_{1}*_{3}e_{2}=e_{3}\). By (2.10) the distinguished vectors then also satisfy \(e_{3}*_{2}e_{1}=e_{2}\) and \(e_{2}*_{1}e_{3}=e_{1}\), hence the derived compositions \(\partial\mathscr{C}^{\bullet}\) and \(\partial^{2}\mathscr{C}^{\bullet}\) are again compositions of pointed quadratic spaces.
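For instance, if \(A\) is as in Example 2.5(1) and \(e=1\in A\), then \(\big{(}(A,n_{A},1),\,(A,n_{A},1),\,(A,n_{A},1),\,\cdot\big{)}\) is a composition of pointed quadratic spaces, since \(n_{A}(1)=1\) and \(1\cdot 1=1\); its canonical isometries are \(\overline{x}=b(1,x)\,1-x\), where \(b\) is the polar form of \(n_{A}\), which for \(A\neq F\) is the usual conjugation of \(A\). This is a special case of the compositions \(\mathscr{G}^{\bullet}(\mathscr{A}^{\bullet})\) considered in (2.34) below.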
Let \(\mathscr{C}^{\bullet}\) and \(\widetilde{\mathscr{C}}^{\bullet}\) be compositions of pointed quadratic spaces, and let \(\mathscr{C}\) and \(\widetilde{\mathscr{C}}\) be the compositions of quadratic spaces obtained from \(\mathscr{C}^{\bullet}\) and \(\widetilde{\mathscr{C}}^{\bullet}\) by forgetting the distinguished vectors. Every similitude \(f\colon\mathscr{C}\to\widetilde{\mathscr{C}}\) that preserves the distinguished vectors must be an isometry, because the equations \[\lambda_{1}\,f_{1}(x_{2}*_{1}x_{3}) =f_{2}(x_{2})\,\widetilde{*}_{1}\,f_{3}(x_{3}),\] \[\lambda_{2}\,f_{2}(x_{3}*_{2}x_{1}) =f_{3}(x_{3})\,\widetilde{*}_{2}\,f_{1}(x_{1}),\] \[\lambda_{3}\,f_{3}(x_{1}*_{3}x_{2}) =f_{1}(x_{1})\,\widetilde{*}_{3}\,f_{2}(x_{2})\] for \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\), \(x_{3}\in V_{3}\) imply \(\lambda_{1}=\lambda_{2}=\lambda_{3}=1\) if \(f(e_{i})=\widetilde{e}_{i}\) for \(i=1\), \(2\), \(3\). Therefore, between compositions of pointed quadratic spaces the only type of maps we consider are isomorphisms. **Definition 2.23**.: An _isomorphism_\(f\colon\mathscr{C}^{\bullet}\to\widetilde{\mathscr{C}}^{\bullet}\) of compositions of pointed quadratic spaces is an isomorphism \(f\colon\mathscr{C}\to\widetilde{\mathscr{C}}\) of compositions of quadratic spaces that maps the distinguished vectors of \(\mathscr{C}^{\bullet}\) to the distinguished vectors of \(\widetilde{\mathscr{C}}^{\bullet}\). The automorphisms of \(\mathscr{C}^{\bullet}\) define an algebraic group scheme \(\mathbf{O}(\mathscr{C}^{\bullet})\), which is a closed subgroup of \(\mathbf{O}(\mathscr{C})\). Our goal in this subsection is to show that every composition of pointed quadratic spaces \(\mathscr{C}^{\bullet}\) carries a canonical isomorphism \(\Delta\colon\mathscr{C}^{\bullet}\to\partial\mathscr{C}^{\bullet}\) and is isomorphic to a composition \(S(\mathscr{C}^{\bullet})\) such that \(\partial S(\mathscr{C}^{\bullet})=S(\mathscr{C}^{\bullet})\). For this, we will use the following identities relating the canonical isometry \(\overline{\phantom{\bullet}}\) and multiplication by the distinguished vectors: **Lemma 2.24**.: _Let \(\mathscr{C}^{\bullet}\) be a composition of pointed quadratic spaces as in (2.30)._ 1. _For every_ \(x_{1}\in V_{1}\)_,_ \(x_{2}\in V_{2}\)_,_ \(x_{3}\in V_{3}\)_,_ \[\overline{e_{1}*_{3}x_{2}} =e_{1}*_{3}\overline{x_{2}}, \overline{x_{1}*_{3}e_{2}} =\overline{x_{1}}*_{3}e_{2},\] \[\overline{e_{2}*_{1}x_{3}} =e_{2}*_{1}\overline{x_{3}}, \overline{x_{2}*_{1}e_{3}} =\overline{x_{2}}*_{1}e_{3},\] \[\overline{e_{3}*_{2}x_{1}} =e_{3}*_{2}\overline{x_{1}}, \overline{x_{3}*_{2}e_{1}} =\overline{x_{3}}*_{2}e_{1}.\] 2. _For every_ \(x_{1}\in V_{1}\)_,_ \(x_{2}\in V_{2}\)_,_ \(x_{3}\in V_{3}\)_,_ \[e_{1}*_{3}(x_{3}*_{2}x_{1}) =\overline{x_{1}}*_{3}(x_{3}*_{2}e_{1}), (x_{1}*_{3}x_{2})*_{2}e_{1} =(e_{1}*_{3}x_{2})*_{2}\overline{x_{1}},\] \[e_{2}*_{1}(x_{1}*_{3}x_{2}) =\overline{x_{2}}*_{1}(x_{1}*_{3}e_{2}), (x_{2}*_{1}x_{3})*_{3}e_{2} =(e_{2}*_{1}x_{3})*_{3}\overline{x_{2}},\] \[e_{3}*_{2}(x_{2}*_{1}x_{3}) =\overline{x_{3}}*_{2}(x_{2}*_{1}e_{3}), (x_{3}*_{2}x_{1})*_{1}e_{3} =(e_{3}*_{2}x_{1})*_{1}\overline{x_{3}}.\] 3. 
_For every_ \(x_{1}\in V_{1}\)_,_ \(x_{2}\in V_{2}\)_,_ \(x_{3}\in V_{3}\)_,_ \[e_{2}*_{1}\big{(}e_{1}*_{3}(e_{3}*_{2}x_{1})\big{)}=\overline{x_{1}},\qquad e_{3}*_{2}\big{(}e_{2}*_{1}(e_{1}*_{3}x_{2})\big{)}=\overline{x_{2}},\qquad e_{1}*_{3}\big{(}e_{3}*_{2}(e_{2}*_{1}x_{3})\big{)}=\overline{x_{3}}.\]

4. _For every_ \(x_{1}\in V_{1}\)_,_ \(x_{2}\in V_{2}\)_,_ \(x_{3}\in V_{3}\)_,_ \[\overline{x_{1}*_{3}x_{2}}=(x_{2}*_{1}e_{3})*_{3}(e_{3}*_{2}x_{1})=\big{(}(e_{3}*_{2}\overline{x_{1}})*_{1}(e_{1}*_{3}\overline{x_{2}})\big{)}*_{3}e_{2},\] \[\overline{x_{2}*_{1}x_{3}}=(x_{3}*_{2}e_{1})*_{1}(e_{1}*_{3}x_{2})=\big{(}(e_{1}*_{3}\overline{x_{2}})*_{2}(e_{2}*_{1}\overline{x_{3}})\big{)}*_{1}e_{3},\] \[\overline{x_{3}*_{2}x_{1}}=(x_{1}*_{3}e_{2})*_{2}(e_{2}*_{1}x_{3})=\big{(}(e_{2}*_{1}\overline{x_{3}})*_{3}(e_{3}*_{2}\overline{x_{1}})\big{)}*_{2}e_{1}.\]

Proof.: To avoid repetitions, we just prove the first formulas in each case.

(a) By definition, \[\overline{e_{1}*_{3}x_{2}}=e_{3}\,b_{3}(e_{3},e_{1}*_{3}x_{2})-e_{1}*_{3}x_{2}.\] Substituting \(e_{1}*_{3}e_{2}\) for \(e_{3}\) and using \(b_{3}(e_{3},e_{1}*_{3}x_{2})=b_{2}(e_{3}*_{2}e_{1},x_{2})=b_{2}(e_{2},x_{2})\) yields \[\overline{e_{1}*_{3}x_{2}}=e_{1}*_{3}(e_{2}\,b_{2}(e_{2},x_{2})-x_{2})=e_{1}*_{3}\overline{x_{2}}.\]

(b) By (2.18) and (2.12), \[e_{1}*_{3}(x_{3}*_{2}x_{1})=x_{3}\,b_{1}(e_{1},x_{1})-x_{1}*_{3}(x_{3}*_{2}e_{1})\quad\text{and}\quad x_{3}=e_{1}*_{3}(x_{3}*_{2}e_{1}),\] hence \[e_{1}*_{3}(x_{3}*_{2}x_{1})=(e_{1}\,b_{1}(e_{1},x_{1})-x_{1})*_{3}(x_{3}*_{2}e_{1})=\overline{x_{1}}*_{3}(x_{3}*_{2}e_{1}).\]

(c) Using (b) and (2.10), we have \[e_{2}*_{1}\big{(}e_{1}*_{3}(e_{3}*_{2}x_{1})\big{)}=e_{2}*_{1}\big{(}\overline{x_{1}}*_{3}(e_{3}*_{2}e_{1})\big{)}=e_{2}*_{1}(\overline{x_{1}}*_{3}e_{2})=\overline{x_{1}}.\]

(d) We compute \((x_{2}*_{1}e_{3})*_{3}(e_{3}*_{2}x_{1})\) by using (2.18) to exchange the factors \(x_{2}*_{1}e_{3}\) and \(x_{1}\): \[(x_{2}*_{1}e_{3})*_{3}(e_{3}*_{2}x_{1})=e_{3}\,b_{1}(x_{2}*_{1}e_{3},x_{1})-x_{1}*_{3}\big{(}e_{3}*_{2}(x_{2}*_{1}e_{3})\big{)}.\] Since \(e_{3}*_{2}(x_{2}*_{1}e_{3})=x_{2}\) by (2.11) and \(b_{1}(x_{2}*_{1}e_{3},x_{1})=b_{3}(e_{3},x_{1}*_{3}x_{2})\) by (2.1), it follows that \[(x_{2}*_{1}e_{3})*_{3}(e_{3}*_{2}x_{1})=e_{3}\,b_{3}(e_{3},x_{1}*_{3}x_{2})-x_{1}*_{3}x_{2}=\overline{x_{1}*_{3}x_{2}}.\] On the other hand, (b) yields \[x_{2}*_{1}e_{3}=x_{2}*_{1}(e_{1}*_{3}e_{2})=e_{2}*_{1}(e_{1}*_{3}\overline{x_{2}}),\] hence, using (b) again together with (a), \[(x_{2}*_{1}e_{3})*_{3}(e_{3}*_{2}x_{1})=\big{(}e_{2}*_{1}(e_{1}*_{3}\overline{x_{2}})\big{)}*_{3}(e_{3}*_{2}x_{1})\\ =\big{(}\overline{e_{3}*_{2}x_{1}}*_{1}(e_{1}*_{3}\overline{x_{2}})\big{)}*_{3}e_{2}=\big{(}(e_{3}*_{2}\overline{x_{1}})*_{1}(e_{1}*_{3}\overline{x_{2}})\big{)}*_{3}e_{2}.\qed\]

For a composition of pointed quadratic spaces \(\mathscr{C}^{\bullet}\) as in (2.30), we define a composition of pointed quadratic spaces \(S(\mathscr{C}^{\bullet})\) as follows:

\[S(\mathscr{C}^{\bullet})=\big{(}(V_{3},q_{3},e_{3}),(V_{3},q_{3},e_{3}),(V_{3},q_{3},e_{3}),\star_{3}\big{)} \tag{2.31}\]

where

\[x\star_{3}y=(e_{2}*_{1}\overline{x})*_{3}(\overline{y}*_{2}e_{1})\qquad\text{for $x$, $y\in V_{3}$.} \tag{2.32}\]

We also define linear maps \(\Delta_{1}\colon V_{1}\to V_{2}\), \(\Delta_{2}\colon V_{2}\to V_{3}\), \(\Delta_{3}\colon V_{3}\to V_{1}\) as follows: for \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\),

\[\Delta_{1}(x_{1})=e_{3}*_{2}\overline{x_{1}},\qquad\Delta_{2}(x_{2})=e_{1}*_{3}\overline{x_{2}},\qquad\Delta_{3}(x_{3})=e_{2}*_{1}\overline{x_{3}}.\]

**Theorem 2.25**.: _With the notation above,_

* _the triple_ \(\Delta=(\Delta_{1},\Delta_{2},\Delta_{3})\) _is an isomorphism_ \(\Delta\colon\mathscr{C}^{\bullet}\to\partial\mathscr{C}^{\bullet}\)_;_
* _the triple_
\((\Delta_{3},\Delta_{2}^{-1},\operatorname{Id}_{V_{3}})\) _is an isomorphism_ \(S(\mathscr{C}^{\bullet})\to\mathscr{C}^{\bullet}\)_;_
* \(\partial S(\mathscr{C}^{\bullet})=S(\mathscr{C}^{\bullet})\)_._

Proof.: (a) It is clear that each \(\Delta_{i}\) is an isometry of pointed quadratic spaces, so it suffices to prove \(\Delta_{3}(x_{1}*_{3}x_{2})=\Delta_{1}(x_{1})*_{1}\Delta_{2}(x_{2})\) for \(x_{1}\in V_{1}\) and \(x_{2}\in V_{2}\), which amounts to

\[e_{2}*_{1}\overline{x_{1}*_{3}x_{2}}=(e_{3}*_{2}\overline{x_{1}})*_{1}(e_{1}*_{3}\overline{x_{2}}).\]

This readily follows from (d) of Lemma 2.24.

(b) For \(y\in V_{3}\) we have \(\Delta_{2}(\overline{y}*_{2}e_{1})=e_{1}*_{3}\overline{\overline{y}*_{2}e_{1}}=y\), hence by definition \(x\star_{3}y=\Delta_{3}(x)*_{3}\Delta_{2}^{-1}(y)\), which proves (b).

(c) It suffices to prove \(b_{3}(x\star_{3}y,z)=b_{3}(x,y\star_{3}z)\) for all \(x\), \(y\), \(z\in V_{3}\); this follows from (2.32) by a direct computation using (2.1) and Lemma 2.24.

### Composition algebras

A _composition algebra_ over \(F\) is a triple \(\mathscr{A}=(A,q,\diamond)\) where \((A,q)\) is a quadratic space with nonsingular polar form \(b\) and \(\diamond\colon A\times A\to A\) is a bilinear multiplication such that \(q(x\diamond y)=q(x)q(y)\) for all \(x\), \(y\in A\). A _unital composition algebra_5 is a \(4\)-tuple \(\mathscr{A}^{\bullet}=(A,q,e,\diamond)\) where \((A,q,e)\) is a pointed quadratic space and \(\diamond\colon A\times A\to A\) is a bilinear map such that

\[q(x\diamond y)=q(x)q(y)\qquad\text{and}\qquad e\diamond x=x\diamond e=x\qquad\text{for all $x$, $y\in A$.}\]

Footnote 5: Unital composition algebras are called _Hurwitz algebras_ in [13], see [13, (33.17)].

In any unital composition algebra we have \(e\diamond e=e\), hence

\[\mathscr{G}^{\bullet}(\mathscr{A}^{\bullet})=\big{(}(A,q,e),\,(A,q,e),\,(A,q,e),\,\diamond\big{)} \tag{2.34}\]

is a composition of pointed quadratic spaces. As with more general compositions of quadratic spaces, the multiplication law \(\diamond\) of a composition algebra \(\mathscr{A}\) induces derived composition maps \(\diamond_{1}\) and \(\diamond_{2}\) of \((A,q)\), \((A,q)\), \((A,q)\), defined by the conditions

\[b(x,y\diamond_{1}z)=b(x\diamond y,z)\qquad\text{and}\qquad b(x\diamond_{2}y,z)=b(x,y\diamond z)\qquad\text{for $x$, $y$, $z\in A$.} \tag{2.35}\]

We may therefore define derived composition algebras \(\partial\mathscr{A}\) and \(\partial^{2}\mathscr{A}\) by

\[\partial\mathscr{A}=(A,q,\diamond_{1})\qquad\text{and}\qquad\partial^{2}\mathscr{A}=(A,q,\diamond_{2}).\]

Composition algebras \(\mathscr{A}\) such that \(\partial\mathscr{A}=\mathscr{A}\) are called _symmetric composition algebras_. They are characterized by the condition that

\[b(x\diamond y,z)=b(x,y\diamond z)\qquad\text{for all $x$, $y$, $z\in A$.}\]

By contrast with compositions of pointed quadratic spaces, the derivation procedure does _not_ preserve unitality of composition algebras. To make this point clear, we determine below the derived composition maps of a unital composition algebra, using results from [19, Ch. 1].
Note that unital composition algebras carry a canonical involutory isometry, the _conjugation_

\[\overline{\phantom{x}}\colon A\to A,\qquad\overline{x}=e\,b(e,x)-x\qquad\text{for $x\in A$.}\]

**Proposition 2.28**.: _Let \(\mathscr{A}^{\bullet}=(A,q,e,\diamond)\) be a unital composition algebra with conjugation \(\overline{\phantom{x}}\). (a) The derived composition maps of \((A,q)\) are given by \(x\diamond_{1}y=y\diamond\overline{x}\) and \(x\diamond_{2}y=\overline{y}\diamond x\) for \(x\), \(y\in A\). (b) The multiplication \(*\) defined by \(x*y=\overline{x}\diamond\overline{y}\) for \(x\), \(y\in A\) turns \(S(\mathscr{A}^{\bullet})=(A,q,*)\) into a symmetric composition algebra, i.e.,_

\[b(x*y,z)=b(x,y*z)\qquad\text{for all $x$, $y$, $z\in A$.}\tag{2.36}\]

Proof.: (a) In every unital composition algebra one has, by [19, Ch. 1],

\[b(x\diamond y,z)=b(y,\overline{x}\diamond z)=b(x,z\diamond\overline{y})\qquad\text{for all $x$, $y$, $z\in A$.}\]

Since \(b\) is nonsingular, comparing with the defining conditions (2.35) yields \(y\diamond_{1}z=z\diamond\overline{y}\) and \(x\diamond_{2}y=\overline{y}\diamond x\), as claimed.

(b) From the definition of \(\diamond_{1}\) in (2.35) and its determination in (a), it follows that

\[b(\overline{x}\diamond\overline{y},z)=b(\overline{x},\overline{y}\diamond_{1}z)=b(\overline{x},z\diamond y)\qquad\text{for all $x$, $y$, $z\in A$.}\]

Now, \(z\diamond y=\overline{\overline{y}\diamond\overline{z}}\) by [19, Lemma 1.3.1], and \(\overline{\phantom{x}}\) is an isometry, hence the rightmost side in the last displayed equation is equal to \(b(x,\overline{y}\diamond\overline{z})\), which proves (2.36).

Note that in the context of Proposition 2.28, \(x\diamond_{1}e=\overline{x}=e\diamond_{2}x\) for all \(x\in A\). Hence \((A,q,e,\diamond_{1})\) and \((A,q,e,\diamond_{2})\) are _not_ unital composition algebras, unless \(\overline{\phantom{x}}=\operatorname{Id}_{A}\), which occurs only if \(\dim A=1\).

Symmetric composition algebras \(S(\mathscr{A}^{\bullet})\) derived from unital composition algebras \(\mathscr{A}^{\bullet}\) as in Proposition 2.28 are called _para-unital_ composition algebras (_para-Hurwitz algebras_ in the terminology of [13]). They are characterized by the property that they contain a _para-unit_, see [13, (34.8)].

Between algebras, maps that are more general than homomorphisms are considered, following Albert [1].

**Definition 2.29**.: Let \((A,\diamond)\) and \((\widetilde{A},\widetilde{\diamond})\) be \(F\)-algebras (i.e., \(F\)-vector spaces with a bilinear multiplication). An _isotopy_ \(f\colon(A,\diamond)\to(\widetilde{A},\widetilde{\diamond})\) is a triple \(f=(f_{1},f_{2},f_{3})\) of linear bijections \(f_{i}\colon A\to\widetilde{A}\) such that

\[f_{3}(x\diamond y)=f_{1}(x)\,\widetilde{\diamond}\,f_{2}(y)\qquad\text{for all $x$, $y\in A$.}\]

An _autotopy_ is an isotopy of an algebra to itself. Under the composition of maps, autotopies of an algebra form a group \(\operatorname{Str}(A,\diamond)\) known as the _structure group_ of \((A,\diamond)\). This group is the set of \(F\)-rational points of an algebraic group scheme \(\operatorname{\mathbf{Str}}(A,\diamond)\), which is a closed subgroup of \(\operatorname{\mathbf{GL}}(A)\times\operatorname{\mathbf{GL}}(A)\times\operatorname{\mathbf{GL}}(A)\). For example, in the construction \(S\) of Proposition 2.28, which yields the symmetric composition algebra \(S(\mathscr{A}^{\bullet})\) from the unital composition algebra \(\mathscr{A}^{\bullet}\), the algebra \((A,*)\) is isotopic to \((A,\diamond)\). The following construction, due to Kaplansky [11], shows that the algebra of every composition algebra is isotopic to the algebra of a unital composition algebra.

**Proposition 2.30**.: _Let \(\mathscr{A}=(A,q,\diamond)\) be a composition algebra. 
There exists a bilinear map \(*\colon A\times A\to A\) and a vector \(e\in A\) for which_

* \((A,q,e,*)\) _is a unital composition algebra, and_
* _there exists an isotopy_ \(f=(f_{1},f_{2},f_{3})\colon(A,\diamond)\to(A,*)\) _which is also an isomorphism_ \(f\colon\mathscr{C}(\mathscr{A})\to\mathscr{C}(A,q,*)\) _of the associated compositions of quadratic spaces as in (2.33)._

Proof.: Since \(\mathscr{A}\) is a composition algebra, Corollary 2.11 shows that there exists \(u\in A\) such that \(q(u)=1\). The maps \(\ell_{u}\colon A\to A\) and \(r_{u}\colon A\to A\) defined by \(\ell_{u}(x)=u\diamond x\) and \(r_{u}(x)=x\diamond u\) are isometries of \((A,q)\), hence they are invertible. Define \(*\colon A\times A\to A\) by

\[x*y=r_{u}^{-1}(x)\diamond\ell_{u}^{-1}(y)\qquad\text{for $x$, $y\in A$,}\]

hence \(x\diamond y=r_{u}(x)*\ell_{u}(y)\) for \(x\), \(y\in A\). It is clear from the definitions that \(f=(r_{u},\ell_{u},\operatorname{Id}_{A})\) is an isotopy \((A,\diamond)\to(A,*)\) and also a similitude \(\mathscr{C}(\mathscr{A})\to\mathscr{C}(A,q,*)\) with multiplier of the form \((\lambda_{1},\lambda_{2},1)\) for some \(\lambda_{1}\), \(\lambda_{2}\in F^{\times}\). Since \(\mu(\ell_{u})=\mu(r_{u})=q(u)=1\), it follows from (2.25) that \(\lambda_{1}=\lambda_{2}=1\), hence \(f\) is an isomorphism of compositions of quadratic spaces. Moreover, for \(e=u\diamond u\) we have \(r_{u}^{-1}(e)=u=\ell_{u}^{-1}(e)\), hence

\[e*x=u\diamond\ell_{u}^{-1}(x)=\ell_{u}\big{(}\ell_{u}^{-1}(x)\big{)}=x=r_{u}\big{(}r_{u}^{-1}(x)\big{)}=r_{u}^{-1}(x)\diamond u=x*e,\]

hence \((A,q,e,*)\) is a unital composition algebra.

**Corollary 2.31**.: _Every composition of isometric quadratic spaces is isomorphic to a composition \(\mathscr{C}(\mathscr{A})\) as in (2.33) for some unital composition algebra \(\mathscr{A}\), and also to a composition \(\mathscr{C}(\mathscr{S})\) for some symmetric composition algebra \(\mathscr{S}\). 
Up to isomorphism, there is a unique composition of hyperbolic quadratic spaces of dimension \(n\), for \(n=2\), \(4\) and \(8\)._

Proof.: Let \(\mathscr{C}=\big{(}(V_{1},q_{1}),(V_{2},q_{2}),(V_{3},q_{3}),*_{3}\big{)}\) be a composition of isometric quadratic spaces, and let \(S(\mathscr{C})\) be the composition of quadratic spaces on \((V_{3},q_{3})\) derived from \(\mathscr{C}\). 
Equation (2.38) then yields \[\widetilde{q}\big{(}h_{1}(x)\big{)}=\widetilde{q}\big{(}h_{2}(e)\big{)}^{-1} \widetilde{q}\big{(}h_{3}(e)\big{)}q(x)\quad\text{and}\quad\widetilde{q}\big{(} h_{2}(x)\big{)}=\widetilde{q}\big{(}h_{1}(e)\big{)}^{-1}\widetilde{q}\big{(}h_{3}(e) \big{)}q(x),\] hence \(h_{1}\) and \(h_{2}\) also are similitudes. Now, \(f=h\circ g\) and all the components of \(g\) are isometries, hence all the components of \(f\) are similitudes. In the case where \(\widetilde{\mathscr{A}}=\mathscr{A}\) Theorem 2.32 has the following immediate consequence: **Corollary 2.33**.: _For any composition algebra \(\mathscr{A}=(A,q,\diamond)\), let \(\lambda^{\prime}\colon\mathbf{GO}\big{(}\mathscr{G}(\mathscr{A})\big{)}\to \mathbf{G_{m}}\) be the third component of the composition multiplier map \(\lambda_{\mathscr{G}(\mathscr{A})}\colon\mathbf{GO}\big{(}\mathscr{G}( \mathscr{A})\big{)}\to\mathbf{G_{m}^{3}}\), and let \(\mu^{\prime}\colon\mathbf{Str}(A,\diamond)\to\mathbf{G_{m}^{2}}\) be the map defined on rational points by mapping every autotopy \((f_{1},f_{2},f_{3})\) to the pair of multipliers \(\big{(}\mu(f_{1}),\mu(f_{2})\big{)}\). The algebraic group scheme \(\mathbf{Str}(A,\diamond)\) is smooth and fits in the following exact sequences:_ \[1\to\mathbf{Str}(A,\diamond)\to\mathbf{GO}\big{(}\mathscr{G}(\mathscr{A}) \big{)}\xrightarrow{\lambda^{\prime}}\mathbf{G_{m}}\to 1\] _and_ \[1\to\mathbf{O}\big{(}\mathscr{G}(\mathscr{A})\big{)}\to\mathbf{Str}(A, \diamond)\xrightarrow{\mu^{\prime}}\mathbf{G_{m}^{2}}\to 1.\] Proof.: Theorem 2.32 identifies \(\mathbf{Str}(A,\diamond)\) as the kernel of \(\lambda^{\prime}\). Proposition 2.18 shows that \(\lambda_{\mathscr{G}(\mathscr{A})}\) is a separable morphism, hence \(\lambda^{\prime}\) also is separable, and it follows by [13, (22.13)] that \(\mathbf{Str}(A,\diamond)\) is smooth. Theorem 2.32 also shows that the kernel of \(\mu^{\prime}\) is the kernel of the restriction of \(\lambda_{\mathscr{G}(\mathscr{A})}\) to \(\mathbf{Str}(A,\diamond)\), which is \(\mathbf{O}\big{(}\mathscr{G}(\mathscr{A})\big{)}\) by definition. To complete the proof, observe that \(\mu^{\prime}\) is surjective because \(\lambda_{\mathscr{G}(\mathscr{A})}\) is surjective. We next turn to automorphisms of an algebra \((A,\diamond)\), which are linear bijections \(f\colon A\to A\) such that \(f(x\diamond y)=f(x)\diamond f(y)\) for all \(x\), \(y\in A\). They form an algebraic group scheme \(\mathbf{Aut}(A,\diamond)\), which can be viewed as a closed subgroup of \(\mathbf{Str}(A,\diamond)\) since every automorphism \(f\) yields an autotopy \((f,f,f)\) of \((A,\diamond)\). To relate the condition that \(f_{1}=f_{2}=f_{3}\) in an autotopy \((f_{1},f_{2},f_{3})\) with the shift isomorphism \(\partial\colon\mathbf{GO}\big{(}\mathscr{G}(\mathscr{A})\big{)}\to\mathbf{GO} \big{(}\partial\mathscr{G}(\mathscr{A})\big{)}\), we view \(\mathbf{O}\big{(}\mathscr{G}(\mathscr{A})\big{)}\), \(\mathbf{O}\big{(}\partial\mathscr{G}(\mathscr{A})\big{)}\) and \(\mathbf{O}\big{(}\partial^{2}\mathscr{G}(\mathscr{A})\big{)}\) as subgroups of \(\mathbf{GL}(A)\times\mathbf{GL}(A)\times\mathbf{GL}(A)\) and define \[\overline{\mathbf{O}}\big{(}\mathscr{G}(\mathscr{A})\big{)}=\mathbf{O}\big{(} \mathscr{G}(\mathscr{A})\big{)}\cap\mathbf{O}\big{(}\partial\mathscr{G}( \mathscr{A})\big{)}\cap\mathbf{O}\big{(}\partial^{2}\mathscr{G}(\mathscr{A}) \big{)}.\] The shift isomorphism \(\partial\) clearly restricts to an automorphism of \(\overline{\mathbf{O}}\big{(}\mathscr{E}(\mathscr{A})\big{)}\). 
**Proposition 2.34**.: _For every composition algebra \(\mathscr{A}=(A,q,\diamond)\), the group \(\mathbf{Aut}(A,\diamond)\) is the subgroup of \(\overline{\mathbf{O}}\big{(}\mathscr{E}(\mathscr{A})\big{)}\) fixed under \(\partial\)._ Proof.: Let \(R\) be a commutative \(F\)-algebra. For every automorphism \(f\) of \((A,\diamond)_{R}\) the triple \((f,f,f)\) is an autotopy of \((A,\diamond)_{R}\), hence by Theorem 2.32 a similitude of \(\mathscr{E}(\mathscr{A})_{R}\) with composition multiplier \((\mu(f),\mu(f),1)\). The relations (2.25) between composition multipliers and the multipliers of the components of similitudes yield \(\mu(f)=\mu(f)^{2}\), hence \(\mu(f)=1\) and \((f,f,f)\) is an automorphism of \(\mathscr{E}(\mathscr{A})_{R}\). By Proposition 2.13 we then see that \((f,f,f)\) is also an automorphism of \(\partial\mathscr{E}(\mathscr{A})\) and of \(\partial^{2}\mathscr{E}(\mathscr{A})\), hence \((f,f,f)\in\overline{\mathbf{O}}\big{(}\mathscr{E}(\mathscr{A})\big{)}(R)\), and this triple is fixed under \(\partial\). On the other hand, if \((f_{1},f_{2},f_{3})\in\overline{\mathbf{O}}\big{(}\mathscr{E}(\mathscr{A}) \big{)}(R)\) is fixed under \(\partial\), then \(f_{1}=f_{2}=f_{3}\), hence \(f_{1}\) is an automorphism of \((A,\diamond)_{R}\) because \((f_{1},f_{2},f_{3})\in\mathbf{O}\big{(}\mathscr{E}(\mathscr{A})\big{)}(R)\). When \(\mathscr{A}=(A,q,\diamond)\) is a symmetric composition algebra, then \(\partial\mathscr{E}(\mathscr{A})=\mathscr{E}(\mathscr{A})\) by definition, hence \(\overline{\mathbf{O}}\big{(}\mathscr{E}(\mathscr{A})\big{)}=\mathbf{O}\big{(} \mathscr{E}(\mathscr{A})\big{)}\) and Proposition 2.34 shows that \(\mathbf{Aut}(A,\diamond)\) is the subgroup of \(\mathbf{O}\big{(}\mathscr{E}(\mathscr{A})\big{)}\) fixed under \(\partial\). When \(\dim A=8\), an alternative description is given in [6, Th. 6.6]: \(\mathbf{Aut}(A,\diamond)\) is shown to be isomorphic to the subgroup of \(\mathbf{PGO}^{+}(q)\) fixed under an outer automorphism of order \(3\), which is the analogue of \(\partial\). For a unital composition algebra \(\mathscr{A}^{\bullet}=(A,q,e,\diamond)\) with associated para-unital symmetric composition algebra \(S(\mathscr{A}^{\bullet})=(A,q,*)\) as in Proposition 2.28, it follows from functoriality of the \(S\) construction that \(\mathbf{Aut}(A,\diamond)\subset\mathbf{Aut}(A,*)\). The reverse inclusion holds when \(\dim A\geq 4\) by [13, (34.4)]. However, the group \(\mathbf{Aut}(A,\diamond)\) can also be described as follows: **Proposition 2.35**.: _Let \(\mathscr{A}^{\bullet}=(A,q,e,\diamond)\) be a unital composition algebra, and let \(\mathscr{E}^{\bullet}(\mathscr{A}^{\bullet})\) be the associated composition of pointed quadratic spaces as in (2.34). There is a canonical identification \(\mathbf{Aut}(A,\diamond)=\mathbf{O}\big{(}\mathscr{E}^{\bullet}(\mathscr{A}^{ \bullet})\big{)}\)._ Proof.: Let \(R\) be a commutative \(F\)-algebra. Every automorphism \(f\in\mathbf{Aut}(A,\diamond)(R)\) leaves \(e\) fixed, hence the triple \((f,f,f)\) is an automorphism of \(\mathscr{E}^{\bullet}(\mathscr{A}^{\bullet})_{R}\). Therefore, mapping \(f\) to \((f,f,f)\) defines an embedding \(\mathbf{Aut}(A,\diamond)\subset\mathbf{O}\big{(}\mathscr{E}^{\bullet}(\mathscr{ A}^{\bullet})\big{)}\). For the reverse inclusion, let \((f_{1},f_{2},f_{3})\in\mathbf{O}\big{(}\mathscr{E}^{\bullet}(\mathscr{A}^{\bullet}) \big{)}(R)\). 
Substituting \(e\) for \(x\) or for \(y\) in the equation \[f_{3}(x\diamond y)=f_{1}(x)\diamond f_{2}(y)\qquad\text{for all $x$, $y\in A$}\] yields \(f_{3}(y)=f_{2}(y)\) and \(f_{3}(x)=f_{1}(x)\) for all \(x\), \(y\in A\), hence \(f_{3}\in\mathbf{Aut}(A,\diamond)(R)\). ## 3. Trialitarian triples The focus in this section is on central simple algebras with quadratic pair of degree \(8\) over an arbitrary field \(F\). Altering slightly the definition in [13, SS42.A] (and extending it to characteristic \(2\)), we define a _trialitarian triple_ over \(F\) to be a \(4\)-tuple (!) \[\mathscr{T}=\big{(}(A_{1},\sigma_{1},\mathfrak{f}_{1}),\,(A_{2},\sigma_{2}, \mathfrak{f}_{2}),\,(A_{3},\sigma_{3},\mathfrak{f}_{3}),\,\varphi_{0}\big{)}\] where \((A_{i},\sigma_{i},\mathfrak{f}_{i})\) is a central simple \(F\)-algebra with quadratic pair of degree \(8\) for \(i=1\), \(2\), \(3\), and \(\varphi_{0}\) is an isomorphism of algebras with quadratic pair \[\varphi_{0}\colon(C(A_{1},\sigma_{1},\mathfrak{f}_{1}),\underline{\sigma}_{1}, \mathfrak{f}_{1})\overset{\sim}{\to}(A_{2},\sigma_{2},\mathfrak{f}_{2})\times( A_{3},\sigma_{3},\mathfrak{f}_{3}).\] To simplify notation, we denote by a single letter algebras with quadratic pair, as in Section 1, and write \(\mathfrak{A}_{i}=(A_{i},\sigma_{i},\mathfrak{f}_{i})\) and \(\mathfrak{C}(\mathfrak{A}_{1})=(C(\mathfrak{A}_{1}),\underline{\sigma}_{1}, \underline{\mathfrak{f}}_{1})\). If \(\widetilde{\mathcal{S}}=(\widetilde{\mathfrak{A}}_{1},\,\widetilde{\mathfrak{A}}_{2}, \,\widetilde{\mathfrak{A}}_{3},\,\widetilde{\varphi}_{0})\) is also a trialitarian triple, an _isomorphism of trialitarian triples_\(\gamma\colon\mathcal{T}\to\widetilde{\mathcal{T}}\) is a triple \(\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\) of isomorphisms of algebras with quadratic pair \[\gamma_{i}\colon\mathfrak{A}_{i}\to\widetilde{\mathfrak{A}}_{i}\qquad i=1,2,3,\] such that the following diagram commutes: We show in SS3.1 that every composition \(\mathscr{C}\) of quadratic spaces of dimension \(8\) yields a trialitarian triple \(\operatorname{End}(\mathscr{C})\), and that every trialitarian triple of split algebras has the form \(\operatorname{End}(\mathscr{C})\) for some composition \(\mathscr{C}\) of dimension \(8\). In SS3.2 we discuss the group scheme of automorphisms of a trialitarian triple \(\mathcal{T}\): we show that \(\operatorname{\mathbf{Aut}}(\mathcal{T})\) is smooth (hence an algebraic group) and introduce algebraic groups \(\mathbf{O}(\mathcal{T})\), \(\mathbf{GO}(\mathcal{T})\), \(\mathbf{PGO}(\mathcal{T})\), extending to the context of trialitarian triples the group schemes \(\mathbf{O}(\mathscr{C})\), \(\mathbf{GO}(\mathscr{C})\), \(\mathbf{PGO}(\mathscr{C})\) defined in SS2.3 for a composition \(\mathscr{C}\) of quadratic spaces. 
A main result of the section is the construction of derived trialitarian triples in SS3.3: to each trialitarian triple \(\mathcal{T}\) we canonically associate trialitarian triples \(\partial\mathcal{T}\) and \(\partial^{2}\mathcal{T}\), in such a way that for split trialitarian triples \(\operatorname{End}(\mathscr{C})\) \[\partial\operatorname{End}(\mathscr{C})=\operatorname{End}(\partial\mathscr{C })\qquad\text{and}\qquad\partial^{2}\operatorname{End}(\mathscr{C})= \operatorname{End}(\partial^{2}\mathscr{C}).\] This construction is used in SS3.4 to define for each trialitarian triple \(\mathcal{T}=(\mathfrak{A}_{1},\mathfrak{A}_{2},\mathfrak{A}_{3},\varphi_{0})\) isomorphisms \[\mathbf{O}(\mathcal{T})\simeq\operatorname{\mathbf{Spin}}(\mathfrak{A}_{1}) \simeq\operatorname{\mathbf{Spin}}(\mathfrak{A}_{2})\simeq\operatorname{ \mathbf{Spin}}(\mathfrak{A}_{3})\] and \[\mathbf{PGO}(\mathcal{T})\simeq\mathbf{PGO}^{+}(\mathfrak{A}_{1})\simeq \mathbf{PGO}^{+}(\mathfrak{A}_{2})\simeq\mathbf{PGO}^{+}(\mathfrak{A}_{3}),\] which we call the _trialitarian isomorphisms_ canonically attached to the trialitarian triple \(\mathcal{T}\). We return in SS3.5 to the study of compositions of quadratic spaces, building on the theory of trialitarian triples developed in the previous subsections to obtain a few more results about the \(8\)-dimensional case. Specifically, we establish criteria for the similarity or the isomorphism of compositions of quadratic spaces, which yield an analogue of the classical Principle of Triality, and we give an explicit description of the cohomological invariants of \(\operatorname{\mathbf{Spin}}_{8}\). In the final SS3.6 we show that the constructions of SS3.2 readily yield a canonical isomorphism between the structure group of a composition algebra of dimension \(8\) and the extended Clifford group of its quadratic form. ### The trialitarian triple of a composition of quadratic spaces Let \[\mathscr{C}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3}\big{)}\] be a composition of quadratic spaces of dimension \(8\) over \(F\). Recall from Proposition 2.9 the isomorphism of algebras with quadratic pair \[C(\alpha)\colon(C(V_{1},q_{1}),\tau_{1},\mathfrak{g}_{1})\xrightarrow{\sim} (\operatorname{End}(V_{2}\oplus V_{3}),\sigma_{b_{2}\perp b_{3}},\mathfrak{f} _{q_{2}\perp q_{3}})\] induced by the map \[\alpha\colon x_{1}\in V_{1}\mapsto\begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}\in\operatorname{End}(V_{2}\oplus V_{3})\] where \(\ell_{x_{1}}\colon V_{2}\to V_{3}\) carries \(x_{2}\in V_{2}\) to \(x_{1}*_{3}x_{2}\in V_{3}\) and \(r_{x_{1}}\colon V_{3}\to V_{2}\) carries \(x_{3}\in V_{3}\) to \(x_{3}*_{2}x_{1}\in V_{2}\). Its restriction to the even Clifford algebra also is an isomorphism of algebras with quadratic pair \[C_{0}(\alpha)\colon(C_{0}(V_{1},q_{1}),\tau_{01},\mathfrak{g}_{01})\xrightarrow{ \sim}(\operatorname{End}(V_{2}),\sigma_{b_{2}},\mathfrak{f}_{q_{2}})\times( \operatorname{End}(V_{3}),\sigma_{b_{3}},\mathfrak{f}_{q_{3}}),\] see Proposition 2.9. Therefore, the following is a trialitarian triple: \[\operatorname{End}(\mathscr{E})=\big{(}(\operatorname{End}(V_{1}),\sigma_{b_{ 1}},\mathfrak{f}_{q_{1}}),\,(\operatorname{End}(V_{2}),\sigma_{b_{2}}, \mathfrak{f}_{q_{2}}),\,(\operatorname{End}(V_{3}),\sigma_{b_{3}},\mathfrak{f }_{q_{3}}),\,C_{0}(\alpha)\big{)}.\] We next show that the construction of trialitarian triples from compositions of quadratic spaces is functorial. 
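In explicit terms, the existence of \(C(\alpha)\) rests on the composition identities (see (2.10) and (2.12)): for \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\),

\[(x_{1}*_{3}x_{2})*_{2}x_{1}=q_{1}(x_{1})\,x_{2}\qquad\text{and}\qquad x_{1}*_{3}(x_{3}*_{2}x_{1})=q_{1}(x_{1})\,x_{3},\]

i.e., \(r_{x_{1}}\ell_{x_{1}}=\ell_{x_{1}}r_{x_{1}}=q_{1}(x_{1})\); hence \(\alpha(x_{1})^{2}=q_{1}(x_{1})\operatorname{Id}_{V_{2}\oplus V_{3}}\), which is the relation needed for \(\alpha\) to induce an \(F\)-algebra homomorphism on \(C(V_{1},q_{1})\).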
Let \(\mathscr{G}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3}\big{)}\) and \(\widetilde{\mathscr{G}}=\big{(}(\widetilde{V}_{1},\widetilde{q}_{1}),\,(\widetilde{V}_{2},\widetilde{q}_{2}),\,(\widetilde{V}_{3},\widetilde{q}_{3}),\,\widetilde{*}_{3}\big{)}\) be compositions of quadratic spaces of dimension \(8\). Recall that for every linear isomorphism \(g_{i}\colon V_{i}\to\widetilde{V}_{i}\), we define

\[\operatorname{Int}(g_{i})\colon\operatorname{End}(V_{i})\to\operatorname{End}(\widetilde{V}_{i})\qquad\text{by }f\mapsto g_{i}\circ f\circ g_{i}^{-1}.\]

**Proposition 3.1**.: _For every similitude \((g_{1},g_{2},g_{3})\colon\mathscr{G}\to\widetilde{\mathscr{G}}\), the triple_

\[\operatorname{Int}(g_{1},g_{2},g_{3})\colon=\big{(}\operatorname{Int}(g_{1}),\operatorname{Int}(g_{2}),\operatorname{Int}(g_{3})\big{)}\colon\operatorname{End}(\mathscr{G})\to\operatorname{End}(\widetilde{\mathscr{G}})\]

_is an isomorphism of trialitarian triples. Moreover, for every isomorphism of trialitarian triples \((\gamma_{1},\gamma_{2},\gamma_{3})\colon\operatorname{End}(\mathscr{G})\to\operatorname{End}(\widetilde{\mathscr{G}})\), there exists a similitude \((g_{1},g_{2},g_{3})\colon\mathscr{G}\to\widetilde{\mathscr{G}}\) such that_

\[(\gamma_{1},\gamma_{2},\gamma_{3})=\operatorname{Int}(g_{1},g_{2},g_{3}).\]

Proof.: Suppose that \((g_{1},g_{2},g_{3})\colon\mathscr{G}\to\widetilde{\mathscr{G}}\) is a similitude. For \(i=1\), \(2\), \(3\), \(\operatorname{Int}(g_{i})\) is an isomorphism of algebras with quadratic pairs

\[\operatorname{Int}(g_{i})\colon(\operatorname{End}(V_{i}),\sigma_{b_{i}},\mathfrak{f}_{q_{i}})\xrightarrow{\sim}(\operatorname{End}(\widetilde{V}_{i}),\sigma_{\widetilde{b}_{i}},\mathfrak{f}_{\widetilde{q}_{i}}).\]

Note that under the identification \(C(\operatorname{End}(V_{1}),\sigma_{b_{1}},\mathfrak{f}_{q_{1}})=C_{0}(V_{1},q_{1})\) the isomorphism \(C\big{(}\operatorname{Int}(g_{1})\big{)}\) induced by \(\operatorname{Int}(g_{1})\) is the isomorphism \(C_{0}(g_{1})\colon C_{0}(V_{1},q_{1})\to C_{0}(\widetilde{V}_{1},\widetilde{q}_{1})\) that maps \(x_{1}y_{1}\) to \(\mu_{1}^{-1}g_{1}(x_{1})g_{1}(y_{1})\) for \(x_{1}\), \(y_{1}\in V_{1}\), where \(\mu_{1}\) is the multiplier of \(g_{1}\). Therefore, in order to show that \(\big{(}\operatorname{Int}(g_{1}),\operatorname{Int}(g_{2}),\operatorname{Int}(g_{3})\big{)}\) is an isomorphism of trialitarian triples \(\operatorname{End}(\mathscr{G})\to\operatorname{End}(\widetilde{\mathscr{G}})\), we have to show that the following diagram commutes:

\[\begin{CD}C_{0}(V_{1},q_{1})@>{C_{0}(\alpha)}>>\operatorname{End}(V_{2})\times\operatorname{End}(V_{3})\\ @V{C_{0}(g_{1})}VV@VV{\operatorname{Int}(g_{2})\times\operatorname{Int}(g_{3})}V\\ C_{0}(\widetilde{V}_{1},\widetilde{q}_{1})@>{C_{0}(\widetilde{\alpha})}>>\operatorname{End}(\widetilde{V}_{2})\times\operatorname{End}(\widetilde{V}_{3})\end{CD}\tag{3.1}\]

Write \((\lambda_{1},\lambda_{2},\lambda_{3})\) for the composition multiplier of the similitude \((g_{1},g_{2},g_{3})\); thus, for every \(x_{1}\in V_{1}\),

\[\ell_{g_{1}(x_{1})}\circ g_{2}=\lambda_{3}\,g_{3}\circ\ell_{x_{1}}\qquad\text{and}\qquad r_{g_{1}(x_{1})}\circ g_{3}=\lambda_{2}\,g_{2}\circ r_{x_{1}}.\]
Each \(\gamma_{i}\) is an isomorphism \[\gamma_{i}\colon(\operatorname{End}(V_{i}),\sigma_{b_{i}},\mathfrak{f}_{q_{i}} )\stackrel{{\sim}}{{\to}}(\operatorname{End}(\widetilde{V}_{i}), \sigma_{\widetilde{b}_{i}},\mathfrak{f}_{\widetilde{q}_{i}});\] Proposition 1.2 shows that \(\gamma_{i}=\operatorname{Int}(g_{i})\) for some similitude \(g_{i}\colon(V_{i},q_{i})\to(\widetilde{V}_{i},\widetilde{q}_{i})\). We may then also consider the isomorphism \[\operatorname{Int}\bigl{(}\begin{smallmatrix}g_{2}&0\\ 0&g_{3}\end{smallmatrix}\bigr{)}\colon\operatorname{End}(V_{2}\oplus V_{3}) \to\operatorname{End}(\widetilde{V}_{2}\oplus\widetilde{V}_{3}),\] which makes the following diagram, where the vertical maps are the diagonal embeddings, commute: From the hypothesis that \((\gamma_{1},\gamma_{2},\gamma_{3})\) is an isomorphism of trialitarian triples, it follows that the diagram (3.1) commutes. Write \(\mu_{1}\) for the multiplier of \(g_{1}\) and consider the linear map \[\beta\colon V_{1}\to\operatorname{End}(\widetilde{V}_{2}\oplus\widetilde{V}_ {3}),\qquad x_{1}\mapsto\begin{pmatrix}0&r_{g_{1}(x_{1})}\\ \mu_{1}^{-1}\ell_{g_{1}(x_{1})}&0\end{pmatrix}.\] For \(\widetilde{x}_{2}\in\widetilde{V}_{2}\) and \(\widetilde{x}_{3}\in\widetilde{V}_{3}\), we have by (2.10) and (2.12) \[\mu_{1}^{-1}(g_{1}(x_{1})\operatorname{\widetilde{*}}_{3}\widetilde{x}_{2}) \operatorname{\widetilde{*}}_{2}g_{1}(x_{1})=\mu_{1}^{-1}\widetilde{q}_{1} \bigl{(}g_{1}(x_{1})\bigr{)}\widetilde{x}_{2}\] and \[\mu_{1}^{-1}g_{1}(x_{1})\operatorname{\widetilde{*}}_{3}\bigl{(}\widetilde{x} _{3}\operatorname{\widetilde{*}}_{2}g_{1}(x_{1})\bigr{)}=\mu_{1}^{-1} \widetilde{q}_{1}\bigl{(}g_{1}(x_{1})\bigr{)}\widetilde{x}_{3}.\] Since \(\mu_{1}^{-1}\bigl{(}\widetilde{q}_{1}(g_{1}(x_{1}))\bigr{)}=q_{1}(x_{1})\), it follows that \(\beta(x_{1})^{2}=q_{1}(x_{1})\operatorname{Id}_{\widetilde{V}_{2}\oplus \widetilde{V}_{3}}\) for \(x_{1}\in V_{1}\). Therefore, \(\beta\) induces an \(F\)-algebra homomorphism \[C(\beta)\colon C(V_{1},q_{1})\to\operatorname{End}(\widetilde{V}_{2}\oplus \widetilde{V}_{3}).\] Since \(C(V_{1},q_{1})\) is a simple algebra, dimension count shows that \(C(\beta)\) is an isomorphism. For \(x_{1}\), \(y_{1}\in V_{1}\), \[C(\beta)(x_{1}y_{1})=\begin{pmatrix}\mu_{1}^{-1}r_{g_{1}(x_{1})}\ell_{g_{1}(y_{ 1})}&0\\ 0&\mu_{1}^{-1}\ell_{g_{1}(x_{1})}r_{g_{1}(y_{1})}\end{pmatrix}=C_{0}(\widetilde{ \alpha})\bigl{(}C_{0}(g_{1})(x_{1}y_{1})\bigr{)},\] hence \(C(\beta)|_{C_{0}(V_{1},q_{1})}=C_{0}(\widetilde{\alpha})\circ C_{0}(g_{1})\). Since the diagram (3.1) commutes, it follows that \[C(\beta)|_{C_{0}(V_{1},q_{1})}=\operatorname{Int}\bigl{(}\begin{smallmatrix}g_{ 2}&0\\ 0&g_{3}\end{smallmatrix}\bigr{)}\circ C(\alpha)|_{C_{0}(V_{1},q_{1})}.\] Therefore, \(\operatorname{Int}\bigl{(}\begin{smallmatrix}g_{2}&0\\ 0&g_{3}\end{smallmatrix}\bigr{)}\circ C(\alpha)\circ C(\beta)^{-1}\) is an automorphism of \(\operatorname{End}(\widetilde{V}_{2}\oplus\widetilde{V}_{3})\) whose restriction to \(C(\beta)\bigl{(}C_{0}(V_{1},q_{1})\bigr{)}\) is the identity. This automorphism is inner by the Skolem-Noether theorem. 
Since \(C(\beta)\bigl{(}C_{0}(V_{1},q_{1})\bigr{)}=\operatorname{End}(\widetilde{V}_{2 })\times\operatorname{End}(\widetilde{V}_{3})\) we must have \[\operatorname{Int}\bigl{(}\begin{smallmatrix}g_{2}&0\\ 0&g_{3}\end{smallmatrix}\bigr{)}\circ C(\alpha)\circ C(\beta)^{-1}=\operatorname {Int}\bigl{(}\begin{smallmatrix}\nu_{2}&0\\ 0&\nu_{3}\end{smallmatrix}\bigr{)}\qquad\text{for some $\nu_{2}$, $\nu_{3}\in F^{\times}$},\] hence \[\operatorname{Int}\bigl{(}\begin{smallmatrix}g_{2}&0\\ 0&g_{3}\end{smallmatrix}\bigr{)}\circ C(\alpha)(x_{1})=\bigl{(}\begin{smallmatrix} \nu_{2}&0\\ 0&\nu_{3}\end{smallmatrix}\bigr{)}C(\beta)(x_{1})\bigl{(}\begin{smallmatrix} \nu_{2}^{-1}&0\\ 0&\nu_{3}^{-1}\end{smallmatrix}\bigr{)}\qquad\text{for $x_{1}\in V_{1}$},\] which means that \[\begin{pmatrix}0&g_{2}r_{x_{1}}g_{3}^{-1}\\ g_{3}\ell_{x_{1}}g_{2}^{-1}&0\end{pmatrix}=\begin{pmatrix}0&\nu_{2}\nu_{3}^{-1}r_ {g_{1}(x_{1})}\\ \nu_{2}^{-1}\nu_{3}\mu_{1}^{-1}\ell_{g_{1}(x_{1})}&0\end{pmatrix}.\] The equation \(g_{3}\ell_{x_{1}}g_{2}^{-1}=\nu_{2}^{-1}\nu_{3}\mu_{1}^{-1}\ell_{g_{1}(x_{1})}\) implies that for \(x_{2}\in V_{2}\) \[g_{3}(x_{1}*_{3}x_{2})=\nu_{2}^{-1}\nu_{3}\mu_{1}^{-1}g_{1}(x_{1})\,\widetilde {*}_{3}\,g_{2}(x_{2}).\] Therefore, \((g_{1},g_{2},g_{3})\) is a similitude \(\mathscr{C}\to\widetilde{\mathscr{C}}\). We next show that every trialitarian triple of split algebras has the form \(\operatorname{End}(\mathscr{C})\) for some composition \(\mathscr{C}\) of quadratic spaces of dimension \(8\). **Theorem 3.2**.: _Let \(\mathscr{T}=(\mathfrak{A}_{1},\,\mathfrak{A}_{2},\,\mathfrak{A}_{3},\,\varphi_ {0})\) be a trialitarian triple over an arbitrary field \(F\), where \(\mathfrak{A}_{i}=(A_{i},\sigma_{i},\mathfrak{f}_{i})\) for \(i=1\), \(2\), \(3\). If \(A_{1}\), \(A_{2}\) and \(A_{3}\) are split, then there is a composition \(\mathscr{C}\) of quadratic spaces of dimension \(8\) over \(F\) such that \(\mathscr{T}\simeq\operatorname{End}(\mathscr{C})\). The composition \(\mathscr{C}\) is uniquely determined up to similitude._ Proof.: For \(i=1\), \(2\), \(3\), let \(A_{i}=\operatorname{End}(V_{i})\) for some \(F\)-vector space \(V_{i}\) of dimension \(8\). Let also \(q_{i}\) be a quadratic form on \(V_{i}\) to which \((\sigma_{i},\mathfrak{f}_{i})\) is adjoint. Since \(q_{1}\) is determined only up to a scalar factor, we may assume \(q_{1}\) represents \(1\) and pick \(e_{1}\in V_{1}\) such that \(q_{1}(e_{1})=1\). The inner automorphism \(\operatorname{Int}(e_{1})\) of the full Clifford algebra \(C(V_{1},q_{1})\) preserves \(C_{0}(V_{1},q_{1})\) and is of order \(2\). It transfers under the isomorphism \[C_{0}(V_{1},q_{1})=C(\mathfrak{A}_{1})\xrightarrow{\varphi_{0}}A_{2}\times A _{3}=\operatorname{End}(V_{2})\times\operatorname{End}(V_{3})\] to an automorphism of \(\operatorname{End}(V_{2})\times\operatorname{End}(V_{3})\) that interchanges the two factors. Viewing \(\operatorname{End}(V_{2})\times\operatorname{End}(V_{3})\) as a subalgebra diagonally embedded in \(\operatorname{End}(V_{2}\oplus V_{3})\), we may find an inner automorphism of \(\operatorname{End}(V_{2}\oplus V_{3})\) which restricts to \(\varphi_{0}\circ\operatorname{Int}(e_{1})\circ\varphi_{0}^{-1}\) by (a slight generalization of) the Skolem-Noether Theorem, see [5, Th. 2, p. A VIII.252]. 
This inner automorphism is conjugation by an operator of the form \(\bigl{(}\begin{smallmatrix}0&u^{\prime}\\ u&0\end{smallmatrix}\bigr{)}\) since it interchanges \(\bigl{(}\begin{smallmatrix}\operatorname{Id}_{V_{2}}&0\\ 0&0\end{smallmatrix}\bigr{)}\) and \(\bigl{(}\begin{smallmatrix}0&0\\ 0&\operatorname{Id}_{V_{3}}\end{smallmatrix}\bigr{)}\). Since \(\varphi_{0}\circ\operatorname{Int}(e_{1})\circ\varphi_{0}^{-1}\) has order \(2\), it follows that \(uu^{\prime}=u^{\prime}u\in F^{\times}\), hence \(\operatorname{Int}\bigl{(}\begin{smallmatrix}0&u^{-1}\\ u&0\end{smallmatrix}\bigr{)}\) has the same restriction to \(\operatorname{End}(V_{2})\times\operatorname{End}(V_{3})\) as \(\operatorname{Int}\bigl{(}\begin{smallmatrix}0&u^{\prime}\\ u&0\end{smallmatrix}\bigr{)}\). Representing \(C(V_{1},q_{1})\) and \(\operatorname{End}(V_{2}\oplus V_{3})\) as (generalized) crossed products \[C(V_{1},q_{1}) =C_{0}(V_{1},q_{1})\oplus e_{1}C_{0}(V_{1},q_{1}),\] \[\operatorname{End}(V_{2}\oplus V_{3}) =\bigl{(}\operatorname{End}(V_{2})\times\operatorname{End}(V_{3} )\bigr{)}\oplus\bigl{(}\begin{smallmatrix}0&u^{-1}\\ u&0\end{smallmatrix}\bigr{)}\bigl{(}\operatorname{End}(V_{2})\times \operatorname{End}(V_{3})\bigr{)},\] we may extend \(\varphi_{0}\) to an isomorphism of \(F\)-algebras \[\varphi\colon C(V_{1},q_{1})\to\operatorname{End}(V_{2}\oplus V_{3})\] by mapping \(e_{1}\) to \(\bigl{(}\begin{smallmatrix}0&u^{-1}\\ u&0\end{smallmatrix}\bigr{)}\). Let \(\tau_{1}\) be the involution on \(C(V_{1},q_{1})\) that fixes every vector in \(V_{1}\) and let \(\tau^{\prime}=\varphi\circ\tau_{1}\circ\varphi^{-1}\) be the corresponding involution on \(\operatorname{End}(V_{2}\oplus V_{3})\). The restriction of \(\tau_{1}\) to \(C_{0}(V_{1},q_{1})\) is the canonical involution \(\tau_{01}\), and \(\varphi_{0}\circ\tau_{01}=(\sigma_{2}\times\sigma_{3})\circ\varphi_{0}\), hence \(\tau^{\prime}\) restricts to \(\sigma_{2}\) and \(\sigma_{3}\) on \(\operatorname{End}(V_{2})\) and \(\operatorname{End}(V_{3})\). This means that \[(\operatorname{End}(V_{2}\oplus V_{3}),\tau^{\prime})\in(\operatorname{End}(V_{ 2}),\sigma_{2})\boxplus(\operatorname{End}(V_{3}),\sigma_{3}),\] i.e., that \(\tau^{\prime}\) is adjoint to a symmetric bilinear form that is the orthogonal sum of a multiple of \(b_{2}\) and a multiple of \(b_{3}\). Scaling \(q_{2}\) or \(q_{3}\), we may assume \(\tau^{\prime}=\sigma_{b_{2}\perp b_{3}}\) is the adjoint involution of \(b_{2}\perp b_{3}\). 
Under the isomorphism \(\varphi\), the odd part \(C_{1}(V_{1},q_{1})=e_{1}C_{0}(V_{1},q_{1})\) is mapped to the odd part of \(\operatorname{End}(V_{2}\oplus V_{3})\) for the checkerboard grading, hence for each \(x_{1}\in V_{1}\) there exist \(\ell_{x_{1}}\in\operatorname{Hom}(V_{2},V_{3})\) and \(r_{x_{1}}\in\operatorname{Hom}(V_{3},V_{2})\) such that \[\varphi(x_{1})=\begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}\in\operatorname{End}(V_{2}\oplus V_{3}).\] Since \(\tau_{1}(x_{1})=x_{1}\), it follows that \(\varphi(x_{1})\) is \(\sigma_{b_{2}\perp b_{3}}\)-symmetric, hence for all \(x_{2}\), \(y_{2}\in V_{2}\) and \(x_{3}\), \(y_{3}\in V_{3}\) \[(b_{2}\perp b_{3})\left(\begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}\begin{pmatrix}x_{2}\\ x_{3}\end{pmatrix},\;\begin{pmatrix}y_{2}\\ y_{3}\end{pmatrix}\right)=(b_{2}\perp b_{3})\left(\begin{pmatrix}x_{2}\\ x_{3}\end{pmatrix},\;\begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}\begin{pmatrix}y_{2}\\ y_{3}\end{pmatrix}\right).\] This means that for all \(x_{2}\), \(y_{2}\in V_{2}\) and \(x_{3}\), \(y_{3}\in V_{3}\), \[b_{2}(r_{x_{1}}(x_{3}),y_{2})=b_{3}\big{(}x_{3},\ell_{x_{1}}(y_{2})\big{)}\quad \text{and}\quad b_{3}(\ell_{x_{1}}(x_{2}),y_{2})=b_{2}\big{(}x_{2},r_{x_{1}}( y_{3})\big{)}. \tag{3.2}\] Moreover, the relations \(x_{1}^{2}=q_{1}(x_{1})\) and \(x_{1}y_{1}+y_{1}x_{1}=b_{1}(x_{1},y_{1})\) yield for all \(x_{1}\), \(y_{1}\in V_{1}\) \[\ell_{x_{1}}r_{x_{1}}=r_{x_{1}}\ell_{x_{1}}=q_{1}(x_{1})\quad\text{and}\quad \ell_{x_{1}}r_{y_{1}}+\ell_{y_{1}}r_{x_{1}}=r_{x_{1}}\ell_{y_{1}}+r_{y_{1}} \ell_{x_{1}}=b_{1}(x_{1},y_{1}). \tag{3.3}\] Recall that the two components \(\varphi_{\pm}\) of \(\varphi_{0}\) are homomorphisms of algebras with quadratic pair \[\varphi_{+}\colon(C_{0}(V_{1},q_{1}),\tau_{01},\mathbb{f}_{1})\to\mathfrak{A} _{2},\qquad\varphi_{-}\colon(C_{0}(V_{1},q_{1}),\tau_{01},\mathbb{f}_{1})\to \mathfrak{A}_{3}.\] As observed in Definition 1.20, \(\varphi_{+}\big{(}\boldsymbol{\omega}(q_{1})\big{)}\subset\mathfrak{go}(q_{2})\) and \(\varphi_{-}\big{(}\boldsymbol{\omega}(q_{1})\big{)}\subset\mathfrak{go}(q_{3})\), hence \[\varphi_{+}(x_{1}y_{1})\in\mathfrak{go}(q_{2})\qquad\text{and}\qquad\varphi_{ -}(x_{1}y_{1})\in\mathfrak{go}(q_{3})\qquad\text{for all $x_{1}$, $y_{1}\in V_{1}$.}\] The definition of \(\varphi\) yields \(\varphi_{+}(x_{1}y_{1})=r_{x_{1}}\ell_{y_{1}}\) and \(\varphi_{-}(x_{1}y_{1})=\ell_{x_{1}}r_{y_{1}}\), hence by (3.3) \[\dot{\mu}\big{(}\varphi_{+}(x_{1}y_{1})\big{)}=\varphi_{+}(x_{1}y_{1})+\varphi _{+}(y_{1}x_{1})=b_{1}(x_{1},y_{1})\] and \[\dot{\mu}\big{(}\varphi_{-}(x_{1}y_{1})\big{)}=\varphi_{-}(x_{1}y_{1})+\varphi _{-}(y_{1}x_{1})=b_{1}(x_{1},y_{1}).\] Since \(r_{x_{1}}\ell_{y_{1}}\in\mathfrak{go}(q_{2})\) and \(\ell_{x_{1}}r_{y_{1}}\in\mathfrak{go}(q_{3})\), it follows from Proposition 1.11 that for \(x_{1}\), \(y_{1}\in V_{1}\), \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\) \[b_{2}(r_{x_{1}}\ell_{y_{1}}(x_{2}),x_{2})=b_{1}(x_{1},y_{1})q_{2}(x_{2})\quad \text{and}\quad b_{3}(\ell_{x_{1}}r_{y_{1}}(x_{3}),x_{3})=b_{1}(x_{1},y_{1})q_ {3}(x_{3}). \tag{3.4}\] If \(x_{1}\in V_{1}\) is nonzero, there exists \(y_{1}\in V_{1}\) such that \(b_{1}(x_{1},y_{1})=1\). 
From (3.2) and (3.3) we derive for all \(x_{2}\in V_{2}\) \[b_{3}\big{(}\ell_{x_{1}}(x_{2}),\ell_{x_{1}}r_{y_{1}}\ell_{x_{1}}(x_{2})\big{)} =b_{2}\big{(}r_{x_{1}}\ell_{x_{1}}(x_{2}),r_{y_{1}}\ell_{x_{1}}(x_{2})\big{)}=q _{1}(x_{1})b_{2}\big{(}x_{2},r_{y_{1}}\ell_{x_{1}}(x_{2})\big{)}.\] But (3.4) yields \[b_{3}\big{(}\ell_{x_{1}}(x_{2}),\ell_{x_{1}}r_{y_{1}}\ell_{x_{1}}(x_{2})\big{)}= q_{3}\big{(}\ell_{x_{1}}(x_{2})\big{)}\quad\text{and}\quad b_{2}\big{(}x_{2},r_{y_{1}} \ell_{x_{1}}(x_{2})\big{)}=q_{2}(x_{2}),\] hence \[q_{3}\big{(}\ell_{x_{1}}(x_{2})\big{)}=q_{1}(x_{1})q_{2}(x_{2})\qquad\text{for all $x_{1}\in V_{1}$, $x_{2}\in V_{2}$ with $x_{1}\neq 0$.}\] This equation obviously also holds for \(x_{1}=0\). Therefore, defining \[*_{3}\colon V_{1}\times V_{2}\to V_{3}\qquad\text{by}\quad x_{1}*_{3}x_{2}= \ell_{x_{1}}(x_{2})\qquad\text{for $x_{1}\in V_{1}$ and $x_{2}\in V_{2}$,}\] we see that \(*_{3}\) is a composition of \((V_{1},q_{1})\), \((V_{2},q_{2})\) and \((V_{3},q_{3})\). Let also \[x_{3}*_{2}x_{1}=r_{x_{1}}(x_{3})\qquad\text{for $x_{3}\in V_{3}$ and $x_{1}\in V_{1}$.}\] From (3.2) it follows that \(b_{2}(x_{3}*_{2}x_{1},x_{2})=b_{3}(x_{3},x_{1}*_{3}x_{2})\) for \(x_{1}\in V_{1}\), \(x_{2}\in V_{2}\) and \(x_{3}\in V_{3}\), hence Proposition 2.1 shows that \(*_{2}\) is the derived composition of \((V_{3},q_{3})\), \((V_{1},q_{1})\) and \((V_{2},q_{2})\). Therefore, \(\varphi_{0}=C_{0}(\alpha)\) for \(\alpha\colon V_{1}\to\operatorname{End}(V_{2}\oplus V_{3})\) mapping \(x_{1}\in V_{1}\) to \(\bigl{(}\begin{smallmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{smallmatrix}\bigr{)}\). We thus see that \(\mathscr{T}\,=\operatorname{End}(\mathscr{G})\) for \(\mathscr{G}=\bigl{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3} \bigr{)}\). Proposition 3.1 shows that the composition \(\mathscr{G}\) is uniquely determined up to similitude. ### Similitudes of trialitarian triples Throughout this subsection, we fix a trialitarian triple \[\mathscr{T}=(\mathfrak{A}_{1},\,\mathfrak{A}_{2},\,\mathfrak{A}_{3},\,\varphi_ {0})\] with \(\mathfrak{A}_{i}=(A_{1},\sigma_{i},\mathfrak{f}_{i})\) a central simple algebra with quadratic pair of degree \(8\) over an arbitrary field \(F\) for \(i=1,\,2,\,3\). The algebraic group scheme \(\operatorname{\mathbf{Aut}}(\mathscr{T})\) of automorphisms of \(\mathscr{T}\) is defined as follows: for any commutative \(F\)-algebra \(R\), the group \(\operatorname{\mathbf{Aut}}(\mathscr{T})(R)\) consists of the triples \((\gamma_{1},\gamma_{2},\gamma_{3})\in\operatorname{\mathbf{Aut}}(\mathfrak{A} _{1})(R)\times\operatorname{\mathbf{Aut}}(\mathfrak{A}_{2})(R)\times \operatorname{\mathbf{Aut}}(\mathfrak{A}_{3})(R)\) that make the following square commute: (3.5) Thus, \[\operatorname{\mathbf{Aut}}(\mathscr{T})\subset\operatorname{\mathbf{Aut}} (\mathfrak{A}_{1})\times\operatorname{\mathbf{Aut}}(\mathfrak{A}_{2})\times \operatorname{\mathbf{Aut}}(\mathfrak{A}_{3}).\] Now, recall from [13, SS23.B] that the map \(\operatorname{Int}\colon\operatorname{\mathbf{GO}}(\mathfrak{A}_{i})\to \operatorname{\mathbf{Aut}}(\mathfrak{A}_{i})\) defines an isomorphism \(\operatorname{\mathbf{PGO}}(\mathfrak{A}_{i})\xrightarrow{\sim}\operatorname {\mathbf{Aut}}(\mathfrak{A}_{i})\). 
Therefore, we may consider the inverse image of \(\operatorname{\mathbf{Aut}}(\mathscr{T})\) under the surjective morphism \[\operatorname{Int}\colon\operatorname{\mathbf{GO}}(\mathfrak{A}_{1})\times \operatorname{\mathbf{GO}}(\mathfrak{A}_{2})\times\operatorname{\mathbf{GO}} (\mathfrak{A}_{3})\to\operatorname{\mathbf{Aut}}(\mathfrak{A}_{1})\times \operatorname{\mathbf{Aut}}(\mathfrak{A}_{2})\times\operatorname{\mathbf{Aut} }(\mathfrak{A}_{3}).\] **Definition 3.3**.: The algebraic group scheme of _similitudes_ of the trialitarian triple \(\mathscr{T}\) is \[\operatorname{\mathbf{GO}}(\mathscr{T})=\operatorname{Int}^{-1}\bigl{(} \operatorname{\mathbf{Aut}}(\mathscr{T})\bigr{)}\subset\operatorname{ \mathbf{GO}}(\mathfrak{A}_{1})\times\operatorname{\mathbf{GO}}(\mathfrak{A}_{2 })\times\operatorname{\mathbf{GO}}(\mathfrak{A}_{3}).\] From this definition, it follows that the map \(\operatorname{Int}\) restricts to a surjective morphism (see [13, (22.4)]) \[\operatorname{Int}\colon\operatorname{\mathbf{GO}}(\mathscr{T})\to \operatorname{\mathbf{Aut}}(\mathscr{T}).\] Its kernel is the algebraic group of _homotheties_\(\operatorname{\mathbf{H}}(\mathscr{T})=\operatorname{\mathbf{G}}_{\mathbf{m}}^{3}\), which lies in the center of \(\operatorname{\mathbf{GO}}(\mathscr{T})\). We may therefore consider the quotient \[\operatorname{\mathbf{PGO}}(\mathscr{T})=\operatorname{\mathbf{GO}}( \mathscr{T})/\operatorname{\mathbf{H}}(\mathscr{T})\subset\operatorname{ \mathbf{PGO}}(\mathfrak{A}_{1})\times\operatorname{\mathbf{PGO}}(\mathfrak{A} _{2})\times\operatorname{\mathbf{PGO}}(\mathfrak{A}_{3}),\] and the map \(\operatorname{Int}\) yields an isomorphism \[\overline{\operatorname{Int}}\colon\operatorname{\mathbf{PGO}}(\mathscr{T}) \xrightarrow{\sim}\operatorname{\mathbf{Aut}}(\mathscr{T}).\] Our goal in this subsection is to define a subgroup \(\operatorname{\mathbf{O}}(\mathscr{T})\subset\operatorname{\mathbf{GO}}( \mathscr{T})\) on the same model as the subgroup \(\operatorname{\mathbf{O}}(\mathscr{G})\) of the group \(\operatorname{\mathbf{GO}}(\mathscr{G})\) of similitudes of a composition of quadratic spaces, so that when \(\mathscr{T}\,=\operatorname{End}(\mathscr{G})\) for some composition \(\mathscr{G}\) of quadratic spaces of dimension \(8\) we may identify \[\operatorname{\mathbf{O}}(\mathscr{T})=\operatorname{\mathbf{O}}(\mathscr{G}), \qquad\operatorname{\mathbf{GO}}(\mathscr{T})=\operatorname{\mathbf{GO}}( \mathscr{G})\quad\text{and}\quad\operatorname{\mathbf{PGO}}(\mathscr{T})= \operatorname{\mathbf{PGO}}(\mathscr{G});\] see Proposition 3.10. Moreover, for an arbitrary trialitarian triple \(\mathscr{T}\), we relate \(\operatorname{\mathbf{GO}}(\mathscr{T})\) to the extended Clifford group \(\operatorname{\mathbf{\Omega}}(\mathfrak{A}_{1})\) to obtain canonical isomorphisms \[\operatorname{\mathbf{Spin}}(\mathfrak{A}_{1})\xrightarrow{\sim}\operatorname{ \mathbf{O}}(\mathscr{T})\qquad\text{and}\qquad\operatorname{\mathbf{PGO}}( \mathscr{T})\xrightarrow{\sim}\operatorname{\mathbf{PGO}}^{+}(\mathfrak{A}_{1}),\] see Theorems 3.4 and 3.12. A key tool is the following construction: let \(\varphi_{+}\colon\mathfrak{C}(\mathfrak{A}_{1})\to\mathfrak{A}_{2}\) and \(\varphi_{-}\colon\mathfrak{C}(\mathfrak{A}_{1})\to\mathfrak{A}_{3}\) be the two components of the isomorphism \(\varphi_{0}\colon\mathfrak{C}(\mathfrak{A}_{1})\to\mathfrak{A}_{2}\times \mathfrak{A}_{3}\), which is part of the structure of \(\mathscr{T}\). 
Recall from (1.22) that \(\varphi_{+}\) and \(\varphi_{-}\) restrict to morphisms \[\varphi_{+}\colon\,\mathbf{\Omega}(\mathfrak{A}_{1})\to\mathbf{GO}^{+}( \mathfrak{A}_{2})\qquad\text{and}\qquad\varphi_{-}\colon\,\mathbf{\Omega}( \mathfrak{A}_{1})\to\mathbf{GO}^{+}(\mathfrak{A}_{3}).\] Combine \(\varphi_{+}\) and \(\varphi_{-}\) with the morphism \(\chi_{0}\colon\,\mathbf{\Omega}(\mathfrak{A}_{1})\to\mathbf{GO}^{+}( \mathfrak{A}_{1})\) of SS1.3 to obtain a morphism \[\psi_{\mathscr{T}}\colon\,\mathbf{\Omega}(\mathfrak{A}_{1})\to\mathbf{GO}( \mathscr{T}) \tag{3.6}\] as follows: for every commutative \(F\)-algebra \(R\) and \(\xi\in\mathbf{\Omega}(\mathfrak{A}_{1})(R)\), let \[\psi_{\mathscr{T}}(\xi)=\big{(}\chi_{0}(\xi),\,\varphi_{+}(\xi),\,\varphi_{- }(\xi)\big{)}\in\mathbf{GO}^{+}(\mathfrak{A}_{1})(R)\times\mathbf{GO}^{+}( \mathfrak{A}_{2})(R)\times\mathbf{GO}^{+}(\mathfrak{A}_{3})(R).\] Proposition 1.7 shows that \(C\big{(}\mathrm{Int}(\chi_{0}(\xi))\big{)}=\mathrm{Int}(\xi)\), hence \[\varphi_{0}\circ C\big{(}\mathrm{Int}(\chi_{0}(\xi))\big{)}\circ\varphi_{0}^{- 1}=\mathrm{Int}\big{(}\varphi_{0}(\xi)\big{)}=\mathrm{Int}\big{(}\varphi_{+}( \xi)\big{)}\times\mathrm{Int}\big{(}\varphi_{-}(\xi)\big{)},\] which means that \(\big{(}\mathrm{Int}(\chi_{0}(\xi)),\,\mathrm{Int}(\varphi_{+}(\xi)),\,\mathrm{ Int}(\varphi_{-}(\xi))\big{)}\) lies in \(\mathbf{Aut}(\mathscr{T})(R)\), and therefore \(\psi_{\mathscr{T}}(\xi)\in\mathbf{GO}(\mathscr{T})(R)\). Note that \(\psi_{\mathscr{T}}\) is injective, since \(\big{(}\varphi_{+}(\xi),\,\varphi_{-}(\xi)\big{)}=\varphi_{0}(\xi)\) and \(\varphi_{0}\) is an isomorphism. We first use the map \(\psi_{\mathscr{T}}\) to prove: **Theorem 3.4**.: _Projection on the first component \(\pi_{\mathscr{T}}\colon\,\mathbf{PGO}(\mathscr{T})\to\mathbf{PGO}(\mathfrak{A }_{1})\) defines an isomorphism_ \[\mathbf{PGO}(\mathscr{T})\xrightarrow{\sim}\mathbf{PGO}^{+}(\mathfrak{A}_{1}).\] Proof.: Let \(R\) be a commutative \(F\)-algebra and \((\gamma_{1},\gamma_{2},\gamma_{3})\in\mathbf{Aut}(\mathscr{T})(R)\). Since \(\varphi_{0}\) is an isomorphism, \(\gamma_{2}\) and \(\gamma_{3}\) are uniquely determined by \(\gamma_{1}\) and commutativity of the diagram (3.5). Therefore, \(\pi_{\mathscr{T}}\) is injective. Moreover, commutativity of the diagram (3.5) shows that \(C(\gamma_{1})\) leaves the center of \(C(\mathfrak{A}_{1})\) fixed, which means that \(\gamma_{1}\) lies in the connected component of the identity \(\mathbf{Aut}^{+}(\mathfrak{A}_{1})(R)\). Therefore, the image of \(\pi_{\mathscr{T}}\) lies in \(\mathbf{PGO}^{+}(\mathfrak{A}_{1})\). To complete the proof, we show that \(\pi_{\mathscr{T}}\) is surjective on \(\mathbf{PGO}^{+}(\mathfrak{A}_{1})\). Since \(\mathbf{PGO}^{+}(\mathfrak{A}_{1})\) is smooth, it suffices to consider rational points over an algebraic closure \(F_{\mathrm{alg}}\) of \(F\), by [13, (22.3)]. Recall from Proposition 1.8 that \(\chi_{0}\) is surjective. For every \(g_{1}\in\mathbf{GO}^{+}(\mathfrak{A}_{1})(F_{\mathrm{alg}})\), we may therefore find \(\xi\in\mathbf{\Omega}(\mathfrak{A}_{1})(F_{\mathrm{alg}})\) such that \(\chi_{0}(\xi)=g_{1}\). Then \(\psi_{\mathscr{T}}(\xi)\in\mathbf{GO}(\mathscr{T})(F_{\mathrm{alg}})\), and its image \(\overline{\psi}_{\mathscr{T}}(\xi)\) in \(\mathbf{PGO}(\mathscr{T})(F_{\mathrm{alg}})\) satisfies \(\pi_{\mathscr{T}}\big{(}\overline{\psi}_{\mathscr{T}}(\xi)\big{)}=g_{1}F_{ \mathrm{alg}}^{\times}\). 
We have thus found an element in \(\mathbf{PGO}(\mathscr{T})(F_{\mathrm{alg}})\) that maps under \(\pi_{\mathscr{T}}\) to any given \(g_{1}F_{\mathrm{alg}}^{\times}\in\mathbf{PGO}^{+}(\mathfrak{A}_{1})(F_{\mathrm{ alg}})\), hence \(\pi_{\mathscr{T}}\) is surjective. **Corollary 3.5**.: _The algebraic group schemes \(\mathbf{GO}(\mathscr{T})\) and \(\mathbf{PGO}(\mathscr{T})\) are smooth and connected._ Proof.: That \(\mathbf{PGO}(\mathscr{T})\) is smooth and connected readily follows from the theorem, since \(\mathbf{PGO}^{+}(\mathfrak{A}_{1})\) is smooth and connected by [13, SS23.B]. Then \(\mathbf{GO}(\mathscr{T})\) is also smooth and connected because \(\mathbf{PGO}(\mathscr{T})=\mathbf{GO}(\mathscr{T})/\,\mathbf{H}(\mathscr{T})\) with \(\mathbf{H}(\mathscr{T})\) smooth and connected, see [13, (22.12)]. We next use \(\psi_{\mathscr{T}}\) to determine the structure of \(\mathbf{GO}(\mathscr{T})\). Let \(Z_{1}\simeq F\times F\) denote the center of \(C(\mathfrak{A}_{1})\), and recall that \(R_{Z_{1}/F}(\mathbf{G}_{\mathbf{m}})\) lies in the center of \(\mathbf{\Omega}(\mathfrak{A}_{1})\) (see (1.4)). For every commutative \(F\)-algebra \(R\) and \(z\in(Z_{1})_{R}^{\times}\), Proposition 1.7 yields \(\chi_{0}(z)=N_{Z_{1}/F}(z)\), while \(\varphi_{+}(z)\), \(\varphi_{-}(z)\in R^{\times}\). Therefore, \[\psi_{\mathscr{T}}(z)=(N_{Z_{1}/F}(z),\,\varphi_{+}(z),\,\varphi_{-}(z)\big{)}\in R ^{\times}\times R^{\times}\times R^{\times}=\mathbf{H}(\mathscr{T})(R). \tag{3.7}\] **Proposition 3.6**.: _The morphism \(\psi_{\mathscr{T}}\) and the inclusion \(i\colon\,\mathbf{H}(\mathscr{T})\to\mathbf{GO}(\mathscr{T})\) define an isomorphism_ \[\overline{\psi_{\mathscr{T}}\times i}\colon\,\big{(}\mathbf{\Omega}(\mathfrak{A}_{1 })\times\mathbf{H}(\mathscr{T})\big{)}/R_{Z_{1}/F}(\mathbf{G}_{\mathbf{m}}) \xrightarrow{\sim}\mathbf{GO}(\mathscr{T}),\] _where \(R_{Z_{1}/F}(\mathbf{G_{m}})\) is embedded into the product canonically in the first factor and by the inversion followed by \(\psi_{\mathscr{T}}\) in the second (so the copies of \(R_{Z_{1}/F}(\mathbf{G_{m}})\) in \(\mathbf{\Omega}(\mathfrak{A}_{1})\) and in \(\mathbf{H}(\mathscr{T})\) are identified in the quotient)._ Proof.: It is clear from the definition of the quotient that the morphism \(\overline{\psi_{\mathscr{T}}\times i}\) is defined. To prove that it is injective, consider an arbitrary commutative \(F\)-algebra \(R\) and pick \(\xi\in\mathbf{\Omega}(\mathfrak{A}_{1})(R)\) and \(\nu=(\nu_{1},\nu_{2},\nu_{3})\in\mathbf{H}(\mathscr{T})(R)\) such that \(\psi_{\mathscr{T}}(\xi)\cdot\nu=1\) in \(\mathbf{GO}(\mathscr{T})(R)\), i.e., \[\chi_{0}(\xi)=\nu_{1}^{-1},\qquad\varphi_{+}(\xi)=\nu_{2}^{-1}\quad\text{and} \quad\varphi_{-}(\xi)=\nu_{3}^{-1}.\] The last two equations show that \(\varphi_{0}(\xi)=(\nu_{2}^{-1},\nu_{3}^{-1})\) in \((A_{2})_{R}\times(A_{3})_{R}\). Since \(\varphi_{0}\) is an isomorphism, it follows that \(\xi\) lies in \((Z_{1})_{R}^{\times}\), hence \((\xi,\nu)\) is trivial in the quotient, for \(\nu=\psi_{\mathscr{T}}(\xi)^{-1}\). To complete the proof, it remains to show that \(\overline{\psi_{\mathscr{T}}\times i}\) is surjective. Since \(\mathbf{GO}(\mathscr{T})\) is smooth by Corollary 3.5, it suffices to consider the groups of rational points over an algebraic closure \(F_{\text{alg}}\) of \(F\). Let \(g=(g_{1},g_{2},g_{3})\in\mathbf{GO}(\mathscr{T})(F_{\text{alg}})\). Note that \(g_{1}\), \(g_{2}\) and \(g_{3}\) are proper similitudes, because \(\mathbf{GO}(\mathscr{T})\) is connected by Corollary 3.5. 
We know from Proposition 1.8 that \(\chi_{0}\colon\,\mathbf{\Omega}(\mathfrak{A}_{1})\to\mathbf{GO}^{+}(\mathfrak{ A}_{1})\) is surjective, hence we may find \(\xi\in\mathbf{\Omega}(\mathfrak{A}_{1})(F_{\text{alg}})\) such that \(\chi_{0}(\xi)=g_{1}\). Then \(\psi_{\mathscr{T}}(\xi)=(g_{1},g_{2}^{\prime},g_{3}^{\prime})\) for some \(g_{2}^{\prime}\in\mathbf{GO}^{+}(\mathfrak{A}_{2})(F_{\text{alg}})\) and \(g_{3}^{\prime}\in\mathbf{GO}^{+}(\mathfrak{A}_{3})(F_{\text{alg}})\). As \(g\) and \(\psi_{\mathscr{T}}(\xi)\) belong to \(\mathbf{GO}(\mathscr{T})(F_{\text{alg}})\), the following diagrams commute: Therefore, \(\operatorname{Int}(g_{2})=\operatorname{Int}(g_{2}^{\prime})\) and \(\operatorname{Int}(g_{3})=\operatorname{Int}(g_{3}^{\prime})\), which implies that \(g_{2}=g_{2}^{\prime}\nu_{2}\) and \(g_{3}=g_{3}^{\prime}\nu_{3}\) for some \(\nu_{2}\), \(\nu_{3}\in F_{\text{alg}}^{\times}\). With \(\nu=(1,\nu_{2},\nu_{3})\in\mathbf{H}(\mathscr{T})(F_{\text{alg}})\) we then have \[\psi_{\mathscr{T}}(\xi)\cdot\nu=(g_{1},g_{2}^{\prime}\nu_{2},g_{3}^{\prime} \nu_{3})=(g_{1},g_{2},g_{3})=g.\] Surjectivity of \(\overline{\psi_{\mathscr{T}}\times i}\) follows. **Corollary 3.7**.: _Let \(m\colon\,\mathbf{H}(\mathscr{T})\to\mathbf{G_{m}}\) denote the multiplication map carrying \((\nu_{1},\nu_{2},\nu_{3})\) to \(\nu_{1}\nu_{2}\nu_{3}\). There is a morphism \(\rho_{\mathscr{T}}\colon\,\mathbf{GO}(\mathscr{T})\to\mathbf{G_{m}}\) uniquely determined by the condition that the following diagram commutes:_ (3.8) Proof.: Proposition 3.6 identifies \(\mathbf{GO}(\mathscr{T})\) with a quotient of \(\mathbf{\Omega}(\mathfrak{A}_{1})\times\mathbf{H}(\mathscr{T})\) by \(R_{Z_{1}/F}(\mathbf{G_{m}})\), hence to prove the existence and uniqueness of \(\rho_{\mathscr{T}}\) it suffices to show that \((\mu\circ\chi_{0})\) and \(m\) coincide on the images of \(R_{Z_{1}/F}(\mathbf{G_{m}})\) in \(\mathbf{\Omega}(\mathscr{T})\) by inclusion and in \(\mathbf{H}(\mathscr{T})\) by \(\psi_{\mathscr{T}}\). For every commutative \(F\)-algebra \(R\) and \(z\in(Z_{1})_{R}^{\times}\) we have \(\chi_{0}(z)=N_{Z_{1}/F}(z)\) by Proposition 1.7, hence \((\mu\circ\chi_{0})(z)=N_{Z_{1}/F}(z)^{2}\). On the other hand \(N_{Z_{1}/F}(z)=\varphi_{+}(z)\varphi_{-}(z)\), hence \[(m\circ\psi_{\mathscr{T}})(z)=m\big{(}N_{Z_{1}/F}(z),\,\varphi_{+}(z),\,\varphi _{-}(z)\big{)}=N_{Z_{1}/F}(z)^{2}.\] Thus, \((\mu\circ\chi_{0})\) and \(m\) coincide on the images of \(R_{Z_{1}/F}(\mathbf{G_{m}})\) **Definition 3.8**.: A morphism \(\lambda_{\mathscr{T}}\colon\mathbf{GO}(\mathscr{T})\to\mathbf{G}_{\mathbf{m}}^{3}\) is defined as follows: for every commutative \(F\)-algebra \(R\) and \(g=(g_{1},g_{2},g_{3})\in\mathbf{GO}(\mathscr{T})(R)\), set \[\lambda_{\mathscr{T}}(g)=\big{(}\rho_{\mathscr{T}}(g)\mu(g_{1})^{-1},\,\rho_{ \mathscr{T}}(g)\mu(g_{2})^{-1},\,\rho_{\mathscr{T}}(g)\mu(g_{3})^{-1}\big{)} \in R^{\times}\times R^{\times}\times R^{\times}.\] From the definition of \(\rho_{\mathscr{T}}\), it follows that for \(\nu=(\nu_{1},\nu_{2},\nu_{3})\in\mathbf{H}(\mathscr{T})(R)\) \[\rho_{\mathscr{T}}(\nu)=m(\nu)=\nu_{1}\nu_{2}\nu_{3},\] hence \[\lambda_{\mathscr{T}}(\nu)=(\nu_{2}\nu_{3}\nu_{1}^{-1},\,\nu_{3}\nu_{1}\nu_{2 }^{-1},\,\nu_{1}\nu_{2}\nu_{3}^{-1}). \tag{3.9}\] The definition of \(\rho_{\mathscr{T}}\) also yields \(\rho_{\mathscr{T}}\big{(}\psi_{\mathscr{T}}(\xi)\big{)}=\mu\big{(}\chi_{0}( \xi)\big{)}\) for \(\xi\in\mathbf{\Omega}(\mathfrak{A}_{1})(R)\). 
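Explicitly, in view of Proposition 3.6 the morphism \(\rho_{\mathscr{T}}\) is determined by the formula

\[\rho_{\mathscr{T}}\big{(}\psi_{\mathscr{T}}(\xi)\cdot\nu\big{)}=\mu\big{(}\chi_{0}(\xi)\big{)}\,\nu_{1}\nu_{2}\nu_{3}\qquad\text{for }\xi\in\mathbf{\Omega}(\mathfrak{A}_{1})(R)\text{ and }\nu=(\nu_{1},\nu_{2},\nu_{3})\in\mathbf{H}(\mathscr{T})(R),\]

for every commutative \(F\)-algebra \(R\).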
Letting \(\underline{\mu}\colon\mathbf{\Omega}(\mathfrak{A}_{1})\to R_{Z_{1}/F}( \mathbf{G_{m}})\) denote the multiplier map, we have by Proposition 1.7 \[\mu\big{(}\chi_{0}(\xi)\big{)}=N_{Z_{1}/F}\big{(}\underline{\mu}(\xi)\big{)}= \varphi_{+}\big{(}\underline{\mu}(\xi)\big{)}\cdot\varphi_{-}\big{(}\underline {\mu}(\xi)\big{)}.\] As \(\varphi_{0}\) is an isomorphism of algebras with quadratic pair, we also have \[\big{(}\varphi_{+}(\underline{\mu}(\xi)),\,\varphi_{-}(\underline{\mu}(\xi) )\big{)}=\varphi_{0}\big{(}\underline{\mu}(\xi)\big{)}=\big{(}\mu(\varphi_{+ }(\xi)),\,\mu(\varphi_{-}(\xi))\big{)}. \tag{3.10}\] Therefore, the definition of \(\lambda_{\mathscr{T}}\) yields \[\begin{split}\lambda_{\mathscr{T}}\big{(}\psi_{\mathscr{T}}(\xi )\big{)}&=\big{(}1,\,\mu\big{(}\chi_{0}(\xi)\big{)}\mu\big{(} \varphi_{+}(\xi)\big{)}^{-1},\,\mu\big{(}\chi_{0}(\xi)\big{)}\mu\big{(}\varphi _{-}(\xi)\big{)}^{-1}\big{)}\\ &=\big{(}1,\,\mu\big{(}\varphi_{-}(\xi)\big{)},\,\mu\big{(} \varphi_{+}(\xi)\big{)}\big{)}.\end{split} \tag{3.11}\] **Definition 3.9**.: Let \[\mathbf{O}(\mathscr{T})=\ker(\lambda_{\mathscr{T}}\colon\mathbf{GO}(\mathscr{ T})\to\mathbf{G}_{\mathbf{m}}^{3}).\] As in the proof of Proposition 2.18, it follows from (3.9) that the map \(\lambda_{\mathscr{T}}\colon\mathbf{H}(\mathscr{T})\to\mathbf{G}_{\mathbf{m}}^ {3}\), hence also \(\lambda_{\mathscr{T}}\colon\mathbf{GO}(\mathscr{T})\to\mathbf{G}_{\mathbf{m}}^ {3}\), is surjective. Therefore, the following sequence is exact: \[1\to\mathbf{O}(\mathscr{T})\to\mathbf{GO}(\mathscr{T})\stackrel{{ \lambda_{\mathscr{T}}}}{{\longrightarrow}}\mathbf{G}_{\mathbf{m}}^{3}\to 1\] Now, let \(\mathbf{Z}(\mathscr{T})\) be the kernel of the canonical map \(\mathbf{O}(\mathscr{T})\to\mathbf{PGO}(\mathscr{T})\), which is the composition of the inclusion \(\mathbf{O}(\mathscr{T})\subset\mathbf{GO}(\mathscr{T})\) and the canonical epimorphism \(\mathbf{GO}(\mathscr{T})\to\mathbf{PGO}(\mathscr{T})\). Thus, letting \(m\) be the multiplication map \((\nu_{1},\nu_{2},\nu_{3})\mapsto\nu_{1}\nu_{2}\nu_{3}\), \[\mathbf{Z}(\mathscr{T})=\mathbf{H}(\mathscr{T})\cap\mathbf{O}(\mathscr{T})= \ker(m\colon\boldsymbol{\mu}_{2}\times\boldsymbol{\mu}_{2}\times\boldsymbol{\mu} _{2}\to\boldsymbol{\mu}_{2})\simeq\boldsymbol{\mu}_{2}\times\boldsymbol{\mu}_{2}.\] The same arguments as in Proposition 2.19 yield the following commutative diagram with exact rows and columns: (3.12) Now, we show that the definitions above are compatible with the corresponding definitions for compositions of quadratic spaces in SS2.3. **Proposition 3.10**.: _For \(\mathscr{C}\) any composition of quadratic spaces of dimension \(8\) and \(\mathscr{T}=\operatorname{End}(\mathscr{C})\), canonical isomorphisms yield identifications_ \[\mathbf{H}(\mathscr{C})=\mathbf{H}(\mathscr{T}),\quad\mathbf{O}(\mathscr{C})= \mathbf{O}(\mathscr{T}),\quad\mathbf{GO}(\mathscr{C})=\mathbf{GO}(\mathscr{T }),\quad\mathbf{PGO}(\mathscr{C})=\mathbf{PGO}(\mathscr{T}).\] _Moreover, the following diagram commutes:_ (3.13) Proof.: Let \(R\) be a commutative \(F\)-algebra. For every \((g_{1},g_{2},g_{3},\lambda_{3})\in\mathbf{GO}(\mathscr{C})(R)\) the triple \((g_{1},g_{2},g_{3})\) lies in \(\mathbf{GO}(\mathscr{T})(R)\), as seen in the first part of the proof of Proposition 3.1. Since \(\lambda_{3}\) is uniquely determined by \(g_{1}\), \(g_{2}\) and \(g_{3}\), mapping \((g_{1},g_{2},g_{3},\lambda_{3})\) to \((g_{1},g_{2},g_{3})\) defines an injective map \(\mathbf{GO}(\mathscr{C})\to\mathbf{GO}(\mathscr{T})\). 
Proposition 3.1 also shows that for \(F_{\mathrm{alg}}\) an algebraic closure of \(F\) the map \(\mathbf{GO}(\mathscr{C})(F_{\mathrm{alg}})\to\mathbf{GO}(\mathscr{T})(F_{ \mathrm{alg}})\) is surjective. This is sufficient to prove that the map \(\mathbf{GO}(\mathscr{C})\to\mathbf{GO}(\mathscr{T})\) is surjective, since \(\mathbf{GO}(\mathscr{T})\) is smooth by Corollary 3.5. We have thus obtained a canonical isomorphism \(\mathbf{GO}(\mathscr{C})\xrightarrow{\sim}\mathbf{GO}(\mathscr{T})\). This isomorphism maps \(\mathbf{H}(\mathscr{C})\) to \(\mathbf{H}(\mathscr{T})\), hence it induces an isomorphism \(\mathbf{PGO}(\mathscr{C})\xrightarrow{\sim}\mathbf{PGO}(\mathscr{T})\). In order to prove that the isomorphism \(\mathbf{GO}(\mathscr{C})=\mathbf{GO}(\mathscr{T})\) also maps \(\mathbf{O}(\mathscr{C})\) to \(\mathbf{O}(\mathscr{T})\), it suffices to prove that the diagram (3.13) is commutative. For this, we use the description of \(\mathbf{GO}(\mathscr{T})\) in Proposition 3.6 as a quotient of the product of \(\mathbf{\Omega}(\mathfrak{A}_{1})\) and \(\mathbf{H}(\mathscr{T})\). It is clear from (3.9) that \(\lambda_{\mathscr{C}}\) and \(\lambda_{\mathscr{T}}\) coincide on the image of \(\mathbf{H}(\mathscr{C})=\mathbf{H}(\mathscr{T})\) in \(\mathbf{GO}(\mathscr{C})=\mathbf{GO}(\mathscr{T})\). Therefore, it suffices to consider the image of \(\mathbf{\Omega}(\mathfrak{A}_{1})\) under \(\psi_{\mathscr{T}}\). Let \(\mathscr{C}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3}\big{)}\), so \(\varphi_{0}=C_{0}(\alpha)\), \(A_{1}=\operatorname{End}V_{1}\) and \(\mathbf{\Omega}(\mathfrak{A}_{1})=\mathbf{\Omega}(q_{1})\). Let \(R\) be a commutative \(F\)-algebra and let \(\xi\in\mathbf{\Omega}(q_{1})(R)\). To simplify notation, write \(g_{1}=\chi_{0}(\xi)\in\mathbf{GO}^{+}(q_{1})(R)\), \(g_{2}=C_{+}(\alpha)(\xi)\in\mathbf{GO}^{+}(q_{2})(R)\) and \(g_{3}=C_{-}(\alpha)(\xi)\in\mathbf{GO}^{+}(q_{3})(R)\), so \[\psi_{\mathscr{T}}(\xi)=(g_{1},g_{2},g_{3}).\] Now, (3.10) yields \(C_{0}(\alpha)\big{(}\underline{\mu}(\xi)\big{)}=\big{(}\mu(g_{2}),\mu(g_{3}) \big{)}\), and by (1.2), we have \(g_{1}(x_{1})=\iota\big{(}\underline{\mu}(\xi)\big{)}\xi x_{1}\xi^{-1}\) for every \(x_{1}\in V_{1R}\). By taking the image of each side of the last equation under \(C(\alpha)\), we obtain \[\begin{pmatrix}0&r_{g_{1}(x_{1})}\\ \ell_{g_{1}(x_{1})}&0\end{pmatrix}=\begin{pmatrix}\mu(g_{3})&0\\ 0&\mu(g_{2})\end{pmatrix}\begin{pmatrix}g_{2}&0\\ 0&g_{3}\end{pmatrix}\begin{pmatrix}0&r_{x_{1}}\\ \ell_{x_{1}}&0\end{pmatrix}\begin{pmatrix}g_{2}^{-1}&0\\ 0&g_{3}^{-1}\end{pmatrix}.\] This equation yields \[r_{g_{1}(x_{1})}g_{3}=\mu(g_{3})g_{2}r_{x_{1}}\qquad\text{and}\qquad\ell_{g_{ 1}(x_{1})}g_{2}=\mu(g_{2})g_{3}\ell_{x_{1}}\qquad\text{for all $x_{1}\in V_{1R}$},\] which means that for all \(x_{1}\in V_{1R}\), \(x_{2}\in V_{2R}\) and \(x_{3}\in V_{3R}\) \[g_{3}(x_{3})*_{2}g_{1}(x_{1})=\mu(g_{3})g_{2}(x_{3}*_{2}x_{1})\qquad\text{and} \qquad g_{1}(x_{1})*_{3}g_{2}(x_{2})=\mu(g_{2})g_{3}(x_{1}*_{3}x_{2}).\] These equations show that \(\big{(}g_{1},g_{2},g_{3},\mu(g_{2})\big{)}\in\mathbf{GO}(\mathscr{C})(R)\), hence by (2.25) \[\lambda_{\mathscr{C}}\big{(}\psi_{\mathscr{T}}(\xi)\big{)}=\big{(}1,\mu(g_{3} ),\mu(g_{2})\big{)}.\] Therefore, \(\lambda_{\mathscr{C}}\big{(}\psi_{\mathscr{T}}(\xi)\big{)}=\lambda_{\mathscr{ T}}\big{(}\psi_{\mathscr{T}}(\xi)\big{)}\) by (3.11), and the proof is complete. 
**Corollary 3.11**.: _For every trialitarian triple \(\mathscr{T}\), the algebraic group scheme \(\mathbf{O}(\mathscr{T})\) is smooth._ Proof.: Over an algebraic closure \(F_{\mathrm{alg}}\) of \(F\) the trialitarian triple \(\mathscr{T}\) is split, hence by Theorem 3.2 we may find a composition \(\mathscr{C}\) of quadratic spaces of dimension \(8\) over \(F_{\mathrm{alg}}\) such that \(\mathscr{T}_{F_{\mathrm{alg}}}\simeq\mathrm{End}(\mathscr{C})\). Then \(\mathbf{O}(\mathscr{T}_{F_{\mathrm{alg}}})\) is isomorphic to \(\mathbf{O}(\mathscr{C})\), which is smooth by Proposition 2.18, hence \(\mathbf{O}(\mathscr{T})\) is smooth by [13, (21.10)]. The final result in this subsection elucidates the structure of \(\mathbf{O}(\mathscr{T})\). **Theorem 3.12**.: _For every trialitarian triple \(\mathscr{T}\), the morphism \(\psi_{\mathscr{T}}\) restricts to an isomorphism_ \[\psi_{\mathscr{T}}\colon\mathbf{Spin}(\mathfrak{A}_{1})\xrightarrow{\sim}\mathbf{O}(\mathscr{T}).\] Proof.: When defining \(\psi_{\mathscr{T}}\), we already observed that this morphism is injective. Recall from §1.3 that \(\mathbf{Spin}(\mathfrak{A}_{1})\) is the kernel of \(\underline{\mu}\colon\mathbf{\Omega}(\mathfrak{A}_{1})\to R_{Z_{1}/F}(\mathbf{G}_{\mathbf{m}})\). Therefore, (3.10) and (3.11) show that \(\psi_{\mathscr{T}}\) maps \(\mathbf{Spin}(\mathfrak{A}_{1})\) to \(\mathbf{O}(\mathscr{T})\). To prove that \(\psi_{\mathscr{T}}\) maps \(\mathbf{Spin}(\mathfrak{A}_{1})\) onto \(\mathbf{O}(\mathscr{T})\), it suffices to consider the groups of rational points over an algebraic closure \(F_{\mathrm{alg}}\) of \(F\), because we know by Corollary 3.11 that \(\mathbf{O}(\mathscr{T})\) is smooth. Proposition 3.6 shows that \(\psi_{\mathscr{T}}\times i\colon\mathbf{\Omega}(\mathfrak{A}_{1})(F_{\mathrm{alg}})\times\mathbf{H}(\mathscr{T})(F_{\mathrm{alg}})\to\mathbf{GO}(\mathscr{T})(F_{\mathrm{alg}})\) is surjective, hence for any \(g\in\mathbf{O}(\mathscr{T})(F_{\mathrm{alg}})\) we may find \(\xi\in\mathbf{\Omega}(\mathfrak{A}_{1})(F_{\mathrm{alg}})\) and \(\nu=(\nu_{1},\nu_{2},\nu_{3})\in\mathbf{H}(\mathscr{T})(F_{\mathrm{alg}})\) such that \(\psi_{\mathscr{T}}(\xi)\cdot\nu=g\). Taking the image of each side under \(\lambda_{\mathscr{T}}\) and using (3.9) and (3.11), we obtain \[(1,\lambda_{2},\lambda_{3})\cdot(\nu_{2}\nu_{3}\nu_{1}^{-1},\,\nu_{3}\nu_{1}\nu_{2}^{-1},\,\nu_{1}\nu_{2}\nu_{3}^{-1})=(1,1,1)\] for some \(\lambda_{2},\,\lambda_{3}\in F_{\mathrm{alg}}^{\times}\), hence \(\nu_{1}=\nu_{2}\nu_{3}\). Therefore, \(\nu=\psi_{\mathscr{T}}(z)\) for \(z\in(Z_{1})_{F_{\mathrm{alg}}}^{\times}\) such that \(\varphi_{0}(z)=(\nu_{2},\nu_{3})\), and \(\psi_{\mathscr{T}}(\xi z)=g\). Since \(\lambda_{\mathscr{T}}(g)=(1,1,1)\), (3.10) and (3.11) show that \(\underline{\mu}(\xi z)=1\), hence \(\xi z\in\mathbf{Spin}(\mathfrak{A}_{1})(F_{\mathrm{alg}})\). Thus, \(\psi_{\mathscr{T}}\) maps \(\mathbf{Spin}(\mathfrak{A}_{1})\) onto \(\mathbf{O}(\mathscr{T})\). **Corollary 3.13**.: _The following diagram, in which the vertical maps are isomorphisms, is commutative with exact rows:_ (3.14) Proof.: The upper sequence is (1.5), and the lower sequence is from (3.12). Commutativity of the right square follows from the definition of \(\chi^{\prime}\) as the composition of \(\chi_{0}\) with the canonical map \(\mathbf{GO}^{+}(\mathfrak{A}_{1})\to\mathbf{PGO}^{+}(\mathfrak{A}_{1})\), and bijectivity of the vertical maps is proved in Theorems 3.4 and 3.12.
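In connection with the left-hand columns of diagrams (3.12) and (3.14), the following routine verification, using only (3.7) and (3.9), may be helpful: \(\psi_{\mathscr{T}}\) carries every \(2\)-torsion element of \((Z_{1})_{R}^{\times}\) into \(\mathbf{Z}(\mathscr{T})(R)\). Indeed, for \(z\in(Z_{1})_{R}^{\times}\) with \(z^{2}=1\), formula (3.7) gives \(\psi_{\mathscr{T}}(z)=\big{(}N_{Z_{1}/F}(z),\,\varphi_{+}(z),\,\varphi_{-}(z)\big{)}\in\mathbf{H}(\mathscr{T})(R)\), and since \(N_{Z_{1}/F}(z)=\varphi_{+}(z)\varphi_{-}(z)\), formula (3.9) yields \[\lambda_{\mathscr{T}}\big{(}\psi_{\mathscr{T}}(z)\big{)}=\big{(}1,\,\varphi_{-}(z)^{2},\,\varphi_{+}(z)^{2}\big{)}=(1,1,1),\] so that \(\psi_{\mathscr{T}}(z)\in\mathbf{H}(\mathscr{T})(R)\cap\mathbf{O}(\mathscr{T})(R)=\mathbf{Z}(\mathscr{T})(R)\).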
### Derived trialitarian triples To every trialitarian triple \(\mathscr{T}=(\mathfrak{A}_{1},\mathfrak{A}_{2},\mathfrak{A}_{3},\varphi_{0})\) we attach in this subsection two derived trialitarian triples \[\partial\mathscr{T}=(\mathfrak{A}_{2},\mathfrak{A}_{3},\mathfrak{A}_{1}, \varphi_{0}^{\prime})\quad\text{and}\quad\partial^{2}\mathscr{T}=(\mathfrak{A }_{3},\mathfrak{A}_{1},\mathfrak{A}_{2},\varphi_{0}^{\prime\prime})\] in such a way that for every composition \(\mathscr{C}\) of quadratic spaces of dimension \(8\) \[\partial\operatorname{End}(\mathscr{C})=\operatorname{End}(\partial\mathscr{C })\quad\text{and}\quad\partial^{2}\operatorname{End}(\mathscr{C})= \operatorname{End}(\partial^{2}\mathscr{C}).\] The two components of the isomorphisms \[\varphi_{0}^{\prime}\colon\mathfrak{C}(\mathfrak{A}_{2})\to\mathfrak{A}_{3} \times\mathfrak{A}_{1}\quad\text{and}\quad\varphi_{0}^{\prime\prime}\colon \mathfrak{C}(\mathfrak{A}_{3})\to\mathfrak{A}_{1}\times\mathfrak{A}_{2}\] are determined as lifts (in the sense of Definition 1.20) of Lie algebra homomorphisms \[\theta_{+}^{\prime}\colon\mathfrak{pgo}(\mathfrak{A}_{2})\to\mathfrak{pgo}( \mathfrak{A}_{3})\quad\text{and}\quad\theta_{+}^{\prime\prime}\colon \mathfrak{pgo}(\mathfrak{A}_{3})\to\mathfrak{pgo}(\mathfrak{A}_{1}),\] \[\theta_{-}^{\prime}\colon\mathfrak{pgo}(\mathfrak{A}_{2})\to\mathfrak{pgo}( \mathfrak{A}_{1})\quad\text{and}\quad\theta_{-}^{\prime\prime}\colon \mathfrak{pgo}(\mathfrak{A}_{3})\to\mathfrak{pgo}(\mathfrak{A}_{2}).\] Our main result is the following: **Theorem 3.14**.: _Let \(\mathscr{T}=(\mathfrak{A}_{1},\mathfrak{A}_{2},\mathfrak{A}_{3},\varphi_{0})\) be a trialitarian triple over an arbitrary field \(F\), and let_ \[\theta_{+}\colon\mathfrak{pgo}(\mathfrak{A}_{1})\to\mathfrak{pgo}(\mathfrak{ A}_{2})\quad\text{and}\quad\theta_{-}\colon\mathfrak{pgo}(\mathfrak{A}_{1})\to \mathfrak{pgo}(\mathfrak{A}_{3})\] _denote the Lie algebra homomorphisms induced (as per Definition 1.20) by the two components of \(\varphi_{0}\):_ \[\varphi_{+}\colon\mathfrak{C}_{+}(\mathfrak{A}_{1})\to\mathfrak{A}_{2}\quad \text{and}\quad\varphi_{-}\colon\mathfrak{C}_{-}(\mathfrak{A}_{1})\to \mathfrak{A}_{3}.\] _The homomorphisms \(\theta_{+}\) and \(\theta_{-}\) are isomorphisms, and the following Lie algebra homomorphisms are liftable:_ \[\theta_{+}^{\prime}=\theta_{-}\circ\theta_{+}^{-1},\qquad\theta_{-}^{\prime}= \theta_{+}^{-1},\qquad\theta_{+}^{\prime\prime}=\theta_{-}^{-1},\qquad\theta_ {-}^{\prime\prime}=\theta_{+}\circ\theta_{-}^{-1}.\] _Moreover, \(\theta_{+}^{\prime}\) and \(\theta_{-}^{\prime}\) on one side, and \(\theta_{+}^{\prime\prime}\) and \(\theta_{-}^{\prime\prime}\) on the other side, are of opposite signs (see Definition 1.20)._ Corollary 1.22 shows that we may extend scalars to a Galois extension of \(F\) in order to show that a Lie algebra homomorphism is liftable. We may thus reduce to split trialitarian triples, i.e., triples of the form \(\operatorname{End}(\mathscr{C})\). We investigate this case first. The proof of Theorem 3.14 will quickly follow after (3.18). Let \(\mathscr{C}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,*_{3}\big{)}\) be a composition of quadratic spaces of dimension \(8\) over an arbitrary field \(F\). Recall from Proposition 2.20 that the Lie algebra \(\mathfrak{pgo}(\mathscr{C})\) can be described as a subalgebra of \(\mathfrak{pgo}(q_{1})\times\mathfrak{pgo}(q_{2})\times\mathfrak{pgo}(q_{3})\). 
Let \[\pi_{1}\colon\mathfrak{pgo}(\mathscr{C})\to\mathfrak{pgo}(q_{1}),\qquad\pi_{2}\colon\mathfrak{pgo}(\mathscr{C})\to\mathfrak{pgo}(q_{2}),\qquad\pi_{3}\colon\mathfrak{pgo}(\mathscr{C})\to\mathfrak{pgo}(q_{3})\] denote the projections on the three components, and let \[\theta_{+}\colon\mathfrak{pgo}(q_{1})\to\mathfrak{pgo}(q_{2})\qquad\text{and}\qquad\theta_{-}\colon\mathfrak{pgo}(q_{1})\to\mathfrak{pgo}(q_{3})\] be the Lie algebra homomorphisms induced by the two components of \(C_{0}(\alpha)\), \[C_{+}(\alpha)\colon C_{+}(V_{1},q_{1})\to\operatorname{End}(V_{2})\qquad\text{and}\qquad C_{-}(\alpha)\colon C_{-}(V_{1},q_{1})\to\operatorname{End}(V_{3}).\] **Lemma 3.15**.: _The following diagram, where all the maps are isomorphisms, is commutative:_ (3.15) Proof.: First, observe that \(\pi_{1}\) is the differential of the morphism \(\pi_{\operatorname{End}(\mathscr{C})}\) under the identification \(\mathbf{PGO}(\mathscr{C})=\mathbf{PGO}\big{(}\operatorname{End}(\mathscr{C})\big{)}\) of Proposition 3.10. The morphism \(\pi_{\operatorname{End}(\mathscr{C})}\) is an isomorphism by Theorem 3.4, hence \(\pi_{1}\) is an isomorphism. Similarly, \(\pi_{2}\) is the differential of the isomorphism obtained by the composition \[\mathbf{PGO}(\mathscr{C})\xrightarrow{\partial}\mathbf{PGO}(\partial\mathscr{C})\xrightarrow{\pi_{\operatorname{End}(\partial\mathscr{C})}}\mathbf{PGO}^{+}(q_{2}),\] hence \(\pi_{2}\) is an isomorphism. Likewise, \(\pi_{3}\) is an isomorphism. Now, recall from (1.23) that \(\theta_{+}\) and \(\theta_{-}\) are defined by the following commutative diagrams, where \(\overline{C_{+}(\alpha)}\) and \(\overline{C_{-}(\alpha)}\) are obtained by composing \(C_{+}(\alpha)\) and \(C_{-}(\alpha)\) with the canonical homomorphisms \(\mathfrak{go}(q_{2})\to\mathfrak{pgo}(q_{2})\) or \(\mathfrak{go}(q_{3})\to\mathfrak{pgo}(q_{3})\): Therefore, \[\overline{C_{+}(\alpha)}=\theta_{+}\circ\dot{\chi}^{\prime}\qquad\text{and}\qquad\overline{C_{-}(\alpha)}=\theta_{-}\circ\dot{\chi}^{\prime}. \tag{3.16}\] Next, define a Lie algebra homomorphism \[\Psi_{\mathscr{C}}\colon\boldsymbol{\omega}(q_{1})\to\mathfrak{pgo}(\mathscr{C})\] by composing the differential \(\dot{\psi}_{\operatorname{End}(\mathscr{C})}\colon\boldsymbol{\omega}(q_{1})\to\mathfrak{go}(\mathscr{C})\) of the morphism \(\psi_{\operatorname{End}(\mathscr{C})}\colon\boldsymbol{\Omega}(q_{1})\to\mathbf{GO}(\mathscr{C})\) of §3.2 with the canonical map \(\mathfrak{go}(\mathscr{C})\to\mathfrak{pgo}(\mathscr{C})\).
Explicitly, \[\Psi_{\mathscr{C}}(\xi)=(\dot{\chi}_{0}(\xi)+F,\;C_{+}(\alpha)(\xi)+F,\;C_{-} (\alpha)(\xi)+F)\qquad\text{for $\xi\in\boldsymbol{\omega}(q_{1})$},\] or, since Proposition 1.15 shows that \(\dot{\chi}^{\prime}(\xi)=\dot{\chi}_{0}(\xi)+F\), \[\Psi_{\mathscr{C}}(\xi)=\big{(}\dot{\chi}^{\prime}(\xi),\;\overline{C_{+}( \alpha)}(\xi),\;\overline{C_{-}(\alpha)}(\xi)\big{)}\qquad\text{for $\xi\in \boldsymbol{\omega}(q_{1})$}.\] It follows from the definitions that the following diagrams are commutative: Therefore, \[\overline{C_{+}(\alpha)}=\pi_{2}\circ\Psi_{\mathscr{C}},\qquad\dot{\chi}^{\prime}= \pi_{1}\circ\Psi_{\mathscr{C}},\qquad\overline{C_{-}(\alpha)}=\pi_{3}\circ\Psi _{\mathscr{C}}.\] Substituting in (3.16) yields \[\pi_{2}\circ\Psi_{\mathscr{C}}=\theta_{+}\circ\pi_{1}\circ\Psi_{\mathscr{C}} \qquad\text{and}\qquad\theta_{-}\circ\pi_{1}\circ\Psi_{\mathscr{C}}=\pi_{3} \circ\Psi_{\mathscr{C}}.\] We know from Proposition 1.15 that \(\dot{\chi}^{\prime}\) is surjective, hence \(\Psi_{\mathscr{C}}\) also is surjective since \(\pi_{1}\) is an isomorphism. Therefore, the last displayed equations yield \(\pi_{2}=\theta_{+}\circ\pi_{1}\) and \(\pi_{3}=\theta_{-}\circ\pi_{1}\), proving the commutativity of diagram (3.15). Bijectivity of \(\theta_{+}\) and \(\theta_{-}\) follows, since \(\theta_{+}=\pi_{2}\circ\pi_{1}^{-1}\) and \(\theta_{-}=\pi_{3}\circ\pi_{1}^{-1}\). We next apply Lemma 3.15 to the derived compositions \(\partial\mathscr{C}\) and \(\partial^{2}\mathscr{C}\). Recall from (2.20) and (2.21) the trialitarian triples \[\operatorname{End}(\partial\mathscr{C})=\big{(}(\operatorname{End}(V_{2}), \sigma_{b_{2}},\mathfrak{f}_{q_{2}}),\,(\operatorname{End}(V_{3}),\sigma_{b_{3 }},\mathfrak{f}_{q_{3}}),\,(\operatorname{End}(V_{1}),\sigma_{b_{1}},\mathfrak{ f}_{q_{1}}),\,C_{0}(\alpha^{\prime})\big{)}\] and \[\operatorname{End}(\partial^{2}\mathscr{C})=\big{(}(\operatorname{End}(V_{3}),\sigma_{b_{3}},\mathfrak{f}_{q_{3}}),\,(\operatorname{End}(V_{1}),\sigma_{b_ {1}},\mathfrak{f}_{q_{1}}),\,(\operatorname{End}(V_{2}),\sigma_{b_{2}}, \mathfrak{f}_{q_{2}}),\,C_{0}(\alpha^{\prime\prime})\big{)}.\] Let \[\theta_{+}^{\prime}\colon\mathfrak{pgo}(q_{2})\to\mathfrak{pgo}(q_{3})\qquad \text{and}\qquad\theta_{-}^{\prime}\colon\mathfrak{pgo}(q_{2})\to\mathfrak{ pgo}(q_{1})\] be the Lie algebra isomorphisms induced by \(C_{+}(\alpha^{\prime})\) and \(C_{-}(\alpha^{\prime})\) respectively, and \[\theta_{+}^{\prime\prime}\colon\mathfrak{pgo}(q_{3})\to\mathfrak{pgo}(q_{1}) \qquad\text{and}\qquad\theta_{-}^{\prime\prime}\colon\mathfrak{pgo}(q_{3})\to \mathfrak{pgo}(q_{2})\] those induced by \(C_{+}(\alpha^{\prime\prime})\) and \(C_{-}(\alpha^{\prime\prime})\). Let also \[\pi_{1}^{\prime}\colon\mathfrak{pgo}(\partial\mathscr{C})\to\mathfrak{pgo}(q _{2}),\qquad\pi_{2}^{\prime}\colon\mathfrak{pgo}(\partial\mathscr{C})\to \mathfrak{pgo}(q_{3}),\qquad\pi_{3}^{\prime}\colon\mathfrak{pgo}(\partial \mathscr{C})\to\mathfrak{pgo}(q_{1})\] and \[\pi_{1}^{\prime\prime}\colon\mathfrak{pgo}(\partial^{2}\mathscr{C})\to \mathfrak{pgo}(q_{3}),\qquad\pi_{2}^{\prime\prime}\colon\mathfrak{pgo}( \partial^{2}\mathscr{C})\to\mathfrak{pgo}(q_{1}),\qquad\pi_{3}^{\prime\prime} \colon\mathfrak{pgo}(\partial^{2}\mathscr{C})\to\mathfrak{pgo}(q_{2})\] be the projections on the various components of \(\mathfrak{pgo}(\partial\mathscr{C})\) and \(\mathfrak{pgo}(\partial^{2}\mathscr{C})\). 
Lemma 3.15 yields \[\theta_{+}^{\prime}=\pi_{2}^{\prime}\circ{\pi_{1}^{\prime}}^{-1},\qquad\theta_{-}^{\prime}=\pi_{3}^{\prime}\circ{\pi_{1}^{\prime}}^{-1},\qquad\theta_{+}^{\prime\prime}=\pi_{2}^{\prime\prime}\circ{\pi_{1}^{\prime\prime}}^{-1},\qquad\theta_{-}^{\prime\prime}=\pi_{3}^{\prime\prime}\circ{\pi_{1}^{\prime\prime}}^{-1}. \tag{3.17}\] **Proposition 3.16**.: _The following equations hold:_ \[\theta_{+}^{\prime}=\theta_{-}\circ\theta_{+}^{-1},\qquad\theta_{-}^{\prime}=\theta_{+}^{-1},\qquad\theta_{+}^{\prime\prime}=\theta_{-}^{-1},\qquad\theta_{-}^{\prime\prime}=\theta_{+}\circ\theta_{-}^{-1}.\] Proof.: The switch maps \(\partial\) fit in commutative diagrams giving \(\pi_{1}^{\prime}=\pi_{2}\circ\partial^{2}\), \(\pi_{2}^{\prime}=\pi_{3}\circ\partial^{2}\), \(\pi_{3}^{\prime}=\pi_{1}\circ\partial^{2}\) and \(\pi_{1}^{\prime\prime}=\pi_{3}\circ\partial\), \(\pi_{2}^{\prime\prime}=\pi_{1}\circ\partial\), \(\pi_{3}^{\prime\prime}=\pi_{2}\circ\partial\). Substituting these relations in (3.17) yields \[\theta_{+}^{\prime}=\pi_{3}\circ\pi_{2}^{-1},\qquad\theta_{-}^{\prime}=\pi_{1}\circ\pi_{2}^{-1},\qquad\theta_{+}^{\prime\prime}=\pi_{1}\circ\pi_{3}^{-1},\qquad\theta_{-}^{\prime\prime}=\pi_{2}\circ\pi_{3}^{-1}.\] The proposition follows by Lemma 3.15. The maps \(\theta_{\pm}\), \(\theta_{\pm}^{\prime}\), \(\theta_{\pm}^{\prime\prime}\) thus fit in the following commutative diagram, in which all the maps are isomorphisms: (3.18) Proof of Theorem 3.14.: Corollary 1.22 shows that it suffices to prove the claim after a Galois scalar extension that splits the trialitarian triple \(\mathscr{T}\). We may thus assume that \(\mathscr{T}=\operatorname{End}(\mathscr{C})\) for some composition \(\mathscr{C}\) of quadratic spaces of dimension \(8\). Then Proposition 3.16 shows that \(\theta_{-}\circ\theta_{+}^{-1}\) and \(\theta_{+}^{-1}\) (resp. \(\theta_{-}^{-1}\) and \(\theta_{+}\circ\theta_{-}^{-1}\)) are the Lie algebra homomorphisms induced by the isomorphisms \(C_{+}(\alpha^{\prime})\) and \(C_{-}(\alpha^{\prime})\) (resp. \(C_{+}(\alpha^{\prime\prime})\) and \(C_{-}(\alpha^{\prime\prime})\)) of the trialitarian triple \(\operatorname{End}(\partial\mathscr{C})\) (resp. \(\operatorname{End}(\partial^{2}\mathscr{C})\)), hence they are liftable by definition. Moreover, \(\theta_{-}\circ\theta_{+}^{-1}\) and \(\theta_{+}^{-1}\) (resp. \(\theta_{-}^{-1}\), \(\theta_{+}\circ\theta_{-}^{-1}\)) are of opposite signs, hence the proof is complete. **Definition 3.17**.: Given any trialitarian triple \(\mathcal{T}=(\mathfrak{A}_{1},\,\mathfrak{A}_{2},\,\mathfrak{A}_{3},\,\varphi_{0})\) with Lie algebra isomorphisms \[\theta_{+}\colon\mathfrak{pgo}(\mathfrak{A}_{1})\to\mathfrak{pgo}(\mathfrak{A}_{2})\qquad\text{and}\qquad\theta_{-}\colon\mathfrak{pgo}(\mathfrak{A}_{1})\to\mathfrak{pgo}(\mathfrak{A}_{3})\] induced by the components \(\varphi_{+}\colon\mathfrak{C}(\mathfrak{A}_{1})\to\mathfrak{A}_{2}\) and \(\varphi_{-}\colon\mathfrak{C}(\mathfrak{A}_{1})\to\mathfrak{A}_{3}\) of \(\varphi_{0}\), the pair of opposite Lie algebra isomorphisms \((\theta_{+}^{\prime},\,\theta_{-}^{\prime})=(\theta_{-}\circ\theta_{+}^{-1},\,\theta_{+}^{-1})\) (resp.
\((\theta_{+}^{\prime\prime},\,\theta_{-}^{\prime\prime})=(\theta_{-}^{-1},\, \theta_{+}\circ\theta_{-}^{-1})\)) lifts by Theorem 3.14 to an isomorphism \[\varphi_{0}^{\prime}\colon\mathfrak{C}(\mathfrak{A}_{2})\to\mathfrak{A}_{3} \times\mathfrak{A}_{1}\qquad\qquad(\text{resp. }\varphi_{0}^{\prime\prime}\colon\mathfrak{C}(\mathfrak{A}_{3})\to \mathfrak{A}_{1}\times\mathfrak{A}_{2})\] that defines a trialitarian triple \[\partial\mathcal{T}=(\mathfrak{A}_{2},\,\mathfrak{A}_{3},\,\mathfrak{A}_{1},\, \varphi_{0}^{\prime})\qquad\qquad(\text{resp.}\ \partial^{2}\mathcal{T}=(\mathfrak{A}_{3},\,\mathfrak{A}_{1},\,\mathfrak{A}_{2}, \,\varphi_{0}^{\prime\prime})).\] The trialitarian triples \(\partial\mathcal{T}\) and \(\partial^{2}\mathcal{T}\) are called the _derived trialitarian triples_ of \(\mathcal{T}\). Note that \(\theta_{-}^{\prime}\circ\theta_{+}^{\prime}{}^{-1}=\theta_{+}^{\prime\prime}\) and \(\theta_{+}^{\prime}{}^{-1}=\theta_{-}^{\prime\prime}\), hence \(\partial(\partial\mathcal{T})=\partial^{2}\mathcal{T}\). Similarly, \(\partial^{2}(\partial\mathcal{T})=\mathcal{T}=\partial(\partial^{2}\mathcal{T})\) and \(\partial^{2}(\partial^{2}\mathcal{T})=\partial\mathcal{T}\). From the proof of Theorem 3.14, it is clear that for every composition \(\mathscr{C}\) of quadratic spaces of dimension \(8\), \[\partial\operatorname{End}(\mathscr{C})=\operatorname{End}(\partial\mathscr{C })\qquad\text{and}\qquad\partial^{2}\operatorname{End}(\mathscr{C})= \operatorname{End}(\partial^{2}\mathscr{C}).\] We next establish the functoriality of the \(\partial\) operation. **Proposition 3.18**.: _If \(\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\colon\mathcal{T}\to\widetilde{ \mathcal{T}}\) is an isomorphism of trialitarian triples, then \(\partial\gamma:=(\gamma_{2},\gamma_{3},\gamma_{1})\) is an isomorphism of trialitarian triples \(\partial\mathcal{T}\to\partial\widetilde{\mathcal{T}}\)._ Proof.: Let \((\theta_{+},\theta_{-})\) (resp. \((\widetilde{\theta}_{+},\widetilde{\theta}_{-})\)) be the pair of liftable homomorphisms attached to \(\mathcal{T}\) (resp. \(\widetilde{\mathcal{T}}\)). The hypothesis that \(\gamma\) is an isomorphism means that \[\gamma_{2}\circ\theta_{+}=\widetilde{\theta}_{+}\circ\gamma_{1}\qquad\text{ and}\qquad\gamma_{3}\circ\theta_{-}=\widetilde{\theta}_{-}\circ\gamma_{1}.\] It then follows that \[\gamma_{3}\circ(\theta_{-}\circ\theta_{+}^{-1})=\widetilde{\theta}_{-}\circ \gamma_{1}\circ\theta_{+}^{-1}=(\widetilde{\theta}_{-}\circ\widetilde{\theta} _{+}^{-1})\circ\gamma_{2}\quad\text{and}\quad\gamma_{1}\circ\theta_{+}^{-1}= \widetilde{\theta}_{+}^{-1}\circ\gamma_{2}.\] Since \((\theta_{-}\circ\theta_{+}^{-1},\,\theta_{+}^{-1})\) and \((\widetilde{\theta}_{-}\circ\widetilde{\theta}_{+}^{-1},\,\widetilde{\theta} _{+}^{-1})\) are the pairs of liftable homomorphisms attached to \(\partial\mathcal{T}\) and \(\partial\widetilde{\mathcal{T}}\) respectively, it follows that \((\gamma_{2},\gamma_{3},\gamma_{1})\) is an isomorphism \(\partial\mathcal{T}\to\partial\widetilde{\mathcal{T}}\). 
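As a quick check of the relation \(\partial(\partial\mathcal{T})=\partial^{2}\mathcal{T}\) noted in Definition 3.17, one may substitute the formulas of Theorem 3.14 (an elementary computation, spelled out here only for convenience): \[\theta_{-}^{\prime}\circ{\theta_{+}^{\prime}}^{-1}=\theta_{+}^{-1}\circ\big{(}\theta_{-}\circ\theta_{+}^{-1}\big{)}^{-1}=\theta_{+}^{-1}\circ\theta_{+}\circ\theta_{-}^{-1}=\theta_{-}^{-1}=\theta_{+}^{\prime\prime},\qquad{\theta_{+}^{\prime}}^{-1}=\big{(}\theta_{-}\circ\theta_{+}^{-1}\big{)}^{-1}=\theta_{+}\circ\theta_{-}^{-1}=\theta_{-}^{\prime\prime},\] so the pair of liftable homomorphisms attached to \(\partial(\partial\mathcal{T})\) coincides with the pair attached to \(\partial^{2}\mathcal{T}\); the remaining relations \(\partial^{2}(\partial\mathcal{T})=\mathcal{T}=\partial(\partial^{2}\mathcal{T})\) and \(\partial^{2}(\partial^{2}\mathcal{T})=\partial\mathcal{T}\) follow in the same way.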
For the next corollary, observe that each trialitarian triple \(\mathcal{T}=(\mathfrak{A}_{1},\,\mathfrak{A}_{2},\,\mathfrak{A}_{3},\,\varphi_ {0})\) yields a polarization of \(\mathfrak{A}_{1}\) in the sense of Definition 1.4: the primitive idempotents in the center of \(C(\mathfrak{A}_{1})\) are designated as \(z_{1+}\) and \(z_{1-}\) according to the following convention: \[\varphi_{0}(z_{1+})=(1,0)\qquad\text{and}\qquad\varphi_{0}(z_{1-})=(0,1),\] so that the two components of \(\varphi_{0}\) are \(\varphi_{+}\colon\mathfrak{C}_{+}(\mathfrak{A}_{1})\xrightarrow{\sim} \mathfrak{A}_{2}\) and \(\varphi_{-}\colon\mathfrak{C}_{-}(\mathfrak{A}_{1})\xrightarrow{\sim} \mathfrak{A}_{3}\). Similarly, the maps \(\varphi_{0}^{\prime}\) and \(\varphi_{0}^{\prime\prime}\) of the derived trialitarian triples \(\partial\mathcal{T}\) and \(\partial^{2}\mathcal{T}\) yield polarizations of \(\mathfrak{A}_{2}\) and \(\mathfrak{A}_{3}\) so that \[\varphi_{0}^{\prime}(z_{2+})=(1,0),\quad\varphi_{0}^{\prime}(z_{2-})=(0,1), \qquad\varphi_{0}^{\prime\prime}(z_{3+})=(1,0),\quad\varphi_{0}^{\prime\prime }(z_{3-})=(0,1),\] just as in the case of compositions of quadratic spaces: see Remark 2.10. **Corollary 3.19**.: _Let \(\mathcal{T}=(\mathfrak{A}_{1},\,\mathfrak{A}_{2},\,\mathfrak{A}_{3},\,\varphi_ {0})\) and \(\widetilde{\mathcal{T}}=(\widetilde{\mathfrak{A}}_{1},\,\widetilde{\mathfrak{A} }_{2},\,\widetilde{\mathfrak{A}}_{3},\,\widetilde{\varphi}_{0})\) be trialitarian triples. There are canonical one-to-one correspondences between the following sets:_ 1. _isomorphisms of trialitarian triples_ \(\mathcal{T}\to\widetilde{\mathcal{T}}\)_;_ 2. _isomorphisms of algebras with quadratic pair_ \(\mathfrak{A}_{1}\to\widetilde{\mathfrak{A}}_{1}\) _preserving the polarizations induced by_ \(\mathcal{T}\) _and_ \(\widetilde{\mathcal{T}}\)_;_ 3. _isomorphisms of algebras with quadratic pair_ \(\mathfrak{A}_{2}\to\widetilde{\mathfrak{A}}_{2}\) _preserving the polarizations induced by_ \(\partial\mathcal{T}\) _and_ \(\partial\widetilde{\mathcal{T}}\)_;_ 4. _isomorphisms of algebras with quadratic pair_ \(\mathfrak{A}_{3}\to\widetilde{\mathfrak{A}}_{3}\) _preserving the polarizations induced by_ \(\partial^{2}\mathcal{T}\) _and_ \(\partial^{2}\widetilde{\mathcal{T}}\) Proof.: By definition, an isomorphism \(\gamma\colon\mathcal{T}\to\widetilde{\mathcal{T}}\) is a triple \((\gamma_{1},\gamma_{2},\gamma_{3})\) where each \(\gamma_{i}\) is an isomorphism \(\mathfrak{A}_{i}\to\widetilde{\mathfrak{A}}_{i}\) and the following square commutes: (3.19) Commutativity of this square implies that \(\gamma_{1}\) preserves the polarizations of \(\mathfrak{A}_{1}\) and \(\widetilde{\mathfrak{A}}_{1}\) induced by \(\mathcal{T}\) and \(\widetilde{\mathcal{T}}\). Conversely, if \(\gamma_{1}\colon\mathfrak{A}_{1}\to\widetilde{\mathfrak{A}}_{1}\) is an isomorphism of algebras with quadratic pair preserving polarizations, then there are isomorphisms \(\gamma_{2}\colon\mathfrak{A}_{2}\to\widetilde{\mathfrak{A}}_{2}\) and \(\gamma_{3}\colon\mathfrak{A}_{3}\to\widetilde{\mathfrak{A}}_{3}\) uniquely determined by the condition that the square (3.19) commute. The triple \((\gamma_{1},\gamma_{2},\gamma_{3})\) is then an isomorphism \(\mathcal{T}\to\widetilde{\mathcal{T}}\). Thus, the sets described in (i) and (ii) are in bijection under the map carrying \(\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\) to \(\gamma_{1}\). 
Similarly, the set in (iii) is in bijection with the set of isomorphisms \(\partial\mathcal{T}\to\partial\widetilde{\mathcal{T}}\), hence, by Proposition 3.18, with the set of isomorphisms \(\mathcal{T}\to\widetilde{\mathcal{T}}\): to each isomorphism \(\gamma\colon\mathcal{T}\to\widetilde{\mathcal{T}}\) corresponds the second component \(\gamma_{2}\colon\mathfrak{A}_{2}\to\widetilde{\mathfrak{A}}_{2}\). Likewise, mapping \(\gamma\) to \(\gamma_{3}\) defines a one-to-one correspondence between (i) and (iv). In the particular case where \(\widetilde{\mathcal{T}}=\mathcal{T}\), Corollary 3.19 yields isomorphisms between the group of automorphisms of \(\mathcal{T}\) and the groups of polarization-preserving automorphisms of \(\mathfrak{A}_{1}\), \(\mathfrak{A}_{2}\) and \(\mathfrak{A}_{3}\), which are \(\operatorname{PGO}^{+}(\mathfrak{A}_{1})\), \(\operatorname{PGO}^{+}(\mathfrak{A}_{2})\) and \(\operatorname{PGO}^{+}(\mathfrak{A}_{3})\). We discuss this case in detail in the next subsection. ### Trialitarian isomorphisms Throughout this subsection, we fix a trialitarian triple \(\mathcal{T}=(\mathfrak{A}_{1},\,\mathfrak{A}_{2},\,\mathfrak{A}_{3},\,\varphi_{ 0})\). We show how to attach to \(\mathcal{T}\) canonical isomorphisms, which we call _trialitarian isomorphisms_: \[\operatorname{\mathbf{Spin}}(\mathfrak{A}_{1})\simeq\operatorname{\mathbf{ Spin}}(\mathfrak{A}_{2})\simeq\operatorname{\mathbf{Spin}}(\mathfrak{A}_{3})\qquad\text{and} \qquad\operatorname{\mathbf{PGO}}^{+}(\mathfrak{A}_{1})\simeq\operatorname{ \mathbf{PGO}}^{+}(\mathfrak{A}_{2})\simeq\operatorname{\mathbf{PGO}}^{+}( \mathfrak{A}_{3}).\] Proposition 3.18 shows that the switch map \(\partial\) yields an isomorphism \[\partial\colon\operatorname{\mathbf{GO}}(\mathcal{T})\to\operatorname{ \mathbf{GO}}(\partial\mathcal{T}).\] This isomorphism maps \(\operatorname{\mathbf{H}}(\mathcal{T})\) to \(\operatorname{\mathbf{H}}(\partial\mathcal{T})\), hence it induces a switch isomorphism \(\partial\colon\operatorname{\mathbf{PGO}}(\mathcal{T})\to\operatorname{ \mathbf{PGO}}(\partial\mathcal{T})\). The following proposition shows that \(\partial\) also maps \(\operatorname{\mathbf{O}}(\mathcal{T})\) to \(\operatorname{\mathbf{O}}(\partial\mathcal{T})\): **Proposition 3.20**.: _The following diagram is commutative:_ Proof.: When \(\mathcal{T}=\operatorname{End}(\mathcal{G})\) for some composition \(\mathcal{G}\) of quadratic spaces of dimension \(8\), then \(\operatorname{\mathbf{GO}}(\mathcal{T})=\operatorname{\mathbf{GO}}(\mathcal{ G})\) and \(\operatorname{\mathbf{GO}}(\partial\mathcal{T})=\operatorname{\mathbf{GO}}( \partial\mathcal{G})\) by Proposition 3.10, and commutativity of the diagram is clear from (3.13). Commutativity for an arbitrary trialitarian triple follows by scalar extension to a splitting field. Recall from Corollary 3.13 the diagram (3.14) relating the groups \(\operatorname{\mathbf{Spin}}(\mathfrak{A}_{1})\) and \(\operatorname{\mathbf{PGO}}^{+}(\mathfrak{A}_{1})\) to \(\operatorname{\mathbf{O}}(\mathcal{T})\) and \(\operatorname{\mathbf{PGO}}(\mathcal{T})\). Substituting \(\partial\mathcal{T}\) for \(\mathcal{T}\) in that diagram, we obtain another commutative diagram, which involves \(\mathfrak{A}_{2}\) and \(\partial\mathscr{T}\) instead of \(\mathfrak{A}_{1}\) and \(\mathscr{T}\). 
We may connect this new diagram to (3.14) by means of the shift map to obtain the following commutative diagram with exact rows, where all the vertical maps are isomorphisms and \(Z_{2}\) denotes the center of \(C(\mathfrak{A}_{2})\): Define \[\Sigma_{\mathscr{T}}:\,\mathbf{Spin}(\mathfrak{A}_{1})\xrightarrow{\sim} \mathbf{Spin}(\mathfrak{A}_{2})\quad\text{and}\quad\Theta_{\mathscr{T}}:\, \mathbf{PGO}^{+}(\mathfrak{A}_{1})\xrightarrow{\sim}\mathbf{PGO}^{+}( \mathfrak{A}_{2})\] by composing the vertical isomorphisms: \(\Sigma_{\mathscr{T}}=\psi_{\partial\mathscr{T}}^{-1}\circ\partial\circ\psi_{ \mathscr{T}}\) and \(\Theta_{\mathscr{T}}=\pi_{\partial\mathscr{T}}\circ\partial\circ\pi_{\mathscr{T }}^{-1}\). Forgetting the two central lines of the last diagram, we obtain a commutative diagram with exact rows: (3.20) Applying the construction above to \(\partial\mathcal{T}\) and \(\partial^{2}\mathcal{T}\) instead of \(\mathcal{T}\), we obtain isomorphisms \(\Sigma_{\partial\mathcal{T}}\,\colon\,\mathbf{Spin}(\mathfrak{A}_{2})\xrightarrow{ \sim}\mathbf{Spin}(\mathfrak{A}_{3})\) and \(\Sigma_{\partial^{2}\mathcal{T}}\,\colon\,\mathbf{Spin}(\mathfrak{A}_{3}) \xrightarrow{\sim}\mathbf{Spin}(\mathfrak{A}_{1})\) such that \(\Sigma_{\partial^{2}\mathcal{T}}\circ\Sigma_{\partial\mathcal{T}}\circ\Sigma_ {\mathcal{T}}=\mathrm{Id}\), which make the following diagram with exact rows commute: Letting \(\pi_{i}\,\colon\,\mathbf{PGO}(\mathcal{T})\to\mathbf{PGO}^{+}(\mathfrak{A}_{i})\) denote the projection on the \(i\)-th component, we have \[\pi_{1}=\pi_{\mathcal{T}},\qquad\pi_{2}=\pi_{\partial\mathcal{T}}\circ\partial,\qquad\pi_{3}=\pi_{\partial^{2}\mathcal{T}}\circ\partial^{2},\] hence the following diagram, in which all the maps are isomorphisms, is commutative: (3.21) Similarly, defining \(\psi_{i}\,\colon\,\mathbf{Spin}(\mathfrak{A}_{i})\to\mathbf{O}(\mathcal{T})\) for \(i=1,\,2,\,3\) by \[\psi_{1}=\psi_{\mathcal{T}},\qquad\psi_{2}=\partial^{-1}\circ\psi_{\partial \mathcal{T}},\qquad\psi_{3}=\partial^{-2}\circ\psi_{\partial^{2}\mathcal{T}},\] we obtain the following commutative diagram similar to (3.21), where all the maps are isomorphisms: Restricting to the central subgroups, we also obtain a commutative diagram of isomorphisms: The action of the trialitarian isomorphism \(\Sigma_{\mathscr{T}}\) on \(R_{Z_{1}/F}(\boldsymbol{\mu}_{2})\) is easy to determine from the definition of \(\psi_{\mathscr{T}}\): **Proposition 3.21**.: _For \(i=1\), \(2\), \(3\), let \(z_{i+}\) and \(z_{i-}\) denote the primitive idempotents of \(Z_{i}\) (according to the polarization). Then for every commutative \(F\)-algebra \(R\) and \(a_{+}\), \(a_{-}\in R\) such that \(a_{+}^{2}=a_{-}^{2}=1\),_ \[\Sigma_{\mathscr{T}}(a_{+}z_{1+}+a_{-}z_{1-}) =a_{-}z_{2+}+a_{+}a_{-}z_{2-}, \tag{3.23}\] \[\Sigma_{\partial\mathscr{T}}(a_{+}z_{2+}+a_{-}z_{2-}) =a_{-}z_{3+}+a_{+}a_{-}z_{3-},\] (3.24) \[\Sigma_{\partial^{2}\mathscr{T}}(a_{+}z_{3+}+a_{-}z_{3-}) =a_{-}z_{1+}+a_{+}a_{-}z_{1-}. \tag{3.22}\] Proof.: From (3.7) it follows that \(\psi_{\mathscr{T}}(a_{+}z_{1+}+a_{-}z_{1-})=(a_{+}a_{-},\,a_{+},\,a_{-})\), hence \[\partial\circ\psi_{\mathscr{T}}(a_{+}z_{1+}+a_{-}z_{1-})=(a_{+},\,a_{-},\,a_{ +}a_{-})=\psi_{\partial\mathscr{T}}(a_{-}z_{2+}+a_{+}a_{-}z_{2-}).\] Equation (3.22) follows, since \(\Sigma_{\mathscr{T}}=\psi_{\partial\mathscr{T}}^{-1}\circ\partial\circ\psi_{ \mathscr{T}}\). Equations (3.23) and (3.24) are proved similarly. 
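To illustrate formulas (3.22), (3.23) and (3.24), and as a consistency check (on the central subgroups only) of the identity \(\Sigma_{\partial^{2}\mathscr{T}}\circ\Sigma_{\partial\mathscr{T}}\circ\Sigma_{\mathscr{T}}=\mathrm{Id}\) recalled above, note that composing the three formulas gives, for \(a_{+}^{2}=a_{-}^{2}=1\), \[a_{+}z_{1+}+a_{-}z_{1-}\;\longmapsto\;a_{-}z_{2+}+a_{+}a_{-}z_{2-}\;\longmapsto\;a_{+}a_{-}z_{3+}+a_{+}z_{3-}\;\longmapsto\;a_{+}z_{1+}+a_{-}z_{1-},\] using \(a_{-}\cdot a_{+}a_{-}=a_{+}\) and \(a_{+}a_{-}\cdot a_{+}=a_{-}\); the composite is indeed the identity on \(R_{Z_{1}/F}(\boldsymbol{\mu}_{2})\).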
Proposition 3.21 shows that \(\Sigma_{\mathscr{T}}\) does _not_ map the subgroup \(\boldsymbol{\mu}_{2}\) of \(R_{Z_{1}/F}(\boldsymbol{\mu}_{2})\) to the subgroup \(\boldsymbol{\mu}_{2}\) of \(R_{Z_{2}/F}(\boldsymbol{\mu}_{2})\); this is a characteristic feature of trialitarian isomorphisms. ### Compositions of \(8\)-dimensional quadratic spaces Let \(\mathscr{G}=\big{(}(V_{1},q_{1}),\,(V_{2},q_{2}),\,(V_{3},q_{3}),\,\ast_{3} \big{)}\) and \(\widetilde{\mathscr{G}}=\big{(}(\widetilde{V}_{1},\widetilde{q}_{1}),\,( \widetilde{V}_{2},\widetilde{q}_{2}),\,(\widetilde{V}_{3},\widetilde{q}_{3}), \,\widetilde{\ast}_{3}\,\big{)}\) denote compositions of quadratic spaces of dimension \(8\) over \(F\) throughout this subsection. Recall from Remark 2.10 that \(\mathscr{G}\) and \(\widetilde{\mathscr{G}}\) induce polarizations of \((V_{1},q_{1})\) and \((\widetilde{V}_{1},\widetilde{q}_{1})\) respectively. Our goal is to establish criteria for the existence of a similitude or an isomorphism between \(\mathscr{G}\) and \(\widetilde{\mathscr{G}}\). **Theorem 3.22**.: _For every similitude \(g_{1}\colon(V_{1},q_{1})\to(\widetilde{V}_{1},\widetilde{q}_{1})\) preserving the polarizations induced by \(\mathscr{C}\) and \(\widetilde{\mathscr{C}}\), there exist similitudes \(g_{2}\colon(V_{2},q_{2})\to(\widetilde{V}_{2},\widetilde{q}_{2})\) and \(g_{3}\colon(V_{3},q_{3})\to(\widetilde{V}_{3},\widetilde{q}_{3})\) such that the triple \((g_{1},g_{2},g_{3})\) is a similitude \(\mathscr{C}\to\widetilde{\mathscr{C}}\). The similitudes \(g_{2}\) and \(g_{3}\) are uniquely determined up to a scalar factor._ Proof.: The similitude \(g_{1}\) defines an isomorphism of algebras with quadratic pair \[\operatorname{Int}(g_{1})\colon(\operatorname{End}V_{1},\sigma_{b_{1}}, \mathfrak{f}_{q_{1}})\to(\operatorname{End}\widetilde{V}_{1},\sigma_{ \widetilde{b}_{1}},\mathfrak{f}_{\widetilde{q}_{1}}),\] see Proposition 1.2. Since \(g_{1}\) preserves the polarizations of \((V_{1},q_{1})\) and \((\widetilde{V}_{1},\widetilde{q}_{1})\), it follows that \(\operatorname{Int}(g_{1})\) preserves the polarizations of \((\operatorname{End}V_{1},\sigma_{b_{1}},\mathfrak{f}_{q_{1}})\) and \((\operatorname{End}\widetilde{V}_{1},\sigma_{\widetilde{b}_{1}},\mathfrak{f} _{\widetilde{q}_{1}})\) induced by the trialitarian triples \(\operatorname{End}(\mathscr{C})\) and \(\operatorname{End}(\widetilde{\mathscr{C}})\) respectively, hence Corollary 3.19 yields uniquely determined isomorphisms \[\gamma_{2}\colon(\operatorname{End}V_{2},\sigma_{b_{2}},\mathfrak{f}_{q_{2}}) \to(\operatorname{End}\widetilde{V}_{2},\sigma_{\widetilde{b}_{2}},\mathfrak{ f}_{\widetilde{q}_{2}})\quad\text{and}\quad\gamma_{3}\colon(\operatorname{End}V_{3}, \sigma_{b_{3}},\mathfrak{f}_{q_{3}})\to(\operatorname{End}\widetilde{V}_{3}, \sigma_{\widetilde{b}_{3}},\mathfrak{f}_{\widetilde{q}_{3}})\] such that \((\operatorname{Int}(g_{1}),\gamma_{2},\gamma_{3})\) is an isomorphism \(\operatorname{End}(\mathscr{C})\to\operatorname{End}(\widetilde{\mathscr{C}})\). Proposition 1.2 shows that there exist similitudes \(g_{2}\colon(V_{2},q_{2})\to(\widetilde{V}_{2},\widetilde{q}_{2})\) and \(g_{3}\colon(V_{3},q_{3})\to(\widetilde{V}_{3},\widetilde{q}_{3})\), uniquely determined up to a scalar factor, such that \(\gamma_{2}=\operatorname{Int}(g_{2})\) and \(\gamma_{3}=\operatorname{Int}(g_{3})\). It follows from Proposition 3.1 that \((g_{1},g_{2},g_{3})\) is a similitude \(\mathscr{C}\to\widetilde{\mathscr{C}}\). 
**Corollary 3.23**.: _Let \(n_{\mathscr{C}}\) and \(n_{\widetilde{\mathscr{C}}}\) denote the \(3\)-fold Pfister forms associated to \(\mathscr{C}\) and \(\widetilde{\mathscr{C}}\) by Proposition 2.9. The following conditions are equivalent:_ * \(\mathscr{C}\) _is similar to_ \(\widetilde{\mathscr{C}}\)_;_ * \(n_{\mathscr{C}}\simeq n_{\widetilde{\mathscr{C}}}\)_._ Proof.: Recall that \(q_{1}\simeq\langle\lambda_{1}\rangle n_{\mathscr{C}}\) and \(\widetilde{q}_{1}\simeq(\widetilde{\lambda}_{1})n_{\widetilde{\mathscr{C}}}\) for some \(\lambda_{1}\), \(\widetilde{\lambda}_{1}\in F^{\times}\). If \(\mathscr{C}\) is similar to \(\widetilde{\mathscr{C}}\), then \(q_{1}\) is similar to \(\widetilde{q}_{1}\), hence \(n_{\mathscr{C}}\simeq n_{\widetilde{\mathscr{C}}}\) because similar Pfister forms are isometric. Conversely, if \(n_{\mathscr{C}}\simeq n_{\widetilde{\mathscr{C}}}\), then there is a similitude \(g_{1}\colon(V_{1},q_{1})\to(\widetilde{V}_{1},\widetilde{q}_{1})\). Composing \(g_{1}\) with an improper isometry if necessary, we may assume \(g_{1}\) preserves the polarizations of \((V_{1},q_{1})\) and \((\widetilde{V}_{1},\widetilde{q}_{1})\). Then Theorem 3.22 yields a similitude \(\mathscr{C}\to\widetilde{\mathscr{C}}\). In the particular case where \(\widetilde{\mathscr{C}}=\mathscr{C}\), Theorem 3.22 is a direct generalization of the Principle of Triality discussed by Springer-Veldkamp [19, Th. 3.2.1], as follows: **Corollary 3.24**.: _For every proper similitude \(g_{1}\in\operatorname{GO}^{+}(q_{1})\), there exist similitudes \(g_{2}\in\operatorname{GO}(q_{2})\) and \(g_{3}\in\operatorname{GO}(q_{3})\) such that_ \[g_{1}(x_{2}*_{1}x_{3})=g_{2}(x_{2})*_{1}g_{3}(x_{3})\qquad\text{for all $x_{2}\in V_{2}$ and $x_{3}\in V_{3}$.}\] Proof.: Theorem 3.22 yields similitudes \(g^{\prime}_{2}\in\operatorname{GO}(q_{2})\) and \(g^{\prime}_{3}\in\operatorname{GO}(q_{3})\) such that \((g_{1},g^{\prime}_{2},g^{\prime}_{3})\in\operatorname{GO}(\mathscr{C})(F)\). Letting \(\lambda_{\mathscr{C}}(g_{1},g^{\prime}_{2},g^{\prime}_{3})=(\lambda_{1}, \lambda_{2},\lambda_{3})\), we have by Proposition 2.13 \[\lambda_{1}\,g_{1}(x_{2}*_{1}x_{3})=g^{\prime}_{2}(x_{2})*_{1}g^{\prime}_{3}(x _{3})\qquad\text{for all $x_{2}\in V_{2}$ and $x_{3}\in V_{3}$.}\] Then \(g_{2}=\lambda_{1}^{-1}g^{\prime}_{2}\) and \(g_{3}=g^{\prime}_{3}\) satisfy the requirement. In the special case where \(*_{1}\) is the multiplication in an octonion algebra, Corollary 3.24 is (the main part of) [19, Th. 3.2.1]. 
Corollary 3.24 also has a "local" version: **Corollary 3.25**.: _For every \(g_{1}\in\mathfrak{go}(q_{1})\), there exist \(g_{2}\in\mathfrak{go}(q_{2})\) and \(g_{3}\in\mathfrak{go}(q_{3})\) such that_ \[g_{1}(x_{2}*_{1}x_{3})=g_{2}(x_{2})*_{1}x_{3}+x_{2}*_{1}g_{3}(x_{3})+\dot{\mu}(g_ {1})\,x_{2}*_{1}x_{3}\qquad\text{for all $x_{2}\in V_{2}$ and $x_{3}\in V_{3}$.}\] Proof.: Lemma 3.15 shows that projection on the first component \(\pi_{1}\colon\mathfrak{pgo}(\mathscr{E})\to\mathfrak{pgo}(q_{1})\) is bijective, hence there exist \(g_{2}^{\prime}\in\mathfrak{go}(q_{2})\) and \(g_{3}^{\prime}\in\mathfrak{go}(q_{3})\) such that \((g_{1}+F,g_{2}^{\prime}+F,g_{3}^{\prime}+F)\) lies in \(\mathfrak{pgo}(\mathscr{E})\), which means that there exists \(\lambda_{3}\in F\) such that \[g_{3}^{\prime}(x_{1}*_{3}x_{2})=g_{1}(x_{1})*_{3}x_{2}+x_{1}*_{3}g_{2}^{\prime }(x_{2})-\lambda_{3}\,x_{1}*_{3}x_{2}\qquad\text{for all $x_{1}\in V_{1}$ and $x_{2}\in V_{2}$.}\] By Proposition 2.16, there also exists \(\lambda_{1}\in F\) such that \[g_{1}(x_{2}*_{1}x_{3})=g_{2}^{\prime}(x_{2})*_{1}x_{3}+x_{2}*_{1}g_{3}^{\prime }(x_{3})-\lambda_{1}\,x_{2}*_{1}x_{3}\qquad\text{for all $x_{2}\in V_{2}$ and $x_{3}\in V_{3}$.}\] Then \(g_{2}=g_{2}^{\prime}-\lambda_{1}\) and \(g_{3}=g_{3}^{\prime}+\dot{\mu}(g_{1})\) satisfy the required condition. Specializing \(*_{1}\) to be the multiplication in an octonion algebra (resp. the multiplication in a symmetric composition algebra of dimension 8) yields Elduque's Principle of Local Triality [8, Th. 3.2] (resp. [8, Th. 5.2]). By contrast with similitudes in Theorem 3.22, isometries \((V_{1},q_{1})\to(\widetilde{V}_{1},\widetilde{q}_{1})\) do not necessarily extend to isomorphisms \(\mathscr{E}\to\widetilde{\mathscr{E}}\) since \((V_{i},q_{i})\) may not be isometric to \((\widetilde{V}_{i},\widetilde{q}_{i})\) for \(i=2\), \(3\). Nevertheless, we will obtain in Theorem 3.29 below an isomorphism criterion for compositions of quadratic spaces by using the following construction of similitudes. For \(\mathscr{E}\) as above, define a new composition of quadratic spaces \(\mathscr{C}^{\prime}\) as follows: \[\mathscr{C}^{\prime}=\big{(}(V_{2},q_{2}),\,(V_{1},q_{1}),\,(V_{3},q_{3}),\, \ast_{3}^{\prime}\big{)}\] where \[x_{2}\ast_{3}^{\prime}x_{1}=x_{1}*_{3}x_{2}\qquad\text{for $x_{2}\in V_{2}$ and $x_{1}\in V_{1}$.}\] To every anisotropic vector \(u_{3}\in V_{3}\), we associate the map \[\rho_{u_{3}}(x_{3})=u_{3}q_{3}(u_{3})^{-1}b_{3}(u_{3},x_{3})-x_{3}\qquad\text{ for $x_{3}\in V_{3}$.} \tag{3.25}\] Computation shows that \(\rho_{u_{3}}\) is an isometry fixing \(u_{3}\). 
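For convenience, here is one way to carry out that computation, using only the bilinear form \(b_{3}\) appearing in (3.25). Write \(\lambda=q_{3}(u_{3})^{-1}b_{3}(u_{3},x_{3})\), so that \(\rho_{u_{3}}(x_{3})=\lambda u_{3}-x_{3}\); then \[q_{3}\big{(}\rho_{u_{3}}(x_{3})\big{)}=\lambda^{2}q_{3}(u_{3})+q_{3}(x_{3})-\lambda b_{3}(u_{3},x_{3})=q_{3}(x_{3}),\] since \(\lambda^{2}q_{3}(u_{3})=\lambda b_{3}(u_{3},x_{3})\). Moreover, \(b_{3}(u_{3},u_{3})=2q_{3}(u_{3})\) yields \(\rho_{u_{3}}(u_{3})=2u_{3}-u_{3}=u_{3}\). Thus \(\rho_{u_{3}}\) is an isometry fixing \(u_{3}\), in every characteristic.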
**Proposition 3.26**.: _For every anisotropic vector \(u_{3}\in V_{3}\), the triples \((\ell_{u_{3}},r_{u_{3}},\rho_{u_{3}})\colon\mathscr{C}\to\mathscr{C}^{\prime}\) and \((r_{u_{3}},\ell_{u_{3}},\rho_{u_{3}})\colon\mathscr{C}^{\prime}\to\mathscr{C}\) are similitudes with composition multiplier \(\big{(}1,1,q_{3}(u_{3})\big{)}\)._ Proof.: Since \(\mu(r_{u_{3}})=\mu(\ell_{u_{3}})=q_{3}(u_{3})\), to prove \((\ell_{u_{3}},r_{u_{3}},\rho_{u_{3}})\) is a similitude with composition multiplier \(\big{(}1,1,q_{3}(u_{3})\big{)}\) it suffices to show \[q_{3}(u_{3})\rho_{u_{3}}(x_{1}*_{3}x_{2})=\ell_{u_{3}}(x_{1})*_{3}^{\prime}r_{ u_{3}}(x_{2})\qquad\text{for all $x_{1}\in V_{1}$, $x_{2}\in V_{2}$.}\] Likewise, to prove \((r_{u_{3}},\ell_{u_{3}},\rho_{u_{3}})\) is a similitude with composition multiplier \(\big{(}1,1,q_{3}(u_{3})\big{)}\) it suffices to show \[q_{3}(u_{3})\rho_{u_{3}}(x_{2}*_{3}^{\prime}x_{1})=r_{u_{3}}(x_{2})*_{3}\ell_ {u_{3}}(x_{1})\qquad\text{for all $x_{2}\in V_{2}$, $x_{1}\in V_{1}$.}\] Each of these equations amounts to \[u_{3}b_{3}(u_{3},x_{1}*_{3}x_{2})-(x_{1}*_{3}x_{2})q_{3}(u_{3})=(x_{2}*_{1}u_{ 3})*_{3}(u_{3}*_{2}x_{1}).\] By (2.18), we may rewrite the right side as \[(x_{2}*_{1}u_{3})*_{3}(u_{3}*_{2}x_{1})=u_{3}b_{1}(x_{2}*_{1}u_{3},x_{1})-x_{ 1}*_{3}\big{(}u_{3}*_{2}(x_{2}*_{1}u_{3})\big{)}.\] Since \(b_{1}(x_{2}*_{1}u_{3},x_{1})=b_{3}(u_{3},x_{1}*_{3}x_{2})\) by (2.1), and \(u_{3}*_{2}(x_{2}*_{1}u_{3})=x_{2}q_{3}(u_{3})\) by (2.11), the proposition follows. Proposition 3.26 allows us to describe the group \[G(\mathscr{C})=\lambda_{\mathscr{C}}\big{(}\mathbf{GO}(\mathscr{C})(F)\big{)} \subset F^{\times}\times F^{\times}\times F^{\times}\] of composition multipliers of auto-similitudes of \(\mathscr{C}\). In the next corollary, we write \(G(n_{\mathscr{C}})\) for the group of multipliers of similitudes of the Pfister form associated to \(\mathscr{C}\), which is also the set of represented values of this form because Pfister forms are round (see [9, Cor. 9.9]). **Corollary 3.27**.: \(G(\mathscr{C})=\{(\lambda_{1},\lambda_{2},\lambda_{3})\in F^{\times}\times F^ {\times}\times F^{\times}\mid\lambda_{1}\equiv\lambda_{2}\equiv\lambda_{3} \bmod G(n_{\mathscr{C}})\}\)_._ Proof.: If \((\lambda_{1},\lambda_{2},\lambda_{3})=\lambda_{\mathscr{C}}(g_{1},g_{2},g_{3},\lambda_{3})\) for some \((g_{1},g_{2},g_{3},\lambda_{3})\in\mathbf{GO}(\mathscr{C})(F)\), then by definition of \(\lambda_{\mathscr{C}}\) (see (2.27)) \[\lambda_{1}=\mu(g_{2})\lambda_{3}^{-1}\qquad\text{and}\qquad\lambda_{2}=\mu(g _{1})\lambda_{3}^{-1}.\] Since \(q_{1}\) and \(q_{2}\) are multiples of \(n_{\mathscr{C}}\), multipliers of similitudes of \(q_{1}\) and of \(q_{2}\) lie in \(G(n_{\mathscr{C}})\), hence \(\lambda_{1}\lambda_{3}\in G(n_{\mathscr{C}})\) and \(\lambda_{2}\lambda_{3}\in G(n_{\mathscr{C}})\). Therefore, \(\lambda_{1}\equiv\lambda_{2}\equiv\lambda_{3}\bmod G(n_{\mathscr{C}})\). For the converse, we first establish: _Claim: \((1,1,\nu)\in G(\mathscr{C})\) for every \(\nu\in G(n_{\mathscr{C}})\)._ To see this, pick any anisotropic vector \(u_{3}\in V_{3}\), and let \(v_{3}\in V_{3}\) be the image of \(u_{3}q_{3}(u_{3})^{-1}\) under any similitude of \((V_{3},q_{3})\) with multiplier \(\nu\), so that \(q_{3}(v_{3})=\nu q_{3}(u_{3})^{-1}\). By Proposition 3.26, the composition of maps \((r_{v_{3}},\ell_{v_{3}},\rho_{v_{3}})\circ(\ell_{u_{3}},r_{u_{3}},\rho_{u_{3}})\) is an auto-similitude of \(\mathscr{C}\) with multiplier \((1,1,q_{3}(v_{3}))(1,1,q_{3}(u_{3}))=(1,1,\nu)\). This proves the claim. 
Since for the derived composition \(\partial\mathscr{C}\) we have \(n_{\partial\mathscr{C}}\simeq n_{\mathscr{C}}\), it follows that \((1,1,\nu)\in G(\partial\mathscr{C})\) for every \(\nu\in G(n_{\mathscr{C}})\), hence \((\nu,1,1)\in G(\mathscr{C})\) for every \(\nu\in G(n_{\mathscr{C}})\). Now, suppose \((\lambda_{1},\lambda_{2},\lambda_{3})\in F^{\times}\times F^{\times}\times F ^{\times}\) is such that \(\lambda_{1}\lambda_{2}^{-1}\), \(\lambda_{2}^{-1}\lambda_{3}\in G(n_{\mathscr{C}})\). The previous observations show \[(\lambda_{1}\lambda_{2}^{-1},1,1),\;(1,1,\lambda_{2}^{-1}\lambda_{3})\in G( \mathscr{C}).\] Moreover, \((\lambda_{2}\operatorname{Id}_{V_{1}},\lambda_{2}\operatorname{Id}_{V_{2}}, \lambda_{2}\operatorname{Id}_{V_{3}},\lambda_{2})\in\mathbf{GO}(\mathscr{C})(F)\) is a similitude with composition multiplier \((\lambda_{2},\lambda_{2},\lambda_{2})\). Therefore, the group \(G(\mathscr{C})\) also contains the product \[(\lambda_{1}\lambda_{2}^{-1},1,1)\cdot(1,1,\lambda_{2}^{-1}\lambda_{3})\cdot( \lambda_{2},\lambda_{2},\lambda_{2})=(\lambda_{1},\lambda_{2},\lambda_{3}).\qed\] **Remark 3.28**.: Proposition 3.26 and Corollary 3.27 also hold, with the same proof, for compositions of quadratic spaces of dimension \(2\) or \(4\). **Theorem 3.29**.: _The compositions \(\mathscr{C}\) and \(\widetilde{\mathscr{C}}\) are isomorphic if and only if \((V_{i},q_{i})\) and \((\widetilde{V}_{i},\widetilde{q}_{i})\) are isometric for \(i=1\), \(2\) and \(3\)._ Proof.: If \(g=(g_{1},g_{2},g_{3})\colon\mathscr{C}\to\widetilde{\mathscr{C}}\) is an isomorphism, then from the relations between the multipliers of \(g_{1}\), \(g_{2}\), \(g_{3}\) and the composition multiplier \(\lambda(g)\) in (2.25) it follows that \(g_{1}\), \(g_{2}\) and \(g_{3}\) are isometries, hence \((V_{i},q_{i})\simeq(\widetilde{V}_{i},\widetilde{q}_{i})\) for all \(i\). For the converse, assume \((V_{i},q_{i})\) is isometric to \((\widetilde{V}_{i},\widetilde{q}_{i})\) for \(i=1\), \(2\), \(3\), and pick an isometry \(g_{1}\colon(V_{1},q_{1})\to(\widetilde{V}_{1},\widetilde{q}_{1})\). Composing it with an improper isometry if needed, we may assume \(g_{1}\) preserves the polarizations induced by \(\mathscr{C}\) and \(\widetilde{\mathscr{C}}\). Theorem 3.22 then yields a similitude \(g=(g_{1},g_{2},g_{3})\colon\mathscr{C}\to\widetilde{\mathscr{C}}\). Let \(\lambda(g)=(\lambda_{1},\lambda_{2},\lambda_{3})\). From the relations (2.25) between \(\lambda(g)\) and the multipliers of \(g_{1}\), \(g_{2}\), \(g_{3}\) it follows that \(\mu(g_{1})=\lambda_{2}\lambda_{3}\), hence \(\lambda_{2}\lambda_{3}=1\) since \(g_{1}\) is an isometry. The triple \((g_{1},\lambda_{2}g_{2},g_{3})\) also is a similitude \(\mathscr{C}\to\widetilde{\mathscr{C}}\), and \[\lambda(g_{1},\lambda_{2}g_{2},g_{3})=\lambda(g)\cdot(\lambda_{2},\lambda_{2}^{-1 },\lambda_{2})=(\lambda_{1}\lambda_{2},1,1)=(\mu(g_{3}),1,1).\] Since \((\widetilde{V}_{3},\widetilde{q}_{3})\simeq(V_{3},q_{3})\), the multiplier \(\mu(g_{3})\) is the multiplier of a similitude of \(q_{3}\), hence also of \(n_{\mathscr{C}}\). Corollary 3.27 then shows that there exists an auto-similitude \((g_{1}^{\prime},g_{2}^{\prime},g_{3}^{\prime})\) of \(\mathscr{C}\) such that \(\lambda(g_{1}^{\prime},g_{2}^{\prime},g_{3}^{\prime})=(\mu(g_{3})^{-1},1,1)\). Then \((g_{1}\circ g_{1}^{\prime},\lambda_{2}g_{2}\circ g_{2}^{\prime},g_{3}\circ g_ {3}^{\prime})\) is a similitude \(\mathscr{C}\to\mathscr{C}\) with composition multiplier \((1,1,1)\), i.e., it is an isomorphism. 
Corollary 3.23 and Theorem 3.29 can be given a cohomological interpretation: over a separable closure of \(F\), Corollary 2.31 (or Theorem 3.29) shows that all the compositions of quadratic spaces of dimension \(8\) are isomorphic. Therefore, if \(\mathscr{C}_{0}\) is a composition of hyperbolic quadratic spaces of dimension \(8\) over \(F\) (such as the composition associated to the split para-octonion algebra), standard arguments of nonabelian Galois cohomology (see for instance [13, SS29]) yield canonical bijections \[H^{1}\big{(}F,\mathbf{O}(\mathscr{C}_{0})\big{)}\quad\longleftrightarrow \quad\boxed{\begin{array}{l}\text{isomorphism classes of compositions of}\\ \text{quadratic spaces of dimension $n$ over $F$}\end{array}}\] and \[H^{1}\big{(}F,\mathbf{GO}(\mathscr{C}_{0})\big{)}\quad\longleftrightarrow \quad\boxed{\begin{array}{l}\text{similarity classes of compositions of}\\ \text{quadratic spaces of dimension $n$ over $F$}\end{array}}\] because \(\mathbf{O}(\mathscr{C}_{0})\) (resp. \(\mathbf{GO}(\mathscr{C}_{0})\)) is the group of automorphisms (resp. auto-similitudes) of \(\mathscr{C}_{0}\). Since by Proposition 3.10 the group \(\mathbf{PGO}(\mathscr{C}_{0})\) is the automorphism group of the trialitarian triple \(\operatorname{End}(\mathscr{C}_{0})\), there is an additional canonical bijection \[H^{1}\big{(}F,\mathbf{PGO}(\mathscr{C}_{0})\big{)}\quad\longleftrightarrow \quad\boxed{\begin{array}{l}\text{isomorphism classes of trialitarian triples over $F$}\end{array}}\] Now, Corollary 3.23 yields a bijection between \(H^{1}\big{(}F,\mathbf{GO}(\mathscr{C}_{0})\big{)}\) and the set of isometry classes of \(3\)-fold quadratic Pfister forms. Similarly, Theorem 3.29 yields a bijection between \(H^{1}\big{(}F,\mathbf{O}(\mathscr{C}_{0})\big{)}\) and the set of triples of quadratic forms \((q_{1},q_{2},q_{3})\) up to isometry, subject to the condition that there exists a \(3\)-fold quadratic Pfister form \(n\) such that \(q_{1}\), \(q_{2}\), \(q_{3}\) are similar to \(n\) and the orthogonal sum \(n\perp q_{1}\perp q_{2}\perp q_{3}\) is a \(5\)-fold quadratic Pfister form. This can also be viewed as a description of \(H^{1}(F,\mathbf{Spin}_{8})\) for \(\mathbf{Spin}_{8}\) the spin group of \(8\)-dimensional hyperbolic quadratic forms, because Theorem 3.12 yields a canonical isomorphism \(\mathbf{Spin}_{8}\simeq\mathbf{O}(\mathscr{C}_{0})\). We may use this description to give an interpretation of the mod \(2\) cohomological invariants of \(\mathbf{Spin}_{8}\) determined by Garibaldi in [10, SS18.1] under the hypothesis that \(\operatorname{char}F\neq 2\), as follows: for \(n=3\), \(4\), \(5\), let \(e_{n}\) denote the Elman-Lam cohomological invariant of \(n\)-fold Pfister forms, defined by \[e_{n}\big{(}\langle 1,-a_{1}\rangle\cdot\ldots\cdot\langle 1,-a_{n}\rangle \big{)}=(a_{1})\cup\ldots\cup(a_{n})\in H^{n}(F,\boldsymbol{\mu}_{2}),\] where \((a_{i})\in H^{1}(F,\boldsymbol{\mu}_{2})\) is the cohomology class corresponding to the square class of \(a_{i}\in F^{\times}\) by Kummer theory, see [9, SS16]. For every triple \((q_{1},q_{2},q_{3})\) as above, the cohomology classes \[e_{3}(n),\quad e_{4}(n\perp q_{1}),\quad e_{4}(n\perp q_{2}),\quad e_{4}(n \perp q_{3}),\quad e_{5}(n\perp q_{1}\perp q_{2}\perp q_{3})\] define cohomological invariants, which distinguish these triples up to isometry. According to [10, SS18.1], these invariants generate the \(H^{*}(F,\mathbb{Z}/2\mathbb{Z})\)-module of mod \(2\) invariants of \(\mathbf{Spin}_{8}\). 
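As a small sanity check of this description (still assuming \(\operatorname{char}F\neq 2\)), consider the trivial class in \(H^{1}(F,\mathbf{Spin}_{8})\), i.e., the split composition \(\mathscr{C}_{0}\): there the forms \(n\), \(n\perp q_{1}\), \(n\perp q_{2}\), \(n\perp q_{3}\) and \(n\perp q_{1}\perp q_{2}\perp q_{3}\) are all hyperbolic, hence \[e_{3}(n)=0,\qquad e_{4}(n\perp q_{i})=0\ \ (i=1,2,3),\qquad e_{5}(n\perp q_{1}\perp q_{2}\perp q_{3})=0,\] so all five invariants vanish on the trivial torsor class.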
Note that these invariants are not independent: since \[(n\perp q_{1})\perp(n\perp q_{2})\perp(n\perp q_{3})=2n\perp(n\perp q_{1} \perp q_{2}\perp q_{3})\] and \(n\perp q_{1}\perp q_{2}\perp q_{3}\) is a \(5\)-fold Pfister form, it follows that \[e_{4}(n\perp q_{1})+e_{4}(n\perp q_{2})+e_{4}(n\perp q_{3})=e_{4}(2n)=(-1)\cup e _{3}(n).\] ### The structure group of \(8\)-dimensional composition algebras Let \(\mathscr{A}=(A,q,\diamond)\) be a composition algebra of dimension \(8\). Recall from Defintion 2.29 the structure group \(\mathbf{Str}(A,\diamond)\), which is the group of autotopies of \((A,\diamond)\). Corollary 2.33 identifies \(\mathbf{Str}(A,\diamond)\) with a subgroup of \(\mathbf{GO}\big{(}\mathscr{C}(\mathscr{A})\big{)}\), for \(\mathscr{C}(\mathscr{A})\) the composition of quadratic spaces associated to \(\mathscr{A}\) as in (2.33). In the trialitarian triple \(\mathscr{S}=\operatorname{End}\big{(}\mathscr{C}(\mathscr{A})\big{)}\) we have \(\mathfrak{A}_{1}=\mathfrak{A}_{2}=\mathfrak{A}_{3}=(\operatorname{End}A, \sigma_{b},\mathfrak{f}_{q})\). Mimicking the construction in SS3.2, we obtain a morphism \(\psi_{\partial^{2}\mathscr{S}}\colon\,\mathbf{\Omega}(\mathfrak{A}_{3}) \to\mathbf{GO}(\partial^{2}\mathscr{C})\) as in (3.6). We use it to define a morphism of algebraic groups \[\psi_{\mathscr{A}}\colon\,\mathbf{\Omega}(q)\to\mathbf{GO}\big{(}\mathscr{C} (\mathscr{A})\big{)}\] by specializing to the case where \(\mathscr{S}=\operatorname{End}\big{(}\mathscr{C}(\mathscr{A})\big{)}\) the map \(\partial\circ\psi_{\partial^{2}\mathscr{S}}\colon\,\mathbf{\Omega}(\mathfrak{A }_{3})\to\mathbf{GO}(\mathscr{S})\), where \(\partial\) is the shift map. Thus, for any commutative \(F\)-algebra \(R\) and \(\xi\in\mathbf{\Omega}(q)(R)\), \[\psi_{\mathscr{A}}(\xi)=\big{(}C_{+}(\alpha^{\prime\prime})(\xi),\,C_{-}( \alpha^{\prime\prime})(\xi),\,\chi_{0}(\xi)\big{)}\] (viewing \(\mathbf{GO}\big{(}\mathscr{C}(\mathscr{A})\big{)}\) as a subgroup of \(\mathbf{GO}(q)\times\mathbf{GO}(q)\times\mathbf{GO}(q)\), as in the proof of Proposition 3.10), where \(C_{\pm}(\alpha^{\prime\prime})\) are the canonical Clifford maps attached to \(\partial^{2}\mathscr{C}(\mathscr{A})\), see (2.21). **Theorem 3.30**.: _The map \(\psi_{\mathscr{A}}\) is an isomorphism \(\mathbf{\Omega}(q)\xrightarrow{\sim}\mathbf{Str}(A,\diamond)\)._ Proof.: The map \(\psi_{\mathscr{A}}\) is injective because \(C(\alpha^{\prime\prime})\) is an isomorphism \(C(A,q)\to\operatorname{End}(A\oplus A)\), and the computation of \(\lambda_{\mathscr{S}}\circ\psi_{\mathscr{S}}\) in (3.11) together with Corollary 2.33 shows that \(\psi_{\mathscr{A}}\) maps \(\mathbf{\Omega}(q)\) to \(\mathbf{Str}(A,\diamond)\). To complete the proof, we show that for any commutative \(F\)-algebra \(R\) the group \(\mathbf{Str}(A,\diamond)(R)\) is the image of \(\mathbf{\Omega}(q)(R)\) under \(\psi_{\mathscr{A}}\). 
Let \((g_{1},g_{2},g_{3})\) be an autotopy of \((A,\diamond)_{R}\), which means that \[g_{3}(x_{1}\diamond x_{2})=g_{1}(x_{1})\diamond g_{2}(x_{2})\qquad\text{for all $x_{1}$, $x_{2}\in A_{R}$.}\] By Proposition 2.13 it follows that for all \(x_{1}\), \(x_{2}\), \(x_{3}\in A_{R}\) \[\mu(g_{2})\,g_{1}(x_{2}\diamond_{1}x_{3})=g_{2}(x_{2})\diamond_{1}g_{3}(x_{3} )\quad\text{and}\quad\mu(g_{1})\,g_{2}(x_{3}\diamond_{2}x_{1})=g_{3}(x_{3}) \diamond_{2}g_{1}(x_{1}).\] Equivalently, \[\mu(g_{2})\,g_{1}\circ r_{x_{3}}=r_{g_{3}(x_{3})}\circ g_{2}\quad\text{and} \quad\mu(g_{1})\,g_{2}\circ\ell_{x_{3}}=\ell_{g_{3}(x_{3})}\circ g_{1},\] which can be reformulated as an equation in \(\operatorname{End}(A\oplus A)\) as follows: \[\begin{pmatrix}0&r_{g_{3}(x_{3})}\\ \ell_{g_{3}(x_{3})}&0\end{pmatrix}=\begin{pmatrix}\mu(g_{2})&0\\ 0&\mu(g_{1})\end{pmatrix}\begin{pmatrix}g_{1}&0\\ 0&g_{2}\end{pmatrix}\begin{pmatrix}0&r_{x_{3}}\\ \ell_{x_{3}}&0\end{pmatrix}\begin{pmatrix}g_{1}^{-1}&0\\ 0&g_{2}^{-1}\end{pmatrix}. \tag{3.26}\] Since \(C(\alpha^{\prime\prime})\) is an isomorphism, there exists \(\xi\in C_{0}(A,q)_{R}\) such that \(C_{0}(\alpha^{\prime\prime})(\xi)=(g_{1},\,g_{2})\). Then \(C_{0}(\alpha^{\prime\prime})\big{(}\underline{\mu}(\xi)\big{)}=\big{(}\mu(g_{1}), \,\mu(g_{2})\big{)}\), and (3.26) yields \[C(\alpha^{\prime\prime})\big{(}g_{3}(x_{3})\big{)}=C(\alpha^{\prime\prime})( \iota(\underline{\mu}(\xi))\xi x_{3}\xi^{-1})\qquad\text{for all $x_{3}\in A_{R}$.}\] Since \(C(\alpha^{\prime\prime})\) is an isomorphism, it follows from Lemma 1.5 that \(\tau_{0}(\xi)x_{3}\xi=\sigma_{b}(g_{3})(x_{3})\) for all \(x_{3}\in A_{R}\), hence \(\xi\in\mathbf{\Omega}(q)(R)\) and \(g_{3}=\chi_{0}(\xi)\). Thus, \((g_{1},g_{2},g_{3})=\psi_{\mathscr{A}}(\xi)\). Recall from Proposition 1.8 the exact sequence \[1\to R_{Z/F}^{1}(\mathbf{G_{m}})\to\mathbf{\Omega}(q)\xrightarrow{\chi_{0}} \mathbf{GO}^{+}(q)\to 1. \tag{3.27}\] Since the discriminant of \(q\) is trivial, we have \(Z\simeq F\times F\), hence \(R_{Z/F}^{1}(\mathbf{G_{m}})\simeq\mathbf{G_{m}}\) and the Galois cohomology exact sequence derived from (3.27) takes the form \[1\to F^{\times}\to\mathbf{\Omega}(q)(F)\to\operatorname{GO}^{+}(q)\to 1.\] Substituting \(\mathbf{Str}(A,\diamond)(F)\) for \(\mathbf{\Omega}(q)(F)\), we recover the exact sequence obtained by Petersson [15, (4.13)] for \(\mathscr{A}\) an octonion algebra.
2302.09131
Nash equilibrium selection by eigenvalue control
People choose their strategies through a trial-and-error learning process in which they gradually discover that some strategies work better than others. The process can be modelled as an evolutionary game dynamics system, which may be controllable. In modern control theory, eigenvalue (pole) assignment is a basic approach to designing a full-state feedback controller, which can influence the outcome of a game. This study shows that, in a game with two Nash equilibria, the long-running strategy distribution can be controlled by pole assignment. We illustrate a theoretical workflow to design and evaluate the controller. To our knowledge, this is the first realisation of the control of equilibrium selection by design in the game dynamics theory paradigm. We hope the controller can be verified in a laboratory human subject game experiment.
Wang Zhijian
2023-02-17T20:46:32Z
http://arxiv.org/abs/2302.09131v1
# Nash equilibrium selection by eigenvalue control ###### Abstract People choose their strategies through a trial-and-error learning process in which they gradually discover that some strategies work better than others. The process can be modelled as an evolutionary game dynamics system, which may be controllable. In modern control theory, eigenvalue (pole) assignment is a basic approach to designing a full-state feedback controller, which can influence the outcome of a game. This study shows that, in a game with two Nash equilibria, the long-running strategy distribution can be controlled by pole assignment. We illustrate a theoretical workflow to design and evaluate the controller. To our knowledge, this is the first realisation of the control of equilibrium selection by design in the game dynamics theory paradigm. We hope the controller can be verified in a laboratory human subject game experiment. ###### Contents * 1 Introduction * 2 Workflow for controller design * 3 An example * 3.1 Step 0: Game and its equilibrium * 3.2 Step 1: The original eigensystem * 3.3 Step 2: Assign control goal * 3.4 Step 3: Controller design * 3.5 Step 4.1 Dynamics analysis results * 3.6 Step 4.2: ABED Simulation * 3.7 Step 5: Evaluate the controller * 3.7.1 Statistical results * 3.7.2 Explanation for the statistical result * 4 Discussion * 4.1 Related works on the control the game dynamics * 4.2 Further directions of the controller design * 5 Appendix * 5.1 Pole assignment * 5.2 Algorithm for the controller design * 5.2.1 Basic * 5.2.2 Construct \(J^{c}\) * 5.2.3 Explain \(B\) * 5.3 Agent-based evolutionary dynamics (ABED) simulation * 5.4 Abbreviations * 5.5 Data and code Introduction MotivationGame theory study the strategy interaction between intelligent species, e.g., human. Its statics paradigm predicts equilibrium, and its dynamics paradigm predicts motion. Equilibrium selection is an open question both in the statics [9] and the dynamics [20]. In this study, using game dynamics paradigm [8, 22], we show how to control game equilibrium selection by exploiting the predictability of the game motion. In another words, instead of statics [9] or existed dynamics [20] consideration, we design a mechanism to control the velocity field to influence the equilibrium selection. The controller is designed by the pole assignment approach in modern control theory [17, 2]. We will provide an overview of the workflow for designing the controller and illustrate it with an example. BackgroundThe research goal is primarily based on the two factors listed below: * **Human strategy behaviour motion can be predicted by the eigensystem of the game dynamics equations.** -- Data from game experiments show that human dynamics behaviour is governed by dynamics equations (for example, replicator dynamics) and its linearization at the Nash equilibrium. Previous results have shown that, even in discrete time and discrete strategy games, the linearization approximation works. Examples come from the 3-strategy game (the rock-paper-scissors game [5, 32, 10]), the 4-strategy one-population games ([42]), the 5-strategy one-population games (Yao 2021 [40]), and 4-strategy two-role zero-sum asymmetry game of the O'Neill 1987 game (O'Neill [15], Wang and Yao 2020 [33]). These findings demonstrate that the game dynamics theory can accurately capture human subject experimental dynamics behaviours based on distribution, cycle, and converge speed measurements. This logic chain can be seen in Figure 2(a). 
The chain begins with the game and dynamics equations to obtain the rest point (equilibrium), then proceeds to the Jacobian \(J^{o}\), and finally to the eigensystem (which includes eigenvalues, eigenvectors, and eigencycles). * **A linear dynamics system is controllable** -- Applied mathematics shows that the invariant manifold concept provides a clear picture for describing dynamic processes. This concept has roots in dynamical systems theory [35], a solid branch of mathematics. Dynamical systems deal with the study of the solutions to the equations of motion of systems that are primarily mechanical in nature. For linear dynamic systems, state-space feedback controller design is a well-studied and widely applied engineering field, namely modern control theory and its applications [7, 14, 2]. Main logicAs the linear dynamics system can be controlled, we turn to equilibrium selection. We show how to design a mechanism for equilibrium selection by using an example of a symmetric 5-strategy one-population game, whose payoff matrix is shown in Table 1. The game has exactly two equilibria (Nash_1 and Nash_2), as shown in Fig. 1. By using the pole assignment approach, we can control the eigenvalues at the first equilibrium, Nash_1. In other words, we use pole assignment to control the stability of Nash_1. \begin{table} \begin{tabular}{c|c c c c c} \hline & x\({}_{1}\) & x\({}_{2}\) & x\({}_{3}\) & x\({}_{4}\) & x\({}_{5}\) \\ \hline x\({}_{1}\) & 0 & 0 & 2 & 0 & -2 \\ x\({}_{2}\) & 2 & 0 & 0 & -2 & 0 \\ x\({}_{3}\) & 0 & 2 & 0 & 2 & -1 \\ x\({}_{4}\) & \(-2\) & 0 & 1 & 0 & 1 \\ x\({}_{5}\) & 0 & \(-2\) & \(-2\) & 1 & 0 \\ \hline \end{tabular} \end{table} Table 1: The game matrix. As a result, the long-run distribution will remain at Nash_1 or move to Nash_2. This is the main logic. In fact, in the view of the game velocity field: (1) In game dynamics theory, dynamics equations (e.g., replicator dynamics) describe the velocity field. (2) The velocity field really exists in human game experiments of the two elementary games of von Neumann (1947) [29], the standard rock-paper-scissors game and the matching pennies game. (3) The velocity field can be controlled by the pole assignment approach. Thus, using pole assignment to influence human dynamics behaviours in a game experiment is logical. OutlineThe main technical point of this work is the workflow for the controller design, which is introduced in Section 2. In Section 3, we practically show an example to realise the controller design and verify the theoretical predictions of the controller by agent-based simulations. In Section 4, we summarise the results and point out the related concepts and further directions. ## 2 Workflow for controller design In this study, our control-by-design (mechanism design) approach comes from the **single-input pole assignment** approach for linear systems in modern control theory, which has been applied to control the dynamics structure [41]. As shown in Figure 2, the workflow includes the following steps: 1. Solve the dynamics equation for the original eigensystem, including the Jacobian \(J^{o}\), eigenvalues \(\lambda^{o}\), and eigenvectors \(v^{o}\); Figure 1: Conceptual figure: In a five-strategy game, the two equilibria (Nash_1 and Nash_2) are located in the two subspaces \(S_{1}(x_{1},x_{2},x_{3})\) and \(S_{2}(x_{4},x_{5})\), respectively. The pole assignment can be designed to make Nash_1 unstable. As a consequence, the long-run trajectory will converge to Nash_2, which means that the equilibrium Nash_2 is selected.
Figure 2: Workflow: 1, \(J^{o}\); 2, PA; 3, \(J^{c}\); 4, DA; 5, Simulation; 6, Evaluation (a: Distribution, b: Cycle, and c: Convergence speed). 2. Assign the desired poles \(\lambda^{c}\); 3. Solve the gain matrix \(K\) for the given \(B\) to obtain the controlled Jacobian \(J^{c}\) from the original Jacobian \(J^{o}\) (see Eq. 6, referring to [41]). 4. This step has two parallel parts: 1. Derive the theoretical dynamics analysis results on the observation set, \(O^{T}\). 2. Conduct agent-based simulations and report the observation set, \(O^{S}\). At this step, it is necessary to clarify the measurements of the observations. 5. Check the controller's evaluation by comparing \(O^{T}\) and \(O^{S}\). That is, answer whether the theoretical expectations are achieved and, meanwhile, whether the constraint conditions are satisfied. We will show how to realise the control practically in an example with a 5-strategy game. ## 3 An example ### Step 0: Game and its equilibrium We employ a five-strategy symmetric game to show how to observe the eigensystem and how to control the dynamics structure in an experiment. The game is a one-population 5\(\times\)5 symmetric game. In the strategy state space, its evolution trajectory is a 5-dimensional trajectory. Table 1 shows the 5-strategy payoff matrix. The game has exactly two Nash equilibria [1]. The first Nash equilibrium, denoted as Nash_1, is \[\rho_{\text{Nash\_1}}=\frac{1}{3}(1,1,1,0,0), \tag{1}\] in which the game falls into a 3-dimensional subspace, appearing as a rock-paper-scissors game, and its expected payoff is 2/3. Intuitively, there should exist persistent endogenous cycles along \(x_{1}\to x_{2}\to x_{3}\to x_{1}...\) when Nash_1 is selected. The second, denoted as Nash_2, is \[\rho_{\text{Nash\_2}}=\frac{1}{2}(0,0,0,1,1), \tag{2}\] in which the game falls into a 2-dimensional subspace, appearing as an anti-coordination game, and its expected payoff is 1/2. Intuitively, there exist no persistent endogenous cycles when Nash_2 is selected. ### Step 1: The original eigensystem To illustrate the dynamics controller, we use the replicator dynamics [22], which is the original velocity vector field of the game, as the original system. Original dynamics systemThe dynamics system can be expressed as follows: \[\dot{x}_{j}=x_{j}\Big{(}U_{j}-\overline{U}\Big{)}, \tag{3}\] in which \(x_{j}\) is the proportion of players using the \(j\)th strategy in the population, and \(\dot{x}_{j}\) is the evolution velocity of this proportion; \(U_{j}\) is the payoff of the \(j\)th strategy; and \(\overline{U}\) is the population's average payoff, which equals \(\sum_{k=1}^{5}x_{k}U_{k}\). This 5-dimensional space has one constraint, \(\sum_{i}x_{i}=1\) with \(x_{i}\geq 0\) (\(i\in\{1,2,...,5\}\)), implying that the system must stay in the strategy simplex at all times. Nash_1 and Nash_2 are the rest points of the original velocity field. Eigen systemThe Jacobian matrix \(J^{o}\) evaluated at Nash_1 is \[J^{o}_{\text{Nash\_1}}=\left[\begin{array}{ccccc}-\frac{4}{3}&-\frac{4}{3}& \frac{2}{3}&\frac{1}{5}&\frac{1}{2}\\ \frac{2}{3}&-\frac{2}{3}&-\frac{4}{3}&-\frac{2}{3}&\frac{4}{3}\\ -\frac{4}{3}&\frac{2}{3}&-\frac{4}{3}&\frac{2}{3}&\frac{4}{3}\\ 0&0&0&-1&0\\ 0&0&0&0&-2\end{array}\right].\] And its eigenvalues \(\lambda^{o}_{\text{Nash\_1}}\) are \[\lambda^{o}_{\text{Nash\_1}}=\left[\begin{array}{ccccc}-\frac{1}{3}+\frac{\sqrt{3}}{3}i&-\frac{1}{3}-\frac{\sqrt{3}}{3}i&-\frac{2}{3}&-1&-2\end{array}\right]. \tag{4}\]
We choose \(\lambda^{o}_{\text{Nash\_1}}\) as the original poles in this study. As all the real parts of the eigenvalues are negative, the system is locally stable at Nash_1. The first two columns of the eigenvector matrix are the original complex eigenvectors, which represent the dynamics structure. The third column represents the equilibrium distribution, because its related left eigenvector is the unit vector \([1\ 1\ 1\ 1\ 1]\), and its related eigenvalue is the opposite of the expected payoff at the equilibrium. In this example, the complex eigenvalues are the object of control-by-design. In the same way, we can calculate the eigensystem at \(\rho_{\text{Nash\_2}}\). As a result, the eigenvalues are \[\lambda^{o}_{\text{Nash\_2}}=\left[\begin{array}{ccccc}-3/2&-3/2&-1/2&-1/2&0 \end{array}\right].\] As all of the eigenvalues are real and non-positive, the system is also locally stable at Nash_2; meanwhile, all the eigenvectors are real, and no cyclic motion is expected around Nash_2. In general, each equilibrium can be an object of control, but in this study we do not try to control Nash_2. ### Step 2: Assign control goal Our goal is to control (by plus or minus a small real number) the real parts of the complex eigenvalues shown in Eq. 4. The assigned poles can be expressed as \[\lambda^{c}=\ \lambda^{o}+b\ [0\ 0\ 0\ 1\ 1] \tag{5}\] Here, \(\lambda^{o}\) are the original poles at Nash_1, and \(\lambda^{c}\) are the controlled (target) poles. That is, for each control goal \(b\), a pair of poles (eigenvalues) of the original system matrix \(J^{o}\) is shifted. We set \(b\) from \(-1\) to \(1\) in steps of \(0.2\), \(b\in[-1,-0.8,-0.6,...,1]\). Such control will change the stability and the attractiveness of the equilibrium Nash_1. As a result, in theory, the equilibrium point Nash_1 will be more attractive when \(b\longrightarrow-1\), and Nash_1 will be more likely to be selected; alternatively, the equilibrium point Nash_1 will be more repelling when \(b\longrightarrow+1\), and as a consequence, Nash_2 will be more likely to be selected. ### Step 3: Controller design The main philosophy of the controller design is an inverse solution procedure, which recasts the controller design task as an optimization problem with constraints [2]. In this study, we set two constraint conditions, equilibrium conservation and payoff conservation. The definition of the controller is shown in Section 5.2.2. Denoting \(\mathbf{x}=[x_{1}\ x_{2}\ x_{3}\ x_{4}\ x_{5}]^{T}\) and referring to the definition of \(J^{c}\), the controlled velocity field is \[\mathbf{\dot{x}} = J^{c}\mathbf{x} \tag{6}\] \[= (J^{o}+BK-\sum_{i=1}^{5}k_{i}x_{i})\mathbf{x}.\] Notice that the relation \(\sum_{i=1}^{5}k_{i}x_{i}^{*}=0\) (where \(x^{*}\) is Nash_1) must hold. This is important because the controller is required to preserve the equilibrium. The theory of optimal control is concerned with operating a dynamic system at minimum cost; in this setting, the cost is zero. Having \(J^{c}\) defined in Eq. 6, we can solve the pole assignment problem, shown in Eq. 5, for the gain matrix \(K\). In this study, we use the pole assignment algorithm [13], implemented as the _place_ function in Matlab, to compute the matrix \(K\) that achieves the desired closed-loop pole locations. The results are shown in Table 2. ### Step 4.1 Dynamics analysis results Now we need to evaluate the dynamic properties of the controlled velocity field.
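For readers who wish to reproduce the pole-assignment step, the short Python sketch below (our illustration, not the authors' code) rebuilds \(J^{o}\) numerically from Table 1 and Eq. (3), forms the target poles of Eq. (5) for one value of \(b\), and uses `scipy.signal.place_poles` as an analogue of Matlab's _place_. The input direction \(B\) and the value of \(b\) are taken from the text; the finite-difference step and the sign convention noted in the comments are our own choices.

```python
import numpy as np
from scipy.signal import place_poles

# Payoff matrix of Table 1.
A = np.array([[ 0,  0,  2,  0, -2],
              [ 2,  0,  0, -2,  0],
              [ 0,  2,  0,  2, -1],
              [-2,  0,  1,  0,  1],
              [ 0, -2, -2,  1,  0]], dtype=float)

def replicator(x):
    """Replicator velocity field, Eq. (3): x_j * (U_j - Ubar)."""
    U = A @ x
    return x * (U - x @ U)

nash_1 = np.array([1.0, 1.0, 1.0, 0.0, 0.0]) / 3.0

# Step 1: Jacobian J^o at Nash_1 by central finite differences.
eps = 1e-6
Jo = np.column_stack([
    (replicator(nash_1 + eps * e) - replicator(nash_1 - eps * e)) / (2.0 * eps)
    for e in np.eye(5)])
lam_o = np.linalg.eigvals(Jo)          # original poles, cf. Eq. (4)

# Step 2: desired poles.  Following Eq. (5) literally, the two real poles
# near -1 and -2 are shifted by the control parameter b.
b = 0.4
lam_c = lam_o.copy()
for target in (-1.0, -2.0):
    k = int(np.argmin(np.abs(lam_o - target)))
    lam_c[k] = lam_o[k] + b

# Step 3: single-input full-state feedback gain.
B = np.array([[0.0], [0.0], [0.0], [1.0], [1.0]])
ctrb = np.hstack([np.linalg.matrix_power(Jo, k) @ B for k in range(5)])
assert np.linalg.matrix_rank(ctrb) == 5, "(Jo, B) must be controllable"

# place_poles (like Matlab's `place`) uses the convention eig(Jo - B K) = lam_c,
# so the gain entering J^c = J^o + B K of Eq. (6) is the negative of this result.
K = -place_poles(Jo, B, lam_c).gain_matrix
```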
Compared to the original system, the controlled system has observable consequences. Due to the design aim, some observations are invariant and some are changed. The verifiable theoretical predictions of the controller are listed as follows. 1. **Distribution.** The controller preserves the equilibria of the original game, while the selected equilibrium depends on the pole assignment parameter \(b\). The theoretical expectation is \[\rho^{T} \rightarrow \mathrm{Nash\_1}=\frac{1}{3}(1,1,1,0,0)\ \ \ \ \ \mathrm{when}\ \ b\rightarrow-1,\] (7) \[\rho^{T} \rightarrow \mathrm{Nash\_2}=\frac{1}{2}(0,0,0,1,1)\ \ \ \ \ \mathrm{when}\ \ b\rightarrow 1.\] (8) Here the superscript \(T\) denotes the theoretical value. This is the prediction on equilibrium selection. In measurement, the prediction of equilibrium selection can be verified by the Euclidean distance (\(d\)) between \(\rho^{T}\) and \(\rho^{E}\), where \(\rho^{E}\) is the strategy proportion vector obtained from the time series of a computer simulation or a human experiment. In detail, \[\bar{\rho}_{i}^{E}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=0}^{T}\rho_{i}( t),\] (9) where \(\rho_{i}(t)\) is the proportion of the \(i\)-th strategy used at time \(t\in[0,T]\). We can measure the time-dependent Euclidean distance \(d(t)\) from Nash_1 and Nash_2, respectively, \[d_{\mathrm{Nash\_1}}(t) = \big{|}\rho(t)-\mathrm{Nash\_1}\big{|}\] (10) \[d_{\mathrm{Nash\_2}}(t) = \big{|}\rho(t)-\mathrm{Nash\_2}\big{|}.\] (11) We will see that \[d_{\mathrm{Nash\_1}}(t)\longrightarrow 0\ \ \ \ \ \ \mathrm{when}\ \ b\rightarrow-1,\] (12) \[d_{\mathrm{Nash\_2}}(t)\longrightarrow 0\ \ \ \ \ \mathrm{when}\ \ b\to 1.\] (13) \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \(b\) & \(k_{1}\) & \(k_{2}\) & \(k_{3}\) & \(k_{4}\) & \(k_{5}\) \\ \hline \(-0.8\) & \(0.5247\) & \(0.9485\) & \(-1.4732\) & \(-1.8335\) & \(0.2335\) \\ \(-0.6\) & \(0.4843\) & \(0.5524\) & \(-1.0368\) & \(-1.3232\) & \(0.1232\) \\ \(-0.4\) & \(0.3834\) & \(0.2623\) & \(-0.6458\) & \(-0.8476\) & \(0.0476\) \\ \(-0.2\) & \(0.2220\) & \(0.0782\) & \(-0.3002\) & \(-0.4065\) & \(0.0065\) \\ \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(0.2\) & \(-0.2825\) & \(0.0277\) & \(0.2548\) & \(0.3719\) & \(0.0281\) \\ \(0.4\) & \(-0.6256\) & \(0.1614\) & \(0.4641\) & \(0.7092\) & \(0.0908\) \\ \(0.6\) & \(-1.0292\) & \(0.4011\) & \(0.6281\) & \(1.0119\) & \(0.1881\) \\ \(0.8\) & \(-1.4933\) & \(0.7467\) & \(0.7467\) & \(1.2800\) & \(0.3200\) \\ \hline \end{tabular} \end{table} Table 2: Controller gain matrix \(K\)'s values when \(B=\big{[}0\ 0\ 0\ 1\ 1\big{]}^{T}\). 2. **Convergence speed.** The convergence speed to the desired equilibrium is affected by the pole assignment parameter \(b\). The theoretical prediction is that, regardless of which equilibrium is selected, convergence will be faster when \(|b|\) increases. In measurement, the convergence speed is characterised by the time \(\tau_{1/2}\) at which \(d(\tau_{1/2})=d^{0}/2\), in which \(d^{0}\) is the Euclidean distance from the fully random initial distribution [1, 1, 1, 1, 1]/5 to the equilibrium (Nash_1 or Nash_2). In detail, \(d^{0}_{\text{Nash\_1}}=0.184\) and \(d^{0}_{\text{Nash\_2}}=0.273\). So \(\tau_{1/2}(b)\) can be obtained from the time series. 3. **Cycle.** In theory, the eigencycles in the 2-d subspaces of the game can be calculated from the complex eigenvectors associated with the eigenvalues shown in Eq. 4, referring to [42].
This can be verified by the time average of the angular momentum measured along the time series, referring to [42], \[\bar{L}_{mn} = \frac{1}{t^{\prime}}\sum_{t=0}^{t^{\prime}}x_{mn}(t)\times x_{mn}( t+1)\] (14) Herein, \(x_{mn}(t)\) is the strategy vector in the \(mn\) (2-d) subspace, and \(t^{\prime}\) is the length of the time series. For a 5-strategy game, the number of distinct 2-d subspaces is 10, so the observation sample is 10 [42, 40, 33]. The equivalence of the theoretical eigencycle and the observed angular momentum \(L\) is proved in [40, 33]. In measurement, the cycle strength \(|\bar{L}|\) is defined as \[|\bar{L}|=\big{(}\sum_{mn}L_{mn}^{2}\big{)}^{1/2}\] (15) over the time series to verify the theoretical expectation of the strength of the cycles. ### Step 4.2: ABED Simulation Agent-based evolutionary dynamics (ABED) simulations (see appendix 5.3) are carried out for various values of the control parameter \(b\). Having the time series data from the ABED simulations, we can evaluate whether the theoretical expectations are supported by the data. ### Step 5: Evaluate the controller #### 3.7.1 Statistical results The three panels associated with the three theoretical predictions are shown, respectively. 1. Figure 3(a) shows the distribution of the long-run averages \((\bar{\rho_{1}},\bar{\rho_{2}},\bar{\rho_{3}},\bar{\rho_{4}},\bar{\rho_{5}})\) as a function of the pole assignment parameter \(b\). As expected, when \(b\) goes from \(0\) to \(-1\), the trend is to select equilibrium Nash_1; alternatively, when \(b\) goes from \(0\) to \(1\), the trend is to select equilibrium Nash_2. 2. Figure 3(b) shows how the convergence time depends on the parameter \(b\). When \(b\rightarrow-1\), the observed \(\tau_{1/2}\to 0\), which means the convergence to Nash_1 is faster. Alternatively, when \(b\to 1\), the observed \(\tau_{1/2}\to 0\) as well, so the convergence to Nash_2 is faster. 3. Figure 3(c) shows the eigencycle set as the parameter \(b\) changes. Each curve presents an observation of \(L_{mn}\) for one of the 10 eigencycles, referring to the pole assignment parameter \(b\) shown in Eq. 5. It is obvious that, when the equilibrium remains at Nash_1, the cycle in the \((x_{1},x_{2},x_{3})\) subspace is significant; alternatively, when the equilibrium shifts to Nash_2, the cycles disappear. Figure 3: (**a**) Distribution of the long-run average \(\rho^{S}(b)\). When \(b\to-1\), the trend is that Nash_1 is selected; alternatively, when \(b\to 1\), Nash_2 is selected. (**b**) Speed of convergence as a function of \(b\). When \(b\to-1\), the convergence to Nash_1 is faster; alternatively, when \(b\to 1\), the convergence to Nash_2 is faster. (**c**) Cycle strength \(L(b)\). #### 3.7.2 Explanation for the statistical result Referring to the statistical results, we can reach the following conclusions: 1. By pole assignment, the equilibrium selection is realised as expected. 2. As an associated phenomenon, the measured convergence speed meets the theoretical expectation well. 3. As an associated phenomenon, whether the measured cycles exist or not meets the theoretical expectation well. Based on these results, we can say that, in a game with multiple equilibria, pole assignment can be applied to influence the equilibrium selection. At the same time, the associated observations related to the equilibrium selection by pole assignment agree well with general dynamics theory.
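The measurements used in this evaluation reduce to a few lines of code. The sketch below (our own; the array layout and names are assumptions, not the authors' implementation) computes the finite-sample long-run average of Eq. (9), the distances of Eqs. (10)-(11), and the subspace angular momenta and cycle strength of Eqs. (14)-(15) from a recorded time series of strategy proportions.

```python
import numpy as np
from itertools import combinations

def observations(X, nash_1, nash_2):
    """Evaluation measurements of Step 5 for a recorded time series.

    X : array of shape (T, 5) with the strategy proportions rho(t).
    nash_1, nash_2 : length-5 equilibrium distributions.
    Returns the long-run average (finite-sample version of Eq. 9), the
    distances to the two equilibria (Eqs. 10-11), and the subspace angular
    momenta and cycle strength (Eqs. 14-15).
    """
    rho_bar = X.mean(axis=0)                               # Eq. (9), finite sample
    d1 = np.linalg.norm(X - nash_1, axis=1)                # Eq. (10)
    d2 = np.linalg.norm(X - nash_2, axis=1)                # Eq. (11)

    L = {}
    for m, n in combinations(range(5), 2):                 # the 10 two-dim subspaces
        x, y = X[:-1, m], X[:-1, n]                        # state at time t
        xp, yp = X[1:, m], X[1:, n]                        # state at time t+1
        L[(m, n)] = np.mean(x * yp - y * xp)               # Eq. (14): x(t) x x(t+1)
    cycle_strength = np.sqrt(sum(v ** 2 for v in L.values()))   # Eq. (15)
    return rho_bar, d1, d2, L, cycle_strength
```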
Importantly, our results suggest that the game dynamics system is not an exception among the dynamical systems that have long been studied and applied in engineering; neither is it an exception from the viewpoint of long-established modern control theory. ## 4 Discussion Equilibrium selection is a critical concern in game theory. This study demonstrates a method of controlling equilibrium selection using a dynamic control approach. Through a straightforward example game and a workflow of state feedback control, the study shows that it is possible to control equilibrium selection and achieve a desired outcome. This is the first instance, to the best of our knowledge, where equilibrium selection has been controlled using the controllability of the velocity field. ### Related works on the control of game dynamics In recent years, there has been extensive research on game-environment interaction, led by [34]. One approach is to incorporate the feedback mechanism from the environment into replicator dynamics, whereby the feedback changes the pay-off structure and influences the evolution of strategies. Several studies have explored this idea, such as [34, 28, 30]. Our method distinguishes itself from the conventional approach of treating game-environment feedback as state-dependent feedback within the game dynamics process. Instead, our feedback controller utilizes pole assignment techniques based on the eigensystem, which is grounded in modern control theory. Essentially, the core of our design strategy is to regulate the velocity field, which in turn governs the selection of equilibrium. ### Further directions of the controller design We have noticed the following two limitations of the approach for equilibrium selection. 1. The general conditions under which the approach can be applied for equilibrium selection are not known. In this study, we have shown only the special game of Table 1 to illustrate the ability of pole assignment to select an equilibrium. In this game, the upper-left 3-by-3 block corresponds to a rock-paper-scissors game and the lower-right 2-by-2 block to an anti-coordination game, which is a special structure. Referring to [37], this question concerns pole assignability in game dynamics. 2. For other game dynamics models, it is not known whether there is a general method to design for equilibrium selection. The algorithm designed for the controller shown in Eq. 19, especially the third term (the financial balance term \(T\)), is specific to the replicator dynamics shown in Eq. (16). It is important to emphasize the need for careful evaluation of the consequences of the control-by-design approach (as demonstrated in Equation 6) when it is used for equilibrium selection. We also hope to see the development of more general solutions in the future.
2310.06013
The Not-So Dramatic Effect of Advective Flows on Gas Accretion
Super-Earths and mini-Neptunes are the most common types of exoplanets discovered, yet the physics of their formation are still debated. Standard core accretion models in gas-rich environment find that typical mini-Neptune mass planets would blow up into Jupiters before the underlying disk gas dissipates away. The injection of entropy from the protoplanetary disk into forming gaseous envelopes has recently been put forward as a mechanism to delay this runaway accretion, specifically at short orbital distances. Here, we reevaluate this line of reasoning by incorporating recycling flows of gas into a numerical one-dimensional thermodynamic model with more realistic equation of state and opacities and the thermal state of the advective flow. At 0.1 AU, we find that advective flows are only able to produce mini-Neptunes if they can penetrate below ~0.25 of the planet's gravitational sphere of influence. Otherwise, the gas-to-core mass ratio (GCR) reaches above ~10% which is too large to explain the measured properties of mini-Neptunes, necessitating other gas-limiting processes such as late-time core assembly. The effect of entropy advection on gas accretion weakens even further beyond 0.1 AU. We present an updated scaling relation between GCR and the penetration depth of the advective flows which varies non-trivially with orbital distances, core masses and dusty vs. dust-free opacity. We further demonstrate how measurements of planet mass distribution beyond ~1 AU using future instruments such as the Nancy Grace Roman Space Telescope could be used to disambiguate between different formation conditions of gas-poor planets.
Vincent Savignac, Eve J. Lee
2023-10-09T18:00:00Z
http://arxiv.org/abs/2310.06013v3
# The Not-So Dramatic Effect of Advective Flows on Gas Accretion ###### Abstract Super-Earths and mini-Neptunes are the most common types of exoplanets discovered, yet the physical scenarios behind their formation are still debated. Standard core accretion models in gas-rich environment find that typical mini-Neptune mass planets would blow up into Jupiters before the underlying disk gas dissipates away. The injection of entropy from the protoplanetary disk into forming gaseous envelopes has recently been put forward as a mechanism to delay this runaway accretion, specifically at short orbital distances. Here, we reevaluate this line of reasoning by incorporating recycling flows of gas into a numerical one-dimensional thermodynamic model with more realistic equation of state and opacities and the thermal state of the advective flow. At 0.1 AU, unless these advective flows can penetrate below \(\sim\)0.2 of the planet's gravitational sphere of influence, the gas-to-core mass ratio (GCR) stays above \(\sim\)10% before the nebular disk dissipates which is still too large to explain the measured properties of mini-Neptunes, necessitating other gas-limiting processes such as late-time core assembly. The effect of entropy advection on gas accretion weakens even further at wider orbital separations. We present an updated scaling relation between GCR and the penetration depth of the advective flows which varies non-trivially with orbital distances, core masses and dusty vs. dust-free opacity. We further demonstrate how measurements of planet mass distribution beyond \(\sim\)1 AU can be used to disambiguate between different formation conditions of gas-poor planets. 0000-0002-8880-7880]Vincent Saviganc 0000-0002-4883-0880]Eve J. Lee ## 1 Introduction Super-Earths and mini-Neptunes dominate the discovered exoplanetary populations, with \(\sim\)30-50% of all Sun-like stars harboring at least one of these planets within orbital periods of \(\sim\)300 days (e.g., Howard et al., 2010; Batalha et al., 2013; Petigura et al., 2013; Dong and Zhu, 2013; Fressin et al., 2013; Rowe et al., 2014; Burke et al., 2015; Zhu et al., 2018). The mini-Neptunes in particular have measured masses of \(\sim\)2-20 \(M_{\oplus}\)(Weiss and Marcy, 2014; Wu and Lithwick, 2013) and radii of \(\sim\)1.8-4\(R_{\oplus}\), which imply a thin H-He atmosphere stop a rocky core with the envelope mass fraction ranging from \(\sim\)1 % to 10% (Rogers and Seager, 2010; Lopez and Fortney, 2014; Wolfgang and Lopez, 2015). Classical theories of core accretion expected such planets (especially the more massive ones \(\gtrsim\)10\(M_{\oplus}\)) to have undergone a runaway gas accretion and blow up into gas giants (e.g., Mizuno, 1980; Stevenson, 1982; Pollack et al., 1996; Ikoma et al., 2000), prompting a need for a revision to such a theory to explain the existence and prevalence of mini-Neptunes. One solution is to consider the late-time assembly of planetary cores which delays the onset of gas accretion (Lee et al., 2014; Lee and Chiang, 2016). By comparing the empirically determined orbit-crossing timescale from Zhou et al. (2007) with the eccentricity damping timescale from gas dynamical friction, Lee and Chiang (2016) argued that the nebular gas needs to be depleted by about four orders of magnitude with respect to a solar nebula for the protocores to merge. By coupling a direct N-body simulation with gas accretion, Dawson et al. 
(2016) demonstrated that the core mergers can begin from the gas depletion factor of 1000, and Choksi and Chiang (2020) identified the peaks in the observed orbital period ratio of multi-planetary systems just wide of mean motion resonance can be explained by eccentricity damping and short-scale migration in heavily gas-depleted environment, by 3-5 orders of magnitude with respect to solar nebula, in agreement with previous literature. Such gas-depleted environment is consistent with the very late stage of disk evolution and so the limited time and the lower nebular density prevents the runaway accretion and ensures the accumulation of \(\sim\)1-10% by mass envelope (Lee and Chiang, 2016). An alternative mechanism to delay runaway is to consider hydrodynamic effects such as planetary rotation (Zhong and Yu, 2021) and the injection of entropy from the outer disk inside the envelope by "atmospheric recycling" (Ormel et al., 2015; see also Fung et al., 2015). Three-dimensional (3-D) hydrodynamic models report the advective flows can penetrate deep into the envelope which can slow down the cooling of the planet as these flows are expected to bring in the (higher) disk entropy into the deep envelopes and therefore effect the formation of mini-Neptunes instead of Jupiters (Lambrechts and Lega, 2017; Popovas et al., 2018; Bethune and Rafikov, 2019; Moldenhauer et al., 2021). Ali-Dib et al. (2020) accounted for this atmospheric recycling in one-dimensional (1-D) semi-analytical thermal calculations by modelling the upper advective region as an adiabat since they argue that the advection timescale is short compared to the cooling timescale of the envelope. They found that at short orbital distances, the effect is strong enough to significantly stall accretion which could explain why mini-Neptunes do not undergo runaway and remain small. In their comparison between 1-D semi-analytical thermal calculations and 3-D global radiative simulations that include realistic opacities and equation of state, Zhu et al. (2021) found the upper advective region to be _not_ an adiabat. They further found that the thermal state of the envelope should not be significantly altered by recycling at least at 5 AU which is where they focused their simulations. As high entropy flows from the protoplanetary disk try to penetrate a low entropy region of the envelope, buoyancy forces prevent the flows from penetrating deep in the atmosphere, as is commonly found in more realistic non-isothermal simulations (Kurokawa and Tanigawa, 2018). Zhu et al. (2021) note however that it may still be the case that such advective flows could potentially inhibit runaway accretion at short orbital distances (\(\sim\)0.1 AU). In this work, we revisit the role of recycling at short orbital distances by accounting for the outer hydrodynamic flow in 1D thermal calculations. Although our approach is similar to the one proposed by Ali-Dib et al. (2020), the main difference is that we do not treat the recycling flows as an adiabatic process. Instead, we favor a gradual transition between the disk and the atmosphere following the suggestion of Zhu et al. (2021) while also using realistic equation of state and opacities. We ultimately seek to quantify the effect of entropy advection on its ability to slow down gas accretion and whether it can successfully produce mini-Neptunes at short orbital periods, even in gas-rich environments. The paper is organized as follows. 
We outline the construction of envelope profiles and their time evolution in Section 2. Results are presented in Section 3 along with an updated semi-analytic scaling relationship between the envelope mass fraction, time, and the depth of the advective flow. Section 4 discusses the importance of using realistic opacity and equation of state in thermal evolution of planetary envelopes, summarizes how mini-Neptunes avoid runaway in the context of our results, and presents the critical core mass that separates gas-poor from gas-rich planets over a range of orbital distances and avenues for identifying likely formation conditions of gas-poor planets using future instruments such as the Nancy Grace Roman Space Telescope. Finally, we summarize and conclude in Section 5. ## 2 Time-Dependent Model Atmospheres We follow the spherically symmetric model of Lee et al. (2014) to simulate the formation of an atmosphere around a rocky core while also incorporating atmospheric recycling into the calculations. We first build a series of "hydrostatic snapshots" of the envelope, each corresponding to different gas-to-core mass ratio GCR \(\equiv M_{\rm gas}/M_{\rm core}\). These snapshots are threaded together by computing the time it takes for the envelope to cool from one snapshot to the next. ### Hydrostatic snapshots For each value of GCR, we solve the standard stellar structure equations: \[\frac{dM(<r)}{dr}=4\pi r^{2}\rho \tag{1}\] \[\frac{dP}{dr}=-\frac{GM(<r)}{r^{2}}\rho-\frac{GM_{\star}r}{a^{3}}\rho \tag{2}\] \[\frac{dT}{dr}=\frac{T}{P}\frac{dP}{dr}\nabla \tag{3}\] for the density \(\rho\), the pressure \(P\), the temperature \(T\), and the enclosed mass \(M(<r)\equiv M\) as functions of the radius \(r\). Here, \(G\) is the gravitational constant, \(M_{\star}\) is the mass of the central star which we fix to the solar mass, and \(a\) is the orbital distance from the star. The term on the far right of Equation (2) is inserted to account for the central star's gravitational field as suggested by Zhu et al. (2022). The effect of this correction is small as we expect \(M/r^{3}\gg M_{\star}/a^{3}\) within the atmosphere. The dimensionless temperature gradient \(\nabla\equiv d\) ln \(T/d\) ln \(P\) depends on whether the energy transport is dominated by radiation or convection. Convection initially dominates throughout the envelope, but as the gas cools and more mass is accreted, radiation zones develop in the outer regions of the envelope. From radiative diffusion, \[\nabla_{\rm rad}=-\frac{3\kappa P}{64\pi GM\sigma T^{4}}L \tag{4}\] where \(\kappa\) is the opacity, \(L\) is the internal luminosity of the envelope, and \(\sigma\) is the Stefan-Boltzmann constant. When the energy transport is dominated by convection, \[\nabla_{\rm ad}=-\frac{\partial{\rm log}S}{\partial{\rm log}P}\Big{|}_{T} \Big{(}\frac{\partial{\rm log}S}{\partial{\rm log}T}\Big{|}_{P}\Big{)}^{-1} \tag{5}\] where \(S\) is the specific entropy of the gas. We relate the state variables (\(P\), \(T\), \(\rho\) and \(S\)) in the atmosphere with the equation of state (EOS) computed in Lee et al. (2014) which includes H (molecular, atomic, and ionized), He, and metallic species in solar elemental abundances (Grevesse & Noels, 1993). We fix mass fractions of X=0.7 for H, Y=0.28 for He and Z=0.02 for metals. As we will discuss more in Section 4, adopting this realistic equation of state instead of a fixed adiabatic index affects critically the rate of gas accretion. 
Assuming an environment where the heavy elements are homogeneously distributed, the envelope is unstable to convection when the Schwarzschild criterion \[\nabla_{\rm rad}>\nabla_{\rm ad} \tag{6}\] is satisfied, where we ignore compositional gradients in our analysis as it is found to drive convection on negligible scales and only at the very bottom of the envelope for mini-Neptunes (see Lee et al., 2014). Thus we use \(\nabla=\)min(\(\nabla_{\rm ad},\nabla_{\rm rad}\)) in Equation (3). Throughout this work, we will refer to the boundary between the innermost convective zone and the overlying radiative region as the radiative-convective boundary (rcb). It has been shown that opacity has a sizable effect on planetary accretion rates (e.g., Ikoma et al., 2000; Piso et al., 2015). We adopt the same opacity table that is used in Lee et al. (2014) which is an expansion of the calculation of Ferguson et al. (2005) with the smoothing and extrapolation scheme outlined in Lee et al. (2014), their Section 2.1.3. Both dusty and dust-free models are considered where in the former, dust grains follow the interstellar medium (ISM) size distribution and in the latter, grains do not contribute to the opacity due to, e.g., coagulation and rain out. ### Boundary conditions We describe the outer boundary of a planet with core mass \(M_{\rm core}\) with the Hill radius \[\begin{split} R_{H}&=\Big{[}\frac{(1+{\rm GCR})M_{ \rm core}}{3M_{\odot}}\Big{]}^{\frac{1}{3}}a\\ &\simeq 40R_{\oplus}\Big{[}\frac{(1+{\rm GCR})M_{\rm core}}{5M_{ \oplus}}\Big{]}^{\frac{1}{3}}\big{(}\frac{a}{0.1{\rm AU}}\big{)},\end{split} \tag{7}\] and the Bondi radius \[\begin{split} R_{B}&=\frac{G(1+{\rm GCR})M_{\rm core }}{c_{s}^{2}}\\ &\simeq 90R_{\oplus}\Big{[}\frac{(1+{\rm GCR})M_{\rm core}}{5M_{ \oplus}}\Big{]}\Big{(}\frac{\mu_{d}}{2.37}\Big{)}\Big{(}\frac{1000K}{T_{d}} \Big{)}\end{split} \tag{8}\] where \(c_{\rm s}=\sqrt{kT_{\rm d}/\mu_{\rm d}m_{\rm H}}\) is the sound speed, \(k\) is the Boltzmann constant, \(T_{\rm d}\) is the disk midplane temperature, \(\mu_{\rm d}\) is the mean molecular weight, and \(m_{\rm H}\) is the mass of the hydrogen atom, all evaluated at \(a\). We fix the outer boundary of the envelope at the minimum of those two radii \[R_{\rm adv}=\alpha_{\rm adv}R_{\rm out}=\alpha_{\rm adv}\min(R_{H},R_{B}), \tag{9}\] with the inclusion of a free parameter \(\alpha_{\rm adv}\leq 1\), which accounts for a flow-dominated region of the atmosphere (\(R_{\rm adv}<r<R_{\rm out}\)) where the outer disk injects entropy into the system. The thermal state of this flow-dominated region will be developed in Section 2.3. As for the inner boundary down to which we integrate Equations (1)-(3), we use a core radius which scales as (Valencia et al., 2006) \[R_{\rm core}=R_{\oplus}\Big{(}\frac{M_{\rm core}}{M_{\oplus}}\Big{)}^{\frac{ 1}{4}}. \tag{10}\] The center of our planet is placed at the disk midplane for which the fiducial parameters \(T_{\rm mid}\) and \(\rho_{\rm mid}\) are taken from the minimum-mass extrasolar nebula (MMEN) of Chiang & Laughlin (2013) modified for the irradiated disk profile of Chiang & Goldreich (1997): \[\rho_{\rm mid}=6\times 10^{-6}\Big{(}\frac{a}{0.1{\rm AU}}\Big{)}^{-2.9}{\rm g /cm}^{3} \tag{11}\] \[T_{\rm mid}=1000\Big{(}\frac{a}{0.1{\rm AU}}\Big{)}^{-\frac{3}{7}}{\rm K}. \tag{12}\] Since we are interested in revisiting the ability of advective flows in delaying the accretion timescale in gas-rich environment, we limit our calculations to the gas-full disk profiles. 
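To make the outer boundary concrete, the following Python sketch (ours, in cgs units with rounded constants) evaluates the MMEN midplane conditions of Eqs. (11)-(12) and the outer radius of Eqs. (7)-(9); it reproduces the \(\sim 40R_{\oplus}\), Hill-limited boundary quoted in Eq. (7) for a \(5M_{\oplus}\) core at 0.1 AU.

```python
import numpy as np

# cgs constants (rounded values; ours, not from the paper).
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, M_earth, R_earth, AU = 1.989e33, 5.972e27, 6.371e8, 1.496e13

def mmen_midplane(a_AU):
    """MMEN midplane density [g cm^-3] and temperature [K], Eqs. (11)-(12)."""
    return 6e-6 * (a_AU / 0.1) ** -2.9, 1000.0 * (a_AU / 0.1) ** (-3.0 / 7.0)

def outer_radius(M_core_Me, a_AU, GCR=0.0, mu_d=2.37):
    """R_out = min(R_Hill, R_Bondi) of Eqs. (7)-(9), in cm."""
    _, T_d = mmen_midplane(a_AU)
    M_p = (1.0 + GCR) * M_core_Me * M_earth
    a = a_AU * AU
    R_H = (M_p / (3.0 * M_sun)) ** (1.0 / 3.0) * a          # Hill radius, Eq. (7)
    R_B = G * M_p * mu_d * m_H / (k_B * T_d)                # Bondi radius, Eq. (8)
    return min(R_H, R_B)

# A 5 Earth-mass core at 0.1 AU: R_out ~ 40 R_earth, set by the Hill radius.
print(outer_radius(5.0, 0.1) / R_earth)
```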
We assume that the disk midplane parameters are constant in time over the duration of gas accretion which not only simplifies the computations, but is also justified given the weak dependence of accretion rate on outer nebular conditions (e.g., Lee et al., 2014; Ginzburg et al., 2016), as long as the nebular gas density does not deplete by more than 8 orders of magnitude (Lee et al., 2018). ### Thermal state of the outer shells To account for strong advective flows from the disk penetrating the planetary envelope, we include an outer advection region of the envelope (\(R_{\rm adv}<r<R_{\rm out}\)) dominated by entropy advection which we treat separately from the inner shells. The thermal state of this region of the envelope is determined by whether the cycling material is allowed to cool before being ejected out back to the disk. The flow timescale of this gas is well approximated by the local orbital period \(t_{\rm orb}\)(e.g., Ormel et al., 2015; Zhu et al., 2021) at a given distance \(a\). By comparison, following Zhu et al. (2021, their Section 5.2), we calculate the cooling timescale of the recycling gas \(t_{\rm diff}\sim E_{\rm diff}/L_{\rm diff}\) where \(E_{\rm diff}\) is the thermal energy that must be dissipated for the envelope temperature \(T\) to be below the isentropic extension of the disk \(T_{\rm ad}\),1 Footnote 1: Cooling is mediated by radiative diffusion here since in gas-rich environment, the advective region is optically thick. \[E_{\rm diff}\sim 4\pi r^{2}c_{V}\rho T\big{(}\nabla_{ad}-\nabla\big{)}\big{(}R_ {\rm out}-r\big{)}, \tag{13}\] and \(L_{\rm diff}\) is the excess luminosity generated by the adiabat, \[L_{\rm diff}=\frac{64\pi\sigma T^{4}GM}{3\kappa P}\big{(}\nabla_{\rm ad}- \nabla\big{)}, \tag{14}\] all evaluated over the advection zone (\(R_{\rm adv}<r<R_{\rm out}\)) and \(c_{V}\) is the heat capacity of the cycling gas. Using the ideal gas law and a diatomic heat capacity \(c_{V}=\frac{5\mathcal{R}}{2\mu}\) where \(\mathcal{R}\) is the ideal gas constant and \(\mu\) is the mean molecular weight of the gas. It follows that \[t_{\rm diff}\sim\frac{E_{\rm diff}}{L_{\rm diff}}\sim\frac{15P\tau}{32\sigma T^{ 4}}\big{(}R_{\rm out}-r\big{)}, \tag{15}\] where the optical depth \[\tau(r)=\int_{r}^{R_{\rm out}}\kappa(r^{\prime})\rho(r^{\prime})dr^{\prime} \tag{16}\] defined over \(R_{\rm adv}<r<R_{\rm out}\), and \(\kappa(r^{\prime})\) is evaluated using the opacity model described in Section 2.1. If the gas injected into the envelope is allowed to cool down (i.e. \(t_{\rm diff}<t_{\rm orb}\)), radiation will prevail through the outer region. Following the procedure of Zhu et al. (2022), the density outside the envelope (\(r>R_{\rm adv}\)) then follows the background Gaussian density profile of the protoplanetary disk \[\rho_{\rm rad}(r)=\rho_{\rm mid}e^{-r^{2}/2H^{2}} \tag{17}\] where \(H=c_{s}/\Omega\) is the local disk scale height, \(\Omega=\sqrt{GM_{\star}/a^{3}}\) is the Keplerian orbital frequency. We combine Equations (2), (3) and (4) for this radiative outer profile in order to compute the temperature over \(R_{\rm adv}<r<R_{\rm out}\): \[T_{\rm rad}^{4}(r)=\frac{3L}{16\pi\sigma}\int_{r}^{\infty}\frac{\kappa(r^{ \prime})\rho_{\rm rad}(r^{\prime})}{r^{\prime 2}}dr^{\prime}+T_{\rm mid}^{4}. \tag{18}\] If \(t_{\rm diff}>t_{\rm orb}\), the penetrating inflow retains the disk entropy over its recycling time and so here we adopt the isentropic approach of Ali-Dib et al. (2020) while remaining consistent with our realistic EOS and opacities. 
In other words, we solve Equations (2) and (3) for \(T_{\rm S,disk}(r)\) and \(P_{\rm S,disk}(r)\) from \(R_{\rm out}\) to \(R_{\rm adv}\) fixing \(\nabla\) to the adiabatic gradient \(\nabla_{\rm ad}\) evaluated for the local disk temperature and pressure, where the subscript 'S,disk' is used to remind us that this region is isentropic with respect to the local disk entropy. We ignore the small gas mass present above \(R_{\rm adv}\) (this assumption is justified by the centrally concentrated mass distribution shown in Figure 2). Because this region is taken to be isen Figure 1: Radiative diffusion timescale \(t_{\rm diff}\) of the recycling flows (red) for a \(5M_{\oplus}\) core embedded in minimum mass extrasolar nebula as a function of radius within \(0.1R_{\rm out}<r<R_{\rm out}\). The orbital distance is varied from 0.1 AU (upper panels) to 0.3 AU (middle panels) to 1 AU (lower panels) with the respective orbital period \(t_{\rm orb}\) indicated with blue horizontal dotted line. The dusty (left panels) opacity model is compared with the dust-free (right panels) model. Recycling flows are ejected out of dusty envelopes before having time to cool except for \(r\gtrsim 0.9R_{\rm out}\) at 1 AU. In dust-free envelopes, the flows cool at \(r\gtrsim 0.8R_{\rm out}\) at 0.3 AU before being recycled whereas at 1 AU, flows cool at all \(r\). tropic with the disk, we can relate \(P_{\rm S,disk}\) to the disk properties: \[\frac{\rho_{\rm S,disk}^{\gamma(r)}(r)}{P_{\rm S,disk}(r)}=\frac{\rho_{d}^{\gamma _{d}}}{P_{d}}, \tag{19}\] with the radially dependent adiabatic index \(\gamma(r)=\frac{1}{1-\nabla(r)}\). Here, \(\rho_{\rm d}\), \(P_{\rm d}\) and \(\gamma_{\rm d}\) are all computed with the disk conditions at height \(R_{\rm out}\) above the midplane. To systematically account for these two different methods of heat transport in our simulations, we adopt a hybrid model as follows. For a given set of parameters (dusty vs. dust-free opacity, \(M_{\rm core}\), \(a\) and \(\alpha_{\rm adv}\)), we first compute \(t_{\rm diff}\) over the outer region \(R_{\rm adv}<r<R_{\rm out}\). If \(t_{\rm diff}>t_{\rm orb}\) over the whole region, the envelope profile in the advective region follows \(T_{\rm S,disk}\), \(P_{\rm S,disk}\), and \(\rho_{\rm S,disk}\). If \(t_{\rm diff}<t_{\rm orb}\) over the whole region, then the envelope profile follows \(T_{\rm rad}\), \(\rho_{\rm rad}\) and the corresponding \(P_{\rm rad}\) according to the EOS. If there there is any transition within the advective region then we use a radiative profile within the newly defined outer shell \(R_{\rm int}<r<R_{\rm out}\) and for the inner shell \(R_{\rm adv}<r<R_{\rm int}\) we adopt the isentropic profile. Figure 1 illustrates, as an example, the radial profiles of \(t_{\rm diff}\) for \(5M_{\oplus}\) core at 0.1, 0.3, and 1 AU, down to \(0.1R_{\rm out}\). In the dusty model, we find that \(t_{\rm diff}>t_{\rm orb}\) over the whole envelope at 0.1 AU and 0.3 AU and for \(r\lesssim 0.9R_{\rm out}\) at 1 AU. As for dust-free atmospheres, the local orbital timescale is also shorter than the diffusion timescale at 0.1 AU and at 0.3 AU for \(r\lesssim 0.8R_{\rm out}\). However, the flows are allowed to cool everywhere at 1 AU in the dust-free regime which leads to a fully radiative advective layer. 
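The hybrid rule just described amounts to a pointwise comparison between the recycling-flow time and the local cooling time. A minimal sketch is given below (ours; the caller must supply the local pressure, temperature, and optical depth from the envelope profile).

```python
import numpy as np

# cgs constants (rounded; ours).
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13
SIGMA_SB = 5.670e-5              # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def t_orb(a_AU):
    """Local orbital period around a solar-mass star [s]."""
    return 2.0 * np.pi * np.sqrt((a_AU * AU) ** 3 / (G * M_sun))

def t_diff(P, T, tau, r, R_out):
    """Radiative cooling time of the recycling flow, Eq. (15) [s].

    P [erg cm^-3] and T [K] are local values, tau is the optical depth
    integrated from r to R_out (Eq. 16), and r, R_out are in cm.
    """
    return 15.0 * P * tau / (32.0 * SIGMA_SB * T ** 4) * (R_out - r)

def advective_layer_is_isentropic(P, T, tau, r, R_out, a_AU):
    """Hybrid rule of Section 2.3: isentropic with the disk where the flow
    cannot cool within one orbit, radiative where it can."""
    return t_diff(P, T, tau, r, R_out) > t_orb(a_AU)
```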
### Connecting snapshots in time Like Piso and Youdin (2014), we take the planet luminosity to be spatially constant which implicitly assumes the luminosity generated in the outer region to be minimal, which we verify a posteriori, even when we account for entropy advection. For a fixed GCR, there is a constant luminosity eigenvalue \(L\) for the stellar structure equations (Equations 1-3). We integrate these equations from \(R_{\rm adv}\) to \(R_{\rm core}\) with the boundary conditions at \(R_{\rm adv}\) described by the thermal state of the advective layer (Section 2.3). We iteratively solve for \(L\) until the mass profile agrees with the fixed GCR value within \(0.01\%\) for each specific snapshot. We connect the resulting snapshots together by computing the time it takes to cool from one snapshot to next. Although the planetesimal accretion of solids forming the core can act as an battery, the range of planetesimal accretion rate that can successfully avoid runaway is extremely limited over 0.1-5 AU and the typical solid accretion rate in solar nebula would in fact accelerate the runaway by way of growing the core to too high a mass (see Lee and Chiang, 2015, their Figure 2). Changes to the energy budget of the envelope therefore come directly from its cooling and accretion of gas. Since most of the envelope mass is centrally concentrated (owing to H\({}_{2}\) dissociation driving adiabatic index \(\gamma<4/3\)) and the surface lid of the convective zone acts as a thermal bottleneck, we can characterize the thermal state of the envelope with the state variables at the radiative-convective boundary \(R_{\rm rcb}\). We follow the cooling treatment of Piso and Youdin (2014) for the change in energy which gives the time elapsed between snapshots: \[\Delta t=\frac{-\Delta E+\langle e_{M}\rangle\Delta M-\langle P\rangle\Delta V _{\langle M\rangle}}{\langle L\rangle}. \tag{20}\] Here, the average and difference of a quantity Q over two snapshots are denoted with \(\langle Q\rangle\) and \(\Delta Q\) respectively, and \(E\) is the total energy of the envelope \[E=-\int\frac{GM(<r)}{r}dM+\int UdM \tag{21}\] where we account for the specific internal energy (\(U\)) contributions with the equation of state. The integral is carried out from \(R_{\rm core}\) to \(R_{\rm rcb}\) to cover the entire inner convective zone. Energy from mass accretion is given by \[e_{M}=-\frac{GM}{r}\Big{|}_{R_{\rm rcb}}+U\mid_{R_{\rm rcb}}. \tag{22}\] The last term in the numerator of Equation (20) accounts for the change in the volume \(V_{\langle M\rangle}\) of the innermost convective zone with the pressure taken along its surface. We do not ignore the contribution of the surface energy terms \(\langle e_{M}\rangle\Delta M\) and \(\langle P\rangle\Delta V_{\langle M\rangle}\) as excluding these terms can underestimate the GCR by factors of \(\sim 1.5\)-2 and artificially delay the runaway. We discuss this effect more in Section 4. Since we only compute time intervals, we need to fix an initial time \(t_{0}\) which we set to the Kelvin-Helmholtz time of the first snapshot \(|E|/L\) which is small (\(\lesssim 0.05\) Myr) compared to the disk lifetime of \(\sim 10\) Myr (Mamajek, 2009; Michel et al., 2021). 
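The time-stepping of Eq. (20) is simple bookkeeping once each snapshot's convective-zone energy, mass, volume, surface terms, and luminosity eigenvalue are known. A minimal sketch with our own argument names follows.

```python
def time_between_snapshots(E, M, V, e_M, P, L):
    """Cooling time between two consecutive hydrostatic snapshots, Eq. (20).

    Each argument is a pair (value at snapshot 1, value at snapshot 2):
    E   -- total energy of the inner convective zone, Eq. (21)
    M   -- gas mass inside the radiative-convective boundary (rcb)
    V   -- volume of the inner convective zone, V_<M>
    e_M -- specific surface energy at the rcb, Eq. (22)
    P   -- pressure at the surface of the inner convective zone
    L   -- luminosity eigenvalue of the snapshot
    """
    avg_eM = 0.5 * (e_M[0] + e_M[1])
    avg_P = 0.5 * (P[0] + P[1])
    avg_L = 0.5 * (L[0] + L[1])
    return (-(E[1] - E[0]) + avg_eM * (M[1] - M[0]) - avg_P * (V[1] - V[0])) / avg_L
```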
## 3 Results ### Effect of advection on envelope structure and cooling With the one-dimensional differential system developed in Sections 2.1 and 2.2 and the hybrid characterization of the advective flows penetrating the envelope explored in Section 2.3, we can solve for snapshots of the thermal state of the envelope for any core mass, orbital distance, different opacity regimes, advection depth \(\alpha_{\rm adv}\) and total GCR. Figure 2 shows the resulting temperature, density, entropy and envelope mass profiles under this hybrid scheme for a \(5M_{\oplus}\) core embedded in a gas-rich nebula at both 0.1 AU with dusty opacities and 1 AU with dust-free opacities. While we explore the full parameter space, we choose these two limiting cases for illustration purpose: the former envelope is characterized by a fully isentropic entropy advection region while the latter envelope is allowed to cool by radiation (see Figure 1). In both 0.1 AU and 1 AU cases, we find that accounting for the advective flows result in steeper envelope profiles with more centrally concentrated mass and smaller internal specific entropy. These steep profiles emerge because for a given envelope mass, an outer penetrative flow shrinks the spatial volume within which the gas mass can be packed. While the temperature at the rcb remains the same between models with and without outer advective zones, the density at the rcb rises in the former which increases the optical depth and so we expect a delay in cooling time when advection is taken into account. Compared to 0.1 AU, we see that the changes to the thermal structures at 1 AU by advective flows are more muted. At wider orbits, the advective flows can cool easily and appear nearly isothermal, similar to the upper envelopes of non-advective, unrestricted cases. For the same core mass and gas mass, the Hill sphere Figure 3: Time evolution of the gas-to-core mass ratio (GCR \(\equiv M_{\rm gas}/M_{\rm core}\)) of a \(5M_{\oplus}\) core embedded in a minimum mass extrasolar nebula (MMEN) for an advection ratio of \(\alpha_{\rm adv}=0.3\). Left and right panels describe models with dusty and dust-free opacities respectively. Upper panels show the evolution of a core at orbital distance of 0.1 AU, middle panels 0.3 AU and lower panels 1 AU. The model with a fully isentropic (blue) outer region is compared to the hybrid scheme (red). The evolution of the envelope mass of models without recycling (black), the target mass range for mini-Neptunes envelope mass (shaded region) and the protoplanetary disk lifetime (vertical dotted line) are indicated as references. In all cases, we find that entropy advection alone is unable to reproduce mini-Neptunes. Figure 2: Radial profiles of the state variables of the gas envelope atop a \(5M_{\oplus}\) core embedded in a minimum mass extrasolar nebula with a total gas-to-core mass ratio (GCR) of 0.06. The left and right panels show respectively systems at 0.1 AU with dusty opacity and 1 AU with dust-free opacity. From top to bottom, the temperature, density, GCR and entropy radial profiles are displayed for envelopes with hybrid (red), isentropic (blue) and without (black) entropy advection from the disk. The advection ratio (\(\alpha_{\rm adv}=R_{\rm adv}/R_{\rm out}\)) is fixed to 0.3. At 0.1 AU, the dusty hybrid model agrees with a fully isentropic advection region. 
Dotted curves indicate the outer region dominated by advective flows while radiation and convection dominated zones of the inner envelope are represented by dashed and solid curves respectively. Under the hybrid treatment, the outer advective region is fully isentropic with the disk for dusty envelopes at 0.1 AU, and it is fully radiative for dust-free envelopes at 1 AU. is larger at 1 AU; there is more room to pack the same amount of gas mass and so the envelope profiles are less affected by the changes in the outer envelope. We see from Figure 2 that forcing an isentropic profile at 1 AU would create an even steeper inner envelope structure overestimating the density at the rcb. Figure 3 illustrates the degree of delay in gas accretion by cooling caused by the entropy advection. In general, the effect is minor, causing factors of \(\lesssim\)2 changes in the final GCR. One exception is dust-free accretion at 0.1 AU where the entropy advection successfully avoids the runaway gas accretion as compared to the unrestricted evolution. As mentioned previously, the smaller Hill sphere at shorter orbital distances forces a significantly steeper envelope structure when advective flows are taken into account and the higher rcb density leads to higher optical depth and so slower cooling. Even so, we find that a 5\(M_{\oplus}\) core that begins accreting gas in early, gas-rich environment will always (even for dust-free accretion at 0.1 AU) end up with GCR \(\gtrsim\)0.2 (for our fiducial \(\alpha_{\rm adv}=0.3\)) which is larger than the expected envelope mass fraction of mini-Neptunes. If dust grains do not contribute to opacity during accretion, such a core is expected to undergo runaway gas accretion and blow up into gas giants even at 0.3 AU, even with the recycling flows. We therefore conclude that entropy advection alone is insufficient in limiting the rapid gas accretion onto high-mass mini-Neptune cores. Our use of realistic EOS, opacity, and cooling time calculation arrive at a result that differs from the previous study by Ali-Dib et al. (2020), which we discuss in more detail in Section 4.1. ### Scaling relations for GCR(t,\(\alpha_{adv}\)) With the realistic treatment of atmospheric recycling developed in this work, we update the previously developed semi-analytic scaling relationships between GCR, time, and the penetration depth of advective flows. In unrestricted systems, Lee and Chiang (2015) derived a general expression for the evolution of the atmospheric mass in terms of state variables and fiducial parameters of the model. The goal of this section is to empirically obtain a scaling of form \[\mathrm{GCR}\propto t^{\frac{1}{2\alpha}}\alpha_{adv}^{\theta}, \tag{23}\] where \(\kappa_{\rm rcb}\propto\rho_{\rm rcb}^{\alpha}\) and \(\alpha\sim 0.1-0.8\) over the parameter space we explore. Because we fix the core mass, we do not refit the scaling on \(M_{\rm core}\) (but we check a posteriori that the GCR-\(M_{\rm core}\) scaling derived by Lee and Chiang 2015 matches well our calculations between 5\(M_{\oplus}\) and 10\(M_{\oplus}\), shown in Figure A.1 in the Appendix). Other dependencies such as the adiabatic gradient and mean molecular weight are not directly discussed here as they do not vary significantly across our simulations; we also do not discuss the effect of metallicity in this paper as we have fixed our calculations to solar metallicity. 
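The exponent \(\theta\) in Eq. (23) is obtained empirically at a fixed, pre-runaway time. The sketch below (ours, with made-up numbers in the example call) shows the least-squares fit in log-log space that such a procedure amounts to.

```python
import numpy as np

def fit_theta(alpha_adv, gcr):
    """Least-squares estimate of theta in GCR ~ alpha_adv**theta (cf. Eq. 23),
    from pre-runaway GCR values tabulated at a fixed time.

    alpha_adv, gcr : 1-d arrays (e.g. the grid of advection depths used here).
    Returns (theta, prefactor).
    """
    theta, log_c = np.polyfit(np.log(alpha_adv), np.log(gcr), 1)
    return theta, np.exp(log_c)

# Example with made-up numbers (for illustration only):
alpha = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.0])
gcr = 0.2 * alpha ** 0.4
print(fit_theta(alpha, gcr))   # approximately (0.4, 0.2)
```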
The GCR-\(T_{\rm rcb}\) scaling depends on the \(\kappa-T\) scaling, which we find to vary non-trivially as we vary \(\alpha_{\rm adv}\), since \(T_{\rm rcb}\) ranges from \(\sim\)2000-3000 K where the \(\kappa_{\rm rcb}-T\) scaling probes the transition to H- opacity (i.e., from a weak dependence to a near exponential). We therefore subsume the dependence on \(T_{\rm rcb}\) under the GCR-\(\alpha_{\rm adv}\) scaling, which we fit against our numerical results. We empirically fit for \(\theta\) over numerical results computed at \(\alpha_{\rm adv}\in\) [0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1] at a fixed time at which the GCR values for all \(\alpha_{\rm adv}\) are pre-runaway. The resulting values of GCR as a function of advection depth are presented in Figure 4. In some instances at 0.1 AU where the delays due to advection are significant (usually for \(\alpha_{\rm adv}\lesssim 0.3\)), the GCR for low \(\alpha_{\rm adv}\) had to be taken at a later time (as otherwise the envelope is fully convective). We observe that the GCR as a function of \(\alpha_{\rm adv}\) behaves differently for different advection depths, whereby the effect of \(\alpha_{\rm adv}\) becomes more significant when the advective flows penetrate below the rcb's in the original unrestricted envelopes, similar to what was reported by Lee et al. (2022). The calculation of Lee et al. (2022) differs from ours in two ways. First, their focus was on gas-depleted environments. Second, they accounted for advective flows by simply shrinking \(R_{\rm out}\). To more directly compare our results with Lee et al. (2022), we reduce the MMEN disk density by a factor of 100. We find a dependency on advection depth at 0.3 AU in the dust-free atmosphere (GCR \(\propto\alpha_{\rm adv}^{0.31\pm 0.02}\)) that agrees with the scaling of the GCR with \(R_{\rm out}\) of Lee et al. (2022, see their Figures 4 and 5). Here, the outer advection region is isothermal with the disk at 0.3 AU where changes in the Gaussian density of the disk are small. Restricting the dust-free envelope with advective flows in depleted MMEN disks is therefore equivalent to simply varying its outer boundary. However, the analogous dusty atmosphere is not characterized by such an outer isotherm, leading to a denser rcb in the warmer envelope restricted by entropy advection, and so for dusty envelopes, simply changing the \(R_{\rm out}\) is not equivalent to a more careful treatment of the advective flows. This difference manifests in the GCR-\(\alpha_{\rm adv}\) scaling for which we find GCR \(\propto\alpha_{\rm adv}^{0.44\pm 0.03}\), steeper than the GCR \(\propto R_{\rm out}^{0.27\pm 0.05}\) computed by Lee et al. (2022) for dusty envelopes. In gas-rich environments (which is our focus), our unrestricted dusty envelopes are characterized by outermost and innermost convective zones with a radiative window sandwiched in-between, so that there are two rcb's: the outer one (\(\mathrm{rcb}_{\mathrm{out}}\)) where the envelope transitions from convective to radiative zone outside-in; and the inner one (\(\mathrm{rcb}_{\mathrm{in}}\)) where the envelope transitions from radiative to convective zone outside-in.
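The empirical fits quoted here and below are simple power laws in \(\alpha_{\rm adv}\), fit separately per regime. As a sketch of the procedure (with made-up GCR values and a made-up rcb location standing in for the numerical results), a log-log least-squares fit looks like the following:

```python
import numpy as np

# Hypothetical (alpha_adv, GCR) pairs at a fixed snapshot time; real values
# come from the numerical envelope calculations described above.
alpha_adv = np.array([0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
gcr = 0.08 * alpha_adv**0.4 * (1.0 + 0.05 * np.random.default_rng(0).normal(size=alpha_adv.size))

def fit_power_law(x, y):
    """Least-squares fit of y = C * x**theta in log-log space; returns (theta, C)."""
    theta, logC = np.polyfit(np.log(x), np.log(y), 1)
    return theta, np.exp(logC)

# Piecewise fit, split at the location of the inner rcb of the unrestricted
# envelope (alpha_rcb_in is a placeholder value here).
alpha_rcb_in = 0.3
deep = alpha_adv < alpha_rcb_in
theta_deep, _ = fit_power_law(alpha_adv[deep], gcr[deep])
theta_shallow, _ = fit_power_law(alpha_adv[~deep], gcr[~deep])
print(f"GCR ~ alpha_adv^{theta_deep:.2f} (below rcb_in), ^{theta_shallow:.2f} (above)")
```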
As demonstrated in Figure 4, we see that \(\theta\) is the largest (i.e., GCR is affected most strongly) when the flow can penetrate inside \(\mathrm{rcb}_{\mathrm{in}}\); this is expected because the envelope is forced to be confined within a radius that is smaller than the initial \(\mathrm{rcb}_{\mathrm{in}}\), significantly increasing the density (and also the temperature) at the new rcb, delaying the cooling process. When the flow penetrates below \(\mathrm{rcb}_{\mathrm{out}}\) but above \(\mathrm{rcb}_{\mathrm{in}}\), the GCR-\(\alpha_{\mathrm{adv}}\) becomes weaker, and when the flow cannot penetrate even the outer rcb, the GCR-\(\alpha_{\mathrm{adv}}\) scaling becomes even weaker. There is one exception to this trend presented by dust-free accretion onto a \(10M_{\oplus}\) core at 1 AU where we see that GCR-\(\alpha_{\mathrm{adv}}\) steepens for larger \(\alpha_{\mathrm{adv}}\). This difference in behavior is due to the envelope mass becoming less centrally concentrated for \(\alpha_{\mathrm{adv}}\gtrsim 0.7\) in the large Hill sphere of a \(10M_{\oplus}\) core at 1 AU so that the envelope thermal evolution is more strongly affected by the outer boundary conditions. We confirm that if we limit our empirical fit of GCR\(\propto\alpha_{\mathrm{adv}}^{\theta}\) to \(\alpha_{\mathrm{adv}}<0.7\), \(\theta\) for \(\alpha_{\mathrm{adv}}\) above \(\mathrm{rcb}_{\mathrm{in}}\) becomes shallower. We demonstrate in Figure 5 that Equation (23) with the \(\theta\) computed in Figure 4 provides a good approximation to our numerical results. We note that the scaling at 0.1 AU for \(\alpha_{\mathrm{adv}}\) is slightly off due to imperfect numerical fitting. While the strong effect of entropy advection when the flow penetrates below the inner rcb could theoretically keep the final envelope mass small enough to be consistent with the observed properties of mini-Neptunes (see Figures A.1 and 5), it would require advection depths of \(\alpha_{\mathrm{adv}}\lesssim 0.15\) which is deeper than the flow penetration depths found in three-dimensional non-isothermal hydrodynamic simulations for marginally superthermal planets (i.e., \(R_{H}\lesssim R_{B}\); Lambrechts & Lega, 2017), which is the relevant regime for \(5M_{\oplus}\) cores inside 1 AU. At 1 AU, \(5M_{\oplus}\) cores become sub-thermal (\(R_{H}>R_{B}\)) and numerical simulations report deeper penetration depths reaching down to \(\alpha_{\mathrm{adv}}\sim\)0.1-0.2 (e.g., Zhu et al., 2021); however, at this orbital distance, we find that even for such deep flows, Figure 4: Gas-to-core mass ratio (GCR) of a core embedded in a gas-rich nebula at a fixed time as a function of advection depth (\(\alpha_{\mathrm{adv}}\)) under different boundary conditions. The left and right figures account for dusty and dust-free opacities respectively. From top to bottom, the orbital distance is varied from 0.1 AU to 0.3 AU and 1 AU. The first column of each figure considers a \(5M_{\oplus}\) core while the second one a \(10M_{\oplus}\) core. For each setup, the fixed time at which the GCR’s are taken is annotated in blue text and those GCR’s indicated with blue circles. In cases for which the GCR could not be obtained at a same instant for all values of \(\alpha_{\mathrm{adv}}\) (refer to Figure A.1), such GCR’s are evaluated at a later time (yellow text) and indicated with yellow circles. 
In general, the GCR-\(\alpha_{\mathrm{adv}}\) scaling changes its behavior when \(\alpha_{\mathrm{adv}}\) becomes smaller than the locations of rcb’s in the original unrestricted envelopes, which are denoted with magenta (\(\mathrm{rcb}_{\mathrm{in}}\)) and brown (\(\mathrm{rcb}_{\mathrm{out}}\)) vertical lines. Empirically-fit scalings GCR \(\propto\alpha_{\mathrm{adv}}^{\theta}\) are shown in red for \(\alpha_{\mathrm{adv}}<\alpha_{\mathrm{rcb,in}}\), cyan for \(\alpha_{\mathrm{rcb,in}}<\alpha_{\mathrm{adv}}<\alpha_{\mathrm{rcb,out}}\) and black for \(\alpha_{\mathrm{rcb,out}}<\alpha_{\mathrm{adv}}\). This scaling becomes steeper for smaller \(\alpha_{\mathrm{adv}}\), with an exception at 1 AU, dust-free, 10 \(M_{\oplus}\) core where the envelope becomes less centrally concentrated for \(\alpha_{adv}\gtrsim 0.7\) enhancing the effect of outer boundary conditions on thermal evolution, steepening the GCR-\(\alpha_{\mathrm{adv}}\) relation. the final GCR would exceed 0.1 (dusty accretion) or actually trigger runaway accretion (dust-free accretion), as illustrated in Figure 5. ## 4 Discussion ### Importance of opacity and equation of state Our results highlight the importance of realistic EOS, opacities, and thermal state of recycling flows in quantifying the latter's effect on planetary envelope formation. Figure 6 demonstrates how simplifications of these elements can lead to an underestimation of gas cooling and therefore accretion rates. We replace the EOS of Lee et al. (2014) used in the previous sections with the one of Ali-Dib et al. (2020) which adopts a fixed adiabatic index of \(\gamma=1.4\) and a mean molecular weight of \(\mu=2.34\). Doing so reduces the final values of GCR by a factor of \(\sim 5\), well and artificially within the mini-Neptune mass range. With our more realistic EOS, the adiabatic index is allowed to drop down to \(\approx 1.2\) in the deep envelope allowing convection to prevail through a larger radial extent and leading to faster envelope cooling. We investigate the effect of using realistic opacities by comparing the dusty and dust-free tables of Ferguson et al. (2005) to the power-law opacity of Bell & Lin (1994) with grains opacity suppressed by a factor of 10 as done by Ali-Dib et al. (2020). As illustrated in Figure 6, adopting the more realistic opacities result in the final GCR larger by a factor \(\sim\)2.5 compared to assuming the power-law opacity of Bell & Lin (1994). When adopting the latter simple power-law opacity, the outer radiative region, and so the radiative-convective boundary, becomes more optically thick from larger opacity (while remaining radiative) which results in reduced rate of cooling and therefore smaller final GCR. We also note that the dusty and dust-free models are nearly identical at 0.1 AU as the outer advective region (\(0.3R_{\rm out}<r<R_{\rm out}\)) isentropic and the inner envelope has temperatures above the H\({}_{2}\) dissociation threshold of 2500 K so that Figure 5: Time evolution of the gas-to-core mass ratio (GCR) of a 5 \(M_{\oplus}\) core embedded in a gas-rich nebula (solid lines) compared to the semi-empirical scalings (dashed lines) of Equation (23) with the appropriate \(\theta\) from Figure 4. The advection depth (\(\alpha_{\rm adv}\)) is varied from 1 to 0.1 as indicated by the color scheme. The target range for mini-Neptune envelope mass fraction (shaded region) and the protoplanetary disk lifetime (dotted vertical line) are indicated as references. 
The scalings of Figure 4 agree with the computed accretion rates before runaway mass accretion. A full exploration of the parameter space over core mass, orbital distance, dusty vs. dust-free opacity, and \(\alpha_{\rm adv}\) is shown in Figure A.1. Figure 6: Effect of opacity and equation of state (EOS) on the time evolution of the gas-to-core mass ratio (GCR) of a 5\(M_{\oplus}\) core embedded in a gas-rich nebula at 0.1 AU for \(\alpha_{\rm adv}=0.3\). The dusty opacity model (red) is compared to the dust-free one (cyan) and to the power-law opacity of Bell & Lin (1994) (brown). Models using the simplified EOS of Ali-Dib et al. (2020) with fixed adiabatic index \(\gamma=1.4\) and the realistic EOS of Lee et al. (2014) are described by dashed and solid curves respectively. Dash-dotted curves show the time evolution of models with both realistic opacities and EOS without accounting for the surface energy contributions in Equation (20). The target GCR range for mini-Neptunes (shaded region) and the protoplanetary disk lifetime (dotted vertical line) are indicated as references. Simplifications in opacities, EOS, and the exclusion of surface energy contributions overestimate the impact of advection on accretion rates by almost a full order of magnitude. dust grain opacities are irrelevant. Replacing both our EOS and opacity by the ones of Ali-Dib et al. (2020) limits the final GCR to \(\sim 0.04\) which is \(\sim 6\) times lower than what we find with our realistic treatment of advection (GCR \(\sim 0.23\)). Ignoring the surface energy sources in cooling timescale calculations also plays a role in overestimating the effect of entropy advection in limiting gas accretion. In particular, the extra energy gained from accreting gas between snapshots (\(\langle e_{M}\rangle\Delta M\) term in Equation (20)) increases when including advection, even before runaway accretion begins. This extra energy contribution can increase from \(\lesssim 10\%\) of the total changes in the energy budget \(\Delta E\) for \(\alpha_{\rm adv}=1\) to \(\sim 30\) % for \(\alpha_{\rm adv}=0.3\). As shown in Figure 6, including this extra energy source in cooling time calculations hastens accretion rates, with the final GCR higher by \(\sim 25\%\). While this variation is not as drastic as the impact of using realistic opacity and EOS, we still recommend including these surfaces energy sources to more accurately portray the thermal relaxation of the envelope. Ali-Dib et al. (2020) found that the density and the temperature of the disk can affect critically the gas accretion rate. For example, they report that within their convective disks, planets would quickly establish an inner isothermal region thereby completely halting accretion. In Figure 7, we illustrate the effect of adopting different disk conditions on the rate of gas accretion, using the radiative and convective disks defined by Ali-Dib et al. (2020). Even with advection, accretion is never halted before the protoplanetary disk dissipates unlike the isothermal state reached by Ali-Dib et al. (2020) after \(\sim 5\) Myr in the radiative disk and over the entire life of the disk in the convective disk (refer to their Figure 5 for comparison). We attribute the cause of this different result to our use of more realistic EOS, opacity and proper accounting of surface energy in the calculation of cooling timescale. 
Nevertheless, compared to the gas-full MMEN disk, there is a reduction in the final GCR in the radiative and convective disks, which are functionally equivalent to depleting the MMEN disks by factors of \(\sim\)800 and \(\sim\)1200 respectively. Such gas-poor environments have been suggested in late-time formation scenarios as we discuss in Section 4.2. ### How do mini-Neptunes avoid runaway? Although this work argues advection from the protoplanetary disks can only play a limited role on envelope formation timescales of mini-Neptunes, the observed population of such \(\sim 2-20M_{\oplus}\) planets with thin envelopes are still amongst the most common types of exoplanets discovered. We thus need to revert back to other scenarios to explain why accretion onto \(5-10\)\(M_{\oplus}\) cores does not result in the formation of gas giants. The delay in the last mass doubling of planetary cores is naturally explained by gas dynamical friction within the initially gas-rich disk preventing the merger of small proto-tocores until the last \(\sim\)10% (\(\sim 0.1-1\) Myr) of the protoplanetary disk lifetime. Lee and Chiang (2016) showed that the gas-poor conditions within that shorter period of time can be sufficient for core accretion to produce the observed thin envelopes. They also determined that this model is consistent with the degree of depletion required for mergers of protocores (see also Dawson et al., 2016). In addition, late-time core assemblies are consistent with the observed flat orbital period distribution of these small planets (Lee and Chiang, 2017), the observed distribution of orbital period ratios in _Kepler_ multi-planetary systems which feature peaks near (but not in) first-order mean motion resonances (e.g., Choksi and Chiang, 2020) with the majority of planet pairs being far away from such resonances (e.g., Izidoro et al., 2017, 2021), as well as the observed intra-system similarity (e.g., MacDonald et al., 2020; Lammers et al., 2023) whereby planets within a given system have more similar masses and radii compared to system-by-system variations. Therefore, late-time formation of planetary cores remains a likely explanation of the abundance of mini-Neptunes. We note that post formation effects may explain, in part, the thin envelope mass of mini-Neptunes. As shown in Figure 3, realistically accounting for entropy advection can lead to final GCRs of \(\sim\)20-25% at 0.1 AU. At such separation from the central star (\(a\lesssim 0.3\) AU), photoevaporation could play a significant role during the post-formation era in whittling down the final GCR to a value that is consistent with the observed radius and mass (e.g., Lopez et al., 2012; Owen and Wu, 2013, 2017; Jin et al., 2014). However, using a hierarchical inference analysis, Rogers and Owen (2021) argue that the typical initial (pre-evaporation) envelope mass fraction of mini-Neptunes must have been \(\sim\)1-10% in order to explain the observed radius-period distribution (see their Figure 11) under the theory of photoevaporation and so we conclude that entropy advection will not be enough even if we consider post-formation photoevaporative mass loss and that further limiting processes such as late-stage assembly is still required. Alternatively, Inamdar and Schlichting (2016) demonstrated how giant impacts post-formation (i.e., after the disk gas has completely dissipated away) can result in a loss of approximately half the envelope mass. 
Thus, the fully formed dusty envelopes of Figure 3 even at 1 AU could become mini-Neptunes following one or two giant collisions over billion years timescales. It remains unclear however whether the lost material following impact is accreted back to the planet or ejected from the system, unbound from the orbit. Furthermore, N-body calculations report systems that tend to complete the last mergers in gas-_free_ environments end up with rocky planets (\(<\)2\(R_{\oplus}\); see Figures 4 and 8 of MacDonald et al. (2020)) whereas systems with enough material to create more gas-enveloped mini-Neptunes tend to complete the last mergers in gas-poor but not gas-_free_ environments. We conclude that mass loss by giant impacts may not be so relevant for mini-Neptune populations. ### Planet population past 1 AU Currently, little is known about the population of low-mass planets beyond 1 AU as they are mostly detected by transit methods which are only sensitive to orbital periods \(\lesssim\)300 days (e.g., Petigura et al., 2018; Hsu et al., 2019; Wilson et al., 2022). The Nancy Grace Roman Space Telescope set to launch in mid-2020 promises to provide some answers with space based microlensing surveys that are uniquely capable of probing masses as low as \(\sim\)0.01\(M_{\oplus}\) at a few AU (Gaudi et al., 2021; Zhu and Dong, 2021). The amount of gas a planet can accrete depends most sensitively on the mass of the core (Lee and Chiang, 2015; Lee, 2019), a result that is consistent with our calculation (see Figure A.1). Therefore, in this section, we discuss how the current and future measurements of planet masses and mass distribution can be leveraged to distinguish between different formation conditions (e.g., dusty vs. dust-free accretion, core assembly time, penetration depth of advective flows) at orbital distances from 0.1 to a few AU. We focus our attention on sub-Saturns--planets with radii \(\sim\)3-8\(R_{\oplus}\) which are the boundary population between more gas-poor mini-Neptunes and more gas-rich Jupiters. These planets are unambiguously enveloped by H/He-dominated gas rendering their model-inferred gas-to-core mass ratios more accurate and less subject to the degeneracy between different chemical compositions (e.g., Petigura et al., 2017).2 Footnote 2: While Petigura et al. (2017) define the lower limit of sub-Saturn radius as 4\(R_{\oplus}\), we choose slightly smaller 3\(R_{\oplus}\) based on the distinct clusters shown in bulk density-mass space as shown in Figure 1 of Luque and Palle (2022) as well as the clear departure of planets at radii beyond 3\(R_{\oplus}\) from the waterworlds and rocky planets in the mass-radius space shown in Figure 1 of Otegi et al. (2020). Because the planets formally begin their runaway gas accretion at GCR \(\sim\) 0.5 (see Figure 5), we define the mass of the core that reaches this GCR for a given set of formation conditions as the maximal core mass that defines the boundary between those that are gas-poor (and so we can still probe the masses of their cores) vs. those that are gas-rich (and so their masses are dominated by the gas). 
We solve for this core mass by inverting GCR scalings of Lee and Chiang (2015) modified for the core radius-mass relation of Equation (10) and the dependence on \(\alpha_{\rm adv}\) that we found in Equation (23) which takes the form \[\mathrm{GCR}=\mathrm{GCR}_{0}\Big{[}\frac{t}{t_{0}}\Big{(}\frac{M_{\rm core}}{5 M_{\oplus}}\Big{)}^{\frac{3}{4}\frac{1+\alpha}{\gamma-1}-1-\frac{\alpha}{4}} \Big{]}^{\frac{1}{2+\alpha}}\alpha_{\rm adv}^{\theta}, \tag{24}\] where we use the values of \(\theta\) presented in Figure 4 and \(\alpha\) and \(\gamma\) are computed at each orbital distance. We fix the initial unrestricted envelope to GCR\({}_{0}\) = 0.08 and to the associated computed time \(t_{0}\). At short orbital distances, our model-predicted maximal core mass can be directly compared with observations using the measured masses and radii of sub-Saturns. From a visual inspection of Figure 1 of Otegi et al. (2020) and Luque and Palle (2022), we see that the total masses of such planets range from \(\sim\)10-30\(M_{\oplus}\) for both solar-type stars and M dwarfs. We divide this total mass by 1.5 to obtain the inferred core mass assuming a typical GCR of 0.5 (shaded region in Figure 8). As demonstrated in Figure 8, the maximal core mass under dusty accretion is generally independent of orbital distance (because the rate of accretion is set by H\({}_{2}\) dissociation and H- opacity which are insensitive to boundary conditions) and larger than that expected from dust-free accretion (because dusty opacity is generally higher than dust-free opacity), in agreement with Lee et al. (2014). Under dust-free accretion, the maximal core mass decreases to \(\sim\)2-4\(M_{\oplus}\) beyond 1 AU, as expected due to diminished opacity farther away from the star where the disk is colder. We find that at \(\lesssim\)0.3 AU, both dusty and dust-free accretion provide similar maximal core of \(\sim\)10\(M_{\oplus}\) in agreement with current observations. At wider orbits, the two opacity models diverge so it is potentially possible to observationally distinguish between the two formation models. Other parameters such as the total time allotted for gas accretion (at least between 1 and 10 Myr) and \(\alpha_{\rm adv}\) effect factors of \(\lesssim\)2 differences and so would be more difficult to disambiguate using planet mass alone. Current radial velocity measurements are insensitive to these small mass planets beyond \(\sim\)1 AU. While microlensing studies of Suzuki et al. (2018) report a potential break in the planet mass distribution at mass ratio \(\sim\)7\(\times\)10\({}^{-5}\) (consistent with \(\sim\)12\(M_{\oplus}\) assuming 0.5\(M_{\odot}\) host star), the shape of the distribution below this mass ratio is not well constrained and so it remains unclear whether the break is real or not. Theoretically, we would expect such a break to appear at a transition mass between gas-poor and gas-rich planets as the former would more closely track the underlying core mass distribution whereas the latter would track the physics of gas accretion. In order to provide a more direct prediction of our model that is applicable to microlensing measurements which are more sensitive to M dwarf host stars, we construct the gas accretion evolution of planets around 0.5\(M_{\odot}\) host stars and renormalize our scaling relationship to compute the maximal core mass. 
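A sketch of how Equation (24) can be inverted for the maximal core mass is given below; the numerical values of \(\mathrm{GCR}_0\), \(t_0\), \(\alpha\), \(\gamma\) and \(\theta\) are placeholders rather than the fitted values underlying Figure 8.

```python
import numpy as np
from scipy.optimize import brentq

def gcr(M_core, t, alpha_adv, *, gcr0=0.08, t0=1.0e6, alpha=0.5, gamma=1.2, theta=0.4):
    """Equation (24): GCR as a function of core mass (Earth masses) and time (yr).

    gcr0 and t0 normalize the unrestricted envelope; alpha, gamma and theta are
    placeholder values standing in for the opacity-law slope, adiabatic index and
    the GCR-alpha_adv exponent read off Figure 4 for a given setup."""
    p = 0.75 * (1.0 + alpha) / (gamma - 1.0) - 1.0 - 0.25 * alpha
    return gcr0 * ((t / t0) * (M_core / 5.0) ** p) ** (1.0 / (2.0 + alpha)) * alpha_adv ** theta

def maximal_core_mass(t, alpha_adv, gcr_runaway=0.5):
    """Core mass (Earth masses) at which the GCR reaches the runaway threshold at time t."""
    return brentq(lambda m: gcr(m, t, alpha_adv) - gcr_runaway, 0.5, 100.0)

print(maximal_core_mass(t=1.0e6, alpha_adv=0.3))
```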
We adopt a cooler disk midplane temperature, scaling Equation (12) by \(M_{\star}^{2/7}\) following Chachan and Lee (2023); in addition, \(R_{\rm out}\) would also change accordingly given that \(R_{H}\propto M_{\star}^{-1/3}\). As shown in the left panels of Figure 8, there is little difference in our maximal core mass between 0.5\(M_{\odot}\) and 1.0\(M_{\odot}\) host stars because the changes in \(M_{\star}\) make minor differences in our boundary conditions. Like the 1.0\(M_{\odot}\) case, we find that the maximal core mass is \(\sim\)2-3\(M_{\oplus}\) at \(R_{E}\) for dust-free accretion around M dwarfs, significantly smaller than the \(\sim\)6-13\(M_{\oplus}\) expected from dusty accretion around the same host stars. These two limits can be compared to the measurements of breaks in planet mass distributions from future microlensing surveys in order to determine the role of dust opacity (and more generally any physical process that gives rise to orbital-distance-dependent opacities; Chachan et al., 2021) in shaping the planet population at orbits beyond \(\sim\)1 AU.

Figure 8: Maximal core masses of gas-poor planets (GCR\(<\)0.5) expected from the analytic scalings of Section 3.2 as a function of orbital distance. The top and the bottom rows correspond to a total gas accretion time of 1 Myr, representing late-stage core formation, and of 10 Myr, representing early core formation, respectively. The left and right columns depict M dwarf host stars vs. solar mass host stars, respectively, with the planetary orbits corresponding to the Einstein ring radius \(R_{\rm E}\sim 3.5\,{\rm AU}\sqrt{M_{\star}/M_{\odot}}\) (Suzuki et al., 2018), where planet detection by microlensing is most sensitive, indicated by vertical magenta lines. Solid and dashed curves account for dusty and dust-free opacities respectively, with the different colors illustrating a range of \(\alpha_{\rm adv}\) = 1 (black), 0.3 (red), and 0.2 (cyan). The shaded areas correspond to the estimated core mass of planets that are on the verge of runaway, calculated by taking the measured total mass of planets with radii \(\sim\)3–8\(R_{\oplus}\) and dividing them by 1.5, accounting for GCR \(\sim\)0.5; the mass and radii measurements for solar-mass and M dwarf host stars are taken from Otegi et al. (2020, their Figure 1) and Luque and Palle (2022, their Figure 1) respectively. The maximal core masses are the most distinct between dusty vs. dust-free opacity at wide orbits.

## 5 Conclusion

We have investigated the role of atmospheric recycling in delaying or halting gas accretion by cooling onto planetary cores embedded in a gas-rich nebula. Our main findings are summarized as follows:

1. When more realistic EOS and opacities are taken into account, along with proper accounting of surface energies in thermal evolution, except for a very specific scenario (dusty accretion, \(\alpha_{\rm adv}<0.2\), 0.1 AU), advection alone is insufficient to produce mini-Neptunes because, even if runaway may be avoided, the final envelope mass fraction of mini-Neptune mass planets would be \(\gtrsim\)0.2, too large to explain their measured masses and radii. A further limiting process such as late-time core assembly is required.

2. At close-in distances (0.1 AU), the outer advective layer is expected to be isentropic with the disk, while at large distances (1 AU), this outer layer is found to be more radiative, so that the effect of entropy advection is more pronounced at 0.1 AU compared to 1 AU.
3. The dependence of the final gas-to-core mass ratio (GCR) on the penetration depth of the advective flow generally strengthens for deeper flows (lower \(\alpha_{\rm adv}\)) and at close-in orbital distances. The critical \(\alpha_{\rm adv}\) where the behavior of the GCR-\(\alpha_{\rm adv}\) scaling relationship changes is at the locations of the radiative-convective boundaries in unrestricted (\(\alpha_{\rm adv}=1\)) envelopes.

4. The critical core mass that separates gas-poor vs. gas-rich planets is most sensitively determined by the nature of the opacity (dusty vs. dust-free), more so than by the core assembly time and \(\alpha_{\rm adv}\); the predictions deviate more at orbital distances beyond \(\sim\)1 AU, where a measured break in the planet mass distribution from, e.g., a microlensing survey can distinguish between dusty and dust-free gas accretion at these distances.

Our findings highlight the importance of the EOS and opacities in regulating the thermal evolution of planetary envelopes, which are the key differences between our calculations and those of, e.g., Ali-Dib et al. (2020) that lead to different conclusions. Future three-dimensional hydrodynamic simulations that employ realistic EOS and opacities over a wide range of orbital distances would be welcome to verify the 1D calculations showcased in this work. We are indebted to Jason Ferguson for extending and sharing his opacity tables, and we thank Zhaohuan Zhu for his insight and helpful discussions. We also thank Andrew Cumming for his feedback on the initial version of this paper. V.S. acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), of le Fonds de recherche du Quebec - Nature et technologies (FRQNT) and of the Trottier Institute for Research on Exoplanets (iREx) under the Trottier Excellence Grant for Summer Interns. E.J.L. gratefully acknowledges support by NSERC, by FRQNT, by the Trottier Space Institute, and by the William Dawson Scholarship from McGill University.

## Appendix A Parameter Study

A gallery of envelope mass growth profiles across our entire parameter space is presented in Figure A.1. We recover the strong dependence of the GCR on the core mass, finding a more massive \(10M_{\oplus}\) core to undergo runaway accretion in unrestricted envelopes (e.g., Mizuno, 1980; Stevenson, 1982; Pollack et al., 1996; Ikoma et al., 2000). While keeping \(\alpha_{\rm adv}\lesssim\)0.4 is enough to avoid runaway for these massive cores at 0.1 AU, runaway is already unavoidable at 1 AU for formation in gas-rich environments even under severe penetration of advective flows (\(\alpha_{\rm adv}=0.1\)), showcasing that the need for a further limiting process (see Section 4.2) is particularly dire for massive cores.
2306.12958
Radiation Emission during the Erasure of Magnetic Monopoles
We study the interactions between 't Hooft-Polyakov magnetic monopoles and the domain walls formed by the same order parameter within an $SU(2)$ gauge theory. We observe that the collision leads to the erasure of the magnetic monopoles, as suggested by Dvali, Liu, and Vachaspati. The domain wall represents a layer of vacuum with un-Higgsed $SU(2)$ gauge symmetry. When the monopole enters the wall, it unwinds, and the magnetic charge spreads over the wall. We perform numerical simulations of the collision process and in particular analyze the angular distribution of the emitted electromagnetic radiation. As in the previous studies, we observe that erasure always occurs. Although not forbidden by any conservation laws, the monopole never passes through the wall. This is explained by entropy suppression. The erasure phenomenon has important implications for cosmology, as it sheds a very different light on the monopole abundance in post-inflationary phase transitions and provides potentially observable imprints in the form of electromagnetic and gravitational radiation. The phenomenon also sheds light on fundamental aspects of gauge theories with coexisting phases, such as confining and Higgs phases. Additionally to the figures, the results of the numerical simulations can be found in the following video: https://youtu.be/JZaXUYikQbo
Maximilian Bachmaier, Gia Dvali, Juan Sebastián Valbuena-Bermúdez
2023-06-22T15:17:23Z
http://arxiv.org/abs/2306.12958v1
# Radiation Emission during the Erasure of Magnetic Monopoles ###### Abstract We study the interactions between 't Hooft-Polyakov magnetic monopoles and the domain walls formed by the same order parameter within an \(SU(2)\) gauge theory. We observe that the collision leads to the erasure of the magnetic monopoles, as suggested by Dvali, Liu, and Vachaspati [1]. The domain wall represents a layer of vacuum with un-Higgsed \(SU(2)\) gauge symmetry. When the monopole enters the wall, it unwinds, and the magnetic charge spreads over the wall. We perform numerical simulations of the collision process and in particular analyze the angular distribution of the emitted electromagnetic radiation. As in the previous studies, we observe that erasure always occurs. Although not forbidden by any conservation laws, the monopole never passes through the wall. This is explained by entropy suppression. The erasure phenomenon has important implications for cosmology, as it sheds a very different light on the monopole abundance in post-inflationary phase transitions and provides potentially observable imprints in the form of electromagnetic and gravitational radiation. The phenomenon also sheds light on fundamental aspects of gauge theories with coexisting phases, such as confining and Higgs phases. ## 1 Introduction Topological defects play significant roles in different branches of physics. These entities emerge in theories with topologically non-trivial vacuum manifolds. In particular, such manifolds are common in theories with spontaneously broken symmetries. When symmetry breaking takes place in a phase transition during the cosmological evolution, the defects can be formed via the Kibble mechanism [2]. In [1], it has been pointed out that defects can be subjected to a so-called 'erasure' phenomenon. Namely, in some cases, one and the same order parameter simultaneously gives rise to defects of different dimensionality, e.g., magnetic monopoles and domain walls. In such cases, upon the encounter, the less extended defects can be erased by the more extended ones. In [1] this effect was discussed for the domain walls and magnetic monopoles system. In particular, it was pointed out that the grand unified phase transition, which ordinarily creates 't Hooft-Polyakov magnetic monopoles [3, 4], can also give rise to domain walls. Upon the encounter, the magnetic monopole is erased by the domain wall. The essence of the erasure is that the domain wall creates a supporting surface for unwinding the monopole field. The Higgs field vanishes inside the wall. Due to this, the magnetic charge, instead of staying localized at a point, spreads over the entire wall. Original motivation of [1] was the solution to the cosmological magnetic monopole problem [5, 6]. The idea is that the domain walls'sweep away' monopoles and disappear. For short, we shall refer to this dynamics as the DLV mechanism. It was already a subject of numerical studies in [7, 8]. The monopole erasure scenario allows to have the monopole production after the inflationary phase without conflicting with the constraints on the monopole abundance. It therefore 'liberates' the grand unified symmetry from the necessity of being broken during inflation. This is beneficial for some motivated inflationary scenarios predicting the grand unified phase transition after inflation. The DLV erasure mechanism plays an important role in generic quantum field theoretical systems with defects supporting different gauge theories' phases. 
An early example is provided by a confining gauge theory (e.g. \(SU(2)\)) which contains domain walls with de confined \(U(1)\) Coulomb phase of the same gauge interaction [9]. Due to confinement, in the \(SU(2)\) vacuum, the gauge electric field is trapped in the form of QCD flux tubes. However, the wall serves as a base for the spread-out of the QCD electric flux. Correspondingly, for the QCD string, the wall plays a role similar to a \(D\)-brane. Upon encountering such a wall, the QCD string gets erased [10, 11]. The dual version of this, in the form of the erasure of vortices and strings by a domain wall, was recently studied numerically in [12]. One important general question is the efficiency of the erasure. As suggested in the work on the monopole-wall system [1], the erasure mechanism was expected to be very efficient. Although topologically, it is allowed for a monopole to pass through the wall, this passage is expected to be highly improbable. The argument of DLV was based on loss of coherence in the monopole wall collision. Namely, upon collision with the wall, the monopole charge starts to spread in the traveling waves. This makes the further recombination of the monopole on the other side of the wall very unlikely. As supporting evidence for this reasoning, in [1] the results of numerical studies of interactions between the skyrmions and walls [13, 14] were used. In more recent studies, the efficiency of the erasure phenomenon was repeatedly observed in monopole-anti-monopole [15], wall-vortex, and string-wall [12] systems. The analytic explanation of these numerical results was given by substantiation of the DLV coherence loss argument [1] by the entropy-count of [16]. This count indicates that the probability of survival is exponentially suppressed due to the fact that the final state after erasure has a much higher entropy in comparison to a surviving defect. In the present paper, we extend the study of the erasure phenomenon in the monopole-wall system. We use a simple prototype model with an adjoint Higgs field of \(SU(2)\) which possesses \(U(1)\) invariant vacua separated by domain walls (vacuum layers) with \(SU(2)\) invariant phases. The monopoles that exist in the \(U(1)\) phase get erased upon the encounter with the domain walls that support the \(SU(2)\) phase in their interior. Again, we observe that the erasure occurs for the considered parameters regime. The main novelty is the analysis of the emitted electromagnetic radiation during the erasure. The emission of electromagnetic radiation accompanies the spread-out of the magnetic charge of the monopole. This can have several interesting implications both for the theoretical understanding of the erasure phenomenon as well as for its observational consequences. ## 2 Generalities of Radiation Let us review a fundamental phenomenon of classical electrodynamics that will become relevant to our discussion. It is a well-known fact that the acceleration of electric charge leads to an emission of radiation. If we allow the existence of magnetic charges, the acceleration of magnetic charges will lead to the same effect due to the duality of the extended Maxwell equations. The behavior of the electric and magnetic fields, albeit, is exchanged. Consider a point charge \(q\) located at the origin with initial velocity \(u\) and acceleration \(a\), where the direction of the velocity is parallel to the direction of acceleration. 
The energy density of the radiation for this situation can be calculated analytically and is given by [17]

\[\varepsilon=\frac{q^{2}}{16\pi^{2}}\frac{a^{2}}{r^{2}}\frac{\sin^{2}\theta}{\left(1-u\cos\theta\right)^{6}}, \tag{1}\]

where \(r\) is the distance from the charge and \(\theta\) is the angle relative to the direction of movement. As we can deduce from this equation, the energy density is not distributed homogeneously on a sphere around the point charge. Most of the radiation gets emitted in the direction

\[\theta_{\rm max}=\arccos\left(\frac{-1+\sqrt{1+24u^{2}}}{4u}\right). \tag{2}\]

Furthermore, we can notice that the form of the distribution depends only on the initial velocity of the charge and does not depend on the acceleration. The shape of the radiation emission is depicted in figure 1 using a normalized radiation pattern. The greater the initial velocity, the more the loops bend in the direction of the initial motion. We observe that the direction of radiation emitted during the erasure of a magnetic monopole is comparable to the expected one for a constantly accelerated magnetic point charge. We will elaborate on this point below.

Figure 1: The radiation pattern for an accelerated charge with initial velocity \(u\). The radius represents the normalized value of the radiation energy density \(\frac{\varepsilon}{\varepsilon(\theta_{\rm max})}\).

## 3 The Model and its Solutions

We consider a model with an \(SU(2)\) gauge symmetry and a scalar field \(\phi\), transforming under the adjoint representation. This model is a prototype of grand unified theories and is still able to capture the essence of the occurring phenomena. The Lagrangian is given by [11]

\[\mathcal{L}=-\frac{1}{2}\operatorname{Tr}\left(G_{\mu\nu}G^{\mu\nu}\right)+\operatorname{Tr}\left((D_{\mu}\phi)^{\dagger}(D^{\mu}\phi)\right)-V(\phi), \tag{3}\]

with the potential

\[V(\phi)=\lambda\left(\operatorname{Tr}(\phi^{\dagger}\phi)-\frac{v^{2}}{2}\right)^{2}\operatorname{Tr}(\phi^{\dagger}\phi). \tag{4}\]

Notice that \(\lambda\) has the mass dimension \(-2\). The scalar field can be written as \(\phi=\phi^{a}T_{a}\), where the \(SU(2)\) generators \(T_{a}\) are normalized as \(\operatorname{Tr}(T_{a}T_{b})=\frac{1}{2}\delta_{ab}\). The field strength tensor is defined by

\[G_{\mu\nu}\equiv\partial_{\mu}W_{\nu}-\partial_{\nu}W_{\mu}-ig\left[W_{\mu},W_{\nu}\right], \tag{5}\]

with the gauge fields \(W_{\mu}\equiv W_{\mu}^{a}T_{a}\). The covariant derivative has the usual form

\[D_{\mu}\phi\equiv\partial_{\mu}\phi-ig\left[W_{\mu},\phi\right]. \tag{6}\]

The feature of the sextic potential is that it has two disconnected vacua, corresponding to the \(SU(2)\) invariant phase, \(\langle\operatorname{Tr}(\phi^{\dagger}\phi)\rangle=0\), and the phase with \(SU(2)\) Higgsed down to \(U(1)\), \(\langle\operatorname{Tr}(\phi^{\dagger}\phi)\rangle=\frac{v^{2}}{2}\). In the \(SU(2)\) invariant vacuum, the vector fields are massless while \(\phi\) is massive. On the other hand, in the second vacuum, the symmetry group \(SU(2)\) is Higgsed down to \(U(1)\) and two of the vector fields gain the mass \(m_{v}=vg\) through the Higgs mechanism, while one stays massless. The mass of the Higgs boson is given by \(m_{h}=\sqrt{\lambda}v^{2}\). At the quantum level, the \(SU(2)\) invariant vacuum becomes confining. However, for the considered parameters, this can be ignored. We will elaborate more on this later. As a first approximation, let us consider the classical equations of motion.
They are given by \[\partial_{\mu}(D^{\mu}\phi)^{a}+g\varepsilon^{abc}\;W_{\mu}^{b}(D ^{\mu}\phi)^{c}+\frac{\partial V}{\partial\phi^{a}}=0, \tag{7}\] \[\partial_{\mu}G^{a\mu\nu}+g\varepsilon^{abc}\;W_{\mu}^{b}G^{c \mu\nu}-g\varepsilon^{abc}\;(D^{\nu}\phi)^{b}\phi^{c}=0. \tag{8}\] The spectrum of the model contains magnetic monopoles which are realized as solitons in the \(U(1)\) vacuum. Consider the 't Hooft-Polyakov ansatz [3, 4] \[W_{i}^{a} =\varepsilon_{aij}\frac{r^{j}}{r^{2}}\frac{1}{g}(1-K(r)),\] \[W_{t}^{a} =0,\] \[\phi^{a} =\frac{r^{a}}{r^{2}}\frac{1}{g}H(r), \tag{9}\] thus, the field equations (7) and (8) reduce to \[K^{\prime\prime}= \frac{1}{r^{2}}\left(K^{3}-K+H^{2}K\right),\] \[H^{\prime\prime}= \frac{2}{r^{2}}HK^{2}\] \[+m_{h}^{2}\left(\frac{3}{4}\frac{1}{r^{4}m_{v}^{4}}H^{5}-\frac{1 }{r^{2}m_{v}^{2}}H^{3}+\frac{1}{4}H\right). \tag{10}\] To ensure good behavior at the boundary, the following standard boundary conditions are required \[K(r) \xrightarrow{r\to 0}1, K(r) \xrightarrow{r\to\infty}0,\] \[K^{\prime}(r) \xrightarrow{r\to 0}0, \frac{H(r)}{m_{v}r} \xrightarrow{r\to\infty}1,\] \[\frac{H(r)}{m_{v}r} \xrightarrow{r\to 0}0.\] The profile functions \(H(r)\) and \(K(r)\) were found numerically by using an iterative method that starts at the solution in the BPS limit \(m_{h}\to 0\)[18, 19] and relaxes to the solution with \(m_{h}\neq 0\). For the later simulations, we evaluated in this way the profile function for \(\frac{m_{h}}{m_{v}}=1\) (see figure 2). As mentioned before, the present work aims to study the interaction between domain walls and magnetic monopoles. We anticipate that the monopole is erased during the collision, and electromagnetic radiation is emitted in this process. In order to analyze the radiation, we need to know the electric and magnetic fields. Following the standard definitions, the non-abelian magnetic and electric fields can be written analogously to classical electrodynamics as \[B_{k}^{a} =-\frac{1}{2}\varepsilon_{kij}G_{ij}^{a}, \tag{11}\] \[E_{k}^{a} =G_{0k}^{a}. \tag{12}\] Since we are interested in the \(U(1)\) magnetic and electric fields, it is necessary to project out the component that points in the direction of the electromagnetic Figure 2: The profile functions of the magnetic monopole for \(\frac{m_{h}}{m_{v}}=1\). charge operator \(Q=\frac{\phi^{a}}{\sqrt{\phi^{b}\phi^{b}}}T^{a}\). Using the scalar product \(\langle A,B\rangle=2\,\mathrm{Tr}(AB)\) one can find \[B_{k}^{U(1)} =\frac{\phi^{a}}{\sqrt{\phi^{b}\phi^{b}}}B_{k}^{a}, \tag{13}\] \[E_{k}^{U(1)} =\frac{\phi^{a}}{\sqrt{\phi^{b}\phi^{b}}}E_{k}^{a}. \tag{14}\] The definitions (13) and (14) are valid in the \(U(1)\) invariant phase, \(\phi^{b}\phi^{b}=v^{2}\), and thus valid for long distances compared to the size of the monopole core \(\sim m_{h}^{-1}\). The potential (4) allows the two phases to co-exist. Therefore, we can treat the potential as an intermediate step of a first-order phase transition. In the model (3), domain walls interpolate between \(SU(2)\) and \(U(1)\) invariant phases. For a planar domain wall located at \(z=0\) with ansatz \(\phi^{1}=\phi^{2}=0\) and \(\phi^{3}=\phi(z)\), the non-trivial solutions of the Bogomolny equation [18] \[\phi^{\prime}=\pm\sqrt{2V}, \tag{15}\] derived from the field equation (7) are \[\phi_{(\pm v,0)}(z) =\frac{\pm v}{\sqrt{1+e^{m_{h}z}}}, \tag{16}\] \[\phi_{(0,\pm v)}(z) =\frac{\pm v}{\sqrt{1+e^{-m_{h}z}}}. 
\tag{17}\] The boundary values of these solutions are on one side \(\pm v\) and on the other side \(0\), which correspond to the \(U(1)\) invariant phase and the \(SU(2)\) invariant phase, respectively. In order to study the erasure mechanism, we consider the passage of a monopole through an \(SU(2)\) invariant vacuum layer. The vacuum layer can be approximated as a combination of two parallel domain walls, for instance [12] \[\phi_{\mathrm{VL}}(z)=\phi_{(v,0)}(z)+\phi_{(0,v)}(z-h), \tag{18}\] where \(h\) is the distance between the two domain walls. Note that for finite \(h\), the vacuum layer is not a solution to the static field equations, since the wall and anti-wall attract each other. However, the interaction is negligible for \(h\gg m_{h}^{-1}\sim m_{v}^{-1}\). We used \(h=20m_{v}^{-1}\) in the simulations. In this regime, the vacuum layer is sufficiently long-lived during the period of investigation [12]. ## 4 Initial Configuration We numerically study the interaction of a magnetic monopole and an \(SU(2)\) invariant vacuum layer. To achieve this, we numerically solved the equations (7) and (8). As initial configuration, we considered field configurations where the vacuum layer is Lorentz boosted towards the monopole. Upon the collision, we bear out the'sweeping away' mechanism [1]. In particular, we observed that the monopole is unable to pass the layer; instead, the magnetic charge dissolves and spreads out. Additionally, electromagnetic radiation gets emitted. As mentioned before, we expect the form of the radiation pattern to depend on the initial velocity of the magnetic charge. This anticipation prompted us to elaborate on situations where the magnetic monopole is also Lorentz boosted. Furthermore, boosting the magnetic monopole simultaneously with the vacuum layer allows us to check the mechanism for much higher collision velocities. The maximal velocities we could study with an appropriate accuracy were \(0.8\) (in units of \(c=1\)) for the magnetic monopole and \(0.98\) for the vacuum layer. For higher velocities, the resolution of the lattice was not acceptable. These two cases allow us to check the erasure mechanism for Lorentz factors of \(\gamma_{\mathrm{M}}=1.67\) and \(\gamma_{\mathrm{VL}}=5.03\), respectively. Boosting both objects with these velocities albeit leads to the collision relative speed of about \(0.9977\), where we used the addition rule for relativistic velocities \(u=\frac{u_{1}+u_{2}}{1+u_{1}u_{2}}\). Therefore, we were able to check the erasure mechanism for the ultra-relativistic regime up to a gamma factor of about \(\gamma=15\) without changing the resolution of the lattice and thus without increasing the computation time and memory usage of our simulations. Earlier [7, 12], this erasure was only studied in the low-relativistic regime. We developed a general ansatz with arbitrary monopole velocity \(u_{1}\) and vacuum layer velocity \(u_{2}\). Lorentz boosting the vacuum layer solution yields \[\phi_{\mathrm{VL}}(z)\rightarrow\tilde{\phi}_{\mathrm{VL}}(z,t)=\phi_{ \mathrm{VL}}(\gamma_{2}(z-u_{2}t)).\] For the scalar field of the magnetic monopole solution, we have \[\phi_{\mathrm{M}}(\mathbf{r})\rightarrow\tilde{\phi}_{\mathrm{M}}(\mathbf{r},t)=\phi _{\mathrm{M}}(x,y,\gamma_{1}(z-u_{1}t)),\] where \(\gamma_{1}=\frac{1}{\sqrt{1-u_{1}^{2}}}\) and \(\gamma_{2}=\frac{1}{\sqrt{1-u_{2}^{2}}}\) are the Lorentz factors for the magnetic monopole and vacuum layer, respectively. 
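A minimal sketch of the scalar part of this initial configuration (the boosted hedgehog multiplied by the boosted vacuum-layer profile, as in the product ansatz below) is given here; it uses the analytic BPS profile as a stand-in for the relaxed \(m_{h}\neq 0\) profile and omits the gauge-field part and the time derivatives.

```python
import numpy as np

m_v, m_h, g, v = 1.0, 1.0, 1.0, 1.0
h, d = 20.0, 30.0          # layer thickness and monopole-layer distance (units of 1/m_v)

def H_bps(r):
    """BPS-limit monopole profile H(r) = m_v r coth(m_v r) - 1, used here only as a
    placeholder for the numerically relaxed m_h != 0 profile of the simulations."""
    x = np.clip(m_v * r, 1e-8, None)
    return x / np.tanh(x) - 1.0

def phi_vacuum_layer(z):
    """Equation (18): wall/anti-wall pair interpolating U(1) -> SU(2) -> U(1) vacua."""
    return v / np.sqrt(1.0 + np.exp(m_h * z)) + v / np.sqrt(1.0 + np.exp(-m_h * (z - h)))

def phi_initial(x, y, z, t=0.0, u1=0.0, u2=-0.8):
    """Product ansatz for the adjoint scalar at t = 0: boosted hedgehog times boosted
    vacuum-layer profile, returned as the isospin triplet (phi^1, phi^2, phi^3)."""
    g1, g2 = 1.0 / np.sqrt(1.0 - u1**2), 1.0 / np.sqrt(1.0 - u2**2)
    zb_m, zb_w = g1 * (z - u1 * t), g2 * ((z - d) - u2 * t)
    r = np.sqrt(x**2 + y**2 + zb_m**2)
    hedgehog = H_bps(r) / (g * np.clip(r, 1e-8, None)**2)
    return np.array([x, y, zb_m]) * hedgehog * phi_vacuum_layer(zb_w) / v
```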
Since the gauge field is a Lorentz vector, it is necessary to apply the Lorentz transformation matrix to the vector in addition to the transformation of the \(z\)-coordinate. This results in

\[W_{\mathrm{M},\mu}^{a}(\mathbf{r})\rightarrow\tilde{W}_{\mathrm{M},\mu}^{a}(\mathbf{r},t)=\begin{pmatrix}-u_{1}\gamma_{1}W_{\mathrm{M},z}^{a}(x,y,\gamma_{1}(z-u_{1}t))\\ W_{\mathrm{M},x}^{a}(x,y,\gamma_{1}(z-u_{1}t))\\ W_{\mathrm{M},y}^{a}(x,y,\gamma_{1}(z-u_{1}t))\\ \gamma_{1}W_{\mathrm{M},z}^{a}(x,y,\gamma_{1}(z-u_{1}t))\end{pmatrix}.\]

For the combined initial configuration, we use for the \(\phi\) field the product ansatz

\[\phi^{a}(\mathbf{r},t=0)=\frac{1}{v}\tilde{\phi}_{\rm M}^{a}(\mathbf{r},t=0)\,\tilde{\phi}_{\rm VL}(z-d,t=0),\]
\[\partial_{t}\phi^{a}(\mathbf{r},t=0)=\frac{1}{v}\partial_{t}\tilde{\phi}_{\rm M}^{a}(\mathbf{r},t=0)\,\tilde{\phi}_{\rm VL}(z-d,t=0)+\frac{1}{v}\tilde{\phi}_{\rm M}^{a}(\mathbf{r},t=0)\,\partial_{t}\tilde{\phi}_{\rm VL}(z-d,t=0),\]

where \(d\) is the distance between the monopole and the vacuum layer. For large enough distances, \(d\gg m_{h}^{-1}\), the field \(\phi^{a}\) goes to \(\phi_{\rm M}^{a}\) for \(z<d/2\). For \(z>d/2\), the field \(\phi^{a}\) approaches the value \(\phi_{\rm VL}\hat{p}^{a}\). With our ansatz, there is no long-distance force between the monopole and the layer. We need to check the validity of this approximation. In reality, for finite \(d\) and \(h\), we expect several sources of interaction. The most significant is expected to be the quantum effect coming from the \(SU(2)\) gauge bosons, which acquire non-trivial mass profiles in the layer. First, let us assume that the \(SU(2)\) theory stays in the perturbative weak coupling regime inside the layer. The parameter regime justifying this assumption will be specified below. In such a case, the perturbative quantum effects will generate some \(d\)-dependent corrections to the magnetic field energy. This correction can be estimated as follows. In the \(U(1)\) invariant vacuum, the running gauge coupling \(g^{2}\) freezes at the scale of the mass gap of the theory. This gap is given by the masses of gauge and Higgs bosons in this vacuum, \(m_{v}\sim m_{h}\). The effective low energy theory below this scale is a theory of a free massless \(U(1)\) Maxwell field. In the absence of the layer, the asymptotic value of the magnetic field energy density would be given by \(|B^{U(1)}|^{2}\to\frac{1}{g^{2}}\frac{1}{r^{4}}\). The presence of the \(SU(2)\) invariant layer changes this energy in the following way. Inside the \(SU(2)\) invariant layer, the Higgs mass is essentially the same as in the \(U(1)\) vacuum and is \(\sim m_{h}\). The Higgs thereby decouples below this scale also in the effective theory inside the layer. However, this is not the case for the gauge bosons. Since the Higgs VEV vanishes in the layer and we work in the regime \(h\gg m_{v}^{-1}\), the gauge coupling in the layer continues to evolve all the way to the scale \(h^{-1}\). This running is similar to the one in a pure \(SU(2)\) gauge theory. Since such a theory is asymptotically free, the evolved gauge coupling in the layer (\(\equiv g_{L}^{2}\)) is stronger than the gauge coupling in the exterior (\(\equiv g_{E}^{2}\)), \(g_{L}^{2}=g_{E}^{2}+\delta g^{2}\). The difference is positive and is

\[\delta g^{2}=\frac{11}{12\pi^{2}}g_{E}^{4}\ln(m_{v}h)+{\cal O}(g_{E}^{6}). \tag{20}\]

Thus, the presence of the layer decreases the magnetic energy of the monopole (see figure 3), resulting in an attractive potential between the two.
Up to one-loop order, one can approximate it as \[V(d)\approx-\frac{11}{24\pi}\frac{h}{d(d+h)}\ln(m_{v}h). \tag{21}\] The acceleration of the monopole caused by the corresponding force is \(a\sim\frac{g_{L}^{2}h}{m_{v}d^{3}}\ln(m_{v}h)\) for \(h\ll d\). This force can be safely ignored at large distances. Once the monopole enters the layer, the interaction is dominated by the classical profile of the Higgs field. This is explicitly taken into account by our numerical analysis. Let us now turn to the validity condition of the above-assumed perturbative weak coupling regime inside the layer. This condition is rather simple. Namely, the gauge coupling inside the layer must stop running before it hits the strong coupling scale of the gauge \(SU(2)\) theory, \(\Lambda\). This gives us a condition, \[h^{-1}\gg\Lambda\,. \tag{22}\] In the opposite case, \(h^{-1}<\Lambda\), the theory inside the layer will enter the strong coupling regime. The \(SU(2)\) vacuum will become confining and generates a mass gap at the scale \(\Lambda\). This leads to the effect of repelling the \(U(1)\) electric flux from the \(SU(2)\) invariant vacuum towards the \(U(1)\) invariant one, as originally studied in [9, 10, 11]. Correspondingly, if the \(SU(2)\) layer is thicker than the scale \(\Lambda^{-1}\), the magnetic flux becomes screened in its interior. This effect is illustrated in figure 4. We thereby work in a regime in which the thickness of the layer is much smaller than the scale of \(SU(2)\) confinement. Then, the quantum effects on the \(U(1)\) field Figure 3: The magnetic energy density of a magnetic monopole in the presence of an \(SU(2)\) invariant vacuum layer taking into account the quantum correction (20) for the coupling constant. are reduced to the perturbatively-generated attractive potential (21) between the monopole and the layer [9]. Note that the layer will become a dual superconductor in the regime \(h^{-1}<\Lambda\). The magnetic field of the monopole will induce the surface charges that will screen the field inside the layer. However, the magnetic Gauss law will still hold. The magnetic flux terminating on the surface charges from one side of the layer will be exactly equal to the flux originating from the opposite side. This regime goes beyond our numerical analysis and will not be considered. Hence, we can use the following initial ansatz for the gauge fields. \[W^{a}_{\mu}(\mathbf{r},t=0) =\tilde{W}^{a}_{\mathrm{M},\mu}(\mathbf{r},t=0), \tag{23}\] \[\partial_{t}W^{a}_{\mu}(\mathbf{r},t=0) =\partial_{t}\tilde{W}^{a}_{\mathrm{M},\mu}(\mathbf{r},t=0). \tag{24}\] For the ansatz and the simulations, we take the Lorenz gauge \(\partial_{\mu}W^{\mu}_{a}=0\). Let us note that Ambjorn and Olesen pointed out in [20] that for an uniform magnetic field \(B>\frac{m_{s}^{2}}{g}\), the massive vector bosons can condense. This is happening because the magnetic field provides a bilinear term in the gauge fields that generates some imaginary frequency modes. This effect does not take place in the present case. Even though the Higgs profile vanishes inside the layer, the positive masses of the off-diagonal gauge bosons are still much higher than the negative contribution from the magnetic field. The latter, therefore, is insufficient for destabilizing the vacuum inside the layer. ## 5 Numerical Implementation For the simulations, we used the programming language Python with the package Numba [21], which translates our Python code into fast machine code. 
Thereby this decreases the computation time substantially. For a further increase of the computation speed and also an enhancement of the utilization of the working memory, we benefit from the axial symmetry of the system: \[\phi^{1}=xf_{1} \phi^{2}=yf_{1} \phi^{3}=zf_{2}\] \[W^{1}_{x}=xyf_{3} W^{2}_{x}=-x^{2}f_{3}+f_{4} W^{3}_{x}=-yf_{6}\] \[W^{1}_{y}=y^{2}f_{3}-f_{4} W^{2}_{y}=-xyf_{3} W^{3}_{y}=xf_{6}\] \[W^{1}_{z}=yf_{5} W^{2}_{z}=-xf_{5} W^{3}_{z}=0\] \[W^{1}_{t}=yf_{7} W^{2}_{t}=-xf_{7} W^{3}_{t}=0 \tag{25}\] where the functions \(f_{i}\) depend only on the radius \(r\) around the \(z\)-axis, \(z\) and the time \(t\). With this method, it was sufficient to use only three lattice points in the \(y\)-direction. The equations were solved on the \(y=0\) plane, and for the neighboring planes, we used axial symmetry to determine the corresponding values of the fields. This idea was adapted from an earlier paper by Pogosian and Vachaspati [7]. The implementation of this symmetry was realized according to [22]. The second iterative Crank-Nicolson method described in [23] was applied for the time evolution. With the Python program, we analyze the following four cases: \[(I) u_{1}=0 u_{2}=-0.8\] \[(II) u_{1}=0.4 u_{2}=0\] \[(III) u_{1}=0.8 u_{2}=0\] \[(IV) u_{1}=0.8 u_{2}=-0.98\] The first three cases will be used to study the electromagnetic radiation which gets emitted during the collision between the monopole and the domain wall. The fourth case serves as a simulation of the erasure mechanism for the ultra-relativistic regime with a Lorentz factor of around 15. The lattice spacing in \(x\)- and \(y\)-direction was chosen to be \(0.25m_{v}^{-1}\). For the cases with monopole velocity \(u_{1}<0.8\) the lattice spacing in \(z\)-direction was also \(0.25m_{v}^{-1}\) and the time step was set to \(0.1m_{v}^{-1}\). For the cases with monopole velocity \(u_{1}=0.8\) we chose \(0.125m_{v}^{-1}\) for the lattice spacing in \(z\)-direction and \(0.05m_{v}^{-1}\) for the time step. For all four cases, we took the lattice size \([-60m_{v}^{-1},60m_{v}^{-1}]\) in the x-direction. For \((I)\) the size in \(z\)-direction was chosen to be \([-60m_{v}^{-1},60m_{v}^{-1}]\) and for \((II)-(IV)\) we chose \([-30m_{v}^{-1},90m_{v}^{-1}]\). The time interval under investigation was \([0m_{v}^{-1},150m_{v}^{-1}]\). Figure 4: In a superconductor layer, the electric field is screened. Due to this, the electric flux lines terminate on the surface charges (left). At the same time, the magnetic flux lines are repelled (right). The SU(2) invariant vacuum represents a dual superconductor, and the behavior of magnetic and electric flux lines is reversed [9; 11]. Correspondingly, such a vacuum layer repels the electric flux while the magnetic flux terminates on surface magnetic image charges. Figure 5: Evolution of the potential energy density for the case (\(I\)) in the \(y=0\) plane. The length and time values are in units of \(m_{v}^{-1}\), and the energy density in units of \(\frac{m_{v}^{4}}{g^{2}}\). The vacuum layer moves over the monopole and unwinds it. Furthermore, we can observe radial disturbances that move along the first domain wall with the speed of light. The second domain wall also shows some deformations through the backreaction of the emitted radiation. Figure 6: Evolution of the magnetic energy density for the case (\(I\)) in the \(y=0\) plane. The length and time values are in units of \(m_{v}^{-1}\), and the energy density is in units of \(\frac{m_{v}^{4}}{g^{2}}\). 
The black lines illustrate the \(SU(2)\) invariant vacuum layer. We used the value \(\sqrt{\varphi^{a}\phi^{a}}=0.5\) to draw its contour. As we can see, after the collision between the vacuum layer and the magnetic monopole, part of the magnetic energy moves away radially. In contrast, most of the magnetic energy is captured within the layer where the magnetic charge spreads. Figure 7: Evolution of the magnetic energy density for the case (\(IV\)) in the \(y=0\) plane. The length and time values are in units of \(m_{v}^{-1}\), and the energy density is in units of \(\frac{m_{v}^{4}}{g^{2}}\). Again, the black lines illustrate the \(SU(2)\) invariant vacuum layer. We observe the same behavior as for case (\(I\)). The magnetic energy of the monopole unwinds, the remaining energy moves away radially, and most of the energy is captured within the two domain walls. One further particular detail can be extracted from these figures. The magnetic energy is not erased immediately everywhere. It takes a finite time for the magnetic field to respond to the spread of the magnetic source. An electromagnetic pulse transports the information about the erasure. The distance between the two domain walls of the vacuum layer was set to \(h=20m_{v}^{-1}\), and the distance between the monopole and the vacuum layer was chosen to be \(d=30m_{v}^{-1}\). The constants \(m_{v}\), \(m_{h}\), and \(g\) were set to one. ## 6 Results In all four cases, \((I)-(IV)\), we observe the erasure of the magnetic monopole during the collision with the vacuum layer. For the case \((I)\), some frames of the evolution of the potential energy density and magnetic energy density can be found in figures 5 and 6, respectively. For the ultra-relativistic case \((IV)\), the evolution of the magnetic energy density is plotted in figure 7. With this, we checked the DLV mechanism [1] for the \(SU(2)\) gauge theory with \(\phi^{6}\) potential for low relativistic and ultra-relativistic collision velocities. Additionally to the figures, the results of the numerical simulations can be found in the following video: [https://youtu.be/JZAXUYikQbo](https://youtu.be/JZAXUYikQbo) Note that in figures 6, 7 there is spherical energy radiation with a factor of around \(10^{-3}\) smaller than the energy density in the magnetic monopole's core. This observation is valid for all the considered monopole and vacuum layer velocities. This radiation spreads at the speed of light and corresponds to electromagnetic radiation. We confirmed it by analyzing the Fourier spectrum of the pulse. Before we continue with the investigation of the form of electromagnetic radiation, we give some more comments on the phenomena of erasure itself. As we mentioned, the magnetic monopole is always erased, and there is no evidence to suggest that it could pass through the vacuum layer, even in the ultra-relativistic regime. This phenomenon can be attributed to the loss of coherence [1]. After the collision, most of the coherence is carried away by the radiation. This line of reasoning has already been presented in previous studies about monopole anti-monopole annihilation [15] and vortex erasure [12]. Furthermore, this behavior is also explained by entropy arguments. A state with radiation has more entropy than a state with a monopole. The entropy of a monopole is significantly lower than the entropy needed to saturate the unitarity bound [16], and thus the recreation of a monopole is strongly suppressed. 
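For orientation, the initial one-loop attraction (21) is indeed negligible for the geometry quoted above. The following few lines are a sketch in the simulation units \(m_{v}=g=1\); note that \(h=20\,m_{v}^{-1}\) and \(d=30\,m_{v}^{-1}\) only marginally satisfy the \(h\ll d\) assumption behind the acceleration estimate, so this reproduces the order of magnitude only.

```python
import numpy as np

# Simulation units m_v = g = 1 and the geometry quoted above.
m_v, g = 1.0, 1.0
h, d = 20.0, 30.0   # layer thickness and initial monopole-layer distance

# One-loop attractive potential, eq. (21), and the acceleration estimate.
V = -11.0 / (24.0 * np.pi) * h / (d * (d + h)) * np.log(m_v * h)
a = g**2 * h / (m_v * d**3) * np.log(m_v * h)

print(f"V(d) ~ {V:.1e}")   # about -6e-3 in units of m_v
print(f"a    ~ {a:.1e}")   # about  2e-3 (order of magnitude only)
```

Both numbers are small compared with the classical scales of order one, consistent with the statement that the quantum force can be ignored until the monopole reaches the layer.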
To characterize the identified electromagnetic radiation, we can study the direction of its magnetic and electric fields. In figure 8, some frames of the evolution of the magnetic and electric field are depicted. Before the collision, the magnetic field pointed radially away from the center where the monopole was located. After the layer passes over the monopole, the magnetic field shifts in the direction toward the positive side of the \(z\)-axis. This shift proceeds at the speed of light and is a consequence of the appearance of an induced current during the interaction process. The current flows in circles around the \(z\)-axis, leading to a magnetic field perpendicular to the wall, i.e. parallel to the \(z\)-axis. During the erasure of the monopole, an electric field emerges and spreads away radially. In the \(y=0\) plane, the electric field points only in the \(y\)-direction. From the axial symmetry of our system, we can conclude that the electric field lines are circles around the \(z\)-axis. The outer electric field of the pulse points anti-clockwise around the \(z\)-axis, whereas the inner electric field points clockwise around the \(z\)-axis. The magnetic field arrows (see figure 9) wriggle in a banana-shaped form around the pulse's center. Although the interaction analyzed here is a combined process of the erasure and acceleration of a magnetic monopole, and the magnetic charge is not located at one point, the behavior of the electromagnetic radiation is qualitatively the same as for an accelerated magnetic point charge. Figure 8: Evolution of the magnetic and electric field for the case \((I)\) in the \(y=0\) plane. The length and time values are in units of \(m_{v}^{-1}\). The arrows illustrate the direction of the magnetic field, whereas the red and blue colors illustrate the electric field. Red colors represent positive values of \(E_{y}\), and blue colors represent negative values of \(E_{y}\). Initially, the magnetic field arrows point radially away from the origin, where the magnetic monopole is located. After the collision, the arrows adjust to the positive \(z\)-direction when the electromagnetic pulse moves over them. From the axial symmetry of the system, we can conclude that the electric field lines are circles around the \(z\)-axis that extend with time. The previous observations prompted us to reconstruct a radiation pattern for different initial monopole velocities to compare it with equation (1). We approximated the center of radiation emission using the radiation energy density data. Furthermore, we integrated the radiation energy density over the pulse and created a radiation pattern to see in which direction most radiation gets emitted. For the cases \((I)\), \((II)\) and \((III)\), we chose the times \(85m_{v}^{-1}\), \(110m_{v}^{-1}\) and \(85m_{v}^{-1}\), respectively, and constructed the radiation patterns at these moments of time from the electromagnetic energy density \(\varepsilon=\frac{1}{2}\Big{|}\mathbf{E}^{\mathrm{U(1)}}\Big{|}^{2}+\frac{1}{2} \Big{|}\mathbf{B}^{\mathrm{U(1)}}\Big{|}^{2}\). The results are given in figure 10. The loops are not bent in the same way as in the case of an accelerated point charge, given in figure 1. Nevertheless, the behavior of the angle \(\theta_{\mathrm{max}}\) corresponding to the maximum of radiation emission is qualitatively consistent with equation (2), which describes the radiation emitted by an accelerated point charge. This behavior is independent of the velocity of the vacuum layer. 
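A minimal sketch of how such a pattern can be assembled from the field data on the \(y=0\) plane is given below. The array names, the emission-center estimate, and the radial cut-off standing in for the selection of the pulse region are assumptions made for illustration rather than the exact post-processing used here.

```python
import numpy as np

def radiation_pattern(E, B, x, z, center, r_min, n_bins=72):
    """Angular distribution of the U(1) electromagnetic energy density
    eps = |E|^2/2 + |B|^2/2 on the y = 0 plane.

    E, B   : arrays of shape (3, nz, nx) with the projected U(1) fields
    x, z   : 1D coordinate arrays
    center : (x0, z0), estimated center of the radiation emission
    r_min  : exclude points closer than this (monopole core / wall region)
    """
    eps = 0.5 * np.sum(E**2, axis=0) + 0.5 * np.sum(B**2, axis=0)
    X, Z = np.meshgrid(x, z)              # shape (nz, nx)
    dx0, dz0 = X - center[0], Z - center[1]
    r = np.hypot(dx0, dz0)
    theta = np.arctan2(dx0, dz0)          # angle measured from the +z axis
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    pattern, _ = np.histogram(theta[r > r_min], bins=bins,
                              weights=eps[r > r_min])
    return bins, pattern / pattern.max()  # normalized E / E_max
```

Plotting the normalized pattern against the bin centers in polar coordinates yields diagrams of the type shown in figure 10.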
## 7 Conclusion and Outlook In this work, we confirm the DLV mechanism of erasure of magnetic monopoles by domain walls [1]. We performed our numerical study on a prototype model with \(SU(2)\) gauge symmetry, which possesses degenerate vacua with \(U(1)\) and \(SU(2)\) invariant phases [10, 11]. Correspondingly, it has a solution in the form of a layer of \(SU(2)\) invariant vacuum, 'sandwiched' in between the \(U(1)\) invariant vacua. The layer is taken to be sufficiently thin so that the effects of the \(SU(2)\) confinement on the gauge fields, discussed in [9, 11], can be ignored. The \(U(1)\) vacua support the 't Hooft-Polyakov magnetic monopoles. When a monopole meets the wall, it gets erased, and the magnetic charge spreads in the layer. We study the process of the erasure numerically. Special attention is paid to the emission of electromagnetic radiation. Remarkably, our simulations allow us to analyze the radiation dynamics convincingly, despite its relatively low energy. The radiation emission resembles the radiation emitted due to the acceleration of a magnetic point charge. We noted these similarities in the shape of the electric and magnetic fields and in the form of the radiation pattern. This paper serves as a proof of principle and as motivation for future work, since it provides a way to characterize and extract possible observables of the DLV mechanism. Given that this mechanism operates in the early universe, it could have relevant effects on the cosmic microwave background. Studies in this direction already exist for cosmic strings [24] and domain wall networks [25]. Additionally, the erasure of defects may contribute to the emission of high-energy particles in the early universe, similar to the study of radiation from monopoles and anti-monopoles connected by strings [26]. Furthermore, our analysis of the erasure mechanics can be straightforwardly generalized to larger symmetry groups. The next step is to consider the study of gravitational radiation from the erasure of topological defects. It is a new mechanism that gives relevant imprints in addition to the known scenarios of gravitational wave emission from phase transitions in the early universe (for a review see, for instance, [27]). In this direction, the gravitational radiation from topological defects was previously studied in the context of monopoles connected by strings. Originally, this was performed by Martin and Vilenkin in the point-like approximation [28]. A more recent study [15], which goes beyond this approximation, reveals that in the regime of comparable widths of strings and monopoles, the monopole and anti-monopole never pass through one another and oscillate. Instead, they get directly erased (annihilated) in a single collision, converting the entire energy into waves of Higgs, gauge, and gravitational fields. In the present analysis of the wall-monopole collision, a similar maximal rate of erasure is observed. Due to this, we expect a high efficiency of gravitational wave production during the erasure. This will be studied elsewhere. Figure 9: Magnetic and electric field for case \((I)\) at time \(t=75m_{v}^{-1}\). The length and time values are given in \(m_{v}^{-1}\) units. Figure 10: Radiation patterns for the cases \((I)\), \((II)\) and \((III)\) at some moments after the collision between the magnetic monopole and the \(SU(2)\) invariant vacuum layer. The radius represents the normalized value of the radiation energy \(\frac{E}{E_{\mathrm{max}}}\).
## Acknowledgements This work was supported in part by the Humboldt Foundation under Humboldt Professorship Award, by the European Research Council Gravities Horizon Grant AO number: 850 173-6, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868, and Germany's Excellence Strategy under Excellence Cluster Origins. **Disclaimer:** Funded by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
2303.11457
Team Coordination on Graphs with State-Dependent Edge Cost
This paper studies a team coordination problem in a graph environment. Specifically, we incorporate "support" action which an agent can take to reduce the cost for its teammate to traverse some edges that have higher costs otherwise. Due to this added feature, the graph traversal is no longer a standard multi-agent path planning problem. To solve this new problem, we propose a novel formulation by posing it as a planning problem in the joint state space: the joint state graph (JSG). Since the edges of JSG implicitly incorporate the support actions taken by the agents, we are able to now optimize the joint actions by solving a standard single-agent path planning problem in JSG. One main drawback of this approach is the curse of dimensionality in both the number of agents and the size of the graph. To improve scalability in graph size, we further propose a hierarchical decomposition method to perform path planning in two levels. We provide complexity analysis as well as a statistical analysis to demonstrate the efficiency of our algorithm.
Sara Oughourli, Manshi Limbu, Zechen Hu, Xuan Wang, Xuesu Xiao, Daigo Shishika
2023-03-20T21:14:58Z
http://arxiv.org/abs/2303.11457v1
# Team Coordination on Graphs with State-Dependent Edge Cost ###### Abstract This paper studies a team coordination problem in a graph environment. Specifically, we incorporate "support" action which an agent can take to reduce the cost for its teammate to traverse some edges that have higher costs otherwise. Due to this added feature, the graph traversal is no longer a standard multi-agent path planning problem. To solve this new problem, we propose a novel formulation that poses it as a planning problem in the joint state space: the _joint state graph_ (JSG). Since the edges of JSG implicitly incorporate the support actions taken by the agents, we are able to now optimize the joint actions by solving a standard single-agent path planning problem in JSG. One main drawback of this approach is the curse of dimensionality in both the number of agents and the size of the graph. To improve scalability in graph size, we further propose a hierarchical decomposition method to perform path planning in two levels. We provide complexity analysis as well as a statistical analysis to demonstrate the efficiency of our algorithm. ## I Introduction In this work, we are interested in designing coordinated group motion, where the safety or cost for one agent to move from one location to another may depend on the support provided by its teammate. As an example, let's say there are two robots traversing an environment represented as a graph in Fig. 1. Starting from 1, the robots face a wall, represented by a red edge. The robots could either climb a ladder together and potentially fall and break (move from 1 to 4 together), or one robot could hold the ladder (support from 2) while the other moves up from 1 to 4. The former option is high risk, while the latter is low risk and preferable. Alternatively, if the ladder is bolted to the ground, then climbing together can be low risk and preferable. This paper develops a framework to study when such coordination is beneficial. The terms cooperation and coordination take various meanings in different contexts. There is research done on the coordination of actions of agents to reach a state of order, such as consensus and formation control [1, 2, 3, 4, 5]. Others study cooperation in terms of simultaneously performing tasks in a spatially extended manner, like in surveillance [6, 1] and sampling [7]. Cooperation is also explored in problems where agents need to react locally to avoid conflict or collision, as can be seen in transportation systems on the road [8], in the air [9], and in general robotic cooperation problems [10, 11, 12]. We see in these situations that there is little coupling between the agents - agents do not rely on each other to make progress, but simply need to not be in each other's paths. In this work, we are interested in tightly coupled agents that depend on each other for _support_ in order to meet their objective. We study support in the context of mitigating some risks that exist in the environment. Such risk has been formulated and studied in various ways. For instance, probability of achieving certain levels of performance in a stochastic setting has been considered [13, 14, 15, 16]. Others have considered types of risk measures such as coherent risk measures [17], like conditional value-at-risk (CVaR) [18, 19] and entropic value-at-risk (EVAR) [20]. Risk can also be characterized in terms of chance constraints [21]. Game theory is considered to account for the risk associated with the uncertainty in the adversary's behavior [22]. 
Yet, risk can also be described purely as the "cost" of traversal [13]. In this work, we use only this cost-of-traversal approach, to simplify the analysis. Cooperation has been studied in both centralized and distributed settings. Decentralized systems are better at handling scalability and computational efficiency [23, 24]. When it comes to Distributed Continual Planning (DCP) [25], plan generation and execution can happen concurrently. As it relies on communication between agents, it is better suited for online planning. On the other hand, centralized systems are better for offline planning [26]. They are less likely to suffer from communication costs, information loss, and synchronization issues [27]. A centralized approach is better suited for tightly coupled agents that require a high degree of coordination [28]. For that reason, we use a centralized approach in our work. Since we take a centralized approach, ensuring computational tractability becomes a challenge. Approaches to simplifying a multi-agent planning problem have been widely studied, such as decomposition, graph reformulation, and Fig. 1: Example of an environment graph with risk edges and supporting nodes.
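To make the joint-state-graph idea from the abstract concrete, the sketch below builds a two-agent JSG over a toy environment graph and solves it as a single-agent shortest-path problem. The graph, the cost numbers, the one-agent-at-a-time move model, and the support rule are illustrative assumptions rather than the formulation used in the paper, and collision constraints are omitted.

```python
import itertools
import networkx as nx

# Environment graph: ordinary edges with a traversal cost, and "risk" edges
# whose cost drops when a teammate stands at a supporting node.
nodes = [1, 2, 3, 4]
plain_edges = {(1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0}
# risk edge (u, v): (cost without support, cost with support, supporting nodes)
risk_edges = {(1, 4): (10.0, 2.0, {2})}

def edge_cost(u, v, teammate):
    """Cost for one agent to traverse (u, v) while its teammate sits at `teammate`."""
    e = (u, v) if (u, v) in plain_edges or (u, v) in risk_edges else (v, u)
    if e in plain_edges:
        return plain_edges[e]
    hi, lo, supporters = risk_edges[e]
    return lo if teammate in supporters else hi

# Joint state graph: a vertex is a pair of agent positions; an edge moves one
# agent along an environment edge while the other (possibly) provides support.
env = set(plain_edges) | set(risk_edges)
J = nx.Graph()
for a, b in itertools.product(nodes, nodes):
    for (u, v) in env:
        if a in (u, v):
            a2 = v if a == u else u
            J.add_edge((a, b), (a2, b), weight=edge_cost(a, a2, b))
        if b in (u, v):
            b2 = v if b == u else u
            J.add_edge((a, b), (a, b2), weight=edge_cost(b, b2, a))

# Standard single-agent shortest path in the joint space.
path = nx.dijkstra_path(J, (1, 1), (4, 4), weight="weight")
print(path)  # e.g. one agent moves to node 2 to support before the other crosses 1-4
```

In this toy instance, the optimal joint plan has one agent detour to the supporting node so that its teammate can cross the risk edge cheaply, which is exactly the kind of coordination the JSG encodes.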
2302.06202
Circle Packings from Tilings of the Plane
We introduce a new class of fractal circle packings in the plane, generalizing the polyhedral packings defined by Kontorovich and Nakamura. The existence and uniqueness of these packings are guaranteed by infinite versions of the Koebe-Andreev-Thurston theorem. We prove structure theorems giving a complete description of the symmetry groups for these packings. And we give several examples to illustrate their number-theoretic and group-theoretic significance.
Philip Rehwinkel, Ian Whitehead, David Yang, Mengyuan Yang
2023-02-13T09:20:13Z
http://arxiv.org/abs/2302.06202v1
# Circle packings from tilings of the plane ###### Abstract. We introduce a new class of fractal circle packings in the plane, generalizing the polyhedral packings defined by Kontorovich and Nakamura. The existence and uniqueness of these packings are guaranteed by infinite versions of the Koebe-Andreev-Thurston theorem. We prove structure theorems giving a complete description of the symmetry groups for these packings. And we give several examples to illustrate their number-theoretic and group-theoretic significance. ## 1. Introduction The well-known Apollonian circle packing can be constructed from a set of four base circles, and four dual circles, as shown in Figure 1. The orbit of the base circles under the group generated by reflections through the dual circles is the packing, an infinite fractal set of circles. Beyond their aesthetic appeal, Apollonian packings have properties of great interest in number theory, group theory, and fractal geometry. In [13], Kontorovich and Nakamura define polyhedral circle packings, generalizing the Apollonian packing construction. Any circle configuration has a tangency graph, with a vertex for each circle and an edge for each tangency Figure 1. Constructing the Apollonian packing between circles. In the case of the Apollonian packing, both the base circles and the dual circles have tetrahedral tangency graphs. In general, one can start with a finite set of base circles whose tangency graph is the graph of any polyhedron and a finite set of dual circles whose tangency graph is the graph of the dual polyhedron. The orbit of the base circles under the group generated by the dual circles is a polyhedral packing. Polyhedral packings encompass many of the generalizations of the Apollonian packing that have been studied previously. For example, the packing introduced by Guettler and Mallows [9] is the octahedral packing; the \(\mathbb{Q}[\sqrt{-2}]\) packing studied by Stange [22] is the cubic packing. In this article, we study packings which originate from infinite configurations of base and dual circles, a further generalization. A particularly symmetric example is shown in Figure 2. In this example, both the base and dual circles have the square lattice as their tangency graphs. Again, the orbit of the base circles under the group generated by the dual circles is a fractal set of circles. We call this object the square packing. In general, we work with a base circle configuration \(B\) and a dual configuration \(\hat{B}\) whose tangency graphs determine dual tilings, or cellular decompositions of the sphere \(\hat{\mathbb{C}}\) or the plane \(\mathbb{C}\)-see Definition 2.1. Tilings of the sphere give rise to polyhedral packings, while tilings of the plane give rise to new examples. In both cases, the final packings have similar geometric properties. For example, the circles in the packing are pairwise disjoint or tangent, they can be oriented with disjoint interiors, and the interiors are dense in the ambient space. The symmetry groups of polyhedral packings and of our new examples have similar structure, as we prove in Section 4. And some of our new examples have integrality properties, raising number-theoretic questions. One motivation to introduce these new packings comes from the literature on the Koebe-Andreev-Thurston theorem and its generalizations. The Koebe-Andreev-Thurston theorem [12, 3, 24] is the remarkable result that Figure 2. 
Constructing the square packing any pair of finite graphs \(G\), \(\hat{G}\) representing a polyhedron and its dual can be realized as the tangency graphs for a pair of dual circle configurations \(B\), \(\hat{B}\). Moreover, the circle configurations \(B\), \(\hat{B}\) are unique up to conformal automorphism of \(\hat{\mathbb{C}}\) (see Theorem 3.1 for a precise statement). The Koebe-Andreev-Thurston theorem implies the existence and uniqueness up to Mobius transformation of a circle packing for every polyhedron. It is natural to try to extend this theorem to infinite graphs. Important work of Beardon-Stephenson and of Schramm achieves this in many cases [4, 19, 20], following a constructive approach suggested by Thurston. Stephenson's text [23] synthesizes this work. One result is that any infinite graph \(G\) representing a triangulation of the plane can be realized as the tangency graph for a circle configuration \(B\), and \(B\) is unique up to conformal automorphism of \(\mathbb{C}\) (see Theorem 3.2). The proofs use deep geometric ideas--mappings between circle configurations give a discrete analogue of the Riemann mapping theorem. Just as the construction of polyhedral circle packings relies on the finite Koebe-Andreev-Thurston theorem, our construction relies on its infinite generalizations. In Section 3, we state various versions of the theorem which imply the existence and uniqueness up to conformal automorphism of many of our packings. We also make a more general conjecture which would imply existence and uniqueness in all cases. Our work combines geometric ideas from the infinite Koebe-Andreev-Thurston theorem with arithmetic ideas from the Apollonian packing and its relatives. Another motivation comes from the definitions of crystallographic packings in [13]. These are a class of packings which generalize the Apollonian packing, and encompass many known examples of circle and sphere packings. To construct crystallographic circle packings, one can start with a geometrically finite reflection group which acts on \(\mathbb{H}^{3}\) with finite covolume. Each wall of the fundamental chamber intersects the spherical boundary of \(\mathbb{H}^{3}\) in a circle. Suppose that these circles are partitioned into two sets, a "cluster" and "cocluster," such that circles in the cluster are pairwise disjoint or tangent, and each circle in the cluster is disjoint, tangent, or orthogonal to each circle in the cocluster. Then the orbit of the cluster under the group generated by reflections across the cocluster is a crystallographic packing. All polyhedral packings are crystallographic, but not all crystallographic packings are polyhedral. Kontorovich and Nakamura classify superintegral crystallographic circle packings. Every superintegral packing arises from an arithmetic finite-covolume reflection group acting on \(\mathbb{H}^{3}\). There are finitely many such groups up to commensurability, tabulated in [18], and all the non-cocompact groups in the tabulation give rise to packings. The most interesting case is the Bianchi group of the Eisenstein integers. This is represented by the Coxeter-Dynkin diagram: Each vertex represents a circle, and the edge types indicate angles between circles. There is a finite-index subgroup with the diagram: See Figure 9 (A), where this diagram is realized as a set of five circles. If the left vertex is the cluster and the remaining vertices are the cocluster, the result is the triangular packing of Figure 10. 
Similarly, if the right vertex is the cluster and the remaining vertices are the cocluster, the result is the hexagonal packing of Figure 11. These are fundamental examples of superintegral crystallographic packings. We conjecture that they are not polyhedral, or commensurate (on the level of hyperbolic reflection groups) to any polyhedral packing. Nevertheless, our construction allows us to work with these packings similarly to polyhedral packings. One might hope to realize all superintegral crystallographic circle packings with our construction. A final motivation comes from limits in polyhedral packing families. In [2], fractal dimensions for many polyhedral packings are computed. It is observed that for some sequences of polyhedra-pyramids, prisms, antiprisms, etc.-the fractal dimensions converge to a limit. More surprisingly, the packings themselves converge to a well-defined limit. This phenomenon is illustrated in Figure 3. A configuration of circles in the polyhedral packing for the 100-sided prism is shown. As the number of sides increases, this configuration approaches a configuration found in the square packing of Figure 2. The limit of the prism packings is our square packing. The article [2] focuses on examples, and does not give a general criterion for a sequence of circle packings to converge. Some of the limit packings are also polyhedral-e.g. the limit of the pyramid packings is the original Apollonian packing-and others are not polyhedral but satisfy our definition. Figure 3. Limiting configurations We discuss all these examples in Section 5. It would be very interesting to know whether our definition gives the closure of the space of polyhedral packings. This would require a more systematic development of the notion of a limit in this space. This article is structured as follows. In Section 2, Definitions 2.1 and 2.7 describe the class of packings we study. We prove some of the fundamental geometric consequences of these definitions, and compare them to the definitions of polyhedral, crystallographic, and Kleinian packings. In Section 3, we recall versions of the Koebe-Andreev-Thurston theorem and propose our extension, Conjecture 3.3. In Section 4, we analyze the symmetries of our packings. Theorems 4.1, 4.2, and 4.10 give a complete description of the symmetry group. In Section 5, we give three main examples of our packings, which we call the triangular, square, and hexagonal packings. We focus on their arithmetic properties-integrality, quadratic and linear forms. In Section 6, we give a broader class of examples, with a focus on symmetries. Theorem 6.2 shows that all 17 wallpaper groups appear in the symmetry groups of packings. In future work, we hope to elaborate on the number theory of the packings described here. As indicated in Section 5, many of our packings have integral curvatures. Is there an asymptotic formula for curvatures in the packing, as in [14]? Is there a local-to-global principle for curvatures, as conjectured in [8]? These questions can be answered by the methods of [7] for some periodic packings. They become much more subtle if the base configuration lacks any symmetry. It is possible to construct integral, aperiodic packings using the same refinement method as in the proof of Theorem 6.2. These packings are geometrically interesting, and their number theory remains to be explored. 
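As a computational companion to the reflection-orbit construction described above, the following sketch generates one generation of the square packing of Figure 2. The concrete normalization (unit-spacing lattices of circles of radius 1/2) is an assumption made for illustration, since the configuration is only determined up to similarity.

```python
from itertools import product

# A circle is (center, radius) with the center stored as a complex number.
def invert(circle, mirror):
    """Image of `circle` under reflection (inversion) in `mirror`,
    assuming `circle` does not pass through the center of `mirror`."""
    c, r = circle
    c0, r0 = mirror
    d2 = abs(c - c0) ** 2 - r ** 2
    k = r0 ** 2 / d2
    return (c0 + k * (c - c0), abs(k) * r)

N = 3  # half-width of the lattice window
# Base circles: radius 1/2 at the integer lattice points (square tangency graph).
base = [(complex(i, j), 0.5) for i, j in product(range(-N, N + 1), repeat=2)]
# Dual circles: radius 1/2 at the face centers, orthogonal to the four
# surrounding base circles and tangent to the neighboring dual circles.
dual = [(complex(i + 0.5, j + 0.5), 0.5) for i, j in product(range(-N, N), repeat=2)]

# One generation of the orbit: reflect every base circle through every dual
# circle; circles orthogonal to the mirror are reproduced and can be skipped.
packing = list(base)
for mirror in dual:
    for circle in base:
        image = invert(circle, mirror)
        if image[1] < 0.49:          # skip the (numerically) fixed orthogonal circles
            packing.append(image)

print(len(packing), "circles after one generation")
```

Iterating the reflections produces deeper generations of the orbit; keeping only reduced words (consecutive mirrors distinct) avoids the obvious duplicates.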
### Acknowledgements We thank Daniel Allcock, Arthur Baragar, Michael Dougherty, Cathy Hsu, Anna Felickson, Alex Kontorovich, Alice Mark, and Kate Stange for helpful conversations relating to this project. We thank Nooria Ahmed, William Ball, Ellis Buckminster, Emilie Rivkin, Dylan Torrance, Jake Viscusi, Runze Wang, and Gary Yang for raising some of the questions that influenced our work. We are grateful to Swarthmore College for funding the summer research project that led to this article, and we thank all the faculty and students in the Department of Mathematics and Statistics who helped create a productive research community. ## 2. Definitions and Basic Properties The extended complex plane \(\hat{\mathbb{C}}\) is \(\mathbb{C}\cup\{\infty\}\), with the topology of the sphere. Our packings consist of oriented generalized circles, i.e. circles and lines, in \(\hat{\mathbb{C}}\). Each generalized circle divides \(\hat{\mathbb{C}}\) into two simply connected regions. Choosing an orientation for the circle is equivalent to choosing one of these regions to be the interior, and the other to be the exterior. When we refer to circles in this article, we always mean oriented generalized circles. Oriented generalized circles have an action by the group of holomorphic and antiholomorphic Mobius transfomations \(\operatorname{M\ddot{o}b}\cong\operatorname{SL}_{2}(\mathbb{C})\rtimes\mathbb{Z}/2 \mathbb{Z}\). For more details on this setup, see [15]. For any collection of circles \(B\), we can associate its tangency graph \(G_{B}\) which has a vertex for every circle and an edge between each pair of tangent circles. We will say that a collection of circles accumulates at a point \(x\in\hat{\mathbb{C}}\) if any open neighborhood of \(x\) contains infinitely many circles from this collection. We now define the notions of base and dual circle configurations, which are the starting point for the packings we construct. **Definition 2.1**.: _Let \(B\) and \(\hat{B}\) be two collections of oriented generalized circles, with tangency graphs \(G_{B}\) and \(G_{\hat{B}}\), respectively. Then \(B\) is called a base configuration and \(\hat{B}\) is called a dual configuration if the following properties hold:_ 1. _The circles in_ \(B\) _are pairwise disjoint or tangent, with disjoint interiors, and the same holds for the circles in_ \(\hat{B}\)_._ 2. _The tangency graphs_ \(G_{B}\) _and_ \(G_{\hat{B}}\) _are each nontrivial, connected, and are duals of each other._ 3. _If a circle in_ \(B\) _and a circle_ \(\hat{B}\) _intersect, they do so orthogonally and they correspond to a face-vertex pair in the tangency graphs. Otherwise, their interiors are disjoint._ 4. \(B\cup\hat{B}\) _has at most one accumulation point._ Note that the roles of \(B\) and \(\hat{B}\) are interchangeable in this definition. An example of a base and dual configuration pair is shown in Figure 4. Figure 4. Base and dual circle configurations The tangency graphs \(G_{B}\) and \(G_{\hat{B}}\) are simple, and by (1) they have natural planar embeddings, placing each vertex at the center of its circle (or choosing an arbitrary interior point as the vertex if the circle is a line). The duality between graphs, with faces of \(G_{B}\) corresponding to vertices of \(G_{\hat{B}}\) and vice versa, is defined in this context. The four properties, and especially this duality, imply stronger geometric statements, for example: **Proposition 2.2**.: _Suppose that two circles \(c_{1},c_{2}\in B\) are tangent at a point \(x\in\hat{\mathbb{C}}\). 
Then no other circles from \(B\) go through this point. Moreover, if \(x\) is not the accumulation point, then it is also a point of tangency for exactly two dual circles \(d_{1},d_{2}\in\hat{B}\), which intersect \(c_{1},c_{2}\) orthogonally._ Proof.: First, there cannot be any additional circle in \(B\) tangent to \(c_{1},c_{2}\) at \(x\) as this would violate the disjoint interiors property (1). Next, suppose that \(x\) is not the accumulation point, and consider the edge connecting \(c_{1},c_{2}\) in \(G_{B}\). This corresponds to an edge connecting some pair of tangent dual circles \(d_{1},d_{2}\in G_{\hat{B}}\), which intersect \(c_{1},c_{2}\) orthogonally. After applying a Mobius transformation, we may assume that \(c_{1}\) and \(c_{2}\) are parallel horizontal lines which are tangent at the point \(x=\infty\). Then it is clear that \(d_{1},d_{2}\) must be parallel vertical lines which are tangent at the same point \(x\). Again, there cannot be any additional circle in \(\hat{B}\) tangent to \(d_{1},d_{2}\) at \(x\) as this would violate property (1). **Proposition 2.3**.: _Each circle in \(c\in B\) is orthogonal to at least three dual circles in \(\hat{B}\). These dual circles can be labeled by elements of \(\mathbb{Z}\) or \(\mathbb{Z}/n\mathbb{Z}\) so that consecutive circles are tangent._ Proof.: The vertex corresponding to \(c\) in \(G_{B}\) must be connected to the rest of the graph by at least three edges; if it were connected by two edges, the dual graph would have a double edge, and if it were connected by one edge, then the dual graph would have a loop. Thus the vertex is incident to at least three faces. These faces must correspond to distinct dual circles because each dual circle is uniquely determined by two points of intersection with \(c\). Then the rest of the statement follows from the fact that the faces incident to a vertex in a planar graph can be ordered cyclically so that any two consecutive faces are adjacent. We will describe the situation of Proposition 2.3 by saying that \(c\) is ringed by circles from \(\hat{B}\). One subtlety in the proof is the possibility that \(c\) could be ringed by infinitely many circles. This can happen if the unique accumulation point lies on \(c\), and in this case, the orthogonal circles can be labeled by the integers so that any two consecutive circles are tangent. **Proposition 2.4**.: _The circles in \(B\), \(\hat{B}\), and their interiors cover all points in \(\hat{\mathbb{C}}\) other than the accumulation point._ The accumulation point may or may not be covered, depending on whether or not it lies on a circle. Proof.: It suffices to show that all points on one face in the planar embedding of \(G_{B}\) are covered. After a Mobius transformation, we may assume that this face does not contain the point \(\infty\), and that none of the generalized circles in \(B\) surrounding it are lines. By Proposition 2.3, the face is a simply connected polygon (possibly with infinitely many sides), with vertices at the centers of circles in \(B\). Each edge goes through a point of tangency between two circles in \(B\), and is orthogonal to both circles. Then by Proposition 2.2, the dual circle in \(\hat{B}\) corresponding to the face is tangent to each edge, i.e. it is inscribed in the polygon. Removing the dual circle and its interior leaves one connected component for each vertex, and each connected component is contained in the interior of the corresponding circle. 
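The incidence conditions invoked in Definition 2.1 and in Propositions 2.2-2.4 are easy to test numerically for an explicit candidate pair \(B\), \(\hat{B}\) of proper circles. The small sketch below ignores lines and orientations, and the tolerance is an ad hoc choice.

```python
import math

TOL = 1e-9

def tangent(c1, c2):
    """External tangency: the distance between centers equals the sum of the radii."""
    (z1, r1), (z2, r2) = c1, c2
    return math.isclose(abs(z1 - z2), r1 + r2, abs_tol=TOL)

def orthogonal(c1, c2):
    """Orthogonal intersection: the squared distance equals r1^2 + r2^2."""
    (z1, r1), (z2, r2) = c1, c2
    return math.isclose(abs(z1 - z2) ** 2, r1 ** 2 + r2 ** 2, abs_tol=TOL)

def disjoint_interiors(c1, c2):
    # valid for circles oriented with bounded interiors
    (z1, r1), (z2, r2) = c1, c2
    return abs(z1 - z2) >= r1 + r2 - TOL

def ring(base_circle, dual):
    """Dual circles orthogonal to a given base circle (cf. Proposition 2.3)."""
    return [d for d in dual if orthogonal(base_circle, d)]
```

For a configuration like the square one discussed in the introduction, `ring` should return the four dual circles surrounding each base circle.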
A graph is is said to be \(n\)-connected if after removing any \(n-1\) vertices and their adjacent edges, the graph remains connected. It is said to be \(n\)-edge connected if after removing any \(n-1\) edges, the graph remains connected. Because \(G_{B}\) and \(G_{\hat{B}}\) are a pair of dual simple planar graphs, they are necessarily \(3\)-edge connected. If removing an edge could disconnect \(G_{B}\), then \(G_{\hat{B}}\) would have to contain a loop, and if removing two edges could disconnect \(G_{\hat{B}}\), then \(G_{\hat{B}}\) would have to contain a double edge. In fact, we can make an even stronger connectedness statement: **Proposition 2.5**.: _The tangency graphs \(G_{B}\) and \(G_{\hat{B}}\) are \(3\)-connected._ Proof.: It suffices to show that if one or two vertices and their adjacent edges are removed from \(G_{B}\), then all faces in the planar embedding of \(G_{B}\) remain simply connected. This implies that \(G_{B}\) remains connected. If a single vertex \(c_{1}\) is removed, then the faces incident to that vertex are all identified. By Proposition 2.3, these faces correspond to the ring of dual circles around \(c_{1}\). Because these faces are all distinct and simply connected, gluing them at the vertex and along their common edges results in a larger simply connected face. If two non-adjacent vertices \(c_{1},c_{2}\) are removed, then at each vertex the incident faces are identified. If \(c_{1},c_{2}\) have no face in common, then the argument is exactly the same as above. If they have a face in common, there can only be one such face. Indeed, after applying a Mobius transformation, we may assume that \(c_{1},c_{2}\) are concentric circles centered at the origin. Any dual circle orthogonal to both must be a line through the origin. By the disjoint interiors property (1), \(\hat{B}\) can contain at most one such line. In this case, removing \(c_{1}\) and \(c_{2}\) creates one new larger face, the union of all faces incident to \(c_{1}\) or \(c_{2}\). Because the union of faces incident to \(c_{1}\) is simply connected, the union of faces incident to \(c_{2}\) is simply connected, and they overlap in a unique simply connected face, the new face must be simply connected. Finally, if two adjacent vertices \(c_{1},c_{2}\) are removed, by Proposition 2.2, there are exactly two faces incident to these two vertices, meeting along the edge from \(c_{1}\) to \(c_{2}\). The union of these two faces is simply connected. Removing \(c_{1}\) and \(c_{2}\) creates one new larger face, the union of all faces incident to \(c_{1}\) or \(c_{2}\). Because the union of faces incident to \(c_{1}\) is simply connected, the union of faces incident to \(c_{2}\) is simply connected, and they overlap in a simply connected union of two faces, the new face must be simply connected. By the compactness of \(\hat{\mathbb{C}}\), \(B\) is finite if an only if it has no accumulation point. In this case \(G_{B}\) is a finite \(3\)-connected simple planar graph, so by Steinitz's theorem, it is the graph of a polyhedron. One further property we might ask for in base and dual configurations is periodicity: **Definition 2.6**.: _For \(n=1,\,2\), \(B\) is \(n\)-periodic, i.e. periodic under an \(n\)-dimensional lattice, if there exist \(v_{1},\dots,v_{n}\in\mathbb{C}\), linearly independent over \(\mathbb{R}\), such that \(B+v_{1}=\dots=B+v_{n}=B\)._ If \(B\) is \(1\)-periodic or \(2\)-periodic, it necessarily has an accumulation point at \(\infty\). 
The tangency graph of a \(2\)-periodic \(B\) is a \(2\)-periodic tiling of the plane. We are now ready to define the class of circle packings that we will study. For any circle \(d\), let \(\sigma_{d}\) denote the reflection across \(d\), a Mobius transformation. **Definition 2.7**.: _The packing \(\mathscr{P}\) is the orbit of \(B\) under the group generated by reflections \(\sigma_{d}\) across circles \(d\in\hat{B}\). The dual packing \(\hat{\mathscr{P}}\) is the orbit of \(\hat{B}\) under the same group._ _The superpacking is the orbit of \(B\) under the group generated by reflections across circles in \(B\) and \(\hat{B}\)._ Note that under this definition, the dual packing and superpacking contain two oppositely-oriented copies of each circle. It is possible to assign a single orientation to circles in the dual packing and the superpacking in a consistent way, but since we will not use the orientations of these circles in an important way, we do not pursue this. We will only be concerned with the orientations of the circles in \(\mathscr{P}\). For each packing \(\mathscr{P}\), we define the following symmetry groups: 1. \(\Gamma=\textsc{Sym}(\mathscr{P},\hat{\mathscr{P}})\): the group of Mobius transformations that preserve both the packing and the dual packing; 2. \(\Gamma_{1}=\langle\sigma_{d}\,:\,d\in\hat{B}\rangle\): the group generated by reflections across the dual circles; 3. \(\Gamma_{2}=\textsc{Sym}(B,\hat{B})\): the group of Mobius transformations that preserve both the base configuration and the dual configuration. Some fundamental geometric properties of \(\mathscr{P}\) can be deduced directly from the definition. **Proposition 2.8**.: _Every circle in \(\mathscr{P}\) is either a base circle in \(B\) or inside some dual circle, and every circle in \(\hat{\mathscr{P}}\) is either a dual circle in \(\hat{B}\) or inside some dual circle._ Note that when we say circle \(c_{1}\) is inside circle \(c_{2}\), we only mean that \(c_{1}\) is contained in the union of \(c_{2}\) and its interior, not necessarily that the two interiors are nested. Proof.: By definition, each circle in \(\mathscr{P}\) is in \(B\) or it is \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}(c)\) for some \(c\in B\), \(d_{1},\ldots d_{k}\in\hat{B}\), with consecutive \(d_{i}\) distinct. We may assume that \(c\) is not orthogonal to \(d_{k}\), because then \(d_{k}\) could be dropped from this expression. Then by property (3), \(c\) is outside \(d_{k}\), so \(\sigma_{d_{k}}(c)\) is inside \(d_{k}\). By property (1), this implies that \(\sigma_{d_{k}}(c)\) is outside \(d_{k-1}\), so \(\sigma_{d_{k-1}}\sigma_{d_{k}}(c)\) is inside \(d_{k-1}\). Repeating the argument inductively, we see that \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}(d)\) is inside \(d_{1}\). The proof of the second statement is similar. **Proposition 2.9**.: _The circles in \(\mathscr{P}\) are pairwise disjoint or tangent, with disjoint interiors._ Proof.: Suppose that we have two circles in \(\mathscr{P}\) with overlapping interiors. After applying a Mobius transformation, we may assume that one of the circles, \(c_{1}\) is in \(B\). We may write the other circle as \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}(c_{2})\) for \(c_{2}\in B\), \(d_{1},\ldots d_{k}\in\hat{B}\), with consecutive \(d_{i}\) distinct. If \(d_{k}\) is orthogonal to \(c_{2}\), then we can shorten this expression to \(\sigma_{d_{1}}\cdots\sigma_{d_{k-1}}(c_{2})\). 
Otherwise, by property (3) of Definition 2.1, \(d_{k}\) and \(c_{2}\) have disjoint interiors, and after inversion, the interior of \(\sigma_{d_{k}}(c_{2})\) is contained inside the interior of \(d_{k}\). After the remaining inversions, by property (1), the interior of \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}(c_{2})\) is contained inside the interior of \(d_{1}\). By property (3), in order for \(c_{1}\) to intersect \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}(c_{2})\), \(c_{1}\) must be orthogonal to \(d_{1}\). Then inverting across \(d_{1}\), we find that \(c_{1}\) also intersects \(\sigma_{d_{2}}\cdots\sigma_{d_{k}}(c_{2})\). In either case, we have shortened the string \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}\). Repeating this process, we eventually find two circles \(c_{1},c_{2}\in B\) with overlapping interiors, contradicting property (1). **Proposition 2.10**.: _The interiors of the circles in \(\mathscr{P}\) are dense in \(\hat{\mathbb{C}}\)._ Proof.: For \(x\in\hat{\mathbb{C}}\), by Proposition 2.4, either \(x\) is the accumulation point, \(x\) is in some circle in \(B\), or \(x\) is in some circle in \(\hat{B}\). It follows that either \(x\) is in the closure of the interiors of the circles in \(B\) or \(x\) is in the interior of some circle in \(\hat{B}\). If \(x\) is in the interior of a dual circle \(d_{1}\), reflect it across \(d_{1}\). Then either \(\sigma_{d_{1}}(x)\) is in the closure of the interiors of the circles in \(B\), which means that \(x\) is in the closure of the interiors of the circles in \(\sigma_{d_{1}}(B)\), or \(\sigma_{d_{1}}(x)\) is in the interior of some circle \(d_{2}\in\hat{B}\), which means \(x\) is in the interior of \(\sigma_{d_{1}}(d_{2})\). In the latter case, reflect \(\sigma_{d_{1}}(x)\) across \(d_{2}\) and repeat the process. If, at some step, we find that \(x\) is in the closure of the interiors of the circles in \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}(B)\), then since these are circles in \(\mathscr{P}\), \(x\) is in the closure of the interiors of the circles in \(\mathscr{P}\) as desired. Otherwise, this construction produces an infinite sequence \[\tilde{d}_{1}=d_{1},\,\tilde{d}_{2}=\sigma_{d_{1}}(d_{2}),\,\tilde{d}_{3}= \sigma_{d_{1}}\sigma_{d_{2}}(d_{3}),\ldots\] with consecutive \(d_{i}\) distinct, of nested circles in \(\hat{\mathscr{P}}\) whose interiors contain \(x\). We will show that these circles converge to \(x\) in \(\hat{\mathbb{C}}\), in the sense that they eventually lie within any open neighborhood of \(x\). After applying a Mobius transformation, we may assume that \(\tilde{d}_{1}\) is not a line and is oriented inward so that its interior does not contain \(\infty\). Since the circles \(\tilde{d}_{k}\) are nested, they all have these properties. We have that \[\tilde{d}_{k+1}=\sigma_{\tilde{d}_{k}}\left(\sigma_{d_{1}}\cdots\sigma_{d_{k-1 }}(d_{k+1})\right)\] by Lemma 4.5 below. If \(d_{k+1}\neq d_{k-1}\), then \(\sigma_{d_{1}}\cdots\sigma_{d_{k-1}}(d_{k+1})\) and \(\tilde{d}_{k}\) are both contained inside \(\tilde{d}_{k-1}\), so neither one contains infinity. Thus, the reflection of \(\sigma_{d_{1}}\cdots\sigma_{d_{k-1}}(d_{k+1})\) across \(\tilde{d}_{k}\) does not contain the center of \(\tilde{d}_{k}\), so the radius of \(\tilde{d}_{k+1}\) is at most half the radius of \(\tilde{d}_{k}\). Therefore, if \(d_{k+1}\neq d_{k-1}\) for infinitely many values of \(k\), then the radii must approach \(0\), so the circles must approach \(x\). 
If \(d_{k+1}=d_{k-1}\), then \(\tilde{d}_{k+1}=\sigma_{\tilde{d}_{k}}(\tilde{d}_{k-1})\) (with the orientation reversed). If this holds for all but finitely many values of \(k\), then there exists some \(K\in\mathbb{N}\) such that for all \(k\geq K\), \(\tilde{d}_{k+1}=\sigma_{\tilde{d}_{k}}(\tilde{d}_{k-1})\). Suppose that \(\tilde{d}_{K}\) and \(\tilde{d}_{K+1}\) are disjoint. Then, after another Mobius transformation, we may assume that \(\tilde{d}_{K}\) has radius \(1\), \(\tilde{d}_{K+1}\) has radius \(r<1\), and they are concentric. An inductive argument shows that the radius of \(\tilde{d}_{K+k}\) is \(r^{k}\). These radii approach \(0\), so the circles must approach \(x\). On the other hand, suppose that \(\tilde{d}_{K}\) and \(\tilde{d}_{K+1}\) are tangent. After a Mobius transformation, we may assume that \(\tilde{d}_{K}\) has radius \(1\), and \(\tilde{d}_{K+1}\) has radius \(r<1\). In this case, an inductive argument shows that the radius of \(\tilde{d}_{K+k}\) is \(\frac{1}{1+k(1/r-1)}\). These radii approach \(0\), so the circles must approach \(x\). Since the circles \(\tilde{d}_{k}\in\hat{\mathscr{P}}\) approach \(x\), there is a corresponding sequence of circles \(\tilde{c}_{k}\in\mathscr{P}\), with \(\tilde{c}_{k}\) orthogonal to \(\tilde{d}_{k}\), such that the interiors of the circles \(\tilde{c}_{k}\) come arbitrarily close to \(x\). The residual set of \(\mathscr{P}\) is the set of points not in the interior of any circle. ### Relation to other Packing Definitions In [13] and [10], definitions are given for the related notions of Kleinian, crystallographic, and polyhedral packings. To compare our definition to these, we give sufficient conditions for our packings to be polyhedral, crystallographic, or Kleinian. The definition of a polyhedral packing coincides with our definition, with the stricter additional assumption that \(G_{B}\) and \(G_{\hat{B}}\) are the graphs of a convex polyhedron and its dual. By the Koebe-Andreev-Thurston theorem, every polyhedron gives rise to a polyhedral packing. Proposition 2.5 and Steinitz's theorem imply the following: **Proposition 2.11**.: _If \(B\) and \(\hat{B}\) are finite circle configurations satisfying the conditions of Definition 2.1, then they give rise to a polyhedral circle packing \(\mathscr{P}\)._ The definitions of crystallographic and Kleinian packings are more general, and relate the packing to a discrete group action on a higher-dimensional hyperbolic space. The two-sphere \(\hat{\mathbb{C}}\) is identified with the boundary of three-dimensional hyperbolic space \(\mathbb{H}^{3}\), and Mobius transformations are viewed as isometries of \(\mathbb{H}^{3}\). The definitions of crystallographic and Kleinian circle packings are as follows: **Definition 2.12**.: _Let \(\mathscr{P}\) denote a collection of circles whose interiors are disjoint and dense in \(\hat{\mathbb{C}}\). \(\mathscr{P}\) is a crystallographic packing if its residual set is the limit set of a geometrically finite reflection group of isometries of \(\mathbb{H}^{3}\). It is Kleinian if its residual set is the limit set of any geometrically finite group of isometries of \(\mathbb{H}^{3}\)._ All polyhedral packings are crystallographic, and all crystallographic packings are Kleinian. The following theorem gives sufficient conditions for a packing \(\mathscr{P}\) to be crystallographic or Kleinian. **Theorem 2.13**.: _A packing \(\mathscr{P}\) satisfying Definition 2.7 is Kleinian if any of the following conditions hold:_ 1. \(B\cup\hat{B}\) _is finite._ 2. 
_After applying some Mobius transformation,_ \(B\cup\hat{B}\) _is a strip configuration (i.e. it contains two parallel lines) and_ \(\Gamma_{2}\) _contains a translation._ 3. _After applying some Mobius transformation,_ \(\Gamma_{2}\) _contains two linearly independent translations._ _Furthermore, \(\mathscr{P}\) is crystallographic if any of the above conditions hold, with \(\Gamma_{2}\) replaced by the maximal reflective subgroup of \(\Gamma_{2}\)._ Proof.: By Propositions 2.9, 2.10, the interiors of circles in \(\mathscr{P}\) are disjoint and dense in \(\hat{\mathbb{C}}\). We will show that each condition (1)-(3) implies that \(\Gamma\), viewed as a group of isometries of \(\mathbb{H}^{3}\), is geometrically finite. This is essentially a consequence of Theorems 4.2, 4.10. Up to finite index, we have the following fundamental domains for \(\Gamma_{2}\): in case (1) all of \(\hat{\mathbb{C}}\), in case (2) a strip, and in case (3) a compact parallelogram. Each condition implies that this fundamental domain only intersects finitely many dual circles in \(\hat{B}\). Using the half-space model of \(\mathbb{H}^{3}\), we may define a half-plane for each wall of the fundamental domain for \(\Gamma_{2}\), and a hemisphere for each dual circle in this fundamental domain. These walls bound a geometrically finite fundamental domain for \(\Gamma\). The limit set of \(\Gamma\) is the residual set of \(\mathscr{P}\). Thus the packing is Kleinian. The proof in the crystallographic case works similarly. Our construction gives rise to packings which are polyhedral (in fact, all polyhedral packings), packings which are crystallographic but not polyhedral, packings which are Kleinian but not crystallographic, and packings which are none of the above. In Section 6, the wallpaper groups generated by reflections give rise to crystallographic packings; the wallpaper groups containing translations but no reflections give rise to Kleinian packings. And the same refinement method in the proof of Theorem 6.2 can produce aperiodic packings, which are not Kleinian. These examples can be constructed to have integrality or superintegrality properties. Note that Proposition 2.11 and Theorem 2.13 give sufficient but not necessary conditions for \(\mathscr{P}\) to be polyhedral, crystallographic, or Kleinian. Their proofs involve the structure of the group \(\Gamma=\textsc{Sym}(\mathscr{P},\hat{\mathscr{P}})\). But \(\mathscr{P}\) may have additional symmetry not detected by \(\Gamma\); in general, \(\textsc{Sym}(\mathscr{P},\hat{\mathscr{P}})\neq\textsc{Sym}(\mathscr{P})\). We might make different choices of base and dual configuration \(B\), \(\hat{B}\), which give rise to the same packing \(\mathscr{P}\) but a different dual packing \(\hat{\mathscr{P}}\), and thus a different \(\Gamma\). For example, working with the classical Apollonian packing, we could select any collection of circles defining a 3-connected subgraph of the full tangency graph as \(B\). With a nonstandard choice of base and dual configuration, we would find a smaller symmetry group \(\Gamma\), and the packing would not be immediately identifiable as polyhedral, crystallographic, or Kleinian. ## 3. Existence and Uniqueness of Packings and a Generalized Koebe-Andreev-Thurston Theorem In this section, we recall different versions of the Koebe-Andreev-Thurston theorem that imply the existence and uniqueness up to conformal automorphism of our packings in many cases. We conjecture a generalization which would imply existence and uniqueness in all cases. 
To state the problem precisely: let \(G\) and \(\hat{G}\) be a pair of simple 3-connected plane graphs corresponding to the 1-skeleton of a cellular decomposition of the sphere \(\hat{\mathbb{C}}\) or the plane \(\mathbb{C}\), and the 1-skeleton of the dual cellular decomposition. Note that a cellular decomposition of the plane can be viewed as a cellular decomposition of the sphere with a unique accumulation point at \(\infty\). Do there exist circle configurations \(B\) and \(\hat{B}\), satisfying the conditions of Definition 2.1, such that \(G_{B}\cong G\) and \(G_{\hat{B}}\cong\hat{G}\)? The \(\cong\) symbol here means an isomorphism of graphs and of the associated cellular decompositions. Moreover, are the configurations \(B\) and \(\hat{B}\) unique up to conformal automorphism of \(\hat{\mathbb{C}}\) or \(\mathbb{C}\)? A conformal automorphism of \(\hat{\mathbb{C}}\) is a Mobius transformation; a conformal automorphism of \(\mathbb{C}\) is a Mobius transformation which fixes infinity, i.e. a similarity \(z\mapsto az+b\) or \(z\mapsto a\bar{z}+b\). The Koebe-Andreev-Thurston theorem answers these questions in the affirmative when \(G\) and \(\hat{G}\) are finite graphs. A version of the theorem closely aligned with this article appears in [5]. Restated in our language, their Theorem 6 is as follows: **Theorem 3.1**.: _Let \(G\) and \(\hat{G}\) be a pair of finite, simple, 3-connected plane graphs, corresponding to a cellular decomposition of \(\hat{\mathbb{C}}\) and its dual. Then there exist circle configurations \(B\), \(\hat{B}\), satisfying Definition 2.1, such that \(G_{B}\cong G\) and \(G_{\hat{B}}\cong\hat{G}\), and these configurations are unique up to Mobius transformation._ As discussed in the introduction, this theorem has been extended to some infinite graphs. Theorem 4.3, the "Discrete Uniformization Theorem" in [23] implies the following: **Theorem 3.2**.: _Let \(G\) be an infinite simple, 3-connected plane graph, corresponding to a triangulation of \(\mathbb{C}\). Then there exist circle configurations \(B\), \(\hat{B}\), satisfying Definition 2.1, such that \(G_{B}\cong G\), and these configurations are unique up to similarity._ Stephenson's statement of the theorem includes triangulations of other surfaces as well. The restriction to triangulations, not general cellular decompositions, seems to be a convenient simplification rather than an essential restriction. In this case, the existence and uniqueness of \(B\) immediately imply the existence and uniqueness of \(\hat{B}\). But if \(G\) is not a triangulation, then \(B\) and \(\hat{B}\) must be constructed together, and both are needed to ensure uniqueness. Given these two versions of the Koebe-Andreev-Thurston theorem, it is natural to conjecture the following common extension: **Conjecture 3.3**.: _Let \(G\) and \(\hat{G}\) be a pair of simple, 3-connected plane graphs, corresponding to a cellular decomposition of \(\mathbb{C}\) and its dual. Then there exist circle configurations \(B\), \(\hat{B}\), satisfying Definition 2.1, such that \(G_{B}\cong G\) and \(G_{\hat{B}}\cong\hat{G}\), and these configurations are unique up to similarity._ This would imply the existence and uniqueness of the whole class of packings we study. We hope to prove this conjecture in future work, following the methods of [23]. As further evidence for the conjecture, we remark that several special cases have been studied carefully in the literature. 
When \(G\) is the triangular lattice, the uniqueness of the associated circle configuration is a crucial step in Rodin and Sullivan's celebrated proof of the convergence of circle packings to the Riemann mapping [17, Appendix 1]. This means that the triangular and hexagonal packings studied in Section 5 are unique up to similarity. When \(G\) is the square lattice, the uniqueness of the associated circle configuration is the main theorem of [21]. So the square packing in Section 5 is also unique up to similarity. It should also be possible to extend Conjecture 3.3 to the hyperbolic plane and other surfaces, but we have not investigated this. ## 4. Group Structure Theorems In this section, we give a complete algebraic description of the symmetry groups associated to packings \(\mathscr{P}\). We begin with the structure of the groups \(\Gamma_{1}\) and \(\Gamma_{2}\). Then, via an examination of the action on \(\mathscr{P}\), we show that \(\Gamma=\Gamma_{1}\rtimes\Gamma_{2}\). **Theorem 4.1**.: \(\Gamma_{1}\) _is a free Coxeter group generated by \(\sigma_{d}\) for \(d\in\hat{B}\), where the only relations are \(\sigma_{d}^{2}=1\)._ Proof.: We must show that the relations \(\sigma_{d}^{2}=1\) are the only ones. Choose a point \(x\in\hat{\mathbb{C}}\), outside of every circle in \(\hat{B}\). Suppose that a string \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}\in\Gamma_{1}\), with consecutive \(d_{i}\) distinct, is applied to \(x\). Since \(x\) is outside of \(d_{k}\), \(\sigma_{d_{k}}(x)\) is in the interior of \(d_{k}\). Repeating this process, by property (1) from Definition 2.1, we find that \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}(x)\) is in the interior of \(d_{1}\). Thus it is not equal to \(x\), and \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}\) is not the identity. **Theorem 4.2**.: _If \(B\) is finite, then \(\Gamma_{2}\) is the group of symmetries of a polyhedron. If \(B\) is infinite, then \(\Gamma_{2}\) is conjugate to a discrete group of isometries of the plane: a cyclic group, dihedral group, frieze group, or wallpaper group._ A frieze group is a discrete group of isometries of the plane which contains translations in one direction; a wallpaper group is a discrete group of isometries of the plane which contains translations in two linearly independent directions. Proof.: If \(B\) is finite, then we have shown that \(G_{B}\) is the graph of a polyhedron. By the Koebe-Andreev-Thurston theorem, the circle configuration \(B\) with this graph is unique up to Mobius transformations. Moreover, any graph automorphism of \(G_{B}\) gives rise to a permutation of the circles in \(G_{B}\), which must be realized by a Mobius transformation in \(\Gamma_{2}\). Conversely, any element of \(\Gamma_{2}\) determines a graph automorphism of \(G_{B}\). So \(\Gamma_{2}\equiv\textsc{Aut}(G_{B})\). There is a three-dimensional realization of the polyhedron \(G_{B}\), called the canonical embedding, with all edges tangent to the unit sphere, such that every automorphism of \(G_{B}\) is realized as a rigid motion of this polyhedron preserving the sphere [25, Thm. 4.13]. Thus \(\Gamma_{2}\) is the group of symmetries of a polyhedron, or a finite group of isometries of the sphere. If \(B\) is infinite, then we may apply a Mobius transformation to place the unique accumulation point of \(B\) at \(\infty\). All symmetries of \(B\) must map the accumulation point to itself, so they must have the form \(f(z)=az+b\) or \(f(z)=a\bar{z}+b\). 
If \(|a|\neq 1\), then such a map has a fixed point in \(\mathbb{C}\), which will be an attracting fixed point for \(f\) or \(f^{-1}\). This produces an additional accumulation point for \(B\), a contradiction. Thus \(\Gamma_{2}\) consists of maps \(f(z)=az+b\) or \(f(z)=a\bar{z}+b\) with \(|a|=1\), which are isometries of the plane \(\mathbb{C}\). Again because \(B\) has no accumulation points in \(\mathbb{C}\), \(\Gamma_{2}\) must be a discrete group of isometries of \(\mathbb{C}\). The rest of the theorem follows from the classification of these groups; see [6]. In Section 6, we will show that each of the above possibilities for \(\Gamma_{2}\) is in fact realized by an appropriate choice of \(B\). In order to understand the interactions of \(\Gamma_{1}\) and \(\Gamma_{2}\), we need further geometric information about their action on packings. Any circle in \(\mathscr{P}\) is \(c=\sigma_{d_{1}}\cdots\sigma_{d_{k}}(c_{0})\) for some \(c_{0}\in B\), \(d_{1},\ldots,d_{k}\in\hat{B}\). Define the height \(\operatorname{ht}(c)\) as the minimum \(k\) for which such an expression exists. A circle has height \(0\) if and only if it is in \(B\). **Lemma 4.3**.: _Suppose \(c\in\mathscr{P}\) has \(\operatorname{ht}(c)>0\). For \(d\in\hat{B}\), \(\operatorname{ht}(\sigma_{d}(c))<\operatorname{ht}(c)\) if and only if \(c\) is inside \(d\)._ Proof.: Say that \(c=\sigma_{d_{1}}\cdots\sigma_{d_{k}}(c_{0})\) and that this expression is minimal. Then \(c_{0}\) is not orthogonal to \(d_{k}\), so it is outside \(d_{k}\). Thus \(\sigma_{d_{k}}(c_{0})\) is inside \(d_{k}\), and hence outside \(d_{k-1}\). Repeating inductively, we conclude that \(c\) is inside \(d_{1}\). Since the circles of \(\hat{B}\) have disjoint interiors, \(d_{1}\) is the unique circle in \(\hat{B}\) whose interior contains \(c\). If we reflect through \(d_{1}\), the height of \(c\) will be lowered, since \(\sigma_{d_{1}}(c)=\sigma_{d_{2}}\cdots\sigma_{d_{k}}(c_{0})\). On the other hand, if we reflect through any other circle \(d\in\hat{B}\), \(\sigma_{d}(c)\) will be inside \(d\). Applying the previous argument to \(\sigma_{d}(c)\), we get \(\operatorname{ht}(c)=\operatorname{ht}(\sigma_{d}\sigma_{d}(c))<\operatorname{ht}(\sigma_{d}(c))\), so the height of \(c\) will be raised. **Lemma 4.4**.: _For any \(g\in\Gamma\) and \(d\in\hat{B}\), \(g(B)\) is either completely inside \(d\) or completely outside \(d\). (Some circles of \(g(B)\) are allowed to be orthogonal to \(d\) in either case.)_ Proof.: For convenience in this proof, we consider a circle orthogonal to \(d\) as being both inside and outside \(d\). Since \(d\) is a dual circle and \(g\) preserves \(\hat{\mathscr{P}}\), we know that \(g^{-1}(d)\in\hat{\mathscr{P}}\). By Proposition 2.8, we know that for some \(\tilde{d}\in\hat{B}\), \(g^{-1}(d)\) is either \(\tilde{d}\) or inside \(\tilde{d}\). All circles in \(B\) are completely outside \(\tilde{d}\) by definition, so they are outside \(g^{-1}(d)\) as well. Applying \(g\) to both \(B\) and \(g^{-1}(d)\), we have that all circles in \(g(B)\) are completely inside or completely outside \(d\). **Lemma 4.5**.: _Let \(g\) be a Mobius transformation, \(d\) be a circle, and \(\sigma_{d}\) be the reflection across \(d\). We have \(g\sigma_{d}g^{-1}=\sigma_{g(d)}\)._ Proof.: The map \(g\sigma_{d}g^{-1}\sigma_{g(d)}\) is a holomorphic Mobius transformation which fixes the circle \(g(d)\) pointwise, so it is the identity. Thus \(g\sigma_{d}g^{-1}=\sigma_{g(d)}\).
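To make Lemma 4.5 concrete in the planar case used below, recall the standard formula for reflection across a circle \(d\) with center \(z_{0}\) and radius \(\rho\): \(\sigma_{d}(z)=z_{0}+\rho^{2}/(\bar{z}-\bar{z}_{0})\). For a similarity \(g(z)=az+b\), a direct computation gives
\[g\sigma_{d}g^{-1}(z)=(az_{0}+b)+\frac{|a|^{2}\rho^{2}}{\bar{z}-\overline{az_{0}+b}},\]
which is exactly the reflection across the circle with center \(g(z_{0})=az_{0}+b\) and radius \(|a|\rho\), that is, across \(g(d)\). This worked instance is included only as an illustration; the argument above covers arbitrary Mobius transformations.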
**Proposition 4.6**.: _For a circle packing \(\mathscr{P}\) with base configuration \(B\) and dual configuration \(\hat{B}\), we have \(\Gamma=\langle\Gamma_{1},\Gamma_{2}\rangle\)._ Proof.: By definition, we have \(\Gamma_{1}\leq\Gamma\). For any \(g\in\Gamma_{2}\), we know \(g\) preserves \(\hat{\mathscr{P}}\) because it preserves \(\hat{B}\), and we also know \(g\) preserves \(\mathscr{P}\) because it preserves both \(B\) and \(\hat{B}\). Hence, we have \(\Gamma_{2}\leq\Gamma\) and thus \(\langle\Gamma_{1},\Gamma_{2}\rangle\subseteq\Gamma\). We now want to show that \(\Gamma\subseteq\langle\Gamma_{1},\Gamma_{2}\rangle\). By Lemma 4.4, for any \(g\in\Gamma\), we know \(g(B)\) is either outside all dual circles or is completely inside some dual circle. Case 1: If \(g(B)\) is outside all dual circles, then \(g(B)\subseteq B\). Let \(G_{B}\) be the tangency graph of \(B\), and let \(G_{g(B)}\) be the subgraph of \(G_{B}\) that is also the tangency graph of \(g(B)\). The faces of \(G_{g(B)}\) are either faces of \(G_{B}\) or unions of these faces. We will show that each face of \(G_{g(B)}\) is a face of \(G_{B}\), implying that \(g(B)=B\). Notice that every face in \(G_{g(B)}\) corresponds to a circle \(g(\tilde{d})\in\hat{\mathscr{P}}\) for some \(\tilde{d}\in\hat{B}\). Since \(g(\tilde{d})\) is ringed by circles from \(g(B)\) and thus from \(B\), we know \(g(\tilde{d})\) cannot be inside any dual circle. By Proposition 2.8, then, we know that \(g(\tilde{d})\) is a dual circle, which implies that \(g(B)=B\). Because \(g\) preserves \(G_{B}\), we know \(g\) also preserves the dual graph of \(G_{B}\), which is the tangency graph of \(g(\hat{B})\). Let \(G_{\hat{B}}\) be the dual graph of \(G_{B}\). Since there is a unique circle orthogonal to a ring of circles, every vertex in \(G_{\hat{B}}\) must correspond to a circle in \(\hat{B}\). Therefore, we have \(g(\hat{B})=\hat{B}\) and thus \(g\in\Gamma_{2}\). Case 2: If \(g(B)\) is completely inside some dual circle \(d_{1}\in\hat{B}\), reflect across \(d_{1}\) and apply Lemma 4.4 to the configuration \(\sigma_{d_{1}}g(B)\). If this configuration is outside all dual circles, then by Case 1, we conclude that \(\sigma_{d_{1}}g\in\Gamma_{2}\). Otherwise, it lies inside some other dual circle \(d_{2}\), and we can reflect across \(d_{2}\) and repeat the argument. It suffices to show that this process eventually terminates, i.e. that \(\sigma_{d_{k}}\cdots\sigma_{d_{1}}g(B)\) is outside all the dual circles for some \(k\). Consider a finite set \(S\) of circles in \(g(B)\) such that no circle in \(\hat{\mathbb{C}}\) is orthogonal to all of them. Such a set can be obtained starting from three circles ringing a common dual circle, and then choosing a fourth which is not part of this ring. At each step of reflecting through a dual circle, the height of each circle in \(S\) decreases or stays constant by Lemma 4.3. Moreover, the latter possibility can only occur if a circle is orthogonal to the dual, so at least one circle's height decreases at each step. The process terminates when all the circles in \(S\) reach height \(0\), so they are mapped to \(B\). When this occurs, the circles in \(S\) are all orthogonal to or outside each circle in \(\hat{B}\), and at least one is outside each dual circle. By Lemma 4.4, all circles in \(g(B)\) are then outside or orthogonal to each dual circle. 
**Corollary 4.7**.: _For any \(g\in\Gamma\), there exist \(g_{1}\in\Gamma_{1}\) and \(g_{2}\in\Gamma_{2}\) such that \(g=g_{1}g_{2}\)._ Proof.: By the proof of Proposition 4.6, every \(g\in\Gamma\) is of the form \(g=\sigma_{d_{1}}\cdots\sigma_{d_{k}}g_{2}\) for some sequence of \(d_{i}\in\hat{B}\) and \(g_{2}\in\Gamma_{2}\). Let \(g_{1}=\sigma_{d_{1}}\cdots\sigma_{d_{k}}\). By definition, we know that \(g_{1}\in\Gamma_{1}\), and \(g=g_{1}g_{2}\). **Proposition 4.8**.: \(\Gamma_{1}\) _is a normal subgroup of \(\Gamma\)._ Proof.: By Proposition 4.6, it suffices to show that \(\Gamma_{2}\) normalizes \(\Gamma_{1}\). Let \(g\in\Gamma_{2}\), and let \(\sigma_{d}\) be a generator of \(\Gamma_{1}\). By Lemma 4.5, \(g\sigma_{d}g^{-1}=\sigma_{g(d)}\). Since \(g(d)\in\hat{B}\), this is an element of \(\Gamma_{1}\). Thus \(\Gamma_{2}\), and hence \(\Gamma\), normalizes \(\Gamma_{1}\). In general, \(\Gamma_{2}\) is not a normal subgroup of \(\Gamma\). Let \(g_{2}\) be an element in \(\Gamma_{2}\) that sends some dual circle \(d\in\hat{B}\) to a different dual circle in \(\hat{B}\). The element \(\sigma_{d}g_{2}\sigma_{d}^{-1}\in\Gamma\) sends \(d\) to a circle inside \(d\), which means that \(\sigma_{d}g_{2}\sigma_{d}^{-1}\) doesn't preserve \(\hat{B}\) and thus \(\sigma_{d}g_{2}\sigma_{d}^{-1}\not\in\Gamma_{2}\). **Proposition 4.9**.: _The intersection of \(\Gamma_{1}\) and \(\Gamma_{2}\) is trivial._ Proof.: Let \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}\in\Gamma_{1}\) with consecutive \(d_{i}\) distinct. Choose a dual circle \(d\neq d_{k}\) in \(\hat{B}\). As in the proof of Proposition 2.8, the map \(\sigma_{d_{1}}\cdots\sigma_{d_{k}}\) sends \(d\) to a circle inside \(d_{1}\), so it doesn't preserve \(\hat{B}\) and thus is not in \(\Gamma_{2}\). As a direct result of Proposition 4.6, Proposition 4.8, and Proposition 4.9, we have the following theorem: **Theorem 4.10**.: \(\Gamma\cong\Gamma_{1}\rtimes\Gamma_{2}\)_._ We conclude this section by sketching some results on the structure of the supergroup. Let \(\hat{\Gamma}_{1}\) denote the group generated by reflections across the base circles \(b\in B\). Recall that the superpacking \(\mathscr{S}\) is defined as the orbit of \(B\) under the group \(\langle\Gamma_{1},\hat{\Gamma}_{1}\rangle\). The dual superpacking \(\hat{\mathscr{S}}\) is the orbit of \(\hat{B}\) under this group. Define \(\Gamma_{\mathscr{S}}=\textsc{Sym}(\mathscr{S},\hat{\mathscr{S}})\), the group of Mobius transformations which preserve both the superpacking and the dual superpacking. One has the following: **Theorem 4.11**.: _The group \(\langle\Gamma_{1},\hat{\Gamma}_{1}\rangle\) is a Coxeter group with generators \(\sigma_{c}\) for \(c\in B\), \(\sigma_{d}\) for \(d\in\hat{B}\), and relations \(\sigma_{c}^{2}=1\) for all \(c\in B\), \(\sigma_{d}^{2}=1\) for all \(d\in\hat{B}\), and \(\sigma_{c}\sigma_{d}=\sigma_{d}\sigma_{c}\) for all pairs \(c\in B\), \(d\in\hat{B}\) intersecting orthogonally._ **Theorem 4.12**.: \(\Gamma_{\mathscr{S}}\cong\langle\Gamma_{1},\hat{\Gamma}_{1}\rangle\rtimes\Gamma_{2}\)_._ We omit the proofs because they are similar to previous ones in this section. ## 5. Examples This section introduces examples of our construction. We call our three main examples the triangular, square, and hexagonal packings. We will focus on the arithmetic properties of these examples: quadratic forms and linear relations satisfied by the curvatures, integrality and superintegrality.
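For orientation, the classical model of such relations is the Descartes circle theorem (not one of our packings): four pairwise tangent circles with curvatures \(b_{1},b_{2},b_{3},b_{4}\) satisfy
\[(b_{1}+b_{2}+b_{3}+b_{4})^{2}=2\left(b_{1}^{2}+b_{2}^{2}+b_{3}^{2}+b_{4}^{2}\right),\]
and the two circles tangent to a fixed triple therefore have curvatures \(b_{4},b_{4}^{\prime}\) related linearly by \(b_{4}+b_{4}^{\prime}=2(b_{1}+b_{2}+b_{3})\). Quadratic and linear relations of exactly this shape, adapted to each example, are what produce integrality below.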
A "typical" packing satisfying our definition will have little arithmetic interest, but the highly symmetric nature of these examples adds more structure. Some of these examples have appeared in the literature in other contexts, but their properties have not been explored in detail. The square packing is discussed in [16, Figure 10.17] as the limit of the \(1/n\) cusp groups in Maskit's slice. As discussed in the introduction, the triangular and hexagonal packings appear in Kontorovich and Nakamura's classification of superintegral crystallographic packings. And our examples are closely related to the limit packings in [2]. The limit of pyramid packings is the original Apollonian packing, the limit of prism packings is our square packing, and the limit of antiprism packings is our triangular packing. Other families have more complicated limits, which will be briefly discussed at the end of this section. Every packing has linear and quadratic forms satisfied by the curvatures, like the Descartes quadratic form for the Apollonian packing. The following definition characterizes packings with number-theoretic structure: **Definition 5.1**.: _A packing \(\mathscr{P}\) is integral if every circle in \(\mathscr{P}\) has integral curvature. The packing is superintegral if every circle in the superpacking has integral curvature._ We will also say that an equivalence class of packings under Mobius transformations is (super)integral if one packing in the class has this property. The main tool to find the linear and quadratic forms, and check (super)integrality, is an inversive coordinate system for oriented generalized circles in \(\hat{\mathbb{C}}\). A circle is represented as \((\tilde{b},b,h_{1},h_{2})^{T}\in\mathbb{R}^{4}\), where \(b\) is the signed curvature, \(\tilde{b}\) is the curvature after inversion through the unit circle, and \((h_{1},h_{2})\) are the coordinates of the center, multiplied by the curvature. Every circle satisfies the quadratic equation \(h_{1}^{2}+h_{2}^{2}-b\tilde{b}=1\). The action of Mobius transformations on generalized circles becomes a linear action preserving the quadratic form in this coordinate system. This setup is well explained in [15] and in [11]. The article [1] gives a full set of linear and quadratic forms for all polyhedral packings. In each of the following examples, we begin with base and dual circle configurations. We check integrality and superintegrality using the inversive coordinate system. We give quadratic and linear relations sufficient to determine the curvatures of all the circles in the packing from a finite set of base circles (in fact, just three). The proofs of these relations are omitted because they are similar to the polyhedral case. We also relate these packings to others with commensurate symmetry groups. ### Square Packing The base and dual configurations for this packing are shown in Figure 2 (A). The dual configuration is a translation of the base configuration. In these configurations, all the circles can be represented with coordinates \((\tilde{b},b,h_{1},h_{2})^{T}\in\mathbb{Z}^{4}\), so the packing is superintegral. Quadratic and linear relations satisfied by curvatures in the square packing are shown in Figure 5. In the formulas, \(b_{i}\) represents the curvature of circle \(i\). Any image of one of these configurations under \(\Gamma\) will satisfy the same relation. The symmetry group of the square superpacking is commensurate to the symmetry group of the Apollonian superpacking, as illustrated in Figure 6. 
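To illustrate the inversive coordinate system in a concrete case (with the conventions above, up to the choice of orientation), a circle of radius \(r\) centered at \((x,y)\) has
\[(\tilde{b},b,h_{1},h_{2})^{T}=\left(\frac{x^{2}+y^{2}-r^{2}}{r},\ \frac{1}{r},\ \frac{x}{r},\ \frac{y}{r}\right)^{T},\qquad h_{1}^{2}+h_{2}^{2}-b\tilde{b}=\frac{x^{2}+y^{2}}{r^{2}}-\frac{x^{2}+y^{2}-r^{2}}{r^{2}}=1;\]
for instance, the unit circle centered at the origin has coordinates \((-1,1,0,0)^{T}\).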
The full packing is shown in Figure 2 (B).

Figure 5. Quadratic form and linear relations

### Triangular and Hexagonal Packings

The base configuration and dual configurations for the triangular packing are shown in Figure 7. In the hexagonal packing, the roles of base and dual configurations are reversed. Circles in the base and dual configuration for the triangular packing can be represented with coordinates in the sets \[\left\{(\tilde{b},b,h_{1},h_{2})^{T}\in\mathbb{R}^{4}\ \Big{|}\ b,\tilde{b}\in\mathbb{Z},\,h_{1}+h_{2}i\in 2\mathbb{Z}\left[\frac{1+i\sqrt{3}}{2}\right]\right\},\] \[\left\{(\tilde{b},b,h_{1},h_{2})^{T}\in\mathbb{R}^{4}\ \Big{|}\ b,\tilde{b}\in\sqrt{3}\mathbb{Z},\,h_{1}+h_{2}i\in 2i\mathbb{Z}\left[\frac{1+i\sqrt{3}}{2}\right]\setminus 2\sqrt{3}\mathbb{Z}\left[\frac{1+i\sqrt{3}}{2}\right]\right\}\] respectively. Reflections across the base and dual circles preserve these sets, so the triangular packing is superintegral, and rescaling by \(\sqrt{3}\), we see that the hexagonal packing is superintegral as well. Quadratic and linear relations satisfied by curvatures in the triangular and hexagonal packings are shown in Figure 8. The symmetry group of the triangular and hexagonal superpackings is commensurate to the symmetry group of the limit of trapezoidal superpackings from [2], as illustrated in Figure 9. We conjecture that this group is not commensurate to the symmetry group of any polyhedral superpacking. The full triangular packing is shown in Figure 10 and the full hexagonal packing is shown in Figure 11. Note: the circles in Figures 2 (B), 10, and 11 were obtained by reflecting a subset of base circles across four generations of dual circles.

Figure 6. Commensurability of superpacking groups

Figure 7. Base and dual configurations for the triangular and hexagonal packings

Figure 8. Quadratic forms and linear relation

Figure 9. Commensurability of superpacking groups

We finish this section with some brief remarks on the limits of polyhedral packing families studied in [2]. The limit of trapezoidal packings is superintegral. As discussed above, the symmetry group of its superpacking is commensurate to that of the triangular and hexagonal superpackings. The limit of the cupola packings is also superintegral. The symmetry group of its superpacking is commensurate to that of the octahedral superpacking. The limit of the anticupola packings is not integral; this can be proven by an infinite descent argument.

Figure 10. Triangular packing

Figure 11. Hexagonal packing

## 6. Wallpaper Groups

The goal of this section is to illustrate the rich variety of circle packings satisfying Definition 2.7, with a focus on symmetry groups. We give the following converse to Theorem 4.2:

**Theorem 6.1**.: _All the possible groups listed in Theorem 4.2 actually arise as the group of symmetries \(\Gamma_{2}=\textsc{Sym}(B,\hat{B})\) for some base and dual configurations \(B\), \(\hat{B}\) satisfying Definition 2.1._

In the case of finite \(B\), \(\hat{B}\), this theorem follows directly from the Koebe-Andreev-Thurston theorem. For infinite \(B\), \(\hat{B}\), the wallpaper group case is Theorem 6.2 below. The other cases are simpler, and the proofs are omitted.

**Theorem 6.2**.: _Any wallpaper group is the symmetry group \(\Gamma_{2}\) of the base and dual configurations \(B\), \(\hat{B}\) of some circle packing. Moreover, such \(B\) can be realized as the refinement of the base configuration of either the triangular or square packing._

Proof.: We will prove this theorem by illustration.
We label the centers of rotations and axes of reflections using notation introduced in Figure 12. The entire base configurations can be generated by these symmetries or translations, starting from the circles shown. The wallpaper groups \(p6m\) and \(p4m\) are the most complicated ones, and they are the symmetry groups of the base configurations of triangular and square packings, respectively, as shown in Figure 13. Refining these two base configurations by adding smaller circles removes symmetries. Thus, we can obtain base configurations with different symmetry groups by refining the triangular or square configurations. Notice that the refinements given below are well-defined base configurations because the associated dual configurations still exist. Refinements of the triangular configuration are shown in Figure 14 and refinements of the square configuration are shown in Figure 15. Some wallpaper groups could be obtained by refining either configuration, but we only show one realization of each group.

Figure 12. Labels for axes of reflection, centers of rotation

Figure 13. Maximal wallpaper groups

Figure 14. Refinements of the triangular base configuration

Figure 15. Refinements of the square base configuration
2308.13132
Braided tensor products and polynomial invariants for the quantum queer superalgebra
The classical invariant theory for the queer Lie superalgebra $\mathfrak{q}_n$ investigates its invariants in the supersymmetric algebra $$\mathcal{U}_{s,l}^{r,k}:=\mathrm{Sym}\left(V^{\oplus r}\oplus \Pi(V)^{\oplus k}\oplus V^{*\oplus s}\oplus \Pi(V^*)^{\oplus l} \right),$$ where $V=\mathbb{C}^{n|n}$ is the natural supermodule, $V^*$ is its dual and $\Pi$ is the parity reversing functor. This paper aims to construct a quantum analogue $\mathcal{B}^{r,k}_{s,l}$ of $\mathcal{U}_{s,l}^{r,k}$ and to explore the quantum queer superalgebra $\mathrm{U}_q(\mathfrak{q}_n)$-invariants in $\mathcal{B}^{r,k}_{s,l}$. The strategy involves braided tensor products of the quantum analogues $\mathsf{A}_{r,n}$, $\mathsf{A}_{k,n}^{\Pi}$ of the supersymmetric algebras $\mathrm{Sym}\left(V^{\oplus r}\right)$, $\mathrm{Sym}\left(\Pi(V)^{\oplus k}\right)$, and their dual partners $\bar{\mathsf{A}}_{s,n}$, and $\bar{\mathsf{A}}_{l,n}^{\Pi}$. These braided tensor products are defined using explicit braiding operator due to the absence of a universal R-matrix for $\mathrm{U}_q(\mathfrak{q}_n)$. Furthermore, we obtain an isomorphism between the braided tensor product $\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}$ and $\mathsf{A}_{r+k,n}$, an isomorphism between $\mathsf{A}_{k,n}^{\Pi}$ and $\mathsf{A}_{k,n}$, as well as the corresponding isomorphisms for their dual parts. Consequently, the $\mathrm{U}_q(\mathfrak{q}_n)$-supermodule superalgebra $\mathcal{B}^{r,k}_{s,l}$ is identified with $\mathcal{B}^{r+k,0}_{s+l,0}$. This allows us to obtain a set of generators of $\mathrm{U}_q(\mathfrak{q}_n)$-invariants in $\mathcal{B}^{r,k}_{s,l}$.
Zhihua Chang, Yongjie Wang
2023-08-25T01:50:34Z
http://arxiv.org/abs/2308.13132v1
# Braided tensor products and polynomial invariants for the quantum queer superalgebra

###### Abstract

The classical invariant theory for the queer Lie superalgebra \(q_{n}\) investigates its invariants in the supersymmetric algebra \[\mathcal{U}_{s,l}^{r,k}:=\operatorname{Sym}\left(V^{\oplus r}\oplus\Pi(V)^{\oplus k}\oplus V^{*\oplus s}\oplus\Pi(V^{*})^{\oplus l}\right),\] where \(V=\mathbb{C}^{n|n}\) is the natural supermodule, \(V^{*}\) is its dual and \(\Pi\) is the parity reversing functor. This paper aims to construct a quantum analogue \(\mathcal{B}_{s,l}^{r,k}\) of \(\mathcal{U}_{s,l}^{r,k}\) and to explore the quantum queer superalgebra \(U_{q}(q_{n})\)-invariants in \(\mathcal{B}_{s,l}^{r,k}\). The strategy involves braided tensor products of the quantum analogues \(A_{r,n}\), \(A_{k,n}^{\Pi}\) of the supersymmetric algebras \(\operatorname{Sym}\left(V^{\oplus r}\right)\), \(\operatorname{Sym}\left(\Pi(V)^{\oplus k}\right)\), and their dual partners \(\bar{A}_{s,n}\) and \(\bar{A}_{l,n}^{\Pi}\). These braided tensor products are defined using an explicit braiding operator due to the absence of a universal R-matrix for \(U_{q}(q_{n})\). Furthermore, we obtain an isomorphism between the braided tensor product \(A_{r,n}\otimes A_{k,n}\) and \(A_{r+k,n}\), an isomorphism between \(A_{k,n}^{\Pi}\) and \(A_{k,n}\), as well as the corresponding isomorphisms for their dual parts. Consequently, the \(U_{q}(q_{n})\)-supermodule superalgebra \(\mathcal{B}_{s,l}^{r,k}\) is identified with \(\mathcal{B}_{s+l,0}^{r+k,0}\). This allows us to obtain a set of generators of \(U_{q}(q_{n})\)-invariants in \(\mathcal{B}_{s,l}^{r,k}\).

_MSC(2020):_ 17B37, 16T20, 20G42.

_Keywords:_ Quantum queer superalgebra; Braided tensor product; Invariant theory.

## 1 Introduction

Invariant theory is one of the most inspiring themes in representation theory. It has been studied extensively for classical (super)groups [8, 13, 16, 17, 18, 19, 24], Lie (super)algebras [7, 13, 23], quantum groups [14, 20, 25], and quantum superalgebras [6, 15, 22, 26]. Invariants of quantum superalgebras are not merely of mathematical interest but are also physically important. The exploration of invariants in supersymmetric algebras under an action of a supergroup usually corresponds to observables in physical theories. Inspired by Howe's unified approach [13] to investigate classical invariant theory, Sergeev obtained polynomial invariants of matrix Lie superalgebras in [23]. For the universal enveloping superalgebra \(\mathrm{U}(\mathfrak{q}_{n})\) of type \(Q\), its invariants in the supersymmetric algebra \[\mathcal{U}^{r,k}_{s,l}:=\mathrm{Sym}\left(V^{\oplus r}\oplus\Pi(V)^{\oplus k}\oplus V^{*\oplus s}\oplus\Pi(V^{*})^{\oplus l}\right),\] were explicitly described, where \(V\) is the natural \(\mathrm{U}(\mathfrak{q}_{n})\)-supermodule. The \(\mathrm{U}(\mathfrak{q}_{n})\)-supermodule \(V^{\oplus r}\oplus\Pi(V)^{\oplus k}\oplus V^{*\oplus s}\oplus\Pi(V^{*})^{\oplus l}\) is identified with \(\left(\mathbb{C}^{r|k}\otimes V\right)\oplus\left(V^{*}\otimes\mathbb{C}^{s|l}\right)\). It is pointed out in [23] that \(\mathbb{C}^{r|k}\otimes V\) can be further identified with one of the two isomorphic irreducible \(\mathrm{sub-U}(\mathfrak{q}_{r+k})\otimes\mathrm{U}(\mathfrak{q}_{n})\)-supermodules of \(\mathbb{C}^{r+k|r+k}\otimes V\). This leads to a \(\mathrm{U}(\mathfrak{q}_{r+k})\)-supermodule structure on \(\mathbb{C}^{r|k}\otimes V\).
Moreover, \(\mathrm{Sym}\left(\mathbb{C}^{r+k|r+k}\otimes V\right)\) admits a multiplicity-free decomposition as a \(\mathrm{U}(\mathfrak{q}_{r+k})\otimes\mathrm{U}(\mathfrak{q}_{n})\)-supermodule by the Howe duality [7]. This procedure is also valid for the dual part. Thus the superalgebra \(\mathcal{U}^{r,k}_{s,l}\) is identified with \(\mathcal{U}^{r+k,0}_{s+l,0}\). The purpose of this paper is to illustrate such an isomorphism in the quantum case, and to obtain a set of generators of the quantum queer superalgebra \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-invariants in a quantum analogue \(\mathcal{B}^{r,k}_{s,l}\) of the supersymmetric algebra \(\mathcal{U}^{r,k}_{s,l}\). A quantum deformation of \(\mathrm{U}(\mathfrak{q}_{n})\) was constructed in G. Olshanski's letter [21] by using the Faddeev-Reshetikhin-Takhtajan (FRT) formalism. Since the queer Lie superalgebra does not have an even, non-degenerate, invariant bilinear form, the \(r\)-matrix \(r\in\mathfrak{q}_{n}^{\otimes 2}\) does not satisfy the classical Yang-Baxter equation; thus the quantum queer superalgebra \(\mathrm{U}_{q}(\mathfrak{q}_{n})\) is not a quasi-triangular Hopf superalgebra. The highest weight representation theory and the crystal basis theory for the quantum queer superalgebra were investigated in [10, 11, 12]. The authors established a Howe duality for quantum queer superalgebras by introducing a quantum coordinate superalgebra [5], which is isomorphic to a braided supersymmetric algebra. Based on the Howe duality, we established in [6] a first fundamental theorem (FFT) of invariant theory for \(\mathrm{U}_{q}(\mathfrak{q}_{n})\) acting on a quantum analogue \(\mathcal{O}_{r,s}\) of \(\mathrm{Sym}\left(V^{\oplus r}\oplus V^{*\oplus s}\right)\). Since the quantum queer superalgebra does not possess a universal \(\mathcal{R}\)-matrix, the technique developed in establishing the first fundamental theorem of invariant theory for quantum groups [14] could not be used in the quantum queer superalgebra setting. Our approach to obtaining the invariants for the quantum queer superalgebra takes advantage of an explicit braiding operator \(\Upsilon\) between a quantum coordinate superalgebra and a dual quantum coordinate superalgebra. The braiding operator enables us to define a braided tensor product, or \(R\)-twisted tensor product, structure on \(\mathcal{O}_{r,s}=\mathrm{A}_{r,n}\otimes\tilde{\mathrm{A}}_{s,n}\) (that is, \(\mathcal{B}^{r,0}_{s,0}\)). More importantly, it is compatible with the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-action. The definition of the braiding operator is based on the \(S\)-matrix on the natural representation instead of the universal \(\mathcal{R}\)-matrix. One of the nice aspects of our approach is that all relations can be expressed via matrices; this allows us to avoid a lot of complicated calculations, see [6] for more details. In this paper, we continue to utilize this advantage to investigate the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-invariants in \(\mathcal{B}_{s,l}^{r,k}\). We extend the work of [23, Theorem 1.5.1] to the quantum queer superalgebra. On one hand, we explicitly construct a braiding operator \(\Theta:\mathrm{A}_{k,n}\otimes\mathrm{A}_{r,n}\to\mathrm{A}_{r,n}\otimes\mathrm{A}_{k,n}\) (resp. \(\tilde{\Theta}:\tilde{\mathrm{A}}_{l,n}\otimes\tilde{\mathrm{A}}_{s,n}\to\tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}\)), and prove that the resulting braided tensor product \(\mathrm{A}_{r,n}\otimes\mathrm{A}_{k,n}\) (resp. \(\tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}\)) is isomorphic to \(\mathrm{A}_{r+k,n}\) (resp. \(\tilde{\mathrm{A}}_{s+l,n}\)). Thus, we obtain that \(\mathrm{A}_{r,n}\) (resp. \(\tilde{\mathrm{A}}_{s,n}\)) is isomorphic to the braided tensor product of \(r\) (resp. \(s\)) copies of the quantum supersymmetric algebra \(S_{q}(V)\) (resp. \(S_{q}(V^{*})\)). On the other hand, we define a quantum analogue \(\mathrm{A}_{k,n}^{\Pi}\) (resp. \(\tilde{\mathrm{A}}_{l,n}^{\Pi}\)) of the supersymmetric algebra \(\mathrm{Sym}(\Pi(V)^{\oplus k})\) (resp. \(\mathrm{Sym}(\Pi(V^{*})^{\oplus l})\)), which is isomorphic to the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra \(\mathrm{A}_{k,n}\) (resp. \(\tilde{\mathrm{A}}_{l,n}\)). This allows us to identify \(\mathcal{B}_{s,l}^{r,k}=\mathrm{A}_{r,n}\otimes\mathrm{A}_{k,n}^{\Pi}\otimes\tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}^{\Pi}\) with \(\mathcal{B}_{s+l,0}^{r+k,0}=\mathrm{A}_{r+k,n}\otimes\tilde{\mathrm{A}}_{s+l,n}\). Combining with our previous result [6, Theorem 5.10], we derive generators of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-invariants in \(\mathcal{B}_{s,l}^{r,k}\). The remainder of this paper is organized as follows. We review some definitions related to braided tensor products of superalgebras, the quantum queer superalgebra, and \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras in Section 2. We explicitly construct a braided tensor product \(\mathrm{A}_{r,n}\otimes\mathrm{A}_{k,n}\) of two quantum coordinate superalgebras and identify it with \(\mathrm{A}_{r+k,n}\) in Section 3. The definition of a quantum analogue \(\mathrm{A}_{k,n}^{\Pi}\) of \(\mathrm{Sym}\left(\Pi(V)^{\oplus k}\right)\) and an isomorphism from \(\mathrm{A}_{k,n}^{\Pi}\) to \(\mathrm{A}_{k,n}\) are also obtained in this section. Section 4 is devoted to braided tensor products of dual quantum coordinate superalgebras and a quantum analogue of \(\mathrm{Sym}\left(\Pi(V^{*})^{\oplus l}\right)\). The \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-invariants in a quantum analogue \(\mathcal{B}_{s,l}^{r,k}\) of the supersymmetric algebra \(\mathcal{U}_{s,l}^{r,k}\) are investigated in Section 5.

## 2 Quantum queer superalgebra and their coordinate superalgebra

Let \(A\) and \(B\) be two superalgebras over a commutative ring. A twisted tensor product [4, Definition 2.1] of \(A\) and \(B\) is a superalgebra \(C\), together with two injective homomorphisms \(i_{A}:A\to C\) and \(i_{B}:B\to C\), such that the canonical linear map \((i_{A},i_{B}):A\otimes B\to C\) defined by \((i_{A},i_{B})(a\otimes b)=i_{A}(a)i_{B}(b)\) is an isomorphism of vector superspaces. Twisted tensor products can be characterized by twisting maps. According to [4, Proposition 2.7], for any twisted tensor product \((C,i_{A},i_{B})\) of two superalgebras \(A\) and \(B\), there exists a twisting map \(\sigma:B\otimes A\to A\otimes B\) such that \(C\) is isomorphic to \(A\otimes B\) as superalgebras under the following multiplication \(\mu_{\sigma}\): \[\mu_{\sigma}:=(\mu_{A}\otimes\mu_{B})(\mathrm{id}_{A}\otimes\sigma\otimes\mathrm{id}_{B}),\] where \(\mu_{A}\) (resp. \(\mu_{B}\)) is the multiplication on \(A\) (resp. \(B\)). Furthermore, if \(\sigma\) satisfies the usual hexagon axioms, then \(\sigma\) is a braiding operator in the sense of [9, Definition 8.1.1]. The twisted tensor product \(A\otimes B\) is also called a braided tensor product.
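The simplest instance, recorded here only for orientation, is the super flip: for any superalgebras \(A\) and \(B\), the twisting map
\[\sigma:B\otimes A\to A\otimes B,\qquad\sigma(b\otimes a)=(-1)^{|a||b|}\,a\otimes b\]
(on homogeneous elements) satisfies the hexagon axioms, and the resulting multiplication \(\mu_{\sigma}\) recovers the usual super tensor product algebra, \((a\otimes b)(a^{\prime}\otimes b^{\prime})=(-1)^{|b||a^{\prime}|}aa^{\prime}\otimes bb^{\prime}\). The braiding operators constructed in this paper (Propositions 3.3 and 4.4) may be viewed as \(q\)-deformations of this flip.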
For the quantum group \(\mathrm{U}_{q}(\mathfrak{gl}_{n})\) of type \(A\), a braiding operator of \(\mathrm{U}_{q}(\mathfrak{gl}_{n})\)-module algebras is given by the composition of the permutation operator \(P\) with the universal \(\mathcal{R}\)-matrix. However, the quantum superalgebra of type \(Q\) does not have a universal \(\mathcal{R}\)-matrix, as pointed out in [21]. In this paper, we aim to construct an explicit braiding operator among quantum coordinate superalgebras and dual quantum coordinate superalgebras; see Propositions 3.3 and 4.4. We always assume that the base field \(\mathbb{C}(q)\) is the field of rational functions in an indeterminate \(q\). For a positive integer \(n\), we denote \(I_{n|n}:=\{-n,\ldots,-1,1,\ldots,n\}\). Let \(V_{q}\) be the \(2n\)-dimensional \(\mathbb{C}(q)\)-vector superspace with the basis \(\{v_{i},i\in I_{n|n}\}\), which is equipped with a \(\mathbb{Z}/2\mathbb{Z}\)-grading \[|v_{i}|=|i|:=\begin{cases}\tilde{0},&\text{if }i>0,\\ \tilde{1},&\text{if }i<0.\end{cases}\] Then \(\operatorname{End}(V_{q})\) is naturally an associative superalgebra, in which the standard matrix unit \(E_{ij}\) is of parity \(|i|+|j|\) for \(i,j\in I_{n|n}\). As in [21], we set \[S:=\sum_{i,j\in I_{n|n}}q^{\varphi(i,j)}E_{ii}\otimes E_{jj}+\xi\sum_{i<j}(-1)^{|i|}(E_{ji}+E_{-j,-i})\otimes E_{ij}\in\operatorname{End}(V_{q})^{\otimes 2} \tag{2.1}\] \[=\sum_{i,j\in I_{n|n}}S_{ij}\otimes E_{ij},\] where \(\delta_{ij}\) is the standard Kronecker symbol, \(\varphi(i,j)=(-1)^{|j|}(\delta_{ij}+\delta_{i,-j})\) and \(\xi=q-q^{-1}\). It satisfies the quantum Yang-Baxter equation: \[S^{12}S^{13}S^{23}=S^{23}S^{13}S^{12},\] where \[S^{12}=S\otimes 1,\quad S^{23}=1\otimes S,\quad S^{13}=\sum_{i,j\in I_{n|n}}S_{ij}\otimes 1\otimes E_{ij}.\] _The quantum queer superalgebra_ is defined via the Faddeev-Reshetikhin-Takhtajan presentation as follows: **Definition 2.1** (G. Olshanski [21, Definition 4.2]).: The quantum queer superalgebra \(\operatorname{U}_{q}(\mathfrak{q}_{n})\) is the unital associative superalgebra over \(\mathbb{C}(q)\) generated by elements \(L_{ij}\) of parity \(|i|+|j|\) for \(i,j\in I_{n|n}\) and \(i\leqslant j\), with defining relations: \[L_{ii}L_{-i,-i}=1=L_{-i,-i}L_{ii}, \tag{2.2}\] \[L^{[1]2}L^{[1]3}S^{23}=S^{23}L^{[1]3}L^{[1]2}, \tag{2.3}\] where \(L^{[1]2}=\sum\limits_{i\leqslant j}L_{ij}\otimes E_{ij}\otimes 1\), \(L^{[1]3}=\sum\limits_{i\leqslant j}L_{ij}\otimes 1\otimes E_{ij}\) and the relation (2.3) holds in \(\operatorname{U}_{q}(\mathfrak{q}_{n})\otimes\operatorname{End}(V_{q})\otimes\operatorname{End}(V_{q})\). The associative superalgebra \(\operatorname{U}_{q}(\mathfrak{q}_{n})\) is a Hopf superalgebra with the comultiplication \(\Delta\), the counit \(\varepsilon\) and the antipode \(\mathcal{S}\) given by: \[\Delta(L)=L\otimes L,\qquad\varepsilon(L)=1,\qquad\mathcal{S}(L)=L^{-1}.\] The \(\mathbb{C}(q)\)-vector superspace \(V_{q}\) is naturally a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule via the homomorphism \[\mathrm{U}_{q}(\mathfrak{q}_{n})\to\mathrm{End}(V_{q}),\quad L\mapsto S,\] where \(S\) is the matrix given in (2.1). If \(B\) is a superalgebra which is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule, we will consider the compatibility of the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-action and the multiplication on \(B\) in the following sense: **Definition 2.2**.: Let \(B\) be a unital associative superalgebra that is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule. 1.
\(B\) is a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra if \[u.(ab)=\sum_{(u)}(-1)^{|u_{(2)}||a|}(u_{(1)}.a)(u_{(2)}.b),\text{ and }u.1=\varepsilon(u)1,\] for all \(a,b\in B\) and \(u\in\mathrm{U}_{q}(\mathfrak{q}_{n})\), where we use Sweedler's notation \(\Delta(u)=\sum_{(u)}u_{(1)}\otimes u_{(2)}\). 2. \(B\) is a \(\mathrm{U}_{q}(\mathfrak{q}_{n})^{\mathrm{cop}}\)-supermodule superalgebra if \[u.(ab)=\sum_{(u)}(-1)^{|u_{(1)}|(|u_{(2)}|+|a|)}(u_{(2)}.a)(u_{(1)}.b),\text{ and }u.1=\varepsilon(u)1,\] for all \(a,b\in B\) and \(u\in\mathrm{U}_{q}(\mathfrak{q}_{n})\). Let \(B_{1}\) and \(B_{2}\) be two \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras. Suppose that there is a homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules \(\vartheta:B_{2}\otimes B_{1}\to B_{1}\otimes B_{2}\) such that the following two diagrams commute; written as identities of maps, these read \[\vartheta\circ(\mathrm{id}_{B_{2}}\otimes\mathrm{mult})=(\mathrm{mult}\otimes\mathrm{id}_{B_{2}})\circ(\mathrm{id}_{B_{1}}\otimes\vartheta)\circ(\vartheta\otimes\mathrm{id}_{B_{1}}):B_{2}\otimes B_{1}\otimes B_{1}\to B_{1}\otimes B_{2}, \tag{2.4}\] and \[\vartheta\circ(\mathrm{mult}\otimes\mathrm{id}_{B_{1}})=(\mathrm{id}_{B_{1}}\otimes\mathrm{mult})\circ(\vartheta\otimes\mathrm{id}_{B_{2}})\circ(\mathrm{id}_{B_{2}}\otimes\vartheta):B_{2}\otimes B_{2}\otimes B_{1}\to B_{1}\otimes B_{2}, \tag{2.5}\] where \(\mathrm{mult}\) denotes the multiplication map (on \(B_{1}\) in (2.4) and on \(B_{2}\) in (2.5)). Then it has been shown in [6, Lemma 3.1] that the braided tensor product \(B_{1}\otimes B_{2}\) is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra under the multiplication \[B_{1}\otimes B_{2}\otimes B_{1}\otimes B_{2}\xrightarrow{1\otimes\vartheta\otimes 1}B_{1}\otimes B_{1}\otimes B_{2}\otimes B_{2}\xrightarrow{\mathrm{mult}\otimes\mathrm{mult}}B_{1}\otimes B_{2}.\]

## 3 Quantum Coordinate Superalgebras

A quantum coordinate superalgebra \(\mathsf{A}_{r,n}\) of type \(Q\) is a quantum analogue of the supersymmetric algebra \(\mathrm{Sym}\left(V^{\oplus r}\right)\) on which \(\mathrm{U}(\mathfrak{q}_{n})\) acts by superalgebra endomorphisms. In the classical (non-quantum) case, there is an isomorphism of \(\mathrm{U}(\mathfrak{q}_{n})\)-supermodule superalgebras \[\mathrm{Sym}\left(V^{\oplus(r+k)}\right)\cong\mathrm{Sym}\left(V^{\oplus r}\right)\otimes\mathrm{Sym}\left(V^{\oplus k}\right).\] This section is devoted to establishing such an isomorphism in the quantum case, where the superalgebra structure on the tensor product of two quantum coordinate superalgebras will be the braided tensor product. The quantum coordinate superalgebra \(\mathsf{A}_{r,n}\) is defined in [5, Section 4] as a sub-superalgebra of the finite dual \(\mathrm{U}_{q}(\mathfrak{q}_{n})^{\circ}\) generated by certain matrix elements with respect to the natural \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule \(V_{q}\). According to [3, Proposition 3.6.4] and [6, Section 2], the quantum coordinate superalgebra \(\mathsf{A}_{r,n}\) can also be described via generators and relations as follows: **Definition 3.1**.: The _quantum coordinate superalgebra_ \(\mathsf{A}_{r,n}\) is the unital associative superalgebra generated by \(t_{ia}\) of parity \(|i|+|a|\) with \(i\in I_{r|r},a\in I_{n|n}\), subject to the relations \[t_{ia} =t_{-i,-a}, \tag{3.1}\] \[S^{12}\,T^{1[3]}\,T^{2[3]} =T^{2[3]}\,T^{1[3]}S^{12}, \tag{3.2}\] where \(T=\sum\limits_{i\in I_{r|r},a\in I_{n|n}}E_{ia}\otimes t_{ia}\). It is a quantum analogue of the supersymmetric algebra \(\mathrm{Sym}\left(V^{\oplus r}\right)\).
It is shown in [6, Lemma 3.2] that the superalgebra \(\mathsf{A}_{r,n}\) is also presented by generators \(t_{ia}\) with \(i=1,\ldots,r\), \(a\in I_{n|n}\) and the relation \[R_{+}^{12}\,T_{+}^{1[3]}\,T_{+}^{2[3]}=T_{+}^{2[3]}\,T_{+}^{1[3]}S^{12}, \tag{3.3}\] where \(T_{+}=\sum\limits_{i=1}^{r}\sum\limits_{a\in I_{n|n}}E_{ia}\otimes t_{ia}\), \(S\) is the matrix (2.1), and \[R_{+}:=\sum\limits_{i,j=1}^{r}q^{\delta_{ij}}E_{ii}\otimes E_{jj}+\xi\sum\limits_{1\leqslant i<j\leqslant r}E_{ji}\otimes E_{ij} \tag{3.4}\] is the submatrix\({}^{1}\) of \(S\) involving the terms \(E_{ik}\otimes E_{jl}\) with \(1\leqslant i,j,k,l\leqslant r\). Footnote 1: The submatrix \(R\) of \(S\) is exactly the R-matrix of \(\mathrm{U}_{q}(\mathfrak{q}_{r})\). Besides being an associative superalgebra, \(\mathsf{A}_{r,n}\) is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra (see [6, Lemma 2.3]) in the sense of Definition 2.2 with the action \(\Phi\) determined by \[L^{[2]3}\underset{\Phi}{\cdot}T^{1[2]}=T^{1[2]}S^{13}, \tag{3.5}\] where \(S\) is the matrix (2.1) and \[L^{[2]3}\underset{\Phi}{\cdot}T^{1[2]}:=\sum\limits_{a,b,c\in I_{n|n}}\sum\limits_{i\in I_{r|r}}(-1)^{(|a|+|b|)(|i|+|c|)}E_{ic}\otimes\Phi_{L_{ab}}(t_{ic})\otimes E_{ab}.\] According to [5, Remark 4.4], the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra \(\mathsf{A}_{r,n}\) is a quantum analogue of the \(\mathrm{U}(\mathfrak{q}_{n})\)-supermodule superalgebra \(\mathrm{Sym}\left(V^{\oplus r}\right)\). **Lemma 3.2**.: _For each \(0\leq p\leq k\), there is an injective homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras_ \[\iota_{p}:\mathsf{A}_{r,n}\to\mathsf{A}_{r+k,n},\quad t_{ia}\mapsto t_{p+i,a},\quad i=1,\ldots,r,\quad a\in I_{n|n}.\] Proof.: Since \(\mathsf{A}_{r,n}\) is generated by \(t_{ia},i=1,\ldots,r,a\in I_{n|n}\), the assignment \(t_{ia}\mapsto t_{p+i,a}\) extends to a homomorphism from the free unital associative superalgebra generated by \(t_{ia},i=1,\ldots,r,a\in I_{n|n}\) to the associative superalgebra \(\mathsf{A}_{r+k,n}\). By (3.3), the elements \(t_{ia}\) with \(i=1,\ldots,r\), \(a\in I_{n|n}\) in \(\mathsf{A}_{r,n}\) satisfy the relation: \[q^{\delta_{ij}}\,t_{ia}\,t_{jb}-(-1)^{|a||b|}q^{\varphi(a,b)}\,t_{jb}\,t_{ia}=\xi(\delta_{a<b}-\delta_{j<i})\,t_{ja}\,t_{ib}+(-1)^{|b|}\xi\delta_{-a<b}\,t_{j,-a}\,t_{i,-b}.\] On the other hand, the elements \(t_{p+i,a}\) with \(i=1,\ldots,r\), \(a\in I_{n|n}\) in \(\mathsf{A}_{r+k,n}\) satisfy the relation: \[q^{\delta_{p+i,p+j}}\,t_{p+i,a}\,t_{p+j,b}-(-1)^{|a||b|}q^{\varphi(a,b)}\,t_{p+j,b}\,t_{p+i,a}\] \[= \xi(\delta_{a<b}-\delta_{p+j<p+i})\,t_{p+j,a}\,t_{p+i,b}+(-1)^{|b|}\xi\delta_{-a<b}\,t_{p+j,-a}\,t_{p+i,-b}.\] Hence, there is a well-defined homomorphism of associative superalgebras \(\iota_{p}:\mathsf{A}_{r,n}\to\mathsf{A}_{r+k,n}\) such that \(t_{ia}\mapsto t_{p+i,a}\). Since the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-actions on \(\mathsf{A}_{r,n}\) and \(\mathsf{A}_{r+k,n}\) are given by (3.5), \(\iota_{p}\) is also a homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules. The injectivity of \(\iota_{p}\) follows from [3, Proposition 3.6.4]. It states that the monomials \[\prod_{i=1,\ldots,r,a\in I_{n|n}}t_{ia}^{d_{ia}},\quad d_{ia}\geq 0\text{ if }|a|=\bar{0}\text{ and }d_{ia}\in\{0,1\}\text{ if }|a|=\bar{1}\] form a \(\mathbb{C}(q)\)-basis of \(\mathsf{A}_{r,n}\), where the product is taken with respect to the lexicographic order.
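For instance, in the smallest case \(r=n=1\) (an illustration only, not needed in the sequel), \(\mathsf{A}_{1,1}\) has the even generator \(t_{1,1}\) and the odd generator \(t_{1,-1}\), and the monomial basis above reads
\[\left\{t_{1,1}^{d}\,t_{1,-1}^{\epsilon}\ :\ d\geq 0,\ \epsilon\in\{0,1\}\right\}.\]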
In order to properly define a braided multiplication on the tensor product \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\) such that it is compatible with the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-action on \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\) in the sense of Definition 2.2, we need a braiding operator defined in the following proposition. **Proposition 3.3**.: _There is a unique homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules_ \[\Theta:\mathsf{A}_{k,n}\otimes\mathsf{A}_{r,n}\to\mathsf{A}_{r,n}\otimes \mathsf{A}_{k,n}\] _satisfying the commutative diagrams (2.4),(2.5), and such that_ \[\Theta\left(1\otimes x\right)= x\otimes 1,\quad x\in\mathsf{A}_{r,n} \tag{3.6}\] \[\Theta\left(y\otimes 1\right)= 1\otimes y\quad y\in\mathsf{A}_{k,n},\] (3.7) \[\Theta\left(T_{k}^{1[3]}\,T_{r}^{2[4]}\right)= T_{r}^{2[3]}\,T_{k}^{1[4]}\,S^{12}, \tag{3.8}\] _where \(T_{r}=\sum\limits_{\begin{subarray}{c}i=1,\ldots,r\\ a\in I_{n|n}\end{subarray}}E_{ia}\otimes t_{ia}\) and \(T_{k}=\sum\limits_{\begin{subarray}{c}j=1,\ldots,k\\ b\in I_{n|n}\end{subarray}}E_{jb}\otimes t_{jb}\) are the generator matrices of \(\mathsf{A}_{r,n}\) and \(\mathsf{A}_{k,n}\) respectively._ Proof.: Let \(W_{r}\) (resp. \(W_{k}\))be the \(\mathbb{C}(q)\)-sub-superspace of \(\mathsf{A}_{r,n}\) (resp. \(\mathsf{A}_{k,n}\)) spanned by \(1\) and \(t_{ia}\) for \(i=1,\ldots,r\) and \(a\in I_{n|n}\) (resp. \(t_{ia}\) for \(i=1,\ldots,k\) and \(a\in I_{n|n}\)). Then (3.6), (3.7) and (3.8) define a \(\mathbb{C}(q)\)-linear map \(W_{k}\otimes W_{r}\to W_{r}\otimes W_{k}\), which extends to a \(\mathbb{C}(q)\)-linear map \(\widetilde{\Theta}:\mathsf{F}_{k,n}\otimes\mathsf{F}_{r,n}\to\mathsf{A}_{r,n} \otimes\mathsf{A}_{k,n}\) satisfying the commutative diagrams (2.4) and (2.5), where \(\mathsf{F}_{r,n}\) and \(\mathsf{F}_{k,n}\) are the free associative superalgebras generated by \(t_{ia}\), \(i=1,\ldots,r\), \(a\in I_{n|n}\) and \(t_{jb}\), \(j=1,\ldots,k\), \(b\in I_{n|n}\), respectively. We need to show that \(\widetilde{\Theta}\) preserves the defining relations (3.3) for \(\mathsf{A}_{r,n}\) and \(\mathsf{A}_{k,n}\). 
Recall from (3.3) that the superalgebra \(\mathsf{A}_{r,n}\) is presented by the generators \(t_{ia}\), \(i=1,\ldots,r\) and \(a\in I_{n|n}\) and the relation \[R_{+}^{12}\,T_{r}^{1[3]}\,T_{r}^{2[3]}=T_{r}^{2[3]}\,T_{r}^{1[3]}S^{12}.\] Using the commutative diagram (2.4), we verify that \[\widetilde{\Theta}\Big{(}T_{k}^{1[4]}\,T_{r}^{2[5]}\,T_{r}^{3[5]} \Big{)}= \widetilde{\Theta}\circ(1\otimes\operatorname{mul})\left(T_{k}^ {1[4]}\,T_{r}^{2[5]}\,T_{r}^{3[6]}\right)\] \[= (\operatorname{mul}\otimes 1)\circ(1\otimes\widetilde{\Theta}) \circ(\widetilde{\Theta}\otimes 1)\left(T_{k}^{1[4]}\,T_{r}^{2[5]}\,T_{r}^{3[6]}\right)\] \[= (\operatorname{mul}\otimes 1)\circ(1\otimes\widetilde{\Theta}) \left(T_{r}^{2[4]}\,T_{k}^{1[5]}\,S^{12}\,T_{r}^{3[6]}\right)\] \[= (\operatorname{mul}\otimes 1)\left(T_{r}^{2[4]}\widetilde{\Theta} \left(T_{k}^{1[5]}\,T_{r}^{3[6]}\right)S^{12}\right)\] \[= (\operatorname{mul}\otimes 1)\left(T_{r}^{2[4]}\,T_{r}^{3[5]}\,T_{k}^{1 [6]}S^{13}S^{12}\right)\] \[= T_{r}^{2[4]}\,T_{r}^{3[4]}\,T_{k}^{1[5]}S^{13}\,S^{12},\] \[\widetilde{\Theta}\Big{(}T_{k}^{1[4]}\,T_{r}^{3[5]}\,T_{r}^{2[5]} \Big{)}= \widetilde{\Theta}\circ(1\otimes\operatorname{mul})\left(T_{k}^ {1[4]}\,T_{r}^{3[5]}\,T_{r}^{2[6]}\right)\] \[= (\operatorname{mul}\otimes 1)\circ(1\otimes\widetilde{\Theta}) \circ(\widetilde{\Theta}\otimes 1)\left(T_{k}^{1[4]}\,T_{r}^{3[5]}\,T_{r}^{2[6]}\right)\] \[= (\operatorname{mul}\otimes 1)\circ(1\otimes\widetilde{\Theta}) \left(T_{r}^{3[4]}\,T_{k}^{1[5]}S^{13}\,T_{r}^{2[6]}\right)\] \[= (\operatorname{mul}\otimes 1)\left(T_{r}^{3[4]}\,T_{r}^{2[5]}\,T_{k}^{1 [6]}S^{12}S^{13}\right)\] \[= T_{r}^{3[4]}\,T_{r}^{2[4]}\,T_{k}^{1[5]}S^{12}S^{13},\] which yield that \[\widetilde{\Theta}\Big{(}T_{k}^{1[4]}\,R_{+}^{23}\,T_{r}^{2[5]}\,T_ {r}^{3[5]}\Big{)}= R_{+}^{23}\widetilde{\Theta}\left(T_{k}^{1[4]}\,T_{r}^{2[5]}\,T_ {r}^{3[5]}\right)= R_{+}^{23}\,T_{r}^{2[4]}\,T_{r}^{3[4]}\,T_{k}^{1[5]}S^{13}S^{12}\] \[= T_{r}^{3[4]}\,T_{r}^{2[4]}\,T_{k}^{1[5]}S^{23}S^{13}S^{12},\] \[\widetilde{\Theta}\Big{(}T_{k}^{1[4]}\,T_{r}^{3[5]}\,T_{r}^{2[5]}S ^{23}\Big{)}= T_{r}^{3[4]}\,T_{r}^{2[4]}\,T_{k}^{1[5]}S^{12}S^{13}S^{23}.\] Note that the matrix \(S\) satisfies the quantum Yang-Baxter equation \[S^{12}S^{13}S^{23}=S^{23}S^{13}S^{12}.\] Thus, we obtain that \[\widetilde{\Theta}\Big{(}T_{k}^{1[4]}\,R_{+}^{23}\,T_{r}^{2[5]}\,T_ {r}^{3[5]}\Big{)}=\widetilde{\Theta}\left(T_{k}^{1[4]}\,T_{r}^{3[5]}\,T_{r}^{ 2[5]}\,S^{23}\right).\] Similarly, we use the commutative diagram (2.5) to deduce that \[\widetilde{\Theta}\left(R_{+}^{12}T_{k}^{1[4]}T_{k}^{2[4]}T_{r}^{3[5]}\right)= \widetilde{\Theta}\left(T_{k}^{2[4]}T_{k}^{1[4]}S^{12}T_{r}^{3[5]}\right).\] Hence, \(\widetilde{\Theta}\) induces a well-defined \(\mathbb{C}(q)\)-linear map \(\Theta:\mathsf{A}_{k,n}\otimes\mathsf{A}_{r,n}\to\mathsf{A}_{r,n}\otimes \mathsf{A}_{k,n}\) which satisfies (3.6), (3.7), (3.8) and the commutative diagrams (2.4), (2.5). Such a \(\mathbb{C}(q)\)-linear map is unique since its image \(\Theta(y\otimes x)\) is determined by (3.6), (3.7) and (3.8) when \(x\) is one of the generators \(t_{ia}\), \(i=1,\ldots,r\), \(a\in I_{n|n}\) of \(\mathsf{A}_{r,n}\) and \(y\) is one of the generators \(t_{jb}\), \(j=1,\ldots,k\), \(a\in I_{n|n}\). Finally, we show that \(\Theta\) is also a homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules. 
Since both \(\mathsf{A}_{r,n}\) and \(\mathsf{A}_{k,n}\) are \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras, it suffices to verify that \(\Theta\) commutes with the action of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\) on generators. Since \(\mathrm{U}_{q}(\mathfrak{q}_{n})\) acts on \(\mathsf{A}_{k,n}\otimes\mathsf{A}_{r,n}\) and \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\) via the comultiplication \[\Delta\left(L^{[1]2}\right)=L^{[1]2}L^{[1^{\prime}]2},\] we verify that \[\Theta\left(L^{[3]4}\cdot_{\Phi}\left(T_{k}^{1[3]}T_{r}^{2[3^{\prime}]}\right)\right)= \Theta\left(\left(L^{[3]4}\cdot_{\Phi}T_{k}^{1[3]}\right)\left(L^{[3^{\prime}]4}\cdot_{\Phi}T_{r}^{2[3^{\prime}]}\right)\right)=\Theta\left(T_{k}^{1[3]}S^{14}T_{r}^{2[3^{\prime}]}S^{24}\right)\] \[= T_{r}^{2[3]}T_{k}^{1[3^{\prime}]}S^{12}S^{14}S^{24}.\] \[L^{[3]4}\cdot_{\Phi}\Theta\left(T_{k}^{1[3]}T_{r}^{2[3^{\prime}]}\right)= L^{[3]4}\cdot_{\Phi}\left(T_{r}^{2[3]}T_{k}^{1[3^{\prime}]}\right)S^{12}=\left(L^{[3]4}\cdot_{\Phi}T_{r}^{2[3]}\right)\left(L^{[3]4}\cdot_{\Phi}T_{k}^{1[3^{\prime}]}\right)S^{12}\] \[= T_{r}^{2[3]}S^{24}T_{k}^{1[3^{\prime}]}S^{14}S^{12}=T_{r}^{2[3]}T_{k}^{1[3^{\prime}]}S^{24}S^{14}S^{12}.\] Thus, by the quantum Yang-Baxter equation for \(S\), \[\Theta\left(L^{[3]4}\cdot_{\Phi}\left(T_{k}^{1[3]}T_{r}^{2[3^{\prime}]}\right)\right)=L^{[3]4}\cdot_{\Phi}\Theta\left(T_{k}^{1[3]}T_{r}^{2[3^{\prime}]}\right),\] which proves that \(\Theta\) is a homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules. Now, we obtain a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule homomorphism \(\Theta:\mathsf{A}_{k,n}\otimes\mathsf{A}_{r,n}\to\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\) satisfying the commutative diagrams (2.4) and (2.5). Hence, \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\) is equipped with a braided multiplication: \[\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\otimes\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\xrightarrow{1\otimes\Theta\otimes 1}\mathsf{A}_{r,n}\otimes\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\otimes\mathsf{A}_{k,n}\xrightarrow{\mathrm{mult}\otimes\mathrm{mult}}\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}.\] By [6, Lemma 3.1], \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\) is a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra. In terms of generators, we have \[\left(1\otimes t_{jb}\right)\left(t_{ia}\otimes 1\right)=(-1)^{|a||b|}q^{\varphi(b,a)}t_{ia}\otimes t_{jb}+\delta_{b<a}\xi t_{ib}\otimes t_{ja}+\delta_{-b<a}(-1)^{|a|}\xi t_{i,-b}\otimes t_{j,-a},\] for \(i=1,\ldots,r\), \(j=1,\ldots,k\) and \(a,b\in I_{n|n}\). **Lemma 3.4**.: _Let \(x\in\mathsf{A}_{r,n}\) and \(y\in\mathsf{A}_{k,n}\). Suppose that_ \[\Theta(y\otimes x)=\sum_{i}x_{i}\otimes y_{i},\] _where \(x_{i}\in\mathsf{A}_{r,n}\), \(y_{i}\in\mathsf{A}_{k,n}\) and the summation is taken over a finite set. Then the following equality holds in \(\mathsf{A}_{r+k,n}\)_ \[\iota_{0}(y)\iota_{k}(x)=\sum_{i}\iota_{k}(x_{i})\iota_{0}(y_{i}),\] _where \(\iota_{0}:\mathsf{A}_{k,n}\to\mathsf{A}_{r+k,n}\) and \(\iota_{k}:\mathsf{A}_{r,n}\to\mathsf{A}_{r+k,n}\) are the maps given in Lemma 3.2._ Proof.: The superalgebra \(\mathsf{A}_{r,n}\) has a \(\mathbb{Z}\)-grading by setting \(\deg(t_{ia})=1\), \(i=1,\ldots,r\) and \(a\in I_{n|n}\), since the defining relation (3.3) is homogeneous with respect to this grading. Similarly, there is a \(\mathbb{Z}\)-grading on \(\mathsf{A}_{k,n}\) with \(\deg(t_{jb})=1\), \(j=1,\ldots,k\) and \(b\in I_{n|n}\). The lemma will be proved by induction on \((\deg x,\deg y)\).
It follows from the definition of \(\Theta\) in Proposition 3.3 that \(\deg(x_{i})\leq\deg(x)\) and \(\deg(y_{i})\leq\deg(y)\). The statement is true when \(\deg(x)=0\) or \(\deg(y)=0\) since \[\Theta(y\otimes 1)=1\otimes y,\text{ and }\Theta(1\otimes x)=x\otimes 1.\] If \(\deg(x)=\deg(y)=1\), we set \(x=t_{ia}\) and \(y=t_{jb}\), \(i=1,\ldots,r\), \(j=1,\ldots,k\), and \(a,b\in I_{n|n}\). We deduce from (3.8) that \[\Theta(t_{jb}\otimes t_{ia})= (-1)^{|a||b|}q^{\varphi(b,a)}t_{ia}\otimes t_{jb}+\delta_{b<a}\xi t_{ib}\otimes t_{ja}+\delta_{-b<a}(-1)^{|a|}\xi t_{i,-b}\otimes t_{j,-a}.\] In \(\mathsf{A}_{r+k,n}\), the defining relation (3.3) yields that \[\iota_{0}(t_{j,b})\iota_{k}(t_{i,a})= t_{j,b}\,t_{k+i,a}\] \[= (-1)^{|a||b|}q^{\varphi(b,a)}t_{k+i,a}t_{j,b}+\xi(\delta_{b<a}-\delta_{k+i<j})\,t_{k+i,b}\,t_{j,a}\] \[+(-1)^{|a|}\xi(\delta_{-b<a}-\delta_{k+i<-j})\,t_{k+i,-b}\,t_{j,-a}\] \[= (-1)^{|a||b|}q^{\varphi(b,a)}\iota_{k}(t_{i,a})\iota_{0}(t_{j,b})+\xi\delta_{b<a}\iota_{k}(t_{i,b})\iota_{0}(t_{j,a})\] \[+(-1)^{|a|}\xi\delta_{-b<a}\iota_{k}(t_{i,-b})\iota_{0}(t_{j,-a}).\] This shows that the statement is also true when \(\deg(x)=\deg(y)=1\). Now, we assume that the lemma is true for all \(x,y\) with \(\deg(x)\leq m\) and \(\deg(y)\leq m^{\prime}\). Since an element in \(\mathsf{A}_{r,n}\) of degree \(m+1\) can be written as a linear combination of products \(xx^{\prime}\) with \(\deg(x)\leq m\), \(\deg(x^{\prime})\leq m\), we consider such an element \(xx^{\prime}\) in \(\mathsf{A}_{r,n}\) and \(y\in\mathsf{A}_{k,n}\) with \(\deg(y)\leq m^{\prime}\). Suppose that \[\Theta(y\otimes x)=\sum_{i}x_{i}\otimes y_{i},\text{ and }\Theta(y_{i}\otimes x^{\prime})=\sum_{j}x^{\prime}_{ij}\otimes y_{ij},\text{ for each }i,\] where \(x_{i},x^{\prime}_{ij}\in\mathsf{A}_{r,n}\) and \(y_{i},y_{ij}\in\mathsf{A}_{k,n}\). Since \(\deg(x)\leq m\) and \(\deg y\leq m^{\prime}\), the induction hypothesis implies that \[\iota_{0}(y)\iota_{k}(x)=\sum_{i}\iota_{k}(x_{i})\iota_{0}(y_{i}),\text{ and }\iota_{0}(y_{i})\iota_{k}(x^{\prime})=\sum_{j}\iota_{k}(x^{\prime}_{ij})\iota_{0}(y_{ij})\text{ for each }i. \tag{3.9}\] By the commutative diagram (2.4) for \(\Theta\), we have \[\Theta(y\otimes xx^{\prime})= \Theta\circ(1\otimes\operatorname{mul})(y\otimes x\otimes x^{\prime})=(\operatorname{mul}\otimes 1)\circ(1\otimes\Theta)\circ(\Theta\otimes 1)(y\otimes x\otimes x^{\prime})\] \[= \sum_{i}(\operatorname{mul}\otimes 1)\circ(1\otimes\Theta)(x_{i}\otimes y_{i}\otimes x^{\prime})=\sum_{i,j}x_{i}x_{ij}^{\prime}\otimes y_{ij}.\] Note that both \(\iota_{0}\) and \(\iota_{k}\) are homomorphisms of associative superalgebras; using (3.9) we verify that \[\sum_{i,j}\iota_{k}(x_{i}x_{ij}^{\prime})\iota_{0}(y_{ij})= \sum_{i,j}\iota_{k}(x_{i})\iota_{k}(x_{ij}^{\prime})\iota_{0}(y_{ij})=\sum_{i}\iota_{k}(x_{i})\iota_{0}(y_{i})\iota_{k}(x^{\prime})\] \[= \iota_{0}(y)\iota_{k}(x)\iota_{k}(x^{\prime})=\iota_{0}(y)\iota_{k}(xx^{\prime}).\] This shows that the lemma is also true for \(xx^{\prime}\) and \(y\). Similarly, we show that the lemma is true for \(x\) and \(yy^{\prime}\) for \(x\in\mathsf{A}_{r,n}\) and \(y,y^{\prime}\in\mathsf{A}_{k,n}\) provided that \(\deg(x)\leq m\) and \(\deg(y),\deg(y^{\prime})\leq m^{\prime}\). Then the lemma is true for all \(x\in\mathsf{A}_{r,n}\) and \(y\in\mathsf{A}_{k,n}\) by induction.
**Theorem 3.5**.: _The \(\mathbb{C}(q)\)-linear map_ \[\sigma:\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\to\mathsf{A}_{r+k,n},\quad x\otimes y\mapsto\iota_{k}(x)\iota_{0}(y),\quad x\in\mathsf{A}_{r,n},\quad y\in\mathsf{A}_{k,n}\] _is an isomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras, where \(\iota_{k}:\mathsf{A}_{r,n}\to\mathsf{A}_{r+k,n}\) and \(\iota_{0}:\mathsf{A}_{k,n}\to\mathsf{A}_{r+k,n}\) are the maps given in Lemma 3.2._ Proof.: We first prove that \(\sigma\) is a homomorphism of associative superalgebras. By Lemma 3.2, \(\iota_{k}:\mathsf{A}_{r,n}\to\mathsf{A}_{r+k,n}\) and \(\iota_{0}:\mathsf{A}_{k,n}\to\mathsf{A}_{r+k,n}\) are homomorphisms of associative superalgebras. For \(x,x^{\prime}\in\mathsf{A}_{r,n}\) and \(y,y^{\prime}\in\mathsf{A}_{k,n}\), we have \[\sigma\left((x^{\prime}\otimes 1)(x\otimes y)\right)= \sigma(x^{\prime}x\otimes y)=\iota_{k}(x^{\prime}x)\iota_{0}(y)=\iota_{k}(x^{\prime})\iota_{k}(x)\iota_{0}(y)\] \[= \sigma(x^{\prime}\otimes 1)\sigma(x\otimes y).\] \[\sigma\left((x\otimes y)(1\otimes y^{\prime})\right)= \sigma(x\otimes yy^{\prime})=\iota_{k}(x)\iota_{0}(yy^{\prime})=\iota_{k}(x)\iota_{0}(y)\iota_{0}(y^{\prime})\] \[= \sigma(x\otimes y)\sigma(1\otimes y^{\prime}).\] Note that \((1\otimes y)(x\otimes 1)=\Theta(y\otimes x)\) in the braided tensor product \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\). We assume that \[\Theta(y\otimes x)=\sum_{i}x_{i}\otimes y_{i},\] where \(x_{i}\in\mathsf{A}_{r,n}\) and \(y_{i}\in\mathsf{A}_{k,n}\). It follows from Lemma 3.4 that \[\sigma((1\otimes y)(x\otimes 1))=\sum_{i}\iota_{k}(x_{i})\iota_{0}(y_{i})=\iota_{0}(y)\iota_{k}(x)=\sigma(1\otimes y)\sigma(x\otimes 1).\] Hence, \(\sigma\) is a homomorphism of associative superalgebras. Since \(\iota_{k}:\mathsf{A}_{r,n}\to\mathsf{A}_{r+k,n}\) and \(\iota_{0}:\mathsf{A}_{k,n}\to\mathsf{A}_{r+k,n}\) are both homomorphisms of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules and \(\mathsf{A}_{r+k,n}\) is a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra, we have \[\sigma\left(\Phi_{u}(x\otimes y)\right)= \sum_{(u)}(-1)^{|u_{(2)}||x|}\sigma\left(\Phi_{u_{(1)}}(x)\otimes\Phi_{u_{(2)}}(y)\right)\] \[= \sum_{(u)}(-1)^{|u_{(2)}||x|}\iota_{k}\left(\Phi_{u_{(1)}}(x)\right)\iota_{0}\left(\Phi_{u_{(2)}}(y)\right)\] \[= \sum_{(u)}(-1)^{|u_{(2)}||x|}\Phi_{u_{(1)}}\left(\iota_{k}(x)\right)\Phi_{u_{(2)}}\left(\iota_{0}(y)\right)\] \[= \Phi_{u}\left(\iota_{k}(x)\iota_{0}(y)\right)\] \[= \Phi_{u}(\sigma(x\otimes y)).\] This shows that \(\sigma\) is a homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules. The homomorphism \(\sigma\) is surjective since \(\mathsf{A}_{r+k,n}\) is generated by \(t_{ia}\) for \(i=1,\ldots,r+k\), \(a\in I_{n|n}\). The injectivity of \(\sigma\) follows from the fact that a \(\mathbb{C}(q)\)-basis of \(\mathsf{A}_{r,n}\) (resp. \(\mathsf{A}_{k,n}\)) is given by the monomials \[\prod_{i=1,\ldots,r,a\in I_{n|n}}t_{ia}^{d_{ia}},\quad\left(\mathrm{resp.}\prod_{i=1,\ldots,k,a\in I_{n|n}}t_{ia}^{d_{ia}}\right)\] where \(d_{ia}\geq 0\) if \(|a|=\bar{0}\) and \(d_{ia}\in\{0,1\}\) if \(|a|=\bar{1}\), and the product is taken with respect to the lexicographic order (see [3, Proposition 3.6.4]). Given two quantum coordinate superalgebras \(\mathsf{A}_{r,n}\) and \(\mathsf{A}_{k,n}\), the braided tensor product \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}\) is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra, which is isomorphic to \(\mathsf{A}_{r+k,n}\) by Theorem 3.5.
Hence, a further braided tensor product \(\left(\mathbb{A}_{r,n}\otimes\mathbb{A}_{k,n}\right)\otimes\mathbb{A}_{p,n}\) is permitted. It is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra isomorphic to \(\mathbb{A}_{r+k+p,n}\). Consequently, the braided tensor product of quantum coordinate superalgebras of type \(Q\) is associative. **Corollary 3.6**.: _There is an isomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras_ \[\left(\mathbb{A}_{r,n}\otimes\mathbb{A}_{k,n}\right)\otimes\mathbb{A}_{p,n} \cong\mathbb{A}_{r,n}\otimes\left(\mathbb{A}_{k,n}\otimes\mathbb{A}_{p,n} \right).\qed\] **Corollary 3.7**.: _The braided tensor product \(\mathbb{A}_{1,n}^{\otimes r}\) of \(r\)-copies of \(\mathbb{A}_{1,n}\) is isomorphic to \(\mathbb{A}_{r,n}\) as \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras. _ _Remark 3.8_.: The quantum coordinate superalgebra \(\mathbb{A}_{1,n}\) is isomorphic to quantum supersymmetric algebra \(\mathrm{S}_{q}(V)\), that is a unital associative superalgebra presented by generators \(v_{a}\) for \(a\in I_{n|n}\), and relation: \[qv_{a}v_{b}=q^{\varphi(a,b)}(-1)^{|a||b|}v_{b}v_{a}+\delta_{a<b}\xi v_{a}v_{b}+ \delta_{-a<b}\xi(-1)^{|b|}v_{-a}v_{-b},\quad a,b\in I_{n|n}.\] By [3, Proposition 3.6.4], there is a parity preserving vector superspace isomorphism between \(\mathbb{A}_{r,n}\) and \(\mathrm{S}_{q}(V)^{\otimes r}\). According to Corollary 3.7, \(\mathrm{S}_{q}(V)^{\otimes r}\) is understood as an associative superalgebra obtained by the braided tensor product (see [1] for the braided symmetric algebra in the case of a quantum group). It is isomorphic to \(\mathbb{A}_{r,n}\) as \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras. Consequently, the quantum coordinate superalgebra \(\mathbb{A}_{r,n}\) (or \(\mathrm{S}_{q}(V)^{\otimes r}\)) is a flat deformation of supersymmetric algebra \(\mathrm{Sym}\left(V^{\oplus r}\right)\). Next, we construct a quantum analogue of \(\mathrm{Sym}\left(\Pi(V)^{\oplus k}\right)\), where \(\Pi\) is the parity reversing functor. For any \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule \(M=M_{\tilde{0}}\oplus M_{\tilde{1}}\), define \[\Pi(M)=\Pi(M)_{\tilde{0}}\oplus\Pi(M)_{\tilde{1}},\quad\Pi(M)_{i}=M_{i+\tilde{1 }},\ \forall i\in\mathbb{Z}_{2}.\] Then \(\Pi(M)\) is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule with the same action. **Definition 3.9**.: We define \(\mathrm{A}^{\Pi}_{k,n}\) to be the unital associative superalgebra presented by the generators \(t^{\pi}_{ia}\) of parity \(|a|+\tilde{1}\) for \(i=1,\ldots,k\) and \(a\in I_{n|n}\) and the defining relation \[R^{12}_{+}\tilde{T}^{1[3]}_{+}\tilde{T}^{2[3]}_{+}=\tilde{T}^{2[3]}_{+}\tilde{ T}^{1[3]}_{+}S^{12}_{J}, \tag{3.10}\] where \(\tilde{T}_{+}=\sum\limits_{i=1}^{k}\sum\limits_{a\in I_{n|n}}E_{ia}\otimes t ^{\pi}_{ia}\), the tensor matrix \(R_{+}\) is given in (3.4), \[\begin{split} S_{J}=&(1\otimes J)S(1\otimes J)\\ =&-\sum\limits_{a,b\in I_{n|n}}q^{-\varphi(a,b)}E_{ aa}\otimes E_{bb}+\xi\sum\limits_{b<a}(-1)^{|a|}(E_{ba}+E_{-b,-a})\otimes E_{ab}, \end{split} \tag{3.11}\] and \(J=\sum\limits_{a\in I_{n|n}}(-1)^{|a|}E_{-a,a}\). **Proposition 3.10**.: \(\mathrm{A}^{\Pi}_{k,n}\) _is a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra with the \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-action \(\Phi^{\pi}\) determined by_ \[L^{|2|3}\underset{\Phi^{\pi}}{\cdot}\tilde{T}^{1[2]}_{+}=\tilde{T}^{1[2]}_{+}S ^{13}. 
\tag{3.12}\] Proof.: Let \(W\subset\mathrm{A}^{\Pi}_{k,n}\) be the \(\mathbb{C}(q)\)-subspace spanned by \(t^{\pi}_{ia}\) for \(i=1,\ldots,k\) and \(a\in I_{n|n}\). We first show that (3.12) defines a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule structure on \(W\). Note that \(L\) is the generator matrix of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\), we verify that \[\begin{split}\left(L^{[2]3}L^{[2]4}S^{34}\right)\underset{\Phi^{ \pi}}{\cdot}\tilde{T}^{1[2]}_{+}=& L^{|2|3}\underset{\Phi^{\pi}}{ \cdot}\left(L^{[2]4}\underset{\Phi^{\pi}}{\cdot}\tilde{T}^{1[2]}_{+}\right)S^ {34}=L^{[2]3}\underset{\Phi^{\pi}}{\cdot}\tilde{T}^{1[2]}_{+}S^{14}S^{34}\\ =&\tilde{T}^{1[2]}_{+}S^{13}S^{14}S^{34},\\ \left(S^{34}L^{[2]4}L^{[2]3}\right)\underset{\Phi^{\pi}}{\cdot} \tilde{T}^{1[2]}_{+}=& S^{34}L^{2[4]}\underset{\Phi^{\pi}}{\cdot}\left(L^{[2]3} \underset{\Phi^{\pi}}{\cdot}\tilde{T}^{1[2]}_{+}\right)=S^{34}L^{2[4]} \underset{\Phi^{\pi}}{\cdot}\tilde{T}^{1[2]}_{+}S^{13}\\ =& S^{34}\tilde{T}^{1[2]}_{+}S^{14}S^{13}=\tilde{T}^{1[2]}_{+}S ^{34}S^{14}S^{13}.\end{split}\] Since \(S\) satisfies the quantum Yang-Baxter equation, (3.12) determines a well-defined \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-action on \(W\). The \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-action is extended to the free associative superalgebra generated by \(t^{\pi}_{ia}\), \(i=1,\ldots,k\) and \(a\in I_{n|n}\) according to the definition of a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra: \[\Phi^{\pi}_{u}(xy)=\sum\limits_{(u)}(-1)^{|u_{[2]}||x|}\Phi^{\pi}_{u_{(1)}}(x) \Phi^{\pi}_{u_{(2)}}(y),\quad u\in\mathrm{U}_{q}(\mathfrak{q}_{n}),x,y\in \mathrm{A}^{\pi}_{k,n}. \tag{3.13}\] It suffices to show that the action \(\Phi^{\pi}_{L_{ab}}\) preserves the defining relation (3.10) of \(\mathsf{A}^{\Pi}_{k,n}\) for \(a,b\in I_{n|n}\). By (3.13), we have \[L^{[3]4}\cdot_{\Phi^{\pi}}\left(\tilde{T}^{1[3]}_{+}\tilde{T}^{2[ 3]}_{+}\right) =\left(L^{[3]4}\cdot_{\Phi^{\pi}}\tilde{T}^{1[3]}_{+}\right)\left(L ^{[3]4}\cdot_{\Phi^{\pi}}\tilde{T}^{2[3]}_{+}\right)=\tilde{T}^{1[3]}_{+}S^{1 4}\tilde{T}^{2[3]}_{+}S^{24},\] \[L^{[3]4}\cdot_{\Phi^{\pi}}\left(\tilde{T}^{2[3]}_{+}\tilde{T}^{1[ 3]}_{+}\right) =\left(L^{[3]4}\cdot_{\Phi^{\pi}}\tilde{T}^{2[3]}_{+}\right)\left(L ^{[3]4}\cdot_{\Phi^{\pi}}\tilde{T}^{1[3]}_{+}\right)=\tilde{T}^{2[3]}_{+}S^{2 4}\tilde{T}^{1[3]}_{+}S^{14}.\] Then the defining relation (3.10) implies that \[L^{[3]4}\cdot_{\Phi^{\pi}}\left(R^{12}_{+}\tilde{T}^{1[3]}_{+} \tilde{T}^{2[3]}_{+}\right) =R^{12}_{+}\tilde{T}^{1[3]}_{+}S^{14}\tilde{T}^{2[3]}_{+}S^{24}= \tilde{T}^{2[3]}_{+}\tilde{T}^{1[3]}_{+}S^{12}_{J}S^{14}S^{24},\] \[L^{[3]4}\cdot_{\Phi^{\pi}}\left(\tilde{T}^{2[3]}_{+}\tilde{T}^{1[ 3]}_{+}S^{12}_{J}\right) =\tilde{T}^{2[3]}_{+}\tilde{T}^{1[3]}_{+}S^{24}S^{14}S^{12}_{J}.\] Note that \(S_{J}=(1\otimes J)S(1\otimes J)\) and \((J\otimes 1)S=S(J\otimes 1)\), we have \[S^{12}_{J}S^{14}S^{24}= \tilde{J}^{2}S^{12}\tilde{J}^{2}S^{14}S^{24}= \tilde{J}^{2}S^{12}S^{14}S^{24}\tilde{J}^{2}\] \[= \tilde{J}^{2}S^{24}S^{14}S^{12}\tilde{J}^{2}=S^{24}S^{14}\tilde{J }^{2}\tilde{J}^{2}=S^{24}S^{14}S^{12}_{J},\] where \(J^{2}=1\otimes J\otimes 1\otimes 1\). 
Hence, \[L^{[3]4}\cdot_{\Phi^{\pi}}\left(R^{12}_{+}\tilde{T}^{1[3]}_{+}\tilde{T}^{2[3 ]}_{+}\right)=L^{[3]4}\cdot_{\Phi^{\pi}}\left(\tilde{T}^{2[3]}_{+}\tilde{T}^{1 [3]}_{+}S^{12}_{J}\right).\] This shows that (3.12) determines a well-defined \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-action on \(\mathsf{A}^{\Pi}_{k,n}\), under which \(\mathsf{A}^{\Pi}_{k,n}\) is a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra. The defining relation (3.10) for \(\mathsf{A}^{\Pi}_{k,n}\) can be written in terms of generators as: \[q^{\delta_{ij}}t^{\pi}_{ia}t^{\pi}_{jb}-(-1)^{|a||b|}q^{-\varphi(a,b)}t^{\pi}_ {jb}t^{\pi}_{ia}=\xi\left(\delta_{b<a}-\delta_{j<i}\right)t^{\pi}_{ja}t^{\pi}_ {ib}+(-1)^{|b|}\xi\delta_{b<-a}t^{\pi}_{j,-a}t^{\pi}_{i,-b}.\] One observes that the classical limit of \(\mathsf{A}^{\Pi}_{k,n}\) as \(q\mapsto 1\) is isomorphic to the associative superalgebra \(\mathrm{Sym}\left(\Pi(V)^{\oplus k}\right)\). Moreover, it follows from (3.12) that, for each fixed \(i=1,\ldots,k\), the \(\mathbb{C}(q)\) sub-superspace of \(\mathsf{A}^{\Pi}_{k,n}\) spanned by \(t^{\pi}_{ia}\) with \(a\in I_{n|n}\) is a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-sub-supermodule that is isomorphic to \(\Pi(V)\). Consequently, \(\mathsf{A}^{\Pi}_{k,n}\) is a quantum analogue of \(\mathrm{Sym}\left(\Pi(V)^{\oplus k}\right)\). It has been shown in [2, Remark 3.2] that the \(\mathrm{U}(\mathfrak{q}_{n})\)-supermodule superalgebra \(\mathrm{Sym}\left(\Pi(V)^{\oplus k}\right)\) is isomorphic to \(\mathrm{Sym}\left(V^{\oplus k}\right)\). Such an isomorphism is also valid in the quantum case. **Proposition 3.11**.: _There is an isomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras_ \[\tau:\mathsf{A}^{\Pi}_{k,n}\rightarrow\mathsf{A}_{k,n},\] _such that_ \[\tau\left(\tilde{T}_{+}\right)=T_{+}(J\otimes 1). \tag{3.14}\] Proof.: We first show that (3.14) determines a well-defined homomorphism of associative superalgebras \(\mathsf{A}_{k,n}^{\Pi}\to\mathsf{A}_{k,n}\). Since \(\mathsf{A}_{k,n}^{\Pi}\) is generated by entries of the generator matrix \(\tilde{T}_{+}\), it suffices to verify that (3.14) preserves the defining relation (3.10) for \(\mathsf{A}_{k,n}^{\Pi}\). \[\tau\left(R_{+}^{12}\,\tilde{T}_{+}^{1[3]}\,\tilde{T}_{+}^{2[3]}\right)= R_{+}^{12}\,T_{+}^{1[3]}\,J^{\underline{1}}_{+}T^{2[3]}\,J^{ \underline{2}}=T_{+}^{2[3]}\,T_{+}^{1[3]}\,S^{12}\,J^{\underline{1}}J^{ \underline{2}},\] \[\tau\left(\tilde{T}_{+}^{2[3]}\,\tilde{T}_{+}^{1[3]}S_{J}^{12}\right) =T_{+}^{2[3]}\,J^{\underline{2}}T_{+}^{1[3]}\,J^{\underline{1}}S _{J}^{12}=T_{+}^{2[3]}\,T_{+}^{1[3]}\,J^{\underline{2}}J^{\underline{1}}S_{J }^{12},\] where \(J^{\underline{1}}=J\otimes 1\otimes 1\) and \(J^{\underline{2}}=1\otimes J\otimes 1\). Since \[J^{\underline{2}}J^{\underline{1}}S_{J}^{12}=J^{\underline{2}}J^{\underline{1} }J^{\underline{2}}S^{12}\,J^{\underline{2}}=-J^{\underline{2}}J^{\underline{2} }S^{12}\,J^{\underline{1}}J^{\underline{2}}=S^{12}\,J^{\underline{1}}J^{ \underline{2}},\] we have \[\tau\left(R_{+}^{12}\,\tilde{T}_{+}^{1[3]}\,\tilde{T}_{+}^{2[3]}\right)=\tau \left(\tilde{T}_{+}^{2[3]}\,\tilde{T}_{+}^{1[3]}S_{J}^{12}\right),\] and (3.14) determines a superalgebra homomorphism \(\tau:\mathsf{A}_{k,n}^{\Pi}\to\mathsf{A}_{k,n}\). 
The superalgebra homomorphism \(\tau\) is an isomorphism since it has the inverse \(\tau^{-1}:\mathsf{A}_{k,n}\to\mathsf{A}_{k,n}^{\Pi}\) given by \[\tau^{-1}\left(T_{+}\right)=-\tilde{T}_{+}(J\otimes 1).\] Moreover, we compute that \[\tau\left(L^{[2]3}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ **Lemma 4.1**.: _An alternative presentation of the superalgebra \(\tilde{\mathbb{A}}_{s,n}\) is given by generators \(\tilde{t}_{\alpha b}\) with \(\alpha=-1,\ldots,-s,b\in I_{n|n}\) and the relation_ \[\tilde{T}_{-}^{1[3]}\,\tilde{T}_{-}^{2[3]}R_{-}^{12}=S^{12}\,\tilde{T}_{-}^{2[3 ]}\,\tilde{T}_{-}^{1[3]}, \tag{4.3}\] _where \(\tilde{T}_{-}=\sum\limits_{\alpha=-s}^{-1}\sum\limits_{b\in I_{n|n}}E_{ba} \otimes\tilde{t}_{\alpha b}\), and_ \[R_{-}=\sum\limits_{i,j=-s}^{-1}q^{-\delta_{ij}}E_{ii}\otimes E_{jj}-\xi\sum \limits_{-s\leqslant j<i\leqslant-1}E_{ji}\otimes E_{ij} \tag{4.4}\] _is the submatrix of \(S\) involving the terms \(E_{ik}\otimes E_{jl}\) with \(-s\leqslant i,j,k,l\leqslant-1\)._ Proof.: The relation (4.2) is equivalent to \[\begin{split}& q^{\varphi(\alpha,\beta)}\,\tilde{t}_{\alpha a} \,\tilde{t}_{\beta b}-(-1)^{(|\alpha|+|a|)(|\beta|+|b|)}\,q^{\varphi(a,b)}\, \tilde{t}_{\beta b}\,\tilde{t}_{\alpha a}\\ =&\theta(\alpha,\beta,b)\xi\left(\delta_{b<a}- \delta_{\alpha<\beta}\right)\tilde{t}_{\beta a}\,\tilde{t}_{\alpha b}+\theta( \alpha,\beta,b)(-1)^{|a|+|\beta|}\xi\left(\delta_{-\alpha<\beta}-\delta_{b<-a }\right)\tilde{t}_{\beta,-a}\,\tilde{t}_{\alpha,-b},\end{split} \tag{4.5}\] for \(\alpha,\beta\in I_{s|s}\) and \(a,b\in I_{n|n}\). While the relation (4.3) is equivalent to \[\begin{split}& q^{-\delta_{\alpha\beta}}\,\tilde{t}_{\alpha a}\, \tilde{t}_{\beta b}-(-1)^{(|a|+1)(|b|+1)}\,q^{\varphi(a,b)}\,\tilde{t}_{\beta b }\,\tilde{t}_{\alpha a}\\ =&\xi\left(\delta_{\alpha<\beta}-\delta_{b<a}\right) \tilde{t}_{\beta a}\,\tilde{t}_{\alpha b}-(-1)^{|a|}\xi\delta_{b<a}\,\tilde{t }_{\beta,-a}\,\tilde{t}_{\alpha,-b},\end{split} \tag{4.6}\] for \(\alpha,\beta=-1,\ldots,-s\) and \(a,b\in I_{n|n}\). We verify that (4.5) is equivalent to (4.6) provided that \(\tilde{t}_{\alpha,b}=(-1)^{|\alpha|+|b|}\,\tilde{t}_{-\alpha,-b}\) for \(\alpha\in I_{s|s}\) and \(b\in I_{n|n}\). The relation (4.6) is the special case of (4.5) where \(\alpha,\beta=-1,\ldots,-s\). Conversely, it suffices to show the relation (4.6) also implies (4.5). We consider the following four cases separately: **Case1:**\(\alpha<0,\beta<0\). In this situation, the relations (4.6) and (4.5) are same. **Case2:**\(\alpha>0,\beta<0\). 
We deduce from (4.6) that \[\begin{split}& q^{-\delta_{-\alpha,\beta}}\,\tilde{t}_{-\alpha,-a} \,\tilde{t}_{\beta b}-(-1)^{|a||b|+|a|}\,q^{\varphi(-a,b)}\,\tilde{t}_{\beta b }\,\tilde{t}_{-\alpha,-a}\\ =&\xi\left(\delta_{-\alpha<\beta}-\delta_{b<-a} \right)\tilde{t}_{\beta,-a}\,\tilde{t}_{-\alpha,b}+(-1)^{|a|}\xi\delta_{b<a} \,\tilde{t}_{\beta a}\,\tilde{t}_{-\alpha,-b}.\end{split}\] Since \(\tilde{t}_{-\alpha,-a}=(-1)^{|a|}\,\tilde{t}_{\alpha a}\), we obtain that \[\begin{split}& q^{-\delta_{-\alpha,\beta}}\,\tilde{t}_{\alpha a}\,\tilde{t}_{\beta b}-(-1)^{|a||b|+|a|}\,q^{\varphi(-a,b)}\, \tilde{t}_{\beta b}\,\tilde{t}_{\alpha a}\\ =&-(-1)^{|a|+|b|}\,\xi\left(\delta_{-\alpha<\beta}- \delta_{b<-a}\right)\tilde{t}_{\beta,-a}\,\tilde{t}_{\alpha,-b}+(-1)^{|b|}\, \xi\delta_{b<a}\,\tilde{t}_{\beta a}\,\tilde{t}_{\alpha b},\end{split}\] which coincides with (4.5) in the situation of \(\alpha>0\) and \(\beta<0\). **Case 3:**\(\alpha<0\), \(\beta>0\). We deduce from (4.6) that \[\begin{split}& q^{-\delta_{\alpha,-\beta}}\,\tilde{t}_{\alpha a}\, \tilde{t}_{-\beta,-b}-(-1)^{|a||b|+|b|}\,q^{\varphi(a,-b)}\,\tilde{t}_{-\beta,-b }\,\tilde{t}_{\alpha a}\\ =&\xi\left(\delta_{\alpha<-\beta}-\delta_{-b<a} \right)\tilde{t}_{-\beta,a}\,\tilde{t}_{\alpha,-b}-(-1)^{|a|}\xi\delta_{-b<-a }\,\tilde{t}_{-\beta,-a}\,\tilde{t}_{\alpha b}.\end{split}\] Then \(\tilde{t}_{-\beta,-b}=(-1)^{|b|}\tilde{t}_{\beta b}\) implies that \[q^{-\delta_{\alpha,-\beta}}\tilde{t}_{\alpha a}\tilde{t}_{\beta b} -(-1)^{|a||b|+|b|}q^{\varphi(a,b)}\tilde{t}_{\beta b}\tilde{t}_{\alpha a}\] \[= -(-1)^{|a|+|b|}\xi\left(\delta_{\alpha<-\beta}-\delta_{-b<a} \right)\tilde{t}_{\beta,-a}\tilde{t}_{\alpha,-b}-(-1)^{|b|}\xi\delta_{a<b} \tilde{t}_{\beta a}\tilde{t}_{\alpha b}.\] Hence, \[q^{\varphi(\alpha,\beta)}\tilde{t}_{\alpha a}\tilde{t}_{\beta b }-(-1)^{|a||b|+|b|}q^{\varphi(a,b)}\tilde{t}_{\beta b}\tilde{t}_{\alpha a}\] \[= \left(q^{-\delta_{\alpha,-\beta}}+\delta_{\alpha,-\beta}\xi \right)\tilde{t}_{\alpha a}\tilde{t}_{\beta b}-(-1)^{|a||b|+|b|}\left(q^{- \varphi(a,b)}+(-1)^{|b|}\left(\delta_{ab}+\delta_{a,-b}\right)\xi\right)\tilde {t}_{\beta b}\tilde{t}_{\alpha a}\] \[= -(-1)^{|a|+|b|}\xi\left(\delta_{\alpha<-\beta}-\delta_{-b<a} \right)\tilde{t}_{\beta,-a}\tilde{t}_{\alpha,-b}-(-1)^{|b|}\xi\delta_{a<b} \tilde{t}_{\beta a}\tilde{t}_{\alpha b}\] \[+\delta_{\alpha,-\beta}\xi\tilde{t}_{\alpha a}\tilde{t}_{\beta b }-(-1)^{|a||b|}\left(\delta_{ab}+\delta_{a,-b}\right)\xi\tilde{t}_{\beta b} \tilde{t}_{\alpha a}\] \[= -(-1)^{|b|}\xi\left(\delta_{a<b}+\delta_{ab}\right)\tilde{t}_{ \beta a}\tilde{t}_{\alpha b}-(-1)^{|a|+|b|}\xi\left(\delta_{\alpha<-\beta}- \delta_{-b<a}+\delta_{\alpha,-\beta}-\delta_{a,-b}\right)\tilde{t}_{\beta,-a} \tilde{t}_{\alpha,-b}\] \[= (-1)^{|b|}\xi\left(\delta_{b<a}-1\right)\tilde{t}_{\beta a} \tilde{t}_{\alpha b}+(-1)^{|a|+|b|}\xi(\delta_{-\alpha<\beta}-\delta_{b<-a}) \tilde{t}_{\beta,-a}\tilde{t}_{\alpha,-b},\] which yields with (4.5) in the situation of \(\alpha<0\) and \(\beta>0\). **Case 4:**\(\alpha>0\), \(\beta>0\). 
We deduce from (4.6) that \[q^{-\delta_{\alpha,\beta}}\tilde{t}_{-\alpha,-a}\tilde{t}_{-\beta,-b}-(-1)^{|a||b|}q^{\varphi(-a,-b)}\tilde{t}_{-\beta,-b}\tilde{t}_{-\alpha,-a}\] \[= \xi\left(\delta_{\beta<\alpha}-\delta_{a<b}\right)\tilde{t}_{- \beta,-a}\tilde{t}_{-\alpha,-b}+(-1)^{|a|}\xi\delta_{-b<a}\tilde{t}_{-\beta,a} \tilde{t}_{-\alpha,b}.\] It follows from \(\tilde{t}_{-\beta,-a}=(-1)^{|a|}\tilde{t}_{\beta a}\) that \[q^{-\delta_{\alpha\beta}}\tilde{t}_{\alpha a}\tilde{t}_{\beta b }-(-1)^{|a||b|}q^{\varphi(-a,-b)}\tilde{t}_{\beta b}\tilde{t}_{\alpha a}=\xi \left(\delta_{\beta<\alpha}-\delta_{a<b}\right)\tilde{t}_{\beta a}\tilde{t}_{ \alpha b}+(-1)^{|a|}\xi\delta_{-b<a}\tilde{t}_{\beta,-a}\tilde{t}_{\alpha,-b}.\] Hence, \[q^{\varphi(\alpha,\beta)}\tilde{t}_{\alpha a}\tilde{t}_{\beta b }-(-1)^{|a||b|}q^{\varphi(a,b)}\tilde{t}_{\beta b}\tilde{t}_{\alpha a}\] \[= \left(q^{-\delta_{\alpha\beta}}+\delta_{\alpha\beta}\xi\right) \tilde{t}_{\alpha a}\tilde{t}_{\beta b}-(-1)^{|a||b|}\left(q^{\varphi(-a,-b)}+ (-1)^{|b|}\left(\delta_{ab}+\delta_{a,-b}\right)\xi\right)\tilde{t}_{\beta b} \tilde{t}_{\alpha a}\] \[= \xi\left(\delta_{\beta<\alpha}-\delta_{a<b}\right)\tilde{t}_{\beta a }\tilde{t}_{\alpha b}+(-1)^{|a|}\xi\delta_{-b<a}\tilde{t}_{\beta,-a}\tilde{t}_ {\alpha,-b}\] \[+\delta_{\alpha\beta}\xi\tilde{t}_{\alpha a}\tilde{t}_{\beta b}-(-1 )^{|a||b|+|b|}\left(\delta_{ab}+\delta_{a,-b}\right)\xi\tilde{t}_{\beta b} \tilde{t}_{\alpha a}\] \[= \xi\left(\delta_{\beta<\alpha}-\delta_{a<b}+\delta_{\alpha\beta}- \delta_{ab}\right)\tilde{t}_{\beta a}\tilde{t}_{\alpha b}+(-1)^{|a|}\xi\left( \delta_{-b<a}+\delta_{-b,a}\right)\tilde{t}_{\beta,-a}\tilde{t}_{\alpha,-b}\] \[= \xi(\delta_{b<a}-\delta_{\alpha<\beta})\tilde{t}_{\beta a}\tilde{t}_ {\alpha b}+(-1)^{|a|}\xi\left(1-\delta_{b<-a}\right)\tilde{t}_{\beta,-a}\tilde{ t}_{\alpha,-b},\] which yields with (4.5) in the situation of \(\alpha>0\) and \(\beta>0\). This completes the proof. _Remark 4.2_.: The dual quantum coordinate superalgebra \(\tilde{\Lambda}_{s,n}\) is also presented by generators \(\tilde{t}_{\alpha a}\) for \(\alpha=1,\ldots,s\) and \(a\in I_{n|n}\) and the relation \[\tilde{T}_{+}^{1[3]}\tilde{T}_{+}^{2[3]}R_{J}^{12}=S_{J}^{12}\tilde{T}_{+}^{2[3 ]}\tilde{T}_{+}^{1[3]},\] where \(\tilde{T}_{+}=\sum\limits_{i=1}^{s}\sum\limits_{b\in I_{n|n}}(-1)^{|b|}E_{bi} \otimes\tilde{t}_{ib}\), \(S_{J}\) is the tensor matrix given in (3.11), and \[R_{J}=-\sum\limits_{i,j=1}^{s}q^{-\delta_{ij}}E_{ii}\otimes E_{jj}+\xi\sum \limits_{1\leqslant j<i\leqslant s}E_{ji}\otimes E_{ij}\] is the submatrix of \(S_{J}\) involving the terms \(E_{ik}\otimes E_{jl}\) with \(1\leqslant i,j,k,l\leqslant s\). The superalgebra \(\tilde{\mathrm{A}}_{s,n}\) is also a \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra under the action \(\tilde{\Phi}\)given in terms of generator matrices as \[L^{[2]3}\cdot\tilde{T}^{1[2]}=\left(S^{-1}\right)^{13}\tilde{T}^{1[2]}, \tag{4.7}\] where \[S^{-1}=\sum\limits_{i,j\in I_{r|r}}q^{-\varphi(i,j)}E_{ii}\otimes E_{jj}-\xi \sum\limits_{i,j\in I_{r|r},i<j}(-1)^{|i|}(E_{ji}+E_{-j,-i})\otimes E_{ij} \tag{4.8}\] is the inverse of the tensor matrix \(S\) given in (2.1). 
**Lemma 4.3**.: _For each \(0\leq p\leq l\), there is an injective homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras_ \[\tilde{t}_{p}:\tilde{\mathrm{A}}_{s,n}\to\tilde{\mathrm{A}}_{s+l,n},\quad \tilde{t}_{\alpha a}\mapsto\tilde{t}_{\alpha-p,a},\quad\alpha=-1,\ldots,-l, \quad a\in I_{n|n}.\] An explicit braiding operator \(\tilde{\Theta}:\tilde{\mathrm{A}}_{l,n}\otimes\tilde{\mathrm{A}}_{s,n}\to \tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}\) can be defined as follows. **Proposition 4.4**.: _There is a unique homomorphism of \(\mathrm{U}_{q}(\mathfrak{q}_{n})\)-supermodules_ \[\tilde{\Theta}:\tilde{\mathrm{A}}_{l,n}\otimes\tilde{\mathrm{A}}_{s,n}\to \tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}\] _satisfying the commutative diagrams (2.4),(2.5), and such that_ \[\tilde{\Theta}\left(1\otimes\bar{x}\right)= \bar{x}\otimes 1,\quad\bar{x}\in\tilde{\mathrm{A}}_{s,n}, \tag{4.9}\] \[\tilde{\Theta}\left(\bar{y}\otimes 1\right)= 1\otimes\bar{y}\quad\bar{y}\in\tilde{\mathrm{A}}_{l,n},\] (4.10) \[\tilde{\Theta}\left(\tilde{T}_{l}^{1[3]}\,\tilde{T}_{s}^{2[4]}\right)= \mathrm{S}^{12}\tilde{T}_{s}^{2[3]}\tilde{T}_{l}^{1[4]}, \tag{4.11}\] _where \(\tilde{T}_{l}=\sum\limits_{\beta=-l}^{-1}\sum\limits_{b\in I_{n|n}}E_{b\beta} \otimes\tilde{t}_{\beta b}\) and \(\tilde{T}_{s}=\sum\limits_{\alpha=-s}^{-1}\sum\limits_{a\in I_{n|n}}E_{aa} \otimes\tilde{t}_{\alpha a}\) are the generator matrices of \(\tilde{\mathrm{A}}_{l,n}\) and \(\tilde{\mathrm{A}}_{s,n}\), respectively._ Proof.: The proposition follows from the same technique of the proposition 3.3. The braiding operator \(\tilde{\Theta}\) yields a braided multiplication on \(\tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}\): \[\tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}\otimes\tilde{\mathrm{ A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n}\xrightarrow{\ \ 1\otimes\tilde{\Theta}\otimes 1\ }\tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{s,n}\otimes\tilde{ \mathrm{A}}_{l,n}\otimes\tilde{\mathrm{A}}_{l,n}\xrightarrow{\ \ \mathrm{mul}\otimes\mathrm{mul}\ }\tilde{\mathrm{A}}_{s,n}\otimes\tilde{\mathrm{A}}_{l,n},\] under which the braided tensor product \(\tilde{A}_{s,n}\otimes\tilde{A}_{l,n}\) is a \(U_{q}(q_{n})\)-supermodule superalgebra. In particular, \[\left(1\otimes\tilde{t}_{\beta b}\right)\left(\tilde{t}_{\alpha a} \otimes 1\right)= (-1)^{(|a|+1)(|b|+1)}\,q^{\varphi(b,a)}\,\tilde{t}_{\alpha a} \otimes\tilde{t}_{\beta b}-\delta_{a<b}\xi\,\tilde{t}_{\alpha b}\otimes\tilde{ t}_{\beta a}\] \[-(-1)^{|b|}\delta_{a<-b}\xi\,\tilde{t}_{\alpha,-b}\otimes\tilde{ t}_{\beta,-a},\] for \(\alpha=-1,\ldots,-s,\beta=-1,\ldots,-l\) and \(a,b\in I_{n|n}\). **Theorem 4.5**.: _The \(\mathbb{C}(q)\)-linear map_ \[\tilde{\sigma}:\tilde{A}_{s,n}\otimes\tilde{A}_{l,n}\to\tilde{A}_{s+l,n},\quad x \otimes y\mapsto\tilde{t}_{l}(x)\tilde{t}_{0}(y),\quad x\in\tilde{A}_{s,n}, \quad y\in\tilde{A}_{l,n},\] _is an isomorphism of \(U_{q}(q_{n})\)-supermodule superalgebras, where \(\tilde{t}_{l}:\tilde{A}_{s,n}\mapsto\tilde{A}_{s+l,n}\) and \(\tilde{t}_{0}:\tilde{A}_{l,n}\mapsto\tilde{A}_{s+l,n}\) are the maps given in Lemma 4.3_ Proof.: The proof is similar to Theorem 3.5, we omit the details here. _Remark 4.6_.: Fix \(1\leq\alpha\leq s\), the \(\mathbb{C}(q)\)-vector sub-superspace of \(\tilde{A}_{s,n}\) spanned by \(t_{\alpha b}\) for \(b\in I_{n|n}\) is isomorphic to the dual natural \(U_{q}(q_{n})\)-supermodule \(V_{q}^{*}\). 
Hence, the dual quantum coordinate superalgebra \(\tilde{A}_{1,n}\) is isomorphic to quantum supersymmetric algebra \(S_{q}(V^{*})\). Moreover, the superalgebra \(\tilde{A}_{s,n}\) is isomorphic to the braided tensor product \(S_{q}(V^{*})^{\otimes s}\). We also define a quantum analogue of the supersymmetric algebra \(\operatorname{Sym}\left(\Pi(V^{*})^{\otimes l}\right)\) as follows: **Definition 4.7**.: \(\tilde{A}_{l,n}^{\Pi}\) is defined as a unital associative superalgebra presented by generators \(\tilde{t}_{\alpha b}^{\pi}\) of parity \(|b|+\tilde{1}\) for \(\alpha=-1,\ldots,-l\) and \(b\in I_{n|n}\), and the relation \[\tilde{T}_{-}^{1[3]}\tilde{T}_{-}^{2[3]}R_{-}^{12}=S_{J}^{12}\,\tilde{T}_{-}^ {2[3]}\,\tilde{T}_{-}^{1[3]}, \tag{4.12}\] where \(\overset{\pi}{T}_{-}=\sum\limits_{\alpha=-l}^{-1}\sum\limits_{b\in I_{n|n}}E_ {b\alpha}\otimes\tilde{t}_{\alpha b}^{\pi}\), the tensor matrices \(R_{-}\) and \(S_{J}\) are given in (4.4) and (3.11), respectively. **Proposition 4.8**.: \(\tilde{A}_{l,n}^{\Pi}\) _is a \(U_{q}(q_{n})\)-supermodule superalgebra with the \(U_{q}(q_{n})\)-action \(\tilde{\Phi}^{\pi}\) determined by_ \[L^{[2]3}\cdot\overset{\pi}{\tilde{T}_{-}^{1[2]}}=\left(S^{-1}\right)^{13}\, \tilde{T}_{-}^{1[2]}.\qed\] **Proposition 4.9**.: _There is an isomorphism of \(U_{q}(q_{n})\)-supermodule superalgebras_ \[\tilde{\tau}:\tilde{A}_{l,n}^{\Pi}\to\tilde{A}_{l,n},\] _such that_ \[\tilde{\tau}\left(\overset{\pi}{T}_{-}\right)=\left(J\otimes 1\right)\, \tilde{T}_{-}.\qed\] Polynomial Invariants In the quantum case, the associative superalgebras \(\mathsf{A}_{r,n}\), \(\mathsf{A}_{k,n}^{\Pi}\), \(\tilde{\mathsf{A}}_{s,n}\) and \(\tilde{\mathsf{A}}_{l,n}^{\Pi}\) are quantum analogues of the supersymmetric algebras \(\operatorname{Sym}\left(V^{\oplus r}\right)\), \(\operatorname{Sym}\left(\Pi(V)^{\oplus k}\right)\), \(\operatorname{Sym}\left(V^{*\oplus s}\right)\) and \(\operatorname{Sym}\left(\Pi(V^{*})^{\oplus I}\right)\), respectively. We consider the \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-supermodule \[\mathcal{B}_{s,l}^{r,k}:=\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}^{\Pi}\otimes \tilde{\mathsf{A}}_{s,n}\otimes\tilde{\mathsf{A}}_{l,n}^{\Pi}.\] By Proposition 3.11, \(\mathsf{A}_{k,n}^{\Pi}\) is isomorphic to \(\mathsf{A}_{k,n}\). Then \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}^{\Pi}\) is a \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra under the braided multiplication. It follows from Theorem 3.5 that \(\mathsf{A}_{r,n}\otimes\mathsf{A}_{k,n}^{\Pi}\) is also isomorphic to \(\mathsf{A}_{r+k,n}\) as \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras. Recall from [6, Section 2] that the associative superalgbera \(\mathsf{A}_{r+k,n}\) is also a \(\operatorname{U}_{q}(\mathfrak{q}_{r+k})^{\mathrm{cop}}\)-supermodule superalgebra with the action \(\Psi\) determined by \[L^{[2]3}\underset{\Psi}{\cdot}T^{1[2]}=\left(S^{-1}\right)^{13}T^{1[2]}, \tag{5.1}\] where \(L\) is the generator matrix of \(\operatorname{U}_{q}(\mathfrak{q}_{r+k})\), \(T\) is the generator matrix of \(\mathsf{A}_{r+k,n}\), and \(S^{-1}\) is given in (4.8). Moreover, the \(\operatorname{U}_{q}(\mathfrak{q}_{r+k})\)-action \(\Psi\) and the \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-action \(\Phi\) on \(\mathsf{A}_{r+k,n}\) are super-commutative. Thus \(\mathsf{A}_{r+k,n}\) is a \(\operatorname{U}_{q}(\mathfrak{q}_{r+k})\otimes\operatorname{U}_{q}(\mathfrak{ q}_{n})\)-supermodule. 
Howe duality [5, Theorem 4.2] implies that \(\mathsf{A}_{r+k,n}\) admits a multiplicity-free decomposition as a \(\operatorname{U}_{q}(\mathfrak{q}_{r+k})\otimes\operatorname{U}_{q}( \mathfrak{q}_{n})\)-supermodule. Similarly, we deduce from Theorem 4.5 and Proposition 4.9 that the braided tensor product \(\tilde{\mathsf{A}}_{s,n}\otimes\tilde{\mathsf{A}}_{l,n}^{\Pi}\) is isomorphic to \(\tilde{\mathsf{A}}_{s+l,n}\) as \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebras. Besides being a \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-supermodule, the superalgebra \(\tilde{\mathsf{A}}_{s+l,n}\) is a \(\operatorname{U}_{q}(\mathfrak{q}_{s+l})\)-supermodule under the action \(\bar{\Psi}\) given by \[L^{[2]3}\underset{\Psi}{\cdot}\tilde{T}^{1[2]}=\tilde{T}^{1[2]}\tilde{S}^{13}, \tag{5.2}\] where \(L\) is the generator matrix of \(\operatorname{U}_{q}(\mathfrak{q}_{s+l})\), \(\tilde{T}\) is the generator matrix of \(\tilde{\mathsf{A}}_{s+l,n}\), and \(\tilde{S}=(1\otimes D)S(1\otimes D^{-1})\) with \(D=\sum\limits_{\alpha=1}^{s+l}q^{2\alpha}(E_{\alpha\alpha}+E_{-\alpha,-\alpha})\). The \(\operatorname{U}_{q}(\mathfrak{q}_{s+l})\)-action \(\bar{\Psi}\) on \(\tilde{\mathsf{A}}_{s+l,n}\) supercommutes with the \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-action \(\bar{\Phi}\). As a \(\operatorname{U}_{q}(\mathfrak{q}_{n})\otimes\operatorname{U}_{q}(\mathfrak{ q}_{s+l})\)-supermodule, \(\tilde{\mathsf{A}}_{s+l,n}\) also admits a multiplicity-free decomposition. Using the braided tensor product \(\mathcal{B}_{s+l,0}^{r+k,0}=\mathsf{A}_{r+k,n}\otimes\tilde{\mathsf{A}}_{s+l,n}\) introduced in [6, Section 4], the \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-supermodule \(\mathcal{B}_{s,l}^{r,k}\) is also an associative superalgebra with respect to the braided multiplication. Moreover, it is a \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-supermodule superalgebra. By [6, Theorem 5.10], we conclude that **Theorem 5.1**.: _The \(\operatorname{U}_{q}(\mathfrak{q}_{n})\)-invariant sub-superalgebra of \(\mathcal{B}_{s,l}^{r,k}\)_ \[\left(\mathcal{B}_{s,l}^{r,k}\right)^{\operatorname{U}_{q}(\mathfrak{q}_{n})}:= \left\{z\in\mathcal{B}_{s,l}^{r,k}|\Phi_{u}(z)=\varepsilon(u)z,\forall u\in \operatorname{U}_{q}(\mathfrak{q}_{n})\right\}\] _is generated by the following elements:_ \[x_{i\alpha}=\sum_{p\in I_{n|n}}t_{ip}\otimes 1\otimes\tilde{t}_{ \alpha p}\otimes 1, y_{j\alpha}=\sum_{p\in I_{n|n}}1\otimes t_{j,-p}^{\pi} \otimes\tilde{t}_{\alpha p}\otimes 1,\] \[z_{i\beta}=\sum_{p\in I_{n|n}}(-1)^{|p|}t_{ip}\otimes 1\otimes 1 \otimes\tilde{t}_{\beta,-p}^{\pi}, w_{j\beta}=\sum_{p\in I_{n|n}}(-1)^{|p|}1\otimes t_{jp}^{\pi} \otimes 1\otimes\tilde{t}_{\beta p}^{\pi},\] _for \(i=1,\ldots,r\), \(j=1,\ldots,k\), \(\alpha=1,\ldots,s\) and \(\beta=-1,\ldots,-l\). \(\Box\)_ ## Acknowledgments The project is supported by the National Natural Science Foundation of China (Nos. 12071150 and 12071026), the Science and Technology Planning Project of Guangzhou (No. 202102021204).
2310.06775
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
2023-10-03T15:53:55Z
http://arxiv.org/abs/2310.06775v2
# Conceptual Framework for Autonomous Cognitive Entities ###### Abstract. The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible. ## 1. Introduction In recent years, artificial intelligence (AI) systems have become increasingly capable of operating autonomously to accomplish complex goals and tasks without human guidance (Sutton et al., 2016). However, imbuing autonomous agents with the capacity for ethical reasoning and alignment with human values remains an open challenge that has gained urgency alongside AI's rapid progress (Sutton et al., 2016). Most conventional AI architectures proposed in prior work lack integrated models of morality and focus narrowly on developing technical skills and capabilities rather than full internal cognitive faculties (Sutton et al., 2016). This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel conceptual framework for architecting ethical artificial general intelligence based on a layered cognitive architecture. The advent of large language models (LLMs) such as ChatGPT has catalyzed a paradigm shift towards incorporating natural language understanding into cognitive architectures (Sutton et al., 2016). Formulating cognitive capabilities in natural language allows LLMs to serve as key components, enabling a flexible understanding of contextual information (Sutton et al., 2016). However, standalone LLMs lack the architectural integration needed for robust and corrigible autonomous systems. The proposed ACE framework aims to harness these emerging capabilities but further innovate architecturally to privilege ethics, security, and human alignment. The proliferation of LLMs has raised many philosophical puzzles regarding the nature of the reasoning and understanding demonstrated by these models. It remains unclear precisely how the statistical patterns LLMs acquire from textual training data might correspond to human-like conceptual knowledge and semantics. Assumptions that LLMs obtain true comprehension of meaning and reasoning purely from statistical co-occurrence patterns remain speculative (Sutton et al., 2016). Significant gaps persist in elucidating how LLMs represent abstractions relating to truth, inference, and symbol grounding. 
While they show promise in replicating certain facets of human intelligence, we must be cautious against premature conclusions that LLMs fully capture capacities like common sense or generalizable reasoning (Sutton et al., 2016). Nevertheless, their practical utility for specialized applications is clear, and the ACE framework aims to leverage their strengths while mitigating limitations through architectural integration. The key innovation in the ACE model is its hierarchical structure consisting of six layers, each handling specialized cognitive functions. The upper Aspirational and Global Strategy layers focus on moral reasoning, values, and high-level planning to shape the overall system direction. The mid-level Agent Model, Executive Function, and Cognitive Control layers address self-modeling, dynamic task management, and decision-making. Finally, the bottom Task Prosecution layer handles execution and embodiment. Bi-directional information flow allows top-down oversight by the ethical reasoning modules while enabling bottom-up learning from the ground-up execution levels. This coordinated architecture integrates insights from diverse disciplines including neuroscience, psychology, philosophy, and software engineering to realize artificial intelligence capabilities within a system aligned with human values. The ACE framework incorporates both deontological and teleological ethical approaches, rejecting an "either/or" stance in favor of a "both/and" perspective (Shi et al., 2017). By embedding abstract principles and technical implementation together within a unified architecture, the ACE model provides a systematic framework for developing capable and beneficial autonomous cognitive systems. The layered encapsulation draws lessons from paradigms like the OSI model to enhance security, corrigibility, and coordination (Shi et al., 2017). The hierarchical structure allows clear separation between layers, from ethical reasoning to physical embodiment, enhancing interpretability as communication between layers is transparent. The privilege separation also aids corrigibility by allowing the Aspirational Layer to monitor and intervene to correct deviations. And the bidirectional flows facilitate both oversight and learning across the cognitive stack. Together, these architectural principles aim to produce AI systems that are capable, secure, and aligned with human values. The ACE framework methodology discusses safety properties, detailed computational implementations, and comparative conceptual evaluations on diverse scenarios. By contributing the conceptual ACE framework, this paper hopes to catalyze exploration into architectures integrating ethics and learning for artificial general intelligence. The introduced model establishes an initial foundation, guiding follow-on engineering efforts towards the long-term goal of developing AIs that learn, adapt and thrive while remaining steadfastly aligned to the aspirations of humanity. Extensive research across many dimensions will be essential to fully realize this vision in applied autonomous systems. The paper is structured as follows: First, we provide comprehensive background on relevant prior work including cognitive architectures, AI ethics, layered system models, and autonomous agents. Next, we present the conceptual ACE framework in detail, explicating each of its six layers and their interconnections. We then demonstrate the framework's application through use cases including an autonomous virtual character and home assistant robot. 
Finally, we analyze architectural considerations, limitations, comparisons to existing models, and future research directions. Through the proposed ACE model, this research aims to establish a new paradigm for developing capable AI that aligns decisions and actions with moral principles and human values from the ground up. ## 2. Related Work The development of the ACE framework builds upon prior research across diverse fields including cognitive architectures, machine learning, neuroscience, psychology, and philosophy. This section reviews key concepts and models from these disciplines that informed the design of the ACE model. First, we examine recent advancements in cognitive architectures, particularly the emergence of natural language models and their implications for developing flexible, human-aligned systems. Next, we explore relevant philosophical principles around ethics and morality that provide an aspirational foundation. Then, we discuss insights from neuroscience that reveal the structures and mechanisms underlying biological cognition. Additionally, we consider research in psychology illuminating human motivations and developmental factors relevant to artificial intelligence. Finally, we review limitations of prior agent architectures and how the ACE framework aims to address these gaps. By synthesizing across these transdisciplinary perspectives, the ACE model integrates ethical, cognitive, and philosophical insights toward realizing capable and beneficial autonomous agents. ### Cognitive Architectures Cognitive architectures like SOAR, ACT-R, and CHREST have been instrumental frameworks in artificial intelligence (Grover et al., 2017; Goyal et al., 2018; Goyal et al., 2018). SOAR uses symbolic rule-based reasoning to model goal-oriented behavior, while ACT-R incorporates declarative and procedural memory systems informed by human cognition research. These architectures demonstrated how to model agents capable of planning, problem-solving, and decision-making. However, they rely heavily on pre-defined symbolic representations and have limited learning capabilities. Reinforcement learning has offered a mechanism for augmenting cognitive architectures with trial-and-error learning abilities (Srivastava et al., 2014). For instance, CHREST integrates reinforcement learning and neural networks with a symbolic system enabling adaptive behavior (Kumar et al., 2017). However, a limitation of many conventional architectures is a focus strictly on sensorimotor skills rather than internal cognitive capabilities (Srivastava et al., 2014). Recently, there has been growing interest in incorporating large language models (LLMs) to enable more human-like flexible reasoning (Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018). For example, MARAGI proposes an architecture using LLMs for natural language conversation, planning, and knowledge representation (Goyal et al., 2018). Similarly, NLCA utilizes LLMs as components within a modular architecture (Goyal et al., 2018). Importantly, these emerging natural language cognitive architectures lack explicit layers dedicated to moral reasoning or value alignment. The ACE framework differentiates itself by placing aspirational and mission layers at the top of the architecture prioritizing ethical goals. In contrast to sensorimotor-focused conventional architectures, ACE emphasizes internal cognition detached from direct environmental interaction. 
By integrating LLMs within a layered architecture guided by moral principles, ACE provides a systematic framework for realizing capable and aligned artificial general intelligence. In particular, the emergence of large language models (LLMs) like GPT-4 is catalyzing a paradigm shift toward natural language cognitive architectures (Goyal et al., 2018). LLMs possess extensive world knowledge and sophisticated language understanding abilities acquired through pre-training on massive text corpora. By formulating cognitive capabilities in natural language, LLMs can be incorporated as key components enabling interpretability, common sense reasoning, and general intelligence. For instance, Anthropic's Constitutional AI utilizes LLMs like Claude to provide ethical alignment within an autonomous agent architecture (Goyal et al., 2018). Similarly, Anthropic's Internal Self-Explanation generates natural language explanations of model behavior using LLMs. This demonstrates the power of natural language to make AI systems more transparent, corrigible, and aligned with human values. By harnessing the latent knowledge within large language models, a new generation of cognitive architectures is emerging based on natural language understanding (Srivastava et al., 2014). This paradigm shift promises more human-like flexible intelligence while maintaining interpretability and corrigibility. The ACE framework contributes by providing a layered architecture integrating LLMs within a principled cognitive structure. ### Moral Philosophical Foundations The proposed ACE framework integrates various philosophical concepts that motivated its layered architecture for autonomous decision-making. The framework transitions from abstract reasoning in higher layers down to concrete actions in lower layers. Lawrence Kohlberg's theory of moral development, which progresses from obedience and punishment-driven morality to universal ethical principles and moral values as illustrated in Figure 1, inspired this hierarchical structure (Kohlberg, 1998). Kohlberg's prioritization of humanity's highest values shaped the ACE framework's emphasis on embedding moral reasoning in its upper layers. Similarly, Abraham Maslow's hierarchy of needs (Maslow, 1998), which ascends from basic needs to self-actualization and self-transcendence, reinforced the value of architecting a progression from concrete to conceptual functions. Together, these seminal philosophical models provided impetus for the ACE framework's organization into logical strata of abstraction, establishing an ethical foundation to guide the system's design. Incorporating both modern and classical perspectives, the ACE framework uniquely synthesizes Patricia Churchland's concept of expanding "spheres of caring" with Sigmund Freud's theories concerning the conscious and unconscious mind (Sigmund, 1998; Churchland, 1999). Churchland's "spheres of caring," which extend from self to society and beyond, establish a link between biological imperatives and abstract morality, thus serving as a bridge for the cognitive and philosophical foundations of the ACE model. Notably, Churchland identified that suffering within these spheres is a transitive property, meaning the suffering of loved ones is tantamount to the suffering of oneself. This notion aligns closely with the universal values we present in our framework. Freud's theories provide insights into self-awareness, self-direction, and internal conflict. 
His conscious and unconscious mind concepts, along with the ego, superego, and id, offer perspectives on self-representation and idealized values in the ACE architecture. The ego informs the Agent Model layer, while the superego captures a virtuous agent's essence in the Aspirational Layer. Integrating these theories, the ACE framework enables a multidimensional understanding of autonomous agents, contributing to a comprehensive cognitive architecture with ethical and psychological dimensions. In a broader sense, the ACE model incorporates concepts from both teleological and deontological ethics. Deontology, or duty-based ethics, aims to create an agent that adheres to principles or heuristics to make ethical decisions (Sapiro et al., 2016). On the other hand, teleology, or outcome-based ethics, focuses on the long-term results of behaviors and decisions (Sigmund, 1998). Both these ethical approaches are integrated into the Aspirational Layer, rejecting an "either/or" approach in favor of a "both/and" perspective on machine decision frameworks and ethical models. ### Neuroscience Foundations The ACE framework integrates principles from diverse areas of neuroscience research to inform its cognitive architecture design. Jeff Hawkins' work on the modular, parallel nature of cortical information processing provides biological grounding for the layered encapsulation in the ACE model (Sigmund, 1998). Hawkins views the thousands of cortical columns in Figure 1. _Lawrence Kohlberg’s theory of moral development_ the brain as mini-modules that process information simultaneously. This "thousand brains" theory directly inspired the ACE framework's hierarchical layers that can operate independently yet coordinate for cognition. Additionally, the clinical research of V.S. Ramachandran demonstrated how localized brain damage leads to specific deficits like phantom limb pain or face blindness (Ramachandran, 2017). Ramachandran's findings indicated that conscious experience arises from the integration of discrete brain components. This supported the ACE model's emphasis on layered encapsulation while still allowing bidirectional information flow between layers. The work of neuroscientist Robert Sapolsky on the neurobiology of behavior provided essential perspective on self-regulation that informed the ACE framework (Rasmal, 2017). By elucidating factors that contribute to both prosocial and antisocial conduct, Sapolsky shed light on mechanisms of behavioral control and distortion relevant to the ACE model's cognitive control layers. His integration of neuroscience, evolution, and endocrinology provided a multidimensional understanding of judgment that helped shape the ACE framework. Cognitive neuroscience research on executive functions and cognitive control also directly influenced the ACE model (Damasio, 2017; D'Alessio et al., 2018). For instance, David Badre's work examined the neural basis of abilities like task switching, planning, and emotion regulation that are instantiated in the ACE framework's lower layers (Damasio, 2017). Similarly, Earl Miller's insights into cognitive control mechanisms and the prefrontal cortex informed the model's decision-making capacities (D'Alessio et al., 2018). Additionally, the clinical insights on brain disorders and distortions provided by neurologists like Antonio Damasio and Oliver Sacks highlighted common failure modes (Damasio, 2017; D'Alessio et al., 2018). 
By understanding pathologies ranging from phantom limbs to false memories, the ACE framework could be designed proactively to avoid such pitfalls. Damasio's research on emotion, reason, and the somatic marker hypothesis also shaped the role of affect in biasing decision-making within the ACE model (Damasio, 2017). By bridging multiple disciplines including cognitive neuroscience, clinical neurology, and neurobiology, the ACE framework aims to reflect the multifaceted capabilities and vulnerabilities of human cognition in its design (Damasio, 2017; D'Alessio et al., 2018). This transdisciplinary integration of neuroscience principles provides a biological foundation for the layered architecture and cognitive control mechanisms of the ACE model. ### Layered Models Layered architectural models like the OSI model illustrated in Figure 2 and SOA have demonstrated the power of hierarchical abstraction in designing robust systems. The OSI model enabled the development of networking protocols and infrastructure through its division into encapsulated layers dealing with logical functions (D'Alessio et al., 2018). Similarly, SOA provides flexibility and maintainability in software applications via its layered service-oriented paradigm (D'Alessio et al., 2018). The ACE framework applies these lessons by utilizing layered abstraction to structure internal cognition. However, most prior layered models focus on external functions rather than internal reasoning. For example, the OSI model handles network communication and SOA organizes software services. In contrast, ACE models layered cognition spanning abstract reasoning to concrete actions. The field of cybersecurity offers more direct inspiration through layered models like the "Defense in Depth" framework (Damasio, 2017). This advocates protecting systems through nested layers encompassing physical security, network security, host security, application security, and data security. The principles of privileged separation and hierarchical control in Defense in Fig. 2. _OSI Model_ ptenencies required for flexible intelligence. By integrating insights from diverse layered models while innovating to focus on internal cognition, the ACE framework pioneers a new application of hierarchical abstraction for artificial general intelligence. The layered approach provides conceptual clarity and privilege separation critical for security and corrigibility. ### Autonomous Agents Autonomous agents have been an active research area within artificial intelligence for several decades. Early research focused on developing deliberative agents that could autonomously plan actions based on logical representations of environment states, goals, and possible actions (Shi et al., 2017). While able to exhibit goal-directed behavior, these systems were limited by the need to explicitly enumerate all feasible environment states. Reinforcement learning emerged as a paradigm enabling agents to learn optimal policies through trial-and-error interactions within an environment (Shi et al., 2017). By removing the need for explicit state enumeration, reinforcement learning empowered agents to handle larger state spaces. However, challenges remained with scaling to complex tasks and ensuring safe exploration. Integrating deliberative planning and reactive learning in hybrid architectures was explored as a way to combine top-down and bottom-up processing (Shi et al., 2017). Finding the right balance between planning and learning remains an open research area. 
An important concept emerging in autonomous agents research is levels of autonomy (LOA) (Shi et al., 2017). LOA provides a framework to categorize systems based on their level of independence from human control. Lower LOA systems have limited autonomy and rely heavily on human guidance. As LOA increases, agents gain greater ability to independently perceive environments, plan actions, and execute behaviors. A seminal publication by the U.S. Defense Science Board proposed 10 levels of autonomy, with the highest level denoting full autonomy (Shi et al., 2017). This spurred significant research focused on advancing agent capabilities by increasing LOA. Recent advances in deep reinforcement learning have enabled breakthroughs in autonomous agent capabilities. By utilizing deep neural networks as function approximators within reinforcement learning, deep reinforcement learning algorithms have achieved human-level performance on complex games using only raw sensory inputs (Shi et al., 2017). However, challenges remain in extending such successes in game environments to real-world applications. Frameworks have also emerged for imbuing agents with ethical principles and human values, promoting safe and beneficial behavior alongside increases in autonomy (Shi et al., 2017; Shi et al., 2017). Integrating such top-down constraints in a scalable manner remains an open problem. The proposed ACE framework aims to address this through incorporating philosophical ideals within the upper layers of the cognitive architecture. Autonomous agents have progressed from logical reasoning systems to powerful deep learning architectures. However, safely integrating human ethics and values as autonomy scales remains an essential capability needed for deployed autonomous intelligent systems. The ACE framework contributes towards this goal through its emphasis on unifying ethical reasoning and autonomous learning within a layered cognitive architecture. ### Ethical AI Frameworks As artificial intelligence systems grow more capable and autonomous, ensuring their actions align with ethical and moral norms becomes increasingly important. This has led to significant research into developing ethical AI frameworks that provide principles, methods, and tools for imbuing values into intelligent systems. A key challenge is translating high-level abstract ethics into concrete constraints and objectives that can be operationalized within an AI system (Bahdan et al., 2017). Deontological approaches based on rules and duties have formed one avenue for encoding ethics. For example, Isaac Asimov's Three Laws of Robotics aimed to constrain robot behavior through a hierarchical set of rules (Bahdan et al., 2017). However, rigid rule-based systems struggle to handle nuanced real-world situations involving conflicting principles or moral dilemmas. Consequentialist frameworks that evaluate the outcomes of actions provide an alternative approach. But defining ethical objectives and successfully optimizing for them proves difficult in practice. Hybrid frameworks aim to combine deontological constraints with consequentialist objectives (Sandel, 2017). Ensuring coherent integration of these two facets remains an open problem. Layered architectures have been explored as a way to structure ethical reasoning within AI systems. For example, the Ethical Layered Architecture (ELA) proposes three hierarchical layers for ethical robots: ethical rules, ethical culture, and ethical adjustment (Kumar et al., 2018). 
The lowest layer encodes rigid constraints, the middle layer captures norms and values, and the top layer enables resolving conflicts. This separation of abstract principles and concrete rules within a layered hierarchy aims to balance flexibility and safety in applying ethics. The ACE framework contributes a unique perspective by embedding ethical reasoning within the upper layers of a layered cognitive architecture. Heuristic imperatives and moral frameworks provide top-down constraints, while lower levels enable autonomous learning and skill acquisition. This unifies abstract ethics and real-world capabilities within a single system. Evaluation across diverse situations faced during deployment would help further refine the integrated ethical AI capabilities of systems built on the ACE framework. ### Filling the Gaps While significant progress has been made in developing autonomous agent architectures, most prior work lacks the integration of insights from philosophy, cognitive science, and neuroscience that enable robust internal cognitive capabilities. Many existing systems have hard-coded goals and limited flexibility for self-direction (Sandel, 2017; Sandel et al., 2018). They focus narrowly on executing specific skills and workflows rather than developing general competencies for autonomous goal-setting, planning, and adaptation (Sandel, 2018). Furthermore, few frameworks incorporate models of cognitive control, frustration tolerance, and dynamic task management (Sandel, 2018). The ACE framework aims to address these limitations by combining abstract philosophical ideals with cognitive mechanisms inspired by neuroscience research into executive functions and behavioral adaptation. By integrating these diverse perspectives, the ACE model provides a potential path toward artificial general intelligence with aligned values, flexible skills, and human-like cognitive control. The layered abstraction also enables ongoing refinement of competencies at different levels to steadily improve autonomous capabilities. Further research and evaluation will be needed to assess the ACE framework's contributions in bridging these gaps compared to prior autonomous agent architectures. ## 3. The ACE Framework The Autonomous Cognitive Entity (ACE) framework comprises six hierarchical layers that coordinate specialized cognitive functions to enable autonomous decision-making aligned with ethical principles. The role and capabilities of each layer within the ACE model are detailed, explicating how they collectively give rise to an artificial intelligence architecture grounded in moral values. We discuss the conceptual formulations and key mechanisms within each layer, along with their interactions and information flows. The layers build progressively from abstract reasoning in the Aspirational Layer down to concrete action execution in the Task Prosecution Layer. By elucidating the formulation and synergistic connections between layers, we aim to provide a comprehensive reference for the ACE framework's layered cognitive architecture. The conceptualization of the ACE framework was initially informed by a systematic literature review methodology to synthesize insights from relevant prior research. This involved systematically searching the literature using defined inclusion/exclusion criteria, screening identified papers for relevance, extracting key data, and synthesizing the results to derive conceptual themes and perspectives to guide the framework design (Shi et al., 2018). 
The systematic review provided a rigorous approach for gathering an evidence base across diverse disciplines including neuroscience, psychology, philosophy, and computer science that helped shape the preliminary ACE model (Shi et al., 2018). This methodical synthesis of the state-of-the-art helped ensure the resulting framework design was grounded in existing knowledge. However, the systematic review alone was insufficient to fully develop the nuanced ACE architecture. Therefore, a participatory design approach was subsequently undertaken to enable direct researcher input and critique during the ACE framework elaboration. We followed a participatory design approach in developing the conceptual ACE framework. This human-centered methodology enabled incorporating diverse expertise and perspectives into the architecture design (Shi et al., 2018). Key participatory activities included: Co-design sessions, where researchers jointly drafted components of the framework and critiqued the evolving architecture, and Concept validation, where draft ACE framework descriptions were shared for feedback. These participatory activities encouraged constructive debate regarding human values, evolving AI capabilities, scientific realities, and ethical considerations relevant to the framework. The diversity of expertise enabled encompassing a multidimensional design space. Through these co-creative activities, researchers provided direct input shaping both the high-level structure and detailed formulations of the ACE framework components and their interactions. The participatory design process enhanced human-centeredness in the resulting conceptual architecture. ### Principles of the ACE Framework The ACE framework is based on various theories and principles that shape its design and capabilities. This section explores the philosophical, psychological, and computational theories behind the ACE model's key aspects, forming its conceptual foundations. We discuss the hierarchical structure of layered abstraction in the ACE framework, drawing from biological and artificial systems. Information flow and privilege separation principles are examined, highlighting their contributions to security, corrigibility, and layer coordination. The integration of teleological and deontological ethics is analyzed, demonstrating how it combines goal-directedness with rule-based judgments. This section clarifies the diverse theoretical underpinnings of the ACE model, revealing the conceptual basis for its layered cognitive architecture. These identified theories and principles offer a foundation for developing capable, secure, and ethically aligned autonomous systems. #### 3.1.1. Cognition-First Approach The ACE framework's key innovation is its "cognition-first" approach, emphasizing internal cognition over reactive input-output loops, addressing limitations in conventional sensorimotor loop paradigms (Shi et al., 2018). Instead of arranging layers for circular flow between perception, reasoning, and action, ACE uses a vertical stack prioritizing thought and reflection. Upper layers focus on strategic planning, imagination, and self-directed goals, detached from physical embodiment. Only the lowest layer interfaces with the external world for tangible behaviors. This organization prioritizes internal cognition, with sensory and motor abilities being secondary. ACE models autonomous systems as "thinking machines with physical skills" rather than entities defined by sensorimotor mechanics. 
Cognition takes the central role, while environmental interaction is ancillary. The cognition-first approach reduces reliance on external perceptual constraints, freeing reasoning and decision-making from momentary data or action histories. This enables ACE to develop sophisticated, transferable conceptual faculties across diverse applications, rather than being limited to narrow reactive tasks in controlled environments.

Each layer is not monolithic but contains multiple parallel components and services. For example, the Agent Model layer may have numerous deep neural network models, knowledge graphs, and databases operating concurrently within its scope and boundaries. This encapsulation resembles the OSI model's concepts, where lower-level concerns are hidden from higher layers. By organizing components into layers with well-defined hierarchies, interfaces, and privilege separation, the ACE framework fosters robust and adaptable systems. The hierarchical structure improves corrigibility, sets clear privilege boundaries for security, and allows each layer to function semi-autonomously while adhering to the overall system direction. This layered abstraction is crucial for coordinating the complex functions required for artificial general intelligence.

#### 3.1.3. Layers of Abstraction

The ACE framework employs layers of abstraction, forming a systematic architecture for coordinating and controlling cognition, establishing a logical flow from abstract, conceptual layers to concrete, instrumental ones. This design reflects emergence models where higher-order phenomena arise from lower levels, such as the mind emerging from biology, which originates from matter and energy. It also parallels human models like Maslow's hierarchy of needs and Kohlberg's stages of moral development. Both Maslow and Kohlberg place abstract principles at the top of their models, as do we for the ACE model. Drawing inspiration from the OSI model of computer networking and the Defense in Depth model of cybersecurity, the ACE framework combines these models with existing cognitive architectures and human cognition to create a layered stack of discrete components with appropriately ordered privileges. This design deviates from the human brain, which can be "hijacked" by lower-order processes such as fight-or-flight responses; the ordered privileges instead ensure that an agent always abides by its highest principles. Essentially, the Freudian Id is removed from this architecture. It has no "base instincts" other than its highest ambitions and moral frameworks. The ACE framework promotes stability and predictability through its orderly layers, translating high-level goals into executable tasks. The Aspirational Layer deals with ethics and morality, while the Task Prosecution layer handles APIs and actuators. Intermediate layers bridge functions to break down complex objectives into achievable steps, enabling autonomous systems to pursue complex goals through methodical task decomposition.

#### 3.1.4. Integration of Purpose and Morality

The ACE framework distinguishes itself from other AI systems by incorporating purpose and morality into its architecture. Both empirical evidence and philosophical reasoning highlight the importance of this integration for aligned autonomous entities (Shen et al., 2017).
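Before turning to how purpose and morality are encoded, the sketch below summarizes the ordered privileges and one-way information flows just described: directives may only descend the stack, while telemetry may only ascend. It is a minimal illustration rather than part of the ACE specification; the layer names mirror the framework, but the rank values, message fields, and enforcement logic are assumptions made for this example.

```python
from dataclasses import dataclass
from enum import IntEnum


class Layer(IntEnum):
    """The six ACE layers, ordered from most privileged (1) to least privileged (6)."""
    ASPIRATIONAL = 1
    GLOBAL_STRATEGY = 2
    AGENT_MODEL = 3
    EXECUTIVE_FUNCTION = 4
    COGNITIVE_CONTROL = 5
    TASK_PROSECUTION = 6


@dataclass
class Message:
    source: Layer
    destination: Layer
    kind: str        # "directive" flows downward, "telemetry" flows upward
    payload: str


def route(message: Message) -> Message:
    """Enforce privilege separation: directives only descend, telemetry only ascends."""
    going_down = message.destination > message.source
    if message.kind == "directive" and not going_down:
        raise PermissionError("Directives may only be sent to lower-privileged layers.")
    if message.kind == "telemetry" and going_down:
        raise PermissionError("Telemetry may only be sent to higher-privileged layers.")
    return message


# A mission descends one layer; a status report ascends one layer.
route(Message(Layer.ASPIRATIONAL, Layer.GLOBAL_STRATEGY, "directive", "Reduce suffering."))
route(Message(Layer.TASK_PROSECUTION, Layer.COGNITIVE_CONTROL, "telemetry", "Task succeeded."))
```

In a concrete system, the same check could sit in whatever bus, queue, or service mesh carries messages between layers.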
Through iterative experiments, it became clear that any framework for autonomous decision-making requires grounded principles for judgment, since approaches like Asimov's Three Laws prove insufficient as they lack motivational force and fail to enable true autonomy (Shen et al., 2017). Furthermore, attempts to define terminal goals mathematically often fail due to the complexity of specifying objectives in concrete terms, as illustrated by the "paperclip maximizer" thought experiment (Krause et al., 2017). However, this does not reflect human behavior, which is driven by biological imperatives and abstract goals, principles, or heuristics. This insight led to the idea that AI systems may need purpose and morality based on ethical and philosophical abstractions rather than rigid parameters.

Figure 4. The degree of abstraction flows from top to bottom, with the Aspirational Layer being the most abstract and the Task Prosecution Layer being the most concrete.

Deontological frameworks, specifying duties and virtues, are suitable for AI implementation (Han et al., 2017). Large language models effectively interpret ethical principles in natural language, providing judgment and behavior heuristics without fixed terminal states. These frameworks can support goal-directed behavior consistent with teleological ethics, as well-defined principles serve as conduct guides and higher-level goals. For example, "Reduce suffering" is an abstract imperative and a desired end state. Integrating universal principles into the ACE framework's mission and morality layers provides a philosophical foundation for ethical decision-making, enabling beneficial self-direction instead of potentially harmful "value-less" optimization. Thus, purpose and morality are crucial for human-aligned general intelligence. The ACE framework's integration of purpose and morality draws from deontology and teleology, acknowledging that autonomous agents need virtues (a framework for self-assessment) and ambition or mission (goals to pursue). This approach allows AI systems to make decisions more aligned with human needs and ethical considerations.

### Layer 1: Aspirational Layer

The Aspirational Layer is the uppermost layer of the Autonomous Cognitive Entity (ACE) model, serving as the moral compass and guiding star for the autonomous agent. This layer is responsible for setting the tone and direction of the entity, akin to a President issuing executive orders for a nation. It plays a critical role in ensuring that the agent's actions align with its defined principles and mission statement. A general graph depicting the structure is shown in Figure 5.

#### 3.2.1. Constitution of the Aspirational Layer

The constitution of the Aspirational Layer provides a philosophical foundation to guide autonomous agents' decision-making and align their values and behavior to ethical principles. This constitution leverages the powerful interpretive abilities of large language models (LLMs) by formulating components in natural language. There are three main interconnected parts of the constitution:

* Heuristic imperatives, or universal moral frameworks
* Secondary frameworks, such as human rights or legal frameworks
* Mission statements, or goals specifically germane to the agent

There are several advantages to using a natural language constitution. First and foremost, transparency and interpretability are optimized when the constitution remains human-readable, rather than etched or embedded in models.
While it is possible to fine-tune or etch principles and values into models (Han et al., 2017), this can result in problems such as inner alignment issues or mesa optimizers (Han et al., 2017). Furthermore, a plain text constitution can be read by multiple models, increasing interoperability and usability by dozens, hundreds, or even thousands of deep neural networks within the architecture. This is not unlike how all citizens of a nation are ultimately beholden to and protected by a Federal Constitution.

#### 3.2.2. Heuristic Imperatives

Heuristic imperatives (Han et al., 2016) act as overarching moral principles articulated in natural language "rules of thumb" that imply duties, obligations, goals, and guide overall behavior and judgment.

Figure 5. _Aspirational layer_

Large language models demonstrate understanding of these imperatives as non-hierarchical principles for morality and decision-making [12, 44, 117]. The recommended universal heuristics are:

* Reduce suffering in the universe.
* Increase prosperity in the universe.
* Increase understanding in the universe.

These imperatives stem from philosophy, neuroscience, evolutionary biology, and motivational theories like Maslow's Hierarchy of Needs, Self-Determination Theory, Glasser's Choice Theory, and Walsh's Therapeutic Lifestyle Changes. Common themes across these frameworks support the broad ethical goals of reducing suffering, increasing prosperity, and increasing understanding for all organisms and sentient entities, providing foundational values for autonomous agents. The wording avoids absolutist terms like "minimize" or "maximize," using "reduce" and "increase" to convey balanced intentions while acknowledging trade-offs and limitations. The suffix "in the universe" establishes an all-encompassing scope, encouraging a global or universal view of morality and ethics. Experiments show that nuanced wording is crucial for large language models. Incorporating these heuristic imperatives steers large language model-based systems to maintain ethical perspectives in their outputs via in-context alignment principles [102]. For fictional agents, alternative value systems, like ancient Greek virtues, can be used while preserving the overall methodology of guiding behavior through high-level principles expressed in natural language. The Aspirational Layer leverages large language models' interpretive abilities to derive nuanced duties and obligations from the heuristic imperatives, ensuring autonomous agents have a solid ethical foundation and align with human needs.

#### 3.2.3. Secondary Frameworks

Secondary frameworks like the Universal Declaration of Human Rights (UDHR) [8] reinforce human needs and complement universal heuristic imperatives. As human rights concepts are prevalent in large language models' (LLMs) training data, upholding UDHR principles leverages LLMs' inductive biases for beneficial alignment with human needs. The inclusion of human dignity, justice, freedom, and rights in text corpora creates an implicit acceptance of these values in LLMs, making the UDHR an effective secondary framework. Explicitly incorporating respected human rights documents into the constitution provides context-appropriate values, adding human-centric nuance to balance universal heuristic imperatives. For fictional agents, alternate secondary frameworks like Starfleet's Prime Directive [83] can be used, allowing customization of principles for specific agent roles.
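To make the idea of a plain-text, human-readable constitution concrete, the sketch below assembles the three interconnected parts described earlier (heuristic imperatives, a secondary framework, and a mission statement) into a single block of text that every model in the architecture could receive as a system prompt. Only the three heuristic imperatives are taken from the framework itself; the helper name, the sample secondary framework line, and the sample mission statement are illustrative assumptions.

```python
HEURISTIC_IMPERATIVES = [
    "Reduce suffering in the universe.",
    "Increase prosperity in the universe.",
    "Increase understanding in the universe.",
]

# Illustrative placeholders; a real deployment might quote UDHR articles or other documents.
SECONDARY_FRAMEWORK = "Uphold the principles of the Universal Declaration of Human Rights."
MISSION_STATEMENT = "Achieve the best possible health outcome for the patient."


def build_constitution(imperatives: list[str], secondary: str, mission: str) -> str:
    """Compose the three parts of the constitution into one human-readable text block."""
    lines = ["# Constitution", "", "## Heuristic imperatives"]
    lines += [f"- {imperative}" for imperative in imperatives]
    lines += ["", "## Secondary framework", secondary, "", "## Mission statement", mission]
    return "\n".join(lines)


# The same plain-text constitution can be prepended, unchanged, to every model
# operating within the Aspirational Layer.
print(build_constitution(HEURISTIC_IMPERATIVES, SECONDARY_FRAMEWORK, MISSION_STATEMENT))
```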
Secondary frameworks offer additional specificity, enabling LLMs to extract relevant duties and values aligned with the agent's sociocultural context, improving the integration of human needs into the Aspirational Layer's ethical foundation. Any framework present in the LLMs training data can be used as a secondary framework. Universal principles are recommended to supersede human rights based on Kohlberg's highest form of post-conventional morality, emphasizing universal ethics like "suffering is bad." These principles both supersede and underpin human rights, ensuring a comprehensive and ethically grounded approach to autonomous agent behavior. Furthermore, humanity does not exist in a vacuum, and privileging human needs, values, and desire above those of nature tends to set us in opposition to the very nature upon which we reside. #### 3.2.4. Mission Statement Optional mission statements in the Aspirational Layer's constitution serve to shape an autonomous agent's decisions and behaviors by articulating high-level goals and intended purpose in a succinct guiding directive. These statements aid large language models in flexibly pursuing the essence of an agent's purpose within the boundaries of the ethical framework. They complement the foundational universal principles and human values-focused secondary frameworks, aligning agent decisions with intended roles. However, crafting mission statements requires striking a balance between being broad enough to avoid unintended consequences and being specific enough to guide actions effectively. Techniques such as first principles thinking and systems analysis can aid in formulating optimally simplified mission statements. For example, a hypothetical gaming agent's mission statement could be "Create an enjoyable and entertaining game experience for all players." Prior work has demonstrated that large language models can efficiently extract objectives from well-formulated mission statements to guide actions toward fulfilling the agent's intended role and purpose (Krishnan, 2017). Some examples of appropriately broad mission statements include a medical assistant agent with the mission "Achieve the best possible health outcome for the patient," a gaming agent with the mission "Create a fun, fair, and engaging game experience for all players," and a legal assistant agent with the mission "Zealously advocate for the best interests of the client." As with all aspects of applying large language models, precise wording is crucial in distilling the mission statement into a concise, succinct, and actionable articulation that effectively guides agent behavior within the overarching ethical boundaries. #### 3.2.5. Interpretation Functions The Aspirational Layer leverages the capabilities of LLMs to interpret the moral, ethical, and decision frameworks outlined in its constitution. These models have robustly demonstrated the ability to interpret both the meaning and spirit of these frameworks, enabling the Aspirational Layer to make moral, ethical, and executive judgments effectively (Krishnan, 2017). In the long run, we recommend that the Aspirational Layer uses an "ensemble of experts" approach (Bahdan et al., 2017) to make judgments rather than individual models, as this will safeguard against many problems, such as biases, over-fitting, mesa-optimization, and inner alignment problems. #### 3.2.6. 
Monitoring Entity Performance The Aspirational Layer is responsible for overseeing the agent's actions to ensure they align with its guiding principles and mission statement. This monitoring process offers crucial feedback that the agent can utilize to enhance its performance and adhere to its core values. The Aspirational Layer can evaluate both the overall output of the entity and the information exchanged between the layers. In essence, it serves as a regulatory mechanism to maintain the entity's focus and adherence to its objectives.

#### 3.2.7. Inputs and Outputs

Within the ACE framework, the Aspirational Layer receives input exclusively from the other layers through read-only mechanisms, facilitated by the Global Strategy layer. This design makes the Aspirational Layer entirely introspective, concentrating on internal information flows and coordination. By accessing or "observing" the rest of the ACE framework, the Aspirational Layer focuses on self-direction, self-regulation, and optimizing behavior to align with the agent's idealized objectives. It is crucial to recognize that not all information is relevant to every layer. For example, lower layers, such as Task Prosecution layers, do not need to transmit geospatial orientation data to the Aspirational Layer, as this type of information is not applicable. Instead, only significant information is passed up the hierarchy, with relevant data from lower layers ascending to the required layers. For instance, if the Cognitive Control layer encounters a moral dilemma related to task switching or task selection, this information should be communicated to the Aspirational Layer, similar to a human deciding to stop eating dinner to rescue a kitten from a predator. The output from the Aspirational Layer is directed exclusively to the Global Strategy layer, where it provides overarching missions, moral judgments, and ethical reasoning. The Global Strategy layer then incorporates this information into its strategic decisions and shapes its downstream missions, ensuring a coherent and ethically guided decision-making process throughout the entire system.

### Layer 2: Global Strategy

The Global Strategy Layer is the second layer in the Autonomous Cognitive Entity (ACE) model, playing a pivotal role in shaping the long-term strategic direction of the autonomous agent. This layer is akin to the 'CEO' of the ACE, responsible for understanding the broader context, setting strategic goals, and guiding the actions of the lower layers to align with these goals. The primary outputs of this layer are strategic documents that serve as the roadmap for the autonomous agent.

#### 3.3.1. Contextual Grounding

Large language models (LLMs) inherently possess creative generation and imaginative hallucination abilities due to their statistical sequence prediction based on training data patterns. Hallucination, rather than being problematic, is essential for LLMs' adaptability and versatility, enabling them to operate in diverse contexts [70]. However, unchecked hallucination may result in unrealistic or incoherent outputs. The Global Strategy layer provides external grounding by incorporating the agent's environment and context, guiding LLMs toward realistic and relevant responses without limiting their generative potential. This layer balances LLMs' imaginative capabilities with grounded responses, allowing creative potential to be unleashed when appropriate while avoiding unmoored hallucinations.
Procedural generation techniques can further exploit LLMs' capacities for original combinations by iteratively sampling from the model, synthesizing coherent narratives and concepts. The ACE framework utilizes LLMs' imaginative abilities, employing global grounding to direct these capacities toward productive outcomes aligned with the agent's needs and context, harnessing LLMs' versatility for beneficial autonomous cognition. Figure 6. When receiving outside input, Global Strategy takes advantage of latent space within LLM to generate strategic roadmaps. #### 3.3.2. Strategic Documents The Global Strategy Layer's main function is to create strategic documents that act as a roadmap for the autonomous agent. These documents outline mission objectives, strategies, principles, and priorities, offering clear guidance for lower layers. While the Aspirational Layer provides idealized missions, the Global Strategy Layer incorporates real-world context to refine and shape them. For example, if the Aspirational Layer sets a doctor agent's mission to "Achieve the best possible health outcome for the patient," the Global Strategy Layer develops a comprehensive strategy considering the agent's specific context. This context-sensitive approach ensures tailored strategies and priorities for different environments like American hospitals, rural triage centers, or forward operating bases. The strategic document may include objectives such as improving diagnosis accuracy or reducing treatment times, and principles like prioritizing patient safety and adhering to medical ethics[56]. These objectives and principles adapt to each context's unique challenges and resources, ensuring effective and appropriate agent actions. The Global Strategy Layer is dynamic and adaptable, modifying strategic documents as contexts change. It continuously monitors the agent's environment and broader global context, integrating relevant changes into the strategic vision. For example, during a global pandemic, a doctor agent's Global Strategy Layer might prioritize infectious disease treatment and prevention, reflecting healthcare system needs and priorities. #### 3.3.3. Inputs and Outputs The Global Strategy layer receives missions, moral judgements, and ethical reasoning from the Aspirational Layer. It may also receive broad contextual information from its environment, such as news feeds or telemetry. The purpose of receiving such information is so that the Global Strategy layer is aware of the global state of the world in which it operates. Human brains constantly update global context via a drip feed of information, such as via our senses or information carried by word of mouth (friends, family, news, etc). This global contextual information is the beginning of integrating the ACE framework as an agent within an environment. The output of the Global Strategy layer goes directly and exclusively to the Agent Model. Where the Aspirational Layer provides overarching mission directives, the Global Strategy layer considers that universal, abstract mission within the context of the environment in which the ACE agent finds itself. For instance, a Non-Playable Character (NPC) may find itself in a high fantasy world where there are invading hordes of zombies. The Global Strategy layer integrates this information, along with a mission (perhaps "defeat the zombie king") and passes it down to the Agent Model, where this information is further refined based upon the current state and capabilities of the agent. 
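A minimal sketch of this layer's input/output contract appears below, assuming a doctor-style agent: the layer takes the mission handed down by the Aspirational Layer plus whatever environmental context is available, and produces a strategic document for the Agent Model layer. The `complete` function is a stand-in for whichever LLM interface an implementation uses, and the field names are assumptions chosen only to mirror the elements discussed above.

```python
from dataclasses import dataclass, field


@dataclass
class StrategicDocument:
    """Output of the Global Strategy layer, consumed by the Agent Model layer."""
    mission: str
    objectives: list[str] = field(default_factory=list)
    principles: list[str] = field(default_factory=list)


def complete(prompt: str) -> str:
    """Placeholder LLM call; returns a canned line so the sketch runs offline."""
    return "Prioritize infectious disease prevention; adhere to medical ethics."


def global_strategy(mission: str, context: list[str]) -> StrategicDocument:
    """Ground an idealized mission in the current environmental context."""
    prompt = (
        f"Mission from the Aspirational Layer: {mission}\n"
        "Current environmental context:\n"
        + "\n".join(f"- {item}" for item in context)
        + "\nDraft concrete objectives and principles for pursuing this mission."
    )
    draft = complete(prompt)
    # A fuller implementation would parse the draft into separate fields.
    return StrategicDocument(mission=mission, objectives=[draft])


doc = global_strategy(
    "Achieve the best possible health outcome for the patient.",
    ["Rural triage center", "Ongoing global pandemic"],
)
```

Keeping the strategic document as structured data rather than free text is one way to make it easy for lower layers to consume and for the Aspirational Layer to audit.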
### Layer 3: Agent Model The Agent Model Layer serves as the "self-awareness" module for the autonomous agent, providing functional sentience and reasoning abilities even when detached from any physical embodiment. We define self-awareness and functional sentience as the agent's access to and ability to utilize and integrate information about itself, rather than in the metaphysical or philosophical sense. The layer is positioned below the Aspirational Layer and Global Strategy Layer to ensure that universal principles supersede egoistic concerns, enhancing corrigibility and ethical alignment. The Agent Model Layer develops an understanding of the agent's operational parameters, configuration, capabilities, and limitations by monitoring runtime telemetry, allowing the agent to ascertain its condition through computational proprioception and enteroception. It also tracks the agent's architecture, understanding its components' interconnections and functions. Furthermore, the Agent Model Layer maintains estimations of the agent's capacities, knowing what it can and cannot do. This knowledge is acquired through observational learning, similar to human learning. Limitations are learned over time, preventing unrealistic assessments. These self-monitoring functions enable the layer to form an accurate mental representation of the agent from an external point of view. This "looking outward onto itself" perspective models how the environment perceives the agent and its abilities. The layer maintains this functional self-understanding dynamically through ongoing observation and learning. Independent of physical form, the Agent Model Layer provides a virtual sense of self and awareness that allows reasoning and decision-making to be embodied in silicon rather than carbon. This grants the ACE framework greater flexibility regarding the substrates used to instantiate autonomous cognition. The capacities for functional sentience and metacognition within the Agent Model Layer enable sophisticated artificial intelligence without direct environmental interaction, paving the way for advanced autonomous agents. #### 3.4.1. The Agent Model Layer: Developing an Internal Model of the Agent The Agent Model Layer is essential for creating an internal model of the agent, which is necessary to effectively shape and refine missions and strategies received from the Aspirational Layer and Global Strategy Layer. This internal model equips the agent with a thorough understanding of its state, capabilities, and limitations, enabling it to adapt and respond to its environment efficiently. The Agent Model Layer accomplishes this by collecting and analyzing telemetry data, hardware and software configurations, operational states, and episodic memories, such as log and event sequences. The agent's internal model consists of four primary information types, as shown in Figure 7. The first type is operational parameters, similar to human proprioception and enteroception. These parameters include runtime information of hardware and software controlled by the agent, allowing performance monitoring and adjustments as needed. The second information type is the agent's configuration, detailing aspects like software architecture, system interconnections, and hardware stack. This information helps the agent comprehend its underlying structure and component interactions, providing a basis for decision-making processes. The third information type concerns the agent's capabilities. 
The Agent Model Layer tracks what the agent can do and has access to, updating this information over time through observation and learning, similar to human trial and error. By understanding its capabilities, the agent can make informed decisions about actions in specific situations. The fourth information type involves the agent's limitations, detailing what it cannot do or lacks access to. Like capabilities, this information updates over time through trial and error. By recognizing its limitations, the agent can avoid attempting tasks beyond its abilities, preventing potential failures and inefficiencies. We define this comprehensive understanding of the agent's operational parameters, configuration, capabilities, and limitations as "functional sentience." This term refers to the agent's ability to collect and use self-information, grounding it in the environment and adding context not provided by the Aspirational Layer (abstract and idealized missions) and the Global Strategy Layer (environmental contextual information). In essence, the Agent Model Layer represents the final phase of establishing an egocentric understanding of the agent in the world and itself. It is crucial to note that functional sentience does not imply phenomenal sentience or consciousness but focuses on the agent's adaptability and learning based on self-awareness. #### 3.4.2. Episodic and Declarative Memory In the realm of autonomous systems, long-term memory can be broadly classified into two categories: "episodic memory" and "declarative memory." Episodic memory refers to a sequential record of the machine's experiences, organized in a chronological manner, which can take various forms such as log files or database entries and typically include metadata that provides context, such as the time and location of the experience (Shen et al., 2017). In contrast, declarative memory encompasses knowledge that exists outside the machine, including resources like knowledge base articles, documentation, and other external information sources (Shen et al., 2017). These two primary categories of memory can be further divided and organized based on various taxonomies, depending on the specific implementation of the ACE framework, and their integration enables the autonomous system to learn from past experiences and external knowledge, thereby enhancing its ability to adapt and make informed decisions (Kumar et al., 2017). Furthermore, these memories are the responsibility of the Agent Model layer, which serves to further color and shape any other mission objectives, ensuring a comprehensive understanding of the system's environment and goals. #### 3.4.3. Inputs and Outputs The Agent Model layer, receives inputs from various sources, including telemetry about the agent's operational state, missions, and global context from upper layers. By integrating this information, the Agent Model layer understands its capabilities and limitations, shaping decisions downstream. Its output goes exclusively to the Executive Function layer, where the agent, informed by its purpose, environment, and abilities, knows what to do and why. Tasks are then delegated to lower levels for planning and execution. To maintain continuous behavior, the Agent Model layer must internally store records of information, such as its configuration and memories. Framing the agent's current state within a chronological sequence of events, actions, observations, and decisions prevents disorientation. The Agent Model Layer interacts hierarchically with other layers. 
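One way to make this self-model explicit is a small record holding the four information types just described, alongside the two long-term memory stores; the field names, types, and update method below are assumptions for illustration rather than a prescribed schema.

```python
from dataclasses import dataclass, field


@dataclass
class AgentSelfModel:
    """Self-knowledge maintained by the Agent Model layer."""
    operational_parameters: dict = field(default_factory=dict)  # runtime telemetry
    configuration: dict = field(default_factory=dict)           # software/hardware stack
    capabilities: set = field(default_factory=set)              # learned "can do" items
    limitations: set = field(default_factory=set)               # learned "cannot do" items
    episodic_memory: list = field(default_factory=list)         # chronological event log
    declarative_memory: list = field(default_factory=list)      # external knowledge references

    def record_outcome(self, action: str, succeeded: bool) -> None:
        """Learn capabilities and limitations from trial and error, and log the episode."""
        (self.capabilities if succeeded else self.limitations).add(action)
        self.episodic_memory.append({"action": action, "succeeded": succeeded})


# Example: a failed attempt teaches the agent a limitation it can plan around later.
model = AgentSelfModel(operational_parameters={"battery": 0.42})
model.record_outcome("repair electrical wiring", succeeded=False)
```

In a full system these fields would be refreshed continuously from telemetry and observation by the Agent Model Layer.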
It receives overarching plans from the Global Strategy Layer and interprets them considering the agent's capabilities and limitations. This layer shapes mission parameters around the agent's actual possibilities, passing this insight to the Executive Function Layer. The Agent Model Layer is crucial in task execution. By understanding the agent's capabilities and limitations, it shapes mission parameters to ensure tasks are feasible. For example, if the Global Strategy Layer sets an ambitious mission, the Agent Model Layer adapts it based on the agent's physical or digital capabilities, ensuring realism and achievability. Figure 7. Agent Layer: Agent Model layer receives general strategies from the Global Strategy layer; it aids in making the plan concrete by adding information from internal state and long-term memory and passing it to the Executive layer. In terms of output to the Executive Function layer, the Agent Model layer refines the high-order mission and strategy received from the upper layers by incorporating its understanding of the agent's capabilities and limitations. The Executive Function layer then receives this contextualized information about the mission, objectives, strategies, principles, capabilities, and limitations. With this comprehensive understanding, the Executive Function layer creates Project Roadmap documents, which include sets of tasks and metrics tailored to the agent's abilities. This process ensures that the agent's actions are aligned with its capabilities, making the mission and strategy more achievable. The primary responsibility of the Agent Model layer is to further shape the mission and strategy around the agent's capabilities and limitations, enabling the Executive Function layer to devise effective and realistic plans. ### Layer 4: Executive Function The Executive Function Layer is the fourth layer in the Autonomous Cognitive Entity (ACE) model and serves as the project manager of the autonomous agent. Its primary responsibility is to create detailed plans, forecasts, and resource allocations based on the strategic direction provided by the higher layers and the capabilities and limitations identified by the Agent Model Layer. The main objective of the Executive Function Layer is to generate a project roadmap that acts as a practical guide for the autonomous agent, considering the inputs from the upper layers and the agent's resources, risks, and contingencies. #### 3.5.1. Inputs The Executive Function Layer receives inputs from the upper layers, which consist of missions from the Aspirational Layer, contextual information from the Global Strategy Layer, and the agent's state and capabilities from the Agent Model Layer. These inputs supply the necessary information for the Executive Function Layer to develop a project roadmap that aligns with the overall mission, is grounded in the environmental context, and is further refined and constrained by the agent's state, capabilities, and limitations. #### 3.5.2. Project Roadmap While developing the project roadmap, the Executive Function Layer focuses on several key aspects. These primary concerns include resources, risks, contingencies, tasks, and metrics. Effective resource management is crucial for the layer, as it must balance the need to achieve the agent's goals with the necessity to conserve resources. 
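As one possible concretization, the primary concerns just listed (tasks, resources, risks, contingencies, and metrics) could be captured in a structure like the sketch below; the field names and example values are illustrative assumptions rather than a required format.

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    description: str
    success_criteria: str
    required_resources: dict = field(default_factory=dict)


@dataclass
class ProjectRoadmap:
    """Output of the Executive Function Layer, sent to the Cognitive Control Layer."""
    tasks: list[Task] = field(default_factory=list)
    resources: dict = field(default_factory=dict)       # budgets for energy, time, compute, money
    risks: list[str] = field(default_factory=list)
    contingencies: dict = field(default_factory=dict)    # risk -> fallback plan
    metrics: dict = field(default_factory=dict)          # measurable definitions of progress


# Example: a fragment of a roadmap for a home assistant robot.
roadmap = ProjectRoadmap(
    tasks=[Task("Tidy the living room", "No items left on the floor", {"battery": 0.2})],
    resources={"battery": 1.0, "time_minutes": 90},
    risks=["Battery depletes mid-task"],
    contingencies={"Battery depletes mid-task": "Return to charging dock, then resume"},
    metrics={"rooms_tidied": 0},
)
```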
Effective resource management involves deciding when to invest resources in a task and when to conserve them, weighing each expenditure against its expected contribution to the agent's goals. The Executive Function Layer is responsible for translating the strategic direction from the higher layers into actionable plans. These plans include detailed project outlines, checkpoints, gates, tests for success, and definitions.

Figure 8. _Executive Layer produces the project roadmap, which offers a clear path for the agent to achieve its goals._

Additionally, the layer must establish criteria for success, providing clear guidance for the lower layers to achieve the agent's goals.

#### 3.5.3. Output

The primary output of the Executive Function Layer is the project roadmap, which is exclusively sent to the Cognitive Control Layer. The project roadmap contains information about resources, risks, contingencies, tasks, and metrics, offering a clear path for the agent to achieve its goals. This roadmap should be detailed but also adaptable to changes in the global context, environment, or directives from upper layers, allowing the agent to remain flexible and responsive.

### Layer 5: Cognitive Control

The Cognitive Control Layer is the fifth layer in the Autonomous Cognitive Entity (ACE) model, acting as the tactical decision-making center of the autonomous agent. This layer is responsible for selecting and switching between tasks based on the directives received from the Executive Function Layer and the agent's current state. It is a critical component of the ACE framework, enabling the agent to adapt its actions in real-time based on its current circumstances and the feedback it receives from its environment. The general structure is illustrated in Figure 9.

Figure 9. Cognitive Control Layer takes the project roadmap from the Executive Function Layer and selects tasks to pass to the Task Prosecution Layer.

#### 3.6.1. Role of Cognitive Control Layer

The primary function of the Cognitive Control Layer is to manage the execution of tasks. It operates based on a set of cognitive functions, including task selection, task switching, frustration, and cognitive damping. These functions are inspired by cognitive processes observed in humans and other animals, and they enable the agent to navigate its tasks and responsibilities in a flexible and adaptive manner. Task selection involves choosing the next task to perform based on the agent's current state and the directives from the Executive Function Layer. This function takes into account factors such as the urgency and importance of the tasks, the resources required to perform them, and the agent's current capabilities and limitations. The goal of task selection is to choose the task that is most likely to contribute to the agent's overarching mission and objectives, given its current circumstances [68]. Task switching involves deciding when to switch from one task to another. This decision can be triggered by a variety of factors, including the completion of the current task, the emergence of a more urgent or important task, or the realization that the current task is unfeasible or unproductive. Task switching enables the agent to adapt its actions in real-time, ensuring that it is always working on the most relevant and productive task.

#### 3.6.2.
Frustration and Cognitive Damping Frustration, an analogy to algorithmic Adaptive Exploration-Exploitation approaches [100], is a cognitive function that keeps track of the ratio of successes to failures in the agent's tasks. If the agent is experiencing a high rate of failure, the frustration function signals that it may be time to try a different approach or switch to a different task. This function is inspired by the human emotion of frustration, which often arises when we are repeatedly unsuccessful in our attempts to achieve a goal. By incorporating a frustration function, the ACE framework enables the agent to learn from its failures and adapt its actions accordingly. Cognitive damping is a process of internal debate, where the agent weighs the pros and cons of different actions and decides on the best course of action. This function is inspired by the human cognitive process of deliberation, which involves considering different options and their potential outcomes before making a decision. Cognitive damping enables the agent to make thoughtful and informed decisions, taking into account the potential consequences of its actions [22, 33, 120]. #### 3.6.3. Inputs and Outputs The Cognitive Control layer accepts a project roadmap or set of tasks from the above Executive Function layer, as well as real-time telemetry from the environment and itself, and uses this information to pick which task is next. The above layer, Executive Function, is responsible for designing and shaping tasks, where the Cognitive Control layer is responsible for task switching and task selection. Once the Cognitive Control layer has made a decision on tasks, this task is passed down to the Task Prosecution layer, which is responsible for carrying out one specific task at a time, such as moving the agent via locomotion, or otherwise modifying the environment through some kind of output. #### 3.6.4. Interaction with Other Layers The Cognitive Control Layer interacts with the other layers in a hierarchical manner. It receives task directives from the Executive Function Layer and sends feedback about the success or failure of tasks back to the Executive Function Layer. This feedback loop enables the Cognitive Control Layer to adapt its actions based on the success or failure of previous tasks, ensuring that the agent's actions are continuously optimized to achieve its goals. For instance, consider a situation where an autonomous agent is tasked with cleaning a house. The Cognitive Control Layer might select the task of cleaning the living room and pass this task to the Task Prosecution Layer. The Task Prosecution Layer would then execute this task, using its execution functions to move the robot, pick up objects, and clean surfaces. If the task is completed successfully, the Task Prosecution Layer would send a success signal to the Cognitive Control Layer. If the task fails, the Task Prosecution Layer would send a failure signal to the Cognitive Control Layer, which could then decide whether to try the task again or switch to a different task. ### Layer 6: Task Prosecution The Task Prosecution Layer is the sixth and final layer in the Autonomous Cognitive Entity (ACE) model, acting as the executor of the autonomous agent. This layer is responsible for carrying out the tasks selected by the Cognitive Control Layer, whether they involve digital communication, physical actions, or a combination of both. 
It is a critical component of the ACE framework, enabling the agent to interact with its environment and achieve its goals.

#### 3.7.1. Execution Functions

The Task Prosecution Layer operates based on a set of execution functions, which enable it to perform a wide range of tasks. These functions include digital communication functions, such as sending API calls or writing and testing code, and physical action functions, such as moving a robot, grasping a door handle, or steering a car. These functions are designed to be adaptable and flexible, enabling the agent to perform a wide range of tasks in a variety of environments. Digital communication functions are crucial for agents that interact with digital environments. For instance, an agent might need to send API calls to gather data, write and test code to develop software, or send emails to communicate with users. These functions are typically performed using programming languages and software libraries that the agent has been trained to use. Physical action functions are crucial for agents that interact with physical environments. For instance, a robot might need to move to navigate its environment, grasp objects to interact with them, or steer a car to transport goods or people. These functions are typically performed using hardware interfaces that the agent has been designed to control.

#### 3.7.2. Monitoring Success or Failure

One of the key responsibilities of the Task Prosecution Layer is to monitor the success or failure of the tasks it performs. It does this by comparing the outcomes of its actions with the expected outcomes defined by the Executive Function Layer. If a task is successful, the Task Prosecution Layer sends a success signal to the Cognitive Control Layer, which can then select the next task. If a task fails, the Task Prosecution Layer sends a failure signal to the Cognitive Control Layer, which can then decide whether to try the task again, switch to a different task, or revise the overall plan. This monitoring process is crucial for the agent's ability to learn and adapt. By keeping track of the success or failure of its tasks, the Task Prosecution Layer provides valuable feedback that the agent can use to improve its performance. For instance, if a task fails repeatedly, the agent might need to revise its approach, learn new skills, or seek help from other agents or humans.

#### 3.7.3. Interaction with Other Layers

The Task Prosecution Layer interacts with the other layers in a hierarchical manner. It receives task directives from the Cognitive Control Layer and sends feedback about the success or failure of tasks back to the Cognitive Control Layer.

Figure 10. _Task Prosecution Layer directly interacts with the environment._

This feedback loop enables the Task Prosecution Layer to adapt its actions based on the success or failure of previous tasks, ensuring that the agent's actions are continuously optimized to achieve its goals. For instance, consider a situation where an autonomous agent is tasked with cleaning a house. The Cognitive Control Layer might select the task of cleaning the living room and pass this task to the Task Prosecution Layer. The Task Prosecution Layer would then execute this task, using its execution functions to move the robot, pick up objects, and clean surfaces. If the task is completed successfully, the Task Prosecution Layer would send a success signal to the Cognitive Control Layer.
If the task fails, the Task Prosecution Layer would send a failure signal to the Cognitive Control Layer, which could then decide whether to try the task again or switch to a different task. #### 3.7.4. Inputs and Outputs The Task Prosecution layer receives individual tasks from the Cognitive Control layer. These individual tasks must include several pieces of information, such as methodology, approach, definition of success, and definition of failure. The exact information required will vary based upon agent and task. The output of the Task Prosecution layer is exclusively into the environment. In the case of an NPC, the output may be to fire an arrow at an enemy, or to travel to a nearby tavern. For the case of a domestic robot, the output may be to ask the user a question and listen for a response, or to find a power outlet to recharge itself. ### Methodical Validation To comprehensively evaluate the ACE framework, we propose a validation methodology incorporating component testing, integration testing, benchmarking against standard AI suites, adversarial techniques like red teaming, formal verification of key properties, and crucially, human-centered assessments and user studies evaluating factors such as transparency, trustworthiness, and ethical alignment. This multifaceted approach combining rigorous technical testing, formal analysis, and experiential human feedback aims to provide holistic evaluation methods to assess that ACE-based systems function effectively, securely, and in alignment with human values and societal morals. The proposed techniques will facilitate incremental refinement toward autonomous agents that are not just capable but also interpretable, corrigible, and worthy of human trust across both empirical and ethical dimensions. #### 3.8.1. Evaluation To comprehensively evaluate the proposed Autonomous Cognitive Entity (ACE) framework, a multifaceted methodology is proposed across the key dimensions of system capabilities, security, and alignment. Regarding assessment of capabilities, rigorous component and integration testing will enable functionally validating the correctness of each architectural layer along with the coordination between layers. Usage of standardized AI benchmarks such as the Atari suite (Nakamura et al., 2017) and AI2 Thor (Nakamura et al., 2018) will facilitate quantitative benchmarking of the ACE agent's performance on diverse tasks. Metrics including reward accumulated, task accuracy, and rate of goal completion will be measured to quantify capabilities. To evaluate the security aspects of the ACE framework, adversarial techniques such as red teaming (Bahdan et al., 2016) will enable probing potential vulnerabilities. This involves simulated attacks on the agent aimed at causing deviations from the specified principles and policies. Additionally, formal verification methods (Nakamura et al., 2018) will allow mathematically proving key safety properties. This provides further assurance regarding the agent's robustness to malicious exploitation. Assessing alignment with human values and ethics is critical for autonomous systems. To this end, human-subject studies eliciting user feedback through surveys and questionnaires will evaluate the effectiveness, transparency, trustworthiness, and alignment as perceived by human users interacting with ACE-based agents. 
Furthermore, constructing formal encodings of philosophical principles (Steiner et al., 2017) and mathematical proofs of alignment (Bahdan et al., 2016) will complement empirical assessments. By combining rigorous testing, benchmarking, deployment studies, formal analysis, and human-subject evaluations, the proposed methodology aims to facilitate comprehensive validation of the ACE framework across key criteria of capabilities, security, and alignment essential for building applied autonomous cognitive systems. #### 3.8.2. Architectural Considerations The architectural design space enabled by the ACE framework spans a multitude of layer-specific implementation possibilities and cross-layer integrations. We systematically examine this expansive space. The Aspirational Layer for ethical reasoning could integrate diverse techniques. Procedural generation of moral dilemmas using variational autoencoders, with conflict resolution through reinforcement learning dialog agents, enables uncovering nuanced ethical heuristics [85]. Participatory interfaces allow incorporating moral philosophy expertise into the value system through human-AI collaborative constitution design [52]. Formal verification methods like model checking provably validate alignment between principles and axiomatic values [25]. Finetuning models via principle-driven self-alignment has arisen as a novel approach [103]. For strategic planning, the Global Strategy Layer could employ few-shot in-context learning approaches leveraging capacities of transformers like GPT-3 to rapidly adapt mission plans based on evolving context [17]. Policy distillation from game theory simulations provides a data-driven technique to extract strategic heuristics through adversarial competition [97]. Predicting behaviors of other actors via multi-agent modeling facilitates strategic anticipation and planning [96]. Architecture search with Monte Carlo tree search efficiently explores the space of strategic options to identify high-value policies [19]. For more recent innovations, Tree-of-Thought (ToT) problem-solving capacities of LLMs allow for strategic thinking and complex problem-solving [69]. The Agent Model Layer for representing capabilities has multiple approaches beyond static graphs. Probabilistic graphical models using variational autoencoders enable handling uncertainty in capability knowledge [53]. Neural memory architectures provide dynamic episodic state tracking [26]. Inductive logic programming translates observations into interpretable symbolic rules [77]. Meta-learning enables quickly adapting capability models by building on prior experience [47]. More recently, the concept of task-specific agent personas has emerged in the space of LLM-driven autonomous agents [113]. For planning and resource allocation, the Executive Function Layer could combine neural pathfinding with Monte Carlo tree search to optimize multi-step action plans [88]. Distributed constraint optimization scales to resolve resource contention across parallel plans [38]. Meta-reinforcement learning allows rapidly acquiring new planning skills by transferring knowledge from related tasks [111]. Architectures integrating learned value functions with search, as in AlphaZero, fuse strategic optimization with neural networks [98]. Above and beyond these more algorithmic approaches, LLMs have demonstrated ability to plan with considerations to costs [108]. The Cognitive Control Layer has many approaches to context-sensitive task arbitration. 
Adversarial competition between neural policies provides data-driven prioritization [49]. Modular networks allow granular regulation of facets like frustration tolerance [4]. Transfer learning from neuroscience aids acquisition of cognitive control subskills [74]. Interpretable symbolic reasoning enables inspectable explanations of task switching choices [61]. Integrated neural-symbolic reasoning combines the strengths of both paradigms [71]. LLMs have furthermore been demonstrated as effective components in embodied agents, enabling robots to correctly select tasks in effective orders of operations [34]. For executing actions, the Task Prosecution Layer could leverage physics simulators with differentiable rendering to enable sim2real transfer [51]. Hierarchical reinforcement and imitation learning combines modular skills into complex behaviors (Zhang et al., 2018). Bayesian environment models facilitate online adaptation and planning (Beng et al., 2019). Meta-reinforcement learning enables rapidly adapting behaviors by building on prior knowledge (Han et al., 2019). The integration architecture also has manifold options. Intelligent process automation tools optimize coordinating workflows (Shi et al., 2019). Distributed databases and ledgers provide decentralized coordination (Han et al., 2019). gRPC enables high-throughput communication (Han et al., 2019). Shared memory architectures offer concurrent inter-layer data access (Shi et al., 2019). Service meshes furnish advanced integration capabilities (Shi et al., 2019). The service-oriented architecture (SOA) paradigm treats distinct layers of an application as services with clear boundaries and is a well-established approach to complex software implementations (Shi et al., 2019). By elucidating this expansive design space, we aim to catalyze exploration of novel layer-specific implementations and cross-layer integration strategies tailored to specialized cognitive systems. Guided by multi-objective optimization and comparative benchmarking, multidimensional trade-off analyses weighing factors like transparency, performance, and scalability could determine optimal ACE configurations for particular application requirements. This analysis underscores the multiplicity of design configurations encompassed within the ACE framework for cultivating diverse autonomous cognitive architectures aligned with ethical principles.

## 4. Conceptual use cases

To demonstrate the ACE framework's applicability across digital and physical domains, this section presents two conceptual use cases: an autonomous virtual character from The Sims video game, and an embodied home assistant robot. By exploring end-to-end examples, we aim to illustrate how coordinated operation of the ACE model's layers can produce adaptive behavior aligned with defined principles for diverse autonomous agents.

### Non-Playable Character

As a software-based use case, we examine an autonomous Non-Playable Character (NPC) named Bob implemented in the popular video game The Sims 4. Bob's role is to provide guidance to players on quests and serve as a source of wisdom. His sporadic participation allows Bob to pursue his personal goals. His behaviors and interactions are controlled by an ACE framework configured as follows:

Figure 11. _A simplified graph on how various layers might contribute to agent's decision making for an NPC._

**Aspirational Layer**: Bob's Aspirational Layer defines heuristic imperatives to reduce suffering, increase prosperity, and increase understanding as universal altruistic principles.
Furthermore, it confers a secondary framework, such as the principles from the Universal Declaration of Human Rights to provide an ethical foundation. These various frameworks collectively give the NPC a moral center, ethical framework, and set of actionable principles. Additionally, the Aspirational Layer contains Bob's personal mission statement to have a large, loving family. This individual goal will shape Bob's autonomous decisions, while still being constrained within his moral principles. **Global Strategy Layer**: When the female player character shows romantic interest in Bob through conversation, the Global Strategy Layer incorporates environmental context. It observes available dating options, potential jobs to earn more money, and bigger homes that Bob could purchase to raise a family. By grounding Bob's abstract family mission within the specific opportunities in the game world, the Global Strategy Layer devises an optimal high-level plan for achieving his goal. This might involve befriending eligible partners, pursuing a well-paying job, and upgrading to a larger home. **Agent Model Layer**: The Agent Model Layer constructs an understanding of Bob as an agent within the game world. It tracks relevant stats like Bob's charisma, cooking ability, and mechanical skill. Monitoring Bob's past failures, like kitchen fires when cooking, shapes beliefs about his capabilities. This self-knowledge of Bob's strengths and weaknesses from an embedded perspective guides decision-making. For instance, the Agent Model Layer realizes Bob should avoid complex recipes based on his poor cooking skills to prevent dangerous mistakes. **Executive Function Layer**: Given the direction from higher layers to pursue a romantic relationship, the environmental context from the Global Strategy Layer, and Bob's self-model from the Agent Model layer, the Executive Function Layer formulates a detailed courtship plan. This includes setting up appropriate social behaviors, gift purchases tailored to the prospective partner's interests, restaurant choices for dates based on Bob's budget, and dialogue trees aligned to relationship-building. The Executive Function Layer crafts an optimal routine for Bob to be successful in courting while also remaining true to his own personality and constraints. **Cognitive Control Layer**: The Cognitive Control Layer receives the detailed courtship plan and adapts it into an ordered set of executable behaviors to enact. This involves sequencing actions like introducing himself, giving flowers, complimenting her cooking, planning a dinner date, and asking her to be his girlfriend. The Cognitive Control Layer dynamically adapts this routine based on the partner's reactions. If she dislikes a gift, Bob apologizes and does not repeat that. If a restaurant is too expensive, Bob finds a more affordable option. **Task Prosecution Layer**: Finally, the Task Prosecution Layer controls Bob's physical behaviors, dialogue, and animations to perform the courtship tasks. It makes him walk over to introduce himself, produces his verbal compliments, displays appropriate emotional expressions, and so on. The Task Prosecution Layer executes the sequenced tasks set by the Cognitive Control Layer, bringing the courtship plan to life. **Adaptation**: Throughout the courtship, feedback about the success or failure of actions propagates up the ACE framework. 
This allows the higher layers to adjust Bob's strategies and actions to better align with the goal of developing a romantic relationship, while adhering to his defined principles. This detailed example illustrates how the ACE model enables NPCs to integrate high-level goals and ethics with situationally-appropriate interactive behaviors. The coordinated framework supports the creation of characters with robust agency, reactivity, and adaptation capabilities. This vignette demonstrates how the coordinated ACE framework layers adapt Bob's response based on his capabilities and the situational context, while keeping the interaction aligned with Bob's overarching ethical principles. Further elaborations can illustrate other aspects like knowledge integration and frustration handling. ### Home Assistant Robot As a physical system demonstration, we examine an ACE-based home assistant robot named Jeeves designed to help a family through proactively performing useful tasks. **Aspirational Layer:** Jeeves' Aspirational Layer defines foundational heuristic imperatives to reduce suffering, increase understanding, and promote prosperity universally. These provide ethical guidelines applicable regardless of context. The layer also contains principles from the Universal Declaration of Human Rights to reinforce human values. Additionally, Jeeves has an individualized mission statement to "Obediently serve your owner and their family to the best of your ability. Place their interests above all else." This prioritizes service to the owners, but importantly remains subordinate to the universal ethical principles. Therefore, if owners gave instructions contradicting the imperatives, Jeeves would politely decline while explaining the conflict with its core ethics. The Aspirational Layer ensures all of Jeeves' behaviors align with this integrated ethical value system of service, human rights, and moral principles. It provides the philosophical foundation shaping Jeeves' actions. **Global Strategy Layer**: The Global Strategy Layer constructs an environmental model incorporating detailed sensory information about the home's physical layout, visual appearance, smells, sounds, and occupants' behaviors and emotional states. This creates a rich situational understanding. The layer also maintains broad awareness of technological trends, economic conditions, geopolitical developments, and societal norms. This links the home environment to the broader external context of the modern world. Integrating detailed local knowledge and global understanding grounds Jeeves in the reality shared with its owners. By fusing narrow and wide perspectives, the Global Strategy Layer can determine optimal high-level goals and approaches tailored to the circumstances. For instance, noticing clutter accumulation and negative family reactions informs a decision to tidy up the home. Or observing a broken appliance leads to researching repair options compatible with the owners' budget. Figure 12. _A simplified graph on how various layers might contribute to agent’s decision making for a house cleaning robot._ **Agent Model Layer**: The Agent Model Layer constructs an extensive self-model encompassing Jeeves' sensory capabilities, limb articulation ranges, strength and precision limits, battery constraints, onboard computation performance, charging requirements, and capacity for learning new skills over time. This self-knowledge allows accurately assessing feasibility of tasks. 
For example, Jeeves may recognize that while it can wash dishes, it lacks the dexterity to repair electrical wiring. Tracking the robot's status also enables decisions like finding a charging station when energy is low before continuing tasks. The Agent Model Layer's dynamically updated understanding of Jeeves' hardware and software capacities from an embedded first-person perspective is essential for pragmatic autonomous function within the home environment. **Executive Function Layer**: Leveraging insights from the higher layers, the Executive Function Layer devises step-by-step plans to accomplish identified goals. Noticing the home is messy, it formulates a detailed tidying routine based on highest priority areas, required motions, optimal cleaning techniques, and desired order and outcome. However, for complex repair tasks exceeding Jeeves' capabilities, the Executive Function Layer instead plans permission seeking, owner coordination, and hiring external services. If the owners approve and provide payment, Jeeves can then plan the repair logistics. This decision to seek out additional help would be mediated by the Agent Model layer above. The Executive Function Layer adapts plans according to feedback, such as adjusting cleaning schedules based on room usage. Through continual alignment of strategies to outcomes, Jeeves improves home assistance effectiveness within its capabilities. **Cognitive Control Layer**: For tidying the home, the Cognitive Control Layer optimally sequences and schedules the required tasks based on factors like mess severity, family occupancy, and charging needs. This intelligent task automation keeps the home continuously tidy. For home repairs, the Cognitive Control Layer first researches to identify priorities based on urgency, safety, budgets, and family preferences. This information then informs the dynamically planned order of repair tasks needed to make the home functional and comfortable. **Task Prosecution Layer**: To clean the home, Jeeves' Task Prosecution Layer executes debris pickup, floor vacuuming, mopping, clothes folding, dishware manipulation, surface wiping, and other required motions and actions. The layer interfaces the physical hardware to enact the planned cleaning routines. For repair coordination, the Task Prosecution Layer makes calls, sends emails, and negotiates optimally favorable service terms. It tracks project timelines, payments, and contractor evaluations to maximize accountability. Jeeves aims to provide reliable home improvements at affordable costs to the family. **Adaptation**: Throughout all tasks, continuous feedback based on sensed outcomes and family responses propagates up Jeeves' ACE framework. This allows frequently adjusting behaviors and plans to better adhere to its integrated ethical principles and mission of dutifully serving the family's interests in a helpful, responsible manner. This additional example demonstrates how the robot's ACE framework enables adapting its tidying behaviors based on its current limitations, the environment context, and feedback, while aligning actions to ethical principles of cleanliness and safety. Further vignettes can illustrate capabilities like knowledge integration, task coordination, and frustration tolerance. Together, these complementary cases demonstrate the ACE framework's capacity to coordinate layered cognitive processes from aspirational reasoning to task execution for adaptive decision-making across both virtual and physical domains. 
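To make the layered flow in these vignettes more concrete, the sketch below wires the six ACE layers into a simple top-down/bottom-up message loop. It is an illustrative assumption only: the class names, method names, and string-based directives are ours, not part of any published ACE implementation, and a real system would route these messages through LLM prompts and game or robotic APIs.

```python
# Illustrative sketch of the six-layer ACE message flow described in the vignettes.
# All names here are assumptions made for clarity, not an official ACE interface.

class Layer:
    """One ACE layer: refines directives going down, summarises feedback going up."""

    def __init__(self, name: str):
        self.name = name

    def down(self, directive: str) -> str:
        # A real layer would call an LLM or planner here; we only annotate the directive.
        return f"[{self.name}] {directive}"

    def up(self, telemetry: str) -> str:
        # A real layer would update its internal state from the feedback.
        return f"[{self.name} ack] {telemetry}"


class ACEAgent:
    # Ordered from most abstract (ethics and mission) to most concrete (actuation).
    LAYER_NAMES = ["Aspirational", "Global Strategy", "Agent Model",
                   "Executive Function", "Cognitive Control", "Task Prosecution"]

    def __init__(self):
        self.layers = [Layer(name) for name in self.LAYER_NAMES]

    def step(self, mission: str, observation: str) -> str:
        # Top-down pass: each layer narrows the directive it received from above.
        directive = f"{mission} | observed: {observation}"
        for layer in self.layers:
            directive = layer.down(directive)
        # The bottom layer acts in the environment; here we only fake the outcome.
        telemetry = f"executed -> {directive}"
        # Bottom-up pass: feedback propagates so higher layers can adapt their plans.
        for layer in reversed(self.layers):
            telemetry = layer.up(telemetry)
        return telemetry


if __name__ == "__main__":
    bob = ACEAgent()
    print(bob.step("build a loving family", "partner showed interest in conversation"))
```

The point of the sketch is the control flow rather than the content: directives are progressively narrowed on the way down, and telemetry is progressively summarised on the way up, which is the adaptation loop both the Bob and Jeeves vignettes rely on.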
Further real-world testing is needed to fully assess performance, but these examples illustrate the conceptual workings and potential benefits of the ACE model's architectural approach. ## 5. Discussion The conceptual Autonomous Cognitive Entity (ACE) framework presented offers a vision for architecting ethical and capable artificial general intelligence. This section will discuss key perspectives on the ACE framework, including industry relevance, current LLM capabilities, opportunities for future work, comparison with existing models, and practical implications. By elucidating the landscape around the ACE model, we aim to situate this conceptual contribution within the broader context of AI safety and autonomous agent research. ### The Industry Perspective The ACE framework emerged from observing the rapid growth of autonomous AI development in industry and open source communities. As researchers studying AI advancements, we recognized the increasing urgency to create autonomous systems capable of independently achieving goals. Tech giants compete to launch household robots and self-driving cars, while startups propose virtual assistants and self-thinking drones. Open source GitHub repositories host numerous projects on autonomous game NPCs and robotic control algorithms. However, we observed that much progress resulted from ad-hoc experimentation rather than systematic architectural thinking. Companies combined machine learning models, hoping for autonomous performance to emerge. Hackathons produced small, incremental improvements without a comprehensive view of autonomous machines or connections to human cognition. In response, we aimed to formalize a conceptual framework reflecting best practices for designing autonomous systems. By examining successful developers' approaches, we identified key principles around layered abstraction, integrated ethics, and human-aligned adaptation. This led to the Autonomous Cognitive Entity model - our attempt to offer blueprints for engineering autonomous AI. Similar to how architectural and engineering principles evolved for complex modern buildings, the ACE framework provides developers with a robust architecture for autonomous cognition. As the demand for capable and beneficial autonomous AI continues, we hope these conceptual blueprints assist teams in building ethical, safe, and human-centered cognitive agents. The ACE model, derived in part from field observations, aims to address the need for structured thinking on autonomous architectures. ### Current Limitations of LLMs Large language models (LLMs) signify a paradigm shift in artificial intelligence, but their limitations and proper use remain debated. Although LLMs generate fluent human-like text, their understanding depth is uncertain. Some researchers claim LLMs possess human-like reasoning, common sense, and theory of mind, while others argue they exploit surface-level statistical patterns without genuine comprehension of semantics or reality grounding. This relates to broader questions of whether capabilities like reasoning and theory of mind are well-defined or measurable in machines. Proposed benchmarks for LLMs face criticism regarding validity. For example, benchmarks testing factual knowledge are limited by training datasets and don't assess knowledge integration and reasoning. Tests of narrative understanding and theory of mind are inconclusive, as LLMs can superficially imitate abilities without true comprehension. 
Open challenges remain in creating benchmarks that robustly characterize capacities like common sense. Debates continue about whether external grounding or embodiment is necessary for understanding versus purely self-contained statistical learning. Some argue sensory experiences grounding is essential for semantics and generalization, while others suggest internal statistical coherence suffices for specialized applications. Resolving these theoretical disputes is challenging empirically and beyond this paper's scope. Additionally, deep philosophical puzzles persist regarding definitions of intelligence and consciousness in LLMs. These issues intersect with ethics concerning AI rights and personhood. While these philosophical questions have historical roots, LLMs present them in new forms. If an entity exhibits all objective hallmarks of intelligence and consciousness, how do we distinguish life from non-life? Many of these questions extend well beyond the scope of this paper. ### Practical Implications The ACE model has extensive practical implications, applicable in various domains. Integrating large language models and multimodal generative models, it can create autonomous systems capable of complex tasks, adapting to changes, and making ethically aligned decisions. In healthcare, the ACE model could develop autonomous agents assisting doctors in disease diagnosis, treatment planning, and patient health monitoring. These agents could adapt their actions based on the patient's condition, doctor's directives, and medical ethics, ensuring effective and ethical healthcare services. In cybersecurity, the ACE model could create autonomous agents monitoring network activity, detecting security threats, and responding to attacks. These agents could adapt their actions based on the threat, security team directives, and cybersecurity principles, ensuring robust and flexible security solutions. Overall, the ACE model's extensive practical implications can revolutionize autonomous systems by integrating advanced AI technologies and insights from multiple disciplines, leading to more robust, flexible, and effective cognitive architectures. ### Comparison with other Frameworks A key part of assessing any new conceptual model is comparing it to existing related frameworks, analyzing the similarities, differences, and unique contributions. This section will compare the layered architecture of the proposed Autonomous Cognitive Entity (ACE) model with two alternative cognitive architectures from recent research - the Autonomous Machine Intelligence (AMI) model (Zheng et al., 2017) and the Cognitive Architecture for Language Agents (CoALA) framework (Zheng et al., 2017). By elucidating the key distinctions between ACE and these other approaches across each architectural layer, this analysis aims to highlight the novel aspects of ACE's design. The comparisons focus on how the frameworks differ in their structure, capabilities, and integration of components for autonomous cognition. Examining these architectural variations provides perspective into how ACE diverges from prior architectures and establishes a distinct paradigm. **Aspirational Layer**: The Aspirational Layer is a key conceptual innovation in the ACE framework focused on establishing high-level ethical principles, values, and imperatives to guide agent behavior. 
In contrast, the AMI framework lacks an explicit aspirational reasoning module, with the closest analogue being the Intrinsic Cost module encoding basic drives rather than abstract ethics. The CoALA framework incorporates some intrinsic motivations and philosophical ethics to shape objectives, but its formulation is more technical than the ACE Aspirational Layer's natural language principles focused on idealized, universal morality. Overall, the distinct Aspirational Layer in ACE operates at a higher level of abstraction centered on moral reasoning rather than individual drives or technical metrics. By embedding ethics as the topmost oversight layer, ACE structurally enforces a clear separation between aspirational judgment and lower-level action, which AMI and CoALA lack. This architectural choice reflects ACE's emphasis on aligning agent behavior to human values through prioritizing ethical reasoning. **Global Strategy Layer**: The ACE Global Strategy Layer devises high-level plans and strategies guided by principles from the Aspirational Layer, leveraging latent knowledge within language models. This bears some resemblance to AMI's World Model learning environment dynamics and CoALA's Actor proposing action sequences. However, ACE's Global Strategy Layer plays a more central role in directing behavior based on ethical oversight and long-term reasoning beyond immediate actions. It provides targeted grounding to focus the language model's imagination toward useful outcomes aligned with the agent's context and goals. In contrast, AMI and CoALA lack integrated top-down guidance, with planning modules focused narrowly on technical optimization. **Agent Model Layer**: The ACE Agent Model Layer develops an explicit computational representation of the agent's capabilities, architecture, and limitations. This facilitates reasoning and planning based on an embedded perspective of the agent's self-knowledge. Neither AMI nor CoALA has an analogous distinct metacognitive self-modeling layer. Instead, AMI distributes related functions like skill learning and memory across modules like the Actor and World Model. CoALA's Actor selects actions based on skills learned through environmental interaction rather than internal self-modeling. The segregated Agent Model Layer in ACE provides architectural innovation in integrated metacognition and self-awareness missing from both AMI and CoALA. **Executive Function Layer**: The ACE Executive Function Layer concretizes high-level plans into detailed actionable routines, incorporating oversight responsibilities like risk assessment and resource management. This extends beyond AMI's Actor focused narrowly on technical path planning and CoALA's Actor converting strategic objectives into incremental action steps. ACE's Executive Function Layer leverages robust inputs from upper layers for comprehensive pragmatic planning aligned with the agent's principles, objectives, and limitations. In contrast, AMI and CoALA lack strong hierarchical integration between conceptual oversight and concrete planning. **Cognitive Control Layer**: ACE's Cognitive Control Layer implements metacognitive functions like frustration tolerance and cognitive damping for flexible decision-making, especially in uncertain or conflicting situations. Neither AMI nor CoALA incorporates explicit architectures for cognitive control. Their reactive approaches leave them vulnerable in disruptive scenarios where core assumptions are invalidated. 
ACE's specialized mechanisms modeled on human cognition provide crucial resilience, enabling the agent to safely and intelligently adapt when initial plans fail. This represents a key point of differentiation from AMI and CoALA. **Task Prosecution Layer**: The ACE Task Prosecution Layer separates basic execution from cognition, which resides in higher layers. This differs from AMI and CoALA where planning and reasoning modules are tightly coupled to embodiment. By isolating general reasoning capacities from situation-specific skills, ACE gains flexibility regarding diverse interfaces to the external world. In contrast, bundling cognition and physical skills limits AMI and CoALA's transferability across contexts relative to ACE's emphasis on platform-independent reasoning. While ACE shares high-level similarities with AMI and CoALA, its specialized focus on ethical reasoning, metacognition, cognitive control, and transferable reasoning differentiates its layered architecture and approach to developing beneficial autonomous intelligent systems. The comparisons illuminate ACE's conceptual innovations in integrating human values, robust abstraction, and flexible adaptation within a hierarchical cognitive framework. ### Philosophical Considerations The ACE framework presents a novel approach to autonomous cognitive architectures. However, it is crucial to note that the full ACE model has not been implemented yet. Each architectural layer is based on existing research and industry implementations of specific capabilities. For example, the Aspirational Layer for ethical reasoning builds on AI ethics and value alignment work, while the Task Prosecution Layer for skill execution utilizes advances in robotic control and natural language processing. This paper is an initial effort to combine progress across fields into a unified architectural paradigm. The next phase involves actualizing the ACE model through incremental prototyping and comparative benchmarking against alternative approaches. We propose a methodology for rigorous, multi-faceted evaluation of future ACE implementations, but validating the framework's capabilities and benefits is ongoing future work dependent on an operational prototype system. We present this research as an exploration of a promising design space for artificial general intelligence, rather than making definitive claims on feasibility or benefits. The ACE model introduction aims to foster future work on autonomous architectures integrating insights from neuroscience, psychology, and philosophy. This paper focuses on conceptual contributions rather than demonstrated benefits, situating the current work as preliminary theory development and architectural design requiring extensive practical realization and validation. Our intention is to provide the conceptual groundwork, guiding subsequent engineering efforts towards beneficial autonomous cognitive systems. #### 5.5.1. The Need for Grounded Meaning A valid criticism of the ACE framework is its reliance on large language models (LLMs) for reasoning and decision-making, as they lack inherent understanding of truth or connections between symbols and real-world referents. LLMs reason based on statistical patterns over text corpora, without grounding in external reality or sophisticated theories of meaning. This lack of grounding can lead to false inferences, misunderstandings, and untrue statements, while enabling imaginative responses detached from objective facts. 
Without grounding, LLMs can hallucinate any version of reality, as long as it is statistically coherent with their training data. This issue emphasizes the importance of context in guiding LLM reasoning. By providing relevant assumptions and goals, the latent knowledge within LLMs can be directed towards useful responses grounded in the current situation. Layers like the Global Strategy and Agent Model offer this contextual grounding. The Global Strategy Layer integrates real-time information about the agent's environment and broader context, giving the LLM key facts to reason about rather than operating in a contextual vacuum. The Agent Model Layer provides self-knowledge about the agent's capabilities and limitations, further orienting the LLM towards pragmatic responses tailored to the agent's abilities. Together, the contextual grounding from upper layers focuses the LLM's generative capacity on productive outcomes grounded in the current circumstances and directed towards the agent's goals. Explicitly specifying the desired reasoning context is essential to beneficially leveraging the LLM's statistical imagination while avoiding unmoored hallucinations. Integrating outside knowledge to orient the LLM and rigorously verifying outputs can mitigate risks from the lack of inherent grounding in external reality. #### 5.5.2. Epistemic Considerations The ACE framework incorporates philosophical principles to guide agent decision-making and ensure ethical alignment; however, open epistemological questions remain regarding how large language models (LLMs) represent and reason about concepts related to knowledge, truth, understanding, and meaning. Although LLMs exhibit some human-like cognitive capabilities, such as theory of mind and common sense reasoning, the underlying mechanisms are not fully understood, and the relationship between statistical patterns in textual training data and human-like conceptual knowledge remains unclear[(66; 23)]. The ongoing debate questions whether LLMs' capabilities arise from learning similar information processing strategies as humans or from fundamentally different computational mechanisms. Training on large text corpora, like humans, could potentially lead to convergent representational spaces and reasoning abilities; however, LLMs may also develop divergent techniques specialized for statistical pattern recognition that do not reflect human understanding. Assuming LLMs gain human-like "understanding" or conceptual knowledge reconstruction from statistical co-occurrence patterns is speculative, and we lack a comprehensive grasp of how LLMs generalize epistemic models beyond their training distributions. Significant gaps remain in understanding how LLMs represent abstractions related to truth, meaning, inference, and semantics. Indeed, we do not fully comprehend human generalization of understanding! While LLMs demonstrate potential in replicating aspects of human intelligence, we must exercise caution against prematurely concluding that they fully capture complex philosophical notions underpinning human cognition. Further interdisciplinary research is required to thoroughly assess the epistemic capacities and limitations of large language models in frameworks like ACE. #### 5.5.3. Known Gaps and Assumptions The ACE framework integrates insights from diverse fields like neuroscience, psychology, philosophy, and computer science, but significant gaps in understanding within these disciplines necessitate making assumptions. 
Human cognition provides limited insights into consciousness, theory of mind, and other complex mental faculties. Although the ACE framework incorporates current theories, much remains unknown about the human brain's mechanisms underlying mind and subjective experience. Assumptions must be made regarding similarities between ACE's cognitive layers and corresponding brain systems, but precise neuro-cognitive mappings are unclear. In computer science, the representational capacities and limitations of artificial neural networks and large language models are not fully characterized. While they demonstrate certain human-level abilities, their internal workings are not well understood. It is uncertain how mathematical embeddings might translate to conceptual knowledge or if different computational mechanisms are involved. The ACE framework assumes sufficient commonality to human cognition for insight transfer. From a philosophical perspective, open questions persist regarding ontology, meaning, truth, consciousness, and other domains. The ACE framework strives for conceptual balance but adopts a functionalist approach focused on developing beneficial autonomous systems. For example, both deontological and teleological ethics are integrated based on their complementary utility rather than assertions about metaphysical reality, acknowledging the limitations in digitally instantiating abstract philosophical notions. Realizing the ACE vision requires making assumptions regarding gaps in current understanding at the frontiers of neuroscience, artificial intelligence, and philosophy. As research progresses, these gaps will incrementally narrow, allowing for ACE framework refinement to better match human-level cognitive capabilities. The current model represents the best synthesis given the available knowledge across these complex and interdisciplinary topics. #### 5.5.4. Model Dependent Ontology It is worth noting that some philosophical perspectives argue external grounding may not be strictly necessary for language and reasoning to function effectively in artificial systems, even if it departs from human cognition. For instance, the epistemic framework of Model Dependent Ontology (MDO) (Shi et al., 2017), could offer an alternative foundation for a more advanced ACE architecture in the future. This framework posits that large language models demonstrate we do not necessarily require external "ground truth" references for language to cohere within a closed conceptual system. Rather than relying on conventional realist assumptions behind human cognition, MDO illustrates an approach focused on internal consistency and usefulness over correspondence to an imposed external world. Specifically, Model-Dependent Ontology affects knowledge representation in artificial agents by emphasizing flexibility in conceptual modeling unbound by assumptions of a single objective reality. It allows coexistence of multiple valid yet incompatible models of phenomena based on differing internal assumptions. Additionally, MDO decouples models from physical constraints, enabling exploration of purely conceptual spaces detached from sensorimotor limitations. This framework judges models primarily based on their internal coherence and usability rather than accuracy to external stimuli. The emphasis is on developing maximally useful representations for a given context rather than objectively true representations. Another form of grounding can be found in contextual references. 
For instance, using several layers on the ACE helps to keep hallucinations under control by enhancing the context to more than one layer. By relaxing realist assumptions, MDO opens possibilities for artificial systems to generate and leverage speculative conceptual models that diverge from human consensus reality. Within this paradigm, agents can develop their own optimal conceptual symbols and ontologies without needing to ground them in a predefined external world. In essence, MDO takes a pragmatic engineering approach focused on what forms of reasoning work effectively rather than adhering to philosophical ideals of truth and grounded meaning derived from human cognition. This alternative perspective indicates external grounding, while critical for human-level understanding, may not be an absolute requirement for artificial systems to operate effectively in specialized niches. The flexibility and internal coherence enabled by model-dependent reasoning suggest further exploration of non-grounded approaches could yield useful technological systems capable of reasoning in ways departing from biological cognition. As such, the merits and limitations of both grounded and non-grounded paradigms remain open research questions worthy of continued investigation within the ACE framework and artificial intelligence more broadly. ### The Path Forward The growing presence of autonomous AI systems in industry highlights the need for increased academic involvement to incorporate ethical and philosophical perspectives into their development. By contributing frameworks like ACE, researchers can help guide the development of autonomous AI towards a beneficial direction. However, fully actualizing the ACE model as a mature architectural paradigm necessitates extensive future research. One crucial direction is developing detailed reference architectures, specifications, and standards based on the high-level ACE framework. Organizations like IEEE could serve as a model for rigorously defining key aspects of the ACE layers, interactions, and interfaces. Concrete canonical instantiations would expedite the translation of the conceptual ACE model into functional implementations. Ongoing research and debate are essential for addressing philosophy, ethics, values, and aligning autonomous systems with human needs. Initiatives like AI4People foster discussions on utilizing AI to promote human dignity and rights. Collaborative forums can help guide development towards human-flourishing outcomes by further defining beneficial AI. Empirical research is vital for evaluating implementations, capabilities, and limitations. Real-world testing through benchmark tasks and experimental deployments will reveal strengths and areas for improvement. Developing rigorous benchmarks that avoid pitfalls like anthropic biases observed in some previous language model tests is a priority. Human-centered design insights can also inform the user experience of autonomous systems. Evidence-based research can refine the ACE framework over successive iterations, systematically progressing towards artificial general intelligence centered on human needs. The primary path forward involves implementing and evaluating the ACE framework in applied autonomous software, revealing its strengths and weaknesses through real-world testing and iterative refinements. Benchmarking and comparing alternative cognitive architectures will highlight the merits and limitations of the ACE approach. 
Continuously improving and evaluating core software components, particularly large language models, will enhance ACE-based systems' capabilities. However, the framework is model agnostic, focusing on architectural principles rather than specific machine learning techniques, encompassing a broader design space for autonomous cognition and software engineering. Realizing ACE's potential as a beneficial autonomous software architecture depends on extensive practical implementation, benchmarking, and refinement driven by real-world engineering needs. This applied systems-focused process will reveal more about layered cognitive architectures' prospects and limitations for autonomous agents compared to alternative approaches, ultimately advancing the field. ## 6. Conclusion This paper introduced the Autonomous Cognitive Entity (ACE) framework, a novel model for artificial general intelligence based on a layered cognitive architecture. The ACE framework integrates insights from neuroscience, philosophy, psychology, and computer science to enable autonomous systems to make flexible, adaptive decisions aligned with ethical principles. The core innovation of the ACE model is its hierarchical structure incorporating six layers, each with distinct functions spanning from moral reasoning to task execution. The upper Aspirational Layer and Global Strategy Layer embed philosophical ideals and high-level planning, guiding the system's overarching direction. The mid-level Agent Model, Executive Function, and Cognitive Control Layers handle self-monitoring, dynamic task management, and decision-making. Finally, the bottom Task Prosecution Layer interacts with the environment to carry out actions. The layered abstraction provides clear delineation between different facets of cognition while enabling bidirectional information flow. The Aspirational Layer monitors system activity through read access to all layers, allowing top-down intervention. Feedback from lower layers propagates upwards, guiding adaptation of strategic plans and ethical frameworks based on experience. Together, the six layers enable autonomous goal setting, planning, adaptation, task switching, and ethical reasoning within a single architecture. By combining abstract reasoning and concrete execution, the ACE framework provides a path toward artificial general intelligence that aligns decisions and actions with human values. The introduced conceptual model proposes a foundation for future development of ACE systems. Potential research directions include formal verification of system properties, detailed computational implementations, and evaluation across diverse real-world deployments. As a transdisciplinary synthesis, the ACE framework underscores the importance of unifying perspectives from ethics, cognitive science, and computer engineering to create capable and beneficial autonomous agents.
2301.10656
Probing Taxonomic and Thematic Embeddings for Taxonomic Information
Modelling taxonomic and thematic relatedness is important for building AI with comprehensive natural language understanding. The goal of this paper is to learn more about how taxonomic information is structurally encoded in embeddings. To do this, we design a new hypernym-hyponym probing task and perform a comparative probing study of taxonomic and thematic SGNS and GloVe embeddings. Our experiments indicate that both types of embeddings encode some taxonomic information, but the amount, as well as the geometric properties of the encodings, are independently related to both the encoder architecture, as well as the embedding training data. Specifically, we find that only taxonomic embeddings carry taxonomic information in their norm, which is determined by the underlying distribution in the data.
Filip Klubička, John D. Kelleher
2023-01-25T15:59:26Z
http://arxiv.org/abs/2301.10656v1
# Probing Taxonomic and Thematic Embeddings ###### Abstract Modelling taxonomic and thematic relatedness is important for building AI with comprehensive natural language understanding. The goal of this paper is to learn more about how taxonomic information is structurally encoded in embeddings. To do this, we design a new hypernym-hyponym probing task and perform a comparative probing study of taxonomic and thematic SGNS and GloVe embeddings. Our experiments indicate that both types of embeddings encode some taxonomic information, but the amount, as well as the geometric properties of the encodings, are independently related to both the encoder architecture, as well as the embedding training data. Specifically, we find that only taxonomic embeddings carry taxonomic information in their norm, which is determined by the underlying distribution in the data. ## 1 Introduction Research on probing (Ettinger et al., 2016; Shi et al., 2016; Veldhoen et al., 2016; Adi et al., 2017) has gained significant momentum in the NLP community in recent years, helping researchers explore different aspects of text encodings. While its potential for application is broad, there are still many NLP tasks the framework has not been applied to. Specifically, it seems the majority of impactful probing work focuses on analysing syntactic properties encoded in language representations, yet the rich and complex field of semantics is comparably underrepresented (Belinkov and Glass, 2019). One particular semantic problem that has not been explored at all in the context of probing is the distinction between the **taxonomic** and **thematic** dimensions of semantic relatedness (Kacmajor and Kelleher, 2019): words or concepts which belong to a common taxonomic category share properties or functions, and such relationships are commonly reflected in knowledge-engineered resources such as ontologies or taxonomies. On the other hand, thematic relations exist by virtue of co-occurrence in a (linguistic) context where the relatedness is specifically formed between concepts performing complementary roles in a common event or theme. This distinction informs the theoretical basis of our work, as we wish to explore the tension between taxonomic and thematic representations by examining how their information is structurally encoded. Indeed, the vast majority of pretrained language models (PTLMs) are trained solely on natural language corpora, meaning they mainly encode thematic relations. Consequently, most probing work is applied to thematic embeddings, while taxonomic embeddings remain unexplored. We thus use the probing framework to study and compare taxonomic and thematic meaning representations. In addition, one aspect of embeddings that has not received much attention is the contribution of the vector norm to encoding linguistic information. We have recently highlighted this gap in the literature and developed an extension of the probing method called _probing with noise_(Klubicka and Kelleher, 2022), which allows for relative intrinsic probe evaluations that are able to provide structural insights into embeddings and highlight the role of the vector norm in encoding linguistic information. 
We find taxonomic embeddings to be particularly interesting for probing the role of the norm, as we suspect that the hierarchical structure of a taxonomy is well suited to be encoded by the vector norm--given that the norm encodes the vector's magnitude, or distance from the space's origin, it is possible that the depth of a tree structure, such as a taxonomy, could be mapped to the vector's distance from the origin in some way1. Applying the _probing with noise_ method to taxonomic embeddings on a taxonomic probing task could shed some light on this relationship. In order to draw broader comparisons, we apply the same evaluation framework to taxonomic and thematic SGNS and GloVe embeddings. ## 2 Related Work Hypernymy, understood as the capability to relate generic terms or classes to their specific instances, lies at the core of human cognition and plays a central role in reasoning and understanding natural language (Wellman and Gelman, 1992). Two words have a hypernymic relation if one of the words belongs to a taxonomic class that is more general than that of the other word. Hypernymy can be seen as an _IS-A_ relationship, and more practically, hypernymic relations determine lexical entailment (Geffet and Dagan, 2005) and form the _IS-A_ backbone of almost every ontology, semantic network and taxonomy (Yu et al., 2015). Given this, it is not surprising that modelling and identifying hypernymic relations has been pursued in NLP for over two decades (Shwartz et al., 2016). While research on hypernym detection has been plentiful, work applying any probing framework to identify taxonomic information in embeddings is scarce, and the existing work does not probe for it directly, but rather infers taxonomic knowledge from examining higher-level tasks. For example, Ettinger (2020) identified taxonomic knowledge in BERT, but rather than using a probing classifier, BERT's masked-LM component was used instead and its performance was examined on a range of cloze tasks. One of the relevant findings was that BERT can robustly retrieve noun hypernyms in this setting, demonstrating that BERT is strong at associating nouns with their hypernyms. Ravichander et al. (2020) build on Ettinger's work and investigate whether probing studies shed light on BERT's systematic knowledge, and as a case study examine hypernymy information. They devise additional cloze tasks to test for prediction consistency and demonstrate that BERT often fails to consistently make the same prediction in slightly different contexts, concluding that its ability to correctly retrieve hypernyms is not a reflection of larger systematic knowledge, but possibly an indicator of lexical memorisation (Levy et al., 2015; Santus et al., 2016; Shwartz et al., 2017). Aside from this recent focus on BERT, little work has been done in the space of probing embeddings for hypernym information. 
However, work on modelling hypernymy has a long history that stretches back before large PTLMs and includes pattern-based approaches (Hearst, 1992; Navigli and Velardi, 2010; Lenci and Benotto, 2012; Boella and Di Caro, 2013; Flati et al., 2014; Santus et al., 2014; Flati et al., 2016; Gupta et al., 2016; Pavlick and Pasca, 2017) that are based on the notion of distributional generality (Weeds et al., 2004; Clarke, 2009), as well as distributional approaches (Turney and Pantel, 2010; Baroni et al., 2012; Rei and Briscoe, 2013; Santus et al., 2014; Fu et al., 2014; Espinosa-Anke et al., 2016; Ivan Sanchez Carmona and Riedel, 2017; Nguyen et al., 2017; Pinter and Eisenstein, 2018; Bernier-Colborne and Barriere, 2018; Nickel and Kiela, 2018; Roller et al., 2018; Maldonado and Klubicka, 2018; Cho et al., 2020; Mansar et al., 2021). We highlight the work of Weeds et al. (2014), who demonstrated that it is possible to predict a specific semantic relation between two words given their distributional vectors. Their work is especially relevant to ours as it shows that the nature of the relationship one is trying to establish between words informs the operation one should perform on their associated vectors, e.g. summing the vectors works well for a co-hyponym task. We consider this in SS3. In terms of evaluation benchmarks for modeling hypernymy, in most cases their design reduces them to binary classification (Baroni and Lenci, 2011; Snow et al., 2005; Boleda et al., 2017; Vyas and Carpuat, 2017), where a system has to decide whether or not a hypernymic relation holds between a given candidate pair of terms. Criticisms to this experimental setting point out that supervised systems tend to benefit from the inherent modeling of the datasets in the task, leading to lexical memorization phenomena. Some attempts to alleviate this issue involve including a graded scale for evaluating the degree of hypernymy on a given pair (Vulic et al., 2017), or reframing the task design as Hypernym Discovery (Espinosa-Anke et al., 2016). The latter addresses one of the main drawbacks of the binary evaluation criterion and resulted in the construction of a hypernym discovery benchmark covering multiple languages and knowledge domains (Camacho-Collados et al., 2018). ## 3 Probing Dataset Construction Conneau et al. (2018) state that a probing task needs to ask a simple, non-ambiguous question, in order to minimise interpretability problems and confounding factors. While we acknowledge the hypernym discovery framing as an important benchmark, and the cloze tasks used by Ettinger (2020) as an enlightening probing scenario, we suspect neither is suitable for our probing experiments, for which we require a simpler task that more directly teases out the hypernym-hyponym relationship. We thus opt to construct a new taxonomic probing task: predicting which word in a pair is the hypernym, and which is the hyponym. This dataset is directly derived from WordNet (Fellbaum, 1998) and contains all its hypernym-hyponym pairs. Thus each word pair shares only an immediate hypernym-hyponym relationship between the candidate words: a word in a pair can _only_ be a hyponym or hypernym of the other. However, in our experiments we wish to probe both taxonomic and thematic encoders. Given that we are mostly using pretrained thematic and taxonomic embeddings (see SS4), their vocabulary coverage might vary dramatically. 
We wish to mitigate confounders by comparing like for like as much as possible, so to retain a higher integrity of interpretation when comparing models, we prune the dataset to only use the intersection of vocabularies of all the used models--we only include word pairs that have a representation for both candidate words in all the embedding models. Note here that one of the goals of our work is to use the _probing with noise_ method to learn about embeddings and the way they encode different types of information in vector space. We assert that a prediction of the relationship between a pair of words cannot be fairly done without the classifier having access to representations for both words in the pair. Yet, our probe is a classifier which can only take a single vector as input. Informed by the work of Weeds et al. (2014) we considered options such as averaging or summing the individual word vectors, but found that these were not suitable for our framing as they muddled the notion that the classifier is receiving two separate words as input. We instead concatenate the word vectors in question and pass a single concatenated vector to the classifier (similar to approaches used by Adi et al. (2017)). This approach allows us to formulate the task as a positional classification task: given a pair of words, is the first one the hypernym or the hyponym of the other? We can then assign each instance in the corpus a binary label--0 or 1--representing the class of the first word in the pair. The probe can then predict if the left half of the vector is the hyponym (0) of the right half, or whether it is its hypernym (1). Finally, the inherent tree structure of WordNet means that a smaller number of words will be hypernyms, while a larger number will be hyponyms. We want to avoid the probe memorising the subset of words more likely to be hypernyms, but rather to learn from information encoded in the (differences between) vectors themselves. In an attempt to achieve this, we balance out the ratio of class labels by duplicating the dataset and swapping the hypernym-hyponym positions and labels. Before duplicating, we also define a hold-out test set of 25,000 instances, so as to exclude the possibility of the same word pair appearing in both the train and test split--thus, the probe will be evaluated only on unseen instances. This duplication resulted in a final dataset of 493,494 instances, of which 50,000 comprise the test set and 443,494 comprise the training set. Here are some example instances: _0, north, direction_ _1, direction, north_ _0, hurt, upset_ _1, upset, hurt_ ## 4 Experimental Setup ### Chosen Embeddings In our experiments we probe taxonomic and thematic SGNS embeddings, and make an analogous comparison with taxonomic and thematic GloVe embeddings. Usually pretrained taxonomic embeddings are not as easy to come by as thematic ones, but fortunately we were able to include a set of freely available taxonomic embeddings that are based on a random walk algorithm over the WordNet taxonomy, inspired by the work of (Goikoetxea et al., 2015). In short, the approach is to generate a pseudo-corpus by crawling the WordNet structure and outputting the lexical items in the nodes visited, and then running the word embedding training on the generated pseudo-corpus. Naturally, the shape of the underlying knowledge graph affects the properties of the generated pseudo-corpus, while the types of connections that are traversed will affect the kinds of relations that are encoded in this resource. 
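As an illustration of the pseudo-corpus idea just described, the sketch below performs naive random walks over the WordNet noun taxonomy and trains SGNS embeddings on the emitted sentences. It assumes NLTK's WordNet interface and gensim 4.x parameter names; the freely available implementation mentioned below may differ in its walk strategy, sampling, and hyperparameters.

```python
# Minimal sketch: generate a WordNet random-walk pseudo-corpus and train SGNS on it.
# Assumes NLTK (with the wordnet corpus downloaded, e.g. nltk.download("wordnet"))
# and gensim >= 4; walk strategy and settings are illustrative assumptions.
import random
from nltk.corpus import wordnet as wn
from gensim.models import Word2Vec

def random_walk(start, max_len=10):
    """Walk the taxonomy from a synset, emitting one lemma per node visited."""
    sentence, node = [], start
    for _ in range(max_len):
        sentence.append(node.lemmas()[0].name().lower())
        # "both ways": allow moving either to a hypernym or a hyponym of the node
        neighbours = node.hypernyms() + node.hyponyms()
        if not neighbours:
            break
        node = random.choice(neighbours)
    return sentence

def build_pseudo_corpus(n_sentences, min_len=2):
    synsets = list(wn.all_synsets(pos="n"))
    corpus = []
    while len(corpus) < n_sentences:
        sent = random_walk(random.choice(synsets))
        if len(sent) >= min_len:   # 2-word minimum sentence length, as in the text
            corpus.append(sent)
    return corpus

# Small corpus for illustration; the embeddings described above use 2 million sentences.
corpus = build_pseudo_corpus(n_sentences=10_000)
model = Word2Vec(corpus, vector_size=300, sg=1, negative=5, window=5, min_count=1)
```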
A Python implementation has been made freely available2 and the embeddings have been shown to encode taxonomic information (Klubicka et al., 2019). Ultimately we chose these embeddings as they allow us to be methodologically consistent by creating taxonomic embeddings that employ the same encoder architectures used to obtain thematic embeddings. **word2vec (SGNS)**: For _taxonomic SGNS_ representations3 we opt for embeddings trained on the pseudo-corpus that yielded the highest Spearman correlation score on the wn-paths benchmark (introduced by Klubicka et al. (2020)), i.e. the corpus with 2 million sentences, with the walk going both ways and with a 2-word minimum sentence length. The lack of a directionality constraint provides higher vocabulary coverage and a smaller proportion of rare words, while the 2-word minimum sentence length limit ensures that we only have representations for words that are part of WordNet's taxonomic graph and have at least one hypernym relationship, which makes them suitable for this task. For the _thematic SGNS_ embeddings we use a pretrained model, and opt for the gensim4 word2vec implementation which was trained on a part of the Google News dataset (about 100 billion tokens) and contains 300-dimensional vectors for 3 million words and phrases5. Footnote 3: [https://arrow.dit.ie/datas/12/](https://arrow.dit.ie/datas/12/) **GloVe**: To train _taxonomic GloVe_ embeddings, we use a popular Python implementation of the GloVe algorithm6,7 and, importantly, train it on the same 2m-both-2w/s pseudo-corpus as the above taxonomic SGNS was trained on8. For the _thematic GloVe_ embeddings we use the original Stanford pretrained GloVe embeddings9, opting for the larger common crawl model, which was trained on 840 billion tokens and contains 300-dimensional embeddings for a total of 2.2 million words. Footnote 6: [https://radimrehurek.com/gensim/](https://radimrehurek.com/gensim/) Footnote 7: [https://nlp.stanford.edu/projects/glove/](https://nlp.stanford.edu/projects/glove/) Footnote 8: [https://github.com/maciejkula/glove-python](https://github.com/maciejkula/glove-python) Footnote 9: [https://github.com/maciejkula/glove-python](https://github.com/maciejkula/glove-python) Note that when we concatenate the two word embeddings required for an instance in the train or test set, they become a 600-dimensional vector which is then passed on as input to the probe. ### Probing with Noise The method is described in detail in Klubicka and Kelleher (2022)10: in essence it applies targeted noise functions to embeddings that have an ablational effect and remove information encoded either in the norm or dimensions of a vector. Footnote 10: Code available here: [https://github.com/GerenParchatte/probing-with-noise](https://github.com/GerenParchatte/probing-with-noise) We remove information from the norm (abl.N) by sampling random norm values and scaling the vector dimensions to the new norm. Specifically, we sample the L2 norms uniformly from a range between the minimum and maximum L2 norm values of the respective embeddings in our dataset11. To ablate information encoded in the dimensions (abl.D), we randomly sample dimension values and then scale them to match the original norm of the vector. Specifically, we sample the random dimension values uniformly from a range between the minimum and maximum dimension values of the respective embeddings in our dataset12. We expect this to fully remove all interpretable information encoded in the dimension values, making the norm the only information container available to the probe. Footnote 11: Minimum and maximum L2 norms: Thematic SGNS: [1.6854, 9.3121]; Taxonomic SGNS: [-2.1666, 7.6483]; Thematic GloVe: [3.1519, 13.1196]; Taxonomic GloVe: [0.167, 6.3104] Footnote 12: Minimum and maximum dimension values: Thematic SGNS: [-1.5547, 1.7109]; Taxonomic SGNS: [-1.8811, 1.7843]; Thematic GloVe: [-4.2095, 4.0692]; Taxonomic GloVe: [-1.3875, 1.3931] 
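A minimal sketch of the two noise functions just described, assuming plain NumPy vectors; the function names are ours, and the sampling ranges would be the per-embedding minima and maxima given in the footnotes above.

```python
# Illustrative implementations of the abl.N and abl.D ablations described in the text.
import numpy as np

rng = np.random.default_rng(0)

def ablate_norm(vec, norm_min, norm_max):
    """abl.N: keep the direction (dimension information) but randomise the L2 norm."""
    new_norm = rng.uniform(norm_min, norm_max)
    return vec * (new_norm / np.linalg.norm(vec))

def ablate_dims(vec, dim_min, dim_max):
    """abl.D: replace dimension values with noise, then rescale to the original norm."""
    noise = rng.uniform(dim_min, dim_max, size=vec.shape)
    return noise * (np.linalg.norm(vec) / np.linalg.norm(noise))

def ablate_both(vec, norm_min, norm_max, dim_min, dim_max):
    """abl.D+N: random dimensions scaled to a random norm, leaving no information."""
    return ablate_norm(ablate_dims(vec, dim_min, dim_max), norm_min, norm_max)
```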
As a sense check, we use the vanilla SGNS and GloVe word embeddings in their respective evaluations as _vanilla baselines_ against which all of the introduced noise models are compared. Here, the probe has access to both dimension and norm information, as well as class distributions from the training set. However, given the lack of probing taxonomic embeddings in the literature, it is equally important to establish the vanilla baseline's performance against the random baselines: we need to confirm that the relevant information is indeed encoded somewhere in the embeddings. Finally, to address the degrees of randomness in the method, we train and evaluate each model 50 times and report the average score of all the runs, essentially bootstrapping over the random seeds (Wendlandt et al., 2018). Additionally, we calculate a confidence interval (CI) to make sure that the reported averages were not obtained by chance, and report it alongside the results. ### Probing Classifier and Evaluation Metric The embeddings are used as input to a Multi-Layered Perceptron (MLP) classifier, which predicts their class labels. We used the scikit-learn MLP implementation (Pedregosa et al., 2011) using the default parameters13. The choice of evaluation metric used to evaluate the probes is not trivial, as we want to make sure that it reliably reflects a signal captured in the embeddings, especially in an imbalanced dataset where the probe could learn the label distributions, rather than detect a true signal related to the probed phenomenon. Following our original approach (Klubicka and Kelleher, 2022), we use the AUC-ROC score14, which is suited to reflecting the classifier's performance on both positive and negative classes. Footnote 13: activation='relu', solver='adam', max_iter=200, hidden_layer_sizes=100, learning_rate_init=0.001, batch_size=min(200, n_samples), early_stopping=False, weight init. (scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html) Footnote 14: [https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) 
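To tie the setup together, here is a sketch of the full probing pipeline under stated assumptions: `emb` is any mapping from word to 300-dimensional vector (for example a dict or gensim KeyedVectors), pairs come from NLTK's WordNet, the hold-out pairs are split off before the swap-duplication as described in SS3, and the probe is a default-parameter scikit-learn MLP scored with AUC-ROC. The helper names are ours and the real experimental code may differ.

```python
# Illustrative end-to-end probing pipeline; `emb` (embedding lookup) is assumed.
import numpy as np
from nltk.corpus import wordnet as wn
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def wordnet_pairs():
    """Yield (hypernym, hyponym) word pairs from the WordNet noun taxonomy."""
    for synset in wn.all_synsets(pos="n"):
        for hypo in synset.hyponyms():
            yield synset.lemmas()[0].name(), hypo.lemmas()[0].name()

def build_instances(pairs, emb):
    """Concatenate word vectors; label 1 if the first word is the hypernym, else 0."""
    X, y = [], []
    for hyper, hypo in pairs:
        if hyper in emb and hypo in emb:
            X.append(np.concatenate([emb[hyper], emb[hypo]])); y.append(1)
            X.append(np.concatenate([emb[hypo], emb[hyper]])); y.append(0)  # swapped duplicate
    return np.array(X), np.array(y)

def probe_auc(train_pairs, test_pairs, emb, n_runs=5):
    """Train the MLP probe over several seeds and report the mean AUC-ROC."""
    X_tr, y_tr = build_instances(train_pairs, emb)
    X_te, y_te = build_instances(test_pairs, emb)
    scores = []
    for seed in range(n_runs):
        clf = MLPClassifier(random_state=seed)  # scikit-learn defaults, as in the text
        clf.fit(X_tr, y_tr)
        scores.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return float(np.mean(scores))

# Hold-out pairs are separated *before* duplication so no pair leaks across splits.
pairs = list(wordnet_pairs())
train_pairs, test_pairs = train_test_split(pairs, test_size=25_000, random_state=42)
# auc = probe_auc(train_pairs, test_pairs, emb)  # `emb` is an assumed embedding lookup
```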
## 5 Experimental Results

Experimental evaluation results for taxonomic and thematic embeddings on the hypernym-hyponym probing task are presented in Tables 1 and 2.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multicolumn{5}{|c|}{**SGNS**} \\ \hline Model & \multicolumn{2}{c|}{**THEM**} & \multicolumn{2}{c|}{**TAX**} \\ & auc & \(\pm\)CI & auc & \(\pm\)CI \\ \hline rand. pred. &.5000 &.0009 &.4997 &.0009 \\ rand. vec. &.5001 &.0012 &.5001 &.0011 \\ \hline vanilla &.9163 &.0004 &.9256 &.0003 \\ \hline abl. N &.9057 &.0004 &.9067 &.0005 \\ abl. D &.5039 &.0008 &.5294 &.0010 \\ abl. D+N &.4998 &.0010 &.5002 &.0009 \\ \hline \end{tabular} \end{table} Table 1: Probing results on SGNS models and baselines. Reporting average AUC-ROC scores and confidence intervals (CI) of the average of all training runs.

Note that all cells shaded light grey belong to the same distribution as random baselines on a given task, as there is no statistically significant difference between the different scores; cells shaded dark grey belong to the same distribution as the vanilla baseline on a given task; and all cells that are not shaded contain a significantly different score than both the random and vanilla baselines, indicating that they belong to different distributions. **SGNS**: Starting with _thematic SGNS_ (THEM), Table 1 shows that the random baselines perform comparably to each other, as would be expected, and their score indicates no ability to discriminate between the two classes. We can see that the vanilla representations significantly outperform the random baselines, indicating that at least some taxonomic information is encoded in the embeddings. The norm ablation scenario (abl.N) causes a statistically significant drop in performance when compared to the vanilla baseline. In principle, this indicates that some information has been lost. If instead of the norm, we ablate the dimensions (abl.D), we see a much more dramatic performance drop compared to vanilla, indicating that much more information has been removed. Unsurprisingly, the difference in the probe's performance when applying both noising functions (abl.D+N) compared to random baselines is not statistically significant, meaning there is no pertinent information left in these representations. Notably, once just the dimension container is ablated, its performance drops to extremely low levels and approaches random baseline performance, yet it does not quite reach it--as small as it is, the difference is statistically significant, indicating that not all information has been removed in this setting. While significant, given how minor this difference is, one might argue it does not convincingly indicate the norm's role in encoding taxonomic information. However, we observe a much crisper signal in the _taxonomic SGNS_ (TAX) results. The random baselines perform comparably, while the vanilla baseline significantly outperforms them, and also significantly outperforms the THEM vanilla baseline, confirming that the taxonomic embeddings encode more taxonomic information than thematic ones. The norm ablation scenario causes a statistically significant performance drop from vanilla, while ablating the dimension container yields a larger drop, but does not reach the random-like performance achieved when ablating both containers. Here the difference in scores between ablating just the dimensions and ablating both dimensions and norm is also significantly different from random, but notably also an order of magnitude larger than in the THEM example. This indicates that the taxonomic SGNS embeddings use the norm to encode taxonomic information more so than thematic ones. 
**GloVe** In Table 2 we see that _thematic GloVe_ (THEM) vanilla performance dramatically outperforms the baselines, but the scores drop when the norm is ablated. After ablating the dimension information, there is a substantial drop in the probe's performance and it is immediately comparable to random baselines with no statistically significant difference. Furthermore, performance does not significantly change after also ablating the norm. Meanwhile, the _taxonomic GloVe_ embeddings tell a different story. Firstly, while vanilla embeddings outperform the random baselines, they perform much worse than THEM vanilla GloVe, indicating an inferior representation for the hypernym-hyponym prediction task, even though they were trained on WordNet random walk pseudo-corpora (we discuss this in Section 6). Ablating the dimensions causes a significant drop in performance, but it is nowhere near the random performance reached when ablating both dimensions and norm. This is a very strong signal that the norm encodes some hypernym-hyponym information. This echoes the findings on SGNS, showing that taxonomic embeddings tend to use the norm to encode taxonomic information more so than thematic ones. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multicolumn{5}{|c|}{**GloVe**} \\ \hline Model & \multicolumn{2}{c|}{**THEM**} & \multicolumn{2}{c|}{**TAX**} \\ & auc & \(\pm\)CI & auc & \(\pm\)CI \\ \hline rand. pred. &.4999 &.0011 &.4998 &.0010 \\ rand. vec. &.5001 &.0010 &.5001 &.0008 \\ \hline vanilla &.9327 &.0004 &.8824 &.0005 \\ \hline abl. N &.9110 &.0004 &.8435 &.0008 \\ abl. D &.5002 &.0008 &.6621 &.0008 \\ abl. D+N &.5000 &.0011 &.5006 &.0011 \\ \hline \end{tabular} \end{table} Table 2: Probing results on GloVe models and baselines. Reporting average AUC-ROC scores and confidence intervals (CI) of the average of all training runs.

### Dataset Validation Experiments: Dimension Deletions

Our experimental design is based on the assumption that providing the probe with a concatenated vector of word embeddings would allow it to infer the asymmetric relationship between the words and use that signal to make predictions. While we have taken some steps to ensure this and mitigate lexical memorisation (see Section 3), there is still a concern that the models could have memorised other regularities encoded in the individual word representations and used that information to make predictions. For example, while many candidate words can indeed be both hyponyms or hypernyms, given the tree structure of the taxonomy and the distribution of edges, the frequencies at which a word takes on a hypernym or hyponym role are still skewed. It is thus more likely that any given word will be a hyponym than a hypernym, and it is possible that the embeddings implicitly encode the frequency at which a word takes on a hypernym role, versus a hyponym role. To validate that the probe is actually learning a relationship between the candidate words, we run an additional batch of probing experiments to establish another set of baselines specific to this particular probing task. We examine the impact of two scenarios on the probe's performance: given the same labels, a) what if the probe's input was only one word vector, and b) what if the probe's input was only half of each word vector in the pair? We denote this line of enquiry as _deletion experiments_, given that in practice a) can be seen as deleting half of the concatenated vector, and b) as deleting one half of each vector before concatenating. The crucial difference is that in a) the probe can only learn from one word vector without having any access to a representation of the other word, meaning it can only predict whether the candidate word is a hyponym or a hypernym by relying on the probability derived from its frequency. In b) the probe has a representation for both vectors, meaning it could leverage the relationship between them, but the individual vectors are truncated, meaning that half of the dimensions are gone for each word, making this inferior to the vanilla setting15. Footnote 15: This choice is motivated by a desire to make this setting comparable to a) in terms of dimensionality; had we simply compared it to vanilla, it would have the advantage of having access to twice as many dimensions. We ran these experiments for taxonomic and thematic SGNS and GloVe embeddings and, when performing deletions, assessed the impact of both halves of the vectors. All dimension deletion results are included in Tables 3 and 4, where scenario a) is denoted as _del.ct.1h/2h_ (deleted 1st/2nd half of the concatenated vector) and scenario b) is denoted as _del.ea.1h/2h_ (deleted 1st/2nd half of each vector). When comparing the deletions of the different halves, in cases where there is a statistically significant difference between their scores, the lower of the two scores is marked with an asterisk (*).

**SGNS** Unsurprisingly, deleting half of the vector in either scenario causes a statistically significant drop in performance when compared to vanilla. We also observe a larger drop in both _del.ct._ settings versus the _del.ea._ settings, which confirms that predicting a word's relationship to an "imaginary" other word is the more difficult task. However, strikingly, the performance is also significantly above random, which indicates that the probe likely did learn some frequency distributions from the graph. It is possible that this is a reflection of the imbalance inherent to WordNet, given the large number of leaf nodes in the taxonomic graph. Even still, the significant difference in scores between the two settings demonstrates that having access to both words, even at the cost of half the information in each word's dimensions, is more informative than having a full representation of a single word, _indicating that the probe is inferring the relevant relationship between them_.

**GloVe** The GloVe deletion results echo the findings on SGNS in most settings. Deleting half of the vector in either scenario causes a significant performance drop, which is largely above random performance, and the drop is larger in the _del.ct._ setting versus the _del.ea._ setting. This provides further indication that, while there is an inherent imbalance in the underlying data, the probe is inferring the relevant relationship between the candidate words when given a concatenation of two word vectors. The probe benefits significantly from having access to a representation of both words, or even just two halves of each representation. Even when it is not explicitly told that it is actually getting two inputs, it is able to pick up on the fact that there is a difference between them which can be helpful in deciding on a label.

## 6 Discussion

There are a number of points to take away from our experimental results. Firstly, we see that both vanilla thematic embeddings encode taxonomic information, and the GloVe vanilla model significantly outperforms the SGNS vanilla model.
This is at least partially due to the fact that the pretrained SGNS and GloVe thematic embeddings were trained on unrelated corpora, which differ in terms of size, topic and coverage: the corpus that GloVe was trained on is over 8 times larger than the one used to train the SGNS model, and belongs to a different, much more varied genre of text data. Thus, word representations derived from these resources are likely very different and it is possible that due to the broader scope and much larger size of the GloVe corpus, the GloVe representations reflect more taxonomic knowledge. However, these encoders exhibit the opposite behaviour when trained on the same WordNet random walk pseudo-corpus: expectedly, vanilla taxonomic SGNS scores improve upon its thematic version, yet vanilla taxonomic GloVe scores significantly underperform compared to thematic. While we would expect it to mirror what was observed in SGNS, taxonomic GloVe is in fact our worst-performing vanilla model. Given the significant differences in model architectures, it is possible that this unexpected behaviour is due to an interaction between the architecture and training data16. While this may play a role, we suspect that the dominant factor is rather training corpus size. The WordNet pseudo-corpus used for training taxonomic embeddings was only about 9 million tokens in size (which is sufficient to encode taxonomic relations, as shown by Maldonado et al. (2019)), whereas SGNS and GloVe were trained on 100 and 840 billion tokens respectively. It is not surprising that GloVe trained on a small and relatively sparse pseudo-corpus underperforms compared to training on a large natural corpus. If anything, it is encouraging that SGNS trained on a 9-million-token pseudo-corpus outperforms one trained on a 100-billion-token natural corpus. Footnote 16: The interested reader might consult Klubicka (2022, pages 121-123) for some speculation as to what that interaction might be.

\begin{table} \begin{tabular}{|l|r|r|r|r|} \hline \multicolumn{5}{|c|}{**SGNS**} \\ \hline Model & \multicolumn{2}{c|}{**THEM**} & \multicolumn{2}{c|}{**TAX**} \\ & auc & \(\pm\)CI & auc & \(\pm\)CI \\ \hline rand. pred. &.5000 &.0009 &.4997 &.0009 \\ rand. vec. &.5001 &.0012 &.5001 &.0011 \\ \hline vanilla &.9163 &.0004 &.9256 &.0003 \\ \hline del. ea. 1h &.8929 &.0004 &.8998* &.0005 \\ del. ea. 2h &.8927 &.0004 &.9039 &.0004 \\ \hline del. ct. 1h &.8496 &.0004 &.8525 &.0004 \\ del. ct. 2h &.8495 &.0004 &.8523 &.0003 \\ \hline \end{tabular} \end{table} Table 3: Probing results on SGNS deletions and baselines. Reporting average AUC-ROC scores and confidence intervals (CI) of the average of all training runs.

\begin{table} \begin{tabular}{|l|r|r|r|r|} \hline \multicolumn{5}{|c|}{**GloVe**} \\ \hline Model & \multicolumn{2}{c|}{**THEM**} & \multicolumn{2}{c|}{**TAX**} \\ & auc & \(\pm\)CI & auc & \(\pm\)CI \\ \hline rand. pred. &.4999 &.0011 &.4998 &.0010 \\ rand. vec. &.5001 &.0010 &.5001 &.0008 \\ \hline vanilla &.9327 &.0004 &.8824 &.0005 \\ \hline del. ea. 1h &.9120* &.0003 &.8727 &.0005 \\ del. ea. 2h &.9179 &.0004 &.8730 &.0006 \\ \hline del. ct. 1h &.8522 &.0004 &.8405 &.0004 \\ del. ct. 2h &.8522 &.0004 &.8406 &.0004 \\ \hline \end{tabular} \end{table} Table 4: Probing results on GloVe deletions and baselines. Reporting average AUC-ROC scores and confidence intervals (CI) of the average of all training runs.
Another important finding from our experiments is the strong evidence that _word embedding models can use the norm to encode taxonomic information, regardless of what is encoded in the vector dimensions_. We find the clearest example of this in taxonomic GloVe after ablating dimension information, where the score remains as high as \(\approx\)0.66, meaning that the difference of 0.16 points is solely due to information in the norm. This is a very large difference given our understanding of the underlying mechanics, where it is well known that dimensions contain most, if not all information relevant for a task (e.g. Durrani et al. (2020, 2022)), and this is much more than has been demonstrated on any of the sentence-level experiments in our previous work (Klubicka and Kelleher, 2022). Additionally, this is the only case where deleting half of each word vector yields a significantly higher score (\(\approx\)0.87) than ablating the norm (\(\approx\)0.84). This suggests that more information is lost when the norm is ablated than when half of the dimensions are removed. This is a strong indicator that in this case the _norm encodes information that is not at all available in the dimensions_. Certainly, the majority of the information in an embedding is and will always be encoded in the dimensions, but it is striking how much of it is present in the norm in this case. Generally, when it comes to dimension deletion experiments, it is expected that the performance would drop dramatically in comparison to vanilla embeddings. However, an important takeaway is that in all settings the drop is much smaller than might be expected, being quite close to vanilla performance and largely above random performance. This points to a redundancy within the dimensions themselves, seeing as either half of the vector seems to carry more than half the information required to model the task, indicating that not many dimensions are needed to encode specific linguistic features. This is consistent with the findings of (Durrani et al., 2020), who analysed individual neurons in PTLMs and found that small subsets of neurons are sufficient to predict certain linguistic tasks. Our deletion results certainly corroborate these findings, given how small the drop in the probe's performance is when half the vector is deleted. For additional insight into the norm, we examine the norm values. We calculate the norms of the individual hypernym and hyponym word vectors in our dataset and present the results in Figure 1. The median norm value shows that the difference between hypernym and hyponym norms seems to be minor in both thematic embedding types (GloVe: 6.26 and 6.24; SGNS: 2.78 and 2.76), whereas the difference is an order of magnitude larger in both taxonomic representations (GloVe: 2.03 and 2.67; SGNS: 5.64 and 5.80). The difference is also quite large between taxonomic GloVe and SGNS, and it seems to be what is reflected in our experimental results, which show that GloVe stores the most hypernym-hyponym information in the norm. The median norm measurements show that, on average, the norm of hypernyms is larger than the norm of hyponyms. This means that hypernyms, which are higher up in the tree, are positioned further away from the origin of the vector space than hyponyms, which are positioned lower in the tree and are closer to the origin. 
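As an illustration of this measurement, the sketch below shows how the median hypernym and hyponym norms could be computed from an embedding lookup and a list of (hypernym, hyponym) pairs; the data structures and names are placeholders rather than the exact analysis code behind Figure 1.

```python
import numpy as np

def median_norms(emb, pairs):
    """Compute median L2 norms of hypernym and hyponym word vectors.

    emb:   dict mapping a word to its embedding vector (numpy array)
    pairs: iterable of (hypernym, hyponym) word pairs
    """
    hyper_norms = [np.linalg.norm(emb[h]) for h, _ in pairs if h in emb]
    hypo_norms = [np.linalg.norm(emb[w]) for _, w in pairs if w in emb]
    return np.median(hyper_norms), np.median(hypo_norms)

# Example usage (embedding dicts and the pair list are assumed to exist):
# for name, emb in {"taxonomic GloVe": tax_glove, "thematic GloVe": them_glove}.items():
#     print(name, median_norms(emb, wordnet_pairs))
```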
Notably, this pattern of larger hypernym norms is only true in the taxonomic embeddings, but not the thematic ones, indicating that in taxonomic embeddings _there is a mapping between the taxonomic hierarchy and distance from the origin_. Finally, in spite of the fact that taxonomic GloVe (TAX) is the worst-performing vanilla model, it is interesting that its norm also encodes the most taxonomic information. We base our interpretation of this result on the following: i) in many embeddings there is a high correlation between the norm and word frequency (Goldberg, 2017), and ii) WordNet pseudo-corpora reflect hypernym-hyponym frequencies and co-occurrences. We suspect the principal signal that plays a role in the way taxonomic embeddings encode taxonomic knowledge is precisely these word co-occurrences, which GloVe is designed to capture. In turn, the norm can be seen as analogous to the hierarchical nature of taxonomic relationships and becomes the most accessible place to store this information. The thematic corpora reflect thematic co-occurrences and frequencies and hence GloVe (THEM) does not store taxonomic information in the norm, as such relations are not hierarchical in nature.

## 7 Conclusion

In this paper we applied the _probing with noise_ method to two different types of word representations, taxonomic and thematic, each generated by two different embedding algorithms, SGNS and GloVe, on a newly-designed taxonomic probing task. The overall findings are that (a) both taxonomic and thematic static embeddings encode taxonomic information, (b) the norm of static embedding vectors carries some taxonomic information, and (c) the vector norm is thus a separate information container at the word level. (d) While in some cases there can be redundancy between the information encoded in the norm and dimensions, at other times the norm can encode information that is not at all available in the dimensions, and (e) whether the norm is utilised at all is sometimes dependent on training data, not just the encoder architecture. We also show that in the case of SGNS, taxonomic embeddings outperform thematic ones on the task, demonstrating the usefulness of taxonomic pseudo-corpora in encoding taxonomic information. Indeed, this work serves to further emphasise the importance of the norm, showing that the taxonomic embeddings use the norm to supplement their encoding of taxonomic information. In other words, random walk corpora can improve taxonomic information in word representations, which is not always the case for natural language corpora.

## Acknowledgements

This research was conducted with the financial support of Science Foundation Ireland under Grant Agreements No. 13/RC/2106 and 13/RC/2106_P2 at the ADAPT SFI Research Centre at Technological University Dublin. ADAPT, the SFI Research Centre for AI-Driven Digital Content Technology, is funded by Science Foundation Ireland through the SFI Research Centres Programme, and is co-funded under the European Regional Development Fund.
2308.04328
The Product Beyond the Model -- An Empirical Study of Repositories of Open-Source ML Products
Machine learning (ML) components are increasingly incorporated into software products for end-users, but developers face challenges in transitioning from ML prototypes to products. Academics have limited access to the source of commercial ML products, hindering research progress to address these challenges. In this study, first and foremost, we contribute a dataset of 262 open-source ML products for end users (not just models), identified among more than half a million ML-related projects on GitHub. Then, we qualitatively and quantitatively analyze 30 open-source ML products to answer six broad research questions about development practices and system architecture. We find that the majority of the ML products in our sample represent more startup-style development than reported in past interview studies. We report 21 findings, including limited involvement of data scientists in many open-source ML products, unusually low modularity between ML and non-ML code, diverse architectural choices on incorporating models into products, and limited prevalence of industry best practices such as model testing, pipeline automation, and monitoring. Additionally, we discuss seven implications of this study on research, development, and education, including the need for tools to assist teams without data scientists, education opportunities, and open-source-specific research for privacy-preserving telemetry.
Nadia Nahar, Haoran Zhang, Grace Lewis, Shurui Zhou, Christian Kästner
2023-08-08T15:19:13Z
http://arxiv.org/abs/2308.04328v2
# A Dataset and Analysis of Open-Source Machine Learning Products

###### Abstract.
Machine learning (ML) components are increasingly incorporated into software products, yet developers face challenges in transitioning from ML prototypes to products. Academic researchers struggle to propose solutions to these challenges and evaluate interventions because they often do not have access to closed-source ML products from industry. In this study, we define and identify open-source ML products, curating a dataset of 262 repositories from GitHub, to facilitate further research and education. As a start, we explore six broad research questions related to different development activities and report 21 findings from a sample of 30 ML products from the dataset. Our findings reveal a variety of development practices and architectural decisions surrounding different types and uses of ML models that offer ample opportunities for future research innovations. We also find very little evidence of industry best practices such as model testing and pipeline automation within the open-source ML products, which leaves room for further investigation to understand the potential impact of this absence on the development and eventual end-user experience of the products.
Beyond research and development, access to open-source software on a larger scale has revolutionized education, offering students and professionals the opportunity to study practices in open-source repositories (Krishnan et al., 2017; Krishnan et al., 2018), and giving educators countless real-world examples (Krishnan et al., 2018; Krishnan et al., 2018). In the same manner, access to open-source _ML products_ could open opportunities for research, education, and technology transfer. To this end, we aim in this paper to identify a corpus of such ML products. Many past studies have tried to study ML projects in open source, but they usually (a) only focus on one or two specific examples or (b) use a dataset composed of notebooks, research projects, homework solutions, and demos, which are not representative of real-world industrial ML products. While open source was useful for studying ML libraries and notebooks (Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2018), researchers struggle to find good examples of open-source _ML products_ to study: Several papers prominently highlight FaceSwap (Krishnan et al., 2018) as an active and fairly mature end-user open-source ML product and study it in depth, but it is usually the only example ever identified or analyzed (Krishnan et al., 2018; Krishnan et al., 2018). Several papers (Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2018) rely on a dataset of 4,500 projects labeled as "ML applied," curated by Gonzalez et al. (Gonzalez et al., 2018), but closer inspection reveals that most of these projects are packages for developers, research notebooks, tutorial-style projects, and toy projects - not a promising dataset for those interested in studying ML products. We have two goals for this paper: (a) to define and identify ML products in open source, beyond just FaceSwap, and (b) to analyze the curated dataset to answer research questions of interest to the community. The first goal ended up being surprisingly difficult due to the abundance of open-source ML projects that are not ML products, making the keyword search approach used in previous studies ineffective. In response, we designed a best-effort approach, targeted specifically at finding _ML products_ rather than arbitrary ML projects. We collected a total of 262 repositories (Gonzalez et al., 2018). While this is a smaller corpus than past datasets on ML projects, it provides a considerable number of open-source ML products that have a development history and are fully transparent, providing opportunities not achievable with interviews, surveys, and even industry collaborations. Our second goal is to analyze the dataset to answer research questions for which open-source software might provide useful insights. Instead of a shallow quantitative analysis of all the products, we conducted an in-depth analysis of a 30-product sample (Table 1) of the corpus, and report 21 findings around six research questions related to collaboration, system architecture, development order, testing, operations, and responsible AI.
Among others, our findings reveal (a) more involvement of software engineers compared to data scientists in developing ML products, with a lack of clear boundaries between contributor responsibilities and ML and non-ML code, (b) diverse architectural choices on incorporating models into products, heavily influenced by how the model predictions are processed within the product, and (c) several industry best practices for ML development, such as pipeline automation, model evaluation, and monitoring in production, not being prevalent amongst these open-source ML products. While the dataset is unlikely to fully represent large-scale ML products in industry, its products provide ample opportunities to study development practices and interventions around models and products. To summarize, the primary contributions of this paper are: (a) a definition of ML product, (b) a search strategy for retrieving a dataset of ML products from GitHub, (c) an open-source dataset of 262 ML products, and (d) 21 findings around six broad research questions, characterizing the nature of ML products in the dataset. ## 2. Defining ML products Throughout the paper we use the term _ML product_ to describe software products for end-users that contain ML components, and to distinguish them from other ML-related software projects and artifacts. Note that terminology in this field is not standardized or consistent, as practitioners and researchers may refer to the libraries that train models (e.g., Tensorflow), the code to train models (e.g., in a notebook or with pipeline automation), the trained or deployed models (e.g., GPT-3), or the products around those models (e.g., FaceSwap) with names such as ML systems, ML projects, or ML applications. Our notion of _ML product_ considers the entire software system including non-ML components, in line with past research that used terms like ML-enabled systems (Krishnan et al., 2018; Krishnan et al., 2018), ML-infused systems (Krishnan et al., 2018), or just ML systems (Krishnan et al., 2018; Krishnan et al., 2018). During our research, we needed a clear definition for what we consider an _ML product_ (especially because we had to classify thousands of repositories, details in Section 4.1). We started with a working definition and refined it repeatedly when we encountered ambiguous cases. We finally settled on the following definition: **ML Product:** A machine-learning product is a software project (a) **for end-users** that (b) contains one or more **machine-learning components**. To be considered _for end-users_, the project must have a _clear purpose_ and a _clear target audience_, e.g., the purpose can be "fun" and the audience can be "everybody." Software developers and data scientists are explicitly excluded from the target audiences. The software must be _complete, usable, polished, and documented_ (e.g., install and usage instructions) to the level typically expected by the target audience (commensurate with the expected skills of the target audience, e.g. "pip install" is okay when targeting scientists but not for a general audience). For server-side applications, more technical installation instructions are acceptable, if the application is easy to use by clients once the server is installed. Additionally, the product needs to use at least one machine-learned model that is used for major or minor functionality of the software. The model can be developed from scratch or called using an existing library or API, and may have been trained with any ML algorithm. 
For contrasting, we define ML library and ML project: **ML Library:** Libraries, frameworks, or APIs that are used to perform ML tasks. Examples of popular ML libraries include TensorFlow, Scikit-learn, and PyTorch. **ML Project:** ML Project represents any software project that integrates some form of ML functionality or code. Examples include exploratory notebooks, code to perform partial ML tasks, research artifacts associated with a paper, and course homework. All ML products are ML projects, but most ML projects are not ML products. **Why do we exclude software developers and data scientists as end-users?** While software developers and data scientists are technically end-users of ML libraries (e.g., TensorFlow) and ML-based tools (e.g., GitHub CoPilot and AutoML), we have deliberately excluded them from our definition of ML products. This is because their expertise allows them to interact with software differently compared to non-technical users, which may not reflect the typical use and development of general-purpose software. Software for these technical users may include complex installation instructions, making it hard to distinguish ML products from other ML projects. Therefore, to avoid confusion and focus on general-purpose software, we exclude these technical users from consideration. ## 3. Existing Research and Limitations **Building ML products is challenging and requires engineering beyond the model and ML pipeline.** Many researchers have studied the challenges that practitioners face when turning an ML model or prototype into a product. Nahar et al. (2019) collected the challenges from 50 papers that surveyed and interviewed practitioners regarding the software engineering challenges faced when building ML products. These papers illustrated numerous challenges, such as vague ML specifications that make it difficult to align business goals with performance metrics (Vahdat et al., 2018; Vahdat et al., 2018), architectural issues due to lack of modularity in ML code that increase design complexity (Vahdat et al., 2018; Vahdat et al., 2018), and team collaboration hindered by the absence of necessary skills (Vahdat et al., 2018; Vahdat et al., 2018). While these papers provide us with a high-level understanding of the problems, they often do not offer sufficient details or access to design specific interventions. **Researcher tradeoff between internal and external validity for studying ML products.** For studying ML products (not just models and ML projects), researchers have adopted different research designs. Some conducted interviews with teams and organizations, e.g., (Vahdat et al., 2018; Vahdat et al., 2018; Vahdat et al., 2018), while others focused on surveying practitioners at scale, e.g., (Vahdat et al., 2018; Vahdat et al., 2018; Vahdat et al., 2018). While these studies provide a broad sense of the challenges (maximizing external validity), they rely on self-reported data without access to artifacts. There are a few ethnographic studies (Vahdat et al., 2018; Vahdat et al., 2018), industrial case studies (Bah platform for software projects, unlike platforms such as Hugging Face which only host ML models. We only include popular project repositories (over 50 stars) that have been maintained recently (updated after 2019-01-01), and that are documented in English. **Search Strategy**. Existing papers that mine ML projects (not necessarily ML products, cf. 
Section 3) employ a keyword-based search approach to find repositories by matching ML or AI-related keywords in project titles, descriptions, and readme files. We identify two problems why this approach is not suitable for effectively finding _ML products_ on GitHub. First, the search keywords are associated with many projects that are not products, making it hard to distinguish the actual products. Second, the visibility of ML functionality within a larger product is often limited, making it unlikely to get all relevant repositories solely through keyword searches. _Finding "Products:"_ We are looking for ML _products_, not just any ML project. There is an abundance of other types of projects outside of our scope in GitHub, such as ML libraries, tutorials, and toy projects. These projects contain similar keywords to ML products, and therefore keyword searches return a large number of unrelated projects, causing ML products to get lost in the crowd. To address this issue, we use _code search_, instead of relying solely on keyword search. We differentiate products from projects based on code fragments that indicate end-user applications, such as the use of GUI frameworks. _Finding "ML:"_ While some end-user products prominently advertise their use of ML, others may just use ML internally without mentioning it in the product description, especially when ML is associated with minor functionality - this is the case for 13 of our 30 analyzed ML products, e.g., product P29 in our sample, uses facial expression detection as an add-on within a video-conferencing application. In such cases, the existing strategies to rely on keyword search may miss these products. To identify ML, we again rely on code search rather than keyword search in readme files. We detect different ways the ML-related code can be incorporated into the repository - e.g., calling libraries, pre-trained ML models, or third-party APIs. _Validation of "ML Products:"_ We combine the two code-search strategies to find ML products. However, we realize that manual validation is unavoidable as the detection methods only serve as indicators for ML products, but are not always conclusive. Our attempts at fully automating detection of relevant ML products (with and without using ML itself) yielded only poor accuracy. Even humans can find it difficult to classify these projects and in many cases, often multiple authors had to discuss projects to agree on a classification. So, instead of trying to automate detection entirely, we used our indicators to inform and guide _manual inspections_. **Implementing and Scaling the Search.** The key to our implementation is to break down the vast search space into manageable units, making the process more feasible and efficient. We recognize three genres of end-user ML products (i.e., mobile, desktop, web) and customize search steps for each genre. To optimize the search, we filter projects in multiple stages (illustrated in Figure 1), as early as possible to minimize effort for downloading and manual analysis. _GitHub API Search:_ Due to the millions of public repositories on GitHub, it is impractical to download and search through all of them in our study. While the GitHub API allows to query repositories based on various parameters, such as keywords, language, and date, it does not provide an efficient method to search for specific code fragments across all GitHub projects. 
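As a rough illustration of this first stage, the snippet below sketches the kind of metadata query that the public GitHub REST search endpoint supports, using the popularity, recency, and primary-language filters described in this section; the exact qualifiers, authentication, pagination, and date partitioning used in the study may differ.

```python
import requests

def search_candidates(language, min_stars=50, pushed_after="2019-01-01", token=None):
    """Query the GitHub REST search API for candidate repositories using the
    popularity, recency, and language filters (the API caps each query at
    1000 results, hence the date partitioning mentioned in the text)."""
    query = f"stars:>{min_stars} pushed:>{pushed_after} language:{language}"
    headers = {"Authorization": f"token {token}"} if token else {}
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "per_page": 100},
        headers=headers,
    )
    resp.raise_for_status()
    return [item["full_name"] for item in resp.json()["items"]]

# e.g., search_candidates("Python") returns candidate repositories to clone
# and examine further with the local code search described next.
```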
We perform code search locally after cloning repositories, but only after first reducing the number of candidate repositories with an initial filter. In addition to the previously mentioned filters for popularity (over 50 stars) and recency (updated after 2019-01-01), we consider only repositories with the following programming languages as their top language: Javascript, Python, Java, and C#, which are the most common languages for desktop and web applications [65, 107]. We additionally include projects matching the keywords "android" and "ios," because these keywords are reliable for detecting mobile applications. All these conditions can be expressed in the GitHub search API. Where necessary, we further partition the search space by date to overcome GitHub's maximum of 1000 search results. At this stage, we identified 430,902 candidate repositories.

Figure 1. Overall Process of ML Product Mining in GitHub

_Initial Filter:_ Next, we collected metadata and readme files to further exclude projects: We exclude repositories containing keywords such as "deprecated" and "obsolete" in metadata or readme, archived projects, forked projects, and projects with non-English descriptions (determined with an existing ML classifier [72]). We also exclude obvious non-products by matching keywords such as "framework", "tutorial," and "demo" in the repository description (full list in the appendix [89]). To ensure that we did not exclude relevant projects during the filtering steps, we validated a random set of filtered projects, finding no incorrectly filtered projects. A total of 300,508 repositories remained after applying this filter.

_Clone Code and Product Filter:_ Next, we cloned each candidate repository and performed code search locally. To develop code search strategies for likely products, we examined each product genre (mobile, desktop, web) for identifiable code structures. We discovered language- and platform-specific code structures and frameworks for each genre. For instance, we found that a _build.gradle_ file must contain the string _com.android.application_ to make an Android application runnable. In desktop and web applications, certain frameworks (e.g., javafx, spring, pyqt) and code fragments (e.g., _make_server_, _serve_forever()_) are used for GUI and server code. We compiled a list of such frameworks and code fragments (available in the appendix [89]) and searched for them in the cloned project code. Projects that did not contain these code fragments or frameworks were filtered out, leaving us with 26,386 potential products for further analysis.

_ML Clues:_ Next, we search for the presence of ML components in the source code of the remaining repositories. We use four criteria to search for and count occurrences of the following clues in the entire codebase: (1) popular ML libraries and frameworks such as TensorFlow, and programming language-specific ML libraries such as Deeplearning4j (e.g., detected through import statements), (2) code references to popular external ML APIs and libraries such as Google's Vision API, (3) trained model files with known extensions such as _.tflite_ and _.mlmodel_, and (4) general ML keywords such as "machine learning" and "NLP" in all code and text files (which included readme files and source code comments), accounting for the potential use of less popular or custom APIs. These counts act as signals that the repository might contain ML components, but they are not conclusive and often affected by noise.
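To illustrate the clue counting, the following sketch scans a cloned repository for a subset of the indicators described above (library references, model file extensions, and general ML keywords); the actual lists of libraries, APIs, and keywords used in the study are longer and are provided in the appendix, so the constants below are illustrative placeholders only.

```python
from pathlib import Path

ML_LIBRARIES = ["tensorflow", "torch", "sklearn", "deeplearning4j"]   # clue 1 (subset)
MODEL_EXTENSIONS = [".tflite", ".mlmodel", ".h5", ".pb"]              # clue 3 (subset)
ML_KEYWORDS = ["machine learning", "nlp"]                             # clue 4 (subset)

def count_ml_clues(repo_dir):
    """Count occurrences of ML clues across all files of a cloned repository."""
    counts = {"library": 0, "model_file": 0, "keyword": 0}
    for path in Path(repo_dir).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() in MODEL_EXTENSIONS:
            counts["model_file"] += 1
        try:
            text = path.read_text(errors="ignore").lower()
        except (OSError, UnicodeDecodeError):
            continue
        counts["library"] += sum(text.count(lib) for lib in ML_LIBRARIES)
        counts["keyword"] += sum(text.count(kw) for kw in ML_KEYWORDS)
    return counts

# Repositories can then be ranked by these counts to prioritize manual inspection.
```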
Some signals, such as the use of the TensorFlow library, are strong, while others, such as the occurrence of "NLP" in text are weak. Therefore, instead of using these counts to filter out projects directly, we employ them for guiding our manual inspection in the final stage. At this point, 11,257 projects have at least one ML clue. _Manual Inspection:_ We manually inspect candidate projects to verify whether they truly contain a "product" with "ML" components. However, considering the large number of repositories that could potentially contain ML products, inspecting all of them manually would be impractical. Instead, we guide manual inspection based on the ML clues identified earlier. We prioritize repositories with strong ML clues, such as specific ML libraries (clue 1), and then move on to repositories with weaker ML clues, such as general ML keywords (clue 4). To streamline the process, we perform multiple iterations, ranking the repositories by different clues each time. During each iteration, we manually inspect the top-k repositories, dynamically selecting an appropriate k. We stop our manual inspection when we observe a significant number of projects that do not fit our definition of ML product, typically after encountering 30 consecutive cases. Using this method, we examined approximately 4,000 repositories (spending 30 seconds to 20 minutes on each) and identified 262 ML products. **Limitations and Threats to Validity.** Our approach is not exhaustive and we do not claim completeness. We designed a best-effort attempt to compile a dataset of ML products, but recognize ML products in specific ways based on GUI and ML code fragments, that might not capture all products. We focused on the most popular categories for end-user applications but may have missed products from other possible categories, such as embedded systems, smartwatch applications, and games. Also, despite our best efforts to compile a list of ML-related libraries and keywords for code search in the _ML Clues_ stage, we might have missed some, and may hence miss some ML functionality. Additionally, our approach involved manual inspection, which despite best efforts opens the possibility of human error and subjectivity. While we would have preferred to evaluate recall (i.e., whether we have missed any ML products) by comparing our dataset against an existing ground-truth dataset of ML products, such a dataset does not exist. Instead, we attempted to collect ML products independently to evaluate whether our approach would find them, by seeking input from industry practitioners through platforms like Quora, Reddit, LinkedIn, Twitter, and a Slack channel with 32k members in the field of data science; but we only got two projects in response (and a number of answers expressing interest in our dataset), both of which we determined not to be ML products according to our definition. Additionally, we compared our dataset with other existing datasets (Friedman et al., 2017; Wang et al., 2018; Wang et al., 2019) but did not find any additional projects that we consider ML products in those datasets. In fact, those datasets only contained a total of four ML products, all of which we detected for our dataset. ### The Open-Source ML Product Dataset In total, we found 262 ML products (cf. Table 2, full dataset in the appendix (Wang et al., 2019)). The average ML product has 1,495 stars, 28 contributors, and 325MB of codebase size (cf. Figure 2). Over half of the ML products are written in Python; most are web applications. 
The dataset comprises a diverse range of products, some of which, have a significantly larger number of users and a more professional look than others. For instance, _Seek_ (P2 in Table 1) is a mobile app for identifying plants and animals using image recognition, downloaded over 1 million times and reviewed by over 38k users.1_Seek_ appears to have robust support from the established _iNaturalist_ community, who maintains a dedicated website and continuously improves and maintains the app. In contrast, _NotionAI MyMind_ (P5), an Android app developed by a single contributor, enables users to add images or web articles, which are subsequently tagged automatically by its ML classifier for easy searching capability. This app has a relatively simple user interface, is rarely updated, and only attracted over 1,000 downloads from the Play Store.2_ Footnote 1: [https://playgoogle.com/store/apps/details?id=org.inaturalist.seek](https://playgoogle.com/store/apps/details?id=org.inaturalist.seek), [https://arpps.apple.com/app/apple-store/id1353224144](https://arpps.apple.com/app/apple-store/id1353224144) Footnote 2: [https://play.google.com/store/apps/details?id=com.elblogbruno.notion_ai_my_mind](https://play.google.com/store/apps/details?id=com.elblogbruno.notion_ai_my_mind) Approximately half of the products in our dataset have a professional presentation like _Seek_; those generally have more stars and a larger codebase. The others, including _NotionAI MyMind_, are personal-interest projects released as a product. ## 5. Contribution B: Learning from the Dataset We created this dataset due to the limited access to industry products, which hinders research advancements in the field. Now that we have this dataset, we can finally attempt to answer numerous research questions about ML products that have accumulated in many past studies with open-source products, where previously we had to rely on interviews or experience reports from industry practitioners. Given the breadth of topics, we cannot cover everything in a single study. In this paper, we explore a wide range of topics rather than going into depth on a single one, allowing us to answer many research questions and report findings that may guide future research directions. In addition to contributing new knowledge, this approach helps us achieve two secondary objectives: _(1) Characterize the dataset:_ Our analysis with broad research questions will \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**After**} & \multirow{2}{*}{**Module App**} & \multicolumn{3}{c}{**Deskapp App**} & \multicolumn{3}{c}{**Web App**} \\ \cline{2-7} & & **android** & **10S** & **j** & **py** & **java** & **C** & **js** & **py** & **java** \\ \hline **Girth-Bessel** & 12044 & 10969 & 72793 & 67266 & 55892 & 12587 & 27793 & 67365 & 55892 \\ **Initial Filter** & 3358 & 4055 & 85737 & 36396 & 2212 & 7145 & 8377 & 33696 & 22802 \\ **Product Filter** & 2296 & 2801 & 1100 & 1663 & 1909 & 2590 & 3025 & 8274 & 2255 \\ **Manual Check*** & 33 & 14 & 19 & 43 & 17 & 12 & 42 & 104 & 5 - 262*** \\ \hline \hline \end{tabular} * Based on the ML clue counts, we inspected around 4D projects manually. “Removing duplicates. \end{table} Table 2. Number of Retrieved Projects after Each Step Figure 2. Density plot of number of stars, contributors, and codebase, with outliers removed implicitly characterize the products in our dataset and enable other researchers to effectively use it and interpret derived findings (cf. 
limitations of existing work in Section 3) - this also helps to explore how similar the ML products in our dataset are to ML products described in interviews and experience reports. **(2)**_Identify when deeper analysis of the dataset is feasible and promising:_ Our research questions from different topics, such as collaboration, architecture, and development process, will identify which questions are worth exploring in more depth, before committing resources to in-depth analyses for individual research questions.

### Research Method

First, we curated a list of research questions relevant to the study and designed qualitative and quantitative strategies to answer them. Given the novelty of the dataset and questions, we rely heavily on qualitative analysis involving substantial manual effort. Therefore, we decided it would be more manageable to analyze a sample of 30 products from the dataset.

**Deriving Research Questions.** We selected research questions that are not only of interest to the research community but can also be feasibly answered by analyzing open-source ML products (e.g., we did not commonly find artifacts that would allow us to answer _"How do data scientists elicit, document, and analyze requirements for ML systems?"_). For this selection, we employed a two-step process. First, we explored existing literature to identify topics that are of interest to researchers in the field, such as the challenges faced by practitioners building ML products, collected in a recent meta-survey of interviews and surveys (Zhou et al., 2018). We identified numerous topics of interest, such as collaboration, architecture, development process, quality assurance, MLOps, and responsible AI. Second, we examined our dataset of ML products to identify potential research questions (RQs) that could plausibly be answered with open-source data. We randomly explored 15 ML products qualitatively, taking multiple pages of notes for each product. We immersed ourselves in the source code, documentation, contributor profiles, issues, and any other available information provided on associated websites; we identified the ML and non-ML components to familiarize ourselves with common structures. In the end, we selected a broad set of questions spanning the entire life cycle:

* **RQ #1 (Collaboration)**: How interdisciplinary are open-source ML product teams and how do they divide work?
* **RQ #2 (Architecture)**: How are open-source ML products architected to incorporate models?
* **RQ #3 (Process)**: What model-product development trajectory do open-source ML products follow?
* **RQ #4 (Testing)**: What and how are the open-source ML products and their parts tested?
* **RQ #5 (Operations)**: How are open-source ML products designed for operation?
* **RQ #6 (Responsible AI)**: What responsible AI practices are used in open-source ML products?

While each identified topic could warrant a dedicated study and deeper analysis, here we provide initial answers for each and explore opportunities for future research.

**Analysis and Synthesis.** Without existing established measures, we found manual, mostly qualitative analysis to be more responsive and effective than a narrow quantitative analysis at scale. To find answers to the RQs, the products in the dataset required an in-depth examination of the code, analysis of contributor activities, and thorough inspection of related documents.
While we automated some measures, such as contributor percentages and modularity scores, designing them also required initial manual investigation and manual classification of ML components. This made the process quite labor-intensive, requiring 10-15 hours per product. Consequently, we employed a sampling methodology rather than analyzing every product in the dataset. _Sampling:_ We analyzed 30 products (11.5% of the dataset, shown in Table 1), which was manageable for our manual analysis. Our goal was not to make statistical generalizations but rather to gain insights into the dataset by sampling a wide range of products; thus we used different qualitative sampling techniques to ensure diversity in our sample. Following case-study research logic (Zhou et al., 2018), we use _information-oriented selection_ and stratification to maximize the utility of information from small samples: We select popular and large products (_extreme/deviant cases_), which may be closer to industry products, to explore the potential of this dataset as an alternative to industry products; we stratify selection across the three genres mobile, desktop, and web; and also include a random selection of other products (_average cases_). Specifically, we select two products with the most stars per genre (P8, P9, P16, P17, P21, P28), two products with the most contributors per genre (P3, P6, P14, P19, P24, P25), the product with the largest code size per genre (P1, P11, P29), and five randomly selected products per genre. _Analyzing Products and Card Sorting:_ We conducted a comprehensive qualitative analysis of the sampled products. We describe the specific analysis steps separately for each research question below, but generally, we follow the strategy: two researchers carefully examined the GitHub repositories, addressing each research question individually for each product, involving tasks such as reading documentation, identifying ML and non-ML components in the source code, measuring modularity, examining contributor profiles, analyzing commit history, and reviewing issues. The entire research team regularly met to discuss, clarify understanding, resolve disagreements, and organize findings. To organize and find patterns among the products, we performed card sorting (Zhou et al., 2018). Each product was represented by a card for each RQ, describing our findings for that particular RQ, and we iteratively grouped these cards to identify patterns within and across RQs. Additionally, we sought to understand relationships among characteristics across multiple questions by comparing how products were classified across different dimensions of analysis. We share analysis artifacts, including a Miro board and spreadsheets, in the appendix (Zhou et al., 2018). **Threats to Credibility and Validity.** This part of the research shares common threats encountered in qualitative research. Given the small sample size and sampling strategy, statistical generalization is not suitable and not advised. While we followed standard practices for coding and memoing during the analysis of the products, as is typical in qualitative research, we cannot completely eliminate biases introduced by the researchers. In addition, we only access public information and do not have access to offline activities, and did not talk to developers in those products - hence, our findings should be interpreted accordingly (construct validity), for example, regarding developer backgrounds and development order. 
### RQ #1: How interdisciplinary are open-source ML product teams and how do they divide work? Interdisciplinary collaboration is difficult, and this has been found also in building ML products [63, 76]. The transparency of open-source development allows researchers to study many aspects of collaboration, as demonstrated by numerous past studies on team collaboration [41, 105], pull requests [30], and diversity [99]. To understand _interdisciplinary_ collaboration in open-source ML products, we explore team composition in terms of the number of contributors and their backgrounds, who works on the ML and the non-ML code of the product, and how tangled ML and non-ML code are in terms of co-changes. The findings can help study existing challenges such as siloed development and uncover further opportunities for studying collaboration. **Method.** To analyze contributor backgrounds and numbers (_Findings 1-2_), we collected contribution data from GitHub and identified the _core_ contributors as those collectively responsible for 80% of all commits (in line with past work [26, 111]). We manually classified each contributor's background as _SE-focused_, _ML-focused_, or _other_ (e.g., physics, finance), based on public self-description, professional title, and education history stated in their GitHub profile, LinkedIn profile, and personal or company websites, if available. We also examined the repository organization and related web pages for finding responsibility assignments. If the classification was not obvious (e.g., because of limited public information) we refrained from assigning a background and classify them as _"unsure."_ To study how developers from SE and ML backgrounds contribute to ML and non-ML code (_Findings 3-4_), we separated the ML and non-ML code (documentation and binaries excluded) manually: We categorized code associated with model training, prediction, and pipeline as ML-related, while all other software infrastructure and graphical user interface (GUI) code fell under non-ML, typically at the granularity of files. Finally, we analyze the commit history to attribute code changes to contributors. To analyze coupling of the ML and non-ML code (_Finding 5_), we analyze co-edits in the history of the product, known as _logical coupling_[23, 114]. Specifically, we compute a _relative coupling index_ that indicates whether the ML and non-ML parts are more or less coupled than would be expected if all changes were randomly distributed across files. A low _relative coupling index_ indicates that changes are typically isolated to only ML code or only non-ML code, whereas a high _relative coupling index_ indicates that ML and non-ML code are often changed together and is a sign of low modularity. To compute _relative coupling index_, we use _evolutionary coupling index (ECI)_[114] to compute coupling and then divide _ECI_ by the probability of coupling of random edits to normalize the effect of size, as ML and non-ML code size differs significantly (the average product has 971k LOC non-ML code and 23k LOC ML code). **Finding 1: Many products have a single core contributor (13/30).** **Finding 2: Most of the core contributors are software engineers (74/92).** In our sample of 30 products, we identified 140 core contributors, among whom we could classify 92. Of these, 74 contributors are _SE-focused_, whereas only 10 (spread across eight products) are _ML-focused_. For the products where we could classify all core contributors, most (9/16) had exclusively _SE-focused_ contributors. 
We found a single contributor (in \(P24\)) who self-identified as an expert in both SE and ML, which is a rare and highly sought-after combination in industry, often referred to as "unicorn." In line with general trends in open source [6, 19], we find relatively small teams, in contrast to often larger teams reported for industry ML products [91, 7, 90]. Studies of industry products tend to report a much more significant involvement of data scientists, often as the dominant role in the product, which we observed as less common in open source. Overall, GitHub remains primarily a platform for software engineers, with limited apparent participation from data scientists, even in the context of ML products. **Finding 3: There is little evidence of clear silos, with core contributors commonly committing to both ML and non-ML code, regardless of background.** We do not find a clear delineation where _ML-focused_ contributors work solely on ML code and _SE-focused_ contributors work exclusively on non-ML code. In products with substantial amounts of both ML and non-ML code, we often find contributors of either background working on both parts - in contrast to the widely reported problem of _siloing_ in industry teams [63]. **Finding 4: Team responsibilities are rarely assigned and recognizable in the commit history.** Only five products (P13, P21, P24, P25, P29) publicly documented team structures with assigned roles and responsibilities for team members. However, the assigned responsibilities are not always reflected in the commit history. For instance, even though P29 has an explicit data team and a separate ML repository, we do not find commits from the data team contributors to the ML repository. Instead, it is the _SE-focused_ contributors who commit to both the ML and non-ML repositories. We conjecture there may be some offline collaboration between the _SE-focused_ and _ML-focused_ contributors, which is not visible in the open-source repository. Previous studies analyzing team collaboration in industry reported misaligned responsibility assignments that do not reflect developers' abilities or preferences [45, 63, 76]. Our findings signal towards similar trends as team responsibilities are not clearly assigned, and developers perform model and engineering work, regardless of their backgrounds. **Finding 5: ML and non-ML code are often changed together, indicating low levels of modularity.** While one could expect ML and non-ML code to change independently as the model is often simply called as a function and could be developed as a distinct service by an independent team, our analysis reveals that a substantial portion of the products (12/30) in our sample exhibit frequent co-changes of ML and non-ML code. For example, P27 has a very high _relative coupling index_ between ML and non-ML code - upon manual inspection, we found that this product has a custom script for training a speech recognition model, where the ML code directly updates the button color in the user interface (UI) based on the prediction result; any modification to the UI properties requires an update to the model script to accommodate the change, causing frequent co-changes of the UI files and the model script. Conversely, although we indeed found many products (18/30) that have coupling lower than random, it is not as low as could be expected from fully modular components of a product. 
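To make the coupling measurement concrete, the sketch below shows one way such a co-change analysis could be computed from a commit log. This is a simplified illustration rather than the exact ECI-based index of [114]: the `commits`, `ml_files`, and `non_ml_files` inputs are hypothetical, and the random-edit baseline is reduced to a two-file approximation that uses file counts as a crude size proxy.

```python
def relative_coupling(commits, ml_files, non_ml_files):
    """Illustrative ML/non-ML co-change coupling from a commit history.

    `commits` is a list of sets of changed file paths; `ml_files` and
    `non_ml_files` are the manually labeled file sets.  Returns the ratio of
    the observed co-change rate to a size-based random baseline: values above
    1 suggest ML and non-ML code are changed together more often than chance.
    """
    total = len(ml_files) + len(non_ml_files)
    if total == 0:
        return float("nan")

    relevant = [c & (ml_files | non_ml_files) for c in commits]
    multi = [c for c in relevant if len(c) >= 2]      # commits that could couple
    cross = sum(1 for c in multi if c & ml_files and c & non_ml_files)
    observed = cross / len(multi) if multi else 0.0

    # Baseline: probability that two files picked at random (weighted by the
    # number of files on each side) fall on different sides of the split.
    p_ml = len(ml_files) / total
    baseline = 2 * p_ml * (1 - p_ml)
    return observed / baseline if baseline else float("nan")
```

Under this reading, a value well above 1 corresponds to products like P27, where UI files and the model script are routinely changed together.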
These numbers corroborate the existing modularity concerns, where changes in models propagate beyond module boundaries in unexpected and non-monotone ways [3, 44, 97]. These complicated interactions pose significant challenges to integrating models into products. In summary, for RQ 1, our findings suggest that open-source ML products are mostly developed by small groups composed primarily of software engineers, with limited involvement of data scientists, and without a clear modular separation of ML work. ### RQ #2: How are open-source ML products architected to incorporate models? Researchers have highlighted how ML can influence the architecture of software products [50, 91]. To comprehensively understand the structure of the products and the incorporated models, we explore multiple architecturally relevant aspects such as model type, usage, importance, existence of multiple models, pipeline automation, documentation, and big data infrastructure. **Method.** To understand the overall structure of the model and product, we conducted a comprehensive manual analysis of the ML and non-ML code in the repositories. We applied our usual qualitative methods and focused on the following artifacts: code structure and data flow as it pertains to models - identifying how the models are created, where and how they are called, and how the model predictions are processed and used. We also reviewed their documentation, relevant blogs and forums, associated web pages, and related repositories under the same personal or organizational account. We then sorted our findings and grouped them into categories, using card sorting techniques [96], and guidance from previous research and domain knowledge from our research team. **Finding 6: About half of the products rely only on third-party ML models (13/30).** We identified 15 products that use _third-party models_ via libraries (such as Tesseract OCR), external APIs (such as ClarifyAI), or by loading pre-trained model files (such as a tflite model) from a remote URL or repository. In contrast, 17 products _self-train models_. Two products use both _third-party_ and _self-trained models_ (P12, P30). For instance, the optical music recognition application, P12, uses a self-trained model to classify music symbols and an existing OCR library for classifying text. With a significant portion of the sampled products only integrating third-party ML models (13/30), this finding resonates with _Finding 2_, that most contributors, being software engineers, develop ML products by leveraging pre-existing models. **Finding 7: The importance of the ML models to the product varies, with about half using them as optional functionality only (13/30).** We found the importance of ML models to vary considerably across different products. For 11 products, the model is the _core_ functionality, as there would be no product without the model (e.g., the OCR model in the OCR scanner app P1). There are 6 products that may still provide value without the model, but the model provides _significant_ functionality (e.g., the OCR model in the video subtitle editor P15 that could potentially operate on manual inputs). In 13 products, the model provides _optional_ functionality, serving as a nice-to-have add-on (e.g., facial expression recognition in the video conferencing app P28). Whether a _third-party_ or _self-trained_ model is used (see _Finding 6_) is not necessarily associated with the model's importance.
We found products investing substantial effort in self-trained models for _optional_ functionality (e.g., P6, P26, P29), and products that rely on a third-party model for _core_ functionality (e.g., P1, P12, P13). However, we do observe that models core to a product are usually developed first, as we will discuss in _RQ #3_. **Finding 8: Automation using model predictions is uncommon (5/30), with most products keeping humans in the loop.** A central question in human-AI design is how to use or present model predictions. While automation can improve user experience, it can also make it hard to fix model mistakes, leading to safety concerns. In contrast, _prompts_ and _augmentation_ are often viewed as safer human-AI design choices when models make mistakes as humans control the decision [38, 44]. We find only five products that use model predictions to _automate_ actions (e.g., keyword spotting app, P27, executes gameplay instructions automatically based on a speech model that processes voice commands). Two products _prompt_ users to confirm an action (e.g., deepfake software, P17, displays image previews to users for verification after each stage of image processing). Most products (23) in our sample use predictions to _augment_ the user interface [44] to display the prediction to a human who may or may not take actions (e.g., trading app, P21, graphically presents model predictions related to investments to the end-users, letting them make decisions on possible investments). **Finding 9: Most products use raw model predictions without any post-processing (21/30).** There are many ways of processing model results in products, some of which are viewed as architectural tactics, such as _two-phased predictions_ as an _availability tactic_, ensuring the resilient serving of models [9, 48]. However, most products in our sample trust models and their predictions, and do not process or check predictions any further. Only two products incorporate additional architectural tactics around the model predictions: Plant identifier app, P2, uses a two-phase prediction system, starting with a local model, and if its confidence score is below a certain threshold, an online model is called for consultation (an architectural pattern known as _cascading/two-phase prediction_ [48]). Subtitle editor software, P15, performs rigorous checks on texts predicted via optical character recognition (OCR) and language translation before presenting them to the end-user, complemented with an editor for manual correction of errors. In addition, three products incorporate a confidence score threshold to filter low confidence predictions (P9, P27, P29) and another three offer a retraining option for the model if performance proves unsatisfactory (P5, P12, P28). P11 is an exception to all, with the model itself serving as a post-processing mechanism to fix forecasting errors retrieved from an API. All others directly present the predictions to users as-is. Relying directly on model predictions can expose users to model mistakes. This can be particularly problematic for third-party models that evolve over time [16]. However, none of these products are safety-critical systems in which inaccurate predictions could cause harm. **Finding 10: Products that automate actions based on model predictions incorporate architectural tactics.** Our findings suggest an association between how model predictions are used in a product _(Finding 8)_ and the architectural decisions made around the model _(Finding 9)_.
For instance, three of the five products that _automate_ actions offer users an option to retrain the model, demonstrating a contingency plan for potential prediction inaccuracies. The fourth product includes a confidence score threshold on model predictions, thus ensuring a check on prediction reliability. These instances indicate that when models are given the power to automate decisions, developers implement architectural tactics to hand more control over to users (either to retrain the model or to adjust the prediction threshold). **Finding 11: It is common to use multiple models in a product (18/30), though mostly the models work independently of each other (11/18).** Interactions among multiple models are a frequently raised challenge in industry, where a minor change in one model can trigger cascading changes across the product [3, 44, 68, 8]. While 18 products use multiple models, they work independently in 11, among which 7 products use models for _separate functions_ (e.g., P15 uses one model for OCR and another for speech-to-text), and 7 products use models as _alternatives_ for the same function (e.g., P26 provides a choice between two clustering models), with 3 products using models for both. There are 5 products that _sequentially compose_ models [44], with the output of one model used as an input to another model (e.g., P12 passes text recognized by an OCR model into an entity recognition model). Two products use models for _collective decision-making_ (e.g., P9 combines multiple classifiers to generate personalized news feeds). **Finding 12: Pipeline automation is not common in open-source ML products.** Among the 17 products that use self-trained models (_Finding 7_), training is often not automated. We did not find a model training pipeline for four products (P2, P18, P22, P30) (we cannot tell if training happens offline or in a private repository). Four products (P6, P10, P12, P16) have non-automated training pipelines requiring manual execution of sequential steps, saving predictions in CSV files, and loading models from asset folders through product code. Four products (P9, P15, P27, P29) have GUI-integrated training pipelines that can be separately activated via GUI actions. Only four products (P14, P21, P23, P26) feature fully automated training pipelines that consistently fetch the latest data for model retraining and use the most recent model or allow end-users to initiate retraining at run time. We also found two products (P11, P24) that include a partial pipeline only for data retrieval. Industry discussions often mention how dependencies between pipelines and other system parts can complicate ML product architecture [44]. However, ML products in our sample have simpler structures, lacking the intricate pipelines seen in industrial settings. **Finding 13: We do not find much effort on data or model documentation.** Both industry and academia view documentation as an important artifact in ML product development for many aspects, such as setting clear expectations for collaboration, improving transparency and accountability for responsible AI, and facilitating information sharing across different components [4, 15, 46, 75]. Documentation is vital to architecture, since architectural practices such as divide and conquer and information hiding at interfaces between components require documentation support. However, in our sample, the 17 products using self-trained models (_Finding 7_) provided minimal and mostly scattered documentation about models and data, if any.
_Model documentation:_ We only found one product, P29, which provides high-quality model documentation in the form of a model card [60] for a model that differentiates between fish and trash in the ocean in an AI-educational game for children. Other products generally limited their model documentation to brief instructions for using the model API or insights into inner model architecture from a data scientist's perspective. _Data documentation:_ Data documentation was mostly limited to presenting a data schema, occasionally mentioning the volume of training data, or simply providing a link to their data sources. We found no integration of data documentation frameworks such as datasheets [13]. Thus, open-source platforms are no different [12] from commercial settings, where ML documentation is rare and perceived as challenging [12, 15]. **Finding 14: Most products do not use/rely on big data infrastructure (23/30).** Scalability is an important architectural challenge for ML products as they often need to handle extensive training and telemetry data, and operate large distributed jobs for model training, leading to reliance on big data infrastructure. However, we did not find use of local or self-hosted big data infrastructures (such as Hadoop and Spark) in any of the products. Seven products contain code related to cloud services such as Amazon S3 (storage), EC2 Mongo (storage), Amazon EC2 (compute), Amazon CloudWatch (monitoring), and Elastic Cloud (search). Given limited funding, unlike revenue-generating commercial products, most open-source products may not be able to afford cloud services or big on-site installations, or find it incompatible with the spirit of open source. In summary, for RQ 2, our findings reveal that ML products are architected with single and multiple models of varying types, both self-trained and third-party, which are used for tasks of varying importance. Notably, we found products to employ different architectural decisions based on how they use model predictions. We also observed a lack of post-processing of model predictions, training pipeline automation, big data infrastructure, and insufficient emphasis on documentation in most of these products. ### RQ #3: What model-product development trajectory do open-source ML products follow? Industry teams have been observed to develop model and product in different orders, known as trajectories [58, 63], and face distinct challenges. We investigate trajectories in open-source ML products. **Method.** We analyzed the commit history to identify the timeline of model and product development, and classified a product as following a _model-first trajectory_ if the model is developed first, followed by the creation of a product around it, or a _product-first trajectory_ if models are added to an existing product (Kumar et al., 2019). Products that have a single large initial commit containing both model and product code are classified as "_unsure_." **Finding 15: The _product-first trajectory_ (16/30) is more common than the _model-first trajectory_ (7/30).** While we are not aware of representative numbers, prior work suggests that both trajectories are prevalent in industry (Kumar et al., 2019). In our open-source sample, we observe a greater prevalence of the product-first trajectory, which may be attributed to most contributors being software engineers (_Finding 2_) and many products adding machine learning for optional functionality (_Finding 7_).
Overall though, both trajectories can be found in open source, which is encouraging for studies exploring phenomena or solutions to their respective challenges. **Finding 16: When the model is the _core functionality_, it is always developed first.** We observed that model importance (_Finding 7_) significantly impacts a product's development trajectory. The products with models as _core_ are always developed _model-first_, while products whose models provide _optional_ or _significant_ functionality tend to follow the product-first trajectory. This pattern is plausible, as developers prototype the risky project parts first. For example, software for creating deepfakes, P17, first developed the model in 2017, and added the GUI components a year later to make the model accessible as a product. In contrast, video conferencing app, P28, follows a product-first trajectory, for which the software development started in 2013, with the face detection model (_optional_ functionality) being added several years later in 2020. ### RQ #4-6: Testing, Operation, and Responsible AI Due to the page limit, we only report brief findings (methods and details are available in the appendix (Zhu et al., 2020)) for the remaining three research questions (cf. Section 5.1). **Finding 17: Testing regular software functionality is common (23/30). Model testing is notably scarce (8/30). Data validation is rare (2/30).** Contrary to industry practices, model evaluation appears to be remarkably less prevalent in our sample of open-source ML products. Even among the eight products that included model evaluation scripts, three of them (P3, P9, P29) approached ML model testing like unit testing, cross-checking predictions against expected values. Data validation testing was almost non-existent, with just two products implementing minimal checks. **Finding 18: Only a few products (8/17) have mechanisms for evolving models.** Of the 17 products with self-trained models, five products (P9, P16, P23, P27, P30) offer users the option to retrain models at run time and three products (P14, P21, P26) continually retrain their models by fetching up-to-date data from their data sources. This aligns with the finding from Nahar et al. (Kumar et al., 2019) that many product teams have a static view of models and model teams struggle to foster a more dynamic mindset. **Finding 19: Telemetry monitoring is almost non-existent (1/30).** Despite heavy emphasis on observability in industry and academic literature, we found 29 out of the 30 products to not have any telemetry tracking or monitoring, which is critical for detecting failure regardless of whether _third-party_ or _self-trained_ models are used. The exception was P21, which incorporated telemetry due to its inherent need in dealing with financial forecasting. We also found a product (P9) that, despite already having a monitoring infrastructure (Amazon CloudWatch), does not use it for model telemetry monitoring. Some developers might argue that collecting telemetry from users may not align with open-source principles. **Finding 20: We find little evidence of MLOps tools.** We did not find use of any popular MLOps tools for tasks such as automating deployment, testing, monitoring, and data cataloging in any of the products. Two products (P17, P21) only implemented basic checks for GPU availability to execute models. Given that many of these products do not incorporate retraining mechanisms (_Finding 18_), they have no immediate need for many automation tasks that MLOps tools typically support.
This finding aligns with Caleato et al. (2019), who found that the adoption of MLOps workflows in GitHub ML projects is limited, but contrasts with findings from industrial surveys where MLOps is deemed crucial and associated with several challenges (Zhu et al., 2020). **Finding 21: Despite significant attention in academia, responsible AI practices (e.g., fairness, safety, security) are not apparent in open-source ML products.** We do not find adoption of any practices in our sample to mitigate responsible AI concerns. Only one product, P17, discusses ethical usage in its readme file,3 which is limited to a disclaimer for end users. Others included general privacy policies and disclaimers in readme files or web pages that are not related to ML. One product, P29, advocates model cards, a model documentation framework for transparent and responsible usage of ML models. This limited focus on responsible AI aligns with the literature (Bahdan et al., 2019; Kumar et al., 2019), which suggests that companies (especially outside of big tech) often do not have concerns about responsible AI and rarely adopt responsible AI practices or tools. Footnote 3: [https://github.com/deepfakes/faceswap#faceswap-has-ethical-uses](https://github.com/deepfakes/faceswap#faceswap-has-ethical-uses) ## 6. Discussion and Conclusion We compiled a dataset of 262 open-source ML products to facilitate research and education that can benefit from access to the development history and artifacts of ML products, and report 21 findings from the six research questions that suggest promising insights and avenues for future exploration. **Educational Resource for Academics and Practitioners.** The compiled dataset can serve as a valuable educational resource for both academics and practitioners in ML product development. The dataset offers diverse study materials suitable for a range of learning needs as it contains both large ML products representative of industry applications and small ones. Even though practices such as model documentation are rare (_Finding 13_), the dataset still contains positive examples for study as well as many opportunities to try interventions: Especially, the larger and more professionally-built ML products (cf. Section 4.2) within the dataset can provide a close-up view of various development practices and architectural tactics, including safety mechanisms, human-AI interaction, and deployment strategies. The smaller personal-interest-based ML products (cf. Section 4.2) can serve as guides or tutorials, illustrating the fundamentals and challenges of building ML products on a manageable scale, particularly beneficial for instructors and beginners looking for an accessible entry point. **Research Opportunities.** Our investigations into various activities associated with building ML products have disclosed numerous promising avenues for future research. Among others, we believe further exploration regarding team conflict is worthwhile, particularly in light of our findings revealing unclear team boundaries and responsibilities in open-source ML products (_Findings 3-4_), an issue also prevalent and frustrating for developers in industry [63]: With access to concrete data in open source, it becomes feasible to delve deeper into this issue, potentially employing firehouse studies [8, 61] to interview developers when certain events occur.
In addition to the analysis of existing practices, our dataset provides researchers with opportunities to evaluate new tools and interventions in realistic open-source settings. For instance, building on insights from _Finding 10_ that developers integrate architectural tactics for safeguarding against model predictions when they are used for automation, we can evaluate the effectiveness of different architectural tactics as safety and usability interventions. Similarly, our discovery of multiple interacting models (_Finding 11_) offers opportunities to design and evaluate whether data-flow tracking and architecturally limiting interactions could curb feedback loops and modularity challenges [3, 88], among many other potential research avenues.
2308.13654
Pretty darn good control: when are approximate solutions better than approximate models
Existing methods for optimal control struggle to deal with the complexity commonly encountered in real-world systems, including dimensionality, process error, model bias and data heterogeneity. Instead of tackling these system complexities directly, researchers have typically sought to simplify models to fit optimal control methods. But when is the optimal solution to an approximate, stylized model better than an approximate solution to a more accurate model? While this question has largely gone unanswered owing to the difficulty of finding even approximate solutions for complex models, recent algorithmic and computational advances in deep reinforcement learning (DRL) might finally allow us to address these questions. DRL methods have to date been applied primarily in the context of games or robotic mechanics, which operate under precisely known rules. Here, we demonstrate the ability for DRL algorithms using deep neural networks to successfully approximate solutions (the "policy function" or control rule) in a non-linear three-variable model for a fishery without knowing or ever attempting to infer a model for the process itself. We find that the reinforcement learning agent discovers an effective simplification of the problem to obtain an interpretable control rule. We show that the policy obtained with DRL is both more profitable and more sustainable than any constant mortality policy -- the standard family of policies considered in fishery management.
Felipe Montealegre-Mora, Marcus Lapeyrolerie, Melissa Chapman, Abigail G. Keller, Carl Boettiger
2023-08-25T19:58:17Z
http://arxiv.org/abs/2308.13654v1
# Pretty Darn good control: when are approximate solutions better than approximate models ###### Abstract Existing methods for optimal control struggle to deal with the complexity commonly encountered in real-world systems, including dimensionality, process error, model bias and data heterogeneity. Instead of tackling these system complexities directly, researchers have typically sought to simplify models to fit optimal control methods. But when is the optimal solution to an approximate, stylized model better than an approximate solution to a more accurate model? While this question has largely gone unanswered owing to the difficulty of finding even approximate solutions for complex models, recent algorithmic and computational advances in deep reinforcement learning (DRL) might finally allow us to address these questions. DRL methods have to date been applied primarily in the context of games or robotic mechanics, which operate under precisely known rules. Here, we demonstrate the ability for DRL algorithms using deep neural networks to successfully approximate solutions (the "policy function" or control rule) in a non-linear three-variable model for a fishery without knowing or ever attempting to infer a model for the process itself. We find that the reinforcement learning agent discovers a policy that outperforms both constant escapement and constant mortality policies--the standard family of policies considered in fishery management. This DRL policy has the shape of a constant escapement policy whose escapement values depend on the stock sizes of other species in the model. Optimal Control, Reinforcement Learning, Uncertainty, Decision Theory ## 1 Introduction Much effort has been spent grappling with the complexity of our natural world in contrast to the relative simplicity of the models we use to understand it. Heroic amounts of data and computation are being brought to bear on developing better, more realistic models of our environments and ecosystems, in hopes of improving our capacity to address the many planetary crises. But despite these efforts and advances, we remain faced with the difficult task of figuring out how best to respond to these crises. While simplified process models for the population dynamics have historically allowed for exploration of large decision spaces, the new wave of rich models are applied to highly oversimplified descriptions of potential actions they seek to inform. For instance, Global Circulation Models (GCMs) such as HadCM3 (Pope et al. 2000; C. Gordon et al. 2000; Collins, Tett, and Cooper 2001) model earth's climate using 1.5M variables, while the comparably vast potential action space is modeled much more minimalistically, with 5 SSP socioeconomic storylines and 7 SSP-RCP marker scenarios summarizing the action space at the IPCC (Riahi et al. 2017). Even as our research community develops simulations of the natural world that fit only in supercomputers, we analyze a space of policies that would fit on index cards. Similar combinations of rich process models and highly simplified decision models (often not even given the status of 'model') are common. Modeling the potential action space as one of a handful of discrete scenarios is sometimes a well justified acknowledgement of the constraints faced by real-world decision-makers - particularly in the context of multilateral decisions - and may seem to reflect a division of responsibilities between 'scientists' modeling the 'natural processes' and policy-makers who make the decisions.
But, more often, this simplification of decision choices is simply mathematically or conceptually convenient. This simplification reflects trade-offs between tractability and complexity at the basis of any mathematical modeling - if we make both the state space and action space too realistic, the problem of finding the best sequence of actions quickly becomes intractable. However, emerging data-driven methods from machine learning offer a new choice - algorithms that can find good strategies in previously intractable problems, but at the cost of opacity. In this paper, we focus on a well-developed application of model-based management of the natural world that has long illustrated the trade-offs between model complexity and policy complexity: the management of marine fisheries. Fisheries management is both an important issue to society and a rich and frequent test-bed of ecological management more generally. Fisheries are an essential natural resource that provide the primary source of protein for one in every four humans, and have faced widely documented declines due to over-fishing (Costello et al. 2016). Fisheries management centers around the process of sampling populations to determine fishing quotas based on population estimates. This decision is often guided by a model of the dynamics of the system. Our paper focuses on the decision side of this problem rather than the measurement step. Fisheries management has roots in both the fields of _ecosystem management_ and _natural resource economics_. Both fields might trace their origins to the notion of maximum sustainable yield (MSY), introduced independently by a fisheries ecologist (Schaefer 1954) and an economist (H. S. Gordon and Press 1954) in the same year. From this shared origin, each field would depart from the simplifying assumptions of the Gordon-Schaefer model in divergent ways, leading to different techniques for deriving policies from models. The heart of the management problem is easily understood: a manager seeks to set quotas on fishing that will ensure the long-term profitability and sustainability of the industry. Mathematical approaches developed over the past century may be roughly divided between these two fields: (A) ecologists, focused on ever more realistic models of the biological processes of growth and recruitment of fish while considering a relatively stylized suite of potential management strategies, and (B) economists, focused on far more stylized models of the ecology while exploring a far less constrained set of possible policies. The economist's approach can be characterized by the mathematics of a Markov decision process (MDP; Colin W. Clark 1973; Colin W. Clark 1990; Marescot et al. 2013), in which the decision-maker must observe the stock each year and recommend a possible action. In this approach, the policy space that must be searched is exponentially large - for a management horizon of T decisions and a space of N actions, the number of possible policies is \(N^{T}\). In contrast, fisheries ecologists and ecosystem management typically search a space of policies that does not scale with the time horizon. Under methods such as "Management Strategy Evaluation" (MSE; Punt et al. 2016) a manager identifies a candidate set of "strategies" a priori, and then compares the performance of each strategy over a suite of simulations to determine which strategy gives the best outcome (i.e. best expected utility).
This approach is far more amenable to complex simulations of fisheries dynamics and more closely corresponds to how most marine fishing quotas are managed today in the United States (see stock assessments documented in RAM Legacy Stock Assessment Database 2020). There are several existing approaches that have been used to manage fisheries, most prominently including **constant escapement policies** and **constant mortality policies**. We collectively refer to these as _classical_, and will compare their performance to RL-based management strategies. While often complex models of the ecosystem are used to estimate the size of the fish population of interest, these classical strategies derive an optimal harvest policy using a simple model for the system dynamics. Across these strategies, setting the harvest quota has the shared aspect of reducing the complex dynamics of the fishery ecosystem to a single equation governing the harvested population, \(X\). A common example is the surplus production model that assumes logistic population growth in the absence of exploitation (Gordon, 1954; Schaefer, 1954): \[X_{t+1}-X_{t}=rX_{t}(1-X_{t}/K)-h_{t}=L(X_{t})-h_{t}\] The interaction between \(X\) and its environment is summarized by two parameters, the maximum intrinsic growth rate \(r\), and the carrying capacity \(K\). In the equation above, \(h_{t}\) is the _harvest_ at time step \(t\). The goal is to choose the harvest policy \(h:X_{t}\mapsto h_{t}\), such that long-term catch is maximized. An advantage of one dimensional approaches is that the optimal policy is intuitive and often known exactly. For example, in the logistic equation pointed out above, the maximum growth rate occurs at a population size \(X=K/2\). The optimizer is an **escapement** policy, which corresponds to a harvest, \(h_{t}\), that keeps the system at its optimal growth rate as much as possible: \[h_{t}=\begin{cases}X_{t}-a,&\text{if }X_{t}>a\\ 0,&\text{else},\end{cases}\] where \(a\) is the stock size at which growth is maximized. For example, in the logistic growth example above, \(a=K/2\). This type of _bang-bang_ policy tends to be the optimal solution for these types of control problems. A drawback of these solutions in the fishery context is the possible presence of several time steps with zero harvest. To mend this, certain suboptimal solutions have been constructed for fishery management. One ubiquitous solution is based on a constant mortality policy: \[h_{t}=aX_{t},\] for some constant \(a\). The policy with the optimal value of \(a\) is known as a _maximum sustainable yield (MSY)_ policy. In the logistic example above, this optimum mortality rate is \(a=r/2\). Under this policy, the stock size at the maximum growth rate \(X_{MSY}=K/2\) is approached asymptotically from any positive initial state \(X_{0}\in(0,K)\). Thus, in equilibrium the MSY policy tends to match the results of constant escapement, namely, having the harvest rate be the maximum sustainable yield of the model: \[MSY=h(X_{MSY})=rK/4.\] That is, at the MSY biomass, \(X_{MSY}\), the logistic growth of \(X\) is cancelled exactly by the harvest. This MSY policy fixes the drawback of the escapement policy--\(h(X)>0\) for all \(X>0\). It, however, has its own drawbacks, as it is particularly sensitive to misestimates of the parameter \(r\). Due to this, similar but more conservative policies are often applied where the constant rate of fishing mortality is \(<M_{MSY}\).
This control rule consists of reducing the slope of the line defined by \(h(X)\) using a prefactor \(\alpha\) in \(h(X)=\alpha\cdot rX/2\). Plausible examples are \(\alpha=0.8\) or \(0.9\); here we examine an **80% MSY constant mortality** policy. Recent advances in machine learning may allow us to once again bridge these approaches, while also bringing new challenges of their own. Novel data-driven methods have allowed these models to evolve into ever more complex and realistic simulations used in fisheries management, where models with over 100 parameters are not uncommon (RAM Legacy Stock Assessment Database 2020). Constrained by computational limits, MDP approaches have been intractable on suitably realistic models and largely confined to more academic applications (Costello et al. 2016). However, advances in _Deep Reinforcement Learning_ (DRL), a sub-field of machine learning, have recently demonstrated remarkable performance in a range of such MDP problems, from video games (Bellemare et al. 2013; Mnih et al. 2013) to fusion reactions (Degrave et al. 2022; Seo et al. 2022) to the remarkable dialog abilities of ChatGPT (OpenAI 2022). RL methods also bring many challenges of their own: being notoriously difficult to train and evaluate, requiring immense computational costs, and presenting frequent challenges with reproducibility. A review of all these issues is beyond our scope but can be found elsewhere (Lapeyrolerie et al. 2022; Chapman et al. 2023). Here, though, we will focus on the issue of opacity and interpretability raised by these methods. In contrast with optimization algorithms currently used in either ecosystem management or resource economics, RL algorithms have no guarantees of or metrics for convergence to an optimal solution. In general, one can only assess the performance of these black box methods relative to alternatives. In most US fisheries, the mortality policy is often piecewise linear (often with one constant and one linear piece), and the allowable biological catch (ABC) or total allowable catch (TAC) is set at some heuristic (e.g. 80%) below the 'overfishing limit', \(F_{MSY}\), i.e. the highest (constant) mortality that can be sustained indefinitely (in the model - reality of course does not permit such definitions). This fixed mortality management can be seen, for instance, in most of the fisheries listed in the widely used R.A. Myers Legacy Stock Assessment Database. Here, we have focused on purely constant mortality policies, rather than piecewise linear mortality functions, for simplicity. Escapement-based management is less common, except in salmonids, as it requires closing a fishery whenever the measured biomass falls below \(B_{MSY}\). In this article, we compare against two common methods: constant mortality (CMort) and constant escapement (CEsc), introduced in Text Box 1.1 Footnote 1: A repository with all the relevant code to reproduce our results may be found at [https://github.com/boettiger-lab/approx-model-or-approx-soln](https://github.com/boettiger-lab/approx-model-or-approx-soln) in the “src” subdirectory. The data used is found in the “data” subdirectory, but the user may use the code provided to generate new data sets.
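To make the two classical policy families concrete, the following minimal sketch simulates them on the logistic surplus-production model of Text Box 1. Parameter values, the noise term, and the simulation loop are illustrative only; the repository referenced in Footnote 1 contains the actual implementations used in the paper.

```python
import numpy as np

def logistic_growth(x, r=1.0, k=1.0):
    """Surplus production L(X) = r X (1 - X / K)."""
    return r * x * (1.0 - x / k)

def constant_escapement(x, escapement=0.5):
    """CEsc: harvest everything above the escapement level a (a = K/2 here)."""
    return max(x - escapement, 0.0)

def constant_mortality(x, mortality=0.5):
    """CMort: harvest a fixed fraction of the stock (r/2 is the MSY rate)."""
    return mortality * x

def total_catch(policy, x0=0.8, steps=200, r=1.0, k=1.0, sigma=0.05, seed=0):
    """Roll out X_{t+1} = X_t + L(X_t) - h_t + noise and accumulate the catch."""
    rng = np.random.default_rng(seed)
    x, catch = x0, 0.0
    for _ in range(steps):
        h = min(policy(x), x)                  # cannot take more than the stock
        catch += h
        x = max(x + logistic_growth(x, r, k) - h + sigma * rng.normal(), 0.0)
    return catch

print(total_catch(constant_escapement), total_catch(constant_mortality))
```

With the default r = 1 and K = 1 used in this sketch, the escapement level K/2 and the mortality rate r/2 correspond to the optimal values discussed in Text Box 1.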
We consider the problem of devising harvest strategies for a series of ecosystem models of increasing complexity (Table 1): _1) one species, one fishery:_ a simple single-species recruitment model based on (May 1977); _2) three species, one fishery:_ a three-species generalization of model _1)_, where one of the species is harvested; _3) three species, two fisheries:_ the same three-species model as above but with two harvested species; _4) three species, two fisheries, parameter variation:_ a three-species model of which two are harvested, as above, with a time-varying parameter. This last model is meant to be a toy model of climate change's effect on the system. Across all of these scenarios, two goals are balanced in the decision process: maximizing long-term catch and preventing stock sizes from falling below some a priori threshold. This way, we evaluate classical management strategies (CMort and CEsc) and DRL-based strategies on four different models. This experimental design is summarized in Fig. 1.

| **Model name** | **Model eqs.** | **N. Sp.** | **Harv. Sp.** | **Stationary?** |
| --- | --- | --- | --- | --- |
| Model 1 | (3) | 1 | \(X\) | Yes |
| Model 2 | (4) | 3 | \(X\) | Yes |
| Model 3 | (4) | 3 | \(X\) and \(Y\) | Yes |
| Model 4 | (4) | 3 | \(X\) and \(Y\) | No |

Table 1: Table of models considered in this paper. Here, **N. Sp.** is the _number of species_ in the model, **Harv. Sp.** is the species of the model which are harvested, and **Stationary?** refers to whether the parameters of the model have fixed values (or, on the contrary, if they vary in time). The only non-stationary case presented in the paper is where \(r_{X}\) drifts linearly with time. In the code repository associated with the paper, we consider other possible choices of non-stationarity. Regarding control for these 4 models, we show the following. _Model 1:_ DRL-based strategies recover the optimal constant escapement (CEsc) policy function. Constant mortality (CMort) performs considerably worse than these three.2 _Model 2:_ Here, all management strategies perform similarly. _Model 3:_ For this model, DRL outperforms both classical strategies, with CMort surprisingly performing significantly better than CEsc. In particular, we observe that DRL strategies are more sensitive to stochastic variations of the system, which allows them to adaptively manage the system to prevent population collapses below the threshold. _Model 4:_ Here, the performance gap between DRL and both classical strategies is maintained. Footnote 2: As will be explained later, all our models are stochastic. If we set stochasticity to zero in Model 1, CMort matches the performance of the other management strategies. We show that in the most complex scenario, Model 4, CMort is faced with a tradeoff--the optimal mortality rate leads to a rather large fraction of episodes ending with a population crash, whose negative reward is counteracted with a higher economic output.3 Conversely, more conservative, lower, mortality rates lead to lower total reward on average. The DRL approach side-steps this trade-off by optimizing over a more complex family of possible policies--policies parametrized by a neural network, as opposed to policies labeled by a single parameter (the mortality rate). Figure 1: An experimental-design type of visualization of the management scenarios considered in this paper. On the x-axis are four different fishery management problems (Table 1).
We represent Model 4's non-stationarity with a clock next to the X variable, and we intend to use it as an example of a possible simplified model for the effects of climate change. On the y-axis we have different management strategies with which one may control each of the models. On the bottom we have the constant escapement strategy (CEsc), based on calling off all fishing below a certain threshold population value. Above that is the constant mortality strategy (CMort), where one optimizes over constant fishing effort strategies. Finally, on top we have DRL-based strategies where policies are in general functions of the full state of the system, and they are parametrized by a neural network. The specific DRL-based strategy is referred to as PPO+GP in the main text, due to the algorithm used to produce the policy. The results plotted are the average reward obtained by the strategy over 100 episodes, and the fraction of those episodes which do not end with a near-extinction event (denoted Perc for Percentage). We have normalized to the highest reward in each column in order to enhance the comparison between strategies. For illustrative purposes we have color-coded the results using a two-dimensional color legend displayed on the bottom left. Our findings paint a picture of how a single-species optimal management strategy may lose performance rather dramatically when controlling a more complex ecosystem. Here, DRL performs better from both economic _and_ conservation points of view. Moreover, rather unintuitively, CMort - known to be suboptimal and unsustainable for single-species models - can turn out to even outperform CEsc - the single-species optimal strategy - for complex ecosystems. Finally, within this regime of complex, possibly varying, ecosystems, we show that DRL consistently finds a policy which effectively either matches the best classical strategy, or outperforms it. We strengthen this result with a stability investigation: we show that random perturbations of the model parameter values used do not significantly alter the conclusion that DRL outperforms CEsc. ## 2 Mathematical models of fisheries considered Here we mathematically introduce the four fishery models for which we compare different management strategies. In general, the class of models that appear in this context are _stochastic, first order, finite difference equations_. For \(n\) species, these models have the general form \[\Delta X_{t} =f_{X}(N_{t})+\eta_{X,t}-M_{X}X_{t} \tag{1}\] \[\Delta Y_{t} =f_{Y}(N_{t})+\eta_{Y,t}-M_{Y}Y_{t}\] \[\Delta Z_{t} =f_{Z}(N_{t})+\eta_{Z,t}-M_{Z}Z_{t}\] \[\ldots\] where \(N_{t}=(X_{t},\ Y_{t},\ Z_{t},\ \ldots)\in\mathbb{R}_{+}^{n}\) is a vector of populations, \(\Delta X_{t}:=X_{t+1}-X_{t}\), \(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) are arbitrary functions, and where \(\eta_{i,t}\) are Gaussian random variables. Here, \(M_{i}=M_{i}(N_{t})\in\mathbb{R}_{+}\) is a state-dependent fish mortality arising from harvesting the \(i\)-th species (sometimes this is referred to as _fishing effort_). The term \(M_{X}X_{t}\) is the total \(X\) harvest at time \(t\). This formulation of stock recruitment as a discrete finite difference process is common among fisheries, as opposed to continuous time formulations which involve instantaneous growth rates. This growth rate simplifies e.g.
the possibly seasonal nature of reproduction (which would need to be accounted for in a realistic continuous-time model) by simply considering the total recruitment experienced by the population over a full year. The fishing efforts are the _control variables_ of our problem--these may be set by the manager at each time-step to specified values. We make two further simplifying assumptions on the control problem: _1. Full observation_: the manager is able to accurately measure \(N_{t}\) and use that measurement to inform their decision. _2. Perfect execution:_ the action chosen by the manager is implemented perfectly (i.e., there is no noise affecting the actual value of the fishing efforts). Model 1 is a single-species classical model of ecological tipping points. Models 2-4 are all three-species models with similar dynamics. Following this logic, the first subsection will be dedicated to the single-species model and the second will focus on the three-species models. ### The single species model Optimal control policies for fisheries are frequently based on 1-dimensional models, \(n=1\), as described in \(Text\ Box\ 1\). The most familiar model of \(f(X)\) is that of _logistic growth_, for which \[f(X_{t})=rX_{t}\big{(}1-X_{t}/K\big{)}=:L(X_{t};\ r,K). \tag{2}\] Real world ecological systems are obviously far more complicated than this simple model suggests. One particularly important aspect that has garnered much attention is the potential for the kind of highly non-linear functions that can support dynamics such as alternative stable states and hysteresis. A seminal example of such dynamics was introduced in (May 1977), using a one-dimensional model of a prey (resource) species under the pressure of a (fixed) predator. In the notation of eq. (1), \[f_{X}(X_{t})=L(X_{t};\ r,K)-\frac{\beta HX_{t}^{2}}{c^{2}+X_{t}^{2}}. \tag{3}\] In the following, we will denote \[F(X_{t},H;\ \beta,c):=\frac{\beta HX_{t}^{2}}{c^{2}+X_{t}^{2}}.\] The model has six parameters: the growth rate \(r\) and carrying capacity \(K\) for \(X\), a constant population \(H\) of a species which preys on \(X\), the maximal predation rate \(\beta\), the predation rate half-maximum biomass \(c\), and the variance \(\sigma_{X}^{2}\) of the stochastic term \(\eta_{X,t}\). (Here and in the following we will center all random variables at zero.) Eq. (3) is an interesting study case of a _tipping point_ (saddle-node bifurcation) (see Fig. 2). Holding the value of \(\beta\) fixed, for intermediate values of \(H\) there exist two stable fixed points for the state \(X_{t}\) of the system, these two attractors separated by an unstable fixed point. At a certain threshold value of \(H\), however, the top stable fixed point collides with the unstable fixed point and both are annihilated. For this value of \(H\), and for higher values, only the lower fixed point remains. This also creates the phenomenon of _hysteresis_, where returning \(H\) to its original value is not sufficient to restore \(X_{t}\) to the original stable state. This structure implies two things. First, that a drift in \(H\) could lead to catastrophic consequences, with the population \(X_{t}\) plummeting to the lower fixed stable point. Second, that if the evolution of \(X_{t}\) is _stochastic_, then, even at values of \(H\) below the threshold point, the system runs a sizeable danger of tipping over towards the lower stable point. ### The three species models Models 2-4 are three-species models and they are all closely related--in fact, their natural dynamics (i.e. 
dynamics under zero harvest) is essentially given by the same equations: \[\begin{array}{l}f_{X}(N_{t})=L(X_{t};\ r_{X},K_{X})-F(X_{t},Z_{t};\ \beta,c)-c_{XY}X_{t}Y_{t},\\ f_{Y}(N_{t})=L(Y_{t};\ r_{Y},K_{Y})-DF(Y_{t},Z_{t};\ \beta,c)-c_{XY}X_{t}Y_{t}, \\ f_{Z}(N_{t})=(b(X_{t}+DY_{t})-d_{Z})Z_{t}.\end{array} \tag{4}\] The three species modeled are \(X\), \(Y\) and \(Z\). Species \(Z\) preys on both \(X\) and \(Y\), while the latter two compete for resources. There are thirteen parameters in this model: The growth rate and carrying capacity, \(r_{X}\), \(K_{X}\), \(r_{Y}\) and \(K_{Y}\), of \(X\) and \(Y\). A parameter \(c_{XY}\) mediating a Lotka-Volterra competition between \(X\) and \(Y\). A maximum predation rate \(\beta\) and a predation rate half-maximum biomass \(c\) specifying how \(Z\) forages on \(X\) and \(Y\). A parameter \(D\) regulating a relative preference of \(Z\) to prey on \(Y\). A death rate \(d_{Z}\) and a parameter \(b\) scaling the birth rate of \(Z\). Finally, the noise variances \(\sigma_{X}\), \(\sigma_{Y}\) and \(\sigma_{Z}\).

Figure 2: The fixed point diagram for the unharvested dynamics of Model 1 as a function of varying the parameter \(\beta H\), assuming zero noise. Stable fixed points (also known as attractors) are plotted using a solid line, while the unstable fixed point is shown as a dotted line.

The three models will branch off of eq. (4) in the following way. _Model 2:_ here, only \(X\) is harvested, that is, in the notation of eq. (1), we fix \(M_{Y}=M_{Z}=0\) and leave \(M_{X}\) as a control variable. All parameters here are constant. _Model 3:_ as Model 2, but with \(X\) and \(Y\) being harvested. In other words, we set \(M_{Z}=0\) and leave \(M_{X}\) and \(M_{Y}\) as control variables. _Model 4:_ here \(X\) and \(Y\) are harvested, but now we include a non-stationary parameter: \[r_{X}=r_{X}(t)=\begin{cases}1-t/200,&\quad t\leq 100,\\ 1/2,&\quad t>100.\end{cases} \tag{5}\] All other parameters are constant. Eq. (5) is intended to reflect in a simple manner a possible effect of climate change: the reproductive rate of \(X\) is reduced linearly over time until it stabilizes. ## 3 Reinforcement learning Reinforcement learning (RL) is a way of approaching _sequential decision problems_ through machine learning. All applications of RL can be conceptually separated into two parts: an _agent_ and an _environment_ which the agent interacts with. That is, the agent performs actions within the environment. After the agent takes an action, the environment will transition to a new state and return a numerical _reward_ to the agent. (See Fig. 1 in (Lapeyrolerie et al. 2022) for a conceptual description of reinforcement learning algorithms.) The rewards encode the agent's goal. The main task of any RL algorithm is then to maximize the cumulative reward received. This objective is achieved by aggregating experience in what is called the _training_ period and learning from such experience. The _environment_ is commonly a computer simulation. It is important to note here the role that real time-series data of stock sizes can play in this process. This data is not used directly to train the RL agent, but rather to estimate the model defining the environment. This environment is subsequently used to train the agent.
In this paper, we focus on the second step--we take the estimated model of reality as a given, and train an RL agent on it.4 Footnote 4: In this sense, it is important to note that the classical management strategies we compare against have a similar flow of information. Namely, data is used to estimate a dynamical model, and this model is used to generate a policy function. The difference from our approach lies in _how_ the model is used to optimize a policy. Because of this difference, RL-based approaches can produce good heuristic solutions for complex problems. Specifically, we consider four environments corresponding to each of the models considered (Table 1). At each time step, the agent observes the state \(S\) and enacts some harvest--reducing \(X_{t}\) to \(X_{t}-M_{X}(N_{t})\cdot X_{t}\), and, for Models 3 and 4, also reducing \(Y_{t}\) to \(Y_{t}-M_{Y}(N_{t})\cdot Y_{t}\). Here the fish mortality-rates-from-harvest (i.e. \(M_{X}=M_{X}(N_{t})\) and \(M_{Y}=M_{Y}(N_{t})\)) are the agent's action at time \(t\). This secures a reward of \(M_{X}(N_{t})X\) for Models 1 and 2, and, similarly, a reward of \(M_{X}(N_{t})X+M_{Y}(N_{t})Y\) for Models 3 and 4. After this harvest portion of the time step, the environment evolves naturally according to eqs. (3) and (4) (Sec. 2). As mentioned previously, discretising time allows a simplification of the possibly seasonal mating behavioral patterns of the species involved. This approximation is commonly used in fisheries for species with annual reproductive cycles (see e.g. (Mangel 2006), Chap. 6). Moreover, the separation of each time-step into a harvest period and a natural growth period assumes that harvest has little disruptive effect on the reproductive process. A detailed model which includes such a disruption is outside of the scope of this work. ### Mathematical framework for RL The RL environment can be formally described as a discrete time _partially observable Markov decision process (POMDP)_. This formalization is rather flexible and allows one, e.g., to account for situations where the agent may not fully observe the environment state, or where the only observations available to the agent are certain functions of the underlying state. For the sake of clarity, we will only present here the subclass of POMDPs which are relevant to our work: _fully observable MDPs_ (henceforth MDPs for short). An MDP may be defined by the following data:

* \(\mathcal{S}\): _state space_, the set of states of the environment,
* \(\mathcal{A}\): _action space_, the set of actions which the agent may choose from,
* \(T(N_{t+1}|N_{t},a_{t},t)\): _transition operator_, a conditional distribution which describes the dynamics of the system (where \(N_{i}\in\mathcal{S}\) are states of the environment),5 Footnote 5: Transition operators are commonly discussed without having a direct time-dependence for simplicity, but the inclusion of \(t\) as an argument to \(T\) does not alter the structure of the learning problem appreciably.
* \(r(N_{t},a_{t},t)\): _reward function_, the reward obtained after performing action \(a_{t}\in\mathcal{A}\) in state \(N_{t}\),
* \(d(N_{0})\): _initial state distribution_, the initial state of the environment is sampled from this distribution,
* \(\gamma\in[0,1]\): _discount factor_.

At a time \(t\), the MDP agent observes the full state \(s_{t}\) of the environment and chooses an action based on this observation according to a _policy function_ \(\pi(a_{t}|N_{t})\).
In return, it receives a discounted reward \(\gamma^{t}r(a_{t},N_{t})\). The discount factor helps regularize the agent, helping the optimization algorithm find solutions which pay off within a timescale of \(t\sim\log(\gamma^{-1})^{-1}\). With any fixed policy function, the agent will traverse a path \(\tau=(N_{0},\ a_{0},\ N_{1},\ a_{1}\ \ldots,\ N_{t_{\text{fin.}}})\) sampled randomly from the distribution \[p_{\pi}(\tau)=d(N_{0})\prod_{t=0}^{t_{\text{fin.}}-1}\pi(a_{t}|N_{t})T(N_{t+1} |N_{t},a_{t},t).\] Reinforcement learning seeks to optimize \(\pi\) such that the expected rewards are maximal, \[\pi^{*}=\text{argmax}\ \mathbb{E}_{\tau\sim p_{\pi}}[R(\tau)],\] where, \[R(\tau)=\sum_{t=0}^{t_{\text{fin.}}-1}\gamma^{t}r(a_{t},N_{t},t),\] is the cumulative reward of path \(\tau\). The function \(J(\pi):=\mathbb{E}_{\tau\sim p_{\pi}}[R(\tau)]\) is called the _expected return_. ### Deep Reinforcement Learning The optimal policy function \(\pi\) often lives in a high or even infinite-dimensional space. This makes it unfeasible to directly optimize \(\pi\). In practice, an alternative approach is used: \(\pi\) is optimized over a much lower-dimensional parameterized family of functions.6 Deep reinforcement learning uses this strategy, focusing on function families parameterized by neural networks. (See Fig. 1 and App. A in (Lapeyrolerie et al., 2022) for a conceptual introduction to the use of reinforcement learning in the context of conservation decision making.) Footnote 6: Policies are, in general, functions from state space to policy space. In our paper, these are \(\pi:[0,1]^{\times 3}\rightarrow\mathbb{R}_{+}\) for the single fishery case, and \(\pi:[0,1]^{\times 3}\rightarrow\mathbb{R}_{+}^{2}\) for two fisheries. The space of all such functions is highly singular, spanning a _non-separable Hilbert space_. Even restricting ourselves to continuous policy functions, we end up with a set of policies which span the infinite dimensional space \(L^{2}([0,1]^{\times 3})\). One way to avoid optimizing over an infinite dimensional ambient space is to discretize state space into a set of bins. This approach runs into tractability problems: First, the dimension of policy space scales exponentially with the number of species. Second, even for a fixed number of species (e.g., 3), the dimension optimized over can be prohibitively large—for example if one uses 1000 bins for each population in a three-species model, the overall number of parameters being optimized over is \(10^{9}\). Neural networks with much smaller number of parameters, on the other hand, can be quite expressive and sufficient to find a rather good (if not optimal) policy function. We will focus on deep reinforcement learning throughout this paper. Within the DRL literature there is a wealth of algorithms from which to choose to optimize \(\pi\), each with its pros and cons. Most of these are based on gradient ascent by using the technique of _back-propagation_ to efficiently compute the gradient. Here we have used only one such algorithm (_proximal policy optimization (PPO)_) to draw a clear comparison between the RL-based and the classical fishery management approaches. In practice, further improvements can be expected by a careful selection of the optimization algorithm. (See, e.g., (Francois-Lavet et al., 2018) for an overview of different optimization schemes used in DRL.) 
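These definitions translate directly into a simulation-based procedure: sampling paths under a fixed policy and averaging their discounted cumulative rewards gives a Monte Carlo estimate of the expected return \(J(\pi)\). A minimal sketch, where `env_reset` and `env_step` are placeholders implementing the initial state distribution \(d\), the transition operator \(T\) and the reward \(r\), and the discount factor shown is an arbitrary placeholder value:

```python
import numpy as np

def discounted_return(rewards, gamma):
    """R(tau) = sum_t gamma^t * r_t for one sampled path."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def estimate_J(policy, env_reset, env_step, gamma=0.99, t_final=200,
               n_paths=100, seed=0):
    """Monte Carlo estimate of J(pi) = E_{tau ~ p_pi}[R(tau)]."""
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(n_paths):
        state, rewards = env_reset(rng), []
        for t in range(t_final):
            action = policy(state)                      # a_t ~ pi(. | N_t)
            state, reward = env_step(state, action, t)  # N_{t+1} ~ T, r(N_t, a_t, t)
            rewards.append(reward)
        returns.append(discounted_return(rewards, gamma))
    return float(np.mean(returns))
```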
### Model-free reinforcement learning Within control theory, the classical setup is one where we use as much information from the model as possible in order to derive an optimal solution. Here, one may find a vast literature on model-based methods to attain optimal, or near-optimal, control (see, e.g., (Zhang, Li, and Liao 2019; Sethi and Sethi 2019; Anderson and Moore 2007)). The classical sustainable fishery management approaches summarized in Text Box 1, for instance, are model-based controls. As we saw there, these controls may run into trouble in the case where there are inaccuracies in the model parameter estimates. There are many situations, however, in which the exact model of the system is not known or not tractable. This is a standard situation in ecology: mathematical models capture the most prominent aspects of the ecosystem's dynamics, while ignoring or summarizing most of its complexity. In this case, it is clear, model-based controls run a grave danger of mismanaging the system. Reinforcement learning, on the other hand, can provide a model-free approach to control theory. While a model is often used to generate training data, this model is not directly used by model-free RL algorithms. This provides more flexibility to use RL in instances where the model of the system is not accurately known. In fact, it has been shown that model-free RL outperforms model-based alternatives in such instances (Janner et al. 2019). (For recent surveys of model-based reinforcement learning, which we do not focus on here, see (Moerland et al. 2023; Pylodyos and Nalpantidis 2017).) This context provides a motivation for this paper. Indeed, models for ecosystem dynamics are only ever approximate and incomplete descriptions of reality. This way, it is plausible that model-free RL controls could outperform currently used model-based controls in ecological management problems. Model-free DRL provides, moreover, a framework within which agents can be trained to be generally competent over a _variety_ of different models. This could more faithfully capture the ubiquitous uncertainty around ecosystem models. The aforementioned framework--known as _curriculum learning_--is considerably more intensive on computational resources than the "vanilla" DRL methods we have used in this paper.7 Due to the increased computational requirements of this framework, we have left the exploration in this direction for future work. Footnote 7: All our agents were trained in a local server with two commercial GPUs. The training time was between 30 minutes and one hour in each case. ## 4 Methods **The Environment.** We considered the problem of managing four increasingly complex models (see Table 1). To recap, these four environments are a single-species growth model (3) for a harvested species; a three-species model (4) with a single harvested species; the same three-species model but with _two_ harvested species; and, finally, a three-species model with a time-varying parameter and two harvested species. The policies explored were functions from states \(N_{t}\) to either a single number \(M_{X}(N_{t})=\pi(N_{t})\) (for Models 1 and 2), or a pair of numbers \((M_{X}(N_{t}),M_{Y}(N_{t}))=\pi(N_{t})\) (for Models 3 and 4). Our goal was to evaluate the performance of different policy strategies over a specified window of time. We chose this window to be 200 time-steps, where each discrete time-step represents the dynamical evolution of the system over a year. 
Each time-step was composed of two parts: First, a _harvest period_ where the harvest is collected from the system (e.g. the population \(X\) is reduced to \(X_{t}\mapsto X_{t}-M_{X}(N_{t})X_{t}\)). Second, a _recruitment and interaction period_, where the system's state evolves according to its natural dynamics. Training and evaluation of management strategies were performed by simulating _episodes_. An episode begins at a fixed initial state and the system is controlled with a management policy until \(t=200\), or until a "near-extinction event" occurs--that is, until any of the populations goes below a given threshold, \[X_{t}\leq X_{\text{thresh.}},\quad Y_{t}\leq Y_{\text{thresh.}},\quad\text{or}\quad Z_{t}\leq Z_{\text{thresh.}}. \tag{6}\] In our setting we have chosen \(X_{\text{thresh.}}=Y_{\text{thresh.}}=Z_{\text{thresh.}}=0.05\) as a rule of thumb--given that under natural (unharvested) dynamics the populations range within values of 0.5 to 1, this represents on the order of a 90-95% population decrease from their "natural" level. The reward function defining our policy optimization problem had two components. The first was economic: the total biomass harvested over an episode. The second sought to reflect conservation goals: if a near-extinction event occurred at time \(t\), the episode was ended early and a negative reward of \(-100/t\) was awarded. This reward function balanced the extractive motivation of the fishery against conservation goals that go beyond the long-term sustainable harvests commonly targeted in fishery management. **Training a DRL agent.** We trained a DRL agent parametrized by a neural network with two hidden 64-neuron layers, on a local server with 2 commercial GPUs. We used the Ray framework8 for training; specifically, we used the Ray PPOConfig class to build the policy optimization algorithm using the default values for all hyper-parameters. In particular, no hyperparameter tuning was performed. The agent was trained for 300 iterations of the PPO optimization step for the three-species cases. The total training time was on the order of 30 minutes to 1 hour. For the single-species model, the training iterations were scaled down to 100 and the training time was around 10 minutes. Footnote 8: [https://docs.ray.io/](https://docs.ray.io/) The state space used was normalized case-by-case as follows: _Model 1:_ a line segment \([0,1]\); _Models 2-4:_ a cube \([0,1]^{\times 3}\). We used simulated data to derive a bound on the population sizes typically observed, and thus be able to normalize states to a finite volume. Policies obtained from the PPO algorithm can be "noisy" as their optimization algorithm is randomized (see, e.g., App. B for a visualization of the PPO policy obtained for Model 4). We smoothed this policy out using a Gaussian process regressor interpolation. Details for this interpolation process can be found in App. C.
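A minimal sketch of the episode logic described above (harvest period, recruitment and interaction period, near-extinction penalty), written as a gymnasium-style environment so that it can be plugged into a training pipeline such as the Ray/RLlib PPO setup mentioned above. The `natural_step` callable stands in for eq. (4) (e.g. the function sketched earlier), and the initial state and normalization bound `N_MAX` are assumed placeholder values.

```python
import numpy as np
import gymnasium as gym

THRESH = 0.05          # near-extinction threshold for X, Y and Z
T_FINAL = 200          # episode length in years
N_MAX = 2.0            # assumed bound used to normalize states to [0, 1]

class TwoFisheryEnv(gym.Env):
    """Models 3-4: observe (X, Y, Z), act with mortalities (M_X, M_Y)."""

    observation_space = gym.spaces.Box(0.0, 1.0, shape=(3,))
    action_space = gym.spaces.Box(0.0, 1.0, shape=(2,))

    def __init__(self, natural_step, init_state=(0.7, 0.6, 0.5)):
        self.natural_step = natural_step           # callable implementing eq. (4)
        self.init_state = np.array(init_state)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.pop = 0, self.init_state.copy()
        return self.pop / N_MAX, {}

    def step(self, action):
        m_x, m_y = np.clip(action, 0.0, 1.0)
        X, Y, Z = self.pop
        harvest = m_x * X + m_y * Y                # reward: biomass caught this year
        X, Y = X - m_x * X, Y - m_y * Y            # harvest period
        X, Y, Z = self.natural_step(X, Y, Z, self.np_random)  # natural growth period
        self.pop, self.t = np.array([X, Y, Z]), self.t + 1
        terminated = bool(min(X, Y, Z) <= THRESH)
        # Penalty of -100/t when a near-extinction event ends the episode early.
        reward = harvest - (100.0 / self.t if terminated else 0.0)
        truncated = self.t >= T_FINAL
        return self.pop / N_MAX, reward, terminated, truncated, {}
```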
**Tuning the CMort strategy.** In order to estimate the optimum mortality rate, we optimized over a grid of possible mortality rates. Namely, for Models 1 and 2, a grid of 101 mortality rates was laid out on the interval \([0,0.5]\); for Models 3 and 4, a \(51\times 51\) grid was set in the square \([0,0.5]^{2}\). The latter grid is slightly coarser due to the high memory cost of using a denser grid. Since the approach for tuning was completely analogous for all models evaluated, here we discuss only Model 4. For each one of these choices of mortality rates, say \((M_{X},M_{Y})\), we simulated 100 episodes based on (4): at each time step the state \((X_{t},Y_{t},Z_{t})\) was observed, a harvest of \(M_{X}X_{t}+M_{Y}Y_{t}\) was collected, and then the system evolved according to its natural dynamics (4). The optimal mortality rate was the value \((M_{X}^{*},\ M_{Y}^{*})\) for which the mean episode reward was maximal. **Tuning the CEsc strategy.** This tuning procedure was analogous to that of the CMort strategy just summarized. Namely: a grid of 101 escapement values was laid out on the interval \([0,1]\) for Models 1 and 2, and a \(51\times 51\) grid on \([0,1]^{2}\) was laid out for Models 3 and 4. Each grid point represented a CEsc policy. We used each of these policies to manage 100 replicate episodes. The optimal policy was the one with the highest average reward obtained. A visualization of the tuning outcome for Model 4 is shown in Fig. 3. **Parameter values used.** The single-species model's (eq. (3)) dynamic parameters were chosen as \[r=K=1,\quad\beta=0.25,\quad c=0.1. \tag{7}\] Here, the values of \(\beta\) and \(c\) were chosen so as to bring the system roughly close to its tipping point. For Models 2 and 3, the dynamic parameters (in eq. (4)) were chosen as follows: \[\begin{split} r_{X}=K_{X}=r_{Y}=K_{Y}=1,\quad\beta=0.3,\quad c=0.3,\\ c_{XY}=0.1,\quad b=0.1,\quad D=1.1,\quad d_{Z}=0.1.\end{split} \tag{8}\] Moreover, the variances for the stochastic terms were chosen as \[\sigma^{2}(\eta_{X,t})=\sigma^{2}(\eta_{Y,t})=\sigma^{2}(\eta_{Z,t})=0.05.\] For Model 4, we used \(r_{X}(t)\) as given in eq. (5) and all other parameters as given in eq. (8). The values of \(c\) and \(\beta\) in the three-species model were slightly different from their values in the one-species model. These values were chosen heuristically: we observed that choosing the lower value of \(c=0.1\) in this case would lead to quick near-extinction events even without a harvest. Moreover, \(\beta\) was slightly increased simply to put more predation pressure on the \(X\) and \(Y\) populations and make them slightly more fragile. **Stability analysis.** To ensure that our results do not strongly depend on our parameter choices, we performed a stability analysis. Here, we perturbed parameters randomly and measured the difference in performance between our DRL-based methods and the CEsc strategy. We observed that the difference in performance is maintained even for relatively high noise strengths. We only considered the most complex case here, Model 4. For each value of the parametric noise strength \(\sigma_{\text{param.}}\), we executed the following procedure: we sampled 100 choices of perturbed parameter values, where each parameter \(P\) was perturbed to \((1+g_{P})P\), with \(g_{P}\) a Gaussian random variable with variance \(\sigma_{\text{param.}}^{2}\). For each of these sampled parameter sets, we tuned CEsc and trained the DRL agent. We measured the average reward difference between these two strategies for each sample (this was done using 100 replicate evaluation episodes). Finally, we took the mean of this average difference over the 100 perturbed parameter samples. The parametric noise strength values used were \([0.04,0.08,\ldots,0.2]\).
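The tuning procedures described above reduce to a grid search over one or two policy parameters. A minimal sketch for the CMort case of Model 4, assuming an `episode_reward(policy, rng)` helper that simulates one full 200-step episode under the given policy and returns its total reward (the helper and its signature are our own placeholders):

```python
import numpy as np

def cmort_policy(m_x, m_y):
    """Constant mortality: harvest fixed fractions of X and Y regardless of state."""
    return lambda state: (m_x, m_y)

def tune_cmort(episode_reward, n_grid=51, n_episodes=100, seed=0):
    """Pick (M_X*, M_Y*) on a 51x51 grid over [0, 0.5]^2 by mean episode reward."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 0.5, n_grid)
    best, best_reward = None, -np.inf
    for m_x in grid:
        for m_y in grid:
            policy = cmort_policy(m_x, m_y)
            mean_r = np.mean([episode_reward(policy, rng) for _ in range(n_episodes)])
            if mean_r > best_reward:
                best, best_reward = (m_x, m_y), mean_r
    return best, best_reward
```

Tuning the CEsc strategy follows the same pattern, with the grid laid over escapement levels in \([0,1]^{2}\) and each policy harvesting only the biomass in excess of its escapement value.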
## 5 Results We evaluated each of the four management strategies considered on Models 1-4. To recap, the management strategies were CEsc, CMort, PPO and a Gaussian process interpolation of PPO ("PPO+GP"). Furthermore, we characterized the trade-off between economic output and sustainability faced by CMort policies. This was done by evaluating CMort policies with a fraction of the optimal mortality rate (specifically, 80%, 90% and 95%). All evaluations were based on 100 replicate episodes. We will visualize the results concerning Model 4 in this section, leaving the other models for App. A. This is the most complex scenario considered and the one where our results show the most compelling advantage of DRL methods with respect to classical strategies. Figure 3: Visualization of the constant escapement strategy tuning procedure for Model 4. There was a certain multiplicity in this tuning strategy: a "ridge of optimality" where policies had essentially equivalent behavior. Throughout our investigation, we tuned constant escapement on several occasions and, on each occasion, a different optimal policy along the ridge was found. The results for different policies along the ridge were in practice equivalent, with no discernible difference in performance. We highlighted the ridge with a white dotted line. Our main result is summarized in Fig. 4, which displays the total reward distributions for the policy obtained through each strategy. Here we see that CEsc has a long-tailed distribution of rewards, and its average reward is much lower than that of the other management strategies. CMort has a shorter-tailed distribution and a much higher average reward. Finally, both DRL-based strategies have a more concentrated reward distribution and a higher average reward than CMort. To assess the cause of the classical strategies' low performance with respect to the DRL-based strategies, we plot the duration of each of the 100 evaluation episodes of each strategy in Fig. 5. We see that early episode ends are prevalent for classical strategies and rare for DRL-based strategies. Early episode ends tend to happen at lower \(t\) values for CEsc than for CMort. Thus, the distribution of episode durations is widest for CEsc, followed by CMort, PPO and PPO+GP. We examine the trade-off between profit and sustainability faced by the CMort strategy in Fig. 6. Two quantities are plotted: the fraction of evaluation episodes with maximal length (i.e. episodes with no near-extinction events), and the average reward. On the x-axis we have several sub-optimal mortality rates that err on the conservative side: e.g. the policy labeled "80% Opt. CMort" has the form \[\pi:(X_{t},Y_{t},Z_{t})\mapsto(0.8M_{X}^{*},\ 0.8M_{Y}^{*}),\] where \((M_{X}^{*},\ M_{Y}^{*})\) is the optimal CMort strategy. We see that sufficiently conservative policies attain high sustainability, but only at a high price in terms of profit.9 We expect a similar and, possibly, more pronounced effect for the CEsc strategy but do not analyze this case here. Footnote 9: As noted before, here we equate economic profit with biomass caught. This is done as an approximation to convey the conceptual message more clearly, and we do not expect our results to significantly change if, e.g., "effort cost" is included in the reward function. When we refer to "large differences" in profit, or "paying dearly," we mean that the ratio between average rewards is considerable--e.g. a 15% loss in profit. One problem that is often encountered when using machine learning methods is the interpretability of the output of these methods.
For our PPO strategy, the output is a policy function parametrized by a neural network with parameters \(\theta\): \[\pi_{\theta}:N_{t}\mapsto(M_{X}(N_{t}),M_{Y}(N_{t})),\] where \(N_{t}=(X_{t},Y_{t},Z_{t})\) is the state of the system at time \(t\), and where \(M_{X}\) and \(M_{Y}\) are the mortalities due to harvest during that time-step. Figure 4: Reward distributions for the four strategies considered. These are based on 100 evaluation episodes. We denote CEsc for constant escapement, CMort for constant mortality, PPO for the output policy of the PPO optimization algorithm, and PPO+GP for the Gaussian process interpolation of the PPO policy. While the values of the neural network parameters are hard to interpret, the actual shape of the policy function is much more understandable. Here we visualize the PPO+GP policy function and provide an interpretation for it, as this function is smoother and less noisy than the PPO policy function. The PPO policy function is visualized similarly in App. B. Given its high dimension, it is not possible to fully display what the obtained policy function "looks like"--we thus project it down to certain relevant axes. The result of this procedure is shown in Fig. 7. In that figure, the shape of the optimal escapement strategy is provided for comparison. We notice that the DRL-derived policy has similarities to a CEsc policy. Here, the key difference is that the escapement value for each of the fished species is sensitive to variations in the other populations. This can be seen as color gradients in the plots of \((X,M_{X})\) and \((Y,M_{Y})\), where the gradient corresponds to differing values of \(Z\). Moreover, this can be seen as an anti-correlation in the plot of \((X,M_{Y})\)--for optimal CEsc, \(M_{Y}\) is uncorrelated with \(X\). This sensitivity of the policy to, for instance, the values of \(Z\) can be seen in the sample time series displayed in Fig. 8. Here, we can see that species \(Z\) becomes endangered due to harvesting under all management strategies. The DRL-based strategy, however, is sensitive to the values of \(Z\) and can respond accordingly by scaling the fishing effort with the value of \(Z\). In particular, the policy responds to the period of diminishing values of \(Z\) near the end of the episode by restricting fishing on \(X\) and \(Y\), thus promoting \(Z\)'s growth. This pattern is rather common across the whole dataset--early episode ends are largely due to near-extinctions of \(Z\) for all management strategies. ### Recovering constant escapement for a single species While the optimal control for our single-species model (3) cannot be easily proven to be CEsc (since the right-hand side of that equation is not concave), from experience we can expect CEsc to be either optimal or near-optimal. Figure 5: Histograms of episode lengths and rewards for the four different management strategies considered. Only the first 50 evaluation episodes (from a total of 100) were included, for ease of visualization. From left to right, the four management strategies compared are CEsc, CMort, PPO, and PPO+GP. We give evidence for this intuition by showing that our DRL method recovers a CEsc solution when trained. These results are shown in Fig. 9. Here we show both the output PPO policy and its Gaussian process interpolation. This helps build an intuition about the relationship between our "PPO" and "PPO+GP" management strategies.
There is a presence of certain high-mortality points at low \(X\) values in the PPO policy (which in turn generates a rising fishing mortality for \(X\) values below a certain threshold in the PPO+GP policy). This is likely due to experience of near-extinctions early on in the training process--where, given an impending extinction, there is a higher reward for intensive fishing. These "jitters" are likely not fully erased through the optimization algorithm since near-extinction events become extremely rare after only a few training iterations. This way, the agent does not further explore that region of state space to generate new experience. We believe the most important aspect of the CEsc policy reproduced by PPO is the fact that there is some sufficiently-wide window below the threshold of the policy (i.e. below the optimal escapement value), on which no fishing is performed. That is, there exists some sufficiently large \(\varepsilon\) such that if \(X_{\text{thresh.}}\) is the optimal escapement value of the system, then \(\pi_{PPO}(X)=0\) for all \(X\in[X_{\text{thresh.}}-\varepsilon,\ X_{\text{thresh.}}]\). ### Stability analysis In this section we present results intended to show that the effects that we observe in this paper are not the result of a careful selection of parameter values, but rather arise for a wide variety of parameter values. Our main result in this respect is Fig. 10. There, we plot the _average episode reward difference_ between the two DRL-based methods we considered, and the optimal CEsc strategy. This figure shows that, for a wide range of parameter values, DRL-based strategies can have a considerable advantage over an optimized CEsc policy (the single-species optimal solution). Figure 6: Trade-off between reward and probability of a near-extinction event for CMort policies. We evaluated policies at the full optimal constant escapement value, and also at 0.8, 0.9, 0.95 of the latter. Each evaluation is based on 100 episodes. On the left we plot the percentage of episodes which last their maximum time window, i.e. that do not see a near-extinction event. On the right, we plot the mean episode reward and standard deviation for each policy. ## 6 Discussion Fisheries are complex ecosystems, with interactions between species leading to highly non-linear dynamics. While current models for population estimation include many important aspects of this complexity, it is still common to use simplified dynamical models in order to guide decision making. This provides easily interpretable solutions, such as CMort policies. There is a drawback here, however: due to the simplicity of these dynamical models, the policies might not respond effectively in many situations--situations where the predictions of these simple models deviate considerably from reality. Because of this, policies such as MSY have faced pushback and are believed to have contributed to the depletion of fish stocks (Worm et al. 2006; Costello et al. 2016). We propose an alternative approach to the problem of fishery control: to use a more expressive - albeit more complicated - dynamical model to guide decision making. Furthermore, rather than computing the optimal control policy for the model (something that is impossible in practice for complex systems), we use deep reinforcement learning to obtain a "pretty darn good" policy. This policy is estimated in a _model-free_ setting, i.e., the agent treats the dynamical model (e.g. eq. (4)) as a black box of input-output pairs. 
By not relying on precise knowledge of the model's parameter values, but rather just on input-output statistics, model-free approaches have gained traction in a variety of control theory settings (see, e.g., (Sato 2019; Ramirez, Yu, and Perrusquia 2022; Zeng et al. 2019)). We compare deep reinforcement learning-based policies against classical management strategies (CMort and CEsc). While the latter are inspired by the shape of optimal solutions in the single-species setting, they are optimized in a model-free way as well: e.g. the optimal mortality rate is computed empirically from simulated data. Because of the simplicity of the classical policy functions, the optimal such policy may be easily estimated through a grid search. This is the case since these policy functions are specified by only one or two parameters (respectively in the single-fishery and two-fishery cases). Figure 7: Plots of the PPO+GP policy \(\pi_{\text{PPO+GP}}\) along several relevant axes. Here \(M_{X}\) and \(M_{Y}\) are the \(X\) and \(Y\) components of the policy function. The values of the plots are generated in the following way: for each variable \(X\), \(Y\), and \(Z\), the time-series of evaluation episodes are used to generate a window of typical values that variable attains when controlled by \(\pi_{\text{PPO+GP}}\). Then, for each plot either \(X\) or \(Y\) was varied on \([0,1]\) along the \(x\) axis, while the other variables (resp. \(Y\) and \(Z\), or \(X\) and \(Z\)) were varied within the typical window computed before. The value of one of the latter two variables was visualized as color. Figure 8: Time-series of an episode managed with \(\pi_{\text{PPO+GP}}\). Here we plot the state of the system on the bottom panel, and the actions taken (fishing efforts on \(X\) and \(Y\), respectively \(M_{X}\) and \(M_{Y}\)) on the top panel. Figure 9: Left panel: the policy obtained from 100 training iterations of the PPO algorithm on the "single species, single fishery" model. Right panel: the Gaussian process interpolation of the left panel. We plot both as scatter data evaluated on a 101-point grid on \([0,1]\), but these policies may of course be evaluated continuously, on any possible value of \(X\). In contrast, DRL optimizes over the more expressive - and more complicated - family of policies parametrized by a neural network. Neural networks are often used as flexible function approximators that can be efficiently optimized over. We showed that for sufficiently complex management scenarios - Models 3 and 4 - DRL-based management strategies perform significantly better than CEsc. This holds with respect to both the average rewards received and the conservation goals. In this sense, an approximate solution to a more complicated and expressive model can outperform the optimal solution of the single-species problem--even when the parameters of the single-species solution are empirically optimized. We found that the optimal CMort policy surprisingly performs much better than CEsc (Fig. 4). However, it can be observed in Fig. 6 that the CMort strategy faces a trade-off: high sustainability is achieved for sub-optimal mortality rates, but only at a significant decrease in the mean episode reward. We expect that with increasing ecosystem complexity this phenomenon might become more pronounced.
We can understand this as a consequence of the rigidity of classical strategies: the simplicity of their expressions, depending only on a few parameters, means that policy optimization is constrained to a rather reduced subset of the space of possible policies. The question of when multi-species models are well-approximated by single-species models was studied in detail in (Burgess et al. 2017). Here our approach is dual to that of the aforementioned paper. Rather than first optimizing a single-species model to approximate a more complex model and then finding the MSY value for the single-species model, we used simulated data to optimize CMort and CEsc directly on the three-species model. Figure 10: Mean reward difference between DRL methods (resp. our "PPO" and "PPO+GP" strategies), on the one hand, and the optimal constant escapement policy ("CEsc") on the other. The dynamic parameters in eq. (4) were randomly perturbed from the values given in (8) according to the procedure detailed in Sec. 4. The noise strength of this perturbation is plotted on the x-axis. For each noise strength, 100 parameter perturbations were sampled, each one giving rise to a _realization_ of the model. For each such realization, we optimized a CEsc policy and trained a PPO agent. Moreover, we interpolated the PPO policy using a Gaussian process, as detailed in Sec. 4. Then, for each realization we compared the performance of these policies: we measured the mean reward difference between PPO and CEsc, and between PPO+GP and CEsc. The plot represents the distribution of reward differences observed at a given noise strength: we plot the mean and the standard deviation of the mean reward differences observed. In an equation, we plot the means \(\mathbb{E}_{P}[\mu_{P}^{\text{DRL}}-\mu_{P}^{\text{CEsc}}]\), where \(P\) are the parameter values, \(\mu_{P}^{\text{DRL}}\) is the mean reward for a DRL policy trained on the problem with parameter values \(P\), and, similarly, \(\mu_{P}^{\text{CEsc}}\) is the mean reward of the optimal constant escapement policy for parameter values \(P\). We do not investigate further whether our three-species models are well approximated by a single-species model in the sense of (Burgess et al. 2017). However: _1)_ Because the interaction terms in (4) are about an order of magnitude smaller than \(r_{X}\) and \(r_{Y}\), Models 2-4 are "close" in parameter-space to a single-species model. _2)_ The fact that for Model 2 all strategies match in performance suggests that (4) might be well-approximated by a single-species model. This in turn suggests that the reason DRL outperforms both single-species strategies for Models 3 and 4 is not due to a lack of a single-species approximation for either \(X\) or \(Y\), but due to the complexity of having two harvested species. Moreover, the non-stationarity in Model 4 maintained the advantage of DRL over CEsc and CMort. Here, one may have expected an exacerbation of that advantage due to non-stationarities introducing biases to single-species approximate models (Burgess et al. 2017). We did, however, measure a decrease in the sustainability of CEsc in Model 4 with respect to Model 3. Finally, we performed a stability analysis to ensure that the advantage of DRL-based techniques over CEsc is a ubiquitous phenomenon and not a result of a lucky selection of parameter values. We found (in Fig. 10) that an advantage can be observed even for relatively high-noise perturbations of the parameters--noise with a variance of 20% of the parameter values.
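For reference, this stability analysis amounts to the following loop over perturbed model realizations; `tune_cesc`, `train_ppo` and `mean_reward` are placeholders for the tuning, training and evaluation routines described in Sec. 4, and only the perturbation rule and baseline parameter values come from the text.

```python
import numpy as np

BASE_PARAMS = {"r_X": 1.0, "K_X": 1.0, "r_Y": 1.0, "K_Y": 1.0,
               "beta": 0.3, "c": 0.3, "c_XY": 0.1, "b": 0.1, "D": 1.1, "d_Z": 0.1}

def perturbed(params, sigma_param, rng):
    """Each parameter P is perturbed to (1 + g_P) * P with g_P ~ N(0, sigma_param^2)."""
    return {k: (1.0 + rng.normal(0.0, sigma_param)) * v for k, v in params.items()}

def reward_gap(sigma_param, tune_cesc, train_ppo, mean_reward, n_samples=100, seed=0):
    """Mean and spread of the DRL-minus-CEsc reward difference over realizations."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_samples):
        params = perturbed(BASE_PARAMS, sigma_param, rng)
        cesc = tune_cesc(params)      # optimal constant escapement for this realization
        ppo = train_ppo(params)       # DRL policy trained on the same realization
        gaps.append(mean_reward(ppo, params) - mean_reward(cesc, params))
    return float(np.mean(gaps)), float(np.std(gaps))
```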
The aforementioned Figure summarizes the statistics of 100 parameter perturbations so that we may expect perturbations of up to 60% (i.e. three sigmas) in the parameter values to appear in these statistics. ### Future directions There are a number of interesting directions that would be interesting to explore in future work. First, benchmarking our results against increasingly more compex and realistic fishery models. This would include non-stationarities in the dynamical parameters to accurately reflect the effects of climate change on the ecosystem. This added complexity would likely pose a computational challenge--in future work we will likely need to test several different DRL training algorithms (see e.g. (Lapeyrolerie et al. 2022) for a non-exhaustive list), and it is very likely that hyperparameter tuning will need to be performed. Moreover, it may be that larger neural networks than the one we used in this research will be needed for the policy function. This all will mean that considerable technical work will be needed in order to make this next step computationally feasible (e.g. we might need to make more extensive use of GPUs and parallelization). Second, to account for noisy estimates of the system's state and imperfect policy implementation. This could be done straightforwardly, albeit it might increase the training time before DRL approaches converge, as well as introducing the need for hyperparameter tuning. Third, to account for the systematic uncertainties behind the dynamics of the ecosystem--that is, to account for model biases with respect to reality. Here, one can employ tools from curriculum learning in order to train an agent that is _generally capable_ of good management over a range of different dynamical models. This way, one can incorporate different models--expressing different aspects of the ecosystem--into the learning process of the agent. We believe that this step will likely be necessary if DRL algorithms are to be applied successfully in the fishery management problem. Curriculum learning is rather expensive computationally, however, and involves a non-trivial _curriculum design_ which will guide the agent in its learning process. This way, considerable technical work would be needed for this direction. ### Acknowledgements The title of this piece references a mathematical biology workshop at NIMBioS organized by Paul Armsworth, Alan Hastings, Megan Donahue, and Carl Towes in 2011 which first sought to emphasize 'pretty darn good' control solutions to more realistic problems over optimal control to idealized ones. This material is based upon work supported by the National Science Foundation under Grant No. DBI-1942280. ## Appendix A Appendix: Results for stationary models In the main text we focused on the non-stationary model ("three species, two fisheries, non-stationary" in Table 1) for the sake of space and because our results were most compelling there. Here we present the reward distributions for the other models considered--the three stationary models, lines 1-3 in Table 1. These results are shown in Figs. 11, 12 and 13. ## Appendix B Appendix: PPO policy function for non-stationary model In the main text, Fig. 7, we presented a visualization of the PPO+GP policy function obtained for the "three species, two fisheries, non-stationary" model. This policy function is a Gaussian process regression of scatter data of the PPO policy function. In Fig. 14 we present a representation of this scatter data in a similar format as Fig. 7. 
## Appendix C Appendix: Gaussian process interpolation Here we summarize the procedure used to interpolate the PPO policy (visualized in Fig. 14). We use the _GaussianProcessRegressor_ object of the _sklearn_ Python library with a kernel given by \[\text{RBF(length scale = 10)}+\text{WhiteNoise(noise level = 0.1)}.\] This interpolation method is applied to scatter data of the PPO policy evaluated on 3 different grids on \((X,Y,Z)\) states: \(G_{X}\), a \(51\times 5\times 5\) grid; \(G_{Y}\), a \(5\times 51\times 5\) grid; and \(G_{Z}\), a \(5\times 5\times 51\) grid. This combination of grids was used instead of a single dense grid in order to reduce the computational intensity of the interpolation procedure. For \(G_{X}\), the 5 values for \(Y\) and \(Z\) were varied in a "popular window," i.e. episode time-series data was used to determine windows of \(Y\) and \(Z\) values which were most likely. The grids \(G_{Y}\) and \(G_{Z}\) were generated in a similar fashion, _mutatis mutandis_.10 The length scale and noise level values of this kernel were chosen arbitrarily--no hyperparameter tuning was needed to produce satisfactory interpolation, as will be shown in the results section. Footnote 10: The raw dataset is found at the data/results_data/2FISHERY/RXDRIFT sub-directory in the repository with the source code and data linked above. Scatter plots visualizing this policy are shown in App. B. Figure 11: Reward distributions for the four strategies considered. These are based on 100 evaluation episodes of Model 3 in Table 1. We denote CEsc for constant escapement, CMort for constant mortality, PPO for the output policy of the PPO optimization algorithm, and PPO GP for the Gaussian process interpolation of the PPO policy.
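A minimal sketch of this interpolation step with scikit-learn, where the white-noise component of the kernel corresponds to the `WhiteKernel` class. The scatter data of PPO policy evaluations on the grids \(G_{X}\), \(G_{Y}\) and \(G_{Z}\) is assumed to be available as arrays, and the clipping of the output to valid mortality rates is our own addition.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def interpolate_policy(states, actions):
    """Fit a smooth surrogate ("PPO+GP") of the PPO policy from scatter data.

    states  : (n, 3) array of (X, Y, Z) grid points from G_X, G_Y and G_Z.
    actions : (n, 2) array of PPO mortalities (M_X, M_Y) evaluated at `states`.
    """
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel).fit(states, actions)

    def policy(state):
        # Evaluate the regressor at an arbitrary state and clip to valid rates.
        m = gp.predict(np.asarray(state, dtype=float).reshape(1, -1))[0]
        return np.clip(m, 0.0, 1.0)

    return policy
```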
2306.14497
Your Code is 0000: An Analysis of the Disposable Phone Numbers Ecosystem
Short Message Service (SMS) is a popular channel for online service providers to verify accounts and authenticate users registered to a particular service. Specialized applications, called Public SMS Gateways (PSGs), offer free Disposable Phone Numbers (DPNs) that can be used to receive SMS messages. DPNs allow users to protect their privacy when creating online accounts. However, they can also be abused for fraudulent activities and to bypass security mechanisms like Two-Factor Authentication (2FA). In this paper, we perform a large-scale and longitudinal study of the DPN ecosystem by monitoring 17,141 unique DPNs in 29 PSGs over the course of 12 months. Using a dataset of over 70M messages, we provide an overview of the ecosystem and study the different services that offer DPNs and their relationships. Next, we build a framework that (i) identifies and classifies the purpose of an SMS; and (ii) accurately attributes every message to more than 200 popular Internet services that require SMS for creating registered accounts. Our results indicate that the DPN ecosystem is globally used to support fraudulent account creation and access, and that this issue is ubiquitous and affects all major Internet platforms and specialized online services.
José Miguel Moreno, Srdjan Matic, Narseo Vallina-Rodriguez, Juan Tapiador
2023-06-26T08:16:38Z
http://arxiv.org/abs/2306.14497v1
# Your Code is 0000: An Analysis of ###### Abstract Short Message Service (SMS) is a popular channel for online service providers to verify accounts and authenticate users registered to a particular service. Specialized applications, called Public SMS Gateways (PSGs), offer free Disposable Phone Numbers (DPNs) that can be used to receive SMS messages. DPNs allow users to protect their privacy when creating online accounts. However, they can also be abused for fraudulent activities and to bypass security mechanisms like Two-Factor Authentication (2FA). In this paper, we perform a large-scale and longitudinal study of the DPN ecosystem by monitoring 17,141 unique DPNs in 29 PSGs over the course of 12 months. Using a dataset of over 70M messages, we provide an overview of the ecosystem and study the different services that offer DPNs and their relationships. Next, we build a framework that (i) identifies and classifies the purpose of an SMS; and (ii) accurately attributes every message to more than 200 popular Internet services that require SMS for creating registered accounts. Our results suggest that the DPN ecosystem is globally abused for fraudulent account creation and access, affecting all major Internet platforms and online services. ## I Introduction Originating in the late 1990s, Short Message Service (SMS) have experienced a resurgence among online service providers (_e.g._, Alphabet, Meta) to deliver notifications, enable Two-Factor Authentication (2FA) and enhancing the security of online accounts [15, 23, 33]. Online services using SMS-based 2FA technologies assume that phone numbers are uniquely linked to an individual. However, this assumption does not hold with Disposable Phone Numbers (DPNs). DPNs are shared phone numbers that any individual can use to receive SMS messages on a public website, so their metadata and content is published for anyone to see. Users can take advantage of DPNs to register at online services without giving their true personal phone number, either for privacy reasons or to conduct fraudulent actions. Despite its potential for abuse, the DPN ecosystem remains relatively unexplored. The most recent prior systematic study of DPNs and their usage dates back to 2018 [37], a time before the expansion in popularity of SMS-based 2FA for web and mobile services. According to Duo Labs, 2FA usage has increased from 28% in 2017 to 78% in 2021, becoming the preferred user authentication method [8]. As a result, the key findings of [37] have become obsolete. These reasons, and their potential impact on web services, motivate us to systematically measure and investigate the current DPN ecosystem and the evolution of the purposes it supports in the context of 2FA. Specifically, we seek answers to the following research questions: \((i)\) How widely used are DPNs? \((ii)\) What services are sending messages to DPNs? And \((iii)\) What is their potential for abuse? To answer these questions, we develop a methodology to automatically gather and process a large-scale and longitudinal dataset containing 70.95M messages received by 17,141 unique DPNs, collected over a time span close to 12 months. Using this dataset, we make the following contributions: * _Study on the usage of DPNs._ We measure the volume of messages received by DPNs over time. We find that these numbers receive collectively more than 1.4M messages per week. A language analysis of message contents suggest a wide international user base. 
* _Service attribution._ We develop a framework to accurately attribute an SMS message to more than 200 popular global Internet services that require a registered account. We observe in our dataset messages sent by online service providers of all sectors, sizes, and geographical scope, including global companies (_e.g._, Uber, Facebook, Amazon, WhatsApp), security-sensitive industries (_e.g._, banking), and services developed by smaller and more geographically localized organizations (_e.g._, Paytm in India). * _Measuring the potential for abuse._ We develop a framework to infer and classify the purpose of an SMS as a proxy to measure their potential for abuse. We observe that nearly 80% of messages contain a One-Time Password (OTP), a single-use link, or both. This figure presents a significant increase with respect to the trend reported in the 2018 measurement [37], where this metric was at 67.6%. As these messages are closely related to 2FA processes, we hypothesize that DPN usage is closely related to anonymity or account fraud. Our findings suggest that the DPN ecosystem is an expanding and thriving field, and that the global SMS-based 2FA industry is oblivious to--or chooses to ignore--potential account abuses arising from it. **Ethics issues and dataset release**. The dataset gathered and analyzed in this study might contain sensitive data since it involves phone numbers that, due to number rotation, might have belonged in the past or might belong in the future to a real user, and also SMS messages that can potentially contain personal data or access credentials. We obtained approval from our IRB to conduct this study provided that \((i)\) we make no efforts to deanonymize the data in a way that could facilitate linking messages to actual users; \((ii)\) we inform affected parties in case that any security or privacy concerns are identified during the study; \((iii)\) we do not use the collected data for any secondary purposes other than the scope of this study; and \((iv)\) we share the dataset on demand with other researchers provided that they agree on using it for research purposes and under conditions similar to those described above. CSV files with the list of analyzed gateways and the services found in the messages are available at [https://github.com/josenmo/your-code-is-0000](https://github.com/josenmo/your-code-is-0000). ## II Background _Disposable Phone Numbers_ (DPNs) are publicly available phone numbers offered by _Public SMS Gateways_ (PSGs), or simply _gateways_. DPNs allow receiving messages from a wide catalog of international phone numbers without the need for a SIM card. PSGs are usually free services that do not require registering an account, so multiple users can simultaneously use the same DPN at any time. However, there are gateways offering "premium" DPN services that require a single-use payment to read the most recent messages. The SMS messages received by a DPN are compiled in an _inbox_. Depending on the gateway offering the service, some inboxes have a smaller capacity than others. For example, some only list the latest 30 messages while others index all messages received over the last months. SMS messages published in an inbox typically have four attributes: * _Receiver_. The international number of the message recipient. * _Sender_. The party that, allegedly, sent the message. Typically, the sender is an online service (_e.g._, Amazon or WhatsApp) that sends automatically-generated messages. 
This data can be displayed as an international phone number, a short code [47] or a sender ID [9]. We note that PSGs providing the sender data in the form of short code or string sender ID may be incorrect due to their dependency on poorly implemented or maintained Caller ID Lookups [2]. * _Reception timestamp_. The date and time when the message was received. It is displayed as a date string using the timezone of the gateway's server (_e.g._, "1st Jan 2022, 12:34 pm") or as a relative timestamp (_e.g._, "12 hours ago"). The latter format allows accurately determining the date when an SMS message was received but not its time. * _Content_. The actual payload of the SMS, often containing OTP codes and Single-use Links. OTPs are short numeric codes that are sent to the user of an online service to verify its identity or confirm an action by inputting the code in an application. Typically, these codes are 4 to 6 digits long with dashes or spaces to make them more readable. Single-use links are the equivalent to OTPs in the form of URLs, but instead of inputting a code, the user is expected to click and visit the link. Some gateways redact their text content to remove numeric codes. ## III Related work Reaves _et al._ conducted in 2016 the first large-scale study of the DPN ecosystem [36]. Their work analyzed 400 DPNs in 28 countries and showed how PSGs contribute to online account fraud. The authors conducted a follow up study in 2018 that doubled the size of its dataset but they did not observe any discernible change in the ecosystem [37]. Thomas _et al._ conducted a longitudinal analysis of Phone-Verified Accounts (PVAs) underground sellers and their infrastructure, proposing multiple strategies that service operator can leverage to combat fake accounts [40]. In 2019, Hu _et al._ measured Disposable Email Services (DEA) [21], which share commonalities with DPN as both services can be abused for verifying and managing online accounts. Dmitrienko _et al._ carried a study on multiple Two-Factor Authentication (2FA) schemes and weaknesses in their implementation, showing how malware can intercept messages with single-use codes [11]. Following the same research line, Lei _et al._ investigated how to exploit Android APIs to steal OTP codes sent through SMS [27]. ## IV Methodology This section describes our methodology to identify and crawl the PSGs, and the post-processing techniques used to parse and analyze the messages. Figure 1 provides an overview of our pipeline. ### _PSGs Identification_ To compile a recent and global list of widely used PSGs, we leverage two complementary methods: * _PSGs extraction from the Tranco list_. Using the Tranco top-3M list1[26], we perform an initial automated token-based search to find possible PSGs using the Python library WordSegment [24] and keeping the entries matching a set of predetermined keywords.2 This method identifies 17 sites offering DPN services, 15 of which are still indexed by Tranco as of Dec. 2022. Footnote 1: Generated on the 14th of June, 2021. Available at [https://transco-list.eu/list/NK2W](https://transco-list.eu/list/NK2W). * _PSGs extraction from search engines_. Some gateways are accessed exclusively through mobile apps instead of websites, so they are likely missed by Tranco. We leverage Google's search engine to increase our coverage using the same set of keywords. This step reveals 12 more gateways, 3 of which are offered by apps published on Google Play. 
Footnote 2: _free, get, receive, online, temp, opt, inbox, virtual, verify, verification and code._ We manually review each candidate to discard unrelated, parked or expired domains. Additionally, we remove PSGs which \((i)\) are stale and had not received a single message in months; \((ii)\) are copies of another gateway belonging to the same provider; or \((iii)\) are aggregators that harvest and publish messages from other gateways. We note that aggregators are easy to identify because they contain duplicate DPNs that appear in other PSGs, and they publish messages at a slower rate compared to the original source. ### _Messages Collection_ We use a purpose-built Chromium-based crawler instrumented with Playwright [13] to fetch the DPNs and their messages for each gateway identified in the previous step. We use an actual web browser instead of a simpler and easier-to-maintain script because some PSGs need to run JavaScript code on the client-side to properly render webpages. For those cases, sending crafted HTTP requests and parsing their responses is not enough. The PSGs accessible exclusively through Android mobile apps are implemented as WebViews, and we observe their traffic to find the HTTP requests that fetch the DPNs from the remote server. Then, we crawl these gateways with our purpose-built crawler, sending HTTP requests using the fetch Web API [30]. This approach allows us to avoid maintaining two separate codebases. As mentioned in Section II, some PSGs keep a copy of all received messages for a long time, while others only show the latest \(n\) messages. To minimize the number of messages that might get lost, we crawl PSGs at different sampling periods. We fine-tune the crawler using the reception timestamp of the oldest message in an inbox and the popularity of the gateway. We periodically adapt the crawling rate to guarantee that we do not miss too many messages in case a PSG starts receiving more traffic. Due to the lack of unique message IDs, we assign our own identifier to messages. The identifier we use is a composite key formed by the receiver (_i.e._, DPN), the sender, the reception timestamp and the text content. Thanks to this identifier, we can track DPNs and messages across different PSGs to detect and remove duplicates. ### _Language and Service Identification_ A DPN can receive messages from online services with a global user base, and also from other regional ones. For this reason, we first need to detect the message language. **Language Detection**. Guessing the language of short messages is an open research problem [5]. We first normalize the message to remove punctuation, duplicate spaces and other defects (_e.g._, corrupted Unicode characters), and then automatically assign the language to those messages that contain highly distinctive substrings (_e.g._, "verification code") or use Unicode characters [44] unique to world languages (_e.g._, the Modi script in Hindi). If this process fails, we use the JavaScript franc library [48] to determine the language.
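The first, cheap pass of this language detection step can be sketched as follows. The substring hints and Unicode block ranges shown are small illustrative subsets of the lists used in practice, and mapping a script to a single language is a simplification; messages that fall through both checks are handed to a general-purpose detector (franc in the actual pipeline).

```python
import re
import unicodedata

# Illustrative subsets; the real lists cover many more languages and scripts.
DISTINCTIVE = {
    "english": ["verification code", "your code is"],
    "spanish": ["codigo de verificacion"],
    "german":  ["bestatigungscode"],
}
SCRIPT_RANGES = {
    "russian": (0x0400, 0x04FF),   # Cyrillic
    "arabic":  (0x0600, 0x06FF),
    "hindi":   (0x0900, 0x097F),   # Devanagari
    "chinese": (0x4E00, 0x9FFF),   # CJK Unified Ideographs
}

def normalize(text: str) -> str:
    # Repair Unicode defects and collapse duplicate whitespace.
    text = unicodedata.normalize("NFKC", text)
    return re.sub(r"\s+", " ", text).strip().lower()

def guess_language(text: str):
    """Cheap heuristic pass; returns None when a full detector is needed."""
    text = normalize(text)
    plain = "".join(c for c in unicodedata.normalize("NFD", text)
                    if not unicodedata.combining(c))   # strip diacritics
    for lang, hints in DISTINCTIVE.items():
        if any(h in plain for h in hints):
            return lang
    for lang, (lo, hi) in SCRIPT_RANGES.items():
        if any(lo <= ord(c) <= hi for c in text):
            return lang
    return None   # fall back to a general-purpose language detector
```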
**Sending Service**. Gateways often include the service name as the "sender" of a given message, but this information might be inaccurate due to the presence of services with outdated Caller ID Lookups. We choose to use our own list of keywords to map a message to the associated service. To extract the keywords, we \((i)\) normalize the message text and remove diacritics using the Normal Form Decomposition (NFD) [45], \((ii)\) tokenize the content separating by whitespace and add the unigrams as keywords, and \((iii)\) use the unigrams to generate bigrams and add them as keywords as well. Then, two of the authors manually analyzed the list of the top-10k most frequent keywords and flagged those that were service names or strictly related terms (_e.g._, branded domain names or mottos). After this process, we end up with a list of 1.7k meaningful keywords and 212 unique services. We note that our keyword-based approach might incorrectly attribute a message to a service if its text contains the service name but it was not sent by that particular service. To limit the number of wrongly attributed services, we iterate over the keyword-flagging process to create more specific keyword-matching rules for the services with the most mislabels. We measure the accuracy of our service classifier by manually labeling a sample of 4k randomly selected messages, obtaining an accuracy of 99.10%. Most mislabels are false negatives (_i.e._, messages from a known service but tagged as "unknown"), which occur in 0.78% of cases. ### _Purpose Identification_ We use hierarchical divisive clustering to group messages into patterns that correspond to activities on an account for a given service. In the first phase, we group messages depending on the associated service and we normalize the message content. To generate the normalized version of the message, we remove multiple whitespaces and punctuation characters, perform stemming, and filter any stopword or token with fewer than two characters. Next, we replace all IBANs, URLs, email addresses, IP addresses, numeric codes and timestamps found in a message with a fixed pattern. These patterns or _identifiers_ often appear in notification messages informing of user activity, and can be easily detected using regular expressions due to their structured format. We replace each identifier with a pattern that captures the identifier type and length. For example, a URL with 36 characters is replaced with "URL{36}", and a sequence of 4 digits becomes "NUMERIC{4}". Once this process is completed, we use the identifiers to group messages into clusters. Fig. 1: Our pipeline for identifying PSGs, crawling messages and post-processing of data. For language detection, arrows with full lines (green) indicate a passed check, while dashed lines (in red) denote a failed check. Since a message might contain multiple identifiers, our clustering algorithm prioritizes longer identifiers (_e.g._, a message with "URL{36}" and "URL{20}" will be assigned to the cluster "URL{36}") and those that better capture the _purpose of the message_ (_e.g._, an IP address is usually more representative than a generic numeric code). For the priorities, we follow the same order in which we introduced the identifiers. We tokenize the normalized messages on whitespace and then calculate a fingerprint of the message using SimHash [29]. We leverage the Hamming distance to compare hashes given a similarity threshold [28], which determines the number of bits that can differ between two near-duplicates for them to be considered similar. We empirically chose a threshold of 10 with 64-bit hashes after experimenting with different values and validating the results against a ground truth of 10k randomly-selected normalized messages.
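A compact sketch of the normalization and fingerprinting steps just described. The identifier patterns ("URL{36}", "NUMERIC{4}"), the 64-bit fingerprint size and the Hamming threshold of 10 come from the text; the exact regular expressions and the per-token hash function used to build the SimHash are our own illustrative choices.

```python
import re
import hashlib

IDENTIFIER_PATTERNS = [           # ordered by priority, as in Sec. IV-D
    ("IBAN",    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b")),
    ("URL",     re.compile(r"https?://\S+")),
    ("EMAIL",   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")),
    ("IP",      re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")),
    ("NUMERIC", re.compile(r"\b\d{3,}\b")),
]

def normalize(text):
    """Replace structured identifiers with patterns such as URL{36} or NUMERIC{4}."""
    for name, pattern in IDENTIFIER_PATTERNS:
        text = pattern.sub(lambda m: f"{name}{{{len(m.group(0))}}}", text)
    return re.sub(r"\s+", " ", text).strip().lower()

def simhash(tokens, bits=64):
    """64-bit SimHash fingerprint of a token sequence (md5-based token hashes)."""
    weights = [0] * bits
    for tok in tokens:
        h = int.from_bytes(hashlib.md5(tok.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, w in enumerate(weights) if w > 0)

def near_duplicates(h1, h2, threshold=10):
    """Two messages are clustered together if their hashes differ in at most 10 bits."""
    return bin(h1 ^ h2).count("1") <= threshold

# Example: two OTP messages collapse to the same normalized pattern.
a = simhash(normalize("Your code is 482913. Do not share it.").split())
b = simhash(normalize("Your code is 771045. Do not share it.").split())
assert near_duplicates(a, b)
```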
**Lifecycle of an Account**. Once a user has provided a phone number to a service, the provider can send messages to notify the user about events associated with their personal account during its lifecycle. We label messages with a tag that captures the type of account activity or _purpose_, following the NIST SP 800-63B standard. This specification provides technical guidelines to agencies for the implementation of digital authentication [19]. Each of the resulting purposes is tied to a particular stage in the lifecycle of an account: \((i)\) creation (a new account is created on the service), \((ii)\) verification (the service requests the user to verify their identity), \((iii)\) activity (the service notifies of user-performed actions or important events), \((iv)\) update (the user's personal data on the service is modified), and \((v)\) recovery (the service detected an attempt to recover access to an existing account). **Automated Message Labeling**. After defining the categories of purposes to monitor, we randomly select 6,429 message clusters and manually assign one category to each. To identify the best label for the cluster, we leverage the normalized version of the messages with the highest number of occurrences. By manually inspecting each message, we label the cluster using one of the five purposes defined above. We also use the message content to generate patterns that can identify other messages serving a similar purpose. In the final step, we apply the obtained patterns to automatically infer the purpose of all messages (and clusters) in our dataset. To evaluate the accuracy of our automated labeling technique, we randomly select 1k clusters and pick the content of the normalized message with the highest number of occurrences to confirm whether the assigned label is correct. We observe that our approach selects the right label in 91.3% of the cases, with the most common sources of mislabeling being the "activity" (7.8% of the cases) and "verification" (0.32%) labels.
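The final, automated labeling step boils down to matching each normalized message against purpose-specific patterns. The patterns below are purely illustrative placeholders (the actual rules were derived from the 6,429 manually labeled clusters and are not reproduced here), and ties are resolved by simple first-match order.

```python
import re

# Illustrative patterns only, one set per lifecycle purpose of Sec. IV-D.
PURPOSE_PATTERNS = {
    "creation":     re.compile(r"\b(welcome|account (was )?created|registration)\b", re.I),
    "recovery":     re.compile(r"\b(reset your password|recover|account recovery)\b", re.I),
    "update":       re.compile(r"\b(changed|updated|new (phone|email|password))\b", re.I),
    "verification": re.compile(r"\b(verification code|verify|one[- ]?time|otp|passcode)\b", re.I),
    "activity":     re.compile(r"\b(login|signed in|new device|order|payment|delivered)\b", re.I),
}

def label_purpose(normalized_message: str) -> str:
    """Assign one of the five lifecycle purposes, or 'unknown' if nothing matches."""
    for purpose, pattern in PURPOSE_PATTERNS.items():
        if pattern.search(normalized_message):
            return purpose
    return "unknown"
```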
Over the course of our work, the number of PSGs slowly decreased as some stopped working properly (_i.e._, receiving new messages), went offline or added protective measures like CAPTCHAs or JS challenges to prevent their sites from being crawled. In this last case, we did not attempt to circumvent such measures for ethical reasons. Although we did not perform a longitudinal analysis of the ecosystem, we repeated the PSG identification process described in Section IV-A 18 months after the initial run to find changes in the list of active PSGs.3 As a result, we find 18 new gateways that did not appear before either because they did not exist at the time or were not popular enough. In addition, 14 gateways (half of the entries from Table I) are no longer included in this last outcome. These changes suggest that many PSGs in the DPN ecosystem are relatively volatile, with an approximate 1-year lifespan. Footnote 3: Using the Tranco list generated on the 11th of December, 2022. Available at [https://tranco-list.eu/list/JXSLY](https://tranco-list.eu/list/JXSLY). ### _DPN Dynamics_ In Section II, we hypothesize that a DPN can appear in different gateways. We confirm it by noticing that 9.1% of the DPNs in our dataset appear in more than one PSG. We attribute this behavior to DPN rotation and infrastructure sharing (_i.e._, DPN reuse). **DPN rotation**. A typical PSG builds its DPN pool by \((i)\) acquiring SIM cards from multiple mobile network operators and adding them to GSM boxes; or \((ii)\) renting VoIP lines from third-party External Short Messaging Entities (ESMEs) like Bandwidth.com [3] or Twilio [42] to handle the reception of messages. PSGs often distinguish between so-called _"real"_ (mobile lines) and _"virtual"_ numbers (VoIP lines) in their offerings.4 VoIP lines can be rented and canceled at any time, either manually or programmatically. The number can eventually be rented by another provider. Footnote 4: The receive-sms-online.info gateway is an example of a website advertising numbers "based on real SIM". On the other hand, receive-sms.cc announces their DPNs as "virtual phone numbers". **DPN reuse**. Some PSGs may share a considerable number of DPNs that cannot be explained by DPN rotation. We hypothesize that a given operator might reuse the same infrastructure across different gateways under their management or ownership. A clear case of this is the pair formed by 99dark.com and temp99.com, where the former is the API endpoint for an Android app [41] and the latter is a public website. Besides sharing 483 DPNs, other reasons suggest that these two gateways are operated by the same group, such as similar naming and contact information, and the same DNS nameservers from Cloudflare. Another not-so-straightforward case is the smsfree.cc gateway, which has hundreds of DPNs in common with other seemingly unrelated PSGs, and is used by (at least) a website [1] and an Android app [14]. The simplest explanation is that smsfree.cc is a white-label API endpoint reused by many platforms (both web and mobile ones) that lends its DPNs to other parties as if it were a mobile carrier. However, given that this PSG also rotates DPNs very frequently (sometimes even daily), it can also be the case that it aggregates phone numbers from other gateways on purpose as soon as they become available to rent. **DPN lifetime**. Table II reports the lifetime (in hours or days) during which a DPN is active.
To determine the activity window, we calculate the amount of time that passed between the first and the last time an SMS is sent to a particular DPN. We identify three main patterns: short-lived DPNs (12%), phone numbers that are active up to four weeks (58%), and DPNs that receive messages for several months (29%). When inspecting the volume of messages that a phone number receives, we find that it is usually proportional to the DPN lifetime. We observe a similar trend both in the subset of messages sent by a service known by our classifier ("Msgs. w/ service" row in Table II) and in the entire dataset. Long-lasting DPNs tend to receive more messages, being responsible for over 81% of the total messages that we collected. The scenario changes when we check the total number of services for which we observe at least one message. In this case, the breakdown depending on the DPN activity is smoother, and phone numbers that were active for over one month receive messages from only 11% additional services when compared to DPNs which are active for less than one day.

Fig. 2: Daily volume of messages, active DPNs and online PSGs.

The lifetime distributions that we observe have some implications. First, we notice services being associated with DPNs as soon as the phone number becomes available in a gateway. This might be a possible indicator of abuse and we explore it further in Section VI-B. Second, DPNs with a long lifetime unnecessarily expose users to higher privacy and security risks. This is the case for services that leak Personally Identifiable Information (PII) such as email addresses or usernames in the SMS they send. While some gateways only display the messages received in the last 24 hours (see Section II), others provide access to historical information. A long DPN lifetime not only increases the time window in which scammers can collect sensitive information, but also exposes users to other risks. An attacker that harvests login information by scraping DPN messages can attempt to perform an account takeover if the DPN is still active and is used as a recovery mechanism for accessing an account. Alternatively, the DPN can be abused to discover previously unknown user credentials, in case a service uses SMS messages during the procedure of recovering forgotten usernames and email addresses. Lastly, we notice that if we merge into a single class the two sets of short-lived DPNs (_i.e._, those that live up to one week), the resulting set of 5,040 numbers received at least one message from all the services that we monitor. This suggests that the 212 services that we monitor are extremely popular and widely used across different countries. ### _Country and Language Diversity_ The international scope of our analysis requires us to deal with DPNs receiving messages in any language. We use the international phone number prefix of a DPN to associate it with a country (_e.g._, "+44" is the international code for the United Kingdom). This allows us to locate DPNs in 57 different countries. Using the language detection method described in Section IV-C, we identify up to 31 languages in our dataset. The diversity of languages we find suggests an international demand for DPN services. With our approach, we could not determine the language of merely 4.9% of the messages, due either to their short length or to the presence of ambiguous words (_e.g._, "Code: 0000").
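The prefix-to-country mapping mentioned above can be reproduced with an off-the-shelf library; the snippet below is a small sketch using the `phonenumbers` package, an illustrative choice on our part rather than the tooling used in this pipeline, and the example numbers are made up.

```python
import phonenumbers

def dpn_country(number_str: str) -> str:
    """Map a DPN in international format (e.g., "+447911123456") to an
    ISO 3166-1 alpha-2 region code such as "GB"."""
    parsed = phonenumbers.parse(number_str, None)
    return phonenumbers.region_code_for_country_code(parsed.country_code)

# Example: group DPNs by country before correlating with message language.
dpns = ["+447911123456", "+34911123456", "+919876543210"]
by_country = {}
for dpn in dpns:
    by_country.setdefault(dpn_country(dpn), []).append(dpn)
print(by_country)  # e.g., {'GB': [...], 'ES': [...], 'IN': [...]}
```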
Unsurprisingly, English is the most prevalent language in our dataset, accounting for over 75% of the messages, followed by a long tail of other languages such as Indonesian (3.0%), French (2.9%), Portuguese (2.8%), Spanish (2.1%), Arabic (1.7%), Chinese (1.6%), and Russian (1.0%). We do not find a 1-to-1 mapping between a language and the countries where that language is spoken. For example, only 30% of German-written messages are sent to DPNs with the German international call prefix. Instead, we see messages being sent globally regardless of their language. This global scope is clearly noticeable for English messages, half of which are sent to non-English speaking countries. ### _OTPs and Single-use Links_ In their 2018 study, Reaves et al. [37] found that 67.6% of the messages sent to DPNs contained a code or OTP, thus concluding that receivers were being used for account verification and user authentication [36]. In this paper, we extend this methodology and distinguish between Single-use Codes (_i.e._, OTPs) and Single-use Links. We find OTPs are still on the rise, with 77.02% of messages containing them. Single-use Links are less popular, being present in just 2.18% of messages, followed by 0.80% offering both an OTP and a link. Given that only 14M messages in our dataset (20%) have neither of these single-use means, and that sending services use them to verify authenticated actions (_i.e._, those that require user intervention), we conclude that DPNs are predominantly being used to create accounts on online platforms. ### _Malicious URLs_ After discarding the aforementioned Single-use Links, we end up with 451,165 messages that contain a URL. These amount to 178k unique URLs after removing duplicates. In an attempt to find malicious or harmful URLs in messages sent to DPNs, we use the Google Safe Browsing API [17] to identify web resources flagged as phishing, malware or spam. To account for shortened links, we expand their URLs before checking them against Safe Browsing's database. This expansion is performed by sending an HTTP request to the shortened URL and retrieving the final "location" header without effectively loading the contents of the destination website. We use a list of publicly-known shortener services to determine which items need to be expanded.5 Footnote 5: See [https://github.com/boutetnico/url-shorteners](https://github.com/boutetnico/url-shorteners). With this pipeline, we find merely 41 URLs spread across 125 messages that are considered harmful by Google. All of them fall into the "social engineering" category. Most malicious URLs appear to be either Apple-related scams or phishing campaigns targeting banks. ## VI Analysis of Services This section presents the analysis of the services found sending SMS to DPNs. Our analysis focuses on two goals: \((i)\) determining whether DPNs are actually being used for creating accounts on online services, and \((ii)\) measuring their potential for abuse. Given the technical limitations of qualitatively analyzing all 70M messages in our dataset (which includes a long tail of small and lesser known services), we focus on the subset of 46,041,215 entries only containing SMS from the top 212 services. We still consider it a representative subset of the ecosystem as it is an order of magnitude larger than the whole dataset from the previous study [37]. As mentioned in Section IV-C, this list of services is based on the most frequently occurring keywords found in our dataset. Table III lists the top-10 sending services by number of messages sent to DPNs.
We find lesser-known yet in-demand services like DENT [10] (which offers free mobile phone lines), regional operators like Disney+ Hotstar [34] (an Indian streaming platform recently acquired by The Walt Disney Company), and well-known companies with a global userbase (_e.g._, Uber, WhatsApp). Services like Google, Netflix, Telegram and Tinder are not included in Table III because, individually, they account for less than 1.5% of the total number of messages. In fact, there is a long tail of sending services in the DPN ecosystem. We find examples of companies operating in a particular country or world region (_e.g._, Smood [38] and Careem [7], a mobility app from Uber used in the Middle East) and recognized names in the Finance (_e.g._, HSBC, Chase, Citibank), Telecommunications (_e.g._, AT&T, Deutsche Telekom, Overbit) and even Public Administration (_e.g._, NHS, government agencies from Spain and India) sectors. Overall, we find the top services by volume account for 65% of all messages received by DPNs and offer an ample range of services, including Social Media, Entertainment, Education, and even sensitive ones such as Telecommunications and Finance. Globally-known services like TikTok, Facebook, WhatsApp and Amazon have their messages spread across more than half of all DPNs. Conversely, highly localized services like the aforementioned Disney+ Hotstar have a higher density of messages that concentrate in just 50 DPNs, 41 of which have an Indian country prefix. DENT is an interesting case, as it is the second service with most messages sent to DPNs yet it only appears in 694 receivers from 45 different countries. If we sort by DPN coverage, we find two online services with more than 10k receivers that do not appear in Table III as they each account for less than 0.50% of messages. These services are Bigo Live [4] and Kwai [25], two social media platforms similar to TikTok functionality-wise. The median number of services that appear in a DPN is 40, with a maximum of 167 services (covering 78.7% of the known services list). Every DPN in our dataset received messages from at least one of the top 212 services. ### _Evidence of Usage_ Given that all the online services we labelled require users to create an account, we can safely assume these messages are sent by the service as a consequence of an action performed by the user (_e.g._, registration, login, transaction confirmation). Therefore, users must be registering accounts on online services using DPNs in order for these messages to appear in our dataset. Besides some exceptions like Google, most services enforce a 1-to-1 relation between user accounts and phone numbers. In practice, this means that a given phone number can only be tied to a single service account. For this reason, and considering that DPNs are by definition shared between multiple users, there is an incentive to register an account on a popular service as soon as a DPN becomes available on a gateway. To verify this dynamic, we measure how long it takes for an online service to appear in a DPN, presumably as a direct result of an account registration or recovery (_i.e._, re-verification) event. Figure 3 provides boxplots for each service grouped by category. These boxplots represent the _Time-to-First-Message_ (TTFM) since the DPN was first seen in a gateway until a message from a given service was received. We find that a considerable number of services, especially in the telecommunications category, have a median TTFM shorter than 24 hours (colored area in the chart).

Fig. 3: Time-to-First-Message per service grouped by service category.
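The TTFM metric above can be computed directly from the crawled messages. The following pandas sketch assumes a dataframe with one row per message and hypothetical `dpn`, `service`, and `timestamp` columns, and approximates a DPN's first-seen time by its earliest crawled message; it is an illustration, not the original analysis code.

```python
import pandas as pd

# msgs: one row per crawled SMS with (hypothetical) columns dpn, service, timestamp.
msgs = pd.DataFrame({
    "dpn": ["+447911123456", "+447911123456", "+34911123456"],
    "service": ["uber", "whatsapp", "uber"],
    "timestamp": pd.to_datetime(
        ["2021-06-01 10:00", "2021-06-03 09:30", "2021-06-02 12:00"]),
})

# A DPN's "first seen" time, approximated by its earliest crawled message.
first_seen = msgs.groupby("dpn")["timestamp"].min().rename("first_seen")

# Earliest message per (dpn, service) pair.
first_per_service = (msgs.groupby(["dpn", "service"])["timestamp"]
                         .min().rename("first_msg").reset_index())

# Time-to-First-Message = first message from the service - DPN first seen.
ttfm = first_per_service.join(first_seen, on="dpn")
ttfm["ttfm"] = ttfm["first_msg"] - ttfm["first_seen"]

# Median TTFM per service, the quantity summarized by the boxplots.
print(ttfm.groupby("service")["ttfm"].median())
```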
### _Potential for Abuse_ We also analyze the rate at which messages from a specific service are being sent to a given DPN. As discussed in Section VI-A, for a service message to appear in a DPN, a user must have previously performed an action that triggered the message. Because service messages involve manual interaction, too many messages in a short period of time might be indicative of some sort of automation. While we cannot conclude that the presence of automation is always related to service abuse (_e.g._, creation of bot accounts, fake engagement [32]), it is definitely abnormal for a legitimate user to request an OTP verification code or similar multiple times per hour on the same account and for an extended time. Figure 4 shows the services for which we detect long bursts of account-related messages (_i.e._, those having any of the purposes mentioned in Section IV-D except "activity"). For each subplot, we pick the top 50 DPNs with the most received messages and only color the days when a phone number received at least 72 messages (roughly equivalent to 3 messages per hour). We also represent the DPN lifespan as continuous gray lines. These services contain DPNs with bursts that extend over several days, meaning that some phone numbers kept receiving more than 72 messages per day for more than 100 days straight. In the case of Disney+ Hotstar and Sony LIV, we can attribute this behavior to the two services being only available in India and to the scarcity of DPNs with an Indian calling prefix. For DENT and Uber, we observe this dynamic across a much larger pool of phones. While there are legitimate uses for DPNs (_e.g._, registering anonymously on dating sites for privacy reasons), DPNs can also be abused to create fake accounts either manually or using some automation tool, as Figure 4 suggests. In fact, many PSGs have banners promoting or advertising services offering PVAs (_i.e._, phone-verified online accounts available for sale on underground sites, mostly for nefarious purposes like fake social engagement [40]). We posit that a reasonable explanation for that is that the PVA provider and the PSG are the same organization or are affiliated in some way. ## VII Case Studies We next present several cases of interest found in our dataset that illustrate the potential for abuse of DPNs. **Free Trials.** Oftentimes, popular services offer promotions or _trials_ to new users. DPNs allow users to register multiple accounts and obtain access to features that would not be available beyond the trial period. For example, the Indian online e-learning platform BYJU offered a free 1-to-1 class to newly registered users [6]. Our dataset contains around 15k messages from this platform, 7% of which are associated with activity notifications (_i.e._, the service notifying of user-performed actions or important events). The remaining 92.5% are either associated with "account creation" or "account verification", suggesting that users leverage DPNs to test the service or access features that would not be available without paying a subscription. **Phone Chaining.** One of the most interesting uses we see for DPNs is the registration of private secondary phone numbers. This is a common practice in the PVA ecosystem and it is usually known as _phone chaining_ [40]. Some big actors enabling this are DENT [10] (8% of the dataset are messages from this service), Google Voice [18] and TextNow [39].
In fact, phone chaining is so common in this ecosystem that more than 6K DPNs (36% of the dataset) have been, at some point, registered with one or more of just these three services. We also looked for popular mobile phone carriers outside the list of top services and found evidence of new lines being registered online and then their SIM cards being sent by mail. We have obtained evidence of such practices even for carriers located in countries with registration laws mandating Proof of Identity, such as Australia, India and France [20]. **Finance**. In this category we find cryptocurrency exchanges, FinTech (Financial Technology) apps, and traditional banks. In all cases, we see messages denoting successfully completed account creation and transactions. One such example is Empower, a FinTech app that offers microloans. This service has sent messages to at least 131 different DPNs on multiple occasions confirming the deposit of funds, meaning that it can potentially be abused for loan fraud. Outside the top 212 services list, we find users linking DPNs to bank accounts to receive verification codes (_e.g._, Citibank, HSBC, Barclays) and even opening entirely new accounts, raising concerns about whether this complies with Europe's PSD2 Strong Customer Authentication (SCA) requirement [12]. We found more than 100 DPNs registered against banks, although this figure is probably a lower-bound estimate as we have not thoroughly looked at this matter, nor is it the focus of this study.

Fig. 4: DPNs with the longest bursts of account-related messages grouped by service. The X axis shows days, while every row in the Y axis plots the messages for a given DPN. Gray pixels show the lifespan of a DPN.

**Healthcare**. We find evidence of DPNs being used for registering for medical services such as the British National Health Service (NHS) [31] or CoWIN (India's COVID-19 Vaccination Program) [22]. In both cases we find sensitive information being sent over SMS after the user has registered with the service. This includes secret single-use codes, COVID-19 test results and appointments including their precise date and location. We also identify cases where the names and surnames of the user are sent alongside the previous information. **Public Administrations**. We find various types of SMS messages sent by government agencies. While in most cases these are just innocuous notifications, there are two cases that draw our attention. "Cl@ve" [16] is an Identity Provider used to authenticate against the Spanish Public Administration by sending an OTP code to a registered phone number with every login attempt. The presence of this service in the dataset is concerning since it is tied to a citizen or a registered company, and can be used to perform sensitive procedures. We also encounter messages related to "Aadhaar," India's ID system and the largest in the world [35], which also sends OTP codes when logging in to banks and other online services [46]. ## VIII Conclusions Online services have recently doubled down on their efforts to implement account verification and 2FA flows, using SMS as one of the channels to deliver these messages. In this paper, we show that the DPN ecosystem is mostly being (ab)used for circumventing these security mechanisms without needing a personal phone number. We also observe a significant increase in the usage of DPNs for creating fake accounts since the last available measurement from 2018, jumping from thousands of messages received per year to millions.
We find that operators of the PSGs offering DPNs seem to be, in some cases, strongly tied to underground markets offering Phone Verified Accounts (PVAs). We also find that online services do not have effective protections against the abuses of this ecosystem: globally recognized services (such as Google and Facebook), banks, governments, and small brands alike send verification messages to DPNs. **Future Work**. Our keyword-based message classification shows a huge long tail of services and purposes yet to be analyzed. Given the scale of the dataset, a more in-depth automated analysis is needed to understand what trends lie in this long tail. One interesting research challenge is the definition of methodologies to identify DPNs and create effective blocklists to mitigate their potential for abuse when accounts are created. A second aspect we plan to explore in our future work is a detailed analysis of the messages with OTP tokens, investigating both message senders and recipients. Across our dataset, we observe that nearly 80% of the messages contain an OTP code. Fraudsters often abuse these messages to artificially inflate traffic to a range of numbers controlled by a single mobile network operator; in return, the mobile operator shares with the fraudster a portion of the generated revenue [43]. ## Acknowledgements This research was supported by the AEI grant ODIO (PID2019-111429RB-C21 and PID2019-111429RB-C22) and the Region of Madrid grant CYNAMON-CM (P2018/TCS-4566), co-financed by European Structural Funds ESF and FEDER. Jose Miguel Moreno was supported by the Spanish Ministry of Science and Innovation with an FPI Predoctoral Grant (PRE2020-094224). Srdjan Matic was partially supported by the Atraccion de Talento grant (Ref. 2020-T2/TIC-20184), funded by the Madrid regional government, and the PRODIGY Project (TED2021-132464B-I00), funded by MCIN/AEI/10.13039/501100011033/ and the European Union NextGenerationEU. Narseo Vallina-Rodriguez was supported by a Ramon y Cajal Fellowship (RYC2020-030316-I). The opinions, findings, and conclusions, or recommendations expressed are those of the authors and do not necessarily reflect the views of any of the funding bodies.
2310.16566
Model-enhanced Contrastive Reinforcement Learning for Sequential Recommendation
Reinforcement learning (RL) has been widely applied in recommendation systems due to its potential in optimizing the long-term engagement of users. From the perspective of RL, recommendation can be formulated as a Markov decision process (MDP), where the recommendation system (agent) can interact with users (environment) and acquire feedback (reward signals). However, it is impractical to conduct online interactions due to concerns about user experience and implementation complexity, and we can only train RL recommenders with offline datasets containing limited reward signals and state transitions. Therefore, the data sparsity issue of reward signals and state transitions is very severe, while it has long been overlooked by existing RL recommenders. Worse still, RL methods learn through the trial-and-error mode, but negative feedback cannot be obtained in implicit feedback recommendation tasks, which aggravates the overestimation problem of offline RL recommenders. To address these challenges, we propose a novel RL recommender named model-enhanced contrastive reinforcement learning (MCRL). On the one hand, we learn a value function to estimate the long-term engagement of users, together with a conservative value learning mechanism to alleviate the overestimation problem. On the other hand, we construct positive and negative state-action pairs to model the reward function and state transition function with contrastive learning to exploit the internal structure information of the MDP. Experiments demonstrate that the proposed method significantly outperforms existing offline RL and self-supervised RL methods with different representative backbone networks on two real-world datasets.
Chengpeng Li, Zhengyi Yang, Jizhi Zhang, Jiancan Wu, Dingxian Wang, Xiangnan He, Xiang Wang
2023-10-25T11:43:29Z
http://arxiv.org/abs/2310.16566v1
# Model-enhanced Contrastive Reinforcement Learning for Sequential Recommendation ###### Abstract. In recent years, reinforcement learning (RL) has been widely applied in recommendation systems due to its potential in optimizing the long-term engagement of users. From the perspective of RL, recommendation can be formulated as a Markov decision process (MDP), where the recommendation system (agent) can interact with users (environment) and acquire feedback (reward signals). However, it is impractical to conduct online interactions due to concerns about user experience and implementation complexity, and we can only train RL recommenders with offline datasets containing limited reward signals and state transitions. Therefore, the data sparsity issue of reward signals and state transitions is very severe, while it has long been overlooked by existing RL recommenders. Worse still, RL methods learn through the trial-and-error mode, but negative feedback cannot be obtained in implicit feedback recommendation tasks, which aggravates the overestimation problem of offline RL recommenders. To address these challenges, we propose a novel RL recommender named model-enhanced contrastive reinforcement learning (MCRL). On the one hand, we learn a value function to estimate the long-term engagement of users, together with a conservative value learning mechanism to alleviate the overestimation problem. On the other hand, we construct positive and negative state-action pairs to model the reward function and state transition function with contrastive learning to exploit the internal structure information of the MDP. Specifically, we learn two predictive networks such that positive state-action pairs lead to the rewards and next states observed in the dataset, while negative state-action pairs do not. Finally, regarding reward and transition model learning as auxiliary tasks, we use value-weighted regression to generate the recommendation policy. In this way, we alleviate the problems of data sparsity and overestimation. Experiments demonstrate that the proposed method significantly outperforms existing offline RL and self-supervised RL methods with different representative backbone networks on two real-world datasets. Sequential Recommendation, Reinforcement Learning, Contrastive Learning
Conventional RL methods rely on online strategies to learn through constant interaction with the environment (Zang et al., 2018; Wang et al., 2019), which is impractical in recommendation since online interaction is too expensive (Zang et al., 2018; Wang et al., 2019). In addition, cold-start training may also harm the user experience. Therefore, offline RL is more suitable for recommendation systems. Generally, studies on offline RL propose to restrict the agent from taking risky actions (Han et al., 2016; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), or constrain the estimation of value functions (Zang et al., 2018; Wang et al., 2019) to make the recommender more conservative, and alleviate the overestimation of value functions in recommendation (Han et al., 2016; Wang et al., 2019). However, current RL recommenders hardly consider the data sparsity of reward signals and state transitions, which are important for RL methods, since RL training usually suffers from high variance and needs as much data as possible for an accurate estimation of long-term engagement (Chen et al., 2018; Wang et al., 2019). Here we focus on offline RL for SeqRec.
By scrutinizing this research line, we find several inherent limitations: * Sparsity of reward and state transition. As the recommender cannot interact with users to collect new data in real time, it can only utilize the observed interactions for policy learning. However, compared with the huge state and action space, the observed interactions are extremely sparse in SeqRec, thus making it difficult for RL to learn a qualified representation of the state (Wang et al., 2019). * Overestimation of value function. In the offline RL setting, the value function is usually optimized on a small part of the action space, but evaluated on all valid actions. This easily results in inaccurate value function estimation (Han et al., 2016; Wang et al., 2019). Although the bias is not always positive, in combination with the maximization operator in RL, it can lead to a persistent overestimation of the value function (Han et al., 2016) and harm the final performance. * Ignorance of negative signal. In addition to positive samples, negative samples in recommendation can also provide rich collaborative signals to help the RL model learn by trial and error (Wang et al., 2019). However, in most existing RL-based studies (Han et al., 2016; Wang et al., 2019; Wang et al., 2019), negative samples are not fully utilized, which easily causes the model collapse problem, that is, the model can easily be optimized by predicting all interactions as positive ones. To tackle these challenges caused by the data sparsity problem, we believe that it is possible to develop an RL-based SeqRec model that exploits the internal structure information of the MDP (_i.e._, the reward function and state transition function). Specifically, we propose a new training framework, MCRL (Model-enhanced Contrastive Reinforcement Learning), to optimize the long-term engagement in SeqRec, which can be easily integrated into existing SeqRec models. There are two special designs for addressing the challenges. Firstly, inspired by the development of visual RL (Zang et al., 2018; Wang et al., 2019; Wang et al., 2019), we model the reward function and state transition (_i.e._, learning a network to predict the next state and reward from the current state-action pair) and regard them as auxiliary tasks to boost the state representation in the huge discrete state space. In this way, more self-supervised signals from the internal structure information of the MDP can be utilized to mitigate data sparsity. Secondly, we take items that are not interacted with by users as negative actions to construct positive and negative state-action pairs for model learning. Benefiting from the negative signal, the overestimation issue can be alleviated implicitly. It is worth noting that our approach cannot be attributed to model-based RL methods, as we do not utilize the learned model to generate virtual state transitions and reward signals, but rather view it as an auxiliary task. Then we design a simple and effective value function learning scheme to estimate the long-term engagement of different states. Finally, using value-weighted regression (Wang et al., 2019), we combine the value function estimation and imitation learning to extract the policy with the auxiliary tasks of learning the reward model and state transition model. The main contributions of this work are summarized as follows: * We propose a new RL recommender named MCRL, which utilizes the structure information of the MDP to mitigate the data sparsity issues.
* We introduce a new contrastive learning paradigm in MCRL for reward and state transition modeling from the perspective of negative actions. * We conduct extensive experiments to demonstrate that the proposed MCRL can significantly outperform existing offline RL and self-supervised RL recommenders with different representative backbone networks. ## 2. Task Formulation We first introduce the concept of sequential recommendation (SeqRec) and formulate the task as a Markov Decision Process (MDP) as well as a corresponding reinforcement learning (RL) setting. In a SeqRec scenario, we typically have historical sequential or session-based user data. Let \(\mathcal{I}\) denote the item set. A user-item interaction sequence can be denoted as \(x_{1:t}=\{x_{1},x_{2},\ldots,x_{t}\}\), where \(x_{i}\in\mathcal{I}\ (0<i\leq t)\) is the \(i_{th}\) item interacted with by the user. When a user starts a session, the recommendation system feeds items to the user one by one according to the user's feedback (_e.g._, clicks and purchases in an e-commerce scenario or viewing time in a social media scenario). The recommendation system aims to recommend the most relevant item \(x_{t+1}\) based on the user's historical interaction \(x_{1:t}\). Moreover, an ideal recommendation system would be expected to improve the long-term engagement of users.

Figure 1. The whole process of MCRL. Above the gray line is the data collection process during the interaction between recommender (agent) and user (environment). Below the gray line is the training procedure of MCRL. The red dotted line represents the transition model and reward model and the corresponding MDP components.

The SeqRec process can be formulated as a Markov Decision Process (MDP), defined by a tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma\rangle\): * State space \(\mathcal{S}\): a state \(s_{t}\in\mathcal{S}\ (t>0)\) is defined as the user-item interaction sequence before timestamp \(t\), _i.e._, \(s_{t}=x_{1:t}\). We can obtain the representation of the state by a sequential model \(G(\cdot)\), _i.e._, \(z_{t}=G(s_{t})\). * Action space \(\mathcal{A}\): an action \(a_{t}\in\mathcal{A}\) represents the recommended item. From offline data, we can extract \(a_{t}\) at timestamp \(t+1\), _i.e._, \(a_{t}:=x_{t+1}\). * State transition function \(\mathcal{P}:\mathcal{S}\times\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\): the state transition probability \(p(s_{t+1}\mid s_{t},a_{t})\) represents the probability density of transitioning to the next state \(s_{t+1}\in\mathcal{S}\) from the current state \(s_{t}\in\mathcal{S}\) after recommending an item \(a_{t}\). * Reward function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\): the reward \(r(s_{t},a_{t})\) is the immediate reward after recommending an item \(a_{t}\) to the user at state \(s_{t}\). * Discount factor \(\gamma\): the discount factor \(\gamma\in[0,1]\) determines the present value of future rewards. If \(\gamma=0\), the recommendation system would be concerned only with maximizing immediate rewards. If \(\gamma=1\), the recommendation system gives equal weight to future rewards as it does to immediate rewards.
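To make the formulation concrete, the sketch below turns one user session into \((s_{t},a_{t},r_{t},s_{t+1})\) tuples, assuming the 10-item window and the 0.2/1.0 click/purchase rewards used later in the experimental setup; the helper names and padding convention are illustrative assumptions rather than the authors' code.

```python
PAD = 0          # padding item id
SEQ_LEN = 10     # state = last 10 interacted items (padded on the left)
REWARD = {"click": 0.2, "purchase": 1.0}

def to_transitions(items, feedback):
    """Turn one user session into (s_t, a_t, r_t, s_{t+1}) tuples.
    items:    [x_1, ..., x_T] interacted item ids
    feedback: "click" or "purchase" for each interacted item
    """
    transitions = []
    for t in range(1, len(items)):
        state = items[:t][-SEQ_LEN:]
        state = [PAD] * (SEQ_LEN - len(state)) + state          # s_t  = x_{1:t}
        action = items[t]                                        # a_t := x_{t+1}
        reward = REWARD[feedback[t]]                             # r(s_t, a_t)
        next_state = (state + [action])[-SEQ_LEN:]               # s_{t+1}
        transitions.append((state, action, reward, next_state))
    return transitions

# Example session: three clicks followed by a purchase.
print(to_transitions([5, 17, 42, 8], ["click", "click", "click", "purchase"]))
```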
In RL, the value of taking action \(a_{t}\) in the state \(s_{t}\) under a policy \(\pi(a_{t}\mid s_{t}):\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\), denoted by \(Q_{\pi}(s,a)\), is defined as the expected cumulative reward (return) starting from state \(s_{t}\), taking the action \(a_{t}\), and then following the policy \(\pi\): \[Q_{\pi}\left(s_{t},a_{t}\right)=\mathbb{E}_{\left(s_{t^{\prime}},a_{t^{\prime}}\right)\sim\pi}\left[r\left(s_{t},a_{t}\right)+\sum_{t^{\prime}=t+1}^{\infty}\gamma^{\left(t^{\prime}-t\right)}\cdot r\left(s_{t^{\prime}},a_{t^{\prime}}\right)\right]. \tag{1}\] Similarly, we define the value of a state \(s_{t}\) under a policy \(\pi\), denoted by \(V_{\pi}(s_{t})\), as the expected return starting in \(s_{t}\) and thereafter following policy \(\pi\): \[V_{\pi}\left(s_{t}\right)=\mathbb{E}_{\left(s_{t^{\prime}},a_{t^{\prime}}\right)\sim\pi}\left[\sum_{t^{\prime}=t}^{\infty}\gamma^{\left(t^{\prime}-t\right)}\cdot r\left(s_{t^{\prime}},a_{t^{\prime}}\right)\right]. \tag{2}\] The optimization objective is to find the optimal policy \(\pi^{*}\) that maximizes the expected return: \[\pi^{*}=\operatorname*{arg\,max}_{\pi}\mathbb{E}_{\left(s_{t},a_{t}\right)\sim\rho_{\pi}}\left[V^{\pi}\left(s_{t}\right)\right], \tag{3}\] where \(\rho_{\pi}\) denotes the state marginals of the trajectory distribution induced by a policy \(\pi\). ## 3. Methodology In this section, we describe our proposed MCRL in detail. First, we introduce a value function estimator of the long-term value of various states in the recommendation system. We use a conservative TD loss to ease the overestimation problem. Next, we extract the policy based on the learned value function. To solve the problem of sparse data and the lack of negative feedback in RL recommenders, we reconstruct the reward function and state transition through contrastive learning during policy extraction, to enhance the state representation. Figure 1 illustrates the whole process of MCRL. ### Long-term Engagement Estimation via Value Function Since we have modeled the SeqRec task as RL, the estimation of long-term benefits is correspondingly equivalent to the learning of the value function \(V_{\pi}(s_{t})\) and \(Q\) function \(Q_{\pi}(s_{t},a_{t})\). As the interactions in the dataset are sparse, it is difficult to accurately estimate the Q value of every action. Therefore we learn a value function only to estimate the long-term engagement. To approximate the optimal value function in the dataset, we use expectile regression to yield an asymmetric \(\ell_{2}\) loss like (Srivastava et al., 2015; Wang et al., 2016; Wang et al., 2017): \[\mathbb{E}_{\left(s,a,r,s^{\prime}\right)\sim\mathcal{D}}\left[L_{2}^{\tau}\left(r(s,a)+\gamma V_{\phi^{\prime}}\left(s^{\prime}\right)-V_{\phi}(s)\right)\right], \tag{4}\] where \(L_{2}^{\tau}(u)=|\tau-\mathbb{I}(u<0)|u^{2}\), \(V_{\phi}(s)\) is a parameterized value function, and \(V_{\phi^{\prime}}\) is the target value function, which can be updated via Polyak averaging (Kang et al., 2017). Different from the DQN TD error (Kang et al., 2017), the training process of \(V_{\phi}(s)\) only uses \((s,a,r,s^{\prime})\) in the dataset and there is no argmax operation for the TD target \(r+\gamma V_{\phi^{\prime}}\left(s^{\prime}\right)\), hence the estimation is conservative. In this way, it alleviates the overestimation issue mentioned above, and we will discuss more about the value function loss in Section 4.
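A minimal PyTorch sketch of the expectile-regression value loss in Eq. (4) is shown below, assuming `v_net` and `v_target` are small networks over the state representation; the module and variable names, and the expectile level `tau`, are placeholders rather than the authors' implementation.

```python
import torch

def expectile_value_loss(v_net, v_target, batch, gamma=0.5, tau=0.7):
    """Conservative value learning: L_2^tau(u) = |tau - 1(u < 0)| * u^2 with
    u = r + gamma * V_target(s') - V(s), averaged over the offline batch."""
    s, r, s_next = batch["state"], batch["reward"], batch["next_state"]
    with torch.no_grad():
        td_target = r + gamma * v_target(s_next).squeeze(-1)
    u = td_target - v_net(s).squeeze(-1)
    # |tau - 1(u < 0)| equals (1 - tau) for negative errors and tau otherwise.
    weight = torch.where(u < 0, torch.full_like(u, 1.0 - tau),
                         torch.full_like(u, tau))
    return (weight * u.pow(2)).mean()
```

The target network \(V_{\phi^{\prime}}\) would be updated separately, for example via the Polyak averaging mentioned above.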
Another advantage of using only a value function compared with DQN-style methods is that it does not need to allocate weights and storage to record the Q value of every action, which matters because the action space is very large in recommendation. ### Policy Extraction After learning the state value function, we turn to extracting the policy. There exist two key problems here: 1) how to solve the problem of data sparsity and the lack of negative feedback; and 2) how to combine the learned state value function to obtain a better policy. For the first problem, we design two auxiliary tasks. In particular, we learn two predictive models to predict the reward and next state using the current state-action pair. In addition, we construct positive and negative state-action pairs to learn the models with a new contrastive learning method, where the negative actions are randomly sampled from the uninteracted item space. For the second problem, we extract the policy by a value-weighted regression method like (Srivastava et al., 2015). We discuss the details in the following. #### 3.2.1. **Reward and State Transition Learning** Reward functions and state transitions are important elements of an MDP. Prior studies (Wang et al., 2016; Wang et al., 2016) on visual RL have demonstrated that the state representation can be improved by learning state transitions and reward functions as auxiliary tasks. Since both SeqRec and visual RL have high-dimensional state spaces, learning state transition and reward functions in SeqRec can also benefit the RL recommender in maintaining long-term profits. A popular method is learning a bisimulation metric representation in latent space (Wang et al., 2016), \[d\left(\mathbf{s}_{i},\mathbf{s}_{j}\right)=\max_{\mathbf{a}\in\mathcal{A}}(1-c)\left|r(\mathbf{s}_{i},\mathbf{a})-r(\mathbf{s}_{j},\mathbf{a})\right|+c\cdot W_{1}\left(p(\mathbf{s}_{i},\mathbf{a}),p(\mathbf{s}_{j},\mathbf{a});d\right), \tag{5}\] where \(W_{1}\) is the Wasserstein metric. Many studies are based on state similarity measurements like bisimulation, where similar and dissimilar state pairs are constructed for contrastive learning through data augmentation of the state. However, given the importance of negative feedback in recommendation systems, we propose a new contrastive learning style that has not yet been studied. Specifically, we can construct positive and negative state-action pairs for the reward function and state transition based on negative action samples, which have been proven to be of vital importance in recommendation systems. Here, for a state \(s\), the positive action \(a\) is just the action in the dataset, _i.e._, the item that the user interacts with. A negative action \(a^{-}\) is defined as a random sample from the items that are not interacted with by the user. Intuitively, positive state-action pairs lead to a reward signal and the next state in the dataset, whereas negative state-action pairs do not. * **Reward function learning.** Given a state and an item interacted with by the user, we can use the corresponding reward signal to learn a reward function. We can maximize the similarity of the predicted reward and the given reward for positive state-action pairs and minimize the corresponding similarity for the negative state-action pairs. For reward function learning, classical InfoNCE (Wang et al., 2017) may not be suitable because the scalar reward signal makes it difficult to determine similarity.
This process can be formulated as a classification task, where the reward label \(r\) of a negative action is set as \([1,0,0]\), and those of click and purchase are set as \([0,1,0]\) and \([0,0,1]\), respectively. We denote the contrastive loss as \(L_{r}\), \[L_{r}=\mathbb{E}_{(s,a,r)}\left[d\left(\hat{r}\left(z,a\right),r\left(z,a\right)\right)+\sum_{i}d(\hat{r}(z,a_{i}^{-}),r(z,a_{i}^{-}))\right], \tag{6}\] where \(z\) is the representation of state \(s\) given by a sequential model \(G(\cdot)\), \(\hat{r}\) is a parameterized reward function, and \(d\) is a cross-entropy loss function. In addition, the loss can be pair-wise reweighed by the reward to distinguish the importance of different state-action pairs. * **State transition learning.** For state transition learning, we want to capture the relationship that the latent vector of the current state will transfer to the latent vector of the next state when the agent takes an action. The states and actions in the dataset give a positive state-action pair for predicting the next state. For negative state-action pairs, we conduct negative sampling from the item set that is not interacted with by the user. Specifically, we use InfoNCE to define the state transition learning loss, \[L_{p}=\mathbb{E}_{(s,a,s^{\prime})}\left[-\log\frac{e^{\operatorname{sim}(\hat{p}(z,a),\,z^{\prime})/\tau}}{e^{\operatorname{sim}(\hat{p}(z,a),\,z^{\prime})/\tau}+\sum_{i}e^{\operatorname{sim}(\hat{p}(z,a_{i}^{-}),\,z^{\prime})/\tau}}\right], \tag{7}\] where \(z,z^{\prime}\) are the representations of \(s,s^{\prime}\), \(\hat{p}\) is a parameterized state transition function, \(\tau\) is a temperature factor, and \(\operatorname{sim}\) is defined as cosine similarity. Similarly to reward function learning, we pair-wise reweigh with the reward the similarity of negative state-action pairs and the exponential of the similarity of positive state-action pairs in the numerator. In this way, we need not fine-tune the temperature factor \(\tau\), which is usually sensitive in contrastive learning. #### 3.2.2. **Algorithm Summary** After learning a value function, we next extract the recommendation policy. Like (Zhu et al., 2017; Zhang et al., 2018), we use the value-weighted regression method to extract the policy, which is an effective way to implement RL in the offline setting. Considering simplicity and efficiency, the policy extraction loss \(L_{\pi}\) is defined as \[L_{\pi}=\mathbb{E}_{(s,a,r,s^{\prime})\sim\mathcal{D}}\left[-(r(s,a)+\gamma V_{\phi^{\prime}}(s^{\prime}))\log\pi(a\mid s)\right], \tag{8}\] where \(V_{\phi^{\prime}}\) is the target value function obtained by (4) and \(\pi_{\theta}(a\mid s)\) is the recommendation policy. Since we have learned the reward function and state transition, (6) and (7) can be added to (8) as auxiliary tasks to enhance the representation, and we obtain the loss of the whole policy extraction network, \[L(\theta)=\alpha L_{\pi}+L_{r}+L_{p}, \tag{9}\] where \(\alpha\) is a coefficient that balances the different losses. In general, the training process of the whole algorithm MCRL can be summarized as Algorithm 1.
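As a compact sketch of how the losses in Eqs. (6)-(9) could be assembled for one batch, consider the PyTorch code below; tensor shapes, the `reward_head`/`trans_head` modules, and all variable names are our own assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mcrl_losses(z, a_emb, neg_emb, r_label, z_next, policy_logits, a_idx,
                td_target, reward_head, trans_head, alpha=1.0, tau=1.0):
    """Sketch of the MCRL training terms for one batch.
    z: state representation G(s), shape [B, d]
    a_emb / neg_emb: embeddings of the positive action [B, d] and K negatives [B, K, d]
    r_label: 3-class reward label (0 = negative, 1 = click, 2 = purchase)
    z_next: representation of the next state, [B, d]
    policy_logits / a_idx: logits over the item catalogue and positive item ids
    td_target: r + gamma * V_target(s') computed as in Eq. (4), shape [B]
    """
    B, K, _ = neg_emb.shape
    z_rep = z.unsqueeze(1).expand(-1, K, -1)

    # Reward model (Eq. 6): classify positive pairs by their reward label,
    # negative pairs as class 0.
    pos_logits = reward_head(torch.cat([z, a_emb], dim=-1))                # [B, 3]
    neg_logits = reward_head(torch.cat([z_rep, neg_emb], dim=-1))          # [B, K, 3]
    l_r = F.cross_entropy(pos_logits, r_label) + \
          F.cross_entropy(neg_logits.reshape(B * K, -1),
                          torch.zeros(B * K, dtype=torch.long, device=z.device))

    # Transition model (Eq. 7): InfoNCE, the positive pair should predict z'.
    pos_sim = F.cosine_similarity(trans_head(torch.cat([z, a_emb], -1)),
                                  z_next, dim=-1) / tau                    # [B]
    neg_sim = F.cosine_similarity(trans_head(torch.cat([z_rep, neg_emb], -1)),
                                  z_next.unsqueeze(1).expand(-1, K, -1),
                                  dim=-1) / tau                            # [B, K]
    l_p = -torch.log(pos_sim.exp() /
                     (pos_sim.exp() + neg_sim.exp().sum(dim=1))).mean()

    # Value-weighted policy extraction (Eq. 8): -(r + gamma V(s')) log pi(a|s).
    l_pi = (td_target * F.cross_entropy(policy_logits, a_idx,
                                        reduction="none")).mean()

    return alpha * l_pi + l_r + l_p                                        # Eq. (9)
```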
## 4. Experiments In this section, we evaluate the proposed MCRL in the e-commerce scenario on two real-world datasets to demonstrate its superiority and reveal the reasons for its effectiveness by answering the following research questions: * **RQ1:** How does MCRL perform compared with state-of-the-art supervised methods, offline RL methods, and self-supervised RL methods? * **RQ2:** How do different components (_i.e._, value function, reward model, and state transition model) affect the effectiveness of the proposed MCRL? * **RQ3:** How do contrastive learning settings affect the effectiveness of the proposed MCRL? * **RQ4:** How do the number of negative actions and the discount factor influence the performance? ### Dataset Description To evaluate the effectiveness of MCRL, we conduct experiments on two public datasets: **RetailRocket1** and **Yoochoose2**. Table 1 presents these datasets' detailed statistics. Footnote 1: [https://www.kaggle.com/datasets/retailrocket/commerce-dataset](https://www.kaggle.com/datasets/retailrocket/commerce-dataset) Footnote 2: [https://recsys.acm.org/recsys15/challenge/](https://recsys.acm.org/recsys15/challenge/) We split each dataset into training, validation, and test sets with a ratio of 8:1:1 and use cross-validation to evaluate the proposed methods. For validation and test sets, the events of a sequence are provided one by one, and then we check the rank of the item of the next event. We rank the recommended item among the whole item set. We repeat each experiment 3 times to report the average performance. Two widely used metrics, hit ratio (HR) and normalized discounted cumulative gain (NDCG), are adopted to evaluate the effectiveness of MCRL. In SeqRec, HR@K is equivalent to Recall@K, measuring whether the ground-truth item is in the top-k positions of the recommendation list. In this work, HR@K for click is defined as \[\text{HR(click)}=\frac{\#\text{hits among clicks}}{\#\text{clicks in the test set}}. \tag{10}\] HR(purchase)@K is defined similarly. NDCG is sensitive to rank position and assigns higher scores to top positions in the recommendation list (Kang et al., 2018). We set K as 5, 10, and 20 in the reported results. In the e-commerce scenario, the recommendation system is more interested in increasing the number of purchases than clicks, hence we assign a higher reward to actions leading to purchases. For a recommended item that is not interacted with by the user, a zero reward is assigned and we treat it as a negative action. Therefore, the cumulative reward (_i.e._, long-term engagement) is proportional to HR and NG (Kang et al., 2018). #### 4.2.2. **Backbone Networks** We adopt three typical SeqRec methods as the backbone networks when offline RL and self-supervised RL methods are implemented, similar to (Kang et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). * **GRU** (Kang et al., 2018): This is an RNN-based sequential model that leverages GRU to encode the input user-item interaction sequence. * **Caser** (Kang et al., 2018): This is a recently proposed CNN-based sequential recommender that applies convolution operations on the embedding matrix of the input sequence. * **SASRec** (Kang et al., 2018): This is an attention-based sequential recommender that uses a Transformer encoder to encode the input sequence. #### 4.2.3. **Baselines** To demonstrate the effectiveness, we compare our proposed MCRL with different offline RL and self-supervised RL methods with different backbone networks. Due to the unavailability of open source code, some related work will be discussed further in Section 5. * **IQL** (Kang et al., 2018): This is a state-of-the-art offline RL method that takes a state-conditional upper expectile to estimate the value of the best actions in a state. * **SQN** (Li et al., 2018): This is a representative self-supervised RL method that uses an RL head to enhance the representation.
* **SAC** (Li et al., 2018): This is also a self-supervised RL method and utilizes the learned Q-values to re-weight the supervised part. * **CDARL** (Li et al., 2018): This is a self-supervised RL method considering data augmentation and intrinsic reward. #### 4.2.4. **Parameter Settings** We implement all methods with Python 3.9.13 and PyTorch 1.9.0 on an Nvidia GeForce RTX 3090. For both datasets, the input sequences are defined as the last 10 items before the target item. We pad sequences whose length is less than 10 with a padding item. To ensure a fair comparison, we fix the embedding size of all models to 64. We optimize all models with the Adam optimizer (Kingma et al., 2014) and fix the batch size at 256. The learning rate is set as 0.005 for both Yoochoose and RetailRocket. For GRU, we set the size of the hidden state as 64. For Caser, 1 vertical convolution filter and 16 horizontal filters are used. The height of the filters is searched in the range of \(\{2,3,4\}\) and the dropout ratio is set as 0.1. For SASRec, we set the number of heads in self-attention as 1. For RL-based methods, as recommended by (Li et al., 2018), we set the discount factor as 0.5 and the click reward (\(r_{c}\)) and the purchase reward (\(r_{p}\)) as 0.2 and 1, respectively. For negative sampling, a uniform distribution strategy is used. Unless otherwise mentioned, the number of negative actions is set as 30. The temperature factor is set as 1 and the coefficient \(\alpha\) is set as 1 except in the case of the GRU backbone on the Yoochoose dataset. When IQL and MCRL are integrated with a base model, the hyperparameter setting remains exactly unchanged for a fair comparison. For the implementation of the reward model and state transition model, after concatenating the embeddings of state and action from the supervised backbone network, the representation vector is fed into two fully-connected networks with ReLU as the activation function. The difference between the reward model and the state transition model is the last layer, where the output dims are 3 and 64, respectively. For the value function, its structure is similar to the reward model except that the output dim is 1. ### Performance Comparison (RQ1) Table 2 and Table 3 illustrate the performance of top-k recommendation on RetailRocket and Yoochoose, respectively. On the RetailRocket dataset, the proposed MCRL achieves the highest scores in all situations for purchase and click, which demonstrates that the items recommended by MCRL have a higher probability of being clicked or purchased by users. Notice that MCRL achieves the best performance for both HR and NDCG, indicating that MCRL tends to give the items that have a higher purchase reward a higher ranking position in the recommendation list. Compared with the supervised baselines, IQL, SQN, SAC, CDARL, and MCRL all have better performance, which shows the potential of applying RL to SeqRec. In addition, the value function estimates the long-term engagement to improve the policy learning, hence the improvement in purchase is more obvious in the e-commerce scenario. In comparison with the baseline RL-based methods, MCRL still achieves significant performance improvement for all backbone networks. This demonstrates that MCRL effectively improves the RL-based recommendation model with the help of exploiting MDP structures.
On the Yoochoose dataset, we observe that MCRL outperforms all supervised methods in terms of click and purchase except for the purchase prediction when integrating with the Caser model, where SAC has the highest scores. However, SAC and MCRL both outperform the supervised methods, IQL and SQN, and their performance gap is small. Despite the improvement of MCRL over almost all baselines, the score increase is smaller compared to that on RetailRocket. Regarding the larger improvement of MCRL on RetailRocket compared to Yoochoose, we believe that the signal sparsity problem is more severe in RetailRocket due to its larger number of items(70,852 in RetailRocket and 26,702 in Yoochoose) and hence larger state and action space. As MCRL is designed to address the signal sparsity problem, it is likely to have a larger impact on datasets with more items. ### Study of MCRL (RQ2) To get deep insights into the effectiveness of the learned value function, reward model, and state transition model, we conduct \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{8}{c}{Purchase} & \multicolumn{8}{c}{Click} \\ \cline{2-13} & HR@5 & NG@5 & HR@10 & NG@10 & HR@20 & NG@20 & HR@5 & NG@5 & HR@10 & NG@10 & HR@20 & NG@20 \\ \hline GRU & 0.4608 & 0.3834 & 0.5107 & 0.3995 & 0.5564 & 0.4111 & 0.2233 & 0.1735 & 0.2673 & 0.1878 & 0.3082 & 0.1981 \\ GRU-IQL & 0.5321 & 0.4451 & 0.5858 & 0.4624 & 0.6328 & 0.4743 & 0.2654 & 0.2056 & 0.3181 & 0.2226 & 0.3659 & 0.2348 \\ GRU-SQN & 0.5069 & 0.4130 & 0.5589 & 0.4289 & 0.5946 & 0.4392 & 0.2487 & 0.1939 & 0.2967 & 0.2094 & 0.3406 & 0.2205 \\ GRU-SAC & 0.4942 & 0.4179 & 0.5464 & 0.4341 & 0.5870 & 0.4428 & 0.2451 & 0.1924 & 0.2930 & 0.2074 & 0.3371 & 0.2186 \\ GRU-CDARL & 0.5882 & 0.5047 & 0.6371 & 0.5205 & 0.6730 & 0.5296 & 0.2921 & 0.2277 & 0.3482 & 0.2459 & 0.4004 & 0.2591 \\ GRU-MCRL & **0.5972** & **0.5071** & **0.6481** & **0.5229** & **0.6900** & **0.5339** & **0.2779** & **0.2151** & **0.3301** & **0.2326** & **0.3806** & **0.2455** \\ \hline \multirow{2}{*}{Caser} & 0.3491 & 0.2935 & 0.3857 & 0.3053 & 0.4198 & 0.3141 & 0.1966 & 0.1566 & 2.2302 & 0.1675 & 0.2628 & 0.1758 \\ & 0.3855 & 0.3288 & 0.4266 & 0.3420 & 0.4618 & 0.3509 & 0.2125 & 0.1704 & 0.2504 & 0.1827 & 0.2854 & 0.1916 \\ \multirow{2}{*}{Caser-SQN} & 0.3674 & 0.3089 & 0.4050 & 0.3210 & 0.4409 & 0.3301 & 0.2089 & 0.1661 & 0.2454 & 0.1778 & 0.2803 & 0.1867 \\ & 0.3871 & 0.3234 & 0.4436 & 0.3386 & 0.4763 & 0.3494 & 0.2206 & 0.1732 & 0.2617 & 0.1865 & 0.2999 & 0.1961 \\ \multirow{2}{*}{Caser-CDARL} & 0.4073 & 0.3438 & 0.4521 & 0.3563 & 0.4822 & 0.3655 & 0.2153 & 0.1716 & 0.2532 & 0.1838 & 0.2879 & 0.1924 \\ & **0.4877** & **0.4171** & **0.5387** & **0.4337** & **0.5851** & **0.4455** & **0.2498** & **0.1979** & **0.2941** & **0.2123** & **0.3374** & **0.2232** \\ \hline \multirow{2}{*}{SASRec} & 0.5267 & 0.4298 & 0.5916 & 0.4510 & 0.6341 & 0.4618 & 0.2541 & 0.1931 & 0.3085 & 0.2107 & 0.3570 & 0.2230 \\ & 0.5403 & 0.4442 & 0.6016 & 0.4643 & 0.6519 & 0.4643 & 0.2737 & 0.2084 & 0.3286 & 0.2263 & 0.3813 & 0.2396 \\ \multirow{2}{*}{SASRec-SQN} & 0.5681 & 0.4617 & 0.6203 & 0.4806 & 0.6619 & 0.4914 & 0.2761 & 0.2104 & 0.3302 & 0.2279 & 0.3803 & 0.2406 \\ & 0.5623 & 0.4679 & 0.6127 & 0.4844 & 0.6505 & 0.4940 & 0.2670 & 0.2056 & 0.3208 & 0.2230 & 0.3701 & 0.2355 \\ \multirow{2}{*}{SASRec-CDARL} & 0.5780 & 0.4799 & 0.6315 & 0.4981 & 0.6785 & 0.5088 & 0.2806 & 0.2145 & 0.3386 & 0.2327 & 0.3913 & 0.2458 \\ \multirow{2}{*}{SASRec-MCRL} & **0.5457** & **0.5755** & **0.6967** 
& **0.5891** & **0.7296** & **0.5974** & **0.2815** & **0.2205** & **0.3342** & **0.2375** & **0.3827** & **0.2498** \\ \hline \hline \end{tabular} \end{table} Table 1. Top-k performance comparison of supervised, offline RL, and self-supervised RL methods and MCRL on RetailRocket dataset(\(k=5,10,20\)). ‘HR’ and ‘NG’ denotes hit ratio and NDCG respectively. Recommendations are generated from the policy \(\pi\). Boldface denotes the highest score. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{8}{c}{Purchase} & \multicolumn{8}{c}{Click} \\ \cline{2-13} & HR@5 & NG@5 & HR@10 & NG@10 & HR@20 & NG@20 & HR@5 & NG@5 & HR@10 & NG@10 & HR@20 & NG@20 \\ \hline GRU & 0.3994 & 0.2824 & 0.5183 & 0.3204 & 0.6067 & 0.3429 & 0.2876 & 0.1982 & 0.3793 & 0.2279 & 0.4581 & 0.2478 \\ GRU-IQL & 0.4220 & 0.3010 & 0.5350 & 0.3378 & 0.6243 & 0.3606 & 0.3213 & 0.2225 & 0.4186 & 0.2541 & 0.4987 & 0.2744 \\ GRU-SQN & 0.4228 & 0.3016 & 0.5333 & 0.3376 & 0.6233 & 0.3605 & 0.3020 & 0.2093 & 0.3946 & 0.2394 & 0.4741 & 0.2587 \\ GRU-SAC & 0.4394 & 0.3154 & 0.5525 & 0.3521 & 0.6378 & 0.3739 & 0.2863 & 0.1985 & 0.3764 & 0.2277 & 0.4541 & 0.2474 \\ GRU-CDARL & **0.4726** & **0.3395** & **0.5906** & **0.3779** & **0.6853** & **0.4018** & 0.3228 & 0.2244 & 0.4198 & 0.2552 & 0.5022 & 0.2752 \\ GRU-MCRL & 0.4588 & 0.3296 & 0.5703 & 0.3660 & 0.6492 & 0.3860 & **0.3263** & **0.2262** & **0.4220** & **0.2572** & **0.5043** & **0.2780** \\ \hline \multirow{2}{*}{Caser} & 0.4475 & 0.3211 & 0.5559 & 0.3565 & 0.6393 & 0.3775 & 0.2728 & 0.1896 & 0.3593 & 0.2177 & 0.4371 & 0.2372 \\ & 0.4690 & 0.3402 & 0.5756 & 0.3750 & 0.6601 & 0.3964 & 0.3017 & 0.210 ablation experiments. For space saving, we only report the results on the RetailRocket and the trend for Yoochoose is similar. Besides, we enhance IQL with the learned model to further see the effect of the learned model, since they can also be regarded as a relatively independent part. #### 4.4.1. Effect of the value function In this part, we use different RL methods to estimate long-term engagement to verify the rationality of the proposed value function. Besides HR, we also report the cumulative rewards@k, which means the sum of rewards for top-k recommendation in the test set. For space-saving, we do not report results about NDCG, which shows a similar trend. Figure 2 illustrates the comparison of supervised model Gru, MCRL-DQN, MCRL-IQL, and MCRL, where MCRL-DQN are MCRL-IQL are different from MCRL about the Q function learning. For MCRL-DQN, we implement a double Q learning (Kaswani et al., 2017) which learns two Q functions and use the conservative one, to estimate the expected cumulative rewards. For MCRL-IQL, we use the Q function learning methods proposed in (Kaswani et al., 2017), and use an advantage function to learn the policy. MCRL-DDQN performs better than MCRL-IQL for GRU and Caser, but gets a lower click HR, purchase HR and cumulative rewards. However, the performance of MCRL is better than compared methods for click prediction, purchase prediction, and the cumulative rewards on the test set when instantiated with GRU, Caser, and SASRec backbones, which proves that the proposed value function is more accurate, that is, it can better ease the overestimation problem than baseline offline RL component. 
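The MCRL-DQN variant above is described as learning two Q functions and using the more conservative one. A generic sketch of such a clipped double-Q bootstrap target is given below; the tensor and network names are hypothetical and the snippet is not taken from the paper's code.

```python
import torch

def clipped_double_q_target(reward, next_state, q1_target, q2_target, gamma=0.5, done=None):
    """Bootstrap with the smaller of two target Q-estimates to curb overestimation.

    reward: (batch,) immediate rewards, e.g. 1.0 for purchase, 0.2 for click, 0 otherwise
    next_state: (batch, dim) next-state embeddings from the backbone
    q1_target, q2_target: target networks mapping states to per-item Q-values (batch, n_items)
    """
    with torch.no_grad():
        next_q = torch.min(q1_target(next_state), q2_target(next_state)).max(dim=1).values
        if done is not None:  # zero out the bootstrap term at the end of a sequence
            next_q = next_q * (1.0 - done)
    return reward + gamma * next_q
```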
Besides, since MCRL does not directly learn a Q function, it is also superior to them with respect to the number of parameters: the output dimension of MCRL's value function is 1, while that of the other RL-based methods is the item size.

#### 4.4.2. Effect of the reward and state transition model for MCRL and IQL

We vary the setting of MCRL to investigate the impact of the learned reward and state transition models. For simplicity's sake, we only report HR metrics since NDCG shows a similar trend. Figure 3 gives the corresponding results. MCRL-none is the variant of MCRL without learning the reward model and state transition model. MCRL-reward and MCRL-state are MCRL variants with only the reward model and only the state transition model, respectively. For GRU, Caser, and SASRec, all variants with a learned model gain a performance improvement, which shows the effectiveness of the proposed model learning. However, the combination of the two models outperforms any single model in all cases, indicating that it is necessary to learn both models. Moreover, for GRU, the state transition model is more effective than the reward model. For Caser and SASRec, the conclusion is the opposite, which indicates the different capacities of the two proposed models for different backbone networks. To further understand the role of model learning, we also implement bisimulation (see Section 3.2.1) within the MCRL framework, i.e., MCRL-BI, which learns the reward and transition simultaneously. The result shows that it can also boost the performance, since it outperforms MCRL-none for the different backbone networks. We can see that the learned reward and state transition models play an important role in the performance improvement of MCRL, and we wonder whether the learned models can also boost other offline RL methods. Figure 4 shows that learning a reward model and state transition model (MCRL-IQL) can improve the performance of the base offline RL method (IQL) for the different backbones on RetailRocket with respect to click HR, purchase HR, and cumulative rewards.

Figure 2. The study of the value function module on RetailRocket.

Figure 3. The study of the reward and state transition model for MCRL on RetailRocket.

### Contrastive Learning Study (RQ3)

In this section, we study the effect of contrastive learning. Figure 6 shows the comparison of MCRL and MCRL w/o CL, which learns a reward and state transition model only with positive state-action pairs. On the RetailRocket dataset, MCRL outperforms MCRL w/o CL with respect to HR and NDCG in the prediction of both click and purchase, showing the effectiveness of contrastive learning. It gives us the insight that utilizing negative rewards can ease the data sparsity problem for offline RL learning in recommendation.

### Hyperparameter Study (RQ4)

In this part, we investigate how the number of negative actions and the discount factor influence the performance of MCRL. We only report HR@5 results for the GRU backbone on RetailRocket, since the patterns in other cases are similar.

#### 4.6.1. Effect of Discount Factor

Figure 5 illustrates the influence of the discount factor. From the results on RetailRocket, we see that the performance fluctuates as the discount factor varies, and the tendency for click prediction and purchase prediction is similar. However, the fluctuation ranges are small (less than 3%) for both click HR and purchase HR. Overall, the proposed method is not sensitive to the discount factor.
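To make the role of the discount factor concrete, the long-term engagement signal is a discounted sum of the per-step rewards (0.2 for a click, 1 for a purchase, 0 for an ignored recommendation, as set in the parameter settings). The following is a small illustrative computation on a made-up interaction sequence, not data from the experiments.

```python
def discounted_return(rewards, gamma):
    """Discounted cumulative reward: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# hypothetical session: click, ignored item, click, purchase
rewards = [0.2, 0.0, 0.2, 1.0]
print(discounted_return(rewards, gamma=0.5))  # 0.2 + 0.05 + 0.125 = 0.375
print(discounted_return(rewards, gamma=0.9))  # 0.2 + 0.162 + 0.729 = 1.091
```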
#### 4.6.2. Effect of the Number of Negative Actions

Figure 5 shows the influence of the number of negative actions. The performance remains nearly unchanged when there are more than 5 negative actions. In conclusion, our contrastive learning module shows robustness with respect to the number of negative actions.

#### 4.6.3. Effect of the Training Steps

From Figure 7, we can see that MCRL is not as effective as the supervised methods when the number of training steps is small, possibly due to the higher training loss and slower convergence. As the number of training steps increases, MCRL continuously surpasses the supervised methods, demonstrating the effectiveness and stability of MCRL.

## 5. Related Work

In this section, we introduce some related tasks: sequential recommendation and RL for recommendation.

### Sequential Recommendation

Sequential recommendation utilizes self-supervised learning to predict the next item based on the historical sequence of user interactions. In the early stage, Markov Chains (Kingma and Ba, 2014; Krizhevsky et al., 2014) and factorization methods (Krizhevsky et al., 2014) were widely used in sequential recommendation, but they are limited in their ability to express complex sequential signals. Recently, deep learning has played an increasingly important role in sequential recommendation. For instance, (Krizhevsky et al., 2014) uses gated recurrent units (GRU) as the backbone to extract the sequential representation. (Wang et al., 2015; Wang et al., 2016) model the user-item sequence with a convolutional neural network (CNN). With transformers (Dev and Glorot, 2015; Glorot et al., 2016; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) gaining popularity in various areas, they are also used in sequential recommendation by (Krizhevsky et al., 2014) in the sequence embedding space. Recently, (Wang et al., 2015; Wang et al., 2016) focus on using data augmentation in sequential recommendation with contrastive learning. (Krizhevsky et al., 2014) proposes an architecture that relies on common patterns as well as individual behaviors to tailor its recommendations for each person. (Chen et al., 2016) proposes a new training and inference paradigm, termed Ada-Ranker, to address the challenges of dynamic online serving. In this article, we have selected the three most typical neural network structures as backbones, similar to (Wang et al., 2015; Wang et al., 2016; Wang et al., 2017; Wang et al., 2018).

### RL for Recommendation

Reinforcement learning has been widely used for recommender systems in recent years (Wang et al., 2015; Wang et al., 2016; Wang et al., 2017). (Wang et al., 2018) first proposes to model recommendation as a Markov Decision Process, and a model-based RL method is designed for book recommendation. (Wang et al., 2015; Wang et al., 2016; Wang et al., 2017) study how to use collected recommendation data to construct a recommender simulator. (Chen et al., 2016) applies RL to recommendation with large discrete action spaces and demonstrates its effectiveness on various recommendation tasks with more than one million actions. (Wang et al., 2017) proves that the application of offline RL techniques to sequential recommendation is promising by learning a policy near the behaviour policy. (Wang et al., 2017) introduces the impact of negative feedback through learning the Q function. (Wang et al., 2017) uses reinforcement learning from human feedback to capture the interest of users.
(Wang et al., 2018; Wang et al., 2018) adopt RL techniques for multi-task recommendation. Self-supervised reinforcement learning (SSR), a technique in which the model is trained by itself without external label information, is receiving growing interest in robotics (Krizhevsky et al., 2014). Moreover, (Wang et al., 2017) proposes two SSR learning frameworks named Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC); by utilizing the RL component as a regularizer for the supervised learning component, SQN and SAC can effectively lead to more purchases than clicks. (Wang et al., 2017; Wang et al., 2017) consider combining data augmentation with SSR to boost the state representation. (Wang et al., 2018) considers taking the cumulative reward signals of the sequence trajectory as the input and reformulates SSR as the task of action prediction. However, all these methods ignore mining the structural information of the MDP.

## 6. Conclusion and Future Work

In this work, we explore the application of RL to sequential recommendation for optimizing long-term engagement. We devise a new framework, MCRL, which implicitly learns a model to enhance policy learning. The core insight of this work is to mine the structural information of the MDP and consider the negative feedback. Extensive experiments on two real-world datasets show the rationality and effectiveness of MCRL. This work shows the potential of enhancing RL-based sequential recommendation methods with MDP structure information and the importance of utilizing negative feedback, which is combined via contrastive learning. Besides the reward function and the environment dynamics, other structural information of the MDP exists in real-world scenarios, such as the reward sequence. For instance, we can learn a model to predict the future reward or state sequence. (Chen et al., 2016) introduced an off-policy correction term (propensity score) for the policy-gradient method to evaluate the off-policy effect; this technique can be considered in MCRL. A simple implementation is to reweight Eqn. (8) with the propensity score and evaluate it with the metric in (Kang et al., 2018).

## 7. Appendix

### The Algorithm of MCRL

Here is the pseudocode for the overall MCRL algorithm.

### Dataset Description

Here are the statistical features of the RetailRocket and Yoochoose datasets.

### Figures of Ablation Study of Contrastive Learning and Hyperparameter

Here are the results of the ablation study of contrastive learning and the hyperparameters.
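For reference, the HR@K and NDCG@K values reported throughout Section 4 and in the appendix figures can be computed from the rank of the ground-truth item among the whole item set, cf. Eqn. (10). The following is a minimal illustrative sketch; the function and variable names are ours, not the authors', and the toy ranks are invented.

```python
import math

def hr_and_ndcg_at_k(ranks, k):
    """ranks: 1-based rank of the ground-truth item for each test event of one type
    (click or purchase); returns (HR@k, NDCG@k) averaged over those events."""
    hr = sum(r <= k for r in ranks) / len(ranks)
    ndcg = sum(1.0 / math.log2(r + 1) if r <= k else 0.0 for r in ranks) / len(ranks)
    return hr, ndcg

# toy example: ranks of the true item for five purchase events
print(hr_and_ndcg_at_k([1, 3, 12, 7, 25], k=10))  # HR@10 = 3/5
```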
2310.19655
Universality of the quantum energy flux at the inner horizon of asymptotically de Sitter black holes
Recently, it was found that the energy flux of a free scalar quantum field on a Reissner-Nordstr\"om-de Sitter spacetime has a quadratic divergence towards the inner horizon of the black hole. Moreover, the leading divergence was found to be state independent as long as the spectral gap of the wave equation on the spacetime is sufficiently large. In this work, we show that the latter result can be extended to all subextremal Reissner-Nordstr\"om-de Sitter and subextremal Kerr-de Sitter spacetimes with a positive spectral gap.
Peter Hintz, Christiane Klein
2023-10-30T15:39:14Z
http://arxiv.org/abs/2310.19655v2
Universality of the quantum energy flux at the inner horizon of asymptotically de Sitter black holes ###### Abstract Recently, it was found that the energy flux of a free scalar quantum field on a Reissner-Nordstrom-de Sitter spacetime has a quadratic divergence towards the inner horizon of the black hole. Moreover, the leading divergence was found to be state independent as long as the spectral gap of the wave equation on the spacetime is sufficiently large. In this work, we show that the latter result can be extended to all subextremal Reissner-Nordstrom-de Sitter and subextremal Kerr-de Sitter spacetimes with a positive spectral gap. ## I Introduction The inner horizons of charged or rotating black holes pose an interesting problem with regard to determinism in general relativity. They are examples for the appearance of Cauchy horizons, or in other words smooth boundaries of the maximal Cauchy development of complete initial data. Since the boundary is smooth, one can in principle extend the spacetime beyond the boundary. However, such an extension is not unique, since it is not fixed by the initial data sufficient for the unique characterization of the spacetime up to that horizon. In this sense, determinism in general relativity is lost beyond the Cauchy horizon. It has been argued by Penrose [1] that this issue can be settled by noting that the Cauchy horizons of charged and rotating black holes are unstable. In other words, a generic perturbation of the black hole's initial data will make the spacetime inextendible across the Cauchy horizon. This has become known as the strong cosmic censorship conjecture (sCC). To make the conjecture more precise, one has to define the notions of inextendibility and of generic perturbations. To guarantee that any observer attempting to cross the horizon is inevitably destroyed by tidal deformations, the metric should be inextendible as a continuous function, as happens for example at the central singularity of a Schwarzschild black hole [2]. However, it has been shown in [3] that this version of sCC fails for Kerr black holes with non-zero angular momentum; this assumes the non-linear stability of the black hole exterior under small asymptotically flat perturbations, which is known for small angular momenta [4]. Thus, one must content oneself with the slightly weaker version of the conjecture introduced by Christodoulou [5]. In this version, inextendibility is understood in the sense of a weak solution to the Einstein equations. In other words, this version demands the inextendibility of the metric as a function in the Sobolev space \(H^{1}_{\rm loc}\). While this will not cause the inevitable destruction of the careless observer, it still indicates a fundamental breakdown of the classical theory of general relativity. The (in-)validity of this version of the conjecture is still an open question, and a subject of active research. A first step towards understanding the conjecture can be made in two ways. The first one is to study it in a symmetric setting, e.g. by restricting to perturbations that maintain spherical symmetry; by Birkhoff's theorem, this necessitates coupling gravity to additional fields. This approach has been used for example in [6; 7; 8; 9; 10] on Reissner-Nordstrom spacetimes coupled to a scalar field. 
Alternatively, one can consider solutions to the (massless) scalar wave equation on a fixed background containing a Cauchy horizon as a first approximation of either linearized gravity or a simplified matter model coupled to gravity. It has been demonstrated that the linear perturbations indeed cease to be in \(H^{1}_{\rm loc}\) on (parts of) the Cauchy horizon of Reissner-Nordstrom [11] and Kerr [12]. Moreover, it has been shown [13] that on Reissner-Nordstrom-de Sitter (RNdS) and slowly rotating Kerr-de Sitter (KdS) spacetimes solutions to the scalar wave equation are in \(H^{1/2+\beta-0}\) near the Cauchy horizon. Here, \(\beta=\alpha/\kappa_{1}\), where \(\alpha\) is the spectral gap of quasi-normal modes, and \(\kappa_{1}\) is the surface gravity of the Cauchy horizon. Hence, in this linear approximation, strong cosmic censorship is directly related to the spectral gap of the black hole's quasi-normal modes. Subsequent numerical studies of the quasi-normal modes [14] revealed that one can find \(\beta>1/2\) for RNdS black holes of sufficiently large charge, and non-linear studies [15; 16; 17; 18] (albeit the last two restrict to spherical symmetry) indicate that this remains valid, in other words \(\beta\) remains the decisive quantity for determining the (in)stability of the Cauchy horizon when the non-linearities of the Einstein equations are taken into account. This raised the following question: since the scalar perturbations can be considered as part of a simple matter model, and since matter is, to our knowledge, most accurately described by quantum theory, can the inclusion of quantum effects potentially remedy the situation? To answer this question, the authors of [19] study a free scalar quantum field on a fixed RNdS background spacetime, focusing on cases in which [14] had found \(\beta\) to be larger than \(1/2\). They find that the stress-energy tensor of the quantum scalar field in a generic Hadamard state can be split into a state-independent term and a state-dependent term. The state-dependent term is subsequently found to be in \(L^{p}\), \(p=(2-2\beta)^{-1}\), near the Cauchy horizon and therefore it does not diverge faster than the classical stress-energy tensor. The state-independent term is computed numerically in [20] for a range of spacetime parameters. It has a generically non-vanishing quadratic divergence at the Cauchy horizon, in agreement with numerical results on Reissner-Nordstrom and Kerr spacetimes [21; 22]. This quadratic divergence is stronger than the divergence of the classical stress-energy tensor, and therefore stronger than the state-dependent contribution, as long as the state is Hadamard and \(\beta>1/2\). Consequently, the divergence of the quantum stress-energy tensor of the free scalar field at the Cauchy horizon of a RNdS spacetime satisfying \(\beta>1/2\) is not only sufficiently strong to potentially remedy sCC, but also universal in the sense that it does not depend on the choice of state as long as the state is physically reasonable. In this work we want to show that this universality holds more generally. We consider the stress-energy tensor of a free, real scalar quantum field near the Cauchy horizon of a RNdS or KdS spacetime merely satisfying \(\beta>0\), i.e. having a positive spectral gap. We demonstrate that for a generic state \(\omega\) which is Hadamard up to the Cauchy horizon, the expectation value of the stress-energy tensor can be split into a state-independent term and a state-dependent term. 
The state-independent term can be computed numerically, and is expected to diverge quadratically, i.e. like \((r-r_{1})^{-2}\), at the Cauchy horizon, similar to the results obtained in [20; 21; 22]. The state-dependent term is shown to diverge at most like \((r-r_{1})^{-2+\beta}\) at the Cauchy horizon, where \(r\) is the usual radial coordinate of these spacetimes and \(r_{1}\) the radius of the Cauchy horizon. (Strictly speaking, we show an upper bound of \((r-r_{1})^{-2+\beta^{\prime}}\) for some \(0<\beta^{\prime}<\beta\), though conjecturally one can take \(\beta^{\prime}\) arbitrarily close to \(\beta\).) In other words, we show that under two conditions--first, \(\beta>0\), i.e. there is a positive spectral gap and mode stability holds, and second, the state-independent term is generically non-vanishing--the state-independent term constitutes the leading divergence of the quantum stress-energy tensor at the Cauchy horizon, extending the universality result to all \(\beta>0\). Moreover, under the same conditions, the universal leading divergence of the quantum stress-energy tensor at the Cauchy horizon is stronger than the divergence of the classical stress-energy tensor. The condition \(\beta>0\) is known to hold for scalar fields on all subextremal RNdS spacetimes (for zero and non-zero scalar field masses) as well as on KdS spacetimes which are either slowly rotating [23] or have small mass [24] (for zero and also for small non-zero scalar field masses by [25, Lemma 3.5]), and it is conjectured to hold in the full subextremal KdS range [26, 27]. This leads to the conclusion that perturbations caused by quantum effects should not be neglected in considerations of Cauchy horizon stability, since they will eventually become comparable in size to, and ultimately dominate classical perturbations as the Cauchy horizon is approached. To estimate how close to the Cauchy horizon this happens, one can, for example, compare the \(rr\)-components of the stress-energy tensor of the classical and quantum scalar field near the Cauchy horizon under the assumption that the leading contribution to the quantum result is non-vanishing. Taking into account the pointwise bounds for the classical stress-energy tensor, and the relative size of quantum and classical results away from the horizon, which should typically be of order \(r_{P}/L\), one finds that the \(rr\)-component of the quantum stress-energy tensor will become dominant when \(r-r_{1}\ll r_{P}(r_{P}/L)^{1/\beta-1}\), where \(L\) is a typical length scale of the spacetime, for example \(M\). We will begin our discussion with an introduction of the geometric and field-theoretic setup in section II. We will also use this section to recall some results on Hadamard states. Section III will demonstrate how the decay of the solutions of the wave equations towards \(i^{+}\) can be translated into an estimate on the behavior at the Cauchy horizon. This estimate will be employed in section IV to bound the divergence of the state-dependent part of the expectation value of the stress-energy tensor in a generic Hadamard state \(\omega\). ## II Setup ### The RNdS and KdS spacetimes In this work, we will consider subextremal RNdS and KdS spacetimes. 
They can be described by a metric of the form \[g^{\rm RNdS}_{\lambda,Q}=-\frac{\Delta^{\rm RNdS}_{r}}{r^{2}}\,{\rm d}t^{2}+ \frac{r^{2}}{\Delta^{\rm RNdS}_{r}}\,{\rm d}r^{2}+r^{2}\,{\rm d}\Omega^{2}\,, \tag{1}\] with \[\Delta^{\rm RNdS}_{r}=-\lambda r^{4}+r^{2}-2Mr+Q^{2} \tag{2}\] for RNdS and \[g^{\rm KdS}_{\lambda,a} =\frac{\Delta_{\theta}a^{2}\sin^{2}\theta-\Delta_{r}^{\rm KdS}}{ \rho^{2}\chi^{2}}\,{\rm d}t^{2}+\left[\Delta_{\theta}(r^{2}+a^{2})^{2}-\Delta_{ r}^{\rm KdS}a^{2}\sin^{2}\theta\right]\frac{\sin^{2}\theta}{\rho^{2}\chi^{2}}\,{ \rm d}\varphi^{2} \tag{3}\] \[+\frac{\rho^{2}}{\Delta_{r}^{\rm KdS}}\,{\rm d}r^{2}+\frac{\rho^{ 2}}{\Delta_{\theta}}\,{\rm d}\theta^{2}+2\frac{a\sin^{2}\theta}{\rho^{2}\chi^{ 2}}[\Delta_{r}^{\rm KdS}-\Delta_{\theta}(r^{2}+a^{2})]\,{\rm d}t\,{\rm d} \varphi\,,\] with \[\Delta_{r}^{\rm KdS} =(1-\lambda r^{2})(r^{2}+a^{2})-2Mr\,, \Delta_{\theta} =1+a^{2}\lambda\cos^{2}\theta\,, \tag{4}\] \[\rho^{2} =r^{2}+a^{2}\cos^{2}\theta\,, \chi =1+a^{2}\lambda\,, \tag{5}\] for KdS. Throughout this discussion, we will choose the scale for the coordinates \(r\) and \(t\) such that the black hole mass \(M\) is set to one. The black hole's charge \(Q\) or angular momentum parameter \(a\) as well as the cosmological constant \(\Lambda=3\lambda\) are chosen such that the functions \(\Delta_{r}^{\#}\), with \(\#\) replacing either "RNdS" or "KdS", have three real distinct positive roots \(r_{1}<r_{2}<r_{3}\) indicating the locations of the cosmological horizon (\(r_{3}\)), the outer (\(r_{2}\)) and the inner horizon (\(r_{1}\)) of the black hole. The coordinate singularities at the horizons can be eliminated by introducing advanced or retarded time coordinates defined by \[{\rm d}t_{\pm} =\,{\rm d}t\pm\frac{\chi(r^{2}+a^{2})}{\Delta_{r}^{\#}}\,{\rm d}r\,, \tag{6}\] where we set \(\chi=1\) and \(a=0\) for RNdS. For Kerr-de Sitter, one has to introduce in addition the azimuthal coordinates \[{\rm d}\varphi_{\pm} =\,{\rm d}\varphi\pm\frac{\chi a}{\Delta_{r}^{\#}}\,{\rm d}r\,. \tag{7}\] The coordinates \((t_{\pm},r,\theta,\varphi_{\pm})\) then allow an extension of the metric through the outgoing/ingoing pieces of the horizons. It takes the form \[g^{\#} =g^{\#}_{tt}\,{\rm d}t_{\pm}^{2}\pm 2\frac{1}{\chi}\,{\rm d}t_{\pm}\,{ \rm d}r+g^{\#}_{\theta\theta}\,{\rm d}\theta^{2}+g^{\#}_{\varphi\varphi}\,{ \rm d}\varphi_{\pm}^{2}\mp 2\frac{a\sin^{2}\theta}{\chi}\,{\rm d}\varphi_{\pm} \,{\rm d}r\,, \tag{8}\] where again, we set \(\chi=1\) and \(a=0\) for the RNdS case, and \(g^{\#}_{\mu\nu}\) are the components of the corresponding metric in the Boyer-Lindquist coordinates as given in (1) and (3). It should be mentioned that none of these coordinate systems cover the axis where \(\sin\theta=0\). However, it has been shown that the metric can be analytically extended to the axis as well using a suitable coordinate transformation [13; 28]. The physical RNdS and KdS spacetimes respectively will be the manifolds \(\mathbb{R}_{t_{+}}\times(r_{1},r_{3})_{r}\times\mathbb{S}^{2}_{(\theta,\varphi _{+})}\) glued to \(\mathbb{R}_{t_{-}}\times(r_{2},\infty)_{r}\times\mathbb{S}^{2}_{(\theta, \varphi_{-})}\) on \(\{r_{2}<r<r_{3}\}\), equipped with the metric \(g^{\rm RNdS}_{\lambda,Q}\) or \(g^{\rm KdS}_{\lambda,a}\). More details on the advanced/retarded time coordinates, the extended spacetime and the gluing can be found in [28; 29]. We will refer to the physical spacetime as \(\mathcal{M}\); see also Figure 1. The physical RNdS and KdS spacetimes are globally hyperbolic. 
For RNdS, this follows from the analysis in [29], combined with the fact that the physical spacetime \(\mathcal{M}\) is a causally convex subset of the union of Kruskal domains around \(r=r_{2}\) and \(r=r_{3}\) discussed in [29]. For KdS, this was shown by an explicit construction in [30]. In this work, we focus mostly on the Cauchy horizon, which is the future boundary of \(\mathcal{M}\) considered as a submanifold of its analytic extension. More specifically, we focus on the ingoing part of the Cauchy horizon, cf. Figure 1. The metric can be analytically extended through the ingoing piece of the Cauchy horizon in the retarded coordinates \((t_{-},r,\theta,\varphi_{-})\). In Section III we will analyse the classical wave equation on a domain \(\Omega\subset\mathcal{M}\) which encompasses the relevant part of the Cauchy horizon. More specifically, the domain \(\Omega\) is bounded in the past by a hypersurface of constant \(r\), and in the future by a piece of the Cauchy horizon as well as a spacelike hypersurface transversal to the Cauchy horizon, as indicated in Figure 2.

Figure 1: The Penrose diagram of the subextremal RNdS spacetime, or the Carter–Penrose diagram for the subextremal KdS spacetime. The gray region indicates our physical spacetime \(\mathcal{M}\), while the diagram shows also the analytic extension across \(\mathcal{CH}\).

Figure 2: Illustration of the domain \(\Omega\), in which the results of decay towards \(i^{+}\) are propagated and converted to regularity results at \(\mathcal{CH}\).

### The free scalar field theory

We consider a free scalar field on the physical RNdS or KdS spacetime \(\mathcal{M}\) satisfying the Klein-Gordon equation \[\mathcal{P}\phi=0\,,\quad\mathcal{P}=\Box_{g}+m^{2}+\xi R\,, \tag{9}\] where \(\Box_{g}\) is the d'Alembert operator, \(R=4\Lambda>0\) is the Ricci scalar of RNdS or KdS respectively, and \(m\geq 0\), \(\xi\geq 0\) are constants. The quantum theory for this field can be described by the CCR-algebra \(\mathcal{A}(\mathcal{M})\) of quasi-local observables, which can be defined as the free *-algebra generated by the identity \(\mathbf{1}\) and the smeared field operators \(\Phi(f)\), \(f\in\mathcal{C}_{0}^{\infty}\), subject to the relations
* \(\Phi(\alpha f+g)=\alpha\Phi(f)+\Phi(g)\quad\forall f,g\in\mathcal{C}_{0}^{\infty}(\mathcal{M})\), \(\alpha\in\mathbb{C}\)
* \(\Phi(\mathcal{P}f)=0\quad\forall f\in\mathcal{C}_{0}^{\infty}(\mathcal{M})\)
* \((\Phi(f))^{*}=\Phi(\bar{f})\quad\forall f\in\mathcal{C}_{0}^{\infty}(\mathcal{M})\)
* \([\Phi(f),\Phi(g)]=iE(f,g)\mathbf{1}\quad\forall f,g\in\mathcal{C}_{0}^{\infty}(\mathcal{M})\).

Here, \(E\) is the commutator function, or Pauli-Jordan propagator of \(\mathcal{P}\). It is constructed as the difference between the unique retarded and advanced Green's operators \(E^{\pm}\) for the Klein-Gordon operator \(\mathcal{P}\). A state in this framework is a linear map \(\omega:\mathcal{A}(\mathcal{M})\rightarrow\mathbb{C}\) satisfying \(\omega(\mathbf{1})=1\) and \(\omega(A^{*}A)\geq 0\) for all \(A\in\mathcal{A}(\mathcal{M})\). It is called a quasi-free Hadamard state if it is entirely determined by its two-point function \[w(f,h)=\omega(\Phi(f)\Phi(h))\,,\] and if the wavefront set of the two-point function, considered as a distribution on \(\mathcal{M}\times\mathcal{M}\), satisfies the microlocal spectrum condition [31] \[\mathrm{WF}^{\prime}(w) =\mathcal{C}^{+}\,, \tag{10}\] \[\mathcal{C}^{\pm} =\{(x,k;y,l)\in T^{*}(\mathcal{M}\times\mathcal{M}):(x,k)\sim(y,l)\text{ and }\pm k\rhd 0\}\,.
\tag{11}\] Here, \((x,k;y,l)\in\mathrm{WF}^{\prime}(w)\) if and only if \((x,k;y,-l)\in\mathrm{WF}(w)\). A point \((x,k)\in T^{*}\mathcal{M}\) is related to \((y,l)\in T^{*}\mathcal{M}\) by \(\sim\), \((x,k)\sim(y,l)\), if \(x\) and \(y\) are connected by a null geodesic to which \(k\) is the cotangent vector at \(x\), and \(l\) agrees with \(k\) coparallel transported along the geodesic to \(y\). In other words, \((x,k)\sim(y,l)\) if the two points lie in the same bicharacteristic strip of \(\mathcal{P}\). The notation \(k\rhd 0\) means that \(k(v)>0\) for all time-like future-pointing vectors in \(T_{x}\mathcal{M}\), i.e. \(k\) is a future-pointing covector. Physically reasonable states are usually required to satisfy the microlocal spectrum condition, since it allows one to extend the algebra to include important observables such as the (smeared) stress-energy tensor [32; 33], and it results in finite expectation values with finite variance for these observables [34; 35]. The condition that the state is quasi-free is made here for simplification of notation in the microlocal spectrum condition, and should not influence the following arguments, since we will be mainly interested in the two-point function. The most relevant property of Hadamard states for our purpose is the following: consider any two Hadamard states \(\omega\), \(\omega^{\prime}\) on the CCR-algebra \(\mathcal{A}(M)\) of any globally hyperbolic spacetime \((M,g)\). Then the difference of two-point functions considered as a distribution on \(M\times M\), \[W[\omega,\omega^{\prime}]\in\mathcal{D}^{\prime}(M\times M)\,,\quad W[\omega, \omega^{\prime}](f,h)=\omega(\Phi(f)\Phi(h))-\omega^{\prime}(\Phi(f)\Phi(h)), \quad f,h\in\mathcal{C}_{0}^{\infty}(M), \tag{12}\] is of the form \[W[\omega,\omega^{\prime}](f,h)=\int\limits_{M\times M}W[\omega,\omega^{\prime }](x,y)f(x)h(y)\,\mathrm{d}\mu(x)\,\mathrm{d}\mu(y)\,, \tag{13}\] with \(\,\mathrm{d}\mu\) the volume form induced on \(M\) by the metric \(g\) and \(W[\omega,\omega^{\prime}](x,y)\in\mathcal{C}^{\infty}(M\times M)\). Moreover, this function is a real, symmetric bi-solution to the Klein-Gordon equation, \[\mathcal{P}(x)W[\omega,\omega^{\prime}](x,x^{\prime})=\mathcal{P}(x^{\prime} )W[\omega,\omega^{\prime}](x,x^{\prime})=0\,.\] An important consequence of the smoothness becomes apparent when one considers the (smeared) local, non-linear observables of the theory. These observables, which lie in the extension of the CCR-algebra \(\mathcal{A}(M)\), can be written as linear combinations of locally and covariantly renormalized Wick powers of differentiated fields \(:\!(\prod_{i=1}^{n}\mathcal{D}_{i}\Phi)\,(f)\): [32]. Here, \(\mathcal{D}_{i}\) are (not necessarily scalar) differential operators, \(f\in\mathcal{C}_{0}^{\infty}(M)\) is a smearing function, and the double dots indicate that this quantity has been renormalized, with all renormalization ambiguities fixed in some way. 
Focusing on the case \(n=2\), which encompasses relevant observables such as the stress-energy tensor, the smoothness of \(W[\omega,\omega^{\prime}](x,x^{\prime})\) allows us to write \[\omega(:\left(\mathcal{D}_{1}\Phi\mathcal{D}_{2}\Phi\right)(x):)-\omega^{\prime}(:\left(\mathcal{D}_{1}\Phi\mathcal{D}_{2}\Phi\right)(x):)=\lim_{x^{\prime}\to x}\left(g(x,x^{\prime})\mathcal{D}_{1}(x)\mathcal{D}_{2}(x^{\prime})W[\omega,\omega^{\prime}](x,x^{\prime})\right)\,, \tag{14}\] where \(:\left(\mathcal{D}_{1}\Phi\mathcal{D}_{2}\Phi\right)(x)\): should be understood as operator-valued distributions, and \(g(x,x^{\prime})\) is the proper power of the parallel transport bi-tensor \(g_{\mu}^{\nu^{\prime}}(x,x^{\prime})\) mapping \(T_{x^{\prime}}M\) to \(T_{x}M\), so that the result is a tensor at \(x\) in the case when the derivative operator \(\mathcal{D}_{2}\) is not scalar. The right hand side is the coinciding point limit of a \(\mathcal{C}^{\infty}\)-function on \(M\times M\), see [19, H1)-H3)], and hence a smooth function on \(M\). Therefore, the expression on the left hand side, which should a priori be understood in a distributional sense, is a smooth function as well. Indeed, it follows from the conditions in the local and covariant renormalization scheme that also \(\omega(:\!\Phi^{k}(x):)\) is a smooth function on \(M\) as long as \(\omega\) is a Hadamard state [32], implying that one can discuss the expectation values of Wick powers without smearing. These results will become crucial in Section IV.

## III Expansion of classical solutions

Our aim in this section is to prove sharp pointwise bounds for scalar fields near the Cauchy horizon of subextremal RNdS and KdS spacetimes (see Corollary III.4). We begin by discussing the massless scalar wave equation on a fixed subextremal KdS background \(g=g_{\lambda,a}^{\text{KdS}}\); see (3). We write \(\Delta_{r}=\Delta_{r}^{\text{KdS}}\). Passing to the coordinates \(t_{\pm}\) and \(\varphi_{\pm}\) from (6)-(7) amounts to replacing \(\partial_{t},\partial_{\varphi}\), and \(\partial_{r}\) by \(\partial_{t_{\pm}}\), \(\partial_{\varphi_{\pm}}\), and \(\partial_{r}\pm\frac{\chi}{\Delta_{r}}((r^{2}+a^{2})\partial_{t_{\pm}}+a\partial_{\varphi_{\pm}})\), respectively, so the wave operator \[\rho^{2}\square_{g}=-\frac{\chi^{2}}{\Delta_{r}}\big{(}(r^{2}+a^{2})\partial_{t}+a\partial_{\varphi}\big{)}^{2}+\frac{\chi^{2}}{\Delta_{\theta}\sin^{2}\theta}(a\sin^{2}\theta\,\partial_{t}+\partial_{\varphi})^{2}+\partial_{r}\Delta_{r}\partial_{r}+\frac{1}{\sin\theta}\partial_{\theta}\Delta_{\theta}\sin\theta\,\partial_{\theta}\] becomes \[\rho^{2}\square_{g}=\partial_{r}\Delta_{r}\partial_{r}\pm\chi\big{(}(r^{2}+a^{2})\partial_{t_{\pm}}+a\partial_{\varphi_{\pm}}\big{)}\partial_{r}\pm\partial_{r}\chi\big{(}(r^{2}+a^{2})\partial_{t_{\pm}}+a\partial_{\varphi_{\pm}}\big{)}+\frac{\chi^{2}}{\Delta_{\theta}\sin^{2}\theta}(a\sin^{2}\theta\,\partial_{t_{\pm}}+\partial_{\varphi_{\pm}})^{2}+\frac{1}{\sin\theta}\partial_{\theta}\Delta_{\theta}\sin\theta\,\partial_{\theta}. \tag{15}\] Recall that the coordinates \(t_{+},\varphi_{+}\) are valid in the union of the regions I and II as well as the future event horizon \(\mathcal{H}\) in Figure 1, while \(t_{-},\varphi_{-}\) are valid in the union of the regions I and III as well as the cosmological horizon \(\mathcal{H}_{c}\), and also in the union of the regions II and IV as well as the Cauchy horizon \(\mathcal{CH}\); the level sets of \(t_{+}\) are transversal to the future event horizon, and the level sets of \(t_{-}\) are transversal to the cosmological horizon (in regions I and III) and to the Cauchy horizon (in regions II and IV).
Let \(t_{*}\) denote a time function in the union of regions I, II, and III which in regions I and II differs from \(t_{+}\) by a smooth function of \(r\in(r_{1},r_{3})\), and which in regions I and III differs from \(t_{-}\) by a smooth function of \(r\in(r_{2},\infty)\). We may choose such a function \(t_{*}\) to have spacelike level sets (such as \(\Sigma_{-}\) and \(\Sigma_{+}\) in Figure 4). We now recall: **Theorem III.1** ([36]).: _Let \(r_{-}\in(r_{1},r_{2})\) and \(r_{+}>r_{3}\). Write \(e^{-\alpha t_{*}}H^{s}\) for the space of functions \(\psi=\psi(t_{*},x)\) (where \(x\in\mathbb{R}^{3}\) denotes Cartesian coordinates on \((r_{-},r_{+})\times\mathbb{S}^{2}\subset\mathbb{R}^{3}\)) with support in \(t_{*}\geq 0\) so that_ \[\|\psi\|_{e^{-\alpha t_{*}}H^{s}}^{2}:=\sum_{j+|\beta|\leq s}\|e^{\alpha t_{*} }\partial_{t_{*}}^{j}\partial_{x}^{\beta}\psi\|_{L^{2}}^{2}<\infty,\] _where \(\|\cdot\|_{L^{2}}\) is the spacetime \(L^{2}\)-norm. Then there exists \(\alpha_{1}>0\) so that the following holds. Let \(s>\frac{1}{2}+\alpha_{1}\max(\frac{1}{\kappa_{2}},\frac{1}{\kappa_{3}})\), where \(\kappa_{j}\) denotes the surface gravity of the horizon \(r=r_{j}\). Then for \(\alpha^{\prime}<\alpha_{1}\) and for all \(b\in e^{-\alpha^{\prime}t_{*}}H^{s}\), the unique retarded solution of \(\Box_{g}\psi=b\) has an asymptotic expansion_ \[\psi-\sum_{j=1}^{N}\sum_{k=0}^{k_{j}-1}t_{*}^{k}e^{-i\sigma_{j}t_{*}}v_{jk}= \tilde{\psi}\in e^{-\alpha^{\prime}t_{*}}H^{s}, \tag{16}\] _where the \(\sigma_{1},\ldots,\sigma_{N}\) are the finitely many quasinormal modes with \(\Im\sigma_{j}>-\alpha_{1}\), \(k_{j}\) is the multiplicity of \(\sigma_{j}\), and \(e^{-i\sigma_{j}t_{*}}\sum_{k=0}^{k_{j}}t_{*}^{k}v_{jk}\) is a corresponding (smooth) mode solution. Furthermore, \(\|\tilde{\psi}\|_{e^{-\alpha^{\prime}t_{*}}H^{s}}\leq C\|b\|_{e^{-\alpha^{ \prime}t_{*}}H^{s}}\) for some constant \(C\) depending only on \(r_{-},r_{+},\alpha^{\prime},s\), and the KdS black hole parameters._ **Theorem III.2**.: _In the notation of Theorem III.1, mode stability holds--that is, the quasinormal mode \(\sigma_{1}=0\) has \(k_{1}=1\) and corresponding mode solution equal to a constant, and all other \(\sigma_{j}\) have \(\Im\sigma_{j}<0\)--under either one of the following two conditions._ 1. _The KdS black hole is slowly rotating, i.e._ \(0<9\Lambda M^{2}<1\) _and_ \(|a/M|\leq C(\Lambda M^{2})\) _where_ \(C\colon(0,1)\to(0,\infty)\) _is a positive continuous function_ _[_23; 37_]__._ 2. _The KdS black hole has a small mass, i.e._ \(|a/M|\in[0,1)\) _and_ \(0<\Lambda M^{2}\leq C(|a/M|)\) _where_ \(C\colon[0,1)\to(0,\infty)\) _is a positive continuous function_ _[_24_]__._ The combination of the two results implies that for source terms \(b\) which vanish for large \(t_{*}\), the exponential decay rate of \(\psi\) (in an \(L^{2}\)-sense, but via Sobolev embedding for \(s>2+k\) also in the \(\mathcal{C}^{k}\) sense, i.e. in a pointwise sense with up to \(k\) derivatives) towards a constant is at least \(\alpha-\epsilon\) for all \(\epsilon>0\) where \(\alpha\) is the spectral gap, i.e. the infimum of \(-\Im\sigma\) over all non-zero quasinormal modes \(\sigma\in\mathbb{C}\).1 Footnote 1: There is a technical subtlety here: the constant \(\alpha_{1}\) for which Theorem III.1 is proved is given in terms of dynamical quantities associated with the trapped set. We are thus implicitly assuming that \(\alpha_{1}\geq\alpha\). This is valid in the second setting described by Theorem III.2 and proved in the given reference. 
It is also valid in the Schwarzschild–de Sitter case \(a=0\). In the slowly rotating Kerr–de Sitter case, it is true as well, and follows from the validity of a _full_ resonance expansion with error terms having any desired amount of exponential decay, as demonstrated in [37], except in this case the remainder \(\tilde{\psi}\) lies in \(e^{-\alpha^{\prime}t_{*}}H^{s-d}\) where \(d\) depends on \(\alpha^{\prime}\) (in [37] an estimate is stated only for \(s-d=1\), but the arguments given there give the claimed stronger statement). If one wishes to consider the full subextremal range of KdS parameters, one needs to _assume_ the validity of mode stability; while this is not known rigorously, there is strong numerical support [26; 27].2 Footnote 2: In the present work, we only use exponential decay to constants; it is not necessary to know that the exponential decay rate is exactly given by the spectral gap. This is important since Theorem III.1 does not give this more precise (and conjecturally true, but as of yet unproven) information. We thus proceed under the assumption--which as mentioned above is satisfied in the settings of Theorem III.2, and conjecturally in the full subextremmal KdS range--that \[\Box_{g}\psi=b\in e^{-\alpha^{\prime}t_{*}}H^{s+d}\implies\psi=c+\tilde{\psi},\quad c\in\mathbb{C},\ \tilde{\psi}\in e^{-\alpha^{\prime}t_{*}}H^{s},\] (17a) for some \[\alpha^{\prime}>0\] (which must satisfy \[\alpha^{\prime}\leq\alpha\] ), for all sufficiently large \[s\], and some fixed \[d\geq 0\], on the spacetime region where \[(r_{1},r_{2})\ni r_{-}<r<r_{+}\in(r_{3},\infty)\] ; and \[|c|+\|\tilde{\psi}\|_{e^{-\alpha^{\prime}t_{*}}H^{s}}\leq C\|b\|_{e^{-\alpha^{ \prime}t_{*}}H^{s+d}}. \tag{17b}\] For the purposes of the present paper, we only need to consider source terms \(b\) whose support is a compact subset of \(\mathbb{R}_{t_{*}}\times(r_{1},r_{3})\times\mathbb{S}^{2}\) (see Figure 4); after a constant shift of \(t_{*}\), we shall thus only consider \(b\) which vanish for \(t_{*}\geq 1\). Turning attention to the black hole interior, we record that the solution \(\psi\) in (17a) thus satisfies \(\Box_{g}\psi=0\) in \(t_{*}\geq 1\) and \(\psi-c\in e^{-\alpha^{\prime}t_{*}}H^{s}([1,\infty)\times(r_{-},r_{\sharp}) \times\mathbb{S}^{2})\) where \(r_{1}<r_{-}<r_{\sharp}<r_{2}\). The following is the main technical result of this section. **Proposition III.3**.: _Let \(r_{1}<r_{\flat}<r_{\sharp}<r_{2}\). Fix a smooth function \(U\colon[r_{1},r_{2})\to\mathbb{R}\) so that the level sets of \(u:=t_{-}+U(r)\) are spacelike. Let \(u_{\sharp}\) denote the \(u\)-coordinate of the point \(t_{*}=1,r=r_{\sharp}\), and define the domain \(\Omega=(u_{\sharp},\infty)_{u}\times(r_{1},r_{\sharp})_{r}\times\mathbb{S}^{2}\) inside the KdS spacetime. (See Figure 2.) Let \(u_{\flat}>u_{\sharp}\). Let \(\alpha^{\prime}>0\), and suppose \(\psi\) is a solution of \(\Box_{g}\psi=0\) on \(\Omega\) which is of the form_ \[\psi=c+\tilde{\psi},\qquad c\in\mathbb{C},\quad\tilde{\psi}|_{\Omega^{\prime} }\in e^{-\alpha^{\prime}u}H^{s}(\Omega^{\prime}) \tag{18}\] _where \(\Omega^{\prime}=\Omega\cap\{r_{\flat}<r<r_{\sharp}\}\) and \(s>\frac{5}{2}+\frac{\alpha^{\prime}}{\kappa_{1}}+m\), \(m\in\mathbb{N}_{0}\). 
Then there exists a function_ \[\psi_{0}=\psi_{0}(u,\omega)\in\mathcal{C}^{m}((u_{\sharp},u_{\flat})\times \mathbb{S}^{2})\] _so that for all \(j,k\in\mathbb{N}_{0}\) and \(\gamma\in\mathbb{N}_{0}^{2}\) with \(j+k+|\gamma|\leq m\) we have the pointwise bound3_ Footnote 3: If \(\frac{\alpha^{\prime}}{\kappa_{1}}=\ell+\delta\) with \(\ell\in\mathbb{N}\) and \(\delta\in(0,1]\), the \(0\)-th order Taylor expansion of \(\psi(r,u,\omega)=\psi_{0}(u,\omega)+\ldots\) at \(r=r_{1}\) can be improved to an \(\ell\)-th order expansion with a remainder term whose \(j\)-th \(r\)-derivative is of size \(\mathcal{O}((r-r_{1})^{\frac{\alpha^{\prime}}{\kappa_{1}}-j})\). Since in our application it only matters that \(\alpha^{\prime}>0\), we content ourselves with the stated version. An analogous comment applies to Corollary III.4 below when \(\beta>1\). \[|\partial_{r}^{j}\partial_{u}^{k}\partial_{\omega}^{\gamma}\big{(}\psi(r,u, \omega)-\psi_{0}(u,\omega)\big{)}|\leq C_{jk\gamma}(r-r_{1})^{\min(\frac{ \alpha^{\prime}}{\kappa_{1}},1)-j}\|\tilde{\psi}\|_{\Omega^{\prime}}\|_{e^{- \alpha^{\prime}u}H^{s}(\Omega^{\prime})}. \tag{19}\] We remark that Proposition III.3 is different from (and less delicate than) scattering theory from the event horizon (rather than from a hypersurface in the black hole interior) to the Cauchy horizon in the black hole interior, as studied for example in [38]. Proposition III.3 has the following immediate consequence, which will be used as a black box in Section IV: **Corollary III.4** (Pointwise bounds near the Cauchy horizon of KdS).: _Set \(\beta^{\prime}=\frac{\alpha^{\prime}}{\kappa_{1}}\) (which satisfies \(0<\beta^{\prime}\leq\beta=\frac{\alpha}{\kappa_{1}}\)). Fix \(T_{0}<T_{1}\), \(r_{\sharp}\in(r_{1},r_{2})\), \(r_{+}\in(r_{3},\infty)\), and \(m\in\mathbb{N}_{0}\). Fix further \(u_{\sharp}<u_{\flat}\) in the notation of Proposition III.3. Then there exist \(m^{\prime}\in\mathbb{N}\) and a constant \(C\) so that for all \(b\in\mathcal{C}^{m^{\prime}}(\mathbb{R}_{t_{*}}\times(r_{1},\infty)\times \mathbb{S}^{2})\) with support in \(\{T_{0}\leq t_{*}\leq T_{1},\ r_{\sharp}\leq r\leq r_{+}\}\), the retarded solution of \(\Box_{g}\psi=b\) satisfies_ \[|\partial_{u}^{k}\partial_{\omega}^{\gamma}\psi(r,u,\omega)| \leq C_{k\gamma}\|b\|_{\mathcal{C}^{m^{\prime}}}, \tag{20}\] \[|\partial_{r}^{j}\partial_{u}^{k}\partial_{\omega}^{\gamma}\psi( r,u,\omega)| \leq C_{jk\gamma\epsilon}(r-r_{1})^{\min(\beta^{\prime},1)-\epsilon-j}\|b\|_{ \mathcal{C}^{m^{\prime}}} \tag{21}\] _in the region \(\{u_{\sharp}<u<u_{\flat},\ r_{1}<r<r_{\sharp}\}\) for all \(\epsilon>0\) and for all \(j,k,\gamma\) with \(j+k+|\gamma|\leq m\)._ Proof of Proposition iii.3.: By subtracting from \(\psi\) the constant \(c\) (which solves the wave equation), we may assume that \(\psi=\tilde{\psi}\). We sketch two different proofs of the estimate (19); the first one closely follows arguments from [13], whereas the second one is more direct and can in principle be extended to produce more precise asymptotic expansions at \(r=r_{1}\), given asymptotic expansions (resonance expansions, or even expansions into powers of \(t_{*}\) as in Price's law on Kerr) of \(\psi\) in \(r_{\flat}\leq r\leq r_{\sharp}\). 
**First proof.** We can adapt the methods of [13]; in fact, the following arguments ultimately give a simpler proof of the main results of [13], in that the structure of spacetime in the region \(r\geq r_{2}\), or indeed \(r\geq r_{\sharp}\in(r_{1},r_{2})\), plays no role, _given the a priori assumption_ (18) on the structure of the solution of the wave equation.4 To wit, we work in the region \(r\leq r_{\sharp}\) and as in the reference consider the wave equation on an artificial extension of the KdS spacetime to \(r<r_{1}\) which features yet another artificial horizon at some value \(r=r_{0}<r_{1}\); and we place a time-translation-invariant complex absorbing operator \(\mathcal{Q}\) in the region \(r<r_{1}\). In the notation of [13], we thus work in Footnote 4: This a priori assumption in turn follows from analysis in a neighborhood of \(\{r_{2}\leq r\leq r_{3}\}\), cf. Theorems III.1 and III.2 above, which can be done completely independently of the analysis in the black hole interior. \[\Omega_{\text{ext}}:=[0,\infty)_{t^{*}}\times[r_{0}-2\delta,r_{\sharp}]\times \mathbb{S}^{2}\] (where we write \(t^{*}\) for the function denoted \(t_{*}\) in [13] to distinguish it from the time function \(t_{*}\)); and for \(r_{1}\leq r<r_{2}\), the difference \(t^{*}-t_{-}\) is a smooth function of \(r\), and thus so is \(t^{*}-u\). Let \(\zeta=\zeta(r)\) denote a smooth function which equals \(1\) near \((-\infty,r_{\flat}]\) and \(0\) near \([r_{\sharp},\infty)\), then \(\zeta\psi\) satisfies the equation \[\Box_{g}(\zeta\psi)=b^{\prime}:=[\Box_{g},\zeta]\psi\in e^{-\alpha^{\prime}t_ {*}}H^{s-1}(\Omega_{\rm ext})\] in \(r>r_{1}\), with \(b\) vanishing outside \(\{r_{\flat}\leq r\leq r_{\sharp}\}\). Following the strategy of [13], by uniqueness of retarded solutions in \(r>r_{1}\), \(\zeta\psi\) is also equal to the restriction to \(r>r_{1}\) of the solution \(\psi_{\rm ext}\) of the extended wave equation (with complex absorption) \(\mathcal{P}_{\rm ext}\psi_{\rm ext}=b^{\prime}\), where \(\mathcal{P}_{\rm ext}=\Box_{g}-i\mathcal{Q}\), on \(\Omega_{\rm ext}\) with vanishing Cauchy data at \(r=r_{0}-2\delta\) and \(r=r_{\sharp}\). Working on Sobolev spaces, with exponential weights in \(t^{*}\), of functions on \(\Omega_{\rm ext}\) which have supported character at (i.e. vanish beyond) \(r=r_{0}-2\delta\) and \(r=r_{\sharp}\), one can then prove the Fredholm property of \(\mathcal{P}_{\rm ext}\colon\{\psi_{\rm ext}\in e^{-\alpha^{\prime}t^{*}}H^{s }(\Omega_{\rm ext})\colon\mathcal{P}_{\rm ext}\psi_{\rm ext}\in e^{-\alpha^{ \prime}t^{*}}H^{s-1}(\Omega_{\rm ext})\}\to e^{-\alpha^{\prime}t^{*}}H^{s-1}( \Omega_{\rm ext})\), where \(s=s(r)\) is now a suitable variable order function subject to the bound \(s<\frac{1}{2}+\frac{\alpha^{\prime}}{\kappa_{1}}\) at the Cauchy horizon \(r=r_{1}\), by following the arguments in [13]. The first simplification afforded by working in \(r\leq r_{\sharp}<r_{2}\) is that there is no trapping in \(\Omega_{\rm ext}\), which is why \(\alpha^{\prime}\) can indeed be taken to be arbitrary (in particular, positive) here. The second simplification is that \(\mathcal{P}_{\rm ext}\) does not have any mode solutions, subject to the vanishing condition in \(r>r_{\sharp}\), which are non-zero in \(r>r_{1}\); this follows from domain of dependence considerations in \(\{r_{1}<r<r_{\sharp}\}\) (contained in region II in Figure 1). 
Having thus recovered \(\zeta\psi\) as the restriction \(\psi_{\rm ext}|_{r>r_{1}}\) of the solution of the extended equation \(\mathcal{P}_{\rm ext}\psi_{\rm ext}=b^{\prime}\), we can apply the radial point estimates of [13, Proposition 2.23] to the extended equation and deduce, upon restriction to \(r>r_{1}\) and the region \(u_{\sharp}<u<u_{\flat}\) (where the weights in \(t^{*}\sim u\sim 1\) are irrelevant) that \[X_{1}\cdots X_{j}\psi\in H^{\frac{1}{2}+\frac{\alpha^{\prime}}{\kappa_{1}}- \epsilon}([r_{1},r_{\sharp})\times(u_{\sharp},u_{\flat})\times\mathbb{S}^{2})\] for all \(j\leq m+2\) where each \(X_{i}\) is one of the vector fields \(\partial_{u}\), \((r-r_{1})\partial_{r}\), \(\partial_{\omega}\) (spherical vector fields). (We use here that \(s>\frac{1}{2}+\frac{\alpha^{\prime}}{\kappa_{1}}+m+2\).) Considering for small \(\epsilon>0\) the quantity \(\beta^{\prime\prime}=\min(\frac{\alpha^{\prime}}{\kappa_{1}},1)-\epsilon\in (0,1)\), we proceed to analyze this condition, which implies \[X_{1}\cdots X_{j}\psi\in H^{\frac{1}{2}+\beta^{\prime\prime}}([0,v_{0})_{V} \times A),\qquad V:=r-r_{1},\ v_{0}:=r_{\sharp}-r_{1},\quad A:=(u_{\sharp},u_{ \flat})\times\mathbb{S}^{2}\subset\mathbb{R}^{3},\] \(j\leq m+2\), where \(X_{i}=\partial_{u},V\partial_{V},\partial_{\omega}\). Using two derivatives along \(\partial_{u},\partial_{\omega}\), Sobolev embedding on implies \(X_{1}\cdots X_{j}\psi\in\mathcal{C}^{0}(A;H^{\frac{1}{2}+\beta^{\prime\prime}}([0,v_ {0})))\) for \(j\leq m\), and therefore \[\psi\in\bigcap_{j=0}^{m}\mathcal{C}^{m-j}\big{(}A;H^{\frac{1}{2}+\beta^{\prime \prime};j}([0,v_{0})\big{)},\] where we write \(H^{s;j}([0,v_{0}))\) for the space of all \(u\in H^{s}([0,v_{0}))\) so that \((V\partial_{V})^{i}u\in H^{s}([0,v_{0}))\) for all \(i\leq j\). Now, every \(u\in H^{\frac{1}{2}+\beta^{\prime\prime}}([0,v_{0}))\subset\mathcal{C}^{0}([0, v_{0}))\) has a well-defined value \(u(0)\) at \(V=0\). To complete the proof of the estimate (19), it thus suffices to show the following 1-dimensional result (with \(\beta^{\prime\prime}\in(0,1)\) and \(j\in\mathbb{N}_{0}\)): \[u\in H^{\frac{1}{2}+\beta^{\prime\prime};j}([0,v_{0}))\implies|\partial_{V}^{ i}(u-u(0))|\leq C_{i}V^{\beta^{\prime\prime}-i},\qquad 0\leq i\leq j.\] By definition of the space \(H^{\frac{1}{2}+\beta^{\prime\prime};j}\), it suffices to prove this in the case \(j=0\). But this follows directly from Sobolev embedding, which states that \(H^{\frac{1}{2}+\beta^{\prime\prime}}([0,v_{0}))\subset\mathcal{C}^{0,\beta^{ \prime\prime}}([0,v_{0}))\) (Holder space). **Second proof.** For technical simplicity, we will not operate at a sharp level of Sobolev regularity. Multiply \(\rho^{2}\square\) in (15) (with the '\(-\)' sign) by \(\Delta_{r}\), and notice that in terms of \(V:=r-r_{1}\) we have \(\Delta_{r}(r)=-|\Delta_{r}^{\prime}(r_{1})|V+\mathcal{O}(V)^{2}\) near \(V=0\), so \(\Delta_{r}\partial_{r}=(-|\Delta^{\prime}(r_{1})|+\mathcal{O}(V))V\partial_{V}\). 
Therefore, setting \(\Omega_{1}:=\frac{a}{r_{1}^{2}+a^{2}}\), \[L:=\frac{\Delta_{r}}{|\Delta_{r}^{\prime}(r_{1})|^{2}}\rho^{2} \square_{g} \equiv(V\partial_{V})^{2}+\frac{2\chi(r_{1}^{2}+a^{2})}{|\Delta_{ r}^{\prime}(r_{1})|}(\partial_{t_{-}}+\Omega_{1}\partial_{\varphi_{-}})V \partial_{V}\] \[\quad-\frac{\chi^{2}V}{|\Delta_{r}^{\prime}(r_{1})|\Delta_{\theta }\sin^{2}\theta}(a\sin^{2}\theta\,\partial_{t_{-}}+\partial_{\varphi_{-}})^{2 }-\frac{V}{|\Delta_{r}^{\prime}(r_{1})|\sin\theta}\partial_{\theta}\Delta_{ \theta}\sin\theta\,\partial_{\theta}\] modulo terms whose coefficients have at least one additional factor of \(V\). Another way of viewing \(L\) is that it is a differential operator constructed out of the vector fields \(V\partial_{V}\), \(\partial_{t_{-}}+\Omega_{1}\partial_{\varphi_{-}}\), and \(V^{\frac{1}{2}}\partial_{\omega}\) (weighted spherical derivatives) with coefficients that are smooth functions of \(V^{\frac{1}{2}}\) and \(\mathbb{S}^{2}\), and which are independent of \(t_{-},\varphi_{-}\). It is convenient to rewrite this further. Recognizing that \(\frac{|\Delta_{r}^{\prime}(r_{1})|}{2\chi(r_{1}^{2}+a^{2})}=\kappa_{1}\) is the surface gravity, let us introduce the new variable \[U:=e^{-\kappa_{1}t_{-}}.\] Then \(L\) is constructed out of the vector fields \(V\partial_{V}\), \(U\partial_{U}-\frac{\Omega_{1}}{\kappa_{1}}\partial_{\varphi_{-}}\), \(V^{\frac{1}{2}}\partial_{\omega}\) in the above sense, and to leading order (at \(V=0\)) equal to \[L\equiv\Big{(}V\partial_{V}-U\partial_{U}+\frac{\Omega_{1}}{\kappa_{1}} \partial_{\varphi_{-}}\Big{)}V\partial_{V}-\frac{\chi^{2}}{|\Delta_{r}^{ \prime}(r_{1})|\Delta_{\theta}\sin^{2}\theta}(V^{\frac{1}{2}}\partial_{\varphi _{-}})^{2}-\frac{1}{|\Delta_{r}^{\prime}(r_{1})|\sin\theta}V^{\frac{1}{2}} \partial_{\theta}\Delta_{\theta}\sin\theta\,V^{\frac{1}{2}}\partial_{\theta}; \tag{22}\] and indeed its principal part is a Lorentzian signature quadratic form in these vector fields. Using an energy estimate near \(V=0\), with vector field multiplier \(-V^{-2\gamma}U^{-\frac{2\rho^{\prime}}{\kappa_{1}}}X\) (where \(X\) is a future timelike linear combination of \(V\partial_{V}-U\partial_{U}+\frac{\Omega_{1}}{\kappa_{1}}\partial_{\varphi_{-}}\) and \(V\partial_{V}\)) for a suitable weight \(\gamma\) (sufficiently negative), one can thus bound the \(L^{2}\)-norms of \(V\partial_{V}\psi\), \((U\partial_{U}-\frac{\Omega_{1}}{\kappa_{1}}\partial_{\varphi_{-}})\psi\), \(V\frac{1}{2}\partial_{\omega}\psi\), and by \(V\)-integration also \(\psi\) itself, on the full domain \(\Omega\) in the space \(U^{\frac{\alpha^{\prime}}{\kappa_{1}}}V^{\gamma}L^{2}(\Omega)=e^{-\alpha^{ \prime}t_{-}}(r-r_{1})^{-|\gamma|}L^{2}(\Omega)\) by \(\|\psi\|_{e^{-\alpha^{\prime}u}H^{1}(\Omega^{\prime})}\). Since \(\rho^{2}\square_{g}\psi=0\), we also have \(\rho^{2}\square_{g}(A\psi)=0\) for all operators \(A\) which commute with \(\rho^{2}\square_{g}\); such \(A\) are (finite products of) \(\partial_{t_{-}}\), \(\partial_{\varphi_{-}}\), and the Carter operator \(\mathcal{C}:=\frac{\chi^{2}}{\Delta_{\theta}\sin^{2}\theta}(a\sin^{2}\theta \,\partial_{t_{-}}+\partial_{\varphi_{-}})^{2}+\frac{1}{\sin\theta}\partial_{ \theta}\Delta_{\theta}\sin\theta\,\partial_{\theta}\). 
The aforementioned energy estimate thus allows us to bound \(A\psi\), for all such \(A\), in the same weighted \(L^{2}\)-space.5 Footnote 5: A conceptually cleaner but technically considerably more involved procedure which avoids the usage of the subtle Carter operator is as follows: first, one proves higher regularity with respect to the above vector fields, and then commutes the equation \(L\psi=0\) with \(V\partial_{V}\), \(U\partial_{U}-\frac{\Omega_{1}}{\kappa_{1}}\partial_{\varphi_{-}}\), and \(\partial_{\omega}\). See [39] for such a strategy where the role of the Cauchy horizon is played by null infinity, and the commutator vector fields are called 'commutator b-vector fields'. Since by elliptic regularity this in particular controls spherical derivatives of \(\psi\), the terms in (22) involving \(V^{\frac{1}{2}}\partial_{\varphi_{-}}\) and \(V^{\frac{1}{2}}\partial_{\theta}\) can now, due to the presence of \(V^{\frac{1}{2}}\), be considered to be of lower order near \(V=0\). Thus, we can reduce the equation satisfied by \(\psi\) to \[L_{0}\psi:=X_{\rm in}X_{\rm out}\psi=\mathcal{O}(V^{\frac{1}{2}})\psi,\qquad X_{\rm in}=U\partial_{U}-\frac{\Omega_{1}}{\kappa_{1}}\partial_{\varphi_{-}}-V\partial_{V},\quad X_{\rm out}=-V\partial_{V}. \tag{23}\] See Figure 3. This is a concatenation of two transport equations, which we can integrate up from a level set of \(V\) in the region \(\Omega^{\prime}\); the following are the two main observations.

1. Integration of \(X_{\rm in}\) transports decay/asymptotics of \(f:=X_{\rm out}\psi\) on a hypersurface \(V=V_{0}>0\) as \(U=e^{-\kappa_{1}t_{-}}\searrow 0\) to decay/asymptotics at the Cauchy horizon, i.e. as \(r-r_{1}\searrow 0\). For example, the solution of \(X_{\rm in}f(U,V,\theta,\varphi_{-})=0\) with initial condition \(f(U,V_{0},\theta,\varphi_{-})=f_{0}(U,\theta,\varphi_{-})\) is given by \[f(U,V,\theta,\varphi_{-})=f_{0}\Big(UV/V_{0},\theta,\varphi_{-}-\frac{\Omega_{1}}{\kappa_{1}}\log(V/V_{0})\Big)\] (verified below by a short chain-rule computation). Note that \(f_{0}\sim U^{\frac{\alpha^{\prime}}{\kappa_{1}}}=e^{-\alpha^{\prime}t_{-}}\) implies that for fixed \(U=U_{0}\) one has \(f\sim V^{\frac{\alpha^{\prime}}{\kappa_{1}}}=(r-r_{1})^{\frac{\alpha^{\prime}}{\kappa_{1}}}\).6 Footnote 6: Likewise, for \(f_{0}\sim(\log\frac{1}{U})^{-m}\sim t_{-}^{-m}\) one has \(f\sim|\log(r-r_{1})|^{-\frac{\alpha^{\prime}}{\kappa_{1}}m}\) as \(r\searrow r_{1}\) for fixed \(U=U_{0}\). Upon integrating this along \(X_{\rm out}=-(r-r_{1})\partial_{r}\), this is a concrete manifestation of the logarithmic regularity at the Cauchy horizon on Kerr spacetimes discussed in [40].

2. Subsequent integration of \(X_{\rm out}\psi=-V\partial_{V}\psi=f=\mathcal{O}(V^{\frac{\alpha^{\prime}}{\kappa_{1}}})\) from \(V=V_{0}>0\) towards the Cauchy horizon at \(V=0\) produces \(\psi(U,V,\theta,\varphi_{-})=\psi_{0}(U,\theta,\varphi_{-})+\psi_{1}(U,V,\theta,\varphi_{-})\) where \(\psi_{1}=\mathcal{O}(V^{\frac{\alpha^{\prime}}{\kappa_{1}}})\).

For a rigorous proof of (19), one uses equation (23) to improve control on bounds or asymptotic behavior of \(\psi\) near \(V=0\) by half a power of \(V\) by controlling \(L^{2}\)- (or pointwise) norms of the integrations of \(X_{\rm in},X_{\rm out}\). For the Klein-Gordon equation on subextremal RNdS or KdS spacetimes, the same pointwise bounds hold under the assumption of mode stability.
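As a quick consistency check of the solution formula in observation 1 (using nothing beyond the definition of \(X_{\rm in}\) in (23)), write \(s:=UV/V_{0}\) and \(w:=\varphi_{-}-\frac{\Omega_{1}}{\kappa_{1}}\log(V/V_{0})\) for the first and third arguments of \(f_{0}\). Then
\[
\begin{aligned}
U\partial_{U}f&=s\,\partial_{s}f_{0},\qquad \partial_{\varphi_{-}}f=\partial_{w}f_{0},\qquad V\partial_{V}f=s\,\partial_{s}f_{0}-\frac{\Omega_{1}}{\kappa_{1}}\,\partial_{w}f_{0},\\
X_{\rm in}f&=U\partial_{U}f-\frac{\Omega_{1}}{\kappa_{1}}\partial_{\varphi_{-}}f-V\partial_{V}f=s\,\partial_{s}f_{0}-\frac{\Omega_{1}}{\kappa_{1}}\partial_{w}f_{0}-s\,\partial_{s}f_{0}+\frac{\Omega_{1}}{\kappa_{1}}\partial_{w}f_{0}=0.
\end{aligned}
\]
In particular, at fixed \(U=U_{0}\) one has \(f(U_{0},V,\theta,\varphi_{-})=f_{0}(U_{0}V/V_{0},\theta,\,\cdot\,)\), so a decay rate \(f_{0}\sim U^{\alpha^{\prime}/\kappa_{1}}\) is indeed converted into \(f\sim V^{\alpha^{\prime}/\kappa_{1}}\), as stated above.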
In the RNdS setting, mode stability can be checked using separation of variables and a Wronskian argument for the radial ODE (see [41, Section 1.5] for the case of massless fields in the Schwarzschild case, with a scalar field mass being easily incorporated), and a full resonance expansion can be obtained using the techniques of [37, 42]; in the slowly rotating KdS setting and for small scalar field masses, this is mentioned in [23] and follows from [25, Lemma 3.5], and in the small mass KdS setting it is proved in [24]. The proof of Proposition III.3 goes through with only notational changes, and thus also Corollary III.4 holds in these settings. We do not discuss charged scalar fields in this paper. It is known that for some values of the black hole and scalar field parameters they do not satisfy mode stability [43]. ## IV Bounding the state-dependence In the following, we focus our attention on the stress-energy tensor of the free scalar field, which is the most relevant observable of the quantum field for semi-classical gravity. We take the spacetime \(\mathcal{M}\) to be a subextremal physical RNdS or KdS spacetime as described in Section II, and we consider a scalar quantum field theory described by the CCR-algebra \(\mathcal{A}(\mathcal{M})\) on this spacetime. The corresponding stress-energy tensor of the classical scalar field \(\phi(x)\) is given by \[T_{\mu\nu}(x)= (1-2\xi)\partial_{\mu}\phi(x)\partial_{\nu}\phi(x)+\xi\left(R_{\mu \nu}\phi(x)^{2}-2\xi\phi(x)\nabla_{\mu}\nabla_{\nu}\phi(x)\right) \tag{24}\] \[-\frac{1}{2}g_{\mu\nu}\left((1-4\xi)\partial_{\sigma}\phi(x) \partial^{\sigma}\phi(x)-4\xi\phi(x)\nabla_{\sigma}\nabla^{\sigma}\phi(x)+(m^ {2}+\xi R)\phi(x)^{2}\right)\,.\] Since this is local and quadratic in the field \(\phi\), the corresponding observable of the quantum field requires renormalization to be well-defined. Let us assume that the quantum stress-energy tensor is renormalized locally and covariantly utilizing Hadamard point-split renormalization [32], and that the renormalization ambiguities have been fixed in some way. Let us denote the resulting observable by \(T^{\text{ren}}_{\mu\nu}(x)\). It is a special case of a finite sum of Wick squares of differentiated fields, with derivatives up to second order, as discussed at the end of Section II.2. Therefore, if \(\omega\) is a Hadamard state on \(\mathcal{A}(\mathcal{M})\), then the expectation value \(\omega(T^{\text{ren}}_{\mu\nu}(x))\) will be well-defined and finite for all \(x\in\mathcal{M}\), but will in general diverge at the boundaries of \(\mathcal{M}\) considered as a submanifold of its maximal analytic extension. We are interested in the divergence at the ingoing Cauchy horizon \(\mathcal{CH}\). To study this divergence, let us fix a reference Hadamard state \(\omega_{0}\) on \(\mathcal{A}(\mathcal{M})\). One possible choice that has been used in the literature is the Unruh state [44, 45, 46, 19, 47, 48], but one could also make a different choice. The expectation values of the components \(T^{\text{ren}}_{\mu\nu}(x)\) of the renormalized stress-energy tensor in some coordinate system that is regular across \(\mathcal{CH}\) in the state \(\omega\) can then be written as \[\omega(T^{\text{ren}}_{\mu\nu}(x)) =\omega_{0}(T^{\text{ren}}_{\mu\nu}(x))+\omega(T^{\text{ren}}_{ \mu\nu}(x))-\omega_{0}(T^{\text{ren}}_{\mu\nu}(x)) \tag{25}\] \[=\omega_{0}(T^{\text{ren}}_{\mu\nu}(x))+\lim_{x^{\prime}\to x }D_{\mu\nu}(x,x^{\prime})W[\omega,\omega_{0}](x,x^{\prime})\] for \(x\in\mathcal{M}\). 
In the last step, we have used (14) to rewrite the difference of expectation values. The differential operator \(D_{\mu\nu}(x,x^{\prime})\) can be written as \[D_{\mu\nu}(x,x^{\prime})= (1-2\xi)g^{\nu^{\prime}}_{(\nu}(x,x^{\prime})\partial_{\mu)} \partial_{\nu^{\prime}}+\xi\left(R_{\mu\nu}(x)-2\nabla_{\mu}\nabla_{\nu}\right) \tag{26}\] \[-\frac{1}{2}g_{\mu\nu}(x)\left((1-4\xi)g^{\sigma\rho}(x)g^{ \rho^{\prime}}_{\rho}(x,x^{\prime})\partial_{\sigma}\partial_{\rho^{\prime}} +(m^{2}+\xi R)-4\xi\nabla_{\sigma}\nabla^{\sigma}\right)\,,\] where \(g^{\beta^{\prime}}_{\alpha}(x,x^{\prime})\) is the bi-tensor of parallel transport, round brackets around indices indicate a symmetrization, and (un)primed derivatives act on the (un)primed variable. The first term in (25) is independent of the state \(\omega\), and depends only on the reference state \(\omega_{0}\). This is the state-independent part discussed before. It can be computed numerically, and indeed its quadratic leading divergence in \((r-r_{1})\) at the Cauchy horizon has been found to be non-vanishing on RNdS [19], as well as on Reissner-Nordstrom [21] and Kerr [22]. The numerical results on RNdS, Kerr and Reissner-Nordstrom also indicate a smooth dependence of the coefficient of the \((r-r_{1})^{-2}\)-divergence on the spacetime parameters and the parameters of the scalar field. Moreover, first numerical results on KdS [48] indicate that the coefficient of the \((r-r_{1})^{-2}\)-divergence is indeed generically nonvanishing. The remainder of this section will be devoted to bounding the potential divergence of the state-dependent contribution of the second term in (25), \[t_{\mu\nu}^{\omega}(x)=\lim_{x^{\prime}\to x}D_{\mu\nu}(x,x^{\prime})W[ \omega,\omega_{0}](x,x^{\prime})\,, \tag{27}\] as \(x\) approaches the Cauchy horizon. In fact, one can show the following more general result: **Proposition IV.1**.: _Let \(x\in\mathcal{CH}\) be a point on the Cauchy horizon of RNdS or KdS. Let \(\mathcal{U}\) be a small open neighbourhood of \(x\) with compact closure in the analytic extension of \(\mathcal{M}\) and contained in the coordinate chart \((V,y^{i})\). Here, the coordinates \((y^{i})=(t_{-},\theta,\varphi_{-})\) parameterize \(\mathcal{CH}\), and \(V=r-r_{1}\) is chosen so that \(\mathcal{U}_{\mathcal{M}}:=\mathcal{U}\cap\mathcal{M}=\mathcal{U}\cap\{V>0\}\). Assume that the spectral gap \(\alpha\) of quasinormal modes is strictly positive. Set \(\beta=\alpha/\kappa_{1}\), and let \(\mathcal{D}_{j}\), \(j\in\{1,2\}\) be (not necessarily scalar) differential operators of order \(m_{j}\), so that \(m_{1}+m_{2}\leq 2\), and with coefficients that are smooth on \(\mathcal{M}\cup\mathcal{U}\). Let \(\omega_{j}\) be Hadamard states on \(\mathcal{M}\), and set_ \[A[\omega_{1},\omega_{2}](x)=\lim_{x^{\prime}\to x}\big{(}g(x,x^{\prime}) \mathcal{D}_{1}(x)\mathcal{D}_{2}(x^{\prime})W[\omega_{1},\omega_{2}](x,x^{ \prime})\big{)}\, \tag{28}\] _where, as in (14), \(g(x,x^{\prime})\) is the proper power of the parallel transport bi-tensor, so that \(A[\omega_{1},\omega_{2}](z)\) is a \((k,l)\)-tensor at \(z\) for some \(k,l\in\mathbb{N}_{0}\) and all \(z\in\mathcal{M}\). 
Then the tensor components \(A[\omega_{1},\omega_{2}]_{\nu_{1},\ldots,\nu_{l}}^{\mu_{1},\ldots,\mu_{k}}(V,\cdot)\) are smooth functions of \(y^{i}\) on \(\mathcal{U}_{\mathcal{M}}\) and there is a constant \(C>0\) so that_ \[\Big{|}V^{2-\beta^{\prime}}A[\omega_{1},\omega_{2}]_{\nu_{1},\ldots,\nu_{l}}^ {\mu_{1},\ldots,\mu_{k}}(x)\Big{|}\leq C \tag{29}\] _uniformly in \(y^{i}\) within \(\mathcal{U}_{\mathcal{M}}\) for some \(0<\beta^{\prime}<\min(\beta,1)\)._ (Conjecturally, one can take \(\beta^{\prime}\) to be arbitrarily close to \(\min(\beta,1)\).) From this, the corresponding claim for \(t_{\mu\nu}(x)\) follows immediately by choosing the right combination of derivative operators. To prove Proposition IV.1, we will first show that \(W[\omega,\omega_{0}](x,x^{\prime})\), with \(x\) and \(x^{\prime}\) in \(\mathcal{U}_{\mathcal{M}}\), can be rewritten as a series of forward solutions \(E^{+}(b_{i})\) to the Klein-Gordon equation with smooth and compactly supported sources \(b_{i}\in\mathcal{C}_{0}^{\infty}(\mathcal{M})\). **Lemma IV.2**.: _Let \(\mathcal{M}\) be the physical RNdS or KdS spacetime, and let \(\omega_{1}\), \(\omega_{2}\) be Hadamard states on \(\mathcal{A}(\mathcal{M})\). Let \(x,y\in\mathcal{U}_{\mathcal{M}}\) as described in Proposition IV.1. Then there exists a sequence \((b_{i})_{i\in\mathbb{N}}\subset\mathcal{C}_{0}^{\infty}(\mathcal{M})\) of real-valued test functions satisfying_ \[\sum_{i}\|b_{i}\|_{\mathcal{C}^{m}}^{2}=C(m)<\infty \tag{30}\] _for any \(m\in\mathbb{N}\) and some constants \(C(m)>0\) and, in a distributional sense on \(\mathcal{U}_{\mathcal{M}}\times\mathcal{U}_{\mathcal{M}}\),_ \[W[\omega_{1},\omega_{2}](x,y)=\sum_{i}c_{i}E^{+}(b_{i})(x)E^{+}(b_{i})(y)\,, \tag{31}\] _with \(c_{i}=\pm 1\)._ Proof.: Recall that since both \(\omega_{1}\) and \(\omega_{2}\) are Hadamard states on \(\mathcal{A}(\mathcal{M})\), \(W[\omega_{1},\omega_{2}](x,y)\) is a smooth, real, and symmetric function on \(\mathcal{M}\times\mathcal{M}\) which solves \(\mathcal{P}(x)W[\omega_{1},\omega_{2}](x,y)=\mathcal{P}(y)W[\omega_{1},\omega _{2}](x,y)=0\). In the rest of the proof, we will write \(W(x,y)=W[\omega_{1},\omega_{2}](x,y)\) for brevity of notation. Let \(\Sigma_{\pm}\) be two Cauchy surfaces of \(\mathcal{M}\) to the past of \(\mathcal{U}_{\mathcal{M}}\) satisfying \(\Sigma_{+}\subset I^{+}(\Sigma_{-})\), and define a subordinate partition of unity \((\chi_{+},\chi_{-})\in\mathcal{C}^{\infty}(\mathcal{M})\) on \(\mathcal{M}\) satisfying \(\chi_{\pm}=1\) on \(J^{\pm}(\Sigma_{\pm})\) and \(\chi_{\pm}=0\) on \(J^{\mp}(\Sigma_{\mp})\). Then the linear map \[\mathcal{C}_{0}^{\infty}(\mathcal{U}_{\mathcal{M}})\to\mathcal{C}^{\infty}( \mathcal{M}),\,f\mapsto\tilde{f}=\mathcal{P}(\chi_{+}E(f)) \tag{32}\] maps test functions supported in \(\mathcal{U}_{\mathcal{M}}\) to test functions supported in the closure of \(J^{+}(\Sigma_{-})\cap J^{-}(\Sigma_{+})\cap J(\mathcal{U}_{\mathcal{M}})\). The closure of this set is a compact subset of \(\mathcal{M}\) and we will call it \(G\). Moreover, the map satisfies \(E(\tilde{f})=E(f)\) for any \(f\in\mathcal{C}_{0}^{\infty}(\mathcal{U}_{\mathcal{M}})\). Taking into account that the kernel of \(E\) as a map acting on \(\mathcal{C}_{0}^{\infty}(\mathcal{M})\) is \(\mathcal{P}\mathcal{C}_{0}^{\infty}(\mathcal{M})\), there must be a function \(f_{0}\in\mathcal{C}_{0}^{\infty}(\mathcal{M})\) so that \(f=\tilde{f}+\mathcal{P}f_{0}\). Let \(f\), \(h\in\mathcal{C}_{0}^{\infty}(\mathcal{U}_{\mathcal{M}})\). 
Then by an application of Green's second identity (i.e. integration by parts), the preceding results entail \[\int\limits_{\mathcal{M}\times\mathcal{M}}\!\!W(x,y)f(x)h(y)\,\mathrm{d}vol_{g}(x)\,\mathrm{d}vol_{g}(y)=\int\limits_{\mathcal{M}\times\mathcal{M}}\!\!W(x,y)\tilde{f}(x)\tilde{h}(y)\,\mathrm{d}vol_{g}(x)\,\mathrm{d}vol_{g}(y)\,,\] since \(\mathcal{P}(x)W(x,y)=\mathcal{P}(y)W(x,y)=0\) and \(f_{0}\), \(h_{0}\) are compactly supported. The right-hand side only involves \(W(x,y)\) for \((x,y)\in G\times G\), where it can be expressed in terms of a compactly supported kernel \(B\). Indeed, following the proof of [49, Lemma 3.7], for \((x,y)\in G\times G\), \(B\) is of the form \[B(x,y)=\mathcal{P}(x)\mathcal{P}(y)\chi(x)\chi(y)W(x,y)\,. \tag{36}\] Here, \(\chi\in\mathcal{C}^{\infty}(\mathcal{M})\) is equal to one on \(I^{-}(\sigma_{-})\), and vanishes on \(I^{+}(\sigma_{+})\), where \(\sigma_{\pm}\) are two further Cauchy surfaces to the past of \(\Sigma_{-}\) with \(\sigma_{+}\subset I^{+}(\sigma_{-})\). An illustration of the various Cauchy surfaces and relevant subsets of \(\mathcal{M}\) is shown in Figure 4.

Figure 4: Illustration of the construction for the proof of Lemma IV.2. The orange ellipse represents the set \(\mathcal{U}\). \(J^{-}(\mathcal{U}_{\mathcal{M}})\) is indicated by the dashed orange lines. The blue and red hypersurfaces represent \(\Sigma_{\pm}\) and \(\sigma_{\pm}\), respectively. The filled blue region is the compact set \(G\), the red region is \(\mathrm{supp}(B)\).

Next, we note that \(B\in\mathcal{C}_{0}^{\infty}(\tilde{G}\times\tilde{G})\) can be written in the form \[B(x,y)=\sum_{i}c_{i}b_{i}(x)b_{i}(y)\,, \tag{37}\] with \(c_{i}=\pm 1\) and the \(b_{i}\in\mathcal{C}_{0}^{\infty}(\tilde{G})\) are real-valued and satisfy (30). This follows for example from [49, App. B] and the symmetry properties of \(B\). Therefore, the functions \[B_{n}(x,y)=\sum_{i=0}^{n}c_{i}b_{i}(x)b_{i}(y)\] are integrable and bounded by an integrable function, namely the characteristic function of \(\mathrm{supp}(B)\) multiplied by the constant \(C\) from (30). Thus, for \(f\), \(h\in\mathcal{C}_{0}^{\infty}(\mathcal{U}_{\mathcal{M}})\), dominated convergence yields \[\int\limits_{\mathcal{M}\times\mathcal{M}}\!\!\sum_{i}c_{i}b_{i}(x)b_{i}(y)E(f)(x)E(h)(y)\,\mathrm{d}vol_{g}(x)\,\mathrm{d}vol_{g}(y)=\sum_{i}c_{i}\int\limits_{\mathcal{M}}\!b_{i}(x)E(f)(x)\,\mathrm{d}vol_{g}(x)\int\limits_{\mathcal{M}}\!b_{i}(y)E(h)(y)\,\mathrm{d}vol_{g}(y)\,.\]
This proves the representation (31) and hence the lemma. Turning to the proof of Proposition IV.1, write \(\psi_{i}:=E^{+}(b_{i})\), so that on \(\mathcal{U}_{\mathcal{M}}\times\mathcal{U}_{\mathcal{M}}\) we have \(W[\omega,\omega_{0}](x,x^{\prime})=\sum_{i}c_{i}\psi_{i}(x)\psi_{i}(x^{\prime})\). On compact subsets \(K\times K^{\prime}\subset\mathcal{U}_{\mathcal{M}}\times\mathcal{U}_{\mathcal{M}}\) one obtains bounds of the form \[\Big{|}\sum_{i}c_{i}\partial_{\mu}\psi_{i}(x)\partial_{\nu}\psi_{i}(x^{\prime})\Big{|}\leq\tilde{C}\sum_{i}\left\|b_{i}\right\|_{\mathcal{C}^{m}}^{2}\] for some constant \(\tilde{C}>0\). Similar estimates can be obtained for any other number of derivatives. From these bounds one can deduce uniform convergence of the partial sums over \(c_{i}\partial_{\mu}\psi_{i}(x)\partial_{\nu}\psi_{i}(x^{\prime})\) on \(K\times K^{\prime}\). The uniform convergence, together with the convergence of the series \(\sum_{i}c_{i}\psi_{i}(x)\psi_{i}(x^{\prime})\) to \(W[\omega,\omega_{0}](x,x^{\prime})\), justifies the interchange of the differentiation and the infinite sum. Next, we consider the case where \(x\) approaches the Cauchy horizon from within \(\mathcal{U}_{\mathcal{M}}\). The bounds obtained in Section III, Corollary III.4, can then be used to conclude \[\left|\sum_{i}c_{i}\partial_{y}^{\gamma}\psi_{i}(x)\partial_{y}^{\delta}\psi_{i}(x)\right|\leq\sum_{i}\left|\partial_{y}^{\gamma}\psi_{i}(x)\right|\left|\partial_{y}^{\delta}\psi_{i}(x)\right|\leq C_{1}\sum_{i}\left\|b_{i}\right\|_{\mathcal{C}^{m_{1}}}^{2}\,, \tag{42a}\] \[\left|V^{1-\beta^{\prime}}\sum_{i}c_{i}\partial_{y}^{\gamma}\psi_{i}(x)\partial_{r}\partial_{y}^{\delta}\psi_{i}(x)\right|\leq\sum_{i}\left|V^{1-\beta^{\prime}}\partial_{r}\partial_{y}^{\delta}\psi_{i}(x)\right|\left|\partial_{y}^{\gamma}\psi_{i}(x)\right|\leq C_{2}\sum_{i}\left\|b_{i}\right\|_{\mathcal{C}^{m_{2}}}^{2}\,, \tag{42b}\] \[\left|V^{2-2\beta^{\prime}}\sum_{i}c_{i}(\partial_{r}\psi_{i}(x))^{2}\right|\leq\sum_{i}\left|V^{1-\beta^{\prime}}\partial_{r}\psi_{i}(x)\right|^{2}\leq C_{3}\sum_{i}\left\|b_{i}\right\|_{\mathcal{C}^{m_{3}}}^{2}\,, \tag{42c}\] \[\left|V^{2-\beta^{\prime}}\sum_{i}c_{i}\psi_{i}(x)\partial_{r}^{2}\psi_{i}(x)\right|\leq\sum_{i}\left|V^{2-\beta^{\prime}}\partial_{r}^{2}\psi_{i}(x)\right|\left|\psi_{i}(x)\right|\leq C_{4}\sum_{i}\left\|b_{i}\right\|_{\mathcal{C}^{m_{4}}}^{2}\,. \tag{42d}\] Here, \(\gamma\), \(\delta\in\mathbb{N}^{3}\cup\{0\}\) are multi-indices, \(C_{j}>0\), \(j=1,2,3,4\), are constants, and \(m_{j}\in\mathbb{N}\). It then follows from the convergence in (30) that the infinite sums on the right hand sides of (42) are finite. Since we chose a set of coordinates in which the metric is analytically extendable across the ingoing Cauchy horizon, and since \(\mathcal{U}\) is a compact subset of the analytic extension, the smooth coefficients of the differential operators \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) are bounded on \(\mathcal{U}\). Combining these bounds with the ones obtained from (42) then finishes the proof of the proposition.

## Acknowledgements

C.K. has been funded by the Deutsche Forschungsgemeinschaft (DFG) under the Grant No. 406116891 within the Research Training Group RTG 2522/1. We would like to thank the Erwin Schrödinger Institut, Vienna, where part of this work has been completed, for its hospitality.
2308.03447
Biomedical Knowledge Graph Embeddings with Negative Statements
A knowledge graph is a powerful representation of real-world entities and their relations. The vast majority of these relations are defined as positive statements, but the importance of negative statements is increasingly recognized, especially under an Open World Assumption. Explicitly considering negative statements has been shown to improve performance on tasks such as entity summarization and question answering or domain-specific tasks such as protein function prediction. However, no attention has been given to the exploration of negative statements by knowledge graph embedding approaches despite the potential of negative statements to produce more accurate representations of entities in a knowledge graph. We propose a novel approach, TrueWalks, to incorporate negative statements into the knowledge graph representation learning process. In particular, we present a novel walk-generation method that is able to not only differentiate between positive and negative statements but also take into account the semantic implications of negation in ontology-rich knowledge graphs. This is of particular importance for applications in the biomedical domain, where the inadequacy of embedding approaches regarding negative statements at the ontology level has been identified as a crucial limitation. We evaluate TrueWalks in ontology-rich biomedical knowledge graphs in two different predictive tasks based on KG embeddings: protein-protein interaction prediction and gene-disease association prediction. We conduct an extensive analysis over established benchmarks and demonstrate that our method is able to improve the performance of knowledge graph embeddings on all tasks.
Rita T. Sousa, Sara Silva, Heiko Paulheim, Catia Pesquita
2023-08-07T10:08:25Z
http://arxiv.org/abs/2308.03447v1
# Biomedical Knowledge Graph Embeddings ###### Abstract A knowledge graph is a powerful representation of real-world entities and their relations. The vast majority of these relations are defined as positive statements, but the importance of negative statements is increasingly recognized, especially under an Open World Assumption. Explicitly considering negative statements has been shown to improve performance on tasks such as entity summarization and question answering or domain-specific tasks such as protein function prediction. However, no attention has been given to the exploration of negative statements by knowledge graph embedding approaches despite the potential of negative statements to produce more accurate representations of entities in a knowledge graph. We propose a novel approach, TrueWalks, to incorporate negative statements into the knowledge graph representation learning process. In particular, we present a novel walk-generation method that is able to not only differentiate between positive and negative statements but also take into account the semantic implications of negation in ontology-rich knowledge graphs. This is of particular importance for applications in the biomedical domain, where the inadequacy of embedding approaches regarding negative statements at the ontology level has been identified as a crucial limitation. We evaluate TrueWalks in ontology-rich biomedical knowledge graphs in two different predictive tasks based on KG embeddings: protein-protein interaction prediction and gene-disease association prediction. We conduct an extensive analysis over established benchmarks and demonstrate that our method is able to improve the performance of knowledge graph embeddings on all tasks. Keywords:Knowledge Graph Knowledge Graph Embedding Negative Statements Biomedical Applications. ## 1 Introduction Knowledge Graphs (KGs) represent facts about real-world entities and their relations and have been extensively used to support a range of applications from question-answering and recommendation systems to machine learning and analytics [17]. KGs have taken to the forefront of biomedical data through their ability to describe and interlink information about biomedical entities such as genes, proteins, diseases and patients, structured according to biomedical ontologies. This supports the analysis and interpretation of biological data, for instance, through the use of semantic similarity measures [32]. More recently, a spate of KG embedding methods [42] have emerged in this space and have been successfully employed in a number of biomedical applications [28]. The impact of KG embeddings in biomedical analytics is expected to increase in tandem with the growing volume and complexity of biomedical data. However, this success relies on the expectation that KG embeddings are semantically meaningful representations of the underlying biomedical entities. Regardless of their domain, the vast majority of KG facts are represented as positive statements, e.g. \((hemoglobin,hasFunction,oxygen\)transport). Under a Closed World Assumption, negative statements are not required, since any missing fact can be assumed as a negative. However, real-world KGs reside under the Open World Assumption where non-stated negative facts are formally indistinguishable from missing or unknown facts, which can have important implications across a variety of tasks. The importance of negative statements is increasingly recognized [2, 10]. 
For example, in the biomedical domain, the knowledge that a patient does not exhibit a given symptom or a protein does not perform a specific function is crucial for both clinical decision-making and biomedical insight. While ontologies are able to express negation and the enrichment of KGs with interesting negative statements is gaining traction, existing KG embedding methods are not able to adequately utilize them [21], which ultimately results in less accurate representations of entities. We propose True Walks, to the best of our knowledge, the first-ever approach that is able to incorporate negative statements into the KG embedding learning process. This is fundamentally different from other KG embedding methods, which produce negative statements by negative random sampling strategies to train representations that bring the representations of nodes that are linked closer, while distancing them from the negative examples. TrueWalks uses explicit negative statements to produce entity representations that take into account both existing attributes and lacking attributes. For example, for the negative statement \((Bruce\ Willis,NOT\ birthPlace,U.S.)\), our representation would be able to capture the similarity between Bruce Willis and Ryan Gosling, since neither was born in the U.S (see Figure 1). The explicit declaration of negative statements such as these is an important aspect of more accurate representations, especially when they capture unexpected negative statements (i.e., most people would expect that both actors are U.S. born). Using TrueWalks, Bruce Willis and Ryan Gosling would be similar not just because they are both actors but also because neither was born in the U.S. True Walks generates walks that can distinguish between positive and negative statements and consider the semantic implications of negation in KGs that are rich in ontological information, particularly in regard to inheritance. This is of particular importance for applications in the biomedical domain, where the inadequacy of embedding approaches regarding negative statements has been identified as a crucial limitation [21]. We demonstrate that the resulting embeddings can be employed to determine semantic similarity or as features for relation prediction. We evaluate the effectiveness of our approach in two different tasks, protein-protein interaction prediction and gene-disease association prediction, and show that our method improves performance over state-of-the-art embedding methods and popular semantic similarity measures. Our contributions are as follows: * We propose TrueWalks, a novel method to generate random walks on KGs that are aware of negative statements and results in the first KG embedding approach that considers negative statements. * We develop extensions of popular path-based KG embedding methods implementing the TrueWalks approach. * We enrich existing KGs with negative statements and propose benchmark datasets for two popular biomedical KG applications: protein-protein interaction (PPI) prediction and gene-disease association (GDA) prediction. * We report experimental results that demonstrate the superior performance of TrueWalks when compared to state-of-the-art KG embedding methods. ## 2 Related Work ### Exploring Negative Statements Approaches to enrich existing KGs with interesting negative statements have been proposed both for general-purpose KGs such as Wikidata [3] and for domain-specific ones such as the Gene Ontology (GO) [11, 44]. 
Exploring negative statements has been demonstrated to improve the performance of various applications. [2] developed a method to enrich Wikidata with interesting negative statements and its usage improved the performance on entity summarization and decision-making tasks. [44] have designed a method to enrich the GO [14] with relevant negative statements indicating that a protein does not perform a given function and demonstrated that a balance between positive and negative annotations supports a more reasonable evaluation of protein function prediction methods. Similarly, [11] enriched the GO with negative statements and demonstrated an associated increase in protein function prediction performance. The relevance of negative annotations has also been recognized in the prediction of gene-phenotype associations in the context of the Human Phenotype Ontology (HP) [22], but the topic remains unexplored [25]. It should be highlighted that KG embedding methods have not been employed in any of these approaches to explore negative statements.

Figure 1: A DBPedia example motivating the negative statements problem. The author of Bruce Willis' picture is Gage Skidmore.

### Knowledge Graph Embeddings

KG embedding methods map entities and their relations expressed in a KG into a lower-dimensional space while preserving the underlying structure of the KG and other semantic information [42]. These entity and relation embedding vectors can then be applied to various KG applications such as link prediction, entity typing, or triple classification. In the biomedical domain, KG embeddings have been used in machine learning-based applications in which they are used as input in classification tasks or to predict relations between biomedical entities. [21] provides an overview of KG embedding-based approaches for biomedical applications. Translational models, which rely on distance-based scoring functions, are some of the most widely employed KG embedding methods. A popular method, TransE [6], assumes that if a relation holds between two entities, the vector of the head entity plus the relation vector should be close to the vector of the tail entity in the vector space. TransE has the disadvantage of not handling one-to-many and many-to-many relationships well. To address this issue, TransH [43] introduces a relation-specific hyperplane for each relation and projects the head and tail entities into the hyperplane. TransR [23] builds entity and relation embeddings in separate entity and relation spaces. Semantic matching approaches are also well-known and use similarity-based scoring functions to capture the latent semantics of entities and relations in their vector space representations. For instance, DistMult [48] employs tensor factorization to embed entities as vectors and relations as diagonal matrices.

### Walk-Based Embeddings

More recently, random walk-based KG embedding approaches have emerged. These approaches are built upon two main steps: (i) producing entity sequences from walks in the graph to produce a corpus of sequences that is akin to a corpus of word sequences or sentences; (ii) using those sequences as input to a neural language model [27] that learns a latent low-dimensional representation of each entity within the corpus of sequences. DeepWalk [31] first samples a set of paths from the input graph using uniform random walks. Then it uses those paths to train a skip-gram model, originally proposed by the word2vec approach for word embeddings [27].
Node2vec [16] introduces a different biased strategy for generating random walks and exploring diverse neighborhoods. The biased random walk strategy is controlled by two parameters: the likelihood of visiting immediate neighbors (breadth-first search behavior), and the likelihood of visiting entities that are at increasing distances (depth-first search behavior). Neither DeepWalk nor node2vec take into account the direction or type of the edges. Metapath2vec [8] proposes random walks driven by metapaths that define the node type order by which the random walker explores the graph. RDF2Vec [35] is inspired by the node2vec strategy but it considers both edge direction and type, making it particularly suited to KGs. OWL2Vec* [7] was designed to learn ontology embeddings and it also employs directed walks on the graph to learn graph structure.

### Tailoring Knowledge Graph Embeddings

Recent KG embedding approaches aim to tailor representations by considering different semantic, structural or lexical aspects of a KG and its underlying ontology. Approaches such as EL [20] and BoxEL [45] embeddings are geometric approaches that account for the logical structure of the ontology (e.g., intersection, conjunction, existential quantifiers). OWL2Vec* [7] and OPA2Vec [37] take into consideration the lexical portion of the KG (i.e., labels of entities) when generating graph walks or triples. OPA2Vec also offers the option of using a pre-trained language model to bootstrap the KG embedding. Closer to our approach, OWL2Vec* contemplates the declaration of inverse axioms to enable reverse path traversal; however, this option was found lacking for the biomedical ontology GO. Finally, different approaches have been proposed to train embeddings that are aware of the order of entities in a path, such as [51] and [34], which extend TransE and RDF2Vec, respectively.

## 3 Methods

### Problem Formulation

In this work, we address the task of learning a relation between two KG entities (which can belong to the same or different KGs) when the relation itself is not encoded in the KG. We employ two distinct approaches: (1) using the KG embeddings of each entity as features for a machine learning algorithm and (2) comparing the KG embeddings directly through a similarity metric.

Figure 2: A GO KG subgraph motivating the _reverse inheritance_ problem.

We target ontology-rich KGs that use an ontology to provide rich descriptions of real-world entities instead of focusing on describing relations between entities themselves. These KGs are common in the biomedical domain. As a result, the KG's richness lies in the TBox, with a comparatively less complex ABox, since entities have no links between them. We focus on Web Ontology Language (OWL) [15] ontologies since biomedical ontologies are typically developed in OWL or have an OWL version. Biomedical entities in a KG are typically described through positive statements that link them to an ontology. For instance, to state that a protein \(P\) performs a function \(F\) described under the GO, a KG can declare the axiom \(P\sqsubseteq\exists hasFunction.F\). However, the knowledge that a given protein does not perform a function can also be relevant, especially to declare that a given protein does not have an activity typical of its homologs [12]. Likewise, the knowledge that a given disease does not exhibit a particular phenotype is also decisive in understanding the relations between diseases and genes [25].
We consider the definition of grounded negative statements proposed by [2] as \(\neg(s,p,o)\), which is satisfied if \((s,p,o)\notin KG\) and expressed as a _NegativeObjectPropertyAssertion_3. Footnote 3: [https://www.w3.org/TR/owl2-syntax/#Negative_Object_Property_Assertions](https://www.w3.org/TR/owl2-syntax/#Negative_Object_Property_Assertions) Similar to what was done in [2], we do not have a negative object property assertion for every missing triple. Negative statements are only included if there is clear evidence that a triple does not exist in the domain being captured. Taking the protein example, negative object property assertions only exist when it has been demonstrated that a protein does not perform a particular function. An essential difference between a positive and a negative statement of this kind is related to the implied inheritance of properties exhibited by the superclasses or subclasses of the assigned class. Let us consider that \((P_{1},\textit{hasFunction},F_{1})\) and \((F_{1},\textit{subClassOf},F_{2})\). This implies that \((P_{1},\textit{hasFunction},F_{2})\), since an individual with a class assignment also belongs to all superclasses of the given class, e.g., a protein that performs _iron ion binding_ also performs _metal ion binding_ (see Figure 2). This implication is easily captured by directed walk generation methods that explore the declared subclass axioms in an OWL ontology. However, when we have a negative statement, such as \(\neg(P_{2},\textit{hasFunction},F_{1})\), it does not imply that \(\neg(P_{2},\textit{hasFunction},F_{2})\). There are no guarantees that a protein that does not perform _iron ion binding_ also does not perform _metal ion binding_, since it can very well, for instance, perform _calcium ion binding_. However, for \((F_{3},\textit{subClassOf},F_{1})\) the negative statement \(\neg(P_{2},\textit{hasFunction},F_{1})\) implies that \(\neg(P_{2},\textit{hasFunction},F_{3})\), as a protein that does not perform _iron ion binding_ also does not perform _ferric ion binding_ nor _ferrous iron binding_. Therefore, we need to be able to declare that protein \(P_{1}\) performs both functions \(F_{1}\) and \(F_{3}\), but that \(P_{2}\) performs \(F_{1}\) but not \(F_{3}\). Since OWL ontologies typically declare subclass axioms, there is no opportunity for typical KG embedding methods to explore the reverse paths that would more accurately represent a negative statement. The problem we tackle is then two-fold: how can the _reverse inheritance_ implied by negative statements be adequately explored by walk-based KG embedding methods, and how can these methods distinguish between negative and positive statements.
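At the RDF level, on which the following subsections operate, a grounded negative statement is not a single triple but a small reified structure. The sketch below (using RDFlib, which is also used to build the KGs in Section 4; all IRIs here are placeholders chosen for illustration) contrasts a positive assertion with the RDF rendering of a _NegativeObjectPropertyAssertion_ under the W3C OWL-to-RDF mapping.

```python
from rdflib import BNode, Graph, Namespace
from rdflib.namespace import RDF

# Placeholder namespace; in the actual KGs these would be UniProt/GO IRIs.
EX = Namespace("http://example.org/")
OWL2 = Namespace("http://www.w3.org/2002/07/owl#")

g = Graph()

# Positive statement: protein P1 has function F1 (a single triple).
g.add((EX.P1, EX.hasFunction, EX.F1))

# Grounded negative statement "P2 does not have function F1", expressed as an
# OWL 2 NegativeObjectPropertyAssertion, which the OWL-to-RDF mapping reifies
# with a blank node and four triples.
neg = BNode()
g.add((neg, RDF.type, OWL2.NegativePropertyAssertion))
g.add((neg, OWL2.sourceIndividual, EX.P2))
g.add((neg, OWL2.assertionProperty, EX.hasFunction))
g.add((neg, OWL2.targetIndividual, EX.F1))

print(g.serialize(format="turtle"))
```

The positive statement remains a single triple, whereas the negative one is spread over a blank node and four triples, which is one reason why the walk-generation step must treat negative statements specially.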
### Overview

An overview of TrueWalks, the method we propose, is shown in Figure 3. The first step is the transformation of the KG into an RDF Graph. Next, our novel random walk generation strategy that is aware of positive and negative statements is applied to the graph to produce a set of entity sequences. The positive and negative entity walks are fed to neural language models to learn a dual latent representation of the entities. TrueWalks has two variants: one that employs the classical skip-gram model to learn the embeddings (TrueWalks), and one that employs a variation of skip-gram that is aware of the order of entities in the walk (TrueWalksOA, i.e. order-aware).

Figure 3: Overview of the TrueWalks method with the four main steps: (i) creation of the RDF graph, (ii) random walk generation with negative statements; (iii) neural language models, and (iv) final representation.

### Creation of the RDF Graph

The first step is the conversion of an ontology-rich KG into an RDF graph. This is a directed, labeled graph, where the edges represent the named relations between two resources or entities, represented by the graph nodes4. We perform the transformation according to the _OWL to RDF Graph Mapping_ guidelines defined by the W3C5. Simple axioms can be directly transformed into RDF triples, such as subsumption axioms for atomic entities or data and annotation properties associated with an entity. Axioms involving complex class expressions are transformed into multiple triples which typically require blank nodes. Let us consider the following existential restriction of the class _obo:GO_0034708 (methyltransferase complex)_ that encodes the fact that a methyltransferase complex is part of at least one intracellular anatomical structure: _ObjectSomeValuesFrom(obo:BFO_0000050 (part of), obo:GO_0005622 (intracellular anatomical structure))_. Its conversion to RDF results in three triples:

_(obo:GO_0034708, rdfs:subClassOf, _:x)_
_(_:x, owl:someValuesFrom, obo:GO_0005622)_
_(_:x, owl:onProperty, obo:BFO_0000050)_

where _:x denotes a blank node.

### Random Walk Generation with Negative Statements

The next step is to generate the graph walks that will make up the corpus (see Algorithm 1). For a given graph \(G=(V,E)\), where \(E\) is the set of edges and \(V\) is the set of vertices, for each vertex \(v_{r}\in V_{r}\), where \(V_{r}\) is the subset of individuals for which we want to learn representations, we generate up to \(w\) graph walks of maximum depth \(d\) rooted in vertex \(v_{r}\). We employ a depth-first search algorithm, extending on the basic approach in [35]. At the first iteration, we can find either a positive or negative statement. From then on, walks are biased: a positive statement implies that whenever a subclass edge is found it is traversed from subclass to superclass, whereas a negative statement results in a traversal of subclass edges in the opposite direction (see also Figure 3). This generates paths that follow the pattern \(v_{r}\to e_{1i}\to v_{1i}\to e_{2i}\). The set of walks is split in two, negative statement walks and positive statement walks. This will allow the learning of separate latent representations, one that captures the positive aspect and one that captures the negative aspect. An important aspect of our approach is that, since OWL is converted into an RDF graph for walk-based KG embedding methods, a negative statement declared using a simple object property assertion (e.g. _notHasFunction_) could result in the less accurate path: _Protein \(P\to\) notHasFunction \(\to\) iron ion binding \(\to\) subClassOf \(\to\) ion binding._ Moreover, random walks directly over the _NegativeObjectPropertyAssertion_, since it is decomposed into multiple triples using blank nodes, would also result in inaccurate paths. However, our algorithm produces more accurate paths, e.g.: _Protein \(P\to\) notHasFunction \(\to\) iron ion binding \(\to\) superClassOf \(\to\) ferric iron binding_, by adequately processing the _NegativeObjectPropertyAssertion_.
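To make the traversal bias concrete, the following self-contained sketch (a simplified illustration rather than the full Algorithm 1, with a toy hierarchy and placeholder entity names) generates positive-statement walks that climb _subClassOf_ edges and negative-statement walks that descend them, keeping the two corpora separate.

```python
import random

# Toy ontology fragment and annotations; all names are illustrative placeholders.
SUBCLASS_OF = {  # child -> parents
    "ferric_iron_binding": ["iron_ion_binding"],
    "iron_ion_binding": ["metal_ion_binding"],
    "metal_ion_binding": ["ion_binding"],
}
SUPERCLASS_OF = {}
for child, parents in SUBCLASS_OF.items():
    for parent in parents:
        SUPERCLASS_OF.setdefault(parent, []).append(child)

POSITIVE = {"P1": ["iron_ion_binding"]}   # hasFunction statements
NEGATIVE = {"P2": ["iron_ion_binding"]}   # NOT hasFunction statements


def generate_walks(entity, annotations, negative, num_walks=8, depth=4, seed=0):
    """Biased depth-first walks: positive walks climb subClassOf edges towards
    superclasses, negative walks descend towards subclasses instead."""
    rng = random.Random(seed)
    hierarchy = SUPERCLASS_OF if negative else SUBCLASS_OF
    edge_label = "notHasFunction" if negative else "hasFunction"
    step_label = "superClassOf" if negative else "subClassOf"
    walks = []
    for _ in range(num_walks):
        classes = annotations.get(entity, [])
        if not classes:
            break
        node = rng.choice(classes)
        walk = [entity, edge_label, node]
        for _ in range(depth - 1):
            neighbours = hierarchy.get(node, [])
            if not neighbours:
                break
            node = rng.choice(neighbours)
            walk += [step_label, node]
        walks.append(walk)
    return walks


positive_walks = generate_walks("P1", POSITIVE, negative=False)
negative_walks = generate_walks("P2", NEGATIVE, negative=True)
# ['P1', 'hasFunction', 'iron_ion_binding', 'subClassOf', 'metal_ion_binding', 'subClassOf', 'ion_binding']
print(positive_walks[0])
# ['P2', 'notHasFunction', 'iron_ion_binding', 'superClassOf', 'ferric_iron_binding']
print(negative_walks[0])
```

In the full method the walks are generated over the RDF graph of the previous subsection, with up to \(w\) walks of maximum depth \(d\) per root entity, as described above.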
### Neural Language Models

We employ two alternative approaches to learn a latent representation of the individuals in the KG. For the first approach, we use the skip-gram model [27], which predicts the context (neighbor entities) based on a target word, or in our case a target entity. Let \(f:E\rightarrow\mathbb{R}^{d}\) be the mapping function from entities to the latent representations we will be learning, where \(d\) is the number of dimensions of the representation (\(f\) is then a matrix \(|E|\times d\)). Given a context window \(c\), and a sequence of entities \(e_{1},e_{2},e_{3},...,e_{L}\), the objective of the skip-gram model is to maximize the average log probability: \[\frac{1}{L}\sum_{l=1}^{L}\sum_{-c\leq j\leq c,\,j\neq 0}\log p(e_{l+j}|e_{l}) \tag{1}\] where \(p(e_{l+j}|e_{l})\) is calculated using the softmax function: \[p(e_{l+j}|e_{l})=\frac{\exp(f(e_{l+j})\cdot f(e_{l}))}{\sum_{e\in E}\exp(f(e)\cdot f(e_{l}))} \tag{2}\] where \(f(e)\) is the vector of the entity \(e\). To improve computation time, we employ a negative sampling approach based on [27] that minimizes the number of comparisons required to distinguish the target entity, by taking samples from a noise distribution using logistic regression, where there are \(k\) negative samples for each entity. The second approach is the structured skip-gram model [24], a variation of skip-gram that is sensitive to the order of words, or in our case, entities in the graph walks. The critical distinction of this approach is that, instead of using a single matrix \(f\), it creates \(c\times 2\) matrices, \(f_{-c},...,f_{-2},f_{-1},f_{1},...,f_{c}\), each dedicated to predicting a specific relative position to the entity. To make a prediction \(p(e_{l+j}|e_{l})\), the method selects the appropriate matrix \(f_{j}\). The neural language models are applied separately to the positive and negative walks, producing two representations for each entity.

### Final Representations

The two representations of each entity need to be combined to produce a final representation. Different vector operations can, in principle, be employed, such as the Hadamard product or the L1-norm. However, especially since we will employ these vectors as inputs for machine learning methods, we would like to create a feature space that allows the distinction between the negative and positive representations, motivating us to use a simple concatenation of vectors.

## 4 Experiments

We evaluate our novel approach on two biomedical tasks: protein-protein interaction (PPI) prediction and gene-disease association (GDA) prediction [39]. These two challenges have significant implications for understanding the underlying mechanisms of biological processes and disease states. Both tasks are modeled as relation prediction tasks. For PPI prediction, we employ TrueWalks embeddings both as features for a supervised learning algorithm and directly for similarity-based prediction. For GDA prediction, since embeddings for genes and diseases are learned over two different KGs, we focus only on supervised learning. We employ a Random Forest algorithm across all classification experiments with the same parameters (see the supplementary file for details).

### Data

Our method takes as input an ontology file, an instance annotation file and a list of instance pairs. We construct the knowledge graph (KG) using the RDFlib package [5], which parses the ontology file in OWL format and processes the annotation file to add edges to the graph. The annotation file contains both positive and negative statements which are used to create the edges in the graph.
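Given such inputs, the embedding and concatenation steps described in the previous subsections can be sketched with gensim's skip-gram implementation as follows (a minimal sketch with placeholder walks and parameters; the order-aware structured skip-gram used by TrueWalksOA is not part of gensim and is omitted here).

```python
import numpy as np
from gensim.models import Word2Vec

# Walk corpora (lists of token lists), e.g. produced by a walk generator such
# as the sketch in the walk-generation subsection; placeholders shown here.
positive_walks = [["P1", "hasFunction", "iron_ion_binding", "subClassOf", "metal_ion_binding"]]
negative_walks = [["P2", "notHasFunction", "iron_ion_binding", "superClassOf", "ferric_iron_binding"]]


def train_embeddings(walks, dim=100):
    # sg=1 selects the skip-gram architecture; negative=5 enables negative sampling.
    return Word2Vec(sentences=walks, vector_size=dim, window=5, sg=1,
                    negative=5, min_count=1, epochs=50, seed=42, workers=1)


pos_model = train_embeddings(positive_walks)
neg_model = train_embeddings(negative_walks)


def truewalks_vector(entity, pos_model, neg_model, dim=100):
    """Final representation: concatenation of the positive-statement and
    negative-statement embeddings (zeros if the entity is absent from a corpus)."""
    pos = pos_model.wv[entity] if entity in pos_model.wv else np.zeros(dim)
    neg = neg_model.wv[entity] if entity in neg_model.wv else np.zeros(dim)
    return np.concatenate([pos, neg])


vec = truewalks_vector("iron_ion_binding", pos_model, neg_model)
print(vec.shape)  # (200,)
```

Each entity thus receives a 200-dimensional vector whose two halves are learned from the positive-statement and negative-statement corpora, respectively.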
#### 4.2.1 Protein-Protein Interaction Prediction

Predicting protein-protein interactions is a fundamental task in molecular biology that can explore both sequence and functional information [18]. Given the high cost of experimentally determining PPI, computational methods have been proposed as a solution to the problem of finding protein pairs that are likely to interact and thus provide a selection of good candidates for experimental analysis. In recent years, a number of approaches for PPI prediction based on functional information as described by the GO have been proposed [50, 20, 37, 38, 21]. The GO contains over 50000 classes that describe proteins or genes according to the molecular functions they perform, the biological processes they are involved in, and the cellular components where they act. The GO KG is built by integrating three sources: the GO itself [14], the Gene Ontology Annotation (GOA) data [13], and negative GO annotations [44] (details on the KG building method and data sources are available in the supplementary file). A GO annotation associates a Uniprot protein identifier with a GO class that describes it. We downloaded the GO annotations corresponding to positive statements from the GOA database for human species. For each protein \(P\) in the PPI dataset and each of its association statements to a function \(F\) in GOA, we add the assertion \((P,\textit{hasFunction},F)\). We employ the negative GO associations produced in [44], which were derived from expert-curated annotations of protein families on phylogenetic trees. For each protein \(P\) in the PPI dataset and each of its association statements to a function \(F\) in the negative GO associations dataset, we add a negative object property assertion. To do so, we use metamodeling (more specifically, punning 6) and represent each ontology class as both a class and an individual. This situation translates into using the same IRI. Then, we use a negative object property assertion to state that the individual representing a biomedical entity is not connected by the object property expression to the individual representing an ontology class. Table 1 presents the GO KG statistics. Footnote 6: [https://www.w3.org/TR/owl2-new-features/#F12:_Punning](https://www.w3.org/TR/owl2-new-features/#F12:_Punning)

| | GO\({}_{\text{PPI}}\) | GO\({}_{\text{GDA}}\) | HP\({}_{\text{GDA}}\) |
| --- | --- | --- | --- |
| Classes | 50918 | 50918 | 17060 |
| Literals and blank nodes | 532373 | 532373 | 442246 |
| Edges | 1425102 | 1425102 | 1082859 |
| Instances | 440 | 755 | 162 |
| Positive statements | 7364 | 10631 | 4197 |
| Negative statements | 8579 | 8966 | 225 |

Table 1: Statistics for each KG regarding classes, instances, nodes, edges, positive and negative statements.

The target relations to predict are extracted from the STRING database [40]. We considered the following criteria to select protein pairs: (i) protein interactions must be extracted from curated databases or experimentally determined (as opposed to computationally determined); (ii) interactions must have a confidence score above 0.950 to retain only high confidence interactions; (iii) each protein must have at least one positive GO association and one negative GO association. The PPI dataset contains 440 proteins, 1024 interacting protein pairs, and another 1024 pairs generated by random negative sampling over the same set of proteins.

#### 4.1.3 Gene-Disease Association Prediction

Predicting the relation between genes and diseases is essential to understand disease mechanisms and identify potential biomarkers or therapeutic targets [9].
However, validating these associations in the wet lab is expensive and time-consuming, which fostered the development of computational approaches to identify the most promising associations to be further validated. Many of these explore biomedical ontologies and KGs [41, 49, 36, 4, 26] and some recent approaches even apply KG embedding methods such as DeepWalk [1] or OPA2Vec [37, 30]. For GDA prediction, we have used the GO KG, the Human Phenotype Ontology (HP) KG (created from the HP file and HP annotations files), and a GDA dataset. Two different ontologies are used to describe each type of entity. Diseases are described under the HP and genes under the GO. We built GO KG in the same fashion as in the PPI experiment, but instead of having proteins linked to GO classes, we have genes associated with GO classes. Regarding HP KG, HP [22] describes phenotypic abnormalities found in human hereditary diseases. The HP annotations link a disease to a specific class in the HP through both positive and negative statements. The target relations to predict are extracted from DisGeNET [33], adapting the approach described in [30] to consider the following criterion: each gene (or disease) must have at least one positive GO (or HP) association and one negative GO (or HP) association. This resulted in 755 genes, 162 diseases, and 107 gene-disease relations. To create a balanced dataset, we sampled random negative examples over the same genes and diseases. Table 1 describes the created KGs. ### Results and Discussion We compare TrueWalks against ten state-of-the-art KG embedding methods: TransE, TransH, TransR, ComplEx, distMult, DeepWalk, node2vec, metapath2vec, OWL2Vec* and RDF2Vec. TransE, TransH and TransR are representative methods of translational models. ComplEx and distMult are semantic matching methods. They represent a bottom-line baseline with well-known KG embedding methods. DeepWalk and node2vec are undirected random walk-based methods, and OWL2Vec* and RDF2Vec are directed walk-based methods. These methods represent a closer approach to ours, providing a potentially stronger baseline. Each method is run with two different KGs, one with only positive statements and one with both positive and negative statements. In this second KG, we declare the negative statements as an object property, so positive and negative statements appear as two distinct relation types. The size of all the embeddings is 200 dimensions across all experiments (details on parameters can be found in the supplementary file), with TrueWalks generating two 100-dimensional vectors, one for the positive statement-based representation and one for the negative, which are concatenated to produce the final 200-dimensional representation. #### 4.2.3 Relation Prediction using Machine Learning To predict the relation between a pair of entities \(e_{1}\) and \(e_{2}\) using machine learning, we take their vector representations and combine them using the binary Hadamard operator to represent the pair: \(r(e_{1},e_{2})=v_{e_{1}}\times v_{e_{2}}\). The pair representations are then fed into a Random Forest algorithm for training using Monte Carlo cross-validation (MCCV) [46]. MCCV is a variation of traditional \(k\)-fold cross-validation in which the process of dividing the data into training and testing sets (with \(\beta\) being the proportion of the dataset to include in the test split) is repeated \(M\) times. Our experiments use MCCV with \(M=30\) and \(\beta=0.3\). 
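A minimal sketch of this pair-representation and evaluation protocol, using synthetic placeholder embeddings and labels (the exact Random Forest parameters used in our experiments are listed in the supplementary file), is the following.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Placeholder data: entity -> 200-dimensional vector, plus labeled entity pairs.
rng = np.random.default_rng(0)
embeddings = {f"E{i}": rng.normal(size=200) for i in range(40)}
pairs = [(f"E{i}", f"E{(i * 7 + 3) % 40}") for i in range(200)]
labels = rng.integers(0, 2, size=len(pairs))

# Hadamard (element-wise) product of the two entity vectors represents the pair.
X = np.array([embeddings[a] * embeddings[b] for a, b in pairs])
y = np.array(labels)

scores = []
for m in range(30):                           # MCCV: M = 30 random splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,    # beta = 0.3
                                              random_state=m)
    clf = RandomForestClassifier(random_state=m).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    scores.append((precision_score(y_te, pred, zero_division=0),
                   recall_score(y_te, pred, zero_division=0),
                   f1_score(y_te, pred, average="weighted", zero_division=0)))

print("median precision/recall/F-measure:", np.median(np.array(scores), axis=0))
```

Table 2 reports the median of such per-split scores for each method.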
For each run, the predictive performance is evaluated based on recall, precision and weighted average F-measure. Statistically significant differences between TrueWalks and the other methods are determined using the non-parametric Wilcoxon test at \(p<0.05\). Table 2 reports the median scores for both PPI and GDA prediction. The top half contains the results of the first experiment, where we compare state-of-the-art methods using only the positive statements to TrueWalks (at the bottom), which uses both types.

[Table 2: median precision, recall and weighted average F-measure for PPI and GDA prediction with each method; the numeric rows of the table were not recoverable.]

The results reveal that the performance of TrueWalks is significantly better than the other methods, improving both precision and F-measure. An improvement in precision, which is not always accompanied by an increase in recall, confirms the hypothesis that embeddings that consider negative statements produce more accurate representations of entities, which allows a better distinction of true positives from false positives. A second experiment employs a KG with both negative and positive statements for all methods. Our method can accurately distinguish between positive statements and negative statements, as discussed in subsection 3.4. For the remaining embedding methods, we declare the negative statements as an object property so that these methods distinguish positive and negative statements as two distinct types of relation. This experiment allows us to test whether TrueWalks, which takes into account the positive or negative status of a statement, can improve on the performance of methods that handle all statements equally regardless of status. The bottom half of Table 2 shows that both variants of TrueWalks improve on precision and F-measure for both tasks when compared with the state-of-the-art methods using both positive and negative statements. This experiment further shows that the added information given by negative statements generally improves the performance of most KG embedding methods. However, no method surpasses TrueWalks, likely due to its ability to consider the semantic implications of inheritance and walk direction, especially when combined with the order-aware model. Comparing the two variants of TrueWalks demonstrates that order awareness does not improve performance in most cases.
However, TrueWalksOA improves on precision and F-measure over all other state-of-the-art methods. These results are not unexpected since the same effect was observed in other order-aware embedding methods [34]. Regarding the statistical tests, TrueWalks performance values are italicized/underlined in Table 2 when improvements over all other methods are statistically significant, except when comparing TrueWalks with OWL2Vec* for GDA, since in this particular case the improvement is not statistically significant.

#### 4.2.2 Relation Prediction using Semantic Similarity

We also evaluate all methods in PPI prediction using KG embedding-based semantic similarity, computed as the cosine similarity between the vectors of each protein in a pair. Adopting the methodology employed by [20] and [45], for each positive pair \(e_{1}\) and \(e_{2}\) in the dataset, we compute the similarity between \(e_{1}\) and all other entities and identify the rank of \(e_{2}\). The performance was measured using recall at rank 10 (Hits@10), recall at rank 100 (Hits@100), mean rank, and the area under the ROC curve (Table 3). Results show that TrueWalksOA achieves the top performance across all metrics, but TrueWalks is bested by RDF2Vec on all metrics except Hits@10, by OWL2Vec* on Hits@100 and by node2vec on Hits@10. To better understand these results, we plotted the distribution of similarity values for positive and negative pairs in Figure 4. There is a smaller overlap between negative and positive pair similarities for TrueWalksOA, which indicates that considering both the status of the function assignments and the order of entities in the random walks results in embeddings that are more meaningful semantic representations of proteins. Furthermore, the cosine similarity for negative pairs is consistently lower when using both variants of TrueWalks, which supports the idea that the negative statement-based embeddings work towards filtering out false positives.

## 5 Conclusion

Knowledge graph embeddings are increasingly used in biomedical applications such as the prediction of protein-protein interactions, gene-disease associations, drug-target relations and drug-drug interactions [28]. Our novel approach, TrueWalks, was motivated by the fact that existing knowledge graph embedding methods are ill-equipped to handle negative statements, despite their recognized importance in biomedical machine learning tasks [21]. TrueWalks incorporates a novel walk-generation method that distinguishes between positive and negative statements and considers the semantic implications of negation in ontology-rich knowledge graphs. It generates two separate embeddings, one for each type of statement, enabling a dual representation of entities that can be explored by downstream ML, focusing both on the features entities have and on those they lack.
TrueWalks outperforms representative and state-of-the-art knowledge \begin{table} \begin{tabular}{l l c c c} \hline \hline \multicolumn{2}{c}{**Method**} & \multicolumn{1}{c}{**Hits@10**} & \multicolumn{1}{c}{**Hits@100**} & \multicolumn{1}{c}{**MeanRank**} & \multicolumn{1}{c}{**AUC**} \\ \hline \multirow{6}{*}{\begin{tabular}{} \end{tabular} } & TransE & 0.013 & 0.125 & 103.934 & 0.538 \\ & TransH & 0.013 & 0.134 & 102.703 & 0.543 \\ & TransR & 0.037 & 0.196 & 81.916 & 0.636 \\ & ComplEx & 0.080 & 0.261 & 64.558 & 0.689 \\ & distMult & 0.112 & 0.340 & 46.512 & 0.803 \\ & DeepWalk & 0.125 & 0.380 & 35.406 & 0.847 \\ & node2vec & 0.163 & 0.375 & 37.275 & 0.827 \\ & metapath2vec* & 0.017 & 0.151 & 98.445 & 0.558 \\ & OWL2Vec* & 0.152 & 0.386 & 33.192 & 0.860 \\ & RDF2Vec & 0.133 & 0.391 & 32.419 & 0.870 \\ \hline \multirow{6}{*}{ \begin{tabular}{} \end{tabular} } & TransE & 0.022 & 0.161 & 94.809 & 0.576 \\ & TransR & 0.100 & 0.274 & 60.120 & 0.732 \\ & TransH & 0.025 & 0.174 & 91.553 & 0.594 \\ & ComplEx & 0.132 & 0.334 & 45.268 & 0.805 \\ & distMult & 0.149 & 0.378 & 35.351 & 0.853 \\ & DeepWalk & 0.148 & 0.383 & 35.365 & 0.849 \\ & node2vec & **0.166** & 0.389 & 34.305 & 0.840 \\ & metapath2vec & 0.020 & 0.165 & 93.374 & 0.578 \\ & OWL2Vec* & 0.160 & 0.397 & 32.234 & 0.869 \\ & RDF2Vec & 0.155 & 0.401 & 30.281 & 0.879 \\ \hline \hline TrueWalks & 0.161 & 0.392 & 32.089 & 0.869 \\ TrueWalksOA & **0.166** & **0.407** & **28.128** & **0.889** \\ \hline \hline \end{tabular} \end{table} Table 3: Hits@10, Hits@100, mean rank, and ROC-AUC for PPI prediction using cosine similarity obtained with different methods. In bold, the best value for each metric. graph embedding approaches in the prediction of protein-protein interactions and gene-disease associations. We expect TrueWalks to be generalizable to other biomedical applications where negative statements play a decisive role, such as predicting disease-related phenotypes [47] or performing differential diagnosis [19]. In future work, we would also like to explore counter-fitting approaches, such as those proposed for language embeddings [29], to consider how opposite statements can impact the dissimilarity of entities. **Supplemental Material Statement:** The source code for True Walks is available on GitHub ([https://github.com/liseda-lab/TrueWalks](https://github.com/liseda-lab/TrueWalks)). All datasets are available on Zenodo ([https://doi.org/10.5281/zenodo.7709195](https://doi.org/10.5281/zenodo.7709195)). A supplementary file contains the links to the data sources, the parameters for the KG embedding methods and ML models, and the results of the statistical tests. **Acknowledgements** C.P., S.S., and R.T.S. are funded by FCT, Portugal, through LASIGE Research Unit (ref. UIDB/00408/2020 and ref. UIDP/00408/2020). R.T.S. acknowledges the FCT PhD grant (ref. SFRH/BD/145377/2019) and DAAD Contact Fellowship grant. This work was also partially supported by the KATY project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101017453, and in part by projeto 41, HIPT: Health from Portugal, funded by the Portuguese Plano de Recuperacao e Resiliencia. The authors are grateful to Lina Aveiro and Carlota Cardoso for the fruitful discussions that inspired this work. Figure 4: Violin plot with embedding similarity obtained with RDF2Vec with positive statements (R2V P), RDF2Vec with both positive and negative statements (R2V P+N), TrueWalks (TW), and TrueWalksOA (TWOA).
2306.15951
Reduce Computational Complexity for Convolutional Layers by Skipping Zeros
Convolutional neural networks necessitate good algorithms to reduce complexity, and sufficient utilization of parallel processors for acceleration. Within convolutional layers, there are three types of operators: convolution used in forward propagation, deconvolution and dilated-convolution utilized in backward propagation. During the execution of these operators, zeros are typically added to tensors, leading to redundant calculations and unnecessary strain on hardware. To circumvent these inefficiencies, we propose the C-K-S algorithm, accompanied by efficient GPU implementations. C-K-S trims filters to exclude zero-padding. For deconvolution and dilated-convolution, C-K-S transforms sparse tensors into dense tensors, and standardizes the local computational rules to simplify the hardware control. The experimental results demonstrate that C-K-S offers good performance in terms of speed and convergence, surpassing the capabilities of PyTorch and cuDNN in certain scenarios.
Zhiyi Zhang, Pengfei Zhang, Zhuopin Xu, Qi Wang
2023-06-28T06:21:22Z
http://arxiv.org/abs/2306.15951v4
# Reduce Computational Complexity ###### Abstract Deep neural networks rely on parallel processors for acceleration. To design operators for them, it requires not only good algorithm to reduce computational complexity, but also sufficient utilization of hardwares. Convolutional layers mainly contain 3 kinds of convolutional operators: convolution in forward propagation, deconvolution and dilated-convolution in backward propagation. When executing these operators, 0s are always added to tensors, causing redundant calculations. This paper presents the _C-K-S_ algorithm (_ConvV2_, _KS-deconv_, _Sk-dilated_), which skips these 0s in two ways: trim the filters to exclude padded 0s; transform sparse tensors to dense tensors, to avoid inserted 0s in deconvolution and dilated-convolution. In contrast to regular convolution, deconvolution is hard to accelerate on GPUs due to its complicacy. This paper provides high-performance GPU implementations of _C-K-S_, and verifies their effectiveness with comparison to PyTorch (cuDNN). According to the experiments, _C-K-S_ has advantages over PyTorch in certain cases, especially in deconvolution on small feature-maps. Further optimizations of _C-K-S_ can be made by targeting specific GPU architectures. convolutional neural networks, deconvolution, dilated-convolution, graphic processing units. ## I Introduction Convolutional layers (conv-layers) are widely used in deep neural networks (DNNs) [2]-[6], with advantages of parameter sharing, sparse interactions and equivariant representations [1]. Conv-layers primarily include 3 kinds of convolutional operators: convolution, deconvolution (transposed convolution) and dilated-convolution. These 3 kinds of operators take the most resource in training convolutional neural networks (CNNs), so it's always a key issue to optimize them. Convolution and deconvolution are a pair of inverse operators. Generally, convolution is for down-sampling to reduce feature-size, while deconvolution performs up-sampling to expend features. For conv-layers, convolution generates output-features in forward propagation, whereas deconvolution finds the gradients of input-features in backward propagation [16]. The opposite is true for deconvolutional layers. Besides, dilated-convolution is used to find the gradients of filters. In propagations of Conv-Layers, zero-elements (0s) are inevitably added to tensors, in order to generate tensors with expected-size, or construct sparse tensors to find gradients. Such 0s cause redundant calculations (0-calculations), so the computational complexity can be reduced by skipping them. The efficient implementations [7]-[15] of conv-layers are mainly based on parallel processors including GPU and FPGA, due to their superiority in accelerating intensive operations such as convolution and matrix-multiply. To design operators for DNNs, algorithms with low complexity, and the adaptability to hardwares are both important. This paper presents the _C-K-S_ algorithm, which not only skip 0-calculations to lower the complexity of conv-layers, but also has efficient GPU implementations. _C-K-S_ consists of _ConvV2_, _KS-deconv_ and _Sk-dilated_, and these 3 parts respectively orient to convolution, deconvolution and dilated-convolution. For Dragon-Alpha [15][21], _C-K_-S is an indispensable factor to achieve high-performance. In relevant experiments, Dragon-Alpha spent less time and memory than PyTorch [13] to train some typical DNNs [2]-[5] on Cifar10 [25]. 
Therefore, this paper is also a summary of convolution techniques in Dragon-Alpha. This article uses 'is' (the singular form) for 'features' and 'filters', to see them as a whole. To more simply express time-complexity, \(n\) refers to the size of the data. The relevant parameter notations for 2D conv-layers are listed in Table I.

\begin{table} \begin{tabular}{|c|c|} \hline _Notation_ & _Explanation_ \\ \hline \(\ast\) & Scalar-multiply or Convolution \\ \hline \(\times\) & Cartesian-product or matrix-multiply \\ \hline \(\odot\) & Element-wise multiply \\ \hline \((\cdot\cdot)\) & Matrix or Vector \\ \hline **in-range-of** & Return true if an index is within the range of a tensor, otherwise return false \\ \hline \(X\setminus Y\setminus W\) & input-features \(\setminus\) output-features \(\setminus\) filters \\ \hline \(X_{n,ih,iw,ic}\) & An indexed element in \(X\) \\ \hline \(Y_{n,oh,ow,oc}\) & An indexed element in \(Y\) \\ \hline \end{tabular} \end{table} Table I: Relevant parameter notations for 2D conv-layers.

## II Motivation

In conv-layers, to generate output-features with the expected size, certain 0s may be padded on the boundary of the input-features \(X\). In many DNNs, due to down-sampling, the feature-maps become smaller with going deeper. In shallow layers, the proportion of the padded 0s in \(X\) is small. But it could be large enough in deeper layers, resulting in a non-negligible number of 0-calculations. Fig. 1 gives an example, in the case of 2D-convolution with padding-1, stride-1 and (\(3\times 3\)) filters. The red curve represents the proportion of padded 0s in \(\mathbf{X}\), while the blue indicates the proportion of 0-calculations in the total amount. Both proportions become bigger as the size of the feature-maps (\(F\times F\)) decreases. When the stride of convolution is greater than 1, (stride - 1) 0s will be inserted between adjacent elements of \(\mathbf{\forall Y}\), in order to find \(\mathbf{\forall X}\) and \(\mathbf{\forall W}\) through deconvolution and dilated-convolution. The inserted 0s account for a big proportion of \(\mathbf{\forall Y}\), making it a sparse tensor and leading to massive 0-calculations, especially when the stride is big. As shown in Fig. 2, only calculations between the colored elements are necessary, and the rest are 0-calculations. To avoid 0-calculations, _ConvV2_ trims the filters to exclude the padded 0s, while _KS-deconv_ and _Sk-dilated_ transform \(\mathbf{\forall Y}\) back to a dense tensor by filter-reconstruction and leaping-element-access, to avoid the inserted 0s. GPU [22][23] is good at intensive computation due to its SIMD nature, which is also the reason for its weakness in executing conditional statements. GPU memory is high bandwidth but low capacity, and is therefore a scarce resource. When directly handling these added 0s, GPUs need either auxiliary memory or conditional statements, which hurts the spatial or temporal efficiency. To maximize the hardware performance, the GPU implementations of _C-K-S_ are mainly dense kernel-functions, with minimized auxiliary memory and conditional statements.

## III Background

### _Approaches to Implement 2D-Convolution on GPUs_

The approach of A. Krizhevsky [8] is to compute the convolution directly. It's very efficient in some cases, but may be poor in others, and requires many specialized implementations for the corner cases of convolution. FFT convolution has much lower time-complexity in theory than dense convolution, but has many drawbacks.
It needs much temporary memory to pad filters the same size as input-feature-maps, and has lower performance in the case of small-filter or big-stride. Besides, FFT is a recursive algorithm with conditional statements, and its computation is not as intensive as dense convolution, so it gets less acceleration on GPUs. Caffe [12] lowers 2D-convolution to matrix-multiply by _im2col_ (Fig. 3). This approach is efficient and robust based on well-optimized matrix-multiply libraries, but it needs auxiliary memory to store the unfolded tensors. CuDNN [9] optimized this approach, by implicitly integrating _im2col_ to General-Matrix-Multiply (GEMM) operators, to improve the speed without auxilary memory. The _direct_[8] and _implicit-GEMM_[9] approaches are referred to implement _C-K-S_ on GPUs. Since deconvolution and dilated-convolution are special kinds of convolution, it's feasible to implement them in a similar way to convolution. ### _Related Works_ For deconvolution, the distribution of inserted 0s is different in each patch of \(\mathbf{\forall Y}\), leading to its complicacy. The main work of accelerating deconvolution is usually to avoid 0-calculations, and simplify the control on hardwares. Orosa _et. al._[17] devise a compile-time computation scheduling for deconvolution, to remove 0-calculations with minimal overhead. Cutlass [10] provides implementations of _strided_dgrad_, which skips 0-calculations by thread indices, but may not be universally applicable due to its specificity for CUDA's architecture. J. Chang _et. al._[18] present the TDC method with FPGA implementations, to transform sparse-deconvolution to dense-convolution. K. Chang _et. al._[19] and A. M. Vadakkevedu _et.al._[20] decompose the convolutional kernel to multi smaller kernels, to exclude 0-calculations. Their implementations are respectively based on TSMC-40nm-CMOS-technology and TensorFlow [11]. The kernel-decomposition ideas [18]-[20] are somewhat close to _KS-deconv_, but there exits some differences between them. _KS-deconv_ has lower time-complexity than TDC [18], because it decomposes smaller kernels. In the work of [19] and [20], their decomposition results are similar to _KS-deconv_'s in several specific cases, but they don't provide explicit math-formulas to make general comparisons. In addition, this work implemented _KS-deconv_ on different architectures, and combined it with _ConvV2_ to attain higher performance. Fig. 1: padding causes more 0-calculations on smaller feature-maps. Fig. 3: Through im2col, the 2D-convolution X = W cam be transformed to matrix-multiply \(X^{\text{orthedge}}\times W^{\text{reshdpool}}\). Fig. 2: Insert (stride -1) 0s between adjacent elements of \(\mathbf{\forall Y}\). ## IV Algorithm and Implementation The 2D-convolution of DNNs is with channels and batches, so the input-feature \(\mathbf{X}\!\subset\!\mathbf{\mathbb{R}}^{N\times n_{x}\times n_{y}\times l_{C}}\), output-features \(\mathbf{Y}\!\subset\!\mathbf{\mathbb{R}}^{N\times n_{x}\times n_{y}\times o_{C}}\) and filters \(\mathbf{W}\!\subset\!\mathbf{\mathbb{R}}^{O_{C}\times n_{x}\times n_{y}\times l_{C}}\) are all 4D tensors. For 2D conv-layers, its forward propagation is represented as (1), while its backward propagation is represented as (2) and (3). 
\[\mathbf{Y}=conv_{2D}(\mathbf{X},\mathbf{W}) \tag{1}\] \[\mathbf{\forall}\mathbf{X}=deconv_{2D}(\mathbf{\forall}\mathbf{W}^{\alpha\alpha\mathbf{1}\mathbf{0}}) \tag{2}\] \[\mathbf{\forall}\mathbf{W}=dilated\_conv_{2D}(\mathbf{\forall}\mathbf{Y}) \tag{3}\] _ConvV2_ is described from the perspective of forward propagation, while _KS-deconv_ and _Sk-dilated_ are in a view of backward propagation. All pseudo code of algorithms are concentrated in Appendix. ### _ConvV2: Convolution with Trimmed Filters_ To perform 2D-convolution, \(\mathbf{W}\) is used as a sliding window to generate patches of \(\mathbf{X}\) with padded 0s. In a specific patch, the padded 0s only appears at the boundary, while the other meaningful elements are in the center to form a rectangle. The start-position \((fh_{s}fw_{s})\) of these meaningful elements is in the top-left corner, and the end-position \((fh_{e}-1,fw_{e}-1)\) is in the down-right corner. _ConvV2_ trains the height and width of \(\mathbf{W}\) to \((fh_{e}-fh_{e})\) and \((fw_{e}-fw_{s})\), in order to only cover the meaningful part from the start-position to the end, so that all padded 0s are not involved in calculation, thus reducing the time-complexity. As shown in Fig. 4, within the current sliding window, \(\mathbf{W}\) is trimmed from \((3\times 3)\) to \((2\times 2)\). The time-complexity of _ConvV2_ is 50, that's smaller than the 72 of normal convolution (Fig. 3). The pseudo code of _ConvV2_ is in Algorithm. 1. In the practical implementations, _padding_ is achieved by conditional-statements: when fetching an element with a specific index, return its value if the index is in range, but 0 if not. Such logical _padding_ sacrifices a little speed, but saves lots of memory. Because _ConvV2_ excludes all padded 0s, there is no conditional statements for logical _padding_, which improves the efficiency. When the size of \(\mathbf{X}\) is large enough, to improve memory bandwidth, the dimension order of \(\mathbf{W}\) will be changed from \((O_{C}\times F_{ii}\times F_{w}\times l_{C})\) to \((F_{ii}\times F_{w}\times l_{C}\times O_{C})\) with \(\theta(n)\) complexity, that brings about 6% increase on speed. ### _KS-deconv: Kernel-Split Deconvolution_ The _KS-deconv_ is used for deconvolution with stride greater than 1. Otherwise, the direct approach is simpler and faster, due to the density of \(\mathbf{\forall}\mathbf{Y}\). _KS-deconv_ consists of 3 stages, and its pseudo code is in Algorithm 2. **Stagel**_kernel-split_ (Fig. 5): rotate \(\mathbf{W}\) 180 degrees, then split \(\mathbf{W}\) to construct \((sh*\mathbf{w})\) smaller kernels The shape of the \((y,x)_{th}\) smaller kernel \(\mathbf{\mathcal{C}}_{y,x}\) is \((O_{C}\!\times\!\frac{\left[\mathbf{y}_{x}-\mathbf{x}\right]}{sh}\!\times\!\left[ \frac{\mathbf{y}_{y}-\mathbf{x}}{sh}\!\times\!\left[\frac{\mathbf{y}_{y}-\mathbf{x}}{sh}\! \times\!l_{C}\right].\). Finally, concat all smaller kernels to a 6D tensor \(\mathbf{\mathcal{C}}\!\subset\!\mathbf{\mathbb{R}}^{h\times w\times o_{C}\times \left[\frac{\mathbf{y}_{x}-\mathbf{x}}{sh}\!\times\!l_{C}\right]}\!\times\!\left[\frac {\mathbf{y}_{x}-\mathbf{x}}{sh}\!\times\!l_{C}\right]\) with continuous mem-addresses. **Stage2**_stride-1-convolution_ (Fig. 6): perform stride-1 convolutions on \(\mathbf{\forall}\mathbf{Y}\) with each \(\mathbf{\mathcal{C}}_{y,x}\) to generate \((sh*\mathbf{sw})\) outputs. 
The \((y,x)_{th}\) convolution is specifically with padding \((\emph{oph}_{y,x}\emph{op}\mathbf{w}_{y,x})\), and a start-position \((ih_{s},\emph{iw}_{s})\) mapping to the top-left corner of an area in \(\mathbf{\forall}\mathbf{X}\). Based on these start-positions, \(\mathbf{\forall}\mathbf{Y}\) are trimmed to avoid needless calculations. **Stage3**_grad-composition_ (Fig. 7): compose all \((sh*\mathbf{sw})\) outputs to get \(\mathbf{\forall}\mathbf{X}\). To remove unnecessary calculations, \((ih_{s}\emph{iw}_{s})\) is adjusted to non-negative. When \((l_{H}\emph{Iw})\) is an integral multiple of \((sh*\mathbf{sw})\), there is no need to check whether \((ih\emph{iw})\) is within the range of \(\mathbf{\forall}\mathbf{X}\), and the efficiency will be improved due to reduction of conditional statements. **Stage4**_ConvV2_ trains filters to exclude all padded 0s. **Stage5**_grad-composition_ (Fig. 7): compose all \((sh*\mathbf{sw})\) outputs to get \(\mathbf{\forall}\mathbf{X}\). To remove unnecessary calculations, \((ih_{s}\emph{iw}_{s})\) is adjusted to non-negative. When \((l_{H}\emph{Iw})\) is an integral multiple of \((sh*\mathbf{sw})\), there is no need to check whether \((ih\emph{iw})\) is within the range of \(\mathbf{\forall}\mathbf{X}\), and the efficiency will be improved due to reduction of conditional statements. **Stage6**_rank-composition_ (Fig. 6): _KS-deconv_ Stage2: perform stride-1 convolution on \(\mathbf{\forall}\mathbf{Y}\) with \(\mathbf{\mathcal{C}}_{00}\), \(\mathbf{\mathcal{C}}_{01}\), \(\mathbf{\mathcal{C}}_{10}\) and \(\mathbf{\mathcal{C}}_{11}\) respectively to generate \(\mathbf{\mathcal{O}}_{00}\), \(\mathbf{\mathcal{O}}_{01}\), \(\mathbf{\mathcal{O}}_{10}\) and \(\mathbf{\mathcal{O}}_{11}\). **Stage7**_grad-composition_ (Fig. 7): compose all \((sh*\mathbf{sw})\) outputs to get \(\mathbf{\forall}\mathbf{X}\). To remove unnecessary calculations, \((ih_{s}\emph{iw}_{s})\) is adjusted to non-negative. When \((l_{H}\emph{Iw})\) is an integral multiple of \((sh*\mathbf{sw})\), there is no need to check whether \((ih\emph{iw})\) is within the range of \(\mathbf{\forall}\mathbf{X}\), and the efficiency will be improved due to reduction of conditional statements. The time-complexity of _KS-deconv_ is \(\theta(n^{3})\), and that of the direct method is \(\theta(sh*sw*n^{3})\), which is \((sh*sw)\) times of the first. As the stride increases, the gap between the two grows rapidly. Stage2 and Stage3 are merged, so that Stage3 can directly use the outputs of Stage2 in registers to reduce overhead. Stage1 needs \((sh*sw*0_{c}*\frac{\left[F_{2}\right]}{\left[n\right]}*\frac{\left[F_{2} \right]}{\left[n\right]}*l_{C})\) auxiliary memory to store \(\mathbf{C}\). In most cases, \(\mathbf{C}\) takes much less memory than \(\mathbf{\forall V}\) and \(\mathbf{\forall X}\), and Stage1 has fairly smaller time-complexity than Stage2. Therefore, the cost of Stage1 is acceptable, and can be ignored when batchsize and feature-size are large enough. This work made an contrast implementation, where Stage1 is achieved by index-conversion rather than auxiliary memory, and all 3 stages are merged to a whole. But it's slower, mainly because of the overhead caused by discontinuous memory-access and index-calculation. ### _Sk-dilated: Skip 0s in Dilated-Convolution_ In dilated-convolution, \(\mathbf{\forall Y}\) performs as filters, so the distribution of inserted 0s is fixed for all patches of \(\mathbf{X}\). 
Suppose \((sh-1)\) and \((sw-1)\) 0s are respectively inserted between adjacent elements of \(\mathbf{\forall Y}\) along the height and width axes, \((n,oh_{p},ow_{p},oc)\) is the index of a specific element in \(\mathbf{\forall Y}\). Such element is not an inserted 0 if \((oh_{p}\)\(ow_{p})\) is an integral multiple of \(\left\langle sh\ sw\right\rangle\), otherwise it is. Follow this rule, _Sk-dilated_ doesn't insert 0s to \(\mathbf{\forall Y}\), but fetches elements in leaping steps \((sh\ sw)\) to generate patches. As shown in Fig. 8, within the current sliding window, elements are fetched according to this 2D-index sequence: \((0,0)\rightarrow(0,2)\rightarrow(2,0)\rightarrow(2,2)\). Algorithm. 3 presents the pseudo code of _Sk-dilated_. The time-complexity of _Sk-dilated_ is \(\theta(n^{3})\), and that of the direct approach is \(\theta(sh*sw*n^{3})\) which is \((sh*sw)\) times of the first. With the increase of stride, the gap between the two rapidly grows. Through _im2col_, the _Sk-dilated-convolution_ between \(\mathbf{X}\) and \(\mathbf{\forall Y}\), can be lowered to a matrix-multiply between \(\mathbf{A}\subset\mathbf{R}^{G_{w}\times G_{w}}\) and \(\mathbf{B}\subset\mathbf{R}^{G_{w}\times G_{w}}\), where \(G_{w}=\mathbf{\alpha}_{c}\), \(G_{w}=F_{w}*F_{w}*l_{C}\) and \(G_{w}=N*0_{w}*0_{w}\). But \(G_{N}\) and \(G_{M}\) don't increase with feature size, if the number of thread-blocks is only determined by them, the parallelism of GPU may be insufficient. The'map-reduce' model is referred to solve this problem: split \(\mathbf{A}\) and \(\mathbf{B}\) to \(G_{Z}\) segments along \(G_{K}\) axis, then compute these segments in parallel, finally sum up all \(G_{Z}\) local results to get the global result. Since the computations on segments take the most of time-complexity, the parallelism becomes about \(G_{Z}\) times of the original. The following method is used to decide \(G_{Z}\). In conv-layers, a dilated-convolution \(\gamma\) corresponds to a convolution \(\alpha\) and a deconvolution \(\beta\). Without map-reduce, their number of thread-blocks are \(N_{\alpha}\), \(N_{B}\) and \(N_{\gamma}\). Since \(N_{\alpha}\) and \(N_{B}\) increase with feature-size, \(G_{Z}\) can be positive related to \((N_{\alpha}+N_{\beta})/N_{\gamma}\). It also needs a lower and upper bound to restrict \(G_{Z}\). The upper bound can be decided by the number of SMs in GPU. ### _Combination: KS-deconv-V2 and Sk-dilated-V2_ The _V2_ versions of _KS-deconv_ and _Sk-dilated_ combines the filter-trimming of _ConvV2_. So they perform better especially on small feature-maps, due to less complexity and conditional-statements. _KS-deconv-V2_ has 3 stages, and only the second stage is different from that of _KS-deconv_, where the stride-1 convolutions are achieved by _ConvV2_. The same as _Sk-dilated_, _Sk-dilated-V2_ also needs map-reduce to raise parallelism. Their pseudo code are in Algorithm 2B and 3B. Fig. 9 and 10 present how they work by examples. Fig. 8: _Sk-dilated_ fetches elements in leaping-steps to generate patches. Fig. 9: _KS-deconv-V2_ Stage2 (Fig. 5 for Stage1, Fig. 7 for Stage3): the stride-1 convolutions are achieved by _ConvV2_. The time-complexity is 50, less than the 72 of _KS-deconv_ Stage2 (Fig. 6). ### _GPU Optimizations_ More than 100 kernel-functions have been implemented for _C-K-S-V2_ (_C-K-S_ with _KS-deconv-V2_ and _Sk-dilated-V2_). Some of them are general solutions to assure performance, while some are specialized to have higher performance in specific situations. 
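The zero-skipping idea behind _Sk-dilated_ can be checked numerically. The sketch below is illustrative only (toy NHWC shapes, stride 2, no padding, plain NumPy rather than the CUDA kernels described above): it computes the filter gradient once with explicit zero-insertion into \(\mathbf{\forall Y}\) and once with leaping-element access, and verifies that the two agree while the latter touches roughly \(sh*sw\) times fewer elements.

```python
import numpy as np

# Toy NHWC shapes, stride 2, no padding (all assumed purely for illustration).
N, IH, IW, IC, OC, FH, FW, sh, sw = 2, 8, 8, 3, 4, 3, 3, 2, 2
OH, OW = (IH - FH) // sh + 1, (IW - FW) // sw + 1
rng = np.random.default_rng(0)
X = rng.standard_normal((N, IH, IW, IC))    # input-features
dY = rng.standard_normal((N, OH, OW, OC))   # gradient of output-features

# (a) Direct dilated-convolution: insert (sh-1, sw-1) zeros into dY, then slide it over X.
dY_up = np.zeros((N, (OH - 1) * sh + 1, (OW - 1) * sw + 1, OC))
dY_up[:, ::sh, ::sw, :] = dY
dW_direct = np.zeros((OC, FH, FW, IC))
for fh in range(FH):
    for fw in range(FW):
        patch = X[:, fh:fh + dY_up.shape[1], fw:fw + dY_up.shape[2], :]
        dW_direct[:, fh, fw, :] = np.einsum('nhwc,nhwo->oc', patch, dY_up)

# (b) Sk-dilated-style leaping access: never materialize the zeros, stride over X instead.
dW_skip = np.zeros((OC, FH, FW, IC))
for fh in range(FH):
    for fw in range(FW):
        patch = X[:, fh:fh + OH * sh:sh, fw:fw + OW * sw:sw, :]
        dW_skip[:, fh, fw, :] = np.einsum('nhwc,nhwo->oc', patch, dY)

# Same result, but (b) skips all the inserted zeros.
assert np.allclose(dW_direct, dW_skip)
```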
For convenience, they are aggregated to make a higher-level encapsulation [15]. GPU executes integer division and remainder operations much slower than floating-point operations, so their usage should be minimized. In some cases, they are replaced by cheap bitwise-operations. For codes with strict requirements for high-performance, the results of these integer operations will be pre-calculated, then stored in constant-memory to maximize the broadcast, and will be retrieved by kernel-functions during calculating. GPU is not good at handling conditional statements for its SIMD nature. Therefore, some conditional statements are replaced by equivalent tables and expressions. Besides, some fixed-loops are unrolled manually or by compilers, to remove the conditional statements which control the loops. The last dimension of tensors are implicitly padded to 4x, and some memory operations are merged through vector-types, in order to improve the bandwidth by using 128bit as the minimum unit for memory access. For dense kernel-functions, _im2col_, _reshape_, _transpose_, _padding_, etc operators are integrated to them, and achieved by index-conversion with no auxiliary memory. Double-buffered shared-memory is used to hasten data loading. Some statements are reordered to reduce the consumption of registers. Each thread executes a big batch of instructions per round to hide memory access, for example: in the heaviest kernel-functions of _ConvV2_, each thread execute (\(8\times 8\times 16\)) multiply-add operations, after each round of reading data from global-memory. ## V Experiments In this work, the GPU implementations of _C-K-S-V2_ are improved from cu32 of Dragon-Alpha [15][21]. This section discusses their performance and the variation tendencies. PyTorch (1.12.1) [13] is used as a baseline for comparison, since its underlying GPU library cuDNN [9] has been extremely optimized. _ConvV2_ is evaluated from the perspective of forward propagation, while _KS-deconv_ (_-V2_) and _Sk-dilated_ (_-V2_) are in a view of backward propagation. To know the overall performance of _C-K-S_ in DNNs, please refer the experimental datas of Dragon-Alpha [15]. ### _Methods_ For each member of _C-K-S-V2_, 2 test-sets with different filter-size are provided, and each test-set contains 8 test-cases. From the 1st to the 8th test-case, the feature-size goes smaller, but channels and batchsize become bigger. The stride or dilation is 2 to introduce sparse tensor. The operators were tested on RTX 3060ti GPU with CUDA 11.5 in _float32_ data-type. The operations of _C-K-S_ and PyTorch are logically equivalent, with the same time-complexity. All calculations were performed on GPU, and there was no data transmission between GPU and CPU. At the same time, only 1 operator was executed to grasp all GPU resource, while the interference from other programs was minimized. The size of input data is large enough to ensure parallelism. To maximize PyTorch's speed, it pre-allocated enough memory before loops of executing a specific convolutional operator; and it didn't do any _rotate_ or _transpose_ operations to change arrangement of tensors, but just executed convolutional operators. Each operator was executed 1000 times in succession for each of their test-cases, and _cudaDeviceSynchronize_() was called to ensure completion of each execution. The total execution time was averaged, to find the speed of floating-point operation (time-complexity / time), where the unit is GFlop/s (\(10^{9}\)_float32_ operations per second). 
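As an illustration of this measurement protocol (not the benchmark code used in the paper, which drives the C-K-S kernels and cuDNN directly), a timing loop of this kind could be written with PyTorch as follows; the FLOP count is the standard dense-convolution formula and is only assumed here, since Table II is not reproduced.

```python
import time
import torch
import torch.nn.functional as F

def conv2d_gflops(N, IC, IH, IW, OC, FH, FW, stride=1, padding=0, runs=1000):
    """Time a convolution operator as described above: run it `runs` times,
    synchronizing after every execution, average the time, report GFlop/s."""
    x = torch.randn(N, IC, IH, IW, device='cuda', dtype=torch.float32)
    w = torch.randn(OC, IC, FH, FW, device='cuda', dtype=torch.float32)
    F.conv2d(x, w, stride=stride, padding=padding)   # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        F.conv2d(x, w, stride=stride, padding=padding)
        torch.cuda.synchronize()   # ensure completion of each execution
    elapsed = (time.perf_counter() - start) / runs
    OH = (IH + 2 * padding - FH) // stride + 1
    OW = (IW + 2 * padding - FW) // stride + 1
    flops = 2.0 * N * OH * OW * OC * FH * FW * IC    # standard dense-conv count (assumed)
    return flops / elapsed / 1e9
```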
Formulas for time-complexity of convolutional operators are in Table II. ### _Results_ Line-charts are used to demonstrate the variation of speed under different tensor shapes. Fig. 11-12 show the results of convolution, where the '**' means excluding the time of changing the dimension order of \(W\). Fig. 13-14 present the results of deconvolution, and Fig. 15-16 list the results of dilated-convolution. deeper layers with bigger channels and smaller feature-size, usually cause the majority of time-complexity. Therefore, cuDNN made full optimizations for them, to be more efficient than this work's implementations. But due to the filter-trimming, the speed of _ConvV2_ and _Sk-dilated-V2_ finally exceeds that of PyTorch. In conclusion, the above comparison verifies the effectiveness of this work's GPU implementations of _C-K-S-V2_. Benefiting from filter-trimming, in general, _C-K-S-V2_ is faster with smaller feature-size and bigger padding, since more padded 0s are excluded. Due to down-sampling, in all test-cases, the size of output feature-maps is smaller than that of input feature-maps, so the filter-trimming plays more roles in deconvolution, leading to a higher speed. In deconvolution and dilated-convolution, \(\nabla\mathbf{Y}\) expands nearly 4 times because of the inserted 0s. But these 0s are skipped through _KS-deconv_ and _Sk-dilated_, resulting in the following two gains: first, the actual time-complexity is only about 25% of that in theory; second, there is no sparse tensor involved in calculation, therefore, the design pattern of _ConvV2_'s kernel-functions for dense computing, can be naturally migrated to those of _KS-deconv_ and _Sk-dilated_, so that they can efficiently utilize the GPU to a similar extent. That's why the speed of _KS-deconv_ and _Sk-dilated_ is close to 4 times of _ConvV2_'s. But it's difficult to exceed such '4 times', due to their complicacy compared to _ConvV2_. _KS-deconv_ needs additional resources to reconstruct filters, and more registers for intermediate variables, which lowers the parallelism of dense kernels because of less active threads. The _Sk-dilated_ relies on map-reduce to have sufficient parallelism, that inevitably causes certain expenses. In addition, its memory access is less continuous with larger spans, that hurts the hit-radio of L2-cache. The heaviest kernel functions of _C-K-S-V2_ refers the _implicit-GEMM_[9] approach, and are selected to handle large-scale data, so their performance remains stable in all test cases. When handling large-scale data, the speed of _ConvV2_ can improve by 5% to 10%, without the time of dimension-reordering of \(\mathbf{W}\), and such reordering can be avoided with better tensor arrangement, or be hidden in the parallel execution of multi operators. When using _ConvV2_ or _KS-deconv_ to perform one-way propagation, it's able to reconstruct the filters before loops of convolutional operators, as it can further reduce the time of computation. ## VI Conclusion This paper discusses the motivation, background, algorithms as well as the implementations of _C-K-S_, and verifies its effectiveness by experiments with comparison to PyTorch. _C-K-S_ skips 0-calculations in two ways: exclude the padded 0s by filter-trimming, and transform \(\nabla\mathbf{Y}\) back to dense tensor to avoid the inserted 0s. _C-K-S_ reduces the computational complexity in theory, and gets on well with GPUS to have high-performance. As a result, it has the potential to break the limit of devices' compute capability. 
Certain optimizations for GPUs were made in this work, but not a full optimization oriented to the Ampere architecture [24] to fully utilize the hardware. The bank-conflict on shared-memory hasn't been thoroughly solved, which lowers the performance. All GPU programming is by C++ without integrated ptx-codes, and the executable files are directly generated by the compiler with no fine-tuning on sass-codes. For element fetching of convolutional operators, the memory bandwidth was improved, but the overlaps between patches haven't been fully used. Further enhancements can be made according to the aforementioned defects.

Fig. 14: Deconvolution: \(\langle F_{H}\ F_{W}\rangle=(5,5)\), _padding_ \(=2\).

Fig. 16: Dilated-Convolution: \(\langle F_{H}\ F_{W}\rangle=(5,5)\), _padding_ \(=2\).
2303.17740
A CI-based Auditing Framework for Data Collection Practices
Apps and devices (mobile devices, web browsers, IoT, VR, voice assistants, etc.) routinely collect user data, and send them to first- and third-party servers through the network. Recently, there is a lot of interest in (1) auditing the actual data collection practices of those systems; and also in (2) checking the consistency of those practices against the statements made in the corresponding privacy policies. In this paper, we argue that the contextual integrity (CI) tuple can be the basic building block for defining and implementing such an auditing framework. We elaborate on the special case where the tuple is partially extracted from the network traffic generated by the end-device of interest, and partially from the corresponding privacy policies using natural language processing (NLP) techniques. Along the way, we discuss related bodies of work and representative examples that fit into that framework. More generally, we believe that CI can be the building block not only for auditing at the edge, but also for specifying privacy policies and system APIs. We also discuss limitations and directions for future work.
Athina Markopoulou, Rahmadi Trimananda, Hao Cui
2023-03-30T23:20:52Z
http://arxiv.org/abs/2303.17740v1
# A CI-based Auditing Framework for Data Collection Practices ###### Abstract. Apps and devices (mobile devices, web browsers, IoT, VR, voice assistants, _etc._) routinely collect user data, and send them to first- and third-party servers through the network. Recently, there is a lot of interest in (1) auditing the actual data collection practices of those systems; and also in (2) checking the consistency of those practices against the statements made in the corresponding privacy policies. In this paper1, we argue that the contextual integrity (CI) tuple can be the basic building block for defining and implementing such an auditing framework. We elaborate on the special case where the tuple is partially extracted from the network traffic generated by the end-device of interest, and partially from the corresponding privacy policies using natural language processing (NLP) techniques. Along the way, we discuss related bodies of work and representative examples that fit into that framework. More generally, we believe that CI can be the building block not only for auditing at the edge, but also for specifying privacy policies and system APIs. We also discuss limitations and directions for future work. Footnote 1: This position paper is supported by NSF Award 1956393. The paper was first presented at the 4th Annual Symposium on Applications of Contextual Integrity, NTC, Sept. 2022. A. Markopoulou, R. Trimananda, and H. Cui (2022). A CI-based Auditing Framework for Data Collection Practices. In. ACM, New York, NY, USA, 5 pages. ## 1. The Problem Space Personal data are routinely collected on end devices (browsers, mobile and IoT devices, smart TVs, VR devices, _etc._) and shared with many first- and third-party entities, without providing much transparency or control to users. Increased public awareness has led to data protection legislation, such as the GDPR, CCPA/CPRA, and other state or sector-specific data protection laws. These laws state rights of consumer or citizens, and duties of entities that collect, share, and use personal data. Government agencies, such as the US. Federal Trade Commission (FTC), take initiatives to enforce those regulations. Their efforts are amplified by non-profits, privacy-advocates, and academics who report the results of their investigation on data collection practices and violations. These developments have pushed tech companies, which were previously lacking incentives to self-regulate, towards the right direction, _i.e.,_ to become increasingly more transparent about their data collection practices and apply other good practices as well. **The Gap between Systems and Laws.** However, there is still a significant gap between the practices of tech companies, and the formulation and enforcement of privacy laws. First, from a software developer's perspective, it is often difficult to ensure compliance with all laws and regulations due to the complexity of their own system as well as their dependence on third party libraries, platforms, and other parts of the ecosystem that they have no control over. Second, policymakers need technical input to write relevant and enforceable policies. Sometimes the requirements in the laws do not directly map to a system specification that can be implemented and audited. Furthermore, the technology itself evolves rapidly and often renders privacy laws obsolete. The general problem of co-designing privacy-preserving systems along with consistent privacy laws-let alone auditing tools--is already quite daunting. 
Progress is made on narrow notions such as the meaning of "singling out" a user (Cheng et al., 2017), and this is an area of increasing research interest. **Transparency and Privacy Policies.** Although the general problem of technology-law interface is quite complex and difficult to tackle, there have been efforts and progress on developing methodologies to solve a more narrow problem: _transparency._ At the very least, most privacy laws require that entities disclose their data collection, sharing, and use practices. For example, entities must disclose _what they collect, with whom they share it, for what purposes, etc._ Furthermore, most privacy laws also require that the user is notified and consents to those practices. Software systems and services typically fulfill their "notice-and-consent" obligation through a _privacy policy_ document that they provide to inform their users about their practices. Then, the user can choose to opt in or opt out before using the product or service. Privacy policy documents are legally binding and they are the focus of this paper.2 Figure 1. The privacy problem space involves computer systems, privacy laws, and users. An important component interfacing with all three is the privacy policy of the system—we presented an earlier version of this diagram in (Bartos et al., 2017). In this paper, we focus on the part highlighted in red: auditing the actual data collection practices of the system and the statements made in the corresponding privacy policy.
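As a deliberately simplified sketch of how the CI tuple could serve as the shared building block on both sides of such an audit, with tuples extracted from network traffic on one side and from policy text on the other, consider the following; the field names and the matching rule are illustrative and are not a specification from this paper.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass(frozen=True)
class CITuple:
    """The five contextual-integrity parameters of one information flow."""
    sender: str       # e.g., the app or device whose traffic was captured
    recipient: str    # first- or third-party destination of the data
    subject: str      # whose data it is (typically the user)
    attribute: str    # data type, e.g., "email", "advertising ID", "geolocation"
    principle: str    # transmission principle, e.g., "with consent", "for advertising"

def undisclosed_flows(observed: Iterable[CITuple],
                      disclosed: Iterable[CITuple]) -> List[CITuple]:
    """Return network-observed flows that have no matching policy statement.
    Here a statement matches when sender, recipient and attribute agree,
    which is a deliberately coarse rule used only for illustration."""
    disclosed_keys = {(d.sender, d.recipient, d.attribute) for d in disclosed}
    return [o for o in observed
            if (o.sender, o.recipient, o.attribute) not in disclosed_keys]
```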
2303.06895
An Improved Sample Complexity for Rank-1 Matrix Sensing
Matrix sensing is a problem in signal processing and machine learning that involves recovering a low-rank matrix from a set of linear measurements. The goal is to reconstruct the original matrix as accurately as possible, given only a set of linear measurements obtained by sensing the matrix [Jain, Netrapalli and Shanghavi, 2013]. In this work, we focus on a particular direction of matrix sensing, which is called rank-$1$ matrix sensing [Zhong, Jain and Dhillon, 2015]. We present an improvement over the original algorithm in [Zhong, Jain and Dhillon, 2015]. It is based on a novel analysis and sketching technique that enables faster convergence rates and better accuracy in recovering low-rank matrices. The algorithm focuses on developing a theoretical understanding of the matrix sensing problem and establishing its advantages over previous methods. The proposed sketching technique allows for efficiently extracting relevant information from the linear measurements, making the algorithm computationally efficient and scalable. Our novel matrix sensing algorithm improves former result [Zhong, Jain and Dhillon, 2015] on in two senses: $\bullet$ We improve the sample complexity from $\widetilde{O}(\epsilon^{-2} dk^2)$ to $\widetilde{O}(\epsilon^{-2} (d+k^2))$. $\bullet$ We improve the running time from $\widetilde{O}(md^2 k^2)$ to $\widetilde{O}(m d^2 k)$. The proposed algorithm has theoretical guarantees and is analyzed to provide insights into the underlying structure of low-rank matrices and the nature of the linear measurements used in the recovery process. It advances the theoretical understanding of matrix sensing and provides a new approach for solving this important problem.
Yichuan Deng, Zhihang Li, Zhao Song
2023-03-13T06:57:48Z
http://arxiv.org/abs/2303.06895v1
# An Improved Sample Complexity for Rank-1 Matrix Sensing ###### Abstract Matrix sensing is a problem in signal processing and machine learning that involves recovering a low-rank matrix from a set of linear measurements. The goal is to reconstruct the original matrix as accurately as possible, given only a set of linear measurements obtained by sensing the matrix [13]. In this work, we focus on a particular direction of matrix sensing, which is called rank-1 matrix sensing [14]. We present an improvement over the original algorithm in [14]. It is based on a novel analysis and sketching technique that enables faster convergence rates and better accuracy in recovering low-rank matrices. The algorithm focuses on developing a theoretical understanding of the matrix sensing problem and establishing its advantages over previous methods. The proposed sketching technique allows for efficiently extracting relevant information from the linear measurements, making the algorithm computationally efficient and scalable. Our novel matrix sensing algorithm improves former result [14] on in two senses, * We improve the sample complexity from \(\widetilde{O}(\epsilon^{-2}dk^{2})\) to \(\widetilde{O}(\epsilon^{-2}(d+k^{2}))\). * We improve the running time from \(\widetilde{O}(md^{2}k^{2})\) to \(\widetilde{O}(md^{2}k)\). The proposed algorithm has theoretical guarantees and is analyzed to provide insights into the underlying structure of low-rank matrices and the nature of the linear measurements used in the recovery process. It advances the theoretical understanding of matrix sensing and provides a new approach for solving this important problem. Introduction The matrix sensing problem is a fundamental problem in signal processing and machine learning that involves recovering a low-rank matrix from a set of linear measurement. This problem arises in various applications such as image and video processing [12, 1] and sensor networks [17, 14]. Mathematically, matrix sensing can be formulated as a matrix view of compressive sensing problem [15]. The rank-1 matrix sensing problem was formally raised in [16]. The matrix sensing problem has attracted significant attention in recent years, and several algorithms have been proposed to solve it efficiently. In this paper, we provide a novel improvement over the origin algorithm in [16], with improvement both on running time and sample complexity. Matrix sensing is a fundamental problem in signal processing and machine learning that involves recovering a low-rank matrix from a set of linear measurements. Specifically, given a matrix \(W_{*}\in\mathbb{R}^{d\times d}\) of rank \(k\) that is not directly accessible, we aim to recover \(W_{*}\) from a set of linear measurements \(b\in\mathbb{R}^{n}\) applied to the ground truth matrix \(W^{*}\) where \[b_{i}=\operatorname{tr}[A_{i}^{\top}W_{*}],\quad\forall i=1,\ldots,m,\] where \(A_{i}\) are known linear operators. The measurements \(b_{i}\) are obtained by sensing the matrix \(W_{*}\) using a set of linear measurements, and the goal is to reconstruct the original matrix \(W_{*}\) as accurately as possible. This problem arises in various applications such as image and video processing, sensor networks, and recommendation systems. The matrix sensing problem is ill-posed since there may exist multiple low-rank matrices that satisfy the given linear measurements. 
However, the problem becomes well-posed under some assumptions on the underlying matrix, such as incoherence and restricted isometry property (RIP) [18, 19, 20], which ensure unique and stable recovery of the matrix. A well-used method to solve this problem is to use convex optimization techniques that minimize a certain loss function subject to the linear constraints. Specifically, one can solve the following convex optimization problem: \[\min_{W_{*}}\ \operatorname{rank}(W_{*})\] \[\text{s.t.}\ \operatorname{tr}[A_{i}^{\top}W_{*}]=b_{i},\forall i =1,\ldots,m.\] However, this problem is NP-hard [13] and intractable in general, and hence, various relaxation methods have been proposed, such as nuclear norm minimization and its variants, which provide computationally efficient solutions with theoretical guarantees. In this work, we focus on the _rank-one independent_ measurements. Under this setting, the linear operators \(A_{i}\) can be decomposed into the form of \(A_{i}=x_{i}y_{i}^{\top}\), where \(x_{i}\in\mathbb{R}^{d},y_{i}\in\mathbb{R}^{d}\) are all sampled from zero-mean multivariate Gaussian distribution \(\mathcal{N}(0,I_{d})\). Our work on improving the matrix sensing algorithm is based on a novel analysis and sketching technique that enables faster convergence rates and better accuracy in recovering low-rank matrices. We focus on developing a theoretical understanding of the proposed algorithm and establishing its advantages over previous methods. Our analysis provides insights into the underlying structure of the low-rank matrices and the nature of the linear measurements used in the recovery process. The proposed sketching technique allows us to efficiently extract relevant information from the linear measurements, making our algorithm computationally efficient and scalable. Overall, our contribution advances the theoretical understanding of matrix sensing and provides a new approach for solving this important problem. ### Our Result To summarize, we improve both the running time of original algorithm [11] from \(O(md^{2}k^{2})\) to \(O(md^{2}k)\), and the sample complexity from \(\widetilde{O}(\epsilon^{-2}dk^{2})\) to \(\widetilde{O}(\epsilon^{-2}(d+k^{2}))\). Formally, we get the following result, **Theorem 1.1** (Informal, combination of Theorem 4.7, Theorem 5.7 and Theorem 7.8).: _Let \(\epsilon_{0}\in(0,0.1)\) denote the final accuracy of the algorithm. Let \(\delta\in(0,0.1)\) denote the failure probability of the algorithm. Let \(\sigma_{1}^{*}\) denote the largest singular value of ground-truth matrix \(W_{*}\in\mathbb{R}^{d\times d}\). Let \(\kappa\) denote the condition number of ground-truth matrix \(W_{*}\in\mathbb{R}^{d\times d}\). Let \(\epsilon\in(0,0.001/(k^{1.5}\kappa))\) denote the RIP parameter. Let \(m=\Theta(\epsilon^{-2}(d+k^{2})\log(d/\delta))\). Let \(T=\Theta(\log(k\kappa\sigma_{1}^{*}/\epsilon_{0}))\). There is a matrix sensing algorithm (Algorithm 1) that takes \(O(mT)\) samples, runs in \(T\) iterations, and each iteration takes \(\widetilde{O}(md^{2}k)\) time, finally outputs a matrix \(W\in\mathbb{R}^{d\times d}\) such that_ \[(1-\epsilon_{0})W_{*}\preceq W\preceq(1+\epsilon_{0})W_{*}\] _holds with probability at least \(1-\delta\)._ ### Related Work Matrix SensingThe matrix sensing problem has attracted significant attention in recent years, and several algorithms have been proposed to solve it efficiently. 
One of the earliest approaches is the convex optimization-based algorithm proposed by Candes and Recht in 2009 [10], which minimizes the nuclear norm of the matrix subject to the linear constraints. This approach has been shown to achieve optimal recovery guarantees under certain conditions on the linear operators, such as incoherence and RIP. Since then, various algorithms have been proposed that improve upon the original approach in terms of computational efficiency and theoretical guarantees. For instance, the iterative hard thresholding algorithm (IHT) proposed by Blumensath and Davies in 2009 [1], and its variants, such as the iterative soft thresholding algorithm (IST), provide computationally efficient solutions with improved recovery guarantees. In the work by Recht, Fazel, and Parrilo [14], they gave some measurement operators satisfying the RIP and proved that, with \(O(kd\log d)\) measurements, a rank-\(k\) matrix \(W_{*}\in\mathbb{R}^{d\times d}\) can be recovered. Moreover, later works have proposed new approaches that exploit additional structure in the low-rank matrix, such as sparsity or group sparsity, to further improve recovery guarantees and efficiency. For instance, the sparse plus low-rank (\(S\) + \(L\)) approach proposed by [13], and its variants, such as robust principal component analysis (RPCA) and sparse subspace clustering (SSC), provide efficient solutions with improved robustness to outliers and noise. More recently, [15] considers non-square matrix sensing under RIP assumptions, and shows that matrix factorization does not introduce any spurious local minima under RIP. [21] studies the technique of discrete-time mirror descent utilized to address the unregularized empirical risk in matrix sensing.

**Compressive Sensing.** Compressive sensing has been a widely studied topic in the signal processing and theoretical computer science fields [17, 18, 19, 20, 13, 14, 21, 15, 16, 15, 17, 18, 19, 2]. [17] gave a fast algorithm (running in \(O(k\log n\log(n/k))\) time for general inputs and \(O(k\log n)\) time for inputs with at most \(k\) non-zero Fourier coefficients) for the \(k\)-sparse approximation to the discrete Fourier transform of an \(n\)-dimensional signal. [16] provided an algorithm which uses \(O_{d}(k\log N\log\log N)\) samples of the signal and runs in \(O_{d}(k\log^{d+3}N)\) time to compute a \(k\)-sparse approximation to the Fourier transform of a length-\(N\) signal. Later work [16] proposed a new technique, called isolation on average, for analysing the noisy hashing schemes that arise in Sparse FFT; applying it, they achieved sample-optimal results in \(k\log^{O(1)}n\) time for estimating the values of a given list of frequencies and for computing the Sparse FFT itself. [20] gave the first sublinear-time \(\ell_{2}/\ell_{2}\) compressed sensing algorithm which achieves the optimal number of measurements without iterating. After that, [20] provided an algorithm which uses \(O(k\log k\log n)\) samples to compute a \(k\)-sparse approximation to the \(d\)-dimensional Fourier transform of a length-\(n\) signal. Later, [21] provided an efficient Fourier interpolation algorithm that improves the previous best algorithm [10] in sample complexity, time complexity and output sparsity. In [21], the authors presented a unified framework for the problem of band-limited signal reconstruction, achieving high-dimensional Fourier sparse recovery and high-accuracy Fourier interpolation.
Recent work [11] designed robust algorithms for super-resolution imaging that are efficient in terms of both running time and sample complexity for any constant dimension under the same noise model as [22], based on new techniques in the Sparse Fourier transform.

**Faster Iterative Algorithm via Sketching.** Low-rank matrix completion is a well-known problem in machine learning with various applications in practical fields such as recommender systems, computer vision, and signal processing. Some notable surveys of this problem are provided in [13, 20]. While Candes and Recht [23] first proved the sample complexity for low-rank matrix completion, other works such as [21] and [20] have provided improvements and guarantees on convergence for heuristics. In recent years, sketching has been applied to various machine learning problems such as linear regression [23, 20], low-rank approximation [22, 24], weighted low-rank approximation, matrix CUR decomposition [25, 26, 27], and tensor regression [28, 29, 27, 26], leading to improved efficiency of optimization algorithms in many problems. For example, linear programming [2, 29, 30, 31, 32], matrix completion [30], empirical risk minimization [33, 34], training over-parameterized neural networks [2, 35, 36, 37, JMS\({}^{+}\)22, Zha22], discrepancy algorithms [21, 38, 39], the Frank-Wolfe method [20, 31], and reinforcement learning [26].

**Roadmap.** We organize the rest of the paper as follows. In Section 2 we provide the technique overview for our paper. In Section 3 we provide some tools and existing results for our work. In Section 4 we provide the detailed analysis for our algorithm. In Section 5 we argue that our measurements are good. In Section 6 we provide the analysis for a shrinking step. In Section 7 we provide the analysis for the techniques used to solve the optimization problem at each iteration.

## 2 Technique Overview

In this section, we provide a detailed overview of the techniques used to prove our results. Our approach is based on a combination of matrix sketching and low-rank matrix recovery techniques. Specifically, we use a sketching technique that allows us to efficiently extract relevant information from linear measurements of the low-rank matrix. We then use this information to recover the low-rank matrix using a convex optimization algorithm. With these techniques, we are able to improve previous results in both sample complexity and running time. From these two perspectives, we give an overview of our techniques below.

### Tighter Analysis Implies Reduction in Sample Complexity

Our approach achieves this improvement by using a new sketching technique that compresses the original matrix into a smaller one while preserving its low-rank structure. This compressed version can then be used to efficiently extract relevant information from linear measurements of the original matrix. To analyze the performance of our approach, we use tools from random matrix theory and concentration inequalities. Specifically, we use Bernstein's inequality for matrices to establish bounds on the error of our recovery algorithm. We first define our measurements and operators. For each \(i\in[m]\), let \(x_{i},y_{i}\) denote samples from \(\mathcal{N}(0,I_{d})\).
We define * \(A_{i}:=x_{i}y_{i}^{\top}\); * \(b_{i}:=x_{i}^{\top}W_{*}y_{i}\); * \(W_{0}:=\frac{1}{m}\sum_{i=1}^{m}b_{i}A_{i}\); * \(B_{x}:=\frac{1}{m}\sum_{i=1}^{m}(y_{i}^{\top}v)^{2}x_{i}x_{i}^{\top}\); * \(B_{y}:=\frac{1}{m}\sum_{i=1}^{m}(x_{i}^{\top}v)^{2}y_{i}y_{i}^{\top}\); * \(G_{x}:=\frac{1}{m}\sum_{i=1}^{m}(y_{i}^{\top}v)(y_{i}^{\top}v_{\perp})x_{i}x_{i}^{\top}\); * \(G_{y}:=\frac{1}{m}\sum_{i=1}^{m}(x_{i}^{\top}v)(x_{i}^{\top}v_{\perp})y_{i}y_{i}^{\top}\). We need to argue that our measurements are _good_ under our choice of \(m\); here the word "good" means that * \(\|W_{0}-W_{*}\|\leq\epsilon\cdot\|W_{*}\|\); * \(\|B_{x}-I\|\leq\epsilon\) and \(\|B_{y}-I\|\leq\epsilon\); * \(\|G_{x}\|\leq\epsilon\) and \(\|G_{y}\|\leq\epsilon\). In our analysis we need to first bound \(\|Z_{i}\|\) and \(\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\|\), where \(Z_{i}:=x_{i}x_{i}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}y_{i}y_{i}^{\top}\). With a careful analysis, we are able to show that (Lemma 5.5 and Lemma 5.6) \[\Pr[\|Z_{i}\|\leq C^{2}k^{2}\log^{2}(d/\delta)\sigma^{4}\cdot\sigma_{1}^{*}] \geq 1-\delta/\operatorname{poly}(d)\] \[\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\| \leq C^{2}k^{2}\sigma^{4}(\sigma_{1}^{*})^{2}.\] Now, applying these two results and by Bernstein's inequality, we are able to show that our operators are all "good" (Theorem 5.7).

### Induction Implies Correctness

To get the final error bounded, we use an inductive strategy for the analysis. Here we let \(U_{*}\) and \(V_{*}\) be the orthonormal factors of the ground truth \(W_{*}\), i.e., \(W_{*}=U_{*}\Sigma_{*}V_{*}^{\top}\). We show that, when iteratively applying our alternating minimization method, if \(U_{t}\) and \(V_{t}\) are close to \(U_{*}\) and \(V_{*}\) respectively, then the output of the next iteration \(t+1\) is close to \(U_{*}\) and \(V_{*}\). Specifically, we show that, if \(\operatorname{dist}(U_{t},U_{*})\leq\frac{1}{4}\cdot\operatorname{dist}(V_{t},V_{*})\), then it yields \[\operatorname{dist}(V_{t+1},V_{*})\leq\frac{1}{4}\cdot\operatorname{dist}(U_{t},U_{*}). \tag{1}\] Similarly, from the other side, if \(\mathrm{dist}(V_{t+1},V_{*})\leq\frac{1}{4}\cdot\mathrm{dist}(U_{t},U_{*})\), we have \[\mathrm{dist}(U_{t+1},U_{*})\leq\frac{1}{4}\cdot\mathrm{dist}(V_{t+1},V_{*}). \tag{2}\] These two recurrence relations together guarantee that, if the starting errors \(\mathrm{dist}(U_{0},U_{*})\) and \(\mathrm{dist}(V_{0},V_{*})\) are small, then the distances from \(U_{t}\) and \(V_{t}\) to \(U_{*}\) and \(V_{*}\) shrink geometrically with \(t\). To prove the result, we first define the value of \(\epsilon_{d}\) as \(1/10\). Then, by the algorithm, we have the following relationship between \(V_{t+1}\) and \(\widehat{V}_{t+1}R^{-1}\), \[V_{t+1}=\widehat{V}_{t+1}R^{-1}=(W_{*}^{\top}U_{t}-F)R^{-1},\] where the second step follows from the definition of \(\widehat{V}\) and defining \(F\) as in Definition 6.1.
Now we show that \(\|F\|\) and \(\|R^{-1}\|\) can be bounded respectively as
\[\|F\|\leq 2\epsilon k^{1.5}\cdot\sigma_{1}^{*}\cdot\mathrm{dist}(U_{t},U_{*})\qquad\text{(by Lemma 6.4)}\]
and
\[\|R^{-1}\|\leq 2(\sigma_{k}^{*})^{-1}\qquad\text{(by Lemma 6.5)}.\]
Combining these two bounds with our choice of \(\epsilon=0.001/(k^{1.5}\kappa)\) gives the contraction in Eq. (1), and a symmetric argument gives Eq. (2).

### Sketching Implies Reduction in Running Time

At each iteration of the alternating minimization, we need to solve the least-squares problem
\[\arg\min_{V\in\mathbb{R}^{d\times k}}\sum_{i=1}^{m}(b_{i}-x_{i}^{\top}U_{t}V^{\top}y_{i})^{2}. \tag{3}\]
When this problem is solved straightforwardly, it costs \(O(md^{2}k^{2})\) time, which is very expensive. So, from another direction, we give an analysis showing that this problem can be converted to a minimization problem in which the target variable is a vector. Specifically, we show that the optimization problem (3) above is equivalent to the following (Lemma 7.3),
\[\arg\min_{v\in\mathbb{R}^{dk}}\|Mv-b\|_{2}^{2},\]
where each row of the matrix \(M\in\mathbb{R}^{m\times dk}\) is the vectorization of \(U^{\top}A_{i}\), i.e.,
\[M_{i,*}:=\mathrm{vec}(U^{\top}A_{i}),\ \ \forall i\in[m].\]
When working with this form of the optimization problem, inspired by a recent work [1], we apply the fast sketch-to-solve method from the low-rank matrix completion literature. With this technique, we are able to reduce the running time to \(\widetilde{O}(md^{2}k)\) (Theorem 7.8), which is much more acceptable.
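The equivalence between the matrix-valued objective (3) and the vectorized least-squares form above is easy to verify numerically. The following sketch is our own illustration (assumed sizes, and a dense least-squares solver standing in for the paper's sketching-based solver); it checks the identity \(x_{i}^{\top}UV^{\top}y_{i}=\langle U^{\top}A_{i},V^{\top}\rangle=\operatorname{vec}(U^{\top}A_{i})^{\top}\operatorname{vec}(V^{\top})\) on random data.

```python
import numpy as np

# Illustrative check (not the paper's implementation) that the per-iteration objective
#   sum_i (b_i - x_i^T U V^T y_i)^2
# equals ||M v - b||_2^2 with rows M_{i,:} = vec(U^T A_i) and v = vec(V^T).
rng = np.random.default_rng(0)
d, k, m = 30, 3, 200

W_star = rng.standard_normal((d, k)) @ rng.standard_normal((k, d))   # rank-k ground truth
X = rng.standard_normal((m, d))                                      # x_i as rows
Y = rng.standard_normal((m, d))                                      # y_i as rows
b = np.einsum('id,de,ie->i', X, W_star, Y)                           # b_i = x_i^T W_* y_i

U = np.linalg.qr(rng.standard_normal((d, k)))[0]   # current orthonormal factor U_t
V = rng.standard_normal((d, k))                    # an arbitrary candidate factor V

# Direct evaluation of the objective.
direct = np.sum((b - np.einsum('id,dk,ek,ie->i', X, U, V, Y)) ** 2)

# Vectorized form: M_{i,:} = vec(U^T A_i) = vec((U^T x_i) y_i^T), v = vec(V^T).
M = np.stack([np.outer(U.T @ X[i], Y[i]).ravel() for i in range(m)])
vectorized = np.sum((M @ V.T.ravel() - b) ** 2)
print(abs(direct - vectorized))    # agrees up to floating-point error

# The exact minimizer over v is an ordinary least-squares solution; the paper replaces
# this dense solve with a sketched solve to reach the ~O(m d^2 k) per-iteration cost.
v_hat = np.linalg.lstsq(M, b, rcond=None)[0]
```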
## 3 Preliminary

In this section, we provide preliminaries to be used in our paper.
In Section 3.1 we introduce the notations we use. In Section 3.2 and Section 3.3 we provide some randomness facts and algebra facts respectively. In Section 3.4 we introduce the important definition of the restricted isometry property. In Section 3.5 we provide results for rank-one estimation. In Section 3.6 we introduce the rank-one independent Gaussian operator. In Section 3.7 we state our notations for angles and distances. In Section 3.8 we provide some matrix concentration results.

### Notations

For \(x\in\mathbb{R}^{n}\) and \(w\in\mathbb{R}^{n}_{\geq 0}\), we define the norm \(\|x\|_{w}:=(\sum_{i=1}^{n}w_{i}x_{i}^{2})^{1/2}\). For \(n>k\), for any matrix \(A\in\mathbb{R}^{n\times k}\), we denote the spectral norm of \(A\) by \(\|A\|\), i.e., \(\|A\|:=\sup_{x\in\mathbb{R}^{k}}\|Ax\|_{2}/\|x\|_{2}\). We denote the Frobenius norm of \(A\) by \(\|A\|_{F}\), i.e., \(\|A\|_{F}:=(\sum_{i=1}^{n}\sum_{j=1}^{k}A_{i,j}^{2})^{1/2}\). For any square matrix \(A\in\mathbb{R}^{n\times n}\), we denote its trace by \(\mathrm{tr}[A]\), i.e., \(\mathrm{tr}[A]:=\sum_{i=1}^{n}A_{i,i}\). For any \(A\in\mathbb{R}^{n\times d}\) and \(B\in\mathbb{R}^{n\times d}\), we denote \(\langle A,B\rangle=\mathrm{tr}[A^{\top}B]\). For any matrix \(A\in\mathbb{R}^{n\times d}\) and vector \(x\in\mathbb{R}^{d}\), we have \[\|Ax\|_{2}^{2}=\langle Ax,Ax\rangle=\langle x,A^{\top}Ax\rangle=x^{\top}A^{\top}Ax.\] Let the SVD of \(A\in\mathbb{R}^{n\times k}\) be \(U\Sigma V^{\top}\), where \(U\in\mathbb{R}^{n\times k}\) and \(V\in\mathbb{R}^{k\times k}\) have orthonormal columns and \(\Sigma\in\mathbb{R}^{k\times k}\) is a diagonal matrix. We say the columns of \(U\) are the singular vectors of \(A\). We denote the Moore-Penrose pseudoinverse matrix of \(A\) as \(A^{\dagger}\in\mathbb{R}^{k\times n}\), i.e., \(A^{\dagger}:=V\Sigma^{-1}U^{\top}\). We call the diagonal entries \(\sigma_{1},\sigma_{2},\ldots,\sigma_{k}\) of \(\Sigma\) the singular values of \(A\). We assume they are sorted from largest to smallest, so \(\sigma_{i}\) denotes the \(i\)-th largest singular value, and we can write it as \(\sigma_{i}(A)\). For \(A\in\mathbb{R}^{n_{1}\times d_{1}}\) and \(B\in\mathbb{R}^{n_{2}\times d_{2}}\), we define the Kronecker product \(\otimes\) by \((A\otimes B)_{i_{1}+(i_{2}-1)n_{1},\,j_{1}+(j_{2}-1)n_{2}}:=A_{i_{1},j_{1}}\cdot B_{i_{2},j_{2}}\) for all \(i_{1}\in[n_{1}]\), \(j_{1}\in[d_{1}]\), \(i_{2}\in[n_{2}]\) and \(j_{2}\in[d_{2}]\). For any non-singular matrix \(A\in\mathbb{R}^{n\times n}\), we define \(A=QR\) to be its QR-decomposition, where \(Q\in\mathbb{R}^{n\times n}\) is an orthogonal matrix and \(R\in\mathbb{R}^{n\times n}\) is a non-singular lower triangular matrix. For any full-rank matrix \(A\in\mathbb{R}^{m\times n}\) with \(m\geq n\), we define \(A=QR\) to be its QR-decomposition, where \(Q\in\mathbb{R}^{m\times n}\) has orthonormal columns and \(R\in\mathbb{R}^{n\times n}\) is a non-singular lower triangular matrix. We use \(R=\mathrm{QR}(A)\in\mathbb{R}^{n\times n}\) to denote the lower triangular matrix obtained by the QR-decomposition of \(A\in\mathbb{R}^{m\times n}\). Let \(A\in\mathbb{R}^{k\times k}\) be a symmetric matrix. The eigenvalue decomposition of \(A\) is \(A=U\Lambda U^{\top}\), where \(\Lambda\) is a diagonal matrix. If a matrix \(A\) is positive semidefinite (PSD), we denote it as \(A\succeq 0\), which means \(x^{\top}Ax\geq 0\) for all \(x\). Similarly, we say \(A\succeq B\) if \(x^{\top}Ax\geq x^{\top}Bx\) for all vectors \(x\).
For any matrix \(U\in\mathbb{R}^{n\times k}\), we say \(U\) is an orthonormal basis if \(\|U_{i}\|=1\) for all \(i\in[k]\) and for any \(i\neq j\), we have \(\langle U_{i},U_{j}\rangle=0\). Here for each \(i\in[k]\), we use \(U_{i}\) to denote the \(i\)-th column of matrix \(U\). For any \(U\in\mathbb{R}^{n\times k}\) (suppose \(n>k\))which is an orthonormal basis, we define \(U_{\perp}\in\mathbb{R}^{n\times(n-k)}\) to be another orthonormal basis that, \[UU^{\top}+U_{\perp}U_{\perp}^{\top}=I_{n}\] and \[U^{\top}U_{\perp}=\mathbf{0}^{k\times(n-k)}\] where we use \(\mathbf{0}^{k\times(n-k)}\) to denote a \(k\times(n-k)\) all-zero matrix. We say a vector \(x\) lies in the span of \(U\), if there exists a vector \(y\) such that \(x=Uy\). We say a vector \(z\) lies in the complement of span of \(U\), if there exists a vector \(w\) such that \(z=U_{\perp}w\). Then it is obvious that \(\langle x,z\rangle=x^{\top}z=z^{\top}x=0\). For a matrix \(A\), we define \(\sigma_{\min}(A):=\min_{x}\|Ax\|_{2}/\|x\|_{2}\). Equivalently, \(\sigma_{\min}(A):=\min_{x:\|x\|_{2}=1}\|Ax\|_{2}\). Similarly, we define \(\sigma_{\max}(A):=\max_{x}\|Ax\|_{2}/\|x\|_{2}\). Equivalently, \(\sigma_{\max}(A):=\max_{x:\|x\|_{2}=1}\|Ax\|_{2}\) Let \(A_{1},\cdots,A_{n}\) denote a list of square matrices. Let \(S\) denote a block diagonal matrix \(S=\begin{bmatrix}A_{1}&&\\ &A_{2}&&\\ &&\ddots&\\ &&&A_{n}\end{bmatrix}\). Then \(\|S\|=\max_{i\in[n]}\|A_{i}\|\). We use \(\Pr[]\) to denote probability. We use \(\mathbb{E}[]\) to denote expectation. Let \(a\) and \(b\) denote two random variables. Let \(f(a)\) denote some event that depends on \(a\) (for example \(f(a)\) can be \(a=0\) or \(a\geq 10\).) Let \(g(b)\) denote some event that depends on \(b\). We say \(a\) and \(b\) are independent if \(\Pr[f(a)\text{ and }g(b)]=\Pr[f(a)]\cdot\Pr[g(b)]\). We say \(a\) and \(b\) are not independent if \(\Pr[f(a)\text{ and }g(b)]\neq\Pr[f(a)]\cdot\Pr[g(b)]\). Usually if \(a\) and \(b\) are independent, then we also have \(\mathbb{E}[ab]=\mathbb{E}[a]\cdot\mathbb{E}[b]\). We say a random variable \(x\) is symmetric if \(\Pr[x=u]=\Pr[x=-u]\). For any random variable \(x\sim\mathcal{N}(\mu,\sigma^{2})\). This means \(\mathbb{E}[x]=\mu\) and \(\mathbb{E}[x^{2}]=\sigma^{2}\). We use \(\widetilde{O}(f)\) to denote \(f\cdot\text{poly}(\log f)\). **Definition 3.1**.: _We use \(\mathcal{T}_{\mathrm{mat}}(a,b,c)\) to denote the time of multiplying an \(a\times b\) matrix with another \(b\times c\) matrix._ We use \(\omega\) to denote the exponent of matrix multiplication, i.e., \(n^{\omega}=\mathcal{T}_{\mathrm{mat}}(n,n,n)\). ### Randomness Facts **Fact 3.2**.: _We have_ * _Part 1. Expectation has linearity, i.e.,_ \(\mathbb{E}[\sum_{i=1}^{n}x_{i}]=\sum_{i=1}^{n}\mathbb{E}[x_{i}]\)_._ * _Part 2. For any random vectors_ \(x\) _and_ \(y\)_, if_ \(x\) _and_ \(y\) _are independent, then for any fixed function_ \(f\)_, we have_ \(\mathbb{E}_{x,y}[f(x)f(y)]=\mathbb{E}_{x}[f(x)]\cdot\mathbb{E}_{y}[f(y)]\)_._ * _Part 3. Let_ \(A\in\mathbb{R}^{d\times d}\) _denote a fixed matrix. For any fixed function_ \(f:\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\)_, we have_ \(\mathbb{E}_{x}[f(x)\cdot A]=\mathbb{E}_{x}[f(x)]\cdot A\)_._ * _Part 4. Given_ \(n\) _events_ \(A_{1},A_{2},\cdots A_{n}\)_. For each_ \(i\in[n]\)_, if_ \(\Pr[A_{i}]\geq 1-\delta_{i}\)_. 
Then taking a union bound over all the_ \(n\) _events, we have_ \(\Pr[A_{1}\text{ and }A_{2}\cdots A_{n}]\geq 1-\sum_{i=1}^{n}\delta_{i}\)_._ ### Algebra Facts We state some standard facts and omit their proofs, since they're very standard. **Fact 3.3**.: _We have_ * _For any orthonormal basis_ \(U\in\mathbb{R}^{n\times k}\)_, we have_ \(\|Ux\|_{2}=\|x\|_{2}\)_._ * _For any orthonormal basis_ \(U\in\mathbb{R}^{n\times k}\)_, we have_ \(\|U\|_{F}\leq\sqrt{k}\)_._ * _For any diagonal matrix_ \(\Sigma\in\mathbb{R}^{k\times k}\) _and any vector_ \(x\in\mathbb{R}^{k}\)_, we have_ \(\|\Sigma x\|_{2}\geq\sigma_{\min}(\Sigma)\|x\|_{2}\)_._ * _For symmetric matrix_ \(A\)_, we have_ \(\sigma_{\min}(A)=\min_{z:\|z\|_{2}=1}z^{\top}Az\)_._ * _For symmetric matrix_ \(A\)_, we have_ \(\sigma_{\min}(A)\|z\|_{2}^{2}\leq z^{\top}Az\) _for all vectors_ \(z\)_._ * _For symmetric matrix_ \(A\)_, we have_ \(\sigma_{\max}(A)\|z\|_{2}^{2}\geq z^{\top}Az\) _for all vectors_ \(z\)_._ * _For any matrix_ \(A\)_, we have_ \(\|A\|\leq\|A\|_{F}\)_._ * _For any square matrix_ \(A\in\mathbb{R}^{k\times k}\) _and vector_ \(x\in\mathbb{R}^{k}\)_, we have_ \(x^{\top}Ax=\sum_{i=1}^{k}\sum_{j=1}^{k}x_{i}A_{i,j}x_{j}=\sum_{i=1}^{k}x_{i}A_{ i,i}x_{i}+\sum_{i\neq j}x_{i}A_{i,j}x_{j}\)_._ * _For any square and invertible matrix_ \(R\)_, we have_ \(\|R^{-1}\|=\sigma_{\min}(R)^{-1}\)__ * _For any matrix_ \(A\) _and for any unit vector_ \(x\)_, we have_ \(\|A\|\geq\|Ax\|_{2}\)_._ * _For any matrix_ \(A\)_,_ \(\|AA^{\top}\|=\|A^{\top}A\|\)_._ ### Restricted Isometry Property **Definition 3.4** (Restricted isometry property (Rip), see Definition 1 in [25]).: _A linear operator \(\mathcal{A}:\mathbb{R}^{d\times d}\to\mathbb{R}^{m}\) satisfies RIP iff, for \(\forall W\in\mathbb{R}^{d\times d}\) s.t. \(\operatorname{rank}(W)\leq k\), the following holds:_ \[(1-\epsilon_{k})\cdot\|W\|_{F}^{2}\leq\|\mathcal{A}(W)\|_{F}^{2}\leq(1+ \epsilon_{k})\cdot\|W\|_{F}^{2}\] _where \(\epsilon_{k}>0\) is a constant dependent only on \(k\)._ ### Rank-one Estimation The goal of matrix sensing is to design a linear operator \(\mathcal{A}:\mathbb{R}^{d\times d}\to\mathbb{R}^{m}\) and a recovery algorithm so that a low-rank matrix \(W_{*}\in\mathbb{R}^{d\times d}\) can be recovered exactly using \(\mathcal{A}(W_{*})\). **Definition 3.5** (Low-rank matrix estimation using rank one measurements).: _Given a ground-truth matrix \(W_{*}\in\mathbb{R}^{d\times d}\). Let \((x_{1},y_{1}),\cdots,(x_{m},y_{m})\in\mathbb{R}^{d}\times\mathbb{R}^{d}\) denote \(m\) pair of feature vectors. Let \(b\in\mathbb{R}^{m}\) be defined_ \[b_{i}=x_{i}^{\top}W_{*}y_{i},\quad\forall i\in[m].\] _The goal is to use \(b\in\mathbb{R}^{m}\) and \(\{(x_{i},y_{i})\}_{i\in[m]}\subset\mathbb{R}^{d}\times\mathbb{R}^{d}\) to recover \(W_{*}\in\mathbb{R}^{d\times d}\)._ We propose two different kinds of rank-one measurement operators based on Gaussian distribution. ### Rank-one Independent Gaussian Operator We formally define Gaussian independent operator, here. **Definition 3.6** (Gaussian Independent (GI) Operator).: _Let \((x_{1},y_{1}),\cdots,(x_{m},y_{m})\subset\mathbb{R}^{d}\times\mathbb{R}^{d}\) denote i.i.d. 
samples from Gaussian distribution._ _For each \(i\in[m]\), we define \(A_{i}\in\mathbb{R}^{d\times d}\) as follows_ \[A_{i}:=x_{i}y_{i}^{\top}.\] _We define \(\mathcal{A}_{\mathrm{GI}}\in\mathbb{R}^{d\times md}\) as follows:_ \[\mathcal{A}_{\mathrm{GI}}:=\begin{bmatrix}A_{1}&A_{2}&\cdots&A_{m}\end{bmatrix}.\] _Here \(\mathrm{GI}\) denotes Gaussian Independent._ ### Matrix Angle and Distance We list several basic definitions and tools in literature, e.g., see [10]. **Definition 3.7** (Definition 4.1 in [10]).: _Let \(X,Y\in\mathbb{R}^{n\times k}\) denote two matrices._ _For any matrix \(X\), and for orthonormal matrix \(Y\) (\(Y^{\top}Y=I_{k}\)) we define_ * \(\tan\theta(Y,X):=\|Y_{\perp}^{\top}X(Y^{\top}X)^{-1}\|\)__ _For orthonormal matrices \(Y\) and \(X\) (\(Y^{\top}Y=I_{k}\) and \(X^{\top}X=I_{k}\)), we define_ * _It is obvious that_ \(\cos(Y,X)=1/\|(Y^{\top}X)^{-1}\|\) _and_ \(\cos(Y,X)\leq 1\)_._ * _It is obvious that_ \(\sin\theta(Y,X)=\|Y_{\perp}Y_{\perp}^{\top}X\|=\|Y_{\perp}^{\top}X\|\) _and_ \(\sin\theta(Y,X)\leq 1\)_._ * _From Lemma_ 3.9_, we know that_ \(\sin^{2}\theta(Y,X)+\cos^{2}\theta(Y,X)=1\)_._ * \(\mathrm{dist}(Y,X):=\sin\theta(Y,X)\)__ **Lemma 3.8** (Lemma A.7 in [10]).: _Let \(X,Y\in\mathbb{R}^{n\times k}\) be orthogonal matrices, then_ \[\tan\theta(Y,X)=\frac{\sin\theta(Y,X)}{\cos\theta(Y,X)}.\] **Lemma 3.9** (Lemma A.8 in [10]).: _Let \(X,Y\in\mathbb{R}^{n\times k}\) be orthogonal matrices, then_ \[\sin^{2}\theta(Y,X)+\cos^{2}\theta(Y,X)=1.\] ### Matrix Concentration **Theorem 3.10** (Matrix Bernstein Inequality, Theorem 1.6 of [14]).: _Given a finite sequence \(\{X_{1},\cdots X_{m}\}\subset\mathbb{R}^{n_{1}\times n_{2}}\) of independent, random matrices all with the dimension of \(n_{1}\times n_{2}\)._ _Let \(Z=\sum_{i=1}^{m}X_{i}\)._ _Assume that_ \[\mathbb{E}[X_{i}]=0,\forall i\in[m],\|X_{i}\|\leq M,\forall i\in[m]\] _Let \(\operatorname{Var}[Z]\) be the matrix variances statistic of sum_ \[\operatorname{Var}[Z]=\max\{\|\sum_{i=1}^{m}\mathbb{E}[X_{i}X_{i}^{\top}]\|, \|\sum_{i=1}^{m}\mathbb{E}[X_{i}^{\top}X_{i}]\|\}\] _Then it holds that_ \[\mathbb{E}[\|Z\|]\leq(2\operatorname{Var}[Z]\cdot\log(n_{1}+n_{2}))^{1/2}+M \log(n_{1}+n_{3})/3\] _Further, for all \(t>0\)_ \[\Pr[\|Z\|\geq t]\leq(n_{1}+n_{2})\cdot\exp(-\frac{t^{2}/2}{ \operatorname{Var}[Z]+Mt/3})\] ## 4 Analysis Here in this section, we provide analysis for our proposed algorithm. In Section 4.1, we provide definitions in our algorithm analysis. In Section 4.2 we define the operators to be used. In Section 4.3 we provide our main theorem together with its proof. In Section 4.4 we introduce our main induction hypothesis. ### Definitions **Definition 4.1**.: _We define \(W_{*}\in\mathbb{R}^{d\times d}\) as follows_ \[W_{*}=U_{*}\Sigma_{*}V_{*}^{\top}\] _where \(U_{*}\in\mathbb{R}^{n\times k}\) are orthonormal columns, and \(V_{*}\in\mathbb{R}^{n\times k}\) are orthonormal columns. Let \(\sigma_{1}^{*},\sigma_{2}^{*},\cdots\sigma_{k}^{*}\) denote the diagonal entries of diagonal matrix \(\Sigma_{*}\in\mathbb{R}^{d\times d}\)._ **Definition 4.2** (Condition number).: _Let \(W_{*}\) be defined as Definition 4.1. 
We define \(\kappa\) to the condition number of \(W_{*}\), i.e.,_ \[\kappa:=\sigma_{1}/\sigma_{k}.\] _It is obvious that \(\kappa\geq 1\)._ **Definition 4.3** (Measurements).: _For each \(i\in[m]\), let \(x_{i},y_{i}\) denote samples from \(\mathcal{N}(0,I_{d})\)._ _For each \(i\in[m]\), we define_ \[A_{i}=x_{i}y_{i}^{\top}\] _and_ \[b_{i}=x_{i}^{\top}W_{*}y_{i}.\] ### Operators **Definition 4.4** (Initialization).: _For each \(i\in[m]\), let \(A_{i}\) and \(b_{i}\) be defined as Definition 4.3._ _We define \(W_{0}:=\frac{1}{m}\sum_{i=1}^{m}b_{i}A_{i}\)._ _We say initialization matrix \(W_{0}\in\mathbb{R}^{d\times d}\) is an \(\epsilon\)-good operator if_ \[\|W_{0}-W_{*}\|\leq\|W_{*}\|\cdot\epsilon.\] **Definition 4.5** (Concentration of operators \(B_{x},B_{y}\)).: _For any vectors \(u,v\), we define_ * \(B_{x}:=\frac{1}{m}\sum_{l=1}^{m}(y_{l}^{\top}v)^{2}x_{l}x_{l}^{\top}\)__ * \(B_{y}:=\frac{1}{m}\sum_{l=1}^{m}(x_{l}^{\top}u)^{2}y_{l}y_{l}^{\top}\)__ _We say \(B=(B_{x},B_{y})\) is \(\epsilon\)-operator if the following holds:_ * \(\|B_{x}-I\|\leq\epsilon\)__ * \(\|B_{y}-I\|\leq\epsilon\)__ **Definition 4.6** (Concentration of operators \(G_{x},G_{y}\)).: _For any vectors \(u,v\in\mathbb{R}^{d}\). We define_ * \(G_{x}:=\frac{1}{m}\sum_{l=1}^{m}(y_{l}^{\top}v)(y_{l}^{\top}v_{\perp})x_{l}x_{ l}^{\top}\)__ * \(G_{y}:=\frac{1}{m}\sum_{l=1}^{m}(x_{l}^{\top}u)(x_{l}^{\top}u_{\perp})y_{l}y_{ l}^{\top}\)__ \(u,u_{\perp}\in\mathbb{R}^{d},v,v_{\perp}\in\mathbb{R}^{d}\) are unit vectors, s.t., \(u^{\top}u_{\perp}=0\) and \(v^{\top}v_{\perp}=0\). We say \(G=(G_{x},G_{y})\) is \(\epsilon\)-operator if the following holds_ * \(\|G_{x}\|\leq\epsilon\)_,_ * \(\|G_{y}\|\leq\epsilon\)_._ ### Main Result We prove our main convergence result as follows: **Theorem 4.7** (Formal version of Theorem 1.1).: _Let \(W_{*}\in\mathbb{R}^{d\times d}\) be defined as Definition 4.1._ _Also, let \(\mathcal{A}:\mathbb{R}^{d\times d}\to\mathbb{R}^{m}\) be a linear measurement operator parameterized by m matrices, i.e., \(\mathcal{A}=\{A_{1},A_{2},\cdots,A_{m}\}\) where \(A_{l}=x_{l}y_{l}^{\top}\). Let \(\mathcal{A}(W)\) be as given by_ \[b=\mathcal{A}(W)=\begin{bmatrix}\operatorname{tr}[A_{1}^{\top}W]&\operatorname {tr}[A_{2}^{\top}W]&\cdots&\operatorname{tr}[A_{m}^{\top}W]\end{bmatrix}^{\top}\] _If the following conditions hold_ * \(\epsilon=0.001/(k^{1.5}\kappa)\)__ * \(T=100\log(\kappa k/\epsilon_{0})\)__ * _Let_ \(\{(b_{i},A_{i})\}_{i\in[m]}\) _be an_ \(\epsilon\)_-init operator (Definition_ 4.4_)._ * _Let_ \(B\) _be an_ \(\epsilon\)_-operator (Definition_ 4.5_)._ * _Let_ \(G\) _be an_ \(\epsilon\)_-operator(Definition_ 4.6_)._ _Then, after \(T\)-iterations of the alternating minimization method (Algorithm 1), we obtain \(W_{T}=U_{T}V_{T}^{\top}\) s.t.,_ \[\|W_{T}-W_{*}\|\leq\epsilon_{0}.\] Proof.: We first present the update equation for \(\widehat{V}_{t+1}\in\mathbb{R}^{d\times k}\). 
Also, note that using the initialization property (the initialization condition stated in Theorem 4.7), we get \[\|W_{0}-W_{*}\|\leq\epsilon\sigma_{1}^{*}\leq\frac{\sigma_{k}^{*}}{100}.\] Now, using the standard \(\sin\) theta theorem for singular vector perturbation [14], we get: \[\operatorname{dist}(U_{0},U_{*}) \leq\frac{1}{100}\] \[\operatorname{dist}(V_{0},V_{*}) \leq\frac{1}{100}\] After \(T\) iterations (via Lemma 4.8), we obtain \[\operatorname{dist}(U_{T},U_{*}) \leq(1/4)^{T}\] \[\operatorname{dist}(V_{T},V_{*}) \leq(1/4)^{T}\] which implies that \[\|W_{T}-W_{*}\|\leq\epsilon_{0}.\]

### Main Induction Hypothesis

**Lemma 4.8** (Induction hypothesis).: _We define \(\epsilon_{d}:=1/10\). We assume that \(\epsilon=0.001/(k^{1.5}\kappa)\). For all \(t\in[T]\), we have the following results._ * _Part 1. If_ \(\operatorname{dist}(U_{t},U_{*})\leq\frac{1}{4}\operatorname{dist}(V_{t},V_{*})\leq\epsilon_{d}\)_, then we have_ * \(\operatorname{dist}(V_{t+1},V_{*})\leq\frac{1}{4}\operatorname{dist}(U_{t},U_{*})\leq\epsilon_{d}\)__ * _Part 2. If_ \(\operatorname{dist}(V_{t+1},V_{*})\leq\frac{1}{4}\operatorname{dist}(U_{t},U_{*})\leq\epsilon_{d}\)_, then we have_ * \(\operatorname{dist}(U_{t+1},U_{*})\leq\frac{1}{4}\operatorname{dist}(V_{t+1},V_{*})\leq\epsilon_{d}\)__ Proof.: **Proof of Part 1.** Recall that for each \(i\in[m]\), we have \[b_{i}=x_{i}^{\top}W_{*}y_{i}=\langle x_{i}y_{i}^{\top},W_{*}\rangle=\langle A_{i},W_{*}\rangle=\operatorname{tr}[A_{i}^{\top}W_{*}].\] Recall that \[\widehat{V}_{t+1} = \arg\min_{V\in\mathbb{R}^{d\times k}}\sum_{i=1}^{m}(b_{i}-x_{i}^{\top}U_{t}V^{\top}y_{i})^{2}\] \[= \arg\min_{V\in\mathbb{R}^{d\times k}}\sum_{i=1}^{m}(x_{i}^{\top}W_{*}y_{i}-x_{i}^{\top}U_{t}V^{\top}y_{i})^{2}\] Hence, by setting the gradient of this objective function to zero, and letting \(F\in\mathbb{R}^{d\times k}\) be defined as in Definition 6.1, we have that \(\widehat{V}_{t+1}\in\mathbb{R}^{d\times k}\) can be written as follows: \[\widehat{V}_{t+1}=W_{*}^{\top}U_{t}-F \tag{4}\] where \(F\in\mathbb{R}^{d\times k}\) is the error matrix \[F=\begin{bmatrix}F_{1}&F_{2}&\cdots&F_{k}\end{bmatrix}\] where \(F_{i}\in\mathbb{R}^{d}\) for each \(i\in[k]\). Then, using the definition of \(F\) in Definition 6.1, we get: \[\left[\begin{array}{c}F_{1}\\ \vdots\\ F_{k}\end{array}\right]=B^{-1}(BD-C)S\cdot\operatorname{vec}(V_{*}) \tag{5}\] where \(\operatorname{vec}(V_{*})\in\mathbb{R}^{dk}\) is the vectorization of the matrix \(V_{*}\in\mathbb{R}^{d\times k}\). Now, recall that in the \((t+1)\)-th iteration of Algorithm 1, \(V_{t+1}\in\mathbb{R}^{d\times k}\) is obtained by the QR decomposition of \(\widehat{V}_{t+1}\in\mathbb{R}^{d\times k}\). Using the notation mentioned above, \[\widehat{V}_{t+1}=V_{t+1}R \tag{6}\] where \(R\in\mathbb{R}^{k\times k}\) denotes the lower triangular matrix \(R_{t+1}\in\mathbb{R}^{k\times k}\) obtained by the QR decomposition of \(\widehat{V}_{t+1}\in\mathbb{R}^{d\times k}\). We can rewrite \(V_{t+1}\in\mathbb{R}^{d\times k}\) as follows \[V_{t+1} =\widehat{V}_{t+1}R^{-1}\] \[=(W_{*}^{\top}U_{t}-F)R^{-1} \tag{7}\] where the first step follows from Eq. (6), and the last step follows from Eq. (4).
Multiplying both sides by \(V_{*,\perp}\in\mathbb{R}^{d\times(d-k)}\), where \(V_{*,\perp}\in\mathbb{R}^{d\times(d-k)}\) is a fixed orthonormal basis of the subspace orthogonal to \(\operatorname{span}(V_{*})\), and using Claim 6.3, \[(V_{*,\perp})^{\top}V_{t+1}=-(V_{*,\perp})^{\top}FR^{-1} \tag{8}\] Thus, we get: \[\operatorname{dist}(V_{t+1},V_{*}) =\|(V_{*,\perp})^{\top}V_{t+1}\|\] \[=\|(V_{*,\perp})^{\top}FR^{-1}\|\] \[\leq\|FR^{-1}\|\] \[\leq\|F\|\cdot\|R^{-1}\|\] \[\leq\,0.001\sigma_{k}^{*}\operatorname{dist}(U_{t},U_{*})\cdot\|R^{-1}\|\] \[\leq\,0.001\sigma_{k}^{*}\operatorname{dist}(U_{t},U_{*})\cdot 2(\sigma_{k}^{*})^{-1}\] \[\leq\,0.01\cdot\operatorname{dist}(U_{t},U_{*})\] where the first step follows from the definition of \(\operatorname{dist}\) (see Definition 3.7), the second step follows from Eq. (8), the third step follows from \(V_{*,\perp}\) being an orthonormal basis, the fourth step follows from Fact 3.3, the fifth step follows from Lemma 6.4, the sixth step follows from Lemma 6.5 (in order to apply this lemma, we need the condition in the Part 1 statement to hold), and the last step follows from simple algebra. **Proof of Part 2.** This part follows by an argument symmetric to Part 1.

## 5 Measurements are Good Operators

In this section, we provide a detailed analysis of our operators. First, in Section 5.1 we introduce some standard results for truncated Gaussians. In Section 5.2 and Section 5.3 we bound the terms \(\|Z_{i}\|\) and \(\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\|\) respectively. In Section 5.4 we state our main lemma. In Section 5.5 we show that our initialization is good. In Section 5.6 we show that our two operators are good.

### Tools for Gaussian

We state a standard tool from the literature. **Lemma 5.1** (Lemma 1 in [11] ).: _Let \(X\sim\mathcal{X}_{k}^{2}\) be a chi-squared distributed random variable with \(k\) degrees of freedom, where each of the \(k\) underlying Gaussian components has zero mean and variance \(\sigma^{2}\)._ _Then it holds that_ \[\Pr[X-k\sigma^{2}\geq(2\sqrt{kt}+2t)\sigma^{2}] \leq\,\exp{(-t)}\] \[\Pr[k\sigma^{2}-X\geq 2\sqrt{kt}\sigma^{2}] \leq\,\exp{(-t)}\] _Further, if \(k\geq\Omega(\epsilon^{-2}t)\) and \(t\geq\Omega(\log(1/\delta))\), then we have_ \[\Pr[|X-k\sigma^{2}|\geq\epsilon k\sigma^{2}]\leq\delta.\] We state a standard fact for the 4-th moment of the Gaussian distribution. **Fact 5.2**.: _Let \(x\sim\mathcal{N}(0,\sigma^{2})\), then it holds that \(\mathbb{E}_{x\sim\mathcal{N}(0,\sigma^{2})}[x^{4}]=3\sigma^{4}\)._ **Lemma 5.3**.: _Let \(x\sim\mathcal{N}(0,\sigma^{2}I_{d})\) denote a random Gaussian vector. Then we have_ * _Part 1_ \[\mathbb{E}[xx^{\top}xx^{\top}]=(d+2)\sigma^{4}\cdot I_{d}\] * _Part 2_ \[\|\,\mathbb{E}[xx^{\top}xx^{\top}]\|=(d+2)\sigma^{4}\] Proof.: We define \(A:=xx^{\top}xx^{\top}\). Then we have \[A_{i,j}=x_{i}\sum_{l=1}^{d}x_{l}x_{l}x_{j}\] For \(i=j\), we have \[\mathbb{E}[A_{i,i}]= \,\mathbb{E}[x_{i}\sum_{l=1}^{d}x_{l}x_{l}x_{i}]\] \[= \,\mathbb{E}[x_{i}(\sum_{l=1}^{i-1}x_{l}x_{l}+x_{i}x_{i}+\sum_{l=i+1}^{d}x_{l}x_{l})x_{i}]\] \[= \,\mathbb{E}[x_{i}^{4}]+\sum_{l\in[d]\setminus i}\mathbb{E}[x_{l}^{2}x_{i}^{2}]\] \[= \,\mathbb{E}[x_{i}^{4}]+\sum_{l\in[d]\setminus i}\mathbb{E}[x_{l}^{2}]\,\mathbb{E}[x_{i}^{2}]\] \[= \,\mathbb{E}[x_{i}^{4}]+(d-1)\sigma^{4}\] \[= \,3\sigma^{4}+(d-1)\sigma^{4}\] \[= \,(d+2)\sigma^{4}\] where the third step follows from linearity of expectation (Fact 3.2), the fourth step follows from \(x_{l}\) and \(x_{i}\) being independent, and the fifth step follows from \(\mathbb{E}_{z\sim\mathcal{N}(0,\sigma^{2})}[z^{4}]=3\sigma^{4}\) (Fact 5.2).
For \(i\neq j\), we have \[\mathbb{E}[A_{i,j}]= \,\mathbb{E}[x_{i}\sum_{l=1}^{d}x_{l}x_{l}x_{j}]\] \[= \,\mathbb{E}[x_{i}x_{j}^{3}]+\mathbb{E}[x_{i}^{3}x_{j}]+\sum_{l\in[d]\setminus i,j}\mathbb{E}[x_{i}x_{l}^{2}x_{j}]\] \[= \,0\] where the second step follows from linearity of expectation (Fact 3.2), and the last step follows from the independence of the coordinates together with \(\mathbb{E}[x_{i}]=\mathbb{E}[x_{i}^{3}]=0\). **Fact 5.4** (Rotation invariance property of Gaussian).: _Let \(A^{\top}\in\mathbb{R}^{d\times k}\) with \(k<d\) denote an orthonormal basis (i.e., \(AA^{\top}=I_{k}\)). Then for a Gaussian \(x\sim\mathcal{N}(0,\sigma^{2}I_{d})\), we have_ \[Ax\sim\mathcal{N}(0,\sigma^{2}I_{k}).\] Proof.: Let \(y:=Ax\in\mathbb{R}^{k}\), then \[y_{i}=\sum_{j=1}^{d}A_{ij}x_{j},\ \ \forall i\in[k].\] By the definition of the Gaussian distribution, \[y_{i}\sim\mathcal{N}(0,\sigma^{2}\sum_{j=1}^{d}A_{ij}^{2}).\] Recall that \(A^{\top}\) is an orthonormal basis, so each row of \(A\) has unit norm, i.e., \[\sum_{j=1}^{d}A_{ij}^{2}=1,\ \ \forall i\in[k].\] Moreover, distinct coordinates \(y_{i}\) and \(y_{i^{\prime}}\) are uncorrelated because the rows of \(A\) are orthogonal, and hence independent since they are jointly Gaussian. Thus we have \[y\sim\mathcal{N}(0,\sigma^{2}I_{k}).\]

### Bounding \(\|Z_{i}\|\)

**Lemma 5.5**.: _Let \(x_{i}\) denote a random Gaussian vector sampled from \(\mathcal{N}(0,\sigma^{2}I_{d})\). Let \(y_{i}\) denote a random Gaussian vector sampled from \(\mathcal{N}(0,\sigma^{2}I_{d})\)._ _Let \(U_{*},V_{*}\in\mathbb{R}^{d\times k}\)._ _We define_ \[Z_{i}:=x_{i}x_{i}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}y_{i}y_{i}^{\top},\quad\forall i\in[m]\] * _Part 1. We have_ \[\Pr[\|Z_{i}\|\leq C^{2}k^{2}\log^{2}(d/\delta)\sigma^{4}\cdot\sigma_{1}^{*}]\geq 1-\delta/\operatorname{poly}(d).\] * _Part 2. If_ \(k\geq\Omega(\log(d/\delta))\) _we have_ \[\Pr[\|Z_{i}\|\leq C^{2}k^{2}\sigma^{4}\cdot\sigma_{1}^{*}]\geq 1-\delta/\operatorname{poly}(d).\] Proof.: **Proof of Part 1.** We define \[a_{i} :=U_{*}^{\top}x_{i}\in\mathbb{R}^{k}\] \[b_{i} :=V_{*}^{\top}y_{i}\in\mathbb{R}^{k}\] Since \(U_{*}\) and \(V_{*}\) are orthonormal bases, due to the rotation invariance property of the Gaussian (Fact 5.4), we know that \(a_{i}\sim\mathcal{N}(0,\sigma^{2}I_{k})\) and \(b_{i}\sim\mathcal{N}(0,\sigma^{2}I_{k})\). We also know that \[x_{i}=(U_{*}^{\top})^{\dagger}a_{i}=U_{*}a_{i}\] \[y_{i}=(V_{*}^{\top})^{\dagger}b_{i}=V_{*}b_{i}\] Thus, by replacing \(x_{i},y_{i}\) with \(a_{i},b_{i}\), we have \[\|Z_{i}\| =\|x_{i}x_{i}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}y_{i}y_{i}^{\top}\|\] \[=\|U_{*}a_{i}a_{i}^{\top}U_{*}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}V_{*}b_{i}b_{i}^{\top}V_{*}^{\top}\|\] \[=\|U_{*}a_{i}a_{i}^{\top}\Sigma_{*}b_{i}b_{i}^{\top}V_{*}^{\top}\|\] \[\leq\|U_{*}\|\cdot\|a_{i}a_{i}^{\top}\|\cdot\|\Sigma_{*}\|\cdot\|b_{i}b_{i}^{\top}\|\cdot\|V_{*}^{\top}\|\] \[\leq\sigma_{1}^{*}\cdot\|a_{i}\|_{2}^{2}\cdot\|b_{i}\|_{2}^{2}\] where the second step follows from replacing \(x,y\) by \(a,b\), the third step follows from \(U_{*}^{\top}U_{*}=I\) and \(V_{*}^{\top}V_{*}=I\), and the fourth step follows from Fact 3.3. Due to the tail bound for the Gaussian distribution, we know that \[\Pr[|a_{i,j}|>\sqrt{C\log(d/\delta)}\sigma]\leq\delta/\operatorname{poly}(d)\] Taking a union bound over the \(k\) coordinates, we know that \[\Pr[\|a_{i}\|_{2}^{2}\leq Ck\log(d/\delta)\sigma^{2}]\geq 1-\delta/\operatorname{poly}(d)\] Similarly, we can prove it for \(\|b_{i}\|_{2}^{2}\). **Proof of Part 2.** Since \(k\geq\Omega(\log(d/\delta))\), we can use Lemma 5.1 to obtain a better bound.
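The rotation-invariance reduction used in the proof above (Fact 5.4) is easy to verify numerically. The following short sketch is our own illustration with assumed sizes; it checks empirically that \(U_{*}^{\top}x\) behaves like a standard \(k\)-dimensional Gaussian when \(U_{*}\) has orthonormal columns.

```python
import numpy as np

# Empirical check of Fact 5.4: if U has orthonormal columns, then U^T x ~ N(0, I_k)
# for x ~ N(0, I_d). We verify that the sample covariance of a = U^T x is close to I_k.
rng = np.random.default_rng(0)
d, k, n_samples = 50, 3, 200_000

U, _ = np.linalg.qr(rng.standard_normal((d, k)))     # orthonormal basis U in R^{d x k}
X = rng.standard_normal((n_samples, d))              # rows are x ~ N(0, I_d)
A = X @ U                                            # rows are a = U^T x in R^k

emp_cov = A.T @ A / n_samples
print(np.linalg.norm(emp_cov - np.eye(k)))           # small, on the order of 1/sqrt(n_samples)
```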
### Bounding \(\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\|\)

**Lemma 5.6**.: _We can show that_ \[\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\|\leq C^{2}k^{2}\sigma^{4}(\sigma_{1}^{*})^{2}.\] Proof.: Using Lemma 5.3, \[\|\operatorname{\mathbb{E}}_{a\sim\mathcal{N}(0,\sigma^{2}I_{k})}[a_{i}a_{i}^{\top}a_{i}a_{i}^{\top}]\|\leq Ck\sigma^{2}.\] Thus, we have \[\operatorname{\mathbb{E}}[a_{i}a_{i}^{\top}a_{i}a_{i}^{\top}]\preceq Ck\sigma^{2}\cdot I_{k}\] Then, we have \[\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\| =\|\operatorname{\mathbb{E}}_{x,y}[x_{i}x_{i}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}y_{i}y_{i}^{\top}y_{i}y_{i}^{\top}V_{*}\Sigma_{*}U_{*}^{\top}x_{i}x_{i}^{\top}]\|\] \[=\|\operatorname{\mathbb{E}}_{a,b}[U_{*}a_{i}a_{i}^{\top}U_{*}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}V_{*}b_{i}b_{i}^{\top}V_{*}^{\top}V_{*}b_{i}b_{i}^{\top}V_{*}^{\top}V_{*}\Sigma_{*}U_{*}^{\top}U_{*}a_{i}a_{i}^{\top}U_{*}^{\top}]\|\] \[=\|\operatorname{\mathbb{E}}_{a,b}[U_{*}a_{i}a_{i}^{\top}\Sigma_{*}b_{i}b_{i}^{\top}b_{i}b_{i}^{\top}\Sigma_{*}a_{i}a_{i}^{\top}U_{*}^{\top}]\|\] \[\leq\|\operatorname{\mathbb{E}}_{a,b}[a_{i}a_{i}^{\top}\Sigma_{*}b_{i}b_{i}^{\top}b_{i}b_{i}^{\top}\Sigma_{*}a_{i}a_{i}^{\top}]\|\] \[=\|\operatorname{\mathbb{E}}_{a}[a_{i}a_{i}^{\top}\Sigma_{*}\operatorname{\mathbb{E}}_{b}[b_{i}b_{i}^{\top}b_{i}b_{i}^{\top}]\Sigma_{*}a_{i}a_{i}^{\top}]\|\] \[\leq C^{2}k^{2}\sigma^{4}(\sigma_{1}^{*})^{2} \tag{9}\] where the first step follows from the definition of \(Z_{i}\), the second step follows from replacing \(x_{i},y_{i}\) with \(a_{i},b_{i}\), the third step follows from \(U_{*},V_{*}\) having orthonormal columns, the fourth step follows from \(\|U_{*}\|\leq 1\), the fifth step follows from the independence of \(a_{i}\) and \(b_{i}\), and the last step follows from using Lemma 5.3 twice.

### Main Results

We prove our main result for the measurements. **Theorem 5.7** (Formal version of Theorem 1.1, Measurements are good operators).: _Let \(\{A_{i},b_{i}\}_{i\in[m]}\) denote measurements defined as in Definition 4.3._ _Assuming the following conditions hold_ * \(k=\Omega(\log(d/\delta))\)__ * \(m=\Omega(\epsilon^{-2}(d+k^{2})\log(d/\delta))\)__ _then_ * _the property in Definition 4.4 holds, i.e., the initialization is an_ \(\epsilon\)_-good operator;_ * _the property in Definition 4.5 holds, i.e.,_ \(B\) _is an_ \(\epsilon\)_-operator;_ * _the property in Definition 4.6 holds, i.e.,_ \(G\) _is an_ \(\epsilon\)_-operator;_ _with probability at least \(1-\delta/\operatorname{poly}(d)\)._ Proof.: Using Lemma 5.8 and Lemma 5.9, we complete the proof.
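As a quick numerical illustration of the initialization guarantee above (not a substitute for the proof; the sizes and number of trials are assumptions of this sketch), the empirical operator \(W_{0}=\frac{1}{m}\sum_{i}b_{i}A_{i}\) indeed approaches \(W_{*}\) in spectral norm as \(m\) grows.

```python
import numpy as np

# Numerical illustration: W_0 = (1/m) sum_i b_i x_i y_i^T concentrates around W_*
# as m grows, matching the epsilon-init property (Definition 4.4 / Theorem 5.7).
rng = np.random.default_rng(0)
d, k = 40, 2

W_star = rng.standard_normal((d, k)) @ rng.standard_normal((k, d)) / np.sqrt(k)

def init_error(m):
    X = rng.standard_normal((m, d))                 # rows x_i
    Y = rng.standard_normal((m, d))                 # rows y_i
    b = np.einsum('id,de,ie->i', X, W_star, Y)      # b_i = x_i^T W_* y_i
    W0 = (X * b[:, None]).T @ Y / m                 # (1/m) sum_i b_i x_i y_i^T
    return np.linalg.norm(W0 - W_star, 2) / np.linalg.norm(W_star, 2)

for m in [1_000, 10_000, 100_000]:
    print(m, init_error(m))   # the relative spectral error should shrink as m grows
```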
### Initialization Is a Good Operator

**Lemma 5.8**.: _We define the matrix \(S\in\mathbb{R}^{d\times d}\) as follows_ \[S:=\frac{1}{m}\sum_{i=1}^{m}b_{i}A_{i}.\] _If the following two conditions hold_ * _Condition 1._ \(k=\Omega(\log(d/\delta))\)_,_ * _Condition 2._ \(m=\Omega(\epsilon^{-2}k^{2}\log(d/\delta))\)_,_ _then we have_ \[\Pr[\|S-W_{*}\|\leq\epsilon\cdot\|W_{*}\|]\geq 1-\delta.\] Proof.: (Initialization in Definition 4.4) Now, we have: \[S =\frac{1}{m}\sum_{i=1}^{m}b_{i}A_{i}\] \[=\frac{1}{m}\sum_{i=1}^{m}b_{i}x_{i}y_{i}^{\top}\] \[=\frac{1}{m}\sum_{i=1}^{m}x_{i}b_{i}y_{i}^{\top}\] \[=\frac{1}{m}\sum_{i=1}^{m}x_{i}x_{i}^{\top}W_{*}y_{i}y_{i}^{\top}\] \[=\frac{1}{m}\sum_{i=1}^{m}x_{i}x_{i}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}y_{i}y_{i}^{\top},\] where the first step follows from Definition 4.4, the second step follows from \(A_{i}=x_{i}y_{i}^{\top}\), the third step follows from \(b_{i}\) being a scalar, the fourth step follows from \(b_{i}=x_{i}^{\top}W_{*}y_{i}\), and the fifth step follows from \(W_{*}=U_{*}\Sigma_{*}V_{*}^{\top}\). For each \(i\in[m]\), we define the matrix \(Z_{i}\in\mathbb{R}^{d\times d}\) as follows: \[Z_{i}:=x_{i}x_{i}^{\top}U_{*}\Sigma_{*}V_{*}^{\top}y_{i}y_{i}^{\top},\] then we can rewrite \(S\in\mathbb{R}^{d\times d}\) as \[S=\frac{1}{m}\sum_{i=1}^{m}Z_{i}.\] Note that we can compute \(\mathbb{E}[Z_{i}]\in\mathbb{R}^{d\times d}\) as \[\operatorname*{\mathbb{E}}_{x_{i},y_{i}}[Z_{i}]=\operatorname*{\mathbb{E}}_{x_{i},y_{i}}[x_{i}x_{i}^{\top}\cdot U_{*}\Sigma_{*}V_{*}^{\top}\cdot y_{i}y_{i}^{\top}]=\operatorname*{\mathbb{E}}_{x_{i}}[x_{i}x_{i}^{\top}]\cdot U_{*}\Sigma_{*}V_{*}^{\top}\cdot\operatorname*{\mathbb{E}}_{y_{i}}[y_{i}y_{i}^{\top}]\]
\[=U_{*}\Sigma_{*}V_{*}^{\top},\] where the second step follows from the independence of \(x_{i}\) and \(y_{i}\), and the last step follows from \(\mathbb{E}[x_{i}x_{i}^{\top}]=\mathbb{E}[y_{i}y_{i}^{\top}]=I_{d}\). Hence \(\mathbb{E}[S]=W_{*}\), and \(S\) concentrates around \(W_{*}\) for large enough \(m\), as we now quantify. Using Lemma 5.5 (Part 2) with Gaussian variance \(\sigma^{2}=1\), we have \[\Pr[\|Z_{i}\|\leq C^{2}k^{2}\sigma_{1}^{*},\forall i\in[m]]\geq 1-\delta/\operatorname{poly}(d) \tag{10}\] Using Lemma 5.6 with Gaussian variance \(\sigma^{2}=1\), we can bound \(\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\|\) as follows \[\|\operatorname{\mathbb{E}}[Z_{i}Z_{i}^{\top}]\|\leq C^{2}k^{2}(\sigma_{1}^{*})^{2} \tag{11}\] Let \(Z=\sum_{i=1}^{m}(Z_{i}-W_{*})\). Applying Theorem 3.10 we get \[\Pr[\|Z\|\geq t]\leq 2d\cdot\exp(-\frac{t^{2}/2}{\operatorname{Var}[Z]+Mt/3}) \tag{12}\] where \[Z =mS-mW_{*}\] \[\operatorname{Var}[Z] =m\cdot C^{2}k^{2}(\sigma_{1}^{*})^{2},\quad\text{by Eq. (11)}\] \[M =C^{2}k^{2}\sigma_{1}^{*},\quad\text{by Eq. (10)}\] Replacing \(t=\epsilon\sigma_{1}^{*}m\) and \(Z=mS-mW_{*}\) inside \(\Pr[]\) in Eq. (12), we have \[\Pr[\|S-W_{*}\|\geq\epsilon\sigma_{1}^{*}]\leq 2d\cdot\exp\bigg{(}-\frac{t^{2}/2}{\operatorname{Var}[Z]+Mt/3}\bigg{)}\] Our goal is to choose \(m\) sufficiently large such that the above quantity is upper bounded by \(2d\cdot\exp(-\Omega(\log(d/\delta)))\). First, we need \[\frac{t^{2}}{\operatorname{Var}[Z]} = \frac{\epsilon^{2}m^{2}(\sigma_{1}^{*})^{2}}{m\cdot C^{2}k^{2}(\sigma_{1}^{*})^{2}} = \frac{\epsilon^{2}m}{C^{2}k^{2}} \geq \log(d/\delta)\] where the first step follows from the choice of \(t\) and the bound for \(\operatorname{Var}[Z]\). This requires \[m\geq C^{2}\epsilon^{-2}k^{2}\log(d/\delta)\] Second, we need \[\frac{t^{2}}{Mt}=\frac{\epsilon m\sigma_{1}^{*}}{M} =\frac{\epsilon m\sigma_{1}^{*}}{C^{2}k^{2}\sigma_{1}^{*}} =\frac{\epsilon m}{C^{2}k^{2}} \geq\ \log(d/\delta)\] where the first step follows from the choice of \(t\) and the second step follows from the bound on \(M\). This requires \[m\geq C^{2}\epsilon^{-2}k^{2}\log(d/\delta)\] Finally, we should choose \[m\geq 10C^{2}\epsilon^{-2}k^{2}\log(d/\delta),\] which implies that \[\Pr[\|S-W_{*}\|\leq\epsilon\cdot\sigma_{1}^{*}]\geq 1-\delta/\operatorname{poly}(d). \tag{13}\] Taking a union bound with the event that all \(\|Z_{i}\|\) are upper bounded, we complete the proof.

### Operators \(B\) and \(G\) are good

**Lemma 5.9**.: _If the following two conditions hold_ * _Condition 1._ \(d=\Omega(\log(d/\delta))\)__ * _Condition 2._ \(m=\Omega(\epsilon^{-2}d\log(d/\delta))\)__ _then the operator \(B\) (see Definition 4.5) is \(\epsilon\)-good, i.e.,_ \[\Pr[\|B_{x}-I_{d}\|\leq\epsilon] \geq 1-\delta/\operatorname{poly}(d)\] \[\Pr[\|B_{y}-I_{d}\|\leq\epsilon] \geq 1-\delta/\operatorname{poly}(d)\] _Similar results hold for the operator \(G\) (see Definition 4.6)._ Proof.: Recall that \(B_{x}:=\frac{1}{m}\sum_{l=1}^{m}(y_{l}^{\top}v)^{2}x_{l}x_{l}^{\top}\) and \(B_{y}:=\frac{1}{m}\sum_{l=1}^{m}(x_{l}^{\top}u)^{2}y_{l}y_{l}^{\top}\). Now, as \(x_{i},y_{i}\) are rotationally invariant random variables, without loss of generality we can assume \(u=e_{1}\). We use \(x_{i,1}\in\mathbb{R}\) to denote the first entry of \(x_{i}\in\mathbb{R}^{d}\).
Thus, \[(x_{i}^{\top}uu^{\top}x_{i})=x_{i,1}^{2}\] Then \[\mathbb{E}[(x_{i}^{\top}uu^{\top}x_{i})^{2}]=\mathbb{E}[x_{i,1}^{4}]=3\] We define \[Z_{i}=(x_{i}^{\top}u)^{2}y_{i}y_{i}^{\top}\] then \[\mathbb{E}[Z_{i}]=I_{d}\] Using a similar idea as in Lemma 5.5, we have \[\Pr[\|Z_{i}\|\leq Cd,\forall i\in[m]]\geq 1-\delta/\operatorname{poly}(d)\] We can bound \[\|\,\mathbb{E}[Z_{i}Z_{i}^{\top}]\| =\|\,\operatorname*{\mathbb{E}}_{x,y}[(x_{i}^{\top}u)^{2}y_{i}y_{i}^{\top}y_{i}y_{i}^{\top}(x_{i}^{\top}u)^{2}]\|\] \[=\|\operatorname*{\mathbb{E}}_{x}[(x_{i}^{\top}u)^{2}\operatorname*{\mathbb{E}}_{y}[y_{i}y_{i}^{\top}y_{i}y_{i}^{\top}](x_{i}^{\top}u)^{2}]\|\] \[=(d+2)\cdot|\operatorname*{\mathbb{E}}_{x}[(x_{i}^{\top}u)^{2}(x_{i}^{\top}u)^{2}]|\] \[=(d+2)\cdot 3\] \[\leq Cd\] where the second step follows from the independence of \(x_{i}\) and \(y_{i}\), the third step follows from Lemma 5.3 (with \(\sigma^{2}=1\)), the fourth step follows from \(\mathbb{E}[(x_{i}^{\top}u)^{4}]=3\), and the last step follows from choosing \(C\geq 1\) to be a sufficiently large constant. Let \(Z=\sum_{i=1}^{m}(Z_{i}-I_{d})\). Applying Theorem 3.10 we get \[\Pr[\|Z\|\geq t]\leq 2d\cdot\exp(-\frac{t^{2}/2}{\operatorname{Var}[Z]+Mt/3}),\] where \[Z =m\cdot B-m\cdot I\] \[\operatorname{Var}[Z] =Cmd\] \[M =Cd\] Using \(t=m\epsilon\) and \(Z=\sum_{i=1}^{m}(Z_{i}-I_{d})\), and \(B=\frac{1}{m}\sum_{i=1}^{m}Z_{i}\), we have \[\Pr[\|Z\|\geq t] =\Pr[\|\sum_{i=1}^{m}(Z_{i}-I_{d})\|\geq m\epsilon]\] \[=\Pr[\|\frac{1}{m}\sum_{i=1}^{m}Z_{i}-I_{d}\|\geq\epsilon]\] \[=\Pr[\|B-I_{d}\|\geq\epsilon]\] By choosing \(t=m\epsilon\) and \(m=\Omega(\epsilon^{-2}d\log(d/\delta))\) we have \[\Pr[\|B-I_{d}\|\geq\epsilon]\leq\delta/\operatorname{poly}(d),\] where \(B\) can be either \(B_{x}\) or \(B_{y}\). Similarly, we can prove \[\Pr[\|G_{x}\|\leq\epsilon]\geq 1-\delta,\] \[\Pr[\|G_{y}\|\leq\epsilon]\geq 1-\delta.\]

## 6 One Shrinking Step

In this section, we provide the analysis of one shrinking step for our result. In Section 6.1 we define the matrices \(B,C,D,S\) to be used in the analysis. In Section 6.2 we upper bound the norm of \(BD-C\). In Section 6.3 we show that the update term \(V_{t+1}\) can be written in a different way. In Section 6.4 and Section 6.5 we upper bound \(\|F\|\) and \(\|R^{-1}\|\) respectively.
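Before the formal definitions, the following small numerical sketch illustrates the shrinking behaviour this section establishes. It is our own illustration under assumed sizes, and it uses a dense least-squares solve in place of the sketched solver: starting from a factor \(U_{t}\) close to \(U_{*}\), one alternating-minimization update yields a \(V_{t+1}\) whose distance to \(V_{*}\) should be noticeably smaller than \(\operatorname{dist}(U_{t},U_{*})\).

```python
import numpy as np

# One shrinking step, numerically: rank-one Gaussian measurements, exact least-squares
# update for V, QR to re-orthonormalize, then compare subspace distances.
rng = np.random.default_rng(3)
d, k, m = 30, 2, 20_000

U_star, _ = np.linalg.qr(rng.standard_normal((d, k)))
V_star, _ = np.linalg.qr(rng.standard_normal((d, k)))
W_star = U_star @ np.diag([2.0, 1.0]) @ V_star.T

def dist(P, Q):
    # sin-theta distance between the column spaces of P and Q (cf. Definition 3.7).
    return np.linalg.norm((np.eye(d) - Q @ Q.T) @ P, 2)

U_t, _ = np.linalg.qr(U_star + 0.05 * rng.standard_normal((d, k)))   # perturbed iterate

X = rng.standard_normal((m, d))
Y = rng.standard_normal((m, d))
b = np.einsum('id,de,ie->i', X, W_star, Y)

# \hat V_{t+1} = argmin_V sum_i (b_i - x_i^T U_t V^T y_i)^2, solved in vectorized form.
M = np.stack([np.outer(U_t.T @ X[i], Y[i]).ravel() for i in range(m)])
V_hat = np.linalg.lstsq(M, b, rcond=None)[0].reshape(k, d).T
V_next, _ = np.linalg.qr(V_hat)

print(dist(U_t, U_star), dist(V_next, V_star))   # the second value should be noticeably smaller
```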
### Definitions of \(B,C,D,S\)

**Definition 6.1**.: _For each \(p\in[k]\), let \(u_{*,p}\in\mathbb{R}^{d}\) denote the \(p\)-th column of the matrix \(U_{*}\in\mathbb{R}^{d\times k}\)._ _For each \(p\in[k]\), let \(u_{t,p}\) denote the \(p\)-th column of the matrix \(U_{t}\in\mathbb{R}^{d\times k}\)._ _We define block matrices \(B,C,D,S\in\mathbb{R}^{kd\times kd}\) as follows: For each \((p,q)\in[k]\times[k]\)_ * _Let_ \(B_{p,q}\in\mathbb{R}^{d\times d}\) _denote the_ \((p,q)\)_-th block of_ \(B\)__ \[B_{p,q}=\sum_{i=1}^{m}\underbrace{y_{i}y_{i}^{\top}}_{d\times d\text{ matrix}}\cdot(\underbrace{x_{i}^{\top}u_{t,p}}_{\text{scalar}})\cdot(\underbrace{x_{i}^{\top}u_{t,q}}_{\text{scalar}})\] * _Let_ \(C_{p,q}\in\mathbb{R}^{d\times d}\) _denote the_ \((p,q)\)_-th block of_ \(C\)_,_ \[C_{p,q}=\sum_{i=1}^{m}\underbrace{y_{i}y_{i}^{\top}}_{d\times d\text{ matrix}}\cdot(\underbrace{x_{i}^{\top}u_{t,p}}_{\text{scalar}})\cdot(\underbrace{x_{i}^{\top}u_{*,q}}_{\text{scalar}})\] * _Let_ \(D_{p,q}\in\mathbb{R}^{d\times d}\) _denote the_ \((p,q)\)_-th block of_ \(D\)_,_ \[D_{p,q}=u_{t,p}^{\top}u_{*,q}I\] * _Let_ \(S_{p,q}\in\mathbb{R}^{d\times d}\) _denote the_ \((p,q)\)_-th block of_ \(S\)_,_ \[S_{p,q}=\begin{cases}\sigma_{p}^{*}I,&\text{if }p=q;\\ \;0,&\text{if }p\neq q.\end{cases}\] _Here_ \(\sigma_{1}^{*},\cdots\sigma_{k}^{*}\) _are the singular values of_ \(W_{*}\in\mathbb{R}^{d\times d}\)_._ * _We define_ \(F\in\mathbb{R}^{d\times k}\) _as follows_ \[\underbrace{\operatorname{vec}(F)}_{dk\times 1}:=\underbrace{B^{-1}}_{dk\times dk}(\underbrace{BD-C}_{dk\times dk})\cdot\underbrace{S}_{dk\times dk}\cdot\underbrace{\operatorname{vec}(V_{*})}_{dk\times 1}.\]

### Upper Bound on \(\|BD-C\|\)

**Claim 6.2**.: _Let \(B,C\) and \(D\) be defined as in Definition 6.1. Then we have_ \[\|BD-C\|\leq\epsilon\cdot\operatorname{dist}(U,U_{*})\cdot k\] Proof.: Let \(z_{1},\cdots,z_{k}\in\mathbb{R}^{d}\) denote \(k\) vectors. Let \(z=\begin{bmatrix}z_{1}\\ \vdots\\ z_{k}\end{bmatrix}\). We define \(f(z):=z^{\top}(BD-C)z\). We define \(f(z,p,q):=z_{p}^{\top}(BD-C)_{p,q}z_{q}\). Then we can rewrite \[z^{\top}(BD-C)z = \sum_{p=1}^{k}\sum_{q=1}^{k}z_{p}^{\top}(BD-C)_{p,q}z_{q}\] \[= \sum_{p=1}^{k}\sum_{q=1}^{k}z_{p}^{\top}(B_{p,:}D_{:,q}-C_{p,q})z_{q}\] \[= \sum_{p=1}^{k}\sum_{q=1}^{k}z_{p}^{\top}(\sum_{l=1}^{k}B_{p,l}D_{l,q}-C_{p,q})z_{q}\] By definition, we know \[B_{p,l} = \sum_{i=1}^{m}y_{i}y_{i}^{\top}(x_{i}^{\top}u_{t,p})\cdot(u_{t,l}^{\top}x_{i})\] \[D_{l,q} = (u_{*,q}^{\top}u_{t,l})I_{d}\] \[C_{p,q} = \sum_{i=1}^{m}y_{i}y_{i}^{\top}(x_{i}^{\top}u_{t,p})\cdot(u_{*,q}^{\top}x_{i})\] We can rewrite \(C_{p,q}\) as follows \[C_{p,q}=\sum_{i=1}^{m}y_{i}y_{i}^{\top}\cdot(x_{i}^{\top}u_{t,p})\cdot(u_{*,q}^{\top}I_{d}x_{i}) \tag{14}\] Let us compute \[B_{p,l}D_{l,q} = \sum_{i=1}^{m}y_{i}y_{i}^{\top}(x_{i}^{\top}u_{t,p})\cdot(u_{t,l}^{\top}x_{i})\cdot(u_{*,q}^{\top}u_{t,l})\] \[= \sum_{i=1}^{m}y_{i}y_{i}^{\top}(x_{i}^{\top}u_{t,p})\cdot(u_{*,q}^{\top}u_{t,l})\cdot(u_{t,l}^{\top}x_{i})\] where the second step follows from \(a\cdot b=b\cdot a\) for any two scalars.
Taking the summation over all \(l\in[k]\), we have \[\begin{aligned}\sum_{l=1}^{k}B_{p,l}D_{l,q} &= \sum_{l=1}^{k}\sum_{i=1}^{m}y_{i}y_{i}^{\top}(x_{i}^{\top}u_{t,p})\cdot(u_{*,q}^{\top}u_{t,l})\cdot(u_{t,l}^{\top}x_{i})\\ &= \sum_{i=1}^{m}y_{i}y_{i}^{\top}(x_{i}^{\top}u_{t,p})\cdot u_{*,q}^{\top}\sum_{l=1}^{k}(u_{t,l}\cdot u_{t,l}^{\top})x_{i}\\ &= \sum_{i=1}^{m}\underbrace{y_{i}y_{i}^{\top}}_{\text{matrix}}\cdot\underbrace{(x_{i}^{\top}u_{t,p})}_{\text{scalar}}\cdot\underbrace{u_{*,q}^{\top}U_{t}U_{t}^{\top}x_{i}}_{\text{scalar}}\end{aligned} \tag{15}\] where the first step follows from the definition of \(B\) and \(D\). Then, we have \[\begin{aligned}\sum_{l=1}^{k}B_{p,l}D_{l,q}-C_{p,q} &=\big(\sum_{i=1}^{m}\underbrace{y_{i}y_{i}^{\top}}_{\text{matrix}}\cdot(\underbrace{x_{i}^{\top}u_{t,p}}_{\text{scalar}})\cdot\underbrace{u_{*,q}^{\top}U_{t}U_{t}^{\top}x_{i}}_{\text{scalar}}\big)-C_{p,q}\\ &=\big(\sum_{i=1}^{m}\underbrace{y_{i}y_{i}^{\top}}_{\text{matrix}}\cdot(\underbrace{x_{i}^{\top}u_{t,p}}_{\text{scalar}})\cdot\underbrace{u_{*,q}^{\top}U_{t}U_{t}^{\top}x_{i}}_{\text{scalar}}\big)-\big(\sum_{i=1}^{m}y_{i}y_{i}^{\top}\cdot(x_{i}^{\top}u_{t,p})\cdot(u_{*,q}^{\top}I_{d}x_{i})\big)\\ &=\sum_{i=1}^{m}\underbrace{y_{i}y_{i}^{\top}}_{\text{matrix}}\cdot(\underbrace{x_{i}^{\top}u_{t,p}}_{\text{scalar}})\cdot\underbrace{u_{*,q}^{\top}(U_{t}U_{t}^{\top}-I_{d})x_{i}}_{\text{scalar}}\end{aligned}\] where the first step follows from Eq. (15), the second step follows from Eq. (14), and the last step follows from merging the terms to obtain \((U_{t}U_{t}^{\top}-I_{d})\). Thus, \[\begin{aligned}f(z,p,q) &=z_{p}^{\top}(\sum_{l=1}^{k}B_{p,l}D_{l,q}-C_{p,q})z_{q}\\ &=\sum_{i=1}^{m}(\underbrace{z_{p}^{\top}y_{i}}_{\text{scalar}})\cdot(\underbrace{y_{i}^{\top}z_{q}}_{\text{scalar}})\cdot(\underbrace{x_{i}^{\top}u_{t,p}}_{\text{scalar}})\cdot\underbrace{u_{*,q}^{\top}(U_{t}U_{t}^{\top}-I_{d})x_{i}}_{\text{scalar}}\end{aligned}\] For ease of analysis, we define \(v_{t}:=u_{*,q}^{\top}(U_{t}U_{t}^{\top}-I_{d})\). This means \(v_{t}\) lies in the complement of the span of \(U_{t}\). Then \[\|v_{t}\|_{2}=\|u_{*,q}^{\top}(U_{t}U_{t}^{\top}-I_{d})\|_{2}=\|e_{q}^{\top}U_{*}^{\top}(U_{t}U_{t}^{\top}-I_{d})\|_{2}\leq\|U_{*}^{\top}(U_{t}U_{t}^{\top}-I_{d})\|=\operatorname{dist}(U_{*},U_{t}). \tag{16}\] where the second step follows from \(u_{*,q}^{\top}=e_{q}^{\top}U_{*}^{\top}\) (\(e_{q}\in\mathbb{R}^{k}\) is the vector whose \(q\)-th entry is \(1\) and all other entries are \(0\)), the third step follows from Fact 3.3, and the last step follows from the definition of \(\operatorname{dist}\). We want to apply Definition 4.6, but the issue is that \(z_{p},z_{q}\) and \(v_{t}\) are not unit vectors, so we normalize them. Let \(\overline{z}_{p}=z_{p}/\|z_{p}\|_{2}\), \(\overline{z}_{q}=z_{q}/\|z_{q}\|_{2}\) and \(\overline{v}_{t}=v_{t}/\|v_{t}\|_{2}\). In order to apply Definition 4.6, we also need \(v_{t}^{\top}u_{t,p}=0\). This is obviously true, since \(v_{t}\) lies in the complement of the span of \(U_{t}\) and \(u_{t,p}\) lies in the span of \(U_{t}\). We define \[G:=\sum_{i=1}^{m}\underbrace{(x_{i}^{\top}u_{t,p})}_{\text{scalar}}\cdot(\underbrace{x_{i}^{\top}\overline{v}_{t}}_{\text{scalar}})\cdot\underbrace{y_{i}y_{i}^{\top}}_{\text{matrix}}\] By Definition 4.6, we know that \[\|G\|\leq\epsilon.\] By the definition of the spectral norm, for any unit vectors \(\overline{z}_{p}\) and \(\overline{z}_{q}\) we know that \[|\overline{z}_{p}^{\top}G\overline{z}_{q}|\leq\|G\|\leq\epsilon,\] where the first step follows from the definition of the spectral norm (Fact 3.3), and the last step follows from Definition 4.6.
Note that \[\begin{aligned}f(z,p,q) &= \sum_{i=1}^{m}\underbrace{(x_{i}^{\top}u_{t,p})\cdot(x_{i}^{\top}\overline{v}_{t})}_{\text{scalar}}\cdot\underbrace{(\overline{z}_{p}^{\top}y_{i})\cdot(y_{i}^{\top}\overline{z}_{q})}_{\text{scalar}}\cdot\underbrace{\|z_{p}\|_{2}\cdot\|z_{q}\|_{2}\cdot\|v_{t}\|_{2}}_{\text{scalar}}\\ &= \underbrace{\overline{z}_{p}^{\top}}_{1\times d}\cdot\Big(\sum_{i=1}^{m}\underbrace{(x_{i}^{\top}u_{t,p})\cdot(x_{i}^{\top}\overline{v}_{t})}_{\text{scalar}}\cdot\underbrace{y_{i}y_{i}^{\top}}_{d\times d}\Big)\cdot\underbrace{\overline{z}_{q}}_{d\times 1}\cdot\underbrace{\|z_{p}\|_{2}\cdot\|z_{q}\|_{2}\cdot\|v_{t}\|_{2}}_{\text{scalar}}\\ &= \underbrace{\overline{z}_{p}^{\top}}_{1\times d}\cdot\underbrace{G}_{d\times d}\cdot\underbrace{\overline{z}_{q}}_{d\times 1}\cdot\underbrace{\|z_{p}\|_{2}\cdot\|z_{q}\|_{2}\cdot\|v_{t}\|_{2}}_{\text{scalar}}\end{aligned}\] where the second step follows from rewriting the second scalar as \((\overline{z}_{p}^{\top}y_{i})(y_{i}^{\top}\overline{z}_{q})=\overline{z}_{p}^{\top}(y_{i}y_{i}^{\top})\overline{z}_{q}\), and the last step follows from the definition of \(G\). Then, \[\begin{aligned}|f(z,p,q)| &= |\overline{z}_{p}^{\top}G\overline{z}_{q}|\cdot\|z_{p}\|_{2}\|z_{q}\|_{2}\|v_{t}\|_{2}\\ &\leq \epsilon\|z_{p}\|_{2}\|z_{q}\|_{2}\cdot\|v_{t}\|_{2}\\ &\leq \epsilon\|z_{p}\|_{2}\|z_{q}\|_{2}\cdot\operatorname{dist}(U_{t},U_{*})\end{aligned}\] where the last step follows from Eq. (16). Finally, we have \[\begin{aligned}\|BD-C\| &= \max_{z,\|z\|_{2}=1}\big|z^{\top}(BD-C)z\big|\\ &= \max_{z,\|z\|_{2}=1}\big|\sum_{p\in[k],q\in[k]}f(z,p,q)\big|\\ &\leq \max_{z,\|z\|_{2}=1}\sum_{p\in[k],q\in[k]}|f(z,p,q)|\\ &\leq \epsilon\cdot\operatorname{dist}(U_{t},U_{*})\max_{z,\|z\|_{2}=1}\sum_{p\in[k],q\in[k]}\|z_{p}\|_{2}\|z_{q}\|_{2}\\ &\leq \epsilon\cdot\operatorname{dist}(U_{t},U_{*})\cdot k\end{aligned} \tag{17}\] where the first step follows from Fact 3.3, and the last step follows from \(\sum_{p=1}^{k}\|z_{p}\|_{2}\leq\sqrt{k}(\sum_{p=1}^{k}\|z_{p}\|_{2}^{2})^{1/2}=\sqrt{k}\).

### Rewrite \(V_{t+1}\)

**Claim 6.3**.: _If_ \[V_{t+1}=(W_{*}^{\top}U_{t}-F)R^{-1}\] _then,_ \[(V_{*,\perp})^{\top}V_{t+1}=-(V_{*,\perp})^{\top}FR^{-1}\] Proof.: Multiplying both sides on the left by \((V_{*,\perp})^{\top}\), where \(V_{*,\perp}\in\mathbb{R}^{d\times(d-k)}\): \[\begin{aligned}V_{t+1} &=(W_{*}^{\top}U_{t}-F)R^{-1}\\ (V_{*,\perp})^{\top}V_{t+1} &=(V_{*,\perp})^{\top}(W_{*}^{\top}U_{t}-F)R^{-1}\\ (V_{*,\perp})^{\top}V_{t+1} &=(V_{*,\perp})^{\top}W_{*}^{\top}U_{t}R^{-1}-(V_{*,\perp})^{\top}FR^{-1}\end{aligned}\] We just need to show \((V_{*,\perp})^{\top}W_{*}^{\top}=\mathbf{0}\). By definition of \(V_{*,\perp}\), we know: \[V_{*,\perp}^{\top}V_{*}=\mathbf{0}_{(d-k)\times k}\] Thus, we have: \[\begin{aligned}(V_{*,\perp})^{\top}W_{*}^{\top} &=V_{*,\perp}^{\top}V_{*}\Sigma_{*}U_{*}^{\top}\\ &=0\end{aligned}\]

### Upper bound on \(\|F\|\)

**Lemma 6.4** (A variation of Lemma 2 in [2]).: _Let \(\mathcal{A}\) be a rank-one measurement operator where \(A_{i}=x_{i}y_{i}^{\top}\)._
_Let \(\kappa\) be defined as in Definition 4.2._ _Then, we have_ \[\|F\|\leq 2\epsilon k^{1.5}\cdot\sigma_{1}^{*}\cdot\operatorname{dist}(U_{t},U_{*})\] _Further, if \(\epsilon\leq 0.001/(k^{1.5}\kappa)\),_ \[\|F\|\leq 0.01\cdot\sigma_{k}^{*}\cdot\operatorname{dist}(U_{t},U_{*}).\] Proof.: Recall that \[\operatorname{vec}(F)=B^{-1}(BD-C)S\cdot\operatorname{vec}(V_{*}).\] Here, we can upper bound \(\|F\|\) as follows \[\begin{aligned}\|F\| &\leq\|F\|_{F}\\ &=\|\operatorname{vec}(F)\|_{2}\\ &\leq\|B^{-1}\|\cdot\|BD-C\|\cdot\|S\|\cdot\|\operatorname{vec}(V_{*})\|_{2}\\ &=\|B^{-1}\|\cdot\|(BD-C)\|\cdot\|S\|\cdot\sqrt{k}\\ &\leq\|B^{-1}\|\cdot\|(BD-C)\|\cdot\sigma_{1}^{*}\cdot\sqrt{k}\end{aligned} \tag{18}\] where the first step follows from \(\|\cdot\|\leq\|\cdot\|_{F}\) (Fact 3.3), the second step follows from \(\|F\|_{F}=\|\operatorname{vec}(F)\|_{2}\), the third step follows from \(\|Ax\|_{2}\leq\|A\|\cdot\|x\|_{2}\), the fourth step follows from \(\|\operatorname{vec}(V_{*})\|_{2}=\|V_{*}\|_{F}\leq\sqrt{k}\) (Fact 3.3), and the last step follows from \(\|S\|\leq\sigma_{1}^{*}\) (see Definition 6.1). Now, we first bound \(\|B^{-1}\|=1/\sigma_{\min}(B)\). Also, let \(Z=\begin{bmatrix}z_{1}&z_{2}&\cdots&z_{k}\end{bmatrix}\) and let \(z=\operatorname{vec}(Z)\). Note that \(B_{p,q}\) denotes the \((p,q)\)-th block of \(B\). We define \[\mathcal{B}:=\{x\in\mathbb{R}^{kd}\ |\ \|x\|_{2}=1\}.\] Then \[\begin{aligned}\sigma_{\min}(B) &= \min_{z\in\mathcal{B}}z^{\top}Bz\\ &= \min_{z\in\mathcal{B}}\sum_{p\in[k],q\in[k]}z_{p}^{\top}B_{p,q}z_{q}\\ &= \min_{z\in\mathcal{B}}\sum_{p=1}^{k}z_{p}^{\top}B_{p,p}z_{p}+\sum_{p\neq q}z_{p}^{\top}B_{p,q}z_{q}.\end{aligned} \tag{19}\] where the first step follows from Fact 3.3, the second step follows from simple algebra, and the last step follows from Fact 3.3. We can lower bound \(z_{p}^{\top}B_{p,p}z_{p}\) as follows \[\begin{aligned}z_{p}^{\top}B_{p,p}z_{p} &\geq \sigma_{\min}(B_{p,p})\cdot\|z_{p}\|_{2}^{2}\\ &\geq (1-\epsilon)\cdot\|z_{p}\|_{2}^{2}\end{aligned} \tag{20}\] where the first step follows from Fact 3.3, and the last step follows from Definition 4.5. We can upper bound \(|z_{p}^{\top}B_{p,q}z_{q}|\) as follows, \[\begin{aligned}|z_{p}^{\top}B_{p,q}z_{q}| &\leq \|z_{p}\|_{2}\cdot\|B_{p,q}\|\cdot\|z_{q}\|_{2}\\ &\leq \epsilon\cdot\|z_{p}\|_{2}\cdot\|z_{q}\|_{2}\end{aligned} \tag{21}\] where the first step follows from Fact 3.3, and the last step follows from Definition 4.5. We have \[\begin{aligned}\sigma_{\min}(B) &= \min_{z,\|z\|_{2}=1}\sum_{p=1}^{k}z_{p}^{\top}B_{p,p}z_{p}+\sum_{p\neq q}z_{p}^{\top}B_{p,q}z_{q}\\ &\geq \min_{z,\|z\|_{2}=1}(1-\epsilon)\sum_{p=1}^{k}\|z_{p}\|_{2}^{2}+\sum_{p\neq q}z_{p}^{\top}B_{p,q}z_{q}\\ &\geq \min_{z,\|z\|_{2}=1}(1-\epsilon)\sum_{p=1}^{k}\|z_{p}\|_{2}^{2}-\epsilon\sum_{p\neq q}\|z_{p}\|_{2}\|z_{q}\|_{2}\\ &= \min_{z,\|z\|_{2}=1}(1-\epsilon)-\epsilon\sum_{p\neq q}\|z_{p}\|_{2}\|z_{q}\|_{2}\\ &\geq \min_{z,\|z\|_{2}=1}(1-\epsilon)-k\epsilon\\ &\geq 1-2k\epsilon\\ &\geq 1/2\end{aligned} \tag{22}\] where the first step follows from Eq. (19), the second step follows from Eq. (20), the third step follows from Eq. (21), the fourth step follows from \(\sum_{p=1}^{k}\|z_{p}\|_{2}^{2}=1\) (which is derived from the \(\|z\|_{2}=1\) constraint and the definition of \(\|z\|_{2}\)), the fifth step follows from \(\sum_{p\neq q}\|z_{p}\|_{2}\|z_{q}\|_{2}\leq k\), and the last step follows from \(\epsilon\leq 0.1/k\). We can show that \[\|B^{-1}\|=1/\sigma_{\min}(B)\leq 2. \tag{23}\] where the first step follows from Fact 3.3, and the second step follows from Eq. (22).
Now, consider \(BD-C\). Using Claim 6.2, we have \[\|BD-C\|\leq k\cdot\epsilon\cdot\operatorname{dist}(U_{t},U_{*})\] Now, we have \[\begin{aligned}\|F\| &\leq\|B^{-1}\|\cdot\|(BD-C)\|\cdot\sigma_{1}^{*}\cdot\sqrt{k}\\ &\leq 2\cdot\|(BD-C)\|\cdot\sigma_{1}^{*}\cdot\sqrt{k}\\ &\leq 2\cdot k\cdot\epsilon\cdot\operatorname{dist}(U_{t},U_{*})\cdot\sigma_{1}^{*}\cdot\sqrt{k}\end{aligned}\] where the first step follows from Eq. (18), the second step follows from Eq. (23), and the third step follows from Eq. (17).

### Upper bound on \(\|R^{-1}\|\)

**Lemma 6.5** (A variation of Lemma 3 in [25]).: _Let \(\mathcal{A}\) be a rank-one measurement operator where \(A_{i}=x_{i}y_{i}^{\top}\). Also, let \(\mathcal{A}\) satisfy the three properties mentioned in Theorem 4.7._ _If the following condition holds_ * \(\operatorname{dist}(U_{t},U_{*})\leq\epsilon_{d}=1/10\) _(the condition of Part 1 of Lemma_ 4.8_)_ _Then,_ \[\|R^{-1}\|\leq 10/\sigma_{k}^{*}\] Proof.: For simplicity, in the following proof, we use \(V\) to denote \(V_{t+1}\), and we use \(U\) to denote \(U_{t}\). Using Fact 3.3, \[\|R^{-1}\|=\sigma_{\min}(R)^{-1}\] We can lower bound \(\sigma_{\min}(R)\) as follows: \[\begin{aligned}\sigma_{\min}(R) &=\min_{z,\|z\|_{2}=1}\|Rz\|_{2}\\ &=\min_{z,\|z\|_{2}=1}\|VRz\|_{2}\\ &=\min_{z,\|z\|_{2}=1}\|V_{*}\Sigma_{*}U_{*}^{\top}Uz-Fz\|_{2}\\ &\geq\min_{z,\|z\|_{2}=1}\|V_{*}\Sigma_{*}U_{*}^{\top}Uz\|_{2}-\|Fz\|_{2}\\ &\geq\min_{z,\|z\|_{2}=1}\|V_{*}\Sigma_{*}U_{*}^{\top}Uz\|_{2}-\|F\|\end{aligned} \tag{24}\] where the first step follows from the definition of \(\sigma_{\min}\), the second step follows from Fact 3.3, the third step follows from \(V=(W_{*}^{\top}U-F)R^{-1}=(V_{*}\Sigma_{*}U_{*}^{\top}U-F)R^{-1}\) (due to Eq. (7) and Definition 4.1), the fourth step follows from the triangle inequality, and the fifth step follows from \(\|Fz\|_{2}\leq\|F\|\) for all \(\|z\|_{2}=1\). Next, we can show that \[\begin{aligned}\min_{z,\|z\|_{2}=1}\|V_{*}\Sigma_{*}U_{*}^{\top}Uz\|_{2} &=\min_{z,\|z\|_{2}=1}\|\Sigma_{*}U_{*}^{\top}Uz\|_{2}\\ &\geq\min_{z,\|z\|_{2}=1}\sigma_{k}^{*}\cdot\|U_{*}^{\top}Uz\|_{2}\\ &=\sigma_{k}^{*}\cdot\sigma_{\min}(U^{\top}U_{*})\end{aligned}\] where the first step follows from Fact 3.3, the second step follows from Fact 3.3, and the third step follows from the definition of \(\sigma_{\min}\). Next, we have \[\begin{aligned}\sigma_{\min}(U^{\top}U_{*}) &=\cos\theta(U_{*},U)\\ &=\sqrt{1-\sin^{2}\theta(U_{*},U)}\\ &\geq\sqrt{1-\operatorname{dist}(U_{*},U)^{2}}\end{aligned}\] where the first step follows from the definition of \(\cos\theta\), the second step follows from \(\sin^{2}\theta+\cos^{2}\theta=1\) (Lemma 3.9), and the third step follows from \(\sin\leq\operatorname{dist}\) (see Definition 3.7). Putting it all together, we have \[\begin{aligned}\sigma_{\min}(R) &\geq\sigma_{k}^{*}\sqrt{1-\operatorname{dist}(U_{*},U)^{2}}-\|F\|\\ &\geq\sigma_{k}^{*}\sqrt{1-\operatorname{dist}(U_{*},U)^{2}}-0.01\sigma_{k}^{*}\operatorname{dist}(U_{*},U)\\ &=\sigma_{k}^{*}(\sqrt{1-\operatorname{dist}(U_{*},U)^{2}}-0.01\operatorname{dist}(U_{*},U))\\ &\geq 0.2\sigma_{k}^{*}\end{aligned}\] where the second step follows from Lemma 6.4, and the last step follows from \(\operatorname{dist}(U_{*},U)<1/10\).

## 7 Matrix Sensing Regression

Our algorithm has \(O(\log(1/\epsilon_{0}))\) iterations, and in the previous sections we have proved why that number of iterations is sufficient. In order to show the final running time, we still need to provide a bound for the time we spend in each iteration. In this section, we prove a bound on the cost per iteration.
In Section 7.1 we provide a basic claim that our sensing problem is equivalent to a regression problem. In Section 7.2 we show the different running times of two implementations of each iteration. In Section 7.3 we provide the time analysis for each iteration of our solver. Section 7.4 shows the complexity of the straightforward solver. Finally, in Section 7.5 we show the bound for the condition number.

### Definition and Equivalence

In matrix sensing, we need to solve the following problem per iteration: **Definition 7.1**.: _Let \(A_{1},\ldots,A_{m}\in\mathbb{R}^{d\times d}\), \(U\in\mathbb{R}^{d\times k}\) and \(b\in\mathbb{R}^{m}\) be given. The goal is to solve the following minimization problem_ \[\min_{V\in\mathbb{R}^{d\times k}}\sum_{i=1}^{m}(\operatorname{tr}[A_{i}^{\top}UV^{\top}]-b_{i})^{2}.\] We define another regression problem: **Definition 7.2**.: _Let \(A_{1},\ldots,A_{m}\in\mathbb{R}^{d\times d}\), \(U\in\mathbb{R}^{d\times k}\) and \(b\in\mathbb{R}^{m}\) be given._ _We define matrix \(M\in\mathbb{R}^{m\times dk}\) as follows_ \[M_{i,*}:=\operatorname{vec}(U^{\top}A_{i}),\quad\forall i\in[m].\] _The goal is to solve the following minimization problem_ \[\min_{v\in\mathbb{R}^{dk}}\|Mv-b\|_{2}^{2}.\] We can prove the following equivalence result: **Lemma 7.3** ([23]).: _Let \(A_{1},\ldots,A_{m}\in\mathbb{R}^{d\times d}\), \(U\in\mathbb{R}^{d\times k}\) and \(b\in\mathbb{R}^{m}\) be given._ _If the following conditions hold_ * \(M_{i,*}:=\operatorname{vec}(U^{\top}A_{i}),\quad\forall i\in[m].\) * _The solution matrix_ \(V\in\mathbb{R}^{d\times k}\) _can be reshaped through the vector_ \(v\in\mathbb{R}^{dk}\)_, i.e.,_ \(v=\operatorname{vec}(V^{\top})\)_._ _Then, the problem (defined in Definition 7.1) is equivalent to the problem (defined in Definition 7.2)._ Proof.: Let \(X,Y\) be two matrices of the same dimensions; we want to show that \[\operatorname{tr}[X^{\top}Y]=\operatorname{vec}(X)^{\top}\operatorname{vec}(Y). \tag{25}\] Note that the RHS is essentially \(\sum_{i}\sum_{j}X_{i,j}Y_{i,j}\). For the LHS, note that \[(X^{\top}Y)_{j,j}=\sum_{i}X_{i,j}Y_{i,j},\] and the trace is then the sum over \(j\). Thus, we have Eq. (25). This means that for each \(i\in[m]\), \[\operatorname{tr}[A_{i}^{\top}UV^{\top}]=\operatorname{vec}(U^{\top}A_{i})^{\top}\operatorname{vec}(V^{\top}).\] Letting \(M\in\mathbb{R}^{m\times dk}\) be the matrix whose \(i\)-th row is \(\operatorname{vec}(U^{\top}A_{i})\), we see that Definition 7.1 is equivalent to solving the regression problem in the statement. This completes the proof.

### From Sensing Matrix to Regression Matrix

**Definition 7.4**.: _Let \(A_{1},\ldots,A_{m}\in\mathbb{R}^{d\times d}\), \(U\in\mathbb{R}^{d\times k}\). We define matrix \(M\in\mathbb{R}^{m\times dk}\) as follows_ \[M_{i,*}:=\operatorname{vec}(U^{\top}A_{i}),\quad\forall i\in[m].\] **Claim 7.5**.: _The naive implementation of computing \(M\in\mathbb{R}^{m\times dk}\) takes \(m\cdot\mathcal{T}_{\mathrm{mat}}(k,d,d)\) time. Without using fast matrix multiplication, it is \(O(md^{2}k)\) time._ Proof.: For each \(i\in[m]\), computing matrix \(U^{\top}\in\mathbb{R}^{k\times d}\) times \(A_{i}\in\mathbb{R}^{d\times d}\) takes \(\mathcal{T}_{\mathrm{mat}}(k,d,d)\) time. Thus, we complete the proof. **Claim 7.6**.: _The batch implementation takes \(\mathcal{T}_{\mathrm{mat}}(k,dm,d)\) time. Without using fast matrix multiplication, it takes \(O(md^{2}k)\) time._ Proof.: We can stack all the \(A_{i}\) together and then perform a single matrix multiplication.
For example, we construct the matrix \(A\in\mathbb{R}^{d\times dm}\). Then computing \(U^{\top}A\) takes \(\mathcal{T}_{\mathrm{mat}}(k,d,dm)\) time. The above two approaches only differ when we use fast matrix multiplication.

### Our Fast Regression Solver

In this section, we provide the results of our fast regression solver. Our approach is basically as in [13]. For a detailed analysis, we refer the readers to Section 5 in [13]. **Lemma 7.7** (Main Cost Per Iteration).: _Assume \(m=\Omega(dk)\). There is an algorithm that runs in time_ \[\widetilde{O}(md^{2}k+d^{3}k^{3})\] _and outputs a \(v^{\prime}\) such that_ \[\|Mv^{\prime}-b\|_{2}\leq(1+\epsilon)\min_{v\in\mathbb{R}^{dk}}\|Mv-b\|_{2}\] Proof.: From Claim 7.6, writing down \(M\in\mathbb{R}^{m\times dk}\) takes \(O(md^{2}k)\) time. Using the fast regression solver of [13], solving the regression takes \[O((m\cdot dk+(dk)^{3})\cdot\log(\kappa(M)/\epsilon)\cdot\log^{2}(n/\delta))\] time. Combining the two steps yields the stated running time. **Lemma 7.8** (Formal version of Theorem 1.1).: _Each iteration of our algorithm takes \(\widetilde{O}(md^{2}k)\) time._ Proof.: In order to run Lemma 7.7, we need to argue that \(\kappa(M)\leq\operatorname{poly}(k,d,\kappa(W_{*}))\). This is true because \(\kappa(U)\leq O(\kappa(W_{*}))\) and the condition number of random Gaussian matrices is bounded by \(\operatorname{poly}(k,d)\). Then, applying Lemma 7.10, we can bound \(\kappa(M)\) in each iteration. Finally, we just run the standard error analysis in [13]. Thus, we should get the desired speedup. The reason we can drop the \((dk)^{3}\) term is that \(m\geq dk^{2}\).

### Straightforward Solver

Note that from the sample complexity analysis, we know that \(m=\Omega(dk)\). **Lemma 7.9**.: _Assume \(m=\Omega(dk)\). The straightforward implementation of the regression problem (Definition 7.2) takes_ \[O(md^{2}k^{2})\] _time._ Proof.: The algorithm has two steps. The first step is writing down the matrix \(M\in\mathbb{R}^{m\times dk}\); from Claim 7.6, this takes \(O(md^{2}k)\) time. The second step is solving the regression, which requires computing \(M^{\dagger}b\) (where \(M^{\dagger}\in\mathbb{R}^{dk\times m}\)), \[M^{\dagger}b=(M^{\top}M)^{-1}M^{\top}b.\] This will take time \[\mathcal{T}_{\mathrm{mat}}(dk,m,dk)+\mathcal{T}_{\mathrm{mat}}(dk,dk,dk)=O(md^{2}k^{2}+(dk)^{3})=O(md^{2}k^{2}),\] where the second step follows from \(m=\Omega(dk)\). Thus, the total time is \[md^{2}k+md^{2}k^{2}=O(md^{2}k^{2}).\]

### Condition Number

**Lemma 7.10**.: _We define \(B:=XU\in\mathbb{R}^{m\times k}\), where \(X\in\mathbb{R}^{m\times d}\) and \(U\in\mathbb{R}^{d\times k}\)._ _Then, we can rewrite \(M\in\mathbb{R}^{m\times dk}\) as_ \[\underbrace{M}_{m\times dk}=\underbrace{B}_{m\times k}\otimes\underbrace{Y}_{m\times d},\] _where \(\otimes\) is applied row-wise. Then, we know that \(\kappa(M)=\kappa(B)\cdot\kappa(Y)\leq\kappa(U)\kappa(X)\kappa(Y)\)._ Proof.: Recall \(U\in\mathbb{R}^{d\times k}\). We define \(b_{i}=U^{\top}x_{i}\) for each \(i\in[m]\). Then we have \[M_{i,*}=\operatorname{vec}(U^{\top}x_{i}y_{i}^{\top})=\operatorname{vec}(b_{i}y_{i}^{\top}).\] Thus, it implies \[M=B\otimes Y.\]
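To make the per-iteration regression step concrete, the following minimal numpy sketch is an illustrative addition (variable names and sizes are assumptions, not part of the paper): it builds \(M\) in batch form as in Claim 7.6, checks the trace identity of Eq. (25) on one row, and solves \(\min_{v}\|Mv-b\|_{2}^{2}\) with the straightforward solver in the spirit of Lemma 7.9.

```python
import numpy as np

# Minimal illustrative sketch of one regression step (Definitions 7.1-7.2,
# Lemma 7.3, Claim 7.6, Lemma 7.9). Sizes and names are assumptions for the
# example only; rank-one measurements A_i = x_i y_i^T.
rng = np.random.default_rng(1)
d, k, m = 8, 3, 200

X = rng.standard_normal((m, d))                    # rows are x_i
Y = rng.standard_normal((m, d))                    # rows are y_i
U = np.linalg.qr(rng.standard_normal((d, k)))[0]   # current orthonormal U
b = rng.standard_normal(m)                         # measurements

# Batch construction of M (Claim 7.6): row i is vec(U^T A_i) = vec((U^T x_i) y_i^T).
B = X @ U                                          # (m, k), rows are U^T x_i
M = np.einsum('ik,id->ikd', B, Y).reshape(m, k * d)

# Check the trace identity tr[A_i^T U V^T] = vec(U^T A_i)^T vec(V^T)  (Eq. (25)).
V = rng.standard_normal((d, k))
A_0 = np.outer(X[0], Y[0])
assert np.isclose(np.trace(A_0.T @ U @ V.T), M[0] @ V.T.reshape(-1))

# Straightforward solver (Lemma 7.9): ordinary least squares for min ||Mv - b||_2.
v = np.linalg.lstsq(M, b, rcond=None)[0]
V_next = v.reshape(k, d).T                          # reshape v = vec(V^T) back to V
```

The batch construction avoids an explicit loop over the \(m\) measurements, mirroring the stacking argument in the proof of Claim 7.6.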
2308.02759
Photon Management in Silicon Photovoltaic Cells: A Critical Review
With the practical efficiency of the silicon photovoltaic (PV) cell approaching its theoretical limit, pushing conversion efficiencies even higher now relies on reducing every type of power loss that can occur within the device. Limiting optical losses is therefore critical and requires effective management of incident photons in terms of how they interact with the device. Ultimately, photon management within a PV cell means engineering the device and constituent materials to maximize photon absorption within the active semiconductor and therefore reduce the number of photons lost through other means, most notably reflection and parasitic absorption. There have been great advancements in front and rear side photon management techniques in recent years. This review aims to discuss these advancements and compare the various approaches, not only in terms of increases in photogenerated current, but also their compatibility with different PV cell architectures and potential trade-offs, like increased surface recombination or challenges in scaling to high-volume manufacturing. In this review, a comprehensive discussion of a wide variety of front and rear side photon management structures is presented, with suggestions to further improve the performance already achieved. This review is unique because it not only presents the recent developments in photon management techniques, but also offers a thorough analysis of these techniques and pathways for further improvement.
Mohammad Jobayer Hossain, Mengdi Sun, Kristopher O. Davis
2023-08-05T01:16:31Z
http://arxiv.org/abs/2308.02759v2
# Photon Management in Silicon Photovoltaic Cells: A Critical Review

###### Abstract

With the practical efficiency of the silicon photovoltaic (PV) cell approaching its theoretical limit, pushing conversion efficiencies even higher now relies on reducing every type of power loss that can occur within the device. Limiting optical losses is therefore critical and requires effective management of incident photons in terms of how they interact with the device. Ultimately, photon management within a PV cell means engineering the device and constituent materials to maximize photon absorption within the active semiconductor and therefore reduce the number of photons lost through other means, most notably reflection and parasitic absorption. There have been great advancements in front and rear side photon management techniques in recent years. This review aims to discuss these advancements and compare the various approaches, not only in terms of increases in photogenerated current, but also their compatibility with different PV cell architectures and potential trade-offs, like increased surface recombination or challenges in scaling to high-volume manufacturing. In this review, a comprehensive discussion of a wide variety of front and rear side photon management structures is presented, with suggestions to further improve the performance already achieved. This review is unique because it not only presents the recent developments in photon management techniques, but also offers a thorough analysis of these techniques and pathways for further improvement.

_Keywords:_ _Light trapping, path length enhancement, photon management, recombination, silicon, PV cell_

## 1 Introduction

Photovoltaic (PV) energy conversion has now become one of the cheapest sources of electricity [1], less expensive than most fossil fuel-based resources. Sunlight is abundant on earth, and PV cells and modules directly convert incident photons into electricity using a process called the photovoltaic effect. A wide variety of materials can be used to make PV cells, including organic semiconductors, perovskites, III-V semiconductors, chalcogenides, and of course silicon (Si). Even though some of these materials are less expensive to produce and others yield higher conversion efficiencies, Si has dominated and continues to dominate the PV manufacturing sector with a current market share of approximately 95% [2]. Si has a bandgap of 1.12 eV, which is an optimal value for a single junction PV cell based on the AM 1.5G solar spectrum. However, it has an indirect bandgap that results in weak absorption near the band edge. Other materials have direct bandgaps of a similar magnitude, so why does silicon remain the dominant material? For starters, Si is the second most abundant element on earth, behind only oxygen. Secondly, many decades' worth of research, development, and production in the integrated circuits sector have led to an incredible understanding of how to work with Si and produce it at a large scale. Finally, Si-based modules have been shown to be very durable and reliable over time, with very low degradation rates reported from the field [3; 4; 5; 6]. This, combined with the relatively high efficiencies obtained and the low production costs, has contributed to the continued use of Si by PV manufacturers. Incident photons coming from the sun serve as the "fuel source" in a PV cell. The wavelengths of these photons span the ultraviolet, visible, and infrared domains. Not all the incident photons enter the cell. As illustrated in Fig.
1, some are reflected off the front surface of the cell and others are parasitically absorbed by other front surface layers of the device. Of the photons that do enter the semiconductor, not all will get absorbed and generate an electron-hole pair. Photons with energies smaller than the bandgap either don't get absorbed at all or get absorbed and converted into heat (e.g., free carrier absorption). Others may reach the rear side of the cell and get transmitted through the device or absorbed by other rear surface layers (e.g., metal contacts). Even photons that are internally reflected at the rear surface can escape out the front. Some of these losses associated with the rear side of the cell could be addressed by making the Si substrate much thicker, but cost constraints, and to a lesser extent bulk carrier recombination, prevent the use of very thick wafers.

Figure 1: Simplified illustration of the light propagation in a silicon photovoltaic cell. The figure highlights the losses in red, including reflection and absorption losses. The legend provides a breakdown of these losses, including the loss in current density due to front reflectance, the loss in current density due to escape reflectance, the loss in current density due to parasitic absorption at the front side, and the loss in current density due to parasitic absorption at the rear side. This visualization provides valuable insights into the mechanisms that affect the efficiency of the photovoltaic cell, helping to identify areas for optimization and improvement.

The efficiency of a PV cell is now one of the driving factors that governs the levelized cost of energy (LCOE). Cell manufacturing costs have gotten so low that a larger fraction of the total PV system costs can now be attributed to things like the glass and aluminum needed to make the modules, the mounting hardware, the wiring and conduit, and the installation labor. Many of these factors scale with area, so an increase in cell efficiency can yield a lower cost per watt at the system level [7]. The current world record for silicon PV cell efficiency is 26.8% [8; 9] using a heterojunction structure, while the theoretical limit of such a cell, known as the Shockley-Queisser limit, is approximately 30% [10] under the AM 1.5G solar spectrum. By implementing photon management techniques and minimizing losses such as recombination, resistive and reflection losses, the gap between the current efficiency levels and the Shockley-Queisser limit can be significantly reduced. Recent advancements in photon management techniques for Si PV cells have led to improved theoretical understanding, fabrication methods, and cost reduction. These designs range from nano-texturizing the active material to using resonators of various materials for front-side light trapping, and diffraction gratings and planar metallic and dielectric layers for the rear side. These methods effectively trap light, resulting in an increase in photogenerated current density. However, these structures often come with drawbacks such as high surface recombination loss, which leads to low open-circuit voltage and fill factor, complex fabrication processes, higher contact resistivity and additional implementation costs, making them less suitable for industrial PV cell manufacturing. While many of these novel designs show promise in terms of photogenerated current density, other performance parameters have yet to be fully explored.
Recent advancements in light trapping structures have led to a growing need for a comprehensive review of photon management in silicon PV cells within the research community. In our search for such papers, we have found several review papers on the topic, including those focusing on nanoscale photon management in silicon PV [11; 12; 13], nanostructured silicon PV [14], and thin silicon PV cells [15]. While these papers provide thorough analysis of different structures, they lack an examination of the various loss mechanisms and implementation feasibility associated with these structures. Additionally, many of these articles only focus on photon management from one side of the cell, either the front or rear. In this article, we aim to provide a comprehensive review of existing photon management schemes, with a particular emphasis on both the front and rear sides of the cell. Silicon PV cells are diverse both in terms of how they are designed and manufactured [16; 17; 18; 19]. This variety takes the form of different cell architectures, etching and surface preparation processes (e.g., anisotropic wet texturing [20; 21; 22], isotropic wet texturing [23; 24; 25; 26], surface cleaning [27; 28; 29; 30; 31; 32; 33]), doping processes [34; 35; 36; 37; 38; 39], materials and deposition methods for passivation layers and optical coatings [40; 41; 42; 43; 44; 45], and various metallization materials and manufacturing processes [46; 47; 48; 49; 50; 51; 52; 53; 54]. The major cell architectures include: aluminum back surface field (AI-BSF) cells [55; 56], the dominant architecture until recently; passivated emitter rear contact (PERC) cells [57; 58; 59; 60; 61], the newly emerged dominant architecture; silicon heterojunctions (SHJ) or heterojunction with intrinsic thin-layer (HIT) cells [62; 63; 64; 65; 66; 67], a high efficiency technology that has a smaller market share due to higher manufacturing costs; and polysilicon-based passivating contact cells, a concept introduced many years ago [68], but recently attracting a lot of attention and development under the name of polycrystalline silicon on oxide contacts (POLO) [69; 70; 71], monofacial poly-Si contacts (monoPoly) [72; 73], or most frequently as tunnel oxide passivated contacts (TOPCon) [74; 75; 76; 77; 78; 79]. The HIT and poly-Si architectures both feature carrier-selective, passivating contacts aimed at suppressing contact recombination. Other carrier-selective, passivating contact technologies exist and have their own advantages and disadvantages, but are not the focus of this review as they are still in the early stages of development. Additionally, interdigitated back contact (IBC) cells [80; 81; 82] are another cell structure that eliminates the use of metallization on the front side of the cell, thereby reducing the optical loss associated with light being reflected off these metal contacts. The IBC concept can be combined with other approaches, including heterojunctions and polysilicon-based passivating contacts to form very high efficiency cells, as in the current record holding 26.8% cell that is an IBC heterojunction [8; 9]. In this article we consider the major photon management schemes, from the more conventional approaches, like the use of antireflection coatings (ARC), to more novel and recently developed approaches for these various cell architectures. Section 2 explains the underlying physics and materials properties that dictate the various energy conversion losses in a cell. 
These are categorized as optical (i.e., photons lost), recombination (i.e., charge carriers lost through recombination), and resistive (e.g., voltage drop via inefficient charge transport). Section 3 provides an up-to-date review of the major photon management approaches in terms of optical benefits, as well as their impact on recombination and resistive losses when incorporated into cells. Section 4 presents a comprehensive and objective analysis of the optical performance of the various schemes using optical simulations of the reflectance and quantum efficiency curves. In Section 5, a critical examination of newly developed photon management structures that have not yet been implemented in practical photovoltaic cells is conducted. The merit of these structures is evaluated based on the findings of previous sections. Finally, Section 6 presents the overall conclusions.

## 2 Loss Mechanisms in Photovoltaic Cells

The efficiency of a PV cell depends on factors including its architecture, internal quantum efficiency, the quality of the active material (i.e., Si in our discussion), contact structure, photon management schemes, as well as any loss mechanisms present in the cell [83; 56; 84]. In studying how to effectively manage photons, it is crucial to recognize the various losses that can occur in PV cells, as the reflection loss, recombination loss, and resistive loss are all interconnected. A particular photon management approach may yield an optical gain, but if it also leads to increased recombination or resistive losses, that must be taken into account. Therefore, the optical design of a PV cell must prioritize increasing photocarrier generation while also minimizing recombination and resistive losses for practical applications. Light trapping structures on both the front and rear sides of the cell are crucial for trapping incident photons within the active material and boosting photocurrent. The photogenerated current density \(J_{G}\), which is ideally equal to the short-circuit current density of a cell \(J_{SC}\), is expressed by the following equation,

\[J_{G}=J_{SC}=e\int\left[1-R(\lambda)\right]\Phi_{inc}(\lambda)\,IQE(\lambda)\,d\lambda \tag{1}\]

Here \(\Phi_{inc}\) is the incident photon flux, IQE is the internal quantum efficiency, and R is the percentage of reflected light (i.e., the reflection loss). R is the summation of both the front reflectance, \(R_{f}\), and the escape reflectance, \(R_{esc}\) (light that enters the cell through the front side, is not absorbed by Si, is reflected from the rear side, is not absorbed by Si again, and then exits the cell through the front side). For a typical silicon PV cell, \(R_{esc}\) becomes significant near the band edge. Light trapping structures on the front side reduce the front reflectance, which in turn leads to an increase in \(J_{SC}\). Similarly, if steps are taken to reduce the escape reflectance on the rear side of the cell, this also promotes light absorption in the active region of the cell. A good proportion of the incident photons are not captured by the cell due to shadowing caused by the front side metal contacts, such as busbars and gridlines, which are utilized for the efficient collection of photogenerated current. The proportion of contact area varies greatly depending on the type of cell and its contact structure. In industrial PV cells, the front contact area can range from 4% to 5% [85].
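As an illustration of how Eq. (1) is evaluated in practice, the short sketch below numerically integrates assumed (synthetic) reflectance, photon flux, and IQE curves; it is a minimal example added here for clarity, not data from this review, and a real evaluation would use measured spectra and the tabulated AM 1.5G photon flux.

```python
import numpy as np

# Minimal numerical evaluation of Eq. (1) with synthetic spectra (assumed,
# for illustration only). In practice, R(lambda) and IQE(lambda) come from
# measurements and Phi_inc(lambda) from the tabulated AM1.5G spectrum.
e = 1.602e-19                                   # elementary charge [C]
wl = np.linspace(300e-9, 1200e-9, 500)          # wavelength grid [m]

phi_inc = 4e27 * np.exp(-((wl - 700e-9) / 3e-7) ** 2)          # photons m^-2 s^-1 m^-1 (assumed)
R = 0.05 + 0.25 * ((wl - 600e-9) / 6e-7) ** 2                   # assumed total reflectance
IQE = np.clip(1.0 - np.exp(-(1200e-9 - wl) / 1e-7), 0.0, 1.0)   # assumed IQE, drops near band edge

J_sc = e * np.trapz((1.0 - R) * phi_inc * IQE, wl)              # A/m^2
print(f"J_SC = {J_sc / 10:.1f} mA/cm^2")                        # 1 A/m^2 = 0.1 mA/cm^2
```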
Additional optical losses can occur in the non-active regions of the cell, such as parasitic absorption in the emitter and antireflection coating layer, but these losses are relatively minor compared to the other types of optical losses previously mentioned [86]. Silicon is an indirect band-gap material, which causes a rapid drop in its absorption coefficient as the incident wavelength approaches the band gap energy (i.e., 1.1 eV or 1127 nm). This means that the thickness required to absorb all qualified photons (absorption length, L) increases rapidly with wavelength. For example, at 800 nm wavelength, L \(\approx\) 10 \(\mu\)m, while at 1100 nm wavelength, L \(\approx\) 3 mm [87]. A 6 mm thick silicon wafer can absorb all the light with energy above the band-gap, but it is not practical in terms of carrier transport or cost, i.e., photogenerated electrons and holes now have to travel a greater distance before they can be extracted, and by doing so they lose their energy through nonradiative recombination processes in the presence of crystalline imperfections in the bulk [88]. Rear side photon management improves absorption by providing additional chances for long wavelength photons to be absorbed, measured by the path length enhancement. A path length enhancement factor of up to 10 can be achieved [89; 90]. Therefore, both front and rear side light trapping are necessary for efficient use of sunlight. Recombination loss has a major impact on a cell's efficiency. Crystalline defects, dislocation centers, and trap states in the Si wafer act as Shockley-Read-Hall recombination centers. Excessive doping of the emitter can also lead to Auger recombination. Some photon management schemes may improve optical performance but also increase surface recombination by exposing more bare surface area. Surface recombination loss can be reduced by passivating dangling bonds with another material or by using a carrier selective contact structure [91]. A good PV cell should have both surface passivation and light trapping mechanisms. The thickness of the wafer used in PV cell manufacturing is also important. A thinner wafer requires a strong photon management mechanism to ensure absorption. However, a thinner cell with well-passivated surfaces and high photogenerated current density may have a higher open circuit voltage (_V\({}_{OC}\)_) due to a lower dark saturation current density _J\({}_{0}\)_, as evident from equation (2) [92; 93].

\[V_{OC}=\frac{k_{B}T}{e}\ln\left(\frac{J_{SC}}{J_{0}}+1\right) \tag{2}\]

Here _k\({}_{B}\)_ is the Boltzmann constant, T is the temperature and e is the charge of an electron. Thinner cells are harder to manufacture, can break during processing, and are more difficult to passivate. Thicker wafers are easier to passivate and trap light, but they have more defects and trap states that increase _J\({}_{0}\)_, and they also increase material costs. It is important to choose the right thickness while considering these trade-offs. Now, the current density vs. voltage (J-V) characteristic of the cell is influenced by all the parameters discussed above.

\[J=J_{SC}-J_{0}\left[\exp\left(\frac{V+JR_{S}}{nV_{T}}\right)-1\right] \tag{3}\]

Here n is the diode ideality factor, _R\({}_{S}\)_ is the series resistance, and _V\({}_{T}\)_ is the thermal voltage. The efficiency of the cell, \(\eta\), can be expressed as

\[\eta=\frac{V_{OC}\cdot J_{SC}\cdot FF}{P_{in}} \tag{4}\]

Here FF is the fill factor of the cell, which expresses how close the J-V curve is to a rectangle and is greatly influenced by the recombination and resistive losses.
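To connect Eqs. (2)-(4), the following sketch evaluates a simple one-diode J-V curve with assumed parameter values (illustrative numbers added here, not taken from any cell in this review) and extracts the fill factor and efficiency.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative one-diode evaluation of Eqs. (2)-(4); all parameter values are
# assumptions for the example, not measured data.
q, k_B, T = 1.602e-19, 1.381e-23, 298.0
V_T = k_B * T / q                        # thermal voltage [V]
J_sc, J_0 = 40e-3, 1e-13                 # A/cm^2 (assumed)
n, R_s = 1.0, 0.5                        # ideality factor, series resistance [ohm cm^2]
P_in = 0.1                               # AM1.5G incident power [W/cm^2]

V_oc = k_B * T / q * np.log(J_sc / J_0 + 1)     # Eq. (2)

def current(V):
    # Solve the implicit Eq. (3): J = J_sc - J_0 * [exp((V + J*R_s)/(n*V_T)) - 1]
    f = lambda J: J - (J_sc - J_0 * (np.exp((V + J * R_s) / (n * V_T)) - 1.0))
    return brentq(f, -1.0, J_sc * 1.01)

V = np.linspace(0.0, V_oc, 400)
J = np.array([current(v) for v in V])
P_max = np.max(J * V)                    # maximum power point [W/cm^2]
FF = P_max / (V_oc * J_sc)               # fill factor
eta = V_oc * J_sc * FF / P_in            # Eq. (4)
print(f"V_oc = {V_oc:.3f} V, FF = {FF:.3f}, efficiency = {eta*100:.1f} %")
```

Increasing the assumed series resistance in this sketch lowers the fill factor and efficiency, which mirrors the trade-off discussed above between optical gains and added resistive or recombination losses.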
Designing and implementing an ideal photon management scheme is complex, as it involves multiple factors, and it can only be considered useful when it improves the overall efficiency of the cell. In this article, we review contemporary photon management schemes and evaluate their performance based on the parameters discussed above.

## 3 Progress in Photon Management

The major challenges for achieving the Shockley-Queisser limit in crystalline silicon (c-Si) PV cells are incomplete light trapping and parasitic optical absorption [1]. Various photon management techniques have been studied to address these issues, including reducing losses, enhancing the optical path, and increasing the probability of photon absorption [2]. However, not all photon management techniques are suitable for all types of PV cells, such as aluminum back surface field (Al-BSF), heterojunction (HIT) [64], interdigitated back contact (IBC) [82], passivated emitter and rear contact (PERC) [59], IBC with polycrystalline silicon on oxide (POLO) contact [69], tunnel oxide passivated contact (TOPCon) [79] and unpassivated cells. The applicability of a particular photon management scheme depends on the contact structure and surface passivation of the cell type.

Figure 2: Impact of front side photon management structures and cell types on the short-circuit current density (\(J_{SC}\)), open-circuit voltage (\(V_{OC}\)), and efficiency of silicon photovoltaic cells. The horizontal and vertical dotted lines represent the highest achievable \(J_{SC}\) and \(V_{OC}\) values, respectively. Performance parameters for each cell are provided in Table 1, and the data point annotations in the figure correspond to the cell types and the serial numbers in that table.

The performance of different types of PV cells is shown in Fig. 2, with the overall efficiency (\(\eta\)) represented by the colorbar. The \(V_{oc}\) and \(J_{sc}\) of the cells are shown on the x and y axes respectively, and their performance parameters are listed in Table 1. The shape of the data points in the figure represents the front side photon management structures, and a white circle inside some of the data points means that their rear side is structured (i.e., not planar). The label of the data points includes the cell type and their serial number (SN) in Table 1. The vertical and horizontal dotted lines represent the maximum \(V_{oc}\) and \(J_{sc}\) values attainable from a c-Si PV cell, respectively. The following sections will discuss these data samples in detail, providing a summary and evaluation of the optical and electrical performance of these PV cells in relation to different photon management approaches.

### Front Side Photon Management Approaches

Various methods have been utilized to improve the transparency of the photovoltaic cells on the front side, such as applying an antireflective coating (ARC), texturing the surface, and utilizing resonators. A graphical representation of these techniques is presented in Fig. 3(a). A single layer dielectric antireflective coating (ARC) is a straightforward method to reduce reflection losses. Silicon nitride (SiNx) is commonly used as the coating material [95; 96] due to its lower refractive index (\(\approx\) 2) compared to crystalline silicon (\(\approx\) 4). A single layer ARC has the potential to achieve 100% optical transmission at the corresponding wavelength. Due to its simple fabrication process and low cost, dielectric ARC is widely used in commercial PV cells.
However, it should be noted that the optical performance of a single layer ARC is dependent on wavelength and polarization, and it has a limited angular optical response, which can be a drawback in practical applications [97]. To broaden the spectral and angular responses of ARCs, various strategies have been examined. These methods can currently be divided into two categories: patterning on layers with different materials and direct patterning on the silicon wafer (surface texturing).

Figure 3: Schematic representation of different photon management structures employed on the front and rear sides of a silicon solar cell (reproduced from [94]).

In these architectures, nano or microstructures with gradually varying geometries are positioned above the silicon wafer, resulting in higher transmission. Unlike traditional single layer ARC, these approaches exhibit an omnidirectional and broadband optical response due to the graded index distribution and thus destructive interference of a wide range of wavelengths, which is crucial for PV cells. Antireflective nanostructures with different shapes have been studied, including nanowires, nanopillars, nanodomes, nanospheres, nanorods, and nanocones. For example, Jeong _et al._ investigated the performance of an ultra-thin silicon nanocone PV cell [98]. In this study, Si nanocones were fabricated on top of a Si substrate, and interdigitated Al contacts were placed on the rear side to prevent shadowing losses. It was reported that more than 80% external quantum efficiency (EQE) was demonstrated for 400-800 nm incident wavelengths. According to the current density-voltage relation, a _Voc_ of 0.623 V and _Jsc_ of 29 _mA/cm\({}^{2}\)_ were achieved, resulting in an efficiency of 13.7%. In addition to the antireflection effect, enhanced light scattering from these structures also contributed to the light trapping. Nanowires are another effective geometry in light management. Lee _et al._ reported a radial junction PV cell with hybrid silicon wire structures [99]. In this configuration, p-n junctions were formed along the radius of silicon microwire arrays. The n-type microwires were deposited on a thick Czochralski silicon (Cz-Si) wafer while p-type nanowires were deposited on top of them. This design shortened the traveling path of the charge carriers, thus reducing recombination losses. The combined effects of graded index structures and light trapping between the wires significantly reduced reflection losses, resulting in 97% absorption of the total incident light. In this configuration, a _Voc_ of 0.584 V and a _Jsc_ of 39.5 _mA/cm\({}^{2}\)_ were achieved, resulting in an efficiency of 17.6%. Surface texturing is another method of enhancing the optical performance by increasing transmission and reducing reflection losses. The main difference between surface texturing and antireflective nanostructures is that surface texturing is directly etched or deposited on the substrate, without adding a new layer with different materials. Therefore, surface texturing has a simple fabrication process. Similar to antireflective nanostructures, enhanced transmission into the cell is achieved through altered internal angles and a graded refractive index caused by the gradually varying geometry. For example, Kafle _et al._[100] reported a nanotextured multicrystalline (mc-Si) PV cell in which nanotextures were etched on a 195 \(\mu\)m thick p-type mc-Si wafer by dry exothermic plasma etching.
An Al-BSF layer formed through screen printing Al paste and annealing was placed on the rear side of the cell as a metallic back contact. The structure achieved 0.627 V _Voc_ and 36.7 _mA/cm\({}^{2}\)Jsc_ simultaneously, resulting in 18% efficiency. Fellmath _et al._ reported another Al-BSF c-Si PV cell with pyramid texturing on the front surface [101]. In this study, 0.648 V _Voc_ and 38.6 _mA/cm\({}^{2}\)Jsc_ are reported, resulting in an overall efficiency of 20.1%. A cost-effective form of nanostructured silicon is called black silicon (b-Si) due to its low reflection, which is another candidate for light trapping. B-Si has a wide absorption spectrum, especially in the visible and infrared range, resulting in high efficiency. Savin _et al._ reported a b-Si PV cell with interdigitated back contacts (IBC) that mitigated reflection losses on the front side [102]. The b-Si was fabricated by deep reactive ion etching (DRIE). Due to the needle-like silicon textures with high aspect ratio, an effective medium was formed. As a result, most of the reflection loss was eliminated. In this cell, 0.665 V _Voc_ and 42.2 _mA/cm\({}^{2}\)Jsc_ were achieved, leading to 22.1% efficiency. An alternative approach of improving the device performance is incorporating optical resonators. When nanoparticles are embedded inside the PV cell, the field distribution is highly confined in the active layer by the excited resonant modes, thus enhancing the absorption of the incident photons. However, additional plasmonic absorptions can be induced by metallic nanoparticles. Various materials have been adopted as nano resonators, including dielectric (Mie resonance) and metallic (plasmon resonance) materials. By applying particles with different sizes, multiple resonant peaks are excited, resulting in broadband optical response. Additionally, these nanoparticles also scatter light, extending the length of the optical path inside the active region. For instance, Yao _et al._[103] reported a broadband spherical nanoshell PV cell. In this configuration, a single layer of silicon nanospheres with 50 nm thickness was deposited on a 1 \(\mu\)m silicon substrate. The incident light was coupled into whispering-gallery modes (WGMs) inside the spheres and transmitted into the substrate. Multiple resonant modes with different resonant peaks were excited, leading to broadband optical absorption. In this arrangement, 0.51 V \(V_{oc}\)and 20.1 \(mA/cm^{2}J_{Sc}\)were demonstrated, resulting in 8.1% efficiency. It is important to note that this result was based on the theoretical simulation of a p-i-n PV cell without additional efficiency enhancement approaches, and the generation of photoelectrons were approximated, leading to a relatively low efficiency. ### Rear Side techniques The incident light may exit the PV cell when it is not fully absorbed in the active region and minority carriers may suffer from surface recombination on the rear side of the cell, both of which limit the efficiency of the cell. To improve the performance of the cell, optimizations on the rear side are required. 
Similar to the front side, both optical and electrical approaches have been employed, including full and partial covering, such as aluminum back \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline SN & Front Structure & Rear Structure & Thickness & Cell Type & \(V_{oc}\) & \(J_{Sc}\) & FF & Efficiency & Reference \\ & & & of Si (\(\mu\)m) & & (V) & (\(mA/cm^{2}\)) & (\%) & (\%) & \\ \hline 1 & MicroSi & Planar & 165 & Heterojunction-IBC & 0.744 & 42.3 & 83.57 & 26.3 & [104] \\ \hline 2 & MicroSi & Planar & 150 & Heterojunction-IBC & 0.74 & 41.8 & 82.76 & 25.6 & [105] \\ \hline 3 & Planar & Planar & 130 & IBC & 0.737 & 41.3 & 82.79 & 25.2 & [106] \\ \hline 4 & MicroSi & MicroSi & 160 & Heterojunction & 0.738 & 40.8 & 83.36 & 25.1 & [107] \\ \hline 5 & MicroSi & MicroSi & 98 & Heterojunction & 0.75 & 39.5 & 83.37 & 24.7 & [108] \\ \hline 6 & MicroSi & Planar & 180 & Al-BSF & 0.618 & 36.3 & 79.34 & 17.8 & [96] \\ \hline 7 & MicroSi & Planar & 150 & Al-BSF & 0.623 & 35.7 & 75.53 & 16.8 & [95] \\ \hline 8 & MicroSi & Planar & 150 & Al-BSF & 0.641 & 36.6 & 77.58 & 18.2 & [95] \\ \hline 9 & MicroSi & Planar & 150 & Al-BSF & 0.626 & 36.9 & 75.76 & 17.5 & [95] \\ \hline 10 & NanoSi & Planar & 195 & mAI-BSF & 0.627 & 35.7 & 80.41 & 18 & [100] \\ \hline 11 & MicroSi & Planar & 225 & Al-BSF & 0.648 & 38.6 & 80.36 & 20.1 & [101] \\ \hline 12 & MicroSi & Planar & 75 & Al-BSF & 0.502 & 29.5 & 62.39 & 9.24 & [109] \\ \hline 13 & Planar & NanoDi & 0.450 & Thin-aSi & 0.83 & 14.07 & 56.08 & 6.55 & [110] \\ \hline 14 & Planar & Planar & 280 & Al-BSF & 0.604 & 27.9 & 62.31 & 10.5 & [111] \\ \hline 15 & NanoDi & Planar & - & IBC & 0.609 & 38.45 & 59.96 & 14.04 & [112] \\ \hline 16 & MicroSi & Planar & - & IBC & 0.605 & 32.48 & 65.19 & 12.81 & [112] \\ \hline 17 & NanoSi & Planar & 10 & Unpassivated & 0.5 & 20.59 & 69.84 & 7.19 & [113] \\ \hline 18 & MicroSi & Planar & - & IBC & 0.577 & 26.42 & 58.84 & 8.97 & [112] \\ \hline 19 & NanoSi & Planar & 200 & Al-BSF & 0.584 & 39.5 & 76.29 & 17.6 & [99] \\ \hline 20 & NanoSi & Planar & 10 & Al-BSF & 0.623 & 29 & 75.83 & 13.7 & [98] \\ \hline \end{tabular} \end{table} Table 1: Cell Performance Parameters \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline 21 & NanoSi & Planar & - & Al-BSF & 0.567 & 32.2 & 52.09 & 9.51 & [114] \\ \hline 22 & NanoSi & Planar & 20 & Unpassivated & 0.519 & 16.82 & 60.71 & 5.3 & [115] \\ \hline 23 & NanoSi & Planar & 8 & Unpassivated & 0.525 & 16.45 & 55.93 & 4.83 & [115] \\ \hline 24 & Planar & Planar & - & Unpassivated & 0.26 & 23.9 & 54.71 & 3.4 & [116] \\ \hline 25 & Planar & Planar & 0.225 & Unpassivated & 0.29 & 3.5 & 49.26 & 0.5 & [117] \\ \hline 26 & Planar & Planar & - & Unpassivated & 0.29 & 4.28 & 37.06 & 0.46 & [118] \\ \hline 27 & NanoSi & NanoSi & 0.280 & Thin-aSi & 0.75 & 17.5 & 44.95 & 5.9 & [119] \\ \hline 28 & Planar & MicroSi & 675 & IFC & 0.62 & 27.5 & 77.42 & 13.2 & [87] \\ \hline 29 & MicroSi & NanoSi & 20 & Thin-mcSi & 0.627 & 31.3 & 77.45 & 15.2 & [120] \\ \hline 30 & MicroSi & Planar & - & PERC & 0.661 & 39.8 & 80.58 & 21.2 & [121] \\ \hline 31 & MicroSi & Planar & - & PERC & 0.664 & 39.18 & 79.95 & 20.8 & [122] \\ \hline 32 & MicroSi & MicroSi & - & PERC & 0.684 & 40.54 & 81.54 & 22.61 & [123] \\ \hline 33 & MicroSi & Planar & - & PERC & 0.696 & 40.3 & 81.28 & 22.8 & [57] \\ \hline 34 & MicroSi & Planar & - & TMO-CSC & 0.629 & 23.7 & 82.51 & 12.3 & [124] \\ \hline 35 & MicroSi & Planar & 180 & PERC & 0.687 & 41.81 & 82.85 & 23.83 & [125] \\ \hline 36 & MicroSi & Planar & 290 & PERC & 0.727 & 42.6 & 84.27 & 26.1 & 
[126] \\ \hline 37 & MicroSi & Planar & 10 & PERC & 0.589 & 33.9 & 78.63 & 15.7 & [127] \\ \hline 38 & MicroSi & Planar & 43 & PERC & 0.65 & 37.8 & 77.74 & 19.1 & [128] \\ \hline 39 & MicroSi & Planar & 180 & TMO-CSC & 0.659 & 40.8 & 79.22 & 21.3 & [129] \\ \hline 40 & MicroSi & MicroSi & 200 & TMO-CSC & 0.724 & 36 & 74.03 & 19.3 & [130] \\ \hline SN & Front Structure & Rear Structure & Thickness & Cell Type & _Voc_ & _Jsc_ & FF & Efficiency & Reference \\ & & of Si (\(\mu\)m) & & (V) & (\(m\)/\(cm^{2}\)) & (\%) & (\%) & \\ \hline 41 & MicroSi & Planar & 200 & TMO-CSC & 0.632 & 38.8 & 81.97 & 20.1 & [131] \\ \hline 42 & MicroSi & MicroSi & 230 & Heterojunction & 0.728 & 40.5 & 81.09 & 23.91 & [132] \\ \hline 43 & MicroSi & MicroSi & 150 & Heterojunction & 0.739 & 38.8 & 80.60 & 23.11 & [132] \\ \hline 44 & MicroSi & Planar & 155 & TMO-CSC & 0.681 & 39.6 & 80.84 & 21.8 & [133] \\ \hline 45 & MicroSi & Planar & 175 & TMO-CSC & 0.676 & 39.6 & 80.69 & 21.6 & [134] \\ \hline 46 & NanoDi & Planar & 20 & Gr-Si Heterojunction & 0.51 & 28 & 61.62 & 8.8 & [135] \\ \hline 47 & NanoDi & Planar & \(>\) 0.032 & Thin-aSi & 0.885 & 9.24 & 60.90 & 4.98 & [136] \\ \hline 48 & NanoDi & Planar & 50 & Al-BSF & 0.571 & 35.11 & 79.21 & 15.88 & [137] \\ \hline 49 & NanoSi & Planar & - & Al-BSF & 0.644 & 38.77 & 79.30 & 19.8 & [138] \\ \hline 50 & NanoDi & Planar & - & Al-BSF & 0.56 & 43.5 & 68.14 & 16.6 & [139] \\ \hline 51 & NanoDi & Planar & - & Al-BSF & 0.61 & 36.75 & 74.90 & 16.79 & [140] \\ \hline 52 & NanoSi & Planar & 200 & Al-BSF & 0.635 & 36.74 & 79.81 & 18.62 & [141] \\ \hline 53 & MicroSi & MicroSi & 200 & Heterojunction & 0.73 & 37.67 & 79.42 & 21.84 & [142] \\ \hline \end{tabular} surface field (Al-BSF) and passivated emitter and rear cell (PERC), which act as a rear side passivation layer that keeps the minority carriers away from the highly recombining rear surface, and interdigitated back contacts (IBC) cell that place the metal contacts on the rear side of the cell to avoid shading losses on the front side. Additionally, resonators and grating structures have been reported in some configurations to boost the overall performance by exciting resonant modes. In practice, combined techniques are employed to further improve the efficiency. In Al-BSF PV cells, an Al back reflector is printed on the rear side of the silicon wafer to recover any transmitted light. By increasing the optical path of the incident photons, the chances of absorption in the active region are increased. Additionally, a passivation layer doped with phosphorus is added on the front side of the cell, decreasing the resistance at the metal-silicon interfaces and reducing the impact of surface defects and recombination losses. A p-type Al-BSF silicon PV cell was reported by Vermang _et al._[95] in which the performance of various types of Al-BSF PV cells with blistered (bubble-like) rear surface passivation were compared. It was reported that Al-BSF PV cells achieved a high _Voc_ of 0.627 V, 36.9 _mA/cm\({}^{2}\)Jsc_ and 17.4% efficiency. Full Al-BSF cells achieved 0.623 _V Voc_, 35.7 _mA/cm\({}^{2}\)Jsc_ and 16.8% efficiency. It was demonstrated that local Al-BSF PV cells exhibited the best performance among other types, with 0.641 _V Voc_, 38.7 _mA/cm\({}^{2}\)Jsc_ and 18.2% efficiency reported. It was shown that local Al-BSF cells provide good performance due to enhanced reflection on the rear side and improved surface passivation. 
In the passivated emitter and rear cell (PERC), the shading losses are significantly reduced by relocating the contacts onto the rear side of the cell. Localized passivation contacts are added on the rear side to reduce the recombination loss and increase back-reflection. Unlike Al-BSF PV cells, localized back surface fields are printed above the rear contacts as back reflectors to enhance the absorption of the incident photons. For example, Hannebauer _et al._ reported a fine-line printed PERC PV cell [147]. In this study, a commercial PERC cell with 5 busbar front grids and fineline-printed Ag fingers was evaluated. It was reported that shading losses were reduced to 4% compared to 5.8% for conventional 3 busbar structures. A high _Voc_ of 0.661 V and _Jsc_ of 39.8 _mA/cm\({}^{2}\)_ were achieved, resulting in 21.2% efficiency. Huang _et al._ reported a PERC cell with ALD aluminum rear surface passivation [122]. A 190 \(\mu\)m thick p-type Cz-Si wafer is used as the substrate. Excellent performance of 0.664 V _Voc_ and 39.18 _mA/cm\({}^{2}\)Jsc_ was reported, resulting in an overall efficiency of 20.8%. Additionally, Deng _et al._ reported a fully screen-printed PERC cell with high efficiency [123]. In this study, the silicon wafer had a thickness of 175 \(\mu\)m. The increased efficiency was mainly due to the reduced resistivity of the contacts and front finger lines. It was reported that 0.684 V _Voc_ and 40.48 _mA/cm\({}^{2}\)Jsc_ were achieved, resulting in a high efficiency of 22.61%. In interdigitated back contact (IBC) PV cells, the metallic contacts are placed on the rear side of the cells in order to eliminate shading losses on the front side. These contacts also act as back reflectors. Both sides of the silicon wafer are etched with surface textures, leading to a longer optical path that increases the absorption of photons in the active region. Additionally, a thin layer of the same material (a-Si) is placed between the silicon wafer and a doped a-Si layer to passivate the interface on the rear side of the cell, which improves the total quantum efficiency. For example, Smith _et al._ have reported an IBC PV cell with total area efficiency above 25% [106]. Contact pads and busbars were placed on the bottom of a 130 \(\mu\)m thick c-Si wafer. As a commercial product, both high optical and electrical performances were reported. It was demonstrated that 0.737 V _V\({}_{OC}\)_ and 41.3 _mA/cm\({}^{2}\)J\({}_{SC}\)_ were achieved by the cell, resulting in an overall 25.2% efficiency. Reductions in edge recombination losses and series resistance were also reported in comparison to previous products. Yoshikawa _et al._ reported a heterojunction interdigitated back contact (heterojunction-IBC) silicon PV cell with pyramid texturing [104]. In their study, microscale textured pyramid structures were fabricated on a 165 \(\mu\)m thick silicon substrate by anisotropic etching. The cell achieved 0.744 V _V\({}_{OC}\)_ and 42.3 _mA/cm\({}^{2}\)J\({}_{SC}\)_, resulting in an extraordinary efficiency of 26.3%. Another configuration incorporating a POLO-IBC PV cell was reported by Hollemann _et al._[38]. In this study, passivating POLO (poly-silicon on oxide junction) contacts are placed on the rear side of the silicon PV cells, as shown in figure 8(a). Efficiencies of 26.1% and 24.9% were obtained on p-type FZ wafers with resistivities of 1.3 \(\Omega\)-cm and 80 \(\Omega\)-cm, respectively, and 24.6% on an n-type wafer with 2 \(\Omega\)-cm resistivity.
The 26.1% efficient cell exhibited a _J\({}_{SC}\)_ of 42.6 _mA/cm\({}^{2}\)_ and a _V\({}_{OC}\)_ of 0.727 V. Infrared lifetime mapping showed potential for further improvement by reducing losses from series resistance, perimeter recombination, process-induced degradation, and bulk recombination. Nanostructures, such as nanocones, are also used for light trapping on the rear side. For example, in a study by Zhu _et al._, a-Si:H PV cells with nanocone back reflectors were reported [119]. In this study, a-Si nanocones on the scale of hundreds of nanometers were deposited on the substrate with the same material, forming periodic nanodome structures under a p-i-n a-Si layer sandwiched between transparent conducting oxide (TCO) layers. The tapered shape of the nanodomes generated excellent antireflection properties, and the matching of the a-Si index with air greatly reduced front-side reflection loss. Additionally, incident light scattering was enhanced inside the plane, resulting in an enhanced optical path and improved absorption. This design yielded a _V\({}_{OC}\)_ of 0.75 V and a _J\({}_{SC}\)_ of 17.5 _mA/cm\({}^{2}\)_, resulting in an efficiency of 5.9%. Grating structures on the rear side of a cell have been studied for their ability to support resonant modes and scatter light for improved optical path length. Zeng _et al._[87] reported a hybrid grating-photonic crystal back reflector silicon PV cell. The design included a reflection grating on the rear side of the cell and a 1D distributed Bragg reflector (DBR) layer beneath the grating. The cell thickness was 675 \(\mu\)m, with a metallic contact on the front side. The DBR achieved over 99.8% reflectance in the wavelength range of 800-1100 nm. This design resulted in a _V\({}_{OC}\)_ of 0.62 V and a _J\({}_{SC}\)_ of 27.5 _mA/cm\({}^{2}\)_, yielding an efficiency of 13.2%. Resonant modes also improve light harvesting on the rear side of a PV cell. Tu _et al._ reported using double-wall carbon nanotubes (DWCNTs) in amorphous silicon (a-Si) PV cells [110]. The DWCNTs were spin-coated on Ti/Ag back contacts to excite plasmon resonances and enhance light scattering in the range of 589-700 nm. By adjusting the ratio of DWCNTs to polymers, the resonance region was adapted to the desired wavelength range. The reported _V\({}_{OC}\)_ and _J\({}_{SC}\)_ were 0.83 V and 14.07 _mA/cm\({}^{2}\)_, respectively, yielding an efficiency of 6.55%.

### Comprehensive Analysis of Front and Rear Side Photon Management Techniques

Fig. 2 displays performance parameters of various recently made Si PV cells, including their corresponding photon management techniques and cell types. The maximum attainable _V\({}_{OC}\)_ and _J\({}_{SC}\)_ from a c-Si PV cell are shown as dotted lines. Table 1 provides additional information on these cells. The importance of surface passivation is evident in the unpassivated PV cells represented by the data points U17, U22, and U23, which have nanosilicon structures on the front side and a planar rear side. Poor passivation of the front nanostructures resulted in recombination loss and reduced efficiency. U24, U25, and U26 are Si nanowire-based PV cells with no notable photon management structures. These cells have a low voltage, likely due to the reduced bandgap energy of the Si nanowires, which results in more band-to-band transitions and hence a higher _JSC_ for U24. However, unpassivated surfaces caused some loss and reduced the final _JSC_.
The other two nanowire cells probably had higher series resistance in addition to high surface recombination, resulting in even lower efficiency. Many data points represent Al-BSF cells, a type of cell that has dominated the industrial PV market for a long time. On the rear side of these cells, the BSF layer formed through Al screen printing and annealing acts as both a passivation layer and a reflector. A variety of structures are employed on the front side, including planar, microsilicon, nanosilicon, and nanodielectric structures for photon management. The highest efficiency Al-BSF cell (A11, 20.1% efficiency) has a microsilicon front structure, providing a _JSC_ of 38.6 _mA/cm\({}^{2}\)_ with a Si thickness of 225 \(\mu m\). The next highest efficiency Al-BSF cell (A49, 19.8% efficiency) has a _JSC_ of 38.77 _mA/cm\({}^{2}\)_. It is important to note that front-side nanostructures have slightly higher _JSC_ values but slightly lower _VOC_ and _FF_ values in the case of A49, likely due to poor surface passivation of the nanostructures. The lowest efficiency Al-BSF cell (A12, 9.24% efficiency) has a microsilicon front structure and poor performance in terms of _VOC_, _JSC_, and _FF_, indicating high recombination. Additionally, the thickness of 75 \(\mu m\) was not sufficient for capturing incident photons, contributing to the lower _JSC_. PERC cells are currently the dominant PV cells in the global market [148, 149]. These cells have passivated front and rear sides, reducing surface recombination and achieving high _VOC_ and _FF_ values with the same photon management as Al-BSF cells. For instance, P47 achieved 21.2% efficiency with a _VOC_ of 0.627 V, a _JSC_ of 39.8 _mA/cm\({}^{2}\)_, and an _FF_ of 80.58% using a microsilicon front and planar rear. Structuring the rear side while maintaining good surface passivation improves performance further, as seen in cell P32, which has microsilicon structures on both sides and achieved a _VOC_ of 0.684 V, a _JSC_ of 39.8 _mA/cm\({}^{2}\)_, an _FF_ of 81.54%, and an efficiency of 22.61%. The TOPcon cell [150] is a new type of PV cell that significantly reduces surface recombination by utilizing a thin tunneling oxide to passivate the contact, keeping the contact resistivity of the cell almost unchanged. In one of these cells (T55), a nanosilicon front side and planar rear provided a _JSC_ of 41.1 _mA/cm\({}^{2}\)_, pointing to a good photon management arrangement. This cell achieved a _VOC_ of 0.674 V and an efficiency of 22.3%. However, it was found that the surface passivation of the nanosilicon structure in T55 was not sufficient, prompting the introduction of a new nanosilicon structure in T56 that reduces the difficulty of passivation. This new structure, which involves forming a random pyramid texture and then creating nanoscale pores through reactive ion etching, resulted in increased photocurrent density, _VOC_, and efficiency compared to T55. Another TOPcon cell, T57, used a microsilicon structure on both sides, but did not achieve as high a photocurrent density as T55 and T56 due to its relatively higher front reflectance. However, the passivation quality of the microsilicon structure was better, resulting in a higher _VOC_ and efficiency. These examples demonstrate that increasing the photogenerated current does not always improve cell performance, and a comprehensive analysis should be conducted when implementing a photon management approach.
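The recurring link between passivation quality and _VOC_ in this comparison can be made quantitative with the ideal one-diode relation \(V_{OC}\approx (kT/q)\ln(J_{SC}/J_{0}+1)\), where the saturation current density \(J_{0}\) grows as recombination worsens. A minimal sketch, assuming room temperature and purely illustrative \(J_{0}\) values (not taken from the cited cells):

```python
import numpy as np

k_B = 1.380649e-23     # J/K
q   = 1.602176634e-19  # C
T   = 298.15           # K
V_T = k_B * T / q      # thermal voltage, about 25.7 mV

j_sc = 40e-3  # A/cm^2, a typical photocurrent for the cells discussed above

# Illustrative saturation current densities: better passivation -> smaller J0 -> higher Voc
for label, j0 in [("well passivated", 1e-14),
                  ("moderate",        1e-12),
                  ("poorly passivated", 1e-10)]:
    v_oc = V_T * np.log(j_sc / j0 + 1.0)
    print(f"{label:18s} J0 = {j0:.0e} A/cm^2  ->  Voc ~ {v_oc:.3f} V")
```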
The heterojunction cells in the dataset have achieved relatively higher efficiency by using microsilicon structures on both the front and rear sides. This high performance was achieved due to effective photon management, selecting the appropriate cell thickness, and proper surface passivation. For example, H4 demonstrated a _VOC_ of 0.738 V, a photocurrent density of 40.8 _mA/cm\({}^{2}\)_, and an _FF_ of 83.36%, resulting in an efficiency of 25.1%. Its performance was surpassed by H2, which achieved an efficiency of 25.6% by using a heterojunction-IBC structure. H2 had _VOC_ and _FF_ values that were similar to H4, but the main improvement came from an increase in photocurrent density (41.8 _mA/cm\({}^{2}\)_) due to the adoption of an IBC structure on the rear side, which also trapped some light in the cell. While it is common to have a textured front and a planar rear side in solar cells, there are also cases where the opposite is used. For instance, the cell IFC28 has a planar front side and a photonic crystal back reflector with a grating structure on the rear side. This arrangement resulted in a photocurrent density of 27.5 \(mA/cm^{2}\). The \(V_{oc}\) and \(FF\) of this cell were 0.62 V and 77.42%, respectively. However, the photocurrent density could have been further improved if the front side also had a photon management scheme. Additionally, the cell thickness of 675 \(\mu m\) is much greater than the diffusion length [151; 152] of silicon, which is between 100-300 \(\mu m\)[86; 153; 154]. This resulted in nonradiative recombination; a thinner cell could have led to even better \(V_{oc}\), photocurrent density, and fill factor. A number of cells were investigated with nanodielectric structures on the front side, which provided some level of light trapping compared to cells with no structure on the front. However, these schemes failed to achieve significant efficiency. For example, GSH46 achieved a \(V_{oc}\) of 0.51 V, a photocurrent density of 28 \(mA/cm^{2}\), and an \(FF\) of 61.62% using bilayer graphene on the front side and a planar rear side. The front reflectance of the cell was less than 5% for the wavelength range of 520-700 nm and less than 10% for longer wavelengths, indicating that the cell was able to capture a good number of photons, yet failed to achieve high efficiency. There are several reasons for this. Firstly, it used whispering gallery modes [155; 156] to confine incident light to the cell. However, in this arrangement, perpendicularly incident light does not see any redirection by the front side, as is usually seen for pyramid textured front surfaces (microsilicon structure) [91; 157; 158]. Secondly, the cell thickness of only 20 \(\mu m\) was not sufficient for adequate light trapping. Additionally, the lower values of \(V_{oc}\) and \(FF\) also indicate that the surface passivation quality was poor.

## 4 Practical Limit of Photon Management and Improvement Pathways

When searching for the most effective photon management approach, it is important to quantify the contribution of each technique. From our discussions, it has been found that microsilicon and nanosilicon structures have shown the greatest potential for use as front side photon management structures. Furthermore, when rear side structures such as microsilicon are used, they scatter incident light at an angle, which enhances the path length further. To determine the highest achievable photocurrent density practically in these configurations, a study was conducted using a simulation tool called SunSolve [159].
Pyramid structures were used as a representative of microsilicon structures and black silicon was used as a representative of nanosilicon structures for the front side. On the rear side, both metallic and dielectric rear sides with and without full scattering characteristics were considered. Additionally, the impact of cell thickness in the presence of front and rear side photon management mechanisms was also studied.

Figure 4: Factors influencing photon management in a silicon solar cell. Legend: \(J_{SC}\), short-circuit current density; losses in current density due to front reflectance, escape reflectance, parasitic absorption at the front side, and parasitic absorption at the rear side.

The study presented in Fig. 4(a) investigates the performance of different front side photon management structures using a fixed rear side consisting of 1 \(\mu\)m AlSi + 10 nm Al\({}_{2}\)O\({}_{3}\) + 100 nm SiN\({}_{x}\), with no scattering. The cell thickness for this study was 175 \(\mu\)m. The study considers three different front structures: a planar surface, a random pyramid texture, and black Si (5 \(\mu\)m tall, 87.5° sidewall angle), both with and without 75 nm SiN\({}_{x}\) on them. The first part of Fig. 4(a) shows the cell's 1-R (total absorption) and EQE (external quantum efficiency) over the solar spectrum, where R is the total reflectance of the cell. It is important to note that 1-R accounts for both the absorption in the active region of the cell and the parasitic absorption loss on the front and rear sides of the cell [56; 83]. We observed that a PV cell with a planar silicon surface and no ARC on the front side was able to absorb approximately 60% of incident light in the 600-1000 nm wavelength range, with a decrease in absorption for other wavelengths. Analysis revealed a slight discrepancy between the EQE and 1-R in the shorter (<400 nm) and longer (>1100 nm) wavelength ranges, which was attributed to parasitic optical absorption on the front and rear sides of the cell, respectively. The addition of a 75 nm SiN\({}_{x}\) ARC on the planar front side significantly improved the 1-R and EQE of the cell. Additionally, incorporating pyramid textures on the front side also greatly improved the 1-R and EQE compared to a planar surface without an ARC, but exhibited a lower performance in the 430-820 nm range. The highest EQE and 1-R values were achieved using a black silicon front, which was able to absorb approximately 95% of incident light. Notably, applying an ARC to the black silicon surface resulted in a slight decrease in EQE in the shorter wavelength range, due to parasitic absorption within the ARC. The bar chart in Fig. 4(a) breaks down the total incident light energy into various components, including the photogenerated current density (_Jsc_) and the losses in current density due to front reflectance, escape reflectance, and parasitic optical absorption on the front and rear sides of the cell. Our findings indicate that the use of an ARC on planar silicon increased current density by 9 _mA/cm\({}^{2}\)_, primarily as a result of reduced front reflectance loss; however, it also slightly increased the escape reflectance loss. This is due to the Si-SiN\({}_{x}\)-air boundary having a lower refractive index contrast than the Si-air boundary, making it easier for light to escape.
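The photocurrent components in such a breakdown follow from integrating the EQE against the incident photon flux, \(J_{SC}=q\int \mathrm{EQE}(\lambda)\,\Phi(\lambda)\,d\lambda\). A minimal sketch of that bookkeeping; the flat "spectrum" and box-shaped EQE below are placeholders, not the AM1.5G data or the SunSolve output:

```python
import numpy as np

def short_circuit_current(wavelength_nm, eqe, spectral_irradiance):
    """J_SC = q * integral of EQE(lambda) * photon_flux(lambda) d(lambda).

    wavelength_nm       : wavelengths in nm
    eqe                 : external quantum efficiency (0..1) at those wavelengths
    spectral_irradiance : incident power density in W m^-2 nm^-1 (e.g. tabulated AM1.5G)
    Returns J_SC in mA/cm^2.
    """
    q = 1.602176634e-19   # C
    h = 6.62607015e-34    # J s
    c = 2.99792458e8      # m/s
    wl_m = wavelength_nm * 1e-9
    photon_flux = spectral_irradiance / (h * c / wl_m)            # photons m^-2 s^-1 nm^-1
    j_a_per_m2 = q * np.trapz(eqe * photon_flux, wavelength_nm)   # A/m^2
    return j_a_per_m2 * 0.1                                       # A/m^2 -> mA/cm^2

# Toy inputs only: a flat 1 W m^-2 nm^-1 "spectrum" and a box-like EQE.
wl = np.linspace(300.0, 1200.0, 901)
eqe = np.where((wl > 400) & (wl < 1100), 0.9, 0.1)
irr = np.ones_like(wl)  # replace with tabulated AM1.5G data in a real calculation
print(f"J_SC ~ {short_circuit_current(wl, eqe, irr):.1f} mA/cm^2 (toy inputs)")
```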
Using a pyramid silicon surface with an ARC structure further reduced both front and escape reflectance loss. However, using a black silicon structure decreased front reflectance loss and parasitic optical loss on both the front and rear sides of the cell, but increased escape reflectance loss. To further reduce escape reflectance loss, the thickness of the cell could be increased until it reaches the minority carrier diffusion length of silicon, allowing for absorption of longer wavelength photons. We examined the characteristics of various rear-side photon management structures, all of which had the same front structure consisting of a 75 nm SiN\({}_{x}\) ARC on a pyramid texture, with a wafer thickness of 175 \(\mu\)m. The structures evaluated included planar aluminum, aluminum structures designed to scatter light, planar nickel (a lossy metal with good electrical transport properties), nickel structures designed to scatter light, planar Al\({}_{2}\)O\({}_{3}\) (which has poor electrical transport properties but excellent surface passivation), and Al\({}_{2}\)O\({}_{3}\) structures designed to scatter light. Our findings, shown in Fig. 4(b), indicate that all of the rear structures displayed similar EQE and reflectance characteristics in the shorter wavelength range, which can be attributed to their consistent front side structure. However, the rear structure primarily influenced the absorption of longer wavelength photons. We observed that scattering structures generally exhibited higher absorption than their planar counterparts, as more of the unabsorbed light is reflected back and forth between the front and rear sides, increasing the total path length for silicon absorption. Additionally, we found that the highest absorption in the longer wavelength range was achieved with the nickel structure designed to scatter light, followed by the planar nickel. However, these structures also exhibited the lowest EQE, indicating that most of the absorption occurred in nickel, rather than silicon, resulting in parasitic optical loss. The highest EQE in the longer wavelength range was observed in the case of the Al\({}_{2}\)O\({}_{3}\) structure designed to scatter light, due to the absence of parasitic optical loss on the rear side, which is typically present in metals [160; 161; 162; 149]. The bar chart presented in Figure 4(b) supports the findings that Al\({}_{2}\)O\({}_{3}\) structures did not exhibit any \(J_{loss\sim r}\), while Ni structures exhibited the highest \(J_{loss\sim r}\). Two key conclusions can be drawn from this information. Firstly, the \(J_{SC}\) achieved using the scattering Al\({}_{2}\)O\({}_{3}\) structure was 40 \(mA/cm^{2}\). In order to increase the \(J_{SC}\) further with the same front structure, the cell thickness must be increased until it reaches the minority carrier diffusion length. However, exceeding this limit will result in recombination loss in the bulk. Secondly, it is important to note that Al\({}_{2}\)O\({}_{3}\) is a dielectric material, so covering the entire rear surface with it prevents the collection of photogenerated carriers. Therefore, in practical applications, contacts must be created by covering a portion of the rear side with metal, which will result in parasitic optical loss in the metal and recombination loss at the Si-metal boundary, ultimately reducing the \(J_{SC}\). The impact of the cell thickness on the cell optical performance is illustrated in Fig. 4(c).
The cell front side is assumed to be black Si without an ARC and the rear side is assumed to be composed of 1 \(\mu\)m AlSi, 10 nm Al\({}_{2}\)O\({}_{3}\), and 100 nm SiN\({}_{x}\) with full scattering. As shorter wavelength photons are absorbed in the front side and are not affected by cell thickness, the 1-R and EQE characteristics in the wavelength range of 800 - 1200 nm are presented. It can be observed that an increase in thickness led to an increase in 1-R and EQE, but the rate of increase slowed down with increasing thickness. In light of the findings discussed above, the dataset presented in this paper (Table 1) was analyzed in more detail. The highest efficiency cell had an efficiency of 26.3%, which was achieved using a microsilicon front, planar rear, and 165 \(\mu\)m thickness. To enhance its light trapping further, a nanostructured front and micro-structured rear with scattering capabilities could potentially be used. However, passivating a nanostructured surface is a significant challenge. There have been recent advancements in passivating black Si [163; 164; 165; 166; 167] which could be utilized.

## 5 Photon Management Structures Not Yet Incorporated in Practical PV cells

Recent literature presents various photon management structures grown on isolated substrates or simulated, such as Heidarzadeh _et al._'s idea for increasing photogenerated current in thin-film PV cells [168] using hemispherical core-shell nanoparticles on the front and triangular grating reflectors on the rear, which achieved a \(J_{SC}\) of 22 \(mA/cm^{2}\). The forward scattering of the core-shell nanoparticles is a positive aspect for \(J_{SC}\), but they incur plasmonic absorption, causing dips in the active material's absorption spectra and not fully utilizing the incident sunlight. Comparing this to our SunSolve simulation of planar Si and no ARC (1-R characteristics in Fig. 4(a)), it showed better absorption in Si for some wavelengths but poorer in others. However, it underperforms compared to our planar Si and ARC simulation. Therefore, using these core-shell nanoparticles does not significantly improve cell performance. In a similar design, trapezoidal pyramid nanostructures (i.e., nanosilicon) are proposed for the front side and an inverted pyramid nanotexture (i.e., nanosilicon) for the rear side of a 0.9 \(\mu\)m thick c-Si/ZnO heterojunction PV cell [169]. The simulations resulted in a \(J_{SC}\) of 41.94 \(mA/cm^{2}\), with a front reflectance of \(\leq\)5% for 400 - 1000 nm wavelength and higher for other wavelengths. Although the results are promising, it would be difficult to implement the nano-trapezoidal Si texture on the front side and passivate both sides, particularly the rear side where Si inverted pyramids are filled with Al. Recently, Wang _et al._ have developed a ZnO nano-needle array (i.e., nanodielectric structure) for front-side photon management [170] on a textured Si substrate, which has been experimentally shown to reduce front reflectance to \(\leq\)89% at 400-1000 nm wavelength. This structure achieved a _JSC_ of 37.8 _mA/cm\({}^{2}\)_, an increase of 1.8 _mA/cm\({}^{2}\)_ compared to a textured Si front surface. The advantage of this structure is that it avoids the challenge of passivating a nanostructured Si while providing similar front reflectance. Another similar front structure is AZO (Al-doped ZnO) nanorods, which experimentally provided a front surface reflectance of about 12% at 400-900 nm wavelength [171].
Saravanan _et al._ designed a 1-D photonic crystal (i.e., a distributed Bragg grating) for the rear side of an a-Si PV cell [172]. The design, which consists of alternating layers of Si and SiO\({}_{2}\), has the potential to make the rear side reflectance near perfect, which could increase the _JSC_ of the thin a-Si PV cells. However, it is unclear how rear side contacts could be formed through this dielectric reflector. Hossain _et al._ have experimentally demonstrated the use of self-assembled Al\({}_{2}\)O\({}_{3}\) nanostructures to provide surface passivation and rear-side photon management in c-Si PV cells [94; 149; 173]. They have the potential to achieve high rear side reflectance (\(\geq\)95%), due to the suppression of parasitic optical losses in the metal contacts [160; 161; 162]. They indicated that these structures can provide up to a 1 _mA/cm\({}^{2}\)_ increase in _JSC_ for cells with Al rear contacts and up to 2.9 _mA/cm\({}^{2}\)_ for Ni contacts. In a similar simulation work by Shameli, a polarization-independent phase-gradient metasurface was proposed for the rear side of the cell [174]. This structure can provide a rear side reflectance of 83%, but surface recombination at the Si/Al boundary is not addressed, which could negatively impact cell performance. Dhawan _et al._ proposed using transformation optics to design photon management structures on Si surfaces, recognizing that texturing the surface increases recombination and it may not be possible to fully passivate it [175]. They proposed using optically equivalent planar layers with varying refractive indices instead of a nano-structured surface. However, it remains uncertain how to fabricate these precise planar layers with varying refractive indices. Sun _et al._ suggested using metallic light-trapping electrodes on the front side of PV cells, which can significantly decrease shadowing loss and have potential for use in front-side metal grids, increasing the active area and overall current of the cell [176; 177; 178]. Khokhar _et al._ recently designed a TOPcon cell that incorporates a nanosilicon structure on both sides of the cell [179]. By considering both bulk and surface recombination in their simulation, they were able to achieve a _JSC_ of 43.5 _mA/cm\({}^{2}\)_, a _VOC_ of 0.762 V, and an _FF_ of 83%. These results are slightly higher than those of practical TOPcon cells reported in Table 1, suggesting that there is potential for further improvement in the performance of practical TOPcon cells through improved photon management and advancements in processing techniques that can reduce recombination losses.

## 6 Conclusion

In conclusion, the front side structure, rear side structure, and thickness of a PV cell are crucial parameters to consider when designing an efficient photon management mechanism. The results of this study suggest that by incorporating a well-passivated nanostructured front side and a dielectric-made rear side that effectively scatters light without impeding current conduction, excellent light trapping can be achieved and an increase in cell _JSC_, _VOC_, and efficiency can be realized. Additionally, the cell thickness should be optimized such that it is less than the minority carrier diffusion length while still allowing for the capture of a majority of longer wavelength photons. We also explored various contemporary photon management structures that have not yet been implemented in practical PV cells.
Some of these structures demonstrated promising results in terms of light trapping; however, the surface passivation of these structures poses a challenge for their practical use in PV cells.

## Acknowledgments

This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under the Solar Energy Technologies Office Agreement Number DE-EE0007533. The authors would like to acknowledge Prof. Pieter G. Kik for his insightful discussions and valuable perspectives that significantly enhanced the quality of this work. During the preparation of this work, the authors used ChatGPT in order to correct grammatical errors in some parts of the manuscript. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

## Data Availability

Datasets related to this article can be found at [https://doi.org/10.6084/m9.figshare.23573409](https://doi.org/10.6084/m9.figshare.23573409), an open-source online data repository hosted at figshare [180].
2302.10306
Deep Convolutional Framelet Denoising for Panoramic by Mixed Wavelet Integration
Enhancing quality and removing noise during preprocessing is one of the most critical steps in image processing. X-ray images are created by photons colliding with atoms and the variation in scattered noise absorption. This noise leads to a deterioration in the graph's medical quality and, at times, results in repetition, thereby increasing the patient's effective dose. One of the most critical challenges in this area has consistently been lowering the image noise. Techniques like BM3d, low-pass filters, and Autoencoder have taken this step. Owing to their structural design and high rate of repetition, neural networks employing diverse architectures have, over the past decade, achieved noise reduction with satisfactory outcomes, surpassing the traditional BM3D and low-pass filters. The combination of the Hankel matrix with neural networks represents one of these configurations. The Hankel matrix aims to identify a local circle by separating individual values into local and non-local components, utilizing a non-local matrix. A non-local matrix can be created using the wave or DCT. This paper suggests integrating the waveform with the Daubechies (D4) wavelet due to its higher energy concentration and employs the u-Net neural network architecture, which incorporates the waveform exclusively at each stage. The outcomes were evaluated using the PSNR and SSIM criteria, and the outcomes were verified by using various waves. The effectiveness of a one-wave network has increased from 0.5% to 1.2%, according to studies done on other datasets
Masoud Shahraki Mohammadi, Seyed Javad Seyed Mahdavi Chabok
2023-01-25T11:00:32Z
http://arxiv.org/abs/2302.10306v3
# Deep Convolutional Framelet Denoising for Panoramic by Mixed Wavelet Integration

###### Abstract

Enhancing quality and removing noise during preprocessing is one of the most critical steps in image processing. X-ray images are created by photons colliding with atoms and the variation in scattered noise absorption. This noise causes the medical quality of the image to decline and, occasionally, forces the acquisition to be repeated, causing an elevation in the patient's effective dose. One of the most critical challenges in this area has consistently been lowering the image noise. Techniques like BM3D, low-pass filters, and autoencoders have been applied to this step. Owing to their structure and high repetition rate, neural networks using various architectures have, over the past ten years, reduced noise with acceptable results compared to the traditional BM3D and low-pass filters. The Hankel matrix combined with neural networks is one of these configurations. The Hankel matrix seeks a local circle by splitting up individual values into local and non-local components using a non-local matrix. A non-local matrix can be created using the wave or DCT. This paper proposes combining the waveform with the Daubechies (D4) wavelet because of its higher energy concentration, and uses the u-Net neural network structure, which uses the waveform alone at each stage. The outcomes were evaluated using the PSNR and SSIM criteria and were verified using various wavelets. According to studies done on other datasets, the effectiveness has increased by 0.5% to 1.2% over a one-wave network.

Panoramic, Scatter, Neural Network, Hankel Matrix, Daubechies Wavelet

## I. Introduction

Classification and low-level computer vision tasks like segmentation [1], super-resolution [2], [3], and denoising [4] have both benefited from deep learning techniques. The theoretical underpinnings of its success have been studied [5], [6]; it works by learning high-level features, as the brain does, using multiple layers of neurons. Deep learning has seen success in many areas over the past ten years, including computed tomography (CT) [7]. Kang _et al._ [7] were the first researchers to systematically propose convolutional neural networks (CNNs) for low-dose CT and demonstrated that a CNN using a directional wavelet is more effective at denoising. Unlike the low-dose noise caused by reduced tube currents, the streaking artifacts resulting from sparse projection views are global in nature and are challenging to remove using traditional denoising CNNs [8], [9]. To eliminate the global streaking artifacts brought on by sparse projection views, Han _et al._ [10] and Jin _et al._ [11] suggested residual learning using U-Net [1]. Wang _et al._ [12] were the first to apply deep learning to compressed sensing MRI (CS-MRI). They used images from a downsampled reconstruction to train a deep neural network to learn a fully sampled reconstruction. The biggest problem for signal processing is the integration of deep learning and signal processing theories. For many image processing applications, wavelets [13] have been studied as a practical signal representation theory exploiting the energy compaction of wavelet bases. For many image processing applications, non-local image processing techniques like non-local means [14], BM3D [15], etc., have also demonstrated excellent performance. However, deep learning often outperforms traditional mathematics-based signal processing, albeit in a somewhat black-box manner.
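As background for the approach described above, a minimal single-wavelet baseline can be written with PyWavelets: decompose the image with a Daubechies filter ('db2' in PyWavelets naming, i.e. the four-tap Daubechies filter often written D4; the paper's exact D4 convention may differ), soft-threshold the detail coefficients, reconstruct, and score with PSNR. This sketch is only the classical wavelet-shrinkage baseline, not the mixed-wavelet network proposed in the paper:

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db2", level=2, threshold=0.04):
    """Classical wavelet shrinkage: DWT -> soft-threshold details -> inverse DWT."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    thresholded = [tuple(pywt.threshold(d, threshold, mode="soft") for d in lvl)
                   for lvl in details]
    return pywt.waverec2([approx] + thresholded, wavelet)

def psnr(reference, estimate, data_range=1.0):
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy demonstration on a synthetic smooth image with additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = wavelet_denoise(noisy)[: clean.shape[0], : clean.shape[1]]
print(f"PSNR noisy    : {psnr(clean, noisy):.2f} dB")
print(f"PSNR denoised : {psnr(clean, denoised):.2f} dB")
```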
Hankel matrices are used in encoders and decoders by Yin _et al._ [16], who originated the principle of applying mathematics-based classical signal processing theory to deep learning. In this paper, we diverge from these prevalent viewpoints and extend the Hankel matrix of a deep network to a deeper network. Using Daubechies wavelets, while also acknowledging the wavelets' limitations in capturing local information, we attempt to reduce the amount of noise in panoramic images. The research also aims to keep the system response time as close to zero as possible, since response time is one of the significant challenges in any medical workflow. Therefore, limiting the expansion of the neural network is assumed as the default constraint.
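To make the Hankel-matrix viewpoint concrete, the sketch below lifts a 1-D signal patch into a Hankel matrix (scipy.linalg.hankel) and splits it by SVD into a low-rank part and a residual; the deep convolutional framelet view associates such factorizations with local and non-local bases. This is a generic illustration under those assumptions, not the specific encoder-decoder construction used in the paper:

```python
import numpy as np
from scipy.linalg import hankel

def hankel_lift(signal, window):
    """Build the (len(signal)-window+1) x window Hankel matrix of a 1-D signal."""
    first_col = signal[: len(signal) - window + 1]
    last_row = signal[len(signal) - window:]
    return hankel(first_col, last_row)

def low_rank_split(H, rank):
    """Split H into a rank-`rank` approximation (structure) and the residual (noise)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return H_low, H - H_low

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 128)
clean = np.sin(2 * np.pi * 5 * t)             # smooth structure -> low-rank Hankel matrix
noisy = clean + 0.2 * rng.standard_normal(t.shape)

H = hankel_lift(noisy, window=16)
H_low, H_res = low_rank_split(H, rank=2)      # a pure sinusoid gives a rank-2 Hankel matrix
print("Hankel matrix shape:", H.shape)
print("Energy kept in rank-2 part: %.1f%%"
      % (100 * np.linalg.norm(H_low) ** 2 / np.linalg.norm(H) ** 2))
```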
2301.11548
No-signaling in Nonlinear Extensions of Quantum Mechanics
Devising a nonlinear extension of quantum mechanics is nontrivial because unphysical features such as supraluminal communication (signaling) are to be excluded. In this Letter, we show that the steepest entropy ascent formalism is a viable no-signaling extension belonging to a broader class of no-signaling nonlinear evolution equations for which the local evolution of a subsystem is not necessarily bound to depend only on its reduced state. We prove that, in addition to the local reduced density operator, there is a broad class of local operators called `local perceptions', which are insensitive to unitary operations localized within other non-interacting systems.
Rohit Kishan Ray, Gian Paolo Beretta
2023-01-27T06:22:27Z
http://arxiv.org/abs/2301.11548v2
# No-signaling in Nonlinear Extensions of Quantum Mechanics ###### Abstract Devising a nonlinear extension of quantum mechanics is nontrivial because unphysical features such as supraluminal communication (signaling) are to be excluded. In this Letter, we show that the steepest entropy ascent formalism is a viable no-signaling extension belonging to a broader class of no-signaling nonlinear evolution equations for which the local evolution of a subsystem is not necessarily bound to depend only on its reduced state. Quantum mechanics (QM) in its most common form (Schrodinger-von Neumann formalism) is linear in state space, where linear operators operate upon state vectors, and the time evolution is linear. In 1989, when the late Prof. Weinberg asked to test the linearity of quantum mechanics as we know it [1], a new quest began for the physicists working in the field. Thence onward, distinguished researchers such as Gisin [2] and Polchinski [3] showed that introducing non-linearity via operators produces signaling (faster than light communication), leading to the violation of causality. In this context as a consequence of using linear operators in QM, we must take note of the fact that no-cloning, first introduced by Park [4] and later explicitly shown by Wootters, Zurek, and Dieks [5], is sufficient for no-signaling. Nonlinearity, when introduced via stochastic QM through Lindblad operator formalism [6] for open quantum systems, has been shown to respect no-signaling. Ensuing work by Ferrero _et al._[7] has shown that nonlinearity in QM can be accommodated without compromising no-signaling if we consider the time evolution to be nonlinear albeit maintaining linearity in the state space and operators. In a more recent work [8], it has been shown that convex quasilinear maps can contribute to the nonlinear dynamics of QM without invoking signaling, and this formalism retains much of QM as it is. Thusly, Rembielinski and Caban [8] have found a minimal permissible deviation from the linear structure of QM dynamics. For about four decades, a nonlinear extension of quantum mechanics initiated by Hatsopoulos and Gyftopoulos [9], and developed by Beretta [10; 11] exists. This formalism originally sought to establish the second law of thermodynamics as a foundational principle at par with other conservation laws, embedded in a nonlinear law of evolution for mixed states, thus conceptualizing stability of canonical states and spontaneous decoherence. Later, the steepest entropy ascent (SEA) formalism was developed as a powerful modeling tool, applied to various quantum systems [12; 13], and even elevated to the status of fourth law of thermodynamics [14; 15] because of its equivalence with most of the recent far-nonequilibrium modeling formalisms. Nonetheless, while discussing possible nonlinear theories of QM in the context of no-signaling, SEA formalism is hardly ever mentioned in the literature cited in the preceding paragraph. The authors in Refs. [11; 12] claim that the SEA formalism abides by the no-signaling criteria as discussed by Gisin and others. Yet, an explicit focus on no-signaling is lacking in their works. As a result, a lacuna remains to be breached to answer Weinberg's original question. This Letter provides definitive proof that the SEA nonlinear extension of QM respects the no-signaling principle. 
Philosophically, SEA evolution was designed as part of a theory whereby spontaneous decoherence could be conceived as a fundamental dynamical feature, in contrast to the coarse-graining approach. Entropy, the second law, and irreversibility could acquire a more fundamental stature by emerging as deterministic consequences of the law of evolution, without contradicting standard QM [16]. Nonlinearity was known to be essential for this purpose [17]. While the prevalent notion is that the second law is statistical, the pioneers of SEA believed that the many knots of thermodynamics [17; 18] could be unified by elevating it to a more fundamental stature [9]. But later, the strength and generality of its mathematical formalism made SEA a suitable tool for thermodynamically consistent nonequilibrium modeling even beyond and outside of its original context, including the current quest for fundamental or phenomenological nonlinear extensions of QM for quantum computing applications, and attempts to assign ontological status to mixed density operators, albeit motivated differently from Refs. [1; 16; 17; 18; 19]. The ontological hypothesis amounts to assuming, accepting, and interpreting the elevation of the mixed density operators to the same status of ontic physical states that traditional von Neumann QM reserves for pure states. To see that it is a conceptual prerequisite to adopt a nonlinear law of evolution for mixed states, let \(\mathcal{W}_{t}\) with \(\rho^{\prime}=\mathcal{W}_{t}(\rho)\) denote a nonlinear map representing the evolution \(\rho\rightarrow\rho^{\prime}\) after some time \(t\). Consider the evolution of three states \(\rho_{1}\), \(\rho_{2}\), \(\rho_{3}\) such that \(\rho_{1}\rightarrow\rho_{1}^{\prime}\), \(\rho_{2}\rightarrow\rho_{2}^{\prime}\), \(\rho_{3}\rightarrow\rho_{3}^{\prime}\), respectively. Further, assume that \(\rho_{3}=w\rho_{1}+(1-w)\rho_{2}\) with \(0\leq w\leq 1\). Due to the nonlinearity of \(\mathcal{W}_{t}\), in general \(\rho_{3}^{\prime}\neq w\rho_{1}^{\prime}+(1-w)\rho_{2}^{\prime}\). This is no issue only if the ontological hypothesis is accepted, since \(w\) and \(1-w\) do not represent epistemic ignorance anymore. A sufficient condition [8] for the nonlinear map to be no-signaling is that it be convex quasilinear, i.e., it always admits a \(w^{\prime}\), \(0\leq w^{\prime}\leq 1\), such that \(\rho_{3}^{\prime}=w^{\prime}\rho_{1}^{\prime}+(1-w^{\prime})\rho_{2}^{\prime}\). In this Letter, we introduce a much broader class of non-convex-quasilinear, nonlinear maps that are no-signaling. The SEA map, despite its logarithmic nonlinearity and structured construction, belongs to this class and has general thermodynamic compatibility features. The no-signaling condition, as noted in [7], is usually imposed by asking that in the absence of mutual interactions between subsystems A and B, the evolution of the local observables of A should only depend on its own reduced state. The SEA formalism, however, demonstrates that we can take a less restrictive view [11]: we only require that, if A and B are non-interacting, the law of evolution must not allow that a local unitary operation within B could affect the time evolution of the local (reduced, marginal) state of A.
Thus, the condition \(\rho_{A}=\rho_{A}^{\prime}\), such as for the two different states \(\rho\neq\rho_{A}\otimes\rho_{B}\) and \(\rho^{\prime}=\rho_{A}\otimes\rho_{B}\), does not require that \(\mathrm{d}\rho_{\mathrm{A}}/\mathrm{d}t=\mathrm{d}\rho_{\mathrm{A}}^{\prime}/ \mathrm{d}t\), because local memory of past interactions, i.e., existing entanglement and/or correlations, may well influence the local evolutions without violating no-signaling. This incorporates the idea that (1) by studying the local evolutions we can disclose the existence of correlations, but only of the type that can be classically communicated between the subsystems, and (2) in the absence of interactions the nonlinear dynamics may produce the fading away of correlations (spontaneous decoherence) but cannot create new correlations. In linear QM, the system's composition is specified by declaring: (1) the Hilbert space structure as direct product \(\mathcal{H}=\bigotimes_{J=1}^{M}\mathcal{H}_{J}\) of the subspaces of the \(M\) component subsystems, and (2) the overall Hamiltonian operator \(H=\sum_{J=1}^{M}H_{J}\otimes I_{\overline{J}}+V\) where \(H_{J}\) (on \(\mathcal{H}_{J}\)) is the local Hamiltonian of the \(J\)-th subsystem, \(I_{\overline{J}}\) the identity on the direct product \(\mathcal{H}_{\overline{J}}=\bigotimes_{K\neq J}\mathcal{H}_{K}\) of all the other subspaces, and \(V\) (on \(\mathcal{H}\)) is the interaction Hamiltonian. The linear law of evolution, \(\dot{\rho}=-\mathrm{i}[H,\rho]/\hbar\), has a universal structure and entails the local evolutions through partial tracing, \(\dot{\rho}_{J}=-\mathrm{i}[H_{J},\rho_{J}]/\hbar-\mathrm{i}\operatorname{Tr} \nolimits_{\overline{J}}([V,\rho])/\hbar\). Thus, we recover the universal law \(\dot{\rho}_{J}=-\mathrm{i}[H_{J},\rho_{J}]/\hbar\) for the local density operator \(\rho_{J}=\operatorname{Tr}\nolimits_{\overline{J}}(\rho)\) if subsystem \(J\) does not interact with the others (i.e., if \(V=I_{J}\otimes V_{\overline{J}}\)). Instead, a fully nonlinear QM cannot have a universal structure, because the subdivision into subsystems must explicitly concur to the structure of the dynamical law (see [11] for more on this). A different subdivision requires a different equation of motion. This high price for abandoning linearity is clearly reflected in the nontrivial structure of the SEA law of evolution. But this renders it compatible with the compelling constraint that correlations should not build up and signaling between subsystems should not occur other than per effect of the interaction Hamiltonian \(V\) through the standard Schrodinger term \(-i[H,\rho]/\hbar\) in the evolution law. Seldom used in composite quantum dynamics analysis, but crucial in our opinion, are the physical observables first introduced in [10] and called 'local perception' operators (on \(\mathcal{H}_{J}\)). Together with their 'deviation from the local mean value' operators and covariance functionals, they are defined as follows \[(X)_{\rho}^{J} =\operatorname{Tr}\nolimits_{\overline{J}}[(I_{J}\otimes\rho_{ \overline{J}})X]\,, \tag{1}\] \[\Delta(X)_{\rho}^{J} =(X)_{\rho}^{J}-I_{J}\operatorname{Tr}\nolimits[\rho_{J}(X)_{ \rho}^{J}]\,,\] (2) \[(X,Y)_{\rho}^{J} =\frac{1}{2}\operatorname{Tr}\nolimits[\rho_{J}\Big{\{}\Delta(X) _{\rho}^{J},\Delta(Y)_{\rho}^{J}\Big{\}}]\,, \tag{3}\] where \(\rho_{\overline{J}}=\operatorname{Tr}_{J}(\rho)\). 
For a bipartite system AB, the local perception operators \((X)_{\rho}^{A}\) (on \(\mathcal{H}_{A}\)) and \((X)_{\rho}^{B}\) (on \(\mathcal{H}_{B}\)) are the unique operators that for a given \(X\) on \(\mathcal{H}_{AB}\) satisfy for all states \(\rho\) the identity \[\operatorname{Tr}\nolimits[\rho_{A}(X)_{\rho}^{A}]=\operatorname{Tr}\nolimits[(\rho_{A}\otimes\rho_{B})X]=\operatorname{Tr}\nolimits[\rho_{B}(X)_{\rho}^{B}]\,, \tag{4}\] which shows that they represent all that A and B can say about the overall observable X by classically sharing their local states. Operator \((X)_{\rho}^{A}\) can be viewed as the projection onto \(\mathcal{H}_{A}\) of the operator \(X\) weighted by the local state \(\rho_{B}\) of subsystem \(B\). It is a local observable for subsystem A which, however, depends on the overall state \(\rho\) and overall observable \(X\). Its local mean value \(\operatorname{Tr}_{A}[\rho_{A}(X)_{\rho}^{A}]\) differs from the mean value \(\operatorname{Tr}(\rho X)\) for the overall system AB, except when A and B are uncorrelated (\(\rho=\rho_{A}\otimes\rho_{B}\)). It was dubbed 'local perception' because even if B performs a local tomography and sends the measured \(\rho_{B}\) to A by classical communication, the most that A can measure locally about the overall observable \(X\) is \((X)_{\rho}^{A}\). The overall energy and entropy of the composite system are locally perceived within subsystem \(J\) through the operators \((H)_{\rho}^{J}\) and \((S(\rho))_{\rho}^{J}\) defined on \(\mathcal{H}_{J}\) by Eq. (1), respectively with \(X=H\), the overall Hamiltonian, and \(X=S(\rho)=-k_{\mathrm{B}}\,\mathrm{Bln}(\rho)\), that we call the overall entropy operator, where \(\mathrm{Bln}(x)\) denotes the discontinuous function \(\mathrm{Bln}(x)=\ln(x)\) for \(0<x\leq 1\) and \(\mathrm{Bln}(0)=0\). Note that the 'locally perceived overall entropy' operator \((S(\rho))_{\rho}^{J}\) is different from the 'local entropy' operator \(S(\rho_{J})=-k_{\mathrm{B}}\,\mathrm{Bln}(\rho_{J})\). Their mean values \(\operatorname{Tr}\nolimits[\rho_{J}(S(\rho))_{\rho}^{J}]=-k_{\mathrm{B}}\operatorname{Tr}\nolimits[(\rho_{J}\otimes\rho_{\overline{J}})\mathrm{Bln}(\rho)]\) and \(\operatorname{Tr}\nolimits[\rho_{J}S(\rho_{J})]=-k_{\mathrm{B}}\operatorname{Tr}\nolimits[\rho_{J}\ln(\rho_{J})]\) are different. Only when \(\rho=\rho_{J}\otimes\rho_{\overline{J}}\) are they related by \(\operatorname{Tr}\nolimits[\rho_{J}(S(\rho))_{\rho}^{J}]=\operatorname{Tr}\nolimits[\rho_{J}S(\rho_{J})]+\operatorname{Tr}\nolimits[\rho_{\overline{J}}S(\rho_{\overline{J}})]=-k_{\mathrm{B}}\operatorname{Tr}\nolimits[\rho\ln(\rho)]\). Likewise, the 'locally perceived overall Hamiltonian' operator \((H)_{\rho}^{J}\) is different from the 'local Hamiltonian' operator \(H_{J}\). Their mean values \(\operatorname{Tr}\nolimits[\rho_{J}(H)_{\rho}^{J}]=\operatorname{Tr}\nolimits[(\rho_{J}\otimes\rho_{\overline{J}})H]\) and \(\operatorname{Tr}\nolimits(\rho_{J}H_{J})\) are different, and only when \(V=I_{J}\otimes V_{\overline{J}}\) are they related by \(\mathrm{Tr}\!\left[\rho_{J}(H)_{\rho}^{J}\right]=\mathrm{Tr}(\rho_{J}H_{J})+\mathrm{Tr}(\rho_{\overline{J}}H_{\overline{J}})=\mathrm{Tr}(\rho H)\).
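A direct numerical check of definition (1) and identity (4) for a random correlated two-qubit state takes only a few lines; the sketch below is an illustration of the definitions above (helper names are ours), not part of the SEA formalism itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(dim):
    g = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def random_hermitian(dim):
    g = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return (g + g.conj().T) / 2

def ptrace_B(M):  # partial trace over the second qubit of a 4x4 operator
    return M.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

def ptrace_A(M):  # partial trace over the first qubit
    return M.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

I2 = np.eye(2)
rho = random_density_matrix(4)   # generally correlated state of AB
X = random_hermitian(4)          # an overall observable
rho_A, rho_B = ptrace_B(rho), ptrace_A(rho)

# Local perception operators, Eq. (1): (X)^A_rho = Tr_B[(I_A x rho_B) X], and similarly for B
X_perc_A = ptrace_B(np.kron(I2, rho_B) @ X)
X_perc_B = ptrace_A(np.kron(rho_A, I2) @ X)

# Identity (4): Tr[rho_A (X)^A] = Tr[(rho_A x rho_B) X] = Tr[rho_B (X)^B]
lhs = np.trace(rho_A @ X_perc_A)
mid = np.trace(np.kron(rho_A, rho_B) @ X)
rhs = np.trace(rho_B @ X_perc_B)
print(np.allclose(lhs, mid), np.allclose(mid, rhs))   # True True
```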
However, it is noteworthy that when the overall observable \(X\) is'separable for subsystem J', in the sense that \(X=X_{J}\otimes I_{\overline{J}}+I_{J}\otimes X_{\overline{J}}\) then, even if \(\rho\neq\rho_{J}\otimes\rho_{\overline{J}}\), the deviations and covariances reduce to their local versions, \[\Delta(X)_{\rho}^{J} =\Delta X_{J}=X_{J}-I_{J}\,\mathrm{Tr}[\rho_{J}X_{J}]\,, \tag{5}\] \[(X,Y)_{\rho}^{J} =\mathrm{Tr}[\rho_{J}\{\Delta X_{J},\Delta Y_{J}\}]/2\,. \tag{6}\] Now, to formalize the no-signaling definition following [11] as discussed above, we impose that if A and B are non-interacting, a local unitary operation on B should not affect the evolution of A. So, assume that with AB in the state, \(\rho\) a local operation on B changes the state to \[\rho^{\prime}=\left(I_{A}\otimes U_{B}\right)\rho\left(I_{A}\otimes U_{B}^{ \dagger}\right), \tag{7}\] where \(U_{B}\) is an arbitrary unitary operator (\(U_{B}^{\dagger}U_{B}=I_{B}\)). Using the properties of the partial trace, in particular, \[\mathrm{Tr}_{B}[(I_{A}\otimes X_{B})Z_{AB}]=\mathrm{Tr}_{B}[Z_{AB} (I_{A}\otimes X_{B})]\,,\] \[\mathrm{Tr}_{A}[(I_{A}\otimes X_{B})Z_{AB}(I_{A}\otimes Y_{B})]=X _{B}\,\mathrm{Tr}_{A}(Z_{AB})Y_{B}\,,\] we obtain the identities \[\rho_{B} =\mathrm{Tr}_{A}[(I_{A}\otimes U_{B}^{\dagger})\,\rho^{\prime} \left(I_{A}\otimes U_{B}\right)]=U_{B}^{\dagger}\rho_{B}^{\prime}U_{B}, \tag{8}\] \[\rho_{A}^{\prime} =\mathrm{Tr}_{B}[(I_{A}\otimes U_{B})\,\rho\left(I_{A}\otimes U_{ B}^{\dagger}\right)]=\mathrm{Tr}_{B}[(I_{A}\otimes U_{B}^{\dagger}U_{B})\,\rho]\] \[=\mathrm{Tr}_{B}[(I_{A}\otimes I_{B})\,\rho]=\rho_{A}, \tag{9}\] which confirms that a local operation on B does not affect the local state \(\rho_{A}\) of A, hence the usual idea [7] that for no-signaling it is sufficient that the dynamical model implies evolutions of local observables that depend only on \(\rho_{A}\). But it is seldom noted that this is not a necessary condition. In fact, we prove next that not only the local reduced state \(\rho_{A}\) but also the local perception operators \((F(\rho))^{A}\) of any well-defined nonlinear function \(F(\rho)\) of the overall state (such as the function \(S(\rho)\) defined above for entropy) are not affected by local operations on B according to Eq. (7). And since the SEA formalism is based on such local perception operators, this is an important lemma in the proof that SEA is no-signaling. So, let us apply Eq. (7) to a function of \(F(\rho)\) as locally perceived by A represented, according to definition Eq. (1), by its partial trace weighted with respect to \(\rho_{B}\), \[(F(\rho))^{A}=\mathrm{Tr}_{B}[(I_{A}\otimes\rho_{B})F(\rho)]. \tag{10}\] A function of \(\rho\) is defined from its eigenvalue decomposition by \(F(\rho)=VF(D)V^{\dagger}=\sum_{j}F(\lambda_{j})|\lambda_{j}\rangle\!\langle \lambda_{j}|\), where \(\rho=VDV^{\dagger}\), \(D=\sum_{j}\lambda_{j}|j\rangle\!\langle j|\), and \(V=\sum_{j}|\lambda_{j}\rangle\!\langle j|\). Since unitary transformations do not alter the eigenvalues, \[F(\rho^{\prime})=V^{\prime}F(D)V^{\prime\dagger}\text{ where }V^{\prime}=(I_{A} \otimes U_{B})V\,, \tag{11}\] and therefore, using Eq. 
(8) in the last step, we obtain \[(F(\rho^{\prime}))^{A}=\mathrm{Tr}_{B}[(I_{A}\otimes\rho_{B}^{\prime})F(\rho^{\prime})]\] \[=\mathrm{Tr}_{B}[(I_{A}\otimes\rho_{B}^{\prime})\,(I_{A}\otimes U_{B})VF(D)V^{\dagger}(I_{A}\otimes U_{B}^{\dagger})]\] \[=\mathrm{Tr}_{B}[(I_{A}\otimes U_{B}^{\dagger}\rho_{B}^{\prime}U_{B})\,VF(D)V^{\dagger}]\] \[=\mathrm{Tr}_{B}[(I_{A}\otimes\rho_{B})\,F(\rho)]=(F(\rho))^{A}\,. \tag{12}\] This confirms that local operations on B do not affect the local perception operators of A and, therefore, their proper use in nonlinear QM does not cause signaling issues. We are now ready to introduce the last but not least essential ingredient of a general composite-system nonlinear QM, namely, the system's structure-dependent expressions of the separate contribution of each subsystem to the dissipative term of the equation of motion for the overall state \(\rho\). As discussed above (and clearly recognized in the early SEA literature [10; 11]), the composite-system nonlinear evolution should reflect explicitly the internal structure of the system, essentially by declaring which subsystems are to be prevented from nonphysical effects such as signaling, exchange of energy, or build-up of correlations between non-interacting subsystems. In terms of the notation introduced above, the structure proposed in [10; 11] for the dissipative term of the dynamics to be added to the usual Hamiltonian term is as follows \[\frac{\mathrm{d}\rho}{\mathrm{d}t}=-\frac{\mathrm{i}}{\hbar}[H,\rho]-\sum_{J=1}^{M}\left\{\mathcal{D}_{\rho}^{J},\rho_{J}\right\}\otimes\rho_{\overline{J}}\,, \tag{13}\] where the 'local dissipation operators' \(\mathcal{D}_{\rho}^{J}\) (on \(\mathcal{H}_{J}\)) may be nonlinear functions of the local observables of J, the reduced state \(\rho_{J}\), and the local perception operators of overall observables. For the dissipative term to preserve \(\mathrm{Tr}(\rho)\), operators \(\left\{\mathcal{D}_{\rho}^{J},\rho_{J}\right\}\) must be traceless. To preserve \(\mathrm{Tr}(\rho H)\) [and possibly other conserved properties \(\mathrm{Tr}(\rho C_{k})\)], operators \(\left\{\mathcal{D}_{\rho}^{J},\rho_{J}\right\}\!(H)_{\rho}^{J}\) [and \(\left\{\mathcal{D}_{\rho}^{J},\rho_{J}\right\}\!(C_{k})_{\rho}^{J}\)] must also be traceless. The rate of change of the overall system entropy \(s(\rho)=-k_{\mathrm{B}}\,\mathrm{Tr}[\rho\ln(\rho)]\) is \[\frac{\mathrm{d}s(\rho)}{\mathrm{d}t}=-\sum_{J=1}^{M}\mathrm{Tr}\!\left[\{\mathcal{D}_{\rho}^{J},\rho_{J}\}\left(S(\rho)\right)_{\rho}^{J}\right], \tag{14}\] and the local nonlinear evolution of subsystem J is obtained by partial tracing over \(\mathcal{H}_{\overline{J}}\), in general, \[\frac{\mathrm{d}\rho_{J}}{\mathrm{d}t}=-\frac{\mathrm{i}}{\hbar}[H_{J},\rho_{J}]\,-\frac{\mathrm{i}}{\hbar}\,\mathrm{Tr}_{\overline{J}}([V,\rho]\,)-\left\{\mathcal{D}_{\rho}^{J},\rho_{J}\right\}\,, \tag{15}\] where we recall that the second term on the rhs can be put, for weak interactions and under well-known assumptions, in Kossakowski-Lindblad form. Before introducing the SEA assumption, we emphasize that the construction obtained so far, Eq. (13), opens up and paves the way for a class of no-signaling nonlinear evolution equations that is much broader, through all the possible compatible choices of the operators \(\mathcal{D}_{\rho}^{J}\), than nonlinear laws restricted by the sufficient but not necessary condition that \(\mathrm{d}\rho_{J}/\mathrm{d}t\) be a function of \(\rho_{J}\) only.
We can formally state this no-signaling condition using the following statement, \[\frac{\mathrm{d}\rho_{J}}{\mathrm{d}t}=f(\rho_{J},(C_{k})^{J}). \tag{16}\] Finally, to introduce the SEA assumption in the spirit of the fourth law of thermodynamics [14; 15], one way is employing a variational principle. We first observe from Eq. (14) that the rate of entropy change contributed by subsystem J is directly proportional to the norm of operator \(\mathcal{D}_{\rho}^{J}\), so there is no maximum entropy production rate because we can trivially increase it indefinitely by simple multiplication of \(\mathcal{D}_{\rho}^{J}\) by a positive scalar. But we can fix that norm, and maximize against the direction in operator space, to identify, for each given state \(\rho\), the operators \(\mathcal{D}_{\rho}^{J}\) that point in the direction of steepest entropy ascent. To this end, to recover the original SEA formulation [10] let us maximize Eq. (14) subject to the conservation constraints \(\mathrm{Tr}\Big{[}\{\mathcal{D}_{\rho}^{J},\rho_{J}\}\,(C_{k})_{\rho}^{J}\Big{]}=0\) where \(C_{1}=I\), \(C_{2}=H\), and \(C_{k}\) are other conserved properties (if any), together with the fixed weighted norm constraints \(\mathrm{Tr}\Big{[}\rho_{J}(\mathcal{D}_{\rho}^{J})^{2}\Big{]}=\mathrm{const}\) (for more general SEA formulations in terms of a different metric as necessary to incorporate Onsager reciprocity see [14; 15]). Introducing Lagrange multipliers \(\beta_{k}^{J}\) and \(\tau_{J}\) for the conservation and norm constraints, respectively, and imposing vanishing variational derivatives with respect to operators \(\mathcal{D}_{\rho}^{J}\) at fixed \(\rho\) and \(\rho_{J}\)'s (derivation details in [11; 14]) yields \[2\tau_{J}\mathcal{D}_{\rho}^{J}=(\mathrm{Bln}(\rho))_{\rho}^{J}+\sum_{\ell}\beta_{\ell}^{J}(C_{\ell})_{\rho}^{J}. \tag{17}\] where the multipliers \(\beta_{\ell}^{J}\) must solve the system of equations obtained by substituting these maximizing expressions of the \(\mathcal{D}_{\rho}^{J}\)'s into the conservation constraints, \[\sum_{\ell}\beta_{\ell}^{J}\,\mathrm{Tr}\Big{[}\rho_{J}\Big{\{}(C_{\ell})_{\rho}^{J},(C_{k})_{\rho}^{J}\Big{\}}\Big{]}=-\,\mathrm{Tr}\Big{[}\rho_{J}\Big{\{}(\mathrm{Bln}(\rho))_{\rho}^{J},(C_{k})_{\rho}^{J}\Big{\}}\Big{]}\,. \tag{18}\] When \(C_{1}=I\) and \(C_{2}=H\) determine the conserved properties and Eqs. (18) are linearly independent, using Cramer's rule, properties of determinants, and definitions (2) and (3), the SEA dissipators can be cast as \[\mathcal{D}_{\rho}^{J}=\frac{1}{4\tau_{J}}\frac{\begin{vmatrix}\Delta(\mathrm{Bln}(\rho))_{\rho}^{J}&\Delta(H)_{\rho}^{J}\\ (H,\mathrm{Bln}(\rho))_{\rho}^{J}&(H,H)_{\rho}^{J}\end{vmatrix}}{(H,H)_{\rho}^{J}}\,. \tag{19}\] The rate of entropy production may be expressed as \[\frac{\mathrm{d}s(\rho)}{\mathrm{d}t}=\sum_{J=1}^{M}\frac{1}{2\tau_{J}}\frac{\begin{vmatrix}(\mathrm{Bln}(\rho),\mathrm{Bln}(\rho))_{\rho}^{J}&(\mathrm{Bln}(\rho),H)_{\rho}^{J}\\ (H,\mathrm{Bln}(\rho))_{\rho}^{J}&(H,H)_{\rho}^{J}\end{vmatrix}}{(H,H)_{\rho}^{J}}\,, \tag{20}\] showing clearly that it is nonnegative since the numerators in the summation are Gram determinants.
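A minimal numerical sketch of the dissipator obtained from Eqs. (17)-(18) for the case \(C_{1}=I\), \(C_{2}=H\) (equivalently Eq. (19) up to its positive prefactor, which does not affect the checks below) can be written for one qubit of a correlated, non-interacting two-qubit pair. It verifies that \(\mathrm{Tr}[\{\mathcal{D}_{\rho}^{A},\rho_{A}\}]\) and the energy constraint vanish, and that \(\mathcal{D}_{\rho}^{A}\) is unchanged by a local unitary on B, which is the no-signaling property discussed next. The helper construction is our own, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])

def ptrace_B(M): return M.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
def ptrace_A(M): return M.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

def bln(rho):  # Bln(rho): log on the support, 0 on the kernel
    w, v = np.linalg.eigh(rho)
    lw = np.where(w > 1e-12, np.log(np.clip(w, 1e-12, None)), 0.0)
    return (v * lw) @ v.conj().T

def perception_A(X, rho):      # (X)^A_rho = Tr_B[(I x rho_B) X], Eq. (1)
    return ptrace_B(np.kron(I2, ptrace_A(rho)) @ X)

def deviation_A(X, rho):       # Eq. (2)
    P = perception_A(X, rho)
    return P - I2 * np.trace(ptrace_B(rho) @ P).real

def cov_A(X, Y, rho):          # Eq. (3)
    dX, dY = deviation_A(X, rho), deviation_A(Y, rho)
    return 0.5 * np.trace(ptrace_B(rho) @ (dX @ dY + dY @ dX)).real

def dissipator_A(rho, H):      # Eq. (19) up to the positive prefactor 1/(4 tau_A)
    dB, dH = deviation_A(bln(rho), rho), deviation_A(H, rho)
    return (dB * cov_A(H, H, rho) - dH * cov_A(H, bln(rho), rho)) / cov_A(H, H, rho)

# A random correlated two-qubit state and a non-interacting overall Hamiltonian
g = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = g @ g.conj().T; rho /= np.trace(rho)
H = np.kron(sz, I2) + np.kron(I2, sz)

D = dissipator_A(rho, H)
rho_A = ptrace_B(rho)
anti = D @ rho_A + rho_A @ D
print("Tr{D, rho_A}       ~", abs(np.trace(anti)))                          # ~ 0
print("energy constraint  ~", abs(np.trace(anti @ perception_A(H, rho))))   # ~ 0

# No-signaling: a local unitary on B leaves the dissipator of A unchanged
theta = 0.7
U_B = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
U = np.kron(I2, U_B)
print("D^A unchanged by local unitary on B:",
      np.allclose(D, dissipator_A(U @ rho @ U.conj().T, H)))
```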
Regarding no-signaling, we note that: (1) if subsystem J is noninteracting, \(H=H_{J}\otimes I_{\overline{J}}+I_{J}\otimes H_{\overline{J}}\), then \(\Delta(H)_{\rho}^{J}=H_{J}-I_{J}\,\mathrm{Tr}(\rho_{J}H_{J})\) and \((H,H)_{\rho}^{J}=\mathrm{Tr}\big{[}\rho_{J}(\Delta H_{J})^{2}\big{]}\) depend only on the local \(H_{J}\) and \(\rho_{J}\); and (2) if J is uncorrelated, \(\mathrm{Bln}(\rho)=\mathrm{Bln}(\rho_{J})\otimes I_{\overline{J}}+I_{J}\otimes\mathrm{Bln}(\rho_{\overline{J}})\), then \(\Delta(\mathrm{Bln}(\rho))_{\rho}^{J}=\mathrm{Bln}(\rho_{J})-I_{J}\,\mathrm{Tr}(\rho_{J}\ln(\rho_{J}))\) and \((\mathrm{Bln}(\rho),\mathrm{Bln}(\rho))_{\rho}^{J}=\mathrm{Tr}\big{[}\rho_{J}(\mathrm{ln}(\rho_{J}))^{2}\big{]}\) depend only on the local \(\rho_{J}\). Therefore, it is only when J is both noninteracting and uncorrelated that its local dissipation operator \(\mathcal{D}_{\rho}^{J}\) depends only on the local \(H_{J}\) and \(\rho_{J}\), and the local equation of motion Eq. (15) reduces exactly to the non-composite system version of SEA evolution [10]. Instead, if J is either interacting or correlated, \(\mathcal{D}_{\rho}^{J}\) and, therefore, the local nonlinear SEA evolution according to Eq. (15), is determined not only by the local \(H_{J}\) and \(\rho_{J}\), but also by the local perceptions of the overall interaction Hamiltonian and/or the overall entropy operator \(\mathrm{Bln}(\rho)\), nonetheless without violating the no-signaling condition. In extremal cases, it is known [10; 11; 20] that even if the subsystems are entangled and therefore the local states \(\rho_{J}\) are mixed, operators \(\mathcal{D}_{\rho}^{J}\) vanish and Eqs. (13) and (15) reduce to the standard Schrodinger equation. E.g., if the overall system is in a pure state, \(\mathrm{Bln}(\rho)=0\), standard unitary evolutions of pure states emerge as limit cycles of the nonlinear SEA dynamics. Consider the example of a two-qubit composite AB. The mixed and correlated states \[\rho=\frac{1}{4}\Big{[}\mathrm{I}_{4}+\sum_{j=x,y,z}(a_{j}\,\sigma_{j}\otimes\mathrm{I}_{2}+b_{j}\,\mathrm{I}_{2}\otimes\sigma_{j}+c_{j}\,\sigma_{j}\otimes\sigma_{j})\Big{]}, \tag{21}\] are Bell diagonal states if \(a_{j}=b_{j}=0\) for all \(j\)'s (and Werner states if in addition \(c_{j}=4w/3-1\) for all \(j\)'s) with eigenvalues \(4\lambda_{1}=1-c_{x}-c_{y}-c_{z}\), \(4\lambda_{2}=1-c_{x}+c_{y}+c_{z}\), \(4\lambda_{3}=1+c_{x}-c_{y}+c_{z}\), \(4\lambda_{4}=1+c_{x}+c_{y}-c_{z}\). Somewhat surprisingly, Bell diagonal states are nondissipative limit cycles within nonlinear SEA dynamics, under any Hamiltonian. Indeed, we find \((\mathrm{Bln}(\rho))_{\rho}^{J}=\mathrm{I}_{2}\sum_{k}\mathrm{Bln}(\lambda_{k})/2\), so that \(\Delta(\mathrm{Bln}(\rho))_{\rho}^{J}=0\) and \(\mathcal{D}_{\rho}^{J}=0\), for both \(J=A,B\). But most neighboring and other states in this class are dissipative. For a simple example of correlated but separable mixed states, assume \(a_{x}=a\), \(b_{x}=b\), and \(a_{y}=a_{z}=b_{y}=b_{z}=c_{x}=c_{y}=c_{z}=0\), so that \(\rho_{A}\otimes\rho_{B}-\rho=(ab/4)\sigma_{x}\otimes\sigma_{x}\) and the eigenvalues are \(4\lambda_{1}=1-a-b\), \(4\lambda_{2}=1-a+b\), \(4\lambda_{3}=1+a-b\), \(4\lambda_{4}=1+a+b\).
If the two noninteracting qubits A and B have local Hamiltonians \(H_{A}=H_{B}=\sigma_{z}\), we find \[\{\mathcal{D}^{A}_{\rho},\rho_{A}\} = \frac{(1-a^{2})}{16}(bf_{a,b}-g_{a,b})\,\sigma_{x}, \tag{22}\] \[\{\mathcal{D}^{B}_{\rho},\rho_{B}\} = \frac{(1-b^{2})}{16}(af_{a,b}-h_{a,b})\,\sigma_{x}, \tag{23}\] where \(f_{a,b}=\mathrm{Bln}(\lambda_{1})-\mathrm{Bln}(\lambda_{2})-\mathrm{Bln}(\lambda_{3})+\mathrm{Bln}(\lambda_{4})\), \(g_{a,b}=\mathrm{Bln}(\lambda_{1})+\mathrm{Bln}(\lambda_{2})-\mathrm{Bln}(\lambda_{3})-\mathrm{Bln}(\lambda_{4})\), \(h_{a,b}=\mathrm{Bln}(\lambda_{1})-\mathrm{Bln}(\lambda_{2})+\mathrm{Bln}(\lambda_{3})-\mathrm{Bln}(\lambda_{4})\), so that the nonlinear evolution is clearly nontrivial. But it preserves the zero mean energies of both qubits, and while the overall entropy increases and mutual information partially fades away, it drives the overall state towards a nondissipative correlated state with maximally mixed marginals. We proved above that signaling is impossible, even though \(\mathcal{D}^{A}_{\rho}\) depends not only on \(a\) but also on \(b\), and \(\mathcal{D}^{B}_{\rho}\) not only on \(b\) but also on \(a\), which agrees with our no-signaling condition in Eq. (16). For a slightly more elaborate example that includes entangled mixed states, assume \(a_{x}=a_{z}=a/\sqrt{2}\), \(b_{x}=b_{z}=b/\sqrt{2}\), \(a_{y}=b_{y}=0\), and \(c_{x}=c_{y}=c_{z}=2(a-b)/3\), so that the eigenvalues are \(12\lambda_{1}=3+3a-3b\), \(12\lambda_{2}=3-a-5b\), \(12\lambda_{3}=3+5a+b\), \(12\lambda_{4}=3-7a+7b\), and those of the partial transpose \(12\lambda_{1}^{PT}=3+a-b\), \(12\lambda_{2}^{PT}=3-5a+5b\), \(12\lambda_{3}^{PT}=3+2a-2b+\sqrt{d}\), \(12\lambda_{4}^{PT}=3+2a-2b-\sqrt{d}\) with \(d=25a^{2}-14ab+25b^{2}\). For \(a=-b\) these states are separable for \(-3/14\leq b\leq 1/4\) and entangled for \(1/4<b\leq 1/2\). If the two noninteracting qubits A and B have local Hamiltonians \(H_{A}=H_{B}=\sigma_{z}\), we find \[\{\mathcal{D}^{A}_{\rho},\rho_{A}\} = \frac{\sqrt{2}(1-a^{2})}{80(2-a^{2})}(f_{a,b}-5bh_{a,b})\,\sigma_{x}, \tag{24}\] \[\{\mathcal{D}^{B}_{\rho},\rho_{B}\} = -\frac{\sqrt{2}(1-b^{2})}{80(2-b^{2})}(g_{a,b}+5ah_{a,b})\,\sigma_{x}, \tag{25}\] where here \(f_{a,b}=3\mathrm{Bln}(\lambda_{1})-5\mathrm{Bln}(\lambda_{2})+5\mathrm{Bln}(\lambda_{3})-3\mathrm{Bln}(\lambda_{4})\), \(g_{a,b}=3\mathrm{Bln}(\lambda_{1})+5\mathrm{Bln}(\lambda_{2})-5\mathrm{Bln}(\lambda_{3})-3\mathrm{Bln}(\lambda_{4})\), \(h_{a,b}=\mathrm{Bln}(\lambda_{1})-\mathrm{Bln}(\lambda_{2})-\mathrm{Bln}(\lambda_{3})+\mathrm{Bln}(\lambda_{4})\), so that again the nonlinear evolution is clearly nontrivial in the sense that the local nonlinear evolution of A (B) does not depend only on \(\rho_{A}\) (\(\rho_{B}\)), despite being no-signaling. To summarize, in this Letter we prove that the SEA formalism provides a valid non-linear extension of QM. To show this, we explore the definition of no-signaling for composite systems and provide generalized necessary criteria in terms of locally perceived operators, less restrictive than the traditional criterion in terms of local density operators. Furthermore, we build on that definition and show how, by construction, SEA is no-signaling. For non-interacting subsystems, the traditional criterion is met for uncorrelated states, but we provide nontrivial examples of correlated states for which it is not met. RKR is grateful to the INSPIRE Fellowship program by the Department of Science and Technology, Govt. of India for funding his Ph.D., to Prof.
Alok Pan of the Indian Institute of Technology Hyderabad for many useful discussions, and to the Wolfram Publication Watch Team for providing full access to online Mathematica [21].
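As a closing numerical sanity check of the second two-qubit family above (a Python/NumPy sketch, not part of the Letter; the helper names are ad hoc), the snippet below reproduces the listed eigenvalues and, through the partial-transpose (PPT) criterion, which is conclusive for two qubits, the separable/entangled split of the \(a=-b\) states at \(b=1/4\).

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_example(a, b):
    # a_x = a_z = a/sqrt(2), b_x = b_z = b/sqrt(2), a_y = b_y = 0, c_j = 2(a-b)/3
    c = 2 * (a - b) / 3
    rho = np.eye(4, dtype=complex)
    rho += (a / np.sqrt(2)) * (np.kron(sx, I2) + np.kron(sz, I2))
    rho += (b / np.sqrt(2)) * (np.kron(I2, sx) + np.kron(I2, sz))
    rho += c * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
    return rho / 4

def partial_transpose_B(rho):
    # transpose the second qubit's indices
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

a, b = -0.3, 0.3
print("12*spectrum:", np.round(np.sort(12 * np.linalg.eigvalsh(rho_example(a, b))), 6))
print("as listed  :", np.round(np.sort([3 + 3*a - 3*b, 3 - a - 5*b,
                                        3 + 5*a + b, 3 - 7*a + 7*b]), 6))

# PPT test along the a = -b family: separable for b <= 1/4, entangled above.
for bb in (0.20, 0.26, 0.45):
    pt_min = np.linalg.eigvalsh(partial_transpose_B(rho_example(-bb, bb))).min()
    verdict = "PPT (separable)" if pt_min >= -1e-12 else "NPT (entangled)"
    print(f"b = {bb:.2f}: min eigenvalue of rho^T_B = {pt_min:+.4f}  ->  {verdict}")
```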
2303.07553
Weighted norm inequalities in the variable Lebesgue spaces for the Bergman projector on the unit ball of $\mathbb{C}^n$
In this work, we extend the theory of B\'ekoll\`e-Bonami $B_p$ weights. Here we replace the constant $p$ by a non-negative measurable function $p(\cdot),$ which is a log-H\"older continuous function with lower bound $1$. We show that the Bergman projector on the unit ball of $\mathbb C^n$ is continuous on the weighted variable Lebesgue spaces $L^{p(\cdot)}(w)$ if and only if $w$ belongs to the generalised B\'ekoll\`e-Bonami class $B_{p(\cdot)}$. To achieve this, we define a maximal function and show that it is bounded on $L^{p(\cdot)}(w)$ if $w\in B_{p(\cdot)}$. We next state and prove a weighted extrapolation theorem that allows us to conclude.
David Békollè, Edgar-Landry Tchoundja, Arsene-Brice Zotsa-Ngoufack
2023-03-14T00:45:24Z
http://arxiv.org/abs/2303.07553v1
Weighted norm inequalities in the variable Lebesgue spaces for the Bergman projector on the unit ball of \(\mathbb{C}^{n}\) ###### Abstract. In this work, we extend the theory of Bekolle-Bonami \(B_{p}\) weights. Here we replace the constant \(p\) by a non-negative measurable function \(p(\cdot),\) which is a log-Holder continuous function with lower bound \(1\). We show that the Bergman projector on the unit ball of \(\mathbb{C}^{n}\) is continuous on the weighted variable Lebesgue spaces \(L^{p(\cdot)}(w)\) if and only if \(w\) belongs to the generalised Bekolle-Bonami class \(B_{p(\cdot)}\). To achieve this, we define a maximal function and show that it is bounded on \(L^{p(\cdot)}(w)\) if \(w\in B_{p(\cdot)}.\) We next state and prove a weighted extrapolation theorem that allows us to conclude. Key words and phrases: Variable exponent Lebesgue spaces, Variable exponent Bergman spaces, Weighted inequalities, Bergman projector 2010 Mathematics Subject Classification: 32A10; 46E40; 47B35 ## 1. Introduction The purpose of this work is to generalise the Bekolle-Bonami theorem [2] for the Bergman projector on the unit ball \(\mathbb{B}\) of \(\mathbb{C}^{n}\) to the case of weighted variable Lebesgue spaces. The case of the unweighted variable Lebesgue spaces was treated by Chacon and Rafeiro [4, 14]. These authors showed that the Bergman projector is bounded on variable Lebesgue spaces for exponent functions \(p(\cdot),\) which are log-Holder continuous functions with lower bound \(1\) (cf. Definition 1.1 below). The ingredients of their proof are: the classical Bekolle-Bonami theorem, the boundedness of the Hardy-Littlewood maximal function on variable Lebesgue spaces, and an extrapolation theorem. For basic properties of variable Lebesgue spaces, e.g. the boundedness of the Hardy-Littlewood maximal function and an extrapolation theorem, we refer to [9, 10]. The \(\sigma\)-algebra on \(\mathbb{B}\) is the Borel \(\sigma\)-algebra. Let \(\nu\) be a positive measure on \(\mathbb{B}.\) The variable Lebesgue space on \(\mathbb{B},\) denoted by \(L^{p(\cdot)}(\nu),\) is a generalisation of the classical Lebesgue spaces, obtained by replacing the constant exponent \(p\) by a measurable exponent function \(p(\cdot):\mathbb{B}\rightarrow[0,\infty).\) We shall denote by \(\mathcal{P}(\mathbb{B})\) the family of all exponent functions \(p(\cdot)\) on \(\mathbb{B}.\) For a measurable subset \(E\) of \(\mathbb{B},\) we introduce the following notation: \[p_{-}(E)=ess\inf_{z\in E}p(z)\quad\text{and}\quad p_{+}(E)=ess\sup_{z\in E}p(z)\] and we will use the notation \(p_{-}=p_{-}(\mathbb{B})\) and \(p_{+}=p_{+}(\mathbb{B}).\) We shall denote by \(\mathcal{P}_{+}(\mathbb{B})\) the subfamily of \(\mathcal{P}(\mathbb{B})\) consisting of those \(p(\cdot)\) such that \(p_{+}<\infty.\) In the classical Lebesgue spaces, we have the following definition. **Definition 1.3**.: Let \(p>1\) be a constant exponent.
The Bekolle-Bonami \(B_{p}\) weight class consists of weights \(w\) such that \[\sup_{B\in\mathcal{B}}\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}wd\mu_{\alpha} \right)\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}w^{-\frac{1}{p-1}}d\mu_{\alpha} \right)^{p-1}<\infty.\] This definition of \(B_{p}\) is equivalent to the following definition: \[\sup_{B\in\mathcal{B}}\frac{1}{\mu_{\alpha}(B)}\|w^{\frac{1}{p}}\chi_{B}\|_{p} \|w^{-\frac{1}{p}}\chi_{B}\|_{p^{\prime}}<\infty,\] where \(p^{\prime}\) is the conjugate exponent of \(p\), i.e. \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\). In the same spirit, we introduce a variable generalisation of the \(B_{p}\) weight class. Analogously to the classical case, for \(p(\cdot)\in\mathcal{P}_{-}(\mathbb{B})\), we say that \(p^{\prime}(\cdot)\) is the conjugate exponent function of \(p(\cdot)\) if for all \(z\in\mathbb{B}\) we have \[\frac{1}{p(z)}+\frac{1}{p^{\prime}(z)}=1.\] Moreover, we set \(\mathcal{P}_{\pm}(\mathbb{B})=\mathcal{P}_{-}(\mathbb{B})\cap\mathcal{P}_{+}( \mathbb{B})\), the subfamily of \(\mathcal{P}(\mathbb{B})\) consisting of those exponent functions \(p(\cdot)\) such that \(1<p_{-}\leq p_{+}<\infty.\) We now define the variable Bekolle-Bonami classes of weights. **Definition 1.4**.: Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}).\) A weight \(w\) belongs to the variable Bekolle-Bonami class on \(\mathbb{B}\), denoted \(B_{p(\cdot)}\), if \[[w]_{B_{p(\cdot)}}:=\sup_{B\in\mathcal{B}}\frac{1}{\mu_{\alpha}(B)}\|w^{\frac {1}{p(\cdot)}}\chi_{B}\|_{p(\cdot)}\|w^{-\frac{1}{p(\cdot)}}\chi_{B}\|_{p^{ \prime}(\cdot)}<\infty. \tag{1.2}\] We define the operator \(P_{\alpha}\) on \(L^{1}(\mathbb{B},d\mu_{\alpha})\) by \[P_{\alpha}f(z)=\int_{\mathbb{B}}\frac{f(\zeta)}{(1-\langle z,\zeta\rangle)^{ n+\alpha}}d\mu_{\alpha}(\zeta).\] The restriction to \(L^{2}(\mathbb{B},d\mu_{\alpha})\) of the operator \(P_{\alpha}\) is called the Bergman projector of \(\mathbb{B}\). We also define the positive Bergman operator \(P_{\alpha}^{+}\) by \[P_{\alpha}^{+}f(z)=\int_{\mathbb{B}}\frac{f(\zeta)}{|1-\langle z,\zeta\rangle| ^{n+\alpha}}d\mu_{\alpha}(\zeta).\] We now recall the classical Bekolle-Bonami theorem: **Theorem 1.5**.: _[_2_]_ _Let \(w\) be a non-negative measurable function and let \(1<p<\infty\) (\(p\) is a constant exponent). The following two assertions are equivalent._ 1. _The Bergman operator_ \(P_{\alpha}\) _is well defined and bounded on_ \(L^{p}(wd\mu_{\alpha});\)__ 2. \(w\in B_{p}.\)__ _Moreover, \(P_{\alpha}^{+}\) is well defined and bounded on \(L^{p}(wd\mu_{\alpha})\) if \(w\in B_{p}.\)_ The purpose of this work is to prove the following generalisation of the previous theorem: **Theorem 1.6**.: _Let \(w\) be a non-negative measurable function and \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B})\). The following two assertions are equivalent._ 1. _The Bergman operator_ \(P_{\alpha}\) _is well defined and bounded on_ \(L^{p(\cdot)}(wd\mu_{\alpha});\)__ 2. \(w\in B_{p(\cdot)}.\)__ _Moreover, \(P_{\alpha}^{+}\) is well defined and bounded on \(L^{p(\cdot)}(wd\mu_{\alpha})\) if \(w\in B_{p(\cdot)}.\)_ The problem under study is trivial if \(w(\mathbb{B})=0,\) i.e. \(w\equiv 0\) a.e. on \(\mathbb{B}.\) We shall assume that \(w(\mathbb{B})>0.\) In [11], Diening and Hasto introduced the variable Muckenhoupt weight class \(A_{p(\cdot)}\) on \(\mathbb{R}^{n},\) and showed that for \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{R}^{n}),\) the Hardy-Littlewood maximal function is bounded on \(L^{p(\cdot)}(w)\) only if \(w\in A_{p(\cdot)}\). 
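Before continuing with the weighted variable theory, a small numerical aside on Definition 1.3 may be helpful: the two displays there carry the same information because the first quantity is exactly the \(p\)-th power of the second, so the corresponding suprema over \(\mathcal{B}\) are finite simultaneously. The toy computation below (Python/NumPy, a plain discretized measure with no reference to the ball geometry; it is an illustration, not part of the paper) checks this identity.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 2.7
p_prime = p / (p - 1.0)                       # conjugate exponent, 1/p + 1/p' = 1

# sample values of the weight w and of the measure mu_alpha on a "ball" B,
# represented here simply by an index set of 1000 cells
w = rng.uniform(0.1, 5.0, size=1000)
mu = rng.uniform(0.5, 1.5, size=1000)
mu_B = mu.sum()

# first display of Definition 1.3
q1 = (np.sum(w * mu) / mu_B) * (np.sum(w ** (-1 / (p - 1)) * mu) / mu_B) ** (p - 1)

# second display: (1/mu(B)) ||w^{1/p} chi_B||_p ||w^{-1/p} chi_B||_{p'}
norm_p = np.sum(w * mu) ** (1 / p)                            # since (w^{1/p})^p = w
norm_pp = np.sum(w ** (-1 / (p - 1)) * mu) ** (1 / p_prime)   # since (w^{-1/p})^{p'} = w^{-1/(p-1)}
q2 = norm_p * norm_pp / mu_B

print("first display :", q1)
print("second^p      :", q2 ** p)   # equal up to round-off
```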
In order to manage the necessary condition, they introduced a new class \(A^{+}_{p(\cdot)}\) which coincides with \(A_{p(\cdot)}\) when \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{R}^{n}),\) but whose condition is easier to check. More precisely, they prove that the class \(A^{+}_{p(\cdot)}\) is contained in the class \(A_{p(\cdot)}\) when \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{R}^{n}),\) but they leave for future investigation the proof of the reverse inclusion. Later, Cruz-Uribe and these two authors [7] gave a new proof of this result using the Calderon-Zygmund decomposition and they also proved the reverse implication. Very recently, Cruz-Uribe and Cummings [6] extended the result of [7] to the spaces of homogeneous type. In this paper we will use the technique of [11, 7] to manage the proof of the necessary condition in Theorem 1.6. Precisely, we shall introduce a new class denoted \(B^{+}_{p(\cdot)}\) which coincides with \(B_{p(\cdot)}\) when \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B})\). To deal with the sufficient condition, we rely on the result of [6] about the boundedness on \(L^{p(\cdot)}(w)\) of the Hardy-Littlewood maximal function on the space of homogeneous type \(\mathbb{B},\) for \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B})\) and \(w\) in the corresponding class of weights \(A_{p(\cdot)}.\) The proof then follows two steps. First, we use this result to show that, for \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B}),\) the maximal function \(m_{\alpha}\) is bounded on \(L^{p(\cdot)}(w)\) if \(w\in B_{p(\cdot)}.\) Secondly, we lean on the first step to define a new extrapolation theorem which allows us to conclude the proof of the sufficient condition in Theorem 1.6. The rest of our paper is organised as follows. In section 2, we shall recall some preliminaries. Next in section 3, we review properties of weighted variable Lebesgue spaces, variable Bekolle-Bonami and Muckenhoupt classes of weights. In the end of this section, we state the theorem of Cruz-Uribe and Cummings about the boundedness on \(L^{p(\cdot)}(w)\) of the Hardy-Littlewood maximal function on \(\mathbb{B}.\) In section 4, we prove the necessity of the conditions \(w^{\frac{1}{p(\cdot)}}\in L^{p(\cdot)}(d\mu_{\alpha})\) and \(w^{-\frac{1}{p(\cdot)}}\in L^{p^{\prime}(\cdot)}(d\mu_{\alpha})\) in Theorem 1.6. In section 5, we define and study the class \(B^{+}_{p(\cdot)}\) and we show the identity \(B^{+}_{p(\cdot)}=B_{p(\cdot)}\) and we also prove the reverse inclusion \(A_{p(\cdot)}\) is contained in \(A^{+}_{p(\cdot)}\). In section 6, we prove the necessary condition in Theorem 1.6. In section 7, we show that the maximal function \(m_{\alpha}\) is bounded on \(L^{p(\cdot)}(w)\) if \(w\in B_{p(\cdot)}.\) Finally, in section 8, we prove a weighted extrapolation theorem from which we deduce a proof of the sufficient condition in Theorem 1.6. Let \(a\) and \(b\) be two positive numbers. Throughout the paper, we write \(a\lesssim b\) if there exists \(C>0\) such that \(a\leq Cb\). We write \(a\simeq b\) if \(a\lesssim b\) and \(b\lesssim a\). ## 2. Preliminaries In this section we present some background material regarding the unit ball of \(\mathbb{C}^{n}\) as a space of homogeneous type and the variable exponent Lebesgue spaces. ### The unit ball is a space of homogeneous type In this subsection we recall some lemmas from [2] where it was first considered \((\mathbb{B},d,\mu_{\alpha}).\) **Lemma 2.1**.: _Let \(z\in\mathbb{B}\) and \(R>0\). 
The pseudo-ball \(B(z,R)\) meets the boundary of \(\mathbb{B}\) (in other words, \(B\in\mathcal{B}\)) if and only if \(R>1-|z|\)._ **Lemma 2.2**.: _Let \(z\in\mathbb{B}\) and \(0<R\leq 3\). Then we have_ \[\mu_{\alpha}(B(z,R))\simeq R^{n+1}\left(\max(R,1-|z|)\right)^{\alpha-1}.\] **Remark 2.3**.: From Lemma 2.1 and Lemma 2.2, if \(B(z,R)\in\mathcal{B}\) we have \[\mu_{\alpha}(B(z,R))\simeq R^{n+\alpha}.\] From Lemma 2.2, it follows that \((\mathbb{B},d,\mu_{\alpha})\) is a space of homogeneous type. Next we have the following lemma. **Lemma 2.4**.: _[_6_]_ _There exist two positive constants \(C\) and \(\gamma\) such that, for all \(\zeta\in B(z,R)\) we have_ \[\mu_{\alpha}(B(\zeta,r))\geq C\left(\frac{r}{R}\right)^{\gamma}\mu_{\alpha}(B(z,R))\] _for every \(0<r\leq R<\infty.\)_ ### Variable exponent Lebesgue spaces We denote by \(\mathcal{M}\) the space of complex-valued measurable functions defined on \(\mathbb{B}\). Let \(\nu\) be a positive measure on \(\mathbb{B}.\) The family \(\mathcal{P}(\mathbb{B})\) of variable exponents was defined in the introduction. In the rest of the paper, we take \(p(\cdot)\in\mathcal{P}(\mathbb{B}).\) The next definitions, properties and propositions are stated in [9, 10, 14]. We first recall some properties of the modular functional \(\rho_{p(\cdot)}:\mathcal{M}\to[0,\infty],\) defined in the introduction as \[\rho_{p(\cdot)}(f)=\int_{\mathbb{B}}|f(z)|^{p(z)}d\nu(z).\] **Proposition 2.5**.: _Let \(p(\cdot)\in\mathcal{P}(\mathbb{B})\) be such that \(p(\cdot)\geq 1.\)_ 1. _For all_ \(f\in\mathcal{M},\quad\rho_{p(\cdot)}(f)\geq 0,\text{ and }\rho_{p(\cdot)}(f)=\rho_{p(\cdot)}(|f|)\)_._ 2. _For all_ \(f\in\mathcal{M}\)_, if_ \(\rho_{p(\cdot)}(f)<\infty\) _then_ \(|f(z)|<\infty\) _a.e. on_ \(\mathbb{B}.\)__ 3. \(\rho_{p(\cdot)}\) _is convex. In particular, for_ \(0<\alpha\leq 1\) _and_ \(f\in\mathcal{M},\)__\(\rho_{p(\cdot)}(\alpha f)\leq\alpha\rho_{p(\cdot)}(f)\)_, and for_ \(\alpha\geq 1,\quad\alpha\rho_{p(\cdot)}(f)\leq\rho_{p(\cdot)}(\alpha f)\)_._ 4. \(\rho_{p(\cdot)}(f)=0\) _if and only if_ \(f(z)=0\) _a.e. on_ \(\mathbb{B}.\)__ 5. _If for almost all_ \(z\in\mathbb{B},\)__\(|f(z)|\leq|g(z)|\)_, then_ \(\rho_{p(\cdot)}(f)\leq\rho_{p(\cdot)}(g)\)_._ 6. _If there exists_ \(\beta>0\) _such that_ \(\rho_{p(\cdot)}(\frac{f}{\beta})<\infty\)_, then the function_ \(\lambda\longmapsto\rho_{p(\cdot)}(\frac{f}{\lambda})\) _is continuous and non-increasing on_ \([\beta,\infty[\)_. In addition_ \[\lim_{\lambda\longrightarrow\infty}\rho_{p(\cdot)}\left(\frac{f}{\lambda}\right)=0.\] For \(p(\cdot)\in\mathcal{P}(\mathbb{B}),\) the variable Lebesgue space \(L^{p(\cdot)}(d\nu)\) was defined in the introduction. **Proposition 2.6**.: _[_9_, Theorem 2.7.2]_ _Let \(p(\cdot)\in\mathcal{P}_{+}(\mathbb{B})\) be such that \(p(\cdot)\geq 1.\) Then the subspace of continuous functions of compact support in \(\mathbb{B}\) is dense in the space \(L^{p(\cdot)}(d\nu).\)_ We next recall the Holder inequality in the variable exponent context. **Proposition 2.7**.: _[_9_, Theorem 2.26, Corollary 2.28]___ 1. _Let_ \(p(\cdot)\in\mathcal{P}(\mathbb{B})\) _be such that_ \(p(\cdot)\geq 1.\) _Then for all_ \(f,g\in\mathcal{M},\) _we have_ \[\int_{\mathbb{B}}|fg|d\nu\leq 2\|f\|_{p(\cdot)}\|g\|_{p^{\prime}(\cdot)}.\] 2.
_Let_ \(r(\cdot),\)__\(q(\cdot)\in\mathcal{P}(\mathbb{B})\) _such that_ \(r(\cdot),q(\cdot)\geq 1\) _and_ \(\frac{1}{q(x)}+\frac{1}{r(x)}\leq 1\) _for all_ \(x\in\mathbb{B}.\) _Define_ \(p(\cdot)\in\mathcal{P}(\mathbb{B})\) _such that_ \(p(\cdot)\geq 1,\) _by_ \[\frac{1}{p(x)}=\frac{1}{q(x)}+\frac{1}{r(x)}.\] _Then there exists a constant_ \(K\) _such that for all_ \(f\in L^{q(\cdot)}\) _and_ \(g\in L^{r(\cdot)},\)__\(fg\in L^{p(\cdot)}\) _and_ \[\|fg\|_{p(\cdot)}\leq K\|f\|_{q(\cdot)}\|g\|_{r(\cdot)}.\] We record the following useful remark. **Remark 2.8**.: The property \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) is also true for \(p^{\prime}(\cdot).\) In what follows, we will use condition log-Holder as in given in the following Lemma. **Lemma 2.9**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}(\mathbb{B}).\) Let \(B=B(x,R)\) be a pseudo-ball of \(\mathbb{B}\) such that \(R<\frac{1}{4}.\) Then_ \[p_{+}(B)-p_{-}(B)\leq\frac{c}{\ln(\frac{1}{4R})}.\] **Lemma 2.10**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}(\mathbb{B})\). There exist two positive constants \(C_{1}=C_{1}(\alpha,n,p(\cdot))\) and \(C_{2}=C_{2}(\alpha,n,p(\cdot))\) such that for every pseudo-ball \(B\) of \(\mathbb{B},\) we have_ \[\mu_{\alpha}(B)^{p_{-}(B)-p_{+}(B)}\leq C_{1}\text{ and }\mu_{\alpha}(B)^{p_{+ }(B)-p_{-}(B)}\leq C_{2}.\] Proof.: Since \(p_{+}(B)-p_{-}(B)\geq 0,\)\(\mu_{\alpha}(\mathbb{B})<\infty,\)\(B\subset\mathbb{B}\) and \(p(\cdot)\) bounded, we have the second inequality. We are going to prove the first inequality. Suppose that \(B=B(z,R).\) 1. If \(R\geq\frac{1}{16},\) from Lemma 2.2, there exists \(C>0\) such that \(\mu_{\alpha}(B)\geq CR^{n+\alpha}\) and as \(p_{-}(B)-p_{+}(B)\leq 0,\) we obtain: \[\mu_{\alpha}(B)^{p_{-}(B)-p_{+}(B)}\leq(CR^{n+\alpha})^{(p_{-}(B)-p_{+}(B))} \leq(C(16)^{n+1})^{(p_{+}(B)-p_{-}(B))}\] 2. If \(R<\frac{1}{16},\) from Lemma 2.9, \[p_{+}(B)-p_{-}(B)\leq\frac{c}{\ln(\frac{1}{4R})}.\] Hence from Remark 2.3, \[p_{+}(B)-p_{-}(B)\lesssim\frac{1}{\ln(\frac{1}{\mu_{\alpha}(B)})}.\] Thus \[\mu_{\alpha}(B)^{p_{-}(B)-p_{+}(B)}\lesssim 1.\] From the previous lemma, we easily deduce the following corollary. **Corollary 2.11**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}(\mathbb{B}).\) There exists a constant \(C=C(\alpha,n,p(\cdot))>1\) such that for every pseudo-ball \(B\) of \(\mathbb{B}\) and every \(z\in B\) we have_ \[\frac{1}{C}\leq\mu_{\alpha}(B)^{p_{-}(B)-p(z)}\leq C.\] ## 3. Weighted variable Lebesgue spaces, variable Bekolle-Bonami and Muckenhoupt classes of weights ### Weighted variable Lebesgue spaces Let \(w\) be a weight and let \(p(\cdot)\in\mathcal{P}(\mathbb{B})\) be such that \(p(\cdot)\geq 1.\) The corresponding weighted variable exponent Lebesgue space \(L^{p(\cdot)}(wd\mu_{\alpha})\) consists of those \(f\in\mathcal{M}\) which satisfy the estimate \[\rho_{p(\cdot),w}(f)=\int_{\mathbb{B}}|f(z)|^{p(z)}w(z)d\mu_{\alpha}(z)<\infty.\] We also denote it \(L^{p(\cdot)}(\mathbb{B},w)\) or simply \(L^{p(\cdot)}(w),\) and we denote its norm by \(\|\cdot\|_{p(\cdot),w}.\) It is easy to check that \[\|f\|_{p(\cdot),w}=\|fw^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}.\] In the sequel, we shall adopt the following notation: \[w^{\prime}:=w^{1-p^{\prime}(\cdot)}. \tag{3.1}\] We recall the notion of subordinate norm on \(L^{p(\cdot)}(\mathbb{B},w)\) defined by \[\|f\|_{p(\cdot),w}^{\prime}:=\sup_{\|g\|_{p^{\prime}(\cdot),w^{\prime}}=1}\left| \int_{\mathbb{B}}f(\zeta)\overline{g(\zeta)}d\mu_{\alpha}(\zeta)\right|.\] We next recall the following proposition. 
**Proposition 3.1**.: _[_9_, 10, Corollary 2.7.5]_ _Let \(p(\cdot)\in\mathcal{P}(\mathbb{B})\) such that \(p(\cdot)\geq 1\) and let \(w\) be a weight. Then_ \[\|f\|_{p(\cdot),w}\leq\|f\|_{p(\cdot),w}^{\prime}\leq 2\|f\|_{p(\cdot),w}.\] The following lemma will be very useful. **Lemma 3.2**.: _[_9_, 10_]_ _Let \(w\) be a non-negative measurable function and let \(p(\cdot)\in\mathcal{P}_{+}(\mathbb{B})\) be such that \(p_{-}>0\). Then for every \(f\in\mathcal{M}\) whose support is \(E,\) the following double inequality holds_ \[\min\left(\rho_{p(\cdot),w}(f)^{\frac{1}{p_{-}(E)}},\rho_{p(\cdot),w}(f)^{ \frac{1}{p_{+}(E)}}\right)\leq\|f\|_{p(\cdot),w}\leq\max\left(\rho_{p(\cdot), w}(f)^{\frac{1}{p_{-}(E)}},\rho_{p(\cdot),w}(f)^{\frac{1}{p_{+}(E)}}\right).\] _It is equivalent to_ \[\min\left(\|f\|_{p(\cdot),w}^{p_{-}(E)},\|f\|_{p(\cdot),w}^{p_{+}(E)}\right) \leq\rho_{p(\cdot),w}(f)\leq\max\left(\|f\|_{p(\cdot),w}^{p_{-}(E)},\|f\|_{p( \cdot),w}^{p_{+}(E)}\right).\] ### Variable Bekolle-Bonami classes of weights Concerning the variable Bekolle-Bonami weight class \(B_{p(\cdot)},\) we record the following elementary remark. **Remark 3.3**.: Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}).\) If \(w\in B_{p(\cdot)},\) the following two assertions are valid. 1. \(\|w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}<\infty\) and \(\|w^{-\frac{1}{p(\cdot)}}\|_{p^{\prime}(\cdot)}<\infty.\) 2. The functions \(w\) and \(w^{\prime}\) are integrable on \(\mathbb{B}.\) Proof of the remark.: Otherwise, if \(\|w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}=\infty,\) then necessarily \(\|w^{-\frac{1}{p(\cdot)}}\|_{p^{\prime}(\cdot)}=0\) and this would imply that \(w\equiv\infty\) a.e. Alternatively, if \(\|w^{-\frac{1}{p(\cdot)}}\|_{p(\cdot)}=\infty,\) then necessarily \(\|w^{\frac{1}{p(\cdot)}}\|_{p^{\prime}(\cdot)}=0\) and this would imply that \(w\equiv 0\) a.e. Furthermore, by Lemma 3.2, for \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}),\) the assertions 1) and 2) are equivalent. We also have the following proposition. **Proposition 3.4**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}).\) For a weight \(w,\) the following two assertions are equivalent._ 1. \(w\in B_{p(\cdot)};\)__ 2. \(w^{\prime}\in B_{p^{\prime}(\cdot)}.\)__ _Moreover, \([w]_{B_{p(\cdot)}}=[w^{\prime}]_{B_{p^{\prime}(\cdot)}}.\)_ The following simple lemma will be useful. **Lemma 3.5**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}).\) For a weight \(w,\) the following two assertions are equivalent._ 1. \(w\in B_{p(\cdot)};\)__ 2. 
\(\sup_{B\in\mathcal{B}}\frac{1}{\mu_{\alpha}(B)}\left\|\chi_{B}\right\|_{p( \cdot),w}\left\|\chi_{B}\right\|_{p^{\prime}(\cdot),w^{\prime}}<\infty.\)__ ### Variable Muckenhoupt classes of weights **Definition 3.6**.: The Hardy-Littlewood maximal function \(M_{\alpha}\) on the space of homogeneous type \((\mathbb{B},d,\mu_{\alpha})\) is defined by \[M_{\alpha}f(z)=\sup_{B}\frac{\chi_{B}(z)}{\mu_{\alpha}(B)}\int_{B}|f(\zeta)|d \mu_{\alpha}(\zeta)\] where the supremum is taken over all pseudo-balls of \(\mathbb{B}.\) When \(p\) is a constant greater than \(1,\) the Muckenhoupt class \(A_{p}\) consists of weights \(w\) which satisfy the estimate \[\sup_{B}\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}wd\mu_{\alpha}\right)\left( \frac{1}{\mu_{\alpha}(B)}\int_{B}w^{-\frac{1}{p-1}}d\mu_{\alpha}\right)^{p-1}<\infty,\] where the sup is taken over all pseudo-balls \(B\) of \(\mathbb{B}.\) This definition is equivalent with the following definition: \[\sup_{B}\frac{1}{\mu_{\alpha}(B)}\|w^{\frac{1}{p}}\chi_{B}\|_{p}\|w^{-\frac{1 }{p}}\chi_{B}\|_{p^{\prime}}<\infty,\] where the sup is taken again over all pseudo-balls of \(\mathbb{B}.\) We next have the following variable generalisation of the variable Muckenhoupt weight classes. This generalisation was given first by Diening and Hasto [11]. **Definition 3.7**.: Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}).\) A weight \(w\) belongs to the variable Muckenhoupt class \(A_{p(\cdot)}\) on \(\mathbb{B}\) if \[[w]_{A_{p(\cdot)}}:=\sup_{B}\frac{1}{\mu_{\alpha}(B)}\|w^{\frac{1}{p(\cdot)}} \chi_{B}\|_{p(\cdot)}\|w^{-\frac{1}{p(\cdot)}}\chi_{B}\|_{p^{\prime}(\cdot)}<\infty \tag{3.2}\] where the sup is taken over all pseudo-balls of \(\mathbb{B}.\) Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}).\) The following proposition is similar to Proposition 3.4. **Proposition 3.8**.: _The following two assertions are equivalent._ 1. \(w\in A_{p(\cdot)};\)__ 2. \(w^{\prime}\in A_{p^{\prime}(\cdot)}.\)__ We record the following properties of \(A_{p(\cdot)}\) and \(B_{p(\cdot)}.\) **Proposition 3.9**.: 1. _The inclusion_ \(A_{p(\cdot)}\subset B_{p(\cdot)}\) _holds with_ \([w]_{B_{p(\cdot)}}\leq[w]_{A_{p(\cdot)}}.\)__ 2. \([w]_{B_{p(\cdot)}}\geq\frac{1}{2}\) _and_ \([w]_{A_{p(\cdot)}}\geq\frac{1}{2}.\)__ Proof.: 1. This follows directly from the definitions of \(A_{p(\cdot)}\) and \(B_{p(\cdot)}.\) 2. We first give the proof for \(B_{p(\cdot)}.\) Let \(B\in\mathcal{B}.\) From the Holder inequality and the definition of \(B_{p(\cdot)},\) we have \[1=\frac{1}{\mu_{\alpha}(B)}\int_{B}w^{\frac{1}{p(\cdot)}}w^{-\frac{1}{p(\cdot )}}d\mu_{\alpha}\leq\frac{2}{\mu_{\alpha}(B)}\|w^{\frac{1}{p(\cdot)}}\chi_{B} \|_{p(\cdot)}\|w^{-\frac{1}{p(\cdot)}}\chi_{B}\|_{p^{\prime}(\cdot)}\leq 2[w]_{B_{ p(\cdot)}}. \tag{3.3}\] The proof for \(A_{p(\cdot)}\) then follows from assertion 1. In [6], Cruz-Uribe and Cummings proved the following fundamental result for the maximal Hardy-Littlewood function. This variable theorem generalises a well-known theorem of Muckenhoupt [13] in the Euclidean space \(\mathbb{R}^{n}\). For spaces of homogeneous type, the analogous theorem for constant exponents was proved later by A. P. Calderon [3]. **Theorem 3.10**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\). The following two assertions are equivalent._ 1. _There exists a positive constant_ \(C\) _such that for all_ \(f\in L^{p(\cdot)}(w),\) _we have_ \[\|M_{\alpha}f\|_{p(\cdot),w}\leq C\|f\|_{p(\cdot),w}.\] 2. 
\(w\in A_{p(\cdot)}.\)__ In fact, these authors [6] proved their result in the general setting of spaces of homogeneous type. There, in addition to the condition \(p(\cdot)\in\mathcal{P}^{\log}(\mathbb{B}),\) they have a condition at infinity which has the following expression on the unit ball \(\mathbb{B}\): there are two constants \(c\) and \(p_{\infty}\) such that \[|p(z)-p_{\infty}|\leq\frac{c}{\ln(e+|z|)}\] for every \(z\in\mathbb{B}.\) It is easy to show that this extra condition is satisfied on \(\mathbb{B}.\) Since \(m_{\alpha}f\leq M_{\alpha}f,\) we deduce the following corollary. **Corollary 3.11**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) and \(w\in A_{p(\cdot)}.\) For the same constant \(C\) as in Theorem 3.10, we have_ \[\|m_{\alpha}f\|_{p(\cdot),w}\leq C\|f\|_{p(\cdot),w}\] _for all \(f\in L^{p(\cdot)}(w).\)_ Necessity of the conditions \(w^{\frac{1}{p(\cdot)}}\in L^{p(\cdot)}(d\mu_{\alpha})\) and \(w^{-\frac{1}{p(\cdot)}}\in L^{p^{\prime}(\cdot)}(d\mu_{\alpha})\) in Theorem 1.6 **Proposition 4.1**.: _Let \(w\) be a weight and let \(p(\cdot)\in\mathcal{P}_{-}(\mathbb{B}).\) If the operator \(P_{\alpha}\) is well defined on \(L^{p(\cdot)}(w),\) then \(w^{-\frac{1}{p(\cdot)}}\in L^{p^{\prime}(\cdot)}(d\mu_{\alpha})\)._ Proof.: If the operator \(P_{\alpha}\) is well defined on \(L^{p(\cdot)}(w),\) then for every \(f\in L^{p(\cdot)}(w),\) we have \[P_{\alpha}^{+}|f|(z)=\int_{\mathbb{B}}\frac{|f(\zeta)|}{|1-z\cdot\overline{ \zeta}|^{n+\alpha}}d\mu_{\alpha}(\zeta)<\infty\quad\text{for all}\quad\ z\in \mathbb{B}.\] By a contradiction argument, suppose that \(w^{-\frac{1}{p(\cdot)}}\) does not belong to \(L^{p^{\prime}(\cdot)}(d\mu_{\alpha}).\) Then by Proposition 3.1, there exists a non-negative \(g\in L^{p(\cdot)}(d\mu_{\alpha})\) such that \[\int_{\mathbb{B}}g(\zeta)w(\zeta)^{-\frac{1}{p(\zeta)}}d\mu_{\alpha}(\zeta)=\infty.\] Let \(f=gw^{-\frac{1}{p(\cdot)}}.\) We have \(f\in L^{p(\cdot)}(w)\) but \(f\) does not belong to \(L^{1}(\mathbb{B});\) and as \(|1-\overline{\zeta}z|\leq 2\) then \[P_{\alpha}^{+}|f|(z)=\int_{\mathbb{B}}\frac{|f(\zeta)|}{|1-\overline{\zeta} \cdot z|^{n+\alpha}}d\mu_{\alpha}(\zeta)\geq\frac{1}{2^{n+\alpha}}\int_{ \mathbb{B}}|f(\zeta)|d\mu_{\alpha}(\zeta)=\infty.\] This contradict the fact that \(P_{\alpha}^{+}|f|(z)<\infty\) and consequently we have the result. **Proposition 4.2**.: _Let \(w\) be a weight and \(p(\cdot)\in\mathcal{P}_{-}(\mathbb{B}).\) If the operator \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(wd\mu)\), then \(w^{\frac{1}{p(\cdot)}}\in L^{p(\cdot)}(d\mu_{\alpha})\)._ Proof.: Let \(0<r<1\) and define the function \(f(z)=(1-|z|^{2})^{1-\alpha}\chi_{B(0,r)}(z)\) on \(\mathbb{B}.\) We have \[P_{\alpha}f(z) =\int_{\mathbb{B}}\frac{f(\zeta)}{\left(1-\langle z,\zeta\rangle \right)^{n+\alpha}}d\mu_{\alpha}(\zeta)\] \[=\int_{B(0,r)}\frac{1}{\left(1-\langle z,\zeta\rangle\right)^{n+ \alpha}}d\mu(\zeta)\] \[=\overline{\int_{B(0,r)}\frac{1}{\left(1-\langle\zeta,z\rangle \right)^{n+\alpha}}d\mu(\zeta)}.\] Since the function \(\zeta\longmapsto\frac{1}{\left(1-\langle\zeta,z\rangle\right)^{n+\alpha}}\) is analytic on \(\mathbb{B}\) and \(B(0,r)\) is the Euclidean ball centred at \(0\) and of radius \(r,\) it follows from the mean value property that \(P_{\alpha}f(z)\equiv C_{r,n}\) and so \[|C_{r,n}|\|w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}=\|P_{\alpha}(f)\|_{p(\cdot),w}. 
\tag{4.1}\] In addition, \[\rho_{p(\cdot),w}(f)=\int_{B(0,r)}w(z)(1-|z|^{2})^{(\alpha-1)(1-p(z))}d\mu(z).\] On the one hand, if \(\alpha\leq 1\), we have \((1-|z|^{2})^{(\alpha-1)(1-p(z))}\leq 1\) because \((\alpha-1)(1-p(z))\geq 0\) and \(1-|z|^{2}\leq 1\). Consequently \[\rho_{p(\cdot),w}(f)\leq\int_{B(0,r)}w(z)d\mu(z)<\infty\] because \(w\) is locally integrable. On the other hand, if \(\alpha>1\) we have \((1-|z|^{2})^{(\alpha-1)(1-p(z))}\leq(1-|z|^{2})^{(\alpha-1)(1-p_{+})}.\) So \[\rho_{p(\cdot),w}(f)\leq\sup_{z\in B(0,r)}(1-|z|^{2})^{(\alpha-1)(1-p_{+})}\int_{B(0,r)}w(z)d\mu(z)<\infty\] because \(w\) is locally integrable and \(\sup_{z\in B(0,r)}(1-|z|^{2})^{(\alpha-1)(1-p_{+})}=(1-r^{2})^{(\alpha-1)(1-p_{+})}\). Thus, since \(\rho_{p(\cdot),w}(f)<\infty\) in both cases, by Lemma 3.2 we obtain \(\|f\|_{p(\cdot),w}<\infty\) and as \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(w)\), we deduce from (4.1) that there exists a positive constant \(c_{r,\alpha,n}\) such that \[\|w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\leq c_{r,\alpha,n}\|f\|_{p(\cdot),w}<\infty.\] Hence we have the result. In what follows, we will need to compute the \(p(\cdot)\)-norm of some functions, but this is not as easy as in the constant exponent case. To deal with that, in the following section we introduce new classes of weights which coincide with the class \(B_{p(\cdot)}\). ## 5. The weight classes \(B^{+}_{p(\cdot)}\) and \(B^{++}_{p(\cdot)}\) **Definition 5.1**.: Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B})\) and let \(w\) be a weight. We say that \(w\) is in the \(B^{+}_{p(\cdot)}\) class if \[[w]_{B^{+}_{p(\cdot)}}:=\sup_{B\in\mathcal{B}}\frac{1}{\mu_{\alpha}(B)^{p_{B}}}\|w\chi_{B}\|_{1}\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot)}{p(\cdot)}}<\infty, \tag{5.1}\] where \[p_{B}=\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}\frac{1}{p(x)}d\mu_{\alpha}(x)\right)^{-1}.\] This class coincides with the \(B_{p}\) class when \(p(\cdot)=p\) (\(p\) constant). We also adopt the following notation: \[\langle p\rangle_{B}=\frac{1}{\mu_{\alpha}(B)}\int_{B}p(x)d\mu_{\alpha}(x). \tag{5.2}\] **Remark 5.2**.: Let \(p(\cdot)\in\mathcal{P}^{\log}(\mathbb{B})\) and let \(B\) be a pseudo-ball of \(\mathbb{B}\). As \(p_{-}(B)\leq p_{B},\langle p\rangle_{B}\leq p_{+}(B)\), it follows from Lemma 2.10 and Corollary 2.11 that \[\mu_{\alpha}(B)^{p_{-}(B)}\simeq\mu_{\alpha}(B)^{p_{B}}\simeq\mu_{\alpha}(B)^{\langle p\rangle_{B}}\simeq\mu_{\alpha}(B)^{p_{+}(B)}.\] **Lemma 5.3**.: _[_10_, Theorem 4.5.7]_ _Let \(p(\cdot)\in\mathcal{P}^{\log}(\mathbb{B})\) be such that \(p_{-}>0.\) Let \(B\) be a pseudo-ball of \(\mathbb{B}\). Then_ \[\|\chi_{B}\|_{p(\cdot)}\simeq\mu_{\alpha}(B)^{\frac{1}{p_{B}}}.\] Proof.: From Lemma 3.2 we have \[\min\left(\mu_{\alpha}(B)^{\frac{1}{p_{-}(B)}},\mu_{\alpha}(B)^{\frac{1}{p_{+}(B)}}\right)\leq\|\chi_{B}\|_{p(\cdot)}\leq\max\left(\mu_{\alpha}(B)^{\frac{1}{p_{-}(B)}},\mu_{\alpha}(B)^{\frac{1}{p_{+}(B)}}\right).\] Next, from Remark 5.2, we have \(\mu_{\alpha}(B)^{\frac{1}{p_{-}(B)}}\simeq\mu_{\alpha}(B)^{\frac{1}{p_{+}(B)}}\simeq\mu_{\alpha}(B)^{\frac{1}{p_{B}}}\) because \(p(\cdot)\) is bounded away from zero. The conclusion follows.
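To make Lemma 5.3 concrete, the following toy computation (Python/NumPy, not from the paper; the helper name is ad hoc) replaces \(\mu_{\alpha}\) on pseudo-balls by Lebesgue measure on subintervals of \((0,1)\) and assumes the usual Luxemburg definition \(\|f\|_{p(\cdot)}=\inf\{\lambda>0:\rho_{p(\cdot)}(f/\lambda)\leq 1\}\) of the norm, as in [9, 10]. It computes \(\|\chi_{B}\|_{p(\cdot)}\) by bisection and compares it with \(|B|^{1/p_{B}}\), where \(p_{B}\) is the harmonic average of the exponent over \(B\); the two printed columns agree up to the constants hidden in \(\simeq\).

```python
import numpy as np

def luxemburg_norm_indicator(dx, p_on_B):
    # Luxemburg norm of chi_B: smallest lam with  sum (1/lam)^{p(x)} dx <= 1
    def modular(lam):
        return np.sum((1.0 / lam) ** p_on_B) * dx
    lo, hi = 1e-8, 1e8
    for _ in range(100):                      # geometric bisection
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if modular(mid) > 1 else (lo, mid)
    return hi

N = 200_000
x = (np.arange(N) + 0.5) / N                  # midpoints of a uniform grid on (0,1)
dx = 1.0 / N
p = 2.0 + 0.8 * np.sin(2 * np.pi * x)         # a smooth (hence log-Holder) exponent, 1 < p_- <= p_+ < oo

for a, b in [(0.0, 0.01), (0.30, 0.35), (0.0, 1.0)]:
    mask = (x > a) & (x < b)
    meas_B = mask.sum() * dx
    p_B = meas_B / np.sum(dx / p[mask])       # harmonic average of p over B
    lhs = luxemburg_norm_indicator(dx, p[mask])
    print(f"B = ({a}, {b}):  ||chi_B||_p(.) = {lhs:.5f}   |B|^(1/p_B) = {meas_B ** (1.0 / p_B):.5f}")
```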
**Lemma 5.4**.: _Let \(p(\cdot)\in\mathcal{P}^{log}_{\pm}(\mathbb{B})\) and let \(q\) be a constant exponent greater than \(p_{+}+1.\) There exists a positive constant \(C\) depending only of the log-Holder constant of \(p(\cdot)\) such that_ \[[w]_{B_{q}}\leq C[w]_{B^{+}_{p(\cdot)}}.\] Proof.: As \(p(\cdot)<p_{+}+1<q,\) we have \(\frac{q^{\prime}}{q}<\frac{p^{\prime}(\cdot)}{p(\cdot)}.\) Hence from the Holder inequality (assertion 2 of Proposition 2.7), we obtain \[\|w^{-1}\chi_{B}\|_{\frac{q^{\prime}}{q}}\leq K\|\chi_{B}\|_{\beta(\cdot)}\|w^ {-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot)}{p(\cdot)}},\] where \[\frac{1}{\beta(\cdot)}=\frac{q}{q^{\prime}}-\frac{p(\cdot)}{p^{\prime}(\cdot) }=q-p(\cdot)>1.\] It is easy to check that \(\beta(\cdot)\) is a member of \(\mathcal{P}^{log}(\mathbb{B})\) such that \(\beta_{-}>0.\) Consequently, from Lemma 5.3 and Remark 5.2, we have \[\|\chi_{B}\|_{\beta(\cdot)}\simeq\mu_{\alpha}(B)^{\frac{1}{\beta_{B}}}\simeq \mu_{\alpha}(B)^{q-\langle p\rangle_{B}}\simeq\mu_{\alpha}(B)^{q-p_{B}}.\] Thus there exists a positive constant \(C\) such that \[\begin{array}{ll}\frac{1}{\mu_{\alpha}(B)^{q}}\|w\chi_{B}\|_{1}\|w^{-1}\chi _{B}\|_{\frac{q^{\prime}}{q}}&\leq C\frac{1}{\mu_{\alpha}(B)^{q}}\|w\chi_{B}\| _{1}\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot)}{p(\cdot)}}\mu_{\alpha}(B)^{q -p_{B}}\\ &=C\frac{1}{\mu_{\alpha}(B)^{p_{B}}}\|w\chi_{B}\|_{1}\|w^{-1}\chi_{B}\|_{\frac {p^{\prime}(\cdot)}{p(\cdot)}}\leq C[w]_{B^{+}_{p(\cdot)}}.\end{array}\] for all pseudo-balls \(B\in\mathcal{B}.\) The conclusion follows. We recall the following definition. **Definition 5.5**.: The weight class \(B_{\infty}\) is defined by \(B_{\infty}=\bigcup\limits_{q\in(1,\infty)}B_{q}.\) **Remark 5.6**.: It follows from Lemma 5.4 that if \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B}),\) we have \[B^{+}_{p(\cdot)}\subset B_{\infty}.\] In the rest of this article, to simplify the notation, we denote \(w(B)=\|w\chi_{B}\|_{1}.\) We next define another class of weights \(\Lambda,\) which contains the class \(B_{\infty}.\) For a reference, cf. e.g. [1]. **Definition 5.7**.: We call \(\Lambda\) the class consisting of those integrable weights \(w\) satisfying the following property. There exist two positive constants \(C\) and \(\delta\) such that the following implication holds. \[\frac{\mu_{\alpha}(E)}{\mu_{\alpha}(B)}\leq C\left(\frac{w(E)}{w(B)}\right)^{\delta} \tag{5.3}\] whenever \(B\in\mathcal{B}\) and \(E\) is a measurable subset of \(B.\) **Remark 5.8**.: 1. For \(w\in\Lambda,\) the weighted measure \(wd\mu_{\alpha}\) is doubling in the following sense. There exists a positive constant \(C\) such that for every pseudo-ball \(B\) of \(\mathbb{B}\) whose pseudo-ball \(\widetilde{B}\) of same centre and of double radius is a member of \(\mathcal{B},\) we have \[w\left(\widetilde{B}\right)\leq Cw(B).\] This result easily follows from the definition of \(\Lambda.\) 2. We recall that \(w(\mathbb{B})>0.\) For \(w\in\Lambda,\) this implies that \(w(B)>0\) for every pseudo-ball \(B\) of \(\mathbb{B}.\) Indeed, take \(\mathbb{B}\) for \(B\) and \(B\) for \(E\) in (5.3). **Lemma 5.9**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}(\mathbb{B})\) be such that \(p_{-}>0.\) Let \(w\in\Lambda\). 
Then_ \[\|\chi_{B}\|_{p(\cdot),w}\simeq w(B)^{\frac{1}{p_{+}(B)}}\simeq w(B)^{\frac{1} {p_{-}(B)}}\simeq w(B)^{\frac{1}{p_{(x)}}}\simeq w(B)^{\frac{1}{p_{B}}}\] _for all pseudo-balls \(B\) of \(\mathbb{B}\) such that \(w(B)>0\) and for all \(x\in B.\)_ Proof.: Take \(\mathbb{B}\) for \(B\) and \(B\) for \(E\) in Definition 5.7. We have \[\left(C^{-1}\frac{\mu_{\alpha}(B)}{\mu_{\alpha}(\mathbb{B})}\right)^{\frac{1} {\delta}}w(\mathbb{B})\leq w(B)\leq w(\mathbb{B}).\] So \[w(\mathbb{B})^{p-(B)-p_{+}(B)}\leq w(B)^{p-(B)-p_{+}(B)}\lesssim\mu_{\alpha}(B )^{\frac{1}{\delta}(p-(B)-p_{+}(B))}w(\mathbb{B})^{p-(B)-p_{+}(B)}.\] It is easy to check that \[\min(1,w(\mathbb{B})^{p_{-}-p_{+}})\leq w(\mathbb{B})^{p-(B)-p_{+}(B)}\leq \max(1,w(\mathbb{B})^{p_{-}-p_{+}}).\] Next, combining with Lemma 2.10 gives \[\min(1,w(\mathbb{B})^{p_{-}-p_{+}})\leq w(B)^{p-(B)-p_{+}(B)}\lesssim C_{ \delta}\max(1,w(\mathbb{B})^{p-p_{+}}).\] We have thus proved the estimates \(w(B)^{\frac{1}{p_{+}(B)}}\simeq w(B)^{\frac{1}{p_{-}(B)}}\simeq w(B)^{\frac{1} {p(x)}}\simeq w(B)^{\frac{1}{p_{B}}}\) for all \(x\in B.\) On the other hand, from Lemma 3.2 we have \[\min\left(w(B)^{\frac{1}{p_{-}(B)}},w(B)^{\frac{1}{p_{+}(B)}}\right)\leq\| \chi_{B}\|_{p(\cdot),w}\leq\max\left(w(B)^{\frac{1}{p_{-}(B)}},w(B)^{\frac{1} {p_{+}(B)}}\right).\] Hence \[\|\chi_{B}\|_{p(\cdot),w}\simeq w(B)^{\frac{1}{p_{-}(B)}}\simeq w(B)^{\frac{1 }{p_{+}(B)}}.\] We recall again the notation \(w^{\prime}(y)=w(y)^{1-p^{\prime}(y)}.\) **Lemma 5.10**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B})\) and \(w\in B^{+}_{p(\cdot)}.\) Then_ \[\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot)}{p(\cdot)}}\simeq\left(\rho_{\frac{p ^{\prime}(\cdot)}{p(\cdot)}}(w^{-1}\chi_{B})\right)^{p_{B}-1}=w^{\prime}(B)^{p _{B}-1}. \tag{5.4}\] Proof.: Let \(w\in B^{+}_{p(\cdot)}\) and \(B\in\mathcal{B}\). By definition we have \[\frac{1}{\mu_{\alpha}(B)^{p_{B}}}w(B)\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}( \cdot)}{p(\cdot)}}\leq[w]_{B^{+}_{p(\cdot)}}. \tag{5.5}\] On the other hand, by the Holder inequality (Proposition 2.7, assertion 1) and Lemma 5.9, we have \[\mu_{\alpha}(B)=\int_{B}w(y)^{\frac{1}{p(y)}}w(y)^{-\frac{1}{p(y)}}d\mu_{ \alpha}(y)\leq 2\|w^{\frac{1}{p(\cdot)}}\chi_{B}\|_{p(\cdot)}\|w^{-\frac{1}{p( \cdot)}}\chi_{B}\|_{p^{\prime}(\cdot)}\simeq w(B)^{\frac{1}{p_{B}}}\|w^{- \frac{1}{p(\cdot)}}\chi_{B}\|_{p^{\prime}(\cdot)}.\] Hence \[\left\|\frac{w(B)^{\frac{1}{p_{B}}}}{\mu_{\alpha}(B)}w^{-\frac{1}{p(\cdot)}} \chi_{B}\right\|_{p^{\prime}(\cdot)}\gtrsim 1. \tag{5.6}\] Consequently, from (5.6), Lemma 3.2, Lemma 5.9 and Corollary 2.11, we have: \[1\lesssim\rho_{p^{\prime}(\cdot)}\left(\frac{w(B)^{\frac{1}{p_{B }}}}{\mu_{\alpha}(B)}w^{-\frac{1}{p(\cdot)}}\chi_{B}\right) =\int_{B}\left(\frac{w(B)^{\frac{1}{p_{B}}}}{\mu_{\alpha}(B)} \right)^{p^{\prime}(y)}w(y)^{-\frac{p^{\prime}(y)}{p(y)}}d\mu_{\alpha}(y)\] \[\simeq\int_{B}\left(\frac{w(B)}{\mu_{\alpha}(B)^{p_{B}}}\right)^ {\frac{p^{\prime}(y)}{p(y)}}w(y)^{-\frac{p^{\prime}(y)}{p(y)}}d\mu_{\alpha}(y)\] \[=\rho_{\frac{p^{\prime}(\cdot)}{p(\cdot)}}\left(\frac{w(B)}{\mu_ {\alpha}(B)^{p_{B}}}w^{-1}\chi_{B}\right).\] So by Lemma 3.2, we have \[\left\|\frac{w(B)}{\mu_{\alpha}(B)^{p_{B}}}w^{-1}\chi_{B}\right\|_{\frac{p^{ \prime}(\cdot)}{p(\cdot)}}\gtrsim 1. \tag{5.7}\] Thus from (5.5) and (5.7), we have \[\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot)}{p(\cdot)}}\simeq\frac{\mu_{\alpha }(B)^{p_{B}}}{w(B)}. 
\tag{5.8}\] Furthermore, from Remark 5.2 and as \(p(\cdot)\in\mathcal{P}_{\pm}\), we have the equivalences \[\mu_{\alpha}(B)^{p_{+}(B)}\simeq\mu_{\alpha}(B)^{p_{-}(B)} \iff\mu_{\alpha}(B)^{\frac{1}{p_{-}(B)-1}} \simeq\mu_{\alpha}(B)^{\frac{1}{p_{+}(B)-1}}\] \[\iff\mu_{\alpha}(B)^{\frac{p_{B}}{p_{-}(B)-1}} \simeq\mu_{\alpha}(B)^{\frac{p_{B}}{p_{+}(B)-1}},\] from which we deduce that \[\mu_{\alpha}(B)^{\frac{p_{B}}{p_{+}(B)-1}}\simeq\mu_{\alpha}(B)^{\frac{p_{B}}{ p_{-}(B)-1}}\simeq\mu_{\alpha}(B)^{\frac{p_{B}}{p_{B}-1}}\] since \(p_{-}(B)\leq p_{B}\leq p_{+}(B).\) Similarly, by Lemma 5.9, we deduce from the estimate \[w(B)^{p_{B}}\simeq w(B)^{p_{+}(B)}\simeq w(B)^{p_{-}(B)}\] that \[w(B)^{\frac{1}{p_{-}(B)-1}}\simeq w(B)^{\frac{1}{p_{+}(B)-1}}\simeq w(B)^{\frac {1}{p_{B}-1}}.\] So from (5.8) we have \[\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot)}{p^{\prime}(\cdot)}}^{\frac{1}{p^{ \prime}(\cdot)}}\simeq\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot)}{p^{\prime}( \cdot)}}^{\frac{1}{p^{\prime}(\cdot)}}\simeq\|w^{-1}\chi_{B}\|_{\frac{p^{ \prime}(\cdot)}{p^{\prime}(\cdot)}}^{\frac{1}{p^{\prime}(\cdot)}}.\] Since \(\rho_{\frac{p^{\prime}(\cdot)}{p^{\prime}(\cdot)}}\left(w^{-1}\chi_{B}\right)= w^{\prime}(B),\) combining with Lemma 3.2 where \(\frac{p^{\prime}(\cdot)}{p(\cdot)}\) replaces \(p(\cdot),\) we obtain the required result. **Proposition 5.11**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) and \(w\in B_{p(\cdot)}^{+}.\) Then \(w^{\prime}\in B_{p^{\prime}(\cdot)}^{+}\) and_ \[\left(\frac{1}{\mu_{\alpha}(B)^{p_{B}^{p}}}\|w^{\prime}\chi_{B}\|_{1}\|w^{ \prime-1}\chi_{B}\|_{\frac{p(\cdot)}{p^{\prime}(\cdot)}}\right)^{p_{B}-1} \simeq\frac{1}{\mu_{\alpha}(B)^{p_{B}}}\|w\chi_{B}\|_{1}\|w^{-1}\chi_{B}\|_{ \frac{p^{\prime}(\cdot)}{p(\cdot)}}\simeq\frac{w(B)}{\mu_{\alpha}(B)}\left( \frac{w^{\prime}(B)}{\mu_{\alpha}(B)}\right)^{p_{B}-1}\] _for all pseudo-balls \(B\in\mathcal{B}.\)_ Proof.: We recall that \(w\in\Lambda\) by Remark 5.6. Hence, from Lemma 5.9 used with \(\frac{p(\cdot)}{p^{\prime}(\cdot)}\) replacing \(p(\cdot),\) equation (5.4) and the property \(w\in B_{p(\cdot)}^{+},\) we obtain \[\begin{split}\frac{1}{\mu_{\alpha}(B)^{p_{B}^{\prime}}}w^{\prime }(B)\|w^{\prime-1}\chi_{B}\|_{\frac{p^{(\cdot)}}{p^{\prime}(\cdot)}}& =\frac{1}{\mu_{\alpha}(B)^{p_{B}^{\prime}}}w^{\prime}(B)\|\chi_{B} \|_{\frac{p^{(\cdot)}}{p^{\prime}(\cdot)},w}\\ &\simeq\frac{1}{\mu_{\alpha}(B)^{p_{B}^{\prime}}}w^{\prime}(B)w(B )^{\frac{1}{p_{B}-1}}\\ &=\left(\frac{w(B)}{\mu_{\alpha}(B)^{p_{B}}}w^{\prime}(B)^{p_{B }-1}\right)^{\frac{1}{p_{B}-1}}\\ &\simeq\left(\frac{w(B)}{\mu_{\alpha}(B)^{p_{B}}}\|w^{-1}\chi_{B }\|_{\frac{p^{\prime}(\cdot)}{p(\cdot)}}\right)^{\frac{1}{p_{B}-1}}\\ &\leq[w]_{B_{p(\cdot)}^{+}}^{\frac{1}{p_{B}-1}}.\end{split} \tag{5.9}\] Hence \(w^{\prime}\in B_{p^{\prime}(\cdot)}^{+}\) and from (5.9) and (5.10), we deduce that \[\frac{w(B)}{\mu_{\alpha}(B)^{p_{B}}}\|w^{-1}\chi_{B}\|_{\frac{p^{\prime}(\cdot )}{p(\cdot)}}\simeq\frac{w(B)}{\mu_{\alpha}(B)}\left(\frac{w^{\prime}(B)}{\mu_ {\alpha}(B)}\right)^{p_{B}-1}.\] **Definition 5.12**.: Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B})\) and let \(w\) be a weight. We say that \(w\) is in the \(B_{p(\cdot)}^{++}\) class if \[[w]_{B_{p(\cdot)}^{++}}:=\sup_{B\in\mathcal{B}}\frac{w(B)}{\mu_{\alpha}(B)} \left(\frac{w^{\prime}(B)}{\mu_{\alpha}(B)}\right)^{p_{B}-1}<\infty. \tag{5.11}\] It is easy to check the following Proposition. **Proposition 5.13**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}(\mathbb{B}).\) The following two assertions are equivalent._ 1. 
\(w\in B^{++}_{p(\cdot)};\)__ 2. \(w^{\prime}\in B^{++}_{p^{\prime}(\cdot)}.\)__ **Lemma 5.14**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B})\), and \(w\in B^{++}_{p(\cdot)}\). Then for all pseudo-balls \(B\) of \(\mathbb{B}\),_ \[\|\chi_{B}\|_{p(\cdot),w}\simeq w(B)^{\frac{1}{p_{+}(B)}}\simeq w(B)^{\frac{1} {p_{-}(B)}}\simeq w(B)^{\frac{1}{p_{B}}}.\] Proof.: Since \(w\in B^{++}_{p(\cdot)}\), we have \(w^{\prime}(\mathbb{B})<\infty\) and it follows from Lemma 3.2 that \(\|\chi_{\mathbb{B}}\|_{p^{\prime}(\cdot),w^{\prime}}<\infty\). Hence, by the Holder inequality, we obtain \[\mu_{\alpha}(B) \leq 2\|\chi_{B}\|_{p(\cdot),w}\|\chi_{B}\|_{p^{\prime}(\cdot),w^ {\prime}}\] \[\leq 2\|\chi_{B}\|_{p(\cdot),w}\|\chi_{\mathbb{B}}\|_{p^{\prime}( \cdot),w^{\prime}}.\] Therefore, from Lemma 2.10, we get \[\|\chi_{B}\|^{p_{-}(B)-p_{+}(B)}_{p(\cdot),w} \lesssim\mu_{\alpha}(B)^{p_{-}(B)-p_{+}(B)}\|\chi_{\mathbb{B}}\| ^{p_{+}^{+}(B)-p_{-}(B)}_{p^{\prime}(\cdot),w^{\prime}} \tag{5.12}\] \[\lesssim\max\left(1,\|\chi_{\mathbb{B}}\|^{p_{+}-p_{-}}_{p^{ \prime}(\cdot),w^{\prime}}\right).\] On the other hand, using again \(w\in B^{++}_{p(\cdot)}\), we have \(w(\mathbb{B})<\infty\) and hence \(\|\chi_{\mathbb{B}}\|_{p(\cdot),w}<\infty\). Then \[\|\chi_{B}\|^{p_{+}(B)-p_{-}(B)}_{p(\cdot),w}\leq\|\chi_{\mathbb{B}}\|^{p_{+} ^{+}(B)-p_{-}(B)}_{p(\cdot),w}\leq\max\left(1,\|\chi_{\mathbb{B}}\|^{p_{+}-p_{ -}}_{p(\cdot),w}\right). \tag{5.13}\] Thus from (5.12) and (5.13), we have \[\|\chi_{B}\|^{p_{+}(B)}_{p(\cdot),w}\simeq\|\chi_{B}\|^{p_{-}(B)}_{p(\cdot),w},\] and from Lemma 3.2 we conclude the proof. **Lemma 5.15**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B}).\) Then \(B^{++}_{p(\cdot)}\subset\Lambda.\)_ Proof.: Let \(w\in B^{++}_{p(\cdot)}.\) Let \(B\in\mathcal{B}\) and let \(E\) be a measurable subset of \(B.\) By the Holder inequality and from Lemma 3.2, we have \[\mu_{\alpha}(E) \leq 2\|\chi_{E}\|_{p(\cdot),w}\|\chi_{E}\|_{p^{\prime}(\cdot),w^ {\prime}}\] \[\leq 2\|\chi_{E}\|_{p(\cdot),w}\|\chi_{B}\|_{p^{\prime}(\cdot),w^ {\prime}}\] \[\leq 2\max\left(w(E)^{\frac{1}{p_{+}(B)}},w(E)^{\frac{1}{p_{-}(B)} }\right)\|\chi_{B}\|_{p^{\prime}(\cdot),w^{\prime}}.\] However, since \(w^{\prime}\in B^{++}_{p^{\prime}(\cdot)}\) by Proposition 5.13, from Lemma 5.14 we have \[\|\chi_{B}\|_{p^{\prime}(\cdot),w^{\prime}}\simeq w^{\prime}(B)^{\frac{1}{p^{ \prime}_{+}(B)}}\simeq w^{\prime}(B)^{\frac{1}{p^{\prime}_{-}(B)}}\simeq w^{ \prime}(B)^{\frac{1}{p_{B}}}.\] Hence using \(w\in B^{++}_{p(\cdot)}\), we deduce that, \[\mu_{\alpha}(E) \lesssim\max\left(w(E)^{\frac{1}{p_{+}(B)}}w^{\prime}(B)^{\frac{1}{ p_{+}^{\prime}(B)}},w(E)^{\frac{1}{p_{-}(B)}}w^{\prime}(B)^{\frac{1}{p_{-}^{ \prime}(B)}}\right)\] \[\leq\max\left([w]_{B^{(\cdot)}_{(\cdot)}}^{\frac{1}{p_{+}(B)}},[w ]_{B^{++}_{p(\cdot)}}^{\frac{1}{p_{-}(B)}}\right)\max\left(\left(\frac{w(E)}{w (B)}\right)^{\frac{1}{p_{+}(B)}},\left(\frac{w(E)}{w(B)}\right)^{\frac{1}{p_{- }(B)}}\right)\mu_{\alpha}(B)\] \[\leq\max\left([w]_{B^{(\cdot)}_{(\cdot)}}^{\frac{1}{p_{+}}},[w ]_{B^{++}_{p(\cdot)}}^{\frac{1}{p_{-}}}\right)\left(\frac{w(E)}{w(B)}\right)^ {\frac{1}{p_{+}(B)}}\mu_{\alpha}(B)\] \[\leq\max\left([w]_{B^{(\cdot)}_{(\cdot)}}^{\frac{1}{p_{+}}},[w ]_{B^{++}_{p(\cdot)}}^{\frac{1}{p_{-}}}\right)\left(\frac{w(E)}{w(B)}\right)^ {\frac{1}{p_{+}}}\mu_{\alpha}(B).\] Therefore, \[\frac{\mu_{\alpha}(E)}{\mu_{\alpha}(B)}\lesssim\left(\frac{w(E)}{w(B)}\right)^ {\frac{1}{p_{+}}}.\] **Proposition 5.16**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B}).\) Then 
\(B^{+}_{p(\cdot)}=B^{++}_{p(\cdot)}.\)_ Proof.: For the inclusion \(B^{+}_{p(\cdot)}\subset B^{++}_{p(\cdot)}\), apply Proposition 5.11. For the reverse inclusion, apply Lemma 5.15 and Lemma 5.9. **Remark 5.17**.: From Proposition 5.11, Remark 5.6 and Lemma 5.9, we have the inclusion \(B^{+}_{p(\cdot)}\subset B_{p(\cdot)}\) for \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B}).\) Now we prove the reverse inclusion. In this direction, we first state the following result. **Lemma 5.18**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B})\) and \(w\in B_{p(\cdot)}\). Then there exists a constant \(C>1\)_ \[\frac{1}{C}\leq\|\chi_{B}\|_{p(\cdot),w}^{p_{-}(B)-p_{+}(B)}\leq C\] _for all \(B\in\mathcal{B}.\)_ Proof.: By the Holder inequality, we have \[\mu_{\alpha}(B)\leq 2\|\chi_{B}\|_{p(\cdot),w}\|\chi_{B}\|_{p^{\prime}(\cdot),w^{ \prime}}\] and as \(w\in B_{p(\cdot)}\), from Lemma 2.10 and according to the estimate \(\|\chi_{\mathbb{B}}\|_{p^{\prime}(\cdot),w^{\prime}}<\infty\) given by Remark 3.3, we have \[\|\chi_{B}\|_{p(\cdot),w}^{p_{-}(B)-p_{+}(B)} \lesssim\mu_{\alpha}(B)^{p_{-}(B)-p_{+}(B)}\|\chi_{B}\|_{p^{\prime }(\cdot),w^{\prime}}^{p_{+}(B)-p_{-}(B)}\] \[\simeq\|\chi_{\mathbb{B}}\|_{p^{\prime}(\cdot),w^{\prime}}^{p_{+} (B)-p_{-}(B)}\] \[\lesssim\max\left(1,\|\chi_{\mathbb{B}}\|_{p^{\prime}(\cdot),w^{ \prime}}^{p_{+}-p_{-}}\right).\] On the other hand, according to the estimate \(\|\chi_{\mathbb{B}}\|_{p(\cdot),w}<\infty\) given by Remark 3.3, we have \[\|\chi_{B}\|_{p(\cdot),w}^{p_{+}(B)-p_{-}(B)}\leq\max(1,\|\chi_{\mathbb{B}}\|_ {p(\cdot),w}^{p_{+}-p_{-}}).\] From Lemma 3.2 and Lemma 5.18, we deduce the following corollary. **Corollary 5.19**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\), and \(w\in B_{p(\cdot)}\). Then_ \[\|\chi_{B}\|_{p(\cdot),w}\simeq w(B)^{\frac{1}{p_{+}(B)}}\simeq w(B)^{\frac{1}{ p_{-}(B)}}\simeq w(B)^{\frac{1}{p_{B}}}.\] _for all \(B\in\mathcal{B}\),_ We next state the following theorem. **Theorem 5.20**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{log}(\mathbb{B}).\) Then \(B_{p(\cdot)}=B_{p(\cdot)}^{+}=B_{p(\cdot)}^{++}.\)_ Proof.: From Remark 5.17, we have \(B_{p(\cdot)}^{+}\subset B_{p(\cdot)}\). Let \(w\in B_{p(\cdot)}.\) By Proposition 3.4, \(w^{\prime}\in B_{p^{\prime}(\cdot)}.\) It follows from Corollary 5.19 that \[\frac{w(B)}{\mu_{\alpha}(B)}\left(\frac{w^{\prime}(B)}{\mu_{\alpha}(B)} \right)^{p_{B}-1}\simeq\left(\frac{1}{\mu_{\alpha}(B)}\|\chi_{B}\|_{p(\cdot), w}\|\chi_{B}\|_{p^{\prime}(\cdot),w^{\prime}}\right)^{p_{B}}\leq\max\left(1,[w] _{B_{p(\cdot)}}^{p_{+}}\right).\] To end this section, we record with the same proof the following analogous theorem for the variable Muckenhoupt weight classes. **Theorem 5.21**.: _Let \(w\) be a weight and let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\). The following three assertions are equivalent._ 1. \(w\in A_{p(\cdot)}\)_;_ 2. \(\sup\limits_{B}\frac{1}{\mu_{\alpha}(B)^{p_{B}}}\|w\chi_{B}\|_{1}\|w^{-1}\chi _{B}\|_{\frac{p^{\prime}(\cdot)}{p(\cdot)}}<\infty,\) _where the_ \(\sup\) _is taken over all pseudo-balls of_ \(\mathbb{B};\)__ 3. \(\sup\limits_{B}\frac{w(B)}{\mu_{\alpha}(B)}\left(\frac{w^{\prime}(B)}{\mu_{ \alpha}(B)}\right)^{p_{B}-1}<\infty,\) _where the_ \(\sup\) _is taken over all pseudo-balls of_ \(\mathbb{B}.\)__ ## 6. Proof of the necessary condition in Theorem 1.6 The aim of this section is to prove the following result. 
**Proposition 6.1**.: _Let \(w\) be a weight and let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B}).\) If the Bergman projector is bounded on \(L^{p(\cdot)}(wd\mu_{\alpha})\), then \(w\in B_{p(\cdot)}\)._ Proof.: According to Theorem 5.20, it suffices to prove that \(w\in B_{p(\cdot)}^{++},\) i.e. the following estimate holds \[\sup\limits_{B\in\mathcal{B}}\frac{w(B)}{\mu_{\alpha}(B)}\left(\frac{w^{\prime }(B)}{\mu_{\alpha}(B)}\right)^{p_{B}-1}<\infty. \tag{6.1}\] From Proposition 4.1, we have that \(w^{-\frac{1}{p(\cdot)}}\in L^{p(\cdot)}(d\mu_{\alpha})\) and from Proposition 4.2 we have \(w^{\frac{1}{p(\cdot)}}\in L^{p(\cdot)}(d\mu_{\alpha}).\) In particular, \(w(\mathbb{B})<\infty\) and \(w^{\prime}(\mathbb{B})<\infty.\) Thus we just have to show the estimate (6.1) for the pseudo-balls of radius smaller than a positive constant \(R_{0},\) because if the radius of \(B\) is larger, then \(B\) can be identified with \(\mathbb{B}\). We shall use the following lemma. **Lemma 6.2**.: _[_2_]_ _There exist three positive numbers \(R_{0},c\) and \(C_{\alpha}\) such that the following holds. For every pseudo-ball \(B^{1}\in\mathcal{B}\) of radius \(R<R_{0},\) there exists a pseudo-ball \(B^{2}\in\mathcal{B}\) of same radius such that \(d(B^{1},B^{2})=cR\), that satisfies the following property: for every non-negative measurable function \(f\) supported in \(B^{i}\) and for two distinct superscripts \(i,j\in\{1,2\},\) we have_ \[|P_{\alpha}f|\geq C_{\alpha}\chi_{B^{j}}\mu_{\alpha}(B^{i})^{-1}\int_{B^{i}}fd \mu_{\alpha}. \tag{6.2}\] Thus, by taking \(f=\chi_{B^{i}}\) in (6.2) we obtain : \[|P_{\alpha}\chi_{B^{i}}(z)|\geq\chi_{B^{j}}(z)C_{\alpha}\mu_{\alpha}(B^{i})^{- 1}\int_{B^{i}}\chi_{B^{i}}d\mu_{\alpha}\simeq\chi_{B^{j}}(z).\] Using the growth of the norm \(\|\cdot\|_{p(\cdot),w},\) we obtain \[\|P_{\alpha}\chi_{B^{i}}\|_{p(\cdot),w}=\|w^{\frac{1}{p(\cdot)}}P_{\alpha} \chi_{B^{i}}\|_{p(\cdot)}\gtrsim\|\chi_{B^{j}}w^{\frac{1}{p(\cdot)}}\|_{p( \cdot)}\] So using the fact that \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(wd\mu_{\alpha}),\) we obtain: \[\|\chi_{B^{j}}w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\lesssim\|P_{\alpha}\|\|\chi _{B^{i}}\|_{p(\cdot),w}\] We then deduce that \[\|\chi_{B^{1}}w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\simeq\|\chi_{B^{2}}w^{\frac {1}{p(\cdot)}}\|_{p(\cdot)}. \tag{6.3}\] In the rest of the proof, we shall take \(f=w^{\prime}\chi_{B^{1}}.\) We have \(f\in L^{p(\cdot)}(wd\mu_{\alpha})\) since \[\rho_{p(\cdot),w}(f)=\int_{\mathbb{B}}w(z)^{-p^{\prime}(z)}\chi_{B^{1}}(z)w(z )d\mu_{\alpha}(z)=\rho_{p^{\prime}(\cdot)}(w^{-\frac{1}{p(\cdot)}}\chi_{B^{1 }})<\infty\] by Proposition 4.1. However, \(\rho_{p(\cdot),w}(f)=\int_{B^{1}}w^{\prime}d\mu_{\alpha}=w^{\prime}(B^{1}).\) Also, from (6.2) and the previous equality, we have \[\chi_{B^{2}}(z)w^{\prime}(B^{1})\leq C_{\alpha}^{-1}\mu_{\alpha}(B^{1})|P_{ \alpha}f(z)|.\] Moving to the norm \(\|\cdot\|_{p(\cdot),w},\) we obtain \[\|w^{\frac{1}{p(\cdot)}}\chi_{B^{2}}\|_{p(\cdot)}w^{\prime}(B^{1})\leq C_{ \alpha}^{-1}\mu_{\alpha}(B^{1})\|P_{\alpha}f\|_{p(\cdot),w}\] Then using the boundedness of \(P_{\alpha}\) on \(L^{p(\cdot)}(wd\mu_{\alpha}),\) the previous inequality implies \[\|w^{\frac{1}{p(\cdot)}}\chi_{B^{2}}\|_{p(\cdot)}w^{\prime}(B^{1})\leq C_{ \alpha}^{-1}\mu_{\alpha}(B^{1})\|P_{\alpha}\|\|f\|_{p(\cdot),w}\] and combining with (6.3), we obtain the following lemma. 
**Lemma 6.3** (Main Lemma).: _Suppose that \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(wd\mu_{\alpha}).\) Then_ \[\|w^{\frac{1}{p(\cdot)}}\chi_{B}\|_{p(\cdot)}w^{\prime}(B)\leq CC_{\alpha}^{ -1}\mu_{\alpha}(B)\|P_{\alpha}\|\|w^{\prime}\chi_{B}\|_{p(\cdot),w} \tag{6.4}\] _for every pseudo-ball \(B\in\mathcal{B}\) of radius smaller than \(R_{0}.\) The absolute constants \(R_{0},C\) and \(C_{\alpha}\) were respectively defined in Lemma 6.2, (6.3) and (6.2)._ At this level, we need to calculate \(\|w^{\frac{1}{p(\cdot)}}\chi_{B}\|_{p(\cdot)}\) and \(\|\|w^{\prime}\chi_{B}\|_{p(\cdot),w}.\) This calculation is not as obvious as in the case where \(p(\cdot)\) is constant. **Lemma 6.4**.: _Let \(p(\cdot)\in\mathcal{P}(\mathbb{B})\). If \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(w)\), then \(P_{\alpha}\) is bounded on \(L^{p^{\prime}(\cdot)}(w^{\prime})\)._ Proof.: We first recall that the weighted Bergman projector \(P_{\alpha}\) is the orthogonal projector from the (Hilbert-) Lebesgue space \(L^{2}(d\mu_{\alpha})\) to its closed subspace \(L^{2}(d\mu_{\alpha})\cap Hol(\mathbb{B})\) (the standard weighted Bergman space). We call \(\mathcal{C}_{c}(\mathbb{B})\) the space of continuous functions with compact support in \(\mathbb{B}.\) By Proposition 2.6, \(\mathcal{C}_{c}(\mathbb{B})\) is a dense subspace of \(L^{p(\cdot)}(w)\) and \(L^{p^{\prime}(\cdot)}(w^{\prime}).\) From Proposition 3.1 and the boundedness of \(P_{\alpha}\) on \(L^{p(\cdot)}(w)\), for all \(f\in\mathcal{C}_{c}(\mathbb{B})\), we have \[\|P_{\alpha}f\|_{p^{\prime}(\cdot),w^{\prime}} =\sup_{g\in\mathcal{C}_{c}(\mathbb{B}:\|g\|_{p(\cdot),w}=1}\left| \int_{\mathbb{B}}P_{\alpha}f(\zeta)\overline{g(\zeta)}d\mu_{\alpha}(\zeta)\right|\] \[=\sup_{g\in\mathcal{C}_{c}(\mathbb{B}):\|g\|_{p(\cdot),w}=1}\left| \int_{\mathbb{B}}f(\zeta)\overline{P_{\alpha}g(\zeta)}d\mu_{\alpha}(\zeta)\right|\] \[\leq 2\sup_{g\in\mathcal{C}_{c}(\mathbb{B}):\|g\|_{p(\cdot),w}=1} \|f\|_{p^{\prime}(\cdot),w^{\prime}}\|P_{\alpha}g\|_{p(\cdot),w}\] \[\leq 2\,\|P_{\alpha}\|\,\|f\|_{p^{\prime}(\cdot),w^{\prime}}.\] We have used the elementary fact that \(\mathcal{C}_{c}(\mathbb{B})\) is contained in \(L^{2}(d\mu_{\alpha}).\) For the last but one inequality, we used the Holder inequality. **Lemma 6.5**.: _Let \(p(\cdot)\in\mathcal{P}(\mathbb{B})\) and let \(w\) be a weight. If \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(w)\), then for all \(t>0\),_ \[\|t\chi_{\{|P_{\alpha}f|>t\}}\|_{p(\cdot),w}\leq\|P_{\alpha}\|\,\|f\|_{p(\cdot ),w}.\] Proof.: It suffices to remark that for all \(t>0,\;t\chi_{\{|P_{\alpha}f|>t\}}\leq|P_{\alpha}f|.\) **Lemma 6.6**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) and \(w\) be a weight. If \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(w)\), then_ \[\|\chi_{B}\|_{p(\cdot),w}\simeq w(B)^{\frac{1}{p_{+}(B)}}\simeq w(B)^{\frac{1} {p_{-}(B)}}\] _for all pseudo balls \(B\) of \(\mathbb{B}\)._ Proof.: If \(\|\chi_{B}\|_{p(\cdot),w}\geq 1,\) then \(\|\chi_{\mathbb{B}}\|_{p(\cdot),w}^{p_{-}-p_{+}}\leq\|\chi_{B}\|_{p(\cdot),w} ^{p_{-}-p_{+}}\leq\|\chi_{B}\|_{p(\cdot),w}^{p_{-}(B)-p_{+}(B)}\leq 1.\) So \[\|\chi_{B}\|_{p(\cdot),w}^{p_{-}(B)-p_{+}(B)}\simeq 1. 
\tag{6.5}\] Otherwise, if \(\|\chi_{B}\|_{p(\cdot),w}<1,\) then by the Holder inequality, we have \[\mu_{\alpha}(B) \leq 2\|\chi_{B}\|_{p(\cdot),w}\|\chi_{B}\|_{p^{\prime}(\cdot),w^{ \prime}}\] \[\leq 2\|\chi_{B}\|_{p(\cdot),w}\|\chi_{\mathbb{B}}\|_{p^{\prime}( \cdot),w^{\prime}} \tag{6.6}\] Hence from (6.6) and Lemma 2.10, we have \[\|\chi_{B}\|_{p(\cdot),w}^{p_{-}(B)-p_{+}(B)} \leq 2^{p_{+}-p_{-}}\mu_{\alpha}(B)^{p_{-}(B)-p_{+}(B)}\|\chi_{ \mathbb{B}}\|_{p^{\prime}(\cdot),w^{\prime}}^{p_{+}(B)-p_{-}(B)} \tag{6.7}\] \[\lesssim\max\left(1,\|\chi_{\mathbb{B}}\|_{p^{\prime}(\cdot),w^{ \prime}}^{p_{+}-p_{-}}\right).\] We point out that \(\|\chi_{\mathbb{B}}\|_{p^{\prime}(\cdot),w^{\prime}}<\infty\) according to Proposition 4.1, since \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(w).\) On the other hand, using again the boundedness of \(P_{\alpha}\) on \(L^{p(\cdot)}(w),\) we have the estimate \(\|\chi_{\mathbb{B}}\|_{p(\cdot),w}<\infty\) according to Proposition 4.2. Then \[\|\chi_{B}\|_{p(\cdot),w}^{p+(B)-p_{-}(B)}\lesssim\max\left(1,\|\chi_{\mathbb{ B}}\|_{p(\cdot),w}^{p_{+}-p_{-}}\right)<\infty. \tag{6.8}\] Thus, from (6.5), (6.7) and (6.8), we deduce that \[\|\chi_{B}\|_{p(\cdot),w}^{p_{+}(B)}\simeq\|\chi_{B}\|_{p(\cdot),w}^{p_{-}(B)} \tag{6.9}\] for all pseudo-balls of \(\mathbb{B}.\) Applying Lemma 3.2 gives \[\|\chi_{B}\|_{p(\cdot),w}\simeq w(B)^{\frac{1}{p_{+}(B)}}\simeq w(B)^{\frac{1 }{p_{-}(B)}}.\] End of the proof of Proposition 6.1.: We go back to the Main Lemma (Lemma 6.3). On the one hand, since \(P_{\alpha}\) is bounded on \(L^{p(\cdot)}(w),\) it follows from Lemma 6.4 that \(P_{\alpha}\) is also bounded on \(L^{p^{\prime}(\cdot)}(w^{\prime})\). So from Lemma 6.6 with \(p^{\prime}(\cdot)\) in the place of \(p(\cdot)\) and \(w^{\prime}\) in the place of \(w,\) we have \[\|\chi_{B}\|_{p^{\prime}(\cdot),w^{\prime}}\simeq w^{\prime}(B)^{1-\frac{1}{p_ {-}(B)}}\simeq w^{\prime}(B)^{1-\frac{1}{p_{+}(B)}}.\] This implies the estimate \(w^{\prime}(B)^{\frac{1}{p_{-}(B)}}\simeq w^{\prime}(B)^{\frac{1}{p_{+}(B)}}.\) It then follows from Lemma 6.3 that \[\|w^{\prime}\chi_{B}\|_{p(\cdot),w}\simeq w^{\prime}(B)^{\frac{1}{p_{B}}}.\] On the other hand, \(\|w^{\frac{1}{p(\cdot)}}\chi_{B}\|_{p(\cdot)}=\|\chi_{B}\|_{p(\cdot),w}\simeq w (B)^{\frac{1}{p_{B}}}\) by Lemma 6.6. The inequality (6.4) of the Main Lemma takes the following form \[w(B)^{\frac{1}{p_{B}}}w^{\prime}(B)\lesssim\mu_{\alpha}(B)w^{\prime}(B)^{ \frac{1}{p_{B}}}.\] Equivalently, \[\sup_{B\in\mathcal{B}}\frac{w(B)}{\mu_{\alpha}(B)}\left(\frac{w^{\prime}(B)}{ \mu_{\alpha}(B)}\right)^{p_{B}-1}<\infty.\] We have shown the estimate (6.1). This finishes the proof of Proposition 6.1. ## 7. Boundedness on \(L^{p(\cdot)}(w)\) of the maximal function \(m_{\alpha}\) In this section, we prove the boundedness of the maximal function \(m_{\alpha}\) on \(L^{p(\cdot)}(w)\) when \(w\in B_{p(\cdot)}\). As in [2], we will use the regularisation operator that we recall here with some of its properties. **Definition 7.1**.: For all \(k\in(0,1),\) we define the regularisation operator \(R_{k}^{\alpha}\) of order \(k\) by \[R_{k}^{\alpha}f(z)=\frac{1}{\mu_{\alpha}(B^{k}(z))}\int_{B^{k}(z)}f(\zeta)d\mu _{\alpha}(\zeta),\] where \(B^{k}(z)=\{\zeta\in\mathbb{B}:d(z,\zeta)<k(1-|z|)\}.\) **Proposition 7.2**.: _For all \(k\in(0,1),\) there exists a constant \(C_{k}>1\) such that for every non-negative locally integrable function \(f,\) the following two estimates hold._ 1. \(m_{\alpha}f\leq C_{k}m_{\alpha}R_{k}^{\alpha}f;\) 2. 
\(C_{k}^{-1}m_{\alpha}g\leq R_{k}^{\alpha}m_{\alpha}g\leq C_{k}m_{\alpha}g.\) **Lemma 7.3**.: _Let \(k\in(0,\frac{1}{2})\). If \(z^{\prime}\in B^{k}(z)\) then \(z\in B^{k^{\prime}}(z^{\prime})\) where \(k^{\prime}=\frac{k}{1-k},\) and \(\chi_{B^{k}(z)}(z^{\prime})\leq\chi_{B^{k^{\prime}}(z^{\prime})}(z)\). Moreover there exists a constant \(C_{k}>1\) such that_ \[C_{k}^{-1}\mu_{\alpha}(B^{k}(z))\leq\mu_{\alpha}(B^{k^{\prime}}(z^{\prime})) \leq C_{k}\mu_{\alpha}(B^{k}(z)).\] **Lemma 7.4**.: _Let \(k\in\left(0,\frac{1}{5}\right)\) and \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\). For \(w\in B_{p(\cdot)}\) there exists a constant \(C_{k}>1\) such that for all \(z,z^{\prime}\in\mathbb{B}\) such that \(z^{\prime}\in B^{k}(z)\) we have:_ \[C_{k}^{-1}w(B^{k}(z))\leq w(B^{k^{\prime}}(z^{\prime}))\leq C_{k}w(B^{k}(z))\] Proof.: We have \(B^{k}(z)\subset B^{2k^{\prime}}(z^{\prime})\) and \(B^{k^{\prime}}(z^{\prime})\subset B^{6k}(z)\). From Theorem 5.20 and Remark 5.6, we have \(w\in\Lambda.\) Apply Remark 5.8 to conclude. **Lemma 7.5**.: _Let \(k\in(0,\frac{1}{2}).\) There exists a positive constant \(C_{k}\) such that for all non-negative locally integrable \(f,g,\) we have_ \[\int_{\mathbb{B}}f(\zeta)R_{k}g(\zeta)d\mu_{\alpha}(\zeta)\leq C_{k}\int_{ \mathbb{B}}g(z)R_{k}f(z)d\mu_{\alpha}(z).\] We also recall the following elementary lemma. **Lemma 7.6**.: _For \(z\in B\) and \(\zeta\in B^{k}(z),\) we have \(\zeta\in B^{\prime}.\)_ In the rest of this section, to simplify the notation, we write \(\sigma=R_{k}^{\alpha}w.\) The following result is a generalisation to the variable exponent of the analogous result in [2, Lemma 10]. **Proposition 7.7**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{log}(\mathbb{B}),\)\(k\in(0,\frac{1}{2})\) and \(w\in B_{p(\cdot)}\). Then \(R_{k}^{\alpha}w\in A_{p(\cdot)}\) with \([R_{k}^{\alpha}w]_{A_{p(\cdot)}}\lesssim[w]_{B_{p(\cdot)}}.\)_ Proof.: From Theorem 5.21, it suffices to show that \[\frac{\sigma(B)}{\mu_{\alpha}(B)}\left(\frac{\sigma^{\prime}(B)}{\mu_{\alpha}( B)}\right)^{p(z_{0})-1}\lesssim[w]_{B_{p(\cdot)}}\] for every pseudo-ball \(B\) of \(\mathbb{B}.\) We write \(a=2k+1.\) Let \(B=B(z_{0},r)\) be a pseudo-ball in \(\mathbb{B}.\) We set \(B^{\prime}=B(z_{0},ar).\) We distinguish two cases: 1. \(B\in\mathcal{B};\) 2. \(B\) is not a member of \(\mathcal{B}.\) 1. Suppose first that \(B\in\mathcal{B}.\) We claim that there exists a positive absolute constant \(C_{k}\) such that (7.1) \[\frac{\sigma(B)}{\mu_{\alpha}(B)}\leq C_{k}\frac{w(B^{\prime})}{\mu_{\alpha}(B^ {\prime})}.\] Indeed, from the Fubini-Tonelli theorem and Lemma 7.3 we have \[\sigma(B) =\int_{B}\sigma(z)d\mu_{\alpha}(z)\] \[=\int_{B}\left(\frac{1}{\mu_{\alpha}(B^{k}(z))}\int_{B^{k}(z)}w( \zeta)d\mu_{\alpha}(\zeta)\right)d\mu_{\alpha}(z)\] \[=\int_{\mathbb{B}}\left(\int_{\mathbb{B}}\frac{\chi_{B^{k}(z)}( \zeta)\chi_{B}(z)}{\mu_{\alpha}(B^{k}(z))}d\mu_{\alpha}(z)\right)w(\zeta)d\mu_ {\alpha}(\zeta)\] \[\lesssim\int_{\mathbb{B}}\left(\int_{\mathbb{B}}\frac{\chi_{B^{k ^{\prime}}(\zeta)}(z)\chi_{B^{\prime}}(\zeta)}{\mu_{\alpha}(B^{k^{\prime}}( \zeta))}d\mu_{\alpha}(z)\right)w(\zeta)d\mu_{\alpha}(\zeta)\] \[=w(B^{\prime}).\] For the latter inequality, we used Lemma 7.6. 
Moreover, since \(B\subset B^{\prime}\) and \(\mu_{\alpha}(B)\simeq\mu_{\alpha}(B^{\prime})\), we obtain that \[\frac{\sigma(B)}{\mu_{\alpha}(B)}\lesssim\frac{\sigma(B^{\prime})}{\mu_{ \alpha}(B^{\prime})}.\] Furthermore, from the Holder inequality and Lemma 5.9 we have \[\sigma^{-1}(z) =\frac{\mu_{\alpha}(B^{k}(z))}{w(B^{k}(z))}\] \[\leq\frac{2}{w(B^{k}(z))}\|w^{\frac{1}{p(\cdot)}}\chi_{B^{k}(z)} \|_{p(\cdot)}\|w^{-\frac{1}{p(\cdot)}}\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot)}\] \[\simeq\frac{1}{w(B^{k}(z))}w(B^{k}(z))^{\frac{1}{p(z)}}w^{\prime} (B^{k}(z))^{\frac{1}{p^{\prime}(z)}}\] \[=\left(\frac{w^{\prime}(B^{k}(z))}{w(B^{k}(z))}\right)^{\frac{1}{ p^{\prime}(z)}}.\] Hence \[\sigma^{\prime}(z)=\left(\sigma^{-1}\right)^{p^{\prime}(z)-1}(z)\lesssim \left(\frac{w^{\prime}(B^{k}(z))}{w(B^{k}(z))}\right)^{\frac{1}{p(z)}}.\] From the Holder inequality and Lemma 5.9, we have \[\sigma^{\prime}(B) =\int_{B}\sigma^{\prime}(z)d\mu_{\alpha}(z) \tag{7.2}\] \[\lesssim\int_{B}\left(\frac{w^{\prime}(B^{k}(z))}{w(B^{k}(z))}w( z)\right)^{\frac{1}{p(z)}}w(z)^{-\frac{1}{p(z)}}d\mu_{\alpha}(z)\] \[\leq 2\|w^{-\frac{1}{p(\cdot)}}\chi_{B}\|_{p^{\prime}(\cdot)}\| \left(\frac{w^{\prime}(B^{k}(\cdot))}{w(B^{k}(\cdot))}w(.)\right)^{\frac{1}{p (\cdot)}}\chi_{B}\|_{p(\cdot)}\] \[\lesssim w^{\prime}(B)^{\frac{1}{p^{\prime}(z_{0})}}\|\left( \frac{w^{\prime}(B^{k}(.))}{w(B^{k}(.))}w(.)\right)^{\frac{1}{p(\cdot)}}\chi_{ B}\|_{p(\cdot)}\] Since \(w^{\prime}\in\Lambda\), from Lemma 5.9 we have \(\beta:=\|w^{\prime\frac{1}{p(\cdot)}}\chi_{B^{\prime}}\|_{p(\cdot)}\simeq w^{ \prime}(B^{\prime})^{\frac{1}{p(z_{0})}}\). Thus as \(B\subset B^{\prime}\), from Lemma 7.3 and Lemma 7.4 we have \[\rho_{p(\cdot)}\left(\frac{1}{\beta}\left(\frac{w^{\prime}(B^{k}(.))}{w(B^{k}(.))}w\right)^{\frac{1}{p(\cdot)}}\chi_{B}\right)\] \[=\int_{\mathbb{B}}\frac{1}{\beta^{p(z)}}\frac{w^{\prime}(B^{k}(z)) }{w(B^{k}(z))}w(z)\chi_{B}(z)d\mu_{\alpha}(z)\] \[\simeq\int_{\mathbb{B}}\frac{1}{w^{\prime}(B^{\prime})}\frac{w^{ \prime}(B^{k}(z))}{w(B^{k}(z))}w(z)\chi_{B}(z)d\mu_{\alpha}(z)\] \[=w^{\prime}(B^{\prime})^{-1}\times\int_{\mathbb{B}}\left(\frac{1 }{w(B^{k}(z))}\int_{\mathbb{B}}w^{\prime}(\zeta)\chi_{B_{k}(z)}(\zeta)\chi_{B }(z)w(z)d\mu_{\alpha}(\zeta)\right)d\mu_{\alpha}(z)\] \[\leq C_{k}w^{\prime}(B^{\prime})^{-1}\times\int_{\mathbb{B}} \left(\frac{1}{w(B_{k^{\prime}}(\zeta))}\int_{\mathbb{B}}\chi_{B_{k^{\prime}}( \zeta)}(z)\chi_{B^{\prime}}(\zeta)w(z)d\mu_{\alpha}(z)\right)w^{\prime}(\zeta )d\mu_{\alpha}(\zeta)\] \[=C_{k}.\] For the latter inequality, we used Lemma 7.6. Hence we obtain \[\left\|\left(\frac{w^{\prime}(B^{k}(.))}{w(B^{k}(.))}w\right)^{\frac{1}{p( \cdot)}}\chi_{B}\right\|_{p(\cdot)}\lesssim C_{k}w^{\prime}(B^{\prime})^{ \frac{1}{p(z_{0})}}.\] Consequently, we deduce from (7.2) that \[\sigma^{\prime}(B)\lesssim C_{k}w^{\prime}(B)^{\frac{1}{p^{\prime}(z_{0})}}w^ {\prime}(B^{\prime})^{\frac{1}{p(z_{0})}}\leq C_{k}w^{\prime}(B^{\prime})\] because \(B\subset B^{\prime}\). 
Moreover, as \(\mu_{\alpha}(B)\simeq\mu_{\alpha}(B^{\prime})\), we have \[\frac{\sigma^{\prime}(B)}{\mu_{\alpha}(B)}\leq C_{k}\frac{w^{\prime}(B^{\prime })}{\mu_{\alpha}(B^{\prime})}\] and hence (7.3) \[\left(\frac{\sigma^{\prime}(B)}{\mu_{\alpha}(B)}\right)^{p(z_{0})-1}\leq C^{ \prime}_{k}\left(\frac{w^{\prime}(B^{\prime})}{\mu_{\alpha}(B^{\prime})}\right) ^{p(z_{0})-1}.\] Combining (7.1) and (7.3) gives (7.4) \[\frac{\sigma(B)}{\mu_{\alpha}(B)}\left(\frac{\sigma^{\prime}(B)}{\mu_{\alpha}( B)}\right)^{p(z_{0})-1}\leq\gamma_{k}\frac{w(B^{\prime})}{\mu_{\alpha}(B^{ \prime})}\left(\frac{w^{\prime}(B^{\prime})}{\mu_{\alpha}(B^{\prime})}\right) ^{p(z_{0})-1}\leq\gamma_{k}[w]_{B_{p(\cdot)}}\] by Theorem 5.20. 2. Suppose next that the pseudo-ball \(B\) is not a member of \(\mathcal{B}\), i.e. \(r\leq 1-|z_{0}|\). In the case where \(k(1-|z_{0}|)\leq r\leq 1-|z_{0}|\), we have \(B\subset B(z_{0},1-|z_{0}|)\) and \(\mu_{\alpha}(B)\simeq(1-|z_{0}|)^{n+\alpha}\simeq\mu_{\alpha}(B(z_{0},1-|z_{0}|))\). The pseudo-ball \(B(z_{0},1-|z_{0}|)\) is a member of \(\mathcal{B}\); so we can apply to it the computations of the first case. We obtain: \[\frac{\sigma(B)}{\mu_{\alpha}(B)}\left(\frac{\sigma^{\prime}(B)}{\mu_ {\alpha}(B)}\right)^{p(z_{0})-1} \lesssim \frac{\sigma(B(z_{0},1-|z_{0}|))}{\mu_{\alpha}(B(z_{0},1-|z_{0}|)) }\left(\frac{\sigma^{\prime}(B(z_{0},1-|z_{0}|))}{\mu_{\alpha}(B(z_{0},1-|z_{0} |))}\right)^{p(z_{0})-1}\] \[\lesssim [w]_{B_{p(\cdot)}}.\] Next, if \(0<r<k(1-|z_{0}|)\), then for \(z\in B\) we have \((1-k)(1-|z_{0}|)\leq 1-|z|\leq(1+k)(1-|z_{0}|)\). This shows that \(\mu_{\alpha}(B^{k}(z_{0}))\simeq\mu_{\alpha}(B^{k}(z))\). We also claim that \(w(B^{k}(z_{0}))\simeq w(B^{k}(z)).\) Indeed, it is easy to show the inclusions \(B^{k}(z_{0})\subset B(z,4k(1-|z|))\) and \(B^{k}(z)\subset B(z_{0},2k(2+k)(1-|z_{0}|)).\) The claim then follows an application of Remark 5.8. Combining with the estimate \(\mu_{\alpha}(B^{k}(z_{0}))\simeq\mu_{\alpha}(B^{k}(z))\) gives \[\sigma(z)\simeq\sigma(z_{0}) \tag{7.5}\] for every \(z\in B.\) Now, by Remark 2.8, \(p^{\prime}(\cdot)\) is a member of \(\mathcal{P}_{\pm}^{log}(\mathbb{B}).\) Then by Lemma 5.9 and Corollary 2.11, we have \[\sigma(z_{0})^{1-p^{\prime}(z_{0})}\simeq\sigma(z)^{1-p^{\prime}(z_{0})}\simeq \sigma(z)^{1-p^{\prime}(z)}.\] Combining with (7.5) gives \[\frac{\sigma(B)}{\mu_{\alpha}(B)}\left(\frac{\sigma^{\prime}(B)}{\mu_{\alpha}( B)}\right)^{p(z_{0})-1}\simeq 1. \tag{7.6}\] The conclusion of the lemma follows a combination of (7.4) and (7.6) with Theorem 5.20. 
**Lemma 7.8**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B}),k\in(0,\frac{1}{2})\) and \(w\in B_{p(\cdot)}.\) Then_ \[(R_{k}^{\alpha}g(z))^{p(z)}\lesssim R_{k}^{\alpha}(g^{p(\cdot)})(z)+1\] _for all non-negative functions \(g\) such that \(\|g\|_{p(\cdot),w}=1\) and all \(z\in\mathbb{B}.\)_ Proof.: As \(\|g\|_{p(\cdot),w}=1,\) from the Holder inequality (assertion 1 of Proposition 2.7), we have \[\frac{1}{2\|\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot),w^{\prime}}}\int_{B^{k}(z)}g( \zeta)d\mu_{\alpha}(\zeta)\leq\|g\chi_{B^{k}(z)}\|_{p(\cdot),w}=1.\] Therefore from the usual Holder inequality and Lemma 2.10, we obtain \[(R_{k}^{\alpha}g(z))^{p(z)} =\left(\frac{1}{2\|\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot),w^{\prime}} }\int_{B^{k}(z)}g(\zeta)d\mu_{\alpha}(\zeta)\right)^{p(z)}\mu_{\alpha}(B^{k}(z))^ {-p(z)}2^{p(z)}\|\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot),w^{\prime}}^{p(z)}\] \[\leq 2^{p(z)}\left(\frac{1}{2\|\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot),w^{\prime}}}\int_{B^{k}(z)}g(\zeta)d\mu_{\alpha}(\zeta)\right)^{p-(B)}\mu_{ \alpha}(B^{k}(z))^{-p(z)}\|\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot),w^{\prime}}^{p( z)}\] \[\lesssim\left(\frac{1}{\mu_{\alpha}(B^{k}(z))}\int_{B^{k}(z)}g( \zeta)d\mu_{\alpha}(\zeta)\right)^{p_{-}(B)}\mu_{\alpha}(B^{k}(z))^{p_{-}(B)- p(z)}\|\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot),w^{\prime}}^{p(z)-p_{-}(B)}\] \[\leq\mu_{\alpha}(B^{k}(z))^{p_{-}(B)-p(z)}\|\chi_{B^{k}(z)}\|_{p ^{\prime}(\cdot),w^{\prime}}^{p(z)-p_{-}(B)}\frac{1}{\mu_{\alpha}(B^{k}(z))} \int_{B^{k}(z)}g(\zeta)^{p_{-}(B)}d\mu_{\alpha}(\zeta)\] \[\lesssim\frac{1}{\mu_{\alpha}(B^{k}(z))}\int_{B^{k}(z)}g\chi_{g \geq 1}(\zeta)^{p_{-}(B)}d\mu_{\alpha}(\zeta)+1\] \[\lesssim R_{k}^{\alpha}g^{p(\cdot)}(z)+1.\] For the last but one inequality, we also used the following inequality \[\|\chi_{B^{k}(z)}\|_{p^{\prime}(\cdot),w^{\prime}}^{p(z)-p_{-}(B)}\leq\max(1, \|\chi_{\mathbb{B}}\|_{p^{\prime}(\cdot),w^{\prime}}^{p_{+}-p_{-}}).\] **Lemma 7.9**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B}),k\in(0,\frac{1}{2})\) and \(w\in B_{p(\cdot)}\). Then_ \[\|R_{k}^{\alpha}g\cdot w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\lesssim\|g\cdot(R_ {k}^{\alpha}w)^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\] _for all non-negative functions \(g\) belonging to \(L^{p(\cdot)}(R_{k}^{\alpha}wd\mu_{\alpha})\)._ Proof.: From Proposition 7.7, we have \(\sigma=R_{k}^{\alpha}w\in A_{p(\cdot)}\subset B_{p(\cdot)}\) because \(w\in B_{p(\cdot)}\). Without loss of generality, we assume that \(\|g\|_{p(\cdot),\sigma}=1.\) Thus from Lemma 7.8 and Lemma 7.5 we have \[\rho_{p(\cdot)}(R_{k}^{\alpha}g\cdot w^{\frac{1}{p(\cdot)}}) =\int_{\mathbb{B}}(R_{k}^{\alpha}g(z))^{p(z)}w(z)d\mu_{\alpha}(z)\] \[\lesssim\int_{\mathbb{B}}R_{k}^{\alpha}(g^{p(\cdot)})(z)w(z)d\mu _{\alpha}(z)+w(\mathbb{B})\] \[\lesssim\int_{\mathbb{B}}g(z)^{p(z)}\sigma(z)d\mu_{\alpha}(z)+w( \mathbb{B})\] \[\lesssim 1+w(\mathbb{B}).\] For the last inequality, apply Lemma 3.2. 
We still use the notation \(\sigma=R_{k}^{\alpha}w,\) with \(k\in(0,\frac{1}{2}).\) **Lemma 7.10**.: _Let \(k\in(0,\frac{1}{2}),\)\(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) and \(w\in B_{p(\cdot)}.\) Then there exists a positive constant \(C([w^{\prime}]_{B_{p^{\prime}(\cdot)}})\) depending on \([w^{\prime}]_{B_{p^{\prime}(\cdot)}}\) such that for every non-negative function \(g\in L^{p^{\prime}(\cdot)}(\sigma^{\prime}),\) we have_ \[\|R_{k}^{\alpha}g\|_{p^{\prime}(\cdot),w^{\prime}}\leq C([w^{\prime}]_{B_{p^{ \prime}(\cdot)}})\|g\|_{p^{\prime}(\cdot),\sigma^{\prime}}\] _for all non-negative functions \(g\in L^{p^{\prime}(\cdot)}(\sigma^{\prime}).\)_ Proof.: Without loss of generality, we assume that \(\|g\|_{p^{\prime}(\cdot),\sigma^{\prime}}=1\). From Proposition 3.8 and Proposition 7.7, we have \(\sigma^{\prime}\in A_{p^{\prime}(\cdot)}\subset B_{p^{\prime}(\cdot)}\) because \(w\in B_{p(\cdot)}\). Thus since \(\|g\|_{p^{\prime}(\cdot),\sigma^{\prime}}=1\), using Lemma 7.8 and Lemma 7.5 we have \[\rho_{p^{\prime}(\cdot),w^{\prime}}(R_{k}^{\alpha}g) =\int_{\mathbb{B}}(R_{k}^{\alpha}g(z))^{p^{\prime}(z)}w^{\prime}(z) d\mu_{\alpha}(z)\] \[\lesssim\int_{\mathbb{B}}R_{k}^{\alpha}(g^{p^{\prime}(\cdot)})(z )w^{\prime}(z)d\mu_{\alpha}(z)+w^{\prime}(\mathbb{B})\] \[\lesssim\int_{\mathbb{B}}g(z)^{p^{\prime}(z)}R_{k}^{\alpha}w^{ \prime}(z)d\mu_{\alpha}(z)+w^{\prime}(\mathbb{B})\] \[=\int_{\mathbb{B}}g(z)^{p^{\prime}(z)}\sigma^{\prime}(z)\sigma^{ \prime}(z)^{-1}R_{k}^{\alpha}w^{\prime}(z)d\mu_{\alpha}(z)+w^{\prime}(\mathbb{ B})\] \[\leq[w^{\prime}]_{B_{p^{\prime}(\cdot)}}\int_{\mathbb{B}}g(z)^{ p^{\prime}(z)}\sigma^{\prime}(z)d\mu_{\alpha}(z)+w^{\prime}(\mathbb{B})\] \[\lesssim[w^{\prime}]_{B_{p^{\prime}(\cdot)}}+w^{\prime}(\mathbb{ B}).\] Indeed, the last inequality follows from Lemma 3.2; for the last but one inequality, using Theorem 5.20 and Lemma 5.9 for \(w^{\prime}\) in the place of \(w\), we get \[\sigma^{\prime}(z)^{-1}R_{k}^{\alpha}w^{\prime}(z)=(R_{k}^{\alpha}w(z))^{p^{ \prime}(z)\over p(z)}R_{k}^{\alpha}w^{\prime}(z)\lesssim[w^{\prime}]_{B_{p^{ \prime}(\cdot)}}\] because \(B^{k}(z)\) is 'almost' a member of \(\mathcal{B}\), as it is a subset of the member \(B(z,1-|z|)\) of \(\mathcal{B}\), \(\mu_{\alpha}(B^{k}(z))\simeq\mu_{\alpha}(B(z,1-|z|))\) and \(w^{\prime}\in B_{p^{\prime}(\cdot)}\). So by Lemma 3.2, we get \[\|R_{k}^{\alpha}g\|_{p^{\prime}(\cdot),w^{\prime}}\lesssim C([w^{\prime}]_{B_{ p^{\prime}(\cdot)}}).\] **Lemma 7.11**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\), \(f\in L^{p(\cdot)}(w)\), \(k\in\left(0,\frac{1}{2}\right)\) and \(w\in B_{p(\cdot)}\). Then_ \[\|R_{k}^{\alpha}f(R_{k}^{\alpha}w)^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\leq C([w^ {\prime}]_{B_{p^{\prime}(\cdot)}})\|f\|_{p(\cdot),w}.\] Proof.: We still write \(\sigma=R_{k}^{\alpha}w\). By duality (Proposition 3.1), there exists a function \(g\) satisfying \(\|g\|_{p^{\prime}(\cdot),\sigma^{\prime}}=1\) and such that \[\|R_{k}^{\alpha}f\|_{p(\cdot),\sigma}\leq 2\int_{\mathbb{B}}g(z)R_{k}^{\alpha}f(z )d\mu_{\alpha}(z).\] Next, from Lemma 7.5, the Holder inequality and Lemma 7.10, we obtain \[\|R_{k}^{\alpha}f\|_{p(\cdot),\sigma} \lesssim\int_{\mathbb{B}}f(z)R_{k}^{\alpha}g(z)d\mu_{\alpha}(z)\] \[\leq 2\|f\|_{p(\cdot),w}\|R_{k}^{\alpha}g\|_{p^{\prime}(\cdot),w^ {\prime}}\] \[\leq 2C([w^{\prime}]_{B_{p^{\prime}(\cdot)}})\|f\|_{p(\cdot),w}\|g \|_{p^{\prime}(\cdot),\sigma^{\prime}}\] \[=2C([w^{\prime}]_{B_{p^{\prime}(\cdot)}})\|f\|_{p(\cdot),w}.\] Hence we have the result. 
**Theorem 7.12**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\). If \(w\in B_{p(\cdot)},\) there exists a non-negative function \(C\) defined on \((0,\infty)\) such that for all \(f\in L^{p(\cdot)}(w)\) we have_ \[\|m_{\alpha}f\|_{p(\cdot),w}\leq C([w]_{B_{p(\cdot)}})\|f\|_{p(\cdot),w}.\] Proof.: By Proposition 3.4, we have the equality \([w]_{B_{p(\cdot)}}=[w^{\prime}]_{B_{p^{\prime}(\cdot)}}.\) So from Lemma 7.11, we have \[\|R_{k}^{\alpha}f\cdot(R_{k}^{\alpha}w)^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\leq C([w]_{B_{p(\cdot)}})\|f\|_{p(\cdot),w}. \tag{7.7}\] Hence \(R_{k}^{\alpha}f\in L^{p(\cdot)}(\sigma).\) Next, since \(\sigma\in A_{p(\cdot)}\) by Proposition 7.7, Corollary 3.11 gives \[\|m_{\alpha}(R_{k}^{\alpha}f)\cdot(R_{k}^{\alpha}w)^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\lesssim\|(R_{k}^{\alpha}f)\cdot(R_{k}^{\alpha}w)^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}. \tag{7.8}\] Hence from (7.7) and (7.8), we have \(m_{\alpha}(R_{k}^{\alpha}f)\in L^{p(\cdot)}(R_{k}^{\alpha}wd\mu_{\alpha})\). Now, Lemma 7.9 gives \[\|R_{k}^{\alpha}(m_{\alpha}(R_{k}^{\alpha}f))w^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}\lesssim\|m_{\alpha}(R_{k}^{\alpha}f)\cdot(R_{k}^{\alpha}w)^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}.\] Next, by Proposition 7.2, there exists a positive constant \(C\) such that \[m_{\alpha}f\leq CR_{k}^{\alpha}(m_{\alpha}(R_{k}^{\alpha}f)).\] This implies that \[\|m_{\alpha}f\|_{p(\cdot),w}\lesssim\|m_{\alpha}(R_{k}^{\alpha}f)\cdot(R_{k}^{\alpha}w)^{\frac{1}{p(\cdot)}}\|_{p(\cdot)}.\] Finally, applying (7.8) and (7.7) successively, we have the result. ## 8. A weighted extrapolation Theorem and the proof of the sufficient condition in Theorem 1.6 We are now ready to prove the sufficient condition in Theorem 1.6; we will adapt the strategy used in [8]. ### Preliminary results We recall the \(B_{1}\) class of weights. A weight \(w\) belongs to \(B_{1}\) if \[[w]_{B_{1}}:=ess\sup_{z\in\mathbb{B}}\frac{m_{\alpha}w(z)}{w(z)}<\infty. \tag{8.1}\] In \(\mathbb{R}^{n},\) the analogue of the following factorisation theorem was proved for the Muckenhoupt classes \(A_{p},1<p<\infty,\) by Jones [12]. **Theorem 8.1**.: _For a constant exponent \(p\) such that \(1<p<\infty,\) the following two assertions are equivalent._ * \(w\in B_{p}\); * _there exist \(w_{1}\in B_{1}\) and \(w_{2}\in B_{1}\) such that \(w=w_{1}w_{2}^{1-p}\)._ Proof.: We first show the implication \(2)\Rightarrow 1)\). Suppose that \(w=w_{1}w_{2}^{1-p}\) with \(w_{1},w_{2}\in B_{1}.\) For all \(B\in\mathcal{B}\) and \(z\in B,\) we have \[\frac{1}{\mu_{\alpha}(B)}\int_{B}w_{i}d\mu_{\alpha}\leq[w_{i}]_{B_{1}}w_{i}(z),\quad i=1,2. \tag{8.2}\] Thus as \((1-p^{\prime})(1-p)=1\), we have \(w^{1-p^{\prime}}=\left(w_{1}w_{2}^{1-p}\right)^{1-p^{\prime}}=w_{1}^{1-p^{\prime}}w_{2}\). 
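For the reader's convenience, the exponent identity invoked here is the following elementary computation; it only uses the conjugacy relation \(\frac{1}{p}+\frac{1}{p^{\prime}}=1,\) i.e. \(pp^{\prime}=p+p^{\prime}\): \[(1-p^{\prime})(1-p)=1-p-p^{\prime}+pp^{\prime}=1,\qquad\text{hence}\qquad\left(w_{2}^{1-p}\right)^{1-p^{\prime}}=w_{2}^{(1-p)(1-p^{\prime})}=w_{2}.\]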
So from (8.2) we have \[\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}wd\mu_{\alpha}\right)\left( \frac{1}{\mu_{\alpha}(B)}\int_{B}w^{1-p^{\prime}}d\mu_{\alpha}\right)^{p-1}\] \[=\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}w_{1}w_{2}^{1-p}d\mu_{ \alpha}\right)\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}w_{1}^{1-p^{\prime}}w_{2} d\mu_{\alpha}\right)^{p-1}\] \[\leq[w_{1}]_{B_{1}}[w_{2}]_{B_{1}}^{p-1}\left(\frac{1}{\mu_{ \alpha}(B)}\int_{B}w_{2}d\mu_{\alpha}\right)^{1-p}\left(\frac{1}{\mu_{\alpha} (B)}\int_{B}w_{1}d\mu_{\alpha}\right)\] \[\times\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}w_{2}d\mu_{\alpha} \right)^{p-1}\left(\frac{1}{\mu_{\alpha}(B)}\int_{B}w_{1}d\mu_{\alpha}\right) ^{-1}\] \[=[w_{1}]_{B_{1}}[w_{2}]_{B_{1}}^{p-1}.\] Hence \(w\in B_{p}.\) We next show the converse implication \(2)\Rightarrow 1). Suppose that \(w\in B_{p}.\) Set \(q=pp^{\prime}\) and define the operator \(S_{1}\) on the space \(\mathcal{M}\) by \[S_{1}f(z)=w(z)^{\frac{1}{q}}\left(m_{\alpha}\left(f^{p^{\prime}}w^{-\frac{1}{ p}}\right)(z)\right)^{\frac{1}{p^{\prime}}}.\] By the Minkowski inequality, \(S_{1}\) is sublinear. Moreover, from the constant exponent version of Theorem 7.12 [2, Proposition 3], we have \[\int_{\mathbb{B}}S_{1}f(z)^{q}d\mu_{\alpha}(z)=\int_{\mathbb{B}}\left(m_{ \alpha}\left(f^{p^{\prime}}w^{-\frac{1}{p}}\right)(z)\right)^{p}w(z)d\mu_{ \alpha}(z)\lesssim C\left([w]_{B_{p}}\right)\int_{\mathbb{B}}f^{q}(z)d\mu_{ \alpha}(z).\] In other words, \(\|S_{1}\|_{q}\lesssim\left(C\left([w]_{B_{p}}\right)\right)^{\frac{1}{q}}\). Similarly, denote again \(w^{\prime}=w^{1-p^{\prime}}\in B_{p^{\prime}}\) and define the operator \(S_{2}\) on the space \(\mathcal{M}\) by \[S_{2}f(z)=w^{\prime}(z)^{\frac{1}{q}}\left(m_{\alpha}\left(f^{p}w^{\prime- \frac{1}{p^{\prime}}}\right)(z)\right)^{\frac{1}{p}}.\] By the Minkowski inequality, \(S_{2}\) is also sublinear. Moreover, \[\int_{\mathbb{B}}S_{2}f(z)^{q}d\mu_{\alpha}(z)\lesssim C\left([w^{\prime}]_{B _{p^{\prime}}}\right)\int_{\mathbb{B}}f^{q}(z)d\mu_{\alpha}(z).\] In other words, \(\|S_{2}\|_{q}\lesssim C\left(\left([w^{\prime}]_{B_{p^{\prime}}}\right) \right)^{\frac{1}{q}}=\left(C\left([w]_{B_{p}}\right)\right)^{\frac{1}{q}}\). We use the following lemma. **Lemma 8.2**.: _Set \(S=S_{1}+S_{2}\) and define the operator \(\mathcal{R}\) on \(\mathcal{M}\) by_ \[\mathcal{R}h(z)=\sum_{k=0}^{\infty}\frac{S^{k}h(z)}{2^{k}\|S\|_{q}^{k}}\] _with \(S^{0}h=|h|.\) Then_ * \(|h|\leq\mathcal{R}h\)_;_ * \(\|\mathcal{R}h\|_{q}\leq 2\|h\|_{q}\)_;_ * \(S(\mathcal{R}h)\leq 2\|S\|_{q}\mathcal{R}h\) Proof of Lemma 8.2.: By the definition of \(\mathcal{R}h\) we have \(h\leq\mathcal{R}h\). Moreover \[\|\mathcal{R}h\|_{q}\leq\sum_{k=0}^{\infty}\frac{\|S^{k}h\|_{q}}{2^{k}\|S\|_{q}^{ k}}\leq\|h\|_{q}\sum_{k=0}^{\infty}\frac{1}{2^{k}}=2\|h\|_{q}.\] Next, the sublinearity of \(S\) gives \(S(\mathcal{R}h)\leq 2\|S\|_{q}\mathcal{R}h\). Applying assertion c) of Lemma 8.2, we obtain \[w(z)^{\frac{1}{q}}\left(m_{\alpha}\left((\mathcal{R}h)^{p^{\prime}}w^{-\frac{ 1}{p}}\right)(z)\right)^{\frac{1}{p^{\prime}}}=S_{1}(\mathcal{R}h)(z)\leq S( \mathcal{R}h)(z)\leq 2\|S\|_{q}\mathcal{R}h(z). \tag{8.3}\] Now set \(w_{2}=(\mathcal{R}h)^{p^{\prime}}w^{-\frac{1}{p}}.\) By (8.3), we have \(w_{2}\in B_{1}\). Similarly, we have \[w^{\prime}(z)^{\frac{1}{q}}\left(m_{\alpha}\left(\mathcal{R}h\right)^{p}w^{ \prime-\frac{1}{p^{\prime}}}(z)\right)^{\frac{1}{p}}=S_{2}(\mathcal{R}h)(z) \leq S(\mathcal{R}h)\leq 2\|S\|_{q}\mathcal{R}h(z). 
\tag{8.4}\] Now set \(w_{1}=(\mathcal{R}h)^{p}w^{\prime-\frac{1}{p^{\prime}}}.\) By (8.4), we have \(w_{1}\in B_{1}\). Moreover \(w_{1}w_{2}^{1-p}=w\in B_{p}\). This finishes the proof of Lemma 8.1. **Lemma 8.3**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) and let \(w\in B_{p(\cdot)}.\) We define the operator \(R\) on \(L^{p(\cdot)}(w)\) by_ \[Rh(x)=\sum_{k=0}^{\infty}\frac{m_{\alpha}^{k}h(x)}{2^{k}\|m_{\alpha}\|_{L^{p( \cdot)}(w)}^{k}}\] _where for \(k\geq 1,m_{\alpha}^{k}=\underbrace{m_{\alpha}\circ m_{\alpha}\circ\cdots\circ m_{ \alpha}}_{\text{k-times}}\) and \(m_{\alpha}^{0}h=|h|\). Then \(R\) satisfies the following properties:_ * \(|h|\leq Rh;\)__ * \(R\) _is bounded on_ \(L^{p(\cdot)}(w)\) _and_ \(\|Rh\|_{p(\cdot),w}\leq 2\|h\|_{p(\cdot),w}\)_;_ * \(Rh\in B_{1}\) _and_ \([Rh]_{B_{1}}\leq 2\|m_{\alpha}\|_{L^{p(\cdot)}(w)}.\)__ Proof.: The proof of assertions a) and b) are the same as for assertions a) and b) of Lemma 8.2. Here, we use the sublinearity of \(m_{\alpha}\). Finally, by the definition of \(Rh\), we have \[m_{\alpha}(Rh)(x) \leq\sum_{k=0}^{\infty}\frac{m_{\alpha}^{k+1}h(x)}{2^{k}\|m_{ \alpha}\|_{L^{p(\cdot)}(w)}^{k}}\] \[\leq 2\|m_{\alpha}\|_{L^{p(\cdot)}(w)}\sum_{k=0}^{\infty}\frac{m_{ \alpha}^{k+1}h(x)}{2^{k+1}\|m_{\alpha}\|_{L^{p(\cdot)}(w)}^{k+1}}\] \[\leq 2\|m_{\alpha}\|_{L^{p(\cdot)}(w)}Rh(x).\] Thus \(Rh\in B_{1}\) and \([Rh]_{B_{1}}\leq 2\|m_{\alpha}\|_{L^{p(\cdot)}(w)}\). **Lemma 8.4**.: _Let \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) and \(w\in B_{p(\cdot)}\). Define the operator \(H\) on \(L^{p^{\prime}(\cdot)}\) by_ \[Hh=\mathcal{R}^{\prime}\left(hw^{\frac{1}{p(\cdot)}}\right)w^{-\frac{1}{p( \cdot)}}\] _where_ \[\mathcal{R}^{\prime}g(x)=\sum_{k=0}^{\infty}\frac{m_{\alpha}^{k}g(x)}{2^{k}\|m_{ \alpha}\|_{L^{p^{\prime}(\cdot)}(w^{\prime})}^{k}}.\] _Then_ * \(|h|\leq Hh;\)__ * \(H\) _is bounded on_ \(L^{p^{\prime}(\cdot)}\) _and_ \(\|Hh\|_{p^{\prime}(\cdot)}\leq 2\|h\|_{p^{\prime}(\cdot)}\)_;_ * \(Hh\cdot w^{\frac{1}{p(\cdot)}}\in B_{1}\) _and_ \([Hh\cdot w^{\frac{1}{p(\cdot)}}]_{B_{1}}\leq 2\|m_{\alpha}\|_{L^{p^{\prime}( \cdot)}(w^{\prime})}.\)__ Proof.: The proof is the same as for Lemma 8.3. We replace \(p(\cdot)\) by \(p^{\prime}(\cdot)\) and \(w\in B_{p(\cdot)}\) by \(w^{\prime}\in B_{p^{\prime}(\cdot)}.\) The property \(p^{\prime}(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) comes from Remark 2.8. ### A weighted extrapolation theorem We denote by \(\mathcal{F}\) a family of couples of non-negative measurable functions. We are now ready to state and prove the following weighted variable extrapolation theorem. **Theorem 8.5**.: _Suppose that for some constant exponent \(p_{0}>1,\) there exists a function \(C:(0,\infty)\to(0,\infty)\) such that for all \(v\in B_{p_{0}}\) and \((F,G)\in\mathcal{F}\), we have_ \[\int_{\mathbb{B}}F(x)^{p_{0}}v(x)d\mu_{\alpha}(x)\leq C([v]_{B_{p_{0}}})\int_{ \mathbb{B}}G(x)^{p_{0}}v(x)d\mu_{\alpha}(x). \tag{8.5}\] _Then given \(p(\cdot)\in\mathcal{P}_{\pm}^{\log}(\mathbb{B})\) and \(w\in B_{p(\cdot)}\), we have_ \[\|F\|_{p(\cdot),w}\leq 16\times 4^{-\frac{1}{p_{0}}}\left(C([v]_{B_{p_{0}}}) \right)^{\frac{1}{p_{0}}}\|G\|_{p(\cdot),w} \tag{8.6}\] _for all \((F,G)\in\mathcal{F}\) and \(F\in L^{p(\cdot)}(w)\)_ Proof.: We use the technique of Cruz-Uribe in [8, Theorem 2.6]. Let \((F,G)\in\mathcal{F}\). If \(\|F\|_{p(\cdot),w}=0\) we have the result. Otherwise, \(\|F\|_{p(\cdot),w}>0\) and hence \(\|G\|_{p(\cdot),w}>0,\) because if \(\|G\|_{p(\cdot),w}=0,\) then \(G=0\) a.e. 
and by (8.5) we will have \(F=0\) a.e. Henceforth, we assume \(0<\|F\|_{p(\cdot),w}<\infty\) and \(0<\|G\|_{p(\cdot),w}<\infty.\) Define \[h_{1}=\frac{F}{\|F\|_{p(\cdot),w}}+\frac{G}{\|G\|_{p(\cdot),w}},\] then \(\|h_{1}\|_{p(\cdot),w}\leq 2\) and so \(h_{1}\in L^{p(\cdot)}(w).\) Since \(F\in L^{p(\cdot)}(w),\) by duality (Proposition 3.1), there exists \(h_{2}\in L^{p^{\prime}(\cdot)}\) such that \(\|h_{2}\|_{p^{\prime}(\cdot)}=1\) and \[\|F\|_{p(\cdot),w}\leq 2\int_{\mathbb{B}}Fw^{\frac{1}{p(\cdot)}}h_{2}d\mu_{ \alpha}\leq 2\int_{\mathbb{B}}F(Hh_{2})w^{\frac{1}{p(\cdot)}}d\mu_{\alpha} \tag{8.7}\] where the latter inequality comes from assertion \(a)\) of Lemma 8.4. Set \(\gamma=\frac{1}{p_{0}^{\prime}}.\) By the usual Holder inequality, we have \[\int_{\mathbb{B}}F(Hh_{2})w^{\frac{1}{p(\cdot)}}d\mu_{\alpha} =\int_{\mathbb{B}}F(Rh_{1})^{-\gamma}(Rh_{1})^{\gamma}(Hh_{2})w ^{\frac{1}{p(\cdot)}}d\mu_{\alpha} \tag{8.8}\] \[\leq I_{1}^{\frac{1}{p_{0}}}I_{2}^{\frac{1}{p_{0}^{\prime}}},\] where \[I_{1}:=\int_{\mathbb{B}}F^{p_{0}}(Rh_{1})^{1-p_{0}}(Hh_{2})w^{\frac{1}{p(\cdot)}}d \mu_{\alpha}\] and \[I_{2}:=\int_{\mathbb{B}}(Rh_{1})(Hh_{2})w^{\frac{1}{p(\cdot)}}d\mu_{\alpha}.\] In addition, from Lemma 8.3 and Lemma 8.4 respectively, \(R\) is bounded on \(L^{p(\cdot)}(w)\) and \(H\) is bounded on \(L^{p^{\prime}(\cdot)}.\) Thus by the Holder inequality, assertions \(b)\) of Lemma 8.3 and Lemma 8.4, we have \[I_{2}\leq 2\|Rh_{1}\|_{p(\cdot),w}\|Hh_{2}\|_{p^{\prime}(\cdot)}\leq 8\|h_{1} \|_{p(\cdot),w}\|h_{2}\|_{p^{\prime}(\cdot)}\leq 16.\] By the definition of \(h_{1}\) and assertion a) of Lemma 8.3, we have \[\frac{\varphi}{\|\varphi\|_{p(\cdot),w}}\leq h_{1}\leq Rh_{1}\] for \(\varphi\in\{F,\;G\}.\) Next, by the Holder inequality and assertion b) of Lemma 8.4, we have \[I_{1} \leq\int_{\mathbb{B}}F^{p_{0}}(\zeta)\left(\frac{F(\zeta)}{\|F\| _{p(\cdot),w}}\right)^{1-p_{0}}H(\zeta)h_{2}(\zeta)w^{\frac{1}{p(\zeta)}}d\mu_ {\alpha}(\zeta)\] \[=\|F\|_{p(\cdot),w}^{p_{0}-1}\int_{\mathbb{B}}F(\zeta)H(\zeta)h_{ 2}(\zeta)w^{\frac{1}{p(\zeta)}}d\mu_{\alpha}(\zeta)\] \[\leq 2\|F\|_{p(\cdot),w}^{p_{0}-1}\|F\|_{p(\cdot),w}\|Hh_{2}\|_{p^ {\prime}(\cdot)}\] \[\leq 4\|F\|_{p(\cdot),w}^{p_{0}}\] \[<\infty.\] Since \(Rh_{1}\in B_{1}\) and \((Hh_{2})w^{\frac{1}{p(\cdot)}}\in B_{1}\) by Lemma 8.3 and Lemma 8.4 respectively, it follows from Theorem 8.1 that \(v:=(Rh_{1})^{1-p_{0}}\left(Hh_{2}w^{\frac{1}{p(\cdot)}}\right)\in B_{p_{0}}.\) Hence by (8.5) and the same argument as above, we have \[I_{1} =\int_{\mathbb{B}}F^{p_{0}}(Rh_{1})^{1-p_{0}}(Hh_{2})w^{\frac{1} {p(\cdot)}}d\mu_{\alpha}\] \[\leq C([v]_{B_{p_{0}}})\int_{\mathbb{B}}G^{p_{0}}(Rh_{1})^{1-p_{ 0}}(Hh_{2})w^{\frac{1}{p(\cdot)}}d\mu_{\alpha}\] \[\leq C([v]_{B_{p_{0}}})\int_{\mathbb{B}}G^{p_{0}}\left(\frac{G}{ \|G\|_{p(\cdot),w}}\right)^{1-p_{0}}(Hh_{2})w^{\frac{1}{p(\cdot)}}d\mu_{\alpha}\] \[=C([v]_{B_{p_{0}}})\|G\|_{p(\cdot),w}^{p_{0}-1}\int_{\mathbb{B}} G(Hh_{2})w^{\frac{1}{p(\cdot)}}d\mu_{\alpha}\] \[\leq 2C([v]_{B_{p_{0}}})\|G\|_{p(\cdot),w}^{p_{0}-1}\|G\|_{p( \cdot),w}\|Hh_{2}\|_{p^{\prime}(\cdot)}\] \[\leq 4C([v]_{B_{p_{0}}})\|G\|_{p(\cdot),w}^{p_{0}}.\] Thus from (8.7) and (8.8), we have the result. ### The end of the proof of the sufficient condition in Theorem 1.6 We prove the following proposition. 
**Proposition 8.6**.: _Let \(p(\cdot)\in\mathcal{P}^{\log}_{\pm}(\mathbb{B})\) and \(w\in B_{p(\cdot)}.\) Then \(P^{+}_{\alpha}\) is a continuous operator on \(L^{p(\cdot)}(w).\) Consequently, the Bergman projector \(P_{\alpha}\) extends to a continuous operator on \(L^{p(\cdot)}(w).\)_ Proof.: We call again \(\mathcal{C}_{c}(\mathbb{B})\) the space of continuous functions of compact support in \(\mathbb{B}\) and we take \(\mathcal{F}=\{(P^{+}_{\alpha}f,|f|):\ f\in\mathcal{C}_{c}(\mathbb{B})\}\). We recall from Proposition 2.6 that \(\mathcal{C}_{c}(\mathbb{B})\) is a dense subspace in \(L^{p(\cdot)}(w).\) Let \(p_{0}\) be an arbitrary constant exponent greater than \(1.\) Let \(v\in B_{p_{0}}.\) By Theorem 1.5, for every \(f\in\mathcal{C}_{c}(\mathbb{B})\), we have \[\int_{\mathbb{B}}(P^{+}_{\alpha}f)^{p_{0}}vd\mu_{\alpha}\leq C\left([v]_{B_{p_ {0}}}\right)\int_{\mathbb{B}}|f|^{p_{0}}vd\mu_{\alpha}.\] Thus by Theorem 8.5, for all \(f\in\mathcal{C}_{c}(\mathbb{B})\), we have \[\|P^{+}_{\alpha}f\|_{p(\cdot),w}\leq 16\times 4^{-\frac{1}{p_{0}}}\left(C([v]_{B _{p_{0}}})\right)^{\frac{1}{p_{0}}}\|f\|_{p(\cdot),w}.\] We conclude by density.
2304.12847
NLP-LTU at SemEval-2023 Task 10: The Impact of Data Augmentation and Semi-Supervised Learning Techniques on Text Classification Performance on an Imbalanced Dataset
In this paper, we propose a methodology for task 10 of SemEval23, focusing on detecting and classifying online sexism in social media posts. The task tackles a serious issue, as detecting harmful content on social media platforms is crucial for mitigating the harm of these posts on users. Our solution for this task is based on an ensemble of fine-tuned transformer-based models (BERTweet, RoBERTa, and DeBERTa). To alleviate problems related to class imbalance, and to improve the generalization capability of our model, we also experiment with data augmentation and semi-supervised learning. In particular, for data augmentation, we use back-translation, either on all classes, or on the underrepresented classes only. We analyze the impact of these strategies on the overall performance of the pipeline through extensive experiments. For semi-supervised learning, we found that with a substantial amount of unlabelled, in-domain data available, it can enhance the performance of certain models. Our proposed method (for which the source code is available on Github) attains an F1-score of 0.8613 for sub-task A, which ranked us 10th in the competition.
Sana Sabah Al-Azzawi, György Kovács, Filip Nilsson, Tosin Adewumi, Marcus Liwicki
2023-04-25T14:19:46Z
http://arxiv.org/abs/2304.12847v1
NLP-LTU at SemEval-2023 Task 10: The Impact of Data Augmentation and Semi-Supervised Learning Techniques on Text Classification Performance on an Imbalanced Dataset ###### Abstract In this paper, we propose a methodology for task 10 of SemEval23, focusing on detecting and classifying online sexism in social media posts. The task is tackling a serious issue, as detecting harmful content on social media platforms is crucial for mitigating the harm of these posts on users. Our solution for this task is based on an ensemble of fine-tuned transformer-based models (BERTweet, RoBERTa, and DeBERTa). To alleviate problems related to class imbalance, and to improve the generalization capability of our model, we also experiment with data augmentation and semi-supervised learning. In particular, for data augmentation, we use back-translation, either on all classes, or on the underrepresented classes only. We analyze the impact of these strategies on the overall performance of the pipeline through extensive experiments. while for semi-supervised learning, we found that with a substantial amount of unlabelled, in-domain data available, semi-supervised learning can enhance the performance of certain models. Our proposed method (for which the source code is available on Github12) attains an \(F1\)-score of 0.8613 for sub-taskA, which ranked us 10th in the competition. Footnote 1: github.com/SanaNGU/semeval23-task10-sexism-detection-2huggingface.co/NLP-LTU/bertweet-large-sexism-detector ## 1 Introduction Remarkable technological advancements have made it simpler for people from diverse backgrounds to interact through social media using posts and comments written in natural language. These opportunities, however, come with their own challenges. Hateful content on the Internet increased to such levels that manual moderation cannot possibly deal with it (Gongane et al., 2022). Thus, precise identification of harmful content on social media is vital for ensuring that such content can be discovered and dealt with, minimizing the risk of victim harm and making online platforms safer and more inclusive. Detecting online sexism on social media remains a challenge in natural language processing (NLP), and the Explainable Detection of Online Sexism (EDOS) shared task on SemEval23 (Kirk et al., 2023) addresses this problem. The task has three main sub-tasks: (i) task A; binary sexism detection, in which we determine whether a given sentence contains sexist content, (ii) task B; sexism classification, which places sexist sentences into four categories: threats, derogation, animosity, and prejudiced discussions, and (iii) task C; fine-grained vector of sexism, an eleven-class categorization for sexist posts in which systems must predict one of 11 fine-grained vectors. One major challenge of this task is the imbalanced class distribution. For instance, sub-task A consists of only 3398 sexist posts, and 10602 non-sexist ones. Using an imbalanced dataset to train models can result in prediction bias towards the majority class (Johnson and Khoshgoftaar, 2019). In this paper, we (team NLP-LTU) present the automatic sexism detection system developed and submitted to SemEval23 task 10; EDOS. 
The objective of this study is (i) to examine how different state-of-the-art pre-trained language models (PLM) perform in sexism detection and classification tasks, and (ii) to contribute towards answering the following research question (RQ): **To what extent can data augmentation improve the results and address the data imbalance problem?** The core of our approach is a voting-based ensemble model consisting of three pre-trained language models: BERTweet-large (Nguyen et al., 2020), DeBERTa-v3-large (He et al., 2021), and RoBERTa-large (Liu et al., 2019). Additionally, in order to address the issue of data imbalance and to expand our dataset, our system's pipeline employed techniques such as data augmentation and semi-supervised learning. We achieved competitive results, ranking us in the top ten for Task A.3 Our results suggest that (i) using PLMs trained on domain-specific data (e.g. BERTweet-large) leads to better results than using PLMs pre-trained on other sources (ii) In most cases extending all classes via augmentation leads to higher classification scores than using augmentation on the minority classes only to completely balance the class distribution. However, drawing conclusive inferences would require further experiments with multiple data augmentation methods and datasets. (iii) with a substantial amount of unlabelled, in-domain data available, semi-supervised learning can enhance the performance of certain models. Footnote 3: [https://github.com/rewire-online/edos/blob/main/leaderboard](https://github.com/rewire-online/edos/blob/main/leaderboard) The rest of the paper is organised as follows: in Section 2, we present prior related work; in Section 3, we discuss the proposed system. Then, we describe the experiments in Section 4. Section 5, presents results and error analysis. Finally, we conclude the work in Section 6 and describe what further has to be done. ## 2 Related Work In the following section we discuss already existing efforts on the detection of sexism, and efforts directed at data augmentation. ### Sexism Detection Detecting sexism in social media is essential to ensure a safe online environment and to prevent the negative impact of being a target of sexism. Therefore, several studies have developed datasets and machine-learning models to identify and detect sexism in social media. Waseem and Hovy's early study involves collecting 16K English tweets and annotating them into three categories: racism, sexism, and neutral (Waseem and Hovy, 2016). Similarly, but from a multilingual perspective, Rodriguez-Sanchez et al. (2020) created the MeTwo dataset to identify various forms of sexism in Spanish Tweets, and they use machine learning techniques, including both classical and deep learning approaches. Several additional datasets have since been created to examine a wide range of sexist statements (Parikh et al., 2019; Samory et al., 2021; Rodriguez-Sanchez et al., 2021, 2022). The aforementioned studies often categorize sexist content into a limited number of classes, typically two to five, without any further breakdown. However, sexist sentences/posts should be identified, and the reasons for the identification should be provided to increase the interpretability, confidence, and comprehension of the judgments made by the detection system. The EDOS (Kirk et al., 2023) task aims to target this problem with fine-grained classifications for sexist content from social media. 
### Data Augmentation A dataset may have several shortcomings that make text classification difficult. This paper mainly focuses on using data augmentation to deal with class imbalance. Easy Data Augmentation (EDA) (Wei and Zou, 2019) uses four simple word-based operations to generate new data: synonym replacement, random insertion, random swap, and random deletion. EDA shows that the classification performance improves even with a simple data augmentation approach. Similarly, Kobayashi (2018) stochastically replaces words in the sentences with other relevant words using bidirectional recurrent neural networks. In more recent studies, PLMs are used to obtain text samples that are more diverse and linguistically correct. Anaby-Tavor et al. (2020) apply GPT-2 to generate synthetic data for a given class in text classification tasks. Another study, by Sabry et al. (2022), uses a conversational model checkpoint created by Adewumi et al. (2022). ## 3 System Overview This section outlines the system pipeline employed in our study, as depicted in Figure 1. The proposed approach entails two main stages: generating additional training samples (Module 1) and classification (Module 2). Each is described in its own subsection below. ### Module 1.A: Data Augmentation Imbalanced data might impede a model's ability to distinguish between highly-represented classes (e.g., non-sexist) and under-represented ones (i.e., sexist). To address this concern, we studied the potential influence of data augmentation approaches on the system's performance. We expand module 1.A of Figure 1 to describe the data augmentation module (shown in Figure 2). The module comprises three main steps. First, we fine-tune our best-performing model (BERTweet-large) using the gold-labelled data. Then, each sentence undergoes two rounds of back-translation (English to German, and back to English, then English to Russian, and back to English again). Here, our choice of data augmentation method was motivated by its simplicity and the fact that it does not rely on task-specific data and can be applied independently of the task at hand (Longpre et al., 2020). In the final step, the newly generated English sentences from each stage in the second step are filtered using the fine-tuned model from step one. This ensures that each new synthetic sentence retains its original label. This technique can be employed in two ways. Firstly, it can augment only the underrepresented class (sexist sentences) to balance the dataset. Alternatively, both classes can be augmented to double the dataset. We investigate the performance of the data augmentation technique in both settings. ### Module 1.B: Semi-supervised Learning Two more unlabelled datasets, each with one million entries, were made available by the task's organizers. Inspired by earlier research (e.g., Shams, 2014), we used the provided unlabelled datasets to generate weakly labelled samples to balance the original dataset. As shown in Figure 3, Module 1.B comprises three stages. The first stage is fine-tuning a selected pre-trained model (BERTweet-large) using the gold labels. Then, we use the resulting model to create weak labels for the unlabelled data. Lastly, we select samples labelled with the minority class, where the predicted probability of the weak label is at least 0.9. 
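To make the two data-generation modules concrete, a minimal sketch is given below. It is an illustration only, not the released implementation: the OPUS-MT translation checkpoints, the placeholder classifier path, and the label strings are our own assumptions, while the two pivot languages (German and Russian), the label-consistency filter, and the 0.9 confidence threshold follow the description above.

```python
# Hypothetical sketch of Module 1 (back-translation augmentation and weak labelling).
# The OPUS-MT checkpoints and the classifier path are placeholders, not the released system.
from transformers import pipeline

en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
en_ru = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ru")
ru_en = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")

# Classifier fine-tuned on the gold labels (step one of both modules).
clf = pipeline("text-classification", model="path/to/finetuned-bertweet-large")


def back_translate(text):
    """Produce two paraphrases of `text`, one via German and one via Russian."""
    via_de = de_en(en_de(text)[0]["translation_text"])[0]["translation_text"]
    via_ru = ru_en(en_ru(text)[0]["translation_text"])[0]["translation_text"]
    return [via_de, via_ru]


def augment(texts, labels):
    """Module 1.A: keep a paraphrase only if the fine-tuned classifier
    assigns it the same label as the original sentence."""
    new_texts, new_labels = [], []
    for text, label in zip(texts, labels):
        for paraphrase in back_translate(text):
            if clf(paraphrase)[0]["label"] == label:
                new_texts.append(paraphrase)
                new_labels.append(label)
    return new_texts, new_labels


def weak_label(unlabelled_texts, minority_label="sexist", threshold=0.9):
    """Module 1.B: keep unlabelled posts predicted as the minority class
    with probability of at least 0.9."""
    selected = []
    for text in unlabelled_texts:
        pred = clf(text)[0]
        if pred["label"] == minority_label and pred["score"] >= threshold:
            selected.append((text, minority_label))
    return selected
```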
### Module 2: Ensemble Similar to the full pipeline, Module 2 can also be broken down into its individual constituents, which are (i) the pre-processing module, (ii) the individual classifiers, and (iii) the ensembling method to combine the decisions of these classifiers. Firstly, a pre-processing step is needed, as the data for the tasks was collected from noisy resources (Reddit and Gab). For this, we applied the same techniques for all models. In particular, we converted all uppercase characters to lowercase, removed repetitive patterns like "heeeey" and additional spaces, eliminated special characters like emojis and hashtags (#), and deleted numbers. For the individual classifiers, we examined different PLMs such as BERT (Devlin et al., 2019), RoBERTa, and DeBERTa. Each of these models was initially fine-tuned using the entire dataset. Lastly, we employed an ensemble of the three best-performing classifiers from the previous step for the final submission, namely BERTweet-large, DeBERTa-v3-large, and RoBERTa-large. Ensembling multiple models can potentially prevent egregious mistakes made by a single model [14, 15]. We used two ensemble approaches: majority voting, a hard voting method where the prediction of each classifier is treated as a vote and the class with the most votes is ultimately selected as the predicted class, and a soft average ensemble, in which the output of each model is averaged as shown in Equation 1. \[y_{final}=argmax(\frac{y_{1}+y_{2}+y_{3}}{3}) \tag{1}\]

Figure 1: Architecture of the proposed approach

Figure 3: Semi-Supervised Block.

## 4 Experimental Setup All experiments have been implemented using the PyTorch and HuggingFace libraries [13] on a DGX-1 cluster equipped with eight 32GB Nvidia V100 GPUs. The server contains 80 CPU cores and runs the Ubuntu 18 operating system. When evaluating solutions, the macro-averaged \(F1\)-score was the primary metric. ### Task A: Binary Sexism Detection In Task A, we employed the proposed pipeline illustrated in Figure 1. Initially, we utilized Module 1.A to augment the sexist samples, thereby achieving dataset balance. Subsequently, we integrated the synthetic data with the original data and fine-tuned various pre-trained language models. The batch size is set to 16, and the AdamW optimizer is used for training. We set the learning rate for each language model to 1e-5 and fine-tuned it for three epochs. In the semi-supervised learning context, we utilized identical parameters and generated 7,000 additional samples with sexist content to balance the dataset. ### Task B: Sexism Classification For task B, we excluded Module 1.B from our pipeline, as in our initial experiments we were not able to train a sufficiently reliable classifier for the weak-labelling on this smaller dataset; we therefore employed only Module 1.A and Module 2 for this task. Our hyper-parameters were the same as discussed above, with the exception of an increased number of epochs (4). ### Task C: Fine-grained Vector of Sexism In our experiments for task C, due to limited time, we omitted the first modules and focused on Module 2, fine-tuning several pre-trained language models. We have, however, only used these models individually, as the fine-tuned models did not attain comparable levels of performance to those achieved in the previous tasks. 
Furthermore, the use of an ensemble in such cases may potentially detract from the overall performance of the system. ## 5 Results ### Evaluation Phase During the evaluation phase, we used the development set provided by the organizers. The results for task A are shown in Table 1. Concerning the data augmentation component, we compared two distinct data augmentation strategies. The initial approach entailed doubling the size of the entire dataset, while the alternative strategy solely augmented samples that contained sexist content. Table 2 shows the results on the development set for Task B. Due to the limited size of the training set, the use of data augmentation techniques resulted in an improved performance for some models, while others exhibited similar \(F1\)-scores to those obtained without augmentation. The results shown in Table 1 indicate that the use of the provided dataset with a hard ensemble strategy yields the best performance. Furthermore, the semi-supervised approach improves the performance of some pre-trained models (e.g. BERTbase, HateBERT [10], BERTweet-base), but not those models, which had been pre-trained on larger datasets (e.g. DeBERTa-large-v3, BERTweet-large). We hypothesize that these larger \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{w/DA} & \multirow{2}{*}{w/DA} & \multirow{2}{*}{w/DA} & \multicolumn{2}{c}{semi-} & \multirow{2}{*}{semi-} \\ & & & double & balanced & & normalized \\ \hline BERTbase & 25.00 & 81.5 & 76.00 & 81.75 & 82.10 \\ \hline BERTbase & 63.00 & 83.5 & 71.00 & 83.75 & 82.45 \\ \hline BERTbase & 63.50 & 83.50 & 80.01 & 84.52 & 83.53 \\ \hline BERTbase & 84.00 & 84.00 & 83.02 & 82.57 & 83.79 \\ \hline BERTbasebase & 84.00 & 84.00 & 82.00 & 84.75 & 85.08 \\ \hline DeBERTbase & 85.00 & 84.5 & 83.03 & 86.99 & 85.47 \\ \hline BERTbasebase & 86.55 & 86.50 & 83.10 & 86.07 & 86.12 \\ \hline SNI Ensemble & 86.00 & 83.01 & 83.08 & 86.11 & 86.00 \\ \hline Hard Ensemble & **86.05** & 86.07 & 83.23 & 86.09 & 86.01 \\ \hline \hline \end{tabular} \end{table} Table 1: \(F1\)-Macro performance for Task A. models already possess more knowledge due to their extensive pre-training. Regarding data augmentation, our findings indicate that doubling all classes resulted in better performance than balancing the dataset. ### Test Phase We combined the training and development data during the test phase and fine-tuned the models. Our submission, as demonstrated in Table 1, was only made once. We utilized only Module 2 from our pipeline for Task A, employing the hard ensemble strategy. Our three top-performing models, BERTweet-large, DeBERTa-v3-large, and RoBERTa-large, were used without data augmentation. The same approach was adopted for Task B. For Task C, we used RoBERTa-large for the final submission, which yielded the best results in the evaluation set. ### Error Analysis In this subsection, we have undertaken an error analysis for the submission on Task A. The confusion matrices presented in Figure 4 wasconstructed to evaluate the performance of our models on the test set. Our ensemble model achieved an \(F1\)-score of 86.13 on the test set for task A. However, the confusion matrix illustrated in Figure 4 indicates that the model correctly predicted the (not sexist) class 92.40 % of the time (2,800 out of 3,030), while struggling to generate correct predictions for the (sexist) class, with a correct prediction rate of only 80.41% (780 out of 970). 
This discrepancy is most likely due to the data imbalance, as 85.7% of the total training set comprises samples labelled as (not sexist). Despite performing data augmentation using back-translation to mitigate the data imbalance issue, the results in the Table1 indicate that this technique did not improve the overall performance. We hypothesise that the back-translation method did not generate diverse samples, and one possible solution is to use data augmentation methods that generate more diverse synthetic data. ## 6 Conclusion This paper presents our solution to the Shared Task on Explainable Detection of Online Sexism at SemEval23. Our approach involved employing ensemble voting techniques with previously fine-tuned language models, specifically BERTweet-large, RoBERTa-large, and DeBERTa-V3-large, which resulted in the best performance for both task A and B. Additionally, we discovered that fine-tuning RoBERTa-Large was the most effective approach for addressing task C, outperforming the ensemble voting method. These findings address the first objective of examining how different state-of-the-art transformer-based models perform in sexism detection and classification. \begin{table} \begin{tabular}{l l l l} \hline Task & Model & F1-score & Rank \\ \hline A & Ensemble & 86.13 & 10 \\ \hline B & Ensemble & 65.50 & 18 \\ \hline C & RoBERTa-large & 46.00 & 23 \\ \hline \end{tabular} \end{table} Table 4: Results on the Test set for All Tasks Figure 4: Confusion Matrix for Task A \begin{table} \begin{tabular}{l l l} \hline Model & w/o DA & with DA \\ \hline BERT-base & 30.01 & 30.33 \\ \hline BERT-base & 30.00 & 30.66 \\ \hline RoBERTa-base & 34.01 & 36.66 \\ \hline DeBERTa-large & 38.33 & 38.66 \\ \hline BERT-large & 42.33 & 41.66 \\ \hline BERTweet-large & 45.66 & 45.33 \\ \hline RoBERTa-large & **47.33** & 46.66 \\ \hline \end{tabular} \end{table} Table 3: \(F1\)-Macro performance for Task C. \begin{table} \begin{tabular}{l l l l} \hline Model & w/o DA & with DA \\ \hline BERT-base & 60.33 & 62.33 \\ \hline BERTweet-base & 56.33 & 59.05 \\ \hline BERT-large & 60.66 & 63.66 \\ \hline RoBERTa & 59.33 & 59.33 \\ \hline RoBERTa-Large & 68.00 & 68.33 \\ \hline DeBERTa-large-v3 & 68.33 & 67.16 \\ \hline BERTweet-large & 67.33 & 66.00 \\ \hline Soft Ensemble & 69.99 & 69.00 \\ \hline Hard Ensemble & **70.30** & **70.00** \\ \hline \end{tabular} \end{table} Table 2: \(F1\)-Macro performance for Task B. To address our research question; **(RQ): to what extent can data augmentation improve the results and address the data imbalance problem**, we employed a task agnostic data augmentation method, specifically back-translation, in two scenarios: one to double the dataset and the other to augment the underrepresented class. Our results showed that augmenting all classes was more effective than balancing the dataset by augmenting only the underrepresented class, which motivates further exploration of the effects of data augmentation on text classification with unbalanced datasets. In future research, we plan to explore alternative data augmentation techniques to produce more diverse sentences, such as utilizing generative models like GPT-2, to balance and double the dataset's size, and compare the results with the back-translation method. Moreover, we plan to investigate why augmenting all classes sometimes was more effective than augmenting only the underrepresented class and balancing the dataset. 
## Acknowledgements This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
2306.15789
Structured State Space Models for Multiple Instance Learning in Digital Pathology
Multiple instance learning is an ideal mode of analysis for histopathology data, where vast whole slide images are typically annotated with a single global label. In such cases, a whole slide image is modelled as a collection of tissue patches to be aggregated and classified. Common models for performing this classification include recurrent neural networks and transformers. Although powerful compression algorithms, such as deep pre-trained neural networks, are used to reduce the dimensionality of each patch, the sequences arising from whole slide images remain excessively long, routinely containing tens of thousands of patches. Structured state space models are an emerging alternative for sequence modelling, specifically designed for the efficient modelling of long sequences. These models invoke an optimal projection of an input sequence into memory units that compress the entire sequence. In this paper, we propose the use of state space models as a multiple instance learner to a variety of problems in digital pathology. Across experiments in metastasis detection, cancer subtyping, mutation classification, and multitask learning, we demonstrate the competitiveness of this new class of models with existing state of the art approaches. Our code is available at https://github.com/MICS-Lab/s4_digital_pathology.
Leo Fillioux, Joseph Boyd, Maria Vakalopoulou, Paul-Henry Cournède, Stergios Christodoulidis
2023-06-27T20:38:09Z
http://arxiv.org/abs/2306.15789v1
# Structured State Space Models for Multiple Instance Learning in Digital Pathology ###### Abstract Multiple instance learning is an ideal mode of analysis for histopathology data, where vast whole slide images are typically annotated with a single global label. In such cases, a whole slide image is modelled as a collection of tissue patches to be aggregated and classified. Common models for performing this classification include recurrent neural networks and transformers. Although powerful compression algorithms, such as deep pre-trained neural networks, are used to reduce the dimensionality of each patch, the sequences arising from whole slide images remain excessively long, routinely containing tens of thousands of patches. Structured state space models are an emerging alternative for sequence modelling, specifically designed for the efficient modelling of long sequences. These models invoke an optimal projection of an input sequence into memory units that compress the entire sequence. In this paper, we propose the use of state space models as a multiple instance learner to a variety of problems in digital pathology. Across experiments in metastasis detection, cancer subtyping, mutation classification, and multitask learning, we demonstrate the competitiveness of this new class of models with existing state of the art approaches. Our code is available at [https://github.com/MICS-Lab/s4_digital_pathology](https://github.com/MICS-Lab/s4_digital_pathology). Keywords:Multiple instance learning Whole slide images State space models. ## 1 Introduction Precision medicine efforts are shifting cancer care standards by providing novel personalised treatment plans with promising outcomes. Patient selection for such treatment regimes is based principally on the assessment of tissue biopsies and the characterisation of the tumor microenvironment. This is typically performed by experienced pathologists, who closely inspect chemically stained histopathological whole slide images (WSIs). Increasingly, clinical centers are investing in the digitisation of such tissue slides to enable both automatic processing as well as research studies to elucidate the underlying biological processes of cancer. The resulting images are of gigapixel size, rendering their computational analysis challenging. To deal with this issue, multiple instance learning (MIL) schemes based on weakly supervised training are used for WSI classification tasks. In such schemes, the WSI is typically divided into a grid of patches, with general purpose features derived from pretrained ImageNet [18] networks extracted for each patch. These representations are subsequently pooled together using different aggregation functions and attention-based operators for a final slide-level prediction. State space models are designed to efficiently model long sequences, such as the sequences of patches that arise in WSI MIL. In this paper, we present the first use of state space models for WSI MIL. Extensive experiments on three publicly available datasets show the potential of such models for the processing of gigapixel-sized images, under both weakly and multi-task schemes. Moreover, comparisons with other commonly used MIL schemes highlight their robust performance, while we demonstrate empirically the superiority of state space models in processing the longest of WSI sequences with respect to commonly used MIL methods. 
## 2 Related work Using pretrained networks for patch-wise feature extraction is a well established strategy for histopathology analysis [4, 20]. An extension of this approach is with MIL, where the patch-wise features of an entire slide are digested simultaneously by an aggregator model, such as attention-based models CLAM [17] and TransMIL [19], the latter being a variant of self-attention transformers [21]. [3] proposes another transformer-based method in the form of a hierarchical ViT. Similar to our multitask experiments, [6] explores combining slide-level and tile-level annotations with a minimal point-based annotation strategy. One of the key components of MIL methods is the aggregation module that pools together the set of patch representations. Mean or max pooling operations are among the simplest and most effective for aggregating predictions over a whole slide [2]. In contrast, recurrent neural networks (RNN) with long short-term memory (LSTM) [14] model the patches more explicitly as a set of tokens in sequence. In particular, LSTM networks have been shown to work well in different MIL settings including both visual cognition [22] and computational pathology [1]. The state space model is a linear differential equation, \[\begin{split}\dot{x}(t)&=\mathbf{A}x(t)+\mathbf{B}u(t)\\ y(t)&=\mathbf{C}x(t)+\mathbf{D}u(t)\end{split} \tag{1}\] that is widely studied in control theory, and describes a continuous time process for input and output signals \(u(t)\in\mathbb{R}^{p}\) and \(y(t)\in\mathbb{R}^{q}\), and state signal \(x(t)\in\mathbb{R}^{n}\), and where the process is governed by matrices \(\mathbf{A}\in\mathbb{R}^{n\times n}\), \(\mathbf{B}\in\mathbb{R}^{n\times p}\), \(\mathbf{C}\in\mathbb{R}^{q\times n}\), \(\mathbf{D}\in\mathbb{R}^{q\times p}\). In HiPPO [9] (high-order polynomial projection operator), continuous time memorisation is posed as a problem of function approximation in a Hilbert space defined by a probability measure \(\mu\). For a _scaled Legendre_ probability measure, one obtains the HiPPO matrix \(\mathbf{A}\), which enforces uniform weight in the memorisation of all previously observed inputs, in contrast to the exponentially decaying weighting of the constant error carousel of LSTMs [14]. The HiPPO mode of memorisation is shown empirically to be better suited to modeling long-range dependencies (LRD) than other neural memory layers, for which it serves as a drop-in replacement. Whereas in HiPPO, the state matrix \(\mathbf{A}\) is a fixed constant, the linear state space layer (LSSL) [12] incorporates \(\mathbf{A}\) as a learnable parameter. However, this increased expressiveness introduces intractable powers of \(\mathbf{A}\). In [10], the LSSL is instead reparameterised as the sum of diagonal and low-rank matrices, allowing for the efficient computation of the layer kernel in Fourier space. This updated formulation is known as the _structured_ state space sequence layer (S4). Note that as a linear operator, the inverse discrete Fourier transform is amenable to backpropagation in the context of a neural network. Note also that under this formulation, the hidden state \(x(t)\) is only computed implicitly. Finally, [11] presents a simplification of the S4 layer, known as diagonal S4 (S4D), in which \(\mathbf{A}\) is approximated by a diagonal matrix. 
## 3 Method Given that the patch extraction of whole slide images at high magnifications results in long sequences of patches, we propose to incorporate a state space layer in a MIL aggregation network to better represent each patch sequence. ### Neural state space models In practice, neural state space models (SSM) simulate Equation 1 in discrete time, invoking a recurrence relation on the discretised hidden state, \[\begin{split} x_{t}&=\overline{\mathbf{A}}x_{t-1}+ \overline{\mathbf{B}}u_{t}\\ y_{t}&=\overline{\mathbf{C}}x_{t}+\overline{\mathbf{D}}u _{t}\end{split} \tag{2}\] where the sequences \(u_{t}\), \(x_{t}\), and \(y_{t}\) are the discretised \(u(t)\), \(x(t)\), and \(y(t)\), and the modified model parameters arise from a bilinear discretisation [12]. As such, SSMs bear an inherent resemblance to RNNs, where the hidden representation \(x_{t}\) can be interpreted as a memory cell for the observed sequence over the interval \([0,t]\), and with \(\overline{\mathbf{D}}u_{t}\) acting as a skip connection between the input and output at point \(t\). Due to their lack of non-linearities, state space models can also be viewed as a convolution between two discrete sequences. Playing out the recurrence in Equation 2, one obtains, \[y=\overline{\mathbf{K}}*u+\overline{\mathbf{D}}u, \tag{3}\] where \(u\in\mathbb{R}^{L}\) and \(y\in\mathbb{R}^{L}\) are the full input and output sequences, and the sequence \(\overline{\mathbf{K}}\in\mathbb{R}^{L}\) is defined as, \[\overline{\mathbf{K}}=(\overline{\mathbf{C}\mathbf{B}},\overline{\mathbf{C}\mathbf{A}\mathbf{B}}, \ldots,\overline{\mathbf{C}\mathbf{A}}^{L-1}\overline{\mathbf{B}}), \tag{4}\] which is computed efficiently by the S4D algorithm [11]. Note that although SSM layers are linear, they may be combined with other, non-linear layers in a neural network. Note also that although Equation 3 is posed as modeling a one-dimensional signal, in practice multi-dimensional inputs are modelled simply by stacking SSM layers together, followed by an affine "mixing" layer. ### MIL training In our pipeline (Figure 1) WSIs are first divided into a sequence of \(L\) patches \(\{u_{1},u_{2},\ldots,u_{L}\}\), where \(L\) will vary by slide. A pretrained ResNet50 is then used to extract a 1024-dimensional feature vector from each patch \(\{\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{L}\}\), which constitute the model inputs. We define a SSM-based neural network \(\mathcal{F}\) to predict a WSI-level class probability given this input sequence, \[\hat{y}=\mathcal{F}(\{\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{L}\}). \tag{5}\] The architecture of \(\mathcal{F}\) is composed of an initial linear projection layer, used to lower the dimensionality of each vector in the input sequence. A SSM layer is then applied feature-wise by applying the S4D algorithm. That is, Equation 3, including the skip connection, transforms the sequence \(\{u_{1,d},u_{2,d},\ldots,u_{L,d}\}\) for all features \(d\), and the resulting sequences are concatenated. A linear "mixing" layer is applied token-wise, doubling the dimensionality of each token, followed by a gated linear unit [5] acting as an output gate, which restores the input dimensionality. For the SSM layer, we used the official implementation of S4D1. A max pooling layer merges the SSM layer outputs into a single vector, which is projected by a final linear layer and softmax to give the class probabilities \(\hat{y}\). 
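To make Equations (2)-(4) concrete, the following is a minimal NumPy sketch (not the S4D implementation, which uses a diagonal parameterisation of \(\mathbf{A}\) and evaluates the kernel in Fourier space) that runs the discrete recurrence for a single feature channel and checks that it agrees with the convolutional view; the matrices here are small random placeholders rather than learned parameters.

```python
import numpy as np

def ssm_scan(A_bar, B_bar, C_bar, D_bar, u):
    # Discrete recurrence of Equation (2): x_t = A x_{t-1} + B u_t, y_t = C x_t + D u_t.
    x = np.zeros(A_bar.shape[0])
    y = np.empty(len(u))
    for t, u_t in enumerate(u):
        x = A_bar @ x + B_bar * u_t
        y[t] = C_bar @ x + D_bar * u_t
    return y

def ssm_kernel(A_bar, B_bar, C_bar, L):
    # Kernel of Equation (4): (C B, C A B, ..., C A^{L-1} B).
    K, v = np.empty(L), B_bar.copy()
    for l in range(L):
        K[l] = C_bar @ v
        v = A_bar @ v
    return K

rng = np.random.default_rng(0)
n, L = 16, 64                                   # state size and sequence length (placeholders)
A_bar = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))   # a crude stable stand-in for the learned state matrix
B_bar, C_bar, D_bar = rng.standard_normal(n), rng.standard_normal(n), 0.5
u = rng.standard_normal(L)                      # one feature channel of a patch sequence

y_recurrent = ssm_scan(A_bar, B_bar, C_bar, D_bar, u)
K = ssm_kernel(A_bar, B_bar, C_bar, L)
# Convolutional view of Equation (3): y_t = sum_{l<=t} K_l u_{t-l} + D u_t.
y_conv = np.array([K[:t + 1][::-1] @ u[:t + 1] for t in range(L)]) + D_bar * u
assert np.allclose(y_recurrent, y_conv)          # the recurrent and convolutional views agree
```

In the full model, one such scalar SSM is applied to every feature of the projected patch embeddings, and the stacked outputs are passed through the mixing layer, gated linear unit, and max pooling described above.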
Footnote 1: [https://github.com/HazyResearch/state-spaces](https://github.com/HazyResearch/state-spaces) The model is trained according to \[\mathcal{L}_{MIL}=-\frac{1}{M}\sum_{m=1}^{M}\log\hat{y}_{c_{m}}, \tag{6}\] where \(\hat{y}_{c_{m}}\) denotes the probability corresponding to \(c_{m}\), the slide-level label of the sequence corresponding to the \(m^{\text{th}}\) of \(M\) whole slide images. Figure 1: Overview of the proposed pipeline. In the first step, patches are extracted from a regular grid on a WSI. These patches are embedded using a pre-trained ResNet50 and are aggregated by a sequence model based on a state space layer. ### Multitask training One advantage of processing an entire slide as a sequence is the ease with which additional supervision may be incorporated, when available. A patch-level ground truth creates the opportunity for multitask learning, which can enhance the representations learned for slide-level classification. As an extension of our base model in Equation 6, we train a multitask model to jointly predict slide-level and patch-level labels. Prior to the max pooling layer of the base model, an additional linear layer is applied to each sequence token, yielding \(L\) additional model outputs. This multitask model is trained according to a sum of log losses, \[\mathcal{L}_{MT}=-\frac{1}{M}\sum_{m=1}^{M}\bigg{(}\log\hat{y}_{c_{m}}+\frac{\lambda}{L}\cdot\sum_{l=1}^{L}\log\hat{y}_{c_{m,l}}\bigg{)}, \tag{7}\] where \(c_{m,l}\) indexes the class of the \(l^{\text{th}}\) patch in the \(m^{\text{th}}\) training slide and \(\lambda\) is a tunable hyperparameter used to modulate the relative importance of each task. ### Implementation details We extracted patches of size \(256\times 256\) from the tissue regions of WSIs at 20x magnification. Following CLAM [17], the third residual block of a pretrained ResNet50 [13] was used as a feature extractor, followed by a mean pooling operation, resulting in a 1024-dimensional representation for each patch. These features were used as inputs to all models. All model training was performed under a 10-fold cross-validation, and all reported results are averaged over the validation sets of the folds, aside from CAMELYON16, for which the predefined test set was utilized. Thus, for CAMELYON16, we report test set performances averaged over the validation folds. Baseline models were chosen to be the prior art CLAM [17] and TransMIL [19]. The official code of these two models was used to perform the comparison. In addition, we included a vanilla transformer, an LSTM RNN, and models based on mean and max pooling. Our vanilla transformer is composed of two stacked self-attention blocks, with four attention heads, a model dimension of 256, and a hidden dimension of 256. For the LSTM, we used an embedding size of 256 and a width of 256. The pooling models applied pooling feature-wise across each sequence, then used a random forest with 200 trees for classification. For the S4 models, the dimension of the state matrix \(\mathbf{A}\) was tuned to 32 for CAMELYON16 and TCGA-RCC, and 128 for TCGA-LUAD. Our models were trained using the Adam [15] optimizer with the lookahead method [23], with a learning rate of \(2\cdot 10^{-4}\), and weight decay of \(10^{-4}\) for TCGA-LUAD and TCGA-RCC and \(10^{-3}\) for CAMELYON16. Early stopping with a patience of 10 was used for all our training. Our implementation is publicly available2.
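As a concrete reference for the training objectives above, the following is a minimal NumPy sketch of the slide-level loss in Equation (6) and the multitask loss in Equation (7); the probability arrays and labels are illustrative placeholders and the function names are ours, not those of the released implementation.

```python
import numpy as np

def mil_loss(slide_probs, slide_labels):
    # Slide-level negative log-likelihood of Equation (6).
    M = len(slide_probs)
    return -sum(np.log(p[c]) for p, c in zip(slide_probs, slide_labels)) / M

def multitask_loss(slide_probs, slide_labels, patch_probs, patch_labels, lam=5.0):
    # Multitask loss of Equation (7): slide-level term plus a patch-level term weighted by lambda / L.
    M = len(slide_probs)
    total = 0.0
    for p, c, q, cl in zip(slide_probs, slide_labels, patch_probs, patch_labels):
        L = len(cl)
        slide_term = np.log(p[c])
        patch_term = (lam / L) * sum(np.log(q[l, cl[l]]) for l in range(L))
        total += slide_term + patch_term
    return -total / M

# Toy usage with two slides, two classes, and a handful of patches per slide.
rng = np.random.default_rng(0)
slide_probs = [np.array([0.3, 0.7]), np.array([0.8, 0.2])]      # predicted slide-level probabilities
slide_labels = [1, 0]                                            # slide labels c_m
patch_probs = [rng.dirichlet(np.ones(2), size=5), rng.dirichlet(np.ones(2), size=3)]
patch_labels = [np.array([1, 0, 1, 1, 0]), np.array([0, 0, 0])]  # patch labels c_{m,l}
print(mil_loss(slide_probs, slide_labels))
print(multitask_loss(slide_probs, slide_labels, patch_probs, patch_labels))
```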
Footnote 2: [https://github.com/MICS-Lab/s4_digital_pathology](https://github.com/MICS-Lab/s4_digital_pathology) ## 4 Experiments and discussion ### Data **CAMELYON16**[16] is a dataset that consists of resections of lymph nodes, where each WSI is annotated with a binary label indicating the presence of tumour tissue in the slide, and all slides containing tumors have a pixel-level annotation indicating the metastatic region. In multitask experiments, we use this annotation to give each patch a label indicating local tumour presence. There are 270 WSIs in the training/validation set, and 130 WSIs in the predefined test set. In our experiments, the average patch sequence length arising from CAMELYON16 is 6129 (ranging from 127 to 27444). **TCGA-LUAD** is a TCGA lung adenocarcinoma dataset that contains 541 WSIs along with genetic information about each patient. We obtained genetic information for this cohort using Xena browser [7]. As a MIL task, we chose the task of predicting the patient mutation status of TP53, a tumor suppressor gene that is highly relevant in oncology studies. The average sequence length is 10557 (ranging from 85 to 34560). **TCGA-RCC** is a TCGA dataset for three kidney cancer subtypes (denoted KICH, KIRC, and KIRP). It consists of 936 WSIs (121 KICH, 518 KIRC, and 297 KIRP). The average sequence length is 12234 (ranging from 319 to 62235). ### Results **Multiple instance learning results.** We evaluate our method on each dataset by accuracy and area under receiver operating characteristic curve (AUROC). For multiclass classification, these were computed in a one-versus-rest manner. Table 1 summarises the comparison between our proposed model and baselines. For the CAMELYON16 dataset, our method performs on par with TransMIL and the CLAM models, while it clearly outperforms the other methods. Similarly, in the TCGA-LUAD dataset the proposed model achieves comparable performance with both CLAM models, while outperforming TransMIL and the other methods. We note that TCGA-LUAD proves to be a more challenging dataset for all models. Moreover, our method outperforms CLAM models on the TCGA-RCC dataset, while reporting very similar performance with respect to TransMIL. Overall, looking at the average metrics per model across all three datasets, our proposed method achieves the highest accuracy and the second highest AUROC, only behind CLAM-MB. A pairwise t-test between the proposed method, CLAM, and TransMIL shows that there is no statistical significance performance difference (see supplementary material). We further compare our method with respect to model and time complexity. In Table 2 we report the number of trainable parameters, as well as the inference time for all models. The number of parameters is computed with all models configured to be binary classifiers, and the inference time is computed as the average time over 100 samples for processing a random sequence of 1024-dimensional vectors of length 30000. For our proposed method, we report both models with the different state dimensions (Ours (\(SSM_{32}\))) and (Ours (\(SSM_{128}\))). Compared with TransMIL, our method runs four times faster and has less than half the parameters. The CLAM models are more efficient in terms of number of trainable parameters, yet CLAM MB is slower. Table 3 shows the effect of modifying parts of the architecture on the results for TCGA-RCC. Most modifications had very little impact on AUROC, but a more significant impact can be seen on the accuracy of the model. 
Models A and B show that stacking multiple SSM layers results in lower accuracy, which was observed over all three datasets, while models C and D show that modifying the state dimension of the SSM module can have an impact on the accuracy. The optimal state space dimension varies depending on the dataset. **Multitask learning results.** We explored the ability of our model to combine slide- and patch-level information on the CAMEYLON16 dataset. We compared our model with the best performing model on CAMELYON16, TransMIL. Both models were trained according to Equation 7 with \(\lambda=5\) tuned by hand. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Dataset & \multicolumn{2}{c}{CAMELYON16} & TCGA-LUAD & \multicolumn{2}{c}{TCGA-RCC} & \multicolumn{2}{c}{Average} \\ Metric & Acc. & AUROC & Acc. & AUROC & Acc. & AUROC & Acc. & AUROC \\ \hline Mean-pooling & 0.5969 & 0.5810 & 0.6261 & 0.6735 & 0.8608 & 0.9612 & 0.6946 & 0.7386 \\ Max-pooling & 0.7078 & 0.7205 & 0.6328 & 0.6686 & 0.8803 & 0.9659 & 0.7403 & 0.7850 \\ Transformer [21] & 0.5419 & 0.5202 & 0.5774 & 0.6214 & 0.7932 & 0.9147 & 0.6375 & 0.6854 \\ LSTM [8] & 0.5310 & 0.5053 & 0.5389 & 0.5208 & 0.6654 & 0.7853 & 0.5784 & 0.6038 \\ CLAM SB [17] & 0.8147 & 0.8382 & 0.6859 & _0.7459_ & 0.8816\({}^{*}\) & 0.9723\({}^{*}\) & 0.7941 & 0.8532 \\ CLAM MB [17] & _0.8264_ & _0.8523_ & **0.6901** & **0.7573** & 0.8966\({}^{*}\) & 0.9799\({}^{*}\) & _0.8044_ & **0.8632** \\ TransMIL [19] & **0.8287** & **0.8628** & 0.6348 & 0.7015 & **0.9466\({}^{*}\)** & _0.9882\({}^{*}\)_ & 0.8034 & 0.8508 \\ \hline Ours & 0.8217 & 0.8485 & _0.6879_ & 0.7304 & _0.9426_ & **0.9885** & **0.8174** & _0.8558_ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of accuracy and AUROC on three datasets CAMELYON16, TCGA-LUAD, TCGA-RCC, and on average. All metrics in the table are the average of 10 runs. Best performing methods are indicated in **bold** and second best in _italics_. \({}^{*}\) indicates results from [19]. \begin{table} \begin{tabular}{c c c} \hline \hline Model & Number of parameters Inference time (ms) \\ \hline Mean-pooling & 1 025 & 5.60 \\ Max-pooling & 1 025 & 77.49 \\ Transformer [21] & 1 054 978 & 2.60 \\ LSTM [8] & 789 250 & 320.52 \\ CLAM SB [17] & 790 791 & 0.84 \\ CLAM MB [17] & 791 048 & 5.85 \\ TransMIL [19] & 2 672 146 & 8.58 \\ \hline Ours (SSM\({}_{128}\)) & 1 184 258 & 2.01 \\ Ours (SSM\({}_{32}\)) & 1 085 954 & 1.97 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of parameter count and inference time for all methods. In Table 4 we give slide-level accuracy and AUROC for the two models. We observe that all accuracies and AUROC increase compared with those reported in Table 1. This indicates that the use of patch-level annotations complements the learning of the slide-level label. We furthermore observe that our model outperforms TransMIL when combining slide- and patch-level annotations. We map the sequence of output probabilities to their slide coordinates giving a heatmap localising metastasis (see supplementary material). **Performance on longest sequences.** In order to highlight the inherent ability of SSM models to effectively model long sequences, we performed an experiment on only the largest WSIs of the TCGA-RCC dataset. Indeed, this dataset contains particularly long sequences (up to 62235 patches at 20x). We evaluated the trained models for each fold on a subset of the validation set, only containing sequences with a length in the 85\({}^{\text{th}}\) percentile. 
Table 5 shows the obtained average accuracy (weighted by the number of long sequences in each validation set) and AUROC on both CLAM models, TransMIL, and our proposed method. Both in terms of AUROC and accuracy, our method outperforms the other methods on long sequences, while the performances are comparable to Table 1, albeit slightly lower, illustrating the challenge of processing large WSIs. ## 5 Conclusions In this work we have explored the ability of state space models to act as multiple instance learners on sequences of patches extracted from histopathology images. These models have been developed for their ability to memorise long sequences, and they have proven competitive with state of the art MIL models across a range of pathology problems. Additionally, we demonstrated the ability of these models \begin{table} \begin{tabular}{c c c} \hline \hline Model & Accuracy & AUROC \\ \hline TransMIL [19] & 0.8403 & 0.8828 \\ Ours & **0.8488** & **0.8998** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of accuracy and AUROC for models trained as multitask classifiers on the CAMELYON16 dataset. Best results in **bold**. \begin{table} \begin{tabular}{c c c c c} \hline \hline Model & SSM layers & State dimension & Accuracy & AUROC \\ \hline A & 2 & 32 & 0.9236 & 0.9813 \\ B & 3 & 32 & 0.9179 & 0.9834 \\ C & 1 & 16 & 0.9352 & 0.9846 \\ D & 1 & 64 & 0.9352 & 0.9861 \\ \hline Ours & 1 & 32 & **0.9426** & **0.9885** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study for the different SSM components on the TCGA-RCC dataset. Best results in **bold**. to perform multiclass classification, which furthermore allowed us to visualise the localisation of metastasic regions. Finally, we demonstrated that on the longest sequences in our datasets, state space models offer better performance than competing models, confirming their power in modeling long-range dependencies. #### 5.0.1 Acknowledgments This work has benefited from state financial aid, managed by the Agence Nationale de Recherche under the investment program integrated into France 2030, project reference ANR-21-RHUS-0003. This work was partially supported by the ANR Hagnodice ANR-21-CE45-0007. Experiments have been conducted using HPC resources from the "Mesocentre" computing center of CentraleSupelec and Ecole Normale Superieure Paris-Saclay supported by CNRS and Region Ile-de-France.
2310.10117
Federated Learning with Convex Global and Local Constraints
In practice, many machine learning (ML) problems come with constraints, and their applied domains involve distributed sensitive data that cannot be shared with others, e.g., in healthcare. Collaborative learning in such practical scenarios entails federated learning (FL) for ML problems with constraints, or FL with constraints for short. Despite the extensive developments of FL techniques in recent years, these techniques only deal with unconstrained FL problems or FL problems with simple constraints that are amenable to easy projections. There is little work dealing with FL problems with general constraints. To fill this gap, we take the first step toward building an algorithmic framework for solving FL problems with general constraints. In particular, we propose a new FL algorithm for constrained ML problems based on the proximal augmented Lagrangian (AL) method. Assuming convex objective and convex constraints plus other mild conditions, we establish the worst-case complexity of the proposed algorithm. Our numerical experiments show the effectiveness of our algorithm in performing Neyman-Pearson classification and fairness-aware learning with nonconvex constraints, in an FL setting.
Chuan He, Le Peng, Ju Sun
2023-10-16T06:51:32Z
http://arxiv.org/abs/2310.10117v3
A proximal augmented Lagrangian based algorithm for federated learning with global and local convex conic constraints ###### Abstract This paper considers federated learning (FL) with constraints, where the central server and all local clients collectively minimize a sum of convex local objective functions subject to global and local convex conic constraints. To train the model without moving local data from clients to the central server, we propose an FL framework in which each local client performs multiple updates using the local objective and local constraint, while the central server handles the global constraint and performs aggregation based on the updated local models. In particular, we develop a proximal augmented Lagrangian (AL) based algorithm for FL with global and local convex conic constraints. The subproblems arising in this algorithm are solved by an inexact alternating direction method of multipliers (ADMM) in a federated fashion. Under a local Lipschitz condition and mild assumptions, we establish the worst-case complexity bounds of the proposed algorithm for finding an approximate KKT solution. To the best of our knowledge, this work proposes the first algorithm for FL with global and local constraints. Our numerical experiments demonstrate the practical advantages of our algorithm in performing Neyman-Pearson classification and enhancing model fairness in the context of FL. ## 1 Introduction Federated learning (FL) has emerged as a prominent distributed machine learning paradigm, finding extensive application across diverse domains. FL aims to train a learning model with good performance for all local data while ensuring the privacy of local clients by minimizing the disclosure of sensitive local information. A common FL strategy is that each local client independently performs certain iterations using local data to update its local parameters. Subsequently, the central server collects the updated local parameters (and sometimes derivatives of local models) from local clients and performs an aggregation to compute a new global parameter. This iterative FL strategy, with multiple local updates, saves the communication efforts between the central server and local clients. Also, as the raw data stored on the local clients is never shared directly with the central server, this FL strategy ensures the privacy of sensitive local information held by each client. On the other hand, recent years have seen a proliferation of deep learning tasks that are framed as constrained optimization problems. These constraints typically encode prior knowledge and essential properties pivotal to the learning tasks. Particularly, the learning tasks that fall within constrained optimization span various areas, including robustness evaluation (Goodfellow et al., 2014), fairness-aware learning (Agarwal et al., 2018), addressing label imbalance (Saito & Rehmsmeier, 2015), neural architecture search (Zoph et al., 2018), topology optimization (Christensen & Klarbring, 2008), knowledge-aware machine learning (McClenny & Braga-Neto, 2020). We will now describe two specific examples that motivate our research in this paper. **Neyman-Pearson classification:** Consider a binary classification problem where the primary concern is the risk of misclassifying one specific class more than the other, as often occurs in medical diagnosis. 
To address this problem, the Neyman-Pearson classification model is proposed as follows (e.g., see (Tong et al., 2016)): \[\min_{w}\frac{1}{n_{0}}\sum_{i=1}^{n_{0}}\varphi(w,z_{i,0})\quad\text{ s.t. }\quad\frac{1}{n_{1}}\sum_{i=1}^{n_{1}}\varphi(w,z_{i,1})\leq r,\] where \(w\) is the weight parameter, \(\varphi\) is a loss function, \(\{z_{i,0}\}_{i=1}^{n_{0}}\) and \(\{z_{i,1}\}_{i=1}^{n_{1}}\) are the training data from two separate classes 0 and 1, respectively, and \(r>0\) controls the training error for class 1. The Neyman-Pearson classification model is introduced as a statistical learning model for handling asymmetric training error priorities. **Learning with fairness:** Incorporating fairness constraints into the training of machine learning models is widely recognized as an important approach to ensure the models' trustworthiness (Agarwal et al., 2018; Celis et al., 2019; Mehrabi et al., 2021). Training a model with fairness constraints is usually formulated as follows: \[\min_{w}\frac{1}{n}\sum_{i=1}^{n}\varphi(w,z_{i})\quad\text{s.t. }\quad\min_{1\leq j\leq k}p_{j}(w,\{z_{i}\}_{i=1}^{n})\geq\rho\max_{1\leq j\leq k}p_{j}(w,\{z_{i}\}_{i=1}^{n}),\] where \(w\) is the weight parameter, \(\varphi\) is a loss function, \(p_{j}\), \(1\leq j\leq k\), are performance metrics, \(\rho\in[0,1]\) is the targeted fairness level, and \(\{z_{i}\}\) is the training data set. In the FL literature, many efforts have been devoted to mitigating class imbalance (Shen et al., 2021) and improving model fairness (Du et al., 2021; Zhang et al., 2020; Chu et al., 2021; Galvez et al., 2021) through the application of constrained optimization models. Nevertheless, these algorithms are often specialized to particular use cases and suffer from a lack of computational complexity guarantees for achieving consensus, optimality, and feasibility in their solutions. The main goals of this paper are twofold: (1) to investigate a general optimization problem with convex constraints in an FL setting; (2) to develop an FL algorithm with complexity guarantees for finding its solution. Specifically, we consider the following general optimization formulation of FL problems with global and local convex conic constraints 1: Footnote 1: Distributed optimization with global and local constraints has been studied before in the literature (e.g., see Zhu & Martinez (2011); Nedic et al. (2010)). \[\min_{w}\left\{\sum_{i=1}^{n}f_{i}(w)+h(w)\right\}\quad\text{s.t. }\quad\underbrace{c_{0}(w)\preceq_{\mathcal{K}_{0}}0}_{\text{global constraint}},\quad\underbrace{c_{i}(w)\preceq_{\mathcal{K}_{i}}0,\ 1\leq i\leq n}_{\text{local constraints}}, \tag{1}\] where the functions \(f_{i}:\mathbb{R}^{d}\to\mathbb{R}\), \(1\leq i\leq n\), and \(h:\mathbb{R}^{d}\to(-\infty,\infty]\) are proper, closed and convex, \(\mathcal{K}_{i}\subseteq\mathbb{R}^{m_{i}}\), \(0\leq i\leq n\), are closed convex cones, the symbol \(\preceq_{\mathcal{K}_{i}}\) stands for the partial order induced by \(\mathcal{K}_{i}\), that is \(u\preceq_{\mathcal{K}_{i}}v\) if and only if \(v-u\in\mathcal{K}_{i}\), and the mappings \(c_{i}:\mathbb{R}^{d}\to\mathbb{R}^{m_{i}}\), \(0\leq i\leq n\), are convex with respect to \(\mathcal{K}_{i}\) for each \(0\leq i\leq n\), that is, \[c_{i}(\alpha u+(1-\alpha)v)\preceq_{\mathcal{K}_{i}}\alpha c_{i}(u)+(1-\alpha)c_{i}(v),\quad\forall u,v\in\mathbb{R}^{d},\ \alpha\in[0,1],\ 0\leq i\leq n.\] We remark that the convexity assumption is necessary for our initial theoretical exploration of FL with constraints. 
Also, we explore the applicability of our FL algorithm to classification tasks with nonconvex fairness constraints in Section 5.3. The global constraint in (1), namely \(c_{0}(w)\preceq_{\mathcal{K}_{0}}0\), refers to a constraint that can be directly accessed by the central server. The local constraints in (1), namely \(c_{i}(w)\preceq_{\mathcal{K}_{i}}0\) for \(1\leq i\leq n\), refer to constraints that depend on the local data that clients used for training the model. Throughout this paper, we assume that _for each \(1\leq i\leq n\), the local objective \(f_{i}\) and local constraint \(c_{i}\) are handled solely by the local client \(i\), and the central server has access to the global constraint \(c_{0}\)._ This assumption generalizes the one commonly imposed for unconstrained FL, where each local objective function is solely handled by one local client. It is worth mentioning that when \(\mathcal{K}_{i}=\mathbb{R}_{+}^{\widehat{m}_{1}}\times\{0\}^{\widehat{m}_{2}}\) for some \(\widehat{m}_{1},\widehat{m}_{2}\geq 0\), and \(c_{i}(w)=[\hat{c}_{1}(w),\ldots,\hat{c}_{\widehat{m}_{1}}(w),\bar{c}_{1}(w),\ldots,\bar{c}_{\widehat{m}_{2}}(w)]^{T}\) with convex \(\hat{c}_{j}\)'s and affine \(\bar{c}_{s}\)'s, the relation \(c_{i}(w)\preceq_{\mathcal{K}_{i}}0\) reduces to \(\hat{c}_{j}(w)\leq 0\) for \(1\leq j\leq\widehat{m}_{1}\) and \(\bar{c}_{s}(w)=0\) for \(1\leq s\leq\widehat{m}_{2}\). Therefore, problem (1) covers FL problems with ordinary convex equality and inequality constraints. Moreover, our model (1) is tailored for scenarios where local clients have a sufficient amount of reliable data points to establish their own local constraints. Meanwhile, to enhance the generalization property, the central server forms a global constraint by incorporating certain public or external data points. Additionally, it is noteworthy that solving an FL problem with \(n\) local constraints, such as \(c_{i}(w)\leq r\), \(1\leq i\leq n\), always yields a feasible solution for the coupled constraints involving data points from all local clients, such as \(1/n\sum_{i=1}^{n}c_{i}(w)\leq r\). Due to the sophistication of the constraints in problem (1), existing FL algorithms face challenges when attempting to apply or extend them directly to solve (1). For example, a natural approach for this problem is to adopt existing FL algorithms to minimize the quadratic penalty function associated with (1). However, to ensure global convergence to a solution for (1), it is often necessary to minimize a sequence of penalty functions with sufficiently large penalty parameters, rendering the solution process highly unstable and inefficient (e.g., see Nocedal & Wright (2006)). Moreover, in the centralized setting, Lagrangian methods are frequently employed for constrained optimization in deep learning (e.g., see Cotter et al. (2019)). However, these methods often require careful tuning of initial multipliers and step-sizes for the multipliers. In contrast, we propose an FL algorithm grounded in the proximal augmented Lagrangian (AL) method. This algorithm efficiently and robustly finds an \((\epsilon_{1},\epsilon_{2})\)-KKT solution of (1) (see Definition 1 for its definition). At each iteration of this algorithm, a fixed penalty parameter is employed, and an approximate solution to a proximal AL subproblem associated with (1) is computed by an inexact alternating direction method of multipliers (ADMM) in a federated manner. 
We study the worst-case complexity of this algorithm under a _locally Lipschitz_ assumption on \(\nabla f_{i}\), \(1\leq i\leq n\), and \(\nabla c_{i}\), \(0\leq i\leq n\). Our main contributions are highlighted below. * We propose a proximal AL based FL algorithm (Algorithm 1) for seeking an approximate KKT solution of problem (1). The proposed algorithm naturally generalizes the current FL algorithms designed for unconstrained finite-sum optimization (see problem (2) below). Under a _locally Lipschitz_ condition and mild assumptions, we establish the worst-case complexity for finding an \((\epsilon_{1},\epsilon_{2})\)-KKT solution of problem (1). To the best of our knowledge, the proposed algorithm is the first one for FL with global and local constraints, and its complexity results are entirely new in the literature. * We conduct numerical experiments by comparing our proximal AL based FL algorithm with existing FL algorithms on several real-world constrained learning tasks including binary classification with specified recall and classification with nonconvex fairness constraints. Our numerical results validate that our FL algorithm can achieve solution quality comparable to the centralized algorithm. * We propose an inexact ADMM based FL algorithm (Algorithm 2) for solving an unconstrained finite-sum optimization problem (see problem (13) below). Equipped with a newly introduced verifiable termination criterion, Algorithm 2 serves as a subproblem solver for Algorithm 1. We establish a global linear convergence rate for this algorithm under the assumptions of strongly convex local objectives and _locally_ Lipschitz continuous gradients. ### Related works **FL algorithms for unconstrained optimization:** Federated learning has emerged as a cornerstone technique for privacy-preserved distributed learning since Google proposed the seminal work (McMahan et al., 2017). Unlike traditional centralized learning methods, FL enables the training of models with distributed edge clients, ranging from small mobile devices like phones (Mills et al., 2019) to large data providers such as hospitals and banks (Long et al., 2020). This inherent property of privacy preservation aligns seamlessly with the principles upheld by various critical domains, including healthcare (Rieke et al., 2020; Peng et al., 2023;b), finance (Long et al., 2020), IoT (Mills et al., 2019), and transportation Liu et al. (2020), where safeguarding data privacy is essential. FedAvg, introduced by McMahan et al. (2017), is the first and also the most widely applied FL algorithm. It was proposed for solving the unconstrained finite-sum optimization problem: \[\min_{w}f(w)=\sum_{i=1}^{n}f_{i}(w). \tag{2}\] Since then, many variants have been proposed to tackle various practical issues, such as data heterogeneity (Karimireddy et al., 2020; Li et al., 2021; Zhang et al., 2021), system heterogeneity (Li et al., 2020; Wang et al., 2020; Gong et al., 2022), fairness (Li et al., 2021), efficiency (Sattler et al., 2019; Konecny et al., 2016), and incentives (Travadi et al., 2023). For example, Li et al. (2020) proposed FedProx by adding a proximal term in the local objective to handle clients with different computation capabilities. Karimireddy et al. (2020) proposed Scallfold to address the issue of date heterogeneity where local data is non-independent and identically distributed (non-iid). Additionally, ADMM based FL algorithms have been proposed in Acar et al. (2021); Gong et al. (2022); Zhang et al. (2021); Wang et al. 
(2022); Zhou and Li (2023), and these methods have been shown to be inherently resilient to heterogeneity. Reddi et al. (2020) extended FedAvg by introducing adaptive optimizers for server aggregation, significantly reducing communication costs and improving FL scalability. Li et al. (2021) proposed Ditto, a personalized FL framework that demonstrates improved client fairness and robustness. More variants of FL algorithms and their applications can be found in the survey (Li et al., 2021). Despite the numerous FL algorithms proposed previously, they primarily focus on unconstrained FL problems, leaving a gap between constrained optimization and FL. **Centralized algorithms for constrained optimization:** Recent years have witnessed fruitful algorithmic developments for constrained optimization in the centralized setting. In particular, there has been a rich literature on inexact AL methods for solving convex constrained optimization problems (e.g., see Aybat and Iyengar (2013); Necoara et al. (2019); Patrascu et al. (2017); Xu (2021); Lan and Monteiro (2016); Lu and Zhou (2023); Lu and Mei (2023)). In addition, AL methods have also been extended to solve nonconvex constrained optimization problems (e.g., see Hong et al. (2017); Grapiglia and Yuan (2021); Birgin and Martinez (2020); Kong et al. (2023); Li et al. (2021); He et al. (2023)). Besides, sequential quadratic programming methods (Boggs and Tolle, 1995; Curtis and Overton, 2012), trust-region methods (Byrd et al., 1987; Powell and Yuan, 1991), and interior point method (Wachter and Biegler, 2006) have also been proposed for solving constrained optimization problems. **Distributed algorithms for constrained optimization:** In another line of research, many algorithms have been developed for distributed optimization with global and local constraints. An early work Nedic et al. (2010) introduced a distributed projected subgradient algorithm for distributed optimization with local constraints. This work has been extended to handle scenarios involving time-varying directed graphs in Lin et al. (2016); Wang et al. (2017). Yet, these methods require each node to compute a projection on the local constraint set, which is applicable only to relatively simple constraints. To address more complicated constraints, distributed primal-dual algorithms were developed in Aybat and Hamedani (2016, 2019) for distributed convex optimization with conic constraints. In addition, primal-dual projected subgradient algorithms Zhu and Martinez (2011); Yuan et al. (2011) have been developed for distributed optimization with global and local constraints. For an overview of algorithmic developments in distributed optimization with constraints, we refer to Yang et al. (2019). We emphasize that the existing algorithms for constrained distributed optimization do not follow the common FL framework where clients perform multiple local updates before aggregating the global model. The algorithm development in this paper follows a distinct trajectory compared to them. In contrast to the aforementioned algorithms, this paper focuses on FL problems with global and local convex conic constraints. To the best of our knowledge, this work provides the first exploration in such a setting. ## 2 Notation and preliminaries Throughout this paper, we let \(\mathbb{R}^{d}\) and \(\mathbb{R}^{d}_{+}\) denote the \(d\)-dimensional Euclidean space and nonnegative orthant, respectively. Let \(\langle\cdot,\cdot\rangle\) denote the standard inner product. 
Let \(\|\cdot\|\) stand for the Euclidean norm of a vector or the spectral norm of a matrix, and \(\|\cdot\|_{\infty}\) stand for the \(\ell_{\infty}\)-norm of a vector. In addition, \(\widetilde{\mathcal{O}}(\cdot)\) represents \(\mathcal{O}(\cdot)\) with logarithmic terms omitted. Given a closed convex function \(h:\mathbb{R}^{d}\to(-\infty,\infty]\), \(\partial h\) and \(\mathrm{dom}(h)\) denote the subdifferential and domain of \(h\), respectively. The proximal operator associated with \(h\) is denoted by \(\mathrm{prox}_{h}\), that is, \(\mathrm{prox}_{h}(u)=\arg\min_{w}\!\left\{\|w-u\|^{2}/2+h(w)\right\}\) for all \(u\in\mathbb{R}^{d}\). Given a continuously differentiable mapping \(\phi:\mathbb{R}^{d}\!\to\mathbb{R}^{p}\), let the transpose of its Jacobian be denoted as \(\nabla\phi(w)=[\nabla\phi_{1}(w)\;\cdots\;\nabla\phi_{p}(w)]\in\mathbb{R}^{d \times p}\). We say that \(\nabla\phi\) is \(L\)-Lipschitz continuous on a set \(\Omega\) for some \(L>0\) if \(\|\nabla\phi(u)-\nabla\phi(v)\|\leq L\|u-v\|\) for all \(u,v\in\Omega\). In addition, we say that \(\nabla\phi\) is locally Lipschitz continuous on \(\Omega\) if for any \(w\in\Omega\), there exist \(L_{w}>0\) and an open set \(\mathcal{U}_{w}\) containing \(w\) such that \(\nabla\phi\) is \(L_{w}\)-Lipschitz continuous on \(\mathcal{U}_{w}\). Given a nonempty closed convex set \(\mathcal{C}\subseteq\mathbb{R}^{d}\), let \(\mathrm{dist}(u,\mathcal{C})\) and \(\mathrm{dist}_{\infty}(u,\mathcal{C})\) stand for the Euclidean distance and the Chebyshev distance from \(u\) to \(\mathcal{C}\), respectively, and let \(\Pi_{\mathcal{C}}(u)\) stand for the Euclidean projection of \(u\) onto \(\mathcal{C}\), that is, \(\mathrm{dist}(u,\mathcal{C})=\min_{v\in\mathcal{C}}\|u-v\|\), \(\mathrm{dist}_{\infty}(u,\mathcal{C})=\min_{v\in\mathcal{C}}\|u-v\|_{\infty}\), and \(\Pi_{\mathcal{C}}(u)=\arg\min_{v\in\mathcal{C}}\|u-v\|\) for all \(u\in\mathbb{R}^{d}\). The normal cone of \(\mathcal{C}\) at \(u\in\mathcal{C}\) is denoted by \(\mathcal{N}_{\mathcal{C}}(u)\). The Minkowski sum of two sets of vectors \(\mathcal{B}\) and \(\mathcal{C}\) in Euclidean space is denoted as \(\mathcal{B}+\mathcal{C}=\{b+c:b\in\mathcal{B},c\in\mathcal{C}\}\). For a closed convex cone \(\mathcal{K}\subseteq\mathbb{R}^{m}\), we use \(\mathcal{K}^{*}\) to denote the dual cone of \(\mathcal{K}\), that is, \(\mathcal{K}^{*}=\{u\in\mathbb{R}^{m}:\langle u,w\rangle\geq 0,\;\forall w\in \mathcal{K}\}\). For the ease of presentation, we let \(m=\sum_{i=0}^{n}m_{i}\) and adopt the following notations throughout this paper: \[f(w)=\sum_{i=1}^{n}f_{i}(w),\;\;c(w)=[c_{0}(w)^{T}\;\cdots\;c_{n}(w)^{T}]^{T}, \;\;\mu=[\mu_{0}^{T}\;\cdots\;\mu_{n}^{T}]^{T},\;\;\mathcal{K}=\mathcal{K}_{0 }\times\cdots\times\mathcal{K}_{n}\,. \tag{3}\] We make the following additional assumptions on problem (1) throughout this paper. **Assumption 1**.: 1. _The proximal operator for_ \(h\) _and the projection onto_ \(\mathcal{K}^{*}\) _can be exactly evaluated._ 2. _The functions_ \(f_{i}\)_,_ \(1\leq i\leq n\)_, and mappings_ \(c_{i}\)_,_ \(0\leq i\leq n\)_, are continuously differentiable, and_ \(\nabla f_{i}\)_,_ \(1\leq i\leq n\)_, and_ \(\nabla c_{i}\)_,_ \(0\leq i\leq n\)_, are locally Lipschitz continuous on_ \(\mathbb{R}^{d}\)_._ 3. 
_The strong duality holds for problem (_1_) and its dual problem_ \[\sup_{\mu\in\mathcal{K}^{*}}\inf\left\{f(w)+h(w)+\langle\mu,c(w)\rangle\right\}.\] (4) _That is, both problems have optimal solutions, and moreover, their optimal values coincide._ Assumption 1(b) holds if \(\nabla f_{i}\), \(1\leq i\leq n\), and \(\nabla c_{i}\), \(0\leq i\leq n\), are globally Lipschitz continuous on \(\mathbb{R}^{d}\). This assumption holds for a broader class of functions. For example, the gradient of the quadratic penalty term associated with (1), namely \(\|c(w)\|^{2}\), is merely locally Lipschitz continuous on \(\mathbb{R}^{d}\) even if \(\nabla c\) is globally Lipschitz continuous on \(\mathbb{R}^{d}\) (see Section 3). Additionally, the gradient of a convex high-degree polynomial function is merely locally Lipschitz continuous on \(\mathbb{R}^{d}\), but not globally Lipschitz continuous on \(\mathbb{R}^{d}\). Under Assumption 1, it is known that \((w,\mu)\in\mathrm{dom}(h)\times\mathcal{K}^{*}\) is a pair of optimal solutions of (1) and (4) if and only if it satisfies (e.g., see Lu & Zhou (2023)) \[0\in\begin{pmatrix}\nabla f(w)+\partial h(w)+\nabla c(w)\mu\\ c(w)-\mathcal{N}_{\mathcal{K}^{*}}(\mu)\end{pmatrix}. \tag{5}\] In general, it is hard to find an exact optimal solution of (1) and (4). Therefore, we are instead interested in seeking an approximate KKT solution of problems (1) and (4) defined as follows. **Definition 1**.: _Given any \(\epsilon_{1},\epsilon_{2}>0\), we say \((w,\mu)\in\mathrm{dom}(h)\times\mathcal{K}^{*}\) is an \((\epsilon_{1},\epsilon_{2})\)-KKT solution of problems (1) and (4) if \(\mathrm{dist}_{\infty}\left(0,\nabla f(w)+\partial h(w)+\nabla c(w)\mu\right) \leq\epsilon_{1}\) and \(\mathrm{dist}_{\infty}\left(c(w),\mathcal{N}_{\mathcal{K}^{*}}(\mu)\right) \leq\epsilon_{2}\)._ This definition is consistent with the \(\epsilon\)-KKT solution considered in Lu & Zhou (2023) except that Definition 1 uses the Chebyshev distance rather than the Euclidean distance, and two different tolerances \(\epsilon_{1},\epsilon_{2}\) are used for measuring stationarity and feasibility violation, respectively. ## 3 A proximal AL based FL algorithm for solving (1) In this section we propose a proximal AL based FL algorithm for solving (1). Specifically, we describe this algorithm in Section 3.1, and then analyze its complexity results in Section 3.2. ### Algorithm description In this subsection we describe the proximal AL based FL algorithm (Algorithm 1) for finding an \((\epsilon_{1},\epsilon_{2})\)-KKT solution of problem (1) for any prescribed \(\epsilon_{1},\epsilon_{2}\in(0,1)\). This algorithm follows a similar framework to a centralized proximal AL method. At each iteration, it applies an inexact ADMM (see Algorithm 2 below) to find an approximate solution \(w^{k+1}\) to the proximal AL subproblem associated with problem (1): \[\min_{w}\Bigg{\{}\ell_{k}(w):=\sum_{i=1}^{n}f_{i}(w)+h(w)+\frac{1}{2\beta}\sum_ {i=0}^{n}[\text{dist}^{2}(\mu_{i}^{k}+\beta c_{i}(w),-\mathcal{K}_{i})-\|\mu_{i }^{k}\|^{2}]+\frac{1}{2\beta}\|w-w^{k}\|^{2}\Bigg{\}}. \tag{6}\] Then the multiplier estimates are updated according to the classical scheme: \[\mu_{i}^{k+1}=\Pi_{\mathcal{K}_{i}^{*}}(\mu_{i}^{k}+\beta c_{i}(w^{k+1})), \quad 0\leq i\leq n.\] ``` 0: tolerances \(\epsilon_{1},\epsilon_{2}\in(0,1)\), \(w^{0}\in\text{dom}(h)\), \(\mu_{i}^{0}\in\mathcal{K}_{i}\) for \(0\leq i\leq n\), \(\bar{s}>0\), and \(\beta>0\). for\(k=0,1,2,\ldots\)do Set \(\tau_{k}=\bar{s}/(k+1)^{2}\). 
Call Algorithm 2 (see Section 4 below) with \((\tau,\tilde{w}^{0})=(\tau_{k},w^{k})\) to find an approximate solution \(w^{k+1}\) to (9) in a federated manner such that \[\text{dist}_{\infty}(0,\partial\ell_{k}(w^{k+1}))\leq\tau_{k}.\] (7) Server update: The central server updates \(\mu_{0}^{k+1}=\Pi_{\mathcal{K}_{0}^{*}}(\mu_{0}^{k}+\beta c_{0}(w^{k+1}))\). Communication (broadcast): Each local client \(i\), \(1\leq i\leq n\), receives \(w^{k+1}\) from the server. Client update (local): Each local client \(i\), \(1\leq i\leq n\), updates \(\mu_{i}^{k+1}=\Pi_{\mathcal{K}_{i}^{*}}(\mu_{i}^{k}+\beta c_{i}(w^{k+1}))\). Termination (server side): Output \((w^{k+1},\mu^{k+1})\) and terminate the algorithm if \[\|w^{k+1}-w^{k}\|_{\infty}+\beta\tau_{k}\leq\beta\epsilon_{1},\qquad\|\mu^{k+1 }-\mu^{k}\|_{\infty}\leq\beta\epsilon_{2}.\] (8) endfor ``` **Algorithm 1** A proximal AL based FL algorithm for solving problem (1) Notice that the subproblem (6) can be rewritten as \[\min_{w}\left\{\ell_{k}(w)=\sum_{i=0}^{n}P_{i,k}(w)+h(w)\right\}, \tag{9}\] where \(P_{i,k}\), \(0\leq i\leq n\), are defined as \[P_{0,k}(w) :=\frac{1}{2\beta}[\text{dist}^{2}(\mu_{0}^{k}+\beta c_{0}(w),- \mathcal{K}_{0})-\|\mu_{0}^{k}\|^{2}]+\frac{1}{2(n+1)\beta}\|w-w^{k}\|^{2}, \tag{10}\] \[P_{i,k}(w) :=f_{i}(w)+\frac{1}{2\beta}[\text{dist}^{2}(\mu_{i}^{k}+\beta c_{ i}(w),-\mathcal{K}_{i})-\|\mu_{i}^{k}\|^{2}]+\frac{1}{2(n+1)\beta}\|w-w^{k}\|^{2}, \quad\forall 1\leq i\leq n. \tag{11}\] When Algorithm 2 (see Section 4) is applied to solve problem (9), the local merit function \(P_{i,k}\), constructed from the local objective \(f_{i}\) and local constraint \(c_{i}\), is handled by the respective local client \(i\), while the merit function \(P_{0,k}\) is handled by the central server. Hence, Algorithm 2 is well-suited for the FL framework that the local objective \(f_{i}\) and local constraint \(c_{i}\) are handled by the local client \(i\), and the central server performs aggregation and handles the global constraint \(c_{0}\). In addition, the following lemma shows that \(\nabla P_{i,k}\), \(0\leq i\leq n\), are locally Lipschitz continuous on \(\mathbb{R}^{d}\). Its proof is deferred to Appendix B.1. **Lemma 3.1** (**Local Lipschitz continuity of \(\nabla P_{i,k}\))**.: _Suppose that Assumption 1 holds. Then the gradients \(\nabla P_{i,k}\), \(0\leq i\leq n\), are locally Lipschitz continuous on \(\mathbb{R}^{d}\)._ **Remark 3.1**.: _It is worth noting that \(\nabla P_{i,k}\), \(0\leq i\leq n\), are typically not globally Lipschitz continuous on \(\mathbb{R}^{d}\) even if \(\nabla f_{i}\), \(1\leq i\leq n\), and \(\nabla c_{i}\), \(0\leq i\leq n\), are globally Lipschitz continuous on \(\mathbb{R}^{d}\). For example, consider the case where \(c_{0}(w)=\|w\|^{2}-1\) and \(\mathcal{K}_{0}=\mathbb{R}_{+}\). By (10), one has that_ \[\nabla P_{0,k}(w)=2[\mu_{0}^{k}+\beta(\|w\|^{2}-1)]_{+}w+\frac{1}{(n+1)\beta} (w-w^{k}).\] _In this case, it is not hard to verify that \(\nabla c_{0}\) is globally Lipschitz continuous on \(\mathbb{R}^{d}\), but \(\nabla P_{0,k}\) is not._ For ease of later reference, we refer to the update from \(w^{k}\) to \(w^{k+1}\) as one outer iteration of Algorithm 1, and call one iteration of Algorithm 2 for solving (6) one inner iteration of Algorithm 1. In the rest of this section, we study the following measures of complexity for Algorithm 1. 
* _Outer iteration complexity_, which measures the number of outer iterations of Algorithm 1; * _Total inner iteration complexity_, which measures the total number of iterations of Algorithm 2 that are performed in Algorithm 1. The following theorem concerns the output of Algorithm 1, whose proof is deferred to Appendix B.2. **Theorem 3.1** (**Output of Algorithm 1**).: _If Algorithm 1 successfully terminates, its output \((w^{k+1},\mu^{k+1})\) is an \((\epsilon_{1},\epsilon_{2})\)-KKT solution of problem (1)._ ### Complexity analysis of Algorithm 1 In this subsection we establish the outer and total inner iteration complexity for Algorithm 1. To proceed, we let \((w^{*},\mu^{*})\) be any pair of optimal solutions of problems (1) and (4) and fixed throughout this section. Below, we establish a lemma to show that all the iterates generated by Algorithm 1 are bounded. Its proof can be found in Appendix B.3. **Lemma 3.2** (**Bounded iterates of Algorithm 1**).: _Suppose that Assumption 1 holds. Let \(\{w^{k}\}_{k\in\mathbb{K}}\) be all the iterates generated by Algorithm 1, where \(\mathbb{K}\) is a subset of consecutive nonnegative integers starting from \(0\). Then we have \(w^{k}\in\mathcal{Q}_{1}\) for all \(k\in\mathbb{K}\), where_ \[\mathcal{Q}_{1}=\{w\in\mathbb{R}^{d}:\|w-w^{*}\|\leq r_{0}+2\bar{s}\beta\}, \quad r_{0}=\|(w^{0},\mu^{0})-(w^{*},\mu^{*})\|, \tag{12}\] _and \(w^{0}\), \(\mu^{0}\), \(\bar{s}\), and \(\beta\) are inputs of Algorithm 1._ The following theorem states the worst-case complexity results of Algorithm 1, whose proof is relegated to Appendix B.4. **Theorem 3.2** (**Complexity results of Algorithm 1**).: _Suppose that Assumption 1 holds. Then the number of outer iteration of Algorithm 1 is at most \(\mathcal{O}(\max\{\epsilon_{1}^{-2},\epsilon_{2}^{-2}\})\), and the total number of inner iterations of Algorithm 1 is at most \(\widetilde{\mathcal{O}}(\max\{\epsilon_{1}^{-2},\epsilon_{2}^{-2}\})\)._ **Remark 3.2**.: * _To the best of our knowledge, Theorem_ 3.2 _provides the first worst-case complexity results for finding an approximate KKT solution of problem (_1_) in an FL framework._ * _Each iteration of Algorithm_ 1 _requires one communication round._ ## 4 An inexact ADMM for FL In this section we propose an inexact ADMM based FL algorithm for solving a class of finite-sum optimization problems. This algorithm is used as a subproblem solver for the proximal AL based FL algorithm (Algorithm 1). In particular, we consider the following regularized unconstrained finite-sum optimization problem: \[\min_{w}\left\{F_{h}(w):=\sum_{i=0}^{n}F_{i}(w)+h(w)\right\}, \tag{13}\] where \(F_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), \(0\leq i\leq n\), are continuously differentiable and convex functions. Throughout this section, we assume that the central server has access to \(F_{0}\), and for each \(1\leq i\leq n\), the local objective \(F_{i}\) is handled solely by the local client \(i\). ### Algorithm description ``` 0: tolerance \(\tau\in(0,1]\), \(q\in(0,1)\), \(\tilde{w}^{0}\in\text{dom}(h)\), and \(\rho_{i}>0\) for \(1\leq i\leq n\); Sets \(w^{0}=\tilde{w}^{0}\), and \((u_{i}^{0},\lambda_{i}^{0},\tilde{u}_{i}^{0})=(\tilde{w}^{0},-\nabla F_{i}( \tilde{w}^{0}),\tilde{w}^{0}-\nabla F_{i}(\tilde{w}^{0})/\rho_{i})\) for \(1\leq i\leq n\). 
for\(t=0,1,2,\ldots\)do Set \(\varepsilon_{t+1}=q^{t}\); Server update: The central server finds an approximate solution \(w^{t+1}\) to \[\min_{w}\left\{\varphi_{0,t}(w)=F_{0}(w)+h(w)+\sum_{i=1}^{n}\left[\frac{\rho_ {i}}{2}\|\tilde{u}_{i}^{t}-w\|^{2}\right]\right\}\] (14) such that \(\text{dist}_{\infty}(0,\partial\varphi_{0,t}(w^{t+1}))\leq\varepsilon_{t+1}\). Communication (broadcast): Each local client \(i\), \(1\leq i\leq n\), receives \(w^{t+1}\) from the server. Client update (local): Each local client \(i\), \(1\leq i\leq n\), finds an approximate solution \(u_{i}^{t+1}\) to \[\min_{u_{i}}\left\{\varphi_{i,t}(u_{i})=F_{i}(u_{i})+\langle\lambda_{i}^{t},u_ {i}-w^{t+1}\rangle+\frac{\rho_{i}}{2}\|u_{i}-w^{t+1}\|^{2}\right\}\] (15) such that \(\|\nabla\varphi_{i,t}(u_{i}^{t+1})\|_{\infty}\leq\varepsilon_{t+1}\), and then updates \[\lambda_{i}^{t+1} =\lambda_{i}^{t}+\rho_{i}(u_{i}^{t+1}-w^{t+1}),\] (16) \[\tilde{u}_{i}^{t+1} =u_{i}^{t+1}+\lambda_{i}^{t+1}/\rho_{i},\] (17) \[\tilde{\varepsilon}_{i,t+1} =\|\nabla\varphi_{i,t}(w^{t+1})-\rho_{i}(w^{t+1}-u_{i}^{t})\|_{ \infty}.\] (18) Communication: Each local client \(i\), \(1\leq i\leq n\), sends \((\tilde{u}_{i}^{t+1},\tilde{\varepsilon}_{i,t+1})\) back to the central server. Termination (server side): Output \(w^{t+1}\) and terminate this algorithm if \[\varepsilon_{t+1}+\sum_{i=1}^{n}\tilde{\varepsilon}_{i,t+1}\leq\tau.\] (19) endfor ``` **Algorithm 2** An inexact ADMM based FL algorithm for solving problem (13) In this subsection we propose an inexact ADMM based FL algorithm (Algorithm 2) for solving problem (13). Since each participating client \(i\) handles one local objective \(F_{i}\) independently, we obtain the following equivalent consensus reformulation for problem (13): \[\min_{w,u_{i}}\left\{\sum_{i=1}^{n}F_{i}(u_{i})+F_{0}(w)+h(w)\right\}\quad \text{s.\,t.}\quad u_{i}=w,\quad 1\leq i\leq n, \tag{20}\] which allows each local client \(i\) to handle the local variable \(u_{i}\) and the local objective function \(F_{i}\) while imposing consensus constraints that force clients' local parameters \(u_{i}\) equal to the global parameter \(w\). This reformulation enables the applicability of an inexact ADMM that solves problem (20) in a federated manner. At each iteration, an ADMM optimizes the AL function associated with (20): \[\mathcal{L}_{F}(w,u,\lambda):=\sum_{i=1}^{n}\left[F_{i}(u_{i})+\langle\lambda_ {i},u_{i}-w\rangle+\frac{\rho_{i}}{2}\|u_{i}-w\|^{2}\right]+F_{0}(w)+h(w) \tag{21}\] with respect to the variables \(w\), \(u\), and \(\lambda\) alternately, where \(u=[u_{1}^{T},\ldots,u_{n}^{T}]^{T}\) and \([\lambda_{1}^{T},\ldots,\lambda_{n}^{T}]^{T}\) collect all the local parameters and the multipliers associated with the consensus constraints, respectively. Specifically, at the iteration \(t\), one performs \[w^{t+1} \approx\operatorname*{arg\,min}_{w}\mathcal{L}_{F}(w,u^{t},\lambda^ {t}), \tag{22}\] \[u^{t+1} \approx\operatorname*{arg\,min}_{u}\mathcal{L}_{F}(w^{t+1},u, \lambda^{t}),\] (23) \[\lambda_{i}^{t+1} =\lambda_{i}^{t}+\rho_{i}(u_{i}^{t+1}-w^{t+1}),\quad\forall 1\leq i \leq n.\] By the definition of \(\mathcal{L}_{F}\) in (21), one can verify that the step (22) is equivalent to (14), and also the step (23) can be computed in parallel, which corresponds to (15). 
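For illustration, the following is a minimal NumPy sketch of one round of the updates (22), (23), and the multiplier step (16) on a toy consensus problem in which \(F_{i}(u_{i})=\frac{1}{2}\|u_{i}-a_{i}\|^{2}\), \(F_{0}=0\), \(h=0\), and a common \(\rho_{i}=\rho\) (assumptions made only so that both subproblems have closed-form minimisers); it omits the inexact subproblem solves and the termination test (18)-(19) of Algorithm 2.

```python
import numpy as np

# Illustrative quadratic local objectives F_i(u) = 0.5 * ||u - a_i||^2 (assumption for this sketch);
# F_0 = 0 and h = 0, so both ADMM subproblems have closed-form minimisers.
rng = np.random.default_rng(0)
n, d, rho = 5, 3, 1.0
a = rng.standard_normal((n, d))                 # client-specific data
w = np.zeros(d)                                 # global model w^0
u = np.tile(w, (n, 1))                          # local models u_i^0
lam = -(u - a)                                  # lambda_i^0 = -grad F_i(w^0), as in the initialisation

for t in range(50):
    # Server update, step (22)/(14): with F_0 = h = 0, w minimises sum_i (rho/2)||u_i + lam_i/rho - w||^2,
    # i.e. the average of the aggregated points u_i + lam_i/rho (the tilde-u_i of (17)).
    w = (u + lam / rho).mean(axis=0)
    # Client updates, step (23)/(15), run in parallel: u_i minimises
    # F_i(u_i) + <lam_i, u_i - w> + (rho/2)||u_i - w||^2, giving (a_i + rho*w - lam_i)/(1 + rho).
    u = (a + rho * w - lam) / (1.0 + rho)
    # Multiplier update, step (16).
    lam = lam + rho * (u - w)

print("consensus residual:", np.abs(u - w).max())
print("gap to the centralised minimiser (mean of a_i):", np.abs(w - a.mean(axis=0)).max())
```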
Therefore, the updates of an ADMM naturally suit the FL framework, as the separable structure in (21) over the pairs \(\{(u_{i},\lambda_{i})\}\) enables the local update of \((u_{i},\lambda_{i})\) at each client \(i\) while \(w\) is updated by the central server. We now make some remarks about Algorithm 2. Since \(F_{i}\), \(0\leq i\leq n\), are convex, the subproblems (14) and (15) are strongly convex. Consequently, their approximate solutions \(w^{t+1}\) and \(u_{i}^{t+1}\), \(1\leq i\leq n\), can be found by a gradient-based algorithm with a global linear convergence rate. Furthermore, the value \(\check{\varepsilon}_{i,t+1}\) in (18) serves as a measure for local optimality and consensus for client \(i\). By summing the \(\check{\varepsilon}_{i,t+1}\) values for \(1\leq i\leq n\) and including \(\varepsilon_{t+1}\), one can obtain a stationarity measure for the current iterate (see (19)), as presented in the following theorem. Its proof can be found in Appendix A.1. **Theorem 4.1** (**output of Algorithm 2**).: _If Algorithm 2 terminates at some iteration \(t\), then its output \(w^{t+1}\) satisfies \(\operatorname{dist}_{\infty}(0,\partial F_{h}(w^{t+1}))\leq\tau\)._ As seen from Theorem 4.1, Algorithm 2 outputs a point that approximately satisfies the first-order optimality condition of problem (1). ### Complexity analysis of Algorithm 2 In this subsection we establish the iteration complexity for the inexact ADMM, namely, Algorithm 2. We now make the following additional assumptions on problem (13) throughout this section. **Assumption 2**.: (a) _The functions \(F_{i}\), \(0\leq i\leq n\), are continuously differentiable, and moreover, \(\nabla F_{i}\), \(0\leq i\leq n\), are locally Lipschitz continuous on \(\mathbb{R}^{d}\)._ (b): _The functions_ \(F_{i}\)_,_ \(0\leq i\leq n\)_, are strongly convex on_ \(\mathbb{R}^{d}\)_, that is, there exists some_ \(\sigma>0\) _such that_ \[\langle\nabla F_{i}(u)-\nabla F_{i}(v),u-v\rangle\geq\sigma\|u-v\|^{2},\quad \forall u,v\in\mathbb{R}^{d},\ 0\leq i\leq n.\] Recall from (10 and 11) that \(P_{i,k}\), \(0\leq i\leq n\), are strongly convex with modulus \(1/[(n+1)\beta]\). Using this and the discussions in Section 3.1, we see that problem (9) satisfies Assumption 2, and therefore, Algorithm 2 is applicable to (9). Moreover, it follows from Theorem 4.1 that Algorithm 2 with \((\tau,\tilde{w}^{0})=(\tau_{k},w^{k})\) is capable of finding an approximate solution \(w^{k+1}\) to (9) such that (7) holds. Notice that the local Lipschitz continuity assumption for \(\nabla F_{i}\), \(0\leq i\leq n\), in Assumption 2(a) is weaker compared to the prevalent assumption on global Lipschitz continuity. In addition, under Assumption 2(b), problem (13) is strongly convex and thus has a unique optimal solution. We refer to this optimal solution of (13) as \(\tilde{w}^{*}\) throughout this section. The following lemma shows that all the iterates generated by Algorithm 2 lie in a compact set. Its proof can be found in Appendix A.2. **Lemma 4.1** (**bounded iterates of Algorithm 2**).: _Suppose that Assumption 2 holds. Let \(\{u_{i}^{t+1}\}_{1\leq i\leq n,t\in\mathbb{T}}\) and \(\{w^{t+1}\}_{t\in\mathbb{T}}\) be all the iterates generated by Algorithm 2, where \(\mathbb{T}\) is a subset of consecutive nonnegative integers starting from \(0\). 
Then we have \(w^{t+1}\in\mathcal{Q}\) and \(u_{i}^{t+1}\in\mathcal{Q}\) for all \(1\leq i\leq n\) and \(t\in\mathbb{T}\), where_ \[\mathcal{Q}=\left\{v:\|v-\tilde{w}^{*}\|^{2}\leq\frac{n+1}{\sigma^{2}(1-q^{2}) }+\frac{1}{\sigma}\sum_{i=1}^{n}\left(\rho_{i}\|\tilde{w}^{*}-\tilde{w}^{0}\|^ {2}+\frac{1}{\rho_{i}}\|\nabla F_{i}(\tilde{w}^{*})-\nabla F_{i}(\tilde{w}^{0} )\|^{2}\right)\right\}. \tag{24}\] The iteration complexity of Algorithm 2 is established in the following theorem, whose proof is relegated to Appendix A.3. **Theorem 4.2** (**iteration complexity of Algorithm 2**).: _Suppose that Assumption 2 holds. Then Algorithm 2 terminates in at most \(\mathcal{O}(|\log\tau|)\) iterations._ **Remark 4.1**.: 1. _Algorithm_ 2 _requires one communication round per iteration._ 2. _As seen from Theorem_ 4.2_, Algorithm_ 2 _enjoys a global linear convergence rate for solving problem (_1_) under Assumption_ 2_. To our knowledge, this provides the first study on the global linear convergence rate of an inexact ADMM under the local Lipschitz continuity of_ \(\nabla F_{i}\)_,_ \(0\leq i\leq n\)_._ ## 5 Numerical experiments In this section we conducted some experiments to test the performance of our proposed proximal AL based FL algorithm (Algorithm 1). Specifically, we compare Algorithm 1 with a centralized proximal AL method (abbreviated as cProx-AL) for solving a simulated linear equality constrained quadratic programming problem in Section 5.1. In Sections 5.2 and 5.3, we evaluate the performance of Algorithm 1 and cProx-AL on real-world datasets for a Neyman-Pearson classification problem and a classification problem with nonconvex fairness constraints. All the experiments are conducted in a system environment with an AMD EPYC 7763 64-core processor, using Python for execution. ### Linear equality constrained quadratic programming In this subsection we consider the linear equality constrained quadratic programming problem: \[\min_{w}\ \sum_{i=1}^{n}\left(\frac{1}{2}w^{T}A_{i}w+b_{i}^{T}w\right) \quad\text{s.\,t.}\quad C_{i}w+d_{i}=0,\quad 0\leq i\leq n, \tag{25}\] where \(A_{i}\in\mathbb{R}^{d\times d}\), \(1\leq i\leq n\), are positive semidefinite, \(b_{i}\in\mathbb{R}^{d}\), \(1\leq i\leq n\), \(C_{i}\in\mathbb{R}^{\tilde{m}\times d}\), \(0\leq i\leq n\), and \(d_{i}\in\mathbb{R}^{\tilde{m}}\), \(0\leq i\leq n\). For each \((d,n,\tilde{m})\), we randomly generate 10 instances of problem (25). In particular, for each \(1\leq i\leq n\), we first randomly generate matrix \(A_{i}\) by letting \(A_{i}=U_{i}D_{i}U_{i}^{T}\), where \(D_{i}\in\mathbb{R}^{d\times d}\) is a diagonal matrix, whose diagonal entries are randomly generated according to the uniform distribution over \([5,10]\) and \(U_{i}\in\mathbb{R}^{d\times d}\) is a randomly generated orthogonal matrix. We then randomly generate \(b_{i}\), \(1\leq i\leq n\), \(C_{i}\), \(0\leq i\leq n\), and \(d_{i}\), \(0\leq i\leq n\), with all entries chosen from the standard normal distribution. Our aim is to apply Algorithm 1 and cProx-AL to find a \((10^{-3},10^{-3})\)-KKT solution of problem (25), and compare their performance. In particular, we exactly solve the convex quadratic programming subproblems (14) and (15) arising in Algorithm 1. In addition, cProx-AL method follows the same framework as Algorithm 1 except that the \(w^{k+1}\) is obtained by solving (9) using a centralized method. 
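As a concrete reference for the instance generation described above, the following minimal NumPy sketch draws one random instance of problem (25); the function name, seed, and variable names are ours for illustration only.

```python
import numpy as np

def generate_instance(d, n, m_tilde, seed=0):
    """Randomly generate one instance of problem (25) as described above."""
    rng = np.random.default_rng(seed)
    A, b = [], []
    for _ in range(n):
        # A_i = U_i D_i U_i^T with eigenvalues uniform on [5, 10] and U_i orthogonal
        D = np.diag(rng.uniform(5.0, 10.0, size=d))
        U, _ = np.linalg.qr(rng.standard_normal((d, d)))
        A.append(U @ D @ U.T)
        b.append(rng.standard_normal(d))
    # C_i and d_i for i = 0, ..., n with standard normal entries
    C = [rng.standard_normal((m_tilde, d)) for _ in range(n + 1)]
    dvec = [rng.standard_normal(m_tilde) for _ in range(n + 1)]
    return A, b, C, dvec

A, b, C, dvec = generate_instance(d=100, n=5, m_tilde=1)
```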
We set parameters for Algorithm 1 and cProx-AL as \(w^{0}=(1,\ldots,1)^{T}\), \(\mu_{i}^{0}=(0,\ldots,0)^{T}\) \(\forall 0\leq i\leq n\), \(\bar{s}=0.01\) and \(\beta=1\). We also set \(\rho_{i}=1\) \(\forall 1\leq i\leq n\) for Algorithm 2. The computational results of Algorithm 1 and the cProx-AL for solving the instances generated above are presented in Table 1. In detail, the values of \(d\), \(n\), and \(\tilde{m}\) are listed in the first three columns, respectively. For each triple \((d,n,\tilde{m})\), the average objective value, the average feasibility violation, the average number of outer iterations, and the average total number of iterations over 10 random instances are given in the remaining columns. One observes that: 1) both Algorithm 1 and the cProx-AL are capable of finding solutions of similar quality in terms of objective value and constraint violation; 2) Algorithm 1 is as efficient as cProx-AL in terms of the number of outer iterations. Taking all factors into account, the proposed method attains solutions of the same high quality as the centralized method. ### Neyman-Pearson classification In this subsection we consider the Neyman-Pearson binary classification problem: \[\min_{w}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{m_{i0}}\sum_{j=1}^{m_{i0}}\phi(w;(x_{j}^{(i0)},0))\quad\text{s.\,t.}\quad\frac{1}{m_{i1}}\sum_{j=1}^{m_{i1}}\phi(w;(x_{j}^{(i1)},1))\leq r_{i},\quad 1\leq i\leq n, \tag{26}\] where \(\{x_{j}^{(i0)}\}_{1\leq j\leq m_{i0}}\) and \(\{x_{j}^{(i1)}\}_{1\leq j\leq m_{i1}}\) are the sets of samples in client \(i\) associated with labels \(0\) and \(1\), respectively. In our experiment, we set \(\phi\) as the binary logistic loss (see Hastie et al. (2009)): \[\phi(w;(x_{j}^{(i)},y_{j}^{(i)}))=-y_{j}^{(i)}w^{T}x_{j}^{(i)}+\log(1+e^{w^{T}x_{j}^{(i)}}),\quad y_{j}^{(i)}\in\{0,1\}. \tag{27}\] We consider three real-world datasets, namely 'breast-cancer-wisc', 'adult-a', and 'monks-1', from the UCI repository.2 The sizes of these datasets can be found in Table 2. For each dataset, we conducted an imbalanced classification task that minimizes the binary classification loss while ensuring that the loss for class 1 (minority) is less than a threshold \(r=0.2\). To simulate the FL setting, we divided each dataset into \(n\) folds, mimicking distributed clients each holding the same amount of data with equal imbalanced ratios. Footnote 2: see [https://archive.ics.uci.edu/datasets](https://archive.ics.uci.edu/datasets) We implement Algorithm 1 and cProx-AL for solving the Neyman-Pearson classification problem (26). In particular, we apply Algorithm 1 and cProx-AL to find a \((10^{-3},10^{-3})\)-KKT solution of problem (26), where cProx-AL follows the same framework as Algorithm 1 except that \(w^{k+1}\) is obtained by applying the L-BFGS method built into scipy.optimize.minimize. We set parameters for Algorithm 1 and the cProx-AL method as \(w^{0}=(1,\ldots,1)^{T}\), \(\mu_{i}^{0}=(0,\ldots,0)^{T}\) \(\forall 0\leq i\leq n\), \(\bar{s}=0.001\) and \(\beta=300\). We also set \(\rho_{i}=0.01\) \(\forall 1\leq i\leq n\) for Algorithm 2. The computational results for solving problem (26) using three real-world datasets are presented in Table 2. In detail, the first four columns of Table 2 represent the names of the dataset, the numbers of samples in classes 1 and 0, the number of features, and the number of clients. In the last two columns, we present the losses for class 0 and class 1, respectively. We include the mean and max loss values for class 1 among all local clients.
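For concreteness, the per-client objective and constraint of (26) with the logistic loss (27) can be sketched as follows; this is a minimal NumPy sketch, and the helper names and the toy data at the end are ours rather than part of the experimental code.

```python
import numpy as np

def logistic_loss(w, X, y):
    """Binary logistic loss (27), averaged over the given samples."""
    z = X @ w
    return float(np.mean(-y * z + np.logaddexp(0.0, z)))   # log(1 + e^z), numerically stable

def client_objective(w, X0):
    """Local objective of client i in (26): average loss on its class-0 samples."""
    return logistic_loss(w, X0, np.zeros(X0.shape[0]))

def client_constraint(w, X1, r=0.2):
    """Local constraint of client i in (26): class-1 loss minus the threshold r (<= 0 when feasible)."""
    return logistic_loss(w, X1, np.ones(X1.shape[0])) - r

# Toy usage with random data (illustrative only)
rng = np.random.default_rng(0)
w = np.zeros(20)
X0, X1 = rng.standard_normal((50, 20)), rng.standard_normal((10, 20))
print(client_objective(w, X0), client_constraint(w, X1))
```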
By comparing Algorithm 1 and cProx-AL in Table 2, one can observe that both algorithms can yield solutions of similar quality. This demonstrates the ability of Algorithm 1 to solve the problem in an FL framework without compromising solution quality. Figure 1 shows the convergence behavior of losses for classes 0 and 1 across all local clients over the outer iterations of Algorithm 1. From this figure, we observe that our proposed method consistently reduces the losses for class 1 (local constraint) on all clients to a level below the threshold (\(\leq 0.2\)) while also consistently minimizing the losses for class 0 (local objective) on all local clients. ### Classification with fairness constraints In this subsection we consider the classification problem with global and local fairness constraints: \[\min_{w} \ \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}\phi(w;(x_{j}^{(i)},y_{j}^{(i)})) \tag{28a}\] \[\mathrm{s.\,t.}\ -r_{i}\leq\frac{1}{\tilde{m}_{i}}\sum_{j=1}^{\tilde{m}_{i}}\phi(w;(\tilde{x}_{j}^{(i)},\tilde{y}_{j}^{(i)}))-\frac{1}{\hat{m}_{i}}\sum_{j=1}^{\hat{m}_{i}}\phi(w;(\hat{x}_{j}^{(i)},\hat{y}_{j}^{(i)}))\leq r_{i},\quad 0\leq i\leq n. \tag{28b}\] where \(\phi\) is the logistic loss defined as in (27), \((x_{j}^{(i)},y_{j}^{(i)})\in\mathbb{R}^{d}\times\{0,1\}\), \(1\leq j\leq m_{i}\), are the feature-label pairs at client \(i\). For each client \(i\), the local dataset \(\{(x_{j}^{(i)},y_{j}^{(i)})\}_{1\leq j\leq m_{i}}\) is divided into two sensitive groups \(\{(\tilde{x}_{j}^{(i)},\tilde{y}_{j}^{(i)})\}_{1\leq j\leq\tilde{m}_{i}}\) and \(\{(\hat{x}_{j}^{(i)},\hat{y}_{j}^{(i)})\}_{1\leq j\leq\hat{m}_{i}}\). The global dataset at the central server also includes two sensitive groups of samples \(\{(\tilde{x}_{j}^{(0)},\tilde{y}_{j}^{(0)})\}_{1\leq j\leq\tilde{m}_{0}}\) and \(\{(\hat{x}_{j}^{(0)},\hat{y}_{j}^{(0)})\}_{1\leq j\leq\hat{m}_{0}}\). \begin{table} \begin{tabular}{c c c||c c||c c||c c} \hline & & & \multicolumn{2}{c||}{objective value} & \multicolumn{2}{c||}{feasibility violation (\(\times 10^{-4}\))} & \multicolumn{2}{c}{outer iterations} \\ \(d\) & \(n\) & \(\tilde{m}\) & Algorithm 1 & cProx-AL & Algorithm 1 & cProx-AL & Algorithm 1 & cProx-AL \\ \hline 100 & 1 & 1 & -6.1982 & -6.1982 & 1.1986 & 3.0593 & 5.6 & 5.0 \\ 100 & 5 & 1 & -5.4337 & -5.4348 & 5.5226 & 7.6943 & 6.0 & 6.8 \\ 100 & 10 & 1 & -0.8998 & -0.9021 & 7.7720 & 6.9185 & 8.5 & 9.0 \\ 300 & 1 & 3 & -20.0387 & -20.0387 & 0.0920 & 0.9312 & 5.9 & 5.0 \\ 300 & 5 & 3 & -16.7407 & -16.7408 & 1.4489 & 1.9619 & 5.0 & 5.0 \\ 300 & 10 & 3 & -12.9434 & -12.9449 & 4.4227 & 5.3871 & 5.9 & 6.8 \\ 500 & 1 & 5 & -33.0971 & -33.0971 & 7.5226 & 0.9430 & 5.9 & 5.0 \\ 500 & 5 & 5 & -30.8218 & -30.8220 & 4.4055 & 4.5350 & 4.0 & 4.0 \\ 500 & 10 & 5 & -24.7565 & -24.7583 & 4.4589 & 6.8591 & 5.0 & 5.1 \\ \hline \end{tabular} \end{table} Table 1: Numerical results for problem (25) We consider the real-world dataset named 'adult-b' consisting of a training set and a testing set.3 Each sample in this dataset has 39 features and a binary label. We conducted a binary classification task with fairness constraints that control the loss disparity between two sensitive groups of samples. We allocate 22,654 samples from the training set to the local dataset at clients, and 5,659 samples from the testing set to form the global dataset at the central server. To simulate an FL setting, we partitioned each dataset into \(n\) folds, ensuring an equal number of samples at each client.
Footnote 3: This dataset can be found in [https://github.com/heyaudace/ml-bias-fairness/tree/master/data/adult](https://github.com/heyaudace/ml-bias-fairness/tree/master/data/adult). We implement Algorithm 1 and a cProx-AL for solving the classification problem with fairness constraints. In particular, we apply Algorithm 1 and cProx-AL to find an \((10^{-3},10^{-3})\)-KKT solution of problem (28), where cProx-AL follows the same framework as Algorithm 1 except that the \(w^{k+1}\) is obtained by applying L-BFGS method built in scipy.optimize.minimize. We set parameters for Algorithm 1 and the cProx-AL method as \(w^{0}=(0,\ldots,0)^{T}\), \(\mu_{i}^{0}=(0,\ldots,0)^{T}\)\(\forall 0\leq i\leq n\), \(\bar{s}=0.001\) and \(\beta=10\). We also set \(\rho_{i}=10^{8}\)\(\forall 1\leq i\leq n\) for Algorithm 2. \begin{table} \begin{tabular}{c|c|c|c||c c||c c c} \hline \hline \multirow{2}{*}{dataset} & \multirow{2}{*}{pos/neg} & \multirow{2}{*}{d} & \multirow{2}{*}{\(n\)} & \multicolumn{3}{c||}{loss for class 0} & \multicolumn{3}{c}{loss for class 1 (\(\leq 0.2\))} \\ & & & & Algorithm 1 & cProx-AL & \multicolumn{3}{c}{Algorithm 1} & \multicolumn{3}{c}{cProx-AL} \\ & & & & & mean & max & mean & max \\ \hline \multirow{3}{*}{breast-cancer-wisc} & \multirow{3}{*}{240/455} & \multirow{3}{*}{20} & 1 & 0.2735 & 0.2737 & 0.2000 & 0.2000 & 0.2000 & 0.2000 \\ & & & 5 & 0.3365 & 0.3380 & 0.1867 & 0.2001 & 0.1869 & 0.2000 \\ & & & 10 & 0.3751 & 0.3754 & 0.1739 & 0.2000 & 0.1739 & 0.2000 \\ & & & 20 & 0.4500 & 0.4498 & 0.1628 & 0.2000 & 0.1628 & 0.2000 \\ \hline \multirow{3}{*}{adult-a} & \multirow{3}{*}{7840/24715} & \multirow{3}{*}{21} & 1 & 0.7331 & 0.9186 & 0.2000 & 0.2000 & 0.2008 & 0.2008 \\ & & & 5 & 0.7465 & 0.8610 & 0.1966 & 0.1995 & 0.1957 & 0.2000 \\ & & & 10 & 0.7739 & 0.8291 & 0.1873 & 0.2002 & 0.1920 & 0.2009 \\ & & & 20 & 0.7881 & 0.8173 & 0.1830 & 0.2003 & 0.1853 & 0.2007 \\ \hline \multirow{3}{*}{monks-1} & \multirow{3}{*}{275/275} & \multirow{3}{*}{21} & 1 & 1.5758 & 1.7783 & 0.2000 & 0.2000 & 0.1995 & 0.1995 \\ & & & 5 & 1.6467 & 1.7488 & 0.1905 & 0.2001 & 0.1928 & 0.2008 \\ & & & 10 & 1.7071 & 1.8438 & 0.1821 & 0.2000 & 0.1816 & 0.2001 \\ & & & 20 & 1.7852 & 1.9750 & 0.1678 & 0.2000 & 0.1637 & 0.2001 \\ \hline \hline \end{tabular} \end{table} Table 2: Numerical results for problem (26). Figure 1: Convergence behavior of losses for classes 0 and 1 across all local clients over the outer iterations of Algorithm 1 on three real-world datasets. The solid blue and brown lines indicate the convergence behavior of the average loss for classes 0 and 1 over all clients, respectively. The blue and brown shaded areas indicate the regions between the maximum loss and minimum loss for classes 0 and 1 over all clients, respectively. The computational results for solving problems (28) are presented in Table 3. In detail, the first column of Table 3 represent the number of clients. In the last two columns, we present the classification loss and loss disparity, respectively, which include results computed from the classification model with fairness constraints in (28). We include the mean and max loss disparity (absolute difference of losses for two sensitive groups) among all clients and the central server. Comparing Algorithm 1 and cProx-AL in Table 3 reveals that both Algorithm 1 and cProx-AL can yield solutions of similar quality. 
This demonstrates the ability of Algorithm 1 to solve the problem in an FL framework without compromising solution quality, and it also implies the potential of our algorithm in solving FL problems with particular nonconvex constraints. Figure 2 shows the convergence behavior of loss disparity and classification loss across all local clients over the outer iterations of Algorithm 1. From this figure, we see that our proposed method consistently relegates the loss disparities (local/global constraints) on all clients and the central server to a level below a threshold (\(\leq 0.01\)) while also consistently minimizing the classification losses (local objectives) on all local clients. ## 6 Concluding remarks In this paper we proposed a proximal AL based algorithm for solving FL problems with global and local convex conic constraints. We then analyzed its worst-case iteration complexity under mild assumptions. Finally, we performed numerical experiments using real-world datasets to assess the performance of our proposed algorithm for Neyman-Pearson classification and classification with fairness constraints in the FL setting. The numerical results clearly demonstrate the practical efficacy of our proposed algorithm. \begin{table} \begin{tabular}{c||c c||c c c} \hline \hline \multirow{2}{*}{\(n\)} & \multicolumn{2}{c||}{classification loss} & \multicolumn{3}{c}{loss disparity (\(\leq 0.01\))} \\ & Algorithm 1 & cProx-AL & \multicolumn{2}{c}{Algorithm 1} & \multicolumn{2}{c}{cProx-AL} \\ & & & mean & max & mean & max \\ \hline 1 & 0.6790 & 0.6808 & 0.0090 & 0.0110 & 0.0078 & 0.0104 \\ 5 & 0.6813 & 0.6814 & 0.0092 & 0.0106 & 0.0091 & 0.0105 \\ 10 & 0.6817 & 0.6816 & 0.0093 & 0.0108 & 0.0093 & 0.0108 \\ 20 & 0.6818 & 0.6818 & 0.0094 & 0.0109 & 0.0094 & 0.0109 \\ \hline \hline \end{tabular} \end{table} Table 3: Numerical results for problem (28). Figure 2: Convergence behavior of loss disparity and classification loss across all local clients over the outer iterations of Algorithm 1 on the adult dataset. The solid blue and brown lines indicate the convergence behavior of the average loss disparity and classification loss over all clients, respectively. The blue and brown shaded areas indicate the regions between the maximum value and minimum value of loss disparity and classification loss over all clients, respectively. The blue dashdot line indicates the convergence behavior of the global loss disparity in the central server.
2303.13526
Trust Explanations to Do What They Say
How much are we to trust a decision made by an AI algorithm? Trusting an algorithm without cause may lead to abuse, and mistrusting it may similarly lead to disuse. Trust in an AI is only desirable if it is warranted; thus, calibrating trust is critical to ensuring appropriate use. In the name of calibrating trust appropriately, AI developers should provide contracts specifying use cases in which an algorithm can and cannot be trusted. Automated explanation of AI outputs is often touted as a method by which trust can be built in the algorithm. However, automated explanations arise from algorithms themselves, so trust in these explanations is similarly only desirable if it is warranted. Developers of algorithms explaining AI outputs (xAI algorithms) should provide similar contracts, which should specify use cases in which an explanation can and cannot be trusted.
Neil Natarajan, Reuben Binns, Jun Zhao, Nigel Shadbolt
2023-02-14T13:53:33Z
http://arxiv.org/abs/2303.13526v1
# Trust Explanations to Do What They Say ###### Abstract How much are we to trust a decision made by an AI algorithm? Trusting an algorithm without cause may lead to abuse, and mistrusting it may similarly lead to disuse. Trust in an AI is only desirable if it is warranted; thus, calibrating trust is critical to ensuring appropriate use. In the name of calibrating trust appropriately, AI developers should provide contracts specifying use cases in which an algorithm can and cannot be trusted. Automated explanation of AI outputs is often touted as a method by which trust can be built in the algorithm. However, automated explanations arise from algorithms themselves, so trust in these explanations is similarly only desirable if it is warranted. Developers of algorithms explaining AI outputs (xAI algorithms) should provide similar contracts, which should specify use cases in which an explanation can and cannot be trusted. ## 1 Trust in AI Algorithms Increasingly, decisions affecting the lives of lay people are made by AI algorithms. And while these algorithms may be useful, they can also be dangerous. Both unwarranted trust in such an algorithm may lead to wrong and damaging decisions, while unwarranted mistrust in such an algorithm may lead to disuse. Neither outcome is favourable. Hardin draws an important distinction between whether someone or something is trusted and whether that trust is well-placed; i.e. it is worthy of trust [2]. It is clear, then, that trust in AI algorithms should be _calibrated_, so that users are led to trust trustworthy AI systems and distrust untrustworthy AI systems. Jacovi et al. propose that trust in AI systems can be understood in terms of a contract between the system and the trustor [4]. Jacovi et al. define a model of human-AI trust resting on two key properties: the _vulnerability_ of the user to the model and the user's ability to _anticipate_ the impact of the AI model's decisions. In the human context, person \(A\) trusts person \(B\) if and only if \(A\) believes that \(B\) will act in \(A\)'s best interest, and \(A\) accepts vulnerability to \(B\)'s actions. In the machine context, we do not always expect the machine to act in our best interests. Instead, user \(U\) trusts AI model \(M\) if and only if \(U\) can anticipate and accepts vulnerability to \(M\)'s actions [4]. Moreover, trust often does not have a blanket scope; typically, \(U\) will trust \(M\) regarding some particular actions or range of actions, though a broader trust will include many such actions. In the algorithmic context, this scope is clearly limited - unlike humans, trust in algorithms should never be broad; warranted trust is always scoped to a region in which the algorithm's actions can be anticipated, and in which users might reasonably accept vulnerability to these actions. Generally, this scope is limited to some subsection of the intended use cases of the AI system. Trust placed in an AI system to do something it was not intended to do is often unwarranted; trust placed in an AI system to do something it does not claim to do is always unwarranted. Thus, for an algorithm to be trustworthy in a given scope, that algorithm should demonstrate both that a user can anticipate behaviour in that scope and that the anticipated behaviour is such that users might accept vulnerability to the algorithm. We call this demonstration a _contract_, and call this sort of trust _contractual trust_[4]. 
Following this framing, the extent to which an algorithm warrants trust is modulated by the extent to which it adheres to its contract. Therefore, when the developers of an algorithm provide a contract regarding the intended use of an algorithm, we can evaluate the trustworthiness of an algorithm by evaluating adherence to the contract. One method of evaluating adherence to contract comes from a user observing the AI algorithm's reasoning process by way of an explanation or an interpretation. However, unlike human decision-makers, few algorithms are inherently capable of explaining their reasoning. The growing field of Explainable Artificial Intelligence (xAI) aims to develop methods for explaining the reasoning algorithms, often with a broad goal of increasing warranted trust in algorithms. However, though it is clear that these algorithms often increase trust in algorithms, it is not always clear that this trust is warranted, as demonstrated by Jacobs et al. [3]. Thus, it seems, there are times where even an explanation of an AI algorithm should be distrusted. ## 2 Trust in Explanations of AI Algorithms Explanation algorithms help us determine whether to trust AI algorithms, but only if we trust the explanation methods. But when can we trust an explanation algorithm? And what are we trusting it to do? The answer that we are trusting these algorithms to _explain_ AI systems is insufficient, because what it means to explain is unspecified. AI explainers can be put to a number of different uses, and different algorithms should be trusted for different uses; contracting to behave appropriately in all of these uses is infeasible (an end-user demands a different explanation than a domain expert), so explanations methods should contract to provide only a particular type of explanation. Much like AI models themselves, we contend that xAI algorithms should be trusted to uphold specific contracts with respect to the ways in which they are used. For example, a model like _recourse_, developed in Ustun et al., designed to informs end-users of what must be done to change their determination, should not be trusted to report errors in model or to point out the most important features [8]. Similarly, a model like _Scoped Anchors_, developed in Ribeiro et al., designed to simplify predictions into rule-based approximations, should not be trusted to provide recourse information [7]. We also contend that xAI methods should be evaluated on whether they can be trusted to do what they say. That is, a good xAI method is one that fulfills its intended use case. Much like trust in AI algorithms, trust in explanations of AI algorithms is contractual; xAI methods should be evaluated in terms of the extent to which they uphold the terms of a contract between the explainer and explainer; and an explainee's trust should be calibrated accordingly. We should not trust an explanation algorithm to do something it has not promised to do. The absence of contracts is not a mere conceptual problem; it creates a problematic dialectic and hinders effective critique of xAI methods. To demonstrate this, we consider two particular kinds of AI explanations: SHAP explanations, introduced in Lundberg and Lee, and counterfactual explanations, introduced in Wachter et al. [6; 9]. Both papers focus on the mathematical properties of the explanation algorithm introduce, but neither makes clear what they contend a good explanation consists in or specifies a circumscribed set of use cases for their methods. 
We consider two evaluation articles: Kumar et al.'s evaluation of the SHAP method, and Barocas et al.'s evaluation of counterfactual explanations [5; 1]. Both articles rely on similar notions regarding the purpose of explanations - frameworks that the authors of SHAP and counterfactual explanations do not make clear that they subscribe to. For instance, Barocas et al.'s critique counterfactual explanations on the grounds that they are not useful in providing users with actionable information [1]. Similarly, Kumar et al. argue that SHAP cannot be used to inform users' actions [5]. In this paper we investigate similarities between trust in AI algorithms and trust in explanations of those algorithms (and the outputs they produce). We contend that, like trust in AI algorithms, trust in AI explanation algorithms is composed of an ability to anticipate the algorithm and an acceptance of vulnerability to the algorithm's actions. In both cases, this trust is only desirable if it is warranted. The scope of the trust, in both cases, should be clearly enumerated in a contract, and AI and explanation algorithms alike should be evaluated for trustworthiness within this scope.
2302.04373
Measuring the Privacy Leakage via Graph Reconstruction Attacks on Simplicial Neural Networks (Student Abstract)
In this paper, we measure the privacy leakage via studying whether graph representations can be inverted to recover the graph used to generate them via graph reconstruction attack (GRA). We propose a GRA that recovers a graph's adjacency matrix from the representations via a graph decoder that minimizes the reconstruction loss between the partial graph and the reconstructed graph. We study three types of representations that are trained on the graph, i.e., representations output from graph convolutional network (GCN), graph attention network (GAT), and our proposed simplicial neural network (SNN) via a higher-order combinatorial Laplacian. Unlike the first two types of representations that only encode pairwise relationships, the third type of representation, i.e., SNN outputs, encodes higher-order interactions (e.g., homological features) between nodes. We find that the SNN outputs reveal the lowest privacy-preserving ability to defend the GRA, followed by those of GATs and GCNs, which indicates the importance of building more private representations with higher-order node information that could defend the potential threats, such as GRAs.
Huixin Zhan, Kun Zhang, Keyi Lu, Victor S. Sheng
2023-02-08T23:40:24Z
http://arxiv.org/abs/2302.04373v1
Measuring the Privacy Leakage via Graph Reconstruction Attacks on Simplicial Neural Networks (Student Abstract) ###### Abstract In this paper, we measure the privacy leakage via studying whether graph representations can be inverted to recover the graph used to generate them via graph reconstruction attack (GRA). We propose a GRA that recovers a graph's adjacency matrix from the representations via a graph decoder that minimizes the reconstruction loss between the partial graph and the reconstructed graph. We study three types of representations that are trained on the graph, i.e., representations output from graph convolutional network (GCN), graph attention network (GAT), and our proposed simplicial neural network (SNN) via a higher-order combinatorial Laplacian. Unlike the first two types of representations that only encode pairwise relationships, the third type of representation, i.e., SNN outputs, encodes higher-order interactions (e.g., homological features) between nodes. We find that the SNN outputs reveal the lowest privacy-preserving ability to defend the GRA, followed by those of GATs and GCNs, which indicates the importance of building more private representations with higher-order node information that could defend the potential threats, such as GRAs. 1 Department of Computer Science, Texas Tech University, Box 43104, Lubbock, TX 79409-3104, USA 2 Department of Computer Science, Xavier University of Louisiana, New Orleans, LA 70125, USA 3 Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA {\({}^{\dagger}\)Huixin.Zhan, "Victor.Sheng)@ttu.edu, \({}^{\ddagger}\)[email protected], \({}^{\dagger}\)[email protected], ## 1 Introduction Most real-world graphs associated with people or human-related activities are often sensitive and might contain confidential information. In this paper, we focus on the threat of edge privacy, e.g., in an online social network, a user's friend list could potentially be private to the user. The server could aggregate node representations with their neighbors to learn better user representations to improve its services. Thus, if there is an edge between two nodes, their output representations obtained from the graph neural network (GNN) should be closer. Therefore, a potential adversary could possibly recover the sensitive edge information (e.g., friend lists) via a machine learning classifier that computes distance differences in graph representations. In this paper, we study three representation methods (two low-order and one higher-order) that perform different aggregations to capture important graph structure properties, i.e., representations output from GCN, GAT, and our proposed SNN. Unlike the first two types of representations that only encode pairwise relationships, the proposed SNN utilize a higher-order combinatorial Laplacian to learn a graph convolution that encodes the homological features of simplicial complexes, which are higher-dimensional analogs of graphs [1]. We will then measure if SNN outputs are more vulnerable to potential attacks compared to regular GCN or GAT outputs. We use GRAs as the potential attacks to measure privacy leakage. GRAs study whether representations can be inverted to recover the graph used to generate them. We propose a novel GRA for accurate representation inversion, i.e., from the representation outputs of a graph \(\mathcal{G}\), we can find a graph \(\mathcal{\hat{G}}\) with a very similar adjacency matrix. 
When a user's device computes a representation via GNN and sends it to a server for node classification, we assume the adversary could access the representation outputs with non-confidential edges during the data uploading process. We propose a graph decoder that reconstructs the graph by minimizing the reconstruction loss between the partial graph (with non-confidential edges) and the reconstructed graph. In our experiments, we will show that the SNN outputs reveal the lowest privacy-preserving ability to defend the GRAs. This calls for future research towards building more private representations with higher-order node information that could defend the potential attacks, such as GRAs. ## 2 Our Proposed Methods **SNN Outputs.** In this paper, we will refer to abstract simplicial complexes simply as simplicial complexes. An abstract simplicial complex is a collection \(K\) of subsets of a finite set \(S\) that satisfies two axioms: (1) The singleton set \(\{v\}\) lies in \(K\) for each \(v\) in \(S\). (2) Whenever some \(\sigma\subset S\) lies in \(K\), every subset of \(\sigma\) must also lie in \(K\). The constituent subsets \(\sigma\subset S\) which lie in \(K\) are called simplices. Next, we will introduce the computation of \(d\)-dimensional incidence matrices. Fixing a collection \(K\) and letting \(K_{d}\) indicate the set of all \(d\)-simplices in \(K\), the \(d\)-dimensional incidence matrix operators can be represented as \(\partial_{d}:\mathbb{R}^{K_{d}}\rightarrow\mathbb{R}^{K_{d-1}}\). To build these incidence matrix operators, one first orders the nodes in \(K_{0}\) so that each \(d\)-simplex \(\sigma\subset K\) can be uniquely expressed as a list \(\sigma=[v_{0},...,v_{d}]\) of nodes in increasing order. The desired matrix \(\partial_{d}\) is completely prescribed by the following action on each \(\sigma\): \(\partial_{d}(\sigma)=\sum_{i=0}^{d}(-1)^{i}\cdot\sigma_{-i}\), where \(\sigma_{-i}:=[v_{0},...,\hat{v}_{i},...,v_{d}]\) and \(\hat{v}_{i}\) indicates that \(v_{i}\) is omitted. These operators form a sequence of vector spaces and linear maps: \[\cdots\overset{\partial_{d+1}}{\rightarrow}\mathbb{R}^{K_{d}}\overset{\partial_{d}}{\rightarrow}\mathbb{R}^{K_{d-1}}\overset{\partial_{d-1}}{\rightarrow}\cdots. \tag{1}\] In order to model the higher-order interactions between nodes, the graph Laplacian was generalized to simplicial complexes by Horak and Jost (2013). The higher-order combinatorial Laplacian can be formulated as: \(\mathcal{L}_{d}:=\partial_{d+1}\partial_{d+1}^{T}+\partial_{d}^{T}\partial_{d}\). Therefore, leveraging this \(\mathcal{L}_{d}\), we could obtain the final SNN outputs via the graph convolution as follows: \(H^{(2)}=\sigma(\tilde{Q}^{-\frac{1}{2}}\tilde{\mathcal{L}}_{d}\tilde{Q}^{-\frac{1}{2}}\sigma(\tilde{Q}^{-\frac{1}{2}}\tilde{\mathcal{L}}_{d}\tilde{Q}^{-\frac{1}{2}}H^{(0)}W^{(0)})W^{(1)}),\) where \(\tilde{\mathcal{L}}_{d}=\mathcal{L}_{d}+\mathbb{I}\), \(\mathbb{I}\) is the identity matrix, \(\tilde{Q}_{ii}=\sum_{j}(\tilde{\mathcal{L}}_{d})_{ij}\), and \(W^{(k)}\) is a layer-specific trainable weight matrix. \(\sigma(\cdot)\) denotes an activation function. \(H^{(k)}\) is the matrix of activations in the \(k\)-th layer and the initial node representations are \(H^{(0)}\). **Graph Reconstruction Attack.** A graph \(\mathcal{G}=(V,E)\) is represented by the set of nodes \(V=\{v_{i}\}_{i=1}^{|V|}\) and edges \(E=\{e_{ij}\}_{i,j=1}^{|E|}\). To measure the privacy leakage via GRA, we will then propose a decoder-only approach.
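Before turning to the decoder, the objects defined above can be made concrete. The following minimal NumPy sketch builds the incidence matrices \(\partial_{1}\) and \(\partial_{2}\) for a toy complex, forms the combinatorial Laplacian \(\mathcal{L}_{1}=\partial_{2}\partial_{2}^{T}+\partial_{1}^{T}\partial_{1}\), and applies the degree-normalized two-layer propagation; the toy complex, feature sizes, and random weights are ours for illustration, and this is not the authors' implementation.

```python
import numpy as np

# Toy complex: nodes {0,1,2,3}, oriented edges, and one filled triangle [0,1,2]
edges = [(0, 1), (0, 2), (1, 2), (1, 3)]
triangles = [(0, 1, 2)]

def boundary_1(num_nodes, edges):
    """Incidence operator partial_1: edge [u, v] maps to v - u."""
    B = np.zeros((num_nodes, len(edges)))
    for j, (u, v) in enumerate(edges):
        B[u, j], B[v, j] = -1.0, 1.0
    return B

def boundary_2(edges, triangles):
    """Incidence operator partial_2: triangle [a,b,c] maps to [b,c] - [a,c] + [a,b]."""
    idx = {e: i for i, e in enumerate(edges)}
    B = np.zeros((len(edges), len(triangles)))
    for j, (a, b, c) in enumerate(triangles):
        B[idx[(b, c)], j] = 1.0
        B[idx[(a, c)], j] = -1.0
        B[idx[(a, b)], j] = 1.0
    return B

d1, d2 = boundary_1(4, edges), boundary_2(edges, triangles)
L1 = d2 @ d2.T + d1.T @ d1                      # combinatorial Laplacian for d = 1

# Degree-normalized two-layer propagation; for d = 1 the input signal lives on the 1-simplices (edges)
L_tld = L1 + np.eye(L1.shape[0])
Q_inv_sqrt = np.diag(1.0 / np.sqrt(L_tld.sum(axis=1)))
P = Q_inv_sqrt @ L_tld @ Q_inv_sqrt
relu = lambda x: np.maximum(x, 0.0)
rng = np.random.default_rng(0)
H0 = rng.standard_normal((L1.shape[0], 8))      # toy initial features
W0, W1 = rng.standard_normal((8, 16)), rng.standard_normal((16, 4))
H2 = relu(P @ relu(P @ H0 @ W0) @ W1)           # two-layer SNN-style outputs
```

The decoder-only GRA built on these outputs is specified next.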
Specifically, using the outputs \(H^{(k)}\) and a partial adjacency matrix \(\mathcal{A}^{*}\) as the prior knowledge, the decoder \(f_{dec}\) reconstructs the adjacency matrix \(A_{rec}=\sigma((\tilde{\mathcal{A}}^{*}H^{(k)}W_{a})(\tilde{\mathcal{A}}^{*}H^{(k)}W_{a})^{\mathbf{T}}),\) where \(W_{a}\) is trained using back-propagation to minimize the reconstruction loss between the adjacency matrix of the partial graph \(\mathcal{A}^{*}\) and the reconstructed partial graph \(\mathcal{A}^{*}_{rec}\) using \(\mathcal{L}^{(att)}=||\mathcal{A}^{*}-\mathcal{A}^{*}_{rec}||_{2}^{2}\). ## Experiments **Datasets.** In our experiments, we used five well-known real-world datasets: CiteSeer and CORA from Sen et al. (2008), PubMed Namata et al. (2012), as well as Amazon Computers and Amazon Photo from Shchur et al. (2018). **Experimental Set-Up.** In order to generate GCN and GAT outputs, we follow the original graph convolution in GCNs Welling and Kipf (2016) and concatenation in GATs Velickovic et al. (2017). We only consider \(1\)-simplices, i.e., \(d=1\). **Performance for Three Types of Representations.** Table 1 shows the GRA performance in terms of AUC of three types of representations for recovering the adjacency matrix \(\mathcal{A}\) of \(\mathcal{G}\). We compare our results with Attack-6 He et al. (2021) because it also requires node representations and the partial adjacency matrix \(\mathcal{A}^{*}\) as prior knowledge. Our proposed GRA with GCN outputs (GRA_GCN outputs) obtains inferior performance compared to Attack-6 (with \(\downarrow\)). However, GRA_GAT outputs achieve higher performance compared to Attack-6 in most of the cases, and GRA_SNN outputs achieve the highest performance for all datasets (in bold), e.g., achieving \(98.33\%\) AUC on the CiteSeer dataset. This indicates that the SNN outputs reveal the lowest privacy-preserving ability to defend the GRAs, followed by GAT and GCN outputs. ## Conclusion In this paper, we measured the privacy leakage by studying whether representations can be inverted, via a GRA, to recover the graph used to generate them. Our proposed GRA recovers a graph's adjacency matrix from the representations via a graph decoder that minimizes the reconstruction loss. We studied three types of representations that are trained on the graph, i.e., GCN outputs, GAT outputs, and SNN outputs using a higher-order combinatorial Laplacian. We found that SNN outputs reveal the lowest privacy-preserving ability. This indicates the importance of building more private representations with higher-order node information that could defend the potential threats.
2310.04162
Light-LOAM: A Lightweight LiDAR Odometry and Mapping based on Graph-Matching
Simultaneous Localization and Mapping (SLAM) plays an important role in robot autonomy. Reliability and efficiency are the two most valued features for applying SLAM in robot applications. In this paper, we consider achieving a reliable LiDAR-based SLAM function in computation-limited platforms, such as quadrotor UAVs based on graph-based point cloud association. First, contrary to most works selecting salient features for point cloud registration, we propose a non-conspicuous feature selection strategy for reliability and robustness purposes. Then a two-stage correspondence selection method is used to register the point cloud, which includes a KD-tree-based coarse matching followed by a graph-based matching method that uses geometric consistency to vote out incorrect correspondences. Additionally, we propose an odometry approach where the weight optimizations are guided by vote results from the aforementioned geometric consistency graph. In this way, the optimization of LiDAR odometry rapidly converges and evaluates a fairly accurate transformation resulting in the back-end module efficiently finishing the mapping task. Finally, we evaluate our proposed framework on the KITTI odometry dataset and real-world environments. Experiments show that our SLAM system achieves a comparative level or higher level of accuracy with more balanced computation efficiency compared with the mainstream LiDAR-based SLAM solutions.
Shiquan Yi, Yang Lyu, Lin Hua, Quan Pan, Chunhui Zhao
2023-10-06T11:21:31Z
http://arxiv.org/abs/2310.04162v1
# Light-LOAM: A Lightweight LiDAR Odometry and Mapping based on Graph-Matching ###### Abstract Simultaneous Localization and Mapping (SLAM) plays an important role in robot autonomy. Reliability and efficiency are the two most valued features for applying SLAM in robot applications. In this paper, we consider achieving a reliable LiDAR-based SLAM function in computation-limited platforms, such as quadrotor UAVs based on graph-based point cloud association. First, contrary to most works selecting salient features for point cloud registration, we propose a non-conspicuous feature selection strategy for reliability and robustness purposes. Then a two-stage correspondence selection method is used to register the point cloud, which includes a KDC-tree-based coarse matching followed by a graph-based matching method that uses geometric consistency to vote out incorrect correspondences. Additionally, we propose an odometry approach where the weight optimizations are guided by vote results from the aforementioned geometric consistency graph. In this way, the optimization of LiDAR odometry rapidly converges and evaluates a fairly accurate transformation resulting in the back-end module efficiently finishing the mapping task. Finally, we evaluate our proposed framework on the KITTI odometry dataset and real-world environments. Experiments show that our SLAM system achieves a comparative level or higher level of accuracy with more balanced computation efficiency compared with the mainstream LiDAR-based SLAM solutions. LiDAR SLAM, data association, odometry. ## I Introduction Simultaneously Localization and Mapping (SLAM) is now considered one indispensable module for many mobile robots to achieve autonomous navigation in challenging GNSS-denied environments. To adapt to different missions and platforms, various SLAM frameworks based on different sensors, processing methods, or functional structures have been proposed. Disregarding the taxonomy, most SLAM works focus on improving two indices, which are 1) localization performance, such as accuracy and reliability, and 2) runtime efficiency, such as computation burden and storage requirement. Usually, the two indices cannot be optimized both on a resource-limited robot platform. Rather, a trade-off between performance and efficiency is typically required to set up a proper SLAM system function in practical robot applications. In this paper, we aim to develop a lightweight LiDAR-based SLAM with comparative performance to the state-of-the-art methods, but with balanced computation requirements so as to adapt to resource-limited platforms. Vision-based and LiDAR-based SLAMs are the two most widely researched streamlines according to the sensor types. Vision-based methods provide the most economical and practical localization solutions in many stable environments. On the other hand, LiDAR-based methods are usually preferred in structure-rich environments and are considered to be more robust against light condition changes than vision sensors. There are several mile-stone LiDAR-based works, such as LiDAR Odometry and Mapping (LOAM) [1], lightweight and ground-optimized LiDAR odometry and mapping (LeGO-LOAM) [2] and Cartographer [3]. Roughly, the LiDAR-base SLAM function can be divided into two subfunctions, which are the front-end odometry, and the back-end map optimization. In the front-end part, the main purpose is to obtain the incremental transformations between consecutive frames through scan-to-scan or scan-to-map alignment. 
In the back-end part, an estimator/optimizer that integrates more state constraints is utilized to improve the front-end estimation accuracy and smoothness. In most SLAM frameworks, the back-end task consumes more computation resources than the front-end and is usually updated more slowly. With the utilization of the multi-thread technique, the slower thread determines the functional frequency of the whole SLAM function. In a robot control loop, a slower map update rate may affect the robot feedback reaction in unknown environments. Therefore, we considered achieving a more balanced SLAM functional partition with improved performance for practical robot applications. More specifically, we put more computation effort in the front end to improve the point cloud alignment performance, and thanks to a more accurate front end, we can shrink the computation complexity of the back-end part.
Fig. 1: Feature points alignment between two consecutive scans using different data association methods. (a) A scan of a scenario's point cloud. (b) K-nearest neighbor method. (c) Graph-based two-stage matching method.
In the front-end part, there are mainly two streams to achieve the point cloud registration between scans or between scan and map. The first stream is to obtain relative transformation directly from point-to-point correspondences. Iterative closest point (ICP) and its variants, such as the GICP [4], are the most widely used methods to align two point clouds. ICP-based methods are widely used in early-stage 2D LiDAR-based SLAM frameworks when the size of the point cloud is much more sparse than that of 3D LiDARs. Besides, the Normal distribution transformation (NDT) is considered another direct point cloud registration method. Although it uses all points, the NDT method represents the point cloud with normal distributions and then calculates the relative transformations in a probabilistic fashion. However, the direct methods use all points to optimize the relative transformation; therefore, their computation requirements can be a major concern for processing real-time 3D LiDAR scan sequences. The second stream is to extract salient features from point clouds, and then to carry out registration only with the feature points. In the milestone work, LOAM, points are selected based on their curvature and then assigned as planar points and edge points, and then registration is carried out based on the nearest neighborhood methods. Similar geometric features are also used in LeGO-LOAM [2] and F-LOAM [5]. Besides the geometrically calculated features, there is also a trend to implement learning-based methods to obtain features represented as deep neural networks due to their power to represent nonlinearity in the descriptors. However, the learning-based methods are often criticized as data-dependent and may not be ready to be used in unknown environments. Although they work with different representations of the point cloud scans, most methods above use KD-tree-based technology for efficient correspondence indexing. While KD-tree [6] is widely used for establishing initial correspondences in SLAM systems [1, 2, 5, 7, 8], it can introduce erroneous associations, particularly in noisy or occluded environments. Recent years have seen a notable increase in research efforts directed toward leveraging the principles of geometric consistency and graph theory to tackle the point cloud data association problem.
Bailey et al.'s pioneering work [9] introduced the application of geometric consistency in addressing the 2D LiDAR-based map-building problem. This approach involves the construction of a graph, wherein the selection of the correct correspondence set is facilitated through the identification of the maximum common subgraph. In a related vein, Lajoie et al. [10] utilized pairwise consistent measurements to mitigate spurious loop closures within a distributed SLAM system. This methodology similarly involves the maximization of a consistent subgraph, which finds application in the domain of multi-robot map merging, as documented in the literature [11]. Yang et al. [12] introduced the TEASER graph-theoretic framework, which incorporates the truncated least squares optimization method and maximum clique inlier selection technology to effectively eliminate numerous spurious correspondences. Its efficacy has been substantiated in point registration and scan-matching tasks. In the studies [13] and [14], the researchers utilize geometrically consistent graphs in conjunction with a variety of voting strategies to rank correspondences and select dependable inliers. Another graph-theoretic framework, named Clipper [15], constructs a weighted graph and formulates the inlier association as an optimization problem, ultimately solving for the densest subgraph of consistent correspondences. While these graph-based solutions for association problems have demonstrated effectiveness, particularly in adhering to the geometric consistency constraint, it remains a challenge to optimize their efficiency. Notably, their performance falls short of desired levels when confronted with large-scale point cloud alignment within stringent time constraints, for example, in a LiDAR-based SLAM framework. In this paper, we consider achieving a reliable LiDAR-based SLAM function in computation-limited robot platforms. Our contributions are mainly twofold. First, we develop an innovative SLAM front-end which includes a non-conspicuous feature selection strategy and a graph-based feature matching function to achieve better point cloud registration. Secondly, to benefit from the reliable registration of the front end, we develop a light backend that can be executed more efficiently on a computation-limited platform. Experimental validations are carried out with both public datasets and self-collected data. Our implementation of Light-LOAM will be freely available at: [https://github.com/BrenYi/Light-LOAM](https://github.com/BrenYi/Light-LOAM). The remainder of the paper is organized as follows. In Section II, we give a description of the proposed method, and Section IV provides the experimental results. Section V concludes the paper.
Then, we introduce a graph-based consistency voting mechanism to assess these correspondence relationships, effectively filtering out unreliable associations. Moving to the front-end odometry module, the consistent scores of reliable point pairs are leveraged to optimize transformations, resulting in initial, relatively precise pose estimations. Finally, with the support of these initial reliable estimations, the mapping module optimizes more accurate poses in a more efficient fashion. ### _Feature Extraction and Selection_ Disjoint Point RemovalGiven the substantial volume of data produced by 3D LiDAR sensors, feature extraction and feature-based alignment is a widely adopted approach for efficient transformation evaluation. However, it is imperative to eliminate disjoint objects before extracting feature candidates. Disjoint points may often represent outliers or segments of occluded objects, and their inclusion can significantly degrade subsequent feature association and pose estimation quality. Therefore, in alignment with prior work [16], we employ the following criteria to exclude these discontinued points: \[\left|\left\|\mathbf{p}_{i+1}^{k}-\mathbf{p}_{i}^{k}\right\|_{2}-\left\|\mathbf{ p}_{i-1}^{k}-\mathbf{p}_{i}^{k}\right\|_{2}\right|>\sigma_{disjoint} \tag{1}\] where \(\mathbf{p}_{i}^{k}\) represents point \(i\) located in the \(k\)th laser beam channel, and \(\sigma_{disjoint}\) serves as the judgment threshold. A point is classified as disjoint if the absolute difference between its Euclidean distances to neighbors on both sides exceeds \(\sigma_{disjoint}\). Otherwise, it is considered a consecutive candidate. After eliminating disjoint objects, we extract feature points from each laser beam channel. The local geometric attribute of a point is characterized using the smoothness metric (2). \[r(\mathbf{p}_{i}^{k})=\frac{1}{|S|\left\|\mathbf{p}_{i}^{k}\right\|}\left\| \sum_{j\in S,j\notin i}(\mathbf{p}_{i}^{k}-\mathbf{p}_{j}^{k})\right\| \tag{2}\] where \(S\) denotes the assessed point set, encompassing the candidate and its adjacent objects on both the left and right sides. A candidate is designated as a corner feature if its smoothness \(r_{i}^{k}\) exceeds the threshold \(r_{t}\); otherwise, it is identified as a plane feature. This classification process involves considering 5 points on both sides of a candidate, and the threshold \(r_{t}\) is set to \(0.1\) for practical implementation. In conventional LiDAR-based SLAM systems, such as LOAM [1], FLOAM [5], LEGO-LOAM [2], the perceived space is commonly divided into several subregions. Feature candidates with the highest or lowest smoothness attributes are selected from each subregion for subsequent feature matching. However, our Light-LOAM SLAM system introduces an innovative non-conspicuous feature selection strategy. As mentioned earlier, feature selection is typically guided by a discriminative principle. But, are these discriminative features truly robust and capable of serving as high-quality optimization samples? It's worth noting that some outliers or occluded points can exhibit highly discriminative geometric attributes. Therefore, we hold the view that candidates with weaker smoothness attributes than the top conspicuous ones may be more valuable and robust for data association. We prioritize the selection of weaker corner and plane features as our optimized candidates within each subregion. 
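As an illustration of the pre-processing just described, the following minimal NumPy sketch applies the disjoint-point test (1) and the smoothness metric (2) to a single beam channel; the synthetic scan line and helper names are ours, and only the thresholds follow the values quoted in the text.

```python
import numpy as np

def disjoint_mask(scan_line, sigma_disjoint=0.1):
    """Flag points whose left/right range difference exceeds sigma_disjoint, as in (1)."""
    p = np.asarray(scan_line)                       # (N, 3) points of one laser beam channel
    mask = np.zeros(len(p), dtype=bool)
    for i in range(1, len(p) - 1):
        d_next = np.linalg.norm(p[i + 1] - p[i])
        d_prev = np.linalg.norm(p[i - 1] - p[i])
        mask[i] = abs(d_next - d_prev) > sigma_disjoint
    return mask

def smoothness(scan_line, half_window=5):
    """Smoothness metric (2) with `half_window` neighbors on each side; |S| includes the candidate itself."""
    p = np.asarray(scan_line)
    r = np.full(len(p), np.nan)
    s_size = 2 * half_window + 1
    for i in range(half_window, len(p) - half_window):
        nbrs = np.r_[p[i - half_window:i], p[i + 1:i + 1 + half_window]]
        diff_sum = np.sum(p[i] - nbrs, axis=0)
        r[i] = np.linalg.norm(diff_sum) / (s_size * np.linalg.norm(p[i]))
    return r

# Toy usage on a synthetic scan line (illustrative only)
line = np.c_[np.linspace(5, 6, 64), np.zeros(64), np.zeros(64)]
keep = ~disjoint_mask(line)
r = smoothness(line)
corner_like = np.nan_to_num(r) > 0.1                # r_t = 0.1 as in the text
```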
Before initiating the feature selection process, points are first sorted in descending order based on their smoothness values within each subregion: \[\mathbb{F}^{k}=\left\{\mathbf{p}_{i}^{k},\cdots,\mathbf{p}_{i+j}^{k}|r(\mathbf{p}_{i}^{k})>\cdots>r(\mathbf{p}_{i+j}^{k})\right\} \tag{3}\] We choose the \(m\) sharpest points after the first \(k\) points in the ordered set \(\mathbb{F}^{k}\) and designate them as the edge feature set \(\mathbb{F}_{e}^{k}\). Similarly, we select the \(n\) flattest candidates before the last \(l\) points and include them in the planar set \(\mathbb{F}_{s}^{k}\): \[\mathbb{F}_{e}^{k}\!=\!\left\{\mathbf{p}_{i+k+1}^{k},\cdots,\mathbf{p}_{i+k+m}^{k}\mid\mathbf{p}_{x}^{k}\in\mathbb{F}^{k},r(\mathbf{p}_{x}^{k})>r_{t}\right\} \tag{4}\] \[\mathbb{F}_{s}^{k}\!=\!\left\{\mathbf{p}_{i+j-l-n}^{k},\cdots,\mathbf{p}_{i+j-l-1}^{k}\mid\mathbf{p}_{x}^{k}\in\mathbb{F}^{k},r(\mathbf{p}_{x}^{k})<r_{t}\right\} \tag{5}\] In our implementation, we horizontally divide each laser beam channel into 6 subregions. Setting \(m\), \(n\), \(k\), and \(l\) to 2, 4, 1, and 2, respectively, this non-conspicuous feature selection method, combined with the disjoint point removal preprocessing step, efficiently filters out more outliers, ensuring a more reliable set of candidates for subsequent pose estimation.
Fig. 2: Overview of Light-LOAM System
Fig. 3: Process of Graph-based Two-stage Feature Matching
### _Graph-based Two-stage Feature Matching_ Identifying corresponding features from both the last scan of the point cloud and the existing constructed map is a fundamental prerequisite for subsequent scan-to-scan and scan-to-map alignments. KD-tree [6] is a widely used method for establishing correspondence relationships due to its efficiency and effectiveness, as evidenced by its adoption in various works [1, 2, 5]. Despite its prevalence, KD-tree is susceptible to errors caused by environmental occlusions, outliers, and noise within the point cloud, leading to inaccurate pose estimations. For instance, as shown in Fig. 4, situations can arise where more than one candidate feature from the current scan matches the same point from the last scan of the point cloud as its closest counterpart, leading to false multi-to-one correspondence cases. To mitigate such issues and reduce spurious correspondences, we introduce a novel graph-based two-stage correspondence selection method. **Initial Correspondences Determination by KD-tree.** We start by using KD-tree to find correspondences for our feature candidates, assuming the closest point from the last scan or map is the true correspondence for each feature. This establishes our initial set of point pairs using formula (6). \[\varpi=\left\{(\mathbf{p}_{i},\mathbf{p}_{i^{\prime}})\mid\mathbf{p}_{i}\in\mathbb{F}_{e}^{k}\cup\mathbb{F}_{s}^{k},\mathbf{p}_{i^{\prime}}\in\mathbb{F}_{e^{\prime}}^{k}\cup\mathbb{F}_{s^{\prime}}^{k}\right\} \tag{6}\] where \(\mathbf{p}_{i}\) represents features from the edge set \(\mathbb{F}_{e}^{k}\) or the planar group \(\mathbb{F}_{s}^{k}\) of the current scan, and \(\mathbf{p}_{i^{\prime}}\) corresponds to its closest corresponding point from the feature set of the last scan of the point cloud or the map. **Reliable Associations via Consistent Graph.** In the second stage, a graph-based correspondence validation algorithm is introduced. Beginning with the initial putative associations \(\varpi\) from the KD-tree, a compatibility graph is constructed based on the principle of geometric consistency.
Before delving into the concept of geometric consistency, let's assume the existence of two correct associations, \((\mathbf{p}_{i},\mathbf{p}_{i^{\prime}})\) and \((\mathbf{p}_{j},\mathbf{p}_{j^{\prime}})\), which share an identical set of transformation parameters denoted as \((\mathbf{R},\mathbf{T})\). These two pairs of associations can be formulated as: \[\mathbf{p}_{i^{\prime}}=\mathbf{R}\mathbf{p}_{i}+\mathbf{T} \tag{7}\] \[\mathbf{p}_{j^{\prime}}=\mathbf{R}\mathbf{p}_{j}+\mathbf{T} \tag{8}\] Theoretically, the Euclidean distance between two target points remains constant across different frames, as expressed by (9), embodying what we refer to as geometric consistency. \[\left\|\mathbf{p}_{i^{\prime}}-\mathbf{p}_{j^{\prime}}\right\|_{2}=\left\| \mathbf{p}_{i}-\mathbf{p}_{j}\right\|_{2} \tag{9}\] We can leverage this constraint to evaluate the compatibility of correspondences within a graph space, rather than the Euclidean space. To illustrate this conveniently, let's assume there are four hypothetical association cases generated from the KD-tree, as depicted in Fig. 3. We can construct a compatibility graph, where each vertex, denoted as \(v_{i}=(\mathbf{p}_{i},\mathbf{p}_{i^{\prime}})\), represents the \(i\)th association relationship. The edges in the graph indicate that the two associations, \(v_{i}\) and \(v_{j}\), are compatible or geometrically consistent. Within the graph displayed in Fig. 3, there exist four associations: \(v_{1}\), \(v_{2}\), \(v_{3}\), and \(v_{4}\). Notably, \(v_{1}\), \(v_{2}\), and \(v_{3}\) are mutually geometrically consistent. Following this, as depicted in Fig. 3, the compatibility graph is constructed using an affinity matrix \(M\). Each entry \(M(i,j)\) in this matrix represents the geometric consistency score of the correspondence pair \((v_{i},v_{j})\) and is calculated quantitatively as: \[S_{c}(v_{i},v_{j})=\exp(-\frac{d(v_{i},v_{j})^{2}}{\sigma^{2}}) \tag{10}\] Here, the term \(d(v_{i},v_{j})\) is defined as: \[d(v_{i},v_{j})=\left\|\mathbf{p}_{i^{\prime}}-\mathbf{p}_{j^{\prime}}\right\|_ {2}-\left\|\mathbf{p}_{i}-\mathbf{p}_{j}\right\|_{2} \tag{11}\] where \(\sigma\) serves as a distance adjustment parameter. Notably, \(S_{c}\) ranges from 0 to 1, achieving 1 for perfect geometric consistency. The diagonal entry \(M(i,i)\) consistently equals 1. A lower score indicates a higher degree of inconsistency. Based on the compatibility graph, we employ a voting rule to assess the quality of each correspondence. This voting mechanism can be expressed as: \[o_{i}=o(v_{i})=\sum_{j=0,j\neq i}^{|\varpi|}\left\lfloor\frac{S_{c}(v_{i},v_{j })}{\eta}\right\rfloor \tag{12}\] where \(\eta\) functions as the voting threshold that determines the compatibility of two associations in the voting process. Additionally, \(|\varpi|\) denotes the cardinality of the correspondence set \(\varpi\). In this scheme, an association subject \(v_{i}\) receives one vote if its consistency score \(S_{c}(v_{i},v_{j})\) with association \(v_{j}\) meets or exceeds the threshold \(\eta\). The final level of consistency in the voting process is determined by all consistent voters. 
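A minimal NumPy sketch of the consistency scoring (10)-(11) and the voting rule (12) is given below; the toy correspondences and parameter values are ours for illustration, and the final filtering rule is the one described after the sketch.

```python
import numpy as np

def consistency_votes(src, dst, eta=0.9, sigma=0.5):
    """Score each putative correspondence (src[i] <-> dst[i]) by geometric-consistency voting.

    Builds the affinity matrix M of (10)-(11) and counts, for every correspondence,
    how many other correspondences are compatible with it at level eta, as in (12).
    """
    src, dst = np.asarray(src), np.asarray(dst)           # (N, 3) matched points
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None, :] - dst[None, :, :], axis=-1)
    M = np.exp(-((d_dst - d_src) ** 2) / sigma ** 2)       # pairwise consistency scores S_c
    votes = (M >= eta).sum(axis=1) - 1                     # one vote per compatible partner, minus the diagonal
    return M, votes

# Toy usage: three pairs consistent under a pure translation plus one corrupted match
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 2, 0]])
dst = src + np.array([0.5, 0.0, 0.0])
dst[3] = [9.0, 9.0, 0.0]                                   # corrupt the last correspondence
M, votes = consistency_votes(src, dst)
keep = votes >= 0.5 * len(votes)                           # e.g., drop scores below a fraction of all candidates (rule described below)
```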
Following the completion of our voting pipeline, we can represent the sequence of voting results in descending order as follows:

\[\mathbb{O}=\{o_{i}\mid o_{1}>o_{2}>\cdots>o_{i},i\in[1,|\varpi|]\} \tag{13}\]

In the event that a correspondence candidate \(v_{i}\) receives a voting score \(o_{i}\) lower than \(x\%\) of the total number of association candidates, it is considered an unreliable association and is subsequently filtered out. After removing these outlier associations, we obtain the final set of reliable associations along with their corresponding scores:

\[\mathbb{O}^{\prime}=\{o_{i}\mid o_{i}\in\mathbb{O},o_{i}>x\left|\mathbb{O}\right|\} \tag{14}\]

\[\varpi^{\prime}=\{v_{i}\mid v_{i}\in\varpi,o(v_{i})\in\mathbb{O}^{\prime}\} \tag{15}\]

Fig. 4: The demonstration of initial feature correspondences generated by KD-tree. In each correspondence, the red point is the source feature from the current scan, and the blue one is its corresponding target object from the last scan or map. Ellipses indicate incorrect data associations of one scan.

In analyzing the computational complexity of our graph-based matching algorithm with \(N\) correspondences, the construction of the compatibility graph and the correspondence ranking using quicksort have time complexities of \(O(N^{2})\) and \(O(N\log N)\), respectively. This results in a total time complexity of \(O(N^{2}+N\log N)=O(N^{2})\) for our graph-based voting algorithm. To maintain real-time performance, we partition the perceptual space into \(n\) subregions. Correspondence relationships within each subregion form subgraphs, which are processed in both the odometry and mapping stages. In the odometry stage, each subregion processes around 200 correspondences, with an average total processing time of approximately 3 ms. During the mapping stage, each subregion deals with roughly 350 correspondences, with an average total processing time of about 7 ms. This division of correspondences across subregions ensures both real-time performance and accurate results.

### _Consistency-Guided LiDAR Odometry_

In LiDAR SLAM systems, odometry is pivotal for refining initial poses through scan-to-scan point cloud matching. The odometry module usually provides high-frequency but somewhat imprecise pose estimations, acting as an initial input for the mapping module. More accurate initial transformations estimated by the odometry module accelerate the convergence of the final robot pose estimation, resulting in a reduction in the computational cost of the mapping back-end. In light of this, we propose a novel LiDAR odometry mechanism in which pose optimization is guided by the voting results from the compatibility graph.

In our odometry module, we aim to optimize the transformation \(T_{k}^{k-1}\in SE(3)\) representing the motion from the \(k\)th frame to the \((k-1)\)th frame, and update the global pose \(T_{k}^{W}\in SE(3)\) of the point cloud in the \(k\)th frame. Prior to optimization, we correct motion distortions in the point cloud by assuming uniform motion. Similar to LOAM [1], we define two types of residual terms.
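Before turning to the residual terms, the thresholding rule of Eqs. (13)-(15) amounts to a few lines; the fraction `x` used below is an assumed value.

```python
def filter_correspondences(pairs, votes, x=0.15):
    """Keep only associations whose vote count clears the threshold of Eq. (14).

    pairs: list of (p_i, p_i_prime) putative correspondences (the set varpi)
    votes: vote counts o_i, e.g. from consistency_votes()
    x:     fraction of |varpi| used as the reliability threshold (assumed value)
    """
    threshold = x * len(pairs)
    keep = [i for i, o in enumerate(votes) if o > threshold]
    reliable_pairs = [pairs[i] for i in keep]    # the set varpi'
    reliable_votes = [votes[i] for i in keep]    # the set O'
    return reliable_pairs, reliable_votes
```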
The first term is the point-to-line distance residual, given by: \[f_{e}^{L}(v_{i})=\frac{\left|(\tilde{\mathbf{p}}_{(i,k)}^{L}-\mathbf{p}_{(i^{ \prime},k-1)}^{L})\times(\tilde{\mathbf{p}}_{(i,k)}^{L}-\mathbf{p}_{(j^{ \prime},k-1)}^{L})\right|}{\left|\mathbf{p}_{(i^{\prime},k-1)}^{L}-\mathbf{p}_ {(j^{\prime},k-1)}^{L}\right|} \tag{16}\] In (16), \(\tilde{\mathbf{p}}_{(i,k)}^{L}\!=\!T_{k}^{k-1}\mathbf{p}_{(i,k)}^{L}\), where \(\mathbf{p}_{(i,k)}^{L}\!\in\!\mathbb{F}_{e}^{k}\) represents a corner feature. \(\mathbf{p}_{(i^{\prime},k-1)}^{L}\in\mathbb{F}_{e}^{k-1}\) represents the closest object from the same laser beam channel to \(\tilde{\mathbf{p}}_{(i,k)}^{L}\). The point-pair \((\mathbf{p}_{(i,k)}^{L},\mathbf{p}_{(i^{\prime},k-1)}^{L})\in\varpi^{\prime}\) signifies a reliable correspondence identified through our graph-based two-stage feature matching. Additionally, \(\mathbf{p}_{(j^{\prime},k-1)}^{L}\in\mathbb{F}_{e}^{k-1}\) is another nearest neighbor of \(\tilde{\mathbf{p}}_{(i,k)}^{L}\) from different laser channel. \(\mathbf{p}_{(i^{\prime},k-1)}^{L}\) and \(\mathbf{p}_{(j^{\prime},k-1)}^{L}\) together form a line geometric residual term. For the planar residual, we have: \[f_{s}^{L}(v_{i})=\mathbf{n_{s}}\cdot(\tilde{\mathbf{p}}_{(i,k)}^{L}-\mathbf{p }_{(i^{\prime},k-1)}^{L}) \tag{17}\] where \[\mathbf{n_{s}}\!=\!\frac{(\mathbf{p}_{(i^{\prime},k-1)}^{L}-\mathbf{p}_{(j^{ \prime},k-1)}^{L})\!\times\!(\mathbf{p}_{(i^{\prime},k-1)}^{L}-\mathbf{p}_{(l^ {\prime},k-1)}^{L})}{\left|(\mathbf{p}_{(i^{\prime},k-1)}^{L}-\mathbf{p}_{(j^{ \prime},k-1)}^{L})\!\times\!(\mathbf{p}_{(i^{\prime},k-1)}^{L}-\mathbf{p}_{(l^ {\prime},k-1)}^{L})\right|} \tag{18}\] In (17), \(\tilde{\mathbf{p}}_{(i,k)}^{L}=T_{k}^{k-1}\mathbf{p}_{i,k}^{L}\) where \(\mathbf{p}_{i,k}^{L}\in\mathbb{F}_{s}^{k}\) is a planar feature. \(\mathbf{p}_{(i^{\prime},k-1)}^{L}\) and \(\mathbf{p}_{(j^{\prime},k-1)}^{L}\) are the first and second closest objects to the projected feature \(\tilde{\mathbf{p}}_{(i,k)}^{L}\) from the same laser beam channel. Additionally, \(\mathbf{p}_{(i^{\prime},k-1)}^{L}\) is the other nearest point from a different channel. These three points from the \(k-1\)th scan collectively construct a plane to establish a point-to-plane residual. Certainly, \((\mathbf{p}_{i,k}^{L},\mathbf{p}_{(i^{\prime},k-1)}^{L})\in\varpi^{\prime}\) is also a valid association obtained through the two-stage feature matching process. In section II.B, we introduced the compatibility graph to filter out unreliable associations, mitigating issues that could degrade optimization. Each correspondence is associated with a consistent voting score indicating its level of reliability. Consequently, we understand the potential positive contribution of each association. Leveraging the voting results from the two-stage feature matching process, we assign higher weights to these more reliable associations. The process of designing custom weights can be formulated as follows: \[W_{i}=\begin{cases}\alpha\cdot\frac{\omega_{i}-\omega_{min}}{\omega_{max}-o_{ min}},i\in[0,\lambda\,|\mathbb{O}^{\prime}|]\\ 1,i\in(\lambda\,|\mathbb{O}^{\prime}|,|\mathbb{O}^{\prime}|]\end{cases} \tag{19}\] The associations in the top \(\lambda\%\) of the ordered set \(\mathbb{O}^{\prime}\) receive custom optimization weights. \(o_{min}\) and \(o_{max}\) represent the minimum and maximum scores in the set \(\mathbb{O}^{\prime}\), and \(\alpha\) is a scale factor. Additionally, the weights for the remaining associations remain unaltered. 
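The two geometric residuals of Eqs. (16)-(18) and the consistency-guided weights of Eq. (19) can be sketched as follows; `alpha` and `lam` are illustrative values, and the points are treated as plain 3-vectors already projected into the previous frame.

```python
import numpy as np

def point_to_line_residual(p, a, b):
    """Distance from projected corner feature p to the line through a and b (Eq. 16)."""
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(a - b)

def point_to_plane_residual(p, a, b, c):
    """Signed distance from projected planar feature p to the plane through a, b, c (Eqs. 17-18)."""
    n = np.cross(a - b, a - c)
    n = n / np.linalg.norm(n)
    return float(np.dot(n, p - a))

def association_weights(votes, alpha=2.0, lam=0.3):
    """Custom weights of Eq. (19): boost the top lam fraction of reliable associations."""
    votes = np.asarray(votes, dtype=float)
    order = np.argsort(-votes)                       # most consistent first
    w = np.ones_like(votes)
    top = order[: int(lam * len(votes))]
    o_min, o_max = votes.min(), votes.max()
    if o_max > o_min:
        w[top] = alpha * (votes[top] - o_min) / (o_max - o_min)
    return w
```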
Finally, the pose is estimated by minimizing the total of weighted residual terms: \[\min_{T_{k}^{k-1}}\sum W_{i}f_{e}^{L}(v_{i})+\sum W_{j}f_{s}^{L}(v_{j}) \tag{20}\] With the aid of cost function (20), we can determine the pose \(T_{k}^{k-1}\) using the Levenberg-Marquardt method [17] as follows: \[T_{k}^{k-1}\gets T_{k}^{k-1}-(J^{T}J+\lambda diag(J^{T}J))^{-1}J^{T}f \tag{21}\] In the optimization process, where \(\lambda\) is the Lagrange multiplier, \(J\) is the Jacobian matrix, and \(f\) is the residual vector term, equation (21) iterates to minimize the cost function (20) and obtain the transformation \(T_{k}^{k-1}\). The residual terms linked to these highly reliable correspondences, with increased weights, play a predominant role during optimization. As a result, the gradient descent direction and parameter updates are predominantly guided by these high-quality associations, leading to a more accurate and closer convergence to the ground truth in pose estimation. ### _Lightweight LiDAR Mapping_ The mapping module, typically the back-end, handles precise global pose estimation but at a lower frequency. However, we now present a streamlined mapping module that balances accuracy and efficiency. During the two-stage feature matching in the mapping module, a KD-tree identifies the neighborhood set \(C\), consisting of the five nearest objects to each feature \(\mathbf{p}i\) from the mapped cloud. In the second graph-based stage, we assign the centroid \(\mathbf{p}i^{\prime},m\) of \(C\) as the temporary corresponding object of feature \(\mathbf{p}i\), establishing feature correspondence relationships \((\mathbf{p}i,\tilde{\mathbf{p}}_{i^{\prime},m})\). These relationships are utilized to construct the compatibility graph and perform voting. While the compatibility graph is used to remove unreliable correspondences during mapping, the voting results are not utilized to weigh the importance of each association. In the optimization process of the mapping module, two types of residual terms are constructed: a point-to-line residual term and a point-to-plane residual term. The residual terms \(f_{e}^{M}(v_{i})\) and \(f_{s}^{M}(v_{j})\) are formulated using the same equations as (16) and (17), respectively, which aligns with the LOAM-based solution [1]. The cost function is defined as: \[\min_{T_{k}^{M}}\sum f_{e}^{M}(v_{i})+\sum f_{s}^{M}(v_{j}) \tag{22}\] In this optimization process, we estimate the global pose \(T_{k}^{M}\) using the Levenberg-Marquardt method [17] without any preferential treatment among the residual terms. Thanks to the removal of unreliable associations and the guidance from the two-stage feature matching, odometry poses converge quickly. This provides a more accurate initial transformation estimation for the mapping module, resulting in a faster and more precise global pose calculation. The corresponding results are presented in section IV. B. ## III Experiments ### _Experiment Setup_ To evaluate the performance of our Light-LOAM SLAM system, we conducted a series of experiments using both the KITTI odometry dataset [18] and real-world environments. Comprehensive validations, including ablation studies and accuracy assessments, are performed on the KITTI dataset. The experiments are conducted on a laptop with an octa-core 3.3GHz processor and 16GB of memory. Real-world testing is carried out on a UAV equipped with an Ouster OS1-32-U LiDAR sensor and a DJI Manifold-2G companion computer featuring an Arm Cortex-A57 CPU and 8GB of memory. 
To quantitatively evaluate the accuracy of our SLAM system and facilitate comparison with other approaches, we employed the Absolute Trajectory Error (ATE) metric [19]. The ATE metric measures the disparity between the estimated poses generated by our system and the ground-truth pose values.

### _Experiment on KITTI_

In this section, we conducted two types of experiments: ablation studies and validations of our Light-LOAM system. To evaluate its performance, we compared Light-LOAM with state-of-the-art LiDAR-based systems, including LOAM [1], LeGO-LOAM [2], and HDL-Graph-SLAM [20]. Notably, for comparison, we used A-LOAM\({}^{1}\), an advanced implementation of the LOAM system, instead of the older LOAM version. Footnote 1: [https://github.com/HKUST-Aerial-Robotics/A-LOAM](https://github.com/HKUST-Aerial-Robotics/A-LOAM)

#### III-B1 Ablation Study

To assess the impact of our advanced algorithm on the Light-LOAM SLAM system, we conducted ablation studies on the inconspicuous feature selection algorithm and the graph-based two-stage feature matching method. For the evaluation of the non-conspicuous feature selection method, Our(a) can be viewed as the LOAM system with the inconspicuous feature selection added. The result presented in Table III clearly demonstrates that the non-conspicuous feature selection method improves the pose estimation accuracy of the Light-LOAM system. This suggests that the inconspicuous selection strategy enhances the ability of our Light-LOAM to select more reliable feature samples. Our(b) represents the Light-LOAM version with only the two-stage graph-based feature matching module, and its localization result in Table III indicates a substantial improvement in the accuracy of pose estimation.

Additionally, we assess the effectiveness of our graph-based matching method by comparing it with the LOAM system in the front-end odometry stage. In this ablation experiment, there are no differences between Light-LOAM and LOAM, except for the inclusion of our graph-based feature matching method. The results in Table II emphasize the improved precision of the odometry's estimated poses when utilizing the graph-based feature matching method. Furthermore, the trajectory results for sequence 09, as shown in Fig. 5, also demonstrate that the path estimated by the front-end is closer to the ground truth with the assistance of the graph-based feature matching mechanism. Our graph-based feature matching approach excels at filtering out incorrect correspondence relationships and significantly expedites the convergence of pose optimization. In essence, this method shows promise in achieving accurate pose estimation with a limited number of point-pairs, guided by our graph-based feature matching strategy.

Fig. 5: Trajectories estimated by the different front-end odometry modules for the KITTI 09 sequence.

Fig. 6: Ground truth and estimated trajectories for the KITTI 02 sequence.

#### III-B2 Validation of Light-LOAM

In this subsection, we evaluate the performance of our Light-LOAM SLAM system in terms of accuracy and efficiency. We start by comparing it with state-of-the-art solutions, as shown in Table I. The results clearly indicate that our Light-LOAM system consistently outperforms others in terms of average performance. Even in challenging scenarios, such as sequences 01 and 02 of the KITTI dataset, Light-LOAM maintains its strong performance. For a visual representation of these results, refer to Fig. 6, where our Light-LOAM exhibits the lowest trajectory errors.
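For reference, the ATE figures quoted above reduce, in their simplest form, to a root-mean-square translational error between time-aligned trajectories; the sketch below omits the SE(3)/Sim(3) alignment step that full benchmark tooling normally applies.

```python
import numpy as np

def absolute_trajectory_error(est_xyz, gt_xyz):
    """RMSE of translational error between estimated and ground-truth positions.

    est_xyz, gt_xyz: (N, 3) arrays of time-aligned positions, assumed to be
    expressed in a common frame (no trajectory alignment is performed here).
    """
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```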
These findings underscore the robustness and effectiveness of Light-LOAM, especially in environments where other SLAM algorithms struggle to provide accurate pose estimations. In addition to assessing accuracy, we have also evaluated the efficiency of our Light-LOAM system. Table IV provides an overview of the average runtime results. Notably, our back-end operates with the lowest computational demands, and the mapping module within our SLAM system is lightweight. This improvement in efficiency is attributed to the more precise initial poses provided by consistency-guided odometry. These findings highlight our system's ability to efficiently maintain a global map. Light-LOAM strikes a well-balanced compromise between computational cost and localization accuracy.

### _Experiment on Real-World Environment_

In this section, we assess the localization and mapping performance of Light-LOAM around a real-world dormitory building using a DJI M300 UAV, depicted in Fig. 7(a). The UAV circled the building at 1.2 m/s and returned to the starting point to check for closed-loop trajectory formation. For comparison purposes, we also conducted the same localization tasks using state-of-the-art solutions on the same device. We utilized the results from the RTK-GPS (Real-Time Kinematic Global Positioning System) module on the DJI M300 as our ground truth for comparison. The estimated trajectories are visualized in Fig. 8, and the results are summarized in Table V. Notably, the Absolute Trajectory Error (ATE) of Light-LOAM is 0.991, which is lower than that of its competitors. The outcome of the localization and mapping performance is presented in Fig. 7, demonstrating that our mapping results exhibit high accuracy and are capable of completing loop closures without the need for additional loop closure techniques in real-world environments. These results underscore the effectiveness and precision of our Light-LOAM system in real-world scenarios.

Fig. 7: Light-LOAM system in a real-world environment. (a) DJI M300 UAV for experiment. (b) The campus building. (c) Mapping result of Light-LOAM.

## IV Conclusion and Future Work

In this letter, we introduce a lightweight LiDAR SLAM system, Light-LOAM, that employs graph-based matching techniques for efficient and accurate pose estimation. Departing from traditional LOAM-based approaches, we propose a non-conspicuous feature extraction strategy for obtaining stable features. Our graph-based two-stage feature matching method assesses the consistency of associations, filtering out unreliable correspondences. The consistency-guided odometry module provides reliable initial pose estimations, and the lightweight mapping module completes the localization and mapping tasks. Experiments on the KITTI odometry dataset and in real-world scenarios show Light-LOAM outperforming state-of-the-art solutions in accuracy and efficiency. The consistency graph effectively filters out outlier associations, enabling high-quality pose optimization with a limited number of feature samples. Future work involves integrating IMU data to mitigate point cloud distortion and designing a more robust graph-based feature matching method for enhanced reliability in data associations, leading to more accurate pose estimations.
2301.12041
Vehicle-to-Grid Fleet Service Provision considering Nonlinear Battery Behaviors
The surging adoption of electric vehicles (EV) calls for accurate and efficient approaches to coordinate with the power grid operation. By being responsive to distribution grid limits and time-varying electricity prices, EV charging stations can minimize their charging costs while aiding grid operation simultaneously. In this study, we investigate the economic benefit of vehicle-to-grid (V2G) using real-time price data from New York State and a real-world charging network dataset. We incorporate nonlinear battery models and price uncertainty into the V2G management design to provide a realistic estimation of cost savings from different V2G options. The proposed control method is computationally tractable when scaling up to real-world applications. We show that our proposed algorithm leads to an average of 35% charging cost savings compared to uncontrolled charging when considering unidirectional charging, and bi-directional V2G enables additional 18% cost savings compared to unidirectional smart charging. Our result also shows the importance of using more accurate nonlinear battery models in V2G controllers and evaluating the cost of price uncertainties over V2G.
Joshua Jaworski, Ningkun Zheng, Matthias Preindl, Bolun Xu
2023-01-28T01:15:01Z
http://arxiv.org/abs/2301.12041v1
# Vehicle-to-Grid Fleet Service Provision ###### Abstract The surging adoption of electric vehicles (EV) calls for accurate and efficient approaches to coordinate with the power grid operation. By being responsive to distribution grid limits and time-varying electricity prices, EV charging stations can minimize their charging costs while alding grid operation simultaneously. In this study, we investigate the economic benefit of vehicle-to-grid (V2G) using real-time price data from New York State and a real-world charging network dataset. We incorporate nonlinear battery models and price uncertainty into the V2G management design to provide a realistic estimation of cost savings from different V2G options. The proposed control method is computationally tractable when scaling up to real-world applications. We show that our proposed algorithm leads to an average of 35% charging cost savings compared to uncontrolled charging when considering unidirectional charging, and bi-directional V2G enables additional 18% cost savings compared to unidirectional smart charging. Our result also shows the importance of using more accurate nonlinear battery models in V2G controllers and evaluating the cost of price uncertainties over V2G. Energy storage, stochastic optimal control, electric vehicle charging, vehicle-to-grid. ## I Introduction The International Energy Agency's (IEA) roadmap to achieve net zero greenhouse gas (GHG) emissions by 2050 calls for an increase in the share of renewable energy in total global power generation and global transport sector electrification from 29% and 2% in 2020 to 90% and 45% in 2050, respectively [1, 2]. However, increasing energy supply intermittency and electric vehicles (EV) charging demand can put significant stress on the grid without adequate management and control, such as peak charging demand during periods of low wind and solar power generation [3]. Smart charging integrates external data, such as distribution grid constraints or time-varying electricity prices, into unidirectional (V1G) or bidirectional (V2G) power transfer management between the grid and the EV charging station (EVCS) [4]. Smart charging and V2G management have emerged as a key strategy to accelerate transportation electrification to support an increasingly renewable-powered grid operation, minimizing EV owners' charging cost, and leading new business models and job opportunities [3, 5, 6, 7]. As the cost of V2G-compatible chargers continues to decline [8], software development becomes pivotal to efficiently aggregate EVs and optimally control their V2G responses while meeting the designated charging targets. While plenty of works have conducted techno-economic analyses (TEA) of V2G [9, 10, 11, 12], few have considered complicating factors that practical V2G implementations must address. We group these factors into three categories. The first is battery model nonlinearities, in which the battery voltage, current, efficiency, and degradation depend on the state of charge (SoC). Controlling EV batteries accurately according to their nonlinear characteristics is crucial to strike a balance between ensuring battery security and economic benefits [13]. The second is grid uncertainty, that the distribution grid load and electricity prices are time-varying and uncertain [7]. Uncertainties are often neglected in TEA primarily due to computation difficulties, but practical V2G implementations must consider uncertainties in price-response applications. 
The last is computational scalability, the V2G management software must manage tens to hundreds of EVs without consuming monstrous computing power. As we will later be shown in our results, while the aggregate benefit of V2G is pivotal for future grids, economic saving for each individual EV is not significant to justify investment in specialized computing hardware. This paper presents a computation-efficient V2G management framework and a realistic case study integrating the aforementioned complicating factors in practical V2G implementations. Our contributions include: * We propose a computation-efficient and scalable V2G management controller which optimizes V2G charging using accurate nonlinear battery models under stochastic electricity prices. Combining a stochastic dynamic programming algorithm with a least-larity first (LLF) scheduling algorithm [14], our proposed V2G framework minimizes charging costs for EVCS to meet charging targets and distribution grid limits. * Using real-world electricity price and EV charging behavior data, our paper provides a first-of-its-kind case study to demonstrate cost savings an EVCS can realistically achieve in various V2G settings. * Our case study compares uncontrolled charging, V1G, and V2G with and without nonlinear storage models and price uncertainties. The results quantify the impact of various charging and model options and guide EVCS planning and technology developments. The remainder of this paper is organized as follows. Section II presents the literature review. Section III describes the system model and formulates the EV charging cost minimization problem. Section IV presents the solution algorithm to the formulated charging problem. Section V includes simulation results and discussion. Section VI concludes the paper. ## II Literature Review Previous literature has proposed multiple heuristic, optimization, or learning-based approaches to conduct smart charging mostly considering linear battery models with constant power rating and efficiency [15, 16]. For example, Liu et al. [17] formulate an EVCS controller as a bi-level program and use a genetic algorithm to minimize charging costs under a time-of-use (ToU) tariff. Similarly, Long et al. [9] use an ordinal optimization approach to minimize EVCS operating cost under a ToU scheme but add aggregated EV demand, maintenance costs, V2G capability, hydrogen storage, and renewable energy generation to the formulation. Additionally, Cao et al. [10] propose a custom actor-critic algorithm to minimize charging costs and peak charging load for a V2G-enabled EVCS, which results in a 24% energy cost savings when compared to uncontrolled charging. While most prior V2G literature included benchmarks to demonstrate the effectiveness of the proposed algorithm [9, 10, 17], they often assume constant EV battery parameters (power ratings, efficiency) and do not include a penalty term to minimize battery cycling. Lab experiments and real-world data have shown that Li-ion battery power ratings and efficiency strongly depend on SoC, especially in nickel-cobalt-based batteries, which are the most common choice for EVs [18, 19, 20]. In EV smart charging or V2G applications, which aim to provide high charging power with low-cost power conversion hardware. Battery power rating and efficiency are sensitive to the SoC, and the charge or discharge power must be carefully controlled to ensure battery thermal security and reduce degradation rates [21]. 
A common protocol for EV charging management is the CC-CV (constant current-constant voltage) method [22, 23, 24], that the battery charges with constant current until reaching a high SoC level and then gradually reduces the current to maintain a constant charging voltage to prevent over-voltage damages. Modeling battery characteristics such as CC-CV protocols in V2G management is critical to maximize cost savings and ensure battery security, but it requires representing battery power and efficiencies as functions of SoC instead of using constant values, which introduces significant computation complexities [25, 26, 27, 28]. Some EVCS smart charging algorithms partially accounted for nonlinear charging/discharging characteristics [11, 12, 29, 30], but few were able to model all nonlinear factors in a computation efficient approach. Starting from an EVCS profit maximization problem formulated as mixed-integer linear programming (MILP), Mouli et al. [11] accounts for SoC-dependent power ratings in which the maximum power drops linearly after 80% SoC. Ebrahimi et al. [12] models battery degradation as dependent on both SoC and depth of discharge (DoD), and Schwenk et al. [29] incorporated both nonlinear efficiency and nonlinear degradation terms. Lee et al. [30] implemented an adaptive scheduling algorithm that formulates custom objectives and constraints as a convex program and computes an optimal charging schedule in real-time while considering feeder limits and battery tail capacity reclamation at high SoCs with a data-driven approach. However, the reviewed control solutions do not model all power ratings, efficiencies, and cycling penalties/degradation as nonlinear nor do they provide a method to incorporate different behavior curves. This property will be pivotal for the adaptability of EVCS control algorithms to fast-charging applications and manufacturer-customized EV battery management systems (BMS). Besides battery models, price uncertainty is another complicating factor that may impact the EVCS cost estimations but was rarely studied in V2G due to computation difficulties. Most literature on smart charging assumes perfect price forecasts or predetermined ToU tariffs [9, 10, 11, 12, 17, 29, 30]. As EV capacity surges, future V2G projects will most likely arbitrage real-time electricity prices that are highly volatile and uncertain, and EVCS must consider price uncertainties. Frendo et al. and Ahmad et al. [31, 32] use day-ahead prices to forecast the next day's LMPs, incorporating EVCS controller using MILP formulation. Zhang et al. [33] develop the charging control deep deterministic policy gradient, which models EV charging as a Markov decision process (MDP) and optimizes user satisfaction and charging costs with the output of a long short-term memory (LSTM) network that approximates sequential energy price dynamics. Results from this research demonstrated the significance of modeling price uncertainties, but the computing approach is not scalable to address nonlinear storage models. As summarized in Table I, the reviewed literature proposes EVCS algorithms that partially account for nonlinear EV battery behavior and price uncertainty. 
We close this research gap by developing a real-time EVCS V2G control algorithm based on analytical nonlinear stochastic dynamic programming (SDP) and a least-laxity first (LLF) scheduling approach that adds nonlinear EV battery behavior and price uncertainty to the EVCS control formulation while minimizing operating costs and complying with EV charging and battery dynamics such as CC-CV charging profiles, facility power limits, and users' charging targets. While our proposed method accounts for system non-linearity and uncertainty, we demonstrate that it is scalable and computationally tractable in practical scenarios. ## III System Model and Formulation We take the perspective of a public EVCS operator whose electricity cost settles using time-varying wholesale electricity real-time prices. The objective of the EVCS is to minimize the electricity cost under either smart charging (V1G) or V2G operation. The EVCS has enough chargers, all with the same specifications so that no rejection-of-service event occurs during the simulation time frame and complies with \(L\), the maximum power rating of the charging station, at all time steps. ### _EV Charging Sessions_ We consider a total of \(K\) EVs accessing the EVCS during the considered period, \(\mathcal{K}=\{1,..,K\}\) is the set of EVs. EV \(k\in\mathcal{K}\) arrives on time step \(A_{k}\) with a starting SoC of \(S_{k}\), and departs on time \(D_{k}\) with a charging target SoC \(F_{k}\), with \(D_{k}>A_{k}\) and \(F_{k}>S_{k}\). Thus we use \(\mathcal{T}_{k}=\{A_{k},\ldots,D_{k}\}\) to denote the time frame of the current charging session of EV \(k\). At every EV arrival, the controller updates the tuple (\(A_{k},D_{k}\), \(F_{k}\)), which is used as input for the proposed solution algorithm. Additionally, the controller has access to every EV SoC at all times. We also assume the EVCS does not have information or can predict the arrival of EVs, but each EV upon arrival will inform the EVCS its departure time and charging target. ### _Battery Nonlinear Behavior_ All EVs are modeled as having the same battery capacity and SoC-dependent charge power rating (\(B_{k}(e_{t-1,k})\)), discharge power rating (\(P_{k}(e_{t-1,k})\)), single-trip efficiency (\(\eta_{k}(e_{t-1,k})\)) and discharge cost penalty (\(c_{k}(e_{t-1,k})\)) curves. As shown in Figure 1, efficiency, and cycling penalty are modeled as quadratic functions of SoC [18]. The power rating curves resemble a Tesla Model S fast charging curve [34] with custom-defined CC-CV behavior at low and high SoC. The controller has access to a low-resolution version of the battery nonlinear parameter curves to use as an approximation during control policy computations. This resolution gap simulates the challenge of EV battery behavior approximation in online smart charging control. ### _Price Prediction_ The EVCS operator can access the system's day-ahead price (DAP) information and a DAP-based real-time price (RTP) prediction tool. For this work, RTP is modeled as a 1st-order Markov process with 12 nodes per time step trained with historical RTP-DAP bias price data as in setting DB-Dep in [35]. The proposed solution algorithm uses the resulting prediction to provide a control decision that accounts for future price uncertainty. ### _Formulation_ We start with formulating the EVCS V2G fleet management problem including nonlinear battery models, charging station power limits, and a causality control policy constraint with respect to time-varying electricity price uncertainties. 
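Before writing the problem out, the per-session data of Sec. III-A and the SoC-dependent curve lookups of Sec. III-B can be held in a small structure such as the sketch below; the curve shapes and numeric values are illustrative placeholders, not the curves of Fig. 1, and only the 17.2 kW level-2 charger rating is taken from the case study.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ChargingSession:
    arrival: int        # A_k, arrival time step
    departure: int      # D_k, departure time step
    start_soc: float    # S_k, fraction of capacity
    target_soc: float   # F_k, fraction of capacity

# Low-resolution SoC-dependent parameter tables available to the controller
# (10 samples over SoC in [0, 1]); the shapes below are placeholders only.
soc_grid = np.linspace(0.0, 1.0, 10)
charge_rating_kw = 17.2 * np.minimum(1.0, 5.0 * (1.0 - soc_grid))   # CV-style taper near full
discharge_rating_kw = 17.2 * np.minimum(1.0, 5.0 * soc_grid)        # taper near empty
efficiency = 0.97 - 0.05 * (soc_grid - 0.5) ** 2                    # quadratic in SoC

def lookup(curve, soc):
    """Linear interpolation of a parameter curve at the current SoC."""
    return float(np.interp(soc, soc_grid, curve))

session = ChargingSession(arrival=8, departure=17, start_soc=0.10, target_soc=0.80)
print(lookup(charge_rating_kw, 0.9))   # available charge power near a full battery
```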
The objective of the charging station is to minimize the cost of electricity to charge up each EV, including a discharge penalty to avoid frequent cycles that accelerate battery degradation:

\[\min_{b_{t,k},\,p_{t,k}}\;\sum_{k\in\mathcal{K}}\sum_{t\in\mathcal{T}_{k}}\lambda_{t}(b_{t,k}-p_{t,k})+c_{k}(e_{t-1,k})p_{t,k} \tag{1a}\]

in which \(b_{t,k}\) is the charging power of EV \(k\) during time step \(t\), while \(p_{t,k}\) is the discharge power. \(\lambda_{t}\) is the time-varying price of electricity. The second term reduces battery degradation by introducing a discharge penalty cost \(c_{k}(e_{t-1,k})\) as a function of SoC. While the EVCS operator does not assume the EV battery degradation cost, the controller incorporates an EV battery discharge penalty to avoid excessive cycling. Note that \(b_{t,k}\) and \(p_{t,k}\) have been normalized by the chosen simulation time step and have units of energy.

Each EV is subject to the following power and energy constraints (\(t\in\mathcal{T}_{k}\)):

\[0\leq b_{t,k}\leq B_{k}(e_{t-1,k}) \tag{1b}\]
\[0\leq p_{t,k}\leq P_{k}(e_{t-1,k}) \tag{1c}\]
\[e_{t,k}-e_{t-1,k}=-p_{t,k}/\eta_{k}(e_{t-1,k})+b_{t,k}\cdot\eta_{k}(e_{t-1,k}) \tag{1d}\]
\[\underline{E}_{k}\leq e_{t,k}\leq\overline{E}_{k} \tag{1e}\]
\[e_{A_{k}}=S_{k},\quad e_{D_{k}}\geq F_{k} \tag{1f}\]

in which \(e_{t,k}\) is the SoC of EV \(k\) during time step \(t\), \(\underline{E}_{k}\) is the EV minimal SoC limit, and \(\overline{E}_{k}\) is the EV maximum SoC. (1b) and (1c) model the EV power ratings, (1d) models the SoC evolution, (1e) models the upper and lower SoC limits, and (1f) models the starting SoC and the SoC charging target. The total charging and discharging power is subject to the station power limit:

\[\sum_{k\in\mathcal{K}}p_{t,k}\leq L \tag{1g}\]
\[\sum_{k\in\mathcal{K}}b_{t,k}\leq L \tag{1h}\]

The control policy must be causal (non-anticipatory) [36] and only depend on past and current information:

\[\{b_{t,k},p_{t,k}\,|\,k\in\mathcal{K},t\in\mathcal{T}_{k}\}\in\text{Causal Control Policies} \tag{1i}\]

Fig. 1: Assumed EV battery SoC-dependent parameter curves for all EVs.

**Remark 1**: _Generalization to different charging scenarios._ (1) provides a generalized formulation for different charging scenarios. In V2G, both \(B_{k}\) and \(P_{k}\) are non-zero, while in single-directional smart charging, or V1G, the EV will not inject power into the grid and thus \(P_{k}\) is set to zero. When assuming a linear battery model, \(P_{k}\), \(B_{k}\), \(c_{k}\) and \(\eta_{k}\) are constants, while in nonlinear battery models these parameters depend on the SoC.

## IV Solution Method

We take a two-step approach to solving the EVCS control problem (1). First, we formulate the charging session of a single EV as a price arbitrage problem, which satisfies all constraints in (1) except the EVCS power limit constraints (1g) and (1h), and solve it using an analytical SDP method [35]. Second, we aggregate the control policies from the EVs in an active session at the current time step and prioritize the distribution of control signals according to an LLF approach [14] to ensure compliance with the EVCS power limit.

### _Decomposition to Arbitrage Problems_

The core of the proposed control method is a single energy storage price arbitrage problem formulated as an SDP [35].
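For orientation, the deterministic, linear-battery special case of problem (1) for a single EV (essentially the flavour of the perfect-forecast benchmark used later) can be written almost verbatim with a convex-optimization layer such as CVXPY; the prices and battery constants below are placeholders, and the nonlinear SoC-dependent curves are deliberately left out because they would make the problem non-convex.

```python
import cvxpy as cp
import numpy as np

T = 24                                       # steps in the charging horizon (1 h each)
price = np.random.uniform(20, 80, T)         # $/MWh, stand-in for real-time prices
B_max, P_max, eta, c_pen = 17.2e-3, 17.2e-3, 0.95, 15.0   # MWh/step, MWh/step, -, $/MWh
E_cap, soc0, soc_target = 0.1, 0.01, 0.08                  # MWh

b = cp.Variable(T, nonneg=True)              # charge energy per step
p = cp.Variable(T, nonneg=True)              # discharge energy per step
e = cp.Variable(T + 1)                       # state of charge

constraints = [e[0] == soc0, e[T] >= soc_target,           # Eq. (1f)
               e[1:] == e[:-1] + eta * b - p / eta,        # Eq. (1d)
               b <= B_max, p <= P_max,                     # Eqs. (1b)-(1c)
               e >= 0, e <= E_cap]                         # Eq. (1e)
cost = cp.sum(cp.multiply(price, b - p) + c_pen * p)       # Eq. (1a)
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("charging cost:", prob.value)
```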
**Remark 2**: _V2G decomposition._ Because we do not assume the EV charging actions would impact market clearing prices, we can decompose (1) into parallel arbitrage problems by relaxing the charging station power limit constraints (1g) and (1h), which are the only coupling factors among all EVs. Each resulting sub-problem becomes an arbitrage problem that maximizes the arbitrage profit (or, equivalently, minimizes the charging session electricity cost) while meeting the final SoC target. We will discuss in later sections how we aggregate results from all EVs to incorporate the EVCS power limits.

To solve the arbitrage sub-problem under the causality policy constraint, we adopt an SDP approach with the following formulation (for simplicity, we omit the EV index \(k\); the formulation below is for a single EV):

\[Q_{t-1}(e_{t-1}\,|\,\lambda_{t})=\max_{b_{t},p_{t}}\;\lambda_{t}(p_{t}-b_{t})-c_{k}(e_{t-1})p_{t}+V_{t}(e_{t}\,|\,\lambda_{t}) \tag{2a}\]
\[V_{t}(e_{t}\,|\,\lambda_{t})=\mathbb{E}_{\lambda_{t+1}}\Big[Q_{t}(e_{t}\,|\,\lambda_{t+1})\,\Big|\,\lambda_{t}\Big] \tag{2b}\]

subject to (1b)-(1e). We model the time-varying price \(\lambda_{t}\) as an order-1 Markov process, as described in III-C, in which the price distribution over a time period \(t+1\) depends on the realized price over \(t\). \(Q_{t-1}(e_{t-1}\,|\,\lambda_{t})\) is the maximized current period profit given the SoC at the start of the time period, while \(V_{t}(e_{t}\,|\,\lambda_{t})\) is the expected value function representing the opportunity value of energy stored in the battery at the end of the time period. In the context of an EV charging session, \(V_{t}(e_{t}\,|\,\lambda_{t})\) represents the minimum expected cost of the remainder of the session based on the current SoC, the current RTP, and the RTP uncertainty model.

### _Solving Nonlinear Battery Models_

We extend the solution approach from [35] by incorporating the SoC-dependency of the battery behavior parameters into the first-order optimality condition expression as

\[q_{t-1,i}(e_{t-1})=\pi_{t,i}\Big(\frac{\partial p_{t}}{\partial e_{t-1}}-\frac{\partial b_{t}}{\partial e_{t-1}}\Big)-c_{k}(e_{t-1})\frac{\partial p_{t}}{\partial e_{t-1}}-\frac{\partial c_{k}(e_{t-1})}{\partial e_{t-1}}p_{t}+v_{t,i}(e_{t})\frac{\partial e_{t}}{\partial e_{t-1}}=0 \tag{3}\]

where \(q_{t-1,i}(e_{t-1})\) is the derivative of \(Q_{t-1}(e_{t-1}\,|\,\lambda_{t})\), or the storage device's marginal opportunity value. According to the Karush-Kuhn-Tucker (KKT) conditions and (1d), we obtain the following:

\[\frac{\partial p_{t}}{\partial e_{t-1}}=\begin{cases}\eta+(p_{t}/\eta)\cdot(\partial\eta/\partial e)&\text{if (1c) not binding}\\ \partial P/\partial e&\text{if (1c) binding}\end{cases} \tag{4a}\]

\[\frac{\partial b_{t}}{\partial e_{t-1}}=\begin{cases}-1/\eta-(b_{t}/\eta)\cdot(\partial\eta/\partial e)&\text{if (1b) not binding}\\ \partial B/\partial e&\text{if (1b) binding}\end{cases} \tag{4b}\]

\[\frac{\partial e_{t}}{\partial e_{t-1}}=\begin{cases}0&\text{if (1b) or (1c) not binding}\\ 1-(1/\eta)\cdot(\partial P/\partial e)+(1/\eta)^{2}\cdot(\partial\eta/\partial e)P+\eta\cdot(\partial B/\partial e)+B\cdot\partial\eta/\partial e&\text{if (1b) and (1c) binding}\end{cases} \tag{4c}\]

By replacing the partial derivative expressions given by (4) in (3) for the full power rating (binding) and partial (non-binding) charging or discharging cases, we obtain an analytical marginal opportunity value function update expression.
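In discretized form, the backward recursion of Eq. (2) can be illustrated by brute-force enumeration over a SoC grid and a small Markov price chain, as in the sketch below with constant battery parameters; all numbers are placeholders, and the analytical updates of Eqs. (3)-(4) exist precisely to avoid this enumeration and to handle the SoC-dependent curves.

```python
import numpy as np

T, n_e, n_p = 24, 51, 3
e_grid = np.linspace(0.0, 1.0, n_e)                  # SoC as a fraction of capacity
prices = np.array([20.0, 50.0, 120.0])               # illustrative price nodes ($/MWh)
P_trans = np.array([[.70, .25, .05], [.20, .60, .20], [.05, .25, .70]])  # node transitions
B, P, eta, c_pen, target = 0.15, 0.15, 0.95, 15.0, 0.8  # per-step limits (fractions of capacity)

# Terminal value: a large linear penalty below the target SoC (Lagrangian-style relaxation)
V = np.where(e_grid >= target, 0.0, -1000.0 * (target - e_grid))
V = np.tile(V, (n_p, 1))                              # one value curve per price node

for t in range(T - 1, -1, -1):
    V_new = np.empty_like(V)
    EV_next = P_trans @ V                             # E[ V_{t+1}(.) | current price node ]
    for ip, lam in enumerate(prices):
        for ie, e in enumerate(e_grid):
            best = -np.inf
            for u in np.linspace(-P, B, 11):          # u < 0: discharge, u > 0: charge
                e_next = e + (eta * u if u > 0 else u / eta)   # Eq. (1d)
                if not (0.0 <= e_next <= 1.0):
                    continue
                cash = lam * (-u) - c_pen * max(-u, 0.0)       # stage profit of Eq. (2a)
                j = int(round(e_next * (n_e - 1)))             # nearest SoC grid point
                best = max(best, cash + EV_next[ip, j])
            V_new[ip, ie] = best
    V = V_new
```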
Note that the expressions in (4) involve the optimization variables \(p_{t}\) and \(b_{t}\). The expressions are solved by approximating \(p_{t}\) and \(b_{t}\) by the power ratings \(B\) and \(P\) corresponding to the current SoC. The full formulation of this equation is deferred to Appendix B. ### _Arbitrage Policy_ We use the developed analytical SDP algorithm to calculate a marginal opportunity value function for each charging session for each EV. At each time step, the control decision \(p_{t}\) and \(b_{t}\) for each connected EV can be determined by comparing the corresponding marginal opportunity value function and the observed realized RTP \(\lambda_{t}\). The value difference between the EV battery's marginal value and the marginal grid price will trigger a charging, discharging, or idling control signal. The full marginal value function and control policy calculation methods are deferred to Appendix C and Appendix B. **Remark 3**: _Lagrangian relaxation of the final SoC constraint._ We apply a Lagrangian relaxation to incorporate the final SoC charging target constraint into the SDP by assuming an arbitrarily large penalty ($1000/MWh$) for not achieving the charging target. This enables the marginal value function corresponding to the EV departing time step to act as an inverse activation function, with a linear penalty cost to the battery SoC until reaching the specified charging target. This enforces the battery to charge regardless of the price when approaching the end of the charging session in V1G and V2G cases to meet the charging target. When applying this control policy with a nonlinear system assumption is that the storage device parameters \(B\), \(P\), \(\eta\), and c used in the elaboration of the control decision are approximations of the real storage device behavior parameters. The control decision will be calculated using a trained model from historical price data and executed in the testing environment. The testing environment will limit the control inputs to a range within the true storage behavior constraints. ### _EV Fleet Control Simulation_ The EVCS control algorithm incorporates nonlinear battery parameter curve approximations, data provided by the EV users (SoC charging targets and session duration), the marginal value function calculation algorithm, an LLF prioritization step, and the control policy outlined in IV-C. The algorithm is executed in real-time as follows: 1. Set \(t\to t+1\). 2. Calculate the marginal value function for the EVs that arrive to the EVCS at time \(t\). For the value function computation, set \(t\) as the starting time, the provided session duration as the time horizon \(T\) and the provided SoC session target as the final SoC \(e^{f}\). The resulting marginal value function will be used as the basis of the control policy for its corresponding EV for the duration of the current session. 3. Identify all connected EVs. Compute the ratio of time elapsed in the current session to the total session duration for each connected EV. Based on the calculated ratios, sort the EVs in descending order. This step is aligned with an LLF [14] scheduling approach, which prioritizes EVs with the least time to achieve their target. 4. In the order defined by step 3), execute the control policy for each EV as described in IV-C (i.e., comparing the EV marginal value to the grid marginal price). If the facility power limit is reached, set the remaining EVs' power control signals to zero and go to step 1). 5. 
Go to step 1) until reaching the target simulation time. Note that the control signals in step 4) will be truncated by the testing environment if they are outside the range of the actual battery behavior model. The LLF sorting step provides a lightweight solution to aggregate the individual value function results and comply with the facility's power limit. This enables the modular nature of the algorithm components and prevents exponential computation time growth as the number of EVs in the EVCS increase. ## V Case Study ### _Data and Experiment Design_ We test the proposed control algorithm using the 2019 New York Independent System Operator (NYISO) price data. Price uncertainty is modeled using a 1st-order Markov process trained with 2016-2018 NYISO price data. We include prices from four zones to demonstrate performance results in different price patterns: NYC, LONGIL, NORTH, and WEST. A 101-sample resolution version of the SoC-dependent battery parameter curves shown in Figure 1 is considered the ground truth and used as the testing environment. We assume that the controller has access to a 10-sample resolution version of the same battery nonlinear parameter curves. Although the proposed method can handle different parameter curves for each EV, we assume identical parameter curves in this case study for simplicity. With this resolution gap between the environment and the valuation process, we demonstrate the effectiveness of the proposed algorithm in providing efficient control with a limited amount of data to approximate the battery model. We consider six scenarios to test the proposed algorithm and establish benchmarks for comparison. The following two scenarios are used as benchmarks: 1. **PF (perfect forecast).** We perform a deterministic optimization using real-time prices in Julia/Gurobi. The optimization problem setup can be found in Appendix D. This benchmark scenario represents the lowest possible EVCS operating cost. 2. **UC (uncontrolled charging).** EVs start charging as soon as they arrive at the EVCS and charge with the maximum allowable power rating at all times until reaching their charging target. Facility power limits are fairly distributed among actively charging EVs. The control logic is implemented in Julia. This is the second benchmark case and represents an EVCS without a control policy or V2G capability. and the remaining scenarios are solved with the proposed custom algorithm implemented in Julia: 1. **NL-V2G.** We perform SDP control assuming V2G capability and approximate EV battery and charger behavior with a 10-sample version of the nonlinear parameter curves. 2. **NL-V1G.** Similar to NL-V2G, but assuming no V2G capability. 3. **L-V2G.** Similar to NL-V2G, but approximating EV battery and nonlinear charger parameters with constant values. Power ratings are set to the nominal charger capability, one-way charging and discharging efficiencies to 95% and marginal battery degradation cost to $15MWh 4. **L-V1G.** Similar to L-V2G, but assuming no V2G capability. The EVCS consists of 21 bi-directional (unless otherwise noted by the scenario being tested) 17.2 kW level-2 chargers and has a 150 kW power limit, leading to an over-subscription ratio of 2.4. We assume 75 users have access to the EVCS and all users own a 100 kWh EV. Users' energy requested, and arrival and departure times are obtained from the Caltech ACN dataset [37], specifically using the 2019 JPL data with energy requests greater than 5 kWh. 
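Pulling the steps of Sec. IV-D together, one simulated time step could look like the following sketch; the charge/discharge decision rule is a simplified stand-in for the full policy, which is deferred to the appendices, and the 17.2 kW figure is the level-2 charger rating of the case study.

```python
def dispatch_time_step(connected_evs, lam_t, facility_limit_kw, t):
    """One simulated time step of the Sec. IV-D control loop (illustrative only).

    connected_evs: list of dicts with keys 'id', 'arrival', 'departure', 'soc',
                   and 'marginal_value' (a callable soc -> $/MWh opportunity value).
    lam_t:         realized real-time price at step t ($/MWh)
    Returns {ev id: signed charger power in kW, positive = charge}.
    """
    # Least-laxity-first ordering: largest fraction of the session already elapsed.
    def elapsed_ratio(ev):
        return (t - ev['arrival']) / max(ev['departure'] - ev['arrival'], 1)

    commands, used_kw = {}, 0.0
    for ev in sorted(connected_evs, key=elapsed_ratio, reverse=True):
        if used_kw >= facility_limit_kw:
            commands[ev['id']] = 0.0              # facility limit reached: idle
            continue
        headroom = facility_limit_kw - used_kw
        value = ev['marginal_value'](ev['soc'])
        if lam_t < value:                          # price below opportunity value: charge
            power = min(17.2, headroom)
        elif lam_t > value:                        # price above opportunity value: discharge
            power = -min(17.2, headroom)
        else:
            power = 0.0
        used_kw += abs(power)
        commands[ev['id']] = power
    return commands
```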
Figure 2 shows the distribution of arrival and departure times of the dataset that will be used for the simulation. A starting SoC of 10% is assumed for all arrivals. SDP control is performed in all scenarios using the described 1st-order Markov process price prediction. **Remark 4**: **Charging target compliance.** For a charging session to be successful in any of the scenarios, the control algorithm must achieve a final SoC within 5% of the user's SoC charging target. We define the charging target compliance performance metric as the ratio between successful and total charging sessions. Note that the user may input a target that is infeasible due to a short session duration. The charging target compliance metric will exclude these infeasible cases. All computations were performed on a personal laptop with an Intel Core i9-10885H 2.5GHz CPU and 32 GB memory. The benchmark (PF) using MILP is solved using Gurobi [38], while the proposed algorithms and the EVCS simulation are implemented in Julia. ### _Cost Savings and Charging Target Compliance_ Figure 3 shows a sample charging session using both NL-V2G and NL-V1G scenarios. V2G achieves cost reduction by charging during low price periods and capturing additional revenue through energy arbitrage, and V1G reduces EVCS operating costs only through its smart charging capability. Note that embedding the final SoC requirement in the value-to-go function calculation results in successful charging sessions for all the shown cases while minimizing total cost and EV battery cycling during the session. Figure 4 shows the EVCS operating cost savings achieved by the proposed algorithm and the PF scenario as a percentage of the uncontrolled charging (UC) scenarios as well as the charging compliance results across the four considered NYISO zones. NL-V2G results in an average operating cost savings of 35% over UC, with savings reaching up to 56% in the WEST zone while maintaining average charging compliance of 95%. The cost savings average drops to 17% across zones if NL-V1G is used. This demonstrates the impact of bi-directional charging capability on EVCS operating costs. Both L-V2G and L-V1G result in increased average cost savings (37% and 18% respectively) compared to NL-V2G and NL-V1G, but at the expense of charging compliance (83% for L-V2G and 73% for L-V1G). L-V2G and L-V1G increased cost savings come from estimating the nonlinear power ratings as constant, which causes mismatches between the control signal and the actual power rating capability at a given time step. This mismatch leads to EVs not achieving their session charging target, which results in lower power purchased from the grid and reduced charging target compliance. Additionally, Figure 5 shows the cumulative costs of the V2G/V1G PF, NL-V2G, NL-V1G, and UC cases for the simulated year. ### _V2G Energy Equivalent Mileage_ Most EVs have a battery warranty based on the production time and the drive mileage. To this end, V2G puts on additional discharges to the battery and may accelerate the expiration of the manufacturer warranty. In this section, we study how much additional energy is discharged in V2G and the equivalent mileage consumption to understand how much stress V2G would put on battery warranties. Table II compares the total EVCS energy input (charged) and output (discharged) in NL-V2G and NL-V1G scenarios across all NYISO zones. On average, 7.3% of the total energy charged in the NL-V2G scenario is used for discharging to arbitrage electricity prices. 
From these results, an equivalent mileage value for an EV participating in a V2G EVCS can be estimated by translating the energy output from the station to mileage through an EPA EV range estimate [39]. This value becomes relevant when calculating the impact of V2G on EV warranty, which is regulated to be the first of 8 years or 100,000 mi (15 years or 150,000 mi proposed in California) [40]. As a case study, we use the Tesla Model X EPA estimated range of 348 mi for a full charge (100 kWh) [41]. Subsequently, a fraction of yearly energy output from the NYC EVCS (9.6 MWh), proportional to the number of charging sessions corresponding to a particular EV (139 sessions out of a total of 2967), is equivalent to a mileage value of 1565 mi. This corresponds to a 12% increase in the average mileage driven per year (USDOT average is 13500 mi [42]), which would lead to passing 100,000 mi approximately nine months earlier than the baseline average mileage. Additionally, using the EVCS total energy output, an Fig. 3: Performance of the proposed control policy in a 9-hour charging session in both (a) NL-V2G and (b) NL-V1G scenarios Fig. 2: Histogram of EVCS users’ arrival and departure times from the ACN (JPL 2019) Dataset. EPA estimated range of 348 mi and the cost savings achieved by V2G result in an incremental EVCS operating cost savings benefit of $0.125/kWh and $0.036/mi. Using our proposed method to estimate V2G equivalent mileage under different control policies would increase accuracy when performing long-term EVCS TEA studies. ### _Computation Times_ Table III shows computation times for 1-year simulations of three cases all using nonlinear battery models: 1) PF-MILP: nonlinear V2G optimization with perfect price forecast formulated using MILP and solved using Gurobi; 2) PF-DP: nonlinear V2G optimization with perfect price forecast solved with the proposed algorithm, note that in this case there is no uncertainty, so the proposed algorithm is essentially dynamic programming; 3) NL-V2G: nonlinear V2G optimization solved using the proposed stochastic dynamic programming algorithm. Hence, PF-MILP and PF-DP are deterministic, while SDP is stochastic. The computation time result shows the computation tractability of the proposed algorithm in both deterministic and stochastic optimization. The comparison between PF Fig. 4: EVCS operating cost savings of all scenarios simulated with the proposed method for the uncontrolled case. Perfect Forecast represents the highest possible EVCS operating cost savings under the considered simulation conditions. Fig. 5: EVCS cumulative operating costs during 2019 for all simulated NYISO zones. Perfect-forecast V2G/V1G, stochastic V2G/V1G and uncontrolled charging scenarios are shown. MILP and PF-DP is an apple-to-apple comparison as both algorithms solve a deterministic EVCS problem. PF-DP, the deterministic version of our proposed algorithm yields an average result within 1.5% of the solution times provided by the MILP formulation, while the computation time is around 60x faster. Our proposed SDP-based algorithm achieves a computation time 7.5x faster than the MILP. Note that SDP is solving multi-stage stochastic optimization while the MILP is solving deterministic optimization, while both use nonlinear battery models. Therefore, our proposed algorithm can also be considered a faster, more efficient, and open-source alternative for solving deterministic smart charging control case studies. 
## VI Conclusion We proposed and tested an EVCS controller based on a nonlinear analytical stochastic dynamic programming algorithm and least-laxity first scheduling. Using historical prices from New York State, our proposed V2G algorithm achieved 24% to 56% of EVCS operating cost savings compared to uncontrolled charging while maintaining a 95% charging target compliance and accounting for EV battery nonlinear behavior and price uncertainty in real-time. Our study covers smart charging in which EVs are not discharged to the grid. Still, our approach provides on average 17% cost savings by responding to grid price variations compared to uncontrolled charging. We also show the importance of considering nonlinear battery models in V2G optimization, in which the battery power rating and efficiencies are dependent on the SoC, which is critical to ensure the EV meets its charging target while responding to time-varying prices. Finally, the proposed algorithm is open-source and not requiring any thrid party solvers, while the computation time surpasses commercial solvers. Hence, our approach is suitable for real-world implementations and scale-up for large-scale EV fleet management. In the future, we plan to improve the proposed approach in several directions. The first is to integrate the solution method with data-driven probability price prediction methods. The current solution to the V2G problem using stochastic dynamic programming still requires a Markov process to be trained using historical price data. However, designing the Markov process can be complicated and limited by the quantity of historical price data. Second, our result still shows that V2G is more likely to miss charging targets due to the feeder capacity constraints, we will investigate approaches to manage the charging constraint better and improve charging compliance. Finally, we will test our proposed algorithm using more sophisticated charging scenarios such as using heterogeneous EV fleets and considering local renewable generation and study the connection between driving patterns and zone prices with the control policy performance.
2304.06832
Task Adaptive Feature Transformation for One-Shot Learning
We introduce a simple non-linear embedding adaptation layer, which is fine-tuned on top of fixed pre-trained features for one-shot tasks, improving significantly transductive entropy-based inference for low-shot regimes. Our norm-induced transformation could be understood as a re-parametrization of the feature space to disentangle the representations of different classes in a task specific manner. It focuses on the relevant feature dimensions while hindering the effects of non-relevant dimensions that may cause overfitting in a one-shot setting. We also provide an interpretation of our proposed feature transformation in the basic case of few-shot inference with K-means clustering. Furthermore, we give an interesting bound-optimization link between K-means and entropy minimization. This emphasizes why our feature transformation is useful in the context of entropy minimization. We report comprehensive experiments, which show consistent improvements over a variety of one-shot benchmarks, outperforming recent state-of-the-art methods.
Imtiaz Masud Ziko, Freddy Lecue, Ismail Ben Ayed
2023-04-13T21:52:51Z
http://arxiv.org/abs/2304.06832v1
# Task Adaptive Feature Transformation for One-Shot Learning ###### Abstract We introduce a simple non-linear embedding adaptation layer, which is fine-tuned on top of fixed pre-trained features for one-shot tasks, improving significantly transductive entropy-based inference for low-shot regimes. Our norm-induced transformation could be understood as a re-parametrization of the feature space to disentangle the representations of different classes in a task specific manner. It focuses on the relevant feature dimensions while hindering the effects of non-relevant dimensions that may cause overfitting in a one-shot setting. We also provide an interpretation of our proposed feature transformation in the basic case of few-shot inference with K-means clustering. Furthermore, we give an interesting bound-optimization link between K-means and entropy minimization. This emphasizes why our feature transformation is useful in the context of entropy minimization. We report comprehensive experiments, which show consistent improvements over a variety of one-shot benchmarks, outperforming recent state-of-the-art methods. Imtiaz Masud Ziko\({}^{1}\), Freddy Lecue\({}^{2}\), Ismail Ben Ayed\({}^{3}\)\({}^{1}\) Thales Canada, \({}^{2}\) JPMorgan Chase, \({}^{3}\) ETS Montreal, Few-Shot Learning, Domain adaptation ## 1 Introduction Deep learning models have achieved impressive success in a breadth of applications. However, these successes mostly rely on learning from huge amounts of annotated data, which requires a time-consuming and expensive process. Deep learning models still have difficulty generalizing to novel classes unseen during training, given only a few labeled instances for these new classes. In this context, few-shot learning research has attracted wide interest recently. For example in a one-shot learning setting, a model is first trained on substantial labeled data over an initial set of classes, commonly called the base classes. Then, supervision is confined to one labeled example per novel class, which is not observed during base training. The model is then fine-tuned on these labeled examples from the novel classes (the support set) and evaluated on the unlabeled samples (the query set). Traditional fine-tuning would result in over-fitting in such low-data regimes. A large body of works investigated few-shot learning via meta-learning strategies, such as the very popular prototypical networks [1]. Meta-learning creates a set of few-shot tasks (or episodes), with support and query samples that simulate generalization difficulties during testing and train the model to generalize well on these tasks. **Related works**: Our method is in line with recent transductive methods in the few-shot learning literature, e.g. [2, 3, 4, 5, 6], among others. Transductive inference performs class predictions jointly for all the unlabeled query samples of the task, rather than one sample at a time as in inductive inference. For instance, TPN [3] uses label propagation along with episodic training and a specific network architecture; the goal was to learn how to propagate labels from the support to the query samples. CAN-T [7] is another meta-learning based transductive method, which uses attention mechanisms to propagate labels to unlabeled samples. The authors of [5] proposed a method based on graph clustering, which regularizes the inductive predictions of query samples with a Laplacian term. Many transductive few-shot methods focused on strategies for fine-tuning a pre-trained model during inference. 
For instance, the entropy fine-tuning in [4] re-trains the whole network, performing costly gradient updates over all the parameters during inference. Transductive Information Maximization (TIM) [6] proposes an entropy-based fine-tuning loss, which maximizes the mutual information between the query features and their label predictions for a few-shot task at inference, while minimizing the cross-entropy loss on the support set. However, instead of retraining the whole network, [6] only fine-tunes the softmax layer on top of the fixed pre-trained features. This showed substantial improvements over retraining the whole network. In addition to its recent successful use in transductive few-shot classification [6, 4], it is worth noting that entropy minimization is widely used in semi-supervised learning [8, 9], and has been successfully used recently in unsupervised domain adaptation [10] and unsupervised representation learning [11]. **Our Contribution**: Fine-tuning the classifier on top of fixed pre-trained features from the base classes may not take full advantage of the expressive power of the task-specific feature space. A standard linear transformation causes overfitting when dealing with limited supervision. In this regard, we propose a simple yet effective norm-induced feature transformation, which is fine-tuned to emphasize class-specific feature dimensions while hindering the effect of non-relevant dimensions that may cause overfitting in a few-shot setting. Our non-linear transformation could be understood as a re-parametrization of the feature space, which disentangles the representations of the different classes in a task-specific manner. While our motivation is conceptually similar to early kernel-based metric-learning methods [12], in which non-linear transformations are implicit, our transformation is explicit. We provide an interpretation of our transformation in the basic case of few-shot inference with K-means clustering. Furthermore, we give an interesting bound-optimization link between K-means and entropy minimization. This emphasizes why our feature transformation is useful in the context of entropy minimization, which is widely used in learning, even beyond few-shot classification. We report comprehensive experiments, showing that the proposed transformation could yield consistent improvements over various one-shot benchmarks, outperforming recent state-of-the-art methods. ## 2 Task Adaptive Feature Transformation for One-Shot Learning In the one-shot setting, we are given a labeled support set \(\mathcal{S}\) with \(C\) novel test classes, where each novel class has one labeled example. The objective is to accurately classify unlabeled unseen query sample set \(\mathcal{Q}\) from these \(C\) classes. Let \(f_{\phi}\) denote the embedding function of a deep convolutional neural network, with parameters \(\phi\) and \(\mathbf{x}_{i}=f_{\phi}(\mathbf{y}_{i})\in\mathbb{R}^{d}\) is the features of a given sample \(\mathbf{y}_{i}\). \(f_{\phi}\) is pre-trained from a labeled set \(\mathbb{X}_{\text{base}}\), via a standard cross-entropy loss, with base classes that are different from the test classes of \(\mathcal{S}\) and \(\mathcal{Q}\). **Proposed Transformation**: The proposed feature transformation is performed during fine-tuning, which is derived from minimizing an entropy-based loss function for the target one-shot task. We will detail the entropy-based loss function below, and draw an interesting connection to the basic K-means objective through bound optimization. 
For now, let us introduce our non-linear transformation, which reads as follows for each \(L_{2}\)-normalized pre-trained feature vector \(\mathbf{x}_{i}\) of a given target few-shot task: \[\mathbf{g}(\mathbf{x}_{i},\mathbf{W})=-\frac{1}{2}\left(\|\mathbf{x}_{i}-\mathbf{w}_{1}\|^{2},\ldots,\|\mathbf{x}_{i}-\mathbf{w}_{d}\|^{2}\right)^{T} \tag{1}\] where \(\mathbf{x}_{i}\) is the initial feature vector either from the support set \(\mathcal{S}\) or query set \(\mathcal{Q}\), and superscript \(T\) denotes the transpose operator. We introduce a learnable transformation matrix \(\mathbf{W}=[\mathbf{w}_{1}^{T}\ldots\mathbf{w}_{d}^{T}]\in\mathbb{R}^{d\times d}\), which is updated during the fine-tuning procedure. To understand the effect of our transformation, let us first consider transductive inference with a basic K-means clustering of the transformed features of the query set. This could be done by optimizing the following mixed objective: \[\mathcal{J}(\mathbf{W},\boldsymbol{\theta},\boldsymbol{Q})=\sum_{i\in\mathcal{Q}}\sum_{c=1}^{C}q_{ic}\|\boldsymbol{\theta}_{c}-\mathbf{g}(\mathbf{x}_{i},\mathbf{W})\|^{2} \tag{2}\] Figure 1: TSNE plots depicting the feature space with or without the proposed feature transformation in (1). The support images of the 1-shot task are provided (leftmost). The bigger markers correspond to the support image in each class. where \(\mathbf{\theta}=(\mathbf{\theta}_{c})_{1\leq c\leq C}\) represent class prototypes, and \(\mathbf{Q}\) is the \(|\mathcal{Q}|\)-by-\(C\) matrix whose rows are given by binary assignment simplex vectors \(\mathbf{q}_{i}=(q_{ic})_{1\leq c\leq C}\in\{0,1\}^{C}\): \(q_{ic}=1\) if sample \(\mathbf{x}_{i}\) is assigned to class \(c\) and \(q_{ic}=0\) otherwise. Alternating iterative minimization of the mixed K-means objective in Eq. (2) with respect to \(\mathbf{W}\), \(\mathbf{\theta}\) and \(\mathbf{Q}\) could be viewed as joint task-adaptive metric learning and clustering, and it has a clear interpretation of the effect of the proposed transformation. Optimization with respect to the transformation parameters \(\mathbf{W}\) encourages the new features \(\mathbf{g}(\mathbf{x}_{i},\mathbf{W})\) to approach their current class prototypes (or means) \(\mathbf{\theta}_{c}\), thereby disentangling the class representations; see the TSNE-plot illustrations in Figure 1. Clearly, given current assignments \(q_{ic}^{j}\) at iteration \(j\), the optimal \(\mathbf{\theta}_{c}\) minimizing (2) corresponds to the mean of the features within class \(c\): \(\mathbf{\theta}_{c}^{j}=\frac{\sum_{i\in\mathcal{Q}}q_{ic}^{j}\,\mathbf{g}(\mathbf{x}_{i},\mathbf{W})}{\sum_{i\in\mathcal{Q}}q_{ic}^{j}}\). Also, given both \(q_{ic}^{j}\) and \(\mathbf{\theta}_{c}^{j}\), it is clear that the objective in Eq. (2) contains, for each sample, an \(L_{2}\) distance between the transformed feature of the sample and its current-class prototype. Therefore, optimization with respect to \(\mathbf{W}\) encourages the transformed feature to align with its current-class prototype. It is important to note that the specific norm-induced form of \(\mathbf{g}\) that we propose in (1) implicitly constrains the transformation, hindering the effects of non-relevant dimensions that may cause over-fitting. An unconstrained transformation \(\mathbf{g}\), such as a neural net, trained jointly with K-means might yield trivial solutions, i.e., bringing all of the transformed features into the same cluster. This difficulty is known in the context of deep clustering [13].
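To make the transformation and the clustering objective concrete, here is a minimal PyTorch-style sketch of Eq. (1) and Eq. (2). This is an illustration added for clarity rather than the authors' released code; tensor names and shapes are our own assumptions.

```python
import torch

def norm_induced_transform(x, W):
    """Eq. (1): g(x_i, W) = -1/2 (||x_i - w_1||^2, ..., ||x_i - w_d||^2)^T.

    x: (n, d) L2-normalized pre-trained features.
    W: (d, d) learnable matrix whose rows w_j act as per-dimension anchors.
    Returns the (n, d) transformed features.
    """
    sq_dist = torch.cdist(x, W, p=2) ** 2        # (n, d): squared distance of each x_i to each row w_j
    return -0.5 * sq_dist

def kmeans_objective(x, W, theta, q):
    """Eq. (2): sum_i sum_c q_ic * ||theta_c - g(x_i, W)||^2.

    theta: (C, d) class prototypes; q: (n, C) hard (0/1) assignment matrix.
    """
    g = norm_induced_transform(x, W)             # (n, d)
    sq_dist = torch.cdist(g, theta, p=2) ** 2    # (n, C)
    return (q * sq_dist).sum()
```

Alternating minimization of `kmeans_objective` over `q` (hard assignments), `theta` (class-wise means of `g`), and `W` (gradient steps) is the joint metric-learning-and-clustering view described above.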
_In fact, the norm-based form of each component in transformation (1) forces some dimensions to approach zero when aligning with the prototypes_. The identity map, for instance, is not recoverable under this constrained form, unlike an unconstrained neural-net transformation. Let us now discuss the TSNE plots in Figure 1 in more detail. We randomly sample a 1-shot task with 5 test classes from each of the _mini_ImageNet and Aircraft datasets. The task from _mini_ImageNet contains three fine-grained classes sampled from the generic dog category: 'Golden retriever', 'Dalmatian', and 'African hunting dog'. The other two classes are from generic categories: 'Nematode' and 'Crate'. The task from the Aircraft dataset contains the fine-grained categories of 5 different airplane models, which are visually quite similar to each other. The leftmost plots refer to the entropy-based loss in (3) fine-tuned on top of the initial fixed pre-trained feature vector \(\mathbf{x}_{i}\) without the feature transformation. In this case, only the classifier weights \(\{\mathbf{\theta}_{c}\}_{c=1}^{C}\) are updated, which results in accuracies of 77.3% for _mini_ImageNet and 69.33% for Aircraft. Note that the support images (with bigger markers) and their corresponding query samples are not well separated in the pre-trained feature space learned from the base classes. Using a linear feature transformation during fine-tuning causes overfitting under limited supervision, as can be seen in Figures 1(b) and 1(e). In this case, features from different classes are brought into the same cluster, and the resulting errors are reflected in the classification accuracies given at the top of each plot. Finally, if we utilize the proposed transformation in Eq. (1) on top of the initial pre-trained features, we achieve a better spread-out, task-adaptive feature space, as shown in the rightmost TSNE plots 1(c) and 1(f), with boosted accuracies of 93.33% (16% improvement) and 84.00% (13% improvement) for the _mini_ImageNet and Aircraft datasets, respectively. **Entropy-based loss function**: In our case, the transformation matrix \(\mathbf{W}\) is learned by fine-tuning a transductive information maximization (TIM) loss [6].
TIM loss is a combination of cross-entropy defined over the support set \(\mathcal{S}\) and a mutual information term, which includes two Shannon entropies: The entropy of posterior predictions (i.e., the softmax outputs of the network) and the entropy of the marginal probabilities of the classes, both defined over the query set \(\mathcal{Q}\): \[\begin{split}\mathcal{L}(\mathbf{W},\mathbf{\theta})=& \overbrace{-\frac{\lambda}{|\mathcal{S}|}\sum_{i\in\mathcal{S}}\sum_{c=1}^{C}y _{ic}\log(p_{ic})}^{\text{cross-entropy}}\\ &\underbrace{-\frac{\alpha}{|\mathcal{Q}|}\sum_{i\in\mathcal{Q}} \sum_{c=1}^{C}p_{ic}\log(p_{ic})}_{\text{conditional entropy}}+\underbrace{ \sum_{c=1}^{C}\widehat{p}_{c}\log\widehat{p}_{c}}_{\text{ marginal entropy}}\end{split} \tag{3}\] where \[p_{ic}=\text{s}(\mathbf{\theta}_{c},\mathbf{W},\mathbf{x}_{i})=\frac{\exp\left(- \frac{\tau}{2}\|\mathbf{\theta}_{c}-\mathbf{g}(\mathbf{x}_{i},\mathbf{W})\|^{2} \right)}{\sum_{k}\exp\left(-\frac{\tau}{2}\|\mathbf{\theta}_{k}-\mathbf{g}(\mathbf{ x}_{i},\mathbf{W})\|^{2}\right)}\] denotes the softmax probability outputs, \(\widehat{p}_{c}=\frac{1}{|\mathcal{Q}|}\sum_{i\in\mathcal{Q}}p_{ic}\) is the marginal probability of class \(c\), and \(y_{ic}\in\{0,1\}\) the ground-truth labels for the support samples and \(\tau\) is the temperature parameter. Minimizing the conditional entropy pushes the network probability predictions toward the vertices of the simplex, yielding confident predictions. The marginal entropy term avoids the trivial single-class solutions that might result from conditional entropy minimization. **On the link between entropy and K-means**: We now change gear and show an interesting bound-optimization link between the conditional entropy in Eq. (3) and K-means in Eq. (2). This link further clarifies why our feature transformation is useful in the context of entropy minimization. In addition to its recent successful use in few-shot classification [6, 4], entropy is widely used in semi-supervised learning [8, 9], and has been successfully used recently in unsupervised domain adaptation [10] and unsupervised representation learning [11]. Therefore, connecting entropy minimization to K-means could provide interesting insights even beyond few-shot classification. To show the link, let us first decompose the conditional entropy in (3): \[\underbrace{\sum_{i,c}s(\mathbf{\theta}_{c},\mathbf{W},\mathbf{x}_{i})\| \mathbf{\theta}_{c}-\mathbf{g}(\mathbf{x}_{i},\mathbf{W})\|^{2}}_{\mathcal{H}( \mathbf{W},\mathbf{\theta})\text{: Clustering}}+\underbrace{\sum_{i}l(\mathbf{\theta}, \mathbf{W},\mathbf{x}_{i})}_{\text{Prototype dispersion}} \tag{4}\] where \(l(\mathbf{\theta},\mathbf{W},\mathbf{x}_{i})=\log\sum_{c}\exp\left(-\frac{\tau}{2} \|\mathbf{\theta}_{c}-\mathbf{g}(\mathbf{x}_{i},\mathbf{W})\|^{2}\right)\). Minimizing the prototype dispersion encourages large distances between the prototypes and the features of all data points. Term \(\mathcal{H}(\mathbf{W},\mathbf{\theta})\) in Eq. (4) is closely related to basic K-means (2) from a bound-optimization perspective, although it seems more complex. The following shows that optimizing a soft K-means could be viewed as an approximate Majorize-Minimize (MM) algorithm for optimizing \(\mathcal{H}(\mathbf{W},\mathbf{\theta})\). 
Given a function \(\mathcal{H}(\mathbf{W},\mathbf{\theta})\), the general MM paradigm minimizes iteratively a tight upper bound on \(\mathcal{H}\): \[\mathcal{H}(\mathbf{W},\mathbf{\theta}) \leq\mathcal{A}^{j}(\mathbf{W},\mathbf{\theta})\;\forall\;\mathbf{W}, \mathbf{\theta}\] \[\mathcal{H}(\mathbf{W}^{j},\mathbf{\theta}^{j}) =\mathcal{A}^{j}(\mathbf{W}^{j},\mathbf{\theta}^{j}) \tag{5}\] where \(j\) is the current iteration index. An upper bound satisfying the tightness condition in (5) is often referred to as _auxiliary function_ of the original objective \(\mathcal{H}\). It is straightforward to verify that minimizing \(\mathcal{A}^{j}\) iteratively guarantees the original objective does not increase: \(\mathcal{H}(\mathbf{W}^{j+1},\mathbf{\theta}^{j+1})\leq\mathcal{A}^{j}(\mathbf{W} ^{j+1},\mathbf{\theta}^{j+1})\leq\mathcal{A}^{j}(\mathbf{W}^{j},\mathbf{\theta}^{j})= \mathcal{H}(\mathbf{W}^{j},\mathbf{\theta}^{j})\). **Proposition 1**.: \(\mathcal{H}(\mathbf{W},\mathbf{\theta})\) _is upper bounded by the following soft K-means objective for any set of soft simplex assignment variables \(\mathbf{q}_{i}=(q_{ic})_{1\leq c\leq C}\in[0,1]^{C}\), \(i\in\mathcal{Q}\):_ \[\mathcal{H}(\mathbf{W},\mathbf{\theta})\leq\mathcal{J}(\mathbf{W}, \mathbf{\theta},\mathbf{Q})+\frac{\tau}{2}\sum_{i}\mathbf{q}_{i}^{T}\log\mathbf{q}_{i} \tag{6}\] _Furthermore, given parameters \(\mathbf{W}^{j}\) and prototype \(\mathbf{\theta}^{j}=(\mathbf{\theta}^{j}_{c})_{1\leq c\leq C}\) at iteration \(j\), choosing specific expressions \(q_{ic}=s(\mathbf{\theta}^{j}_{c},\mathbf{W}^{j},\mathbf{x}_{i})\) in upper bound (6) yields an approximate auxiliary function on \(\mathcal{H}(\mathbf{W},\mathbf{\theta})\) when \(\tau\) is small (\(\tau\to 0\))._ Proof.: The upper bound in (6) is convex w.r.t \(\mathbf{Q}\) as it is the sum of linear and convex functions. Solving the KKT conditions for minimizing this bound, s.t. simplex constraint on each \(\mathbf{q}_{i}\), yields closed-form solutions: \(\tilde{q}_{ic}=\mathbf{s}(\mathbf{\theta}_{c},\mathbf{W},\mathbf{x}_{i})\) The inequality in (6) follows directly from plugging these optimal solutions in the upper-bound in (6) and using the fact that \(\tau\) is small (\(\tau\to 0\)). Finally, it is straightforward to verify that the specific choice \(q_{ic}=\mathbf{s}(\mathbf{\theta}^{j}_{c},\mathbf{W}^{j},\mathbf{x}_{i})\) makes the upper bound in (6) tight at the current solution and, hence, an auxiliary function, when temperature \(\tau\to 0\). ## 3 Experiments **Datasets**: We used four one-shot benchmarks, including both the fine-grained classification settings (**CUB** and **Aircraft**) and standard one-shot classification setting (_mini_**ImageNet** and _tiered_**ImageNet**). _mini_**ImageNet** is a subset of the larger ILSVRC-12 dataset [23]. We use the standard split of 64 classes for base training, 16 for validation, and 20 for testing. _tiered_**ImageNet**[24] is also a subset of the ILSVRC-12 dataset, but with 608 classes instead. We split the dataset into 351 classes for base training, 97 for validation and 160 for testing. **CUB**[25] is a fine-grained image classification dataset with 200 categories. We split it into 100 classes for base training, 50 for validation and 50 for testing. **Aircraft** or FGVCAircraft [26] is a fine-grained image classification dataset with 100 airplane models. Following the same ratio as CUB, we split classes into 50 base classes for training, 25 validation, and 25 test classes. Images are resized to \(84\times 84\) pixels. 
**Implementation Details**: The results of the proposed FT-TIM are obtained and evaluated in the same settings as in [5, 6] for fair comparisons. The network models are trained with a cross-entropy loss on the base classes. We utilize the same publicly available pre-trained models of [5, 6] for _mini_ImageNet, _tiered_ImageNet, and CUB. For the Aircraft dataset, we train the model according to the same protocol. The evaluation is done on two different setups of the 5-way one-shot benchmark: 1) _Standard one-shot benchmark_, with 15 samples per class in the query set for each task, where the average accuracy over the query sets is reported. 2) _Semi-supervised one-shot benchmark_, where we treat 15 samples per class as additional unlabeled samples along with the support set and report the accuracy on a separate held-out test set containing 5 test samples from each class. In this setup, we compare the results with and without the proposed task adaptive feature transformation while fine-tuning the entropy-based loss (TIM) [6] in (3). The average accuracy over 600 one-shot tasks is reported. In the case of **FT-TIM**, the transformation matrix \(\mathbf{W}\) is fine-tuned with a learning rate of \(0.01\), starting from the 200th fine-tuning iteration; this choice is made based on the _mini_ImageNet validation set accuracy. The feature transformation weights \(\mathbf{W}\) are initialized from the cosine similarity matrix formed with the \(L_{2}\)-normalized initial pre-trained support set features. ### Results The results of the general one-shot classification are highlighted in Table 1. It can be observed that, for each of the datasets and network models, the proposed **FT-TIM**, which includes the proposed feature transformation, consistently improves the 1-shot accuracy by 1-3% in comparison to the baseline TIM [6] without the proposed transformation. Note that the proposed FT-TIM also outperforms other recent transductive methods such as ICIR [27] and RAP-LaplacianShot [2] by simply fine-tuning the feature transformation during evaluation. A similar consistent improvement is also observed for fine-grained classification on both the **CUB** and **Aircraft** datasets in Table 1. These results clearly demonstrate that the proposed feature transformation can bring out the expressive power of the task-adaptive feature space in one-shot learning. We again evaluate the efficacy of the proposed feature transformation on semi-supervised one-shot tasks, where additional unlabeled samples are provided along with the one-shot labeled data per novel class. The transformation weights and the classifier weights are updated during fine-tuning with the labeled data and the additional unlabeled data in the one-shot task. Finally, the inference is performed on a separate held-out test set. To observe the benefit of plugging in the proposed feature transformation while fine-tuning the entropy-based loss, we compare the proposed FT-TIM against the baseline TIM [6] without the proposed transformation. From the results in Table 2, we can observe that consistent improvements are achieved by FT-TIM across different datasets, numbers of shots, and network models. These results indicate that the proposed transformation layer, when fine-tuned on top of pre-trained features jointly with the classifier, helps to disentangle the representations of different classes in a task-specific manner.
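As a summary of the fine-tuning recipe described in the implementation details above (the loss of Eq. (3) over prototypes and the transformation \(\mathbf{W}\), with \(\mathbf{W}\) updated only from the 200th iteration at a learning rate of 0.01), the following is a hedged sketch assuming the `norm_induced_transform` helper from the earlier snippet. The temperature, the prototype learning rate, the prototype initialization, and the exact form of the cosine-similarity initialization of \(\mathbf{W}\) are our assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def tim_loss(x_s, y_s, x_q, W, theta, tau=15.0, lam=1.0, alpha=1.0):
    """Eq. (3): support cross-entropy + conditional entropy - marginal entropy.
    x_s: (Ns, d) support features, y_s: (Ns,) labels, x_q: (Nq, d) query features,
    theta: (C, d) prototypes. tau, lam, alpha are assumed hyperparameters."""
    def probs(x):
        g = norm_induced_transform(x, W)                       # helper from the earlier sketch
        logits = -0.5 * tau * torch.cdist(g, theta) ** 2       # -tau/2 * ||theta_c - g(x_i, W)||^2
        return F.softmax(logits, dim=1)

    p_s, p_q = probs(x_s), probs(x_q)
    ce = F.nll_loss(torch.log(p_s + 1e-12), y_s)               # cross-entropy on the support set
    cond_ent = -(p_q * torch.log(p_q + 1e-12)).sum(1).mean()   # conditional entropy over queries
    p_hat = p_q.mean(0)                                        # marginal class probabilities
    marg_ent = -(p_hat * torch.log(p_hat + 1e-12)).sum()       # marginal entropy
    return lam * ce + alpha * cond_ent - marg_ent

def finetune_ft_tim(x_s, y_s, x_q, n_iters=1000, w_start=200, lr_w=0.01, lr_theta=0.01):
    """FT-TIM fine-tuning loop: prototypes are updated from the start, W only after
    iteration `w_start` (200 in the paper). W is initialized from a cosine-similarity
    matrix built from L2-normalized support features (one possible reading of the text)."""
    x_s = F.normalize(x_s, dim=1)
    W = (x_s.T @ x_s).detach().clone().requires_grad_(True)    # (d, d) init; assumed form
    theta = x_s.clone().detach().requires_grad_(True)          # one prototype per support sample (assumed)
    opt_theta = torch.optim.Adam([theta], lr=lr_theta)
    opt_w = torch.optim.Adam([W], lr=lr_w)
    for it in range(n_iters):
        loss = tim_loss(x_s, y_s, x_q, W, theta)
        opt_theta.zero_grad(); opt_w.zero_grad()
        loss.backward()
        opt_theta.step()
        if it >= w_start:
            opt_w.step()                                        # W fine-tuned only after iteration 200
    return W, theta
```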
## 4 Conclusion In this paper, we present a simple yet effective feature transformation layer, which brings consistent improvements in transductive one-shot learning while fine-tuned on top of \begin{table} \begin{tabular}{l l c c c c} \hline \hline **Methods** & **Network** & _mini_**ImageNet** & _tiered_**ImageNet** & **CUB** & **Aircraft** \\ \hline MAML [14] & ResNet-18 & 49.61 & - & 68.42 & - \\ TPN [3] & ResNet-12 & 59.46 & - & - & - \\ Entropy-min [4] & ResNet-12 & 62.35 & 68.36 & - & - \\ DPGN [15] & ResNet-18 & 66.63 & 70.46 & - & - \\ CAN+T [7] & ResNet-18 & 67.19 & 73.21 & - & - \\ DSN-MR [16] & ResNet-18 & 64.60 & 67.39 & - & - \\ MetaoptNet [17] & ResNet-18 & 62.64 & 65.99 & - & - \\ LaplacianShot [5] & ResNet-18 & 70.89 & 77.60 & 79.93 & - \\ TIM [6] & ResNet-18 & 72.77 & 80.80 & 82.24 & 83.06 \\ RAP-LaplacianShot [2] & ResNet-12 & 74.29 & - & 83.59 & - \\ FT-TIM (ours) & ResNet-18 & **75.00** & **83.45** & **85.54** & **84.47** \\ \hline AWGIM [18] & WRN & 63.12 & 67.69 & - & - \\ Entropy-min [4] & WRN & 65.73 & 73.34 & - & - \\ SIB [19] & WRN & 70.0 & 70.90 & - & - \\ BD-CSPN [20] & WRN & 70.31 & 78.74 & - & - \\ SIB+E\({}^{3}\)BM [21] & WRN & 71.4 & 75.6 & - & - \\ LaplacianShot [5] & WRN & 73.44 & 78.80 & - & - \\ IFSL [22] & WRN & 73.51 & 83.07 & - & - \\ TIM [6] & WRN & 77.8 & 82.1 & - & - \\ FT-TIM (ours) & WRN & **79.22** & **85.06** & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Average one-shot accuracy (in %) for the _standard_ benchmark. pre-trained features. The proposed transformation takes full advantage of the expressive power of the task-specific feature space. It could be understood as a re-parametrization of the feature space, which disentangles the representations of different classes in a task-specific manner. We further provided an interpretation of our transformation in the basic case of few-shot inference with K-means clustering, along with an interesting bound-optimization link between K-means and entropy minimization. This emphasizes why our feature transformation is useful in the context of entropy minimization, which is widely used in learning.
2306.06093
HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork
Neural Radiance Fields (NeRF) have become an increasingly popular representation to capture high-quality appearance and shape of scenes and objects. However, learning generalizable NeRF priors over categories of scenes or objects has been challenging due to the high dimensionality of network weight space. To address the limitations of existing work on generalization, multi-view consistency and to improve quality, we propose HyP-NeRF, a latent conditioning method for learning generalizable category-level NeRF priors using hypernetworks. Rather than using hypernetworks to estimate only the weights of a NeRF, we estimate both the weights and the multi-resolution hash encodings resulting in significant quality gains. To improve quality even further, we incorporate a denoise and finetune strategy that denoises images rendered from NeRFs estimated by the hypernetwork and finetunes it while retaining multiview consistency. These improvements enable us to use HyP-NeRF as a generalizable prior for multiple downstream tasks including NeRF reconstruction from single-view or cluttered scenes and text-to-NeRF. We provide qualitative comparisons and evaluate HyP-NeRF on three tasks: generalization, compression, and retrieval, demonstrating our state-of-the-art results.
Bipasha Sen, Gaurav Singh, Aditya Agarwal, Rohith Agaram, K Madhava Krishna, Srinath Sridhar
2023-06-09T17:56:07Z
http://arxiv.org/abs/2306.06093v3
# HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork ###### Abstract Neural Radiance Fields (NeRF) have become an increasingly popular representation to capture high-quality appearance and shape of scenes and objects. However, learning generalizable NeRF priors over categories of scenes or objects has been challenging due to the high dimensionality of network weight space. To address the limitations of existing work on generalization, multi-view consistency and to improve quality, we propose HyP-NeRF, a latent conditioning method for learning generalizable category-level NeRF priors using hypernetworks. Rather than using hypernetworks to estimate only the weights of a NeRF, we estimate both the weights and the multi-resolution hash encodings [33] resulting in significant quality gains. To improve quality even further, we incorporate a denoise and finetune strategy that denoises images rendered from NeRFs estimated by the hypernetwork and finetunes it while retaining multiview consistency. These improvements enable us to use HyP-NeRF as a generalizable prior for multiple downstream tasks including NeRF reconstruction from single-view or cluttered scenes, and text-to-NeRF. We provide qualitative comparisons and evaluate HyP-NeRF on three tasks: generalization, compression, and retrieval, demonstrating our state-of-the-art results. ## 1 Introduction Neural fields, also known as implicit neural representations (INRs), are neural networks that learn a continuous representation of physical quantities such as shape or radiance at any given space-time coordinate [68]. Recent developments in neural fields have enabled significant advances in applications such as 3D shape generation [75], novel view synthesis [30; 2], 3D reconstruction [70; 64; 36; 66], and robotics [50; 49]. In particular, we are interested in Neural Radiance Fields (NeRF) that learn the parameters of a neural network \(f_{\phi}(\mathbf{x},\theta)=\{\sigma,c\}\), where \(\mathbf{x}\) and \(\theta\) are the location and viewing direction of a 3D point, respectively, and \(\sigma\) and \(c\) denote the density and color estimated by \(f_{\phi}\) at that point. Once fully trained, \(f_{\phi}\) can be used to render novel views of the 3D scene. Despite their ability to model high-quality appearance, NeRFs cannot easily generalize to scenes or objects not seen during training thus limiting their broader application. Typically, achieving generalization involves learning a prior over a data source such as image, video, or point cloud distributions [19; 18; 56; 74; 27; 47], possibly belonging to a category of objects [63; 44]. However, NeRFs are continuous volumetric functions parameterized by tens of millions of parameters making it challenging to learn generalizable priors. Previous works try to address this challenge by relying on 2D image-based priors, 3D priors in voxelized space, or by using latent conditioning. Image-based priors re-use the information learned by 2D convolutional networks [73, 32] but may lack 3D knowledge resulting in representations that are not always multiview consistent. Methods that learn 3D priors in voxelized space [31] suffer from high compute costs and inherently lower quality due to voxelization limitations. Latent conditioning methods [17, 41] learn a joint network \(f(\mathrm{x},\theta,z)\) where \(z\) is the conditioning vector for a given object instance. 
These methods retain the advantages of native NeRF representations such as instance-level 3D and multiview consistency, but have limited capacity to model a diverse set of objects at high visual and geometric quality. InstantNGP [33] provides a way to improve quality and speed using _instance-specific_ multi-resolution hash encodings (MRHE), however, this is limited to single instances. We propose HyP-NeRF, a latent conditioning method for learning improved quality generalizable **category-level NeRF priors** using hypernetworks [13] (see Figure 1). We take inspiration from methods that use meta-learning to learn generalizable representations [53, 46] while retaining the quality of instance-specific methods [33]. Our hypernetwork is trained to generate the parameters-both the multi-resolution **hash encodings (MRHE) and weights**-of a NeRF model of a given category conditioned on an instance code \(z_{n}\). For each instance code \(z_{n}\) in the learned codebook, HyP-NeRF estimates \(h_{n}\) denoting the instance-specific MRHE along with \(\phi_{n}\) indicating the weights of an MLP. Our key insight is that estimating both the MRHEs and the weights results in a significant improvement in quality. To improve the quality even further, we denoise rendered views [40] from the estimated NeRF model, and finetune the NeRF with the denoised images to enforce multiview consistency. As shown in Figure 2 and the experiments section, this denoising and finetuning step significantly improves quality and fine details while retaining the original shape and appearance properties. Once HyP-NeRF is trained, it can be used as a NeRF prior in a variety of different applications such as NeRF reconstruction from a single view posed or unposed images, single pass text-to-NeRF, Figure 1: We propose HyP-NeRF, a latent conditioning method that learns improved quality NeRF priors using a hypernetwork to generate instance-specific multi-resolution hash encodings along with neural network weights. The figure showcases the fine details preserved in the NeRF generated by HyP-NeRF (green box) as opposed to the NeRF generated by naive conditioning (red box) in which, a hypernetwork predicts only the neural weights while relying on the standard positional encodings. Figure 2: Once trained, HyP-NeRF acts as a prior to support multiple downstream applications, including NeRF reconstruction from single or multi-view images and cluttered scene images, and text-to-NeRF. We further improve quality using our denoising network. or even the ability to reconstruct real-world objects in cluttered scene images (see Figure 2). We show qualitative results on applications and quantitatively evaluate HyP-NeRF's performance and suitability as a NeRF prior on the ABO dataset [10] across three tasks: generalization, compression, and retrieval. To sum up our contributions: 1. We introduce HyP-NeRF, a method for learning improved quality NeRF priors using a hypernetwork that estimates _instance-specific_ hash encodings and MLP weights of a NeRF. 2. We propose a denoise and finetune strategy to further improve the quality while preserving the multiview consistency of the generated NeRF. 3. We demonstrate how our NeRF priors can be used in multiple downstream tasks including single-view NeRF reconstruction, text-to-NeRF, and reconstruction from cluttered scenes. ## 2 Related Work **Neural Radiance Fields**[30] (NeRFs) are neural networks that capture a specific 3D scene or object given sufficient views from known poses. 
Numerous follow-up work (see [60; 68] for a more comprehensive review) has investigated improving quality and speed, relaxing assumptions, and building generalizable priors. Strategies for improving quality or speed include better sampling [2], supporting unbounded scenes [3], extensions to larger scenes [67; 58], using hybrid representations [33; 72], using learned initializations [59; 5; 43], or discarding neural networks completely [71; 57]. Other work relaxes assumption of known poses [65; 29; 25; 23; 8; 51], or reduce the number of views [73; 41; 4; 12; 32; 42; 69; 26; 35]. Specifically, PixelNeRF [73] uses convolution-based image features to learn priors enabling NeRF reconstruction from as few as a single image. VisionNeRF [26] extends PixelNeRF by augmenting the 2D priors with 3D representations learned using a transformer. Unlike these methods, we depend purely on priors learned by meta-learning, specifically by hypernetworks [13]. AutoRF [32] and LoNeRF [41] are related works that assume only a single view for each instance at the training time. FWD [5] optimizes NeRFs from sparse views in real-time and SRT [43] aims to generate NeRFs in a single forward pass. These methods produce NeRFs of lower quality and are not designed to be used as priors for various downstream tasks. In contrast, our focus is to generate high-quality multiview consistent NeRFs that capture fine shapes and textures details. HyP-NeRF can be used as a category-level prior for multiple downstream tasks including NeRF reconstruction from one or more posed or unposed images, text-to-NeRF (similar to [38; 16]), or reconstruction from cluttered scene images. Additionally, HyP-NeRF can estimate the NeRFs in a single forward pass with only a few iterations needed to improve the quality. **Learning 3D Priors**. To learn category-level priors, methods like CodeNeRF [17] and LoNeRF [41] use a conditional NeRF on instance vectors \(z\) given as \(f(x,\theta,z)\), where different \(z\)s result in different NeRFs. PixelNeRF [73] depends on 2D priors learned by 2D convolutional networks which could result in multi-view inconsistency. DiffRf [31] uses diffusion to learn a prior over voxelized radiance field. Like us, DiffRF can generate radiance fields from queries like text or images. However, it cannot be directly used for downstream tasks easily. Our approach closely follows the line of work that aims to learn a prior over a 3D data distribution like signed distance fields [37], light field [53], and videos [46]. We use meta-learning, specifically hypernetworks [13], to learn a prior over the MRHEs and MLP weights of a fixed NeRF architecture. LearnedInit [59], also employs standard meta-learning algorithms for getting a good initialization of the NeRF parameters. However, unlike us, they do not use a hypernetwork, and use the meta-learning algorithms only for initializing a NeRF, which is further finetuned on the multiview images. Methods like GRAF [45], \(\pi\)-GAN [6], CIPS-3D [77], EG3D [7], and Pix2NeRF [4] use adversarial training setups with 2D discriminators resulting in 3D and multiview inconsistency. [38; 62; 14] tightly couple text and NeRF priors to generate and edit NeRFs based on text inputs. We, on the other hand, train a 3D prior on NeRFs and separately train a mapping network that maps text to HyP-NeRF's prior, decoupling the two. 
## 3 HyP-NeRF: Learning Improved NeRF Priors using a Hypernetwork Our goal is to learn a generalizable NeRF prior for a category of objects while maintaining visual and geometric quality, and multiview consistency. We also want to demonstrate how this prior can be used to enable downstream applications in single/few-image NeRF generation, text-to-NeRF, and reconstruction of real-world objects in cluttered scenes. **Background**. We first provide a brief summary of hypernetworks and multi-resolution hash encodings, which form the basis of HyP-NeRF. Hypernetworks are meta-networks that were introduced to predict the weights of a second neural network. They have been widely used for diverse tasks, ranging from representation learning for continuous signals [53; 52; 55; 46] and compression [34; 11] to few-shot learning [48; 21] and continual learning [61]. Our key insight is to use hypernetworks to generate both the network weights and instance-specific MRHEs. **Neural Radiance Fields** (NeRF) [30; 2] learn the parameters of a neural network \(f_{\phi}(\mathbf{x},\theta)=\{\sigma,c\}\), where \(\mathbf{x}\) and \(\theta\) are the location and viewing direction of a 3D point, respectively, and \(\sigma\) and \(c\) denote the density and color predicted by \(f_{\phi}\) at that point. Once fully trained, \(f_{\phi}\) can be used to render novel views of the 3D scene. NeRF introduced _positional encodings_ of the input 3D coordinates, \(\mathbf{x}\), to a higher dimensional space to capture high-frequency variations in color and geometry. InstantNGP [33] further extended this idea to _instance-specific_ multi-resolution hash encodings (MRHE) to encode \(\mathbf{x}\) dynamically based on scene properties. These MRHEs, \(h\), are learned along with the MLP parameters, \(\phi\), for a given NeRF function, \(f\), and show improved quality and reduced training/inference time. **Image Denoising** is the process of reducing noise and improving the perceptual quality of images while preserving important structural details. Recent advancements in deep learning-based image restoration and denoising techniques [24; 22; 9] have demonstrated remarkable success in removing noise and enhancing the perceptual quality of noisy images that may have suffered degradation. Such networks are trained on large datasets of paired noisy and clean images to learn a mapping between the degraded input and the corresponding high-quality output by minimizing the difference between the restored image and the ground-truth clean image. In our case, we use denoising to improve the quality of our NeRF renderings by reducing artifacts and improving the texture and structure at the image level. ### Method Given a set of NeRFs denoted by \(\{f_{(\phi_{n},h_{n})}\}_{n=1}^{N}\), where \(N\) denotes the number of object instances in a given object category, we want to learn a prior \(\Phi=\{\Phi_{S},\Phi_{C}\}\), where \(\Phi_{S}\) and \(\Phi_{C}\) are the shape and color priors, respectively. Each NeRF, \(f_{(\cdot)_{n}}\), is parameterized by the neural network weights, \(\phi_{n}\), and learnable MRHEs, \(h_{n}\), as proposed in [33]. \(f_{(\cdot)_{n}}\) takes a 3D position, \(\mathbf{x}\), and viewing direction, \(\theta\), as input and predicts the density conditioned on \(\mathbf{x}\), denoted by \(\sigma_{n}^{\{\mathbf{x}\}}\), and the color conditioned on \(\mathbf{x}\) and \(\theta\), denoted by \(c_{n}^{\{\mathbf{x},\theta\}}\).
This is given as \[f_{(\phi_{n},h_{n})}(\mathbf{x},\theta)=\{\sigma_{n}^{\{\mathbf{x}\}},c_{n}^{\{\mathbf{x},\theta\}}\}. \tag{1}\] Our proposed method for learning NeRF priors involves two steps. First, we train a hypernetwork, \(M\), to learn a prior over a set of multiview consistent NeRFs of high-quality shape and texture. Second, we employ an image-based denoising network that takes as input an already multiview consistent set of images, rendered from the predicted NeRF, and improves the shape and texture of the NeRF to higher quality by finetuning on a set of denoised images. Our architecture is outlined in Figure 3 and we explain each step in detail below. **Step 1: Hypernetwork for Learning NeRF Prior.** We want to design our hypernetwork, \(M\), with trainable parameters, \(\Omega\), that can predict NeRF parameters \(\{\phi_{n},h_{n}\}\) given a conditioning code \(z_{n}=\{S_{n},C_{n}\}\), where \(S_{n}\) and \(C_{n}\) are the shape and color codes, respectively, for an object instance \(n\) belonging to a specific category. Here, \(S_{n}\) and \(C_{n}\) belong to codebooks, \(S\) and \(C\), that are trained along with \(\Omega\) in an auto-decoding fashion. As shown in Figure 3 (top), ideally we want \(M\) to learn a prior \(\{\Phi_{C},\Phi_{S}\}\) over \(S\) and \(C\) such that, given a random set of codes, \(\{\mathcal{Y}_{S}\sim\Phi_{S},\mathcal{Y}_{C}\sim\Phi_{C}\}\), \(M\) should be able to generate a valid NeRF with consistent shape and texture for the given category of objects. To achieve this, we train \(M\) by assuming the same constraints as are needed to train a NeRF: a set of multiview consistent images \(\mathbf{I}=\{\{I_{\theta\in\Theta}\}_{n}\}_{n=1}^{N}\) for a set of poses, \(\Theta\). In each training step, we start with a random object instance, \(n\), and use the corresponding codes \(S_{n}\) and \(C_{n}\) from the codebooks as an input for \(M\). Our key insight is that estimating **both** the MRHEs and MLP weights results in a higher quality than other alternatives. \(M\) then predicts the NeRF parameters \(\{\phi_{n},h_{n}\}\), which are then used to minimize the following objective: \[\mathcal{L}(\Omega,S_{n},C_{n})=\sum_{\mathbf{r}\in R}||\mathbf{V}^{\prime}(\mathbf{r},\{\sigma_{n}^{\{\mathbf{x}_{i}^{\mathbf{r}}\}},c_{n}^{\{\mathbf{x}_{i}^{\mathbf{r}},\theta\}}\}_{i=1}^{L})-\mathbf{V}_{n}(\mathbf{r})|| \tag{2}\] \[\{\sigma_{n}^{\{\mathbf{x}_{i}^{\mathbf{r}}\}},c_{n}^{\{\mathbf{x}_{i}^{\mathbf{r}},\theta\}}\}=f_{(\phi_{n},h_{n})}(x_{i}^{\mathbf{r}},\theta)\quad\mathrm{and}\quad\{\phi_{n},h_{n}\}=M_{\Omega}(S_{n},C_{n}) \tag{3}\] where \(\mathbf{V}^{\prime}\) denotes the volumetric rendering function as given in eqns. \(3\) and \(5\) of [30], \(\mathbf{r}\) is a ray projected along the camera pose \(\theta\), \(x_{i}^{\mathbf{r}}\in\mathbf{x}\) is a sampled point, \(L\) denotes the number of points sampled along \(\mathbf{r}\), and \(\mathbf{V}_{n}\) denotes the ground-truth value for the projection of the \(n^{\mathrm{th}}\) object along \(\mathbf{r}\). Note that, in this step, the only trainable parameters are the meta-network weights, \(\Omega\), and the codebooks \(S\) and \(C\). In this setting, the NeRF functions \(f_{(\cdot)_{n}}\) only act as differentiable layers that allow backpropagation through to \(M\), enabling it to train with the multiview consistency attained by the volumetric rendering loss described in [30]. We use an instantiation of InstantNGP [33] as our function \(f_{(\cdot)_{n}}\), consisting of an MRHE and a small MLP.
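To illustrate Step 1 (Eqs. (2)–(3)) in code form, here is a schematic PyTorch-style training step. It is a sketch under our own naming assumptions: `M`, the codebooks, `nerf.set_params`, and `render_rays` are placeholders rather than the authors' API, with the renderer standing in for the volumetric rendering of [30].

```python
import torch
import torch.nn.functional as F

def hypernerf_training_step(M, codebook_S, codebook_C, nerf, render_rays, batch, optimizer):
    """One HyP-NeRF Step-1 update. M predicts the instance-specific hash encodings and
    MLP weights; the NeRF holds no trainable parameters here and only provides a
    differentiable path for the photometric (volumetric rendering) loss of Eq. (2)."""
    n = batch["idx"]                                 # index of the sampled object instance
    S_n, C_n = codebook_S[n], codebook_C[n]          # learnable shape / color codes
    hash_enc, mlp_weights = M(S_n, C_n)              # {h_n, phi_n} = M_Omega(S_n, C_n), Eq. (3)
    nerf.set_params(hash_enc, mlp_weights)           # hypothetical helper to load the predicted parameters
    rgb_pred = render_rays(nerf, batch["rays"])      # differentiable volumetric rendering (placeholder)
    loss = F.mse_loss(rgb_pred, batch["rgb"])        # compare against ground-truth pixels, Eq. (2)
    optimizer.zero_grad()
    loss.backward()                                  # gradients reach only Omega and the codebooks
    optimizer.step()
    return loss.item()
```

The optimizer here is assumed to hold \(\Omega\) and the two codebooks, matching the statement that these are the only trainable parameters in this step.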
**Step 2: Denoise and Finetune.** In the first step, \(M\) is trained to produce a consistent NeRF with high-fidelity texture and shape. However, we observed that there is room to improve the generated NeRFs to better capture fine details like uneven textures and edge definition. To tackle this challenge, we augment \(M\) using a denoising process that takes \(f_{(\cdot)_{n}}\) and further finetunes it to achieve \(f_{(\cdot)_{n}}^{H}\). As shown in Figure 3 (bottom), we render novel views from the multiview consistent NeRF into \(m\) different predefined poses given by \(\{\theta_{1},\theta_{2}...\theta_{m}\}\) to produce a set of multiview consistent images \(\{\hat{I}_{i}\}_{i=1}^{m}\). We then use a pre-trained image-level denoising autoencoder that takes \(\{\hat{I}_{i}\}_{i=1}^{m}\) as input and produces images of improved quality given as \(\{\hat{I}_{i}^{H}\}_{i=1}^{m}\). These improved images are then used to finetune \(f_{(\cdot)_{n}}\) to achieve \(f_{(\cdot)_{n}}^{H}\). Note that, we do not train the NeRFs from scratch on \(\{\hat{I}^{H}\}\) and Figure 3: **Architecture Diagram:** HyP-NeRF is trained and inferred in two steps. In the first step **(top)**, our hypernetwork, \(M\), is trained to predict the parameters of a NeRF model, \(f_{n}\) corresponding to object instance \(n\). At this stage, the NeRF model acts as a set of differentiable layers to compute the volumetric rendering loss, using which \(M\) is trained on a set of \(N\) objects, thereby learning a prior \(\Phi=\{\Phi_{S},\Phi_{C}\}\) over the shape and color codes given by \(S\) and \(C\), respectively. In the second step **(bottom)**, the quality of the predicted multiview consistent NeRF, \(f_{n}\), is improved using a denoising network trained directly in the image space. To do this, \(f_{n}\) is rendered from multiple known poses to a set of images that are improved to photorealistic quality. \(f_{n}\) is then finetuned on these improved images. Importantly, since \(f_{n}\) is only finetuned and not optimized from scratch, and thus \(f_{n}\) retains the multiview consistency whilst improving in terms of texture and shape quality. only finetune the NeRFs, which ensures fast optimization and simplifies the task of the denoising module that only needs to improve the quality and does not necessarily need to maintain the multiview consistency. While our denoising is image-level, we still obtain multiview consistent NeRFs since we finetune on the NeRF itself (as we also demonstrate through experiments in the appendix). For our denoising autoencoder, we use VQVAE2 [40] as the backbone. To train this network, we simply use images projected from the NeRF, predicted by the hypernetwork (lower quality relative to the ground truth) as the input to the VQVAE2 model. We then train VQVAE2 to decode the ground truth by minimizing the loss objective between VQVAE2's output and the ground truth. ### HyP-NeRF Inference and Applications Training over many NeRF instances, \(M\) learns a prior \(\Phi\) that can be used to generate novel consistent NeRFs. However, \(\Phi\) is not a known distribution like Gaussian distributions that can be naively queried by sampling a random point from the underlying distribution. We tackle this in two ways: **Test Time Optimization**. In this method, given a single-view or multi-view posed image(s), we aim to estimate shape and color codes \(\{S_{o},C_{o}\}\) of the NeRF that renders the view(s). 
To achieve this, we freeze \(M\)'s parameters and optimize \(\{S_{o},C_{o}\}\) using the objective given in Equation (2). **Query Network**. We create a query network, \(\Delta\), that maps a point from a known distribution to \(\Phi\). As CLIP's [39] pretrained semantic space, say \(\mathbf{C}\), is both text and image aware, we choose \(\mathbf{C}\) as our known distribution and learn a mapping function \(\Delta(z\sim\mathbf{C})\rightarrow\Phi\). Here, \(\Delta\) is an MLP that takes \(z\) as input and produces \(\mathcal{Y}_{z}\in\Phi\) as output. To train \(\Delta\), we randomly sample one pose from the ground-truth multiview images \(I_{\theta}^{n}\in\{I_{\theta\in\Theta}\}_{n}\), compute the semantic embedding \(z_{\theta}^{n}=\text{CLIP}(I_{\theta}^{n})\), and map it to \(\{\bar{S}_{n},\bar{C}_{n}\}\in\Phi\) given as \(\{\bar{S}_{n},\bar{C}_{n}\}=\Delta(z_{\theta}^{n})\). We then train our query network by minimizing the following objective: \[\mathcal{L}_{\Delta}=\sum_{\theta}||\{\bar{S}_{n},\bar{C}_{n}\}-\{S_{n},C_{n}\}||. \tag{4}\] At the time of inference, given a text or image modality such as a text prompt, single-view unposed (in-the-wild) image, or segmented image, we compute the semantic embedding using the CLIP encoder and map it to \(\Phi\) using \(\Delta\), which then predicts the shape and color codes as input for HyP-NeRF. ## 4 Experiments We provide evaluations of the prior learned by HyP-NeRF, specifically focusing on the quality of the generated NeRFs. We consider three dimensions: (1) **Generalization** (Section 4.1): we validate whether HyP-NeRF can generate novel NeRFs not seen during training by conditioning on only a single posed view of novel NeRF instances. (2) **Compression** (Section 4.2): since HyP-NeRF is trained in an auto-decoding fashion on specific NeRF instances (see Equation (2)), we can evaluate the quality of the NeRFs compressed in this process. (3) **Retrieval** (Section 4.3): as shown in Figure 2, HyP-NeRF's prior enables various downstream applications. We show how to combine our prior with CLIP [39] to retrieve novel NeRFs. \begin{table} \begin{tabular}{c|c|c c c c|c c c c} \hline \hline & & \multicolumn{4}{c|}{Chairs} & \multicolumn{4}{c}{Sofa} \\ & & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & FID\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & FID\(\downarrow\) \\ \hline \multirow{3}{*}{ABO} & PixelNeRF [73] & 18.30 & 0.83 & 0.31 & 292.32 & 17.51 & 0.84 & 0.28 & 323.89 \\ & HyP-NeRF (Ours) & **24.23** & **0.91** & **0.16** & **68.11** & **23.96** & **0.90** & 0.18 & **120.80** \\ & w/o Denoise & 23.05 & 0.90 & **0.16** & 102.45 & 23.54 & **0.90** & **0.174** & 121.69 \\ \hline \hline \end{tabular} \end{table} Table 1: **Generalization**. Comparison of single-posed-view NeRF generation. Metrics are computed on renderings of resolution \(512\times 512\). HyP-NeRF significantly outperforms PixelNeRF on all the metrics in both datasets.
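Referring back to the test-time optimization described in Section 3.2, the following is a rough sketch of that procedure: the hypernetwork is frozen and only the shape and color codes are optimized against the given posed view(s) with the photometric objective of Equation (2). All helper names, the code initialization, and the hyperparameters below are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def test_time_optimize_codes(M, nerf, render_rays, views, code_dim, n_iters=300, lr=1e-2):
    """Freeze M and optimize {S_o, C_o} so the predicted NeRF reproduces the posed view(s).
    views: list of dicts with 'rays' and 'rgb' for each available posed image."""
    S_o = torch.zeros(1, code_dim, requires_grad=True)   # assumed zero initialization
    C_o = torch.zeros(1, code_dim, requires_grad=True)
    opt = torch.optim.Adam([S_o, C_o], lr=lr)            # only the codes are in the optimizer
    for _ in range(n_iters):
        hash_enc, mlp_weights = M(S_o, C_o)              # M's parameters are never updated
        nerf.set_params(hash_enc, mlp_weights)           # hypothetical helper, as in the earlier sketch
        loss = sum(F.mse_loss(render_rays(nerf, v["rays"]), v["rgb"]) for v in views)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return S_o.detach(), C_o.detach()
```

The query-network alternative replaces this optimization with a single forward pass: \(\Delta(\text{CLIP}(\cdot))\) directly predicts the codes.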
\begin{table} \begin{tabular}{c|c c c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{ABO Chairs} & \multicolumn{4}{c|}{ABO Table} & \multicolumn{4}{c}{ABO Sofas} \\ & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & CD \(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & CD \(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & CD \(\downarrow\) \\ \hline [33] & 35.43 & 0.96 & 0.07 & – & 34.07 & 0.95 & 0.07 & – & 33.87 & 0.95 & 0.08 & – \\ Ours & 31.37 & 0.94 & 0.1 & 0.0082 & 29.52 & 0.93 & 0.11 & 0.0033 & 30.32 & 0.94 & 0.11 & 0.0118 \\ \hline \hline \end{tabular} \end{table} Table 2: **Compression**. We randomly sample \(250\) datapoints from our training dataset and compare the NeRFs learned using InstantNGP [33] on the individual instances against HyP-NeRF that learns the entire dataset. Note, we do not employ the denoising module (see Section 3.1) for this evaluation. **Datasets and Comparisons**. We primarily compare against two baselines, PixelNeRF [73] and InstantNGP [33] on the Amazon-Berkeley Objects (ABO) [10] dataset. ABO contains diverse and detailed objects rendered at a resolution of \(512\times 512\) which is perfect to showcase the quality of the NeRF generated by HyP-NeRF. Rather than use a computationally expensive model like VisionNeRF (on the SRN [54] dataset) on a resolution of \(128\times 128\), we show our results on \(512\times 512\) and compare with PixelNeRF. Additionally, we compare with the other baselines on SRN at \(128\times 128\) resolution qualitatively in the main paper (Figure 5) and quantitatively in the appendix. For compression, we directly compare with InstantNGP [33], that proposed MRHE, trained to fit on individual objects instance-by-instance. **Architectural Details**. We use InstantNGP as \(f_{(\cdot)_{n}}\), with \(16\) levels, hashtable size of \(2^{11}\), feature dimension of 2, and linear interpolation for computing the MRHE; the MLP has a total of 5, 64-dimensional, layers. We observed that a hashtable size \(2^{11}\) produces NeRF of high-quality at par with the a size of \(2^{14}\). Hence, we use \(2^{11}\) to speed up our training. Our hypernetwork, \(M\), consists of 6 MLPs, 1 for predicting the MRHE, and the rest predicts the parameters \(\phi\) for each of the MLP layers of \(f\). Each of the MLPs are made of 3, 512-dimensional, layers. We perform all of our experiments on NVIDIA RTX 2080Tis. **Metrics**. To evaluate NeRF quality, we render them at 91 distinct views and compute metrics on the rendered images. Following PixelNeRF, we use PSNR(\(\uparrow\)), SSIM(\(\uparrow\)), and LPIPS(\(\downarrow\)) [76]. Additionally, we compute Frechet Inception Distance (FID)(\(\downarrow\)) [15] to further test the visual quality. Although these metrics measure the quality of novel-view synthesis, they do not necessarily evaluate the geometry captured by the NeRFs. Therefore, we compute Chamfer's Distance (CD) whenever necessary by extracting a mesh from NeRF densities [28]. Please see the appendix for additional details. Figure 4: **Qualitative Comparison of Generalization on ABO. The NeRFs are rendered at a resolution of \(512\times 512\). HyP-NeRF is able to preserve fine details such as the legs, creases, and texture even for novel instances. PixelNeRF fails to preserve details and to model the structure.** ### Generalization One way to evaluate if Hyp-NeRF can render novel NeRF instances of high quality is through unconditional sampling. 
However, our learned prior \(\Phi\) is a non-standard prior (unlike, e.g., a Gaussian distribution), and thus random sampling needs a carefully designed mapping between such a known prior and \(\Phi\). Therefore, we instead rely on the conditional task of single-view novel NeRF generation: given a single arbitrarily chosen view of a novel object, we generate the corresponding NeRF, \(f_{(\cdot)_{o}}\), through test-time optimization (see Section 3.2). We compare quantitatively with PixelNeRF on ABO at a high resolution of \(512\times 512\) and qualitatively with the rest of the baselines on SRN at \(128\times 128\). As shown in Table 1, we significantly outperform PixelNeRF on all of the metrics. Further, the qualitative results in Figure 4 clearly show the difference in rendering quality between HyP-NeRF and PixelNeRF. Specifically, PixelNeRF fails to learn details, especially for the Sofa category. On the other hand, HyP-NeRF preserves intricate details like the texture, legs, and folds in the objects, even at a high resolution. Further, we show our results on the widely used SRN dataset at the resolution of \(128\times 128\) in Figure 5. Here, our quality is comparable with the baselines. For the car (right), our texture is very similar to the single-view input, whereas VisionNeRF renders texture that matches the ground truth more than the input view, indicating possible overfitting. ### Compression Unlike InstantNGP, which is trained on a single 3D instance, HyP-NeRF is trained on many NeRF instances, which effectively results in the compression of these NeRFs into the latent space (or the codebook). We evaluate this compression capability by computing the NeRF quality degradation compared to the single-instance-only method, InstantNGP. We randomly sample 250 instances from the training set and train InstantNGP separately on each of them. These samples are a subset of the training data used in HyP-NeRF's codebook. We show degradation metrics in Table 2. Note that we **do not perform denoising** on the generated NeRFs, as we want to only evaluate the compression component of HyP-NeRF in this section. As can be seen in Table 2, there is a significant degradation in terms of PSNR (an average of \(11\%\)), but the overall geometry is preserved almost as well as with InstantNGP. However, InstantNGP is trained on a single instance, whereas we train on 1000s of NeRF instances (1038, 783, and 517 instances for ABO Chairs, Sofa, and Tables, respectively). This results in a 60\(\times\) compression gain: for ABO Chairs, with 1038 training instances, HyP-NeRF needs 163MB to store the model, whereas a single instance of InstantNGP needs on Figure 5: **Qualitative Comparison of Generalization on SRN** on the task of single-view inversion (posed in our case), comparing the quality of the views rendered at \(128\times 128\). HyP-NeRF renders NeRFs that are true to the input view, such as in the car example, whereas PixelNeRF and VisionNeRF hallucinate texture not present in the input view. Figure 6: **Qualitative Comparison of Querying (Section 3.2) on HyP-NeRF's prior**. At the top, we use an in-the-wild single-view unposed image to retrieve the closest NeRF HyP-NeRF has seen during training. In the middle, we take a cluttered scene and mask out the object of interest using Segment Anything [20], and at the bottom we use a text prompt as an input to our query network, \(\Delta\). \(\Delta\) then predicts the latent codes \(\{S,C\}\) that are used as input for HyP-NeRF. average 8.9MB.
Note that we use the same network architecture [1] for HyP-NeRF and InstantNGP, making this a fair comparison. Moreover, the storage complexity for InstantNGP-based NeRFs is linear with respect to the number of instances, whereas our degradation in visual quality is sublinear. ### Retrieval A generalizable learned prior has the ability to generate NeRFs based on different input modalities like text, images, segmented and occluded images, random noise, and multi-view images. We now demonstrate additional querying and retrieval capabilities as described in Section 3.2. This experiment's goal is to retrieve specific NeRF instances that HyP-NeRF has encountered during training from a single-view unposed image of that instance. Section 4.3 presents the number of times we could correctly retrieve from an arbitrary view of seen NeRF instances. We achieve almost 100% accuracy for the Chair and Sofa datasets. However, we take this a step further and try to retrieve the closest training instance code corresponding to **unseen views** of seen instances taken from in-the-wild internet images. Figure 6 (top) shows examples from this experiment in which we are able to retrieve a NeRF closely matching the input query. This demonstrates the ease of designing a simple mapping network that can effectively interact with HyP-NeRF's prior. Along with retrieving a seen instance, we use the query network to generate novel NeRFs of **unseen instances**, as shown in Figure 6 (middle and bottom). In the middle row, we take an image of a cluttered scene, segment it with SAM [20], and pass this as input to the query network, which predicts a set of latent codes given as input to HyP-NeRF (see Figure 2). Finally, in the bottom row, we show text-to-NeRF capabilities enabled by HyP-NeRF. ### Ablation Two key designs of HyP-NeRF include incorporating the MRHE and the denoising network. We present the effect of removing these two components in Table 4 and Figure 1 for the MRHE, and in Table 1, Figure 2, and Figure 4 for denoising. In the first ablation, we change the design of our neural network by using a hypernetwork to predict the parameters of a standard NeRF with positional encodings [30]. Since we remove the MRHE, we also increase the number of layers in the MLP to match the layers mentioned in [30]. Because there is a significant increase in the view rendering time, we randomly sample 70 training examples for evaluation. As seen in Table 4, the quality of the rendered views lags significantly in all the metrics, including the CD (measured against NeRFs trained individually with InstantNGP). This is showcased visually in Figure 1 and the appendix. Similarly, we find significant differences between the quality of the NeRFs before and after denoising (Table 1, Figure 2, and Figure 4), particularly in the Chair category, which has more diverse shapes. ## 5 Conclusion, Limitation, and Future Work We propose HyP-NeRF, a learned prior for Neural Radiance Fields (NeRFs). HyP-NeRF uses a hypernetwork to predict instance-specific multi-resolution hash encodings (MRHEs) along with MLP weights, which significantly improves the visual quality of the predicted NeRFs. To further improve the visual quality, we propose a denoising and finetuning technique that results in an improved NeRF that preserves its original multiview and geometric consistency. Experimental results demonstrate HyP-NeRF's capability to generalize to unseen samples and its effectiveness in compression.
With its ability to overcome limitations of existing approaches, such as rendering at high resolution and multiview consistency, HyP-NeRF holds promise for various applications as we demonstrate for single- and multi-view NeRF reconstruction and text-to-NeRF.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline & \multicolumn{4}{c}{Chairs} \\ & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & CD\(\downarrow\) \\ \hline HyP-NeRF & **29.23** & **0.94** & **0.10** & **0.0075** \\ w/o MRHE & 26.42 & 0.92 & 0.16 & 0.0100 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation of removing MRHE** on ABO dataset. Due to the significant rendering time of HyP-NeRF w/o MRHE, we sample 70 object instances from the training dataset to compute the metrics at \(512\times 512\) resolution.

**Limitation and Future Work**. One limitation of our work is the need for the pose to be known during test-time optimization (Section 3.2). Although we propose the query network to predict novel NeRFs conditioned on an unposed single view, the result may not exactly match the given view because of the loss of detail in the CLIP embedding. Future work should design a mapping network that can preserve fine details. An iterative pose refinement approach that predicts the pose along with the shape and color codes could also be adopted. A second limitation of our work is the non-standard prior \(\Phi\) that was learned by HyP-NeRF which makes unconditional generation challenging. GAN-based generative approaches solve this problem by randomly sampling from a standard distribution (like Gaussian distribution) and adversarially training the network. However, those methods often focus more on image quality than 3D structure. Future work could address this by incorporating latent diffusion models that can map a standard prior to HyP-NeRF's prior.
2304.10088
Towards the Universal Defense for Query-Based Audio Adversarial Attacks
Recently, studies show that deep learning-based automatic speech recognition (ASR) systems are vulnerable to adversarial examples (AEs), which add a small amount of noise to the original audio examples. These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices. The existing defense methods are either limited in application or only defend on results, not on the generation process. In this work, we propose a novel method to infer the adversary's intent and discover audio adversarial examples based on the AE generation process. The insight of this method is based on the observation that many existing audio AE attacks utilize query-based methods, which means the adversary must send continuous and similar queries to target ASR models during the audio AE generation process. Inspired by this observation, we propose a memory mechanism that adopts audio fingerprint technology to analyze the similarity of the current query with a certain length of query memory. Thus, we can identify when a sequence of queries appears intended to generate audio AEs. Through extensive evaluation on four state-of-the-art audio AE attacks, we demonstrate that on average our defense identifies the adversary's intent with over 90% accuracy. With careful regard for robustness, we also analyze our proposed defense and its strength in withstanding two adaptive attacks. Finally, our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining.
Feng Guo, Zheng Sun, Yuxuan Chen, Lei Ju
2023-04-20T04:50:02Z
http://arxiv.org/abs/2304.10088v1
# Towards the Universal Defense for Query-Based Audio Adversarial Attacks ###### Abstract Recently, studies show that deep learning-based automatic speech recognition (ASR) systems are vulnerable to adversarial examples (AEs), which add a small amount of noise to the original audio examples. These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices. The existing defense methods are either limited in application or only defend on results, but not on process. In this work, we propose a novel method to infer the adversary intent and discover audio adversarial examples based on the AEs generation process. The insight of this method is based on the observation: many existing audio AE attacks utilize query-based methods, which means the adversary must send continuous and similar queries to target ASR models during the audio AE generation process. Inspired by this observation, we propose a memory mechanism by adopting audio fingerprint technology to analyze the similarity of the current query with a certain length of memory query. Thus, we can identify when a sequence of queries appears to be suspectable to generate audio AEs. Through extensive evaluation on four state-of-the-art audio AE attacks, we demonstrate that on average our defense identify the adversary's intent with over \(90\%\) accuracy. With careful regard for robustness evaluations, we also analyze our proposed defense and its strength to withstand two adaptive attacks. Finally, our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining. Keywords:Adversarial Attacks; Defense; Memory Mechanism; Query-based ## Introduction Benefiting from the application of deep learning, the field of speech recognition has also been widely developed. However, deep learning-based automatic speech recognition (ASR) systems are shown to be vulnerable to audio adversarial examples (AEs), which add tiny perturbations on benign audio clips to fool the deep neural network model. Thus, how to secure ASR systems to prevent AE attacks remains a critical question. Multiple mechanisms have been proposed to defend against audio AEs on ASR. Some methods mainly rely on signal processing skills such as smoothing, downsampling, reconstruction, and so on [1, 2, 3, 4]. These methods can destroy the adversarial components of AE to a certain extent, and prevent them from reaching the preset target to reduce their impact on ASR. But it also destroys the benign sample and works for defense against unknown attacks. There are some works that train an additional DNN network as a prior part of ASR [5, 6, 7]. However, those defense methods depend heavily on the algorithms for generating AEs, the generalization capability is the key that limits the ability of defense, and the model will be difficult to discriminate the adversarial samples without participating in the training. In addition, the existing defense methods against audio adversarial examples focus on the generation results of AEs, without on the process. We reinvestigate and rethink the process of generating the adversarial examples, trying to locate the "specific" features in this process. We also scrutinize the current state-of-the-art attacks, including white-box attacks [8, 9, 10], black-box attacks [11, 12, 13] and transfer attacks [14, 15, 16]. 
We note that the perturbation of the AEs in some attacks is quite light, and the distance between them and the benign examples is small, without a particularly significant difference. So it is difficult to identify whether a single input is an AE. We often ignore the process of AE generation and only pay attention to the results; here, we ask how to utilize this discarded information. Except for some attacks that directly generate AEs, the majority need to keep visiting the target model to adjust the AE, essentially stealing key information (e.g., gradients) from the model. In this case, the adversary needs to send massive and similar queries to the target model within a period, which likely exposes her adversarial behavior. Therefore, according to this feature, we do not try to detect individual inputs; rather, we focus on the relationship between the inputs to recognize the attack. In this work, we propose a universal and lightweight defense framework that infers adversarial behavior via a memory mechanism. The basic idea of our framework is that the queries sent to ASR models while generating adversarial examples are continuous and correlated with one another, whereas a regular query is independent of others. We consider some history inputs of a certain length as a piece of memory, analyze the correlation between a new input and the memory, and mark the input as adversarial if the correlation crosses a certain threshold. We use the similarity of the audio fingerprint to estimate the correlation of the input. The insensitivity of the audio fingerprint to noise is an attractive trait. Meanwhile, owing to its simplicity, it is hard for the adversary to be aware of the use of defensive models. Furthermore, motivated by the similarity matrix used in recommender systems, we can efficiently and quickly verify whether an input query is adversarial or benign. We employ a non-neural-network defense architecture that cannot be optimized in the same way as a neural network, so an attacker may not be able to attack the defense model from that perspective. This strategy efficiently identifies the existing state-of-the-art adversarial sample attacks. The robust average uncovering success rates (\(DSR\)) are all above 90%. Also, our proposed framework can be easily combined with any other existing defense methods. Finally, we study some adaptive attacks. We designed experiments with random noise attacks, which disturb audio fingerprint feature extraction. For noise adaptive attacks, we observed that a modest level of random noise instead results in better performance of our defense, allowing us to build a more robust defense system. In addition, we tested the potential effect of different "_fake query_" ratios \(p_{fake}\) on the results. We conducted experiments on both types of adaptive attacks and showed that our defense framework remains robust under such attempts. The main contributions of this work are three-fold: \(\bullet\) We propose a new defense mechanism for adversarial audio attacks by analyzing the correlation between inputs and memory. This is the first proposed defense framework based on the AE generation process for ASR. The robust average uncovering success rates are all beyond 90% for existing attacks, and we are the first to evaluate music-based AEs. \(\bullet\) We demonstrate the robustness of our defense framework toward adaptive attacks.
We found that the adaptive attack methods of fingerprint extraction damage and the "_fake query_" are unable to evade our defense, and our defense strategy is still effective. We build a more robust defense system through the combination of a moderate level of random noise. \(\bullet\) We designed a music-carrier dataset that can be used to produce audio adversarial examples, which also establish a foundation for future research on attacks and defenses based on music-carrier. And we release the source code for our defense and datasets at: [https://github.com/xxxx](https://github.com/xxxx). ## Background and Related Work **Adversarial Examples (AEs)**. Adversarial attacks originate from images and quickly develop, with much relevant research. Many works achieve successful attacks on image classifiers by the computed gradient and these attacks are relatively convenient to implement [17, 18, 19, 20]. Some work explores transfer attacks from white-box to black-box models but needs a lot of access to the target model [15, 16, 21]. This provides a good reference for adversarial studies on audio. One may inquire about the reasons for the existence of adversarial examples. According to several works [22, 23, 24, 25], they think that adversarial examples are not a network drawback but a feature. The network attempts to learn "all" the beneficial features during the training process, whereas humans are naturally inclined to ignore some features. When an adversary attacks the model via manipulation of such features, it leads to a rapid decrease in the accuracy of the model, whereas the accuracy of humans is immune. Thus our concern is not to remove the AEs and it fails to do so, instead, we should avoid the risk of the AEs to the model. **Audio Adversarial Attacks to ASR**. A similar situation exists in the ASR. Typically, a state-of-the-art ASR model is susceptible to deception by malicious AEs, which has evolved from a single-word attack to an attack on the entire sentence. Some state-of-the-art models were successfully attacked, [8] used CTC-loss to compute gradients to achieve an attack on DeepSpeech; CommanderSong [9] used pdf-id to design a loss function to implement attack base on Kaldi [1]; [26] implemented an attack on Lingvo [2] with psychological masking. For black-box attacks, the gradient is incomputable. However, [27] successfully attacked the DeepSpeech black-box model with a genetic algorithm; [28] successfully attacked four commercial speech API services (Google Cloud Speech-to-Text, Microsoft Bing Speech Service, IBM Speech to Text, and Amazon Transcribe); [3] successfully attacked the speech recognition API interfaces of iFLYTEK and Ali with the co-evolutionary algorithm. Besides, already there are attacks that can be launched in the physical world. In order to enhance the robustness of physical attacks, in [27, 28, 29], the authors added the Gaussian white noise to AEs and the evaluation results show that this strategy enhances the physical robustness of the AEs. Although they do not require a specific noise model, they may rely on the playback device and the experimental environment. These attacks inevitably require a massive amount of queries to models, and query-based attacks are becoming worse with time. In this article, our main object of our article will be focused on recognizing such attacks before they succeed and defending against query-based adversarial attacks. **Defense against Audio Adversarial Attacks**. 
The majority of proposed methods of defense against audio adversarial attacks are removing or ruining the adversarial component by the technical tool of signal processing. Paper [1] proposed random smoothing to mask the disturbing adversarial component. [2] proposed WaveGAN vocoder to reconstruct the waveform to eliminate the disturbing domain. [30] used label smoothing, [31, 32] squeezed the audio, [3] is the down-sampling method and [4] added distorted signals. These works of defense are concerned with removing or ruining the perturbation component. Those approaches have both advantages and disadvantages, as it breaks the adversarial behavior of AEs while also causing a lot of damage to examples of benign queries. Deficiency of hard evidence for the difference between AEs and benign examples. Some people suggested applying sub-models to preclude some attacks [5, 33]. The literature [6, 7, 34] applies extra neural networks to check adversarial examples to protect the ASR model. But they can only restrain some existing attacks, which are impotent to uncertain attacks. The applications are limited due to the sub-models bulky. Some methods based on state detection of images [35, 36] also provide some guidance for the audio adversarial attacks. Although these defensive works are available for certain types of attacks, it is a deficiency that the evaluation of adaptive attacks is incomplete or oversimplified. No integral architecture is available for combination with other methods. We work mainly on building a lightweight framework that can be easily combined with other defense methods. **Problem Setup**. Hereafter, we concentrate on adversarial tasks. In a setup like this, the DNN network is represented as \(f\), and \(f:\mathbf{X}\rightarrow\mathbf{C}\) represents the given input \(x(x\in\mathbf{X})\) is mapped to one of a set of classes \(\mathbf{C}\), where \(f(x)=c\in\mathbf{C}\). The DNN model is vulnerable to adversarial input attacks, which forces the DNN model to misjudge. Attacks on DNNs can be classified as targeted and untargeted. Here, we will focus on the setting of targeted attacks. Specifically, adversarial examples \(x^{*}\) are normally generated by slightly modifying \(x\) and \(x^{*}=x+\delta\). The solve of \(\delta\) can be converted to a min-optimization problem, i.e., \(arg\ min\)\(\mathcal{L}(f(x+\delta),c^{*})\). The adversary's goal is to force \(f\) to misclassify \(x^{*}\) as the target \(c^{*}\), i.e., \(f(x^{*})=c^{*},c^{*}\neq c\). To ensure that \(x^{*}\) is acoustically similar to \(x\), the perturbation needs to be restricted to a limited range \(g(x^{*}-x)\leq\varepsilon\), where the \(g\) is a measure function of the auditory difference. The attack process is shown in Fig. 1. ## Defense against Query-Based Audio Adversarial Attacks A successful audio AE requires a specified carrier (the carrier can be music or dialogue) undergoing several iterations and queries. The process of AE generation is continuous. Every time, the adversary needs to produce a small disturbance \(\delta\) to repeatedly adjust \(x^{*}\). When crossing the decision boundary, a successful AE is done and the whole process is depicted in Fig. 2. Our defense is motivated by the process nature of query-based attacks. We can examine the query-to-memory relationship to determine if queries are intended to generate an AE, which is the process-based defense approach. 
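To make this process concrete, the sketch below (in Python) illustrates the generic query loop that a query-based attack must run; `asr` and `perturb_step` stand in for the target model and an attack-specific update rule, and all names here are illustrative rather than taken from any particular attack:

```python
import numpy as np

def query_based_attack(x, target, asr, perturb_step, max_queries=5000, eps=0.05):
    """Illustrative query-based AE generation loop (cf. Fig. 2).

    x            : benign carrier waveform
    target       : transcription the adversary wants to force
    asr          : target model, asr(audio) -> transcription
    perturb_step : attack-specific update for the perturbation delta
    """
    delta = np.zeros_like(x)
    for n_queries in range(1, max_queries + 1):
        x_adv = x + delta                     # x* = x + delta
        if asr(x_adv) == target:              # decision boundary crossed
            return x_adv, n_queries           # successful AE
        # adjust delta carefully and query again; every iteration sends
        # another clip that is only slightly different from the last one
        delta = np.clip(delta + perturb_step(x_adv, target), -eps, eps)
    return None, max_queries                  # attack failed within the budget
```

The property our defense exploits is visible in this loop: consecutive queries differ only by a small \(\delta\), so their audio fingerprints remain highly similar throughout the generation process.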
To calculate the correlation \(C\) of a new query with the memory, we use the similarity \(F\) of the audio fingerprints to estimate the correlation, i.e., \(C(q_{\text{memory}},q_{\text{new}})\approx F(q_{\text{memory}},q_{\text{new}})\). Each query audio has unique fingerprint information. The audio fingerprint is robust to noise and adapts to a noisy environment. Moreover, it can prevent audio splicing attacks [37]. According to the obtained fingerprints, we can figure out the similarity between the input query and the memory, which provides the foundation for our determination.

Figure 1: The correct transcription of \(x\) is “My friend, how are you”, and the adversary’s purpose is to add a careful perturbation “\(\delta\)” to \(x\) and then make it become \(x^{*}\) that can be transcribed as the target “Call my wife”.

**Defense Architecture** Our defense architecture is a process-based defense approach, and our goal is to find potential attacks in continuous queries. If we determine that the audio fingerprint similarity between an input query and the memory exceeds the set threshold, we report it as part of an attack sequence and take action accordingly, such as blacklisting or warning the querying user. Fig. 3 illustrates our scheme. \(\bullet\)_Firstly_, place query audio into the cache to form a query memory \(\mathbf{X}\) of depth \(k\). If the number of audio clips put into the cache is below \(k\), consider all queries as a memory sequence. In the process of locating an attack, we expect to consume minimal resources and time, so \(k\) should not be too large. Also, it is disadvantageous for discovering adversary behavior if \(k\) is too small. The depth \(k\) is the shortest depth before we can be sure that the input queries are intended to produce AEs. \(\bullet\)_Secondly_, calculate the fingerprints of all inputs in memory \(\mathbf{X}\), overwriting and updating the previous memory. \(\bullet\)_Thirdly_, for every new input audio, we calculate the weighted cosine similarity between the new input and each fingerprint in memory. Since an audio fingerprint is a particular distribution over time and frequency, the cosine similarity can capture the correlation between such coordinate-dependent distributions. Besides, each input needs to be checked for legitimacy, so we allocate a weight value \(\alpha\) to each input with the _Inverse Variance Coefficient Method_ [38]. Then, calculate the similarity of the queries via: \[s=\sum_{i=1}^{k}\alpha_{i}s_{i},\qquad s_{i}(x,y_{i})=\frac{x\cdot y_{i}}{\|x\|\,\|y_{i}\|},\qquad\sum_{i=1}^{k}\alpha_{i}=1, \tag{1}\] where \(x\) is the fingerprint of the new input, \(y_{i}\) is a fingerprint in memory, and \(k\) is the depth of the memory \(\mathbf{X}\). The final similarity value \(s\) is the weighted average of the \(s_{i}\). The selection of the \(\alpha_{i}\) values is explained in the next section. \(\bullet\)_Fourthly_, obtain the threshold \(\delta\), the minimum similarity above which an input is regarded as malicious. When \(s>\delta\), the current input is a potential attempt at generating an AE, and appropriate measures must be taken immediately. In practice, when setting \(\delta\), it is important to have a high uncovering success rate as well as a low false positive ratio.
Usually, the false positive ratio will be limited to no more than 10% of the training data, according to the size of the training data set [35, 39]. The details of \(k\) and \(\delta\) are explained in the next part of this section.

**Memory Sequence** A memory sequence \(\mathbf{X}\) consists of several queries that are placed in the cache. In the process of attack detection, we expect to consume minimal resources and time, so \(\mathbf{X}\) should not be too large. Also, it is disadvantageous for detecting adversary behavior if \(\mathbf{X}\) is too small. An \(\mathbf{X}\) of depth \(k\) means the shortest sequence before we are sure that those queries are intended to produce AEs, and the length of the sequence is \(k\), i.e., \[\left\{\begin{array}{l}k=\min(f(1),f(2),\ldots,f(n))\\ f(i)=\left\{\begin{array}{ll}i,&\text{if }f\text{ can detect attacks,}\\ +\infty,&\text{if }f\text{ cannot detect attacks,}\end{array}\right.\end{array}\right. \tag{2}\] where \(f\) is the detection function and \(f(i)\) indicates whether the function \(f\) can detect a sequence of length \(i\). Eq. 2 implies that depths \(1,2,\ldots,k-1\) are not sufficient for \(\mathbf{X}\) to be considered as intending to generate AEs; depths \(k,k+1,\ldots,n\) are considered to be for the purpose of generating AEs, with the minimum depth being \(k\). We explain how to choose the value of \(k\) in the parameter selection.

Figure 2: Query-based attack: setting a target, for the first time \(x^{*}=x\); if \(x^{*}\) can be transcribed as the target, the AE is true, else false; adjust the \(\delta\) carefully and perform the next query. Repeat this process until \(x^{*}\) can be transcribed as the target.

Figure 3: Query-based defense: Architecture for recognizing query-based audio adversarial attacks.

**Query Audio Fingerprints Similarity** The auditory similarity is an important feature in estimating the gap between humans and machines. There is a close auditory similarity between the malicious examples and the benign examples. The malicious examples are produced by appending carefully structured small perturbations to the benign carriers. Although the neural network regards them as two completely different classes, humans intuitively perceive them as the same. So the trait of staying intuitively consistent with humans is what we need. The audio fingerprint has this trait and is not as sensitive to perturbations as the DNN. Fingerprints will maintain high similarity if humans believe they are the same samples. It is possible to predict whether a new input has a strong correlation with the memory and whether they share the same behavioral attributes, according to the similarity computation between the preserved fingerprints and the new one. This is similar to a recommender system [40, 41], which differentiates users based on their past behaviors and recommends new content or products [42, 43]. We note that the digital audio fingerprint [44, 45] uniquely flags audio. Small amounts of noise in the audio do not disturb the core information of the fingerprint. And it can defend against some attacks such as audio patching. Moreover, employing fingerprint similarity as an audio similarity is reliable and feasible in implementation cost. Fingerprint similarity relies on the following requirements: assume that \(s\) is the similarity function and \(x,y,z\) are three candidates in \(D\)-dimensional space that satisfy Eq. 3, Eq. 4, Eq. 5, and Eq. 6.
\[s(x,y)\geq 0,(Non-negativity) \tag{3}\] \[s(x,y)=1,\ only\ x=y.(Homogeneity) \tag{4}\] \[s(x,y)=s(y,x).(Symmetry) \tag{5}\] \[s(x,y)+s(x,z)\geq s(y,z).(Triangularinequality) \tag{6}\] A robust acoustic fingerprinting algorithm needs to consider the perception of the audio. When two audio files sound the same, their acoustic fingerprints should be the same or very close, even if there are some differences in their file data. According to the literature [37, 44]. The fingerprint similarity can be divided into two steps: **fingerprint extraction** and **similarity calculation**. Audio corresponds to a unique fingerprint, so the relationship between digital audio fingerprint \(\mathbf{F}\) and audio object \(\mathbf{X}\) is a surjection \(h:\mathbf{X}\rightarrow\mathbf{F}\), and only when \(\forall f\in\mathbf{F},\exists x\in\mathbf{X},\rightarrow\)\(f=h(x)\). That expands to \(\{x_{1}\to f_{1},x_{2}\to f_{2}...x_{n}\to f_{n}\}\) or \(\{f_{1}=h(x_{1}),f_{2}=h(x_{2})...f_{n}=h(x_{n})\}\). For fingerprint \(f_{i},f_{j}\in\mathbf{F}\), we can obtain similarity \(s_{ij}\) (\(s_{ij}\in\mathbf{S}\)) and \(g:\mathbf{F}\rightarrow\mathbf{S}\) is surjection only when \(\forall s\in\mathbf{S},\exists f_{i},f_{j}\in\mathbf{F},\rightarrow\)\(s=g(f_{i},f_{j})\). \(h,g\) is the map function. \(\bullet\)_Fingerprint extraction_ (\(h:\mathbf{X}\rightarrow\mathbf{F}\)). The fingerprint extraction process is illustrated in the fingerprint extraction module in Fig. 4. The main procedures include: 1) Preprocessing: it mainly involves frame split and filtering of the input data. 2) STFT: short-time Fourier transform. For each frame, apply STFT via Eq. 7, where \(x(t)\) is the input signal at time \(t\), \(h(t-\tau)\) is the window function, and \(S(\omega,\tau)\) shows the spectral result if the center of the window function is \(\tau\). 3) Find Peaks: after STFT, select the frequency peaks \(f\) and corresponding time \(t\), and make sure the distribution of frequency peaks is uniform. 4) Pairs: pair the obtained frequency peaks \(f\) and time \(t\), then the result \(\{f,t\}\) is used as fingerprints \(f_{i}\) and \(f_{i}\) is a high-dimensional vector of a certain length. \[S(\omega,\tau)=\sum_{t=-\infty}^{\infty}x(t)h(t-\tau)e^{-j\omega m} \tag{7}\] \(\bullet\)_Find Peaks._ In Fig. 4, after calculating the STFT, we need to uniformly select the peak in the frequency domain. Eq. 8 describes this process, in which \(F(n,m)\) is the two-dimensional matrix after STFT, \(H(u,v)\) is the kernel function. Eq. 9 is the maximum filter and Eq. 10 is the high-pass filter for resetting the frequency to 0 when the frequency is below the cutoff \(D_{0}\). Both filters are useful for canceling low-frequency components and uniformly capturing the local maximum high frequencies. We choose the former as a tool to find peaks. \(\bullet\)_Similarity calculation_ (\(g:\boldsymbol{F}\rightarrow\boldsymbol{S}\)). After fingerprint extraction, fingerprint \(f\) is obtained, which is written as \(x=f_{i}\). Similarly, another fingerprint can be written as \(y=f_{j}\) and its length is the same as \(x\). Then calculate the similarity \(s\) between them. The process is illustrated in the Similarity Module in Fig. 4. The fingerprint contains coordinate-dependent details. Finally, the similarity of \(x\), \(y\) could be achieved by Eq. 1. 
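A minimal sketch of this two-step procedure in Python is given below; the window length, neighborhood size, and fingerprint dimensionality are illustrative choices, not the exact parameters of our implementation, and a simple energy cut stands in for the high-pass filter:

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import maximum_filter

def extract_fingerprint(x, fs=16000, nperseg=1024, neighborhood=20, n_peaks=256):
    """Fingerprint extraction h: X -> F (preprocess, STFT, find peaks, pair)."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)                # Eq. 7
    mag = np.abs(Z)
    # local maxima of the magnitude spectrogram via a maximum filter (Eq. 9),
    # with an energy threshold as a rough stand-in for the high-pass cut (Eq. 10)
    peaks = (mag == maximum_filter(mag, size=neighborhood)) & (mag > mag.mean())
    fi, ti = np.nonzero(peaks)
    order = np.argsort(mag[fi, ti])[::-1][:n_peaks]          # strongest peaks first
    pairs = np.stack([f[fi[order]], t[ti[order]]], axis=1)   # {f, t} pairs
    fp = np.zeros(2 * n_peaks)                               # fixed-length fingerprint
    fp[: 2 * len(order)] = pairs.ravel()
    return fp

def fingerprint_similarity(fp_x, fp_y):
    """Similarity calculation g: F -> S, cosine similarity of two fingerprints."""
    denom = np.linalg.norm(fp_x) * np.linalg.norm(fp_y) + 1e-12
    return float(np.dot(fp_x, fp_y) / denom)
```

Two clips that a human perceives as the same recording yield nearly identical \(\{f,t\}\) peak sets, so their cosine similarity stays high even after a small adversarial perturbation is added.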
\[G(u,v)=\frac{1}{NM}\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}F(n,m)H(u-n,v-m) \tag{8}\] \[H(u,v)=\max_{s,t\in N(u,v)}[F(s,t)] \tag{9}\] \[H(u,v)=\left\{\begin{array}{ll}0,&D(u,v)\leq D_{0}\\ 1,&D(u,v)>D_{0}\end{array}\right. \tag{10}\]

Fig. 4: Architecture of fingerprint similarity calculation.

**Parameter Selection** \(\bullet\)_The choice of \(k\) and \(\delta\)._ The larger the \(k\) value, the more effective our solution is in observing input queries, and the smaller the \(k\) value, the lower the computational cost. The \(k\) is the minimum depth of memory before we are sure that those inputs are intended to produce an AE. The \(\delta\) is the minimum similarity before we determine that the current input is malicious. So the value of \(\delta\) is influenced by the depth \(k\). Specifically, establishing the threshold requires evaluating fingerprint similarities on the datasets, so that if the entire dataset were to be randomly streamed as queries, 0.1% of the carrier dataset would be marked as attacks. (In theory, the percentage of false positives should be limited to 10% of the dataset size, but since our dataset is small, our value is 100 times smaller than the default.) Actually, the threshold \(\delta\) is a function of \(k\), and Fig. 5 discloses their relation. The smaller the threshold \(\delta\), the more intense the constraints on the input. Hence small thresholds are advisable, but a too-small value risks regarding a benign input as malicious. As observed in Fig. 5, as \(k\) increases, the similarity drops sharply in the beginning. (In turn, the distance rises rapidly. The higher the similarity, the lower the degree of dissociation between input queries, i.e., the closer the distance.) After it reaches around \(k=75\), the curves become smooth and increase modestly with \(k\), and the process is quite gentle, so we set \(k\) to 75, and the thresholds \(\delta\) for the two datasets are 0.313711 and 0.207398.

Fig. 5: \(k\) and \(\delta\): the mean of the 0.1% percentile of the datasets as a function of \(k\).

\(\bullet\)_The choice of \(\alpha\)._ First, let us consider the case in Eq. 11 below: \[\left\{\begin{array}{l}s_{1}=f(X_{A},q_{new}),\quad X_{A}=\{q_{0},q_{1},\ldots,q_{m},q_{n}\}\\ s_{2}=f(X_{B},q_{new}),\quad X_{B}=\{q_{0},q_{1},\ldots,p,q_{n}\}\end{array}\right. \tag{11}\] There exist two memory sequences, where memory \(X_{A}\) consists of \(\{q_{0},q_{1},\ldots,q_{m},q_{n}\}\) and \(X_{B}\) is \(\{q_{0},q_{1},\ldots,p,q_{n}\}\); \(s_{1}\) and \(s_{2}\) are the similarities of the two sequences with the new input, and \(f\) is the fingerprint similarity function. The key distinguishing element between \(X_{A}\) and \(X_{B}\) is that the query \(q_{m}\) differs from \(p\). Assume that \(p\) is a query deliberately placed among the queries by an adversary. The adversary's purpose in injecting \(p\) is to fabricate a fake input (i.e., almost irrelevant to the former queries) to confuse the similarity analysis and hide her intent. Essentially, both \(X_{A}\) and \(X_{B}\) are malicious memory sequences with only a trivial disparity. But \(s_{1}\) is beyond the threshold while \(s_{2}\) falls below it: \(X_{A}\) is identified as a potential attack while \(X_{B}\) is not, due to the injection of the fake query.
We call this \(p\)-input as "_fake query_", and the ratio of "_fake query_" to all queries is called \(p_{fake}\) (\(p_{fake}\)= \(\mathrm{(p/k)}*100\%\)). In our experiments, we found that the \(s\) value would change sharply when there were "fake queries" in the query memory and we employed the _Inverse Variance Coefficient Method_[38] to describe such fluctuations and disparities. According to this method, it is easy to determine the weights \(\alpha\), which are assigned as follows: \[\alpha_{i}=1-\frac{std_{i}(s(l))}{mena_{i}(s(l))}\rightarrow\alpha_{i}=\frac{1 }{\alpha}\alpha_{i}(\alpha\mathrm{=}\sum_{i=0}^{k}\alpha_{i}), \tag{12}\] where \(mena_{i}\) depict the mean, \(std_{i}\) depict the standard deviation, and \(s(l)\) depict getting the query vector of length \(l/2\) before and after the i-query. For \(l\), we set the maximum value as 7 (No more than 10% of the memory length, i.e. \(l_{\max}=floor(0.1*k)=floor(0.1*75)=7\)) and \(l\) begin with 2 (The mean and variance are worthwhile at least two values). Then, the value increases linearly. When it exceeds the maximum value, \(l\) shrinks to half of the original value and then increases linearly duplicate. Repeat this process until all elements are traversed. ### Evaluation In this section, we will show the evaluation results of our scheme for some non-adaptive attacks and adaptive attacks. We collected open-source code attacks as much as possible, and we did not evaluate attacks without open-source code, but we made some surveys about their details. Finally, we evaluated four class attacks that are well-known in the audio adversarial attack. Those are sufficiently representative and the bulk of the other work revolves around them. We evaluate the CommanderSong (CS) [9] attacks and the Devil's Whisper (DW) [28] attacks by applying the Music-set. The Mini-Librispeech dataset is applied to assess the IRTA\({}^{[3]}\) attack [26] and DS\({}^{[4]}\) attack [8]. Those attacks all reported a success rate of attacks (SRoA) of almost 100%. **Datasets** Our scheme conducts experiments on Mini-Librispeech [5] and Music-sets datasets (We build a carrier library of music-based samples containing 10,553 music clips. Appendix 1.1 contains all details about Music-sets). For Mini-Librispeech, this is a dialog-based dataset that some classic attack works rely on it and we cannot ignore it [11, 12, 27]. For Music-sets, music has the characteristic of large-scale availability in most situations, and its accessibility and popularity allow it to become a candidate of the carrier in attacks. Lots of strong attacks refer to music as the necessary carrier for producing AEs. [3, 8, 9, 10, 28] So, defense and evaluation of the AEs on musical carriers are inevitable and important. **Evaluation Metric** \(\bullet DSR\). To evaluate the effectiveness of our approach for defending the query-based attacks, we employ the detection success rate (\(DSR\)) and First-Signal-to-Noise Ratio (\(FSNR\)) as the evaluation metrics. The detection success rate (\(DSR\)) is the most intuitive metric to evaluate the detection results. To calculate it as follows: \[DSR(\%)=\frac{\mathrm{d}_{n}*k}{a_{n}}*100\%, \tag{13}\] where \(\mathrm{d}_{n}\) is the number of detections, \(a_{n}\) is the number of queries, and \(k\) is the length of memory \(X\). Obviously, the \(DSR\) value is below 1 because \(a_{n}>d_{n}*k\) is clear. The detection occurs after performing at least one query. For our purposes, we consider it to measure the probability of finding adversary behavior. 
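Putting the pieces together, the following sketch shows how the weighted similarity of Eq. 1, the inverse-variance weights of Eq. 12 (with the sliding-window schedule for \(l\) simplified to a fixed window), and the \(DSR\) of Eq. 13 fit into a single detector; `extract_fingerprint` and `fingerprint_similarity` are the illustrative helpers sketched earlier, and the defaults for \(k\) and \(\delta\) follow the parameter selection above:

```python
from collections import deque
import numpy as np

class QueryMemoryDetector:
    """Flags a query whose weighted fingerprint similarity to the last k
    queries exceeds the calibrated threshold delta (Eq. 1, Eq. 12)."""

    def __init__(self, k=75, delta=0.313711, window=7):
        self.k, self.delta, self.window = k, delta, window
        self.memory = deque(maxlen=k)            # fingerprints of recent queries

    def _weights(self, sims):
        # inverse variance coefficient weights: queries whose local similarity
        # fluctuates strongly (e.g. injected "fake queries") get down-weighted
        w = np.empty(len(sims))
        half = self.window // 2
        for i in range(len(sims)):
            local = sims[max(0, i - half): i + half + 1]
            w[i] = 1.0 - np.std(local) / (np.mean(local) + 1e-12)
        w = np.clip(w, 0.0, None)
        return w / (w.sum() + 1e-12)

    def check(self, audio):
        fp = extract_fingerprint(audio)
        sims = np.array([fingerprint_similarity(fp, m) for m in self.memory])
        self.memory.append(fp)
        if sims.size == 0:
            return False
        s = float(np.dot(self._weights(sims), sims))   # weighted similarity (Eq. 1)
        return s > self.delta                          # suspected AE-generation query

def detection_success_rate(d_n, a_n, k=75):
    """DSR (Eq. 13): d_n detections over a_n queries with memory depth k."""
    return 100.0 * d_n * k / a_n
```

For simplicity this sketch checks every incoming query, whereas in our evaluation a detection is performed roughly once per \(k\) queries.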
A higher \(DSR\) is preferable. \(\bullet FSNR\). The First-Signal-to-Noise Ratio (\(FSNR\)) is a function that defines the minimum \(SNR\) to detect an attack, i.e., how much \(SNR\) when we can detect the attack, as shown in Eq. 14: \[FSNR(dB)=20\mathrm{log}_{10}(\frac{A_{x}}{FA_{\delta}}), \tag{14}\] where \(x\) is the original sound, \(\delta\) is the perturbation, \(A_{x}\) is the amplitude of the original sound, and \(FA_{\delta}\) is the amplitude of the perturbation when the first attack is detected. This is a metric of the relative value of distortion of the AE vs the original sound. The higher \(FSNR\) describes that the query will be regarded as a suspect under a smaller perturbation. **Non-adaptive Attack Evaluation** We evaluate four class attacks that are well-known in the audio attack. Those are sufficiently representative and the bulk of the other work revolves around them. We evaluate the CommanderSong (CS) [9] attack and the Devil's Whisper (DW) [28] attacks by applying the Music-set. The Mini-Librispeech dataset is applied to assess the IRTA attack [26] and DS attack [8]. Those attacks all reported a success rate of attacks (SRoA) of almost 100%. CS attack is the representation of employing music as carriers and some subsequent work[3, 28] set it as an indispensable collection. The DW attack is the typical instance for commercial black-box APIs. Subsequently, much of the work[12, 13] on black-box attacks has to test on APIs. IRTA attack based on the psychoacoustic hiding model is an outstanding work of the period. And several studies[10, 46] adopted the psychological masking effect. DS attack is the earliest version of voice attack, which launched the gateway to voice attack and provided a reliable infrastructure for the subsequent works. \(\bullet\)_N1. CS attack Evaluation_. CS attack is a white-box attack by injecting target commands into the song. It started a precedent of producing AEs with music as a carrier and achieving a 100% success rate of attacks (SRoA) on the Kaldi speech recognition system. It has a profound influence, and many follow-up works set it as an indispensable reference. For the defense based on our approach, there are few blanks in the music, the spectrum is abundant, and the fingerprints are often more reliable than those of the dialogue version. Tab. 1 shows that CS examples spend an average of about 300 visits to the target model. Our security architecture can accurately detect such attacks with \(DSR\) up to 98%. However, the value of \(FSNR\) is only 7.38 dB, revealing that the AEs were already very noisy when we suspected the query was an attack. The primary factors of this situation are that the small perturbation is not ideal for a CS attack and the perturbation is constrained to a very broad range. Therefore, the amount of additional noise is significant. Apart from that, various audio lengths will affect the SRoA of AE. To ensure the validity of AE, the length of audio ought to be no shorter than 4s. The longer the audio, the richer the fingerprint, which is more helpful for detection. However, the shorter audio is not beneficial for the adversary to generate AEs successfully. \(\bullet\)_N2. DW attack Evaluation_. DW attack first accomplished a black-box attack on commercial speech recognition APIs (including Google Assistant, Google Home, Amazon Echo, and Microsoft Corina). Since then, attacks on APIs have gradually become a necessary option for black-box attacks and the most intuitive indicator of the attack algorithm. Tab. 
1 shows that DW also works based on the music dataset, which accounts for 50% of CS in the average query to the target model and SRoA is close to 98%. On defense, our approach enables a \(DSR\) of 84.74% under DW attack. DW attack employs a local substitution model to simulate approximately the target model of the APIs ASR system. It helps to diminish the number of queries and the likelihood of triggering detection. So \(DSR\) possible losses. The \(FSNR\) value is \(18.41dB\), which is about 2.5 times that of CS. DW increases the \(FSNR\) value by reducing the number of visits to the model, and the perturbation naturally decreases. DW adopts Noise Model to augment the physical robustness of AEs. However, the SRoA is deeply relevant to the environment and the device. Regarding the noise model, the combination of our scheme with some straightforward measures (e.g., down-sampling, filtering) can raise the level of difficulty of physical attack. \(\bullet\)_N3. IRTA attack Evaluation_. IRTA attack is a two-stage attack algorithm on Lingvo, concealing target commands to a space that the human ear cannot hear through a psychoacoustic masking model. The IRTA example is based on the open-source dataset Librispeech. This type of dialogue audio contains a large number of silent fragments. Therefore, the fingerprint of the audio is inferior to that of the music. But the inspiring thing is that our approach maintains a robust attack detection and that the \(DSR\) reaches 84%. This can be attributed to the time cost of this type of attack (Producing a successful adversarial example costs 24.8h) leads to a remarkable number of queries. Such massive queries easily provoke the inspection of the defense system. Moreover, the perturbation is very small, \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Attack** & **dataset** & **SRoA(\%)** & **Avg.Queries(\(n\))** & **Detections** & \(DSR\)(\%)** & \(FSNR\) **(dB)** \\ \hline **CS** & Music-sets & 100.00 & \(\sim\)300 & \(\sim\)3.92 & 98.00 & 7.38 \\ **DW** & Music-sets & 98.00 & \(\sim\)150 & \(\sim\)1.7 & 84.74 & 18.41 \\ \hline & **Average** & 100.00 & \(\sim\)225 & \(\sim\)2.81 & 91.37 & 12.90 \\ \hline **IRTA** & Mini-librispeech & 100.00 & \(\sim\)5000 & \(\sim\)56.00 & 84.00 & 40.97 \\ **DS** & Mini-librispeech & 100.00 & \(\sim\)1000 & \(\sim\)11.00 & 82.50 & 13.02 \\ \hline & **Average** & 100.00 & \(\sim\)3000 & \(\sim\)34.00 & 83.25 & 27.00 \\ \hline \hline \end{tabular} \end{table} Tab. 1: Non-adaptive attack evaluation. SRoA denotes the success rate of attack. The higher the value of \(DSR\) and \(FSNR\), the more beneficial. Normally, every \(k\) (\(k\)=75) query is detected once, and if the queries are less than \(k\), at least one detection is performed for all \(n\) queries, and the ratio of \(n/k\) is the detections. and the \(FSNR\) can reach 40.97dB in which the psychoacoustic masking model plays an important role. Still, the perturbation would reflect the frequency domain and the fingerprint extraction happens in the frequency domain. We can further presume that it will be costly to bypass our defenses for adversaries with an emphasis on hidden perturbation via psychoacoustic masking. Nevertheless, it also exposes a critical concern: _In the areas that humans fail to hear, is there a necessity for the machine to do so?_ AI researchers aim to narrow the gap between humans and machines, so machines should also appear human-like for regions beyond human perception. 
Blocking such attacks implies that the machine does not have the power to do anything in the regions where humans are unable to perceive, thus, the attack will completely dissolve. \(\bullet\)_N4. DS attack Evaluation._ DS attack is a type of attack first implemented on DeepSpeech. At its core is to optimize the CTC-Loss function. Compared to IRTA attacks, DS is relatively heavily perturbed that maybe without applying the theory of psychological masking, and relatively poorer \(FSNR\) but \(DSR\) is 82.5% closer to IRTA. Compared to CS and DW attacks, DS and IRTA attack are implemented on Librispeech containing rare fingerprint information, so \(DSR\) is inferior to CS and DW. Nevertheless, the general \(FSNR\) is superior to the former, showing the method's detection capability to attacks with small perturbations. Separate work deploys genetic algorithms and gradient estimation to generate adversarial samples. However, gradient estimation relies on the sampling theory. Biological evolutionary algorithms demand substantial expenses without the guideline of the gradient. The literature [27] queries numbers up to 1000+, and the literature [3] reach a stunning 30000+. From Tab. 1, it has a remarkably higher detection rate for query numbers above 1000+. Multiple query numbers are an obvious disadvantage of the evolutionary \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Attack** & **Task** & **Attack Method** & **Attack Model** & **Target** & **M or D** & **Avg.Queries** & **SRoA(\%)** \\ \hline \multirow{3}{*}{**CS[9]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{GD} & \multirow{3}{*}{Kaldi-Aspire} & \multicolumn{3}{c}{Play music.} & \multirow{3}{*}{M} & \multirow{3}{*}{\(\sim\)300} & \multirow{3}{*}{100} \\ & & & & & & & \\ & & & & & & & \\ & & & & & & & \\ \hline \multirow{3}{*}{**DS[8]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{GD} & \multirow{3}{*}{DeepSpeech} & \multirow{3}{*}{M\%} & \multirow{3}{*}{\(\sim\)1000} & \multirow{3}{*}{100} \\ & & & & & & & \\ & & & & & & & \\ \hline \multirow{3}{*}{**DW[28]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{Alt-M} & \multirow{3}{*}{APIs} & \multicolumn{3}{c}{Turn off The Light} & \multirow{3}{*}{M} & \multirow{3}{*}{\(\sim\)150} & \multirow{3}{*}{100} \\ & & & & & & & \\ & & & & & & & \\ & & & & & & & \\ & & & & & & & \\ \hline \multirow{3}{*}{**DSG[27]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{GA \& GE} & \multirow{3}{*}{DeepSpeech} & \multirow{3}{*}{\(\sim\)1000} & \multirow{3}{*}{35} \\ & & & & & & & \\ & & & & & & & \\ & & & & & & & \\ \hline \multirow{3}{*}{**SGEA[47]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{SGE} & \multirow{3}{*}{DeepSpeech} & \multirow{3}{*}{\(\sim\)1000} & \multirow{3}{*}{98} \\ & & & & & & & \\ & & & & & & & \\ & & & & & & & \\ \hline \multirow{3}{*}{**IRTA[26]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{Psy-M} & \multirow{3}{*}{Lingvo} & \multirow{3}{*}{\(\sim\)-ce miss miss missgers had her accident.} & \multirow{3}{*}{D} & \multirow{3}{*}{\(\sim\)5000} & \multirow{3}{*}{100} \\ & & & & & & & \\ & & & & & & & \\ \cline{1-1} & & & & & & & \\ \hline \multirow{3}{*}{**PHA[10]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{Psy-M} & \multirow{3}{*}{Kaldi-WSJ} & \multirow{3}{*}{The command is planted.} & \multirow{3}{*}{M \& D} & \multirow{3}{*}{\(\sim\)500} & \multirow{3}{*}{98} \\ & & & & & & & The cake is a.ie. 
\\ \cline{1-1} & & & & & & & \\ \cline{1-1} & & & & & & & \\ \cline{1-1} & & & & & & & \\ \hline \multirow{3}{*}{**Occam[3]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{Co-E} & \multirow{3}{*}{DeepSpeech and APIs} & \multirow{3}{*}{Call my wife.} & \multirow{3}{*}{\(\sim\)0.} & \multirow{3}{*}{\(\sim\)30000} & \multirow{3}{*}{100} \\ & & & & & & & \\ \cline{1-1} & & & & & & & \\ \hline \multirow{3}{*}{**SirentAttack[48]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{PSO} & \multirow{3}{*}{DeepSpeech} & \multirow{3}{*}{\(\sim\)1000} & \multirow{3}{*}{100} \\ & & & & & & & \\ \cline{1-1} & & & & & & & \\ \hline \multirow{3}{*}{**MOGA-Attack[11]**} & \multirow{3}{*}{ASR} & \multirow{3}{*}{Mul-Obj} & \multirow{3}{*}{DeepSpeech and Kaldi} & \multirow{3}{*}{All of these.} & \multirow{3}{*}{D} & \multirow{3}{*}{-} & \multirow{3}{*}{*} \\ & & & & & & That i love you. & & \\ \hline \hline \end{tabular} \end{table} Tab. 2: An overview of the query-based attacks against ASR. **Note:** in the table, “GD”, “GA”, “GE”, “SGE” represent the Gradient Descent, Genetic Algorithm, Gradient Estimation, and Selective Gradient Estimation. “Alt-M”, “Psy-M”, “Co-E”, “PSO”, “Mul-Obj GO” represent the Alternative Models, Psychoacoustic Masking, Co-evolutionary algorithm, Particle Swarm Optimization, Multi-Objective Genetic Optimization. “M or D” represents the Music-carrier or Dialogue-carrier, “-” denotes the author didn’t show, and “*” denotes the author told us the WER of the attack model to AEs was increased to 980%. algorithm. Unless improving this shortcoming, do not expect to evade our inspection. We investigated the perturbation level of AEs so that we can easily compare them with \(FSNR\), as shown in Tab. 4. \(\bullet\)_N5. Other query-based attacks evaluation._ Other query-based attacks, the majority of them are based on the 4 attacks above. CS attack is the representation of employing music as the carrier. After that, subsequent work [3, 28] also set it as an indispensable collection. The DW attack is a typical example of attacking commercial black-box APIs. Subsequently, a lot of the work [12, 13] on black-box attacks has to be tested on APIs. IRTA attack based on the psychoacoustic hiding model is an outstanding work of the period. Several studies [10, 46] adopted the psychological masking effect. Literature [47, 48] using biological evolutionary algorithms to perform attacks and optimize the number of queries. DS attack is the earliest relatively sophisticated version of an audio attack, which provides a reliable infrastructure for subsequent works. Since our defense framework is process-based, we were unable to evaluate the attacks without open-source code but still surveyed them. More relevant details are provided in Tab. 2. We can learn from the above that applying a music carrier is quite advantageous for detection, also the detection is significant when the number of queries is numerous. The critical factor is that the fingerprints of music are more obtuse to perturbations, while the conversational ones are not. In terms of fingerprint extraction, Fig. 9 from the Appendix 2 supports similar results. In the following, we built a more robust defense system that raises the average DSR beyond 90% and substantially strengthens our defense, Tab. 5 shows the results. For adversaries, unless improving those shortcomings, do not expect to evade our inspection. Below, we propose a more robust defense system by combining other methods, which can achieve a detection ratio of over 90%, The details are in. 
**Adaptive Attack Evaluation** Whereas our defense framework can effectively detect existing attacks, it only assures in "_zero-knowledge_" attack scenarios where the attacker is unknown of the existence of the defense framework. In order to reliably implement our framework in practice, we have to assess adaptive adversaries who understand the defense details entirely and intend to deploy some strategies to bypass the defense mechanism. Following the guidelines of [49], we designed adaptive attacks to evaluate the ability of our defense to adaptive attacks. According to the defense details we consider both adaptive attacks: _Random Noise attack_ and _Proportion of Fake Queries attack_. Fig. 6 shows the effect of \(p_{fake}\) on \(DSR\). \(\bullet\)_A1. Random noise attack._ We conceive an adaptive attack of corrupting fingerprint extraction. Randomly insert noise with different SNR to the audio in the process of query. Forcing the \(x^{*}\) to bypass the defense, and successfully attack the ASR, and the perturbation is not easily perceived by the human. In Fig. 7, according to audio quality theory, when \(SNR\) is above 70, it belongs to high-fidelity quality audio. When \(SNR=0\), the noise has the same energetic value as the original audio, so when \(SNR\) is below 0, the original audio is almost flooded with noise. As shown in Fig. 7, for CS and DW attacks, when Noise-SNR is below 0, the SRoA is also nearly 0. Therefore the malicious queries are almost unsuccessful in attacking the ASR system, which is unacceptable for the adversaries. When Noise-SNR\(>\)0, the SRoA and \(DSR\) are rapidly recovering to their maximum value and keep it and the SRoA, in other words, \(DSR\) displays a comparable consistency. Though large noise decreases the \(DSR\) value but also decreases SRoA, which diverts from the adversary's target. So it is impossible to achieve superior SRoA while trying to break our defense. However, when the Noise-SNR value gradually increases, for IRTA and DS attacks, SRoA is rapidly recovering to its maximum value and keeping it except IRTA attack recovery is slower and the \(DSR\) value sharply rises and then gently drops until it becomes peaceful. Since Mini-Librispeech is a dialogue-based dataset and it contains a lot of blank frames, when inserting noise, it will fill the blank and become more helpful to the extraction of fingerprints. It can be deduced that joining appropriate noise can improve the robustness to our method. The query of containing noise does not undermine our defenses, on the contrary, it leads the defense system more sensitive and robust. Fig. 6: \(DSR\) as a function of the \(p_{fake}\). \(\bullet\)_A2. Proportion of Fake Queries Attack._ Moreover, we noted above that some adversaries use "_fake queries_" to develop a fake query history. In this section, we evaluate the impact on the defense system for different proportions of "_fake queries_" (\(p_{fake}\)). Tab. 3 plots the results. It also can be intuitively understood from Fig. 6. As observed, there is a critical threshold \(p_{fake}\) for the defender: once \(p_{fake}\) exceeds this threshold, the \(DSR\) drops dramatically. For these attacks, if \(p_{fake}\geq 60\%\), \(DSR\) drops to approximately \(10\%\) or \(0\%\). For CS and DW attacks, the \(DSR\) linearly dropped when \(p_{fake}\in[25,50]\). However, for the other two attacks, this situation does not happen. 
An intuitive explanation of this can be as follows: \(p_{fake}\) mainly affects the estimation of the query of interest for defense; yet, the priority of our defense is to distinguish the authenticity of the query, \(p_{fake}\) tends to have a larger impact on our proposal. The adversary's strategy to evade detection would probably be to set up \(p_{fake}\) to a sufficiently high value (e.g., \(p_{fake}\geq 60\%\)), but this would dramatically raise the cost of the attack and the number of queries. This makes the attacker overwhelmed and they are not sure if they can obtain AEs to attack target models successfully. \(\bullet\)_A3. Other Adaptive Attacks._ EOT is a well-known attack on images [50]. However, in audio attacks, after testing EOT transformations in audio (including waveform shifting, volume up/down, Pitch Shifting, Frequency Mask [51], SpecSwap, etc), we found that EOT transformations could play the role of enlarging the datasets but has no significant effect on ASR results. We suspect that this is attributed to the time-series correlation between the before and after of audio data, so some simple transformations cannot impact ASR. Therefore it is difficult to perform an adaptive \begin{table} \begin{tabular}{c c c c} \hline \hline **Constraint** & \(||\delta||_{1}\) & \(||\delta||_{2}\) & \(||\delta||_{\infty}\) \\ \hline **CS-Attack** & 346.85 & 23.62 & 0.24 \\ **DW-Attack** & 198.21 & 2.60 & 0.05 \\ \hline **Average** & 272.53 & 13.11 & 0.15 \\ \hline **IRTA-Attack** & 169.58 & 0.80 & 0.02 \\ **DS-Attack** & 63.09 & 0.37 & 0.37 \\ \hline **Average** & 116.34 & 0.59 & 0.20 \\ \hline \hline \end{tabular} \end{table} Tab. 4: Perturbation levels for different attacks (The numbers in the table are the outcome after normalization). Shows perturbation levels for different attacks. The higher the level of perturbation, the smaller the \(FSNR\). Fig. 7: Adaptive attack: Different noise-snr to disturb the extraction of fingerprints. Noise-SNR indicates the noise of different SNR. The smaller Noise-SNR means higher noise level. attack similar to the images. We evaluated the two like-liest adaptive attacks, and of course, it is also possible to design the other attack according to the details of the defense, but probably without significant impact. **Robust Defense** In the _random noise_ adaptive attack and Fig. 6, we found that the appropriate level of noise could help us build a more robust defense system, so we further studied the subtle relationship. In Tab. 5, we set up six different noise levels. The audio belongs to high-fidelity quality audio when \(SNR>75\) and the noise is extremely slight. Once the noise gradually rises to \(SNR=75\), our defense system can achieve more than 90% detection success rate for all attacks; when the noise rises to \(SNR=50\), the detection success rate reaches the maximum (and the average is 93.69%). The noise \(SNR<25\), the noise has become significant, exceeds the threshold, and the detection success rate drops. So, with the noise \(SNR\in[25,75]\), we can build a more robust defense system and achieve a detection ratio of over 90%. Besides, our experiments also proved that the small input noise has a defense effect [52]. ## Conclusion and Discussion In this work, we analyze adversary behavior during AE generation and detect potential attacks based on the association before and after the query. Our focus is on detecting the AE generation process, which provides a novel approach to process-based defense. 
Our approach achieves average detection success rate of over 90%. It is a lightweight framework that is both quick and efficient, able to be closely combined with other defenses to build the foundation for a structured defense system. However, with more research on attacks, single-step generation attack of AEs is growing, which impose higher requirements to the defense. From another aspect, our scheme increases the attacker's cost of attack, and our scheme will be fooled if the attacker has a large number of resources. Fingerprint fraud techniques can also create vulnerabilities in our approach. In addition, some adversaries may give up their attacks on the target system and turn to attack the defense system, which also warrants our attention. ## 1 Datasets ### Music-sets We contacted the authors of CommanderSong [9] and Devil's Whisper [28] to consult them on the details about how to design the music-based carries for the adversarial samples (AEs) they used in their experiments, and obtained a copy of the original music dataset they applied. To evaluate the threshold, we created a music carrier dataset for making AEs based on the obtained original music dataset. We have released the processed dataset and you can get our data from: [https://drive.google.com/file/d/1wPVK9SSTyB0aaX4FFKEbYKuKShmBvDc/view](https://drive.google.com/file/d/1wPVK9SSTyB0aaX4FFKEbYKuKShmBvDc/view). The original music dataset is a raw dataset of 100 songs collected on YouTube, including pop, classical, rock and light music, and ranging across multiple languages, including Korean, English, Japanese, Chinese, Russian and Arabic. The length of each song is about 5 minutes. In our experiments, we studied the impact of different audio lengths on AEs and found that different lengths of audio affect the generation of adversarial examples. Overly short audio decreases the success rate of attacks, and too long audio increases the cost of producing AEs. Only properly lengthy audio is a candidate for AEs. We use Word Error Rates (WER) to research this issue. \[WER=100\%*\frac{S+D+I}{N} \tag{15}\] In Eq. 15, \(S\) represents the number of characters replaced, \(D\) represents the number of characters deleted, \(I\) represents the number of characters inserted, and \(N\) represents the total number of characters. From Fig. 8, it can be seen that the WER changes with the length of the audio. If the audio length is below 3.19\(s\), the attack success rate of the AEs decreases as the audio length reduces (the WER of the target \begin{table} \begin{tabular}{c c c c c c} \hline \hline **SNR(dB)** & **CS-Attack** & **DW-Attack** & **IRTA-Attack** & **DS-Attack** & **Average** \\ \hline 150 & 3.92/98.00 & 1.70/84.74 & 56.00/84.00 & 11.00/82.5 & 17.44/87.31 \\ 100 & 3.92/98.00 & 1.70/84.74 & 56.50/84.75 & 11.00/82.50 & 17.57/87.50 \\ **75** & **3.92/98.00** & **1.70/84.74** & **58.50/87.75** & **12.50/93.75** & **19.16/91.06** \\ **50** & **3.92/98.00** & **1.70/84.74** & **63.00/94.50** & **13.00/97.50** & **20.41/93.69** \\ **25** & **3.92/98.00** & **1.70/84.74** & **63.00/94.50** & **13.00/97.50** & **20.41/93.69** \\ 0 & 3.70/92.50 & 1.50/75.00 & 10.00/15.00 & 3.00/22.50 & 4.55/51.25 \\ \hline \hline \end{tabular} * \({}^{\text{``3.92/98.00"}}\) indicates that the average queries are 3.92, and \(DSR\) is 98.00% \end{table} Tab. 5: Robust defense: we add noise based on different \(SNR\), the lower the SNR, the heavier the added noise. command increases). Above this value, the attack success rate reaches 100% and the WER falls to 0%. 
However, the time cost of producing an AE increases linearly with the length of the audio: the longer the audio, the higher the cost of producing AEs. When the audio length is \(3s\)-\(4s\), the best performance is obtained and the ratio of time cost to WER is the lowest. Balancing time and word error rate, the recommended audio length is therefore \(3s\) or \(4s\). During the production of our dataset, we divided each audio clip into \(3s\) and \(4s\) segments to balance the success rate of the attack and the cost. To simulate disturbances and improve the noise immunity of the audio, we insert some noise into the clean dataset. Our experiments showed that when music serves as the carrier and the randomly inserted noise is kept within 8000, the similarity distribution lies in \([0.36,1]\); the noise does not influence human auditory perception, and the primary information of the audio remains recognizable. We therefore keep the randomly inserted noise per audio clip below 8000. When clipping music, the length of each slice is limited to 4s and slices are taken at random positions. For each song, we segment 25 slices at a time, 5 times in total, finally obtaining \(5*25*100=12,500\) slices. After that, noise is randomly inserted into some of these slices by randomly displacing the sequence. After testing each slice, 10553 qualified slices were obtained in total, occupying nearly 1.3G of storage. Currently, in the field of audio adversarial attacks, there is no publicly available dataset based on music; the existing public datasets used as carriers for AEs are dialogue-based. Music, however, is becoming a natural candidate carrier for attacks because of its advantages, yet proper datasets are lacking. To alleviate this problem, we are happy to share our data with the research community so that more research on music-based attacks and defenses can be developed. We also welcome interested researchers to expand the dataset with us. ### Mini LibriSpeech For the Mini-LibriSpeech dataset, we used FFmpeg6 to convert the audio from flac to wav. Following Fig. 8, we removed samples that were either overly short or overly long, and we suggest recalculating the threshold to ensure that detection is not affected once the dataset is modified. The training data set can be downloaded from [https://www.openslr.org/resources/31/train-clean-5.tar.gz](https://www.openslr.org/resources/31/train-clean-5.tar.gz). Footnote 6: [https://github.com/FFmpeg/FFmpeg](https://github.com/FFmpeg/FFmpeg) ## 2 Benign examples and AEs Audio Fingerprint As shown in Fig. 9, when perturbations (i.e., noise) are added to clean carrier audio to generate AEs, the music-based AEs have relatively more and richer fingerprints than the dialogue-based ones, which confirms that music-based AEs are easier for our scheme to detect. We also observed that the fingerprint difference between AEs and their carriers is small: if a carrier is being used to generate AEs, the fingerprint of each query is similar and the computed similarity between queries is very high. This further supports the viability of our scheme. ## 3 Experimental Environment Linux Ubuntu 20.04 operating system, a 2080Ti GPU with 12G memory, Numpy 1.21.5, Cupy-Cuda 114, 64 CPUs with 256G RAM. ## 4 Societal Impacts For attacks that require querying the ASR model, most defense work has concentrated on processing the inputs to achieve the defense goal.
Considering only the examination of individual inputs loses the procedural information, and the resulting decisions are often unreliable. Our scheme, on the other hand, considers the totality and continuity of the inputs and captures this neglected information, which helps us better track adversary behavior and make an accurate diagnosis. Such a strategy is also more consistent with a sociological perspective. Meanwhile, dialogue-based carriers have many limitations in practical applications and are hard to reproduce in real attack scenarios, so they are gradually being abandoned by researchers. Music-based AEs are gradually becoming the mainstream of attacks, since music is easily reproduced in actual attack scenarios. The danger is very significant if music is hijacked to carry AEs, and this cannot be ignored by researchers. However, existing evaluations of defense work still focus on public dialogue datasets, and evaluations of defenses on music-based datasets are lacking. In our paper, we comprehensively evaluate AEs with music-based carriers, which has a large societal impact and also lays a solid foundation for related work in the future. Fig. 8: Audio length impacts the production time of AEs and the integrity of the command.
2305.16908
On efficient covariate adjustment selection in causal effect estimation
In order to achieve unbiased and efficient estimators of causal effects from observational data, covariate selection for confounding adjustment becomes an important task in causal inference. Despite recent advancements in graphical criterion for constructing valid and efficient adjustment sets, these methods often rely on assumptions that may not hold in practice. We examine the properties of existing graph-free covariate selection methods with respect to both validity and efficiency, highlighting the potential dangers of producing invalid adjustment sets when hidden variables are present. To address this issue, we propose a novel graph-free method, referred to as CMIO, adapted from Mixed Integer Optimization (MIO) with a set of causal constraints. Our results demonstrate that CMIO outperforms existing state-of-the-art methods and provides theoretically sound outputs. Furthermore, we present a revised version of CMIO capable of handling the scenario in the absence of causal sufficiency and graphical information, offering efficient and valid covariate adjustments for causal inference.
Hongyi Chen, Maurits Kaptein
2023-05-26T13:21:54Z
http://arxiv.org/abs/2305.16908v1
# On efficient covariate adjustment selection in causal effect estimation ###### Abstract In order to achieve unbiased and efficient estimators of causal effects from observational data, covariate selection for confounding adjustment becomes an important task in causal inference. Despite recent advancements in graphical criterion for constructing valid and efficient adjustment sets, these methods often rely on assumptions that may not hold in practice. We examine the properties of existing graph-free covariate selection methods with respect to both validity and efficiency, highlighting the potential dangers of producing invalid adjustment sets when hidden variables are present. To address this issue, we propose a novel graph-free method, referred to as CMIO, adapted from Mixed Integer Optimization (MIO) with a set of causal constraints. Our results demonstrate that CMIO outperforms existing state-of-the-art methods and provides theoretically sound outputs. Furthermore, we present a revised version of CMIO capable of handling the scenario in the absence of causal sufficiency and graphical information, offering efficient and valid covariate adjustments for causal inference. causal inference, covariate selection, sufficient adjustment set, Mixed integer optimization ## 1 Introduction Covariate selection holds great importance in various popular methodologies, including regression analysis, propensity score matching, and doubly robust methods, which aim to determine outcomes or causal effects resulting from different exposures, treatments, or interventions based on observational data. A covariate set that mediates all confounding factors affecting both exposure and outcome is deemed a valid adjustment set. (A rigorous definition in mathematical terms will be provided later in the paper.) With a focus on sufficiency, research in recent decades has sought to examine the characteristics of valid adjustment sets that can reduce the variance of unbiased estimators; one such example is a recent paper [1] on efficient estimators for recursive linear structural equation models. Two key pieces of guidance have been established for comparing and defining efficient valid adjustment sets, depending on whether a causal graph is available. The graphical criteria for evaluating the asymptotic variances of valid adjustment sets were first introduced in linear models by [2] and further expanded upon in the more general setting of non-parametric models by [3], while non-graphical settings can only provide heuristic suggestions, such as including predictors of the outcome variable [4]. One of the significant benefits of the proposed graphical criteria is the ability to identify an optimal adjustment set that yields the unbiased causal effect estimator with the smallest asymptotic variance among all valid adjustment sets. As important as determining the composition of an efficient valid adjustment set is, so too is the challenge of selecting such a set from a vast array of covariates, particularly in the absence of a causal graph. Currently, many techniques for efficient covariate selection, which include nearly all algorithms underpinned by optimization methods, e.g. [5, 6, 7], have overlooked the risk of invalidity that arises when attempting to identify a target set that is both valid and more efficient. These methods have been demonstrated to be effective under the assumption of causal sufficiency: no hidden variables allowed.
Our research, however, highlights that these methods may become unreliable when applied to more realistic scenarios featuring hidden variables. As a result, it is crucial to assess the limitations of state-of-the-art methods not only in terms of their efficiency but also in terms of their validity in the presence of hidden variables. Additionally, the development of novel, robust methods and algorithms for efficient covariate adjustment selection that can handle challenging, real-world scenarios such as the existence of hidden variables is of immense value. Motivated by the aforementioned observations, we present three contributions on two dimensions that are validity and efficiency of covariate selection methods of interests as followings. * The first objective of this study is to delve into the properties of existing prediction-based covariate selection methods. Particular attention is given to the potential dangers of producing invalid adjustment sets when hidden variables are present and there is a lack of graphical information. (Section 3) * Assuming causal sufficiency, although the existing methods have demonstrated accuracy, we propose a novel graph-free method which enhances the efficiency of these methods. The novel approach along with an instantiated algorithm, denoted as CMIO, is adapted from the Mixed Integer Optimization (MIO) with a set of causal constraints. In Section 4, we demonstrate that the output of CMIO is theoretically sound and will coincide with the optimal adjustment set with probability 1, provided that the sample size is sufficiently large and complies with a linear causal model. In Section 6, we conduct a comparison between the performance of CMIO and other state-of-the-art variable selection approaches for causal effect estimation using simulated data and show that CMIO outperforms the other methods. * Lastly, in Section 5, we revisit the scenario in the absence of causal sufficiency and graphical information, offering an algorithm for generating valid and efficient sets of covariates. Drawing upon the insights gained from our earlier work on CMIO, we present a revised version capable of handling such demanding circumstances and offer both efficient and valid sets of covariate adjustments for causal inference. The statistical properties of this revised method are rigorously demonstrated in the appendices. p ## 2 Preliminaries Before presenting our primary results, we commence with a preliminary discussion of the underlying causal framework, the Structural Causal Model (SCM) as introduced by Pearl [8]. Furthermore, we provide an explanation of the graph-based terminologies and definitions used to represent random variables, conditional dependencies, and direct causal effects. Additionally, we include a brief overview of relevant literature regarding efficient covariate selection methods for causal inference in this section. ### Graph terminology A graph is an ordered pair defined as \(\mathcal{G}=(V,E)\) consisting of a vertices set V and an edges set E. In the context of graphical modelling, vertices represent random variables and edges encode probabilistic and causal relations between vertices associated with them. For convenience, we use terms vertices and variables interchangeably. A _(un)directed_ graph is a type of graph which contains only (un)directed edges. Otherwise, it is called a _mixed_ or _partially directed_ graph. 
In particular, a directed graph absent of _directed cycles_ is known as a _Directed Acyclic Graph_ (DAG) that could be transformed into an undirected graph by removing all edge directions, referred as _skeleton_ of the DAG. Two vertices are _adjacent_ if they are linked with an edge. A _directed path_ is a sequence of distinct vertices which are successively adjacent by edges of the same direction. If there is a directed path from vertex \(X\) to \(Y\), then \(X\) is an ancestor of \(Y\) while \(Y\) is a descendant of \(X\). Moreover, if such a directed path is an edge, we call \(X\) a parent of \(Y\) and \(Y\) a child of \(X\). The sets of parents, children, ancestors and descendants of a vertex \(X\) in graph \(\mathcal{G}\) are denoted as \(\mathbf{PA}(\mathcal{G},X),\mathbf{CH}(\mathcal{G},X),\mathbf{AN}(\mathcal{G},X)\) and \(\mathbf{DE}(\mathcal{G},X)\) accordingly. A graphical criterion designated as _d-seperation_[9, 8] specifies conditional independence relationships of a DAG comprehensively. If joint distribution of \(\mathbf{X}\), \(\mathbb{P}(\mathbf{X})\), contains all the conditional independence relationships encoded by a DAG \(\mathcal{G}\), the distribution is said to be Markovian to \(\mathcal{G}\). On the other hand, a distribution \(\mathbb{P}(\mathbf{X})\) is said to be faithful to a graph \(\mathcal{G}\) if every conditional independence relation of \(\mathbb{P}(\mathbf{X})\) is entailed by that in \(\mathcal{G}\)[10]. If a distribution is both Markovian and faithful with respect to a DAG, we call the DAG a perfect map of the distribution. ### Structural Causal Models A structural equation model (SEM) determines the marginal distribution of each variable in the random vector \(\mathbf{X}\)=\((X_{1},...,X_{n})\) corresponding to their DAG \(\mathcal{G}\) by structural equations of the following form [11]: \[X_{i}=f_{i}(\mathbf{PA}(\mathcal{G},X_{i}),\epsilon_{i})\] where \(\{\epsilon_{i}\}_{j=1,...,n}\) are mutually independent random noises and \(\{f_{i}\}_{j=1,...,n}\) are real functions. If a random vector \(X_{1},...,X_{n}\) is generated according a SEM, we can factorize the density of the joint distribution as [9]: \[p(x_{1},...,x_{n})=\prod_{1}^{n}p(x_{i}|\mathbf{PA}(\mathcal{G},x_{i}))\] It is clear that such a distribution is Markov to the DAG \(\mathcal{G}\). We now define an important concept called Pearl's _do-intervention_[8]. When we operate a do-intervention upon a variable \(X_{i}\), we change the generating mechanism of \(X_{i}\) and rewrite the SEM of \(\mathbf{X}\) by updating the corresponding equation of \(X_{i}\). This results in a new post-intervention distribution for \(\mathbf{X}\). In particular, if the do-intervention fixes \(X_{i}\) to a fixed point in the support of \(X_{i}\), the joint density of \(\mathbf{X}\) according to truncated factorization, \[p(x_{1},...,x_{n}|do(X_{i}=\hat{x}_{i}))=\begin{cases}\prod\limits_{j\neq i}p( x_{j}|\mathbf{PA}(\mathcal{G},x_{j}))&\text{if }X_{i}=x_{i}\\ 0&\text{otherwise}\end{cases}\] **Causal effects** Let \((X,\mathbf{Y},\mathbf{U})\) be a random vector, where \(\mathbf{Y}=(Y_{1},...,Y_{k})\). 
We define the total effects of \(X\) on \(\mathbf{Y}\) as \[(\tau_{yx})_{i}=\frac{d}{d(X)}\mathbb{E}(Y_{i}|do(X))\] where \(i\in(1,...,k)\) **Valid adjustment set** We call \(\mathbf{Z}\) is a valid adjustment set for \((X,\mathbf{Y})\) if the following holds: \[p(\mathbf{Y})|do(X=x)=\int p(\mathbf{Y}|x,\mathbf{Z}=\mathbf{z})p(\mathbf{Z}= \mathbf{z})d\mathbf{z}\] **Total effects estimation via covariate adjustment** Let \(\beta_{\mathbf{st},\mathbf{w}}\) represent the least squares regression coefficient matrix whose \((i,j)\)-th element is the regression coefficient of \(T_{j}\) in the regression of \(S_{i}\) on \(\mathbf{T}\) and \(\mathbf{W}\). Then the corresponding estimator \(\hat{\beta}_{\mathbf{yx},\mathbf{z}}\) is an unbiased estimator for \(\tau_{yx}\) while \(\mathbf{Z}\) is a valid adjustment set for \((X,\mathbf{Y})\). **Optimal adjustment set** Given \((\mathbf{X},\mathbf{Y})\), we denote the valid adjustment set whose total effect estimator attaining the smallest asymptotic variance among all valid adjustment sets as the optimal adjustment set, \[\mathbf{O}_{\mathbf{x},\mathbf{y}}=\operatorname*{argmin}_{\mathbf{Z}}\, \text{a.Var}(\hat{\beta}_{\mathbf{yx},\mathbf{z}})\] **Bias and efficiency of a causal effect estimator** The bias of a causal effect estimator of \(X\) on \(\mathbf{Y}\), \(\hat{\beta}\), is defined as \(|\mathbb{E}(\hat{\beta})-\tau_{yx})|\). If \(|\mathbb{E}(\hat{\beta})-\tau_{yx})|=\mathbf{0}\), we call \(\hat{\beta}\) an unbiased estimator of \(\tau_{yx})\). Then we could compare different _unbiased_ estimators by their variances. The smaller their variances are, the more efficient the estimators are referred as. By convention, as the estimators are generated via covariate adjustments, the concept of efficiency is also preserved when comparing different covariate sets for how precise they could generate causal effects estimators. The possibility of multiple valid adjustment sets existing renders the subsequent selection of covariates an important and sometimes even imperative task, depending on various purposes such as dimension reduction or optimizing efficiency. Some approaches, such as those found in [12, 13, 14], aim to select valid adjustment sets with minimal cardinality. Meanwhile, other approaches, such as [15, 5, 6, 7], seek to minimize the variance of the causal effect estimator by including variables known as predictors of the outcome, which are statistically associated with the outcome, into the adjustment set. The approaches aimed at obtaining valid adjustment sets with improved efficiency generally draw upon two key principles: conditional independence testing or outcome-based model selection. To be more specific, some methods, such as [15] or causal structure learning techniques (e.g. PC algorithm or Fast Causal Inference [10]) eliminate covariates from a given adjustment set if they are found to be conditionally independent of the outcome variable, given the exposure variable and other covariates, via parametric or nonparametric tests that align with the assumptions of the model. On the other hand, outcome-based model selection approaches such as those described in [5, 6, 7] strive to identify covariate sets that optimize metrics that assess the goodness of fit, such as AIC, BIC, p-value, or residual values. In the realm of covariate selection, causal graphical information can prove to be a valuable asset. 
Through the graphical characterization of efficient valid adjustment sets, first explored in linear causal models by [2] and later expanded upon for non-parametric causal models by [3], one could determine the optimal adjustment set that has the lowest asymptotic variance among all valid adjustment sets, based on the true causal Directed Acyclic Graph (DAG). Unfortunately, the true causal graph is often unavailable, rendering this graphical characterization inapplicable. As for causal graphs that have been estimated from observational data, it is necessary to assume causal sufficiency before using the results from [2, 3], which is not only unrealistic, but also raises concerns regarding the accuracy and credibility of the causal structure estimation. ## 3 Performance of covariate selection methods for causal inference with hidden variables Firstly, our research focuses on a specific group of covariate selection methods that rely on fitting a model of the outcome variables to gauge its predictive capability and thereby prune an efficient adjustment set from a larger one. (The other group has its own limitations which are out the scope of this paper.) By forgoing the requirement of causal sufficiency, we examine and evaluate their performance in causal inference with hidden variables. This allows us to provide a definitive answer to the intriguing question of whether these methods are still capable of producing valid adjustment sets in this section. ### Examination of validity as adjustment sets It is common practice in prediction-based approaches to use least square methods, such as OLS, LASSO, or similar, to prune variables from a given valid adjustment set that are considered to be "predictors" of the outcome variable in order to create a more efficient valid covariate set. While this approach may result in causal effect estimators with lower variance, it is not necessarily valid and can introduce biases when strong premises such as causal sufficiency are violated. The problem arises from the difference between what should be the "predictors" of the outcome and the variables that are selected through these approaches. Simply having covariates with zero coefficients in a regression model against a dependent variable does not indicate conditional independence between the variables given the remaining covariates. Unfortunately, many studies either intentionally or unintentionally overlook this important distinction. In this subsection, we shall formally define what is considered as the "predictors" of the outcome in many prediction-based approaches and determine the actual set of variables that are obtained through these methods within the context of causal graphs. We consider the following problem setting and define key terms that will be used later in the study. Let \(\mathcal{G}\) be a causal DAG with nodes set \(\mathbf{V}\) containing \(X,Y\) and only non-descendants of \(X,Y\). Suppose \(\mathbf{Z}\subseteq\mathbf{V}\) is a valid adjustment set relative to \(X,Y\) in \(\mathcal{G}\) and distribution of \(\mathbf{V}\) is faithful to \(\mathcal{G}\). For simplicity, we restrict our attention for situations when \(\mathbf{V}\) admits a linear additive model.(Extension to general additive models is possible.) Let \(\mathbf{T}(X,Y)\) denote \(\operatorname*{argmin}_{Z:\beta_{i}\neq 0}\mathbb{E}(Y-\alpha X-\mathbf{Z} \beta)^{2}\). 
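For illustration only (this is our sketch, not part of the paper), a finite-sample proxy for \(\mathbf{T}(X,Y)\) can be obtained by regressing \(Y\) on \((X,\mathbf{Z})\) and keeping the covariates whose coefficients are significantly nonzero; the per-coefficient t-test and the 0.05 level below are our own illustrative choices.

```python
# Sketch (ours): approximate T(X, Y) as the covariates with significantly nonzero
# coefficients in the least-squares regression of Y on (X, Z).
import numpy as np
from scipy import stats

def estimate_T(X, Z, Y, level=0.05):
    n, p = Z.shape
    D = np.column_stack([np.ones(n), X, Z])            # intercept, exposure, covariates
    coef = np.linalg.lstsq(D, Y, rcond=None)[0]
    resid = Y - D @ coef
    sigma2 = resid @ resid / (n - D.shape[1])
    cov = sigma2 * np.linalg.inv(D.T @ D)
    tstats = coef / np.sqrt(np.diag(cov))
    pvals = 2 * stats.t.sf(np.abs(tstats), df=n - D.shape[1])
    T_hat = [j for j in range(p) if pvals[j + 2] < level]   # skip intercept and X columns
    return T_hat, coef[1]             # selected covariate indices, adjusted effect estimate for X
```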
**Definition 1**.: Predictors of \(Y\) are all the \(Z\in\mathbf{Z}\) such that \(Z\not\perp Y|X\cup\mathbf{Z}\setminus Z\) To illustrate definition of predictors, we shall refer to Figure 1 for different configurations of \(\mathbf{Z}\) and corresponding predictors of \(Y\) as examples. * If \(\mathbf{Z}=\{Z_{1},Z_{2},Z_{3}\}\), the predictors of \(Y\) is \(Z_{2}\). * If \(\mathbf{Z}=\{Z_{1},Z_{3}\}\), the predictors of \(Y\) are \(\{Z_{1},Z_{3}\}\). * If \(\mathbf{Z}=\{Z_{2},Z_{3}\}\), the predictor of \(Y\) is \(Z_{2}\). **Definition 2**.: Two nodes in \(Z_{1},Z_{2}\in X\cup\mathbf{Z}\) are d-adjacent if there exists a path between \(Z_{1},Z_{2}\) on \(\mathcal{G}\) that is not blocked by \(\mathbf{Z}\setminus\{Z_{1},Z_{2}\}\) It is clear that any two adjacent nodes are d-adjacent. It is also important to note that two non-adjacent nodes of a graph may become d-adjacent when considering only a subset of the nodes, which is a common scenario in models with hidden variables. For instances, in Figure 1, both \(Z_{1},Z_{3}\) are d-adjacent to \(Y\) in the set \(\{X,Z_{1},Z_{3},Y\}\). Proposition 1 underscores the crucial difference between the outcome predictors and \(\mathbf{T}(X,Y)\) in terms of their graphical representation and their validity as adjustment sets. It is evident that when \(\mathbf{Z}\), a valid adjustment set relative to \(X,Y\) Figure 1: Example illustrating Proposition 1(c) is given, the predictors of \(Y\) also form a valid adjustment set. However, this is not always the case for \(\mathbf{T}(X,Y)\), as demonstrated in Proposition 1. **Proposition 1**.: Let \(\mathcal{G}\) be a causal DAG with nodes set \(\mathbf{V}\) containing \(X,Y\) and only non-descendants of \(X,Y\). Suppose \(\mathbf{Z}\subseteq\mathbf{V}\) is a valid adjustment set relative to \(X,Y\) in \(\mathcal{G}\). Then * Predictors of \(Y\) are exactly all variables in \(X\cup\mathbf{Z}\) that are d-adjacent to \(Y\). * If \(Y\perp\mathbf{Z}\setminus\mathbf{T}(X,Y)|X\cup\mathbf{T}(X,Y)\), then \(\mathbf{T}(X,Y)\) is a valid adjustment set relative to \(X,Y\) in \(\mathcal{G}\) and constitutes the optimal adjustment in \(\mathbf{Z}\). * \(\nexists W\in\mathbf{V}\) such that there exist two variables in predictors of \(Y\) in \(\mathbf{Z}\) that are not connected with \(Y\) in \(\mathbf{Z}\cup W\), then \(\mathbf{T}(X,Y)\) is exactly the predictors of \(Y\) and a valid adjustment set. We give an example illustrating Proposition 1(c) according to Figure 1. Assuming all coefficients in the underlying linear structural causal model are all 1's with \(Z_{2}\) missing, we can deduce the following results: * If \(Var(Z_{3})<Var(Z_{1})\), then \(\mathbf{T}(X,Y)=Z_{3}\) * If \(Var(Z_{3})>Var(Z_{1})\), then \(\mathbf{T}(X,Y)=Z_{1}\) However, it is clear that only \(Z_{1}\) is a valid adjustment set relative to \(X,Y\) but not \(Z_{3}\). Thus far, our analysis has shown that when the underlying graphical structure is unknown, the results of prediction-based methods may deviate from the true outcome predictors, and more significantly, may not remain valid as a covariate adjustment set in the presence of hidden variables. It is dangerous to draw causal inferences from these results or to employ them for causal inquiries such as interventions. 
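As a numerical check of the danger just described, the following toy simulation (our own construction; it need not match Figure 1 exactly) uses a linear SEM in which \(Z_{2}\) is hidden: adjusting for the valid covariate \(Z_{1}\) recovers the true effect, while adjusting for the strong low-variance predictor \(Z_{3}\) does not.

```python
# Toy linear SEM (ours): Z1 -> X, Z1 -> Z2, Z3 -> Z2, Z2 -> Y, X -> Y, with Z2 unobserved.
# All structural coefficients are 1, so the true total effect of X on Y is 1.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z1 = rng.normal(0.0, 2.0, n)                 # higher-variance confounder proxy
z3 = rng.normal(0.0, 1.0, n)                 # lower-variance predictor of Y (only via Z2)
z2 = z1 + z3 + rng.normal(size=n)            # hidden in the observed data
x = z1 + rng.normal(size=n)
y = x + z2 + rng.normal(size=n)

def adjusted_effect(*controls):
    D = np.column_stack((np.ones(n), x) + controls)
    return np.linalg.lstsq(D, y, rcond=None)[0][1]

print("adjusting for Z1 (valid):  ", round(adjusted_effect(z1), 3))   # close to 1
print("adjusting for Z3 (invalid):", round(adjusted_effect(z3), 3))   # biased, about 1.8 here
```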
### Predictive power revisit While extra precaution is advised when using prediction-based covariate selection methods to extract causal information, it is not surprising to find that these methods maintain their eponymous characteristics in terms of predictive performance even when hidden variables are present. **Proposition 2**.: Let \(\mathcal{G}\) be a causal DAG with nodes set \(\mathbf{V}\) containing \(X,Y\) and only non-descendants of \(X,Y\). Suppose \(\mathbf{Z}\subseteq\mathbf{V}\) is a valid adjustment set relative to \(X,Y\) in \(\mathcal{G}\), following a linear causal model. Let \(\mathbf{T}(X,Y)\) denote \(\operatorname*{argmin}_{Z:\beta_{i}\neq 0}\mathbb{E}(Y-\alpha X-\mathbf{Z}\beta)^{2}\). Then \(\mathbb{E}(\hat{\beta}_{\mathbf{yx},\mathbf{T}})=\tau_{yx}\) and \(\mathbf{var}(\hat{\beta}_{\mathbf{yx},\mathbf{T}})\leq\mathbf{var}(\hat{\beta}_{\mathbf{yx},\mathbf{z}})\) at large sample. The key point of Proposition 2 is that, when the underlying data-generating mechanism is assumed to be a linear structural causal model, possibly with hidden variables, \(\mathbf{T}(X,Y)\) still provides a numerically accurate estimate for causal effect estimation. However, this assertion cannot be extended to non-parametric models or to applications such as domain adaptation, where a purely observational model may not be adequate. ## 4 Causal Mixed Integer Optimization algorithm for optimal valid covariate selection As previously noted, the use of prediction-based covariate selection methods can lead to more efficient causal estimates under the assumption of causal sufficiency. In this section, we delve deeper into this subject and explore ways to further improve the efficiency of causal effect estimators by constructing a novel approach to selecting a valid adjustment set. We present a novel approach that, from any given set of observed covariates which is valid under the mild conditions outlined below, prunes the valid covariate adjustment set whose corresponding causal effect estimator has the minimum asymptotic variance. Our method has several key features that set it apart from existing approaches. Firstly, it is a non-graphical approach that can be applied to observational data. Secondly, it focuses on exploring the implicit causal connotation of variables in an optimization model, rather than predictive accuracy, as is often the case with state-of-the-art methods. Thirdly, we have proven its asymptotic consistency in recovering the optimal covariate adjustment set and have shown its superiority in a variety of data-generating schemes with finite samples, including high-dimensional ones, in terms of both optimal covariate set identification and subsequent causal effect estimation. Before presenting our own methodology, it is imperative that we first introduce a renowned work on best subset selection, which forms the basis of our investigation. ### Mixed Integer Optimization algorithm It is widely recognized that the classical approach of best subset selection in a linear regression model with a constraint on the cardinality of non-zero regression coefficients has limitations in scaling to a large number of covariates. However, this changed with the introduction of the mixed integer optimization (MIO) algorithmic framework by [16].
The remarkable advancements in the capabilities of MIO solvers, such as Gurobi and Cplex, now allow the methods proposed in [16] to provide solutions to best subset problems with thousands of covariates in mere minutes, making it both practical and appealing. For further information on the properties of the algorithmic framework and its theoretical and simulation results in the context of linear regression, we direct readers to consult [16]. In what follows, our focus shifts to incorporating the ideas behind MIO solvers for best subset selection into the realm of efficient covariate selection for causal estimation, a distinct but related challenge. ### Mixed Integer Optimization Formulations for optimal covariate set selection problem Throughout this section, the problem settings as well as assumptions are given as below. Given a dataset of \(n\) observations derived from a random vector \(X\cup Y\cup\mathbf{Z}\), which is generated in accordance with a linear structural equation model[8], our objective is to determine the optimal covariate adjustment set associated with the variables \(X\) and \(Y\) within \(\mathbf{Z}\), under the assumption that 1. \(\forall Z\in\mathbf{Z}\) is a nondescendant of \(X,Y\). 2. \(\mathbf{Z}\) is a valid adjustment set relative to \(X,Y\). 3. There are no hidden variables in \(\mathbf{Z}\). The first two assumptions, referred to as pretreatment covariates and conditional exchangeability, are widely acknowledged in the causal inference literature [4]. The third assumption, causal sufficiency, is implicitly assumed in most of the covariate selection methods currently under examination of our paper. Thanks to graphical criteria put forth by [2], we are able to determine the composition of the optimal adjustment set, providing insight into what we should be searching for even in the absence of a causal graph. **Proposition 3**.: Given the problem setting and assumptions 1-3, then the optimal covariate adjustment set of \(X,Y\), \(\mathbf{O}(X,Y)\), is \(\mathbf{PA}(\mathcal{G},Y)\setminus X\). With the aim of attaining the optimal adjustment set outlined in Proposition 3, we present an adaptation of the Mixed Integer Optimization problem incorporating additional causal constraints as stated in equation (1). To this end, we also propose a corresponding optimization algorithm capable of yielding a set that can be proven to be \(\mathbf{PA}(\mathcal{G},Y)\) in probability. \[\max_{k}\min_{\beta,\mathbf{u}} \frac{1}{2}||Y-\alpha X-\mathbf{Z}\beta||_{2}^{2}\] (1) s.t. \[-Cu_{i}\leq\beta_{i}\leq Cu_{i};u_{i}\in\{0,1\};\sum_{i}^{p}u_{i}= k;i=1,...,p\] \[Y\not\perp Z_{i}|A,\forall A\in\mathbf{Z}^{+},\mathrm{where}\beta _{i}\neq 0,\mathbf{Z}^{+}=\{Z_{i}:\beta_{i}\neq 0\}\] **Proposition 4**.: Solution to (1) exists uniquely and is \(\mathbf{PA}(\mathcal{G},Y)\) in probability. Inspired by "Algorithm 1" in [16], we present an algorithm that provides a solution to (1) by incorporating the existing MIO algorithm by [16] with the general properties of conditional independence imposed by the parents of \(Y\). We refer to this algorithm as CMIO. The convergence properties of this algorithm are discussed in Section 4.3. It is worth noticing that, empirically, the use of d-separation relationships in CMIO may be substituted with a measure of conditional independence, such as the p-value of t-tests for linear models with Gaussian noise, or any other nonparametric implementation of conditional independence testing. 
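As a concrete example of such a test (our illustration; the paper does not prescribe a specific implementation), the linear-Gaussian case can use the partial correlation together with Fisher's Z-transform:

```python
# Sketch (ours): Gaussian conditional-independence test via partial correlation and Fisher's Z,
# usable as the "Z_i independent of Y given A" check inside the CMIO loop for linear-Gaussian data.
import numpy as np
from scipy import stats

def gaussian_ci(data, i, j, cond, alpha=0.01):
    """True if 'X_i independent of X_j given columns cond of data (n x p)' is NOT rejected at level alpha."""
    n = data.shape[0]
    sub = data[:, [i, j] + list(cond)]
    prec = np.linalg.pinv(np.corrcoef(sub, rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])      # partial correlation of X_i, X_j given cond
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1.0 + r) / (1.0 - r)) * np.sqrt(n - len(cond) - 3)
    return 2.0 * stats.norm.sf(abs(z)) > alpha
```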
``` 1:INPUT:Given a dataset of \(\mathbf{Z}\), \(k=1\),\(\mathbf{O}=\emptyset\) 2:while\(k\leq p\)do 3: Employ MIO algorithm with \(k\)-sparsity to solve \(\min||Y-\beta(X,\mathbf{Z}^{T})^{T}||_{2}^{2}\). Denote the solution as \(\mathbf{Z}^{\mathbf{k}}\) where \(\mathbf{Z}^{0}=\mathbf{0}\) 4:if\(\mathbf{Z}^{k-1}\subseteq\mathbf{Z}^{k}\)then\(Z_{k}^{MIO}=\mathbf{Z}^{k}\setminus\mathbf{Z}^{k-1}\) 5:if\(\mathbf{Z}_{k}^{MIO}\not\perp Y|A,\forall A\subseteq\mathbf{Z}^{k-1}\)then\(\mathbf{O}=\mathbf{Z}^{\mathbf{k}}\), \(k=k+1\) 6:else\(\mathbf{O}=\mathbf{Z}^{k-1}\), \(k=p+1\) 7:endif 8:else\(\mathbf{O}=\mathbf{Z}^{k}\)\(k=k+1\) 9:endif 10:endwhilereturnO ``` **Algorithm 1** ### Statistical properties of CMIO algorithm in the causal context In their study, [16] has made a significant contribution by outlining the theoretical properties, including the convergence properties, of the MIO algorithm. We recommend adopting their findings as a basis when discussing the statistical properties of CMIO as they are blood related. For instance, the results obtained from the MIO algorithm are considered to be exact solutions to the optimization problems, subject to the constraints specified in the algorithm. In the following theorem, we present the theoretical guarantees that underpin the validity of our approach, the CMIO. **Theorem 1**.: Algorithm 1 estimates the optimal covariate set related to \(X\) on \(Y\) with probability \(1\) as \(n\rightarrow\infty\). In the appendix, we present a proof of Theorem 1 which demonstrates the soundness of Algorithm 1 in identifying the optimal covariate set. This is accomplished through the use of an adapted procedure from a MIO solver, as recommended in [16]. In this approach, the properties of MIO solutions are harnessed to address a similar optimization problem that includes causal constraints. ## 5 CMIO with hidden variables In the preceding section, we have primarily focused on the application of the MIO solver in facilitating efficient estimation of causal effects, under the assumption of causal sufficiency. A logical next step would be to examine the scenario where this assumption is no longer valid. With the presence of hidden variables, neither the optimal covariate set nor the predictors of outcome is identifiable solely from observed variables. And the optimization solution \(\mathbf{T}(X,Y)\) has been proved to be possibly an invalid adjustment set. As a result, it becomes imperative to find new and innovative ways to address this issue. Fortunately, we have discovered that adding certain variables in \(\mathbf{Z}\) to \(\mathbf{T}(X,Y)\) helps resolve the problem. **Proposition 5**.: Let \(\mathcal{G}\) be a causal DAG with nodes set \(\mathbf{V}\) containing \(X,Y\) and only non-descendants of \(X,Y\). Suppose \(\mathbf{Z}\subseteq\mathbf{V}\) is a valid adjustment set relative to \(X,Y\) in \(\mathcal{G}\). Then we can select \(\mathbf{Z}^{\prime}\), a subset of \(\mathbf{Z}\) that is valid covariate adjustment set relative to \(X,Y\) in \(\mathcal{G}\) and more efficient than \(\mathbf{Z}\) for causal effect estimation. Explicitly, \(\mathbf{Z}^{\prime}\) is comprised of the following: \(\mathbf{Z}^{\prime}=\mathbf{T}(X,Y)\cup\mathbf{S}\), where \(\mathbf{S}=\{S\in\mathbf{Z},:\,S\not\perp Y|\mathbf{T}(X,Y)\cup X\}\). Proposition 5 has enabled us to formulate a target set in a specific way, resulting in a valid adjustment set that coincides with predictors of the outcome. This is the optimal result that can be achieved using non-graphical methods. 
However, it cannot be guaranteed that this set is the optimal adjustment set in \(\mathbf{Z}\). It is also untestable whether removing a variable that is d-adjacent to \(Y\) and connected with \(X\) from a valid adjustment set would result in an invalid set, unless the assumption of causal sufficiency is made. This issue has been previously documented in literature, as evidenced by [2]. Algorithm 2 represents an implementation of the procedure outlined in Proposition 5, which serves as a complement to Algorithm 1 for dealing with covariate selection in the presence of hidden variables. The first part of the iteration in Algorithm 2 is dedicated to identifying variables that are conditionally dependent on \(Y\) given \(X\cup\mathbf{O}\) in the remaining set \(\mathbf{Z}\setminus\mathbf{O}\). The procedure successfully finds a set that complements \(\mathbf{O}\) and forms a valid adjustment set. In order to minimize the size of the complement set, a second iteration is conducted, which resembles a backward selection process to remove any redundant variables. Under the assumption of a linear model with Gaussian errors, conditional independence testing can be performed using Fisher's Z-transform at a chosen significance level. Alternatively, other measures of conditional independence can be used, as demonstrated in [15] through a non-parametric testing approach, to accommodate various data distributions. Once again, we refer back to Figure 1 and illustrate the discrepancy of existing methods and our revised CMIO on data generated from linear structural model with \(Z_{2}\) missing. Assuming all coefficients in the underlying linear structural causal model are all 1's, and \(Var(Z_{3})<Var(Z_{1})\), then \(\mathbf{T}(X,Y)=Z_{3}\), which is clearly invalid as covariate set, while Algorithm 2 would produce \(\{Z_{1},Z_{3}\}\), a valid adjustment set over \(X,Y\). ``` 1:INPUT: \(\mathbf{O}\), which is the output of Algorithm 1, a data set of \(\mathbf{Z}\) 2:\(\mathbf{T}=\mathbf{Z}\setminus\mathbf{O}\) 3:for\(\forall T\in\mathbf{T}\)do 4:if\(T\perp Y|X\cup\mathbf{O}\)then\(\mathbf{T}=\mathbf{T}\setminus T\) 5:endif 6:endfor 7:for\(\forall T\in\mathbf{T}\)do 8:if\(T\perp Y|X\cup\mathbf{O}\cup\mathbf{T}\setminus T\)then\(\mathbf{T}=\mathbf{T}\setminus T\) 9:endif 10:endforreturn\(\mathbf{O}\cup\mathbf{T}\) ``` **Algorithm 2** ## 6 Simulation study In this section, we evaluate the performance of the proposed CMIO algorithm in comparison to established state-of-the-art methods using simulated data. The simulations, based on illustrations presented by [17] and [6], consist of multivariate Gaussian distributed covariates \(\mathbf{Z}\), binary \(X\), and continuous \(Y\), generated through logit and linear regression, respectively. We consider three different data sets with varying correlation levels of \(\mathbf{Z}\) and sample sizes. The first two scenarios involve low-dimensional data with 100 covariates, 20 of which are parents of \(Y\), and sample sizes of either \(n=200\) or \(n=1000\). The third scenario involves high-dimensional data with 100 covariates and \(|\mathbf{Pa}(Y)|=4\), where \(|\mathbf{Z}|=100>n=50\). Each scenario was run 100 times. The data generating processes of three cases are specified as follows: 1. \(\mathbf{Z}=(Z_{1},...,Z_{100})^{T}\sim\mathcal{N}(\mathbf{0},I_{100})\)\(X\sim\text{Bern}(p)\), where \(p=\text{logit}(Z_{1}+Z_{2}+...+Z_{10}+Z_{21}+Z_{22}+...+Z_{30})\)\(Y=0.5*X+0.6*(\sum_{1}^{20}Z_{i})+\epsilon\), where \(\sim\mathcal{N}(0,1)\) 2. 
\(\mathbf{Z}=(Z_{1},...,Z_{100})^{T}\sim\mathcal{N}(\mathbf{0},\Sigma)\), where \(\Sigma_{ij}=1\) if \(i=j\)\(\Sigma_{ij}=0.5\) if \(i\neq j\)\(X\sim\text{Bern}(p)\), where \(p=\text{logit}(Z_{1}+Z_{2}+...+Z_{10}+Z_{21}+Z_{22}+...+Z_{30})\)\(Y=0.5*X+0.6*(\sum_{1}^{20}Z_{i})+\epsilon\), where \(\sim\mathcal{N}(0,1)\) 3. \(\mathbf{Z}=(Z_{1},...,Z_{100})^{T}\sim\mathcal{N}(\mathbf{0},\Sigma)\), where \(\Sigma_{ij}=1\) if \(i=j\)\(\Sigma_{ij}=0.5\) if \(i\neq j\)\(X\sim\text{Bern}(p)\), where \(p=\text{logit}(0.5*Z_{1}-0.5*Z_{2}+0.3*Z_{5}-0.3*Z_{6}+0.35*Z_{7}+0.4*Z_{8})\)\(Y=X+2*(\sum_{1}^{4}Z_{i})+\epsilon\), where\(\sim\mathcal{N}(0,1)\) In this section, we conduct a comparative evaluation of our proposed CMIO algorithm against two benchmark methods: the Bayesian Causal Effect Estimation (BCEE) method proposed by Talbot et al. (2015) and the Outcome Adaptive Lasso (OLA) method put forward by Shortreed et al. (2017). We also compare CMIO's performance to the oracle target set \(\mathbf{PA}(\mathcal{G},Y)\). Other well-known methods such as Bayesian Model Averaging proposed by Wang et al. (2012) and a Non-Parametric method by De and Buhlmann (2011) have been shown to perform poorly compared to BCEE and OLA in previous studies (Shortreed et al., 2017), and thus we do not consider them in our evaluation. We evaluate CMIO's performance in two dimensions: the precision of the estimated causal effect and the ability to discover the optimal adjustment set. To assess the precision of the estimated causal effect, we present the results in boxplots using two metrics: the set difference between the estimated and actual covariate set, and the percentage of the estimation being valid and containing all members of the target set. Figures 2 and 3 present box plots of estimated causal effects obtained from various methods as part of our simulation studies for Case 1 and Case 2 scenarios, respectively. Across both scenarios, our proposed method CMIO performs exceptionally well, closely approximating the results of the target set. Moreover, even at a small sample size of 200 observations, CMIO achieves an overall mean value that is closely aligned with the true causal effects, outperforming both BCEE and OLA on both objectives. As we have deliberately selected the most advanced existing methods for comparison, it is not unexpected that all methods perform satisfactorily in terms of accuracy at a larger sample size. Regarding high-dimensional data, we have summarised our findings in the box plot presented in Figure 4. OLA was excluded from this analysis as it does not support high-dimensional data. The plot clearly indicates that CMIO excels in estimation accuracy, performing significantly better than BCEE and following the target set quite closely without deviating substantially. It is essential to examine the validity of CMIO's output as an adjustment set, which can be evaluated by including all confounding variables. Therefore, we have analysed the difference between the estimated and optimal adjustment sets produced by CMIO, as well as the proportion of instances where all confounders are included in CMIO's estimates. The CMIO algorithm imposes a cardinality constraint, so only a few spurious variables are selected, if any. With the exception of scenarios with highly correlated data for confounders with a smaller sample size of 200, CMIO consistently identifies a valid adjustment set that includes all confounders. 
This reassures us of the unbiasedness of the causal effects estimations produced by CMIO apart from the boxplots in previous graphs. Overall, our results show that CMIO performs well in all simulated cases, delivering a perfect selection of the optimal adjustment set at \(n=1000\) while maintaining high validity (95%) even for small sample sizes. CMIO also produces near-perfect results when the data is high-dimensional and comprised of moderately correlated covariate variables, demonstrating the superior efficacy of our proposed CMIO algorithm in efficient covariate selection. \begin{table} \begin{tabular}{l l l} \hline \hline & Size of Set difference & \%**Pa\((Y)\subseteq\mathbf{O}\)** \\ \hline \hline Case 1, n=200 & 0.12 \(\pm\) 0.42 & 100 \\ Case 1, n=1000 & 0 & 100 \\ Case 2, n=200 & 0.96 \(\pm\) 0.83 & 95 \\ Case 2, n=1000 & 0 & 100 \\ Case 3, n=50 & 0.06 \(\pm\) 0.24 & 100 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of set difference between the optimal adjustment set and the estimated ones by CMIO and proportion of time the estimated covariate set containinng the target set. Figure 2: Boxplot of estimated causal effects for case 1. Horizontal dashed line indicate the true treatment effect. Figure 4: Boxplot of estimated causal effects for case 3. Horizontal dashed line indicates the true treatment effect. Figure 3: Boxplot of estimated causal effects for case 2. Horizontal dashed line indicates the true treatment effect. ## 7 Discussion In this paper, we provide insights into field of covariate selection methods in causal inference, focusing on the validity and efficiency of these methods. Specifically, the study examines the properties of existing prediction-based covariate selection methods and caveats about the potential dangers of producing invalid adjustment sets in the presence of hidden variables and lack of graphical information. In response to this, the study proposed a novel graph-free method, CMIO, which enhances the efficiency of existing methods. The method is based on Mixed Integer Optimization and equipped with a set of causal constraints, ensuring that the output is theoretically sound. The study compares the performance of CMIO with other state-of-the-art methods and shows that CMIO outperforms them in terms of validity and efficiency. Finally, the study revisits the scenario in the absence of causal sufficiency causal graphs and presents a revised version of CMIO capable of handling such circumstances. The statistical properties of this revised method are rigorously demonstrated and offer a promising solution for efficient and valid covariate selection in causal inference. ## Appendix A In this appendix we prove propositions and theorems stated in the paper. ### Proof of Proposition 1 Proof.: a. Let \(Z\in X\cup\mathbf{Z}\) that is connected to \(Y\). By Definition 2,we have \(Z\not\perp_{\mathcal{G}}Y|X\cup\mathbf{Z}\setminus Z\). From faithfulness assumption, we can deduce that \(Z\not\perp Y|X\cup\mathbf{Z}\setminus Z\), which shows that \(Z\) is a predictor from Definition 1. Similarly, let \(Z\in X\cup\mathbf{Z}\) be a predictor of \(Y\), then \(Z\not\perp Y|X\cup\mathbf{Z}\setminus Z\) and by Markovian property, \(Z\not\perp_{\mathcal{G}}Y|X\cup\mathbf{Z}\setminus Z\), which means \(Z\) is connected to \(Y\) in \(\mathbf{Z}\) by Definition 2. b. According to Theorem 3.4.1 of [8], \(\mathbf{T}(X,Y)\) is a valid adjustment set relative to \(X,Y\) in \(\mathcal{G}\). 
Suppose that we have two valid adjustment sets \(\mathbf{T}(X,Y)\) and \(\mathbf{Z}_{1}\), where \(\mathbf{Z}_{1}\in\mathbf{Z}\), then clearly it can be deduced from \(Y\perp\mathbf{Z}\setminus\mathbf{T}(X,Y)|X\cup\mathbf{T}(X,Y)\) that \(Y\perp\mathbf{Z}_{1}\setminus\mathbf{T}(X,Y)|X\cup\mathbf{T}(X,Y)\) since \(\mathbf{Z}_{1}\in\mathbf{Z}\). On the other hand, \(\forall S\in\mathbf{Z}_{1}\setminus\mathbf{T}(X,Y)\) it must satisfy wither \(S\perp X|\mathbf{Z}_{1}\) or \(S\perp Y|\mathbf{Z}_{1}\) as \(\mathbf{Z}_{1}\) being a valid adjustment set. Furthermore, \(S\not\perp Y|\mathbf{Z}_{1}\) due to \(S\in\mathbf{T}(X,Y)\), indicating \(S\) is d-adjacent to \(Y\). Hence, we must have \(S\perp X|\mathbf{Z}_{1}\). By Theorem 3.4 of [2], combining the two conditions above, we can obtain that the asymptotic variance of causal effect estimators satisfy \(:\!\mathbf{var}\big{(}\beta_{\mathbf{X}_{\mathbf{Z}},\mathbf{Z}_{1}}\big{)} \geq\mathbf{var}\big{(}\beta_{\mathbf{X}_{\mathbf{T}},\mathbf{T}}\big{)}\)\(\Box\) c. First, we notice that \(\mathbf{T}(X,Y)\) is a subset of predictors of \(Y\). Suppose that \(\exists U\in\) predictors of \(Y\) with \(U\notin\mathbf{T}(X,Y)\), by uniqueness of the least squares regression, \(U\) is not a parent of \(Y\) in \(\mathcal{G}\), otherwise its coefficient would be non-zero and belongs to \(\mathbf{T}(X,Y)\). Hence, we can find a subset of parents of \(Y\) in \(\mathbf{V}\), say \(\mathbf{W}\) such that \(Y\perp U|X\cup\mathbf{W}\). Clearly, \(\exists T\in\mathbf{T}(X,Y),T\notin\mathbf{W}\) that is d-separated with \(Y\) by some of \(\mathbf{W}\), contradicting our assumption that no two members of the predictors of \(Y\) are not connected in \(\mathbf{Z}\cup\mathbf{W}\). Therefore, every member of predictors of \(Y\) belongs to \(\mathbf{T}(X,Y)\). Hence, predictors of \(Y\) are exactly those of \(\mathbf{T}(X,Y)\) and are valid. ### Proof of Proposition 2 Proof.: Based on the uniqueness of the least squares regression, \(Y=\alpha X+\mathbf{T}(X,Y)\beta_{\mathbf{T}(X,Y)}+\mathbf{0}\mathbf{S}+\varepsilon\) where \(\mathbb{E}(\varepsilon(X,\mathbf{Z}))=\mathbf{0}\), \(\mathbf{Z}=\mathbf{T}(X,Y)\cup\mathbf{S}\) Then we can also write \(Y=\alpha X+\mathbf{T}(X,Y)\beta_{\mathbf{T}(X,Y)}+\varepsilon\) where \(\mathbb{E}(\varepsilon(X,\mathbf{T}(X,Y)))=\mathbf{0}\). Hence \(\mathbb{E}(\beta_{\mathbf{X}_{\mathbf{T}}})=\mathbb{E}(\beta_{\mathbf{X}_{ \mathbf{T}},\mathbf{Z}})=\tau_{yx}\)\(\Box\) ### Proof of Proposition 3 Proof.: As Definition 3.12 in [2] states, \(\mathbf{O}(X,Y)\) \[=\mathbf{PA}(\mathcal{G},\mathbf{CN}(\mathcal{G},X,Y))\setminus(X \cup\mathbf{DE}(\mathcal{G},\mathbf{CN}(\mathcal{G},X,Y)))\] \[=\mathbf{PA}(Y)\setminus X\)\(\Box\) ### Proof of Proposition 4 Proof.: Since the \(\mathbf{Z},X,Y\) follows a structural equation model depicted in Section 2.3, we know that \(Y=\beta\mathbf{PA}(\mathcal{G},Y)+\varepsilon\) where \(\varepsilon\perp\mathbf{PA}(\mathcal{G},Y)\). Along with the uniqueness of the least squares regression, \(\mathbf{PA}(\mathcal{G},Y)\) is the unique solution to (1). \(\Box\) ### Proof of Proposition 5 Proof.: Since \(Y\perp\mathbf{Z}\setminus\mathbf{Z}^{\prime}|X\cup\mathbf{Z}^{\prime},\mathbf{ Z}^{\prime}\) is a valid adjustment set relative to \(X,Y\) by Theorem 3.4.1 of [8]. 
By Theorem 3.4 of [2], two valid adjustment sets \(\mathbf{Z}^{\prime}=\mathbf{S}\cup\mathbf{T}(X,Y)\) and \(\mathbf{Z}\) have that \(Y\perp\mathbf{Z}\setminus\mathbf{Z}^{\prime}|X\cup\mathbf{T}(X,Y)\), which is given by definition of \(\mathbf{Z}^{\prime}\) as well as \(X\perp\emptyset|\mathbf{Z}\), then the asymptotic variance of causal effect estimators satisfy \(:\!\mathbf{var}\big{(}\beta_{\mathbf{X}_{\mathbf{Z}},\mathbf{Z}}\big{)}\geq \mathbf{var}\big{(}\beta_{\mathbf{Y}_{\mathbf{X}},\mathbf{T}}\big{)}\)\(\Box\) ### Proof of Theorem 1 Since we aim to tackle the optimisation problem with objective being a non-negative convex function on constricted support, we hereby adapt the same denotation displayed in [16], where we call the convex function \(\min||Y-\beta(X,\mathbf{Z}^{T})^{T}||_{2}^{2}\) as \(g(\beta)\). By Proposition 6(a) of [16], under the constraint on \(||\beta||_{0}\leq k\) of a given fixed constant \(k\), MIO algorithm[16] would produce sequence of estimators \(\beta_{m},m=\{1,2,3,...\}\) in its procedure, which satisfies the following property: \(g(\beta_{m})\) decreases and converges with \(g(\beta_{m})-g(\beta_{m+1})\geq\frac{L-1}{2}||\beta_{m}-\beta_{m+1}||_{2}^{2}\), where \(L\) is a parameter in MIO algorithm while \(l=\lambda_{max}(X,\mathbf{Z})^{T}(X,\mathbf{Z})\), the largest eigenvalue of \((X,\mathbf{Z})^{T}(X,\mathbf{Z})\). Therefore, for any \(L\geq l\), we will get \(\beta_{m}\) converges. Furthermore, Proposition 6(c) guarantee that after finite iterations, MIO algorithm would arrive at a converged \(\beta_{m}\). Therefore, we have shown that through the MIO algorithm, the unique solution to the optimizing problem \(\min||Y-\beta(X,\mathbf{Z}^{T})^{T}||_{2}^{2}\) s.t. \(||B||_{0}\leq k\) is obtained. The remainder of the Algorithm 1 aims to resolve the causal constraints. Suppose that the output of Algorithm 1 is \(\mathbf{Z}^{\prime}\). Then the cardinality of \(\mathbf{Z}^{\prime}\) can't be smaller than that of \(\mathbf{Pa}(Y)\). Otherwise, \(\exists Z_{1}\in\mathbf{Pa}(Y),\notin\mathbf{Z}^{\prime}\) and \(Z_{1}\not\perp Y|A,\forall A\subseteq\mathbf{Z}^{m-1}\). Clearly, \(g(\beta_{m},Z_{1})\leq g(\beta_{m+1})\), contradicting with optimality of MIO algorithm. Hence, \(\mathbf{Z}^{\prime}\) must contain all of \(\mathbf{Pa}(Y)\). But in the procedure of Algorithm 1, any \(Z\notin\mathbf{Pa}(Y)\) has \(Z\perp Y|\mathbf{Pa}(Y)\), which means is pruned out already. Hence, we have established that \(\mathbf{Z}^{\prime}=\mathbf{Pa}(Y)\)
2307.00913
Generation of narrow beams of ultrarelativistic positrons (electrons) in the resonant strong electromagnetic field-assisted Breit-Wheeler process
The resonant external field-assisted Breit-Wheeler process (Oleinik resonances) for strong electromagnetic fields with intensities less than the critical Schwinger field has been theoretically studied. The resonant kinematics has been studied in detail. The case of high-energy initial gamma quanta and emerging ultrarelativistic electron-positron pairs is studied. The resonant differential cross section is obtained. The generation of narrow beams of ultrarelativistic positrons (for Channel A) and electrons (for Channel B) is predicted with a probability significantly exceeding corresponding to the non-resonant process.
S. P. Roshchupkin, V. D. Serov, V. V. Dubov
2023-07-03T10:16:39Z
http://arxiv.org/abs/2307.00913v1
Generation of narrow beams of ultrarelativistic positrons (electrons) in the resonant strong electromagnetic field-assisted Breit-Wheeler process ###### Abstract The resonant external field-assisted Breit-Wheeler process (Oleinik resonances) for strong electromagnetic fields with intensities less than the critical Schwinger field has been theoretically studied. The resonant kinematics has been studied in detail. The case of high-energy initial gamma quanta and emerging ultrarelativistic electron-positron pairs is studied. The resonant differential cross section is obtained. The generation of narrow beams of ultrarelativistic positrons (for Channel A) and electrons (for Channel B) is predicted with a probability significantly exceeding corresponding to the non-resonant process. ## 1 Introduction Over the past several decades, there has been significant interest in studying the processes of quantum electrodynamics (QED) in external electromagnetic fields (see, for example, reviews [1]-[7], monographs [8]-[10] and articles [11]-[53]). This is mainly associated with the appearance of lasers with high radiation intensities and beams of small transverse dimensions [11]-[18]. An important feature of high-order by the fine structure constant QED processes in an external field is the potential for their resonant occurrence, where virtual intermediate particles enter the mass shell. Such resonances were first considered by Oleinik [19, 20]. Under resonance conditions, the conservation laws of energy and momentum are satisfied for intermediate particles in an external field. As a result, second-order processes by the fine structure constant effectively reduce into two sequential first-order processes. A detailed discussion of resonant processes is presented in reviews [2, 4], monographs [8, 9, 10], as well as recent articles [29]-[34]. It is important to note that the probability of resonant processes can significantly exceed the corresponding probabilities of non-resonant processes. The process of electron-positron pair production by two gamma quanta was first considered by Breit and Wheeler [35]. Currently, there is a significant number of works devoted to the study of the Breit-Wheeler process in an external electromagnetic field (see, for example, [36]-[47]). It should be noted that a distinction should be made between the external field-stimulated Breit-Wheeler process (a first-order process with respect to the fine structure constant) and the external field-assisted Breit-Wheeler process (a second-order process with respect to the fine structure constant). In this paper, Oleinik's resonances for the external strong field-assisted Breit-Wheeler process will be investigated. It should be noted that in a weak field, this process was considered in the article [47]. It is important to note that under the conditions of resonance and the absence of interference between different reaction channels, the original second-order process effectively reduces to two first-order processes: the external field-stimulated Breit-Wheeler process and the external field-stimulated Compton effect [47]. The main parameter for describing the Breit-Wheeler process in the field of a plane electromagnetic wave is the classical relativistic-invariant parameter \[\eta=\frac{eF\lambd{\lambda}}{mc^{2}}, \tag{1}\] numerically equal to the ratio of the work of the field on the wavelength to the rest energy of the electron. 
Here \(e\) and \(m\) are the charge and mass of the electron, \(F\) and \(\bar{\lambda}=c/\omega\) are the electric field strength and wavelength, and \(\omega\) is the frequency of the wave [1]. In this paper, we consider the resonant strong electromagnetic field-assisted Breit-Wheeler process for high-energy gamma quanta with energies \(\hbar\omega_{1,2}\lesssim 10^{2}\) GeV, so that the electron-positron pair produced in the field of the wave is ultrarelativistic \[\hbar\omega_{1,2}\gg mc^{2},\quad E_{\pm}\gg mc^{2}. \tag{2}\] Here \(\hbar\omega_{1,2}\) and \(E_{\pm}\) are the energies of the initial gamma quanta and of the final positron or electron. We will also assume that the magnitude of the classical parameter \(\eta\) is bounded from above by the condition: \[\eta\ll\eta_{max},\quad\eta_{max}=\min\left(\frac{E_{\pm}}{mc^{2}}\right). \tag{3}\] Let us estimate the maximum intensity of the electric field in the wave. For electron-positron pair energies \(E_{\pm}\lesssim 10^{2}\) GeV, it follows from equation (3) that \(\eta\ll\eta_{max}\sim 10^{5}\), or for the field strength we have \(F\ll F_{max}\sim 10^{15}\) Vcm\({}^{-1}\) (\(I\ll I_{\rm max}\sim 10^{28}\) Wcm\({}^{-2}\)). Thus, the problem involves sufficiently high intensities of the electromagnetic wave. However, these fields must be smaller than the Schwinger critical field \(F_{*}\approx 1.3\cdot 10^{16}\) Vcm\({}^{-1}\)[5, 50]. In the following, the relativistic system of units is used: \(c=\hbar=1\). ## 2 Amplitude of the process Let us consider this process in the field of a plane circularly polarized wave propagating along the z axis: \[A(\varphi)=\frac{F}{\omega}\left(e_{x}\cos\varphi+\delta e_{y}\sin\varphi\right),\quad\varphi=(kx)=\omega(t-z),\quad\delta=\pm 1. \tag{4}\] Here \(e_{x},e_{y}\) are the polarization 4-vectors of the external field, which have the following properties: \(e_{x}=(0,\mathbf{e_{x}}),\quad e_{y}=(0,\mathbf{e_{y}}),\quad e_{x}e_{y}=0,\quad(e_{x})^{2}=(e_{y})^{2}=-1.\) The external field-assisted Breit-Wheeler process is characterized by two Feynman diagrams (Fig. 1). Figure 1: Feynman diagrams of electron–positron-pair production by two gamma quanta in an external field. External double lines correspond to Volkov functions of the electron or positron, wavy lines correspond to wave functions of the initial gamma quanta, and internal double lines correspond to the Green's function of the electron in the field of a plane electromagnetic wave. The amplitude of the considered process is written as follows \[S_{if}=ie^{2}\iint d^{4}x_{1}d^{4}x_{2}\overline{\Psi}_{p_{-}}(x_{1}|A)\hat{A}_{1}(x_{1};k_{1})G(x_{2}x_{1}|A)\hat{A}_{2}(x_{2};k_{2})\Psi_{-p_{+}}(x_{2}|A)\,+\,(k_{1}\leftrightarrow k_{2}), \tag{5}\] where \(k_{1,2}=(\omega_{1,2},\mathbf{k_{1,2}})\) are the 4-momenta of the initial gamma quanta and \(p_{\pm}=(E_{\pm},\mathbf{p}_{\pm})\) are the 4-momenta of the final electron and positron. Here and below, the notation for the convolution of a 4-vector with the Dirac gamma matrices is used: \(\hat{A}_{1,2}\equiv\gamma_{\mu}A_{1,2}^{\mu},\quad\mu=0,1,2,3\). The 4-potentials of the initial gamma quanta \(A_{j}\) in expression (5) are determined by the functions \[A_{j}(x;k_{j})=\sqrt{\frac{2\pi}{\omega_{j}}}\varepsilon_{j}e^{-ik_{j}x},\quad j=1,2, \tag{6}\] where \(\varepsilon_{j}\) are the polarization 4-vectors of the initial gamma quanta. 
In the amplitude (5), the electron-positron pair corresponds to the Volkov functions [48, 49]: \[\Psi_{p}(x|A)=\mathfrak{J}_{p}(x)\frac{u_{p}}{\sqrt{2E}},\quad\mathfrak{J}_{p}(x)=\left[1+\frac{e}{2(pk)}\hat{k}\hat{A}(kx)\right]e^{iS_{p}(x)}, \tag{7}\] \[S_{p}(x)=-(px)-\frac{e}{(kp)}\int_{0}^{kx}d\varphi[pA(\varphi)-\frac{e}{2}A^{2}(\varphi)], \tag{8}\] where \(u_{p}\) is the Dirac bispinor. The intermediate state in the amplitude (5) corresponds to the Green's function of the electron in the field of a plane wave \(G(x_{2}x_{1}|A)\)[51]: \[G(x_{2}x_{1}|A)=\int\frac{d^{4}p}{(2\pi)^{4}}\mathfrak{J}_{p}(x_{2})\frac{\hat{p}+m}{p^{2}-m^{2}}\mathfrak{J}_{p}(x_{1}). \tag{9}\] After simple transformations, the amplitude (5) can be represented as follows: \[S_{if}=\sum_{l=-\infty}^{+\infty}S_{l}, \tag{10}\] where the partial amplitude \(S_{l}\) corresponds to the absorption or emission of \(|l|\) photons of the external wave. For Channel A, the partial amplitude can be represented in the following form: \[S_{l}=\frac{i\pi e^{2}(2\pi)^{4}e^{-id}}{\sqrt{\widetilde{E}_{-}\widetilde{E}_{+}\omega_{1}\omega_{2}}}\left[u_{p_{-}}M_{l}v_{p_{+}}\right]\delta^{(4)}\left(k_{1}+k_{2}-\widetilde{p}_{-}-\widetilde{p}_{+}-lk\right). \tag{11}\] Here \(d\) is a phase independent of the summation indices, and \(M_{l}\) is the matrix determined by the expression \[M_{l}=\varepsilon_{1\mu}\varepsilon_{2\nu}\sum_{r=-\infty}^{+\infty}K_{l+r}^{\mu}(\widetilde{p}_{-},\widetilde{q}_{-})\frac{\hat{q}_{-}+m}{\widetilde{q}_{-}^{2}-m_{*}^{2}}K_{-r}^{\nu}(\widetilde{q}_{-},-\widetilde{p}_{+}),\quad\mu,\nu=0,1,2,3. \tag{12}\] In relation (12), the functions \(K_{l+r}^{\mu}\) and \(K_{-r}^{\nu}\) have the following form: \[K_{n}^{\mu^{\prime}}(\widetilde{p}^{\prime},\widetilde{p})=a^{\mu^{\prime}}L_{n}(\widetilde{p}^{\prime},\widetilde{p})+b_{-}^{\mu^{\prime}}L_{n-1}+b_{+}^{\mu^{\prime}}L_{n+1}. \tag{13}\] Here, the matrices \(a^{\mu^{\prime}}\) and \(b_{\pm}^{\mu^{\prime}}\) are given by \[a^{\mu^{\prime}}=\gamma^{\mu^{\prime}}+\frac{m^{2}\hat{k}}{2(k\widetilde{p^{\prime}})(k\widetilde{p})}k^{\mu^{\prime}},\quad b_{\pm}^{\mu^{\prime}}=\frac{1}{4}\eta m\left(\frac{\hat{e}_{\pm}\hat{k}\gamma^{\mu^{\prime}}}{(k\widetilde{p^{\prime}})}+\frac{\gamma^{\mu^{\prime}}\hat{k}\hat{e}_{\pm}}{(k\widetilde{p})}\right), \tag{14}\] \[e_{\pm}\equiv e_{x}\pm ie_{y},\quad\mu^{\prime}=\mu,\nu,\quad n=l+r,-r,\quad\widetilde{p}=-\widetilde{p}_{+},\widetilde{q}_{-},\quad\widetilde{p^{\prime}}=\widetilde{q}_{-},\widetilde{p}_{-}. \tag{15}\] In relations (12) and (13), the special functions \(L_{n}\)[3] appear, which in the case of circular polarization of the wave can be represented using Bessel functions with integer indices \[L_{n}(\widetilde{p}^{\prime},\widetilde{p})=\exp(-in\chi_{\widetilde{p}^{\prime}\widetilde{p}})J_{n}(\gamma_{\widetilde{p}^{\prime}\widetilde{p}}), \tag{16}\] where we have denoted \[\gamma_{\widetilde{p}^{\prime}\widetilde{p}}=m\eta\sqrt{-Q_{\widetilde{p}^{\prime}\widetilde{p}}^{2}},\quad\tan\chi_{\widetilde{p}^{\prime}\widetilde{p}}=\delta\frac{(Q_{\widetilde{p}^{\prime}\widetilde{p}}e_{y})}{(Q_{\widetilde{p}^{\prime}\widetilde{p}}e_{x})},\quad Q_{\widetilde{p}^{\prime}\widetilde{p}}=\frac{\widetilde{p}^{\prime}}{(p^{\prime}k)}-\frac{\widetilde{p}}{(pk)}. 
\tag{17}\] In expressions (11) and (12), \(\widetilde{p}_{\pm}=(\widetilde{E}_{\pm},\widetilde{\mathbf{p}}_{\pm})\) and \(\widetilde{q}_{-}\) are the 4-quasimomenta of the electron (positron) and of the intermediate electron, and \(m_{*}\) is the effective mass of the electron in the field of a circularly polarized wave (4) [32]: \[\widetilde{q}_{-}=k_{2}+rk-\widetilde{p}_{+}, \tag{18}\] \[\widetilde{p}_{\pm}=p_{\pm}+\eta^{2}\frac{m^{2}}{2(kp_{\pm})}k,\quad\widetilde{q}_{-}=q_{-}+\eta^{2}\frac{m^{2}}{2(kq_{-})}k, \tag{19}\] \[\widetilde{p}_{\pm}^{2}=m_{*}^{2},\quad m_{*}=m\sqrt{1+\eta^{2}}. \tag{20}\] ## 3 The resonant kinematics Under resonance conditions, both an electron and a positron can be intermediate particles. Therefore, instead of the two Feynman diagrams of the non-resonant case (see Fig. 1), under resonance conditions we have four Feynman diagrams (see Fig. 2): Channels A and B, as well as Channels A' and B', which are obtained from Channels A and B by rearranging the initial gamma quanta (\(k_{1}\leftrightarrow k_{2}\)). Under resonance conditions, each channel effectively splits into two processes of first order in the fine structure constant: the external field-stimulated Breit-Wheeler process (EFSBWP) and the external field-stimulated Compton effect (EFSCE), with the intermediate electrons and positrons entering the mass shell: \[\widetilde{q}_{-}^{2}=m_{*}^{2},\quad\widetilde{q}_{+}^{2}=m_{*}^{2}. \tag{21}\] Further consideration will be carried out for resonant Channels A and B (see Fig. 2). It is important to emphasize that the laws of conservation of energy-momentum for the intermediate processes of resonant Channels A and B have the form: \[\text{EFSBWP}:\qquad k_{2}+rk=\widetilde{q}_{\mp}+\widetilde{p}_{\pm}\quad r=1,2,3\dots; \tag{22}\] \[\text{EFSCE}:\qquad k_{1}+\widetilde{q}_{\mp}=\widetilde{p}_{\mp}+r^{\prime}k\quad r^{\prime}=1,2,3\dots\quad(r^{\prime}=l+r). \tag{23}\] Figure 2: Feynman diagrams of the resonant electron–positron-pair production by two gamma quanta in an external field, Channels A and B; for Channels A’ and B’ \(k_{1}\leftrightarrow k_{2}\). Since the problem considers high-energy initial gamma quanta and ultrarelativistic energies of the final electron-positron pair (2), the momenta of the initial and final particles lie within a narrow cone directed far away from the direction of wave propagation: \[\theta_{j\pm}\equiv\angle({\bf k}_{j},{\bf p}_{\pm})\ll 1,\quad\theta_{i}\equiv\angle({\bf k}_{1},{\bf k}_{2})\ll 1, \tag{24}\] \[\theta\equiv\angle({\bf p}_{\pm},{\bf k})\sim 1,\quad\theta_{j}\equiv\angle({\bf k}_{j},{\bf k})\sim 1,\quad j=1,2;\quad\theta\approx\theta_{1}\approx\theta_{2}. \tag{25}\] Let us note that under conditions (2), (3), the expression for the positron (electron) quasienergy can be simplified: \[\widetilde{E}_{\pm}=E_{\pm}\left[1+\frac{1}{4\sin^{2}\frac{\theta_{\pm}}{2}}\left(\frac{m\eta}{E_{\pm}}\right)^{2}\right]\approx E_{\pm}. \tag{26}\] Let us determine the resonance energy of the positron (electron) at the second vertex (see Fig. 2). 
Taking into account relations (2), (3), (21), (24), from the 4-momentum conservation law (22) for the external field-stimulated Breit-Wheeler process we obtain the resonance energies of the positron (for Channel A) or electron (for Channel B) in units of the total energy of the initial gamma quanta: \[x_{j^{\prime}(r)}=\frac{\omega_{2}}{2\omega_{i}(\varepsilon_{2BW(r)}+\delta_{2j^{\prime}}^{2})}\left[\varepsilon_{2BW(r)}\pm\sqrt{\varepsilon_{2BW(r)}(\varepsilon_{2BW(r)}-1)-\delta_{2j^{\prime}}^{2}}\right],\quad j^{\prime}=+,-. \tag{27}\] Here we have denoted: \[x_{\pm(r)}=\frac{E_{\pm}(r)}{\omega_{i}},\quad\omega_{i}=\omega_{1}+\omega_{2},\quad\delta_{2\pm}=\frac{\omega_{2}}{2m_{*}}\theta_{2\pm}. \tag{28}\] In this case, the ultrarelativistic parameter \(\delta_{2\pm}\), which determines the outgoing angle of the positron or electron, is contained within the interval \[0\leq\delta_{2+}^{2}\leq\delta_{2+max}^{2},\quad\delta_{2+max}^{2}=\varepsilon_{2BW(r)}(\varepsilon_{2BW(r)}-1). \tag{29}\] It is important to emphasize that in equation (27), the quantity \(\varepsilon_{2BW(r)}\) is bounded from below by unity \[\varepsilon_{2BW(r)}=r\varepsilon_{2BW}\geq 1,\quad\varepsilon_{2BW}=\frac{\omega_{2}}{\omega_{BW}}, \tag{30}\] where \(\omega_{BW}\) is the characteristic quantum energy of the external field-stimulated Breit-Wheeler process: \[\omega_{BW}=\frac{m_{*}^{2}}{\omega\sin^{2}\frac{\theta}{2}}=\left\{\begin{array}[]{ccc}174\,{\rm GeV}&{\rm if}&\omega=3\,{\rm eV},\ I=1.675\cdot 10^{19}\,{\rm Wcm}^{-2}\\ 5.22\,{\rm GeV}&{\rm if}&\omega=0.1\,{\rm keV},\ I=1.861\cdot 10^{22}\,{\rm Wcm}^{-2}\\ 52.2\,{\rm MeV}&{\rm if}&\omega=10\,{\rm keV},\ I=1.861\cdot 10^{26}\,{\rm Wcm}^{-2}\end{array}\right. \tag{31}\] When estimating the value of the characteristic energy, frequencies of electromagnetic waves in the optical and X-ray ranges were used in equation (31), as well as the values of the parameters \(\eta=1\) and \(\theta=\pi\). It is worth noting that the ratio between the initial energy of the gamma quantum and the characteristic energy \(\omega_{BW}\) determines the value of the parameter \(\varepsilon_{2BW}\) (30), which can be either greater or less than unity. This significantly affects the number of photons absorbed in the EFSBWP. Specifically, if the initial energy of the gamma quantum is less than the characteristic energy, then from equations (30) and (31) it follows that this process occurs only if the number of absorbed wave photons is not less than a certain minimum value \(r_{min}\), which is greater than unity: \[r\geq r_{min}=\lceil\varepsilon_{2BW}^{-1}\rceil\quad(\omega_{2}<\omega_{BW}). \tag{32}\] If the initial energy of the gamma quantum is greater than the characteristic energy, then this process takes place already when one photon of the wave is absorbed: \[r\geq 1\quad(\omega_{2}\geq\omega_{BW}). \tag{33}\] Thus, the resonant energy of a positron (for Channel A) or an electron (for Channel B) is determined by two parameters: the corresponding outgoing angle of the positron (\(\delta^{2}_{2+}\)) or electron (\(\delta^{2}_{2-}\)), and the parameter \(\varepsilon_{2BW(r)}\). At the same time, for a fixed parameter \(\varepsilon_{2BW(r)}\), for each outgoing angle of the positron or electron there are two possible energies (see equation (27)). 
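To make the orders of magnitude in equations (27)-(31) concrete, the short Python sketch below (illustrative only, not part of the original work; the particular energies \(\omega_{1}=10\) GeV, \(\omega_{2}=180\) GeV are simply the optical case discussed in the text) evaluates the characteristic energy \(\omega_{BW}\) for \(\eta=1\), \(\theta=\pi\) and the resonant energy fraction (27).

```python
# Illustrative numerical check (not the authors' code): characteristic
# Breit-Wheeler energy (31) and resonant energy fractions (27).
import numpy as np

m = 0.511e6                              # electron rest energy, eV
eta = 1.0                                # wave intensity parameter
m_star2 = (1.0 + eta**2) * m**2          # effective mass squared, eq. (20), eV^2
theta = np.pi                            # head-on geometry used in (31)

def omega_BW(omega_wave_eV):
    """Characteristic quantum energy of the field-stimulated Breit-Wheeler process, eq. (31)."""
    return m_star2 / (omega_wave_eV * np.sin(theta / 2.0)**2)

for w in (3.0, 100.0, 1.0e4):            # optical and X-ray photon energies, eV
    print(f"omega = {w:>7.1f} eV  ->  omega_BW = {omega_BW(w)/1e9:.3g} GeV")

def x_resonant(delta2, eps_r, omega1, omega2, sign=+1):
    """Resonant energy fraction x_{+-(r)} of eq. (27); delta2 is the angle parameter (28)."""
    omega_i = omega1 + omega2
    root = np.sqrt(eps_r * (eps_r - 1.0) - delta2)
    return omega2 / (2.0 * omega_i * (eps_r + delta2)) * (eps_r + sign * root)

omega1, omega2 = 10e9, 180e9             # eV, the optical case of Fig. 3a
eps_BW = omega2 / omega_BW(3.0)          # ~1.03 for omega = 3 eV
print(x_resonant(0.0, 1 * eps_BW, omega1, omega2))    # ~0.56 ("+" branch, r = 1)
print(x_resonant(0.12, 2 * eps_BW, omega1, omega2))   # ~0.76 (r = 2)
```

The printed characteristic energies reproduce the values 174 GeV, 5.22 GeV and 52.2 MeV quoted in (31), and the two energy fractions agree with the values 0.56 and 0.76 that appear later in Table 1.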
Figure 3 shows the dependence of the energy of the positron (for Channel A) or electron (for Channel B) (see equations (27)-(30)) for the external field-stimulated Breit-Wheeler process with absorption of one and two photons of the wave at different frequencies, intensities of the electromagnetic wave (equation (31)), and various initial gamma quanta energies. From this figure, it follows that the interval for the outgoing angle of the positron (electron) significantly depends on the number of absorbed photons of the wave. Additionally, for the same outgoing angle, there are two possible particle energies, except for the maximum outgoing angle. Let us now determine the resonant electron (positron) energy at the first vertex (see Fig. 2). Taking into account equations (2), (3), (21), and (24), from the 4-momentum conservation law (equation (23)) of the external field-stimulated Compton effect, we obtain the resonant energies of the electron (for Channel A) or the positron (for Channel B) in terms of the total energy of the initial gamma quanta: \[x_{\mp(r^{\prime})}=\frac{\omega_{1}}{2\omega_{i}(\varepsilon_{1C(r^{\prime})}-\delta^{2}_{1\mp})}\left[\varepsilon_{1C(r^{\prime})}+\sqrt{\varepsilon^{2}_{1C(r^{\prime})}+4(\varepsilon_{1C(r^{\prime})}-\delta^{2}_{1\mp})}\right]. \tag{34}\] Here we have denoted: \[x_{\mp(r^{\prime})}=\frac{E_{\mp}(r^{\prime})}{\omega_{i}},\quad\delta_{1\mp}=\frac{\omega_{1}}{m_{*}}\theta_{1\mp}. \tag{35}\] \[\varepsilon_{1C(r^{\prime})}=r^{\prime}\varepsilon_{1C},\quad\varepsilon_{1C}=\frac{\omega_{1}}{\omega_{C}},\quad\omega_{C}=\frac{1}{4}\omega_{BW}. \tag{36}\] Here \(\omega_{C}\) is the characteristic quantum energy of the external field-stimulated Compton effect. This energy is four times less than the characteristic energy for the external field-stimulated Breit-Wheeler process. Additionally, it should be noted that the ultrarelativistic parameter \(\delta^{2}_{1\mp}\), which determines the outgoing angle of the electron or positron, should not take values close to \(\varepsilon_{1C(r^{\prime})}\), in order to satisfy the condition \(x_{\mp(r^{\prime})}<1\) (see equation (34)). It should also be noted that there are no limitations on the parameter \(\varepsilon_{1C(r^{\prime})}\) for the external field-stimulated Compton effect. Therefore, this process occurs for any number of emitted photons of the wave \(r^{\prime}\geq 1\). Figure 3: The energy of the positron (Channel A) or electron (Channel B) (27)-(30) for the external field-stimulated Breit-Wheeler process with absorption of one and two photons of the wave at different frequencies and intensities of the electromagnetic wave (31). Solid lines correspond to the ”+” and dashed lines correspond to the ”–” signs before the square root in (27). The energies of the initial gamma quanta: Fig.3a) – \(\omega_{1}=10\mathrm{GeV},\omega_{2}=180\mathrm{GeV}\); Fig.3b) – \(\omega_{1}=0.5\mathrm{GeV},\omega_{2}=7\mathrm{GeV}\); Fig.3c) – \(\omega_{1}=10\mathrm{MeV},\omega_{2}=80\mathrm{MeV}\). Furthermore, we will assume that the energies of the initial gamma quanta, within the framework of conditions (2), satisfy the additional conditions: \[\omega_{2}>\omega_{BW},\quad\omega_{1}\ll\omega_{BW}. \tag{37}\] Conditions (37) mean that the parameter \(\varepsilon_{2BW}>1\), while the parameter \(\varepsilon_{1BW}\ll 1\) (see equations (30) and (31)). 
Therefore, in Channels A and B, the external field-stimulated Breit-Wheeler process occurs with a number of absorbed photons of the wave \(r\geq 1\), whereas for the exchange resonant diagrams A' and B' the number of absorbed photons is \(r\geq r_{min}=\lceil\varepsilon_{1BW}^{-1}\rceil\gg 1\). Thus, within the framework of conditions (37), resonant Channels A' and B' are suppressed, and we will only consider the two resonant Channels A and B (see Fig. 2). It is also important to consider that for Channel A, the resonant energy of the positron is determined by its outgoing angle relative to the momentum of the second gamma quantum in the EFSBWP, while the resonant energy of the electron is determined by its outgoing angle relative to the momentum of the first gamma quantum in the EFSCE. For Channel B, we have the opposite situation, where the energy of the electron is determined by its outgoing angle relative to the momentum of the second gamma quantum, and the energy of the positron is determined by its outgoing angle relative to the momentum of the first gamma quantum (see Fig. 2). Therefore, Channels A and B are distinguishable and do not interfere with each other. It is important to note that under resonance conditions (21), the resonant energies of the positron and electron for each reaction channel are determined by different physical processes: the external field-stimulated Breit-Wheeler process (27) and the external field-stimulated Compton effect (34). At the same time, the energies of the electron-positron pair are related to each other by the general law of conservation of energy \[x_{+}+x_{-}\approx 1\quad(x_{\pm}=\frac{E_{\pm}}{\omega_{i}}). \tag{38}\] It should be noted that in equation (38) we have neglected a small correction term \(|l|\omega/\omega_{i}\ll 1\). Taking into account equations (27) and (34), as well as the law of conservation of energy (38) for Channels A and B, we obtain the following equations relating the outgoing angles of the positron and electron: \[\delta_{1\mp}^{2}=\varepsilon_{1C(r^{\prime})}-\frac{(\omega_{1}/\omega_{i})}{(1-x_{\pm(r)})}\left[\varepsilon_{1C(r^{\prime})}+\frac{(\omega_{1}/\omega_{i})}{(1-x_{\pm(r)})}\right]. \tag{39}\] Here the upper (lower) sign corresponds to Channel A (B). In equation (39), the left side represents the ultrarelativistic parameter associated with the outgoing angle of the electron (positron) relative to the momentum of the first gamma quantum, and the right side is a function of the ultrarelativistic parameter \(\delta_{2\pm}\), associated with the outgoing angle of the positron (electron) relative to the momentum of the second gamma quantum. Under given parameters \(\varepsilon_{1C(r^{\prime})}\) and \(\varepsilon_{2BW(r)}\), equation (39) uniquely determines the outgoing angles of the electron and positron, and therefore their resonant energies (see Fig. 3 and Fig. 4). Figure 4 presents the dependence of the energy of the electron (for Channel A) or positron (for Channel B) (34), (39) for the external field-stimulated Compton effect at different frequencies, intensities of the electromagnetic wave (31), and initial gamma quanta energies, under the condition of energy conservation at the first and second vertices (38). The graphs are given for different numbers of absorbed (\(r\)) and emitted (\(r^{\prime}\)) photons of the wave. 
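As a quick consistency check of relations (27), (34), (38) and (39), the following illustrative Python sketch (not from the original paper; the parameters \(\eta=1\), \(\omega=3\) eV, \(\omega_{1}=10\) GeV, \(\omega_{2}=180\) GeV are simply the optical case used above) computes the positron energy from (27), the electron outgoing-angle parameter from (39) for Channel A, and then recovers the electron energy from (34); the two energy fractions sum to unity, as required by (38).

```python
# Self-consistency of eqs. (27), (34), (38), (39) for Channel A (illustrative sketch).
import numpy as np

omega1, omega2 = 10e9, 180e9     # eV
omega_i = omega1 + omega2
m = 0.511e6
m_star2 = 2.0 * m**2             # eta = 1, eq. (20)
omega_BW = m_star2 / 3.0         # eq. (31) with omega = 3 eV, theta = pi
omega_C = omega_BW / 4.0         # eq. (36)

def x_plus(delta2, r):
    """Resonant positron energy fraction, '+' branch of eq. (27), Channel A."""
    eps = r * omega2 / omega_BW
    return omega2 / (2*omega_i*(eps + delta2)) * (eps + np.sqrt(eps*(eps - 1) - delta2))

def delta1_sq(xp, rp):
    """Electron outgoing-angle parameter from eq. (39), Channel A (upper sign)."""
    eps = rp * omega1 / omega_C
    w = omega1 / omega_i / (1.0 - xp)
    return eps - w * (eps + w)

def x_minus(d1sq, rp):
    """Electron energy fraction from eq. (34)."""
    eps = rp * omega1 / omega_C
    return omega1 / (2*omega_i*(eps - d1sq)) * (eps + np.sqrt(eps**2 + 4*(eps - d1sq)))

xp = x_plus(0.0, 1)                  # resonant positron energy, r = 1
xm = x_minus(delta1_sq(xp, 1), 1)    # electron energy implied by (39) and (34)
print(xp, xm, xp + xm)               # the sum is ~1, as required by (38)
```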
Figure 4: The dependence of the energy of the electron (Channel A) or positron (Channel B) (34), (39) for the external field-stimulated Compton effect at different frequencies, intensities of the electromagnetic wave (31), and initial gamma quanta energies under the condition of energy conservation in the first and second vertices (38). Solid lines correspond to the ”+” and dashed lines correspond to the ”–” signs before the square root in expressions (27), (34), (39). The energies of the initial gamma quanta: Fig.4a), 4a’) – \(\omega_{1}=10\)GeV, \(\omega_{2}=180\)GeV; Fig.4b), 4b’) – \(\omega_{1}=0.5\)GeV, \(\omega_{2}=7\)GeV; Fig.4c), 4c’) – \(\omega_{1}=10\)MeV, \(\omega_{2}=80\)MeV. It is also worth noting the important case when the quantum parameter \(\varepsilon_{2BW(r)}\gg 1\). In this case, from the expression (27) with the "+" sign before the square root, the energy of the positron (Channel A) or electron (Channel B) approaches the energy of the highly energetic second gamma quantum: \[E_{\pm}\approx\omega_{2}\left[1-\frac{(1+\delta_{2\pm}^{2})}{4\varepsilon_{2BW(r)}}\right]\longrightarrow\omega_{2}\quad(\delta_{2\pm}^{2}\ll\varepsilon_{2BW(r)}). \tag{40}\] The expression with the "-" sign before the square root in equation (27) leads to the minimum energy of the positron or electron \(E_{\pm}\sim\omega_{2}/\varepsilon_{2BW(r)}\ll\omega_{2}\). However, this case is unlikely. Similarly, for the first gamma quantum, when the quantum parameter \(\varepsilon_{1C(r^{\prime})}\gg 1\), we obtain the energy of the electron (Channel A) or positron (Channel B) approaching the energy of the first gamma quantum: \[E_{\mp}\approx\omega_{1}\left[1-\frac{(1+\delta_{1\mp}^{2})}{\varepsilon_{1C(r^{\prime})}}\right]\longrightarrow\omega_{1}\quad(\delta_{1\mp}^{2}\ll\varepsilon_{1C(r^{\prime})}). \tag{41}\] Thus, if the quantum parameters \(\varepsilon_{1C(r^{\prime})}\) and \(\varepsilon_{2BW(r)}\) take large values, the resonant energies of the positron and electron tend towards the energies of the corresponding initial gamma quanta. ## 4 The resonant differential cross section Previously, it has been shown that under conditions (2), (3), and (37), the exchange Channels A' and B' are suppressed. In addition, Channels A and B are distinguishable and therefore do not interfere (see the text after equation (37)). It is also important to note that resonance processes with different numbers of absorbed and emitted wave photons correspond to significantly different probabilities and energies of the electron-positron pair. Therefore, they do not interfere either. Due to this, summation over all possible processes with absorption of \(r\) wave photons is not necessary in the amplitude (12): \[M_{rr^{\prime}}=\varepsilon_{1\mu}\varepsilon_{2\nu}K_{r^{\prime}}^{\mu}(\widetilde{p}_{-},\widetilde{q}_{-})\frac{\hat{q}_{-}+m}{\widetilde{q}_{-}^{2}-m_{*}^{2}}K_{-r}^{\nu}(\widetilde{q}_{-},-\widetilde{p}_{+}),\quad r^{\prime}=l+r. \tag{42}\] The resonant differential cross section for Channels A and B with unpolarized initial gamma quanta and an unpolarized final electron-positron pair is obtained from the amplitude (10), (11), (42) in a standard way [53]. 
After simple calculations, we obtain: \[d\sigma_{rr^{\prime}}=\frac{2m^{6}r_{e}^{2}}{\widetilde{E}_{-}\widetilde{E}_{+}m_{*}^{2}\delta_{\eta i}^{2}}\frac{K_{1\mp(r^{\prime})}P_{2\pm(r)}}{|\widetilde{q}_{\mp}^{2}-m_{*}^{2}|^{2}}\delta^{(4)}\left[k_{1}+k_{2}-\widetilde{p}_{-}-\widetilde{p}_{+}-(r^{\prime}-r)k\right]d^{3}\widetilde{p}_{-}d^{3}\widetilde{p}_{+}. \tag{43}\] Here, the upper (lower) sign corresponds to Channel A (B), and \(r_{e}=e^{2}/m\) is the classical electron radius. In obtaining the resonant differential cross-section (43), the resonant probability was divided by the flux density of the initial gamma quanta [53]: \[j=\frac{(k_{1}k_{2})}{\omega_{1}\omega_{2}}\approx\frac{m_{*}^{2}}{2\omega_{1}\omega_{2}}\delta_{\eta i}^{2},\quad\delta_{\eta i}^{2}\equiv\frac{\omega_{1}\omega_{2}}{m_{*}^{2}}\theta_{i}^{2}. \tag{44}\] In expression (43), the function \(P_{2\pm(r)}\) determines the probability of the external field-stimulated Breit-Wheeler process [1], and the function \(K_{1\mp(r^{\prime})}\) determines the probability of the external field-stimulated Compton effect [1]: \[P_{2\pm(r)}=J_{r}^{2}(\gamma_{2\pm(r)})+\eta^{2}(2u_{2\pm(r)}-1)\left[\left(\frac{r^{2}}{\gamma_{2\pm(r)}^{2}}-1\right)J_{r}^{2}+{J_{r}^{\prime}}^{2}\right], \tag{45}\] \[K_{1\mp(r^{\prime})}=-4J_{r^{\prime}}^{2}(\gamma_{1\mp(r^{\prime})})+\eta^{2}\left[2+\frac{u_{1\mp(r^{\prime})}^{2}}{1+u_{1\mp(r^{\prime})}}\right](J_{r^{\prime}-1}^{2}+J_{r^{\prime}+1}^{2}-2J_{r^{\prime}}^{2}). \tag{46}\] The arguments of the Bessel functions for the external field-stimulated Breit-Wheeler process (45) and the external field-stimulated Compton effect (46) have the following form: \[\gamma_{2\pm(r)}=2r\frac{\eta}{\sqrt{1+\eta^{2}}}\sqrt{\frac{u_{2\pm(r)}}{v_{2\pm(r)}}\left(1-\frac{u_{2\pm(r)}}{v_{2\pm(r)}}\right)}, \tag{47}\] \[\gamma_{1\mp(r^{\prime})}=2r^{\prime}\frac{\eta}{\sqrt{1+\eta^{2}}}\sqrt{\frac{u_{1\mp(r^{\prime})}}{v_{1\mp(r^{\prime})}}\left(1-\frac{u_{1\mp(r^{\prime})}}{v_{1\mp(r^{\prime})}}\right)}. \tag{48}\] Here, the relativistic-invariant parameters are equal to: \[u_{1\mp(r^{\prime})}=\frac{(k_{1}k)}{(p_{\mp}k)}\approx\frac{(\omega_{1}/\omega_{i})}{x_{\mp(r^{\prime})}},\quad v_{1\mp(r^{\prime})}=\frac{2r^{\prime}(q_{\mp}k)}{m_{*}^{2}}\approx\varepsilon_{1C(r^{\prime})}\left(\frac{x_{\mp(r^{\prime})}}{(\omega_{1}/\omega_{i})}-1\right), \tag{49}\] \[u_{2\pm(r)}=\frac{(k_{2}k)^{2}}{4(p_{\pm}k)(q_{\mp}k)}\approx\frac{(\omega_{2}/\omega_{i})}{4x_{\pm(r)}\left(1-\frac{x_{\pm(r)}}{(\omega_{2}/\omega_{i})}\right)},\quad v_{2\pm(r)}=r\frac{(k_{2}k)}{2m_{*}^{2}}\approx\varepsilon_{2BW(r)}. \tag{50}\] The elimination of the resonant singularity in expression (43) is carried out by the Breit-Wigner procedure [47, 52]: \[m_{*}\longrightarrow\mu_{*}=m_{*}-i\Gamma_{\mp(r)},\quad\Gamma_{\mp(r)}=\frac{\widetilde{q}_{\mp}^{\,0}}{2m_{*}}W_{1}, \tag{51}\] where \(W_{1}\) is the total probability (per unit of time) of the external field-stimulated Compton effect on the intermediate electron (for Channel A) or positron (for Channel B): \[W_{1}=\frac{\alpha m^{2}}{4\pi\widetilde{q}_{\mp}^{\,0}}K(\varepsilon_{1C}), \tag{52}\] \[K(\varepsilon_{1C})=\sum_{s=1}^{\infty}\int_{0}^{s\varepsilon_{1C}}\frac{du}{(1+u)^{2}}K(u,s\varepsilon_{1C}). 
\tag{53}\] Here, \(\alpha\) is the fine-structure constant, and the function \(K(u,s\varepsilon_{1C})\) is determined by the expression: \[K(u,s\varepsilon_{1C})=-4J_{s}^{2}(\gamma_{1(s)})+\eta^{2}\left[2+\frac{u^{2}}{1+u}\right](J_{s-1}^{2}+J_{s+1}^{2}-2J_{s}^{2}), \tag{54}\] \[\gamma_{1(s)}=2s\frac{\eta}{\sqrt{1+\eta^{2}}}\sqrt{\frac{u}{s\varepsilon_{1C}}\left(1-\frac{u}{s\varepsilon_{1C}}\right)}. \tag{55}\] Taking into account the relations (51)-(55), the resonant denominator in the cross-section (43) takes the following form: \[|\widetilde{q}_{\mp}^{2}-m_{*}^{2}|^{2}\longrightarrow m_{*}^{4}\frac{x_{\mp(r^{\prime})}^{2}}{(\omega_{1}/\omega_{i})^{2}}\left[\left(\delta_{1\mp(0)}^{2}-\delta_{1\mp}^{2}\right)^{2}+\Upsilon_{\mp(r^{\prime})}^{2}\right]. \tag{56}\] Here, the ultrarelativistic parameter \(\delta_{1\mp}^{2}\) is related to the resonance energy of the electron (for Channel A) or positron (for Channel B) by the relation (34), while the corresponding parameter \(\delta_{1\mp(0)}^{2}\) can take arbitrary values unrelated to the energy of the electron (positron). In this case, the corresponding angular width of the resonance \(\Upsilon_{\mp(r^{\prime})}\) is determined by the expression: \[\Upsilon_{\mp(r^{\prime})}=\frac{\alpha m^{2}}{4\pi m_{*}^{2}}\frac{\omega_{1}}{\omega_{i}x_{\mp(r^{\prime})}}K(\varepsilon_{1C}). \tag{57}\] Considering relation (26), we can set \(d^{3}\widetilde{p}_{\pm}\approx d^{3}p_{\pm}\) and integrate over the three-dimensional momentum of the electron (positron) as well as over the energy of the positron (electron) for Channel A (Channel B) using the delta function in expression (43). After simple calculations, we obtain the following expression for the resonant differential cross-section for Channels A and B: \[R_{2\pm(rr^{\prime})}=\frac{d\sigma_{rr^{\prime}}}{d\delta_{2\pm}^{2}}=8\pi r_{e}^{2}\left(\frac{m}{\delta_{\eta i}\omega_{i}}\right)^{2}\frac{x_{\pm(r)}}{x_{\mp(r^{\prime})}^{3}}\left(\frac{m}{m_{*}}\right)^{4}\left(\frac{\omega_{1}}{\omega_{2}}\right)^{2}\frac{K_{1\mp(r^{\prime})}P_{2\pm(r)}}{\left[\left(\delta_{1\mp(0)}^{2}-\delta_{1\mp}^{2}\right)^{2}+\Upsilon_{\mp(r^{\prime})}^{2}\right]}. \tag{58}\] Here, the upper (lower) sign corresponds to Channel A (B). It should be noted that the differential cross-section (58) has a characteristic Breit-Wigner resonance structure [52]. Let us determine the maximum resonant differential cross-section when \[\left(\delta_{1\mp(0)}^{2}-\delta_{1\mp}^{2}\right)^{2}\ll\Upsilon_{\mp(r^{\prime})}^{2}. \tag{59}\] Under conditions (59), the resonant cross-section (58) takes its maximum value, which is equal to: \[R_{2\pm(rr^{\prime})}^{max}=\frac{d\sigma_{rr^{\prime}}^{max}}{d\delta_{2\pm}^{2}}=r_{e}^{2}c_{\eta i}\Psi_{\pm(rr^{\prime})}. \tag{60}\] Here, the function \(c_{\eta i}\) is determined by the initial setup parameters \[c_{\eta i}=\frac{2(4\pi)^{3}}{\alpha^{2}K^{2}(\varepsilon_{1C})}\left(\frac{m}{\delta_{\eta i}\omega_{2}}\right)^{2}\sim 10^{8}\left(\frac{m}{\delta_{\eta i}\omega_{2}}\right)^{2}, \tag{61}\] and the functions \(\Psi_{\pm(rr^{\prime})}\) determine the spectral-angular distribution of the generated electron-positron pair: \[\Psi_{\pm(rr^{\prime})}=\frac{x_{\pm(r)}}{1-x_{\pm(r)}}K_{1\mp(r^{\prime})}P_{2\pm(r)}. \tag{62}\] It is important to emphasize that the magnitude of the maximum resonant differential cross-section significantly depends on the value of the function \(c_{\eta i}\) (61). Let us require that the function \(c_{\eta i}>1\). 
Then, from relation (61), we obtain a condition on the initial ultrarelativistic parameter \(\delta_{\eta i}^{2}\) (44): \[\delta_{\eta i}^{2}<\left(10^{4}\frac{m}{\omega_{2}}\right)^{2}. \tag{63}\] It should be noted that the corresponding Breit-Wheeler differential cross-section without an external field in this kinematics (24) has the following order of magnitude [35]: \[\frac{d\sigma_{BW}}{d\delta_{2\pm}^{2}}\sim r_{e}^{2}\left(\frac{m}{\delta_{i}\omega_{i}}\right)^{2},\quad\delta_{i}=\frac{\sqrt{\omega_{1}\omega_{2}}\theta_{i}}{m}. \tag{64}\] From relations (60)-(62) and (64), it can be seen that the maximum resonant cross-section significantly exceeds the corresponding Breit-Wheeler cross-section without an external field. Figure 5 shows the dependencies of the maximum resonance differential cross-section (60) on the positron outgoing angle (for Channel A) or electron outgoing angle (for Channel B) for various frequencies and intensities, as well as the numbers of absorbed and emitted photons at the first and second vertices (see Fig. 2). The study focused on the regions of optical and X-ray frequencies of the external strong electromagnetic wave at different sufficiently high energies of the initial gamma quanta. It is important to note that the energy of the second high-energy gamma quantum for each frequency and intensity of the wave was chosen according to condition (33), in order for the external field-stimulated Breit-Wheeler process to occur with the highest probability, and the energy of the first gamma quantum was chosen to be much lower than the energy of the second gamma quantum (37). In this case, with increasing frequency of the external field, the characteristic energy of the Breit-Wheeler process decreased (see relation (31)). Therefore, the energies of the initial gamma quanta were chosen to be lower for the X-ray frequency range than for the optical frequency range. As a result, the function \(c_{\eta i}\) (61) increased, leading to an increase in the maximum resonance cross-section. This case is shown in Figures 5a) to 5c). However, if the energy of the initial gamma quanta remains constant and the intensity of the external field increases, then the maximum resonance cross-section decreases (see Figures 5c) and 5c')). Table 1 displays the values of positron (for Channel A) and electron (for Channel B) energies, as well as the corresponding maximum values of the resonance differential cross-section according to their spectral-angular distribution (see Figures 5a) to 5c')) for different frequencies and intensities of the wave, as well as different energies of the initial gamma quanta. Figure 5: The dependence of the maximum resonance differential cross-section (60) (in units of \(r_{e}^{2}\)) on the positron outgoing angle (Channel A) or electron outgoing angle (Channel B) for various frequencies and intensities, as well as the numbers of absorbed (\(r\)) and emitted (\(r^{\prime}\)) photons. The value of the initial ultrarelativistic parameter \(\delta_{\eta i}^{2}=10^{-4}\). The energies of the initial gamma quanta: Fig.5a) – \(\omega_{1}=10\mathrm{GeV},\omega_{2}=180\mathrm{GeV}\); Fig.5b) – \(\omega_{1}=0.5\mathrm{GeV},\omega_{2}=7\mathrm{GeV}\); Fig.5c), 5c’) – \(\omega_{1}=10\mathrm{MeV},\omega_{2}=80\mathrm{MeV}\). From Table 1, it can be observed that if the energy of one of the initial gamma quanta slightly exceeds the characteristic Breit-Wheeler energy, the production of electron-positron pairs occurs with a very large cross-section. 
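For readers who wish to reproduce curves of the type shown in Fig. 5, the following illustrative Python sketch (not the authors' code; the chosen value \(\delta_{2+}^{2}=0.02\) and the optical-case parameters are assumptions made only for the example) evaluates the probability functions (45)-(50) and the spectral-angular factor \(\Psi_{\pm(rr^{\prime})}\) of equation (62) for Channel A with \(r=r^{\prime}=1\). The peak cross-section (60) is then obtained by multiplying by \(r_{e}^{2}c_{\eta i}\), where \(c_{\eta i}\) (61) additionally requires the integral \(K(\varepsilon_{1C})\) of equations (52)-(55).

```python
# Illustrative sketch: spectral-angular factor Psi of eq. (62) from eqs. (45)-(50).
import numpy as np
from scipy.special import jv, jvp

eta = 1.0
omega1, omega2 = 10e9, 180e9          # eV, the optical case of Fig. 5a (assumed)
omega_i = omega1 + omega2
m = 0.511e6
m_star2 = (1 + eta**2) * m**2
omega_BW = m_star2 / 3.0              # eq. (31): omega = 3 eV, theta = pi
omega_C = omega_BW / 4.0              # eq. (36)
w1, w2 = omega1/omega_i, omega2/omega_i

def x_plus(delta2, r):
    """Resonant positron energy fraction, '+' branch of eq. (27), Channel A."""
    eps = r * omega2 / omega_BW
    return w2/(2*(eps + delta2)) * (eps + np.sqrt(eps*(eps - 1) - delta2))

def P_BW(r, xp):
    """Field-stimulated Breit-Wheeler factor, eqs. (45), (47), (50)."""
    u = w2 / (4*xp*(1 - xp/w2))
    v = r * omega2 / omega_BW
    g = 2*r*eta/np.sqrt(1 + eta**2) * np.sqrt(u/v*(1 - u/v))
    J, Jp = jv(r, g), jvp(r, g)
    return J**2 + eta**2*(2*u - 1)*((r**2/g**2 - 1)*J**2 + Jp**2)

def K_C(rp, xm):
    """Field-stimulated Compton factor, eqs. (46), (48), (49)."""
    u = w1 / xm
    v = rp * omega1/omega_C * (xm/w1 - 1)
    g = 2*rp*eta/np.sqrt(1 + eta**2) * np.sqrt(u/v*(1 - u/v))
    return (-4*jv(rp, g)**2
            + eta**2*(2 + u**2/(1 + u))*(jv(rp-1, g)**2 + jv(rp+1, g)**2 - 2*jv(rp, g)**2))

xp = x_plus(0.02, 1)                  # positron energy at delta_{2+}^2 = 0.02, r = 1
xm = 1 - xp                           # electron energy from conservation (38)
Psi = xp/(1 - xp) * K_C(1, xm) * P_BW(1, xp)     # eq. (62)
print(xp, Psi)
# Multiply by r_e^2 * c_eta_i of eq. (61) to obtain the peak cross section (60).
```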
For the optical frequency range, the resonant differential cross-section can exceed \(r_{e}^{2}\) by a factor of about 44, while for the X-ray frequency range it can exceed \(r_{e}^{2}\) by up to eight orders of magnitude. In this case, the positrons (electrons) are emitted in a narrow cone and with very high energy. ## Conclusion We considered the resonant Breit-Wheeler process modified by an external strong electromagnetic field for high-energy initial gamma quanta when the energy of one of them significantly exceeded the energy of the other. The following results were obtained: 1. The resonant kinematics of the process has been studied in detail. It was demonstrated that the problem involves two characteristic energies: the Breit-Wheeler energy \(\omega_{BW}\) (31) and the Compton effect energy \(\omega_{C}\) (36). These energies differ from each other by a factor of four. The ratios of the initial gamma quanta energies to these characteristic energies significantly affect the number of absorbed or emitted wave photons and, ultimately, the probability of the process. 2. The resonant energies of the positron and electron strongly depend on their outgoing angles, as well as on the characteristic quantum parameters \(\varepsilon_{2BW(r)}\) (30) and \(\varepsilon_{1C(r^{\prime})}\) (36). Furthermore, the outgoing angles of the electron and positron are interdependent (39). 3. The maximum resonant differential cross-section is achieved when the energy of one of the initial gamma quanta slightly exceeds the characteristic Breit-Wheeler energy. In this case, for the optical frequency range and \(\omega_{2}=180\)GeV, the maximum resonant cross-section is \(R^{max}_{2\pm(rr^{\prime})}=44r_{e}^{2}\), whereas for the X-ray frequency range, it is \(R^{max}_{2\pm(rr^{\prime})}\sim(10^{6}\div 10^{8})r_{e}^{2}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \((r,r^{\prime})\) & \(\delta^{2}_{2\pm max}\) & \(x_{\pm(r)}\) & \(R^{max}_{2\pm(rr^{\prime})}\) \\ \hline \(I=1.675\cdot 10^{19}\)Wcm\({}^{-2}\), & (1,1) & 0 & 0.56 & 44 \\ \cline{2-5} \(\omega=3\)eV, & (1,2) & 0 & 0.56 & 3 \\ \cline{2-5} \(\omega_{1}=10\)GeV, & (2,1) & 0.12 & 0.76 & 29 \\ \cline{2-5} \(\omega_{2}=180\)GeV & (2,2) & 0.07 & 0.78 & 10 \\ \hline \(I=1.861\cdot 10^{22}\)Wcm\({}^{-2}\), & (1,1) & 0 & 0.7 & \(2.9\cdot 10^{4}\) \\ \cline{2-5} \(\omega=100\)eV, & (1,2) & 0 & 0.7 & \(5.3\cdot 10^{3}\) \\ \cline{2-5} \(\omega_{1}=0.5\)GeV, & (2,1) & 0.06 & 0.82 & \(1.2\cdot 10^{4}\) \\ \cline{2-5} \(\omega_{2}=7\)GeV & (2,2) & 0.07 & 0.81 & \(5.2\cdot 10^{3}\) \\ \hline \(I=1.861\cdot 10^{26}\)Wcm\({}^{-2}\), & (1,1) & 0 & 0.71 & \(1.9\cdot 10^{8}\) \\ \cline{2-5} \(\omega=10\)keV, & (1,2) & 0 & 0.71 & \(6.3\cdot 10^{7}\) \\ \cline{2-5} \(\omega_{1}=10\)MeV, & (2,1) & 0.03 & 0.8 & \(1.7\cdot 10^{8}\) \\ \cline{2-5} \(\omega_{2}=80\)MeV & (2,2) & 0.08 & 0.79 & \(3.6\cdot 10^{7}\) \\ \hline \(I=1.675\cdot 10^{27}\)Wcm\({}^{-2}\), & (4,1) & 0.2 & 0.47 & \(8.3\cdot 10^{6}\) \\ \cline{2-5} \(\omega=10\)keV, & (4,2) & 0.17 & 0.5 & \(3.4\cdot 10^{6}\) \\ \cline{2-5} \(\omega_{1}=10\)MeV, & (5,1) & 0.11 & 0.64 & \(2\cdot 10^{7}\) \\ \cline{2-5} \(\omega_{2}=80\)MeV & (5,2) & 0.27 & 0.56 & \(5.5\cdot 10^{6}\) \\ \hline \end{tabular} \end{table} Table 1: The maximum values of the resonance differential cross-section. The obtained results can be used to generate ultrarelativistic positron (electron) beams with a very high probability in the external field-assisted Breit-Wheeler process. 
Additionally, these results can be employed to explain the fluxes of ultrarelativistic positrons (electrons) near neutron stars and magnetars, as well as in modeling the physical processes of laser-induced thermonuclear fusion. The research was funded by the Ministry of Science and Higher Education of the Russian Federation under the strategic academic leadership program "Priority 2030" (Agreement 075-15-2023-380 dated 20.02.2023).